Dataset columns (type, observed range):
QuestionId: int64, 74.8M to 79.8M
UserId: int64, 56 to 29.4M
QuestionTitle: string, 15 to 150 chars
QuestionBody: string, 40 to 40.3k chars
Tags: string, 8 to 101 chars
CreationDate: string date, 2022-12-10 09:42:47 to 2025-11-01 19:08:18
AnswerCount: int64, 0 to 44
UserExpertiseLevel: int64, 301 to 888k
UserDisplayName: string, 3 to 30 chars (nullable)
78,351,027
940,259
Getting ErrorCode:AuthorizationPermissionMismatch when listing blobs from Python while AZ CLI works with the same creds
<p>I've got an existing storage account / container where I can list all the blobs using AZ CLI:</p> <pre><code>[ ~ ]$ az storage blob list --account-name storageaccount20122022 --container-name test There are no credentials provided in your command and environment, we will query for account key for your storage account. It is recommended to provide --connection-string, --account-key or --sas-token in your command as credentials. You also can add `--auth-mode login` in your command to use Azure Active Directory (Azure AD) for authorization if your login account is assigned required RBAC roles. For more information about RBAC roles in storage, visit https://docs.microsoft.com/azure/storage/common/storage-auth-aad-rbac-cli. In addition, setting the corresponding environment variables can avoid inputting credentials in your command. Please use --help to get more information about environment variable usage. [ { &quot;container&quot;: &quot;test&quot;, &quot;content&quot;: &quot;&quot;, ... 
} ] </code></pre> <p>I want to do the same from Python, this is my script:</p> <pre class="lang-py prettyprint-override"><code>import sys from azure.identity import DefaultAzureCredential from azure.storage.blob import BlobServiceClient def list_blobs_using_cli_credential(account_name, container_name): credential = DefaultAzureCredential() # Also tried AzureCliCredential() blob_service_client = BlobServiceClient( account_url=f&quot;https://{account_name}.blob.core.windows.net&quot;, credential=credential ) container_client = blob_service_client.get_container_client(container_name) print(f&quot;Listing blobs in {account_name}/{container_name} ...&quot;) try: blobs = container_client.list_blobs() for blob in blobs: print(blob.name) except Exception as e: print(f&quot;Error listing blobs: {e}&quot;) if __name__ == &quot;__main__&quot;: account_name = sys.argv[1] container_name = sys.argv[2] list_blobs_using_cli_credential(account_name, container_name) </code></pre> <p>However when I run that in the same shell I get an error:</p> <pre><code>[ ~ ]$ python list-blobs.py storageaccount20122022 test Listing blobs in storageaccount20122022/test ... Error listing blobs: This request is not authorized to perform this operation using this permission. RequestId:466b2647-201e-0022-13fd-918e18000000 Time:2024-04-19T01:59:22.6413231Z ErrorCode:AuthorizationPermissionMismatch Content: &lt;?xml version=&quot;1.0&quot; encoding=&quot;utf-8&quot;?&gt;&lt;Error&gt;&lt;Code&gt;AuthorizationPermissionMismatch&lt;/Code&gt;&lt;Message&gt;This request is not authorized to perform this operation using this permission. 
RequestId:466b2647-201e-0022-13fd-918e18000000 Time:2024-04-19T01:59:22.6413231Z&lt;/Message&gt;&lt;/Error&gt; </code></pre> <p>I can list the storage accounts and the containers in each account from Python, but when it comes to dealing with the blobs (list, delete, upload) I can't do anything from Python, while the same ops work from AZ CLI.</p> <p>I tried in two completely unrelated Azure accounts, and in one of them I'm the owner, but the result is the same.</p> <p>What is AZ CLI doing, and what other permissions or roles does it automatically acquire, to do the blob ops? And how can I do the same in my Python code?</p>
<python><azure>
2024-04-19 02:10:54
1
1,420
MLu
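A likely explanation for the mismatch above, with a hedged sketch of the usual fix: the CLI warning shows az silently falling back to the storage account key, while DefaultAzureCredential authenticates with a Microsoft Entra ID token, and blob data access via tokens requires a data-plane RBAC role (Owner/Contributor are control-plane roles and do not grant blob data access). Assigning a data role to the signed-in identity is the common remedy; the subscription/resource-group values below are placeholders.

```shell
# Find the object id of the identity that "az login" is using
az ad signed-in-user show --query id -o tsv

# Grant blob data access on the storage account (scope is a placeholder)
az role assignment create \
  --assignee "<object-id>" \
  --role "Storage Blob Data Reader" \
  --scope "/subscriptions/<sub-id>/resourceGroups/<rg>/providers/Microsoft.Storage/storageAccounts/storageaccount20122022"
```

After the assignment propagates (it can take a few minutes), the Python script should work unchanged with DefaultAzureCredential or AzureCliCredential.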
78,350,937
390,388
Spacy textcat multilabel config validation error
<p>I am trying to train a spacy textcat_multilabel model. I thought I had everything set up correctly, but I continue to get a validation error.</p> <p>This is the label section of my config:</p> <pre><code>[components.textcat_multilabel] factory = &quot;textcat_multilabel&quot; scorer = {&quot;@scorers&quot;: &quot;spacy.textcat_multilabel_scorer.v2&quot;} threshold = 0.5 labels = [&quot;Operational (Frontline)&quot;, &quot;Certified/Technical&quot;, &quot;Administrative (General)&quot;, &quot;Corporate (HR/Finance/Procurement)&quot;, &quot;Digital (Applications)&quot;, &quot;Digital (ICT)&quot;, &quot;Communication and Engagement&quot;, &quot;Environmental/Scientific&quot;, &quot;Leadership/Management/Coaching/Mentoring&quot;, &quot;Policy/Legislation/Regulatory&quot;, &quot;Cultural Capability&quot;, &quot;Project Management&quot;, &quot;Workplace Health and Safety&quot;, &quot;Analytical (Data/GIS/Modelling)&quot;, &quot;Other&quot;] </code></pre> <p>This command</p> <pre><code>python -m spacy train .\config.cfg --output ..\output --paths.tain .\train.spacy --paths.dev .\dev.spacy </code></pre> <p>throws this error</p> <pre><code>=========================== Initializing pipeline =========================== ✘ Config validation error textcat_multilabel -&gt; labels extra fields not permitted {'nlp': &lt;spacy.lang.en.English object at 0x00000210C3758B10&gt;, 'name': 'textcat_multilabel', 'labels': ['Operational (Frontline)', 'Certified/Technical', 'Administrative (General)', 'Corporate (HR/Finance/Procurement)', 'Digital (Applications)', 'Digital (ICT)', 'Communication and Engagement', 'Environmental/Scientific', 'Leadership/Management/Coaching/Mentor ing', 'Policy/Legislation/Regulatory', 'Cultural Capability', 'Project Management', 'Workplace Health and Safety', 'Analytical (Data/GIS/Modelling)', 'Other'], 'model': {'@architec tures': 'spacy.TextCatBOW.v2', 'exclusive_classes': False, 'ngram_size': 1, 'no_output_layer': False, 'nO': None}, 'scorer': 
{'@scorers': 'spacy.textcat_multilabel_scorer.v2'}, 'threshold': 0.5, '@factories': 'textcat_multilabel'} </code></pre> <p>Have I got this all wrong? Is there another way to specify labels in the config?</p> <p>Thanks;</p>
<python><spacy>
2024-04-19 01:27:21
1
43,620
John
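For what it's worth, the validation error above is expected: in spaCy v3 the `textcat_multilabel` factory does not accept a `labels` argument, which is why config validation reports "extra fields not permitted". Labels are normally inferred from the training data, or supplied in the `[initialize]` block instead of on the component. A hedged sketch (label list abbreviated):

```ini
[components.textcat_multilabel]
factory = "textcat_multilabel"
scorer = {"@scorers": "spacy.textcat_multilabel_scorer.v2"}
threshold = 0.5

[initialize.components.textcat_multilabel]
labels = ["Operational (Frontline)", "Certified/Technical", "Other"]
```

Note also that the train command spells `--paths.tain` instead of `--paths.train`, which would surface as a separate error once the config validates.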
78,350,841
2,270,422
Mypy complains about poetry packages I included from the project subdirectories in src
<p>Here are the poetry packages in my pyproject.toml:</p> <pre><code>packages = [ {include = &quot;api&quot;, from = &quot;src&quot;}, {include = &quot;another_api&quot;, from = &quot;src&quot;}, {include = &quot;infra&quot;} ] </code></pre> <p>And when I import some symbol in my <code>api</code> package like <code>from api.constants import BEST_MYPY_QUESTION</code>, I get the typical error of:</p> <pre><code>src/api/main.py:1: error: Skipping analyzing &quot;api.constants&quot;: module is installed, but missing library stubs or py.typed marker [import-untyped] </code></pre> <p>Here is my code in <code>src/api/main.py</code>:</p> <pre><code>from api.constants import BEST_MYPY_QUESTION print(BEST_MYPY_QUESTION) </code></pre> <p>The usual way to solve such an error is to run mypy with the <code>--ignore-missing-imports</code> option set; however, I don't want to do that, because these imports should be type checked: I already have the source code to check against.</p>
<python><python-3.x><mypy><python-poetry>
2024-04-19 00:30:01
0
685
masec
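A hedged sketch of one common fix for the situation above: the error means mypy found `api` as an *installed* distribution without a `py.typed` marker. Either add an empty `py.typed` file inside each package (poetry packages files that live inside a listed package directory), or point mypy at the sources directly so it type-checks them instead of the installed copy:

```toml
# pyproject.toml sketch; assumes the src layout from the question
[tool.mypy]
mypy_path = "src"
explicit_package_bases = true
```

With `mypy_path = "src"`, `api.constants` resolves to `src/api/constants.py` and gets fully analyzed rather than skipped.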
78,350,697
13,605,694
Hosting a Django application on azure app service
<p>After following the official <a href="https://learn.microsoft.com/en-us/azure/app-service/tutorial-python-postgresql-app" rel="nofollow noreferrer">host python with postgres tutorial</a>, and making modifications in my gh actions file because my django app isn't present in the root of the repo, I get a 404 error when trying to access it.</p> <p>Here is my settings.py:</p> <pre class="lang-py prettyprint-override"><code>&quot;&quot;&quot; Django settings for server project. Generated by 'django-admin startproject' using Django 5.0.3. For more information on this file, see https://docs.djangoproject.com/en/5.0/topics/settings/ For the full list of settings and their values, see https://docs.djangoproject.com/en/5.0/ref/settings/ &quot;&quot;&quot; from decouple import config from os import environ from google.oauth2 import service_account from pathlib import Path from django.conf import global_settings # Build paths inside the project like this: BASE_DIR / 'subdir'. BASE_DIR = Path(__file__).resolve().parent.parent # Quick-start development settings - unsuitable for production # See https://docs.djangoproject.com/en/5.0/howto/deployment/checklist/ # SECURITY WARNING: keep the secret key used in production secret! SECRET_KEY = config( &quot;SECRET_KEY&quot;, default=&quot;django-insecure-g8khpcexyyb0q@p40^d5#r_j#ezf%(-90r-y^2@x1)2$wpch9+&quot;, ) # SECURITY WARNING: don't run with debug turned on in production!
DEBUG = config(&quot;DEBUG&quot;, default=False, cast=bool) ALLOWED_HOSTS = ( [environ[&quot;WEBSITE_HOSTNAME&quot;]] if &quot;WEBSITE_HOSTNAME&quot; in environ else config( &quot;ALLOWED_HOSTS&quot;, cast=lambda v: [s.strip() for s in v.split(&quot;,&quot;)], default=[] ) ) CORS_ALLOWED_ORIGINS = config( &quot;CORS_ALLOWED_ORIGINS&quot;, cast=lambda v: [s.strip() for s in v.split(&quot;,&quot;)], default=&quot;http://localhost:5173,http://127.0.0.1:5173&quot;, ) # Application definition INSTALLED_APPS = [ &quot;django.contrib.admin&quot;, &quot;django.contrib.auth&quot;, &quot;django.contrib.contenttypes&quot;, &quot;django.contrib.sessions&quot;, &quot;django.contrib.messages&quot;, &quot;django.contrib.staticfiles&quot;, &quot;rest_framework&quot;, &quot;rest_framework.authtoken&quot;, &quot;djoser&quot;, &quot;corsheaders&quot;, &quot;drf_spectacular&quot;, &quot;accounts.apps.AccountsConfig&quot;, &quot;videos.apps.VideosConfig&quot;, ] MIDDLEWARE = [ &quot;django.middleware.security.SecurityMiddleware&quot;, &quot;whitenoise.middleware.WhiteNoiseMiddleware&quot;, &quot;django.contrib.sessions.middleware.SessionMiddleware&quot;, &quot;corsheaders.middleware.CorsMiddleware&quot;, &quot;django.middleware.common.CommonMiddleware&quot;, &quot;django.middleware.csrf.CsrfViewMiddleware&quot;, &quot;django.contrib.auth.middleware.AuthenticationMiddleware&quot;, &quot;django.contrib.messages.middleware.MessageMiddleware&quot;, &quot;django.middleware.clickjacking.XFrameOptionsMiddleware&quot;, ] ROOT_URLCONF = &quot;server.urls&quot; TEMPLATES = [ { &quot;BACKEND&quot;: &quot;django.template.backends.django.DjangoTemplates&quot;, &quot;DIRS&quot;: [], &quot;APP_DIRS&quot;: True, &quot;OPTIONS&quot;: { &quot;context_processors&quot;: [ &quot;django.template.context_processors.debug&quot;, &quot;django.template.context_processors.request&quot;, &quot;django.contrib.auth.context_processors.auth&quot;, &quot;django.contrib.messages.context_processors.messages&quot;, ], }, 
}, ] WSGI_APPLICATION = &quot;server.wsgi.application&quot; # Database # https://docs.djangoproject.com/en/5.0/ref/settings/#databases DATABASES = { &quot;default&quot;: { &quot;ENGINE&quot;: config(&quot;DB_ENGINE&quot;, default=&quot;django.db.backends.sqlite3&quot;), &quot;NAME&quot;: config(&quot;DB_NAME&quot;, default=BASE_DIR / &quot;db.sqlite3&quot;), &quot;USER&quot;: config(&quot;DB_USER&quot;, default=&quot;&quot;), &quot;PASSWORD&quot;: config(&quot;DB_PASSWORD&quot;, default=&quot;&quot;), &quot;HOST&quot;: config(&quot;DB_HOST&quot;, default=&quot;&quot;), &quot;PORT&quot;: config(&quot;DB_PORT&quot;, default=&quot;&quot;), } } # Password validation # https://docs.djangoproject.com/en/5.0/ref/settings/#auth-password-validators AUTH_PASSWORD_VALIDATORS = [ { &quot;NAME&quot;: &quot;django.contrib.auth.password_validation.UserAttributeSimilarityValidator&quot;, }, { &quot;NAME&quot;: &quot;django.contrib.auth.password_validation.MinimumLengthValidator&quot;, }, { &quot;NAME&quot;: &quot;django.contrib.auth.password_validation.CommonPasswordValidator&quot;, }, { &quot;NAME&quot;: &quot;django.contrib.auth.password_validation.NumericPasswordValidator&quot;, }, ] REST_FRAMEWORK = { &quot;DEFAULT_AUTHENTICATION_CLASSES&quot;: [ &quot;rest_framework.authentication.TokenAuthentication&quot;, &quot;rest_framework.authentication.SessionAuthentication&quot;, &quot;rest_framework.authentication.BasicAuthentication&quot;, # simple cmd line tools can access the API ], &quot;DEFAULT_SCHEMA_CLASS&quot;: &quot;drf_spectacular.openapi.AutoSchema&quot;, &quot;DEFAULT_PAGINATION_CLASS&quot;: &quot;rest_framework.pagination.LimitOffsetPagination&quot;, &quot;PAGE_SIZE&quot;: 100, } SPECTACULAR_SETTINGS = { &quot;TITLE&quot;: &quot;Video API&quot;, &quot;DESCRIPTION&quot;: &quot;Amalitech Video API&quot;, &quot;VERSION&quot;: &quot;1.0.0&quot;, &quot;SERVE_INCLUDE_SCHEMA&quot;: False, # OTHER SETTINGS } DJOSER = {&quot;PASSWORD_RESET_CONFIRM_URL&quot;: 
&quot;password-reset-confirm/{uid}/{token}&quot;} EMAIL_HOST = config(&quot;EMAIL_HOST&quot;, default=&quot;localhost&quot;) EMAIL_HOST_PASSWORD = config(&quot;EMAIL_HOST_PASSWORD&quot;, default=&quot;&quot;) EMAIL_HOST_USER = config(&quot;EMAIL_HOST_USER&quot;, default=&quot;&quot;) EMAIL_PORT = config(&quot;EMAIL_PORT&quot;, default=25, cast=int) EMAIL_USE_TLS = config(&quot;EMAIL_USE_TLS&quot;, default=False, cast=bool) # Internationalization # https://docs.djangoproject.com/en/5.0/topics/i18n/ LANGUAGE_CODE = &quot;en-us&quot; TIME_ZONE = &quot;UTC&quot; USE_I18N = True USE_TZ = True # Static files (CSS, JavaScript, Images) # https://docs.djangoproject.com/en/5.0/howto/static-files/ STATIC_URL = &quot;static/&quot; STATIC_ROOT = BASE_DIR / &quot;static&quot; SESSION_ENGINE = &quot;django.contrib.sessions.backends.cache&quot; STATICFILES_STORAGE = &quot;whitenoise.storage.CompressedManifestStaticFilesStorage&quot; DEFAULT_FILE_STORAGE = config( &quot;DEFAULT_FILE_STORAGE&quot;, default=global_settings.DEFAULT_FILE_STORAGE ) GS_CREDENTIALS = service_account.Credentials.from_service_account_file( BASE_DIR / &quot;serviceaccount.json&quot; ) GS_BUCKET_NAME = config(&quot;GS_BUCKET_NAME&quot;, default=&quot;&quot;) GS_DEFAULT_ACL = config(&quot;GS_DEFAULT_ACL&quot;, default=&quot;&quot;) # Default primary key field type # https://docs.djangoproject.com/en/5.0/ref/settings/#default-auto-field DEFAULT_AUTO_FIELD = &quot;django.db.models.BigAutoField&quot; </code></pre> <p>and the gh action used to deploy to app service</p> <pre class="lang-yaml prettyprint-override"><code># Docs for the Azure Web Apps Deploy action: https://github.com/Azure/webapps-deploy # More GitHub Actions for Azure: https://github.com/Azure/actions # More info on Python, GitHub Actions, and Azure App Service: https://aka.ms/python-webapps-actions name: Build and deploy Python app to Azure Web App - plvids on: push: branches: - main workflow_dispatch: jobs: build: runs-on: ubuntu-latest defaults: 
run: working-directory: ./apps/server environment: name: 'Production' steps: - uses: actions/checkout@v4 - name: Set up Python version uses: actions/setup-python@v1 with: python-version: '3.11' - name: Create settings.ini file run: | echo [settings] &gt;&gt; settings.ini echo &quot;ALLOWED_HOSTS = ${{ secrets.ALLOWED_HOSTS }}&quot; &gt;&gt; settings.ini echo &quot;SECRET_KEY = ${{ secrets.SECRET_KEY }}&quot; &gt;&gt; settings.ini echo &quot;DB_NAME = ${{ secrets.DBNAME }}&quot; &gt;&gt; settings.ini echo &quot;DB_USER = ${{ secrets.DBUSER }}&quot; &gt;&gt; settings.ini echo &quot;DB_PASSWORD = ${{ secrets.DBPASSWORD }}&quot; &gt;&gt; settings.ini echo &quot;DB_ENGINE = ${{ secrets.DB_ENGINE }}&quot; &gt;&gt; settings.ini echo &quot;DB_HOST = ${{ secrets.DBHOST }}&quot; &gt;&gt; settings.ini echo &quot;DB_PORT = ${{ secrets.DBPORT }}&quot; &gt;&gt; settings.ini echo &quot;DEFAULT_FILE_STORAGE = ${{ secrets.DEFAULT_FILE_STORAGE }}&quot; &gt;&gt; settings.ini echo &quot;EMAIL_HOST = ${{ secrets.EMAIL_HOST }}&quot; &gt;&gt; settings.ini echo &quot;EMAIL_PORT = ${{ secrets.EMAIL_PORT }}&quot; &gt;&gt; settings.ini echo &quot;EMAIL_HOST_USER = ${{ secrets.EMAIL_HOST_USER }}&quot; &gt;&gt; settings.ini echo &quot;EMAIL_HOST_PASSWORD = ${{ secrets.EMAIL_HOST_PASSWORD }}&quot; &gt;&gt; settings.ini echo &quot;GS_BUCKET_NAME = ${{ secrets.EMAIL_HOST_PASSWORD }}&quot; &gt;&gt; settings.ini echo &quot;GS_DEFAULT_ACL = ${{ secrets.GS_DEFAULT_ACL }}&quot; &gt;&gt; settings.ini echo '${{secrets.GS_SERVICE_ACCOUNT}}' &gt;&gt; serviceaccount.json shell: bash - name: Create and start virtual environment run: | python -m venv venv source venv/bin/activate - name: Install dependencies run: pip install -r requirements.txt # Optional: Add step to run tests here (PyTest, Django test suites, etc.) 
- name: Zip artifact for deployment run: zip release.zip ./* -r - name: Upload artifact for deployment jobs uses: actions/upload-artifact@v3 with: name: python-app path: | ./apps/server/release.zip !venv/ deploy: runs-on: ubuntu-latest needs: build environment: name: 'Production' url: ${{ steps.deploy-to-webapp.outputs.webapp-url }} permissions: id-token: write #This is required for requesting the JWT steps: - name: Download artifact from build job uses: actions/download-artifact@v3 with: name: python-app - name: Unzip artifact for deployment run: unzip release.zip - name: Login to Azure uses: azure/login@v1 with: client-id: ${{ secrets.*** }} tenant-id: ${{ secrets.*** }} subscription-id: ${{ secrets.*** }} - name: 'Deploy to Azure Web App' uses: azure/webapps-deploy@v2 id: deploy-to-webapp with: app-name: '***' slot-name: 'Production' </code></pre> <p>the logs in app service don't show anything</p> <p>build logs show successful building without any errors</p> <p>This is a <a href="https://github.com/ayitinya/video-platform" rel="nofollow noreferrer">link to the repository</a></p>
<python><django><azure><continuous-integration><azure-web-app-service>
2024-04-18 23:15:04
1
392
ayitinya
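One hedged thing to check for the 404 above: when the Django project is not at the repo root, App Service's default startup detection cannot find the WSGI module, so the site falls through to a default page or 404. Setting an explicit startup command that changes into the subdirectory is a common fix; the resource group below is a placeholder, while the app name and the apps/server path are taken from the workflow in the question.

```shell
az webapp config set \
  --resource-group "<resource-group>" \
  --name plvids \
  --startup-file "gunicorn --chdir apps/server --bind=0.0.0.0 server.wsgi"
```

The same string can be set in the portal under Configuration, General settings, Startup Command.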
78,350,601
16,406
How to avoid "ImportError: attempted relative import with no known parent package" error
<p>Background: I have a bunch of small-to-medium python programs that I'm trying to simplify by factoring out common code into a module that all the programs import.</p> <p>The problem I run into is that when I put the common code into <code>common.py</code> in the same directory as all the programs, and do <code>from . import common</code>, I get the error</p> <blockquote> <p>ImportError: attempted relative import with no known parent package</p> </blockquote> <p>Which doesn't make a lot of sense. From my (limited) understanding of python, a &quot;module&quot; is just a file with a <code>.py</code> extension and a &quot;package&quot; is just a directory containing modules. All of my files are in one directory, so isn't that the &quot;package&quot;? How is it even possible to have &quot;no known parent package&quot; -- by definition all the files are in a directory and that is a package. Various places talk about an <code>__init__.py</code> file, so I've tried adding that to the directory, but that makes no difference.</p> <p>So I guess my question is: what exactly is a python package, and when exactly is a directory NOT a package? What does it mean to not have a parent package, and how does that occur? I've looked at various introductory python documents like <a href="https://www.udacity.com/blog/2021/01/what-is-a-python-package.html" rel="nofollow noreferrer">What is a python package</a> as well as python documentation like <a href="https://docs.python.org/3/tutorial/modules.html#packages" rel="nofollow noreferrer">Python packages</a> (which mentions the <code>__init__.py</code> file), but I can't seem to find anything that really explains what is actually going on under the hood, or how to understand what this error means and how to fix it.</p>
<python><python-3.x>
2024-04-18 22:41:03
1
127,309
Chris Dodd
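To make the failure above concrete, here is a small self-contained sketch: the very same file fails with the relative-import error when executed directly (a script runs as `__main__` with no package context), and succeeds when executed with `-m` as a module of its package. The `mytools`/`common` names are made up for the demo.

```python
# Why "from . import common" fails for a directly-run script: the file runs
# as the __main__ module with no parent package, so the relative import has
# nothing to resolve against. Run via "-m", the package context exists.
import subprocess
import sys
import tempfile
from pathlib import Path

with tempfile.TemporaryDirectory() as tmp:
    pkg = Path(tmp) / "mytools"
    pkg.mkdir()
    (pkg / "__init__.py").write_text("")
    (pkg / "common.py").write_text("GREETING = 'hello'\n")
    (pkg / "prog.py").write_text("from . import common\nprint(common.GREETING)\n")

    # Run as a plain script: the relative import raises ImportError
    direct = subprocess.run(
        [sys.executable, str(pkg / "prog.py")], capture_output=True, text=True
    )
    # Run as a module of the package: the relative import resolves
    as_module = subprocess.run(
        [sys.executable, "-m", "mytools.prog"],
        capture_output=True, text=True, cwd=tmp,
    )

print("direct run ok?", direct.returncode == 0)
print("module run ok?", as_module.returncode == 0)
```

So the directory only acts as a package when Python imports it as one; the same files behave differently depending on how the entry point is launched, and adding `__init__.py` alone does not change how `python prog.py` runs.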
78,350,510
7,031,021
Running Multiline py files in Azure Machine Learning Studio Notebooks
<p>I'd like to know the best practice for running multiline shell commands in an ML notebook.</p> <p>Here is some pseudocode and how I would run it in a notebook cell.</p> <pre><code>%%bash conda activate myenv &amp;&amp; torchrun --standalone --nnodes=1 --nproc-per-node=$NUM_TRAINERS YOUR_TRAINING_SCRIPT.py (--arg1 ... train script args... </code></pre> <p>I am activating the conda env because I noticed that the magic bash function alone doesn't use the correct environment.</p> <p>Would it be better to use the command function from azure.ai.ml?</p> <p>I find this way somewhat error prone, even with activating conda and using the magic function.</p> <p>Are there better alternatives?</p>
<python><azure-machine-learning-service>
2024-04-18 22:08:27
1
510
RSale
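As an alternative to `%%bash` for the case above, running the multiline script through `subprocess` from a regular code cell makes the shell and its failure handling explicit. A minimal sketch, assuming `bash` is available; the conda-activation and torchrun lines one would prepend in an Azure ML notebook are hypothetical and shown only as a comment.

```python
import subprocess

# One multiline script as a single string; "set -e" aborts on first failure.
script = """
set -e
echo preparing
echo launching training
"""
# In an Azure ML notebook you might prepend lines such as
# "source activate myenv" before the torchrun invocation
# (environment name is hypothetical).
result = subprocess.run(["bash", "-c", script], capture_output=True, text=True)
print(result.stdout)
```

For anything long-running, submitting a proper job via `command(...)` from azure.ai.ml is generally more robust than keeping the training loop alive inside the notebook kernel.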
78,350,475
1,729,649
What is the equivalent of the HTTP method LIST in the Ansible `uri` module?
<p>I am trying to find the Python equivalent for the following playbook code, which uses the HTTP method <code>LIST</code>.</p> <pre class="lang-yaml prettyprint-override"><code>- name: List all folders ansible.builtin.uri: url: https://{{ My_vault_url }} method: LIST return_content: true body_format: json register: list_url </code></pre> <p>If I try something like <code>requests.list</code>, I get an error that <code>list</code> is not a request method.</p> <p>If it is a custom method, how can I use it through Python?</p>
<python><ansible><python-requests><ansible-2.x>
2024-04-18 21:55:18
1
570
Sukh
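On the question above: `requests` does not hard-code the verb set. The convenience attributes (`requests.get`, `requests.post`, ...) only cover the common verbs, but `requests.request()` and `requests.Request()` both accept an arbitrary method string, so a custom `LIST` verb can be sent as-is. A sketch; the Vault-style URL is a hypothetical placeholder.

```python
import requests

# Build (without sending) a request with the non-standard LIST verb to show
# that requests passes the method string through unchanged.
prepared = requests.Request(
    "LIST", "https://vault.example.com/v1/secret/metadata"
).prepare()
print(prepared.method)
```

To actually perform the call, `resp = requests.request("LIST", url, headers=...)` sends it in one line, mirroring what the Ansible `uri` task does.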
78,350,341
1,509,695
How to make argparse not mention -h and --help when started with either of them?
<p>When running with <code>--help</code>, the help output includes the description of the <code>--help</code> argument itself. How can that line be avoided in the output of <code>--help</code>?</p> <p>I could not get <a href="https://stackoverflow.com/a/73380185/1509695">this answer</a> to work, as the following code demonstrates when run using Python 3.10:</p> <pre class="lang-py prettyprint-override"><code>import argparse if __name__ == '__main__': parser = argparse.ArgumentParser( add_help=False, description=f'does foo') parser.add_argument('--bar', type=str, required=True, help='the bar value') args = parser.parse_args() </code></pre> <p>The result of running the above script file with <code>--help</code> is an error message, not the help message sans the <code>--help</code> option in it:</p> <pre class="lang-none prettyprint-override"><code>usage: scratch_54.py --bar BAR scratch_54.py: error: the following arguments are required: --bar </code></pre> <p>I would like to avoid the help option self-referencing itself in its own output, while having the rest of the help message behave as usual.</p>
<python><argparse>
2024-04-18 21:11:19
1
13,863
matanox
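A hedged sketch for the question above: `add_help=False` alone removes the `--help` action entirely, which is why the script errored instead of printing help. Re-adding the action manually with `help=argparse.SUPPRESS` keeps `--help` working while hiding it from both the usage line and the options listing.

```python
import argparse

# add_help=False drops the automatic -h/--help; re-adding it with
# help=argparse.SUPPRESS keeps the behavior but hides the self-reference.
parser = argparse.ArgumentParser(prog="scratch.py", add_help=False,
                                 description="does foo")
parser.add_argument("-h", "--help", action="help", help=argparse.SUPPRESS)
parser.add_argument("--bar", type=str, required=True, help="the bar value")

help_text = parser.format_help()
print(help_text)  # no mention of -h or --help
args = parser.parse_args(["--bar", "baz"])
print(args.bar)
```

Running the script with `--help` still prints this same help text and exits, exactly as the stock action does.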
78,350,242
3,507,584
Uninstall last pip installed packages
<p>I have just installed a package in my virtual environment which I shouldn't have installed. It also installed many dependency packages.</p> <p>Is there a way to roll back and uninstall the package and its dependencies just installed?</p> <p>Something like &quot;uninstall packages installed in the last 1 hour&quot; or a similar functionality.</p>
<python><python-3.x><pip>
2024-04-18 20:46:36
2
3,689
User981636
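pip keeps no transaction log, so there is no built-in "undo last install"; the workable pattern is to snapshot the environment so a later install can be diffed and rolled back. A sketch of the snapshot approach; the commented-out install is the hypothetical regretted step.

```python
# Diff two "pip freeze" snapshots to find exactly what an install added.
import subprocess
import sys

def frozen() -> set:
    out = subprocess.run(
        [sys.executable, "-m", "pip", "freeze"],
        capture_output=True, text=True, check=True,
    )
    return set(out.stdout.splitlines())

before = frozen()
# ... "pip install something-regrettable" would happen here ...
after = frozen()
added = sorted(after - before)
print(added)  # "name==version" entries, suitable for "pip uninstall -y"
```

Without a prior snapshot, the next best option is `pip show <package>` on the unwanted package, whose `Requires:` field lists the dependencies to consider uninstalling (checking first that nothing else needs them).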
78,350,224
610,569
Avoiding repetitive checking of function output before storing into a dictionary
<p>I have a repeating code block that iterates through inputs, calls a function that returns a list, and populates a dictionary if the function returned a non-empty list, e.g.</p> <pre><code>def some_func(i): &quot;&quot;&quot; This function returns a filled list if a condition is met, otherwise an empty one&quot;&quot;&quot; return ['abc', 'def'] if i % 2 == 0 else [] i_inputs = [4, 2, 3, 6, 3, 8, 2] y1 = {} for i in i_inputs: _x = some_func(i) if _x: y1[i] = _x </code></pre> <p>The main problem is I have a lot of <code>y</code>s and <code>i_inputs</code> in my code, e.g.</p> <pre><code>i_inputs = [4, 2, 3, 6, 3, 8, 2] y1 = {} for i in i_inputs: _x = some_func(i) if _x: y1[i] = _x i_inputs2 = [8, 2, 4, 8, 9, 1] y2 = {} for i in i_inputs2: _x = some_funcs2(i) # Sometimes another function that also returns a list. if _x: y2[i] = _x i_inputs3 = [4, 8, 2, 9, 9, 1, 5] y3 = {} for i in i_inputs3: _x = some_funcs3(i) # Yet another function that also returns a list. if _x: y3[i] = _x </code></pre> <p><strong>Is there a better way than enumerating all the <code>i_input*</code> and the <code>y*</code> in a hard-coded manner?</strong></p>
<python><dry>
2024-04-18 20:40:29
1
123,325
alvas
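The repeated block above can be factored into one helper that takes the function and its inputs; the walrus operator (Python 3.8+) lets the filtered dict be built in a single comprehension. A sketch using the question's own `some_func`:

```python
def collect(func, inputs):
    """Map each input to func(input), keeping only non-empty results."""
    return {i: out for i in inputs if (out := func(i))}

def some_func(i):
    """Returns a filled list if a condition is met, otherwise an empty list."""
    return ['abc', 'def'] if i % 2 == 0 else []

y1 = collect(some_func, [4, 2, 3, 6, 3, 8, 2])
print(y1)
```

Each block then shrinks to one call (`y2 = collect(some_funcs2, i_inputs2)`, and so on), and holding the `(func, inputs)` pairs in a list removes the hard-coded enumeration entirely.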
78,350,183
6,451,746
GStreamer 1.24 Python bindings are blacklisted
<p>I am trying to install Python bindings for GStreamer, but the library is blacklisted. My Dockerfile is below. Everything installs without issue, but the <code>libgstpython.so</code> library is blacklisted. I have tried different Python versions, specifying the Python path, and random keyboard bashing without success.</p> <pre><code>FROM ubuntu:24.04 # Install Python ARG PYTHON_VERSION=3.9.18 WORKDIR /opt RUN apt update -y &amp;&amp; apt upgrade -y &amp;&amp; \ apt install -y libbz2-dev libsqlite3-dev zlib1g-dev libffi-dev wget curl build-essential libssl-dev openssl vim &amp;&amp; \ wget https://www.python.org/ftp/python/${PYTHON_VERSION}/Python-${PYTHON_VERSION}.tgz &amp;&amp; \ tar xzvf Python-${PYTHON_VERSION}.tgz &amp;&amp; \ cd Python-${PYTHON_VERSION} &amp;&amp; \ ./configure --enable-shared &amp;&amp; \ make &amp;&amp; \ make install &amp;&amp; \ ln -s /usr/local/bin/python3 /usr/bin/python &amp;&amp; \ ln -s /usr/local/bin/pip3 /usr/bin/pip # # Install FFmpeg, GStreamer, and reqs for custom plugins RUN apt update &amp;&amp; apt upgrade -y &amp;&amp; apt install -y \ ffmpeg \ libgstreamer1.0-dev \ libgstreamer-plugins-base1.0-dev \ libgstreamer-plugins-bad1.0-dev \ libhdf5-dev \ gstreamer1.0-plugins-base \ gstreamer1.0-plugins-base-apps \ gstreamer1.0-plugins-good \ gstreamer1.0-plugins-bad \ gstreamer1.0-plugins-ugly \ gstreamer1.0-libav \ gstreamer1.0-tools \ gstreamer1.0-x \ gstreamer1.0-alsa \ gstreamer1.0-gl \ gstreamer1.0-gtk3 \ gstreamer1.0-qt5 \ gstreamer1.0-pulseaudio \ graphviz \ python3-gi \ python3-gst-1.0 \ libgirepository1.0-dev \ cmake \ python-gi-dev \ libcairo2-dev \ ninja-build \ git \ flex \ bison # # NOTE: pygobject 3.47.0 introduced a bug # # https://gitlab.freedesktop.org/gstreamer/gstreamer/-/issues/3353 WORKDIR /opt RUN pip install pycairo pygobject==3.46.0 meson pipenv RUN GSTREAMER_VERSION=$(gst-launch-1.0 --version | grep version | tr -s ' ' '\n' | tail -1) \ &amp;&amp; export GIT_SSL_NO_VERIFY=1 \ &amp;&amp; git clone 
https://gitlab.freedesktop.org/gstreamer/gstreamer.git \ &amp;&amp; cd gstreamer \ &amp;&amp; git checkout $GSTREAMER_VERSION \ &amp;&amp; cd subprojects/gst-python \ &amp;&amp; PREFIX=$(dirname $(dirname $(which python))) \ &amp;&amp; meson setup --prefix=$PREFIX builddir \ &amp;&amp; ninja -C builddir \ &amp;&amp; meson install -C builddir # Install other dependencies ENV GST_PLUGIN_PATH=/usr/lib/aarch64-linux-gnu/gstreamer-1.0 </code></pre> <p>This is the output from <code>gst-inspect-1.0</code>:</p> <pre><code># gst-inspect-1.0 -b Blacklisted files: libgstpython.so Total count: 1 blacklisted file </code></pre> <p>EDIT:</p> <p>The same thing happens when installing with <code>apt install gstreamer1.0-python3-plugin-loader</code> rather than building the library from source.</p> <p>EDIT:</p> <p>The output of <code>GST_DEBUG=4 gst-inspect-1.0 libgstpython.so</code> is in <a href="https://gist.github.com/mentoc3000/d9236268dba6c53a9ac8119d61f4204c" rel="nofollow noreferrer">this gist</a>. The full output was too long to include in the post, but the line that stood out to me is 492:</p> <pre><code>** (gst-plugin-scanner:15): CRITICAL **: 18:14:35.880: gi.repository.Gst is no dict </code></pre>
<python><gstreamer><glib>
2024-04-18 20:27:35
1
332
mentoc3000
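A hedged debugging sketch for the blacklist above: libgstpython is typically blacklisted when gst-plugin-scanner loads it against a different libpython/PyGObject than the one it was built for, and the "gi.repository.Gst is no dict" line points at the gi overrides. Clearing the cached registry verdict and confirming which python/gi pairing the scanner sees are the usual first steps (commands assume the container from the question):

```shell
# Force a re-scan: the blacklist verdict is cached in the registry
rm -rf ~/.cache/gstreamer-1.0

# Confirm the python/gi pairing the plugin will embed
python -c "import gi; print(gi.__file__)"
python -c "import gi; gi.require_version('Gst', '1.0'); from gi.repository import Gst; Gst.init(None); print(Gst.version_string())"

# Re-inspect with debug output to see the exact load failure
GST_DEBUG=4 gst-inspect-1.0 libgstpython.so
```

If the second python command fails, the mismatch is between the source-built CPython 3.9 and the system python3-gi (built for Ubuntu 24.04's Python 3.12), which would also explain why the apt-installed plugin loader is blacklisted the same way.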
78,350,133
16,845
What is the correct type annotation for "bytes or bytearray"?
<p>In Python 3.11 or newer, is there a more convenient type annotation to use than <code>bytes | bytearray</code> for a function argument that means &quot;An ordered collection of bytes&quot;? It seems wasteful to require constructing a <code>bytes</code> from a <code>bytearray</code> (or the other way around) just to satisfy the type-checker.</p> <p>Note that the function does not mutate the argument; it's simply convenient to pass <code>bytes</code> or <code>bytearray</code> instances from different call sites.</p> <p>e.g.</p> <pre><code>def serialize_to_stream(stream: MyStream, data: bytes | bytearray) -&gt; None: for byte in data: stream.accumulate(byte) </code></pre> <p>(This example is contrived, of course, but the purpose is to show that <code>data</code> is only read, never mutated).</p>
<python><python-typing>
2024-04-18 20:16:38
1
1,216
Charles Nicholson
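On the question above: Python 3.12+ has a dedicated answer, `collections.abc.Buffer` (PEP 688, also available from `typing_extensions` on 3.11), which covers `bytes`, `bytearray`, `memoryview`, and anything else exposing the buffer protocol. On 3.11 with no extra dependency, a union alias keeps signatures tidy without forcing any copies. A sketch:

```python
from typing import Union

# Alias for "read-only ordered bytes"; on 3.12+, collections.abc.Buffer
# (PEP 688) is the broader, protocol-based spelling.
BytesLike = Union[bytes, bytearray, memoryview]

def serialize_to_list(data: BytesLike) -> list:
    # Iterating any of the three yields ints, so no conversion is needed.
    return list(data)

print(serialize_to_list(b"\x01\x02"))
print(serialize_to_list(bytearray(b"\x03")))
print(serialize_to_list(memoryview(b"\x04\x05")))
```

Including `memoryview` in the alias lets callers pass zero-copy slices of large buffers as well.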
78,349,934
3,240,688
Polars - check for null in dataframe
<p>I know I can do <code>.null_count()</code> in Polars, which returns a dataframe telling me the null count for each column.</p> <pre class="lang-py prettyprint-override"><code>import polars as pl data = {&quot;foo&quot;: [1, 2, 3, None], &quot;bar&quot;: [4, None, None, 6]} df = pl.DataFrame(data) df.null_count() </code></pre> <p>would yield a dataframe</p> <pre><code>shape: (1, 2) β”Œβ”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β” β”‚ foo ┆ bar β”‚ β”‚ --- ┆ --- β”‚ β”‚ u32 ┆ u32 β”‚ β•žβ•β•β•β•β•β•ͺ═════║ β”‚ 1 ┆ 2 β”‚ β””β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”˜ </code></pre> <p>I want to know if there's any nulls, within the entire dataframe</p> <p>something like</p> <pre><code>if any(df.null_count()): print('has nulls') else: print('no nulls') </code></pre> <p>Unfortunately, that doesn't work. What is the correct code here?</p>
<python><dataframe><python-polars>
2024-04-18 19:32:29
3
1,349
user3240688
78,349,813
610,569
Generate list of list round-robin without repetition of items with itertools
<p>If the goal is to achieve <code>f&quot;{x1}-{x2}&quot;</code> pairs where <code>x1 != x2</code> from combinations, I can do:</p> <pre><code>import itertools &gt;&gt;&gt; X = ['1','2','3','4'] &gt;&gt;&gt; [f&quot;-&quot;.join(xx) for xx in itertools.combinations(X, 2)] ['1-2', '1-3', '1-4', '2-3', '2-4', '3-4'] </code></pre> <p>But what if I want to achieve the desired output in some sort of round-robin order:</p> <pre><code>[['2-1', '3-1', '4-1'], ['1-2', '3-2', '4-2'], ['1-3', '2-3', '4-3'], ['1-4', '2-4', '3-4']] </code></pre> <p>I could have done:</p> <pre><code>[[f&quot;{x1}-{x2}&quot; for x1 in X if x1 != x2] for x2 in X] </code></pre> <p><strong>But is there some round-robin-of-combinations function in itertools that returns the 2-D list shown in the desired output?</strong></p> <p>There is a pointer to a roundrobin recipe at <a href="https://docs.python.org/3/library/itertools.html" rel="nofollow noreferrer">https://docs.python.org/3/library/itertools.html</a>:</p> <pre><code> def roundrobin(*iterables): &quot;Visit input iterables in a cycle until each is exhausted.&quot; # roundrobin('ABC', 'D', 'EF') β†’ A D E B F C # Algorithm credited to George Sakkis iterators = map(iter, iterables) for num_active in range(len(iterables), 0, -1): iterators = cycle(islice(iterators, num_active)) yield from map(next, iterators) </code></pre> <p>But is there a better way to achieve the desired output? Or is the nested loop the most readable and optimal solution to avoid the diagonal?</p>
<python><python-itertools><round-robin>
2024-04-18 19:08:50
1
123,325
alvas
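itertools has no single round-robin-of-combinations function, but `permutations` plus `groupby` reproduces the desired 2-D output from the question exactly: `permutations` yields every ordered pair with distinct elements, and grouping by the second element collects each column of the round-robin. A sketch:

```python
from itertools import groupby, permutations
from operator import itemgetter

X = ['1', '2', '3', '4']

# Every ordered pair (x1, x2) with x1 != x2, grouped by x2.
pairs = sorted(permutations(X, 2), key=itemgetter(1))
result = [[f"{x1}-{x2}" for x1, x2 in grp]
          for _, grp in groupby(pairs, key=itemgetter(1))]
print(result)
```

The `sorted` call is what makes the `groupby` runs contiguous; since sorting is stable, each inner list keeps the ascending x1 order. That said, the nested comprehension from the question is arguably just as readable, so this is mostly a matter of taste.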
78,349,807
1,182,299
Split image file in lines from ALTO groundtruth coordinates for TrOCR
<p>I want to train a model for TrOCR. I have a personal transcribed groundtruth (ALTO). TrOCR only works with lines and not full pages so I need to crop the image files in lines with the coordinates from the ALTO files and match them with the transcription. My code:</p> <pre><code>import os import cv2 import numpy as np import xml.etree.ElementTree as ET from PIL import Image, ImageDraw from collections import Counter images=os.listdir(&quot;./img&quot;) xmls=os.listdir(&quot;./alto&quot;) def correct_xy(xy): xx=sorted(xy) if len(xy)&lt;21: l=len(xx) missing=21-l c=Counter(np.random.choice(range(l-1),missing)) for i in range(l-1): x=np.linspace(xx[i][0],xx[i+1][0],c[i]+2,dtype=int)[1:-1] y=np.linspace(xx[i][1],xx[i+1][1],c[i]+2,dtype=int)[1:-1] if len(x): xx+=list(map(list,zip(x,y))) return xx if len(xy)&gt;21: new_xy=[0]*21 new_xy[0]=xx[0] step=(len(xx)-2)/19 for i in range(1,20): new_xy[i]=xx[int(np.floor(i*step))] new_xy[-1]=xx[-1] return new_xy return xx def get_polygon(xys,boxes,i,margin=[20]*4): U,D,R,L = margin X,Y,H,W = boxes[i] curr=sorted(xys[i]) curr[0][0]-=L curr[-1][0]+=R curr=[[x[0],x[1]+D] for x in curr] if i == 0: prev=[[x[0],x[1]-H] for x in curr] else: prev=sorted(xys[i-1]) prev=sorted([[curr[0][0],prev[0][1]]]+prev+[[curr[-1][0],prev[-1][1]]]) for l in range(len(prev)): if prev[1][0]==curr[0][0]: break for r in range(len(prev)): if prev[r][0]==curr[-1][0]: break prev=prev[l:r+1] if prev[0]==prev[1]: prev=prev[1:] prev=[[x[0],x[1]+U] for x in prev] gap=min(curr)[1]-min(prev)[1]-H if gap &gt; 0: prev=[[x[0],x[1]+gap] for x in prev] return list(map(tuple,prev[::-1]+curr)) text = [] prefix='{http://www.loc.gov/standards/alto/ns-v4#}' for file_image, file_xml in zip(sorted([img[:-4] for img in images]),sorted([xm[:-4] for xm in xmls])): assert file_image == file_xml path_img=&quot;./img/&quot;+file_image+&quot;.jpg&quot; path_xml=&quot;./alto/&quot;+file_xml+&quot;.xml&quot; tree = ET.parse(path_xml) root = tree.getroot() img = 
Image.open(path_img).convert('L').point(lambda x : 255 if x &gt; 200 else 0, mode='1') xys = {} #coordinates boxes = {} to_del = [] for i, element in enumerate(root.iter(prefix+'String')): #texts txt = [path_img+&quot;_&quot;+str(i).zfill(2)+&quot;.png&quot;,element.get(&quot;CONTENT&quot;)] if txt[-1]==&quot;&quot;: to_del += [i] continue text.append(txt) for i,element in enumerate(root.iter(prefix+'TextLine')): #images boxes[i] = tuple([int(element.get(s)) for s in ['HPOS','VPOS','HEIGHT','WIDTH']]) xy = [element.get(s) for s in ['BASELINE']][0].split(' ') xys[i]=correct_xy([list(map(int,s.split(','))) for s in xy]) for i in to_del: del xys[i] del boxes[i] for i in xys.keys(): polygon = get_polygon(xys,boxes,i=i,margin=[10,20,50,50]) mask = Image.new(&quot;L&quot;, img.size, 0) background = Image.new(&quot;L&quot;, img.size, 255) draw = ImageDraw(mask) draw.polygon(polygon, fill=&quot;white&quot;, outline=None) result = Image.composite(img, background, mask) (X, Y, W, H) = cv2.boundingRect(np.array(polygon)) result = result.crop([X, Y, X + W, Y + H]) result.save('./lines/'+file_image+&quot;_&quot;+str(i).zfill(2)+'.png') </code></pre> <p>I got this error and I don't see anything wrong with my code:</p> <pre><code>Traceback (most recent call last): File &quot;/home/incognito/TrOCR-py3.10/LINES/alto2lines.py&quot;, line 95, in &lt;module&gt; xys[i]=correct_xy([list(map(int,s.split(','))) for s in xy]) File &quot;/home/incognito/TrOCR-py3.10/LINES/alto2lines.py&quot;, line 19, in correct_xy y=np.linspace(xx[i][1],xx[i+1][1],c[i]+2,dtype=int)[1:-1] IndexError: list index out of range </code></pre> <p>A sample of my ALTO xml file:</p> <pre><code>&lt;?xml version=&quot;1.0&quot; encoding=&quot;UTF-8&quot;?&gt; &lt;alto xmlns:xsi=&quot;http://www.w3.org/2001/XMLSchema-instance&quot; xmlns=&quot;http://www.loc.gov/standards/alto/ns-v4#&quot; xsi:schemaLocation=&quot;http://www.loc.gov/standards/alto/ns-v4# http://www.loc.gov/standards/alto/v4/alto-4-2.xsd&quot;&gt; 
&lt;Description&gt; &lt;MeasurementUnit&gt;pixel&lt;/MeasurementUnit&gt; &lt;sourceImageInformation&gt; &lt;fileName&gt;1.jpg&lt;/fileName&gt; &lt;/sourceImageInformation&gt; &lt;/Description&gt; &lt;Tags&gt; &lt;OtherTag ID=&quot;BT1&quot; LABEL=&quot;Title&quot; DESCRIPTION=&quot;block type Title&quot;/&gt;&lt;OtherTag ID=&quot;BT2&quot; LABEL=&quot;Main&quot; DESCRIPTION=&quot;block type Main&quot;/&gt;&lt;OtherTag ID=&quot;BT3&quot; LABEL=&quot;Commentary&quot; DESCRIPTION=&quot;block type Commentary&quot;/&gt;&lt;OtherTag ID=&quot;BT4&quot; LABEL=&quot;Illustration&quot; DESCRIPTION=&quot;block type Illustration&quot;/&gt;&lt;OtherTag ID=&quot;BT35&quot; LABEL=&quot;text&quot; DESCRIPTION=&quot;block type text&quot;/&gt; &lt;OtherTag ID=&quot;LT46&quot; LABEL=&quot;default&quot; DESCRIPTION=&quot;line type default&quot;/&gt; &lt;/Tags&gt; &lt;Layout&gt; &lt;Page WIDTH=&quot;2579&quot; HEIGHT=&quot;2837&quot; PHYSICAL_IMG_NR=&quot;0&quot; ID=&quot;eSc_dummypage_&quot;&gt; &lt;PrintSpace HPOS=&quot;0&quot; VPOS=&quot;0&quot; WIDTH=&quot;2579&quot; HEIGHT=&quot;2837&quot;&gt; &lt;TextBlock HPOS=&quot;580&quot; VPOS=&quot;334&quot; WIDTH=&quot;1940&quot; HEIGHT=&quot;2230&quot; ID=&quot;eSc_textblock_2aa65984&quot; TAGREFS=&quot;BT2&quot;&gt; &lt;Shape&gt;&lt;Polygon POINTS=&quot;2074 334 2489 353 2520 691 2495 762 2476 2532 2451 2558 2306 2564 617 2558 580 906 605 378 712 353 818 371 2074 334&quot;/&gt;&lt;/Shape&gt; &lt;TextLine ID=&quot;eSc_line_48be6c40&quot; TAGREFS=&quot;LT46&quot; BASELINE=&quot;627 397 964 397 1713 373 2473 373&quot; HPOS=&quot;625&quot; VPOS=&quot;338&quot; WIDTH=&quot;1848&quot; HEIGHT=&quot;87&quot;&gt; &lt;Shape&gt;&lt;Polygon POINTS=&quot;2470 351 2454 351 2438 349 2422 349 2407 348 2391 348 2375 346 2359 346 2344 345 2328 345 2312 343 2296 343 2281 343 2274 342 2254 345 2184 338 2183 338 2181 338 2158 349 2156 349 2139 349 1986 342 1984 342 1983 342 1959 349 1868 338 1855 340 1839 340 1823 340 1808 342 1792 342 1776 342 1775 342 1700 
346 1655 345 1650 346 1634 346 1618 346 1603 346 1587 348 1571 348 1555 348 1540 349 1524 349 1508 349 1492 351 1477 351 1461 351 1445 351 1429 353 1414 353 1398 353 1382 354 1366 354 1350 354 1335 356 1319 356 1303 356 1287 356 1272 357 1256 357 1240 357 1224 359 1209 359 1193 359 1179 359 1179 360 1161 365 966 365 956 367 941 367 925 367 909 368 893 368 878 368 862 370 846 370 830 370 815 370 799 368 788 365 646 365 641 368 625 370 627 397 627 425 734 423 736 423 780 412 846 420 848 420 1111 419 1112 417 1295 403 1377 409 1379 409 1380 409 1410 400 1741 397 1820 397 1842 406 1844 406 1845 406 1847 406 1886 406 2128 397 2148 401 2159 403 2161 403 2162 403 2167 401 2184 392 2470 395 2473 373 2471 351 2470 351&quot;/&gt;&lt;/Shape&gt; &lt;String CONTENT=&quot;אΧͺ Χ”ΧΧ¨Χ•ΧŸ ואΧͺ Χ‘Χ“Χ™Χ• ואΧͺ Χ”Χ›Χ€Χ¨Χͺ ואΧͺ Χ€Χ¨Χ›Χͺ Χ”ΧžΧ‘Χš&quot; HPOS=&quot;625&quot; VPOS=&quot;338&quot; WIDTH=&quot;1848&quot; HEIGHT=&quot;87&quot;&gt;&lt;/String&gt; &lt;/TextLine&gt; .... snip </code></pre>
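The traceback is consistent with the BASELINE format: in the sample ALTO file the attribute is a flat, space-separated list (`627 397 964 397 ...`), but the code splits each space-separated token on `','`, so every parsed "point" is a one-element list and `xx[i][1]` raises IndexError inside `correct_xy`. A hedged sketch of a parser that accepts both layouts (the helper name `parse_baseline` is mine, not from the question):

```python
def parse_baseline(baseline_attr):
    """Parse an ALTO BASELINE attribute into [x, y] point pairs.

    Handles both the "x1,y1 x2,y2" form the original code expects and the
    flat "x1 y1 x2 y2" form used in the sample ALTO file.
    """
    tokens = baseline_attr.split()
    if any("," in t for t in tokens):
        # comma-separated point tokens
        return [[int(v) for v in t.split(",")] for t in tokens]
    # flat alternating coordinates: pair up consecutive values
    values = [int(t) for t in tokens]
    return [list(p) for p in zip(values[0::2], values[1::2])]

print(parse_baseline("627 397 964 397 1713 373 2473 373"))
```

Feed the result straight into `correct_xy` in place of the comma-split comprehension. Separately, `draw = ImageDraw(mask)` later in the script likely needs to be `ImageDraw.Draw(mask)`.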
<python><numpy><ocr><polygon><alto>
2024-04-18 19:08:04
0
1,791
bsteo
78,349,569
1,489,990
Pass Multiple Inputs to Terminal Command Python
<p>I have this terminal command I need to run programmatically in Python:</p> <p><code>awssaml get-credentials --account-id **** --name **** --role **** --user-name ****</code></p> <p>It will first ask for your password, and then prompt you for a 2 factor authentication code. I have these as variables in python that I just need to pass through to the command.</p> <p>This is what I tried:</p> <pre><code> argss=[str(password_entry.get()),str(twoFactorCode_entry.get())] p=subprocess.Popen([&quot;awssaml&quot;, &quot;get-credentials&quot;, &quot;--account-id&quot;, &quot;****&quot;, &quot;--name&quot;, &quot;****&quot;, &quot;--role&quot;, &quot;****&quot;, &quot;--user-name&quot;, ID_entry.get()],stdin=subprocess.PIPE,stdout=subprocess.PIPE) time.sleep(0.1) out=p.communicate('\n'.join(map(str,argss)).encode()) </code></pre> <p>And when I run this the console prints out that the password was entered because it shows <code>password: xxxxxxxxxxxx</code>, but it then stops execution and does not show the 2 factor code being passed.</p> <p>Any ideas for where I am going wrong to get the 2 factor code also passed through? Both the password and 2 factor code are within the <code>argss</code> variable. 
<code>password_entry.get()</code> is the password and <code>twoFactorCode_entry.get()</code> is the 2 factor code.</p> <p>This is what the first prompt looks like:</p> <p><a href="https://i.sstatic.net/WGipE.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/WGipE.png" alt="enter image description here" /></a></p> <p>And I was trying this <code>child.expect('password:')</code> which gives this error:</p> <pre><code>after: &lt;class 'pexpect.exceptions.TIMEOUT'&gt; match: None match_index: None exitstatus: None flag_eof: False pid: 66502 child_fd: 10 closed: False timeout: 30 delimiter: &lt;class 'pexpect.exceptions.EOF'&gt; logfile: None logfile_read: None logfile_send: None maxread: 2000 ignorecase: False searchwindowsize: None delaybeforesend: 0.05 delayafterclose: 0.1 delayafterterminate: 0.1 searcher: sear </code></pre>
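A likely failure mode is that `awssaml` reads the password from the controlling terminal (getpass-style) rather than stdin — in that case no amount of piping reaches the second prompt and pexpect is the right tool; the TIMEOUT then suggests the pattern given to `child.expect` did not match the actual prompt text (try a looser pattern like `'assword'` and inspect `child.before`). If the tool does read stdin, sending both answers in a single `communicate` call works; a runnable sketch against a stand-in prompt script (the stand-in is mine, not awssaml):

```python
import subprocess
import sys

# A stand-in for a CLI that prompts twice on stdin (swap in the real command).
child_script = (
    "pw = input('password: ')\n"
    "code = input('2fa: ')\n"
    "print('got', pw, code)\n"
)

def run_with_two_inputs(password, two_factor_code):
    proc = subprocess.Popen(
        [sys.executable, "-c", child_script],
        stdin=subprocess.PIPE, stdout=subprocess.PIPE, text=True,
    )
    # Send both answers up front, newline-separated. communicate() also
    # closes stdin, so the child cannot block waiting for more input.
    out, _ = proc.communicate(f"{password}\n{two_factor_code}\n")
    return out

print(run_with_two_inputs("secret", "123456"))
```

If this pattern still stalls with the real tool, that is strong evidence it reads the tty directly, and pexpect (or `pexpect.spawn` + `sendline` per prompt) is required.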
<python><pexpect>
2024-04-18 18:21:35
1
10,259
ez4nick
78,349,513
5,502,917
Pytesseract doesn't recognize plate correctly
<p>I am using pytesseract to try to recognize car plates but it does not return the correct result.</p> <p>This is my code</p> <pre><code>text = pytesseract.image_to_string(cropped_License_Plate, lang='eng', config='--psm 9') </code></pre> <p>I have tried using many different psm values but the result is never correct.</p> <p>My images</p> <p><a href="https://i.sstatic.net/J1UR9.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/J1UR9.png" alt="enter image description here" /></a></p> <p>Plate QAN-5512 Pytesseract reading: DAN S512</p> <p><a href="https://i.sstatic.net/CJHUy.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/CJHUy.png" alt="enter image description here" /></a></p> <p>Plate RWC2I30 Pytesseract reading: RWC213G</p> <p><a href="https://i.sstatic.net/M5VXJ.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/M5VXJ.png" alt="enter image description here" /></a></p> <p>Plate RWC2I30 Pytesseract reading: FRWOZLSU</p> <p>Is there a way to fix it?</p>
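Besides preprocessing (tighter crop, ~3x upscale, binarisation) and constraining Tesseract (e.g. `--psm 7 -c tessedit_char_whitelist=ABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789-` for a single line of plate characters), a known plate format lets you repair the classic confusions (S↔5, O↔0, I↔1, Z↔2, B↔8) by position after the fact. A sketch assuming an LLL-DDDD layout — the pattern and the mapping tables are assumptions, not from the question:

```python
# Common OCR confusions, corrected per expected character class.
TO_DIGIT = {"O": "0", "Q": "0", "D": "0", "I": "1", "Z": "2", "S": "5", "G": "6", "B": "8"}
TO_LETTER = {"0": "O", "1": "I", "2": "Z", "5": "S", "6": "G", "8": "B"}

def correct_plate(raw, pattern="LLLDDDD"):
    """Force each position to letter (L) or digit (D) per the expected pattern."""
    chars = [c for c in raw.upper() if c.isalnum()]
    out = []
    for c, kind in zip(chars, pattern):
        if kind == "D" and not c.isdigit():
            out.append(TO_DIGIT.get(c, c))
        elif kind == "L" and c.isdigit():
            out.append(TO_LETTER.get(c, c))
        else:
            out.append(c)
    return "".join(out)

print(correct_plate("DAN S512"))  # DAN5512
```

This cannot recover same-class confusions such as Q read as D (both letters); those need better input quality or a plate-specific recognizer.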
<python><ocr><python-tesseract>
2024-04-18 18:08:12
1
1,731
GuiDupas
78,349,270
3,120,266
Using pandas to_numeric with errors='coerce' to force values to ints still not working
<p>Confused when I am trying to coerce dataframe to numeric. It appears to work when I look at structure but then I still get errors:</p> <p>TypeError: unsupported operand type(s) for +: 'int' and 'str'</p> <p>Code:</p> <pre><code>df = df_leads.apply(pd.to_numeric, errors='coerce') code here df.info() </code></pre> <p>Returns: <code>Columns: 133 entries, org_size_1_99 to engagement_Type_webpage visits dtypes: float64(107), int64(26) memory usage: 3.1 MB</code></p> <p>next line of code:</p> <pre><code>sum(df['target']).astype(int) </code></pre> <p>returns: <code>TypeError: unsupported operand type(s) for +: 'int' and 'str'</code></p>
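A likely explanation: the coerced result was assigned to `df`, while the failing `sum(...)` ran against a frame that still holds strings (for instance the original `df_leads`) — the `df.info()` output showing only float64/int64 columns indicates the coercion itself succeeded. Note also that the built-in `sum()` over a column containing NaN yields NaN, whereas `Series.sum()` skips NaN by default. A small sketch (the sample data is mine):

```python
import pandas as pd

df_leads = pd.DataFrame({"target": ["1", "2", "x", "4"]})  # 'x' becomes NaN on coercion

# Coerce into df; df_leads itself still holds strings, so summing
# df_leads["target"] would raise TypeError: int + str.
df = df_leads.apply(pd.to_numeric, errors="coerce")

# Use the pandas sum (skips NaN) instead of the builtin sum().
total = int(df["target"].sum())
print(total)  # 7
```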
<python><pandas><string><numeric>
2024-04-18 17:15:25
1
425
user3120266
78,349,268
14,224,948
Changing command order in Python's Typer
<p>I want Typer to display my commands in the order I have initialized them and it displays those commands in alphabetic order. I have tried different approaches including this one: <a href="https://github.com/tiangolo/typer/issues/246" rel="nofollow noreferrer">https://github.com/tiangolo/typer/issues/246</a> In this I get AssertionError. Others like subclassing some Typer and click classes does actually nothing.</p> <p>I want the commands to be in the same order as in this working piece of code:</p> <pre><code>import typer import os app = typer.Typer() @app.command() def change_value(file_name, field): print(&quot;Here I will change the&quot;, file_name, field) @app.command() def close_field(file_name, field): print(&quot;I will close field&quot;) @app.command() def add_transaction(file_name): print(&quot;I will add the transaction&quot;) if __name__ == &quot;__main__&quot;: app() </code></pre> <p>Please help :)</p>
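The fix usually reported to work (from the same Typer issue thread) is subclassing `TyperGroup` and overriding `list_commands` to return registration order instead of Click's sorted order — hedged in that Typer internals occasionally shift between versions:

```python
import typer
from typer.core import TyperGroup

class OrderedGroup(TyperGroup):
    def list_commands(self, ctx):
        # self.commands is a dict and preserves insertion (registration) order.
        return list(self.commands)

app = typer.Typer(cls=OrderedGroup)

@app.command()
def change_value(file_name: str, field: str):
    print("Here I will change the", file_name, field)

@app.command()
def close_field(file_name: str, field: str):
    print("I will close field")

@app.command()
def add_transaction(file_name: str):
    print("I will add the transaction")
```

Run it with the usual `app()` entry point; `--help` should now list change-value, close-field, add-transaction in that order.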
<python><python-3.x><python-click><typer>
2024-04-18 17:15:13
1
1,086
Swantewit
78,349,111
2,386,113
Does passing class variables stop parallelization in Numba?
<p>I have a <strong>wrapper method</strong> to call a <strong>Numba-compatible</strong> function. In the code below, the method <code>get_neighbours_wrapper()</code> is just a wrapper to call the Numba function <code>get_neighbours_Numba()</code>.</p> <p>I want to call the <code>neighbours.get_neighbours_wrapper(point)</code> on a separate thread and that's why I am expecting parallelization.</p> <p>Even though I see some performance enhancement by making the function Numba compatible, but I am not sure if it running in parallel (most likely not). I doubt that the caller function i.e. <code>get_neighbours_wrapper()</code> tries to access the member variables of the class and therefore probably stops the real parallelization (due to GIL lock?).</p> <pre><code>import numpy as np from numba import njit @njit def get_neighbours_Numba(points: np.ndarray, num_neighbors: int): for point in points: distances = np.zeros(num_neighbors) neighbours_indices_xy = np.zeros((num_neighbors, 2)) ## There is some further code, but not relevant for the question return distances class Neighbours: def __init__(self, xy_points: np.ndarray, num_neighbors: int): self.xy_points = xy_points self.num_neighbors = num_neighbors def get_neighbours_wrapper(self, point: np.ndarray): distances = get_neighbours_Numba(self.xy_points, self.num_neighbors) # QUESTIONS: Can using class variables STOP PARALLELIZATION? return distances # Example usage xy_points = np.random.rand(100, 2) num_neighbors = 6 neighbours = Neighbours(xy_points, num_neighbors) point = np.random.rand(2) distances = neighbours.get_neighbours_wrapper(point) print(distances) </code></pre> <p><strong>Question</strong>: Are passing class variables Stopping Parallelization in Numba? If yes, what could be the solution?</p>
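Passing attributes of `self` into the jitted function does not by itself block parallelism: the attribute lookups happen once, in ordinary Python, and only plain arrays cross into compiled code. What keeps threads serialized is the GIL — `@njit` code still holds it unless compiled with `nogil=True` (intra-function parallelism via `parallel=True`/`prange` is a separate route). A sketch with a hypothetical distance kernel (the kernel body is mine; it falls back to plain Python if numba is not installed):

```python
import threading

import numpy as np

try:
    from numba import njit
except ImportError:          # no-op stand-in so the sketch runs without numba
    def njit(**kwargs):
        return lambda f: f

# nogil=True lets threads run this compiled function concurrently;
# without it, compiled code keeps the GIL and threads serialize.
@njit(nogil=True)
def pairwise_dists(points, query):
    out = np.empty(len(points))
    for i in range(len(points)):
        dx = points[i, 0] - query[0]
        dy = points[i, 1] - query[1]
        out[i] = (dx * dx + dy * dy) ** 0.5
    return out

class Neighbours:
    def __init__(self, xy_points, num_neighbors):
        self.xy_points = xy_points
        self.num_neighbors = num_neighbors

    def get_neighbours_wrapper(self, point):
        # Plain attribute reads: they happen once, in the interpreter,
        # before the jitted call, and do not block parallel execution.
        return pairwise_dists(self.xy_points, point)

nb = Neighbours(np.random.rand(100, 2), 6)
threads = [
    threading.Thread(target=nb.get_neighbours_wrapper, args=(np.random.rand(2),))
    for _ in range(4)
]
for t in threads:
    t.start()
for t in threads:
    t.join()
```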
<python><numba>
2024-04-18 16:46:54
1
5,777
skm
78,349,201
1,046,013
Python script in "Task Scheduler" runs forever
<p>I can't figure out why my Python script works perfectly in the console when I execute it like this (runs for 1-2 seconds):</p> <p><a href="https://i.sstatic.net/9QQCd2LK.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/9QQCd2LK.png" alt="Execution result" /></a></p> <p>But if I run it in the task scheduler (either manually or at the scheduled time), it runs forever and eventually times out after the 2 hours time limit:</p> <p><a href="https://i.sstatic.net/HSCZ53Oy.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/HSCZ53Oy.png" alt="Task Scheduler history" /></a></p> <p>Here's the script in case it's needed, I even added <code>exit(0)</code> at the end in case it was hanging there (Downloads all zip files over FTP, renames them according to their modification date, then deletes them on the server):</p> <pre><code>import ftplib import os from pathlib import Path from datetime import datetime, timezone # FTP server details FTP_SERVER = &quot;domain.com&quot; FTP_USERNAME = &quot;username&quot; FTP_PASSWORD = &quot;password&quot; # Directory for downloaded files LOCAL_DIR = Path(r&quot;G:\Neverland Backups&quot;) # Connect to the FTP server ftp = ftplib.FTP(FTP_SERVER, FTP_USERNAME, FTP_PASSWORD) # List all files in the current directory on the server files = ftp.nlst() for file in files: # Check if the file is a .zip file if file.endswith('.zip'): # Download the file local_file = LOCAL_DIR / file with open(local_file, 'wb') as fp: ftp.retrbinary('RETR ' + file, fp.write) # Get the file modification time modification_time = ftp.sendcmd('MDTM ' + file) modification_time = datetime.strptime(modification_time[4:], &quot;%Y%m%d%H%M%S&quot;) # Rename the file renamed_filepath = LOCAL_DIR / f&quot;Neverland_{modification_time.strftime('%Y-%m-%d_%H.%M.%S')}.zip&quot; (LOCAL_DIR / file).rename(renamed_filepath) # Convert the modification time to local timezone modification_time = modification_time.replace(tzinfo=timezone.utc).astimezone(tz=None) 
# Set the file's modification time timestamp = modification_time.timestamp() os.utime(renamed_filepath, (timestamp, timestamp)) # Delete the original file on the server ftp.delete(file) # Close the connection ftp.quit() exit(0) </code></pre> <p>As a sidenote, I also tried the Windows &quot;Run&quot; command with <code>C:\Users\Administrator\AppData\Local\Programs\Python\Python312\python.exe &quot;C:\scripts\download-neverland-backups.py&quot;</code> and the script window opens, executes then closes.</p> <p>The task scheduler &quot;actions&quot; settings <a href="https://i.sstatic.net/VCRYomLt.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/VCRYomLt.png" alt="Task Scheduler &quot;actions&quot; settings" /></a></p>
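When a script finishes in seconds interactively but hangs under Task Scheduler, the usual suspects are: the task's account not having the mapped drive `G:` (mapped drives are per-logon-session — use a UNC path instead), an unset "Start in" directory, or a firewall blocking the FTP data connection for that account so `retrbinary` blocks forever. Giving every FTP socket a timeout turns the silent hang into a fast, loggable failure. A sketch of those pieces (the server and UNC names are placeholders; only the filename helper is exercised below):

```python
import ftplib
from datetime import datetime
from pathlib import Path

# Give every FTP socket a timeout so a stalled data connection raises
# instead of hanging until the scheduler's 2-hour limit kills the task.
def connect(server, user, password, timeout=60):
    ftp = ftplib.FTP(server, user, password, timeout=timeout)
    ftp.set_pasv(True)  # passive mode is usually what firewalls allow
    return ftp

# Prefer a UNC path: mapped drives like G: generally do not exist in the
# non-interactive session Task Scheduler runs under.
LOCAL_DIR = Path(r"\\backup-server\neverland")  # placeholder UNC path

def renamed_name(mdtm_reply: str) -> str:
    """Build the target filename from an FTP MDTM reply like '213 20240418160000'."""
    stamp = datetime.strptime(mdtm_reply[4:], "%Y%m%d%H%M%S")
    return f"Neverland_{stamp.strftime('%Y-%m-%d_%H.%M.%S')}.zip"

print(renamed_name("213 20240418160000"))
```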
<python><scheduled-tasks><windows-task-scheduler><python-3.x>
2024-04-18 16:17:33
0
3,866
NaturalBornCamper
78,348,882
179,014
Dynamically alter formulas in Excel templates with jinja2?
<p>I'm searching for a way to fill in pandas dataframes into a given Excel template (keeping all the formatting). I stumbled upon the following interesting blog post using jinja2 templating in Excel sheets:</p> <p><a href="https://hugoworld.wordpress.com/2019/01/21/easy-excel-reporting-with-python-and-jinja2/" rel="nofollow noreferrer">https://hugoworld.wordpress.com/2019/01/21/easy-excel-reporting-with-python-and-jinja2/</a></p> <p>I was especially intrigued about the possibility to dynamically alter formulas based on the inserted content, as explained by the author:</p> <blockquote> <p>In the case of the above the original formula in the template was sum(b3) this will be dynamically altered according to the data rendered into the template in case of the sample above the formula will become =sum(b3:b6)</p> </blockquote> <p>Unfortunately the code announced in the blog post never got published. Has anyone a suggestion on how to implement this idea from the blog in Python?</p>
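The blog's dynamic-formula idea can be reproduced without the unpublished code: after rendering N data rows starting at the template row, rewrite the template's single-cell formula into a range covering the inserted rows (openpyxl or xlsxwriter would handle the actual cell writing and keep the template's formatting). A minimal sketch of just the range arithmetic — it assumes a one-cell `=SUM(B3)`-style formula:

```python
def expand_sum_formula(formula: str, first_row: int, n_rows: int) -> str:
    """Turn '=SUM(B3)' into '=SUM(B3:B6)' once 4 data rows start at row 3."""
    # assumes a single-cell SUM like '=SUM(B3)'
    inner = formula[formula.index("(") + 1 : formula.rindex(")")]
    col = "".join(ch for ch in inner if ch.isalpha())
    last_row = first_row + n_rows - 1
    return f"=SUM({col}{first_row}:{col}{last_row})"

print(expand_sum_formula("=SUM(B3)", 3, 4))  # =SUM(B3:B6)
```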
<python><excel><excel-formula><jinja2>
2024-04-18 16:02:38
1
11,858
asmaier
78,348,878
10,053,485
Middleware inheritance in mounted FastAPI SubAPI's
<p>The project I'm working on has grown to the point one massive API doesn't suffice, so I have split it into sub applications.</p> <p>To ensure this refactor goes well, I wanted to test if and how middleware is inherited across parent/daughter applications.</p> <pre class="lang-py prettyprint-override"><code>from fastapi import FastAPI, Request app = FastAPI() subapi = FastAPI() @app.get(&quot;/app&quot;) def read_main(): return {&quot;message&quot;: &quot;Hello World from main app&quot;} @subapi.get(&quot;/sub&quot;) def read_sub(): return {&quot;message&quot;: &quot;Hello World from sub API&quot;} app.mount(&quot;/subapi&quot;, subapi) @app.middleware(&quot;http&quot;) async def say_hi(request: Request, call_next): response = await call_next(request) print('Middleware Triggered') return response </code></pre> <p>Now, after starting the app: <code>uvicorn.run(app, host=&quot;127.0.0.1&quot;, port=8080)</code> and accessing the the docs of app and the subapi, I obtain the following logs:</p> <pre class="lang-py prettyprint-override"><code>INFO: 127.0.0.1:57079 - &quot;GET /docs HTTP/1.1&quot; 200 OK Middleware Triggered INFO: 127.0.0.1:57079 - &quot;GET /openapi.json HTTP/1.1&quot; 200 OK Middleware Triggered INFO: 127.0.0.1:57079 - &quot;GET /subapi/docs HTTP/1.1&quot; 200 OK Middleware Triggered INFO: 127.0.0.1:57079 - &quot;GET /subapi/openapi.json HTTP/1.1&quot; 200 OK </code></pre> <p>Case closed, middleware is inherited from the parent app, so if I want to add middleware to all sub apps, I should simply add it once to the parent app, or so I thought.</p> <p>Think again. 
This quite explicitly contradicts <a href="https://fastapi.tiangolo.com/advanced/sub-applications/" rel="nofollow noreferrer">the independence the docs note</a>, as well as <a href="https://stackoverflow.com/a/64325016/10053485">the most notable discussion on SO</a>.</p> <p>When printing the app/subapi's middleware as follows, we confirm <code>subapi</code> does <strong>not</strong> have the middleware attached:</p> <pre class="lang-py prettyprint-override"><code>print(app.__dict__) print(subapi.__dict__) &gt;&gt;&gt; {..., 'user_middleware': [Middleware(BaseHTTPMiddleware, dispatch=&lt;function say_hi at 0x00000191337C0E00&gt;)], 'middleware_stack': None} &gt;&gt;&gt; {..., 'user_middleware': [], 'middleware_stack': None} </code></pre> <p>Which aligns with previous answers and the documentation, as expected.</p> <p><strong>Questions:</strong></p> <ol> <li>Why is the main app's middleware still triggered by requests to the SubAPI when it is not explicitly attached to <code>subapi</code>?</li> <li>What does it mean for middleware to 'not be inherited' in this context, as its triggered either way?</li> </ol> <p>If I understand the situation correctly, the answer to question 1 is fairly straight forward:</p> <p>While the middleware is not directly associated with the SubAPI, it still gets triggered due to the routing structure - as the SubAPI's route contains the main app's route. This does still leave question 2.</p>
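Both observations fit together once middleware is understood as ASGI wrapping rather than property inheritance: `@app.middleware(...)` wraps the *parent's* ASGI stack, and a mounted sub-application is just a route inside that stack, so every request to `/subapi/...` passes through the wrapper on the way in (question 1). "Not inherited" (question 2) means `subapi.user_middleware` stays empty — serve `subapi` standalone, or mount it under a different parent, and the middleware is gone. A dependency-free sketch of that wrapping, with bare ASGI callables standing in for FastAPI apps (all names here are mine):

```python
import asyncio

def make_app(name, log):
    async def app(scope, receive, send):
        log.append(f"handled by {name}")
    return app

def mount(routes):
    """A tiny outer app dispatching on path prefix, like app.mount()."""
    async def outer(scope, receive, send):
        for prefix, sub in routes:
            if scope["path"].startswith(prefix):
                await sub(scope, receive, send)
                return
    return outer

def middleware(inner, log):
    """Wraps whatever app it is attached to -- here, the *outer* app."""
    async def wrapped(scope, receive, send):
        log.append("middleware triggered")
        await inner(scope, receive, send)
    return wrapped

log = []
subapi = make_app("subapi", log)
outer = mount([("/subapi", subapi), ("", make_app("main", log))])
app = middleware(outer, log)  # like @app.middleware("http") on the parent

asyncio.run(app({"path": "/subapi/sub"}, None, None))
print(log)  # ['middleware triggered', 'handled by subapi']
```

`subapi` itself was never wrapped, yet the request reached it through the wrapper — which is exactly the observed logging behaviour.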
<python><fastapi><middleware>
2024-04-18 16:02:15
1
408
Floriancitt
78,348,739
8,021,207
Converting cyclic DiGraph to Acyclic DiGraph (DAG)
<p>How can I remove cycles from my directed graph? It's a big graph (100k+ nodes 200k+ edges) so the method needs to be efficient. I need to make the digraph acyclic in order to use functions like <a href="https://networkx.org/documentation/stable/reference/algorithms/generated/networkx.algorithms.dag.topological_generations.html" rel="nofollow noreferrer">networkx.topological_generations</a>.</p> <p>I've tried methods where I repeatedly generate cycles and remove the last edge in each cycle path but after running for 10+ hours without finishing I considered this a failed attempt.</p> <p><strong>failed attempt (never finished; inefficient)</strong></p> <pre class="lang-py prettyprint-override"><code>def remove_cycles_from_G(G: nx.DiGraph): search_for_cycles = True while search_for_cycles: for cycle_path in nx.simple_cycles(G): try: G.remove_edge(cycle_path[-1], cycle_path[0]) except nx.NetworkXError: # edge has already been disjointed by a previous edge removal. # Restart cycle generator. search_for_cycles = ( False # Temporary condition which will be reversed. ) break search_for_cycles = not (search_for_cycles) </code></pre> <p>I've also crafted a more sophisticated heuristic approach based on the demonstrations in <a href="https://github.com/zhenv5/breaking_cycles_in_noisy_hierarchies/tree/master" rel="nofollow noreferrer">this project</a> but even this method doesn't work on a digraph of this size (after an hour of running my memory was maxed out).</p> <p>I understand that identifying the fewest edges to remove in order to make the digraph acyclic is an NP-hard problem (<a href="https://en.wikipedia.org/wiki/Feedback_arc_set" rel="nofollow noreferrer">feedback arc set problem</a>) but I'm not necessarily trying to find the fewest edges to make the digraph acyclic, I just want a fast and efficient approach.</p> <h3>EDIT: reproducible input data</h3> <p>Here's an example of a networkx DiGraph with a ton of cycles. 
My situation involves even more but this demonstrates the point:</p> <pre class="lang-py prettyprint-override"><code>import networkx as nx import random def induce_cycles(g: nx.DiGraph, cycles) -&gt; None: cycles_added = 0 while cycles_added &lt; cycles: node = random.choice(list(g)) non_parent_ancestors = nx.ancestors(g, node).difference(g.predecessors(node)) if non_parent_ancestors: g.add_edge(node, random.choice(list(non_parent_ancestors))) cycles_added += 1 g = nx.balanced_tree(3, 6, create_using=nx.DiGraph()) induce_cycles(g, len(g.edges()) * 5) # Efficiently remove cycles from g... </code></pre>
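Since minimality is not required, a single depth-first search suffices: any edge pointing at a node still on the DFS stack ("gray") closes a cycle, and deleting exactly those back edges leaves a DAG in O(V+E) — seconds for 200k edges rather than hours, at the cost of possibly removing more edges than the optimum. A sketch over plain adjacency dicts (for networkx, build the dict with `{u: list(G.successors(u)) for u in G}`):

```python
def break_cycles(adj):
    """Return the set of back edges whose removal makes the digraph acyclic.

    adj: dict mapping node -> iterable of successor nodes.
    One iterative DFS, O(V + E).
    """
    WHITE, GRAY, BLACK = 0, 1, 2
    color = {u: WHITE for u in adj}
    back_edges = set()
    for root in adj:
        if color[root] != WHITE:
            continue
        color[root] = GRAY
        stack = [(root, iter(adj[root]))]
        while stack:
            u, it = stack[-1]
            advanced = False
            for v in it:
                if color.get(v, WHITE) == GRAY:
                    back_edges.add((u, v))      # edge into the active path
                elif color.get(v, WHITE) == WHITE:
                    color[v] = GRAY
                    stack.append((v, iter(adj.get(v, ()))))
                    advanced = True
                    break
            if not advanced:
                color[u] = BLACK
                stack.pop()
    return back_edges

print(break_cycles({1: [2], 2: [3], 3: [1, 4], 4: []}))  # {(3, 1)}
```

Apply it with `G.remove_edges_from(break_cycles(...))`; afterwards `nx.is_directed_acyclic_graph(G)` should hold and `topological_generations` becomes usable.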
<python><algorithm><networkx><graph-theory><directed-acyclic-graphs>
2024-04-18 15:42:47
1
492
russhoppa
78,348,714
278,521
How to change numbers received from text file to integer
<p>I used a Python script to read numbers from a *.txt file and use those numbers for later work, but when I parse a number I get the error &quot;TypeError: int() argument must be a string, a bytes-like object or a real number, not 'list'&quot;</p> <p>But when I hard-code the number it works.</p> <pre><code>with open(r'''C:\\MentorUTTFVC\ChangeSet.txt''') as f: for line in f: inner_list = [int(elt.strip()) for elt in line.split(',')] list_of_lists.append(inner_list) length = len(list_of_lists) for i in range(length): ch = int(list_of_lists[i]) </code></pre> <p>ChangeSet.txt</p> <pre><code>30000 30002 30004 30008 </code></pre>
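The TypeError is exact: `list_of_lists[i]` is an inner *list* (one per line), and `int()` cannot take a list — index into it (`inner_list[0]`) or flatten while reading. A runnable sketch with `io.StringIO` standing in for the file:

```python
import io

# Stand-in for open(r"C:\MentorUTTFVC\ChangeSet.txt")
f = io.StringIO("30000\n30002\n30004\n30008\n")

numbers = []
for line in f:
    line = line.strip()
    if not line:
        continue
    # each line holds one value; split(',') still covers comma-separated lines
    numbers.extend(int(part) for part in line.split(",") if part.strip())

for ch in numbers:   # ch is already an int; no int(...) on a list needed
    print(ch)
```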
<python><python-2.7>
2024-04-18 15:38:16
1
4,010
Sijith
78,348,637
455,796
make child window always stay above the main window, but not other applications
<p>How to make a modeless (that is, I can still interact with the main window) child always stay above the main window? <code>Qt.WindowStaysOnTopHint</code> makes the child stay above other applications, so this is not what I want. I am using Plasma 6.0 Wayland. A.I.'s said that setting the main window as the parent would make the child window stay above the main window, but that did not work. An old S.O. answer said set the <code>Qt.Tools</code> window flag, but that did not work either.</p> <p>This must be possible in Wayland Plasma 6.0, because the &quot;Configure&quot; window of Dolphin (a QT6 app) file manager works exactly the way I want.</p> <pre><code>from PySide6.QtCore import Qt from PySide6.QtWidgets import QApplication, QMainWindow, QDialog, QPushButton class ChildWindow(QDialog): def __init__(self, parent=None): super().__init__(parent) self.setWindowFlags(Qt.Tool) self.setWindowTitle(&quot;Child&quot;) self.setGeometry(0, 0, 400, 200) class MainWindow(QMainWindow): def __init__(self): super().__init__() self.setWindowTitle(&quot;Main&quot;) self.setGeometry(100, 100, 500, 300) self.but = QPushButton(&quot;Open&quot;, self) self.but.clicked.connect(self.open) self.child_window = ChildWindow(self) self.child_window.show() def open(self): self.child_window.show() if __name__ == &quot;__main__&quot;: app = QApplication() window = MainWindow() window.show() app.exec() </code></pre>
<python><pyside><pyside6>
2024-04-18 15:25:58
0
12,654
Damn Vegetables
78,348,490
23,260,297
json.decoder.JSONDecodeError error python
<p>I have an automation that runs and passes a JSON object to a Python script.</p> <p>My objective is to read the JSON and convert it to a dictionary.</p> <p>My JSON looks like this:</p> <pre class="lang-json prettyprint-override"><code>{ &quot;Items&quot;: [ { &quot;Name&quot;: &quot;baz&quot;, &quot;File&quot;: &quot;\\\\baz\\baz\\baz baz\\baz baz\\baz\\baz.xls&quot; }, { &quot;Name&quot;: &quot;bar&quot;, &quot;File&quot;: &quot;\\\\bar\\bar\\bar bar\\bar bar\\bar\\bar.csv&quot; }, { &quot;Name&quot;: &quot;foo&quot;, &quot;File&quot;: &quot;\\\\foo\\foo\\foo foo\\foo foo\\foo\\foo.csv&quot; } ] } </code></pre> <p>I need it to look like this:</p> <pre class="lang-json prettyprint-override"><code>{ &quot;foo&quot; : &quot;\\\\foo\\foo\\foo foo\\foo foo\\foo\\foo.csv&quot;, &quot;bar&quot; : &quot;\\\\bar\\bar\\bar bar\\bar bar\\bar\\bar.csv&quot;, &quot;baz&quot; : &quot;\\\\baz\\baz\\baz baz\\baz baz\\baz\\baz.xls&quot; } </code></pre> <p>I get this error with this piece of code:</p> <blockquote> <p>json.decoder.JSONDecodeError: Expecting property name enclosed in double quotes: line 2 column 4 (char 6)</p> </blockquote> <pre class="lang-py prettyprint-override"><code>if len(sys.argv) &gt; 1: d = json.loads(sys.argv[1]) print(d) </code></pre> <p>This is what I pass to PowerShell:</p> <pre><code>@&quot; { { &quot;Items&quot;: [ { &quot;Name&quot;: &quot;baz&quot;, &quot;File&quot;: &quot;\\\\baz\\baz\\baz baz\\baz baz\\baz\\baz.xls&quot; }, { &quot;Name&quot;: &quot;bar&quot;, &quot;File&quot;: &quot;\\\\bar\\bar\\bar bar\\bar bar\\bar\\bar.csv&quot; }, { &quot;Name&quot;: &quot;foo&quot;, &quot;File&quot;: &quot;\\\\foo\\foo\\foo foo\\foo foo\\foo\\foo.csv&quot; } ] } } &quot;@ &amp; $Python $Script $json </code></pre> <p>I printed sys.argv[1]:</p> <pre><code> {Items:[{Name:foo,File:\\\\foo\\foo\\foo {Items:[{Name:foo,File:\\\\foo\\foo\\foo </code></pre>
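Two separate things go wrong here: the here-string wraps the JSON in an extra pair of `{ }` (invalid JSON — exactly "Expecting property name ... line 2"), and passing `$json` as a bare argument lets PowerShell's quote-stripping mangle it, which is why the printed `sys.argv[1]` contains no quotes at all. Piping the JSON to stdin (`$json | & $Python $Script`, then `sys.stdin.read()` in Python) sidesteps quoting entirely; the reshaping itself is a one-line dict comprehension. Sketch with an inline sample (the sample paths are abbreviated):

```python
import json

# Stand-in for sys.stdin.read() after piping the (brace-fixed) JSON in.
sample = json.dumps({"Items": [
    {"Name": "baz", "File": r"\\baz\baz.xls"},
    {"Name": "bar", "File": r"\\bar\bar.csv"},
]})

def reshape(raw: str) -> dict:
    """Turn {"Items": [{"Name": ..., "File": ...}, ...]} into {Name: File}."""
    data = json.loads(raw)
    return {item["Name"]: item["File"] for item in data["Items"]}

print(reshape(sample))
```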
<python><json>
2024-04-18 15:00:34
1
2,185
iBeMeltin
78,348,470
1,141,798
XGBoost AFT survival model with external memory iterator
<p>How to make <a href="https://xgboost.readthedocs.io/en/latest/tutorials/external_memory.html" rel="nofollow noreferrer">XGBoost external memory</a> and <a href="https://xgboost.readthedocs.io/en/stable/tutorials/aft_survival_analysis.html" rel="nofollow noreferrer">XGBoost survival AFT model</a> work together?</p> <p>Background: I've written XGBoost iterator for batched training as in the linked example. Now I want to train an AFT model from the <code>xgboost</code> library. The problem is the XGB <code>DMatrix</code>, for which we need to run <code>set_float_info</code> to set survival censoring intervals. For example:</p> <pre><code>dtrain.set_float_info('label_lower_bound', y_lower_bound[train_index]) dtrain.set_float_info('label_upper_bound', y_upper_bound[train_index]) </code></pre> <p>Attached please find my redacted code (can't attach everything, but that's the problematic gist). I got the censoring time data in <code>df</code>, but I don't know how to &quot;attach&quot; it to <code>Xy_train</code>.</p> <pre><code>class BatchedParquetIterator(xgboost.DataIter): def __init__( self ): # ... super().__init__(cache_prefix=os.path.join(&quot;.&quot;, &quot;cache&quot;)) def next(self, input_data: Callable): &quot;&quot;&quot;Advance the iterator by 1 step and pass the data to XGBoost. This function is called by XGBoost during the construction of ``DMatrix`` &quot;&quot;&quot; if self._it == len(self._file_paths): return 0 # return 0 to let XGBoost know this is the end of iteration df = pd.read_parquet(self._file_paths[self._it]) X, y = self._preprocess(df) input_data(data=X, label=y) self._it += 1 return 1 # Return 1 to let XGBoost know we haven't seen all the files yet. def reset(self): &quot;&quot;&quot;Reset the iterator to its beginning&quot;&quot;&quot; self._it = 0 def _preprocess(self, df: pd.DataFrame) -&gt; Tuple[pd.DataFrame, pd.DataFrame]: # ... 
return X, y parquet_iterator_train = BatchedParquetIterator(batches) Xy_train = xgboost.DMatrix(parquet_iterator_train) </code></pre>
<python><xgboost><survival-analysis>
2024-04-18 14:58:09
1
1,302
Dominik Filipiak
78,348,446
13,294,364
Fast / Efficient method to retrieve value of specified field with BLPAPI
<p>The reason I ask this question is the manner in which Bloomberg sends its data via BLPAPI. Following on from this <a href="https://stackoverflow.com/questions/75958741/blpapi-retrieve-value-of-specific-field/75959836">post</a>, I want to establish an efficient method of obtaining the value of a specific field. Since the data can arrive as multiple messages (msg's) per session.nextEvent() and surplus data is sent (more data than requested), I was wondering whether there is a known efficient way of doing this. So far, with the techniques and methods I have used, for 60 securities and 5 subscriptions the data is never live as it lags behind, and I believe the reason is how I manage the incoming data. Below is an example subscription for one security. Given that MKTDATA_EVENT_TYPE and MKTDATA_EVENT_SUBTYPE can differ, I am struggling to find an effective way to do this.</p> <p>My aim is to avoid for loops where possible and opt for dictionaries to direct me to the value wanted.</p> <pre><code>import blpapi from bloomberg import BloombergSessionHandler # session = blpapi.Session() host='localhost' port=8194 session_options = blpapi.SessionOptions() session_options.setServerHost(host) session_options.setServerPort(port) session_options.setSlowConsumerWarningHiWaterMark(0.05) session_options.setSlowConsumerWarningLoWaterMark(0.02) session = blpapi.Session(session_options) if not session.start(): print(&quot;Failed to start Bloomberg session.&quot;) subscriptions = blpapi.SubscriptionList() fields = ['BID','ASK','TRADE','LAST_PRICE','LAST_TRADE'] subscriptions.add('GB00BLPK7110 @UKRB Corp', fields) session.subscribe(subscriptions) session.start() while(True): event = session.nextEvent() print(&quot;Event type:&quot;,event.eventType()) if event.eventType() == blpapi.Event.SUBSCRIPTION_DATA: i = 0 for msg in event: print(&quot;This is msg &quot;, i) i+=1
print(&quot;\n&quot; , &quot;msg is &quot;, msg, &quot;\n&quot;) print(&quot; Message type:&quot;,msg.messageType()) eltMsg = msg.asElement(); msgType = eltMsg.getElement('MKTDATA_EVENT_TYPE').getValueAsString(); msgSubType = eltMsg.getElement('MKTDATA_EVENT_SUBTYPE').getValueAsString(); print(&quot; &quot;,msgType,msgSubType) for fld in fields: print(&quot; Fields are :&quot;, fields) if eltMsg.hasElement(fld): print(&quot; &quot;,fld,eltMsg.getElement(fld).getValueAsFloat()) else: for msg in event: print(&quot; Message type:&quot;,msg.messageType()) </code></pre> <p>I tried obtaining the values for the specified fields I subscribed to but found that my code was too slow and as such didn't meet the requirements to display live data.</p> <pre><code> def process_subscription_data1(self, session): while True: event = session.nextEvent() print(f&quot;The event is {event}&quot;) if event.eventType() == blpapi.Event.SUBSCRIPTION_DATA: print(f&quot;The event type is: {event.eventType()}&quot;) for msg in event: print(f&quot;The msg is: {msg}&quot;) data = {'instrument': msg.correlationIds()[0].value()} print(f&quot;The data is: {data}&quot;) # Processing fields efficiently for field in self.fields: print(&quot;field is &quot;, field, &quot; &quot;, self.fields) element = msg.getElement(field) if msg.hasElement(field) else None print(&quot;element is &quot;, element) data[field] = element.getValueAsString() if element and not element.isNull() else 'N/A' print(f&quot;Emitting data for {data}&quot;) self.data_signal.emit(data) # Emit data immediately for each message </code></pre> <p>^^ code which I have tried and was far too slow (even without the print statements they are just showing how convoluted the code is)</p>
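One way to cut per-message work is to stop probing `hasElement`/`getElement` once per subscribed field and instead walk the message's elements a single time, keeping only the wanted names — and to drop the prints, which alone can dominate at 60-security/5-field volume. Sketch with a stub message class in place of blpapi (the stub is mine; real messages expose an `elements()` iterator on `msg.asElement()`):

```python
WANTED = {"BID", "ASK", "TRADE", "LAST_PRICE", "LAST_TRADE"}

def extract_fields(msg_element, wanted=WANTED):
    """One pass over the message's elements; no per-field hasElement() probing."""
    out = {}
    for elt in msg_element.elements():
        name = str(elt.name())
        if name in wanted and not elt.isNull():
            out[name] = elt.getValue()
    return out

# --- stub objects standing in for blpapi, just to exercise the pattern ---
class _StubElement:
    def __init__(self, name, value):
        self._n, self._v = name, value
    def name(self): return self._n
    def isNull(self): return self._v is None
    def getValue(self): return self._v

class _StubMessage:
    def __init__(self, pairs):
        self._elts = [_StubElement(n, v) for n, v in pairs]
    def elements(self): return iter(self._elts)

msg = _StubMessage([("MKTDATA_EVENT_TYPE", "QUOTE"), ("BID", 99.5),
                    ("ASK", 99.75), ("EXCH", "L")])
print(extract_fields(msg))  # {'BID': 99.5, 'ASK': 99.75}
```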
<python><bloomberg><blpapi>
2024-04-18 14:55:40
1
305
Harry Spratt
78,348,385
3,872,452
AsyncClient logging input/output body
<p>Can AsyncClient be extended and parametrized to log the body of input and output of every external call for reuse in multiple methods that use the same AsyncClient?</p> <pre><code>import json import logging from fastapi import FastAPI, HTTPException from httpx import AsyncClient # Set up basic configuration for logging logging.basicConfig(level=logging.DEBUG) app = FastAPI() client = AsyncClient() @app.get(&quot;/example_post&quot;) async def example_post(): url = &quot;https://jsonplaceholder.typicode.com/posts&quot; # Free fake and reliable API for testing and prototyping. payload = { &quot;title&quot;: 'fooxxx', &quot;body&quot;: 'bar', &quot;userId&quot;: 1, } # Perform the HTTP POST request; would like to log input payload and output data response response = await client.post(url, content=json.dumps(payload)) # Parse response as JSON data = response.json() # Searching for universal solution that can log request and response of every api call instead of logging manually logging.info(f&quot;Received response: {data}&quot;) return data if __name__ == &quot;__main__&quot;: import uvicorn uvicorn.run(app, host=&quot;0.0.0.0&quot;, port=8000) </code></pre>
<python><httpx>
2024-04-18 14:46:06
1
418
Levijatanu
78,348,320
769,922
Python subclasses function
<p>I have an abstract class defined (BaseClass). And then I define a subclass in a different &quot;folder&quot; or file (SubClass).</p> <p>In my &quot;main&quot; function, I try to check what are the subclasses of the base class. Ideally, I was hoping it would show me all the subclasses regardless of where they are. However, python shows me an empty list.</p> <p>If I move the subclass into the same file as the baseclass, then subclasses shows the list properly.</p> <pre class="lang-py prettyprint-override"><code># base.py import abc class BaseClass(abc.ABC): @abc.abstractmethod def hello(): ... class SubClass2(BaseClass): def hello(): print(&quot;Hello Child2&quot;) </code></pre> <pre class="lang-py prettyprint-override"><code># sub_class.py from base import BaseClass class SubClass(BaseClass): def hello(): print(&quot;Hello Child&quot;) </code></pre> <pre class="lang-py prettyprint-override"><code># main.py from base import BaseClass if __name__ == &quot;__main__&quot;: print(BaseClass.__subclasses__()) </code></pre> <p>Here is an example to demo: <a href="https://www.online-python.com/gckwbpi9aQ" rel="nofollow noreferrer">https://www.online-python.com/gckwbpi9aQ</a></p> <p>Further strangeness. If I try to import the missing child in the main file and check its ancestors; lo and behold, subclasses returns the right results</p> <pre class="lang-py prettyprint-override"><code># main.py but importing the missing subclass from base import BaseClass from sub_class import SubClass if __name__ == &quot;__main__&quot;: print(BaseClass.__subclasses__()) print(SubClass.__mro__) </code></pre> <p>I have been trying hard to find documentation on why this behaves the way it does, but I'm pretty sure I'm missing something super basic.</p> <p>Use case I'm trying to accomplish. 
Following from this solution in typer where we want to create dynamic commands: <a href="https://github.com/tiangolo/typer/issues/257" rel="nofollow noreferrer">https://github.com/tiangolo/typer/issues/257</a>, I wanted to create a base class that folks can extend. And any concrete subclass could then be added into the command structure at runtime. Maybe the incorrect assumption I had in my head was that all classes are loaded ahead of time.</p> <hr />
<python><oop>
2024-04-18 14:36:26
1
1,037
Serendipity
78,348,285
8,262,535
Pandas time series split shows gaps
<p>I am splitting a continuous time series (power consumption by the hour) into train/val/test but see unexpected gaps in the split dataframes. What might be the cause?</p> <pre><code>train_split_end = round(len(df) * (1 - val_ratio)) val_split_end = len(df) train = df.iloc[:train_split_end] val = df.iloc[train_split_end:val_split_end] </code></pre> <p>The splits themselves are contiguous:</p> <pre><code>train.index[-1] Out[26]: Timestamp('2014-07-26 23:00:00') val.index[0] Out[27]: Timestamp('2014-07-27 00:00:00') </code></pre> <p>But the plots show gaps <strong>inside</strong> each train['MW'].plot() and val['MW'].plot() set which are not present in the original df. The dataset is <a href="https://www.kaggle.com/datasets/robikscube/hourly-energy-consumption" rel="nofollow noreferrer">https://www.kaggle.com/datasets/robikscube/hourly-energy-consumption</a> - AEP_hourly.csv</p> <p>Thanks for suggestions! <a href="https://i.sstatic.net/U9GOm.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/U9GOm.png" alt="enter image description here" /></a> <a href="https://i.sstatic.net/D145k.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/D145k.png" alt="enter image description here" /></a></p>
<python><pandas><dataframe><datetime>
2024-04-18 14:31:55
2
385
illan
78,348,245
2,393,597
How to uniformly sample from space of orthonormal matrices
<p>A simple way of generating a random orthonormal matrix is to first sample a random matrix and subsequently apply the singular value decomposition</p> <pre><code>def random_orthonormal_matrix(n): random_matrix = np.random.normal(0., 1., (n, n)) u, _, _ = np.linalg.svd(random_matrix) return u </code></pre> <p>However, using this procedure seems to limit the space from which orthonormal matrices are sampled as <code>n</code> increases. In particular, it seems like matrices become less and less likely to be sampled from either end of the extremes, i.e., close to the identity or the negative of the identity matrix:</p> <pre><code>def cosine_similarity(A, B): norm_A = np.linalg.norm(A) norm_B = np.linalg.norm(B) return np.dot(A.flatten(), B.flatten()) / (norm_A * norm_B) for n in range(4, 33, 4): similarities = [] I = np.identity(n) for _ in range(100000): U = random_orthonormal_matrix(n) similarities.append(cosine_similarity(U, I)) print(np.min(similarities), np.max(similarities)) </code></pre> <p>This generates the output</p> <pre><code>-0.9512750256370359 0.9196429830937393 -0.5402804372155989 0.5641902334710601 -0.33279493103542684 0.3570616519070167 -0.26855564226727774 0.26038092858834694 -0.24803547122804348 0.21364982076125164 -0.18041307918712912 0.18015163443704285 -0.1502303153596804 0.15272379170859116 -0.13915507336472144 0.13962760476513508 </code></pre> <p>How can this procedure be modified, such that the expected distance to the identity matrix is uniform for any <code>n</code>? And further, is it possible to directly generate a random orthonormal matrix for a given distance <code>d</code>?</p>
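For reference, the textbook recipe for drawing from the Haar (uniform) measure on the orthogonal group is a QR decomposition of a Gaussian matrix with a sign correction on R's diagonal; without that correction, the raw Q from numpy's QR is not Haar-distributed. A sketch follows. Note also that a shrinking similarity range as n grows is exactly what concentration of measure predicts for Haar samples, so the observed numbers are not by themselves evidence of biased sampling.

```python
import numpy as np

def haar_orthogonal(n, rng=None):
    """Draw an orthogonal matrix from the Haar (uniform) measure on O(n)."""
    rng = np.random.default_rng() if rng is None else rng
    z = rng.standard_normal((n, n))
    q, r = np.linalg.qr(z)
    # Fix the decomposition so that R has a positive diagonal; this
    # multiplies column j of Q by sign(r_jj) and makes Q Haar-distributed.
    q *= np.sign(np.diag(r))
    return q

U = haar_orthogonal(8, np.random.default_rng(0))
```

If SciPy is available, `scipy.stats.ortho_group.rvs(dim=n)` implements the same distribution directly.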
<python><random><linear-algebra><numeric>
2024-04-18 14:24:46
1
599
Genius
78,348,182
9,274,726
Airflow - K8s - Unable to mount HostPath using KubernetesPodOperator
<p>I have a ADF's Airflow managed instance provisioned. And, I'm trying to schedule a DAG. In this DAG, I'm trying to run a shell script which is present in the HOSTPATH &quot;/opt/airflow/dags&quot; using KubernetesPodOperator. The shell script will submit some kubectl commands to a k8s cluster. However the pod is not getting started on k8s.</p> <p>dag.py:</p> <pre><code>from airflow import DAG from airflow.operators.python_operator import PythonOperator from airflow.providers.cncf.kubernetes.operators.kubernetes_pod import KubernetesPodOperator from datetime import datetime, timedelta from kubernetes.client import models as k8s # Define your default_args and DAG default_args = { 'owner': 'airflow', 'depends_on_past': False, 'start_date': datetime(1970, 1, 1), 'retries': 0, 'retry_delay': timedelta(minutes=5), } dag = DAG( 'test_operations_dag', default_args=default_args, description='DAG for test Operations', #schedule_interval=timedelta(days=1), schedule_interval=None, concurrency=80, ) # Correct instantiation using V1HostPathVolumeSource host_path_volume_source = k8s.V1HostPathVolumeSource( path=&quot;/opt/airflow/dags&quot;, type='Directory' # optional: specify the type of the hostPath ) volume = k8s.V1Volume( name=&quot;dags&quot;, host_path=host_path_volume_source ) volume_mounts = [ k8s.V1VolumeMount( mount_path=&quot;/dags&quot;, name=&quot;dags&quot; ) ] status_task = KubernetesPodOperator( task_id=f'status-task-vbuv', namespace=&quot;spark-apps&quot;, image=&quot;bitnami/kubectl:latest&quot;, cmds=[&quot;sh&quot;, &quot;-c&quot;], arguments=[ f&quot;cp /dags/sparkapplication_spark-pi-fixed2.sh /tmp/sparkapplication_spark-pi-fixed2.sh &amp;&amp; chmod +x /tmp/sparkapplication_spark-pi-fixed2.sh &amp;&amp; /tmp/sparkapplication_spark-pi-fixed2.sh&quot; ], service_account_name=&quot;airflow-sparkapp&quot;, get_logs=True, kubernetes_conn_id=&quot;k8s-airflow&quot;, dag=dag, volumes=[volume], volume_mounts=volume_mounts, ) # Set up dependencies status_task 
</code></pre> <p>Error in dag log:</p> <pre><code>[2024-04-18, 05:01:52 UTC] {pod_manager.py:313} WARNING - Pod not yet started: status-task-vbuv-8h4po563 [2024-04-18, 05:01:52 UTC] {pod.py:726} INFO - Deleting pod: status-task-vbuv-8h4po563 [2024-04-18, 05:01:52 UTC] {taskinstance.py:1824} ERROR - Task failed with exception Traceback (most recent call last): File &quot;/home/airflow/.local/lib/python3.8/site-packages/airflow/providers/cncf/kubernetes/operators/pod.py&quot;, line 551, in execute_sync self.await_pod_start(pod=self.pod) File &quot;/home/airflow/.local/lib/python3.8/site-packages/airflow/providers/cncf/kubernetes/operators/pod.py&quot;, line 513, in await_pod_start self.pod_manager.await_pod_start(pod=pod, startup_timeout=self.startup_timeout_seconds) File &quot;/home/airflow/.local/lib/python3.8/site-packages/airflow/providers/cncf/kubernetes/utils/pod_manager.py&quot;, line 320, in await_pod_start raise PodLaunchFailedException(msg) airflow.providers.cncf.kubernetes.utils.pod_manager.PodLaunchFailedE </code></pre>
<python><kubernetes><airflow><directed-acyclic-graphs><kubernetespodoperator>
2024-04-18 14:17:22
0
913
Tad
78,348,036
8,934,639
How to call AWS Bedrock asynchronously
<p>Is there a way to call the Bedrock Claude 3 model with the Python SDK asynchronously?</p> <p>More specifically, I want the results to be sent to S3.</p>
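Since boto3 is synchronous, one common sketch is to wrap the blocking Bedrock and S3 calls in `asyncio.to_thread` so many invocations can run concurrently. Everything below is an assumption-laden illustration: the model ID, bucket, and key names are placeholders, and for large offline workloads Bedrock's batch inference (model invocation jobs) can write results to S3 natively, which may fit better than hand-rolled concurrency.

```python
import asyncio
import json

def build_claude3_body(prompt, max_tokens=512):
    # Claude 3 on Bedrock uses the Anthropic "messages" request format.
    return {
        "anthropic_version": "bedrock-2023-05-31",
        "max_tokens": max_tokens,
        "messages": [
            {"role": "user", "content": [{"type": "text", "text": prompt}]}
        ],
    }

async def invoke_and_store(prompt, bucket, key, model_id):
    import boto3  # assumed installed, with Bedrock model access configured

    bedrock = boto3.client("bedrock-runtime")
    s3 = boto3.client("s3")

    def _invoke():
        resp = bedrock.invoke_model(
            modelId=model_id, body=json.dumps(build_claude3_body(prompt))
        )
        return resp["body"].read()

    # boto3 blocks, so push each call onto a worker thread; several of
    # these coroutines can then be run concurrently via asyncio.gather.
    result = await asyncio.to_thread(_invoke)
    await asyncio.to_thread(s3.put_object, Bucket=bucket, Key=key, Body=result)

async def run_batch(prompts, bucket, model_id):
    # Fan out one coroutine per prompt; each writes its own S3 object.
    await asyncio.gather(
        *(invoke_and_store(p, bucket, f"results/{i}.json", model_id)
          for i, p in enumerate(prompts))
    )
```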
<python><large-language-model><amazon-bedrock>
2024-04-18 13:59:57
3
301
Chedva
78,347,931
710,955
Python: Add a trailing slash to the URL but only if the URL doesn't end in a slash already or a file extension
<p>I want to normalize a URL in Python. My main purpose is to add a slash / at the end of the URL if it is not already present, but only if the URL doesn't end in a slash already or a file extension (so images, .php files, pages, etc. aren't affected).</p> <p>For example, if it is <code>http://www.example.com</code> then it should be converted to <code>http://www.example.com/</code>. But if it is <code>http://www.example.com/image.png</code> then it should not be affected.</p> <p>To do this, I use this regular expression <code>/([^/.]+)$</code>. <a href="https://regex101.com/r/BGYy9U/1" rel="nofollow noreferrer">Regex demo</a></p> <p>But it doesn't work in this Python code; <code>start_url</code> is not modified:</p> <pre><code>import re start_url = &quot;https://zonetuto.fr&quot; start_url = re.sub(r'/([^/.]+)$', r'/\1/', start_url) print(start_url) </code></pre>
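For what it's worth, the posted regex can never match <code>https://zonetuto.fr</code>: the only candidate for the final <code>/([^/.]+)$</code> group is the host name itself, and its dot is excluded by <code>[^/.]</code>. A sketch that sidesteps regex entirely by inspecting the parsed path (assuming "file extension" means a dot in the last path segment):

```python
import posixpath
from urllib.parse import urlparse, urlunparse

def normalize_url(url):
    parts = urlparse(url)
    # Treat a dot in the last path segment as a file extension.
    last_segment = posixpath.basename(parts.path)
    if not parts.path.endswith("/") and "." not in last_segment:
        parts = parts._replace(path=parts.path + "/")
    return urlunparse(parts)

print(normalize_url("https://zonetuto.fr"))               # https://zonetuto.fr/
print(normalize_url("http://www.example.com/image.png"))  # unchanged
```

Because `urlparse` separates the host from the path, the domain's dots no longer interfere, and query strings and fragments survive the round trip untouched.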
<python><regex><url>
2024-04-18 13:46:04
3
5,809
LeMoussel
78,347,920
8,781,465
How to integrate a glossary of abbreviations into LangChain for better SQL query generation (NL2SQL)?
<p>I am using <code>LangChain</code> to interface with an Oracle database where many column names include abbreviations. I want to provide <code>LangChain</code> with a glossary that explains these abbreviations to improve its ability to accurately select the right columns for queries. How can I incorporate a glossary into my current <code>LangChain</code> setup to give it this additional context? Note that in my corporate environment simply renaming the columns is not an option.</p> <p>This is how I currently use <code>LangChain</code> for it to answer my natural language questions in natrual language:</p> <pre class="lang-py prettyprint-override"><code>from langchain.agents import create_sql_agent from langchain.agents.agent_toolkits import SQLDatabaseToolkit toolkit = SQLDatabaseToolkit(db=db, llm=chat_client) agent_executor = create_sql_agent(llm=chat_client, toolkit=toolkit, agent_type=&quot;openai-tools&quot;, verbose=True, return_intermediate_steps=True) agent_executor.invoke(&quot;How many orders from Singapore did we have in March 2023?&quot;) </code></pre> <p>I'd like to add a glossary that includes entries like <code>{ &quot;CST_ID&quot;: &quot;Customer ID&quot;, &quot;PRDCT_NUM&quot;: &quot;Product Number&quot; }</code> to help <code>LangChain</code> understand these abbreviations better.</p> <p>What modifications are needed in the <code>LangChain</code> configuration or code to make this possible?</p>
<python><langchain><py-langchain>
2024-04-18 13:43:56
0
1,815
DataJanitor
78,347,898
11,586,490
Emojis appearing as Chinese symbols when I share to WhatsApp from Python
<p>I've built a scorecard app where users can share the results of their scores to WhatsApp. I'm trying to use the medal emojis (first place, second place and third place). It works fine when I print to console on PyCharm with my Windows laptop. However, now I've packaged my apk and deployed my app onto my android phone, when I go to share the scorecard to WhatsApp I get chinese symbols in place of my emojis.</p> <p>I've tried using the unicode for the first place medal (&quot;/U0001F947&quot;) and I've also installed the emoji library and done <code>emoji.emojize(':1st_place_medal:')</code>. I also tried the HTML entity (can't remember what it was exactly, something like &amp;#291315) but that just printed out the actual text &amp;#291315</p> <p>This also happens in Facebook Messenger so it's not a WhatsApp issue</p> <p>Here is my code:</p> <pre><code> def share_app(self): from kivy import platform emoji_test = &quot;/U0001F947&quot; if platform == 'android': from jnius import autoclass PythonActivity = autoclass('org.kivy.android.PythonActivity') Intent = autoclass('android.content.Intent') String = autoclass('java.lang.String') intent = Intent() intent.setAction(Intent.ACTION_SEND) intent.putExtra(Intent.EXTRA_TEXT, String('{}'.format(f&quot;{emoji_test}&quot;))) intent.setType('text/plain') chooser = Intent.createChooser(intent, String(&quot;&quot;)) PythonActivity.mActivity.startActivity(chooser) </code></pre>
<python><android><kivy><whatsapp>
2024-04-18 13:41:56
1
351
Callum
78,347,675
6,212,530
Typesafe abstract attributes in Python with Pylance
<p>I followed this <a href="https://stackoverflow.com/a/41897823/6212530">answer for Python 3.3</a>:</p> <pre class="lang-py prettyprint-override"><code>class Abstract(ABC): @property @abstractmethod def title(self) -&gt; str: ... class Concrete(Abstract): title = &quot;Test&quot; # pylance error </code></pre> <p>However, in <code>Concrete</code> I get a Pylance error:</p> <pre><code>Expression of type &quot;Literal['Test']&quot; cannot be assigned to declared type &quot;property&quot; &quot;Literal['Test']&quot; is incompatible with &quot;property&quot; </code></pre> <p>Is it possible to specify an abstract attribute in an abstract parent class, so that it can be overridden by a literal value in the inheriting class?</p>
<python><python-typing><pyright>
2024-04-18 13:03:18
2
1,028
Matija Sirk
78,347,580
2,583,417
Generate buttons programmatically in PyQt
<p>I need to generate buttons within a loop, and inside that loop assign a different task to every button.<br /> I know something like this was already asked, and by doing some searching I arrived at the following code:</p> <pre><code>def pressed_0(): print(0) def pressed_1(): print(1) def pressed_2(): print(2) for i in range(0,3): setattr(self,f&quot;pressed_{i}&quot;, qtw.QPushButton(f&quot;button {i}&quot;)) exec(f&quot;self.pressed_{i}.clicked.connect(lambda:pressed_{i}())&quot;) self.layout().addWidget(getattr(self, f&quot;pressed_{i}&quot;)) </code></pre> <p>What I don't like is the usage of the exec() function and setattr / getattr. What I am asking is whether there is a better way to accomplish what I need.</p>
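A common alternative is to keep the buttons in a plain list and bind each one's callback with `functools.partial`, which removes the need for exec, setattr, and getattr entirely. The sketch below uses a small stand-in class instead of a real QPushButton so the binding pattern is visible without a running Qt event loop; in PyQt the loop body would be `btn.clicked.connect(lambda checked, i=i: pressed(i))` or a `partial`, remembering that `clicked` also emits a checked flag.

```python
from functools import partial

def pressed(i):
    return f"button {i} pressed"

class FakeButton:
    """Stand-in for QPushButton: holds one callback and can 'click' it."""
    def __init__(self, label):
        self.label = label
        self._callback = None

    def connect(self, callback):
        self._callback = callback

    def click(self):
        return self._callback()

buttons = []
for i in range(3):
    btn = FakeButton(f"button {i}")
    # partial freezes the current value of i; a bare `lambda: pressed(i)`
    # would late-bind i and make every button report the final value.
    btn.connect(partial(pressed, i))
    buttons.append(btn)
```

Storing the widgets in a list (or dict) also makes them easy to iterate over later, which `setattr` with generated names does not.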
<python><pyqt><pyqt5>
2024-04-18 12:52:19
1
585
Ferex
78,347,474
8,781,465
How to integrate Oracle Column Comments into LangChain for enhanced SQL query generation (NL2SQL)?
<p>I'm working with an Oracle database that uses column comments to detail cryptic column names, crucial for data retrieval operations. These comments are visible in Oracle SQL Developer but are not included in the <code>CREATE</code> table statement:</p> <pre><code>COMMENT ON COLUMN &quot;SCHEMA&quot;.&quot;TABLE_NAME&quot;.&quot;CLMN_NM&quot; IS 'Column which contains information X. Distinct values: (&quot;A&quot;, &quot;B&quot;, &quot;C&quot;)'; </code></pre> <p>I'm using <code>LangChain</code> to interface with the database. LangChain internally requests the CREATE statement. But since the column comments are after the create statement, <code>LangChain</code> does not have access to it.</p> <p><strong>How can I modify my approach so that <code>LangChain</code> can dynamically read and utilize these Oracle column comments?</strong> The goal is for <code>LangChain</code> to programmatically use these comments to better determine which columns to select for generating accurate SQL queries in natural language processing contexts.</p> <p>Here's my current setup:</p> <pre class="lang-py prettyprint-override"><code>from cx_Oracle import makedsn from langchain.sql_database import SQLDatabase from langchain.agents import create_sql_agent from langchain.agents.agent_toolkits import SQLDatabaseToolkit dsn_tns = makedsn(host=host, port=port, service_name=service_name) connection_string = f&quot;oracle+cx_oracle://{usr}:{pwd}@{dsn_tns}&quot; db = SQLDatabase.from_uri(connection_string) toolkit = SQLDatabaseToolkit(db=db,llm=chat_client) agent_executor = create_sql_agent(llm=chat_client, toolkit=toolkit, agent_type=&quot;openai-tools&quot;) agent_executor.invoke(&quot;How many orders from Singapore did we have in March 2023?&quot;) </code></pre> <p>I'm looking for guidance on modifying my code so that LangChain gets access to the column comments.</p>
<python><oracle-database><langchain><py-langchain>
2024-04-18 12:34:02
1
1,815
DataJanitor
78,347,470
10,770,967
Reading UTC timestamp in python pandas and converting it to European dates
<p>I have an issue with a timestamp column and hope you can provide some support. I checked a few already-posted questions here, but somehow I couldn't find the right approach in them.</p> <p>I have a pandas DataFrame with multiple columns, among others a timestamp. This timestamp column contains UTC dates. My goal is to extract only the dates and write them in the European format dd.mm.yyyy, because I need to save these dates into Excel to work with them there.</p> <pre><code>import pandas as pd Timestamp=[ &quot;06.02.2024 00:43:31 UTC&quot;, &quot;06.02.2024 01:34:35 UTC&quot;, &quot;06.02.2024 02:21:41 UTC&quot;, &quot;06.02.2024 02:26:41 UTC&quot;, &quot;06.02.2024 03:19:52 UTC&quot;, &quot;06.02.2024 07:15:48 UTC&quot;, &quot;06.02.2024 08:22:46 UTC&quot;, &quot;06.02.2024 09:56:12 UTC&quot;, &quot;06.02.2024 12:00:43 UTC&quot;, &quot;06.02.2024 12:22:14 UTC&quot;, &quot;06.02.2024 12:23:21 UTC&quot;] df=pd.DataFrame(Timestamp) df=df.rename(columns={0:&quot;Timestamp&quot;}) df[&quot;Timestamp&quot;]=pd.to_datetime(df[&quot;Timestamp&quot;],utc=True).dt.date df[&quot;Timestamp&quot;]=pd.to_datetime(df[&quot;Timestamp&quot;]) print(df) Timestamp 0 2024-06-02 1 2024-06-02 2 2024-06-02 3 2024-06-02 4 2024-06-02 5 2024-06-02 6 2024-06-02 7 2024-06-02 8 2024-06-02 9 2024-06-02 10 2024-06-02 </code></pre> <p>I need to have it in dd.mm.yyyy format. As said, I tried multiple ways but nothing really worked, and I am sure this is not that complicated and I am just blind. Can you help me?</p>
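The swap to 2024-06-02 happens because `pd.to_datetime` guesses month-first for dotted dates. One possible sketch, parsing day-first explicitly and then formatting the dates as dd.mm.yyyy strings for Excel:

```python
import pandas as pd

ts = ["06.02.2024 00:43:31 UTC", "27.02.2024 12:23:21 UTC"]
df = pd.DataFrame({"Timestamp": ts})

# Drop the literal " UTC" suffix and parse with an explicit day-first
# format, so 06.02.2024 is read as 6 February rather than June 2.
parsed = pd.to_datetime(
    df["Timestamp"].str.replace(" UTC", "", regex=False),
    format="%d.%m.%Y %H:%M:%S",
).dt.tz_localize("UTC")

# strftime yields plain strings, which keep this exact format when
# exported to Excel (a datetime column would be re-formatted by Excel).
df["Date"] = parsed.dt.strftime("%d.%m.%Y")
print(df["Date"].tolist())  # ['06.02.2024', '27.02.2024']
```

Passing `dayfirst=True` to `pd.to_datetime` achieves the same parse without an explicit format string, at the cost of slower inference.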
<python><pandas><datetime><utc>
2024-04-18 12:33:44
2
402
SMS
78,347,434
8,588,743
Problem setting up Llama-2 in Google Colab - Cell-run fails when loading checkpoint shards
<p>I'm trying to use <a href="https://huggingface.co/meta-llama/Llama-2-7b-chat-hf" rel="nofollow noreferrer">Llama 2 chat</a> (via hugging face) with 7B parameters in Google Colab (Python 3.10.12). I've already obtain my access token via Meta. I simply use the code in hugging face on how to implement the model along with my access token. Here is my code:</p> <pre><code>!pip install transformers from transformers import AutoModelForCausalLM, AutoTokenizer import torch token = &quot;---Token copied from Hugging Face and pasted here---&quot; tokenizer = AutoTokenizer.from_pretrained(&quot;meta-llama/Llama-2-7b-chat-hf&quot;, token=token) model = AutoModelForCausalLM.from_pretrained(&quot;meta-llama/Llama-2-7b-chat-hf&quot;, token=token) </code></pre> <p>It starts downloading the model but when it reaches Loading checkpoint shards: it just stops running and there is no error:</p> <p><a href="https://i.sstatic.net/6wcxG.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/6wcxG.png" alt="enter image description here" /></a></p>
<python><huggingface-transformers><large-language-model><llama>
2024-04-18 12:28:02
1
903
Parseval
78,347,363
10,012,856
Upload a photo returns 'Item type not valid.'
<p>I'm trying to upload a photo using the code below based on ArcGIS Python API:</p> <pre><code>def handle_attachments_api( portal_domain: str, portal_username: str, portal_password: str, data_type: str = None, data_url: str = None, filename: str = None, type_keywords: str = None, description: str = None, title: str = None, url: str = None, text: str = None, tags: str = None, snippet: str = None, extent: str = None, spatial_reference: str = None, access_information: str = None, license_info: str = None, culture: str = None, comments_enabled: bool = True, access: str = None, overwrite: bool = False, data: str = None, thumbnail: str = None, metadata: str = None, owner: str = None, folder: str = None, item_id: guid = None, ): gis = GIS( url=f&quot;https://{portal_domain}/portal&quot;, username=portal_username, password=portal_password ) item_properties = { &quot;type&quot;: data_type, &quot;dataUrl&quot;: data_url, &quot;filename&quot;: filename, &quot;typeKeywords&quot;: type_keywords, &quot;description&quot;: description, &quot;title&quot;: title, &quot;url&quot;: url, &quot;text&quot;: text, &quot;tags&quot;: tags, &quot;snippet&quot;: snippet, &quot;extent&quot;: extent, &quot;spatialReference&quot;: spatial_reference, &quot;accessInformation&quot;: access_information, &quot;licenseInfo&quot;: license_info, &quot;culture&quot;: culture, &quot;commentsEnabled&quot;: comments_enabled, &quot;access&quot;: access, &quot;overwrite&quot;: overwrite, } gis.content.add( item_properties=item_properties, data=data, thumbnail=thumbnail, metadata=metadata, owner=owner, folder=folder, item_id=item_id ) </code></pre> <p>But I see the error below when I use 'jpg' as item type:</p> <pre class="lang-none prettyprint-override"><code>&gt;..\..\..\..\..\AppData\Local\ESRI\conda\envs\arcgispro-py3-ps\lib\site-&gt;packages\arcgis\gis\__init__.py:6837: in add &gt; itemid = self._portal.add_item( 
&gt;..\..\..\..\..\AppData\Local\ESRI\conda\envs\arcgispro-py3-ps\lib\site-&gt;packages\arcgis\gis\_impl\_portalpy.py:438: in add_item &gt; resp = self.con.post(path, postdata, files) &gt;..\..\..\..\..\AppData\Local\ESRI\conda\envs\arcgispro-py3-ps\lib\site-&gt;packages\arcgis\gis\_impl\_con\_connection.py:1524: in post &gt; return self._handle_response( &gt;..\..\..\..\..\AppData\Local\ESRI\conda\envs\arcgispro-py3-ps\lib\site-&gt;packages\arcgis\gis\_impl\_con\_connection.py:1000: in _handle_response &gt; self._handle_json_error(data[&quot;error&quot;], errorcode) &gt;_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ &gt; &gt;self = &lt;arcgis.gis._impl._con._connection.Connection object at 0x0000026A223A0B20&gt; &gt;error = {'code': 400, 'details': [], 'message': 'Item type not valid.', 'messageCode': &gt;'CONT_0113'} &gt;errorcode = 400 &gt; &gt; def _handle_json_error(self, error, errorcode): &gt; errormessage = error.get(&quot;message&quot;) &gt; # handles case where message exists in the dictionary but is None &gt; if errormessage is None: &gt; errormessage = &quot;Unknown Error&quot; &gt; # _log.error(errormessage) &gt; if &quot;details&quot; in error and error[&quot;details&quot;] is not None: &gt; if isinstance(error[&quot;details&quot;], str): &gt; errormessage = f&quot;{errormessage} \n {error['details']}&quot; &gt; # _log.error(error['details']) &gt; else: &gt; for errordetail in error[&quot;details&quot;]: &gt; if isinstance(errordetail, str): &gt; errormessage = errormessage + &quot;\n&quot; + errordetail &gt; # _log.error(errordetail) &gt; &gt; errormessage = errormessage + &quot;\n(Error Code: &quot; + str(errorcode) + &quot;)&quot; &gt; raise Exception(errormessage) &gt;E Exception: Item type not valid. 
&gt;E (Error Code: 400) &gt; &gt;..\..\..\..\..\AppData\Local\ESRI\conda\envs\arcgispro-py3-ps\lib\site-&gt;packages\arcgis\gis\_impl\_con\_connection.py:1023: Exception </code></pre> <p>Where can I see a list of all allowed item types?</p>
<python><arcgis>
2024-04-18 12:15:45
1
1,310
MaxDragonheart
78,347,354
893,254
Python find index of element in list based on evaluation of a function
<p>I am trying to find the index of an item in a Python list based on the evaluation value of a lambda function (or other callable).</p> <p>This would be similar to a combination of an <code>.index()</code> operation with a <code>find_if</code> operation.</p> <p>Here is an example:</p> <pre><code># self contains `self.list` def find_index_where(self, id: int) -&gt; int: # callable expression which tests a sub-member of some object `input` def lambda_callable(input, id): return input.id == id # extra work being done: # first find an `item` then find an `index` corresponding to `item` matched_item = next(item for item in self.list if lambda_callable(item, id)) index = self.list.index(matched_item) return index </code></pre> <p>Is there a way to roll the &quot;item finding&quot; operation together with the &quot;index finding&quot; operation?</p>
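For reference, `enumerate` pairs each element with its index, so the predicate test and the index lookup collapse into a single scan. A sketch mirroring `list.index()`'s ValueError behaviour:

```python
def find_index_where(items, predicate):
    # Single pass: the index arrives together with the element, so no
    # second .index() scan (which would also misfire on duplicates,
    # since .index() always returns the first equal element).
    for index, item in enumerate(items):
        if predicate(item):
            return index
    raise ValueError("no element satisfies the predicate")

# One-liner equivalent returning a default instead of raising:
# next((i for i, x in enumerate(items) if predicate(x)), -1)
```

Applied to the method in the question, this becomes `return find_index_where(self.list, lambda item: item.id == id)`.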
<python>
2024-04-18 12:12:57
2
18,579
user2138149
78,347,277
3,906,713
How to modularize dependent unit tests in Python
<p>Here is a hypothetical composite unit test. It calls two algorithms, checks some properties of the results independently, and then compares them to each other.</p> <pre><code>class MyTestCase(unittest.TestCase): def test_composite(self): # Test 1. Testing first algorithm result1 = algorithm1() assert len(result1) == 5, &quot;Algorithm 1 failed&quot; # Test 2. Testing second algorithm result2 = algorithm2() assert len(result2) == 5, &quot;Algorithm 2 failed&quot; # Test 3. Testing that algorithms are consistent assert result1 == result2, &quot;Algorithm results are inconsistent&quot; </code></pre> <p>It would be convenient if the first and the second test could be run independently of each other, and the last test, which depends on both of them, would only run if the other two have passed. If only one of the algorithms failed, it would be convenient to know that the second one passed. It would also be convenient if I only had to compute the results of each algorithm once.</p> <p>Is there an intended Pythonic way to do such refactoring, and, if yes, what are my options?</p> <p><strong>NOTE</strong>: One solution I am aware of would be to calculate both results in the <code>__init__()</code>, and then only test them in the test functions. However, since the algorithms can crash, it makes sense for the algorithm computations to be done in their own tests, so it is clear which one crashed.</p>
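One stdlib-only sketch: run each algorithm exactly once in `setUpClass`, record crashes instead of letting them abort the class, and have the consistency test skip when a prerequisite crashed. (True "run only if those tests passed" ordering needs a third-party plugin such as pytest-dependency; the two dummy algorithms here merely stand in for the real ones.)

```python
import unittest

def algorithm1():  # stand-ins for the real algorithms
    return [1, 2, 3, 4, 5]

def algorithm2():
    return [1, 2, 3, 4, 5]

class MyTestCase(unittest.TestCase):
    @classmethod
    def setUpClass(cls):
        # Each algorithm runs exactly once; a crash is recorded rather
        # than aborting, so the other algorithm's test still reports.
        cls.results, cls.errors = {}, {}
        for name, algo in [("algo1", algorithm1), ("algo2", algorithm2)]:
            try:
                cls.results[name] = algo()
            except Exception as exc:
                cls.errors[name] = exc

    def _result(self, name):
        if name in self.errors:
            self.fail(f"{name} crashed: {self.errors[name]!r}")
        return self.results[name]

    def test_algorithm1(self):
        self.assertEqual(len(self._result("algo1")), 5)

    def test_algorithm2(self):
        self.assertEqual(len(self._result("algo2")), 5)

    def test_consistency(self):
        if self.errors:
            self.skipTest("a prerequisite algorithm crashed")
        self.assertEqual(self.results["algo1"], self.results["algo2"])

suite = unittest.defaultTestLoader.loadTestsFromTestCase(MyTestCase)
outcome = unittest.TextTestRunner(verbosity=0).run(suite)
```

This separates which algorithm crashed (its own test fails) from the consistency check, and each result is computed only once per test class.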
<python><unit-testing>
2024-04-18 12:00:47
0
908
Aleksejs Fomins
78,347,183
14,923,149
How to visualize hierarchical data with nested pie charts in Python?
<p>I have hierarchical data that I would like to visualize using nested pie charts in Python. The data consists of Phylum, Genus, and Species levels, and I want to create a nested pie chart where each level represents a ring in the chart.</p> <p>I have already attempted to implement this using Matplotlib, but I'm facing challenges in filtering and displaying only specific portions of the nested pie charts based on the abundance of certain categories. Specifically, I want to:</p> <p>Display all Phylum initially. Filter and display only the Genera related to a specific Phylum (e.g., Firmicutes). Filter and display only the Species related to a specific Genus (e.g., Bacillus). I've tried to modify the code based on suggestions I found online, but I'm not getting the desired output.</p> <p>Could someone please provide guidance or a code example on how to achieve this visualization using Python and Matplotlib?</p> <p>Any help would be greatly appreciated. Thank you!</p> <pre><code>import pandas as pd import matplotlib.pyplot as plt from matplotlib.patches import Patch # Read the Excel file TissueS35_Analysis_Report = pd.read_excel(&quot;TissueS35_Analysis_Report.xlsx&quot;, sheet_name=&quot;Species&quot;) # Select only the 'Phylum', 'Genus', and 'Species' columns selected_columns = TissueS35_Analysis_Report[['Phylum', 'Genus', 'Species', 'Absolute Count']] # Group by Phylum, Genus, and Species and sum the counts grouped_data = selected_columns.groupby(['Phylum', 'Genus', 'Species']).sum().reset_index() # Function to generate nested pie chart data def nested_pie(df): outd = {} for level in range(3): if level == 0: gb = df.groupby('Phylum', sort=False).sum() elif level == 1: gb = df.groupby(['Phylum', 'Genus'], sort=False).sum() else: gb = df.groupby(['Phylum', 'Genus', 'Species'], sort=False).sum() outd[level] = {'names': gb.index.get_level_values(level).tolist(), 'values': gb['Absolute Count'].values} return outd # Generate nested pie chart data outd = 
nested_pie(grouped_data) # Plot nested donut pie chart fig, ax = plt.subplots() # Plot Species level (Outermost ring) sizes = outd[2]['values'] species_colors = plt.cm.tab20c.colors species_labels = outd[2]['names'] ax.pie(sizes, radius=1, colors=species_colors, labels=species_labels, wedgeprops=dict(width=0.3, edgecolor='w')) # Plot Genus level (Middle ring) sizes = outd[1]['values'] genus_colors = plt.cm.tab20b.colors genus_labels = outd[1]['names'] ax.pie(sizes, radius=0.7, colors=genus_colors, wedgeprops=dict(width=0.3, edgecolor='w')) # Plot Phylum level (Innermost ring) sizes = outd[0]['values'] phylum_colors = plt.cm.tab20.colors phylum_labels = outd[0]['names'] ax.pie(sizes, radius=0.4, colors=phylum_colors, wedgeprops=dict(width=0.3, edgecolor='w')) # Create legend for Phylum level legend_handles = [Patch(color=color, label=label) for color, label in zip(phylum_colors, phylum_labels)] ax.legend(handles=legend_handles, loc='center left', bbox_to_anchor=(1, 0.5), title='Phylum') ax.set(aspect=&quot;equal&quot;) plt.show() </code></pre> <p><a href="https://i.sstatic.net/DeWrA.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/DeWrA.png" alt="enter image description here" /></a></p> <pre><code>small data refernce is as follow Phylum Genus Species Absolute Count 168 Proteobacteria Pseudomonas Unclassified 73745 152 Proteobacteria Klebsiella Unclassified 10777 190 Proteobacteria Unclassified Unclassified 4932 132 Proteobacteria Chromobacterium Unclassified 1840 84 Firmicutes Lysinibacillus boronitolerans 1780 104 Firmicutes Weissella ghanensis 1101 10 Actinobacteria Corynebacterium Unclassified 703 138 Proteobacteria Cupriavidus gilardii 586 93 Firmicutes Staphylococcus Unclassified 568 183 Proteobacteria Stenotrophomonas geniculata 542 Selection deleted </code></pre> <p>If possible, how can i do for overlay image as given below, I will be thankful for this help, Regards <a href="https://i.sstatic.net/zLvqT.png" rel="nofollow noreferrer"><img 
src="https://i.sstatic.net/zLvqT.png" alt="enter image description here" /></a></p>
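One way to approach the filtering described above is to subset the grouped DataFrame to a single Phylum (or Genus) before building the per-ring labels and values, then feed each ring to `ax.pie` exactly as in the question. The sketch below is a minimal, hedged version using toy data shaped like the question's `grouped_data`; column names (`Phylum`, `Genus`, `Species`, `Absolute Count`) are taken from the question.

```python
import pandas as pd

# Toy data shaped like the question's grouped_data (Phylum/Genus/Species counts).
grouped_data = pd.DataFrame({
    "Phylum":  ["Proteobacteria", "Proteobacteria", "Firmicutes", "Firmicutes"],
    "Genus":   ["Pseudomonas", "Klebsiella", "Lysinibacillus", "Weissella"],
    "Species": ["Unclassified", "Unclassified", "boronitolerans", "ghanensis"],
    "Absolute Count": [73745, 10777, 1780, 1101],
})

def ring_data(df, focus_phylum=None, focus_genus=None):
    """Return {level: {'names', 'values'}}, optionally zoomed to one Phylum/Genus."""
    if focus_phylum is not None:
        df = df[df["Phylum"] == focus_phylum]
    if focus_genus is not None:
        df = df[df["Genus"] == focus_genus]
    rings = {}
    for level, keys in enumerate([["Phylum"], ["Phylum", "Genus"],
                                  ["Phylum", "Genus", "Species"]]):
        gb = df.groupby(keys, sort=False)["Absolute Count"].sum()
        rings[level] = {"names": gb.index.get_level_values(level).tolist(),
                        "values": gb.values}
    return rings

# Zoom into Firmicutes only; pass each rings[level] to ax.pie as in the question.
rings = ring_data(grouped_data, focus_phylum="Firmicutes")
print(rings[1]["names"])  # genera within Firmicutes
```

The same `ring_data(grouped_data, focus_phylum="Firmicutes", focus_genus="Bacillus")` call would restrict the Species ring to one Genus.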
<python><pandas><matplotlib>
2024-04-18 11:49:14
1
504
Umar
78,346,963
6,717,444
Write file extensions and their occurrences in a list into a dictionary?
<p>How can I solve this with loops, without regular expressions?</p> <p>Write a function that accepts a list of file names and returns a dictionary with extensions as keys and their occurrences as value.</p> <p>Example:</p> <pre><code>#print(count_file_types([&quot;image1.jpg&quot;, &quot;image2.jpg&quot;, &quot;preso.pptx&quot;])) #=&gt; {&quot;jpg&quot;: 2, &quot;pptx&quot;: 1} </code></pre> <p>I tried this, but I guess it’s all wrong:</p> <pre><code>def count_file_types(string_arr): arr = [] for i in string_arr: arr.append(i.split(&quot;.&quot;)) return(arr) print(arr) freq = {} for i in arr: if i in freq: freq[i] += 1 else: freq[item] = 1 return freq print(freq) print(count_file_types(['image1.jpg', 'image2.jpg', 'preso.pptx'])) </code></pre>
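For reference, one loop-only way to fix the attempt above: the early `return` exits after splitting, and `item` is an undefined name. Splitting on the last dot and counting in a single pass avoids both problems (assuming "extension" means the text after the last dot):

```python
def count_file_types(file_names):
    # Map each extension (text after the last ".") to its number of occurrences.
    freq = {}
    for name in file_names:
        ext = name.rsplit(".", 1)[-1]  # last dot wins: "a.b.c" -> "c"
        freq[ext] = freq.get(ext, 0) + 1
    return freq

print(count_file_types(["image1.jpg", "image2.jpg", "preso.pptx"]))
# -> {'jpg': 2, 'pptx': 1}
```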
<python><python-3.x><list><dictionary>
2024-04-18 11:09:46
4
350
Evanto
78,346,937
6,278,424
Getting json.decoder.JSONDecodeError at random
<p>I have implemented this function given by this answer: <a href="https://quant.stackexchange.com/a/70155/33457">https://quant.stackexchange.com/a/70155/33457</a></p> <p>When I run this code, sometimes it goes well, while most of the time it returns this error:</p> <pre><code>raise JSONDecodeError(&quot;Expecting value&quot;, s, err.value) from None json.decoder.JSONDecodeError: Expecting value: line 1 column 1 (char 0) </code></pre> <p>The funny thing is that whether it goes through or returns this error seems to be totally random. Why is that?</p> <p>I have tried to figure out if my <code>headers</code> is the problem, but I don't know.</p> <p>See my full code here:</p> <pre><code>import requests def get_symbol_for_isin(isin): url = 'https://query1.finance.yahoo.com/v1/finance/search' headers = { 'User-Agent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/98.0.4758.109 Safari/537.36', } params = dict( q=isin, quotesCount=1, newsCount=0, listsCount=0, quotesQueryId='tss_match_phrase_query' ) resp = requests.get(url=url, headers=headers, params=params) data = resp.json() if 'quotes' in data and len(data['quotes']) &gt; 0: return data['quotes'][0]['symbol'] else: return None apple_isin = 'US0378331005' print(get_symbol_for_isin(apple_isin)) </code></pre> <p>The return should be 'AAPL'</p>
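The intermittent failure is consistent with the endpoint occasionally answering with a non-JSON body (for example a rate-limit or consent HTML page), which makes the JSON parse fail at "line 1 column 1". A defensive sketch, using only the standard library, is to parse the body explicitly and treat non-JSON responses as "no result" (or as a signal to retry). The URL and parameters below are taken from the question; the behavior of Yahoo's unofficial endpoint is an assumption.

```python
import json
import urllib.parse
import urllib.request

def parse_symbol(payload: str):
    """Return the first quote symbol from a Yahoo-style search response, else None."""
    try:
        data = json.loads(payload)
    except json.JSONDecodeError:
        return None  # body was not JSON (e.g. an HTML error page) -- retry later
    quotes = data.get("quotes") or []
    return quotes[0]["symbol"] if quotes else None

def get_symbol_for_isin(isin: str):
    params = urllib.parse.urlencode({"q": isin, "quotesCount": 1, "newsCount": 0})
    req = urllib.request.Request(
        f"https://query1.finance.yahoo.com/v1/finance/search?{params}",
        headers={"User-Agent": "Mozilla/5.0"},
    )
    with urllib.request.urlopen(req, timeout=10) as resp:
        return parse_symbol(resp.read().decode("utf-8", errors="replace"))
```

Wrapping the `get_symbol_for_isin` call in a small retry loop with a delay would then handle the random failures instead of crashing on them.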
<python><request><yahoo-finance>
2024-04-18 11:05:31
1
530
k.dkhk
78,346,892
7,267,640
gRPC client request streaming: - "The client reset the request stream."
<p>I am trying to implement request streaming from my python client to my C# server using gRPC. This is my protofile:</p> <pre class="lang-protobuf prettyprint-override"><code> syntax = &quot;proto3&quot;; service PerceiveAPIDataService { rpc UploadResource (stream UploadResourceRequest) returns (UploadResourceResponse); } message UploadResourceRequest { oneof request_data { ResourceChunk resource_chunk = 1; UploadResourceParameters parameters = 2; } } message ResourceChunk { bytes content = 1; } message UploadResourceParameters { string path = 1; } </code></pre> <p>This is my c# implementation:</p> <pre class="lang-cs prettyprint-override"><code> public override async Task&lt;UploadResourceResponse&gt; UploadResource(IAsyncStreamReader&lt;UploadResourceRequest&gt; requestStream, ServerCallContext context) { if (!await requestStream.MoveNext()) { throw new RpcException(new Status(StatusCode.FailedPrecondition, &quot;No upload parameters found.&quot;)); } var initialMessage = requestStream.Current; if (initialMessage.RequestDataCase != UploadResourceRequest.RequestDataOneofCase.Parameters) { throw new RpcException(new Status(StatusCode.FailedPrecondition, &quot;First message must contain upload parameters.&quot;)); } var path = initialMessage.Parameters.Path; if (string.IsNullOrWhiteSpace(path)) { throw new RpcException(new Status(StatusCode.InvalidArgument, &quot;Upload path is required.&quot;)); } using (var ms = new MemoryStream()) { while (await requestStream.MoveNext()) { var chunk = requestStream.Current.ResourceChunk; if (chunk == null) { continue; // Skip any messages that are not resource chunks } await ms.WriteAsync(chunk.Content.ToByteArray().AsMemory(0, chunk.Content.Length)); } ms.Seek(0, SeekOrigin.Begin); // Reset memory stream position to the beginning for reading during upload var uploadResult = await _dataService.UploadResourceAsync(path, ms); return new UploadResourceResponse { Succeeded = uploadResult.IsSuccessful }; } } </code></pre> <p>And this 
is my python client code:</p> <pre class="lang-py prettyprint-override"><code> def generate_request(self, data: bytearray, next_cloud_path: str) -&gt; Generator: first_req = perceive_api_data_service_pb2.UploadResourceRequest( parameters=perceive_api_data_service_pb2.UploadResourceParameters(path=next_cloud_path) ) yield first_req print(&quot;Sent initial request with path:&quot;, next_cloud_path) chunk_size = 2048 total_chunks = (len(data) + chunk_size - 1) // chunk_size # Ceiling division to get total number of chunks print(f&quot;Data size: {len(data)} bytes, chunk size: {chunk_size} bytes, total chunks: {total_chunks}&quot;) for i in range(0, len(data), chunk_size): chunk = data[i:i+chunk_size] yield perceive_api_data_service_pb2.UploadResourceRequest( resource_chunk=perceive_api_data_service_pb2.ResourceChunk(content=chunk) ) print(f&quot;Sent chunk {((i // chunk_size) + 1)} of {total_chunks}&quot;) async def upload_file(self, data: bytearray, next_cloud_path: str) -&gt; bool: async with grpc.aio.insecure_channel(&quot;localhost:5228&quot;) as channel: stub = perceive_api_data_service_pb2_grpc.PerceiveAPIDataServiceStub(channel) request_iterator = self.generate_request(data, next_cloud_path) response = await stub.UploadResource(request_iterator) return response.succeeded </code></pre> <p>The error I get on the server is this:</p> <blockquote> <p>Grpc.AspNetCore.Server.ServerCallHandler: Error: Error when executing service method 'UploadResource'.</p> <p>System.IO.IOException: The client reset the request stream. 
at System.IO.Pipelines.Pipe.GetReadResult(ReadResult&amp; result) at System.IO.Pipelines.Pipe.GetReadAsyncResult()</p> </blockquote> <p>And the error I get on the client is this:</p> <blockquote> <p>File &quot;C:\Users\user_name\AppData\Local\Programs\Python\Python311\Lib\site-packages\grpc\aio_call.py&quot;, line 690, in _conduct_rpc serialized_response = await self._cython_call.stream_unary( ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File &quot;src\python\grpcio\grpc_cython_cygrpc/aio/call.pyx.pxi&quot;, line 458, in stream_unary File &quot;src\python\grpcio\grpc_cython_cygrpc/aio/callback_common.pyx.pxi&quot;, line 166, in _receive_initial_metadata File &quot;src\python\grpcio\grpc_cython_cygrpc/aio/callback_common.pyx.pxi&quot;, line 99, in execute_batch asyncio.exceptions.CancelledError</p> </blockquote> <p>The first message (the parameters) is sent successfully. However, as soon as the server tries to do <code>requestStream.MoveNext()</code>, it throws this error.</p> <p>I already tried numerous different solutions, but I cannot find anything that works. Does anyone see where I am making an error?</p>
<python><c#><grpc><grpc-python><grpc-c#>
2024-04-18 10:57:56
1
6,888
Luuk Wuijster
78,346,868
10,811,647
How to reduce my Tensorflow docker image?
<p>I have a Dash app running fine locally. The app uses tensorflow and ultralytics to detect some events on a graph using yolo8. I am trying to deploy this app to a server inside a docker container. The first image I built was based on the <code>tensorflow:latest-gpu</code> docker image. The resulting image size was 19.5 GB. Removing the -gpu tag helped reduce the size to 14 GB. Then I tried building from a python image (3.11.0). The Python-based image was 13 GB.</p> <p>How can I further reduce the size of my image? 13 GB is a lot considering that my dash app folder containing the assets is about 6 MB.</p> <p>Here is my Dockerfile:</p> <pre><code>#Using python FROM python:3.11.0 COPY requirements.txt ./requirements.txt # install opencv dependencies and requirements RUN apt-get update &amp;&amp; apt-get install ffmpeg libsm6 libxext6 -y &amp;&amp; pip install -r requirements.txt --ignore-installed #Copy files to container COPY . ./ #Running APP and doing some PORT Forwarding CMD gunicorn -b 0.0.0.0:1312 app:server </code></pre> <p>and my requirements file:</p> <pre><code>dash==2.16.1 dash-core-components==2.0.0 dash-daq==0.5.0 dash-html-components==2.0.0 dash-table==5.0.0 gunicorn==21.2.0 influxdb-client==1.41.0 keras==3.0.5 numpy==1.26.4 opencv-python==4.9.0.80 pandas==2.2.1 pillow==10.2.0 plotly==5.20.0 tensorflow==2.16.1 ultralytics==8.1.27 kaleido </code></pre> <p>Thanks for your help!</p>
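The usual size drivers in a stack like this are the full (non-slim) Python base image, the pip wheel cache baked into the layer, GUI-capable OpenCV, and the CUDA-enabled torch wheels that ultralytics pulls in. A hedged sketch of a smaller build follows; it assumes the app only needs CPU inference, and the package swaps must be mirrored in `requirements.txt` (`tensorflow` to `tensorflow-cpu`, `opencv-python` to `opencv-python-headless`, both real packages):

```dockerfile
# Sketch, not a drop-in: verify each swap against what the app actually needs.
FROM python:3.11-slim

COPY requirements.txt ./requirements.txt

# --no-install-recommends plus clearing the apt lists keeps the layer small;
# --no-cache-dir stops pip from storing downloaded wheels inside the image.
# Installing CPU-only torch first (a dependency of ultralytics) avoids the
# multi-GB CUDA wheels; the cpu wheel index is documented on pytorch.org.
RUN apt-get update && apt-get install -y --no-install-recommends ffmpeg libsm6 libxext6 \
    && rm -rf /var/lib/apt/lists/* \
    && pip install --no-cache-dir torch --index-url https://download.pytorch.org/whl/cpu \
    && pip install --no-cache-dir -r requirements.txt

COPY . ./
CMD ["gunicorn", "-b", "0.0.0.0:1312", "app:server"]
```

Checking the result with `docker history <image>` shows which layer still dominates, which is a quicker feedback loop than rebuilding blindly.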
<python><docker><tensorflow><plotly-dash>
2024-04-18 10:55:26
0
397
The Governor
78,346,757
5,931,672
Overriding multiprocessing.queues.Queue put method
<p>I want to implement a <code>multiprocessing.Queue</code> that does not add an element that already exists. Using the Python standard-library Queue I had no problem, following <a href="https://stackoverflow.com/a/16506527/5931672">this</a> response. For multiprocessing I had some issues that I solved thanks to <a href="https://stackoverflow.com/questions/34292296/multiprocessing-queue-subclass-issue">this</a>. For that I do the following:</p> <pre><code>from multiprocessing.queues import Queue from multiprocessing import get_context class CustomQueue(Queue): def put(self, obj, block=True, timeout=None): if obj not in self: return super().put(obj, block, timeout) def __contains__(self, item): with self.mutex: return item in self.queue custom_queue = CustomQueue(ctx=get_context()) </code></pre> <p>However, when I call the put method I get <code>AttributeError: 'CustomQueue' object has no attribute 'mutex'</code></p> <p>How can I solve this issue? Thank you in advance.</p> <hr /> <p>I read the code of <code>multiprocessing.queues.Queue</code>, and changed my code to this:</p> <pre><code>class CustomQueue(Queue): def put(self, obj, block=True, timeout=None): if self._closed: raise ValueError(f&quot;Queue {self!r} is closed&quot;) if not self._sem.acquire(block, timeout): raise Full with self._notempty: if self._thread is None: self._start_thread() if obj not in self._buffer: self._buffer.append(obj) self._notempty.notify() </code></pre> <p>But it still does not work. <code>self._buffer</code> seems to be the queue (it is a <code>collections.deque</code> object). But the <code>obj not in self._buffer</code> check always returns <code>True</code>. Why is this happening?</p>
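The reason the `_buffer` check never fires: `multiprocessing.Queue` starts a feeder thread that drains `self._buffer` into a pipe almost immediately, so the deque is nearly always empty by the time the next `put` runs. A hedged alternative is to track everything ever put in a separate shared structure (a `Manager` dict here). Note one assumption/caveat: `Queue.__getstate__` does not carry custom attributes when the queue is pickled into a child process, so the `seen` mapping must be passed to workers separately if they also call `put`.

```python
import queue
from multiprocessing import Manager, get_context
from multiprocessing.queues import Queue


class DedupQueue(Queue):
    """Queue that silently drops items it has already seen.

    self._buffer is only a staging deque: a feeder thread moves items into a
    pipe almost immediately, so membership tests against it are unreliable.
    """

    def __init__(self, ctx, seen):
        super().__init__(ctx=ctx)
        self._seen = seen  # a Manager dict shared across processes

    def put(self, obj, block=True, timeout=None):
        if obj in self._seen:
            return  # duplicate: drop it
        self._seen[obj] = True
        super().put(obj, block, timeout)


if __name__ == "__main__":
    with Manager() as mgr:
        q = DedupQueue(get_context(), mgr.dict())
        for item in ["a", "b", "a"]:
            q.put(item)
        print(q.get(timeout=2), q.get(timeout=2))  # a b
        try:
            q.get(timeout=0.5)
        except queue.Empty:
            print("duplicate was dropped")
```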
<python><multiprocessing><queue>
2024-04-18 10:36:02
2
4,192
J Agustin Barrachina
78,346,628
10,595,871
Scrape contents of Network section of an element in a page
<p>I need to scrape a page; the website is the following: <a href="https://commercialisti.it/iscritti" rel="nofollow noreferrer">https://commercialisti.it/iscritti</a> It's only in Italian, but still, it is a list of professional people that I'm able to search via &quot;Cap&quot;.</p> <p>For example, by filling the Cap with the value 37138 and then pressing on &quot;CERCA&quot; it will display a list of professionals with some data. I found that if I inspect the page, go to the Network tab, and then open the element <code>LstIscritti?_=1713434471262</code>, there is a JSON with all the data that I need. The problem is that I don't understand how to scrape the data from this section of the website.</p> <p>I tried with beautifulsoup but I'm only able to scrape the HTML of the main page.</p> <p>My code so far:</p> <pre><code>from selenium import webdriver import time from selenium.webdriver.common.by import By driver = webdriver.Chrome() driver.maximize_window() driver.get('https://commercialisti.it/iscritti') driver.implicitly_wait(10) driver.switch_to.frame(driver.find_element(By.XPATH, &quot;(//iframe)[1]&quot;)) casella_testo = driver.find_element(&quot;id&quot;, &quot;Cap&quot;) casella_testo.send_keys(&quot;37138&quot;) pulsante_cerca = driver.find_element(&quot;id&quot;, 'btnContinua') pulsante_cerca.click() time.sleep(5) res = driver.find_element(&quot;id&quot;, &quot;listIscritti&quot;) time.sleep(10) </code></pre> <p>The content of <code>res</code> is the scraped table displayed in the page after pressing the &quot;CERCA&quot; button, but I need the details that are in the Network section</p>
<python><beautifulsoup>
2024-04-18 10:15:28
1
691
Federicofkt
78,346,215
7,219,400
Python Flask SQLite not creating a column with a specific name
<p>This is just mind-blowing. I am trying to use an in-memory table for small data that can be fetched easily, but I don't want to send requests all the time, so I save it in an in-memory SQLite table. Here is my <code>__init__.py</code> file:</p> <pre><code>from flask import Flask from .models import db from apscheduler.schedulers.background import BackgroundScheduler # Corrected import class Config: SCHEDULER_API_ENABLED = True # SQLALCHEMY_DATABASE_URI = 'sqlite:///:memory:' SQLALCHEMY_DATABASE_URI = 'sqlite:///..//db.sqlite3' SQLALCHEMY_TRACK_MODIFICATIONS = False def create_app(): app = Flask(__name__) app.config.from_object(Config) db.init_app(app) with app.app_context(): db.create_all() # Create the database tables - IMPORTANT scheduler = BackgroundScheduler() # Corrected class name scheduler.start() from .views import views from .auth import auth app.register_blueprint(views, url_prefix='/') app.register_blueprint(auth, url_prefix='/') return app </code></pre> <p>I wrote db.sqlite3 to be able to see it in VS Code, but it works the same with the in-memory table.</p> <p>And here is my table:</p> <pre><code>class Holidays(db.Model): __tablename__ = 'Holidays' data_id = db.Column(db.Integer, primary_key=True) # date format: &quot;yyyy-MM-dd&quot; date = db.Column(db.String) test = db.Column(db.String) test2 = db.Column(db.Float) test3 = db.Column(db.Integer) # sholday = db.Column(db.Integer) # sholday = db.Column(db.Integer) test4 = db.Column(db.Boolean) isholday = db.Column(db.Integer) sholday = db.Column(db.Integer) @classmethod def add(cls, holiday): db.session.add(holiday) db.session.commit() @classmethod def add_all(cls, holidays): db.session.add_all(holidays) db.session.commit() @classmethod def sholday(cls, date): holiday = Holidays.query.filter(Holidays.date == date).first() if holiday: return holiday.sholday return False @classmethod def does_date_exist(cls, date): return Holidays.query.filter(Holidays.date == date).first() is not None </code></pre> <p>Now here, it creates
every column except sholday. I initially named it is_holiday, but it wasn't created under that name either. I copied and pasted this line somewhere else and changed the name, but it still does not create that column.</p>
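A likely cause, visible in the model above: the class defines a `@classmethod` also named `sholday` after the `sholday = db.Column(...)` line. A later binding with the same name silently replaces the earlier one in the class body, so SQLAlchemy never sees the column; renaming the method (e.g. to `get_sholday`) should restore it. (Separately, note that `db.create_all()` never adds columns to an already-existing table file, so the old `db.sqlite3` must be deleted or migrated after model changes.) A plain-Python demonstration of the shadowing:

```python
class Demo:
    sholday = "column definition"      # first binding, like db.Column(...)

    @classmethod
    def sholday(cls):                  # same name: silently rebinds the attribute
        return "classmethod"

# The string is gone; only the later binding survives on the class.
print(type(Demo.__dict__["sholday"]))  # classmethod, not str
print(Demo.sholday())
```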
<python><sqlite><flask><sqlalchemy>
2024-04-18 09:09:32
1
1,464
Sahin
78,346,156
7,295,936
Pandas DataFrame applymap tries to modify column names
<p>Hello, I have a dataframe containing datetime info that I'd like to format, so I've used this command:</p> <pre><code>df2[[&quot;month&quot;, &quot;day&quot;, &quot;hour&quot;, &quot;min&quot;, &quot;s&quot;]] = df2[[&quot;month&quot;, &quot;day&quot;, &quot;hour&quot;, &quot;min&quot;, &quot;s&quot;]].applymap(lambda x: f&quot;{int(x):02d}&quot;) </code></pre> <p>However, I get this error: <code>ValueError: invalid literal for int() with base 10: 'month'</code></p> <p>So my guess is that the applymap function is trying to apply the format to the column names. How would you solve this problem?</p> <p>Here is a sample of the data: <a href="https://i.sstatic.net/1XTVQ.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/1XTVQ.png" alt="enter image description here" /></a></p> <p>I'd like month values to be '01' instead of '1'.</p>
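Worth noting: `applymap` only ever sees cell values, never column names, so this error means some cell literally contains the string `'month'` -- most likely a repeated header row that was read as data. A hedged sketch that coerces to numbers first (so stray header rows become `NaN` and can be dropped) and then zero-pads; the toy frame stands in for the real data:

```python
import pandas as pd

# Toy frame where a stray header row slipped into the data.
df2 = pd.DataFrame({"month": ["month", 4, 12], "day": ["day", 3, 7]})

cols = ["month", "day"]
nums = df2[cols].apply(pd.to_numeric, errors="coerce")  # 'month' -> NaN
df2 = df2[nums.notna().all(axis=1)].copy()              # drop non-numeric rows
# Series.map avoids DataFrame.applymap, which was renamed to DataFrame.map
# in pandas 2.1 and is deprecated under the old name.
df2[cols] = nums.dropna().astype(int).apply(lambda s: s.map(lambda x: f"{x:02d}"))
print(df2)
```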
<python><python-3.x><pandas><dataframe><lambda>
2024-04-18 08:59:57
1
1,560
FrozzenFinger
78,346,024
2,803,777
How to fill an nd array with values from a 1d-array?
<p>The following is a real-world problem in <code>NumPy</code> reduced to the essentials, just with smaller dimensions.</p> <p>Let's say I want to create an n-dimensional array <code>all</code> with dimensions (10, 10, 100):</p> <pre><code>all = np.empty((10, 10, 100)) </code></pre> <p>I also have a 1d array <code>data</code>, simulated here as</p> <pre><code>data = np.arange(0, 100) </code></pre> <p>For all i, j I now want to achieve that</p> <pre><code>all[i,j]=data </code></pre> <p>So I do:</p> <pre><code>all[:, :]=data </code></pre> <p>Of course that works.</p> <p>But now I want to import <code>data</code> to <code>all2</code> with shape (100, 10, 10). I could do that with</p> <pre><code>all2 = np.empty((100, 10, 10)) # new target to be populated for i in range(100): for j in range(10): for k in range(10): all2[i, j, k]=data[i] </code></pre> <p>But is there an easier way to do this without looping? I would be surprised if it couldn't be done more elegantly, but I don't see how.</p>
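The loop-free answer is broadcasting: adding two trailing length-1 axes to `data` makes its shape `(100, 1, 1)`, which broadcasts against `(100, 10, 10)` exactly as `(100,)` broadcasts against `(10, 10, 100)` in the first case. Two equivalent sketches:

```python
import numpy as np

data = np.arange(0, 100)

# Add two trailing length-1 axes, then let broadcasting expand them.
all2 = np.broadcast_to(data[:, None, None], (100, 10, 10)).copy()
# .copy() gives a writable array; without it broadcast_to returns a
# read-only view that shares memory with data.

# Equivalent form that allocates the target first, mirroring the question:
all3 = np.empty((100, 10, 10))
all3[:] = data[:, None, None]

print(all2.shape)  # (100, 10, 10)
```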
<python><numpy><numpy-ndarray><numpy-slicing>
2024-04-18 08:35:57
1
1,502
MichaelW
78,345,829
11,586,490
Making text line up vertically when sharing to WhatsApp
<p>I've created a simple scorecard app that sums users' scores while they're playing games (card games, golf, etc.). I've added the ability to share the result of their game to WhatsApp, and I'd like it to appear a bit like a table, with the player name and then the player score, each player on a new line.</p> <p>I'm trying to make the scores line up vertically, which is challenging given the player names will differ in length. I did this successfully in my IDE on my laptop by working out the length of the longest name and adding the correct amount of whitespace to the shorter names. This prints out to the terminal correctly, like so:</p> <p><a href="https://i.sstatic.net/KTa4P.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/KTa4P.png" alt="enter image description here" /></a></p> <p>However, when I share to WhatsApp in my app on my Android phone the text no longer lines up. I understand this is due to the font used by WhatsApp, where an &quot;i&quot; takes up less space than a &quot;w&quot;, whereas I need it to be monospaced.</p> <p>Here's how it currently appears on my phone:</p> <p><a href="https://i.sstatic.net/my4BE.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/my4BE.png" alt="enter image description here" /></a></p> <p>Any ideas on how I could get it to line up neatly on WhatsApp?</p>
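WhatsApp's documented text formatting includes a monospace style: wrapping text in three backticks renders it in a fixed-width font, which should make the existing padding approach line up again. A minimal sketch (player names and scores are illustrative):

```python
MONO = "`" * 3  # WhatsApp's monospace delimiter: three backticks

def scorecard_message(scores):
    """Pad names to equal width and wrap the block so WhatsApp uses monospace."""
    width = max(len(name) for name in scores)
    lines = [f"{name.ljust(width)}  {score}" for name, score in scores.items()]
    return MONO + "\n".join(lines) + MONO

print(scorecard_message({"Callum": 54, "Jo": 61, "Alexandra": 47}))
```

The same wrapper works for the already-padded string the app builds today; only the leading and trailing backticks need to be added before passing the text to the share intent.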
<python><android><whatsapp>
2024-04-18 08:06:33
1
351
Callum
78,345,753
13,200,217
mypy checking pyi in venv despite excluding it
<p>I have a PySide project set up using a pyproject.toml file, with venv+pip.</p> <p>I have set up mypy in the pyproject.toml file as follows:</p> <pre class="lang-ini prettyprint-override"><code>[tool.mypy] disable_error_code = [&quot;import-untyped&quot;] exclude = [&quot;^.venv/&quot;, &quot;^myproject/somefolder/&quot;] [[tool.mypy.overrides]] module = &quot;PySide6.*&quot; ignore_errors = true </code></pre> <p>However when running <code>mypy .</code> I get the following error:</p> <pre><code>venv\Lib\site-packages\PySide6\QtGui.pyi:1094: error: unexpected indent [syntax] Found 1 error in 1 file (errors prevented further checking) </code></pre> <p>This is caused by this issue in PySide: <a href="https://bugreports.qt.io/browse/PYSIDE-2665" rel="nofollow noreferrer">https://bugreports.qt.io/browse/PYSIDE-2665</a> and I'd like to ignore it.</p> <p>What I've tried:</p> <ul> <li>Check if <code>somefolder</code> is actually being ignored. It is, as when I remove the regex from the exclude, more errors show up.</li> <li>Change the module to <code>&quot;PySide6&quot;</code>. Still getting the same error.</li> <li>Remove the <code>[[tool.mypy.overrides]]</code> and what's under it. Still getting the same error.</li> <li>Using strings instead of regexes: <code>exclude = [&quot;venv&quot;,&quot;myproject/somefolder&quot;]</code>. Still the same.</li> </ul> <p>The mypy documentation only mentions how to ignore individual files (<a href="https://mypy.readthedocs.io/en/stable/config_file.html#confval-exclude" rel="nofollow noreferrer">link</a>).</p> <p>So how would I go about fixing this?</p>
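One detail that explains the behavior: mypy's `exclude` only filters the files it discovers when crawling the paths given on the command line; modules reached by following imports are still analyzed, which is why the broken `QtGui.pyi` gets parsed despite the exclude (and `ignore_errors` cannot suppress syntax errors, which are blocking). A workaround worth trying, hedged because it trades all PySide6 type information for `Any`, is a per-module `follow_imports = "skip"` override so mypy never reads the stubs; upgrading PySide6 once a release containing the PYSIDE-2665 fix is available would be the clean solution:

```toml
[tool.mypy]
disable_error_code = ["import-untyped"]
exclude = ["^.venv/", "^myproject/somefolder/"]

# Do not analyze PySide6 at all: treat it as Any instead of reading its
# (currently syntactically broken) .pyi stubs.
[[tool.mypy.overrides]]
module = "PySide6.*"
follow_imports = "skip"
```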
<python><mypy><pyproject.toml>
2024-04-18 07:54:14
0
353
Andrei MiculiΘ›Δƒ
78,345,731
2,739,700
Azure alerting for KQL query using Python
<p>I am not able to create an alert using Python code; manually, it got created.</p> <p>Below is the code:</p> <pre><code>from azure.identity import DefaultAzureCredential from azure.mgmt.resource import ResourceManagementClient from azure.mgmt.monitor import MonitorManagementClient from azure.mgmt.monitor.v2018_04_16.models import LogSearchRuleResource, Source, Schedule, Action # Define the KQL query kql_query = &quot;&quot;&quot; ConfigurationData | where Computer contains &quot;test_machine&quot; | where SvcName contains &quot;test-service&quot; | where SvcState != &quot;Running&quot; &quot;&quot;&quot; # Azure subscription ID subscription_id = '5xxxxxxxxxxxx' # Resource group resource_group_name = 'rg-name' uri = &quot;/subscriptions/xxxxxxxxx/resourceGroups/rg-anme/providers/Microsoft.Compute/virtualMachines/test-machine&quot; # Define parameters scheduledqueryrules_custom_query_name = 'custom_query' # Authenticate to Azure credential = DefaultAzureCredential() # Initialize Resource Management Client resource_client = ResourceManagementClient(credential, subscription_id) actions = Action( odata_type=&quot;LogToMetricAction&quot; ) # Initialize Monitor Management Client monitor_client = MonitorManagementClient(credential, subscription_id) source = Source(query=kql_query, data_source_id=uri) schedule = Schedule(frequency_in_minutes=5, time_window_in_minutes=15) log_search = LogSearchRuleResource(location=&quot;northcentralus&quot;, source=source, action=actions) rule_name = scheduledqueryrules_custom_query_name rule_result = monitor_client.scheduled_query_rules.create_or_update(resource_group_name=resource_group_name, parameters=log_search, rule_name=&quot;ddfed&quot;) print(&quot;Rule created successfully:&quot;, rule_result) </code></pre> <p>Error:</p> <pre><code>File &quot;/usr/local/lib/python3.11/site-packages/azure/mgmt/monitor/v2018_04_16/operations/_scheduled_query_rules_operations.py&quot;, line 386, in create_or_update raise
HttpResponseError(response=response, model=error, error_format=ARMErrorFormat) azure.core.exceptions.HttpResponseError: (BadRequest) Invalid value for properties.action.odata.type Activity ID: 49321a7c-b696-4042-aa5c-a109997224e4. Code: BadRequest Message: Invalid value for properties.action.odata.type Activity ID: 49321a7c-b696-4042-aa5c-a1sddfrre4. </code></pre> <p>Below are the Microsoft Azure docs for the classes:</p> <p><a href="https://learn.microsoft.com/en-us/python/api/azure-mgmt-monitor/azure.mgmt.monitor.v2018_04_16.models.logsearchruleresource?view=azure-python" rel="nofollow noreferrer">https://learn.microsoft.com/en-us/python/api/azure-mgmt-monitor/azure.mgmt.monitor.v2018_04_16.models.logsearchruleresource?view=azure-python</a></p> <p>Not sure what went wrong; any help would be greatly appreciated.</p> <p>Python Version: 3.11 Packages:</p> <pre><code>azure-common==1.1.28 azure-core==1.30.1 azure-identity==1.16.0 azure-mgmt-core==1.4.0 azure-mgmt-monitor==6.0.2 azure-mgmt-resource==23.0.1 azure-monitor-query==1.3.0 certifi==2024.2.2 cffi==1.16.0 charset-normalizer==3.3.2 cryptography==42.0.5 idna==3.7 isodate==0.6.1 msal==1.28.0 msal-extensions==1.1.0 packaging==24.0 portalocker==2.8.2 pycparser==2.22 PyJWT==2.8.0 requests==2.31.0 six==1.16.0 typing_extensions==4.11.0 urllib3==2.2.1 </code></pre>
<python><azure><azure-media-services><azure-monitoring><azure-alerts>
2024-04-18 07:48:33
1
404
GoneCase123
78,345,554
3,793,935
Python file.save saves empty file
<p>I retrieve files from a frontend upload, and compute a hash over the file this way:</p> <pre><code>blob_file = convert_to_blob(file) </code></pre> <p>with this function:</p> <pre><code>def convert_to_blob(file: file_storage.FileStorage) -&gt; bytes: os_path = os.path.join(config('UPLOAD_CONVERT'), &quot;convert.pdf&quot;) file.save(os_path) # Convert digital data to binary format with open(os_path, 'rb') as file: blobData = file.read() return blobData </code></pre> <p>This works fine and the file is saved as expected. After that I save the file a second time, but with the hash as the name, in the folder intended for the file:</p> <pre><code> if file and allowed_file(file.filename): # first, convert the file as blob, so we can build a hash over the blob blob_file = convert_to_blob(file) hash_ = hashlib.md5(blob_file).hexdigest() # save file local file_path = f&quot;{config('UPLOAD_FOLDER')}/{request.form['mandant']}&quot; if not os.path.exists(file_path): os.makedirs(file_path) os_path = os.path.join(f&quot;{file_path}/{hash_}.pdf&quot;) file.save(os_path) </code></pre> <p>But for some reason, the second time around the file won't properly save. It's always empty. I've tried to save the file in different folders and without the hash-building stuff before, but no luck.</p> <p>Can someone explain what's happening here?</p> <p><strong>Edit:</strong> file type -&gt; &lt;class 'werkzeug.datastructures.file_storage.FileStorage'&gt;</p> <p><strong>Edit2:</strong> Okay, it is because of the file.save; it seems like the file is closed afterwards or something? Is there a way to avoid that without reopening the file?</p>
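The stream is not closed; it is consumed. `FileStorage.save` reads the underlying upload stream to its end, so the second `save` starts at EOF and writes zero bytes. Rewinding with `file.seek(0)` between the two saves should fix it (FileStorage proxies `seek` to its stream, assuming the stream is seekable, which is the case for werkzeug's buffered uploads). A standard-library demonstration of the mechanism:

```python
import io

# Stands in for the upload stream behind werkzeug's FileStorage.
stream = io.BytesIO(b"%PDF-1.4 fake content")

first = stream.read()           # consumes the stream; cursor is now at EOF
second = stream.read()          # nothing left -> b""
print(len(first), len(second))  # 21 0

stream.seek(0)                  # rewind before reading/saving again
again = stream.read()
print(len(again))               # 21
```

In the question's code, calling `file.seek(0)` right after `convert_to_blob(file)` (or at the top of that function, before `file.save`) applies the same rewind.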
<python><file>
2024-04-18 07:18:16
2
499
user3793935
78,345,428
108,390
How to avoid Mypy Incompatible type warnings in Chained when/then assignments?
<p>I have the following code:</p> <pre><code>expr = pl.when(False).then(None) for pattern, replacement in replacement_rules.items(): expr = expr.when(pl.col(&quot;data&quot;).str.contains(pattern)) expr = expr.then(pl.lit(replacement)) expr = expr.when(pl.col(&quot;ISO_codes&quot;).str.len_chars() &gt; 0) expr = expr.then( pl.col(&quot;ISO_codes&quot;) .replace(iso_translation, default=&quot;Unknown ISO Code&quot;) ) </code></pre> <p>The code works as intended, but Mypy is not too happy about it: <a href="https://i.sstatic.net/dGR7O.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/dGR7O.png" alt="enter image description here" /></a></p> <p>I cannot figure out how to get rid of the warnings without losing all &quot;Incompatible type&quot; warnings, or how to rewrite the code to make them go away.</p>
<python><mypy><python-typing><python-polars>
2024-04-18 06:55:37
1
1,393
Fontanka16
78,345,364
51,816
How to draw a waveform as a curve using matplotlib?
<p>I wrote this code:</p> <pre><code>def plotWaveforms(audioFile1, audioFile2, imageFile, startSegment=30, endSegment=35, amp1=0.5, amp2=0.5): # Load audio files y1, sr1 = librosa.load(audioFile1, sr=None, offset=startSegment, duration=endSegment - startSegment) y2, sr2 = librosa.load(audioFile2, sr=None, offset=startSegment, duration=endSegment - startSegment) # Normalize and adjust the amplitude of the audio signals y1 = normalize_audio(y1, amp1) y2 = normalize_audio(y2, amp2) # Create a figure with a black background plt.figure(figsize=(16, 1), facecolor='black') # Plot the second audio file as a filled waveform plt.fill_between(np.linspace(0, len(y2) / sr2, len(y2)), y2, color='green', alpha=1) # Plot the first audio file as a filled waveform plt.fill_between(np.linspace(0, len(y1) / sr1, len(y1)), y1, color='blue', alpha=0.5) # Remove axes, labels, and title for a clean look plt.axis('off') # Save the figure with a specific resolution plt.savefig(imageFile, format='png', dpi=300, bbox_inches='tight', pad_inches=0) plt.close() </code></pre> <p>which produces this:</p> <p><a href="https://i.sstatic.net/Ko07L.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Ko07L.png" alt="enter image description here" /></a></p> <p>But I am trying to draw a filled curve using the peak of each point or at regular intervals that look like this:</p> <p><a href="https://i.sstatic.net/QCerB.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/QCerB.png" alt="enter image description here" /></a></p> <p><a href="https://i.sstatic.net/djaKh.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/djaKh.png" alt="enter image description here" /></a></p> <p>How can I do this?</p>
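The smooth "curve" look usually comes from plotting a peak envelope rather than every sample: take the maximum of |y| over fixed-size blocks, then `fill_between` the envelope and its mirror image. A hedged sketch of the envelope computation with NumPy (a synthetic signal stands in for the `librosa.load` output; the plotting call from the question is shown as a comment):

```python
import numpy as np

def peak_envelope(y, hop=512):
    """Return (block_centers, peaks): per-block max of |y|, one value per hop samples."""
    n_blocks = len(y) // hop
    trimmed = np.abs(y[: n_blocks * hop]).reshape(n_blocks, hop)
    peaks = trimmed.max(axis=1)
    centers = (np.arange(n_blocks) + 0.5) * hop  # sample index of each block center
    return centers, peaks

# Synthetic signal standing in for librosa.load output.
t = np.linspace(0, 1, 4096)
y = np.sin(2 * np.pi * 5 * t) * np.sin(2 * np.pi * 440 * t)

x, env = peak_envelope(y, hop=256)
# Then, in the question's plotting function (sr is the sample rate):
#   plt.fill_between(x / sr, env, -env, color="green", alpha=1)
# Mirroring with -env gives the classic symmetric waveform; smoothing env
# further (e.g. a rolling mean or np.interp onto a finer grid) rounds the curve.
```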
<python><matplotlib><audio><visualization><waveform>
2024-04-18 06:45:08
1
333,709
Joan Venge
78,345,271
8,510,149
Pandas shift operation with condition
<p>Below I have a small dataset with 3 columns: ID, tag, and value. Tag represents the source of the information that the feature 'value' is based on.</p> <p>I want to create a lag feature for 'value'. Below I do that in an easy way. However, for index 4 and 5 we can see that 'tag' has the same value. I do not want this situation.</p> <p>I wish to take 'tag' into consideration. I want to perform the shift only when the 'tag' value is not the same.</p> <p>What would be a good method to perform this?</p> <pre><code>import pandas as pd import numpy as np df = pd.DataFrame({'ID':[1,1,1,2,2, 2,2,3,3,3], 'tag':[10, 11, 15, 11, 12, 12, 13, 16, 17, 18], 'value':[21, 19, 22, 41, 43, 43, 38, 9, 12, 16]}) df['value_lag'] = df.sort_values(by=['ID', 'tag']).groupby('ID')['value'].shift(1) print(df) </code></pre> <pre><code> ID tag value value_lag 0 1 10 21 NaN 1 1 11 19 21.0 2 1 15 22 19.0 3 2 11 41 NaN 4 2 12 43 41.0 5 2 12 43 43.0 6 2 13 38 43.0 7 3 16 9 NaN 8 3 17 12 9.0 9 3 18 16 12.0 </code></pre> <p>Desired output would be:</p> <pre><code> ID tag value value_lag 0 1 10 21 NaN 1 1 11 19 21.0 2 1 15 22 19.0 3 2 11 41 NaN 4 2 12 43 41.0 5 2 12 43 41.0 -Here, should not be 43 6 2 13 38 43.0 7 3 16 9 NaN 8 3 17 12 9.0 9 3 18 16 12.0 </code></pre>
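One way to get the desired behavior is to compute the lag on unique (ID, tag) pairs only, then map it back, so all rows sharing a tag inherit the previous tag's value. A sketch on the question's own data (assumes rows with the same (ID, tag) also share the same value, as in the example, since `drop_duplicates` keeps the first row):

```python
import pandas as pd

df = pd.DataFrame({'ID':  [1, 1, 1, 2, 2, 2, 2, 3, 3, 3],
                   'tag': [10, 11, 15, 11, 12, 12, 13, 16, 17, 18],
                   'value': [21, 19, 22, 41, 43, 43, 38, 9, 12, 16]})

# Lag over unique (ID, tag) pairs, so duplicate tags do not shift past each other.
uniq = (df.drop_duplicates(['ID', 'tag'])
          .sort_values(['ID', 'tag']))
uniq['value_lag'] = uniq.groupby('ID')['value'].shift(1)

# Map the per-(ID, tag) lag back onto every row.
df = df.merge(uniq[['ID', 'tag', 'value_lag']], on=['ID', 'tag'], how='left')
print(df)
```

Both tag-12 rows of ID 2 now get 41.0, matching the desired output.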
<python><pandas>
2024-04-18 06:27:24
1
1,255
Henri
78,345,055
4,987,648
Static type checking for union type and pattern matching
<p>In functional languages like OCaml/Haskell/… I can type something like:</p> <pre class="lang-ocaml prettyprint-override"><code>type expr = | Nb of float | Add of expr * expr | Soust of expr * expr | Mult of expr * expr | Div of expr * expr | Opp of expr let rec eval x = match x with | Nb n -&gt; n | Add (e1, e2) -&gt; (eval e1) +. (eval e2) | Soust (e1, e2) -&gt; (eval e1) -. (eval e2) | Mult (e1, e2) -&gt; (eval e1) *. (eval e2) | Div (e1, e2) -&gt; (eval e1) /. (eval e2) | Opp n -&gt; -. (eval n) </code></pre> <p>And once my code compiles, I will be guaranteed that for any <code>x</code> of type <code>expr</code>, <code>eval x</code> will always produce an output of type <code>float</code>. This notably implies that my pattern matching is not forgetting any cases, so if later I add a new item to the <code>expr</code> type, it will fail to compile until I add this new case to the pattern matching.</p> <p>Sadly I can't find anything in Python that would provide such a strong guarantee, including with Python 3.10's typing system… Am I missing something?</p>
<python><pattern-matching><python-typing>
2024-04-18 05:30:55
1
2,584
tobiasBora
78,344,781
9,951,273
How can I infer return type for object based on parameter?
<p>Let's say we have a function</p> <pre><code>def get_attr_wrapper(obj: object, attr: str) -&gt; ???: return getattr(obj, attr) </code></pre> <p>How can I infer the return type of <code>get_attr_wrapper</code> based on the parameters given?</p> <p>Maybe with a generic somehow?</p> <p>For example, if I passed in</p> <pre><code>from dataclasses import dataclass @dataclass class Foo: bar: str foo = Foo(bar=&quot;baz&quot;) rv = get_attr_wrapper(foo, &quot;bar&quot;) </code></pre> <p>In our desired scenario, <code>rv</code> would be inferred by Python's type checker as being of type <code>string</code>.</p>
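Python's type system cannot derive the attribute-name-to-type mapping generically for an arbitrary `object`, but for a known class the usual workaround is one `@overload` per attribute, keyed on `Literal` string types; the checker then infers `str` for `"bar"` and `int` for `"count"`. A sketch (the `count` field is added here purely for illustration):

```python
from dataclasses import dataclass
from typing import Literal, overload


@dataclass
class Foo:
    bar: str
    count: int


# One overload per known attribute name; the implementation does the real work.
@overload
def get_attr_wrapper(obj: Foo, attr: Literal["bar"]) -> str: ...
@overload
def get_attr_wrapper(obj: Foo, attr: Literal["count"]) -> int: ...
def get_attr_wrapper(obj: Foo, attr: str) -> object:
    return getattr(obj, attr)


rv = get_attr_wrapper(Foo(bar="baz", count=3), "bar")  # checker infers str
print(rv)
```

The cost is that the overload list must be maintained per class; for fully generic code, direct attribute access (`foo.bar`) or a `Protocol` describing the needed attributes stays the only way to keep precise types.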
<python><python-typing>
2024-04-18 03:55:38
1
1,777
Matt
78,344,729
1,601,580
How do I have multiple src directories at the root of my python project with a setup.py and pip install -e?
<p>I want to have two src dirs at the root of my project. The reason is that one is code I want to work without modifying any of the imports. The second is new code indepdent of the &quot;old code&quot;. I want two src's with and <code>pip install -e .</code> to work. My <code>setup.py</code> is:</p> <pre class="lang-py prettyprint-override"><code>&quot;&quot;&quot; python -c &quot;print()&quot; refs: - setup tools: https://setuptools.pypa.io/en/latest/userguide/package_discovery.html#using-find-or-find-packages - https://stackoverflow.com/questions/70295885/how-does-one-install-pytorch-and-related-tools-from-within-the-setup-py-install &quot;&quot;&quot; from setuptools import setup from setuptools import find_packages import os here = os.path.abspath(os.path.dirname(__file__)) with open(os.path.join(here, 'README.md'), encoding='utf-8') as f: long_description = f.read() setup( name='massive-evaporate-4-math', # project name version='0.0.1', long_description=long_description, long_description_content_type=&quot;text/markdown&quot;, author='Me', author_email='me@gmail.com', python_requires='&gt;=3.9', license='Apache 2.0', # ref: https://chat.openai.com/c/d0edae00-0eb2-4837-b492-df1d595b6cab # The `package_dir` parameter is a dictionary that maps package names to directories. # A key of an empty string represents the root package, and its corresponding value # is the directory containing the root package. Here, the root package is set to the # 'src' directory. # # The use of an empty string `''` as a key is significant. In the context of setuptools, # an empty string `''` denotes the root package of the project. It means that the # packages and modules located in the specified directory ('src' in this case) are # considered to be in the root of the package hierarchy. This is crucial for correctly # resolving package and module imports when the project is installed. 
# # By specifying `{'': 'src'}`, we are informing setuptools that the 'src' directory is # the location of the root package, and it should look in this directory to find the # Python packages and modules to be included in the distribution. package_dir={ '': 'src_math_evaporate', 'bm_evaporate': 'src_bm_evaporate', }, # The `packages` parameter lists all Python packages that should be included in the # distribution. A Python package is a way of organizing related Python modules into a # directory hierarchy. Any directory containing an __init__.py file is considered a # Python package. # # `find_packages('src')` is a convenience function provided by setuptools, which # automatically discovers and lists all packages in the specified 'src' directory. # This means it will include all directories in 'src' that contain an __init__.py file, # treating them as Python packages to be included in the distribution. # # By using `find_packages('src')`, we ensure that all valid Python packages inside the # 'src' directory, regardless of their depth in the directory hierarchy, are included # in the distribution, eliminating the need to manually list them. This is particularly # useful for projects with a large number of packages and subpackages, as it reduces # the risk of omitting packages from the distribution. packages=find_packages('src_math_evaporate') + find_packages('src_bm_evaporate'), # When using `pip install -e .`, the package is installed in 'editable' or 'develop' mode. # This means that changes to the source files immediately affect the installed package # without requiring a reinstall. This is extremely useful during development as it allows # for testing and iteration without the constant need for reinstallation. # # In 'editable' mode, the correct resolution of package and module locations is crucial. # The `package_dir` and `packages` configurations play a vital role in this. 
If the # `package_dir` is incorrectly set, or if a package is omitted from the `packages` list, # it can lead to ImportError due to Python not being able to locate the packages and # modules correctly. # # Therefore, when using `pip install -e .`, it is essential to ensure that `package_dir` # correctly maps to the root of the package hierarchy and that `packages` includes all # the necessary packages by using `find_packages`, especially when the project has a # complex structure with nested packages. This ensures that the Python interpreter can # correctly resolve imports and locate the source files, allowing for a smooth and # efficient development workflow. # for pytorch see doc string at the top of file install_requires=[ 'fire', 'dill', 'networkx&gt;=2.5', 'scipy', 'scikit-learn', 'lark-parser', 'tensorboard', 'pandas', 'progressbar2', 'requests', 'aiohttp', 'numpy', 'plotly', 'wandb', 'matplotlib', # 'statsmodels' # 'statsmodels==0.12.2' # 'statsmodels==0.13.5' # - later check why we are not installing it... 
# 'seaborn' # 'nltk' 'twine', # # mercury: https://github.com/vllm-project/vllm/issues/2747 # 'dspy-ai', # # 'torch==2.1.2+cu118', # 2.2 net supported due to vllm see: https://github.com/vllm-project/vllm/issues/2747 # 'torch==2.2.2', # 2.2 net supported due to vllm see: https://github.com/vllm-project/vllm/issues/2747 # # 'torchvision', # # 'torchaudio', # # 'trl', # 'transformers', # 'accelerate', # # 'peft', # # 'datasets==2.18.0', # 'datasets', # 'evaluate', # 'bitsandbytes', # # 'einops', # # 'vllm==0.4.0.post1', # my gold-ai-olympiad project uses 0.4.0.post1 ref: https://github.com/vllm-project/vllm/issues/2747 # ampere 'dspy-ai', # 'torch==2.1.2+cu118', # 2.2 net supported due to vllm see: https://github.com/vllm-project/vllm/issues/2747 'torch==2.1.2', # 2.2 net supported due to vllm see: https://github.com/vllm-project/vllm/issues/2747 # 'torchvision', # 'torchaudio', # 'trl', 'transformers==4.39.2', 'accelerate==0.29.2', # 'peft', # 'datasets==2.18.0', 'datasets==2.14.7', 'evaluate==0.4.1', 'bitsandbytes== 0.43.0', # 'einops', 'vllm==0.4.0.post1', # my gold-ai-olympiad project uses 0.4.0.post1 ref: https://github.com/vllm-project/vllm/issues/2747 # pip install -q -U google-generativeai &quot;tqdm&quot;, &quot;openai&quot;, &quot;manifest-ml&quot;, 'beautifulsoup4', # 'pandas', 'cvxpy', # 'sklearn',The 'sklearn' PyPI package is deprecated, use 'scikit-learn' rather than 'sklearn' for pip commands. 
# 'scikit-learn', 'snorkel', 'snorkel-metal', 'tensorboardX', 'pyyaml', 'TexSoup', ] ) </code></pre> <p>and the errors I get in cli bash:</p> <pre class="lang-bash prettyprint-override"><code>(math_evaporate) brando9@skampere1~/massive-evaporation-4-math $ tree src_math_evaporate/ src_math_evaporate/ └── math_evaporate_llm_direct.py 0 directories, 1 file (math_evaporate) brando9@skampere1~/massive-evaporation-4-math $ tree src_bm_evaporate/ src_bm_evaporate/ β”œβ”€β”€ configs.py β”œβ”€β”€ evaluate_profiler.py β”œβ”€β”€ evaluate_synthetic.py β”œβ”€β”€ evaluate_synthetic_utils.py β”œβ”€β”€ massive_evaporate_4_math.egg-info β”‚ β”œβ”€β”€ dependency_links.txt β”‚ β”œβ”€β”€ PKG-INFO β”‚ β”œβ”€β”€ requires.txt β”‚ β”œβ”€β”€ SOURCES.txt β”‚ └── top_level.txt β”œβ”€β”€ profiler.py β”œβ”€β”€ profiler_utils.py β”œβ”€β”€ prompts_math.py β”œβ”€β”€ prompts.py β”œβ”€β”€ __pycache__ β”‚ β”œβ”€β”€ configs.cpython-39.pyc β”‚ β”œβ”€β”€ prompts.cpython-39.pyc β”‚ └── utils.cpython-39.pyc β”œβ”€β”€ run_profiler_maf.py β”œβ”€β”€ run_profiler_math_evaporate.py β”œβ”€β”€ run_profiler.py β”œβ”€β”€ run.sh β”œβ”€β”€ schema_identification.py β”œβ”€β”€ snap_cluster_setup.egg-info β”‚ β”œβ”€β”€ dependency_links.txt β”‚ β”œβ”€β”€ PKG-INFO β”‚ β”œβ”€β”€ requires.txt β”‚ β”œβ”€β”€ SOURCES.txt β”‚ └── top_level.txt β”œβ”€β”€ utils.py └── weak_supervision β”œβ”€β”€ binary_deps.py β”œβ”€β”€ __init__.py β”œβ”€β”€ make_pgm.py β”œβ”€β”€ methods.py β”œβ”€β”€ pgm.py β”œβ”€β”€ run_ws.py └── ws_utils.py 4 directories, 34 files (math_evaporate) brando9@skampere1~/massive-evaporation-4-math $ pip install -e . Obtaining file:///afs/cs.stanford.edu/u/brando9/massive-evaporation-4-math Preparing metadata (setup.py) ... error error: subprocess-exited-with-error Γ— python setup.py egg_info did not run successfully.
β”‚ exit code: 1 ╰─&gt; [8 lines of output] running egg_info creating /tmp/user/22003/pip-pip-egg-info-bqrbfkt8/massive_evaporate_4_math.egg-info writing /tmp/user/22003/pip-pip-egg-info-bqrbfkt8/massive_evaporate_4_math.egg-info/PKG-INFO writing dependency_links to /tmp/user/22003/pip-pip-egg-info-bqrbfkt8/massive_evaporate_4_math.egg-info/dependency_links.txt writing requirements to /tmp/user/22003/pip-pip-egg-info-bqrbfkt8/massive_evaporate_4_math.egg-info/requires.txt writing top-level names to /tmp/user/22003/pip-pip-egg-info-bqrbfkt8/massive_evaporate_4_math.egg-info/top_level.txt writing manifest file '/tmp/user/22003/pip-pip-egg-info-bqrbfkt8/massive_evaporate_4_math.egg-info/SOURCES.txt' error: package directory 'src_math_evaporate/weak_supervision' does not exist [end of output] note: This error originates from a subprocess, and is likely not a problem with pip. error: metadata-generation-failed Γ— Encountered error while generating package metadata. ╰─&gt; See above for output. note: This is an issue with the package mentioned above, not pip. hint: See above for details. </code></pre> <p>Everything looks right to me. Why is this error happening?</p> <p>I tried changing:</p> <pre class="lang-py prettyprint-override"><code> package_dir={ '': 'src_math_evaporate', 'bm_evaporate': 'src_bm_evaporate', }, </code></pre> <p>to</p> <pre class="lang-py prettyprint-override"><code> package_dir={ 'math_evaporate': 'src_math_evaporate', 'bm_evaporate': 'src_bm_evaporate', }, </code></pre> <p>That doesn't work. I also tried mapping both as root:</p> <pre class="lang-py prettyprint-override"><code> package_dir={ '': 'src_math_evaporate', '': 'src_bm_evaporate', }, </code></pre> <p>I don't know what else to try. What do I do?</p>
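One thing I noticed while writing this up: the both-as-root variant can't even express what I want, because a Python dict literal with a duplicate key silently keeps only the last value, so only one src dir ends up mapped to the root package:

```python
# A dict literal with duplicate keys keeps only the last value, so mapping
# two different directories to the root package '' cannot work in principle.
package_dir = {
    '': 'src_math_evaporate',
    '': 'src_bm_evaporate',
}
print(package_dir)  # {'': 'src_bm_evaporate'} -- the first mapping is silently lost
```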
<python><pip><setuptools><setup.py><python-packaging>
2024-04-18 03:34:08
2
6,126
Charlie Parker
78,344,695
20,898,396
Type checking for a pipeline similar to LangChain LCEL
<p>I am trying to write a pipeline with types that will give errors if the steps are not compatible <code>step1() | step2()</code>.</p> <pre><code>from typing import Any, Callable, Generic, TypeVar I = TypeVar('I') O = TypeVar('O') R = TypeVar('R') class Runnable(Generic[I, O]): def __init__(self, func: Callable[[I], O]) -&gt; None: self.func = func # not sure how to make it work with multiple arguments # def __or__(self, other: Callable[[O], R]): # def chained_func(*args: I, **kwargs): # output = self.func(*args, **kwargs) # return other(output) # return Runnable(chained_func) def __or__(self, other: Callable[[O], R]): def chained_func(input: I): output = self.func(input) return other(output) # has type Unknown instead of I, hence why I specify [I, R] explicitly return Runnable[I, R](chained_func) def __call__(self, *args: Any, **kwargs: Any): return self.func(*args, **kwargs) def add_five(x: int): return x + 5 def parse(x:str): return x.strip() add_five = Runnable(add_five) # Runnable[int, int] parse = Runnable(parse) # Runnable[str, str] chain = add_five | parse chain(3) </code></pre> <p>(code based on <a href="https://www.youtube.com/watch?v=O0dUOtOIrfs" rel="nofollow noreferrer">this</a> video)</p> <p><code>add_five: Runnable[int, int]</code>, <code>parse: Runnable[str, str]</code> and <code>chain: Runnable[int, str]</code>, but there is no (type hint) error to indicate that the output of <code>add_five</code> is not compatible with the input of <code>parse</code>. Can this be achieved?</p>
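For what it's worth, the mismatch does blow up at runtime, which is exactly what I'd like a type checker to catch ahead of time. Here is a trimmed-down sketch of the same idea (my own simplification of the code above, with `__or__` accepting a `Runnable` directly):

```python
from typing import Callable, Generic, TypeVar

I = TypeVar('I')
O = TypeVar('O')
R = TypeVar('R')

class Runnable(Generic[I, O]):
    def __init__(self, func: Callable[[I], O]) -> None:
        self.func = func

    def __or__(self, other: 'Runnable[O, R]') -> 'Runnable[I, R]':
        # compose: feed this step's output into the next step
        return Runnable(lambda x: other(self.func(x)))

    def __call__(self, value: I) -> O:
        return self.func(value)

add_five = Runnable(lambda x: x + 5)   # conceptually Runnable[int, int]
parse = Runnable(lambda s: s.strip())  # conceptually Runnable[str, str]

chain = add_five | parse
try:
    chain(3)  # (3 + 5).strip() -- an int has no .strip()
except AttributeError as e:
    print('runtime failure:', e)
```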
<python><langchain>
2024-04-18 03:22:04
0
927
BPDev
78,344,611
12,314,521
How to generate a random number depending on the length of a given string in Python
<p>I want to randomly pick an integer from a range where the probability correlates with the number of tokens in a string.</p> <p>For example:</p> <p>Given a max possible number of tokens = 64, the random integer's range is from 0 to 7.</p> <p>Given a string that has 46 tokens.</p> <p>I want to use the function <code>random.choices([0,1,2,3,4,5,6,7], weights=[..], k=1)</code> and set the <code>weights</code> to something like: <code>[0.1, 0.15, 0.2, 0.25, 0.3, 0.25, 0.2]</code></p> <p>The <code>weights</code> above are just an example; what I mean is that the weights need to be correlated with <code>len(tokens)</code> and <code>max_len_token=64</code>. Here 46 compared to 64 should give more probability to 4 and 5, but still give the other values a chance, decreased by some reasonable ratio.</p>
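To make "some reasonable ratio" concrete, here is one sketch of what I imagine: weights that peak at the choice proportional to the token count. The Gaussian shape and the `spread` value are just my guesses at a reasonable decay, not a requirement:

```python
import math
import random

def length_weights(n_tokens, max_tokens=64, n_choices=8, spread=1.5):
    """Weights that peak near the choice proportional to n_tokens / max_tokens."""
    center = (n_tokens / max_tokens) * (n_choices - 1)  # 46 tokens -> ~5.03
    raw = [math.exp(-((k - center) ** 2) / (2 * spread ** 2)) for k in range(n_choices)]
    total = sum(raw)
    return [w / total for w in raw]

weights = length_weights(46)
value = random.choices(range(8), weights=weights, k=1)[0]
```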
<python>
2024-04-18 02:55:11
2
351
jupyter
78,344,486
2,740,376
Permission denied errors using docker image glue_libs_4.0.0_image_01 for AWS Glue
<p>I'm trying to build a pipeline that uses glue_libs_4.0.0_image_01. A step in the pipeline runs the docker container as follows:</p> <pre><code> docker run \ --mount=type=bind,source=./test,target=/home/glue_user/workspace/test \ --mount=type=bind,source=./libs,target=/home/glue_user/workspace/libs \ -w /home/glue_user/workspace \ -e DISABLE_SSL=true \ -e &quot;PYTHONPATH=$PYTHONPATH:/home/glue_user/workspace/deps&quot; \ --rm -p 4040:4040 \ -p 18080:18080 \ --name glue_unit_tests docker-default-virtual.${{ vars.ARTIFACTORY_HOST }}/amazon/aws-glue-libs:glue_libs_4.0.0_image_01 \ -c &quot;mkdir -p deps/ &amp;&amp; pip install -r test/requirements.txt -r libs/requirements.txt -t deps/; cd test &amp;&amp; pytest || exit 1&quot; </code></pre> <p>I am getting multiple permission denied errors when trying to create the <code>deps/</code> directory inside <code>/home/glue_user/workspace</code>, and <code>pip</code> also throws permission denied errors, along with <code>pytest</code> failing to write cache files inside the mounted paths:</p> <pre><code>../deps/_pytest/cacheprovider.py:445 /home/glue_user/workspace/deps/_pytest/cacheprovider.py:445: PytestCacheWarning: could not create cache path /home/glue_user/workspace/test/.pytest_cache/v/cache/nodeids: [Errno 13] Permission denied: '/home/glue_user/workspace/test/.pytest_cache' config.cache.set(&quot;cache/nodeids&quot;, sorted(self.cached_nodeids)) ../deps/_pytest/stepwise.py:56 /home/glue_user/workspace/deps/_pytest/stepwise.py:56: PytestCacheWarning: could not create cache path /home/glue_user/workspace/test/.pytest_cache/v/cache/stepwise: [Errno 13] Permission denied: '/home/glue_user/workspace/test/.pytest_cache' session.config.cache.set(STEPWISE_CACHE_DIR, []) </code></pre>
<python><amazon-web-services><docker><aws-glue>
2024-04-18 02:02:29
1
319
Iulian
78,344,470
292,502
How to have a programmatic conversation with an agent created by Agent Builder
<p>I created an agent with the No Code tools offered by the Agent Builder GUI: <a href="https://vertexaiconversation.cloud.google.com/" rel="nofollow noreferrer">https://vertexaiconversation.cloud.google.com/</a> I created a playbook and added a few Data store Tools for the agent to use for RAG. I'd like to call this agent programmatically to integrate it into mobile apps or web pages. There's a lot of code related to the classic Dialogflow agents, but the Agent Builder is quite new and uses Gemini 1.0 Pro under the hood.</p> <p>I've seen this code <a href="https://stackoverflow.com/a/78229704/292502">https://stackoverflow.com/a/78229704/292502</a> however the question was about Dialogflow ES, while the Agent Builder agent is rather a Dialogflow CX agent under the hood (and is listed in the Dialogflow CX dashboard). The Python package is promising, but I haven't found how I can have a conversation with the agent's playbook after I get hold of one.</p> <p>Or maybe I'm just looking in the wrong place. I was also browsing <a href="https://github.com/GoogleCloudPlatform/python-docs-samples/tree/main/dialogflow-cx" rel="nofollow noreferrer">https://github.com/GoogleCloudPlatform/python-docs-samples/tree/main/dialogflow-cx</a> but webhooks, intents and fulfillments are for the &quot;classic&quot; agents. I tried to go over <a href="https://github.com/googleapis/google-cloud-python/blob/main/packages/google-cloud-dialogflow-cx/samples/generated_samples/" rel="nofollow noreferrer">https://github.com/googleapis/google-cloud-python/blob/main/packages/google-cloud-dialogflow-cx/samples/generated_samples/</a> but haven't found one that would help me yet.</p>
<python><google-cloud-platform><google-cloud-vertex-ai><dialogflow-cx><rag>
2024-04-18 01:56:24
2
10,879
Csaba Toth
78,344,353
2,488,207
Create new columns and assign their values from an existing column's values
<p>I have a dataset downloaded from Kaggle for my project, and I would like to create new columns and assign their values based on an existing column.</p> <p>My actual dataset is complicated, so I will give a similar but simpler dataset for easy discussion.</p> <p><strong>Input:</strong></p> <pre><code>Month | Fruit | Weight ------- -------- -------- 1-2020 | Orange | 0.2 1-2020 | Kiwi | 0.9 2-2020 | Orange | 2.1 2-2020 | Kiwi | 1.4 ...... | ..... | ... </code></pre> <p>To be able to create the required line chart, I need to change this dataset's structure, making <code>Orange</code> and <code>Kiwi</code> new columns holding the <code>Weight</code> values, so that <code>Month</code> is not repeated.</p> <p><strong>Desired output:</strong></p> <pre><code>Month | Orange | Kiwi ------- -------- ------ 1-2020 | 0.2 | 0.9 2-2020 | 2.1 | 1.4 </code></pre>
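To pin down exactly what I mean by the desired output, here it is spelled out in plain Python on the toy rows (this is just to specify the result I want, not how I want to compute it on the real dataframe):

```python
rows = [
    ('1-2020', 'Orange', 0.2),
    ('1-2020', 'Kiwi', 0.9),
    ('2-2020', 'Orange', 2.1),
    ('2-2020', 'Kiwi', 1.4),
]

# group Weight values under their Month, with one key per Fruit
wide = {}
for month, fruit, weight in rows:
    wide.setdefault(month, {})[fruit] = weight

print(wide)
# {'1-2020': {'Orange': 0.2, 'Kiwi': 0.9}, '2-2020': {'Orange': 2.1, 'Kiwi': 1.4}}
```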
<python><dataframe>
2024-04-18 01:04:27
1
868
vyclarks
78,344,349
16,717,009
Can a list comprehension that builds a list of lists referring to itself be done in one line?
<p>I have a number of list comprehensions that build a variety of lists of lists. To keep this simple, consider:</p> <pre><code>foo = [] for i in range(1,3): # dummy loop for the example if len(foo) == 0: foo = [[x] for x in (0, 1, -1)] # can I avoid this step? else: foo = [f + [x] for f in foo for x in (0, 1, -1)] print(foo) </code></pre> <p>produces:</p> <pre><code>[[0, 0], [0, 1], [0, -1], [1, 0], [1, 1], [1, -1], [-1, 0], [-1, 1], [-1, -1]] </code></pre> <p>I know there are other ways using itertools to produce this particular output; I'm simplifying here. The key is that the comprehension has to build on itself, therefore the <code>for f in foo</code>.</p> <p>My specific question is: is there a way to avoid the <code>if, else</code> code and just do this in one line? If I just do <code>foo = [f + [x] for f in foo for x in (0, 1, -1)]</code> without the case for <code>len(foo) == 0</code> I get an empty <code>foo</code>.</p>
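For completeness, this is the itertools route I alluded to and am deliberately not using (my real comprehensions do more than accumulate tuples):

```python
import itertools

# product with repeat=2 enumerates the same pairs my loop builds
foo = [list(t) for t in itertools.product((0, 1, -1), repeat=2)]
print(foo)
# [[0, 0], [0, 1], [0, -1], [1, 0], [1, 1], [1, -1], [-1, 0], [-1, 1], [-1, -1]]
```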
<python>
2024-04-18 01:02:25
1
343
MikeP
78,344,145
8,876,025
buildx build --platform linux/amd64 significantly increases image size with poetry
<p>Packages installed by poetry significantly increase the image size when it's built for amd64.</p> <p>I'm building a docker image on my host machine (macOS, M2 Pro), which I want to deploy to an EC2 instance. A normal build produces a 2GB image, which is good. But it results in a system compatibility issue when deployed on EC2: <code>WARNING: The requested image's platform (linux/arm64/v8) does not match the detected host platform (linux/amd64/v3) and no specific platform was requested</code>. So I am trying a build with the <code>buildx</code> command. However, it results in a whopping 13GB image, even though all I changed was the build command. I'd like to know why, and how to reduce the size.</p> <p>Here is the Dockerfile (<strong>Edited</strong>: tried multi-stage build based on <a href="https://stackoverflow.com/a/78344174/8876025">this answer</a>):</p> <pre><code>FROM python:3.11-slim as builder # Set environment variables to make Python and Poetry play nice ENV POETRY_VERSION=1.7.1 \ PYTHONUNBUFFERED=1 \ PYTHONDONTWRITEBYTECODE=1 \ # for -slim version (it breaks if you don't comment out &amp;&amp; apt-get clean) RUN apt-get update &amp;&amp; apt-get install -y \ gfortran \ libopenblas-dev \ liblapack-dev \ &amp;&amp; apt-get clean \ &amp;&amp; rm -rf /var/lib/apt/lists/* ## Install poetry RUN pip install &quot;poetry==$POETRY_VERSION&quot; ## copy project requirement files here to ensure they will be cached. WORKDIR /app COPY pyproject.toml ./ RUN poetry config virtualenvs.create false \ &amp;&amp; poetry install --no-interaction --no-dev --no-ansi --verbose \ &amp;&amp; poetry cache clear pypi --all FROM python:3.11-slim # Copy all of the python files built in the Builder container into this smaller container.
COPY --from=builder /app /app COPY --from=builder /usr/local/lib/python3.11 /usr/local/lib/python3.11 EXPOSE 7070 CMD [&quot;poetry&quot;, &quot;run&quot;, &quot;flask&quot;, &quot;run&quot;, &quot;--host=0.0.0.0&quot;] </code></pre> <p>And this command will build a 2GB image:</p> <pre><code>docker build -f ./docker/Dockerfile \ -t malicious-url-prediction-img:v1 . </code></pre> <p>And this will make a 13GB image:</p> <pre><code>docker buildx build --platform linux/amd64 -f ./docker/Dockerfile \ -t malicious-url-prediction-img:v1-amd64 . </code></pre> <p>The image size stays small if I remove the RUN command <code>poetry config virtualenvs.create...</code>, even if I build the image for amd64. So I assume that poetry is causing this problem. However, it is still weird to have such a big difference in size from just changing the target platform.</p> <p><strong>Edited</strong>: Based on two answers from <a href="https://stackoverflow.com/a/78349375/8876025">anthony sottile</a> and <a href="https://stackoverflow.com/a/78349652/8876025">Ghorban M. Tavakoly</a>, the cause might be torch.
I changed my pyproject.toml file like this:</p> <pre><code>[tool.poetry] name = &quot;malicious-url&quot; version = &quot;0.1.0&quot; description = &quot;&quot; authors = [&quot;Makoto1021 &lt;makoto.miyazaki1021@gmail.com&gt;&quot;] readme = &quot;README.md&quot; [tool.poetry.dependencies] python = &quot;^3.11&quot; numpy = &quot;^1.26.4&quot; tld = &quot;^0.13&quot; fuzzywuzzy = &quot;^0.18.0&quot; scikit-learn = &quot;^1.4.1.post1&quot; pandas = &quot;^2.2.1&quot; mlflow = {extras = [&quot;pipelines&quot;], version = &quot;^2.11.3&quot;} xgboost = &quot;^2.0.3&quot; python-dotenv = &quot;^1.0.1&quot; imblearn = &quot;^0.0&quot; flask = &quot;^3.0.3&quot; torch = {url = &quot;https://download.pytorch.org/whl/cpu-cxx11-abi/torch-2.2.2%2Bcpu.cxx11.abi-cp311-cp311-linux_x86_64.whl&quot;} googlesearch-python = &quot;^1.2.3&quot; whois = &quot;^1.20240129.2&quot; nltk = &quot;^3.8.1&quot; [tool.poetry.group.dev.dependencies] ipykernel = &quot;^6.29.3&quot; tldextract = &quot;^5.1.2&quot; [build-system] requires = [&quot;poetry-core&quot;] build-backend = &quot;poetry.core.masonry.api&quot; </code></pre> <p>This resulted in an image of 10GB. Happy with the progress but still quite big for my EC2 instance. Here's the result of <code>/bin/bash</code> and <code>du -h -d 1</code>.</p> <pre><code>4.0K ./mnt 1.9G ./usr 4.0K ./opt 4.0K ./boot 0 ./sys 6.8M ./var 4.0K ./media 4.0K ./tmp 1.4M ./etc 4.0K ./home du: cannot access './proc/12/task/12/fd/7': No such file or directory du: cannot access './proc/12/task/12/fdinfo/7': No such file or directory du: cannot access './proc/12/fd/8': No such file or directory du: cannot access './proc/12/fdinfo/8': No such file or directory 0 ./proc 8.0K ./run 24K ./root 0 ./dev 4.0K ./srv 216K ./utils 8.1G ./app 10G . 
</code></pre> <p>FYI, this is how I run the container.</p> <pre><code>docker run --rm -p 7070:5000 -v $(pwd)/logs:/app/logs malicious-url-prediction-img:v1-amd64 </code></pre> <p>EDITED 1:</p> <ul> <li>changed Dockerfile to minimal example</li> <li>added myproject.toml to reproduce the build</li> <li>added my investigation on poetry</li> </ul> <p>EDITED 2:</p> <ul> <li>updated Dockerfile with multi-stage build</li> <li>updated .toml file and the result</li> </ul>
<python><docker><python-poetry>
2024-04-17 23:30:34
4
2,033
Makoto Miyazaki
78,344,061
4,766
How do I install avdec_h264 for use with GStreamer in Python on macOS?
<p>I answered my own question <a href="https://stackoverflow.com/q/78281985/4766">How do install gst-python on macOS to work with the recommended GStreamer installers?</a> by using <a href="https://stackoverflow.com/a/78295888/4766">miniconda</a>.</p> <p>Then I moved on to creating a GStreamer pipeline. But I get an error making an avdec_h264 decoder:</p> <pre><code>$ GST_DEBUG=3 python3 Python 3.12.2 | packaged by conda-forge | (main, Feb 16 2024, 20:54:21) [Clang 16.0.6 ] on darwin Type &quot;help&quot;, &quot;copyright&quot;, &quot;credits&quot; or &quot;license&quot; for more information. &gt;&gt;&gt; import gi &gt;&gt;&gt; import sys &gt;&gt;&gt; import threading &gt;&gt;&gt; gi.require_version(&quot;Gtk&quot;, &quot;3.0&quot;) &gt;&gt;&gt; gi.require_version(&quot;Gdk&quot;, &quot;3.0&quot;) &gt;&gt;&gt; gi.require_version('Gst', '1.0') &gt;&gt;&gt; gi.require_version('GstVideo', '1.0') &gt;&gt;&gt; from gi.repository import GObject, Gst, GstVideo, Gtk, Gdk, GLib, Gio &gt;&gt;&gt; Gst.init([]) [] &gt;&gt;&gt; decoder = Gst.ElementFactory.make('avdec_h264', 'decoder') 0:00:08.230811000 98675 0x600001a94630 WARN GST_ELEMENT_FACTORY gstelementfactory.c:765:gst_element_factory_make_valist: no such element factory &quot;avdec_h264&quot;! &gt;&gt;&gt; decoder == None True </code></pre> <p>I successfully installed the following:</p> <pre><code>conda install gst-plugins-good conda install libavif conda install ffmpeg </code></pre> <p>...but afterward get the same warning and <code>Gst.ElementFactory.make()</code> returns <code>None</code>.</p> <p>I also tried:</p> <pre><code>conda install decodebin3 conda install gst-ffmpeg conda install gst-libav </code></pre> <p>...but got &quot;…packages are not available from current channels&quot;.</p> <p>How do I install avdec_h264 so the call to <code>Gst.ElementFactory.make('avdec_h264', 'decoder')</code> works?</p>
<python><macos><conda><gstreamer><h.264>
2024-04-17 22:53:45
1
150,682
Daryl Spitzer
78,344,022
219,153
Why is this seemingly redundant Python import statement necessary?
<p>This snippet of Python 3.12 code:</p> <pre><code>import paho import paho.mqtt.client # line 2 client = paho.mqtt.client.Client(paho.mqtt.enums.CallbackAPIVersion(2)) </code></pre> <p>fails when line #2 is commented out. <code>paho</code> module is imported by the first line. I'm using the full name <code>paho.mqtt.client.Client</code> in the last line. Why is the seemingly redundant <code>import paho.mqtt.client</code> necessary?</p> <p>Is there a way to import <code>paho.mqtt</code> module, so it can be used to shorten both <code>paho.mqtt.client.Client</code> and <code>paho.mqtt.enums.CallbackAPIVersion</code> names?</p>
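My working theory, which may be wrong (hence the question), is that importing a package does not automatically import its submodules. A stdlib package shows the same behavior as `paho`:

```python
import xml  # importing only the package...

print(hasattr(xml, 'dom'))  # False: xml.dom has not been imported yet

import xml.dom  # ...the explicit import binds the submodule onto the package

print(hasattr(xml, 'dom'))  # True
```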
<python><python-3.x><python-import>
2024-04-17 22:38:48
2
8,585
Paul Jurczak
78,344,011
14,083,003
ValueError: For a sparse output, all columns should be a numeric or convertible to a numeric
<p>I am pre-processing my data before applying sklearn models, but I am having trouble identifying why an error keeps happening. When I run the code for each individual column index in <code>ColumnTransformer</code>, it works well for each variable. However, the error happens when I apply it to multiple columns.</p> <ol> <li>What is the problem when I run it all together?</li> <li>How can you identify which column causes the error using code? (I checked it by changing the argument manually)</li> <li>What is the remedy for the error when a single column causes this error?</li> </ol> <p>Data and example code:</p> <pre><code>import numpy as np import pandas as pd from sklearn.compose import ColumnTransformer from sklearn.preprocessing import OneHotEncoder # Number of samples num_samples = 1000 # Generating random data data = { 'Feature_1': np.random.rand(num_samples), 'Feature_2': np.random.rand(num_samples), 'Feature_3': np.random.choice(['A', 'B', 'C'], num_samples), 'Feature_4': np.random.choice(['X', 'Y', 'Z'], num_samples), 'Feature_5': np.random.choice(['M', 'N', 'O'], num_samples), # Non-numeric values intentionally introduced 'Feature_6': np.random.choice(['P', 'Q', 'R'], num_samples), # Non-numeric values intentionally introduced 'Feature_7': np.random.choice(['D', 'E', 'F'], num_samples), 'Feature_8': np.random.choice(['G', 'H', 'I'], num_samples), 'Feature_9': np.random.choice(['S', 'T', 'U'], num_samples), 'Feature_10': np.random.rand(num_samples), 'Feature_11': np.random.rand(num_samples), 'Feature_12': np.random.choice(['V', 'W', 'X'], num_samples), 'Feature_13': np.random.choice(['Y', 'Z'], num_samples), 'Feature_14': np.random.choice(['P', 'Q', 'R'], num_samples), 'Feature_15': np.random.choice(['A', 'B', 'C', 'D'], num_samples), 'Target': np.random.choice([0, 1], num_samples) } categorical_indices = [3, 4, 5, 6, 7, 8, 9, 12, 13, 14, 15] d = pd.DataFrame(data) X = d.values ct = ColumnTransformer( transformers=[('encoder', OneHotEncoder(),
categorical_indices)], remainder='passthrough' ) X_1 = np.array(ct.fit_transform(X)) </code></pre> <p>The error:</p> <pre><code>Traceback (most recent call last): File &quot;/Users/jaeyoungkim/opt/anaconda3/lib/python3.9/site-packages/sklearn/compose/_column_transformer.py&quot;, line 588, in _hstack converted_Xs = [check_array(X, File &quot;/Users/jaeyoungkim/opt/anaconda3/lib/python3.9/site-packages/sklearn/compose/_column_transformer.py&quot;, line 588, in &lt;listcomp&gt; converted_Xs = [check_array(X, File &quot;/Users/jaeyoungkim/opt/anaconda3/lib/python3.9/site-packages/sklearn/utils/validation.py&quot;, line 63, in inner_f return f(*args, **kwargs) File &quot;/Users/jaeyoungkim/opt/anaconda3/lib/python3.9/site-packages/sklearn/utils/validation.py&quot;, line 673, in check_array array = np.asarray(array, order=order, dtype=dtype) File &quot;/Users/jaeyoungkim/opt/anaconda3/lib/python3.9/site-packages/numpy/core/_asarray.py&quot;, line 102, in asarray return array(a, dtype, copy=False, order=order) ValueError: could not convert string to float: 'C' The above exception was the direct cause of the following exception: Traceback (most recent call last): File &quot;/var/folders/25/5mycjlz1013629wcstsb_mwh0000gn/T/ipykernel_24019/2314645552.py&quot;, line 10, in &lt;module&gt; X_1 = np.array(ct.fit_transform(X)) File &quot;/Users/jaeyoungkim/opt/anaconda3/lib/python3.9/site-packages/sklearn/compose/_column_transformer.py&quot;, line 529, in fit_transform return self._hstack(list(Xs)) File &quot;/Users/jaeyoungkim/opt/anaconda3/lib/python3.9/site-packages/sklearn/compose/_column_transformer.py&quot;, line 593, in _hstack raise ValueError( ValueError: For a sparse output, all columns should be a numeric or convertible to a numeric. </code></pre>
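On question 2: rather than editing the index list by hand each time, I scripted my manual check as a small helper (plain Python, my own ad-hoc approach, not a sklearn feature) that reports which passthrough columns contain values that cannot be cast to float:

```python
def non_numeric_passthrough(rows, encoded_indices):
    """Return indices of passthrough columns whose values can't be cast to float."""
    bad = []
    for j in range(len(rows[0])):
        if j in encoded_indices:
            continue  # this column goes through the OneHotEncoder instead
        for row in rows:
            try:
                float(row[j])
            except (TypeError, ValueError):
                bad.append(j)
                break
    return bad

# toy rows: column 1 is categorical but NOT listed for encoding
rows = [(0.5, 'A', 1), (0.7, 'B', 0)]
print(non_numeric_passthrough(rows, encoded_indices={2}))  # [1]
```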
<python><scikit-learn><transformation><one-hot-encoding><categorical>
2024-04-17 22:31:51
1
411
J.K.
78,343,931
1,972,982
Creating Scheduled Posts with an Image using Facebook Graph API
<p>I'm using a Python script to try and create a post on a Facebook page. I've been able to create the post and schedule it for the future. It all goes wrong when I try to add an image.</p> <p>First question, is this a limit of the Facebook API?</p> <p>Here is my Python code (I've redacted the access token and page ID). I've included the error I'm getting beneath.</p> <p>For background, I've added the <code>page_access_token</code> element because I was originally getting an error <code>Error: (#200) Unpublished posts must be posted to a page as the page itself.</code>. This error appeared after I added the <code>temporary=True</code> during the image upload - I found this as a potential solution to a bug in the scheduled posts.</p> <p>Any suggestions appreciated.</p> <pre><code>import facebook import datetime # Your Facebook access token access_token = 'xxx_Redacted_xxx' # ID of your Facebook page page_id = '426572384077950' # Initialize Facebook Graph API with your access token user_graph = facebook.GraphAPI(access_token) page_info = user_graph.get_object(f'/{page_id}?fields=access_token') page_access_token = page_info.get(&quot;access_token&quot;) print(page_info) print(&quot;Page: &quot;, page_access_token) graph = facebook.GraphAPI(page_access_token) def schedule_post(page_id, message, days_from_now, scheduled_time=None, image_path=None): try: # Default scheduled time: 17:00 if not provided if scheduled_time is None: scheduled_time = datetime.time(17, 0) # Default to 17:00 # Calculate scheduled datetime scheduled_datetime = datetime.datetime.now() + datetime.timedelta(days=days_from_now) scheduled_datetime = scheduled_datetime.replace(hour=scheduled_time.hour, minute=scheduled_time.minute, second=0, microsecond=0) # Default image path: None (no image) attached_media = [] if image_path: # Upload the image (check for errors) try: image = open(image_path, 'rb') image_id = graph.put_photo(image, album_path=f'{page_id}/photos', published=False, 
temporary=True)['id'] print(image_id) except facebook.GraphAPIError as e: print(f&quot;Error uploading image: {e}&quot;) # Handle image upload error (optional: log the error or continue without image) # If upload successful, append to attached_media attached_media.append({'media_fbid': image_id}) # Format scheduled time as required by Facebook API scheduled_time_str = scheduled_datetime.strftime('%Y-%m-%dT%H:%M:%S') # Debugging: Print attached_media before scheduling the post print(&quot;Attached Media:&quot;, attached_media) # Construct parameters for the put_object method parameters = { 'message': message, 'published': False, 'scheduled_publish_time': scheduled_time_str } # Add attached_media to parameters if it's not None if attached_media is not None: parameters['attached_media'] = attached_media print(&quot;parameters:&quot;, parameters) # Schedule the post graph.put_object(page_id, &quot;feed&quot;, **parameters) print(f&quot;Post scheduled for {scheduled_time_str}: {message}&quot;) return True except facebook.GraphAPIError as e: print(f&quot;Error: {e}&quot;) return False # Example usage if __name__ == &quot;__main__&quot;: # Message for the post message = &quot;This is a scheduled post for 3 days from now at 17:00!&quot; # Number of days from now days_from_now = 3 # Scheduled time (optional) scheduled_time = datetime.time(10, 30) # Change this to the desired time or None for default (17:00) # Image path (set to None for no image) image_path = 'img/Academic.jpg' # Change this to the path of your image or None for no image # image_path = None # Schedule the post success = schedule_post(page_id, message, days_from_now, scheduled_time, image_path) if not success: print(&quot;Failed to schedule the post.&quot;) </code></pre> <p>Output:</p> <pre><code>{'access_token': 'xxx_Redacted_xxx', 'id': '426572384077950'} Page: xxx_Redacted_xxx 860430092794306 Attached Media: [{'media_fbid': '860430092794306'}] parameters: {'message': 'This is a scheduled post for 3 days 
from now at 17:00!', 'published': False, 'scheduled_publish_time': '2024-04-20T10:30:00', 'attached_media': [{'media_fbid': '860430092794306'}]} Error: (#100) param attached_media must be an array. Failed to schedule the post. </code></pre>
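A likely culprit, for what it's worth: when a plain Python list is passed through `put_object`'s keyword arguments, the SDK url-encodes its `repr()` rather than proper JSON, so the Graph API sees a string and answers with "(#100) param attached_media must be an array". JSON-encoding the list yourself is a commonly reported fix. A sketch below — the media ID is copied from the question's output, and the network call is left commented out, so this is unverified against the live API:

```python
import json

image_id = "860430092794306"  # media ID from the earlier put_photo upload
attached_media = [{"media_fbid": image_id}]

parameters = {
    "message": "This is a scheduled post for 3 days from now at 17:00!",
    "published": False,
    "scheduled_publish_time": "2024-04-20T10:30:00",
    # JSON-encode the list: sent as a plain Python list, the SDK transmits
    # its repr(), which the Graph API rejects as "must be an array".
    "attached_media": json.dumps(attached_media),
}

# graph.put_object(page_id, "feed", **parameters)  # real call, not run here
print(parameters["attached_media"])
```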
<python><facebook-graph-api>
2024-04-17 21:58:40
0
333
Jamie
78,343,897
8,021,207
An async/parallel approach to working a (potentially) growing task queue
<p>I have a list of items that need to be processed and I want to be able to process them in parallel for efficiency. But during the processing of one item I may discover more items that need to be added to the list to be processed.</p> <p>I've looked at the <a href="https://docs.python.org/3/library/multiprocessing.html" rel="nofollow noreferrer">multiprocessing</a> and <a href="https://docs.python.org/3/library/concurrent.futures.html#concurrent.futures.ProcessPoolExecutor" rel="nofollow noreferrer">concurrent</a> libraries but I couldn't find a feature of a queue of this sort that can be modified during runtime, or after it's been passed to the pool. Is there a solution that meets my desires?</p> <p>Here's some code that demonstrates what I'm wanting.</p> <pre class="lang-py prettyprint-override"><code>i = 0 jobs_to_be_processed = [f'job{(i:=i+1)}' for _ in range(5)] def process_job(job): if int(job[-1]) % 3 == 0: jobs_to_be_processed.append(f'new job{(i:=i+1)}') # do process job ... pass # Add jobs to a pool that allows `jobs_to_be_processed` # to have jobs added while processing pool = AsyncJobPool(jobs_to_be_processed) pool.start() pool.join() </code></pre>
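`concurrent.futures` does allow submitting new jobs from inside a running worker; the missing piece is draining a future list that may grow mid-flight. A thread-based sketch (the job names loosely mirror the question's; with processes you would share work via a `multiprocessing.Queue` or `Manager` list instead of a plain list plus lock):

```python
import concurrent.futures as cf
import itertools
import threading

counter = itertools.count(6)   # job1..job5 pre-exist; discovered jobs continue the numbering
processed = []
lock = threading.Lock()

def process_job(job, executor, futures):
    with lock:
        processed.append(job)
    # Discovering new work inside a worker: submit it to the same executor.
    if int(job[3:]) % 3 == 0:
        new_job = f"job{next(counter)}"
        with lock:
            futures.append(executor.submit(process_job, new_job, executor, futures))

futures = []
with cf.ThreadPoolExecutor(max_workers=4) as executor:
    for i in range(1, 6):
        futures.append(executor.submit(process_job, f"job{i}", executor, futures))
    # Keep draining until nothing is pending; new futures may appear meanwhile,
    # but only from workers that are still running, so this terminates cleanly.
    while True:
        with lock:
            pending = [f for f in futures if not f.done()]
        if not pending:
            break
        cf.wait(pending)

print(sorted(processed))
```

Here job3 spawns job6, which in turn spawns job7, so seven jobs are processed in total from the original five.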
<python><multithreading><concurrency><multiprocessing>
2024-04-17 21:43:15
1
492
russhoppa
78,343,854
3,512,538
python object cleanup order - can I use object reference to force GC to collect another object first?
<p>I have 2 objects <code>a, b</code> (instances of <code>A,B</code> respectively) that are created inside my app:</p> <pre class="lang-py prettyprint-override"><code>class A: def __del__(self): print(&quot;A.__del__&quot;) class B: def __del__(self): print(&quot;B.__del__&quot;) a = A() b = B() </code></pre> <p>this would print out (on my machine :) ):</p> <pre class="lang-py prettyprint-override"><code>A.__del__ B.__del__ </code></pre> <p>which means, that in this case, the garbage collection order is the creation order.</p> <p>What I need, is to force the garbage collection order, so that <code>a</code> would be destroyed after <code>b</code>. I tried keeping <code>a</code> inside <code>b</code>:</p> <pre class="lang-py prettyprint-override"><code>b._guard = a </code></pre> <p>but that didn't help, and <code>a</code> was destroyed first (at least the <code>__del__</code> functions were called in the same order).</p> <p>My real world case is using <code>pybind11</code>, where a grandparent creates a parent which then creates a child, and that child must be destroyed before the grandparent. Keeping <code>self</code> of the grandparent inside the child seems to work, but in this simple case I'm asking it clearly doesn't so I don't think that is a robust solution.</p> <p>It seems that <code>py::keep_alive</code> might have helped me if my case was parnent and child, but since there is no connection between the grandparent and the child, I think its irrelevant.</p> <p>Is there a pure pythonic way (or a neat <code>pybind11</code> way) to force the grandparent to be kept alive until the cleanup of the child?</p>
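A CPython-specific observation that may explain the result: at interpreter shutdown both module globals still name the objects, so namespace teardown order — not `b._guard` — decides who dies first. If the names are dropped explicitly (or the pair lives inside a function scope), the guard reference does force the order, at least under CPython's reference counting; the language spec itself guarantees nothing about finalization order. A sketch that records deletion order in a list instead of printing:

```python
deleted = []

class A:
    def __del__(self):
        deleted.append("A")

class B:
    def __del__(self):
        deleted.append("B")

def demo():
    a = A()
    b = B()
    b._guard = a   # the only reference keeping `a` alive after `del a`
    del a          # A instance survives: b._guard still points at it
    del b          # B.__del__ runs; dropping b's __dict__ then frees a -> A.__del__

demo()
print(deleted)
```

The same idea is why a pybind11 `keep_alive`-style back-reference from child to grandparent works: the child's attribute keeps the grandparent alive until the child itself is collected, as long as shutdown-order teardown of globals is not the thing doing the collecting.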
<python><garbage-collection><pybind11>
2024-04-17 21:32:06
1
12,897
CIsForCookies
78,343,764
23,260,297
JSON text as command line argument when running python script
<p>I have read similar questions relating to passing JSON text as a command line argument with python, but none of the solutions have worked with my case.</p> <p>I am automating a python script, and the automation runs a powershell script that takes a JSON object generated from a power automate flow. Everything works great until it comes to processing the JSON in my python script.</p> <p>My goal is to convert the JSON to a dictionary so that I can use the key value pairs in my code.</p> <p>My powershell script looks like this:</p> <pre><code> Python script.py {&quot;Items&quot;:[{&quot;Name&quot;:&quot;foo&quot;,&quot;File&quot;:&quot;\\\\files\\foo\\foo.csv&quot;},{&quot;Name&quot;:&quot;bar&quot;,&quot;File&quot;:&quot;\\\\files\\bar\\bar.csv&quot;},{&quot;Name&quot;:&quot;baz&quot;,&quot;File&quot;:&quot;\\\\files\\baz\\baz.csv&quot;}]} </code></pre> <p>My JSON looks like this:</p> <pre><code>{ &quot;Items&quot;: [ { &quot;Name&quot;: &quot;foo&quot;, &quot;File&quot;: &quot;\\\\files\\foo\\foo.csv&quot; }, { &quot;Name&quot;: &quot;bar&quot;, &quot;File&quot;: &quot;\\\\files\\bar\\bar.csv&quot; }, { &quot;Name&quot;: &quot;baz&quot;, &quot;File&quot;: &quot;\\\\files\\baz\\baz.csv&quot; } ] } </code></pre> <p>I tried this solution from SO:</p> <pre><code>if len(sys.argv) &gt; 1: d = json.loads(sys.argv[1]) print(d) </code></pre> <p>but it returns this error:</p> <pre><code>Unexpected token ':' in expression or statement. + CategoryInfo : ParserError: (:) [], ParentContainsErrorRecordException + FullyQualifiedErrorId : UnexpectedToken </code></pre> <p>I am unsure how to solve this problem, any suggestions would help!</p>
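Worth noting: the error is a PowerShell ParserError, so the JSON never reaches Python at all — PowerShell tries to parse the bare braces itself. Wrapping the literal in single quotes, `python script.py '{"Items":[...]}'`, should deliver it as one argument, after which the `json.loads(sys.argv[1])` code is fine. A sketch that simulates this by handing the JSON to a child interpreter as a single argv entry via `subprocess` (which bypasses shell parsing entirely):

```python
import json
import subprocess
import sys

payload = {"Items": [{"Name": "foo", "File": "\\\\files\\foo\\foo.csv"}]}
arg = json.dumps(payload)

# The child stands in for script.py: parse sys.argv[1] as JSON, echo a value.
child = (
    "import sys, json; "
    "d = json.loads(sys.argv[1]); "
    "print(d['Items'][0]['Name'])"
)
out = subprocess.run(
    [sys.executable, "-c", child, arg],  # arg arrives intact as one argv entry
    capture_output=True, text=True, check=True,
).stdout.strip()
print(out)
```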
<python><json><powershell>
2024-04-17 21:05:59
2
2,185
iBeMeltin
78,343,713
9,092,669
scrape rotowire MLB player news and form into a table using python
<p>I would like to scrape <a href="https://www.rotowire.com/baseball/news.php" rel="nofollow noreferrer">https://www.rotowire.com/baseball/news.php</a> which contains news about MLB players and save the data in a table format like so:</p> <div class="s-table-container"><table class="s-table"> <thead> <tr> <th>date</th> <th>player</th> <th>headline</th> <th>news</th> <th></th> </tr> </thead> <tbody> <tr> <td>4/17</td> <td>Abner Uribe</td> <td>Picks up second win</td> <td>Uribe (2-1) earned the win Wednesday against the Padres after he allowed a hit and no walks in a scoreless eighth inning. He had one strikeout.</td> <td></td> </tr> <tr> <td>4/17</td> <td>Richie Palacios</td> <td>Gets day off vs. lefty</td> <td>Palacios is out of the lineup for Wednesday's game against the Angels.</td> <td></td> </tr> <tr> <td></td> <td></td> <td></td> <td></td> <td></td> </tr> </tbody> </table></div> <p>I'm having difficulty understanding how to isolate each piece of content into its own row of a dataframe. Looking for any help to get this going. Ideally I'd scrape every 5 minutes, and keep the table ever growing.</p>
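A sketch of the parsing half. The CSS class names below are placeholders — inspect the live page with browser dev tools and substitute the real ones; the inline sample HTML exists only so the function can be demonstrated offline:

```python
from bs4 import BeautifulSoup

SAMPLE = """
<div class="news-update">
  <div class="news-update__timestamp">4/17</div>
  <a class="news-update__player-link">Abner Uribe</a>
  <div class="news-update__headline">Picks up second win</div>
  <div class="news-update__news">Uribe (2-1) earned the win Wednesday.</div>
</div>
"""

def parse_news(html):
    # NOTE: the selectors here are guesses for illustration -- check the
    # real markup on the page and adjust the class names accordingly.
    soup = BeautifulSoup(html, "html.parser")
    rows = []
    for item in soup.select("div.news-update"):
        rows.append({
            "date": item.select_one(".news-update__timestamp").get_text(strip=True),
            "player": item.select_one(".news-update__player-link").get_text(strip=True),
            "headline": item.select_one(".news-update__headline").get_text(strip=True),
            "news": item.select_one(".news-update__news").get_text(strip=True),
        })
    return rows

rows = parse_news(SAMPLE)
print(rows[0]["player"])
```

`pd.DataFrame(rows)` turns the list into the desired table; for the "ever growing" part, run the fetch on a timer (e.g. a cron job or `schedule` loop every 5 minutes) and `concat`/de-duplicate against the accumulated frame.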
<python><web-scraping><beautifulsoup>
2024-04-17 20:54:49
1
395
buttermilk
78,343,506
345,660
Split concave object mask into 1 or more convex sub-sections
<p>I am working with an object detection model. It works pretty well, but the output is a mask, and I need bounding boxes. Naively, I can just use OpenCV to draw a bounding box around the contours of the mask, but if the mask is very concave that can include large non-image regions.</p> <p>I've figured out how to use a convex hull to check if the mask is concave, but I can't figure out how to split a concave mask into convex sub-regions. I'm ok if my bounding boxes overlap, I just don't want them to contain large non-masked regions.</p> <p>Is there a simple heuristic I can use here to split the bounding boxes into sub-boxes? Maybe I could use an optimizer to find a set of 1 or more rectangles that mostly fill a given contour?</p> <p>Here are some examples of my semantic masks:</p> <p>I'd like to split the big object into 2 or 3 rectangles, but keep the 2 smaller objects as-is: <a href="https://i.sstatic.net/d771v.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/d771v.png" alt="Mask 1" /></a></p> <p>2 rectangles would be good, but we could get up to 5 <a href="https://i.sstatic.net/HiMex.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/HiMex.png" alt="Mask 2" /></a></p> <p>2 rectangles would be prefect for the big object, but I'd like to keep the small one as-is: <a href="https://i.sstatic.net/pHOjC.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/pHOjC.png" alt="Mask 3" /></a></p> <p>Here's a quick script to draw the bounding boxes, which illustrates the issue:</p> <pre class="lang-py prettyprint-override"><code>import numpy as np import cv2 import requests from PIL import Image, ImageDraw from io import BytesIO response = requests.get('https://i.sstatic.net/d771v.png') image = Image.open(BytesIO(response.content)).convert('RGB') image_array = np.array(image.convert('L')) contours, _ = cv2.findContours(image_array, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE) draw = ImageDraw.Draw(image) for contour in contours: x1, y1, w, h 
= cv2.boundingRect(contour) x2 = x1 + w y2 = y1 + h draw.rectangle([(x1, y1), (x2, y2)], outline=&quot;red&quot;, width=3) image.show() </code></pre> <p>Here's an example of the bounding boxes I have now. I want to cut off the &quot;arm&quot; of the big object into its own box. <a href="https://i.sstatic.net/ZnOa3.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/ZnOa3.png" alt="enter image description here" /></a></p>
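One simple heuristic, for what it's worth: recursively bisect the tight bounding box along its longer axis until the mask fills each box above a threshold. It is not optimal — OpenCV convexity defects would give smarter cut points — but it is easy to tune and needs only NumPy. The L-shaped mask below is a synthetic stand-in for the example images:

```python
import numpy as np

def split_boxes(mask, min_fill=0.6, min_size=4):
    """Recursively split a bounding box until the mask fills at least
    min_fill of each box. Returns (y0, y1, x0, x1) half-open boxes."""
    ys, xs = np.nonzero(mask)
    if ys.size == 0:
        return []
    y0, y1 = ys.min(), ys.max() + 1
    x0, x1 = xs.min(), xs.max() + 1
    h, w = y1 - y0, x1 - x0
    fill = mask[y0:y1, x0:x1].mean()
    if fill >= min_fill or max(h, w) <= min_size:
        return [(y0, y1, x0, x1)]
    # Cut the tight bounding box in half along its longer axis and recurse.
    if h >= w:
        mid = y0 + h // 2
        top, bottom = mask.copy(), mask.copy()
        top[mid:] = 0
        bottom[:mid] = 0
        return split_boxes(top, min_fill, min_size) + split_boxes(bottom, min_fill, min_size)
    mid = x0 + w // 2
    left, right = mask.copy(), mask.copy()
    left[:, mid:] = 0
    right[:, :mid] = 0
    return split_boxes(left, min_fill, min_size) + split_boxes(right, min_fill, min_size)

# L-shaped mask: one box would be mostly empty; splitting yields tighter boxes.
m = np.zeros((20, 20), np.uint8)
m[0:20, 0:5] = 1    # vertical bar
m[15:20, 0:20] = 1  # horizontal bar
boxes = split_boxes(m)
print(len(boxes))
```

Small objects that already pass the fill check come back as a single box, matching the "keep the small ones as-is" requirement; overlap between sibling boxes is allowed, which the question says is acceptable.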
<python><opencv><geometry><object-detection><semantic-segmentation>
2024-04-17 19:56:17
0
30,431
Zach
78,343,430
595,305
Mock or stub a QEvent?
<p>Is there a way to mock a <code>QEvent</code> and pass it as a parameter?</p> <p>I have a test like this which I want to include:</p> <pre><code>def test_table_resize_event_sets_column_widths(): table = results_table_classes.ResultsTableView(None) with mock.patch.object(table, 'setColumnWidth') as mock_set: mock_event = mock.Mock() mock_size = mock.Mock() mock_event.size = mock.Mock(return_value=mock_size) mock_size.width = mock.Mock(return_value=300) table.resizeEvent(mock_event) assert mock_set.call_args_list[0].args == (0, int(300 * 0.20)) assert mock_set.call_args_list[1].args == (1, int(300 * 0.80)) </code></pre> <p>This test should lead to an implementation something like this:</p> <pre><code>def resizeEvent(self, event): width = event.size().width() self.setColumnWidth(0, int(width * 0.20)) # 20% Width Column self.setColumnWidth(1, int(width * 0.80)) # 80% Width Column </code></pre> <p>But I get</p> <blockquote> <p>E TypeError: resizeEvent(self, e: QResizeEvent): argument 1 has unexpected type 'Mock'</p> </blockquote> <p>I tried creating <code>mock_event</code> like so:</p> <pre><code>mock_event = mock.MagicMock(spec=QtGui.QResizeEvent) </code></pre> <p>... but I still get the same error. Maybe PyQt has some extra level of type-checking with this kind of method?</p> <p>If it helps at all, I have installed pytest_qt, so the <code>qtbot</code> fixture is available, if this can somehow be used to solve this.</p>
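Two routes seem workable. PyQt's sip layer does type-check the argument, so a `Mock` (even with `spec=`) is rejected; but `QResizeEvent` is a plain value class, so constructing a real one — `QtGui.QResizeEvent(QtCore.QSize(300, 200), QtCore.QSize(0, 0))` — should pass the check (untested here). Alternatively, keep the override a thin wrapper and test the extracted logic with no event object at all. A Qt-free sketch of that refactor, where the stand-in class only mimics the relevant methods:

```python
from unittest import mock

class ResultsTableView:
    """Stand-in for the QTableView subclass: the Qt override just unpacks
    the event and delegates, so the logic needs no QResizeEvent to test."""

    def resizeEvent(self, event):           # PyQt type-checks `event`...
        self._apply_column_widths(event.size().width())

    def _apply_column_widths(self, width):  # ...but this helper takes a plain int
        self.setColumnWidth(0, int(width * 0.20))
        self.setColumnWidth(1, int(width * 0.80))

    def setColumnWidth(self, col, width):   # provided by QTableView in the real class
        pass

table = ResultsTableView()
with mock.patch.object(table, "setColumnWidth") as mock_set:
    table._apply_column_widths(300)
print(mock_set.call_args_list)
```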
<python><testing><pyqt><pytest><pytest-qt>
2024-04-17 19:36:57
0
16,076
mike rodent
78,343,313
8,382,028
Adding HTML Button to Draftail Editor Action Buttons in Wagtail
<p>I am having an issue wrapping my head around adding a custom button in Wagtail to the RichTextEditor buttons, where when clicked, a user of the editor is able to add a link with an html <code>button</code>.</p> <p>The code I used for this in TinyMCE was originally provided here: <a href="https://dev.to/codeanddeploy/tinymce-add-custom-button-example-399m" rel="nofollow noreferrer">https://dev.to/codeanddeploy/tinymce-add-custom-button-example-399m</a></p> <p>But I can't figure out how to register a hook in Wagtail to implement that type of functionality. There is a similar example here, but I can't figure out how to enable the actual button to be rendered with style we predefine like I had done in TinyMCE editor.</p> <p>Here is an example that works fine for this use case, but I am hoping there is a simple one with the idea to add buttons like I had done with TinyMCE: <a href="https://erev0s.com/blog/wagtail-list-tips-and-tricks/#add-a-code-button-in-the-rich-text-editor" rel="nofollow noreferrer">https://erev0s.com/blog/wagtail-list-tips-and-tricks/#add-a-code-button-in-the-rich-text-editor</a></p> <p>Here is the code from that post:</p> <pre><code>from wagtail.core import hooks @hooks.register(&quot;register_rich_text_features&quot;) def register_code_styling(features): &quot;&quot;&quot;Add the &lt;code&gt; to the richtext editor and page.&quot;&quot;&quot; # Step 1 feature_name = &quot;code&quot; type_ = &quot;CODE&quot; tag = &quot;code&quot; # Step 2 control = { &quot;type&quot;: type_, &quot;label&quot;: &quot;&lt;/&gt;&quot;, &quot;description&quot;: &quot;Code&quot; } # Step 3 features.register_editor_plugin( &quot;draftail&quot;, feature_name, draftail_features.InlineStyleFeature(control) ) # Step 4 db_conversion = { &quot;from_database_format&quot;: {tag: InlineStyleElementHandler(type_)}, &quot;to_database_format&quot;: {&quot;style_map&quot;: {type_: {&quot;element&quot;: tag}}} } # Step 5 features.register_converter_rule(&quot;contentstate&quot;, 
feature_name, db_conversion) # Step 6. This is optional # This will register this feature with all richtext editors by default features.default_features.append(feature_name) </code></pre>
<python><django><wagtail><draftail>
2024-04-17 19:06:53
1
3,060
ViaTech
78,343,287
7,921,684
influxd TypeError: <lambda>() got an unexpected keyword argument 'key_key_password'
<p>I am using the guide and copying the lines and token from the generated code in the document but when I run I face this error which refers to this line <strong>write_api.write(bucket=bucket, org=org, record=point).</strong> influxdb 2.7.5</p> <pre><code>client = influxdb_client.InfluxDBClient(url=url, token=token, org=org) write_api = client.write_api(write_options=SYNCHRONOUS) query_api = client.query_api() delete_api = client.delete_api() buckets = client.buckets_api() print(&quot;buckets&quot;, buckets.find_buckets()) for value in range(5): point = ( Point(&quot;measurement1&quot;) .tag(&quot;id&quot;, &quot;1&quot;) .field(&quot;field1&quot;, value) ) write_api.write(bucket=bucket, org=org, record=point) time.sleep(1) # separate points by 1 second write_api.close() </code></pre> <p>the code above is given in the user-setup for python, and if you copy it and run it it will generate the following error.</p> <pre><code>Traceback (most recent call last): File &quot;/influxManager.py&quot;, line 21, in &lt;module&gt; print(&quot;buckets&quot;, buckets.find_buckets()) File &quot;/opt/anaconda3/lib/python3.9/site-packages/influxdb_client/client/bucket_api.py&quot;, line 119, in find_buckets return self._buckets_service.get_buckets(**kwargs) File &quot;/opt/anaconda3/lib/python3.9/site-packages/influxdb_client/service/buckets_service.py&quot;, line 558, in get_buckets (data) = self.get_buckets_with_http_info(**kwargs) # noqa: E501 File &quot;/opt/anaconda3/lib/python3.9/site-packages/influxdb_client/service/buckets_service.py&quot;, line 586, in get_buckets_with_http_info return self.api_client.call_api( File &quot;/opt/anaconda3/lib/python3.9/site-packages/influxdb_client/_sync/api_client.py&quot;, line 343, in call_api return self.__call_api(resource_path, method, File &quot;/opt/anaconda3/lib/python3.9/site-packages/influxdb_client/_sync/api_client.py&quot;, line 173, in __call_api response_data = self.request( File 
&quot;/opt/anaconda3/lib/python3.9/site-packages/influxdb_client/_sync/api_client.py&quot;, line 365, in request return self.rest_client.GET(url, File &quot;/opt/anaconda3/lib/python3.9/site-packages/influxdb_client/_sync/rest.py&quot;, line 268, in GET return self.request(&quot;GET&quot;, url, File &quot;/opt/anaconda3/lib/python3.9/site-packages/influxdb_client/_sync/rest.py&quot;, line 235, in request r = self.pool_manager.request(method, url, File &quot;/opt/anaconda3/lib/python3.9/site-packages/urllib3/request.py&quot;, line 66, in request return self.request_encode_url(method, url, fields=fields, File &quot;/opt/anaconda3/lib/python3.9/site-packages/urllib3/request.py&quot;, line 89, in request_encode_url return self.urlopen(method, url, **extra_kw) File &quot;/opt/anaconda3/lib/python3.9/site-packages/urllib3/poolmanager.py&quot;, line 313, in urlopen conn = self.connection_from_host(u.host, port=u.port, scheme=u.scheme) File &quot;/opt/anaconda3/lib/python3.9/site-packages/urllib3/poolmanager.py&quot;, line 229, in connection_from_host return self.connection_from_context(request_context) File &quot;/opt/anaconda3/lib/python3.9/site-packages/urllib3/poolmanager.py&quot;, line 240, in connection_from_context pool_key = pool_key_constructor(request_context) File &quot;/opt/anaconda3/lib/python3.9/site-packages/urllib3/poolmanager.py&quot;, line 105, in _default_key_normalizer return key_class(**context) TypeError: &lt;lambda&gt;() got an unexpected keyword argument 'key_key_password' </code></pre>
<python><influxdb><influxdb-2>
2024-04-17 19:00:52
1
586
Gray
78,343,089
7,938,217
Ensuring VSCode Python Autocompletion
<p>How can I ensure that when I instantiate a data structure in VSCode+Jupyter+Python, the attributes of the data structure are available for autocompletion throughout the notebook.</p> <pre><code> # %% Jupyter Cell #1 #This cell is executed before attempting autocompletes in cell 2 @dataclass class ExistingItemNames: pass class SearchableItemNames: def __init__(self, var_names:list): self.names__ = ExistingItemNames() for name in var_names: setattr(self.names__, name, name) self.names = self.names__.__dict__ si = SearchableItemNames([f&quot;v{i}&quot; for i in range(2000)]) # %% Jupyter Cell 2 #outside other data structures, accessing through a # dict or attr seem equivalent si.names['v1999'] #does not find 'v1999' key via autocomplete si.names['v10'] # does find 'v10' key via autocomplete si.names__.v1999 #does not find `v1999` attr via autocomplete si.names__.v10 #does find `v10` attr via autocomplete #inside of a data structure, the dict is required for autocompletion # but still does not find all values (si.names__.v1999) #does not find `v1999` attr via autocomplete (si.names__.v10) #does not find `v10` attr via autocomplete (si.names['v1999']) #does not find 'v1999' key via autocomplete (si.names['v10']) # does find 'v10' key via autocomplete </code></pre> <p>I understand that exhaustive enumeration of keys or attr would not be a good solution for all use cases due to the limitations of the python language server, but is there a way I can force the IDE (VSCode+Jupyter) to only do so for certain objects, within certain python envs, or certain Jupyter notebooks?</p>
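Pylance and Jedi complete from static analysis (plus a bounded sample of runtime members in notebooks), so 2000 `setattr`-created attributes will never all be visible. One workaround — an assumption about the workflow, not a guaranteed fix for every context — is to generate real source (or a `.pyi` stub) that declares every attribute explicitly, write it to a module, and import from that; `exec` stands in for the write-and-import step below:

```python
def make_stub_source(class_name, names):
    """Emit source for a class that declares each name as an explicit
    annotated attribute, so static analyzers can enumerate all of them.
    In practice, write this out as a module or .pyi stub and import it."""
    lines = [f"class {class_name}:"]
    lines += [f"    {name}: str = {name!r}" for name in names]
    return "\n".join(lines)

src = make_stub_source("ExistingItemNames", [f"v{i}" for i in range(2000)])
namespace = {}
exec(src, namespace)                 # stands in for writing + importing the module
cls = namespace["ExistingItemNames"]
print(cls.v1999)
```

This trades dynamism for static visibility; whether completion then also works inside nested expressions still depends on Pylance's member-count limits.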
<python><visual-studio-code><jupyter-notebook><pylance><python-jedi>
2024-04-17 18:21:16
1
400
Kelley Brady
78,343,052
5,722,359
Lifted ttk.Label widget can't redraw promptly?
<p>Here is my minimum representative example (MRE) on how I create <code>ttk.Button</code> widgets with an image via a multithreaded approach. However, I am experiencing an issue with a task that occurs before the multithreading task. Whenever the <code>self.label</code> widget is lifted, it can't be redrawn promptly; a grey patch appears for a short period before <code>self.label</code> appears completely. Running <code>self.update_idletasks()</code> (see line 97) can't fix this issue. Only running <code>self.update()</code> can fix this issue (you have to uncomment line 98). However, some opined that the use of <code>self.update()</code> can be <a href="https://stackoverflow.com/questions/78318063/tkinter-tcl-update-considered-harmful-is-this-msg-still-valid/78325942?noredirect=1#comment138087739_78325942">harmful</a>. Is it possible to resolve this issue without using <code>self.update()</code>? If so, how? Please can you also explain why this issue happens? Thank you.</p> <p><strong>Issue demo:</strong></p> <p><a href="https://i.sstatic.net/Ysbe0.gif" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Ysbe0.gif" alt="Issue" /></a></p> <p><strong>Desired outcome demo:</strong> <a href="https://i.sstatic.net/ylCqW.gif" rel="nofollow noreferrer"><img src="https://i.sstatic.net/ylCqW.gif" alt="Desired outcome" /></a></p> <p><strong>MRE:</strong></p> <p>Please save any <code>.jpg</code> file you have into the same directory/folder as this script and rename it to <code>testimage.jpg</code>. How this GUI works? Click the <code>Run</code> button to start the multithreading. To rerun, you have to first click <code>Reset</code> button, thereafter click the <code>Run</code> button. 
DO NOT click <code>Reset</code> when threading is ongoing and vice versa.</p> <pre><code># Python modules import tkinter as tk import tkinter.ttk as ttk import concurrent.futures as cf import queue import threading from itertools import repeat import random from time import sleep # External modules from PIL import Image, ImageTk def get_thumbnail_c(gid: str, fid: str, fpath: str, psize=(100, 100)): # print(f&quot;{threading.main_thread()=} {threading.current_thread()=}&quot;) with Image.open(fpath) as img: img.load() img.thumbnail(psize) return gid, fid, img def get_thumbnails_concurrently_with_queue( g_ids: list, f_ids: list, f_paths: list, rqueue: queue.Queue, size: tuple): futures = [] job_fn = get_thumbnail_c with cf.ThreadPoolExecutor() as vp_executor: for gid, fids, fpath in zip(g_ids, f_ids, f_paths): for gg, ff, pp in zip(repeat(gid, len(fids)), fids, repeat(fpath, len(fids))): job_args = gg, ff, pp, size futures.append(vp_executor.submit(job_fn, *job_args)) for future in cf.as_completed(futures): rqueue.put((&quot;thumbnail&quot;, future.result())) futures.remove(future) if not futures: print(f'get_thumbnails_concurrently has completed!') rqueue.put((&quot;completed&quot;, ())) class GroupNoImage(ttk.Frame): def __init__(self, master, gid, fids): super().__init__(master, style='gframe.TFrame') self.bns = {} self.imgs = {} for i, fid in enumerate(fids): self.bns[fid] = ttk.Button(self, text=f&quot;{gid}-P{i}&quot;, compound=&quot;top&quot;, style=&quot;imgbns.TButton&quot;) self.bns[fid].grid(row=0, column=i, stick=&quot;nsew&quot;) class App(ttk.PanedWindow): def __init__(self, master, **options): super().__init__(master, **options) self.master = master self.groups = {} self.rqueue = queue.Queue() self.vsf = ttk.Frame(self) self.add(self.vsf) self.label = ttk.Label( self, style=&quot;label.TLabel&quot;, width=7, anchor=&quot;c&quot;, text=&quot;ttk.Label&quot;, font=('Times', '70', '')) self.label.place( relx=0.5, rely=0.5, relwidth=.8, relheight=.8, 
anchor=&quot;center&quot;, in_=self.vsf) self.label.lower(self.vsf) def create_grpsframe(self): self.grpsframe = ttk.Frame(self.vsf, style='grpsframe.TFrame') self.grpsframe.grid(row=0, column=0, sticky=&quot;nsew&quot;) def run(self, event): self.create_grpsframe() gids = [f&quot;G{i}&quot; for i in range(50)] random.seed() fids = [] for gid in gids: f_ids = [] total = random.randint(2,10) for i in range(total): f_ids.append(f&quot;{gid}-P{i}&quot; ) fids.append(f_ids) fpaths = [&quot;testimage.jpg&quot; for i in range(len(gids))] self.create_groups_concurrently(gids, fids, fpaths) def reset(self, event): self.grpsframe.destroy() self.groups.clear() def create_groups_concurrently(self, gids, fids, fpaths): print(f&quot;\ncreate_groups_concurrently&quot;) self.label.lift(self.vsf) # self.update_idletasks() # Can't fix self.label appearance issue # self.update() # Fixed self.label appearance issue for i, (gid, f_ids) in enumerate(zip(gids, fids)): self.groups[gid] = GroupNoImage(self.grpsframe, gid, f_ids) self.groups[gid].grid(row=i, column=0, sticky=&quot;nsew&quot;) self.update_idletasks() # sleep(3) print(f&quot;\nStart thread-queue&quot;) jthread = threading.Thread( target=get_thumbnails_concurrently_with_queue, args=(gids, fids, fpaths, self.rqueue, (100,100)), name=&quot;jobthread&quot;) jthread.start() self.check_rqueue() def check_rqueue(self): # print(f&quot;\ndef _check_thread(self, thread, start0):&quot;) duration = 1 # millisecond try: info = self.rqueue.get(block=False) # print(f&quot;{info=}&quot;) except queue.Empty: self.after(1, lambda: self.check_rqueue()) else: match info[0]: case &quot;thumbnail&quot;: gid, fid, img = info[1] print(f&quot;{gid=} {fid=}&quot;) grps = self.groups grps[gid].imgs[fid] = ImageTk.PhotoImage(img) grps[gid].bns[fid][&quot;image&quot;] = grps[gid].imgs[fid] self.update_idletasks() self.after(duration, lambda: self.check_rqueue()) case &quot;completed&quot;: print(f'Completed') self.label.lower(self.vsf) class 
ButtonGroups(ttk.Frame): def __init__(self, master, **options): super().__init__(master, style='bnframe.TFrame', **options) self.master = master self.bnrun = ttk.Button( self, text=&quot;Run&quot;, width=10, style='bnrun.TButton') self.bnreset = ttk.Button( self, text=&quot;Reset&quot;, width=10, style='bnreset.TButton') self.columnconfigure(0, weight=1) self.columnconfigure(1, weight=1) self.bnrun.grid(row=0, column=0, sticky=&quot;nsew&quot;) self.bnreset.grid(row=0, column=1, sticky=&quot;nsew&quot;) if __name__ == &quot;__main__&quot;: root = tk.Tk() root.geometry('1300x600') root.columnconfigure(0, weight=1) root.rowconfigure(0, weight=1) ss = ttk.Style() ss.theme_use('default') ss.configure(&quot;.&quot;, background=&quot;gold&quot;) ss.configure(&quot;TPanedwindow&quot;, background=&quot;red&quot;) ss.configure('grpsframe.TFrame', background='green') ss.configure('gframe.TFrame', background='yellow') ss.configure('imgbns.TButton', background='orange') ss.configure(&quot;label.TLabel&quot;, background=&quot;cyan&quot;) ss.configure('bnframe.TFrame', background='white') ss.configure('bnrun.TButton', background='violet') ss.configure('bnreset.TButton', background='green') app = App(root) bns = ButtonGroups(root) app.grid(row=0, column=0, sticky=&quot;nsew&quot;) bns.grid(row=1, column=0, sticky=&quot;nsew&quot;) bns.bnrun.bind(&quot;&lt;B1-ButtonRelease&gt;&quot;, app.run) bns.bnreset.bind(&quot;&lt;B1-ButtonRelease&gt;&quot;, app.reset) root.mainloop() </code></pre>
<python><tkinter><tcl>
2024-04-17 18:12:29
2
8,499
Sun Bear
78,343,028
8,484,885
Remove any observations containing only characters (or a zip code with no other numeric values)
<p>I'm trying to create a flag for flawed addresses and my idea is to remove all observations that have no numeric value in them. I don't want zip codes, so the first step would be to remove those, and then apply a second filter to remove anything with no remaining numeric values.</p> <p>In the following data frame, I would want to retain only the second row (containing a numeric address). The first row is only characters, and the third row, while containing numeric values, really only contains a five-digit zipcode.</p> <pre><code>d = {'col1': ['San Diego County, California', '4150 Ute Dr, San Diego, California', 'Vista del Lago, Perris, California, 92570'],
     'col2': ['prov_1', 'prov_2', 'prov_3']}
df = pd.DataFrame(data=d)
df
</code></pre>
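A sketch of the two-step filter with pandas string methods: blank out ZIP codes (5-digit, optionally ZIP+4) first, then keep rows that still contain a digit. One caveat: a legitimate 5-digit street number would also be stripped by this pattern.

```python
import pandas as pd

d = {"col1": ["San Diego County, California",
              "4150 Ute Dr, San Diego, California",
              "Vista del Lago, Perris, California, 92570"],
     "col2": ["prov_1", "prov_2", "prov_3"]}
df = pd.DataFrame(d)

# Step 1: remove standalone 5-digit (or ZIP+4) codes.
no_zip = df["col1"].str.replace(r"\b\d{5}(?:-\d{4})?\b", "", regex=True)
# Step 2: flag rows with no digits left after the ZIP is gone.
df["flawed"] = ~no_zip.str.contains(r"\d")
kept = df[~df["flawed"]]
print(kept["col2"].tolist())
```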
<python><pandas>
2024-04-17 18:07:21
1
589
James
78,342,932
15,100,030
Django annotate over multi-record foreign keys
<p>I Work on a KPI system in which every manager can put specific questions for each user and answer them every month from 1 to 10 we have 4 departments, and every department says to have 4 question these answers have to be calculated to return the % number for each department</p> <p><strong>Models</strong></p> <pre class="lang-py prettyprint-override"><code> # question to answer every month class Question(models.Model): department = models.CharField(max_length=100, choices=USER_TYPE) question = models.CharField(max_length=100) def __str__(self): return f&quot;{self.question}&quot; # create every 01-month celery class Kpi(models.Model): user = models.ForeignKey(User, on_delete=models.CASCADE, related_name=&quot;kip_for&quot;) created_at = models.DateField() class Meta: unique_together = (&quot;user&quot;, &quot;created_at&quot;) def __str__(self): return f&quot;{self.user} for Month {self.created_at.month}&quot; # group the answers by department class DepartmentsKPI(models.Model): department = models.CharField(max_length=100, choices=USER_TYPE) kpi = models.ForeignKey( Kpi, on_delete=models.CASCADE, related_name=&quot;department_kpi&quot; ) class Answer(models.Model): question = models.ForeignKey( Question, on_delete=models.CASCADE, related_name=&quot;answer_for&quot; ) score = models.PositiveSmallIntegerField(default=0) comment = models.TextField(null=True, blank=True) kpi_answers = models.ForeignKey( DepartmentsKPI, on_delete=models.CASCADE, related_name=&quot;department_answers&quot;, ) </code></pre> <p>the problem is when I apply the annotation over the KPI model to sum the answer score for each department, the result becomes ungrouped so I have to group them by a unique value in my case the username</p> <pre class="lang-py prettyprint-override"><code>def get_kpi_range(*, year, month): queryset = ( KPIRepository.filter_kpis( created_at__year=year, created_at__month=month, ) .prefetch_related(&quot;department_kpi__department_answers&quot;) .annotate( 
score=Sum(&quot;department_kpi__department_answers__score&quot;) / Count(&quot;department_kpi__department_answers&quot;), department=F(&quot;department_kpi__department&quot;), ) .values(&quot;id&quot;, &quot;score&quot;, &quot;department&quot;, username=F(&quot;user__username&quot;)) ) grouped = [ {&quot;username&quot;: username, &quot;kpis&quot;: list(instances)} for username, instances in itertools.groupby(queryset, lambda x: x[&quot;username&quot;]) ] return grouped </code></pre> <p>However, this fails to handle multiple records for the same user. If I query all KPIs in one year, the result becomes messy.</p> <p>Also, if I want to make charts, I cannot return data for each user grouped by each month of the year</p> <p><strong>Current Result With <code>grouped</code> above</strong></p> <pre class="lang-json prettyprint-override"><code>[ { &quot;username&quot;: &quot;xxxxx&quot;, &quot;kpis&quot;: [ { &quot;department&quot;: &quot;HR&quot;, &quot;score&quot;: 5 }, { &quot;department&quot;: &quot;IT&quot;, &quot;score&quot;: 6 }, { &quot;department&quot;: &quot;QUALITY&quot;, &quot;score&quot;: 4 }, { &quot;department&quot;: &quot;WFM&quot;, &quot;score&quot;: 6 } ] }, { &quot;username&quot;: &quot;ahmed&quot;, &quot;kpis&quot;: [ { &quot;department&quot;: &quot;IT&quot;, &quot;score&quot;: 6 } ] } ] </code></pre> <p>Without <code>grouped</code></p> <pre class="lang-json prettyprint-override"><code>[ [ { &quot;id&quot;: 7, &quot;score&quot;: 5, &quot;department&quot;: &quot;HR&quot;, &quot;username&quot;: &quot;xxx&quot; }, { &quot;id&quot;: 7, &quot;score&quot;: 6, &quot;department&quot;: &quot;IT&quot;, &quot;username&quot;: &quot;xxx&quot; }, { &quot;id&quot;: 7, &quot;score&quot;: 4, &quot;department&quot;: &quot;QUALITY&quot;, &quot;username&quot;: &quot;xxx&quot; }, { &quot;id&quot;: 7, &quot;score&quot;: 6, &quot;department&quot;: &quot;WFM&quot;, &quot;username&quot;: &quot;xxx&quot; }, { &quot;id&quot;: 8, &quot;score&quot;: 6, &quot;department&quot;: 
&quot;IT&quot;, &quot;username&quot;: &quot;ahmed&quot; } ] ] </code></pre>
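On the grouping half of the problem: `itertools.groupby` only merges *adjacent* rows, so interleaved or multi-month records for one user split into several groups unless the queryset is sorted by that key first. A dict keyed by username — or by `(username, month)` for year-long charts, with the month annotated via e.g. `TruncMonth` — has no ordering requirement. A sketch over rows shaped like the ungrouped output above:

```python
from collections import defaultdict

# Rows shaped like the ungrouped .values() output from the question.
rows = [
    {"id": 7, "score": 5, "department": "HR",      "username": "xxx"},
    {"id": 7, "score": 6, "department": "IT",      "username": "xxx"},
    {"id": 8, "score": 6, "department": "IT",      "username": "ahmed"},
    {"id": 9, "score": 4, "department": "QUALITY", "username": "xxx"},  # later record, same user
]

# groupby would split "xxx" into two groups here; a dict merges them reliably.
by_user = defaultdict(list)
for row in rows:
    by_user[row["username"]].append(
        {"department": row["department"], "score": row["score"]}
    )

grouped = [{"username": u, "kpis": kpis} for u, kpis in by_user.items()]
print([g["username"] for g in grouped])
```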
<python><django><postgresql>
2024-04-17 17:48:20
0
698
Elabbasy00
78,342,889
1,644,352
Poetry can't install setuptools?
<p>I'm trying to use Poetry to run <a href="https://jenkins-job-builder.readthedocs.io/en/latest/" rel="nofollow noreferrer">jenkins-job-builder</a> 4.3 (yes it's old but IIUC upgrading is non-trivial). However, it fails with:</p> <pre class="lang-none prettyprint-override"><code>ERROR:stevedore.extension:Could not load '&lt;command&gt;': No module named 'pkg_resources' </code></pre> <p>This seems to indicate that I need setuptools, and indeed on machines where <code>pythonX/site-packages/pkg_resources</code> exists, it seems to be owned by <code>setuptools</code>. Here, however, is where I run into problems.</p> <p>If I add this to <code>pyproject.toml</code>:</p> <pre class="lang-ini prettyprint-override"><code>[tool.poetry.dependencies] setuptools = &quot;*&quot; </code></pre> <p>I get:</p> <pre class="lang-none prettyprint-override"><code>Because &lt;project&gt; depends on setuptools (*) which doesn't match any versions, version solving failed. </code></pre> <p>(Same results with anything else I've tried for the version specification.)</p> <p>This is all in a GHA runner using Python 3.12 and Poetry 1.8.2 (installed via Python 3.10.12). My local machine (using Python 3.10) is able to run <code>jenkins-jobs</code> just fine. Also, <code>setuptools</code> and <code>pkg_resources</code> exist in my local Poetry cache. So it seems &quot;install setuptools&quot; is the correct solution... except I can't figure out how to do that. 
It either does it on its own without being told, or refuses to do it at all.</p> <p><strong>How do I convince Poetry to reliably install setuptools?</strong></p> <hr /> <p>In case it's useful, my GHA script looks like:</p> <pre class="lang-yaml prettyprint-override"><code>jobs: test: runs-on: ubuntu-latest container: ubuntu:latest steps: - run: | echo &quot;$HOME/.local/bin&quot; &gt;&gt; $GITHUB_PATH apt-get -qq update apt-get -y --no-install-recommends install pipx - run: pipx install poetry - uses: actions/checkout@v4 - uses: actions/setup-python@v5 with: python-version: '3.x' cache: poetry - run: poetry install </code></pre> <p>Also, in one experiment, I managed to get:</p> <pre class="lang-none prettyprint-override"><code>pkg_resources.VersionConflict: (setuptools 69.5.1 (/github/home/.cache/pypoetry/virtualenvs/.../lib/python3.10/site-packages), Requirement.parse('setuptools&lt;=65.7.0')) </code></pre> <p>...which is from <code>jenkins_job_builder-4.3.0.dist-info/METADATA</code>.</p>
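As a quick runtime check (independent of Poetry), you can ask the interpreter whether `pkg_resources` would actually import — which is what `stevedore` needs here. A small sketch; the helper name is mine:

```python
import importlib.util

def has_pkg_resources() -> bool:
    # True if `import pkg_resources` would succeed in this interpreter,
    # i.e. setuptools actually landed in the environment. find_spec()
    # returns None (rather than raising) for a missing top-level module.
    return importlib.util.find_spec("pkg_resources") is not None
```

Running this inside the GHA virtualenv (e.g. `poetry run python -c "..."`) would confirm whether the install step actually provided setuptools before `jenkins-jobs` is invoked.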
<python><setuptools><python-poetry>
2024-04-17 17:41:04
0
2,842
Matthew
78,342,736
15,098,472
Collecting varying element indices from a tensor across multiple dimensions
<p>Assume I have the following tensor:</p> <pre><code>arr = torch.randint(0, 9, (100, 50, 3)) </code></pre> <p>What I want to achieve is collecting, for example, 2 elements of that tensor; let's start with collecting the 6th and 56th one:</p> <pre><code>indices = torch.tensor([5, 55]) partial_arr = arr[indices] </code></pre> <p>This gives me an array of shape</p> <pre><code>torch.Size([2, 50, 3]) </code></pre> <p>Now, let's assume that from the first element, I want to collect the elements 5 through 10:</p> <pre><code>first_result = partial_arr[0, 5:10] </code></pre> <p>and from the second element, the elements from 10 to 15:</p> <pre><code>second_result = partial_arr[1, 10:15] </code></pre> <p>Since I want everything in one tensor, I can do:</p> <pre><code>final_result = torch.cat([first_result, second_result]) </code></pre> <p>How can I achieve the final result with only one operation on the first tensor: <code>arr = torch.randint(0, 9, (100, 50, 3))</code>?</p>
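For reference, the "one operation" being asked for here is advanced (fancy) indexing with broadcast index arrays. Below is a NumPy sketch of the pattern, since the slices in the question have equal length; in PyTorch the same expression works with `torch.arange` and tensor indices:

```python
import numpy as np

rng = np.random.default_rng(0)
arr = rng.integers(0, 9, (100, 50, 3))   # stand-in for torch.randint

rows = np.array([5, 55])     # which outer elements to take
starts = np.array([5, 10])   # per-row slice start (5:10 and 10:15)
length = 5                   # both slices have length 5

# Broadcast (2, 1) row indices against (2, 5) column windows: the
# result has shape (2, 5, 3); flattening the first two axes matches
# torch.cat([arr[5, 5:10], arr[55, 10:15]]).
cols = starts[:, None] + np.arange(length)       # shape (2, 5)
final = arr[rows[:, None], cols].reshape(-1, 3)  # shape (10, 3)
```

This relies on the slices having a common length; ragged slice lengths would need a different approach (e.g. concatenating flat index arrays).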
<python><indexing><pytorch>
2024-04-17 17:12:39
1
574
kklaw
78,342,468
12,190,301
Is the result of pyperf in real time or in CPU seconds?
<p>I would assume the output of <a href="https://github.com/psf/pyperf/tree/main" rel="nofollow noreferrer"><code>pyperf</code></a> is in real time, but I couldn't find confirmation anywhere.</p>
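Independent of pyperf's internals, the distinction itself is easy to probe in Python: `time.perf_counter()` measures real (wall-clock) time while `time.process_time()` measures CPU seconds, and the two diverge while the process sleeps. A minimal sketch of that difference:

```python
import time

start_wall = time.perf_counter()
start_cpu = time.process_time()
time.sleep(0.2)                        # wall time passes, CPU stays idle
wall = time.perf_counter() - start_wall
cpu = time.process_time() - start_cpu

# Sleeping advances the wall clock by ~0.2 s but consumes almost no
# CPU time, so any timer that reports ~0.2 s for this block is
# measuring real time, not CPU seconds.
```

Timing a sleep-heavy snippet with pyperf and comparing against these two clocks would answer the question empirically for your pyperf version.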
<python><profiling>
2024-04-17 16:20:11
1
2,109
Schottky
78,342,414
11,233,365
How to check Python environment for hidden modules (not listed, but can be imported)
<p>To expand on the title, my question comes in two parts:</p> <ol> <li>Is it possible for a Python package to be installed in an environment as a dependency for another package, but not show up when you list down all installed packages in that environment using commands such as <code>conda list</code>, <code>mamba list</code>, or <code>pip list</code>?</li> <li>If yes, then is there a way to verify the presence of such packages in your Python environment from the command line? My understanding is that multiple instances of the same package existing in the same environment could cause dependency conflicts.</li> </ol> <p>Thanks!</p>
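One way to probe part 2 from Python itself (a sketch, not a complete audit): compare what `importlib.util.find_spec` can locate against the distributions `importlib.metadata` lists. A module that is importable but unlisted — the stdlib, vendored copies, some editable installs — is exactly the "hidden" case described above:

```python
import importlib.metadata
import importlib.util

def probe(names):
    # For each name, report (importable?, listed in distribution
    # metadata?). Importable-but-unlisted modules won't show up in
    # `pip list`-style output even though `import name` succeeds.
    listed = {
        (dist.metadata["Name"] or "").lower()
        for dist in importlib.metadata.distributions()
    }
    return {
        name: (
            importlib.util.find_spec(name) is not None,
            name.lower() in listed,
        )
        for name in names
    }
```

Note this only checks the current interpreter's `sys.path`; conda and pip can still disagree about the same environment.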
<python><installation><package><environment>
2024-04-17 16:08:59
0
301
TheEponymousProgrammer
78,342,362
16,717,009
Finding existing subtotals in a pandas dataframe or a list of numbers
<p>Here's an interesting problem. Given a pandas Dataframe (or even a Python list) how would one go about finding the subtotals that might be in that list? For example:</p> <pre><code> running value 0 False 50709 1 False 26715 2 False 1715 3 False 79139 4 False 34447 5 False -7256 6 False 1210 7 False 42913 8 True 36227 9 False 999 10 False 20107 11 False 5787 12 False -1466 13 False -216 14 False 615 15 False 24827 16 True 11400 17 False 5642 18 True 5758 19 False -5 20 True 5753 </code></pre> <p>Observations about the data:</p> <ol> <li><strong>Signs may be incorrect</strong>.</li> <li>There are both subtotals and running totals in the data. Lines <code>[3, 7, 15]</code> are subtotals, <code>[8, 16, 18, 20]</code> are running totals.</li> <li>Subtotal 3 could be considered a special case, as it's both a subtotal and a running total.</li> <li>I can determine the running totals through other means, therefore they are marked True in the sample data.</li> <li>Subtotals <code>[3, 7, 15]</code> represent rows <code>[0, 1, 2]</code>, <code>[4, 5, 6]</code> and <code>[10, 11, 12, 13, 14]</code> respectively.</li> <li>It's fair to assume a subtotal follows a contiguous subset of numbers.</li> <li>There might not be any subtotals.</li> <li>I don't know if there are cases where a subtotal set includes another smaller subtotal set. Even an answer that doesn't consider this will be helpful.</li> <li>The number of rows will relatively small, less than 100.</li> </ol> <p>I need to identify subtotals <strong>and</strong> the rows represented by each subtotal.</p> <p>See my answer below.</p>
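As a starting point, here is a heuristic sketch in plain Python: treat row `i` as a subtotal if a contiguous run of non-running-total rows immediately above it matches `values[i]`, either with the signs as given or with every value taken as positive (this covers the "signs may be incorrect" observation for the sample data, though not arbitrary per-row sign flips, and it does not handle nested subtotal sets):

```python
def find_subtotals(values, running):
    # Scan upward from each candidate row, stopping at running totals
    # and rows already claimed. Returns {subtotal_row: [rows covered]}.
    found, covered = {}, set()
    for i, v in enumerate(values):
        if running[i]:
            continue
        signed = absolute = 0
        rows = []
        for j in range(i - 1, -1, -1):
            if running[j] or j in covered or j in found:
                break
            signed += values[j]
            absolute += abs(values[j])
            rows.append(j)
            if len(rows) > 1 and abs(v) in (abs(signed), absolute):
                found[i] = sorted(rows)
                covered.update(rows)
                break
    return found

# The sample data from the question:
values = [50709, 26715, 1715, 79139, 34447, -7256, 1210, 42913, 36227,
          999, 20107, 5787, -1466, -216, 615, 24827, 11400, 5642, 5758,
          -5, 5753]
running = [i in (8, 16, 18, 20) for i in range(len(values))]
subtotals = find_subtotals(values, running)
```

On this data the scan recovers rows 3, 7 and 15 with the row sets listed in the question; row 7 matches only via the all-positive sum (34447 + 7256 + 1210 = 42913).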
<python><pandas><algorithm>
2024-04-17 15:58:31
1
343
MikeP
78,342,315
1,422,096
Reorder dict with custom order
<p>Given a dict <code>d</code>, now that we know since Python 3.7 that (insertion) order is preserved, is there a built-in way to ask for <strong>the same dict with the same keys, except that some keys k1, k2, ... should come first?</strong></p> <p>Example:</p> <ul> <li>key <code>a</code> (if present) should be first,</li> <li>key <code>first</code> (if present) should be present next</li> </ul> <p>I came up with this:</p> <pre><code>def reorder_dict(d, first_keys): new_keys = [k for k in first_keys if k in d.keys()] + [k for k in d.keys() if k not in first_keys] new_d = {k: d[k] for k in new_keys} return new_d d1 = {&quot;c&quot;: 3, &quot;b&quot;: 2, &quot;a&quot;: 1} reorder_dict(d1, [&quot;a&quot;, &quot;first&quot;]) # {'a': 1, 'c': 3, 'b': 2} as expected ; NB: &quot;first&quot; is not present </code></pre> <p>Is there a built-in way to do this more directly?</p>
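Not built in as such, but the function in the question can be collapsed into one expression by exploiting a dict-comprehension property: listing a key a second time overwrites its value but does not move the entry from its first-inserted position. A sketch:

```python
def reorder_dict(d, first_keys):
    # first_keys are iterated first, pinning them to the front; the
    # remaining keys of d keep their original relative order, because
    # a repeated key never moves an already-inserted entry.
    return {k: d[k] for k in (*first_keys, *d) if k in d}

d1 = {"c": 3, "b": 2, "a": 1}
result = reorder_dict(d1, ["a", "first"])  # "first" is absent, so skipped
```

Note that dict equality ignores order, so checking the effect requires comparing `list(result)` rather than the dicts themselves.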
<python><dictionary>
2024-04-17 15:51:40
2
47,388
Basj
78,342,289
6,197,439
Very bizarre: tzlocal.get_localzone() different output based on python3 aliasing?
<p>I just noticed this; I'm completely puzzled as to why it happens, and how I can prevent it.</p> <p>The computer I work on is Windows 10, and is set up in Copenhagen. My platform is this:</p> <pre class="lang-none prettyprint-override"><code>$ for ix in &quot;uname -s&quot; &quot;python3 --version&quot;; do echo &quot;$ix: &quot; $($ix); done uname -s: MINGW64_NT-10.0-19045 python3 --version: Python 3.11.9 </code></pre> <p>Since I use a <code>bash</code> terminal under MINGW64, I also have an alias set up for python3 in <code>.bashrc</code>:</p> <pre class="lang-bash prettyprint-override"><code>alias python3=&quot;winpty python3&quot; </code></pre> <p>OK; so now I want to print <code>tzlocal.get_localzone()</code> by calling a <code>python3</code> command in the bash terminal:</p> <pre class="lang-bash prettyprint-override"><code>$ python3 -c 'import tzlocal; print(tzlocal.get_localzone())' Europe/Copenhagen </code></pre> <p>Excellent, I got exactly the time zone as expected. 
However, recall <code>python3</code> here is actually <code>winpty python3</code>; to test <em>just</em> <code>python3</code>, let's prepend a backslash to the command, to escape the <code>bash</code> aliasing:</p> <pre class="lang-none prettyprint-override"><code>$ \python3 -c 'import tzlocal; print(tzlocal.get_localzone())' Europe/Paris </code></pre> <p>Amazing - I never would have expected this; why on earth do I get Paris here, and not Copenhagen (which is what Windows 10 itself on that machine is set up for)?</p> <p>I mean, it's not that far off, as far as timezones go - but why settle for less, when there are obviously conditions that make it output the correct time zone?</p> <p>So, why does this happen - and how can I get <code>\python3</code> run of <code>tzlocal.get_localzone()</code> also return Europe/Copenhagen?</p> <hr /> <p>EDIT: by printing <code>os.environ</code> in both cases, can see that the <code>winpty</code> python environment defines an environment variable 'TZ': 'Europe/Copenhagen' - while the direct python environment has no such variable.</p>
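Given that EDIT, the behavior is consistent with tzlocal consulting a <code>TZ</code> environment variable before doing its own Windows zone detection: the winpty session carries <code>TZ=Europe/Copenhagen</code>, the bare invocation does not. A sketch of the implied workaround — set <code>TZ</code> explicitly before importing tzlocal (the tzlocal call is left commented out so the sketch has no third-party dependency; the zone string is the one observed in the winpty environment):

```python
import os

# winpty's environment exports TZ=Europe/Copenhagen, which tzlocal
# honours before falling back to OS-level zone lookup; the direct
# python3 run has no TZ and resolves to Europe/Paris instead.
os.environ["TZ"] = "Europe/Copenhagen"
# import tzlocal
# print(tzlocal.get_localzone())  # should now report Europe/Copenhagen
```

The remaining question — why the OS-level fallback yields Europe/Paris — would point at how that tzlocal version maps the Windows zone name when no `TZ` override exists.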
<python><python-3.x><timezone><mingw-w64>
2024-04-17 15:47:54
1
5,938
sdbbs
78,342,216
7,217,960
Suppress GLib-GIO-WARNING originating from Weasyprint/GTK3
<p>I'm using Weasyprint in Python to generate PDF files from HTML files. After a recent system update of my Windows machine, I started to observe warning log messages printed on the console such as this one:</p> <blockquote> <p>(process:41316): GLib-GIO-WARNING **: 10:36:44.529: Unexpectedly, UWP app <code>Microsoft.OutlookForWindows_1.2024.403.300_x64__8wekyb3d8bbwe' (AUMId </code>Microsoft.OutlookForWindows_8wekyb3d8bbwe!Microsoft.OutlookforWindows') supports 4 extensions but has no verbs</p> </blockquote> <p>This is apparently coming from the GLib library of GTK3 on which Weasyprint relies to produce the PDF files.</p> <p>My application is behaving as expected, except for those warning messages.</p> <p>I would like to know if there are ways to control the logging level of GLib from Weasyprint in Python to suppress those messages.</p> <p>Note: It seems that those messages originate from a sub-process, so the following trick didn't work in this case:</p> <pre><code>old_stdout = sys.stdout # backup current stdout sys.stdout = open(os.devnull, &quot;w&quot;) suspect_function() sys.stdout = old_stdout # reset old stdout </code></pre>
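Since swapping `sys.stdout` only affects Python-level writes, one alternative worth trying is redirecting the underlying OS file descriptor, which child processes and C libraries such as GLib inherit. A sketch (GLib warnings normally go to stderr, i.e. fd 2, rather than stdout):

```python
import contextlib
import os
import sys

@contextlib.contextmanager
def silence_fd(fd=2):
    # Duplicate-and-restore the real file descriptor. Unlike replacing
    # sys.stdout/sys.stderr, dup2() changes what the OS-level fd points
    # at, so writes from subprocesses and C code are redirected too.
    sys.stdout.flush()
    sys.stderr.flush()
    saved = os.dup(fd)
    devnull = os.open(os.devnull, os.O_WRONLY)
    try:
        os.dup2(devnull, fd)
        yield
    finally:
        os.dup2(saved, fd)
        os.close(saved)
        os.close(devnull)
```

Usage would be roughly `with silence_fd(): weasyprint.HTML(...).write_pdf(...)` (call shape illustrative). The cost is that *all* output on that descriptor is lost for the duration, including genuine errors.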
<python><gtk3><glib><weasyprint>
2024-04-17 15:38:20
0
412
Guett31
78,342,036
16,759,116
Cartesian product without reuse
<p>I have two generators producing data, for example:</p> <pre class="lang-py prettyprint-override"><code>def xs(): yield [1, 2] yield [3, 4] def ys(): yield [5, 6] yield [7, 8] </code></pre> <p>And I want to process all possible (x,y) pairs:</p> <pre class="lang-py prettyprint-override"><code>process([1, 2], [5, 6]) process([1, 2], [7, 8]) process([3, 4], [5, 6]) process([3, 4], [7, 8]) </code></pre> <p>I can do this:</p> <pre class="lang-py prettyprint-override"><code>from itertools import product for x, y in product(xs(), ys()): process(x, y) </code></pre> <p>Here's the problem: <code>process</code> might modify the data, for example like this:</p> <pre class="lang-py prettyprint-override"><code>def process(x, y): print(f'process({x}, {y})') x.pop() y.pop() </code></pre> <p>Then what happens is this:</p> <pre class="lang-py prettyprint-override"><code>process([1, 2], [5, 6]) process([1], [7, 8]) process([3, 4], [5]) process([3], [7]) </code></pre> <p>That's because <code>product(xs(), ys())</code> creates all the xs and ys only once, and reuses them. So the earlier <code>process</code> calls affect the data for the later calls. I need to avoid this reuse.</p> <p>This is slightly better:</p> <pre class="lang-py prettyprint-override"><code>for x in xs(): for y in ys(): process(x, y) </code></pre> <p>This reuses each <code>x</code> but each <code>y</code> is created freshly, leading to:</p> <pre class="lang-py prettyprint-override"><code>process([1, 2], [5, 6]) process([1], [7, 8]) process([3, 4], [5, 6]) process([3], [7, 8]) </code></pre> <p>One way to avoid reuse of each <code>x</code> is to always make a deep copy:</p> <pre class="lang-py prettyprint-override"><code>from copy import deepcopy for x in xs(): for y in ys(): process(deepcopy(x), y) </code></pre> <p>That gives the desired behavior. The trouble is that <code>deepcopy</code> can be much slower than freshly generating the data would be. 
Here are times where <code>xs()</code> and <code>ys()</code> yield 100 lists of 100 ints (and <code>process</code> doesn't do anything) with the above three methods:</p> <pre class="lang-py prettyprint-override"><code> 0.6 Β± 0.0 ms using_product 3.1 Β± 0.0 ms nested_loops 404.7 Β± 25.9 ms with_deepcopy </code></pre> <p>How can we always use fresh <code>x</code> and fresh <code>y</code> without <code>deepcopy</code>, so that it's much faster? It should be possible to take only about twice as long as <code>nested_loops</code>, since that already produces half of all values freshly.</p> <p>Benchmark/testing script:</p> <pre class="lang-py prettyprint-override"><code>def using_product(xs, ys, process): for x, y in product(xs(), ys()): process(x, y) def nested_loops(xs, ys, process): for x in xs(): for y in ys(): process(x, y) def with_deepcopy(xs, ys, process): for x in xs(): for y in ys(): process(deepcopy(x), y) funcs = [ using_product, nested_loops, with_deepcopy, ] from itertools import * from copy import deepcopy from timeit import timeit from statistics import mean, stdev import sys import random # The little example def xs(): yield [1, 2] yield [3, 4] def ys(): yield [5, 6] yield [7, 8] def process(x, y): print(f'process({x}, {y})') x.pop() y.pop() for f in funcs: print(f.__name__ + ':') f(xs, ys, process) print() # Arguments for benchmark def xs(): for _ in range(100): yield [1] * 100 ys = xs def process(x, y): pass # Run the benchmark times = {f: [] for f in funcs} def stats(f): ts = [t * 1e3 for t in sorted(times[f])[:5]] return f'{mean(ts):5.1f} Β± {stdev(ts):3.1f} ms ' for _ in range(25): random.shuffle(funcs) for f in funcs: t = timeit(lambda: f(xs, ys, process), number=1) times[f].append(t) for f in sorted(funcs, key=stats): print(stats(f), f.__name__) print('\nPython:', sys.version) </code></pre> <p><a 
href="https://ato.pxeger.com/run?1=rVTNjpswED71wlOM1AOw9aLNpmlXSDn0DaqqtxQhAuPEWrCRbbZBUZ6kl720975CH6NPs_6BJaTdnoqEZGY-f-OZ78PffrS93gv--Pi90_T67ver6wopdIrxXd5KUXWljg6KQG9e812iUnEagHmokHAwCWAcJmQUW2wUDyD7DNsiC46DwPJzVBqrvBaiVf-it9yWcyKzYVeyn4VfqPOV6X1eIbalaPv_Xmgijn3JgHa8VLCGjcPPpkhc6LxxH5kdkQSZIZGiAaZRaiFqBaxphdRw5eMWNYbGXT6jWYNMjzn_5TNKF5opzcpnsgYLTky8wodgCKlejUtZ8Eo0QfAaPu8RaqZ1jYCHomlrdGOdJtUzrCvYLAjcZueBJYG3mcP2l9gVgXcz7HsCdx47U9BvaiXjOqLhmDoeTgSO_SkOY5c_JK1oI7_uh7WVjlrpnBwzniTPedFgnsMbCNOBg_7hjLMtsZ3DB7nrGuRaOVtskZf7ppD3F8OwudzWNQPcYbS4uTkzzjipDK7AZILe-uSgXuy7UMpW_tRx0EaFqaZV1u490hQ2GcybPTk-K7iK6EClnSO1rYtLh9cWr4zUWEWObkOzeJOuMq-LRN1JQxgerU8ibf6UVbKgJ_j1E47ONC62dLFGQRhcdH67Gkp7JyVq31FaY-SOGD_P6lIjd1hzVu_dqC6abVWkf5GHAO-aLcr1Ip42Dn0kRdsiN32d-WBo1ZUicI_92k1oZrFxZgQmkxjtfTL8wj-6ezINif1TkgeUigke-1tzuDzHS_QJ" rel="nofollow noreferrer">Attempt This Online!</a></p>
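For completeness, one candidate along the lines asked for: keep one independent `xs()` stream per y-position and restart `ys()` once per x-position. Stream `j` advances exactly one step per outer iteration, so every `(x, y)` pair is built from freshly generated objects, each consumed once — no `deepcopy`, at roughly the cost of `m` runs of `xs()` plus `n` runs of `ys()` (a sketch written against the question's benchmark interface):

```python
def fresh_product(xs, ys, process):
    # n = number of x items, m = number of y items (one counting pass
    # each; assumes the generator functions are restartable and yield
    # the same number of items every run, as in the question).
    n = sum(1 for _ in xs())
    m = sum(1 for _ in ys())
    x_streams = [xs() for _ in range(m)]   # one x stream per y slot
    for _ in range(n):
        for j, y in enumerate(ys()):       # fresh ys each outer step
            process(next(x_streams[j]), y)

# The little example from the question:
def xs():
    yield [1, 2]
    yield [3, 4]

def ys():
    yield [5, 6]
    yield [7, 8]

calls = []

def process(x, y):
    calls.append((list(x), list(y)))  # record values before mutation
    x.pop()
    y.pop()

fresh_product(xs, ys, process)
```

Each `process` call sees unmutated data, matching the `with_deepcopy` behavior while only ever running the generators.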
<python><generator><cartesian-product>
2024-04-17 15:10:22
3
10,901
no comment
78,341,826
1,422,058
Extract feature names from XGBRegressor used in scikit-learn pipeline with OneHotEncoded categorical features
<p>I have a dataset with a few numerical and a few categorical features. After calling fit on the XGBRegressor, I want to check the feature importance. For this, I want to map the feature importance scores to the feature names. The regressor is used in a scikit-learn pipeline.</p> <pre><code>categorical_encoder = Pipeline( steps=[(&quot;encoder&quot;, OneHotEncoder(handle_unknown=&quot;ignore&quot;))] ) encoder = ColumnTransformer( transformers=[ (&quot;categories&quot;, categorical_encoder, [&quot;cat_feature1&quot;, &quot;cat_feature2&quot;, &quot;cat_feature3&quot;]) ], remainder=&quot;passthrough&quot; ) pipeline = Pipeline([ (&quot;encoder&quot;, encoder), (&quot;regressor&quot;, XGBRegressor()) ]) </code></pre> <p><code>pipeline['regressor'].get_booster().get_fscore()</code> returns a dictionary with feature names <code>f0</code>, <code>f2</code>, <code>f7</code>, ... <code>pipeline['encoder'].named_transformers_['categories']['encoder'].get_feature_names_out()</code> returns the feature names of the one hot encoded categorical variables.</p> <p>Can I somehow get the full feature list which has been created in the pipeline and map it to the feature importance scores? I could not really figure it out by myself.</p>
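For the mapping step itself, a hypothetical helper (the function name and the column-order assumption are mine): on a fitted `ColumnTransformer`, `pipeline['encoder'].get_feature_names_out()` (available on recent scikit-learn versions) yields the transformed categorical columns followed by the passthrough numericals, and the `fN` keys of `get_fscore()` index into that order:

```python
def map_fscore(fscore, feature_names):
    # get_fscore() keys are positional ('f0', 'f7', ...); strip the
    # leading 'f' and use the integer as an index into the full list
    # returned by the fitted ColumnTransformer's get_feature_names_out().
    return {feature_names[int(key[1:])]: score
            for key, score in fscore.items()}

# Illustrative data only (not real model output):
names = ["cat_feature1_a", "cat_feature1_b", "num_feature"]
importance = map_fscore({"f0": 10.0, "f2": 3.0}, names)
```

Features never used for a split are absent from `get_fscore()`, so the mapped dict may be shorter than the full feature list.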
<python><scikit-learn><xgbregressor>
2024-04-17 14:36:57
1
1,029
Joysn
78,341,724
3,341,533
KustoBlobError When Performing Queued Ingestion with azure-kusto-ingest python client
<p>I am able to successfully ingest data from a local pandas DataFrame into Azure ADX using the Python azure-kusto-ingest library when I run this from my Windows laptop.<br /> However, when I run this from an Azure compute Windows VM, a KustoBlobError is raised with the following message:</p> <blockquote> <p>azure.kusto.data.exceptions.KustoBlobError: &lt;urllib3.connection.HTTPSConnection object at 0x000002B73E481EA0&gt;: Failed to resolve 'j6v{adx_cluster_name}00.blob.core.windows.net' ([Errno 11001] getaddrinfo failed)</p> </blockquote> <p>I am using the same Azure Service Principal (SP) in both environments, and am able to read data from the same ADX table with this SP in both environments prior to performing the ingest, so authentication/authorization should not be a problem.<br /> While I initially had different versions of the Python Azure libraries installed on the VM compared to the laptop, I've incrementally changed all of these library versions to match what is on my laptop environment, but that has not resolved the issue.</p> <p>It seems there are two blob storage resources referenced in the logs, with the following name structures:</p> <ul> <li><p><strong>0uz</strong>{adx_cluster_name}<strong>01</strong></p> </li> <li><p><strong>j6v</strong>{adx_cluster_name}<strong>00</strong></p> </li> </ul> <p>When I run the code on the VM, it looks like both of these two blob resources are tried with the order being random, and the last one tried is referenced in the error message.</p> <p>In the client logs, I see this pair of log entries for request/response without an error when run on my laptop:</p> <pre> > 2024-04-17T09:40:15 | INFO | in > azure.core.pipeline.policies.http_logging_policy | Request URL: > 
'https://j6v{adx_cluster_name}00.blob.core.windows.net/20240417-ingestdata-e5c334ee145d4b4-0/manufacturing_eng__tag_value__359d382f-3780-4d17-a4b9-e2f81fc97840__df_2020154677360_1713361188_cad099c7-731e-4dac-bbab-8bec9eed1b5d.csv.gz?timeout=REDACTED&sv=REDACTED&st=REDACTED&se=REDACTED&sr=REDACTED&sp=REDACTED&sig=REDACTED' > Request method: 'PUT' Request headers: > 'Content-Length': '8844133' > 'x-ms-blob-type': 'REDACTED' > 'If-None-Match': '*' > 'x-ms-version': 'REDACTED' > 'Content-Type': 'application/octet-stream' > 'Accept': 'application/xml' > 'User-Agent': 'azsdk-python-storage-blob/12.12.0 Python/3.10.0 (Windows-10-10.0.19045-SP0)' > 'x-ms-date': 'REDACTED' > 'x-ms-client-request-id': '05db41cf-fcc0-11ee-85bc-8cae4cf0e805' A body is sent with the request > > > 2024-04-17T09:40:16 | INFO | in > azure.core.pipeline.policies.http_logging_policy | Response status: > 201 Response headers: > 'Content-Length': '0' > 'Content-MD5': 'REDACTED' > 'Last-Modified': 'Wed, 17 Apr 2024 13:40:16 GMT' > 'ETag': '"0x8DC5EE3EA8B33AB"' > 'Server': 'Windows-Azure-Blob/1.0 Microsoft-HTTPAPI/2.0' > 'x-ms-request-id': 'b1aead80-901e-00d8-78cc-905fdd000000' > 'x-ms-client-request-id': '05db41cf-fcc0-11ee-85bc-8cae4cf0e805' > 'x-ms-version': 'REDACTED' > 'x-ms-content-crc64': 'REDACTED' > 'x-ms-request-server-encrypted': 'REDACTED' > 'Date': 'Wed, 17 Apr 2024 13:40:16 GMT' </pre> <p>And when I run it on the VM there are a series of eight (8) log entries for request attempts with no responses and with each of those two blob resources being tried 4 times (showing the last of the series of requests prior to the error):</p> <pre> > 2024-04-16T14:28:22 | INFO | in > azure.core.pipeline.policies.http_logging_policy | Request URL: > 
'https://j6v{adx_cluster_name}00.blob.core.windows.net/20240416-ingestdata-e5c334ee145d4b4-0/manufacturing_eng__tag_value__623a8561-2cab-4726-8234-eab384bf2b24__df_2986047233024_1713291932_9f2f2ff2-2a38-4c7d-a6fa-4c43451c3fe1.csv.gz?timeout=REDACTED&sv=REDACTED&st=REDACTED&se=REDACTED&sr=REDACTED&sp=REDACTED&sig=REDACTED' > Request method: 'PUT' Request headers: > 'x-ms-blob-type': 'REDACTED' > 'Content-Length': '903579' > 'If-None-Match': '*' > 'x-ms-version': 'REDACTED' > 'Content-Type': 'application/octet-stream' > 'Accept': 'application/xml' > 'User-Agent': 'azsdk-python-storage-blob/12.12.0 Python/3.10.11 (Windows-10-10.0.17763-SP0)' > 'x-ms-date': 'REDACTED' > 'x-ms-client-request-id': '1b5a95a8-fc1f-11ee-8cc7-6045bd7dbb82' No body was attached to the request </pre> <p>Note that for these failed requests, they all indicate that &quot;No body was attached to the request&quot;, whereas the successful requests state that &quot;A body is sent with the request&quot;.</p> <p>And then the error trace looks like this:</p> <pre> > 2024-04-16T14:28:23 | ERROR | in root | > : > Failed to resolve 'j6v{adx_cluster_name}00.blob.core.windows.net' > ([Errno 11001] getaddrinfo failed) > > Traceback (most recent call last): File > "E:\apps\python310\venvs\base\lib\site-packages\azure\kusto\ingest\ingest_client.py", > line 229, in upload_blob > blob_client.upload_blob(data=stream, timeout=timeout) File "E:\apps\python310\venvs\base\lib\site-packages\azure\core\tracing\decorator.py", > line 78, in wrapper_use_tracer > return func(*args, **kwargs) File "E:\apps\python310\venvs\base\lib\site-packages\azure\storage\blob\_blob_client.py", > line 728, in upload_blob > return upload_block_blob(**options) File "E:\apps\python310\venvs\base\lib\site-packages\azure\storage\blob\_upload_helpers.py", > line 101, in upload_block_blob > response = client.upload( File "E:\apps\python310\venvs\base\lib\site-packages\azure\core\tracing\decorator.py", > line 78, in wrapper_use_tracer > return 
func(*args, **kwargs) File "E:\apps\python310\venvs\base\lib\site-packages\azure\storage\blob\_generated\operations\_block_blob_operations.py", > line 793, in upload > pipeline_response = self._client._pipeline.run( # pylint: disable=protected-access File > "E:\apps\python310\venvs\base\lib\site-packages\azure\core\pipeline\_base.py", > line 230, in run > return first_node.send(pipeline_request) File "E:\apps\python310\venvs\base\lib\site-packages\azure\core\pipeline\_base.py", > line 86, in send > response = self.next.send(request) File "E:\apps\python310\venvs\base\lib\site-packages\azure\core\pipeline\_base.py", > line 86, in send > response = self.next.send(request) File "E:\apps\python310\venvs\base\lib\site-packages\azure\core\pipeline\_base.py", > line 86, in send > response = self.next.send(request) [Previous line repeated 2 more times] File > "E:\apps\python310\venvs\base\lib\site-packages\azure\core\pipeline\policies\_redirect.py", > line 197, in send > response = self.next.send(request) File "E:\apps\python310\venvs\base\lib\site-packages\azure\core\pipeline\_base.py", > line 86, in send > response = self.next.send(request) File "E:\apps\python310\venvs\base\lib\site-packages\azure\storage\blob\_shared\policies.py", > line 543, in send > raise err File "E:\apps\python310\venvs\base\lib\site-packages\azure\storage\blob\_shared\policies.py", > line 517, in send > response = self.next.send(request) File "E:\apps\python310\venvs\base\lib\site-packages\azure\core\pipeline\_base.py", > line 86, in send > response = self.next.send(request) File "E:\apps\python310\venvs\base\lib\site-packages\azure\core\pipeline\_base.py", > line 86, in send > response = self.next.send(request) File "E:\apps\python310\venvs\base\lib\site-packages\azure\core\pipeline\_base.py", > line 86, in send > response = self.next.send(request) File "E:\apps\python310\venvs\base\lib\site-packages\azure\storage\blob\_shared\policies.py", > line 313, in send > response = 
self.next.send(request) File "E:\apps\python310\venvs\base\lib\site-packages\azure\core\pipeline\_base.py", > line 86, in send > response = self.next.send(request) File "E:\apps\python310\venvs\base\lib\site-packages\azure\core\pipeline\_base.py", > line 86, in send > response = self.next.send(request) File "E:\apps\python310\venvs\base\lib\site-packages\azure\core\pipeline\_base.py", > line 119, in send > self._sender.send(request.http_request, **request.context.options), File "E:\apps\python310\venvs\base\lib\site-packages\azure\storage\blob\_shared\base_client.py", > line 333, in send > return self._transport.send(request, **kwargs) File "E:\apps\python310\venvs\base\lib\site-packages\azure\core\pipeline\transport\_requests_basic.py", > line 386, in send > raise error azure.core.exceptions.ServiceRequestError: : > Failed to resolve 'j6v{adx_cluster_name}00.blob.core.windows.net' > ([Errno 11001] getaddrinfo failed) > > During handling of the above exception, another exception occurred: > > Traceback (most recent call last): File > "{python_script_path}", > line 546, in > adx_status_queues.extend(load_data_ADX(kusto_ingest_client, kusto_ingest_config["db"], "tag_value", combinedDF)) File > "{python_script_path}", > line 411, in load_data_ADX > r = kusto_client.ingest_from_dataframe(data, ingestion_properties=ingestion_props) File > "E:\apps\python310\venvs\base\lib\site-packages\azure\kusto\ingest\base_ingest_client.py", > line 121, in ingest_from_dataframe > return self.ingest_from_file(temp_file_path, ingestion_properties) File > "E:\apps\python310\venvs\base\lib\site-packages\azure\core\tracing\decorator.py", > line 78, in wrapper_use_tracer > return func(*args, **kwargs) File "E:\apps\python310\venvs\base\lib\site-packages\azure\kusto\ingest\ingest_client.py", > line 77, in ingest_from_file > blob_descriptor = self.upload_blob( File "E:\apps\python310\venvs\base\lib\site-packages\azure\kusto\ingest\ingest_client.py", > line 237, in upload_blob > raise 
KustoBlobError(e) azure.kusto.data.exceptions.KustoBlobError: > : > Failed to resolve 'j6v{adx_cluster_name}00.blob.core.windows.net' > ([Errno 11001] getaddrinfo failed) </pre> <p>These are the ingestion properties I'm using:</p> <pre><code>ingestion_props = IngestionProperties( database=dest_database_name, table=dest_table_name, data_format=DataFormat.CSV, report_level=ReportLevel.FailuresAndSuccesses, ingestion_mapping_kind=IngestionMappingKind.CSV, column_mappings=data_mappings ) r = kusto_client.ingest_from_dataframe(data, ingestion_properties=ingestion_props) </code></pre> <p>I think I've seen somewhere in the documentation that for queued ingestion ADX uses temporary blob storage to land the data before actually ingesting it, so I'm assuming these blob storage containers being referenced in the logs correspond to those, but am not certain.</p> <p>Any ideas what is going on here or how to try to fix it?</p>
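`[Errno 11001] getaddrinfo failed` is a plain DNS resolution failure on the VM, raised before any Kusto or auth logic runs — a VNet with a private DNS zone or firewall that doesn't cover the ingestion staging storage endpoint is a common cause. A quick diagnostic sketch to run on both machines to isolate DNS from everything else:

```python
import socket

def can_resolve(host, port=443):
    # getaddrinfo is exactly the call that fails in the traceback
    # ([Errno 11001] is Windows' WSAHOST_NOT_FOUND), so this checks
    # name resolution in isolation from the Kusto SDK.
    try:
        socket.getaddrinfo(host, port)
        return True
    except socket.gaierror:
        return False
```

Calling this with the staging endpoint host from the error (the `j6v...00.blob.core.windows.net` name, with the real cluster name in place of the redacted `{adx_cluster_name}`) on the VM would confirm whether the VM's DNS configuration is the culprit.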
<python><azure-data-explorer>
2024-04-17 14:23:07
0
1,032
BioData41