Dataset columns (type, observed min to max):
QuestionId: int64, 74.8M to 79.8M
UserId: int64, 56 to 29.4M
QuestionTitle: string, lengths 15 to 150
QuestionBody: string, lengths 40 to 40.3k
Tags: string, lengths 8 to 101
CreationDate: string date, 2022-12-10 09:42:47 to 2025-11-01 19:08:18
AnswerCount: int64, 0 to 44
UserExpertiseLevel: int64, 301 to 888k
UserDisplayName: string, lengths 3 to 30, nullable
76,150,890
3,515,174
Python variadic generics accepting class
<p><a href="https://docs.python.org/3/library/typing.html#typing.Type" rel="nofollow noreferrer">a variable annotated with Type[C] may accept values that are classes themselves</a></p> <p>How do you do this with <a href="https://peps.python.org/pep-0646/#type-variable-tuples" rel="nofollow noreferrer">TypeVarTuples (PEP 646)</a></p> <pre class="lang-py prettyprint-override"><code>from typing import TypeVar, TypeVarTuple T = TypeVar(&quot;T&quot;) Ts = TypeVarTuple(&quot;Ts&quot;) def foo(a: T) -&gt; T: ... def bar(a: type[T]) -&gt; T: ... def baz(*a: *Ts) -&gt; tuple[*Ts]: ... a = foo(int) # type[int] b = bar(int) # int c = baz(int, str) # tuple[type[int], type[str]] # What syntax? def qux(*a: type[*Ts]) -&gt; tuple[*Ts]: ... d = qux(int, str) # should be tuple[int, str] </code></pre>
<python><generics><python-typing>
2023-05-01 23:55:55
1
4,502
Mardoxx
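As far as I can tell, PEP 646 provides no way to map a constructor like `type[]` across a `TypeVarTuple`, so `type[*Ts]` has no supported spelling. A common workaround for the question above is fixed-arity overloads; this is a sketch, not the one true answer (only two arities shown):

```python
from typing import TypeVar, overload

T1 = TypeVar("T1")
T2 = TypeVar("T2")

@overload
def qux(a: type[T1]) -> tuple[T1]: ...
@overload
def qux(a: type[T1], b: type[T2]) -> tuple[T1, T2]: ...
def qux(*a):
    # At runtime, simply hand the classes back as a tuple
    return tuple(a)

d = qux(int, str)  # a checker picks the two-argument overload: tuple[int, str]
```

More arities require more overloads; there is no variadic shortcut because the transformation `Ts -> type[Ts]` would need a higher-kinded "map" that the typing system lacks.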
76,150,870
4,158,016
How to get key:value style JSON output while scraping using Python Selenium
<p>My data is like this, repeated for different companies likewise</p> <pre class="lang-html prettyprint-override"><code>&lt;!DOCTYPE html&gt; &lt;html lang=&quot;en&quot;&gt; &lt;head&gt;&lt;/head&gt; &lt;body&gt; &lt;script type=&quot;text/javascript&quot;&gt;&lt;/script&gt; &lt;div class=&quot;result-container&quot; id=&quot;main&quot;&gt; &lt;div&gt; &lt;ul class=&quot;search-results-container&quot;&gt; &lt;div data-testid=&quot;downloadCompanyCard&quot;&gt; &lt;li class=&quot;search-result-item sr-collapsed &quot;&gt; &lt;div class=&quot;ant-row ant-row-start ant-row-top search-item-row spacer-small-bottom search-item-row-padding-bottom&quot;&gt;&lt;svg xmlns=&quot;http://www.w3.org/2000/svg&quot; viewBox=&quot;0 0 24 24&quot; height=&quot;24px&quot; class=&quot;sr-collapse-button&quot; data-testid=&quot;CompanyCardClose24Px&quot;&gt; &lt;/svg&gt; &lt;div class=&quot;ant-col ant-col-20 results-col-padding&quot;&gt; &lt;div class=&quot;results-info paragraph&quot;&gt; &lt;div&gt;&lt;/div&gt; &lt;div&gt;&lt;a class=&quot;results-name strong&quot; href=&quot;/company/2c257c50-0f76-3886-914d-4ead9d3ea998&quot;&gt;Republic of kazinboom&lt;/a&gt; &lt;/div&gt; &lt;/div&gt; &lt;/div&gt; &lt;/div&gt; &lt;div class=&quot;ant-row ant-row-start ant-row-top relative-day-ago relative-day-row-padding&quot;&gt; &lt;div class=&quot;ant-col ant-col-14 company-vitals-col-padding&quot;&gt; &lt;div class=&quot;paragraph&quot; data-testid=&quot;CompanyCardVitals&quot;&gt; &lt;div class=&quot;inlined-card-info&quot;&gt;&lt;span class=&quot;strong&quot;&gt;Kyonesto, County of Cukaliya&lt;/span&gt; &lt;div data-testid=&quot;CompanyCardVitalsPhone&quot;&gt;&lt;span class=&quot;dot dark&quot;&gt;&lt;/span&gt;&lt;span class=&quot;no-wrap&quot;&gt;+0-765-543-4321&lt;/span&gt;&lt;/div&gt; &lt;/div&gt; &lt;div&gt;Top Executive and Legislature&lt;/div&gt; &lt;div&gt;Public Sector&lt;span class=&quot;dot dark&quot;&gt;&lt;/span&gt;Parent&lt;span class=&quot;dot dark&quot; 
data-testid=&quot;CompanyCardVitalsHQHeader&quot;&gt;&lt;/span&gt;Headquarters&lt;/div&gt; &lt;div data-testid=&quot;CompanyCardVitalsCompcode&quot;&gt;AL-P-H-A: 12-345-6789&lt;/div&gt; &lt;/div&gt; &lt;/div&gt; &lt;div class=&quot;ant-col ant-col-10&quot;&gt; &lt;div class=&quot;ant-row ant-row-no-wrap ant-row-start ant-row-top&quot;&gt; &lt;div class=&quot;ant-col ant-col-none ant-col-md-16 ant-col-lg-14 ant-col-xl-14&quot;&gt; &lt;div data-testid=&quot;CompanyCardStatsId&quot; class=&quot;ant-row ant-row-start ant-row-top&quot;&gt; &lt;div class=&quot;ant-col ant-col-none grid-data grid-data-sales&quot;&gt; &lt;table class=&quot;paragraph&quot;&gt; &lt;tbody&gt; &lt;tr data-testid=&quot;CompanyCardStatsAssets&quot;&gt; &lt;td&gt;Assets:&lt;/td&gt; &lt;td&gt;1100.50T&lt;/td&gt; &lt;td&gt;&lt;/td&gt; &lt;/tr&gt; &lt;tr data-testid=&quot;CompanyCardStatsEmployees&quot;&gt; &lt;td&gt;Employees (HQ):&lt;/td&gt; &lt;td&gt;1&lt;/td&gt; &lt;/tr&gt; &lt;tr data-testid=&quot;CompanyCardStatsEmployeesTotal&quot;&gt; &lt;td&gt;Employees (Total):&lt;/td&gt; &lt;td&gt;92002.768886B&lt;/td&gt; &lt;/tr&gt; &lt;/tbody&gt; &lt;/table&gt; &lt;/div&gt; &lt;/div&gt; &lt;/div&gt; &lt;div class=&quot;ant-col ant-col-none trigger-signals-column&quot; style=&quot;flex: 1 1 auto; min-width: 0px;&quot;&gt; &lt;div class=&quot;company-card-ideal-profile-score spacer-small-bottom&quot;&gt;&lt;/div&gt; &lt;div class=&quot;text-align-right&quot;&gt;&lt;small&gt;Added: 13-Apr-2023&lt;/small&gt;&lt;/div&gt; &lt;/div&gt; &lt;/div&gt; &lt;/div&gt; &lt;/div&gt; &lt;/li&gt; &lt;/div&gt; &lt;/ul&gt; &lt;/div&gt; &lt;/div&gt; &lt;/script&gt; &lt;/body&gt; &lt;/html&gt; </code></pre> <p>My current python selenium code, gets me text out of it</p> <pre class="lang-py prettyprint-override"><code>tdata = browser.find_element(By.XPATH,'//*[@id=&quot;main&quot;]/div[3]/div[2]/div[2]').text with open('testdata.json', 'w') as f: json.dump(tdata,f) </code></pre> <p>Now output I get is plain text.</p> <p>What I 
want to achieve is <code>key:value</code>-style output, to make integration with other applications easy; not all company records will have all fields (e.g. there may be no assets reported for some companies), and in such a case having <code>key:value</code> pairs (e.g. <code>asset: 1 trillion</code>) helps.</p> <p>And for the fields which don't have a tag associated (e.g. <code>company_name</code>), I want to add a fixed key name, e.g. <code>&quot;company_name&quot;:&quot;Republic...&quot;</code></p> <p>Appreciate any pointers.</p>
<python><selenium-webdriver><web-scraping>
2023-05-01 23:51:15
1
450
itsavy
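One approach to the question above: instead of taking `.text` of a positional XPath (which flattens everything into plain text), key the fields off the `data-testid` attributes the page already carries. A rough stdlib-only sketch of the idea; in the real script the markup would come from `driver.page_source`, and the field names here are taken from the sample HTML:

```python
import json
from html.parser import HTMLParser

class CardParser(HTMLParser):
    """Collects text under elements that carry a data-testid attribute."""
    def __init__(self):
        super().__init__()
        self._stack = []   # data-testid value (or None) for each open tag
        self.fields = {}

    def handle_starttag(self, tag, attrs):
        self._stack.append(dict(attrs).get("data-testid"))

    def handle_endtag(self, tag):
        if self._stack:
            self._stack.pop()

    def handle_data(self, data):
        data = data.strip()
        if not data:
            return
        # Attribute the text to the innermost enclosing tagged element
        for testid in reversed(self._stack):
            if testid:
                self.fields.setdefault(testid, []).append(data)
                break

html = '''
<div data-testid="CompanyCardVitals">
  <span>Kyonesto, County of Cukaliya</span>
  <div data-testid="CompanyCardVitalsPhone"><span>+0-765-543-4321</span></div>
</div>
'''
p = CardParser()
p.feed(html)
record = {k: " ".join(v) for k, v in p.fields.items()}
print(json.dumps(record))
```

Untagged fields such as the company name could be mapped by class (`results-name`) the same way, under a fixed key like `"company_name"`.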
76,150,862
6,321,559
Profiling Server Side vs Client Side Psycopg2 Cursor
<p>I'm profiling code to see the difference between bulk load from a Postgres DB source or stream from the source. The design for bulk is <code>psycopg2</code> with a client side cursor. The design I'm testing for stream is a server side cursor with an <code>itersize</code> of 20,000. The test table has ~600,000 rows.</p> <p>The &quot;server side&quot; script I'm running is:</p> <pre><code>with psycopg2.connect(dbname=.., host=.., port=.., user=.., password=..) as conn: cursor = conn.cursor(&quot;server_side&quot;, cursor_factory=psycopg2.extras.RealDictCursor) cursor.itersize = 20_000 cursor.execute(&quot;select * from schema.large_table&quot;) for i, row in enumerate(cursor): if i % 1000 == 0: print(f&quot;Row number {i}&quot;) if i == 10_000: break </code></pre> <p>I expect this to fetch 20,000 rows to the client then iterate over them until it gets to 10,000 and stop.</p> <p>The &quot;client side&quot; code is:</p> <pre><code>with psycopg2.connect(dbname=.., host=.., port=.., user=.., password=..) as conn: cursor = conn.cursor(cursor_factory=psycopg2.extras.RealDictCursor) cursor.execute(&quot;select * from schema.large_table&quot;) for i, row in enumerate(cursor): if i % 1000 == 0: print(f&quot;Row number {i}&quot;) if i == 10_000: break </code></pre> <p>I expect this to fetch the whole table (600,000 rows) then iterate over 10,000 rows and stop. This seems like it should be slower because it has to move more data over the network. 
In practice I see the memory usage of the process grow much higher for the client side solution, but the client side solution runs for 65 seconds while the server side solution takes ~10x as long (619 s).</p> <p>The output for each is the same so I know they're processing the same number of rows:</p> <pre><code>Row number 0 Row number 1000 Row number 2000 Row number 3000 Row number 4000 Row number 5000 Row number 6000 Row number 7000 Row number 8000 Row number 9000 Row number 10000 </code></pre> <p>Why is server side slower?</p> <h2>Edit:</h2> <p>Applying the changes suggested in the comments (<code>enumerate(cursor.fetchone())</code> -&gt; <code>enumerate(cursor)</code>) results in expected behavior. I updated the code snippets for posterity.</p> <p>After I fixed the bug in our code, which was iterating through every value of every row instead of row by row, the client side cursor took ~4.5x longer than the server side one, with more memory used.</p>
<python><postgresql><psycopg2>
2023-05-01 23:50:03
0
2,046
it's-yer-boy-chet
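The slowdown described above traces back to the bug named in the edit: `enumerate(cursor.fetchone())` enumerates the values inside the *first row*, not the rows. A database-free sketch of the difference, with a plain list of dicts standing in for a `RealDictCursor`:

```python
# Stand-in for a RealDictCursor: an iterable of dict rows
rows = [{"id": i, "name": f"row{i}"} for i in range(5)]

# Buggy pattern: fetchone() returns ONE row; iterating it walks that
# row's keys, so the loop counts columns, not rows
first_row = rows[0]
buggy_iterations = [item for item in first_row]      # the column names

# Fixed pattern: iterate (or enumerate) the cursor itself, row by row
fixed_iterations = [row for _, row in enumerate(rows)]
```

With a real server-side cursor, the fixed loop lets `itersize` batches stream as intended instead of replaying per-value round trips.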
76,150,855
8,276,973
How to get correct path for Sphinx autodoc with os.path
<p>I just built Sphinx for the first time and generated all of the skeleton files. Next I want to add another path with Python files for autodoc to analyze. <code>sphinx.ext.autodoc</code> is in conf.py as an extension.</p> <p>My Sphinx install is in: <code>/home/mbcn/Software_Projects/Sphinx</code>.<br /> The other path I want to include is <code>/home/mbcn/Software_Projects/Python_Projects/Svnx</code></p> <p>I edited <code>conf.py</code> and uncommented these lines:</p> <pre><code>import os import sys sys.path.insert(0, os.path.abspath('.')) </code></pre> <p>Then I ran make html again. Nothing changed; no new files were analyzed. So I tried:</p> <pre><code>sys.path.insert(0, os.path.abspath('/home/mbcn/Software_Projects/Python_Projects/Svnx/')) </code></pre> <p>Still nothing changed. So I tried:</p> <pre><code>sys.path.insert(0, os.path.abspath('os.path.realpath('/home/mbcn/Software_Projects/Python_Projects/Svnx/’)')) </code></pre> <p>What am I doing wrong?</p> <p>Thanks very much for any help.</p>
<python><python-3.x><python-sphinx><autodoc>
2023-05-01 23:48:34
0
2,353
RTC222
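On the question above: the third attempt nests a quoted `os.path.realpath` call *inside* a string literal, which is a syntax error; `os.path.abspath` just takes the plain path. A sketch of what the `conf.py` lines would look like (path taken from the question; note that even with the path right, autodoc only analyzes modules that the `.rst` files reference via `.. automodule::` directives):

```python
import os
import sys

# Absolute path to the project whose modules autodoc should import
project_path = os.path.abspath('/home/mbcn/Software_Projects/Python_Projects/Svnx')
sys.path.insert(0, project_path)
```

The original uncommented line, `os.path.abspath('.')`, only adds the Sphinx source directory itself, which is why nothing new was analyzed.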
76,150,773
2,687,317
pandas pivot table aggregating over 2 different columns and 2 functions: sum and mean
<p>I have data of the form</p> <pre><code> Date_Time LF Name Count pwr TS 0 2022-08-03 00:00:02 2885184100 OpenP1 1 0.302229 1 1 2022-08-03 00:00:02 2885184100 Net3 1 4.790000 3 2 2022-08-03 00:00:02 2885184100 OpenP1 1 0.000000 1 3 2022-08-03 00:00:02 2885184100 OpenP1 3 1.300000 4 4 2022-08-03 00:00:02 2885184100 Net3 1 0.033000 4 5 2022-08-03 00:00:05 2885184220 OpenP1 1 0.302229 1 6 2022-08-03 00:00:05 2885184220 Net3 1 4.790000 3 7 2022-08-03 00:00:05 2885184220 OpenP1 1 0.520000 1 8 2022-08-03 00:00:05 2885184220 OpenP1 2 0.000000 4 9 2022-08-03 00:00:05 2885184220 Net3 1 0.440000 4 </code></pre> <p>And I need to aggregate in the following way:</p> <p>For each LF, I want to:</p> <ol> <li>SUM the <code>pwr</code> field for all the same values of <code>TS</code> AND <code>Name</code> then</li> <li>AVERAGE across the (up to 4) TS <em>for each</em> <code>Name</code>. That final mean would be associated with LF.</li> </ol> <p>So for LF = 2885184100 we have OpenP1 0.302229 1 OpenP1 0.000000 1</p> <p>OpenP1 1.300000 4</p> <p>TS 1 total = 0.302229 TS 4 total = 1.3</p> <p>So the average for LF = 2885184100 and OpenP2 is 0.801115 I need to do the same for Net3 at LF = 2885184100 and then repeat for each LF...</p> <p>I also want to sum <code>Count</code> for each LF -- that should be straightfoward:</p> <pre><code> Date_Time LF OpenP1_pwr Net3_pwr OpenP1_cnt Net3_cnt 0 2022-08-03 00:00:02 2885184100 0.801115 1.205750 5 2 1 2022-08-03 00:00:05 288518220 0.205557 2.615000 4 2 </code></pre> <p>I thought I could figure out a pivot table for this... but I can't get it right.</p> <p>I thought something like this was close:</p> <pre><code>tmp=theData.pivot_table(index='LF', columns=['Name','TS'], values=['pwr', 'Count'], aggfunc={'pwr': '??','Count':'sum'}, fill_value=0) </code></pre> <p>but them I'm stuck.</p>
<python><pandas><pivot-table>
2023-05-01 23:20:07
1
533
earnric
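A single `pivot_table` can't chain sum-then-mean over different groupings, but two `groupby` steps express it directly. A sketch against a cut-down copy of the data above (the `pwr` mean averages over the distinct `TS` values actually present, which reproduces the 0.801115 worked example; the question's two expected-output rows imply different divisors, so that part may need adjusting):

```python
import pandas as pd

df = pd.DataFrame({
    "LF":    [2885184100] * 5 + [2885184220] * 5,
    "Name":  ["OpenP1", "Net3", "OpenP1", "OpenP1", "Net3"] * 2,
    "Count": [1, 1, 1, 3, 1, 1, 1, 1, 2, 1],
    "pwr":   [0.302229, 4.79, 0.0, 1.3, 0.033,
              0.302229, 4.79, 0.52, 0.0, 0.44],
    "TS":    [1, 3, 1, 4, 4, 1, 3, 1, 4, 4],
})

# Step 1: sum pwr over identical (LF, Name, TS) triples
per_ts = df.groupby(["LF", "Name", "TS"])["pwr"].sum()

# Step 2: average those sums across TS for each (LF, Name)
pwr = per_ts.groupby(["LF", "Name"]).mean().unstack().add_suffix("_pwr")

# Count is a plain per-(LF, Name) sum
cnt = df.groupby(["LF", "Name"])["Count"].sum().unstack().add_suffix("_cnt")

out = pwr.join(cnt)
```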
76,150,759
12,022,239
FastAPI - Struggling to use dependency injection to inject a service class into an endpoint
<p>I have an AI related FastAPI application, and I have the business logic nicely separated into multiple services. I want to use inject these services into my application so I can use them, but I can't get it to work. These are the services I have:</p> <p>aws_rekognition_service.py</p> <pre><code>from io import BytesIO # AWS SDK import boto3 class AWSRekognitionService: def __init__(self): self.client = boto3.client(&quot;rekognition&quot;) # rest of the business logic </code></pre> <p>grounding_dino_service.py</p> <pre><code>import torch from PIL import Image # Grounding DINO. import groundingdino.datasets.transforms as T from groundingdino.models import build_model from groundingdino.util.slconfig import SLConfig from groundingdino.util.utils import ( clean_state_dict, get_phrases_from_posmap, ) class GroundingDINOService: def __init__(self) -&gt; None: pass # rest of the business logic </code></pre> <p>image_segmentation_service.py</p> <pre><code>import numpy as np import cv2 from PIL import Image # Services from aws_rekognition_service import AWSRekognitionService # Initialize service aws_rekognition_service = AWSRekognitionService() class ImageSegmentationService: def __init__(self) -&gt; None: pass # rest of the business logic </code></pre> <p>segment_anything_service.py</p> <pre><code>import cv2 import torch # Segment Anything. from segment_anything import build_sam, SamPredictor # Services. from grounding_dino_service import GroundingDINOService # Initialize services. grounding_dino_service = GroundingDINOService() class SegmentAnythingService: def __init__(self) -&gt; None: pass # rest of the business logic </code></pre> <p>visualization_service.py</p> <pre><code>import matplotlib.pyplot as plt import numpy as np import os import json import torch import cv2 class VisualizationService: def __init__(self) -&gt; None: pass # rest of the business logic </code></pre> <p>Two services depend on AWSRekognitionService. 
Besides that, I have one main service, book_recognition_service.py, which uses all services above:</p> <pre><code>import os import json # Services. from grounding_dino_service import GroundingDINOService from image_segmentation_service import ImageSegmentationService from segment_anything_service import SegmentAnythingService from visualization_service import VisualizationService # Initialize services. grounding_dino_service = GroundingDINOService() image_segmentation_service = ImageSegmentationService() segment_anything_service = SegmentAnythingService() visualization_service = VisualizationService() with open(&quot;../config/ml_config.json&quot;) as json_file: config = json.load(json_file) class BookRecognitionService: def __init__(self) -&gt; None: pass async def recognize(): # business logic </code></pre> <p>I want to inject the BookRecognitionService into one of my routers so I can use it in an endpoint, and I thought it would be as simple as doing this in my book router:</p> <p>book.py</p> <pre><code>... # Services. from services.book_recognition_service import BookRecognitionService # Instantiate services. book_recognition_service = BookRecognitionService() @router.get(&quot;/recognize&quot;, status_code=status.HTTP_200_OK) async def recognize_boks(): book_texts = await book_recognition_service.recognize() return JSONResponse(status_code=status.HTTP_200_OK, content=book_texts) ... </code></pre> <p>Sadly, this does not work. 
This is my main.py file:</p> <pre><code># FastAPI from fastapi import FastAPI # MongoDB driver from motor.motor_asyncio import AsyncIOMotorClient from config import mongo_config # Router imports from routers import auth, book app = FastAPI() @app.on_event(&quot;startup&quot;) async def startup_mongo_client(): app.mongodb_client = AsyncIOMotorClient(mongo_config.MONGO_URI) app.mongodb = app.mongodb_client[mongo_config.MONGO_DB_NAME] @app.on_event(&quot;shutdown&quot;) async def shutdown_mongo_client(): app.mongodb_client.close() app.include_router(auth.router) app.include_router(book.router) </code></pre> <p>When running my FastAPI application, I now get the following error stack:</p> <pre><code> File &quot;C:\Users\Stefan\book-recognizer-backend\api\main.py&quot;, line 9, in &lt;module&gt; from routers import auth, book File &quot;C:\Users\Stefan\book-recognizer-backend\api\routers\book.py&quot;, line 15, in &lt;module&gt; from services.book_recognition_service import BookRecognitionService File &quot;C:\Users\Stefan\book-recognizer-backend\api\services\book_recognition_service.py&quot;, line 5, in &lt;module&gt; from grounding_dino_service import GroundingDINOService ModuleNotFoundError: No module named 'grounding_dino_service' </code></pre> <p>And this goes for all services. My question is: how do I properly inject the BookRecognitionService into my book router so I can use its functionality in an API endpoint? I'm very new to FastAPI and the dependency injection documentation is not really helping. Thank you in advance!</p>
<python><fastapi>
2023-05-01 23:16:08
0
831
Stefan Jaspers
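The traceback above is an import problem rather than a DI problem: when the app runs as the `api` package, a bare `from grounding_dino_service import ...` cannot resolve; it would need to be package-qualified (`from services.grounding_dino_service import ...`) or relative (`from .grounding_dino_service import ...`). For the injection itself, the usual FastAPI pattern is a cached provider function passed to `Depends`, instead of instantiating services at module import time. A framework-free sketch of that provider pattern (class body and return value are placeholders):

```python
import asyncio
from functools import lru_cache

class BookRecognitionService:
    def __init__(self) -> None:
        pass

    async def recognize(self):
        # placeholder for the real recognition pipeline
        return ["book one", "book two"]

@lru_cache
def get_book_recognition_service() -> BookRecognitionService:
    # cached provider: every call shares one instance per process
    return BookRecognitionService()

# FastAPI usage would look like (sketch, not run here):
#   @router.get("/recognize")
#   async def recognize_books(
#       svc: BookRecognitionService = Depends(get_book_recognition_service)):
#       return await svc.recognize()

books = asyncio.run(get_book_recognition_service().recognize())
```

The `lru_cache` also avoids re-loading heavy models (Grounding DINO, SAM) on every request.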
76,150,615
8,667,016
Adding items from one list to another list of lists
<p>Suppose I have the following lists:</p> <pre class="lang-py prettyprint-override"><code>list_1 = [[1,2],[3,4],[5,6],[7,8]] list_2 = [11,12,13,14] </code></pre> <p>How can I add items from the second list to each of the items from the first one?</p> <p>Stated clearly, this is the result I'm looking for:</p> <pre class="lang-py prettyprint-override"><code>desired_output = [[1, 2, 11], [3, 4, 12], [5, 6, 13], [7, 8, 14]] </code></pre> <h3>What I've tried</h3> <p>I've tried using the <code>zip</code> function, but the results I get are nested:</p> <pre class="lang-py prettyprint-override"><code>list(zip(list_1,list_2)) # [((1, 2), 11), ((3, 4), 12), ((5, 6), 13), ((7, 8), 14)] </code></pre> <p>Notice how it doesn't follow the actual desired output because of the extra degree of nesting.</p>
<python><list><iterable><unpack><iterable-unpacking>
2023-05-01 22:37:29
4
1,291
Felipe D.
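For the question above, `zip` already pairs the items correctly; the leftover nesting disappears if each pair is re-spread into a flat list. A minimal sketch:

```python
list_1 = [[1, 2], [3, 4], [5, 6], [7, 8]]
list_2 = [11, 12, 13, 14]

# Unpack each sublist and append the paired scalar
desired_output = [[*sub, extra] for sub, extra in zip(list_1, list_2)]
```

`sub + [extra]` works equally well; the `[*sub, extra]` form just avoids building the one-element temporary list.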
76,150,614
3,380,902
Nested struct type not supported pyspark error
<p>I am attempting to apply a function to a pyspark DataFrame and save the API response to a new column and then parse using <code>json_normalize</code>. This works fine in pandas, however, I run into an exception with <code>pyspark</code>.</p> <pre><code>import pyspark.pandas as ps import pandas as pd import requests def get_vals(row): # make api call return row['A'] * row['B'] # Create a pandas DataFrame pdf = pd.DataFrame({'A': [1, 2, 3], 'B': [4, 5, 6]}) # apply function - get api responses pdf['api_response'] = pdf.apply(lambda row: get_vals(row), axis=1) pdf.sample(5) # Unpack JSON API Response try: dff = pd.json_normalize(pdf['api_response'].str['location']) except TypeError as e: print(f&quot;Error: {e}&quot;) print(f&quot;Problematic data: {data['data']}&quot;) # To pySpark DataFrame psdf = ps.DataFrame(df) psdf.head(5) </code></pre> <p>Expected output is a <code>json</code> normalized DataFrame. When I attempt to apply the function over a <code>pyspark</code> DataFrame, it throws an exception:</p> <pre><code>psdf['api_response'] = psdf.apply(lambda row: get_vals(row), axis=1) --------------------------------------------------------------------------- TypeError Traceback (most recent call last) File &lt;command-4372401754138893&gt;:2 1 ----&gt; 2 psdf['api_response'] = psdf.apply(lambda row: get_vals(row), axis=1) TypeError: Nested StructType not supported in conversion from Arrow: struct&lt;data: struct&lt;geographies: </code></pre>
<python><pandas><pyspark>
2023-05-01 22:37:29
0
2,022
kms
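On the error above: the Arrow bridge behind `pyspark.pandas` rejects nested `StructType` columns, so one workaround is to flatten (or JSON-serialize) the nested API response while still in plain pandas, *before* handing the frame to `ps.DataFrame`. A pandas-only sketch of the flattening step; the response shape here is an assumption based on the `struct<data: struct<geographies: ...` fragment in the error message:

```python
import pandas as pd

pdf = pd.DataFrame({"A": [1, 2], "B": [4, 5]})

# Hypothetical API responses shaped like the struct in the error message
pdf["api_response"] = [
    {"data": {"geographies": {"location": {"lat": 1.0, "lon": 2.0}}}},
    {"data": {"geographies": {"location": {"lat": 3.0, "lon": 4.0}}}},
]

# Flatten the nested dicts into scalar columns while still in plain pandas
flat = pd.json_normalize(pdf["api_response"].tolist())
pdf = pd.concat([pdf.drop(columns="api_response"), flat], axis=1)
# Columns are now flat (e.g. "data.geographies.location.lat"), so the
# conversion to pyspark.pandas no longer meets a nested struct
```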
76,150,515
18,551,983
filter pandas dataframe by inserting zeros based on another data frame's index
<p>Is there any way to filter the first dataframe based on the index of the second dataframe and generate the output dataframe? In the first dataframe, we filter out the rows whose index labels are present in the second dataframe, replacing those rows entirely with zeros.</p> <p>first dataframe</p> <pre><code> C1 C2 C3 C4 A 1 1 1 1 B 1 1 1 1 C 0 0 0 0 D 1 1 1 1 </code></pre> <p>second dataframe</p> <pre><code> C1 C2 C3 C4 A 1 1 1 1 </code></pre> <p>output dataframe</p> <pre><code> C1 C2 C3 C4 A 0 0 0 0 B 1 1 1 1 C 0 0 0 0 D 1 1 1 1 </code></pre>
<python><python-3.x><pandas><dataframe><dictionary>
2023-05-01 22:14:23
1
343
Noorulain Islam
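Since the second frame's index marks which rows to zero out, `Index.isin` gives the boolean row mask directly. A minimal sketch on the data above:

```python
import pandas as pd

df1 = pd.DataFrame(
    [[1, 1, 1, 1], [1, 1, 1, 1], [0, 0, 0, 0], [1, 1, 1, 1]],
    index=list("ABCD"), columns=["C1", "C2", "C3", "C4"],
)
df2 = pd.DataFrame([[1, 1, 1, 1]], index=["A"], columns=df1.columns)

# Zero every row of df1 whose index label appears in df2's index
out = df1.copy()
out.loc[out.index.isin(df2.index)] = 0
```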
76,150,479
5,022,913
Expose Grafana from Docker behind Nginx
<h2>Background</h2> <p>My app stack uses several Docker containers managed by <code>docker compose</code>:</p> <ul> <li>PostgreSQL at port 5433</li> <li>FastAPI (Python) backend at port 8000</li> <li>NodeJS frontend at port 8090 and exposed to 80 (via Nginx, see further)</li> <li>pgAdmin at port 5050</li> <li>Prometheus logging at port 9090</li> <li>Grafana at port 3090</li> </ul> <p>See the container info returned by <code>docker-compose ps</code>:</p> <p><a href="https://i.sstatic.net/fHcVx.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/fHcVx.png" alt="enter image description here" /></a></p> <h2>Docker compose config</h2> <p>The containers are exposed in the <code>docker-compose.override.yml</code> config as follows:</p> <pre class="lang-yaml prettyprint-override"><code>version: &quot;3.8&quot; services: pgadmin: ports: - &quot;5050:5050&quot; frontend: ports: - &quot;8090:80&quot; prometheus: ports: - &quot;9090:9090&quot; grafana: ports: - &quot;3090:3090&quot; </code></pre> <h2>Internal proxy config</h2> <p>Inside the frontend container, there's a Nginx proxy that manages the port routing inside the app stack. 
Its config is like this:</p> <pre><code>server { listen 80; server_name example.com; gzip on; gzip_types text/plain application/xml text/css application/javascript application/json; gzip_min_length 1000; charset utf-8; include /etc/nginx/mime.types; client_max_body_size 20M; root /usr/share/nginx/html; location / { expires $expires; index index.html index.htm; try_files $uri $uri/ /index.html; } location /api/ { proxy_pass http://backend:8000/; proxy_http_version 1.1; proxy_set_header Host $host; proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_read_timeout 1800; proxy_connect_timeout 1800; } } </code></pre> <h2>Host proxy config</h2> <p>The host itself has a running Nginx proxy configured simply like so:</p> <pre><code>server { listen 80 default_server; listen [::]:80 default_server; server_name _; location / { proxy_pass http://127.0.0.1:8090/; proxy_set_header Host $host; proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_set_header X-Forwarded-Proto $scheme; } } </code></pre> <h2>The Question</h2> <p>I want to be able to view the Grafana dashboard from <code>http://&lt;myip&gt;:3090</code> and pgAdmin from <code>http://&lt;myip&gt;:5050</code>, but currently 'Page Not Found (404)' is returned by the browser (without any Nginx header). When I try <code>http://&lt;myip&gt;:3000</code> (seeing that Grafana also runs at 3000, although I explicitly set it to 3090), I get the same error.</p> <p>At the same time, I can somehow access Prometheus at <code>http://&lt;myip&gt;:9090</code>... But not Grafana or pgAdmin. What did I miss out?</p>
<python><docker><nginx><docker-compose><grafana>
2023-05-01 22:07:42
1
584
s0mbre
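The 404 above is likely a host-port/container-port mismatch: by default Grafana listens on 3000 inside its container, and the pgAdmin image listens on 80 (5050 only if `PGADMIN_LISTEN_PORT` is set), so a `"3090:3090"` mapping points at a container port nothing is bound to. A sketch of the corrected `docker-compose.override.yml` mappings, assuming default in-container ports:

```yaml
services:
  pgadmin:
    ports:
      - "5050:80"      # host 5050 -> pgAdmin's default container port 80
  grafana:
    ports:
      - "3090:3000"    # host 3090 -> Grafana's default container port 3000
```

Prometheus works already because its `"9090:9090"` mapping happens to match its real listen port.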
76,150,400
16,527,596
Print out 2 graphs in Python
<p>Im a beginner when it comes to python OOP so i ask for some help regarding some small stuff.</p> <p>Firstly before we are required to print out the results we are asked to calculate different stuff for example one of them is from Class BSC the value of Snr_BSC(Its just formulas)</p> <p>What i dont understand though for this first part we are required to print out the result first then we transform it into DB(a premade function given to us by professors) BUt what i dont get is when i go to print out the result it gives one single value.</p> <p>THe <code>error_probability</code> given in the <code>exercise1()</code> function is not just one value so why is it printing out just one</p> <pre><code>class BSC(): def __init__(self,error_probability): self.error_probability=error_probability @property def snr_BSC(self): #interested in this one here which is found with the formula below error_probability=self.error_probability snr_lin_BSC=1/(4*error_probability) return snr_lin_BSC @property def snr_db(self): snr_lin_BSC = self.snr_BSC snr_db = lin2db(snr_lin_BSC) return snr_db </code></pre> <p>Secondly we are required to print out two graphs separetely in <code>exercise1</code> but i dont know how to do that</p> <pre><code>def exercise1(): fs_step = 2.75625e3 error_probability = np.arange(start=1e-12, stop=1) n_bit = np.array([2, 3, 4, 6, 8, 10, 12, 14, 16], dtype=np.int64) adc=ADC(n_bit) bsc=BSC(error_probability) print('Quantization SNR',adc.snr) plt.plot(n_bit,adc.snr_db) #first funct i want to print plt.xlabel('n_bit') plt.ylabel('SNR in db') print('Quantization SNR_BSC', bsc.snr_BSC) plt.xlabel('P_e') plt.xscale('log') plt.plot(error_probability,bsc.snr_db) #Secondfunct i want to print plt.ylabel('SNR in db') plt.show() </code></pre> <p>For those that need to see the whole project</p> <pre><code>import numpy as np import matplotlib.pyplot as plt import matplotlib.colors as mcol color_dict = mcol.TABLEAU_COLORS def lin2db(x): return 10*np.log10(x) def db2lin(x): 
return 10**(x/10) class ADC(): def __init__(self,n_bit): self.n_bit=n_bit @property def snr(self): n_bit=self.n_bit M=2**n_bit snr_lin=M**2 return snr_lin @property def snr_db(self): snr_lin=self.snr snr_db=lin2db(snr_lin) return snr_db @property def m(self): M=2**n_bit return M class BSC(): def __init__(self,error_probability): self.error_probability=error_probability @property def snr_BSC(self): error_probability=self.error_probability snr_lin_BSC=1/(4*error_probability) return snr_lin_BSC @property def snr_db(self): snr_lin_BSC = self.snr_BSC snr_db = lin2db(snr_lin_BSC) return snr_db def exercise1(): fs_step = 2.75625e3 error_probability = np.arange(start=1e-12, stop=1) n_bit = np.array([2, 3, 4, 6, 8, 10, 12, 14, 16], dtype=np.int64) adc=ADC(n_bit) bsc=BSC(error_probability) print('Quantization SNR',adc.snr) plt.plot(n_bit,adc.snr_db) plt.xlabel('n_bit') plt.ylabel('SNR in db') print('Quantization SNR_BSC', bsc.snr_BSC) plt.xlabel('P_e') plt.xscale('log') plt.plot(error_probability,bsc.snr_db) plt.ylabel('SNR in db') plt.show() exercise1() </code></pre>
<python>
2023-05-01 21:52:40
0
385
Severjan Lici
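The single printed value in the question above is explained by `np.arange`'s default step of 1: `np.arange(start=1e-12, stop=1)` yields exactly one element, so `error_probability` (and hence `snr_BSC`) is a one-value array. A sketch of the array fix, with `np.logspace` providing the intended sweep (the 200-point count is an arbitrary choice):

```python
import numpy as np

# Default step is 1, so this "range" holds a single value: [1e-12]
ep_bug = np.arange(start=1e-12, stop=1)

# Log-spaced error probabilities from 1e-12 up to (almost) 1 instead
ep_fix = np.logspace(-12, -0.001, num=200)

# Same formula as BSC.snr_BSC, now evaluated over the whole sweep
snr_lin = 1 / (4 * ep_fix)
```

For the second part of the question, calling `plt.figure()` (or `fig, ax = plt.subplots()`) before the second `plt.plot(...)` puts the two curves in separate windows instead of one shared figure.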
76,150,363
13,142,245
Executing asyncio method from a synchronous method / blocking until complete in Python
<p>I really don't want to update my large code base such that <code>async</code> and <code>await</code> are used in every method dependent on an asynchronous function or method. But this seems impossible.</p> <p>From what I can tell, there is no convenient way to</p> <ol> <li>Define two asynchronous functions</li> <li>Collect them and wait on them</li> <li>Block any further application logic until step 2 is complete</li> <li>Never mention async or await again in any such application logic downstream of 1 and 2.</li> </ol> <p>Is this possible?</p> <p>If so, how can I call asynchronous methods from a method that is not proceeded by the async keyword? (ex in 3.) Because if this is not possible, then async / await will spread like a virus through all my code.</p> <pre class="lang-py prettyprint-override"><code>import asyncio class Application: def __init__(self, A, B): self.A = A self.B = B async def pull(self, stuff): result = aws.get_stuff(stuff) return await result def applicationLogic(self): pass async def aws_wrapper(self): self.aPull = self.pull(self.A) self.bPull = self.pull(self.B) tasks = [self.aPull, self.bPull] return await asyncio.wait(tasks) def aws_async_executor(self): loop = asyncio.new_event_loop() loop.run_until_complete(self.aws_wrapper()) self.applicationLogic() </code></pre> <p>Error: &quot;errorMessage&quot;: &quot;An asyncio.Future, a coroutine or an awaitable is required&quot;,</p>
<python><python-asyncio>
2023-05-01 21:45:16
1
1,238
jbuddy_13
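The pattern asked for above does exist: keep `async` confined to a small cluster of coroutines, gather them, and expose one synchronous entry point that calls `asyncio.run`. Downstream application logic then never sees `await`. A sketch with a stand-in replacing the `aws.get_stuff` call:

```python
import asyncio

async def pull(stuff):
    # stand-in for awaiting aws.get_stuff(stuff)
    await asyncio.sleep(0)
    return f"result:{stuff}"

def pull_all_blocking(*items):
    """Synchronous facade: blocks until every pull completes."""
    async def _runner():
        return await asyncio.gather(*(pull(i) for i in items))
    return asyncio.run(_runner())

results = pull_all_blocking("A", "B")
# application logic continues synchronously with the results
```

`asyncio.gather` accepts bare coroutines and preserves argument order; using it instead of `asyncio.wait` also sidesteps the Task-wrapping requirement that newer Python versions enforce for `wait`.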
76,150,266
2,219,369
plot timestamps and index in the same plot
<p>Hi, I'm trying to figure out whether matplotlib can plot dates on one axis and the corresponding integer indices (to the dates; in the example, idx = np.arange(len(aapl))) on another axis of the same subplot, i.e. show the price with dates on one x-axis and with integer positions on a second one. Here is an example of a stock signal.</p> <pre><code>import yfinance as yf import matplotlib.pyplot as plt # Download Apple's stock price data aapl = yf.download(&quot;AAPL&quot;, start=&quot;2022-01-01&quot;, end=&quot;2023-04-30&quot;) # Plot the adjusted close price plt.plot(aapl['Adj Close']) # Set x-axis label, y-axis label, and title plt.xlabel('Date') plt.ylabel('Price') plt.title('AAPL Stock Price') # Display the plot plt.show() </code></pre>
<python><matplotlib><timestamp><indices>
2023-05-01 21:24:44
1
14,049
jonas
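One way to get both scales in the question above: plot against the integer positions, then add a second x-axis (`ax.twiny()`, or alternatively `ax.secondary_xaxis`) whose ticks are relabeled with the corresponding dates. A sketch with synthetic prices standing in for the yfinance download:

```python
import matplotlib
matplotlib.use("Agg")            # non-interactive backend for scripting
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd

# Synthetic stand-in for aapl['Adj Close']
dates = pd.date_range("2022-01-03", periods=100, freq="B")
price = pd.Series(np.linspace(170, 150, 100), index=dates)

idx = np.arange(len(price))
fig, ax = plt.subplots()
ax.plot(idx, price.to_numpy())   # plot against integer positions
ax.set_xlabel("Integer index")
ax.set_ylabel("Price")

# Second x-axis spanning the same data range, labelled with dates
ax2 = ax.twiny()
ax2.set_xlim(ax.get_xlim())
tick_pos = idx[::20]
ax2.set_xticks(tick_pos)
ax2.set_xticklabels(price.index[tick_pos].strftime("%Y-%m-%d"), rotation=45)
ax2.set_xlabel("Date")
```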
76,150,083
13,142,245
asyncio.wait in OOP design
<p>I have an application where my operations are network bound so I would like to execute these data pulls asynchronously.</p> <p>I make multiple data pulls to AWS, which is in the <code>pull</code> method. In the <code>runApplication</code> method, I use <code>asyncio.wait</code> for each of the two tasks.</p> <pre class="lang-py prettyprint-override"><code>import asyncio class Application: def __init__(self, A, B): self.A = A self.B = B async def pull(self, stuff): result = aws.get_stuff(stuff) return await result def applicationLogic(self): pass async def aws_wrapper(self): self.aPull = self.pull(self.A) self.bPull = self.pull(self.B) tasks = [self.aPull, self.bPull] await asyncio.wait(tasks) def aws_async_executor(self): asyncio.run(self.aws_wrapper) self.applicationLogic() </code></pre> <p>The application is failing because the <code>applicationLogic</code> method requires that <code>tasks</code> have been completed prior to execution.</p> <p>However, <code>applicationLogic</code> is being called before <code>tasks</code> is completed. How do I force the <code>applicationLogic</code> method to be blocked until <code>tasks</code> is complete?</p> <p>The error that I'm getting is</p> <blockquote> <p>An asyncio.Future, a coroutine or an awaitable is required</p> </blockquote>
<python><python-asyncio>
2023-05-01 20:48:35
0
1,238
jbuddy_13
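Two things in the snippet above can raise "An asyncio.Future, a coroutine or an awaitable is required": `asyncio.run(self.aws_wrapper)` passes the bound method object itself instead of calling it, and on Python 3.11+ `asyncio.wait` additionally refuses bare coroutines (they must be wrapped in Tasks). A sketch with both fixed, and `aws.get_stuff` replaced by a stand-in:

```python
import asyncio

class Application:
    def __init__(self, A, B):
        self.A = A
        self.B = B

    async def pull(self, stuff):
        await asyncio.sleep(0)        # stand-in for aws.get_stuff(stuff)
        return stuff

    async def aws_wrapper(self):
        # Wrap coroutines in Tasks; asyncio.wait requires this on 3.11+
        tasks = [asyncio.create_task(self.pull(self.A)),
                 asyncio.create_task(self.pull(self.B))]
        done, _pending = await asyncio.wait(tasks)
        return sorted(t.result() for t in done)

    def aws_async_executor(self):
        # Note the (): pass the coroutine object, not the method itself
        results = asyncio.run(self.aws_wrapper())
        return results                # applicationLogic() would run here

app = Application("a", "b")
results = app.aws_async_executor()
```

Because `asyncio.run` blocks until the wrapper returns, anything after it (the `applicationLogic` call) only runs once all tasks are complete.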
76,150,016
929,732
Need to install some MariaDB/MySQL tools for Python MySQL and having issues with dependencies
<h2>Here is the version I'm running.</h2> <pre><code>mysqld Ver 10.5.15-MariaDB-1:10.5.15+maria~focal-log for debian-linux-gnu on x86_64 (mariadb.org binary distribution)
</code></pre> <p>-- Attempting to install</p> <pre><code>sudo apt-get install libmariadbclient-dev
</code></pre> <p>And ending up with</p> <pre><code>libmariadbclient-dev : Depends: libmariadb-dev (= 1:10.3.38-0ubuntu0.20.04.1) but it is not going to be installed
E: Unable to correct problems, you have held broken packages.
</code></pre> <p>installed on the machine is</p> <pre><code>libdbd-mariadb-perl/focal,now 1.11-3ubuntu2 amd64 [installed,automatic]
libmariadb-dev-compat/unknown 1:10.5.19+maria~ubu2004 amd64
libmariadb-dev/unknown,now 1:10.5.19+maria~ubu2004 amd64 [installed]
libmariadb-java/focal 2.5.3-1 all
libmariadb3-compat/unknown 1:10.5.19+maria~ubu2004 amd64
libmariadb3/unknown,now 1:10.5.19+maria~ubu2004 amd64 [installed,automatic]
libmariadbclient-dev/focal-updates,focal-security 1:10.3.38-0ubuntu0.20.04.1 amd64
libmariadbclient18/unknown 1:10.5.19+maria~ubu2004 amd64
libmariadbd-dev/unknown 1:10.5.19+maria~ubu2004 amd64
libmariadbd19/unknown 1:10.5.19+maria~ubu2004 amd64
libmysqlclient18/unknown 1:10.5.19+maria~ubu2004 amd64
maria-doc/focal 1.3.5-4.1build2 all
maria/focal 1.3.5-4.1build2 amd64
mariadb-backup/unknown 1:10.5.19+maria~ubu2004 amd64 [upgradable from: 1:10.5.16+maria~focal]
mariadb-client-10.3/focal-updates,focal-security 1:10.3.38-0ubuntu0.20.04.1 amd64
mariadb-client-10.5/unknown 1:10.5.19+maria~ubu2004 amd64 [upgradable from: 1:10.5.16+maria~focal]
mariadb-client-core-10.3/focal-updates,focal-security 1:10.3.38-0ubuntu0.20.04.1 amd64
mariadb-client-core-10.5/unknown 1:10.5.19+maria~ubu2004 amd64 [upgradable from: 1:10.5.16+maria~focal]
mariadb-client/unknown 1:10.5.19+maria~ubu2004 all [upgradable from: 1:10.5.16+maria~focal]
mariadb-columnstore-cmapi/unknown 22.08.2 amd64
mariadb-common/unknown 1:10.5.19+maria~ubu2004 all [upgradable from: 1:10.5.16+maria~focal]
mariadb-plugin-columnstore/unknown 1:10.5.19-5.6.8+maria~ubu2004 amd64
mariadb-plugin-connect/unknown 1:10.5.19+maria~ubu2004 amd64
mariadb-plugin-cracklib-password-check/unknown 1:10.5.19+maria~ubu2004 amd64
mariadb-plugin-gssapi-client/unknown 1:10.5.19+maria~ubu2004 amd64
mariadb-plugin-gssapi-server/unknown 1:10.5.19+maria~ubu2004 amd64
mariadb-plugin-mroonga/unknown 1:10.5.19+maria~ubu2004 amd64
mariadb-plugin-oqgraph/unknown 1:10.5.19+maria~ubu2004 amd64
mariadb-plugin-rocksdb/unknown 1:10.5.19+maria~ubu2004 amd64
mariadb-plugin-s3/unknown 1:10.5.19+maria~ubu2004 amd64
mariadb-plugin-spider/unknown 1:10.5.19+maria~ubu2004 amd64
mariadb-plugin-tokudb/focal-updates,focal-security 1:10.3.38-0ubuntu0.20.04.1 amd64
mariadb-server-10.3/focal-updates,focal-security 1:10.3.38-0ubuntu0.20.04.1 amd64
mariadb-server-10.5/unknown 1:10.5.19+maria~ubu2004 amd64 [upgradable from: 1:10.5.15+maria~focal]
mariadb-server-core-10.3/focal-updates,focal-security 1:10.3.38-0ubuntu0.20.04.1 amd64
mariadb-server-core-10.5/unknown 1:10.5.19+maria~ubu2004 amd64 [upgradable from: 1:10.5.15+maria~focal]
mariadb-server/unknown 1:10.5.19+maria~ubu2004 all [upgradable from: 1:10.5.15+maria~focal]
mariadb-test-data/unknown 1:10.5.19+maria~ubu2004 all
mariadb-test/unknown 1:10.5.19+maria~ubu2004 amd64
mysql-common/unknown 1:10.5.19+maria~ubu2004 all [upgradable from: 1:10.5.15+maria~focal]
odbc-mariadb/focal 3.1.4-1 amd64
</code></pre>
<python><mariadb>
2023-05-01 20:36:05
0
1,489
BostonAreaHuman
76,149,748
5,942,100
Round to nearest even value based on row by row conditions using Pandas
<p>I would like to round EACH VALUE to the nearest even number, with this logic: Any value less than or = to 0.69, round down to 0</p> <pre><code>**Data** Location range type Q1 28 Q2 28 Q3 28 Q4 28 rounded_sum NY low re AA 1.14 0 0 0 2 NY low re BB 0 0 0 0 0 NY low re DD 0.51 2 4 0 6 NY low re SS 0 0 0 0 0 NY low stat AA 1.03 2 2 4 10 NY low stat BB 0.45 0 2 2 4 NY low stat DD 1.53 2 4 6 14 NY low stat SS 0.26 0 0 2 2 CA low re AA 0.34 0 2 0 2 CA low re BB 0 0 0 0 0 CA low re DD 0.69 0 2 0 2 CA low re SS 0 0 0 0 0 CA low stat AA 0.18 0 0 2 2 CA low stat BB 0.2 0 0 0 0 CA low stat DD 0.27 0 0 2 2 CA low stat SS 0.04 0 0 0 0 **Desired** Location range type Q1 28 Q2 28 Q3 28 Q4 28 rounded_sum NY low re AA 2 0 0 0 2 NY low re BB 0 0 0 0 0 NY low re DD 0 2 4 0 6 NY low re SS 0 0 0 0 0 NY low stat AA 2 2 2 4 10 NY low stat BB 0 0 2 2 4 NY low stat DD 2 2 4 6 14 NY low stat SS 0 0 0 2 2 CA low re AA 0 0 2 0 2 CA low re BB 0 0 0 0 0 CA low re DD 0 0 2 0 2 CA low re SS 0 0 0 0 0 CA low stat AA 0 0 0 2 2 CA low stat BB 0 0 0 0 0 CA low stat DD 0 0 0 2 2 CA low stat SS 0 0 0 0 0 </code></pre> <p><strong>Doing</strong></p> <pre><code>arr1 = df.iloc[:, 3:].values s1 = np.sum(arr1, axis=1) s1 = np.where(s1 &lt; 0.69, 0, np.ceil(s1 / 2) * 2) </code></pre> <p>Any suggestion is appreciated</p>
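A vectorized sketch of the per-value rule on a tiny assumed subset of the frame (column names taken from the sample): anything at or below 0.69 becomes 0, everything else is rounded to the nearest even integer via `round(x / 2) * 2`.

```python
import numpy as np
import pandas as pd

# Hypothetical subset of the question's data, value columns only.
df = pd.DataFrame({
    "Location": ["NY", "NY", "NY"],
    "Q1 28": [1.14, 0.51, 1.03],
    "Q2 28": [0.0, 2.0, 2.0],
})

value_cols = ["Q1 28", "Q2 28"]
vals = df[value_cols].to_numpy(float)

# <= 0.69 rounds down to 0; otherwise round to the nearest even integer.
rounded = np.where(vals <= 0.69, 0, np.round(vals / 2) * 2)
df[value_cols] = rounded.astype(int)
```

The key difference from the snippet in the question is that the rule is applied element-wise, not to the row sums.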
<python><pandas><numpy>
2023-05-01 19:44:44
2
4,428
Lynn
76,149,718
18,551,983
filter pandas dataframe based on another dataframe index
<p>Is there any way to filter the first dataframe based on the index of the second dataframe and generate the output dataframe? From the first dataframe, we filter out the rows whose index is present in the second dataframe. First dataframe:</p> <pre><code>  C1 C2 C3 C4
A  1  1  1  1
B  1  1  1  1
C  0  0  0  0
D  1  1  1  1
</code></pre> <p>Second dataframe:</p> <pre><code>  C1 C2 C3 C4
A  1  1  1  1
C  1  1  1  1
</code></pre> <p>Output dataframe:</p> <pre><code>  C1 C2 C3 C4
B  1  1  1  1
D  1  1  1  1
</code></pre>
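One way is a negated `Index.isin` mask; `df1.drop(df2.index, errors="ignore")` is an equivalent alternative. A minimal sketch on frames shaped like the sample:

```python
import pandas as pd

# Rebuild the sample: all ones, with row C zeroed out.
df1 = pd.DataFrame(1, index=list("ABCD"), columns=["C1", "C2", "C3", "C4"])
df1.loc["C"] = 0
df2 = pd.DataFrame(1, index=list("AC"), columns=["C1", "C2", "C3", "C4"])

# Keep rows of df1 whose index label does NOT occur in df2's index.
out = df1[~df1.index.isin(df2.index)]
```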
<python><python-3.x><pandas><dataframe>
2023-05-01 19:40:31
1
343
Noorulain Islam
76,149,632
9,937,874
Polars specify dtypes fails
<p>I am trying to optimize the memory used when reading in a csv file into a polars dataframe as part of a pipeline. I have a large number of columns and I know ahead of time that 2 columns need to be objects, 2 columns need to be int64, and the rest can be uint8. I use the following bit of code to generate the list of dtypes:</p> <pre><code>import polars as pl a = pl.read_csv(&quot;path_to_file.csv&quot;, n_rows=100) dtype_list = [] for col in a.columns: if col in [&quot;col1&quot;, &quot;col10&quot;]: dtype_list.append(pl.Int64) elif col in [&quot;col13&quot;, &quot;col500&quot;]: dtype_list.append(pl.Object) else: dtype_list.append(pl.UInt8) </code></pre> <p>However when I use the dtype_list created using the above snippet I get the following error:</p> <pre><code>b = pl.read_csv(&quot;path_to_file.csv&quot;, dtypes=dtypes_list) </code></pre> <p>ComputeError: unsupported data type when reading CSV: u8 when reading CSV</p> <p>***** EDIT *****</p> <p>So changing the code to this seems to work, just curious as to why the original did not.</p> <pre><code>import polars as pl a = pl.read_csv(&quot;path_to_file.csv&quot;, n_rows=100) dtype_list = [] columns = [] for col in a.columns: columns.append(col) if col in [&quot;col1&quot;, &quot;col10&quot;]: dtype_list.append(pl.Int64) elif col in [&quot;col13&quot;, &quot;col500&quot;]: dtype_list.append(pl.Object) else: dtype_list.append(pl.UInt8) b = pl.read_csv(&quot;path_to_file.csv&quot;, dtypes=dtypes_list, columns=columns) </code></pre>
<python><python-3.x><python-polars>
2023-05-01 19:26:35
1
644
magladde
76,149,628
9,626,922
'>=' not supported between instances of 'int' and 'str' when using env.step from gym
<p>I have the following code that I keep getting an error saying <code>'&gt;=' not supported between instances of 'int' and 'str'</code> coming from env.step() from gym. It seems to be the <code>terminated</code> value that is causing the error but I can not see where from:</p> <pre><code>%matplotlib notebook import gym import time import matplotlib.pyplot as plt import numpy as np from IPython.display import clear_output env = gym.make(&quot;MountainCar-v0&quot;, 'rgb_array') env.reset() def create_bins(num_bins_per_observation): # CODE HERE car_velocity = np.linspace(-0.07, 0.07, num_bins_per_observation) # based off highest and lowest possible values car_position = np.linspace(-1.2, 0.6, num_bins_per_observation) # run the above loop and see a reasonable range for velocity as it can be -inf - inf bins = np.array([car_position, car_velocity]) return bins NUM_BINS = 10 BINS = create_bins(NUM_BINS) def discretize_observation(observations, bins): binned_observations = [] for i,observation in enumerate(observations): discretized_observation = np.digitize(observation, bins[i]) binned_observations.append(discretized_observation) return tuple(binned_observations) # Important for later indexing # CREATE THE Q TABLE q_table_shape = (NUM_BINS,NUM_BINS,env.action_space.n) q_table = np.zeros(q_table_shape) def epsilon_greedy_action_selection(epsilon, q_table, discrete_state): if np.random.random() &gt; epsilon: action = np.argmax(q_table[discrete_state]) else: action = np.random.randint(0, env.action_space.n) return action def compute_next_q_value(old_q_value, reward, next_optimal_q_value): return old_q_value + ALPHA * (reward + GAMMA * next_optimal_q_value - old_q_value) def reduce_epsilon(epsilon, epoch): if BURN_IN &lt;= epoch &lt;= EPSILON_END: epsilon -= EPSILON_REDUCE return epsilon EPOCHS = 30000 BURN_IN = 100 epsilon = 1 EPSILON_END= 10000 EPSILON_REDUCE = 0.0001 ALPHA = 0.8 GAMMA = 0.9 log_interval = 100 # How often do we update the plot? 
(Just for performance reasons) ### Here we set up the routine for the live plotting of the achieved points ###### fig = plt.figure() ax = fig.add_subplot(111) plt.ion() fig.canvas.draw() ################################################################################## max_position_log = [] # to store all achieved points mean_positions_log = [] # to store a running mean of the last 30 results epochs = [] # store the epoch for plotting for epoch in range(EPOCHS): # TODO: Get initial observation and discretize them. Set done to False initial_state = env.reset()[0] # get the initial observation discretized_state = discretize_observation(initial_state, BINS) # map the observation to the bins done = False # to stop current run when the car reaches the top or the time limit is reached max_position = -np.inf # for plotting epochs.append(epoch) # TODO: As long as current run is alive (i.e not done) perform the following steps: while not done: # Perform current run as long as done is False (as long as there is still time to reach the top) # TODO: Select action according to epsilon-greedy strategy action = epsilon_greedy_action_selection(epsilon, q_table, discretized_state) # Epsilon-Greedy Action Selection # TODO: Perform selected action and get next state. 
Do not forget to discretize it next_state, reward, done, test, info = env.step(action) # perform action and get next state position, velocity = next_state next_state_discretized = discretize_observation(next_state, BINS) # map the next observation to the bins # TODO: Get old Q-value from Q-Table and get next optimal Q-Value old_q_value = q_table[discretized_state + (action,)] # get the old Q-Value from the Q-Table next_optimal_q_value = np.max(q_table[next_state_discretized]) # Get the next optimal Q-Value # TODO: Compute next Q-Value and insert it into the table next_q = compute_next_q_value(old_q_value, reward, next_optimal_q_value) # Compute next Q-Value q_table[discretized_state + (action,)] = next_q # Insert next Q-Value into the table # TODO: Update the old state with the new one discretized_state = next_state_discretized # Update the old state with the new one if position &gt; max_position: # Only for plotting the results - store the highest point the car is able to reach max_position = position # TODO: Reduce epsilon epsilon = reduce_epsilon(epsilon, epoch) # Reduce epsilon ############################################################################## max_position_log.append(max_position) # log the highest position the car was able to reach running_mean = round(np.mean(max_position_log[-30:]), 2) # Compute running mean of position over the last 30 epochs mean_positions_log.append(running_mean) # and log it ################ Plot the points and running mean ################## if epoch % log_interval == 0: ax.clear() ax.scatter(epochs, max_position_log) ax.plot(epochs, max_position_log) ax.plot(epochs, mean_positions_log, label=f&quot;Running Mean: {running_mean}&quot;) plt.legend() fig.canvas.draw() ###################################################################### env.close() </code></pre> <p>This is the full error I am receiving too from Jupyter notebook:</p> <pre><code>--------------------------------------------------------------------------- 
TypeError Traceback (most recent call last) /var/folders/jn/59brf9ps68b366pxgyt4hpfw0000gn/T/ipykernel_55458/601254501.py in &lt;module&gt; 29 action = epsilon_greedy_action_selection(epsilon, q_table, discretized_state) # Epsilon-Greedy Action Selection 30 # TODO: Perform selected action and get next state. Do not forget to discretize it ---&gt; 31 next_state, reward, done, test, info = env.step(action) # perform action and get next state 32 position, velocity = next_state 33 next_state_discretized = discretize_observation(next_state, BINS) # map the next observation to the bins ~/anaconda3/envs/ai_env/lib/python3.7/site-packages/gym/wrappers/time_limit.py in step(self, action) 51 self._elapsed_steps += 1 52 ---&gt; 53 if self._elapsed_steps &gt;= self._max_episode_steps: 54 truncated = True 55 TypeError: '&gt;=' not supported between instances of 'int' and 'str' </code></pre>
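The string reaches the step counter because `'rgb_array'` is passed positionally to `gym.make`, whose second parameter in gym versions of that era is `max_episode_steps`, not the render mode (an assumption to verify against your installed version). The `TimeLimit` wrapper then compares an `int` step count against that string. A dependency-free sketch of the mix-up, with a simplified stand-in for the signature:

```python
def make(env_id, max_episode_steps=None, render_mode=None):
    # Simplified stand-in for gym.make's parameter order: the SECOND
    # positional slot is max_episode_steps, not the render mode.
    return {"id": env_id,
            "max_episode_steps": max_episode_steps,
            "render_mode": render_mode}

bad = make("MountainCar-v0", "rgb_array")        # string lands in max_episode_steps
good = make("MountainCar-v0", render_mode="rgb_array")
```

So the likely fix is `gym.make("MountainCar-v0", render_mode="rgb_array")`, passing the render mode by keyword.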
<python><machine-learning><artificial-intelligence><reinforcement-learning><openai-gym>
2023-05-01 19:26:20
2
617
Jm3s
76,149,608
1,874,170
Using email.message to parse an HTTP POST request?
<p>With the deprecation of the standard <code>cgi</code> library in favor of <a href="https://peps.python.org/pep-3333/" rel="nofollow noreferrer">WSGI</a> starting in Python 3.11, the documentation <a href="https://docs.python.org/3.11/library/cgi.html" rel="nofollow noreferrer">alleges</a>:</p> <blockquote> <p>The <code>FieldStorage</code> class can typically be replaced with … the <a href="https://docs.python.org/3.11/library/email.message.html#module-email.message" rel="nofollow noreferrer"><code>email.message</code></a> module or [3rd-party] <a href="https://pypi.org/project/multipart/" rel="nofollow noreferrer">multipart</a> for <code>POST</code> and <code>PUT</code>.</p> </blockquote> <p>However, it's not at all obvious how the first of these (using the standard library to parse HTTP POST requests) is actually done.</p> <p>For example, the following code hangs on <code>EmailBytesParser.parse()</code>:</p> <pre class="lang-py prettyprint-override"><code>import wsgiref.simple_server import email.parser import logging _EmailBytesParser = email.parser.BytesParser() def main_wsgi(environ, start_response): if environ['REQUEST_METHOD'] == 'GET': start_response('200 OK', [('Content-Type', 'text/html')]) yield '&lt;form method=&quot;POST&quot; enctype=&quot;multipart/form-data&quot;&gt;&lt;label for=&quot;username&quot;&gt;username:&lt;/label&gt;&lt;input name=&quot;username&quot; value=&quot;user123&quot; /&gt;&lt;br /&gt;&lt;label for=&quot;avatar&quot;&gt;avatar: &lt;/label&gt;&lt;input name=&quot;avatar&quot; type=&quot;file&quot;&gt;&lt;br /&gt;&lt;input type=&quot;submit&quot; value=&quot;register&quot; /&gt;'.encode() else: assert environ['REQUEST_METHOD'] == 'POST' assert environ['CONTENT_TYPE'].startswith('multipart/form-data') start_response('200 OK', [('Content-Type', 'text/plain')]) logging.debug(&quot;About to start parsing form data...&quot;) # body = environ['wsgi.input'].read(int(environ['CONTENT_LENGTH'])) body = 
_EmailBytesParser.parse(environ['wsgi.input'], headersonly=True) logging.debug(&quot;Form data parsed!&quot;) yield repr(body).encode() if __name__ == '__main__': logging.basicConfig() logging.getLogger().setLevel(logging.DEBUG) wsgiref.simple_server.make_server('', 8000, main_wsgi).serve_forever() </code></pre> <p>I wasn't able to find any code examples online actually demonstrating the <code>email</code> module's suitability for this. What am I doing wrong?</p>
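Two things appear to be at play: `parse()` reads its stream until EOF, which never arrives on an open request socket (hence the hang; read exactly `int(environ['CONTENT_LENGTH'])` bytes first, as the commented-out line does), and the parser only recognizes the multipart boundary if it sees a `Content-Type` header, which HTTP carries in `environ['CONTENT_TYPE']` rather than in the body. A self-contained sketch that re-attaches that header block and parses a minimal form post:

```python
import email.parser
import email.policy

# A minimal multipart/form-data body, as a browser would send it.
body = (b"--XBOUND\r\n"
        b'Content-Disposition: form-data; name="username"\r\n'
        b"\r\n"
        b"user123\r\n"
        b"--XBOUND--\r\n")

# Synthesize the header block; under WSGI the Content-Type would come
# from environ['CONTENT_TYPE'] and body from wsgi.input.read(CONTENT_LENGTH).
raw = (b"Content-Type: multipart/form-data; boundary=XBOUND\r\n"
       b"MIME-Version: 1.0\r\n\r\n") + body

msg = email.parser.BytesParser(policy=email.policy.HTTP).parsebytes(raw)
fields = {part["Content-Disposition"].params["name"]: part.get_content().strip()
          for part in msg.iter_parts()}
```

Note `headersonly=True` in the question's code also stops the parser from walking the multipart payload at all, so it should be dropped once the body is bounded.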
<python><parsing><http-post><wsgi><python-3.11>
2023-05-01 19:23:18
1
1,117
JamesTheAwesomeDude
76,149,466
2,236,231
Prevent TimeoutError from asyncio in Python
<p>I am building a test suite for my Telegram Bot. There is a long-running task which may take +10mins. However, after ~15s I receive a <code>asyncio.exceptions.TimeoutError</code></p> <p>This is a simplified code:</p> <pre class="lang-py prettyprint-override"><code>from tgintegration import BotController from tgintegration import Response from pyrogram import Client import asyncio from async_timeout import timeout def create_client(session_name: str = SESSION_NAME) -&gt; Client: return Client( name=session_name, api_id = os.getenv(&quot;API_ID&quot;) , api_hash = os.getenv(&quot;API_HASH&quot;) ) async def run_test(client: Client): controller = BotController( peer=&quot;@bot_to_test&quot;, client=client, max_wait=5, wait_consecutive=5, raise_no_response=True ) ... controller.max_wait=600 controller.wait_consecutive=600 controller.max_wait_response=600 print(&quot;Compute with file&quot;) async with timeout(600): async with controller.collect(count=7) as response: await controller.send_command(&quot;compute&quot;) assert &quot;Init&quot; in response.messages[0].text </code></pre>
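The outer `timeout(600)` only bounds total wall time; it cannot suppress a shorter timeout raised deeper in the stack, which here is most likely the controller's own response-wait settings inside `tgintegration`/`pyrogram` (an assumption, since the traceback isn't shown). A stdlib sketch of the general shape: the inner deadline fires no matter how generous the outer one is.

```python
import asyncio

async def slow_task():
    await asyncio.sleep(0.2)   # stands in for the +10 min bot computation
    return "done"

async def main():
    try:
        # Outer limit is generous, but the nested wait_for has its own,
        # much shorter deadline -- and that is the one that raises.
        return await asyncio.wait_for(
            asyncio.wait_for(slow_task(), timeout=0.05),
            timeout=600,
        )
    except asyncio.TimeoutError:
        return "inner timeout fired"

result = asyncio.run(main())
```

So the long waits have to be configured on whatever attribute the library actually consults for that request (and before the request is issued), not only wrapped in a larger outer timeout.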
<python><python-asyncio><pyrogram>
2023-05-01 19:01:12
1
1,099
Geiser
76,149,446
6,676,101
In Python, how can we ignore case sensitivity in dictionary keys when printing the dictionary to a string, to the console, or to a file?
<p>In python, how do you <em><strong>pretty print</strong></em> a dictionary such that dictionary keys are <strong>not</strong>-case sensitive?</p> <p>For example, <code>Apple</code> with a big letter <code>A</code> and <code>apple</code> with a small letter <code>a</code> will be near each-other.</p> <p>The results should be printed in the same style as the following example:</p> <pre class="lang-python prettyprint-override"><code>d = { 'approx' : 'approximately' , 'dfrac' : 'display fraction' , 'Leftrightarrow' : 'Left and right arrow' , 'rangle' : 'right angle' , 'vDash' : 'vertical turnstile with double dash' , 'vdash' : 'vertical turnstile with single dash' , 'lhd' : 'left lazy head' , 'lim' : 'limit' , 'xleftarrow' : 'left arrow with parameter input' , 'varlimsup' : 'variable limit supremum' , 'simeq' : 'similar or equal' , 'iff' : 'if and only if' , 'lt' : 'less than' , 'notin' : 'not element of or in' , 'equiv' : 'equivalent to' , 'ge' : 'greater than or equal' , 'Gamma' : 'big uppercase Gamma' , 'cong' : 'congruent' , 'infty' : 'infinity' , 'subsetneq' : 'subset of and not equal to' , 'prod' : 'product' , 'varepsilon' : 'variable epsilon' , 'sum' : 'summation' , 'mathbb' : 'mathematics black board bold' , 'le' : 'less than or equal to' , 'bar' : 'over bar' , 'lbrace' : 'left brace' , 'mu' : 'greek letter mu' , 'cdots' : 'centered dots' , 'mp' : 'minus plus' , 'lnot' : 'logical not' , 'spadesuit' : 'spade suit symbol' , 'ell' : 'script el el' , 'subseteq' : 'subset or equal' , 'rceil' : 'right ceiling' , 'vdots' : 'vertical dots' , 'mapsto' : 'maps to arrow' , 'genfrac' : 'generalized fraction' , 'varliminf' : 'variable limit infimum' , 'rVert' : 'right vertical bar' , 'iint' : 'integral integral' , 'iiint' : 'integral integral integral' , 'lVert' : 'left vertical bar' , 'ddot' : 'double diagonal dot' , 'varnothing' : 'variable nothing' , 'frac' : 'fraction' , } </code></pre> <p>In this particular example, the inputs (keys) are <strong>LaTeX</strong> 
commands and the outputs (values) are English phrases with words spelled out in full such that the key is a sub-sequence formed by deleting zero or more letters from the full English phrase.</p>
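One sketch of an approach: sort with a two-part key, where the lower-cased key groups `Apple`/`apple` together and the raw key breaks ties deterministically, then pad to a common width in the same `'key' : 'value' ,` style as the source dict (small hypothetical subset shown):

```python
d = {
    'vdash': 'vertical turnstile with single dash',
    'Gamma': 'big uppercase Gamma',
    'vDash': 'vertical turnstile with double dash',
    'ge':    'greater than or equal',
}

width = max(len(k) for k in d) + 2   # room for the surrounding quotes
# Case-insensitive primary sort; exact key as tie-breaker keeps it stable.
lines = [f"'{k}'".ljust(width) + f" : '{d[k]}' ,"
         for k in sorted(d, key=lambda k: (k.lower(), k))]
print("{\n" + "\n".join(lines) + "\n}")
```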
<python><python-3.x><dictionary><pretty-print>
2023-05-01 18:58:50
2
4,700
Toothpick Anemone
76,149,427
4,372,237
Vector autoregressive models with custom lags
<p>I am trying to apply vector autoregression to my data using statsmodels package. This package has a lot of tools for univariate time-series modeling, including <code>statsmodels.tsa.ar_model.AutoReg()</code> method that accepts a list of custom lags. For a vector autoregression, the function <code>statsmodels.tsa.vector_ar.var_model.VAR</code> only supports integer for a lag, in which case all of the lags until the specified integer are included.</p> <p>In my problem I want to use lags at 1,2,3,24,48,72. The motivation behind this is that my data has a very strong daily trend, so I want to have 24, 48, and 72 hours lag. At the same time, I don't want to include all of the lags until 72, as this will be a really heavy and potentially over-parametrized model.</p> <p>I do realize that VAR accepts exogeneous variables, and I can provide 23 seasonal dummies that will indicate each hour. However, I would like to take advantage of both seasonal dummies, and autoregression.</p> <p>Does anyone know how to make VAR work with a list of custom lags? Is this possible with R (never worked with R before)?</p> <p>EDIT: Found VARIMAX function that accepts order parameter. However, the order parameter for AR part has to be integer, so I still cannot use custom lags.</p>
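If `VAR` won't accept a lag list, the model can be estimated by hand: a VAR restricted to lags {1, 2, 3, 24, 48, 72} is just equation-by-equation OLS of each series on the chosen lags of all series, plus a constant. A numpy sketch on synthetic data (lag set shortened so the example stays small; this is the estimator only, not the statsmodels API):

```python
import numpy as np

rng = np.random.default_rng(0)
T, k = 200, 2                      # observations, number of series
y = rng.standard_normal((T, k))    # stand-in for the hourly data

lags = [1, 2, 3, 24]               # custom lag set; 48/72 work the same way
p = max(lags)

# Design matrix: a constant, then lag-L of every series for each chosen L.
X = np.hstack([np.ones((T - p, 1))] + [y[p - L:T - L] for L in lags])
Y = y[p:]

# One OLS fit per equation (columns of Y share the same regressors).
coefs, *_ = np.linalg.lstsq(X, Y, rcond=None)
```

`coefs` has one column per series and `1 + k * len(lags)` rows; seasonal dummies would simply become extra columns of `X`.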
<python><statsmodels><vector-auto-regression>
2023-05-01 18:55:31
0
3,470
Mikhail Genkin
76,149,423
5,942,100
Tricky conditional transform values per row based on logic of another column using Pandas
<p>I would like to round EACH VALUE to the nearest even # so that our row sum doesn't exceed or go below the 'rounded_sum' column value for that row. If we exceed or go below, compensate for the difference by subtracting or adding the difference to one of the values. Making sure no negative values.</p> <p><strong>Data</strong></p> <pre><code>Location range type Q1 28 Q2 28 Q3 28 Q4 28 rounded_sum NY low re AA 1.14 0 0 0 2 NY low re BB 0 0 0 0 0 NY low re DD 0.51 2 4 0 6 NY low re SS 0 0 0 0 0 NY low stat AA 1.03 2 2 4 10 NY low stat BB 0.45 0 2 2 4 NY low stat DD 1.53 2 4 6 14 NY low stat SS 0.26 0 0 2 2 CA low re AA 0.34 0 2 0 2 CA low re BB 0 0 0 0 0 CA low re DD 0.69 0 2 0 2 CA low re SS 0 0 0 0 0 CA low stat AA 0.18 0 0 2 2 CA low stat BB 0.2 0 0 0 0 CA low stat DD 0.27 0 0 2 2 CA low stat SS 0.04 0 0 0 0 </code></pre> <p><strong>Desired</strong></p> <pre><code>Location range type Q1 28 Q2 28 Q3 28 Q4 28 rounded_sum NY low re AA 2 0 0 0 2 NY low re BB 0 0 0 0 0 NY low re DD 0 2 4 0 6 NY low re SS 0 0 0 0 0 NY low stat AA 2 2 2 4 10 NY low stat BB 0 0 2 2 4 NY low stat DD 2 2 4 6 14 NY low stat SS 0 0 0 2 2 CA low re AA 0 0 2 0 2 CA low re BB 0 0 0 0 0 CA low re DD 0 0 2 0 2 CA low re SS 0 0 0 0 0 CA low stat AA 0 0 0 2 2 CA low stat BB 0 0 0 0 0 CA low stat DD 0 0 0 2 2 CA low stat SS 0 0 0 0 0 </code></pre> <p><strong>Doing</strong></p> <pre><code>arr1 = df.iloc[:, 3:].values s1 = np.sum(arr1, axis=1) s1 = np.where(s1 &lt; 0.5, 0, np.ceil(s1 / 2) * 2) </code></pre> <p>Any suggestion is appreciated</p>
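A sketch of the two-stage idea on a tiny assumed subset: round every value to the nearest even integer (with the small-value cutoff), then reconcile each row against `rounded_sum` by pushing any difference into that row's largest cell, clipped at zero so nothing goes negative. The `rounded_sum` values here are chosen so the compensation step actually fires.

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({
    "Q1 28": [1.14, 0.51, 1.03],
    "Q2 28": [0.0, 2.0, 2.0],
    "rounded_sum": [4, 2, 4],     # first row forces a +2 adjustment
})
value_cols = ["Q1 28", "Q2 28"]

vals = df[value_cols].to_numpy(float)
rounded = np.where(vals <= 0.69, 0, np.round(vals / 2) * 2)

# Reconcile row by row: absorb the gap in the largest cell, never below 0.
for i in range(len(df)):
    diff = df.loc[i, "rounded_sum"] - rounded[i].sum()
    if diff:
        j = rounded[i].argmax()
        rounded[i, j] = max(rounded[i, j] + diff, 0)

df[value_cols] = rounded.astype(int)
```

A single adjustment per row is a simplification; if clipping at zero leaves a residue, the loop would need to spill the remainder into the next-largest cells.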
<python><pandas><numpy>
2023-05-01 18:55:05
1
4,428
Lynn
76,149,381
113,538
Why can't libclang find a function argument declaration?
<p>I'm writing a tool to find the dependencies of a C function. For example in the tcpdump project there is a function in print-ppp.c</p> <p><code>static void ppp_hdlc(netdissect_options *ndo, const u_char *p, int length)</code></p> <p>netdissect_options is defined in the netdissect.h header.</p> <p>I'm trying to use libclang to find the code in the header. However it's not finding it.</p> <p>Here is the code that I'm using. <code>definition.spelling</code>, <code>definition.location.file</code>, <code>definition.location.line</code> are all missing.</p> <pre><code>import sys import clang from clang.cindex import CursorKind import json import os def load_compile_commands(compile_commands_file): with open(compile_commands_file, &quot;r&quot;) as f: compile_commands = json.load(f) return compile_commands def get_translation_unit(compile_commands, source_file): if clang.cindex.Config.library_path is None: clang.cindex.Config.set_library_file(&quot;/opt/homebrew/opt/llvm/lib/libclang.dylib&quot;) index = clang.cindex.Index.create() for command in compile_commands: if command[&quot;file&quot;] == source_file: args = command[&quot;arguments&quot;][1:-2] args.append(&quot;-I&quot; + os.path.dirname(source_file)) args.append(&quot;-I/opt/homebrew/opt/llvm/lib/clang/16/include/&quot;) return index.parse(source_file, args) raise Exception(f&quot;No translation unit found for source file {source_file}&quot;) def function_node(cursor, function_name): for child in cursor.get_children(): if ( child.kind == clang.cindex.CursorKind.FUNCTION_DECL and child.spelling == function_name ): return child return None def get_function_arguments(cursor): if cursor.kind != clang.cindex.CursorKind.FUNCTION_DECL: raise ValueError(&quot;Cursor must point to a function declaration&quot;) arguments = [] for child in cursor.get_children(): if child.kind == clang.cindex.CursorKind.PARM_DECL: arguments.append(child) return arguments def find_arg_declatation(project_root:str, file_name: str, 
function:str): compile_commands_file = project_root + &quot;/compile_commands.json&quot; source_file = project_root +&quot;/&quot;+ file_name compile_commands = load_compile_commands(compile_commands_file) tu = get_translation_unit(compile_commands, source_file) for diagnostic in tu.diagnostics: print(diagnostic) function = function_node(tu.cursor, function_name) declarations = [] types = [] if function is not None: arguments = get_function_arguments(function) print(f&quot;Nodes dependent on function '{function_name}':&quot;) for argument in arguments: definition = argument.type.get_declaration() print(definition.spelling, definition.location.file, definition.location.line) if __name__ == &quot;main&quot;: find_arg_declatation(&lt;args go here&gt;) </code></pre>
<python><parsing><clang><abstract-syntax-tree>
2023-05-01 18:47:31
1
895
Brian
76,149,371
5,370,631
Explode 2 columns into multiple columns in pyspark dataframe
<p>I have the following PySpark dataframe:</p> <pre><code>+--------------------------------+-------+------------------------------------+-----------+------------------+------------------+ |id |order |cart |item |sub |rank | +--------------------------------+-------+------------------------------------+-----------+------------------+------------------+ |694e45100f52475d8dac13b829e02394|null |baa4b664-8a84-4abb-8919-01836340a30a|826017462 |697030209 |6 | |532f20768ce64599b6f79bb17b9fc597|null |aa45d078-dfc6-4710-adac-4385b43d3eba|10402650 |13893731 |3 | |fae01765055f4df0a1041de0669393e6|null |f7b498d6-ea9c-47e7-9220-ec18070f9ee8|293214825 |1226495201 |4 | |f2135f7919713625e044001517f43a86|null |d4873f7a-2219-49d9-a0e8-cb41d24f8aec|814735847 |222288904 |2 | |6e74f194b9d2403cb6d28f0a33a00a9a|null |20753b35-36bd-4679-9d64-c480c9c19b97|10291833 |10313028 |1 | +--------------------------------+-------+------------------------------------+-----------+------------------+------------------+ </code></pre> <p>And I would like to explode the columns into multiple columns based on columns sub and rank.</p> <pre><code> +--------------------------------+-------+------------------------------------+-----------+------------------+------------------+-------------+-------------+-------------+-------------+-------------+-------------+-------------+ |id |order |cart |item |sub |rank | sub1 | sub2 | sub3 | sub4 | sub5 | sub6 | sub7 | +--------------------------------+-------+------------------------------------+-----------+------------------+------------------+-------------+-------------+-------------+-------------+-------------+-------------+-------------+ |694e45100f52475d8dac13b829e02394|null |baa4b664-8a84-4abb-8919-01836340a30a|826017462 |697030209 |6 | 0 | 0 | 0 | 0 | 0 |697030209 | 0 | |532f20768ce64599b6f79bb17b9fc597|null |aa45d078-dfc6-4710-adac-4385b43d3eba|10402650 |13893731 |3 | 0 | 0 |13893731 | 0 | 0 | 0 | 0 | |fae01765055f4df0a1041de0669393e6|null 
|f7b498d6-ea9c-47e7-9220-ec18070f9ee8|293214825 |1226495201 |4 | 0 | 0 | 0 | 1226495201 | 0 | 0 | 0 | |f2135f7919713625e044001517f43a86|null |d4873f7a-2219-49d9-a0e8-cb41d24f8aec|814735847 |222288904 |2 | 0 | 222288904 | 0 | 0 | 0 | 0 | 0 | |6e74f194b9d2403cb6d28f0a33a00a9a|null |20753b35-36bd-4679-9d64-c480c9c19b97|10291833 |10313028 |1 | 10313028 | 0 | 0 | 0 | 0 | 0 | 0 | +--------------------------------+-------+------------------------------------+-----------+------------------+------------------+-------------------------------------------------------------------------------------------------- </code></pre> <p>How can I get this ?</p>
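In PySpark this is one `withColumn` per target, roughly `F.when(F.col("rank") == r, F.col("sub")).otherwise(0)` for `r` in 1..7. Since the reshaping rule is the interesting part, here is the same logic sketched on a toy pandas frame (column and value names assumed from the sample):

```python
import pandas as pd

df = pd.DataFrame({
    "id": ["694e...", "532f...", "6e74..."],     # truncated sample ids
    "sub": [697030209, 13893731, 10313028],
    "rank": [6, 3, 1],
})

for r in range(1, 8):
    # subN carries the row's 'sub' value when rank == N, else 0.
    df[f"sub{r}"] = df["sub"].where(df["rank"] == r, 0)
```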
<python><apache-spark><pyspark><apache-spark-sql>
2023-05-01 18:45:50
1
1,572
Shibu
76,149,284
11,391,711
How to use every even column for the corresponding odd column and create a new data frame - Python
<p>I read a huge Excel file where each even column (e.g., 0,2) is the number of days, and the next column is a location. Each location has a different length. As a small example, here's a sample data frame.</p> <pre><code>import pandas as pd import numpy as np from datetime import datetime, timedelta data = { 'Unnamed: 0': [1.8, 2, 5.9], 'Location A': [0.2, 0.3, 0.87], 'Unnamed: 2': [6, 7], 'Location B': [1.5, 2.0], 'Unnamed: 4': [11], 'Location C': [], 'Unnamed: 6': [16.7, 17, 18, 19.6, 26,72.9], 'Location D': [3.5, 4.0, 5.5, 6.0, 7.5, 8.0] } max_len = max([len(v) for v in data.values()]) for key in data.keys(): if len(data[key]) &lt; max_len: data[key].extend([np.nan] * (max_len - len(data[key]))) df = pd.DataFrame(data) </code></pre> <p>Since even columns do not have a header, they are saved as <code>Unnamed</code> when I use <code>pd.read_excel</code>. I would like to convert this into a new data frame with the following logic. Let <code>date_value = pd.to_datetime('2023-01-01', format='%Y-%m-%d')</code>.</p> <p>The first column is the name of the location, the second column is <code>date_value + timedelta(days=days_to_add)</code> where days_to_add is the values in the Unnamed column, and the third column is the values under Location column.</p> <pre><code>Location A 2023-01-02 0.2 Location A 2023-01-03 0.3 Location A 2023-01-06 0.87 ... Location D 2023-03-12 8 </code></pre>
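A sketch that walks the columns pairwise (`columns[0::2]` against `columns[1::2]`), drops the NaN padding per pair, and builds the long frame. The day offsets are applied with `timedelta(days=...)` as described, so fractional day counts carry a time-of-day component.

```python
from datetime import timedelta

import numpy as np
import pandas as pd

# Two of the question's day/location pairs, NaN-padded to equal length.
df = pd.DataFrame({
    "Unnamed: 0": [1.8, 2.0, 5.9],
    "Location A": [0.2, 0.3, 0.87],
    "Unnamed: 2": [6.0, 7.0, np.nan],
    "Location B": [1.5, 2.0, np.nan],
})

date_value = pd.to_datetime("2023-01-01", format="%Y-%m-%d")
frames = []
for days_col, loc_col in zip(df.columns[0::2], df.columns[1::2]):
    pair = df[[days_col, loc_col]].dropna()      # lengths differ per location
    frames.append(pd.DataFrame({
        "location": loc_col,
        "date": [date_value + timedelta(days=float(d)) for d in pair[days_col]],
        "value": pair[loc_col].to_numpy(),
    }))

out = pd.concat(frames, ignore_index=True)
```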
<python><pandas><dataframe>
2023-05-01 18:31:20
2
488
whitepanda
76,149,118
10,573,543
Why is the registeration.html form not rendering in Django?
<p>New to DJANGO.</p> <p>I am having trouble with rendering my <code>registeration.html</code> page.</p> <p>This is my project structure</p> <pre><code>auth_test | |-urls.py |-settings.py big_app | |-admin.py |-forms.py |-models.py |-urls.py |-views.py templates media static </code></pre> <p>This is my application <code>big_app</code>.</p> <p>The <code>auth_test</code> folder <code>urls.py</code> is</p> <pre class="lang-py prettyprint-override"><code>from django.contrib import admin from django.urls import path from big_app import views # from django.conf.urls import url, include ---&gt; this is depreciated from Django 4.0 onwards from django.urls import include,re_path urlpatterns = [ re_path(r'^$', views.index,name='index'), re_path(r'^admin/', admin.site.urls), re_path(r'big_app/',include('big_app.urls')), ] </code></pre> <p>now the urls.py of the <code>big_app</code> is</p> <pre class="lang-py prettyprint-override"><code>from big_app import views from django.urls import include, re_path # TEMPLATE URLS app_name = 'big_app' urlpatterns = [ re_path(r'^register/$',views.register, name='register') ] </code></pre> <p>The <code>views.py</code> has the code.</p> <pre class="lang-py prettyprint-override"><code>from django.shortcuts import render from big_app.forms import UserForm, UserProfileForm # Create your views here. 
def index(request): return render(request, &quot;big_app/index.html&quot;) def register(request): registered = False # breakpoint() if request.method == &quot;POST&quot;: user_form = UserForm(data=request.POST) profile_form = UserProfileForm(data=request.POST) if user_form.is_valid() and profile_form.is_valid(): user = user_form.save() # hashing the password user.set_password(user.password) user.save() profile = profile_form.save(commit=False) # setting up the one to one relationship with the user present in the admin perspective profile.user = user if &quot;profile_pic&quot; in request.FILES: profile.profile_pic = request.FILES[&quot;profile_pic&quot;] profile.save() registered = True else: print(user_form.errors, profile_form.errors) else: user_form = UserForm() profile_form = UserProfileForm() return render( request, &quot;big_app/resgisteration.html&quot;, { &quot;user_form&quot;: user_form, &quot;profile_form&quot;: profile_form, &quot;registered&quot;: registered, }, ) </code></pre> <p>my <code>base.html</code> looks like this</p> <pre class="lang-html prettyprint-override"><code>&lt;!DOCTYPE html&gt; &lt;html lang=&quot;en&quot;&gt; &lt;head&gt; &lt;meta charset=&quot;UTF-8&quot;&gt; &lt;meta http-equiv=&quot;X-UA-Compatible&quot; content=&quot;IE=edge&quot;&gt; &lt;meta name=&quot;viewport&quot; content=&quot;width=device-width, initial-scale=1.0&quot;&gt; &lt;link href=&quot;https://cdn.jsdelivr.net/npm/bootstrap@5.3.0-alpha3/dist/css/bootstrap.min.css&quot; rel=&quot;stylesheet&quot; integrity=&quot;sha384-KK94CHFLLe+nY2dmCWGMq91rCGa5gtU4mk92HdvYe+M/SXH301p5ILy+dN9+nJOZ&quot; crossorigin=&quot;anonymous&quot;&gt; &lt;title&gt;Document&lt;/title&gt; &lt;/head&gt; &lt;body&gt; &lt;nav class=&quot;navbar navbar-expand-lg bg-body-tertiary&quot;&gt; &lt;div class=&quot;container-fluid&quot;&gt; &lt;a class=&quot;navbar-brand&quot; href=&quot;{% url 'index' %}&quot;&gt;DJANGO&lt;/a&gt; &lt;ul class=&quot;navbar-nav&quot;&gt; &lt;li 
class=&quot;nav-item&quot;&gt; &lt;a class=&quot;nav-link active&quot; aria-current=&quot;page&quot; href=&quot;#&quot;&gt;Home&lt;/a&gt; &lt;/li&gt; &lt;li class=&quot;nav-item&quot;&gt; &lt;a class=&quot;nav-link&quot; href=&quot;{% url 'admin:index' %}&quot;&gt;Admin&lt;/a&gt; &lt;/li&gt; &lt;li class=&quot;nav-item&quot;&gt; &lt;a class=&quot;nav-link&quot; href=&quot;{% url 'big_app:register' %}&quot;&gt;Register&lt;/a&gt; &lt;/li&gt; &lt;/ul&gt; &lt;/div&gt; &lt;/nav&gt; &lt;div class=&quot;container&quot;&gt; {% block body_block %} {% endblock %} &lt;/div&gt; &lt;/body&gt; &lt;/html&gt; </code></pre> <p>And my <code>resgisteration.html</code> looks like this.</p> <pre class="lang-html prettyprint-override"><code>{% extends 'big_app/base.html' %} {% load static %} {% block body_bloc %} &lt;div class=&quot;jumbotorn&quot;&gt; &lt;h1&gt; Form &lt;/h1&gt; {% if registered %} &lt;h1&gt; Thank you for registering &lt;/h1&gt; {% else %} &lt;h1&gt; Register here &lt;/h1&gt; &lt;form method=&quot;post&quot; enctype=&quot;multipart/form-data&quot;&gt; {% csrf_token %} {{ user_form.as_p }} {{ profile_form.as_p }} &lt;input type=&quot;submit&quot; name=&quot;&quot; value=&quot;register&quot;&gt; &lt;/form&gt; {% endif %} &lt;/div&gt; {% endblock body_bloc %} </code></pre> <hr /> <p><em><strong>Now I have identified the problem.</strong></em></p> <ul> <li><p>When the <code>base.html</code> is triggered the link is properly taken to <code>url/big_app/register</code>.</p> </li> <li><p>But the <code>views.py</code> the <code>requests.method</code> is coming as <code>GET</code> not <code>POST</code>. That is why the page is not displaying the form.</p> </li> </ul> <hr /> <p>But I am not able to fix this. Need Help. What changes need to be done, in the code.</p>
<python><django><django-views>
2023-05-01 18:06:19
1
1,166
Danish Xavier
76,149,068
19,325,656
Factory boy connect models to existing users
<p>I'm populating my database with dummy data. I have separate Profile, User, and Picture models. How can I connect them so they use the same users?</p> <pre><code>class UserFactory(DjangoModelFactory): class Meta: model = User email = factory.Faker('email') first_name = factory.Faker('first_name') birthday = factory.Faker('date_object') age = factory.Faker('pyint', min_value=18, max_value=100) is_staff = False is_active = True class UserPhotoFactory(DjangoModelFactory): class Meta: model = UserPhoto user = factory.RelatedFactory(UserFactory) #???? order = factory.Faker('pyint', min_value=0, max_value=4) url = factory.Faker( 'random_element', elements=[x for x in ['https://picsum.photos/seed/picsum/200/300', 'https://picsum.photos/seed/pssss/200/300', 'https://picsum.photos/seed/picwum/200/300']] ) class ProfileFactory(DjangoModelFactory): class Meta: model = Profile user = factory.SubFactory(UserFactory) bio = factory.LazyAttribute(lambda o: FAKE.paragraph(nb_sentences=5, ext_word_list=['abc', 'def', 'ghi', 'jkl'])) </code></pre> <p>In this case I'm calling in the shell</p> <pre><code>ProfileFactory.create_batch(10) </code></pre> <p>And that creates users and their corresponding profiles. Now I want to add UserPhoto to the mix, which is related to User via a ForeignKey like this:</p> <pre><code>class UserPhoto(models.Model): user = models.ForeignKey(User, on_delete=models.CASCADE) order = models.PositiveIntegerField(null=True) url = models.CharField(max_length=220) </code></pre> <p>What I want to achieve is 10 users, 10 profiles and, let's say, 20 pictures: 2 for each user.</p>
<python><django><django-rest-framework><faker><factory-boy>
2023-05-01 17:57:14
1
471
rafaelHTML
76,149,018
2,112,406
How to filter a list of objects based on an attribute in parallel, using Pool
<p>Given a list of objects, I want to be able to reduce that list based on the attributes of the objects. Suppose I have the following class:</p> <pre><code>class TestClass: def __init__(self, x, y): self.x = x self.y = y </code></pre> <p>and a list of objects from that class:</p> <pre><code>N = 10 list_of_objects = [] for i in range(N): x = random.randint(0, 10) y = random.randint(0, 10) tst = TestClass(x, y) list_of_objects.append(tst) </code></pre> <p>I want to reduce <code>list_of_objects</code> such that I only have elements for which <code>self.x &gt; self.y</code>. Since I will have many such objects, and the actual filtering will be much more resource-intensive, I want to parallelize this. I have tried the following, which gives me a list of <code>None</code>s and objects that match the criterion:</p> <pre><code>import random from multiprocessing import Pool class TestClass: def __init__(self, x, y): self.x = x self.y = y def filter(test_object): if test_object.x &gt; test_object.y: return test_object else: return None def parallel_process(operation, input, pool): result = pool.map(operation, input) return result if __name__ == &quot;__main__&quot;: N = 10 list_of_objects = [] for i in range(N): x = random.randint(0, 10) y = random.randint(0, 10) tst = TestClass(x, y) list_of_objects.append(tst) process_count = 2 process_pool = Pool(process_count) result = parallel_process(filter, list_of_objects, process_pool) print(result) </code></pre> <p>which returns a list of <code>None</code>s and objects that match the criterion:</p> <pre><code>[&lt;__main__.TestClass object at 0x1033f0610&gt;, &lt;__main__.TestClass object at 0x103e9f8d0&gt;, &lt;__main__.TestClass object at 0x103e9fa90&gt;, None, &lt;__main__.TestClass object at 0x103e9fb10&gt;, &lt;__main__.TestClass object at 0x103e9fb50&gt;, None, &lt;__main__.TestClass object at 0x103e9fcd0&gt;, None, &lt;__main__.TestClass object at 0x103e9fd50&gt;] </code></pre> <p>I could, in principle, then get rid of 
<code>None</code>s:</p> <pre><code>result = [i for i in result if i is not None] </code></pre> <p>Is this the best way of doing this? By &quot;best&quot; here I mean: (i) doing it in a pythonic and concise way, and (ii) the most efficient way within the confines of (i).</p>
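One alternative sketch (the names `keep` and `filter_objects` are illustrative, not from the original post): have the workers return booleans and keep the originals with `itertools.compress`, so no `None` placeholders are produced and only a small mask is pickled back from the pool:

```python
import random
from itertools import compress
from multiprocessing import Pool

class TestClass:
    def __init__(self, x, y):
        self.x = x
        self.y = y

def keep(obj):
    # The expensive check returns a tiny bool instead of an object-or-None.
    return obj.x > obj.y

def filter_objects(objects, mask):
    # compress() keeps each object whose mask entry is truthy.
    return list(compress(objects, mask))

if __name__ == "__main__":
    objects = [TestClass(random.randint(0, 10), random.randint(0, 10))
               for _ in range(10)]
    with Pool(2) as pool:
        mask = pool.map(keep, objects)
    result = filter_objects(objects, mask)
    assert all(obj.x > obj.y for obj in result)
```

The list-comprehension cleanup in the question is perfectly pythonic too; the wins here are skipping the second pass and shrinking inter-process traffic when the objects are large.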
<python><oop><parallel-processing><python-multiprocessing>
2023-05-01 17:49:59
1
3,203
sodiumnitrate
76,148,989
5,942,100
Sort within categories in a specific transformation using Pandas
<p>I have a dataset where I would like to sort my data within categories in a specific way.</p> <p><strong>Data</strong></p> <pre><code>Location range type Q1 27 Q2 27 Q3 27 Q4 27 NY low re AA 5 0 7 0 NY low re BB 0 0 0 0 NY low re DD 0.51 1 2 2.05 NY low re SS 0 0 0 0 NY low stat AA 1 0.86 1.5 5.27 NY low stat BB 0.45 0.49 0.91 2.17 NY low stat DD 1.53 1.27 2.22 7.78 NY low stat SS 0.26 0.21 0.37 1.3 CA med stat AA 0.38 3 0 0 CA med stat BB 0.33 1.22 0 0.11 CA med stat DD 0.55 5.4 0 0 CA med stat SS 0.09 0.91 0 0 CA med re AA 1.14 0 0 0 CA med re BB 0 0 0 2 CA med re DD 1.02 0 0 0.56 CA med re SS 0 0 0 0 import pandas as pd data = { 'Location': ['NY', 'NY', 'NY', 'NY', 'NY', 'NY', 'NY', 'NY', 'CA', 'CA', 'CA', 'CA', 'CA', 'CA', 'CA', 'CA'], 'range': ['low re', 'low re', 'low re', 'low re', 'low stat', 'low stat', 'low stat', 'low stat', 'med stat', 'med stat', 'med stat', 'med stat', 'med re', 'med re', 'med re', 'med re'], 'type': ['AA', 'BB', 'DD', 'SS', 'AA', 'BB', 'DD', 'SS', 'AA', 'BB', 'DD', 'SS', 'AA', 'BB', 'DD', 'SS'], 'Q1 27': [5, 0, 0.51, 0, 1, 0.45, 1.53, 0.26, 0.38, 0.33, 0.55, 0.09, 1.14, 0, 1.02, 0], 'Q2 27': [0, 0, 1, 0, 0.86, 0.49, 1.27, 0.21, 3, 1.22, 5.4, 0.91, 0, 0, 0, 0], 'Q3 27': [7, 0, 2, 0, 1.5, 0.91, 2.22, 0.37, 0, 0, 0, 0, 0, 0, 0, 0], 'Q4 27': [0, 0, 2.05, 0, 5.27, 2.17, 7.78, 1.3, 0, 0.11, 0, 0, 0, 2, 0.56, 0] } df = pd.DataFrame(data) print(df) </code></pre> <p><strong>Desired</strong></p> <pre><code>All types are similar - ex AA belongs w AA BB belongs with BB categorized by Location. Order doesn't matter as long as the types are adjacent to each other categorized by Location. 
Location range type Q1 27 Q2 27 Q3 27 Q4 27 NY low re AA 5 0 7 0 NY low stat AA 1 0.86 1.5 5.27 NY low re DD 0.51 1 2 2.05 NY low stat DD 1.53 1.27 2.22 7.78 NY low re SS 0 0 0 0 NY low stat SS 0.26 0.21 0.37 1.3 NY low re BB 0 0 0 0 NY low stat BB 0.45 0.49 0.91 2.17 CA med re AA 1.14 0 0 0 CA med stat AA 0.38 3 0 0 CA med re DD 1.02 0 0 0.56 CA med stat DD 0.55 5.4 0 0 CA med re SS 0 0 0 0 CA med stat SS 0.09 0.91 0 0 CA med re BB 0 0 0 2 CA med stat BB 0.33 1.22 0 0.11 </code></pre> <p><strong>Doing</strong></p> <p>I am trying to implement</p> <pre><code> df.sort_values('Location', 'range', inplace=True) </code></pre> <p>However, I am not obtaining my desired result. Any suggestion is appreciated.</p>
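One hedged sketch of this: an ordered categorical preserves the Location order of first appearance (NY before CA), and a stable sort on `type` then makes equal types adjacent without reshuffling anything else. A trimmed-down frame is used here for brevity:

```python
import pandas as pd

df = pd.DataFrame({
    "Location": ["NY", "NY", "NY", "NY", "CA", "CA", "CA", "CA"],
    "range": ["low re", "low re", "low stat", "low stat",
              "med stat", "med stat", "med re", "med re"],
    "type": ["AA", "DD", "AA", "DD", "AA", "DD", "AA", "DD"],
})

# Ordered categorical keeps NY before CA (order of first appearance);
# a stable sort on type then pairs up equal types while preserving the
# original re/stat order within each pair.
loc = pd.Categorical(df["Location"], categories=df["Location"].unique(),
                     ordered=True)
out = (df.assign(_loc=loc)
         .sort_values(["_loc", "type"], kind="mergesort")
         .drop(columns="_loc")
         .reset_index(drop=True))
```

The original `df.sort_values('Location', 'range', inplace=True)` passes `'range'` as the `axis` argument, which is why it fails; the columns to sort by must go in a single list.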
<python><pandas><numpy>
2023-05-01 17:44:54
2
4,428
Lynn
76,148,945
5,583,772
How can I select rows with repeating values
<p>Is there a way to filter for rows in polars dataframe that allows for repeated values (this would be useful I think in block bootstrapping). What I have in mind is an example dataframe:</p> <pre><code>df = pl.DataFrame({ 'x':[0,1,2,3,4,5], 'y':[10,11,12,13,14,15], 'z':[0,0,1,1,2,2] }) </code></pre> <p>Giving</p> <pre><code>β”Œβ”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β” β”‚ x ┆ y ┆ z β”‚ β”‚ --- ┆ --- ┆ --- β”‚ β”‚ i64 ┆ i64 ┆ i64 β”‚ β•žβ•β•β•β•β•β•ͺ═════β•ͺ═════║ β”‚ 0 ┆ 10 ┆ 0 β”‚ β”‚ 1 ┆ 11 ┆ 0 β”‚ β”‚ 2 ┆ 12 ┆ 1 β”‚ β”‚ 3 ┆ 13 ┆ 1 β”‚ β”‚ 4 ┆ 14 ┆ 2 β”‚ β”‚ 5 ┆ 15 ┆ 2 β”‚ β””β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”˜ </code></pre> <p>Where I would like to select for example:</p> <pre><code>z_select = [0,1,1] </code></pre> <p>And getting a dataframe</p> <pre><code>β”Œβ”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β” β”‚ x ┆ y ┆ z β”‚ β”‚ --- ┆ --- ┆ --- β”‚ β”‚ i64 ┆ i64 ┆ i64 β”‚ β•žβ•β•β•β•β•β•ͺ═════β•ͺ═════║ β”‚ 0 ┆ 10 ┆ 0 β”‚ β”‚ 1 ┆ 11 ┆ 0 β”‚ β”‚ 2 ┆ 12 ┆ 1 β”‚ β”‚ 3 ┆ 13 ┆ 1 β”‚ β”‚ 2 ┆ 12 ┆ 1 β”‚ # Note this is a repeat b/c z_select has two 1s β”‚ 3 ┆ 13 ┆ 1 β”‚ # Note this is a repeat b/c z_select has two 1s β””β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”˜ </code></pre>
<python><dataframe><python-polars>
2023-05-01 17:37:55
2
556
Paul Fleming
76,148,912
3,204,250
Pandas pivot_table with columns as horizon
<p>I have a dataframe like so:</p> <pre><code>first_dt dt val 2023-01-01 2023-01-02 1 2023-01-01 2023-01-02 1 2023-01-01 2023-01-03 1 ... 2023-01-02 2023-01-03 1 2023-01-02 2023-01-04 1 </code></pre> <p>You should note that <code>dt</code> is never less than <code>first_dt</code>. I would like to reshape to a <code>pivot_table</code> like the following</p> <pre><code>first_dt horizon_1 horizon_2 horizon_3 2023-01-01 2 1 0 2023-01-02 1 1 0 </code></pre> <p>Initially, I thought I would do the following</p> <pre class="lang-py prettyprint-override"><code>pandas.pivot_table(df, index='first_dt', columns='dt', values='val', aggfunc=np.sum) </code></pre> <p>However, this doesn't scale well when I have years of data when I only want a handful of horizons (e.g. 3, 7, etc).</p> <p>Any thoughts about how to dynamically count <code>dt</code> over the <code>first_dt</code> time?</p>
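One hedged sketch: compute the horizon as the day gap between the two dates, keep only the horizons of interest before pivoting, and prefix the columns. The cap of 3 days and the `horizon_` names follow the desired output above; the toy frame reproduces the sample rows:

```python
import pandas as pd

df = pd.DataFrame({
    "first_dt": pd.to_datetime(["2023-01-01", "2023-01-01", "2023-01-01",
                                "2023-01-02", "2023-01-02"]),
    "dt": pd.to_datetime(["2023-01-02", "2023-01-02", "2023-01-03",
                          "2023-01-03", "2023-01-04"]),
    "val": [1, 1, 1, 1, 1],
})

max_horizon = 3
out = (df.assign(horizon=(df["dt"] - df["first_dt"]).dt.days)
         .loc[lambda d: d["horizon"].between(1, max_horizon)]
         .pivot_table(index="first_dt", columns="horizon",
                      values="val", aggfunc="sum", fill_value=0)
         .reindex(columns=range(1, max_horizon + 1), fill_value=0)
         .add_prefix("horizon_"))
```

Filtering before the pivot is what keeps this scalable: years of data only ever produce `max_horizon` columns, and the `reindex` guarantees empty horizons still appear as zero columns.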
<python><pandas><pivot-table>
2023-05-01 17:31:50
1
20,030
cdeterman
76,148,657
1,893,164
How can I convert an Excel spreadsheet to HTML with Python?
<p>I have several .xlsx Excel spreadsheets that I'd like to convert to an HTML web format by using a Python script. How can I do this?</p>
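pandas covers both halves of this: `read_excel` for input (it needs an engine such as openpyxl installed for .xlsx files) and `DataFrame.to_html` for output. A minimal sketch; the `read_excel` lines are commented out because they assume a local spreadsheet.xlsx:

```python
import pandas as pd

# df = pd.read_excel("spreadsheet.xlsx")                       # first sheet
# sheets = pd.read_excel("spreadsheet.xlsx", sheet_name=None)  # dict of all sheets

df = pd.DataFrame({"name": ["alice", "bob"], "score": [90, 85]})
html = df.to_html(index=False)

with open("spreadsheet.html", "w", encoding="utf-8") as f:
    f.write(html)
```

Looping over `glob.glob('*.xlsx')` converts a whole folder; `to_html` also accepts a `classes=` argument for attaching CSS hooks to the generated table.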
<python><pandas><excel>
2023-05-01 16:53:03
1
13,197
Al Sweigart
76,148,588
1,115,833
solr distance search with proximity
<p>I am trying to do some filtering of results using proximity search and I am finding it difficult to construct the correct query for this.</p> <p>So, in my index I have the following entry:</p> <pre><code> { &quot;aff&quot;:&quot;lg electronics&quot;, &quot;shortuuid&quot;:&quot;sddsd3ww&quot;, &quot;name&quot;:&quot;changhee kim&quot;, &quot;id&quot;:&quot;hjgjh-7678ghjhjhj-fdsfdg&quot;, &quot;_version_&quot;:1764697833293742080}, </code></pre> <p>I try a small variation of the name:</p> <pre><code>import requests name=&quot;changhee kim a&quot; org=&quot;lg electronics&quot; requests.get('http://localhost:8983/solr/searcher/select', params={ 'q': f'&quot;{name}&quot;~10 AND &quot;{org}&quot;~1', 'wt': 'json', 'rows': 1, 'start': 0, }).json() </code></pre> <p>and it returns 0 results. Why? I would have thought that, since the query term is only two words out (including the space), it should capture this and show me the result entry above.</p> <p><em>EDIT</em></p> <p>Following @Eric's answer:</p> <pre><code>import requests name=&quot;changhee kim a&quot; # a space and a added at end org=&quot;lg electronics&quot; requests.get('http://localhost:8983/solr/searcher/select', params={ 'q': f'name:{name}~10 AND aff:{org}~1', 'wt': 'json', 'rows': 1, 'start': 0, }).json() </code></pre> <p>I get no matching results for the above query.</p> <p>However, when I make edits in the middle of the string:</p> <pre><code>import requests name=&quot;changh kim&quot; #deleted two `e` in changhee org=&quot;lg electronics&quot; requests.get('http://localhost:8983/solr/searcher/select', params={ 'q': f'name:{name}~10
AND aff:{org}~1', 'wt': 'json', 'rows': 1, 'start': 0, }).json() </code></pre> <p>so what does not seem to work is when a word is added at the end:</p> <pre><code>name=&quot;changhee kim a&quot; #added aditional `a` at the end org=&quot;lg electronics&quot; </code></pre> <p>why so?</p>
<python><elasticsearch><solr><lucene>
2023-05-01 16:41:14
1
7,096
JohnJ
76,148,493
2,386,605
BioGPT causal language model with unexpected error
<p>I am trying to use a Causal Language Model from BioGPT. However, I got a strange error.</p> <p>Here are my steps:</p> <p>First, I installed <code>transformers</code> and <code>sacremoses</code>:</p> <pre><code>!pip install transformers sacremoses -q </code></pre> <p>Then I executed the following code:</p> <pre><code>input_sequence = &quot;Hello, I'm a language model,&quot; inputs = torch.as_tensor(tokenizer.encode(input_sequence)).unsqueeze(0).to(device) past_key_values = None count = 0 complete_token = [] with torch.no_grad(): while count&lt;10: count += 1 print(&quot;Iteration no.: &quot; + str(count)) if count &gt; 1: inputs = input_token model_out = model(input_ids=inputs.to(device), past_key_values=past_key_values) logits = model_out.logits[:, -1, :] past_key_values = model_out.past_key_values topk_values, topk_indices = torch.topk(logits, 5) log_probs = F.softmax(topk_values, dim=-1) inputs_in_topk = torch.multinomial(log_probs, num_samples=1, replacement=True) input_token = torch.gather(topk_indices, 1, inputs_in_topk) complete_token.append(input_token) </code></pre> <p>And here is the error I got:</p> <pre><code>Iteration no.: 1 Iteration no.: 2 --------------------------------------------------------------------------- ValueError Traceback (most recent call last) /tmp/ipykernel_18990/2689790310.py in &lt;cell line: 8&gt;() 13 inputs = input_token 14 ---&gt; 15 model_out = model(input_ids=inputs.to(device), past_key_values=past_key_values) 16 logits = model_out.logits[:, -1, :] 17 past_key_values = model_out.past_key_values ~/anaconda3/envs/pytorch_p39/lib/python3.9/site-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs) 1192 if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks 1193 or _global_forward_hooks or _global_forward_pre_hooks): -&gt; 1194 return forward_call(*input, **kwargs) 1195 # Do not call functions when jit is used 1196 full_backward_hooks, non_full_backward_hooks 
= [], [] ~/anaconda3/envs/pytorch_p39/lib/python3.9/site-packages/transformers/models/biogpt/modeling_biogpt.py in forward(self, input_ids, attention_mask, head_mask, inputs_embeds, past_key_values, labels, use_cache, output_attentions, output_hidden_states, return_dict) 677 return_dict = return_dict if return_dict is not None else self.config.use_return_dict 678 --&gt; 679 outputs = self.biogpt( 680 input_ids, 681 attention_mask=attention_mask, ~/anaconda3/envs/pytorch_p39/lib/python3.9/site-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs) 1192 if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks 1193 or _global_forward_hooks or _global_forward_pre_hooks): -&gt; 1194 return forward_call(*input, **kwargs) 1195 # Do not call functions when jit is used 1196 full_backward_hooks, non_full_backward_hooks = [], [] ~/anaconda3/envs/pytorch_p39/lib/python3.9/site-packages/transformers/models/biogpt/modeling_biogpt.py in forward(self, input_ids, attention_mask, head_mask, inputs_embeds, past_key_values, use_cache, output_attentions, output_hidden_states, return_dict) 589 ) 590 else: --&gt; 591 layer_outputs = decoder_layer( 592 hidden_states, 593 attention_mask=attention_mask, ~/anaconda3/envs/pytorch_p39/lib/python3.9/site-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs) 1192 if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks 1193 or _global_forward_hooks or _global_forward_pre_hooks): -&gt; 1194 return forward_call(*input, **kwargs) 1195 # Do not call functions when jit is used 1196 full_backward_hooks, non_full_backward_hooks = [], [] ~/anaconda3/envs/pytorch_p39/lib/python3.9/site-packages/transformers/models/biogpt/modeling_biogpt.py in forward(self, hidden_states, attention_mask, layer_head_mask, past_key_value, output_attentions, use_cache) 313 self_attn_past_key_value = past_key_value[:2] if past_key_value is not 
None else None 314 # add present self-attn cache to positions 1,2 of present_key_value tuple --&gt; 315 hidden_states, self_attn_weights, present_key_value = self.self_attn( 316 hidden_states=hidden_states, 317 past_key_value=self_attn_past_key_value, ~/anaconda3/envs/pytorch_p39/lib/python3.9/site-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs) 1192 if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks 1193 or _global_forward_hooks or _global_forward_pre_hooks): -&gt; 1194 return forward_call(*input, **kwargs) 1195 # Do not call functions when jit is used 1196 full_backward_hooks, non_full_backward_hooks = [], [] ~/anaconda3/envs/pytorch_p39/lib/python3.9/site-packages/transformers/models/biogpt/modeling_biogpt.py in forward(self, hidden_states, key_value_states, past_key_value, attention_mask, layer_head_mask, output_attentions) 211 if attention_mask is not None: 212 if attention_mask.size() != (bsz, 1, tgt_len, src_len): --&gt; 213 raise ValueError( 214 f&quot;Attention mask should be of size {(bsz, 1, tgt_len, src_len)}, but is {attention_mask.size()}&quot; 215 ) ValueError: Attention mask should be of size (1, 1, 0, 12), but is torch.Size([1, 1, 1, 1]) </code></pre> <p>So apparently, everything went fine in the first execution, but the in the second model call this error came up.</p> <p>Do you know how to fix this? πŸ™‚</p>
<python><pytorch><huggingface-transformers><large-language-model>
2023-05-01 16:24:49
1
879
tobias
76,148,492
10,620,003
Normalize the data and inverse_transform value error
<p>I have a prediction model and I have to normalize my label, since the loss exploded without normalization. But the problem is that, I use MinMaxScaler to normalize the data_train which is with size (2,10) and my predicted values size is (2,5). So when I want to use <code>inverse_transform</code>, I got</p> <pre><code>ValueError: operands could not be broadcast together with shapes (2,5) (10,) (2,5). </code></pre> <p>Also my test data is with shape (2,10). Could you please help me with this?</p> <pre><code>import numpy as np from sklearn.preprocessing import MinMaxScaler data_train = np.random.randint(10, size = (2,10)) predicted_values_ = np.random.randint(2, size = (2,5)) # fit transform transformer = MinMaxScaler() transformer.fit(data_train) transformed = transformer.transform(data_train) ## model.... and execution inverted = transformer.inverse_transform(predicted_values_) </code></pre>
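The error comes from shape bookkeeping: the scaler learned 10 per-column ranges, then `inverse_transform` receives only 5 columns. One hedged sketch is a second scaler fitted on exactly the columns the model predicts; which five columns those are is an assumption here (shown as the first five):

```python
import numpy as np
from sklearn.preprocessing import MinMaxScaler

rng = np.random.default_rng(0)
data_train = rng.integers(0, 10, size=(2, 10)).astype(float)
predicted_values_ = rng.random(size=(2, 5))

# Assumption: the model predicts the first 5 columns of the data.
target_cols = slice(0, 5)
label_scaler = MinMaxScaler().fit(data_train[:, target_cols])

# This scaler learned 5 column ranges, so it can invert (n, 5) output.
inverted = label_scaler.inverse_transform(predicted_values_)
```

If the labels live in dedicated columns, fitting a scaler only on those label columns from the start avoids the mismatch entirely.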
<python><machine-learning><scikit-learn>
2023-05-01 16:24:29
1
730
Sadcow
76,148,307
8,112,003
Executing docker exec concurrently
<p>Actually the docker python sdk is working fine: <a href="https://docker-py.readthedocs.io/en/stable/client.html" rel="nofollow noreferrer">https://docker-py.readthedocs.io/en/stable/client.html</a></p> <p>But I tried to perform docker exec with asyncio package simultaneously. It seems not to be possible? How could jenkins do it?</p> <p>asyncio code:</p> <pre><code>import asyncio async def factorial(name, number): print(number) await asyncio.sleep(number) print(name) async def main(): # Schedule three calls *concurrently*: L = await asyncio.gather( factorial(&quot;A&quot;, 2), factorial(&quot;B&quot;, 3), factorial(&quot;C&quot;, 4), ) print(L) asyncio.run(main()) </code></pre> <p>and now the docker code:</p> <pre><code>async def gogo(): client = docker.DockerClient(base_url='unix://var/run/docker.sock') container = client.containers.create(image_parsed, detach=True, stdin_open=True, tty=True, entrypoint=&quot;bash&quot;) container.start() res = container.exec_run(cmd='bash -c &quot;echo hello stdout ; sleep 3s; echo hello stderr &gt;&amp;2; ls -a&quot;', stream=True, demux=False) #container.wait() while True: try: print(next(res.output)) except: break container.stop() container.remove() async def gogo_group(): print(f&quot;started at {time.strftime('%X')}&quot;) L = await asyncio.gather( gogo(), gogo() ) print(L) print(f&quot;finished at {time.strftime('%X')}&quot;) asyncio.run(gogo_group()) </code></pre> <p>You can observe the asyncio code to be executed simultaneously, but the docker code is executed sequentially. 
Any idea how to solve this?</p> <p>Following Paul Cornelius's comment I changed the code, but it doesn't help:</p> <pre><code>async def async_wrap(container): return container.exec_run(cmd='bash -c &quot;echo hello stdout ; sleep 3s; echo hello stderr &gt;&amp;2; ls -a&quot;', stream=True, demux=False) async def gogo(): client = docker.DockerClient(base_url='unix://var/run/docker.sock') container = client.containers.create(image_parsed, detach=True, stdin_open=True, tty=True, entrypoint=&quot;bash&quot;) container.start() res = await async_wrap(container) #container.wait() while True: try: print(next(res.output)) except: break container.stop() container.remove() async def gogo_group(): print(f&quot;started at {time.strftime('%X')}&quot;) L = await asyncio.gather( gogo(), gogo() ) print(L) print(f&quot;finished at {time.strftime('%X')}&quot;) </code></pre>
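`exec_run` is a blocking HTTP call, and wrapping it in `async def` does not change that: `await async_wrap(container)` still runs it inline on the event loop, so the two `gogo()` calls serialize. The usual fix is to push the blocking call into a thread with `run_in_executor` (or `asyncio.to_thread` on Python 3.9+). A sketch with `time.sleep` standing in for the docker-py call:

```python
import asyncio
import time

def blocking_exec(n):
    # Stand-in for the blocking docker-py call, e.g. container.exec_run(...).
    time.sleep(0.3)
    return n

async def gogo(n):
    loop = asyncio.get_running_loop()
    # Run the blocking call in a worker thread so gather() can overlap them.
    return await loop.run_in_executor(None, blocking_exec, n)

async def main():
    start = time.perf_counter()
    results = await asyncio.gather(gogo(1), gogo(2))
    return results, time.perf_counter() - start

results, elapsed = asyncio.run(main())
```

Inside the real `gogo()` this would look like `res = await loop.run_in_executor(None, lambda: container.exec_run(...))`, with the container start/stop calls treated the same way.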
<python><docker><python-asyncio><python-docker>
2023-05-01 15:55:05
1
752
Nikolai Ehrhardt
76,148,267
2,593,480
OpenAI API "AuthenticationError No API key provided" when switching to internalConsole in VS Code launch.json
<p>I'm using the OpenAI API in a Python script and have set the OPENAI_API_KEY as an environment variable in my system, which works fine when running the script in the integrated terminal. What I have used to set the key. <a href="https://help.openai.com/en/articles/5112595-best-practices-for-api-key-safety" rel="nofollow noreferrer">https://help.openai.com/en/articles/5112595-best-practices-for-api-key-safety</a></p> <p>However, when I change the debugging configuration in my launch.json file from &quot;console&quot;: &quot;integratedTerminal&quot; to &quot;console&quot;: &quot;internalConsole&quot;, the API key is not recognized, and I get an AuthenticationError.</p> <p>How can I properly set the API key when using the &quot;internalConsole&quot; configuration in VS Code so that the OpenAI API works correctly?</p>
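A likely cause: `integratedTerminal` runs the program through your shell, which has the exported `OPENAI_API_KEY`, while `internalConsole` spawns the process without sourcing that shell profile. Supplying the variable in launch.json sidesteps this. A hedged sketch (configuration name and paths are illustrative; on newer VS Code the `type` may be `debugpy`):

```json
{
    "version": "0.2.0",
    "configurations": [
        {
            "name": "Python: Current File",
            "type": "python",
            "request": "launch",
            "program": "${file}",
            "console": "internalConsole",
            "envFile": "${workspaceFolder}/.env"
        }
    ]
}
```

The referenced .env file would contain a line like `OPENAI_API_KEY=sk-...` and should be git-ignored; an inline `"env"` mapping in launch.json also works, but makes it easier to commit the key by accident.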
<python><visual-studio-code><vscode-debugger><openai-api>
2023-05-01 15:49:36
1
581
Yam Shargil
76,148,251
499,721
Is there a way to change gunicorn log level at runtime?
<p>I have a <a href="https://fastapi.tiangolo.com/" rel="nofollow noreferrer">FastApi</a> service running on <a href="https://gunicorn.org/" rel="nofollow noreferrer">Gunicorn</a> server in a K8s pod. Is there a way to change the log levels / settings at runtime, without restarting the server?</p>
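Gunicorn routes its output through the standard `logging` hierarchy (the `gunicorn.error` and `gunicorn.access` loggers), so a worker can raise or lower its own level at runtime, for example from an admin-only FastAPI endpoint. A hedged sketch of the helper such an endpoint could call; note that each Gunicorn worker is a separate process, so in a K8s pod the endpoint has to reach every worker (or the change be broadcast some other way):

```python
import logging

def set_gunicorn_log_level(level_name: str) -> int:
    # getLevelName maps "DEBUG" -> 10, "INFO" -> 20, etc.; for an unknown
    # name it returns a string, which we reject.
    level = logging.getLevelName(level_name.upper())
    if not isinstance(level, int):
        raise ValueError(f"unknown log level: {level_name}")
    for name in ("gunicorn.error", "gunicorn.access"):
        logging.getLogger(name).setLevel(level)
    return level
```

Wiring this to a `POST /admin/log-level` route in the FastAPI app would give the runtime switch without touching the Gunicorn master.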
<python><gunicorn>
2023-05-01 15:48:03
1
11,117
bavaza
76,148,181
10,901,843
How do I get Python Timedelta to rollover to the next day when calculating time elapsed?
<p>I have a function that takes in the hour and minute of two programs and returns the elapsed time. It works well when the end_time isn't the next day. If it's the next day, I get a negative number. What's the best way to resolve this?</p> <pre><code>def elapsed_time_seconds(hour_start,hour_end,minute_start,minute_end): hour_start = int(hour_start) minute_start = int(minute_start) hour_end = int(hour_end) minute_end = int(minute_end) t1 = timedelta(hours=hour_start,minutes=minute_start) t2 = timedelta(hours=hour_end,minutes=minute_end) return (t2-t1).total_seconds() </code></pre> <p>If a program starts at 11:30 PM, and ends at 3:00 AM, we expect it to return 12600 seconds. However this is what I get currently:</p> <pre><code>elapsed_time_seconds('23','03','00','00') -72000.0 </code></pre> <p>What is the best way to resolve this edge case? The function works fine when the start time and end time are on the same day.</p>
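With only times of day available, a negative difference can be interpreted as "the program ended the next day" and corrected by adding 24 hours. A sketch (note that 11:30 PM to 3:00 AM corresponds to the call `('23', '03', '30', '00')` and 12600 s, whereas the sample call `('23', '03', '00', '00')` describes 11:00 PM to 3:00 AM, i.e. 14400 s):

```python
from datetime import timedelta

def elapsed_time_seconds(hour_start, hour_end, minute_start, minute_end):
    t1 = timedelta(hours=int(hour_start), minutes=int(minute_start))
    t2 = timedelta(hours=int(hour_end), minutes=int(minute_end))
    diff = t2 - t1
    if diff < timedelta(0):
        # End time fell on the next day: roll the difference over 24 h.
        diff += timedelta(days=1)
    return diff.total_seconds()
```

This assumes no program runs 24 hours or longer; spans that long are indistinguishable from short ones when only clock times are kept.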
<python><pandas><datetime><timedelta>
2023-05-01 15:36:58
2
407
AI92
76,148,115
8,849,071
Mypy failing when you type hint with a more general type
<p>I was writing some code and I found the following behaviour in mypy:</p> <pre class="lang-py prettyprint-override"><code>from typing import Dict, TypedDict class SomeDictionary(TypedDict): key: str def function() -&gt; Dict: some_dictionary: SomeDictionary = {&quot;key&quot;: &quot;value&quot;} return some_dictionary </code></pre> <p>As you can see, in this example we have a <code>TypedDict</code>. We return one instance of that type inside a function, but the function signature is set to a more general type: a simple dict. I was expecting this to work pretty nicely, because at the end of the day, the typed dict <code>SomeDictionary</code> is a dictionary. So I'm returning the correct type. Nonetheless, I'm getting the following error:</p> <pre><code>main.py:10: error: Incompatible return value type (got &quot;SomeDictionary&quot;, expected &quot;Dict[Any, Any]&quot;) [return-value] </code></pre> <p>This can be reproduced in <a href="https://mypy-play.net/?mypy=latest&amp;python=3.11" rel="nofollow noreferrer">mypy playground</a>. Maybe I'm not understanding how mypy works correctly, but for me right now this behaviour is pretty weird. Is there a reason for it? Or any way to fix it, other than changing the signature of <code>function</code>?</p>
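The rejection is deliberate: `Dict[Any, Any]` is mutable, so a caller could `clear()` the result or insert keys of any type, violating `SomeDictionary`'s declared schema. TypedDicts are, however, compatible with the read-only `Mapping` type, so widening the annotation that way type-checks without touching the body. A sketch:

```python
from typing import Mapping, TypedDict

class SomeDictionary(TypedDict):
    key: str

def function() -> Mapping[str, object]:
    some_dictionary: SomeDictionary = {"key": "value"}
    # A TypedDict is assignable to a read-only Mapping, but not to a
    # mutable Dict whose methods could violate the declared schema.
    return some_dictionary
```

At runtime the returned object is still an ordinary `dict`; only the static contract changes.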
<python><mypy>
2023-05-01 15:25:40
0
2,163
Antonio Gamiz Delgado
76,148,104
1,205,281
How do I get the current gcloud configuration user/service account's id token in python
<p>I'm using gcloud configurations to handle my CLI access. (switching between them with <code>gcloud config configurations activate &lt;env_name&gt;</code>). I'm NOT using <code>GOOGLE_APPLICATION_CREDENTIALS</code> env var at all as I want to be able to switch between configurations/projects/accounts.</p> <p>It works well with resources like <code>google.cloud.firestore.Client()</code> which takes the current configuration .</p> <p>I'm trying to have <a href="https://cloud.google.com/functions/docs/securing/authenticating?_ga=2.21152633.-390339464.1679569752&amp;_gac=1.222690665.1679569752.Cj0KCQjw8e-gBhD0ARIsAJiDsaVA-1FKsiCWcO-VFu-VhuC4y5I9dWmnIGwPGykK87c7A5YUE8lPXv0aAuyjEALw_wcB#generating_tokens_programmatically" rel="nofollow noreferrer">authenticated calls</a> between my (python) cloud functions. When I try to get the token using -</p> <pre class="lang-py prettyprint-override"><code>auth_req = google.auth.transport.requests.Request() id_token = google.oauth2.id_token.fetch_id_token(auth_req, audience) </code></pre> <p>I'm getting <code>google.auth.exceptions.DefaultCredentialsError: Neither metadata server or valid service account credentials are found.</code> I'll note that in a real cloud function <code>fetch_id_token</code> works.</p> <p>I am able to get the token using cli command <code>gcloud auth print-identity-token</code>, but I want to get it using the python google auth library so it will work on both my local machine (using functions-framework) and in a real cloud function.</p> <p>Is it possible? am I approaching all of this in a wrong way?</p> <p>btw I'm using a Linux machine.</p>
<python><google-cloud-platform><google-cloud-functions>
2023-05-01 15:24:29
3
646
David Avikasis
76,147,945
1,912,104
How to concatenate string columns that contain NaN values?
<p>I have data that looks like this:</p> <pre class="lang-py prettyprint-override"><code>import pandas as pd import numpy as np mydict = { 'col1' : ['a', 'b', 'c'], 'col2' : ['d', np.NaN, 'e'], 'col3' : ['f', 'g', 'h'] } mydf = pd.DataFrame(mydict) </code></pre> <p>I want to concatenate these string columns. I try this but it doesn't work:</p> <pre class="lang-py prettyprint-override"><code>mydf['concat'] = mydf[['col1', 'col2', 'col3']].apply('-'.join, axis=1) </code></pre> <p>The error is <code>TypeError: sequence item 0: expected str instance, float found</code>.</p> <p>How can I make it work? It should skip the missing value and only concatenate the non-missing values. The outcome should look like this:</p> <pre><code>concat_dict = { 'col1' : ['a', 'b', 'c'], 'col2' : ['d', np.NaN, 'e'], 'col3' : ['f', 'g', 'h'], 'concat' : ['a-d-f', 'b-g', 'c-e-h'] } concat_df = pd.DataFrame(concat_dict) </code></pre>
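A row-wise sketch that drops the missing cells before joining, using the sample frame from the question:

```python
import numpy as np
import pandas as pd

mydf = pd.DataFrame({
    "col1": ["a", "b", "c"],
    "col2": ["d", np.nan, "e"],
    "col3": ["f", "g", "h"],
})

cols = ["col1", "col2", "col3"]
# dropna() removes each row's missing cells, so join() only sees strings.
mydf["concat"] = mydf[cols].apply(lambda row: "-".join(row.dropna()), axis=1)
```

An equivalent without `apply` is `mydf[cols].stack().groupby(level=0).agg('-'.join)`, since `stack` drops NaN by default; that variant is usually faster on large frames.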
<python><pandas><dataframe><numpy>
2023-05-01 15:02:15
2
851
NonSleeper
76,147,864
11,462,274
Reducing the number of ticklabels on x axis of the graph while keeping the curve with all the dataframe lines
<p>At the end of the question, there is an example of a CSV file with data starting from records on April 17, 2023 to April 30, 2023 in the &quot;open_local_data&quot; column.</p> <p>Well, note that that when trying to generate the graph points, they are reset to 1970 due to the formatting of the date and time column. However, if I convert the column to the date model provided by Pandas, the graph line is automatically changed to only a few daily points, but I would like to keep all unique points, only reducing the visual markings on the x-axis.</p> <p>What should I do?</p> <p><a href="https://i.sstatic.net/P5Jma.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/P5Jma.png" alt="enter image description here" /></a></p> <pre class="lang-python prettyprint-override"><code>import pandas as pd import matplotlib.pyplot as plt import matplotlib.ticker as ticker import matplotlib.dates as mdates def chart(): df_chart = pd.read_csv('example.csv') fig, ax = plt.subplots(figsize=(10.8, 10.8)) ax.plot(df_chart['open_local_data'], df_chart['cumulative_back'], linestyle='-', linewidth=2, color='blue') def format_ytick(value, pos): if value.is_integer(): return f'{int(value)}u' else: return f'{value:.2f}u' ax.yaxis.set_major_formatter(ticker.FuncFormatter(format_ytick)) ax.set_title('Example', color='white') ax.set_xlabel('Data', color='white') ax.set_ylabel('Profit &amp; Loss', color='white') locator = mdates.AutoDateLocator() formatter = mdates.ConciseDateFormatter(locator) ax.xaxis.set_major_locator(locator) ax.xaxis.set_major_formatter(formatter) ax.set_facecolor('black') ax.grid(color='orange', linewidth=0.5) ax.spines['bottom'].set_color('orange') ax.spines['bottom'].set_linewidth(2) ax.spines['left'].set_color('orange') ax.spines['left'].set_linewidth(2) ax.tick_params(colors='white') plt.savefig('example_chart.png', facecolor='black', dpi=500) if __name__ == '__main__': chart() </code></pre> <p><a href="https://i.sstatic.net/ZaGat.png" rel="nofollow 
noreferrer"><img src="https://i.sstatic.net/ZaGat.png" alt="enter image description here" /></a></p> <p>In my attempts, if I try to convert the column to Pandas' date format, the line and curve of the graph are not faithful to all 158 rows of the dataframe, the amount of points in the graph decreases and I don't want that to happen, I just want the legend to be changed for better visualization:</p> <pre class="lang-python prettyprint-override"><code>df['open_local_data'] = pd.to_datetime(df['open_local_data']) </code></pre> <p><a href="https://i.sstatic.net/6DLfp.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/6DLfp.png" alt="enter image description here" /></a></p> <pre class="lang-none prettyprint-override"><code>open_local_data,back,cumulative_back 2023-04-17 15:45:00,-1.0,-1.0 2023-04-17 16:00:00,0.68255,-0.31745 2023-04-17 16:00:00,-1.0,-1.31745 2023-04-18 15:45:00,0.7199500000000001,-0.5974999999999999 2023-04-18 15:45:00,0.21505,-0.38244999999999996 2023-04-18 15:45:00,1.0098,0.6273500000000001 2023-04-18 15:45:00,-1.0,-0.3726499999999999 2023-04-18 15:45:00,0.8976000000000001,0.5249500000000001 2023-04-18 16:00:00,-1.0,-0.47504999999999986 2023-04-18 16:00:00,1.7017,1.2266500000000002 2023-04-19 14:30:00,0.92565,2.1523000000000003 2023-04-19 15:45:00,1.0472,3.1995000000000005 2023-04-19 15:45:00,-1.0,2.1995000000000005 2023-04-19 15:45:00,2.9920000000000004,5.191500000000001 2023-04-19 16:00:00,-1.0,4.191500000000001 2023-04-19 16:00:00,-1.0,3.1915000000000013 2023-04-19 16:00:00,0.4114,3.6029000000000013 2023-04-19 16:00:00,-1.0,2.6029000000000013 2023-04-20 13:45:00,-1.0,1.6029000000000013 2023-04-20 14:30:00,-1.0,0.6029000000000013 2023-04-20 16:00:00,-1.0,-0.3970999999999987 2023-04-20 16:00:00,-1.0,-1.3970999999999987 2023-04-20 16:00:00,1.0659,-0.3311999999999986 2023-04-20 16:00:00,0.5236000000000001,0.19240000000000146 2023-04-20 16:00:00,0.7854000000000001,0.9778000000000016 2023-04-21 15:30:00,-1.0,-0.022199999999998443 2023-04-21 
15:45:00,1.8326,1.8104000000000016 2023-04-21 16:00:00,-1.0,0.8104000000000016 2023-04-21 16:00:00,-1.0,-0.18959999999999844 2023-04-21 16:00:00,0.1589499999999999,-0.03064999999999854 2023-04-22 08:00:00,-1.0,-1.0306499999999985 2023-04-22 08:30:00,1.3277,0.2970500000000016 2023-04-22 09:00:00,1.3464,1.6434500000000016 2023-04-22 10:00:00,-1.0,0.6434500000000016 2023-04-22 10:30:00,-1.0,-0.35654999999999837 2023-04-22 11:00:00,1.4025,1.0459500000000017 2023-04-22 11:00:00,-1.0,0.04595000000000171 2023-04-22 11:00:00,-1.0,-0.9540499999999983 2023-04-22 11:00:00,-1.0,-1.9540499999999983 2023-04-22 11:00:00,-1.0,-2.9540499999999983 2023-04-22 11:00:00,-1.0,-3.9540499999999983 2023-04-22 11:00:00,0.3926999999999999,-3.5613499999999982 2023-04-22 11:00:00,-1.0,-4.561349999999998 2023-04-22 11:00:00,0.9537,-3.6076499999999982 2023-04-22 11:15:00,0.6732,-2.9344499999999982 2023-04-22 12:45:00,0.2244,-2.710049999999998 2023-04-22 13:00:00,-1.0,-3.710049999999998 2023-04-22 13:30:00,0.5797000000000001,-3.130349999999998 2023-04-22 15:45:00,-1.0,-4.130349999999998 2023-04-22 16:00:00,1.0846000000000002,-3.045749999999998 2023-04-22 16:00:00,0.27115,-2.774599999999998 2023-04-22 16:30:00,0.2057,-2.5688999999999984 2023-04-23 08:00:00,-1.0,-3.5688999999999984 2023-04-23 08:30:00,0.73865,-2.8302499999999986 2023-04-23 09:00:00,0.8789,-1.9513499999999986 2023-04-23 09:30:00,1.1033000000000002,-0.8480499999999984 2023-04-23 10:00:00,-1.0,-1.8480499999999984 2023-04-23 10:00:00,1.5521000000000005,-0.29594999999999794 2023-04-23 10:00:00,0.9163,0.6203500000000021 2023-04-23 10:00:00,-1.0,-0.37964999999999793 2023-04-23 10:00:00,-1.0,-1.379649999999998 2023-04-23 10:30:00,0.8134500000000001,-0.5661999999999979 2023-04-23 11:15:00,0.9537,0.38750000000000207 2023-04-23 12:30:00,-1.0,-0.6124999999999979 2023-04-23 12:30:00,-1.0,-1.612499999999998 2023-04-23 13:00:00,0.5516500000000001,-1.060849999999998 2023-04-23 13:00:00,-1.0,-2.060849999999998 2023-04-23 
14:00:00,0.1402499999999999,-1.920599999999998 2023-04-23 15:45:00,1.6269000000000002,-0.29369999999999785 2023-04-23 16:00:00,-1.0,-1.2936999999999979 2023-04-24 14:00:00,-1.0,-2.2936999999999976 2023-04-24 14:00:00,-1.0,-3.2936999999999976 2023-04-24 15:45:00,1.4025,-1.8911999999999975 2023-04-24 16:00:00,1.1220000000000003,-0.7691999999999972 2023-04-24 16:15:00,0.4114,-0.35779999999999723 2023-04-25 14:30:00,-1.0,-1.3577999999999972 2023-04-25 14:30:00,-1.0,-2.3577999999999975 2023-04-25 15:30:00,1.2903,-1.0674999999999975 2023-04-25 15:45:00,0.92565,-0.14184999999999748 2023-04-25 15:45:00,0.7106,0.5687500000000025 2023-04-25 16:00:00,-1.0,-0.43124999999999747 2023-04-25 16:00:00,1.1781,0.7468500000000025 2023-04-25 16:00:00,-1.0,-0.25314999999999754 2023-04-25 16:00:00,-1.0,-1.2531499999999975 2023-04-25 17:00:00,-1.0,-2.2531499999999975 2023-04-26 14:30:00,-1.0,-3.2531499999999975 2023-04-26 14:30:00,0.3272500000000001,-2.9258999999999973 2023-04-26 15:00:00,0.5236000000000001,-2.402299999999997 2023-04-26 15:30:00,4.581500000000001,2.179200000000004 2023-04-26 15:30:00,-1.0,1.1792000000000038 2023-04-26 15:45:00,-1.0,0.1792000000000038 2023-04-26 15:45:00,1.0285000000000002,1.207700000000004 2023-04-26 16:00:00,0.90695,2.114650000000004 2023-04-26 16:00:00,0.9537,3.068350000000004 2023-04-26 16:00:00,0.66385,3.732200000000004 2023-04-26 17:00:00,-1.0,2.732200000000004 2023-04-26 17:00:00,-1.0,1.7322000000000042 2023-04-27 14:30:00,0.53295,2.265150000000004 2023-04-27 14:30:00,0.68255,2.947700000000004 2023-04-27 15:45:00,0.8228,3.770500000000004 2023-04-27 15:45:00,-1.0,2.770500000000004 2023-04-27 16:00:00,-1.0,1.7705000000000042 2023-04-27 16:15:00,-1.0,0.7705000000000042 2023-04-27 17:00:00,-1.0,-0.22949999999999582 2023-04-28 13:30:00,1.1407000000000005,0.9112000000000047 2023-04-28 15:30:00,-1.0,-0.08879999999999533 2023-04-28 15:45:00,1.6829999999999998,1.5942000000000045 2023-04-28 16:00:00,1.6643,3.258500000000004 2023-04-28 
16:00:00,1.2342,4.492700000000005 2023-04-28 16:00:00,1.2342,5.726900000000004 2023-04-28 16:15:00,1.2903,7.017200000000004 2023-04-29 08:30:00,1.5334,8.550600000000005 2023-04-29 10:30:00,-1.0,7.550600000000005 2023-04-29 10:30:00,0.70125,8.251850000000005 2023-04-29 10:30:00,0.3739999999999999,8.625850000000005 2023-04-29 10:30:00,-1.0,7.625850000000005 2023-04-29 11:00:00,0.3552999999999999,7.981150000000005 2023-04-29 11:00:00,-1.0,6.981150000000005 2023-04-29 11:00:00,-1.0,5.981150000000005 2023-04-29 11:00:00,0.6077499999999999,6.588900000000005 2023-04-29 11:00:00,0.7293000000000001,7.318200000000005 2023-04-29 11:00:00,-1.0,6.318200000000005 2023-04-29 11:15:00,-1.0,5.318200000000005 2023-04-29 11:30:00,0.2431,5.5613000000000055 2023-04-29 12:00:00,0.2805000000000001,5.841800000000005 2023-04-29 13:00:00,-1.0,4.841800000000005 2023-04-29 13:30:00,1.3838,6.225600000000005 2023-04-29 13:30:00,0.1402499999999999,6.365850000000005 2023-04-29 13:30:00,0.86955,7.235400000000006 2023-04-29 15:45:00,1.309,8.544400000000005 2023-04-29 16:00:00,0.3926999999999999,8.937100000000004 2023-04-29 16:30:00,0.2805000000000001,9.217600000000004 2023-04-29 16:30:00,1.1033000000000002,10.320900000000005 2023-04-29 18:30:00,-1.0,9.320900000000005 2023-04-30 07:30:00,0.83215,10.153050000000006 2023-04-30 08:00:00,-1.0,9.153050000000006 2023-04-30 08:00:00,2.1972500000000004,11.350300000000006 2023-04-30 09:00:00,-1.0,10.350300000000006 2023-04-30 09:30:00,1.0846000000000002,11.434900000000006 2023-04-30 10:00:00,1.4399000000000002,12.874800000000006 2023-04-30 10:00:00,-1.0,11.874800000000006 2023-04-30 10:00:00,0.7480000000000001,12.622800000000005 2023-04-30 10:00:00,0.2618,12.884600000000004 2023-04-30 10:00:00,0.2431,13.127700000000004 2023-04-30 10:00:00,-1.0,12.127700000000004 2023-04-30 10:00:00,-1.0,11.127700000000004 2023-04-30 10:00:00,1.0472,12.174900000000004 2023-04-30 10:30:00,0.1028500000000001,12.277750000000005 2023-04-30 11:15:00,0.6358,12.913550000000004 
2023-04-30 12:05:00,-1.0,11.913550000000004 2023-04-30 12:30:00,0.5423000000000001,12.455850000000005 2023-04-30 13:00:00,0.3459500000000001,12.801800000000005 2023-04-30 14:00:00,0.21505,13.016850000000005 2023-04-30 15:45:00,0.2992000000000001,13.316050000000006 2023-04-30 15:45:00,-1.0,12.316050000000006 2023-04-30 16:00:00,0.66385,12.979900000000006 2023-04-30 16:30:00,0.2244,13.204300000000005 2023-04-30 18:10:00,0.7854000000000001,13.989700000000006 </code></pre>
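A quick check worth noting here: `pd.to_datetime` does not remove any rows. With raw strings, matplotlib gives every row its own categorical x tick; after conversion, rows that share a timestamp simply overlap at the same x position, which can look like fewer points. A minimal sketch (the inline CSV is a small stand-in for `example.csv`, an assumption of this example):

```python
import pandas as pd
from io import StringIO

# Stand-in for example.csv; note the duplicated timestamp on two rows.
csv_data = StringIO(
    "open_local_data,cumulative_back\n"
    "2023-04-17 15:45:00,-1.0\n"
    "2023-04-17 16:00:00,-0.31745\n"
    "2023-04-17 16:00:00,-1.31745\n"
)
df = pd.read_csv(csv_data)
rows_before = len(df)

# Converting to real datetimes keeps every row; duplicates now share one x position.
df['open_local_data'] = pd.to_datetime(df['open_local_data'])

print(rows_before == len(df))  # True: no rows are lost by the conversion
```

With the column as real datetimes, `AutoDateLocator`/`ConciseDateFormatter` in the question's `chart()` should then thin out only the axis labels, not the plotted points.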
<python><pandas><matplotlib>
2023-05-01 14:51:28
1
2,222
Digital Farmer
76,147,835
20,285,962
How to change theme in JupyterLab Desktop
<p>I've installed the JupyterLab Desktop application found here:</p> <p><a href="https://github.com/jupyterlab/jupyterlab-desktop" rel="nofollow noreferrer">jupyterlab-desktop</a></p> <p>and wanted to change the theme, noticing only two options: Jupyter Dark and Light. So naturally, with the extensions setting, I've tried to install a theme such as monokai as such,</p> <pre><code>jupyter labextension install @hokyjack/jupyterlab-monokai-plus </code></pre> <p>Then I get the following warning:</p> <pre><code>(Deprecated) Installing extensions with the jupyter labextension install command is now deprecated and will be removed in a future major version of JupyterLab. Users should manage prebuilt extensions with package managers like pip and conda, and extension authors are encouraged to distribute their extensions as prebuilt packages </code></pre> <p>So is there no way to change the theme in JupyterLab Desktop other than the default?</p>
<python><jupyter-notebook><themes><jupyter><jupyter-lab>
2023-05-01 14:46:52
0
319
Shane Gervais
76,147,662
2,071,807
Pydantic Custom Data Types can't be optional in FastAPI route
<p>Following the Pydantic <a href="https://docs.pydantic.dev/latest/usage/types/#custom-data-types" rel="nofollow noreferrer">custom data types</a> instructions, I've created a type which attempts to get strings from a single comma-separated string like:</p> <pre><code>&quot;foo,bar,baz&quot; -&gt; [&quot;foo&quot;, &quot;bar&quot;, &quot;baz&quot;] </code></pre> <p>It works fine:</p> <pre class="lang-py prettyprint-override"><code>class CommaSeparatedString(str): @classmethod def __get_validators__(cls): yield cls.split_on_comma @classmethod def split_on_comma(cls, v: str | None): if not v: return None if not isinstance(v, str): raise TypeError(&quot;String required&quot;) return [item.strip() for item in v.split(&quot;,&quot;)] @app.get(&quot;/&quot;) def applications_index( strings: CommaSeparatedString | None, ): print(strings) </code></pre> <p>But FastAPI doesn't honour the optional nature of the <code>strings</code> param.</p> <p>It complains about missing fields when I make a request without <code>strings</code> in the querystring:</p> <pre><code>{'detail': [{'loc': ['query', 'strings'], 'msg': 'field required', 'type': 'value_error.missing'}] </code></pre>
<python><fastapi><optional-parameters>
2023-05-01 14:19:27
1
79,775
LondonRob
76,147,208
1,739,325
How to post a second question via POST call to huggingface chat?
<p>So this code works pretty good I get first desired output to the <a href="https://huggingface.co/chat/conversation/" rel="nofollow noreferrer">https://huggingface.co/chat/conversation/</a>:</p> <pre><code>from requests.sessions import Session from json import loads prompt = &quot;Explain first condition in english?&quot; session = Session() session.get(url=&quot;https://huggingface.co/chat/&quot;) res = session.post(url=&quot;https://huggingface.co/chat/conversation&quot;) assert res.status_code == 200, &quot;Failed to create new conversation&quot; conversation_id = res.json()[&quot;conversationId&quot;] url = f&quot;https://huggingface.co/chat/conversation/{conversation_id}&quot; max_tokens = int(2000) - len(prompt) if max_tokens &gt; 1904: max_tokens = 1904 res = session.post( url=url, json={ &quot;inputs&quot;: prompt, &quot;parameters&quot;: { &quot;temperature&quot;: 0.5, &quot;top_p&quot;: 0.95, &quot;repetition_penalty&quot;: 1.2, &quot;top_k&quot;: 50, &quot;truncate&quot;: 1024, &quot;watermark&quot;: False, &quot;max_new_tokens&quot;: max_tokens, &quot;stop&quot;: [&quot;&lt;|endoftext|&gt;&quot;], &quot;return_full_text&quot;: False, }, &quot;stream&quot;: False, &quot;options&quot;: {&quot;use_cache&quot;: False}, }, stream=False, ) try: data = res.json() except ValueError: print(&quot;Invalid JSON response&quot;) data = {} data = data[0] if data else {} data.get(&quot;generated_text&quot;, &quot;&quot;) </code></pre> <p>And it returns such output:</p> <pre><code>'Sure! The first condition you mentioned .... Is there something specific you would like me to explain about this condition?' 
</code></pre> <p>However I dont know how to send second request?</p> <p>Next code completely ruins the chat:</p> <pre><code>res = session.post( url=url, json={ &quot;inputs&quot;: &quot;GIVE more information&quot;, &quot;parameters&quot;: { &quot;temperature&quot;: 0.5, &quot;top_p&quot;: 0.95, &quot;repetition_penalty&quot;: 1.2, &quot;top_k&quot;: 50, &quot;truncate&quot;: 1024, &quot;watermark&quot;: False, &quot;max_new_tokens&quot;: max_tokens, &quot;stop&quot;: [&quot;&lt;|endoftext|&gt;&quot;], &quot;return_full_text&quot;: False, }, &quot;stream&quot;: False, &quot;options&quot;: {&quot;use_cache&quot;: False}, }, stream=False, ) try: data = res.json() except ValueError: print(&quot;Invalid JSON response&quot;) data = {} data.get(&quot;generated_text&quot;, &quot;&quot;) </code></pre> <p>Output:</p> <pre><code>{'error': 'Model is overloaded', 'error_type': 'overloaded'} </code></pre>
<python><post><request><huggingface>
2023-05-01 13:10:35
0
5,851
Rocketq
76,147,137
14,900,791
Python constant uuid token in class (property does not work)
<p>I need a class that remembers its constant token (uuid). When I use the property decorator as <a href="https://stackoverflow.com/a/45068730/14900791">described here</a>, <strong>the <code>property</code> will prevent the <code>uuid</code> function from changing, but not the value</strong>:</p> <pre class="lang-py prettyprint-override"><code>from uuid import uuid4 class Tokenizer: @property def token(self): return uuid4().hex t = Tokenizer() print(t.token) print(t.token) </code></pre> <p>This prints different tokens, so how can I achieve the <strong>constant token so the output will be the same</strong>, without passing token to the class as argument?</p> <p>Edit: <code>t.__token</code> can be changed but does not affect <code>t.token</code>. Do you know why?</p> <pre class="lang-py prettyprint-override"><code>from uuid import uuid4 class Tokenizer: def __init__(self): self.__token = uuid4().hex @property def token(self): return self.__token t = Tokenizer() print(t.token) print(t.token) </code></pre>
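One way to get a per-instance constant without passing it as an argument is `functools.cached_property` (Python 3.8+), which runs the getter once and caches the result on the instance. A minimal sketch:

```python
from functools import cached_property
from uuid import uuid4

class Tokenizer:
    @cached_property
    def token(self):
        # Runs once per instance on first access; the result is then cached.
        return uuid4().hex

t = Tokenizer()
first = t.token
print(first == t.token)  # True: the token is constant for this instance
```

On the edit: name mangling applies only inside the class body, where `self.__token` is rewritten to `self._Tokenizer__token`. An assignment like `t.__token = 'x'` at module level creates a separate attribute literally named `__token`, so the property, which reads `_Tokenizer__token`, is unaffected.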
<python><properties><constants>
2023-05-01 13:02:28
1
1,171
Jurakin
76,147,033
14,442,010
Method Not Allowed (submit form with jQuery in Django)
<p>I want to submit my &quot;contact us&quot; form on my Django website with jQuery.</p> <p>I'm working on development server.</p> <p>The error I receive in the browser and Django shell is:</p> <pre><code>Method Not Allowed (POST): / Method Not Allowed: / [01/May/2023 17:08:34] &quot;POST / HTTP/1.1&quot; 405 0 </code></pre> <p>This is my view:</p> <pre class="lang-py prettyprint-override"><code>def is_ajax(request): return request.META.get('HTTP_X_REQUESTED_WITH') == 'XMLHttpRequest' def submit_contact_form(request): if is_ajax(request=request): sender_name = request.POST.get('sender_name') sender_email = request.POST.get('sender_email') message_subject = request.POST.get('message_subject') message_body = request.POST.get('message_body') remote_addr = request.META.get('REMOTE_ADDR') remote_host = request.META.get('REMOTE_HOST') context = { 'from': sender_name, 'email': sender_email, 'subject': message_subject, 'message': message_body, 'IP': remote_addr, 'HOST': remote_host, } html_message = render_to_string('homepage/submit.contact.form.html', context) send_mail( f'Contact Form Submission: {message_subject}', f'{message_body}', 'myemail@mydomain.com', ['myemail@mydomain.com', ], html_message=html_message, ) return JsonResponse({}, status=200) else: return redirect('homepage:index') </code></pre> <p>This is the jQuery code stored in a <code>js</code> file:</p> <pre><code>$('#contact-form').submit(function(e) { e.preventDefault(); $('#sent-message').css(&quot;display&quot;, &quot;none&quot;); $('#loading').css(&quot;display&quot;, &quot;block&quot;); $.ajax({ type: &quot;POST&quot;, url: &quot;/submit-contact-form/&quot;, data: { sender_name: $('#name').val(), sender_email: $('#email').val(), message_subject: $('#subject').val(), message_body: $('#message').val(), csrfmiddlewaretoken: csrfToken, datatype: &quot;json&quot;, }, success: function(){ $('#loading').css(&quot;display&quot;, &quot;none&quot;); $('#sent-message').css(&quot;display&quot;, &quot;block&quot;); 
$('#contact-form').trigger('reset'); $('#sent-message').delay(3000).fadeOut(&quot;slow&quot;); }, error: function(){ $('#loading').css(&quot;display&quot;, &quot;none&quot;); $('#error-message').css(&quot;display&quot;, &quot;block&quot;); $('#error-message').append(&quot;An error has occured. Please try again later.&quot;); } }); }); </code></pre> <p>As far as I can tell, this should work. I'm sure I'm making some stupid mistakes somewhere.</p> <p>Please assist.</p>
<python><jquery><django>
2023-05-01 12:45:09
1
359
Omid Shojaee
76,146,972
6,687,699
Heroku deployment failed yet the build has passed
<p>I am facing this error on Heroku but I don't know the issue :</p> <pre><code>Traceback (most recent call last): File &quot;/app/manage.py&quot;, line 11, in main from django.core.management import execute_from_command_line ModuleNotFoundError: No module named 'django' The above exception was the direct cause of the following exception: Traceback (most recent call last): File &quot;/app/manage.py&quot;, line 22, in &lt;module&gt; main() File &quot;/app/manage.py&quot;, line 13, in main raise ImportError( ImportError: Couldn't import Django. Are you sure it's installed and available on your PYTHONPATH environment variable? Did you forget to activate a virtual environment? </code></pre> <p>But my requirements.txt file is fine :</p> <pre><code>asgiref==3.6.0 cachetools==5.3.0 certifi==2022.12.7 chardet==3.0.4 charset-normalizer==2.1.1 dj-database-url==1.2.0 Django==3.2 django-bunny-storage==0.1.2 django-countries==7.5.1 django-extensions==3.2.1 django-phone-auth==0.3.1 django-phonenumber-field==7.0.2 django-storages==1.13.2 djangorestframework==3.14.0 djangorestframework-jwt==1.11.0 djangorestframework-simplejwt==5.2.2 google-api-core==2.11.0 google-auth==2.17.3 google-cloud-core==2.3.2 google-cloud-storage==2.8.0 google-crc32c==1.5.0 google-resumable-media==2.5.0 googleapis-common-protos==1.59.0 gunicorn==20.1.0 idna==2.10 isort==5.12.0 phonenumbers==8.13.5 Pillow==9.4.0 protobuf==4.22.3 psycopg2-binary==2.9.5 pyasn1==0.5.0 pyasn1-modules==0.3.0 pycountry==22.3.5 PyJWT==1.7.1 python-dateutil==2.8.2 python-dotenv==0.15.0 pytz==2022.7.1 requests==2.28.1 rsa==4.9 six==1.16.0 sqlparse==0.4.3 twilio==6.63.2 typing_extensions==4.4.0 urllib3==1.26.12 uuid==1.30 whitenoise==6.4.0 </code></pre> <p>Here's my Procfile :</p> <pre><code>web: gunicorn pixsar.wsgi --log-file - release: python manage.py migrate </code></pre> <p>and my gitlab.yml :</p> <pre><code>image: name: docker/compose:1.25.4 entrypoint: [&quot;&quot;] services: - docker:dind variables: DOCKER_HOST: 
tcp://docker:2375 DOCKER_DRIVER: overlay2 stages: - deploy-stag deploy-stag: stage: deploy-stag image: ruby:latest before_script: - gem install dpl - wget -qO- https://cli-assets.heroku.com/install-ubuntu.sh | sh script: - dpl --provider=heroku --app=$HEROKU_APP_NAME --api-key=$HEROKU_API_KEY - export HEROKU_API_KEY=$HEROKU_API_KEY - heroku run --app $HEROKU_APP_NAME python manage.py migrate - heroku run --app $HEROKU_APP_NAME python manage.py collectstatic --noinput only: - develop </code></pre> <p>What is wrong here</p>
<python><django>
2023-05-01 12:33:06
0
4,030
Lutaaya Huzaifah Idris
76,146,824
19,325,656
Improve performance of qs with multiple filters and exclude statements
<p>I'm writing an app that has &quot;social media capabilities&quot;. You can block someone you can wave to someone etc...</p> <p>I have <code>get_queryset</code> for getting all users you didn't see on the app or blocked them with multiple exclude and filter statements that look like this.</p> <pre class="lang-py prettyprint-override"><code>def get_queryset(self): current_id = self.request.user.id blocked_users = Blocked.objects.filter(user__id=current_id).values('blocked_user') users_taken_action = UserWave.objects.filter(user__id=current_id).values('waved_user') users_not_taken_action = Profile.objects.exclude(user__in=users_taken_action) \ .exclude(user__in=blocked_users).exclude(user__id=current_id) return users_not_taken_action </code></pre> <p>I'm new to optimization and when I see this function I get a feeling that this is not too efficient. Im wrong or is there a better way to handle this?</p> <p><strong>models</strong></p> <pre class="lang-py prettyprint-override"><code>from django.contrib.auth.models import User class UserWave(models.Model): id = models.UUIDField(primary_key=True, default=uuid.uuid4, editable=False) user = models.ForeignKey(User, on_delete=models.CASCADE, related_name=&quot;current_user&quot;) waved_user = models.ForeignKey(User, on_delete=models.CASCADE, related_name=&quot;waved_user&quot;) class Blocked(models.Model): id = models.UUIDField(primary_key=True, default=uuid.uuid4, editable=False) user = models.ForeignKey(User, on_delete=models.CASCADE, related_name=&quot;user_thats_blocking&quot;) blocked_user = models.ForeignKey(User, on_delete=models.CASCADE, related_name=&quot;blocked_user&quot;) class Profile(models.Model): id = models.UUIDField(primary_key=True, default=uuid.uuid4, editable=False) user = models.OneToOneField(User, on_delete=models.CASCADE) bio = models.TextField(null=True, blank=True) </code></pre>
<python><django><django-models><django-rest-framework><django-queryset>
2023-05-01 12:11:57
1
471
rafaelHTML
76,146,799
4,755,229
Group consecutive True in 1-D numpy array
<p>Suppose we have a boolean array <code>x=np.array([True, True, False, True, False])</code>. There are two consecutive groups of <code>True</code>. What I want is to create a list of boolean arrays <code>l</code> where each array in <code>l</code> contains exactly one set of consecutive <code>True</code>. For instance, <code>x</code> should be identical to <code>y</code> defined by</p> <pre class="lang-python prettyprint-override"><code>y = np.zeros_like(x) for e in l: y = y|e </code></pre> <p>So far my only successful attempt on this is using the <code>consecutive</code> function by <a href="https://stackoverflow.com/a/7353335/4755229">https://stackoverflow.com/a/7353335/4755229</a></p> <pre class="lang-python prettyprint-override"><code>def consecutive_bools(bool_input): consecutive_idx = consecutive(np.argwhere(bool_input).flatten()) ret = [np.zeros_like(bool_input) for i in range(len(consecutive_idx))] for i, idx in enumerate(consecutive_idx): ret[i][idx] = True return ret </code></pre> <p>This seems overly complicated. Is there any better (concise, and possibly faster) way of doing this?</p>
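One vectorized sketch that drops the `consecutive` helper entirely: pad the array with `False` on both sides so every run has a detectable start and end, then read the run boundaries off `np.diff`:

```python
import numpy as np

def consecutive_true_masks(x):
    """Return one boolean mask per run of consecutive True values in x."""
    # Pad with False so every run has a start edge (+1) and an end edge (-1).
    padded = np.concatenate(([False], x, [False]))
    edges = np.flatnonzero(np.diff(padded.astype(np.int8)))
    starts, ends = edges[::2], edges[1::2]  # run starts / one-past-run ends
    masks = []
    for s, e in zip(starts, ends):
        m = np.zeros_like(x)
        m[s:e] = True
        masks.append(m)
    return masks

x = np.array([True, True, False, True, False])
l = consecutive_true_masks(x)
print(len(l))  # 2
```

OR-ing the returned masks together reconstructs the original array, matching the `y` check in the question.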
<python><numpy>
2023-05-01 12:07:16
3
498
Hojin Cho
76,146,740
15,966,103
Django inheriting from base.html not displaying content on child page
<p>I have a <code>base.html</code> file:</p> <pre><code> {% load static %} {% load custom_tags %} &lt;!DOCTYPE html&gt; &lt;html lang=&quot;en&quot;&gt; &lt;head&gt; &lt;script src=&quot;https://code.jquery.com/jquery-3.6.0.min.js&quot;&gt;&lt;/script&gt; {% comment %} &lt;script type=&quot;text/javascript&quot; src=&quot;{% static 'dashboardscript.js' %}&quot; defer&gt;&lt;/script&gt; {% endcomment %} &lt;script src=&quot;https://cdn.jsdelivr.net/npm/chart.js&quot;&gt;&lt;/script&gt; &lt;script src=&quot;//cdn.jsdelivr.net/npm/sweetalert2@11&quot;&gt;&lt;/script&gt; &lt;title&gt;Codera | Dashboard&lt;/title&gt; &lt;/head&gt; &lt;body&gt; {% block nav %} &lt;div class=&quot;nav-container&quot;&gt; &lt;img src=&quot;{% static 'levelstatic/assets/coderalogo.svg' %}&quot; style=&quot;margin: 0 auto; width: 80%; margin-bottom: 25px;&quot; alt=&quot;CODERA&quot;&gt; &lt;div class=&quot;nav-links&quot;&gt; &lt;div class=&quot;nav-link nav-active&quot;&gt; &lt;img src=&quot;{% static '/dashimages/learn.svg' %}&quot; &gt; &lt;a&gt;LEARN&lt;/a&gt; &lt;/div&gt; &lt;div class=&quot;nav-link&quot;&gt; &lt;img src=&quot;{% static '/dashimages/review.svg' %}&quot; &gt; &lt;a&gt;REVIEW&lt;/a&gt; &lt;/div&gt; &lt;div class=&quot;nav-link&quot;&gt; &lt;img src=&quot;{% static '/dashimages/progress.svg' %}&quot; &gt; &lt;a&gt;PROGRESS&lt;/a&gt; &lt;/div&gt; &lt;div class=&quot;nav-link&quot;&gt; &lt;img src=&quot;{% static '/dashimages/compete.svg' %}&quot; &gt; &lt;a&gt;COMPETE&lt;/a&gt; &lt;/div&gt; &lt;/div&gt; &lt;/div&gt; {% endblock %} &lt;/body&gt; &lt;/html&gt; </code></pre> <p>And I inherit from the <code>base.html</code> file on my <code>dashboard.html</code> file:</p> <pre><code>{% extends &quot;base.html&quot; %} {% load static %} {% block content %} &lt;h1&gt;Hello World!&lt;/h1&gt; {% endblock %} </code></pre> <p>However, the <code>&lt;h1&gt;</code> does not show up and instead all that is displayed is the content from the <code>base.html</code>. 
Could someone possibly help me with this? Why is the <code>&lt;h1&gt;</code> not displaying?</p>
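A likely cause: in Django template inheritance, a child template can only fill blocks that the parent declares, and the `base.html` above declares only `{% block nav %}`. The child's `{% block content %}` therefore has nowhere to render and is silently dropped. A minimal fix is to declare a matching block in `base.html`, for example just before the closing body tag:

```
{% block content %}
{% endblock %}
```

Anything the child puts in `{% block content %}` then replaces this empty block in the rendered page.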
<python><django><django-templates>
2023-05-01 11:57:43
1
918
jahantaila
76,146,571
6,550,449
How to make Celery I/O bound tasks execute concurrently?
<p>The only thing my Celery task is doing is making an API request and sending the response back to a <code>Redis</code> queue. What I'd like to achieve is to utilize as many resources as possible by executing tasks in a coroutine-like fashion. This way, every time a coroutine hits <code>requests.post()</code>, the context switcher can switch and allocate resources to another coroutine to send one more request, and so forth.</p> <p>As I understand it, to achieve this, my worker has to run with a <code>gevent</code> execution pool:</p> <pre><code>celery worker --app=worker.app --pool=gevent --concurrency=500 </code></pre> <p>But it doesn't solve the problem on its own. I have found that (probably) for it to work as expected we need monkey patching:</p> <pre><code>@app.task def task_make_request(payload): import gevent.monkey gevent.monkey.patch_all() requests.post('url', payload) </code></pre> <p>The questions:</p> <ol> <li>Is <code>Gevent</code> the only execution pool that can be used for this goal?</li> <li>Will <code>patch_all</code> make <code>requests.post()</code> asynchronous so that the context switcher can allocate resources to other coroutines?</li> <li>What is the preferred way of achieving cooperative multitasking behavior for celery tasks with a single I/O bound operation (API call)?</li> </ol>
<python><asynchronous><celery><gevent><greenlets>
2023-05-01 11:31:29
1
927
Taras Mykhalchuk
76,146,556
15,656,276
How to build CNN in Pytorch for RGB images?
<p>I am building a CNN in Pytorch. Below is the code I would use for grayscale input images:</p> <pre class="lang-py prettyprint-override"><code>import torch.nn as nn import torch.nn.functional as F class Net(nn.Module): def __init__(self): super(Net, self).__init__() # 1x1x28x28 to 32x1x28x28 self.conv1 = nn.Conv2d(in_channels=1, out_channels=32, kernel_size=3, stride=1, padding=1) # 32x1x28x28 to 64x1x28x28 self.conv2 = nn.Conv2d(in_channels=32, out_channels=64, kernel_size=3, stride=1, padding=1) # 64x1x28x28 to 64x1x14x14 self.pool1 = nn.MaxPool2d(kernel_size=2, stride=2) # 64x1x14x14 to 128x1x14x14 self.conv3 = nn.Conv2d(in_channels=64, out_channels=128, kernel_size=3, stride=1, padding=1) # 128x1x14x14 to 128x1x7x7 self.pool2 = nn.MaxPool2d(kernel_size=2, stride=2) # 128x1x7x7 to 128 self.fc1 = nn.Linear(in_features=128*7*7, out_features=128) # 128 to 27 (no. of classes) self.fc2 = nn.Linear(in_features=128, out_features=27) def forward(self, x): x = F.relu(self.conv1(x)) x = F.relu(self.conv2(x)) x = self.pool1(x) x = F.relu(self.conv3(x)) x = self.pool2(x) x = x.view(-1, 128*7*7) x = F.relu(self.fc1(x)) x = F.relu(self.fc2(x)) return x </code></pre> <p>Do I have to change any of this code to adapt it to RGB images? If so, what are these changes?</p> <p>FYI:</p> <ul> <li>The input images have the shape of 28x28</li> <li>They are RGB (3 color channels)</li> </ul> <p>Thank you so much.</p>
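Assuming PyTorch is installed, the only structural change RGB requires is `in_channels=3` on the first convolution; the later layers consume feature maps, not input channels, so `128*7*7` still holds for 28x28 inputs. (Also note the shape comments in the question: after `conv1` the tensor is `Nx32x28x28`, not `32x1x28x28`, since channels replace the second dimension.) A sketch, which additionally drops the final `ReLU` so the network returns raw logits suitable for `CrossEntropyLoss`:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class Net(nn.Module):
    def __init__(self):
        super().__init__()
        # The only change needed for RGB input: 3 input channels instead of 1.
        self.conv1 = nn.Conv2d(in_channels=3, out_channels=32, kernel_size=3, stride=1, padding=1)
        self.conv2 = nn.Conv2d(32, 64, kernel_size=3, stride=1, padding=1)
        self.pool1 = nn.MaxPool2d(kernel_size=2, stride=2)
        self.conv3 = nn.Conv2d(64, 128, kernel_size=3, stride=1, padding=1)
        self.pool2 = nn.MaxPool2d(kernel_size=2, stride=2)
        self.fc1 = nn.Linear(128 * 7 * 7, 128)   # 28 -> 14 -> 7 spatially, unchanged
        self.fc2 = nn.Linear(128, 27)

    def forward(self, x):
        x = F.relu(self.conv1(x))
        x = F.relu(self.conv2(x))
        x = self.pool1(x)
        x = F.relu(self.conv3(x))
        x = self.pool2(x)
        x = x.view(-1, 128 * 7 * 7)
        x = F.relu(self.fc1(x))
        return self.fc2(x)  # raw logits; no final ReLU when using CrossEntropyLoss

out = Net()(torch.randn(2, 3, 28, 28))  # batch of 2 RGB 28x28 images
print(out.shape)
```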
<python><pytorch><conv-neural-network>
2023-05-01 11:29:36
1
520
chai
76,146,386
15,803,668
Installing langchain and kor in PyCharm failing due to failed building wheel for greenlet
<p>I want to use <code>kor</code> and <code>langchain</code> in <code>PyCharm</code>. I import the packages using:</p> <pre><code># kor from kor.extraction import create_extraction_chain from kor.nodes import Object, Text, Number # LangChain Models from langchain.chat_models import ChatOpenAI from langchain.llms import OpenAI </code></pre> <p>But installing the latest versions of both packages in <code>PyCharm</code> I run into the same error:</p> <blockquote> <pre><code> xcrun: error: invalid active developer path (/Library/Developer/CommandLineTools), missing xcrun at: </code></pre> <p>/Library/Developer/CommandLineTools/usr/bin/xcrun error: command '/usr/bin/gcc' failed with exit code 1 [end of output] note: This error originates from a subprocess, and is likely not a problem with pip. ERROR: Failed building wheel for greenlet ERROR: Could not build wheels for greenlet, which is required to install pyproject.toml-based projects</p> </blockquote> <p>Building <code>greenlet</code> on its own for my os, results in the same error. However, installing the packages in the terminal with pip works fine. The problem only occurs in <code>PyCharm</code>.</p> <p>I use</p> <ul> <li>macOS 13.3.1</li> <li>Python 3.9.9</li> <li>PyCharm 2023.1.1 (Community Edition)</li> <li>Pip 23.1.2</li> </ul> <p><strong>Edit:</strong></p> <p>I tried the following in the terminal to install <code>langchain</code> to <code>PyCharm</code></p> <pre><code>&gt; /Users/user/PycharmProjects/test_openai/venv/bin/activate &gt; /Users/user/PycharmProjects/test_openai/venv/bin/python &gt; /Applications/PyCharm.app/Contents/plugins/python-ce/helpers/packaging_tool.py &gt; install langchain </code></pre> <p>It gives me, even though the file <code>packaging_tool.py</code> is in the directory:</p> <blockquote> <p>can't open file /Applications/PyCharm.app/Contents/plugins/python-ce/helpers/packaging_tool.py: [Errno 2] No such file or directory</p> </blockquote>
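The `xcrun: error: invalid active developer path` part of the log usually means the macOS Command Line Tools are missing or stale (common after an OS upgrade), independent of PyCharm itself; the terminal install may have succeeded simply because pip found a prebuilt greenlet wheel there. A likely fix, run in a terminal (assumption: macOS, as in the question):

```
xcode-select --install     # reinstall the Command Line Tools
sudo xcode-select --reset  # reset a stale developer path after an OS upgrade
```

After that, compiling greenlet from PyCharm should work; alternatively, point the PyCharm project at the same interpreter/venv in which the terminal install already succeeded.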
<python><pip><pycharm>
2023-05-01 10:59:17
1
453
Mazze
76,146,250
16,250,404
Calculate required start date for work order excluding weekends to reach shipping date
<p>Suppose there is an order which we have to ship on 28/04/2023 12:00 PM and to finish that order we need 11d:03h:36m:10s time. Find datetime when we should start working on that order.</p> <p>Conditions:</p> <ol> <li>Planned start date should not fall in Friday 22:00 PM to Sunday 22:00 PM.</li> <li>Also exclude Friday 22:00 PM to Sunday 22:00 PM if in Start and Ship date range.</li> </ol> <p>In short we don't have to consider Friday 22:00 PM to Sunday 22:00 PM at any cost.</p> <p>I tried with this code. It uses an initial guess for the start date by subtracting the required time of the work order from the shipping date and then tries to adjust it by taking weekends into account:</p> <pre><code>import datetime from datetime import timedelta, time import pytz excluded_start_time = time(22, 0) excluded_end_time = time(22, 0) excluded_days = {4, 5, 6} # straight subtraction of ship date and operation time. # no friday and sunday logic applied start_date = datetime.datetime(2023, 4, 17, 8, 23, 50) end_date = datetime.datetime(2023, 4, 28, 12, 0, 0) def count_weekends(start_date, end_date): while end_date.date() &gt; start_date.date() or (start_date.weekday() in excluded_days and excluded_start_time &lt;= start_date &lt;=excluded_end_time): friday = datetime.datetime.combine(start_date + timedelta(days=(4-start_date.weekday())), datetime.time(hour=22)) sunday = datetime.datetime.combine(start_date + timedelta(days=(6-start_date.weekday())), datetime.time(hour=22)) if friday &lt;= end_date &lt;= sunday: start_date = start_date - timedelta(hours=48) end_date = end_date - timedelta(hours=24) return start_date print(count_weekends(start_date, end_date)) </code></pre> <p>But I got incorrect results, it returns 11/04/2023 08:23 AM, the correct answer should be 13/04/2023 8:23 AM.</p>
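One backward-walking sketch (the helper names are mine, not from any library): repeatedly locate the most recent weekend window (Friday 22:00 to Sunday 22:00) before the current point; if the current point sits inside that window, jump to the window's start without consuming work time, otherwise consume work time back to the window's end.

```python
from datetime import datetime, timedelta, time

WINDOW_HOURS = 48  # Friday 22:00 -> Sunday 22:00

def latest_window_start(dt):
    """Start of the most recent weekend-exclusion window strictly before dt."""
    days_since_friday = (dt.weekday() - 4) % 7  # Monday=0 ... Friday=4
    start = datetime.combine(dt.date() - timedelta(days=days_since_friday), time(22, 0))
    if start >= dt:
        start -= timedelta(days=7)
    return start

def required_start(ship_date, work_time):
    """Walk backwards from ship_date, counting only non-excluded time."""
    cur, remaining = ship_date, work_time
    while remaining > timedelta(0):
        win_start = latest_window_start(cur)
        win_end = win_start + timedelta(hours=WINDOW_HOURS)
        if cur <= win_end:
            cur = win_start  # inside the weekend window: skip over it for free
        else:
            chunk = min(remaining, cur - win_end)  # usable time back to window end
            cur -= chunk
            remaining -= chunk
    return cur

ship = datetime(2023, 4, 28, 12, 0, 0)
work = timedelta(days=11, hours=3, minutes=36, seconds=10)
print(required_start(ship, work))  # 2023-04-13 08:23:50
```

For the example in the question this yields 13/04/2023 08:23:50, matching the expected start of 13/04/2023 8:23 AM.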
<python><datetime><timedelta>
2023-05-01 10:34:20
2
933
Hemal Patel
76,146,187
1,330,734
Unpacking a list of tuples of func, *args, **kwargs / command design pattern
<p>I have a routine that will take a list of tuples in the form:</p> <pre><code>(function, [optional] arguments, [optional] keyword arguments) </code></pre> <p>as part of a Command design pattern implementation.</p> <pre><code>ORG1_PROCESS = [(add_header_row, df, COLUMN_LABELS['ORG1']), (func_1, {'abc' = 123}), (func_2), ] ORG2_PROCESS = [(add_header_row, df, COLUMN_LABELS['ORG2']), ] def process_commands(a_list: List[Tuple[Any, ...]]) -&gt; None: for item in a_list: (func, *args, **kwargs) = item if kwargs: func(*args, **kwargs) elif args: func(*args) else: func() </code></pre> <p>I want to execute like this:</p> <pre><code>process_commands(ORG1_PROCESS) </code></pre> <p>to execute the equivalent of:</p> <pre><code>add_header_row(df, COLUMN_LABELS['ORG1']) func_1(abc=123) func_2() </code></pre> <p>However, I have a syntax error:</p> <pre><code> (func, *args, **kwargs) = item ^ SyntaxError: invalid syntax </code></pre> <p>I thought that is valid tuple unpacking in Python 3.9.7. Any advice on what am I missing here would be great!</p>
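The `SyntaxError` is expected: starred unpacking (`func, *args = item`) is legal in an assignment target, but `**kwargs` is only valid in function signatures and call sites, never on the left of `=`. (Separately, `(func_2)` is just `func_2`, since a one-element tuple needs a trailing comma, and `{'abc' = 123}` should be `{'abc': 123}`.) One workaround is a convention where a trailing dict in the tuple carries the keyword arguments; this sketch assumes no command passes a dict as a positional argument:

```python
def process_commands(commands):
    for item in commands:
        if not isinstance(item, tuple):   # allow bare callables like func_2
            item = (item,)
        func, *args = item                # *args unpacking is fine; **kwargs is not
        kwargs = {}
        # Convention (an assumption of this sketch): a trailing dict holds kwargs.
        if args and isinstance(args[-1], dict):
            kwargs = args.pop()           # args is a list after starred unpacking
        func(*args, **kwargs)

calls = []
process_commands([
    (calls.append, "first"),
    (lambda **kw: calls.append(kw), {"abc": 123}),
    lambda: calls.append("done"),
])
print(calls)  # ['first', {'abc': 123}, 'done']
```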
<python><syntax-error><iterable-unpacking>
2023-05-01 10:19:36
1
490
user1330734
76,146,181
992,687
How do I fix misquoted CSV files to allow parsing them with the csv module?
<p>I'd like the below code to avoid splitting within double quotes, but it does:</p> <pre><code>import csv from io import StringIO contents = &quot;&quot;&quot; gene &quot;Tagln2&quot;; note &quot;putative; transgelin 2 (MGD|MGI:1312985 GB|BC049861, evidence: BLASTN, 99%, match=1379)&quot;; product &quot;transgelin-2&quot;; protein_id &quot;NP_848713.1&quot;; tag &quot;RefSeq Select&quot;; exon_number &quot;4&quot;; &quot;&quot;&quot; for l in csv.reader(StringIO(contents), delimiter=&quot;;&quot;, quotechar='&quot;', skipinitialspace=True, quoting=csv.QUOTE_MINIMAL): print(l) </code></pre> <p>outputs:</p> <pre><code>['gene &quot;Tagln2&quot;', 'note &quot;putative', 'transgelin 2 (MGD|MGI:1312985 GB|BC049861, evidence: BLASTN, 99%, match=1379)&quot;', 'product &quot;transgelin-2&quot;', 'protein_id &quot;NP_848713.1&quot;', 'tag &quot;RefSeq Select&quot;', 'exon_number &quot;4&quot;', ''] </code></pre> <p>You can see that it splits within the double quotes so that <code>note &quot;putative; transgelin 2&quot;</code> becomes <code>['note &quot;putative', 'transgelin 2']</code>. How do I fix this?</p>
<python><csv>
2023-05-01 10:18:53
2
32,459
The Unfun Cat
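Because the quotes in the data above do not start at the field boundary (`note "putative; ..."`), the `csv` module never treats them as quoting, so `quotechar` has no effect. A regex split that only breaks on semicolons outside double quotes is one hedged alternative (a sketch for this specific quoting shape, not a general CSV parser):

```python
import re

# Split on ';' only when it is followed by an even number of '"'
# characters up to end of line, i.e. when the ';' sits outside quotes.
SPLIT_RE = re.compile(r';(?=(?:[^"]*"[^"]*")*[^"]*$)')

line = 'gene "Tagln2"; note "putative; transgelin 2"; exon_number "4";'
fields = [f.strip() for f in SPLIT_RE.split(line) if f.strip()]
```

This keeps `note "putative; transgelin 2"` as a single field because the inner `;` is followed by an odd number of quotes.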
76,146,072
11,644,523
Dynamic pivot or lateral flatten in Snowflake / Snowpark to columns
<p>Given this sample table in Snowflake:</p> <pre><code>CREATE OR REPLACE TABLE vnt (src variant) AS SELECT parse_json(column1) as src FROM values ('{&quot;a&quot;: 1,&quot;b&quot;: 2,&quot;c&quot;: 3}'), ('{&quot;a&quot;: 1,&quot;b&quot;: 2,&quot;c&quot;: 3,&quot;d&quot;: 4}'); select * from vnt; </code></pre> <p>I would like to output a table with two rows such as</p> <div class="s-table-container"> <table class="s-table"> <thead> <tr> <th>a</th> <th>b</th> <th>c</th> <th>d</th> </tr> </thead> <tbody> <tr> <td>1</td> <td>2</td> <td>3</td> <td>NULL</td> </tr> <tr> <td>1</td> <td>2</td> <td>3</td> <td>4</td> </tr> </tbody> </table> </div> <p>meaning that I would like to flatten the JSON data into columns instead of rows. I tried in Snowpark to flatten it, but I have the problem with the pivot as it does not work. And since the Keys can dynamically change, how could I handle this?</p> <pre><code>import snowflake.snowpark as snowpark def main(session: snowpark.Session): df = session.sql(&quot;select * from vnt&quot;) df = df.join_table_function(&quot;flatten&quot;, df[&quot;SRC&quot;]) \ .drop([&quot;SEQ&quot;, &quot;SRC&quot;, &quot;PATH&quot;, &quot;INDEX&quot;, &quot;THIS&quot;]) df = df.pivot(&quot;VALUE&quot;,['a','b','c','d']).min(&quot;KEY&quot;) # Return value will appear in the Results tab. return df </code></pre>
<python><snowflake-cloud-data-platform>
2023-05-01 10:01:10
1
735
Dametime
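When the pivot column list cannot be known up front, one client-side fallback (a sketch independent of Snowpark, shown here in plain Python to make the key-union logic explicit) is to compute the union of keys across all records and widen each row, which is exactly what the desired output table amounts to:

```python
import json

rows = ['{"a": 1, "b": 2, "c": 3}',
        '{"a": 1, "b": 2, "c": 3, "d": 4}']
records = [json.loads(r) for r in rows]

# Union of keys across all records, in first-seen order.
columns = []
for rec in records:
    for key in rec:
        if key not in columns:
            columns.append(key)

# Keys missing from a record become None (NULL in the table).
table = [[rec.get(col) for col in columns] for rec in records]
```

In Snowpark the same idea applies: collect the distinct `KEY` values first, then pass that list to `pivot`, since the pivot values must be concrete at call time.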
76,146,041
12,535,999
Scaling Matplotlib Colorbar
<p>I do many scatterplots (x vs. y) with a colormap to visualize a third variable (z). Since I do this for many z-variables, this has to work without manual intervention. Now the problem is that some z-variables have only a few very high values, so the scaling of the colormap is not useful (see figure).</p> <p>How can I scale the colormap without scaling the z-values while still showing all data points? The overall colormap should cover the whole range of z, but the actual color gradient should only span the meaningful range, e.g. -2 * StdDev to +2 * StdDev. So the gradient should be roughly centered, depending on the data distribution.</p> <p>Here is my code so far:</p> <pre><code>def linreg(x, y, z, alpha=0.5, size=15): figsize = [size, 0.75*size] mpl.rcParams[&quot;font.size&quot;] = 1.5625*size a, b = np.polyfit(x, y, 1) plt.figure(figsize=figsize) plt.scatter(x=x, y=y, c=z, marker=&quot;o&quot;, alpha=alpha, cmap=&quot;viridis&quot;) plt.colorbar(label=z.name) plt.xlabel(x.name) plt.ylabel(y.name) plt.plot(x, a*x+b, color=&quot;k&quot;) return(a, b, Rsq(y, a*x+b)) </code></pre> <p><a href="https://i.sstatic.net/fKXkW.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/fKXkW.png" alt="enter image description here" /></a></p>
<python><matplotlib><plot><colormap>
2023-05-01 09:56:39
0
315
Scrabyard
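One hedged approach to the scaling problem above: compute clipping limits from the z-distribution (mean ± 2·σ here, an assumed choice of `k`) and pass them as `vmin`/`vmax` to `plt.scatter`. Points outside the limits are still drawn; they just saturate at the end colors instead of stretching the whole gradient:

```python
import statistics

def robust_color_limits(z, k=2.0):
    # Center the color gradient on the bulk of the data; outliers
    # beyond k standard deviations saturate instead of stretching
    # the colormap.
    mean = statistics.fmean(z)
    std = statistics.pstdev(z)
    return mean - k * std, mean + k * std

# usage sketch inside linreg():
#   vmin, vmax = robust_color_limits(z)
#   plt.scatter(x=x, y=y, c=z, vmin=vmin, vmax=vmax, cmap="viridis")
```

Matplotlib's `CenteredNorm` or percentile-based limits (e.g. the 2nd/98th percentiles) are alternative ways to pick the window if the data is far from normal.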
76,145,777
1,581,090
How to fix "pywinauto" not starting a windows application?
<p>I am trying to use python 3.10.11 on windows 10 to start a windows application. The command I am using is</p> <pre><code>from pywinauto.application import Application path = r&quot;C:\Program Files (x86)\Main_App\SafetyToolbox\ConfigurationRegister.exe&quot; app = Application(backend='uia').connect(path=path, timeout=300) </code></pre> <p>with the <strong>correct</strong> path. I double checked the path multiple times, and when I double-click on the <code>exe</code> file in the File Explorer the application starts without problem.</p> <p>Why can't I start the application with <code>pywinauto</code>?</p>
<python><windows><pywinauto>
2023-05-01 09:01:58
1
45,023
Alex
76,145,761
461,499
Use poetry to create binary distributable with pyinstaller on 'package'?
<p>I think I'm missing something simple.</p> <p>I have a Python Poetry application:</p> <pre><code>name = &quot;my-first-api&quot; version = &quot;0.1.0&quot; description = &quot;&quot; readme = &quot;README.md&quot; packages = [{include = &quot;application&quot;}] [tool.poetry.scripts] start = &quot;main:start&quot; [tool.poetry.dependencies] python = &quot;&gt;=3.10,&lt;3.12&quot; pip= &quot;23.0.1&quot; setuptools=&quot;65.5.0&quot; fastapi=&quot;0.89.1&quot; uvicorn=&quot;0.20.0&quot; [tool.poetry.group.dev.dependencies] pyinstaller = &quot;^5.10.1&quot; pytest = &quot;^7.3.1&quot; [build-system] requires = [&quot;poetry-core&quot;] build-backend = &quot;poetry.core.masonry.api&quot; </code></pre> <p>I can run this and build this using Poetry; however, I would like to be able to create the executable with a Poetry script as well.</p> <p>Now I build it like this:</p> <p><code>poetry run pyinstaller main.py --collect-submodules application --onefile --name myapi</code></p> <p>I would like something like</p> <p><code>poetry package</code> to automatically create this executable as well. How do I hook that up?</p> <p>Btw. this does not work :(</p> <pre><code>[tool.poetry.scripts] start = &quot;main:start&quot; builddist = &quot;poetry run pyinstaller main.py --collect-submodules application --onefile --name myapi&quot; </code></pre>
<python><package><executable><python-poetry>
2023-05-01 08:58:48
4
20,319
Rob Audenaerde
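The `builddist` entry above fails because `[tool.poetry.scripts]` entries must point at a Python callable in `module:function` form, not at a shell command line. One hedged workaround is a small wrapper module; the file name `build.py` and the entry `builddist = "build:main"` are assumed names, not Poetry conventions:

```python
# build.py -- expose the pyinstaller invocation as a callable so it can
# be wired up via:  [tool.poetry.scripts]  builddist = "build:main"
import subprocess

def pyinstaller_args(name="myapi"):
    # Kept separate from main() so the command line is easy to inspect.
    return [
        "pyinstaller", "main.py",
        "--collect-submodules", "application",
        "--onefile", "--name", name,
    ]

def main():
    # Invoked via: poetry run builddist
    subprocess.run(pyinstaller_args(), check=True)
```

There is no built-in `poetry package` hook for this; the script entry point (or an external task runner) is the usual substitute.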
76,145,647
11,241,501
Enhance Tabula for accurate text with layout extraction
<p>I extracted all the text from a PDF using tabula and it works well, but my PDF has borderless tables, and in some rows only a single column is present with the width of 3 columns, so tabula puts all the text into a single column.</p> <p>Let me explain with an example. I highlighted the line in the image with a blue arrow. If I remove this line, all the text is extracted in 3 columns, but as it stands it is extracted into a single column.</p> <p><a href="https://i.sstatic.net/Wk0EL.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Wk0EL.png" alt="Here is example" /></a></p> <pre><code>from tabula import read_pdf from tabulate import tabulate #reads table from pdf file df = read_pdf(&quot;abc.pdf&quot;,pages=&quot;all&quot;, guess=False, lattice=False, stream=True, multiple_tables=True) print(tabulate(df)) </code></pre> <p>Is there any way to fix the tabula configuration, or can you suggest any other library or tool that could help?</p>
<python><pdf><pdfbox><tabula>
2023-05-01 08:34:03
0
312
Sybghatallah Marwat
76,145,539
8,621,405
How to prevent overlapping of subplots in Plotly using `make_subplots`
<p>I am creating a dashboard to visualize data using the Plotly and Dash packages in Python.</p> <p>I use the <code>prepare_plot_data</code> and <code>prepare_data_trend</code> functions to prepare the data for plotting.</p> <p>I am trying to create two subplots in one figure using <code>make_subplots</code> to set up the figure layout. However, the two subplots overlap each other at the same position.</p> <p>As far as I understand, <code>make_subplots</code> allows the position of the subplot to be defined using row and column indices. In my case, I set the rows to 2 and the columns to 1. This should create two positions in the figure: one at (1, 1) and another at (2, 1).</p> <p>When I plot the first heatmap graph, I set the row to 1 and the column to 1, expecting it to be placed at the (1, 1) position. For the second plot, I set the row to 2 and the column to 1, expecting it to be placed at (2, 1). However, the outcome does not match my expectations based on how I imagine it should look.</p> <p>Does anyone know how to separate the subplots into different positions? 
Or do I misunderstand how Plotly's <code>make_subplots</code> works?</p> <pre><code>from datetime import date, datetime from functools import reduce import numpy as np import pandas as pd import plotly.express as px import plotly.graph_objects as go from dash import Dash, Input, Output, dcc, html from loguru import logger from plotly.subplots import make_subplots from scipy import stats class OverView: def __init__(self): self.catologs = [&quot;1&quot;,&quot;2&quot;,&quot;3&quot;,] def prepare_plot_data(self, start_date, end_date): data = [ {&quot;date&quot;:&quot;2020-01-01&quot;, &quot;1&quot;:0.1}, {&quot;date&quot;:&quot;2020-01-01&quot;, &quot;2&quot;:-0.1}, {&quot;date&quot;:&quot;2020-01-01&quot;, &quot;3&quot;:0.9}, {&quot;date&quot;:&quot;2020-02-01&quot;, &quot;1&quot;:0.1}, {&quot;date&quot;:&quot;2020-02-01&quot;, &quot;2&quot;:0.1}, {&quot;date&quot;:&quot;2020-02-01&quot;, &quot;3&quot;:0.1}, {&quot;date&quot;:&quot;2020-03-01&quot;, &quot;1&quot;:0.1}, {&quot;date&quot;:&quot;2020-03-01&quot;, &quot;2&quot;:0.1}, {&quot;date&quot;:&quot;2020-03-01&quot;, &quot;3&quot;:0.1}, ] return pd.DataFrame(data) def prepare_data_trend(self, start_date, end_date): data = [ {&quot;year-month&quot;:&quot;2020-01&quot;, &quot;id&quot;:&quot;1&quot;, &quot;normal_slope&quot;:0.1}, {&quot;year-month&quot;:&quot;2020-01&quot;, &quot;id&quot;:&quot;2&quot;, &quot;normal_slope&quot;:-0.1}, {&quot;year-month&quot;:&quot;2020-01&quot;, &quot;id&quot;:&quot;3&quot;, &quot;normal_slope&quot;:0.9}, {&quot;year-month&quot;:&quot;2020-02&quot;, &quot;id&quot;:&quot;1&quot;, &quot;normal_slope&quot;:0.12}, {&quot;year-month&quot;:&quot;2020-02&quot;, &quot;id&quot;:&quot;2&quot;, &quot;normal_slope&quot;:0.15}, {&quot;year-month&quot;:&quot;2020-02&quot;, &quot;id&quot;:&quot;3&quot;, &quot;normal_slope&quot;:-0.15}, {&quot;year-month&quot;:&quot;2020-03&quot;, &quot;id&quot;:&quot;1&quot;, &quot;normal_slope&quot;:0.5}, {&quot;year-month&quot;:&quot;2020-03&quot;, 
&quot;id&quot;:&quot;2&quot;, &quot;normal_slope&quot;:0.3}, {&quot;year-month&quot;:&quot;2020-03&quot;, &quot;id&quot;:&quot;3&quot;, &quot;normal_slope&quot;:0.2}, ] return pd.DataFrame(data) def plot(self, start_date=None, end_date=None): df = self.prepare_plot_data(start_date, end_date) df_trend = self.prepare_data_trend(start_date, end_date) fig = make_subplots( rows=2, cols=1, row_heights=[0.6, 0.4], specs=[ [{&quot;type&quot;: &quot;xy&quot;, &quot;colspan&quot;: 1, &quot;rowspan&quot;: 1}], [{&quot;type&quot;: &quot;xy&quot;, &quot;colspan&quot;: 1, &quot;rowspan&quot;: 1, &quot;secondary_y&quot;: True}], ], horizontal_spacing=0, vertical_spacing=0.01, ) fig1 = go.Heatmap( x=df_trend['year-month'].tolist(), y=df_trend['id'].tolist(), z=df_trend['normal_slope'].tolist(), colorbar=dict(title='heat'), ) fig.add_trace(fig1, row=1, col=1,) fig.update_traces(colorbar_orientation='h', selector=dict(type='heatmap')) for catolog in self.catologs: fig.add_trace( go.Scatter( x=df[&quot;date&quot;].values, y=df[catolog].values, name=catolog, ), secondary_y=True, row=2, col=1, ) return fig def run_dash(self): app = Dash(__name__) app.layout = html.Div( [ html.H1(&quot;tmp Overview&quot;), dcc.DatePickerRange( id=&quot;my-date-picker-range&quot;, min_date_allowed=date(1990, 1, 1), max_date_allowed=date(2100, 12, 31), initial_visible_month=date(2021, 1, 1), start_date=date(2018, 1, 1), end_date=date(2023, 12, 31), ), dcc.Graph(id=&quot;graph&quot;), ], ) @app.callback( Output(&quot;graph&quot;, &quot;figure&quot;), [ Input(&quot;my-date-picker-range&quot;, &quot;start_date&quot;), Input(&quot;my-date-picker-range&quot;, &quot;end_date&quot;), ], ) def update_output(start_date, end_date): return self.plot(start_date, end_date) app.run_server(debug=True) OverView().run_dash() </code></pre> <p>the result is as below <a href="https://i.sstatic.net/zLtUw.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/zLtUw.png" alt="enter image description here" /></a></p> 
<p>the correct result should have two graphs as the below <a href="https://i.sstatic.net/QXxM3.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/QXxM3.png" alt="enter image description here" /></a> <a href="https://i.sstatic.net/FEZEQ.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/FEZEQ.png" alt="enter image description here" /></a></p>
<python><python-3.x><plotly-dash><heatmap><plotly>
2023-05-01 08:10:16
1
325
CYC
76,145,466
11,162,983
How to compute Euler angles from Quaternion form?
<p>I am trying to use the Quaternion form instead of rotation matrices.</p> <p>I already trained my model, but when I need to test the model, I must compute Euler angles from the Quaternion form as this person did with <a href="https://github.com/papagina/RotationContinuity/blob/758b0ce551c06372cab7022d4c0bdf331c89c696/Inverse_Kinematics/code/tools.py#LL413C1-L413C68" rel="nofollow noreferrer">computing Euler angles from rotation matrices</a>.</p> <p>For example, the output in Quaternion form for B = batch_size = 2 (-&gt; Bx4) is :</p> <pre><code>tensor([[ 0.0725, -0.0645, 0.0308, 0.9948], [-0.5235, -0.2456, 0.0824, 0.8117]], device='cuda:0') </code></pre> <p>And, the output in rotation matrices form for B = batch_size = 2 (-&gt; Bx3x3) :</p> <pre><code>tensor([[[ 0.9933, -0.0871, 0.0755], [ 0.0850, 0.9959, 0.0316], [-0.0779, -0.0250, 0.9966]], [[ 0.7244, -0.0551, -0.6871], [ 0.2493, 0.9503, 0.1866], [ 0.6427, -0.3065, 0.7021]]], device='cuda:0') </code></pre> <p><strong>Note:</strong> this is output for batch_size; it is not one frame or one image.</p> <p>Could you help me with how we can compute Euler angles from Quaternion form?</p>
<python><quaternions><euler-angles>
2023-05-01 07:54:21
0
987
Redhwan
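A hedged, dependency-free conversion for the `(x, y, z, w)` layout shown above, using the common roll-pitch-yaw (ZYX) convention. The convention must match the one used when the model was trained, which the question does not pin down, so treat the axis order as an assumption; applied row-wise it maps each Bx4 quaternion row to three angles:

```python
import math

def quaternion_to_euler(x, y, z, w):
    """Return (roll, pitch, yaw) in radians for a unit quaternion (x, y, z, w)."""
    roll = math.atan2(2.0 * (w * x + y * z), 1.0 - 2.0 * (x * x + y * y))
    sinp = 2.0 * (w * y - z * x)
    # Clamp at the poles (gimbal lock) so rounding never leaves asin's domain.
    pitch = math.copysign(math.pi / 2, sinp) if abs(sinp) >= 1.0 else math.asin(sinp)
    yaw = math.atan2(2.0 * (w * z + x * y), 1.0 - 2.0 * (y * y + z * z))
    return roll, pitch, yaw
```

For batched tensors the same formulas apply elementwise with `torch.atan2` / `torch.asin` on the four component columns.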
76,145,455
2,186,349
Python AzureAppConfiguration SDK and Azure CLI SDK refuse to authenticate
<p>So I'm having all sorts of problems trying to authenticate a Python environment. My ultimate goal here is to use the Azure AppConfiguration SDK to read values from an AppConfig service in Azure. But right now I'm failing at the very first hurdle.</p> <p>FYI I'm running all of this on a Windows 11 machine using VS Code (run as Administrator). I have the latest Azure CLI installed.</p> <p>My <strong>Azure App Config</strong> script is as follows:</p> <pre><code> from azure.identity import DefaultAzureCredential from azure.appconfiguration import AzureAppConfigurationClient appCfgUrl = os.environ[&quot;AppConfigUrl&quot;] credential = DefaultAzureCredential() client = AzureAppConfigurationClient(appCfgUrl, credential) value = client.get_configuration_setting(configKey) </code></pre> <p>But this results in the error:</p> <pre><code>DefaultAzureCredential failed to retrieve a token from the included credentials. Attempted credentials: EnvironmentCredential: Authentication failed: AADSTS70011: The provided request must include a 'scope' input parameter. The provided value for the input parameter 'scope' is not valid. The scope https://Endpoint=https://foo.azconfig.io;Id=XXXXXX;Secret=YYYYYY=/.default is not valid. 
</code></pre> <p>So I thought maybe there was something wrong with the AppConfig SDK, or the way I'm implementing it, so I thought I'd try going direct and just connecting straight to the <strong>Azure CLI</strong> directly using the following code as suggested by <a href="https://stackoverflow.com/questions/62983437/authenticating-azure-cli-with-python-sdk">Authenticating Azure CLI with Python SDK</a></p> <pre><code>import os from azure.cli.core import get_default_cli az_cli = get_default_cli() clientId = os.environ[&quot;AZURE_CLIENT_ID&quot;] clientSecret = os.environ[&quot;AZURE_CLIENT_SECRET&quot;] tenantId = os.environ[&quot;AZURE_TENANT_ID&quot;] az_cli.invoke(['login', '--service-principal', '-u', clientId, '-p', clientSecret, '--tenant',tenantId]) </code></pre> <p>However, this just results in the error message</p> <pre><code>No module named 'azure.cli.command_modules'. 'login' is misspelled or not recognized by the system. </code></pre> <p>I've tried both <em>pip install azure.cli.core</em> as well as <em>pip install azure-cli</em></p> <p>I've also upgraded the Azure CLI on my machine. I can successfully authenticate to the Azure CLI in both Bash and PowerShell without any issues - and I have another .NET C# project which uses Azure CLI without problems.</p> <hr /> <p>At this point I'm really tearing my hair out. I've tried using hard-coded values for the service principal, as well as using locally defined Environment Variables - but none of them work.</p> <p>As mentioned, I've done all of this in .NET in both Unit Tests, an Azure Function App and an Azure Web App without any issues at all, but doing this in Python appears to have just hit a hard stop.</p>
<python><azure><azure-cli><azure-app-configuration>
2023-05-01 07:52:22
1
401
Martin Hatch
76,145,394
2,707,342
Run Commands From Frontend Client in a Kubernetes Pod Using Python Kubernetes Client
<p>I am trying to build a Linux Playground using Django, Django Channels, Docker, and Kubernetes. Technically, how this would work is a user will head to the terminal route where a terminal-like interface will be displayed and they will be able to run Linux commands and interact with it just like they are interacting with an actual Linux Terminal.</p> <p>The workflow will be as follows:</p> <ol> <li>User will head to the /terminal route and enter a command</li> <li>The command will be passed to the backend via WebSocket</li> <li>The backend will execute the command in a different Kubernetes pod created specifically for the user.</li> </ol> <p>Here is what I have on the backend:</p> <pre><code>class TerminalConsumer(WebsocketConsumer): def __init__(self, *args, **kwargs): super().__init__(*args, **kwargs) config.load_incluster_config() # Load the in-cluster configuration configuration.assert_hostname = False self.core_api = client.CoreV1Api() # Create a CoreV1Api instance def connect(self): print(&quot;CONNECTING&quot;) self.accept() def disconnect(self, close_code): pass def receive(self, text_data): text_data_json = json.loads(text_data) command = text_data_json['message'] try: output = self.execute_command_in_pod(command) logging.info(f&quot;SENDING: \n{output}&quot;) self.send(text_data=json.dumps({'message': output})) except Exception as e: self.send(text_data=json.dumps({'message': str(e)})) def execute_command_in_pod(self, command): pod_name = 'linux-playground' # The name of the Pod running the Linux Playground namespace = 'default' # The namespace in which the Pod is running container_name = 'linux-playground' # The name of the container inside the Pod command_list = ['/bin/bash', '-c', command] # Execute the command inside the Pod and stream the output resp = stream.stream( self.core_api.connect_get_namespaced_pod_exec, name=pod_name, namespace=namespace, container=container_name, command=command_list, stderr=True, stdin=False, stdout=True, tty=False, 
_preload_content=False, ) output_str = &quot;&quot; while resp.is_open(): resp.update(timeout=60) if resp.peek_stdout(): stdout_data = resp.read_stdout() logging.info(f&quot;STDOUT: \n{stdout_data}&quot;) output_str += stdout_data if resp.peek_stderr(): stderr_data = resp.read_stderr() logging.info(stderr_data) output_str += stderr_data if resp.returncode != 0: raise Exception(&quot;Script failed&quot;) # Close the stream resp.close() return output_str </code></pre> <p>This works as expected; I can run the command in the pod specified and get back the output, but the problem is I lose the state when I run the second command. For example, if I <code>cd</code> into some directory, the second command will not run in the directory I checked into.</p> <p>What could be a potential solution for production-grade applications?</p>
<python><linux><docker><kubernetes><terminal>
2023-05-01 07:37:06
1
571
Harith
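One production-pattern sketch for the lost-state problem above: keep per-user session state on the server (here just the working directory) and re-establish it around every one-shot exec. The class below demonstrates the idea against a local `/bin/sh`; in the real setup the same wrapped command string would be sent through `connect_get_namespaced_pod_exec`. The `__CWD__` marker and the naive quoting are illustrative assumptions:

```python
import subprocess

class StatefulRunner:
    MARK = "__CWD__"            # sentinel line carrying the final cwd

    def __init__(self, cwd="/"):
        self.cwd = cwd

    def run(self, command):
        # Re-enter the saved directory, run the command, then report
        # where the shell ended up so that 'cd' survives across calls.
        wrapped = f'cd "{self.cwd}" && {command}; echo "{self.MARK}$PWD"'
        out = subprocess.run(["/bin/sh", "-c", wrapped],
                             capture_output=True, text=True).stdout
        kept = []
        for line in out.splitlines():
            if line.startswith(self.MARK):
                self.cwd = line[len(self.MARK):]
            else:
                kept.append(line)
        return "\n".join(kept)
```

The fuller alternative is a single long-lived exec with `tty=True` that streams the WebSocket bytes straight into the pod's shell, so all shell state (cwd, environment, history) lives in the pod itself.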
76,145,334
11,649,567
Error while trying to call np.ravel_multi_index function in python to convert subscripts to linear Indices for Three-Dimensional Array
<p>I am trying to use the np.ravel_multi_index function in order to convert y,x matrix subscripts to a linear index. Both y and x are ndarrays of shape/dimension (240,1236,4). Basically, the line of code I used is as follows:</p> <pre><code>lin_indices = np.ravel_multi_index((y, x), dims=x.shape) </code></pre> <p>I added an image which shows how the y and x arrays are built: <a href="https://i.sstatic.net/5FPYE.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/5FPYE.png" alt="enter image description here" /></a></p> <p>When I call the above function, I get the following error:</p> <p>{ValueError}ValueError('parameter multi_index must be a sequence of length 3')</p> <p>In order to create y and x parameters of your own, you can use the following lines:</p> <pre><code>x = np.random.randint(0, 100, size=(240, 1236, 4)) y = np.random.randint(0, 100, size=(240, 1236, 4)) </code></pre> <p>Any help would be greatly appreciated.</p> <p>Edit: thanks to the comments below, I see that for a three-dimensional array I need to provide 3 index arguments, one for every dimension. But I would like help building it if possible.</p>
<python><python-3.x><numpy>
2023-05-01 07:24:13
1
400
mashtock
76,145,123
832,230
Vectorize iterative Python loop: c[i] = a[i] + b[i] * c[i-1]
<p>I have an iterative Python function:</p> <pre class="lang-py prettyprint-override"><code>import pandas as pd def calculate_result(foo: pd.Series, bar: pd.Series, baz: pd.Series) -&gt; pd.Series: result = pd.Series(index=foo.index, dtype=float) result.iloc[0] = foo.iloc[0] for i in range(1, len(foo)): result.iloc[i] = bar.iloc[i] + baz.iloc[i] * result.iloc[i - 1] # core iterative logic return result </code></pre> <p>The function has a loop with an iterative calculation. I'd like to vectorize the loop if possible for performance reasons because I need to use the function a lot. It is okay to use NumPy or Pandas or SciPy or anything similar.</p> <p>Here is a sample result:</p> <pre class="lang-py prettyprint-override"><code>&gt;&gt;&gt; foo = pd.Series([1, 2, 3, 4]) &gt;&gt;&gt; bar = pd.Series([2, 3, 4, 5]) &gt;&gt;&gt; baz = pd.Series([3, 4, 5, 6]) &gt;&gt;&gt; print(calculate_result(foo, bar, baz)) 0 1.0 1 7.0 2 39.0 3 239.0 dtype: float64 </code></pre> <p>I tried the following but it returns an incorrect result:</p> <pre class="lang-py prettyprint-override"><code>def calculate_result_vectorized(foo: pd.Series, bar: pd.Series, baz: pd.Series) -&gt; pd.Series: result = (baz.cumprod() * foo + bar.cumsum() * baz.cumprod().shift()).fillna(foo) result.iloc[0] = foo.iloc[0] return result </code></pre> <p>I am wondering if a clever combination of <code>cumprod</code> and <code>cumsum</code>, etc. may do the trick.</p>
<python><pandas><numpy><vectorization>
2023-05-01 06:34:20
1
64,534
Asclepius
76,144,837
20,122,390
How to create multiple records in a firebase realtime path?
<p>I am using a firebase real time database for a web application that I have written in python. One of my endpoints has the function that the client can load a csv file and subsequently each row of the csv will be converted to a record (node) in a specific path in the database (I simply read the csv and convert it to a list of dictionaries). The problem is when I go to send the records to the database, I use the following code:</p> <pre><code>def create(self, path:str, data: Dict[str, Any]): return self.db.reference(path).push(data) @router.post(&quot;/&quot;) def create_alerts(file: UploadFile): try: data = convert_csv(file) # convert csv to dictionary list for alert in data: info_alert = alert info_alert[&quot;status&quot;] = &quot;not_validated&quot; fb_client.create(settings.FB_PATH, info_alert) #create function defined above except Exception as e: print(e) return {&quot;message&quot;: &quot;Error creating document&quot;} return {&quot;message&quot;: &quot;Document created&quot;} </code></pre> <p>My problem is that the process takes a long time because the csv has many rows and for each row a call is made to the firebase api. I would like with a single call I could create a node for each dictionary-row in the path. So I tried this:</p> <pre><code>try: data = convert_csv(file) for alert in data: info_alert = alert info_alert[&quot;status&quot;] = &quot;not_validated&quot; fb_client.create(settings.FB_PATH, info_alert) </code></pre> <p>(remove the last line of the cycle). But in this case it creates a single new node in the path with all the records. 
Is there any way to create a separate node for each dictionary in the specified path without needing an API call for each record?</p> <p>Update:</p> <pre><code>try: data = convert_csv(file) ref = fb_client.db.reference(settings.FB_PATH) nodes_data = {} for alert in data: info_alert = alert info_alert[&quot;status&quot;] = &quot;not_validated&quot; new_post_ref = ref.push() post_id = new_post_ref.key nodes_data[post_id] = info_alert fb_client.create(settings.FB_PATH, nodes_data) except Exception as e: print(e) return {&quot;message&quot;: &quot;Error creating document&quot;} return {&quot;message&quot;: &quot;Document created&quot;} </code></pre>
<python><database><firebase><firebase-realtime-database><fastapi>
2023-05-01 05:08:17
1
988
Diego L
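A hedged sketch of batching the writes described above: build one `{child_key: record}` mapping locally and send it in a single `ref.update(payload)` call, which the Realtime Database treats as one multi-child write. The `uuid4` keys below merely stand in for `ref.push().key` so the sketch is testable offline:

```python
import uuid

def build_batch_payload(rows):
    # {child_key: record} -- one update() call creates every node at once.
    payload = {}
    for row in rows:
        record = dict(row, status="not_validated")
        payload[uuid.uuid4().hex] = record
    return payload

# usage sketch against firebase_admin:
#   ref = fb_client.db.reference(settings.FB_PATH)
#   ref.update(build_batch_payload(convert_csv(file)))
```

This replaces N round-trips with one; pushing the whole dict (as in the question's second attempt) fails because `push` creates a single new child containing the entire mapping, whereas `update` writes each top-level key as its own child.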
76,144,789
17,347,824
Postgresql in Python - Joining tables on a matching variable that is not showing as matching
<p>I have two tables in a database, 1 is a table of NFL teams (teams) and the other is a table of NFL games (games). The games table has the full team name of the home team (team_home) and the away team (team_away), the teams table has the full team name (team_name) and the team id (team_id) such as SF for San Fransisco 49ers.</p> <p>I need to update the games table so that the code is what shows up in the team_home and team_away columns instead of the full team names. However, even though I can see matches in the columns, it keeps returning an error (<code>TypeError: 'NoneType' object is not subscriptable</code>) the code below should work for this:</p> <pre><code>for x, row in games.iterrows(): # look up team_id from teams based on team_home in games cursor.execute(&quot;&quot;&quot;SELECT team_id FROM teams WHERE team_name = %s&quot;&quot;&quot;, (games.loc[x]['team_home'],)) team_home_id = cursor.fetchone()[0] # Look up team_id from teams based on team_away in games cursor.execute(&quot;&quot;&quot;SELECT team_id FROM teams WHERE team_name = %s&quot;&quot;&quot;, (games.loc[x]['team_away'],)) team_away_id = cursor.fetchone()[0] </code></pre> <p>I have implemented a code to call out what teams aren't finding matches:</p> <pre><code>for x, row in games.iterrows(): try: # look up team_id from teams based on team_home in games cursor.execute(&quot;&quot;&quot;SELECT team_id FROM teams WHERE team_name = %s&quot;&quot;&quot;, (games.loc[x]['team_home'],)) team_home_id = cursor.fetchone()[0] print(team_home_id) except TypeError: print(f&quot;No match found for home team name: {games.loc[x]['team_home']}&quot;) team_home_id = None try: # Look up team_id from teams based on team_away in games cursor.execute(&quot;&quot;&quot;SELECT team_id FROM teams WHERE team_name = %s&quot;&quot;&quot;, (games.loc[x]['team_away'],)) team_away_id = cursor.fetchone()[0] print(team_away_id) except TypeError: print(f&quot;No match found for away team name: 
{games.loc[x]['team_away']}&quot;) team_away_id = None </code></pre> <p>This returns information that there are a few that aren't finding matches, but when I look them up they have matches on both sides. I've tried removing leading and trailing whitespace on both sides to see if that was an issue but still got the same result.</p> <p>Some of the main issues as far as team names are Oakland Raiders, San Diego Chargers, St. Louis Cardinals, Washington Redskins, and New England Patriots.</p> <p>The data is very large, so I have moved it into a google sheet for reference.</p> <p><a href="https://docs.google.com/spreadsheets/d/1Y3PA-WhaOgq2H3jpuovZQgTLwGF8-t9JSU_shYY56eg/edit#gid=1701285808" rel="nofollow noreferrer">https://docs.google.com/spreadsheets/d/1Y3PA-WhaOgq2H3jpuovZQgTLwGF8-t9JSU_shYY56eg/edit#gid=1701285808</a></p> <p>This sheet has what data is in the games table and in the teams table on separate tabs. I have also added a &quot;check matches&quot; tab to this where I copied the list of teams from each of the tables, then made a list with only the unique values, sorted them alphabetically and then did a quick logical formula to check if all unique values have a match and they do.</p>
<python><postgresql>
2023-05-01 04:53:21
0
409
data_life
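When strings that "look identical" refuse to match, the usual culprits are invisible characters (e.g. non-breaking spaces from scraped data), doubled internal whitespace, or case differences. A hedged pre-join normalization, applied to the names on both sides before comparing, often resolves exactly the teams listed above:

```python
import unicodedata

def normalize_name(s):
    # NFKC folds look-alike characters (e.g. U+00A0 no-break space
    # becomes a plain space); split/join collapses whitespace runs;
    # casefold neutralizes case differences.
    s = unicodedata.normalize("NFKC", s)
    return " ".join(s.split()).casefold()
```

Equivalently, the comparison can be pushed into SQL with something like `WHERE lower(regexp_replace(team_name, '\s+', ' ', 'g')) = %s` (a sketch; NBSP handling may still need the Python-side cleanup).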
76,144,756
2,987,552
ModuleNotFoundError: No module named '_ctypes' trying buildozer android debug
<p>I have set up WSL on my Windows 10 machine. I am trying to run</p> <pre><code>buildozer android debug </code></pre> <p>on the Ubuntu VM started on my Windows 10 machine so that I can convert my Python application(s) to Android apk(s) as documented in numerous articles. However I am getting stuck at</p> <pre><code>ModuleNotFoundError: No module named '_ctypes' </code></pre> <p>despite trying numerous options like</p> <ol> <li>install libffi-dev and then reinstall python</li> <li>executing ldconfig</li> </ol> <p>etc. as suggested by numerous posts here.</p> <p>Any ideas?</p> <p>Following are the exact steps that I had to do for item 1 above (due to numerous errors I used to get trying them) as a reference in case that holds the key:</p> <ul> <li>sudo apt remove python3</li> <li>sudo apt-get install libffi-dev</li> <li>sudo apt-get install python3-distutils</li> <li>sudo apt-get install python3-apt</li> <li>sudo apt-get install python3 (did nothing)</li> </ul> <p>Here is the complete stack of the error:</p> <pre><code>Traceback (most recent call last):
  File &quot;/usr/lib/python3.8/runpy.py&quot;, line 193, in _run_module_as_main
    return _run_code(code, main_globals, None,
  File &quot;/usr/lib/python3.8/runpy.py&quot;, line 86, in _run_code
    exec(code, run_globals)
  File &quot;/home/sameer/paadas/app/.buildozer/android/platform/python-for-android/pythonforandroid/toolchain.py&quot;, line 1312, in &lt;module&gt;
    main()
  File &quot;/home/sameer/paadas/app/.buildozer/android/platform/python-for-android/pythonforandroid/entrypoints.py&quot;, line 18, in main
    ToolchainCL()
  File &quot;/home/sameer/paadas/app/.buildozer/android/platform/python-for-android/pythonforandroid/toolchain.py&quot;, line 734, in __init__
    getattr(self, command)(args)
  File &quot;/home/sameer/paadas/app/.buildozer/android/platform/python-for-android/pythonforandroid/toolchain.py&quot;, line 153, in wrapper_func
    build_dist_from_args(ctx, dist, args)
  File &quot;/home/sameer/paadas/app/.buildozer/android/platform/python-for-android/pythonforandroid/toolchain.py&quot;, line 212, in build_dist_from_args
    build_recipes(build_order, python_modules, ctx,
  File &quot;/home/sameer/paadas/app/.buildozer/android/platform/python-for-android/pythonforandroid/build.py&quot;, line 504, in build_recipes
    recipe.build_arch(arch)
  File &quot;/home/sameer/paadas/app/.buildozer/android/platform/python-for-android/pythonforandroid/recipe.py&quot;, line 934, in build_arch
    self.install_python_package(arch)
  File &quot;/home/sameer/paadas/app/.buildozer/android/platform/python-for-android/pythonforandroid/recipe.py&quot;, line 950, in install_python_package
    shprint(hostpython, 'setup.py', 'install', '-O2',
  File &quot;/home/sameer/paadas/app/.buildozer/android/platform/python-for-android/pythonforandroid/logger.py&quot;, line 167, in shprint
    for line in output:
  File &quot;/home/sameer/.local/lib/python3.8/site-packages/sh.py&quot;, line 915, in next
    self.wait()
  File &quot;/home/sameer/.local/lib/python3.8/site-packages/sh.py&quot;, line 845, in wait
    self.handle_command_exit_code(exit_code)
  File &quot;/home/sameer/.local/lib/python3.8/site-packages/sh.py&quot;, line 869, in handle_command_exit_code
    raise exc
sh.ErrorReturnCode_1:

  RAN: /home/sameer/paadas/app/.buildozer/android/platform/build-arm64-v8a_armeabi-v7a/build/other_builds/hostpython3/desktop/hostpython3/native-build/python3 setup.py install -O2 --root=/home/sameer/paadas/app/.buildozer/android/platform/build-arm64-v8a_armeabi-v7a/build/python-installs/paadas/armeabi-v7a --install-lib=.

  STDOUT:
Traceback (most recent call last):
  File &quot;/home/sameer/paadas/app/.buildozer/android/platform/build-arm64-v8a_armeabi-v7a/build/other_builds/setuptools/armeabi-v7a__ndk_target_21/setuptools/setup.py&quot;, line 7, in &lt;module&gt;
    import setuptools
  File &quot;/home/sameer/paadas/app/.buildozer/android/platform/build-arm64-v8a_armeabi-v7a/build/other_builds/setuptools/armeabi-v7a__ndk_target_21/setuptools/setuptools/__init__.py&quot;, line 18, in &lt;module&gt;
    from setuptools.dist import Distribution
  File &quot;/home/sameer/paadas/app/.buildozer/android/platform/build-arm64-v8a_armeabi-v7a/build/other_builds/setuptools/armeabi-v7a__ndk_target_21/setuptools/setuptools/dist.py&quot;, line 32, in &lt;module&gt;
    from setuptools import windows_support
  File &quot;/home/sameer/paadas/app/.buildozer/android/platform/build-arm64-v8a_armeabi-v7a/build/other_builds/setuptools/armeabi-v7a__ndk_target_21/setuptools/setuptools/windows_support.py&quot;, line 2, in &lt;module&gt;
    import ctypes
  File &quot;/home/sameer/paadas/app/.buildozer/android/platform/build-arm64-v8a_armeabi-v7a/build/other_builds/hostpython3/desktop/hostpython3/Lib/ctypes/__init__.py&quot;, line 8, in &lt;module&gt;
    from _ctypes import Union, Structure, Array
ModuleNotFoundError: No module named '_ctypes'
</code></pre>
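<p><code>_ctypes</code> is a compiled C extension that only exists if the libffi headers were present when that Python interpreter was built, so a quick diagnostic sketch like the one below (run with the exact interpreter buildozer's hostpython is built from) can confirm whether a rebuild actually picked up <code>libffi-dev</code>:</p>

```python
import importlib.util
import sysconfig

# _ctypes is a compiled extension; it only exists if the interpreter
# was built after the libffi development headers were installed.
spec = importlib.util.find_spec("_ctypes")
print("found _ctypes:", spec is not None)
if spec is not None:
    print("location:", spec.origin)

# Where this interpreter was configured/installed from.
print("prefix:", sysconfig.get_config_var("prefix"))
```

<p>If this prints <code>found _ctypes: False</code> for the interpreter in question, reinstalling the apt <code>python3</code> package is usually not enough on a source-built Python; the interpreter itself has to be recompiled after <code>libffi-dev</code> is installed.</p>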
<python><android>
2023-05-01 04:42:54
0
598
Sameer Mahajan
76,144,724
15,233,108
error with ffmpeg converting images to video
<p>I have this code that I am using to convert multiple images in different folders into videos. this is the code i have:</p> <pre><code>counter = 0 list_of_videos = os.listdir('./Pending/') # obtain list of completed videos and strip extensions completed_mp4s = os.listdir('./outputvideos/') completed_mp4s = [x[:len(x) - 4] for x in completed_mp4s] # for videos that haven't been created, create video! if it exists, skip creation process framerate = 30 for video in list_of_videos: if video not in completed_mp4s: counter += 1 inputos = &quot;ffmpeg -framerate &quot; + str(framerate) + &quot; -pattern_type sequence -i ./Pending/&quot; + video + &quot;/&quot; + video + &quot;.%d.png -c:v libx264 -pix_fmt yuv420p ./outputvideos/&quot; + video + &quot;.mp4&quot; os.system(inputos) print(counter) </code></pre> <p>however I have the same issue as this <a href="https://stackoverflow.com/questions/61568273/ffmpeg-could-find-no-file-with-path-and-no-such-file-or-directory">FFMPEG &quot;Could find no file with path&quot; and &quot;No such file or directory&quot;</a></p> <p>The images are named as follows:</p> <p><a href="https://i.sstatic.net/Up9gr.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Up9gr.png" alt="image of names" /></a></p> <p>and the error I receive is this:</p> <pre><code>ffmpeg version 4.3.1 Copyright (c) 2000-2020 the FFmpeg developers built with gcc 10.2.1 (GCC) 20200726 configuration: --disable-static --enable-shared --enable-gpl --enable-version3 --enable-sdl2 --enable-fontconfig --enable-gnutls --enable-iconv --enable-libass --enable-libdav1d --enable-libbluray --enable-libfreetype --enable-libmp3lame --enable-libopencore-amrnb --enable-libopencore-amrwb --enable-libopenjpeg --enable-libopus --enable-libshine --enable-libsnappy --enable-libsoxr --enable-libsrt --enable-libtheora --enable-libtwolame --enable-libvpx --enable-libwavpack --enable-libwebp --enable-libx264 --enable-libx265 --enable-libxml2 --enable-libzimg --enable-lzma 
--enable-zlib --enable-gmp --enable-libvidstab --enable-libvmaf --enable-libvorbis --enable-libvo-amrwbenc --enable-libmysofa --enable-libspeex --enable-libxvid --enable-libaom --enable-libgsm --enable-librav1e --disable-w32threads --enable-libmfx --enable-ffnvcodec --enable-cuda-llvm --enable-cuvid --enable-d3d11va --enable-nvenc --enable-nvdec --enable-dxva2 --enable-avisynth --enable-libopenmpt --enable-amf
libavutil 56. 51.100 / 56. 51.100
libavcodec 58. 91.100 / 58. 91.100
libavformat 58. 45.100 / 58. 45.100
libavdevice 58. 10.100 / 58. 10.100
libavfilter 7. 85.100 / 7. 85.100
libswscale 5. 7.100 / 5. 7.100
libswresample 3. 7.100 / 3. 7.100
libpostproc 55. 7.100 / 55. 7.100
[image2 @ 000001dbf1f854c0] Could find no file with path './Pending/WYZCW_Colored_Lights_Left/WYZCW_Colored_Lights_Left.%d.png' and index in the range 0-4
./Pending/WYZCW_Colored_Lights_Left/WYZCW_Colored_Lights_Left.%d.png: No such file or directory </code></pre> <p>This is my folder structure:</p> <p><a href="https://i.sstatic.net/csRKL.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/csRKL.png" alt="folder structure" /></a></p> <p>But when I tried the solutions provided, like changing the %04d to %4d or %d, the same problem persists. Does anyone know how I can fix this?</p> <p>Thank you! :)</p>
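<p>The error text &quot;index in the range 0-4&quot; is the clue: ffmpeg's image2 sequence reader only probes the first few start indices by default, so if the numbering in the folder starts higher than 4, it finds nothing. A hedged sketch (folder layout assumed from the question) that detects the first frame index and passes it via <code>-start_number</code>:</p>

```python
import os
import re

def first_frame_index(folder, prefix):
    """Return the smallest N among files named '<prefix>.N.png', or None."""
    pattern = re.compile(re.escape(prefix) + r"\.(\d+)\.png$")
    indices = [int(m.group(1))
               for f in os.listdir(folder)
               if (m := pattern.match(f))]
    return min(indices) if indices else None

def build_command(video, framerate=30, pending="./Pending", out="./outputvideos"):
    """Build an ffmpeg command string whose sequence starts at the real first index."""
    folder = os.path.join(pending, video)
    start = first_frame_index(folder, video)
    return (f"ffmpeg -framerate {framerate} -start_number {start} "
            f"-i {folder}/{video}.%d.png "
            f"-c:v libx264 -pix_fmt yuv420p {out}/{video}.mp4")
```

<p>In real use, <code>subprocess.run(cmd, shell=True, check=True)</code> (or a list of arguments without <code>shell=True</code>) is preferable to <code>os.system</code>, because it surfaces ffmpeg's exit code.</p>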
<python><python-3.x><image><video><ffmpeg>
2023-05-01 04:28:57
0
582
Megan Darcy
76,144,722
2,987,552
APK generated by github python to android action crashing
<p>I created apk using python to android action on github for my repo <a href="https://github.com/sameermahajan/Paadas" rel="nofollow noreferrer">https://github.com/sameermahajan/Paadas</a> (you can check my workflow yml as well as buildozer spec there). The apk is generated correctly without any error however when I download, install and run it on my mobile phone(s) it exits / crashes soon after opening the screen and stating &quot;Loading...&quot; What am I missing?</p> <p>I even tried a simple 'Hello Android!' program (<a href="https://github.com/sameermahajan/hello-android" rel="nofollow noreferrer">https://github.com/sameermahajan/hello-android</a>) which runs into the same problem.</p> <p>Any ideas?</p>
<python><android><github><github-actions>
2023-05-01 04:28:51
1
598
Sameer Mahajan
76,144,700
3,186,922
Scope of function parameter python
<p>I have this code snippet</p> <pre><code>def fun(i = []):
    print(&quot;i is&quot;, i)
    i.append(1)

for k in range(5):
    print(&quot;k is&quot;, k)
    fun()
</code></pre> <p>And its output</p> <pre><code>k is 0
i is []
k is 1
i is [1]
k is 2
i is [1, 1]
k is 3
i is [1, 1, 1]
k is 4
i is [1, 1, 1, 1]
</code></pre> <p>My understanding until today was that, for every function call, the parameter will use its default value if no explicit value is provided. But here, even though no value is passed for <code>i</code> when calling <code>fun</code> from the loop, you can see <code>i</code> being updated across function calls.</p> <p>I was expecting something like this as the answer:</p> <pre><code>k is 0
i is []
k is 1
i is []
k is 2
i is []
k is 3
i is []
k is 4
i is []
</code></pre> <p>Isn't the scope of a function parameter limited to that function call?</p>
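<p>The behaviour comes from default values being evaluated once, at <code>def</code> time, so every call that omits <code>i</code> shares the same list object. The usual fix is a <code>None</code> sentinel; a minimal sketch:</p>

```python
def fun(i=None):
    # A fresh list is created on every call that omits i,
    # instead of reusing the single list bound at definition time.
    if i is None:
        i = []
    print("i is", i)  # prints "i is []" on every no-argument call
    i.append(1)
    return i

for k in range(5):
    print("k is", k)
    fun()
```

<p>With this version the output matches the expectation above, because each call starts from its own empty list.</p>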
<python><function><for-loop><default-value>
2023-05-01 04:17:31
1
6,332
Hari Krishnan
76,144,698
1,369,110
Ordinal logistic regression prediction and accuracy using statsmodels
<p>I am trying to do a ordinal logistic regression analysis using statsmodels. However, the predictions I'm getting are vastly different from that I get when using SciKit-Learn <code>LogisticRegression</code>.</p> <p>I'm using a dataset similar to the following. The aim is to predict the <code>quality</code> (on a scale of <code>1-10</code>) based on the composition of <code>chlorides</code> and <code>sulphates</code>.</p> <div class="s-table-container"> <table class="s-table"> <thead> <tr> <th>chlorides</th> <th>sulphates</th> <th>quality</th> </tr> </thead> <tbody> <tr> <td>0.076</td> <td>0.56</td> <td>5</td> </tr> <tr> <td>0.098</td> <td>0.68</td> <td>5</td> </tr> <tr> <td>0.092</td> <td>0.65</td> <td>5</td> </tr> <tr> <td>0.075</td> <td>0.58</td> <td>6</td> </tr> <tr> <td>0.076</td> <td>0.56</td> <td>5</td> </tr> <tr> <td>...</td> <td>...</td> <td>...</td> </tr> </tbody> </table> </div> <p>The code I'm using:</p> <pre><code>import numpy as np from sklearn import metrics from sklearn.model_selection import train_test_split from statsmodels.miscmodels.ordinal_model import OrderedModel y = df['quality'] X = df[['chlorides', 'sulphates']] X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.4, random_state=20) mod_probe = OrderedModel(y_train, X_train, distr='logit') res_log = mod_probe.fit(method='bgfs') predicted = res_log.model.predict(res_log.params, np.array(X_test)[:, None]) </code></pre> <p><code>predicted</code> sample:</p> <pre><code>array([[[0.00394536, 0.02194635, 0.32950146, 0.47302334, 0.15847723, 0.01310626]], [[0.01405662, 0.07326043, 0.57761266, 0.2806573 , 0.05073693, 0.00367607]], [[0.02683372, 0.12930636, 0.63716285, 0.17780338, 0.02698959, 0.0019041 ]], ..., </code></pre> <p>When I do</p> <pre><code>metrics.accuracy_score(y_test, predicted) </code></pre> <p>I get the error</p> <pre><code>ValueError: Classification metrics can't handle a mix of multiclass and unknown targets </code></pre> <p>I have done many hours of searching on 
this but can't seem to crack it. Any help would be highly appreciated. Many thanks.</p>
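<p>As I understand it, <code>OrderedModel.predict</code> returns one probability per ordered category rather than a label (and the extra <code>[:, None]</code> on <code>X_test</code> adds a spurious axis, which is why the sample output is 3-D), so <code>accuracy_score</code> receives a float array it cannot interpret. A hedged sketch of mapping each row's argmax back to the sorted category labels before scoring:</p>

```python
import numpy as np

def probs_to_labels(probs, classes):
    """Map rows of per-category probabilities to the ordered class labels."""
    probs = np.asarray(probs)
    classes = np.asarray(classes)          # e.g. np.sort(y_train.unique())
    return classes[probs.argmax(axis=1)]   # most probable category per row

# Toy example with three quality levels 5, 6, 7.
probs = np.array([[0.2, 0.5, 0.3],
                  [0.7, 0.2, 0.1]])
print(probs_to_labels(probs, [5, 6, 7]))  # [6 5]
```

<p>With the real model, that would look roughly like <code>probs = res_log.model.predict(res_log.params, np.asarray(X_test))</code> (2-D input, no <code>[:, None]</code>) followed by <code>metrics.accuracy_score(y_test, probs_to_labels(probs, np.sort(y_train.unique())))</code>.</p>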
<python><machine-learning><logistic-regression><prediction><statsmodels>
2023-05-01 04:16:03
1
650
AntikM
76,144,690
5,947,182
How do you scrape HTML content of a scrolled down page to the nth post on TikTok with Python Playwright?
<p><strong>I want to scroll down to let's say at least until 72 posts is visible on a TikTok user page</strong> such as <a href="https://www.tiktok.com/@thebeatles" rel="nofollow noreferrer">thebeatles page</a> <strong>and then save the HTML content</strong>. This page currently has 73 posts. I aimed to achieve the scrolling down with pressing the &quot;End&quot; key with <code>page.keyboard.press(&quot;End&quot;)</code> as long as the number of visible post is below 72 but only got back 30 posts. The fetching process aborts image and font requests for optimization. Below are the codes <code>test_fetch_raw_source.py</code> and <code>helper.py</code>, where the latter only contains functions for optimizations. <strong>How do you scroll down to the nth, in this case 72nd, visible post?</strong></p> <pre><code># test_fetch_raw_source.py def test_fetch_front_page_with_n_latest_posts(): # Fetch front page with n_count of visible posts usr = r&quot;thebeatles&quot; url = &quot;&quot;.join([r&quot;https://www.tiktok.com/@&quot;, usr]) n = 72 # The most current posts count def get_visible_post_count(page): # Get visible post count post_urls = page.locator(&quot;//div[@data-e2e='user-post-item']//a&quot;) post_urls_count = len(post_urls.all()) return post_urls_count def press_end(url_n): if not url_n: page.keyboard.press(&quot;End&quot;) with sync_playwright() as playwright: browser = playwright.chromium.launch() context = browser.new_context() page = context.new_page() helper.logging_network_events(page) # Logging # Abort requests helper.abort_image(page) helper.abort_image_by_ext(page) helper.abort_font_by_ext(page) # Get page page.goto(url, timeout=0) # If nth post element is not visible keep pressing key &quot;End&quot; url_n = page.locator(&quot;//div[@data-e2e='user-post-item']//a&quot;).nth(n-1).is_visible(timeout=0) for i in range(n): press_end(url_n) # Post count post_count = get_visible_post_count(page) print(post_count) # Save html usr_rel_path = 
&quot;&quot;.join([&quot;html/&quot;, usr, &quot;/&quot;]) path_exists = os.path.exists(usr_rel_path) if not path_exists: os.makedirs(usr_rel_path) usr_folder = &quot;&quot;.join([os.getcwd(), &quot;/&quot;, usr_rel_path]) file_name = &quot;&quot;.join([usr, &quot;.html&quot;]) file_dir = &quot;&quot;.join([usr_folder, file_name]) html_source = page.content() with open(file_dir, &quot;w&quot;) as file: file.write(html_source) file.close() browser.close() # helper.py import re def logging_network_events(page): # Logging network events page.on(&quot;request&quot;, lambda request: print( &quot;&gt;&gt;&quot;, request.method, request.url, request.resource_type)) page.on(&quot;response&quot;, lambda response: print( &quot;&lt;&lt;&quot;, response.status, response.url)) def abort_image(page): page.route(&quot;**/*&quot;, lambda route: route.abort() if route.request.resource_type == &quot;image&quot; else route.continue_()) def abort_image_by_ext(page): page.route(re.compile(&quot;jpeg|jpg|png|tiff|gif&quot;), lambda route: route.abort()) def abort_font_by_ext(page): page.route(re.compile(&quot;woff2|woff|otf&quot;), lambda route: route.abort()) </code></pre>
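<p>Pressing &quot;End&quot; a fixed number of times races the lazy loader; a more reliable pattern is to poll the visible post count and keep scrolling until it reaches the target or stops growing. The helper below is framework-agnostic (you pass in callables wrapping your Playwright page), so the Playwright wiring shown afterwards is an assumption about your setup:</p>

```python
import time

def scroll_until_count(get_count, scroll_once, target, max_rounds=60, settle=0.0):
    """Scroll until get_count() >= target or the count stops increasing."""
    last = -1
    for _ in range(max_rounds):
        count = get_count()
        if count >= target:
            return count
        if count == last:        # no new items appeared: likely end of feed
            return count
        last = count
        scroll_once()
        if settle:
            time.sleep(settle)   # give the page time to load the next batch
    return get_count()
```

<p>With Playwright that could be wired up as something like <code>get_count = lambda: page.locator(&quot;//div[@data-e2e='user-post-item']//a&quot;).count()</code> and <code>scroll_once = lambda: page.keyboard.press(&quot;End&quot;)</code>, with a small non-zero <code>settle</code> so newly fetched posts have time to render between presses.</p>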
<python><scrollview><playwright><tiktok>
2023-05-01 04:12:25
2
388
Andrea
76,144,420
4,038,547
Problem using JPype on Mac OS X: I try to start the JVM but I get a DLL not found, but it is there
<p>My Python code is:</p> <pre class="lang-py prettyprint-override"><code>import jpype jvm_path = &quot;/Library/Java/JavaVirtualMachines/microsoft-11.jdk/Contents/Home/lib/jli/libjli.dylib&quot; jpype.startJVM( jvm_path, classpath=['~/org.alloytools.alloy.dist.jar'], ) </code></pre> <p>I get the following error:</p> <pre class="lang-bash prettyprint-override"><code>Traceback (most recent call last): File &quot;/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/runpy.py&quot;, line 196, in _run_module_as_main return _run_code(code, main_globals, None, File &quot;/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/runpy.py&quot;, line 86, in _run_code exec(code, run_globals) File &quot;/Users/&lt;username&gt;/research-llm/alloy-bridge/src/__main__.py&quot;, line 5, in &lt;module&gt; jpype.startJVM( File &quot;/Users/&lt;username&gt;/research-llm/alloy-bridge/.env/lib/python3.10/site-packages/jpype/_core.py&quot;, line 224, in startJVM _jpype.startup(jvmpath, tuple(args), FileNotFoundError: [Errno 2] JVM DLL not found: /Library/Java/JavaVirtualMachines/microsoft-11.jdk/Contents/Home/lib/jli/libjli.dylib </code></pre> <p>A few useful diagnostics:</p> <pre class="lang-bash prettyprint-override"><code>java -version openjdk version &quot;11.0.12&quot; 2021-07-20 OpenJDK Runtime Environment Microsoft-25199 (build 11.0.12+7) OpenJDK 64-Bit Server VM Microsoft-25199 (build 11.0.12+7, mixed mode) javac -version javac 11.0.12 python --version Python 3.10.4 pip show jpype1 Name: JPype1 Version: 1.4.1 Summary: A Python to Java bridge. 
Home-page: https://github.com/jpype-project/jpype
Author: Steve Menard
Author-email: devilwolf@users.sourceforge.net
License: License :: OSI Approved :: Apache Software License
Location: /Users/username/projects/alloy-bridge/.env/lib/python3.10/site-packages
Requires: packaging
Required-by:

sw_vers
ProductName: macOS
ProductVersion: 13.2.1
BuildVersion: 22D68
</code></pre> <p>Given the context, can someone point me in the right direction? I tried a lot myself but couldn't get it working yet.</p>
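<p>That <code>FileNotFoundError</code> usually means the dylib simply is not at the hard-coded path; JDK layouts have varied between <code>lib/libjli.dylib</code>, <code>lib/jli/libjli.dylib</code>, and <code>lib/server/libjvm.dylib</code>. Rather than hard-coding, you can let JPype resolve it (<code>jpype.getDefaultJVMPath()</code> honors <code>JAVA_HOME</code>), or probe the candidates yourself with a sketch like this:</p>

```python
import os

# Relative locations where the JVM library has lived across JDK releases.
_CANDIDATES = (
    "lib/libjli.dylib",
    "lib/jli/libjli.dylib",
    "lib/server/libjvm.dylib",
)

def find_jvm_library(java_home):
    """Return the first existing JVM library under java_home, else None."""
    for rel in _CANDIDATES:
        path = os.path.join(java_home, rel)
        if os.path.isfile(path):
            return path
    return None
```

<p>A plausible usage would be <code>jvm_path = find_jvm_library(&quot;/Library/Java/JavaVirtualMachines/microsoft-11.jdk/Contents/Home&quot;) or jpype.getDefaultJVMPath()</code>; if both come back empty, listing the JDK's <code>lib</code> directory will show where the library actually lives.</p>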
<python><java><interop><alloy><jpype>
2023-05-01 02:17:18
1
425
Rodrigo Stv
76,144,372
4,873,538
Dynamic argument parser for python
<p>I am building a small wrapper which can accept a Python command and infer args and their associated values on the fly. I want something like an arg parser, but the catch is that I do not know the arguments in advance. For example, I can get as input:</p> <pre><code>python some_script.py --arg1 val1 --arg2 val2 --arg3 val3 </code></pre> <p>I want some dictionary <code>{arg1: val1, arg2: val2, arg3: val3}</code></p>
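<p>When no option names are known in advance, a small scanner over <code>sys.argv[1:]</code> pairing each <code>--flag</code> with its following value is often simpler than fighting argparse. A sketch (bare flags with no value are mapped to <code>True</code>, which is an assumption about the desired behavior):</p>

```python
import sys

def parse_dynamic(argv):
    """Turn ['--arg1', 'val1', '--arg2', ...] into {'arg1': 'val1', ...}."""
    result = {}
    key = None
    for token in argv:
        if token.startswith("--"):
            if key is not None:      # previous flag had no value
                result[key] = True
            key = token[2:]
        elif key is not None:
            result[key] = token
            key = None
    if key is not None:
        result[key] = True           # trailing bare flag
    return result

if __name__ == "__main__":
    print(parse_dynamic(sys.argv[1:]))
```

<p>If you still want argparse's error handling for a few known options, <code>parser.parse_known_args()</code> returns the leftovers as a list you can feed through the same function.</p>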
<python><argparse>
2023-05-01 01:53:09
2
521
saurbh
76,144,256
6,131,504
How to use parameterized fixtures in parameterized tests?
<p>I have some code that tests the values of a <code>Thing</code> object.</p> <pre class="lang-py prettyprint-override"><code>import pytest class Thing: def __init__(self, val) -&gt; None: self.val = val print(f&quot;Made a thing with val {val}&quot;, end=&quot; &quot;) @pytest.fixture(scope=&quot;class&quot;) def thing1(): return Thing(1) @pytest.fixture(scope=&quot;class&quot;) def thing2(): return Thing(2) class TestThing: @pytest.mark.parametrize( &quot;thing_name,expected_val&quot;, [ (&quot;thing1&quot;, 1), (&quot;thing2&quot;, 2), ], ) def test_thing_val(self, thing_name, expected_val, request): assert request.getfixturevalue(thing_name).val == expected_val @pytest.mark.parametrize( &quot;thing_name&quot;, [ (&quot;thing1&quot;), (&quot;thing2&quot;), ], ) def test_thing_useless(self, thing_name, request): assert request.getfixturevalue(thing_name) </code></pre> <p>Obviously all four tests pass. And as expected, the print statement only happens once per fixture as they get only created once due to being class-scoped.</p> <p>However, I want to use <code>pytest.fixture</code>'s <code>params</code> keyword argument so that I don't have to declare multiple fixtures myself. How can I do this and get those fixtures, made by the <code>params</code> keyword argument, to be used in the tests, yielding the same results that my code currently does?</p>
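<p>One way to collapse the per-fixture declarations is to fold each value/expected pair into the fixture's <code>params</code>, so <code>request.param</code> carries both. A sketch of the reshaped module (packing the expected value into the param tuple is my assumption, since in the example it is just the constructor argument):</p>

```python
import pytest

class Thing:
    def __init__(self, val) -> None:
        self.val = val
        print(f"Made a thing with val {val}", end=" ")

# One class-scoped Thing per param; request.param is the (val, expected) pair.
@pytest.fixture(scope="class", params=[(1, 1), (2, 2)], ids=["thing1", "thing2"])
def thing_and_expected(request):
    val, expected = request.param
    return Thing(val), expected

class TestThing:
    def test_thing_val(self, thing_and_expected):
        thing, expected = thing_and_expected
        assert thing.val == expected

    def test_thing_useless(self, thing_and_expected):
        thing, _ = thing_and_expected
        assert thing
```

<p>pytest then runs every test in the class once per param, and because the fixture is class-scoped, each <code>Thing</code> is constructed only once per parametrized pass, just like the hand-written fixtures.</p>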
<python><pytest>
2023-05-01 00:59:23
1
318
Zeke Marffy
76,144,252
10,918,680
Saving a Plotly figure as a PNG image in a zip file is very slow
<p>I need to save a Plotly figure as an image in a zip file. I also need to store some texts that my program generates as a text file in the zip file. This is my attempt:</p> <pre><code>import io import zipfile import plotly.express as px import kaleido #write texts to stringIO s = io.StringIO() texts = ['foo','bar'] newline = '\n' for text in texts: s.write(text) s.write(newline) #create sample plotly figure long_df = px.data.medals_long() fig = px.bar(long_df, x=&quot;nation&quot;, y=&quot;count&quot;, color=&quot;medal&quot;, title=&quot;Long-Form Input&quot;) zip_buffer = io.BytesIO() with zipfile.ZipFile(zip_buffer, mode=&quot;w&quot;) as zf: fig_png = fig.to_image(format=&quot;png&quot;) # kaleido library buf = io.BytesIO(fig_png) zf.writestr('barplot.png', buf.getvalue()) #write plot as png file zf.writestr('text.txt', s.getvalue()) #write texts as text file with open(&quot;a.zip&quot;, &quot;wb&quot;) as f: f.write(zip_buffer.getbuffer()) </code></pre> <p>It did create a zip file but it took perhaps 5 seconds to run. Am I implementing it incorrectly or is that just how it is? Please advise if there are better ways.</p>
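<p>Most of those ~5 seconds are very likely kaleido's first <code>to_image</code> call, which boots a headless browser process; the zip writing itself is sub-millisecond. A stand-in that zips a dummy PNG-sized payload can confirm where the time goes (exporting several figures in one process amortizes the startup cost):</p>

```python
import io
import time
import zipfile

def bundle(files):
    """Write a {name: bytes-or-str} mapping into an in-memory zip, return its bytes."""
    buf = io.BytesIO()
    with zipfile.ZipFile(buf, mode="w", compression=zipfile.ZIP_DEFLATED) as zf:
        for name, data in files.items():
            zf.writestr(name, data)
    return buf.getvalue()

start = time.perf_counter()
payload = bundle({"barplot.png": b"\x89PNG" + b"\x00" * 200_000,
                  "text.txt": "foo\nbar\n"})
print(f"zipped in {time.perf_counter() - start:.4f}s")
```

<p>If this runs in a few milliseconds while the full script still takes seconds, the implementation is fine and the cost is the one-off image-export startup, not the zipping.</p>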
<python><plotly>
2023-05-01 00:57:11
1
425
user173729
76,144,023
4,100,282
Elegantly include dynamically generated code in Python package
<p>I am writing a python package <code>foolib</code> which defines a class <code>foolib.Foo</code>. Creating a brand-new <code>Foo</code> object reads a <code>data_in</code> parameter and then performs an expensive numerical minimization, <strong>unless the results of this minimization are also specified in the instantiation declaration</strong>. Once this is done I can &quot;export&quot; this <code>Foo</code> instance by writing an instantiation declaration including the results of this minimization, which means that importing that exported code will not require performing the minimization once again:</p> <pre class="lang-py prettyprint-override"><code>class Foo(): def __init__(self, data_in, results = None): self.data_in = data_in if results is None: self.results = expensive_numerical_minimization(data_in) else: self.results = results def export(self, name, filename): with open(filename, 'w') as fid: fid.write(f'{name} = Foo({repr(self.data_in)}, {repr(self.results)})') </code></pre> <p>I'd like the user-facing code to look like this:</p> <pre class="lang-py prettyprint-override"><code>from foolib.prebuilt import first_foo, second_foo </code></pre> <p>where <code>first_foo</code>, <code>second_foo</code> were pre-minimized prior to my package's release using the code below:</p> <pre class="lang-py prettyprint-override"><code>from foolib import Foo foo1 = Foo(123456) # takes a while foo1.export('first_foo', 'foo1.py') foo2 = Foo(345678) # takes a while foo2.export('second_foo', 'foo2.py') </code></pre> <p>My problem is that I don't know how to cleanly import <code>foo1</code> and <code>foo2</code> into the submodule <code>foolib.prebuilt</code>, in a way that does not expose the <code>foo1.py</code> and <code>foo2.py</code> files themselves when using the package. 
Ideally, the only way to access the pre-built instances would be through <code>foolib.prebuilt.first_foo</code> rather than through <code>foolib.foo1.first_foo</code>.</p> <p>Is there a canonical, elegant way to achieve this?</p>
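<p>One hedged approach (names illustrative): export <code>data_in</code>/<code>results</code> as JSON data files shipped inside <code>foolib/prebuilt</code> instead of generated <code>.py</code> files, and have <code>foolib/prebuilt/__init__.py</code> rebuild the instances at import time. The data files are never importable modules, so nothing like <code>foolib.foo1</code> ever exists. The loader could look like:</p>

```python
import json

def load_prebuilt(path, factory):
    """Rebuild an exported object from a JSON data file."""
    with open(path) as fid:
        payload = json.load(fid)
    # The factory receives the stored inputs and cached results,
    # so no re-minimization happens on import.
    return factory(payload["data_in"], results=payload["results"])
```

<p>In <code>foolib/prebuilt/__init__.py</code> you would locate the data files with <code>importlib.resources</code> and bind <code>first_foo = load_prebuilt(..., Foo)</code>; if construction should be deferred until first access, a module-level <code>__getattr__</code> (PEP 562) makes that lazy while keeping <code>from foolib.prebuilt import first_foo</code> working. This assumes <code>results</code> is JSON-serializable; otherwise pickle or a small binary format plays the same role.</p>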
<python><python-import><python-packaging>
2023-04-30 23:34:23
0
305
Mathieu
76,143,962
1,820,715
Calculating world coordinates of a pixel from a camera picture
<p>Sorry for this question. I know there are many <a href="https://stackoverflow.com/questions/30073616/calculating-world-coordinates-from-camera-coordinates">similar questions</a> but I really know nothing about math that involves this case (I'm not a 3D programmer) and those answers are very obscure to me, so I can't implement any.</p> <p>Also I <a href="https://computergraphics.stackexchange.com/questions/13441/calculating-world-coordinates-of-a-pixel-from-a-camera-picture">asked here</a>, but have no audience and I regretted but don't know how to delete there and keep it here. I know I must not do any cross site question on the SE network.</p> <p>I have a picture taken from a camera. I know some information about this camera and this picture:</p> <p>Camera:</p> <pre><code>* FOV : 61 deg. * Altitude : 91 meters * Long. : -43.17687898427574 * Lat. : -22.89925324277859 * Azimuth : 109 deg. * Pitch : 91 deg. ( 1 degree below the horizon ) * Far : 10000000000 * Near : 0.1 * FOV_Y : 0.9522258776618419 </code></pre> <p>These numbers came from <a href="https://cesium.com/" rel="nofollow noreferrer">Cesium</a>. I have no access to the camera specifications so they can be with approximate values. 
However, Cesium was extremely accurate in calculating the actual pixel coordinates of the image, so I think these numbers are pretty close to reality.</p> <p>Picture:</p> <pre><code>* Height : 485px * Width : 859px </code></pre> <p>Now, I need to know the real world geographic coordinates of a specific pixel in this image.</p> <p>I've made a test with Cesium, and I was able to check some results to serve as test parameters:</p> <pre><code>* Pixel at 327 x 223 (w/h) * Real world coordinates of that pixel: Lat: -22.899739635348244 Lon: -43.16761414545814 </code></pre> <p>Cesium is a very good tool to do this job, but I need some backend resource like Java or Python and can't find any ( maybe due to my math limitations I can't determine what works and what doesn't ).</p> <p>I found a material I think is very close to what I need, but can't implement it because it is very theoretical: <a href="https://medium.com/swlh/ray-tracing-from-scratch-in-python-41670e6a96f9" rel="nofollow noreferrer">https://medium.com/swlh/ray-tracing-from-scratch-in-python-41670e6a96f9</a></p> <p>Here is the picture from the test case. The red arrow is where the pixel is (approximately, of course. I'm not so precise with the mouse). Have in mind that this picture is a screenshot and may not have same dimensions that the real one I have.:</p> <p><a href="https://i.sstatic.net/cQDJ7.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/cQDJ7.png" alt="enter image description here" /></a></p> <p>... and this is the screenshot of my Cesium map to the same area ( but in 3D view ):</p> <p><a href="https://i.sstatic.net/6PLXf.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/6PLXf.png" alt="enter image description here" /></a></p> <p>... and 2D view ( the red mark is the point used in the test case. 
The Blue mark is where the camera is):</p> <p><a href="https://i.sstatic.net/JoGDl.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/JoGDl.png" alt="enter image description here" /></a></p> <p>Resuming, I need to take the geographic coordinates of a pixel from a picture using Python or Java ( or any agnostic math pseudo code for laymen that I could implement ).</p> <p><strong>EDIT:</strong></p> <p>Well... I have some help from ChatGPT and now I have this code in Python (sorry for PT_BR comments):</p> <pre><code>import numpy as np # FOV : 61 deg. # Altitude : 91 meters # Long. : -43.17687898427574 # Lat. : -22.89925324277859 # Azimuth : 109 deg. # Pitch : 91 deg. ( 1 degree below the horizon ) # Width : 859.358px # Height : 484.983px # Pixel at 850.1561889648438 x 475.18054962158203 (w/h) camera_fov = 61 # FOV image_width = 859.358 # largura da imagem em pixels image_height = 484.983 # altura da imagem em pixels camera_height = 90.0 # altura da cΓ’mera em metros pitch_angle = 91.0 # Γ’ngulo de inclinaΓ§Γ£o da cΓ’mera em graus yaw_angle = 109.0 # Γ’ngulo de guinada da cΓ’mera em graus camera_position = np.array([-43.17687898427574, -22.89925324277859, camera_height]) # posiΓ§Γ£o da cΓ’mera no sistema de coordenadas do mundo # coordenadas 2D do pixel na imagem ( u=w v=h) u = 850.1561889648438 # w v = 475.18054962158203 # h # cΓ‘lculo das coordenadas 3D do pixel no espaΓ§o do mundo aspect_ratio = image_width / image_height fov = np.radians( camera_fov ) # campo de visΓ£o em radianos y_fov = 2 * np.arctan(np.tan(fov / 2) * np.cos(np.radians(pitch_angle))) # campo de visΓ£o vertical em radianos y_angle = np.radians((v / image_height - 0.5) * y_fov * 180 / np.pi + pitch_angle) # Γ’ngulo vertical em radianos x_fov = 2 * np.arctan(np.tan(fov / 2) * np.cos(y_angle)) # campo de visΓ£o horizontal em radianos x_angle = np.radians((u / image_width - 0.5) * x_fov * 180 / np.pi) # Γ’ngulo horizontal em radianos direction = np.array([-np.sin(x_angle), np.cos(x_angle) * 
np.sin(y_angle), np.cos(x_angle) * np.cos(y_angle)])
rotation_matrix = np.array([
    [np.cos(np.radians(yaw_angle)), -np.sin(np.radians(yaw_angle)), 0],
    [np.sin(np.radians(yaw_angle)), np.cos(np.radians(yaw_angle)), 0],
    [0, 0, 1]
])
direction = rotation_matrix @ direction
world_coords = camera_position + direction * camera_height / np.tan(y_angle)

# Real world coordinates of that pixel:
# Lat: -22.901614465334827
# Lon: -43.17465101364515
print(world_coords[1], &quot;/&quot;, world_coords[0])
</code></pre> <p>I found the numbers very close to what Cesium gave me, so I made some tests. First I found the pixel nearest the horizon; all pixels above that height are sky. Then I took four pixels from the horizon to the image bottom:</p> <p><a href="https://i.sstatic.net/pAmOK.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/pAmOK.png" alt="enter image description here" /></a></p> <p>The results are (pixel pos) LAT, LON (coordinates calculated by the code):</p> <pre><code>p1 : ( 1,264) -22.40093040092381 , -41.77409577136955
p2 : ( 858,254) -22.42828595321902 , -41.76467637515944
p3 : ( 1,481) -22.68142494320101 , -42.55301747856779
p4 : ( 858,481) -22.68681453552334 , -42.55116168224547
</code></pre> <p>Now I needed to check these coordinates against the ones Cesium calculated, to see if the results are the same.</p> <p>They didn't match exactly, but they were close.</p> <p>I was curious about what coordinates the code was giving me, so I took a WKT Polygon (GeoJSON) and used the site geojson.io to draw it.</p> <p><a href="https://i.sstatic.net/4iJoL.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/4iJoL.png" alt="enter image description here" /></a></p> <p>The gray area is the polygon representing the &quot;viewport&quot; ( to the horizon ) that I made from the camera picture ( p1 to p4 ) and calculated by the code.</p> <p>The red arrow is where the camera is.</p> <p>The Blue triangle is what I think the camera picture is actually seeing.</p>
<p>As you can see I'm very close. I also suspect that input values such as pitch and yaw (azimuth) orientation may be different from what Cesium uses. Any help checking the math in the code will be welcome!</p> <p>I can give the ipynb file if you want to try it on Jupyter.</p> <p><strong>EDIT 2: New Thoughts</strong></p> <p>I decided to loop using variations of YAW and PITCH to see where the points would be plotted. I also decided to add a map library to facilitate direct visualization in Jupyter. I've concluded that the method I'm using is pretty much correct. I did the following: I located the center point of the camera ( 429, 265 ) for the given image size ( 859, 484 ) and locked onto the YAW and PITCH variations to see where the code would point. The method showed me that I must vary PITCH in decimals, starting from 90 degrees. A very high value causes the traced line to suffer distortions, perhaps due to the curvature of the earth, I don't know. The following image shows the result of a PITCH variation between 90 and 94, with a step of 0.1. Yaw points north.</p> <p><a href="https://i.sstatic.net/qmq04.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/qmq04.png" alt="enter image description here" /></a></p> <p>I concluded that a value close to 94 degrees would give me a distance of about 13 km to the farthest point, and at 90 degrees the cam will point at my feet.
I will use this fixed value for PITCH from now on.</p> <p>Here is the code:</p> <pre><code>coords_pitch = [] xs = (x * 0.1 for x in range(1, 40)) # Vary pitch for x in xs: pitch_angle = 90 + x p1 = calc_position( 429, 265 ); # 429, 265 = Screen center pixel coords_pitch.append( [p1[0],p1[1]] ) points_geom_pitch = MultiPoint(coords_pitch) points_pitch = gpd.GeoDataFrame(index=[0], crs='epsg:4326', geometry=[points_geom_pitch]) m = folium.Map([-22.9149,-43.1330], zoom_start=13, tiles='cartodbpositron') folium.GeoJson(points_pitch).add_to(m) </code></pre> <p>The next step would be to see how YAW responds to variations. I expected a value of 0 to point north, 90 east, 180 south, and 270 west. I found that the north-south axes were reversed, so I changed the code to subtract 180. Here is the changed main code:</p> <pre><code>def calc_position(u,v): yaw_angle_inv = 180 - yaw_angle aspect_ratio = image_width / image_height fov = camera_fov # campo de visΓ£o em radianos y_fov = 2 * np.arctan(np.tan(fov / 2) * np.cos(np.radians(pitch_angle))) # campo de visΓ£o vertical em radianos y_angle = np.radians((v / image_height - 0.5) * y_fov * 180 / np.pi + pitch_angle) # Γ’ngulo vertical em radianos x_fov = 2 * np.arctan(np.tan(fov / 2) * np.cos(y_angle)) # campo de visΓ£o horizontal em radianos x_angle = np.radians((u / image_width - 0.5) * x_fov * 180 / np.pi) # Γ’ngulo horizontal em radianos direction = np.array([-np.sin(x_angle), np.cos(x_angle) * np.sin(y_angle), np.cos(x_angle) * np.cos(y_angle)]) rotation_matrix = np.array([ [np.cos(np.radians(yaw_angle_inv)), -np.sin(np.radians(yaw_angle_inv)), camera_height], [np.sin(np.radians(yaw_angle_inv)), np.cos(np.radians(yaw_angle_inv)), camera_height], [0, 0, 0] ]) direction = rotation_matrix @ direction world_coords = camera_position + direction * camera_height / np.tan(y_angle) return world_coords </code></pre> <p>... 
and the code to plot YAW axis:</p> <pre><code># Point the camera (central pixel) to my own foot to take a start point pitch_angle = 90 yaw_angle = 0 start_point = calc_position( 429, 265 ); # Loop on YAW, using the fixed PITCH value of 94 deg. as calculated before. This will give me 360 points around the start point pitch_angle = 94 coords_yaw = [] xs = (x * 1 for x in range(0, 360)) for x in xs: yaw_angle = 0 + x p1 = calc_position( 429, 265 ); coords_yaw.append( [p1[0],p1[1]] ) # Print on map just the 4 cardinal axis line_geom_yaw_0 = LineString( [ [ start_point[0],start_point[1] ] , coords_yaw[ 0 ] ] ) line_geom_yaw_90 = LineString( [ [ start_point[0],start_point[1] ] , coords_yaw[ 90 ] ] ) line_geom_yaw_180 = LineString( [ [ start_point[0],start_point[1] ] , coords_yaw[ 180 ] ] ) line_geom_yaw_270 = LineString( [ [ start_point[0],start_point[1] ] , coords_yaw[ 270 ] ] ) line_yaw_0 = gpd.GeoDataFrame(index=[0], crs='epsg:4326', geometry=[line_geom_yaw_0]) line_yaw_90 = gpd.GeoDataFrame(index=[0], crs='epsg:4326', geometry=[line_geom_yaw_90]) line_yaw_180 = gpd.GeoDataFrame(index=[0], crs='epsg:4326', geometry=[line_geom_yaw_180]) line_yaw_270 = gpd.GeoDataFrame(index=[0], crs='epsg:4326', geometry=[line_geom_yaw_270]) m = folium.Map([-22.9149,-43.1330], zoom_start=13, tiles='cartodbpositron') folium.GeoJson(line_yaw_0).add_to(m) folium.GeoJson(line_yaw_90).add_to(m) folium.GeoJson(line_yaw_180).add_to(m) folium.GeoJson(line_yaw_270).add_to(m) </code></pre> <p>This is the result:</p> <p><a href="https://i.sstatic.net/jzxSg.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/jzxSg.png" alt="enter image description here" /></a></p> <p>Next step: Find out why the cardinal axes are distorted and start working on the pixel variations, leaving the center of the camera in an already defined position. I have already discovered that the height of the camera is causing this distortion. A value of 0 meters causes the cardinal axes distortion to disappear. 
However, the values used for PITCH appear to be attenuated quite a bit and allow for larger increments without suffering distortion. As my camera is effectively 56m high, I need to check for these distortions when testing pixel variations.</p>
<python><3d><transformation><computational-geometry><raytracing>
2023-04-30 23:13:03
0
2,215
Magno C
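As a cross-check for the distorted-axes issue described above: over flat ground, a correct central-ray projection places the camera height only in the ray-length term (h times tan(pitch)), never inside the yaw rotation matrix; note the question's code puts camera_height into the rotation matrix columns, which is one plausible source of the distortion. A minimal sketch (using my own simplified conventions, not the question's exact model: pitch is measured from straight down, yaw 0 points "north" along +y) shows that a yaw sweep at fixed pitch must trace a perfect circle:

```python
import math

def ground_hit(cam_xy, cam_h, pitch_deg, yaw_deg):
    """Ground intersection of the central camera ray over flat terrain.

    Convention (an assumption, not the question's): pitch is measured
    from nadir (0 = straight down); yaw 0 points +y ("north") and
    increases clockwise toward +x ("east").
    """
    d = cam_h * math.tan(math.radians(pitch_deg))  # horizontal reach of the ray
    yaw = math.radians(yaw_deg)
    return (cam_xy[0] + d * math.sin(yaw), cam_xy[1] + d * math.cos(yaw))

# Sweeping yaw at a fixed pitch traces a circle of constant radius:
pts = [ground_hit((0.0, 0.0), 56.0, 45.0, y) for y in (0, 90, 180, 270)]
radii = [math.hypot(x, y) for x, y in pts]
```

With camera height kept out of the rotation, the four cardinal points are equidistant from the camera, so any asymmetry in the plotted axes points back at how camera_height enters the matrix.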
76,143,920
13,330,700
Kubernetes health check always fails for django application
<p>I'm kinda new to kubernetes and I'm trying to figure out how to configure my health check. When I configure my <code>livenessProbe</code> it always returns a 400, but when I remove the probe, exec into the pod, and run <code>curl 127.0.0.1/health</code> I get <code>{&quot;status&quot;: &quot;ok&quot;}</code>.</p> <p>(I'm running this locally on a minikube host)</p> <p>Here's my dockerfile</p> <pre><code>FROM python:3.11 # setup env variables ENV PYTHONBUFFERED=1 ENV DockerHOME=/app/django-app # Expose port EXPOSE 8000 # create work dir RUN mkdir -p $DockerHOME # set work dir WORKDIR $DockerHOME # copy code to work dir COPY . $DockerHOME # install dependencies RUN pip install -r requirements.txt # move working dir to where manage.py is WORKDIR $DockerHOME/flag_games # set default command (I think) ENTRYPOINT [&quot;python&quot;] # run commands for app to run CMD [&quot;manage.py&quot;, &quot;collectstatic&quot;, &quot;--noinput&quot;] CMD [&quot;manage.py&quot;, &quot;runserver&quot;, &quot;localhost:8000&quot;] </code></pre> <p>Here's my <code>deployment.yaml</code></p> <pre><code>apiVersion: apps/v1 kind: Deployment metadata: name: flag-game-deployment labels: app: flag-game-deployment spec: replicas: 3 selector: matchLabels: app: flag-game-deployment template: metadata: labels: app: flag-game-deployment spec: containers: - image: docker-django imagePullPolicy: Never name: docker-django livenessProbe: httpGet: path: /health port: 8000 scheme: HTTP initialDelaySeconds: 5 periodSeconds: 5 </code></pre> <p>Here are the steps of my make build</p> <pre><code>minikube-deploy: make docker-build minikube start minikube image load $(IMAGE_NAME) kubectl apply -f &quot;$(PWD)\manifests\deployment.yaml&quot; kubectl expose deployment $(KUBE_DEPLOYMENT_NAME) --type=NodePort --port=8000 --dry-run=client -o yaml | kubectl apply -f - </code></pre> <p>and here's my views.py and url.py for my healthcheck</p> <pre><code>def health_check(request): # Perform any checks to determine
the health of your application is_healthy = True # Return a JSON response with the health status if is_healthy: return JsonResponse({'status': 'ok'}, status=200) else: return JsonResponse({'status': 'error'}, status=503) urlpatterns = [ path('', views.index, name='index'), path('world_flags/', include('world_flags.urls')), path('health', views.health_check), ] </code></pre> <p>Any and all help is appreciated!</p>
<python><django><docker><kubernetes>
2023-04-30 22:57:54
1
499
mike_gundy123
76,143,836
15,966,103
Django block content not working properly
<p>I have a <code>base.html</code> file with the following code:</p> <pre><code>{% load static %} {% load custom_tags %} &lt;!DOCTYPE html&gt; &lt;html lang=&quot;en&quot;&gt; &lt;body&gt; &lt;style&gt; html { scroll-behavior: smooth; } *, body { margin: 0; } body { display:flex; flex-direction:row; background: #030914; background: url('/static/dashimages/map.svg'), #030914;; background-attachment: fixed; overflow-y:scroll; } .nav-container { width: 200px; padding: 20px; position: fixed; top: 0; left: 0; height: 100%; max-height: 100vh; border-right: 2px solid rgba(33, 40, 54, 0.5); overflow: hidden; padding-top: 35px; } .nav-links { display:flex; flex-direction:column; justify-content:center; align-items:center; gap: 25px; margin-top: 30px; } .nav-link { display:flex; flex-direction:row; justify-content: flex-start; width: 110%; padding: 14px 20px; gap: 20px; transform: rotate(5deg); } .nav-link:hover { cursor: pointer; } .nav-link a { font-family: 'Poppins'; font-weight: 700; font-size: 19px; color: rgba(255, 255, 255, 0.15); } .nav-link img { width: 25px; height: 25px; } .nav-active a { background: linear-gradient(90deg, #A2FACF 0%, #64ACFF 101.4%); -webkit-background-clip: text; -webkit-text-fill-color: transparent; background-clip: text; text-fill-color: transparent; } .nav-active { background: linear-gradient(90deg, rgba(162, 250, 207, 0.10) 0%, rgba(100, 172, 255, 0.1) 101.4%); } &lt;/style&gt; {% block nav %} &lt;div class=&quot;nav-container&quot;&gt; &lt;img src=&quot;{% static 'levelstatic/assets/coderalogo.svg' %}&quot; style=&quot;margin: 0 auto; width: 80%; margin-bottom: 25px;&quot; alt=&quot;CODERA&quot;&gt; &lt;div class=&quot;nav-links&quot;&gt; &lt;div class=&quot;nav-link nav-active&quot;&gt; &lt;img src=&quot;{% static '/dashimages/learn.svg' %}&quot; &gt; &lt;a&gt;LEARN&lt;/a&gt; &lt;/div&gt; &lt;div class=&quot;nav-link&quot;&gt; &lt;img src=&quot;{% static '/dashimages/review.svg' %}&quot; &gt; &lt;a&gt;REVIEW&lt;/a&gt; &lt;/div&gt; &lt;div 
class=&quot;nav-link&quot;&gt; &lt;img src=&quot;{% static '/dashimages/progress.svg' %}&quot; &gt; &lt;a&gt;PROGRESS&lt;/a&gt; &lt;/div&gt; &lt;div class=&quot;nav-link&quot;&gt; &lt;img src=&quot;{% static '/dashimages/compete.svg' %}&quot; &gt; &lt;a&gt;COMPETE&lt;/a&gt; &lt;/div&gt; &lt;/div&gt; &lt;/div&gt; {% endblock%} &lt;/body&gt; &lt;/html&gt; </code></pre> <p>When I extend <code>base.html</code> on my <code>dashboard.html</code>, I want to display the content (navbar) in the <code>base.html</code> as well as my other content.</p> <p>I do this using the following code:</p> <pre><code>{% extends &quot;base.html&quot; %} {% block nav %} {% endblock %} </code></pre> <p>However, it doesn't seem to be working. Nothing shows up, except a black screen and some styles I have in my <code>base.html</code>.</p> <p>Does anyone have an idea as to why this is occurring? In case it has something to do with the rest of my code in my <code>dashboard.html</code>, I have provided that below.</p> <pre><code>{% extends &quot;base.html&quot; %} {% load static %} {% load custom_tags %} &lt;!DOCTYPE html&gt; &lt;html lang=&quot;en&quot;&gt; &lt;head&gt; &lt;script defer data-domain=&quot;codera.app&quot; src=&quot;https://plausible.io/js/script.js&quot;&gt;&lt;/script&gt; &lt;!--PLAUSIBLE--&gt; &lt;meta charset=&quot;UTF-8&quot; /&gt; &lt;meta http-equiv=&quot;X-UA-Compatible&quot; content=&quot;IE=edge&quot; /&gt; &lt;meta name=&quot;viewport&quot; content=&quot;width=device-width, initial-scale=1.0&quot; /&gt; {% comment %} &lt;link rel=&quot;stylesheet&quot; href=&quot;{% static 'dashboardstyle.css' %}&quot;&gt; {% endcomment %} &lt;link rel=&quot;preconnect&quot; href=&quot;https://fonts.googleapis.com&quot; /&gt; &lt;link rel=&quot;preconnect&quot; href=&quot;https://fonts.gstatic.com&quot; crossorigin /&gt; &lt;link href=&quot;https://fonts.googleapis.com/css2?family=Poppins:wght@300;400;500;600;700&amp;family=Space+Grotesk:wght@500;700&amp;display=swap&quot;
rel=&quot;stylesheet&quot; /&gt; &lt;link rel=&quot;icon&quot; type=&quot;image/x-icon&quot; href=&quot;{% static 'levelstatic/assets/fav.ico' %}&quot;&gt; &lt;link href=&quot;https://fonts.googleapis.com/css2?family=Gloria+Hallelujah&amp;display=swap&quot; rel=&quot;stylesheet&quot;&gt; &lt;script src=&quot;https://cdnjs.cloudflare.com/ajax/libs/gsap/3.11.3/gsap.min.js&quot;&gt;&lt;/script&gt; &lt;script src=&quot;https://code.jquery.com/jquery-3.6.0.min.js&quot;&gt;&lt;/script&gt; {% comment %} &lt;script type=&quot;text/javascript&quot; src=&quot;{% static 'dashboardscript.js' %}&quot; defer&gt;&lt;/script&gt; {% endcomment %} &lt;script src=&quot;https://cdn.jsdelivr.net/npm/chart.js&quot;&gt;&lt;/script&gt; &lt;script src=&quot;//cdn.jsdelivr.net/npm/sweetalert2@11&quot;&gt;&lt;/script&gt; &lt;title&gt;Codera | Dashboard&lt;/title&gt; &lt;/head&gt; &lt;body&gt; &lt;style&gt; html { scroll-behavior: smooth; } *, body { margin: 0; } body { display:flex; flex-direction:row; background: #030914; background: url('/static/dashimages/map.svg'), #030914;; background-attachment: fixed; overflow-y:scroll; } h1 { font-family: 'Space Grotesk'; font-style: normal; font-weight: 700; color:rgba(255,255,255, .90); } &lt;/style&gt; {% block nav %} {% endblock %} &lt;div class=&quot;maincontainer&quot;&gt; &lt;div class=&quot;leftcontent&quot;&gt; &lt;div class=&quot;levelscontainer&quot;&gt; {% comment %} &lt;div class=&quot;chapter&quot;&gt; &lt;h1&gt;Unit 1&lt;/h1&gt; &lt;p&gt;learn all about this ....&lt;/p&gt; &lt;/div&gt; {% endcomment %} &lt;div id=&quot;levels&quot;&gt; &lt;/div&gt; &lt;/div&gt; {% comment %} &lt;div class=&quot;levelscontainer&quot;&gt; &lt;div class=&quot;chapter&quot;&gt; &lt;h1&gt;Unit 1&lt;/h1&gt; &lt;p&gt;learn all about this ....&lt;/p&gt; &lt;/div&gt; &lt;div class=&quot;levels&quot;&gt; &lt;div class=&quot;level&quot;&gt; &lt;/div&gt; &lt;div class=&quot;level&quot;&gt;&lt;/div&gt; &lt;div class=&quot;level&quot;&gt;&lt;/div&gt; &lt;/div&gt; 
&lt;/div&gt; {% endcomment %} &lt;/div&gt; &lt;div class=&quot;rightcontent&quot;&gt; &lt;div class=&quot;top&quot;&gt; &lt;div class=&quot;points focus&quot;&gt; &lt;img src=&quot;{% static '/dashimages/focusclock.svg' %}&quot; style=&quot;width:20px; filter: invert(19%) sepia(1%) saturate(6780%) hue-rotate(187deg) brightness(88%) contrast(100%);&quot;&gt; &lt;p&gt;FOCUS&lt;/p&gt; &lt;/div&gt; &lt;div class=&quot;points&quot;&gt; &lt;img src=&quot;{% static '/dashimages/medal.png' %}&quot; style=&quot;width:20px;&quot;&gt; &lt;p style=&quot;color:#F69C18;&quot;&gt;{{medals}}&lt;/p&gt; &lt;/div&gt; &lt;div class=&quot;points&quot;&gt; &lt;img src=&quot;{% static '/dashimages/fire.svg' %}&quot; style=&quot;width:25px;&quot;&gt; &lt;p style=&quot;color: #FFC001;&quot;&gt;{{streaks}}&lt;/p&gt; &lt;/div&gt; &lt;div class=&quot;points&quot;&gt; &lt;img src=&quot;{% static '/dashimages/diamond.png' %}&quot; style=&quot;width:25px;&quot;&gt; &lt;p style=&quot;color: #30E1B9;&quot;&gt;{{diamonds}}&lt;/p&gt; &lt;/div&gt; &lt;/div&gt; &lt;div class=&quot;bottom&quot;&gt; &lt;div class=&quot;box&quot;&gt; &lt;div class=&quot;message&quot;&gt; &lt;img src=&quot;{% static '/dashimages/drvega.png' %}&quot; style=&quot;width:70px; height: 70px;&quot;&gt; &lt;div class=&quot;messagetitle&quot;&gt; &lt;h1&gt;Dr. Vega&lt;/h1&gt; &lt;p&gt;Incoming message&lt;/p&gt; &lt;button&gt;VIEW MESSAGE&lt;/button&gt; &lt;/div&gt; &lt;/div&gt; &lt;/div&gt; &lt;/html&gt; </code></pre>
<javascript><python><html><css><django>
2023-04-30 22:30:37
1
918
jahantaila
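For what it's worth, this is documented Django template behaviour: declaring {% block nav %}{% endblock %} in the child replaces the parent's block content with nothing, which is exactly why the navbar vanishes. Either omit the block in the child entirely, or restore the parent's markup with {{ block.super }}. A sketch against the question's own template names:

```django
{# dashboard.html #}
{% extends "base.html" %}

{# Option 1: don't declare "nav" at all - the parent's navbar renders as-is. #}

{# Option 2: override it, but pull the parent's content back in first: #}
{% block nav %}
  {{ block.super }}
  {# ...any dashboard-specific nav additions would go here... #}
{% endblock %}
```

Separately, the child template here re-declares a full HTML document around the {% extends %} tag; with template inheritance the child should normally contain only block overrides, since the parent supplies the document skeleton.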
76,143,807
5,065,546
Struggling to install pandas_datareader with conda install
<p>I'm trying to install pandas_datareader through conda install but having no luck. Each time I get the error message:</p> <pre><code>ERROR conda.core.link:_execute_actions(337): An error occurred while installing package 'conda-forge::charset-normalizer-2.1.1-pyhd8ed1ab_0'. CondaError: Cannot link a source that does not exist. C:\Users\Euan Ritchie\Anaconda3\Scripts\conda.exe Running `conda clean --packages` may resolve your problem. Attempting to roll back. CondaError: Cannot link a source that does not exist. C:\Users\Euan Ritchie\Anaconda3\Scripts\conda.exe Running `conda clean --packages` may resolve your problem. </code></pre> <p>This is after the following commands run in the Anaconda prompt:</p> <pre><code>anaconda search -t conda pandas_datareader anaconda show delichon/pandas_datareader conda install --channel https://conda.anaconda.org/delichon pandas_datareader </code></pre> <p>I then did what it suggested (ran <code>conda clean --packages</code>) and restarted the prompt, but got the same error message.</p> <p>I'd greatly appreciate any help. Conda version is 4.3.30 and python version is 3.6.1.final.0 if this is helpful.</p>
<python><anaconda>
2023-04-30 22:21:04
1
362
Euan Ritchie
76,143,799
733,002
talib.PLUS_DM - shouldn't the return value be bounded by [0, 100]?
<p>Why does this example return values out of the range [0, 100] for PLUS_DM, and why is it sensitive to scaling the values? I would assume that, since a ratio is involved, changes of magnitude would cancel out.</p> <pre><code>import numpy as np import talib highs= np.array([39050.0, 39050.0, 39100.0, 39100.0, 39050.0, 39061.0, 39090.0, 39100.0, 39100.0, 39050.0, 39100.0, 39100.0, 39050.0, 39050.0, 39150.0, 39200.0, 39175.0, 39200.0, 39200.0, 39200.0, 39200.0,]) lows = np.array([39000.0, 39000.0, 39000.0, 39000.0, 39000.0, 39000.0, 39000.0, 39000.0, 39050.0, 39000.0, 39050.0, 39000.0, 39000.0, 39000.0, 39000.0, 39100.0, 39100.0, 39100.0, 39100.0, 39100.0, 39100.0,]) span =14 dmip = talib.PLUS_DM(highs, lows, span) &gt;&gt;&gt;dmip &gt;&gt;&gt; array([ nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 150. , 239.28571429, 272.19387755, 252.75145773, 259.69778217, 241.14794059, 223.92308769, 207.92858143]) highs = highs/10000 lows = lows/10000 dmip = talib.PLUS_DM(highs, lows, span) &gt;&gt;&gt;dmip &gt;&gt;&gt; array([ nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.015 , 0.02392857, 0.02721939, 0.02527515, 0.02596978, 0.02411479, 0.02239231, 0.02079286]) </code></pre>
<python><ta-lib>
2023-04-30 22:17:22
1
1,031
Joe
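A hedged answer sketch: PLUS_DM is not normalized. It is Wilder-smoothed directional movement expressed in the instrument's own price units, so it scales linearly with the inputs; only PLUS_DI (which divides by the smoothed true range) is bounded by [0, 100]. A pure-Python reimplementation of the recurrence reproduces talib's numbers from the question:

```python
def plus_dm(highs, lows, period=14):
    """Wilder's +DM as talib computes it: seed with the sum of the first
    period-1 raw +DM moves, then smooth with prev - prev/period + dm."""
    raw = []
    for i in range(1, len(highs)):
        up = highs[i] - highs[i - 1]
        down = lows[i - 1] - lows[i]
        raw.append(up if (up > down and up > 0) else 0.0)
    out = [sum(raw[: period - 1])]          # first defined value (index period-1)
    for dm in raw[period - 1:]:
        out.append(out[-1] - out[-1] / period + dm)
    return out

# The question's data:
highs = [39050.0, 39050.0, 39100.0, 39100.0, 39050.0, 39061.0, 39090.0,
         39100.0, 39100.0, 39050.0, 39100.0, 39100.0, 39050.0, 39050.0,
         39150.0, 39200.0, 39175.0, 39200.0, 39200.0, 39200.0, 39200.0]
lows = [39000.0, 39000.0, 39000.0, 39000.0, 39000.0, 39000.0, 39000.0,
        39000.0, 39050.0, 39000.0, 39050.0, 39000.0, 39000.0, 39000.0,
        39000.0, 39100.0, 39100.0, 39100.0, 39100.0, 39100.0, 39100.0]
vals = plus_dm(highs, lows)      # 150.0, 239.2857..., matching talib
scaled = plus_dm([h / 10000 for h in highs], [l / 10000 for l in lows])
```

Every term in the recurrence is linear in price, so dividing the inputs by 10000 divides every output by 10000; there is no ratio inside PLUS_DM itself to cancel the scale.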
76,143,677
1,930,402
How to do an efficient partial string match in Pandas?
<p>I have a dataframe of around 50k strings containing some variations of city names like these.</p> <div class="s-table-container"> <table class="s-table"> <thead> <tr> <th style="text-align: right;">city</th> <th style="text-align: right;">variation</th> </tr> </thead> <tbody> <tr> <td style="text-align: right;">new delhi</td> <td style="text-align: right;">new dilli</td> </tr> <tr> <td style="text-align: right;">new delhi</td> <td style="text-align: right;">delhi</td> </tr> <tr> <td style="text-align: right;">new delhi</td> <td style="text-align: right;">dilli</td> </tr> <tr> <td style="text-align: right;">new delhi</td> <td style="text-align: right;">nayi dilli</td> </tr> <tr> <td style="text-align: right;">bengaluru</td> <td style="text-align: right;">bangalore</td> </tr> <tr> <td style="text-align: right;">bengaluru</td> <td style="text-align: right;">blr</td> </tr> <tr> <td style="text-align: right;">bengaluru</td> <td style="text-align: right;">bengalur</td> </tr> <tr> <td style="text-align: right;">mysore</td> <td style="text-align: right;">mysuru</td> </tr> <tr> <td style="text-align: right;">mysore</td> <td style="text-align: right;">mysur</td> </tr> <tr> <td style="text-align: right;">mysore</td> <td style="text-align: right;">mysore</td> </tr> <tr> <td style="text-align: right;">mysore</td> <td style="text-align: right;">mysor</td> </tr> </tbody> </table> </div> <p>I have an inverted index (2 million variations) created from similar, but noisy variations from a larger dataset where variations are present. 
I want to calculate how many times each of these variations in the above table appears at least partially in the larger dataset.</p> <p>Table to lookup in:</p> <div class="s-table-container"> <table class="s-table"> <thead> <tr> <th>city_variations</th> <th>indices</th> <th>count</th> </tr> </thead> <tbody> <tr> <td>new dilli #num</td> <td>[2,3,51,44,33]</td> <td>5</td> </tr> <tr> <td>new dilliplace</td> <td>[231,3,11,44]</td> <td>4</td> </tr> <tr> <td>new york</td> <td>[1231,2211]</td> <td>2</td> </tr> <tr> <td>#num new dilli##num</td> <td>[211,4223,532]</td> <td>3</td> </tr> </tbody> </table> </div> <p>I am expecting my results to contain: <code> new dilli -&gt; new dilli #num, new dilliplace and #num new dilli##num</code></p> <p>My code is added below; it works, but takes forever! Can someone suggest ways to optimize it?</p> <pre class="lang-py prettyprint-override"><code>def if_contains(pattern,df,col,sum_col): return df[df[col].str.contains(pattern)][sum_col].sum() inputs[&quot;count_of_vns_with_pattern&quot;] = inputs.apply( lambda x: if_contains( x[&quot;variation&quot;], inv_index, &quot;city_variations&quot;, &quot;No.of times present in inv index&quot; ), axis=1, ) </code></pre>
<python><pandas><optimization><lookup>
2023-04-30 21:39:08
1
1,509
pnv
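One way to attack the quadratic cost here (every variation scanned against every row via str.contains) is to pre-filter candidates with a character n-gram inverted index: any string that contains a variation as a substring must contain all of the variation's trigrams, so intersecting the trigram posting lists leaves only a handful of rows to verify. A pandas-free sketch of the idea on the question's own sample table (easy to port back to a DataFrame column):

```python
from collections import defaultdict

def ngrams(s, n=3):
    return {s[i:i + n] for i in range(len(s) - n + 1)}

def build_ngram_index(strings, n=3):
    """Map each character trigram to the set of row indices containing it."""
    index = defaultdict(set)
    for row, s in enumerate(strings):
        for g in ngrams(s, n):
            index[g].add(row)
    return index

def count_partial_matches(variation, strings, counts, index, n=3):
    grams = ngrams(variation, n)
    if not grams:  # variation shorter than n characters: fall back to a full scan
        return sum(c for s, c in zip(strings, counts) if variation in s)
    candidates = set.intersection(*(index.get(g, set()) for g in grams))
    # verify: sharing all trigrams does not by itself guarantee containment
    return sum(counts[r] for r in candidates if variation in strings[r])

city_variations = ["new dilli #num", "new dilliplace", "new york", "#num new dilli##num"]
counts = [5, 4, 2, 3]
index = build_ngram_index(city_variations)
```

The index is built once over the 2M rows; each of the 50k lookups then touches only the rows sharing the variation's trigrams instead of the full column.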
76,143,613
11,135,962
Trying to add a pydantic model to a set gives unhashable error
<p>I have the following code</p> <pre><code>from pydantic import BaseModel class User(BaseModel): id: int name = &quot;Jane Doe&quot; def add_user(user: User): a = set() a.add(user) return a add_user(User(id=1)) </code></pre> <p>When I run this, I get the following error:</p> <pre><code>TypeError: unhashable type: 'User' </code></pre> <p>Is there a way this issue can be solved?</p>
<python><python-3.x><pydantic>
2023-04-30 21:24:33
2
3,620
some_programmer
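For context, the error above is plain Python behaviour rather than anything pydantic-specific: a class that defines __eq__ without defining __hash__ gets __hash__ set to None, and pydantic's BaseModel defines __eq__. The usual pydantic v1 fixes are setting frozen = True in the model's Config or defining __hash__ yourself. The mechanism, demonstrated without pydantic so it runs anywhere:

```python
class Model:
    """Stand-in for a class that, like pydantic's BaseModel, defines __eq__."""
    def __init__(self, id):
        self.id = id
    def __eq__(self, other):
        return isinstance(other, Model) and self.id == other.id
    # Defining __eq__ without __hash__ makes Python set __hash__ = None,
    # so instances become unhashable and cannot be added to a set.

class HashableModel(Model):
    def __hash__(self):
        return hash(self.id)  # hash on the same fields used for equality
```

The subclass restores hashability by supplying a __hash__ consistent with __eq__, which is what pydantic's frozen config does under the hood.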
76,143,494
14,529,779
Issue with a function that calculates the sum of positive elements after a negative element
<p>I am working on a Python function that takes an array as input and returns the maximum sum of positive elements that come right after a negative element:</p> <pre class="lang-py prettyprint-override"><code>def some(arr): som = 0 startSum = False ta = [] for i in range(len(arr)): if arr[i] &lt; 0: som = 0 startSum = True elif startSum == True: som += arr[i] ta.append(som) if len(ta) == 0: return 0 return max(ta) </code></pre> <p>For example, when I input the array <code>[1, -2]</code>, the function returns <code>0</code>, but the correct output should be <code>-1</code>.</p> <p>How can I modify my function to handle this situation correctly?</p>
<python>
2023-04-30 20:53:38
1
9,636
TAHER El Mehdi
76,143,398
1,485,877
Pickle ignores __getstate__ on frozen dataclasses with slots
<p>You are supposed to be able to override how <code>pickle</code> pickles an object with <code>__getstate__</code> and <code>__setstate__</code>. However, these methods are ignored when a dataclass specifies both <code>frozen=True</code> and <code>slots=True</code>.</p> <pre class="lang-py prettyprint-override"><code>import pickle from dataclasses import dataclass @dataclass(frozen=True, slots=True) class Foo: bar: int def __getstate__(self): print(&quot;getstate&quot;) return {&quot;bar&quot;: self.bar} def __setstate__(self, state): print(&quot;setstate&quot;) object.__setattr__(self, &quot;bar&quot;, state[&quot;bar&quot;]) b = pickle.dumps(Foo(1)) foo = pickle.loads(b) </code></pre> <p>The above script should print &quot;getstate&quot; and then &quot;setstate&quot;. However, it prints nothing. It prints what I expect if I remove either <code>frozen</code> or <code>slots</code> or both. It is only the combination that fails.</p> <p>I am on Python 3.11.3.</p>
<python><pickle><python-dataclasses>
2023-04-30 20:30:45
1
9,852
drhagen
76,143,253
18,758,062
Running command with paramiko PTY gives error: write() argument must be str, not bytes
<p>I'm using <code>paramiko</code> to connect to the remote machine via SSH where a bash command is run and the stdout output needs to be read line by line as it is sent.</p> <p>This is what I have so far. <code>get_pty=True</code> needs to be set.</p> <pre><code>import paramiko import sys ssh = paramiko.SSHClient() ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy) ssh.connect('my.host.com', 22, 'root', key_filename=&quot;/home/me/.ssh/id_ed25519&quot;) stdin, stdout, stderr = ssh.exec_command(&quot;ls&quot;, get_pty=True) while True: v = stdout.channel.recv(1024) if not v: break sys.stdout.write(v) sys.stdout.flush() </code></pre> <p>Running this gives the error</p> <pre><code> File &quot;/home/me/test.py&quot;, line 13, in &lt;module&gt; sys.stdout.write(v) TypeError: write() argument must be str, not bytes </code></pre> <p>What is the correct way to read the multi-line output from running the command?</p>
<python><python-3.x><ssh><paramiko><pty>
2023-04-30 19:55:53
1
1,623
gameveloster
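The immediate cause is that Channel.recv() returns bytes while sys.stdout.write() expects str, so each chunk must be decoded (or written via sys.stdout.buffer.write(v)). Since recv() can split a multi-byte UTF-8 character across chunk boundaries, an incremental decoder is the robust way to turn the byte stream into text. Sketched here with a list of fake chunks standing in for the paramiko channel:

```python
import codecs

def decode_stream(chunks):
    """Decode a chunked byte stream to str, tolerating multi-byte
    characters that are split across recv() boundaries."""
    decoder = codecs.getincrementaldecoder("utf-8")()
    pieces = []
    for chunk in chunks:
        if not chunk:            # recv() returning b"" signals EOF
            break
        pieces.append(decoder.decode(chunk))
    pieces.append(decoder.decode(b"", final=True))  # flush any buffered tail bytes
    return "".join(pieces)

data = "h\u00e9llo\nworld\n".encode("utf-8")
chunks = [data[:2], data[2:], b""]   # the 2-byte accented character is split
text = decode_stream(chunks)
```

In the question's loop this means sys.stdout.write(decoder.decode(v)) per chunk, or simply sys.stdout.buffer.write(v) to pass the raw bytes straight through; v.decode() per chunk also works when chunk boundaries are not a concern.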
76,143,042
2,097
Is there an interface to access pyproject.toml from Python?
<p>Is there an interface to access the information in pyproject.toml from Python?</p> <p>In particular, I'd like to access the dependencies. It doesn't seem hard to do</p> <pre class="lang-py prettyprint-override"><code>toml.load(&quot;pyproject.toml&quot;)['project']['dependencies'] </code></pre> <p>and then parse that list. But that would be rewriting logic that must already be available somewhere.</p> <p>Apparently there is <code>importlib.metadata.requires()</code>, which would be better than the line above, but that seems to give a similar list of strings that still requires manual parsing:</p> <pre><code>&gt;&gt;&gt; metadata.requires(&quot;scipy&quot;) ['numpy&lt;1.27.0,&gt;=1.19.5', 'pytest; extra == &quot;test&quot;', ... </code></pre> <p>(As for the XY problem, I was thinking about writing a test that tries to install a package with the lowest versions of its dependencies to see whether the package still works.)</p>
<python><python-importlib><toml><pyproject.toml>
2023-04-30 19:04:44
2
2,416
BlackShift
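On Python 3.11+ the file itself can be loaded with the stdlib tomllib (tomllib.load(f)), but the strings in project.dependencies are PEP 508 requirement specifiers, and the standard parser for those is packaging.requirements.Requirement (giving .name, .specifier, .extras, .marker). If pulling in packaging is undesirable, a rough stdlib-only split handles simple specifiers; this sketch deliberately ignores extras, drops environment markers, and does not handle URL requirements:

```python
import re

# PEP 508-ish: a name, optional [extras], comma-separated version specifiers,
# and an optional ";"-separated environment marker that we discard.
_REQ = re.compile(r"^\s*([A-Za-z0-9._-]+)\s*(\[[^\]]*\])?\s*(.*?)\s*(?:;.*)?$")

def parse_requirement(req):
    """Split 'numpy<1.27.0,>=1.19.5' into (name, [specifiers])."""
    m = _REQ.match(req)
    if m is None:
        raise ValueError(f"unparseable requirement: {req!r}")
    name, _extras, spec = m.group(1), m.group(2), m.group(3)
    specs = [s.strip() for s in spec.split(",") if s.strip()]
    return name, specs
```

With packaging installed, Requirement("numpy<1.27.0,>=1.19.5").specifier yields the same information with full PEP 508 support, which is the safer route for a lowest-supported-versions test.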
76,142,930
4,907,339
Why does pip not install the .whl version of a package on alpine base?
<p>While trying to roll a docker-in-docker image, I'm running up against an issue where certain python modules are not being installed by their pre-compiled <code>.whl</code> variants; instead, they are being compiled from source. In this particular case it's Pandas and consequently NumPy:</p> <pre><code>FROM docker:dind RUN apk add --no-cache \ mtr \ tcpdump \ bind-tools \ iputils \ python3 \ py3-pip \ git \ </code></pre> <p>This Dockerfile gives me an x86_64 container with Python 3.10.11:</p> <pre class="lang-bash prettyprint-override"><code>/workspaces/ariobot # uname -a Linux a76ba19d56af 5.15.49-linuxkit #1 SMP Tue Sep 13 07:51:46 UTC 2022 x86_64 Linux /workspaces/ariobot # python -V Python 3.10.11 /workspaces/ariobot # </code></pre> <p>According to <a href="https://pypi.org/project/pandas/1.5.3/#modal-close" rel="nofollow noreferrer">pypi</a> there is <code>pandas-1.5.3-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl</code> which I am assuming is compatible, but it doesn't seem to be retrieving it:</p> <pre><code>/workspaces/ariobot # pip install pandas==1.5.3 Collecting pandas==1.5.3 Downloading pandas-1.5.3.tar.gz (5.2 MB) ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 5.2/5.2 MB 26.7 MB/s eta 0:00:00 Installing build dependencies ... error error: subprocess-exited-with-error × pip subprocess to install build dependencies did not run successfully. 
│ exit code: 1 ╰─&gt; [282 lines of output] Collecting setuptools&gt;=51.0.0 Downloading setuptools-67.7.2-py3-none-any.whl (1.1 MB) ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 1.1/1.1 MB 15.7 MB/s eta 0:00:00 Collecting wheel Downloading wheel-0.40.0-py3-none-any.whl (64 kB) ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 64.5/64.5 kB 14.9 MB/s eta 0:00:00 Collecting Cython&lt;3,&gt;=0.29.32 Downloading Cython-0.29.34-cp310-cp310-musllinux_1_1_x86_64.whl (2.0 MB) ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 2.0/2.0 MB 34.0 MB/s eta 0:00:00 Collecting oldest-supported-numpy&gt;=2022.8.16 Downloading oldest_supported_numpy-2022.11.19-py3-none-any.whl (4.9 kB) Collecting numpy==1.21.6 Downloading numpy-1.21.6.zip (10.3 MB) ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 10.3/10.3 MB 54.2 MB/s eta 0:00:00 Installing build dependencies: started Installing build dependencies: finished with status 'done' Getting requirements to build wheel: started Getting requirements to build wheel: finished with status 'done' Preparing metadata (pyproject.toml): started Preparing metadata (pyproject.toml): finished with status 'done' Building wheels for collected packages: numpy Building wheel for numpy (pyproject.toml): started Building wheel for numpy (pyproject.toml): finished with status 'error' error: subprocess-exited-with-error × Building wheel for numpy (pyproject.toml) did not run successfully. │ exit code: 1 ╰─&gt; [248 lines of output] setup.py:63: RuntimeWarning: NumPy 1.21.6 may not yet support Python 3.10. warnings.warn( Running from numpy source directory. </code></pre> <p>The output above also shows it trying to install NumPy 1.21.6 from source, despite <a href="https://pypi.org/project/numpy/1.21.6/#files" rel="nofollow noreferrer">pypi</a> showing a 3.10/x86_64/anylinux <code>.whl</code> variant being available.</p> <p>What am I missing?</p>
<python>
2023-04-30 18:37:32
1
492
Jason
76,142,861
10,024,860
Behavior of Tensorflow's GradientTape when target is not a scalar
<p>Can somebody explain to me the shape and value of Tensorflow's <code>GradientTape</code> output when <code>target</code> is not a scalar value? For example, I had the following code:</p> <pre><code>import tensorflow as tf a = tf.Variable([[-1.], [0.], [1.]]) b = tf.Variable([[1.,2.,3.],[4.,5.,6.]]) with tf.GradientTape() as g: c = b @ a grads = g.gradient(c, a) print(c) print(grads) </code></pre> <p>The value of c is <code>[[2.],[2.]]</code>. The value of <code>grads</code> is <code>[[5.],[7.],[9.]]</code>.</p> <p>I expected the value of <code>grads</code> to have shape <code>(3,2)</code> or <code>(2,3)</code>, and contain values of partial derivatives of each entry of c with respect to a. I am not sure what the values of 5, 7, and 9 represent (interestingly, it seems to be the gradients as if <code>c</code> had been <code>tf.reduce_sum(b @ a)</code> instead)</p> <p>The <a href="https://www.tensorflow.org/api_docs/python/tf/GradientTape#:%7E:text=do%20not%20match.-,gradient,-View%20source" rel="nofollow noreferrer">documentation</a> that I found doesn't really explain the output.</p>
<python><tensorflow><gradienttape>
2023-04-30 18:21:46
1
491
Joe C.
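This matches TensorFlow's documented behaviour: for a non-scalar target, tape.gradient() returns the gradient of the sum of the target's elements, so the result has the shape of the source a rather than a full (2, 3) Jacobian (tf.GradientTape.jacobian computes that). For c = b @ a, the derivative of sum(c) with respect to a is the column sums of b, which is exactly where 5, 7 and 9 come from. Checked in plain Python, no TensorFlow needed:

```python
b = [[1.0, 2.0, 3.0],
     [4.0, 5.0, 6.0]]

# c = b @ a has one entry per row of b; summing them and differentiating
# with respect to a_j leaves sum_i b[i][j], the j-th column sum of b.
grads = [sum(row[j] for row in b) for j in range(3)]
```

The column sums are [5.0, 7.0, 9.0], matching the [[5.], [7.], [9.]] the tape returned.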
76,142,805
726,730
Python QTreeWidget const columns
<p>Is there any way to have a QTreeWidget with a horizontal scroll bar but have some specific columns (for example the first and the second) stay fixed (const) as the user scrolls horizontally?</p> <p>There is no pressing need for this right now, but just in case: is it possible? If yes, how?</p>
<python><pyqt5><qtreewidget><qscrollarea><qscrollbar>
2023-04-30 18:07:57
0
2,427
Chris P
76,142,599
9,983,652
how to clear memory of global variables?
<p>I am using global variables to save a big dataframe which is read from a file: the first time the callback executes, the dataframe is saved in a global variable, so on later callback calls I don't need to read it from the external file again; I just use the global variable. Over time, the Dash app becomes very laggy due to the big data saved in global variables. So my question is: how do I clear the memory of global variables so the Dash app is not laggy?</p> <p>Thanks</p>
<python><plotly-dash>
2023-04-30 17:23:28
0
4,338
roudan
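One pattern that avoids an unbounded module-level global is functools.lru_cache: the expensive read is cached after the first call, the cache size is bounded by maxsize, and the memory can be explicitly released with cache_clear(). A sketch with a stand-in loader (load_big_dataframe and "data.csv" are hypothetical; substitute the real file read, e.g. pd.read_csv):

```python
from functools import lru_cache

load_count = 0

@lru_cache(maxsize=1)                  # keep at most one cached result
def load_big_dataframe(path):
    """Stand-in for the expensive file read - hypothetical."""
    global load_count
    load_count += 1
    return {"rows": 1_000_000, "source": path}   # pretend dataframe

df1 = load_big_dataframe("data.csv")   # first call: reads the "file"
df2 = load_big_dataframe("data.csv")   # cache hit: no re-read
load_big_dataframe.cache_clear()       # drop the cached object so memory can be reclaimed
df3 = load_big_dataframe("data.csv")   # reads again after clearing
```

For Dash specifically, the docs advise against mutable globals altogether (they are shared across users and break with multiple workers); dcc.Store or server-side caching such as Flask-Caching are the usual alternatives, and the same cache_clear/maxsize idea applies there.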
76,142,445
4,075,912
Converting pandas datetime to sparse datetime fails
<p>I'm trying to convert several columns in a pd.DataFrame from dense to sparse. The following MRE (dense integer to sparse integer) works:</p> <pre><code>&gt;&gt;&gt; dense = pd.DataFrame({&quot;A&quot;: [1, 0, 0, 1]}) &gt;&gt;&gt; dtype = pd.SparseDtype(int, fill_value=0) &gt;&gt;&gt; sparse = dense.astype(dtype) &gt;&gt;&gt; print(sparse.dtypes) A Sparse[int32, 0] dtype: object </code></pre> <p>However, extending this logic to sparse datetime data fails:</p> <pre><code>&gt;&gt;&gt; dense = pd.DataFrame({&quot;A&quot;: pd.to_datetime(['2021-01-01', pd.NaT, pd.NaT])}) &gt;&gt;&gt; dtype = pd.SparseDtype('datetime64') &gt;&gt;&gt; sparse = dense.astype(dtype) &gt;&gt;&gt; print(sparse.dtypes) </code></pre> <p>An assertion error is returned &quot;assert values.tx is None and aware&quot;</p> <p>I'm using pandas 1.3.5.</p>
<python><pandas><sparse-matrix>
2023-04-30 16:52:44
1
439
David Johnson