| QuestionId | UserId | QuestionTitle | QuestionBody | Tags | CreationDate | AnswerCount | UserExpertiseLevel | UserDisplayName |
|---|---|---|---|---|---|---|---|---|
78,479,335
| 525,865
|
How to automate scraping the Wikipedia infobox specifically and print the data using Python for any (other) wiki page?
|
<p>How can I automate scraping the Wikipedia infobox specifically and print the data using Python for any wiki page? My task is to automate printing the Wikipedia infobox data. The infobox is a standard wiki component, so if I get familiar with this part, I will have learned a lot for future tasks, not only for me but for the many others diving into the topic of scraping wiki pages. So this might be a general task, helpful and packed with lots of information for many others too.</p>
<p>Here is my starting point in this "lesson": as an example, I ran some tests on the Star Trek Wikipedia page (<a href="https://en.wikipedia.org/wiki/Star_Trek" rel="nofollow noreferrer">https://en.wikipedia.org/wiki/Star_Trek</a>), where I extracted the infobox section from the right-hand side and printed it row by row on screen using Python. I specifically want the infobox, and the example worked perfectly.</p>
<p>So I went on and did the following:</p>
<pre><code>import pandas

urlpage = 'https://de.wikipedia.org/wiki/Raiffeisenbank_Aidlingen'
data = pandas.read_html(urlpage)[0]
null = data.isnull()
for x in range(len(data)):
    first = data.iloc[x][0]
    second = data.iloc[x][1] if not null.iloc[x][1] else ""
    print(first, second, "\n")
</code></pre>
<p>That works perfectly. Now we have a set of pages similar to the one mentioned above:</p>
<p>If we take this page, we have about 600 records (links) on the list: <a href="https://de.wikipedia.org/wiki/Liste_der_Genossenschaftsbanken_in_Deutschland" rel="nofollow noreferrer">https://de.wikipedia.org/wiki/Liste_der_Genossenschaftsbanken_in_Deutschland</a>
What I need is to extract all the pages for a traversal of depth 1.</p>
<p>top-page: <a href="https://de.wikipedia.org/wiki/Liste_der_Genossenschaftsbanken_in_Deutschland" rel="nofollow noreferrer">https://de.wikipedia.org/wiki/Liste_der_Genossenschaftsbanken_in_Deutschland</a>
sub-page: <a href="https://de.wikipedia.org/wiki/Abtsgm%C3%BCnder_Bank" rel="nofollow noreferrer">https://de.wikipedia.org/wiki/Abtsgm%C3%BCnder_Bank</a>
with the <strong>information:</strong></p>
<pre><code>Staat Deutschland
Sitz Hauptstraße 13
73453 Abtsgmünd
Rechtsform eingetragene Genossenschaft
Bankleitzahl 600 696 73[1]
BIC GENO DES1 ABR[1]
Gründung 24. August 1890
Verband Baden-Württembergischer Genossenschaftsverband e.V.
Website abtsgmuender-bank.de
Geschäftsdaten 2022[2]
Bilanzsumme 217,5 Mio. Euro
Einlagen 170,0 Mio. Euro
Kundenkredite 107,9 Mio. Euro
Mitarbeiter 24
Geschäftsstellen 3
Mitglieder 4.669
Leitung
Vorstand Danny Dürrich
Karl Heinz Gropper
Aufsichtsrat Holger Wengert (Vors.)
</code></pre>
<p>next sub-page: <a href="https://de.wikipedia.org/wiki/Raiffeisenbank_Aidlingen" rel="nofollow noreferrer">https://de.wikipedia.org/wiki/Raiffeisenbank_Aidlingen</a>
with the information:</p>
<pre><code>Logo der Genossenschaftsbanken Raiffeisenbank Aidlingen eG
Staat Deutschland
Sitz Hauptstraße 8
71134 Aidlingen
Rechtsform eingetragene Genossenschaft
Bankleitzahl 600 692 06[1]
BIC GENO DES1 AID[1]
Gründung 12. Oktober 1901
Verband Baden-Württembergischer Genossenschaftsverband e.V.
Website ihrziel.de
Geschäftsdaten 2022[2]
Bilanzsumme 268,0 Mio. EUR
Einlagen 242,0 Mio. EUR
Kundenkredite 121,0 Mio. EUR
Mitarbeiter 26
Geschäftsstellen 1 + 1 SB-GS
Mitglieder 3.196
Leitung
Vorstand Marco Bigeschi
Markus Vogel
Aufsichtsrat Thomas Rott (Vorsitzender)
</code></pre>
<p>and the following sub-pages derived from the top-page (note: on this page we have about 600 records (links) on the <strong>list</strong>: <a href="https://de.wikipedia.org/wiki/Liste_der_Genossenschaftsbanken_in_Deutschland" rel="nofollow noreferrer">https://de.wikipedia.org/wiki/Liste_der_Genossenschaftsbanken_in_Deutschland</a>)</p>
<p>Now we need to apply the following parser to all the sub-pages, then collect the results into a dataframe and print it to screen:</p>
<pre><code>import pandas

urlpage = 'https://de.wikipedia.org/wiki/Raiffeisenbank_Aidlingen'
data = pandas.read_html(urlpage)[0]
null = data.isnull()
for x in range(len(data)):
    first = data.iloc[x][0]
    second = data.iloc[x][1] if not null.iloc[x][1] else ""
    print(first, second, "\n")
</code></pre>
<p>Here is the <strong>list</strong> of <strong>URLs</strong> that we want to work on:</p>
<pre><code>/wiki/Volksbank_im_Wesertal
/wiki/Raiffeisenbank_Westallg%C3%A4u
/wiki/Raiffeisenbank_MEHR_Mosel_-_Eifel_-_Hunsr%C3%BCck_-_Region
/wiki/Volksbank_Sangerhausen
/wiki/Volksbank_Grebenhain
/wiki/Obersulm
/wiki/Raiffeisenbank_Regenstauf
/wiki/Damme_(D%C3%BCmmer)
/wiki/Volksbank_Westerstede
/wiki/VR-Bank_Westm%C3%BCnsterland
/wiki/Breisach_am_Rhein
/wiki/DKM_Darlehnskasse_M%C3%BCnster
/wiki/Scharnhauser_Bank
/wiki/Sitz_(juristische_Person)
/wiki/Volksbank_Gera_Jena_Rudolstadt
/wiki/Volksbank_Bocholt
/wiki/G%C3%B6rlitz
/wiki/Raiffeisen-Volksbank_Neustadt
</code></pre>
<p>With this set of URLs I do the following:</p>
<pre><code>import requests
from bs4 import BeautifulSoup
import pandas as pd

# Function to parse data from Wikipedia page
def parse_wikipedia_page(url):
    response = requests.get(url)
    soup = BeautifulSoup(response.content, "html.parser")
    # Find the relevant content
    infobox = soup.find("table", class_="toccolours toptextcells float-right infobox")
    if not infobox:
        return None
    # Extract data from the infobox
    data = {}
    rows = infobox.find_all("tr")
    for row in rows:
        cells = row.find_all(["th", "td"])
        if len(cells) == 2:
            key = cells[0].text.strip()
            value = cells[1].text.strip()
            data[key] = value
    return data

# List of URLs for Wikipedia pages
urls = [
    'https://de.wikipedia.org/wiki/Volksbank_im_Wesertal',
    'https://de.wikipedia.org/wiki/Raiffeisenbank_Westallg%C3%A4u',
    'https://de.wikipedia.org/wiki/Raiffeisenbank_MEHR_Mosel_-_Eifel_-_Hunsr%C3%BCck_-_Region',
    'https://de.wikipedia.org/wiki/Volksbank_Sangerhausen',
    'https://de.wikipedia.org/wiki/Volksbank_Grebenhain',
    'https://de.wikipedia.org/wiki/Obersulm',
    'https://de.wikipedia.org/wiki/Raiffeisenbank_Regenstauf',
    'https://de.wikipedia.org/wiki/Damme_(D%C3%BCmmer)',
    'https://de.wikipedia.org/wiki/Volksbank_Westerstede',
    'https://de.wikipedia.org/wiki/VR-Bank_Westm%C3%BCnsterland',
    'https://de.wikipedia.org/wiki/Breisach_am_Rhein',
    'https://de.wikipedia.org/wiki/DKM_Darlehnskasse_M%C3%BCnster',
    'https://de.wikipedia.org/wiki/Scharnhauser_Bank',
    'https://de.wikipedia.org/wiki/Sitz_(juristische_Person)',
    'https://de.wikipedia.org/wiki/Volksbank_Gera_Jena_Rudolstadt',
    'https://de.wikipedia.org/wiki/Volksbank_Bocholt',
    'https://de.wikipedia.org/wiki/G%C3%B6rlitz',
    'https://de.wikipedia.org/wiki/Raiffeisen-Volksbank_Neustadt'
]

# Initialize an empty list to store parsed data
all_data = []

# Parse each Wikipedia page and store the results
for url in urls:
    parsed_data = parse_wikipedia_page(url)
    if parsed_data:
        all_data.append(parsed_data)

# Create a DataFrame from the parsed data
df = pd.DataFrame(all_data)

# Print the DataFrame
print(df)
</code></pre>
<p>This gives back the following:</p>
<pre><code> Staat Sitz \
0 Deutschland Deutschland Osterstraße 1131863 Coppenbrügge
1 Deutschland Deutschland Alois-Stadler-Straße 288167 Gestratz
2 Deutschland Deutschland Koblenzer Straße 5256759 Kaisersesch
3 Deutschland Deutschland Göpenstraße 3506526 Sangerhausen
4 Deutschland Deutschland Hauptstraße 3936355 Grebenhain
5 Deutschland Deutschland Marktplatz 893128 Regenstauf
6 Deutschland Deutschland Peterstraße 1926655 Westerstede
7 Deutschland Deutschland Kupferstr. 2848653 Coesfeld
8 Deutschland Deutschland Breul 2648143 Münster
9 Deutschland Deutschland Raiffeisenstraße 2–473760 Ostfildern
10 Deutschland Deutschland Johannisplatz 707743 Jena
11 Deutschland Deutschland Meckenemstraße 1046395 Bocholt
12 Deutschland Deutschland Hagener Straße 4431535 Neustadt am Rübenberge
Rechtsform Bankleitzahl BIC \
0 eingetragene Genossenschaft 254 626 80[1] GENO DEF1 COP[1]
1 eingetragene Genossenschaft 733 698 23[1] GENO DEF1 WWA[1]
2 eingetragene Genossenschaft 570 691 44[1] GENO DED1 KAI[1]
3 eingetragene Genossenschaft 800 635 58[1] GENO DEF1 SGH[1]
4 eingetragene Genossenschaft 500 691 46[1] GENO DE51 GRC[1]
5 eingetragene Genossenschaft 750 618 51[1] GENO DEF1 REF[1]
6 eingetragene Genossenschaft 280 632 53[1] GENO DEF1 WRE[1]
7 eingetragene Genossenschaft 428 613 87[1] GENO DEM1 BOB[1]
8 eingetragene Genossenschaft 400 602 65[1] GENO DEM1 DKM[1]
9 eingetragene Genossenschaft 600 695 17[1] GENO DES1 SCA[1]
10 eingetragene Genossenschaft 830 944 54[1] GENO DEF1 RUJ[1]
11 eingetragene Genossenschaft 428 600 03[1] GENO DEM1 BOH[1]
12 eingetragene Genossenschaft 250 692 62[1] GENO DEF1 NST[1]
Gründung Verband \
0 31. Dezember 1886 Genoverband e.V.
1 1. April 1903 Genossenschaftsverband Bayern e.V.
2 1872 Genoverband e.V.
3 2. Dezember 1931 Genoverband e.V.
4 1880 Genoverband e. V.
5 6. Februar 1894 Genossenschaftsverband Bayern e.V.
6 2. Februar 1899 Genossenschaftsverband Weser-Ems e.V.
7 NaN Genoverband e.V.
8 24. Januar 1961 Genoverband e.V.
9 23. Oktober 1890 Baden-Württembergischer Genossenschaftsverband...
10 16. März 1857 Genoverband e.V.
11 1900 Genoverband e.V.
12 12. Januar 1920 Genoverband e.V.
Website Bilanzsumme Einlagen \
0 vb-iw.de 348,0 Mio. EUR 295,0 Mio. EUR
1 raiffeisenbank-westallgaeu.de 404,6 Mio. EUR 344,5 Mio. EUR
2 rb-mehr.de 855,0 Mio. EUR 689,0 Mio. EUR
3 volksbank-sangerhausen.de 176,6 Mio. EUR 155,7 Mio. EUR
4 vb-grebenhain.de 151,4 Mio. € 117,2 Mio. €
5 raiffeisenbank-regenstauf.de 315,1 Mio. Euro 240,4 Mio. Euro
6 vb-westerstede.de 495,5 Mio. EUR 290,8 Mio. EUR
7 www.vrbank-wml.de 3,5 Mrd. Euro 2,1 Mrd. Euro
8 dkm.de 4.827 Mio. EUR 3.682 Mio. EUR
9 scharnhauserbank.de 175,0 Mio. EUR 136,0 Mio. EUR
10 www.volksbank-vor-ort.de 1.828,5 Mio. EUR 1.504,6 Mio. EUR
11 vb-bocholt.de 1.695,0 Mio. EUR 1.145,0 Mio. EUR
12 raiffeisen-volksbank-neustadt.de 139,9 Mio. EUR 108,6 Mio. EUR
Kundenkredite Mitarbeiter Geschäftsstellen Mitglieder \
0 180,0 Mio. EUR 58 3 + 2 SB-GS 7.546
1 254,4 Mio. EUR 59 8 und 3 SB-GS 6.367
2 447,0 Mio. EUR 128 9 + 3 SB-GS 13.728
3 40,4 Mio. EUR 20 4 4.634
4 96,5 Mio. € 21 4 2.796
5 205,6 Mio. Euro 53 5 + 1 SB-GS 3.107
6 346,3 Mio. EUR 63 2 3.579
7 2,6 Mrd. Euro 361 20 über 48.000
8 1.759 Mio. EUR 137 1 1.566
9 124,0 Mio. EUR 9 1 1.020
10 1.108,3 Mio. EUR 247 15 Filialen + 20 SB-Stellen 34.653
11 1.354,0 Mio. EUR 218 7 + 7 SB-GS 22.676
12 74,9 Mio. EUR 24 2 1.763
Vorstand \
0 Illka Osterwald (Vors.)Marco Weßling
1 Martin Öfner (Vorstandssprecher)Hans-Peter Beyrer
2 Heinrich Josef BlümlingKarl Josef BrunnerElmar...
3 Carmen ClausDaniel Kubica
4 Martin WinterKarsten Beckmann
5 Stephan Hauf (Vorsitzender)Wolfgang Haas
6 Christian BlessenStefan Terveer
7 Carsten Düerkop (Vors.)Matthias EntrupBerthold...
8 Christoph Bickmann (Vors.)Gerrit Abelmann
9 Joachim Rapp (Vorstandsvorsitzender)Andreas Gi...
10 Falko Gaudig Torsten Narr Jens Luley Harald Ra...
11 Franz-Josef HeidermannMartin Wilms
12 Frank HahnMarkus Heumann
Aufsichtsrat
0 Andreas Voß (Vors.)
1 Kathrin Koch (Vorsitzende)
2 Günter Urwer (Vors.)
3 Anette Stelter (Vorsitzende)
4 Werner Müller (Vors.)
5 Barbara Eigl (Vorsitzende)
6 Ralf Denker (Vors.)
7 Helmut Rüskamp (Vors.)
8 Antonius Hamers (Vors.)
9 Thomas Durst (Vorsitzender)
10 Bernhard Schanze (Vors.)
11 Christoph Ernsten (Vors.)
12 Gilbert Herzig (Vorsitzender)
</code></pre>
<p>My <strong>question</strong> is the following: my approach looks a bit clumsy; in particular, I did not manage to find a clever way to get all the URLs from the first page: <a href="https://de.wikipedia.org/wiki/Liste_der_Genossenschaftsbanken_in_Deutschland" rel="nofollow noreferrer">https://de.wikipedia.org/wiki/Liste_der_Genossenschaftsbanken_in_Deutschland</a></p>
<p>Additionally, the final results look a bit odd. I would love to bring them into a proper table; that would be awesome.</p>
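<p>A sketch of one way to collect those URLs: grab every <code>/wiki/</code> link inside the page's content area with BeautifulSoup and filter out namespace pages (the ones containing a colon, such as <code>Hilfe:</code> or <code>Datei:</code>). The HTML below is a trimmed stand-in; on the real list page you would fetch the markup with <code>requests</code> and pass <code>response.text</code> instead, and the filtering rules are an assumption that may need tightening:</p>

```python
from urllib.parse import urljoin

from bs4 import BeautifulSoup

# Trimmed stand-in for the list page's markup; in practice use
# requests.get("https://de.wikipedia.org/wiki/Liste_der_Genossenschaftsbanken_in_Deutschland").text
html = """
<div id="mw-content-text">
  <ul>
    <li><a href="/wiki/Abtsgm%C3%BCnder_Bank">Abtsgmünder Bank</a></li>
    <li><a href="/wiki/Raiffeisenbank_Aidlingen">Raiffeisenbank Aidlingen</a></li>
    <li><a href="/wiki/Hilfe:Seite">Hilfe</a></li>
  </ul>
</div>
"""

soup = BeautifulSoup(html, "html.parser")
content = soup.find(id="mw-content-text")  # main article content on Wikipedia pages
urls = [
    urljoin("https://de.wikipedia.org/", a["href"])
    for a in content.find_all("a", href=True)
    if a["href"].startswith("/wiki/") and ":" not in a["href"]  # skip Hilfe:, Datei:, ...
]
print(urls)
```

The resulting list can then be fed directly into the <code>parse_wikipedia_page</code> loop above.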
|
<python><pandas><web-scraping><beautifulsoup><request>
|
2024-05-14 16:03:29
| 1
| 1,223
|
zero
|
78,479,313
| 1,650,379
|
Updating a Plotly FigureWidget "in situ" in a Dash app
|
<p>Consider the following toy Python class, which can occasionally fetch additional data from some source, and updates its display:</p>
<pre><code># minimum_working_example.py
import plotly.graph_objects as go
import random

class MyData:
    def __init__(self):
        self.mydata = []
        self.figure = go.FigureWidget()
        self.figure.add_trace(go.Scatter(
            name = 'mydata',
        ))
        self.get_more_data()

    def get_more_data(self):
        self.mydata += [ random.random() for _ in range(10) ]
        self.figure.update_traces(
            selector=dict(name='mydata'),
            y = self.mydata,
            x = list(range(len(self.mydata))),
        )
</code></pre>
<p>If I go into a Jupyter notebook and say things like</p>
<pre><code># interactive jupyter prompt
from minimum_working_example import MyData
mydata = MyData()
display(mydata.figure)
</code></pre>
<p>then I get a figure that I can fiddle around with: zooming the axes, toggling which traces are visible, etc. (My non-minimal example has multiple traces.) Because the figure is a <code>FigureWidget</code>, calling <code>mydata.get_more_data()</code> changes the trace on the graph, but it doesn't change any other feature.</p>
<p>[<img src="https://i.sstatic.net/ENcWejZP.jpg" alt="The thing in action1" /></p>
<p>I would like to move this display from an interactive notebook to a Dash app with a "get more data" button, but so far everything that I've tried causes the figure to reset all of its modifications rather than updating in place. Here's an example that fails in an educational way, based on the documentation page for <a href="https://dash.plotly.com/partial-properties" rel="nofollow noreferrer">Partial Property Updates</a>:</p>
<pre><code># mwe_dash.py
from dash import Dash, html, dcc, Input, Output, Patch, callback
import plotly.graph_objects as go
import datetime
import random

import minimum_working_example

app = Dash(__name__)
mydata = minimum_working_example.MyData()

app.layout = html.Div(
    [
        html.Button("Append", id="append-new-val"),
        dcc.Graph(figure=mydata.figure, id="append-example-graph"),
    ]
)

@callback(
    Output("append-example-graph", "figure"),
    Input("append-new-val", "n_clicks"),
    prevent_initial_call=True,
)
def put_more_data_on_figure(n_clicks):
    mydata.get_more_data()
    return mydata.figure

if __name__ == "__main__":
    app.run(port=8056)
</code></pre>
<p>If I run this by itself with <code>python mwe_dash.py</code>, then the figure is completely reset every time I press the "get more data" button — just like the examples on the documentation page above. But if I run it within Jupyter, with</p>
<pre><code># in jupyter again
from mwe_dash import app, mydata
display(mydata.figure)
app.run(jupyter_server_url='localhost:8888',port=8057)
</code></pre>
<p>then appearance-changing interactions with the figure in the Jupyter notebook are applied in the Dash app when the figure updates. However, the figure in the Dash app resets to the version in the notebook every time the "get more data" button is pressed.</p>
<p>How can I get the figure in the Dash app to reproduce the update-without-resetting behavior that is in the Jupyter notebook? I assume I need some additional callback from the figure to the Dash app. Will it be necessary to have a Jupyter server running in the background, even when I get rid of the notebook part of this analysis tool?</p>
|
<python><jupyter-notebook><plotly><ipython><plotly-dash>
|
2024-05-14 15:58:27
| 1
| 392
|
rob
|
78,478,992
| 12,256,384
|
Call Databricks notebook without specifying parameters from an external notebook
|
<p>I have a main notebook <strong>main_notebook</strong> from which I want to call an external notebook <strong>notebook1</strong> that has 2 parameters, <code>param1</code> and <code>param2</code>. I do not want to specify the parameter names; I only know the parameter order. I want something like this:</p>
<pre><code>dbutils.notebook.run("/path/to/notebook1", 60, {"param1_value", "param2_value"})
</code></pre>
<p>But this does not work and I could not find any solution.</p>
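<p>For what it's worth, <code>dbutils.notebook.run</code> expects its third argument to be a dict of named parameters, so a set literal like <code>{"param1_value", "param2_value"}</code> cannot work. One workaround sketch: keep the known parameter order in a list and zip it with the positional values. <code>dbutils</code> only exists inside Databricks, so this sketch returns the dict instead of actually running the notebook:</p>

```python
PARAM_NAMES = ["param1", "param2"]  # assumed widget order of notebook1

def run_notebook_positional(path, timeout, values, names=PARAM_NAMES):
    """Build the named-parameter dict that dbutils.notebook.run requires."""
    params = dict(zip(names, values))
    # Inside Databricks you would call:
    #   return dbutils.notebook.run(path, timeout, params)
    return params  # returned here so the sketch runs outside Databricks

print(run_notebook_positional("/path/to/notebook1", 60, ["param1_value", "param2_value"]))
```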
|
<python><databricks><databricks-notebook>
|
2024-05-14 15:03:41
| 0
| 1,214
|
Beso
|
78,478,961
| 2,954,547
|
IPython --profile option works with `jupyter console` but not JupyterLab
|
<p>I have the following (redacted) <code>my-project/kernel.json</code> in one of the standard Jupyter kernel paths:</p>
<pre class="lang-json prettyprint-override"><code>{
  "argv": [
    ".../my-project/.local/conda/bin/python",
    "-Xfrozen_modules=off",
    "-m",
    "ipykernel_launcher",
    "--ipython-dir",
    ".../my-project/.local/ipython",
    "--profile",
    "default",
    "-f",
    "{connection_file}"
  ],
  "display_name": "My Project",
  "language": "python",
  "metadata": {
    "debugger": true
  }
}
</code></pre>
<p>In the <code>.local/ipython/profile_default/ipython_config.py</code> and <code>.local/ipython/profile_default/startup/*.py</code> scripts, I have some custom configuration, such as adding entries to <code>sys.path</code> and enabling the <code>autoreload</code> extension.</p>
<p>When I run <code>jupyter console --kernel my-project</code>, all of the customization seems to work perfectly.</p>
<p>However when I run a JupyterLab notebook using the exact same kernel, none of the customizations seem to work!</p>
<p>What might cause this problem? Is there an effective way to debug this? Some log file to inspect? A cache to clear? Some known "gotcha" setting to be aware of?</p>
|
<python><jupyter-notebook><jupyter><ipython><jupyter-console>
|
2024-05-14 14:58:05
| 1
| 14,083
|
shadowtalker
|
78,478,896
| 20,122,390
|
How can I run Pandas (Python) scripts from Go?
|
<p>I have several CSV files that I must process using the pandas (Python) library, but I would like to take advantage of Go's concurrency model to process several files concurrently or in parallel. So I guess each goroutine should start a Python interpreter in order to execute the pandas code. The CSV file is already loaded in my Go code, so it is not simply a matter of executing a Python script; I must pass the file to the script. Is this possible, and which approach should I use given the context of my problem? I don't know if there is any library already created for this purpose.</p>
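<p>One plausible approach needs no dedicated library: each goroutine starts a Python subprocess (via <code>exec.Command</code> in Go), writes the already-loaded CSV bytes to the subprocess's stdin, and reads the transformed CSV back from stdout. Keeping to Python here, this is a sketch of only the Python side of that pipe; the per-row <code>total</code> column is a placeholder transformation:</p>

```python
# process.py: the Go side would pipe CSV bytes into this script's stdin and
# read the result from stdout, i.e. sys.stdout.write(process(sys.stdin.read())).
import io

import pandas as pd

def process(csv_text: str) -> str:
    """Apply a (placeholder) pandas transformation to CSV text."""
    df = pd.read_csv(io.StringIO(csv_text))
    df["total"] = df.sum(axis=1, numeric_only=True)
    return df.to_csv(index=False)

print(process("a,b\n1,2\n3,4\n"))
```

Because each subprocess is independent, Go can run as many of them concurrently as there are files, without embedding a Python interpreter.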
|
<python><pandas><go>
|
2024-05-14 14:47:44
| 0
| 988
|
Diego L
|
78,478,844
| 2,543,065
|
Python FastApi - Import error: No module named 'routers'
|
<p>I am trying to build a FastAPI app with my custom API provider, following the FastAPI guide from the documentation.
When I run this command: <code>fastapi run</code></p>
<p>It gives an error: <code>ERROR Import error: No module named 'routers'</code></p>
<p>I have double-checked that an <code>__init__.py</code> file exists under the routers folder and the project's main folder.</p>
<p>Here is the main.py:</p>
<pre><code>import sys
from fastapi import FastAPI, Depends, HTTPException
from sqlalchemy.orm import Session
#from core.config import settings
import sys
from fastapi import FastAPI, Depends, HTTPException
from sqlalchemy.orm import Session

sys.path.append('/routers/chat_order_router')  # Append the directory
from routers.chat_order_router import order_router

app = FastAPI()
app.include_router(order_router)

@app.get("/")
def read_root():
    return {"Hello": "World"}
</code></pre>
<p><strong>chat_order_router.py :</strong></p>
<pre><code>from fastapi import APIRouter, Depends, HTTPException
from sqlalchemy.orm import Session
from typing import List
from dependencies import get_db
from schemas.chat_schema import ChatCreateSchema, ChatResponseSchema
from models.chat import Chat
import os
from dotenv import load_dotenv
from services.zus_backend_service import ZusBackendService  # Ensure this is correctly imported
import pytest
from httpx import Response

order_router = APIRouter(
    prefix="/chats_order",
    tags=["chats_order"],
    responses={404: {"description": "Not found"}},
)

load_dotenv()

# Dependency
def get_zus_service():
    base_uri = os.getenv('ZUS_BE_URL')  # Default value if not set
    client_name = os.getenv('ZUS_BE_NAME')
    client_secret = os.getenv('ZUS_BE_SECRET')
    if not base_uri or not client_name or not client_secret:
        raise Exception("Critical environment variables are missing for ZusBackendService")
    return ZusBackendService(base_uri, client_name, client_secret)

@order_router.get("/order-details/{order_id}")
async def order_details(order_id: str, zus_service: ZusBackendService = Depends(get_zus_service)):
    order_details = zus_service.get_order_details({"order_no": order_id})
    if not order_details:
        raise HTTPException(status_code=404, detail="Order not found")
    return order_details

@order_router.post("/", response_model=ChatResponseSchema, status_code=201)
def create_chat_orders(chat: ChatCreateSchema, db: Session = Depends(get_db)):
    new_chat = Chat(**chat.dict())
    db.add(new_chat)
    db.commit()
    db.refresh(new_chat)
    return new_chat

@order_router.get("/", response_model=List[ChatResponseSchema])
def read_chats_orders(skip: int = 0, limit: int = 100, db: Session = Depends(get_db)):
    chats = db.query(Chat).offset(skip).limit(limit).all()
    return chats

@order_router.get("/{chat_id}", response_model=ChatResponseSchema)
def read_chats_orders(chat_id: int, db: Session = Depends(get_db)):
    chat = db.query(Chat).filter(Chat.id == chat_id).first()
    if chat is None:
        raise HTTPException(status_code=404, detail="Chat not found")
    return chat

@order_router.get("/check_order/{order_id}", status_code=200)
def check_order(order_id: int, db: Session = Depends(get_db)):
    """API endpoint to check if an order exists."""
    # Check directly if an order exists by querying the Chat table.
    order_exists = db.query(Chat).filter(Chat.order_id == order_id).first() is not None
    if order_exists:
        return {"message": "Order exists."}
    else:
        raise HTTPException(status_code=404, detail="Order not found")
</code></pre>
<p>And project's file structure:</p>
<p><a href="https://i.sstatic.net/UmkzsdPE.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/UmkzsdPE.png" alt="enter image description here" /></a></p>
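<p>A sketch of one commonly suggested fix, hedged because the real layout is only visible in the screenshot: <code>sys.path.append('/routers/chat_order_router')</code> adds a path rooted at the filesystem root, whereas <code>from routers.chat_order_router import order_router</code> needs the directory that <em>contains</em> <code>routers/</code> (the project root) on <code>sys.path</code>:</p>

```python
import sys
from pathlib import Path

# In main.py the project root would be Path(__file__).resolve().parent;
# Path.cwd() is used here only so the snippet runs anywhere.
project_root = str(Path.cwd())
if project_root not in sys.path:
    sys.path.append(project_root)

print(project_root in sys.path)  # imports like "routers.chat_order_router" can now resolve
```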
|
<python><fastapi>
|
2024-05-14 14:40:06
| 1
| 726
|
balron
|
78,478,825
| 1,422,096
|
Numpy slicing on a zero-padded 2D array
|
<p>Given a 2D Numpy array, I'd like to be able to pad it on the left, right, top, bottom side, like the following pseudo-code. Is there anything like this already built into Numpy?</p>
<pre><code>import numpy as np
a = np.arange(16).reshape((4, 4))
# [[ 0 1 2 3]
# [ 4 5 6 7]
# [ 8 9 10 11]
# [12 13 14 15]]
pad(a)[-4:2, -1:3] # or any syntax giving the same result
#[[0 0 0 0]
# [0 0 0 0]
# [0 0 0 0]
# [0 0 0 0]
# [0 0 1 2]
# [0 4 5 6]]
pad(a)[-4:2, -1:6]
#[[0 0 0 0 0 0 0]
# [0 0 0 0 0 0 0]
# [0 0 0 0 0 0 0]
# [0 0 0 0 0 0 0]
# [0 0 1 2 3 0 0]
# [0 4 5 6 7 0 0]]
</code></pre>
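<p>There is no single built-in with exactly these negative-index padding semantics, but the behaviour shown above can be sketched by allocating the requested window and copying in the overlapping region (<code>np.pad</code> plus index offsets is an alternative). <code>padded_slice</code> is a hypothetical helper name:</p>

```python
import numpy as np

def padded_slice(a, rows, cols, fill=0):
    """Slice `a` as if it sat in an infinite plane of `fill` values.

    rows and cols are (start, stop) pairs that may run past the edges;
    out-of-range positions are filled with `fill`.
    """
    r0, r1 = rows
    c0, c1 = cols
    out = np.full((r1 - r0, c1 - c0), fill, dtype=a.dtype)
    # region where the requested window overlaps the actual array
    ar0, ar1 = max(r0, 0), min(r1, a.shape[0])
    ac0, ac1 = max(c0, 0), min(c1, a.shape[1])
    if ar0 < ar1 and ac0 < ac1:
        out[ar0 - r0:ar1 - r0, ac0 - c0:ac1 - c0] = a[ar0:ar1, ac0:ac1]
    return out

a = np.arange(16).reshape((4, 4))
print(padded_slice(a, (-4, 2), (-1, 3)))
```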
|
<python><numpy><multidimensional-array><padding><numpy-ndarray>
|
2024-05-14 14:36:38
| 3
| 47,388
|
Basj
|
78,478,782
| 4,317,190
|
Is there a way to force a command's argument to read from stdin instead of straight from the command line?
|
<p>I have a python script I'm running that I don't have the ability to change. Here's how I tried running it:</p>
<pre><code>python my_script.py --username <username> --password -
</code></pre>
<p>I did this just to see whether the script would treat <code>-</code> as "read from stdin", but it didn't; the script simply assumed the password I wanted was literally "-".</p>
<p>Can I force the input to <code>--password</code> to be read from stdin somehow?</p>
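<p>A common shell idiom for this, shown here against a hypothetical stand-in for the unmodifiable script: command substitution with <code>"$(cat)"</code> consumes the command's stdin and hands the result to the option before the script ever parses its arguments.</p>

```shell
# Stand-in for the script that cannot be changed.
cat > /tmp/my_script.py <<'EOF'
import argparse
p = argparse.ArgumentParser()
p.add_argument('--username')
p.add_argument('--password')
args = p.parse_args()
print('got password:', args.password)
EOF

# "$(cat)" reads the piped stdin, so the password never appears in shell history.
printf 'hunter2' | python3 /tmp/my_script.py --username alice --password "$(cat)"
```

Note that the password still ends up in the process's argument list (visible in <code>ps</code>), so this hides it from shell history but not from other local users.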
|
<python><bash><command-line>
|
2024-05-14 14:27:59
| 1
| 351
|
Ezra Henley
|
78,478,754
| 7,217,960
|
__del__ method execution order different when dealing with exception before call to del
|
<p>I'm using Python 3.8.10 on Windows.</p>
<p>The code below prints what I expect to the console:</p>
<pre><code>import gc

class Myclass():
    def my_method(self):
        raise Exception("something bad happened.")

    def __del__(self):
        print("__del__exectuted.")

print("first instance.")
my_instance = Myclass()
del my_instance
gc.collect()

print("second instance.")
my_instance_2 = Myclass()
</code></pre>
<p><a href="https://i.sstatic.net/CbZmfgUr.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/CbZmfgUr.png" alt="enter image description here" /></a></p>
<p>However, if an exception is raised by the instance before calling del, it looks like the <code>__del__</code> execution for that instance is delayed and only runs upon shutdown:</p>
<pre><code>import gc

class Myclass():
    def my_method(self):
        raise Exception("something bad happened.")

    def __del__(self):
        print("__del__exectuted.")

print("first instance.")
my_instance = Myclass()
try:
    my_instance.my_method()
except Exception as e:
    print(f"{e}")
del my_instance
gc.collect()

print("second instance.")
my_instance_2 = Myclass()
</code></pre>
<p><a href="https://i.sstatic.net/rWHw57kZ.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/rWHw57kZ.png" alt="enter image description here" /></a></p>
<p>I would expect the <code>__del__</code> method to be called for the first object before "second instance." is printed, like in the first example.
Is there anything I can do to make it execute in that way?</p>
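<p>A plausible explanation, depending on how the script is run (IDEs and interactive shells additionally keep the last exception alive via <code>sys.last_traceback</code>): a live traceback references the frame in which the exception was raised, and that frame's local <code>self</code> keeps the instance alive past <code>del</code>. A minimal sketch of the mechanism, deliberately holding the traceback to reproduce the delay:</p>

```python
import gc

deleted = []

class Myclass:
    def my_method(self):
        raise Exception("something bad happened.")

    def __del__(self):
        deleted.append("__del__ executed.")

my_instance = Myclass()
tb = None
try:
    my_instance.my_method()
except Exception as e:
    tb = e.__traceback__  # deliberately keep the traceback alive

del my_instance
gc.collect()
alive_while_tb_held = (deleted == [])  # my_method's frame in tb still holds `self`

tb = None  # drop the traceback: the frame, and with it the instance, is freed
gc.collect()
print(alive_while_tb_held, deleted)
```

If something similar applies in your environment, clearing whatever still references the exception (for example <code>sys.last_traceback</code> in interactive sessions) before <code>gc.collect()</code> should let <code>__del__</code> run at the expected point.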
|
<python><del>
|
2024-05-14 14:24:14
| 0
| 412
|
Guett31
|
78,478,574
| 3,289,890
|
Investigating discrepancies in TensorFlow and PyTorch performance
|
<p>In my pursuit of mastering PyTorch neural networks, I've attempted to replicate an existing TensorFlow architecture. However, I've encountered a significant performance gap: while TensorFlow achieves rapid learning within 25 epochs, PyTorch requires at least 250 epochs for comparable generalization. Despite meticulous code scrutiny and carefully aligning the architectures of both networks, the disparity persists. Can anyone shed light on what might be amiss here?</p>
<p>In the subsequent section, I'll present the full Python code for both implementations, along with the CLI output and graphical visualization.</p>
<p><strong>Reproducibility:</strong> As I prefer not to share the original dataset, I've attached a piece of code that emulates the dataset instead. The generated <code>data_inverter.csv</code> can be used to reproduce the observed behavior.</p>
<p><strong>PyTorch code:</strong></p>
<pre><code># Standard library imports
import pandas as pd
import matplotlib.pyplot as plt
# External library imports
import torch
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import OneHotEncoder, MinMaxScaler, StandardScaler
from sklearn.metrics import max_error, mean_absolute_error, mean_squared_error
# Loading dataset
df_data = pd.read_csv("./data_inverter.csv", names=["pvt", "edge", "slew", "load", "delay"])
# Selecting subset of data based on specific conditions
df_select = df_data[(df_data["pvt"] == "PtypV1500T027") & (df_data["edge"] == "rise")]
# Splitting features and target variable
X = df_select.drop(["pvt", "edge", "delay"], axis='columns')
y = df_select["delay"]
# Scaling input features using Min-Max scaling
slew_scaler = MinMaxScaler()
load_scaler = MinMaxScaler()
X_scaled = X.copy()
X_scaled["slew"] = slew_scaler.fit_transform(X_scaled.slew.values.reshape(-1, 1))
X_scaled["load"] = load_scaler.fit_transform(X_scaled.load.values.reshape(-1, 1))
# Splitting data into training and testing sets
X_train, X_test, y_train, y_test = train_test_split(X_scaled, y, test_size=0.1, random_state=42)
# Converting data to PyTorch tensors
X_train_tensor = torch.FloatTensor(X_train.values)
y_train_tensor = torch.FloatTensor(y_train.values).view(-1, 1)
X_test_tensor = torch.FloatTensor(X_test.values)
y_test_tensor = torch.FloatTensor(y_test.values).view(-1, 1)
# Setting random seed for reproducibility
torch.manual_seed(42)
# Defining neural network architecture
model = torch.nn.Sequential(
    torch.nn.Linear(X_train_tensor.shape[1], 128),
    torch.nn.ReLU(),
    torch.nn.Linear(128, 128),
    torch.nn.ReLU(),
    torch.nn.Linear(128, 64),
    torch.nn.ReLU(),
    torch.nn.Linear(64, 32),
    torch.nn.ReLU(),
    torch.nn.Linear(32, 16),
    torch.nn.ReLU(),
    torch.nn.Linear(16, 1),
    torch.nn.ELU()
)
# Loss function and optimizer
criterion = torch.nn.MSELoss()
criterion_val = torch.nn.MSELoss()
optimizer = torch.optim.Adam(model.parameters())
# Training the model
num_epochs = 25
progress = {'loss': [], 'mae': [], 'mse': [], 'val_loss': [], 'val_mae': [], 'val_mse': []}
for epoch in range(num_epochs):
    # Forward pass
    y_predict = model(X_train_tensor)
    loss = criterion(y_predict, y_train_tensor)

    # Backward and optimize
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()

    # Validation
    with torch.no_grad():
        model.eval()
        y_test_predict = model(X_test_tensor)
        loss_val = criterion_val(y_test_predict, y_test_tensor)
        model.train()

    # Record progress
    progress['loss'].append(loss.item())
    progress['mae'].append(mean_absolute_error(y_train_tensor, y_predict.detach().numpy()))
    progress['mse'].append(mean_squared_error(y_train_tensor, y_predict.detach().numpy()))
    progress['val_loss'].append(loss_val.item())
    progress['val_mae'].append(mean_absolute_error(y_test_tensor, y_test_predict.detach().numpy()))
    progress['val_mse'].append(mean_squared_error(y_test_tensor, y_test_predict.detach().numpy()))
    print("Epoch %i/%i - loss: %0.5F" % (epoch, num_epochs, loss.item()))
# Displaying model summary
print(model)
# Plotting training progress
df_progress = pd.DataFrame(progress)
df_progress.plot()
plt.title("Model training progress: DNN PyTorch")
plt.tight_layout()
plt.show()
# Making predictions on the testing set
with torch.no_grad():
    model.eval()
    y_predict_tensor = model(X_test_tensor)
    y_predict = y_predict_tensor.numpy()
# Displaying model performance metrics
print("Model performance metrics: DNN PyTorch")
print("MAX error:", max_error(y_test_tensor, y_predict))
print("MAE error:", mean_absolute_error(y_test_tensor, y_predict))
print("MSE error:", mean_squared_error(y_test_tensor, y_predict, squared=False))
plt.scatter(y_test, y_predict)
plt.scatter(y_test, y_test, marker='.')
plt.title("Model predictions: DNN PyTorch")
plt.tight_layout()
plt.show()
</code></pre>
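<p>One structural difference that commonly accounts for this kind of gap: Keras' <code>model.fit</code> defaults to <code>batch_size=32</code>, so each of its 25 epochs performs many optimizer steps, whereas the loop above computes a single full-batch update per epoch (about 25 updates in total). A rough sketch of what one epoch means to Keras; in PyTorch, <code>torch.utils.data.DataLoader</code> with a matching batch size would play this role inside the training loop:</p>

```python
import random

def minibatches(n, batch_size=32, shuffle=True, seed=0):
    """Yield index batches covering range(n), Keras-fit style."""
    idx = list(range(n))
    if shuffle:
        random.Random(seed).shuffle(idx)
    for start in range(0, n, batch_size):
        yield idx[start:start + batch_size]

# One "epoch" over 1000 samples at Keras' default batch size:
print(len(list(minibatches(1000))))  # 32 optimizer steps, vs. 1 in the full-batch loop
```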
<p><strong>TensorFlow code:</strong></p>
<pre><code># Standard library imports
import pandas as pd
import matplotlib.pyplot as plt
# External library imports
import tensorflow as tf
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import OneHotEncoder, MinMaxScaler, StandardScaler
from sklearn.metrics import max_error, mean_absolute_error, mean_squared_error
# Loading dataset
df_data = pd.read_csv("./data_inverter.csv", names=["pvt", "edge", "slew", "load", "delay"])
# Selecting subset of data based on specific conditions
df_select = df_data[(df_data["pvt"] == "PtypV1500T027") & (df_data["edge"] == "rise")]
# Splitting features and target variable
X = df_select.drop(["pvt", "edge", "delay"], axis='columns')
y = df_select["delay"]
# Scaling input features using Min-Max scaling
slew_scaler = MinMaxScaler()
load_scaler = MinMaxScaler()
X_scaled = X.copy()
X_scaled["slew"] = slew_scaler.fit_transform(X_scaled.slew.values.reshape(-1, 1))
X_scaled["load"] = load_scaler.fit_transform(X_scaled.load.values.reshape(-1, 1))
# Splitting data into training and testing sets
X_train, X_test, y_train, y_test = train_test_split(X_scaled, y, test_size=0.1, random_state=42)
# Converting data to TensorFlow tensors
X_train_tensor = tf.constant(X_train.values, dtype=tf.float32)
y_train_tensor = tf.constant(y_train.values, dtype=tf.float32)
X_test_tensor = tf.constant(X_test.values, dtype=tf.float32)
y_test_tensor = tf.constant(y_test.values, dtype=tf.float32)
# Setting random seed for reproducibility
tf.keras.utils.set_random_seed(42)
# Defining neural network architecture
model = tf.keras.models.Sequential([
    tf.keras.layers.Dense(128, activation='relu', input_dim=X_train_tensor.shape[1]),
    tf.keras.layers.Dense(128, activation='relu'),
    tf.keras.layers.Dense(64, activation='relu'),
    tf.keras.layers.Dense(32, activation='relu'),
    tf.keras.layers.Dense(16, activation='relu'),
    tf.keras.layers.Dense(1, activation='elu')
])
# Compiling the model
model.compile(
loss=tf.keras.losses.MeanSquaredError(), # Using Mean Squared Error loss function
optimizer=tf.keras.optimizers.Adam(), # Using Adam optimizer
metrics=['mae', 'mse'] # Using Mean Absolute Error and Mean Squared Error as metrics
)
# Training the model
progress = model.fit(X_train_tensor, y_train_tensor, validation_data=(X_test_tensor, y_test_tensor), epochs=25)
# Evaluating model performance on the testing set
model.evaluate(X_test_tensor, y_test_tensor, verbose=2)
# Displaying model summary
print(model.summary())
# Plotting training progress
pd.DataFrame(progress.history).plot()
plt.title("Model training progress: DNN TensorFlow")
plt.tight_layout()
plt.show()
# Making predictions on the testing set
y_predict = model.predict(X_test_tensor)
# Displaying model performance metrics
print("Model performance metrics: DNN TensorFlow")
print("MAX error:", max_error(y_test_tensor, y_predict))
print("MAE error:", mean_absolute_error(y_test_tensor, y_predict))
print("MSE error:", mean_squared_error(y_test_tensor, y_predict, squared=False))
plt.scatter(y_test, y_predict)
plt.scatter(y_test, y_test, marker='.')
plt.title("Model predictions: DNN TensorFlow")
plt.tight_layout()
plt.show()
</code></pre>
<p><strong>CLI output of PyTorch model performance metrics after 25 epochs:</strong></p>
<pre><code>Sequential(
(0): Linear(in_features=2, out_features=128, bias=True)
(1): ReLU()
(2): Linear(in_features=128, out_features=128, bias=True)
(3): ReLU()
(4): Linear(in_features=128, out_features=64, bias=True)
(5): ReLU()
(6): Linear(in_features=64, out_features=32, bias=True)
(7): ReLU()
(8): Linear(in_features=32, out_features=16, bias=True)
(9): ReLU()
(10): Linear(in_features=16, out_features=1, bias=True)
(11): ELU(alpha=1.0)
)
Model performance metrics: DNN PyTorch
MAX error: 1.2864852
MAE error: 0.3353702
MSE error: 0.42874745
</code></pre>
<p><strong>CLI output of TensorFlow model performance metrics after 25 epochs:</strong></p>
<pre><code>Model: "sequential"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
dense (Dense) (None, 128) 384
dense_1 (Dense) (None, 128) 16512
dense_2 (Dense) (None, 64) 8256
dense_3 (Dense) (None, 32) 2080
dense_4 (Dense) (None, 16) 528
dense_5 (Dense) (None, 1) 17
=================================================================
Total params: 27777 (108.50 KB)
Trainable params: 27777 (108.50 KB)
Non-trainable params: 0 (0.00 Byte)
_________________________________________________________________
None
6/6 [==============================] - 0s 750us/step
Model performance metrics: DNN TensorFlow
MAX error: 0.013849139
MAE error: 0.0029576812
MSE error: 0.0036013061
</code></pre>
<p>PyTorch training progress:
<a href="https://i.sstatic.net/rUf77Kmk.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/rUf77Kmk.png" alt="PyTorch training progress" /></a></p>
<p>TensorFlow training progress:
<a href="https://i.sstatic.net/QSNWEP3n.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/QSNWEP3n.png" alt="TensorFlow training progress" /></a></p>
<p>PyTorch scatter plot (orange = target against itself, blue = target against prediction):
<a href="https://i.sstatic.net/QSSBUZPn.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/QSSBUZPn.png" alt="PyTorch scatter plot" /></a></p>
<p>TensorFlow scatter plot (orange = target against itself, blue = target against prediction):
<a href="https://i.sstatic.net/wiXtzSzY.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/wiXtzSzY.png" alt="enter image description here" /></a></p>
<p>.</p>
<p>..............................................................................................................................................</p>
<p>Appending additional info (reaction to the questions and comments):</p>
<p><code>torch.optim.Adam</code> - the default learning rate is set to 0.001.
<a href="https://i.sstatic.net/7ckLr4eK.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/7ckLr4eK.png" alt="torch.optim.Adam" /></a></p>
<p><code>tf.keras.optimizers.Adam</code> - the default learning rate is set to 0.001
<a href="https://i.sstatic.net/wiZUZ4UY.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/wiZUZ4UY.png" alt="tf.keras.optimizers.Adam" /></a></p>
<p>.</p>
<p>..............................................................................................................................................</p>
<p>Here's the PyTorch model performance after 250 epochs:</p>
<pre><code>Sequential(
(0): Linear(in_features=2, out_features=128, bias=True)
(1): ReLU()
(2): Linear(in_features=128, out_features=128, bias=True)
(3): ReLU()
(4): Linear(in_features=128, out_features=64, bias=True)
(5): ReLU()
(6): Linear(in_features=64, out_features=32, bias=True)
(7): ReLU()
(8): Linear(in_features=32, out_features=16, bias=True)
(9): ReLU()
(10): Linear(in_features=16, out_features=1, bias=True)
(11): ELU(alpha=1.0)
)
Model performance metrics: DNN PyTorch
MAX error: 0.025619686
MAE error: 0.006687804
MSE error: 0.008531998
</code></pre>
<p><a href="https://i.sstatic.net/LR90rTxd.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/LR90rTxd.png" alt="PyTorch training progress, 250 epochs" /></a>
<a href="https://i.sstatic.net/TzqIwzJj.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/TzqIwzJj.png" alt="PyTorch scatter plot, 250 epochs" /></a></p>
<p>.</p>
<p>..............................................................................................................................................</p>
<p>If you want to reproduce the issue, you can use this code to emulate the dataset:</p>
<pre><code>import csv
import math
x_values = [0.003, 0.00354604, 0.00546274, 0.00912297, 0.0148254, 0.0228266, 0.0333551, 0.0466191, 0.0628111, 0.0821111, 0.104689, 0.130705, 0.160313, 0.193659, 0.230886, 0.272128, 0.317517, 0.36718, 0.42124, 0.479818, 0.54303, 0.61099, 0.683809, 0.761595, 0.844455, 0.932492, 1.02581, 1.1245, 1.22868, 1.33842, 1.45383, 1.57501, 1.70203, 1.835, 1.974]
y_values = [0.001, 0.00102008, 0.00109058, 0.0012252, 0.00143494, 0.00172922, 0.00211646, 0.0026043, 0.00319984, 0.0039097, 0.0047401, 0.00569697, 0.00678594, 0.00801243, 0.00938161, 0.0108985, 0.0125679, 0.0143945, 0.0163828, 0.0185373, 0.0208622, 0.0233618, 0.0260401, 0.028901, 0.0319486, 0.0351866, 0.0386187, 0.0422487, 0.0460802, 0.0501166, 0.0543615, 0.0588182, 0.0634902, 0.0683808, 0.0734931, 0.0788305, 0.0843961, 0.0901929, 0.0962242, 0.102493, 0.109002, 0.115755, 0.122753, 0.130001, 0.137502, 0.145257, 0.153269, 0.161543, 0.170079, 0.178881]
z_values = [[math.sqrt(5*(x+0.25)) * math.sqrt(3*(y+0.005)) for y in y_values] for x in x_values]
with open("./data_inverter.csv", 'w') as fid:
    writer = csv.writer(fid)
    for i in range(len(x_values)):
        for j in range(len(y_values)):
            writer.writerow(["PtypV1500T027", "rise", x_values[i], y_values[j], z_values[i][j]])
</code></pre>
|
<python><tensorflow><machine-learning><pytorch><neural-network>
|
2024-05-14 13:54:26
| 1
| 1,008
|
Boris L.
|
78,478,521
| 9,548,525
|
A regex line to remove whitespaces unless within double quotes, taking into account escaped double quotes
|
<p>I am parsing some game config files using Python and putting it all in dictionaries. It seemed to all work well until I encountered the following edge-case:</p>
<pre><code>random_owned_controlled_state = {
create_unit = {
division = "name = \"6. Belarusian Red Riflemen\" division_template = \"Belarusian Red Riflemen\" start_experience_factor = 0.5"
owner = PREV
}
create_unit = {
division = "name = \"7. Belarusian Red Riflemen\" division_template = \"Belarusian Red Riflemen\" start_experience_factor = 0.5"
owner = PREV
}
}
</code></pre>
<p>I want to remove all spaces, tabs and newlines unless they are within double quotes, and eventually delimit separate statements with semicolons, to get something like this, which is easier to extract information from:</p>
<pre><code>random_owned_controlled_state={create_unit={division="name = \"6. Belarusian Red Riflemen\" division_template = \"Belarusian Red Riflemen\" start_experience_factor = 0.5";owner=PREV;};create_unit={division="name = \"7. Belarusian Red Riflemen\" division_template = \"Belarusian Red Riflemen\" start_experience_factor = 0.5";owner=PREV;};};
</code></pre>
<p>I created this function to achieve this, or so I thought:</p>
<pre><code>def remove_spacers(string: str, brackets: Tuple[str, str], operators: List[str], replacement: str, commentchar: str) -> str:
    """
    Get rid of all white spaces in file and remove or replace with replacement
    :param string: Raw String
    :param brackets: Types of brackets to match
    :param operators: List of operators to check for and remove space around
    :param replacement: What to replace whitespaces at end of statements with
    :param commentchar: Character used to indicate comments
    :return: Cleaned String
    """
    # Strip front and end
    result = string.strip(" \n\t")
    # Remove all comments from text
    result = re.sub(rf"{commentchar}.*", "", result)
    # Remove all whitespace bundles except for inside "" and replace with single space
    result = re.sub(r"\s+(?=([^\"]*\"[^\"]*\")*[^\"]*$)", " ", result)
    # Remove whitespaces around operators (=,<,>)
    for operator in operators:
        result = re.sub(rf"\s*{operator}\s*", rf"{operator}", result)
    # Remove whitespaces after {
    result = re.sub(rf"{brackets[0]}\s", rf"{brackets[0]}", result)
    # Replace whitespaces after anything else with ;
    result = re.sub(r"\s(?=([^\"]*\"[^\"]*\")*[^\"]*$)", replacement, result)
    # Replace any potential multiple consecutive ; with a single ;
    result = re.sub(rf"{replacement}{2,}", replacement, result)
    # Make sure every statement ends with ;
    result = re.sub(rf"(?<!{replacement}){brackets[1]}", rf"{replacement}{brackets[1]}", result)
    if result[-1] != replacement:
        result += replacement
    result = f"{{{result}}}{replacement}"
    return result
</code></pre>
<p>Calling it like:</p>
<p><code>cleaned_text = remove_spacers(text, ('{', '}'), ['=', '<', '>'], ";", "#")</code></p>
<p>I likely need to adjust the following regex line:</p>
<p><code>result = re.sub(r"\s+(?=([^\"]*\"[^\"]*\")*[^\"]*$)", " ", result)</code></p>
<p>But I am unsure how to achieve this behaviour, since I am by no means experienced with regex.</p>
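<p>One possible direction (a sketch assuming quotes are always balanced; <code>squeeze_outside_quotes</code> is an illustrative helper, not a drop-in replacement for the whole function): rather than a lookahead that counts unescaped quotes, tokenize the text into quoted strings and everything else, with a quoted-string pattern that explicitly permits escaped characters, and collapse whitespace only in the unquoted tokens:</p>

```python
import re

# Matches a double-quoted string, allowing any escaped character (e.g. \") inside.
QUOTED = r'"(?:\\.|[^"\\])*"'

def squeeze_outside_quotes(text: str) -> str:
    # re.split with a capture group keeps the quoted tokens at odd indices
    parts = re.split(f'({QUOTED})', text)
    out = []
    for i, part in enumerate(parts):
        if i % 2 == 1:
            out.append(part)                       # quoted token: keep verbatim
        else:
            out.append(re.sub(r'\s+', ' ', part))  # unquoted text: collapse whitespace
    return ''.join(out)
```

<p>The same tokenization could then drive the operator cleanup and the <code>;</code> delimiting on the unquoted parts only.</p>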
<p>Some example desired in- and outputs:</p>
<pre><code># Works
in_1 = """state={
id=49
name="STATE_49" # Ankara
manpower = 810284
resources={
chromium=11 # was: 16
steel = 28
}
state_category = town
history={
owner = TUR
victory_points = {
11747 15
}
victory_points = {
804 5
}
victory_points = {
3951 1
}
buildings = {
infrastructure = 2
arms_factory = 2
industrial_complex = 2
air_base = 3
}
add_core_of = TUR
}
provinces={
804 909 3907 3951 6808 6908 6925 9806 9901 11747 11784
}
local_supplies=7.0
}"""
out_1 = """{state={id=49;name="STATE_49";manpower=810284;resources={chromium=11;steel=28;};state_category=town;history={owner=TUR;victory_points={11747;15;};victory_points={804;5;};victory_points={3951;1;};buildings={infrastructure=2;arms_factory=2;industrial_complex=2;air_base=3;};add_core_of=TUR;};provinces={804;909;3907;3951;6808;6908;6925;9806;9901;11747;11784;};local_supplies=7.0;};};"""
# Works
in_2 = """ focus = {
id = AST_royal_australian_artillery
icon = GFX_goal_generic_army_artillery2
prerequisite = { focus = AST_additional_militia_training }
x = -1
y = 1
relative_position_id = AST_additional_militia_training
cost = 10
ai_will_do = {
factor = 1
}
available = {
}
bypass = {
}
cancel_if_invalid = yes
continue_if_invalid = no
available_if_capitulated = yes
search_filters = { FOCUS_FILTER_RESEARCH }
completion_reward = {
add_tech_bonus = {
name = AST_royal_australian_artillery
bonus = 1.0
uses = 1
category = artillery
}
}
}
"""
out_2 = """{focus={id=AST_royal_australian_artillery;icon=GFX_goal_generic_army_artillery2;prerequisite={focus=AST_additional_militia_training;};x=-1;y=1;relative_position_id=AST_additional_militia_training;cost=10;ai_will_do={factor=1;};available={;};bypass={;};cancel_if_invalid=yes;continue_if_invalid=no;available_if_capitulated=yes;search_filters={FOCUS_FILTER_RESEARCH;};completion_reward={add_tech_bonus={name=AST_royal_australian_artillery;bonus=1.0;uses=1;category=artillery;};};};};"""
# Doesn't work
in_3 = """random_owned_controlled_state = {
create_unit = {
division = "name = \"6. Belarusian Red Riflemen\" division_template = \"Belarusian Red Riflemen\" start_experience_factor = 0.5"
owner = PREV
}
create_unit = {
division = "name = \"7. Belarusian Red Riflemen\" division_template = \"Belarusian Red Riflemen\" start_experience_factor = 0.5"
owner = PREV
}
}"""
out_3 = """{random_owned_controlled_state={create_unit={division="name = \"6. Belarusian Red Riflemen\" division_template = \"Belarusian Red Riflemen\" start_experience_factor = 0.5";owner=PREV;};create_unit={division="name = \"7. Belarusian Red Riflemen\" division_template = \"Belarusian Red Riflemen\" start_experience_factor = 0.5";owner=PREV;};};};"""
</code></pre>
|
<python><regex><parsing><text-processing>
|
2024-05-14 13:46:49
| 1
| 360
|
lrdewaal
|
78,478,409
| 562,697
|
pylint invalid-name warning for override methods
|
<p>I have a Python (3.12.3) script that uses PyQt6 to make a simple GUI. pylint returns the following warning:</p>
<pre><code>C0103: Method name "paintEvent" doesn't conform to snake_case naming style (invalid-name)
</code></pre>
<p>This method is an override (it even has the <code>@override</code> decorator from <code>typing</code>), so I have no choice in the naming convention. I can add <code>pylint: disable=invalid-name</code> to this function; however, that disables that warning for the function definition as well, which is undesirable.</p>
<p>Is there a way to prevent that warning for override methods?</p>
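<p>For reference, the line-scoped suppression looks like this (a minimal sketch with a plain base class standing in for the PyQt6 widget; the <code>ImportError</code> fallback is only there so it runs on interpreters older than 3.12 — pylint scopes a trailing pragma to the line it appears on):</p>

```python
try:
    from typing import override          # available from Python 3.12
except ImportError:                      # fallback for older interpreters
    def override(func):
        return func

class Widget:  # stand-in for the PyQt6 base class
    def paintEvent(self, event):
        return None

class Canvas(Widget):
    @override
    def paintEvent(self, event):  # pylint: disable=invalid-name
        # the trailing pragma suppresses invalid-name for the def line only
        return "painted"
```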
|
<python><pylint>
|
2024-05-14 13:26:23
| 0
| 11,961
|
steveo225
|
78,478,148
| 2,697,895
|
How to replicate the dynamic stdout of a command?
|
<p>I am working on Raspberry Pi OS, and I wrote this Python script to run a command and capture its output. It works fine for commands that output text sequentially, but when I run commands that use dynamic updates, like <code>apt-get update</code> displaying a progress percentage, it fails to capture them. How can I replicate the exact display behavior of the command, as it appears in a real terminal window? The thing is that I don't really use <code>print(line.strip())</code>; instead I send the data over a network socket to another computer and display it there. On the other computer I know how to print the data in the virtual terminal window, but what I don't know is how to read the control characters and how to interpret them. I don't need it to work with complex programs with menus or other such things, just simple commands that show some progress, like <code>apt-get update</code> does.</p>
<pre><code>import subprocess
import select
import os
os.environ['PYTHONUNBUFFERED'] = '1'
process = subprocess.Popen(['apt-get', 'update'], stdout=subprocess.PIPE, stderr=subprocess.STDOUT, universal_newlines=True)
while True:
    line = process.stdout.readline()
    if not line: break
    print(line.strip())
process.wait()
</code></pre>
<p><strong>Update1:</strong></p>
<p>I made another Python script that prints a number, then moves the cursor back to the same position and prints over it...</p>
<pre><code>import time
for i in range(10):
    print(i)
    print('\033[F', end='')
    time.sleep(1)
</code></pre>
<p>Then, when I run that script as a subprocess and read its output in raw byte mode, I get the escape characters I sent... But if I try with the <code>apt-get update</code> command, I receive only text, and not the escape characters that move the cursor to display the package-reading progress in the same location.</p>
<pre><code>import subprocess
import select
import os
os.environ['PYTHONUNBUFFERED'] = '1'
process = subprocess.Popen(['python3', 'cursor.py'], stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
while True:
    ready = select.select([process.stdout], [], [], 0.1)[0]
    if not ready: continue
    B = process.stdout.read(1)
    if len(B) < 1: break
    print(hex(B[0]))
process.wait()
</code></pre>
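<p>One direction worth noting (a sketch, POSIX-only; the function name is illustrative): <code>apt-get</code> checks whether its stdout is a terminal and only emits the cursor-control progress output when it is, which is why a plain pipe yields text only. Running the child under a pseudo-terminal with the standard-library <code>pty</code> module makes it believe it has a terminal, so the raw bytes — control sequences included — can be captured and forwarded over the socket:</p>

```python
import os
import pty
import subprocess

def run_in_pty(cmd):
    """Run cmd with stdout/stderr attached to a pseudo-terminal and
    return the raw bytes, control sequences included."""
    master, slave = pty.openpty()
    proc = subprocess.Popen(cmd, stdout=slave, stderr=slave, close_fds=True)
    os.close(slave)                 # keep only the master end in the parent
    chunks = []
    while True:
        try:
            data = os.read(master, 1024)
        except OSError:             # EIO is raised once the child exits
            break
        if not data:
            break
        chunks.append(data)         # forward these bytes over the socket instead
    proc.wait()
    os.close(master)
    return b"".join(chunks)
```

<p>The receiving side can feed those bytes unmodified into a terminal emulator widget, or interpret the basic sequences (<code>\r</code>, <code>\033[F</code>, ...) itself.</p>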
|
<python><linux><stdout>
|
2024-05-14 12:37:09
| 0
| 3,182
|
Marus Gradinaru
|
78,477,934
| 18,910,865
|
How to pip install from a text file skipping unreachable libraries
|
<p>I'm currently in a scenario where I need to install the following:</p>
<pre class="lang-bash prettyprint-override"><code>pip install -r https://raw.githubusercontent.com/intro-stat-learning/ISLP_labs/v2.1.3/requirements.txt
</code></pre>
<p>What happens is that the library <code>scipy</code> is not available for some reason, resulting in the following error:</p>
<pre class="lang-bash prettyprint-override"><code>(test_islp) PS C:\Users\project> pip install -r https://raw.githubusercontent.com/intro-stat-learning/ISLP_labs/v2.1.3/requirements.txt
Collecting numpy==1.24.2 (from -r https://raw.githubusercontent.com/intro-stat-learning/ISLP_labs/v2.1.3/requirements.txt (line 1))
...
...
ERROR: No matching distribution found for scipy==1.11.1
</code></pre>
<p>Is there a way to simply ignore libraries that return such an error, and still go on with the installation of the remaining libraries?</p>
<p>I tried using <code>--ignore-installed</code> but it still leads to the same error</p>
<p>The <em>quick and dirty</em> solution would be to simply copy and paste the requirements into a text file, delete <code>scipy</code> and perform <code>pip install</code> again. That is not ideal for me because in a scenario with hundreds of libraries and dozens of potential missing distributions it could lead to tedious work.</p>
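<p>One way to automate that quick-and-dirty route (a sketch; <code>install_each</code> and its injectable <code>runner</code> are illustrative helpers, not a pip feature — pip has no built-in skip-on-error flag for <code>-r</code>): invoke pip once per requirement line, so a missing distribution fails only that line:</p>

```python
import subprocess
import sys

def install_each(requirements, runner=None):
    """Install requirements one at a time; return the lines that failed."""
    if runner is None:
        # Default: one real pip invocation per requirement
        runner = lambda req: subprocess.run(
            [sys.executable, "-m", "pip", "install", req]
        ).returncode
    skipped = []
    for req in requirements:
        req = req.strip()
        if not req or req.startswith("#"):
            continue                    # ignore blank lines and comments
        if runner(req) != 0:
            skipped.append(req)         # record the failure and carry on
    return skipped
```

<p>You can feed it the lines of the downloaded requirements file, e.g. <code>urllib.request.urlopen(url).read().decode().splitlines()</code>.</p>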
|
<python><pip><scipy>
|
2024-05-14 11:59:02
| 1
| 522
|
Nauel
|
78,477,709
| 647,852
|
Django-tables2: Populate table with data from different database tables (with model.inheritance)
|
<p>I want to show data from different database tables on a django-tables2 table.</p>
<p>I have several apps with each app having some database tables (with each table in its own file). Project structure:</p>
<pre><code> --project
-all
-overview
</code></pre>
<p>all/models/compound.py</p>
<pre><code>class Compound(models.Model):
    name = models.CharField(max_length = 100, unique = True, verbose_name="Probe")
    dateAdded = models.DateField()

    class Meta:
        app_label = 'all'
<p>all/models/probe.py</p>
<pre><code>class Probe(Compound):
    mechanismOfAction = models.CharField(max_length = 255, verbose_name="Mode of action")

    def get_absolute_url(self):
        return reverse('overview:probe_detail', args=[str(self.id)])

    def __str__(self):
        return f'{self.name}, {self.mechanismOfAction}'

    class Meta:
        app_label = 'all'
<p>all/models/negativeControl.py</p>
<pre><code>class NegativeControl(Compound):
    probeId = models.ForeignKey(Probe, on_delete=models.CASCADE, related_name="controls", verbose_name="Control")

    class Meta:
        app_label = 'all'
<p>overview/models/usageData.py</p>
<pre><code>class UsageData(models.Model):
    cellularUsageConc = models.CharField(max_length = 50, null = True, verbose_name="Recommended concentration")
    probeId = models.ForeignKey(Probe, on_delete=models.CASCADE, related_name="usageData")

    def __str__(self):
        return f'{self.cellularUsageConc}, {self.inVivoUsage}, {self.recommendation}'

    class Meta:
        app_label = 'overview'
<p>all/tables.py</p>
<pre><code>class AllProbesTable(tables.Table):
    Probe = tables.Column(accessor="probe.name", linkify=True)
    control = tables.Column(accessor="probe.controls")
    mechanismOfAction = tables.Column()
    cellularUsageConc = tables.Column(accessor="usageData.cellularUsageConc")

    class Meta:
        template_name = "django_tables2/bootstrap5-responsive.html"
<p>all/views.py</p>
<pre><code>class AllProbesView(SingleTableView):
    table_class = AllProbesTable
    queryset = Probe.objects.all().prefetch_related('usageData', 'controls')
    template_name = "all/probe_list.html"
<p>all/templates/all/probe_list.html</p>
<pre><code> {% load render_table from django_tables2 %}
{% render_table table %}
</code></pre>
<p>The rendered table shows all headers correctly and has data for "Probe" and "Mode of action". For "Control" I get "all.NegativeControl.None", and for "Recommended concentration" I get "-".
Why do I get "-" for "Recommended concentration"?</p>
<p>For a probe there can be more than one control. I would need to create a string from all control names such as control1, control2 to show it in the table. How can I do this?</p>
<p>This <a href="https://stackoverflow.com/questions/38542025/how-to-make-a-join-with-two-tables-with-django-tables2">post</a> was helpful but in my case it did not work.</p>
<p>I use Django 5.0.1 and Python 3.12.1.</p>
|
<python><django><join><django-tables2>
|
2024-05-14 11:18:30
| 1
| 471
|
Natalie
|
78,477,704
| 1,283,984
|
oauth example of bearer jwt token missing validation token belongs to user by not storing mapping on server side?
|
<p>With reference to <a href="https://fastapi.tiangolo.com/tutorial/security/oauth2-jwt/" rel="nofollow noreferrer">https://fastapi.tiangolo.com/tutorial/security/oauth2-jwt/</a>: don't we have to ensure the token belongs to the same user?</p>
<p>I see the code has a login that generates a token (<code>authenticate_user</code>). On the client side it can be stored somewhere (e.g. localStorage in the browser), but it is not mentioned where it is stored on the server side. I would expect it to be stored in a cache/DB for each user or some entity. So in <code>create_user_token</code> we would need to store a user:token mapping in the DB, and in <code>get_current_user</code> we should also have some way to compare the token for that user — compare the request's <code>TokenData.username</code> against the stored one?</p>
<p>So is it left to the user to implement, or am I misunderstanding and it is already implemented in the example?</p>
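<p>The short version of why such tutorials store nothing server-side (a sketch of the principle with stdlib HMAC, not the tutorial's actual JWT code; <code>SECRET</code> and the function names are illustrative): the username is embedded in the signed token, and verifying the signature proves the server issued it for exactly that user — so no user:token mapping is needed, only the signing key:</p>

```python
import base64
import hashlib
import hmac
import json

SECRET = b"demo-secret-key"  # hypothetical; in practice a strong random key

def issue_token(username: str) -> str:
    # The identity travels inside the token itself
    body = base64.urlsafe_b64encode(json.dumps({"sub": username}).encode())
    sig = hmac.new(SECRET, body, hashlib.sha256).hexdigest().encode()
    return (body + b"." + sig).decode()

def current_user(token: str) -> str:
    body, sig = token.encode().rsplit(b".", 1)
    expected = hmac.new(SECRET, body, hashlib.sha256).hexdigest().encode()
    if not hmac.compare_digest(sig, expected):
        raise ValueError("token was not issued by this server")
    return json.loads(base64.urlsafe_b64decode(body))["sub"]
```

<p>What a real deployment adds on top is an expiry claim and, if revocation is required, a server-side denylist — that part is indeed left to you.</p>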
<p>regards,</p>
<p>miten.</p>
|
<python><fastapi>
|
2024-05-14 11:17:12
| 1
| 358
|
Miten
|
78,477,604
| 4,429,265
|
Kafka consumers in FastAPI -> different services, different consumer, same producers
|
<p>I have gone through these questions and some others, but unfortunately, I could not find an answer for my specific case even after reading the linked documents. Therefore, I believe this question is not a duplicate.</p>
<p><a href="https://stackoverflow.com/questions/35561110/can-multiple-kafka-consumers-read-same-message-from-the-partition">Can multiple Kafka consumers read same message from the partition</a></p>
<p><a href="https://stackoverflow.com/questions/45175424/how-multiple-consumer-group-consumers-work-across-partition-on-the-same-topic-in">How multiple consumer group consumers work across partition on the same topic in Kafka?</a></p>
<p><a href="https://stackoverflow.com/questions/46458834/how-does-kafka-achieve-its-parallelism-with-multiple-consumption-on-the-same-top">How does Kafka achieve its parallelism with multiple consumption on the same topic same partition?</a></p>
<h2>My question</h2>
<p>We have a PHP side that handles Kafka producer operations and broadcasts messages related to product insert, update, and delete. This side has one topic named <code>products</code> with three partitions: partition 0 for <code>insert</code> messages, partition 1 for <code>update</code> messages, and partition 2 for <code>delete</code> messages.</p>
<p>On the other hand, we have additional services that need to listen to this topic. Each service performs operations on its own database independently, without relying on other services. All of these services handle their own insert, update, and delete operations. Before discussing my specific challenges, I’d like to mention the following points:</p>
<ul>
<li>I am using FastAPI for my services.</li>
<li>Some of my services utilize Qdrant, while others use MongoDB as their
database.</li>
<li>For my consumer client code, I am using <code>aiokafka</code>.</li>
</ul>
<p>So my challenge is this:</p>
<h3>1- <code>auto-commit</code>: True or False?</h3>
<p>One of the options for configuring the consumer is the <code>auto_commit</code> setting. If I set it to false, all the messages in the partitions will remain as new ones even if my consumers read them. Initially, I thought this was the correct approach since my consumers are independent of each other. If one reads a message more quickly than others, it should not mark it as committed, allowing others to read it as well. However, after reading some questions and documentation, I realized that I may not fully understand the concept of <code>auto_commit</code>. Therefore, I’m unsure whether I should set it to <code>true</code> or <code>false</code> in order to receive messages in all my consumers.</p>
<h3>2- <code>group_id</code>: what is it's role here?</h3>
<p>Now that each service has only one consumer, what should I name their <code>group_id</code>? Should they all have different <code>group_id</code>s? How does it affect the message retrieval scenario here? I mean, do these choices have any effect on how my consumers receive the messages?</p>
<h3>3- Partitions: Should I listen to all of them with one consumer?</h3>
<p>As the partitions on the Kafka producer side carry different action messages, I am not sure whether I should listen to each with a separate consumer, or to all 3 with one single consumer. The <code>msg.header</code> contains the action name (<code>insert, update, or delete</code>), so I can tell what each message is after receiving it.</p>
<p>Overall, I have described my main concerns and the use case here. Any help, tips, or guidance is deeply appreciated.</p>
<p>By the way, this is the test consumer that I have so far:</p>
<pre><code>import sys, json, asyncio
from argparse import ArgumentParser, FileType
from configparser import ConfigParser
from aiokafka import AIOKafkaConsumer, TopicPartition
def encode_json(msg):
    to_load = msg.value.decode("utf-8")
    return json.loads(to_load)
async def main():
    topic_partition = TopicPartition(topic="products", partition=1)
    consumer = AIOKafkaConsumer(
        bootstrap_servers="host:port",
        enable_auto_commit=False,
        group_id="update",
        auto_offset_reset="earliest",
    )
    consumer.assign([topic_partition])
    await consumer.start()
    try:
        async for msg in consumer:
            print(
                msg.topic,
                msg.key.decode("utf-8"),
                msg.headers,
                # json.loads(msg.value()),
            )
    finally:
        await consumer.stop()  # stop() is a coroutine in aiokafka
asyncio.run(main())
</code></pre>
|
<python><apache-kafka><fastapi><aiokafka>
|
2024-05-14 11:01:39
| 1
| 417
|
Vahid
|
78,477,470
| 11,198,558
|
Django form problem on Model object and redirect
|
<p>I'm facing 2 problems when creating a form in Django:</p>
<ol>
<li>Problem on redirect after submitting Form</li>
<li>Model object is not iterable</li>
</ol>
<p>Specifically, I'm creating a feedback form, and all my code is below. There are 4 files, and I use <code>crispy_form</code> to render the form in the HTML file. Everything works fine except after I click the submit button: it cannot return to the same feedback form with blank fields, and it shows the error <code>'BugTypes' object is not iterable</code>.</p>
<p>Please check and help. Thank you!</p>
<pre><code># models.py
from django.db import models

# Create your models here.
class BugTypes(models.Model):
    bugType = models.CharField(max_length=100)

    def __str__(self):
        return self.bugType

class UserFeedback(models.Model):
    username = models.CharField(max_length=100)
    problem = models.CharField(max_length=255)
    bugType = models.ManyToManyField("BugTypes", related_name='userFeedBack')
    feedback = models.TextField()

    def __str__(self):
        return self.problem
</code></pre>
<pre><code># forms.py
from django import forms
from survey.models import UserFeedback, BugTypes

class FeedbackForm(forms.ModelForm):
    username = forms.CharField(
        widget=forms.TextInput(
            attrs={'class': 'form-control'}
        )
    )
    problem = forms.CharField(
        widget=forms.TextInput(
            attrs={'class': 'form-control'}
        )
    )
    bugType = forms.ModelChoiceField(
        queryset=BugTypes.objects.all(),
        widget=forms.Select(
            attrs={'class': 'form-control'}
        ))
    feedback = forms.CharField(
        widget=forms.Textarea(
            attrs={'class': 'form-control'}
        ))

    class Meta:
        model = UserFeedback
        fields = "__all__"
</code></pre>
<pre><code># views.py
from django.shortcuts import render
from django.urls import reverse
from django.views.generic.edit import CreateView
from survey.forms import FeedbackForm
from survey.models import UserFeedback

# Create your views here.
class UserFeedbackView(CreateView):
    model = UserFeedback
    form_class = FeedbackForm
    success_url = '/survey/feedback/'

userFeedback_view = UserFeedbackView.as_view(template_name="survey/user_feedback.html")
</code></pre>
<pre><code># urls.py
from django.urls import path
from survey.views import userFeedback_view

app_name = 'survey'

urlpatterns = [
    path('feedback', view=userFeedback_view, name='userFeedback')
]
</code></pre>
|
<python><django>
|
2024-05-14 10:33:58
| 0
| 981
|
ShanN
|
78,477,442
| 12,789,602
|
Dynamic update of add_qty parameter in Odoo eCommerce product details page
|
<p>I'm customizing an Odoo eCommerce website and I need to modify the add_qty parameter in the product details page dynamically via a URL parameter. I'm extending the WebsiteSale class to achieve this.</p>
<p>Here's my code snippet:</p>
<pre><code>class WebsiteSaleCustom(WebsiteSale):
    def _prepare_product_values(self, product, category, search, **kwargs):
        values = super()._prepare_product_values(product, category, search, **kwargs)
        values['add_qty'] = int(kwargs.get('add_qty', 1))
        return values
</code></pre>
<p>This solution works as expected upon the first load of the product details page. However, the issue arises when attempting to dynamically update the add_qty parameter.</p>
<p>Suppose I hit this link:</p>
<p><code>http://localhost:8069/shop/black-shoe?add_qty=5</code></p>
<p>The quantity will be set to 5. The first time, it is set to 5, but if I then change the quantity, it does not change in the quantity field; it remains 5.</p>
<p>Any insights or suggestions on how to achieve this would be greatly appreciated. Thank you for your help!</p>
|
<python><odoo><e-commerce><odoo-16><odoo-website>
|
2024-05-14 10:29:12
| 1
| 552
|
Bappi Saha
|
78,477,171
| 2,006,674
|
Python testcontainers initialise Postgres with SQL file?
|
<p>How do I initialise Postgres with an SQL file?</p>
<p>I have found <a href="https://github.com/testcontainers/testcontainers-java/discussions/4841" rel="nofollow noreferrer">https://github.com/testcontainers/testcontainers-java/discussions/4841</a>, but <code>with_init_script</code> is not available in the Python version.</p>
|
<python><postgresql><testcontainers>
|
2024-05-14 09:42:09
| 1
| 7,392
|
WebOrCode
|
78,477,073
| 11,251,938
|
Annotate django queryset based on related field attributes
|
<p>Suppose you have this models structure in a django project</p>
<pre><code>from django.db import models

class Object(models.Model):
    name = models.CharField(max_length=100)

class ObjectEvent(models.Model):
    class EventTypes(models.IntegerChoices):
        CREATED = 1, "Created"
        SCHEDULED = 2, "Scheduled"
        COMPLETED = 3, "Completed"
        CANCELED = 4, "Canceled"

    event_type = models.IntegerField(choices=EventTypes.choices)
    object = models.ForeignKey(Object, on_delete=models.CASCADE, related_name="events")
</code></pre>
<p>I now want to access the derived property <code>ended</code>, which is defined as true for every object that has an event of type <em>COMPLETED</em> or <em>CANCELED</em>. I already did that with the <code>@property</code> decorator, but I want to be able to filter using the <code>ended</code> attribute, so I'm trying to implement this through the <code>annotate</code> queryset method.</p>
<pre><code>class ObjectManager(models.Manager):
    def get_queryset(self):
        qs = super().get_queryset()
        qs = qs.annotate(
            ended=models.Case(
                models.When(
                    events__event_type__in=(
                        ObjectEvent.EventTypes.COMPLETED,
                        ObjectEvent.EventTypes.CANCELED,
                    ),
                    then=models.Value(True),
                ),
                default=models.Value(False),
                output_field=models.BooleanField(
                    verbose_name="Object ended",
                ),
            )
        )
        return qs

class Object(models.Model):
    objects = ObjectManager()
    name = models.CharField(max_length=100)
</code></pre>
<p><strong>fake_data.json</strong></p>
<pre><code>[
{ "model": "main.object", "pk": 1, "fields": { "name": "task1" } },
{ "model": "main.object", "pk": 2, "fields": { "name": "task2" } },
{ "model": "main.object", "pk": 3, "fields": { "name": "task3" } },
{ "model": "main.object", "pk": 4, "fields": { "name": "task4" } },
{
"model": "main.objectevent",
"pk": 1,
"fields": { "event_type": 1, "object": 1 }
},
{
"model": "main.objectevent",
"pk": 2,
"fields": { "event_type": 2, "object": 1 }
},
{
"model": "main.objectevent",
"pk": 3,
"fields": { "event_type": 4, "object": 1 }
},
{
"model": "main.objectevent",
"pk": 4,
"fields": { "event_type": 1, "object": 2 }
},
{
"model": "main.objectevent",
"pk": 5,
"fields": { "event_type": 1, "object": 3 }
},
{
"model": "main.objectevent",
"pk": 6,
"fields": { "event_type": 2, "object": 3 }
},
{
"model": "main.objectevent",
"pk": 7,
"fields": { "event_type": 3, "object": 3 }
},
{
"model": "main.objectevent",
"pk": 8,
"fields": { "event_type": 1, "object": 4 }
},
{
"model": "main.objectevent",
"pk": 9,
"fields": { "event_type": 2, "object": 4 }
}
]
</code></pre>
<p>Now, trying with this fake data, I get a strange result in the manage.py shell:</p>
<pre><code>>>> ended_objects = Object.objects.filter(ended=True)
>>> ended_objects.count()
2 # this is fine
>>> not_ended_objects = Object.objects.filter(ended=False)
>>> not_ended_objects.count()
7 # why?
>>> not_ended_objects.distinct().count()
4 # Even using distinct doesn't resolve the problem
</code></pre>
<p>What am I missing?</p>
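What's happening can be reproduced with a plain SQL join: <code>annotate()</code> over a reverse foreign key joins one row per related event, so a single object contributes one <code>ended = False</code> row for every non-ending event it has. A stdlib-only sketch of the equivalent query on the fake data (the usual ORM fix is a subquery, e.g. <code>models.Exists</code>, instead of the join-based <code>Case</code>/<code>When</code>):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE object(id INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE event(id INTEGER PRIMARY KEY, object_id INTEGER, event_type INTEGER);
INSERT INTO object VALUES (1,'task1'),(2,'task2'),(3,'task3'),(4,'task4');
INSERT INTO event VALUES (1,1,1),(2,1,2),(3,1,4),(4,2,1),(5,3,1),(6,3,2),(7,3,3),(8,4,1),(9,4,2);
""")

# annotate() over the reverse FK builds roughly this JOIN: one output row
# per event, so one object with several non-ending events contributes
# several "ended = 0" rows
rows = con.execute("""
SELECT o.id, CASE WHEN e.event_type IN (3, 4) THEN 1 ELSE 0 END AS ended
FROM object o LEFT JOIN event e ON e.object_id = o.id
""").fetchall()

not_ended = [r for r in rows if r[1] == 0]
print(len(not_ended))  # 7, matching the surprising filter(ended=False) count
```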
|
<python><django><django-queryset>
|
2024-05-14 09:27:32
| 1
| 929
|
Zeno Dalla Valle
|
78,476,875
| 1,942,868
|
use dynamic string for model key in django
|
<p>For example, I have a table like this:</p>
<pre><code>class FormSelector(models.Model):
prefs = models.JSONField(default=dict,null=True, blank=True)
items = models.JSONField(default=dict,null=True, blank=True)
</code></pre>
<p>then in views, I want to do like this,</p>
<pre><code>json = {"prefs":[1,2],"items":[2,3,4]}
mo = FormSelector.objects.last()
for key in json:  # key is the string "prefs" or "items"
    mo.{key} = json[key]  # I want the equivalent of mo.prefs = ..., mo.items = ...
</code></pre>
<p>Is there any good method to do this?</p>
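A minimal sketch of the usual approach, Python's built-in <code>setattr</code>, shown here on a plain stand-in class rather than a real Django model (so the names are illustrative):

```python
class FormSelector:
    """Stand-in for the Django model; a real one would subclass models.Model."""
    prefs = None
    items = None

payload = {"prefs": [1, 2], "items": [2, 3, 4]}
mo = FormSelector()
for key, value in payload.items():
    setattr(mo, key, value)   # equivalent to mo.prefs = ..., mo.items = ...

print(mo.prefs, mo.items)
```

On a real model instance the same loop works unchanged, because Django model fields are ordinary instance attributes; you would call `mo.save()` afterwards.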
|
<python><django><model>
|
2024-05-14 08:52:24
| 1
| 12,599
|
whitebear
|
78,476,844
| 13,200,217
|
Using singleton instead of using context property breaks translation
|
<p>I have a Python-QML communication setup using <code>setContextProperty</code> and I am trying to move to a setup that can be parsed by <code>qmllint</code>. As described in the docs (<a href="https://doc.qt.io/qt-6/qtqml-cppintegration-contextproperties.html" rel="nofollow noreferrer">link</a>) using a context property is invisible to all tooling including qmllint and recommends using a singleton instead. Indeed even the Qt6 book recommends exposing classes instead of objects by moving from <code>setContextProperty</code> to using <code>qmlRegisterType</code>.</p>
<p>Additionally, using the <code>@QmlElement</code> (<a href="https://doc.qt.io/qtforpython-6/PySide6/QtQml/QmlElement.html" rel="nofollow noreferrer">doc</a>) seems like an even cleaner way to do the same thing as <code>qmlRegisterType</code>.</p>
<p>The class I am exposing is an API through which I want to be able to set the application language. Using the old approach with <code>setContextProperty</code>, it works and translates the entire window instantly:</p>
<pre class="lang-py prettyprint-override"><code>class MyAPI(QObject):
def __init__(self, engine):
super().__init__()
self._engine = engine
... register some other properties
@QtCore.Slot(str)
def retranslate(self, language=None):
...instantiate translator
        app = QtGui.QGuiApplication.instance()
if self._translator:
app.removeTranslator(self._translator)
app.installTranslator(newTranslator)
if self._translator:
self._engine.retranslate()
self._translator = newTranslator
... then in my main function
... instantiate self.app & set some properties
engine = QQmlApplicationEngine(parent=self.app)
api = MyAPI(engine)
api.retranslate()
engine.rootContext().setContextProperty("_api", api)
engine.load("mymain.qml")
return self.app.exec()
</code></pre>
<p>I can then simply call <code>_api.retranslate</code> anywhere in my QML and it works.</p>
<p>After moving to using the @QmlElement decorator with @QmlSingleton, I cannot pass the engine as a constructor argument so I made it a static variable that gets initialized once when needed. These are the parts that are different:</p>
<pre class="lang-py prettyprint-override"><code>QML_IMPORT_NAME='io.qt.myapi'
QML_IMPORT_MAJOR_VERSION=1
@QmlSingleton
@QmlElement
class MyAPI(QObject):
_engine=None
def __init__(self):
super().__init__()
... register some other properties
@QtCore.Slot(str)
def retranslate(self, language=None):
... same as before
app.installTranslator(newTranslator)
if self._engine is None:
self._engine = QQmlApplicationEngine(parent=app)
if self._translator:
self._engine.retranslate()
self._translator = newTranslator
... then in my main function
... instantiate self.app & set some properties
engine = QQmlApplicationEngine(parent=self.app)
api = MyAPI()
api.retranslate()
engine.load("mymain.qml")
return self.app.exec()
</code></pre>
<p>This should then be used by QML as:</p>
<pre><code>import io.qt.myapi
Item {
MyAPI {
id: _api
}
... call _api.retranslate as before
</code></pre>
<p>However in this case translation is no longer instant. It seems to be quite messy, it only changes some strings in the application & only takes full effect after a restart. How can I recover the previous behaviour where translation was instant & complete, while keeping this <code>@QmlElement</code> approach which allows static analysis?</p>
|
<python><qt><qml><pyside>
|
2024-05-14 08:46:02
| 0
| 353
|
Andrei Miculiță
|
78,476,370
| 1,910,555
|
How to create a "Split-Bars" plot in Python with matplotlib?
|
<p>In the <code>datawrapper.de</code> visualizations service a "Split-Bars" plot is available; and I would like to recreate that visualization plot type (programmatically) in Python using matplotlib (or similar library).</p>
<p>I have a dataset as follows:</p>
<pre><code>A B C D
0.52072 0.08571 0.70141 0.01849
0.46156 0.47112 0.92709 0.24230
0.82056 0.07003 0.14328 0.79489
0.64635 0.72227 0.12274 0.06919
0.83555 0.50729 0.65337 0.62428
0.38232 0.19952 0.24025 0.47434
0.97861 0.53296 0.43911 0.40135
0.93070 0.73063 0.06899 0.00429
0.08514 0.98483 0.80090 0.27527
0.93412 0.05890 0.68416 0.81203
0.78269 0.55302 0.30861 0.19934
</code></pre>
<p>I want to produce a plot, such as:
<a href="https://i.sstatic.net/A2pYxNQ8.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/A2pYxNQ8.png" alt="enter image description here" /></a></p>
<h2>How can I recreate this <code>Split-Bar</code> plot type in <code>matplotlib</code>?</h2>
<hr />
<h3>Steps taken to find a solution:</h3>
<ul>
<li>No related questions were found on SO.</li>
<li>No google search returns open source code or library with this plot type to reproduce the plot.</li>
<li>Using a generative AI service did not produce the correct output.</li>
<li>Begin to find a solution, define the problem and specify a solution:</li>
</ul>
<p><em><strong>Problem Description:</strong></em> The structure appears to be a table, with each cell containing a shaded area filled to a given percentage. Axis lines are removed. Value-text colors change depending on the background color, and value-text positions change depending on the % value.</p>
<p><em><strong>Attempts:</strong></em> I tried <code>ax.add_patch(Rectangle(...))</code>, which seems to be the ideal choice for the outer 'legend' color patches and the inner <code>ax</code> %-filled color-patches.</p>
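As a proof of concept, under the assumption that each cell is just a faint full-width "track" with a value bar drawn on top, a split-bars layout can be sketched with <code>barh</code> alone (the palette, sizes, and the use of the first three data rows are arbitrary placeholders):

```python
import matplotlib
matplotlib.use("Agg")            # headless backend for scripted use
import matplotlib.pyplot as plt
import numpy as np

# first three rows of the sample data
data = np.array([[0.52072, 0.08571, 0.70141, 0.01849],
                 [0.46156, 0.47112, 0.92709, 0.24230],
                 [0.82056, 0.07003, 0.14328, 0.79489]])
groups = list("ABCD")
colors = ["#4e79a7", "#f28e2b", "#59a14f", "#e15759"]  # placeholder palette

fig, ax = plt.subplots(figsize=(7, 2))
n_rows, n_cols = data.shape
for r in range(n_rows):
    for c in range(n_cols):
        y = n_rows - 1 - r                     # first data row on top
        x0 = c * 1.1                           # gap between column groups
        # faint full-width "track", then the value bar drawn over it
        ax.barh(y, 1.0, left=x0, height=0.6, color=colors[c], alpha=0.15)
        ax.barh(y, data[r, c], left=x0, height=0.6, color=colors[c])
        ax.text(x0 + 0.02, y, f"{data[r, c]:.0%}", va="center", fontsize=8)
ax.set_xticks([c * 1.1 + 0.5 for c in range(n_cols)])
ax.set_xticklabels(groups)
ax.set_yticks([])
for spine in ax.spines.values():
    spine.set_visible(False)
fig.savefig("split_bars.png")
```

Conditional text color and text placement (inside vs. outside the bar) could then be keyed off each cell's value, as described above.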
|
<python><matplotlib><visualization><diagram>
|
2024-05-14 07:16:56
| 1
| 2,650
|
pds
|
78,476,342
| 15,148,870
|
Django - Adding more options to AdminTimeWidget
|
<p>I tried to add more time choices to <code>AdminTimeWidget</code> by overriding <code>DateTimeShortcuts</code>, referring to <a href="https://stackoverflow.com/questions/5770973/django-how-to-change-the-choices-of-admintimewidget">this post</a> and other similar posts on SO. My problem is that I am getting <code>Uncaught ReferenceError: DateTimeShortcuts is not defined</code> and <code>jQuery.Deferred exception: DateTimeShortcuts is not defined ReferenceError: DateTimeShortcuts is not defined at HTMLDocument</code> errors in the console. I am new to Django and do not understand why I am getting this error. Here is how I implemented it:</p>
<pre><code>(function($) {
$(document).ready(function() {
DateTimeShortcuts.clockHours.default_ = [
['16:30', 16.5],
['17:30', 17.5],
['18:00', 18],
['19:00', 19],
['20:00', 20],
];
DateTimeShortcuts.handleClockQuicklink = function (num, val) {
let d;
if (val == -1) {
d = DateTimeShortcuts.now();
} else {
const h = val | 0;
const m = (val - h) * 60;
d = new Date(1970, 1, 1, h, m, 0, 0);
}
DateTimeShortcuts.clockInputs[num].value = d.strftime(get_format('TIME_INPUT_FORMATS')[0]);
DateTimeShortcuts.clockInputs[num].focus();
DateTimeShortcuts.dismissClock(num);
};
});
})(jQuery);
</code></pre>
<p>and in my ModelAdmin I included this JS file in Media class as below:</p>
<pre><code>@admin.register(models.MyModel)
class MyAdminModel():
// other stuff
list_filter = [("created", custom_titled_datetime_range_filter("By created"))]
class Media:
js = ("admin/js/DateTimeShortcuts.js",)
</code></pre>
<p>Also I have a custom datetime range filter implemented as:</p>
<pre><code>def custom_titled_datetime_range_filter(title):
class CustomDateTimeRangeFilter(DateTimeRangeFilter):
def __init__(self, field, request, params, model, model_admin, field_path):
super().__init__(field, request, params, model, model_admin, field_path)
self.title = title
return CustomDateTimeRangeFilter
</code></pre>
|
<python><django><django-rest-framework>
|
2024-05-14 07:11:47
| 1
| 328
|
Saidamir
|
78,476,300
| 4,219,264
|
Pip install fails for pandas
|
<p>I am trying to install pandas on a Windows machine but get the following output:</p>
<pre><code>python -m pip install pandas
Collecting pandas
Using cached pandas-2.2.2.tar.gz (4.4 MB)
Installing build dependencies ... done
Getting requirements to build wheel ... done
Installing backend dependencies ... done
Preparing metadata (pyproject.toml) ... error
error: subprocess-exited-with-error
× Preparing metadata (pyproject.toml) did not run successfully.
│ exit code: 1
╰─> [13 lines of output]
+ meson setup C:\Users\serba\AppData\Local\Temp\pip-install-6cc_jg_i\pandas_90906587555b4b9d852626b7159094db C:\Users\serba\AppData\Local\Temp\pip-install-6cc_jg_i\pandas_90906587555b4b9d852626b7159094db\.mesonpy-6bapjes7\build -Dbuildtype=release -Db_ndebug=if-release -Db_vscrt=md --vsenv --native-file=C:\Users\serba\AppData\Local\Temp\pip-install-6cc_jg_i\pandas_90906587555b4b9d852626b7159094db\.mesonpy-6bapjes7\build\meson-python-native-file.ini
The Meson build system
Version: 1.2.1
Source dir: C:\Users\serba\AppData\Local\Temp\pip-install-6cc_jg_i\pandas_90906587555b4b9d852626b7159094db
Build dir: C:\Users\serba\AppData\Local\Temp\pip-install-6cc_jg_i\pandas_90906587555b4b9d852626b7159094db\.mesonpy-6bapjes7\build
Build type: native build
Project name: pandas
Project version: 2.2.2
Activating VS 15.9.60
..\..\meson.build:2:0: ERROR: Value "c11" (of type "string") for combo option "C language standard to use" is not one of the choices. Possible choices are (as string): "none", "c89", "c99", "gnu89", "gnu90", "gnu9x", "gnu99".
A full log can be found at C:\Users\serba\AppData\Local\Temp\pip-install-6cc_jg_i\pandas_90906587555b4b9d852626b7159094db\.mesonpy-6bapjes7\build\meson-logs\meson-log.txt
[end of output]
note: This error originates from a subprocess, and is likely not a problem with pip.
error: metadata-generation-failed
× Encountered error while generating package metadata.
╰─> See above for output.
note: This is an issue with the package mentioned above, not pip.
hint: See above for details.
</code></pre>
<p>I tried the following:</p>
<ul>
<li>running the terminal as administrator</li>
<li>updating pip</li>
<li>installing older versions of pandas because I read there is some issue with the latest one</li>
<li>looking inside the log file from the output, but it doesn't exist anymore after the process finishes</li>
</ul>
<p>The output is still the same.</p>
<p>I see it complaining about a C standard. Do I need a specific compiler to be installed or any other prerequisites?</p>
|
<python><pandas><pip>
|
2024-05-14 07:04:18
| 2
| 3,966
|
Serban Stoenescu
|
78,476,128
| 7,194,569
|
connect to linked service in Azure Data factory using robot framework
|
<p>How can I connect to a <strong>linked service</strong> in Azure Data Factory and run a query there using Robot Framework?</p>
|
<python><python-3.x><robotframework><automation-testing>
|
2024-05-14 06:28:45
| 0
| 397
|
Nikhil
|
78,476,039
| 22,213,065
|
Just keep color range area in images using python
|
<p>I have a large number of JPG images in a specific folder, and I want to keep only the <code>#c7d296</code> color in each image and fill all other remaining areas with <code>white</code>.<br />
I can't use Photoshop for this, because with so many images (about <code>29000 JPG images</code>) it would take a lot of time!<br />
Instead I want to replicate the <code>color range tool</code> in a Python script.</p>
<p>my images are like following sample:<br />
<a href="https://i.sstatic.net/3KPrGAEl.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/3KPrGAEl.jpg" alt="enter image description here" /></a></p>
<p>I wrote following script for this process:</p>
<pre><code>import cv2
import os
import numpy as np
import keyboard
def keep_color_only(input_file, output_directory, color_range, fuzziness):
try:
# Read the input image
img = cv2.imread(input_file)
# Convert image to HSV color space
hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
# Define lower and upper bounds for the color range
lower_color = np.array(color_range[0])
upper_color = np.array(color_range[1])
# Threshold the HSV image to get only desired colors
mask = cv2.inRange(hsv, lower_color, upper_color)
# Invert the mask
mask_inv = cv2.bitwise_not(mask)
# Create a white background image
white_background = np.full_like(img, (255, 255, 255), dtype=np.uint8)
# Combine the original image with the white background using the mask
result = cv2.bitwise_and(img, img, mask=mask)
result = cv2.bitwise_or(result, white_background, mask=mask_inv)
# Output file path
output_file = os.path.join(output_directory, os.path.basename(input_file))
# Save the resulting image
cv2.imwrite(output_file, result)
except Exception as e:
print(f"Error processing {input_file}: {str(e)}")
def process_images(input_directory, output_directory, color_range, fuzziness):
# Create output directory if it doesn't exist
if not os.path.exists(output_directory):
os.makedirs(output_directory)
# Process each JPG file in the input directory
for filename in os.listdir(input_directory):
if filename.lower().endswith('.jpg'):
input_file = os.path.join(input_directory, filename)
keep_color_only(input_file, output_directory, color_range, fuzziness)
# Check for 'F7' key press to stop the process
if keyboard.is_pressed('F7'):
print("Process stopped by user.")
return
def main():
input_directory = r'E:\Desktop\inf\CROP'
output_directory = r'E:\Desktop\inf\OUTPUT'
# Color range in HSV format
color_range = [(75, 90, 160), (95, 255, 255)] # Lower and upper bounds for HSV color range
fuzziness = 80
process_images(input_directory, output_directory, color_range, fuzziness)
print("Color removal completed.")
if __name__ == "__main__":
main()
</code></pre>
<p>Note that the <code>fuzziness</code> of the color range must be set to 80.<br />
The script runs without errors, but the output images are filled entirely with white. That is, the output images are just an empty white screen and <code>no #c7d296 color area is kept</code>!</p>
<p>Where is the problem in my script?</p>
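One thing worth checking with the standard library alone: converting <code>#c7d296</code> to OpenCV's HSV scale (hue 0–179, saturation/value 0–255) suggests a hue near 35 and a saturation near 73, both below the lower bound of the <code>[(75, 90, 160), (95, 255, 255)]</code> range in the script, which would make the mask empty and every output image white. A quick sketch of that check:

```python
import colorsys

# the target color #c7d296, as RGB in [0, 1]
r, g, b = 0xC7 / 255, 0xD2 / 255, 0x96 / 255
h, s, v = colorsys.rgb_to_hsv(r, g, b)

# rescale to OpenCV's convention: hue in [0, 179] (i.e. degrees / 2),
# saturation and value in [0, 255]
print(h * 180, s * 255, v * 255)  # roughly 35.5, 72.9, 210.0
```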
|
<python><opencv><photoshop>
|
2024-05-14 06:08:47
| 1
| 781
|
Pubg Mobile
|
78,475,685
| 3,102,968
|
Why is the path not resolving to an absolute path for the data_dir that is passed as method parameter?
|
<p>I have the following project structure in a Python project:</p>
<pre><code>> nn-project
-.env
- data
- raw
- boston_housing_price
> - src
> - models
> - bird-model
> - env.py
> - train_model.py
</code></pre>
<p>I have in my .env file, the following:</p>
<pre><code>PROJECT_ROOT_FOLDER = ../
</code></pre>
<p>In my env.py, I do the following:</p>
<pre><code>project_root = os.environ.get('PROJECT_ROOT_FOLDER')
if not project_root:
raise ValueError("PROJECT_ROOT environment variable is not set.")
absolute_path = os.path.abspath(project_root)
data_dir = Path(os.path.join(absolute_path, 'data/raw/boston_housing_price/'))
models_dir = Path(os.path.join(absolute_path, 'models/boston_housing_price/'))
print('***************** LOAD ENVIRONMENT ********************+')
print("Project Root DIR", project_root)
print("Project Root DIR abs", absolute_path)
print("Project Data DIR", data_dir)
print("Models Dump DIR", models_dir)
print('***************** LOAD ENVIRONMENT ********************+')
</code></pre>
<p>I get to see the following printed:</p>
<pre><code>***************** LOAD ENVIRONMENT ********************+
Project Root DIR ../nn-project/
Project Root DIR abs /home/user/Projects/Private/ml-projects/nn-project
Project Data DIR /home/user/Projects/Private/ml-projects/nn-project/data/raw/boston_housing_price
Models Dump DIR /home/user/Projects/Private/ml-projects/nn-project/models/boston_housing_price
***************** LOAD ENVIRONMENT ********************+
</code></pre>
<p>I then have the following method in train_model.py that is supposed to load the dataset:</p>
<pre><code>def load_data(data_dir):
print(data_dir)
# Check if the dataset file exists in the data directory
dataset_file = Path(os.path.join(data_dir, env.boston_dataset))
print(dataset_file)
if os.path.exists(dataset_file):
# If the dataset file exists, load it directly
raw_df = pd.read_csv(dataset_file, sep="\s+", skiprows=22, header=None)
else:
# If the dataset file doesn't exist, fetch it from the URL
response = requests.get(env.boston_dataset_url)
if response.status_code == 200:
# Parse the CSV data from the response content
csv_data = response.text
raw_df = pd.read_csv(StringIO(csv_data), sep="\s+", skiprows=22, header=None)
# Save the dataset to the data directory for future use
raw_df.to_csv(dataset_file, index=False)
else:
print("Failed to fetch data from URL:", env.boston_dataset_url)
return None
return raw_df
</code></pre>
<p>I call it like this:</p>
<pre><code>boston = train_model.load_data(data_dir=env.data_dir)
</code></pre>
<p>But it fails when I run it with the message:</p>
<pre><code>OSError: Cannot save file into a non-existent directory: '../nn-project/data/raw/boston_housing_price'
</code></pre>
<p>The question is: why is the full path of the <code>data_dir</code> that I pass in as a parameter not respected inside the method?</p>
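A sketch of the likely cause: a relative value like <code>../</code> read from the <code>.env</code> file is resolved against the current working directory at the moment <code>abspath()</code> runs, not against the <code>.env</code> file's location, so running the script from a different directory produces a different (and possibly non-existent) root. The names below are illustrative:

```python
import os
import pathlib
import tempfile

rel = "../"                          # the PROJECT_ROOT_FOLDER value
start = pathlib.Path.cwd()
resolved_from_start = (start / rel).resolve()

os.chdir(tempfile.gettempdir())      # simulate launching from elsewhere
resolved_from_tmp = (pathlib.Path.cwd() / rel).resolve()
os.chdir(start)                      # restore the working directory

# the same "../" yields different absolute paths depending on the CWD
print(resolved_from_start)
print(resolved_from_tmp)
```

A common CWD-independent alternative is to anchor on the module's own location, e.g. `pathlib.Path(__file__).resolve().parents[2]` inside env.py (the index depends on the file's depth in the project).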
|
<python>
|
2024-05-14 04:13:40
| 0
| 15,565
|
joesan
|
78,475,657
| 2,507,197
|
Asyncio multiprocessing communication with queues - only one coroutine running
|
<p>I have a manager script that launches some processes, then uses two coroutines (one to monitor, one to gather results). For some reason only one coroutine seems to run; what am I missing? (I don't normally work with asyncio.)</p>
<p><a href="https://i.sstatic.net/3K96CkFl.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/3K96CkFl.png" alt="enter image description here" /></a></p>
<pre class="lang-py prettyprint-override"><code>import multiprocessing as mp
import time
import asyncio
import logging
logging.basicConfig(level=logging.DEBUG)
class Process(mp.Process):
def __init__(self, task_queue: mp.Queue, result_queue: mp.Queue):
super().__init__()
self.task_queue = task_queue
self.result_queue = result_queue
logging.info('Process init')
def run(self):
while not self.task_queue.empty():
try:
task = self.task_queue.get(timeout=1)
except mp.Queue.Empty:
logging.info('Task queue is empty')
break
time.sleep(1)
logging.info('Processing task %i (pid %i)', task, self.pid)
self.result_queue.put(task)
logging.info('Process run')
class Manager:
def __init__(self):
self.processes = []
self.task_queue = mp.Queue()
self.result_queue = mp.Queue()
self.keep_running = True
async def monitor(self):
while self.keep_running:
await asyncio.sleep(0.1)
logging.info('Task queue size: %i', self.task_queue.qsize())
logging.info('Result queue size: %i', self.result_queue.qsize())
self.keep_running = any([p.is_alive() for p in self.processes])
async def consume_results(self):
while self.keep_running:
try:
result = self.result_queue.get()
except mp.Queue.Empty:
logging.info('Result queue is empty')
continue
logging.info('Got result: %s', result)
def start(self):
# Populate the task queue
for i in range(10):
self.task_queue.put(i)
# Start the processes
for i in range(3):
p = Process(self.task_queue, self.result_queue)
p.start()
self.processes.append(p)
# Wait for the processes to finish
loop = asyncio.get_event_loop()
loop.create_task(self.monitor())
loop.create_task(self.consume_results())
manager = Manager()
manager.start()
</code></pre>
<ul>
<li>expecting to see the monitor queue sizes, however only the <code>consume_results()</code> is run</li>
</ul>
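The blocking <code>self.result_queue.get()</code> call in <code>consume_results()</code> never yields to the event loop, which starves the <code>monitor()</code> coroutine. A minimal sketch of the non-blocking polling pattern, shown with a plain <code>queue.Queue</code> and an in-process producer standing in for the worker processes:

```python
import asyncio
import queue

q = queue.Queue()   # stand-in for the multiprocessing result queue
results = []
ticks = 0           # counts iterations of the monitor coroutine

async def produce():
    # stand-in for worker processes filling the queue over time
    for i in range(3):
        await asyncio.sleep(0.02)
        q.put(i)

async def consume():
    # get_nowait() plus an await yields control back to the event loop;
    # a bare q.get() would block the whole loop and starve other coroutines
    while len(results) < 3:
        try:
            results.append(q.get_nowait())
        except queue.Empty:
            await asyncio.sleep(0.005)

async def monitor():
    global ticks
    while len(results) < 3:
        await asyncio.sleep(0.005)
        ticks += 1

async def main():
    await asyncio.gather(produce(), consume(), monitor())

asyncio.run(main())
print(results, ticks)
```

Because the monitor keeps ticking while the consumer waits, both coroutines visibly run; with a blocking `get()`, `ticks` would stay at 0.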
|
<python><python-asyncio>
|
2024-05-14 04:01:48
| 1
| 3,494
|
Alter
|
78,475,581
| 14,056,352
|
Algorithm to filter the address from a large text
|
<p>I am trying to grab part of this string: I want the match to start at the first digit in the string and extend all the way to the five-digit sequence at the end.</p>
<pre><code>import re
string = "['Today is the open house of 1234 High Drive, Denver, COLORADO 80204; open to the Public "
property_address = re.findall('\d-\d\d\d\d\d', str(string))
print(property_address)
</code></pre>
<p>The code above does not work; I'm a bit confused about how to tell the regex to start at the first digit it finds and keep matching until it finds a five-digit sequence.</p>
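One way to express "start at the first digit, keep going until a five-digit sequence" is a lazy quantifier between <code>\d</code> and <code>\d{5}</code> (a sketch; a robust address parser would need more care than this):

```python
import re

text = "['Today is the open house of 1234 High Drive, Denver, COLORADO 80204; open to the Public "

# \d       anchors the match at the first digit
# .*?      lazily consumes as little as possible
# \d{5}\b  stops at the first standalone five-digit run (the ZIP code)
match = re.search(r'\d.*?\d{5}\b', text)
if match:
    print(match.group(0))  # 1234 High Drive, Denver, COLORADO 80204
```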
|
<python><regex><algorithm>
|
2024-05-14 03:32:29
| 1
| 380
|
Josh
|
78,475,551
| 2,537,486
|
Discontinuous selections with pandas MultiIndex
|
<p>I have the following <code>DataFrame</code> with <code>MultiIndex</code> columns (the same applies to MultiIndex rows):</p>
<pre><code> import pandas as pd
df = pd.DataFrame(columns=pd.MultiIndex.from_product([['A','B'],[1,2,3,4]]),
data=[[0,1,2,3,4,5,6,7],[10,11,12,13,14,15,16,17],
[20,21,22,23,24,25,26,27]])
</code></pre>
<p>Now I want to select the following columns into a new <code>DataFrame</code>: from the <code>A</code> group, elements at index [1,2,4], <strong>AND</strong> from the <code>B</code> group, elements at index [1,3]. So my new <code>DataFrame</code> would have 5 columns.</p>
<p>I can easily make any of the two selections separately using <code>.loc</code>:</p>
<pre><code> df_grA = df.loc[:,('A',(1,2,4))]
df_grB = df.loc[:,('B',(1,3))]
</code></pre>
<p>But I cannot find a way to achieve what I want.
The only way I can think of is to <code>concat</code> the two pieces together like this:</p>
<pre><code> df_selection = pd.concat([df_grA,df_grB],axis=1)
</code></pre>
<p>This works, but it's clunky. I can't believe there's not a more convenient way.</p>
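For what it's worth, <code>.loc</code> also accepts an explicit list of <code>(group, index)</code> tuples, which allows a discontinuous selection in a single call (a sketch using the data above):

```python
import pandas as pd

df = pd.DataFrame(columns=pd.MultiIndex.from_product([['A', 'B'], [1, 2, 3, 4]]),
                  data=[[0, 1, 2, 3, 4, 5, 6, 7],
                        [10, 11, 12, 13, 14, 15, 16, 17],
                        [20, 21, 22, 23, 24, 25, 26, 27]])

# one explicit tuple per wanted column: A/1, A/2, A/4, B/1, B/3
df_selection = df.loc[:, [('A', 1), ('A', 2), ('A', 4), ('B', 1), ('B', 3)]]
print(df_selection.shape)  # (3, 5)
```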
|
<python><pandas><multi-index>
|
2024-05-14 03:16:51
| 1
| 1,749
|
germ
|
78,475,496
| 3,788,557
|
Different results for scipy-minimize using SLSQP dependent upon initalized values
|
<p>I am using scipy.minimize in an attempt to optimize the inputs to my function.</p>
<p>I have a given amount of budget/hours and 6 inputs, and I'm trying to get the highest return from the input mix. Each of these 6 inputs has a non-linear, concave-shaped return; that is, each response curve 'saturates', similar to a logarithmic curve.</p>
<p>My issue is that this all seems highly sensitive to the <code>initial_guess</code> values that are passed in.</p>
<p>I ran 1000 simulations with random numbers for the initial guess and got various optimized values. I thought I would consistently get the same result, but each time it is different. Again, the response curve for all of the inputs is similar to a logarithmic curve.
My <code>gradient_function</code> is correct, and so is my <code>objective_function</code>.</p>
<p>I just don't know what is wrong or what I could be misinterpreting.</p>
<pre><code>optimization = minimize(objective_function, initial_guess, method='SLSQP', jac=gradient_function, bounds=boundaries, constraints=[{'type': 'eq', 'fun': constraints_function}], options={'maxiter': maxiter})
</code></pre>
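A common mitigation for SLSQP's sensitivity to the starting point is a multi-start loop over random feasible initial guesses, keeping the best converged result. A sketch with a toy concave objective (not your actual response curves; all names here are illustrative):

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
budget, n = 10.0, 6

def objective(x):
    # negative because scipy minimizes; log1p gives a concave return curve
    return -np.sum(np.log1p(x))

cons = [{"type": "eq", "fun": lambda x: np.sum(x) - budget}]
bounds = [(0.0, budget)] * n

best = None
for _ in range(20):
    x0 = rng.dirichlet(np.ones(n)) * budget   # random feasible start
    res = minimize(objective, x0, method="SLSQP",
                   bounds=bounds, constraints=cons)
    if res.success and (best is None or res.fun < best.fun):
        best = res

print(best.x)  # every slot near budget / n for this symmetric problem
```

If restarts on a genuinely concave problem still disagree, that points at an inconsistency between the gradient and the objective (worth checking with `scipy.optimize.check_grad`), or at tolerances (`ftol`, `maxiter`) loose enough to stop early.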
|
<python><optimization><scipy><scipy-optimize><scipy-optimize-minimize>
|
2024-05-14 02:52:07
| 1
| 6,665
|
runningbirds
|
78,475,446
| 2,796,170
|
FastAPI Custom Router Class - How to get All Router Function Input Parameters
|
<p>Is there a way to get the input parameters for a fastapi <code>APIRoute</code> function when defining a custom router class (per the <a href="https://fastapi.tiangolo.com/how-to/custom-request-and-route/" rel="nofollow noreferrer">docs</a>)?</p>
<p>For example, in the example below, if the <code>things</code> variable wasn't sent in the request, could I still get a hold of the value passed to it in my custom router class?</p>
<pre class="lang-py prettyprint-override"><code>from typing import List, Callable
from fastapi import Body, FastAPI, Request, Response
from fastapi.routing import APIRoute
class CustomRoute(APIRoute):
def get_route_handler(self) -> Callable:
original_route_handler = super().get_route_handler()
async def custom_route_handler(request: Request) -> Response:
# HOW CAN I GET THE INPUTS TO sum_numbers FUNCTION HERE?
_r = await original_route_handler(request)
return _r
return custom_route_handler
def create_app():
app = FastAPI()
app.router.route_class = CustomRoute
return app
app = create_app()
@app.post("/sum")
async def sum_numbers(numbers: List[int] = Body(), things: str = "stuff"):
return {"sum": sum(numbers), "things": things}
</code></pre>
|
<python><fastapi>
|
2024-05-14 02:26:10
| 1
| 557
|
codeAndStuff
|
78,475,426
| 2,307,441
|
Pandas map two dataframe based on column name mentioned on the other df and partial match to derive new column
|
<p>I have two dataframes df1 & df2 as below.</p>
<pre><code>import pandas as pd
data1 = {'Column1': [1, 2, 3],
'Column2': ['Account', 'Biscut', 'Super'],
'Column3': ['Funny', 'Super', 'Nice']}
df1 = pd.DataFrame(data1)
data2 = {'ColumnName':['Column2','Column3','Column1'],
'ifExist':['Acc','Sup',3],
'TarName':['Account_name','Super_name','Val_3']}
df2 = pd.DataFrame(data2)
</code></pre>
<p>I want to add a new column <code>TarName</code> to df1 by partially matching each <code>ifExist</code> value from df2 against the df1 column named in df2's <code>ColumnName</code>.</p>
<p>My Expected Ouput is:</p>
<pre><code>Column1 Column2 Column3 TarName
1       Account Funny   Account_name
2       Biscut  Super   Super_name
3       Super   Nice    Val_3
</code></pre>
<p>I have tried the code below. It is able to do the partial mapping, but only for one column. With this approach I need to create as many dictionaries as I have column mappings, and apply each of them separately.</p>
<p>Is there a more dynamic approach?</p>
<pre><code>df2_Column2_dict = df2[df2['ColumnName']=='Column2'].set_index(['ifExist'])['TarName'].to_dict()
pat = r'({})'.format('|'.join(df2_Column2_dict.keys()))
extracted = df1['Column2'].str.extract(pat, expand=False).dropna()
df1['TarName'] = extracted.apply(lambda x: df2_Column2_dict[x]).reindex(df2.index)
print(df1)
</code></pre>
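A more dynamic sketch loops over df2's rules once instead of building one dictionary per column (names as in the question; <code>regex=False</code> keeps each <code>ifExist</code> a literal substring, and earlier rules win for rows matched by several rules):

```python
import pandas as pd

df1 = pd.DataFrame({'Column1': [1, 2, 3],
                    'Column2': ['Account', 'Biscut', 'Super'],
                    'Column3': ['Funny', 'Super', 'Nice']})
df2 = pd.DataFrame({'ColumnName': ['Column2', 'Column3', 'Column1'],
                    'ifExist': ['Acc', 'Sup', 3],
                    'TarName': ['Account_name', 'Super_name', 'Val_3']})

df1['TarName'] = pd.NA
for _, rule in df2.iterrows():
    # rows still unmapped whose target column contains the fragment
    hit = df1['TarName'].isna() & df1[rule['ColumnName']].astype(str).str.contains(
        str(rule['ifExist']), regex=False)
    df1.loc[hit, 'TarName'] = rule['TarName']

print(df1['TarName'].tolist())  # ['Account_name', 'Super_name', 'Val_3']
```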
|
<python><pandas><dataframe><partial>
|
2024-05-14 02:15:15
| 3
| 1,075
|
Roshan
|
78,475,395
| 1,405,767
|
Caching Django Rest Framework function-based view causes permission classes to be ignored
|
<p>I'm having a strange issue where my function-based view caching seems to be conflicting with the permission classes applied to it. The decorators for the view are as follows:</p>
<pre><code>@cache_page(3600)
@api_view(['GET'])
@permission_classes((APIKeyPermission,))
def function_based_view(request):
# function-based view code here...
</code></pre>
<p>The problem is: When I try to access the view via HTTP GET, if it is not yet cached, it will require the <code>APIKeyPermission</code> to be satisfied and give an 401 error. This is correct behavior. Once the view has been successfully cached, any HTTP GET request can successfully access the view without providing any required permissions. It is important to note that once the view is cached, the <code>@permission_classes</code> decorator seems to do <em>nothing</em> <strong>and</strong> the view does not even fall back to the <code>DEFAULT_PERMISSION_CLASSES</code> as specified in <code>settings.py</code> (which in this case are even more strict than the single permission class specified in the <code>@permission_classes</code> method decorator.)</p>
<p>In short: Once the view is cached, it can be accessed without <em>any</em> of the permission classes being applied to it.</p>
<p>How do I fix this so that the permission classes are properly applied to cached views?</p>
|
<python><django><django-rest-framework>
|
2024-05-14 01:54:55
| 1
| 926
|
stackunderflow
|
78,475,278
| 4,443,378
|
Get duplicate rows in a specific column from dataframe
|
<p>I have a dataframe df:</p>
<pre><code>num_rows = 5
num_cols = 3
data = [
[10, 20, 30],
[10, 50, 60],
[70, 80, 90],
[20, 30, 10],
[20, 10, 20]
]
columns = [f"Column_{i+1}" for i in range(num_cols)]
df = spark.createDataFrame(data, columns)
|Column_1|Column_2|Column_3|
+--------+--------+--------+
| 10| 20| 30|
| 10| 50| 60|
| 70| 80| 90|
| 20| 30| 10|
| 20| 10| 20|
+--------+--------+--------+
</code></pre>
<p>I want to create another column with true/false based on the first column, where the first occurrence of a value is "true" and any duplicate is "false". So it would look like:</p>
<pre><code>|Column_1|Column_2|Column_3|Column_4|
+--------+--------+--------+--------+
| 10| 20| 30| TRUE|
| 10| 50| 60| FALSE|
| 70| 80| 90| TRUE|
| 20| 30| 10| TRUE|
| 20| 10| 20| FALSE|
</code></pre>
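In plain pandas this flag can be sketched with <code>duplicated()</code> (shown here only to illustrate the logic; in PySpark itself, where row order isn't guaranteed, the analogous approach would be a <code>row_number</code> window over <code>Column_1</code> with an explicit ordering column):

```python
import pandas as pd

df = pd.DataFrame({'Column_1': [10, 10, 70, 20, 20],
                   'Column_2': [20, 50, 80, 30, 10],
                   'Column_3': [30, 60, 90, 10, 20]})

# duplicated() marks every repeat of a value after its first occurrence,
# so the negation is True exactly for the first occurrence
df['Column_4'] = ~df['Column_1'].duplicated()
print(df['Column_4'].tolist())  # [True, False, True, True, False]
```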
|
<python><pandas><dataframe><pyspark>
|
2024-05-14 00:39:03
| 2
| 596
|
Mitch
|
78,475,149
| 7,082,628
|
Convert HTML to PDF - PDFKit File Size too Large
|
<p>I have one complete (static; it doesn't rely on calls to the internet) HTML file that's < 900 KB in size, and I am currently using PDFKit to create a single PDF from it that ends up being about 100 pages long. The PDF is 30-40 MB, which is frankly far too large, considering each page of the PDF is just text and a small image repeated 4 times.</p>
<p>The way I create the PDF is pretty simple.</p>
<p>installation:</p>
<pre><code>apt-get install wkhtmltopdf -y
pip install pdfkit==1.0.0
pip install pypdf2==2.10.5
</code></pre>
<pre class="lang-py prettyprint-override"><code>import pdfkit
def html_to_pdf(html_path: str, pdf_path: str):
pdfkit.from_file(
input=html_path,
output_path=pdf_path,
configuration=pdfkit.configuration(),
options={
'zoom': '0.9588', # seemed to be the right zoom through trial and error
'disable-smart-shrinking': '',
'page-size': 'Letter',
'orientation': 'Landscape',
'margin-top': '0',
'margin-right': '0',
'margin-left': '0',
'margin-bottom': '0',
'encoding': "UTF-8",
})
html_to_pdf(".my_html_file.html", "my_pdf_file.pdf")
</code></pre>
<p>The image I pull in - I've tried resizing the image and shrunk it to be about 30% of its original size, but there was no change at all in the size of the resulting .pdf.</p>
<p>What I notice about the PDFs I generate with PDFKit is that each is not <em>really</em> a PDF. As in, you can't really search the text, highlight text blocks, etc. It acts like it's essentially a big image on every page. When I print the HTML from my browser and convert that to a PDF, I can do all those things, for example.</p>
<p>I am stuck building something programmatically - so I need this to be automated. Is there some setting I'm missing with PDF Kit?</p>
<p>Also what can be noted is I have access to the actual string I use to make the HTML - I don't have to read an HTML file. Would that make a difference?</p>
<p>I'm also open to not using PDF kit at all. I just need something that doesn't require a license.</p>
<p>======</p>
<p>Addendum - it's fonts!</p>
<p>I use custom fonts that are <code>.otf</code> files, so I run some code that turns the file into binary. I then store the font as its byte string in the <code><head></code> like this -</p>
<pre class="lang-html prettyprint-override"><code><style>
@font-face {
font-family: "my_custom_font";
src: url(data:font/woff2;base64, asdlfjsads92932super-long-byte-string-here) format("woff2");
font-weight: normal;
font-style: normal
}</style>
<style>
@font-face {
font-family: "my_custom_font_bold";
src: url(data:font/woff2;base64, asdlfjsads92932super-long-byte-string-here) format("woff2");
font-weight: normal;
font-style: normal
}</style>
</code></pre>
<p>I refer to them in the CSS and apply them to my elements like this:</p>
<pre class="lang-css prettyprint-override"><code> span {
font-family: "my_custom_font", Helvetica, sans-serif;
}
b {
font-family: "my_custom_font_bold";
}
</code></pre>
<p>And then the <code><body></code> of my html will contain this structure, repeated over and over again:</p>
<pre class="lang-html prettyprint-override"><code><div class=offer>
<div class="offer_banner">
<div class=text_container>
<div class="stateroom_text"><span>Deliver to you</span></div>
</div>
<div class=text_container>
<div class=colored_heading>
<div class=colored_heading_child><span class=colored_heading_text>My other text</span></div>
</div>
</div>
</div>
<div class="offer_top_content">
<div class=text_container>
<div class=greeting_text><span>Text 1</span></div>
</div>
<hr>
<div class=text_container>
<div class=offer_text><span>text 2</span></div>
</div>
<hr>
<div class=text_container>
<div class=redemption_text><span>Text 3</span></div>
</div>
<div class=text_container>
<div><span class="italics_span">Text 4</span></div>
</div>
</div>
<div class=logo_container>
<div style="margin:0 20px 0 20px;text-align:center"><img
src="data:image/svg+xml,%3Csvg%20xmlns=%27http%3A//www.w3.org/2000/svg%27%20width=%27128%27%20height=%2730%27%3E%3Crect%20fill-opacity=%270%27/%3E%3C/svg%3E"
alt=""
style="background-blend-mode:normal!important; background-clip:content-box!important; background-position:50% 50%!important; background-color:rgba(0,0,0,0)!important; background-image:var(--sf-img-5)!important; background-size:100% 100%!important; background-origin:content-box!important; background-repeat:no-repeat!important">
</div>
</div>
</div>
</code></pre>
<p>Is there a way I can still use my custom fonts and get PDFKit to print it like normal text?</p>
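One thing worth checking, offered as an assumption rather than a confirmed diagnosis: the <code>@font-face</code> rules above declare <code>font/woff2</code> and <code>format("woff2")</code> while the embedded bytes come from <code>.otf</code> files; if wkhtmltopdf's WebKit engine rejects the mislabeled font, it may fall back to rasterizing. A sketch of a helper that embeds the bytes with a matching MIME type and format hint (<code>font_face_css</code> is a hypothetical name):

```python
import base64

def font_face_css(family: str, otf_path: str) -> str:
    """Build an @font-face rule whose MIME type and format() hint
    match the actual .otf bytes being embedded."""
    with open(otf_path, "rb") as f:
        b64 = base64.b64encode(f.read()).decode("ascii")
    return (
        f'@font-face {{\n'
        f'  font-family: "{family}";\n'
        f'  src: url(data:font/opentype;base64,{b64}) format("opentype");\n'
        f'  font-weight: normal;\n'
        f'  font-style: normal;\n'
        f'}}'
    )
```

The generated rule would replace the hand-written <code>&lt;style&gt;</code> blocks in the <code>&lt;head&gt;</code>.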
|
<python><pdf><pdfkit>
|
2024-05-13 23:25:59
| 0
| 3,634
|
NateH06
|
78,475,106
| 11,196,682
|
Should I store start time, end time, and days as separate fields or use a CRON expression in my database?
|
<p>I am developing a scheduling feature in my application where users can set specific time ranges and days for recurring tasks. For example, a user might configure a task to run every <code>Thursday and Friday from 15:00 to 16:00</code>.</p>
<p>I need to store these schedules in my database and am debating between two approaches:</p>
<ol>
<li>Storing start time, end time, and a list of days as separate fields.</li>
<li>Using a CRON expression to represent the schedule.</li>
</ol>
<p>Here's a simplified version of the current approach using separate fields:</p>
<ul>
<li><code>start_time</code>: "15:00"</li>
<li><code>end_time</code>: "16:00"</li>
<li><code>days</code>: ["Thursday", "Friday"]</li>
</ul>
<p>Alternatively, I could convert these schedules to a CRON expression in UTC:</p>
<ul>
<li><code>CRON</code>: "0 15 * * 4,5"</li>
</ul>
<p><strong>Considerations</strong> :</p>
<ul>
<li><p>Time zones: users configure times in their local time zone, but the application needs to store and process these times in UTC. Storing them as UTC is probably better, right?</p>
</li>
<li><p>If a user changes their time zone (e.g., moves from zone A to zone B), I will have to update all of their schedules stored in the database, because the local time (15:00-16:00) should stay the same.</p>
</li>
<li><p>Users have a date picker; it's easier for them to use it than to define a cron expression.</p>
</li>
</ul>
<p><strong>Questions</strong>:</p>
<ul>
<li>Which approach is generally recommended for storing recurring schedules in a database?</li>
<li>Are there best practices or common patterns for handling such scheduling requirements?</li>
</ul>
<p>Thank you a lot!</p>
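For reference, the mapping from the separate fields to a cron expression is mechanical; a sketch (note that cron can only encode the start, not the end time, which is one argument for keeping separate fields):

```python
# Hypothetical helper: crontab day-of-week numbering uses 0 (or 7) for Sunday.
DAY_NUM = {"Sunday": 0, "Monday": 1, "Tuesday": 2, "Wednesday": 3,
           "Thursday": 4, "Friday": 5, "Saturday": 6}

def to_cron(start_time: str, days: list[str]) -> str:
    hour, minute = start_time.split(":")
    day_field = ",".join(str(DAY_NUM[d]) for d in days)
    return f"{int(minute)} {int(hour)} * * {day_field}"

print(to_cron("15:00", ["Thursday", "Friday"]))  # 0 15 * * 4,5
```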
|
<python><date><cron>
|
2024-05-13 23:05:41
| 0
| 552
|
MathAng
|
78,475,061
| 11,233,365
|
pip install entire Python script as a command line executable via pyproject.toml
|
<p>I am helping to write a pyproject.toml file for a Python package, and the author has written two Python scripts (not functions, but the scripts themselves) that they would like to use as command-line executables. The folder structure looks as follows:</p>
<pre class="lang-bash prettyprint-override"><code>___ bin
| |__ script1.py
| |__ script2.py
|__ src
|__ module1.py
|__ module2.py
</code></pre>
<p>Reading the setuptools docs, I can see that functions within scripts can be pointed to using the string <code>"bin.script1:func1"</code>. However, in this case, the script itself is supposed to be the executable. All of the functions for the package are in the src folder, but the executables are in bin.</p>
<p>In such a situation, how should I configure the <code>[project.scripts]</code> section of the pyproject.toml file so that it can be installed properly with pip? The relevant (I think) sections of my pyproject.toml configuration currently look like this:</p>
<pre class="lang-ini prettyprint-override"><code>[build-system]
build-backend = "setuptools.build_meta"
requires = [
"setuptools",
]
...
...
[project.scripts]
Script1 = "bin:Script1.py"
Script2 = "bin:Script2.py"
...
...
[tool.setuptools.packages.find]
where = ["."]
include = [
"src",
"bin",
]
</code></pre>
<p>Feedback would be much appreciated. Many thanks!</p>
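For context on why the <code>bin:Script1.py</code> form fails: entries under <code>[project.scripts]</code> must reference an importable module plus a callable inside it, not a file path. A hedged sketch, assuming the scripts are moved into an importable package (<code>mypkg</code> and <code>main</code> are placeholder names) and each script's top-level code is wrapped in a <code>main()</code> function:

```toml
[project.scripts]
script1 = "mypkg.script1:main"
script2 = "mypkg.script2:main"
```

Each script would keep an <code>if __name__ == "__main__": main()</code> guard so it can still be run directly.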
|
<python><pip><setuptools><pyproject.toml>
|
2024-05-13 22:45:13
| 0
| 301
|
TheEponymousProgrammer
|
78,475,028
| 7,514,722
|
Numpy array in function for A *= B vs A = A*B
|
<p>The following produces two different behaviors</p>
<pre><code>import numpy as np
def func1 (A, B):
A = A * B
return A
def func2 (A, B):
A *= B
return A
A = np.arange(3, dtype=float)
B = np.full( 3, 2.0 )
C1 = func1 (A, B)
print ( " func 1", C1)
print ( " A after function 1 = ", A)
print ()
</code></pre>
<p>The result is as expected: C1 = [0.0, 2.0, 4.0], and A is unchanged, still [0.0, 1.0, 2.0]</p>
<p>Repeating above with func2</p>
<pre><code>A = np.arange(3, dtype=float)
B = np.full(3, 2.0 )
C2 = func2 (A, B)
print (" func 2", C2)
print (" A after function 2 = ", A)
</code></pre>
<p>The result is as expected for C2 = [0.0, 2.0, 4.0], but A has changed to [0.0, 2.0, 4.0]</p>
<p>This is rather dangerous, as I thought the array A inside func2 would become a copy as soon as it was used in a calculation like the one above. Can anyone explain this behavior? Shouldn't the two functions be expected to behave the same?</p>
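The difference can be reproduced without functions: <code>A = A * B</code> rebinds the name to a freshly allocated array, while <code>A *= B</code> calls <code>ndarray.__imul__</code>, which writes into the existing buffer that the caller also sees. A small sketch using <code>np.shares_memory</code>:

```python
import numpy as np

A = np.arange(3, dtype=float)
alias = A
alias = alias * 2.0                # '*' allocates a NEW array, then rebinds
print(np.shares_memory(A, alias))  # False: A is untouched

B = np.arange(3, dtype=float)
alias = B
alias *= 2.0                       # '*=' mutates B's buffer in place
print(np.shares_memory(B, alias))  # True
print(B)                           # [0. 2. 4.]
```

Passing a copy (<code>func2(A.copy(), B)</code>) avoids mutating the caller's array.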
|
<python><arrays><numpy>
|
2024-05-13 22:29:10
| 0
| 335
|
Ong Beng Seong
|
78,475,004
| 3,652,584
|
Export `stdout` and `stderr` of a single-line bash command to text file while using python module followed by `-m` syntax
|
<p>I would like to export the <code>stdout</code> and <code>stderr</code> of a bash command to the same text file.</p>
<p>The bash command is a single-line command that calls <code>python3</code> followed by the name of the module and function, followed by three arguments (each using <code>--</code>).</p>
<p>The bash command is running on an HPC as part of a slurm job.</p>
<pre><code>python3 -m module.function --Arg1 Val1 --Arg2 Val2 --Arg3 Val3
</code></pre>
<p>I tried the following, but it failed: <code>2></code> or <code>></code> seemed to be treated as extra values for the last argument.</p>
<pre><code>python3 -m module.function --Arg1 Val1 --Arg2 Val2 --Arg3 Val3 2> Output.txt
python3 -m module.function --Arg1 Val1 --Arg2 Val2 --Arg3 Val3 > Output.txt
</code></pre>
<p>How can I redirect the output to a file without making large changes to the syntax? (I need to keep calling the function from the module in a single-line command.)</p>
<p>Thanks</p>
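If the redirection really is reaching the program as arguments, the line is most likely not being parsed by a shell (e.g. it is passed as an argument list in the slurm script rather than as a shell command). When a shell does parse it, the redirections can be appended unchanged, e.g. <code>python3 -m module.function --Arg1 Val1 --Arg2 Val2 --Arg3 Val3 > Output.txt 2>&1</code>. A self-contained sketch of the same redirection:

```shell
# '>' points stdout at Output.txt; '2>&1' then points stderr at the same
# place. Order matters: the duplication must come after the file redirect.
python3 -c 'import sys; print("to stdout"); print("to stderr", file=sys.stderr)' > Output.txt 2>&1
cat Output.txt
```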
|
<python><python-3.x><bash><slurm>
|
2024-05-13 22:20:08
| 2
| 537
|
Ahmed El-Gabbas
|
78,474,995
| 2,796,170
|
Is there a way to apply a custom route_class for every APIRouter in your fastapi application?
|
<p>Is there a way to apply a custom <code>route_class</code> for every single APIRouter in your fastapi app? Similar to <a href="https://fastapi.tiangolo.com/how-to/custom-request-and-route/#custom-apiroute-class-in-a-router" rel="nofollow noreferrer">this</a>, but instead of creating each <code>APIRouter</code> instance passing the parameter, I'd like to apply that for all with a single declaration. I can't seem to find how to do that in the docs.</p>
<p>One thing I find odd is that I can set the <code>route_class</code> in the same file where the router is defined, but if I try to do it in the main function (where <code>include_router()</code> is called), it seems to have no effect. Why is this?</p>
<p>Thanks in advance for any guidance.</p>
|
<python><fastapi>
|
2024-05-13 22:16:00
| 1
| 557
|
codeAndStuff
|
78,474,978
| 2,418,793
|
Convert svg to png in Python using Inkscape, but nothing works
|
<p>I want to loop through a folder, take all SVGs and convert them to PNGs, then use OCR to read text from the images.</p>
<p>I know the OCR part of my code works, because I did read text successfully from PNGs. But somehow I can't for the life of me figure out how to first convert all SVGs to PNGs, and then do the OCR parts. So that's where my code gets stuck, in the conversion.</p>
<p>I have tried everything I possibly can but get stuck in malfunctioning library hell.</p>
<p>These are NOT working for me:</p>
<ul>
<li>cairosvg</li>
<li>svglib / reportlab</li>
<li>Inkscape command line</li>
<li>pyvips</li>
<li>Image Magick</li>
</ul>
<p>I am still trying to get the Inkscape command to work. I tried changing the Inkscape path from .exe to .com, but no difference there.</p>
<p>And now I run my code but nothing happens. I'm testing with a single SVG file in a single folder. I can't even use breakpoints; the code won't stop at them. As you can see, none of the print statements print, only the one at the end, outside of the loop. I've tried closing and reopening Visual Studio Code, but nothing changed. Please help.</p>
<pre><code>import cv2
import glob
import pytesseract
import subprocess
#import time
# Path to tesseract.exe
pytesseract.pytesseract.tesseract_cmd = 'C:/Program Files/Tesseract-OCR/tesseract.exe'
#Path to Inkscape
inkscape_path = 'C:/Program Files/Inkscape/bin/inkscape.exe'
pva_image_list = []
#start_time = time.time()
# Get a list of image file paths from a folder, including subfolders (**)
for img_name in glob.glob('my-images/**/*.svg', recursive=True):
print('Svg path name:', img_name)
try:
png_converted = subprocess.run([inkscape_path, '--export-type=png', f'--export-filename=file.png', img_name])
print('png_converted: ', type(png_converted.stdout))
# Reads the image
img = cv2.imread(png_converted)
print(img)
# Find text on each image, returns string of text
text = pytesseract.image_to_string(img)
# Search for keywords in extracted text. No match returns a -1.
if text.find('text to read') > -1 or text.find('more text') > -1:
# Add filename of image to list if there's a match
pva_image_list.append(img_name)
except Exception as e:
print(e)
pass
# Print to console. You can select each list item in the console to open image in VSC.
print(pva_image_list)
</code></pre>
<p>The output of this code:</p>
<pre><code>PS C:\Users\v-wiazure\Desktop> python test-svg-convert.py
[]
PS C:\Users\v-wiazure\Desktop>
</code></pre>
|
<python><svg><inkscape>
|
2024-05-13 22:08:14
| 0
| 3,172
|
Azurespot
|
78,474,751
| 3,102,968
|
Loading Files and Data from Absolute and Relative Path in Python
|
<p>I have the following project structure in a Python project:</p>
<pre><code>> nn-project
-.env
> - src
> - models
> - bird-model
> - env.py
> - train_model.py
</code></pre>
<p>I have in my .env file, the following:</p>
<pre><code>PROJECT_ROOT = ../
</code></pre>
<p>In my env.py, I do the following:</p>
<pre><code>project_root = os.environ.get('PROJECT_ROOT')
if not project_root:
raise ValueError("PROJECT_ROOT environment variable is not set.")
absolute_path = os.path.abspath(project_root)
data_dir = Path(os.path.join(absolute_path, 'data/raw/boston_housing_price/'))
models_dir = Path(os.path.join(absolute_path, 'models/boston_housing_price/'))
print('***************** LOAD ENVIRONMENT ********************+')
print("Project Root DIR", project_root)
print("Project Root DIR abs", absolute_path)
print("Project Data DIR", data_dir)
print("Models Dump DIR", models_dir)
print('***************** LOAD ENVIRONMENT ********************+')
</code></pre>
<p>I get to see the following printed:</p>
<pre><code>***************** LOAD ENVIRONMENT ********************+
Project Root DIR ../nn-project/
Project Root DIR abs /home/user/Projects/Private/ml-projects/nn-project/nn-project
Project Data DIR /home/user/Projects/Private/ml-projects/nn-project/nn-project/data/raw/boston_housing_price
Models Dump DIR /home/user/Projects/Private/ml-projects/nn-project/nn-project/models/boston_housing_price
***************** LOAD ENVIRONMENT ********************+
</code></pre>
<p>I'm intrigued by the nn-project being printed twice. Why is that? What am I missing?</p>
<p>I'm doing the following in my env.py:</p>
<pre><code>from dotenv import find_dotenv
from dotenv import load_dotenv
env_file = find_dotenv(".env")
load_dotenv(env_file)
</code></pre>
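A plausible cause, offered as an assumption: <code>os.path.abspath</code> resolves a relative <code>PROJECT_ROOT</code> against the current working directory, not against the <code>.env</code> file's location, so the result changes with (and can duplicate) the directory the script is launched from. A minimal sketch:

```python
import os
import tempfile

base = tempfile.mkdtemp()
proj = os.path.join(base, "nn-project")
os.makedirs(proj)

os.chdir(base)
from_base = os.path.abspath("..")   # parent of the temp dir
os.chdir(proj)
from_proj = os.path.abspath("..")   # the temp dir itself

# The same relative path yields two different absolute paths:
print(from_base != from_proj)  # True
```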
|
<python><environment-variables><python-dotenv>
|
2024-05-13 20:55:27
| 1
| 15,565
|
joesan
|
78,474,730
| 3,740,545
|
Solve pyomo.common.errors.InfeasibleConstraintException
|
<p>After running the below code:</p>
<pre><code>from pyomo.environ import *
model = ConcreteModel()
model.q11, model.q12, model.q13, model.q21, model.q22, model.q23, model.q31, model.q32, model.q33 = [Var(bounds=(0.0, 11.0), within=Integers, initialize=0.0) for i in range(9)]
model.s1, model.s2, model.s3 = [Var(bounds=(0.0, 1.0), within=Binary, initialize=0.0) for i in range(3)]
model.S1, model.S2, model.S3 = [Var(bounds=(0.0, 100.0), within=Integers, initialize=0.0) for i in range(3)]
model.c1 = Constraint(expr=model.q11*model.s1*model.S1 + model.q12*model.s1*model.S1 + model.q13*model.s1*model.S1 >= 130.0 - 3.0)
model.c2 = Constraint(expr=model.q11*model.s1*model.S1 + model.q12*model.s1*model.S1 + model.q13*model.s1*model.S1 <= 130.0 + 3.0)
model.c3 = Constraint(expr=model.q21*model.s2*model.S2 + model.q22*model.s2*model.S2 + model.q23*model.s2*model.S2 >= 130.0 - 3.0)
model.c4 = Constraint(expr=model.q21*model.s2*model.S2 + model.q22*model.s2*model.S2 + model.q23*model.s2*model.S2 <= 130.0 + 3.0)
model.c5 = Constraint(expr=model.q31*model.s3*model.S3 + model.q32*model.s3*model.S3 + model.q33*model.s3*model.S3 >= 130.0 - 3.0)
model.c6 = Constraint(expr=model.q31*model.s3*model.S3 + model.q32*model.s3*model.S3 + model.q33*model.s3*model.S3 <= 130.0 + 3.0)
model.c7 = Constraint(expr=model.q11 + model.q12 + model.q13 <= 11.0)
model.c8 = Constraint(expr=model.q21 + model.q22 + model.q23 <= 11.0)
model.c9 = Constraint(expr=model.q31 + model.q32 + model.q33 <= 11.0)
model.objective = Objective(expr=model.s1 + model.s2 + model.s3, sense=minimize)
SolverFactory('mindtpy').solve(model, mip_solver='glpk', nlp_solver='ipopt')
model.objective.display()
model.display()
model.pprint()
</code></pre>
<p>I get:</p>
<pre><code> Trivial constraint c1 violates LB 127.0 ≤ BODY 0.
Traceback (most recent call last):
File "/Users/francopiccolo/Git/GitHub/data-in-action/clothes-production-optimization/venv/lib/python3.12/site-packages/pyomo/contrib/mindtpy/algorithm_base_class.py", line 1097, in solve_subproblem
TransformationFactory('contrib.deactivate_trivial_constraints').apply_to(
File "/Users/francopiccolo/Git/GitHub/data-in-action/clothes-production-optimization/venv/lib/python3.12/site-packages/pyomo/core/base/transformation.py", line 77, in apply_to
reverse_token = self._apply_to(model, **kwds)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/francopiccolo/Git/GitHub/data-in-action/clothes-production-optimization/venv/lib/python3.12/site-packages/pyomo/contrib/preprocessing/plugins/deactivate_trivial_constraints.py", line 118, in _apply_to
raise InfeasibleConstraintException(
pyomo.common.errors.InfeasibleConstraintException: Trivial constraint c1 violates LB 127.0 ≤ BODY 0.
Infeasibility detected in deactivate_trivial_constraints.
WARNING: Deactivating trivial constraints on the block unknown for which
trivial constraints were previously deactivated. Reversion will affect all
deactivated constraints.
</code></pre>
<p>Any ideas what is going on here and how to overcome this error?</p>
|
<python><pyomo>
|
2024-05-13 20:49:42
| 1
| 7,490
|
Franco Piccolo
|
78,474,642
| 14,173,197
|
Need help in pydantic class output
|
<p>How do I solve this Pydantic validation issue?</p>
<pre class="lang-none prettyprint-override"><code>An error occurred while processing your query:
2 validation errors for Answer
answer.TableData
Input should be a valid dictionary or instance of TableData [type=model_type, input_value=None, input_type=NoneType]
For further information visit https://errors.pydantic.dev/2.7/v/model_type
answer.str
Input should be a valid string [type=string_type, input_value=None, input_type=NoneType]
For further information visit https://errors.pydantic.dev/2.7/v/string_type <class 'str'>
</code></pre>
<p>Here are the Pydantic classes I use to extract data from the LLM response:</p>
<pre><code>def fix_json_string(input_str):
# Manually escape backslashes first
escaped_str = input_str.replace("\\", "\\\\")
# Escape other problematic characters
escaped_str = escaped_str.replace("\n", "\\n")
escaped_str = escaped_str.replace("\r", "\\r")
escaped_str = escaped_str.replace("\t", "\\t")
escaped_str = escaped_str.replace('"', '\\"')
escaped_str = escaped_str.replace("[", "\\[")
escaped_str = escaped_str.replace("]", "\\]")
escaped_str = escaped_str.replace("(", "\\(")
escaped_str = escaped_str.replace(")", "\\)")
escaped_str = escaped_str.replace("^", "\\^")
escaped_str = escaped_str.replace("$", "\\$")
escaped_str = escaped_str.replace("*", "\\*")
escaped_str = escaped_str.replace("+", "\\+")
escaped_str = escaped_str.replace("?", "\\?")
escaped_str = escaped_str.replace("|", "\\|")
escaped_str = escaped_str.replace(".", "\\.")
escaped_str = escaped_str.replace("'", "\\'")
return escaped_str
class TableData(BaseModel):
# data: Optional[Dict[str, List[Any]]]
data: Union[Dict[str, List[Any]], List[Dict[str, Any]]]
answer: Optional[str]
@classmethod
def from_json(cls, json_str):
try:
fixed_json_str = fix_json_string(json_str)
data = json.loads(fixed_json_str) # Ensure the string is valid JSON
return cls(**data)
except json.JSONDecodeError as e:
print(f"JSON decode error: {e}")
return None
except ValidationError as e:
print(f"Validation error: {e}")
return None
class Answer(BaseModel):
answer: Union[TableData, str] = Field
sql: Union[str, Any] = Field
@classmethod
def from_json(cls, json_str):
try:
fixed_json_str = fix_json_string(json_str)
data = json.loads(fixed_json_str) # Ensure the string is valid JSON
return cls(**data)
except json.JSONDecodeError as e:
print(f"JSON decode error: {e}")
return None
except ValidationError as e:
print(f"Validation error: {e}")
return None
</code></pre>
|
<python><pydantic><pydantic-v2>
|
2024-05-13 20:24:28
| 0
| 323
|
sherin_a27
|
78,474,624
| 3,156,085
|
Why is `Callable` generic type contravariant in the arguments?
|
<h1>TL;DR:</h1>
<p>Why is the <code>Callable</code> generic type contravariant in the arguments as stated by the <a href="https://peps.python.org/pep-0483/#covariance-and-contravariance" rel="nofollow noreferrer">PEP 483</a> and how is my analysis of that question (in)accurate? (Said analysis at the bottom of the post)</p>
<hr />
<h1>Context:</h1>
<p>I'm currently studying topics about use of annotations for type analysis (MyPy, Pyright...). And I was reading the <a href="https://peps.python.org/pep-0483/" rel="nofollow noreferrer">PEP 483 – The Theory of Type Hints</a> that is related.</p>
<p>This PEP states the following in the <a href="https://peps.python.org/pep-0483/#covariance-and-contravariance" rel="nofollow noreferrer">section dedicated to Covariance and contravariance</a>:</p>
<blockquote>
<p>One of the best examples to illustrate (somewhat counterintuitive) contravariant behavior is the callable type. It is covariant in the return type, but contravariant in the arguments.</p>
</blockquote>
<p>This is what I'm trying to understand : <strong>why are callables contravariant in the arguments?</strong></p>
<hr />
<h1>More from the PEP 483:</h1>
<p>The PEP 483 provides the following notions and definitions:</p>
<h2><a href="https://peps.python.org/pep-0483/#subtype-relationships" rel="nofollow noreferrer">Subtype relationship</a>:</h2>
<p>Type <code>subtype</code> is a subtype of type <code>basetype</code> if:</p>
<ul>
<li>every value from <code>subtype</code> is also in the set of values of <code>basetype</code>; and</li>
<li>every function from <code>basetype</code> is also in the set of functions of <code>subtype</code>.</li>
</ul>
<h2><a href="https://peps.python.org/pep-0483/#covariance-and-contravariance" rel="nofollow noreferrer">Contravariance</a>:</h2>
<p>If <code>subtype</code> is a subtype of <code>basetype</code>, then a generic type constructor <code>GenType</code> is called <em>"contravariant"</em> if <code>GenType[basetype]</code> is a subtype of <code>GenType[subtype]</code> for all such <code>subtype</code> and <code>basetype</code>.</p>
<hr />
<h1>My analysis attempt:</h1>
<p>The question <em>"Why is <code>Callable</code> contravariant in the arguments?"</em> could be rephrased as:</p>
<p><strong>With type <code>subtype</code> being a subtype of type <code>basetype</code>, why is <code>Callable[[basetype], None]</code> a subtype of <code>Callable[[subtype], None]</code>?</strong></p>
<p>Which would imply, according to the above content from <a href="https://peps.python.org/pep-0483/#covariance-and-contravariance" rel="nofollow noreferrer">PEP 483</a>, that:</p>
<ol>
<li>Every value for <code>Callable[[basetype], None]</code> would be included in the set of values of <code>Callable[[subtype], None]</code>.</li>
<li>Every function of <code>Callable[[subtype], None]</code> would also be in the set of functions of <code>Callable[[basetype], None]</code>.</li>
</ol>
<p>Answering my main question would be answering why these two conditions are verified.</p>
<h2>1. Is every value for <code>Callable[[basetype], None]</code> included in the set of values of <code>Callable[[subtype], None]</code>?</h2>
<p><code>subtype</code> being a subtype of <code>basetype</code>, every value of type <code>subtype</code> is therefore a value of type <code>basetype</code>.</p>
<p>Therefore, any callable taking an argument of type <code>basetype</code> will also accept an argument of type <code>subtype</code>.</p>
<p>Which means that any value (callable value) of type <code>Callable[[basetype], None]</code> is also a value of type <code>Callable[[subtype], None]</code>.</p>
<p>Therefore, yes: <strong>every value of <code>Callable[[basetype], None]</code> is also a value of the set of values of <code>Callable[[subtype], None]</code></strong>.</p>
<blockquote>
<p><strong>NB:</strong> The contrary (covariance one could naively expect as I did first) can be not possible.</p>
<p>let's define the following from the PEP 483's own examples:</p>
<pre class="lang-py prettyprint-override"><code>class Employee: ...
class Manager(Employee):
def manage(self): ...
def my_callable(m: Manager):
m.manage()
</code></pre>
<p>Here, despite <code>my_callable</code> being of type <code>Callable[[Manager], None]</code>,
providing an <code>Employee</code> to it would go against type safety and it's
made obvious by the definition of the <code>Manager.manage</code> method. This
means that despite being of type <code>Callable[[Manager], None]</code>,
<code>my_callable</code> isn't of type <code>Callable[[Employee], None]</code>, which is
enough to refute a systematic covariance.</p>
</blockquote>
<h2>2. Is every function of <code>Callable[[subtype], None]</code> included in the set of functions of <code>Callable[[basetype], None]</code>?</h2>
<p>I'm less sure about this question and how to answer it.</p>
<p>Let's define the following function taking a callable as a parameter:</p>
<pre class="lang-py prettyprint-override"><code>def my_func(c: Callable[[subtype], None]): ...
</code></pre>
<p>Would any function with such a signature (<code>Callable[[Callable[[subtype], None]], None]</code>) be a function of type <code>Callable[[Callable[[basetype], None]], None]</code>?</p>
<p>Would any function accepting a <code>Callable[[subtype], None]</code> also accept a <code>Callable[[basetype], None]</code>? (I <em>suppose</em> it all boils down to that question.)</p>
<p>I <em>suppose</em> that any such function would pass a <code>subtype</code> instance to its callable parameter, and that a <code>Callable[[basetype], None]</code> would therefore accept such a <code>subtype</code> instance, which would make passing a <code>Callable[[basetype], None]</code> OK.</p>
<p>But is that really enough to assert that every function such as <code>my_func</code> also belongs to the set of functions of type <code>Callable[[Callable[[basetype], None]], None]</code>?</p>
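To make the contravariance concrete at runtime (a sketch reusing the PEP's Employee/Manager example; helper names are illustrative): a consumer that promises to call its argument only with <code>Manager</code> instances can safely be handed a handler written for the broader <code>Employee</code> type, which is exactly a <code>Callable[[Employee], None]</code> being usable as a <code>Callable[[Manager], None]</code>:

```python
from typing import Callable

class Employee:
    def __init__(self, name: str) -> None:
        self.name = name

class Manager(Employee):
    def manage(self) -> str:
        return f"{self.name} manages"

def notify_manager(handler: Callable[[Manager], None]) -> None:
    # This consumer only ever supplies Manager instances...
    handler(Manager("Ada"))

seen: list[str] = []

def employee_handler(e: Employee) -> None:
    seen.append(e.name)          # uses only Employee's interface

def manager_handler(m: Manager) -> None:
    seen.append(m.manage())      # needs Manager's extra method

# ...so a handler for the supertype is always acceptable too:
notify_manager(employee_handler)   # contravariance in action
notify_manager(manager_handler)
print(seen)  # ['Ada', 'Ada manages']
```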
|
<python><contravariance><subtype>
|
2024-05-13 20:19:34
| 1
| 15,848
|
vmonteco
|
78,474,619
| 3,071,350
|
How to download an image file using the Google Drive API in a Lambda Function in Python?
|
<p>I'm trying to download an image file (in a raw .cr2 format, but could change if needed) from Google Drive to AWS Lambda so I can edit using Pillow and upload the edited image back to drive.</p>
<p>I've tried two approaches to download the image following this <a href="https://developers.google.com/drive/api/guides/manage-downloads" rel="nofollow noreferrer">documentation</a>.</p>
<ol>
<li><a href="https://developers.google.com/drive/api/reference/rest/v3/files/get" rel="nofollow noreferrer">Get</a></li>
<li><a href="https://developers.google.com/drive/api/reference/rest/v3/files/export" rel="nofollow noreferrer">Export</a></li>
</ol>
<p>When I use Get, with the code below:</p>
<pre><code>credentials = service_account.Credentials.from_service_account_file(filename=SERVICE_ACCOUNT_FILE, scopes=SCOPES)
drive_service = build('drive', 'v3', credentials=credentials)
request = drive_service.files().get_media(fileId=file_id)
fh = io.BytesIO()
downloader = MediaIoBaseDownload(fh, request)
done = False
while done is False:
status, done = downloader.next_chunk()
print("Download %d%%" % int(status.progress() * 100))
</code></pre>
<p>I get the following error:</p>
<pre><code>{
"errorMessage": "<HttpError 403 when requesting https://www.googleapis.com/drive/v3/files/XXXXX?alt=media returned \"Only files with binary content can be downloaded. Use Export with Docs Editors files.\". Details: \"[{'message': 'Only files with binary content can be downloaded. Use Export with Docs Editors files.', 'domain': 'global', 'reason': 'fileNotDownloadable', 'location': 'alt', 'locationType': 'parameter'}]\">",
"errorType": "HttpError",
"requestId": "XXXXX",
"stackTrace": [
" File \"/var/task/lambda_function.py\", line 44, in lambda_handler\n status, done = downloader.next_chunk()\n",
" File \"/opt/python/lib/python3.12/site-packages/googleapiclient/_helpers.py\", line 130, in positional_wrapper\n return wrapped(*args, **kwargs)\n",
" File \"/opt/python/lib/python3.12/site-packages/googleapiclient/http.py\", line 780, in next_chunk\n raise HttpError(resp, content, uri=self._uri)\n"
]
}
</code></pre>
<p>When I try the Export method:</p>
<pre><code>credentials = service_account.Credentials.from_service_account_file(filename=SERVICE_ACCOUNT_FILE, scopes=SCOPES)
drive_service = build('drive', 'v3', credentials=credentials)
request = drive_service.files().export_media(fileId=file_id, mimeType=file_type)
fh = io.BytesIO()
downloader = MediaIoBaseDownload(fh, request)
done = False
while done is False:
status, done = downloader.next_chunk()
print("Download %d%%" % int(status.progress() * 100))
</code></pre>
<p>the error is:</p>
<pre><code>{
"errorMessage": "<HttpError 403 when requesting https://www.googleapis.com/drive/v3/files/XXXXX/export?mimeType=image%2Fjpeg&alt=media returned \"Export only supports Docs Editors files.\". Details: \"[{'message': 'Export only supports Docs Editors files.', 'domain': 'global', 'reason': 'fileNotExportable'}]\">",
"errorType": "HttpError",
"requestId": "XXXXX",
"stackTrace": [
" File \"/var/task/lambda_function.py\", line 47, in lambda_handler\n status, done = downloader.next_chunk()\n",
" File \"/opt/python/lib/python3.12/site-packages/googleapiclient/_helpers.py\", line 130, in positional_wrapper\n return wrapped(*args, **kwargs)\n",
" File \"/opt/python/lib/python3.12/site-packages/googleapiclient/http.py\", line 780, in next_chunk\n raise HttpError(resp, content, uri=self._uri)\n"
]
}
</code></pre>
<p>Is this the correct way to download an image from Drive using a Lambda function in Python?</p>
|
<python><amazon-web-services><aws-lambda><google-api><google-drive-api>
|
2024-05-13 20:18:01
| 0
| 1,962
|
filipebarretto
|
78,474,573
| 1,914,781
|
extract event pairs from multiline text
|
<p>I would like to extract event pairs (start and end, marked by <code>+</code> and <code>-</code>), but the pairs may not match: a start event can occur twice before an end event appears.</p>
<p>In the example below, the start of event <code>B</code> happens twice, so I would like the output to contain a mismatched pair with <code>NA</code> where the end event was not found.</p>
<pre><code>import re
import pandas as pd
data = """
00:00:00 +running A
dummy data
00:00:01 -running
00:00:02 +running B
dummy data
00:00:03 +running B
00:00:04 -running
00:00:05 +running C
dummy data
00:00:06 -running
00:00:07 +running D
10:00:08 -running
"""
m = re.findall(r"(\d+:\d+:\d+) \+running (\w+).*?(\d+:\d+:\d+) \-running",data,re.DOTALL)
print(len(m))
df = pd.DataFrame(m,columns=['ts1','name','ts2'])
print(df)
</code></pre>
<p>Current output:</p>
<pre><code> ts1 name ts2
0 00:00:00 A 00:00:01
1 00:00:02 B 00:00:04
2 00:00:05 C 00:00:06
3 00:00:07 D 10:00:08
</code></pre>
<p>Expected:</p>
<pre><code> ts1 name ts2
0 00:00:00 A 00:00:01
1 00:00:02 B NA
2 00:00:03 B 00:00:04
3 00:00:05 C 00:00:06
4 00:00:07 D 10:00:08
</code></pre>
<p>What's the proper way to get such results in Python? I don't care whether it uses <code>findall</code> or not.</p>
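A single regex struggles here because the grouping depends on state; one way to sketch it is to scan the events in order and emit a row with a missing end whenever a second start arrives (mismatched ends, if they can occur, would need a symmetric rule):

```python
import re
import pandas as pd

data = """
00:00:00 +running A
dummy data
00:00:01 -running
00:00:02 +running B
dummy data
00:00:03 +running B
00:00:04 -running
00:00:05 +running C
dummy data
00:00:06 -running
00:00:07 +running D
10:00:08 -running
"""

rows = []
open_evt = None  # (ts, name) of the last start still waiting for an end
for ts, sign, name in re.findall(r"(\d+:\d+:\d+) ([+-])running ?(\w*)", data):
    if sign == "+":
        if open_evt is not None:          # previous start never ended
            rows.append((*open_evt, None))
        open_evt = (ts, name)
    elif open_evt is not None:
        rows.append((*open_evt, ts))
        open_evt = None
if open_evt is not None:                  # trailing unmatched start
    rows.append((*open_evt, None))

df = pd.DataFrame(rows, columns=["ts1", "name", "ts2"])
print(df)
```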
|
<python><pandas>
|
2024-05-13 20:09:40
| 2
| 9,011
|
lucky1928
|
78,474,448
| 7,339,624
|
OSError: [model] does not appear to have a file named config.json
|
<p>I want to load a huggingface model. <a href="https://huggingface.co/laion/CLIP-convnext_large_d_320.laion2B-s29B-b131K-ft-soup" rel="nofollow noreferrer">The model</a> I want to load has about 150K downloads so I don't think there is any problem with the model itself.</p>
<p>With both of the loading snippets below I get the same error:</p>
<pre><code>from transformers import AutoModel
AutoModel.from_pretrained("laion/CLIP-convnext_large_d_320.laion2B-s29B-b131K-ft-soup")
</code></pre>
<p>And</p>
<pre><code>from transformers import CLIPProcessor, CLIPModel
model_id = "laion/CLIP-convnext_large_d_320.laion2B-s29B-b131K-ft-soup"
processor = CLIPProcessor.from_pretrained(model_id)
model = CLIPModel.from_pretrained(model_id)
</code></pre>
<p>With both I get:</p>
<pre><code>OSError: laion/CLIP-convnext_large_d_320.laion2B-s29B-b131K-ft-soup does not appear to have a file named preprocessor_config.json. Checkout 'https://huggingface.co/laion/CLIP-convnext_large_d_320.laion2B-s29B-b131K-ft-soup/main' for available files.
</code></pre>
<p>Any help to load the model would be appreciated.</p>
|
<python><machine-learning><deep-learning><huggingface-transformers><huggingface>
|
2024-05-13 19:38:47
| 1
| 4,337
|
Peyman
|
78,474,433
| 719,276
|
Inkscape overwrites extensions.xml at startup thus it does not take the python interpreter into account
|
<p>I followed <a href="https://inkscape.gitlab.io/extensions/documentation/authors/interpreters.html" rel="nofollow noreferrer">this guide</a> to define a custom python interpreter (my system default python), so I edited <code>/Users/amasson/Library/Application Support/org.inkscape.Inkscape/config/inkscape/preferences.xml</code> with the following:</p>
<pre><code>...
<group
id="extensions"
python-interpreter="/Library/Frameworks/Python.framework/Versions/3.12/bin/python3"
...
</code></pre>
<p>But when I start inkscape, the file gets overwritten and the <code>python-interpreter="/Library/Frameworks/Python.framework/Versions/3.12/bin/python3"</code> part is removed.</p>
<p>Thus my extension (inkscape-silhouette) does not work.</p>
<p>What can I do to set the python interpreter?</p>
|
<python><inkscape>
|
2024-05-13 19:35:25
| 0
| 11,833
|
arthur.sw
|
78,474,388
| 2,986,153
|
generate and unnest a list column of random values in polars using np.random.binomial()
|
<p>I am trying to generate arrays of varying length within a polars dataframe (i.e., a list column).</p>
<p>For each <code>cluster_id</code> I would like to generate a series of 0s and 1s of length <code>trials</code>, which varies by <code>cluster_id</code>:</p>
<pre><code>import numpy as np
from scipy import stats  # needed for stats.truncnorm below
import polars as pl
from polars import col
CLUSTERS = 200
MEAN_TRIALS = 20
MU = 0.5
SIGMA = 0.1
df_cluster = pl.DataFrame({'cluster_id': range(1, CLUSTERS+1)})
df_cluster = df_cluster.with_columns(
mu = stats.truncnorm(a=0, b=1, loc=MU, scale=SIGMA).rvs(size=CLUSTERS),
trials = np.random.poisson(lam=MEAN_TRIALS, size=CLUSTERS)
)
</code></pre>
<p>I can use <code>np.random.binomial()</code> to generate a new column when <code>size=1</code></p>
<pre><code>df_cluster.with_columns(
pl.struct(["mu", "trials"]).apply(lambda x: np.random.binomial(n=1, p=x['mu'], size=1)).alias('paid')
)
</code></pre>
<p>But I am unable to use <code>np.random.binomial()</code> to generate a new column when <code>size=x['trials']</code></p>
<pre><code>df_cluster.with_columns(
pl.struct(["mu", "trials"]).apply(lambda x: np.random.binomial(n=1, p=x['mu'], size=x['trials'])).alias('paid')
)
</code></pre>
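The underlying per-row generation itself is straightforward outside polars; below is a stdlib sketch of drawing a variable-length 0/1 series per row (`np.random.binomial(n=1, p=mu, size=trials)` is the vectorized equivalent of the inner list comprehension), which could then be attached as a list column with `pl.Series`. The parameter values here are made up for illustration:

```python
import random

random.seed(0)

# assumed per-cluster parameters, stand-ins for the mu/trials columns
clusters = [
    {"cluster_id": 1, "mu": 0.4, "trials": 3},
    {"cluster_id": 2, "mu": 0.6, "trials": 5},
]

for c in clusters:
    # one Bernoulli(mu) draw per trial, i.e. a binomial with n=1
    c["paid"] = [1 if random.random() < c["mu"] else 0 for _ in range(c["trials"])]

print([len(c["paid"]) for c in clusters])
```

Each row ends up with a list whose length equals its own `trials` value, which is exactly the ragged shape a polars list column holds.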
|
<python><dataframe><numpy><python-polars>
|
2024-05-13 19:24:04
| 1
| 3,836
|
Joe
|
78,474,175
| 3,063,547
|
Problems with build.gradle file when trying to integrate chaquopy into Android Studio Jellyfish
|
<p>I am trying to integrate chaquopy into Android Studio Jellyfish to allow for the use of Python but there are problems with the module-level build.gradle.kts file and I get these errors when I sync the gradle files:</p>
<pre><code>Unable to load class 'com.android.build.api.variant.Variant'
com.android.build.api.variant.Variant
Gradle's dependency cache may be corrupt (this sometimes occurs after a network connection timeout.)
Re-download dependencies and sync project (requires network)
The state of a Gradle build process (daemon) may be corrupt. Stopping all Gradle daemons may solve this problem.
Stop Gradle build processes (requires restart)
Your project may be using a third-party plugin which is not compatible with the other plugins in the project or the version of Gradle requested by the project.
In the case of corrupt Gradle processes, you can also try closing the IDE and then killing all Java processes.
</code></pre>
<p>This is my top-level build.gradle.kts file:</p>
<pre><code>plugins {
id("com.chaquo.python") version "15.0.1" apply false
}
</code></pre>
<p>This is my module-level build.gradle.kts file:</p>
<pre><code>plugins {
alias(libs.plugins.android.application)
id("com.chaquo.python")
}
android {
namespace = "com.example.pythontestwithjava"
compileSdk = 34
flavorDimensions += "pyVersion"
productFlavors {
create("py310") { dimension = "pyVersion" }
create("py311") { dimension = "pyVersion" }
}
defaultConfig {
applicationId = "com.example.pythontestwithjava"
minSdk = 24
targetSdk = 34
versionCode = 1
versionName = "1.0"
testInstrumentationRunner = "androidx.test.runner.AndroidJUnitRunner"
ndk {
// On Apple silicon, you can omit x86_64.
abiFilters += listOf("arm64-v8a", "x86_64")
}
python {
version "3.11"
pip {
// A requirement specifier, with or without a version number:
install "scipy"
install "requests==2.24.0"
// An sdist or wheel filename, relative to the project directory:
// install "MyPackage-1.2.3-py2.py3-none-any.whl"
// A directory containing a setup.py, relative to the project
// directory (must contain at least one slash):
// install "./MyPackage"
// "-r"` followed by a requirements filename, relative to the
// project directory:
// install "-r", "requirements.txt"
}
}
}
buildTypes {
release {
isMinifyEnabled = false
proguardFiles(
getDefaultProguardFile("proguard-android-optimize.txt"),
"proguard-rules.pro"
)
}
}
compileOptions {
sourceCompatibility = JavaVersion.VERSION_1_8
targetCompatibility = JavaVersion.VERSION_1_8
}
buildFeatures {
viewBinding = true
}
}
chaquopy {
defaultConfig { }
productFlavors { }
sourceSets { }
productFlavors {
getByName("py310") { version = "3.10" }
getByName("py311") { version = "3.11" }
}
}
dependencies {
implementation(libs.appcompat)
implementation(libs.material)
implementation(libs.constraintlayout)
implementation(libs.navigation.fragment)
implementation(libs.navigation.ui)
testImplementation(libs.junit)
androidTestImplementation(libs.ext.junit)
androidTestImplementation(libs.espresso.core)
}
</code></pre>
<p>My specific errors (highlighted red in Android Studio) in the module-level build.gradle.kts file are:</p>
<pre><code>undefined reference: python
undefined reference: pip
undefined reference: install
</code></pre>
<p>Please let me know how I can fix these errors. TIA.</p>
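For reference, my understanding is that in the Kotlin DSL the Chaquopy settings do not go inside <code>android.defaultConfig</code> (which is why <code>python</code>, <code>pip</code>, and <code>install</code> are unresolved there) but inside the top-level <code>chaquopy</code> block, using function-call syntax. A hedged sketch of what that might look like (verify against the Chaquopy documentation for your exact plugin version):

```kotlin
chaquopy {
    defaultConfig {
        pip {
            // Kotlin DSL uses function-call syntax instead of Groovy's `install "scipy"`
            install("scipy")
            install("requests==2.24.0")
        }
    }
    productFlavors {
        getByName("py310") { version = "3.10" }
        getByName("py311") { version = "3.11" }
    }
}
```

The `python { ... }` block inside `android.defaultConfig` would then be removed entirely.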
|
<python><android><chaquopy>
|
2024-05-13 18:30:05
| 1
| 853
|
user3063547
|
78,474,144
| 2,444,008
|
Getting nested models fields. Serializer error. Object of type QuerySet is not JSON serializable
|
<p>I'm having trouble getting a nested object in Django. My main purpose is generating a JSON object from nested Django objects.</p>
<p>I have models as below:</p>
<pre><code>class SurveyAnswer(models.Model):
id = models.UUIDField(default=uuid.uuid4, unique=True,
primary_key=True, editable=False)
survey=models.ForeignKey("Survey",on_delete=models.CASCADE)
answer=models.ForeignKey("Answer",on_delete=models.CASCADE)
total_count=models.IntegerField(null=True,blank=True)
total_percentage=models.FloatField(null=True,blank=True)
class Meta:
db_table="SurveyAnswer"
class Answer(models.Model):
id = models.UUIDField(default=uuid.uuid4, unique=True,
primary_key=True, editable=False)
name= models.CharField(max_length=100)
def __str__(self) -> str:
return self.name
class Meta:
db_table="Answer"
</code></pre>
<p>I want to get all SurveyAnswer records with the related Answer model (just the name field of the Answer model). To be able to do that I created the serializer below, but this time I got the error 'Object of type QuerySet is not JSON serializable'.</p>
<p>What am I supposed to do? Is there any easy way to do that?</p>
<pre><code>class SurveyAnswerSerializer(serializers.ModelSerializer):
answers=serializers.StringRelatedField()
class Meta:
model=SurveyAnswer
fields=["id","total_count","total_percentage","answers",]
</code></pre>
<p>I tried blitzoc's suggestion, but this time it returns an empty object.</p>
<pre><code> survey_answers=list(SurveyAnswer.objects.filter(survey=survey).select_related('answer'))
</code></pre>
<pre><code>data = SurveyAnswerSerializer(survey_answers).data
</code></pre>
<p>The data is empty. Normally I am able to get the data without the serialization step.</p>
|
<python><django><django-rest-framework>
|
2024-05-13 18:22:09
| 2
| 1,093
|
ftdeveloper
|
78,474,040
| 13,142,245
|
Save Python dictionary to Json
|
<p>I'm trying to save a python dictionary to disk as a json file. From this <a href="https://stackoverflow.com/questions/12309269/how-do-i-write-json-data-to-a-file">Q/A</a>, the answer should be</p>
<pre><code>import json
with open('data.json', 'w') as f:
json.dump(data, f)
</code></pre>
<p>However, in my application (AWS Lambda) the error I receive is</p>
<pre><code>{
"errorMessage": "a bytes-like object is required, not 'str'",
"errorType": "TypeError",
</code></pre>
<p>I have verified that data's type is indeed a dictionary. So I'm curious why <code>json.dump(data, f)</code> is treating data as if it were a string, not bytes. Perhaps the python Lambda runtime introduces some unexpected behavior (I do have ephemeral storage enabled)</p>
<pre><code>def upload(data):
date = datetime.today().strftime('%Y%m%d')
s3=boto3.resource('s3')
with open('/tmp/output.json', 'wb') as temp_file:
json.dump(data, temp_file)
data_ = open(temp_file,'rb')
s3.put_object(Body=data_,
Bucket='bucket', #replaced
Key=f'Solution/{date}/data.json')
os.remove(temp_file)
</code></pre>
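The traceback is consistent with the file mode rather than the data: `json.dump` writes `str`, so a file opened with `'wb'` raises exactly "a bytes-like object is required, not 'str'". Separately, `open(temp_file, 'rb')` is handed the file object where a path string is expected, and `os.remove(temp_file)` has the same problem. A minimal stdlib sketch of the write-then-read-back portion (the S3 call is left out):

```python
import json
import os
import tempfile

data = {"solution": [1, 2, 3], "ok": True}
path = os.path.join(tempfile.gettempdir(), "output.json")

# json.dump produces str, so open in text mode 'w', not 'wb'
with open(path, "w") as f:
    json.dump(data, f)

# re-open by *path* (not the closed file object) to get bytes for upload
with open(path, "rb") as f:
    payload = f.read()

os.remove(path)
print(json.loads(payload))
```

`payload` is then the `bytes` body to pass to the upload call, and the temp file is removed by path afterwards.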
|
<python><json><amazon-web-services><aws-lambda>
|
2024-05-13 17:58:52
| 0
| 1,238
|
jbuddy_13
|
78,474,029
| 2,138,913
|
Drag and Drop a generated file (e.g. MIDI) from a Pyside6 or Qt Application to another application (like a DAW)
|
<p>I would like to write a simple Qt app (in Python with PySide6) that will generate a midi file such that the user can drag it directly into their DAW (e.g. Reaper, Ableton, Bitwig).</p>
<p>I have got</p>
<pre class="lang-py prettyprint-override"><code> def mousePressEvent(self,e):
if e.button() == Qt.LeftButton:
drag = QDrag(self)
mimeData = QMimeData()
mimeData.setData("audio/midi",mid)
drag.setMimeData(mimeData)
drag.exec(Qt.CopyAction)
</code></pre>
<p>in my <code>mousePressEvent</code>, and this seems to get halfway there: Reaper reacts as if a MIDI file is being dragged over. But what do I do next so that the MIDI arrives in Reaper when I release the drag?</p>
|
<python><qt><drag-and-drop>
|
2024-05-13 17:56:56
| 1
| 1,292
|
John Allsup
|
78,473,894
| 2,977,164
|
Yolov8 CNN model in Shiny mixing R and Python
|
<p>I want to create a Shiny app using R and Python, because the YOLOv8 model was developed in Python. But when I try to use my app by calling a Python file (<code>yolov8_loader.py</code>) in my app directory, it doesn't work.</p>
<p>In my example, I try:</p>
<pre><code>library(shiny)
library(shinydashboard)
library(rsconnect)
library(tidyverse)
library(reticulate)
library(purrr)
library(stringr)
# Create a py_env environment and install: pip install ultralytics
setwd('C:/Users/IFMT/anaconda3/envs/py_env')
renv::init()
Sys.setenv(RENV_PATHS_CACHE = 'C:/Users/IFMT/anaconda3/envs/py_env')
renv::use_python(type = 'conda', name = 'py_env')
#
#Create a new Python file, e.g., yolov8_loader.py, with the following content inside the py_env environment:
# import ultralytics as yolo
#
# def load_model():
# model = yolo.YOLO("https://github.com/ultralytics/assets/releases/download/v0.0.0/yolov8n.pt")
# return model
# Load the Python module
yolov8_loader <- source_python("yolov8_loader.py")
# Import the load_model function
load_model <- yolov8_loader$load_model
# Load the model
model <- load_model()
# Define the UI
ui <- fluidPage(
# App title ----
titlePanel("Hello YOLOv8!"),
# Sidebar layout with input and output definitions ----
sidebarLayout(
# Sidebar panel for inputs ----
sidebarPanel(
# Input: File upload
fileInput("image_path", label = "Input a JPEG image")
),
# Main panel for displaying outputs ----
mainPanel(
# Output: Histogram ----
textOutput(outputId = "prediction"),
plotOutput(outputId = "image")
)
)
)
# Define server logic required to draw a histogram ----
server <- function(input, output) {
image <- reactive({
req(input$image_path)
jpeg::readJPEG(input$image_path$datapath)
})
output$prediction <- renderText({
img <- image() %>%
array_reshape(., dim = c(1, dim(.), 1))
# Use the loaded model to make predictions
prediction <- model(img)
paste0("The predicted class is ", prediction)
})
output$image <- renderPlot({
plot(as.raster(image()))
})
}
shinyApp(ui, server)
</code></pre>
<p>But the <code>yolov8_loader</code> object is always:</p>
<pre><code>
NULL
</code></pre>
<p>Any help with this would be appreciated.</p>
<p>Thanks in advance,</p>
<p>Alexandre</p>
|
<python><r><shiny><yolo><yolov8>
|
2024-05-13 17:29:35
| 1
| 1,883
|
Leprechault
|
78,473,728
| 1,644,395
|
Why is using the "distance.cosine" function from SciPy faster than directly executing its Python code?
|
<p>I am executing the below two code snippets to calculate the cosine similarity of two vectors where the vectors are the same for both executions and the code for the second one is mainly the code SciPy is running (see <a href="https://github.com/scipy/scipy/blob/1c0bdc44fff93c5c6277a3cb4dac66b4bf07bfcf/scipy/spatial/distance.py#L575-L694" rel="nofollow noreferrer">scipy cosine implementation</a>).</p>
<p>The thing is that the SciPy call runs slightly faster (~0.55 ms vs. ~0.69 ms) and I don't understand why, as my implementation is SciPy's with some checks removed, which, if anything, I would expect to make it faster.</p>
<p>Why is SciPy's function faster?</p>
<pre class="lang-py prettyprint-override"><code>
import time
import math
import numpy as np
from scipy.spatial import distance
SIZE = 6400000
EXECUTIONS = 10000
path = "" # From https://github.com/joseprupi/cosine-similarity-comparison/blob/master/tools/vectors.csv
file_data = np.genfromtxt(path, delimiter=',')
A,B = np.moveaxis(file_data, 1, 0).astype('f')
accum = 0
start_time = time.time()
for _ in range(EXECUTIONS):
cos_sim = distance.cosine(A,B)
print(" %s ms" % (((time.time() - start_time) * 1000)/EXECUTIONS))
cos_sim_scipy = cos_sim
def cosine(u, v, w=None):
uv = np.dot(u, v)
uu = np.dot(u, u)
vv = np.dot(v, v)
dist = 1.0 - uv / math.sqrt(uu * vv)
# Clip the result to avoid rounding error
return np.clip(dist, 0.0, 2.0)
accum = 0
start_time = time.time()
for _ in range(EXECUTIONS):
cos_sim = cosine(A,B)
print(" %s ms" % (((time.time() - start_time) * 1000)/EXECUTIONS))
cos_sim_manual = cos_sim
print(np.isclose(cos_sim_scipy, cos_sim_manual))
</code></pre>
<p>EDIT:</p>
<p>The code to generate A and B is below and the exact files I am using can be found at:</p>
<p><a href="https://github.com/joseprupi/cosine-similarity-comparison/blob/master/tools/vectors.csv" rel="nofollow noreferrer">https://github.com/joseprupi/cosine-similarity-comparison/blob/master/tools/vectors.csv</a></p>
<pre class="lang-py prettyprint-override"><code>def generate_random_vector(size):
"""
Generate 2 random vectors with the provided size
and save them in a text file
"""
A = np.random.normal(loc=1.5, size=(size,))
B = np.random.normal(loc=-1.5, scale=2.0, size=(size,))
vectors = np.stack([A, B], axis=1)
np.savetxt('vectors.csv', vectors, fmt='%f,%f')
generate_random_vector(640000)
</code></pre>
<p>Setup:</p>
<ul>
<li>AMD Ryzen 9 3900X 12-Core Processor</li>
<li>64GB RAM</li>
<li>Debian 12</li>
<li>Python 3.11.2</li>
<li>scipy 1.13.0</li>
<li>numpy 1.26.4</li>
</ul>
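As a side note on methodology: timing with `time.time()` around a Python-level loop is noisy at this scale, and a ~0.1 ms gap can disappear or invert under `timeit` with min-of-repeats, which discards scheduler and warm-up noise. A stdlib sketch of that measurement pattern (the `cosine` here is a pure-Python stand-in just to have something to time, not the NumPy code from the question):

```python
import math
import timeit

def cosine(u, v):
    # pure-Python cosine distance, used only as the timing target
    uv = sum(a * b for a, b in zip(u, v))
    uu = sum(a * a for a in u)
    vv = sum(b * b for b in v)
    return 1.0 - uv / math.sqrt(uu * vv)

u = [1.0, 0.0, 1.0]
v = [0.0, 1.0, 1.0]

# repeat() returns several independent timings; the minimum is the
# least-noisy estimate, unlike averaging wall-clock time over one loop
best = min(timeit.repeat(lambda: cosine(u, v), number=1000, repeat=5))
print(f"{best * 1000:.3f} ms per 1000 calls, distance = {cosine(u, v):.4f}")
```

Running both the SciPy and the hand-rolled version through the same `timeit.repeat` harness makes the comparison much more trustworthy than two separate `time.time()` loops.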
|
<python><numpy><scipy>
|
2024-05-13 16:51:05
| 2
| 349
|
joseprupi
|
78,473,702
| 243,031
|
netaddr remove `is_private` method from BaseIP class in new version
|
<p>I was using <code>netaddr</code> version <code>0.10.1</code>, and its <code>BaseIP</code> class has method <a href="https://github.com/netaddr/netaddr/blob/0.10.1/netaddr/ip/__init__.py#L158-L186" rel="nofollow noreferrer"><code>is_private</code></a>. We are using that method as <code>if mynetwork.is_private(): # DO SOMETHING</code>.</p>
<p>Now we have upgraded the <code>netaddr</code> package to <code>1.2.1</code>. In this new version, <a href="https://github.com/netaddr/netaddr/blob/1.2.1/netaddr/ip/__init__.py#L23-L224" rel="nofollow noreferrer"><code>BaseIP</code></a> has no method called <code>is_private</code>; even <code>IPV4_PRIVATEISH</code> and <code>IPV6_PRIVATEISH</code> were removed.</p>
<p>I want to check is there any other method available which we can use for <code>is_private</code>? I tried <code>if mynetwork in IPV4_PRIVATE_USE: # DO SOMETHING</code> but this doesn't cover all cases covered by <code>is_private</code> method in <code>0.10.1</code>.</p>
<p>Let me know if there is a Pythonic way to implement this <code>is_private</code> logic again.</p>
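If depending on netaddr's exact "privateish" semantics is not required, the standard library's `ipaddress` module exposes an `is_private` property on both addresses and networks, which may be close enough. Note this is a stand-in: its registry-based semantics are not guaranteed to match netaddr 0.10.1 case for case, so the edge cases should be checked against the old behavior:

```python
import ipaddress

def is_private(value):
    """Best-effort stand-in: True if the address/network is private per ipaddress."""
    try:
        # strict=False accepts host addresses as /32 (or /128) networks too
        return ipaddress.ip_network(str(value), strict=False).is_private
    except ValueError:
        return False

print(is_private("10.0.0.1"))        # True
print(is_private("192.168.1.0/24"))  # True
print(is_private("8.8.8.8"))         # False
```

`str(value)` lets the same helper accept netaddr `IPAddress`/`IPNetwork` objects as well as plain strings.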
|
<python><package><inetaddress>
|
2024-05-13 16:46:34
| 0
| 21,411
|
NPatel
|
78,473,567
| 12,554,903
|
WebSockets down stream is Rate-Limited Channels
|
<p>We have a game which is running on our Django server. All calculations are made on the server.</p>
<p>We use WebSocket to communicate the game data to our front-end clients, which do the rendering.</p>
<p>But we have a problem. With the channels <a href="https://channels.readthedocs.io/en/latest/topics/consumers.html#asyncwebsocketconsumer" rel="nofollow noreferrer"><code>AsyncWebsocketConsumer</code></a>, we have a <code>while True</code> loop that sends a message 60 times per second to our client (we only have one for the moment).</p>
<p>After receiving some data, the client stops receiving anything (but we know that the server continues to send data).</p>
<p>I read a <a href="https://stackoverflow.com/questions/18782179/rate-limiting-the-data-sent-down-websockets">StackOverflow post</a> that talks about a <code>bufferedAmount</code> and volatile messages, but I'd like to know if we can avoid using a send queue when using <code>django-channels</code>, drop messages that can't be sent/received by the server/client, and let the others through.</p>
<p>Or if anyone has a better transmission solution for a game that needs to run server-side (like using <code>UDP</code> with another protocol available on the browser), but I don't know how.</p>
|
<python><python-3.x><django-channels>
|
2024-05-13 16:16:53
| 0
| 365
|
Pioupia
|
78,473,493
| 19,283,541
|
Azure FunctionApp functions not listed when publishing
|
<p>I've got an Azure Functionapp that works fine locally, but I'm having trouble publishing it to Azure.</p>
<p>I'm using the Python v2 programming model. My project folder looks like this:</p>
<pre><code>project_folder
├── .venv
├── .vscode
├── .funcignore
├── .gitignore
├── my_code_1.py
├── my_code_2.py
├── my_code_3.py
├── function_app.py
├── host.json
├── local.settings.json
└── requirements.txt
</code></pre>
<p>Where <code>function_app.py</code> contains multiple functions (both blob trigger and HTTP trigger types), and the <code>my_code_x.py</code> modules contain helper code and various utilities.</p>
<p>I run the functions locally using a basic web app (html page) which connects to my Azure account for storage and models, etc. When I run it locally it works as intended.</p>
<p>The problem is when I try to publish the function app.</p>
<p>I can create it and it seems to succeed, using the following command in the terminal:</p>
<pre><code>az functionapp create --consumption-plan-location <xxx> --name <xxx> --os-type linux --resource-group <xxx> --runtime python --storage-account <xxx> --functions-version 4
</code></pre>
<p>Then I publish using:</p>
<pre><code>func azure functionapp publish <my_project_name>
</code></pre>
<p>It seems to work, and pip installs all the requirements and so on, and then says:</p>
<pre><code>...
Deployment successful. deployer = Push-Deployer deploymentPath = Functions App ZipDeploy. Extract zip. Remote build.
Remote build succeeded!
[2024-05-13T15:38:03.688Z] Syncing triggers...
Functions in <my_project_name>:
</code></pre>
<p>I expect it to then list the functions in the project, but it doesn't list anything. When I inspect the FunctionApp in the Azure portal, there are no functions listed under "functions".</p>
|
<python><azure><azure-functions><azure-cli>
|
2024-05-13 16:02:53
| 1
| 309
|
radishapollo
|
78,473,387
| 9,873,381
|
I am using the YOLOv5 model provided by Ultralytics in PyTorch. How can I see which images the model is struggling with?
|
<p>This is the <a href="https://github.com/ultralytics/yolov5" rel="nofollow noreferrer">YOLOv5</a> implementation I am talking about and <a href="https://github.com/ultralytics/yolov5/blob/master/val.py" rel="nofollow noreferrer">this</a> is the file I am using to test the model.</p>
<p>For some classes, it is performing decently enough. However, for the rest of the classes, it is not doing a great job. I would like to see the type of images where the model struggles.</p>
<p>How can I get the name of the images or the file paths?</p>
<p>I tried running <a href="https://github.com/ultralytics/yolov5/blob/master/val.py" rel="nofollow noreferrer">this file</a> with the --save-txt parameter but I do not understand its meaning.</p>
<p>Thank you!</p>
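I don't know <code>val.py</code>'s internals well enough to point at an exact flag, but however the per-image scores end up being captured (for example by validating with batch size 1 and recording a loss or confidence per file), ranking the hard images afterwards is plain Python. A sketch with made-up file names and numbers:

```python
# hypothetical per-image losses collected during a validation pass
per_image_loss = {
    "img_001.jpg": 0.12,
    "img_002.jpg": 0.87,
    "img_003.jpg": 0.45,
}

# highest loss first = the images the model struggles with most
worst = sorted(per_image_loss, key=per_image_loss.get, reverse=True)
print(worst[:2])
```

The same pattern works with any per-image metric (mAP contribution, number of missed boxes, etc.) as the dictionary values.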
|
<python><pytorch><object-detection><yolov5>
|
2024-05-13 15:41:27
| 1
| 672
|
Skywalker
|
78,473,218
| 7,746,472
|
Access network shares from Windows, Mac and Linux
|
<p>I need to read files from a network share using Python. The Python programm needs to work on macOS, Windows (the systems we use across our team in development) and Linux (which the server is running).</p>
<p>In macOS I can read from the mounted network share like this:</p>
<pre><code>file_path = '//Volumes/data/path/to/files'
list_of_files = os.listdir(file_path)
</code></pre>
<p>In Windows I can do</p>
<pre><code>file_path = r'V:\path\to\files'  # raw string, so \t and \f are not treated as escapes
list_of_files = os.listdir(file_path)
</code></pre>
<p>or alternatively</p>
<pre><code>file_path = '//our.server.de/share/data/path/to/files'
list_of_files = os.listdir(file_path)
</code></pre>
<p>I'm sure there is something similar in Linux (which I haven't tried yet).</p>
<p>I suppose I could somehow determine the current OS and use if-statements, but I was hoping there is a better way to achieve this.</p>
<p>I was hoping I could get the files somehow over the network instead of over the local file system (if that makes sense).</p>
<p>Any hints are greatly appreciated!</p>
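One common pattern, short of a full network client, is to isolate the per-OS mount point in a single mapping keyed on `sys.platform` and build everything else with `pathlib`, so the platform branching lives in exactly one place. A sketch with assumed mount points (adjust the roots to your actual mounts):

```python
import sys
from pathlib import Path

# assumed mount points per platform; these are illustrative, not real paths
SHARE_ROOTS = {
    "darwin": "/Volumes/data",
    "win32": r"\\our.server.de\share\data",
    "linux": "/mnt/data",
}

def share_path(*parts, platform=None):
    """Join path parts onto the current (or given) platform's share root."""
    root = SHARE_ROOTS[platform or sys.platform]
    return Path(root).joinpath(*parts)

print(share_path("path", "to", "files", platform="linux"))
```

An alternative that avoids OS-level mounts entirely is an SMB client library such as `smbprotocol` or `pysmb`, at the cost of handling credentials in code.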
|
<python><python-3.x><network-programming>
|
2024-05-13 15:11:24
| 1
| 1,191
|
Sebastian
|
78,473,210
| 10,083,382
|
Specify a custom storage path when registering Pandas DataFrame to Azure Blob Storage
|
<p>I want to register a Pandas data frame as a tabular dataset into Azure Blob Storage. I do not want to create a unique path every time I register a new version of that dataset. Secondly, I need to specify the path which would depend on date whenever I upload a dataset. E.g. if I upload today the path should be <code>'data/{}'.format(current_date)</code></p>
<p>Below is the code that I used to fulfill both the requirements.</p>
<pre><code>dataset = Dataset.Tabular.register_pandas_dataframe(
dataframe=df,
target=(ws.get_default_datastore(), 'data/{}'.format(current_date)),
name='test dataset',
description='test dataset',
make_target_path_unique = False
)
</code></pre>
<p><code>target</code> parameter should make a unique folder for each day. <code>make_target_path_unique</code> would stop creating a random storage URI path.</p>
<p>The above code fails stating <code>TypeError: register_pandas_dataframe() got an unexpected keyword argument 'make_target_path_unique'</code></p>
<p>I do not want the storage URI to change randomly whenever another version of that dataset is registered. E.g. in
<code>https://xxxx.blob.core.windows.net/azureml-blobstore-xxx/managed-dataset/xxxx/</code> the last 4 <code>xxxx</code> change randomly. How can I fix this issue?</p>
|
<python><pandas><azure><azure-blob-storage><azure-machine-learning-service>
|
2024-05-13 15:09:39
| 1
| 394
|
Lopez
|
78,473,182
| 16,155,080
|
vLLM + FastAPI async streaming response - fastapi can't handle vllm speed and bottlenecks
|
<p>I have a chatbot web app with the following components :</p>
<ul>
<li>a frontend</li>
<li>a FastAPI back-end that handles requests</li>
<li>a vLLM <a href="https://github.com/vllm-project/vllm/blob/main/vllm/entrypoints/api_server.py" rel="nofollow noreferrer">api_server</a> running a local Llama model on a H100.</li>
</ul>
<p>The architecture is the following :</p>
<ol>
<li>front queries back.</li>
<li>Back does a bit of preprocessing then queries the vLLM server with stream parameter.</li>
<li>Back then listen to vllm tokens streaming responses and stream it himself back to the front-end using FastAPI.StreamingResponse.</li>
<li>Front display tokens as they are generated.</li>
</ol>
<p>Everything works fine when it's one request at a time. Problems start when we have multiple requests. The FastAPI back-end works fully asynchronously to handle concurrent requests. The problem is that whenever it starts listening to vLLM, the stream of tokens is so fast that it never gives resources back to handle other incoming requests. The second request is only sent when the first one is done, and that can take long.</p>
<p>This is how I query and listen to vllm streaming answer :</p>
<pre><code>async def call_infer_llm(request: LlmRequestModel):
data = {
"model": "/usr/Workplace/models/llama3-8b-Instruct/",
"messages": request.messages,
"temperature": request.temperature,
"top_p":request.top_p,
        "stream": True,
}
async with httpx.AsyncClient() as client:
async with client.stream('POST', URL+'/v1/chat/completions', data=json.dumps(data)) as resp:
async for r in resp.aiter_bytes():
text = r.decode('utf-8')
#asyncio.sleep(0.05)
new_tokens = json.loads(text.split('data:')[1])
yield {'tokens':new_tokens['choices'][0]['delta'].get('content','')}
</code></pre>
<p>This is the code calling this function and queuing results :</p>
<pre><code>answer_generator = call_infer_llm(c_request)
async for tokens in answer_generator:
queue_out.put_nowait({'status':'PARTIAL_RESULT',
'result': tokens['tokens']})
</code></pre>
<p>and this is the endpoint consuming the queues items and streaming the answer to the front :</p>
<pre><code>@app.post("/api/llm-request")
async def llm_endpoint(data:str = Form(...)):
async def handle_llm_request():
# For every incoming request, we create an output queue that this generator will then listen to
response_queue = asyncio.Queue()
request_queue.put_nowait([response_queue, data])
# Wait for message in the output queue
try:
while True:
await asyncio.sleep(0.1)
message = await response_queue.get()
if message:
if message['status'] in ['IN_PROGRESS', 'PARTIAL_RESULT', 'PROCESS_FILES']:
yield json.dumps(message)
elif message['status'] in ['DONE','ERROR']:
yield json.dumps(message)
break
else:
break
except asyncio.CancelledError:
print('task cancelled')
# Return a EventStream StreamingResponse with a generator that yields messages every now and then.
return StreamingResponse(handle_llm_request(), media_type="application/x-ndjson")
</code></pre>
<p>I know this is not a vLLM issue because during a request on the webapp, I can easily query 10's of request to the vllm endpoint using curl and get answers quickly. So the problem must be coming from FastAPI creating a bottleneck with the StreamingResponse.</p>
<p>I wonder how I could bypass this bottleneck problem, this is the solutions I tried that didn't work :</p>
<ul>
<li>implementing an <code>asyncio.sleep(0.05)</code> in the back-end listening loop to force giving resources back to other requests.</li>
</ul>
<p>Options that I don't like but might end up trying :</p>
<ul>
<li>Modifying the vllm api_server.py file in order to send batches of generated tokens and lower the streaming rate</li>
<li>Making the front end querying vllm directly (and querying back before/after for preprocessing and postprocessing)</li>
</ul>
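On the drop-messages idea: `asyncio.Queue` itself can express "keep only the freshest chunks" if the producer evicts before it enqueues, which lets a slow consumer skip stale tokens instead of backing the whole stream up. A minimal stdlib sketch of that pattern:

```python
import asyncio

def put_latest(queue: asyncio.Queue, item) -> None:
    # evict the oldest pending item instead of blocking the fast producer
    if queue.full():
        try:
            queue.get_nowait()
        except asyncio.QueueEmpty:
            pass
    queue.put_nowait(item)

async def main():
    q = asyncio.Queue(maxsize=2)
    for chunk in range(5):          # fast producer (stand-in for the token stream)
        put_latest(q, chunk)
    # slow consumer only ever sees the newest chunks
    return [q.get_nowait() for _ in range(q.qsize())]

print(asyncio.run(main()))
```

Separately, note that a bare `asyncio.sleep(0.05)` without `await` creates a coroutine but never actually sleeps; `await asyncio.sleep(0)` is the usual way to yield control back to the event loop inside a hot streaming loop.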
|
<python><streaming><fastapi><vllm>
|
2024-05-13 15:04:04
| 0
| 641
|
Jules Civel
|
78,473,010
| 673,600
|
Hugging Face Datasets .map not working as expected
|
<p>I'm running a function over a dataset, but when I compute this, I seem to replace my existing dataset rather than adding to it. What is going wrong?</p>
<pre><code>dataset_c = Dataset.from_pandas(df_all[0:100])
</code></pre>
<p>That <code>dataset_c</code> looks like:</p>
<pre><code>Dataset({
features: ['_id', 'first_name', 'last_name', 'memail', 'company_phone_number', 'company_name', 'global_id'],
num_rows: 100
})
</code></pre>
<p>My function is:</p>
<pre><code>def concatenate_entities(examples):
body = ""
for col in fields_match_list:
if col in examples:
#if (examples[col] != ""):
body += examples[col] + "\n "
else:
body += "\n "
return {"text": str(body)}
</code></pre>
<p>Now, when I do the map.</p>
<pre><code>dataset_c = dataset_c.map(concatenate_entities)
</code></pre>
<p>I get the following:</p>
<pre><code>Dataset({
features: ['text'],
num_rows: 100
})
</code></pre>
<p>So, instead of adding to existing features, it has replaced the features.</p>
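In my experience `Dataset.map` merges the returned dict with the existing columns rather than replacing them, so this symptom suggests something else dropped them (a `remove_columns` argument, a batched/non-batched mismatch, or a stale cached result). As a plain-Python illustration of the merge-vs-replace distinction the row function should be compatible with (`fields_match_list` is assumed here):

```python
fields_match_list = ["first_name", "last_name", "company_name"]  # assumed

def concatenate_entities(example):
    body = ""
    for col in fields_match_list:
        # guard against missing keys *and* None values in a column
        body += (example.get(col) or "") + "\n "
    return {"text": body}

row = {"first_name": "Ada", "last_name": "Lovelace", "company_name": ""}
merged = {**row, **concatenate_entities(row)}  # what map is expected to yield
print(sorted(merged))
```

If the real dataset still comes back with only `text`, passing `load_from_cache_file=False` to `map` rules out a stale cache from an earlier run.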
|
<python><huggingface><huggingface-datasets>
|
2024-05-13 14:36:03
| 1
| 6,026
|
disruptive
|
78,472,992
| 3,104,974
|
Plotly renders complementary colors in Databricks dark mode
|
<p>Example code:</p>
<pre><code>import numpy as np
import plotly.graph_objs as go
import plotly.io as pio
pio.templates.default = 'plotly_white'
x = np.random.rand(2000)
y = np.random.randn(2000)
trc = go.Scatter(x=x, y=y, mode='markers', marker_color='blue', marker_size=4)
fig.show(renderer='databricks')
</code></pre>
<p>Output image below (tested on plotly 5.9 and 5.18 in Databricks runtime environment 15.1).</p>
<p>I found out that the <strong>problem vanishes</strong> when I switch from dark to <strong>light mode</strong> in the settings. Also, when saving the file from the plot menu the colors are always correct, just the display in dark mode is inverted.</p>
<p>Any idea how to prevent this overzealous dark mode behavior?</p>
<p><a href="https://i.sstatic.net/yrZ4XDv0.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/yrZ4XDv0.png" alt="Plotly graph with complementary colors" /></a></p>
|
<python><plotly><databricks>
|
2024-05-13 14:32:54
| 0
| 6,315
|
ascripter
|
78,472,939
| 222,977
|
Accessing symbolic tensor values within model or layer call function
|
<p>This is a follow up to <a href="https://stackoverflow.com/questions/78466591/tensorflow-input-of-varying-shapes?noredirect=1#comment138336725_78466591">Tensorflow input of varying shapes</a>.</p>
<p>In my particular situation, for each training sample, I have three different inputs. Two are scalars and the third is a sparse tensor.</p>
<p>The two scalars are each used to build tensors to use as state for LSTM. For instance:</p>
<pre><code>hidden_state = tf.tile(self.state_init_values, [scalar_1, 1])
cell_state = tf.zeros([scalar_1, 100])
</code></pre>
<p>My confusion right now is mainly about how to get the value of these scalars. The <code>tf.print</code> function is capable of that, but I'm not sure how. Here is a simplified example:</p>
<pre><code>class DataGenerator(tf.keras.utils.PyDataset):
def __init__(self):
pass
def __getitem__(self, index):
return tf.constant([1, 2, 3]), tf.constant([1, 1, 1])
def __len__(self):
return 1
class InfoModel(tf.keras.Model):
def __init__(self):
super(InfoModel, self).__init__()
def call(self, inputs, training=False):
#Prints: Tensor("info_model_9_1/strided_slice:0", shape=(), dtype=float32)
#scalar_1 = inputs[0]
#TypeError: int() argument must be a string, a bytes-like object or a real number, not 'SymbolicTensor'
#scalar_1 = int(inputs[0])
#AttributeError: 'SymbolicTensor' object has no attribute 'numpy'
#scalar_1 = inputs[0].numpy()
#print(scalar_1)
#This works...
tf.print(inputs[0])
return tf.convert_to_tensor([1,1,1])
gen = DataGenerator()
model = InfoModel()
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
model.fit(gen, epochs=1, verbose=1)
model.summary()
</code></pre>
<p>Is there a way to get at this value? Or am I going about this the wrong way to accomplish what I want?</p>
|
<python><tensorflow><keras>
|
2024-05-13 14:22:57
| 0
| 583
|
Dan
|
78,472,804
| 166,229
|
How to parse an optional operator with pyparsing library?
|
<p>I want to parse strings like <code>alpha OR beta gamma</code>, where for a missing operator (in this case between <code>beta</code> and <code>gamma</code>) an implicit <code>AND</code> is used.</p>
<p>Here is the code I tried:</p>
<pre class="lang-py prettyprint-override"><code>import pyparsing as pp
class Term:
def __init__(self, tokens: pp.ParseResults):
self.value = str(tokens[0])
def __repr__(self) -> str:
return f"Term({self.value})"
class BinaryOp:
def __init__(self, tokens: pp.ParseResults) -> None:
self.op = tokens[0][1]
self.left = tokens[0][0]
self.right = tokens[0][2]
def __repr__(self) -> str:
return f"BinaryOp({self.op}, {self.left}, {self.right})"
and_ = pp.Keyword("AND")
or_ = pp.Keyword("OR")
word = (~(and_ | or_) + pp.Word(pp.alphanums + pp.alphas8bit + "_")).set_parse_action(Term)
expression = pp.infix_notation(
word,
[
(pp.Optional(and_), 2, pp.opAssoc.LEFT, BinaryOp),
(or_, 2, pp.opAssoc.LEFT, BinaryOp),
],
)
input_string = "alpha OR beta gamma"
parsed_result = expression.parseString(input_string)
print(parsed_result.asList())
</code></pre>
<p>The output is <code>[BinaryOp(OR, Term(alpha), Term(beta))]</code>, so the implicit <code>AND</code> and <code>gamma</code> are not parsed. How could I fix this?</p>
|
<python><parsing><pyparsing>
|
2024-05-13 14:01:37
| 1
| 16,667
|
medihack
|
78,472,745
| 11,348,734
|
Remove glare noise from an image
|
<p>I have an image of a steak, and my goal is to calculate its marbling by converting the pixels to red for meat and white for fat. However, I have problems with glare in the image, which hinders the calculation. My code improves it somewhat, but I would like to improve it further. Here is an example image with glare:
<a href="https://i.sstatic.net/YjsZ8gsx.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/YjsZ8gsx.jpg" alt="enter image description here" /></a></p>
<p>my code is this:</p>
<pre><code>import cv2
import numpy as np
import matplotlib.pyplot as plt
# Load the image
img = cv2.imread("./img.jpg")
# Convert to grayscale
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
# Apply a Gaussian filter to smooth the image
blurred = cv2.GaussianBlur(gray, (15, 15), 0)
# Calculate the difference between the original image and the smoothed one to highlight brightness
diff = cv2.absdiff(gray, blurred)
# Apply a threshold to highlight intensely bright areas
_, mask = cv2.threshold(diff, 35, 255, cv2.THRESH_BINARY)
# Invert the mask to get the dark areas (which might be glare)
mask = 255 - mask
# Apply the mask to the original image to remove glare
result = cv2.bitwise_and(img, img, mask=mask)
cv2.imwrite('marmoreio_red_5.jpg', result)
ret, data_1 = cv2.threshold(result, 127, 240, cv2.THRESH_BINARY) # Standard LG ## Various 127
cv2.imwrite("filter_1.jpeg", data_1)
# Transform non-black and non-red pixels into white
for y in range(data_1.shape[0]):
for x in range(data_1.shape[1]):
pixel = data_1[y, x]
# Define what is considered 'red' and 'black'
# (adjust these thresholds as necessary)
isRed = pixel[2] > 150 and pixel[1] < 50 and pixel[0] < 50
isBlack = pixel[2] < 50 and pixel[1] < 50 and pixel[0] < 50
if not isRed and not isBlack:
# If not red or black, set to white
pixel[0] = 255 # B
pixel[1] = 255 # G
pixel[2] = 255 # R
elif isRed:
# If it is red, standardize to the same shade of red
pixel[0] = 0 # B
pixel[1] = 0 # G
# Keep the red component as it is
elif isBlack:
# If it is black, transform to red
pixel[0] = 0 # B
pixel[1] = 0 # G
pixel[2] = 255 # R
cv2.imwrite("filter_2.jpeg", data_1)
</code></pre>
<p>The code helps to reduce the glare, but there's still a lot of noise. Does anyone have a tip on what I can do to improve it? Here's the final processed image:</p>
<p><a href="https://i.sstatic.net/Jfudf4J2.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Jfudf4J2.jpg" alt="enter image description here" /></a></p>
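As a side note on the per-pixel loop above: the same thresholds can be applied in one pass with NumPy boolean masks, which is much faster and easier to tune. A sketch assuming a BGR uint8 image (the demo array is illustrative, not the steak image):

```python
import numpy as np

def classify_pixels(img):
    """Vectorized version of the per-pixel loop: red stays red, black
    becomes pure red, everything else becomes white (BGR uint8 input)."""
    b = img[..., 0].astype(int)
    g = img[..., 1].astype(int)
    r = img[..., 2].astype(int)
    is_red = (r > 150) & (g < 50) & (b < 50)
    is_black = (r < 50) & (g < 50) & (b < 50)
    out = np.full(img.shape, 255, dtype=np.uint8)  # default: white
    out[is_red, 0] = 0
    out[is_red, 1] = 0
    out[is_red, 2] = img[is_red, 2]                # keep the red component
    out[is_black] = (0, 0, 255)                    # black -> pure red
    return out

# Tiny 1x3 BGR demo image: red-ish, black, gray
demo = np.array([[[10, 10, 200], [10, 10, 10], [100, 100, 100]]], dtype=np.uint8)
result = classify_pixels(demo)
```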
|
<python><opencv><image-processing>
|
2024-05-13 13:54:36
| 0
| 897
|
Curious G.
|
78,472,697
| 16,859,084
|
Robot framework or Python compare dates
|
<p>I have an example date (which must be in RFC3339 format):
<code>2021-07-23T14:07:21Z</code></p>
<p>There is an endpoint which takes that date, let's say
/setdate.
I then want to verify whether it was set correctly using another endpoint, /getdate for example.
The problem is that this second endpoint returns the date as
<code>2021-07-23T16:07:21+02:00</code>, which is effectively the same date but a different string, so I can't simply compare the two in Robot Framework.
Are there any Robot Framework keywords to create two date objects that compare equal?</p>
<p>How is this possible?</p>
<pre class="lang-none prettyprint-override"><code>Documentation:
Fails if the given objects are unequal.
Start / End / Elapsed: 20240513 21:06:22.031 / 20240513 21:06:22.031 / 00:00:00.000
21:06:22.031 DEBUG Argument types are:
<class 'datetime.datetime'>
<class 'datetime.datetime'>
21:06:22.031 FAIL 2021-07-23 14:07:21+02:00 != 2021-07-23 12:07:21
</code></pre>
<p>how can I compare two datetime objects as they are not as strings?</p>
<p>Also,</p>
<pre><code>def main():
d1 ='2021-07-23T14:07:21+00:00'
d2='2021-07-23T14:07:21Z'
res = compare_rfc_dates(d1, d2)
print(res)
</code></pre>
<p>prints:</p>
<pre><code>2021-07-23 14:07:21+00:00 2021-07-23 14:07:21
False
</code></pre>
<p>when compared as datetime objects. What does this mean?</p>
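For the comparison itself, parsing both strings into timezone-aware datetime objects makes == compare instants rather than strings. A standard-library sketch (the 'Z' suffix is normalized because fromisoformat only accepts it from Python 3.11 onward):

```python
from datetime import datetime

def parse_rfc3339(s: str) -> datetime:
    # fromisoformat() only accepts a trailing 'Z' from Python 3.11 on,
    # so normalize it to an explicit +00:00 offset first
    return datetime.fromisoformat(s.replace("Z", "+00:00"))

a = parse_rfc3339("2021-07-23T14:07:21Z")
b = parse_rfc3339("2021-07-23T16:07:21+02:00")

# Both are timezone-aware, so == compares the instant, not the string
same = (a == b)
```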
|
<python><datetime><robotframework><rfc3339>
|
2024-05-13 13:45:56
| 2
| 479
|
Tom
|
78,472,568
| 3,104,974
|
Override trace properties for legend display
|
<p>How can I force the legend of a plotly 5.9 plot to use different parameters than were defined with the trace?</p>
<p>I have a scatter plot with overlapping low-opacity data points, and I want the legend to show the colors at full opacity. Also, I want to increase the marker size in the legend. I have no idea how to access these properties.</p>
<p>Here's the way I create my plots:</p>
<pre><code>import numpy as np
import plotly.graph_objs as go
from plotly.offline import plot
from plotly.subplots import make_subplots
from IPython.display import display, HTML
x = np.random.rand(2000)
y1 = np.random.rand(2000)
y2 = np.random.randn(2000)
fig = make_subplots(rows=1, cols=2)
trc1 = go.Scatter(x=x, y=y1, mode='markers', marker=dict(
color='blue', size=2, opacity=0.2)
)
trc2 = go.Scatter(x=x, y=y2, mode='markers', marker=dict(
color='green', size=4, opacity=0.2)
)
fig.add_trace(trc1, row=1, col=1)
fig.add_trace(trc2, row=1, col=2)
# This raises ValueError: Invalid property 'marker'
#fig.update_layout(legend=dict(marker=dict(opacity=1)))
# This works, but also updates the plots
#fig.for_each_trace(lambda t: t.update(marker=dict(opacity=1)))
display(HTML(plot(fig, output_type="div")))
</code></pre>
<p>Edit: Just for the marker size, the solution would be</p>
<pre><code>fig.update_layout(legend={'itemsizing': 'constant'})
</code></pre>
<p>But how to adjust the opacity (or any other property of the marker)?</p>
|
<python><plotly>
|
2024-05-13 13:24:33
| 1
| 6,315
|
ascripter
|
78,472,403
| 105,589
|
Using python to decrypt AES-GCM encrypted in JavaScript
|
<p>I have some code in JavaScript running inside the browser using WebCrypto. It uses AES-GCM to encrypt some data.</p>
<p>I generate a key and then export it to hex, then generate an initialization vector, and then use that to encrypt some data, which I convert to hex as well.</p>
<pre><code>function generateKey() {
window.crypto.subtle
.generateKey(
{
name: "AES-GCM",
length: 256,
},
true,
["encrypt", "decrypt"],
)
.then(key => {
log('Got key back');
setKey(key);
})
}
// Export the given key and write it into the "exported-key" space.
async function exportCryptoKey(key) {
const exported = await window.crypto.subtle.exportKey("raw", key);
const exportedKeyBuffer = new Uint8Array(exported);
hex = buf2hex(exportedKeyBuffer);
log('HEX is', hex );
}
function buf2hex(buffer) { // buffer is an ArrayBuffer
return [...new Uint8Array(buffer)]
.map(x => x.toString(16).padStart(2, '0'))
.join('');
}
function setKey(_key) {
log('Key is', key);
key = _key;
if (key.extractable) {
exportCryptoKey(key);
} else {
log('Cannot export key, sadface');
}
}
async function encrypt() {
log('Key is', key );
const iv = window.crypto.getRandomValues(new Uint8Array(16));
let enc = new TextEncoder();
const data = enc.encode('Hi there');
const result = await window.crypto.subtle.encrypt(
{
name: "AES-GCM",
iv,
},
key,
data
);
encrypted = new Uint8Array(result);
log('Encrypted is', buf2hex(encrypted));
log('IV is', buf2hex(iv));
}
</code></pre>
<p>I want to decrypt it using Python. I see some <a href="https://stackoverflow.com/questions/67307689/decrypt-an-encrypted-message-with-aes-gcm-in-python">python examples</a> use a nonce with some of the data plucked out. Is that the same as the initialization vector used in the JavaScript? When I run the Python code below, I get <code>Decryption failed</code>. What is wrong?</p>
<pre><code># pip install pycryptodome
from Crypto.Cipher import AES
hex_key = '0db43c44650a8b71547dbc6951432f0c545509814d6bca1a10e387ad2d16bc4a'
encrypted_message = '37183efb9e4ce92265d1144110cdbe1e768a85f9630bf872'
iv = '43005ad454f6e3c6e3ead90f7137f007'
key = bytes.fromhex(hex_key)
data = bytes.fromhex(encrypted_message)
nonce = bytes.fromhex(iv)
cipher = AES.new(key, AES.MODE_GCM, nonce)
try:
dec = cipher.decrypt_and_verify(data, nonce) # ciphertext, tag
print(dec)
except ValueError:
print("Decryption failed")
</code></pre>
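One detail worth checking: WebCrypto's AES-GCM encrypt returns the ciphertext with the 16-byte authentication tag appended to it, while PyCryptodome's decrypt_and_verify expects the ciphertext and the tag as separate arguments (and the JavaScript IV plays the role of the nonce). Splitting the buffer is plain byte slicing, no crypto library required:

```python
# Hex output copied from the JavaScript side ("Hi there" is 8 bytes)
encrypted_message = '37183efb9e4ce92265d1144110cdbe1e768a85f9630bf872'
data = bytes.fromhex(encrypted_message)

# WebCrypto returns ciphertext || 16-byte GCM tag in a single buffer
ciphertext, tag = data[:-16], data[-16:]
```

These would then go into cipher.decrypt_and_verify(ciphertext, tag), with the JavaScript IV passed as the nonce to AES.new.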
|
<javascript><python><aes><webcrypto-api>
|
2024-05-13 12:59:15
| 1
| 4,091
|
xrd
|
78,472,201
| 1,422,096
|
Multiple drag and drop rectangular selections on a Matplotlib heatmap / imshow
|
<p>On a Matplotlib heatmap:</p>
<pre><code>import matplotlib.pyplot as plt
import numpy as np
plt.imshow(np.random.random((16, 16)), cmap='jet', interpolation='nearest')
plt.show()
</code></pre>
<p>is there a built-in Matplotlib feature to do a "rectangular selection" with drag and drop, and have the possibility to resize/move the selection later, etc.?</p>
<p>For example by adding a toolbar button that allows to draw a selection rectangle?</p>
<p>NB: I'm looking for a solution to have multiple selections on the same heatmap.</p>
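Matplotlib ships a building block for this: matplotlib.widgets.RectangleSelector with interactive=True gives a draggable, resizable rectangle, and keeping references to several selector instances is one (assumed) way to hold multiple selections at once. A sketch:

```python
import matplotlib
matplotlib.use("Agg")  # headless backend for this sketch
import matplotlib.pyplot as plt
import numpy as np
from matplotlib.widgets import RectangleSelector

fig, ax = plt.subplots()
ax.imshow(np.random.random((16, 16)), cmap='jet', interpolation='nearest')

selectors = []  # keep references, or the selectors get garbage-collected

def on_select(eclick, erelease):
    # press/release mouse events; the selector also exposes .extents
    print(eclick.xdata, eclick.ydata, erelease.xdata, erelease.ydata)

# interactive=True lets the rectangle be moved/resized after drawing
sel = RectangleSelector(ax, on_select, interactive=True)
selectors.append(sel)
```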
|
<python><matplotlib><visualization><heatmap>
|
2024-05-13 12:24:33
| 1
| 47,388
|
Basj
|
78,472,200
| 561,341
|
How to catch and handle parameter binding errors in Python Flask application?
|
<p>When an error occurs in the Flask application you can normally handle it either using try / except or by registering an application error handler (<a href="https://flask.palletsprojects.com/en/2.3.x/errorhandling/" rel="nofollow noreferrer">https://flask.palletsprojects.com/en/2.3.x/errorhandling/</a>).<br />
However if an error occurs due to invalid URL parameter Flask simply returns 500 result with an HTML page.<br />
For example I have the following endpoint:<br />
<code>@app.route("/user/<login>", methods=["GET"])</code></p>
<p>Calling this route with <code>http://localhost/users/admin</code> will work correctly.<br />
However if I call it like this: <code>http://localhost/users/1</code> then I will get a 500 error with an HTML page titled '<em>TypeError: The view function for xxxx. The function either returned None or ended without a return statement. did not return a valid response</em>'</p>
<p>There does not seem to be any way to catch this error and handle it in some way. The function is not even entered so I can not do a try / except block.</p>
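That particular message ("The function either returned None or ended without a return statement") usually means the view did run, but one code path fell off the end without returning. Making every branch return a response, plus a registered error handler, avoids the bare 500 page. A minimal sketch (the route logic is illustrative, not the original code):

```python
from flask import Flask, jsonify

app = Flask(__name__)

@app.route("/user/<login>")
def get_user(login):
    # Every branch returns a response; a path that "falls off the end"
    # returns None, which triggers the TypeError described above
    if login.isdigit():
        return jsonify(error="numeric logins not allowed"), 400
    return jsonify(login=login)

@app.errorhandler(500)
def handle_500(err):
    # Catches unhandled server errors instead of the default HTML page
    return jsonify(error="internal error"), 500

resp_ok = app.test_client().get("/user/admin")
resp_bad = app.test_client().get("/user/1")
```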
|
<python><flask>
|
2024-05-13 12:23:34
| 2
| 3,152
|
Dalibor Čarapić
|
78,472,054
| 14,587,041
|
Manually color columns in pandas
|
<p>I have a list of colors that created manually;</p>
<pre><code>colmap = ['green', 'red', 'red', 'red', 'green']
</code></pre>
<p>and also have a pandas dataframe <code>df</code> with same length of my <code>colmap</code></p>
<pre><code>Percent
0 -0.5
1 0.2
2 0.8
3 -0.3
4 0.6
</code></pre>
<p>I want to apply the colors to only the <code>Percent</code> column. <code>df.style.applymap</code> doesn't work because it requires a function to apply styles, and I couldn't figure out how to do this without a lambda function.</p>
<p>Thanks in advance</p>
|
<python><pandas><dataframe>
|
2024-05-13 11:57:15
| 2
| 2,730
|
Samet Sökel
|
78,471,983
| 1,613,983
|
How do I defer reflection on a view/table without primary keys?
|
<p>I have a view <code>xyz.MyTable</code> that I'd like to build a model for. It doesn't have a unique column, but the combination of columns <code>col1</code> and <code>col2</code> is guaranteed to be unique.</p>
<p>I also don't have access to the engine at declaration, so I need to use deferred reflection. Here is my attempt:</p>
<pre><code>import sqlalchemy
import sqlalchemy.ext.declarative
Base = sqlalchemy.orm.declarative_base()
class MyTable(sqlalchemy.ext.declarative.DeferredReflection, Base):
__tablename__ = 'MyTable'
__table__ = sqlalchemy.Table(__tablename__, Base.metadata, schema='xyz', autoload=True)
__mapper_args__ = {'primary_key': [__table__.c.col1, __table__.c.col2]}
</code></pre>
<p>However, this results in the following:</p>
<blockquote>
<p>KeyError: 'col1'</p>
</blockquote>
<p>This makes sense, because <code>col1</code> can't be an object until reflection occurs. But how can I declare the primary key column? Note that I have no control over the database and cannot make any changes to the view.</p>
<p><strong>Update</strong>:</p>
<p>As per python_user's suggestion I tried using strings for the column names:</p>
<pre><code>import sqlalchemy
from sqlalchemy import Column
from sqlalchemy.types import DATE, INTEGER, VARCHAR
from sqlalchemy.ext.declarative import declarative_base, DeferredReflection
Base = declarative_base(cls=DeferredReflection)
class VHistSnapPnL(Base):
__tablename__ = "MyTable"
__mapper_args__ = {
"primary_key": [
'col1',
'col2'
]
}
Base.prepare(engine)
</code></pre>
<p>However this results in an AttributeError:</p>
<pre><code>---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
Cell In[10], line 6
2 from sqlalchemy.ext.automap import automap_base
4 # Base = automap_base()
5 # reflect the tables
----> 6 Base.prepare(engine)
File /juno-tmp/virtualenvs/python3.11/lib/python3.11/site-packages/sqlalchemy/ext/declarative/extensions.py:400, in DeferredReflection.prepare(cls, engine)
398 for thingy in to_map:
399 cls._sa_decl_prepare(thingy.local_table, insp)
--> 400 thingy.map()
401 mapper = thingy.cls.__mapper__
402 metadata = mapper.class_.metadata
File /juno-tmp/virtualenvs/python3.11/lib/python3.11/site-packages/sqlalchemy/orm/decl_base.py:1127, in _DeferredMapperConfig.map(self, mapper_kw)
1125 def map(self, mapper_kw=util.EMPTY_DICT):
1126 self._configs.pop(self._cls, None)
-> 1127 return super(_DeferredMapperConfig, self).map(mapper_kw)
File /juno-tmp/virtualenvs/python3.11/lib/python3.11/site-packages/sqlalchemy/orm/decl_base.py:1047, in _ClassScanMapperConfig.map(self, mapper_kw)
1042 else:
1043 mapper_cls = mapper
1045 return self.set_cls_attribute(
1046 "__mapper__",
-> 1047 mapper_cls(self.cls, self.local_table, **self.mapper_args),
1048 )
File <string>:2, in __init__(self, class_, local_table, properties, primary_key, non_primary, inherits, inherit_condition, inherit_foreign_keys, always_refresh, version_id_col, version_id_generator, polymorphic_on, _polymorphic_map, polymorphic_identity, concrete, with_polymorphic, polymorphic_load, allow_partial_pks, batch, column_prefix, include_properties, exclude_properties, passive_updates, passive_deletes, confirm_deleted_rows, eager_defaults, legacy_is_orphan, _compiled_cache_size)
File /juno-tmp/virtualenvs/python3.11/lib/python3.11/site-packages/sqlalchemy/util/deprecations.py:375, in deprecated_params.<locals>.decorate.<locals>.warned(fn, *args, **kwargs)
368 if m in kwargs:
369 _warn_with_version(
370 messages[m],
371 versions[m],
372 version_warnings[m],
373 stacklevel=3,
374 )
--> 375 return fn(*args, **kwargs)
File /juno-tmp/virtualenvs/python3.11/lib/python3.11/site-packages/sqlalchemy/orm/mapper.py:693, in Mapper.__init__(self, class_, local_table, properties, primary_key, non_primary, inherits, inherit_condition, inherit_foreign_keys, always_refresh, version_id_col, version_id_generator, polymorphic_on, _polymorphic_map, polymorphic_identity, concrete, with_polymorphic, polymorphic_load, allow_partial_pks, batch, column_prefix, include_properties, exclude_properties, passive_updates, passive_deletes, confirm_deleted_rows, eager_defaults, legacy_is_orphan, _compiled_cache_size)
691 self._configure_properties()
692 self._configure_polymorphic_setter()
--> 693 self._configure_pks()
694 self.registry._flag_new_mapper(self)
695 self._log("constructed")
File /juno-tmp/virtualenvs/python3.11/lib/python3.11/site-packages/sqlalchemy/orm/mapper.py:1391, in Mapper._configure_pks(self)
1389 if self._primary_key_argument:
1390 for k in self._primary_key_argument:
-> 1391 if k.table not in self._pks_by_table:
1392 self._pks_by_table[k.table] = util.OrderedSet()
1393 self._pks_by_table[k.table].add(k)
AttributeError: 'str' object has no attribute 'table'
</code></pre>
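One documented alternative, sketched here against an in-memory SQLite table standing in for the view: redeclare just the key columns on the deferred class with primary_key=True. Explicitly declared columns override what reflection finds, so the mapper gets a composite primary key the database itself doesn't declare:

```python
import sqlalchemy as sa
from sqlalchemy.orm import declarative_base, Session
from sqlalchemy.ext.declarative import DeferredReflection

engine = sa.create_engine("sqlite://")
with engine.begin() as conn:
    # Stand-in for the view: no primary key at the database level
    conn.execute(sa.text("CREATE TABLE my_table (col1 INTEGER, col2 INTEGER, val TEXT)"))
    conn.execute(sa.text("INSERT INTO my_table VALUES (1, 2, 'x')"))

Base = declarative_base()

class Reflected(DeferredReflection):
    __abstract__ = True

class MyTable(Reflected, Base):
    __tablename__ = "my_table"
    # Explicit columns override the reflected ones, giving the
    # mapper a composite primary key
    col1 = sa.Column(sa.Integer, primary_key=True)
    col2 = sa.Column(sa.Integer, primary_key=True)

# Reflection happens here, once an engine is finally available
Reflected.prepare(engine)

with Session(engine) as session:
    row = session.get(MyTable, (1, 2))
    val = row.val
```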
|
<python><sql-server><sqlalchemy>
|
2024-05-13 11:42:22
| 1
| 23,470
|
quant
|
78,471,887
| 1,584,043
|
Selenium Python test doesn't work with Proxy settings
|
<p>My simple Selenium Python script to test the proxy fails. How can I force the webdriver to work through a proxy connection?</p>
<pre><code>from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
PROXY_HOST = "{MYIP}"
PROXY_PORT = 11200
PROXY_USERNAME = "{UNAME}"
PROXY_PASSWORD = "{PW}"
chrome_options = webdriver.ChromeOptions()
chrome_options.add_argument('--proxy-server=http://%s:%s@%s:%s' % (PROXY_USERNAME, PROXY_PASSWORD, PROXY_HOST, PROXY_PORT))
chrome_options.add_argument("ignore-certificate-errors")
chrome = webdriver.Chrome(options=chrome_options)
chrome.get("https://www.ipchicken.com/")
# Wait until a table is present
wait = WebDriverWait(chrome, 30) # Adjust the timeout as needed
wait.until(EC.presence_of_element_located((By.TAG_NAME, "table")))
# Get the page source
page_source = chrome.page_source
# Output page data as text in console
print(page_source)
# Close the browser
chrome.quit()
</code></pre>
<p>It works fine if I remove the proxy settings, but it throws a TimeoutException with the proxy settings.</p>
<p>My proxy credentials are correct and work well through CURL.</p>
|
<python><google-chrome><selenium-webdriver><proxy><selenium-chromedriver>
|
2024-05-13 11:18:47
| 1
| 309
|
user1584043
|
78,471,803
| 2,378,625
|
"Test framework quit unexpectedly" when running Pytest in PyCharm
|
<p>I'm running PyCharm Professional 2023.3.4 on an Ubuntu 22.04 machine.</p>
<p>My PyTests run OK from the command line, and from the terminal within PyCharm. But running them directly from PyCharm itself gives the message "Test framework quit unexpectedly", with the error output shown below (in two screenshots).</p>
<p><a href="https://i.sstatic.net/V0GHIglt.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/V0GHIglt.png" alt="top of error output" /></a></p>
<p><a href="https://i.sstatic.net/f5jcvu56.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/f5jcvu56.png" alt="bottom of error output" /></a></p>
<p>None of the lines mentioned in the error output reference any of my code.</p>
<p>Here is how the tests are configured:</p>
<p><a href="https://i.sstatic.net/ocheKqA4.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/ocheKqA4.png" alt="pytest configuration" /></a></p>
<p>Can anyone tell me why Pytest is exiting when run from Pycharm, and how to fix the problem?</p>
<p>Thank you in advance for any help.</p>
|
<python><pycharm><pytest>
|
2024-05-13 11:02:21
| 1
| 413
|
jazcap53
|
78,471,765
| 1,022,138
|
Extract Text from a LED Panel Image Using OCR
|
<p>I have a LED panel from which I am trying to extract text.</p>
<p>I applied some image processing techniques, and here is the result:</p>
<p><a href="https://i.sstatic.net/vCHQpVo7.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/vCHQpVo7.png" alt="enter image description here" /></a></p>
<p>I want to convert the image to text. For this purpose, I am using the Tesseract library. I tried three different models, but none were successful.</p>
<p>The models 'lets' and 'letsgodigital' are presumably designed for digital fonts, but they did not work for me.</p>
<pre><code> static void tryToExtractText(string file)
{
Dictionary<string, string> dic = new Dictionary<string, string>();
dic.Add("./tessdata", "eng");
dic.Add("./lets", "lets");
dic.Add("./letsgodigital", "letsgodigital");
foreach (var item in dic)
{
using (var engine = new TesseractEngine(item.Key, item.Value, EngineMode.Default))
{
engine.DefaultPageSegMode = PageSegMode.Auto;
using (var img = Pix.LoadFromFile(file))
{
using (var page = engine.Process(img))
{
string text = page.GetText();
Console.WriteLine(item.Value + " result: \"{0}\"", text.Trim());
}
}
}
}
}
</code></pre>
<p><strong>Result:</strong><br />
tessdata result: "gc 240 Kg"<br />
lets result: "82.248 159"<br />
letsgodigital result: "82-248 169"</p>
<p>How can I read the text? I only need the numbers. Is there a setting where I can specify to only recognize numbers and ignore letters, or something similar?</p>
<p>PS: Now I tried PaddleSharp library (Paddle OCR)</p>
<p>it works somehow, the output is <strong>02'240 K9</strong>
<a href="https://github.com/sdcb/PaddleSharp/blob/master/docs/ocr.md" rel="nofollow noreferrer">https://github.com/sdcb/PaddleSharp/blob/master/docs/ocr.md</a></p>
<p>However, this requires several libraries, and I have serious doubts about its compatibility with mobile platforms (Xamarin.Forms).</p>
|
<python><c#><ocr><tesseract>
|
2024-05-13 10:53:33
| 0
| 1,638
|
boss
|
78,471,751
| 4,343,579
|
Can I pass extra arguments into the panda to_sql method callable?
|
<p>I've been looking at the Pandas docs and the <a href="https://github.com/pandas-dev/pandas/blob/529bcc1e34d18f94c2839813d2dba597235c460d/pandas/io/sql.py#L730" rel="nofollow noreferrer">Pandas source code</a> to load big CSV tables into my PostgreSQL DB in the fastest way possible.</p>
<p>Using pandas.DataFrame.to_sql with the callable method "psql_insert_copy" seems the best solution so far. But I honestly don't understand how it works. I looked at several posts and gathered that the argument pattern of this method, 'table, conn, keys and data_iter', is fixed. I asked how the table receives the "keys" (as did another person) in <a href="https://stackoverflow.com/questions/23103962/how-to-write-dataframe-to-postgres-table">this post</a> and <a href="https://stackoverflow.com/questions/59845206/example-of-using-the-callable-method-in-pandas-to-sql">this one</a>.</p>
<p>Is it possible to pass column names along to this method, or to create another one that accepts them, so I could rename the columns inside the same callable? Some column names are not what I expect, and this change alone would simplify my code. (Also, I want to learn more about how to customize my callable and how it works.)</p>
<p>You can use this code example:</p>
<pre><code># STEP 0: Call libraries
# STEP 1:
# Setup the connection db engine
import csv
from io import StringIO
from sqlalchemy import create_engine
# STEP 2: Create the callable function/method
# Alternative to_sql() *method* for DBs that support COPY FROM
import csv
from io import StringIO
def psql_insert_copy(table, conn, keys, data_iter):
"""
Execute SQL statement inserting data
Parameters
----------
table : pandas.io.sql.SQLTable
conn : sqlalchemy.engine.Engine or sqlalchemy.engine.Connection
keys : list of str
Column names
data_iter : Iterable that iterates the values to be inserted
"""
# gets a DBAPI connection that can provide a cursor
dbapi_conn = conn.connection
with dbapi_conn.cursor() as cur:
s_buf = StringIO()
writer = csv.writer(s_buf)
writer.writerows(data_iter)
s_buf.seek(0)
columns = ', '.join(['"{}"'.format(k) for k in keys])
if table.schema:
table_name = '{}.{}'.format(table.schema, table.name)
else:
table_name = table.name
sql = 'COPY {} ({}) FROM STDIN WITH CSV'.format(
table_name, columns)
cur.copy_expert(sql=sql, file=s_buf)
## STEP 3 : Create the db connection (in my case I installed postgresql and connect to it
engine = create_engine('postgresql://myusername:mypassword@myhost:5432/mydatabase')
## STEP 4: Read my table "dummycsv.csv" from my project root folder
df = pd.read_csv("dummycsv.csv")
## STEP 5: Pass this dummycsv data frame to the function "to_sql" with the argument, callable method. AND THE NEW COLUMNNAMES!
df.to_sql(name='dummycsv', schema='test', con=engine,if_exists="replace", index=False, method=psql_insert_copy)
</code></pre>
<p>I try to add in this last function, and in the method an extra argument:</p>
<pre><code>def psql_insert_copy(table, conn, keys, data_iter, colsrenamed)
.
.
.
# ...
df.to_sql(name='dummycsv', schema='test', con=engine,if_exists="replace", index=False, method=psql_insert_copy, colsrenamed = listofstringswithnewnames):
.
.
.
</code></pre>
<p>Error I get when adding this new argument <code>colsrenamed</code>: <strong>TypeError: NDFrame.to_sql() got an unexpected keyword argument 'keys'</strong></p>
<p>NOTE:
Also since this research took me quite a lot of time I considered:</p>
<ol>
<li>to rename the table after using the function, using "ALTER TABLE tablename RENAME COLUMN ..." iteratively</li>
<li>Also I tried to change the "keys" argument (related with the column names in the <strong>psql_insert_copy</strong> function) by renaming the columns of the dataframe using <code>df.rename(columns=dictionary_with_oldnames_and_newnames, inplace=True)</code></li>
</ol>
<p>But these two approaches are a bit buggy, require more commits, and the extra steps could lead to more desynchronization of processes in my pipeline.</p>
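Since pandas always calls the method with exactly (table, conn, keys, data_iter), extra arguments can be pre-bound with functools.partial instead of being passed through to_sql. A stand-in sketch (no database involved; the function body just records what it received):

```python
from functools import partial

captured = {}

def insert_with_rename(table, conn, keys, data_iter, colsrenamed=None):
    # Same fixed (table, conn, keys, data_iter) signature to_sql expects,
    # plus one extra keyword that partial() pre-binds below
    effective = list(colsrenamed) if colsrenamed is not None else list(keys)
    captured["keys"] = effective

# This is what you would pass as method=...; to_sql never sees 'colsrenamed'
method = partial(insert_with_rename, colsrenamed=["id", "name"])

# Simulate the positional call pandas makes internally
method("some_table", "some_conn", ["old_id", "old_name"], iter([]))
```

With the real psql_insert_copy, the same partial(...) call would replace method=psql_insert_copy in df.to_sql, and the renamed list would substitute for keys when building the COPY column list.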
|
<python><pandas><postgresql><sqlalchemy><psycopg2>
|
2024-05-13 10:49:49
| 0
| 586
|
Corina Roca
|
78,471,671
| 7,112,039
|
SQLAlchemy Cascade Delete on Polymorphic M2M relationship
|
<p>I am building a polymorphic relationship between classes A and B through a Link class.
I want the linked B objects to be deleted when an A object is deleted.
To do that, I defined a relationship between A and Link with the cascade option <code>all, delete</code>; this way, as soon as an A is deleted, all its links are deleted too. I then intercepted the delete on Link using a <code>before_delete</code> event listener and used it to delete the B object as well.
A better solution may exist, and most of all I am concerned about the connection object in the event listener, which doesn't seem to be bound to any session.</p>
<pre class="lang-py prettyprint-override"><code>class A(BaseORMModel):
id: Mapped[int] = mapped_column(
primary_key=True,
)
links = relationship(
"Link",
primaryjoin="and_("
"foreign(Link.source_id)==A.id,"
"Link.source_type=='TypeA'"
")",
cascade="all, delete",
)
class B(BaseORMModel):
id: Mapped[int] = mapped_column(
primary_key=True,
)
class Link(BaseOrmModel):
id: Mapped[int] = mapped_column(
primary_key=True,
)
source_id: Mapped[int] = mapped_column(
index=True,
)
source_type: Mapped[str]
related_id: Mapped[int] = mapped_column(
index=True,
)
related_type: Mapped[str]
@event.listens_for(Link, "before_delete")
def delete_links(mapper, connection, Link):
if Link.related_type == 'TypeB':
q = delete(B).where(B.id == Link.related_id)
connection.execute(
q
)
print(q.compile(compile_kwargs={"literal_binds": True}))
</code></pre>
|
<python><sqlalchemy>
|
2024-05-13 10:37:17
| 0
| 303
|
ow-me
|
78,471,659
| 11,637,422
|
Google API Authorisation
|
<p>I am using some of the basic google API python scripts about accessing drive and creating files etc...</p>
<p>These exact same files worked some months ago, but now I get errors when the tokens are being refreshed. It also does not prompt me to log in.</p>
<p>The setup code is as follows:</p>
<pre><code>def create_service(credentials, api_name, api_version, *scopes):
print("-"*100)
print("Data: ", credentials, api_name, api_version, scopes, sep='-')
CREDENTIALS = credentials
API_SERVICE_NAME = api_name
API_VERSION = api_version
SCOPES = [scope for scope in scopes[0]]
print("Scopes: ", SCOPES)
token_file = f'token_{API_SERVICE_NAME}_{API_VERSION}.json'
creds = None
# The file token.json stores the user's access and refresh tokens, and is
# created automatically when the authorization flow completes for the first
# time.
if os.path.exists(token_file):
creds = Credentials.from_authorized_user_file(token_file, SCOPES)
# If there are no (valid) credentials available, let the user log in.
if not creds or not creds.valid:
if creds and creds.expired and creds.refresh_token:
creds.refresh(Request())
else:
flow = InstalledAppFlow.from_client_secrets_file(
CREDENTIALS, SCOPES)
creds = flow.run_local_server(port=0)
# Save the credentials for the next run
with open(token_file, 'w') as token:
token.write(creds.to_json())
try:
service = build(API_SERVICE_NAME, API_VERSION, credentials=creds)
print('Service created successfully')
print("-"*100)
return service
except HttpError as error:
# Handle errors from drive API.
print(f'An error occurred: {error}')
except Exception as error:
print('Unable to connect.')
print(error)
return None
</code></pre>
<p>The output error is:</p>
<pre><code>google.auth.exceptions.RefreshError: ('invalid_grant: Bad Request', {'error': 'invalid_grant', 'error_description': 'Bad Request'})
</code></pre>
<p>I am not sure what to try; this is a personal-ish project, so I am not sure if I should put the project "In production".</p>
<p>Does anyone have any ideas? I have tried re-making the credentials json file and it is still not working.</p>
|
<python><google-api>
|
2024-05-13 10:35:38
| 0
| 341
|
bbbb
|
78,471,457
| 4,247,881
|
polars ignore zeros when doing mean()
|
<p>Not sure why I am finding this so hard.</p>
<p>For the following dataframe I want to calculate the mean for the grouped months like</p>
<pre><code>df = df.group_by("month", maintain_order=True).mean()
</code></pre>
<p>However, I want to ignore 0.0's</p>
<p>I am trying to use <code>polars.Expr.replace</code>, but I can't find an example that doesn't rely on an existing column.</p>
<p>How can I replace all the 0.0 values in the dataframe with, say, None or NaN, so the mean will ignore them?</p>
<pre><code>┌─────────────────────────┬────────┬────────┬────────┬────────┬───┬────────┬────────┬────────┬────────┬─────────┐
│ ts ┆ 642935 ┆ 643128 ┆ 642929 ┆ 642930 ┆ … ┆ 642932 ┆ 642916 ┆ 642933 ┆ 643129 ┆ month │
│ --- ┆ --- ┆ --- ┆ --- ┆ --- ┆ ┆ --- ┆ --- ┆ --- ┆ --- ┆ --- │
│ datetime[ns, UTC] ┆ f64 ┆ f64 ┆ f64 ┆ f64 ┆ ┆ f64 ┆ f64 ┆ f64 ┆ f64 ┆ str │
╞═════════════════════════╪════════╪════════╪════════╪════════╪═══╪════════╪════════╪════════╪════════╪═════════╡
│ 2024-02-13 00:00:00 UTC ┆ 0.0 ┆ 0.0 ┆ 0.0 ┆ 41.0 ┆ … ┆ 0.0 ┆ 0.0 ┆ 0.0 ┆ 0.0 ┆ 02-2024 │
│ 2024-02-13 00:05:00 UTC ┆ 0.0 ┆ 0.0 ┆ 0.0 ┆ 0.0 ┆ … ┆ 0.0 ┆ 0.0 ┆ 0.0 ┆ 0.0 ┆ 02-2024 │
│ 2024-02-13 00:10:00 UTC ┆ 0.0 ┆ 0.0 ┆ 0.0 ┆ 0.0 ┆ … ┆ 0.0 ┆ 44.0 ┆ 0.0 ┆ 0.0 ┆ 02-2024 │
│ 2024-02-13 00:15:00 UTC ┆ 0.0 ┆ 0.0 ┆ 0.0 ┆ 0.0 ┆ … ┆ 0.0 ┆ 0.0 ┆ 0.0 ┆ 0.0 ┆ 02-2024 │
</code></pre>
|
<python><dataframe><python-polars>
|
2024-05-13 09:57:26
| 2
| 972
|
Glenn Pierce
|
78,471,431
| 7,877,397
|
Configuring Python Build Slaves without root privilege, but allowing install during build job
|
<p>I have run into a bit of a chicken-and-egg issue. The organisation that I work for uses a setup where a Docker build slave is configured as a base image for Jenkins, on which CI/CD is run.</p>
<p>There is no issue running any installation in this slave image, as I can use USER root to <code>python -m pip install</code> all the standard packages and afterwards revert back to USER jenkins.</p>
<p>The issue comes in when this base image is used during the actual build of applications. These applications may have their own requirements.txt that has to be installed during the build. As USER jenkins, I do not have permission to install any of the packages in the requirements.txt.</p>
<p>example,
If I try to run</p>
<p><code>python -m pip install -r requirements.txt</code></p>
<p>I will get an error, for example:</p>
<p><code>ERROR: Could not install packages due to an OSError: [Errno 13] Permission denied: '/opt/app-root/lib/python3.11/site-packages/wheel'</code></p>
<p>I guess I could use USER root to run the container, but it would be a really bad idea, as this opens up a security flaw: anyone with access to this build image "might" have privilege escalation to the underlying node running the slave.</p>
<p>And I don't really want to limit the users to only be able to install packages during the setup of the Build Slave image.</p>
<p>Any advice would be appreciated.</p>
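One common sketch for this situation (names and paths are illustrative, not your actual image): pre-create a virtualenv owned by `jenkins` in the slave image, so build jobs can pip-install into it without root:

```dockerfile
# Sketch, assuming a `jenkins` user already exists in the base image.
USER root
RUN python3 -m venv /opt/buildvenv && \
    chown -R jenkins:jenkins /opt/buildvenv
ENV PATH="/opt/buildvenv/bin:$PATH"
USER jenkins
# Later, inside a build job (running as jenkins):
#   pip install -r requirements.txt   # writes into /opt/buildvenv, which jenkins owns
```

Because the venv directory is owned by `jenkins`, no privilege escalation is needed at build time, and the slave image itself never runs as root.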
|
<python><jenkins><cicd>
|
2024-05-13 09:53:18
| 1
| 673
|
Chirrut Imwe
|
78,471,398
| 14,912,118
|
Need the header value in all the files when writing txt files in PySpark
|
<p>I need to write the header value to all of the output files in txt format</p>
<p>Here is the code</p>
<pre><code>data = [
("John", 25, "USA"),
("Alice", 30, "Canada"),
("Michael", 35, "USA"),
("Emily", 28, "Australia"),
("David", 40, "UK"),
("Sophia", 32, "Canada")
]
df = spark.createDataFrame(data)
df.createOrReplaceTempView("sample_test")
df_columns = str(df.columns)[1:-1]
df_columns = df_columns.replace("'", "")
df_final = spark.sql(f"select 'name,age,country', 0 as dummy_col union all select concat_ws(',',{df_columns}), 1 as dummy_col from sample_test")
df_final = df_final.coalesce(1).orderBy("dummy_col").drop("dummy_col")
spark_options = {"maxRecordsPerFile": "3", "emptyValue": "", "header": "true"}
df_final.write.format("text").mode("append").options(**spark_options).save("path")
</code></pre>
<p>When I am using the text format, the header is not coming in all the files, but when I use the csv file format with the code below it works fine.</p>
<pre><code>data = [
("John", 25, "USA"),
("Alice", 30, "Canada"),
("Michael", 35, "USA"),
("Emily", 28, "Australia"),
("David", 40, "UK"),
("Sophia", 32, "Canada")
]
df = spark.createDataFrame(data, ["Name", "Age", "Country"])
spark_options = {"maxRecordsPerFile": "3", "emptyValue": "", "header": "true"}
df.write.format("csv").mode("append").options(**spark_options).save("path")
</code></pre>
<p>Final output expected</p>
<p>file1.csv</p>
<pre><code>Name,Age,Country
John,25,USA
Alice,30,Canada
Michael,35,USA
</code></pre>
<p>file2.csv</p>
<pre><code>Name,Age,Country
Emily,28,Australia
David,40,UK
Sophia,32,Canada
</code></pre>
<p>Note: For text we have used concat_ws, since the text format accepts only a single column.</p>
<p>Please help me get the header in all the files while writing in text format using PySpark.</p>
|
<python><apache-spark><pyspark>
|
2024-05-13 09:49:17
| 0
| 427
|
Sharma
|
78,471,353
| 1,082,349
|
Join claims that I'm attempting to merge object and int64 even though I'm not
|
<p>I have two dataframes (on a server that I cannot take data out of, so unfortunately no reproducible example), both of which have a column called <code>person2</code>.</p>
<pre><code>>>> group[['person2']].dtypes
person2 object
dtype: object
>>> df_returns[['person2']].dtypes
person2 object
dtype: object
>>> group[['person2']].join(df_returns[['person2']], on='person2')
ValueError: You are trying to merge on object and int64 columns for key 'person2'. If you wish to proceed you should use pd.concat
</code></pre>
<p>However, <code>group[['person2']].merge(df_returns[['person2']], on='person2')</code> works. What causes this behavior?</p>
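A minimal reproduction of the asymmetry (small illustrative frames, not the server data): `DataFrame.join(on=...)` compares the caller's column against the *other frame's index*, which by default is an int64 RangeIndex, while `merge(on=...)` compares column to column:

```python
import pandas as pd

group = pd.DataFrame({"person2": ["a", "b"]})
df_returns = pd.DataFrame({"person2": ["a", "c"]})

# join(on='person2') matches group['person2'] (object dtype) against
# df_returns' default int64 RangeIndex -> the object/int64 ValueError
try:
    group[["person2"]].join(df_returns[["person2"]], on="person2")
except ValueError as exc:
    print(exc)

# merge(on='person2') matches column to column; both are object dtype
out = group.merge(df_returns, on="person2")
```

Setting the right frame's index first (`df_returns.set_index("person2")`) is the usual way to make `join` behave like the intended key match.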
|
<python><pandas>
|
2024-05-13 09:41:02
| 0
| 16,698
|
FooBar
|
78,471,218
| 390,785
|
Programming Error in MySQL LOAD DATA statement when using ENCLOSED BY
|
<p>I am trying to run a SQL statement using cursor.execute</p>
<pre><code>LOAD DATA LOCAL INFILE
'/home/ubuntu/test.csv'
INTO TABLE app_data
CHARACTER SET UTF8MB4 FIELDS TERMINATED BY ','
IGNORE 1 ROWS ENCLOSED BY ""
LINES TERMINATED BY '\r\n'
</code></pre>
<p>This runs fine from the mysql command line, but via cursor.execute the ENCLOSED BY clause raises a ProgrammingError / OperationalError.</p>
<p>Any help/pointers appreciated. I have already tried variations such as ENCLOSED BY '"'.</p>
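For comparison, a hedged sketch of the clause order the MySQL reference manual specifies, with `ENCLOSED BY` inside the `FIELDS` clause and `IGNORE n ROWS` after the `LINES` clause (the statement is only built here, not executed):

```python
# Clause order per the MySQL LOAD DATA syntax:
# FIELDS subclauses together, then LINES, then IGNORE n ROWS last
sql = (
    "LOAD DATA LOCAL INFILE '/home/ubuntu/test.csv' "
    "INTO TABLE app_data "
    "CHARACTER SET UTF8MB4 "
    "FIELDS TERMINATED BY ',' ENCLOSED BY '\"' "
    "LINES TERMINATED BY '\\r\\n' "
    "IGNORE 1 ROWS"
)
# cursor.execute(sql)  # run against the same connection as before
```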
|
<python><mysql><django>
|
2024-05-13 09:16:45
| 0
| 4,440
|
sureshvv
|
78,471,214
| 4,359,511
|
Python pyproject.toml script import failing
|
<p>I have the following project layout:</p>
<pre><code>
├── README.md
├── requirements.txt
├── pyproject.toml
├── awesome_package/
├── __init__.py
├── __version__.py
└── main_module.py
└── some_packages
</code></pre>
<p>Inside the <code>awesome_package.main_module</code> there is a function that starts my script (the <code>foo()</code> function).</p>
<p>I'd like to have a <code>some-script</code> script that executes my <code>foo</code> function after installing my library.</p>
<p>This is my <code>pyproject.toml</code>:</p>
<pre class="lang-ini prettyprint-override"><code>[project]
name = "awesome-package"
readme = "README.md"
dynamic = ["version"]
dependencies = [
...
]
[project.urls]
...
[project.optional-dependencies]
dev = [
"build"
]
[build-system]
requires = ["hatchling"]
build-backend = "hatchling.build"
[project.scripts]
some-script = "awesome_package.main_module:foo"
[tool.hatch.version]
path = "awesome_package/__version__.py"
[tool.hatch.build.targets.wheel]
packages = ["sphinxlint"]
</code></pre>
<p>Using this <code>pyproject.toml</code> I'm able to build the library using:</p>
<pre class="lang-bash prettyprint-override"><code>python -m build
</code></pre>
<p>Or for dev purposes</p>
<pre><code>pip install -e '.[dev]'
</code></pre>
<p>Everything is working as intended, except for the script which fails to execute. After installation of the wheel, or after the pip install in dev mode, I'm able to run the script <code>some-script</code>. However, it fails to import the foo function and I get the following error:</p>
<pre class="lang-py prettyprint-override"><code>Traceback (most recent call last):
File "/Some/path/project_name/venv/bin/some-script", line 7, in <module>
from awesome_package.main_module import foo
ModuleNotFoundError: No module named 'awesome_package'
</code></pre>
<p>Any suggestions?</p>
<h2>Update</h2>
<p>It looks like a problem related to the <code>hatchling</code> build backend. Replacing <code>hatchling</code> with <code>setuptools</code> makes it work as intended.</p>
<p>This is the updated <code>pyproject.toml</code>:</p>
<pre class="lang-ini prettyprint-override"><code>[project]
name = "awesome-package"
readme = "README.md"
dynamic = ["version"]
dependencies = [
...
]
[project.urls]
...
[project.optional-dependencies]
dev = [
"build"
]
[build-system]
requires = ["setuptools"]
build-backend = "setuptools.build_meta"
[project.scripts]
some-script = "awesome_package.main_module:foo"
[tool.setuptools.dynamic]
version = {attr = "awesome_package.__version__.__version__"}
</code></pre>
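Worth noting for the hatchling variant (an observation, not a verified fix): its wheel target lists `sphinxlint`, which looks copied from another project, so hatchling would package nothing under `awesome_package` and the console script's import would fail. A sketch of what that section would presumably need:

```toml
[tool.hatch.build.targets.wheel]
packages = ["awesome_package"]
```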
|
<python><python-wheel><pyproject.toml>
|
2024-05-13 09:15:53
| 0
| 335
|
Alessandro Staffolani
|
78,471,188
| 2,059,998
|
FastAPI running in IIS - Getting permissionError:[WinError 10013] An attempt was made to access a socket in a way forbidden by its access permissions
|
<p>I'm trying to setup FastAPI in IIS using hypercorn and I keep getting the error</p>
<pre><code>PermissionError: [WinError 10013] An attempt was made to access a socket in a way forbidden by its access permissions
</code></pre>
<p>I followed these steps <a href="https://github.com/tiangolo/fastapi/discussions/4207#discussioncomment-2216634" rel="nofollow noreferrer">https://github.com/tiangolo/fastapi/discussions/4207#discussioncomment-2216634</a></p>
<p>My web.config is the following</p>
<pre class="lang-xml prettyprint-override"><code><?xml version="1.0" encoding="utf-8"?>
<configuration>
<system.webServer>
<handlers accessPolicy="Read, Execute, Script">
<remove name="httpPlatformHandler"/>
<add name="httpPlatformHandler" path="*" verb="*" modules="httpPlatformHandler" resourceType="Unspecified" />
</handlers>
<httpPlatform processPath="C:\inetpub\wwwroot\FastAPI\Scripts\python.exe"
arguments="-m hypercorn app.main:app --bind 127.0.0.1:8080 --keep-alive 5 --worker-class asyncio --workers 4"
stdoutLogEnabled="true" stdoutLogFile="C:\logs\python.log" startupTimeLimit="120" requestTimeout="00:05:00"
startupRetryCount="3" processesPerApplication="1">
</httpPlatform>
<httpErrors errorMode="Detailed" />
</system.webServer>
</configuration>
</code></pre>
<p>If I stop IIS and I run the command below it works fine, it's only when running it inside IIS.</p>
<pre class="lang-bash prettyprint-override"><code>C:\inetpub\wwwroot\FastAPI\Scripts\python.exe -m hypercorn app.main:app --bind 127.0.0.1:8080 --keep-alive 5 --worker-class asyncio --workers 4
</code></pre>
<p>I also tried to grant IIS_IUSRS access to python.exe but it didn't make any difference, it was suggested in <a href="https://docs.lextudio.com/blog/running-flask-web-apps-on-iis-with-httpplatformhandler/#the-infinite-loading" rel="nofollow noreferrer">this post</a> as when calling the API it hangs forever</p>
<p>I suspect it's something to do with the IIS port binding: if I change the web.config to point at another port (e.g. 8000), different from the binding port in IIS, then I'm able to access the API (e.g. by navigating to http://localhost:8000). It feels like IIS is already using the port when hypercorn tries to bind to it.
<a href="https://i.sstatic.net/FpkWukVo.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/FpkWukVo.png" alt="enter image description here" /></a></p>
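One commonly documented HttpPlatformHandler detail that may apply here (an assumption, not verified against this setup): the handler assigns the child process a port through the `HTTP_PLATFORM_PORT` environment variable, so hard-coding 8080 can collide with IIS's own binding. A sketch of the relevant attribute:

```xml
<httpPlatform processPath="C:\inetpub\wwwroot\FastAPI\Scripts\python.exe"
              arguments="-m hypercorn app.main:app --bind 127.0.0.1:%HTTP_PLATFORM_PORT% --keep-alive 5 --worker-class asyncio --workers 4"
              stdoutLogEnabled="true" stdoutLogFile="C:\logs\python.log" />
```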
|
<python><iis><fastapi><hypercorn>
|
2024-05-13 09:10:35
| 1
| 520
|
Juan Stoppa
|
78,471,154
| 1,820,480
|
Shipping Large Data With Python Package
|
<p>I am writing a Python library for scientific calculations. The user should have the possibility to try out these calculations on some test data that (ideally) ships with the Python package. For example:</p>
<pre><code>from mypackge.data import dataset1
from mypackage.science import do_stuff
ds = dataset1() # Downloaded on demand
# result = do_stuff(ds)
</code></pre>
<p>The test data are too large (10-100 MB) to be hosted in a GitHub repository. For development purposes, I am using dvc and hosting the data on a Google Cloud Storage bucket. However, this does not work in production.</p>
<p>Is there a Python library that automatically fetches datasets from a remote bucket? Or more generally speaking, are there any conventions and best practices when shipping test data with Python packages?</p>
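On the library question: `pooch` (used by scipy and scikit-image for exactly this) downloads a dataset on first use into a local cache, verified by hash. A stdlib-only sketch of the same idea (the cache path and function name are hypothetical):

```python
import hashlib
import urllib.request
from pathlib import Path
from typing import Optional

# Hypothetical per-package cache directory
CACHE_DIR = Path.home() / ".cache" / "mypackage"

def fetch(url: str, sha256: Optional[str] = None) -> Path:
    """Download `url` into a local cache on first use; reuse the file afterwards."""
    CACHE_DIR.mkdir(parents=True, exist_ok=True)
    dest = CACHE_DIR / Path(url).name
    if not dest.exists():
        urllib.request.urlretrieve(url, dest)
    if sha256 is not None:
        # Pinning a hash guards against a silently changed remote file
        digest = hashlib.sha256(dest.read_bytes()).hexdigest()
        if digest != sha256:
            dest.unlink()
            raise ValueError("checksum mismatch for " + url)
    return dest
```

`dataset1()` in the package would then be a thin wrapper that calls `fetch()` with the bucket URL and loads the returned path.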
|
<python>
|
2024-05-13 09:04:07
| 1
| 3,196
|
r0f1
|
78,471,104
| 4,777,670
|
How to make a function wait for an async operation before executing further code?
|
<p>I have an async function <code>my_func</code> that performs two operations: <code>f</code> and <code>g</code>. The <code>f</code> operation sends data over a WebSocket and doesn't return anything. The <code>g</code> operation performs some computations and returns a value.</p>
<pre class="lang-py prettyprint-override"><code>async def my_func(self, a: str, b: str) -> str:
await f(a, b) # sends data on websocket
out = g(a, b)
return out
</code></pre>
<p>The problem is that <code>g</code> executes immediately after <code>f</code> is called, without waiting for the WebSocket operation to complete. I need <code>g</code> to execute only after <code>f</code> has finished sending the data over the WebSocket.</p>
<p>How can I modify this code to make <code>g</code> wait for the async operation <code>f</code> to complete before executing?</p>
<p>Please note that <code>f</code> doesn't return anything, so I can't use the standard <code>await</code> on its return value.</p>
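For reference: if `f` is a proper coroutine, `await f(a, b)` already suspends `my_func` until `f` has run to completion; awaiting does not depend on `f` returning a value. A runnable sketch demonstrating the ordering (a sleep stands in for the websocket send):

```python
import asyncio

order = []

async def f(a, b):
    await asyncio.sleep(0.05)  # stand-in for the websocket send
    order.append("f finished")

def g(a, b):
    order.append("g ran")
    return a + b

async def my_func(a, b):
    await f(a, b)  # my_func is suspended here until f has completed
    return g(a, b)

result = asyncio.run(my_func("x", "y"))
```

If `g` really runs before the data is on the wire, the likely culprit is that `f` schedules the send (e.g. via `create_task`) rather than awaiting it internally.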
|
<python><websocket><python-asyncio><fastapi>
|
2024-05-13 08:54:26
| 0
| 3,620
|
Saif
|
78,471,100
| 108,390
|
Check for equality across N (N>2) columns horizontally in Polars
|
<p>Assume I have the following Polars DataFrame:</p>
<pre><code>all_items = pl.DataFrame(
{
"ISO_codes": ["fin", "nor", "eng", "eng", "swe"],
"ISO_codes1": ["fin", "nor", "eng", "eng", "eng"],
"ISO_codes2": ["fin", "ice", "eng", "eng", "eng"],
"OtherColumn": ["1", "2", "3", "4", "5"],
})
</code></pre>
<p>How can I implement a method like check_for_equality along the lines of</p>
<pre><code>def check_for_equality(all_items, columns_to_check_for_equality):
return all_items.with_columns(
pl.col_equals(columns_to_check_for_equality).alias("ISO_EQUALS")
)
</code></pre>
<p>So that when I call it:</p>
<pre><code>columns_to_check_for_equality = ["ISO_codes", "ISO_codes1", "ISO_codes2"]
resulting_df = check_for_equality(all_items, columns_to_check_for_equality)
</code></pre>
<p>I achieve the following:</p>
<pre><code>resulting_df == pl.DataFrame(
{
"ISO_codes": ["fin", "nor", "eng", "eng", "swe"],
"ISO_codes1": ["fin", "nor", "eng", "eng", "eng"],
"ISO_codes2": ["fin", "ice", "eng", "eng", "eng"],
"OtherColumn": ["1", "2", "3", "4", "5"],
"ISO_EQUALS": [True, False, True, True, False],
})
</code></pre>
<p>Note that I do not "know" the column names when doing the actual check, and the number of columns can vary between calls.</p>
<p>Is there something like "col_equals" in the Polars API?</p>
|
<python><python-polars>
|
2024-05-13 08:53:16
| 2
| 1,393
|
Fontanka16
|
78,471,073
| 5,379,182
|
Xray scan shows pip vulnerability in Docker although pip is not installed in the image
|
<p>When I scan my Docker image for vulnerabilities, Xray detects <a href="https://nvd.nist.gov/vuln/detail/CVE-2018-20225" rel="nofollow noreferrer">CVE-2018-20225</a>
which is raised when using an extra index url in pip. However in my image I completely remove <code>pip</code> after the packages have been installed</p>
<pre><code>FROM ubuntu:23.04 AS base
RUN python3 -m venv venv && \
. venv/bin/activate
# ... install the dependencies into virtual environment ...
FROM base AS final
RUN apt-get remove --purge -y python3-pip && \
apt-get autoremove --purge -y && \
apt-get clean && \
rm -rf venv/lib/python3.11/site-packages/pip* && \
rm -rf venv/bin/pip*
</code></pre>
<p>When I attach a shell on the container and execute <code>pip</code> it shows "command not found".</p>
<p>Why would the vulnerability be raised when pip is not available in the final layer of the docker image?</p>
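A plausible explanation (an assumption; Xray's layer handling isn't shown here): deleting files in a later Dockerfile instruction does not remove them from earlier layers, and scanners typically inspect every layer of the image. The usual sketch is to copy only the cleaned artifacts into a fresh final stage (paths illustrative):

```dockerfile
FROM ubuntu:23.04 AS build
# ... create the venv, pip install dependencies, then strip pip from the venv ...

FROM ubuntu:23.04 AS final
# Only the cleaned venv is copied, so no layer of the final image ever contained pip
COPY --from=build /venv /venv
```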
|
<python><docker><pip>
|
2024-05-13 08:49:14
| 1
| 3,003
|
tenticon
|
78,470,913
| 10,710,625
|
Generate new Column in Dataframe with Modulo of other Column
|
<p>I would like to create a new column "Day2" which takes the second digit of the column named "Days": so if Days equals 35, "Day2" should be 5. I tried this, but it's not working:</p>
<pre><code>DF["Day2"] = DF["Days"].where(
DF["Days"] < 10,
(DF["Days"] / 10 % 10).astype(int),
)
</code></pre>
<p>It seems it's taking the first digit but never the second one. Can someone help?</p>
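For reference: `(Days / 10) % 10` extracts the *tens* digit (35 -> 3), which matches the "first digit" symptom; the last digit of an integer is simply `Days % 10`. A tiny sketch with made-up values:

```python
import pandas as pd

DF = pd.DataFrame({"Days": [5, 35, 47]})

# n % 10 yields the last digit; for single-digit values it is the value itself,
# so no separate where() branch is needed
DF["Day2"] = DF["Days"] % 10
```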
|
<python><pandas><dataframe><modulo>
|
2024-05-13 08:17:23
| 2
| 739
|
the phoenix
|
78,470,907
| 11,935,809
|
EdgeDB in Docker starts and then stops after migrations are applied
|
<p>I am facing an odd issue when running the <code>edgedb</code> <code>server</code> in Docker: it stops after migrations are applied, without any useful info about what is happening.</p>
<ol>
<li>Docker-compose:</li>
</ol>
<pre class="lang-yaml prettyprint-override"><code>version: '3.9'
volumes:
edgedb_meta:
edgedb_data:
services:
wissance_ubs_db:
container_name: wissance_ubs_db
volumes:
- edgedb_meta:/dbschema
- edgedb_data:/var/lib/edgedb/data
build:
context: .
dockerfile: ./Edgedb.Dockerfile
# stdin_open: true
# tty: true
env_file: ./deploy_settings.env
ports:
- "5656:5656"
expose:
- "5656"
wissance_ubs_webapi_backend:
container_name: wissance_ubs_webapi_backend
depends_on:
wissance_ubs_db:
condition: service_started
volumes:
- edgedb_meta:/root/.config/edgedb/credentials
build:
context: .
dockerfile: ./Wissance.Ubs.Dockerfile
stdin_open: true
tty: true
env_file: ./deploy_settings.env
ports:
- "8228:8228"
</code></pre>
<ol start="2">
<li><code>Edgedb Dockerfile</code></li>
</ol>
<pre class="lang-bash prettyprint-override"><code>FROM edgedb/edgedb:4.7 AS base
VOLUME /dbschema
#WORKDIR /dbschema/
WORKDIR /opt
# according to doc above migrations should be placed to /dbschema/migrations
COPY ./src/Wissance.Ubs/Wissance.Ubs.Data/dbschema /dbschema
# copy toml
COPY ./src/Wissance.Ubs/Wissance.Ubs.Data/edgedb.toml /dbschema
COPY ./docs/data_init/init.eql /dbschema
# COPY PY CFG MAKER
COPY ./utils/edgedb_cfg_writer.py /dbschema
# PYTHON INSTALLING IN /usr/bin
RUN apt-get update && apt-get install python3 -y
# copy credentials file to volume
# configure EdgeDb
ARG EDGEDB_SERVER_PASSWORD=_mySuperPass_
ARG EDGEDB_SERVER_INSTANCE_NAME=Wissance_Ubs
RUN /usr/bin/python3 /dbschema/edgedb_cfg_writer.py
RUN cp /root/.config/edgedb/credentials/${EDGEDB_SERVER_INSTANCE_NAME}.json /dbschema
CMD ["edgedb-server"]
</code></pre>
<ol start="3">
<li>Part of env file:</li>
</ol>
<pre><code>EDGEDB_DOCKER_APPLY_MIGRATIONS=always
EDGEDB_DOCKER_LOG_LEVEL=debug
EDGEDB_DOCKER_BOOTSTRAP_TIMEOUT_SEC=3600
EDGEDB_SERVER_PASSWORD=_mySuperPass_
EDGEDB_SERVER_TLS_CERT_MODE=generate_self_signed
EDGEDB_SERVER_ADMIN_UI=enabled
EDGEDB_SERVER_SECURITY=insecure_dev_mode
EDGEDB_SERVER_HTTP_ENDPOINT_SECURITY=optional
EDGEDB_SERVER_BINARY_ENDPOINT_SECURITY=optional
EDGEDB_SERVER_INSTANCE_NAME=Wissance_Ubs
EDGEDB_SERVER_EMIT_SERVER_STATUS=true
EDGEDB_SERVER_COMPILER_POOL_SIZE=64
</code></pre>
<p>Screenshot with result:</p>
<p><a href="https://i.sstatic.net/ZN4Z9dmS.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/ZN4Z9dmS.png" alt="enter image description here" /></a></p>
<p>Text log:</p>
<pre><code>wissance_ubs_db | Applied m1e4lbd2auubps6tx5sy5emdys4y2hrll3owcrxxxa66wsidu223fq (00001.edgeql)
wissance_ubs_db | INFO 45 2024-05-13T06:41:30.617 edb.server: Connection established to backend database: edgedb
wissance_ubs_webapi_backend | dbug: Quartz.Core.QuartzSchedulerThread[0]
wissance_ubs_webapi_backend | Batch acquisition of 0 triggers
wissance_ubs_db | INFO 45 2024-05-13T06:41:35.558 edb.server: introspecting database 'edgedb'
wissance_ubs_db | INFO 45 2024-05-13T06:41:36.319 edb.server: Connection established to backend database: edgedb
wissance_ubs_db | INFO 45 2024-05-13T06:41:37.021 edb.server: Received signal: 15.
wissance_ubs_db | INFO 45 2024-05-13T06:41:39.167 edb.server: Shutting down.
wissance_ubs_db | /usr/local/bin/docker-entrypoint-funcs.sh: line 1327: 45 Killed gosu "${EDGEDB_SERVER_UID}" env -i "${pg_vars[@]}" "${EDGEDB_SERVER_BINARY}" "${server_opts[@]}"
</code></pre>
<p>I would like to fix this quickly if possible, or at least understand what is happening (the logs give no exact reason).</p>
|
<python><docker><edgedb>
|
2024-05-13 08:15:59
| 0
| 2,914
|
Michael Ushakov
|
78,470,617
| 2,202,989
|
pyModbusTCP packet buffer when no connection eats memory
|
<p>I have working code that I use to send ModbusTCP commands to a device that reads them and performs actions. When the network is fine, it works as expected. But if the link is down, the packets are stored and buffered somewhere, either in pyModbusTCP or in the Linux (Debian 9 based) buffers, and when the connection is re-established the packets are flushed, causing some unintended behaviour.</p>
<p>Another thing that I've noticed is that the buffer eats RAM, and I eventually run out (the machine has only 256 MB).</p>
<p>The Python script that generates the ModbusTCP commands is run every minute via crontab, and there are approx. 10 seconds after each run until it starts again. Is there a way to clear the buffer, from either Linux commands or the Python script itself, during this time?</p>
<p>I've tried having the auto close option set as True:</p>
<pre><code>mb_client = ModbusClient(host=host, auto_open=True, auto_close = True, port=port, debug=False)
</code></pre>
<p>But the buffering still happens.</p>
|
<python><linux><tcp><pymodbustcp>
|
2024-05-13 07:18:16
| 0
| 383
|
Nyxeria
|
78,470,398
| 240,443
|
basedpyright: a decorated function inside a function/method "is not accessed"
|
<p>(Not relevant to the question itself, but the example uses <code>quart-trio</code> package, which is similar in both interface and function to <code>flask</code>.)</p>
<p>This code has no errors:</p>
<pre><code>from quart_trio import QuartTrio
app = QuartTrio(__name__)
@app.route('/')
async def index() -> str:
return "ok"
app.run()
</code></pre>
<p>However, this code, which I assume would be identical, reports an issue:</p>
<pre><code>from quart_trio import QuartTrio
def create_app() -> QuartTrio:
app = QuartTrio(__name__)
@app.route('/')
async def index() -> str: # error: Function "index" is not accessed (reportUnusedFunction)
return "ok"
return app
app = create_app()
app.run()
</code></pre>
<p>The same happens inside a method:</p>
<pre><code>from quart_trio import QuartTrio
class Server:
def __init__(self) -> None:
super().__init__()
self.app = QuartTrio(__name__)
@self.app.route('/')
async def index() -> str: # error: Function "index" is not accessed (reportUnusedFunction)
return "ok"
def serve(self) -> None:
self.app.run()
Server().serve()
</code></pre>
<p>Anyone has an explanation? Is there a better way to write it that does not trigger the checker? Or is this something that should be considered a bug in basedpyright, and reported as an issue?</p>
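For what it's worth, a quart-free reproduction of the pattern (names hypothetical) shows why the checker complains: the nested function's name is never read in its enclosing scope, and the only reference is the one the decorator stores in a registry, which the checker cannot see. pyright's standard per-line suppression comment is shown inline:

```python
# A toy decorator that registers the function instead of its name being used
routes = {}

def route(path):
    def register(fn):
        routes[path] = fn  # the only reference to the function lives here
        return fn
    return register

def create_app():
    @route("/")
    def index() -> str:  # pyright: ignore[reportUnusedFunction]
        return "ok"
    return routes

app = create_app()
```

At module level the checker treats decorated functions as exported, which is why the first snippet passes while the nested versions do not.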
|
<python><python-typing><pyright>
|
2024-05-13 06:24:00
| 1
| 199,494
|
Amadan
|
78,470,258
| 1,581,090
|
How to suppress an exception output in python code that is using pyshark?
|
<p>Running the following python 3.10.12 code on Ubuntu 22.04.4 using <code>pyshark</code> 0.6 seems to ignore the <code>try-except</code> statement of python:</p>
<pre><code>capture = pyshark.FileCapture(filename)
try:
for packet in capture:
pass
except:
print("exception")
</code></pre>
<p>Even though I enclosed the reading loop in a <code>try-except</code> statement, I get the following output:</p>
<pre><code>exception
Exception ignored in: <function Capture.__del__ at 0x79b5be57c3a0>
Traceback (most recent call last):
File "/home/alex/.local/lib/python3.10/site-packages/pyshark/capture/capture.py", line 405, in __del__
self.close()
File "/home/alex/.local/lib/python3.10/site-packages/pyshark/capture/capture.py", line 393, in close
self.eventloop.run_until_complete(self.close_async())
File "/usr/lib/python3.10/asyncio/base_events.py", line 649, in run_until_complete
return future.result()
File "/home/alex/.local/lib/python3.10/site-packages/pyshark/capture/capture.py", line 397, in close_async
await self._cleanup_subprocess(process)
File "/home/alex/.local/lib/python3.10/site-packages/pyshark/capture/capture.py", line 379, in _cleanup_subprocess
raise TSharkCrashException(f"TShark (pid {process.pid}) seems to have crashed (retcode: {process.returncode}).\n"
pyshark.capture.capture.TSharkCrashException: TShark (pid 8097) seems to have crashed (retcode: 2).
Last error line: tshark: The file "/home/alex/Work/Data/test.pcap" appears to have been cut short in the middle of a packet.
Try rerunning in debug mode [ capture_obj.set_debug() ] or try updating tshark.
</code></pre>
<p>How to suppress this output?</p>
<p>To clarify: I am not interested in <em>fixing</em> any problem or error. I am just interested in <em>suppressing</em> this output of the exception!</p>
<p>Maybe there is a way to have <code>pyshark</code> ignore errors?
Or can I redirect all error output from <code>pyshark</code>?</p>
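One detail that may explain it (hedged: this is how CPython reports destructor errors generally, not pyshark-specific behaviour): the "Exception ignored in ..." text is emitted through `sys.unraisablehook` for exceptions raised in `__del__`, so no `try-except` around the loop can catch it. A pyshark-free sketch of silencing that hook:

```python
import io
import sys

# Exceptions raised in __del__ bypass normal propagation and go to
# sys.unraisablehook; replacing the hook suppresses the printed traceback
sys.unraisablehook = lambda unraisable: None

class Crashy:
    def __del__(self):
        raise RuntimeError("boom")

captured = io.StringIO()
old_stderr = sys.stderr
sys.stderr = captured
Crashy()  # temporary is destroyed immediately; __del__ raises, hook swallows it
sys.stderr = old_stderr
```

Note this silences *all* unraisable exceptions in the process, so it is a blunt instrument compared to closing the capture explicitly inside the `try` block.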
|
<python><python-3.x><exception><pyshark>
|
2024-05-13 05:47:05
| 2
| 45,023
|
Alex
|