QuestionId (int64, 74.8M to 79.8M) | UserId (int64, 56 to 29.4M) | QuestionTitle (string, 15 to 150 chars) | QuestionBody (string, 40 to 40.3k chars) | Tags (string, 8 to 101 chars) | CreationDate (string, 2022-12-10 09:42:47 to 2025-11-01 19:08:18) | AnswerCount (int64, 0 to 44) | UserExpertiseLevel (int64, 301 to 888k) | UserDisplayName (string, 3 to 30 chars, nullable) |
|---|---|---|---|---|---|---|---|---|
79,231,097
| 14,463,396
|
Incorrect syntax near 'CAST' when using pandas.to_sql
|
<p>I'm trying to write some code to update an SQL table from the values in a pandas DataFrame. The code I'm using to do this is:</p>
<pre><code>df.to_sql(name='streamlit_test', con=engine, schema='dbo', if_exists='replace', index=False)
</code></pre>
<p>I'm using an SQLAlchemy engine. However, I'm getting the error:</p>
<pre><code>ProgrammingError: (pyodbc.ProgrammingError) ('42000', "[42000] [Microsoft][ODBC Driver 17 for SQL Server][SQL Server]Incorrect syntax near 'CAST'. (102) (SQLExecDirectW); [42000] [Microsoft][ODBC Driver 17 for SQL Server][SQL Server]Statement(s) could not be prepared. (8180)")
[SQL: SELECT cast(com.value as nvarchar(max)) FROM fn_listextendedproperty('MS_Description', 'schema', CAST(? AS NVARCHAR(max)), 'table', CAST(? AS NVARCHAR(max)), NULL, NULL ) as com; ]
[parameters: ('dbo', 'streamlit_test')]
</code></pre>
<p>I thought it might be something to do with data types, so I've tried changing the dtypes of my dataframe to match the corresponding data types in the table I'm trying to write to (using this lookup: <a href="https://learn.microsoft.com/en-us/sql/machine-learning/python/python-libraries-and-data-types?view=sql-server-ver16" rel="nofollow noreferrer">https://learn.microsoft.com/en-us/sql/machine-learning/python/python-libraries-and-data-types?view=sql-server-ver16</a>), but I'm still getting the same error. I haven't managed to find anyone else with a similar problem through googling, so any help/pointers are greatly appreciated!</p>
<p><strong>Edit</strong>
I've tried using:</p>
<pre><code>from sqlalchemy.dialects.mssql import BIGINT, FLOAT, TEXT, BIT

df.to_sql(name='streamlit_test', con=engine, schema='dbo',
          if_exists='replace', index=False,
          dtype={'col1': BIGINT, 'col2': FLOAT, 'col3': TEXT,
                 'Tickbox': BIT, 'Comment': TEXT})
</code></pre>
<p>so that the dtypes match what they are in the created table, but I still get the same error. For more context, <code>to_sql</code> works fine and saves the dataframe to a table if the table doesn't already exist, and if I change <code>if_exists='replace'</code> to <code>if_exists='append'</code>, that also works, but when trying to replace the data in the existing table, I get the 42000 error.</p>
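<p>For what it's worth, since <code>append</code> works and only the replace path fails, one workaround consistent with the observations above is to clear the table manually and then append. A sketch, demonstrated here against in-memory SQLite since I can't test against SQL Server 2008 (names are placeholders):</p>

```python
import pandas as pd
from sqlalchemy import create_engine, text

# Demonstrated against in-memory SQLite; the same delete-then-append
# pattern avoids the table-metadata query that if_exists='replace' runs.
engine = create_engine("sqlite://")
df = pd.DataFrame({"a": [1, 2, 3]})
df.to_sql("streamlit_test", engine, index=False)

with engine.begin() as conn:
    conn.execute(text("DELETE FROM streamlit_test"))
df.to_sql("streamlit_test", engine, if_exists="append", index=False)

n = pd.read_sql("SELECT COUNT(*) AS n FROM streamlit_test", engine)["n"][0]
print(n)  # 3
```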
<p>My SQL Server version (as reported in SSMS) is: <code>Microsoft SQL Server 2008 R2 (SP3-GDR) (KB4057113) - 10.50.6560.0 (X64) Dec 28 2017 15:03:48 Copyright (c) Microsoft Corporation Enterprise Edition (64-bit) on Windows NT 6.1 <X64> (Build 7601: Service Pack 1) (Hypervisor)</code></p>
|
<python><sql><sql-server><pandas><sqlalchemy>
|
2024-11-27 16:30:00
| 1
| 3,395
|
Emi OB
|
79,231,076
| 495,990
|
Remove rows of one pandas df using another df, but enable "wildcard" behavior when second df is missing values
|
<p>I have a df of hundreds of thousands of rows, which can contain errors. A team is manually reviewing and identifying rows to purge, and I'm trying to enable flexible removal of the flagged combinations.</p>
<p>There are three key values, and my hope was to enable <em>any</em> of these values to be populated, depending on the removal scope. I want to remove rows matching the broadest definition, treating empty columns per row as if they are wildcards.</p>
<p>Here is a repro of one approach, but I'm wondering if there's a more vectorized/elegant method someone will suggest?</p>
<pre><code>import pandas as pd

target_df = pd.DataFrame({'id_1': ['1', '1', '1', '1', '2', '4', '4'],
                          'id_2': ['2', '2', '3', '3', '3', '5', '5'],
                          'attr': ['A', 'B', 'A', 'B', 'C', 'C', 'D']})
remove_df = pd.DataFrame({'id_1': ['1', '1', '4'],
                          'id_2': ['2', '3', None],
                          'attr': ['A', None, None]})

result_df = target_df.copy()
for _, row in remove_df.iterrows():
    row = row.loc[row.notna()]
    row = row.to_frame().T.reset_index(drop=True)
    filter_cols = row.columns
    mask = result_df[filter_cols].eq(row.values).all(axis=1)
    result_df = result_df[~mask]
result_df
</code></pre>
<pre><code>  id_1 id_2 attr
1    1    2    B
4    2    3    C
</code></pre>
<p>Some other thoughts:</p>
<ul>
<li><p>One thought is to "chunk" these, applying the mask per unique combination of non-NA values in <code>remove_df</code>? That also seems tedious to handle, as there are 7 permutations of possible NA/non-NA columns.</p>
</li>
<li><p>figuring out which rows to remove based on <code>merge()</code> is intriguing. I think this would require scanning <code>target_df</code> to expand <code>remove_df</code> with all possible values when its value is <code>None</code>. That, or we're back to the many logic branches and we merge using <code>on=[cols]</code>, once per combination of non-NA columns.</p>
</li>
<li><p>I could build tuples of non-NA column combos ahead of time, then iterate through those? Then I'm doing this at most 7 times vs. one per row.</p>
</li>
</ul>
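<p>For what it's worth, the <code>merge()</code> idea can skip wildcard expansion entirely: group <code>remove_df</code> by its pattern of populated columns and run one indicator merge per pattern (at most 7 here, as noted above). A sketch:</p>

```python
import pandas as pd

def remove_with_wildcards(target_df, remove_df):
    """Drop rows of target_df matching remove_df, treating NA cells as wildcards."""
    keep = pd.Series(True, index=target_df.index)
    # encode each removal row's pattern of populated columns, e.g. "110"
    patterns = remove_df.notna().astype(int).astype(str).agg("".join, axis=1)
    for pattern, chunk in remove_df.groupby(patterns):
        cols = [c for c, flag in zip(remove_df.columns, pattern) if flag == "1"]
        # left merge with indicator: "both" marks target rows hit by this pattern
        hit = target_df.merge(chunk[cols].drop_duplicates(), on=cols,
                              how="left", indicator=True)["_merge"].eq("both")
        keep &= ~hit.to_numpy()
    return target_df[keep]

target_df = pd.DataFrame({'id_1': ['1', '1', '1', '1', '2', '4', '4'],
                          'id_2': ['2', '2', '3', '3', '3', '5', '5'],
                          'attr': ['A', 'B', 'A', 'B', 'C', 'C', 'D']})
remove_df = pd.DataFrame({'id_1': ['1', '1', '4'],
                          'id_2': ['2', '3', None],
                          'attr': ['A', None, None]})
result = remove_with_wildcards(target_df, remove_df)
print(result)
```

<p>This runs at most one merge per NA/non-NA combination rather than one pass per removal row, and reproduces the output shown above.</p>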
<p>There probably won't be <em>that</em> many of these removals (guessing hundreds), so if the above is pythonic enough, I'll let it be.</p>
<p>I'm mainly asking as I didn't see this exact question, and it's an interesting exercise. I'm intermediate in python skill, so I'd love to see how others would approach this.</p>
<hr />
<p>Related:</p>
<ul>
<li>this question is close, asking how to merge while ignoring <code>NaN</code>: <a href="https://stackoverflow.com/questions/68917578/pandas-merging-on-multi-columns-while-ignoring-nan">Pandas merging on multi columns while ignoring NaN</a></li>
<li>similar idea, merging n times, then concatenating results: <a href="https://stackoverflow.com/questions/45549032/how-to-merge-two-data-frames-while-excluding-the-nan-value-column">How to merge two data frames while excluding the NaN value column?</a></li>
<li>credit for the <code>mask</code> approach above: <a href="https://stackoverflow.com/questions/68552063/pandas-isin-comparison-to-multiple-columns-not-including-index">pandas isin comparison to multiple columns, not including index</a></li>
<li>very similar to my initial thought, but didn't love filling in all the permutations: <a href="https://stackoverflow.com/questions/47472207/how-to-merge-with-wildcard-pandas">How to merge with wildcard? - Pandas</a></li>
</ul>
|
<python><pandas><dataframe><filter>
|
2024-11-27 16:22:01
| 2
| 10,621
|
Hendy
|
79,230,945
| 2,275,171
|
Format string template defined outside of loop?
|
<p>I'm getting odd results when defining a format string outside of a loop. I want to define a variable containing my format string like:</p>
<pre><code>MYSTRING = f"filepath/{year}/{month}"
</code></pre>
<p>If I define it outside the loop, I get the same value repeated every iteration. If I copy/paste the string inside the loop, it works fine.</p>
<p>This one goes all the way back to 2023 as expected:</p>
<pre><code>last_n_months = 12
months = date.today().year * 12 + date.today().month - 1  # Months since year 0 minus 1
tuples = [((months - i) // 12, (months - i) % 12 + 1) for i in range(last_n_months)]
for item in tuples:
    year = str(item[0])
    month = str(item[1])
    # string used directly with the format() function
    print(f"xyz/Year={year}/Month={month}".format())
------------------------
xyz/Year=2024/Month=11
xyz/Year=2024/Month=10
xyz/Year=2024/Month=9
xyz/Year=2024/Month=8
xyz/Year=2024/Month=7
xyz/Year=2024/Month=6
xyz/Year=2024/Month=5
xyz/Year=2024/Month=4
xyz/Year=2024/Month=3
xyz/Year=2024/Month=2
xyz/Year=2024/Month=1
xyz/Year=2023/Month=12
</code></pre>
<p>But this one repeats the same year/month for each:</p>
<pre><code># the format string
myformatstring = f"xyz/Year={year}/Month={month}"

last_n_months = 12
months = date.today().year * 12 + date.today().month - 1  # Months since year 0 minus 1
tuples = [((months - i) // 12, (months - i) % 12 + 1) for i in range(last_n_months)]
for item in tuples:
    year = str(item[0])
    month = str(item[1])
    # use the format string variable from outside the loop
    print(myformatstring.format())
------------------------
xyz/Year=2023/Month=12
xyz/Year=2023/Month=12
xyz/Year=2023/Month=12
xyz/Year=2023/Month=12
xyz/Year=2023/Month=12
xyz/Year=2023/Month=12
xyz/Year=2023/Month=12
xyz/Year=2023/Month=12
xyz/Year=2023/Month=12
xyz/Year=2023/Month=12
xyz/Year=2023/Month=12
xyz/Year=2023/Month=12
</code></pre>
<p>Any ideas?</p>
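<p>For reference, an <code>f</code>-string is evaluated immediately, on the line where it's written, using whatever <code>year</code> and <code>month</code> hold at that moment; a plain string with <code>{}</code> placeholders defers substitution until <code>.format()</code> is called. A sketch of the deferred version:</p>

```python
from datetime import date

# No f prefix: the placeholders survive until .format() fills them in.
template = "xyz/Year={year}/Month={month}"

last_n_months = 3
months = date.today().year * 12 + date.today().month - 1
tuples = [((months - i) // 12, (months - i) % 12 + 1) for i in range(last_n_months)]
for year, month in tuples:
    print(template.format(year=year, month=month))
```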
|
<python>
|
2024-11-27 15:44:31
| 2
| 680
|
mateoc15
|
79,230,923
| 3,048,363
|
RegEx in Python
|
<p>I'm trying to search for a word (single or multiple occurrences) in a sentence.</p>
<p>Example</p>
<pre><code>strings = ["He likes walking","I like that","He said he liked the movie"]
</code></pre>
<p>If I want to search for "like" and get back each flavour of "like" that appears, what should be done?</p>
<p>I tried the following:</p>
<pre><code>keyword = "like"
for string in strings:
    p = regex.search(keyword + ".*", string, flags=regex.IGNORECASE)
    print(p.allcaptures())
    print(p.group())
</code></pre>
<p>The problem with both of these is that they return all the words after "like" as well.</p>
<p>The output I'm expecting is <code>["likes", "like", "liked"]</code>.</p>
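<p>For reference, a <code>\w*</code> after the keyword matches only the remainder of that word rather than the rest of the sentence; a sketch using the standard-library <code>re</code> module:</p>

```python
import re

strings = ["He likes walking", "I like that", "He said he liked the movie"]
# \w* consumes only word characters after "like", so the match stops at the
# end of the word instead of running to the end of the sentence.
pattern = re.compile(r"like\w*", re.IGNORECASE)
matches = [m.group() for s in strings for m in pattern.finditer(s)]
print(matches)  # ['likes', 'like', 'liked']
```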
|
<python><regex>
|
2024-11-27 15:36:48
| 1
| 1,091
|
A3006
|
79,230,705
| 2,155,362
|
How to access postgresql via pgsql in python
|
<p>I installed pgsql via</p>
<pre><code>pip install pgsql
</code></pre>
<p>Then I wrote code following the example at <a href="https://pypi.org/project/pgsql/" rel="nofollow noreferrer">https://pypi.org/project/pgsql/</a>:</p>
<pre><code>import pgsql

with pgsql.Connection(("localhost", 5432), "postgres", "password") as db:
    report_items = db.prepare('SELECT * FROM "declare_report"')
    print(len(report_items))
    report_items.close()
</code></pre>
<p>And I got the error like below:</p>
<pre><code>pgsql.Error: relation "declare_report" does not exist (42P01)
</code></pre>
<p>I'm sure the table has been created, and I can read and insert via pgAdmin.
I also tried</p>
<pre><code>SELECT * FROM "public.declare_report"
</code></pre>
<p>The error is the same. How can I fix it?</p>
<p>PS: I just need a client to do CRUD. I tried psycopg2, but it seems I must install PostgreSQL before installing psycopg2. Are there any lightweight PostgreSQL clients for Python 3.9?</p>
<p>My OS is Windows 11, with Python 3.9.</p>
|
<python><postgresql>
|
2024-11-27 14:29:44
| 1
| 1,713
|
user2155362
|
79,230,704
| 1,406,168
|
Writing to application insights from an azure python function app using opentelemetry
|
<p>So I created a function app, and logging works as expected, writing entries to Application Insights. But as I need custom dimensions, I included OpenTelemetry. However, I now get no logging sent to Application Insights, and no errors. Any pointers?</p>
<p>function_app.py:</p>
<pre><code>import datetime
import logging

import azure.functions as func
from azure.identity import DefaultAzureCredential, ManagedIdentityCredential
from azure.storage.blob import BlobClient, BlobServiceClient, ContainerClient
from azure.monitor.opentelemetry import configure_azure_monitor
from opentelemetry import trace

logger = logging.getLogger(__name__)
credential = DefaultAzureCredential()
configure_azure_monitor(
    credential=credential,
)
tracer = trace.get_tracer(__name__)
app = func.FunctionApp()

@app.timer_trigger(schedule="0 0 10 * * *", arg_name="myTimer", run_on_startup=True, use_monitor=False)
def timer_trigger(myTimer: func.TimerRequest) -> None:
    with tracer.start_as_current_span("hello with aad managed identity"):
        logger.warning("Warning sent")
        properties = {'custom_dimensions': {'key_1': 'value_1', 'key_2': 'value_2'}}
        logging.warning('Warning with props', extra=properties)
</code></pre>
<p>local.settings.json:</p>
<pre><code>{
  "IsEncrypted": false,
  "Values": {
    "AzureWebJobsStorage": "",
    "FUNCTIONS_WORKER_RUNTIME": "python",
    "APPLICATIONINSIGHTS_CONNECTION_STRING": "InstrumentationKey=xx-xx-xx-xx-xx;IngestionEndpoint=https://westeurope-5.in.applicationinsights.azure.com/;LiveEndpoint=https://westeurope.livediagnostics.monitor.azure.com/;ApplicationId=xx-xx-xx-xx-xx"
  }
}
</code></pre>
|
<python><azure><azure-functions><azure-application-insights>
|
2024-11-27 14:29:34
| 1
| 5,363
|
Thomas Segato
|
79,230,632
| 6,195,473
|
I used a 2 workers pool to split a sum in a function with Python multiprocessing but the timing doesn't speed up, is there something I'm missing?
|
<p>The Python code file is provided below. I'm using Python 3.10.12 on Linux Mint 21.3 (in case any of this info is needed). The version with a pool of 2 workers takes more time than the one without any multiprocessing. What am I doing wrong here?</p>
<pre><code>import multiprocessing
import time
import random

def fun1( x ):
    y = 0
    for i in range( len( x ) ):
        y = y + x[i]
    return( y )

def fun2( x ):
    p = multiprocessing.Pool( 2 )
    y1, y2 = p.map( fun1, [ x[ : len( x ) // 2 ], x[ len( x ) // 2 : ] ] )
    y = y1 + y2
    return( y )

x = [ random.random() for i in range( 10 ** 6 ) ]
st = time.time()
ans = fun1( x )
et = time.time()
print( f"time = {et - st}, answer = {ans}." )
st = time.time()
ans = fun2( x )
et = time.time()
print( f"time = {et - st}, answer = {ans}." )

x = [ random.random() for i in range( 10 ** 7 ) ]
st = time.time()
ans = fun1( x )
et = time.time()
print( f"time = {et - st}, answer = {ans}." )
st = time.time()
ans = fun2( x )
et = time.time()
print( f"time = {et - st}, answer = {ans}." )
</code></pre>
<p>Here is what I get in terminal.</p>
<pre><code>time = 0.043381452560424805, answer = 499936.40420325665.
time = 0.1324300765991211, answer = 499936.40420325927.
time = 0.4444568157196045, answer = 5000677.883536603.
time = 0.8388040065765381, answer = 5000677.883536343.
</code></pre>
<p>I also tried putting the <code>if __name__ == '__main__':</code> guard after <code>fun2</code> and before the rest; I get the same results in the terminal. I also tried Python 3.6.2 on a Codio server and got similar timings.</p>
<pre><code>time = 0.048882484436035156, answer = 499937.07655266096.
time = 0.15220355987548828, answer = 499937.0765526707.
time = 0.4848289489746094, answer = 4999759.127770024.
time = 1.4035391807556152, answer = 4999759.127769606.
</code></pre>
<p>I guess something is wrong with how I'm using <code>multiprocessing.Pool</code> rather than with Python itself, but I can't think of what. Any help will be appreciated. I expected a roughly two-fold speed-up from two workers, not a slowdown. If it's relevant, <code>multiprocessing.cpu_count()</code> reports 4 CPUs on the Codio server and 12 on my computer.</p>
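<p>For context, each <code>Pool.map</code> call has to pickle the list halves, ship them to the worker processes, and unpickle the results; for a task as cheap as summing floats, that serialization can cost more than the sum itself, which would explain the slowdown. A rough sketch separating the two costs (timings are machine-dependent, so treat them as illustrative):</p>

```python
import pickle
import time

x = [0.5] * 10**6

t0 = time.perf_counter()
total = sum(x)                 # the actual work
t1 = time.perf_counter()
blob = pickle.dumps(x)         # roughly what Pool.map pays to send one chunk
x_back = pickle.loads(blob)    # ...and what a worker pays to receive it
t2 = time.perf_counter()

print(f"sum: {t1 - t0:.4f}s, pickle round-trip: {t2 - t1:.4f}s")
```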
|
<python><python-3.x><parallel-processing><multiprocessing><python-multiprocessing>
|
2024-11-27 14:10:26
| 2
| 488
|
AmirHosein Sadeghimanesh
|
79,230,490
| 9,578,198
|
PVLIB fit Sandia module function - TypeError: expected 1D vector for x when calling polyfit
|
<p>I'm trying to use <code>pvlib.inverter.fit_sandia</code> to evaluate the coefficients for the Sandia inverter model</p>
<pre><code>Pac0 = 330 * 1000
Pnt = 20
sandia_params = pvlib.inverter.fit_sandia(ac_power,
                                          dc_power,
                                          dc_voltage,
                                          dc_voltage_level,
                                          Pac0, Pnt)
</code></pre>
<p><strong>ac_power</strong>, <strong>dc_power</strong>, are <em>float64</em> arrays with 18 elements</p>
<p><strong>dc_voltage</strong> is an array of 18 <em>int64</em> elements</p>
<p><strong>dc_voltage_level</strong> is a list of 18 strings with values that are either <em>'Vmin'</em>, <em>'Vnom'</em> or <em>'Vmax'</em> as specified in the <a href="https://pvlib-python.readthedocs.io/en/stable/reference/generated/pvlib.inverter.fit_sandia.html" rel="nofollow noreferrer">documentation</a>.</p>
<p>In particular, <strong>dc_voltage_level</strong> is</p>
<pre><code> ['Vmax', 'Vmax', 'Vmax', 'Vmax', 'Vmax', 'Vmax',
'Vnom', 'Vnom', 'Vnom', 'Vnom','Vnom', 'Vnom',
'Vmin', 'Vmin', 'Vmin', 'Vmin', 'Vmin', 'Vmin']
</code></pre>
<p>I'm getting the following error:</p>
<pre><code>d:\michele\documents modello-fisico\ plant
simulation\inverter_data.py:67: RuntimeWarning: invalid value
encountered in divide
ac_power_array / eff_array)
Traceback (most recent call last):
File D:\Michele\Documentsmodello-fisico\venv\lib\site-
packages\spyder_kernels\customize\utils.py:209 in
exec_encapsulate_locals
exec_fun(compile(code_ast, filename, "exec"), globals, None)
File d:\michele\documents\modello-fisico\gaibanella plant
simulation\inverter_data.py:103
sandia_params = pvlib.inverter.fit_sandia(ac_power,
File D:\Michele\Documents\modello-fisico\venv\lib\site-
packages\pvlib\inverter.py:530 in fit_sandia
c, b, a = polyfit(x, y, 2)
File D:\Michele\Documents\modello-fisico\venv\lib\site-
packages\numpy\polynomial\polynomial.py:1467 in polyfit
return pu._fit(polyvander, x, y, deg, rcond, full, w)
File D:\Michele\Documents\modello-fisico\venv\lib\site-
packages\numpy\polynomial\polyutils.py:599 in _fit
raise TypeError("expected 1D vector for x")
TypeError: expected 1D vector for x
</code></pre>
<p>from this part of the function</p>
<pre><code>for d in voltage_levels:
    x = dc_power[dc_voltage_level == d]
    y = ac_power[dc_voltage_level == d]
    # [2] STEP 3B
    print('x.shape returns:')
    print(x.shape)
    # fit a quadratic to (DC power, AC power)
    c, b, a = polyfit(x, y, 2)
</code></pre>
<p>Calling <code>x.shape</code> returns <code>(0, 18)</code></p>
<p>The <code>polyfit</code> function is imported with</p>
<pre><code>from numpy.polynomial.polynomial import polyfit # different than np.polyfit
</code></pre>
<p>As I understand it, <strong>x</strong> will always be a 1D vector with as many elements as the number of points considered, which obviously has to be more than 1 to evaluate the inverter efficiency.</p>
<p>I'm not really sure if this is a misuse of the function by my side or maybe a bug. Any help would be appreciated.</p>
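<p>A guess at the cause: if <code>dc_voltage_level</code> is a plain Python list, then <code>dc_voltage_level == d</code> evaluates to a single <code>False</code> rather than an elementwise mask, and NumPy indexing with that 0-d boolean yields the <code>(0, 18)</code> shape seen above. Converting the list to an array restores elementwise comparison; a minimal sketch of the difference:</p>

```python
import numpy as np

levels_list = ['Vmax', 'Vmax', 'Vnom', 'Vnom', 'Vmin', 'Vmin']
data = np.arange(6.0)

# a plain list compared to a string gives one bool, not a mask,
# and indexing with that 0-d False keeps zero rows:
print(data[levels_list == 'Vnom'].shape)  # (0, 6)

# an ndarray compares elementwise, producing a usable boolean mask:
levels = np.array(levels_list)
print(data[levels == 'Vnom'])  # [2. 3.]
```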
|
<python><numpy><pvlib>
|
2024-11-27 13:26:10
| 0
| 326
|
Michele
|
79,230,481
| 8,458,083
|
How to statically check with mypy that a dictionary has keys for all options in a Union type?
|
<p>I have a Union type <code>PossibleOption</code> composed of n dataclasses, and I want to create a dictionary whose keys are instances of these classes. I need to statically check with mypy that the dictionary includes a key for each option in the Union. Here's my current code:</p>
<pre><code>from dataclasses import dataclass
from typing import Union

@dataclass
class Option1:
    def __hash__(self):  # don't focus too much on this method; a dictionary just needs a hashable key
        return hash(type(self))

@dataclass
class Option2:
    def __hash__(self):
        return hash(type(self))

@dataclass
class Option3:
    def __hash__(self):
        return hash(type(self))

PossibleOption = Union[Option1, Option2, Option3]

# I want to create a dictionary like this:
a = {Option1(): "a", Option2(): "bb", Option3(): "cc"}
# print(a[Option2()]) prints "bb" for example
</code></pre>
<p>I know that I could ask the user to give a function taking PossibleOption as paratmer and returning the the value like that:</p>
<pre><code>def equivalent_to_dictionary(po: PossibleOption) -> str:
    match po:
        case Option1():
            return "aa"
        case Option2():
            return "bb"
        case Option3():
            return "cc"
</code></pre>
<p>and mypy can check that each option is covered, but I want to keep things simple for the user...</p>
<p>Is there a way to statically check with mypy that each option (Option1, Option2, Option3 in this particular case) is present as a key in the dictionary?</p>
|
<python><python-typing><mypy>
|
2024-11-27 13:24:35
| 0
| 2,017
|
Pierre-olivier Gendraud
|
79,230,437
| 4,105,601
|
Passing arguments to __enter__ when using a global variable
|
<p>I'm using a global variable to store some useful data that I need across the whole execution of my code.</p>
<p>It's something like this:</p>
<pre><code>config_object = None

def init_config(config_file=CONFIG_PATH):
    # some initialization code...
    global config_object
    config_object = U2DBLiveConfig(config_file)

def get_config():
    if not config_object:
        init_config()
    return config_object
</code></pre>
<p>I added <code>__enter__</code> and <code>__exit__</code> methods so I can set a flag when using the config object alongside a session object of another type:</p>
<pre><code>class U2DBLiveConfig:
    def __init__(self, config_file_path=CONFIG_PATH):
        self.__uopy_session_opened = False
        # rest of init code

    def __enter__(self):
        self.__uopy_session_opened = True
        return self

    def __exit__(self, exc_type, exc_val, exc_tb):
        self.__uopy_session_opened = False
</code></pre>
<p>I use both objects in one with statement like this.</p>
<pre><code>with create_uopy_session() as uopy_session, get_config() as cfg:
    # here, the __uopy_session_opened flag is set to true
</code></pre>
<p>I'd like to pass the object <code>uopy_session</code> to the <code>__enter__</code> method, so it can be accessed from the <code>config_object</code> elsewhere.</p>
<p>Please note that <code>config_object</code> may have been created before calling the <code>with</code> statement, so I can't pass it to the <code>__init__</code>.</p>
<p>Is there a way to do something like this?</p>
<pre><code>class U2DBLiveConfig:
    def __init__(self, config_file_path=CONFIG_PATH):
        self.__uopy_session_opened = False
        self.__uopy_session = None
        # rest of init code

    def __enter__(self, **kwargs):
        self.__uopy_session_opened = True
        self.__uopy_session = kwargs['uopy_session']
        return self

    def __exit__(self, exc_type, exc_val, exc_tb):
        self.__uopy_session_opened = False
        self.__uopy_session = None
</code></pre>
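<p>For reference, the context-manager protocol always calls <code>__enter__()</code> with no arguments, so extra data can't be passed that way; the usual workaround is a separate parameterized method decorated with <code>contextlib.contextmanager</code>. A sketch (class and attribute names are illustrative stand-ins, not the actual <code>U2DBLiveConfig</code> code):</p>

```python
from contextlib import contextmanager

class Config:
    def __init__(self):
        self._session = None

    @contextmanager
    def using_session(self, session):
        # behaves like a parameterized __enter__/__exit__ pair
        self._session = session
        try:
            yield self
        finally:
            self._session = None

config = Config()
with config.using_session("some-session") as cfg:
    print(cfg._session)   # some-session
print(config._session)    # None
```

<p>With this, <code>with create_uopy_session() as s, config.using_session(s) as cfg:</code> works, since later context managers in one <code>with</code> statement can refer to the as-names of earlier ones.</p>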
|
<python>
|
2024-11-27 13:15:57
| 2
| 507
|
Héctor C.
|
79,230,394
| 11,159,734
|
How to connect to an Azure Postgres DB that requires ssl using asyncpg
|
<p>I'm migrating an application from Flask to FastAPI, and with it the database connection from <code>psycopg2</code> to <code>asyncpg</code>. In the old implementation the database URL was defined like this: <code>SQLALCHEMY_DATABASE_URI: str = f"postgresql+psycopg2://{DB_USER}:{DB_PASSWD}@{DB_HOST}:5432/{DB_NAME}?sslmode=require"</code></p>
<p>When switching to <code>asyncpg</code> I had to omit the <code>sslmode</code> argument as apparently this is not supported. Now the connection url looks like this: <code>SQLALCHEMY_DATABASE_URI: str = f"postgresql+asyncpg://{DB_USER}:{DB_PASSWD}@{DB_HOST}:5432/{DB_NAME}"</code></p>
<p>The problem is that the connection now ends in a timeout error, as Azure refuses the connection attempt due to the SSL requirement.</p>
<p>My code to connect to the database is as follows:</p>
<pre><code>from sqlmodel import SQLModel
from sqlmodel.ext.asyncio.session import AsyncSession
from sqlalchemy.ext.asyncio import AsyncEngine, create_async_engine
from sqlalchemy.orm import sessionmaker
from typing import AsyncGenerator

from config import config
from utils.logging import logger

engine: AsyncEngine = create_async_engine(
    url=config.SQLALCHEMY_DATABASE_URI,
    echo=config.DB_ECHO,
    pool_pre_ping=True,
    pool_size=20,
    max_overflow=10
)

async_session_maker = sessionmaker(
    bind=engine,
    class_=AsyncSession,
    expire_on_commit=False,
    autocommit=False,
    autoflush=False,
)

async def get_session() -> AsyncGenerator[AsyncSession, None]:
    """Yield a database session"""
    logger.debug(f"DB url: {config.SQLALCHEMY_DATABASE_URI}")
    async with async_session_maker() as session:
        try:
            yield session
            await session.commit()
        except Exception as e:
            await session.rollback()
            raise
</code></pre>
<p>In the old version, simply setting the flag <code>sslmode=require</code> was enough to make it work without issues in Azure. Now, based on <a href="https://github.com/sqlalchemy/sqlalchemy/issues/6275" rel="nofollow noreferrer">this GitHub issue</a>, it seems the only way to make this work is by providing the actual SSL certificate like this:</p>
<pre><code># Set up SSL context
ssl_context = ssl.create_default_context(ssl.Purpose.CLIENT_AUTH)
ssl_context.load_cert_chain(certfile=config.SSL_CERT_FILE, keyfile=config.SSL_KEY_FILE)

# SQLAlchemy URL, now with SSL parameters
SQLALCHEMY_DATABASE_URI: str = f"postgresql+asyncpg://{DB_USER}:{DB_PASSWD}@{DB_HOST}:5432/{DB_NAME}"

# Create engine with SSL context
engine = create_async_engine(
    url=SQLALCHEMY_DATABASE_URI,
    echo=config.DB_ECHO,
    pool_pre_ping=True,
    pool_size=20,
    max_overflow=10,
    connect_args={"ssl": ssl_context}  # Pass SSL context
)
</code></pre>
<p>However, I don't have access to the SSL certificate, and I need a way to connect to this Azure DB asynchronously without providing the certificate as a file. How can this be done?</p>
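<p>One hedged observation: the GitHub snippet above builds its context with <code>ssl.Purpose.CLIENT_AUTH</code>, which is intended for servers authenticating clients. A client verifying an Azure server only needs a default context (system CA bundle, no client cert/key file), which can then be passed through <code>connect_args</code> as in that snippet. A sketch:</p>

```python
import ssl

# Default client-side context: verifies the server certificate against the
# system CA bundle; no certfile/keyfile is needed on our side.
ssl_context = ssl.create_default_context(ssl.Purpose.SERVER_AUTH)
print(ssl_context.verify_mode == ssl.CERT_REQUIRED)  # True

# Then, as in the snippet above but without load_cert_chain:
# engine = create_async_engine(url, connect_args={"ssl": ssl_context}, ...)
```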
|
<python><postgresql><fastapi><asyncpg>
|
2024-11-27 13:05:08
| 1
| 1,025
|
Daniel
|
79,230,339
| 1,750,612
|
PyWebIO: stream logger to output
|
<p>I'm writing a simple PyWebIO setup and am wondering whether I can link an instance of <code>pywebio.output.put_scrollable</code> with the python <code>logging</code> module. I have a logger set up, and would love to be able to stream the logs directly to a pywebio output box.</p>
<p>Is it possible to stream to a pywebio output block, or is it only capable of rendering static text?</p>
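<p>For what it's worth, the <code>logging</code> module can forward records to any callable via a small <code>Handler</code> subclass, which is the usual bridge to a UI widget; the PyWebIO side would be a function that appends a line to an output scope (that part is an assumption, not tested here). A framework-agnostic sketch:</p>

```python
import logging

class CallbackHandler(logging.Handler):
    """Forward each formatted log record to a sink callable, e.g. a
    function that appends a line to a PyWebIO output scope."""
    def __init__(self, sink):
        super().__init__()
        self.sink = sink

    def emit(self, record):
        self.sink(self.format(record))

lines = []
logger = logging.getLogger("webui-demo")
logger.setLevel(logging.INFO)
logger.addHandler(CallbackHandler(lines.append))
logger.info("hello")
print(lines)  # ['hello']
```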
|
<python><logging><stream>
|
2024-11-27 12:46:37
| 0
| 359
|
MikeFenton
|
79,230,332
| 8,384,910
|
Insert image with border in Plotly
|
<p>To <a href="https://stackoverflow.com/a/70056977/8384910">create a plotly scatter plot with images at each point</a>, we can do:</p>
<pre class="lang-py prettyprint-override"><code>import plotly.express as px
from PIL import Image

xs = [0, 1, 2]
ys = [0, 1, 4]
images = [Image.open("a.png"), Image.open("b.png"), Image.open("c.png")]
figure = px.scatter(x=xs, y=ys)
size = 2
for x, y, image in zip(xs, ys, images):
    figure.add_layout_image(
        x=x,
        y=y,
        source=image,
        xref="x",
        yref="y",
        sizex=size,
        sizey=size,
        xanchor="center",
        yanchor="middle",
    )
</code></pre>
<p>How can I add a border around each inserted image without baking it into the image itself?</p>
<p>I suppose <code>figure.add_shape</code> could be used to add a rectangle as the border, something like:</p>
<pre class="lang-py prettyprint-override"><code>figure.add_shape(
    type="rect",
    x0=x - size / 2,
    y0=y - size / 2,
    x1=x + size / 2,
    y1=y + size / 2,
    line=dict(
        color="red",
        width=2,
    ),
    opacity=0.5,
)
</code></pre>
<p>except that this draws the box in the wrong location.</p>
<p>I would like to use this question to document the correct parameters for positioning the box to be along the border of the image.</p>
|
<python><plotly>
|
2024-11-27 12:44:52
| 2
| 9,414
|
Richie Bendall
|
79,230,326
| 2,155,362
|
How to install psycopg2 on Python 3.9?
|
<p>I'm trying to install psycopg2 on Python 3.9 with the following command:</p>
<pre><code>pip install psycopg2
</code></pre>
<p>and I got error messages like the below:</p>
<pre><code> Installing build dependencies ... done
Getting requirements to build wheel ... error
error: subprocess-exited-with-error
× Getting requirements to build wheel did not run successfully.
│ exit code: 1
╰─> [21 lines of output]
running egg_info
writing psycopg2.egg-info\PKG-INFO
writing dependency_links to psycopg2.egg-info\dependency_links.txt
writing top-level names to psycopg2.egg-info\top_level.txt
Error: pg_config executable not found.
pg_config is required to build psycopg2 from source. Please add the directory
containing pg_config to the $PATH or specify the full executable path with the
option:
python setup.py build_ext --pg-config /path/to/pg_config build ...
or with the pg_config option in 'setup.cfg'.
If you prefer to avoid building psycopg2 from source, please install the PyPI
'psycopg2-binary' package instead.
For further information please check the 'doc/src/install.rst' file (also at
<https://www.psycopg.org/docs/install.html>).
[end of output]
note: This error originates from a subprocess, and is likely not a problem with pip.
error: subprocess-exited-with-error
× Getting requirements to build wheel did not run successfully.
│ exit code: 1
╰─> See above for output.
</code></pre>
<p>My OS is Windows 11 Professional. How can I solve this problem?</p>
|
<python><psycopg2>
|
2024-11-27 12:43:49
| 1
| 1,713
|
user2155362
|
79,230,202
| 11,790,979
|
I changed my directory name and now my venv isn't working
|
<p>I changed my directory name from a test/development name to a proper name for release (changed to "project" for this question because the actual name is rather long).</p>
<p>Now when I activate my venv, I get the following:</p>
<pre><code>(virtualEnv) PS ~\Desktop\dev\vbvarsel> pip --list
Fatal error in launcher: Unable to create process using '"~\Desktop\dev\project_0.0.1\virtualEnv\Scripts\python.exe" "~\Desktop\dev\project\virtualEnv\Scripts\pip.exe" --list': The system cannot find the file specified.
</code></pre>
<p>I went into my pyvenv.cfg and the activate scripts to modify the path to the venv's python and pip exes (the correct path is <code>~\Desktop\dev\project\virtualEnv</code>), but it doesn't seem to register. The error comes from [<code>~\Desktop\dev\project_0.0.1\virtualEnv\Scripts\python.exe</code>], which points at the wrong directory, and I don't know where that path is being read from. It is not a system variable, and searching the files in this directory yields no results. Have I completely borked it and need to start from scratch with my virtual environment?</p>
<p>I am on Windows 11, using VSCode, if that helps.</p>
|
<python><python-venv>
|
2024-11-27 12:08:11
| 2
| 713
|
nos codemos
|
79,230,163
| 446,137
|
pyarrow parquet does not round trip for a very simple example
|
<pre><code>import numpy as np
import pandas as pd
import pyarrow as pa
import pyarrow.parquet as pq

table = pa.Table.from_pandas(pd.DataFrame(data={'a': np.arange(100)}))
with open('example.parquet', 'wb') as f:
    pq.write_table(table, f)
with open('example.parquet', 'rb') as ff:
    read_table = pq.read_table(ff)
</code></pre>
<p>then</p>
<pre><code>table
</code></pre>
<p>returns</p>
<pre><code>pyarrow.Table
a: int64
----
a: [[0,1,2,3,4,...,95,96,97,98,99]]
</code></pre>
<p>and</p>
<pre><code>read_table
</code></pre>
<p>returns</p>
<pre><code>pyarrow.Table
a: int64
----
a: [[0,1,2,3,4,...,95,96,97,98,99]]
</code></pre>
<p>but</p>
<pre><code>table['a'].nbytes
</code></pre>
<p>is</p>
<pre><code>800
</code></pre>
<p>and</p>
<pre><code>read_table['a'].nbytes
</code></pre>
<p>is</p>
<pre><code>813
</code></pre>
<p>The tables and their schemas both compare as equal.</p>
<p>How can I explain the discrepancy? The context is that I'm trying to compute a hash of two pyarrow tables, and that hash should be stable across writing to and reading from parquet.</p>
|
<python><hash><parquet><pyarrow>
|
2024-11-27 11:54:45
| 0
| 1,790
|
Hans
|
79,230,082
| 633,001
|
Shift data in a holoviews overlay
|
<p>My code creates an overlay by multiplying multiple <code>hv.Curve()</code> elements together. The issue is that this takes quite some time (in total something like 5000 lines), around a minute, and it's for a web interface, so waiting that long is just not good.</p>
<p>But: the grid it creates is always the same, except shifted along the X axis (Y is the sensor value, X is time, and as the data was acquired at different points in time, the X offset varies between samples).</p>
<p>Is it possible to create an X=0-based overlay and then just shift it to new coordinates? I haven't found anything like this yet, but I can't imagine it being hard to manually set the X coordinates of the overlay, or to call a function to set an offset, but I have not been able to find one.</p>
<pre><code>import numpy as np
import holoviews as hv
hv.extension('bokeh')
points_1 = [(0.1*i, np.sin(0.1*i)) for i in range(100)]
points_2 = [(0.1*i, np.sin(0.1*i) + np.cos(0.2*i)) for i in range(100)]
curve_1 = hv.Curve(points_1)
curve_2 = hv.Curve(points_2)
overlay = curve_1 * curve_2
# overlay = overlay + 3 (what to do here to shift the graph 3 units to the right?)
overlay
</code></pre>
|
<python><holoviews>
|
2024-11-27 11:32:50
| 0
| 3,519
|
SinisterMJ
|
79,229,946
| 13,392,257
|
Failed to create new KafkaAdminClient in Docker
|
<p>I want to create kafka-topic in docker container and send message from my python code outside docker container to this topic</p>
<p>My actions:</p>
<p>1. Read this article: <a href="https://www.confluent.io/blog/kafka-listeners-explained/" rel="nofollow noreferrer">https://www.confluent.io/blog/kafka-listeners-explained/</a></p>
<p>2. Created the Kafka infrastructure in Docker:</p>
<pre><code>version: '2'
services:
zookeeper:
image: "confluentinc/cp-zookeeper:5.2.1"
environment:
ZOOKEEPER_CLIENT_PORT: 2181
ZOOKEEPER_TICK_TIME: 2000
# This has three listeners you can experiment with.
# BOB for internal traffic on the Docker network
# FRED for traffic from the Docker-host machine (`localhost`)
# ALICE for traffic from outside, reaching the Docker host on the DNS name `never-gonna-give-you-up`
# Use
kafka0:
image: "confluentinc/cp-enterprise-kafka:5.2.1"
ports:
- '9092:9092'
- '29094:29094'
depends_on:
- zookeeper
environment:
KAFKA_BROKER_ID: 0
KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
KAFKA_LISTENERS: LISTENER_BOB://kafka0:29092,LISTENER_FRED://kafka0:9092,LISTENER_ALICE://kafka0:29094
KAFKA_ADVERTISED_LISTENERS: LISTENER_BOB://kafka0:29092,LISTENER_FRED://localhost:9092,LISTENER_ALICE://never-gonna-give-you-up:29094
KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: LISTENER_BOB:PLAINTEXT,LISTENER_FRED:PLAINTEXT,LISTENER_ALICE:PLAINTEXT
KAFKA_INTER_BROKER_LISTENER_NAME: LISTENER_BOB
KAFKA_AUTO_CREATE_TOPICS_ENABLE: "false"
KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
KAFKA_GROUP_INITIAL_REBALANCE_DELAY_MS: 100
</code></pre>
<p>3. Created a Python script to send messages to the topic:</p>
<pre><code>import asyncio
from aiokafka import AIOKafkaProducer
async def send_to_kafka():
producer = AIOKafkaProducer(
bootstrap_servers='localhost:9092',
enable_idempotence=True)
await producer.start()
try:
await producer.send_and_wait("my-topic-1", b"Super message")
finally:
await producer.stop()
asyncio.run(send_to_kafka())
</code></pre>
<p>I have an error:</p>
<pre><code>Topic my-topic-1 not found in cluster metadata
</code></pre>
<p>I was trying to create a topic in docker-container</p>
<pre><code>docker exec -ti fetcher-service-kafka0-1 bash
kafka-topics --bootstrap-server kafka0:29092 --create --if-not-exists --topic my-topic-1 --replication-factor 1 --partitions 1
[2024-11-27 12:11:11,237] WARN Couldn't resolve server kafka0:29092 from bootstrap.servers as DNS resolution failed for kafka0 (org.apache.kafka.clients.ClientUtils)
fetcher-service-init-kafka-1 | Exception in thread "main" org.apache.kafka.common.KafkaException: Failed to create new KafkaAdminClient
fetcher-service-init-kafka-1 | at org.apache.kafka.clients.admin.KafkaAdminClient.createInternal(KafkaAdminClient.java:386)
fetcher-service-init-kafka-1 | at org.apache.kafka.clients.admin.AdminClient.create(AdminClient.java:55)
fetcher-service-init-kafka-1 | at kafka.admin.TopicCommand$AdminClientTopicService$.createAdminClient(TopicCommand.scala:150)
fetcher-service-init-kafka-1 | at kafka.admin.TopicCommand$AdminClientTopicService$.apply(TopicCommand.scala:154)
fetcher-service-init-kafka-1 | at kafka.admin.TopicCommand$.main(TopicCommand.scala:55)
fetcher-service-init-kafka-1 | at kafka.admin.TopicCommand.main(TopicCommand.scala)
fetcher-service-init-kafka-1 | Caused by: org.apache.kafka.common.config.ConfigException: No resolvable bootstrap urls given in bootstrap.servers
fetcher-service-init-kafka-1 | at org.apache.kafka.clients.ClientUtils.parseAndValidateAddresses(ClientUtils.java:90)
fetcher-service-init-kafka-1 | at org.apache.kafka.clients.ClientUtils.parseAndValidateAddresses(ClientUtils.java:49)
fetcher-service-init-kafka-1 | at org.apache.kafka.clients.admin.KafkaAdminClient.createInternal(KafkaAdminClient.java:346)
</code></pre>
<p>How can I fix these errors?</p>
|
<python><docker><apache-kafka>
|
2024-11-27 10:53:59
| 0
| 1,708
|
mascai
|
79,229,740
| 955,273
|
How to nest dask.delayed functions within other dask.delayed functions
|
<p>I am trying to learn dask, and have created the following toy example of a delayed pipeline.</p>
<pre><code>+-----+ +-----+ +-----+
| baz +--+ bar +--+ foo |
+-----+ +-----+ +-----+
</code></pre>
<p>So <code>baz</code> has a dependency on <code>bar</code> which in turn has a dependency on <code>foo</code></p>
<p>I would like all 3 to be <code>delayed</code> tasks, so as each gets executed, they show up as individual tasks in the task pipeline on the dashboard.</p>
<p>My code is as follows:</p>
<pre class="lang-py prettyprint-override"><code>import dask
import distributed
@dask.delayed
def foo(data:str) -> str:
return f'foo[{data}]'
@dask.delayed
def bar(data:str) -> str:
return f'bar[{data}]'
@dask.delayed
def baz(data:str) -> str:
f = foo(data)
b = bar(f)
return f'baz[{b}]'
baz_task = baz('hello world')
client = distributed.Client()
future = client.compute(baz_task)
result = future.result()
print(result)
</code></pre>
<p>This doesn't work, as the output of <code>f = foo(data)</code> and <code>b = bar(f)</code> are both <code>Delayed</code> objects.</p>
<p>As such, the <code>result</code> returned from computing <code>baz_task</code> is</p>
<pre><code>baz[Delayed('bar-4c8a0ec0-1d87-43e7-a8af-fee33a9fae3d')]
</code></pre>
<p>Obviously my desired output is:</p>
<pre><code>baz[bar[foo[hello world]]]
</code></pre>
<p>Things I've tried:</p>
<p><em>removing the <code>@dask.delayed</code> decorator from <code>foo</code> and <code>bar</code></em>:</p>
<p>This results in there only being a single task in the task graph, and so one can't follow process on the dashboard.</p>
<p><em>calling the <code>Delayed</code> object inside the body of <code>baz</code></em>:</p>
<p>eg:</p>
<pre class="lang-py prettyprint-override"><code> f = foo(data)()
^^---- # calling the Delayed object
</code></pre>
<p>This returns a delayed <code>apply</code>, whatever that is...</p>
<p><code>Delayed('apply-145901ce-1271-4387-8f12-4640c9fcd23a')</code></p>
<p>As far as I can tell, my code follows the <a href="https://docs.dask.org/en/stable/delayed.html#decorator" rel="nofollow noreferrer">decorator example</a> pretty closely in the docs.</p>
<p>What am I doing wrong?</p>
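For what it's worth, one pattern that does produce three separate tasks (a sketch of the usual approach, not necessarily the only fix) is to keep all three functions decorated but do the composition outside any delayed body, so each call becomes its own node in the graph:

```python
import dask

@dask.delayed
def foo(data: str) -> str:
    return f'foo[{data}]'

@dask.delayed
def bar(data: str) -> str:
    return f'bar[{data}]'

@dask.delayed
def baz(data: str) -> str:
    return f'baz[{data}]'

# passing one Delayed into the next records the dependency in the graph;
# dask materializes the upstream result before calling the downstream task
task = baz(bar(foo('hello world')))
result = task.compute(scheduler='synchronous')
assert result == 'baz[bar[foo[hello world]]]'
```

With a distributed Client, `client.compute(task)` should then show foo, bar and baz as individual tasks on the dashboard.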
|
<python><dask><dask-distributed><dask-delayed>
|
2024-11-27 09:54:56
| 1
| 28,956
|
Steve Lorimer
|
79,229,678
| 270,043
|
Optimize PySpark code on large dataframe that exceeds cluster resources
|
<p>I have a large PySpark dataframe containing 250 million rows, with only 2 columns. I am running the minHash code found <a href="https://towardsdatascience.com/scalable-jaccard-similarity-using-minhash-and-spark-85d00a007c5e" rel="nofollow noreferrer">here</a>. I tried to write the resulting dataframe to parquet files by <code>adj_sdf.write.mode("append").parquet("/output/folder/")</code>. However, I kept getting the error <code>pod ephemeral local storage usage exceeds the total limit of containers</code>. I can't increase the cluster resources, so I'm wondering if there are ways to optimize the PySpark code instead.</p>
<p>So far, I've done the following:</p>
<ol>
<li>Partition the dataframe before running the minHash function: <code>sdf = sdf.repartition(200)</code></li>
<li>Filter out pairs that are unlikely to share many hash values before the final step that involves two joins (<code>hash_sdf.alias('a').join(...)</code>):
<code>filtered_sdf = hash_sdf.filter(f.size(f.col('nodeSet')) > threshold)</code>, where <code>threshold = int(0.2 * n_draws)</code></li>
<li>Set the number of shuffle partitions: <code>spark.conf.set("spark.sql.shuffle.partitions", "200")</code></li>
</ol>
<p>What else can I do to write the dataframe to parquet files without running into resource problems?</p>
|
<python><dataframe><pyspark><optimization><parquet>
|
2024-11-27 09:39:42
| 2
| 15,187
|
Rayne
|
79,229,633
| 7,652,266
|
An iterative plot is not drawn in jupyter notebook while python works fine. Why?
|
<p>I have written iterative plotting code in Python using plt.draw(). It works fine in the plain python interpreter and also in ipython, but it does not work in jupyter notebook. I am running Python in a virtualenv on Debian. The code is:</p>
<pre><code>import matplotlib.pyplot as plt
import numpy as np
N=5
x = np.arange(-3, 3, 0.01)
fig = plt.figure()
ax = fig.add_subplot(111)
for n in range(N):
y = np.sin(np.pi*x*n)
line, = ax.plot(x, y)
plt.draw()
plt.pause(0.5)
line.remove()
</code></pre>
<p>This works fine in command line such as</p>
<blockquote>
<p>$ python a.py</p>
</blockquote>
<p>But in jupyter notebook, it plots the first figure and then repeats <Figure size ... 0 Axes> N-1 times, without drawing the new figures inside the axes. It looks like a jupyter problem, because the code runs fine from the command line. How can I fix this? Or would plotly work in jupyter? (N.B. I happened to find a jupyter community site and posted <a href="https://discourse.jupyter.org/t/iterative-code-is-not-working-in-jupyter-notebook-while-it-runs-in-command-line-interface/30457?u=fomightez" rel="nofollow noreferrer">there</a> also.)</p>
<p><a href="https://i.sstatic.net/pFHHTufg.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/pFHHTufg.png" alt="result from jupyter notebook" /></a></p>
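For reference, inline notebook backends render a figure once per cell, which is why draw()/pause() loops degrade into repeated <Figure ...> lines; an interactive backend (e.g. %matplotlib widget via the ipympl package - an assumption, it must be installed) combined with updating the line's data in place usually behaves better. A minimal sketch of the in-place update (using the headless Agg backend here only so it runs anywhere):

```python
import matplotlib
matplotlib.use("Agg")  # headless for this sketch; use %matplotlib widget in a notebook
import matplotlib.pyplot as plt
import numpy as np

N = 5
x = np.arange(-3, 3, 0.01)
fig, ax = plt.subplots()
line, = ax.plot(x, np.sin(np.pi * x * 0))

for n in range(N):
    line.set_ydata(np.sin(np.pi * x * n))  # mutate the artist instead of re-plotting
    fig.canvas.draw_idle()

assert np.allclose(line.get_ydata(), np.sin(np.pi * x * (N - 1)))
```

In a live notebook you would add a short sleep or `fig.canvas.flush_events()` between iterations so each frame is visible.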
|
<python><animation><jupyter-notebook>
|
2024-11-27 09:26:06
| 1
| 563
|
Joonho Park
|
79,229,604
| 1,172,907
|
How to mock a function in a django custom command?
|
<p>How can I mock foo in order to have it NOT called?</p>
<p>Here's my latest attempt:</p>
<pre class="lang-py prettyprint-override"><code>#~/django/myapp/management/commands/acme.py
def foo():
pass
class Command(BaseCommand):
def handle(self, *args, **options):
foo()
#~/django/myapp/tests/test.py
from django.core.management import call_command
@mock.patch('myapp.management.commands.acme.foo')
def test_command_output(self,mock_foo):
call_command('acme')
assert not mock_foo.called
</code></pre>
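Two things commonly bite here (hypotheses, since the full test layout isn't shown): a bare pytest function receives the mock as its first argument, so the `self` parameter belongs only in `unittest.TestCase` methods; and `foo` must be patched where it is looked up, i.e. `'myapp.management.commands.acme.foo'`. The patching mechanics themselves, in a self-contained sketch using a stand-in object instead of the real command module:

```python
from unittest import mock

class Acme:
    """Stand-in for the command module's namespace (hypothetical)."""
    @staticmethod
    def foo():
        return "real side effect"

    def handle(self):
        return Acme.foo()

# patch the attribute where handle() looks it up; the real foo never runs,
# the MagicMock records the call instead
with mock.patch.object(Acme, "foo") as mock_foo:
    Acme().handle()

assert mock_foo.called
```

In the real test the equivalent target string is `"myapp.management.commands.acme.foo"`, passed to `mock.patch` as in the question.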
|
<python><django><pytest>
|
2024-11-27 09:17:55
| 0
| 605
|
jjk
|
79,229,523
| 2,552,713
|
missing xaxis ticks with plotly
|
<p>I have a pandas dataframe with hourly data shown in 7 subplots for a weeks data.</p>
<pre><code>fig = make_subplots(
rows=7,
cols=1,
subplot_titles=subPlotNames,
shared_xaxes=True,
shared_yaxes='all'
)
</code></pre>
<p>My x-axis shows ticks at 0, 5, 10, 15, and 20 hours at the bottom of the page.
I have added the following update_layout statement to get hourly ticks:</p>
<pre><code>fig.update_layout(
barmode = 'group',
xaxis=dict(
tickmode='linear',
tick0=0,
dtick=23,
tickvals=np.arange(0, 23, 1),
ticktext = [f"{i}u" for i in np.arange(0, 23, 1)]
)
)
</code></pre>
<p>However, the statement does not add the x-ticks in the graph. How can I add the ticks on the x-axis?</p>
<p>I am running python 3.13 with dash 2.18.2 and plotly 5.24.1 on an ubuntu 24.10 machine.</p>
|
<python><plotly><xticks>
|
2024-11-27 08:56:35
| 0
| 519
|
juerg
|
79,229,490
| 2,781,031
|
Fine tuning Gemma model that was downloaded from ollama
|
<p>I am new to running models locally. I was very happy that I could run a gemma2b model locally which I pulled using ollama. I am using (or loading?) this local model in my python application by using chatOllama as below</p>
<pre><code>llm = ChatOllama(model="gemma2")
</code></pre>
<p>But now I came to a stage where I want to fine tune this locally running model. I can see the model is saved in <code>~/.ollama/models/blobs</code> as sha256.</p>
<p>But I understood from various reddit posts, chatgpt and general web browsing that I cannot fine-tune the model with ollama; instead I should fine-tune it outside of ollama and then import it back into ollama to run inference with the fine-tuned model. Is this understanding correct?</p>
<p>If yes, how do I actually take the model out of ollama to train it and put it back? I understand GGUF is a model format that I can train, but how do I get the GGUF for the Gemma2 that is stored locally (in my case)?</p>
<p>I am planning to use LoRA to train as I understood from some articles that it is efficient and uses fewer resources.</p>
|
<python><ollama><gemma>
|
2024-11-27 08:43:33
| 0
| 5,006
|
Vasanth Nag K V
|
79,229,426
| 3,373,967
|
Get absolute hyperlink from excel via python script
|
<p>I am using <code>openpyxl</code> to process information from an Excel spreadsheet. The spreadsheet lives in Microsoft SharePoint; when a hyperlink pointing within the same SharePoint is added, it is automatically translated to a relative URL when I click the "save" button.</p>
<p>If I copy this spreadsheet to some other PC location, the relative hyperlinks simply cannot work. Microsoft provides a way to solve this in <a href="https://support.microsoft.com/en-us/office/work-with-links-in-excel-7fc80d8d-68f9-482f-ab01-584c44d72b3e" rel="nofollow noreferrer">Help > Set the base address for the links in a workbook</a>. But <code>openpyxl</code> seems to provide no method to retrieve this <code>base url address</code>.</p>
<p>I don't want to hard-code it in the Python script because I need to process different Excel sheets. Is there a way for the Python script to translate the relative URL to an absolute URL? Or alternatively, is there a way in Excel itself to avoid storing the URL in relative form?</p>
<p>The code snippet to retrieve URL is as the following</p>
<pre class="lang-py prettyprint-override"><code>wb = load_workbook(file, data_only=True)
sheet = wb[sheet_name]
# Extract the hyperlinks
hyperlinks = {}
for row in sheet.iter_rows():
for cell in row:
if cell.hyperlink:
hyperlinks[cell.coordinate] = cell.hyperlink.target # here the link might be relative
</code></pre>
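As far as I can tell openpyxl indeed exposes no accessor for the workbook's hyperlink base, so one practical route - a sketch assuming you can supply the base URL per workbook yourself (the SharePoint URL below is hypothetical) - is to resolve relative targets with the standard library's urljoin:

```python
from urllib.parse import urljoin

# hypothetical SharePoint location of the workbook (supply per spreadsheet)
BASE = "https://contoso.sharepoint.com/sites/team/docs/"

def absolutize(target: str, base: str = BASE) -> str:
    """Resolve a possibly-relative hyperlink target against a base URL."""
    return urljoin(base, target)

assert absolutize("../other/file.xlsx") == \
    "https://contoso.sharepoint.com/sites/team/other/file.xlsx"
# already-absolute targets pass through untouched
assert absolutize("https://example.com/x") == "https://example.com/x"
```

In the loop from the question, `hyperlinks[cell.coordinate] = absolutize(cell.hyperlink.target)` would then store absolute URLs.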
|
<python><excel><hyperlink><openpyxl>
|
2024-11-27 08:16:23
| 1
| 973
|
Eric Sun
|
79,229,354
| 1,473,517
|
How best to store millions of pairs of 64 bit ints in a set?
|
<p>I have millions of pairs of 64 bit integers I want to put into a set. Speed and space are important to me.</p>
<p>I could put tuples of python ints into the set but python ints use a lot more than 64 bits each. numpy has the type uint64 but I can’t add numpy arrays to a set afaik.</p>
<p>I would really like the RAM usage of storing one million pairs of ints to be as close to 128 million bits as possible.</p>
<p>I will be performing a lot of set add and query operations which I need to be fast.</p>
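One common trick (a sketch - it keeps ordinary set semantics, though note a Python set of ints still carries per-entry overhead well above 128 bits) is to pack each pair into a single 128-bit integer, halving the object count versus tuples:

```python
MASK64 = (1 << 64) - 1

def pack(a: int, b: int) -> int:
    """Combine two unsigned 64-bit ints into one 128-bit key."""
    return (a << 64) | (b & MASK64)

def unpack(key: int) -> tuple[int, int]:
    return key >> 64, key & MASK64

pairs = {pack(a, b) for a, b in [(1, 2), (2**63, 7), (0, 0)]}
assert pack(2**63, 7) in pairs           # O(1) membership, one object per pair
assert unpack(pack(2**63, 7)) == (2**63, 7)
```

If RAM genuinely has to approach 128 bits per pair, a sorted numpy structured array queried with `searchsorted` is tighter still, at the cost of much slower incremental inserts.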
|
<python><numpy>
|
2024-11-27 07:47:50
| 3
| 21,513
|
Simd
|
79,229,227
| 6,212,530
|
How to annotate type of Manager().from_queryset()?
|
<p>In Django I have custom <code>QuerySet</code> and <code>Manager</code>:</p>
<pre class="lang-py prettyprint-override"><code>from django.db import models
class CustomQuerySet(models.QuerySet):
def live(self):
return self.filter(is_draft=False)
class CustomManager(models.Manager):
def publish(self, instance: "MyModel"):
instance.is_draft = False
instance.save()
</code></pre>
<p>In my model I want to use both, so I use <code>from_queryset</code> method:</p>
<pre class="lang-py prettyprint-override"><code>class MyModel(models.Model):
objects: CustomManager = CustomManager().from_queryset(CustomQuerySet)()
is_draft = models.BooleanField(blank=True, default=True)
</code></pre>
<p>Since I annotated <code>objects</code> as <code>CustomManager</code>, Pylance (via vscode) logically yells at me that <code>MyModel.objects.live()</code> is wrong, due to <code>Cannot access attribute "live" for class "CustomManager" Attribute "live" is unknown</code>.</p>
<p>Removing type annotation leads to similiar complaint: <code>Cannot access attribute "live" for class "BaseManager[MyModel]" Attribute "live" is unknown</code>.</p>
<p>How to annotate <code>objects</code> in <code>MyModel</code> so Pylance will be aware that <code>objects</code> also has <code>CustomQuerySet</code> methods available, not only <code>CustomManager</code> methods?</p>
<hr />
<p>Looking at <a href="https://github.com/django/django/blob/d4b2e06a67c2e1458305c3eac6c4b2b3e917daf9/django/db/models/manager.py#L108" rel="nofollow noreferrer">Django's source</a> <code>from_queryset</code> constructs a new subclass of <code>CustomManager</code> by iterating through <code>CustomQuerySet</code> methods:</p>
<pre class="lang-py prettyprint-override"><code>@classmethod
def from_queryset(cls, queryset_class, class_name=None):
if class_name is None:
class_name = "%sFrom%s" % (cls.__name__, queryset_class.__name__)
return type(
class_name,
(cls,),
{
"_queryset_class": queryset_class,
**cls._get_queryset_methods(queryset_class),
},
)
</code></pre>
<p>So as @chepner pointed out in his comment we get a <em>structural</em> subtype of <code>CustomManager</code>, whose <code>_queryset_class</code> attribute is <code>CustomQuerySet</code>. So the question fundamentally is: <strong>how to type annotate that dynamically generated subclass in a way at least good enough for type-checker and autocomplete to work?</strong></p>
<p>Approaches that I have looked at so far are unsatisfactory:</p>
<ol>
<li>Django doesn't type annotate return of <code>from_queryset</code>.</li>
<li><a href="https://github.com/sbdchd/django-types/blob/d7c95baecbd0bef38092f8ab93d3462055df909f/django-stubs/db/models/manager.pyi#L27" rel="nofollow noreferrer">Django-types</a> annotates it as <code>type[Self]</code>, which lacks <code>CustomQuerySet</code> methods.</li>
<li>Same with <a href="https://github.com/typeddjango/django-stubs/blob/402b3a7002258cc602e96f0058a3a72266957307/django-stubs/db/models/manager.pyi#L27" rel="nofollow noreferrer">Django-stubs</a>.</li>
</ol>
|
<python><django><django-orm><python-typing><pyright>
|
2024-11-27 06:55:25
| 2
| 1,028
|
Matija Sirk
|
79,228,975
| 1,144,251
|
How to change timezone of chart in lightweight_chart_python
|
<p>Currently when I plot my chart, 9:30 AM shows as 13:30 by default; I think this is UTC time and not EST.</p>
<p>I'd like to plot the chart in EST. How do I do that? Sample code:</p>
<pre><code>from lightweight_charts import PolygonChart
if __name__ == '__main__':
chart = PolygonChart(
api_key='<API-KEY>',
num_bars=200,
limit=5000,
live=True
)
chart.show(block=True)
</code></pre>
<p><a href="https://i.sstatic.net/mQYQlpDs.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/mQYQlpDs.png" alt="enter image description here" /></a></p>
<p>I have tried:</p>
<pre><code> # First localize to UTC, then convert to EST
est = pytz.timezone('US/Eastern')
bar_data['time'].dt.tz_localize('UTC').dt.tz_convert(est)
chart.set(bar_data, True)
</code></pre>
<p>But it doesn't seem to do anything. I think my only solution is to actually modify the time and subtract 4 hours, but I'd like a proper solution instead of a hacky workaround.</p>
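One likely culprit (an assumption, since the surrounding code isn't shown): `tz_localize`/`tz_convert` return a new Series rather than modifying the column in place, so the result has to be assigned back before calling `chart.set()`. A minimal pandas sketch of the conversion:

```python
import pandas as pd

bar_data = pd.DataFrame({"time": pd.to_datetime(["2024-07-01 13:30",
                                                 "2024-07-01 14:30"])})
# tz methods return a *new* series -- assign the result back to the column
bar_data["time"] = (bar_data["time"]
                    .dt.tz_localize("UTC")
                    .dt.tz_convert("US/Eastern"))

assert bar_data["time"].dt.hour.tolist() == [9, 10]   # EDT = UTC-4 in July
```

Depending on how lightweight-charts serializes timestamps, you may also need to strip the tz info afterwards (`.dt.tz_localize(None)`) so the chart displays the converted wall-clock time rather than re-interpreting it as UTC.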
<p>Thanks!</p>
|
<python><tradingview-api><lightweight-charts>
|
2024-11-27 04:53:16
| 1
| 357
|
user1144251
|
79,228,802
| 2,016,157
|
Generic class type can't be assigned to a union of types in a for loop in python/pylance/mypy
|
<p>In pylance/pyright, the following code raises an err:</p>
<pre class="lang-py prettyprint-override"><code>from typing import TypeVar, Generic
from dataclasses import dataclass
class ParentType: ...
class ChildType(ParentType): ...
T = TypeVar("T", bound=ParentType)
@dataclass
class MyClass(Generic[T]):
a: T
mc1 = MyClass(ParentType())
mc2 = MyClass(ChildType())
def my_func(cls: MyClass[T]) -> T:
return cls.a
for i in (mc1, mc2):
my_func(i)
# ^
# Argument of type "MyClass[ParentType] | MyClass[ChildType]" cannot be assigned to parameter "cls" of type "MyClass[T@my_func]" in function "my_func"
# Type "MyClass[ParentType] | MyClass[ChildType]" is not assignable to type "MyClass[ParentType]"
# "MyClass[ChildType]" is not assignable to "MyClass[ParentType]"
# Type parameter "T@MyClass" is invariant, but "ChildType" is not the same as "ParentType"PylancereportArgumentType
# (variable) i: MyClass[ParentType] | MyClass[ChildType]
</code></pre>
<p>The issue is that <code>i</code> above becomes a union of <code>MyClass[ParentType] | MyClass[ChildType]</code>, and since my_func expects a <code>MyClass[ParentType]</code>, not a union <code>MyClass[ParentType] | MyClass[ChildType]</code>, it flags it.</p>
<p>If I don't have the choice of making <code>T</code> covariant, how would you address this issue? Is casting the only option? Should the type checker itself try and account for this scenario?</p>
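If T must stay invariant, one workaround (a sketch - pyright accepts it because the declared type drives inference at the constructor call) is to widen the child-typed instance at construction, so the tuple is homogeneous and the union never arises:

```python
from dataclasses import dataclass
from typing import Generic, TypeVar

class ParentType: ...
class ChildType(ParentType): ...

T = TypeVar("T", bound=ParentType)

@dataclass
class MyClass(Generic[T]):
    a: T

def my_func(cls: MyClass[T]) -> T:
    return cls.a

mc1 = MyClass(ParentType())
mc2: MyClass[ParentType] = MyClass(ChildType())  # solve T as ParentType up front

for i in (mc1, mc2):            # now tuple[MyClass[ParentType], ...]
    assert isinstance(my_func(i), ParentType)
```

The same effect is available at the call site with `MyClass[ParentType](ChildType())`; a `cast` works too but hides more from the checker.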
|
<python><python-typing><pyright>
|
2024-11-27 02:40:06
| 0
| 403
|
KevL
|
79,228,615
| 6,862,601
|
pip version inside virtual environment
|
<p>I am running a virtual environment:</p>
<pre><code>$ which python
/Users/myself/.venv/bin/python
$ which python3
/Users/myself/.venv/bin/python3
$ which pip
/Users/myself/.venv/bin/pip
</code></pre>
<p>To find out the version of pip, if I run <code>python -m pip --version</code>, I get a version number different from what <code>pip --version</code> shows.</p>
<pre><code>$ python3 -m pip --version
pip 24.3.1 from /Users/ramepadm/.venv/lib/python3.12/site-packages/pip (python 3.12)
$ python -m pip --version
pip 24.3.1 from /Users/ramepadm/.venv/lib/python3.12/site-packages/pip (python 3.12)
$ pip --version
pip 24.2 from /Users/ramepadm/.venv/lib/python3.13/site-packages/pip (python 3.13) <--- THIS
</code></pre>
<p>What could be the reason for this difference?</p>
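A quick way to see what is happening (a sketch; the usual cause is that the `pip` script found on PATH belongs to a different interpreter than `python`) is to compare what the shell resolves for each name:

```python
import shutil
import sys

python_path = sys.executable          # interpreter that `python -m pip` uses
pip_path = shutil.which("pip")        # first `pip` script found on PATH

print("python ->", python_path)
print("pip    ->", pip_path)

assert python_path                    # always set for a normal interpreter
# pip may be absent; when present, this is the script a bare `pip` runs
assert pip_path is None or isinstance(pip_path, str)
```

`python -m pip` is the reliable form: it always runs the pip installed for that exact interpreter, which is why it is the recommended way to invoke pip.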
|
<python><pip><virtualenv><pyenv-virtualenv>
|
2024-11-27 00:16:52
| 0
| 43,763
|
codeforester
|
79,228,428
| 20,302,906
|
Comma separated names list to parse into dictionary
|
<p>I'd like to parse a list of assignees into a dictionary field. Creating the dictionary is not a problem, since the value returned by Python's <code>re.match()</code> can be grouped into dictionary fields with <code>re.match().groupdict()</code>.</p>
<p>My regex <code>r'assignees:((\s\w+){2}[\s|,])+'</code> matches as expected with the following example input if I test it in this <a href="https://regex101.com" rel="nofollow noreferrer">website</a>:</p>
<p><em>Input:</em></p>
<p><code>assignees: Peter Jones, Michael Jackson, Tim Burton</code></p>
<p>However, the same regex pattern in Python only matches <code>Peter Jones</code>, resulting in:</p>
<p><code>assignees: Peter Jones,</code></p>
<p>My pattern is declared in a variable <code>p</code> in this way:</p>
<pre><code>p = re.compile(r'assignees:(?P<assignees>((\s\w+){2}[\s|,])+')
</code></pre>
<p>The final output I'm aiming for is:</p>
<pre><code>{'assignees': 'Peter Jones, Michael Jackson, Tim Burton'}
</code></pre>
<p>Can anyone point out what I'm doing wrong/missing here please?</p>
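For comparison, a variant that compiles and captures the whole list (a sketch - it keeps the `(\s\w+){2}` two-word structure, but makes the separator an optional comma so the last name needs no trailing character):

```python
import re

s = "assignees: Peter Jones, Michael Jackson, Tim Burton"

p = re.compile(r'assignees:(?P<assignees>((\s\w+){2},?)+)')
m = p.match(s)
result = {k: v.strip() for k, v in m.groupdict().items()}

assert result == {'assignees': 'Peter Jones, Michael Jackson, Tim Burton'}
```

The original `[\s|,]` class (which also matches a literal `|`) requires a separator after every two-word name, so the match cannot extend past the final name that has nothing following it.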
|
<python><regex>
|
2024-11-26 22:12:42
| 1
| 367
|
wavesinaroom
|
79,228,382
| 7,936,119
|
Nested Tuple conversion to List
|
<p>I have some very edgy behavior that I want to circumvent in python.
Imagine that you have such structure:</p>
<pre class="lang-py prettyprint-override"><code>something = {
"data":{"mytuple":(1,2,3)}
}
</code></pre>
<p>Now I would like to access that tuple; for that I can do the following:</p>
<pre class="lang-py prettyprint-override"><code>myGetter = something['data']['mytuple']
</code></pre>
<p>If I check that the data are the same, it is returning <code>True</code></p>
<pre class="lang-py prettyprint-override"><code>myGetter is something['data']['mytuple'] ## True
</code></pre>
<p>When I then try to convert that variable to a list, it works, but my nested data is not changed.<br />
I suspect this comes from memory allocation and the fact that the tuple is immutable.</p>
<pre class="lang-py prettyprint-override"><code>myGetter = list(myGetter)
type(myGetter) ## return list
type(something['data']['mytuple']) ## still tuple
myGetter is something['data']['mytuple'] ## now returns False
</code></pre>
<p>Is there any way to modify the variable I created so that the change is replicated to the nested structure?</p>
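Right - `list(myGetter)` builds a brand-new object, and rebinding the local name never touches the dict (the tuple itself is immutable, so it cannot be converted in place). The fix is to write the new list back into the structure, a one-line sketch:

```python
something = {"data": {"mytuple": (1, 2, 3)}}

# rebind the dict entry itself, not a local alias to its old value
something["data"]["mytuple"] = list(something["data"]["mytuple"])

assert type(something["data"]["mytuple"]) is list
assert something["data"]["mytuple"] == [1, 2, 3]
```

After that, any alias taken from `something["data"]["mytuple"]` refers to the same mutable list, so in-place mutations (e.g. `append`) are visible through the nested structure.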
|
<python><data-structures><tuples>
|
2024-11-26 21:53:27
| 0
| 370
|
Pitchkrak
|
79,228,345
| 1,718,989
|
Python using oracledb to connect to Oracle database with Pandas DataFrame Error: "pandas only supports SQLAlchemy connectable (engine/connection)"
|
<p>I am pretty new to Python and even newer to Pandas and am hoping for some guidance</p>
<p>My company has an on-prem DEV Oracle database that I am trying to connect to using Python & Pandas. After some searching I found that the Python package "oracledb" was recommended to be used.</p>
<p>Using VS code .IPYNB I have the following chunks of code that seem to work with no errors</p>
<pre><code># python -m pip install --upgrade pandas
import oracledb
import pandas as pd
from sqlalchemy import create_engine
connection = oracledb.connect(user="TEST", password="TESTING", dsn="TESTDB:1234/TEST")
print("Connected")
print(connection)
</code></pre>
<p>The above code seems to run just fine which is great</p>
<p>I then run the below code as a quick test</p>
<pre><code>cursor=connection.cursor()
query_test='select * from dm_cnf_date where rownum < 2'
for row in cursor.execute(query_test):
print(row)
</code></pre>
<p>This returns a tuple with a row of data. So far so good: it looks like I can connect to the database and run a query.</p>
<p>Next I wanted to get the data into a Pandas dataframe and this is where I got stuck</p>
<p>I tried this code</p>
<pre><code>df = pd.read_sql(sql=query_test, con=connection)
</code></pre>
<p>I then get hit with the following error:</p>
<blockquote>
<p>:1: UserWarning: pandas only supports
SQLAlchemy connectable (engine/connection) or database string URI or
sqlite3 DBAPI2 connection. Other DBAPI2 objects are not tested. Please
consider using SQLAlchemy. df = pd.read_sql(sql=query_test,
con=connection)</p>
</blockquote>
<p>I was loosely trying to follow this article ("Read data as pandas DataFrame"): <a href="https://kontext.tech/article/1019/python-read-data-from-oracle-database" rel="nofollow noreferrer">https://kontext.tech/article/1019/python-read-data-from-oracle-database</a></p>
<p>but it didn't seem to work.</p>
<p>I tried to take a look at the sqlalchemy site here: <a href="https://docs.sqlalchemy.org/en/20/dialects/oracle.html#module-sqlalchemy.dialects.oracle.oracledb" rel="nofollow noreferrer">https://docs.sqlalchemy.org/en/20/dialects/oracle.html#module-sqlalchemy.dialects.oracle.oracledb</a></p>
<p>Which I tried to rewrite my code a bit as follows</p>
<pre><code>conn_url="oracle+oracledb://TEST:TESTING@TESTDB:1234/TEST"
engine=create_engine(conn_url)
df = pd.read_sql(sql=query_test, con=engine)
</code></pre>
<p>And I get hit with another error</p>
<blockquote>
<p>OperationalError: DPY-6003: SID "TEST" is not
registered with the listener at host "TESTDB" port
1234. (Similar to ORA-12505)</p>
</blockquote>
<p>I'm just looking to connect to an Oracle DB and grab data into a Pandas dataframe, but I keep hitting a wall.</p>
<p>Any insight would be very much appreciated</p>
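Regarding the DPY-6003 error: in a SQLAlchemy URL the trailing <code>/TEST</code> is interpreted as a SID, while the working <code>oracledb.connect</code> DSN <code>TESTDB:1234/TEST</code> treats <code>TEST</code> as a service name. The SQLAlchemy oracledb dialect takes a service name via a query parameter instead - a sketch with the question's placeholder values (substitute real ones):

```python
# hypothetical connection details -- substitute your own
user, pwd, host, port, service = "TEST", "TESTING", "TESTDB", 1234, "TEST"

# '/NAME' in the URL path means SID; a service name goes in the query string
conn_url = (f"oracle+oracledb://{user}:{pwd}@{host}:{port}"
            f"/?service_name={service}")

assert conn_url == "oracle+oracledb://TEST:TESTING@TESTDB:1234/?service_name=TEST"
```

`pd.read_sql(query_test, con=create_engine(conn_url))` should then avoid both the pandas DBAPI warning and the ORA-12505-style SID error.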
|
<python><pandas><dataframe><sqlalchemy><python-oracledb>
|
2024-11-26 21:35:38
| 1
| 311
|
chilly8063
|
79,228,343
| 14,909,621
|
What do square brackets mean in Python function definitions?
|
<p>While watching a video about Python features, I came across the <a href="https://youtu.be/rc0tJaPleTg?list=PLbr8rVGhPD0WQgO97Ao67Q-QVuSbm_Zpz&t=406" rel="nofollow noreferrer">following code snippet</a>:</p>
<pre><code>def compose[_First, _Second, _Thirt](
first: Callable[[_First], _Second],
second: Callable[[_Second], _Third],
) -> Callable[[_First], _Third]:
return lambda x: second(first(x))
</code></pre>
<p>I’m curious about the syntax with square brackets right after the function name, specifically <code>[_First, _Second, _Third]</code>. What is this syntax called? When and how is it used in Python?</p>
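For readers on Python < 3.12, an equivalent spelling with explicit TypeVars (a runnable sketch of the same generic composition):

```python
from typing import Callable, TypeVar

_First = TypeVar("_First")
_Second = TypeVar("_Second")
_Third = TypeVar("_Third")

def compose(first: Callable[[_First], _Second],
            second: Callable[[_Second], _Third]) -> Callable[[_First], _Third]:
    return lambda x: second(first(x))

add_then_str = compose(lambda n: n + 1, str)
assert add_then_str(41) == "42"
```

The bracketed form in the question declares the same type variables inline, scoped to the function, without the module-level `TypeVar` boilerplate.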
<hr />
<p>P.S. Summary for future readers:</p>
<ol>
<li>This is called <strong>Type Parameter Syntax</strong></li>
<li>It was <a href="https://docs.python.org/3/whatsnew/3.12.html#pep-695-type-parameter-syntax" rel="nofollow noreferrer">introduced in Python 3.12</a> and is detailed in <a href="https://peps.python.org/pep-0695/#summary-examples" rel="nofollow noreferrer">PEP 695</a></li>
<li>A partial answer to the question is provided in <a href="https://stackoverflow.com/a/77218780/14909621">this article</a> on SO</li>
<li>This specific case is described in the <a href="https://docs.python.org/3/reference/compound_stmts.html#generic-functions" rel="nofollow noreferrer">documentation on generic functions</a></li>
</ol>
|
<python>
|
2024-11-26 21:35:19
| 0
| 7,606
|
Vitalizzare
|
79,228,027
| 16,092,023
|
pymongo.errors.ServerSelectionTimeoutError in AKS cluster
|
<p>We are using <code>mcr.microsoft.com/azurelinux/base/python:3.12.3-4-azl3.0.20241101-amd64</code> as our base image to build a Python-based application. When we run the application image in AKS, it fails to connect to Azure Cosmos DB with the error below. We checked the connectivity from the AKS container to the Cosmos Mongo DB and there is no connection error.</p>
<p>When we built the application base image from the Docker Hub python:3.12 image, the application connected to the same mongo instance without issues.</p>
<p>So is the problem with this azurelinux base image, or with the pymongo driver (pymongo==4.8.0) used in this base image?</p>
<p>Traceback (most recent call last):</p>
<pre><code> File "/usr/lib/python3.12/site-packages/pymongo/synchronous/topology.py", line 333, in _select_servers_loop
raise ServerSelectionTimeoutError(
pymongo.errors.ServerSelectionTimeoutError: mymongo.mongo.cosmos.azure.com:10255: [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: unable to get issuer certificate (_ssl.c:1000) (configured timeouts: socketTimeoutMS: 20000.0ms, connectTimeoutMS: 20000.0ms), Timeout: 30s, Topology Description: <TopologyDescription id: xxxxxxx, topology_type: ReplicaSetNoPrimary, servers: [<ServerDescription ('mymongo.mongo.cosmos.azure.com', 10255) server_type: Unknown, rtt: None, error=AutoReconnect('mymongo.mongo.cosmos.azure.com:10255: [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: unable to get issuer certificate (_ssl.c:1000) (configured timeouts: socketTimeoutMS: 20000.0ms, connectTimeoutMS: 20000.0ms)')>]>
</code></pre>
|
<python><pymongo><pymongo-3.x><azure-linux>
|
2024-11-26 19:35:21
| 0
| 1,551
|
Vowneee
|
79,228,017
| 1,406,168
|
Adding custom properties and metrics from an Azure Python Function App
|
<p>I have the function below. Logs are successfully added to Application Insights; however, customDimensions/properties are not. Any pointers? Also, the logger does not seem to support logging metrics. Don't you have the same options as in C#?</p>
<pre><code>import datetime
import logging
import azure.functions as func
app = func.FunctionApp()
@app.timer_trigger(schedule="0 0 10 * * *", arg_name="myTimer", run_on_startup=True, use_monitor=False)
def timer_trigger(myTimer: func.TimerRequest) -> None:
logging.info('Python timer trigger function executed.')
properties = {'custom_dimensions': {'key_1': 'value_1', 'key_2': 'value_2'}}
logging.warning('Python timer trigger function executed.', extra=properties)
</code></pre>
<p><a href="https://i.sstatic.net/8OOWhNTK.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/8OOWhNTK.png" alt="enter image description here" /></a></p>
|
<python><azure><azure-functions><azure-application-insights>
|
2024-11-26 19:32:03
| 1
| 5,363
|
Thomas Segato
|
79,227,950
| 5,777,697
|
Community Detection with both Node and Edge Weights
|
<p>I have a directed graph where there are importance or weight attributes for both the nodes and edges. I am looking for a community or module detection implementation in <code>python</code> that will consider both attribute types when calculating or optimizing for community detection. I know there are many <code>python</code> libraries like <code>igraph</code>, <code>CDlib</code>, <code>networkx</code>, with community detection algorithms, but I am unable to find something that clearly considers both node and edge weights. Any input or feedback on this is appreciated!</p>
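One pragmatic approach I have seen (a sketch, not a library feature - the combination rule below is an arbitrary choice you would tune for your data) is to fold the node importances into the edge weights first, then run any weighted community-detection algorithm. Illustrated on an undirected networkx graph with greedy modularity:

```python
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

G = nx.Graph()
G.add_nodes_from([(1, {"w": 2.0}), (2, {"w": 1.0}),
                  (3, {"w": 3.0}), (4, {"w": 1.0})])
G.add_weighted_edges_from([(1, 2, 1.0), (3, 4, 1.0), (1, 3, 0.1)])

# fold node importance into each incident edge
# (one possible rule: average of the endpoint weights)
for u, v, d in G.edges(data=True):
    d["weight"] *= (G.nodes[u]["w"] + G.nodes[v]["w"]) / 2

communities = list(greedy_modularity_communities(G, weight="weight"))
assert set().union(*communities) == set(G.nodes)   # every node assigned
```

For a directed graph, the same reweighting step can precede an algorithm that accepts directed input (e.g. Infomap via CDlib), since the node information is then carried entirely by the edges.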
|
<python><cluster-analysis><graph-theory><igraph><subgraph>
|
2024-11-26 18:57:49
| 1
| 401
|
abbas786
|
79,227,921
| 16,092,023
|
pip install failing in azurelinux based docker image
|
<p>We are using <code>mcr.microsoft.com/azurelinux/base/python:3.12.3-4-azl3.0.20241101-amd64</code> as our base image to build a Python-based application. But when we perform a docker build from an Ubuntu 18 or 22 based host machine, the pip command inside the Dockerfile fails with the error below. When we execute the build from a RHEL 8 host machine, it works.</p>
<pre><code> Downloading pygments-2.18.0-py3-none-any.whl.metadata (2.5 kB)
Collecting mdurl~=0.1 (from markdown-it-py>=2.2.0->rich>=10.11.0->typer>=0.12.3->fastapi-cli>=0.0.5->fastapi-cli[standard]>=0.0.5; extra == "standard"->fastapi[standard]==0.115.5->-r requirements.txt (line 1))
Downloading mdurl-0.1.2-py3-none-any.whl.metadata (1.6 kB)
Downloading fastapi-0.115.5-py3-none-any.whl (94 kB)
Downloading httpx-0.27.2-py3-none-any.whl (76 kB)
Downloading langchain-0.3.7-py3-none-any.whl (1.0 MB)
ERROR: Exception:
Traceback (most recent call last):
  File "/usr/lib/python3.12/site-packages/pip/_internal/cli/base_command.py", line 105, in _run_wrapper
    status = _inner_run()
             ^^^^^^^^^^^^
  File "/usr/lib/python3.12/site-packages/pip/_internal/cli/base_command.py", line 96, in _inner_run
    return self.run(options, args)
           ^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3.12/site-packages/pip/_internal/cli/req_command.py", line 67, in wrapper
    return func(self, options, args)
           ^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3.12/site-packages/pip/_internal/commands/install.py", line 379, in run
    requirement_set = resolver.resolve(
                      ^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3.12/site-packages/pip/_internal/resolution/resolvelib/resolver.py", line 179, in resolve
    self.factory.preparer.prepare_linked_requirements_more(reqs)
  File "/usr/lib/python3.12/site-packages/pip/_internal/operations/prepare.py", line 554, in prepare_linked_requirements_more
    self._complete_partial_requirements(
  File "/usr/lib/python3.12/site-packages/pip/_internal/operations/prepare.py", line 469, in _complete_partial_requirements
    for link, (filepath, _) in batch_download:
  File "/usr/lib/python3.12/site-packages/pip/_internal/network/download.py", line 184, in __call__
    for chunk in chunks:
  File "/usr/lib/python3.12/site-packages/pip/_internal/cli/progress_bars.py", line 54, in _rich_progress_bar
    with progress:
  File "/usr/lib/python3.12/site-packages/pip/_vendor/rich/progress.py", line 1168, in __enter__
    self.start()
  File "/usr/lib/python3.12/site-packages/pip/_vendor/rich/progress.py", line 1159, in start
    self.live.start(refresh=True)
  File "/usr/lib/python3.12/site-packages/pip/_vendor/rich/live.py", line 132, in start
    self._refresh_thread.start()
  File "/usr/lib/python3.12/threading.py", line 992, in start
    _start_new_thread(self._bootstrap, ())
RuntimeError: can't start new thread
The command '/bin/sh -c pip3 install --no-cache-dir -r requirements.txt' returned a non-zero code: 2
</code></pre>
<p>requirements.txt:</p>
<pre><code>fastapi[standard]==0.115.5
httpx==0.27.2
langchain==0.3.7
langchain-core==0.3.19
langchain_community==0.3.2
langchain-openai==0.2.9
langchain-text-splitters==0.3.2
logger==1.4
openai==1.54.5
pandas==2.2.3
pydantic==2.9.2
pydantic_core==2.23.4
pytest==8.3.3
pytest-cov==6.0.0
pytest-integration-mark==0.2.0
pytest-mock==3.14.0
python-dotenv==1.0.1
requests==2.32.3
rank-bm25==0.2.2
six==1.16.0
</code></pre>
<p>The Dockerfile is as below:</p>
<pre><code>mcr.microsoft.com/azurelinux/base/python:3.12.3-4-azl3.0.20241101-amd64
WORKDIR /app
COPY requirements.txt /app
COPY ./pip.conf ~/.pip/
RUN pip3 install --no-cache-dir -r requirements.txt
..............
.............
</code></pre>
|
<python><docker><ubuntu><pip><azure-linux>
|
2024-11-26 18:43:08
| 0
| 1,551
|
Vowneee
|
79,227,872
| 807,037
|
How to reuse tests with different subjects-under-test in python?
|
<p>This question is slightly different from <a href="https://stackoverflow.com/questions/1323455/python-unit-test-with-base-and-sub-class">Python unit test with base and sub class</a> because the tests to be reused exercise different subjects-under-test, which are set by the subclasses.</p>
<p>I have multiple implementations of the same functionality and want to reuse the test cases:</p>
<p>What I have so far is:</p>
<pre><code>import unittest
from typing import Callable


class TestBase(unittest.TestCase):
    def __init__(self, sut: Callable[[int], int]):
        super().__init__()
        self.sut = sut

    def test_1(self):
        expected = 1
        n = 1
        actual = self.sut(n)
        self.assertEqual(expected, actual)

    def test_2(self):
        expected = 2
        n = 2
        actual = self.sut(n)
        self.assertEqual(expected, actual)


class TestImplA(TestBase):
    def setUp(self):
        super().sut = impl_a


class TestImplB(TestBase):
    def setUp(self):
        super().sut = impl_b
</code></pre>
<p>but no actual tests are being run with the above.</p>
<p>I've tried customizing <code>TestBase.__init__</code> (including using <code>**kwargs</code>) with no luck</p>
<p>How can I go about parameterizing <code>TestBase</code>?</p>
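<p>The usual fix is to make the shared tests a mixin that does <em>not</em> inherit <code>unittest.TestCase</code> (so the loader never tries to instantiate the base on its own, which is what breaks the <code>__init__</code> approach), and set <code>sut</code> as a class attribute on each concrete subclass. A runnable sketch with stand-in implementations:</p>

```python
import io
import unittest

def impl_a(n: int) -> int:
    return n  # stand-in for one real implementation

def impl_b(n: int) -> int:
    return n  # stand-in for the other

class SutTests:
    """Shared tests. Not a TestCase itself, so it is never collected alone."""
    sut = None  # each concrete subclass plugs in its implementation

    def test_identity(self):
        self.assertEqual(1, self.sut(1))

class TestImplA(SutTests, unittest.TestCase):
    sut = staticmethod(impl_a)  # staticmethod stops Python binding `self`

class TestImplB(SutTests, unittest.TestCase):
    sut = staticmethod(impl_b)

loader = unittest.TestLoader()
suite = unittest.TestSuite([
    loader.loadTestsFromTestCase(TestImplA),
    loader.loadTestsFromTestCase(TestImplB),
])
result = unittest.TextTestRunner(stream=io.StringIO()).run(suite)
```

<p>Each concrete class runs the whole shared suite against its own implementation, and <code>assertEqual</code> works in the mixin because the concrete classes also inherit <code>TestCase</code>.</p>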
|
<python>
|
2024-11-26 18:20:28
| 2
| 19,948
|
Noel Yap
|
79,227,779
| 7,465,462
|
Query a specific model deployment in an endpoint inside Azure Machine Learning
|
<p>I'm working in Python. I already successfully deployed two model deployments in an Azure Machine Learning real-time endpoint, with a 50-50 traffic split.</p>
<p>Using the Python SDK, I know that to call a specific model deployment I have to specify the <code>deployment_name</code> parameter in the <code>invoke</code> method (<a href="https://learn.microsoft.com/en-us/azure/machine-learning/how-to-deploy-online-endpoints?view=azureml-api-2&tabs=python#invoke-the-endpoint-to-score-data-by-using-your-model" rel="nofollow noreferrer">tutorial</a> and <a href="https://learn.microsoft.com/en-us/python/api/azure-ai-ml/azure.ai.ml.operations.onlineendpointoperations?view=azure-python#azure-ai-ml-operations-onlineendpointoperations-invoke" rel="nofollow noreferrer">docs</a>).</p>
<p>However, if I want to do the same operation using REST APIs, I did not find any parameter to be passed to the REST endpoint query string. Therefore, the model deployment is randomly chosen according to the traffic split.</p>
<p><strong>Code example:</strong></p>
<pre><code>import requests
import json

endpoint = "https://my-endpoint.westeurope.inference.ml.azure.com/score"
api_key = "xyz"

headers = {
    "Content-Type": "application/json",
    "Authorization": f"Bearer {api_key}"
}

data = {
    "input_data": {
        "columns": [
            "a",
            "b"
        ],
        "index": [0, 1],
        "data": [
            [0.00, 185.0],
            [18.00, 181.0]
        ]
    }
}

response = requests.post(endpoint, headers=headers, data=str.encode(json.dumps(data)))
print(response.json())
</code></pre>
<p>Since there is the <code>invoke</code> method for the Python SDK, I expected there was a parameter to be added to the endpoint query string, e.g. <code>endpoint = https://my-endpoint.westeurope.inference.ml.azure.com/score?deployment=model1</code> - I appended <code>deployment=model1</code> at the end - but I did not find anything in the <a href="https://learn.microsoft.com/en-us/rest/api/azureml/online-deployments?view=rest-azureml-2024-10-01" rel="nofollow noreferrer">docs</a>.</p>
<p>Any suggestions? Does anyone know if this option is available?</p>
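<p>For online endpoints, the deployment is selected with the <code>azureml-model-deployment</code> request <em>header</em> rather than a query-string parameter (this is what the SDK's <code>deployment_name</code> translates to). Endpoint, key and deployment name below are placeholders:</p>

```python
endpoint = "https://my-endpoint.westeurope.inference.ml.azure.com/score"
api_key = "xyz"

headers = {
    "Content-Type": "application/json",
    "Authorization": f"Bearer {api_key}",
    # Routes the call to one specific deployment, bypassing the traffic split
    "azureml-model-deployment": "model1",
}

# requests.post(endpoint, headers=headers, json=data) would then always
# hit the 'model1' deployment.
```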
|
<python><rest><endpoint><azure-machine-learning-service>
|
2024-11-26 17:46:38
| 1
| 9,318
|
Ric S
|
79,227,692
| 5,214,931
|
"No module named numpy" when trying to install eif from pypi with pixi
|
<p>I am trying to use <code>pixi</code> for a python library, and it's running into an error when trying to install the pipy dependency <code>eif</code>.</p>
<pre><code>~/code> pixi init test-pixi-eif --format pyproject
✔ Created /home/oliver/code/test-pixi-eif/pyproject.toml
~/code/test-pixi-eif> cd test-pixi-eif/
~/code/test-pixi-eif> pixi add numpy
✔ Added numpy >=2.1.3,<3
~/code/test-pixi-eif> pixi add --pypi eif
  × default: error installing/updating PyPI dependencies
  ├─▶ Failed to prepare distributions
  ├─▶ Failed to download and build `eif==2.0.2`
  ╰─▶ Build backend failed to determine requirements with `build_wheel()` (exit status: 1)

      [stderr]
      Traceback (most recent call last):
        File "<string>", line 14, in <module>
          requires = get_requires_for_build({})
        File "/home/oliver/.cache/rattler/cache/uv-cache/builds-v0/.tmplz6UaX/lib/python3.13/site-packages/setuptools/build_meta.py", line 334, in get_requires_for_build_wheel
          return self._get_build_requires(config_settings, requirements=[])
                 ~~~~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
        File "/home/oliver/.cache/rattler/cache/uv-cache/builds-v0/.tmplz6UaX/lib/python3.13/site-packages/setuptools/build_meta.py", line 304, in _get_build_requires
          self.run_setup()
          ~~~~~~~~~~~~~~^^
        File "/home/oliver/.cache/rattler/cache/uv-cache/builds-v0/.tmplz6UaX/lib/python3.13/site-packages/setuptools/build_meta.py", line 522, in run_setup
          super().run_setup(setup_script=setup_script)
          ~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^
        File "/home/oliver/.cache/rattler/cache/uv-cache/builds-v0/.tmplz6UaX/lib/python3.13/site-packages/setuptools/build_meta.py", line 320, in run_setup
          exec(code, locals())
          ~~~~^^^^^^^^^^^^^^^^
        File "<string>", line 3, in <module>
          if sys.path[0] == "":
             ^^^^^^^^^^^^
      ModuleNotFoundError: No module named 'numpy'
</code></pre>
<p>The corresponding <code>pyproject.toml</code> file is:</p>
<pre><code>[project]
authors = [{name = "foo", email = "bar"}]
description = "Add a short description here"
name = "test-pixi-eif"
requires-python = ">= 3.11"
version = "0.1.0"
dependencies = ["eif>=2.0.2,<3"]
[build-system]
build-backend = "hatchling.build"
requires = ["hatchling"]
[tool.pixi.project]
channels = ["conda-forge"]
platforms = ["linux-64"]
[tool.pixi.pypi-dependencies]
test_pixi_eif = { path = ".", editable = true }
[tool.pixi.tasks]
[tool.pixi.dependencies]
numpy = ">=2.1.3,<3"
</code></pre>
<p>I've also tried having <code>eif</code> under <code>[tool.pixi.pypi-dependencies]</code>, and not first installing <code>numpy</code>, but that doesn't seem to make a difference.</p>
<p>What I find puzzling is that when I manually install numpy in a micromamba/conda environment, and then <code>pip install eif</code>, I don't encounter any issues. I thought that pixi is doing essentially the same, but there must be some difference somewhere.</p>
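<p>The difference is build isolation: <code>pip install</code> in an activated environment sees the installed numpy, while pixi (via uv) builds <code>eif</code> in an isolated environment that doesn't. One way out, assuming your pixi version exposes the option (the key name below is my reading of the current pixi docs on <code>pypi-options</code>, so verify it), is to disable isolation for this one package so the conda-installed numpy is visible during its <code>setup.py</code> run:</p>

```toml
[tool.pixi.pypi-options]
# Build eif without isolation so the environment's numpy is importable
# at build time. Verify this option name against your pixi version.
no-build-isolation = ["eif"]
```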
|
<python><numpy><pypi><conda-forge><pixi-package-manager>
|
2024-11-26 17:15:04
| 0
| 1,264
|
oulenz
|
79,227,655
| 2,772,805
|
Rescale axes concerning visible artists only
|
<p>I have a request regarding the use of <code>autoscale_view()</code>. I was expecting it to scale the plot considering only the visible artists, but it does not have such an effect.</p>
<p>I have a small test script as follows, any help is welcome:</p>
<pre><code>import matplotlib.pyplot as plt
import numpy as np

x1 = np.linspace(0, 10, 100)
y1 = np.sin(x1)
x2 = np.linspace(5, 15, 100)
y2 = np.cos(x2)

fig, ax = plt.subplots()
ax.set_title('Click on legend line to toggle line on/off')
(line1, ) = ax.plot(x1, y1, lw=2, label='1 Hz')
(line2, ) = ax.plot(x2, y2, lw=2, label='2 Hz')
leg = ax.legend()

lines = [line1, line2]
map_legend_to_ax = {}  # Will map legend lines to original lines.

pickradius = 5

for legend_line, ax_line in zip(leg.get_lines(), lines):
    legend_line.set_picker(pickradius)
    map_legend_to_ax[legend_line] = ax_line

def on_pick(event):
    legend_line = event.artist
    ax_line = map_legend_to_ax[legend_line]
    visible = not ax_line.get_visible()
    ax_line.set_visible(visible)
    legend_line.set_alpha(1.0 if visible else 0.2)
    ax.relim()
    ax.autoscale_view()
    fig.canvas.draw()

fig.canvas.mpl_connect('pick_event', on_pick)
plt.show()
</code></pre>
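<p><code>Axes.relim()</code> takes a <code>visible_only</code> flag that does exactly this: with <code>ax.relim(visible_only=True)</code>, hidden artists are excluded from the recomputed data limits before <code>autoscale_view()</code> runs. A minimal headless demonstration:</p>

```python
import matplotlib
matplotlib.use("Agg")  # non-interactive backend so this runs anywhere
import matplotlib.pyplot as plt
import numpy as np

x1 = np.linspace(0, 10, 100)
x2 = np.linspace(5, 15, 100)

fig, ax = plt.subplots()
(line1,) = ax.plot(x1, np.sin(x1))
(line2,) = ax.plot(x2, np.cos(x2))

line2.set_visible(False)
ax.relim(visible_only=True)  # data limits from visible artists only
ax.autoscale_view()          # x-axis now spans roughly 0..10, not 0..15
```

<p>In the pick handler above, replacing <code>ax.relim()</code> with <code>ax.relim(visible_only=True)</code> should be the only change needed.</p>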
|
<python><matplotlib><axis>
|
2024-11-26 17:02:16
| 2
| 429
|
PBrockmann
|
79,227,618
| 3,705,139
|
How is super() able to (apparently) resolve to, and call, multiple methods?
|
<p>I'm working on some Python code that seems like it'll be a good fit for multiple inheritance, and I'm reading about multiple inheritance and <code>super()</code> to make sure I correctly understand how they work.</p>
<p>My understanding of <code>super()</code> is that - to paraphrase - it walks a class's method resolution order and returns a "dispatcher" of sorts, where accessing a method or attribute via <code>__getattr__</code> accesses the "first" implementation of that method or attribute, as dictated by the MRO.</p>
<p>However, <a href="https://realpython.com/lessons/multiple-inheritance-python/" rel="nofollow noreferrer">this example from RealPython</a> has me baffled. <em>(I've added print statements, and I've edited their example to remove all the functional methods, since all I care about here is how <code>super()</code> works with the MRO and the various <code>__init__</code> methods.)</em></p>
<pre class="lang-py prettyprint-override"><code>class Rectangle:
    def __init__(self, length, width, **kwargs):
        print("Running Rectangle.__init__")
        self.length = length
        self.width = width
        super().__init__(**kwargs)

class Square(Rectangle):
    def __init__(self, length, **kwargs):
        print("Running Square.__init__")
        super().__init__(length=length, width=length, **kwargs)

class Triangle:
    def __init__(self, base, height, **kwargs):
        print("Running Triangle.__init__")
        self.base = base
        self.height = height
        super().__init__(**kwargs)

class RightPyramid(Square, Triangle):
    def __init__(self, base, slant_height, **kwargs):
        print("Running RightPyramid.__init__")
        self.base = base
        self.slant_height = slant_height
        kwargs["height"] = slant_height
        kwargs["length"] = base
        super().__init__(base=base, **kwargs)

_ = RightPyramid(10, 5)
</code></pre>
<p>My expectation is that because Square comes before Triangle in RightPyramid's inheritance order, <code>super(RightPyramid).__init__</code> is equivalent to <code>Square.__init__</code>. And indeed:</p>
<pre class="lang-py prettyprint-override"><code>>>> super(RightPyramid).__init__
<bound method Square.__init__ of <__main__.RightPyramid object at [...]>
</code></pre>
<p>However, when I actually run this code to look at what gets printed:</p>
<pre class="lang-bash prettyprint-override"><code>lux@parabolica:~/Desktop $ python test.py
Running RightPyramid.__init__ # No problem
Running Square.__init__ # Square comes first, makes sense
Running Rectangle.__init__ # Square calls Rectangle, gotcha
Running Triangle.__init__ # What? How?!
</code></pre>
<p>What's going on here? How is <code>Triangle.__init__</code> able to get called?</p>
<p>The funny thing is, this is actually <em>exactly</em> what I want to have happen in the code I'm working on - "mixing together" multiple <code>__init__</code> methods from more than one superclass - but none of the documentation or articles I've read indicate that <code>super()</code> is supposed to behave this way. As far as I've read, <code>super()</code> should only "resolve to", and call, one single method; from examining <code>super(RightPyramid).__init__</code>, it seems like that's what's happening, but from the output of the code, <code>Triangle.__init__</code> is clearly getting called... somehow.</p>
<p>What am I misunderstanding here, and where can I read more about this functionality of <code>super()</code>?</p>
<p><em>(This seems to be what <a href="https://docs.python.org/3/library/functions.html#super" rel="nofollow noreferrer">the official documentation for <code>super()</code></a> is referring to in the paragraph starting with "The second use case is to support cooperative multiple inheritance...", but it doesn't go on to say anything more specific about what that means, how that works, or provide any examples for that use-case.)</em></p>
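<p>The key point is that <code>super()</code> does not mean "my parent class": it means "the next class after the current one in <code>type(self).__mro__</code>". Since the object being built is a <code>RightPyramid</code>, every <code>super().__init__</code> call walks <em>RightPyramid's</em> MRO, in which <code>Triangle</code> comes right after <code>Rectangle</code>. A stripped-down reproduction (my own minimal version of the classes above, with the geometry removed):</p>

```python
calls = []

class Rectangle:
    def __init__(self, **kwargs):
        calls.append("Rectangle")
        super().__init__(**kwargs)  # next in type(self).__mro__, not object!

class Square(Rectangle):
    def __init__(self, **kwargs):
        calls.append("Square")
        super().__init__(**kwargs)

class Triangle:
    def __init__(self, **kwargs):
        calls.append("Triangle")
        super().__init__(**kwargs)

class RightPyramid(Square, Triangle):
    def __init__(self, **kwargs):
        calls.append("RightPyramid")
        super().__init__(**kwargs)

RightPyramid()
mro = [c.__name__ for c in RightPyramid.__mro__]
```

<p>So each class hands off to "the next one along" in the instance's MRO rather than its textual parent, which is exactly the cooperative chaining the docs allude to.</p>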
|
<python><multiple-inheritance><super><method-resolution-order>
|
2024-11-26 16:48:55
| 1
| 353
|
Lux
|
79,227,592
| 12,846,524
|
Sinc-like feature removal with an FFT
|
<h3>Outline</h3>
<p>I have some Raman spectroscopy data that contains features that I want to remove. I'll point them out in the figure below:</p>
<p><a href="https://i.sstatic.net/gwlBAEyI.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/gwlBAEyI.png" alt="enter image description here" /></a></p>
<p>Here you can see two sets of peaks that I am interested in, with the right set annotated. The main "vibrational" peaks (blue) I want to preserve, and the accompanying "rotational" peaks (red) that I want to remove with some kind of filter. There is also a potential for other vibrational peaks to appear (green) among the rotational peaks that I also want to preserve.</p>
<h3>What I've tried</h3>
<p>My idea is that, because the rotational peaks appear to have a regular frequency akin to a sinc function, I can remove them by taking an FFT, removing the associated frequencies, and retrieving the filtered spectrum with an IFFT.</p>
<p>Here is the result of my attempt:
<a href="https://i.sstatic.net/824N8EST.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/824N8EST.png" alt="enter image description here" /></a></p>
<p>This figure shows a crop of the region annotated in the previous figure. Hopefully the figure legends are clear, but I will point out some key things:</p>
<ul>
<li>I first subtracted the minimum value for the cropped region, to make the filtering process simpler</li>
<li>I have used a low-pass filter (LPF) to remove the Gaussian-like shape from the FFT (bottom; red) of the original spectrum (top; black) to produce the filtered FFT (bottom; green)</li>
<li>I perform an IFFT on the filtered FFT to produce a spectrum containing my rotational peaks (top; green), and then subtract it from my original spectrum to obtain my filtered result (top; magenta).</li>
</ul>
<p>You can see an overlaid comparison of the original and filtered spectra below:
<a href="https://i.sstatic.net/kEVc0Gyb.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/kEVc0Gyb.png" alt="enter image description here" /></a></p>
<h3>Thoughts/Observations</h3>
<p>As you can see my filtering attempt is not perfect, however the result of the LPF and IFFT does seem to do a reasonable job at recreating the rotational peaks and not the vibrational peaks, and the peak of interest (encircled green in the first figure) is made more prominent.</p>
<p>The main issues from what I can see are:</p>
<ul>
<li>The LPF has not perfectly fit to the Gaussian-like shape (although I am hesitant to fine-tune this value as then the process will lose some generalisability), this has caused the rotational peaks to not be fully removed.</li>
<li>The filtered spectrum seems to have added a slight sine wave across the filtered regions, which I've roughly annotated</li>
<li>The left filtered region is overall lower in intensity than before, which could be easily fixed with an additional step afterwards, but I think it could prove unnecessary with some improvements to the filtering process</li>
</ul>
<h3>Code & Data</h3>
<p>I'll post a code snippet for how I have applied my filter:</p>
<pre><code>import numpy as np
from scipy.fft import fft, fftshift, ifft, ifftshift

# Make a copy of the spectrum
spectrum_filtered = np.copy(spectrum_avg)

# Manually define some regions to be cropped and filtered
crop_regions = [[379, 479], [620, 791]]

for crop_region in crop_regions:
    # Crop the spectrum
    crop_left, crop_right = crop_region
    crop = np.copy(spectrum_avg[crop_left:crop_right])

    # Subtract the minimum value
    min_val = np.min(crop)
    crop -= min_val

    # FFT
    crop_fft = np.abs(fftshift(fft(crop)))

    # Fit the baseline using a low-pass filter
    fit = butter_filter(crop_fft, cutoff=3, fs=crop_fft.size, order=5, btype='lowpass')

    # Remove the baseline, then perform an IFFT (this holds the rotational peaks)
    crop_fft_fit = crop_fft - fit
    crop_ifft = np.abs(ifftshift(ifft(crop_fft_fit)))

    # Subtract the rotational peaks
    crop_filtered = crop - crop_ifft

    # Replace the cropped region with the filtered spectrum (re-adding the minimum value)
    spectrum_filtered[crop_left:crop_right] = crop_filtered + min_val
</code></pre>
<p>Here is a pastebin link to the example spectrum: <a href="https://pastebin.com/HHuJJXxL" rel="nofollow noreferrer">https://pastebin.com/HHuJJXxL</a></p>
<p>I'm happy to clarify anything should it prove necessary. I would greatly appreciate any feedback, criticism, or suggestions on how to improve this method!</p>
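<p>Since the rotational comb sits at a known, narrow set of FFT bins, an alternative to subtracting a low-pass fit is to notch those bins directly and inverse-transform. A self-contained sketch on synthetic data (a broad peak to keep plus a periodic ripple to remove; the notch width and bin index are assumptions you would read off your own FFT):</p>

```python
import numpy as np

n = 512
x = np.arange(n)
baseline = np.exp(-0.5 * ((x - 256) / 60.0) ** 2)  # broad feature to keep
ripple = 0.2 * np.sin(2 * np.pi * 32 * x / n)      # periodic comb to remove
signal = baseline + ripple

spec = np.fft.fft(signal)
notch = np.ones(n)
for k in (32, n - 32):        # the ripple lives at bins +/-32
    notch[k - 1:k + 2] = 0.0  # zero a small neighbourhood of each bin

filtered = np.fft.ifft(spec * notch).real
```

<p>Because the broad baseline has essentially no energy at those bins, the notch removes the ripple without the residual sine-wave artefact that an imperfect low-pass fit can leave behind.</p>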
|
<python><filter><fft><spectrum>
|
2024-11-26 16:42:47
| 1
| 374
|
AlexP
|
79,227,550
| 13,392,257
|
Kafka in docker: No connection to node with id 1
|
<p>I am trying to send data to a local Kafka setup.</p>
<p>My infrastructure:</p>
<pre class="lang-yaml prettyprint-override"><code>version: "3.8"

services:
  zookeeper:
    image: bitnami/zookeeper:latest
    ports:
      - 2181:2181
    environment:
      - ALLOW_ANONYMOUS_LOGIN=yes

  kafka:
    image: bitnami/kafka:latest
    ports:
      - 9092:9092
      - 9093:9093
    environment:
      - KAFKA_BROKER_ID=1
      - KAFKA_CFG_LISTENERS=PLAINTEXT://:9092
      - KAFKA_CFG_ADVERTISED_LISTENERS=PLAINTEXT://127.0.0.1:9092
      - KAFKA_CFG_ZOOKEEPER_CONNECT=zookeeper:2181
      - ALLOW_PLAINTEXT_LISTENER=yes
      - KAFKA_CFG_LISTENER_SECURITY_PROTOCOL_MAP=CLIENT:PLAINTEXT
      - KAFKA_CFG_LISTENERS=CLIENT://:9092
      - KAFKA_CFG_ADVERTISED_LISTENERS=CLIENT://kafka:9092
      - KAFKA_CFG_INTER_BROKER_LISTENER_NAME=CLIENT
    depends_on:
      - zookeeper
</code></pre>
<p>The code to send some data to Kafka:</p>
<pre class="lang-py prettyprint-override"><code>import asyncio

from aiokafka import AIOKafkaProducer


async def send_to_kafka():
    producer = AIOKafkaProducer(
        bootstrap_servers='localhost:9092',
        enable_idempotence=True)
    await producer.start()
    try:
        await producer.send_and_wait("my_topic", b"Super message")
    finally:
        await producer.stop()

asyncio.run(send_to_kafka())
</code></pre>
<p>I have an error:</p>
<pre class="lang-none prettyprint-override"><code>Traceback (most recent call last):
  File "tests/test_producer.py", line 16, in <module>
    asyncio.run(send_to_kafka())
  File "/usr/local/Cellar/python@3.8/3.8.20/Frameworks/Python.framework/Versions/3.8/lib/python3.8/asyncio/runners.py", line 44, in run
    return loop.run_until_complete(main)
  File "/usr/local/Cellar/python@3.8/3.8.20/Frameworks/Python.framework/Versions/3.8/lib/python3.8/asyncio/base_events.py", line 616, in run_until_complete
    return future.result()
  File "tests/test_producer.py", line 10, in send_to_kafka
    await producer.start()
  File "/Users/aamoskalenko/0_root_folder/02_dev/51_fetcher_new/Fetcher-Service/venv/lib/python3.8/site-packages/aiokafka/producer/producer.py", line 352, in start
    await self.client.bootstrap()
  File "/Users/aamoskalenko/0_root_folder/02_dev/51_fetcher_new/Fetcher-Service/venv/lib/python3.8/site-packages/aiokafka/client.py", line 269, in bootstrap
    self._api_version = await self.check_version()
  File "/Users/aamoskalenko/0_root_folder/02_dev/51_fetcher_new/Fetcher-Service/venv/lib/python3.8/site-packages/aiokafka/client.py", line 564, in check_version
    raise KafkaConnectionError(f"No connection to node with id {node_id}")
aiokafka.errors.KafkaConnectionError: KafkaConnectionError: No connection to node with id 1
Unclosed AIOKafkaProducer
</code></pre>
<p>I also tried replacing the bootstrap server with</p>
<pre class="lang-py prettyprint-override"><code>bootstrap_servers='kafka:9092',
</code></pre>
<p>but still get an error:</p>
<pre class="lang-none prettyprint-override"><code>  File "/Users/aamoskalenko/0_root_folder/02_dev/51_fetcher_new/Fetcher-Service/venv/lib/python3.8/site-packages/aiokafka/client.py", line 265, in bootstrap
    raise KafkaConnectionError(f"Unable to bootstrap from {self.hosts}")
aiokafka.errors.KafkaConnectionError: KafkaConnectionError: Unable to bootstrap from [('kafka', 9092, <AddressFamily.AF_UNSPEC: 0>)]
Unclosed AIOKafkaProducer
</code></pre>
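<p>A likely cause: the broker only advertises one listener, so a producer on the host needs an address it can resolve (<code>localhost:9092</code>), while containers need the in-network name (<code>kafka</code>). The usual fix is two listeners, sketched below and untested here; the environment-variable names follow the Bitnami image's conventions, so double-check them against its docs:</p>

```yaml
kafka:
  image: bitnami/kafka:latest
  ports:
    - 9092:9092
  environment:
    - KAFKA_BROKER_ID=1
    - KAFKA_CFG_ZOOKEEPER_CONNECT=zookeeper:2181
    - ALLOW_PLAINTEXT_LISTENER=yes
    # One listener for inside the compose network, one for the host
    - KAFKA_CFG_LISTENER_SECURITY_PROTOCOL_MAP=INTERNAL:PLAINTEXT,EXTERNAL:PLAINTEXT
    - KAFKA_CFG_LISTENERS=INTERNAL://:9093,EXTERNAL://:9092
    - KAFKA_CFG_ADVERTISED_LISTENERS=INTERNAL://kafka:9093,EXTERNAL://localhost:9092
    - KAFKA_CFG_INTER_BROKER_LISTENER_NAME=INTERNAL
```

<p>With this, <code>bootstrap_servers='localhost:9092'</code> from the host should connect; <code>kafka:9092</code> only resolves from inside the compose network. Note also that the original compose file defines <code>KAFKA_CFG_LISTENERS</code> and <code>KAFKA_CFG_ADVERTISED_LISTENERS</code> twice, so only the last value of each takes effect.</p>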
|
<python><apache-kafka>
|
2024-11-26 16:30:24
| 0
| 1,708
|
mascai
|
79,227,481
| 1,883,304
|
Populating CheckboxSelectMultiple widget using my own model in Wagtail admin
|
<h2>Context</h2>
<p>I've created a model, corresponding field model, and intend to reuse the built-in <code>CheckboxSelectMultiple</code> widget for use <em>within the Wagtail admin</em>. The concept is a multiple-select permission field that is saved as a bit-field:</p>
<pre class="lang-py prettyprint-override"><code># Model class
class Perm(IntFlag):
    Empty = 0
    Read = 1
    Write = 2
</code></pre>
<p>I used Django's <a href="https://docs.djangoproject.com/en/5.1/howto/custom-model-fields/#custom-database-types" rel="nofollow noreferrer">model field's</a> documentation to create a field model that can translate my <code>Perm</code> type to and from my database (saved as an integer field that bitwise OR's the respective permission bits):</p>
<pre class="lang-py prettyprint-override"><code># Model field class
class PermField(models.Field):
    description = "Permission field"

    def __init__(self, value=Perm.Empty.value, *args, **kwargs):
        self.value = value
        kwargs["default"] = Perm.Empty.value
        super().__init__(*args, **kwargs)

    def deconstruct(self):
        name, path, args, kwargs = super().deconstruct()
        args += [self.value]
        return name, path, args, kwargs

    def db_type(self, connection):
        return "bigint"  # PostgreSQL

    def from_db_value(self, value, expression, connection):
        if value is None:
            return Perm.Empty
        return Perm(value)

    def to_python(self, value):
        if isinstance(value, Perm):
            return value
        if isinstance(value, str):
            return self.parse(value)
        if value is None:
            return value
        return Perm(value)

    def parse(self, value):
        v = Perm.Empty
        if not isinstance(ast.literal_eval(value), list):
            raise ValueError("%s cannot be converted to %s", value, type(Perm))
        for n in ast.literal_eval(value):
            v = v | Perm(int(n))
        return v
</code></pre>
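<p>Independent of Django, the bit-field round-trip this field performs can be exercised with the stdlib alone; it is also exactly the conversion the edit form needs for its <code>initial</code> value (widget values in, combined flag out, and back):</p>

```python
import ast
from enum import IntFlag

class Perm(IntFlag):
    Empty = 0
    Read = 1
    Write = 2

# Widget -> database: OR the checked values into one bit-field,
# mirroring PermField.parse above.
posted = "[1, 2]"  # roughly what CheckboxSelectMultiple posts back
combined = Perm.Empty
for n in ast.literal_eval(posted):
    combined |= Perm(int(n))

# Database -> widget: recover the values that should be pre-checked,
# i.e. the `initial` the edit view is missing.
initial = [p.value for p in Perm if p is not Perm.Empty and p in combined]
```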
<p>Then, I also created a Wagtail snippet to use this new field and type:</p>
<pre class="lang-py prettyprint-override"><code>perm_choices = [
    (Perm.Read.value, Perm.Read.name),
    (Perm.Write.value, Perm.Write.name)
]

@register_snippet
class Permission(models.Model):
    name = models.CharField(max_length=32, default="None")
    perm = PermField()

    panels = [FieldPanel("perm", widget=forms.CheckboxSelectMultiple(choices=perm_choices))]
</code></pre>
<hr />
<h2>Problem</h2>
<p>Creating new snippets works fine, but <em>editing</em> an existing one simply shows an empty <code>CheckboxSelectMultiple</code> widget:</p>
<p><a href="https://i.sstatic.net/jt2wXv1F.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/jt2wXv1F.png" alt="Empty Wagtail CheckboxSelectMultiple upon editing an existing snippet" /></a></p>
<hr />
<h2>Solution attempts</h2>
<p>I clearly need to populate the form when it's initialised. Ideally, making use of the built-in <code>CheckboxSelectMultiple</code> widget. To do that, I tried defining the following form:</p>
<pre class="lang-py prettyprint-override"><code>@register_snippet
class Permission(models.Model):
    # ...

# Custom form subclass for snippets per documentation
# https://docs.wagtail.org/en/v2.15/advanced_topics/customisation/page_editing_interface.html
class PermissionForm(WagtailAdminModelForm):
    p = forms.IntegerField(
        widget=forms.CheckboxSelectMultiple(
            choices=perm_choices,
        ),
        label="Permission field",
    )

    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)
        self.fields['p'].initial = {
            k.name for k in [Perm.Read, Perm.Write] if k.value in Perm(int(p))
        }

    def clean_selected_permissions(self):
        selected_p = self.cleaned_data["p"]
        value = Perm.Empty
        for k in selected_p:
            value |= Perm.__members__[k]
        return value

    class Meta:
        model = Permission
        fields = ["perm"]

# Models not defined yet error here!
Permission.base_form_class = PermissionForm
</code></pre>
<p>However, I cannot get this form to work. There's a cycle where <code>PermissionForm</code> requires <code>Permission</code> to be defined or vice-versa. Using a <a href="https://stackoverflow.com/questions/77930148/how-can-snippet-forms-be-customized">global model form assignment as seen here</a> by gasman did not work for me. I'm also wondering if there's a simpler approach to solving the problem I'm facing that I'm just not seeing.</p>
<hr />
<h2>Similar questions that didn't address my problem</h2>
<ul>
<li><strong>Question</strong>: <a href="https://stackoverflow.com/questions/7364945/populate-checkboxselectmultiple-with-existing-data-from-django-model-form">Populate CheckboxSelectMultiple with existing data from django model form</a>
<ul>
<li><strong>Comment</strong>: OP implements a custom <code>ModelForm</code>, which links the <code>CheckBoxSelectMultiple</code> right up to a <code>models.ManyToManyField</code>. This works since the <code>ManyToManyField</code> type is automatically compatible with the widget. In my case, I have to set it up myself.</li>
</ul>
</li>
<li><strong>Question</strong>: <a href="https://stackoverflow.com/questions/4189970/initial-values-for-checkboxselectmultiple?rq=3">Initial values for CheckboxSelectMultiple</a>
<ul>
<li><strong>Comment</strong>: OP is using a <code>MultiSubscriptionForm</code>, which itself contains a keyword argument for populating the existing fields. This does not exist in my situation.</li>
</ul>
</li>
<li><strong>Question</strong>: <a href="https://stackoverflow.com/questions/667256/django-admin-template-overriding-displaying-checkboxselectmultiple-widget">Django Admin Template Overriding: Displaying checkboxselectmultiple widget</a>
<ul>
<li><strong>Comment</strong>: OP describes a table structure and asks if it is possible. No answer provided solves the problem (arguably poorly phrased question)</li>
</ul>
</li>
<li><strong>Question</strong>: <a href="https://stackoverflow.com/questions/15323724/django-render-checkboxselectmultiple-widget-individually-in-template-manual?rq=3">Django - Render CheckboxSelectMultiple() widget individually in template (manually)</a>
<ul>
<li><strong>Comment</strong>: OP wants to customise the <code>CheckboxSelectMultiple</code> template in order to show a special arrangement. The answers provide template HTML to do this, but otherwise rely on the <code>ManyToMany</code> field type/relation that automagically links/fills the checkboxes.</li>
</ul>
</li>
</ul>
|
<python><django><wagtail><wagtail-admin>
|
2024-11-26 16:14:01
| 1
| 3,728
|
Micrified
|
79,227,473
| 2,287,458
|
Sort Polars Dataframe columns based on row data
|
<p>I have this data:</p>
<pre class="lang-py prettyprint-override"><code>import polars as pl

pl.DataFrame({
    'region': ['EU', 'ASIA', 'AMER', 'Year'],
    'Share': [99, 6, -30, 2020],
    'Ration': [70, 4, -10, 2019],
    'Lots': [70, 4, -10, 2018],
    'Stake': [80, 5, -20, 2021],
})
# shape: (4, 5)
# ┌────────┬───────┬────────┬──────┬───────┐
# │ region ┆ Share ┆ Ration ┆ Lots ┆ Stake │
# │ --- ┆ --- ┆ --- ┆ --- ┆ --- │
# │ str ┆ i64 ┆ i64 ┆ i64 ┆ i64 │
# ╞════════╪═══════╪════════╪══════╪═══════╡
# │ EU ┆ 99 ┆ 70 ┆ 70 ┆ 80 │
# │ ASIA ┆ 6 ┆ 4 ┆ 4 ┆ 5 │
# │ AMER ┆ -30 ┆ -10 ┆ -10 ┆ -20 │
# │ Year ┆ 2020 ┆ 2019 ┆ 2018 ┆ 2021 │
# └────────┴───────┴────────┴──────┴───────┘
</code></pre>
<p>I want to order the columns based on the <code>Year</code> row, while leaving the <code>region</code> column first. So ideally I am looking for this:</p>
<pre><code>shape: (4, 5)
┌────────┬──────┬────────┬───────┬───────┐
│ region ┆ Lots ┆ Ration ┆ Share ┆ Stake │
│ --- ┆ --- ┆ --- ┆ --- ┆ --- │
│ str ┆ i64 ┆ i64 ┆ i64 ┆ i64 │
╞════════╪══════╪════════╪═══════╪═══════╡
│ EU ┆ 70 ┆ 70 ┆ 99 ┆ 80 │
│ ASIA ┆ 4 ┆ 4 ┆ 6 ┆ 5 │
│ AMER ┆ -10 ┆ -10 ┆ -30 ┆ -20 │
│ Year ┆ 2018 ┆ 2019 ┆ 2020 ┆ 2021 │
└────────┴──────┴────────┴───────┴───────┘
</code></pre>
<p>How can this be achieved? I tried using polars' <code>sort</code> function, but could not get it to do what I needed.</p>
|
<python><dataframe><python-polars>
|
2024-11-26 16:11:28
| 3
| 3,591
|
Phil-ZXX
|
79,227,455
| 8,384,910
|
Principled Conditional Inheritance in Python
|
<p>I have a class that inherits <code>np.ndarray</code>:</p>
<pre class="lang-py prettyprint-override"><code>class A(np.ndarray):
    def __new__(cls, foo: np.ndarray):
        # Operate on foo...
        x = np.array([1.]) + foo  # (placeholder)
        return x

    def bar(self, x):
        # Other stuff...
</code></pre>
<p>I want to add CPU/GPU-agnostic behaviour by detecting whether the input is a <code>np.ndarray</code> or <code>cupy.ndarray</code>, and dynamically inheriting one of the two, like this:</p>
<pre class="lang-py prettyprint-override"><code>class A:
    def __new__(cls, foo: Union[np.ndarray, cupy.ndarray]):
        xp = cupy.get_array_module(foo)  # Returns either `numpy` or `cupy` depending on `foo`
        # Operate on foo...
        x = xp.array([1.]) + foo  # (placeholder)
        return xp.__new__(cls, x)

    def bar(self, x):
        # Other stuff...
</code></pre>
<p>I think the following might work, but it's very messy:</p>
<pre class="lang-py prettyprint-override"><code>class BaseA(ABC):
    def bar(self, x):
        # Other stuff...

class ANumpy(BaseA, np.ndarray):
    pass

class ACupy(BaseA, cupy.ndarray):
    pass

class A:
    def __new__(cls, foo: Union[np.ndarray, cupy.ndarray]):
        if isinstance(foo, cupy.ndarray):
            return ACupy(foo)
        return ANumpy(foo)
</code></pre>
<p>How do I do it properly/well?</p>
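<p>The explicit-subclasses version can be made generic with a cached class factory. A numpy-only sketch (the cupy branch is omitted; this assumes the array type supports numpy-style <code>view</code> casting, which may not hold for cupy, so a cupy variant would need its own construction path):</p>

```python
import numpy as np

class BaseA:
    """Backend-independent behaviour shared by all variants."""
    def bar(self, x):
        return self + x  # placeholder shared method

_cache: dict[type, type] = {}

def make_A(foo):
    # Build (and cache) a subclass combining BaseA with foo's array type;
    # view() then re-casts the array into that subclass, numpy-style.
    arr_cls = type(foo)
    cls = _cache.setdefault(arr_cls, type(f"A_{arr_cls.__name__}", (BaseA, arr_cls), {}))
    return foo.view(cls)

a = make_A(np.array([1.0, 2.0]))
```

<p>The cache ensures each backend gets exactly one generated subclass, so <code>isinstance</code> checks stay stable across calls.</p>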
|
<python><inheritance>
|
2024-11-26 16:07:12
| 2
| 9,414
|
Richie Bendall
|
79,227,390
| 10,637,300
|
How to extract specific entities from unstructured text
|
<p>Given a generic text sentence (in a specific context) how can I extract word/entities of interest belonging to a specific "category" using python and any NLP library?</p>
<p>For example given a step for a culinary recipe <code>Add an onion to a bowl of carrots</code> as input text, I'd like to retrive <code>onion</code> and <code>carrots</code> while given <code>Sprinkle with paprika.</code> should return <code>paprika</code>.
But this should also work with sentences like <code>stir well, and cook an additional minute.</code> that do not contain any food entity in them.</p>
<p>So far, what I was able to achieve is using the <code>spacy</code> library for training a NER module to parse sentences. The problem with the NER pipeline is that it is a rule-based parsing, it is trained providing a set of sentences and entities/matches/labels to learn, which works fine as expeted on sentences similar to the one used during train, but performs bad on new and different sentences:</p>
<pre class="lang-py prettyprint-override"><code>nlp = spacy.load('trained_model')
document = nlp('Add flour, mustard, and salt')
[(ent.text, ent.label_) for ent in document.ents]
# >> [('Add flour', 'FOOD'), ('mustard', 'FOOD'), ('salt', 'FOOD')]
# (quite) correct output
document = nlp('I took a building, car and squirrel on the weekend')
[(ent.text, ent.label_) for ent in document.ents]
# >> [('building', 'FOOD'), ('car', 'FOOD'), ('squirrel', 'FOOD')]
# wrong output
document = nlp('stir well, and cook an additional minute.')
[(ent.text, ent.label_) for ent in document.ents]
# >> [('stir well', 'FOOD'), ('cook', 'FOOD'), ('additional minute.', 'FOOD')]
# wrong output
</code></pre>
<p>I am aware that there are several similar questions and posts, but I have found only solutions working for "semi-structured" text, i.e. list of ingredients as <code>1 tsp. of sugar, 1 cup of milk, ...</code> which can be easily solved using the previous rule-based approach. Also <code>nltk</code> and part-of-speech (POS) are an option, but I'd prefer an alternative solution rather than having to compare each noun with an exhaustive list of foods.</p>
<p>What instead I am looking for is a way of to extract specific entities or at least to classify words in generic text with additional categories beyond those of the basic parsing.
Which methods should I use/look at to achieve this?</p>
|
<python><machine-learning><nlp><nltk><spacy>
|
2024-11-26 15:46:48
| 1
| 396
|
Riccardo Raffini
|
79,227,308
| 1,735,914
|
What is the scope of a name defined using a Python with block?
|
<p>I am trying to update a test script written in Python. It has a construct I do not understand:</p>
<pre><code> with RequestTrap(self.channel) as trap:
request = self.task[0].svc.create({'task_use': TaskUse.STANDARD})
request.request_data.attribute_structures.task_use.idnum = attr
# Verify the request will return an unexpected attribute in list error.
try:
trap.response
except ResponseError as err:
test.eq(err.generalStatus, CipGeneralStatus.UNEXPECTED_ATTRIBUTE_IN_LIST)
test.eq(err.extendedStatus, None)
else:
test.logfail("Class level - Incorrect error response, expecting"
"UNEXPECTED_ATTRIBUTE_IN_LIST: attribute=%d" % attr)
</code></pre>
<p>What is the scope of the name "trap"? It looks to me like it should only be defined in the with block, but it is not even used in that block. The RequestTrap class does have an attribute named "request". Why is "trap" still defined outside the with block?</p>
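<p>The short answer is that the name has function (or module) scope, not block scope: in Python only functions, classes, and modules introduce scopes, so a <code>with ... as name</code> binding survives the block, just like a <code>for</code> loop variable does. A small stdlib-only demonstration (<code>request_trap</code> is a stand-in, not the real <code>RequestTrap</code>):</p>

```python
from contextlib import contextmanager

@contextmanager
def request_trap():
    # Stand-in context manager: yields the object bound by `as`
    yield "trap object"

with request_trap() as trap:
    pass

# `with` is not a scope: the name is still bound after the block exits
print(trap)  # trap object
```

<p>So in the snippet above, <code>trap.response</code> inside the <code>try</code> is still within the <code>with</code> block textually, and <code>trap</code> would remain defined even after it.</p>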
|
<python>
|
2024-11-26 15:24:21
| 0
| 2,431
|
ROBERT RICHARDSON
|
79,227,288
| 9,072,753
|
How to annotate zipping of dictionaries of different types?
|
<p>I want to zip dictionaries into a mapping from keys to tuples of values. I would be happy to annotate only for a specific number of arguments; I will only ever use up to 5.</p>
<pre><code>from typing import Optional, Tuple, overload, TypeVar, overload
T1 = TypeVar("T1")
V1 = TypeVar("V1")
V2 = TypeVar("V2")
V3 = TypeVar("V3")
@overload
def zip_dicts(a: dict[T1, V1]) -> dict[T1, tuple[V1]]: ...
@overload
def zip_dicts(a: dict[T1, V1], b: dict[T1, V2]) -> dict[T1, tuple[V1, V2]]: ...
@overload
def zip_dicts(a: dict[T1, V1], b: dict[T1, V2], c: dict[T1, V3]) -> dict[T1, tuple[V1, V2, V3]]: ...
def zip_dicts(*dicts):
    return dict((k, tuple(d[k] for d in dicts)) for k in dicts[0])
a: dict[str, int] = dict(a=1)
b: dict[str, str] = dict(b="a")
c: dict[str, float] = dict(c=1.0)
d: dict[str, tuple[int, str, float]] = zip_dicts(a, b, c)
print(d)
</code></pre>
<p>However this fails with basedpyright:</p>
<pre><code>• basedpyright: Overloaded implementation is not consistent with signature of overload 1
Type "(*dicts: Unknown) -> dict[Unknown, tuple[Unknown, ...]]" is not assignable to type "(a: dict[T1@zip_dicts, V1@zip_dicts]) -> dict[T1@zip_dicts, tuple[V1@zip_dicts]]"
Missing keyword parameter "a" [reportInconsistentOverload]
• basedpyright: Overloaded implementation is not consistent with signature of overload 2
Type "(*dicts: Unknown) -> dict[Unknown, tuple[Unknown, ...]]" is not assignable to type "(a: dict[T1@zip_dicts, V1@zip_dicts], b: dict[T1@zip_dicts, V2@zip_dicts]) -> dict[T1@zip_dicts, tuple[V1@zip_dicts, V2@zip_dicts]]"
Missing keyword parameter "a"
Missing keyword parameter "b" [reportInconsistentOverload]
• basedpyright: Overloaded implementation is not consistent with signature of overload 3
Type "(*dicts: Unknown) -> dict[Unknown, tuple[Unknown, ...]]" is not assignable to type "(a: dict[T1@zip_dicts, V1@zip_dicts], b: dict[T1@zip_dicts, V2@zip_dicts], c: dict[T1@zip_dicts, V3@zip_dicts]) -> dict[T1@zip_dicts, tuple[V1@zip_dicts, V2@zip_dicts, V3@zip_dicts]]"
Missing keyword parameter "a"
Missing keyword parameter "b"
Missing keyword parameter "c" [reportInconsistentOverload]
</code></pre>
<p>That is true: there are no <code>a</code>, <code>b</code>, <code>c</code> keyword parameters in the implementation. How do I overload a <code>*args</code> parameter without giving the arguments names?</p>
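<p>A sketch of the usual fix for this checker complaint: mark the overload parameters positional-only with <code>/</code>, so they carry no keyword names and stay compatible with a <code>*args</code> implementation (two overloads shown; the pattern extends to five):</p>

```python
from typing import TypeVar, overload

K = TypeVar("K")
V1 = TypeVar("V1")
V2 = TypeVar("V2")

# "/" makes a, b positional-only, so the *args implementation is accepted
@overload
def zip_dicts(a: dict[K, V1], /) -> dict[K, tuple[V1]]: ...
@overload
def zip_dicts(a: dict[K, V1], b: dict[K, V2], /) -> dict[K, tuple[V1, V2]]: ...
def zip_dicts(*dicts):
    return {k: tuple(d[k] for d in dicts) for k in dicts[0]}

print(zip_dicts({"x": 1}, {"x": "a"}))  # {'x': (1, 'a')}
```

<p>With positional-only parameters a caller can never write <code>zip_dicts(a=...)</code>, so the overloads no longer promise keyword names that the <code>*args</code> implementation cannot honour.</p>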
|
<python><overloading><python-typing>
|
2024-11-26 15:18:06
| 0
| 145,478
|
KamilCuk
|
79,227,281
| 1,188,318
|
from user's language to the locale string - how to?
|
<p>In our Django/Python/Linux stack we want to determine the correct locale from the user's language. The language might be 'de' and the locale could be something like de_DE or de_AT or even de_CH.UTF-8 - depending on what <code>locale -a</code> returns. In case of ambiguity we would just use the first valid entry for the respective language.</p>
<p>This locale string shall then be used to determine the correct number format, for instance like so:</p>
<pre><code> locale.setlocale(locale.LC_ALL, user.language)
formattedValue = locale.format_string(f'%.{decimals}f', val=gram, grouping=True)
</code></pre>
<p>We don't want to create a relation between language code and locale string in our application - so defining a dictionary containing languages and locale strings is not something we have in mind. The information should be retrieved generically.</p>
<p>For instance, on my local system I have these locales available:</p>
<pre><code>stephan@rechenmonster:~$ locale -a
C
C.utf8
de_AT.utf8
de_BE.utf8
de_CH.utf8
de_DE.utf8
de_IT.utf8
de_LI.utf8
de_LU.utf8
en_AG
en_AG.utf8
en_AU.utf8
en_BW.utf8
en_CA.utf8
en_DK.utf8
en_GB.utf8
en_HK.utf8
en_IE.utf8
en_IL
en_IL.utf8
en_IN
en_IN.utf8
en_NG
en_NG.utf8
en_NZ.utf8
en_PH.utf8
en_SG.utf8
en_US.utf8
en_ZA.utf8
en_ZM
en_ZM.utf8
en_ZW.utf8
POSIX
stephan@rechenmonster:~$
</code></pre>
<p>In python many more are listed - not only the available ones:</p>
<pre><code>>>> locale.locale_alias.keys()
dict_keys(['a3', 'a3_az', 'a3_az.koic', 'aa_dj', 'aa_er', 'aa_et', 'af', 'af_za', 'agr_pe', 'ak_gh', 'am', 'am_et', 'american', 'an_es', 'anp_in', 'ar', 'ar_aa', 'ar_ae', 'ar_bh', 'ar_dz', 'ar_eg', 'ar_in', 'ar_iq', 'ar_jo', 'ar_kw', 'ar_lb', 'ar_ly', 'ar_ma', 'ar_om', 'ar_qa', 'ar_sa', 'ar_sd', 'ar_ss', 'ar_sy', 'ar_tn', 'ar_ye', 'arabic', 'as', 'as_in', 'ast_es', 'ayc_pe', 'az', 'az_az', 'az_az.iso88599e', 'az_ir', 'be', 'be@latin', 'be_bg.utf8', 'be_by', 'be_by@latin', 'bem_zm', 'ber_dz', 'ber_ma', 'bg', 'bg_bg', 'bhb_in.utf8', 'bho_in', 'bho_np', 'bi_vu', 'bn_bd', 'bn_in', 'bo_cn', 'bo_in', 'bokmal', 'bokmål', 'br', 'br_fr', 'brx_in', 'bs', 'bs_ba', 'bulgarian', 'byn_er', 'c', 'c-french', 'c.ascii', 'c.en', 'c.iso88591', 'c.utf8', 'c_c', 'c_c.c', 'ca', 'ca_ad', 'ca_es', 'ca_es@valencia', 'ca_fr', 'ca_it', 'catalan', 'ce_ru', 'cextend', 'chinese-s', 'chinese-t', 'chr_us', 'ckb_iq', 'cmn_tw', 'crh_ua', 'croatian', 'cs', 'cs_cs', 'cs_cz', 'csb_pl', 'cv_ru', 'cy', 'cy_gb', 'cz', 'cz_cz', 'czech', 'da', 'da_dk', 'danish', 'dansk', 'de', 'de_at', 'de_be', 'de_ch', 'de_de', 'de_it', 'de_li.utf8', 'de_lu', 'deutsch', 'doi_in', 'dutch', 'dutch.iso88591', 'dv_mv', 'dz_bt', 'ee', 'ee_ee', 'eesti', 'el', 'el_cy', 'el_gr', 'el_gr@euro', 'en', 'en_ag', 'en_au', 'en_be', 'en_bw', 'en_ca', 'en_dk', 'en_dl.utf8', 'en_gb', 'en_hk', 'en_ie', 'en_il', 'en_in', 'en_ng', 'en_nz', 'en_ph', 'en_sc.utf8', 'en_sg', 'en_uk', 'en_us', 'en_us@euro@euro', 'en_za', 'en_zm', 'en_zw', 'en_zw.utf8', 'eng_gb', 'english', 'english.iso88591', 'english_uk', 'english_united-states', 'english_united-states.437', 'english_us', 'eo', 'eo.utf8', 'eo_eo', 'eo_us.utf8', 'eo_xx', 'es', 'es_ar', 'es_bo', 'es_cl', 'es_co', 'es_cr', 'es_cu', 'es_do', 'es_ec', 'es_es', 'es_gt', 'es_hn', 'es_mx', 'es_ni', 'es_pa', 'es_pe', 'es_pr', 'es_py', 'es_sv', 'es_us', 'es_uy', 'es_ve', 'estonian', 'et', 'et_ee', 'eu', 'eu_es', 'eu_fr', 'fa', 'fa_ir', 'fa_ir.isiri3342', 'ff_sn', 'fi', 'fi_fi', 'fil_ph', 'finnish', 'fo', 
'fo_fo', 'fr', 'fr_be', 'fr_ca', 'fr_ch', 'fr_fr', 'fr_lu', 'français', 'fre_fr', 'french', 'french.iso88591', 'french_france', 'fur_it', 'fy_de', 'fy_nl', 'ga', 'ga_ie', 'galego', 'galician', 'gd', 'gd_gb', 'ger_de', 'german', 'german.iso88591', 'german_germany', 'gez_er', 'gez_et', 'gl', 'gl_es', 'greek', 'gu_in', 'gv', 'gv_gb', 'ha_ng', 'hak_tw', 'he', 'he_il', 'hebrew', 'hi', 'hi_in', 'hi_in.isciidev', 'hif_fj', 'hne', 'hne_in', 'hr', 'hr_hr', 'hrvatski', 'hsb_de', 'ht_ht', 'hu', 'hu_hu', 'hungarian', 'hy_am', 'hy_am.armscii8', 'ia', 'ia_fr', 'icelandic', 'id', 'id_id', 'ig_ng', 'ik_ca', 'in', 'in_id', 'is', 'is_is', 'iso-8859-1', 'iso-8859-15', 'iso8859-1', 'iso8859-15', 'iso_8859_1', 'iso_8859_15', 'it', 'it_ch', 'it_it', 'italian', 'iu', 'iu_ca', 'iu_ca.nunacom8', 'iw', 'iw_il', 'iw_il.utf8', 'ja', 'ja_jp', 'ja_jp.euc', 'ja_jp.mscode', 'ja_jp.pck', 'japan', 'japanese', 'japanese-euc', 'japanese.euc', 'jp_jp', 'ka', 'ka_ge', 'ka_ge.georgianacademy', 'ka_ge.georgianps', 'ka_ge.georgianrs', 'kab_dz', 'kk_kz', 'kl', 'kl_gl', 'km_kh', 'kn', 'kn_in', 'ko', 'ko_kr', 'ko_kr.euc', 'kok_in', 'korean', 'korean.euc', 'ks', 'ks_in', 'ks_in@devanagari.utf8', 'ku_tr', 'kw', 'kw_gb', 'ky', 'ky_kg', 'lb_lu', 'lg_ug', 'li_be', 'li_nl', 'lij_it', 'lithuanian', 'ln_cd', 'lo', 'lo_la', 'lo_la.cp1133', 'lo_la.ibmcp1133', 'lo_la.mulelao1', 'lt', 'lt_lt', 'lv', 'lv_lv', 'lzh_tw', 'mag_in', 'mai', 'mai_in', 'mai_np', 'mfe_mu', 'mg_mg', 'mhr_ru', 'mi', 'mi_nz', 'miq_ni', 'mjw_in', 'mk', 'mk_mk', 'ml', 'ml_in', 'mn_mn', 'mni_in', 'mr', 'mr_in', 'ms', 'ms_my', 'mt', 'mt_mt', 'my_mm', 'nan_tw', 'nb', 'nb_no', 'nds_de', 'nds_nl', 'ne_np', 'nhn_mx', 'niu_nu', 'niu_nz', 'nl', 'nl_aw', 'nl_be', 'nl_nl', 'nn', 'nn_no', 'no', 'no@nynorsk', 'no_no', 'no_no.iso88591@bokmal', 'no_no.iso88591@nynorsk', 'norwegian', 'nr', 'nr_za', 'nso', 'nso_za', 'ny', 'ny_no', 'nynorsk', 'oc', 'oc_fr', 'om_et', 'om_ke', 'or', 'or_in', 'os_ru', 'pa', 'pa_in', 'pa_pk', 'pap_an', 'pap_aw', 'pap_cw', 'pd', 'pd_de', 
'pd_us', 'ph', 'ph_ph', 'pl', 'pl_pl', 'polish', 'portuguese', 'portuguese_brazil', 'posix', 'posix-utf2', 'pp', 'pp_an', 'ps_af', 'pt', 'pt_br', 'pt_pt', 'quz_pe', 'raj_in', 'ro', 'ro_ro', 'romanian', 'ru', 'ru_ru', 'ru_ua', 'rumanian', 'russian', 'rw', 'rw_rw', 'sa_in', 'sat_in', 'sc_it', 'sd', 'sd_in', 'sd_in@devanagari.utf8', 'sd_pk', 'se_no', 'serbocroatian', 'sgs_lt', 'sh', 'sh_ba.iso88592@bosnia', 'sh_hr', 'sh_hr.iso88592', 'sh_sp', 'sh_yu', 'shn_mm', 'shs_ca', 'si', 'si_lk', 'sid_et', 'sinhala', 'sk', 'sk_sk', 'sl', 'sl_cs', 'sl_si', 'slovak', 'slovene', 'slovenian', 'sm_ws', 'so_dj', 'so_et', 'so_ke', 'so_so', 'sp', 'sp_yu', 'spanish', 'spanish_spain', 'sq', 'sq_al', 'sq_mk', 'sr', 'sr@cyrillic', 'sr@latn', 'sr_cs', 'sr_cs.iso88592@latn', 'sr_cs@latn', 'sr_me', 'sr_rs', 'sr_rs@latn', 'sr_sp', 'sr_yu', 'sr_yu.cp1251@cyrillic', 'sr_yu.iso88592', 'sr_yu.iso88595', 'sr_yu.iso88595@cyrillic', 'sr_yu.microsoftcp1251@cyrillic', 'sr_yu.utf8', 'sr_yu.utf8@cyrillic', 'sr_yu@cyrillic', 'ss', 'ss_za', 'st', 'st_za', 'sv', 'sv_fi', 'sv_se', 'sw_ke', 'sw_tz', 'swedish', 'szl_pl', 'ta', 'ta_in', 'ta_in.tscii', 'ta_in.tscii0', 'ta_lk', 'tcy_in.utf8', 'te', 'te_in', 'tg', 'tg_tj', 'th', 'th_th', 'th_th.tactis', 'th_th.tis620', 'thai', 'the_np', 'ti_er', 'ti_et', 'tig_er', 'tk_tm', 'tl', 'tl_ph', 'tn', 'tn_za', 'to_to', 'tpi_pg', 'tr', 'tr_cy', 'tr_tr', 'ts', 'ts_za', 'tt', 'tt_ru', 'tt_ru.tatarcyr', 'tt_ru@iqtelif', 'turkish', 'ug_cn', 'uk', 'uk_ua', 'univ', 'universal', 'universal.utf8@ucs4', 'unm_us', 'ur', 'ur_in', 'ur_pk', 'uz', 'uz_uz', 'uz_uz@cyrillic', 've', 've_za', 'vi', 'vi_vn', 'vi_vn.tcvn', 'vi_vn.tcvn5712', 'vi_vn.viscii', 'vi_vn.viscii111', 'wa', 'wa_be', 'wae_ch', 'wal_et', 'wo_sn', 'xh', 'xh_za', 'yi', 'yi_us', 'yo_ng', 'yue_hk', 'yuw_pg', 'zh', 'zh_cn', 'zh_cn.big5', 'zh_cn.euc', 'zh_hk', 'zh_hk.big5hk', 'zh_sg', 'zh_sg.gbk', 'zh_tw', 'zh_tw.euc', 'zh_tw.euctw', 'zu', 'zu_za'])
>>>
</code></pre>
<p>So I guess it comes down to having the output of <code>locale -a</code> available in Python. I've found basically two ways of doing this:</p>
<ul>
<li>running a subprocess of <code>locale -a</code> and fetching the output of the call</li>
<li>getting all locale aliases in Python with <code>locale.locale_alias</code>, filtering for the ones matching the language in question, testing each entry with <code>locale.setlocale()</code>, and using the first one that does not raise an error.</li>
</ul>
<p>Are these really the only two options? Is there no better way of doing this?</p>
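<p>A sketch of the second option, kept generic (no hard-coded language-to-locale table); it only assumes the standard <code>locale</code> module:</p>

```python
import locale

def find_locale(language: str):
    # Collect the known aliases for this language and return the first one
    # that the C library actually accepts on this system.
    candidates = [alias for key, alias in locale.locale_alias.items()
                  if key == language or key.startswith(language + "_")]
    for candidate in candidates:
        try:
            locale.setlocale(locale.LC_ALL, candidate)
            return candidate
        except locale.Error:
            continue
    return None

print(find_locale("de"))
```

<p>One caveat worth noting in a Django context: <code>setlocale</code> is process-global and not thread-safe, so it is safer to probe once at startup and cache the result than to call this per request.</p>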
|
<python><django><linux><locale>
|
2024-11-26 15:15:13
| 1
| 3,749
|
sers
|
79,227,181
| 1,386,750
|
In a Python regex (re), how can I match the start of a string (^) OR a non-alphanumeric character (\W)?
|
<p>I am using the Python regex package <code>re</code>, for example to match a hashtag or URL. A hashtag starts with a <code>#</code>, which cannot be in the middle of a word. Hence, it must be preceded by a non-word character, which I can match with <code>\W</code>, OR be at the start of the string (<code>^</code>). How can I catch both cases, i.e. <code>^</code> OR <code>\W</code>? I tried things like (<code>b</code> for my bytes-like object)</p>
<pre class="lang-py prettyprint-override"><code>hashtag = rb'[^|\W]#\w*'
</code></pre>
<p>and all kinds of variations trying to escape <code>^</code>, leaving out <code>|</code>, using <code>\b</code>, etc.</p>
<p>Another example would be an url, which starts with "http(s)://":</p>
<pre class="lang-py prettyprint-override"><code>url = rb'[^|\W]https?:\/\/\w*'
</code></pre>
<p>How can this be done? Note that I am using Python, not Javascript!</p>
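<p>For what it's worth, <code>[^|\W]</code> is a character class meaning "any character that is neither <code>|</code> nor a non-word character", i.e. it matches word characters, the opposite of what is wanted. One sketch that expresses "start of string OR after a non-word character" is a negative lookbehind, which also avoids consuming the preceding character:</p>

```python
import re

text = b"#start mid #tag not#this see https://example.com"

# (?<!\w) succeeds both at the very start of the string and after any
# non-word character, which is exactly the "^ OR \W" condition.
hashtags = re.findall(rb"(?<![\w#])#\w+", text)
urls = re.findall(rb"(?<!\w)https?://\S+", text)
print(hashtags)  # [b'#start', b'#tag']
print(urls)      # [b'https://example.com']
```

<p>The alternative group form <code>(?:^|\W)</code> would also work, but it consumes the leading character, so the match text then includes the preceding space.</p>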
|
<python><regex><string><match>
|
2024-11-26 14:46:57
| 1
| 468
|
AstroFloyd
|
79,227,057
| 1,207,971
|
How to prevent Pandas to_csv double quoting empty fields in output csv
|
<p>I currently have a sample Python script that reads a CSV (with double quotes as the text qualifier) and removes non-ASCII characters and line feeds from the fields via a DataFrame. It then outputs the DataFrame to a CSV. However, the output CSV is double-quoting empty fields; see below:</p>
<pre><code>import pandas as pd
import re
import csv
# Load the CSV file into a DataFrame
file_path = "\\\\Mylocation\\Original_facility_udfs.csv" # Replace with your CSV file path
df = pd.read_csv(file_path, quotechar='"')
# Define a function to clean a single cell
def clean_field(value):
    if isinstance(value, str):
        # Remove line feeds (\n, \r) and non-ASCII characters
        value = re.sub(r'[\n\r]', ' ', value)  # Replace line feeds with a space
        value = re.sub(r'[^\x00-\x7F]', '', value)  # Remove non-ASCII characters
    return value
# Apply the cleaning function to all DataFrame fields
df_cleaned = df.map(clean_field)
# Save the cleaned DataFrame back to a new CSV file
cleaned_file_path = "\\\\Mylocation\\facility_udfs.csv" # Output file path
df_cleaned.to_csv(
    cleaned_file_path,
    index=False,
    quotechar='"',
    quoting=csv.QUOTE_ALL,  # Ensure double quotes remain as text qualifiers
    lineterminator='\n'  # Set line feed (\n) as the row terminator
)
print(f"Cleaned CSV saved to: {cleaned_file_path}")
</code></pre>
<p>My current output is as follows</p>
<pre><code>"date_key","facility_key","udf_type","udf_area_indic","udf_area"
"20240830","251","GL Unit Code","Facility for Type","",""
"20240830","251","Cost Center","Facility for Type","",""
</code></pre>
<p>the desired output should be</p>
<pre><code>"date_key","facility_key","udf_type","udf_area_indic","udf_area"
"20240830","251","GL Unit Code","Facility for Type",,
"20240830","251","Cost Center","Facility for Type",,
</code></pre>
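<p>With <code>QUOTE_ALL</code>, pandas writes missing values as an empty string and then quotes it, so the <code>""</code> is hard to avoid through <code>to_csv</code> options alone. One hedged workaround is to do the quoting manually, quoting every real value and emitting nothing for NaN (a sketch on toy data, not the full script; another option is post-processing the written file):</p>

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({"udf_type": ["GL Unit Code", "Cost Center"],
                   "udf_area": [np.nan, np.nan]})

def quote(value):
    # Quote real values; emit nothing at all for missing ones
    return "" if pd.isna(value) else '"%s"' % str(value).replace('"', '""')

lines = [",".join(quote(col) for col in df.columns)]
lines += [",".join(quote(v) for v in row) for row in df.itertuples(index=False)]
csv_text = "\n".join(lines)
print(csv_text)
```

<p>This produces quoted headers and values but leaves NaN fields completely empty, matching the desired output shape.</p>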
|
<python><pandas><dataframe><csv>
|
2024-11-26 14:13:02
| 1
| 301
|
Eseosa Omoregie
|
79,226,981
| 2,679,476
|
Illegal instruction (core dumped) import keras
|
<p>If I try to do 'import keras', it gives core dump.</p>
<pre><code>python3 -c 'import keras; print(keras.__version__)'
Illegal instruction (core dumped)
</code></pre>
<p>or</p>
<pre><code>python3 test.py
Illegal instruction (core dumped)
</code></pre>
<p>I tried to uninstall and re-install keras, but no luck. Current version is :</p>
<pre><code>pip3 list | grep keras
keras 2.13.1
</code></pre>
<p>I am running Virtual Machine, CPU Architecture: x86_64</p>
<p>How to solve this ?</p>
<p>Note: here are the details of the CPU where I am running Linux.</p>
<pre><code>CPU family: 6
Model: 140
Model name: 11th Gen Intel(R) Core(TM) i7-1165G7 @ 2.80GHz
Stepping: 1
CPU MHz: 2803.198
</code></pre>
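<p>A common cause of <code>Illegal instruction</code> with TensorFlow-backed Keras is a prebuilt wheel that assumes AVX support, while the VM does not pass AVX through to the guest even though the host i7-1165G7 supports it. A quick Linux-only check of what the guest actually sees (a diagnostic sketch, not a fix):</p>

```python
# Read the CPU feature flags the guest VM exposes; if "avx" is missing,
# a stock TensorFlow wheel will crash exactly like this.
flags = set()
with open("/proc/cpuinfo") as f:
    for line in f:
        if line.startswith("flags"):
            flags.update(line.split(":", 1)[1].split())
            break
for feature in ("avx", "avx2", "fma", "sse4_2"):
    print(feature, "present" if feature in flags else "MISSING")
```

<p>If AVX turns out to be missing, the usual remedies are enabling host-passthrough CPU mode in the hypervisor or installing a TensorFlow build compiled without AVX.</p>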
|
<python><keras>
|
2024-11-26 13:51:05
| 2
| 459
|
user2679476
|
79,226,797
| 34,747
|
How to build a cffi extension for inclusion in binary python wheel with uv
|
<p>I am migrating a <a href="https://github.com/AlertaDengue/pyreaddbc" rel="nofollow noreferrer">library of mine</a> to use <a href="https://docs.astral.sh/uv/" rel="nofollow noreferrer">uv</a> to manage the package. My package, however, includes a C extension that is wrapped and compiled through CFFI (there's a build script for this).</p>
<p>Below is the current version of <code>pyproject.toml</code>, which fails to run the build script; and if I build the extension manually, <code>uv build --wheel</code> does not include the <code>_readdbc.so</code> file in the wheel.</p>
<pre class="lang-ini prettyprint-override"><code>[project]
authors = [
{name = "Flavio Codeco Coelho", email = "fccoelho@gmail.com"},
{name = "Sandro Loch", email = "es.loch@gmail.com"},
]
license = {text = "AGPL-3.0"}
requires-python = "<4,>=3.9"
dependencies = [
"dbfread<3,>=2.0.7",
"tqdm<5,>=4.64.0",
"cffi<2,>=1.15.1",
"pyyaml>=6",
"setuptools>=75.6.0",
]
name = "pyreaddbc"
version = "1.1.0"
description = "pyreaddbc package"
readme = "README.md"
classifiers = [
"Development Status :: 3 - Alpha",
"Intended Audience :: Developers",
"Operating System :: OS Independent",
"Topic :: Software Development :: Libraries :: Python Modules",
"License :: OSI Approved :: GNU Affero General Public License v3 or later (AGPLv3+)",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
]
[tool.setuptools]
packages = ["pyreaddbc"]
[build-system]
requires = ["setuptools>=61"]
build-backend = "setuptools.build_meta"
[build]
default = ["python", "-m", "./pyreaddbc/_build_readdbc.py"] # this script never runs
[tool.black]
line-length = 79
skip-string-normalization = true
target-version = ["py39", "py310", "py311", "py312"]
exclude = "docs/"
[project.optional-dependencies]
dev = [
"pytest<9.0.0,>=8.1.1",
"pandas<3.0.0,>=2.1.0",
]
[project.urls]
Repository = "https://github.com/AlertaDengue/pyreaddbc"
</code></pre>
<p>What am I missing?</p>
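<p>A hedged sketch of one route: <code>uv</code> does not execute custom <code>[build]</code> tables; with the setuptools backend, CFFI extensions are usually declared through the <code>cffi_modules</code> setup keyword (provided by cffi's setuptools integration) in a small <code>setup.py</code> next to <code>pyproject.toml</code>. The attribute name <code>ffibuilder</code> below is an assumption about what <code>_build_readdbc.py</code> defines:</p>

```python
# setup.py, kept alongside pyproject.toml; setuptools.build_meta picks it
# up, and cffi's setuptools hook compiles the extension into the wheel.
# "ffibuilder" is assumed to be the FFI() object in _build_readdbc.py;
# adjust the attribute name to whatever the build script really defines.
from setuptools import setup

setup(
    cffi_modules=["pyreaddbc/_build_readdbc.py:ffibuilder"],
)
```

<p>For this to work at build time, <code>cffi</code> also needs to be listed in <code>[build-system].requires</code> alongside setuptools, since the hook must be importable inside the isolated build environment.</p>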
|
<python><setuptools><python-cffi><uv>
|
2024-11-26 12:53:16
| 1
| 6,262
|
fccoelho
|
79,226,717
| 15,520,615
|
Unable to read file in Databricks with Pandas. Error: FileNotFoundError: [Errno 2] No such file or directory: '/FileStore/tables/ge_selection.csv'
|
<p>I have uploaded a file called ge_selection.csv to '/FileStore/tables/' in Databricks Community Edition.</p>
<p>When I run the following code the contents of the .csv is displayed:</p>
<pre><code>file_location = "/FileStore/tables/ge_selection.csv"
file_type = "csv"
# CSV options
infer_schema = "false"
first_row_is_header = "false"
delimiter = ","
# The applied options are for CSV files. For other file types, these will be ignored.
df = spark.read.format(file_type) \
    .option("inferSchema", infer_schema) \
    .option("header", first_row_is_header) \
    .option("sep", delimiter) \
    .load(file_location)
display(df)
</code></pre>
<p>However, when I try to read the same file with Pandas I get the error:</p>
<pre><code>FileNotFoundError: [Errno 2] No such file or directory: '/FileStore/tables/ge_selection.csv'
</code></pre>
<p>Trying both the following methods:</p>
<pre><code>import pandas as pd
df_ge = pd.read_csv(file_location)
df_ge = pd.read_csv("/FileStore/tables/ge_selection.csv")
</code></pre>
<p>Any thoughts on why I'm not able to read the file with Pandas?</p>
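<p>Spark resolves <code>/FileStore/...</code> through DBFS, while pandas reads from the driver's local filesystem, where DBFS is exposed under the <code>/dbfs</code> FUSE mount (note: this mount is not always available on Community Edition, in which case the file has to be copied locally first). A small path-mapping sketch:</p>

```python
def dbfs_to_local(path: str) -> str:
    # Map a Spark/DBFS path to the local mount point that pandas can read
    if path.startswith("dbfs:"):
        path = path[len("dbfs:"):]
    return path if path.startswith("/dbfs") else "/dbfs" + path

print(dbfs_to_local("/FileStore/tables/ge_selection.csv"))
# /dbfs/FileStore/tables/ge_selection.csv
```

<p>With that mapping, <code>pd.read_csv(dbfs_to_local(file_location))</code> points pandas at the same file Spark already reads.</p>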
|
<python><pandas><databricks>
|
2024-11-26 12:24:49
| 1
| 3,011
|
Patterson
|
79,226,669
| 6,251,742
|
How can a lambda expression reference itself before instantiation?
|
<p>For fun, I tried to create a linked list with lambda functions. I tried this little code as a first step, but my experimentation was cut short by an infinite loop:</p>
<pre class="lang-py prettyprint-override"><code>import itertools

stack = lambda: ("c", None)
stack = lambda: ("b", stack)
stack = lambda: ("a", stack)

for _ in itertools.count():
    c, stack = stack()
    print(c)
</code></pre>
<p>Expected output:</p>
<pre class="lang-none prettyprint-override"><code>a
b
c
TypeError: 'NoneType' object is not callable
</code></pre>
<p>Actual output:</p>
<pre class="lang-none prettyprint-override"><code>a
a
a
...
</code></pre>
<p>I expect the second lambda to carry a local (to second lambda) reference to the first lambda, and the third lambda to carry a local (to third lambda) reference to the second lambda.</p>
<p>Unexpectedly, the third lambda returns a reference to itself, making the unstacking loop on the same lambda forever. This wasn't expected, since <code>stack</code>, at the moment each lambda is created, points to the previous lambda. How can a lambda hold a reference to itself before it is even created?</p>
<p>As we can see in <a href="https://pythontutor.com" rel="nofollow noreferrer">Python Tutor</a> (<a href="https://pythontutor.com/render.html#code=import%20itertools%0A%0Astack%20%3D%20lambda%3A%20%28%22c%22,%20None%29%0Astack%20%3D%20lambda%3A%20%28%22b%22,%20stack%29%0Astack%20%3D%20lambda%3A%20%28%22a%22,%20stack%29%0Afor%20_%20in%20itertools.count%28%29%3A%0A%20%20%20%20_,%20stack%20%3D%20stack%28%29%0A%20%20%20%20input%28_%29&cumulative=false&curInstr=4&heapPrimitives=nevernest&mode=display&origin=opt-frontend.js&py=311&rawInputLstJSON=%5B%5D&textReferences=false" rel="nofollow noreferrer">permalink to example</a>), each lambda is replaced by the new lambda, each time referencing itself (except the first that return None as second element of the tuple):</p>
<p><a href="https://i.sstatic.net/MsfQv5pB.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/MsfQv5pB.png" alt="python tutor screenshot showing memory map of the script" /></a></p>
<p>This doesn't make sense to me, because of Python's call-by-object behavior, and because, as I understand it, a lambda function carries within its closure (accessible via <code>stack.__closure__</code>) a cell that points to the value. But in this case there are no closures at module level, and even putting the code in a function creates a cell that references the lambda expression itself:</p>
<pre class="lang-py prettyprint-override"><code>>>> def make_stack():
...     stack = lambda: ("c", None)
...     stack = lambda: ("b", stack)
...     stack = lambda: ("a", stack)
...     return stack
...
>>> stack = make_stack()
>>> stack.__closure__[0].cell_contents is stack
True
</code></pre>
<p>How is this possible for the lambda to have a reference to itself, an object not even created yet?</p>
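<p>A sketch of the usual fix, for contrast: lambdas capture names, not values, but a default argument is evaluated once at definition time, which freezes a reference to the previous <code>stack</code> instead of looking the name up later:</p>

```python
# "s=stack" snapshots the *current* lambda at definition time, so each
# layer really does point at the previous one rather than at the name.
stack = lambda: ("c", None)
stack = lambda s=stack: ("b", s)
stack = lambda s=stack: ("a", s)

out = []
while stack is not None:
    value, stack = stack()
    out.append(value)
print(out)  # ['a', 'b', 'c']
```

<p>Without the default argument, every lambda body just looks up the name <code>stack</code> at call time, and by then the name is bound to the last lambda, hence the infinite <code>a</code> loop.</p>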
|
<python><lambda><linked-list>
|
2024-11-26 12:10:35
| 1
| 4,033
|
Dorian Turba
|
79,226,508
| 2,300,597
|
PySpark GroupedData - chain several different aggregation methods
|
<p>I am playing with <code>GroupedData</code> in pyspark.</p>
<p>This is my environment.</p>
<pre class="lang-bash prettyprint-override"><code>Welcome to
____ __
/ __/__ ___ _____/ /__
_\ \/ _ \/ _ `/ __/ '_/
/___/ .__/\_,_/_/ /_/\_\ version 3.5.1
/_/
Using Scala version 2.12.18, OpenJDK 64-Bit Server VM, 11.0.24
Branch HEAD
</code></pre>
<p><a href="https://spark.apache.org/docs/latest/api/python/reference/pyspark.sql/api/pyspark.sql.GroupedData.html" rel="nofollow noreferrer">https://spark.apache.org/docs/latest/api/python/reference/pyspark.sql/api/pyspark.sql.GroupedData.html</a></p>
<p>I wonder if the following is possible.</p>
<p>Say I want to use only the methods of <code>GroupedData</code>, and not import any functions from <code>pyspark.sql.functions</code>.</p>
<p>OK, suppose I have some <code>DataFrame</code> and I've grouped it already by <code>column A</code> and I've got a <code>GroupedData</code> object back.</p>
<p>Now I want to do on my <code>GroupedData</code> object say <code>sum(column B)</code>, and say <code>avg(column C)</code> and maybe <code>min(column D)</code> in one shot or via chained method calls.</p>
<p>Can I do this just by using GroupedData methods?</p>
<p>I am asking this because it seems that once I've done <code>sum(column B)</code>, I don't have a <code>GroupedData</code> object anymore, and so I cannot continue to chain any <code>GroupedData</code> methods further.</p>
<p>So is that (what I have in mind) possible or not?<br />
If it's possible, how can we do it?</p>
|
<python><apache-spark><pyspark>
|
2024-11-26 11:25:29
| 2
| 39,631
|
peter.petrov
|
79,226,479
| 1,693,057
|
How to Parse logfmt Error Messages in Datadog
|
<p>I'm currently trying to parse <code>logfmt</code> messages using <code>Datadog</code>, but I'm running into some issues. Here's an example log message that we're trying to parse:</p>
<pre class="lang-none prettyprint-override"><code>time="2024-11-25 10:07:11" level=ERROR logger=mycompany.gis.infra.maps_service msg="unable to parse the provided address" gis_source=maps_service correlation_id=xxxxx client_id=yyyyy project_id=zzzzz exc_info="Traceback (most recent call last):
File \"/usr/local/lib/python3.12/site-packages/mycompany/gis/infra/maps_service.py\", line 95, in map_details_to_address
return Address(
^^^^^^^^^^^^^^
File \"/usr/local/lib/python3.12/site-packages/pydantic/main.py\", line 341, in __init__
raise validation_error
pydantic.error_wrappers.ValidationError: 1 validation error for Address
country
value is not a valid enumeration member; permitted: 'CH', 'FR', 'AT', 'DE' (type=type_error.enum; enum_values=[<Country.CH: 'CH'>, <Country.FR: 'FR'>, <Country.AT: 'AT'>, <Country.DE: 'DE'>])"
</code></pre>
<p>The log is generated by our Python application using logfmter, and we need to extract details like the timestamp, error level, logger name, message, correlation ID, client ID, and project ID for monitoring purposes. However, we are especially struggling to properly parse and structure the <code>exc_info</code> part of the log.</p>
<p>We're using the standard Datadog pipelines and have tried applying various parsing rules, but the nested nature of <code>exc_info</code> is giving us trouble.</p>
<p>Could anyone provide advice for parsing this type of log message effectively in Datadog, especially the <code>exc_info</code> section?</p>
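<p>As a side note on the token shape a parsing rule has to handle: logfmt values are either bare (up to whitespace) or double-quoted, and the quoted <code>exc_info</code> value spans several lines and contains escaped inner quotes. A small Python sketch of that grammar (illustrating the structure a custom Datadog grok/regex rule needs to capture, not Datadog syntax itself):</p>

```python
import re

# key=bare-value OR key="quoted value", where quoted values may contain
# escaped characters (\") and, with re.S, span multiple lines.
PAIR = re.compile(r'(\w+)=(?:"((?:[^"\\]|\\.)*)"|(\S+))', re.S)

def parse_logfmt(record: str) -> dict:
    out = {}
    for m in PAIR.finditer(record):
        out[m.group(1)] = m.group(2) if m.group(2) is not None else m.group(3)
    return out

sample = 'time="2024-11-25 10:07:11" level=ERROR msg="unable to parse the provided address" client_id=yyyyy'
print(parse_logfmt(sample))
```

<p>The key point for the pipeline is that the quoted-value alternative must be non-greedy about closing quotes but tolerant of <code>\"</code> escapes, otherwise <code>exc_info</code> truncates at the first inner quote.</p>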
<p>Thanks in advance!</p>
|
<python><logging><datadog>
|
2024-11-26 11:17:58
| 1
| 2,837
|
Lajos
|
79,226,130
| 6,681,932
|
GPS tracking streamlit in mobile device
|
<p>I'm running a Streamlit app where I try to retrieve the user's geolocation.
However, when using <code>geocoder.ip("me")</code>, the coordinates returned are 45, -121, which point to Oregon, USA, rather than my actual location.</p>
<p>This is the function I use:</p>
<pre><code>def get_lat_lon():
    # Use geocoder to get the location based on IP
    g = geocoder.ip('me')
    if g.ok:
        lat = g.latlng[0]  # Latitude
        lon = g.latlng[1]  # Longitude
        return lat, lon
    else:
        st.error("Could not retrieve location from IP address.")
        return None, None
</code></pre>
<p>I would like to find a solution that can work in an streamlit app so by clicking a <code>st.button</code> I can call a function that retrieves my lat, long.</p>
|
<python><gps><ip><streamlit><interactive>
|
2024-11-26 09:49:34
| 1
| 478
|
PeCaDe
|
79,225,846
| 1,406,168
|
Deploying python function apps to azure
|
<p>I am trying to make my first Python function app. The app is just a simple app that creates a blob file. The code runs locally. However, when I deploy to Azure I don't see the function app.</p>
<p>If I remove this, then it works:</p>
<pre><code>from azure.identity import DefaultAzureCredential
from azure.storage.blob import BlobServiceClient, BlobClient, ContainerClient
</code></pre>
<p>Full code:</p>
<pre><code>import logging
import datetime
import azure.functions as func
from azure.identity import DefaultAzureCredential
from azure.storage.blob import BlobServiceClient, BlobClient, ContainerClient
app = func.FunctionApp()

@app.timer_trigger(schedule="0 * * * * *", arg_name="myTimer", run_on_startup=False, use_monitor=False)
def timer_trigger(myTimer: func.TimerRequest) -> None:
    logging.info('Python timer trigger function executed.')
</code></pre>
<p>Configuration in Azure:</p>
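<p>One frequent cause of a deployed Python function app not appearing is an import error during indexing: the extra packages must be listed in a <code>requirements.txt</code> at the project root so they are installed on deploy. A sketch (hypothetical minimal file, versions omitted deliberately):</p>

```
azure-functions
azure-identity
azure-storage-blob
```

<p>This would explain why removing the <code>azure.identity</code> and <code>azure.storage.blob</code> imports makes the function show up: without those lines the imports fail in Azure, and a function whose module fails to import is silently not indexed.</p>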
|
<python><azure><azure-functions>
|
2024-11-26 08:17:38
| 1
| 5,363
|
Thomas Segato
|
79,225,810
| 10,883,088
|
Parameters for dataset and data-aware scheduling in Airflow
|
<p>Has anyone had issues with their DAG having tasks not complete when passing <code>default_args</code> to the consumer dag?</p>
<p>Producer DAG Upstream:</p>
<pre><code>from airflow.datasets import Dataset
from airflow.providers.common.sql.operators.sql import SQLExecuteQueryOperator
from airflow.operators.empty import EmptyOperator
MY_DATA = Dataset('bigquery://my-project-name/my-schema/-my-table')
data_set_operator = EmptyOperator(
    task_id="producer",
    outlets=MY_DATA,
)

default_args = {
    "start_date": (2024, 11, 20),
    "depends_on_past": False,
    "on_failure_callback": some_function,
}

with DAG(
    dag_id="my_dag",
    max_active_runs=1,
    default_args=default_args,
    schedule_interval="30 8 * * *",
) as dag:
    sql_task = SQLExecuteQueryOperator(
        task_id=task_id,
        query="my_query",
        conn_id="bq_conn_id",
        params=my_dictionary,
    )

    sql_task >> data_set_operator
</code></pre>
<p>Consumer DAG Downstream</p>
<pre><code>from airflow.datasets import Dataset
from airflow.providers.common.sql.operators.sql import SQLExecuteQueryOperator
MY_DATA = Dataset('bigquery://my-project-name/my-schema/-my-table')
default_args = {
    "start_date": (2024, 11, 20),
    "depends_on_past": False,
    "on_failure_callback": some_function,
}

with DAG(
    dag_id="my_dag",
    max_active_runs=1,
    # if I comment out default args the dag works.
    default_args=default_args,
    schedule=MY_DATA,
) as dag:
    sql_task = SQLExecuteQueryOperator(
        task_id=task_id,
        query="my_query2",
        conn_id="bq_conn_id",
        params=my_dictionary2,
    )
</code></pre>
<p>When the producer DAG completes, the downstream DAG has a completed run, but the actual task doesn't run: the task is blank/missing rather than green. After I comment out/remove the <code>default_args</code> parameter from the consumer DAG, it runs accordingly and I can see the task actually run. Can we not pass <code>default_args</code> into the consumer DAG?</p>
|
<python><python-3.x><airflow>
|
2024-11-26 08:05:39
| 1
| 471
|
dko512
|
79,225,780
| 614,944
|
Huge memory consumption with SD3.5-medium
|
<p>I have a g4dn.xlarge AWS GPU instance, it has 16GB memory + 48GB swap, and a Tesla T4 GPU Instance with 16GB vRAM.</p>
<p>According to the <a href="https://stability.ai/news/introducing-stable-diffusion-3-5" rel="nofollow noreferrer">stability blog</a>, it should be sufficient to run SD3.5 Medium model.</p>
<p><a href="https://i.sstatic.net/Da6zqXz4.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Da6zqXz4.png" alt="enter image description here" /></a></p>
<p>So I downloaded the model from <a href="https://huggingface.co/stabilityai/stable-diffusion-3.5-medium" rel="nofollow noreferrer">hugging face</a>, and launched my test program.</p>
<p>At first I saw the memory and swap usage go up, consuming ~30GB in total. Then the system memory started to go down while the Nvidia GPU memory usage grew slowly. Later it failed with a memory allocation error: it could not allocate more memory after ~15GB of GPU memory had been allocated.</p>
<p>My questions are,</p>
<ol>
<li>Is it normal to consume that much of memory? Both system and GPU level.</li>
<li>What's wrong with my program?</li>
</ol>
<p>Attached my source code,</p>
<pre class="lang-py prettyprint-override"><code>import os
import json
import torch
from diffusers import DiffusionPipeline
pipe = DiffusionPipeline.from_pretrained("./stable-diffusion-3.5-medium/")
if torch.cuda.is_available():
    print('use cuda')
    pipe = pipe.to("cuda")
elif torch.backends.mps.is_available():
    print('use mps')
    pipe = pipe.to('mps')
else:
    print('use cpu')

data = []
with open('data.json', 'r') as f:
    data = json.load(f)

os.makedirs('output', exist_ok=True)

for row in data:
    prompt = row['prompt']
    filename = 'output/%s.png' % (row['uuid'])
    height = 1280
    width = 1280
    if row['aspect_ratio'] == '16:9':
        width = 720
    elif row['aspect_ratio'] == '9:16':
        width = 720
        height = 1280
    print('saving', filename)
    image = pipe(prompt, height=height, width=width).images[0]
    image.save(filename)
</code></pre>
|
<python><torch><stability><diffusers>
|
2024-11-26 07:53:54
| 1
| 23,701
|
daisy
|
79,225,757
| 17,580,381
|
How to replace the deprecated WebElement.get_attribute() function in Selenium 4.27.0+
|
<p>On some web pages, due to rendering issues (e.g., hidden element), <code>WebElement.text</code> may not reveal the underlying text whereas <code>WebElement.get_attribute("textContent")</code> will. Therefore I have written the following utility function:</p>
<pre><code>from selenium.webdriver.remote.webelement import WebElement

def text(e: WebElement) -> str:
    return e.text or e.get_attribute("textContent") or "n/a"
</code></pre>
<p><code>WebElement.get_attribute()</code> is deprecated in Selenium 4.27.0. The recommendation is to use <code>WebElement.get_dom_attribute()</code>.</p>
<p>However, this is not a drop-in replacement because <code>WebElement.get_dom_attribute()</code> will only reveal attributes declared in the HTML markup.</p>
<p>How would I achieve the same functionality without <code>WebElement.get_attribute()</code>?</p>
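<p>A possible workaround (a sketch, not an official replacement): fetch the DOM property through <code>execute_script()</code>, which is not deprecated and returns the live <code>textContent</code> just as <code>get_attribute("textContent")</code> did. Note the extra <code>driver</code> parameter, which is an addition to the original helper's signature:</p>
<pre><code>def text(e, driver):
    # Prefer the rendered text; fall back to the live textContent DOM property
    return (
        e.text
        or driver.execute_script("return arguments[0].textContent;", e)
        or "n/a"
    )
</code></pre>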
|
<python><selenium-webdriver>
|
2024-11-26 07:43:21
| 2
| 28,997
|
Ramrab
|
79,225,465
| 5,213,857
|
Montreal Forced Aligner(MFA) taking too much time(almost 18 days still going on) to train a 33 GB corpus
|
<p>We are using Montreal Forced Aligner (MFA) 3.x to train an acoustic model on a large dataset (~33GB of audio and transcripts in an Indian language). The training process is taking an extremely long time: almost 18 days so far, and it is still running.</p>
<p>Here are the details of my setup:</p>
<ul>
<li><p><strong>Hardware:</strong> [16 vCPU, 16 GB RAM]</p>
</li>
<li><p><strong>Audio Format:</strong> WAV files</p>
</li>
<li><p><strong>Corpus Details:</strong> 33GB.</p>
</li>
<li><p><strong>Command Used:</strong> <code>mfa train</code> with default parameters.</p>
</li>
<li><p><strong>Version:</strong> MFA 3.x</p>
</li>
</ul>
<p>Can we reduce the training time? Can we predict a ballpark time for the total time taken for training?</p>
|
<python><nlp><speech-recognition><kaldi><phoneme>
|
2024-11-26 05:53:39
| 0
| 1,881
|
Swayangjit
|
79,225,412
| 219,153
|
Why this Pandas DataFrame column operation fails?
|
<p>This script works fine with Python 3.11 and Pandas 2.2:</p>
<pre><code>import pandas as pd
df = pd.read_csv('test.csv', comment='#')
df['x1'] = df['x1']*8
# df['y1'] = df['y1']*8
print(df)
</code></pre>
<p>and prints:</p>
<pre><code> x1 y1
0 0 0
1 16 6
2 32 12
</code></pre>
<p>but fails when I uncomment <code>df['y1'] = df['y1']*8</code> and produces <code>KeyError: 'y1'</code>. Why is that? <code>'y1'</code> is a valid key. Here is the <code>test.csv</code> file:</p>
<pre><code># comment
x1, y1
0, 0
2, 6
4, 12
</code></pre>
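<p>Worth noting: the header row <code>x1, y1</code> has a space after the comma, so the second column is read with a leading space as <code>' y1'</code>. A quick sketch on an in-memory copy of the file, using <code>skipinitialspace=True</code> to strip those spaces:</p>
<pre><code>import io
import pandas as pd

csv_text = "# comment\nx1, y1\n0, 0\n2, 6\n4, 12\n"

df = pd.read_csv(io.StringIO(csv_text), comment='#')
print(df.columns.tolist())   # note the leading space in the second column name

df = pd.read_csv(io.StringIO(csv_text), comment='#', skipinitialspace=True)
df['y1'] = df['y1'] * 8      # now the key exists
</code></pre>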
|
<python><pandas><dataframe>
|
2024-11-26 05:22:39
| 2
| 8,585
|
Paul Jurczak
|
79,225,406
| 3,414,663
|
How do you express the identity expression?
|
<p>How do you express the identity expression in Polars?</p>
<p>By this I mean the expression <code>idexpr</code> that when you do <code>lf.filter(idexpr)</code> you get the entirety of <code>lf</code>.</p>
<p>Similar to <code>SELECT(*)</code> in SQL.</p>
<p>I'm resorting to a logical expression like</p>
<pre><code>idexpr = (pl.col("a") == 0) | (pl.col("a") != 0)
</code></pre>
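<p>One candidate, sketched below (assuming a constant predicate is acceptable): <code>pl.lit(True)</code> builds a literal boolean expression that accepts every row, without referencing any column:</p>
<pre><code>import polars as pl

lf = pl.LazyFrame({"a": [0, 1, 2]})

idexpr = pl.lit(True)   # always-true predicate
assert lf.filter(idexpr).collect().height == lf.collect().height
</code></pre>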
|
<python><python-polars>
|
2024-11-26 05:19:41
| 2
| 589
|
user3414663
|
79,225,262
| 6,702,598
|
How to check the result of a mocked function without relying on calling convention
|
<p>In Python,</p>
<p>can I check the parameters of a mocked function <em>using parameter names</em> in a test, without relying on the way the function was called (either with args or with kwargs)?</p>
<h3>Concrete example</h3>
<p>I'd like this example to return a passing test:</p>
<pre><code>import unittest
from unittest.mock import MagicMock

# function under test.
# Please assume that `callback` has proper type annotations.
def domain_code_function(callback):
    callback(1, 2, c=3)

# function that would normally be passed to `domain_code_function`
# **Note**:
# This function has three params `a`, `b` and `c`.
def add_values(a, b, c):
    return a + b + c

class Tests(unittest.TestCase):
    def test_works(self):
        # setup
        mymock = MagicMock()

        # run
        domain_code_function(mymock)

        # check
        # **Note**:
        # Verify each parameter by name.
        mymock.assert_called_with(a=1, b=2, c=3)
</code></pre>
<h3>Constraints:</h3>
<ul>
<li>I don't want to change how the function is called within the domain code (args vs kwargs) (As stated in title as description)</li>
</ul>
<h3>Background</h3>
<p>Relying on positional arguments for tests seems brittle and not very well readable.</p>
<h3>What I've tried</h3>
<p>Using unittest.mock.call:</p>
<ul>
<li><code>mymock.assert_has_calls([call(a=1,b=2,c=3)])</code></li>
<li>By the way, not even checking purely with positional arguments works: <code>mymock.assert_has_calls([call(1,2,3)])</code></li>
</ul>
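<p>One stdlib-only idea (a sketch, not a feature of <code>unittest.mock</code> itself): bind the recorded call to the real function's signature with <code>inspect.signature</code>, which normalizes positional and keyword arguments into parameter names before comparing:</p>
<pre><code>import inspect
from unittest.mock import MagicMock

def domain_code_function(callback):
    callback(1, 2, c=3)

def add_values(a, b, c):
    return a + b + c

def assert_called_with_named(mock, signature_source, **expected):
    # Normalize however the call was made (args or kwargs) to parameter names
    args, kwargs = mock.call_args
    bound = inspect.signature(signature_source).bind(*args, **kwargs)
    bound.apply_defaults()
    assert bound.arguments == expected, bound.arguments

mymock = MagicMock()
domain_code_function(mymock)
assert_called_with_named(mymock, add_values, a=1, b=2, c=3)
</code></pre>
<p>Because <code>bind()</code> resolves both calling conventions to the same mapping, the check passes whether the domain code uses args or kwargs.</p>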
|
<python><python-3.x><unit-testing><mocking>
|
2024-11-26 03:43:56
| 0
| 3,673
|
DarkTrick
|
79,225,136
| 1,335,475
|
Trouble building onnxruntime python bindings on macos
|
<p>I am curious if anyone is doing this successfully, and if so how?</p>
<p>I can build the shared libraries for inference successfully using the instructions on</p>
<p><a href="https://onnxruntime.ai/docs/build/inferencing.html" rel="nofollow noreferrer">https://onnxruntime.ai/docs/build/inferencing.html</a></p>
<p>However there are a few different variants for building the python bindings, including parameters to cmake and the setup.py script. I am probably not doing this right, but simply following the website guidance of:</p>
<pre><code>export ONNX_ML=1
python3 setup.py bdist_wheel
pip3 install --upgrade dist/*.whl
</code></pre>
<p>Fails immediately with:</p>
<pre><code>error: package directory 'onnxruntime/backend' does not exist
</code></pre>
<p>I can fix that by editing the setup.py to look for the backend where it actually is (<code>onnxruntime/python/backend</code>), but then there are more and more confusing package directory mismatches.</p>
<p>Surely I have made a mistake in the process, and I am curious if anyone is successfully doing this.</p>
|
<python><macos><onnxruntime>
|
2024-11-26 02:26:31
| 1
| 664
|
Blunt Jackson
|
79,225,016
| 11,741,232
|
Altair layer chart, force x axis and y axis + ticks to be all together
|
<p>I have a bunch of dataframes that I don't want to merge because they have the same column names; I want them in different series.</p>
<p>So, I make separate charts and put them in a layerchart together at the end.</p>
<p>Sometimes, though, the data is such that the chart layout changes: the x-axis may end up on the top or bottom, the y-axis on the left or right, with different offsets and different x and y domains. This makes the final layer chart have a ton of axes and it looks horrible. How can I force them all to use one unified axis? <code>resolve_scale='shared'</code> makes them scale to the same domain and range, but still leaves behind all the copies.</p>
<p>Edit: I've tried resolve_axis() too. I'm confused, it seems like it's forcing independent axes.</p>
<p><a href="https://i.sstatic.net/T0xxLTJj.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/T0xxLTJj.png" alt="enter image description here" /></a></p>
|
<python><altair>
|
2024-11-26 01:08:22
| 0
| 694
|
kevinlinxc
|
79,224,936
| 11,751,799
|
What to do with `matplotlib` graphs given in a list that a function returns (or what is an alternative function architecture)?
|
<p>I have a function that works something like this.</p>
<pre class="lang-py prettyprint-override"><code>import numpy as np
import matplotlib.pyplot as plt

def plot_from_dave(n=100, r=10):
    my_list = []
    for i in range(r):
        fig, ax = plt.subplots()
        x = np.random.normal(0, 1, n)
        y = np.random.normal(0, 1, n)
        ax.scatter(x, y)
        my_list.append((fig, ax))
    return my_list
</code></pre>
<p>The function creates many plots and then saves the figure-axis tuples to a list that is returned.</p>
<p>I now want to access those figures and axes for downstream customization of the plots. However, accessing them has been problematic. For instance, in the below, I get nothing.</p>
<pre class="lang-py prettyprint-override"><code>np.random.seed(2024)
plot_list = plot_from_dave(10, 3)
plot_list[0]
plt.show()
</code></pre>
<p>How can I access these figures and axes to apply further customization outside of the function? Alternatively, how can I set up the function to allow customization of the plots?</p>
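<p>For what it's worth, each list element is a <code>(fig, ax)</code> tuple, so indexing the list only returns the tuple without drawing anything; unpacking it and calling methods on the objects directly should work. A sketch (using the non-interactive Agg backend, an assumption for headless use):</p>
<pre class="lang-py prettyprint-override"><code>import numpy as np
import matplotlib
matplotlib.use("Agg")   # headless backend, so the sketch runs without a display
import matplotlib.pyplot as plt

def plot_from_dave(n=100, r=10):
    my_list = []
    for _ in range(r):
        fig, ax = plt.subplots()
        ax.scatter(np.random.normal(0, 1, n), np.random.normal(0, 1, n))
        my_list.append((fig, ax))
    return my_list

plot_list = plot_from_dave(10, 3)
fig, ax = plot_list[0]               # unpack the (figure, axes) tuple
ax.set_title("Customized afterwards")
ax.set_xlabel("x")
fig.savefig("first_plot.png")        # or fig.show() in an interactive session
</code></pre>
<p>Indexing the list in a script produces no output because nothing asks matplotlib to draw; calling <code>fig.savefig()</code> or <code>fig.show()</code> (or <code>plt.show()</code> while the figures are still open) does.</p>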
|
<python><matplotlib><graph>
|
2024-11-26 00:14:40
| 1
| 500
|
Dave
|
79,224,916
| 3,808,982
|
Finding the Second-order sections representation of the IIR filter in C#
|
<p>I was given a small amount of Python code that quickly calculates and returns a representation of the second-order sections of an IIR filter.</p>
<pre class="lang-py prettyprint-override"><code> from scipy.signal import butter
# Parameters
order = 1
sample_count = 2048 # Hz 2048
burst_length = 1650 # samples 1650
nyquist = sample_count / 2 # Nyquist frequency
# Get SOS coefficients
cutoff = 0.01 * nyquist
sos = butter(N=order, Wn=cutoff, btype='high',fs=sample_count, output='sos')
print("SOS Coefficients 1:")
print(sos)
</code></pre>
<p>This prints an array whose values I can use in an IIR filter: <code>[0.9845337085968967, -0.9845337085968967, 0.0, 1.0, -0.9690674171937933, 0.0]</code></p>
<p>If I use <code>MathNet.Filtering.IIR.OnlineIirFilter</code>, the signal filters correctly using those values with this method:</p>
<pre><code>FilterSignalWithButterworth(signal)
</code></pre>
<pre class="lang-cs prettyprint-override"><code>public double[] FilterSignalWithButterworth(double[] signal)
{
    double[] coff = [0.98453371, -0.98453371, 0.0, 1.0, -0.96906742, 0.0];
    var filter = new MathNet.Filtering.IIR.OnlineIirFilter(coff);
    var filteredSignal = filter.ProcessSamples(signal);
    return filteredSignal;
}
</code></pre>
<p><strong>Question</strong></p>
<p>Is there a way to calculate the second-order sections of an IIR filter using .Net?</p>
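<p>As a possible starting point (a sketch, not a MathNet-specific API): for a first-order Butterworth section the coefficients can be computed in closed form via the bilinear transform, which is straightforward to port to C#. The Python below reproduces the SciPy output above:</p>
<pre class="lang-py prettyprint-override"><code>import math

fs = 2048.0                       # sample rate
fc = 0.01 * (fs / 2)              # cutoff (10.24 Hz), as in the Python snippet
k = math.tan(math.pi * fc / fs)   # prewarped frequency factor

# First-order high-pass section: b = [b0, b1, 0], a = [1, a1, 0]
b0 = 1.0 / (1.0 + k)
b1 = -b0
a1 = (k - 1.0) / (1.0 + k)
sos = [b0, b1, 0.0, 1.0, a1, 0.0]
print(sos)
</code></pre>
<p>Higher orders would need cascaded biquad sections, but the same closed-form approach applies per section.</p>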
|
<python><.net><mathnet-filtering>
|
2024-11-25 23:58:24
| 0
| 2,268
|
Luke Hammer
|
79,224,765
| 123,594
|
ansible_python_interpreter_fallback is not working
|
<p>I have a playbook with a mix of <code>connection: local</code> tasks and remote tasks that use an AWS dynamic inventory. The Python interpreter has different paths on local and remote systems.</p>
<p>Through another question <a href="https://stackoverflow.com/questions/79157061/python3-venv-how-to-sync-ansible-python-interpreter-for-playbooks-that-mix-con">python3 venv - how to sync ansible_python_interpreter for playbooks that mix connection:local and target system</a>, I have determined I should use <code>ansible_python_interpreter_fallback</code> to configure two Python interpreter paths to try. But I cannot get them working.</p>
<p>I have tried:</p>
<p>Defining it in my playbook:</p>
<pre class="lang-yaml prettyprint-override"><code>---
- hosts: tag_group_web_servers
  vars_files:
    - group_vars/au
  roles:
    - autodeploy
  vars:
    ansible_python_interpreter_fallback:
      - /Users/jd/projects/mgr2/ansible/bin/python3
      - /usr/bin/python3
</code></pre>
<p>, which is ignored</p>
<p>And defining it in the dynamic inventory:</p>
<pre class="lang-yaml prettyprint-override"><code>plugin: aws_ec2
regions:
  - ap-southeast-2
  - us-east-1
hostnames:
  - ip-address
keyed_groups:
  - prefix: "tag"
    key: tags
  - prefix: "group"
    key: tags
  - prefix: "security_groups"
    key: 'security_groups|json_query("[].group_name")'
all:
  hosts:
    127.0.0.1:
      ansible_connection: local
      ansible_python_interpreter: "/Users/jd/projects/mgr2/ansible/bin/python3"
    remote:
      ansible_host: remote.host.ip
      ansible_python_interpreter: /usr/bin/python3
      ansible_python_interpreter_fallback:
        - /Users/jd/projects/mgr2/ansible/bin/python3
        - /usr/bin/python3
</code></pre>
<p>, which is also ignored.</p>
<p>I'm confused where else this can go or why it doesn't work.</p>
<p>Here is my Ansible version:</p>
<pre class="lang-none prettyprint-override"><code>ansible [core 2.17.4]
config file = /Users/jd/projects/mgr2/ansible/ansible.cfg
configured module search path = ['/Users/jd/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /opt/homebrew/lib/python3.11/site-packages/ansible
ansible collection location = /Users/jd/.ansible/collections:/usr/share/ansible/collections
executable location = /opt/homebrew/bin/ansible
python version = 3.11.10 (main, Sep 7 2024, 01:03:31) [Clang 15.0.0 (clang-1500.3.9.4)] (/opt/homebrew/opt/python@3.11/bin/python3.11)
jinja version = 3.1.4
libyaml = True
</code></pre>
|
<python><ansible>
|
2024-11-25 22:30:04
| 3
| 2,524
|
jdog
|
79,224,509
| 825,227
|
Is there a way to do an Excel INDEX/MATCH within Python using two dataframes as inputs
|
<p>Have googled and only relevant answers prescribe a merge which isn't applicable in my case.</p>
<p>I have two data frames:</p>
<p><code>da</code></p>
<pre><code>2023-08-14 06:30:01 B C D E F G
2023-08-14 06:30:01 B C D E F G
2023-08-14 06:30:02 B C D E F G
2023-08-14 06:30:03 B C D E F G
2023-08-14 06:30:04 B C D E F G
2023-08-14 06:30:05 B C D E F G
2023-08-14 06:30:06 A B C E F G
2023-08-14 06:30:07 A B C E F G
</code></pre>
<p><code>db</code></p>
<pre><code>2023-08-14 06:30:01 28 26 8 -7 -17 -14
2023-08-14 06:30:01 28 26 8 -7 -17 -14
2023-08-14 06:30:02 28 26 8 -5 -17 -14
2023-08-14 06:30:03 28 26 5 -5 -17 -14
2023-08-14 06:30:04 28 26 5 -11 -17 -14
2023-08-14 06:30:05 28 26 5 -11 -17 -10
2023-08-14 06:30:06 33 28 26 -11 -17 -10
2023-08-14 06:30:07 34 28 26 -11 -17 -10
</code></pre>
<p>I'd like to return a combination of the two using a unique list of values from <code>da</code>, in order, as columns, and match column and time to return the corresponding value from <code>db</code> as value in resulting dataframe like the below:</p>
<p><code>dc</code></p>
<pre><code> A B C D E F G
2023-08-14 06:30:01 0 28 26 8 -7 -17 -14
2023-08-14 06:30:01 0 28 26 8 -7 -17 -14
2023-08-14 06:30:02 0 28 26 8 -5 -17 -14
2023-08-14 06:30:03 0 28 26 5 -5 -17 -14
2023-08-14 06:30:04 0 28 26 5 -11 -17 -14
2023-08-14 06:30:05 0 28 26 5 -11 -17 -10
2023-08-14 06:30:06 33 28 26 0 -11 -17 -10
2023-08-14 06:30:07 34 28 26 0 -11 -17 -10
</code></pre>
<p>There's a one-to-one correspondence between <code>da</code> and <code>db</code> (i.e., same number of rows and columns), so I could do this row by row, but I would prefer a solution that doesn't involve iteration, as the results aren't path-dependent in any way.</p>
<p>I'm able to create column headers for <code>dc</code> via a <code>map/set</code>:</p>
<pre><code>from itertools import chain
a = list(map(set,da.values.T))
b = list(set(chain.from_iterable(a)))
dc = pd.DataFrame(columns = b)
</code></pre>
<p>but how do I populate the resulting dataframe per the logic above?</p>
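<p>One vectorized sketch (shown on small toy stand-ins for <code>da</code> and <code>db</code>): flatten both frames into one long table keyed by row position and label, then pivot back to wide form. Keying by row position rather than timestamp also sidesteps the duplicated times:</p>
<pre><code>import numpy as np
import pandas as pd

# Toy stand-ins for `da` (labels) and `db` (values) sharing index and shape
idx = pd.to_datetime(["2023-08-14 06:30:01", "2023-08-14 06:30:02"])
da = pd.DataFrame([["B", "C"], ["A", "B"]], index=idx)
db = pd.DataFrame([[28, 26], [33, 28]], index=idx)

long = pd.DataFrame({
    "row": np.repeat(np.arange(len(da)), da.shape[1]),
    "col": da.to_numpy().ravel(),
    "val": db.to_numpy().ravel(),
})
dc = long.pivot(index="row", columns="col", values="val").fillna(0)
dc.index = da.index
print(dc)
</code></pre>
<p>With the real frames, drop the toy construction and use <code>da</code>/<code>db</code> directly; <code>fillna(0)</code> supplies the zeros for labels absent at a given time.</p>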
|
<python><algorithm><data-structures>
|
2024-11-25 20:23:33
| 1
| 1,702
|
Chris
|
79,224,497
| 5,312,606
|
Optimized build for pip
|
<p>This question is similar to
<a href="https://stackoverflow.com/questions/14359644/python-setuptools-distribute-optimize-option-in-setup-py">Python Setuptools Distribute: Optimize Option in setup.py?</a>,
but that question was asked 11 years ago and I hope that the Python ecosystem has developed further since.</p>
<p>We have a code where we would like to use <code>assert</code> for checks of invariants that have sometimes expensive scaling.¹ We can use assert for small problem sizes in a test suite, but do not want the asserts in production code. I know that one could execute <code>python -O</code> when invoking our code, but is there a way to strip <code>assert</code>s away at installation time?</p>
<p>(Similar to <code>-DCMAKE_BUILD_TYPE=Debug</code> vs. <code>-DCMAKE_BUILD_TYPE=Release</code> for compiled languages.)</p>
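<p>One related option, sketched below (it acts at run time rather than at install time): the <code>PYTHONOPTIMIZE</code> environment variable enables the same optimization level as <code>python -O</code>, so asserts can be disabled in the production environment without changing how the code is invoked:</p>
<pre><code>import os
import subprocess
import sys

snippet = "assert False, 'expensive check'\nprint('asserts stripped')"

# With PYTHONOPTIMIZE=1 the assert is compiled away, so the print is reached
env = dict(os.environ, PYTHONOPTIMIZE="1")
result = subprocess.run([sys.executable, "-c", snippet],
                        env=env, capture_output=True, text=True)
print(result.stdout.strip())
</code></pre>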
|
<python><pip><setuptools>
|
2024-11-25 20:18:56
| 0
| 1,897
|
mcocdawc
|
79,224,491
| 141,650
|
How to ensure that pytest-defined `request` fixture is available?
|
<p>I'm still wrestling with how/when I can use pytest's off-the-shelf fixtures (e.g.: <code>request</code>). I have a class structure of the form:</p>
<pre><code>abc.ABC -> TestBase -> IntegrationTestBase -> SomeSpecialTest
</code></pre>
<p>In <code>IntegrationTestBase</code>, I defined a teardown_method like so:</p>
<pre><code>def teardown_method(self, method, request):
    ...
</code></pre>
<p>However, my test, when invoked, fails to inject the fixture:</p>
<pre><code>teardown_method() missing 1 required positional argument: 'request'
</code></pre>
<p>I have successfully referred to the <code>request</code> fixture <em>in a test method itself</em>, but not in a parent class (<code>IntegrationTestBase</code>, above). Is there any special syntax I need for the parent class <code>teardown_method</code> to get the <code>request</code> fixture?</p>
|
<python><pytest>
|
2024-11-25 20:17:04
| 1
| 5,734
|
Stephen Gross
|
79,224,478
| 37,650
|
How can I enforce a minimum age constraint and manage related models in Django?
|
<p>I am working on a Django project where I need to validate a model before saving it, based on values in its related models. I ran into this issue while extracting an app from a project using an old Django version (3.1) into a separate Django 5.1 project; the error "ValueError: 'Model...' instance needs to have a primary key value before this relationship can be used" was raised in all validation classes that used related-model data.</p>
<p>For demonstration and simplification purposes, I have a <strong>Reservation</strong> model that references multiple <strong>Guest</strong> objects via a foreign key. For the reservation to be valid and be saved, all guests linked to it must be at least 18 years old.</p>
<p>However, none of these records (neither the reservation nor the guests) have been saved to the database yet. I need to perform this validation efficiently and cleanly, preferably in a way that keeps the validation logic separated from the models themselves.</p>
<p>How can I approach this validation scenario? What are the best practices for validating unsaved foreign key relationships in Django?</p>
<p>Here is a simplified version of my setup:</p>
<p>File: <code>models.py</code></p>
<pre class="lang-none prettyprint-override"><code>from django.db import models

class Reservation(models.Model):
    check_in_date = models.DateField()
    check_out_date = models.DateField()

    def __str__(self):
        return f"Reservation from {self.check_in_date} to {self.check_out_date}"

class Guest(models.Model):
    name = models.CharField(max_length=255)
    age = models.PositiveIntegerField()
    reservation = models.ForeignKey(
        Reservation,
        related_name="guests",
        on_delete=models.CASCADE
    )

    def __str__(self):
        return f"{self.name} ({self.age} years old)"
</code></pre>
<p>File: <code>validation.py</code></p>
<pre class="lang-none prettyprint-override"><code>from django.core.exceptions import ValidationError

def validate_reservation_and_guests(reservation):
    """
    Validate that all guests in the reservation are at least 18 years old.
    """
    for guest in reservation.guests.all():
        if guest.age < 18:
            raise ValidationError("All guests must be at least 18 years old.")
</code></pre>
<h2>Question:</h2>
<p>What is the best way to structure this kind of validation in Django admin? I am open to using custom model methods, form validation, or signals, but I prefer to keep the logic in a separate file for better organization. Are there other approaches I should consider?</p>
<p>Any examples or advice would be greatly appreciated!</p>
|
<python><django><validation>
|
2024-11-25 20:10:28
| 2
| 1,752
|
Danmaxis
|
79,224,463
| 8,589,908
|
matplotlib 3D line plot
|
<p>I am trying to plot two series of data in 3D as lines, but I can't find a good example of how to structure the data to do this in Python.</p>
<p>My example data is:</p>
<pre><code>x = np.array([[1,1,1,1,5,1,1,1,1], [4.5,5,5,5,5.5,5,5,5,4.5]])
</code></pre>
<p>But I presume to do this I need 3 series of data.</p>
<p>Here is a sample of what the output should look like</p>
<p><a href="https://i.sstatic.net/pBeaEKXf.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/pBeaEKXf.png" alt="enter image description here" /></a></p>
<p>I am assuming I need to add extra rows to the array, but not sure if I should try to build a 3d array or plot each axis with its own separate arrays?</p>
<p>In which case axis</p>
<pre><code>y1 = np.ones(9)
z = np.array([0,1,2,3,4,5,6,7,8])
</code></pre>
<p>I did have a look <a href="https://stackoverflow.com/questions/34099518/plotting-a-series-of-2d-plots-projected-in-3d-in-a-perspectival-way">here</a> and read the documentation <a href="https://matplotlib.org/stable/users/explain/toolkits/mplot3d.html" rel="nofollow noreferrer">here</a> but still could not work out how to apply it to what I am trying to do.</p>
<p>My attempt:</p>
<pre><code>import matplotlib.pylab as pl
from mpl_toolkits.mplot3d import Axes3D
import numpy as np
pl.figure()
ax = pl.subplot(projection='3d')
data = np.array([[1,1,1,1,5,1,1,1,1], [4,5,5,5,5,5,5,5,4]])
y1 = np.ones(9)
z = np.array([1,2,3,4,5,6,7,8,9])
ax.plot(x, y, z, color = 'r')
</code></pre>
<p><strong>UPDATE</strong></p>
<p>This code displays the pseudo data, but ideally x would only have two values - do the axes have to be of matching size? x just needs two ticks: one at 1 and another at 2.</p>
<pre><code>import matplotlib.pyplot as pl
from mpl_toolkits.mplot3d import Axes3D
import numpy as np
pl.figure()
ax = pl.subplot(projection='3d')
z = np.array([[1,1,1,1,5,1,1,1,1], [1,4.5,5,5,5,5,5,5,4]])
x = np.ones(9)
y1 = np.array([1,2,3,4,5,6,7,8,9])
ax.plot(x, y1, z[0], color = 'r')
ax.plot(x*2, y1, z[1], color = 'g')
ax.set_xlabel('x')
ax.set_box_aspect(aspect = (1,1,2))
</code></pre>
<p><a href="https://i.sstatic.net/QQQxRanZ.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/QQQxRanZ.png" alt="plot sample" /></a></p>
|
<python><matplotlib><matplotlib-3d>
|
2024-11-25 20:03:47
| 1
| 499
|
Radika Moonesinghe
|
79,224,436
| 2,856,552
|
Why does my shapefile color a different polygon than intended?
|
<p>I have risk data which I would like to color on a map according to the risk level. I read a shapefile and a CSV data file, which I then merge. This works very well with the adm1 shapefile.</p>
<p>When I run the same script with the adm2 shapefile, the results are completely wrong: the polygons that get colored are far from the polygons that have data. The attached map shows 3 examples of the colored polygons, with a black dot at the location of the data.</p>
<p>I will appreciate if anyone can give a clue of what is going on, and if possible how to resolve this.</p>
<p>Python script used to produce the plot:</p>
<pre><code>#!/home/zmumba/anaconda3/bin/python
import pandas as pd
import geopandas as gpd
import matplotlib.pyplot as plt
from matplotlib.colors import ListedColormap
from mpl_toolkits.basemap import Basemap # optional ?
map_df = gpd.read_file("/home/zmumba/DA/Dekad_Progs/Shapefiles/Lesotho/geoBoundaries-LSO-ADM2-all/geoBoundaries-LSO-ADM2.shx")
risks_df=pd.read_csv("/home/zmumba/DA/Dekad_Progs/Output/H1dRrisks.csv")
map_df["risk"] = map_df.merge(risks_df, left_on="shapeName", right_on="District")["risk"]
colors = {1: "green", 2: "yellow", 3: "orange", 4: "red"} # or a list
labels = {1: "no risk", 2: "low risk", 3: "medium risk", 4: "high risk"}
catego = map_df["risk"].astype(str).str.cat(map_df["risk"].map(labels), sep="- ")
fig, ax = plt.subplots(figsize=(5, 5))
plt.title(f'Risk of Heavy 24hr Rain: 20-24Nov', y=1.04)
map_df.plot(
column=catego,
categorical=True,
edgecolor="k",
linewidths=0.8,
alpha=0.7,
cmap=ListedColormap([c for r,c in colors.items() if r in map_df["risk"].unique()]),
legend=True,
legend_kwds={
"title": "Risk Level",
"shadow": True,
"loc": "lower right",
"fontsize": 10,
},
ax=ax,
)
ax.set_axis_off()
plt.savefig('H1dRriskmap.png', dpi=300)
plt.show()
</code></pre>
<p>Just to add some clarification to my problem, <a href="https://i.sstatic.net/FyJsNNYV.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/FyJsNNYV.png" alt="enter image description here" /></a>
The attached image shows the polygon numbers from the shapefile. Coloring 42 colors 41, coloring 43 colors 42.
The coordinates of 41, 42 and 43 are</p>
<pre><code>-29.3892289999999, 28.3056183629316 -> 41
-29.2870792999999, 29.2715247780569 -> 42
-29.2255776170000, 27.6546474532047 -> 43
</code></pre>
<p>These coordinates are the centroids of the polygons taken from the shapefile itself. So it cannot be a problem of coordinates.
Could there be something wrong in the python code?
The code reads from a file which is in the format:</p>
<pre><code>"District","risk"
"name1",1
"name2",1
...
"name78",1
</code></pre>
<p>where 1 can b2 2, 3, or 4 depending on the risk level.
I would be glad to try doing the same thing in R, but the problem with R is, it will say "no package sf" and so on.</p>
|
<python><matplotlib>
|
2024-11-25 19:50:06
| 1
| 1,594
|
Zilore Mumba
|
79,224,194
| 5,029,763
|
Convert linearmodels results to DataFrame
|
<p>I have a lot of models that I wish to compare, so I wanted to concat their results into a single DataFrame. I was hoping to do something similar to this <a href="https://stackoverflow.com/a/63223691/5029763">solution</a>.</p>
<p>So I was going to use <code>as_csv()</code> to <code>summary.tables[0]</code> of each model so that I can later "read" them and manipulate them however I like.</p>
<p><a href="https://i.sstatic.net/xTNcoTiI.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/xTNcoTiI.png" alt="enter image description here" /></a></p>
<p>But the Date field has a comma and I cannot change the separator in <code>as_csv()</code>. So I get the error <code>ParserError: Expected 4 fields in line 5, saw 5</code> when I try to read this object. I wouldn't mind dropping the date field, but I would like to keep the P-value, which is on the same line... so <code>on_bad_lines=skip</code> isn't helpful...</p>
<p>Is there a way to change the separator that I'm missing? Or another way to achieve this?</p>
<p>Using the <a href="https://bashtage.github.io/linearmodels/iv/examples/basic-examples.html" rel="nofollow noreferrer">Basic Example</a>:</p>
<pre><code>import numpy as np
import pandas as pd
from io import StringIO

from linearmodels.datasets import mroz
from statsmodels.api import add_constant

data = mroz.load()
data = data.dropna()
data = add_constant(data, has_constant="add")

from linearmodels.iv import IV2SLS

res_ols = IV2SLS(np.log(data.wage), data[["const", "educ"]], None, None).fit(
    cov_type="unadjusted"
)

# this results in an error if I do not use on_bad_lines='skip'
pd.read_csv(StringIO(res_ols.summary.tables[0].as_csv()),
            skipinitialspace=True,  # on_bad_lines='skip',
            skiprows=0, skipfooter=0, engine='python')
</code></pre>
|
<python><linearmodels>
|
2024-11-25 18:19:16
| 0
| 1,935
|
user5029763
|
79,224,129
| 850,781
|
Why does np.diff handle tz-naive and tz-aware indexes differently
|
<p>I just noticed that <a href="https://numpy.org/doc/stable/reference/generated/numpy.diff.html" rel="nofollow noreferrer"><code>np.diff</code></a> treats tz-naive timestamps:</p>
<pre><code>import numpy as np
import pandas as pd
np.diff(pd.DatetimeIndex(["2024-11-24", "2024-11-25"]))
==> array([86400000000000], dtype='timedelta64[ns]')
</code></pre>
<p>and tz-aware one:</p>
<pre><code>np.diff(pd.DatetimeIndex(["2024-11-24 00:00 UTC", "2024-11-25 00:00 UTC"]))
==> array([Timedelta('1 days 00:00:00')], dtype=object)
</code></pre>
<p>differently.</p>
<p>Why?</p>
<p>(see also <a href="https://stackoverflow.com/q/79212904/850781">Why is tz-naive Timestamp converted to integer while tz-aware is kept as Timestamp?</a>)</p>
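<p>A plausible explanation, with a workaround sketch: NumPy's <code>datetime64</code> has no time-zone concept, so a tz-aware index reaches NumPy as an <code>object</code> array of <code>Timestamp</code>s; staying at the pandas level, or dropping the tz first, keeps the <code>timedelta64</code> dtype:</p>
<pre><code>import numpy as np
import pandas as pd

idx = pd.DatetimeIndex(["2024-11-24 00:00 UTC", "2024-11-25 00:00 UTC"])

# Option 1: diff at the pandas level, which understands time zones
deltas = idx.to_series().diff().dropna()

# Option 2: strip the tz before NumPy sees the data
naive = idx.tz_convert("UTC").tz_localize(None)
print(np.diff(naive))   # timedelta64[ns] again
</code></pre>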
|
<python><pandas><numpy><datetime><timezone>
|
2024-11-25 17:59:10
| 0
| 60,468
|
sds
|
79,224,061
| 7,921,684
|
How to Control TP-Link Tapo Plugs from Another Network Using PlugP100 or Python-Kasa Libraries
|
<p>I am using the <strong><a href="https://github.com/petretiandrea/plugp100" rel="nofollow noreferrer">PlugP100</a></strong> / <strong><a href="https://github.com/python-kasa/python-kasa" rel="nofollow noreferrer">Python-Kasa</a></strong> library to control my TP-Link smart plugs. Currently, I can send commands to the plugs when I am on the same network and know their IP addresses. However, I have plugs located on different networks, and I cannot directly access those networks, so the commands won't work in such cases.</p>
<p>When I am not on the same network, device discovery still works, but I cannot retrieve the IP addresses of those devices.</p>
<p>How can I <strong>control</strong> or interact with my plugs <strong>from a different network</strong>? Are there alternative solutions for remote control using this library or other methods?</p>
<p>Is there an <strong>MQTT</strong> way to do it?</p>
<p>python-kasa:</p>
<pre><code>async def main():
    dev = await Discover.discover_single("192.168.x.x", username=EMAIL, password=PASSWORD)
    await dev.turn_on()
    await dev.update()
</code></pre>
<p>plugp100:</p>
<pre><code>connectedDevices = [] # List to store connected devices
async def example_discovery(credentials: AuthCredential):
    discovered = await TapoDiscovery.scan(timeout=5)
    device_objects = []  # List to store created Device objects

    for discovered_device in discovered:
        try:
            # Initialize the Tapo device
            device = await discovered_device.get_tapo_device(credentials)
            # Update the device state
            await device.update()
            # Extract raw_state and create a Device object
            raw_state = device.raw_state
            device_objects.append(raw_state)
            # print(new_device)
            await device.client.close()
        except Exception as e:
            logging.error(f"Failed to update {discovered_device.ip} {discovered_device.device_type}", exc_info=e)

    return device_objects  # Return the list of created Device objects

async def connectAll(rawDeviceList):
    for device in rawDeviceList:
        device_configuration = DeviceConnectConfiguration(
            host=device.ip,
            credentials=AuthCredential(EMAIL, PASSWORD)
        )
        connectedDevice = await connect(device_configuration)
        await connectedDevice.update()
        connectedDevices.append(connectedDevice)
    return connectedDevices

async def toggle_device_state(ip, state, connected_devices):
    for device in connected_devices:
        if ip == device.raw_state.get('ip', 'Unknown'):
            if state == "on":
                await device.turn_on()
            elif state == "off":
                await device.turn_off()
            else:
                print("Invalid state. Please enter 'on' or 'off'.")
</code></pre>
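<p>For context, independent of the networking question, this is the direction I am refactoring the lookup in <code>toggle_device_state</code> (plain dicts stand in for connected device objects; the IPs are made up):</p>

```python
# Plain-dict stand-ins for connected device objects; IPs are made up.
connected = [
    {"ip": "192.168.0.10", "state": "off"},
    {"ip": "192.168.0.11", "state": "off"},
]

# Index once by IP instead of scanning the list on every toggle
by_ip = {d["ip"]: d for d in connected}

def toggle(ip, state):
    if state not in ("on", "off"):
        raise ValueError("state must be 'on' or 'off'")
    device = by_ip.get(ip)
    if device is None:
        raise KeyError(f"no device at {ip}")
    device["state"] = state
    return device

toggle("192.168.0.10", "on")
```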
|
<python>
|
2024-11-25 17:33:23
| 1
| 586
|
Gray
|
79,223,897
| 1,802,693
|
How to Specify return_dtype for Aggregation and Sorting in Polars LazyFrame?
|
<p>Let's assume I have a Polars LazyFrame or DataFrame.</p>
<p>In the first step, I execute a <code>with_columns</code> / <code>struct</code> / <code>map_elements</code> / <code>lambda function</code> combination to create dict objects from the columns specified in the struct.</p>
<pre class="lang-py prettyprint-override"><code>combined_plan = combined_plan.with_columns(
    pl.struct(['candle_dt', 'timeframe', 'time_diff_seconds', 'open', 'high', 'low', 'close', 'volume'])
    .map_elements(lambda row: row, return_dtype=pl.Object).alias('event')
)
</code></pre>
<p>In the second step, I perform an aggregation on the result using <code>group_by</code> / <code>agg</code> / <code>col</code> / <code>map_elements</code>, which generates lists of the dict objects created in the first step, based on a column that hasn't been used yet.</p>
<pre class="lang-py prettyprint-override"><code>combined_plan = combined_plan.group_by('event_trigger_dt').agg(
    pl.col("event").map_elements(lambda row: row.to_list(), return_dtype=pl.List(pl.Object)).alias('events')
)
</code></pre>
<p>How can I specify the <code>return_dtype</code> so that I can perform a sorting operation without any error on the resulting frame after the aggregation?</p>
<pre class="lang-py prettyprint-override"><code>combined_plan = combined_plan.sort(['event_trigger_dt'])
</code></pre>
<p>The error I get:</p>
<pre><code>pyo3_runtime.PanicException: called `Result::unwrap()` on an `Err` value: ComputeError(ErrString("ListArray's child's DataType must match. However, the expected DataType is FixedSizeBinary(8) while it got Extension(\"POLARS_EXTENSION_TYPE\", FixedSizeBinary(8), Some(\"1732551860809866900;4003949774752\"))."))
</code></pre>
<p>I've tried to use multiple types: <code>pl.Object</code>, <code>pl.Struct</code>, <code>pl.List(pl.Object)</code> etc., none of them worked.</p>
<p>I could work around this by converting the dict into JSON and using the <code>pl.String</code> type, but I don't like the fact that I need to convert it back to a Python dict wherever I need to use it:</p>
<pre><code>def func(row):
    return {
        'candle_dt': str(row['candle_dt']),
        'timeframe': row['timeframe'],
        'time_diff_seconds': row['time_diff_seconds'],
        'open': row['open'],
        'high': row['high'],
        'low': row['low'],
        'close': row['close'],
        'volume': row['volume'],
    }

combined_plan = combined_plan.with_columns(
    pl.struct(['candle_dt', 'timeframe', 'time_diff_seconds', 'open', 'high', 'low', 'close', 'volume'])
    .map_elements(lambda row: func(row), return_dtype=pl.Object).alias('event')
)
combined_plan = combined_plan.group_by('event_trigger_dt').agg(
    pl.col("event").map_elements(lambda row: str(json.dumps(row.to_list())), return_dtype=pl.String).alias('events')
)
combined_plan = combined_plan.sort(['event_trigger_dt'])
</code></pre>
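<p>To be explicit about the workaround's cost, this is the round trip every consumer of the <code>events</code> column has to repeat (plain stdlib sketch; the field values are made up):</p>

```python
import json
from datetime import datetime

# Made-up event dict mirroring the struct fields above
row = {
    "candle_dt": str(datetime(2024, 11, 25)),
    "timeframe": "1m",
    "time_diff_seconds": 60,
    "open": 1.0,
    "high": 2.0,
    "low": 0.5,
    "close": 1.5,
    "volume": 100,
}

encoded = json.dumps([row])    # what ends up in the pl.String column
decoded = json.loads(encoded)  # what I must repeat at every point of use
assert decoded == [row]
```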
|
<python><python-3.x><dataframe><python-polars><lazyframe>
|
2024-11-25 16:39:53
| 0
| 1,729
|
elaspog
|
79,223,860
| 15,394,199
|
Unexpected '__mul__' call during dot product
|
<p>So, I have been trying to implement a basic Autograd and Neural Network from scratch using some numpy. This is the part of my AD code that matters for this question, greatly shortened down for an MRE. This is grad.py:</p>
<pre class="lang-py prettyprint-override"><code>from typing import Self
import numpy as np

class Variable:
    def __init__(self, value: np.ndarray = None):
        self.value = value if isinstance(value, np.ndarray) else np.asarray(value)
        self.prev = None

    def _variablify(self, x) -> Self:
        if not isinstance(x, Variable):
            x = Variable(x)
        return x

    def __add__(self, x) -> Self:
        x = self._variablify(x)
        y = Variable(self.value + x.value)
        return y

    def __mul__(self, x) -> Self:
        x = self._variablify(x)
        y = Variable(self.value * x.value)
        return y

    __radd__ = __add__
    __rmul__ = __mul__

    def dot(self, x):
        x = self._variablify(x)
        y = Variable(self.value.dot(x.value))
        return y

    def __lt__(self, other):
        return self.value < other

    def __gt__(self, other):
        return self.value > other

def dot(a: Variable, b: Variable):
    return a.dot(b)
</code></pre>
<p>In the other file, main.py I try to implement a neural net</p>
<pre class="lang-py prettyprint-override"><code>from typing import Self
import numpy as np
from grad import Variable
import grad

class Layer:
    def __init__(self, neurons: int):
        self.n_size = neurons
        self.activation = Variable(0)

    def previous(self, layer: Self):
        self.previous_layer = layer
        self.previous_layer.next_layer = self

    def next(self, layer: Self):
        self.next_layer = layer
        self.next_layer.previous_layer = self

    def initialise(self):
        self.weight_matrix = Variable(np.random.normal(0, 0.01, (self.n_size, self.next_layer.n_size)))
        self.bias_vector = Variable(np.random.normal(0, 0.01, (1, self.next_layer.n_size)))
        self.next_layer.x = grad.dot(self.activation, self.weight_matrix) + self.bias_vector
        self.next_layer.activation = np.where(self.next_layer.x > 0, self.next_layer.x, 0.01*self.next_layer.x)  # Using LeakyReLU

if __name__ == "__main__":
    input_layer = Layer(5)
    input_layer.activation = Variable(np.random.randint(1, 5, (1, 5)))
    h1 = Layer(3)
    h1.previous(input_layer)
    output = Layer(2)
    output.previous(h1)

    input_layer.initialise()
    h1.initialise()
    print(input_layer.activation, h1.activation, output.activation)
</code></pre>
<p>So, as you can see in the grad.py I had implemented the code for dot product wrapper. But now, here comes the error upon running the main.py file-</p>
<pre><code>Traceback (most recent call last):
  File ".../main.py", line 62, in <module>
    h1.initialise()
  File ".../main.py", line 40, in initialise
    self.next_layer.x = grad.dot(self.activation, self.weight_matrix) + self.bias_vector
                        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File ".../grad.py", line 191, in dot
    return a.dot(b)
           ^^^^^^^^
  File ".../grad.py", line 49, in __mul__
    y = Variable(self.value * x.value)
        ~~~~~~~~~~~^~~~~~~~~
ValueError: operands could not be broadcast together with shapes (1,3) (3,2)
</code></pre>
<p>Now to me, this is very strange. Because the error seems to tell us that <code>a.dot(b)</code> somehow called <code>__mul__</code> which...it never did. I have absolutely no idea what is going on here. Any help would be greatly appreciated.</p>
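<p>To narrow this down I ran a small experiment outside my project (<code>Box</code> is a stripped-down stand-in for my <code>Variable</code>, not the real class): <code>np.where</code> seems to treat a wrapper object as a scalar and hands back a plain object <code>ndarray</code>, so after <code>initialise</code> my <code>activation</code> is presumably no longer a <code>Variable</code> at all, which may be relevant here.</p>

```python
import numpy as np

class Box:
    """Stripped-down stand-in for the Variable wrapper in grad.py."""
    def __init__(self, v):
        self.v = np.asarray(v)
    def __gt__(self, other):
        return self.v > other
    def __mul__(self, other):
        return Box(self.v * other)
    __rmul__ = __mul__

x = Box(np.array([[1.0, -2.0, 3.0]]))
act = np.where(x > 0, x, 0.01 * x)

# np.where returns a plain object ndarray, not a Box instance
print(type(act), act.dtype)
```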
<p>Thanks.</p>
|
<python><numpy><deep-learning>
|
2024-11-25 16:27:54
| 1
| 2,318
|
random_hooman
|
79,223,745
| 2,402,577
|
How can I use breakpoint inside a Hypercorn function for Python 3.9
|
<p>I am using <code>Python 3.9</code>. I am unable to use <code>ipdb.set_trace()</code> inside a <code>Hypercorn</code> process. Please note that this works in <code>Python 3.7</code>.</p>
<p>My code:</p>
<pre class="lang-py prettyprint-override"><code>#!/usr/bin/env python3
import ipdb

async def app(scope, receive, send):
    ipdb.set_trace()  # DEBUG
</code></pre>
<p>When I run (<code>hypercorn hello_world:app</code>) I get following error message:</p>
<pre class="lang-bash prettyprint-override"><code>❯ hypercorn hello_world:app
--Return--
None
> /home/alper/trade_bot/bot/hello_world.py(7)app()
      5
      6 async def app(scope, receive, send):
----> 7     ipdb.set_trace() # DEBUG
ipdb>
[2024-11-25 16:07:33 +0000] [1228742] [WARNING] ASGI Framework Lifespan error, continuing without Lifespan support
[2024-11-25 16:07:33 +0000] [1228742] [INFO] Running on http://127.0.0.1:8000 (CTRL + C to quit)
--Return--
None
> /home/alper/trade_bot/bot/hello_world.py(7)app()
      5
      6 async def app(scope, receive, send):
----> 7     ipdb.set_trace() # DEBUG
ipdb>
[2024-11-25 16:07:35 +0000] [1228742] [ERROR] Error in ASGI Framework
Traceback (most recent call last):
  File "/home/alper/venv/lib/python3.9/site-packages/hypercorn/asyncio/task_group.py", line 27, in _handle
    await app(scope, receive, send, sync_spawn, call_soon)
  File "/home/alper/venv/lib/python3.9/site-packages/hypercorn/app_wrappers.py", line 34, in __call__
    await self.app(scope, receive, send)
  File "/home/alper/trade_bot/bot/hello_world.py", line 7, in app
    ipdb.set_trace()  # DEBUG
  File "/usr/lib/python3.9/bdb.py", line 92, in trace_dispatch
    return self.dispatch_return(frame, arg)
  File "/usr/lib/python3.9/bdb.py", line 154, in dispatch_return
    if self.quitting: raise BdbQuit
bdb.BdbQuit
</code></pre>
<p>Here <code>ipdb></code> does not accept any input like <code>c</code>, <code>n</code>, etc.
How can I debug using a breakpoint (<code>ipdb.set_trace()</code>) inside a Hypercorn process, if possible?</p>
|
<python><ipdb><hypercorn>
|
2024-11-25 15:58:56
| 0
| 3,525
|
alper
|
79,223,646
| 9,371,999
|
.venv folder not showing on vscode tree view
|
<p>The issue is as follows.
I usually work with Python to develop my apps. Yesterday I wanted to try the UV framework, and I found it very nice. However, I soon found out that the .venv folder it creates does not show in my VS Code tree view. See the picture below:
<a href="https://i.sstatic.net/H3Bhi6dO.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/H3Bhi6dO.png" alt="enter image description here" /></a></p>
<p>Basically, the .venv folder exists, but I just cannot see it in my tree view. I want to know why, and even better, how I can fix this.</p>
<p>I will also copy and paste my user and workspace settings. Maybe it's related to that, but I don't know exactly.</p>
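<p>For what it's worth, my own suspicion (just a guess) is the interaction of these two entries from my user settings, since <code>explorer.excludeGitIgnore</code> hides anything matched by a <code>.gitignore</code>, and I believe uv drops a <code>.gitignore</code> inside the <code>.venv</code> it creates:</p>

```jsonc
// From my user settings — could this pair hide a git-ignored .venv?
"explorer.excludeGitIgnore": true,
"files.exclude": {
    ".git": true,
    "**/.git": false
},
```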
<p>User settings:</p>
<pre><code>{
    // General settings
    "security.workspace.trust.untrustedFiles": "newWindow",
    "window.zoomLevel": 1,
    "window.commandCenter": false,
    "files.exclude": {
        ".git": true,
        "**/.git": false
    },
    "extensions.autoUpdate": "onlyEnabledExtensions",
    // Git settings
    "git.autofetch": true,
    "git.confirmSync": false,
    "git.enableSmartCommit": true,
    "git.showActionButton": {
        "commit": false,
        "publish": false,
        "sync": false
    },
    "github.copilot.enable": {
        "*": true,
        "plaintext": false,
        "scminput": false,
        "yaml": false
    },
    // Explorer settings
    "explorer.excludeGitIgnore": true,
    "explorer.autoReveal": true,
    "explorer.confirmDelete": false,
    "explorer.confirmDragAndDrop": false,
    // Workbench settings
    "workbench.colorTheme": "Default Light+",
    "workbench.editor.tabSizing": "shrink",
    "workbench.settings.editor": "json",
    // Editor settings
    "ruff.importStrategy": "useBundled",
    "editor.defaultFormatter": "charliermarsh.ruff",
    "editor.formatOnPaste": true,
    "editor.formatOnSave": true,
    "editor.formatOnSaveMode": "file",
    "editor.codeActionsOnSave": {
        "source.organizeImports": "always",
        "source.fixAll": "always"
    },
    "files.autoSave": "onFocusChange",
    "[json]": {
        "editor.defaultFormatter": "vscode.json-language-features"
    },
    "[jsonc]": {
        "editor.defaultFormatter": "vscode.json-language-features"
    },
    // Debug settings
    "debug.toolBarLocation": "docked",
    // Terminal settings
    "terminal.integrated.tabs.enabled": true,
    "terminal.integrated.tabs.hideCondition": "never",
    "terminal.integrated.tabs.location": "right",
    // Markdown settings
    "markdown.preview.scrollEditorWithPreview": true,
    "markdown.preview.scrollPreviewWithEditor": true,
    "python.createEnvironment.contentButton": "show",
    "python.venvFolders": [
        ""
    ]
}
</code></pre>
<p>Workspace settings:</p>
<pre><code>// Python settings
"python.analysis.autoSearchPaths": true,
"python.analysis.diagnosticSeverityOverrides": {
    "reportMissingImports": "none"
},
"python.analysis.extraPaths": [
    "${workspaceFolder}/src"
],
"python.envFile": "${workspaceFolder}/.env",
"python.terminal.activateEnvironment": true,
"python.defaultInterpreterPath": "${workspaceFolder}/.venv/bin/python",
// Test settings
"python.testing.pytestEnabled": true,
"python.testing.unittestEnabled": false,
"python.testing.cwd": "${workspaceFolder}/tests",
"python.testing.pytestPath": "${workspaceFolder}/.venv/bin/pytest",
"python.testing.autoTestDiscoverOnSaveEnabled": true,
}
</code></pre>
<p>I would really appreciate it if someone could help me with this.</p>
|
<python><visual-studio-code>
|
2024-11-25 15:28:16
| 2
| 529
|
GEBRU
|
79,223,616
| 3,025,242
|
Accessing / downloading artifacts from MLflow running in Docker
|
<p>I am currently working on a way to implement <code>testcontainers</code> like testing using an <code>MLflow</code> server running in Docker. I use a fairly simple Docker image to setup <code>MLflow</code> server and run a small training script that writes the resulting model as an artifact to <code>MLflow</code>:</p>
<pre class="lang-none prettyprint-override"><code># Dockerfile.mlflow-test
FROM ghcr.io/mlflow/mlflow:latest
# Install additional dependencies
RUN pip install scikit-learn pandas numpy
# Set up working directory
WORKDIR /mlflow
# Create MLflow directories
RUN mkdir -p /mlflow/mlruns && \
    chmod -R 777 /mlflow
# Copy and modify training script
COPY train_model.py .
# Train and register model during build
ENV MLFLOW_TRACKING_URI=sqlite:///mlflow.db
ENV MLFLOW_ARTIFACT_ROOT=file:///mlflow/mlruns
RUN python train_model.py
# Expose MLflow UI port
EXPOSE 5000
# Start MLflow server
CMD ["mlflow", "server", \
    "--host", "0.0.0.0", \
    "--port", "5000", \
    "--backend-store-uri", "sqlite:///mlflow.db", \
    "--default-artifact-root", "file:///mlflow/mlruns"]
</code></pre>
<p>When run, this correctly shows me the <code>MLflow</code> UI and I can also see the artifacts corresponding to the model run (on <a href="http://127.0.0.1:5000/#/experiments/0/runs/3894c76664d24c43a537de715a25d664/artifacts" rel="nofollow noreferrer">http://127.0.0.1:5000/#/experiments/0/runs/3894c76664d24c43a537de715a25d664/artifacts</a>).</p>
<p>However, when I then run the following Python code to download the artifacts locally:</p>
<pre><code>import mlflow
from mlflow import MlflowClient
from mlflow.artifacts import download_artifacts, list_artifacts

mlflow.set_tracking_uri("http://127.0.0.1:5000")

ARTIFACT_DESTINATION_DIR = "./test-artifacts"
MODEL_NAME = "test_model"
ALIAS = "Test"

client = MlflowClient(mlflow.get_tracking_uri())

def get_latest_version(model_name, mlflow_client=client):
    return mlflow_client.get_registered_model(model_name).latest_versions[0].version

def get_latest_version_for_alias(
    model_name: str, alias: str, mlflow_client=client
) -> int:
    return mlflow_client.get_model_version_by_alias(model_name, alias).version

def get_run_id_for_model_version(
    model_name: str, version: str, mlflow_client=client
) -> str:
    model_version = mlflow_client.get_model_version(model_name, version)
    return model_version.run_id

def download_latest_artifacts_for_model_alias(
    model_name: str = MODEL_NAME,
    alias: str = ALIAS,
    destination_dir: str = ARTIFACT_DESTINATION_DIR,
    mlflow_client=client,
):
    # First, get latest model version for the given alias
    version = get_latest_version_for_alias(
        model_name=model_name, alias=alias, mlflow_client=mlflow_client
    )
    print(f"VERSION: {version}")

    # Get the run ID for this version
    run_id = get_run_id_for_model_version(
        model_name=model_name, version=version, mlflow_client=mlflow_client
    )
    print(f"RUN ID: {run_id}")

    # Construct the artifact URI using runs:/ scheme
    artifact_uri = f"runs:/{run_id}/model"
    print(f"ARTIFACT URI: {artifact_uri}")

    # Define the proper local dir
    full_destination_dir = f"{destination_dir}/{model_name}/{alias}/{version}"

    # Download all of the artifacts
    download_artifacts(artifact_uri=artifact_uri, dst_path=full_destination_dir)

# Test the functions
print("Registered models:", mlflow.search_registered_models())
print("Latest version:", get_latest_version(MODEL_NAME, client))
print("Latest version for alias:", get_latest_version_for_alias(MODEL_NAME, ALIAS, client))

print("Downloading artifacts...")
download_latest_artifacts_for_model_alias(
    MODEL_NAME,
    ALIAS,
    destination_dir=ARTIFACT_DESTINATION_DIR,
    mlflow_client=client
)
</code></pre>
<p>I get the following output:</p>
<pre><code>Registered models: [<RegisteredModel: aliases={'Test': '1'}, creation_timestamp=1732547270609, description='', last_updated_timestamp=1732547270696, latest_versions=[<ModelVersion: aliases=[], creation_timestamp=1732547270658, current_stage='Production', description='', last_updated_timestamp=1732547270696, name='test_model', run_id='3894c76664d24c43a537de715a25d664', run_link='', source='/mlflow/mlruns/0/3894c76664d24c43a537de715a25d664/artifacts/model', status='READY', status_message='', tags={}, user_id='', version='1'>], name='test_model', tags={}>]
Latest version: 1
Latest version for alias: 1
Downloading artifacts...
VERSION: 1
RUN ID: 3894c76664d24c43a537de715a25d664
ARTIFACT URI: runs:/3894c76664d24c43a537de715a25d664/model
Traceback (most recent call last):
  File "/home/user/Projects/my-api/tests/integration/test_container.py", line 154, in <module>
    download_latest_artifacts_for_model_alias(
  File "/home/user/Projects/my-api/tests/integration/test_container.py", line 147, in download_latest_artifacts_for_model_alias
    download_artifacts(artifact_uri=artifact_uri, dst_path=full_destination_dir)
  File "/home/user/Projects/venvs/.my-api/lib/python3.10/site-packages/mlflow/artifacts/__init__.py", line 64, in download_artifacts
    return _download_artifact_from_uri(artifact_uri, output_path=dst_path)
  File "/home/user/Projects/venvs/.my-api/lib/python3.10/site-packages/mlflow/tracking/artifact_utils.py", line 116, in _download_artifact_from_uri
    return repo.download_artifacts(artifact_path=artifact_path, dst_path=output_path)
  File "/home/user/Projects/venvs/.my-api/lib/python3.10/site-packages/mlflow/store/artifact/runs_artifact_repo.py", line 131, in download_artifacts
    return self.repo.download_artifacts(artifact_path, dst_path)
  File "/home/user/Projects/venvs/.my-api/lib/python3.10/site-packages/mlflow/store/artifact/local_artifact_repo.py", line 85, in download_artifacts
    return super().download_artifacts(artifact_path, dst_path)
  File "/home/user/Projects/venvs/.my-api/lib/python3.10/site-packages/mlflow/store/artifact/artifact_repo.py", line 284, in download_artifacts
    raise MlflowException(
mlflow.exceptions.MlflowException: The following failures occurred while downloading one or more artifacts from /mlflow/mlruns/0/3894c76664d24c43a537de715a25d664/artifacts:
##### File model #####
[Errno 2] No such file or directory: '/mlflow/mlruns/0/3894c76664d24c43a537de715a25d664/artifacts/model'
</code></pre>
<p>I just really don't understand why it gets as far as retrieving all the metadata (version, run ID, etc.) but then can't download the artifacts. I gather this has something to do with the way the artifacts are "mounted" inside the Docker container file system, but I then wonder: how <em>do</em> I properly download artifacts using such a local Docker setup?</p>
<p>It's also worth mentioning that I can "just" cURL the artifacts directly using a direct link and run id such as <code>http://localhost:5000/get-artifact?path=model/model.pkl&run_uuid=e6d302306fe446ae92e6ccefcad67167</code>, so it is accessible.</p>
<p>Thank you in advance!</p>
|
<python><docker><mlflow>
|
2024-11-25 15:18:29
| 0
| 1,233
|
mabergerx
|
79,223,449
| 4,706,711
|
Why is ValueError not handled by my Django middleware?
|
<p>I am implementing a basic view that raises a <code>ValueError</code>:</p>
<pre><code>from rest_framework.views import APIView
from django.http import HttpResponse

class ThreadView(APIView):
    def get(self, request, *args, **kwargs):
        limit = int(request.GET.get('limit', 10))
        if limit < 0:
            raise ValueError("Limit must be a positive number")
        return HttpResponse("OK", status=200)
</code></pre>
<p>And I made a simple middleware as well:</p>
<pre><code>from django.http import JsonResponse

class ErrorMiddlewareHandler:
    """
    Custom middleware to handle Errors globally and return a standardized JSON response.
    """
    def __init__(self, get_response):
        self.get_response = get_response

    def __call__(self, request):
        try:
            # Process the request and pass to the view
            response = self.get_response(request)
        except ValueError as v:
            print("Middleware ", v)
            response = JsonResponse({'msg': str(v)}, status=400)  # You can change status as needed
        except Exception as e:
            response = JsonResponse({'msg': "Internal error occurred"}, status=500)
        return response
</code></pre>
<p>I registered it into <code>settings.py</code>:</p>
<pre><code>MIDDLEWARE = [
    'django.middleware.security.SecurityMiddleware',
    'django.contrib.sessions.middleware.SessionMiddleware',
    'django.middleware.common.CommonMiddleware',
    'django.middleware.csrf.CsrfViewMiddleware',
    'django.contrib.auth.middleware.AuthenticationMiddleware',
    'django.contrib.messages.middleware.MessageMiddleware',
    'django.middleware.clickjacking.XFrameOptionsMiddleware',
    'assistant_api.middleware.ErrorMiddlewareHandler.ErrorMiddlewareHandler',  # <<< This One
]
</code></pre>
<p>But the middleware does not seem to be able to handle it:</p>
<p><a href="https://i.sstatic.net/26ijw5AM.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/26ijw5AM.png" alt="enter image description here" /></a></p>
<p>Despite being registered in the middleware list. Do you know why?</p>
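<p>To show what I expected, here is the same idea reduced to plain Python with no Django involved (the names are mine, purely for illustration). When the inner callable raises, the wrapping <code>try</code>/<code>except</code> catches it:</p>

```python
def view(request):
    # Stands in for ThreadView.get raising on a negative limit
    raise ValueError("Limit must be a positive number")

def error_middleware(get_response):
    def handler(request):
        try:
            return get_response(request)
        except ValueError as v:
            return (400, str(v))
        except Exception:
            return (500, "Internal error occurred")
    return handler

wrapped = error_middleware(view)
status, msg = wrapped(request=None)
print(status, msg)  # 400 Limit must be a positive number
```

<p>My mental model is that Django's <code>get_response</code> should behave like the inner callable here, which is why the actual behaviour surprises me.</p>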
|
<python><django><error-handling>
|
2024-11-25 14:31:03
| 1
| 10,444
|
Dimitrios Desyllas
|
79,223,398
| 2,990,052
|
Missing Pandas methods in PyCharm (JetBrains) code completion
|
<pre><code>import pandas as pd
series = pd.Series([])
series.str.extract("")
</code></pre>
<p>While writing the above code, I expect PyCharm to start completing <code>series.str.ex...</code> with the available methods. It does not.</p>
<p>Example of what I am expecting using <code>pd...</code>:
<a href="https://i.sstatic.net/eAnE97Pv.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/eAnE97Pv.png" alt="pd code completion" /></a></p>
<p>What actually happens:</p>
<p><a href="https://i.sstatic.net/XIUbKNcg.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/XIUbKNcg.png" alt="series.str. completion" /></a></p>
<p>This works in VSCode, and works with most other libraries.</p>
<p>PyCharm 2024.3</p>
|
<python><pandas><pycharm><jetbrains-ide>
|
2024-11-25 14:13:14
| 1
| 855
|
PeterH
|
79,223,224
| 6,151,948
|
Cross-Region S3 Copy with SourceClient Fails in Boto3 and AWS CLI on Scaleway
|
<h3>TL;DR</h3>
<p>I have issues using boto3/the AWS CLI to copy files between buckets that are in different regions (note: I am using Scaleway as my cloud provider, not AWS).
I could not get it to work using boto3, but managed to find a solution using rclone. I would like to know whether boto3 is still a possibility, to limit the number of dependencies in my stack.</p>
<h3>Description</h3>
<p>When performing a cross-region S3 copy operation using Boto3 (or the AWS CLI), the <code>SourceClient</code> parameter in Boto3 and the <code>--endpoint-url</code> parameter in the AWS CLI are not applied consistently. This results in errors when attempting to copy objects from a source bucket in one region to a destination bucket in another region without downloading the objects locally.</p>
<p>Expected Behavior: The object should copy successfully from the source bucket to the destination bucket across regions, using the SourceClient to correctly resolve the source bucket's region.</p>
<p>Actual Behavior: an error is raised.</p>
<pre><code>botocore.exceptions.ClientError: An error occurred (NoSuchBucket) when calling the CopyObject operation: The specified bucket does not exist
</code></pre>
<p>The copy command does not use information from the <code>SourceClient</code> input, and only uses the info (credentials, location, etc.) from the client on which the copy method was called.</p>
<p>I also tried this with the aws cli, but got the same results:</p>
<pre><code>aws s3 sync s3://source-bucket s3://dest-bucket \
    --source-region fr-par \
    --region nl-ams \
    --endpoint-url https://s3.fr-par.scw.cloud \
    --profile mys3profile
</code></pre>
<p>The aws cli seems to fall back on an amazonaws endpoint:</p>
<pre><code>fatal error: Could not connect to the endpoint URL: "https://source-bucket.s3.fr-par.amazonaws.com/?list-type=2&prefix=&encoding-type=url"
</code></pre>
<h3>Reproduction Steps:</h3>
<pre class="lang-py prettyprint-override"><code>import boto3
from dotenv import dotenv_values

config = dotenv_values(".env")

# Initialize source and destination clients
s3_session = boto3.Session(
    aws_access_key_id=config.get("SCW_ACCESS_KEY"),
    aws_secret_access_key=config.get("SCW_SECRET_KEY"),
    region_name="fr-par",
)
src_s3 = s3_session.client(
    service_name="s3",
    region_name="fr-par",
    endpoint_url="https://s3.fr-par.scw.cloud",
)

s3_session = boto3.Session(
    aws_access_key_id=config.get("SCW_ACCESS_KEY"),
    aws_secret_access_key=config.get("SCW_SECRET_KEY"),
    region_name="nl-ams",
)
dest_s3 = s3_session.client(
    service_name="s3",
    region_name="nl-ams",
    endpoint_url="https://s3.nl-ams.scw.cloud",
)

# Set up source and destination parameters
copy_source = {
    "Bucket": "source_bucket_name",
    "Key": "source_object_name",
}

# Attempt to copy with SourceClient
dest_s3.copy(
    copy_source,
    "destination_bucket_name",
    "source_object_name",
    SourceClient=src_s3
)
</code></pre>
<h3>Possible Solution</h3>
<p>I could not get it to work using boto3, but I managed to get a solution that was acceptable to me using <a href="https://rclone.org/s3/#scaleway" rel="nofollow noreferrer">rclone.</a></p>
<p>Example config to be placed in <code>~.conf/rclone/rclone.conf</code>:</p>
<pre class="lang-bash prettyprint-override"><code>[scw_s3_fr]
type = s3
provider = Scaleway
access_key_id = ...
secret_access_key = ...
region = fr-par
endpoint = s3.fr-par.scw.cloud
acl = private
[scw_s3_nl]
type = s3
provider = Scaleway
access_key_id = ...
secret_access_key = ...
region = nl-ams
endpoint = s3.nl-ams.scw.cloud
acl = private
</code></pre>
<p>sync the source to the destination one-way:</p>
<pre class="lang-bash prettyprint-override"><code>rclone sync scw_s3_fr:source-bucket scw_s3_nl:destination-bucket -P --metadata --checksum --check-first
</code></pre>
<h3>the actual question</h3>
<p>Does anybody know what I did wrong here? Or could anyone guide me in the right direction to get the configuration set up right?
My short-term needs are currently all met, but I wonder if a pure-boto3 solution is still possible.</p>
<h3>Environment details</h3>
<p>Python 3.11.2 (main, Mar 7 2023, 16:53:12) [GCC 12.2.1 20230201] on linux
boto3='1.35.66'</p>
|
<python><amazon-web-services><amazon-s3><boto3><rclone>
|
2024-11-25 13:24:07
| 1
| 942
|
Daan
|
79,223,032
| 16,895,246
|
tkinter set a different color for the tick and text in a checkbox
|
<p>I'm trying to use checkboxes in tkinter. My GUI has a dark theme, so I'd like to use white text on a dark background. Unfortunately, if I do this by setting <code>fg="white"</code> and <code>bg="black"</code>, then I get a black background everywhere except the box where the tick appears, which remains white. This means that the tick is white on a white background and therefore invisible.</p>
<p>Is there some way to either change the background of the box where the tick appears or, preferably, set the color of the tick mark independent of the rest of the text i.e. so I could have the check itself be a black tick on a white background while the rest of the widget consists of white text on a black background.</p>
<p>To illustrate the issue:</p>
<pre><code>import tkinter as tk
root = tk.Tk()
var = tk.BooleanVar()
checkbutton = tk.Checkbutton(root, text="example", variable=var, bg="black", fg="white")
checkbutton.grid()
tk.mainloop()
</code></pre>
|
<python><tkinter><tkinter.checkbutton>
|
2024-11-25 12:28:53
| 0
| 1,441
|
Pioneer_11
|
79,223,026
| 2,649,681
|
Python Gimp plugin failing to register - gimp_wire_read() error
|
<p>I'm attempting to write a Python plugin for Gimp (2.10.38). Right now, it is absolutely barebones to just register the function and run <code>main</code>, but it is failing to register and show up in the program.</p>
<pre><code>#!/usr/bin/python
from gimpfu import *

def msg(txt):
    pdb.gimp_message(txt)

msg('Running knitpc')

def knitpc(timg, tdrawable):
    pdb.gimp_message('Hello ')

register(
    'knit_pc_plugin',
    'Test plugin',
    'Test plugin',
    'Me',
    'Me',
    '2024',
    '<Image>/Image/Knit...',
    '*',
    [],
    [],
    knitpc
)
msg('Registered function')

main()
msg('Ran main')
</code></pre>
<p>I do not find an option for it under the <code>Image</code> menu, nor can I find in the Plugin Browser. Running <code>gimp-2.10 --verbose</code> shows the following in the console with no further information:</p>
<pre><code>Querying plug-in: 'C:\Users\alpac\AppData\Roaming\GIMP\2.10\plug-ins\knit_pc_plugin.py'
gimp-2.10.exe: LibGimpBase-WARNING: gimp-2.10.exe: gimp_wire_read(): error
</code></pre>
<p>Running <code>python knit_pc_plugin.py</code> gives the <code>No module named 'gimpfu'</code> error, but does not indicate any syntax errors.</p>
<p>I have also tried removing the <code>timg, tdrawable</code> params from <code>knitpc</code>, which made no difference.</p>
|
<python><gimp><gimpfu><python-fu>
|
2024-11-25 12:26:53
| 1
| 848
|
user2649681
|
79,222,805
| 2,136,286
|
pip install from private repo via GitHub Actions workflow
|
<p><strong>TL;DR</strong>: How do you pass GITHUB_TOKEN to <code>pip</code> in callable workflow? Any additional setup required?</p>
<p>I'm trying to write a set of workflows that installs dependencies located on a private GitHub repo.</p>
<p>The repos with packages have <em>"Access from repositories in the 'My Org' organization"</em> enabled.</p>
<p><code>GITHUB_TOKEN</code> has write permissions set by default.</p>
<p>In project's action the workflow is called with <code>GITHUB_TOKEN</code> passed explicitly:</p>
<p><code>.github/workflows/deploy_myproject.yaml</code>:</p>
<pre class="lang-yaml prettyprint-override"><code>name: Deploy "My Project" to Sandbox
run-name: 🚀 Deploying to sbx

on:
  push:
    branches:
      - sandbox
    paths:
      - 'my_project/**'
      - .github/workflows/deploy_myproject.yaml

jobs:
  deploy:
    uses: MyORG/github-actions/.github/workflows/call_pip_install.yaml@main
    secrets:
      GH_TOKEN: ${{secrets.GITHUB_TOKEN}}
    with:
      env_name: sandbox
</code></pre>
<p><code>MyORG/github-actions/.github/workflows/call_pip_install.yaml</code>:</p>
<pre class="lang-yaml prettyprint-override"><code>name: Deploy PIP
run-name: 🚀 Deploying to ${{ inputs.env_name }}

on:
  workflow_call:
    inputs:
      env_name:
        description: 'Target environment'
        required: true
        type: string
        default: "sandbox"
    secrets:
      GH_TOKEN:
        description: 'A token passed from the caller workflow'
        required: true

jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - name: Set up Python 3.12
        uses: actions/setup-python@v5
        with:
          python-version: 3.12
      - name: 'Set up Python & deploy'
        run: |
          python3 -m venv venv
          source ./venv/bin/activate
          echo "Installing dependencies"
          git config --global url."https://${{ secrets.GH_TOKEN }}@github".insteadOf https://github
          git config --list
          ./venv/bin/pip3 install -r requirements.txt -v
</code></pre>
<p><code>git config --list</code> is just to confirm there's <code>url.https://***@github.insteadof=https://github</code></p>
<p><code>requirements.txt</code>:</p>
<pre><code>module-1 @ git+https://github.com/MyORG/module-1-repo@main
module-2 @ git+https://github.com/MyORG/module-2-repo@main
</code></pre>
<p>I also tried with <code>{{ github.token }}</code> inside <code>call_pip_install.yaml</code>, since that's just a meta annotation that should be accessible across workflows, but I ended up explicitly passing the token just to be sure.</p>
<p>In both cases the workflow fails with:</p>
<pre><code>fatal: could not read Password for 'https://***@github.com': No such device or address
</code></pre>
<p>The same configuration works if I inject personal token instead of <code>GITHUB_TOKEN</code>.</p>
<p>After researching many other posts on SO I've also tried these formats of <code>requirements.txt</code> instead of global git config:</p>
<pre><code>module-1 @ git+https://x-access-token:{GITHUB_TOKEN}@github.com/MyORG/module-1-repo@main
module-1 @ git+https://oauth2:{GITHUB_TOKEN}@github.com/MyORG/module-1-repo@main
</code></pre>
<p>but they fail with <code>remote: Support for password authentication was removed on August 13, 2021.</code> or similar errors.</p>
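<p>For completeness, the one variant I haven't listed above would be to give the rewrite an explicit <code>username:token</code> pair, so git never falls back to prompting for a password (a sketch only; <code>&lt;TOKEN&gt;</code> is a placeholder, and I haven't verified this changes anything for <code>GITHUB_TOKEN</code>):</p>

```ini
# ~/.gitconfig equivalent of the rewrite, with an explicit dummy username
[url "https://x-access-token:<TOKEN>@github.com/"]
    insteadOf = https://github.com/
```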
<p>I don't want to use SSH keys or pass a PAT to the workflow, since those are long-lived secrets; I want to use <code>GITHUB_TOKEN</code> because it's disposable.</p>
|
<python><github><pip><github-actions>
|
2024-11-25 11:17:42
| 1
| 676
|
the.Legend
|
79,222,706
| 12,466,687
|
How to save PDF after cropping from each page of PDF using pdfplumber?
|
<p>I am using a PDF with multiple pages that has a table on top of each page that I want to get rid of. So I am cropping the PDF after the top table.</p>
<p>What I don't know is how to combine or save it as 1 single PDF after cropping it.</p>
<p>I have tried below:</p>
<pre><code>import pandas as pd
import pdfplumber

path = r"file-tests.pdf"

with pdfplumber.open(path) as pdf:
    pages = pdf.pages
    # loop over each page
    for p in pages:
        print(p)
        # this will give us the box dimensions in (x0, y0, x1, y1) format
        bbox_vals = p.find_tables()[0].bbox
        # take the y1 value so as to keep/extract the portion of the pdf page after the 1st table
        y0_top_table = bbox_vals[3]
        print(y0_top_table)
        # crop the pdf page from left to right, from the y value taken above down to the bottom of the page
        p.crop((0, y0_top_table, 590, 840))
</code></pre>
<p>Output:</p>
<pre><code><Page:1>
269.64727650000003
<Page:2>
269.64727650000003
<Page:3>
269.64727650000003
<Page:4>
269.64727650000003
<Page:5>
269.64727650000003
<Page:6>
269.64727650000003
<Page:7>
269.64727650000003
<Page:8>
269.64727650000003
<Page:9>
269.64727650000003
<Page:10>
269.64727650000003
<Page:11>
269.64727650000003
<Page:12>
269.64727650000003
<Page:13>
269.64727650000003
<Page:14>
269.64727650000003
<Page:15>
269.64727650000003
<Page:16>
269.64727650000003
<Page:17>
269.64727650000003
<Page:18>
269.64727650000003
<Page:19>
269.64727650000003
<Page:20>
269.64727650000003
</code></pre>
<p>How do I append, save these cropped pages into 1 PDF?</p>
<p><strong>Update</strong>:</p>
<p>Seems like it's not possible to write or save a PDF file using <code>pdfplumber</code>, as per this <a href="https://github.com/jsvine/pdfplumber/discussions/440" rel="nofollow noreferrer">discussion link</a>.</p>
<p>(Not sure why this question was voted down. Whoever did that should also provide an answer or a link to where this has already been answered.)</p>
<p><strong>Update2:</strong></p>
<pre><code>from pdfrw import PdfWriter
</code></pre>
<pre><code>output_pdf = PdfWriter()

with pdfplumber.open(path) as pdf:
    pages = pdf.pages
    for p in pages:
        print(p)
        bbox_vals = p.find_tables()[0].bbox
        y0_top_table = bbox_vals[3]
        print(y0_top_table)
        cropped_pdf = p.crop((0, y0_top_table, 590, 840))
        print(type(cropped_pdf))
        output_pdf.addpage(cropped_pdf)

output_pdf.write(r"tests_cropped_file.pdf")
</code></pre>
<p><strong>Output & Error:</strong></p>
<pre><code><Page:1>
269.64727650000003
<class 'pdfplumber.page.CroppedPage'>
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
Cell In[219], line 13
11 cropped_pdf = p.crop((0, y0_top_table, 590, 840))
12 print(type(cropped_pdf))
---> 13 output_pdf.addpage(cropped_pdf)
File c:\Users\vinee\anaconda3\envs\llma_py_3_12\Lib\site-packages\pdfrw\pdfwriter.py:270, in PdfWriter.addpage(self, page)
268 def addpage(self, page):
269 self._trailer = None
--> 270 if page.Type != PdfName.Page:
271 raise PdfOutputError('Bad /Type: Expected %s, found %s'
272 % (PdfName.Page, page.Type))
273 inheritable = page.inheritable # searches for resources
AttributeError: 'CroppedPage' object has no attribute 'Type'
</code></pre>
<p><strong>Update 3:</strong></p>
<p>Seems like this issue of cropping and saving a PDF was also raised in 2018 but had no solution, as per this <a href="https://github.com/jsvine/pdfplumber/issues/62" rel="nofollow noreferrer">discussion link</a>.</p>
<p>If anyone knows a workaround then please let me know. Would really appreciate it!</p>
|
<python><pdf><pdfplumber>
|
2024-11-25 10:49:14
| 1
| 2,357
|
ViSa
|
79,222,611
| 6,555,196
|
Dedicated Exception handling function doesn't really except
|
<p>In order to avoid using numerous <code>try:except</code> blocks like the ones below,
I've created a dedicated function that should handle any exception.
The problem is that the data it should handle does not exist, so the lookup fails before it even reaches the handler function,
basically making the handler obsolete and forcing me to use a lot of <code>try:except</code> blocks as before.<br />
Furthermore, I can't use a single <code>try:except</code> block, since not all of the parameters are needed all the time.</p>
<p>What is a better way to achieve this?</p>
<ul>
<li>Initial blocks:</li>
</ul>
<pre><code>try:
    print('timestamp', event['timestamp'])
except:
    print('No timestamp value received')

try:
    print('description', event['description'])
except:
    print('No description value received')
</code></pre>
<ul>
<li>2nd rewrite, code example:</li>
</ul>
<pre><code>def get_data(event):
    status = handle_exception(event['status'])
    print(status)

def handle_exception(parameter):
    try:
        result = parameter
    except:
        result = 'NONE'
    return result
</code></pre>
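<p>For illustration, the shape of helper I'm after is roughly this (a sketch of the idea, not code I have working): perform the lookup <em>inside</em> the helper, so the exception is raised where it can be caught, instead of evaluating <code>event['status']</code> before the call:</p>

```python
def safe_get(mapping, key, default='NONE'):
    """Look the key up inside the helper, so a missing key is caught here."""
    try:
        return mapping[key]
    except (KeyError, TypeError):
        return default

event = {'timestamp': '2024-11-25 10:19:51'}
print('timestamp', safe_get(event, 'timestamp'))
print('description', safe_get(event, 'description'))  # falls back to 'NONE'
```

<p>With the standard library this is essentially <code>dict.get(key, default)</code>, which avoids the exception entirely.</p>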
|
<python><python-3.x><aws-lambda>
|
2024-11-25 10:19:51
| 1
| 305
|
soBusted
|
79,222,586
| 8,981,066
|
Pydantic Nested Model Validation
|
<p>I need help setting up a nested Pydantic model that accepts only restricted values.</p>
<p>My current code:</p>
<pre><code>from enum import Enum

class Category(str, Enum):
    Category = 'category'

class Sub_Category(str, Enum):
    Sub_Category = 'sub_category'
</code></pre>
<p>However, I want to restrict this to categories and sub-categories that I have already defined in my master category list. For example, if Category = 'Meat', then Sub_Category must be one of 'Beef', 'Poultry', 'Seafood', etc.</p>
<p>I'm not sure how to relate the nested Pydantic models in the way I described above.</p>
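<p>To sketch the kind of relation I mean (hypothetical names; assumes Pydantic v2's <code>model_validator</code>): keep a master mapping from each category to its allowed sub-categories, and validate the pair after the fields are parsed:</p>

```python
from enum import Enum

from pydantic import BaseModel, model_validator

class Category(str, Enum):
    MEAT = 'Meat'
    PRODUCE = 'Produce'

class SubCategory(str, Enum):
    BEEF = 'Beef'
    POULTRY = 'Poultry'
    SEAFOOD = 'Seafood'
    FRUIT = 'Fruit'

# Master category list: which sub-categories belong to which category.
CATEGORY_MAP = {
    Category.MEAT: {SubCategory.BEEF, SubCategory.POULTRY, SubCategory.SEAFOOD},
    Category.PRODUCE: {SubCategory.FRUIT},
}

class Item(BaseModel):
    category: Category
    sub_category: SubCategory

    @model_validator(mode='after')
    def check_sub_category(self):
        # Reject pairs that are not in the master list.
        if self.sub_category not in CATEGORY_MAP[self.category]:
            raise ValueError(
                f'{self.sub_category.value!r} is not a sub-category of '
                f'{self.category.value!r}'
            )
        return self
```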
|
<python><python-3.x><pydantic>
|
2024-11-25 10:14:31
| 1
| 1,147
|
Oday Salim
|
79,222,489
| 2,481,350
|
Why is this DFS + Memoization solution for selecting indices in two arrays too slow while a similar approach is more efficient?
|
<p>I was solving LeetCode problem <a href="https://leetcode.com/problems/maximum-multiplication-score/description/" rel="nofollow noreferrer">3290. Maximum Multiplication Score</a>:</p>
<blockquote>
<p>You are given an integer array <code>a</code> of size 4 and another integer array <code>b</code> of size at least 4.</p>
<p>You need to choose 4 indices <code>i<sub>0</sub>, i<sub>1</sub>, i<sub>2</sub>, and i<sub>3</sub></code> from the array <code>b</code> such that <code>i<sub>0</sub> < i<sub>1</sub> < i<sub>2</sub> < i<sub>3</sub></code>. Your score will be equal to the value <code>a[0] * b[i<sub>0</sub>] + a[1] * b[i<sub>1</sub>] + a[2] * b[i<sub>2</sub>] + a[3] * b[i<sub>3</sub>]</code>.</p>
<p>Return the maximum score you can achieve.</p>
<h3>Example:</h3>
<p><strong>Input:</strong> <code>a = [3,2,5,6], b = [2,-6,4,-5,-3,2,-7]</code></p>
<p><strong>Output:</strong> 26</p>
<p><strong>Explanation:</strong> Choose indices 0, 1, 2, and 5 from <code>b</code>.</p>
</blockquote>
<h2>My Attempt:</h2>
<p>I wrote two recursive + memoization solutions, but one of them exceeds the given time limit ("Time Limit Exceeded" TLE) for larger n. Here are the two approaches:</p>
<h3>Solution 1 (TLE)</h3>
<pre><code>class Solution:
    def maxScore(self, a, b):
        def dfs(i, j):
            if i == len(a):  # All elements in `a` processed
                return 0
            if (i, j) in memo:  # Return memoized result
                return memo[(i, j)]
            max_result = float("-inf")
            for index in range(j, len(b)):
                if len(b) - index < len(a) - i:  # Not enough elements left in `b`
                    break
                max_result = max(max_result, a[i] * b[index] + dfs(i + 1, index + 1))
            memo[(i, j)] = max_result
            return max_result

        memo = {}
        return dfs(0, 0)
</code></pre>
<h3>Solution 2 (Works)</h3>
<pre><code>class Solution:
    def maxScore(self, a, b):
        def dfs(i, picked):
            if picked == 4:  # All elements in `a` processed
                return 0
            if len(b) - i + picked < 4:  # Not enough elements left in `b`
                return float("-inf")
            if (i, picked) in memo:  # Return memoized result
                return memo[(i, picked)]
            pick = a[picked] * b[i] + dfs(i + 1, picked + 1)  # Pick this index
            skip = dfs(i + 1, picked)  # Skip this index
            memo[(i, picked)] = max(pick, skip)
            return memo[(i, picked)]

        memo = {}
        return dfs(0, 0)
</code></pre>
<h2>Question:</h2>
<p>Why does Solution 1 lead to TLE while Solution 2 runs efficiently?
I think that without memoization both solutions have the same O(2^n) pick-or-skip time complexity, but I can't pinpoint why the pruning and memoization don't seem to help Solution 1 as much as they help Solution 2. Could someone provide a detailed explanation?</p>
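<p>To make the asymptotics concrete, here is a side-by-side sketch of the two shapes (rewritten with <code>functools.lru_cache</code> for brevity; not the exact LeetCode submissions): the loop version does O(len(b)) work per memoized state over O(len(a)·len(b)) states, while the pick/skip version does O(1) work per state:</p>

```python
from functools import lru_cache

def max_score_loop(a, b):
    """Solution-1 shape: an O(len(b)) loop inside every memoized state."""
    @lru_cache(maxsize=None)
    def dfs(i, j):
        if i == len(a):
            return 0
        best = float('-inf')
        # Only indices that leave enough room for the remaining picks.
        for index in range(j, len(b) - (len(a) - i) + 1):
            best = max(best, a[i] * b[index] + dfs(i + 1, index + 1))
        return best
    return dfs(0, 0)

def max_score_pick_skip(a, b):
    """Solution-2 shape: O(1) pick-or-skip work per memoized state."""
    @lru_cache(maxsize=None)
    def dfs(i, picked):
        if picked == len(a):
            return 0
        if len(b) - i < len(a) - picked:
            return float('-inf')
        pick = a[picked] * b[i] + dfs(i + 1, picked + 1)
        skip = dfs(i + 1, picked)
        return max(pick, skip)
    return dfs(0, 0)
```

<p>Both return 26 on the example above.</p>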
|
<python><time-complexity><dynamic-programming><depth-first-search><memoization>
|
2024-11-25 09:57:40
| 1
| 2,062
|
souparno majumder
|
79,222,345
| 1,986,532
|
keras model does not learn if using tf.data pipeline
|
<p>A <code>keras</code> model <em>does</em> learn when using NumPy arrays as the input, but fails to make any progress when reading data from a <code>tf.data</code> pipeline. What could the reason be?</p>
<p>In particular, the model consumes batched multidimensional time series (so every point is an N x M tensor) and solves a classification problem. If the data are prepared in advance, by aggregating the time series in a large Numpy array, then the model successfully learns as indicated by a significant increase in the accuracy. However, when exactly the same input data is prepared using <code>tf.data</code> pipeline, the accuracy remains at the baseline level.</p>
<p>I compared the two sets of data by writing to disk, and they are identical. Also the types match.</p>
<p>Tried disabling threading (IIUC) by setting</p>
<pre><code>options.threading.private_threadpool_size = 1
</code></pre>
<p>and experimenting with a bunch of <code>options.experimental_optimization</code> options.</p>
<p>Could it be the case that the data are read in parallel from the <code>tf.data</code> dataset as opposed to being read sequentially from the Numpy array?</p>
<p>For completeness, here's the pipeline, where <code>np_array</code> contains "raw" data:</p>
<pre class="lang-py prettyprint-override"><code>ds = tf.data.Dataset.from_tensor_slices(np_array.T)

y_ds = (
    ds
    .skip(T - 1)
    .map(lambda s: s[-1] - 1)
    .map(lambda y: to_categorical(y, 3))
)

X_ds = (
    ds
    .map(lambda s: s[:n_features])
    .window(T, shift=1, drop_remainder=True)
    .flat_map(lambda x: x.batch(T, drop_remainder=True))
    .map(lambda x: tf.expand_dims(x, -1))
)

Xy_ds = (
    tf.data.Dataset.zip(X_ds, y_ds)
    .batch(size_batch)
    .repeat(n_epochs * size_batch)
    .prefetch(tf.data.AUTOTUNE)
)
</code></pre>
<p>and how <code>fit()</code> is called (the <code>steps_per_epoch</code> value is correct)</p>
<pre class="lang-py prettyprint-override"><code>model.fit(
    Xy_train,
    epochs=n_epochs,
    steps_per_epoch=199,
    verbose=2
)
</code></pre>
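<p>One sanity check I can run without TensorFlow (a NumPy-only sketch with made-up shapes; <code>T</code>, <code>n_features</code> and the layout of <code>np_array</code> are as in the snippet above) is to rebuild the windows and labels by hand and confirm the intended alignment: the window covering steps <code>t .. t+T-1</code> should pair with the label at step <code>t+T-1</code>:</p>

```python
import numpy as np

# Made-up stand-in: n_features feature rows plus one label row (values 1..3),
# with time running along the columns, as in np_array above.
T, n_features, n_steps = 3, 2, 8
rng = np.random.default_rng(0)
np_array = np.vstack([
    rng.normal(size=(n_features, n_steps)),
    rng.integers(1, 4, size=(1, n_steps)).astype(float),
])

series = np_array.T  # what from_tensor_slices(np_array.T) iterates over
# Windows of length T over the feature columns only.
X_ref = np.stack([series[t:t + T, :n_features] for t in range(n_steps - T + 1)])
# skip(T - 1) drops the first T-1 labels, so label t+T-1 pairs with window t.
y_ref = series[T - 1:, -1] - 1

assert len(X_ref) == len(y_ref)
```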
|
<python><numpy><tensorflow><keras><tf.data>
|
2024-11-25 09:07:39
| 1
| 2,552
|
Dmitry Zotikov
|
79,222,113
| 4,471,840
|
Is there a shortcut for python docstring to document implicit raised exception?
|
<p>I'm working on a few Python modules whose methods (by design) don't catch exceptions from functions called in their bodies. As far as I understand from my research, you should document any relevant exceptions in docstring <code>:raises ...:</code> blocks; in my case primarily to help users or their IDEs, not for automated documentation.</p>
<p>As given in the sample below, I'd have to repeat the <code>:raises ...:</code> blocks in several methods with the same information:</p>
<pre class="lang-py prettyprint-override"><code>def myFunction(param1, param2):
    """
    :param str param1: first parameter...
    :param int param2: second parameter...
    :return None:
    :raises MyException1: in case...
    :raises MyException2: (from myFunction2) in case...
    :raises ValueError: (from myFunction3) in case...
    """
    if param1 is None:
        raise MyException1
    myFunction2(param1, param2)

def myFunction2(param1, param2):
    """
    :param str param1: first parameter...
    :param int param2: second parameter...
    :return None:
    :raises MyException2: in case...
    :raises ValueError: (from myFunction3) in case...
    """
    if param2 is None:
        raise MyException2
    myFunction3(param1)

def myFunction3(param):
    """
    :param str param: parameter...
    :return None:
    :raises ValueError: in case...
    """
    if param is None:
        raise ValueError
</code></pre>
<p>Is there a way to say in <code>myFunction</code> "the following could be raised here: MyException1, plus all exceptions from <code>myFunction2</code>, recursively", and likewise in <code>myFunction2</code>?</p>
<p>I.e.:</p>
<pre class="lang-py prettyprint-override"><code>def myFunction(param1, param2):
    """
    :param str param1: first parameter...
    :param int param2: second parameter...
    :return None:
    :raises MyException1: in case...
    :raisesfrom myFunction2: ??? does anything like this exist, or is there some other way ???
    """
    if param1 is None:
        raise MyException1
    myFunction2(param1, param2)

...
</code></pre>
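<p>To illustrate what I mean, a hypothetical helper (nothing standard, just a sketch) could copy the callee's <code>:raises ...:</code> lines into the caller's docstring at import time, so the information is written only once:</p>

```python
import re

def inherit_raises(*callees):
    """Append each callee's ':raises ...:' docstring lines to the
    decorated function's docstring (sketch; assumes reST-style fields)."""
    def decorator(func):
        extra = []
        for callee in callees:
            for line in (callee.__doc__ or '').splitlines():
                if re.match(r'\s*:raises\s', line):
                    extra.append(f'    {line.strip()} (from {callee.__name__})')
        if extra:
            func.__doc__ = (func.__doc__ or '') + '\n'.join(extra) + '\n'
        return func
    return decorator

def myFunction3(param):
    """
    :param str param: parameter...
    :raises ValueError: in case...
    """
    if param is None:
        raise ValueError

@inherit_raises(myFunction3)
def myFunction2(param1, param2):
    """
    :param str param1: first parameter...
    :raises MyException2: in case...
    """
    myFunction3(param1)
```

<p>The obvious caveat is that IDEs reading docstrings statically would not see the merged lines, so this would mainly help <code>help()</code> and runtime documentation tools.</p>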
|
<python><docstring>
|
2024-11-25 07:58:19
| 0
| 446
|
bohrsty
|