| QuestionId (int64, 74.8M to 79.8M) | UserId (int64, 56 to 29.4M) | QuestionTitle (string, 15 to 150 chars) | QuestionBody (string, 40 to 40.3k chars) | Tags (string, 8 to 101 chars) | CreationDate (stringdate, 2022-12-10 09:42:47 to 2025-11-01 19:08:18) | AnswerCount (int64, 0 to 44) | UserExpertiseLevel (int64, 301 to 888k) | UserDisplayName (string, 3 to 30 chars, nullable ⌀) |
|---|---|---|---|---|---|---|---|---|
79,242,121
| 11,062,613
|
How to rewrite a DataFrame-based grouped rolling aggregation function to use Polars Expressions?
|
<p>I need to apply a custom grouped rolling aggregation function to every nth row for specific columns of a DataFrame. My current implementation works with a DataFrame as an argument and returns the modified DataFrame.</p>
<p>I have a few issues with the current approach:</p>
<ul>
<li>The function requires specifying many column names.</li>
<li>Irrelevant columns (if present) are processed, even though they aren't needed.</li>
</ul>
<p>I would like to rewrite this function so that it works with expressions as arguments and also returns the result as an expression. I'm not sure if this approach is more efficient, but it seems like it would make the handling cleaner.</p>
<p>Unfortunately, I’m having trouble figuring out how to implement this with Polars expressions.</p>
<p>Can anyone guide me on how to convert this DataFrame-based approach into an expression-based one?</p>
<p>Here’s my current function that works with DataFrames:</p>
<pre><code>from typing import Callable, Sequence

import numpy as np
import polars as pl
from numba import guvectorize


@guvectorize(['(float64[:], int64, float64[:])'], '(n),()->(n)')
def rolling_func(input_array, window_size, out):
    """Example for a custom rolling function with a specified window size."""
    n = len(input_array)
    for i in range(n):
        start = max(i - window_size + 1, 0)
        out[i] = np.mean(input_array[start:i+1])


def apply_rolling_gathered_agg(
        df,
        func: Callable,
        window_size: int,
        *func_args,
        group_col: str | list[str] | None = None,
        value_col: str | None = None,
        result_col: str = 'result',
        every_nth: int = 1,
        window_buffer: int = 0,
        return_dtype: pl.DataType = pl.Float64) -> pl.DataFrame:
    """
    Apply a custom rolling aggregation function to a DataFrame, with grouping and every nth value selection.

    This function performs a rolling aggregation on a specified value column in a Polars DataFrame. It allows
    grouping by one or more columns, gathering every nth value, and applying a custom aggregation function
    (e.g., `rolling_func`) with a specified window size and optional buffer.

    Args:
        df (pl.DataFrame): The DataFrame to operate on.
        func (Callable): The aggregation function to apply to each rolling window.
        window_size (int): The size of the window over which to apply the aggregation function.
        *func_args: Additional arguments to pass to the custom function.
        group_col (str | list[str] | None, optional): The column(s) to group by. If `None`, the first column is used.
        value_col (str | None, optional): The column to apply the rolling function to. If `None`, the last column is used.
        result_col (str, optional): The name of the result column in the output DataFrame. Default is 'result'.
        every_nth (int, optional): The step size for gathering values within each group. Default is 1.
        window_buffer (int, optional): A buffer to add around the rolling window, extending the window on both ends. Default is 0.
        return_dtype (pl.DataType, optional): The desired data type for the result column. Default is `pl.Float64`.

    Returns:
        pl.DataFrame: A DataFrame containing the results of the rolling aggregation, with one row per group.

    Example:
        # Create a sample DataFrame with two groups 'A' and 'B', and values from 0 to 99
        df = pl.DataFrame({
            'group': np.repeat(['A', 'B'], 100),  # Repeat 'A' and 'B' for each group
            'value': np.tile(np.arange(100), 2)   # Tile the values 0 to 99 for each group
        })
        func_args = []
        res = apply_rolling_gathered_agg(
            df,
            func=rolling_func,
            window_size=3,
            *func_args,
            group_col='group',
            value_col='value',
            every_nth=10,
            window_buffer=0,
            return_dtype=pl.Float64,
        )
        print(res)
        res_pd = res.to_pandas()
    """
    # Handle cases where group_col or value_col might not be passed
    cols = df.columns
    group_col = group_col or cols[0]
    value_col = value_col or cols[-1]

    # If group_col is a list, ensure it is processed correctly
    if isinstance(group_col, list):
        group_by = group_col
    else:
        group_by = [group_col]

    # Temporary index column for rolling aggregation
    index_col = '_index'

    # Calculate the total window size
    total_window = every_nth * (window_size + window_buffer)
    period = f'{total_window}i'

    # Apply rolling aggregation
    result = (
        df
        .with_row_index(name=index_col)
        .rolling(index_column=index_col, period=period, group_by=group_by)
        .agg(
            pl.all().last(),  # pass the last element of all present columns
            pl.col(value_col)
            .reverse().gather_every(every_nth).reverse()
            .map_batches(lambda batch: func(batch, window_size, *func_args), return_dtype=return_dtype)
            .last().alias(result_col))  # This is the desired expression
        .drop(index_col)
    )
    return result
</code></pre>
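<p>For reference, the trailing-window arithmetic inside <code>rolling_func</code> can be checked against a plain-Python version (a sketch without NumPy/Numba, using the same window definition):</p>

```python
def rolling_mean(values, window_size):
    """Trailing rolling mean: out[i] averages values[max(i-w+1, 0) : i+1]."""
    out = []
    for i in range(len(values)):
        start = max(i - window_size + 1, 0)
        window = values[start:i + 1]
        out.append(sum(window) / len(window))
    return out

print(rolling_mean([1, 2, 3, 4, 5], 3))  # [1.0, 1.5, 2.0, 3.0, 4.0]
```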
<p>I want to convert this function into an expression-based function, similar to:</p>
<pre><code>def expr_apply_rolling_gathered_agg(
        group_expr: pl.Expr | Sequence[pl.Expr],   # Single or list of group column expressions
        value_expr: pl.Expr,                       # Expression for the value column (series/column)
        func: Callable,                            # The rolling aggregation function
        window_size: int,                          # Size of the rolling window
        *func_args,                                # Additional arguments for the rolling function
        every_nth: int = 1,                        # Step size for gathering values
        window_buffer: int = 0,                    # Buffer size around the window
        return_dtype: pl.DataType = pl.Float64    # Output data type
) -> pl.Expr:
    pass
</code></pre>
<p>EDIT: Is <code>rolling_map</code> a suitable replacement for <code>rolling</code> + <code>map_batches</code>? <code>rolling_map</code> seems to expect a custom aggregation function; if the function is not an aggregation, as in this case, it returns only the first result for each rolling window:</p>
<pre><code>series = pl.Series(pl.arange(10, eager=True)).rolling_map(lambda x: x+1, window_size=3)
print(series.to_list())
[None, None, 1, 2, 3, 4, 5, 6, 7, 8]
</code></pre>
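<p>For clarity on one step above: <code>reverse().gather_every(n).reverse()</code> keeps every nth element counted backwards from the end, so the last element of the window is always retained. In plain Python:</p>

```python
def gather_every_from_end(values, n):
    """Keep every nth element, counting backwards from the last element."""
    rev = values[::-1]   # reverse
    picked = rev[::n]    # gather_every(n)
    return picked[::-1]  # reverse back

print(gather_every_from_end([0, 1, 2, 3, 4, 5, 6], 3))  # [0, 3, 6]
```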
|
<python><function><python-polars>
|
2024-12-01 17:32:02
| 0
| 423
|
Olibarer
|
79,241,786
| 1,224,643
|
Sveltekit app hosted in Flask does not render
|
<p>I want to run a SvelteKit app from Flask:</p>
<pre><code>app = Flask(__name__, static_folder='sveltekit_build')
</code></pre>
<p>I don't have those static files, so instead of</p>
<pre><code>return send_from_directory('client/public', 'index.html')
</code></pre>
<p>I tried this:</p>
<pre><code> return send_from_directory('build/_app', 'index.html')
</code></pre>
<p>but in a browser, <a href="http://127.0.0.1:5000/" rel="nofollow noreferrer">http://127.0.0.1:5000/</a> shows the raw template placeholders instead of the app itself:</p>
<p><code>%sveltekit.head% %sveltekit.body%</code></p>
|
<python><flask><vite><sveltekit>
|
2024-12-01 14:33:07
| 0
| 697
|
petercli
|
79,241,735
| 8,939,181
|
unexpected transformer's dataset structure after set_transform or with_transform
|
<p>I am using the feature extractor from ViT like explained <a href="https://colab.research.google.com/github/nateraw/huggingface-hub-examples/blob/main/vit_image_classification_explained.ipynb" rel="nofollow noreferrer">here</a>.</p>
<p>And noticed a weird behaviour I cannot fully understand.</p>
<p>After loading the dataset as in that colab notebook, I see:</p>
<pre><code>ds['train'].features
{'image_file_path': Value(dtype='string', id=None), 'image':
Image(mode=None, decode=True, id=None), 'labels':
ClassLabel(names=['angular_leaf_spot', 'bean_rust', 'healthy'],
id=None)}
</code></pre>
<p>And we can access the features in both ways:</p>
<pre><code>ds['train']['labels'][0:5]
[0, 0, 0, 0, 0]
ds['train'][0:2]
{'image_file_path':
['/home/albert/.cache/huggingface/datasets/downloads/extracted/967f0d9f61a7a8de58892c6fab6f02317c06faf3e19fba6a07b0885a9a7142c7/train/angular_leaf_spot/angular_leaf_spot_train.0.jpg',
'/home/albert/.cache/huggingface/datasets/downloads/extracted/967f0d9f61a7a8de58892c6fab6f02317c06faf3e19fba6a07b0885a9a7142c7/train/angular_leaf_spot/angular_leaf_spot_train.1.jpg'],
'image': [<PIL.JpegImagePlugin.JpegImageFile image mode=RGB
size=500x500>, <PIL.JpegImagePlugin.JpegImageFile image mode=RGB
size=500x500>], 'labels': [0, 0]}
</code></pre>
<p>But after</p>
<pre><code>from datasets import load_dataset  # import added for completeness
from transformers import ViTFeatureExtractor

model_name_or_path = 'google/vit-base-patch16-224-in21k'
feature_extractor = ViTFeatureExtractor.from_pretrained(model_name_or_path)
ds = load_dataset('beans')


def transform(example_batch):
    inputs = feature_extractor([x for x in example_batch['image']], return_tensors='pt')
    inputs['labels'] = example_batch['labels']
    return inputs


prepared_ds = ds.with_transform(transform)
</code></pre>
<p>We see the features are kept:</p>
<pre><code>prepared_ds['train'].features
{'image_file_path': Value(dtype='string', id=None), 'image':
Image(mode=None, decode=True, id=None), 'labels':
ClassLabel(names=['angular_leaf_spot', 'bean_rust', 'healthy'],
id=None)}
prepared_ds['train'][0:2]
{'pixel_values': tensor([[[[-0.5686, -0.5686, -0.5608, ..., -0.0275,
0.1843, -0.2471],
...,
[-0.5843, -0.5922, -0.6078, ..., 0.2627, 0.1608, 0.2000]],
[[-0.7098, -0.7098, -0.7490, ..., -0.3725, -0.1608, -0.6000],
...,
[-0.8824, -0.9059, -0.9216, ..., -0.2549, -0.2000, -0.1216]]],
[[[-0.5137, -0.4902, -0.4196, ..., -0.0275, -0.0039, -0.2157],
...,
[-0.5216, -0.5373, -0.5451, ..., -0.1294, -0.1529, -0.2627]],
[[-0.1843, -0.2000, -0.1529, ..., 0.2157, 0.2078, -0.0902],
...,
[-0.7725, -0.7961, -0.8039, ..., -0.3725, -0.4196, -0.5451]],
[[-0.7569, -0.8510, -0.8353, ..., -0.3255, -0.2706, -0.5608],
...,
[-0.5294, -0.5529, -0.5608, ..., -0.1686, -0.1922, -0.3333]]]]), 'labels': [0, 0]}
</code></pre>
<p>But when I try to access the labels directly</p>
<pre><code>prepared_ds['train']['labels']
</code></pre>
<p>I get a <code>KeyError</code>:</p>
<pre><code>---------------------------------------------------------------------------
KeyError                                  Traceback (most recent call last)
Cell In[32], line 1
----> 1 prepared_ds['train']['labels']

File ~/anaconda3/envs/LLM/lib/python3.12/site-packages/datasets/arrow_dataset.py:2872, in Dataset.__getitem__(self, key)
   2870 def __getitem__(self, key):  # noqa: F811
   2871     """Can be used to index columns (by string names) or rows (by integer index or iterable of indices or bools)."""
-> 2872     return self._getitem(key)

File ~/anaconda3/envs/LLM/lib/python3.12/site-packages/datasets/arrow_dataset.py:2857, in Dataset._getitem(self, key, **kwargs)
   2855 formatter = get_formatter(format_type, features=self._info.features, **format_kwargs)
   2856 pa_subtable = query_table(self._data, key, indices=self._indices)
-> 2857 formatted_output = format_table(
   2858     pa_subtable, key, formatter=formatter, format_columns=format_columns, output_all_columns=output_all_columns
   2859 )
   2860 return formatted_output

File ~/anaconda3/envs/LLM/lib/python3.12/site-packages/datasets/formatting/formatting.py:639, in format_table(table, key, formatter, format_columns, output_all_columns)
    637 python_formatter = PythonFormatter(features=formatter.features)
    638 if format_columns is None:
--> 639     return formatter(pa_table, query_type=query_type)
    640 elif query_type == "column":
    641     if key in format_columns:

File ~/anaconda3/envs/LLM/lib/python3.12/site-packages/datasets/formatting/formatting.py:405, in Formatter.__call__(self, pa_table, query_type)
    403     return self.format_row(pa_table)
    404 elif query_type == "column":
--> 405     return self.format_column(pa_table)
    406 elif query_type == "batch":
    407     return self.format_batch(pa_table)

File ~/anaconda3/envs/LLM/lib/python3.12/site-packages/datasets/formatting/formatting.py:501, in CustomFormatter.format_column(self, pa_table)
    500 def format_column(self, pa_table: pa.Table) -> ColumnFormat:
--> 501     formatted_batch = self.format_batch(pa_table)
    502     if hasattr(formatted_batch, "keys"):
    503         if len(formatted_batch.keys()) > 1:

File ~/anaconda3/envs/LLM/lib/python3.12/site-packages/datasets/formatting/formatting.py:522, in CustomFormatter.format_batch(self, pa_table)
    520 batch = self.python_arrow_extractor().extract_batch(pa_table)
    521 batch = self.python_features_decoder.decode_batch(batch)
--> 522 return self.transform(batch)

Cell In[12], line 5, in transform(example_batch)
      3 def transform(example_batch):
----> 5     inputs = feature_extractor([x for x in example_batch['image']], return_tensors='pt')
      8     inputs['labels'] = example_batch['labels']

KeyError: 'image'
</code></pre>
<p>It sounds like the error occurs because the feature extractor added <code>pixel_values</code>, while the stored feature is still named <code>image</code>.
But the traceback also appears to imply an attempt to re-apply <code>transform</code>...</p>
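<p>The re-application behaviour can be mimicked with a minimal stand-in (a deliberately simplified toy, not the real <code>datasets</code> implementation): the stored transform runs lazily on every access, and a column access hands the transform a batch containing only that column, so an unconditional <code>batch['image']</code> lookup fails:</p>

```python
# Toy model of with_transform: the transform is stored and re-applied
# lazily on each access; column access passes only the requested column.
class LazyDataset:
    def __init__(self, data, transform=None):
        self.data = data            # dict: column name -> list of values
        self.transform = transform

    def __getitem__(self, key):
        if isinstance(key, str):
            # Column access: only that column reaches the transform.
            batch = {key: self.data[key]}
        else:
            # Row/slice access: all columns reach the transform.
            batch = {k: v[key] for k, v in self.data.items()}
        return self.transform(batch) if self.transform else batch


def transform(batch):
    # Mirrors the question's transform: it unconditionally reads 'image'.
    return {"pixel_values": [x * 2 for x in batch["image"]],
            "labels": batch["labels"]}


ds = LazyDataset({"image": [1, 2], "labels": [0, 0]}, transform)
print(ds[slice(0, 2)])  # row access works: 'image' is present in the batch
try:
    ds["labels"]        # column access: the batch has no 'image' key
except KeyError as err:
    print("KeyError:", err)
```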
<p>Also, it is not possible to save the dataset to disk:</p>
<pre><code>prepared_ds.save_to_disk(img_path)

---------------------------------------------------------------------------
TypeError                                 Traceback (most recent call last)
Cell In[21], line 1
----> 1 dataset.save_to_disk(img_path)

File ~/anaconda3/envs/LLM/lib/python3.13/site-packages/datasets/arrow_dataset.py:1503, in Dataset.save_to_disk(self, dataset_path, max_shard_size, num_shards, num_proc, storage_options)
   1501     json.dumps(state["_format_kwargs"][k])
   1502 except TypeError as e:
-> 1503     raise TypeError(
   1504         str(e) + f"\nThe format kwargs must be JSON serializable, but key '{k}' isn't."
   1505     ) from None

TypeError: Object of type function is not JSON serializable
The format kwargs must be JSON serializable, but key 'transform' isn't.
</code></pre>
<p>Note that the original code in that notebook works perfectly (training, evaluation, etc.). I only hit these errors because I tried to inspect the dataset, save the generated dataset, and so on, while exploring the dataset object.</p>
<p>Shouldn't the dataset structure be accessible in a similar way after <code>with_transform()</code> or <code>set_transform()</code>? Why is the transform function called again when we merely access one of the features?</p>
<p>I’m hoping you can shed some light on this behaviour.</p>
|
<python><machine-learning><neural-network><huggingface-transformers><huggingface-datasets>
|
2024-12-01 14:07:14
| 1
| 916
|
hamagust
|
79,241,613
| 13,968,392
|
Filter series in method chain
|
<p>What is the preferred syntax to filter a polars Series without referencing the Series explicitly? I.e. something that works with method chaining. I thought <code>pipe</code> would be an option but it is not available for Series.</p>
<p>Without method chaining, I would do the following:</p>
<pre><code>import polars as pl
ser = pl.Series([1, 2, 3, 4, 5])
ser = ser.filter(ser > 3)
</code></pre>
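<p>For illustration of the pattern being asked about (generic Python, not a Polars API): a <code>pipe</code> step simply passes the intermediate object to a function so it never needs a name, which is what makes it chainable:</p>

```python
class Chain:
    """Tiny wrapper that mimics a chainable .pipe() method (hypothetical helper)."""
    def __init__(self, obj):
        self.obj = obj

    def pipe(self, fn, *args, **kwargs):
        # Apply fn to the wrapped object and re-wrap the result for chaining.
        return Chain(fn(self.obj, *args, **kwargs))

res = (
    Chain([1, 2, 3, 4, 5])
    .pipe(lambda xs: [x for x in xs if x > 3])  # the 'filter' step
    .pipe(sum)
    .obj
)
print(res)  # 9
```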
|
<python><filter><series><python-polars><method-chaining>
|
2024-12-01 13:04:27
| 0
| 2,117
|
mouwsy
|
79,241,502
| 1,924,315
|
Python error trying to install gensim on MacOS
|
<p>I am trying to install gensim on a MacBook Pro (i7, Sonoma 14.7.1) using PyCharm (Python 3.13). I've tried several suggestions from Stack Overflow, GitHub and other sources, but none worked.</p>
<p>Based on what little I found online, it seems the build cannot find <strong>openblas</strong>, even though it is installed (it shows up as <strong>openBLAS</strong>). Is this just strict case sensitivity or something else?</p>
<p>How do I resolve this?
Also, when I try to open the full log, the path to the file does not exist.</p>
<p>Here is the output in the PyCharm Console:</p>
<pre><code>pip install --upgrade gensim
Collecting gensim
Using cached gensim-4.3.3.tar.gz (23.3 MB)
Installing build dependencies ... done
Getting requirements to build wheel ... done
Preparing metadata (pyproject.toml) ... done
Collecting numpy<2.0,>=1.18.5 (from gensim)
Using cached numpy-1.26.4.tar.gz (15.8 MB)
Installing build dependencies ... done
Getting requirements to build wheel ... done
Installing backend dependencies ... done
Preparing metadata (pyproject.toml) ... done
Collecting scipy<1.14.0,>=1.7.0 (from gensim)
Using cached scipy-1.13.1.tar.gz (57.2 MB)
Installing build dependencies ... done
Getting requirements to build wheel ... done
Installing backend dependencies ... done
Preparing metadata (pyproject.toml) ... error
error: subprocess-exited-with-error
× Preparing metadata (pyproject.toml) did not run successfully.
│ exit code: 1
╰─> [45 lines of output]
+ meson setup /private/var/folders/92/9fd9rgg976s7zn44fvbxjw000000gn/T/pip-install-ox0sfkbg/scipy_7a9a4932c4254c9d93723beb50b0e629 /private/var/folders/92/9fd9rgg976s7zn44fvbxjw000000gn/T/pip-install-ox0sfkbg/scipy_7a9a4932c4254c9d93723beb50b0e629/.mesonpy-ry56tq_q -Dbuildtype=release -Db_ndebug=if-release -Db_vscrt=md --native-file=/private/var/folders/92/9fd9rgg976s7zn44fvbxjw000000gn/T/pip-install-ox0sfkbg/scipy_7a9a4932c4254c9d93723beb50b0e629/.mesonpy-ry56tq_q/meson-python-native-file.ini
The Meson build system
Version: 1.6.0
Source dir: /private/var/folders/92/9fd9rgg976s7zn44fvbxjw000000gn/T/pip-install-ox0sfkbg/scipy_7a9a4932c4254c9d93723beb50b0e629
Build dir: /private/var/folders/92/9fd9rgg976s7zn44fvbxjw000000gn/T/pip-install-ox0sfkbg/scipy_7a9a4932c4254c9d93723beb50b0e629/.mesonpy-ry56tq_q
Build type: native build
Project name: scipy
Project version: 1.13.1
C compiler for the host machine: cc (clang 16.0.0 "Apple clang version 16.0.0 (clang-1600.0.26.4)")
C linker for the host machine: cc ld64 1115.7.3
C++ compiler for the host machine: c++ (clang 16.0.0 "Apple clang version 16.0.0 (clang-1600.0.26.4)")
C++ linker for the host machine: c++ ld64 1115.7.3
Cython compiler for the host machine: cython (cython 3.0.11)
Host machine cpu family: x86_64
Host machine cpu: x86_64
Program python found: YES (/Users/xxxxxxxxxxxx/PycharmProjects/pythonProjectTrainingList/.venv/bin/python)
Found pkg-config: YES (/usr/local/bin/pkg-config) 2.3.0
Run-time dependency python found: YES 3.13
Program cython found: YES (/private/var/folders/92/9fd9rgg976s7zn44fvbxjw000000gn/T/pip-build-env-9qebkipc/overlay/bin/cython)
Compiler for C supports arguments -Wno-unused-but-set-variable: YES
Compiler for C supports arguments -Wno-unused-function: YES
Compiler for C supports arguments -Wno-conversion: YES
Compiler for C supports arguments -Wno-misleading-indentation: YES
Library m found: YES
Fortran compiler for the host machine: gfortran (gcc 14.2.0 "GNU Fortran (Homebrew GCC 14.2.0_1) 14.2.0")
Fortran linker for the host machine: gfortran ld64 1115.7.3
Compiler for Fortran supports arguments -Wno-conversion: YES
Compiler for C supports link arguments -Wl,-ld_classic: YES
Checking if "-Wl,--version-script" : links: NO
Program pythran found: YES 0.15.0 0.15.0 (/private/var/folders/92/9fd9rgg976s7zn44fvbxjw000000gn/T/pip-build-env-9qebkipc/overlay/bin/pythran)
Did not find CMake 'cmake'
Found CMake: NO
Run-time dependency xsimd found: NO (tried pkgconfig, framework and cmake)
Run-time dependency threads found: YES
Library npymath found: YES
Library npyrandom found: YES
pybind11-config found: YES (/private/var/folders/92/9fd9rgg976s7zn44fvbxjw000000gn/T/pip-build-env-9qebkipc/overlay/bin/pybind11-config) 2.12.1
Run-time dependency pybind11 found: YES 2.12.1
Run-time dependency scipy-openblas found: NO (tried pkgconfig)
Run-time dependency openblas found: NO (tried pkgconfig, framework and cmake)
Run-time dependency openblas found: NO (tried pkgconfig and framework)
../scipy/meson.build:163:9: ERROR: Dependency "OpenBLAS" not found, tried pkgconfig and framework
A full log can be found at /private/var/folders/92/9fd9rgg976s7zn44fvbxjw000000gn/T/pip-install-ox0sfkbg/scipy_7a9a4932c4254c9d93723beb50b0e629/.mesonpy-ry56tq_q/meson-logs/meson-log.txt
[end of output]
note: This error originates from a subprocess, and is likely not a problem with pip.
error: metadata-generation-failed
× Encountered error while generating package metadata.
╰─> See above for output.
note: This is an issue with the package mentioned above, not pip.
hint: See above for details.
</code></pre>
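<p>Note from the log that pip is building scipy 1.13.1 from source (<code>Using cached scipy-1.13.1.tar.gz</code>), which suggests there is no prebuilt wheel for this Python/macOS combination, and the source build then fails at <code>Run-time dependency openblas found: NO (tried pkgconfig and framework)</code>. If OpenBLAS came from Homebrew (an assumption), it is keg-only and typically has to be exposed to pkg-config explicitly, e.g.:</p>

```shell
# Assumes OpenBLAS was installed with Homebrew; adjust if it came from elsewhere.
export PKG_CONFIG_PATH="$(brew --prefix openblas)/lib/pkgconfig:$PKG_CONFIG_PATH"
pip install --upgrade gensim
```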
|
<python><macos><scipy><gensim><openblas>
|
2024-12-01 12:01:13
| 3
| 1,123
|
Blejzer
|
79,241,455
| 1,804,173
|
How to find the exact function annotations of Python's builtin `collections.abc` Protocol types?
|
<p>Python's standard library comes with a bunch of <code>Protocol</code> types in <code>collections.abc</code> like <code>Sequence</code>, <code>Mapping</code>, or <code>Iterable</code>. The documentation contains a nice overview <a href="https://docs.python.org/3/library/collections.abc.html#collections-abstract-base-classes" rel="nofollow noreferrer">here</a>.</p>
<p>What the documentation is missing, though, is the exact type annotations of these methods. Take for instance the <code>__reversed__</code> method: I would like to see its exact type signature. Does it, for instance, return an <code>Iterator[T]</code> or an <code>Iterable[T]</code>?</p>
<p>How can I quickly look up these type signatures? Trying to follow the types in the IDE seems to lead to a dead end due to a bunch of re-export barriers.</p>
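<p>As a quick illustration of why an IDE cannot simply read the signatures off the runtime objects: the classes in <code>collections.abc</code> carry no annotations at runtime, so the precise signatures shown by type checkers come from separate stub files (typeshed) rather than the standard library itself. A small check:</p>

```python
import collections.abc
import typing

# The runtime method has no annotations at all; the exact signatures
# must come from the typeshed stubs, not from the runtime class.
hints = typing.get_type_hints(collections.abc.Iterable.__iter__)
print(hints)  # {}
```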
|
<python><python-typing>
|
2024-12-01 11:30:29
| 1
| 27,316
|
bluenote10
|
79,241,319
| 17,721,722
|
Autoflake prints unused imports/variables but doesn't remove them
|
<p>I'm using the <code>autoflake</code> tool to remove unused imports and variables in a Python file, but while it prints that unused imports/variables are detected, it doesn't actually remove them from the file.</p>
<p>Here's the command I'm running:</p>
<pre class="lang-bash prettyprint-override"><code>autoflake --in-place --remove-unused-variables portal/reports.py
</code></pre>
<h4>Printed Output:</h4>
<pre><code>portal/reports.py: Unused imports/variables detected
</code></pre>
<p>Despite the message indicating unused imports and variables, the file remains unchanged. I've confirmed that the file has unused imports and variables that should be removed.</p>
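<p>Two details may be relevant here: the exact message <code>Unused imports/variables detected</code> is what autoflake emits in <code>--check</code> mode, where it deliberately makes no changes, so a <code>--check</code> flag hiding in a config file such as <code>pyproject.toml</code> or <code>setup.cfg</code> would be one hypothesis to rule out; and by default autoflake only removes unused imports of standard-library modules. A sketch of invocations to compare (file name taken from above):</p>

```shell
# Remove all unused imports (not only standard-library ones) plus
# unused variables, editing the file in place:
autoflake --in-place --remove-all-unused-imports --remove-unused-variables portal/reports.py

# Check mode only reports; it never modifies the file:
autoflake --check --remove-unused-variables portal/reports.py
```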
<h3>Versions:</h3>
<ul>
<li>Django: 5.0.7</li>
<li>autoflake: 2.3.1</li>
<li>Python: 3.11</li>
<li>Ubuntu: 24.04</li>
</ul>
<p>I also noticed this issue: when I try to install autoflake outside the .venv (in my Django app folder or anywhere else on the system), I get the following error.</p>
<pre class="lang-bash prettyprint-override"><code>error: externally-managed-environment
× This environment is externally managed
╰─> To install Python packages system-wide, try apt install
python3-xyz, where xyz is the package you are trying to
install.
If you wish to install a non-Debian-packaged Python package,
create a virtual environment using python3 -m venv path/to/venv.
Then use path/to/venv/bin/python and path/to/venv/bin/pip. Make
sure you have python3-full installed.
If you wish to install a non-Debian packaged Python application,
it may be easiest to use pipx install xyz, which will manage a
virtual environment for you. Make sure you have pipx installed.
See /usr/share/doc/python3.12/README.venv for more information.
note: If you believe this is a mistake, please contact your Python installation or OS distribution provider. You can override this, at the risk of breaking your Python installation or OS, by passing --break-system-packages.
hint: See PEP 668 for the detailed specification.
</code></pre>
<p>Has anyone encountered this issue? What could be causing it, and how can I fix it to properly remove unused imports and variables?</p>
|
<python><django><flake><autoflake>
|
2024-12-01 10:00:18
| 1
| 501
|
Purushottam Nawale
|
79,241,286
| 7,887,965
|
Apache Nifi: Unable to Merge Multiple CSVs into a Single PARQUET File using ExecuteStreamCommand Processor
|
<p>I am trying to merge multiple CSVs, which arrive from upstream as flowfiles with the same schema, into one <code>PARQUET</code> file. Below is the flow of my processor group. In the upper <code>ExecuteStreamCommand</code> I rename the columns to ensure there are no special characters in the column names, while the downstream <code>ExecuteStreamCommand</code> processor is supposed to do the actual merge into a single Parquet file; however, nothing gets merged and the same number of files (each CSV as a separate Parquet file) comes out.</p>
<p><a href="https://i.sstatic.net/GUfbw7QE.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/GUfbw7QE.png" alt="ApacheNifi Processor Group Workflow" /></a></p>
<p>Here below is the code which i am using to merge multiple CSVs into single Parquet File.</p>
<pre><code>import sys
import pandas as pd
import io
from pyarrow import parquet as pq
import pyarrow as pa

# Initialize an empty DataFrame to hold all CSV data
merged_df = pd.DataFrame()

# Read CSV data from standard input (incoming flow file content)
input_data = sys.stdin.read().strip()

# Check if the input data is empty
if not input_data:
    print("Error: No data received from stdin")
    sys.exit(1)

# Use StringIO to read the CSV from stdin
csv_content = io.StringIO(input_data)

# Read and append CSV content to merged_df
try:
    # Read CSV into DataFrame
    df = pd.read_csv(csv_content)

    # If merged_df is empty, initialize it with the same columns as df
    if merged_df.empty:
        merged_df = df
    else:
        # Align columns before concatenating (this handles schema inconsistencies)
        merged_df = pd.concat([merged_df, df], ignore_index=True, sort=False)
except pd.errors.EmptyDataError:
    print("Error: No columns to parse from CSV data.")
    sys.exit(1)

# After reading all CSV files, convert the merged DataFrame to a Parquet table
table = pa.Table.from_pandas(merged_df)

# Write the Parquet table to stdout (which NiFi will handle)
pq.write_table(table, sys.stdout.buffer, compression='snappy')  # Adjust compression if needed
</code></pre>
<p>Can anybody please point out where I am going wrong? Why does this produce multiple Parquet files instead of merging everything into a single one? It also does not change the extension of the output files to <code>.parquet</code>.</p>
|
<python><csv><apache-nifi><parquet><pyarrow>
|
2024-12-01 09:43:50
| 0
| 407
|
Filbadeha
|
79,241,135
| 3,130,882
|
Pydantic vs. Python 3.13.0: No module named 'typing_extensions'
|
<p>I have code that worked fine with Python 3.10.12 and Pydantic 2.7.3.</p>
<p>I read that Pydantic ^2.8 supports Python 3.13.</p>
<p>The same code, and indeed even just <code>import pydantic</code> under a Python 3.13.0 shell with Pydantic 2.10.2, however, gives:</p>
<pre><code> File "/home/user/path-to-package/src/validator.py", line 9, in <module>
from pydantic import TypeAdapter, BaseModel, validator
File "/home/user/path-to-package/.venv/lib/python3.13/site-packages/pydantic/__init__.py", line 396, in <module>
_getattr_migration = getattr_migration(__name__)
File "/home/user/path-to-package/.venv/lib/python3.13/site-packages/pydantic/_migration.py", line 260, in getattr_migration
from .errors import PydanticImportError
File "/home/user/path-to-package/.venv/lib/python3.13/site-packages/pydantic/errors.py", line 7, in <module>
from typing_extensions import Literal, Self
ModuleNotFoundError: No module named 'typing_extensions'
</code></pre>
<p>Why?</p>
<p>How can I fix this?</p>
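<p><code>typing_extensions</code> is a hard runtime dependency of Pydantic, so the error usually means the package is missing from (or invisible to) the environment that is actually running, e.g. a stale or partially rebuilt <code>.venv</code> after the Python upgrade (an assumption worth verifying). A stdlib sketch to check which interpreter is active and whether the module is importable:</p>

```python
import importlib.util
import sys

def has_module(name: str) -> bool:
    """True if `name` can be imported in the current environment."""
    return importlib.util.find_spec(name) is not None

print(sys.executable)                   # the interpreter actually in use
print(has_module("typing_extensions"))  # False would reproduce the error above
```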
|
<python><pydantic><modulenotfounderror>
|
2024-12-01 07:58:15
| 1
| 431
|
GoneAsync
|
79,241,088
| 555,129
|
Update installed version of python
|
<p>I installed Python 3.9.6 a few years ago on my MacBook; I do not recollect how it was installed (Homebrew was not used). Here is the version/path info:</p>
<pre><code>$ which python3
/usr/bin/python3
$ python3 -V
Python 3.9.6
</code></pre>
<ul>
<li>How to update this python version to the latest?</li>
<li>How to tell how it was installed in the first place?</li>
</ul>
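<p>For what it's worth, <code>/usr/bin/python3</code> is normally the copy bundled with Apple's Command Line Tools rather than a user installation, and 3.9.6 matches the version those tools shipped for a long time (an assumption to verify, not a certainty). A couple of read-only commands can help identify it:</p>

```shell
# Where does the interpreter actually live (symlinks resolved)?
python3 -c 'import sys; print(sys.executable); print(sys.prefix)'

# If the prefix points inside /Library/Developer/CommandLineTools
# or Xcode.app, it is Apple's bundled Python.
xcode-select -p
```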
|
<python>
|
2024-12-01 07:15:33
| 3
| 1,462
|
Amol
|
79,241,077
| 2,155,362
|
How to get data from post request?
|
<p>I built a Python Flask app like below:</p>
<pre><code>#!/usr/bin/python
# -*- coding: UTF-8 -*-
from flask import Flask, request, jsonify, redirect
from flask_restful import Api, Resource, reqparse
import database_operators
import json

app = Flask(__name__)
app.config.update(RESTFUL_JSON=dict(ensure_ascii=False))
api = Api(app)


@app.route('/task/<string:task_no>', methods=['get'])
def get_task(task_no):
    db = database_operators.task_database()
    task = db.get(task_no)
    return jsonify(task)


@app.route('/task/create', methods=["post"])
def create_task():
    print(request.data)
    return "error"


if __name__ == '__main__':
    app.run(debug=True)
</code></pre>
<p>And I try to test <code>create_task</code> like this:</p>
<pre><code>url = 'http://localhost:5000/task/create'
r = requests.post(url,data={"id":"good"})
print(r.text)
</code></pre>
<p>And the server console just prints:</p>
<pre><code>b''
127.0.0.1 - - [01/Dec/2024 15:07:35] "POST /task/create HTTP/1.1" 200 -
</code></pre>
<p>How can I get the request data?</p>
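<p>For context, a sketch of what is happening on the wire (standard library only, nothing Flask-specific): <code>requests.post(url, data=...)</code> sends the payload form-encoded, and Flask parses such bodies into <code>request.form</code>, leaving <code>request.data</code> empty:</p>

```python
from urllib.parse import urlencode

# requests.post(url, data={"id": "good"}) sends this body with the header
# Content-Type: application/x-www-form-urlencoded; Flask parses it into
# request.form, which is why request.data comes back empty.
body = urlencode({"id": "good"})
print(body)  # id=good
```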
|
<python><flask>
|
2024-12-01 07:11:28
| 1
| 1,713
|
user2155362
|
79,241,075
| 12,466,687
|
Unable to pass a parameter value from the command line to a Quarto file when rendering a Quarto document in Python
|
<p>I am trying to pass a parameter value using the Quarto CLI and use it in a Quarto file with Python code, but it's not working.</p>
<p>Command used:</p>
<pre class="lang-bash prettyprint-override"><code>quarto render test.qmd -P foo:5 --output test_cmd_out.html
</code></pre>
<p>Quarto doc (<code>test.qmd</code>):</p>
<pre><code>---
title: "Test File"
format: html
html:
embed-resources: true
execute:
echo: False
jupyter: python3
---
# Title
Print this in report
```{python}
foo
```
---
</code></pre>
<p>Error:</p>
<pre><code>Starting python3 kernel...Done
Executing 'test.quarto_ipynb'
Cell 1/1: ''...ERROR:
An error occurred while executing the following cell:
------------------
foo
------------------
---------------------------------------------------------------------------
NameError Traceback (most recent call last)
Cell In[1], line 1
----> 1 foo
NameError: name 'foo' is not defined
</code></pre>
<p>Do I need to access it as <code>params$foo</code> even for Python, or in some other way?</p>
<p>I can't tell from the documentation what is wrong.</p>
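<p>For Jupyter-engine documents, Quarto's parameter mechanism (built on Papermill) expects default values to be declared in a cell tagged <code>parameters</code>; values passed with <code>-P</code> then override those defaults. A sketch of what <code>test.qmd</code> might look like under that assumption (not a verified fix):</p>

````markdown
---
title: "Test File"
format: html
execute:
  echo: false
jupyter: python3
---

# Title

Print this in report

```{python}
#| tags: [parameters]
foo = 1  # default; `quarto render test.qmd -P foo:5` overrides it
```

```{python}
foo
```
````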
|
<python><jupyter><command-line-arguments><quarto>
|
2024-12-01 07:10:47
| 1
| 2,357
|
ViSa
|
79,241,034
| 1,335,492
|
How to maintain synchronization between distributed python processes?
|
<p>I have a number of workstations that run long processes containing sequences like this:</p>
<pre><code>x = wait_while_current_is_set
y = read_voltage
z = z + y
</code></pre>
<p>The workstations must maintain synchronization with a central unit that runs processes like this:</p>
<pre><code>x = set_current
y = wait_while_voltage_is_read
z = z + y
</code></pre>
<p>It's actually implemented like this on both client and server:</p>
<pre><code>x = set_current
y = read_current
z = z + y
</code></pre>
<p>with <code>set_current</code> and <code>read_current</code> implemented as library functions from the client and server libraries.</p>
<p>How do I synchronize parallel distributed asynchronous processes in Python?</p>
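<p>As a generic illustration only (standard-library primitives standing in for the network calls), the alternating set-current / read-voltage handshake can be modelled with two events, so each side blocks until the other has finished its step:</p>

```python
import threading

shared = {}      # stands in for the instrument state both sides share
readings = []
current_set = threading.Event()
voltage_read = threading.Event()

def central_unit():
    for level in [1.0, 2.0, 3.0]:
        shared["current"] = level  # x = set_current
        current_set.set()
        voltage_read.wait()        # y = wait_while_voltage_is_read
        voltage_read.clear()

def workstation():
    for _ in range(3):
        current_set.wait()                  # x = wait_while_current_is_set
        current_set.clear()
        readings.append(shared["current"])  # y = read_voltage
        voltage_read.set()

t1 = threading.Thread(target=central_unit)
t2 = threading.Thread(target=workstation)
t1.start(); t2.start()
t1.join(); t2.join()
print(readings)  # [1.0, 2.0, 3.0]
```

<p>In a real distributed setup the two events would be replaced by blocking network calls (or a coordination service), but the alternation structure is the same.</p>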
|
<python><parallel-processing><distributed-computing><distributed><distributed-system>
|
2024-12-01 06:40:10
| 1
| 2,697
|
david
|
79,240,851
| 8,384,910
|
Plotly add_image vs add_layout_image
|
<p>In Plotly, both <a href="https://plotly.com/python-api-reference/generated/plotly.graph_objects.Figure.html#plotly.graph_objects.Figure.add_image" rel="nofollow noreferrer"><code>figure.add_image</code></a> and <a href="https://plotly.com/python-api-reference/generated/plotly.graph_objects.Figure.html#plotly.graph_objects.Figure.add_layout_image" rel="nofollow noreferrer"><code>figure.add_layout_image</code></a> allow you to add an image to a chart. It seems like <code>add_layout_image</code> is intended for "cosmetic" images that shouldn't belong to a specific legend item. Are there any other differences, and is there any significant difference in how these two methods are implemented in Plotly?</p>
|
<python><plotly>
|
2024-12-01 03:34:35
| 0
| 9,414
|
Richie Bendall
|
79,240,243
| 10,452,700
|
Why is SARIMAX expensive to apply over high-frequency data in Python? (epochs=5mins) Any alternative models?
|
<p>I’ve been experimenting with <a href="https://www.statsmodels.org/dev/examples/notebooks/generated/statespace_sarimax_stata.html" rel="nofollow noreferrer">SARIMAX</a> using <a href="/questions/tagged/statsmodels" class="s-tag post-tag" title="show questions tagged 'statsmodels'" aria-label="show questions tagged 'statsmodels'" rel="tag" aria-labelledby="tag-statsmodels-tooltip-container" data-tag-menu-origin="Unknown">statsmodels</a> package to model time series data and understand seasonality and non-stationarity. My data is collected with a high frequency of 5-minute epochs, and I’m facing significant challenges:</p>
<ul>
<li><p>Performance Issues: Applying SARIMAX with what I believe is the correct configuration (e.g., daily seasonality with 288 observations per day) is extremely slow. Sometimes, the process crashes due to memory or computational limits.</p>
</li>
<li><p>Suitability for High-Frequency Data: I suspect SARIMAX might not be the right choice for high-frequency datasets. Is this true? If so, why is SARIMAX particularly inefficient for such data?</p>
</li>
</ul>
<p>Fixes or Alternatives:</p>
<p><strong>Important Note:</strong> I am not interested in resampling or downsampling the data because it would cause a loss of valuable information that is critical for my analysis.</p>
<p>Questions:</p>
<ul>
<li>Are there any techniques to optimize SARIMAX for high-frequency data without losing information?</li>
<li>Alternatively, are there better models specifically designed for high-frequency data that still allow me to understand time components like seasonality and trends?</li>
</ul>
<p>I’m open to suggestions or insights from anyone who has tackled similar challenges with high-frequency data. Thanks in advance for your guidance!</p>
<p>I generated a reproducible example:</p>
<pre class="lang-py prettyprint-override"><code># !pip install pmdarima
# Import additional required libraries
import pandas as pd
import numpy as np
import pmdarima as pm
import matplotlib.pyplot as plt
from statsmodels.tsa.statespace.sarimax import SARIMAX
from sklearn.metrics import mean_absolute_error, mean_squared_error
from datetime import datetime, timedelta
# Generate timestamps for 30 consecutive days with 5-minute intervals
start_time = datetime.now().replace(hour=0, minute=0, second=0, microsecond=0)
timestamps = [start_time + timedelta(minutes=5 * i) for i in range(30 * 24 * 12)] # 5-min epochs
# Generate 'avgcpu' data with seasonality
np.random.seed(42) # For reproducibility
base_pattern = np.sin(np.linspace(0, 2 * np.pi, len(timestamps) // 30)) * 7.5 + 27.5
avgcpu = np.tile(base_pattern, 30) + np.random.normal(0, 2, len(timestamps))
avgcpu = np.clip(avgcpu, 20, 35) # Normalize within the range [20, 35]
# Create a DataFrame
data = pd.DataFrame({
"timestamp": timestamps,
"avgcpu": avgcpu
}).sort_values("timestamp")
# Regenerate data and split into training and testing sets
test_size = 2 * 24 * 12 # 2 days of data (288 * 2 observations)
train_data = data.iloc[:-test_size]
test_data = data.iloc[-test_size:]
train_series = train_data["avgcpu"]
test_series = test_data["avgcpu"]
</code></pre>
<p>Using the wrapper of <code>SARIMAX</code> and <code>pm.auto_arima()</code> unsuccessfully:</p>
<pre class="lang-py prettyprint-override"><code>
# Model 1: Using pm.auto_arima()
seasonal_period = 288 # Daily seasonality
model1 = pm.auto_arima(
train_series,
seasonal=True,
m=seasonal_period, # Seasonal period
trace=True, # Prints the model selection process
error_action="ignore", # Ignores non-converging models
suppress_warnings=True, # Suppresses warnings
stepwise=True # Stepwise model selection for faster computation
)
# Model 2: Using SARIMAX()
seasonal_period = 288 # Daily seasonality
model2 = SARIMAX(
train_series,
order=(1, 1, 1), # Simplified initial setup for ARIMA terms
seasonal_order=(1, 1, 1, seasonal_period), # Seasonal ARIMA terms
enforce_stationarity=False,
enforce_invertibility=False
)
# Fit SARIMAX model
sarimax_result = model2.fit(disp=False)
# Summary of models
print("Model 1 (auto_arima):")
print(model1.summary())
print("\nModel 2 (SARIMAX):")
print(sarimax_result.summary())
# Forecasting with both models for the test set period
forecast_model1, conf_int_model1 = model1.predict(n_periods=test_size, return_conf_int=True)
forecast_model2 = sarimax_result.get_forecast(steps=test_size)
forecast_mean_model2 = forecast_model2.predicted_mean
forecast_ci_model2 = forecast_model2.conf_int()
# Calculate metrics for both models
mae_model1 = mean_absolute_error(test_series, forecast_model1)
mse_model1 = mean_squared_error(test_series, forecast_model1)
mae_model2 = mean_absolute_error(test_series, forecast_mean_model2)
mse_model2 = mean_squared_error(test_series, forecast_mean_model2)
</code></pre>
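One common workaround for very long seasonal periods (not from the question itself) is to drop the seasonal ARIMA polynomial entirely and model the daily cycle with Fourier terms passed as exogenous regressors to a small non-seasonal model, which avoids fitting a 288-lag seasonal component. A hedged sketch of generating such regressors; the harmonic count `k=3` is an arbitrary choice:

```python
import numpy as np

def fourier_terms(n_obs, period, k):
    """Deterministic seasonal regressors: sin/cos pairs for harmonics 1..k."""
    t = np.arange(n_obs)
    cols = []
    for h in range(1, k + 1):
        cols.append(np.sin(2 * np.pi * h * t / period))
        cols.append(np.cos(2 * np.pi * h * t / period))
    return np.column_stack(cols)

# One week of 5-minute epochs with daily seasonality (period = 288)
X = fourier_terms(n_obs=288 * 7, period=288, k=3)
```

These columns could then be passed as `exog` to `SARIMAX` with `seasonal_order=(0, 0, 0, 0)`, which is typically far cheaper than `m=288` seasonal differencing.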
|
<python><sarimax><statsforecast><statmodels>
|
2024-11-30 18:18:24
| 0
| 2,056
|
Mario
|
79,240,178
| 726,730
|
python set and get windows 11 volume
|
<p>I have this script:</p>
<pre class="lang-py prettyprint-override"><code>from pycaw.pycaw import AudioUtilities, IAudioEndpointVolume
from ctypes import cast, POINTER
from comtypes import CLSCTX_ALL, CoInitialize, CoUninitialize
CLSCTX_ALL = 7
import time
def set_windows_volume(value_max_100):
devices = AudioUtilities.GetSpeakers()
interface = devices.Activate(IAudioEndpointVolume._iid_, CLSCTX_ALL, None)
volume = cast(interface, POINTER(IAudioEndpointVolume))
scalarVolume = int(value_max_100) / 100
volume.SetMasterVolumeLevelScalar(scalarVolume, None)
def get_windows_volume():
devices = AudioUtilities.GetSpeakers()
interface = devices.Activate(IAudioEndpointVolume._iid_, CLSCTX_ALL, None)
windows_volume = cast(interface, POINTER(IAudioEndpointVolume))
volume_percentage = int(round(windows_volume.GetMasterVolumeLevelScalar() * 100))
return volume_percentage
for i in range(0,100):
set_windows_volume(i)
time.sleep(2)
print(get_windows_volume())
</code></pre>
<p>but sometimes it raises errors:</p>
<pre class="lang-py prettyprint-override"><code>Exception ignored in: <function _compointer_base.__del__ at 0x000002B538E96B90>
Traceback (most recent call last):
File "C:\Python\lib\site-packages\comtypes\_post_coinit\unknwn.py", line 426, in __del__
self.Release()
File "C:\Python\lib\site-packages\comtypes\_post_coinit\unknwn.py", line 559, in Release
return self.__com_Release() # type: ignore
ValueError: COM method call without VTable
======== Running on http://192.168.1.188:8080 ========
Exception ignored in: <function _compointer_base.__del__ at 0x000002B538E96B90>
Traceback (most recent call last):
File "C:\Python\lib\site-packages\comtypes\_post_coinit\unknwn.py", line 426, in __del__
self.Release()
File "C:\Python\lib\site-packages\comtypes\_post_coinit\unknwn.py", line 559, in Release
return self.__com_Release() # type: ignore
ValueError: COM method call without VTable
</code></pre>
<p>Basically I run this script inside a <code>multiprocessing.Process</code> with <code>CoInitialize</code> and <code>CoUninitialize</code> and a safe release, but the error is still there.</p>
<p>Any help or alternatives?</p>
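A common source of "COM method call without VTable" is a missing or mismatched per-thread COM initialization, so one defensive pattern is to pair every initialize with an uninitialize via a context manager. A hedged, platform-neutral sketch: the init/uninit callables are injected so the pattern is testable anywhere; in practice you would pass comtypes' `CoInitialize`/`CoUninitialize` in the worker process:

```python
# Pair an initialization call with its matching teardown, even on exceptions.
from contextlib import contextmanager

@contextmanager
def com_apartment(initialize, uninitialize):
    initialize()
    try:
        yield
    finally:
        uninitialize()
```

Usage in the worker would look like `with com_apartment(CoInitialize, CoUninitialize): set_windows_volume(50)`, keeping all COM object creation and release inside the block.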
|
<python><audio><comtypes>
|
2024-11-30 17:43:46
| 1
| 2,427
|
Chris P
|
79,240,068
| 4,701,426
|
Problem renaming index value in pandas multiindex series
|
<p>Please consider this dataframe:</p>
<pre><code>import numpy as np
import pandas as pd

temp = pd.DataFrame({'has_exclamation': {('automotive', 0.0): 99.1814132659203,
('automotive', np.nan): 0.8185867340796917,
('beauty_spa', 0.0): 99.8,
('beauty_spa', np.nan): 0.15384615384615385,
('beauty_spa', 1.0): 0.04615384615384615},
'has_exclamation_end': {('automotive', 0.0): 99.1814132659203,
('automotive', np.nan): 0.8185867340796917,
('beauty_spa', 0.0): 99.83846153846154,
('beauty_spa', np.nan): 0.15384615384615385,
('beauty_spa', 1.0): 0.007692307692307693},
'has_all_cap': {('automotive', 0.0): 98.88046226074395,
('automotive', np.nan): 0.8185867340796917,
('beauty_spa', 0.0): 99.56153846153846,
('beauty_spa', np.nan): 0.15384615384615385,
('beauty_spa', 1.0): 0.28461538461538466}})
</code></pre>
<p>temp:</p>
<p><a href="https://i.sstatic.net/tMp1iYyf.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/tMp1iYyf.png" alt="enter image description here" /></a></p>
<p>To replace the NaNs with 'Missing' in the index, why does this not work?</p>
<pre><code>temp.rename(index={np.nan: 'Missing'}, level =1)
</code></pre>
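`rename` does not match the NaN labels, likely because a missing value in a MultiIndex level is stored as a special code rather than as an entry in the level's values, so the mapping `{np.nan: 'Missing'}` finds nothing to rename. One workaround is to rebuild the index explicitly. A hedged sketch on a small stand-in frame (the column name `v` is made up):

```python
import numpy as np
import pandas as pd

df = pd.DataFrame(
    {"v": [1.0, 2.0, 3.0]},
    index=pd.MultiIndex.from_tuples([("a", 0.0), ("a", np.nan), ("b", 1.0)]),
)

# Rebuild the MultiIndex, replacing NaN in level 1 with the string 'Missing'
df.index = pd.MultiIndex.from_tuples(
    [(lvl0, "Missing" if pd.isna(lvl1) else lvl1) for lvl0, lvl1 in df.index],
    names=df.index.names,
)
```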
|
<python><pandas>
|
2024-11-30 16:49:04
| 2
| 2,151
|
Saeed
|
79,239,802
| 1,371,666
|
Attempting to freeze a part of scroll area in tkinter using python in Windows
|
<p>I am using python 3.11.9 in windows.</p>
<p>Here is my minimum verifiable code:</p>
<pre><code>import tkinter as tk
class Table(tk.Frame):
def __init__(self, master, header_labels:tuple,*args, **kwargs):
tk.Frame.__init__(self, master, *args, **kwargs)
# configuration for all Labels
# easier to maintain than directly inputting args
self.lbl_cfg = {
'master' : self,
'foreground' : 'blue',
'relief' : 'solid',
'font' : 'Arial 20 bold',
'padx' : 5,
'pady' : 0,
'bd' : 1,
'bg' : 'white',
}
self.headers = []
self.rows = []
for col, lbl in enumerate(header_labels):
self.grid_columnconfigure(col, weight=1)
# make and store header
(header := tk.Label(text=lbl, **self.lbl_cfg)).grid(row=0, column=col, sticky='nswe')
self.headers.append(header)
def add_row(self, desc:str, rate:int, quantity:int, amt:int) -> None:
print('adding a row to table')
self.rows.append([])
for col, lbl in enumerate( ((str(len(self.rows))),desc, rate, quantity, amt) ):
(entry := tk.Label(text=lbl, **self.lbl_cfg)).grid(row=len(self.rows), column=col, sticky='nswe')
self.rows[-1].append(entry)
class Application(tk.Tk):
def __init__(self, title:str="Bill generation", x:int=0, y:int=0, **kwargs):
tk.Tk.__init__(self)
self.title(title)
self.config(**kwargs)
#DEFAULTS
#Fixed Entries
header_labels = ('\u2193','Description','Rate','Quantity','Amt','\u2191')
#KEYBOARD SHORTCUTS
#FONT SETTINGS
self.gui_lbl_look = {
'foreground' : 'black',
'relief' : 'flat',
'font' : 'Arial 18 bold',
'padx' : 0,
'pady' : 0,
'borderwidth': 1,
}
self.gui_btn_look = {
'foreground' : 'red',
'relief' : 'groove',
'font' : 'Arial 18 bold',
'padx' : 0,
'pady' : 0,
}
self.gui_entry_look = {
'foreground' : 'blue',
'font' : 'Arial 18 bold',
'highlightbackground' :"black",
'highlightthickness' : 2,
'highlightcolor' :"blue",
'insertbackground' :"blue",
'insertwidth' :3,
}
self.gui_total_lbl_look = {
'foreground' : 'green',
'relief' : 'solid',
'font' : 'Arial 18 bold',
}
#Text input check
#GUI
#bill number and print button
# Second row
self.second_row_frame=tk.Frame(self)
self.second_row_frame.grid(column=0,row=1,sticky='W',pady=5)
# Goods return field
self.lbl_return=tk.Label(self.second_row_frame,text="GR:",**self.gui_lbl_look)
self.lbl_return.grid(column =0, row =0)
self.return_str=tk.StringVar()
self.return_value=tk.Entry(self.second_row_frame,width=5,textvariable=self.return_str,**self.gui_entry_look)
self.return_value.grid(column=1,row=0,padx=(0,5))
#description field
self.default_txt_for_desc=tk.StringVar()
self.default_txt_for_desc.set("PCS")
self.txt_desc=tk.Entry(self.second_row_frame,width=10,textvariable=self.default_txt_for_desc,**self.gui_entry_look)
self.txt_desc.grid(column=2,row=0)
# rate field
self.lbl_rate=tk.Label(self.second_row_frame,text="Rate:",**self.gui_lbl_look)
self.lbl_rate.grid(column=3,row=0,padx=(5,0))
self.rate_str=tk.StringVar()
self.txt_rate=tk.Entry(self.second_row_frame,width=4,textvariable=self.rate_str,**self.gui_entry_look)
self.txt_rate.grid(column=4,row=0)
# quantity field
self.lbl_qty=tk.Label(self.second_row_frame,text="Qty:",**self.gui_lbl_look)
self.lbl_qty.grid(column=5,row=0,padx=(5,0))
self.qty_str=tk.StringVar()
self.txt_qty=tk.Entry(self.second_row_frame,width=3,textvariable=self.qty_str,**self.gui_entry_look)
self.txt_qty.grid(column=6,row=0)
#Item amount
self.lbl_amt=tk.Label(self.second_row_frame,text="Amount:",**self.gui_lbl_look,bd=1,anchor="e",justify="right")
self.lbl_amt.grid(column=7,row=0,padx=(5,0))
self.txt_amt=tk.Label(self.second_row_frame,text='0',width=5,**self.gui_lbl_look,bd=1,anchor="w",justify="left")
self.txt_amt.grid(column=8,row=0)
#Add to table button
self.btn_add=tk.Button(self.second_row_frame,text="Add",command=self.clicked,**self.gui_btn_look)
self.btn_add.grid(column=9,row=0,padx=5)
# Third row entry Table
self.content_frame=tk.Frame(self,highlightbackground="black",highlightthickness=1)
self.content_frame.grid(column=0,row=2,sticky='news')
self.canvas=tk.Canvas(self.content_frame,bg='sky blue')
scrollbar=tk.Scrollbar(self.content_frame,orient="vertical",command=self.canvas.yview,width=50)
self.canvas.configure(yscrollcommand=scrollbar.set)
self.table_frame=tk.Frame(self.canvas)
self.table_frame.grid(column=0,row=0,sticky='news')
self.table_frame.bind("<Configure>",lambda e:self.canvas.configure(scrollregion=self.canvas.bbox("all")))
self.content_frame.columnconfigure(0,weight=2)
self.content_frame.rowconfigure(0,weight=2)
self.canvas.create_window((0, 0),window=self.table_frame,anchor="nw")
self.frame_id=self.canvas.create_window((0,0),window=self.table_frame,anchor="nw")
self.canvas.grid(row=0,column=0,sticky="nswe")
self.canvas.bind_all("<MouseWheel>",self._on_mousewheel)
scrollbar.grid(row=0,column=1,sticky="ns")
self.table_frame.trial_var=0
self.table = Table(self.table_frame,header_labels)
self.table.grid(row=0,column=0,sticky='nswe')
#virtual keyboard
self.keyboard_frame=tk.Frame(self,highlightbackground="black",highlightthickness=1)
self.keyboard_frame.grid(column=0,row=2,sticky='se',padx=(10,55))
self.info_label=tk.Label(self,text=" New bill:space \n Cancel:x \n GR:F1 \n Rate:F2 \n Qty:F3 \n Print:F10 ",**self.gui_total_lbl_look)
self.info_label.grid(column=0,row=2,sticky='ne',padx=(10,55))
self.grid_rowconfigure(2, weight=1)
self.grid_columnconfigure(2, weight=1)
for numbers in range(1,11):
self.add_number_button=tk.Button(self.keyboard_frame,text=str(numbers%10),**self.gui_btn_look)
self.add_number_button.grid(column=int(numbers-1)%3,row=int((numbers-1)/3))
self.clear_button=tk.Button(self.keyboard_frame,text='Del',**self.gui_btn_look)
self.clear_button.grid(column=1,row=3,columnspan=2,sticky='e')
self.enter_button=tk.Button(self.keyboard_frame,text='Enter',**self.gui_btn_look)
self.enter_button.grid(column=0,row=4,columnspan=3,sticky='')
#maximizing table cell
self.grid_columnconfigure(0, weight=1)
#Total quantity amount and balance
self.totals_frame=tk.Frame(self)
self.totals_frame.grid(column=0,row=3,sticky='')
self.lbl_tot_qty=tk.Label(self.totals_frame,text="Total Quantity:",**self.gui_total_lbl_look)
self.lbl_tot_qty.grid(column=0,row=0,padx=5)
self.lbl_tot_amt=tk.Label(self.totals_frame,text="Total Amount:",**self.gui_total_lbl_look)
self.lbl_tot_amt.grid(column=1,row=0,padx=5)
self.lbl_bal=tk.Label(self.totals_frame,text="Balance:",**self.gui_total_lbl_look)
self.lbl_bal.grid(column=2,row=0,padx=5)
# update so we can get the current dimensions
self.update_idletasks()
self.state('zoomed')
#FOR LINUX self.attributes('-zoomed',True)
# test
def _on_mousewheel(self,event):
print('scroll in table')
self.canvas.yview_scroll(int(-1 * (event.delta / 120)), "units")
def clicked(self):
text_qty=(self.txt_qty.get()).lstrip('0')
text_rate=(self.txt_rate.get()).lstrip('0')
item_amt=int(text_rate)*int(text_qty)
print('adding to table')
desc=self.txt_desc.get().upper()
self.table.add_row(desc,int(text_rate),int(text_qty),item_amt)
return
if __name__ == "__main__":
Application(title="Bill printing app").mainloop()
</code></pre>
<p>In this code, when you enter numbers in the GR, Rate and Qty entries and then press the Add button, a row gets added to the table.<br>
My problem: when the number of rows exceeds the screen and we scroll downward, I want the table headers (Description, Rate, Quantity, Amt) to stay in place.<br>
Can you help me with this?<br></p>
<p>Here is the screenshot when scroll bar is on top
<a href="https://i.sstatic.net/M6Nx98Ip.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/M6Nx98Ip.png" alt="window screenshot when scrollbar is at top" /></a></p>
<p>Here is the screenshot when scroll bar is at bottom
<a href="https://i.sstatic.net/pBOBuYJf.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/pBOBuYJf.png" alt="window screenshot when scrollbar is at bottom" /></a></p>
<p>I have tried to post relevant code, which you can easily run on your end.<br>
Please let me know if you need any other details.<br>
Thanks.</p>
|
<python><tkinter>
|
2024-11-30 14:56:44
| 1
| 481
|
user1371666
|
79,239,801
| 1,355,634
|
seaborn countplot count wrongly nan
|
<p>I'm having trouble getting the right result from a countplot. Let's look at the following dummy data:</p>
<pre><code>In [111]: import pandas as pd
In [112]: import seaborn as sns
In [113]: import numpy as np
In [114]: data = pd.DataFrame({"A": [np.nan, np.nan, 2], "Cat": [0,1,0], "x":["l", "n", "k"]})
In [115]: data
Out[115]:
A Cat x
0 NaN 0 l
1 NaN 1 n
2 2.0 0 k
In [116]: sns.countplot(data=data, x="x", hue="Cat")
</code></pre>
<p>I would expect the bars for <code>l</code> and <code>n</code> to be zero, while <code>k</code> should show a one. However, my countplot shows a one everywhere. What am I doing wrong? I would like to have the counts over column <code>A</code>.</p>
<p><a href="https://i.sstatic.net/TMDqDsTJ.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/TMDqDsTJ.png" alt="enter image description here" /></a></p>
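`countplot` counts rows per `x` category; it never looks at column `A`, so every row contributes a bar regardless of NaNs. Dropping the rows where `A` is NaN before plotting gives the intended counts. A hedged sketch of the equivalent aggregation (the same filtered frame could be passed to `sns.countplot`):

```python
import numpy as np
import pandas as pd

data = pd.DataFrame({"A": [np.nan, np.nan, 2], "Cat": [0, 1, 0], "x": ["l", "n", "k"]})

# Keep only rows with a value in A, then count per (x, Cat) combination --
# this is what countplot would draw on the filtered frame.
counts = data.dropna(subset=["A"]).groupby(["x", "Cat"]).size()
```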
|
<python><seaborn>
|
2024-11-30 14:55:58
| 1
| 2,042
|
math
|
79,239,708
| 4,423,300
|
import local module in Python
|
<p>My project structure looks like:</p>
<pre class="lang-none prettyprint-override"><code>my_project/
├── common_lib/
│ ├── __init__.py
│ ├── file1.py
│ └── file2.py
└── module1/
├── src/
│ ├── __init__.py
│ ├── script1.py
│ └── script2.py
├── __init__.py
└── main.py
</code></pre>
<p>And I am trying to import functions from <code>file1.py</code> and <code>file2.py</code> in <code>main.py</code> and in <code>script1.py</code> as:</p>
<pre><code>main.py:
from common_lib.file1 import func1
script1.py:
from common_lib.file2 import func2
</code></pre>
<p>I am trying to import functions from <code>common_lib</code> into <code>script1.py</code> and <code>main.py</code>.
I have added the absolute paths of <code>common_lib</code> and <code>module1</code> to the Windows <code>PYTHONPATH</code> environment variable, and to VSCode's project <code>settings.json</code> as well. When I run <code>python main.py</code> in PowerShell (with the current dir as <code>module1</code>) I get this error:</p>
<pre class="lang-none prettyprint-override"><code>ModuleNotFoundError: No module named 'common_lib'
</code></pre>
<h2>UPDATE:</h2>
<p>I have installed <code>common_lib</code> by including <code>setup.py</code> as:</p>
<pre class="lang-bash prettyprint-override"><code>pip install -e <path_to_common_lib\common_lib>
</code></pre>
<p>Which was successful, but still when I try to import function like below, I am getting <code>ModuleNotFoundError</code>:</p>
<pre class="lang-none prettyprint-override"><code>from common_lib.file1 import func1
</code></pre>
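One pragmatic workaround, independent of PYTHONPATH and editable installs, is to prepend the project root to `sys.path` at startup so sibling packages like `common_lib` resolve. A hedged helper sketch; the `levels_up` convention and the example paths are my own:

```python
import sys
import tempfile
from pathlib import Path

def add_project_root(start: Path, levels_up: int) -> Path:
    """Prepend an ancestor of `start` to sys.path so sibling packages resolve."""
    root = start.resolve().parents[levels_up - 1]
    root_str = str(root)
    if root_str not in sys.path:
        sys.path.insert(0, root_str)
    return root

# e.g. called from my_project/module1/main.py: two levels up is my_project/
main_py = Path(tempfile.gettempdir()) / "my_project" / "module1" / "main.py"
project_root = add_project_root(main_py, levels_up=2)
```

An alternative that avoids path manipulation entirely is to run from the project root with `python -m module1.main`, which puts `my_project/` on `sys.path` automatically.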
|
<python>
|
2024-11-30 14:04:19
| 0
| 637
|
SheCodes
|
79,239,625
| 1,750,975
|
Using QGIS in Python
|
<p>I'm trying to use QGIS as my map provider in a Python GUI.
However, I cannot get the app to launch.</p>
<p>I've created a script that launches the Python app with the correct environment variables.</p>
<pre class="lang-bash prettyprint-override"><code>Write-Output "Setting up QGIS environment"
# Path to your QGIS installation
$QGIS_PREFIX_PATH = "C:\Program Files\QGIS 3.40.1"
# Set environment variables
$env:PATH = "$QGIS_PREFIX_PATH\bin;$QGIS_PREFIX_PATH\apps\qgis\bin;$QGIS_PREFIX_PATH\apps\Qt5\bin;$env:PATH"
$env:PYTHONPATH = "$QGIS_PREFIX_PATH\apps\qgis\python;$QGIS_PREFIX_PATH\apps\qgis\python\qgis\PyQt;$env:PYTHONPATH"
$env:GDAL_DATA = "$QGIS_PREFIX_PATH\share\gdal"
$env:QGIS_PREFIX_PATH = $QGIS_PREFIX_PATH
$env:QGIS_PATH = $QGIS_PREFIX_PATH
$env:QT_PLUGIN_PATH = "$QGIS_PREFIX_PATH\apps\Qt5\plugins"
$env:OSGEO4W_ROOT = $QGIS_PREFIX_PATH
$env:PATH = "$OSGEO4W_ROOT\apps\qgis\bin;$OSGEO4W_ROOT\apps\grass\grass78\lib;$env:PATH"
Write-Output "Running Python script"
# Run your Python script
python src/app.py
</code></pre>
<p>However, when running the app, I get the following import warning</p>
<pre><code>Setting up QGIS environment
Running Python script
Could not find platform independent libraries <prefix>
Traceback (most recent call last):
File "C:\Users\Tim\Documents\Projecten\Project\src\app.py", line 2, in <module>
from PyQt5.QtWidgets import QApplication, QMainWindow, QVBoxLayout, QWidget, QPushButton, QCheckBox, QHBoxLayout, QScrollArea, QToolBar, QAction, QFileDialog
ModuleNotFoundError: No module named 'PyQt5.QtWidgets'
</code></pre>
<p>The import section looks like this</p>
<pre class="lang-py prettyprint-override"><code>import sys
from PyQt5.QtWidgets import QApplication, QMainWindow, QVBoxLayout, QWidget, QPushButton, QCheckBox, QHBoxLayout, QScrollArea, QToolBar, QAction, QFileDialog
from PyQt5.QtCore import QVariant
from qgis.core import (
QgsApplication,
QgsProject,
QgsVectorLayer,
QgsPointXY,
QgsGeometry,
QgsFeature,
QgsField,
QgsFields,
QgsWkbTypes,
QgsCoordinateReferenceSystem,
QgsCoordinateTransformContext,
)
from qgis.gui import QgsMapCanvas, QgsMapToolPan, QgsMapToolZoom
</code></pre>
<p>QtWidgets.py is included with the QGIS installation in <code>C:\Program Files\QGIS 3.40.1\apps\qgis\python\qgis\PyQt</code> and is included in the <code>PYTHONPATH</code> variable, so I'm not sure what's going on here</p>
|
<python><qgis><pyqgis>
|
2024-11-30 13:20:22
| 1
| 2,276
|
tim687
|
79,239,297
| 11,748,924
|
Numpythonic way of the inverse of sliding_window_view
|
<p>I have original array <code>test</code>:</p>
<pre><code>from numpy.lib.stride_tricks import sliding_window_view
test = np.array([1,2,3,4,5,6,7,8,9,10,11,12,13,14]).reshape(-1,7) # (batch_size, seq_len) -> (2,7)
slided = sliding_window_view(test, window_shape=(3,), axis=-1).copy()
print(test, test.shape)
print(slided, slided.shape)
</code></pre>
<p>Outputting:</p>
<pre><code>[[ 1 2 3 4 5 6 7]
[ 8 9 10 11 12 13 14]] (2, 7)
[[[ 1 2 3]
[ 2 3 4]
[ 3 4 5]
[ 4 5 6]
[ 5 6 7]]
[[ 8 9 10]
[ 9 10 11]
[10 11 12]
[11 12 13]
[12 13 14]]] (2, 5, 3)
</code></pre>
<p>Given a copy of the <code>slided</code> array computed by <code>sliding_window_view</code>, with shape <code>(batch_size, num_win, win_len)</code>, how do I reconstruct the original array <code>test</code> with shape <code>(batch_size, seq_len)</code>?</p>
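Since consecutive windows overlap by `win_len - 1` elements, the first element of each window recovers the leading positions, and the tail of the last window supplies the remaining values. A sketch:

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

test = np.arange(1, 15).reshape(-1, 7)                        # (2, 7)
slided = sliding_window_view(test, window_shape=(3,), axis=-1).copy()  # (2, 5, 3)

# slided[:, :, 0]  -> first element of every window: positions 0..num_win-1
# slided[:, -1, 1:] -> tail of the last window: the final win_len-1 positions
rebuilt = np.concatenate([slided[:, :, 0], slided[:, -1, 1:]], axis=1)
```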
|
<python><numpy>
|
2024-11-30 10:13:11
| 1
| 1,252
|
Muhammad Ikhwan Perwira
|
79,238,689
| 476,444
|
Binding a ref-qualified function with nanobind
|
<p>What is the appropriate <code>nanobind</code> syntax to expose the instance method below?</p>
<pre><code>struct Foo {
int &value() & {
return v;
};
int v;
};
</code></pre>
<p>With <code>pybind</code>, one could use <code>static_cast</code> with <code>Foo::*</code> syntax to cast the function pointer to the appropriate type, like so:</p>
<pre><code>nb::class_<Foo>(m, "Foo").def("value", static_cast<int &(Foo::*)() &>(&Foo::value));
</code></pre>
<p>but with <code>nanobind</code>, I get this error:</p>
<pre><code>...third_party/nanobind/include/nanobind/nb_class.h:567:28: error: no matching
function for call to 'cpp_function_def<Foo>(int& (Foo::*)() &, nanobind::scope,
nanobind::name, nanobind::is_method)'
567 | cpp_function_def<T>((detail::forward_t<Func>) f, scope(*this),
| ~~~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
568 | name(name_), is_method(), extra...);
| ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
</code></pre>
<p>Do I need some sort of wrapper class with a non-ref-qualified method?</p>
|
<python><c++><nanobind>
|
2024-11-30 01:34:20
| 1
| 1,662
|
dgorur
|
79,238,610
| 13,325,046
|
determine which columns are equal after pandas merge betwen two dataframes
|
<p>I performed a merge with pandas using the <code>suffixes</code> option, like below:</p>
<pre><code>df3 = df1.merge(df2, how='inner',on='key',suffixes=('_first', '_second'))
</code></pre>
<p>I now need to:</p>
<ol>
<li>pairwise check, i.e. <code>x_first == x_second</code>, for the approx. 60 pairs of columns</li>
<li>If the columns are equal rename <code>x_first</code> to just <code>x</code> and drop <code>x_second</code></li>
<li>If they are not equal keep both columns</li>
</ol>
<p>How can this be done for a moderately large pandas dataframe (~6M rows by 200 columns )?</p>
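One straightforward approach is to iterate over the suffixed column pairs and use `Series.equals` (which treats NaNs in matching positions as equal) to decide whether to collapse a pair. A hedged sketch; the function name is my own:

```python
import pandas as pd

def collapse_equal_pairs(df, suffixes=("_first", "_second")):
    """Collapse x_first/x_second into x when the two columns are identical."""
    a, b = suffixes
    out = df.copy()
    bases = [c[: -len(a)] for c in out.columns if c.endswith(a)]
    for base in bases:
        first, second = base + a, base + b
        if second not in out.columns:
            continue
        # Series.equals treats aligned NaNs as equal, so all-NaN pairs collapse too
        if out[first].equals(out[second]):
            out = out.drop(columns=[second]).rename(columns={first: base})
    return out
```

For ~6M rows this stays vectorized per column pair, so the cost is roughly one comparison pass per pair rather than anything row-wise.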
|
<python><pandas>
|
2024-11-29 23:58:19
| 1
| 495
|
te time
|
79,238,578
| 3,696,153
|
VSCode - python unit test locations
|
<p>I have a large python project - it contains multiple parts, and a number of local "modules" that I put in modules directory.</p>
<p>For each local module, I want to create a set of UNIT TESTs.</p>
<p>I specifically do not want <em>ALL</em> unit tests for the entire project in one flat directory. Instead, I would like to have them split across a few sub directories.</p>
<p>In ascii - I have this:</p>
<pre><code> +-- src <-- Main application source code
+-- Modules
+ module1
+ module2
+ unit-tests <-- This is the problem directory.
+ mod1-tests <-- Module 1 Tests.
+ mod2-tests <-- Module 2 Tests.
+ tracking-test <-- Testing for the tracking feature
+ tlm-tests <-- Testing for the telemetry module
</code></pre>
<p>The problem is that VSCode's unit-test integration will only let me choose one top-level test directory - in my case <code>unit-tests</code>, which has the tests organized by subject area. My question is: how can I make it scan recursively and/or organize the tests, rather than ending up with one giant directory of hundreds of tests (which makes it hard to find things)?</p>
|
<python><unit-testing><visual-studio-code>
|
2024-11-29 23:35:43
| 0
| 798
|
user3696153
|
79,238,475
| 11,091,148
|
Logging Inheritance in Python
|
<p>I am currently developing a core utils package where I want to set some logging properties (I know this is not best practice, but it's for internal purposes and intended to generate logs). When I now import the package, nothing gets logged:</p>
<pre><code># core.__main__.py
import logging
import logging.config

from pydantic import BaseModel


class BaseLoggerConfig(BaseModel):
LOG_FORMAT: str = "%(levelprefix)s %(asctime)s %(name)s:%(lineno)d: %(message)s"
DATEFMT: str = "%Y-%m-%d %H:%M:%S"
LOG_LEVEL: int = logging.INFO
version: int = 1
disable_existing_loggers: bool = False
formatters: dict = {
"default": {
# "()": "uvicorn.logging.DefaultFormatter",
"fmt": LOG_FORMAT,
"datefmt": DATEFMT,
},
}
filters: dict = {}
handlers: dict = {
"default": {
"formatter": "default",
"class": "logging.StreamHandler",
"stream": "ext://sys.stderr",
}
}
loggers: dict = {}
def __init__(self, name: str, **data):
super().__init__(**data)
self.loggers[name] = {
"handlers": ["default"],
"level": self.LOG_LEVEL,
"propagate": False,
}
LOG_CONFIG = BaseLoggerConfig(__name__)
logging.config.dictConfig(LOG_CONFIG)
</code></pre>
<pre><code>- core.__main__
Level: INFO
Handlers: ['StreamHandler']
</code></pre>
<p>I now have logging in my other files, like:</p>
<pre><code># core.utils
import logging
logger = logging.getLogger(__name__)
def test():
logger.info(f"I am a log from {__name__}")
</code></pre>
<pre><code># test.py
import logging
from core.utils import test
logger = logging.getLogger(__name__)
test()
</code></pre>
<p>What am I missing?</p>
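Two likely issues here (hedged, since the full package isn't shown): `logging.config.dictConfig` expects a plain dict, not a pydantic model instance, and only the logger named `core.__main__` is configured, so `core.utils` propagates to the unconfigured root and its records are dropped. A minimal sketch that configures the package root logger `"core"`, letting every `core.*` child propagate to it (names assumed):

```python
import logging
import logging.config

config = {
    "version": 1,
    "disable_existing_loggers": False,
    "formatters": {
        "default": {"format": "%(asctime)s %(name)s:%(lineno)d: %(message)s"}
    },
    "handlers": {
        "default": {"class": "logging.StreamHandler", "formatter": "default"}
    },
    # Configure the package root: children like core.utils propagate to it.
    "loggers": {
        "core": {"handlers": ["default"], "level": "INFO", "propagate": False}
    },
}
logging.config.dictConfig(config)
```

With this in place, `logging.getLogger("core.utils").info(...)` reaches the `"core"` handler without any per-module configuration.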
|
<python><python-3.x><logging><python-logging>
|
2024-11-29 22:19:44
| 1
| 526
|
Bennimi
|
79,238,420
| 784,044
|
Unable to import config.py into nested subdirectory
|
<p>New to python. I have the following directory structure - however, I am unable to import my config.py file into bottom level files. I have tried sys.path.append in my <code>__init__.py</code>, as well as importing in my <code>__init__.py</code> files. All ears if there's an easy way to do this. Thanks!</p>
<p><a href="https://i.sstatic.net/MzAGNYpB.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/MzAGNYpB.png" alt="Directory structure" /></a></p>
|
<python>
|
2024-11-29 21:40:55
| 1
| 886
|
Sidd Menon
|
79,238,278
| 8,792,671
|
Python is retrieving data from nowhere (?)
|
<p>Currently I've been doing some tests to prepare for an interview, to find the longest palindrome, and for some reason Python is retrieving data that it is not supposed to.</p>
<p>A brief summary of the issue... If you read the code and the output below, the only part of the code where I insert anything into <code>longest_palindrome</code> is inside a single <code>IF</code> statement. I also added a <code>print()</code> call to log whatever the <code>IF</code> statement catches, and even though it never prints <code>['b', 'a', 'b', 'a']</code>, I get <code>['b', 'a', 'b', 'a']</code> as the result???</p>
<p>Python version I am using:</p>
<blockquote>
<p>3.11.10 | packaged by conda-forge | (main, Oct 16 2024, 01:19:04) [GCC 13.3.0]</p>
</blockquote>
<p>Here is the code</p>
<p><strong>CODE</strong></p>
<pre><code>def longestPalindrome(s: str) -> str:
aux = 0
while True:
if s[-(aux+1)] not in s[:-(aux+1)]:
aux+=1
else:
break
s = s if aux == 0 else s[:len(s)-aux]
longest_palindrome= []
for idx, value in enumerate(s):
current = [value]
if value not in s[idx+1:]:
continue
for v in (s[idx+1:]):
current.append(v)
if current == list(reversed(current)):
print('if logic current=', current)
longest_palindrome = current if len(longest_palindrome) < len(current) else longest_palindrome
return longest_palindrome
res = longestPalindrome("babad")
print('result:', res)
</code></pre>
<p><strong>OUTPUT</strong></p>
<pre><code>if logic current= ['b', 'a', 'b']
if logic current= ['a', 'b', 'a']
result: ['b', 'a', 'b', 'a']
</code></pre>
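<p>The only thing I could think of was some list aliasing effect, so I ran this minimal check, but I still don't see how it explains the output above:</p>

```python
# Minimal aliasing check: assigning a list does not copy it, so later
# appends through one name are visible through the other name as well.
a = ['b', 'a', 'b']
b = a            # b is the same list object, not a copy
a.append('a')
print(b)         # ['b', 'a', 'b', 'a']
print(a is b)    # True
```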
|
<python><python-3.x>
|
2024-11-29 20:17:31
| 1
| 315
|
SakuraFreak
|
79,238,229
| 1,112,406
|
How can I eliminate spaces around Panel titles in Rich?
|
<p>I'm using Rich's Panel feature. Is there a way to eliminate the extra spaces Panel puts around title text? Here's an example from a Wordle program, which shows a sequence of Guesses. (It is written in and runs in Colab, hence the use of Rich.)</p>
<p><a href="https://i.sstatic.net/ykemDeY0.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/ykemDeY0.png" alt="enter image description here" /></a></p>
<p>The spaces before and after the title text are intentional. (They are part of the title text.) It's the spaces between those spaces and the Panel outline that I'd like to eliminate.</p>
<p>Thanks.</p>
|
<python><rich>
|
2024-11-29 19:55:58
| 1
| 2,758
|
RussAbbott
|
79,238,188
| 547,231
|
How to handle PRNG splitting in a jax.vmap context?
|
<p>I have a function which simulates a stochastic differential equation. Currently, without stochastic noise, my invocation of simulating the process up to time <code>t</code> looks like this (and, yeah, I need to use jax):</p>
<pre><code>def evolve(u, t):
# return u + dt * b(t, u) + sigma(t, u) * sqrt_dt * noise
def simulate(x, t):
k = jax.numpy.floor(t / dt).astype(int)
u = jax.lax.fori_loop(0, k, lambda i, u : evolve(u, i * dt), u)
</code></pre>
<p>Now, the pain comes with the noise. I'm a C++-guy who only occasionally needs to use Python for research/scientific work. And I really don't understand how I need (or should) implement PRNG splitting here. I guess I would change <code>evolve</code> to</p>
<pre><code>def evolve(u, t, key):
noise = jax.random.multivariate_normal(key, jax.numpy.zeros(d), covariance_matrix, shape = (n,))
# return u + dt * b(t, u) + sigma(t, u) * sqrt_dt * noise
</code></pre>
<p>But I guess that will not work properly. If I got it right, I need to use <code>jax.random.split</code> to split the <code>key</code>, because if I don't, I end up with correlated samples. But how and where do I need to split?</p>
<p>Also: I guess I would need to modify <code>simulate</code> to <code>def simulate(x, t, key)</code>. But then, should <code>simulate</code> also return the modified <code>key</code>?</p>
<p>And to make it even more complicated: I actually wrap <code>simulate</code> into a <code>batch_simulate</code> function which uses <code>jax.vmap</code> to process a whole batch of <code>x</code>'s and <code>t</code>'s. How do I pass the PRNG key to that <code>batch_simulate</code> function, how do I pass (and broadcast) it to <code>jax.vmap</code>, and what should <code>batch_simulate</code> return? At first glance, it seems it would take a single key and split it into many (due to the <code>vmap</code>). But what does the caller of <code>batch_simulate</code> do then ...</p>
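<p>To make the question concrete, here is my current (untested) sketch of how I imagine the splitting would look - the dynamics are placeholders and all names are mine:</p>

```python
import jax
import jax.numpy as jnp

dt = 0.1

def evolve(u, t, key):
    # Placeholder dynamics; the real code would use b(t, u) and sigma(t, u).
    noise = jax.random.normal(key, shape=u.shape)
    return u + dt * noise

def simulate(x, t, key):
    k = jnp.floor(t / dt).astype(int)
    def body(i, carry):
        u, key = carry
        key, subkey = jax.random.split(key)  # fresh subkey for every step
        return evolve(u, i * dt, subkey), key
    u, _ = jax.lax.fori_loop(0, k, body, (x, key))
    return u

# My guess for the batch case: one independent key per batch element,
# so the noise across the batch is uncorrelated.
batch_simulate = jax.vmap(simulate)

master_key = jax.random.PRNGKey(0)
xs = jnp.zeros((4, 3))                 # batch of 4 states of dimension 3
ts = jnp.full((4,), 1.0)               # same horizon for every element
keys = jax.random.split(master_key, 4)
out = batch_simulate(xs, ts, keys)
print(out.shape)
```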
<p>Completely lost on this. Any help is highly appreciated!</p>
|
<python><random><python-3.8><jax>
|
2024-11-29 19:33:27
| 1
| 18,343
|
0xbadf00d
|
79,238,174
| 9,064,356
|
Pyspark date parser raises timeParserPolicy upgrade exception even when already in CORRECTED mode
|
<p>Spark version: <code>3.3.1</code><br/>
Python version: <code>3.9</code></p>
<p>By default, calling <code>pyspark.sql.functions.to_date(col("foobar"), "yyyy-MM-dd")</code> raises spark upgrade exception if <code>foobar</code> col can't be parsed using given format and, at the same time, this <code>foobar</code> col can be parsed with given format using legacy date parser.</p>
<p>Exception looks like this:</p>
<pre><code>org.apache.spark.SparkUpgradeException:
You may get a different result due to the upgrading of Spark 3.0:
Fail to parse '21/09/20' in the new parser.
You can set spark.sql.legacy.timeParserPolicy to
LEGACY to restore the behavior before Spark 3.0,
or set to CORRECTED and treat it as an invalid datetime string.
</code></pre>
<p>Exception is pretty descriptive but the problem is that my spark session is already configured to use <code>CORRECTED</code> mode.</p>
<p><code>spark.conf.get("spark.sql.legacy.timeParserPolicy") # returns CORRECTED</code></p>
<p>My reason for using <code>CORRECTED</code> mode is to keep strict date checking but without raising any exceptions - if <code>to_date(...)</code> returns a null value, then I know that this row has a date column in a different format than expected, which I can handle gracefully.</p>
<p>I tried replicating this with a very simple example but couldn't - it didn't raise any upgrade exceptions in <code>CORRECTED</code> mode.</p>
<pre><code>spark = SparkSession \
.builder \
.appName("foo") \
.getOrCreate()
dates_df = spark.createDataFrame(
data=[
(1, "2024-01-01"),
(1, "2024-1-01"),
(1, "2024-01-1"),
(1, "2024-1-1"),
(1, "24-01-01"),
(1, "24-1-01"),
(1, "24-01-1"),
(1, "24-1-1"),
(1, "2024/01/01"),
(1, "2024/1/01"),
(1, "2024/01/1"),
(1, "2024/1/1"),
(1, "24/01/01"),
(1, "24/1/01"),
(1, "24/01/1"),
(1, "24/1/1"),
(1, "11/01/2024"),
(1, "11/1/2024"),
(1, "11/01/24"),
(1, "11/1/2024"),
],
schema=["id", "raw_date"],
)
dates_df = dates_df.drop("id")
format = "yyyy-MM-dd"
dates_df.withColumn("date_formatted", to_date("raw_date", format)).show()
</code></pre>
<p>Above raises spark upgrade exception, as expected</p>
<pre><code>org.apache.spark.SparkUpgradeException:
You may get a different result due to the upgrading to Spark >= 3.0:
Fail to parse '2024-1-01' in the new parser.
You can set spark.sql.legacy.timeParserPolicy to
LEGACY to restore the behavior before Spark 3.0,
or set to CORRECTED and treat it as an invalid datetime string.
</code></pre>
<p>And when I add <code>config("spark.sql.legacy.timeParserPolicy", "CORRECTED")</code> to the <code>SparkSession</code>, I get expected behaviour - no exceptions just nulls for records that don't match given format.</p>
<pre><code>+----------+--------------+
| raw_date|date_formatted|
+----------+--------------+
|2024-01-01| 2024-01-01|
| 2024-1-01| null|
| 2024-01-1| null|
| 2024-1-1| null|
| 24-01-01| null|
| 24-1-01| null|
| 24-01-1| null|
| 24-1-1| null|
|2024/01/01| null|
| 2024/1/01| null|
| 2024/01/1| null|
| 2024/1/1| null|
| 24/01/01| null|
| 24/1/01| null|
| 24/01/1| null|
| 24/1/1| null|
|11/01/2024| null|
| 11/1/2024| null|
| 11/01/24| null|
| 11/1/2024| null|
+----------+--------------+
</code></pre>
<p>So for some reason this issue only happens when running inside complicated/long jobs. What's also strange is that this upgrade exception is raised from a different line in the code depending on the env I run the job on (the exception is identical in both cases).</p>
<ol>
<li>Local env, single node - fails at the very beginning of the job (stage 2/10)</li>
<li>Remote env, kubernetes cluster - fails at the end of the job when running <code>df.rdd.getNumPartitions()</code>. During final stage which saves DF into csv. (part of the code that fails locally passes without any problems)</li>
</ol>
<p>It looks like it has something to do with available resources/partitions and the complexity of the Spark job. So my final guess was that something strange happens when the logical plan grows too big, so I tried <a href="https://spark.apache.org/docs/latest/api/python/reference/pyspark.sql/api/pyspark.sql.DataFrame.checkpoint.html" rel="nofollow noreferrer">checkpoints</a> to truncate the logical plan.</p>
<pre><code>with tempfile.TemporaryDirectory() as d:
spark.sparkContext.setCheckpointDir("/tmp/bb")
df = df.checkpoint(True)
</code></pre>
<p>The problem is that it didn't change anything, still raising upgrade exception which hints to use either <code>LEGACY</code> or <code>CORRECTED</code> while spark session is already using <code>CORRECTED</code> mode instead of default <code>EXCEPTION</code>.</p>
<p>Unfortunately I can't share the code of the long job which reproduces this issue. But when it comes to date parsing, there is nothing fancy, just calls to <code>to_date(col('foobar'), '<date_format>')</code>, and the job is definitely running in <code>CORRECTED</code> mode. In <code>LEGACY</code> mode the job passes but doesn't return the desired outcome, because date parsing is not as strict as in <code>CORRECTED</code> mode.</p>
|
<python><apache-spark><date><pyspark><upgrade>
|
2024-11-29 19:27:37
| 0
| 961
|
hdw3
|
79,238,038
| 4,929,704
|
Pyflink watermarks are stuck
|
<p>Here's my pyflink job. No matter how much I tried, I was not able to get the watermarks to advance.</p>
<p>The output I see is:</p>
<pre><code>Event: (1, 1000), Current Watermark: -9223372036854775808
Event: (2, 2000), Current Watermark: -9223372036854775808
Event: (3, 3000), Current Watermark: -9223372036854775808
Event: (4, 4000), Current Watermark: -9223372036854775808
Event: (5, 5000), Current Watermark: -9223372036854775808
(1, 1000)
(2, 2000)
(3, 3000)
(4, 4000)
(5, 5000)
</code></pre>
<p>Which I understand indicates that watermarks are stuck.</p>
<p>When I did the same in Java, I did see watermarks advance, although I added Thread.sleep(1000) in between elements. I tried doing the same in python, but it also didn't work.</p>
<pre class="lang-py prettyprint-override"><code>from pyflink.datastream import StreamExecutionEnvironment
from pyflink.common.watermark_strategy import WatermarkStrategy, TimestampAssigner
from pyflink.datastream.functions import ProcessFunction
from pyflink.common import Duration
# Custom TimestampAssigner
class CustomTimestampAssigner(TimestampAssigner):
def extract_timestamp(self, element, record_timestamp):
return element[1]
# Custom ProcessFunction
class PrintWatermarkProcessFunction(ProcessFunction):
def process_element(self, value, ctx: ProcessFunction.Context):
current_watermark = ctx.timer_service().current_watermark()
print(f"Event: {value}, Current Watermark: {current_watermark}")
yield value # Forward event
# Execution Environment
env = StreamExecutionEnvironment.get_execution_environment()
env.set_parallelism(1)
# Create a data source with more events to better illustrate watermarks
events = [
(1, 1000),
(2, 2000),
(3, 3000),
(4, 4000),
(5, 5000)
]
source = env.from_collection(events)
# WatermarkStrategy with 1 second out-of-orderness
watermark_strategy = (
WatermarkStrategy.for_bounded_out_of_orderness(Duration.of_seconds(1))
.with_timestamp_assigner(CustomTimestampAssigner())
)
# Assign watermarks
watermarked_stream = source.assign_timestamps_and_watermarks(watermark_strategy)
# Print watermark progression
processed_stream = watermarked_stream.process(PrintWatermarkProcessFunction())
# Print the events
processed_stream.print()
# Execute the job
env.execute("Flink Job with Proper Watermark Emission")
</code></pre>
|
<python><apache-flink><flink-streaming><pyflink>
|
2024-11-29 18:18:46
| 1
| 341
|
Viktor Ershov
|
79,237,863
| 8,852,013
|
How can I use Mypy with cookiecutter templates? Mypy throws "is not a valid Python package name" error
|
<p>I have the following directory containing my files:</p>
<pre><code>{{cookiecutter.project_name}}
</code></pre>
<p>I want to run mypy on this directory and all Python files above it (such as Cookiecutter hooks).</p>
<p>When I try to run <code>mypy .</code> I get:</p>
<pre><code>{{cookiecutter.project_name}} is not a valid Python package name
</code></pre>
<p>Which, to be fair, is true - but is there no way to get mypy to work with a cookiecutter template?</p>
|
<python><mypy><cookiecutter>
|
2024-11-29 17:05:38
| 0
| 1,145
|
Alexis Drakopoulos
|
79,237,858
| 620,679
|
Efficient access to data in a series of transient Python scripts
|
<p>Pandoc has a filter that accepts Python snippets and uses (for example) Matplotlib to generate charts. I want to produce documents that generate many charts from a common data source (e.g. a pandas data frame).</p>
<p>As an example:</p>
<pre><code>Here's the first chart:
~~~{.matplotlib}
import sqlite3
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
conn = sqlite3.connect('somedb.db')
query = '''SELECT something'''
df = pd.read_sql_query(query, conn).dropna()
fig, ax = plt.subplots()
ax.something()
~~~
</code></pre>
<p>The problem is that every chart has to regenerate the data frame, which is expensive. What I'd like to do is:</p>
<ul>
<li>Run a script at the beginning of the Markdown document that creates the data source and makes it available efficiently to subsequent filter calls.</li>
<li>Use the data to create as many charts as I need from the existing data source.</li>
<li>Shut down the data source when the pandoc call ends (or maybe with a time-to-live parameter).</li>
</ul>
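<p>One idea I had is a simple file-based cache, so the expensive query runs only once per document build and every later snippet just loads the cached file - but I'm not sure this is the cleanest approach (the path and names below are made up):</p>

```python
import os
import pickle
import tempfile

# Hypothetical cache location shared by all snippets in one pandoc run.
CACHE = os.path.join(tempfile.gettempdir(), "pandoc_report_data_cache.pkl")

def expensive_query():
    # Stand-in for the real pd.read_sql_query(...) call.
    return {"x": [1, 2, 3], "y": [4, 5, 6]}

def load_data():
    # The first snippet pays the cost; later snippets read the cached file.
    if os.path.exists(CACHE):
        with open(CACHE, "rb") as f:
            return pickle.load(f)
    data = expensive_query()
    with open(CACHE, "wb") as f:
        pickle.dump(data, f)
    return data

df = load_data()
print(df["x"])
```

The time-to-live part could then just be deleting the cache file at the end of the build.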
<p>Any ideas?</p>
|
<python><pandas><matplotlib><pandoc>
|
2024-11-29 17:03:52
| 1
| 4,041
|
Scott Deerwester
|
79,237,828
| 2,552,713
|
matplotlib cm.get_cmap(name, num_steps)
|
<p>My python code uses</p>
<p><code>plt.cm.get_cmap("coolwarm", num_steps)</code></p>
<p>This causes a deprecation warning.</p>
<p><code>The get_cmap function was deprecated in Matplotlib 3.7 and will be removed in 3.11. Use matplotlib.colormaps[name] or matplotlib.colormaps.get_cmap() or pyplot.get_cmap() instead.</code></p>
<p>However, neither of the suggested methods provides a "num_steps" parameter. How do I need to update my statement to stay compatible?</p>
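<p>For reference, this is what I assume the replacement would look like, using the registry lookup plus <code>resampled</code>, but I'm not sure it's the intended equivalent:</p>

```python
import matplotlib

num_steps = 5

# Deprecated spelling:
#   cmap = plt.cm.get_cmap("coolwarm", num_steps)
# My guess at the replacement: look the colormap up in the registry and
# resample it down to a fixed number of steps.
cmap = matplotlib.colormaps["coolwarm"].resampled(num_steps)
print(cmap.N)
```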
|
<python><matplotlib>
|
2024-11-29 16:53:51
| 1
| 519
|
juerg
|
79,237,585
| 4,614,641
|
Compiling Python (3.13) with custom ncurses library path
|
<p>I would like to compile Python 3.13 (on CentOS 7.9), and my 'ncurses' compiled libraries are in a non-standard location (<code>~/custom/gcc-14.2/lib/</code>).</p>
<p>Following the configuration documentation, I define <a href="https://docs.python.org/3/using/configure.html#cmdoption-arg-CURSES_LIBS" rel="nofollow noreferrer"><code>CURSES_LIBS</code></a></p>
<pre class="lang-bash prettyprint-override"><code># I use a custom compiler:
export CC=gcc-14.2 CXX=g++-14.2
# PWD is Python3.13.0
# I compile in a subfolder:
mkdir build && cd build
CURSES_LIBS="-L$HOME/custom/gcc-14.2" ../configure \
--prefix=$HOME/custom/gcc-14.2 \
--enable-loadable-sqlite-extensions \
--enable-optimizations --with-lto \
--with-platlibdir=lib64 \
--with-ensurepip=install \
--enable-shared
</code></pre>
<p>but the standard output contains these lines:</p>
<pre class="lang-none prettyprint-override"><code>checking for ncursesw... no
checking for ncurses... no
checking for ncursesw/curses.h... no
checking for ncursesw/ncurses.h... no
checking for ncursesw/panel.h... no
checking for ncurses/curses.h... no
checking for ncurses/ncurses.h... no
checking for ncurses/panel.h... no
checking for curses.h... no
checking for ncurses.h... no
</code></pre>
<p>and then when running <code>make</code> I get these errors (abbreviated):</p>
<pre class="lang-none prettyprint-override"><code>[...]
gcc-14.2 -pthread -fno-strict-overflow -Wsign-compare -DNDEBUG -g -O3 -Wall -fno-semantic-interposition -flto -fuse-linker-plugin -ffat-lto-objects -flto-partition=none -g -std=c11 -Wext
ra -Wno-unused-parameter -Wno-missing-field-initializers -Wstrict-prototypes -Werror=implicit-function-declaration -fvisibility=hidden -fprofile-generate -I../Include/internal -I../Include/i
nternal/mimalloc -IObjects -IInclude -IPython -I. -I../Include -fPIC -fPIC -c ../Modules/_cursesmodule.c -o Modules/_cursesmodule.o
In file included from ../Modules/_cursesmodule.c:116:
../Include/py_curses.h:80:5: error: unknown type name ‘WINDOW’
80 | WINDOW *win;
| ^~~~~~
../Modules/_cursesmodule.c:173:36: error: ‘FALSE’ undeclared here (not in a function)
173 | static int initialised_setupterm = FALSE;
| ^~~~~
../Modules/_cursesmodule.c: In function ‘PyCursesCheckERR’:
../Modules/_cursesmodule.c:213:17: error: ‘ERR’ undeclared (first use in this function); did you mean ‘ERA’?
213 | if (code != ERR) {
| ^~~
| ERA
[...]
../Modules/_cursesmodule.c: In function ‘_curses_start_color_impl’:
../Modules/_cursesmodule.c:4228:1: warning: control reaches end of non-void function [-Wreturn-type]
4228 | }
| ^
../Modules/_cursesmodule.c: In function ‘_curses_meta_impl’:
../Modules/_cursesmodule.c:3624:1: warning: control reaches end of non-void function [-Wreturn-type]
3624 | }
| ^
make[2]: *** [Modules/_cursesmodule.o] Erreur 1
make[1]: *** [profile-gen-stamp] Erreur 2
make: *** [profile-run-stamp] Erreur 2
</code></pre>
<p>Also the executable installed with <code>make install</code> cannot run:</p>
<pre><code>$ ~/custom/gcc-14.2/bin/python3.13 -c 'print("Hello")'
</code></pre>
<pre class="lang-none prettyprint-override"><code>Could not find platform independent libraries <prefix>
Could not find platform dependent libraries <exec_prefix>
Fatal Python error: Failed to import encodings module
Python runtime state: core initialized
ModuleNotFoundError: No module named 'encodings'
Current thread 0x00007fd963713780 (most recent call first):
<no Python frame>
</code></pre>
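<p>My next (untested) guess is that <code>configure</code> discovers the headers through the preprocessor flags rather than <code>CURSES_LIBS</code>, so an environment like this may be needed before running <code>../configure</code> (paths are from my setup):</p>

```shell
# Untested guess: CURSES_LIBS appears to affect linking only, so also
# point the preprocessor and the linker at the custom prefix.
export CPPFLAGS="-I$HOME/custom/gcc-14.2/include -I$HOME/custom/gcc-14.2/include/ncursesw"
export LDFLAGS="-L$HOME/custom/gcc-14.2/lib"
export LD_LIBRARY_PATH="$HOME/custom/gcc-14.2/lib${LD_LIBRARY_PATH:+:$LD_LIBRARY_PATH}"
echo "$CPPFLAGS"
```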
<p>Any idea on the proper way to define <code>CURSES_LIBS</code>? Or is there anything else to set up?</p>
|
<python><compilation><configure>
|
2024-11-29 15:22:00
| 0
| 2,314
|
PlasmaBinturong
|
79,237,283
| 2,956,276
|
Extract specific exception from ExceptionGroup tree
|
<p><a href="https://docs.python.org/3.12/library/exceptions.html#ExceptionGroup" rel="nofollow noreferrer">ExceptionGroup</a>s in Python contain a tuple of unrelated exceptions. Some of these exceptions can themselves be exception groups, so a specific exception (not a group one) can be nested in several exception groups.</p>
<p>The <code>except* SpecificException as exc</code> clause matches the specific exception regardless of how deeply it is embedded in exception groups. So far so good.</p>
<p>The <code>exc</code> variable is for sure an <code>ExceptionGroup</code> (maybe a <code>BaseExceptionGroup</code>, but it doesn't matter in my case). The <code>exc.exceptions</code> property can contain the <code>SpecificException</code>, but it can also contain only another <code>ExceptionGroup</code> (with another <code>ExceptionGroup</code> inside), and somewhere deep at the bottom there can be the <code>SpecificException</code>.</p>
<p>What is the Pythonic way to extract this embedded <code>SpecificException</code> from the <code>ExceptionGroup</code> tree?</p>
<p>I created this example which demonstrate the issue and present my current solution (which is not nice).</p>
<pre class="lang-py prettyprint-override"><code>from typing import Iterator, Type
import trio
class AlphaExc(Exception):
alphaAttr: str
def __init__(self, message: str, alphaAttr: str):
super().__init__(message)
self.alphaAttr = alphaAttr
class BetaExc(Exception):
pass
async def alpha():
raise AlphaExc("Alpha exception", "foo")
async def beta():
raise BetaExc("Beta exception")
# My helper function to extract exceptions from exception group. Is it possible to do it in a more elegant way?
def extractException[T](excGroup: BaseExceptionGroup, excType: Type[T]) -> Iterator[T]:
for exc in excGroup.exceptions:
if isinstance(exc, BaseExceptionGroup):
yield from extractException(exc, excType)
else:
if isinstance(exc, excType):
yield exc
async def main():
try:
try:
async with trio.open_nursery() as nursery:
nursery.start_soon(alpha)
nursery.start_soon(beta)
except* (AlphaExc, BetaExc) as exc:
match, rest = exc.split(AlphaExc) # This is here just to demonstrate embedded exception group
raise BaseExceptionGroup("My group exception", [match, rest])
except* AlphaExc as excgroup:
for ex in extractException(excgroup, AlphaExc):
print(f"Alpha exception ({type(ex)}): {ex} - {ex.alphaAttr}")
if __name__ == '__main__':
trio.run(main)
</code></pre>
<p>So we have an easy way to detect that there is some specific exception somewhere in the <code>ExceptionGroup</code> tree, but no easy way to access such exception(s).</p>
<p>I wonder if the best way to work with exceptions since Python 3.11 is really to copy the <code>extractException</code> function from the example above into each project and use this boilerplate code</p>
<pre class="lang-py prettyprint-override"><code>except* SpecificException as excgroup:
for exc in extractException(excgroup, SpecificException):
# work with exc
</code></pre>
<p>instead of simple</p>
<pre class="lang-py prettyprint-override"><code>except SpecificException as exc:
# work with exc
</code></pre>
<p>Is there some better (simpler) way to do it?</p>
<p><em>Note: I found a <a href="https://stackoverflow.com/questions/78448727/how-to-get-an-exception-out-of-an-exceptiongroup">similar question here on SO</a>, but the answers (including the accepted one) do not account for nested groups.</em></p>
|
<python><python-3.11>
|
2024-11-29 13:36:44
| 0
| 1,313
|
eNca
|
79,237,081
| 377,303
|
brython in iframe nags about:srcdoc#__main__ is not a url
|
<p>I'm restricted to using an iframe:</p>
<pre><code><html style='height: 100%'>
<head>
<script src='https://cdn.jsdelivr.net/npm/brython@3/brython.min.js'></script>
<script src='https://cdn.jsdelivr.net/npm/brython@3/brython_stdlib.js'></script>
</head>
<body>
<div>
<canvas id='my_canvas' > </canvas>
<script>
__BRYTHON__.brython_path =
'https://cdn.jsdelivr.net/npm/brython@3/brython.min.js'
__BRYTHON__.script_path =
'https://cdn.jsdelivr.net/npm/brython@3/brython.min.js'
console.log('asd', __BRYTHON__.script_path )
</script>
<script type='text/python3'>
from browser import document, html , timer
time_elapsed = 0
canvas = document["my_canvas"]
ctx = canvas.getContext("2d")
def draw_circle(t):
ctx.clearRect(0, 0, canvas.width, canvas.height)
radius = 50 + 30 * math.sin(t / 10)
ctx.beginPath()
ctx.arc(150, 150, radius, 0, 2 * math.pi)
ctx.stroke()
timer.set_interval(lambda: draw_circle(timer.time_elapsed() ), 50)
</script>
</div>
</body>
</html>
</code></pre>
<p>and I get:</p>
<pre><code>Uncaught Error: not a url: about:srcdoc#__main__
at $B.strip_host (brython.min.js:1:2062)
at run_scripts (brython.min.js:1:58001)
at $B.parser.brython (brython.min.js:1:55126)
at ev.target.body.onload (brython.min.js:1:52395)
</code></pre>
|
<python><brython>
|
2024-11-29 12:20:08
| 0
| 1,976
|
nerkn
|
79,237,024
| 1,406,168
|
Writing to Application Insights from a Python Azure Function App
|
<p>I am trying to add logs with custom dimensions to the traces table in Application Insights. The code below works, but it writes to the dependencies table. Any pointers on how to write to traces and metrics?</p>
<pre><code>import azure.functions as cho
from azure.monitor.opentelemetry import configure_azure_monitor
from opentelemetry import trace
tt = cho.FunctionApp(http_auth_level=cho.AuthLevel.ANONYMOUS)
@tt.route(route="http_trigger")
def http_trigger(req: cho.HttpRequest) -> cho.HttpResponse:
configure_azure_monitor(connection_string="ai conn str")
ri_ch = trace.get_tracer(__name__)
with ri_ch.start_as_current_span("This is a Log") as val:
val.set_attribute("who", "rithwik")
print("Hello Rithwik Bojja, the logs are Logged")
return cho.HttpResponse("This HTTP triggered function executed successfully.",status_code=200)
</code></pre>
|
<python><azure><azure-functions><azure-application-insights>
|
2024-11-29 12:00:06
| 1
| 5,363
|
Thomas Segato
|
79,236,951
| 3,333,319
|
Persisting changes to database backend using ibis-framework
|
<p>I am experimenting with the Ibis framework: it seems like a great tool for working with tables that already contain data I need to join, filter, aggregate, etc.
However, I cannot find out how to persist changes to the database.</p>
<p>As far as I understand, Ibis <code>Table</code>s are immutable objects that represent the steps I want to go through to analyze my data. Each time I add a new step, a new <code>Table</code> object is created, built starting from the previous one.
Once I have described the flow, I can run all the steps calling <code>ibis_table.execute()</code>.
Therefore, in my current understanding, in Ibis the term 'table' does not have the same meaning as in 'database table', instead, it is something much more similar to a SQL statement (actually, they are compiled to SQL statements).</p>
<p>Question 0: please confirm that my current understanding is correct.</p>
<p>Now, consider a DuckDb database with an already populated table <code>person</code></p>
<pre class="lang-py prettyprint-override"><code>import ibis
con = ibis.connect('duckdb://mydb.ddb')
person_table = con.table('person')
</code></pre>
<p>Questions:</p>
<ol>
<li><p>How do I insert new rows in this table? In my use case new rows may come either from a python object or they may be the result of the execution of some query via another ibis table.</p>
</li>
<li><p>How do I delete rows from this table?</p>
</li>
<li><p>Suppose I need to change the shape of an underlying database table (persisting those changes on the disk, i.e. something I would typically do with an <code>ALTER TABLE ...</code> statement in SQL). Can I achieve the same using ibis? How?</p>
</li>
</ol>
<p>Thanks a lot!</p>
|
<python><sql><database><ibis>
|
2024-11-29 11:36:02
| 1
| 973
|
Sirion
|
79,236,903
| 15,673,975
|
Can't find local python package in parallel directory when using Prefect + Docker
|
<p>I have setup a Prefect deployment on a Docker image. The problem is that my flow package depends on a local package which is in a parallel directory:</p>
<pre><code>main_folder
|_ my_flow
|_ src
|_ my_flow
|_ source code the depends on flow_dep...
|_ flow_dep
|_ src
|_ flow_dep
|_ source code...
</code></pre>
<p>On my local machine, this works perfectly. The problem is that I can't find a way to have this setup work in my Docker container: when deploying, I download the code from S3 and I <code>pip install</code> flow_dep as editable.
The thing is, the <em>ONLY</em> way to have this work is to manually add <code>/flow_dep/src</code> to <code>sys.path</code> (something that on my local machine is done automatically by <code>pip</code>).
If I modify <code>PYTHONPATH</code> by adding <code>flow_dep/src</code>, it DOES NOT WORK. I cannot wrap my head around this. I have printed <code>sys.path</code> and it's identical in both cases, but if <code>sys.path</code> is modified directly, flow_dep is found; if it is modified through <code>PYTHONPATH</code>, flow_dep is NOT found. Again, <code>sys.path</code> is identical in both cases when printed. What is going on?</p>
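<p>To show what I mean, here is a minimal stdlib check of how <code>PYTHONPATH</code> behaves (throwaway module name, not my real package): setting it inside a running interpreter does nothing, but a child interpreter started with it does pick the module up - which is why the Docker behaviour confuses me:</p>

```python
import os
import subprocess
import sys
import tempfile

# Throwaway stand-in for flow_dep/src: a directory with one module in it.
pkg_dir = tempfile.mkdtemp()
with open(os.path.join(pkg_dir, "flow_dep_check.py"), "w") as f:
    f.write("VALUE = 42\n")

# Setting PYTHONPATH inside a running interpreter does NOT touch sys.path...
os.environ["PYTHONPATH"] = pkg_dir
print(pkg_dir in sys.path)  # False

# ...but a child interpreter started with that environment does see it.
result = subprocess.run(
    [sys.executable, "-c", "import flow_dep_check; print(flow_dep_check.VALUE)"],
    capture_output=True, text=True,
)
print(result.stdout.strip())  # 42
```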
|
<python><docker><python-packaging><prefect>
|
2024-11-29 11:21:57
| 0
| 374
|
ultrapoci
|
79,236,884
| 1,613,983
|
Confused by keras/tensorflow error: ValueError: None values not supported
|
<p>I'm trying to build a neural net in <code>tensorflow</code>/<code>keras</code> but I'm stuck on this error:</p>
<pre><code>ValueError: None values not supported.
</code></pre>
<p>I've been able to reduce the code to reproduce this to the following mock example:</p>
<pre><code>from tensorflow.keras import layers, Model, Input
from tensorflow.keras.models import Sequential
import pandas as pd
import numpy as np
import tensorflow as tf
input_data = pd.DataFrame(np.random.rand(1000, 10))
data = tf.data.Dataset.zip({'input_values' : tf.data.Dataset.from_tensor_slices(input_data.values)})
batch_size = 100
train_split = 0.8
train_rows = int(train_split * input_data.shape[0])
train_dataset = data.take(train_rows)
validation_dataset = data.skip(train_rows)
train_data_batched = train_dataset.batch(batch_size).prefetch(tf.data.AUTOTUNE)
validation_data_batched = validation_dataset.batch(batch_size).prefetch(tf.data.AUTOTUNE)
num_outputs = 10
input_layer = Input(shape=(num_outputs,), name=f'input_values')
output = layers.Dense(num_outputs, activation='sigmoid', name='output')(input_layer)
# Define the model
model = Model(
inputs=[input_layer],
outputs=output,
)
max_epochs = 10
def loss(y_true, y_pred):
return 2.0
model.compile(
loss=loss,
optimizer='adam',
)
history = model.fit(
train_data_batched,
epochs=max_epochs,
validation_data=validation_data_batched
)
</code></pre>
<p>Here is the full error:</p>
<pre><code>File /tmp/virtualenvs/python3.11/lib/python3.11/site-packages/keras/src/utils/traceback_utils.py:122, in filter_traceback.<locals>.error_handler(*args, **kwargs)
119 filtered_tb = _process_traceback_frames(e.__traceback__)
120 # To get the full stack trace, call:
121 # `keras.config.disable_traceback_filtering()`
--> 122 raise e.with_traceback(filtered_tb) from None
123 finally:
124 del filtered_tb
File /tmp/virtualenvs/python3.11/lib/python3.11/site-packages/optree/ops.py:752, in tree_map(func, tree, is_leaf, none_is_leaf, namespace, *rests)
750 leaves, treespec = _C.flatten(tree, is_leaf, none_is_leaf, namespace)
751 flat_args = [leaves] + [treespec.flatten_up_to(r) for r in rests]
--> 752 return treespec.unflatten(map(func, *flat_args))
ValueError: None values not supported.
</code></pre>
<p>But I'm stuck as to what to do, mainly because I don't understand what the error is referring to (<code>None</code> values <em>where</em>? In a tensor? In a shape? Somewhere else?)</p>
<p>I'm running <code>tensorflow[and-cuda]==2.18.0</code></p>
|
<python><tensorflow>
|
2024-11-29 11:15:35
| 1
| 23,470
|
quant
|
79,236,813
| 3,218,338
|
Need help dealing with certain address scenarios in large cities and Google Places API
|
<p>I have an issue with the London area using the Google Places API and the endpoint for autocomplete. Why are we using autocomplete? Because our input is at times ambiguous. The background story is that we ask the customer for a specific location.</p>
<p>Anyway, imagine the following address "13 Manor Close" being searched using the Google Places Autocomplete API with the following code snippet.</p>
<pre><code>import requests
import json
"""Autocomplete search parameters
"""
api = "XXXX"
address = "13 Manor Close"
lat = "51.5072"
long = "-0.1138486"
radius = 50000
language = "en"
"""Google Autocomplete payload
"""
payload = {
"input": address,
"locationRestriction": {
"circle": {
"center": {
"latitude": lat,
"longitude": long
},
"radius": radius
}
}
}
"""Get suggestions using autocomplete api
"""
response = requests.request(
"POST",
"https://places.googleapis.com/v1/places:autocomplete",
headers={"Content-Type": "application/json; charset=utf-8",
"X-Goog-Api-Key": api},
json=payload
)
response_dict = json.loads(response.text)
"""Iterate over returned suggestions
"""
for res in response_dict["suggestions"]:
place_id = res["placePrediction"]["placeId"]
"""Use Google Places Detail lookup for the suggested place id
"""
place = requests.request(
"GET",
f"https://places.googleapis.com/v1/places/{place_id}?key={api}",
headers={
"Content-Type": "application/json; charset=utf-8",
"X-Goog-Api-Key": api,
"X-Goog-FieldMask": "*"
}
)
place_dict = json.loads(place.text)
print(place_dict)
</code></pre>
<p>If we query the autocomplete with locationRestriction and radius of 50,000m we get 5 results back.</p>
<pre><code> {
"description":"13 Manor Close, Bengeo, Hertford"
},
{
"description":"13 Manor Close, Hatching Green, Harpenden"
},
{
"description":"13 Manor Close, London"
},
{
"description":"13 Manor Close, Aveley, South Ockendon"
},
{
"description":"13 Manor Close, Horley"
}
</code></pre>
<p>Now here's the deal: the "13 Manor Close, London" returned above is in the postal code NW9; however, "13 Manor Close, London" also exists in SE28 and E17.</p>
<p>13 Manor Close, London SE28
<a href="https://www.google.com/maps/place/13+Manor+Cl,+London+SE28+8EY,+UK/@51.5088256,0.1190967,17z/data=!4m6!3m5!1s0x47d8af66c71b3579:0x9a72e5f3c90e4276!8m2!3d51.5088256!4d0.1216716!16s%2Fg%2F11cs94cnzx?entry=ttu&g_ep=EgoyMDI0MTEyNC4xIKXMDSoASAFQAw%3D%3D" rel="nofollow noreferrer">https://www.google.com/maps/place/13+Manor+Cl,+London+SE28+8EY,+UK/@51.5088256,0.1190967,17z/data=!4m6!3m5!1s0x47d8af66c71b3579:0x9a72e5f3c90e4276!8m2!3d51.5088256!4d0.1216716!16s%2Fg%2F11cs94cnzx?entry=ttu&g_ep=EgoyMDI0MTEyNC4xIKXMDSoASAFQAw%3D%3D</a></p>
<p>13 Manor Close, London NW9
<a href="https://www.google.com/maps/place/13+Manor+Cl,+London+NW9+9HD,+UK/@51.58704,-0.2789375,16z/data=!3m1!4b1!4m6!3m5!1s0x4876115a17f0204f:0xf13ec4987368f8dd!8m2!3d51.58704!4d-0.2763626!16s%2Fg%2F11c23r9v_f?entry=ttu&g_ep=EgoyMDI0MTEyNC4xIKXMDSoASAFQAw%3D%3D" rel="nofollow noreferrer">https://www.google.com/maps/place/13+Manor+Cl,+London+NW9+9HD,+UK/@51.58704,-0.2789375,16z/data=!3m1!4b1!4m6!3m5!1s0x4876115a17f0204f:0xf13ec4987368f8dd!8m2!3d51.58704!4d-0.2763626!16s%2Fg%2F11c23r9v_f?entry=ttu&g_ep=EgoyMDI0MTEyNC4xIKXMDSoASAFQAw%3D%3D</a></p>
<p>13 Manor Close, London E17
<a href="https://www.google.com/maps/place/13+Manor+Cl,+London+E17+5RT,+UK/@51.5964954,-0.0341318,17z/data=!3m1!4b1!4m6!3m5!1s0x48761dd91d4501ad:0x163048f90f55c230!8m2!3d51.5964954!4d-0.0315569!16s%2Fg%2F11c22cjwml?entry=ttu&g_ep=EgoyMDI0MTEyNC4xIKXMDSoASAFQAw%3D%3D" rel="nofollow noreferrer">https://www.google.com/maps/place/13+Manor+Cl,+London+E17+5RT,+UK/@51.5964954,-0.0341318,17z/data=!3m1!4b1!4m6!3m5!1s0x48761dd91d4501ad:0x163048f90f55c230!8m2!3d51.5964954!4d-0.0315569!16s%2Fg%2F11c22cjwml?entry=ttu&g_ep=EgoyMDI0MTEyNC4xIKXMDSoASAFQAw%3D%3D</a></p>
<p>And unfortunately Google returns at most 5 results for the autocomplete. If we search specifically for "13 Manor Close, London", only NW9 and E17 are returned, but SE28 is not.</p>
<p>The only way to have them returned appears to be to pass the postal code in the search query, for example "13 Manor Close SE28", but this is not something we can expect from the customer.</p>
<p>Have I managed to find a corner case, or is the London mapping this broken? Open for discussion, questions and suggestions how to deal with this scenario.</p>
|
<python><google-maps><google-places-api>
|
2024-11-29 10:52:02
| 1
| 682
|
user3218338
|
79,236,793
| 8,129,873
|
How to create index pattern in AWS opensearch using API?
|
<p>I've configured an OpenSearch domain in AWS. As part of automating the OpenSearch configuration, I want to create a set of index patterns.</p>
<p>I've tried many APIs I found online, but could not succeed. Has anyone managed to create an index pattern in AWS OpenSearch?</p>
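<p>For reference, a minimal sketch of the usual approach: on a managed AWS domain, index patterns are OpenSearch Dashboards saved objects, so they are created through the Dashboards saved-objects API (under <code>/_dashboards</code>), not the core OpenSearch REST API. The endpoint, pattern id, and title below are illustrative assumptions, not values from this question.</p>

```python
import json

# Hedged sketch: build the saved-objects request for one index pattern.
# DASHBOARDS_URL and PATTERN_ID are hypothetical placeholders.
DASHBOARDS_URL = "https://my-domain.example/_dashboards"
PATTERN_ID = "my-logs-pattern"

def build_index_pattern_request(title, time_field="@timestamp"):
    """Return (url, headers, body) for creating one index pattern."""
    url = f"{DASHBOARDS_URL}/api/saved_objects/index-pattern/{PATTERN_ID}"
    headers = {
        "osd-xsrf": "true",  # Dashboards rejects write requests without this
        "Content-Type": "application/json",
    }
    body = json.dumps({"attributes": {"title": title,
                                      "timeFieldName": time_field}})
    return url, headers, body

url, headers, body = build_index_pattern_request("my-logs-*")
# requests.post(url, headers=headers, data=body, auth=("user", "password"))
```

<p>Sending this with <code>requests.post</code> and the domain's credentials should create the pattern; the <code>osd-xsrf</code> header is required for Dashboards write requests.</p>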
|
<python><amazon-web-services><opensearch><amazon-opensearch>
|
2024-11-29 10:46:04
| 2
| 425
|
Prathyush Peettayil
|
79,236,753
| 2,405,663
|
Python get data using column name instead of column index
|
<p>I have the following Python code to get data from a database. I need to read the data from the cursor using the column names from the cursor description.</p>
<pre><code>try:
with _db.connect() as cnn:
crr = cnn.cursor()
crr.execute(
"SELECT Frame, ROW_NUMBER() OVER(ORDER BY Frame ASC),[Left Elbow Ulnar Deviation/Radial Deviation],[Left Elbow Pronation/Supination],"
"[Left Elbow Flexion/Extension],[Left T4 Shoulder Abduction/Adduction],[Left T4 Shoulder Internal/External Rotation],"
"[Left T4 Shoulder Flexion/Extension],[Right T4 Shoulder Abduction/Adduction],[Right T4 Shoulder Internal/External Rotation]"
"FROM AA_V_SHWS_RulaSensorDataXValues r "
"WHERE idRulaSensorData = ? ",(idFileImportato))
rows = crr.fetchall()
if rows is None or len(rows) == 0:
logger.error(f"File [{idFileImportato}] file import not found in Omnia")
return False
for row in rows:
#
Flex_Sh = row.cursor_description('C86')#
except Exception as x:
logger.error(f"File [{idMonitoraggio}] Error in query to get data: {x}")
return False
</code></pre>
<p>I need to get data from "row" using column name but I m not able to do it.</p>
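<p>A minimal sketch of one common approach: build a dict per row keyed by the column names from <code>cursor.description</code>. This also covers names like "Left Elbow Flexion/Extension" that are not valid Python identifiers and so cannot be reached via attribute access. The description and rows here are simulated to keep the example self-contained:</p>

```python
# Hedged sketch: map cursor rows to dicts keyed by column name.
def rows_to_dicts(description, rows):
    columns = [col[0] for col in description]  # first item is the column name
    return [dict(zip(columns, row)) for row in rows]

# Simulated cursor output; a real description comes from crr.description
# after crr.execute(...).
description = [("Frame", None), ("Left Elbow Flexion/Extension", None)]
rows = [(1, 12.5), (2, 13.1)]

records = rows_to_dicts(description, rows)
print(records[0]["Left Elbow Flexion/Extension"])  # 12.5
```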
|
<python><database>
|
2024-11-29 10:34:03
| 0
| 2,177
|
bircastri
|
79,236,616
| 13,392,257
|
FastAPI - listen to kafka in separate thread
|
<p>I have a FastAPI application. The application subscribes to a Kafka topic and handles messages (everything is working).</p>
<p>But when I added HTTP endpoints, I noticed that I can't call them, because I listen to Kafka in an infinite loop.</p>
<p>How can I fix my code to enable the HTTP endpoints? I guess the Kafka subscription should run in a separate thread.</p>
<p>My code:</p>
<pre class="lang-py prettyprint-override"><code># main.py
from pathlib import Path
from fastapi import FastAPI, APIRouter
from app.api.v1.api import api_router
from app.core.config import settings
import logging
from aiokafka import AIOKafkaConsumer
from fastapi import FastAPI
import hashlib
log = logging.getLogger("uvicorn")
router = APIRouter(prefix="/kafka_consumer")
async def consume():
"""Consume and print messages from Kafka."""
while True:
async for msg in consumer:
print("TODO: process msg ", msg.value.decode())
async def check_kafka() -> bool:
"""
Checks if Kafka is available by fetching all metadata from the Kafka client.
Returns:
bool: True if Kafka is available, False otherwise.
"""
try:
await consumer.client.fetch_all_metadata()
except Exception as exc:
logging.error(f'Kafka is not available: {exc}')
else:
return True
return False
@router.get("/healthz/ready")
async def ready_check() -> int:
"""
Check the health of application dependencies.
Returns:
int: Representation of the HTTP status code for a successful response.
- 200 if the server is ready.
- 503 if the server is not ready.
"""
readiness_probes = await asyncio.gather(
*[component for component in [check_kafka(), http_client.is_ready()]],
)
ready = all(probe for probe in readiness_probes)
status = HTTPStatus.OK if ready else HTTPStatus.SERVICE_UNAVAILABLE
return status
def create_application() -> FastAPI:
"""Create FastAPI application and set routes.
Returns:
FastAPI: The created FastAPI instance.
"""
application = FastAPI(openapi_url="/kafka_consumer/openapi.json", docs_url="/kafka_consumer/docs")
application.include_router(router, tags=["consumer"])
return application
def create_consumer() -> AIOKafkaConsumer:
"""Create AIOKafkaConsumer.
Returns:
AIOKafkaConsumer: The created AIOKafkaConsumer instance.
"""
return AIOKafkaConsumer(
settings.kafka_topics,
bootstrap_servers=settings.kafka_instance,
)
app = create_application()
consumer = create_consumer()
@app.on_event("startup")
async def startup_event():
"""Start up event for FastAPI application."""
log.info("Starting up...")
await consumer.start()
log.info("Start consume")
await consume()
log.info("XXX Start app") # Don't see this log
@app.on_event("shutdown")
async def shutdown_event():
"""Shutdown event for FastAPI application."""
log.info("Shutting down...")
await consumer.stop()
</code></pre>
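<p>A minimal sketch of the usual fix: schedule the consume loop with <code>asyncio.create_task</code> instead of awaiting it in the startup handler, so startup completes and the HTTP endpoints become reachable. An <code>asyncio.Queue</code> stands in for the Kafka consumer so the sketch runs without a broker:</p>

```python
import asyncio

processed = []

async def consume(queue):
    # Stands in for the `async for msg in consumer` loop.
    while True:
        msg = await queue.get()
        if msg is None:            # sentinel used only to end this demo
            break
        processed.append(msg)

async def lifespan_demo():
    queue = asyncio.Queue()        # stands in for the Kafka consumer
    # In the real app: task = asyncio.create_task(consume()) inside the
    # startup event, keep a reference, and task.cancel() on shutdown.
    task = asyncio.create_task(consume(queue))
    await queue.put("TODO: process msg")
    await queue.put(None)
    await task

asyncio.run(lifespan_demo())
print(processed)  # ['TODO: process msg']
```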
|
<python><apache-kafka><fastapi><aiokafka>
|
2024-11-29 09:59:34
| 0
| 1,708
|
mascai
|
79,236,498
| 3,400,076
|
Pylint flagged import_error message
|
<p>I executed Pylint on my Python code and it flagged "import-error" with the messages "Unable to import 'requests'", "Unable to import 'openpyxl'", and similar for other Python modules.</p>
<p>For openpyxl, I have installed the openpyxl module on my Dockerfile with the following command:</p>
<blockquote>
<p>RUN python3.12 /tmp/pip-24.0-py3-none-any.whl/pip install
openpyxl-3.0.9-py2.py3-none-any.whl</p>
</blockquote>
<p>and my program can use Excel properly, so I am not sure why Pylint flags an import-error for openpyxl. Could anyone advise, please? Thanks!</p>
|
<python><pylint>
|
2024-11-29 09:28:23
| 0
| 519
|
xxestter
|
79,235,967
| 12,466,687
|
pdftotext in python not working missing dll probably
|
<p>I was able to use <code>pdftotext</code> a few months back after a lot of installation challenges. If I remember correctly, I downloaded some files, placed them in some folder, ran sudo commands for the installation, and modified content in Microsoft Visual Studio.</p>
<p>But recently I deleted my <code>Anaconda</code> installation and reinstalled it, and since then I am getting <code>ImportError: DLL load failed: The specified module could not be found</code>, even though <code>pdftotext==2.2.2</code> shows as installed in my environment.</p>
<p>Then, after reading some posts, I pip-installed a <strong>lower version</strong> of <code>pdftotext</code>, but I was still getting the error, so I reinstalled with <code>pip install pdftotext==2.2.2</code>.</p>
<p>The strange thing is that I am able to use this library in a <code>Jupyter Notebook</code>, but when I use it in a <strong>Streamlit app</strong> on <strong>localhost</strong> it gives the error <code>import pdftotext ModuleNotFoundError: No module named 'pdftotext'</code></p>
<p>And when I terminate the <code>streamlit</code> app by <code>ctrl+c</code> then in logs I can see:</p>
<pre><code>forrtl: error (200): program aborting due to control-C event
Image PC Routine Line Source
libifcoremd.dll 00007FF85CBBDF54 Unknown Unknown Unknown
KERNELBASE.dll 00007FF90221D65D Unknown Unknown Unknown
KERNEL32.DLL 00007FF902FAE8D7 Unknown Unknown Unknown
ntdll.dll 00007FF904E5FBCC Unknown Unknown Unknown
</code></pre>
<p>Some older posts use <code>pip3</code> for the installation, but I am not sure if that's still relevant. To me it seems like either a <strong>DLL issue</strong> or a multiple <strong>Python interpreters</strong> issue.</p>
<p>I am using <strong>Windows 11 with WSL enabled and Anaconda, vscode, python 3.12.4</strong></p>
<p>After removing <code>Anaconda</code> I started facing a <code>PowerShell execution policy</code> "Restricted" issue which was not letting me activate virtual environments (<a href="https://stackoverflow.com/questions/77711479/how-to-fix-cant-load-file-activate-ps1-because-script-execution-is-disable">link</a>), which I thought I had fixed, and then I installed Anaconda again.</p>
<p>Can anyone suggest what could be done here to fix it ?</p>
<p><strong>Update:</strong>
My packages are getting installed in this anaconda environment which is correct as I am running it from here only.
<code>C:\Users\vinee\anaconda3\envs\HQ2\Lib\site-packages</code></p>
<p>But I think streamlit is still picking up my packages from the Global environment Python (which I deleted yesterday) and libraries still exist at this path <code>C:\Users\vinee\AppData\Roaming\Python\Python312\site-packages</code></p>
<p>I am just going to delete the folder and its subfolders under <code>Python312\</code>, although I am not sure whether this will work or whether I also need to remove some path variables. Please advise!</p>
|
<python><pip><anaconda><streamlit><pdftotext>
|
2024-11-29 05:47:37
| 2
| 2,357
|
ViSa
|
79,235,770
| 248,616
|
Python how to set dict key as 01, 02, 03 ... when using dpath?
|
<p>Let's say we want to create/maintain a dict with below key structure</p>
<pre><code>{
"a": {"bb": {"01": "some value 01",
"02": "some value 02",
} },
}
</code></pre>
<p>We use <a href="https://pypi.org/project/dpath/" rel="nofollow noreferrer">dpath</a> .new() as below to do this</p>
<pre><code>import dpath
d=dict() ; print(d) # d={}
dpath.new(d,'a/bb/00/c', '00val') ; print(d) # d={'a': {'bb': [{'c': '00val'}]}}
# NOTE: this sets bb as a list and assigns at bb[0], i.e. index 0 of list bb
# what we want instead is d={'a': {'bb': {'00': {'c': '00val'} }}}
# what we want is NOT     d={'a': {'bb': [ {'c': '00val'} ]}}
</code></pre>
<p>So when <code>00</code> is used in the path, dpath translates it into list <code>index 0</code> instead of the dict key <code>00</code>.</p>
<p>The question is: how do I set the key as <code>00</code>? Currently I have to work around it by adding a prefix, i.e. <code>s00</code> <code>s01</code> <code>s02</code>.</p>
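<p>One workaround, sketched minimally: build the nested dict with a small helper that always creates dict nodes, so numeric path segments stay string keys:</p>

```python
# Hedged sketch: a dpath.new-like helper whose intermediate nodes are
# always dicts, never lists, so "00" remains the dict key '00'.
def dict_new(d, path, value):
    keys = path.split("/")
    for key in keys[:-1]:
        d = d.setdefault(key, {})
    d[keys[-1]] = value

d = {}
dict_new(d, "a/bb/00/c", "00val")
dict_new(d, "a/bb/01/c", "01val")
print(d)  # {'a': {'bb': {'00': {'c': '00val'}, '01': {'c': '01val'}}}}
```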
|
<python><dictionary><dpath>
|
2024-11-29 03:32:07
| 3
| 35,736
|
Nam G VU
|
79,235,638
| 65,659
|
How can I use type hints to declare an instance method of a class accepts another class instance as a parameter?
|
<p>How can I write a class in Python that has a method with an argument that should be an instance of the class?</p>
<pre><code>class MyClass:
def compare(self, other: MyClass):
pass
</code></pre>
<p>This gives me an error:</p>
<pre><code>NameError: name 'MyClass' is not defined
</code></pre>
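<p>The name is not defined because the class body executes before <code>MyClass</code> is bound. A minimal sketch of the usual fixes, using either a quoted annotation or the <code>annotations</code> future import:</p>

```python
from __future__ import annotations  # PEP 563: annotations become lazy strings

class MyClass:
    # Without the __future__ import, quoting the name ("MyClass") works on
    # any supported version; Python 3.11+ also offers typing.Self for this.
    def compare(self, other: MyClass) -> bool:
        return isinstance(other, MyClass)

a, b = MyClass(), MyClass()
print(a.compare(b))  # True
```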
|
<python><python-typing>
|
2024-11-29 01:30:14
| 1
| 4,968
|
Chuck
|
79,235,606
| 143,189
|
Mocking with fakeredis+python results in HTTP 500
|
<p>I have the following FastAPI module in Python with the corresponding test, but I am never able to get an HTTP 200 response from the <code>await redis_service.set_key</code> call. Any suggestions on what I could be doing wrong?</p>
<p>THIS IS THE EXISTING CODE (I CAN REFACTOR THIS, BUT I NEED WORKING TESTS BEFORE ATTEMPTING A REFACTOR):</p>
<pre><code>import os
from fastapi import FastAPI, HTTPException, Depends, Request
from .logs.log_messages import LOG_MESSAGES_INFO
from services.cache.service.cache_service import RedisService
# Define the service name for logging
service_name = "cache-service"
# Initialize the logger
logger = ServiceLogger(
service_name, LOG_MESSAGES_INFO, os.getenv("LOG_FILE_PATH", "api.log")
)
# Create the FastAPI app
app = FastAPI(docs_url="/api", root_path="/plt/cache-service")
redis_url = os.getenv("REDIS_URL", "redis://127.0.0.1:6379")
redis_service = RedisService(redis_url)
# Define the endpoints
@app.post("/redis/set-key/")
async def set_key(request: Request, key_value: KeyValue, namespace: str):
try:
cr_id = request.headers.get("correlationid")
logger.log("info", "MSREDI001", id=cr_id, method="set_key")
# A BREAKPOINT HERE, DOESN'T ALLOW ME TO STEP INTO set_key
# AND DIRECTLY RAISES AN EXCEPTION WITH 500 = <Response [500 Internal Server Error]>
await redis_service.set_key(namespace, key_value.key, key_value.value, key_value.expire)
return {"status": "success"}
except Exception as e:
logger.log("error", "MSREDE004", method="set_key", id=cr_id, error=str(e))
raise HTTPException(status_code=500, detail=str(e))
</code></pre>
<p>The associated test:</p>
<pre><code>import pytest
from unittest.mock import patch, MagicMock
from fastapi.testclient import TestClient
from pydantic import BaseModel
from services.cache.main import app
from fakeredis import aioredis
client = TestClient(app)
@pytest.fixture
def fake_redis():
fake_redis_instance = aioredis.FakeRedis()
return fake_redis_instance
@patch('services.cache.main.redis_service') # Mock the RedisService
@patch('services.cache.main.logger') # Mock the logger
def test_set_key_success(mock_logger, mock_redis_service, fake_redis):
# Mock model for the key-value pair
class KeyValue(BaseModel):
key: str
value: str
expire: int
mock_redis_service.set_key = MagicMock(side_effect=fake_redis.set)
mock_logger.log = MagicMock() # Mock the logger
key_value = KeyValue(key="test_key", value="test_value", expire=3600)
headers = {"correlationid": "4af44d6c-9e52-4929-9ff7-bbae3414e92f"}
response = client.post("/redis/set-key/", json={
"key": key_value.key,
"value": key_value.value,
"expire": key_value.expire
}, headers=headers, params={"namespace": "test_namespace"})
# Assert
assert response.status_code == 200 # ALWAYS FAILS THIS ASSERT
assert response.json() == {"status": "success"}
</code></pre>
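<p>A minimal sketch of the likely culprit: the endpoint awaits <code>redis_service.set_key</code>, but <code>MagicMock(side_effect=fake_redis.set)</code> does not return an awaitable, so the <code>await</code> raises inside the try block and the handler answers 500. <code>unittest.mock.AsyncMock</code> (Python 3.8+) returns a coroutine and can be awaited:</p>

```python
import asyncio
from unittest.mock import AsyncMock

# Hedged sketch: mock an async method so that awaiting it succeeds.
redis_service = AsyncMock()
redis_service.set_key.return_value = True

async def endpoint_body():
    # stands in for the set_key endpoint's await on the mocked service
    return await redis_service.set_key("test_namespace", "test_key",
                                       "test_value", 3600)

result = asyncio.run(endpoint_body())
print(result)  # True
```

<p>In the test, <code>mock_redis_service.set_key = AsyncMock(side_effect=fake_redis.set)</code> should make the await succeed, since <code>fakeredis.aioredis.FakeRedis.set</code> is itself a coroutine function.</p>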
|
<python><pytest><fakeredis>
|
2024-11-29 01:00:53
| 1
| 550
|
ossandcad
|
79,235,514
| 9,500,955
|
Cannot write data to BigQuery when using Databricks secret
|
<p>I am following this <a href="https://docs.databricks.com/en/connect/external-systems/bigquery.html#read-and-write-to-a-bigquery-table" rel="nofollow noreferrer">guide</a> on writing data to the BigQuery table.</p>
<p>Right now, I have an error when I try to write data using <a href="https://docs.databricks.com/en/security/secrets/example-secret-workflow.html#step-3-use-the-secrets-in-a-notebook" rel="nofollow noreferrer">Databricks Secret</a> instead of the JSON credential file and setting the <code>GOOGLE_APPLICATION_CREDENTIALS</code> environment variable.</p>
<pre class="lang-bash prettyprint-override"><code>java.io.IOException: Error getting access token from metadata server at: http://169.x.x.x/computeMetadata/v1/instance/service-accounts/default/token
</code></pre>
<p>What is strange is that I can write/read the data using the <code>GOOGLE_APPLICATION_CREDENTIALS</code> environment variable, and I can also read the data using the Databricks secret. So I don't know why it does not work when I write the data with the Databricks secret.</p>
<p>Here is my code to read the Databricks Secret:</p>
<pre class="lang-py prettyprint-override"><code>import base64
cred = dbutils.secrets.get(scope="bigquery-scope", key="secret-name").encode('ascii')
cred = base64.b64encode(cred)
cred = cred.decode('ascii')
spark.conf.set("credentials", cred)
</code></pre>
<p>Below is my code to read/write the data:</p>
<pre class="lang-py prettyprint-override"><code># Read data
df = spark.read.format("bigquery")
.option("parentProject", <parent-project-id>)
.option("viewsEnabled","true")
.option("table", <table-name>)
.load()
# Write data
df.write.format("bigquery") \
.mode("overwrite") \
.option("temporaryGcsBucket", <bucket-name>) \
.option("table", <table-name>) \
.option("parentProject", <parent-project-id>) \
.save()
</code></pre>
<p>Am I missing any configuration for writing the data to BigQuery with Databricks Secret?</p>
<p><strong>Update:</strong></p>
<p>I tried to read the Secret, write it into a temporary JSON file and set the path to <code>GOOGLE_APPLICATION_CREDENTIALS</code> with this code but it still failed:</p>
<pre class="lang-py prettyprint-override"><code>import os
# Retrieve the JSON credentials from Databricks Secrets
cred = dbutils.secrets.get("bigquery-cred", "project-id-name")
# Write the JSON to a temporary file
temp_cred_path = "/tmp/cred.json"
with open(temp_cred_path, "w") as temp_cred_file:
temp_cred_file.write(cred)
# Set the environment variable to point to the JSON file
os.environ['GOOGLE_APPLICATION_CREDENTIALS'] = temp_cred_path
</code></pre>
<p>This is the full stacktrace:</p>
<pre class="lang-py prettyprint-override"><code>---------------------------------------------------------------------------
Py4JJavaError Traceback (most recent call last)
<command-846265064717828> in <module>
1 # this exports to Big Query -- note "overwrite" in method def above
----> 2 write_to_bigquery(table_name, bucket, dataset, project_id)
<command-846265064717827> in write_to_bigquery(tableName, tempBucket, df, project_id)
1 def write_to_bigquery(tableName, tempBucket, df, project_id):
----> 2 df.write.format("bigquery") \
3 .option("temporaryGcsBucket", tempBucket) \
4 .option("parentProject", project_id) \
5 .option("table", tableName) \
/databricks/spark/python/pyspark/sql/readwriter.py in save(self, path, format, mode, partitionBy, **options)
736 self.format(format)
737 if path is None:
--> 738 self._jwrite.save()
739 else:
740 self._jwrite.save(path)
/databricks/spark/python/lib/py4j-0.10.9.1-src.zip/py4j/java_gateway.py in __call__(self, *args)
1302
1303 answer = self.gateway_client.send_command(command)
-> 1304 return_value = get_return_value(
1305 answer, self.gateway_client, self.target_id, self.name)
1306
/databricks/spark/python/pyspark/sql/utils.py in deco(*a, **kw)
115 def deco(*a, **kw):
116 try:
--> 117 return f(*a, **kw)
118 except py4j.protocol.Py4JJavaError as e:
119 converted = convert_exception(e.java_exception)
/databricks/spark/python/lib/py4j-0.10.9.1-src.zip/py4j/protocol.py in get_return_value(answer, gateway_client, target_id, name)
324 value = OUTPUT_CONVERTER[type](answer[2:], gateway_client)
325 if answer[1] == REFERENCE_TYPE:
--> 326 raise Py4JJavaError(
327 "An error occurred while calling {0}{1}{2}.\n".
328 format(target_id, ".", name), value)
Py4JJavaError: An error occurred while calling o679.save.
: com.google.cloud.spark.bigquery.repackaged.com.google.inject.ProvisionException: Unable to provision, see the following errors:
1) Error in custom provider, java.io.UncheckedIOException: Failed to create default Credentials
at com.google.cloud.spark.bigquery.repackaged.com.google.cloud.bigquery.connector.common.BigQueryClientModule.provideBigQueryCredentialsSupplier(BigQueryClientModule.java:46)
while locating com.google.cloud.spark.bigquery.repackaged.com.google.cloud.bigquery.connector.common.BigQueryCredentialsSupplier
for the 3rd parameter of com.google.cloud.spark.bigquery.repackaged.com.google.cloud.bigquery.connector.common.BigQueryClientModule.provideBigQueryClient(BigQueryClientModule.java:63)
at com.google.cloud.spark.bigquery.repackaged.com.google.cloud.bigquery.connector.common.BigQueryClientModule.provideBigQueryClient(BigQueryClientModule.java:63)
while locating com.google.cloud.spark.bigquery.repackaged.com.google.cloud.bigquery.connector.common.BigQueryClient
1 error
at com.google.cloud.spark.bigquery.repackaged.com.google.inject.internal.InternalProvisionException.toProvisionException(InternalProvisionException.java:226)
at com.google.cloud.spark.bigquery.repackaged.com.google.inject.internal.InjectorImpl$1.get(InjectorImpl.java:1097)
at com.google.cloud.spark.bigquery.repackaged.com.google.inject.internal.InjectorImpl.getInstance(InjectorImpl.java:1131)
at com.google.cloud.spark.bigquery.BigQueryRelationProvider.createRelation(BigQueryRelationProvider.scala:110)
at org.apache.spark.sql.execution.datasources.SaveIntoDataSourceCommand.run(SaveIntoDataSourceCommand.scala:47)
at org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult$lzycompute(commands.scala:80)
at org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult(commands.scala:78)
at org.apache.spark.sql.execution.command.ExecutedCommandExec.executeCollect(commands.scala:89)
at org.apache.spark.sql.execution.QueryExecution$$anonfun$$nestedInanonfun$eagerlyExecuteCommands$1$1.$anonfun$applyOrElse$1(QueryExecution.scala:174)
at org.apache.spark.sql.execution.SQLExecution$.$anonfun$withCustomExecutionEnv$8(SQLExecution.scala:245)
at org.apache.spark.sql.execution.SQLExecution$.withSQLConfPropagated(SQLExecution.scala:393)
at org.apache.spark.sql.execution.SQLExecution$.$anonfun$withCustomExecutionEnv$1(SQLExecution.scala:192)
at org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:979)
at org.apache.spark.sql.execution.SQLExecution$.withCustomExecutionEnv(SQLExecution.scala:147)
at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:343)
at org.apache.spark.sql.execution.QueryExecution$$anonfun$$nestedInanonfun$eagerlyExecuteCommands$1$1.applyOrElse(QueryExecution.scala:174)
at org.apache.spark.sql.execution.QueryExecution$$anonfun$$nestedInanonfun$eagerlyExecuteCommands$1$1.applyOrElse(QueryExecution.scala:170)
at org.apache.spark.sql.catalyst.trees.TreeNode.$anonfun$transformDownWithPruning$1(TreeNode.scala:590)
at org.apache.spark.sql.catalyst.trees.CurrentOrigin$.withOrigin(TreeNode.scala:168)
at org.apache.spark.sql.catalyst.trees.TreeNode.transformDownWithPruning(TreeNode.scala:590)
at org.apache.spark.sql.catalyst.plans.logical.LogicalPlan.org$apache$spark$sql$catalyst$plans$logical$AnalysisHelper$$super$transformDownWithPruning(LogicalPlan.scala:31)
at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper.transformDownWithPruning(AnalysisHelper.scala:268)
at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper.transformDownWithPruning$(AnalysisHelper.scala:264)
at org.apache.spark.sql.catalyst.plans.logical.LogicalPlan.transformDownWithPruning(LogicalPlan.scala:31)
at org.apache.spark.sql.catalyst.plans.logical.LogicalPlan.transformDownWithPruning(LogicalPlan.scala:31)
at org.apache.spark.sql.catalyst.trees.TreeNode.transformDown(TreeNode.scala:566)
at org.apache.spark.sql.execution.QueryExecution.$anonfun$eagerlyExecuteCommands$1(QueryExecution.scala:170)
at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper$.allowInvokingTransformsInAnalyzer(AnalysisHelper.scala:324)
at org.apache.spark.sql.execution.QueryExecution.eagerlyExecuteCommands(QueryExecution.scala:170)
at org.apache.spark.sql.execution.QueryExecution.commandExecuted$lzycompute(QueryExecution.scala:155)
at org.apache.spark.sql.execution.QueryExecution.commandExecuted(QueryExecution.scala:146)
at org.apache.spark.sql.execution.QueryExecution.assertCommandExecuted(QueryExecution.scala:200)
at org.apache.spark.sql.DataFrameWriter.runCommand(DataFrameWriter.scala:959)
at org.apache.spark.sql.DataFrameWriter.saveToV1Source(DataFrameWriter.scala:427)
at org.apache.spark.sql.DataFrameWriter.saveInternal(DataFrameWriter.scala:396)
at org.apache.spark.sql.DataFrameWriter.save(DataFrameWriter.scala:258)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244)
at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:380)
at py4j.Gateway.invoke(Gateway.java:295)
at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132)
at py4j.commands.CallCommand.execute(CallCommand.java:79)
at py4j.GatewayConnection.run(GatewayConnection.java:251)
at java.lang.Thread.run(Thread.java:750)
Caused by: java.io.UncheckedIOException: Failed to create default Credentials
at com.google.cloud.spark.bigquery.repackaged.com.google.cloud.bigquery.connector.common.BigQueryCredentialsSupplier.createDefaultCredentials(BigQueryCredentialsSupplier.java:101)
at com.google.cloud.spark.bigquery.repackaged.com.google.cloud.bigquery.connector.common.BigQueryCredentialsSupplier.<init>(BigQueryCredentialsSupplier.java:50)
at com.google.cloud.spark.bigquery.repackaged.com.google.cloud.bigquery.connector.common.BigQueryClientModule.provideBigQueryCredentialsSupplier(BigQueryClientModule.java:53)
at com.google.cloud.spark.bigquery.repackaged.com.google.cloud.bigquery.connector.common.BigQueryClientModule$$FastClassByGuice$$b1b60333.invoke(<generated>)
at com.google.cloud.spark.bigquery.repackaged.com.google.inject.internal.ProviderMethod$FastClassProviderMethod.doProvision(ProviderMethod.java:264)
at com.google.cloud.spark.bigquery.repackaged.com.google.inject.internal.ProviderMethod.doProvision(ProviderMethod.java:173)
at com.google.cloud.spark.bigquery.repackaged.com.google.inject.internal.InternalProviderInstanceBindingImpl$CyclicFactory.provision(InternalProviderInstanceBindingImpl.java:185)
at com.google.cloud.spark.bigquery.repackaged.com.google.inject.internal.InternalProviderInstanceBindingImpl$CyclicFactory.get(InternalProviderInstanceBindingImpl.java:162)
at com.google.cloud.spark.bigquery.repackaged.com.google.inject.internal.ProviderToInternalFactoryAdapter.get(ProviderToInternalFactoryAdapter.java:40)
at com.google.cloud.spark.bigquery.repackaged.com.google.inject.internal.SingletonScope$1.get(SingletonScope.java:168)
at com.google.cloud.spark.bigquery.repackaged.com.google.inject.internal.InternalFactoryToProviderAdapter.get(InternalFactoryToProviderAdapter.java:39)
at com.google.cloud.spark.bigquery.repackaged.com.google.inject.internal.SingleParameterInjector.inject(SingleParameterInjector.java:42)
at com.google.cloud.spark.bigquery.repackaged.com.google.inject.internal.SingleParameterInjector.getAll(SingleParameterInjector.java:65)
at com.google.cloud.spark.bigquery.repackaged.com.google.inject.internal.ProviderMethod.doProvision(ProviderMethod.java:173)
at com.google.cloud.spark.bigquery.repackaged.com.google.inject.internal.InternalProviderInstanceBindingImpl$CyclicFactory.provision(InternalProviderInstanceBindingImpl.java:185)
at com.google.cloud.spark.bigquery.repackaged.com.google.inject.internal.InternalProviderInstanceBindingImpl$CyclicFactory.get(InternalProviderInstanceBindingImpl.java:162)
at com.google.cloud.spark.bigquery.repackaged.com.google.inject.internal.ProviderToInternalFactoryAdapter.get(ProviderToInternalFactoryAdapter.java:40)
at com.google.cloud.spark.bigquery.repackaged.com.google.inject.internal.SingletonScope$1.get(SingletonScope.java:168)
at com.google.cloud.spark.bigquery.repackaged.com.google.inject.internal.InternalFactoryToProviderAdapter.get(InternalFactoryToProviderAdapter.java:39)
at com.google.cloud.spark.bigquery.repackaged.com.google.inject.internal.InjectorImpl$1.get(InjectorImpl.java:1094)
... 45 more
Caused by: java.io.IOException: The Application Default Credentials are not available. They are available if running in Google Compute Engine. Otherwise, the environment variable GOOGLE_APPLICATION_CREDENTIALS must be defined pointing to a file defining the credentials. See https://developers.google.com/accounts/docs/application-default-credentials for more information.
at com.google.cloud.spark.bigquery.repackaged.com.google.auth.oauth2.DefaultCredentialsProvider.getDefaultCredentials(DefaultCredentialsProvider.java:134)
at com.google.cloud.spark.bigquery.repackaged.com.google.auth.oauth2.GoogleCredentials.getApplicationDefault(GoogleCredentials.java:124)
at com.google.cloud.spark.bigquery.repackaged.com.google.auth.oauth2.GoogleCredentials.getApplicationDefault(GoogleCredentials.java:96)
at com.google.cloud.spark.bigquery.repackaged.com.google.cloud.bigquery.connector.common.BigQueryCredentialsSupplier.createDefaultCredentials(BigQueryCredentialsSupplier.java:99)
... 64 more
</code></pre>
<p>I think this error message is the main reason my code cannot run:</p>
<pre class="lang-py prettyprint-override"><code>Caused by: java.io.IOException: The Application Default Credentials are not available. They are available if running in Google Compute Engine. Otherwise, the environment variable GOOGLE_APPLICATION_CREDENTIALS must be defined pointing to a file defining the credentials. See https://developers.google.com/accounts/docs/application-default-credentials for more information.
</code></pre>
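<p>One thing worth trying, sketched minimally: the stack trace shows the connector falling back to Application Default Credentials on the write path. The spark-bigquery connector also accepts the base64-encoded service-account JSON directly as a <code>credentials</code> option, so passing it explicitly on the writer (instead of relying only on <code>spark.conf</code>) may avoid the fallback. The encoding helper below is runnable; the commented writer chain is a sketch using the placeholders from the question:</p>

```python
import base64

# Hedged sketch: base64-encode the service-account JSON for the connector's
# "credentials" option. The JSON string here is a stand-in for the secret.
def encode_credentials(raw_json):
    return base64.b64encode(raw_json.encode("ascii")).decode("ascii")

cred = encode_credentials('{"type": "service_account"}')

# df.write.format("bigquery") \
#     .mode("overwrite") \
#     .option("credentials", cred) \
#     .option("temporaryGcsBucket", "<bucket-name>") \
#     .option("parentProject", "<parent-project-id>") \
#     .option("table", "<table-name>") \
#     .save()
```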
|
<python><pyspark><google-bigquery><databricks>
|
2024-11-28 23:28:13
| 0
| 1,974
|
huy
|
79,235,334
| 3,994,399
|
unstructured cannot find images
|
<p>I am trying to use the unstructured library to convert a Word document into a JSON file. However, for some reason it is not seeing the images; the list of returned elements should contain elements of type "Image". It is not throwing an error, it just does not return the image elements. Below are my code and my test file. The test file contains a string, an image, and another string, but the image is not detected. What am I doing wrong?</p>
<pre><code>import json
import os
from unstructured.partition.docx import partition_docx
# Set environment variables
os.environ['UNSTRUCTURED_API_KEY'] = "your unstructured.io api key"
os.environ['UNSTRUCTURED_API_URL'] = "https://api.unstructuredapp.io/general/v0/general"
elements = partition_docx(filename="input/test.docx")
with open("input/test.docx", "rb") as f:
elements = partition_docx(file=f)
elements = [element.to_dict() for element in elements]
# save as json
with open("output/test.json", "w") as f_json:
json.dump(elements, f_json, indent=2)
</code></pre>
<p>My project structure:</p>
<pre><code>├── root
│ └── input
│ └── output
</code></pre>
<p>Here's the file: <a href="https://filebin.net/r12d722qinv0y2n3" rel="nofollow noreferrer">test.docx</a></p>
|
<python><unstructured-data>
|
2024-11-28 21:28:41
| 1
| 692
|
ThaNoob
|
79,235,331
| 9,680,491
|
Can child processes create shared memory and share it with parent processes?
|
<p>Can a child process create a SharedMemory object and share it with a parent process? Currently I am getting errors when I try. I need this because creating and copying memory is the major performance bottleneck in my application, but I need to write the shared memory in the parent process.</p>
<p>Here are three minimal examples illustrating the issue. I use time.sleep(1) to control whether the parent or child process completes first.</p>
<p>Case 1 (no error): Create shared memory in parent process, share with child</p>
<pre><code>import time
from multiprocessing import shared_memory, Process
def worker():
shm = shared_memory.SharedMemory(name="test")
shm = shared_memory.SharedMemory(name="test", create=True, size=1)
Process(target=worker).start()
time.sleep(1)
shm.unlink()
</code></pre>
<p>Case 2 (no error): Creating shared memory in child process, not sharing with parent</p>
<pre><code>import time
from multiprocessing import shared_memory, Process

def worker():
    shm = shared_memory.SharedMemory(name="test", create=True, size=1)
    time.sleep(1)
    shm.unlink()

Process(target=worker).start()
</code></pre>
<p>Case 3 (ERROR): Create shared memory in child process, share with parent</p>
<pre><code>import time
from multiprocessing import shared_memory, Process

def worker():
    shm = shared_memory.SharedMemory(name="test", create=True, size=1)
    time.sleep(1)
    shm.unlink()

Process(target=worker).start()
shm = shared_memory.SharedMemory(name="test")
</code></pre>
<p>Traceback:</p>
<pre><code>Traceback (most recent call last):
  File "/home/benjamin/Documents/duckit/covin.py", line 24, in <module>
    shm = shared_memory.SharedMemory(name="test")
          ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/benjamin/miniforge3/envs/hichdev/lib/python3.12/multiprocessing/shared_memory.py", line 104, in __init__
    self._fd = _posixshmem.shm_open(
               ^^^^^^^^^^^^^^^^^^^^^
FileNotFoundError: [Errno 2] No such file or directory: '/test'
</code></pre>
<p><strong>Motivation:</strong></p>
<p>I'm trying to use numpy loadtxt in a number of child processes over a bunch of files (a CPU-bottlenecked, not I/O-bottlenecked task, hence being done with multiprocessing). Then I want to write the arrays to an HDF5 file, which is most efficient when done sequentially in the main thread. If the arrays must be copied for the parent process to receive them (as when passing them via a multiprocessing queue or returning them), then this becomes the performance bottleneck for the whole program. Hence, I want to copy the arrays into shared memory in the child processes and share them with the parent process in order to write them.</p>
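For what it's worth, case 3 fails because the parent tries to attach before the child has actually created the segment. A minimal sketch of one way to remove that race with a <code>multiprocessing.Event</code> handshake (the segment name <code>demo_shm</code>, the event names and the 1-byte payload are illustrative only; on spawn-based platforms the usual <code>__main__</code> guard would be needed):

```python
from multiprocessing import shared_memory, Process, Event

def worker(created, done):
    shm = shared_memory.SharedMemory(name="demo_shm", create=True, size=1)
    shm.buf[0] = 42
    created.set()    # the segment exists now; the parent may attach
    done.wait()      # keep our handle open until the parent is finished
    shm.close()

created, done = Event(), Event()
p = Process(target=worker, args=(created, done))
p.start()

created.wait()       # never attach before the child has created the segment
view = shared_memory.SharedMemory(name="demo_shm")
value = view.buf[0]
view.close()
view.unlink()        # the parent takes over cleanup
done.set()
p.join()
print(value)
```

The same idea generalises to the loadtxt workers: each child creates its segment, signals the parent, and keeps its handle open until the parent has finished writing the array to HDF5.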
|
<python><multiprocessing><shared-memory>
|
2024-11-28 21:25:23
| 1
| 403
|
Autodidactyle
|
79,235,310
| 1,791,279
|
Efficient SQL query with pandas using databricks-sql-python
|
<p>Databricks allows to make SQL queries via an API using the <a href="https://github.com/databricks/databricks-sql-python" rel="nofollow noreferrer">databricks-sql-python</a> package.</p>
<p>There are then two ways of creating a connection object that can be put into a <code>pd.read_sql_query(sql, con=connection)</code>. I'm wondering which one is better in terms of performance and reliability when doing SQL queries from pandas:</p>
<ol>
<li><p>Creating Python DB API 2.0 with</p>
<pre class="lang-py prettyprint-override"><code>from databricks import sql
connection = sql.connect(server_hostname=host, http_path=http_path)
</code></pre>
<p>this works but produces the following warning,</p>
<pre><code>UserWarning: pandas only supports SQLAlchemy connectable (engine/connection) or
database string URI or sqlite3 DBAPI2
connection. Other DBAPI2 objects are not tested. Please consider using SQLAlchemy.
</code></pre>
<p>In the implementation code it looks like they are using pyarrow, which sounds to me like an efficient way of creating pandas DataFrames. The warning is a bit dissuasive though.</p>
</li>
<li><p>The other alternative is to use the SQLAlchemy which has a <code>databricks</code> connector exposed by the same package,</p>
<pre class="lang-py prettyprint-override"><code>from sqlalchemy import create_engine
engine = create_engine(f"databricks://<rest of the URL")
</code></pre>
<p>This works and gets rid of the warning, but doesn't it imply an extra serialization step through the SQLAlchemy types?</p>
</li>
</ol>
<p>Also asked in <a href="https://github.com/databricks/databricks-sql-python/issues/476" rel="nofollow noreferrer">databricks-sql-python#476</a></p>
<p>So in general which one should be used with pandas?</p>
|
<python><pandas><databricks><databricks-sql><python-db-api>
|
2024-11-28 21:07:52
| 2
| 11,269
|
rth
|
79,235,198
| 11,850,322
|
Pandas Merge - Elegant way to deal with filling and dropping columns
|
<p>Assume we have two data frames with columns as follows:</p>
<pre><code>df1[['name', 'year', 'col1', 'col2', 'col3']]
df2[['name', 'year', 'col2', 'col3', 'col4']]
</code></pre>
<p>I want to merge df1 and df2 on <code>name</code> and <code>year</code>, keeping all values of <code>col2</code> and <code>col3</code> from <code>df1</code>; where a value is <code>None</code>, the value from <code>df2</code> should be used instead.</p>
<p>I know how to do this the traditional way by merging <code>df1</code> and <code>df2</code> and then using <code>ffill()</code>.</p>
<p>Since my data-cleaning process involves many steps of merging different DataFrames with the same columns, the code gets messy when I repeatedly have to use <code>ffill()</code> and drop columns. Does <strong><code>pd.merge</code></strong> have any built-in option like that?</p>
<p>Sample code:</p>
<pre><code>df1 = pd.DataFrame({'name': ['a', 'a', 'b', 'b', 'c', 'c'],
                    'year': [2000, 2001, 2002, 2003, 2004, 2005],
                    'col1': [1, 2, 3, 4, 5, 6],
                    'col2': [0, 2, 4, 6, 8, None],
                    'col3': [1, 3, 5, 7, None, 9]})

df2 = pd.DataFrame({'name': ['b', 'b', 'c', 'c', 'd', 'd'],
                    'year': [2003, 2004, 2004, 2005, 2006, 2007],
                    'col2': [10, 20, 30, None, 50, 60],
                    'col3': [100, 300, 500, 700, None, 900],
                    'col4': [5, 6, 7, 8, 9, 10]})
</code></pre>
<p>Input:</p>
<p><strong>df1</strong></p>
<pre><code>  name  year  col1  col2  col3
0    a  2000     1  0.00  1.00
1    a  2001     2  2.00  3.00
2    b  2002     3  4.00  5.00
3    b  2003     4  6.00  7.00
4    c  2004     5  8.00   NaN
5    c  2005     6   NaN  9.00
</code></pre>
<p><strong>df2</strong></p>
<pre><code>  name  year   col2    col3  col4
0    b  2003  10.00  100.00     5
1    b  2004  20.00  300.00     6
2    c  2004  30.00  500.00     7
3    c  2005    NaN  700.00     8
4    d  2006  50.00     NaN     9
5    d  2007  60.00  900.00    10
</code></pre>
<p>Output desired</p>
<pre><code>  name  year  col1   col2    col3   col4
0    a  2000  1.00   0.00    1.00    NaN
1    a  2001  2.00   2.00    3.00    NaN
2    b  2002  3.00   4.00    5.00    NaN
3    b  2003  4.00   6.00    7.00   5.00
4    b  2004   NaN  20.00  300.00   6.00
5    c  2004  5.00   8.00  500.00   7.00
6    c  2005  6.00    NaN    9.00   8.00
7    d  2006   NaN  50.00     NaN   9.00
8    d  2007   NaN  60.00  900.00  10.00
</code></pre>
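For comparison, one pattern that reproduces this output without post-merge <code>ffill()</code>/drop steps is to index both frames by the keys and use <code>combine_first</code>, which prefers df1's values and falls back to df2 where df1 is missing (a sketch of one possible approach, not necessarily the only one):

```python
import pandas as pd

df1 = pd.DataFrame({'name': ['a', 'a', 'b', 'b', 'c', 'c'],
                    'year': [2000, 2001, 2002, 2003, 2004, 2005],
                    'col1': [1, 2, 3, 4, 5, 6],
                    'col2': [0, 2, 4, 6, 8, None],
                    'col3': [1, 3, 5, 7, None, 9]})
df2 = pd.DataFrame({'name': ['b', 'b', 'c', 'c', 'd', 'd'],
                    'year': [2003, 2004, 2004, 2005, 2006, 2007],
                    'col2': [10, 20, 30, None, 50, 60],
                    'col3': [100, 300, 500, 700, None, 900],
                    'col4': [5, 6, 7, 8, 9, 10]})

# Align on the keys; values from df1 win, holes are filled from df2.
out = (df1.set_index(['name', 'year'])
          .combine_first(df2.set_index(['name', 'year']))
          .reset_index())
print(out)
```

Because this is a single chained expression, it composes cleanly across repeated merge steps.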
|
<python><pandas>
|
2024-11-28 20:02:04
| 2
| 1,093
|
PTQuoc
|
79,235,140
| 2,357,712
|
Converting a nested json three levels deep to dataframe
|
<p>I have a json that is three levels deep.
I want to flatten it into a dataframe that has five columns.</p>
<div class="s-table-container"><table class="s-table">
<thead>
<tr>
<th>id</th>
<th>name</th>
<th>code</th>
<th>level</th>
<th>parent_id</th>
</tr>
</thead>
</table></div>
<p>The part I struggle with is that I can extract each nested item, but I can't find an elegant way to keep the "parent_id". There must be a more elegant way of doing this. Any pointers appreciated.</p>
<pre><code>source = response.json()
print ('there are ' + str(len(source)) + ' records')
df_L1 = df
df_L2 = json_normalize(df_L1['subLevelCategories'][0])
df_L3 = json_normalize(df_L2['subLevelCategories'][0]) #store for later !!!!!
df_L2_wrapper = df_L2['id']
df_L3_wrapper = df_L3['id']
df_L2_wrapper.name = 'parent_id'
df_L3_wrapper.name = 'parent_id'
df_L1 = df_L1.head(5)
df_L2 = df_L2.head(5)
df_L3 = df_L3.head(5)
df_L3_wrapper = df_L3_wrapper.head(5)
df_L2_wrapper = df_L2_wrapper.head(5)
# Build of df_L1
df_L1 = df_L1.drop(['subLevelCategories'], axis=1)
df_L1['parentid']=0
# Build of df_L2
df_L2 = df_L2.drop(['name','code','level'], axis=1)
# Rename the Series
df_L2 = json_normalize(df_L2['subLevelCategories'][0])
# Concatenate the DataFrame and the renamed Series
df_L2 = pd.concat([df_L2, df_L2_wrapper], axis=1)
df_L2 = df_L2.drop(['subLevelCategories'], axis=1)
# ////// L2 is built.
# Build of df_L3
df_L3 = df_L3.drop(['subLevelCategories'], axis=1)
df_L3 = pd.concat([df_L3, df_L3_wrapper], axis=1)
df_combined = pd.concat([df_L1, df_L2, df_L3], ignore_index=True)
</code></pre>
<p>EDIT: The sample has been corrected by enclosing it with the '[' and ']'</p>
<p>source originates from</p>
<pre><code>response = requests.get(url, headers=headers)
source = response.json()
</code></pre>
<p>The sample JSON is as follows:</p>
<pre><code>[
    {
        "id": 3372,
        "name": "Archive",
        "code": null,
        "level": 1,
        "subLevelCategories": [
            {
                "id": 16708,
                "name": ".....",
                "code": null,
                "level": 2,
                "subLevelCategories": [
                    {
                        "id": 16727,
                        "name": ".........",
                        "code": null,
                        "level": 3,
                        "subLevelCategories": null
                    },
                    {
                        "id": 16726,
                        "name": "........",
                        "code": null,
                        "level": 3,
                        "subLevelCategories": null
                    }
                ]
            },
            {
                "id": 16701,
                "name": ".......",
                "code": null,
                "level": 2,
                "subLevelCategories": [
                    {
                        "id": 16782,
                        "name": "......",
                        "code": null,
                        "level": 3,
                        "subLevelCategories": null
                    },
                    {
                        "id": 16785,
                        "name": "......",
                        "code": null,
                        "level": 3,
                        "subLevelCategories": null
                    }
                ]
            }
        ]
    }
]
</code></pre>
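A recursive walk that threads the parent's id down to each child avoids the per-level wrapper juggling entirely. A sketch, using the column names from the table above and a shortened version of the sample JSON (the real input would be <code>response.json()</code>):

```python
import pandas as pd

def flatten(categories, parent_id=0):
    """Turn the nested subLevelCategories tree into flat rows."""
    rows = []
    for cat in categories or []:
        rows.append({"id": cat["id"], "name": cat["name"], "code": cat["code"],
                     "level": cat["level"], "parent_id": parent_id})
        # each child records this node's id as its parent_id
        rows.extend(flatten(cat.get("subLevelCategories"), cat["id"]))
    return rows

source = [{"id": 3372, "name": "Archive", "code": None, "level": 1,
           "subLevelCategories": [
               {"id": 16708, "name": "a", "code": None, "level": 2,
                "subLevelCategories": [
                    {"id": 16727, "name": "b", "code": None, "level": 3,
                     "subLevelCategories": None}]}]}]

df_combined = pd.DataFrame(flatten(source))
print(df_combined)
```

This handles any depth, not just three levels, and never needs the `head(5)` truncation tricks.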
|
<python><json><pandas><dataframe>
|
2024-11-28 19:29:44
| 2
| 1,617
|
Maxcot
|
79,235,043
| 973,956
|
Python evaluation of expression
|
<p><strong>Python yields different answers</strong></p>
<pre class="lang-none prettyprint-override"><code>>>> 0 < 0 == 0
False
>>> (0 < 0) == 0
True
>>> 0 < (0 == 0)
True
</code></pre>
<p>is this a bug?</p>
<pre class="lang-none prettyprint-override"><code>Python 3.11.10 (main, Sep 7 2024, 18:35:41) [GCC 11.4.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
</code></pre>
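For reference, Python chains comparisons: any <code>a op1 b op2 c</code> evaluates as <code>a op1 b and b op2 c</code> (with <code>b</code> evaluated once), which reproduces all three results above, so this is defined behaviour rather than a bug. A quick check:

```python
# `0 < 0 == 0` is a chained comparison: `(0 < 0) and (0 == 0)`.
chained = 0 < 0 == 0
expanded = (0 < 0) and (0 == 0)
print(chained, expanded)
```

The two parenthesised forms in the question disable chaining, which is why they give different answers.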
|
<python><python-3.x><operator-precedence>
|
2024-11-28 18:46:43
| 1
| 1,524
|
MrJ
|
79,235,040
| 7,663,296
|
Rate limiting concurrent web queries to multiple domains, stdlib only
|
<h1>Question</h1>
<p>How to manage rate limiting concurrent web queries to multiple domains using only python stdlib? Not asking about algorithms like leaky bucket or single-domain solutions, but how to approach data and code structures for concurrent queries to multiple domains.</p>
<h2>Related</h2>
<p>The following posts have useful info but don't solve my problem. They don't address concurrent requests and per-domain rate limits, and many answers use 3rd party modules.</p>
<ul>
<li><a href="https://stackoverflow.com/questions/401215/how-to-limit-rate-of-requests-to-web-services-in-python">How to limit rate of requests to web services in Python?</a></li>
<li><a href="https://stackoverflow.com/questions/667508/whats-a-good-rate-limiting-algorithm">What's a good rate limiting algorithm?</a></li>
<li><a href="https://stackoverflow.com/questions/14067049/which-data-structure-to-use-for-dynamic-priority-queueing">Which data-structure to use for "dynamic" priority queueing?</a></li>
<li><a href="https://stackoverflow.com/questions/2288241/priority-queue-with-dynamic-item-priorities">Priority queue with dynamic item priorities</a></li>
<li><a href="https://stackoverflow.com/questions/7586743/limiting-concurrency-and-rate-for-python-threads">Limiting concurrency and rate for Python threads</a></li>
<li><a href="https://stackoverflow.com/questions/13906844/distributed-rate-limiting">Distributed rate limiting</a></li>
<li><a href="https://stackoverflow.com/questions/51475447/distributed-crawling-and-rate-limiting-flow-control">Distributed crawling and rate limiting / flow control</a></li>
</ul>
<p>This question comes closest, with quite a similar setup (though I don't revisit the same links). But that approach uses an external database and PHP / Java, not Python. Too heavy; I need a Python solution.</p>
<ul>
<li><a href="https://stackoverflow.com/questions/51475447/distributed-crawling-and-rate-limiting-flow-control">Distributed crawling and rate limiting / flow control</a></li>
</ul>
<h1>Setup</h1>
<p>Downloading data from multiple domains. Typical pattern is:</p>
<ul>
<li>add several download items from a first domain</li>
<li>add several download items from a second domain</li>
<li>while first domain items are downloading, hit a rate limit and back off for a while</li>
<li>continue downloading items from second domain while first domain is waiting to resume <-- HOW TO IMPLEMENT THIS PART?</li>
</ul>
<p>That's simplified a bit. In reality more download items can be added at any time throughout the process, which is ongoing.</p>
<p>Can't / don't want to install 3rd party libs like from pypi, to minimize dependencies and maintain security. Code should be self-contained using only python stdlib.</p>
<h1>Current Approach</h1>
<p>I use a producer / consumer implementation with a single queue and several worker threads for concurrent downloads. It uses leaky bucket for rate limiting with timestamps (no discard, rate-limited items simply wait). Works perfectly for single-domain downloads. Conceptually like this:</p>
<pre class="lang-py prettyprint-override"><code>def dl_worker (que) :
    while True :
        item = que.get ()
        if paused (item.domain) :  # waiting due to rate limit
            time.sleep (backofftime (item.domain))
        # dl item...

# start pool of dl threads
dlque = queue.Queue ()
workers = multiprocessing.pool.ThreadPool (5)
with workers :
    for x in workers._pool :
        workers.apply_async (dl_worker, dlque)
</code></pre>
<p>Problem: this uses a single queue for all download items. When dl threads pause to rate limit, any items in queue for second domain are stuck waiting during pause for first domain items.</p>
<h1>Rejected solutions</h1>
<ul>
<li>A naive fix would be for worker threads to cycle through entire queue when they hit a backoff for domain1. Something like:</li>
</ul>
<pre class="lang-py prettyprint-override"><code>def dl_worker () :
    while True :
        item = dlque.get ()
        if paused (item.domain) :  # waiting due to rate limit
            dlque.put (item)  # move item to back of queue
            continue
        # process item...
</code></pre>
<p>This is wasteful. If queue only contains items from domain1, then workers will continually shuffle items from front to back during backoff period. Want to avoid busy-wait solutions.</p>
<ul>
<li><p>Another option is to spawn separate pool of workers for each domain. That would isolate pauses to just workers on that domain. But domains could be quite large, resulting in huge number of threads and resource exhaustion. And queue becomes more complicated - need a dispatcher to consume items from incoming queue and allocate to separate queues per threadpool/domain.</p>
</li>
<li><p>Could also use 3rd party libs like <a href="https://pypi.org/project/pyrate-limiter/" rel="nofollow noreferrer">pyrate-limiter</a> or <a href="https://pypi.org/project/ratelimit/" rel="nofollow noreferrer">ratelimit</a> as <a href="https://stackoverflow.com/questions/401215/how-to-limit-rate-of-requests-to-web-services-in-python">seen here</a>. But that violates my stdlib only requirement, and only addresses the rate limiting issue, not the multiple domain issue.</p>
</li>
</ul>
<h1>Other Solutions</h1>
<p>The main problem with current solution is having a single queue. I came up with a few other approaches, but they all have drawbacks:</p>
<ol>
<li>Use separate queue for each domain. There doesn't seem to be a way to <code>wait ()</code> on multiple queues in python (like <code>select</code> on file handles). I have to manually poll all queues to see which are ready. Something like (imagine suitable thread locks where needed):</li>
</ol>
<pre class="lang-py prettyprint-override"><code>allques = {}

def put (item):
    que = allques.setdefault (item.domain, queue.Queue ())
    que.put (item)

def dl_worker (que):
    while True :
        # find which queues have items ready to process
        active = [ x for x in allques if x not in paused ]
        ready = [ a for a in active if allques [a].qsize () ]
        if ready :
            for domain in ready :
                try :
                    item = allques [domain].get (block = False)
                    # process item ...
                    break
                except Empty :
                    # qsize was wrong, no biggie, move on to next queue
                    pass
        else :
            # wait and poll again
            time.sleep (0.5)
</code></pre>
<p>I dislike the <code>sleep</code> polling. I could use a <code>threading.Semaphore</code> instead to track the number of current items across all queues (call <code>semaphore.release</code> on every <code>que.put</code> and <code>semaphore.acquire</code> before attempting <code>que.get</code>). But that doesn't help if all queues are currently in rate-limit backoff: dl_worker will busy-wait performing the while loop, with <code>ready</code> being an empty list on each pass.</p>
<p>Using individual semaphores for each queue just creates the same polling problem.</p>
<p>This approach also doesn't preserve dl item order. Just grabs first item from first ready queue it finds. I could use <code>random.shuffle</code> on ready to at least randomize which queue is picked from. Preserving item order seems difficult. Would need a separate data structure to track insertion order across all queues. Seems like more trouble than it's worth.</p>
<ol start="2">
<li>I could use two queues: active and paused. Items are popped from active queue. If domain is in backoff period, then stick item on paused queue until backoff expires. I think this has the same problem though. Namely, I need a dispatcher thread to watch paused queue and shuffle items back to active queue once their backoff period expires.</li>
</ol>
<p>What happens when second item on paused queue expires sooner than first item? It will be stuck waiting on the paused queue until first item is removed and put back on active queue.</p>
<p>Or I need a non-queue data structure for paused, so I can pull off any item that's ready. But then I need sleep polling again (no blocking <code>get</code> call available).</p>
<ol start="3">
<li>I could use a different data structure than a queue to filter out items currently in a backoff period. Not sure what structure this would be though. <code>PriorityQueue</code> doesn't seem to help.</li>
</ol>
<p>One, its priorities are static. My priorities are dynamic: hitting a rate limit triggers a pause, and the pause may lengthen while items are waiting (e.g. two workers grab items from the same domain at the same time; the first finishes and triggers a pause, the second finishes later and lengthens the pause).</p>
<p>Two, PriorityQueue always returns an item if available. I can't set a backoff period to say "hold this item on the queue until time X, then return it".</p>
<ol start="4">
<li>I could use a heap instead of a queue to retrieve any object. But it's hard to keep heap in sorted order. As downloads happen and rate limits are triggered, item priority changes dynamically. Resorting the heap every time a backoff happens seems inefficient. And python heaps don't block on <code>get()</code>, they either return or throw. Still need time.sleep polling.</li>
</ol>
<h1>Conclusion</h1>
<p>So far option 1 seems like the best solution, despite its drawbacks. Does anyone have better ideas?</p>
<p>I'm not inherently tied to a threading model. Async might work as well. I hate littering my code with async / await junk and the <a href="https://journal.stuffwithstuff.com/2015/02/01/what-color-is-your-function/" rel="nofollow noreferrer">red / blue function problem</a>. But if there's a cleaner solution available, it's worth considering.</p>
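One stdlib-only structure that avoids both the sleep-polling of option 1 and a per-domain worker pool: a single heap keyed by each item's ready time, guarded by one <code>threading.Condition</code>, so workers block exactly until the earliest item becomes eligible and wake early when a new item arrives. A sketch (class and method names invented):

```python
import heapq
import itertools
import threading
import time

class DomainQueue:
    """All items in one heap, ordered by the time they become eligible.
    Workers block on a Condition until the earliest item is ready, so
    there is no sleep-polling and no per-domain thread pool."""

    def __init__(self):
        self._heap = []                    # (ready_time, seq, item)
        self._seq = itertools.count()      # tie-breaker: preserves insertion order
        self._cv = threading.Condition()

    def put(self, item, delay=0.0):
        """delay > 0 schedules an item past its domain's backoff window."""
        with self._cv:
            heapq.heappush(self._heap,
                           (time.monotonic() + delay, next(self._seq), item))
            self._cv.notify()

    def get(self):
        with self._cv:
            while True:
                if self._heap:
                    ready, _, item = self._heap[0]
                    wait = ready - time.monotonic()
                    if wait <= 0:
                        heapq.heappop(self._heap)
                        return item
                    self._cv.wait(timeout=wait)   # a new put() wakes us early
                else:
                    self._cv.wait()
```

On a rate-limit response, the worker would re-<code>put()</code> the item it holds with the domain's backoff as <code>delay</code>; items from other domains keep flowing, and backed-off items sit in the heap without occupying a worker.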
|
<python><multithreading><concurrency><cross-domain><rate-limiting>
|
2024-11-28 18:45:17
| 2
| 383
|
Ed_
|
79,234,891
| 1,711,271
|
How to safely check whether a Fortran file exists when reading it with Python?
|
<p>I need to read in Python a file written in Fortran. To do this, I'm using <code>numpy</code>'s <code>f2py</code>. Basically, I write a <code>parse.f90</code> file:</p>
<pre class="lang-none prettyprint-override"><code>subroutine read_params(filename, params)
    implicit none

    ! Argument Declarations !
    character(len=*), intent(in) :: filename
    integer, dimension(4), intent(out) :: params

    ! Variable Declarations
    integer :: i

    open (unit=1,status="unknown",file=filename,form="unformatted")
    rewind 1
    read(1) (params(i), i=1, 4)
end subroutine read_params
</code></pre>
<p>Then I compile it with</p>
<pre><code>python -m numpy.f2py -c -m parse parse.f90
</code></pre>
<p>And now I can import it in my Python script:</p>
<pre><code>from pathlib import Path

import numpy as np

from .parse import read_params

def do_stuff_with_params(path: Path):
    params = read_params(path)
    # do something with the parameters
    return
</code></pre>
<p>Now, my issue is that sometimes the Fortran file may not exist, for example because the user of <code>do_stuff_with_params</code> passed the wrong <code>path</code>. However, since the file is opened by Fortran rather than through a Python file object, the usual <code>with open</code> trick doesn't apply. So far, I have used the following workaround:</p>
<pre><code>def do_stuff_with_params(path: Path):
    if not path.exists():
        raise FileNotFoundError(f'{path} does not exist')
    params = read_params(path)
    # do something with the parameters
    return
</code></pre>
<p>But this isn't ideal for the usual reasons (the file may be moved or deleted between the <code>if</code> check and when it's actually opened). Also, I actually need to read the file twice because of Fortran limitations (long story...), so I would have to add two <code>if</code> statements. How can I solve this? I was wondering if I could raise an exception in <code>parse.f90</code> and pass it to the Python interpreter, but I don't know how to do it.</p>
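One way to fail cleanly at a well-defined point is to let Python open the file first and keep the handle for the duration of the call. The Fortran side still opens the file by name, so this narrows the race rather than eliminating it, but the caller gets a normal <code>FileNotFoundError</code> instead of a Fortran runtime abort, and one check covers both reads. A sketch with a stand-in <code>reader</code> (in the real code this would be the f2py-wrapped <code>read_params</code>):

```python
from pathlib import Path

def read_params_checked(path: Path, reader):
    # `reader` stands in for the f2py-wrapped read_params.
    # Opening in Python first raises FileNotFoundError here, at a
    # well-defined point, instead of letting Fortran abort the process.
    with path.open("rb"):
        return reader(str(path))

# usage with a dummy reader:
import tempfile, os
fd, name = tempfile.mkstemp()
os.close(fd)
result = read_params_checked(Path(name), lambda p: "parsed")
os.unlink(name)
```

Raising a Python exception from inside the Fortran subroutine itself is not something plain f2py supports; the error handling has to live on the Python side.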
|
<python><io><fortran><f2py>
|
2024-11-28 17:42:57
| 1
| 5,726
|
DeltaIV
|
79,234,888
| 919,499
|
python/asyncio: how to get the exception handler "context"?
|
<p>I set up a custom exception handler for my asyncio loop.</p>
<p>I know I can call <code>loop.call_exception_handler(context)</code> to use my custom exception handler to trace/log exception I catch in my code using an <code>except:</code> statement, but is there a way to get a <code>context</code> dict as complete as possible, like the one <code>asyncio</code> generates when it calls <code>loop.call_exception_handler</code> (i.e. autodecting all the items usually found in the context)?</p>
<p>If not, is the exception instance caught using <code>except Exception as e:</code> good enough to create a minimal context like
<code>loop.call_exception_handler({"exception": e})</code>?</p>
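As far as I know there is no public helper that rebuilds the full auto-detected context; asyncio only adds the keys (message, exception, future/task, handle, ...) that the raising call site happens to know about. A hand-built dict with the documented "message" and "exception" keys is passed through to a custom handler unchanged, which can be verified in isolation:

```python
import asyncio

seen = {}

def handler(loop, context):
    seen.update(context)   # a custom handler sees exactly the dict passed in

async def main():
    loop = asyncio.get_running_loop()
    loop.set_exception_handler(handler)
    try:
        1 / 0
    except ZeroDivisionError as e:
        # minimal hand-built context using the documented keys
        loop.call_exception_handler({
            "message": "caught in application code",
            "exception": e,
        })

asyncio.run(main())
print(sorted(seen))
```

So the minimal context in the question works; adding a "message" key is worthwhile because the default handler uses it as the log headline.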
|
<python><loops><python-asyncio>
|
2024-11-28 17:42:26
| 0
| 500
|
MadeR
|
79,234,789
| 2,192,488
|
How to use aiohttp with apscheduler?
|
<p>I would like to fetch several web pages periodically all within the same <code>aiohttp.ClientSession()</code>. Here is what I have got so far. The URLs need to remain within the jobs, because some other URLs will need to be calculated.</p>
<p>What command is missing in the place of <code>???</code>. Or do I need to do this in a completely different way? Thanks in advance for your help.</p>
<p>P.S.: The seconds interval is for testing purposes only. Later, I will change it to a one minute interval.</p>
<pre><code>from apscheduler.schedulers.asyncio import AsyncIOScheduler
import asyncio
import aiohttp

async def fetch(session, url, timeout=3):
    async with session.get(url, ssl=False, timeout=timeout) as response:
        return await response.text(), response.status

async def GOESX_job(session):
    url = 'https://services.swpc.noaa.gov/json/goes/primary/xrays-6-hour.json'
    response, status = await fetch(session, url)
    print('GOESX', status)

async def GOESp_job(session):
    url = 'https://services.swpc.noaa.gov/json/goes/primary/integral-protons-6-hour.json'
    response, status = await fetch(session, url)
    print('GOESp', status)

async def jobs(scheduler):
    async with aiohttp.ClientSession() as session:
        scheduler.add_job(GOESX_job, 'interval', seconds=5, args=[session])
        scheduler.add_job(GOESp_job, 'interval', seconds=10, args=[session])

scheduler = AsyncIOScheduler()
??? jobs(scheduler)
scheduler.start()
asyncio.get_event_loop().run_forever()
</code></pre>
|
<python><python-asyncio><aiohttp><apscheduler>
|
2024-11-28 17:06:11
| 2
| 32,046
|
Serge Stroobandt
|
79,234,684
| 1,802,726
|
Python unittest.mock patch fail with F() expressions can only be used to update, not to insert
|
<p>A minimal working example is available at <a href="https://github.com/rgaiacs/django-mwe-magicmock" rel="nofollow noreferrer">https://github.com/rgaiacs/django-mwe-magicmock</a>.</p>
<p>When using Django, I use <code>Model.clean()</code> to validate the form submitted by the user. During the validation, some fields might be updated based on the response of a HTTP request. I want to test the <code>Model.clean()</code> using Python's <code>unittest</code> and mocking the HTTP request.</p>
<p>My <code>app/models.py</code> is</p>
<pre><code>import logging

from django.core.exceptions import ValidationError
from django.db import models

from .aid import GitHosting

logger = logging.getLogger(__name__)

class Resource(models.Model):
    code_repository = models.URLField(
        help_text="Link to the repository where the un-compiled, human readable code and related code is located."
    )
    version = models.CharField(
        blank=True,
        # Git hash contains 40 characters
        max_length=50,
        default="HEAD",
        help_text="The version of the resource in the format of a Git commit ID or Git tag.",
    )

    def clean(self):
        git_host = GitHosting()
        self.version = git_host.get_version()
</code></pre>
<p>and my <code>app/tests.py</code> is</p>
<pre><code>import logging

from unittest.mock import patch

from django.test import TestCase
from django.urls import reverse

from .models import Resource

logger = logging.getLogger(__name__)

@patch("app.models.GitHosting.get_version")
class ResourceViewTestCase(TestCase):
    def test_add_resource(self, mock_get_version):
        mock_get_version = "5678"

        logger.error("Submitting form ...")
        response = self.client.post(reverse("app:index"), {
            "code_repository": "http://mygit.com/foo/bar"
        })

        resource = Resource.objects.get(id=1)
        self.assertEqual(resource.version, "5678")
</code></pre>
<p>When I run <code>python manage.py test</code>, the test fails with</p>
<pre><code>ValueError: Failed to insert expression "<MagicMock name='get_version().resolve_expression()' id='139668720464128'>" on app.Resource.version. F() expressions can only be used to update, not to insert.
</code></pre>
<p>How can I fix my test? Thanks!</p>
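For reference, independent of Django: assigning a new value to the name bound by <code>@patch</code> only rebinds the local variable; it does not configure the <code>MagicMock</code> that <code>clean()</code> will actually call, which is why the mock object itself ends up in the field. Configuring <code>return_value</code> is what changes the call result, which can be seen with the stdlib alone (a sketch of the mechanism, not the full Django test):

```python
from unittest.mock import MagicMock

mock_get_version = MagicMock()
mock_get_version = "5678"                # rebinding: the original mock is untouched

mock_get_version = MagicMock()
mock_get_version.return_value = "5678"   # configuring: calls now return "5678"
print(mock_get_version())
```

In the test above this would correspond to writing <code>mock_get_version.return_value = "5678"</code> instead of the plain assignment.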
|
<python><django><python-unittest.mock>
|
2024-11-28 16:29:08
| 1
| 2,735
|
Raniere Silva
|
79,234,681
| 10,425,150
|
Update created engine in sqlalchemy
|
<p>I've created database using the following code:</p>
<pre><code>from sqlalchemy import create_engine, text

db_user = 'postgres'
db_password = 'chnageme'
db_host = '12.123.123.123'
db_port = '5432'
db_name = 'new_db'

# Create a connection string to connect to the PostgreSQL server
connection_string = f'postgresql://{db_user}:{db_password}@{db_host}:{db_port}/'

# Create an SQLAlchemy engine
engine = create_engine(connection_string)

# Check if the database exists and create it if it doesn't
with engine.connect() as connection:
    result = connection.execute(text("SELECT 1 FROM pg_database WHERE datname = :db_name"), {"db_name": db_name})
    if not result.fetchone():
        connection.execute(f"CREATE DATABASE \"{db_name}\";")
        print(f"Database '{db_name}' created successfully.")
    else:
        print(f"Database '{db_name}' already exists.")

engine = create_engine(f'{connection_string}{db_name}')
</code></pre>
<p>Now I'm wondering if there is an option/function to update the original <code>engine</code>:</p>
<pre><code> engine = create_engine(connection_string)
</code></pre>
<p>instead of creating a new one later on?</p>
<pre><code>engine = create_engine(rf'{connection_string}{db_name}')
</code></pre>
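As far as I know an Engine's URL is immutable once created, so there is no update-in-place; but the new URL can at least be derived from the old one instead of re-assembled from the f-string, via <code>URL.set</code> (SQLAlchemy 1.4+). A sketch using <code>make_url</code> so no database driver is needed:

```python
from sqlalchemy.engine import make_url

url = make_url('postgresql://postgres:changeme@12.123.123.123:5432/')
url_with_db = url.set(database='new_db')   # URL objects are immutable; .set returns a copy
print(url_with_db.database)
```

The second engine would then be <code>create_engine(url_with_db)</code>, or equivalently <code>create_engine(engine.url.set(database=db_name))</code> from the first engine.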
|
<python><postgresql><sqlalchemy><psycopg2>
|
2024-11-28 16:28:09
| 0
| 1,051
|
Gооd_Mаn
|
79,234,492
| 20,343,817
|
flattening pandas columns in a non-trivial way
|
<p>I have a pandas dataframe which looks like the following:</p>
<pre><code>          site   pay        delta  over under
phase        a     a     b
ID
D01     London  12.3  10.3   -2.0   0.0  -2.0
D02    Bristol   7.3  13.2    5.9   5.9   0.0
D03    Bristol  17.3  19.2    1.9   1.9   0.0
</code></pre>
<p>I'd like to flatten the column multindex to the columns are</p>
<pre><code>ID       site     a     b  delta  over  under
D01    London  12.3  10.3   -2.0   0.0   -2.0
D02   Bristol   7.3  13.2    5.9   5.9    0.0
D03   Bristol  17.3  19.2    1.9   1.9    0.0
</code></pre>
<p>I'm struggling with the online documentation and tutorials to work out how to do this.</p>
<p>I'd welcome advice, ideally to do this in a robust way which doesn't hardcode column positions.</p>
<hr />
<p>UPDATE: the <code>to_dict</code> is</p>
<pre><code>{'index': ['D01', 'D02', 'D03'],
'columns': [('site', 'a'),
('pay', 'a'),
('pay', 'b'),
('delta', ''),
('over', ''),
('under', '')],
'data': [['London', 12.3, 10.3, -2.0, 0.0, -2.0],
['Bristol', 7.3, 13.2, 5.8999999999999995, 5.8999999999999995, 0.0],
['Bristol', 17.3, 19.2, 1.8999999999999986, 1.8999999999999986, 0.0]],
'index_names': ['ID'],
'column_names': [None, 'phase']}
</code></pre>
|
<python><pandas><dataframe>
|
2024-11-28 15:25:55
| 2
| 325
|
Penelope
|
79,234,349
| 8,000,016
|
Error building and installing custom python package
|
<p>I'm trying to distribute my custom Python package through the GCP Artifact Registry and use it in my project, but I got a dependency error.</p>
<p>pyproject.toml:</p>
<pre><code>...
dependencies = [
"pandas>=2.2.3",
"numpy>=1.26.4",
"sentence-transformers>=3.0.1",
"scikit-learn>=1.5.2",
"racplusplus>=0.1.1",
"requests>=2.32.3",
"vertexai>=1.48.0",
"python-dotenv>=1.0.1",
"gcsfs>=2024.10.0"
]
</code></pre>
<p>I build the package with the following command <code>python -m build --sdist --wheel</code> and upload it with <code>python3 -m twine upload --non-interactive --config-file .pypirc --repository-url "https://${{ env.GCP_REGION }}-python.pkg.dev/${{ env.GCP_PROJECT_ID }}/${{ env.GCP_REPOSITORY }}/" dist/*</code></p>
<p>But when I try to install it with <code>pip install mf-ai-subrequests==1.0.0 --index-url https://$GCP_REGION-python.pkg.dev/$GCP_PROJECT_ID/$GCP_REPOSITORY/simple/</code></p>
<p>Got the following error:</p>
<pre><code>ERROR: Could not find a version that satisfies the requirement pandas==2.2.3 (from mf-ai-subrequests) (from versions: none)
ERROR: No matching distribution found for pandas==2.2.3
</code></pre>
<p>Any idea how to solve it?</p>
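One thing worth checking (an assumption about the setup, not a confirmed diagnosis): with <code>--index-url</code> pointing only at the Artifact Registry repository, pip looks for <code>pandas</code> there too and finds nothing, which matches the "from versions: none" message. Keeping PyPI available as an additional index lets pip resolve the public dependencies, e.g.:

```shell
pip install mf-ai-subrequests==1.0.0 \
    --index-url "https://$GCP_REGION-python.pkg.dev/$GCP_PROJECT_ID/$GCP_REPOSITORY/simple/" \
    --extra-index-url https://pypi.org/simple/
```

Alternatively, Artifact Registry can be configured as a virtual/remote repository that proxies PyPI, so a single index URL works.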
|
<python><pip><pyproject.toml>
|
2024-11-28 14:50:06
| 1
| 1,264
|
Alberto Sanmartin Martinez
|
79,234,345
| 18,904,265
|
Subscriber only receives first 2/3 of messages from publisher when using QoS1 or above for publishing
|
<p>I am testing a mosquitto server using two python programs. The mosquitto broker is a docker on a linux machine, the python programs are running on my windows machine. One is subscribed to the topic "Test" and constantly listening and will print all received messages to stdout + add them to a log file. The second sends a specified amount of messages to the topic "Test" as fast as it can.</p>
<p>This works great when publisher and subscriber use QoS 0. However, when using QoS 1 for the publisher, all of the messages still get transmitted, but only 60-90% of them are received when sending ~50 messages or more. I have also tested this with 100, 1,000 and 10,000 messages: it's always about 2/3 of the messages that are received, regardless of how many are sent. Also, no messages in between are dropped, only the ones at the end.</p>
<p>This gets even worse when changing the publisher to QoS 2, then only 30-50 % of my messages are received by the subscriber.</p>
<p>The QoS level of my subscriber is set at 0, changing it to 1 or 2 doesn't change anything.</p>
<p>Both scripts are connected as the same user.</p>
<p>I'm thinking the system might not be able to keep up with all the messages quickly enough, but why isn't it dropping messages in between then, instead of only cutting them off at the end, regardless of message count?</p>
<h3>Here is the publisher program:</h3>
<pre class="lang-py prettyprint-override"><code>import paho.mqtt.client as mqtt

def on_connect(client, userdata, flags, reason_code, properties):
    print(f"Connected with result code {reason_code}")

broker_hostname = "myhost"
port = 1883

client = mqtt.Client(mqtt.CallbackAPIVersion.VERSION2)
client.username_pw_set(username="user", password="password")
client.on_connect = on_connect
client.connect(broker_hostname, port)
client.loop_start()

topic = "Test"
msg_count = 0

try:
    while msg_count < 1000:
        msg_count += 1
        result = client.publish(topic, msg_count, qos=1)
        status = result[0]
        if status == 0:
            print("Message " + str(msg_count) + " is published to topic " + topic)
        else:
            print("Failed to send message to topic " + topic)
        if not client.is_connected():
            print("Client not connected, exiting...")
            break
finally:
    client.disconnect()
    client.loop_stop()
</code></pre>
<h3>and the subscriber program:</h3>
<pre class="lang-py prettyprint-override"><code>import paho.mqtt.client as mqtt
import logging

logging.basicConfig(filename="log.txt", level=logging.INFO)

def on_connect(client, userdata, flags, reason_code, properties):
    print(f"Connected with result code {reason_code}")
    client.subscribe("Test", qos=0)  # also tested with qos=2, no difference in behaviour

def on_message(client, userdata, msg):
    print(msg.topic + " " + str(msg.payload))
    logging.info(msg.payload)

broker_hostname = "myhost"
port = 1883

client = mqtt.Client(mqtt.CallbackAPIVersion.VERSION2)
client.username_pw_set(username="user", password="password")
client.on_connect = on_connect
client.on_message = on_message
client.connect(broker_hostname, port)
client.loop_forever()
</code></pre>
<h3>My mosquitto conf:</h3>
<pre class="lang-none prettyprint-override"><code>persistence true
persistence_location /mosquitto/data/
log_type subscribe
log_type unsubscribe
log_type websockets
log_type error
log_type warning
log_type notice
log_type information
log_dest file /mosquitto/log/mosquitto.log
log_dest stdout
password_file /mosquitto/passwd_file
allow_anonymous false
# MQTT Default listener
listener 1883 0.0.0.0
# MQTT over WebSockets
listener 9001 0.0.0.0
protocol websockets
</code></pre>
|
<python><mqtt><mosquitto><paho>
|
2024-11-28 14:48:34
| 0
| 465
|
Jan
|
79,234,318
| 15,835,974
|
How to replace a value including the column in a structure
|
<p>When I use <a href="https://spark.apache.org/docs/latest/api/python/reference/pyspark.sql/api/pyspark.sql.DataFrame.replace.html" rel="nofollow noreferrer">DataFrame.replace</a>, it doesn't replace the values that are in a structure.</p>
<p>In this example, it doesn't replace the value of <code>my_struct.struct_string</code>:</p>
<pre><code>from awsglue.context import GlueContext
from pyspark.context import SparkContext
from pyspark.sql.functions import col
from pyspark.sql.types import StringType, StructType, StructField
glueContext = GlueContext(SparkContext.getOrCreate())
spark = glueContext.spark_session
data = [
("null", {"struct_string": "null"}),
]
schema = StructType([
StructField("a_string", StringType(), True),
StructField(
"my_struct",
StructType([
StructField("struct_string", StringType(), True),
]),
True
)
])
df = spark.createDataFrame(data, schema)
df = df.replace("null", None)
df_astring = df.filter(col("a_string").isNotNull())
df_struct_string = df.filter(col("my_struct.struct_string").isNotNull())
print("My df_astring")
df_astring.show()
print("My df_struct_string")
df_struct_string.show()
</code></pre>
<p>It prints:</p>
<pre><code>My df_astring
+--------+---------+
|a_string|my_struct|
+--------+---------+
+--------+---------+
My df_struct_string
+--------+---------+
|a_string|my_struct|
+--------+---------+
| null| {null}|
+--------+---------+
</code></pre>
<p>Note that I also tried <code>df = df.replace("null", None, ["a_string", "my_struct.struct_string"])</code>, but I get the exception <code>java.lang.UnsupportedOperationException: Nested field my_struct.struct_string is not supported</code>.</p>
<p>The solution needs to be dynamic, so it doesn't require manually specifying which columns are strings.</p>
<p>The expected output is:</p>
<pre><code>My df_astring
+--------+---------+
|a_string|my_struct|
+--------+---------+
+--------+---------+
My df_struct_string
+--------+---------+
|a_string|my_struct|
+--------+---------+
+--------+---------+
</code></pre>
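<p>One dynamic approach, as a sketch I have not run against Glue, is to rebuild each column from the DataFrame's schema, recursing into structs and applying the replacement at every string leaf. The helper name <code>null_out</code> is made up for illustration:</p>

```python
from pyspark.sql import functions as F
from pyspark.sql.types import StringType, StructType

def null_out(col, dtype, target="null"):
    # Rebuild the column bottom-up: recurse into structs, and turn the
    # target string into NULL at every string leaf.
    if isinstance(dtype, StructType):
        return F.struct(*[
            null_out(col.getField(f.name), f.dataType, target).alias(f.name)
            for f in dtype.fields
        ])
    if isinstance(dtype, StringType):
        return F.when(col == target, F.lit(None)).otherwise(col)
    return col

df = df.select(*[
    null_out(F.col(f.name), f.dataType).alias(f.name) for f in df.schema.fields
])
```

<p>Arrays or maps containing structs would need extra branches (e.g. <code>F.transform</code>), and a NULL struct comes back as a struct of NULLs here; both cases are left out of the sketch.</p>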
|
<python><pyspark>
|
2024-11-28 14:41:54
| 2
| 597
|
jeremie bergeron
|
79,234,157
| 1,743,124
|
Gradio Image() component to download image with random names
|
<p>I have 3 <code>gr.Image()</code> components that only have output when the LLM inference happens.</p>
<pre><code>img1 = gr.Image(
label="Generated Image",
type="pil",
format="png",
interactive=False,
show_share_button=False,
elem_classes="generated-image"
)
</code></pre>
<p>The inference API call is something like this:</p>
<pre><code>def inference_with_timer(model_key, prompt, api_key):
API_URL = models[model_key]
headers = {"Authorization": f"Bearer {api_key}"}
payload = {"inputs": prompt}
try:
start_time = time.time()
response = requests.post(API_URL, headers=headers, json=payload)
response.raise_for_status()
elapsed_time = round(time.time() - start_time, 2)
image = Image.open(BytesIO(response.content))
return image, f"{elapsed_time}s"
except Exception as e:
return str(e), "Error"
</code></pre>
<p>The built-in download button in the <code>gr.Image()</code> component has a default name: "image.{ext}". I want it to be random after each inference. How can I do so?</p>
<p>Some points to be noted:</p>
<ol>
<li><p>The download button on each image block is not present on the very first load.</p>
</li>
<li><p>The download button on the first image block can be present while the other two can still be absent (as like #1)
<a href="https://i.sstatic.net/5y15oEHO.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/5y15oEHO.png" alt="Image 1 Inference present while the others are absent" /></a></p>
</li>
<li><p>There could be 3 download buttons on each image block</p>
</li>
<li><p>After the first inference, the download button will be there while the system could request for another inference</p>
</li>
</ol>
<p><a href="https://i.sstatic.net/rEU8UuYk.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/rEU8UuYk.png" alt="Download button is present after the first inference" /></a></p>
<p>I need to update the <code>download</code> attribute of the 3 different anchor tags only when a new inference comes in.</p>
<p><a href="https://i.sstatic.net/4QauFNLj.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/4QauFNLj.png" alt="The download attribute in the anchor tag" /></a></p>
<h2>What I tried so far</h2>
<p>I tried using custom JavaScript, but as I'm not a pro Python programmer, I cannot work out how to let the JS know whether an inference happened, and whether it was successful. Even with a failed response/inference, the <code>src</code> attribute has a value:</p>
<pre><code><img
src="https://mayeenulislam-imagen.hf.space/gradio_api/file=/home/user/app/500 Server Error: Internal Server Error for url: https:/api-inference.huggingface.co/models/black-forest-labs/FLUX.1-schnell"
alt=""
loading="lazy"
class="svelte-1pijsyv"
>
</code></pre>
<p>Hence, I need assistance from the experts.</p>
|
<python><huggingface><gradio><image-generation>
|
2024-11-28 13:48:13
| 0
| 4,792
|
Mayeenul Islam
|
79,234,004
| 5,406,294
|
Llama-3.2-1B-Instruct generate inconsistent output
|
<p>I want to use the <code>Llama-3.2-1B-Instruct</code> model, and although I have set <code>"temperature": 0.0, "top_p": 0.0 and "top_k": 0</code>, it still generates inconsistent output. This is what my pipeline looks like:</p>
<pre><code>pipe = pipeline(
"text-generation",
model=model_id,
torch_dtype=torch.bfloat16,
device_map="mps",
model_kwargs={"temperature": 0.0,
"do_sample":True,
"top_p":0.0,
"top_k":0,},
)
</code></pre>
<p>Any idea how to solve this issue?</p>
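<p>For what it's worth, a sketch of the usual fix: in the transformers generation API, <code>temperature</code>/<code>top_p</code>/<code>top_k</code> are only consulted when <code>do_sample=True</code>, and deterministic (greedy) decoding is requested by turning sampling off rather than by zeroing the temperature. Passing the flag at call time (the <code>prompt</code> name is a placeholder):</p>

```python
# greedy decoding: deterministic for the same input on the same hardware
out = pipe(prompt, do_sample=False, max_new_tokens=128)
```

<p>Note that <code>model_kwargs</code> is forwarded to <code>from_pretrained</code>, not to <code>generate</code>, so generation flags placed there may be silently ignored.</p>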
|
<python><nlp><huggingface-transformers><large-language-model>
|
2024-11-28 13:02:37
| 2
| 548
|
parvaneh shayegh
|
79,233,998
| 19,146,511
|
How can I fuse embeddings in a manner that increases efficiency and score?
|
<p>I've been working on a problem where the goal is to supplement traditional embeddings with LLM-generated embeddings (I'm using the last_hidden_state for this purpose). So far, I've tried simply concatenating them and using a cross-attention mechanism. While concatenating the embeddings yields similar results to using traditional embeddings alone (and the score is definitely not better), the cross-attention mechanism unexpectedly degraded the performance. Are there other methods that could potentially improve the score? Code is provided below:</p>
<p>Code for Simple Concatenation:</p>
<pre class="lang-py prettyprint-override"><code> def forward(self, depot_xy, node_xy_demand_tw, llm_embeddings):
moe_loss = 0
# Get traditional embeddings
if isinstance(self.embedding_depot, MoE) or isinstance(self.embedding_node, MoE):
embedded_depot, loss_depot = self.embedding_depot(depot_xy)
embedded_node, loss_node = self.embedding_node(node_xy_demand_tw)
moe_loss = moe_loss + loss_depot + loss_node
else:
embedded_depot = self.embedding_depot(depot_xy)
embedded_node = self.embedding_node(node_xy_demand_tw)
# Project LLM embeddings and normalize
# print(320, self.llm_projection[0].weight.dtype, llm_embeddings.dtype)
projected_llm = self.llm_projection(llm_embeddings)
projected_llm = self.layer_norm(projected_llm)
# Combine traditional embeddings with LLM embeddings
depot_combined = embedded_depot + projected_llm[:, :1, :] # For depot
node_combined = embedded_node + projected_llm[:, 1:, :] # For nodes
out = torch.cat((depot_combined, node_combined), dim=1)
for layer in self.layers:
out, loss = layer(out)
moe_loss = moe_loss + loss
return out, moe_loss
</code></pre>
<p>Code for Cross attention based fusion with Cross attention class:</p>
<pre class="lang-py prettyprint-override"><code>########################################
# CROSS ATTENTION
########################################
class CrossAttentionFusion(nn.Module):
def __init__(self, embedding_dim, head_num, qkv_dim):
super().__init__()
self.head_num = head_num
# Cross attention layers for traditional -> LLM
self.Wq_trad = nn.Linear(embedding_dim, head_num * qkv_dim, bias=False).to(dtype=torch.bfloat16)
self.Wk_llm = nn.Linear(embedding_dim, head_num * qkv_dim, bias=False).to(dtype=torch.bfloat16)
self.Wv_llm = nn.Linear(embedding_dim, head_num * qkv_dim, bias=False).to(dtype=torch.bfloat16)
# Cross attention layers for LLM -> traditional
self.Wq_llm = nn.Linear(embedding_dim, head_num * qkv_dim, bias=False).to(dtype=torch.bfloat16)
self.Wk_trad = nn.Linear(embedding_dim, head_num * qkv_dim, bias=False).to(dtype=torch.bfloat16)
self.Wv_trad = nn.Linear(embedding_dim, head_num * qkv_dim, bias=False).to(dtype=torch.bfloat16)
# Output projections
self.W_out_trad = nn.Linear(head_num * qkv_dim, embedding_dim).to(dtype=torch.bfloat16)
self.W_out_llm = nn.Linear(head_num * qkv_dim, embedding_dim).to(dtype=torch.bfloat16)
# Layer norms
self.norm_trad = nn.LayerNorm(embedding_dim).to(dtype=torch.bfloat16)
self.norm_llm = nn.LayerNorm(embedding_dim).to(dtype=torch.bfloat16)
def forward(self, trad_emb, llm_emb):
# Cross attention: traditional -> LLM
# print(f"trad_emb dtype: {trad_emb.dtype}, shape: {trad_emb.shape}")
# print(f"llm_emb dtype: {llm_emb.dtype}, shape: {llm_emb.shape}")
# print(f"Wq_trad type: {self.Wq_trad.weight.dtype}, shape: {self.Wq_trad.weight.shape}")
q_trad = reshape_by_heads(self.Wq_trad(trad_emb), self.head_num)
k_llm = reshape_by_heads(self.Wk_llm(llm_emb), self.head_num)
v_llm = reshape_by_heads(self.Wv_llm(llm_emb), self.head_num)
trad_attends_llm = multi_head_attention(q_trad, k_llm, v_llm)
trad_fused = self.W_out_trad(trad_attends_llm)
trad_out = self.norm_trad(trad_emb + trad_fused)
# Cross attention: LLM -> traditional
q_llm = reshape_by_heads(self.Wq_llm(llm_emb), self.head_num)
k_trad = reshape_by_heads(self.Wk_trad(trad_emb), self.head_num)
v_trad = reshape_by_heads(self.Wv_trad(trad_emb), self.head_num)
llm_attends_trad = multi_head_attention(q_llm, k_trad, v_trad)
llm_fused = self.W_out_llm(llm_attends_trad)
llm_out = self.norm_llm(llm_emb + llm_fused)
# Combine the cross-attended features
fused_embeddings = trad_out + llm_out
return fused_embeddings
########################################
# ENCODER
########################################
class MTL_Encoder(nn.Module):
def __init__(self, **model_params):
super().__init__()
self.model_params = model_params
embedding_dim = self.model_params['embedding_dim']
hidden_dim = self.model_params['ff_hidden_dim']
encoder_layer_num = self.model_params['encoder_layer_num']
head_num = self.model_params['head_num']
qkv_dim = self.model_params['qkv_dim']
llama_hidden_size = 4096 # Llama-2 7B hidden size
# Project Llama embeddings to the model's embedding dimension with dtype torch.bfloat16
self.llm_projection = nn.Sequential(
nn.Linear(llama_hidden_size, hidden_dim),
nn.ReLU(),
nn.Linear(hidden_dim, embedding_dim)
).to(dtype=torch.bfloat16)
# Add layer normalization for better embedding fusion
self.layer_norm = nn.LayerNorm(embedding_dim).to(dtype=torch.bfloat16)
self.layer_norm_trad = nn.LayerNorm(embedding_dim).to(dtype=torch.bfloat16)
if self.model_params['num_experts'] > 1 and "Raw" in self.model_params['expert_loc']:
self.embedding_depot = MoE(input_size=2, output_size=embedding_dim,
num_experts=self.model_params['num_experts'],
k=self.model_params['topk'], T=1.0,
noisy_gating=True,
routing_level=self.model_params['routing_level'],
routing_method=self.model_params['routing_method'],
moe_model="Linear")
self.embedding_node = MoE(input_size=5, output_size=embedding_dim,
num_experts=self.model_params['num_experts'],
k=self.model_params['topk'], T=1.0,
noisy_gating=True,
routing_level=self.model_params['routing_level'],
routing_method=self.model_params['routing_method'],
moe_model="Linear")
else:
self.embedding_depot = nn.Linear(2, embedding_dim)
self.embedding_node = nn.Linear(5, embedding_dim)
# Cross-attention fusion module
self.cross_attention_fusion = CrossAttentionFusion(
embedding_dim=embedding_dim,
head_num=head_num,
qkv_dim=qkv_dim
)
self.layers = nn.ModuleList([EncoderLayer(i, **model_params)
for i in range(encoder_layer_num)])
def forward(self, depot_xy, node_xy_demand_tw, llm_embeddings):
moe_loss = 0
# Get traditional embeddings
if isinstance(self.embedding_depot, MoE) or isinstance(self.embedding_node, MoE):
embedded_depot, loss_depot = self.embedding_depot(depot_xy)
embedded_node, loss_node = self.embedding_node(node_xy_demand_tw)
moe_loss = moe_loss + loss_depot + loss_node
else:
embedded_depot = self.embedding_depot(depot_xy)
embedded_node = self.embedding_node(node_xy_demand_tw)
# Combine depot and node embeddings
traditional_embeddings = torch.cat((embedded_depot, embedded_node), dim=1).to(dtype=torch.bfloat16)
# Project and normalize LLM embeddings
projected_llm = self.llm_projection(llm_embeddings)
projected_llm = self.layer_norm(projected_llm)
# Normalize traditional embeddings
traditional_embeddings = self.layer_norm_trad(traditional_embeddings)
# Apply cross-attention fusion
fused_embeddings = self.cross_attention_fusion(
traditional_embeddings,
projected_llm
)
# Pass through encoder layers
out = fused_embeddings
for layer in self.layers:
out, loss = layer(out)
moe_loss = moe_loss + loss
return out, moe_loss
</code></pre>
<p>and following is how I'm getting the LLM embeddings,</p>
<pre class="lang-py prettyprint-override"><code> with torch.no_grad():
outputs = self.llama(**inputs)
# Use the last hidden state's [CLS] token
new_embeddings = outputs.hidden_states[-1][:, 0, :]
</code></pre>
<p>Am I doing something wrong? Or do the traditional embeddings even benefit from LLM-generated ones at all?</p>
|
<python><machine-learning><nlp><large-language-model><vehicle-routing>
|
2024-11-28 13:01:00
| 0
| 307
|
lazytux
|
79,233,997
| 1,091,116
|
AttributeError: 'zstd.ZstdDecompressionReader' object has no attribute 'fileno'
|
<p>I need to run a subprocess pipeline that uses zstandard files (too large to fit in memory) both as their input and output. Consider the following example:</p>
<pre><code>import subprocess
import zstandard
with zstandard.open('a.txt.zst', 'w') as f:
f.write('hello\n')
f_in = zstandard.open('a.txt.zst', 'rb')
f_out = zstandard.open('b.txt.zst', 'wb')
# in reality I'd be running multiple programs here by chaining PIPEs, but first
# reads f_in and last writes to f_out:
subprocess.call(['cat'], stdin=f_in, stdout=f_out)
</code></pre>
<p>I'm getting the following error:</p>
<pre><code>Traceback (most recent call last):
File "/tmp/a.py", line 12, in <module>
subprocess.call(['cat'], stdin=f_in, stdout=f_out)
File "/usr/lib/python3.11/subprocess.py", line 389, in call
with Popen(*popenargs, **kwargs) as p:
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/lib/python3.11/subprocess.py", line 892, in __init__
errread, errwrite) = self._get_handles(stdin, stdout, stderr)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/lib/python3.11/subprocess.py", line 1661, in _get_handles
p2cread = stdin.fileno()
^^^^^^^^^^^^
AttributeError: 'zstd.ZstdDecompressionReader' object has no attribute 'fileno'
</code></pre>
<p>I'm thinking of using PIPEs at both ends and feeding them with threads, but it feels rather fragile. Is there a more idiomatic solution to this problem?</p>
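<p>The thread-feeding pattern is actually a reasonably idiomatic answer to "file object without <code>fileno()</code> on both ends of a subprocess". A self-contained sketch of the pattern, using stdlib <code>gzip</code> as a stand-in (zstandard's <code>stream_reader</code>/<code>stream_writer</code> objects would go in the same positions):</p>

```python
import gzip
import os
import shutil
import subprocess
import tempfile
import threading

def pipe_through(f_in, f_out, argv):
    # Pump a plain Python file object through a subprocess: a background
    # thread feeds stdin while the main thread drains stdout, so neither
    # OS pipe buffer fills up and deadlocks.
    proc = subprocess.Popen(argv, stdin=subprocess.PIPE, stdout=subprocess.PIPE)

    def feed():
        shutil.copyfileobj(f_in, proc.stdin)
        proc.stdin.close()  # EOF lets the child terminate

    feeder = threading.Thread(target=feed)
    feeder.start()
    shutil.copyfileobj(proc.stdout, f_out)
    feeder.join()
    proc.wait()

workdir = tempfile.mkdtemp()
src = os.path.join(workdir, "a.txt.gz")
dst = os.path.join(workdir, "b.txt.gz")

with gzip.open(src, "wt") as f:
    f.write("hello\n")

# decompress src -> cat -> recompress into dst, all streaming
with gzip.open(src, "rb") as f_in, gzip.open(dst, "wb") as f_out:
    pipe_through(f_in, f_out, ["cat"])

with gzip.open(dst, "rt") as f:
    result = f.read()
```

<p>For a multi-stage pipeline, only the first stage's stdin and the last stage's stdout need this treatment; the intermediate stages can still be chained with real pipes.</p>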
|
<python><subprocess><zstd><zstandard>
|
2024-11-28 13:00:52
| 2
| 11,756
|
d33tah
|
79,233,913
| 2,886,640
|
Why does alternative DB cursor remain loading forever on commit?
|
<p>I have a problem when using the <code>l10n_es_aeat_sii_oca</code> module. After a while I was able to locate the problem in this piece of code:</p>
<pre><code>try:
...
except Exception as fault:
new_cr = Registry(self.env.cr.dbname).cursor()
env = api.Environment(new_cr, self.env.uid, self.env.context)
document = env[document._name].browse(document.id)
doc_vals.update({
"aeat_send_failed": True,
"aeat_send_error": repr(fault)[:60],
"sii_return": repr(fault),
"aeat_content_sent": json.dumps(inv_dict, indent=4),
})
document.write(doc_vals)
new_cr.commit()
new_cr.close()
raise ValidationError(fault) from fault
</code></pre>
<p>If the exception is raised, a new cursor is created and the document values are written through this new DB cursor. I don't know the reason for creating a new DB cursor; I guess it is done to fill in the document fields and prevent those values from being removed by the <code>ROLLBACK</code> triggered by the <code>ValidationError</code>. Is that the reason?</p>
<p>The problem is that when the workflow reaches the line <code>new_cr.commit()</code>, it stops and stays loading forever. Why?</p>
<p>If I modify the code and do this:</p>
<pre><code>try:
...
except Exception as fault:
document = self.env[document._name].browse(document.id)
doc_vals.update({
"aeat_send_failed": True,
"aeat_send_error": repr(fault)[:60],
"sii_return": repr(fault),
"aeat_content_sent": json.dumps(inv_dict, indent=4),
})
document.write(doc_vals)
raise ValidationError(fault) from fault
</code></pre>
<p>Everything seems to work well, but this is not the solution I want, since I guess I would lose all the information because of the <code>ROLLBACK</code>. How can I find out why <code>new_cr.commit()</code> hangs forever?</p>
|
<python><odoo><odoo-16>
|
2024-11-28 12:34:51
| 0
| 10,269
|
forvas
|
79,233,548
| 10,020,283
|
OSMnx throws exception when called on graph created from graph_from_gdfs
|
<p>I'm trying to simplify a graph after modifying the <code>gdf_edges</code> and recreating the graph from the dataframes. My workflow is as follows:</p>
<pre><code>ox.graph_from_polygon => graph_to_gdfs => modify gdf_edges => ox.graph_from_gdfs => ox.simplify_graph
</code></pre>
<p>However, this throws a TypeError in <code>simplification.py</code>: <code>TypeError: unhashable type: 'list'</code>.</p>
<pre><code>File .../osmnx/simplification.py:362, in simplify_graph(G, strict, edge_attrs_differ, endpoint_attrs, remove_rings, track_merged)
359 if attr in attrs_to_sum:
360 # if this attribute must be summed, sum it now
361 path_attributes[attr] = sum(path_attributes[attr])
--> 362 elif len(set(path_attributes[attr])) == 1:
363 # if there's only 1 unique value in this attribute list,
</code></pre>
<p>The same error also occurs without modifying the dataframe. MRE:</p>
<pre><code>import shapely
import osmnx as ox
poly = shapely.from_wkt("POLYGON((9.493474960327147 51.228210969202365,"
"9.493346214294432 51.223359875812804,"
"9.498442411422728 51.22356145497176,"
"9.498571157455443 51.228036284963565,"
"9.493474960327147 51.228210969202365))")
G = ox.graph_from_polygon(poly, network_type="drive")
nodes_gdf, edges_gdf = ox.graph_to_gdfs(G.to_undirected())
G2 = ox.graph_from_gdfs(gdf_nodes=nodes_gdf, gdf_edges=edges_gdf)
nodes_gdf_2, edges_gdf_2 = ox.graph_to_gdfs(ox.simplify_graph(G2))
</code></pre>
<p><strong>Question</strong>: How can I prevent this exception and simplify G2?</p>
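<p>I'm not sure this covers every failure mode, but the traceback shows <code>set(path_attributes[attr])</code> failing, which happens when an edge attribute holds a <em>list</em> (typically <code>osmid</code> after the gdfs round-trip), and lists are unhashable. Converting list-valued edge attributes to tuples before simplifying makes them hashable; a minimal stand-alone illustration on a bare <code>networkx</code> graph:</p>

```python
import networkx as nx

def make_edge_attrs_hashable(G):
    # Lists are unhashable, so osmnx's set() over attribute values fails;
    # tuples preserve the contents while being hashable.
    for _, _, data in G.edges(data=True):
        for key, value in data.items():
            if isinstance(value, list):
                data[key] = tuple(value)
    return G

G = nx.MultiDiGraph()
G.add_edge(1, 2, osmid=[100, 101], length=5.0)
make_edge_attrs_hashable(G)
print(G.edges[1, 2, 0]["osmid"])  # (100, 101)
```

<p>In the MRE above, calling <code>make_edge_attrs_hashable(G2)</code> before <code>ox.simplify_graph(G2)</code> should avoid the exception.</p>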
|
<python><networkx><osmnx>
|
2024-11-28 10:55:23
| 1
| 6,792
|
mcsoini
|
79,233,459
| 7,825,830
|
opening a stylized Tkinter GUI from a Tkinter button click event doesn't apply the style
|
<p>I have two files in a single folder: <code>MainApp.py</code> and <code>SecondApp.py</code>.</p>
<p>Both have <code>ttk.Style()</code> applied to specific controls, and when you launch them individually, the styles are applied as expected.</p>
<hr />
<p><strong>MainApp:</strong></p>
<p><a href="https://i.sstatic.net/CbhOiv2r.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/CbhOiv2r.png" alt="MainApp UI" /></a></p>
<p><strong>Code:</strong></p>
<pre><code>import tkinter as tk
from tkinter import ttk, PhotoImage
from SecondApp import SecondApp
class MainApp:
def __init__(self, app):
self.app = app
self.app.title("MainApp")
self.app.geometry("200x200")
self.app.config(bg="#FFFFFF")
def create_styles(self):
self.style = ttk.Style()
self.style.theme_use('clam')
self.style.configure('HELP_BUTTON.TButton',
padding=10,
relief='flat',
background="#625bcb",
foreground='white')
self.style.map('HELP_BUTTON.TButton',
background=[('active', '#352cbc'), ('disabled', '#acacac')],
foreground=[('active', 'white'), ('disabled', 'gray')])
def render_widget(self):
self.btn_help_img = PhotoImage(file=fr"D:\question_mark_25px.png", master=self.app)
self.btn_help = ttk.Button(self.app, padding=(0, -1), image=self.btn_help_img, style="HELP_BUTTON.TButton")
self.btn_help.config(cursor="hand2")
self.btn_help.place(x=10, y=10, width=28, height=28)
self.btn_help.bind("<Button-1>", self.open_app)
def open_app(self, event):
self.root = tk.Tk()
self.sa = SecondApp(self.root)
self.sa.create_styles()
self.sa.render_widget()
if __name__ == "__main__":
root = tk.Tk()
rrt = MainApp(root)
rrt.create_styles()
rrt.render_widget()
rrt.app.mainloop()
</code></pre>
<hr />
<p><strong>SecondApp:</strong></p>
<p><a href="https://i.sstatic.net/51WA4aeH.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/51WA4aeH.png" alt="SecondApp UI" /></a></p>
<p><strong>Code:</strong></p>
<pre><code>from tkinter import ttk
class SecondApp:
def __init__(self, app):
self.app = app
self.app.title("SecondApp")
self.app.geometry("200x200")
self.app.config(bg="#FFFFFF")
self.HEADER_FONT = ("Istok Web", 9, "bold")
def create_styles(self):
self.style = ttk.Style()
self.style.theme_use('clam')
# Debugging output
print("Creating styles for SecondApp")
self.style.configure('FLAT_BUTTON.TButton',
padding=8,
font=self.HEADER_FONT,
relief='flat',
background='#DDDDDD',
foreground='#302C2c')
self.style.map('FLAT_BUTTON.TButton',
background=[('active', '#cac6c6'), ('disabled', '#acacac')],
foreground=[('active', 'black'), ('disabled', 'gray')])
def render_widget(self):
# Debugging output
print("Rendering widget in SecondApp")
self.btn_startProcessing = ttk.Button(self.app, text="START PROCESSING", style="FLAT_BUTTON.TButton")
self.btn_startProcessing.place(x=10, y=10, width=150, height=40)
</code></pre>
<hr />
<p>However, when I launch <code>SecondApp</code> from a button click in <code>MainApp</code>, the style for <code>SecondApp</code> doesn't load. The styles are encapsulated inside their respective classes, and I have tried numerous modifications to my code, but without success.</p>
<p><a href="https://i.sstatic.net/o1eNN6A4.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/o1eNN6A4.png" alt="START PROCESSING Button has no Style" /></a></p>
<p>I am wondering if anybody has encountered the same issue and was able to fix it.</p>
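<p>A hedged sketch of what is usually behind this: each <code>tk.Tk()</code> call creates its own Tcl interpreter, and <code>ttk.Style()</code> created without a <code>master</code> attaches to the <em>default</em> root (the first <code>Tk</code>), so <code>SecondApp</code>'s style configuration can land in <code>MainApp</code>'s interpreter instead of its own. Opening the second window as a <code>Toplevel</code> of the existing root keeps everything in one interpreter:</p>

```python
def open_app(self, event):
    # Toplevel shares the root window's Tcl interpreter, so ttk styles
    # configured anywhere in the app apply to its widgets as well
    window = tk.Toplevel(self.app)
    self.sa = SecondApp(window)
    self.sa.create_styles()
    self.sa.render_widget()
```

<p>If a separate <code>tk.Tk()</code> is really needed, passing that root explicitly via <code>ttk.Style(master=self.app)</code> inside <code>SecondApp.create_styles</code> should have the same effect.</p>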
|
<python><tkinter>
|
2024-11-28 10:24:34
| 1
| 578
|
Nii
|
79,233,300
| 21,049,944
|
Generate multiple disjunct samples from a dataframe
|
<p>I am computing statistics on a very large dataframe by taking sums of multiple random samples. I would like the samples to be disjoint (no row should be present in two different samples).</p>
<p>Minimal example, which may use the same rows multiple times:</p>
<pre class="lang-py prettyprint-override"><code>import polars as pl
import numpy as np
df = pl.DataFrame(
{"a": np.random.random(1000)}
)
N_samples = 50
N_logs = 20
sums = [
df.sample(N_logs).select(pl.col("a").sum()).item()
for _ in range(N_samples)
]
</code></pre>
<p>How can I avoid using the same rows in multiple samples?</p>
|
<python><python-polars>
|
2024-11-28 09:39:37
| 1
| 388
|
Galedon
|
79,233,261
| 9,542,989
|
Datashare writes are not authorized by producer or associated by consumer
|
<p>I am trying to query a datashare from AWS Data Exchange using Redshift in Python. <a href="https://aws.amazon.com/marketplace/pp/prodview-iopazp7irqk6s" rel="nofollow noreferrer">This</a> datashare, to be precise.</p>
<p>This is how I am attempting to run my Python code:</p>
<pre><code>import os
import psycopg
os.environ["PGCLIENTENCODING"] = "utf-8"
connection = psycopg.connect(
...
)
query = "SELECT * FROM store s LIMIT 10;"
with connection.cursor() as cur:
cur.execute(query)
result = cur.fetchall()
</code></pre>
<p>I also tried it using the official Redshift connector:</p>
<pre><code>import redshift_connector
connection = redshift_connector.connect(
...
)
cursor = connection.cursor()
query = "SELECT * FROM store s LIMIT 10;"
cursor.execute(query)
result = cursor.fetchall()
</code></pre>
<p>What is going on here? The error message implies that I am attempting to run a write operation against the database, but as shown above, it's clearly a very simple SELECT statement.</p>
|
<python><amazon-redshift><aws-data-exchange>
|
2024-11-28 09:30:33
| 0
| 2,115
|
Minura Punchihewa
|
79,233,242
| 6,256,241
|
How to get relative frequencies from pandas groupby, with two grouping variables?
|
<p>Suppose my data look as follows:</p>
<pre><code>import datetime
import pandas as pd
df = pd.DataFrame({'datetime': [datetime.datetime(2024, 11, 27, 0), datetime.datetime(2024, 11, 27, 1), datetime.datetime(2024, 11, 28, 0),
datetime.datetime(2024, 11, 28, 1), datetime.datetime(2024, 11, 28, 2)],
'product': ['Apple', 'Banana', 'Banana', 'Apple', 'Banana']})
datetime product
0 2024-11-27 00:00:00 Apple
1 2024-11-27 01:00:00 Banana
2 2024-11-28 00:00:00 Banana
3 2024-11-28 01:00:00 Apple
4 2024-11-28 02:00:00 Banana
</code></pre>
<p><strong>All I want is to plot the <em>relative</em> frequencies of the products sold on each day.</strong> In this example: 1/2 (50%) apples and 1/2 bananas on 2024-11-27, and 1/3 apples and 2/3 bananas on 2024-11-28.</p>
<hr />
<p>What I managed to do:</p>
<pre><code>absolute_frequencies = df.groupby([pd.Grouper(key='datetime', freq='D'), 'product']).size().reset_index(name='count')
total_counts = absolute_frequencies.groupby('datetime')['count'].transform('sum')
absolute_frequencies['relative_frequency'] = absolute_frequencies['count'] / total_counts
absolute_frequencies.pivot(index='datetime', columns='product', values='relative_frequency').plot()
</code></pre>
<p>I am pretty confident there is a much less complicated way, since for the <em>absolute</em> frequencies I can simply use:</p>
<pre><code>df.groupby([pd.Grouper(key='datetime', freq='D'), 'product']).size().unstack('product').plot(kind='line')
</code></pre>
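<p>One candidate shortcut, using the question's own data: <code>pd.crosstab</code> with <code>normalize="index"</code> builds the day-by-product table and divides each row by its total in a single call:</p>

```python
import datetime
import pandas as pd

df = pd.DataFrame({
    "datetime": [
        datetime.datetime(2024, 11, 27, 0), datetime.datetime(2024, 11, 27, 1),
        datetime.datetime(2024, 11, 28, 0), datetime.datetime(2024, 11, 28, 1),
        datetime.datetime(2024, 11, 28, 2),
    ],
    "product": ["Apple", "Banana", "Banana", "Apple", "Banana"],
})

# one row per day, one column per product; normalize="index" divides each
# row by its total, giving relative frequencies per day
rel = pd.crosstab(df["datetime"].dt.floor("D"), df["product"], normalize="index")
```

<p><code>rel.plot()</code> then draws the same line plot directly.</p>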
|
<python><pandas><datetime><group-by><line-plot>
|
2024-11-28 09:22:39
| 2
| 3,969
|
Qaswed
|
79,233,050
| 2,443,525
|
Django model has ManyToMany field, how to get all IDs without fetching the objects?
|
<p>I have a data structure like this:</p>
<pre class="lang-none prettyprint-override"><code>class Pizza(models.Model):
name = models.CharField(max_length=100)
toppings = models.ManyToManyField(Topping, related_name="pizzas")
class Topping(models.Model):
name = models.CharField(max_length=100)
</code></pre>
<p>And to get all topping IDs related to a pizza I can do this:</p>
<pre class="lang-none prettyprint-override"><code>list(map(lambda t: t.id, pizza.toppings.all()))
</code></pre>
<p>But this fetches all toppings completely from the database, even though I only need the IDs. Is there a way to get the IDs without fetching the complete objects (for performance reasons)?</p>
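<p>For comparison, a sketch using <code>values_list</code> (part of Django's documented QuerySet API): it selects only the <code>id</code> column and never instantiates <code>Topping</code> objects. It assumes the models above and an existing <code>pizza</code> instance:</p>

```python
# one query that SELECTs only the id column; no Topping instances are built
topping_ids = list(pizza.toppings.values_list("id", flat=True))
```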
|
<python><django><django-models>
|
2024-11-28 08:12:26
| 2
| 426
|
Lodewijck
|
79,233,046
| 3,973,269
|
Python ssl issue with azure cosmos db emulator in github actions
|
<p>I am trying to write unit tests for my Azure Functions, written in Python. I have a Python file that does some setup (creating the Cosmos DB databases and containers), and a GitHub Actions YAML file that pulls a Docker container and then runs the scripts.</p>
<p><strong>The error:</strong>
For some reason, I get the following error when running the Python script:</p>
<pre><code>azure.core.exceptions.ServiceRequestError: [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: self-signed certificate (_ssl.c:1006)
</code></pre>
<p>I have already tried to install the CA certificate, provided by the docker container. I think this worked correctly but the error still persists.</p>
<p><strong>The yaml file:</strong></p>
<pre><code>jobs:
test:
runs-on: ubuntu-latest
steps:
- name: Checkout repository
uses: actions/checkout@v3
- name: Start Cosmos DB Emulator
run: docker run --detach --publish 8081:8081 --publish 1234:1234 mcr.microsoft.com/cosmosdb/linux/azure-cosmos-emulator:latest
- name: pause
run : sleep 120
- name : emulator certificate
run : |
retry_count=0
max_retry_count=10
until sudo curl --insecure --silent --fail --show-error "https://localhost:8081/_explorer/emulator.pem" --output "/usr/local/share/ca-certificates/cosmos-db-emulator.crt"; do
if [ $retry_count -eq $max_retry_count ]; then
echo "Failed to download certificate after $retry_count attempts."
exit 1
fi
echo "Failed to download certificate. Retrying in 5 seconds..."
sleep 5
retry_count=$((retry_count+1))
done
sudo update-ca-certificates
sudo ls /etc/ssl/certs | grep emulator
- name: Set up Python
uses: actions/setup-python@v4
with:
python-version: '3.11'
- name: Cache dependencies
uses: actions/cache@v3
with:
path: ~/.cache/pip
key: ${{ runner.os }}-pip-${{ hashFiles('**/requirements.txt') }}
restore-keys: |
${{ runner.os }}-pip-
- name: Install dependencies
run: |
python -m pip install --upgrade pip
pip install -r requirements.txt
- name: Set up Azure Functions Core Tools
run: |
wget -q https://packages.microsoft.com/config/ubuntu/20.04/packages-microsoft-prod.deb
sudo dpkg -i packages-microsoft-prod.deb
sudo apt-get update
sudo apt-get install azure-functions-core-tools-4
- name: Log in with Azure
uses: azure/login@v1
with:
creds: '${{ secrets.AZURE_CREDENTIALS }}'
- name: Start Azurite
run: |
docker run -d -p 10000:10000 -p 10001:10001 -p 10002:10002 mcr.microsoft.com/azure-storage/azurite
- name: Wait for Azurite to start
run: sleep 5
- name: Get Emulator Connection String
id: get-connection-string
run: |
AZURE_STORAGE_CONNECTION_STRING="AccountEndpoint=https://localhost:8081/;AccountKey=C2y6yDjf5/R+ob0N8A7Cgv30VR2Vo3Fl+QUFOzQYzRPgAzF1jAd+pQ==;"
echo "AZURE_STORAGE_CONNECTION_STRING=${AZURE_STORAGE_CONNECTION_STRING}" >> $GITHUB_ENV
- name: Setup test environment in Python
run : python Tests/setup.py
- name: Run tests
run: |
python -m unittest discover Tests
</code></pre>
<p><strong>The Python script</strong></p>
<pre><code>import os
import urllib3
from azure.cosmos import CosmosClient, DatabaseProxy, PartitionKey
from requests.utils import DEFAULT_CA_BUNDLE_PATH
urllib3.disable_warnings()
print(DEFAULT_CA_BUNDLE_PATH)
connection_string : str = os.getenv("COSMOS_DB_CONNECTION_STRING")
database_client_string : str = os.getenv("COSMOS_DB_CLIENT")
container_client_string : str = os.getenv("COSMOS_DB_CONTAINER_MEASUREMENTS")
cosmos_client : CosmosClient = CosmosClient.from_connection_string(
conn_str=connection_string
)
cosmos_client.create_database(
id=database_client_string,
offer_throughput=400
)
database_client : DatabaseProxy = cosmos_client.get_database_client(database_client_string)
database_client.create_container(
id=container_client_string,
partition_key=PartitionKey(path="/path")
)
</code></pre>
<p><strong>Output of the certificate installation step</strong></p>
<pre><code>Updating certificates in /etc/ssl/certs...
rehash: warning: skipping ca-certificates.crt,it does not contain exactly one certificate or CRL
1 added, 0 removed; done.
Running hooks in /etc/ca-certificates/update.d...
/etc/ssl/certs/adoptium/cacerts successfully populated.
Updating Mono key store
Mono Certificate Store Sync - version 6.12.0.200
Populate Mono certificate store from a concatenated list of certificates.
Copyright 2002, 2003 Motus Technologies. Copyright 2004-2008 Novell. BSD licensed.
Importing into legacy system store:
I already trust 146, your new list has 147
Certificate added: CN=localhost
1 new root certificates were added to your trust store.
Import process completed.
Importing into BTLS system store:
I already trust 146, your new list has 147
Certificate added: CN=localhost
1 new root certificates were added to your trust store.
Import process completed.
Done
done.
cosmos-db-emulator.pem
</code></pre>
<p><strong>My thoughts</strong>
I think that the issue arises at the part where I create the database in the Python script. Once I comment out those lines, the error no longer shows. But I do need it :)</p>
<p><strong>Question</strong>
Why might my solution not have worked, and what can I do to solve the issue?</p>
|
<python><azure-functions><github-actions><azure-cosmosdb>
|
2024-11-28 08:10:46
| 1
| 569
|
Mart
|
79,232,831
| 6,212,530
|
How to use Django {% querystring %} with GET form?
|
<p>In Django 5.1 <a href="https://docs.djangoproject.com/en/5.1/ref/templates/builtins/#querystring" rel="nofollow noreferrer">{% querystring %}</a> was added. Is there some way to use it with GET form?</p>
<p>For example, let's say we have template with:</p>
<pre class="lang-html prettyprint-override"><code><span>Paginate by:</span>
<a href="{% querystring paginate_by=50 %}">50</a>
{# ... #}
<form method="GET">
    <input name="query" value="{{ request.GET.query }}">
    <button type="submit">Search</button>
</form>
</code></pre>
<p>Assuming that we are currently on <code>localhost:8000/?paginate_by=50</code>, how to change <code>form</code> so clicking <code>Search</code> won't delete <code>paginate_by</code> query parameter - so what I want is for example <code>localhost:8000/?paginate_by=50&query=abc</code> and not <code>localhost:8000/?query=abc</code>?</p>
<p>Before 5.1 I handled that by adding hidden fields to the form based on the GET parameters, but I am hoping that a more elegant solution is now possible.</p>
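<p>The hidden-field workaround amounts to merging the current GET parameters with the form's own fields, with the form's values winning. Conceptually (sketched with the stdlib rather than Django's <code>QueryDict</code>, so it runs anywhere):</p>

```python
from urllib.parse import parse_qsl, urlencode

def merged_query(current_qs, **updates):
    # Merge the current query string with the form's fields;
    # later values win, mirroring what hidden inputs achieve.
    params = dict(parse_qsl(current_qs))
    params.update({k: str(v) for k, v in updates.items()})
    return "?" + urlencode(params)
```

<p>With <code>merged_query("paginate_by=50", query="abc")</code> yielding <code>?paginate_by=50&query=abc</code> — the behavior the question is asking <code>{% querystring %}</code> to provide for GET forms.</p>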
|
<python><django>
|
2024-11-28 06:48:35
| 2
| 1,028
|
Matija Sirk
|
79,232,800
| 183,717
|
Trying to traverse a linked list in Python 3
|
<p>I have a weird issue traversing a custom linked list. Here is the code for the traversal.</p>
<pre><code>from typing import Optional


class ListNode:
    def __init__(self, val, next_node=None):
        self.val = val
        self.next_node = next_node

    @property
    def value(self):
        return self.val

    def __str__(self):
        return f"Value={self.val}, Next node available={self.next_node.value if self.next_node != None else -1}"

    __dict__ = ("val", "next_node",)


class Solution:
    def addTwoNumbers(self, l1: Optional[ListNode], l2: Optional[ListNode]) -> Optional[list]:
        result_arr = []
        l1_running = l1 != None
        curr_node = l1
        print("***** Start traversing L1 only ******")
        while l1_running:
            print(curr_node)
            if curr_node.next_node:
                l1_running = True
                curr_node = curr_node.next_node
            else:
                l1_running = False
                curr_node = None
        print("***** End traversing L1 only ******")
        print(result_arr)
        return result_arr

    def list_node_builder(self, l1: list[str], l2: list[int]) -> list[int]:
        print("***** Start building Linked List ******")
        l1_list_nodes = []
        l2_list_nodes = []
        for index, num in enumerate(l1):
            if index == len(l1) - 1:
                new_node = ListNode(num)
                l1_list_nodes.append(new_node)
                print(new_node)
            else:
                new_node = ListNode(num, ListNode(l1[index+1]))
                print(str(new_node))
                l1_list_nodes.append(new_node)
        for index, num in enumerate(l2):
            if index == len(l2) - 1:
                l2_list_nodes.append(ListNode(num))
            else:
                l2_list_nodes.append(ListNode(num, ListNode(l2[index+1])))
        print("***** Done building Linked List ******")
        return self.addTwoNumbers(l1_list_nodes[0], l2_list_nodes[0])
</code></pre>
<p>When I am trying to call this using the following code:</p>
<pre><code>from addNumbers import Solution
if __name__ == "__main__":
    s = Solution()
    print(s.list_node_builder(l1=["a","b","c","d","e","f","g"], l2=[9,9,9,1]))
</code></pre>
<p>My output is like this:</p>
<pre class="lang-none prettyprint-override"><code>/Users/shyam/codespaces/python_projects/.venv/bin/python /Users/shyam/codespaces/python_projects/index.py
***** Start building Linked List ******
Value=a, Next node available=b
Value=b, Next node available=c
Value=c, Next node available=d
Value=d, Next node available=e
Value=e, Next node available=f
Value=f, Next node available=g
Value=g, Next node available=-1
***** Done building Linked List ******
***** Start traversing L1 only ******
Value=a, Next node available=b
Value=b, Next node available=-1
***** End traversing L1 only ******
[]
[]
Process finished with exit code 0
</code></pre>
<p>I am not able to understand why building the linked list looks fine and the <code>next_node</code> is properly set, yet when I traverse the list using the head, the traversal stops at the 2nd node.</p>
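<p>For comparison, here is a minimal sketch that builds the chain back-to-front, so every <code>next_node</code> points at a node object that is itself part of the chain (rather than a freshly constructed copy) — this traverses all the way:</p>

```python
class Node:
    def __init__(self, val, next_node=None):
        self.val = val
        self.next_node = next_node

def build(values):
    # Build from the tail: each node's next_node is the node object
    # that actually belongs to the chain, not a fresh copy.
    head = None
    for v in reversed(values):
        head = Node(v, head)
    return head

def traverse(head):
    # Walk the chain until next_node runs out, collecting values.
    out = []
    node = head
    while node is not None:
        out.append(node.val)
        node = node.next_node
    return out
```

<p>Here <code>traverse(build(["a", "b", "c"]))</code> returns all three values, which is the behavior the code above is expecting from its own construction.</p>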
|
<python><linked-list>
|
2024-11-28 06:33:39
| 3
| 9,853
|
name_masked
|
79,232,737
| 2,515,265
|
Cannot submit chat request to VLLM Pixtral in Python using MistralAI
|
<p>I have a Pixtral server running locally.</p>
<p>I have installed <code>mistralai 1.2.3</code> on macOS and am trying to interact with the server.</p>
<p>When I run</p>
<pre><code>curl -X POST http://localhost:8000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"model": "mistralai/Pixtral-12B-2409", "messages": [{"role": "user", "content": "What is the best French cheese?"}]}'
</code></pre>
<p>I get back the expected response, but when I run it from Python</p>
<pre><code>from mistralai import Mistral
model = "mistralai/Pixtral-12B-2409"
client = Mistral(server_url="http://localhost:8000")
chat_response = client.chat.complete(
    messages=[
        {
            "role": "user",
            "content": "What is the best French cheese?"
        }
    ],
    model=model,
)
print(chat_response.choices[0].message.content)
</code></pre>
<p>I get this error:</p>
<pre><code> raise models.SDKError(
mistralai.models.sdkerror.SDKError: API error occurred: Status 400
{"object":"error","message":"[{'type': 'extra_forbidden', 'loc': ('body', 'safe_prompt'), 'msg': 'Extra inputs are not permitted', 'input': False}]","type":"BadRequestError","param":null,"code":400}
</code></pre>
<p>How can I solve this problem?</p>
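<p>Since the curl request succeeds, one way to narrow things down is to send the exact same JSON body from Python without the SDK; note the payload below deliberately has no <code>safe_prompt</code> key, which the error message flags as a forbidden extra input. (A stdlib-only sketch; <code>complete()</code> requires the local server to be running and is not exercised here.)</p>

```python
import json
import urllib.request

def build_payload(model, content):
    # Mirrors the working curl request: no "safe_prompt" key, which
    # the server rejects as an extra input.
    return {
        "model": model,
        "messages": [{"role": "user", "content": content}],
    }

def complete(base_url, payload):
    # Raw POST, bypassing the SDK entirely (needs the server running).
    req = urllib.request.Request(
        base_url + "/v1/chat/completions",
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)
```

<p>If this raw request works while the SDK call fails, that confirms the SDK-injected field is what the server is rejecting.</p>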
|
<python><large-language-model><vllm>
|
2024-11-28 06:00:30
| 1
| 2,657
|
Javide
|
79,232,728
| 3,685,918
|
optimizeWarning: Covariance of the parameters could not be estimated in python
|
<p>I am trying to obtain the optimized parameters of the Nelson-Siegel model. Below is a simple example, but the resulting parameters are not fully optimized and the fit emits a warning.</p>
<p>I believe the warning message is related to the optimized parameters. Please tell me how to fix it.</p>
<pre><code>import numpy as np
from scipy.optimize import curve_fit
# Define the Nelson-Siegel model function
def nelson_siegel(tau, beta1, beta2, beta3, lambd):
    term1 = beta1
    term2 = beta2 * ((1 - np.exp(-lambd * tau)) / (lambd * tau))
    term3 = beta3 * (((1 - np.exp(-lambd * tau)) / (lambd * tau)) - np.exp(-lambd * tau))
    return term1 + term2 + term3
# Given example data (maturity and yields)
tau = np.array([0.25, 2, 5, 10])
yields = np.array([2.54, 2.36, 2.49, 2.96])
# Calculate dynamic initial estimates
beta1_initial = yields[-1] # Yield for the longest maturity
beta2_initial = yields[0] - yields[-1] # Difference between short-term and long-term yields
beta3_initial = (np.max(yields) - np.min(yields)) / 2 # Curvature is about half of the yield range
lambd_initial = 0.2 # Default value
initial_guess = [beta1_initial, beta2_initial, beta3_initial, lambd_initial]
# Set parameter bounds
param_bounds = ([0, -10, -10, 0.01],  # Lower bounds (beta1, beta2, beta3, lambda)
                [10, 10, 10, 10])     # Upper bounds (beta1, beta2, beta3, lambda)
# Optimization (using curve_fit)
popt, pcov = curve_fit(nelson_siegel, tau, yields, p0=initial_guess, bounds=param_bounds, maxfev=10000, ftol=1e-6)
# Print optimized parameters
beta1, beta2, beta3, lambd = popt
print(f"Optimized Parameters: beta1 = {beta1:.4f}, beta2 = {beta2:.4f}, beta3 = {beta3:.4f}, lambda = {lambd:.4f}")
# Calculate fitted yields using the model
fitted_yields = nelson_siegel(tau, beta1, beta2, beta3, lambd)
# Print fitted values
print(f"Fitted Yields: {fitted_yields}")
# Print the covariance matrix of the optimized parameters
print(f"Parameter Covariance Matrix: {pcov}")
</code></pre>
<p>Results</p>
<pre><code>Optimized Parameters: beta1 = 10.0000, beta2 = -7.6314, beta3 = 2.8412, lambda = 0.0100
Fitted Yields: [2.38171526 2.47248209 2.62498897 2.87073598]
Parameter Covariance Matrix: [[inf inf inf inf]
[inf inf inf inf]
[inf inf inf inf]
[inf inf inf inf]]
C:\Users\CHOI\AppData\Local\Temp\ipykernel_32884\269502971.py:29: OptimizeWarning: Covariance of the parameters could not be estimated
popt, pcov = curve_fit(nelson_siegel, tau, yields, p0=initial_guess, bounds=param_bounds, maxfev=10000, ftol=1e-6)
</code></pre>
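<p>One likely cause worth ruling out: the example fits four parameters to four data points. Unless <code>absolute_sigma=True</code> is passed, <code>curve_fit</code> scales the covariance by the residual variance <code>SSR / (n_points - n_params)</code>, which needs at least one spare data point; with zero residual degrees of freedom it fills <code>pcov</code> with <code>inf</code> and emits exactly this <code>OptimizeWarning</code>. (Parameters pinned at their bounds, like <code>beta1 = 10</code> and <code>lambda = 0.01</code> here, can also make the Jacobian singular.) A quick check, plain Python:</p>

```python
# Residual degrees of freedom for the fit in the question.
n_points = 4  # len(tau): 0.25, 2, 5, 10
n_params = 4  # beta1, beta2, beta3, lambd

dof = n_points - n_params
# With dof == 0 the residual variance SSR/dof is undefined, so the
# parameter covariance cannot be estimated.
print(dof)  # 0
```

<p>Adding more maturities to the curve (or fixing some parameters) gives the fit the spare points it needs.</p>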
|
<python><optimization>
|
2024-11-28 05:49:56
| 0
| 427
|
user3685918
|
79,232,565
| 8,554,833
|
App Registration error AADSTS500011 show tenant is as domain instead of long string provided
|
<p>I've tried numerous times to register an app and connect to in in python:</p>
<pre><code>app_id = '670...'
tenant_id = '065...'
client_secret_value = 'YJr...'
import requests
import msal
authority = f'https://login.microsoftonline.com/{tenant_id}'
scopes = ['https://analysis.microsoft.net/powerbi/api/.default']
app = msal.ConfidentialClientApplication(app_id, authority=authority, client_credential=client_secret_value)
result = None
result = app.acquire_token_for_client(scopes=scopes)
</code></pre>
<p>Overview:
<a href="https://i.sstatic.net/X7VQpMcg.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/X7VQpMcg.png" alt="enter image description here" /></a></p>
<p><a href="https://i.sstatic.net/4wktaMLj.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/4wktaMLj.png" alt="enter image description here" /></a></p>
<p>I feel like I've followed this video exactly:
<a href="https://www.youtube.com/watch?v=3Fu8FjvYvyc&t=577s&ab_channel=JJPowerBI" rel="nofollow noreferrer">https://www.youtube.com/watch?v=3Fu8FjvYvyc&t=577s&ab_channel=JJPowerBI</a>
I'm up to minute 8:38.</p>
<p>I'm getting the following error message, and googling it shows me the tenant ID should be the ID and not the domain name. I'm not sure why that's happening or what I need to change to get this to work.</p>
<p><a href="https://i.sstatic.net/8MJA1ZfT.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/8MJA1ZfT.png" alt="enter image description here" /></a></p>
<p>Edit: adding API permissions.
I am the owner of the subscription.</p>
<p><a href="https://i.sstatic.net/LRPF9qld.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/LRPF9qld.png" alt="enter image description here" /></a>
<a href="https://i.sstatic.net/BTORhXzu.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/BTORhXzu.png" alt="enter image description here" /></a>
<a href="https://i.sstatic.net/2fWJjA6M.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/2fWJjA6M.png" alt="enter image description here" /></a></p>
<p>Edit 2: Looks a little different than the comment, but I enabled this and it says it could take 15 minutes to update.</p>
<p><a href="https://i.sstatic.net/0bJT2UgC.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/0bJT2UgC.png" alt="enter image description here" /></a></p>
|
<python><azure><powerbi-embedded><azure-app-configuration>
|
2024-11-28 04:10:59
| 1
| 728
|
David 54321
|
79,232,303
| 491,637
|
Seaborn heatmap with extra row and extra column
|
<p>I made a seaborn heatmap and I would like to add an extra row and column, as in the image below. They must be independent; I will fill each one with its own data.</p>
<p><a href="https://i.sstatic.net/eAVozS4v.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/eAVozS4v.jpg" alt="heatmap with extra column and row" /></a></p>
<p>I made one through a 'quick fix': adding an extra row and column to the dataframe, and then plotting white vertical and horizontal lines.</p>
<p>Is there a 'correct' way to do this? Is there some feature for this purpose?</p>
|
<python><matplotlib><seaborn><heatmap>
|
2024-11-28 00:56:59
| 0
| 2,077
|
walves
|
79,232,261
| 259,485
|
How to use Google Cloud ADC to access Google Drive API from Python with personal account (not service account)
|
<p>Let's say you need to run a script locally that needs to connect to Google Workspace resources like Drive and Sheets. How do you do this from Python leveraging Google Application Default Credentials?</p>
|
<python><google-sheets><google-drive-api>
|
2024-11-28 00:26:27
| 2
| 8,714
|
Kristofer
|
79,232,116
| 345,716
|
What is the intended/best-practice PYTHONPATH workflow for developing a python module with a venv?
|
<p>I'm a relative Python newbie and find myself going back and forth on how to set <code>sys.path</code> "nicely" while developing my module with a <code>venv</code>, so I thought I'd ask what the best-practice way is to make</p>
<pre><code>python3 -m my_module.src1 args
</code></pre>
<p>use the files I'm developing in <code>src/**</code>?</p>
<p>My module uses hatchling in a <code>pyproject.toml</code> file and I set it up with this structure:</p>
<pre><code>/path/to/work/dir
    pyproject.toml
    venv/
    src/
        my_module/
            __init__.py
            src1.py
            src2.py
</code></pre>
<p>I create the <code>venv</code> with</p>
<pre><code>python3 -m venv venv
source ./venv/bin/activate
# Install all dependencies for my_module
pip3 install .
</code></pre>
<p>What works is to:</p>
<pre><code>source $PWD/venv/bin/activate
export PYTHONPATH=$PYTHONPATH:$PWD/src
</code></pre>
<p>in every shell, perhaps with <code>direnv</code>.</p>
<p>Is that what is intended?</p>
<p>I've read so many pages about this recently, so please excuse the lack of references:</p>
<ul>
<li>I've seen recommendations to modify <code>./venv/bin/activate</code> to set <code>PYTHONPATH</code>, and recommendations to definitely not do that.</li>
<li>I've seen recommendations to create a <code>./venv/lib/python3.12/site-packages/some.pth</code> file with a full path to my <code>./src</code> and that works as long as <code>my_module</code> is not installed in the <code>venv</code>, because that path is <em>appended</em> to <code>sys.path</code>. But <code>my_module</code> <em>is</em> installed in <code>venv</code> since I used <code>pip3 install .</code> to install all my dependencies (and to test installation). So using a <code>.pth</code> file is useless because regardless, it will find and use the installed <code>my_module</code>.</li>
</ul>
<p>I was just wondering if there was some best-practice way to do this.</p>
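<p>Whichever mechanism you settle on — <code>PYTHONPATH</code>, a <code>.pth</code> file, or an editable install (<code>pip install -e .</code>, which hatchling supports via PEP 660 and which avoids the "installed copy shadows src" problem described above) — it helps to verify which copy of the module Python will actually load. A stdlib sketch (using <code>json</code> as a stand-in, since <code>my_module</code> only exists in your project):</p>

```python
import importlib.util

def module_origin(name):
    # Returns the file path Python would load for `name`,
    # or None if the module cannot be found at all.
    spec = importlib.util.find_spec(name)
    return getattr(spec, "origin", None)

print(module_origin("json"))  # e.g. .../lib/python3.x/json/__init__.py
```

<p>Running <code>module_origin("my_module")</code> after each change tells you immediately whether <code>src/</code> or the venv's <code>site-packages</code> copy is winning.</p>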
|
<python>
|
2024-11-27 22:59:03
| 1
| 16,339
|
Peter V. Mørch
|
79,232,088
| 4,180,670
|
Flask error related with SERVER_NAME after upgrading to Flask 3.1.0
|
<p>My application uses subdomains (every logged user has his own: user.domain.com) and Blueprints with no subdomains.</p>
<p>Till now I didn't set <em>subdomain_matching</em> and I set
SERVER_NAME = "domain.com" and SESSION_COOKIE_DOMAIN = ".domain.com".</p>
<p>After upgrading to Flask 3.1.0 from 3.0.3 the application is not working anymore: every URL requested gets a 404 error (but the error page is correctly loaded).</p>
<p>If I delete the "subdomain" parameter from routes, then <a href="http://www.domain.com" rel="nofollow noreferrer">www.domain.com</a> is working but I can't get username.domain.com.</p>
<p>I see in Flask documentation "Changes" section that something has changed in 3.1.0 about SERVER_NAME, but I can't understand how this parameter has to be set in order to have subdomains working.</p>
|
<python><flask>
|
2024-11-27 22:43:24
| 1
| 463
|
Marco Evasi
|
79,231,900
| 2,142,994
|
Parse JSON in scala and do get with default value
|
<p>Sample JSON read from file:</p>
<pre><code>{
    "category": "[{\"a\":3,\"b\":11,\"c\":86}]",
    "ids": "[\"1234\", \"5678\"]",
    "uid": "55555",
    "flag": true
}
</code></pre>
<p>Current Code:</p>
<pre><code>val filePath = args(0)
val jsonFileString = Source.fromFile(filePath).getLines.mkString.stripMargin
val mapper = new ObjectMapper().registerModules(DefaultScalaModule)
val jsonStringObj = mapper.readTree(jsonFileString)
val ids = jsonStringObj.get("ids").asText()
</code></pre>
<p>This works fine to get me the <code>ids</code> when the json file contains it, but I want to provide a default value in case the key "ids" is not present in the JSON file. How do I do that?</p>
<p>In Python I could do something like <code>json_dict.get('ids', 'default_value')</code>. I am looking for an equivalent.</p>
|
<python><json><scala><jackson>
|
2024-11-27 21:13:23
| 1
| 28,435
|
Ani Menon
|
79,231,895
| 12,932,447
|
How to get updated values in NiceGUI
|
<p>I'm writing a NiceGUI frontend for my Python code.
This is my first time writing a frontend, and I'm not very used to web-framework concepts.</p>
<p>Essentially, this frontend asks the user to insert the following variables (I know <code>range</code> is not a good variable name):</p>
<pre class="lang-py prettyprint-override"><code>variants_text: str
keywords_text: str
seed: int | None
min_articles: int | None
max_articles: int | None
range: int | None
limit: int | None
</code></pre>
<p>This is the code I have written so far (the only dependency is <code>nicegui</code>, and you need a dummy <code>.txt</code> file to pass Step 1):</p>
<pre class="lang-py prettyprint-override"><code>from nicegui import ui


class ArticleRetrieval:
    def __init__(self):
        self.page_title = "Article Retrieval"
        self.headers()
        self.stepper = ui.stepper().props("vertical").classes("w-full")
        self.step1_title = "Add input files"
        self.step2_title = "Add optional arguments"
        self.step3_title = "Retrieve articles"
        self.variants_uploaded = False
        self.variants_text = None
        self.keywords_uploaded = False
        self.keywords_text = None
        self.seed = None
        self.min_articles = None
        self.max_articles = None
        self.range = None
        self.limit = None
        self.positive_integer_validator = {
            "You can only insert a non-negative integer": self._acceptable_num
        }

    def _acceptable_num(self, n) -> bool:
        return (n is None) or (n == int(n) and n >=0)

    def _variants_uploaded(self, e):
        self.variants_uploaded = True
        self.variants_text = e.content.readlines()
        if self.keywords_uploaded is True:
            self.step1_next_button.visible = True

    def _keywords_uploaded(self, e):
        self.keywords_uploaded = True
        self.keywords_text = e.content.readlines()
        if self.variants_uploaded is True:
            self.step1_next_button.visible = True

    def _update_seed(self, e):
        self.seed = e.value
        self._step2_next_button_visibility()

    def _update_min_articles(self, e):
        self.min_articles = e.value
        self._step2_next_button_visibility()

    def _update_max_articles(self, e):
        self.max_articles = e.value
        self._step2_next_button_visibility()

    def _update_range(self, e):
        self.range = e.value
        self._step2_next_button_visibility()

    def _update_limit(self, e):
        self.limit = e.value
        self._step2_next_button_visibility()

    def _step2_next_button_visibility(self):
        if all(
            self._acceptable_num(x) for x in [
                self.seed,
                self.min_articles,
                self.max_articles,
                self.max_articles,
                self.limit,
            ]
        ):
            self.step2_next_button.set_visibility(True)
        else:
            self.step2_next_button.set_visibility(False)

    def _ask_confirm(self):
        ui.markdown(
            f"""
            ##### DO YOU CONFIRM THIS DATA?<br>
            **variants**: {self.variants_text}<br>
            **keywords**: {self.keywords_text}<br>
            **seed**: {self.seed}<br>
            **min\\_articles**: {self.min_articles}<br>
            **max\\_articles**: {self.max_articles}<br>
            **range**: {self.range}<br>
            **limit**: {self.limit}<br>
            """
        )

    def headers(self):
        ui.page_title(self.page_title)
        ui.markdown(f"#{self.page_title}")

    def step1(self):
        with ui.step(self.step1_title):
            ui.upload(
                label="Variants (.txt file)",
                max_files=1,
                auto_upload=True,
                on_upload=self._variants_uploaded,
            ).props("accept=.txt").classes("max-w-full")
            ui.upload(
                label="Keywords (.txt file)",
                max_files=1,
                auto_upload=True,
                on_upload=self._keywords_uploaded,
            ).props("accept=.txt").classes("max-w-full")
            with ui.stepper_navigation():
                self.step1_next_button = ui.button(
                    "NEXT",
                    icon="arrow_forward_ios",
                    on_click=self.stepper.next
                )
                self.step1_next_button.visible = False

    def step2(self):
        with ui.step(self.step2_title):
            ui.number(
                label="Seed",
                validation=self.positive_integer_validator,
                on_change=self._update_seed
            ).props("clearable")
            ui.number(
                label="Minimum articles",
                validation=self.positive_integer_validator,
                on_change=self._update_min_articles
            ).props("clearable")
            ui.number(
                label="Maximum articles",
                validation=self.positive_integer_validator,
                on_change=self._update_max_articles
            ).props("clearable")
            ui.number(
                label="Range",
                validation=self.positive_integer_validator,
                on_change=self._update_range
            ).props("clearable")
            ui.number(
                label="Limit",
                validation=self.positive_integer_validator,
                on_change=self._update_limit
            ).props("clearable")
            with ui.stepper_navigation():
                self.step2_back_button = ui.button(
                    "BACK",
                    icon="arrow_back_ios",
                    on_click=self.stepper.previous,
                )
                self.step2_next_button = ui.button(
                    "NEXT",
                    icon="arrow_forward_ios",
                    on_click=self.stepper.next
                )
                self.step2_next_button.visibility = False

    def step3(self):
        with ui.step(self.step3_title):
            self._ask_confirm()
            with ui.stepper_navigation():
                ui.button(
                    "BACK",
                    icon="arrow_back_ios",
                    on_click=self.stepper.previous,
                )
                ui.button(
                    "RUN",
                    icon="rocket_launch",
                    on_click=lambda x: None  # HERE I WILL PERFORM THE ACTUAL JOB
                )

    def run(self):
        with self.stepper:
            self.step1()
            self.step2()
            self.step3()
        ui.run()


ArticleRetrieval().run()
</code></pre>
<p>My question is about the <code>_ask_confirm</code> method.<br>
It always prints the initial values of attributes, the ones I've defined in the <code>__init__</code> method (i.e. all <code>None</code>).</p>
<p><a href="https://i.sstatic.net/HpMh2COy.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/HpMh2COy.png" alt="enter image description here" /></a></p>
<p>I would like that string to be updated every time I update values in Step1 and Step2.</p>
<p>The thing I don't understand is that if, for example, I change the <code>_update_seed</code> method like this</p>
<pre class="lang-py prettyprint-override"><code>def _update_seed(self, e):
    print(self.seed)
    self.seed = e.value
    print(self.seed)
    self._step2_next_button_visibility()
</code></pre>
<p>It prints the updated values.</p>
|
<python><nicegui>
|
2024-11-27 21:10:41
| 1
| 875
|
ychiucco
|
79,231,883
| 9,251,158
|
Frozen OS error when walking the file tree of an external drive
|
<p>I want to normalize filepaths (removing accents) in an external drive and I use <code>os.walk()</code>. At one point, the script freezes and after I cancel, I see this message:</p>
<pre><code>^CTraceback (most recent call last):
  File "~/normalize_filepaths.py", line 2
    for root, dirs, files in os.walk(target, topdown = False):
  File "<frozen os>", line 377, in walk
KeyboardInterrupt
</code></pre>
<p>Here is a snippet of the relevant code:</p>
<pre><code>def normalize(fp):
    """
    >>> normalize("/Volumes/MM_BUP/MIGUEL/Acólitos")
    '/Volumes/MM_BUP/MIGUEL/Acolitos'
    >>> normalize("/Volumes/MM_BUP/MIGUEL/Acólitos")
    '/Volumes/MM_BUP/MIGUEL/Acolitos'
    >>> normalize("'This is my cup.' _ ゼロコ ZEROKO _ 紅茶の遊び方 _ mime _ clowning-8MgRJAXn1tE.mp4")
    "'This is my cup.' _ ZEROKO _  _ mime _ clowning-8MgRJAXn1tE.mp4"
    >>> normalize(" großer Tag")
    ' grosser Tag'
    """
    fp = fp.replace("ß", "ss")
    name_clean = unicodedata.normalize('NFD', fp)
    return name_clean.encode('ascii', 'ignore').decode("ascii")


def main(target="/some/path"):
    for root, dirs, files in os.walk(target, topdown = False):
        for name in files + dirs:
            filepath = os.path.join(root, name)
            clean = normalize(name)
            new_filepath = os.path.join(root, clean)
            shutil.move(filepath, new_filepath)
</code></pre>
<p>How can I avoid this frozen OS error and visit all files and directories?</p>
|
<python><operating-system><runtime-error>
|
2024-11-27 21:06:37
| 1
| 4,642
|
ginjaemocoes
|
79,231,864
| 1,958,900
|
`typing.get_origin` vs. `TypeAlias.__origin__`
|
<p>Python's docs define <a href="https://docs.python.org/3/library/stdtypes.html#genericalias.__origin__" rel="nofollow noreferrer"><code>GenericAlias.__origin__</code></a> as:</p>
<blockquote>
<p>This attribute points at the non-parameterized generic class
<code>list[int].__origin__ # returns list</code></p>
</blockquote>
<p>There is also <a href="https://docs.python.org/3/library/typing.html#typing.get_origin" rel="nofollow noreferrer"><code>typing.get_origin</code></a>, which is defined differently:</p>
<blockquote>
<p>Get the unsubscripted version of a type: for a typing object of the form X[Y, Z, ...] return X.</p>
</blockquote>
<p>So usually, there is no practical difference - <code>get_origin(cls)</code> is <em>usually</em> equivalent to <code>getattr(cls, '__origin__', None)</code>. But I've run into at least one case where they're different:</p>
<pre class="lang-py prettyprint-override"><code>>>> get_origin(Annotated[int, 3])
typing.Annotated
>>> Annotated[int, 3].__origin__
int
</code></pre>
<p>So, my questions are:</p>
<ul>
<li>Why are there 2 conflicting definitions of "origin"?</li>
<li>If I don't care about the metadata encoded in a <code>typing.Annotated</code>, and just want to know the type hint, is there a "right way" to strip out the <code>Annotated</code> wrapper? E.g., do I need to unwrap every single type object with <code>if get_origin(cls) is typing.Annotated: cls = cls.__origin__</code> to make sure that <code>get_origin</code> will behave as expected?</li>
</ul>
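<p>For the second question, one pattern that relies only on documented <code>typing</code> helpers (sketch; note that nested <code>Annotated</code> wrappers are flattened by <code>typing</code> itself, so the loop usually runs at most once):</p>

```python
from typing import Annotated, get_args, get_origin

def strip_annotated(tp):
    # Peel Annotated[...] wrappers until a plain type hint remains;
    # get_args(Annotated[X, m1, m2]) is (X, m1, m2), so the first
    # argument is the underlying hint.
    while get_origin(tp) is Annotated:
        tp = get_args(tp)[0]
    return tp
```

<p>Using <code>get_args</code> rather than <code>__origin__</code> keeps the code on the <code>typing</code>-level definition of "origin", sidestepping the discrepancy described above.</p>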
|
<python><python-typing>
|
2024-11-27 20:55:35
| 1
| 7,145
|
Aaron
|
79,231,789
| 5,965,685
|
Python json.dump scientific notation formatting
|
<p>Due to suspected platform differences between developers, <code>json.dump</code> is formatting scientific notation in different ways (on one person's machine it formats to <code>1e-6</code>, and on others it formats to <code>1e-06</code>). The files are committed to our git history, so having to constantly revert the changes is annoying.</p>
<p>Is there a way to control how scientific numbers are formatted?</p>
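<p>In CPython itself <code>repr(1e-06)</code> is consistently <code>'1e-06'</code>, so a <code>1e-6</code> form usually points at a different serializer or interpreter build on one machine. Whatever the cause, one way to keep the committed files stable is to normalize the exponent form after serialization — a hedged sketch (caveat: the regex would also touch string values that happen to contain an exponent-looking pattern):</p>

```python
import json
import re

def dump_normalized(obj):
    # Serialize, then strip leading zeros from exponents so
    # "1e-06" and "1e-6" both land on the shorter form.
    text = json.dumps(obj)
    return re.sub(r"([eE])([+-])0+(\d)", r"\1\2\3", text)
```

<p>Running every developer's output through the same normalizer (e.g. in a pre-commit hook) removes the diff noise regardless of where the difference originates.</p>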
|
<python><json><format><dump>
|
2024-11-27 20:28:59
| 2
| 431
|
mitchute
|
79,231,554
| 1,079,075
|
Pydantic nestled list type with sub-list minimum length
|
<p>I want to create a Pydantic class wrapping a list of string sub-lists, where each sub-list has to be at least of length two.</p>
<p>For example, the following are valid:</p>
<ul>
<li><code>[]</code> empty list with no sublists is valid</li>
<li><code>[["a", "b"], ["c", "d", "e"]]</code> is valid because each sub-list is of at least length 2</li>
</ul>
<p>The following are not valid:</p>
<ul>
<li><code>[[]]</code> an empty sub-list is not valid</li>
<li><code>[["a"], ["b", "c"]]</code> one of the sub-lists has a length less than two, which is invalid</li>
</ul>
<p>How do I do this in Pydantic? I don't think I can nest <code>conlist</code>? Can I use <code>typing.Annotated</code> to accomplish this?</p>
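<p>Whatever the exact Pydantic spelling turns out to be (in v2 the <code>typing.Annotated</code> direction hinted at above should work, something along the lines of <code>list[Annotated[list[str], Field(min_length=2)]]</code> — untested here), the constraint itself reduces to a simple predicate, which is handy as a reference for tests:</p>

```python
def is_valid(data):
    # A list of string sub-lists, each of length >= 2;
    # the outer list may be empty.
    return (
        isinstance(data, list)
        and all(
            isinstance(sub, list)
            and len(sub) >= 2
            and all(isinstance(s, str) for s in sub)
            for sub in data
        )
    )
```

<p>The four examples from the question map directly onto this predicate.</p>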
|
<python><pydantic>
|
2024-11-27 18:54:25
| 2
| 9,446
|
Seanny123
|
79,231,473
| 7,995,293
|
Can Polars with calamine engine be coerced into failing more gracefully?
|
<p>I have 10s of thousands of Excel files to which I'm applying validation using Polars. Some Excel files have a problem that spawns an <code>index out of bounds</code> panic in the pyo3 runtime when using <code>engine=calamine</code>. This issue does not occur when using <code>engine=xlsx2csv</code>. The Excel problem is known and trivial, but due to the workflow pipeline at my company, I have little control over its occasional recurrence. So, I want to be able to handle this panic more gracefully.</p>
<p>A minimum working example is truly minimal, just call <code>read_excel</code>:</p>
<pre><code>from pathlib import Path
import polars as pl

root = Path("/path/to/globdir")


def try_to_open():
    for file in root.rglob("*/*_fileID.xlsx"):
        print(f"\r{file.name}", end='')
        try:
            df = pl.read_excel(file, engine="calamine", infer_schema_length=0)
        except Exception as e:
            print(f"{file.name}: {e}", flush=True)


def main():
    try_to_open()


if __name__ == "__main__":
    main()
</code></pre>
<p>When a 'contaminated' excel file is processed, it fails like so:</p>
<pre><code>thread '<unnamed>' panicked at /root/.cargo/registry/src/index.crates.io-6f17d22bba15001f/calamine-0.26.1/src/xlsx/cells_reader.rs:347:39:
index out of bounds: the len is 2585 but the index is 2585
note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace
Traceback (most recent call last):
File "/path/to/script.py", line 18, in <module>
main()
File "/path/to/script.py", line 15, in main
try_to_open()
File "/path/to/script.py", line 10, in try_to_open
df = pl.read_excel(file, engine="calamine", infer_schema_length=0)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/path/to/venv/lib/python3.12/site-packages/polars/_utils/deprecation.py", line 92, in wrapper
return function(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^
File "/path/to/venv/lib/python3.12/site-packages/polars/_utils/deprecation.py", line 92, in wrapper
return function(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^
File "/path/to/venv/lib/python3.12/site-packages/polars/io/spreadsheet/functions.py", line 299, in read_excel
return _read_spreadsheet(
^^^^^^^^^^^^^^^^^^
File "/path/to/venv/lib/python3.12/site-packages/polars/io/spreadsheet/functions.py", line 536, in _read_spreadsheet
name: reader_fn(
^^^^^^^^^^
File "/path/to/venv/lib/python3.12/site-packages/polars/io/spreadsheet/functions.py", line 951, in _read_spreadsheet_calamine
ws_arrow = parser.load_sheet_eager(sheet_name, **read_options)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/path/to/venv/lib/python3.12/site-packages/fastexcel/__init__.py", line 394, in load_sheet_eager
return self._reader.load_sheet(
^^^^^^^^^^^^^^^^^^^^^^^^
pyo3_runtime.PanicException: index out of bounds: the len is 2585 but the index is 2585
</code></pre>
<p>As you can see, the try/except block in the Python script does not catch the PanicException.</p>
<p>I want to be able to capture the name of the file that has failed. Is there a way to coerce Calamine's child threads to collapse and return a fail code, instead of crashing everything?</p>
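<p>One detail worth checking first: pyo3's <code>PanicException</code> deliberately derives from <code>BaseException</code>, not <code>Exception</code>, precisely so that a blanket <code>except Exception</code> does not silently swallow it — which would explain why the block above misses it. Since the traceback shows the panic does unwind into Python as an exception, a broader handler should see it. Sketch (<code>FakePanic</code> stands in for <code>pyo3_runtime.PanicException</code>, which is not importable here):</p>

```python
class FakePanic(BaseException):
    """Stands in for pyo3_runtime.PanicException, which also derives
    from BaseException rather than Exception."""

def read_with_catch(path):
    try:
        # Stand-in for: pl.read_excel(path, engine="calamine", ...)
        raise FakePanic("index out of bounds")
    except Exception:
        return None  # never reached: FakePanic is not an Exception
    except BaseException as e:
        return f"{path}: {e}"  # this branch does catch the panic
```

<p>If a future panic is configured to abort the process instead of unwinding, catching won't help and running each read in a child process (e.g. via <code>multiprocessing</code>) is the remaining fallback, since the crash then only takes down the worker.</p>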
|
<python><exception><python-polars><pyo3>
|
2024-11-27 18:26:03
| 1
| 399
|
skytwosea
|
79,231,405
| 4,119,822
|
No enum for numpy uintp?
|
<p>I am trying to wrap a C pointer array of type <code>size_t</code> with a numpy <code>ndarray</code> via Cython using the following:</p>
<pre class="lang-none prettyprint-override"><code>cimport numpy as cnp
from libcpp.vector cimport vector
cnp.import_array()
cdef size_t num_layers = 10
cdef vector[size_t] steps_taken_vec = vector[size_t]()
steps_taken_vec.resize(3 * num_layers)
cdef size_t* steps_taken_ptr = steps_taken_vec.data()
cdef cnp.npy_intp[2] shape = [3, num_layers]
cdef cnp.npy_intp ndim = 2
self.shooting_method_steps_taken_array = cnp.PyArray_SimpleNewFromData(
    ndim,
    &shape[0],
    cnp.NPY_UINTP,    # <-- This is the problem
    steps_taken_ptr)  # steps_taken_ptr is a size_t*
</code></pre>
<p>The above produces the error "cimported module has no attribute 'NPY_UINTP'". According to numpy's documentation there should be an enum that directs numpy to create an array using size_t: <a href="https://numpy.org/devdocs/reference/c-api/dtype.html#c.NPY_TYPES.NPY_UINTP" rel="nofollow noreferrer">https://numpy.org/devdocs/reference/c-api/dtype.html#c.NPY_TYPES.NPY_UINTP</a></p>
<p>The <code>PyArray_SimpleNewFromData</code> API requires an enum that defines the type used to create the ndarray.</p>
<p>However, the actual <code>__init__.pxd</code> does not appear to have that enum. It does set the type correctly, see <a href="https://github.com/numpy/numpy/blob/v1.26.4/numpy/__init__.pxd#L25" rel="nofollow noreferrer">line 25 here</a>, but there is no enum in <a href="https://github.com/numpy/numpy/blob/9815c16f449e12915ef35a8255329ba26dacd5c0/numpy/__init__.pxd#L27" rel="nofollow noreferrer">this list</a>.</p>
<p>Those links, and my code, are using numpy 1.26.4. I looked ahead at 2.0+ and see that there were some definitional changes to this type, but the enum still appears to be missing (see <a href="https://github.com/numpy/numpy/blob/4e8f724fbc136b1bac1c43e24d189ebc45e056eb/numpy/__init__.pxd#L108" rel="nofollow noreferrer">here</a>).</p>
<p>As a workaround, I am using <code>cnp.NPY_UINT64</code>, which works, but I am not sure it is guaranteed to be the same size as <code>size_t</code> across platforms and into the future.</p>
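<p>As a runtime sanity check for that workaround (a sketch, not a cross-platform guarantee), one can compare the item sizes from Python; <code>np.uintp</code> is the Python-level dtype corresponding to <code>NPY_UINTP</code>:</p>

```python
import ctypes

import numpy as np

# Compare NumPy's pointer-sized unsigned integer against C's size_t.
# On common 64-bit platforms both are 8 bytes, so NPY_UINT64 happens
# to line up with size_t there; a 32-bit build would break the match.
matches = np.dtype(np.uintp).itemsize == ctypes.sizeof(ctypes.c_size_t)
print(matches)
print(np.dtype(np.uint64).itemsize)
```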
<p>Am I missing something here?</p>
|
<python><numpy><cython>
|
2024-11-27 18:01:49
| 1
| 329
|
Oniow
|
79,231,381
| 2,800,329
|
Convert JSON to set of nested Python classes
|
<p>I need to process multiple messages in JSON format. Each message has its own nested structure. I would like to create a Python SDK to process these messages. My idea is to map each JSON structure onto a set of nested Python classes. Currently I'm doing this manually, but it is a tedious task.</p>
<p>Please find an example JSON message below:</p>
<pre><code>{
"GlobalGnbId": {
"PlmnIdentity": {
"Data": [
19,
241,
132
]
},
"GnbId": {
"Value": 1,
"Length": 22
}
},
"OptionalGnbDuId": 1
}
</code></pre>
<p>Please find below my own handcrafted set of Python classes to work with this JSON message:</p>
<pre><code>class PlmnIdentity(BaseModel):
"""Class for PLMN identity"""
Data: list[int]
class GnbId(BaseModel):
"""Class for gNodeB ID"""
Value: int
Length: int
class GlobalGnbId(BaseModel):
"""Class for global gNodeB ID"""
PlmnIdentity: PlmnIdentity
GnbId: GnbId
class NodeId(BaseModel):
"""Class for node ID"""
GlobalGnbId: GlobalGnbId
OptionalGnbDuId: int
</code></pre>
<p>Finally, please find below a full minimal example:</p>
<pre><code>from pydantic import BaseModel, TypeAdapter
import json
class PlmnIdentity(BaseModel):
"""Class for PLMN identity"""
Data: list[int]
class GnbId(BaseModel):
"""Class for gNodeB ID"""
Value: int
Length: int
class GlobalGnbId(BaseModel):
"""Class for global gNodeB ID"""
PlmnIdentity: PlmnIdentity
GnbId: GnbId
class NodeId(BaseModel):
"""Class for node ID"""
GlobalGnbId: GlobalGnbId
OptionalGnbDuId: int
node_id_str = \
"""
{
"GlobalGnbId": {
"PlmnIdentity": {
"Data": [
19,
241,
132
]
},
"GnbId": {
"Value": 1,
"Length": 22
}
},
"OptionalGnbDuId": 1
}
"""
# NodeId as class
node_id_class = TypeAdapter(NodeId).validate_json(node_id_str)
print(node_id_class)
print(node_id_class.GlobalGnbId)
print(node_id_class.GlobalGnbId.PlmnIdentity)
print(node_id_class.GlobalGnbId.PlmnIdentity.Data)
print(node_id_class.GlobalGnbId.GnbId)
print(node_id_class.GlobalGnbId.GnbId.Value)
print(node_id_class.GlobalGnbId.GnbId.Length)
print(node_id_class.OptionalGnbDuId)
# NodeId as dictionary
node_id_dict = node_id_class.model_dump()
print(node_id_dict)
</code></pre>
<p>My question: is there an automatic or semi-automatic way to map a nested JSON message to a set of Python classes?</p>
|
<python><json><dictionary><class>
|
2024-11-27 17:51:57
| 2
| 1,053
|
mabalenk
|
79,231,179
| 405,017
|
Determining base class signatures for subclasses of a Pydantic BaseModel
|
<p><strong>TL;DR</strong>: Is there a way to:</p>
<ul>
<li>have a custom initializer on a sub-subclass of a <code>pydantic.BaseModel</code>,</li>
<li>that shows a full and complete signature in Pylance,</li>
<li><em>without</em> manually copying all the field names into every subclass constructor?</li>
</ul>
<h3>Simplified</h3>
<p>Let's say I have this setup:</p>
<pre class="lang-py prettyprint-override"><code>from pydantic import BaseModel
class NumberWithMetadata(BaseModel):
value: float | int
source: str
class FloatWM(NumberWithMetadata):
value: float
</code></pre>
<p>Good so far. In VS Code when I type <code>FloatWM(</code> Pylance via IntelliSense shows me the full and correct signature, limited to floats only:<br />
<code>(*, value: float, source: str) -> FloatWM</code></p>
<p>However, I want my subclass to perform custom logic during initialization, so I need an initializer. I don't want the child class to have to know the details of the fields in the base class, so I <code>**kwargs</code> it:</p>
<pre class="lang-py prettyprint-override"><code>class FloatWM(NumberWithMetadata):
value: float
def __init__(self, value: float, **kwargs):
super().__init__(value = value * 1.618, **kwargs)
</code></pre>
<p>This functions as desired at runtime, but now when I type <code>FloatWM(</code> Pylance shows only:<br />
<code>(*, value: float, **kwargs: Unknown) -> FloatWM</code></p>
<p>Is there a syntax or pattern I can use to get clean IntelliSense in VS Code for the subclass?</p>
<h3>The Real Setup</h3>
<p>To avoid an XY problem with my simplified code above:</p>
<p>The core use case is to have many different fields in many different classes, some of which wrap a <code>pint.Quantity</code>. Each field specifies the required dimensionality and base unit for that field. The dimensionality is enforced when constructing the value for the field, and the default units are applied if not supplied.</p>
<pre class="lang-py prettyprint-override"><code>class ValueWithAttribution(BaseModel):
value: Any
source: str
# more fields here
class NumberWithAttribution(ValueWithAttribution):
value: float | int
type QuantityOrLiteral = float | int | str | PydanticPintQuantity
class QuantityWithAttribution(ValueWithAttribution):
value: Quantity
def __init__(self, value: QuantityOrLiteral, **kwargs):
units = self.__class__.__annotations__.get("value").__metadata__[0].units
if isinstance(value, (float, int)):
super().__init__(value=Quantity(value, units), **kwargs)
else:
if isinstance(value, str):
value = Quantity(value)
if value.units.is_compatible_with(units):
super().__init__(value=value, **kwargs)
else:
                raise ValueError(f"incompatible units: {value.units} vs {units}")  # raise hell
class Length(QuantityWithAttribution):
value: Annotated[Quantity, PydanticPintQuantity("meters")]
class Weight(QuantityWithAttribution):
value: Annotated[Quantity, PydanticPintQuantity("kg")]
class Person(MyBase):
height: Length
weight: Weight
me = Person(
height = Length('70 inches', source='I said so'),
weight = Weight(84, source='my treacherous scale')
)
</code></pre>
<hr />
<p>Outside the scope of this question, but related topic: if I want Pylance to accept the positional argument when constructing <code>Length()</code> and <code>Weight()</code>, I believe I also have to add this to each and every such leaf class (despite the fact that their parent class already has this signature):</p>
<pre class="lang-py prettyprint-override"><code>class Length(QuantityWithAttribution):
value: Annotated[Quantity, PydanticPintQuantity("meters")]
    # This is not functionally needed, but is required for
    # Pylance to accept that there's a positional argument :'(
def __init__(self, value: QuantityOrLiteral, **kwargs):
super().__init__(value=value, **kwargs)
</code></pre>
|
<python><python-typing><pydantic><pyright>
|
2024-11-27 16:47:26
| 1
| 304,256
|
Phrogz
|
79,231,172
| 662,345
|
`python -m build` can't find numpy include files when running in cibuildwheel
|
<p>I'm building a python module that contains a cython file which uses numpy. In some environments <code>python -m build</code> works as is; in other environments I need to set <code>C_INCLUDE_PATH=`python -c "import numpy; print(numpy.get_include())"` </code> beforehand.</p>
<p>But when I type <code>cibuildwheel</code>, I always get <code>fatal error: numpy/arrayobject.h: No such file or directory</code>. I can't figure out how I can help it find these files.</p>
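<p>For reference, a common alternative to exporting <code>C_INCLUDE_PATH</code> is to pass the header directory straight to the extension in <code>setup.py</code> (e.g. <code>include_dirs=[numpy.get_include()]</code> on the setuptools <code>Extension</code>), which survives cibuildwheel's isolated build environments. A small runnable check that this directory really contains the missing header:</p>

```python
import os

import numpy

# numpy.get_include() points at the directory that holds
# numpy/arrayobject.h, the header the compiler fails to find.
include_dir = numpy.get_include()
header = os.path.join(include_dir, "numpy", "arrayobject.h")
print(os.path.isdir(include_dir))
print(os.path.exists(header))
```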
|
<python><numpy><pip><cython><setuptools>
|
2024-11-27 16:45:09
| 1
| 7,039
|
Antonis Christofides
|
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.