| QuestionId (int64, 74.8M-79.8M) | UserId (int64, 56-29.4M) | QuestionTitle (string, 15-150 chars) | QuestionBody (string, 40-40.3k chars) | Tags (string, 8-101 chars) | CreationDate (string date, 2022-12-10 09:42:47 to 2025-11-01 19:08:18) | AnswerCount (int64, 0-44) | UserExpertiseLevel (int64, 301-888k) | UserDisplayName (string, 3-30 chars, nullable ⌀) |
|---|---|---|---|---|---|---|---|---|
78,587,828
| 7,201,487
|
Possibility to factorize configuration for multiple projects
|
<p>I have the following use case. I am working on a large project with multiple subtasks. For example, we are performing classification on some datasets, segmentation on others, and translation on yet another set of datasets (see below).</p>
<p>Each task has its own configuration. However, there is a part of the configuration that is shared among all tasks (such as the architecture of some network or specific blocks). This shared configuration is evolving, and each time I make a small change to it, I need to copy and paste the updated configuration into all the relevant folders.</p>
<p>Is there a way to centralize this configuration so that it can be imported into all the subtasks?</p>
<pre><code>my_project
| classification_task
|--| conf
|--|--| config.yaml
|--|--| models
|--|--| datasets
|--| train.py
|--| ...
| segmentation_task
|--| conf
|--|--| config.yaml
|--|--| models
|--|--| datasets
|--| train.py
|--| ...
| translation_task
|--| conf
|--|--| config.yaml
|--|--| models
|--|--| datasets
|--| train.py
|--| ...
</code></pre>
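<p>One approach, assuming Hydra is in use (the <code>fb-hydra</code> tag suggests it): keep the shared fragments in a single folder and add that folder to each task's config search path, so every task's defaults list can pull in the same files. The file names, keys, and the relative path below are hypothetical, a sketch rather than a drop-in fix:</p>

```yaml
# my_project/shared_conf/models/shared_net.yaml  (hypothetical shared file)
hidden_dim: 256
num_blocks: 4

# --- classification_task/conf/config.yaml ---
defaults:
  - models: shared_net        # found via the extra searchpath entry below
  - _self_

hydra:
  searchpath:
    # assumption: point this at the shared folder; an absolute path or a
    # pkg:// reference may be needed depending on how the tasks are launched
    - file://../shared_conf
```

<p>With this, a change to <code>shared_net.yaml</code> is picked up by every task, and each task's own <code>config.yaml</code> can still override individual keys.</p>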
|
<python><fb-hydra>
|
2024-06-06 16:00:08
| 1
| 559
|
schlodinger
|
78,587,778
| 9,544,507
|
How to disable autocommit with IBM Db2 for i in SQLAlchemy
|
<p>How do I turn off autocommit with ibm_db_sa and SQLAlchemy?</p>
<p>If the table has journaling on, this is not required; but this particular table does not have journaling on, so autocommit must be OFF.</p>
<p>I have tried:</p>
<pre class="lang-py prettyprint-override"><code>smt = f"db2+ibm_db://{user}:{password}@{hostname}:{port}/{database};autocommit=0"
engine = create_engine(smt, echo=True).execution_options(autocommit=False)
sm = sessionmaker(engine, autocommit=False, autoflush=False)

with sm.begin() as session:
    record = User(**user_dict)
    session.add(record)
</code></pre>
<p>but I get:
<code>sqlalchemy.exc.ProgrammingError: (ibm_db_dbi.ProgrammingError) ibm_db_dbi::ProgrammingError: Statement Execute Failed: [IBM][CLI Driver][AS] SQL7008N REXX variable "USERSG " contains inconsistent data. SQLSTATE=55019 SQLCODE=-7008</code></p>
<p>The JDBC connector has autocommit off and it works.</p>
<p>EDIT: setting autocommit=True gives the same error.</p>
<p>I have tried <a href="https://stackoverflow.com/a/8245270/9544507">https://stackoverflow.com/a/8245270/9544507</a> and this <a href="https://stackoverflow.com/questions/11872049/entity-framework-error-updating-iseries-record">Entity Framework error updating iSeries record</a>; both recommend disabling autocommit mode, but that does not seem to work.</p>
<p>I have also tried to set isolation_level to AUTOCOMMIT and NONE (<a href="https://docs.sqlalchemy.org/en/20/core/connections.html#understanding-the-dbapi-level-autocommit-isolation-level" rel="nofollow noreferrer">https://docs.sqlalchemy.org/en/20/core/connections.html#understanding-the-dbapi-level-autocommit-isolation-level</a>)</p>
<p>But that does not work either:
<code>sqlalchemy.exc.ArgumentError: Invalid value 'AUTOCOMMIT' for isolation_level. Valid isolation levels for ibm_db_sa are RS, UR, CURSOR STABILITY, RR, UNCOMMITTED READ, READ STABILITY, CS, REPEATABLE READ</code></p>
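<p>For what it's worth, one pattern that may help (a sketch, not a verified fix for SQL7008N): the connection-string flag and <code>execution_options</code> shown above never reach the DBAPI's own autocommit setting, but SQLAlchemy's <code>connect</code> event can toggle it on every new low-level connection. Whether ibm_db_dbi exposes <code>set_autocommit()</code> this way should be checked against your driver version; that call is an assumption here:</p>

```python
from sqlalchemy import create_engine, event

def engine_with_autocommit_off(url):
    """Engine that forces autocommit off on every new DBAPI connection."""
    engine = create_engine(url)

    @event.listens_for(engine, "connect")
    def _disable_autocommit(dbapi_conn, connection_record):
        # ibm_db_dbi is assumed to expose set_autocommit(); other DBAPIs
        # (e.g. sqlite3 on Python 3.12+) use an `autocommit` attribute.
        if hasattr(dbapi_conn, "set_autocommit"):
            dbapi_conn.set_autocommit(False)
        elif hasattr(dbapi_conn, "autocommit"):
            dbapi_conn.autocommit = False

    return engine

# usage sketch with the question's (hypothetical) URL:
# engine = engine_with_autocommit_off(
#     f"db2+ibm_db://{user}:{password}@{hostname}:{port}/{database}")
```

<p>The <code>sessionmaker(..., autocommit=False)</code> argument from the snippet above is unrelated to this: it controls SQLAlchemy's legacy session behavior, not the driver-level commit mode.</p>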
|
<python><sqlalchemy><db2><db2-400>
|
2024-06-06 15:50:31
| 1
| 333
|
Wolfeius
|
78,587,761
| 4,146,344
|
Open a new window when clicking on a mesh in Open3D
|
<p>Below is an example in Open3D, in which I created a bunny mesh, as well as a circle mesh. Now I want to create a new window and show an image when I click on the circle mesh. Does anyone know how to do that? Thank you very much.</p>
<pre><code>import numpy as np
import open3d as o3d

def main():
    mesh = o3d.io.read_triangle_mesh(o3d.data.BunnyMesh().path)
    pcd1 = mesh.sample_points_poisson_disk(10000)
    visualizer = o3d.visualization.Visualizer()
    visualizer.create_window()
    visualizer.add_geometry(pcd1)
    # mesh = convert_image_to_mesh(img)
    # mesh.scale(scale=0.1, center=pcd1.get_center())
    # visualizer.add_geometry(mesh)
    # Show xyz axes
    rendering_options = visualizer.get_render_option()
    rendering_options.show_coordinate_frame = True
    rendering_options.background_color = np.asarray([0.5, 0.5, 0.5])
    mesh = o3d.geometry.TriangleMesh.create_sphere(radius=0.1)
    mesh.scale(scale=0.3, center=mesh.get_center())
    visualizer.add_geometry(mesh)
    visualizer.run()

if __name__ == "__main__":
    main()
</code></pre>
<p><a href="https://i.sstatic.net/3UvUbhlD.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/3UvUbhlD.png" alt="Example" /></a></p>
|
<python><point-clouds><open3d>
|
2024-06-06 15:48:05
| 1
| 710
|
Dang Manh Truong
|
78,587,587
| 9,100,431
|
Send error messages to catch with subprocess.check_output()
|
<p>I'm writing a web app (app.py) that, when an API endpoint is called, runs a Python script (main.py).</p>
<p>I'm interested in two things:</p>
<ul>
<li>Printing/logging the logs of main.py in real time to the console</li>
<li>Obtaining 'custom' error messages from main.py</li>
</ul>
<p>Right now, I do the first point, and try to do the second, with:</p>
<pre><code>@app.route('/run_script', methods=['POST'])
def run_script_now():
    main_path = str(Path("main.py").resolve())
    try:
        subprocess.check_output(["python", main_path], stderr=sys.stderr)
        msg = "Success", 200
        logger.info(msg)
        return msg
    except subprocess.CalledProcessError as e:
        error_msg = f"Error: {e.stderr}"
        logger.critical(error_msg)
        return error_msg, 500
</code></pre>
<p>But <code>e.stderr</code> returns None and <code>e.output</code> returns <code>Command '['python', 'C:\\Users\\main.py']' returned non-zero exit status 1.</code></p>
<p>And here's how I catch errors in main.py:</p>
<pre><code>try:
    portfolio_summary = processing.process_portfolio_summary()
except Exception as ex:
    error_msg = str(ex)  # Get the error message
    sys_msg = f"Error en PortfolioSummary: {error_msg}"
    logger.error(sys_msg)
    sys.stderr.write(sys_msg)
    sys.exit(1)
</code></pre>
<p><code>logger.error(sys_msg)</code> gets printed, but I would expect <code>e.stderr</code> (in app.py) to contain the value of <code>sys_msg</code> (from main.py); I can't get it to work.</p>
<p>Any ideas how I could implement it better?</p>
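<p>The reason <code>e.stderr</code> is <code>None</code> is that <code>stderr=sys.stderr</code> forwards the child's stderr straight to the console instead of capturing it; <code>check_output</code> only populates <code>e.stderr</code> when stderr is piped. A stdlib-only sketch that both mirrors stderr in real time and keeps a copy for the error message (the function and exception choice are mine, not from the question):</p>

```python
import subprocess
import sys

def run_and_capture(cmd):
    """Run cmd, mirroring its stderr to our console in real time while
    also buffering it so the text is available if the command fails."""
    proc = subprocess.Popen(cmd, stderr=subprocess.PIPE, text=True)
    captured = []
    for line in proc.stderr:      # reads lines as the child writes them
        sys.stderr.write(line)    # real-time mirror to the console
        captured.append(line)     # keep a copy for the caller
    proc.wait()
    if proc.returncode != 0:
        raise RuntimeError("".join(captured).strip()
                           or f"exit status {proc.returncode}")
```

<p>In the Flask route, <code>run_and_capture(["python", main_path])</code> would replace the <code>check_output</code> call, with the <code>except RuntimeError as e</code> branch logging <code>str(e)</code>. If real-time streaming is not essential, <code>subprocess.run(cmd, stderr=subprocess.PIPE, text=True, check=True)</code> is simpler: with stderr actually piped, <code>e.stderr</code> is populated.</p>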
|
<python><error-handling><subprocess>
|
2024-06-06 15:15:18
| 1
| 660
|
Diego
|
78,587,486
| 14,282,714
|
Upgrade streamlit with Conda doesn't work
|
<p>I'm trying to upgrade my streamlit version from 1.12.2 to its latest version. I followed the documentation as described <a href="https://docs.streamlit.io/knowledge-base/using-streamlit/how-upgrade-latest-version-streamlit" rel="nofollow noreferrer">here</a>. First I activate my conda environment:</p>
<pre><code>conda activate mariokart_app
</code></pre>
<p>After that I run the command like discussed in the documentation:</p>
<pre><code>conda update -c conda-forge streamlit -y
</code></pre>
<p>Which returns:</p>
<pre><code>Retrieving notices: ...working... done
Collecting package metadata (current_repodata.json): done
Solving environment: done
==> WARNING: A newer version of conda exists. <==
current version: 23.7.4
latest version: 24.5.0
Please update conda by running
$ conda update -n base -c defaults conda
Or to minimize the number of packages updated during conda update use
conda install conda=24.5.0
## Package Plan ##
environment location: /Users/quinten/opt/miniconda3/envs/mariokart_app
added / updated specs:
- streamlit
The following packages will be downloaded:
package | build
---------------------------|-----------------
altair-5.3.0 | pyhd8ed1ab_0 427 KB conda-forge
aws-c-auth-0.7.22 | h26aba2d_2 91 KB conda-forge
aws-c-cal-0.6.14 | hb0e519c_1 38 KB conda-forge
aws-c-common-0.9.19 | hfdf4475_0 204 KB conda-forge
aws-c-compression-0.2.18 | hb0e519c_6 18 KB conda-forge
aws-c-event-stream-0.4.2 | hc5e814a_12 46 KB conda-forge
aws-c-http-0.8.1 | ha6e9f73_17 159 KB conda-forge
aws-c-io-0.14.8 | hf69683f_5 135 KB conda-forge
aws-c-mqtt-0.10.4 | h76e2169_4 136 KB conda-forge
aws-c-s3-0.5.9 | hd10324c_3 92 KB conda-forge
aws-c-sdkutils-0.1.16 | hb0e519c_2 48 KB conda-forge
aws-checksums-0.1.18 | hb0e519c_6 48 KB conda-forge
aws-crt-cpp-0.26.9 | h473fab1_0 281 KB conda-forge
aws-sdk-cpp-1.11.329 | h6b2b1af_3 3.3 MB conda-forge
blinker-1.8.2 | pyhd8ed1ab_0 14 KB conda-forge
brotli-python-1.1.0 | py312heafc425_1 358 KB conda-forge
c-ares-1.28.1 | h10d778d_0 149 KB conda-forge
ca-certificates-2024.6.2 | h8857fd0_0 152 KB conda-forge
debugpy-1.8.1 | py312hede676d_0 2.0 MB conda-forge
gitpython-3.1.43 | pyhd8ed1ab_0 153 KB conda-forge
idna-3.7 | pyhd8ed1ab_0 51 KB conda-forge
importlib_resources-6.4.0 | pyhd8ed1ab_0 32 KB conda-forge
ipython-8.25.0 | pyh707e725_0 585 KB conda-forge
ipywidgets-8.1.3 | pyhd8ed1ab_0 111 KB conda-forge
jinja2-3.1.4 | pyhd8ed1ab_0 109 KB conda-forge
jsonschema-4.22.0 | pyhd8ed1ab_0 72 KB conda-forge
jupyter_client-8.6.2 | pyhd8ed1ab_0 104 KB conda-forge
jupyter_core-5.7.2 | py312hb401068_0 91 KB conda-forge
jupyterlab_widgets-3.0.11 | pyhd8ed1ab_0 182 KB conda-forge
libabseil-20240116.2 | cxx17_hc1bcbd7_0 1.1 MB conda-forge
libarrow-16.1.0 | h0870315_6_cpu 5.6 MB conda-forge
libarrow-acero-16.1.0 | hf036a51_6_cpu 512 KB conda-forge
libarrow-dataset-16.1.0 | hf036a51_6_cpu 506 KB conda-forge
libarrow-substrait-16.1.0 | h85bc590_6_cpu 473 KB conda-forge
libblas-3.9.0 |22_osx64_openblas 14 KB conda-forge
libcblas-3.9.0 |22_osx64_openblas 14 KB conda-forge
libcurl-8.8.0 | hf9fcc65_0 377 KB conda-forge
libcxx-17.0.6 | h88467a6_0 1.2 MB conda-forge
libdeflate-1.20 | h49d49c5_0 69 KB conda-forge
libexpat-2.6.2 | h73e2aa4_0 68 KB conda-forge
libgoogle-cloud-2.24.0 | h721cda5_0 834 KB conda-forge
libgoogle-cloud-storage-2.24.0| ha1c69e0_0 512 KB conda-forge
libgrpc-1.62.2 | h384b2fc_0 4.9 MB conda-forge
liblapack-3.9.0 |22_osx64_openblas 14 KB conda-forge
libopenblas-0.3.27 |openmp_hfef2a42_0 5.8 MB conda-forge
libparquet-16.1.0 | h904a336_6_cpu 922 KB conda-forge
libsqlite-3.45.3 | h92b6c6a_0 881 KB conda-forge
libtiff-4.6.0 | h129831d_3 251 KB conda-forge
libwebp-base-1.4.0 | h10d778d_0 347 KB conda-forge
libzlib-1.3.1 | h87427d6_1 56 KB conda-forge
llvm-openmp-18.1.6 | h15ab845_0 293 KB conda-forge
markupsafe-2.1.5 | py312h41838bb_0 25 KB conda-forge
matplotlib-inline-0.1.7 | pyhd8ed1ab_0 14 KB conda-forge
ncurses-6.5 | h5846eda_0 804 KB conda-forge
numpy-1.26.4 | py312he3a82b2_0 6.7 MB conda-forge
openssl-3.3.1 | h87427d6_0 2.4 MB conda-forge
orc-2.0.1 | hf43e91b_1 429 KB conda-forge
packaging-24.0 | pyhd8ed1ab_0 49 KB conda-forge
pandas-2.2.2 | py312h1171441_1 14.0 MB conda-forge
parso-0.8.4 | pyhd8ed1ab_0 73 KB conda-forge
pillow-10.3.0 | py312h0c923fa_0 40.6 MB conda-forge
platformdirs-4.2.2 | pyhd8ed1ab_0 20 KB conda-forge
prompt-toolkit-3.0.46 | pyha770c72_0 264 KB conda-forge
protobuf-4.25.3 | py312hf6c9040_0 362 KB conda-forge
psutil-5.9.8 | py312h41838bb_0 484 KB conda-forge
pyarrow-16.1.0 | py312hdce95a9_1 26 KB conda-forge
pyarrow-core-16.1.0 |py312he4e9a06_1_cpu 3.7 MB conda-forge
pygments-2.18.0 | pyhd8ed1ab_0 859 KB conda-forge
python-3.12.3 |h1411813_0_cpython 13.9 MB conda-forge
python_abi-3.12 | 4_cp312 6 KB conda-forge
pyyaml-6.0.1 | py312h104f124_1 181 KB conda-forge
pyzmq-26.0.3 | py312ha04878a_0 436 KB conda-forge
referencing-0.35.1 | pyhd8ed1ab_0 41 KB conda-forge
requests-2.32.3 | pyhd8ed1ab_0 57 KB conda-forge
rpds-py-0.18.1 | py312ha47ea1c_0 293 KB conda-forge
setuptools-70.0.0 | pyhd8ed1ab_0 472 KB conda-forge
snappy-1.2.0 | h6dc393e_1 36 KB conda-forge
streamlit-1.35.0 | pyhd8ed1ab_0 6.6 MB conda-forge
tenacity-8.3.0 | pyhd8ed1ab_0 23 KB conda-forge
tornado-6.4 | py312h41838bb_0 821 KB conda-forge
traitlets-5.14.3 | pyhd8ed1ab_0 108 KB conda-forge
typing-extensions-4.12.1 | hd8ed1ab_0 10 KB conda-forge
typing_extensions-4.12.1 | pyha770c72_0 39 KB conda-forge
tzlocal-5.2 | py312hb401068_0 40 KB conda-forge
validators-0.28.3 | pyhd8ed1ab_0 36 KB conda-forge
watchdog-4.0.1 | py312hbd25219_0 141 KB conda-forge
wheel-0.43.0 | pyhd8ed1ab_1 57 KB conda-forge
widgetsnbextension-4.0.11 | pyhd8ed1ab_0 1.0 MB conda-forge
zeromq-4.3.5 | hde137ed_4 297 KB conda-forge
zstd-1.5.6 | h915ae27_0 487 KB conda-forge
------------------------------------------------------------
Total: 129.4 MB
The following NEW packages will be INSTALLED:
appnope conda-forge/noarch::appnope-0.1.4-pyhd8ed1ab_0
asttokens conda-forge/noarch::asttokens-2.4.1-pyhd8ed1ab_0
comm conda-forge/noarch::comm-0.2.2-pyhd8ed1ab_0
debugpy conda-forge/osx-64::debugpy-1.8.1-py312hede676d_0
decorator conda-forge/noarch::decorator-5.1.1-pyhd8ed1ab_0
exceptiongroup conda-forge/noarch::exceptiongroup-1.2.0-pyhd8ed1ab_2
executing conda-forge/noarch::executing-2.0.1-pyhd8ed1ab_0
importlib-metadata conda-forge/noarch::importlib-metadata-7.1.0-pyha770c72_0
importlib_metadata conda-forge/noarch::importlib_metadata-7.1.0-hd8ed1ab_0
importlib_resourc~ conda-forge/noarch::importlib_resources-6.4.0-pyhd8ed1ab_0
ipykernel conda-forge/noarch::ipykernel-6.29.3-pyh3cd1d5f_0
ipython conda-forge/noarch::ipython-8.25.0-pyh707e725_0
ipywidgets conda-forge/noarch::ipywidgets-8.1.3-pyhd8ed1ab_0
jedi conda-forge/noarch::jedi-0.19.1-pyhd8ed1ab_0
jupyter_client conda-forge/noarch::jupyter_client-8.6.2-pyhd8ed1ab_0
jupyter_core conda-forge/osx-64::jupyter_core-5.7.2-py312hb401068_0
jupyterlab_widgets conda-forge/noarch::jupyterlab_widgets-3.0.11-pyhd8ed1ab_0
libabseil conda-forge/osx-64::libabseil-20240116.2-cxx17_hc1bcbd7_0
libarrow conda-forge/osx-64::libarrow-16.1.0-h0870315_6_cpu
libarrow-acero conda-forge/osx-64::libarrow-acero-16.1.0-hf036a51_6_cpu
libarrow-dataset conda-forge/osx-64::libarrow-dataset-16.1.0-hf036a51_6_cpu
libarrow-substrait conda-forge/osx-64::libarrow-substrait-16.1.0-h85bc590_6_cpu
libblas conda-forge/osx-64::libblas-3.9.0-22_osx64_openblas
libcblas conda-forge/osx-64::libcblas-3.9.0-22_osx64_openblas
libcrc32c conda-forge/osx-64::libcrc32c-1.1.2-he49afe7_0
libexpat conda-forge/osx-64::libexpat-2.6.2-h73e2aa4_0
libgfortran conda-forge/osx-64::libgfortran-5.0.0-13_2_0_h97931a8_3
libgfortran5 conda-forge/osx-64::libgfortran5-13.2.0-h2873a65_3
libgoogle-cloud conda-forge/osx-64::libgoogle-cloud-2.24.0-h721cda5_0
libgoogle-cloud-s~ conda-forge/osx-64::libgoogle-cloud-storage-2.24.0-ha1c69e0_0
libgrpc conda-forge/osx-64::libgrpc-1.62.2-h384b2fc_0
libjpeg-turbo conda-forge/osx-64::libjpeg-turbo-3.0.0-h0dc2134_1
liblapack conda-forge/osx-64::liblapack-3.9.0-22_osx64_openblas
libopenblas conda-forge/osx-64::libopenblas-0.3.27-openmp_hfef2a42_0
libparquet conda-forge/osx-64::libparquet-16.1.0-h904a336_6_cpu
libre2-11 conda-forge/osx-64::libre2-11-2023.09.01-h81f5012_2
libsodium conda-forge/osx-64::libsodium-1.0.18-hbcb3906_1
libsqlite conda-forge/osx-64::libsqlite-3.45.3-h92b6c6a_0
libutf8proc conda-forge/osx-64::libutf8proc-2.8.0-hb7f2c08_0
libxcb conda-forge/osx-64::libxcb-1.15-hb7f2c08_0
libzlib conda-forge/osx-64::libzlib-1.3.1-h87427d6_1
llvm-openmp conda-forge/osx-64::llvm-openmp-18.1.6-h15ab845_0
matplotlib-inline conda-forge/noarch::matplotlib-inline-0.1.7-pyhd8ed1ab_0
nest-asyncio conda-forge/noarch::nest-asyncio-1.6.0-pyhd8ed1ab_0
parso conda-forge/noarch::parso-0.8.4-pyhd8ed1ab_0
pexpect conda-forge/noarch::pexpect-4.9.0-pyhd8ed1ab_0
pickleshare conda-forge/noarch::pickleshare-0.7.5-py_1003
pkgutil-resolve-n~ conda-forge/noarch::pkgutil-resolve-name-1.3.10-pyhd8ed1ab_1
platformdirs conda-forge/noarch::platformdirs-4.2.2-pyhd8ed1ab_0
prompt-toolkit conda-forge/noarch::prompt-toolkit-3.0.46-pyha770c72_0
psutil conda-forge/osx-64::psutil-5.9.8-py312h41838bb_0
pthread-stubs conda-forge/osx-64::pthread-stubs-0.4-hc929b4f_1001
ptyprocess conda-forge/noarch::ptyprocess-0.7.0-pyhd3deb0d_0
pure_eval conda-forge/noarch::pure_eval-0.2.2-pyhd8ed1ab_0
pyarrow-core conda-forge/osx-64::pyarrow-core-16.1.0-py312he4e9a06_1_cpu
python_abi conda-forge/osx-64::python_abi-3.12-4_cp312
pyyaml conda-forge/osx-64::pyyaml-6.0.1-py312h104f124_1
pyzmq conda-forge/osx-64::pyzmq-26.0.3-py312ha04878a_0
stack_data conda-forge/noarch::stack_data-0.6.2-pyhd8ed1ab_0
traitlets conda-forge/noarch::traitlets-5.14.3-pyhd8ed1ab_0
typing-extensions conda-forge/noarch::typing-extensions-4.12.1-hd8ed1ab_0
tzlocal conda-forge/osx-64::tzlocal-5.2-py312hb401068_0
validators conda-forge/noarch::validators-0.28.3-pyhd8ed1ab_0
watchdog conda-forge/osx-64::watchdog-4.0.1-py312hbd25219_0
wcwidth conda-forge/noarch::wcwidth-0.2.13-pyhd8ed1ab_0
widgetsnbextension conda-forge/noarch::widgetsnbextension-4.0.11-pyhd8ed1ab_0
xorg-libxau conda-forge/osx-64::xorg-libxau-1.0.11-h0dc2134_0
xorg-libxdmcp conda-forge/osx-64::xorg-libxdmcp-1.1.3-h35c211d_0
yaml conda-forge/osx-64::yaml-0.2.5-h0d85af4_2
zeromq conda-forge/osx-64::zeromq-4.3.5-hde137ed_4
zipp conda-forge/noarch::zipp-3.17.0-pyhd8ed1ab_0
The following packages will be REMOVED:
abseil-cpp-20230802.0-h61975a4_2
arrow-cpp-14.0.2-h3ade35f_1
blas-1.0-mkl
boost-cpp-1.82.0-ha357a0b_2
bottleneck-1.3.7-py312h32608ca_0
brotli-1.0.9-h6c40b1e_8
brotli-bin-1.0.9-h6c40b1e_8
expat-2.6.2-hcec6c5f_0
grpc-cpp-1.48.2-hbe2b35a_4
gtest-1.14.0-ha357a0b_1
icu-73.1-hcec6c5f_0
intel-openmp-2023.1.0-ha357a0b_43548
jpeg-9e-h6c40b1e_1
libboost-1.82.0-hf53b9f2_2
libiconv-1.16-h6c40b1e_3
mkl-2023.1.0-h8e150cf_43560
mkl-service-2.4.0-py312h6c40b1e_1
mkl_fft-1.3.8-py312h6c40b1e_0
mkl_random-1.2.4-py312ha357a0b_0
numexpr-2.8.7-py312hac873b0_0
numpy-base-1.26.4-py312h6f81483_0
sqlite-3.45.3-h6c40b1e_0
tbb-2021.8.0-ha357a0b_0
utf8proc-2.6.1-h6c40b1e_1
zlib-1.2.13-h4b97444_1
The following packages will be UPDATED:
altair pkgs/main/osx-64::altair-5.0.1-py312h~ --> conda-forge/noarch::altair-5.3.0-pyhd8ed1ab_0
attrs pkgs/main/osx-64::attrs-23.1.0-py312h~ --> conda-forge/noarch::attrs-23.2.0-pyh71513ae_0
aws-c-auth pkgs/main::aws-c-auth-0.6.19-h6c40b1e~ --> conda-forge::aws-c-auth-0.7.22-h26aba2d_2
aws-c-cal pkgs/main::aws-c-cal-0.5.20-h3333b6a_0 --> conda-forge::aws-c-cal-0.6.14-hb0e519c_1
aws-c-common pkgs/main::aws-c-common-0.8.5-h6c40b1~ --> conda-forge::aws-c-common-0.9.19-hfdf4475_0
aws-c-compression pkgs/main::aws-c-compression-0.2.16-h~ --> conda-forge::aws-c-compression-0.2.18-hb0e519c_6
aws-c-event-stream pkgs/main::aws-c-event-stream-0.2.15-~ --> conda-forge::aws-c-event-stream-0.4.2-hc5e814a_12
aws-c-http pkgs/main::aws-c-http-0.6.25-h6c40b1e~ --> conda-forge::aws-c-http-0.8.1-ha6e9f73_17
aws-c-io pkgs/main::aws-c-io-0.13.10-h6c40b1e_0 --> conda-forge::aws-c-io-0.14.8-hf69683f_5
aws-c-mqtt pkgs/main::aws-c-mqtt-0.7.13-h6c40b1e~ --> conda-forge::aws-c-mqtt-0.10.4-h76e2169_4
aws-c-s3 pkgs/main::aws-c-s3-0.1.51-h6c40b1e_0 --> conda-forge::aws-c-s3-0.5.9-hd10324c_3
aws-c-sdkutils pkgs/main::aws-c-sdkutils-0.1.6-h6c40~ --> conda-forge::aws-c-sdkutils-0.1.16-hb0e519c_2
aws-checksums pkgs/main::aws-checksums-0.1.13-h6c40~ --> conda-forge::aws-checksums-0.1.18-hb0e519c_6
aws-crt-cpp pkgs/main::aws-crt-cpp-0.18.16-hcec6c~ --> conda-forge::aws-crt-cpp-0.26.9-h473fab1_0
aws-sdk-cpp pkgs/main::aws-sdk-cpp-1.10.55-h61975~ --> conda-forge::aws-sdk-cpp-1.11.329-h6b2b1af_3
blinker pkgs/main/osx-64::blinker-1.6.2-py312~ --> conda-forge/noarch::blinker-1.8.2-pyhd8ed1ab_0
brotli-python pkgs/main::brotli-python-1.0.9-py312h~ --> conda-forge::brotli-python-1.1.0-py312heafc425_1
c-ares pkgs/main::c-ares-1.19.1-h6c40b1e_0 --> conda-forge::c-ares-1.28.1-h10d778d_0
ca-certificates pkgs/main::ca-certificates-2024.3.11-~ --> conda-forge::ca-certificates-2024.6.2-h8857fd0_0
charset-normalizer pkgs/main::charset-normalizer-2.0.4-p~ --> conda-forge::charset-normalizer-3.3.2-pyhd8ed1ab_0
freetype pkgs/main::freetype-2.12.1-hd8bbffd_0 --> conda-forge::freetype-2.12.1-h60636b9_2
gflags pkgs/main::gflags-2.2.2-hcec6c5f_1 --> conda-forge::gflags-2.2.2-hb1e8313_1004
gitdb pkgs/main::gitdb-4.0.7-pyhd3eb1b0_0 --> conda-forge::gitdb-4.0.11-pyhd8ed1ab_0
gitpython pkgs/main/osx-64::gitpython-3.1.37-py~ --> conda-forge/noarch::gitpython-3.1.43-pyhd8ed1ab_0
glog pkgs/main::glog-0.5.0-hcec6c5f_1 --> conda-forge::glog-0.7.0-h31b1b29_0
jsonschema pkgs/main/osx-64::jsonschema-4.19.2-p~ --> conda-forge/noarch::jsonschema-4.22.0-pyhd8ed1ab_0
jsonschema-specif~ pkgs/main/osx-64::jsonschema-specific~ --> conda-forge/noarch::jsonschema-specifications-2023.12.1-pyhd8ed1ab_0
krb5 pkgs/main::krb5-1.20.1-h428f121_1 --> conda-forge::krb5-1.21.2-hb884880_0
lcms2 pkgs/main::lcms2-2.12-hf1fd2bf_0 --> conda-forge::lcms2-2.16-ha2f27b4_0
lerc pkgs/main::lerc-3.0-he9d5cce_0 --> conda-forge::lerc-4.0.0-hb486fe8_0
libbrotlicommon pkgs/main::libbrotlicommon-1.0.9-h6c4~ --> conda-forge::libbrotlicommon-1.1.0-h0dc2134_1
libbrotlidec pkgs/main::libbrotlidec-1.0.9-h6c40b1~ --> conda-forge::libbrotlidec-1.1.0-h0dc2134_1
libbrotlienc pkgs/main::libbrotlienc-1.0.9-h6c40b1~ --> conda-forge::libbrotlienc-1.1.0-h0dc2134_1
libcurl pkgs/main::libcurl-8.7.1-hf20ceda_0 --> conda-forge::libcurl-8.8.0-hf9fcc65_0
libcxx pkgs/main::libcxx-14.0.6-h9765a3e_0 --> conda-forge::libcxx-17.0.6-h88467a6_0
libdeflate pkgs/main::libdeflate-1.17-hb664fd8_1 --> conda-forge::libdeflate-1.20-h49d49c5_0
libev pkgs/main::libev-4.33-h9ed2024_1 --> conda-forge::libev-4.33-h10d778d_2
libnghttp2 pkgs/main::libnghttp2-1.57.0-h9beae6a~ --> conda-forge::libnghttp2-1.58.0-h64cf6d3_1
libpng pkgs/main::libpng-1.6.39-h6c40b1e_0 --> conda-forge::libpng-1.6.43-h92b6c6a_0
libprotobuf pkgs/main::libprotobuf-3.20.3-hfff283~ --> conda-forge::libprotobuf-4.25.3-h4e4d658_0
libthrift pkgs/main::libthrift-0.15.0-h70b4b81_2 --> conda-forge::libthrift-0.19.0-h064b379_1
libtiff pkgs/main::libtiff-4.5.1-hcec6c5f_0 --> conda-forge::libtiff-4.6.0-h129831d_3
libwebp-base pkgs/main::libwebp-base-1.3.2-h6c40b1~ --> conda-forge::libwebp-base-1.4.0-h10d778d_0
markdown-it-py pkgs/main/osx-64::markdown-it-py-2.2.~ --> conda-forge/noarch::markdown-it-py-3.0.0-pyhd8ed1ab_0
markupsafe pkgs/main::markupsafe-2.1.3-py312h6c4~ --> conda-forge::markupsafe-2.1.5-py312h41838bb_0
mdurl pkgs/main/osx-64::mdurl-0.1.0-py312he~ --> conda-forge/noarch::mdurl-0.1.2-pyhd8ed1ab_0
ncurses pkgs/main::ncurses-6.4-hcec6c5f_0 --> conda-forge::ncurses-6.5-h5846eda_0
openjpeg pkgs/main::openjpeg-2.4.0-h66ea3da_0 --> conda-forge::openjpeg-2.5.2-h7310d3a_0
openssl pkgs/main::openssl-3.0.13-hca72f7f_2 --> conda-forge::openssl-3.3.1-h87427d6_0
orc pkgs/main::orc-1.7.4-h995b336_1 --> conda-forge::orc-2.0.1-hf43e91b_1
packaging pkgs/main/osx-64::packaging-23.2-py31~ --> conda-forge/noarch::packaging-24.0-pyhd8ed1ab_0
pandas pkgs/main::pandas-2.2.1-py312he282a81~ --> conda-forge::pandas-2.2.2-py312h1171441_1
protobuf pkgs/main::protobuf-3.20.3-py312hcec6~ --> conda-forge::protobuf-4.25.3-py312hf6c9040_0
pyarrow pkgs/main::pyarrow-14.0.2-py312h0b9b6~ --> conda-forge::pyarrow-16.1.0-py312hdce95a9_1
pygments pkgs/main/osx-64::pygments-2.15.1-py3~ --> conda-forge/noarch::pygments-2.18.0-pyhd8ed1ab_0
pysocks pkgs/main/osx-64::pysocks-1.7.1-py312~ --> conda-forge/noarch::pysocks-1.7.1-pyha2e5f31_6
python-tzdata pkgs/main::python-tzdata-2023.3-pyhd3~ --> conda-forge::python-tzdata-2024.1-pyhd8ed1ab_0
re2 pkgs/main::re2-2022.04.01-he9d5cce_0 --> conda-forge::re2-2023.09.01-hb168e87_2
readline pkgs/main::readline-8.2-hca72f7f_0 --> conda-forge::readline-8.2-h9e318b2_1
referencing pkgs/main/osx-64::referencing-0.30.2-~ --> conda-forge/noarch::referencing-0.35.1-pyhd8ed1ab_0
requests pkgs/main/osx-64::requests-2.32.2-py3~ --> conda-forge/noarch::requests-2.32.3-pyhd8ed1ab_0
rich pkgs/main/osx-64::rich-13.3.5-py312he~ --> conda-forge/noarch::rich-13.7.1-pyhd8ed1ab_0
rpds-py pkgs/main::rpds-py-0.10.6-py312hf2ad9~ --> conda-forge::rpds-py-0.18.1-py312ha47ea1c_0
setuptools pkgs/main/osx-64::setuptools-69.5.1-p~ --> conda-forge/noarch::setuptools-70.0.0-pyhd8ed1ab_0
smmap pkgs/main::smmap-4.0.0-pyhd3eb1b0_0 --> conda-forge::smmap-5.0.0-pyhd8ed1ab_0
snappy pkgs/main::snappy-1.1.10-hcec6c5f_1 --> conda-forge::snappy-1.2.0-h6dc393e_1
streamlit pkgs/main/osx-64::streamlit-1.32.0-py~ --> conda-forge/noarch::streamlit-1.35.0-pyhd8ed1ab_0
tenacity pkgs/main/osx-64::tenacity-8.2.2-py31~ --> conda-forge/noarch::tenacity-8.3.0-pyhd8ed1ab_0
toolz pkgs/main/osx-64::toolz-0.12.0-py312h~ --> conda-forge/noarch::toolz-0.12.1-pyhd8ed1ab_0
tornado pkgs/main::tornado-6.3.3-py312h6c40b1~ --> conda-forge::tornado-6.4-py312h41838bb_0
typing_extensions pkgs/main/osx-64::typing_extensions-4~ --> conda-forge/noarch::typing_extensions-4.12.1-pyha770c72_0
wheel pkgs/main/osx-64::wheel-0.43.0-py312h~ --> conda-forge/noarch::wheel-0.43.0-pyhd8ed1ab_1
zstd pkgs/main::zstd-1.5.5-hc035e20_2 --> conda-forge::zstd-1.5.6-h915ae27_0
The following packages will be SUPERSEDED by a higher-priority channel:
bzip2 pkgs/main::bzip2-1.0.8-h6c40b1e_6 --> conda-forge::bzip2-1.0.8-h10d778d_5
cachetools pkgs/main/osx-64::cachetools-5.3.3-py~ --> conda-forge/noarch::cachetools-5.3.3-pyhd8ed1ab_0
certifi pkgs/main/osx-64::certifi-2024.2.2-py~ --> conda-forge/noarch::certifi-2024.2.2-pyhd8ed1ab_0
click pkgs/main/osx-64::click-8.1.7-py312he~ --> conda-forge/noarch::click-8.1.7-unix_pyh707e725_0
idna pkgs/main/osx-64::idna-3.7-py312hecd8~ --> conda-forge/noarch::idna-3.7-pyhd8ed1ab_0
jinja2 pkgs/main/osx-64::jinja2-3.1.4-py312h~ --> conda-forge/noarch::jinja2-3.1.4-pyhd8ed1ab_0
libedit pkgs/main::libedit-3.1.20230828-h6c40~ --> conda-forge::libedit-3.1.20191231-h0678c8f_2
libevent pkgs/main::libevent-2.1.12-h04015c4_1 --> conda-forge::libevent-2.1.12-ha90c15b_1
libffi pkgs/main::libffi-3.4.4-hecd8cb5_1 --> conda-forge::libffi-3.4.2-h0d85af4_5
libssh2 pkgs/main::libssh2-1.11.0-hf20ceda_0 --> conda-forge::libssh2-1.11.0-hd019ec5_0
lz4-c pkgs/main::lz4-c-1.9.4-hcec6c5f_1 --> conda-forge::lz4-c-1.9.4-hf0c8a7f_0
numpy pkgs/main::numpy-1.26.4-py312hac873b0~ --> conda-forge::numpy-1.26.4-py312he3a82b2_0
pillow pkgs/main::pillow-10.3.0-py312h6c40b1~ --> conda-forge::pillow-10.3.0-py312h0c923fa_0
pip pkgs/main/osx-64::pip-24.0-py312hecd8~ --> conda-forge/noarch::pip-24.0-pyhd8ed1ab_0
pydeck pkgs/main/osx-64::pydeck-0.8.0-py312h~ --> conda-forge/noarch::pydeck-0.8.0-pyhd8ed1ab_0
python pkgs/main::python-3.12.3-hd58486a_1 --> conda-forge::python-3.12.3-h1411813_0_cpython
python-dateutil pkgs/main/osx-64::python-dateutil-2.9~ --> conda-forge/noarch::python-dateutil-2.9.0-pyhd8ed1ab_0
pytz pkgs/main/osx-64::pytz-2024.1-py312he~ --> conda-forge/noarch::pytz-2024.1-pyhd8ed1ab_0
six pkgs/main::six-1.16.0-pyhd3eb1b0_1 --> conda-forge::six-1.16.0-pyh6c4a22f_0
tk pkgs/main::tk-8.6.14-h4d00af3_0 --> conda-forge::tk-8.6.13-h1abcd95_1
toml pkgs/main::toml-0.10.2-pyhd3eb1b0_0 --> conda-forge::toml-0.10.2-pyhd8ed1ab_0
tzdata pkgs/main::tzdata-2024a-h04d1e81_0 --> conda-forge::tzdata-2024a-h0c530f3_0
urllib3 pkgs/main/osx-64::urllib3-2.2.1-py312~ --> conda-forge/noarch::urllib3-2.2.1-pyhd8ed1ab_0
xz pkgs/main::xz-5.4.6-h6c40b1e_1 --> conda-forge::xz-5.2.6-h775f41a_0
Downloading and Extracting Packages
Preparing transaction: done
Verifying transaction: done
Executing transaction: done
</code></pre>
<p>Finally I check the version with:</p>
<pre><code>streamlit version
Streamlit, version 1.12.2
</code></pre>
<p>Which, as you can see, stays the same. I don't know why this happens, so I was wondering if anyone knows how to upgrade streamlit using conda?</p>
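<p>A frequent cause of this symptom (a hedged guess, not confirmed from the output above): the <code>streamlit</code> CLI being executed is not the one conda just updated, e.g. a pip-installed copy in the same environment, or an entry point from another environment earlier on PATH. A small stdlib-only diagnostic:</p>

```python
import importlib.metadata as md
import shutil
import sys

def installed_version(pkg):
    """Version of pkg visible to *this* interpreter, or None if absent."""
    try:
        return md.version(pkg)
    except md.PackageNotFoundError:
        return None

if __name__ == "__main__":
    print("interpreter:", sys.executable)                # which Python runs
    print("streamlit:", installed_version("streamlit"))  # what it can import
    print("CLI found:", shutil.which("streamlit"))       # which binary PATH hits
```

<p>If the CLI path is outside the conda env, or <code>pip show streamlit</code> and <code>conda list streamlit</code> disagree, a stale pip-installed copy is likely shadowing the updated package; running <code>pip uninstall streamlit</code> inside the env, or checking with <code>python -m streamlit version</code> to bypass the shadowed entry point, should clarify it.</p>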
|
<python><streamlit>
|
2024-06-06 14:59:29
| 0
| 42,724
|
Quinten
|
78,587,280
| 263,844
|
Python 3.9: how to find a compatible spaCy version
|
<p>I need to re-deploy an existing, very old Python GCP Cloud Function that uses spaCy and other NLP stuff (that I am not familiar with) so it runs on a newer Python runtime (3.9 or above).</p>
<p>Sparing the details of an insane chase through dependency version conflicts and adjustments, I came to the point of needing to upgrade spacy from < 2.3.9 to >= 3.0.x, and typer to >= 0.4.1. The final issue I ran into doing this is:</p>
<pre><code>The conflict is caused by:
The user requested typer==0.4.1
spacy 3.0.9 depends on typer<0.4.0 and >=0.3.0
</code></pre>
<p>It looks like I can't use the combination of spacy==3.0.9 and typer==0.4.1, and I cannot go below typer 0.4.1 (many rounds of version adjustments failed otherwise).</p>
<p>Now I need to find a version of spacy that works with typer==0.4.1. I have searched and searched and can't find a way to figure out which versions of spacy would be compatible. I tried brute-force guessing, trying all versions of spacy above 3.0.9, and the first attempt, installing spacy>=4.0.0, caused this:</p>
<pre><code>ERROR: Could not find a version that satisfies the requirement spacy>=4.0.0 (from versions: 0.31, 0.32, 0.33, 0.40, 0.51, 0.52, 0.60, 0.61, 0.62, 0.63, 0.64, 0.65, 0.67, 0.68, 0.70, 0.80, 0.81, 0.82, 0.83, 0.84, 0.85, 0.86, 0.87, 0.88, 0.89, 0.90, 0.91, 0.92, 0.93, 0.94, 0.95, 0.97, 0.98, 0.99, 0.100.0, 0.100.1, 0.100.2, 0.100.3, 0.100.4, 0.100.5, 0.100.6, 0.100.7, 0.101.0, 1.0.1, 1.0.2, 1.0.3, 1.0.4, 1.0.5, 1.1.0, 1.1.1, 1.1.2, 1.2.0, 1.3.0, 1.4.0, 1.5.0, 1.5.1, 1.6.0, 1.7.0, 1.7.1, 1.7.2, 1.7.3, 1.7.5, 1.8.0, 1.8.1, 1.8.2, 1.9.0, 1.10.0, 1.10.1, 2.0.0, 2.0.1.dev0, 2.0.1, 2.0.2.dev0, 2.0.2, 2.0.3.dev0, 2.0.3, 2.0.4.dev0, 2.0.4, 2.0.5.dev0, 2.0.5, 2.0.6.dev0, 2.0.6, 2.0.7, 2.0.8, 2.0.9, 2.0.10.dev0, 2.0.10, 2.0.11.dev0, 2.0.11, 2.0.12.dev0, 2.0.12.dev1, 2.0.12, 2.0.13.dev0, 2.0.13.dev1, 2.0.13.dev2, 2.0.13.dev4, 2.0.13, 2.0.14.dev0, 2.0.14.dev1, 2.0.15, 2.0.16.dev0, 2.0.16, 2.0.17.dev0, 2.0.17.dev1, 2.0.17, 2.0.18.dev0, 2.0.18.dev1, 2.0.18, 2.1.0, 2.1.1.dev0, 2.1.1, 2.1.2, 2.1.3, 2.1.4, 2.1.5, 2.1.6, 2.1.7.dev0, 2.1.7, 2.1.8, 2.1.9, 2.2.0.dev10, 2.2.0.dev11, 2.2.0.dev13, 2.2.0.dev15, 2.2.0.dev17, 2.2.0.dev18, 2.2.0.dev19, 2.2.0, 2.2.1, 2.2.2.dev0, 2.2.2.dev4, 2.2.2, 2.2.3.dev0, 2.2.3, 2.2.4, 2.3.0.dev1, 2.3.0, 2.3.1, 2.3.2, 2.3.3.dev0, 2.3.3, 2.3.4, 2.3.5, 2.3.6, 2.3.7, 2.3.8, 2.3.9, 3.0.0, 3.0.1.dev0, 3.0.1, 3.0.2, 3.0.3, 3.0.4, 3.0.5, 3.0.6, 3.0.7, 3.0.8, 3.0.9, 3.1.0, 3.1.1, 3.1.2, 3.1.3, 3.1.4, 3.1.5, 3.1.6, 3.1.7, 3.2.0, 3.2.1, 3.2.2, 3.2.3, 3.2.4, 3.2.5, 3.2.6, 3.3.0.dev0, 3.3.0, 3.3.1, 3.3.2, 3.3.3, 3.4.0, 3.4.1, 3.4.2, 3.4.3, 3.4.4, 3.5.0, 3.5.1, 3.5.2, 3.5.3, 3.5.4, 3.6.0.dev0, 3.6.0.dev1, 3.6.0, 3.6.1, 3.7.0.dev0, 3.7.0, 3.7.1, 3.7.2, 3.7.4, 3.7.5, 4.0.0.dev0, 4.0.0.dev1, 4.0.0.dev2, 4.0.0.dev3)
ERROR: No matching distribution found for spacy>=4.0.0
</code></pre>
<p>This led me to think that I just do not understand how spaCy operates: it does not seem to be a "normal" Python library whose new version I could simply specify in requirements.txt. It looks like a more involved installation process is needed, which I suspect is related to downloading a specific version of the en_core_web_sm-3.0.0.tar.gz model (?) that I see in this project.</p>
<p>So my question:
<strong>Could someone advise which version of spaCy I should try that would be the closest to 3.0.9, and what the process of doing that is?
Do I need to first get some new model archives?</strong></p>
<p>Please excuse my ignorance of how spaCy and its models work. I am not the original NLP developer who wrote all this cool stuff; I am just trying to fix a failed production Cloud Function until a real NLP developer can do a proper upgrade of the functionality.</p>
<p>Longer explanation:
the last time this CF was deployed was in 2021, on the python37 runtime. It stopped working now because a dependent microservice moved to a different access point. To get this CF working again, I need to redeploy it with a new value for an env variable (the new service URL).</p>
<p>Sounds simple enough, but it is not, because the CF cannot be deployed as is, using the original source code, due to changes in the current GCP python37 runtime libraries.</p>
<p>Trying to deploy this CF "as is" fails in GCP with:</p>
<pre><code>ERROR: (gcloud.alpha.functions.deploy) Build failed with status: FAILURE and message: found incompatible dependencies: "flask 2.2.5 has requirement click>=8.0, but you have click 7.1.2."
</code></pre>
<p>So, I am trying to make this project run on python39 runtime and upgrade the project dependencies - upgrade flask, then spacy, then typer, then hundred others :(- and went through a multi-day hell of dependencies conflict resolutions - moving versions of different libs in the requirements.txt up and down, up and down, in circles....</p>
<p>If anyone has a suggestion for a less painful approach - please share!</p>
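<p>One possibly less painful approach (a sketch - the exact pins are assumptions based on spaCy's model-versioning scheme, where an en_core_web_sm-3.0.0 model matches spaCy >=3.0.0,&lt;3.1.0, not something verified against this project): pin spaCy to the 3.0.x line and install the matching model archive directly from requirements.txt, so no separate <code>spacy download</code> step is needed at deploy time:</p>

```text
# requirements.txt (fragment)
spacy>=3.0.0,<3.1.0
# spaCy models are regular pip-installable packages, installable from a direct URL
https://github.com/explosion/spacy-models/releases/download/en_core_web_sm-3.0.0/en_core_web_sm-3.0.0.tar.gz
```

<p>Pinning the model URL next to a compatible spaCy range keeps the library and model in lock-step, which is usually what breaks during these upgrades.</p>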
<p>Thank you!!!</p>
|
<python><google-cloud-platform><google-cloud-functions><spacy>
|
2024-06-06 14:23:19
| 1
| 4,074
|
Marina
|
78,587,066
| 14,824,108
|
plt.subplots() with gridspec in matplotlib
|
<p>Given that I plot the following:</p>
<pre><code>real = results['real']
pred = results['pred']
# Create the subplots with specified grid ratios
fig_sub, axs_sub = plt.subplots(2, 2, figsize=(8, 6), gridspec_kw={'hspace': 0,
'wspace': 0,
'width_ratios': [5, 1],
'height_ratios': [1, 5]})
# Turn off axes for specific subplots
axs_sub[0, 0].axis("off")
axs_sub[0, 1].axis("off")
axs_sub[1, 1].axis("off")
# Add the marginal KDE plots
sns.kdeplot(real, fill=True, ax=axs_sub[0, 0])
sns.kdeplot(y=pred, fill=True, ax=axs_sub[1, 1])
# Add the scatter plot
axs_sub[1, 0].scatter(real, pred, s=2)
axs_sub[1, 0].axline([0, 0], [1, 1], linestyle='--', color='red')
# Set labels (raw strings so the LaTeX backslashes are not treated as escapes)
axs_sub[1, 0].set_xlabel(r'Real $\log_{10}(\sigma)$', labelpad=10)
axs_sub[1, 0].set_ylabel(r'Pred. $\log_{10}(\sigma)$', labelpad=10)
# Add ticks on the top and right margins
axs_sub[1, 0].tick_params(top=True, right=True, direction='in')
# Synchronize the axis limits to remove the space
axs_sub[0, 0].set_xlim(axs_sub[1, 0].get_xlim())
axs_sub[1, 1].set_ylim(axs_sub[1, 0].get_ylim())
# Adjust the layout to minimize the space
fig_sub.tight_layout()
plt.show()
</code></pre>
<p>result:</p>
<p><a href="https://i.sstatic.net/3U9HxjlD.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/3U9HxjlD.png" alt="enter image description here" /></a></p>
<p>How can I plot multiple of these in a <code>plt.subplots()</code>? Say that for example I want to replicate this exact figure in 4 subplots.</p>
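<p>One option (a sketch: it assumes Matplotlib >= 3.4 for <code>Figure.subfigures</code>, and uses plain histograms in place of <code>sns.kdeplot</code> so the example stays self-contained) is to give each panel its own <code>SubFigure</code> and reuse the same gridspec layout inside it:</p>

```python
import matplotlib
matplotlib.use("Agg")  # non-interactive backend so the sketch runs headless
import matplotlib.pyplot as plt
import numpy as np

def parity_panel(subfig, real, pred):
    """Draw one scatter-with-marginals panel inside a SubFigure."""
    axs = subfig.subplots(2, 2, gridspec_kw={
        "hspace": 0, "wspace": 0,
        "width_ratios": [5, 1], "height_ratios": [1, 5]})
    axs[0, 0].axis("off")
    axs[0, 1].axis("off")
    axs[1, 1].axis("off")
    axs[1, 0].scatter(real, pred, s=2)
    axs[1, 0].axline([0, 0], [1, 1], linestyle="--", color="red")
    # Histograms stand in for sns.kdeplot; swap seaborn back in if preferred.
    axs[0, 0].hist(real, bins=30)
    axs[1, 1].hist(pred, bins=30, orientation="horizontal")
    # Share limits with the scatter so the marginals line up
    axs[0, 0].set_xlim(axs[1, 0].get_xlim())
    axs[1, 1].set_ylim(axs[1, 0].get_ylim())

rng = np.random.default_rng(0)
fig = plt.figure(figsize=(12, 10))
subfigs = fig.subfigures(2, 2)  # one SubFigure per panel
for subfig in subfigs.flat:
    real = rng.normal(size=200)
    parity_panel(subfig, real, real + rng.normal(scale=0.2, size=200))
```

<p>Each <code>SubFigure</code> behaves like an independent figure, so the original 2x2 gridspec code transfers unchanged.</p>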
|
<python><matplotlib>
|
2024-06-06 13:47:23
| 1
| 676
|
James Arten
|
78,586,938
| 9,360,663
|
SQLAlchemy create_all() doesn't create tables
|
<p>I'm trying to create a set of tables using SQLAlchemy 2.0</p>
<pre class="lang-py prettyprint-override"><code>import logging
import logging.handlers
logging.basicConfig(
level=logging.INFO,
format='%(asctime)s [%(name)-5s] [%(levelname)-3s] %(message)s'
)
from typing import List, Dict
# from DAL.database import Report, Database
import util.utility as util
import sqlalchemy
from sqlalchemy import MetaData, DateTime, func, exc
from sqlalchemy.engine import Engine
from sqlalchemy.orm import relationship, sessionmaker, Mapped, DeclarativeBase, mapped_column
from datetime import datetime
config: Dict = util.read_config("../static/configurations/custom.yaml")
class Base(DeclarativeBase):
pass
class Attribute_Details(Base):
"""
"""
__tablename__ = "attribute_details"
id: Mapped[int] = mapped_column(primary_key=True)
description: Mapped[str] = mapped_column()
opened: Mapped[datetime] = mapped_column(DateTime(timezone=True), server_default=func.now())
closed: Mapped[datetime] = mapped_column()
report_attr: Mapped["Report_Attributes"] = relationship(back_populates="details")
class Database:
"""
Abstraction layer for the SQLAlchemy db interface
"""
meta: MetaData
engine: Engine
session_maker: sessionmaker
__connected: bool = False
tables: List[Base] = [Attribute_Details]
def __init__(self, con_str: str):
self.logger = logging.getLogger(Database.__name__)
self.logger.info("Connecting to DB: {}".format(con_str))
self.db = sqlalchemy.create_engine(con_str, echo=False)
def connect(self) -> bool:
"""
Establish connection to the database
"""
try:
self.engine = self.db.connect()
self.session_maker = sessionmaker(bind=self.engine)
self.meta = sqlalchemy.MetaData()
return True
except exc.SQLAlchemyError as e:
print(str(e))
return False
def create_tables(self):
"""
Create the corresponding tables for the Report template
"""
for table in self.tables:
if not self.engine.dialect.has_table(self.engine, table.__tablename__):
table_obj = [table.__table__]
Base.metadata.create_all(self.engine, tables=table_obj)
# CreateTable(table_obj).compile(dialect=postgresql.dialect())
self.logger.info("Created DB Table {}".format(table.__tablename__))
else:
self.logger.info("DB Table {} already exists".format(table.__tablename__))
db = Database(config['db_con_test'])
db.connect()
db.create_tables()
</code></pre>
<p>Although it reports "Created DB Table attribute_details", the table is not created in the database. I tried the option with <code>Base.metadata.create_all()</code> as well as <code>CreateTable()</code>. I cannot spot the issue as the creation of tables with the new Declarative Mapping of SQL2.0 isn't explained very well... at least for me.</p>
|
<python><postgresql><sqlalchemy>
|
2024-06-06 13:28:59
| 1
| 1,154
|
po.pe
|
78,586,918
| 4,900,991
|
Multiprocessing with multiple class objects
|
<p>I have different Python classes, all of which have a method called <code>push</code>. What I am trying to do is:</p>
<ol>
<li>Create multiple class objects, one from each class</li>
<li>Call the <code>push</code> method on each</li>
<li>All these methods should execute in parallel</li>
<li>Log messages coming from the <code>push</code> method as and when it completes</li>
<li>Wait for all the methods to complete and exit the script</li>
</ol>
<p>I have tried multiple ways using</p>
<pre><code> #from pathos.multiprocessing import ProcessingPool as Pool
from multiprocessing import Pool
def push_wrapper(obj):
obj.push()
class_objects=[class1_obj1,class2_obj2,class3_obj3]
with Pool(processes=max_thread_count) as pool:
pool.map(push_wrapper, class_objects)
pool.close()
pool.join()
</code></pre>
<p>With this I am getting this error</p>
<blockquote>
<p><code>raise TypeError(f"cannot pickle {self.__class__.__name__!r} object")</code></p>
</blockquote>
<p>There are some other approaches, like using <code>pool.apply_async</code>, but they do not wait for all the methods to complete and exit immediately. When I add <code>job.wait()</code> along with <code>pool.apply_async</code> it waits for all threads to complete, but I want to print the result of each thread as and when it completes.</p>
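<p>A sketch of one way around both problems (the class and method bodies here are stand-ins, not the question's real classes): keep the classes at module level so their instances pickle cleanly, and use <code>concurrent.futures.as_completed</code> to log each result as soon as its <code>push()</code> finishes while still waiting for all of them:</p>

```python
from concurrent.futures import ProcessPoolExecutor, as_completed

# Stand-in classes.  They must be defined at module level -- instances of
# classes created inside a function, or holding unpicklable state such as
# open sockets, are what trigger the "cannot pickle ... object" error.
class Class1:
    def push(self):
        return "class1 pushed"

class Class2:
    def push(self):
        return "class2 pushed"

def push_wrapper(obj):
    return obj.push()

def push_all(objects, max_workers=3):
    results = []
    with ProcessPoolExecutor(max_workers=max_workers) as pool:
        futures = [pool.submit(push_wrapper, obj) for obj in objects]
        for future in as_completed(futures):
            result = future.result()
            print(result)          # logged as soon as each push finishes
            results.append(result)
    return results  # the with-block has already waited for everything

if __name__ == "__main__":
    push_all([Class1(), Class2()])
```

<p>If the real objects genuinely cannot be pickled, the usual alternatives are moving the unpicklable state inside <code>push</code>, or falling back to threads.</p>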
|
<python><multiprocessing>
|
2024-06-06 13:24:54
| 2
| 1,943
|
Sandeep Lade
|
78,586,815
| 1,022,260
|
Bind Model schema similar to binding Model database
|
<p>How can I dynamically bind a schema, similar to binding a model:</p>
<pre><code>db = PooledPostgresqlExtDatabase(
cfg_db, max_connections=8,
stale_timeout=300,
user=cfg_user, password=cfg_password,
host=cfg_host, port=cfg_port
)
db.bind([BaseModel])
# How do I set the schema for base model here??
</code></pre>
|
<python><peewee>
|
2024-06-06 13:08:29
| 1
| 11,513
|
mikeb
|
78,586,783
| 5,433,628
|
How to use `pycountry.db.Country` objects as a `pd.DataFrame` index?
|
<p>I am creating a dataset collecting data for a given set of countries. To avoid any ambiguity, I would like to use a <a href="https://pypi.org/project/pycountry/" rel="nofollow noreferrer"><code>pycountry.db.Country</code> object</a> to represent each country.</p>
<p>However, when setting the country as the index of my <code>pd.DataFrame</code>, I can't select (<code>.loc[]</code>) a record by passing a country, I'm getting this type of error — despite the record existing:</p>
<blockquote>
<p><code>raise KeyError(f"None of [{key}] are in the [{axis_name}]")</code></p>
</blockquote>
<p><strong>How to select a record in my <code>pd.DataFrame</code>, given a <code>pycountry.db.Country</code> object?</strong></p>
<p>Here is a working example:</p>
<pre class="lang-py prettyprint-override"><code>import pandas as pd
import pycountry
aruba: pycountry.db.Country = pycountry.countries.get(alpha_3="ABW")
belgium: pycountry.db.Country = pycountry.countries.get(alpha_3="BEL")
canada: pycountry.db.Country = pycountry.countries.get(alpha_3="CAN")
data: list[dict] = [
{"country": aruba, "population": 106_203},
{"country": belgium, "population": 11_429_336},
{"country": canada, "population": 37_058_856},
]
df: pd.DataFrame = pd.DataFrame(data)
df.set_index("country", inplace=True)
# df.index = df.index.astype(dtype="category") # optional: doesn't change the outcome
assert df.index[1] == belgium
assert df.index[1] is belgium
belgium_data = df.loc[belgium] # <-- fails with "None of [Index([('alpha_2', 'BE'),\n('alpha_3', 'BEL'),\n('flag', '🇧🇪'),\n('name', 'Belgium'),\n('numeric', '056'),\n('official_name', 'Kingdom of Belgium')],\ndtype='object', name='country')] are in the [index]"
</code></pre>
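<p>One workaround (a sketch using a hypothetical stand-in class, not the real <code>pycountry.db.Country</code>): index on a scalar key such as <code>alpha_3</code>. The failure appears to be because <code>.loc</code> treats the list-like <code>Country</code> object as a list of labels rather than a single label, which matches the error message listing its field/value pairs:</p>

```python
from dataclasses import dataclass
import pandas as pd

# Hypothetical stand-in for pycountry.db.Country (attribute names assumed):
@dataclass(frozen=True)
class Country:
    alpha_3: str
    name: str

belgium = Country("BEL", "Belgium")
canada = Country("CAN", "Canada")

df = pd.DataFrame([
    {"country": belgium, "population": 11_429_336},
    {"country": canada, "population": 37_058_856},
])
# Index on a scalar, hashable key derived from the object instead of the
# object itself, then look rows up via that key:
df["alpha_3"] = [c.alpha_3 for c in df["country"]]
df = df.set_index("alpha_3")

print(df.loc[belgium.alpha_3, "population"])
```

<p>The <code>country</code> column still carries the full object for unambiguous downstream use; only the index switches to a scalar.</p>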
|
<python><pandas><dataframe><indexing><country-codes>
|
2024-06-06 13:03:36
| 3
| 1,487
|
ebosi
|
78,586,719
| 3,070,181
|
Tkinter validation triggers an unexpected function causing application to hang
|
<p>I have a tkinter app. On load the focus is set to the first entry widget (self.entry_a). When I tab out of the entry (self.entry_a) for the first time it triggers the validation for the second entry (self.entry_b). This means that if I return <em>False</em> from the function <em>validate_b</em>, the application hangs.</p>
<p>Neither entry can be empty and I do not have default values for either.</p>
<p>How can I fix this?</p>
<pre><code>import tkinter as tk
from tkinter import ttk
class MainFrame():
def __init__(self):
self.root = tk.Tk()
self.entry_value_a = tk.StringVar(value='')
self.entry_value_b = tk.StringVar(value='')
self.root.geometry('400x300')
main_frame = self._main_frame()
main_frame.grid(row=0, column=0, sticky=tk.EW)
self.root.mainloop()
def _main_frame(self) -> tk.Frame:
frame = ttk.Frame(self.root)
frame.columnconfigure(1, weight=1)
valid_command_a = (self.root.register(self.validate_a), '%P')
invalid_command_a = (self.root.register(self.on_invalid_a),)
self.entry_a = ttk.Entry(frame, textvariable=self.entry_value_a)
self.entry_a.config(
validate='focusout',
validatecommand=valid_command_a,
invalidcommand=invalid_command_a
)
self.entry_a.grid(row=0, column=0)
self.entry_a.focus_set()
self.entry_a.select_range(0, 999)
valid_command_b = (self.root.register(self.validate_b), '%P')
invalid_command_b = (self.root.register(self.on_invalid_b),)
self.entry_b = ttk.Entry(frame, textvariable=self.entry_value_b)
self.entry_b.config(
validate='focusout',
validatecommand=valid_command_b,
invalidcommand=invalid_command_b
)
self.entry_b.grid(row=1, column=0)
self.label_error = ttk.Label(frame, foreground='red')
self.label_error.grid(row=2, column=0, sticky=tk.W, padx=5)
return frame
def validate_a(self, event: object = '') -> bool:
print('a triggered')
self.label_error['text'] = ''
if self.entry_value_a.get() == '':
return False
return True
def on_invalid_a(self):
self.label_error['text'] = 'Invalid entry a'
self.entry_a.focus_set()
self.entry_a.select_range(0, 999)
def validate_b(self, event: object = '') -> bool:
print('b triggered')
self.label_error['text'] = ''
if self.entry_value_b.get() == '':
return # False
return True
def on_invalid_b(self):
self.label_error['text'] = 'Invalid entry b'
self.entry_b.focus_set()
self.entry_b.select_range(0, 999)
if __name__ == '__main__':
MainFrame()
</code></pre>
|
<python><validation><tkinter>
|
2024-06-06 12:54:43
| 1
| 3,841
|
Psionman
|
78,586,671
| 27,912
|
Using Luigi for a small POC to run process which do not create files, and it doesn't seem to work
|
<p>I want to execute <em>MyTask1</em> and <strong>then</strong> <em>MyTask2</em>. Only <em>MyTask1</em> executes. Luigi reports that I have an <code>Unfulfilled dependency</code>. However, it prints...</p>
<pre><code>====================
MyTask1: 5
====================
</code></pre>
<p>so I know that <em>MyTask1</em> did execute.</p>
<p>This POC is in preparation for a project where I am going to be chaining a bunch of tasks that do work but don't generally create files (output).</p>
<p>Here is my code...</p>
<pre><code>from enum import Enum
import luigi
class MyTask1(luigi.Task):
x = luigi.IntParameter()
y = luigi.IntParameter(default=0)
task_complete = False
def run(self):
print(f"{'='*20}\nMyTask1: {self.x + self.y}\n{'='*20}")
self.task_complete = True
def complete(self):
return self.task_complete
class MyTask2(luigi.Task):
x = luigi.IntParameter()
y = luigi.IntParameter(default=1)
z = luigi.IntParameter(default=2)
task_complete = False
def requires(self):
return MyTask1(x=self.x, y=self.y)
def run(self):
print(f"{'='*20}\nMyTask2: {self.x * self.y * self.z}\n{'='*20}")
self.task_complete = True
def complete(self):
return self.task_complete
if __name__ == '__main__':
luigi.build([MyTask2(x=3,y=2)], workers=3, local_scheduler=True)
</code></pre>
|
<python><workflow><luigi>
|
2024-06-06 12:44:55
| 1
| 905
|
Jason V
|
78,586,632
| 14,824,108
|
Imposing KDE plots on top of a scatter plot
|
<p>I have a parity plot deriving from Machine Learning model predictions vs actual values. I would like to create some density plots to be shown on top and right corners of the main figure. The effect that I would like to achieve can be seen here (<a href="https://www.microsoft.com/en-us/research/blog/mattergen-property-guided-materials-design/" rel="nofollow noreferrer">original paper</a>):</p>
<p><a href="https://i.sstatic.net/ySneJi0w.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/ySneJi0w.png" alt="" /></a></p>
<p>So given the following:</p>
<pre><code>import matplotlib.pyplot as plt
import seaborn as sns

real = results['real']
pred = results['pred']
fig, ax = plt.subplots(2, 3, figsize=(18, 12))
# Parity plot (ax is a 2x3 array of Axes, so index a single subplot)
ax[0, 0].scatter(real, pred)
ax[0, 0].axline([0, 0], [1, 1], linestyle='--', color='red')
</code></pre>
<p>How can I obtain the KDE plots on top, sharing the main figure axes?</p>
|
<python><matplotlib>
|
2024-06-06 12:39:04
| 0
| 676
|
James Arten
|
78,586,621
| 3,306,091
|
HuggingFace pipeline - Debug prompt
|
<p>I've defined a pipeline using Huggingface transformer library.</p>
<pre><code>pipe = pipeline(
"text-generation",
model=myllm,
tokenizer=tokenizer,
max_new_tokens=512,
)
</code></pre>
<p>I'd like to test it:</p>
<pre><code>result = pipe("Some input prompt for the LLM")
</code></pre>
<p><strong>How can I debug the prompt actually sent to the LLM?</strong></p>
<p>I expect the pipeline to apply the prompt template (tokenizer.default_chat_template), but how can I verify what the prompt looks like after the template has been applied?</p>
|
<python><nlp><huggingface-transformers>
|
2024-06-06 12:36:47
| 1
| 1,204
|
MarcoM
|
78,586,599
| 857,390
|
Annotate factory method in parent class that returns instances of subclasses
|
<p>I'm trying to add type annotations to a factory method in a parent class that returns instances of subclasses:</p>
<pre class="lang-py prettyprint-override"><code>class Parent:
@classmethod
def make(cls, param: int):
if param > 0:
return ChildA()
else:
return ChildB()
class ChildA(Parent):
pass
class ChildB(Parent):
pass
</code></pre>
<p>According to my understanding, <code>Self</code> should be a valid type annotation for the return value of <code>make</code>. However, with <code>typing_extension.Self</code> (from <code>typing_extensions</code> <code>4.12.1</code>, I'm on Python 3.8) Mypy <code>1.10.0</code> complains:</p>
<pre><code>self_sandbox.py:7: error: Incompatible return value type (got "ChildA", expected "Self") [return-value]
self_sandbox.py:9: error: Incompatible return value type (got "ChildB", expected "Self") [return-value]
</code></pre>
<p>Since <code>ChildA</code> and <code>ChildB</code> are subclasses of <code>Parent</code>, I don't understand why this is happening. Note that Mypy does accept the following code:</p>
<pre><code>param = 1
foo: Parent = ChildA() if param > 0 else ChildB()
</code></pre>
<p>In my understanding, this is equivalent to <code>make</code>, type-wise.</p>
<p>Note that <code>Parent</code> works as an annotation for the return value of <code>make</code>, and that's the workaround I'm currently using. But I'd really prefer to use <code>Self</code> because it avoids duplicating the class name.</p>
<p>Is this an issue with Mypy or <code>typing_extensions</code>, or am I misunderstanding something?</p>
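<p>This appears to be expected behaviour rather than a bug: in a classmethod, <code>Self</code> promises "an instance of whichever class the method was called on", so <code>ChildB.make()</code> would have to return a <code>ChildB</code>, which this factory cannot guarantee. A sketch of the signature Mypy accepts (the base class, matching the current workaround):</p>

```python
from __future__ import annotations

class Parent:
    @classmethod
    def make(cls, param: int) -> Parent:
        # `Self` would mean ChildB.make() must return a ChildB; this factory
        # can return either subclass, so the base class is the most precise
        # return type the checker will accept without casts or overloads.
        return ChildA() if param > 0 else ChildB()

class ChildA(Parent):
    pass

class ChildB(Parent):
    pass
```

<p>The two snippets are not equivalent: the ternary expression is checked against the annotation <code>Parent</code>, while <code>Self</code> is a stricter, caller-dependent type.</p>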
|
<python><python-typing>
|
2024-06-06 12:32:47
| 1
| 10,575
|
Florian Brucker
|
78,586,529
| 7,729,563
|
How to run ptpython on Windows 11 in asyncio mode
|
<p>I have Windows 11 and Python 3.12.3 and wish to experiment with the asyncio REPL. It works fine with the built-in REPL:</p>
<pre class="lang-py prettyprint-override"><code>PS C:\> python -m asyncio
asyncio REPL 3.12.3 (tags/v3.12.3:f6650f9, Apr 9 2024, 14:05:25) [MSC v.1938 64 bit (AMD64)] on win32
Use "await" directly instead of "asyncio.run()".
Type "help", "copyright", "credits" or "license" for more information.
>>> import asyncio
>>> await asyncio.sleep(1, 'Done!')
'Done!'
>>> exit()
</code></pre>
<p>However, I cannot get it to work with ptpython:</p>
<pre class="lang-py prettyprint-override"><code>PS C:\> ptpython --asyncio
Starting ptpython asyncio REPL
Use "await" directly instead of "asyncio.run()".
In [1]: await asyncio.sleep(1, 'Done!')
Traceback (most recent call last):
File "C:\Users\james\AppData\Roaming\Python\Python312\site-packages\ptpython\repl.py", line 183, in run_and_show_expression_async
loop.add_signal_handler(signal.SIGINT, lambda *_: task.cancel())
File "C:\Program Files\Python312\Lib\asyncio\events.py", line 582, in add_signal_handler
raise NotImplementedError
NotImplementedError
Traceback (most recent call last):
File "<frozen runpy>", line 198, in _run_module_as_main
File "<frozen runpy>", line 88, in _run_code
File "C:\Users\james\AppData\Roaming\Python\Python312\Scripts\ptpython.exe\__main__.py", line 7, in <module>
File "C:\Users\james\AppData\Roaming\Python\Python312\site-packages\ptpython\entry_points\run_ptpython.py", line 231, in run
asyncio.run(embed_result)
File "C:\Program Files\Python312\Lib\asyncio\runners.py", line 194, in run
return runner.run(main)
^^^^^^^^^^^^^^^^
File "C:\Program Files\Python312\Lib\asyncio\runners.py", line 118, in run
return self._loop.run_until_complete(task)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Program Files\Python312\Lib\asyncio\base_events.py", line 687, in run_until_complete
return future.result()
^^^^^^^^^^^^^^^
File "C:\Users\james\AppData\Roaming\Python\Python312\site-packages\ptpython\repl.py", line 528, in coroutine
await repl.run_async()
File "C:\Users\james\AppData\Roaming\Python\Python312\site-packages\ptpython\repl.py", line 252, in run_async
await self.run_and_show_expression_async(text)
File "C:\Users\james\AppData\Roaming\Python\Python312\site-packages\ptpython\repl.py", line 206, in run_and_show_expression_async
loop.remove_signal_handler(signal.SIGINT)
File "C:\Program Files\Python312\Lib\asyncio\events.py", line 585, in remove_signal_handler
raise NotImplementedError
NotImplementedError
Task exception was never retrieved
future: <Task finished name='Task-314' coro=<PythonRepl.run_and_show_expression_async.<locals>.eval() done, defined at C:\Users\james\AppData\Roaming\Python\Python312\site-packages\ptpython\repl.py:172> exception=NameError("name 'asyncio' is not defined")>
Traceback (most recent call last):
File "C:\Users\james\AppData\Roaming\Python\Python312\site-packages\ptpython\repl.py", line 175, in eval
return await self.eval_async(text)
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\james\AppData\Roaming\Python\Python312\site-packages\ptpython\repl.py", line 329, in eval_async
result = await result
^^^^^^^^^^^^
File "<stdin>", line 1, in <module>
NameError: name 'asyncio' is not defined. Did you forget to import 'asyncio'
</code></pre>
<p>From the ptpython docs, this should work - a similar example is shown working:
<a href="https://github.com/prompt-toolkit/ptpython" rel="nofollow noreferrer">https://github.com/prompt-toolkit/ptpython</a></p>
<p>How do I get ptpython to work in asyncio mode on Windows 11?</p>
<p>Update on 6/7/2024, 21:15 UTC - tried on Python 3.8.10 to see if only a problem on 3.12.3, similar results:</p>
<pre class="lang-py prettyprint-override"><code>PS C:\> python -m asyncio
asyncio REPL 3.8.10 (tags/v3.8.10:3d8993a, May 3 2021, 11:34:34) [MSC v.1928 32 bit (Intel)] on win32
Use "await" directly instead of "asyncio.run()".
Type "help", "copyright", "credits" or "license" for more information.
>>> import asyncio
>>> await asyncio.sleep(1, 'Done!')
'Done!'
>>> exit()
C:\> ptpython --asyncio
Starting ptpython asyncio REPL
Use "await" directly instead of "asyncio.run()".
In [1]: await asyncio.sleep(1, 'Done!')
Traceback (most recent call last):
File "c:\users\james\appdata\local\programs\python\python38-32\lib\site-packages\ptpython\repl.py", line 183, in run_
and_show_expression_async
loop.add_signal_handler(signal.SIGINT, lambda *_: task.cancel())
File "c:\users\james\appdata\local\programs\python\python38-32\lib\asyncio\events.py", line 536, in add_signal_handler
raise NotImplementedError
NotImplementedError
Traceback (most recent call last):
File "c:\users\james\appdata\local\programs\python\python38-32\lib\runpy.py", line 194, in _run_module_as_main
return _run_code(code, main_globals, None,
File "c:\users\james\appdata\local\programs\python\python38-32\lib\runpy.py", line 87, in _run_code
exec(code, run_globals)
File "C:\users\james\appdata\local\Programs\Python\Python38-32\Scripts\ptpython.exe\__main__.py", line 7, in <module>
File "c:\users\james\appdata\local\programs\python\python38-32\lib\site-packages\ptpython\entry_points\run_ptpython.py", line 231, in run
asyncio.run(embed_result)
File "c:\users\james\appdata\local\programs\python\python38-32\lib\asyncio\runners.py", line 44, in run
return loop.run_until_complete(main)
File "c:\users\james\appdata\local\programs\python\python38-32\lib\asyncio\base_events.py", line 616, in run_until_complete
return future.result()
File "c:\users\james\appdata\local\programs\python\python38-32\lib\site-packages\ptpython\repl.py", line 528, in coroutine
await repl.run_async()
File "c:\users\james\appdata\local\programs\python\python38-32\lib\site-packages\ptpython\repl.py", line 252, in run_async
await self.run_and_show_expression_async(text)
File "c:\users\james\appdata\local\programs\python\python38-32\lib\site-packages\ptpython\repl.py", line 206, in run_and_show_expression_async
loop.remove_signal_handler(signal.SIGINT)
File "c:\users\james\appdata\local\programs\python\python38-32\lib\asyncio\events.py", line 539, in remove_signal_handler
raise NotImplementedError
NotImplementedError
Task exception was never retrieved
future: <Task finished name='Task-61' coro=<PythonRepl.run_and_show_expression_async.<locals>.eval() done, defined at c:\users\james\appdata\local\programs\python\python38-32\lib\site-packages\ptpython\repl.py:172> exception=NameError("name 'asyncio' is not defined")>
Traceback (most recent call last):
File "c:\users\james\appdata\local\programs\python\python38-32\lib\site-packages\ptpython\repl.py", line 175, in eval
return await self.eval_async(text)
File "c:\users\james\appdata\local\programs\python\python38-32\lib\site-packages\ptpython\repl.py", line 329, in eval_async
result = await result
File "<stdin>", line 1, in <module>
NameError: name 'asyncio' is not defined
</code></pre>
<p>Note - the async REPL was added in 3.8:
<a href="https://docs.python.org/3/whatsnew/3.8.html#asyncio" rel="nofollow noreferrer">https://docs.python.org/3/whatsnew/3.8.html#asyncio</a></p>
<p>Update on 6/12/2024, 16:36 UTC - work around from @jupiterbjy:</p>
<pre class="lang-py prettyprint-override"><code>PS C:\> ptpython
>>> import asyncio
>>> async def main():
2 return await asyncio.sleep(1, 'Done!')
3
>>> print(asyncio.run(main()))
Done!
>>> exit()
</code></pre>
|
<python><python-3.x><python-asyncio><ptpython>
|
2024-06-06 12:20:24
| 0
| 529
|
James S.
|
78,586,501
| 312,444
|
Building a "real-time" outbox pattern on Python / SQLALchemy
|
<p>I have created an outbox table in my FastAPI/SQLAlchemy/Postgres application. The application is hosted in GCP. Now I need to publish these events by "reading" from the outbox table, and they need to be published in real time.
What are the options that I have?</p>
<ol>
<li>A cron job that reads these messages and publishes them. The problem with this solution is that the messages could be delayed, given the polling nature of a cron job.</li>
<li>Use a CDC tool, like GCP Datastream. It seems complex, as GCP Datastream still doesn't have a simple way to move these messages to Pub/Sub; we would actually need to move the messages to GCP storage and process them from there.</li>
<li>Debezium. The problem is that it would be a new moving piece in my architecture that we need to manage.</li>
</ol>
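<p>A fourth option worth listing (a sketch, not production DDL - the channel, table, and function names are placeholders): Postgres <code>LISTEN/NOTIFY</code>. A trigger pushes a notification to listeners in near real time whenever a row lands in the outbox, and the cron/poller can be kept only as a catch-up fallback:</p>

```sql
-- Fire a notification for every new outbox row
CREATE OR REPLACE FUNCTION notify_outbox() RETURNS trigger AS $$
BEGIN
    PERFORM pg_notify('outbox_channel', NEW.id::text);
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER outbox_notify
AFTER INSERT ON outbox
FOR EACH ROW EXECUTE FUNCTION notify_outbox();
```

<p>A listener process (e.g. via psycopg's notification API) then reads the notified rows and publishes them to Pub/Sub. <code>NOTIFY</code> is transactional: the notification is delivered only if the inserting transaction commits, so listeners never see rolled-back outbox rows.</p>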
|
<python><postgresql><google-cloud-platform><outbox-pattern>
|
2024-06-06 12:16:48
| 0
| 8,446
|
p.magalhaes
|
78,586,497
| 6,941,400
|
Getting openai.BadRequestError: Error code: 400: 'Extra inputs are not permitted' with an LLM model hosted on an on-prem GPU
|
<p>I am not really able to find that much on this.
Here's some discussion on <a href="https://github.com/langchain-ai/langgraph/discussions/187" rel="nofollow noreferrer">Github</a>. I was following this <a href="https://github.com/anurag899/openAI-project/blob/main/Multi-Agent%20Coding%20Framework%20using%20LangGraph/LangGraph%20-%20Code%20Development%20using%20Multi-Agent%20Flow.ipynb" rel="nofollow noreferrer">tutorial</a>. Is the issue that I can't use Mixtral with tool calling? Has anyone used Mixtral/some other local LLM models for Tool Calling? I am trying to figure out how to get this to work.</p>
<pre><code>from langchain_openai import ChatOpenAI
import httpx
import os
from langchain.chains.openai_functions import create_structured_output_runnable
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.pydantic_v1 import BaseModel, Field
from typing import Annotated, Any, Dict, Optional, Sequence, TypedDict, List, Tuple
http_client = httpx.Client(verify=False)
llm = ChatOpenAI(
base_url="<URL RUNNING MIXTRAL MODEL>",
http_client=http_client,
api_key="API KEY",
model="mistral-7b-instruct-v02",
)
class Test(BaseModel):
"""Plan to follow in future"""
Input: List[List] = Field(
description="Input for Test cases to evaluate the provided code"
)
Output: List[List] = Field(
description="Expected Output for Test cases to evaluate the provided code"
)
test_gen_prompt = ChatPromptTemplate.from_template(
'''"Write unit tests for the provided code {code}.
'''
)
tester_agent = create_structured_output_runnable(
Test, llm, test_gen_prompt
)
sample_code = """<SAMPLE JAVA Code Base>"""
tester_agent.invoke({"code":sample_code})
</code></pre>
<p>Detailed trace back</p>
<p><code>openai.BadRequestError: Error code: 400 - {'object': 'error', 'message': "[{'type': 'extra_forbidden', 'loc': ('body', 'function_call'), 'msg': 'Extra inputs are not permitted', 'input': {'name': '_OutputFormatter'}}, {'type': 'extra_forbidden', 'loc': ('body', 'functions'), 'msg': 'Extra inputs are not permitted', 'input': [{'name': '_OutputFormatter', 'description': 'Output formatter. Should always be used to format your response to the user.', 'parameters': {'type': 'object', 'properties': {'output': {'description': 'Plan to follow in future', 'type': 'object', 'properties': {'Input': {'description': 'Input for Test cases to evaluate the provided code', 'type': 'array', 'items': {'type': 'array', 'items': {}}}, 'Output': {'description': 'Expected Output for Test cases to evaluate the provided code', 'type': 'array', 'items': {'type': 'array', 'items': {}}}}, 'required': ['Input', 'Output']}}, 'required': ['output']}}]}]", 'type': 'BadRequestError', 'param': None, 'code': 400} </code></p>
|
<python><langchain><langgraph><mixtral-8x7b>
|
2024-06-06 12:16:02
| 0
| 576
|
Anshuman Kumar
|
78,586,256
| 3,015,186
|
How to force DuckDB to reconnect to a file and prevent it from remembering contents of a deleted database file?
|
<h3>Motivation</h3>
<p>I'm working on a larger dashboard application which uses duckdb. I found a bug which, in a nutshell, looks like this:</p>
<pre class="lang-py prettyprint-override"><code># create new connection
con = duckdb.connect('/somedir/duckdb.db')
# do something
# remove or replace the file
Path('/somedir/duckdb.db').unlink()
shutil.copy('/otherdir/duckdb.db', '/somedir/duckdb.db')
# create new connection to the *new* file with same location
con = duckdb.connect('/somedir/duckdb.db')
# PROBLEM: the connection remembers the contents from the first file!
</code></pre>
<h3>Question</h3>
<p>How do you guarantee, when you create a new <a href="https://duckdb.org/docs/api/python/reference/#duckdb.DuckDBPyConnection" rel="nofollow noreferrer">duckdb.DuckDBPyConnection</a> object with:</p>
<pre><code>import duckdb
con = duckdb.connect('/somedir/duckdb.db')
</code></pre>
<p>that duckdb would not try to use some internal cache? In essence, I want a <em>new</em>, clean connection to the specified location or an Exception to be raised no matter what has happened before in the application.</p>
<h3>MWE for reproducing the problem</h3>
<pre class="lang-py prettyprint-override"><code>from pathlib import Path
import duckdb
filepath = Path("/tmp/test_duckdb/data.db")
filepath.parent.mkdir(exist_ok=True, parents=True)
con = duckdb.connect(str(filepath))
con.sql(f"CREATE TABLE first AS SELECT 42 AS i, 84 AS j")
print("before removing:\n" + str(con.sql("show tables")))
filepath.unlink()
con2 = duckdb.connect(str(filepath))
print("after removing:\n" + str(con2.sql("show tables")))
con2.sql(f"CREATE TABLE first AS SELECT 42 AS i, 84 AS j")
</code></pre>
<h4>Actual outcome:</h4>
<pre><code>before removing:
┌─────────┐
│ name │
│ varchar │
├─────────┤
│ first │
└─────────┘
after removing:
┌─────────┐
│ name │
│ varchar │
├─────────┤
│ first │
└─────────┘
---------------------------------------------------------------------------
CatalogException Traceback (most recent call last)
Cell In[1], line 15
13 con2 = duckdb.connect(str(filepath))
14 print("after removing:\n" + str(con2.sql("show tables")))
---> 15 con2.sql(f"CREATE TABLE first AS SELECT 42 AS i, 84 AS j")
CatalogException: Catalog Error: Table with name "first" already exists!
</code></pre>
<h5>Desired outcome:</h5>
<p>Raise an Exception on the second connect attempt as there are open connections to the file.</p>
<h3>Note:</h3>
<p>The problem is part of a larger application and fixing the bugs there is not the point of this question. The point is that even if you have a bug in your application logic, how do you prevent duckdb giving connection to already deleted/replaced database file? Is there some <code>duckdb.clear_all_caches()</code> method or <code>duckdb.connect(filepath, raise_if_open_connections=True)</code>?</p>
<p><strong>Edit</strong>: I should probably explicitly add that I'm aware that connections should be closed. The question is: can you do something about it if the connections are <em>not</em> closed?</p>
|
<python><duckdb>
|
2024-06-06 11:26:22
| 2
| 35,267
|
Niko Fohr
|
78,586,213
| 1,169,096
|
have top-level parser and subparsers act on the same variable
|
<p>I have an <code>ArgumentParser</code> with sub-parsers. Some flags are common to all sub-parsers, and I would like to be able to specify them <em>either</em> before or after the sub-command, or even mix before and after (at the user's discretion).</p>
<p>Something like this:</p>
<pre class="lang-bash prettyprint-override"><code>$ ./test -v
Namespace(v=1)
$ ./test.py test -vv
Namespace(v=2)
$ ./test.py -vvv test
Namespace(v=3)
$ ./test.py -vv test -vv
Namespace(v=4)
</code></pre>
<p>So I tried something like this:</p>
<pre class="lang-py prettyprint-override"><code>#!/usr/bin/env python3
import argparse
parser = argparse.ArgumentParser()
parser.add_argument("-v", action="count")
subparsers = parser.add_subparsers()
sub = subparsers.add_parser("test")
sub.add_argument("-v", action="count")
print(parser.parse_args())
</code></pre>
<p>Running this program "test.py" gives me:</p>
<pre class="lang-bash prettyprint-override"><code> ./test.py -h
usage: test.py [-h] [-v] {test} ...
positional arguments:
{test}
options:
-h, --help show this help message and exit
-v
$ ./test.py test -h
usage: test.py test [-h] [-v]
options:
-h, --help show this help message and exit
-v
$ ./test.py -v
Namespace(v=1)
$ ./test.py test -vv
Namespace(v=2)
</code></pre>
<p>cool.</p>
<p>but it also gives me:</p>
<pre class="lang-bash prettyprint-override"><code>$ ./test.py -vvv test
Namespace(v=None)
$ ./test.py -vv test -vv
Namespace(v=2)
</code></pre>
<p>less cool :-(</p>
<p>I also tried to specify parent-parsers explicitly:</p>
<pre class="lang-py prettyprint-override"><code>common = argparse.ArgumentParser(add_help=False)
common.add_argument("-v", action="count")
parser = argparse.ArgumentParser(parents=[common])
sub = parser.add_subparsers().add_parser("test", parents=[common])
print(parser.parse_args())
</code></pre>
<p>but the result is the same.</p>
<p>So, I guess that as soon as the <code>test</code> subparser kicks in, it resets the value of <code>v</code> to <code>None</code>.</p>
<p>How do I prevent this?</p>
<p>(I notice that <a href="https://stackoverflow.com/questions/37933480">How can I define global options with sub-parsers in python argparse?</a> is similar, and the answer there suggests to use different <code>dest</code> variables for the top-level and the sub-level parser. i would like to avoid that...)</p>
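<p>One possible workaround, shown below as a sketch: since the sub-parser parses into a <em>fresh</em> namespace and copies the result back (which is what clobbers <code>v</code>), a sub-parser action that parses into the <em>shared</em> namespace lets <code>count</code> actions keep accumulating. Note it subclasses the private <code>argparse._SubParsersAction</code> class, so it is version-sensitive and not an official API:</p>

```python
import argparse


class SharedNamespaceSubParsersAction(argparse._SubParsersAction):
    """Sub-parser action that parses into the *parent* namespace.

    Built on the private argparse._SubParsersAction class, so treat it
    as a sketch that may need adjusting across Python versions.
    """

    def __call__(self, parser, namespace, values, option_string=None):
        name, arg_strings = values[0], list(values[1:])
        if self.dest is not argparse.SUPPRESS:
            setattr(namespace, self.dest, name)
        try:
            subparser = self._name_parser_map[name]
        except KeyError:
            choices = ", ".join(self._name_parser_map)
            raise argparse.ArgumentError(
                self, f"unknown parser {name!r} (choices: {choices})")
        # Reuse the shared namespace instead of a fresh one, so the
        # top-level -v count is neither reset to None nor overwritten.
        subparser.parse_args(arg_strings, namespace)


parser = argparse.ArgumentParser()
parser.add_argument("-v", action="count", default=0)
subparsers = parser.add_subparsers(action=SharedNamespaceSubParsersAction)
sub = subparsers.add_parser("test")
sub.add_argument("-v", action="count")  # its default is never applied: dest already set

print(parser.parse_args(["-vv", "test", "-vv"]))  # -> Namespace(v=4)
```

<p>With this, <code>-vvv test</code> gives <code>v=3</code> and <code>-vv test -vv</code> gives <code>v=4</code>, with a single <code>dest</code>.</p>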
|
<python><argparse><subparsers>
|
2024-06-06 11:15:27
| 3
| 32,070
|
umläute
|
78,586,140
| 7,626,198
|
QUARTO does not find python
|
<p>I am using Quarto in VS Code.
When I run <code>quarto check</code>, the output is:</p>
<pre><code>Unable to locate an installed version of Python 3.
Install Python 3 from https://www.python.org/downloads/
</code></pre>
<p>However, when I open any *.py file, VS Code detects my Python.
I installed Python from the Windows Store; it is installed in this folder:
C:\Users\usuario\AppData\Local\Microsoft\WindowsApps\PythonSoftwareFoundation.Python.3.12_qbz5n2kfra8p0</p>
<p>How can I configure Quarto to solve this problem?</p>
|
<python><visual-studio-code><quarto>
|
2024-06-06 11:00:29
| 4
| 442
|
Juan
|
78,585,952
| 9,671,120
|
pytest equivalent of unittest.main()
|
<p>I can run a unit test <code>test_foo.py</code> in debug mode by simply pressing F5 in my IDE of choice:</p>
<pre class="lang-py prettyprint-override"><code># test_foo_module.py
import unittest
class TestFoo(unittest.TestCase):
def test_foo(self):
self.assertTrue(True)
if __name__ == '__main__':
unittest.main()
</code></pre>
<p>I'd have thought the <em>exact</em> pytest equivalent to be:</p>
<pre class="lang-py prettyprint-override"><code># test_foo_module.py
import pytest
def test_foo():
assert True
if __name__ == '__main__':
pytest.main()
</code></pre>
<p>But <code>pytest.main()</code> scans the whole project directory. I want to execute only this module in debug mode.</p>
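<p>A sketch of the usual workaround: <code>pytest.main</code> accepts an argv-style list, so passing this file's own path restricts collection to just this module (the function name <code>run_only_this_module</code> is made up for illustration):</p>

```python
import pytest


def test_foo():
    assert True


def run_only_this_module() -> int:
    # pytest.main() with no arguments collects from the rootdir; passing
    # this file's own path restricts collection to just this module,
    # which is the closest analogue of unittest.main()
    return pytest.main([__file__, "-q"])


# in the real test module, wire it up like unittest.main():
#   if __name__ == "__main__":
#       raise SystemExit(run_only_this_module())
```

<p><code>pytest.main</code> returns an exit code rather than calling <code>sys.exit</code> itself, hence the <code>SystemExit</code> in the guard; F5 then debugs only this file's tests.</p>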
|
<python><unit-testing><pytest>
|
2024-06-06 10:20:41
| 2
| 386
|
C. Claudio
|
78,585,814
| 3,488,901
|
How to catch if-then (and optional else) block from rsyslog config file?
|
<p>I'd like to catch the following conditional blocks from rsyslog conf file:</p>
<pre><code>if (($fromhost-ip == '127.0.0.1') and ($syslogfacility-text == "local7")) then {
if (re_match($msg, "^[ ]*[A-Z0-9]{4}\\|[^|]+\\|")) then {
action(type="omfile" DynaFile="a_DynFile" dynaFileCacheSize="128" fileCreateMode="0644" dirCreateMode="0755" dirGroup="log" fileGroup="log" asyncWriting="on")
} else {
action(type="omfile" DynaFile="b_malformedDynFile" dynaFileCacheSize="128" fileCreateMode="0644" dirCreateMode="0755" dirGroup="log" fileGroup="log" asyncWriting="on")
}
}
if (($fromhost-ip == '127.0.0.1') and ($syslogfacility-text == "local6")) then {
action(type="omfile" DynaFile="qradarDynFile" dynaFileCacheSize="128" fileCreateMode="0644" dirCreateMode="0755" dirGroup="log" fileGroup="log" asyncWriting="on")
}
else { action(type="omfile" DynaFile="b_malformedDynFile" dynaFileCacheSize="128" fileCreateMode="0644" dirCreateMode="0755" dirGroup="log" fileGroup="log" asyncWriting="on") }
</code></pre>
<p>I'm trying to capture the body of the condition <code>(($fromhost-ip == '127.0.0.1') and ($syslogfacility-text == "local7"))</code> from the first block, and the entire body, <strong>ignoring nested blocks</strong>:</p>
<pre><code> if (re_match($msg, "^[ ]*[A-Z0-9]{4}\\|[^|]+\\|")) then {
action(type="omfile" DynaFile="a_DynFile" dynaFileCacheSize="128" fileCreateMode="0644" dirCreateMode="0755" dirGroup="log" fileGroup="log" asyncWriting="on")
} else {
action(type="omfile" DynaFile="b_malformedDynFile" dynaFileCacheSize="128" fileCreateMode="0644" dirCreateMode="0755" dirGroup="log" fileGroup="log" asyncWriting="on")
}
</code></pre>
<p>And, if present, also the <code>else</code> block, but only at the main (1st) level; so for the 2nd example, get all three groups:</p>
<ol>
<li><code>(($fromhost-ip == '127.0.0.1') and ($syslogfacility-text == "local6"))</code></li>
<li><code>action(type="omfile" DynaFile="qradarDynFile" dynaFileCacheSize="128" fileCreateMode="0644" dirCreateMode="0755" dirGroup="log" fileGroup="log" asyncWriting="on")</code></li>
<li><code>action(type="omfile" DynaFile="b_malformedDynFile" dynaFileCacheSize="128" fileCreateMode="0644" dirCreateMode="0755" dirGroup="log" fileGroup="log" asyncWriting="on")</code></li>
</ol>
<p>I'm trying to get it with the pattern <code>if([^}]*)|then([^}]*)|else([^}]*)</code>, but it seems to be too general and stops at the first curly bracket in the <code>then</code> body.</p>
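<p>Nested, balanced braces are beyond what a single <code>re</code> pattern can match, so one alternative is a small brace-counting scanner. The sketch below treats every <code>{</code>/<code>}</code> as structural, which happens to work here because quantifiers like <code>{4}</code> inside the quoted regex are balanced; unbalanced braces inside quoted strings would break it:</p>

```python
import re


def _matching_brace(text, open_idx):
    """Index of the '}' matching the '{' at open_idx (naive: ignores quoting)."""
    depth = 0
    for i in range(open_idx, len(text)):
        if text[i] == "{":
            depth += 1
        elif text[i] == "}":
            depth -= 1
            if depth == 0:
                return i
    raise ValueError("unbalanced braces")


def top_level_if_blocks(text):
    """Return (condition, then_body, else_body_or_None) per 1st-level if."""
    results, pos = [], 0
    if_pat, else_pat = re.compile(r"\bif\b"), re.compile(r"\s*else\s*\{")
    while True:
        m = if_pat.search(text, pos)
        if not m:
            return results
        # condition: balanced parentheses right after 'if'
        start = text.index("(", m.end())
        depth, i = 0, start
        while True:
            if text[i] == "(":
                depth += 1
            elif text[i] == ")":
                depth -= 1
                if depth == 0:
                    break
            i += 1
        condition = text[start:i + 1]
        # then-body: balanced braces after 'then'
        open_then = text.index("{", i)
        close_then = _matching_brace(text, open_then)
        then_body = text[open_then + 1:close_then]
        pos = close_then + 1          # jump past the block -> nested ifs ignored
        else_body = None
        m2 = else_pat.match(text, pos)
        if m2:
            open_else = m2.end() - 1
            close_else = _matching_brace(text, open_else)
            else_body = text[open_else + 1:close_else]
            pos = close_else + 1
        results.append((condition, then_body, else_body))
```

<p>Running it over the two example blocks yields the <code>local7</code> condition with its full nested body (no <code>else</code> at the top level), then the <code>local6</code> condition with its <code>then</code> and <code>else</code> bodies as separate groups.</p>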
|
<python><regex><syntax><rsyslog>
|
2024-06-06 09:55:15
| 1
| 417
|
ibt23sec5
|
78,585,754
| 19,392,385
|
Tight subplot axes without their plot to the figure
|
<p>I have made a subplot in matplotlib and managed to put the different cmaps I have in the same column. For a minimal working example (with dummy cmaps):</p>
<pre class="lang-py prettyprint-override"><code>import matplotlib.pyplot as plt
import numpy as np
# Generate sample data
data1 = np.random.rand(10, 10)
data2 = np.random.rand(10, 10)
data3 = np.random.rand(10, 10)
fig_bandwidth = plt.figure(figsize=(12, 6))
ax1 = plt.subplot(3, 2, 6)
ax2 = plt.subplot(3, 2, 4)
ax3 = plt.subplot(3, 2, 2)
ax_bandwidth = plt.subplot(1, 3, 1)
axes = [ax1, ax2, ax3]
# Plot data and add color bars
for ax, data in zip(axes, [data1, data2, data3]):
cax = ax_bandwidth.imshow(data, aspect='auto', cmap='viridis')
plt.colorbar(cax, ax=ax)
ax.axis('off')
plt.tight_layout()
plt.show()
</code></pre>
<p><a href="https://i.sstatic.net/pkGZmifg.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/pkGZmifg.png" alt="enter image description here" /></a></p>
<p>What I am trying to do is have a tight subplot with the figure on the left and the 3 color bars on the right in the same column, but it seems the plotting boxes are still there, preventing me from placing these axes next to the figure. Maybe using subplots isn't the best solution; any suggestions?</p>
<p>Then, how could I place an axis title spanning the three color bars, since they represent the same thing (bandwidth in MHz, for context)?</p>
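<p>One possible direction, sketched below assuming a reasonably recent Matplotlib: give the image its own wide <code>GridSpec</code> column and create three dedicated colorbar axes in a narrow second column, passing each to <code>fig.colorbar(..., cax=...)</code> so no layout space is stolen from the plot; a title on the top colorbar axes (or <code>fig.suptitle</code>) can then label the whole column:</p>

```python
import matplotlib
matplotlib.use("Agg")  # non-interactive backend so the sketch runs headless
import matplotlib.pyplot as plt
import numpy as np

data = [np.random.rand(10, 10) for _ in range(3)]

fig = plt.figure(figsize=(12, 6))
# one wide column for the image, one narrow column holding 3 colorbar axes
gs = fig.add_gridspec(3, 2, width_ratios=[20, 1], wspace=0.05)
ax_main = fig.add_subplot(gs[:, 0])
cbar_axes = [fig.add_subplot(gs[i, 1]) for i in range(3)]

for cax, d in zip(cbar_axes, data):
    im = ax_main.imshow(d, aspect="auto", cmap="viridis")
    fig.colorbar(im, cax=cax)  # drawn *into* cax, so no layout space is stolen

# a title on the top colorbar axes labels the whole narrow column
cbar_axes[0].set_title("Bandwidth (MHz)")
```

<p>Because the colorbars are drawn into pre-made axes instead of carved out of each subplot, they sit flush against the main axes.</p>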
|
<python><matplotlib><subplot>
|
2024-06-06 09:42:12
| 2
| 359
|
Chris Ze Third
|
78,585,559
| 3,813,424
|
Python issue with importing from submodule including scripts that also import from submodule?
|
<p>My Python project has the following file structure.</p>
<pre><code>module/
├── __init__.py
├── main.py
│
└── sub_module/
├── __init__.py
├── foo1.py
└── foo2.py
</code></pre>
<p>My goal is to import a class <code>Bar</code> from <strong>foo1.py</strong> in <strong>main.py</strong>:</p>
<pre class="lang-py prettyprint-override"><code>from sub_module.foo1 import Bar
</code></pre>
<p>It's important to know that <strong>foo1.py</strong> also imports from <strong>foo2.py</strong>, which currently works fine when I run <code>python3 foo1.py</code>.</p>
<pre class="lang-py prettyprint-override"><code>from foo2 import some_function
</code></pre>
<p>Both <code>__init__.py</code> files are empty.</p>
<p>Here is the exception that I currently get, when attempting to run <code>python3 main.py</code>:</p>
<pre class="lang-bash prettyprint-override"><code>File "/.../module/main.py", line 3, in <module>
from sub_module.foo1 import Bar
File "/.../module/sub_module/foo1.py", line 24, in <module>
from foo2 import some_function
</code></pre>
<p>How can I get this working?</p>
<p>Thanks.</p>
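<p>For what it's worth, the usual fix is to make the import in <strong>foo1.py</strong> relative (<code>from .foo2 import some_function</code>); the absolute <code>from foo2 import ...</code> only resolves when <strong>foo1.py</strong> is run as a script from inside <code>sub_module/</code>, because that directory is then on <code>sys.path</code>. A throwaway sketch that rebuilds the same layout in a temp directory to demonstrate:</p>

```python
import os
import sys
import tempfile

# throwaway copy of the layout, showing that a *relative* import in
# foo1.py ("from .foo2 import some_function") fixes the failure
root = tempfile.mkdtemp()
pkg = os.path.join(root, "sub_module")
os.makedirs(pkg)
open(os.path.join(pkg, "__init__.py"), "w").close()
with open(os.path.join(pkg, "foo2.py"), "w") as f:
    f.write("def some_function():\n    return 'hello'\n")
with open(os.path.join(pkg, "foo1.py"), "w") as f:
    f.write("from .foo2 import some_function\n\nclass Bar:\n    pass\n")

sys.path.insert(0, root)  # stand-in for running main.py from module/
from sub_module.foo1 import Bar, some_function

print(some_function())  # prints "hello"
```

<p>With the relative import in place, <code>python3 main.py</code> from the <code>module/</code> directory works, while <code>python3 foo1.py</code> directly would need to become <code>python3 -m sub_module.foo1</code> from that directory instead.</p>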
|
<python><python-3.x><python-import><python-module>
|
2024-06-06 09:05:14
| 2
| 319
|
St4rb0y
|
78,585,454
| 1,841,839
|
Create Image from bytes results in ValueError: not enough image data
|
<p>I am downloading images from an offline system (Google Photos). I can download the files and save them locally; that works fine.</p>
<p>However, what I am trying to do is download them and turn them into a PIL Image so that I can use them without having to store them locally first.</p>
<h1>Works fine to save a file</h1>
<pre><code> for media_item in media_items:
image_data_response = authed_session.get(media_item['baseUrl'] + "=w500-h250")
print(image_data_response)
with open(media_item['filename'], "wb") as my_file:
my_file.write(image_data_response.content)
print(media_item['id'], media_item['filename'])
</code></pre>
<h1>attempt to create PIL Image</h1>
<pre><code>for media_item in media_items:
width = media_item['mediaMetadata']['width']
height = media_item['mediaMetadata']['height']
image_size = (500, 250) # Width, height
image_data_response = authed_session.get(media_item['baseUrl'] + "=w500-h250")
test = Image.frombytes(mode="RGBA", size=image_size, data=image_data_response.content, decoder_name="raw")
result.append(test)
</code></pre>
<h1>My error</h1>
<p>No matter what I try, I keep getting:</p>
<blockquote>
<p>ValueError: not enough image data</p>
</blockquote>
<p>I even tried giving PIL the original size of the image, but it didn't help.</p>
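<p>A note on the likely cause, with a sketch: <code>Image.frombytes</code> expects <em>raw decoded pixels</em> (width × height × bytes-per-pixel), whereas the HTTP response body is an <em>encoded</em> JPEG/PNG file, hence "not enough image data". Wrapping the bytes in <code>BytesIO</code> and letting <code>Image.open</code> parse the container is the usual approach; <code>authed_session</code> is replaced here by an in-memory PNG for the demo:</p>

```python
from io import BytesIO

from PIL import Image


def image_from_response_bytes(content: bytes) -> Image.Image:
    # the response body is an encoded image file (JPEG/PNG), not raw
    # pixels, so let Pillow parse the container format
    return Image.open(BytesIO(content))


# demo: round-trip an encoded PNG through the helper
buf = BytesIO()
Image.new("RGB", (500, 250), "red").save(buf, format="PNG")
img = image_from_response_bytes(buf.getvalue())
print(img.size)  # -> (500, 250)
```

<p>In the loop above that would become <code>result.append(image_from_response_bytes(image_data_response.content))</code>.</p>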
|
<python><python-imaging-library>
|
2024-06-06 08:43:51
| 1
| 118,263
|
Linda Lawton - DaImTo
|
78,585,241
| 12,694,438
|
conda install python UnsatisfiableError
|
<p>I'm trying to update my python version with this command:</p>
<pre><code>conda install python=3.12.0
</code></pre>
<p>But I get an insanely long UnsatisfiableError message. <strong>I've read another <a href="https://stackoverflow.com/questions/42075581/conda-install-python-3-6-unsatisfiableerror">question</a> regarding this error</strong>, but that one has only one incompatible dependency that you can manage manually, whereas in my case, it prints this:</p>
<pre><code>UnsatisfiableError: The following specifications were found to be incompatible with a past
explicit spec that is not an explicit spec in this operation (openssl):
- python=3.12.0 -> bzip2[version='>=1.0.8,<2.0a0'] -> libgcc-ng[version='>=7.3.0|>=7.5.0']
- python=3.12.0 -> expat[version='>=2.5.0,<3.0a0'] -> libstdcxx-ng[version='>=11.2.0|>=7.5.0']
- python=3.12.0 -> ld_impl_linux-64[version='>=2.35.1']
- python=3.12.0 -> libffi[version='>=3.4,<4.0a0']
- python=3.12.0 -> libgcc-ng[version='>=11.2.0'] -> _libgcc_mutex[version='*|0.1',build=main]
- python=3.12.0 -> libgcc-ng[version='>=11.2.0'] -> _openmp_mutex
- python=3.12.0 -> libuuid[version='>=1.41.5,<2.0a0']
- python=3.12.0 -> ncurses[version='>=6.4,<7.0a0']
- python=3.12.0 -> openssl[version='>=3.0.11,<4.0a0'] -> ca-certificates
- python=3.12.0 -> pip -> setuptools
- python=3.12.0 -> pip -> wheel
- python=3.12.0 -> readline[version='>=8.0,<9.0a0'] -> ncurses[version='>=6.1,<7.0a0|>=6.2,<7.0a0|>=6.3,<7.0a0']
- python=3.12.0 -> sqlite[version='>=3.41.2,<4.0a0'] -> zlib[version='>=1.2.12,<1.3.0a0|>=1.2.13,<2.0a0']
- python=3.12.0 -> tk[version='>=8.6.12,<8.7.0a0']
- python=3.12.0 -> tzdata
- python=3.12.0 -> xz[version='>=5.4.2,<6.0a0']
- python=3.12.0 -> zlib[version='>=1.2.13,<1.3.0a0']
The following specifications were found to be incompatible with each other:
Output in format: Requested package -> Available versions
Package xz conflicts for:
conda[version='>=22.9.0'] -> python[version='>=3.10,<3.11.0a0'] -> xz[version='>=5.2.10,<6.0a0|>=5.4.2,<6.0a0|>=5.4.6,<6.0a0|>=5.2.8,<6.0a0|>=5.2.6,<6.0a0|>=5.2.5,<6.0a0|>=5.4.5,<6.0a0|>=5.2.4,<6.0a0']
brotlipy -> python[version='>=3.7,<3.8.0a0'] -> xz[version='>=5.2.10,<6.0a0|>=5.2.6,<6.0a0|>=5.2.5,<6.0a0|>=5.2.4,<6.0a0|>=5.2.3,<6.0a0|>=5.4.6,<6.0a0|>=5.4.2,<6.0a0|>=5.2.8,<6.0a0|>=5.4.5,<6.0a0']
pluggy -> python[version='>=3.12,<3.13.0a0'] -> xz[version='>=5.2.10,<6.0a0|>=5.4.2,<6.0a0|>=5.4.5,<6.0a0|>=5.4.6,<6.0a0|>=5.2.8,<6.0a0|>=5.2.6,<6.0a0|>=5.2.5,<6.0a0|>=5.2.4,<6.0a0|>=5.2.3,<6.0a0']
pycparser -> python[version='>=3.6'] -> xz[version='>=5.2.10,<6.0a0|>=5.4.2,<6.0a0|>=5.4.5,<6.0a0|>=5.4.6,<6.0a0|>=5.2.8,<6.0a0|>=5.2.6,<6.0a0|>=5.2.5,<6.0a0|>=5.2.4,<6.0a0|>=5.2.3,<6.0a0']
certifi -> python[version='>=3.9,<3.10.0a0'] -> xz[version='>=5.2.10,<6.0a0|>=5.4.2,<6.0a0|>=5.4.6,<6.0a0|>=5.2.8,<6.0a0|>=5.2.6,<6.0a0|>=5.2.5,<6.0a0|>=5.4.5,<6.0a0|>=5.2.4,<6.0a0|>=5.2.3,<6.0a0']
urllib3 -> python[version='>=3.11,<3.12.0a0'] -> xz[version='>=5.2.10,<6.0a0|>=5.4.2,<6.0a0|>=5.4.5,<6.0a0|>=5.4.6,<6.0a0|>=5.2.8,<6.0a0|>=5.2.6,<6.0a0|>=5.2.5,<6.0a0|>=5.2.4,<6.0a0|>=5.2.3,<6.0a0']
idna -> python[version='>=3.12,<3.13.0a0'] -> xz[version='>=5.2.10,<6.0a0|>=5.4.2,<6.0a0|>=5.4.5,<6.0a0|>=5.4.6,<6.0a0|>=5.2.8,<6.0a0|>=5.2.6,<6.0a0|>=5.2.5,<6.0a0|>=5.2.4,<6.0a0|>=5.2.3,<6.0a0']
zstd -> xz[version='>=5.2.10,<6.0a0|>=5.4.6,<6.0a0|>=5.2.5,<6.0a0|>=5.2.4,<6.0a0']
pycosat -> python[version='>=3.10,<3.11.0a0'] -> xz[version='>=5.2.10,<6.0a0|>=5.4.2,<6.0a0|>=5.4.6,<6.0a0|>=5.2.8,<6.0a0|>=5.2.6,<6.0a0|>=5.2.5,<6.0a0|>=5.4.5,<6.0a0|>=5.2.4,<6.0a0|>=5.2.3,<6.0a0']
wheel -> python[version='>=3.10,<3.11.0a0'] -> xz[version='>=5.2.10,<6.0a0|>=5.4.2,<6.0a0|>=5.4.6,<6.0a0|>=5.2.8,<6.0a0|>=5.2.6,<6.0a0|>=5.2.5,<6.0a0|>=5.2.4,<6.0a0|>=5.4.5,<6.0a0|>=5.2.3,<6.0a0']
six -> python -> xz[version='>=5.2.10,<6.0a0|>=5.4.2,<6.0a0|>=5.4.5,<6.0a0|>=5.4.6,<6.0a0|>=5.2.8,<6.0a0|>=5.2.6,<6.0a0|>=5.2.5,<6.0a0|>=5.2.4,<6.0a0|>=5.2.3,<6.0a0']
pyopenssl -> python[version='>=3.12,<3.13.0a0'] -> xz[version='>=5.2.10,<6.0a0|>=5.4.2,<6.0a0|>=5.4.5,<6.0a0|>=5.4.6,<6.0a0|>=5.2.8,<6.0a0|>=5.2.6,<6.0a0|>=5.2.5,<6.0a0|>=5.2.4,<6.0a0|>=5.2.3,<6.0a0']
conda-content-trust -> python[version='>=3.10,<3.11.0a0'] -> xz[version='>=5.2.10,<6.0a0|>=5.4.2,<6.0a0|>=5.4.6,<6.0a0|>=5.2.8,<6.0a0|>=5.2.6,<6.0a0|>=5.2.5,<6.0a0|>=5.2.4,<6.0a0|>=5.4.5,<6.0a0|>=5.2.3,<6.0a0']
toolz -> python[version='>=3.12,<3.13.0a0'] -> xz[version='>=5.2.10,<6.0a0|>=5.4.2,<6.0a0|>=5.4.5,<6.0a0|>=5.4.6,<6.0a0|>=5.2.8,<6.0a0|>=5.2.6,<6.0a0|>=5.2.5,<6.0a0|>=5.2.4,<6.0a0|>=5.2.3,<6.0a0']
pip -> python[version='>=3.11,<3.12.0a0'] -> xz[version='>=5.2.10,<6.0a0|>=5.4.2,<6.0a0|>=5.4.5,<6.0a0|>=5.4.6,<6.0a0|>=5.2.8,<6.0a0|>=5.2.6,<6.0a0|>=5.2.5,<6.0a0|>=5.2.4,<6.0a0|>=5.2.3,<6.0a0']
conda-package-streaming -> python[version='>=3.12,<3.13.0a0'] -> xz[version='>=5.2.10,<6.0a0|>=5.4.2,<6.0a0|>=5.4.5,<6.0a0|>=5.4.6,<6.0a0|>=5.2.6,<6.0a0|>=5.2.5,<6.0a0|>=5.2.4,<6.0a0|>=5.2.8,<6.0a0']
conda-package-handling -> python[version='>=3.11,<3.12.0a0'] -> xz[version='>=5.2.10,<6.0a0|>=5.4.2,<6.0a0|>=5.4.5,<6.0a0|>=5.4.6,<6.0a0|>=5.2.8,<6.0a0|>=5.2.6,<6.0a0|>=5.2.5,<6.0a0|>=5.2.4,<6.0a0|>=5.2.3,<6.0a0']
pysocks -> python[version='>=3.7,<3.8.0a0'] -> xz[version='>=5.2.10,<6.0a0|>=5.2.6,<6.0a0|>=5.2.5,<6.0a0|>=5.2.4,<6.0a0|>=5.4.6,<6.0a0|>=5.4.5,<6.0a0|>=5.4.2,<6.0a0|>=5.2.8,<6.0a0|>=5.2.3,<6.0a0']
cffi -> python[version='>=3.8,<3.9.0a0'] -> xz[version='>=5.2.10,<6.0a0|>=5.4.2,<6.0a0|>=5.4.6,<6.0a0|>=5.2.6,<6.0a0|>=5.2.5,<6.0a0|>=5.2.4,<6.0a0|>=5.4.5,<6.0a0|>=5.2.8,<6.0a0|>=5.2.3,<6.0a0']
ruamel_yaml -> python[version='>=3.12,<3.13.0a0'] -> xz[version='>=5.2.10,<6.0a0|>=5.4.2,<6.0a0|>=5.4.5,<6.0a0|>=5.4.6,<6.0a0|>=5.2.8,<6.0a0|>=5.2.6,<6.0a0|>=5.2.5,<6.0a0|>=5.2.4,<6.0a0|>=5.2.3,<6.0a0']
brotli-python -> python[version='>=3.11,<3.12.0a0'] -> xz[version='>=5.2.10,<6.0a0|>=5.4.2,<6.0a0|>=5.4.5,<6.0a0|>=5.4.6,<6.0a0|>=5.2.8,<6.0a0|>=5.2.6,<6.0a0|>=5.2.5,<6.0a0|>=5.2.4,<6.0a0|>=5.2.3,<6.0a0']
setuptools -> python[version='>=3.9,<3.10.0a0'] -> xz[version='>=5.2.10,<6.0a0|>=5.4.2,<6.0a0|>=5.4.6,<6.0a0|>=5.2.8,<6.0a0|>=5.2.6,<6.0a0|>=5.2.5,<6.0a0|>=5.4.5,<6.0a0|>=5.2.4,<6.0a0|>=5.2.3,<6.0a0']
cryptography -> python[version='>=3.9,<3.10.0a0'] -> xz[version='>=5.2.10,<6.0a0|>=5.4.2,<6.0a0|>=5.4.6,<6.0a0|>=5.2.8,<6.0a0|>=5.2.6,<6.0a0|>=5.2.5,<6.0a0|>=5.4.5,<6.0a0|>=5.2.4,<6.0a0|>=5.2.3,<6.0a0']
tqdm -> python[version='>=3.12,<3.13.0a0'] -> xz[version='>=5.2.10,<6.0a0|>=5.4.2,<6.0a0|>=5.4.5,<6.0a0|>=5.4.6,<6.0a0|>=5.2.8,<6.0a0|>=5.2.6,<6.0a0|>=5.2.5,<6.0a0|>=5.2.4,<6.0a0|>=5.2.3,<6.0a0']
zstandard -> python[version='>=3.10,<3.11.0a0'] -> xz[version='>=5.2.10,<6.0a0|>=5.4.2,<6.0a0|>=5.4.6,<6.0a0|>=5.2.8,<6.0a0|>=5.2.6,<6.0a0|>=5.2.5,<6.0a0|>=5.4.5,<6.0a0|>=5.2.4,<6.0a0|>=5.2.3,<6.0a0']
charset-normalizer -> python[version='>=3.5'] -> xz[version='>=5.2.10,<6.0a0|>=5.4.2,<6.0a0|>=5.4.5,<6.0a0|>=5.4.6,<6.0a0|>=5.2.8,<6.0a0|>=5.2.6,<6.0a0|>=5.2.5,<6.0a0|>=5.2.4,<6.0a0|>=5.2.3,<6.0a0']
xz
requests -> python[version='>=3.10,<3.11.0a0'] -> xz[version='>=5.2.10,<6.0a0|>=5.4.2,<6.0a0|>=5.4.6,<6.0a0|>=5.2.8,<6.0a0|>=5.2.6,<6.0a0|>=5.2.5,<6.0a0|>=5.2.4,<6.0a0|>=5.4.5,<6.0a0|>=5.2.3,<6.0a0']
Package cryptography conflicts for:
cryptography
pyopenssl -> cryptography[version='>=1.9|>=2.1.4|>=2.2.1|>=2.8|>=3.3|>=35.0|>=38.0.0,<40|>=38.0.0,<42,!=40.0.0,!=40.0.1|>=41.0.5,<43']
conda[version='>=22.9.0'] -> pyopenssl[version='>=16.2.0'] -> cryptography[version='>=1.9|>=2.1.4|>=2.2.1|>=2.8|>=3.3|>=35.0|>=38.0.0,<40|>=38.0.0,<42,!=40.0.0,!=40.0.1|>=41.0.5,<43']
conda-content-trust -> cryptography[version='<41.0.0a0|>=41']
requests -> urllib3[version='>=1.21.1,<3'] -> cryptography[version='>=1.3.4']
urllib3 -> cryptography[version='>=1.3.4']
urllib3 -> pyopenssl[version='>=0.14'] -> cryptography[version='>=1.9|>=2.1.4|>=2.2.1|>=2.8|>=3.3|>=35.0|>=38.0.0,<40|>=38.0.0,<42,!=40.0.0,!=40.0.1|>=41.0.5,<43']
Package tzdata conflicts for:
tqdm -> python[version='>=3.12,<3.13.0a0'] -> tzdata
conda[version='>=22.9.0'] -> python[version='>=3.10,<3.11.0a0'] -> tzdata
urllib3 -> python[version='>=3.11,<3.12.0a0'] -> tzdata
cffi -> python[version='>=3.12,<3.13.0a0'] -> tzdata
cryptography -> python[version='>=3.9,<3.10.0a0'] -> tzdata
tzdata
pysocks -> python[version='>=3.12,<3.13.0a0'] -> tzdata
requests -> python[version='>=3.10,<3.11.0a0'] -> tzdata
idna -> python[version='>=3.12,<3.13.0a0'] -> tzdata
zstandard -> python[version='>=3.10,<3.11.0a0'] -> tzdata
ruamel_yaml -> python[version='>=3.12,<3.13.0a0'] -> tzdata
pyopenssl -> python[version='>=3.12,<3.13.0a0'] -> tzdata
pluggy -> python[version='>=3.12,<3.13.0a0'] -> tzdata
setuptools -> python[version='>=3.9,<3.10.0a0'] -> tzdata
toolz -> python[version='>=3.12,<3.13.0a0'] -> tzdata
wheel -> python[version='>=3.10,<3.11.0a0'] -> tzdata
conda-content-trust -> python[version='>=3.10,<3.11.0a0'] -> tzdata
pycparser -> python[version='>=3.6'] -> tzdata
six -> python -> tzdata
brotli-python -> python[version='>=3.11,<3.12.0a0'] -> tzdata
brotlipy -> python[version='>=3.9,<3.10.0a0'] -> tzdata
pip -> python[version='>=3.11,<3.12.0a0'] -> tzdata
charset-normalizer -> python[version='>=3.5'] -> tzdata
certifi -> python[version='>=3.9,<3.10.0a0'] -> tzdata
conda-package-handling -> python[version='>=3.11,<3.12.0a0'] -> tzdata
conda-package-streaming -> python[version='>=3.12,<3.13.0a0'] -> tzdata
pycosat -> python[version='>=3.10,<3.11.0a0'] -> tzdata
Package bzip2 conflicts for:
urllib3 -> python[version='>=3.11,<3.12.0a0'] -> bzip2[version='>=1.0.8,<2.0a0']
pip -> python[version='>=3.11,<3.12.0a0'] -> bzip2[version='>=1.0.8,<2.0a0']
requests -> python[version='>=3.10,<3.11.0a0'] -> bzip2[version='>=1.0.8,<2.0a0']
pycosat -> python[version='>=3.10,<3.11.0a0'] -> bzip2[version='>=1.0.8,<2.0a0']
idna -> python[version='>=3.12,<3.13.0a0'] -> bzip2[version='>=1.0.8,<2.0a0']
setuptools -> python[version='>=3.12,<3.13.0a0'] -> bzip2[version='>=1.0.8,<2.0a0']
cffi -> python[version='>=3.12,<3.13.0a0'] -> bzip2[version='>=1.0.8,<2.0a0']
pysocks -> python[version='>=3.12,<3.13.0a0'] -> bzip2[version='>=1.0.8,<2.0a0']
conda-package-streaming -> python[version='>=3.12,<3.13.0a0'] -> bzip2[version='>=1.0.8,<2.0a0']
conda[version='>=22.9.0'] -> python[version='>=3.10,<3.11.0a0'] -> bzip2[version='>=1.0.8,<2.0a0']
zstandard -> python[version='>=3.10,<3.11.0a0'] -> bzip2[version='>=1.0.8,<2.0a0']
cryptography -> python[version='>=3.11,<3.12.0a0'] -> bzip2[version='>=1.0.8,<2.0a0']
certifi -> python[version='>=3.10,<3.11.0a0'] -> bzip2[version='>=1.0.8,<2.0a0']
conda-content-trust -> python[version='>=3.10,<3.11.0a0'] -> bzip2[version='>=1.0.8,<2.0a0']
conda-package-handling -> python[version='>=3.11,<3.12.0a0'] -> bzip2[version='>=1.0.6,<2.0a0|>=1.0.8,<2.0a0']
brotlipy -> python[version='>=3.11,<3.12.0a0'] -> bzip2[version='>=1.0.8,<2.0a0']
pyopenssl -> python[version='>=3.12,<3.13.0a0'] -> bzip2[version='>=1.0.8,<2.0a0']
pluggy -> python[version='>=3.12,<3.13.0a0'] -> bzip2[version='>=1.0.8,<2.0a0']
pycparser -> python[version='>=3.6'] -> bzip2[version='>=1.0.8,<2.0a0']
toolz -> python[version='>=3.12,<3.13.0a0'] -> bzip2[version='>=1.0.8,<2.0a0']
wheel -> python[version='>=3.10,<3.11.0a0'] -> bzip2[version='>=1.0.8,<2.0a0']
six -> python -> bzip2[version='>=1.0.8,<2.0a0']
brotli-python -> python[version='>=3.11,<3.12.0a0'] -> bzip2[version='>=1.0.8,<2.0a0']
ruamel_yaml -> python[version='>=3.12,<3.13.0a0'] -> bzip2[version='>=1.0.8,<2.0a0']
charset-normalizer -> python[version='>=3.5'] -> bzip2[version='>=1.0.8,<2.0a0']
bzip2
tqdm -> python[version='>=3.12,<3.13.0a0'] -> bzip2[version='>=1.0.8,<2.0a0']
Package conda-package-streaming conflicts for:
conda-package-handling -> conda-package-streaming[version='>=0.7.0|>=0.9.0']
conda-package-streaming
conda[version='>=22.9.0'] -> conda-package-handling[version='>=2.2.0'] -> conda-package-streaming[version='>=0.7.0|>=0.9.0']
Package _openmp_mutex conflicts for:
libgcc-ng -> _openmp_mutex[version='>=4.5']
conda-package-handling -> libgcc-ng[version='>=11.2.0'] -> _openmp_mutex[version='>=4.5']
libffi -> libgcc-ng[version='>=11.2.0'] -> _openmp_mutex[version='>=4.5']
ncurses -> libgcc-ng[version='>=11.2.0'] -> _openmp_mutex[version='>=4.5']
cffi -> libgcc-ng[version='>=11.2.0'] -> _openmp_mutex[version='>=4.5']
cryptography -> libgcc-ng -> _openmp_mutex[version='>=4.5']
zstd -> libgcc-ng[version='>=11.2.0'] -> _openmp_mutex[version='>=4.5']
openssl -> libgcc-ng[version='>=7.5.0'] -> _openmp_mutex[version='>=4.5']
zlib -> libgcc-ng[version='>=11.2.0'] -> _openmp_mutex[version='>=4.5']
lz4-c -> libgcc-ng[version='>=11.2.0'] -> _openmp_mutex[version='>=4.5']
readline -> libgcc-ng[version='>=11.2.0'] -> _openmp_mutex[version='>=4.5']
bzip2 -> libgcc-ng[version='>=11.2.0'] -> _openmp_mutex[version='>=4.5']
_openmp_mutex
ruamel_yaml -> libgcc-ng[version='>=11.2.0'] -> _openmp_mutex[version='>=4.5']
brotli-python -> libgcc-ng[version='>=11.2.0'] -> _openmp_mutex[version='>=4.5']
yaml -> libgcc-ng[version='>=7.3.0'] -> _openmp_mutex[version='>=4.5']
sqlite -> libgcc-ng[version='>=11.2.0'] -> _openmp_mutex[version='>=4.5']
tk -> libgcc-ng[version='>=11.2.0'] -> _openmp_mutex[version='>=4.5']
brotlipy -> libgcc-ng[version='>=7.3.0'] -> _openmp_mutex[version='>=4.5']
zstandard -> libgcc-ng[version='>=11.2.0'] -> _openmp_mutex[version='>=4.5']
xz -> libgcc-ng[version='>=11.2.0'] -> _openmp_mutex[version='>=4.5']
pycosat -> libgcc-ng[version='>=11.2.0'] -> _openmp_mutex[version='>=4.5']
libuuid -> libgcc-ng[version='>=11.2.0'] -> _openmp_mutex[version='>=4.5']
[[[[[SKIPPING LIKE 80% OF THE MESSAGE BECAUSE OF TOO MANY CHARACTERS]]]]]]
Package brotli-python conflicts for:
brotli-python
requests -> urllib3[version='>=1.21.1,<3'] -> brotli-python[version='>=1.0.9']
urllib3 -> brotli-python[version='>=1.0.9']
Package tk conflicts for:
pysocks -> python[version='>=3.7,<3.8.0a0'] -> tk[version='8.6.*|>=8.6.10,<8.7.0a0|>=8.6.11,<8.7.0a0|>=8.6.12,<8.7.0a0|>=8.6.8,<8.7.0a0|>=8.6.7,<8.7.0a0|>=8.6.14,<8.7.0a0']
pyopenssl -> python[version='>=3.12,<3.13.0a0'] -> tk[version='8.6.*|>=8.6.10,<8.7.0a0|>=8.6.11,<8.7.0a0|>=8.6.12,<8.7.0a0|>=8.6.14,<8.7.0a0|>=8.6.8,<8.7.0a0|>=8.6.7,<8.7.0a0']
pluggy -> python[version='>=3.12,<3.13.0a0'] -> tk[version='8.6.*|>=8.6.10,<8.7.0a0|>=8.6.11,<8.7.0a0|>=8.6.12,<8.7.0a0|>=8.6.14,<8.7.0a0|>=8.6.8,<8.7.0a0|>=8.6.7,<8.7.0a0']
pycparser -> python[version='>=3.6'] -> tk[version='8.6.*|>=8.6.10,<8.7.0a0|>=8.6.11,<8.7.0a0|>=8.6.12,<8.7.0a0|>=8.6.14,<8.7.0a0|>=8.6.8,<8.7.0a0|>=8.6.7,<8.7.0a0']
conda-package-streaming -> python[version='>=3.12,<3.13.0a0'] -> tk[version='>=8.6.10,<8.7.0a0|>=8.6.11,<8.7.0a0|>=8.6.12,<8.7.0a0|>=8.6.14,<8.7.0a0|>=8.6.8,<8.7.0a0|>=8.6.7,<8.7.0a0']
setuptools -> python[version='>=3.9,<3.10.0a0'] -> tk[version='8.6.*|>=8.6.10,<8.7.0a0|>=8.6.11,<8.7.0a0|>=8.6.12,<8.7.0a0|>=8.6.14,<8.7.0a0|>=8.6.8,<8.7.0a0|>=8.6.7,<8.7.0a0']
six -> python -> tk[version='8.6.*|>=8.6.10,<8.7.0a0|>=8.6.11,<8.7.0a0|>=8.6.12,<8.7.0a0|>=8.6.14,<8.7.0a0|>=8.6.8,<8.7.0a0|>=8.6.7,<8.7.0a0']
cffi -> python[version='>=3.8,<3.9.0a0'] -> tk[version='8.6.*|>=8.6.10,<8.7.0a0|>=8.6.11,<8.7.0a0|>=8.6.12,<8.7.0a0|>=8.6.8,<8.7.0a0|>=8.6.14,<8.7.0a0|>=8.6.7,<8.7.0a0']
conda-content-trust -> python[version='>=3.10,<3.11.0a0'] -> tk[version='8.6.*|>=8.6.10,<8.7.0a0|>=8.6.11,<8.7.0a0|>=8.6.12,<8.7.0a0|>=8.6.14,<8.7.0a0|>=8.6.8,<8.7.0a0|>=8.6.7,<8.7.0a0']
conda[version='>=22.9.0'] -> python[version='>=3.10,<3.11.0a0'] -> tk[version='>=8.6.10,<8.7.0a0|>=8.6.11,<8.7.0a0|>=8.6.12,<8.7.0a0|>=8.6.14,<8.7.0a0|>=8.6.8,<8.7.0a0|>=8.6.7,<8.7.0a0']
brotli-python -> python[version='>=3.11,<3.12.0a0'] -> tk[version='8.6.*|>=8.6.10,<8.7.0a0|>=8.6.11,<8.7.0a0|>=8.6.12,<8.7.0a0|>=8.6.8,<8.7.0a0|>=8.6.14,<8.7.0a0|>=8.6.7,<8.7.0a0']
toolz -> python[version='>=3.12,<3.13.0a0'] -> tk[version='8.6.*|>=8.6.10,<8.7.0a0|>=8.6.11,<8.7.0a0|>=8.6.12,<8.7.0a0|>=8.6.14,<8.7.0a0|>=8.6.8,<8.7.0a0|>=8.6.7,<8.7.0a0']
wheel -> python[version='>=3.10,<3.11.0a0'] -> tk[version='8.6.*|>=8.6.10,<8.7.0a0|>=8.6.11,<8.7.0a0|>=8.6.12,<8.7.0a0|>=8.6.14,<8.7.0a0|>=8.6.8,<8.7.0a0|>=8.6.7,<8.7.0a0']
certifi -> python[version='>=3.9,<3.10.0a0'] -> tk[version='8.6.*|>=8.6.10,<8.7.0a0|>=8.6.11,<8.7.0a0|>=8.6.12,<8.7.0a0|>=8.6.14,<8.7.0a0|>=8.6.8,<8.7.0a0|>=8.6.7,<8.7.0a0']
tqdm -> python[version='>=3.12,<3.13.0a0'] -> tk[version='8.6.*|>=8.6.10,<8.7.0a0|>=8.6.11,<8.7.0a0|>=8.6.12,<8.7.0a0|>=8.6.14,<8.7.0a0|>=8.6.8,<8.7.0a0|>=8.6.7,<8.7.0a0']
pip -> python[version='>=3.11,<3.12.0a0'] -> tk[version='8.6.*|>=8.6.10,<8.7.0a0|>=8.6.11,<8.7.0a0|>=8.6.12,<8.7.0a0|>=8.6.14,<8.7.0a0|>=8.6.8,<8.7.0a0|>=8.6.7,<8.7.0a0']
charset-normalizer -> python[version='>=3.5'] -> tk[version='8.6.*|>=8.6.10,<8.7.0a0|>=8.6.11,<8.7.0a0|>=8.6.12,<8.7.0a0|>=8.6.14,<8.7.0a0|>=8.6.8,<8.7.0a0|>=8.6.7,<8.7.0a0']
conda-package-handling -> python[version='>=3.11,<3.12.0a0'] -> tk[version='8.6.*|>=8.6.10,<8.7.0a0|>=8.6.11,<8.7.0a0|>=8.6.12,<8.7.0a0|>=8.6.14,<8.7.0a0|>=8.6.8,<8.7.0a0|>=8.6.7,<8.7.0a0']
tk
pycosat -> python[version='>=3.10,<3.11.0a0'] -> tk[version='8.6.*|>=8.6.10,<8.7.0a0|>=8.6.11,<8.7.0a0|>=8.6.12,<8.7.0a0|>=8.6.14,<8.7.0a0|>=8.6.8,<8.7.0a0|>=8.6.7,<8.7.0a0']
brotlipy -> python[version='>=3.7,<3.8.0a0'] -> tk[version='8.6.*|>=8.6.10,<8.7.0a0|>=8.6.11,<8.7.0a0|>=8.6.12,<8.7.0a0|>=8.6.8,<8.7.0a0|>=8.6.7,<8.7.0a0|>=8.6.14,<8.7.0a0']
ruamel_yaml -> python[version='>=3.12,<3.13.0a0'] -> tk[version='8.6.*|>=8.6.10,<8.7.0a0|>=8.6.11,<8.7.0a0|>=8.6.12,<8.7.0a0|>=8.6.14,<8.7.0a0|>=8.6.8,<8.7.0a0|>=8.6.7,<8.7.0a0']
cryptography -> python[version='>=3.9,<3.10.0a0'] -> tk[version='8.6.*|>=8.6.10,<8.7.0a0|>=8.6.11,<8.7.0a0|>=8.6.12,<8.7.0a0|>=8.6.14,<8.7.0a0|>=8.6.8,<8.7.0a0|>=8.6.7,<8.7.0a0']
requests -> python[version='>=3.10,<3.11.0a0'] -> tk[version='8.6.*|>=8.6.10,<8.7.0a0|>=8.6.11,<8.7.0a0|>=8.6.12,<8.7.0a0|>=8.6.14,<8.7.0a0|>=8.6.8,<8.7.0a0|>=8.6.7,<8.7.0a0']
idna -> python[version='>=3.12,<3.13.0a0'] -> tk[version='8.6.*|>=8.6.10,<8.7.0a0|>=8.6.11,<8.7.0a0|>=8.6.12,<8.7.0a0|>=8.6.14,<8.7.0a0|>=8.6.8,<8.7.0a0|>=8.6.7,<8.7.0a0']
zstandard -> python[version='>=3.10,<3.11.0a0'] -> tk[version='8.6.*|>=8.6.10,<8.7.0a0|>=8.6.11,<8.7.0a0|>=8.6.12,<8.7.0a0|>=8.6.14,<8.7.0a0|>=8.6.8,<8.7.0a0|>=8.6.7,<8.7.0a0']
urllib3 -> python[version='>=3.11,<3.12.0a0'] -> tk[version='8.6.*|>=8.6.10,<8.7.0a0|>=8.6.11,<8.7.0a0|>=8.6.12,<8.7.0a0|>=8.6.8,<8.7.0a0|>=8.6.14,<8.7.0a0|>=8.6.7,<8.7.0a0']
Package ipaddress conflicts for:
conda-content-trust -> cryptography[version='<41.0.0a0'] -> ipaddress
requests -> urllib3[version='>=1.21.1,<3'] -> ipaddress
pyopenssl -> cryptography[version='>=2.8'] -> ipaddress
urllib3 -> ipaddress
cryptography -> ipaddress
Package brotlipy conflicts for:
brotlipy
urllib3 -> brotlipy[version='>=0.6.0']
requests -> urllib3[version='>=1.21.1,<3'] -> brotlipy[version='>=0.6.0']
Package conda-package-handling conflicts for:
conda-package-handling
conda[version='>=22.9.0'] -> conda-package-handling[version='>=1.3.0|>=2.2.0']
Package pyopenssl conflicts for:
requests -> urllib3[version='>=1.21.1,<3'] -> pyopenssl[version='>=0.14']
conda[version='>=22.9.0'] -> pyopenssl[version='>=16.2.0']
urllib3 -> pyopenssl[version='>=0.14']
pyopenssl
Package zstandard conflicts for:
conda-package-handling -> zstandard[version='>=0.15']
conda[version='>=22.9.0'] -> zstandard[version='>=0.19.0']
zstandard
conda-package-streaming -> zstandard[version='>=0.15']
conda[version='>=22.9.0'] -> conda-package-handling[version='>=2.2.0'] -> zstandard[version='>=0.15']
Package toolz conflicts for:
conda[version='>=22.9.0'] -> toolz[version='>=0.8.1']
toolz
Package libgomp conflicts for:
libgcc-ng -> _openmp_mutex -> libgomp[version='>=7.5.0']
_openmp_mutex -> libgomp[version='>=7.5.0']
libgomp
Package tqdm conflicts for:
conda[version='>=22.9.0'] -> conda-package-handling[version='>=1.3.0'] -> tqdm
tqdm
conda-package-handling -> tqdm
conda[version='>=22.9.0'] -> tqdm[version='>=4']
Package pluggy conflicts for:
conda[version='>=22.9.0'] -> pluggy[version='>=1.0.0']
pluggy
Package yaml conflicts for:
yaml
ruamel_yaml -> yaml[version='>=0.1.7,<0.2.0a0|>=0.2.5,<0.3.0a0']
conda[version='>=22.9.0'] -> ruamel_yaml[version='>=0.11.14,<0.17'] -> yaml[version='>=0.1.7,<0.2.0a0|>=0.2.5,<0.3.0a0']
Package urllib3 conflicts for:
urllib3
requests -> urllib3[version='>=1.21.1,<1.23|>=1.21.1,<1.24|>=1.21.1,<1.25|>=1.21.1,<1.26,!=1.25.0,!=1.25.1|>=1.21.1,<1.27|>=1.21.1,<2|>=1.21.1,<3']
conda[version='>=22.9.0'] -> requests[version='>=2.28.0,<3'] -> urllib3[version='>=1.21.1,<1.24|>=1.21.1,<1.25|>=1.21.1,<1.26,!=1.25.0,!=1.25.1|>=1.21.1,<1.27|>=1.21.1,<2|>=1.21.1,<3']
Package pysocks conflicts for:
pysocks
requests -> urllib3[version='>=1.21.1,<3'] -> pysocks[version='>=1.5.6,<2.0,!=1.5.7']
urllib3 -> pysocks[version='>=1.5.6,<2.0,!=1.5.7']
Package ruamel_yaml conflicts for:
conda[version='>=22.9.0'] -> ruamel_yaml[version='>=0.11.14,<0.17']
ruamel_yaml
Package pycosat conflicts for:
conda[version='>=22.9.0'] -> pycosat[version='>=0.6.3']
pycosat
Package wheel conflicts for:
wheel
pip -> wheel
Package charset-normalizer conflicts for:
charset-normalizer
conda[version='>=22.9.0'] -> requests[version='>=2.28.0,<3'] -> charset-normalizer[version='>=2,<3|>=2,<4|>=2.0.0,<3|>=2.0.0,<2.1.0']
conda[version='>=22.9.0'] -> charset-normalizer
requests -> charset-normalizer[version='>=2,<3|>=2,<4|>=2.0.0,<3|>=2.0.0,<2.1.0']
Package enum34 conflicts for:
urllib3 -> cryptography[version='>=1.3.4'] -> enum34
pyopenssl -> cryptography[version='>=2.8'] -> enum34
conda-content-trust -> cryptography[version='<41.0.0a0'] -> enum34
cryptography -> enum34
Package requests conflicts for:
requests
conda[version='>=22.9.0'] -> requests[version='>=2.20.1,<3|>=2.27.0,<3|>=2.28.0,<3']The following specifications were found to be incompatible with your system:
- feature:/linux-64::__glibc==2.38=0
- feature:|@/linux-64::__glibc==2.38=0
- brotli-python -> libgcc-ng[version='>=11.2.0'] -> __glibc[version='>=2.17']
- brotlipy -> libgcc-ng[version='>=7.3.0'] -> __glibc[version='>=2.17']
- bzip2 -> libgcc-ng[version='>=11.2.0'] -> __glibc[version='>=2.17']
- cffi -> libgcc-ng[version='>=11.2.0'] -> __glibc[version='>=2.17']
- conda-package-handling -> libgcc-ng[version='>=11.2.0'] -> __glibc[version='>=2.17']
- cryptography -> libgcc-ng -> __glibc[version='>=2.17']
- libffi -> libgcc-ng[version='>=11.2.0'] -> __glibc[version='>=2.17']
- libgcc-ng -> __glibc[version='>=2.17']
- libstdcxx-ng -> __glibc[version='>=2.17']
- libuuid -> libgcc-ng[version='>=11.2.0'] -> __glibc[version='>=2.17']
- lz4-c -> libgcc-ng[version='>=11.2.0'] -> __glibc[version='>=2.17']
- ncurses -> libgcc-ng[version='>=11.2.0'] -> __glibc[version='>=2.17']
- openssl -> libgcc-ng[version='>=7.5.0'] -> __glibc[version='>=2.17']
- pycosat -> libgcc-ng[version='>=11.2.0'] -> __glibc[version='>=2.17']
- python=3.12.0 -> libgcc-ng[version='>=11.2.0'] -> __glibc[version='>=2.17']
- readline -> libgcc-ng[version='>=11.2.0'] -> __glibc[version='>=2.17']
- ruamel_yaml -> libgcc-ng[version='>=11.2.0'] -> __glibc[version='>=2.17']
- sqlite -> libgcc-ng[version='>=11.2.0'] -> __glibc[version='>=2.17']
- tk -> libgcc-ng[version='>=11.2.0'] -> __glibc[version='>=2.17']
- xz -> libgcc-ng[version='>=11.2.0'] -> __glibc[version='>=2.17']
- yaml -> libgcc-ng[version='>=7.3.0'] -> __glibc[version='>=2.17']
- zlib -> libgcc-ng[version='>=11.2.0'] -> __glibc[version='>=2.17']
- zstandard -> libgcc-ng[version='>=11.2.0'] -> __glibc[version='>=2.17']
- zstd -> libgcc-ng[version='>=11.2.0'] -> __glibc[version='>=2.17']
Your installed version is: 2.38
</code></pre>
<p>It's as if every single package has a conflict. Obviously, I can't manually go and resolve all of them, and also this just seems really really weird, this shouldn't happen.</p>
|
<python><conda>
|
2024-06-06 08:05:03
| 0
| 944
|
splaytreez
|
78,585,018
| 3,133,018
|
Programatically pass primary_key in SQLAlchemy PostgreSQL insert_on_conflict_nothing method
|
<p>I'm trying to insert several Pandas DataFrames to corresponding PostgreSQL tables, each with a different Primary Key. I've attempted to implement the following <code>method</code> according to the <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.to_sql.html#r689dfd12abe5-1" rel="nofollow noreferrer">documentation</a>:</p>
<pre class="lang-py prettyprint-override"><code>from sqlalchemy.dialects.postgresql import insert

def insert_on_conflict_nothing(table, conn, keys, data_iter):
# tableIndex is the primary key in "conflict_table"
data = [dict(zip(keys, row)) for row in data_iter]
pkey = [pk_column.name for pk_column in table.primary_key.columns.values()]
stmt = insert(table.table).values(data).on_conflict_do_nothing(index_elements=pkey)
result = conn.execute(stmt)
return result.rowcount
</code></pre>
<p>and then pass it on in a function:</p>
<pre class="lang-py prettyprint-override"><code>def insert_workouts(workouts_df):
engine = create_engine('postgresql+psycopg2://' + db_user + ':' + db_pass + '@' + db_host + '/' + db_name)
workouts_table_columns = pd.read_sql('SELECT * FROM fitness.workouts WHERE 1=2', engine).columns.to_list()
workouts_df.columns = workouts_table_columns
workouts_df.to_sql(
name='workouts',
schema='fitness',
con=engine,
if_exists='append',
index=False,
method=insert_on_conflict_nothing
)
</code></pre>
<p>This, however, results in the following error: <code>AttributeError: 'SQLTable' object has no attribute 'primary_key'</code>.</p>
<p>Can anyone help me figure out a way to check the respective table's primary key (in my case, it will always be one column, if that matters), so that I don't have to hard-code it in multiple methods?</p>
<p>Thanks!</p>
<p>EDIT: I did some exploration/testing:</p>
<pre class="lang-py prettyprint-override"><code>from sqlalchemy import create_engine, Table, MetaData
meta = MetaData()
tbl = Table('workouts', meta, schema='fitness', autoload_with=engine)
print(tbl.name)
pkey = [pk_column.name for pk_column in tbl.primary_key.columns.values()]
print(pkey)
</code></pre>
<p>which produces the correct output:</p>
<pre><code>'workouts'
['workout_id']
</code></pre>
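<p>Building on that exploration, one possible approach is to reflect the live table inside the method itself, since the <code>table</code> argument pandas passes is a <code>SQLTable</code> wrapper whose underlying SQLAlchemy <code>Table</code> lives at <code>table.table</code>. A self-contained sketch, using SQLite in place of PostgreSQL so it can run anywhere (the real code would import <code>insert</code> from <code>sqlalchemy.dialects.postgresql</code> instead):</p>

```python
import pandas as pd
from sqlalchemy import MetaData, Table, create_engine, text
# SQLite dialect used only so the sketch is self-contained;
# the real code would use sqlalchemy.dialects.postgresql
from sqlalchemy.dialects.sqlite import insert

def insert_on_conflict_nothing(table, conn, keys, data_iter):
    data = [dict(zip(keys, row)) for row in data_iter]
    # `table` is a pandas SQLTable wrapper; reflect the live table
    # through the connection to discover its primary key
    reflected = Table(table.name, MetaData(), schema=table.schema,
                      autoload_with=conn)
    pkey = [c.name for c in reflected.primary_key.columns]
    stmt = insert(table.table).values(data).on_conflict_do_nothing(index_elements=pkey)
    return conn.execute(stmt).rowcount

engine = create_engine("sqlite://")
with engine.begin() as conn:
    conn.execute(text("CREATE TABLE workouts (workout_id INTEGER PRIMARY KEY, name TEXT)"))

df = pd.DataFrame({"workout_id": [1, 2], "name": ["run", "swim"]})
df.to_sql("workouts", engine, if_exists="append", index=False,
          method=insert_on_conflict_nothing)
# re-inserting the same rows is silently skipped instead of raising
df.to_sql("workouts", engine, if_exists="append", index=False,
          method=insert_on_conflict_nothing)
```

<p>The reflection costs one round trip per chunk; if that matters, the key list could be looked up once outside the callable instead.</p>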
|
<python><pandas><postgresql><sqlalchemy>
|
2024-06-06 07:20:33
| 1
| 496
|
zkvvoob
|
78,584,901
| 1,559,331
|
Create dataframe from Nested JSON
|
<p>I have the below json</p>
<pre><code>[{"Name":"Tom","Age":"40","Account":"savings","address": {
"city": "New York",
"state": "NY"
}}]
</code></pre>
<p>Now I need to create dataframe using spark from this JSON with below structure</p>
<pre><code>Name Age Account city state
</code></pre>
<p>Below is code I am using</p>
<pre><code>from pyspark.sql.types import StructType, StructField, StringType

schema2 = StructType([
StructField("TICKET", StringType(), True),
StructField("TRANFERRED", StringType(), True),
StructField("ACCOUNT", StringType(), True),
StructField("address", StructType([StructField('city', StringType(), True), StructField('state', StringType(), True)]), True),
])
path='dbfs:/FileStore/new.json'
df = spark.read.schema(schema2).option("multiLine", True).json(path)
</code></pre>
<p>And I am getting below structure</p>
<p><a href="https://i.sstatic.net/JpUCDLF2.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/JpUCDLF2.png" alt="enter image description here" /></a></p>
<p><strong>What schema change should be done to flatten the inner json as columns ?</strong></p>
|
<python><json><apache-spark><pyspark>
|
2024-06-06 06:53:50
| 1
| 932
|
dileepVikram
|
78,584,847
| 2,210,825
|
Convert count row to one hot encoding efficiently
|
<p>I have a table with rows in this format where the integers are a count:</p>
<pre><code> A B C D E
0 a 2 0 3 x
1 b 1 2 0 y
</code></pre>
<p>I'd like to convert it into a format where each count is a one hot encoded row:</p>
<pre><code> A B C D E
0 a 1 0 0 x
1 a 1 0 0 x
2 a 0 0 1 x
3 a 0 0 1 x
4 a 0 0 1 x
5 b 1 0 0 y
6 b 0 1 0 y
7 b 0 1 0 y
</code></pre>
<p>I wrote inefficient code which achieves this</p>
<pre><code>import pandas as pd

# Sample DataFrame
data = {
'A': ['a', 'b'],
'B': [2, 1],
'C': [0, 2],
'D': [3, 0],
'E': ['x', 'y']
}
df = pd.DataFrame(data)
new_df = pd.DataFrame(columns=df.columns)
for index, row in df.iterrows():
first_val = row.iloc[0]
last_val = row.iloc[-1]
middle_vals = row.iloc[1:-1].astype(int)
for i in range(len(middle_vals)):
new_data = [first_val] + [1 if i == j else 0 for j in range(len(middle_vals))] + [last_val]
new_rows = pd.DataFrame([new_data] * middle_vals.iloc[i], columns=df.columns)
new_df = pd.concat([new_df, new_rows], ignore_index=True)
</code></pre>
<p>Any tips for vectorizing this operation which is incredibly slow? I realize a concat operation per iteration is a big issue, so I did try a batching solution where I collect chunks of <code>new_rows</code> and then concat. This remains slow.</p>
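<p>One possible vectorized sketch (using the sample data above): melt the count columns to long form, repeat each long row by its count with <code>Index.repeat</code>, then one-hot encode the former column names with <code>get_dummies</code>:</p>

```python
import pandas as pd

data = {"A": ["a", "b"], "B": [2, 1], "C": [0, 2], "D": [3, 0], "E": ["x", "y"]}
df = pd.DataFrame(data)

count_cols = ["B", "C", "D"]
# long form: one row per (original row, count column), keeping the row id
long = df.reset_index().melt(id_vars=["index", "A", "E"],
                             value_vars=count_cols,
                             var_name="col", value_name="n")
# repeat each long row by its count (rows with n == 0 simply drop out)
long = long.loc[long.index.repeat(long["n"])]
# restore the original row order, then one-hot encode the column name
long = long.sort_values("index", kind="stable")
onehot = (pd.get_dummies(long["col"])
            .reindex(columns=count_cols, fill_value=0)
            .astype(int))
out = pd.concat([long[["A"]].reset_index(drop=True),
                 onehot.reset_index(drop=True),
                 long[["E"]].reset_index(drop=True)], axis=1)
print(out)
```

<p>This avoids any per-row Python loop and any repeated <code>concat</code> inside an iteration.</p>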
|
<python><pandas><vectorization><one-hot-encoding>
|
2024-06-06 06:42:24
| 5
| 1,458
|
donkey
|
78,584,531
| 17,778,275
|
OpenCV code fail to detect overlapping rectangles in an image
|
<p>I am using OpenCV to detect rectangles in an image based on their color. The code works well for non-overlapping rectangles, but it fails to detect overlapping rectangles.
Here's the code I am using:</p>
<pre><code>import cv2
import numpy as np

# `image` is the BGR input, loaded beforehand (e.g. with cv2.imread)
image_hsv = cv2.cvtColor(image, cv2.COLOR_BGR2HSV)
lower_blue = np.array([110, 50, 50])
upper_blue = np.array([130, 255, 255])
mask = cv2.inRange(image_hsv, lower_blue, upper_blue)
contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
for index, contour in enumerate(contours):
epsilon = 0.02 * cv2.arcLength(contour, True)
approx = cv2.approxPolyDP(contour, epsilon, True)
if len(approx) == 4:
(x, y, w, h) = cv2.boundingRect(approx)
cv2.rectangle(image, (x, y), (x + w, y + h), (0, 255, 0), 2)
</code></pre>
<p>Overlapping rectangles are either not detected at all or are detected as a single merged shape. This is problematic because I need to individually detect and crop each rectangle, even if they overlap.</p>
<p>I’ve Tried:</p>
<ul>
<li>Adjusting the Contour Approximation: I tried tweaking the epsilon parameter in cv2.approxPolyDP to various values, but it didn't help in detecting overlapping rectangles.</li>
<li>Changing the Mask Range: I experimented with different HSV ranges to ensure the mask correctly covers the rectangles, but this didn't resolve the overlapping issue</li>
<li>Contour Retrieval Mode: I also tried different contour retrieval modes (like cv2.RETR_TREE and cv2.RETR_LIST), but it didn't improve the detection of overlapping rectangles.</li>
</ul>
<p>For eg. Sample Input Image</p>
<p><a href="https://i.sstatic.net/TMfpkbdJ.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/TMfpkbdJ.png" alt="enter image description here" /></a></p>
<p>Outputs I get</p>
<p><a href="https://i.sstatic.net/2R2GLAM6.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/2R2GLAM6.png" alt="enter image description here" /></a>
<a href="https://i.sstatic.net/VCp3HSbt.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/VCp3HSbt.png" alt="enter image description here" /></a>
<a href="https://i.sstatic.net/zGlNIa5n.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/zGlNIa5n.png" alt="enter image description here" /></a></p>
<p>Not getting the overlapping one's.</p>
|
<python><opencv><computer-vision><contour>
|
2024-06-06 04:44:40
| 1
| 354
|
spd
|
78,584,279
| 7,267,480
|
Correct Visualization of data using np.meshgrid and ax.pcolormesh of matplotlib, error in visualization
|
<p>Trying to make a visualization to understand the
<a href="https://en.wikipedia.org/wiki/Time_of_flight" rel="nofollow noreferrer">time-of-flight experiment</a></p>
<p>The idea is simple: provided two limiting values for the energy, calculate the corresponding time-of-flight values and visualize them on an x-y plot, where the color stands for the corresponding neutron energy.
But when I came to implement it, I found that pcolormesh produces an error in the visualization (see the attached figure: in the upper right corner, right after the dashed vertical line).
<a href="https://i.sstatic.net/H59mmtOy.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/H59mmtOy.png" alt="meshgrid and pcolormesh" /></a></p>
<p>It seems that if I have provided a lower energy limit, then the border in time scale is strictly defined, and the corner of the triangle must be exactly where the dashed line is. But it's not. What is the error, and how can it be fixed?</p>
<p>Another thing to note here is that the lower the minimum value for energy is, the larger the error.</p>
<p>Code for Minimal reproducible example is shown below.</p>
<pre><code>import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.colors import Normalize
import matplotlib.cm as cm
# diagram for one pulse case
def calculate_time_interval(FP,
energy_mev,
m_n=1.67492749804e-27,
k = 1.60218e-13, # J/MeV
c=3e8):
""" Calculates the time interval given the flight path (FP) and energy values. """
energy_joules = energy_mev * k
# Calculate velocity
v = c * np.sqrt(1 - (m_n * c**2 / (energy_joules + m_n * c**2))**2)
# Calculate time interval
time_interval = FP / v
return time_interval
min_E = 1e-2 #MeV
max_E = 1
FP = 100 # m
# calculating min and max TOF based on energy limits supplied
t_min = calculate_time_interval(FP=FP,
energy_mev=max_E)*1e3
t_max = calculate_time_interval(FP=FP,
energy_mev=min_E) * 1e3
print(t_min, t_max)
# Create a range for the flight path lengths up to 110 meters
flight_paths = np.linspace(0, FP, 500)
# Energy range
energies = np.linspace(min_E, max_E, 500)
# Create meshgrid for flight paths and energies
FP_mesh, E_mesh = np.meshgrid(flight_paths, energies)
# Calculate velocities and corresponding ToF in ms
tof = calculate_time_interval(FP=FP_mesh, energy_mev=E_mesh) * 1e3
# plot
fig, ax = plt.subplots(figsize=(8, 5))
c = ax.pcolormesh(tof, FP_mesh, E_mesh, cmap='coolwarm', shading='auto', norm=Normalize(vmin=min_E, vmax=max_E))
# Add colorbar
cbar = plt.colorbar(c, ax=ax, orientation='vertical')
cbar.set_label('Energy (MeV)')
# Add horizontal line at FP
ax.axhline(y=FP, color='black', linestyle='--')
ax.axvline(x=t_max, color='black', linestyle='--')
# Add title, labels
fig.suptitle('Time of Flight vs Flight Path Length with Energy (single pulse)')
ax.set_title(fr'(! $E_{{min}}$ = {np.min(E_mesh)} MeV => {np.round(t_min,6)} ms, $E_{{max}}$ = {np.max(E_mesh)} MeV => {np.round(t_max,6)} ms)')
ax.set_xlabel('Time of Flight (ms)')
ax.set_ylabel('Flight Path Length (m)')
ax.set_xlim(0, t_max * 2)
ax.set_ylim(0, 1.1*FP)
plt.grid(True)
plt.show()
</code></pre>
|
<python><matplotlib><meshgrid>
|
2024-06-06 02:49:43
| 1
| 496
|
twistfire
|
78,584,169
| 4,594,924
|
QT designer Layout margins are not editable
|
<p>I have a UI file generated in Qt Designer 6.5.1 using Python 3.9. This was pushed into Git. Now I have downloaded it on a new machine where I have installed Python 3.12.3 with Qt Designer 6.7.1. I'd like to fix any gaps with the newer version and make the code cleaner.</p>
<p>Now I find that I cannot edit any of the frame layout margin sizes. I tried changing the layout style, etc., but nothing worked. I have a structure like the one below.</p>
<pre><code>"MainWindow"
|->QWidget (Layout margins are not editable)
|-> QScrolArea
|->QWidget (Layout margins are not editable)
|-...and so on
</code></pre>
<p>The screenshot
<a href="https://i.sstatic.net/bmdchYEU.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/bmdchYEU.png" alt="enter image description here" /></a></p>
<p>Please shed some light on how to fix this issue.</p>
<p>List of items tried :</p>
<ul>
<li>I tried updating frame style.</li>
<li>I tried creating new frame under my existing widget. Applied horizontal layout in new frames and tried editing layout left/right/top/bottom margins in property editor. I could not edit it.</li>
</ul>
|
<python><qt><qt-designer><designer><python-3.12>
|
2024-06-06 01:57:39
| 1
| 798
|
Simbu
|
78,584,128
| 11,901,732
|
How to set max row height in pandas
|
<p>One column in my pandas dataframe is too long. How can I set max row height (<strong>not number of rows in a dataframe</strong>) in pandas so that I can truncate the cells with too much content and have a balanced view of all columns?</p>
<p>Not asking for</p>
<pre><code>pd.set_option('display.height', 500)
pd.set_option('display.max_rows', 500)
</code></pre>
<p>which sets the max number of rows in a dataframe.</p>
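<p>If the goal is to truncate cells with too much content, the relevant option may be <code>display.max_colwidth</code>, which caps how many characters of each cell are rendered; a minimal sketch:</p>

```python
import pandas as pd

# cells longer than 25 characters are rendered truncated with '...'
pd.set_option("display.max_colwidth", 25)

df = pd.DataFrame({"short": [1], "long": ["x" * 500]})
print(df)  # the 'long' column no longer dominates the display
```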
|
<python><pandas><numpy>
|
2024-06-06 01:35:45
| 1
| 5,315
|
nilsinelabore
|
78,584,114
| 10,755,628
|
Best way to stream using langchain `llm.with_structured_output`
|
<pre><code>import asyncio
from langchain_core.pydantic_v1 import BaseModel, Field
from langchain_openai import ChatOpenAI
class Joke(BaseModel):
setup: str = Field(description="The setup of the joke")
punchline: str = Field(description="The punchline to the joke")
# Define the model with structured output
model = ChatOpenAI(model="gpt-3.5-turbo-0125", temperature=0, streaming=True)
structured_llm = model.with_structured_output(Joke)
def stream_joke():
# Invoke the model and stream the response
response = structured_llm.stream("Tell me a joke about cats")
# Initialize an empty Joke object
joke = Joke(setup="", punchline="")
# Stream the response
for part in response:
if 'setup' in part:
joke.setup += part['setup']
print(f"Setup: {joke.setup}")
if 'punchline' in part:
joke.punchline += part['punchline']
print(f"Punchline: {joke.punchline}")
# Run the streaming joke function
stream_joke()
</code></pre>
<p>What is the best practice to stream using langchain <code>llm.with_structured_output</code>? This just doesn't work.</p>
|
<python><artificial-intelligence><langchain><langgraph>
|
2024-06-06 01:25:50
| 2
| 3,387
|
minglyu
|
78,584,084
| 3,369,879
|
Numpy type overloading
|
<p>I'd like to add typehints to a function that accepts either <code>np.float32</code> arrays or <code>np.float64</code> arrays and returns the same type:</p>
<pre class="lang-py prettyprint-override"><code>from typing import overload, Union
import numpy as np
import numpy.typing as npt
NPArray_FLOAT32 = npt.NDArray[np.float32]
NPArray_FLOAT64 = npt.NDArray[np.float64]
NPArray_FLOAT32_64 = Union[NPArray_FLOAT32, NPArray_FLOAT64]
@overload
def foo(xa: NPArray_FLOAT32, xb: NPArray_FLOAT32) -> NPArray_FLOAT32: ...
@overload
def foo(xa: NPArray_FLOAT64, xb: NPArray_FLOAT64) -> NPArray_FLOAT64: ...
def foo(xa: NPArray_FLOAT32_64, xb: NPArray_FLOAT32_64) -> NPArray_FLOAT32_64:
# ...
</code></pre>
<p>However, this results in the following error from <code>mypy</code></p>
<pre><code>mypy [overload-overlap]: Overloaded function signatures 1 and 2 overlap with incompatible return types.
</code></pre>
<p>What's the right way to do this? This almost seems like a bug with <code>mypy</code> since <code>np.float32</code> does not overlap with <code>np.float64</code>.</p>
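<p>For what it's worth, a constrained <code>TypeVar</code> is a commonly suggested alternative to the overloads here: it ties both parameters and the return type to the same dtype without producing overlapping signatures. A sketch with a placeholder body:</p>

```python
from typing import TypeVar

import numpy as np
import numpy.typing as npt

# F may only be np.float32 or np.float64; mypy resolves it per call site
F = TypeVar("F", np.float32, np.float64)

def foo(xa: npt.NDArray[F], xb: npt.NDArray[F]) -> npt.NDArray[F]:
    # placeholder body: an elementwise sum preserves the input dtype
    return xa + xb
```

<p>Calling <code>foo</code> with two <code>float32</code> arrays is then typed as returning <code>float32</code>, and likewise for <code>float64</code>, with no overlap warning.</p>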
|
<python><numpy><mypy><python-typing>
|
2024-06-06 01:10:13
| 1
| 2,015
|
Alex Kaszynski
|
78,584,058
| 18,149
|
In Python colorsys is there a way to convert hlsa_to_rgb?
|
<p>I am looking at colors pulled from the background color cells of an excel spreadsheet using Python.</p>
<p>Oddly, the fill color I pull in RGB does not match the color I see in the Excel spreadsheet.</p>
<p>It leads me to believe that Excel is using HLSA as its storage format, because when I pull the color from the sheet using the API it comes back as eight hex digits and not six as expected.</p>
<p>Instead of being RGB (RRGGBB) it's RRGGBBAA (wrong I think) or maybe more likely HHLLSSAA or something along those lines.</p>
<p>I've had a hard time finding information about color standards for hex colors.</p>
<p>It used to be that everything was RRGGBB, but it seems like things have gotten more complicated in the years since I was a web developer.</p>
<p>I am trying to use the <code>webcolors</code> library to fuzzily identify colors with a name; but the <code>webcolors</code> library seems to use RGB/RRGGBB and so the colors are mis-identified when I pull them from the sheet.</p>
<p>I found the native colorsys library of python, but I don't see that it has a <code>hlsa_to_rgb</code> method, only a <code>hls_to_rgb</code> method.</p>
<p>I also had a crazy idea that I could save my excel spreadsheet in an older format and the problem would just disappear; but it turns out that openpyxl doesn't read those sorts of formats.</p>
|
<python><excel><colors><webcolors>
|
2024-06-06 00:55:45
| 0
| 26,592
|
leeand00
|
78,584,013
| 20,591,261
|
How to chain multiple with_columns in Polars?
|
<p>I'm using Polars to transform my DataFrame, and I want to chain multiple <code>with_columns</code> transformations. However, I encounter an issue when trying to perform operations on a newly created column within the same <code>with_columns</code> context. I end up needing to save the DataFrame after each transformation and then reapply with_columns for subsequent transformations.</p>
<p>Is there a cleaner way to achieve this?</p>
<p>Here is an example of my current approach:</p>
<pre><code>import polars as pl
# Sample data
exampledata = {
'A': [1, 2, 3],
'B': [4, 5, 6]
}
df = pl.DataFrame(exampledata)
# First transformation
df = df.with_columns(
(pl.col("A") + pl.col("B")).alias("C")
)
# Second transformation
df = df.with_columns(
(pl.col("C") * pl.col("B")).alias("D")
)
print(df)
</code></pre>
<p>In this example, I create a new column <code>C</code> from columns <code>A</code> and <code>B</code>. Then, I need to save the DataFrame before I can create column <code>D</code> from <code>C</code> and <code>B</code>. Is there a more efficient or idiomatic way to chain these transformations in Polars?</p>
|
<python><dataframe><python-polars>
|
2024-06-06 00:25:42
| 2
| 1,195
|
Simon
|
78,583,922
| 6,144,940
|
module 'matplotlib' has no attribute 'colormaps'
|
<p>I am using package <code>WindroseAxes</code>, whose function <code>.bar</code> has a parametere <code>cmap</code> for defining the color map. See <a href="https://windrose.readthedocs.io/en/latest/api.html?highlight=bar#windrose.WindroseAxes.bar" rel="nofollow noreferrer">https://windrose.readthedocs.io/en/latest/api.html?highlight=bar#windrose.WindroseAxes.bar</a></p>
<p>It says that ' <code>cmap</code> (a <code>cm</code> Colormap instance from <code>matplotlib.cm</code>, optional.) '.</p>
<p>Therefore, I was reading <code>matplotlib</code> document : <a href="https://matplotlib.org/stable/api/cm_api.html#matplotlib.cm.ColormapRegistry" rel="nofollow noreferrer">https://matplotlib.org/stable/api/cm_api.html#matplotlib.cm.ColormapRegistry</a></p>
<pre><code>import matplotlib as mpl
cmap = mpl.colormaps['viridis']
</code></pre>
<p>In my code, I was doing <code>cmap = mpl.colormaps['magma']</code></p>
<p><code> ax.bar(wd, ws, normed =True, opening=1, edgecolor='white', bins = speed_bin, cmap = mpl.colormaps['magma'], nsector = 12)</code></p>
<p>Then, the error comes out:</p>
<p>AttributeError: module 'matplotlib' has no attribute 'colormaps'.</p>
<p>So, can I ask what should I do to resolve this? Thanks.</p>
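<p>For context, the <code>mpl.colormaps</code> registry only exists in newer matplotlib releases, so the AttributeError suggests an older installation; upgrading matplotlib is the clean fix, but a version-tolerant sketch could look like this:</p>

```python
import matplotlib as mpl

try:
    cmap = mpl.colormaps["magma"]      # registry available in recent matplotlib
except AttributeError:
    from matplotlib import cm
    cmap = cm.get_cmap("magma")        # fallback for older releases
```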
|
<python><matplotlib><windrose>
|
2024-06-05 23:37:05
| 1
| 473
|
Justin
|
78,583,862
| 850,781
|
How do I make Tkinter not block Jupyter like Matplotlib does?
|
<p>I have <a href="https://matplotlib.org/" rel="nofollow noreferrer"><code>matplotlib</code></a>-based GUI app which takes inputs and plots graphs. It works just fine except that the UI is abysmally non-responsive (seconds before it reacts to mouse clicks or keyboard, on a fairly powerful windows box). I start it from a <a href="https://jupyter.org/" rel="nofollow noreferrer">Jupyter notebook</a> with <code>%matplotlib qt</code>, the GUI shows up, <strong>and</strong> the notebook is <strong>not</strong> blocked, so I can query and modify the underlying <code>matplotlib</code> objects interactively.</p>
<p>I want the same behavior from <a href="https://docs.python.org/3/library/tkinter.ttk.html" rel="nofollow noreferrer"><code>tkinter.ttk</code></a>, but <code>Tk.mainloop</code> blocks. Is there a way around this?</p>
|
<python><matplotlib><tkinter>
|
2024-06-05 23:08:57
| 0
| 60,468
|
sds
|
78,583,831
| 15,524,510
|
What is the format / notation / units of price and liquidity data for tick entity in uniswap API?
|
<p>I am pulling liquidity pool data from the uniswap API and I don't recognize the format or notation of the numbers that are being generated. Here is the code for ETH/USDT on uniswap v3:</p>
<pre><code>import requests
query = """
{
pool(id: "0x11b815efb8f581194ae79006d24e0d814b7697f6") {
ticks(first: 1000) {
tickIdx
price0
}
}
}
"""
url = "https://api.thegraph.com/subgraphs/name/uniswap/uniswap-v3"
response = requests.post(url, json={'query': query})
response.json()
</code></pre>
<p>If you run this code you will see that <code>price0</code> contains a mix of extremely tiny numbers and extremely large numbers, none of which are close to the current price of ETH/USDT (around 3850 at time of writing).</p>
<p>I thought it was Q64.96 notation at first but it can't be, because some of the numbers are tiny? Or perhaps I am misunderstanding the notation.</p>
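<p>For what it's worth, Uniswap v3 tick prices follow <code>price = 1.0001 ** tickIdx</code> in raw smallest-unit terms, which then needs rescaling by the difference in token decimals; assuming WETH (18 decimals) is token0 and USDT (6 decimals) is token1 in this pool, a sketch with a hypothetical tick value (not pulled from the API):</p>

```python
# hypothetical tickIdx, chosen near the ETH/USDT range at the time of writing
tick = -193750

# raw price of token1 (USDT) per token0 (WETH), in smallest units
raw_price = 1.0001 ** tick

# rescale by the decimals difference: 10 ** (decimals0 - decimals1) = 10 ** 12
human_price = raw_price * 10 ** (18 - 6)

print(human_price)  # lands in the thousands, a plausible USDT-per-ETH quote
```

<p>This would explain both the tiny raw values and the huge ones (the reciprocal direction), neither of which looks like a human-readable price until rescaled.</p>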
|
<python><uniswap>
|
2024-06-05 22:54:57
| 1
| 363
|
helloimgeorgia
|
78,583,808
| 7,321,700
|
Sum Pandas Dataframe rows based on multiple conditions
|
<p><strong>Scenario:</strong> I have a dataframe in which I want to sum rows inside a column, based on values of others columns.</p>
<p><strong>DF Example:</strong></p>
<pre><code>+----------+----------------------------+---------------------+---------+
| SCENARIO | PRODUCT | FLOW | 2030 |
+----------+----------------------------+---------------------+---------+
| Stated | Natural gas: unabated | Total energy supply | 147.541 |
| Stated | Natural gas: with CCUS | Power sector inputs | 0.018 |
| Stated | Natural gas: with CCUS | Total energy supply | 1.147 |
| Stated | Nuclear | Power sector inputs | 36.773 |
| Stated | Nuclear | Total energy supply | 36.773 |
| Stated | Oil | Power sector inputs | 5.162 |
| Stated | Oil | Total energy supply | 194.938 |
| Stated | Renewables | Power sector inputs | 76.612 |
| Stated | Renewables | Total energy supply | 119.979 |
| Stated | Solar | Total energy supply | 22.66 |
| Stated | Solar PV | Power sector inputs | 19.457 |
| Stated | Total | Power sector inputs | 263.2 |
| Stated | Total | Total energy supply | 667.905 |
| Stated | Traditional use of biomass | Total energy supply | 18.975 |
| Stated | Wind | Power sector inputs | 18.823 |
| Stated | Wind | Total energy supply | 18.823 |
+----------+----------------------------+---------------------+---------+
</code></pre>
<p><strong>Wanted Result:</strong> I want to sum row value of a given column (in this case 2030) based on conditions for other columns. For example: when Scenario = Stated, Flow = Total Energy Supply, I want to sum the values of 2030 where Product is Wind and Solar, so in this case, my result would be a new row in which the value on column 2030 is 22.66 (Solar value) + 18.823 (Wind value), or 41.483. Which would look like:</p>
<pre><code>+----------+----------------------------+---------------------+---------+
| SCENARIO | PRODUCT | FLOW | 2030 |
+----------+----------------------------+---------------------+---------+
| Stated | Natural gas: unabated | Total energy supply | 147.541 |
| Stated | Natural gas: with CCUS | Power sector inputs | 0.018 |
| Stated | Natural gas: with CCUS | Total energy supply | 1.147 |
| Stated | Nuclear | Power sector inputs | 36.773 |
| Stated | Nuclear | Total energy supply | 36.773 |
| Stated | Oil | Power sector inputs | 5.162 |
| Stated | Oil | Total energy supply | 194.938 |
| Stated | Renewables | Power sector inputs | 76.612 |
| Stated | Renewables | Total energy supply | 119.979 |
| Stated | Solar | Total energy supply | 22.66 |
| Stated | Solar PV | Power sector inputs | 19.457 |
| Stated | Total | Power sector inputs | 263.2 |
| Stated | Total | Total energy supply | 667.905 |
| Stated | Traditional use of biomass | Total energy supply | 18.975 |
| Stated | Wind | Power sector inputs | 18.823 |
| Stated | Wind | Total energy supply | 18.823 |
| Stated | result_sum | | 41.483 |
+----------+----------------------------+---------------------+---------+
</code></pre>
<p><strong>Issue:</strong> While trying different combinations of the df.sum function, I could not get the expected results. If I try to use an operator (e.g. ^ or |) like:</p>
<pre><code>test3 = test3.loc[(test3['SCENARIO'] == 'Stated') & (test3['FLOW'] == 'Total energy supply') & (test3['PRODUCT'] == 'Solar' ^ 'Wind')]
</code></pre>
<p>I get an unsupported type error:</p>
<pre><code>TypeError: unsupported operand type(s) for |: 'str' and 'str'
</code></pre>
<p>If I try to use a series of conditions, I am unsure how to do a multiple selection for the necessary FLOW values:</p>
<pre><code>test3['sigcontr_steps'] = test3.loc[(test3['SCENARIO'] == 'Stated') & (test3['FLOW'] == 'Total energy supply') &
(test3['PRODUCT'] == 'Solar') & (test3['PRODUCT'] == 'Wind'),[2022]].sum(axis=0)
</code></pre>
<p>This does not work because of the two values for PRODUCT at the same time.</p>
<p><strong>Question:</strong> What is the correct way to perform this operation?</p>
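<p>A sketch of the usual idiom: combine the scalar conditions with <code>&amp;</code> and use <code>Series.isin</code> to match several PRODUCT values at once (shown on a small hypothetical subset of the table above):</p>

```python
import pandas as pd

# hypothetical subset of the table in the question
df = pd.DataFrame({
    "SCENARIO": ["Stated"] * 4,
    "PRODUCT": ["Solar", "Wind", "Oil", "Wind"],
    "FLOW": ["Total energy supply", "Total energy supply",
             "Total energy supply", "Power sector inputs"],
    "2030": [22.66, 18.823, 194.938, 18.823],
})

mask = (
    (df["SCENARIO"] == "Stated")
    & (df["FLOW"] == "Total energy supply")
    & df["PRODUCT"].isin(["Solar", "Wind"])   # several products at once
)
result_sum = df.loc[mask, "2030"].sum()
print(result_sum)  # 22.66 + 18.823
```

<p>Note that <code>==</code> with two products combined by <code>&amp;</code> can never be true for one row, which is why <code>isin</code> (an OR over the listed values) is the tool here.</p>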
|
<python><pandas><dataframe>
|
2024-06-05 22:42:38
| 1
| 1,711
|
DGMS89
|
78,583,755
| 365,102
|
PyTorch F.interpolate for many dimensions (e.g. 4D, 5D, 6D, 7D, ..., n-D)
|
<p>I want to apply N-d interpolation to an (N+2)-d tensor for N>3.</p>
<pre class="lang-py prettyprint-override"><code>import torch
import torch.nn.functional as F
x = torch.randn(1, 1, 2, 3, 4, 5, 6, 7)
output_size = (7, 6, 5, 4, 3, 2)
y = F.interpolate(x, size=output_size, mode="linear")
</code></pre>
<p>The above code gives the following error:</p>
<blockquote>
<p>NotImplementedError: Input Error: Only 3D, 4D and 5D input Tensors supported (got 6D) for the modes: nearest | linear | bilinear | bicubic | trilinear | area | nearest-exact (got linear)</p>
</blockquote>
<p>Note that the first two dimensions are batch size and channels (B, C), and are thus <em>not</em> interpolated, as stated in the <a href="https://pytorch.org/docs/stable/generated/torch.nn.functional.interpolate.html" rel="nofollow noreferrer">docs</a>.</p>
<ul>
<li>linear: 1D interpolation of a 3D tensor (with B, C)</li>
<li>bilinear: 2D interpolation of a 4D tensor (with B, C)</li>
<li>trilinear: 3D interpolation of a 5D tensor (with B, C)</li>
<li>N-d linear: not supported...?!</li>
</ul>
<p>How do I apply N-d linear interpolation for N>3?</p>
|
<python><pytorch><interpolation>
|
2024-06-05 22:24:19
| 1
| 27,757
|
Mateen Ulhaq
|
78,583,338
| 11,117,255
|
Celery Task Logs Not Showing Up in Google Cloud Logging
|
<p>I'm trying to set up Google Cloud Logging for my Celery tasks, but I'm having trouble getting the logs to appear in Google Cloud Logging. I've added the logger configuration to my script, but the logs still do not go up. Below is my setup:</p>
<h4>Logger Configuration:</h4>
<pre class="lang-py prettyprint-override"><code>import json
import os
import logging
from google.cloud import bigquery
from google.cloud import logging as cloud_logging
from google.auth import exceptions as google_auth_exceptions
# Initialize Google Cloud Logging client and setup logging
client = cloud_logging.Client()
client.setup_logging()
# Logger configuration
def setup_logger():
# Define a global logger variable
global logger
logger = logging.getLogger('cloudLogger')
# Check if the logger has handlers already
if not logger.hasHandlers():
logger.setLevel(logging.INFO)
# Create a stream handler
handler = logging.StreamHandler()
handler.setLevel(logging.INFO)
# Define a formatter without including the script name in the log message
formatter = logging.Formatter('%(asctime)s - %(name)s - %(levelname)s - %(message)s')
handler.setFormatter(formatter)
# Add the handler to the logger
logger.addHandler(handler)
return logger
# Example usage
if __name__ == "__main__":
logger = setup_logger()
logger.info("Logger is set up and ready to use.")
</code></pre>
<h4>Celery Task:</h4>
<pre class="lang-py prettyprint-override"><code>import uuid
import redis
import logging
from celery import Celery
import boto3
import json
import time
from datetime import datetime
from collections import deque
from google.cloud import bigquery
from google.cloud import logging as cloud_logging
from google.auth import exceptions as google_auth_exceptions
import os
from logging_setup import setup_logger
app = Celery('upload_tasks')
app.config_from_object('celeryconfig')
config = load_config()
logger = setup_logger()
def upload_to_s3(file_path, s3_bucket, s3_key, logger):
logger.info(f"Initializing S3 client for bucket {s3_bucket}...")
s3_client = boto3.client('s3', region_name=S3_REGION,
aws_access_key_id=S3_ACCESS_KEY,
aws_secret_access_key=S3_SECRET_KEY)
try:
logger.info(f"Uploading {file_path} to s3://{s3_bucket}/{s3_key}...")
s3_client.upload_file(file_path, s3_bucket, s3_key)
logger.info(f"Successfully uploaded {file_path} to s3://{s3_bucket}/{s3_key}")
return f"s3://{s3_bucket}/{s3_key}"
except Exception as e:
logger.error(f"Failed to upload to S3: {e}")
return None
@app.task(bind=True, max_retries=50, queue='upload_files')
def upload_file(self, file_path, s3_bucket, s3_key, data, s3_location):
# print("")
logger = setup_logger()
logger.info(f"Received task to upload {file_path} to S3 bucket {s3_bucket} with key {s3_key}.")
try:
s3_location = upload_to_s3(file_path, s3_bucket, s3_key, logger)
if s3_location:
store_metadata_in_bigquery(data, s3_location)
os.remove(file_path)
logger.info(f"Task completed for uploading {file_path} to s3://{s3_bucket}/{s3_key}")
except Exception as exc:
countdown = 5 * (2 ** self.request.retries) # Exponential backoff
logger.error(f"Error occurred during upload: {exc}. Retrying in {countdown} seconds...")
raise self.retry(exc=exc, countdown=countdown)
</code></pre>
<p>Despite setting up the logger and including it in the Celery configuration, the logs are not showing up in the Google Cloud Logging console. I've verified that the service account has the necessary permissions and the <code>GOOGLE_APPLICATION_CREDENTIALS</code> environment variable is set correctly.</p>
<h4>What I've Tried:</h4>
<ul>
<li>Ensured that the Google Cloud Logging API is enabled.</li>
<li>Checked that the service account has the "Logging Admin" role.</li>
<li>Verified that the <code>GOOGLE_APPLICATION_CREDENTIALS</code> environment variable is correctly set.</li>
<li>Added the logging handler in various parts of the code to capture logs.</li>
</ul>
<h4>Environment:</h4>
<ul>
<li>Python 3.x</li>
<li>Celery 4.x</li>
<li>Google Cloud Logging library</li>
</ul>
<p>What am I missing or doing wrong? Any help would be greatly appreciated!</p>
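<p>For context, the idempotent handler setup I would expect to need in each worker looks roughly like this (a sketch, not the code from this question; <code>extra_handler</code> stands in for the google-cloud-logging handler, and all names are illustrative):</p>

```python
import logging

def setup_worker_logger(name="celery-worker", extra_handler=None):
    # Attach handlers only once: in forked Celery workers, repeated calls
    # would otherwise duplicate handlers or leave the logger unconfigured.
    logger = logging.getLogger(name)
    if not logger.handlers:
        handler = extra_handler or logging.StreamHandler()
        handler.setFormatter(logging.Formatter("%(asctime)s %(levelname)s %(message)s"))
        logger.addHandler(handler)
        logger.setLevel(logging.INFO)
    return logger
```

<p>Calling it a second time with the same name returns the same configured logger without adding a second handler.</p>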
|
<python><logging><google-bigquery>
|
2024-06-05 20:23:03
| 0
| 2,759
|
Cauder
|
78,583,009
| 1,609,514
|
TypeError: unhashable type: 'ArrayImpl' when trying to use Equinox module with jax.lax.scan
|
<p>I'm new to Equinox and JAX but wanted to use them to simulate a dynamical system.</p>
<p>But when I pass my system model as an Equinox module to <a href="https://jax.readthedocs.io/en/latest/_autosummary/jax.lax.scan.html" rel="nofollow noreferrer">jax.lax.scan</a> I get the unhashable type error in the title. I understand that jax expects the function argument to be a pure function but I thought an Equinox Module would emulate that.</p>
<p>Here is a test script to reproduce the error</p>
<pre><code>import equinox as eqx
import jax
import jax.numpy as jnp


class EqxModel(eqx.Module):
    A: jax.Array
    B: jax.Array
    C: jax.Array
    D: jax.Array

    def __call__(self, states, inputs):
        x = states.reshape(-1, 1)
        u = inputs.reshape(-1, 1)
        x_next = self.A @ x + self.B @ u
        y = self.C @ x + self.D @ u
        return x_next.reshape(-1), y.reshape(-1)


def simulate(model, inputs, x0):
    xk = x0
    outputs = []
    for uk in inputs:
        xk, yk = model(xk, uk)
        outputs.append(yk)
    outputs = jnp.stack(outputs)
    return xk, outputs


A = jnp.array([[0.7, 1.0], [0.0, 1.0]])
B = jnp.array([[0.0], [1.0]])
C = jnp.array([[0.3, 0.0]])
D = jnp.array([[0.0]])

model = EqxModel(A, B, C, D)

# Test simulation
inputs = jnp.array([[0.0], [1.0], [1.0], [1.0]])
x0 = jnp.zeros(2)
xk, outputs = simulate(model, inputs, x0)

assert jnp.allclose(xk, jnp.array([2.7, 3.0]))
assert jnp.allclose(outputs, jnp.array([[0.0], [0.0], [0.0], [0.3]]))

# This raises TypeError
xk, outputs = jax.lax.scan(model, x0, inputs)
</code></pre>
<p>What is <code>unhashable type: 'ArrayImpl'</code> referring to? Is it the arrays A, B, C, and D? In this model, these matrices are parameters and therefore should be static for the duration of the simulation.</p>
<p>I just found this issue thread that might be related:</p>
<ul>
<li><a href="https://github.com/patrick-kidger/equinox/issues/709" rel="nofollow noreferrer">lax.scan for equinox Modules</a></li>
</ul>
|
<python><jax><equinox><computation-graph>
|
2024-06-05 18:59:08
| 1
| 11,755
|
Bill
|
78,582,986
| 2,535,316
|
ydata_profiling not working with Python 3.9
|
<p>I have a Jupyter notebook. Why would this snippet work when the Python kernel is 3.8.12 but not when I use 3.9.19?</p>
<p><code>import pandas as pd</code></p>
<p><code>from ydata_profiling import ProfileReport</code></p>
<p>On 3.9.19, the error is:</p>
<p>ModuleNotFoundError: No module named 'ydata_profiling'</p>
<p>ydata_profiling version 4.8.3</p>
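<p>A likely explanation is that the 3.9 kernel points at a different interpreter than the one <code>pip</code> installed into. A small diagnostic sketch I can run in each kernel (the module name argument is just an example):</p>

```python
import importlib.util
import sys

def diagnose(module_name):
    # Return the interpreter this kernel runs on and where (if anywhere)
    # the named module would be imported from.
    spec = importlib.util.find_spec(module_name)
    return sys.executable, (spec.origin if spec else None)
```

<p>If <code>diagnose("ydata_profiling")</code> returns <code>None</code> for the origin under the 3.9 kernel, the package was installed into a different environment than the one the kernel uses.</p>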
|
<python><pandas><pandas-profiling>
|
2024-06-05 18:52:24
| 0
| 705
|
Stewart Wiseman
|
78,582,925
| 6,546,694
|
A weird issue (to me) with converting the dict values to a list in python (debugger)
|
<p>I am not sure I can explain the issue better than a screenshot. Either I am severely underslept or I have lost it (or, both)</p>
<p><a href="https://i.sstatic.net/nuSTE73P.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/nuSTE73P.png" alt="enter image description here" /></a></p>
<p>chunks is a list of dicts. I am able to run <code>len(list(chunks[0].values()))</code>, but I am not able to run what goes inside the <code>len</code>.</p>
<p>Any help is much appreciated</p>
|
<python><list><pdb>
|
2024-06-05 18:38:04
| 1
| 5,871
|
figs_and_nuts
|
78,582,914
| 6,509,519
|
Custom package install leads to subprocess.CalledProcessError -> No module named pip
|
<h1>Important(?) Details</h1>
<ul>
<li>Windows 10, version 10.0.19045</li>
<li>Python 3.12.2</li>
<li>PyCharm 2024.1 (Professional Edition)</li>
<li>GitBash 2.45.1.windows.1</li>
</ul>
<h1>The Setup</h1>
<p>I'm trying to install <code>pygraphviz</code> with my package in a virtual environment.</p>
<p>I've written a custom <code>install</code> subclass to be used by <code>setuptools</code> during package install.</p>
<pre class="lang-py prettyprint-override"><code>"""Contents of custom_install.py"""
from os import getenv
from platform import system
import subprocess
import sys

from setuptools.command.install import install


class InstallPygraphviz(install):
    """Custom command to install pygraphviz package.

    Reference:
        https://pygraphviz.github.io/documentation/stable/install.html#windows
    """

    def run(self):
        python_exe = sys.executable
        subprocess.check_call([python_exe, "-m", "ensurepip"])

        pip_command = [python_exe, "-m", "pip", "install"]
        if system() == "Windows":
            graphviz_path = getenv("GRAPHVIZ_PATH")
            if graphviz_path is None:
                raise ValueError("GRAPHVIZ_PATH is not set")
            include_path = f"{graphviz_path}/include"
            lib_path = f"{graphviz_path}/lib"
            pip_command.extend(
                [
                    "--config-settings='--global-option=build_ext'",
                    f"--config-settings='--global-option=-I{include_path}'",
                    f"--config-settings='--global-option=-L{lib_path}'",
                ]
            )
        pip_command.append("pygraphviz")
        subprocess.check_call(pip_command)
        install.run(self)
</code></pre>
<p>Per this <a href="https://github.com/praiskup/argparse-manpage/issues/85" rel="nofollow noreferrer">github issue</a>, I learned I could use a pyproject.toml to set my config settings instead of setup.py.</p>
<pre class="lang-ini prettyprint-override"><code>[build-system]
requires = ["setuptools>=70.0.0", "setuptools-scm>=8.1.0"]
build-backend = "setuptools.build_meta"

[project]
name = "awesome"
requires-python = ">=3.12"
dynamic = ["version"]

[tool.setuptools.dynamic]
version = {attr = "awesome.__version__"}

[tool.setuptools.packages.find]
where = ["src"]

[tool.setuptools.cmdclass]
install = "awesome.custom_install.InstallPygraphviz"
</code></pre>
<p>This is my package layout:</p>
<pre><code>pyproject.toml
+---src
    +---awesome
        |   custom_install.py
        |   __init__.py
</code></pre>
<h1>Recreating The Problem</h1>
<ol>
<li><p>I create and activate a virtual environment:</p>
<pre class="lang-bash prettyprint-override"><code>py -3.12 -m venv env
</code></pre>
<pre class="lang-bash prettyprint-override"><code>source env/Scripts/activate
</code></pre>
</li>
<li><p>I <code>cd</code> into the project directory and run the following command:</p>
<pre class="lang-bash prettyprint-override"><code>pip install .
</code></pre>
</li>
<li><p>Errors</p>
<pre><code>Processing c:\users\user\ado\test
Installing build dependencies ... done
Getting requirements to build wheel ... done
Installing backend dependencies ... done
Preparing metadata (pyproject.toml) ... done
Building wheels for collected packages: awesome
Building wheel for awesome (pyproject.toml) ... error
error: subprocess-exited-with-error
× Building wheel for awesome (pyproject.toml) did not run successfully.
│ exit code: 1
╰─> [66 lines of output]
running bdist_wheel
running build
running build_py
creating build
creating build\lib
creating build\lib\awesome
copying src\awesome\custom_install.py -> build\lib\awesome
copying src\awesome\__init__.py -> build\lib\awesome
running egg_info
writing src\awesome.egg-info\PKG-INFO
writing dependency_links to src\awesome.egg-info\dependency_links.txt
writing top-level names to src\awesome.egg-info\top_level.txt
ERROR setuptools_scm._file_finders.git listing git files failed - pretending there aren't any
reading manifest file 'src\awesome.egg-info\SOURCES.txt'
writing manifest file 'src\awesome.egg-info\SOURCES.txt'
installing to build\bdist.win-amd64\wheel
running install
Looking in links: c:\Users\user\AppData\Local\Temp\tmpyagl6bqz
Processing c:\users\user\appdata\local\temp\tmpyagl6bqz\pip-24.0-py3-none-any.whl
Installing collected packages: pip
Successfully installed pip-24.0
C:\Users\user\ADO\test\env\Scripts\python.exe: No module named pip
Traceback (most recent call last):
File "C:\Users\user\ADO\test\env\Lib\site-packages\pip\_vendor\pyproject_hooks\_in_process\_in_process.py", line 353, in <module>
main()
File "C:\Users\user\ADO\test\env\Lib\site-packages\pip\_vendor\pyproject_hooks\_in_process\_in_process.py", line 335, in main
json_out['return_val'] = hook(**hook_input['kwargs'])
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\user\ADO\test\env\Lib\site-packages\pip\_vendor\pyproject_hooks\_in_process\_in_process.py", line 251, in build_wheel
return _build_backend().build_wheel(wheel_directory, config_settings,
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\user\AppData\Local\Temp\pip-build-env-pbo1oi67\overlay\Lib\site-packages\setuptools\build_meta.py", line 410, in build_wheel
return self._build_with_temp_dir(
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\user\AppData\Local\Temp\pip-build-env-pbo1oi67\overlay\Lib\site-packages\setuptools\build_meta.py", line 395, in _build_with_temp_dir
self.run_setup()
File "C:\Users\user\AppData\Local\Temp\pip-build-env-pbo1oi67\overlay\Lib\site-packages\setuptools\build_meta.py", line 311, in run_setup
exec(code, locals())
File "<string>", line 1, in <module>
File "C:\Users\user\AppData\Local\Temp\pip-build-env-pbo1oi67\overlay\Lib\site-packages\setuptools\__init__.py", line 103, in setup
return distutils.core.setup(**attrs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\user\AppData\Local\Temp\pip-build-env-pbo1oi67\overlay\Lib\site-packages\setuptools\_distutils\core.py", line 184, in setup
return run_commands(dist)
^^^^^^^^^^^^^^^^^^
File "C:\Users\user\AppData\Local\Temp\pip-build-env-pbo1oi67\overlay\Lib\site-packages\setuptools\_distutils\core.py", line 200, in run_commands
dist.run_commands()
File "C:\Users\user\AppData\Local\Temp\pip-build-env-pbo1oi67\overlay\Lib\site-packages\setuptools\_distutils\dist.py", line 969, in run_commands
self.run_command(cmd)
File "C:\Users\user\AppData\Local\Temp\pip-build-env-pbo1oi67\overlay\Lib\site-packages\setuptools\dist.py", line 968, in run_command
super().run_command(command)
File "C:\Users\user\AppData\Local\Temp\pip-build-env-pbo1oi67\overlay\Lib\site-packages\setuptools\_distutils\dist.py", line 988, in run_command
cmd_obj.run()
File "C:\Users\user\AppData\Local\Temp\pip-build-env-pbo1oi67\normal\Lib\site-packages\wheel\bdist_wheel.py", line 403, in run
self.run_command("install")
File "C:\Users\user\AppData\Local\Temp\pip-build-env-pbo1oi67\overlay\Lib\site-packages\setuptools\_distutils\cmd.py", line 316, in run_command
self.distribution.run_command(command)
File "C:\Users\user\AppData\Local\Temp\pip-build-env-pbo1oi67\overlay\Lib\site-packages\setuptools\dist.py", line 968, in run_command
super().run_command(command)
File "C:\Users\user\AppData\Local\Temp\pip-build-env-pbo1oi67\overlay\Lib\site-packages\setuptools\_distutils\dist.py", line 988, in run_command
cmd_obj.run()
File "C:\Users\user\ADO\test\src\awesome\custom_install.py", line 38, in run
subprocess.check_call(pip_command)
File "C:\Users\user\AppData\Local\Programs\Python\Python312\Lib\subprocess.py", line 413, in check_call
raise CalledProcessError(retcode, cmd)
subprocess.CalledProcessError: Command '['C:\\Users\\user\\ADO\\test\\env\\Scripts\\python.exe', '-m', 'pip', 'install', "--config-settings='--global-option=build_ext'", "--config-settings='--global-option=-IC:\\Users\\user\\AppData\\Local\\Programs\\Graphviz/include'", "--config-settings='--global-option=-LC:\\Users\\user\\AppData\\Local\\Programs\\Graphviz/lib'", 'pygraphviz']' returned non-zero exit status 1.
[end of output]
note: This error originates from a subprocess, and is likely not a problem with pip.
ERROR: Failed building wheel for awesome
Failed to build awesome
ERROR: Could not build wheels for awesome, which is required to install pyproject.toml-based projects
</code></pre>
</li>
</ol>
<p>Looking through the traceback I found this:</p>
<pre><code>C:\Users\user\ADO\test\env\Scripts\python.exe: No module named pip
</code></pre>
<p>Not sure how that's possible as I believe <code>pip</code> comes with the virtual environment.
But to be sure, I run the following at the top of :method:<code>InstallPygraphviz.run</code>:</p>
<pre class="lang-py prettyprint-override"><code>def run(self):
    python_exe = sys.executable
    subprocess.check_call([python_exe, "-m", "ensurepip"])
    ...
</code></pre>
<p>This passes, as noted in the traceback just before it errors:</p>
<pre><code>...
running install
Looking in links: c:\Users\user\AppData\Local\Temp\tmpyagl6bqz
Processing c:\users\user\appdata\local\temp\tmpyagl6bqz\pip-24.0-py3-none-any.whl
Installing collected packages: pip
Successfully installed pip-24.0
C:\Users\user\ADO\test\env\Scripts\python.exe: No module named pip
...
</code></pre>
<p>I'm at a loss for what to do. Running the code in :method:<code>InstallPygraphviz.run</code> in an interpreter works just fine, as does running the commands within the <code>pip_command</code> in git-bash.</p>
<p>How do I tell <code>subprocess</code> that <code>pip</code> exists in the environment and it should use it?</p>
|
<python><pip><subprocess><setuptools><python-venv>
|
2024-06-05 18:36:46
| 1
| 3,325
|
Ian Thompson
|
78,582,693
| 4,399,016
|
Clustering using Python
|
<p>I have data that resembles this:</p>
<pre><code>import pandas as pd
import random

random.seed(901)

rand_list1 = []
rand_list2 = []
rand_list3 = []
rand_list4 = []
rand_list5 = []

for i in range(20):
    x = random.randint(80, 1000)
    rand_list1.append(x / 100)
    y1 = random.randint(-200, 200)
    rand_list2.append(y1 / 10)
    y2 = random.randint(-200, 200)
    rand_list3.append(y2 / 10)
    y3 = random.randint(-200, 200)
    rand_list4.append(y3 / 10)
    y4 = random.randint(-200, 200)
    rand_list5.append(y4 / 10)

df = pd.DataFrame({'Rainfall Recorded': rand_list1, 'TAXI A': rand_list2,
                   'TAXI B': rand_list3, 'TAXI C': rand_list4, 'TAXI D': rand_list5})
df.head()
Rainfall Recorded TAXI A TAXI B TAXI C TAXI D
0 5.21 13.7 -5.0 -14.2 9.8
1 2.39 -0.3 18.8 4.8 -6.4
2 8.09 15.0 -3.6 18.6 12.7
3 5.79 -0.2 14.6 0.9 3.8
4 7.48 10.9 9.0 15.4 -16.5
</code></pre>
<p>Given the rainfall recorded in our region in centimeters, these are the % changes in earnings reported by the taxi drivers surveyed. Can I use <code>K MEANS CLUSTERING</code> to determine whether the taxis operated in our locality or not? Suppose there is a relationship between the rainfall recorded and the change in earnings.</p>
<p>I have simple code got from web source:</p>
<pre><code>km = KMeans(n_clusters=2)
y_predicted = km.fit_predict(df[['TAXI','Rainfall Recorded']])
y_predicted
</code></pre>
<p>But I am unsure what transformations need to be done before using this code.</p>
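<p>One transformation that is almost certainly needed: the columns are on different scales (rainfall in cm versus percentage changes), so they should be standardized before KMeans, or the larger-scale feature dominates the distance metric. In practice sklearn's <code>StandardScaler</code> does this; a stdlib sketch of the z-score transform it applies to each column:</p>

```python
from statistics import mean, pstdev

def zscore(values):
    # Standardize a column to zero mean and unit (population) standard deviation.
    m, s = mean(values), pstdev(values)
    return [(v - m) / s for v in values]
```

<p>Applying this to each feature column before clustering puts them on a comparable footing.</p>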
|
<python><pandas><cluster-analysis><k-means>
|
2024-06-05 17:38:14
| 2
| 680
|
prashanth manohar
|
78,582,566
| 1,841,839
|
Get batch predictions for Gemini - Vertex AI returns unsupported
|
<p>I have been chasing my tail all day trying to get <a href="https://cloud.google.com/vertex-ai/generative-ai/docs/multimodal/batch-prediction-gemini" rel="nofollow noreferrer">Get batch predictions for Gemini</a> to work.</p>
<p>I have been digging around in all the samples and documentation that I can find and have not been able to get this to work. From what I can tell, the library does not support this yet. I am faced with this error.</p>
<blockquote>
<p>grpc.aio._call.AioRpcError: <AioRpcError of RPC that terminated with:
status = StatusCode.UNIMPLEMENTED
details = "Received http2 header with status: 404"
debug_error_string = "UNKNOWN:Error received from peer {grpc_message:"Received http2 header with status: 404", grpc_status:12, created_time:"2024-06-05T17:00:46.9053534+00:00"}"</p>
</blockquote>
<p>Any help would be greatly appreciated.</p>
<pre><code>async def sample_create_batch_prediction_job():
    input_config = aiplatform_v1.types.BatchPredictionJob.InputConfig(
        bigquery_source=aiplatform_v1.types.BigQuerySource(
            input_uri="bq://sample.text_input"
        )
    )
    output_config = aiplatform_v1.types.BatchPredictionJob.OutputConfig(
        bigquery_destination=aiplatform_v1.types.BigQueryDestination(
            output_uri="bq://sample.llm_dataset.embedding_out_BP_sample_publisher_BQ_20230712_134650"
        )
    )

    # https://cloud.google.com/vertex-ai/generative-ai/docs/model-reference/batch-prediction-api
    batch_prediction_job_1 = BatchPredictionJob(
        name=os.getenv("PROJECT_ID"),
        display_name='my-batch-prediction-job',
        model=f"projects/{os.getenv('PROJECT_ID')}/locations/us-central1/models/gemini-1.0-pro-001",
        input_config=input_config,
        output_config=output_config,
    )

    # Create a client
    client = aiplatform_v1.JobServiceAsyncClient()

    request = aiplatform_v1.CreateBatchPredictionJobRequest(
        parent=f"projects/{os.getenv('PROJECT_ID')}/locations/us-central1",
        batch_prediction_job=batch_prediction_job_1,
    )

    # Make the request
    response = await client.create_batch_prediction_job(request=request)

    # Handle the response
    print(response)
</code></pre>
<p>I thought it might be the BigQuery URIs; I tried adding the table IDs for my own tables, but there was no change. The documentation doesn't actually state what to pass for these BigQuery URIs, nor what the data model should look like.</p>
|
<python><google-bigquery><google-cloud-vertex-ai><google-gemini>
|
2024-06-05 17:13:10
| 1
| 118,263
|
Linda Lawton - DaImTo
|
78,582,471
| 9,042,093
|
How to add default pickle type value in sqlalchemy in python
|
<p>I have defined the PickleType column like below and created a table called test.</p>
<p>My intention is to add pickle data to the column and commit it.</p>
<p>Also, if there is no value, I want a default pickle value (maybe an empty list or an empty dict) to be added to the column.</p>
<p>If I try to execute below code</p>
<pre><code>from sqlalchemy import Column, Integer, Text, PickleType
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy import create_engine
from sqlalchemy.orm import sessionmaker
from sqlalchemy.ext.mutable import Mutable, MutableList

engine = create_engine("sqlite:///test.db")  # engine creation not shown in the original snippet

Test1 = declarative_base()


class Test(Test1):
    __tablename__ = "test"
    id = Column(Integer, primary_key=True, autoincrement=True)
    test_name = Column(Text)
    test_list = Column(MutableList.as_mutable(PickleType))

    def __init__(self):
        self.test_name = " "
        self.test_list = []


Test.metadata.create_all(engine)

session = sessionmaker(bind=engine)
session = session()

d1 = Test()
d1.test_name = 'test'
d1.test_list = []

session.add(d1)
session.commit()
session.close()
</code></pre>
<p>I get this error</p>
<pre><code>
sqlalchemy.exc.OperationalError: (sqlite3.OperationalError) table test has no column named test_list
[SQL: INSERT INTO test (test_name, test_list) VALUES (?, ?)]
[parameters: ('test', <memory at 0x7fd610d21cc0>)]
(Background on this error at: https://sqlalche.me/e/14/e3q8)
</code></pre>
<p><strong>Edited:</strong></p>
<p>How do I add a dict type to the PickleType column in the above example?</p>
|
<python><sqlalchemy>
|
2024-06-05 16:53:11
| 2
| 349
|
bad_coder9042093
|
78,582,415
| 1,022,260
|
How to access tags for current route from Middleware
|
<p>Using FastAPI, I have a route like this:</p>
<pre class="lang-py prettyprint-override"><code>@app.get("/apps", tags=['bearer'])
async def apps():
    return get_apps()
</code></pre>
<p>I have middleware, and need to get a list of tags for the current route in the middleware.</p>
<p>How do I do that?</p>
<pre class="lang-py prettyprint-override"><code>@app.middleware("http")
async def check_auth(request: Request, call_next):
    for header in request.headers:
        print("Header: %s=%s" % (header, request.headers.get(header)))

    # How do I get the tags array here so I can say -
    if 'bearer' in ?tags?:
        check_token(request)
    ...
    return response
</code></pre>
|
<python><fastapi>
|
2024-06-05 16:37:53
| 0
| 11,513
|
mikeb
|
78,582,392
| 11,951,910
|
Is there a function or argument to remove quotes when writing to csv file in python
|
<p>I am appending a large amount of data and writing to a csv file.</p>
<p>The data is a list that gets appended to; an example:
<code>data = ['438.0337145,438.0178423,0.0158\n', '438.0497017,438.0337145,0.01598\n']</code></p>
<pre><code>with open(uuidCSV, 'w') as csvFile:
    wr = csv.writer(csvFile, lineterminator="\n", quoting=csv.QUOTE_ALL, delimiter=",")
    wr.writerow(data)
</code></pre>
<p>When I open the csv file the output is:</p>
<pre><code>"438.0337145,438.0178423,0.0158
","438.0497017,438.0337145,0.01598
</code></pre>
<p>Using the below method creates:</p>
<pre><code>with open(uuidCSV, 'w') as csvFile:
    wr = csv.writer(csvFile, lineterminator="\n", quoting=csv.QUOTE_NONE, delimiter=",", escapechar="\"")
    wr.writerow(data)
</code></pre>
<p>Creates:</p>
<pre><code> 438.0337145",438.0178423",0.0158"
,438.0497017",438.0337145",0.01598"
</code></pre>
<p>Is there a way to remove the quotes so the output is:</p>
<pre><code>438.0337145,438.0178423,0.0158
438.0497017,438.0337145,0.01598
</code></pre>
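<p>Since each element of <code>data</code> already contains its own commas and trailing newline, two fixes come to mind (a sketch using the sample data from above; output file names are arbitrary): either bypass the <code>csv</code> module and write the pre-formed lines directly, or split each string into fields first so <code>csv.writer</code> quotes only when actually needed:</p>

```python
import csv

data = ['438.0337145,438.0178423,0.0158\n', '438.0497017,438.0337145,0.01598\n']

# Option 1: the strings are already complete CSV lines, so write them as-is.
with open('out1.csv', 'w') as f:
    f.writelines(data)

# Option 2: split into fields so csv.writer only quotes when a field needs it.
with open('out2.csv', 'w', newline='') as f:
    wr = csv.writer(f, lineterminator="\n", quoting=csv.QUOTE_MINIMAL)
    wr.writerows(row.strip().split(',') for row in data)
```

<p>Both produce the unquoted output shown above; the second keeps the <code>csv</code> machinery for fields that genuinely need escaping.</p>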
|
<python><csvwriter>
|
2024-06-05 16:32:28
| 2
| 718
|
newdeveloper
|
78,582,282
| 2,123,706
|
apostrophe-like character appears in DataFrame as 'â\x80\x99'
|
<p>I am reading from SQL using sqlalchemy</p>
<pre><code>q = """select * from table_name"""
data = pd.read_sql_query(q,con)
</code></pre>
<p>It appears ok when I view the results on screen</p>
<p><a href="https://i.sstatic.net/MBPDhHSp.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/MBPDhHSp.png" alt="enter image description here" /></a></p>
<p>but when I assign it to a variable and write back to sql, it comes odd looking</p>
<pre><code>string_oi = data.iloc[0,8]
df = pd.DataFrame({'test':[string_oi]})
df.to_sql(
name= 'tableTest',
con = con,
if_exists='replace',
index = False
)
con.commit()
</code></pre>
<p><a href="https://i.sstatic.net/TME0gpdJ.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/TME0gpdJ.png" alt="enter image description here" /></a></p>
<p>When I define the string directly, it goes to SQL correctly</p>
<pre><code>string_oi = 'Tylerâs way'
</code></pre>
<p><a href="https://i.sstatic.net/Fy1EQsDV.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Fy1EQsDV.png" alt="enter image description here" /></a></p>
<p>Examining the string when read from sql, the accent is encoded as <code>â\x80\x99</code>, ie</p>
<pre><code>'Tylerâ\x80\x99s way'
</code></pre>
<p>How can I save/convert it so that strings are read correctly, which will then be ok when saving back to SQL? Presumably, this will happen with other special characters when reading the SQL table, and I would like to take care of this in a single step</p>
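<p>For what it's worth, <code>â\x80\x99</code> is the UTF-8 byte sequence for <code>’</code> read back one byte per character (a latin-1/cp1252 mis-decode). Assuming that is what happened here, a round-trip sketch to repair already-mangled values — though the cleaner fix would be declaring the right client encoding on the connection:</p>

```python
def fix_mojibake(s):
    # Re-encode the mis-decoded characters back to their original bytes,
    # then decode those bytes as the UTF-8 they actually were.
    try:
        return s.encode('latin-1').decode('utf-8')
    except (UnicodeEncodeError, UnicodeDecodeError):
        return s  # not this flavour of mojibake; leave untouched
```

<p>For example, <code>fix_mojibake('Tylerâ\x80\x99s way')</code> recovers the real apostrophe.</p>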
|
<python><sql-server><pandas><sqlalchemy><pyodbc>
|
2024-06-05 16:09:43
| 2
| 3,810
|
frank
|
78,582,268
| 6,691,780
|
building pybind11 package that wraps .so file with Rye and setuptools
|
<p>I try to build a python package that wraps C++ code. I use pybind11 to create the necessary bindings. I manage the project using <a href="https://rye.astral.sh/" rel="nofollow noreferrer">Rye</a> with setuptools instead of the default hatchlings as build system (because the pybind11 docs use setuptools).</p>
<p>I have a few (probably related) problems:
When I build my project, the resulting <code>dist/minimal_example-0.1.0-cp312-cp312-linux_x86_64.whl</code> does contain a <strong>.so</strong> file, but <code>dist/minimal_example-0.1.0.tar.gz</code> doesn't.</p>
<p>When I import the .so file I can use and see the <code>add()</code> function</p>
<pre class="lang-bash prettyprint-override"><code>cd /dist
unzip *.whl minimal_example.cpython-312-x86_64-linux-gnu.so
</code></pre>
<pre class="lang-py prettyprint-override"><code>>>> import minimal_example
>>> minimal_example.__file__
'/home/me/minimal_example/dist/minimal_example.cpython-312-x86_64-linux-gnu.so'
>>> minimal_example.add(1,2)
3
</code></pre>
<p>When I use <code>python</code> in the project dir or with <code>rye run python</code>, I can't access functions from the .so file.</p>
<pre class="lang-py prettyprint-override"><code>>>> import minimal_example
>>> minimal_example.__file__
'/home/me/minimal_example/src/minimal_example/__init__.py'
>>> minimal_example.add(1,2)
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
AttributeError: module 'minimal_example' has no attribute 'add'
</code></pre>
<p>I have two questions.</p>
<ul>
<li>how can I configure my build in such a way that <code>minimal_example.cpython-312-x86_64-linux-gnu.so</code> is available in <code>dist/minimal_example-0.1.0.tar.gz</code>?</li>
<li>How can I configure Rye in such a way that it can access the .so file it builds.</li>
</ul>
<p>My suspicion is that I need to put the .so file in <code>src/minimal_example/</code> during the build, and that this will solve both problems. Is this correct? And can somebody guide me in configuring setuptools/Rye to do this?</p>
<h3>recreate minimal project</h3>
<p>See the code to recreate a minimal example.</p>
<pre class="lang-bash prettyprint-override"><code>export PROJECTDIR=${PWD}
# I don't have clang available on my system so I use cc
export CC=cc
export CXX=c++
rye init minimal_example
cd minimal_example
</code></pre>
<p><strong>./.python-version</strong></p>
<pre><code>3.12.2
</code></pre>
<p><strong>./pyproject.toml</strong></p>
<pre class="lang-ini prettyprint-override"><code>[project]
name = "minimal-example"
version = "0.1.0"
description = "Add your description here"
authors = [
    { name = "me", email = "me@email.com" }
]
dependencies = [
    "pybind11>=2.12.0",
    "setuptools>=70.0.0",
]
readme = "README.md"
requires-python = ">= 3.8"

[build-system]
requires = ["setuptools>=42", "pybind11>=2.6.1"]
build-backend = "setuptools.build_meta"

[tool.rye]
managed = true
dev-dependencies = []
</code></pre>
<p><strong>./src/minimal_example/minimal_example.cpp</strong></p>
<pre class="lang-cpp prettyprint-override"><code>#include <pybind11/pybind11.h>

int add(int i, int j) {
    return i + j;
}

namespace py = pybind11;

PYBIND11_MODULE(minimal_example, m) {
    m.doc() = "minimal docs";
    m.def("add", &add, "example add");
}
</code></pre>
<p><strong>/.setup.py</strong></p>
<pre class="lang-py prettyprint-override"><code>from glob import glob
import os

from setuptools import setup
from pybind11.setup_helpers import Pybind11Extension, build_ext

if (projectdir := os.environ.get("PROJECTDIR", None)) is None:
    raise ValueError("env PROJECTDIR not set.")

ext_modules = [
    Pybind11Extension(
        'minimal_example',
        sorted(glob('src/minimal_example/*.cpp')),
        extra_compile_args=['-O3', '-Wall', '-std=c++11', '-fPIC']
    )
]

setup(
    cmdclass={"build_ext": build_ext}, ext_modules=ext_modules
)
</code></pre>
|
<python><c++><linux><setuptools><pybind11>
|
2024-06-05 16:07:01
| 0
| 1,529
|
Jonas
|
78,582,226
| 2,153,235
|
Avoid full cross-join in testing each record against every other record
|
<p>The <code>merge</code> in pandas has a cross-join function. I was hoping to
avoid a full cross-join in pairing each record of a large dataframe
with upto (say) a dozen records in another dataframe (possibly the
same dataframe) based on complex criteria involving a function of
multiple fields, i.e., not necessarily an equivalence relationship between
"key" columns.</p>
<p>Below is one simple example of matching based on inequality
relationships. I've commented out some input records to make the
output smaller, but they can be easily added back in for
experimentation with stricter matching criteria:</p>
<pre><code>import pandas as pd

# Input dataframe
df_in = pd.DataFrame([ [ 1, 1, 4, 1 ],
                       [ 3, 4, 2, 4 ],
                       [ 0, 2, 0, 0 ],
                       [ 3, 1, 4, 4 ],
                       [ 0, 4, 4, 3 ],
                       # [ 1, 4, 2, 2 ],
                       # [ 3, 2, 0, 2 ],
                       # [ 3, 3, 1, 1 ],
                       # [ 0, 2, 2, 2 ],
                       # [ 4, 1, 2, 4 ],
                       # [ 3, 1, 1, 2 ],
                       # [ 2, 2, 3, 2 ],
                       # [ 2, 1, 3, 1 ],
                       # [ 2, 4, 1, 2 ],
                       # [ 1, 0, 0, 3 ],
                       [ 2, 1, 1, 3 ],
                       [ 2, 0, 1, 1 ],
                       [ 4, 1, 2, 1 ],
                       [ 3, 2, 2, 4 ],
                       [ 3, 1, 0, 0 ] ],
                     columns=['A','B','C','D'] )

# Full cross-join
df_out = df_in.merge( df_in, how='cross' )

# Select a very small subset
df_out = df_out[ ( df_out.A_x < df_out.B_y ) &
                 ( df_out.C_x > df_out.D_y ) ]
A_x B_x C_x D_x A_y B_y C_y D_y
2 1 1 4 1 0 2 0 0
4 1 1 4 1 0 4 4 3
34 3 1 4 4 0 4 4 3
40 0 4 4 3 1 1 4 1
42 0 4 4 3 0 2 0 0
44 0 4 4 3 0 4 4 3
45 0 4 4 3 2 1 1 3
47 0 4 4 3 4 1 2 1
49 0 4 4 3 3 1 0 0
</code></pre>
<p>In SQL, I didn't need to create a full cross-join before filtering
away the unwanted matches. I could incorporate the filtering criteria
as a WHERE clause of a join, essentially in place of an ON clause.
The complex join criterion could involve mathematical or string
functions, or I wrote my own arbitrarily complex functions in VBA (at least I could for
Microsoft Access).</p>
<p>These joins didn't take too much time despite the large number of
records. I strongly suspect that the query engine optimized the execution of the join so that
nothing close to a full cross-join is constructed.</p>
<p>Is there any way to incorporate complex join criteria into pandas
joins without the memory and time requirements of an intermediate
cross-join? How flexible can the criteria be, e.g., can they
incorporate arbitrary functions, including user defined functions?</p>
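<p>For comparison, the SQL-style behaviour (filtering during the join rather than after a materialized cross product) can be emulated with a plain generator, trading pandas vectorization for O(1) memory; <code>predicate</code> here is whatever complex criterion applies, illustrated with the inequality condition from the example above:</p>

```python
def filtered_pairs(left_rows, right_rows, predicate):
    # Yield only matching pairs; the full cross product is never materialized.
    for a in left_rows:
        for b in right_rows:
            if predicate(a, b):
                yield a, b
```

<p>With the ten uncommented rows above and <code>predicate=lambda a, b: a[0] &lt; b[1] and a[2] &gt; b[3]</code>, this yields the same nine pairs as the cross-join-then-filter version, without ever holding the 100-row intermediate.</p>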
<h2>Background</h2>
<p>The dataframe consists of many messages with various sender IDs. The
IDs are typed in, so the data is very dirty. I will use Levenshtein
distance to group together IDs that are likely to be the same, but
each ID needs to be tested against every other ID, even though matches
will only be made between small subsets of the IDs (at least, I hope so).</p>
<p>That is only the starting point for a matching criterion. The
messages contain travel destinations, various time stamps, and other
identifying data. The join criterion will be expanded in an exploratory manner to make
use of matching conditions between other fields as well.</p>
|
<python><pandas>
|
2024-06-05 15:58:56
| 1
| 1,265
|
user2153235
|
78,582,174
| 719,276
|
How to wait until multiprocessing.connection.Client is available in python?
|
<p>I have a <code>process.py</code> script:</p>
<pre class="lang-py prettyprint-override"><code>from multiprocessing.connection import Listener

with Listener(('localhost', 6000)) as listener:
    with listener.accept() as connection:
        message = connection.recv()
        # [...]
        connection.send(message)
</code></pre>
<p>I launch this script in a thread (to avoid blocking the main thread) with:</p>
<pre class="lang-py prettyprint-override"><code>import threading, subprocess
threading.Thread(target=lambda: subprocess.run(['python', 'process.py'])).start()
</code></pre>
<p>But sometimes I want to wait (block the main thread) until my process has launched.</p>
<p>Here is what I do:</p>
<pre class="lang-py prettyprint-override"><code>from multiprocessing.connection import Client

nAttempts = 0
while True:
    try:
        with Client(('localhost', 6000)) as connection:
            connection.send(nAttempts)
            message = connection.recv()
            # [...]
            break
    except ConnectionRefusedError:
        nAttempts += 1
        pass
</code></pre>
<p>Is there a better approach?</p>
<p>The program tries ~282 connections before it can connect; could this be a problem?</p>
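<p>One refinement of the loop above would be to sleep between attempts and bound the total wait, so the hundreds of busy-spin attempts become a handful; a sketch (the timeout and delay values are arbitrary):</p>

```python
import time
from multiprocessing.connection import Client

def wait_for_server(address, timeout=10.0, delay=0.05):
    # Retry with a short sleep instead of busy-looping; give up after `timeout`.
    deadline = time.monotonic() + timeout
    while True:
        try:
            return Client(address)
        except ConnectionRefusedError:
            if time.monotonic() >= deadline:
                raise TimeoutError(f"no server listening at {address}")
            time.sleep(delay)
```

<p>The caller gets either a ready connection or a clear <code>TimeoutError</code>, instead of an unbounded spin.</p>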
|
<python><sockets><multiprocessing><client>
|
2024-06-05 15:48:53
| 2
| 11,833
|
arthur.sw
|
78,582,172
| 3,908,009
|
One liner HTTP server responding in JSON?
|
<p>There are times when I want to just launch a quick http server on a basic linux machine with no special software installed.</p>
<p>I run the following:</p>
<pre><code>python3 -m http.server 8080
</code></pre>
<p>This returns the directory listing HTML page with response code 200.</p>
<p>Is there a similar one-liner that can return <code>{"OK"}</code> or some basic json?</p>
<p>It doesn't have to be python.</p>
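<p>For illustration, here is the behaviour I am after written out as a small stdlib-only script (handler name and payload are just examples); in principle it could be squeezed into a <code>python3 -c "..."</code> one-liner:</p>

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

class JSONHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Answer every GET with status 200 and a fixed JSON body.
        body = json.dumps({"status": "OK"}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass  # keep the console quiet

# To serve on port 8080 until interrupted:
# HTTPServer(("", 8080), JSONHandler).serve_forever()
```

<p>Unlike the default <code>http.server</code> handler, this returns the same JSON for every path.</p>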
|
<python><http>
|
2024-06-05 15:48:40
| 1
| 1,086
|
Neil
|
78,582,171
| 1,676,006
|
Is there any way to read and process an azure blob as a stream using the python sdk without loading the whole blob into memory?
|
<p>I'd like to be able to treat an azure blob like an IO object using the python SDK. As far as I can tell, this requires me to either:</p>
<p>a) use <code>.readinto()</code> to read the blob into an IO object (thus loading the whole undefined-size blob into memory inside a container with 256Mb of memory)</p>
<p>b) manually call <code>.read()</code> with an offset and a limit (thus requiring me to know exactly how much I want to read at a given time)</p>
<p>I'm trying to read a gzipped mongodump BSON file using <code>bson.decode_file_iter</code>, so I don't know exactly how many bytes I want to read in a given stretch (at least, not without having to decode the BSON myself a little), and obviously solution a) isn't great if I have particularly gigantic dump files (for example, I'm working with one that's half a gig uncompressed right now). To the extent of my knowledge, there's nothing in the azure blob sdk that exposes this. In an ideal universe, I'd be able to stream the blob through gzip decompression and into that bson decode function. Is there something I can use to achieve this <em>without</em> having to write a complete translation layer, or should I stick with my current approach of downloading the blob to a file, then using <code>gzip.open</code> to read the downloaded file and pipe it into <code>bson.decode_file_iter</code>?</p>
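<p>In the interim, the translation layer would not have to be complete: a file-like object only needs <code>readinto</code>. A sketch wrapping any iterator of byte chunks — with the Azure SDK this would be fed from <code>download_blob(...).chunks()</code>, which I believe streams without buffering the whole blob, though that is an assumption worth checking:</p>

```python
import io

class ChunkStream(io.RawIOBase):
    """Read-only file-like wrapper around an iterator of byte chunks."""

    def __init__(self, chunk_iter):
        self._chunks = iter(chunk_iter)
        self._buf = b""

    def readable(self):
        return True

    def readinto(self, b):
        # Refill the internal buffer from the iterator, then copy out
        # as many bytes as the caller's buffer can hold.
        while not self._buf:
            try:
                self._buf = next(self._chunks)
            except StopIteration:
                return 0  # EOF
        n = min(len(b), len(self._buf))
        b[:n] = self._buf[:n]
        self._buf = self._buf[n:]
        return n
```

<p>Wrapped in <code>io.BufferedReader</code>, the result can be handed straight to <code>gzip.open</code> and then <code>bson.decode_file_iter</code>, keeping memory usage bounded by the chunk size.</p>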
|
<python><azure-blob-storage><bson><azure-python-sdk>
|
2024-06-05 15:48:34
| 1
| 703
|
ChickenWing
|
78,582,091
| 4,445,584
|
How import external packages correctly using Nuitka
|
<p>I need some help with importing external packages correctly.
I am using Eclipse with the PyDev plugin to develop Python applications. I have a workspace containing all my projects and all the libraries/packages I use in them; the libs/packages are themselves projects. The filesystem structure is:</p>
<pre><code>/ (the base of the workspace: c:/Dati/workspaces/PythonEclipse/
+ APyLibUty +
|  +-- PyLibUty (this is the package I want to import)
+ MyProject +
+-- src +
+-- main.py
</code></pre>
<p>so I want PyLibUty, imported in main.py, to be imported correctly for Nuitka. I am just starting to use Nuitka; before, I used PyInstaller, and I am able to import the packages correctly for Python/PyDev and PyInstaller, but it seems that Nuitka needs something different. So far, when compiling a project with Nuitka, I can only import a package if it is inside the project that imports it (i.e. a subdirectory such as MyProject/PyLibUty); otherwise, when I run the binary, PyLibUty is reported as unknown and missing and the binary stops running.
I hope there is a simple solution, because I do not want to copy the PyLibUty package into each project where I need it. It is a package containing most of the classes and utilities I use most frequently.</p>
<p>This is my bat file to build using Nuitka:</p>
<pre><code>cd C:\Dati\workspaces\PythonEclipse\JanasSchoolsWebLoginPySide2\src
python -m nuitka wbbPyb2Main.py^
--standalone^
--follow-imports^
--windows-console-mode=force^
--include-package=PyLibUty^
--include-data-dir=locales=locales^
--include-data-file=config-cl.ini=config-cl.ini^
--include-data-file=Back.png=Back.png^
--include-data-file=Cancel.png=Cancel.png^
--include-data-file=Exit.png=Exit.png^
--include-data-file=Forward.png=Forward.png^
--include-data-file=Home.png=Home.png^
--include-data-file=Load.png=Load.png^
--include-data-file=Refresh.png=Refresh.png^
--include-data-file=key.ico=key.ico^
--include-data-file=key.png=key.png^
--include-package=certifi,urllib3.contrib.socks^
--enable-plugin=pyside2,pywebview,dll-files,glfw,implicit-imports^
--output-dir=distNuitka^
--output-file=jswl.exe
copy .\libs\libEGL.dll .\distNuitka\wbbPyb2Main.dist\
copy .\libs\libGLESv2.dll .\distNuitka\wbbPyb2Main.dist\
copy .\libs\opengl32sw.dll .\distNuitka\wbbPyb2Main.dist\
cd ..\bat
</code></pre>
<p>It works if PyLibUty is inside the src directory where wbbPyb2Main.py is saved; in that case it works without adding a path via sys.path.append(...).</p>
|
<python><nuitka>
|
2024-06-05 15:31:19
| 1
| 475
|
Massimo Manca
|
78,582,042
| 4,129,131
|
How to solve the xmlsec Error: (100, 'lxml & xmlsec libxml2 library version mismatch')
|
<p>I am facing the following error when deploying my Django (version 4.1) backend. I have the following Dockerfile (some non-relevant parts omitted) and need to install python3-saml (which has dependencies such as lxml and xmlsec).
The documentation mentions the following:
<a href="https://i.sstatic.net/7TijfZeK.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/7TijfZeK.png" alt="no binary installation lxml" /></a></p>
<p>Hence I added the command in the Dockerfile.</p>
<pre><code>FROM python:3.10.6
# Install necessary packages
RUN apt-get update && \
apt-get install -y s3fs pkg-config libxml2-dev libxmlsec1-dev libxmlsec1-openssl libxmlsec1 && \
apt-get clean && \
rm -rf /var/lib/apt/lists/* /tmp/* /var/tmp/*
# set work directory
WORKDIR /usr/src/app
# install dependencies
RUN pip install --upgrade pip
COPY ./requirements.txt .
RUN pip install -r requirements.txt
RUN pip install python3-saml && \
pip install --force-reinstall --no-binary lxml lxml
# copy project
COPY . .
# create cache directory
RUN mkdir -p /var/tmp/django_cache
ENTRYPOINT ["./docker-entrypoint.sh"]
</code></pre>
<p>I am able to build it without problem but upon deployment, I get the following error:</p>
<pre><code> File "/usr/local/lib/python3.10/site-packages/onelogin/saml2/auth.py", line 12, in <module>
import xmlsec
xmlsec.Error: (100, 'lxml & xmlsec libxml2 library version mismatch')
</code></pre>
<p>I am not sure which library version is expected in this case. I have tried
<code>pip install --no-binary lxml==4.6.3 lxml==4.6.3 --force-reinstall</code> instead, as well as version 4.9.3 (as seen in other recent threads), but with no success. (<code>pip install python3-saml</code> installs lxml version 5.2.2.)</p>
<p>The requirements.txt file looks as follows:</p>
<pre><code>apipkg==1.5
asgiref==3.7.2
asn1crypto==1.4.0
astroid==2.4.2
async-timeout==4.0.3
attrs==20.1.0
authy==2.2.6
autopep8==1.5.4
certifi==2020.6.20
cffi==1.15.1
chardet==3.0.4
click==7.1.2
colorama==0.4.6
coverage==5.2.1
cryptography==42.0.2
Cython==0.29.21
decorator==5.1.1
defusedxml==0.7.1
deprecation==2.1.0
dictor==0.1.11
Django==4.1
django-cors-headers==3.5.0
django-debug-toolbar==4.3.0
django-eav2==0.13.0
django-filter==2.4.0
django-liststyle==0.1
django-redis==5.4.0
django-redis-cache==3.0.1
django-rest-auth==0.9.5
django-reversion==5.0.3
django-simple-history==2.11.0
djangorestframework==3.14.0
djangorestframework-jwt==1.11.0
djangorestframework-simplejwt==5.2.2
drf-extensions==0.7.1
elementpath==4.2.1
exceptiongroup==1.2.0
execnet==1.7.1
factory-boy==3.0.1
Faker==4.1.2
flake8==3.8.3
future==0.18.2
idna==2.10
importlib-metadata==1.7.0
inflection==0.5.1
iniconfig==1.0.1
install==1.3.5
isodate==0.6.1
isort==5.5.3
Jinja2==2.11.2
lazy-object-proxy==1.4.3
MarkupSafe==1.1.1
mccabe==0.6.1
more-itertools==8.5.0
numpy==1.23.5
packaging==20.4
pandas==1.4.1
Paste==3.7.1
phonenumbers==8.12.16
pikepdf==7.2.0
Pillow==9.5.0
pluggy==0.13.1
psycopg2==2.9.3
psycopg2-binary==2.9.3
py==1.9.0
pycodestyle==2.6.0
pycparser==2.20
pycryptodomex==3.20.0
pyflakes==2.2.0
Pygments==2.6.1
PyJWT==1.7.1
pylint==2.6.0
pymemcache==4.0.0
PyMuPDF==1.22.3
pyOpenSSL==24.0.0
pyparsing==2.4.7
PyPDF2==1.26.0
pytest==7.1.3
pytest-cov==2.10.1
pytest-django==3.9.0
python-dateutil==2.8.1
python-memcached==1.59
pytz==2020.1
PyYAML==5.3.1
qrcode==7.3.1
redis==3.5.3
redispy==3.0.0
repoze.who==3.0.0
requests==2.24.0
six==1.15.0
slack-sdk==3.12.0
sqlparse==0.3.1
text-unidecode==1.3
toml==0.10.1
tomli==2.0.1
typing_extensions==4.9.0
tzdata==2023.4
urllib3==1.25.10
wcwidth==0.2.5
WebOb==1.8.7
Werkzeug==1.0.1
wrapt==1.12.1
xmlschema==2.5.1
zipp==3.1.0
zope.interface==6.1
</code></pre>
|
<python><django><lxml><saml><xmlsec>
|
2024-06-05 15:23:07
| 1
| 670
|
Philippe
|
78,581,853
| 5,281,775
|
Multi-positional one-hot encoding based update
|
<p>I have some array, and a given index/array of indices. Using this index/indices, I have to update another array. These updates shouldn't be in-place. I am able to do this for a single index, but unable to achieve the right result for array of indices. Here is an example:</p>
<pre class="lang-py prettyprint-override"><code>def one_hot(x, depth):
return np.take(np.eye(depth), x, axis=0)
arr1 = np.random.rand(2, 5, 3, 8)
ohe_depth = 5
# Case 1: Single index based OHE op
curr_idx = np.array([1])
curr_idx_ohe = one_hot(curr_idx, ohe_depth).reshape(1, ohe_depth, 1, 1)
out = arr1 * curr_idx_ohe # produces right output
# Case 2: Multi index based OHE op
curr_idx = np.arange(3)
curr_idx_ohe = one_hot(curr_idx, ohe_depth)
# What should be the transformation here that is equivalent to:
# out2 = np.zeros_like(arr1)
# out2[:, curr_idx, ...] = arr1[:, curr_idx, ...]
</code></pre>
<p>Again, no in-place updates are allowed. TIA.</p>
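One possible transformation, sketched: summing the one-hot rows collapses them into a single 0/1 mask over the depth axis, which broadcasts exactly like the single-index case and requires no in-place writes:

```python
# Multi-index masking without in-place updates: sum the one-hot rows into a
# single 0/1 mask over the depth axis, then broadcast-multiply.
import numpy as np

def one_hot(x, depth):
    return np.take(np.eye(depth), x, axis=0)

arr1 = np.random.rand(2, 5, 3, 8)
ohe_depth = 5
curr_idx = np.arange(3)

mask = one_hot(curr_idx, ohe_depth).sum(axis=0).reshape(1, ohe_depth, 1, 1)
out2 = arr1 * mask

# Compare against the in-place reference version from the question:
ref = np.zeros_like(arr1)
ref[:, curr_idx, ...] = arr1[:, curr_idx, ...]
print(np.allclose(out2, ref))
```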
|
<python><arrays><numpy>
|
2024-06-05 14:49:54
| 0
| 2,325
|
enterML
|
78,581,797
| 6,330,106
|
How can I create an in-memory file object that has a file descriptor in Python?
|
<p>I plan to use <code>subprocess.Popen</code> (in Python 3.11.2) to implement <code>git mktree < foo.txt</code>, and I have run into the following question.</p>
<p>In order to reproduce my situation, here is the script that creates the environment.</p>
<pre class="lang-bash prettyprint-override"><code>#!/bin/bash
export GIT_AUTHOR_NAME=foo
export GIT_AUTHOR_EMAIL=foo@bar.com
export GIT_AUTHOR_DATE="Tue Jun 4 21:40:15 2024 +0800"
export GIT_COMMITTER_NAME=foo
export GIT_COMMITTER_EMAIL=foo@bar.com
export GIT_COMMITTER_DATE="Tue Jun 4 21:40:15 2024 +0800"
rm -rf foo
git init foo
mkdir -p foo/hello
echo hello > foo/hello/hello.txt
echo hello > foo/hello.txt
echo world > foo/world.txt
git -C foo add .
git -C foo commit -m 'hello world'
git -C foo log --no-decorate
git -C foo ls-tree HEAD hello.txt
git -C foo ls-tree HEAD world.txt
</code></pre>
<p>It's expected to print the commit and 2 blob entries.</p>
<pre class="lang-none prettyprint-override"><code>commit d2b25fd15c1435f515dd6379eca8d691dde6abeb
Author: foo <foo@bar.com>
Date: Tue Jun 4 21:40:15 2024 +0800
hello world
100644 blob ce013625030ba8dba906f756967f9e9ca394464a hello.txt
100644 blob cc628ccd10742baea8241c5924df992b5c019f71 world.txt
</code></pre>
<p>I want to create a commit from the tree that has only <code>hello.txt</code> and <code>world.txt</code>. So first I need to create the new tree object. (Update: after getting the right solution, I find that <strong>the code has a fatal bug.</strong> It creates a tree object with wrong content due to str and bytes.)</p>
<pre><code>import subprocess
# get the blob entry of hello.txt
o, e = subprocess.Popen(
['git', 'ls-tree', 'HEAD', 'hello.txt'],
stdout=subprocess.PIPE,
env={'GIT_DIR': 'foo/.git'},
).communicate()
line1 = o.decode()
# get the blob entry of world.txt
o, e = subprocess.Popen(
['git', 'ls-tree', 'HEAD', 'world.txt'],
stdout=subprocess.PIPE,
env={'GIT_DIR': 'foo/.git'},
).communicate()
line2 = o.decode()
# write the 2 lines to foo.txt
with open('foo.txt', 'w') as f:
f.write(line1)
f.write(line2)
# create a new tree object from foo.txt
with open('foo.txt') as f:
o, e = subprocess.Popen(
['git', 'mktree'],
stdin=f,
stdout=subprocess.PIPE,
env={'GIT_DIR': 'foo/.git'},
).communicate()
tree = o.decode()
print(f'created tree object {tree}')
</code></pre>
<p>I wonder if I can use an in-memory file object so that I don't have to create and remove <code>foo.txt</code>. As <code>io.StringIO</code> is recommended in many answers, I tried the following code.</p>
<pre><code>import subprocess
import io
line1 = '100644 blob ce013625030ba8dba906f756967f9e9ca394464a\thello.txt\n'
line2 = '100644 blob cc628ccd10742baea8241c5924df992b5c019f71\tworld.txt\n'
with io.StringIO(line1 + line2) as f:
o, e = subprocess.Popen(
['git', 'mktree'],
stdin=f,
stdout=subprocess.PIPE,
env={'GIT_DIR': 'foo/.git'},
).communicate()
tree = o.decode()
print(f'created tree object {tree}')
</code></pre>
<p>It raises an exception.</p>
<pre class="lang-none prettyprint-override"><code>Traceback (most recent call last):
File "<stdin>", line 2, in <module>
File "C:\Python311\Lib\subprocess.py", line 892, in __init__
errread, errwrite) = self._get_handles(stdin, stdout, stderr)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Python311\Lib\subprocess.py", line 1339, in _get_handles
p2cread = msvcrt.get_osfhandle(stdin.fileno())
^^^^^^^^^^^^^^
io.UnsupportedOperation: fileno
</code></pre>
<p>According to the <a href="https://docs.python.org/3/library/io.html#io.IOBase.fileno" rel="nofollow noreferrer">io doc</a>, it seems <code>io.StringIO</code> does not have a file descriptor.</p>
<blockquote>
<p>fileno()</p>
<p>Return the underlying file descriptor (an integer) of the
stream if it exists. An OSError is raised if the IO object does not
use a file descriptor.</p>
</blockquote>
<p>Is there any in-memory file object that has a file descriptor? Or is there any method to bypass the exception with <code>io.StringIO</code> in <code>subprocess.Popen</code>?</p>
<p><strong>Update</strong>:</p>
<p>With the help of @Ture Pålsson 's answer, I get the expected solution.</p>
<pre class="lang-py prettyprint-override"><code>import subprocess
# get the blob entry of hello.txt
line1, _ = subprocess.Popen(
['git', 'ls-tree', 'HEAD', 'hello.txt'],
stdout=subprocess.PIPE,
env={'GIT_DIR': 'foo/.git'},
).communicate()
# get the blob entry of world.txt
line2, _ = subprocess.Popen(
['git', 'ls-tree', 'HEAD', 'world.txt'],
stdout=subprocess.PIPE,
env={'GIT_DIR': 'foo/.git'},
).communicate()
# create a tree object from the lines
# although Popen's text=True allows communicate's input to be str,
# here it should be bytes.
# str input creates a tree object with wrong content.
o, e = subprocess.Popen(
['git', 'mktree'],
stdin=subprocess.PIPE,
stdout=subprocess.PIPE,
env={'GIT_DIR': 'foo/.git'},
).communicate(input=line1+line2)
tree = o.decode().strip()
print(f'created tree object {tree}')
</code></pre>
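If a real file descriptor is strictly required, <code>os.pipe()</code> is one hedged alternative: an in-memory kernel buffer with two genuine fds, so the read end can be passed directly as <code>stdin</code>. A sketch (the payload is illustrative; pipe buffers are limited, often 64 KiB, so large inputs need a writer thread):

```python
# os.pipe() gives an in-memory kernel buffer backed by real file
# descriptors, so the read end works anywhere an fd is expected.
# Caveat: the payload must fit in the pipe buffer unless written
# concurrently from another thread.
import os
import subprocess
import sys

payload = b"100644 blob ce013625030ba8dba906f756967f9e9ca394464a\thello.txt\n"
r, w = os.pipe()
os.write(w, payload)   # small payload fits in the pipe buffer
os.close(w)            # close the write end so the reader sees EOF

# Echo stdin back via a child process, reading from the pipe's fd:
out = subprocess.run(
    [sys.executable, "-c", "import sys; sys.stdout.buffer.write(sys.stdin.buffer.read())"],
    stdin=r,
    capture_output=True,
).stdout
os.close(r)
print(out == payload)
```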
|
<python><python-3.x>
|
2024-06-05 14:42:40
| 1
| 31,575
|
ElpieKay
|
78,581,563
| 6,930,340
|
Create pandas time series DataFrame using hypothesis
|
<p>I need to create a <code>pd.DataFrame</code> with a <code>datetime</code> or <code>pd.date_range</code> index. I have the following code, which almost generates what I need.</p>
<p>However, a time series index is characterized by the fact that you have increasing dates/times. My code fails on this point.</p>
<pre><code>import datetime
from hypothesis import strategies as st
from hypothesis.extra.pandas import columns as cols
from hypothesis.extra.pandas import data_frames, indexes
data_frames(
columns=cols(
["sec1", "sec2", "sec3"], elements=st.floats(allow_infinity=False)
),
index=indexes(
elements=st.dates(
min_value=datetime.date(2023, 10, 31),
max_value=datetime.date(2024, 5, 31),
),
min_size=5,
),
).example()
Out[56]:
sec1 sec2 sec3
2024-01-06 -1.000000e-05 2.225074e-313 -6.293733e+16
2023-10-31 1.797693e+308 -4.344429e+16 1.401298e-45
2024-03-26 NaN -1.112537e-308 1.500000e+00
2023-11-26 1.000000e-05 2.220446e-16 -1.000000e-05
2023-11-13 1.900000e+00 -1.112537e-308 0.000000e+00
2024-03-14 6.607726e-296 NaN -3.333333e-01
2023-12-19 -1.346975e+185 -2.664620e+16 -2.568750e+16
2024-02-02 1.000000e-05 5.911449e+16 NaN
2024-05-03 1.112537e-308 6.984170e-302 -5.000000e-01
2024-01-12 5.000000e-01 -1.175494e-38 NaN
</code></pre>
<p>I am wondering if it is possible to replace my <code>indexes(...)</code> code with something along those lines:</p>
<pre><code>pd.date_range(start="12-29-2023", periods=10, freq="B", name="date")
</code></pre>
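One hedged option, assuming recent hypothesis/pandas versions: sort the generated index with <code>.map()</code>. Since <code>indexes()</code> generates unique values by default, the sorted result is strictly increasing:

```python
# Sort the generated index via .map(); indexes() yields unique values by
# default, so the sorted index is strictly increasing.
import datetime

from hypothesis import strategies as st
from hypothesis.extra.pandas import columns as cols
from hypothesis.extra.pandas import data_frames, indexes

df_strategy = data_frames(
    columns=cols(
        ["sec1", "sec2", "sec3"], elements=st.floats(allow_infinity=False)
    ),
    index=indexes(
        elements=st.dates(
            min_value=datetime.date(2023, 10, 31),
            max_value=datetime.date(2024, 5, 31),
        ),
        min_size=5,
    ).map(lambda idx: idx.sort_values()),
)

df = df_strategy.example()
print(df.index.is_monotonic_increasing)
```

This does not reproduce a business-day <code>pd.date_range</code> exactly, but it does give a time-series-like, increasing index.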
|
<python><pandas><python-hypothesis>
|
2024-06-05 14:03:13
| 1
| 5,167
|
Andi
|
78,581,219
| 8,995,555
|
DigitalOcean App Platform says Out of Memory during docker build
|
<p>I have set up a Flask web service using a Dockerfile for deployment. I have added some ML-related packages such as <code>torch</code>, <code>torchvision</code>, and many others.</p>
<p>The Docker image builds fine on my local Mac, but fails on DigitalOcean after the following.</p>
<p>It remains in the <code>[my-project] [2024-06-05 13:05:13] │ INFO[0162] Taking snapshot of files...</code> stage for over an hour and then fails with Out of Memory. This started happening very recently; I already have a working deployment for this project, but the new deployment fails in the build stage.</p>
<h3>Build freezes at</h3>
<pre><code>[my-project] [2024-06-05 13:05:10] │ Running setup.py install for flasgger: finished with status 'done'
[my-project] [2024-06-05 13:05:11] │ Successfully installed Flask-3.0.3 GitPython-3.1.43 Jinja2-3.1.4 MarkupSafe-2.1.5 PyYAML-6.0.1 Werkzeug-3.0.3 annotated-types-0.7.0 attrs-23.2.0 beautifulsoup4-4.12.3 blinker-1.8.2 cachetools-5.3.3 certifi-2024.6.2 charset-normalizer-3.3.2 click-8.1.7 contourpy-1.2.1 cycler-0.12.1 decorator-4.4.2 filelock-3.14.0 flasgger-0.9.7.1 fonttools-4.53.0 fsspec-2024.6.0 gitdb-4.0.11 google-3.0.0 google-ai-generativelanguage-0.6.4 google-api-core-2.19.0 google-api-python-client-2.131.0 google-auth-2.29.0 google-auth-httplib2-0.2.0 google-generativeai-0.5.4 googleapis-common-protos-1.63.0 grpcio-1.64.0 grpcio-status-1.62.2 gunicorn-22.0.0 httplib2-0.22.0 idna-3.7 imageio-2.34.1 imageio-ffmpeg-0.5.1 itsdangerous-2.2.0 jsonschema-4.22.0 jsonschema-specifications-2023.12.1 kiwisolver-1.4.5 loguru-0.7.2 matplotlib-3.9.0 mistune-3.0.2 moviepy-1.0.3 mpmath-1.3.0 networkx-3.3 numpy-1.26.4 nvidia-cublas-cu12-12.1.3.1 nvidia-cuda-cupti-cu12-12.1.105 nvidia-cuda-nvrtc-cu12-12.1.105 nvidia-cuda-runtime-cu12-12.1.105 nvidia-cudnn-cu12-8.9.2.26 nvidia-cufft-cu12-11.0.2.54 nvidia-curand-cu12-10.3.2.106 nvidia-cusolver-cu12-11.4.5.107 nvidia-cusparse-cu12-12.1.0.106 nvidia-nccl-cu12-2.20.5 nvidia-nvjitlink-cu12-12.5.40 nvidia-nvtx-cu12-12.1.105 opencv-contrib-python-4.10.0.82 opencv-python-4.10.0.82 opencv-python-headless-4.10.0.82 packaging-24.0 pandas-2.2.2 pillow-10.3.0 proglog-0.1.10 proto-plus-1.23.0 protobuf-4.25.3 psutil-5.9.8 py-cpuinfo-9.0.0 pyasn1-0.6.0 pyasn1_modules-0.4.0 pydantic-2.7.2 pydantic_core-2.18.3 pyparsing-3.1.2 python-dateutil-2.9.0.post0 python-dotenv-1.0.1 pytz-2024.1 referencing-0.35.1 requests-2.32.3 rpds-py-0.18.1 rsa-4.9 scipy-1.13.1 seaborn-0.13.2 setuptools-70.0.0 six-1.16.0 smmap-5.0.1 soupsieve-2.5 sympy-1.12.1 torch-2.3.0 torchvision-0.18.0 tqdm-4.66.4 triton-2.3.0 typing_extensions-4.12.1 tzdata-2024.1 ultralytics-8.2.28 ultralytics-thop-0.2.7 uritemplate-4.1.1 urllib3-2.2.1 wheel-0.43.0
[my-project] [2024-06-05 13:05:11] │
[my-project] [2024-06-05 13:05:11] │ [notice] A new release of pip is available: 23.0.1 -> 24.0
[my-project] [2024-06-05 13:05:11] │ [notice] To update, run: pip install --upgrade pip
[my-project] [2024-06-05 13:05:13] │ INFO[0162] Taking snapshot of files...
</code></pre>
<h3>Aftermath</h3>
<pre><code>[2024-06-05 13:05:10] │ Running setup.py install for flasgger: started
[2024-06-05 13:05:10] │ Running setup.py install for flasgger: finished with status 'done'
[2024-06-05 13:05:11] │ Successfully installed Flask-3.0.3 GitPython-3.1.43 Jinja2-3.1.4 MarkupSafe-2.1.5 PyYAML-6.0.1 Werkzeug-3.0.3 annotated-types-0.7.0 attrs-23.2.0 beautifulsoup4-4.12.3 blinker-1.8.2 cachetools-5.3.3 certifi-2024.6.2 charset-normalizer-3.3.2 click-8.1.7 contourpy-1.2.1 cycler-0.12.1 decorator-4.4.2 filelock-3.14.0 flasgger-0.9.7.1 fonttools-4.53.0 fsspec-2024.6.0 gitdb-4.0.11 google-3.0.0 google-ai-generativelanguage-0.6.4 google-api-core-2.19.0 google-api-python-client-2.131.0 google-auth-2.29.0 google-auth-httplib2-0.2.0 google-generativeai-0.5.4 googleapis-common-protos-1.63.0 grpcio-1.64.0 grpcio-status-1.62.2 gunicorn-22.0.0 httplib2-0.22.0 idna-3.7 imageio-2.34.1 imageio-ffmpeg-0.5.1 itsdangerous-2.2.0 jsonschema-4.22.0 jsonschema-specifications-2023.12.1 kiwisolver-1.4.5 loguru-0.7.2 matplotlib-3.9.0 mistune-3.0.2 moviepy-1.0.3 mpmath-1.3.0 networkx-3.3 numpy-1.26.4 nvidia-cublas-cu12-12.1.3.1 nvidia-cuda-cupti-cu12-12.1.105 nvidia-cuda-nvrtc-cu12-12.1.105 nvidia-cuda-runtime-cu12-12.1.105 nvidia-cudnn-cu12-8.9.2.26 nvidia-cufft-cu12-11.0.2.54 nvidia-curand-cu12-10.3.2.106 nvidia-cusolver-cu12-11.4.5.107 nvidia-cusparse-cu12-12.1.0.106 nvidia-nccl-cu12-2.20.5 nvidia-nvjitlink-cu12-12.5.40 nvidia-nvtx-cu12-12.1.105 opencv-contrib-python-4.10.0.82 opencv-python-4.10.0.82 opencv-python-headless-4.10.0.82 packaging-24.0 pandas-2.2.2 pillow-10.3.0 proglog-0.1.10 proto-plus-1.23.0 protobuf-4.25.3 psutil-5.9.8 py-cpuinfo-9.0.0 pyasn1-0.6.0 pyasn1_modules-0.4.0 pydantic-2.7.2 pydantic_core-2.18.3 pyparsing-3.1.2 python-dateutil-2.9.0.post0 python-dotenv-1.0.1 pytz-2024.1 referencing-0.35.1 requests-2.32.3 rpds-py-0.18.1 rsa-4.9 scipy-1.13.1 seaborn-0.13.2 setuptools-70.0.0 six-1.16.0 smmap-5.0.1 soupsieve-2.5 sympy-1.12.1 torch-2.3.0 torchvision-0.18.0 tqdm-4.66.4 triton-2.3.0 typing_extensions-4.12.1 tzdata-2024.1 ultralytics-8.2.28 ultralytics-thop-0.2.7 uritemplate-4.1.1 urllib3-2.2.1 wheel-0.43.0
[2024-06-05 13:05:11] │
[2024-06-05 13:05:11] │ [notice] A new release of pip is available: 23.0.1 -> 24.0
[2024-06-05 13:05:11] │ [notice] To update, run: pip install --upgrade pip
[2024-06-05 13:05:13] │ INFO[0162] Taking snapshot of files...
[2024-06-05 13:07:40] │
[2024-06-05 13:07:40] │ command exited with code -1
[2024-06-05 13:07:40] │
[2024-06-05 13:07:40] │ ✘ build failed
</code></pre>
<h3>My Dockerfile</h3>
<pre><code># Use the official Python image from the Docker Hub
FROM python:3.10-slim
# Set work directory
WORKDIR /app
# Install gcc and other dependencies
RUN apt-get update && apt-get install -y gcc && rm -rf /var/lib/apt/lists/*
# Create a virtual environment and activate it
RUN python -m venv /opt/venv
# Ensure the virtual environment is used for subsequent commands
ENV PATH="/opt/venv/bin:$PATH"
# Install Python dependencies inside the virtual environment
COPY requirements.txt /app/
RUN pip install --no-cache-dir -r requirements.txt
# Copy project files
COPY . /app/
# Expose the port on which the Flask app will run
EXPOSE 5000
# Set the environment variable for Flask
ENV FLASK_APP=app.app
# Run the Flask app with Gunicorn and set timeout to 7200 seconds
CMD ["gunicorn", "--timeout", "7200", "-b", "0.0.0.0:5000", "app.app:app"]
</code></pre>
|
<python><flask><digital-ocean>
|
2024-06-05 13:10:17
| 1
| 1,014
|
RukshanJS
|
78,581,188
| 2,245,709
|
Decorator in python-flask is giving "AssertionError: View function mapping is overwriting an existing endpoint function"
|
<p>I am writing a Flask endpoint with a decorator added for <strong>JWT decoding</strong>, but it keeps giving me the error</p>
<pre><code>AssertionError: View function mapping is overwriting an existing endpoint function
</code></pre>
<p>I am trying to decode the JWT token (using the decorator) and then pass the payload from the wrapper to the decorated function to display/return as the response.
My snippet is:</p>
<pre><code>def get_config(section):
with open('./config/configurations.yml',"r") as file:
config = yaml.safe_load(file)
return config[section]
def decode_jwt_token(token, secret_key, algo):
try:
decoded_payload = jwt.decode(token,secret_key,algorithms=[algo])
return decoded_payload
except jwt.ExpiredSignatureError:
print("Token has expired")
except jwt.InvalidTokenError:
print("Invalid Token")
def decodeJWT(func):
def wrapper(**kwargs):
config = get_config("jwt")
auth_header = request.headers.get("Authorization")
if auth_header and auth_header.startswith('Bearer'):
token = auth_header.split(' ')[1]
payload = decode_jwt_token(token, config["secret"], config["jwt_algorithm"])
return func(payload=payload)
else:
return jsonify({'message': 'Invalid Token'}), 401
return wrapper
@app.route("/test",methods=["GET"])
@decodeJWT
def test(**kwargs):
payload = kwargs.get("payload",None)
if payload is None:
return jsonify({'message': 'Token is invalid'}), 401
return jsonify({'message': 'This is a secure message', 'data': payload}), 200
</code></pre>
<p>The full error is:</p>
<pre><code>Traceback (most recent call last):
File "C:\Users\al90935\Documents\flask_rest_api\app.py", line 45, in <module>
def test(**kwargs):
File "C:\Users\al90935\Documents\flask_env\lib\site-packages\flask\sansio\scaffold.py", line 362, in decorator
self.add_url_rule(rule, endpoint, f, **options)
File "C:\Users\al90935\Documents\flask_env\lib\site-packages\flask\sansio\scaffold.py", line 47, in wrapper_func
return f(self, *args, **kwargs)
File "C:\Users\al90935\Documents\flask_env\lib\site-packages\flask\sansio\app.py", line 657, in add_url_rule
raise AssertionError(
AssertionError: View function mapping is overwriting an existing endpoint function: wrapper
</code></pre>
<p>Where am I going wrong? Is it somewhere in passing the "payload" argument?</p>
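For context, this error typically appears because every view wrapped by such a decorator is registered under the wrapper's name (<code>wrapper</code>), so a second decorated route collides. A Flask-free sketch of the usual <code>functools.wraps</code> pattern (the stand-in payload is an assumption):

```python
# functools.wraps copies __name__ (and other metadata) from the wrapped
# function onto the wrapper, so each decorated view keeps a distinct name
# instead of all being called "wrapper".
import functools

def decodeJWT(func):
    @functools.wraps(func)            # preserves func.__name__ for Flask's endpoint
    def wrapper(*args, **kwargs):
        kwargs["payload"] = {"sub": "demo"}   # stand-in for the real JWT decode
        return func(*args, **kwargs)
    return wrapper

@decodeJWT
def test_view(**kwargs):
    return kwargs["payload"]

@decodeJWT
def other_view(**kwargs):
    return kwargs["payload"]

# Each wrapped view now keeps its own name instead of "wrapper":
print(test_view.__name__, other_view.__name__)
```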
|
<python><flask>
|
2024-06-05 13:04:17
| 1
| 1,115
|
aiman
|
78,581,162
| 12,881,307
|
Python Kmeans consistently label clusters
|
<p>I have a high-dimensional dataset which I want to classify into different groups. The data can be confidently partitioned into 5 distinct groups.</p>
<p>I want to use the result of this clusterization process to create visual charts showcasing the difference within the resulting groups. As part of this task, I want to label each cluster so that instead of showing <code>cluster 0</code> I show <code>Industrial cluster</code>.</p>
<p>However, since K-means initialization is random, it could happen that <code>cluster 0</code> stops being the <code>Industrial cluster</code>. I've already set a random seed in my code to prevent this, but I expect my dataset to grow in the future.</p>
<p>I suspect that growing data could have the same effect as changing the random seed. I want to know how to label my clusters consistently and not worry when the numerical cluster labels get rearranged due to changes in the data. So far the only idea I have is to use a data point from each expected cluster to remap the cluster labels automatically, but I want a more robust solution.</p>
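One common approach, sketched under assumptions (toy 2-D centroids, scipy available): keep reference centroids from a canonical fit and map each refit's centroids to their nearest reference via an assignment problem:

```python
# Stabilize cluster labels by matching each new fit's centroids to a fixed
# set of reference centroids (Hungarian assignment on pairwise distances).
# The 2-D toy centroids below are illustrative assumptions.
import numpy as np
from scipy.optimize import linear_sum_assignment

def relabel(new_centroids, ref_centroids):
    # cost[i, j] = distance from new centroid i to reference centroid j
    cost = np.linalg.norm(
        new_centroids[:, None, :] - ref_centroids[None, :, :], axis=2
    )
    rows, cols = linear_sum_assignment(cost)
    return dict(zip(rows, cols))   # new label -> stable reference label

ref = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0]])
# Simulate a refit where the cluster order came out permuted and shifted:
new = ref[[2, 0, 1]] + 0.1
mapping = relabel(new, ref)
print(mapping)
```

The human-readable names ("Industrial cluster", etc.) then attach to the stable reference labels, not to whatever order a particular fit happens to produce.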
|
<python><scikit-learn>
|
2024-06-05 12:59:40
| 1
| 316
|
Pollastre
|
78,581,052
| 3,120,501
|
'Output states' in Dymos?
|
<p>I'm building a model in Dymos/OpenMDAO. There are a few states which are required for calculating the dynamics, and then some which I'd just like to derive and output. For example, say I had a very simple model with dynamic states of displacement x and velocity v, and a force input u. Then the dynamics would be x_dot = v, v_dot = u/m (where m is mass), and the OpenMDAO model would be:</p>
<pre><code>class MyModel(om.ExplicitComponent):
def initialize(self):
self.options.declare('num_nodes', types=int)
def setup(self):
nn = self.options['num_nodes']
self.add_input('x', shape=(nn,), desc='displacement', units='m')
self.add_input('v', shape=(nn,), desc='velocity', units='m/s')
self.add_input('u', shape=(nn,), desc='force', units='N')
self.add_input('m', shape=(nn,), desc='mass', units='kg')
# Not rates, just states
self.add_output('y', val=np.zeros(nn), desc='derived quantity', units='...')
# State rates
self.add_output('v_dot', val=np.zeros(nn), desc='rate of change of velocity', units='m/s**2')
self.declare_partials(of='*', wrt='*', method='cs')
def compute(self, inputs, outputs):
x, v, u, m = inputs.values()
outputs['v_dot'] = u / m
outputs['y'] = # Some expression
</code></pre>
<p>There's an additional state y which I'd just like to compute and output, so that I can use it in path constraints etc. However, I can't just do</p>
<pre><code>phase.add_state('y', rate_source='<what to put here?>', targets=['y'], units='...')
</code></pre>
<p>when defining the Dymos phase because the rate_source (of which there is none) can't be left unspecified. I thought about setting it as a parameter instead of a state but that requires y to be an input to the OpenMDAO model.</p>
<p>What would be the best way to accomplish this? Maybe I need to create an additional OpenMDAO component for the derived states and link the two together? I don't have much experience of this, so haven't explored this as an option as of yet.</p>
|
<python><optimization><openmdao>
|
2024-06-05 12:39:58
| 1
| 528
|
LordCat
|
78,580,994
| 7,442,673
|
Python 3 process non-printable characters in a unicode string
|
<p>I'm reading text that contains non-printable characters such as backspaces from a file <code>example.log</code>.
If I print the content of the file in a terminal by running <code>cat example.log</code>, I obtain a well-formatted string:</p>
<p><code>Loading file /path_to/some_db/hold_fix_crash.db</code></p>
<p>But there are some backspaces and other non printable characters in the file. When I run the command <code>cat example.log -v</code> (the <code>-v</code> option shows non printable characters), I obtain</p>
<p><code>Loading file /path_to/some_db/hold_f ^Hix_crash.db</code></p>
<p>I need to read the file in a Python script, process some lines, and write them back to a file. How can I process the text in these files to obtain the printed "cat" version of the string, i.e. with the backspaces applied and no longer present in the string?</p>
<p>If possible, I'm looking for a built in solution that would process all the problematic non printable characters.</p>
<p>Edit to clarify:
Using <code>with open("example.log", "r") as f: f.read()</code>, or using <code>subprocess.check_output("cat example.log")</code>, I obtain a string that <em>contains</em> non-printable characters, meaning I obtain</p>
<p><code>Loading file /path_to/some_db/hold_f ^Hix_crash.db</code></p>
<p>So when I later run commands using this string, I have problems.</p>
<p>My objective is to obtain</p>
<p><code>Loading file /path_to/some_db/hold_fix_crash.db</code></p>
<p>in my string after reading the content of the file.</p>
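A stdlib-only sketch of replaying backspaces the way a terminal does (other control characters, such as <code>\r</code>, would need similar handling):

```python
# Replay backspace characters (\x08) the way a terminal renders them:
# each backspace deletes the character before it.
def apply_backspaces(text: str) -> str:
    out = []
    for ch in text:
        if ch == "\x08":       # backspace: drop the previous character
            if out:
                out.pop()
        else:
            out.append(ch)
    return "".join(out)

# The example from the question: "hold_f ^Hix" collapses to "hold_fix".
raw = "Loading file /path_to/some_db/hold_f \x08ix_crash.db"
print(apply_backspaces(raw))
```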
|
<python><python-3.x><string><non-printing-characters>
|
2024-06-05 12:28:50
| 1
| 3,202
|
m.raynal - bye toxic community
|
78,580,775
| 2,849,491
|
Continuously read journald with Python
|
<p>I wrote a monitoring plugin that reads the systemd journal for a given unit and processes it.
My code looks like this:</p>
<pre><code># Once on start of the monitoring
from systemd import journal
j = journal.Reader()
j.seek_realtime(datetime.now())
j.add_match("_SYSTEMD_UNIT=postfix.service")
j.add_match("_SYSTEMD_UNIT=postfix@-.service")
# Every X Seconds this function is called
def get_metrics():
for entry in j:
print(entry)
# Process the entry
</code></pre>
<p>This seems to work at first, but after a few hours the <code>get_metrics</code> function suddenly stops reading new lines. The iterator just returns nothing anymore.</p>
<p>My current guess is that this happens due to rotation or vacuuming of journal files. But how do I detect this and refresh the Reader? I looked at <code>systemd.journal.Reader.process()</code> but didn't get it working.</p>
|
<python><systemd><systemd-journald>
|
2024-06-05 11:48:59
| 0
| 371
|
Nudin
|
78,580,737
| 10,816,965
|
Understanding the details of equality in Python
|
<p>When trying to construct an example where <code>a == b</code> is not the same as <code>b == a</code>, it seems that I have accidentally constructed an example where <code>a == b</code> is not the same as <code>a.__eq__(b)</code>:</p>
<pre><code>class A:
def __eq__(self, other: object) -> bool:
return type(other) is A
class B(A):
pass
if __name__ == '__main__':
a = A()
b = B()
assert not a.__eq__(b) # as expected
assert not (a == b) # Why does this fail?
</code></pre>
<p>Can somebody explain to me why the last assertion fails? I expected it to be the same as the second one.</p>
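For context, a sketch of what CPython does here: when the right operand's type is a proper subclass of the left operand's type, the right operand's <code>__eq__</code> is tried first, even when that method is only inherited:

```python
# With a == b, CPython gives priority to the subclass operand: it calls
# b.__eq__(a) first. B inherits A.__eq__, and type(a) is A, so a == b is
# True, while calling a.__eq__(b) directly gives False.
class A:
    def __eq__(self, other):
        return type(other) is A

class B(A):
    pass

a, b = A(), B()

direct = a.__eq__(b)    # A.__eq__(a, b): type(b) is A -> False
operator = (a == b)     # b.__eq__(a) tried first:  type(a) is A -> True
print(direct, operator)
```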
|
<python><equality>
|
2024-06-05 11:43:32
| 3
| 605
|
Sebastian Thomas
|
78,580,725
| 308,204
|
Annotate Django JSONField to convert all values to strings
|
<p>I'm using Django's full-text search features and want to search through a model called Row, which has a JSONField <code>.data</code> with a value like:</p>
<pre><code>[["A1", 87, 987, "blue"], ["B1", null, null, "white"]]
</code></pre>
<p>Code to do the search:</p>
<pre><code>search_vector = SearchVector('data')
search_query = SearchQuery(query)
search_headline_data = SearchHeadline('data', search_query, highlight_all=True)
results = Row.objects.all()
.annotate(search=search_vector)
.annotate(headline_data=search_headline_data)
</code></pre>
<p>When query="blue", the match is a string, and highlighted correctly:</p>
<pre><code>[["A1", 87, 987, "<b>blue</b>"], ["B1", null, null, "white"]]
</code></pre>
<p>But when query="87" so there's a match with an integer, the match is not highlighted.</p>
<pre><code>[["A1", 87, 987, "blue"], ["B1", null, null, "white"]]
</code></pre>
<p>This makes me want to cast all values in the .data JSONField to a string, but I can't find out how to do this through annotation.</p>
|
<python><django><django-models>
|
2024-06-05 11:41:39
| 0
| 9,044
|
Mathieu Dhondt
|
78,580,609
| 8,964,393
|
How to vertically concatenate hundreds of .png files in python
|
<p>I have about 250 .png files in one directory.</p>
<p>I need to vertically concatenate them from python.</p>
<p>Here is the code I have so far.</p>
<pre><code>from PIL import Image
import numpy as np

list_im = []  # the list of .png file paths -- how do I populate it if I have 250 files in one directory?
imgs = [Image.open(i) for i in list_im]

# pick the image which is the smallest, and resize the others to match it (can be arbitrary image shape here)
min_shape = sorted([(np.sum(i.size), i.size) for i in imgs])[0][1]

# for a vertical stacking use vstack
imgs_comb = np.vstack([np.asarray(i.resize(min_shape)) for i in imgs])
imgs_comb = Image.fromarray(imgs_comb)
imgs_comb.save('concatenatedPNGFiles.png')
</code></pre>
<p>How do I populate the <code>list_im</code> with 250 .png files in python?</p>
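<p>For reference, one way to populate the list might be a glob over the directory (a sketch; the directory name <code>pngs</code> is a placeholder):</p>

```python
import glob
import os

def collect_pngs(directory):
    # Gather every .png path in the directory, sorted for a stable stacking order
    return sorted(glob.glob(os.path.join(directory, "*.png")))

list_im = collect_pngs("pngs")
```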
|
<python><list><concatenation><png>
|
2024-06-05 11:18:36
| 1
| 1,762
|
Giampaolo Levorato
|
78,580,345
| 7,047,037
|
Can dbt cloud execute python scripts?
|
<p>I am pretty sure that natively this is not possible, but I might be wrong.
Do you know if I can execute Python scripts inside dbt? I'm not talking about Python models.
I have a Python script that generates SQL models based on information I give it. I run it in VS Code with dbt Core,
but I wonder if I can also execute it in dbt Cloud.</p>
<p>Thanks in advance.</p>
|
<python><dbt>
|
2024-06-05 10:31:03
| 1
| 676
|
Catarina Ribeiro
|
78,580,331
| 18,910,865
|
Python version is higher in virtual environment than the one specified in pyproject.toml
|
<p>I've currently set my <code>pyproject.toml</code> as follows</p>
<pre class="lang-ini prettyprint-override"><code># ...
[tool.poetry.dependencies]
python = "^3.8.0"
pandas = "^2.0.0"
numpy = "^1.21.0"
requests = "^2.25.1"
# ...
</code></pre>
<p>I do then the following:</p>
<pre class="lang-bash prettyprint-override"><code>(base) PS C:\Users\project> poetry install
Installing dependencies from lock file
- Installing numpy (1.24.4)
- Installing python-dateutil (2.9.0.post0)
- Installing pytz (2024.1)
- Installing tzdata (2024.1)
- Installing urllib3 (2.2.1)
- Installing pandas (2.0.3)
- Installing requests (2.32.3)
</code></pre>
<p>As far as I've understood, the <code>^</code> should install the latest version no higher than the one I provide in the <code>pyproject.toml</code>. So, for example, I was not expecting <code>numpy</code> to be higher than <code>1.21</code> (while it is instead <code>1.24.4</code>).</p>
<p>The same applies for Python, which finally result in being <code>3.11</code>:</p>
<pre class="lang-bash prettyprint-override"><code>(base) PS C:\Users\project> poetry shell
Spawning shell within C:\Users\AppData\Local\pypoetry\Cache\virtualenvs\project-mrBAX_Rz-py3.11
Windows PowerShell
Copyright (C) Microsoft Corporation. All rights reserved.
Install the latest PowerShell for new features and improvements! https://aka.ms/PSWindows
Loading personal and system profiles took 966ms.
(base) (project-py3.11) PS C:\Users\project>
</code></pre>
<p>The <code>poetry.lock</code>, which was not present before <code>poetry install</code>, has, among the others:</p>
<pre class="lang-ini prettyprint-override"><code># ...
[[package]]
name = "numpy"
version = "1.24.4"
description = "Fundamental package for array computing in Python"
optional = false
python-versions = ">=3.8"
# ...
</code></pre>
<p>Why so? And how to configure it such that both the libraries and Python are not higher than the provided version (i.e. <code>python = 3.8.XX</code>, <code>numpy=1.21.XX</code>)</p>
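<p>For clarity on my expectation: from what I can tell now, the caret seems to be resolved with an upper bound at the next <em>major</em> version rather than at the version I wrote, roughly as sketched below (this is my assumption, expressed as plain Python):</p>

```python
def caret_allows(constraint, candidate):
    # Sketch of how I believe ^X.Y.Z is resolved: ">=X.Y.Z, <(X+1).0.0"
    # for X >= 1, i.e. the cap is the next MAJOR version, not the next minor
    base = tuple(int(p) for p in constraint.split("."))
    cand = tuple(int(p) for p in candidate.split("."))
    return base <= cand < (base[0] + 1, 0, 0)

caret_allows("1.21.0", "1.24.4")  # True, which would explain numpy 1.24.4
caret_allows("3.8.0", "3.11.0")   # True, which would explain Python 3.11
```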
|
<python><python-packaging><python-poetry><pyproject.toml>
|
2024-06-05 10:28:14
| 1
| 522
|
Nauel
|
78,580,229
| 6,197,439
|
Synchronising dual x axes in a dual (sub)plot in interactive Matplotlib 3.8.1?
|
<p>Some years ago, I posted the question <a href="https://stackoverflow.com/questions/58415862/synchronising-dual-x-axes-in-a-dual-subplot-in-interactive-matplotlib">Synchronising dual x axes in a dual (sub)plot in interactive Matplotlib?</a>, which ended up with a working solution that behaved like this:</p>
<p><img src="https://i.sstatic.net/iSKmy.gif" alt="Figure_1" /></p>
<p>Thankfully, the library authors changed the API, so I can spend even more hours debugging the same old issues, and rewriting the same old tired code - providing me with a job for life <code>:)</code></p>
<p>First, for the line:</p>
<blockquote>
<pre><code>ax.get_shared_x_axes().join(ax, ax2, ax22)
</code></pre>
</blockquote>
<p>... I got the error:</p>
<blockquote>
<p>AttributeError: 'GrouperView' object has no attribute 'join'. Did you mean: 'joined'?</p>
</blockquote>
<p>... however, I managed to find <a href="https://stackoverflow.com/questions/77418896/attributeerror-grouperview-object-has-no-attribute-join">AttributeError: 'GrouperView' object has no attribute 'join'</a> where the accepted answer suggests:</p>
<blockquote>
<pre><code>ax1.sharey(ax2)
ax2.sharey(ax3)
</code></pre>
</blockquote>
<p>... but when I try the equivalent with <code>.sharex()</code> in my code, where Matplotlib reports version 3.8.3, I get:</p>
<blockquote>
<p>ValueError: x-axis is already shared</p>
</blockquote>
<p>Is there a fix to get this working again? Here is my modified code:</p>
<pre class="lang-python prettyprint-override"><code>#!/usr/bin/env python3
import matplotlib
print("matplotlib.__version__ {}".format(matplotlib.__version__))
import matplotlib.pyplot as plt
import matplotlib.ticker as ticker
import packaging.version

#
# Some toy data
x_seq = [x / 100.0 for x in range(1, 100)]
y_seq = [x**2 for x in x_seq]
y2_seq = [0.3*x**2 for x in x_seq]

#
# Scatter plot
fig, (ax, ax2) = plt.subplots(2, 1, sharex=True, figsize=(9, 6), dpi=100, gridspec_kw={'height_ratios': [2, 1]}) # two rows, one column
# Remove horizontal space between axes
fig.subplots_adjust(hspace=0)

ax22 = ax2.twiny() # instantiate a second axes that shares the same y-axis ; https://stackoverflow.com/q/31803817
#ax.get_shared_x_axes().join(ax, ax22) # for two axes, from https://stackoverflow.com/q/42718823
if packaging.version.Version(matplotlib.__version__) < packaging.version.Version("3.7"):
    ax.get_shared_x_axes().join(ax, ax2, ax22) # in matplotlib 3.8: AttributeError: 'GrouperView' object has no attribute 'join'. Did you mean: 'joined'?
else:
    # from https://stackoverflow.com/q/77418896 :
    ax.sharex(ax2)
    ax2.sharex(ax22) # ValueError: x-axis is already shared

# Move twinned axis ticks and label from top to bottom
ax22.xaxis.set_ticks_position("bottom")
ax22.xaxis.set_label_position("bottom")
# Offset the twin axis below the host
ax22.spines["bottom"].set_position(("axes", -0.1))

ax.plot(x_seq, y_seq)
ax2.plot(x_seq, y2_seq)

factor = 655
# FuncFormatter can be used as a decorator
@ticker.FuncFormatter
def major_formatter(x, pos):
    #return "[%.2f]" % x
    return int(factor*x)
ax22.xaxis.set_major_formatter(major_formatter)

#
# Show
plt.show()
</code></pre>
|
<python><python-3.x><matplotlib>
|
2024-06-05 10:07:39
| 1
| 5,938
|
sdbbs
|
78,580,140
| 12,881,307
|
Integrating Custom Python Environment with Power BI Service
|
<p>I've created a chart using python and pipenv. I want to integrate this chart into my company's main <code>PowerBI</code> environment. I've read that I can use python scripts within <code>PowerBI Desktop</code>, as long as I provide a path to a local python installation.</p>
<p>From what I understand, I can control my local installation, therefore I'm free to use my preferred environment:</p>
<pre class="lang-ini prettyprint-override"><code>[[source]]
url = "https://pypi.org/simple"
verify_ssl = true
name = "pypi"
[packages]
scikit-learn = "*"
pandas = "*"
ipykernel = "*"
matplotlib = "*"
seaborn = "*"
plotly = "*"
nbformat = "*"
ipywidgets = "*"
[dev-packages]
[requires]
python_version = "3.10"
python_full_version = "3.10.11"
</code></pre>
<p>However, I don't know how to set up a pipenv environment in the company's environment. I've gone through <a href="https://stackoverflow.com/a/53563129/12881307">this answer</a> to a similar question, but it does not mention how to set up custom environments or whether it is possible or not.</p>
<p>How, if possible, can I customize the python environment I will use within <code>PowerBI</code> (NOT <code>PowerBI Desktop</code>)?</p>
|
<python><powerbi>
|
2024-06-05 09:51:14
| 1
| 316
|
Pollastre
|
78,579,944
| 12,304,000
|
handling special characters while loading data from s3 into mysql
|
<p>In my S3, I have a csv file that looks like this:</p>
<pre><code>"id","created_at","subject","description","priority","status","recipient",
</code></pre>
<p>Now, one of the <strong>description</strong> column's <strong>rows</strong> looks like this:</p>
<pre><code>"for more, please let us know and we add them. but they did not explain why the following request is not working. We get an Unauthorized error. Can we please receive what is needed for this request including the test data?
{
"errors": [
"Unauthorized"
],
"result": null,
"status": 401
}
curl -X POST "https://xx/v1/prefill/eWR1iNBnG8Wj" -H "accept: application/json" -H "Authorization: Bearer 2e05581e8b" -H "Content-Type: application/json" -d "{\"data\":{\"externalId\":\"string\",\"subId\":\"string\",\"common\":{\"amount\":20000,\"term\":84,\"purpose\":\"REFINANCING\",\"payoutAccount\":{\"iban\":\"string\"},\"payoutAccountHolder\":\"PRIMARY\",..............."
</code></pre>
<p>This has normal text but also some special characters. Notice this part in the cell's value :</p>
<pre><code>\"string\",\"subId\":
</code></pre>
<p>Now I use such a command to load it into MYSQL:</p>
<pre><code>LOAD DATA FROM S3 's3://{s3_bucket_name}/{s3_key}'
REPLACE
INTO TABLE {schema_name}.stg_tickets
CHARACTER SET utf8mb4
FIELDS
TERMINATED BY ','
ENCLOSED BY '"'
IGNORE 1 LINES
(id, url, @created_at, @updated_at, @external_id, @type, subject,
description, priority, status, recipient, requester_id,
submitter_id, assignee_id, @organization_id, @group_id,
collaborator_ids, follower_ids, @forum_topic_id, @problem_id,
@has_incidents)
</code></pre>
<p>This works for all other cases because there are not a lot of commas/special characters in other <strong>description</strong> rows. But this particular record messes up the columns because it cannot distinguish the description properly. For example, it adds this to the priority column</p>
<pre><code>"""subId"":""string""
</code></pre>
<p>Which is not true because this is still a part of the description column.</p>
<p>I want to handle it using the MySQL command itself. Pre-processing the S3 files wouldn't be an option at the moment.</p>
<p>Note: I am running the query using Python.</p>
<p>There are other commas in the description column as well but those don't make an issue.</p>
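<p>For context, the file seems to break the usual CSV convention where embedded quotes are doubled. With properly doubled quotes (RFC 4180), a field full of commas and quotes round-trips cleanly, as this Python sketch shows (the field value is made up):</p>

```python
import csv
import io

# A field containing commas and embedded quotes survives a round-trip
# when the inner quotes are doubled, which is what ENCLOSED BY '"' expects
field = 'request failed: {"errors": ["Unauthorized"], "status": 401}'
buf = io.StringIO()
csv.writer(buf, quoting=csv.QUOTE_ALL).writerow(["id-1", field, "high"])

parsed = next(csv.reader(io.StringIO(buf.getvalue())))
assert parsed == ["id-1", field, "high"]
```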
|
<python><sql><mysql><amazon-s3><special-characters>
|
2024-06-05 09:17:00
| 1
| 3,522
|
x89
|
78,579,640
| 9,601,720
|
Folium filter markers dynamically by property
|
<p>I have the following code to draw circles at different locations, using Folium:</p>
<pre><code>import folium

data = [
    {"lat": 37.7749, "lon": -122.4194, "value": 10, "name": "Location A"},
    {"lat": 34.0522, "lon": -118.2437, "value": 20, "name": "Location B"},
    {"lat": 40.7128, "lon": -74.0060, "value": 15, "name": "Location C"},
    {"lat": 41.8781, "lon": -87.6298, "value": 5, "name": "Location D"}
]

# Initialize the map centered at an average location
m = folium.Map(location=[39.8283, -98.5795], zoom_start=4)

# Add circles to the map with custom attribute for value
for item in data:
    circle = folium.Circle(
        location=[item["lat"], item["lon"]],
        radius=100000,  # Radius in meters
        color='blue',
        fill=True,
        fill_color='blue',
        fill_opacity=0.6,
        popup=f"{item['name']}: {item['value']}",
        data_value=item["value"]
    )
    circle.add_to(m)

m.save('map.html')
</code></pre>
<p>I'd like to do some kind of filter by input value, meaning that, for example, entering in a text box number 20 would make markers with lower value than 20 to be hidden. I've seen in Folium things like <code>TreeLayerControl</code>, which is very similar to my goal, but I can't group my markers, since I expect to plot many circles and it would be useless from a UX point of view</p>
|
<python><folium><folium-plugins>
|
2024-06-05 08:20:42
| 0
| 660
|
thmasker
|
78,579,582
| 6,195,489
|
Understanding task stream and speeding up Distributed Dask
|
<p>I have implemented some data analysis in Dask using dask-distributed, but the performance is very far from the same analysis implemented in numpy/pandas and I am finding it difficult to understand the task stream and memory consumption.</p>
<p>The class that sets up the cluster looks like:</p>
<pre><code>class some_class():
    def __init__(self, engine_kwargs: dict = None):
        engine_kwargs = engine_kwargs or {}
        self.distributed = engine_kwargs.get("distributed", False)
        self.dask_client = None
        self.n_workers = engine_kwargs.get(
            "n_workers", int(os.getenv("SLURM_CPUS_PER_TASK", os.cpu_count()))
        )

    @contextmanager
    def dask_context(self):
        """Dask context manager to set up and close down client"""
        if self.distributed:
            if self.distributed_mode == "processes":
                processes = True
            dask_cluster = LocalCluster(n_workers=self.n_workers, processes=processes)
            dask_client = Client(dask_cluster)
        try:
            yield dask_client
        finally:
            if dask_client is not None:
                dask_client.close()
            dask_cluster.close()
</code></pre>
<p>And I have something like the following method, which does the analysis:</p>
<pre><code>    def correct(self,
                segy_container: "SegyFileContainer",
                v_sb: int,
                interp_type: int = 1,
                brute_downsample: int = None):
        """
        :param segy_container: Container for the seg-y path and data
        :type segy_container: SegyFileContainer
        :param interp_type: Interpolation type either 1=linear or 3=cubic, defaults to 1
        :type interp_type: int, optional
        :param brute_downsample: If you wish to down sample the data to get a brute stack, defaults to None
        :type brute_downsample: int, optional
        :param v_sb: NMO velocity, defaults to 1500
        :type v_sb: int, optional
        :return: NMO corrected gather
        :rtype: pd.DataFrame
        """
        min_cmp = segy_container.trace_headers["CDP"].values.min()
        max_cmp = segy_container.trace_headers["CDP"].values.max()

        groups = segy_container.trace_headers["CDP"]
        cdp_series = segy_container.trace_headers["CDP"]
        cdp_dataarray = xr.DataArray(cdp_series, dims=["trace"])
        dg_cmp = segy_container.segy_file.data.groupby(cdp_dataarray)
        dt_s = segy_container.segy_file.attrs["sample_rate"]
        hg_cmp = segy_container.trace_headers.groupby(
            segy_container.trace_headers["CDP"]
        )
        segy_container.trace_headers["CDP"].iloc[hg_cmp.indices.get(100)]

        tasks = [
            delayed(self._process_group)(
                segy_container, cmp_index, dg_cmp, hg_cmp, v_sb, interp_type, dt_s
            )
            for cmp_index in range(min_cmp, max_cmp + 1)
        ]

        with self.dask_context() as dc:
            results = compute(*tasks, scheduler=dc)

    def _process_group(
        self,
        segy_container,
        cmp_index,
        dg_cmp,
        hg_cmp,
        v_sb: int,
        interp_type: int,
        dt_s: int,
    ):
        cmp = (
            segy_container.segy_file.data[dg_cmp.groups[cmp_index]]
            .transpose()
            .compute()
        )
        offsets = hg_cmp.get_group(cmp_index)["offset"]
        nmo = self._nmo_correction(
            cmp=cmp,
            dt=dt_s / 1000,
            offsets=offsets,
            velocity=v_sb,
            interp_type=interp_type,
        )
        return nmo

    def _nmo_correction(
        self, cmp, dt: float, offsets, velocity: float, interp_type: int
    ):
        nmo_trace = da.zeros_like(cmp)
        nsamples = cmp.data.shape[0]
        times = da.arange(0, nsamples * dt, dt)
        for ind, offset in enumerate(offsets):
            reflected_times = self._reflection_time(times, offset, velocity)
            amplitude = self._sample_trace(
                reflected_times=reflected_times,
                trace=cmp.data[:, ind],
                dt=dt,
                interp_type=interp_type,
            )
            if amplitude is not None:
                nmo_trace[:, ind] = amplitude
        return nmo_trace

    def _reflection_time(self, t0, x, vnmo):
        t = da.sqrt(t0**2 + x**2 / vnmo**2)
        return t.compute()

    def _sample_trace(self, reflected_times, trace, dt, interp_type):
        times = np.arange(trace.size) * dt
        times = xr.DataArray(times)
        reflected_times = xr.DataArray(reflected_times, dims="reflected_times")
        out_of_bounds = (reflected_times < times[0]) | (reflected_times > times[-1])
        if interp_type == 1:
            amplitude = np.interp(reflected_times, times, trace)
        elif interp_type == 3:
            polyfit = CubicSpline(times, trace)
            amplitude = polyfit(reflected_times)
        else:
            raise ValueError(
                f"Error in interpolating sample trace. interp_type should be either 1 or 3: {interp_type}"
            )
        amplitude[out_of_bounds.compute()] = 0.0
        return amplitude
</code></pre>
<p>I have the same thing implemented using numpy and pandas, and the runtime is 3 secs. For Dask-distributed in the way shown it is taking around 15 mins. If I just use <code>scheduler=processes</code> and not the cluster it takes about 4 mins.</p>
<p>I understand there will be overhead in setting up and using the cluster, but am trying to understand how to improve the run time.</p>
<p>Looking at the diagnostics in the Dask dash give some quite confusing graphs:</p>
<p><a href="https://i.sstatic.net/D6SGVy4E.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/D6SGVy4E.png" alt="Dask Overview" /></a>
<a href="https://i.sstatic.net/UmhYCSUE.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/UmhYCSUE.png" alt="Dask Tasks" /></a>
<a href="https://i.sstatic.net/Davx3Na4.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Davx3Na4.png" alt="Dask Tasks 2" /></a></p>
<p>I understand why there may be more streams than the 10 workers I have created in this case, but I am finding it hard to understand what exactly is going on here. I also don't understand why the memory usage is so high, as the file I am looking at is 715 MB.</p>
<p>Any advice or insight on how to</p>
<ol>
<li>Understand the task stream</li>
<li>Speed up the Dask-distributed code</li>
<li>Understand why the memory usage is so high</li>
</ol>
<p>Would be very much appreciated!</p>
|
<python><dask><dask-distributed>
|
2024-06-05 08:09:09
| 1
| 849
|
abinitio
|
78,579,156
| 14,818,993
|
“OperationalError: Could not translate host name “db” to address" in Dockerized Django App with Datadog
|
<p>I have a very complex django project that uses postgresql as database, where I have setup datadog to send traces and events. It works fine locally and I am receiving traces and events in datadog. However, if I try to run my app using docker, it won't send traces and events to datadog. Following is my <code>docker-compose.yml</code> file where datadog integration is borrowed from <a href="https://github.com/DataDog/dd-trace-py/blob/1.x/docker-compose.yml" rel="nofollow noreferrer">here</a>.</p>
<h5>docker-compose.yml</h5>
<pre><code>services:
  db:
    image: ankane/pgvector
    env_file:
      - ./docker/env.db
    volumes:
      - pgvector_db_data:/var/lib/postgresql/data

  app:
    build:
      context: .
      dockerfile: ./docker/prod.Dockerfile
    env_file:
      - ./docker/env.db
      - ./.env
    container_name: scooprank
    volumes:
      - static_volume:/opt/code/static
      - shm:/dev/shm
      - save_drivers:/opt/code/save_drivers
      - save_models:/opt/code/save_models
    command: ./docker/scripts/run-gunicorn
    depends_on:
      - db
    network_mode: host

  ddagent:
    image: datadog/docker-dd-agent
    environment:
      - DD_BIND_HOST=0.0.0.0
      - DD_API_KEY=${DATADOG_API_KEY}
      - DD_APM_RECEIVER_SOCKET=/tmp/ddagent/trace.sock
      - DD_DOGSTATSD_NON_LOCAL_TRAFFIC=true
    ports:
      - "8127:8126"

  news_scraper:
    image: scooprank-app
    env_file:
      - ./.env
    container_name: news_scraper
    command: python manage.py news_scraper
    depends_on:
      - app
    restart: always

volumes:
  static_volume:
  shm:
  save_drivers:
  save_models:
  pgvector_db_data:
</code></pre>
<p>I added <code>network_mode: host</code> as suggested by <a href="https://stackoverflow.com/a/52392480/14818993">this answer</a>. I then tried this solution on a simple dummy project with sqlite3 as the database, and with that docker-compose file I am receiving traces and events. But if I try to use it in my main project, I can do <code>docker compose build</code> and <code>docker compose up</code>, but when I run migrations, I get an error.</p>
<pre><code>(.venv) prixite@prixite-Latitude-5410:~/Desktop/scooprank$ docker compose -f docker-compose.yml exec app ./manage.py migrate
Traceback (most recent call last):
File "/usr/local/lib/python3.11/site-packages/django/db/backends/base/base.py", line 275, in ensure_connection
self.connect()
File "/usr/local/lib/python3.11/site-packages/sentry_sdk/utils.py", line 1711, in runner
return sentry_patched_function(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/sentry_sdk/integrations/django/__init__.py", line 659, in connect
return real_connect(self)
^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/django/utils/asyncio.py", line 26, in inner
return func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/django/db/backends/base/base.py", line 256, in connect
self.connection = self.get_new_connection(conn_params)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/django/utils/asyncio.py", line 26, in inner
return func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/django/db/backends/postgresql/base.py", line 277, in get_new_connection
connection = self.Database.connect(**conn_params)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/psycopg2/__init__.py", line 122, in connect
conn = _connect(dsn, connection_factory=connection_factory, **kwasync)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
psycopg2.OperationalError: could not translate host name "db" to address: Temporary failure in name resolution
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/opt/code/./manage.py", line 22, in <module>
main()
File "/opt/code/./manage.py", line 18, in main
execute_from_command_line(sys.argv)
File "/usr/local/lib/python3.11/site-packages/django/core/management/__init__.py", line 442, in execute_from_command_line
utility.execute()
File "/usr/local/lib/python3.11/site-packages/django/core/management/__init__.py", line 436, in execute
self.fetch_command(subcommand).run_from_argv(self.argv)
File "/usr/local/lib/python3.11/site-packages/django/core/management/base.py", line 413, in run_from_argv
self.execute(*args, **cmd_options)
File "/usr/local/lib/python3.11/site-packages/django/core/management/base.py", line 459, in execute
output = self.handle(*args, **options)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/django/core/management/base.py", line 107, in wrapper
res = handle_func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/django/core/management/commands/migrate.py", line 100, in handle
self.check(databases=[database])
File "/usr/local/lib/python3.11/site-packages/django/core/management/base.py", line 486, in check
all_issues = checks.run_checks(
^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/django/core/checks/registry.py", line 88, in run_checks
new_errors = check(app_configs=app_configs, databases=databases)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/django/core/checks/model_checks.py", line 36, in check_all_models
errors.extend(model.check(**kwargs))
^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/django/db/models/base.py", line 1617, in check
*cls._check_constraints(databases),
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/django/db/models/base.py", line 2467, in _check_constraints
connection.features.supports_nulls_distinct_unique_constraints
File "/usr/local/lib/python3.11/site-packages/django/utils/functional.py", line 47, in __get__
res = instance.__dict__[self.name] = self.func(instance)
^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/django/db/backends/postgresql/features.py", line 141, in is_postgresql_15
return self.connection.pg_version >= 150000
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/django/utils/functional.py", line 47, in __get__
res = instance.__dict__[self.name] = self.func(instance)
^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/django/db/backends/postgresql/base.py", line 438, in pg_version
with self.temporary_connection():
File "/usr/local/lib/python3.11/contextlib.py", line 137, in __enter__
return next(self.gen)
^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/django/db/backends/base/base.py", line 691, in temporary_connection
with self.cursor() as cursor:
^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/django/utils/asyncio.py", line 26, in inner
return func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/django/db/backends/base/base.py", line 316, in cursor
return self._cursor()
^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/django/db/backends/base/base.py", line 292, in _cursor
self.ensure_connection()
File "/usr/local/lib/python3.11/site-packages/django/utils/asyncio.py", line 26, in inner
return func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/django/db/backends/base/base.py", line 274, in ensure_connection
with self.wrap_database_errors:
File "/usr/local/lib/python3.11/site-packages/django/db/utils.py", line 91, in __exit__
raise dj_exc_value.with_traceback(traceback) from exc_value
File "/usr/local/lib/python3.11/site-packages/django/db/backends/base/base.py", line 275, in ensure_connection
self.connect()
File "/usr/local/lib/python3.11/site-packages/sentry_sdk/utils.py", line 1711, in runner
return sentry_patched_function(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/sentry_sdk/integrations/django/__init__.py", line 659, in connect
return real_connect(self)
^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/django/utils/asyncio.py", line 26, in inner
return func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/django/db/backends/base/base.py", line 256, in connect
self.connection = self.get_new_connection(conn_params)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/django/utils/asyncio.py", line 26, in inner
return func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/django/db/backends/postgresql/base.py", line 277, in get_new_connection
connection = self.Database.connect(**conn_params)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/psycopg2/__init__.py", line 122, in connect
conn = _connect(dsn, connection_factory=connection_factory, **kwasync)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
django.db.utils.OperationalError: could not translate host name "db" to address: Temporary failure in name resolution
Sentry is attempting to send 2 pending events
Waiting up to 2 seconds
Press Ctrl-C to quit
</code></pre>
<p>Following are my other files in case these are required:</p>
<h5>env.db</h5>
<pre><code>POSTGRES_PASSWORD=secret
POSTGRES_DB=scooprank
POSTGRES_USER=postgres
</code></pre>
<h5>prod.Dockerfile</h5>
<pre><code>FROM python:3.11.5
ENV PYTHONDONTWRITEBYTECODE 1
ENV PYTHONUNBUFFERED 1
ENV LANG C.UTF-8
ENV LC_ALL C.UTF-8
# Add buster Postgres repo. This is necessary to install postgresql-client-12
RUN wget --quiet -O - https://www.postgresql.org/media/keys/ACCC4CF8.asc | apt-key add -
RUN echo "deb http://apt.postgresql.org/pub/repos/apt/ buster-pgdg main" | tee /etc/apt/sources.list.d/pgdg.list
RUN apt-get update && apt-get install -y \
postgresql-client-12 \
build-essential \
libcairo2-dev \
libpango1.0-dev \
libjpeg-dev \
libgif-dev \
librsvg2-dev \
g++ \
xvfb
RUN curl -fsSL https://deb.nodesource.com/setup_20.x | bash -
RUN apt-get install -y nodejs
RUN pip install pip==24.0
WORKDIR /opt/code
COPY package.json package.json
COPY package-lock.json package-lock.json
RUN npm install
COPY requirements.txt requirements.txt
RUN pip install -r requirements.txt
RUN python -m nltk.downloader punkt averaged_perceptron_tagger wordnet omw-1.4
COPY . ./
</code></pre>
<h5>.env file</h5>
<pre><code>DATABASE_URL=postgres://postgres:secret@db/scooprank
CSRF_TRUSTED_ORIGINS=http://127.0.0.1:8006
DATADOG_API_KEY=<my_api_key>
DATADOG_APP_KEY=<my_app_key>
STATSD_HOST=127.0.0.1
STATSD_PORT=8125
</code></pre>
<h5>settings.py</h5>
<pre><code># Database
DATABASES = {"default": env.dj_db_url("DATABASE_URL")}

# Datadog
if not DEBUG and env.str("DATADOG_API_KEY"):
    DATADOG_OPTIONS = {
        "api_key": env.str("DATADOG_API_KEY"),
        "app_key": env.str("DATADOG_APP_KEY"),
        "statsd_host": env.str("STATSD_HOST"),
        "statsd_port": env.int("STATSD_PORT"),
    }
    initialize(**DATADOG_OPTIONS)
</code></pre>
<p>I tried changing my database URL to use <code>localhost</code> instead of <code>db</code>, but no luck. I also tried adding <code>networks</code> to my services, to no avail. If I remove the <code>ddagent</code> service and <code>network_mode: host</code>, my app works fine, but without sending traces and events to Datadog.</p>
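<p>For reference, my current understanding (which may be wrong) is that <code>network_mode: host</code> takes the <code>app</code> container off the Compose network, so Docker's embedded DNS no longer resolves service names like <code>db</code>. A layout that keeps everything on the default network and points the tracer at the agent by service name would look roughly like this (the <code>DD_AGENT_HOST</code> wiring is an assumption on my part):</p>

```yaml
services:
  app:
    # no network_mode: host, so the container stays on the default
    # Compose network and the hostname "db" resolves via Docker DNS
    depends_on:
      - db
    environment:
      - DD_AGENT_HOST=ddagent   # reach the agent by its service name
  db:
    image: ankane/pgvector
  ddagent:
    image: datadog/docker-dd-agent
    ports:
      - "8127:8126"
```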
|
<python><django><postgresql><docker><datadog>
|
2024-06-05 06:38:45
| 1
| 308
|
Huzaifa Arshad
|
78,579,132
| 2,398,430
|
Pytest parameter id access in test
|
<p>I am using pytest and I have the id set for each parameter of a test.
I use:</p>
<pre><code>pytest.param([], id="meaningful_string")
</code></pre>
<p>The test can access the param list. Is it also possible for the test to access the param id?</p>
|
<python><pytest>
|
2024-06-05 06:33:50
| 1
| 366
|
stackQA
|
78,579,128
| 5,567,893
|
How can I randomly replace the values remaining at least one value using python?
|
<p>I am trying to replace some of the values that occur many times in the tensor with the values that occur only once, while keeping at least one occurrence of each replaced value.</p>
<p>For example, given <code>edge_index</code>, I want to change it as below.</p>
<pre class="lang-py prettyprint-override"><code>edge_index = torch.as_tensor([[0, 0, 1, 2, 3, 4, 6, 7, 7, 8],
[1, 2, 2, 4, 4, 5, 0, 1, 3, 7]])
result = torch.as_tensor([[5, 0, 1, 2, 3, 8, 6, 7, 7, 8],
[1, 2, 2, 4, 4, 5, 0, 1, 3, 6]])
</code></pre>
<p>In detail, some values appear more than 2 times in <code>edge_index</code> (i.e., 0, 1, 2, 4, 7).
To avoid values that occur only once, I need to replace some of their occurrences with the singleton values (i.e., 5, 6, 8).</p>
<p>I tried to do it as below, but my code couldn't ensure the condition; all values need to appear at least 2 times.</p>
<pre class="lang-py prettyprint-override"><code># Counts values
n_id, n_counts = torch.unique(edge_index, return_counts=True)
unique_nodes = n_id[n_counts==1] #tensor([5, 6, 8])
major_nodes = n_id[n_counts>2] #tensor([0, 1, 2, 4, 7])
# Find the index where the major_nodes located
major_s_idx = (edge_index[..., None] == major_nodes).any(-1)[0].nonzero()[:, 0] #tensor([0, 1, 2, 3, 5, 7, 8])
major_t_idx = (edge_index[..., None] == major_nodes).any(-1)[1].nonzero()[:, 0] #tensor([0, 1, 2, 3, 4, 6, 7, 9])
result = edge_index.clone()
result[0][major_s_idx[torch.randperm(len(major_s_idx))[:len(unique_nodes)]]] = unique_nodes
result[1][major_t_idx[torch.randperm(len(major_t_idx))[:len(unique_nodes)]]] = unique_nodes
result
# tensor([[0, 0, 6, 2, 3, 4, 6, 5, 8, 8],
# [5, 2, 2, 4, 6, 5, 0, 8, 3, 7]]) # 1 is disappeared and 7 remained only one time
</code></pre>
<p>Note that the correct result does not need to be the same as the <code>result</code> in the first code block; it is just for reference.</p>
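<p>For reference, the constraint I'm after can be stated on a flat Python list, ignoring the tensor shape (a sketch with a made-up <code>rebalance</code> helper, not working code for <code>edge_index</code> itself):</p>

```python
import random
from collections import Counter

def rebalance(values, rng=None):
    # Replace occurrences of over-represented values (count > 2) with
    # singleton values (count == 1), never dropping any value below two
    # occurrences; stops early if no donor with a spare occurrence remains
    rng = rng or random.Random(0)
    values = list(values)
    counts = Counter(values)
    singles = [v for v, c in counts.items() if c == 1]
    for target in singles:
        donors = [i for i, v in enumerate(values) if counts[v] > 2]
        if not donors:
            break
        i = rng.choice(donors)
        counts[values[i]] -= 1
        counts[target] += 1
        values[i] = target
    return values
```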
|
<python><pytorch>
|
2024-06-05 06:33:23
| 1
| 466
|
Ssong
|
78,578,790
| 12,231,242
|
Tkinter back and forward buttons re: visited history
|
<p>My app displays details for one person at a time based on a <code>current_person_id</code> and other information related to that person which is stored in a database. I wanted to change the details displayed based on the order in which different persons' details had been seen, so it would work like a browser's back/forward button.</p>
<p>I found instructions <a href="https://stackoverflow.com/a/6869625/12231242">here</a>, <a href="https://www.quora.com/What-data-structure-is-used-to-implement-the-back-and-forward-button-of-a-browser" rel="nofollow noreferrer">here</a>, and <a href="https://www.geeksforgeeks.org/implementing-backward-and-forward-buttons-of-browser/" rel="nofollow noreferrer">here</a> which suggested using two stacks, one for the back button and one for the forward button. I didn't understand these instructions so I compared them with each other and compiled a more succinct and better organized set of instructions.</p>
<p>It took several hours but I finally managed to get this to work in Tkinter. I will answer my question below.</p>
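<p>For reference, the compiled two-stack scheme, stripped of the Tkinter wiring, boils down to something like this (the names are my own):</p>

```python
class History:
    # Browser-style back/forward navigation built from two stacks
    def __init__(self, start):
        self.current = start
        self.back_stack = []
        self.forward_stack = []

    def visit(self, item):
        # Viewing a new person pushes the old one onto the back stack
        # and invalidates any forward history
        self.back_stack.append(self.current)
        self.forward_stack.clear()
        self.current = item

    def back(self):
        if self.back_stack:
            self.forward_stack.append(self.current)
            self.current = self.back_stack.pop()
        return self.current

    def forward(self):
        if self.forward_stack:
            self.back_stack.append(self.current)
            self.current = self.forward_stack.pop()
        return self.current
```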
|
<python><tkinter><back-button>
|
2024-06-05 04:47:13
| 1
| 574
|
Luther
|
78,578,679
| 3,452,340
|
Using SSE in Flask
|
<p>I'm using Flask to develop a web app. I have a <code>/stream</code> route in my python code, an <code>index.html</code> file, and a <code>stream_page.html</code> that triggers the stream route.</p>
<p>I want to trigger a stream from <code>stream_page.html</code>, calling the <code>/stream</code> route with a POST call. I want to have the streamed text show up in <code>index.html</code>.</p>
<p>The <code>/stream</code> route returns a <code>Response</code> object, and I can see the streaming happen in a new blank page. But I don't want to see it in a new blank page. I don't want to see it in <code>stream_page.html</code> either. I want to see it in <code>index.html</code>. How can I get the <code>Response</code> object to stream in <code>index.html</code>?</p>
<p>Here's roughly what I have so far.</p>
<p>app.py:</p>
<pre><code>@app.route('/stream', methods=['GET', 'POST'])
def stream():
data = request.form['data']
def gen(data):
yield ...
    return Response(gen(data), content_type='text/event-stream')
</code></pre>
<p>index.html:</p>
<pre><code> <script>
if (!!window.EventSource) {
var source = new EventSource('/stream');
source.onmessage = function(event) {
        var newElement = document.getElementById('stream_div');
newElement.innerHTML = event.data;
document.getElementById("events").appendChild(newElement);
};
source.onerror = function(event) {
console.error("EventSource failed:", event);
};
} else {
console.error("Your browser does not support SSE.");
}
</script>
</code></pre>
<p>stream_page.html:</p>
<pre><code><form action="{{ url_for('app.stream') }}" method="post">
...
</form>
</code></pre>
|
<python><html><flask>
|
2024-06-05 03:52:18
| 1
| 907
|
miara
|
78,578,563
| 2,813,606
|
How to parse out text from PDF into pandas dataframe
|
<p>I am working on scraping data from several infographics on ridership data for Amtrak. I want to collect the yearly ridership #s and addresses of each station in the US.</p>
<p>Here is my code for one station:</p>
<pre><code>import requests
from bs4 import BeautifulSoup
import pandas as pd
import fitz
url_abe = 'https://www.railpassengers.org/site/assets/files/1679/abe.pdf'
page = requests.get(url_abe)
# Save the PDF file locally
pdf_path = 'abe.pdf'
with open(pdf_path, 'wb') as file:
file.write(page.content)
# Step 2: Extract text from the PDF file
def extract_text_from_pdf(pdf_path):
# Open the PDF file
document = fitz.open(pdf_path)
# Iterate through each page and extract text
text = ''
for page_num in range(len(document)):
page = document.load_page(page_num)
text += page.get_text()
return text
# Get the extracted text
pdf_text = extract_text_from_pdf(pdf_path)
</code></pre>
<p>How can I then parse this out to get a pandas dataframe that looks like the following:</p>
<pre><code>2016 2017 2018 2019 2020 2021 2022 Address
37161 37045 37867 39108 19743 14180 34040 18 E Bel Air Ave Aberdeen, MD 21001-3701
</code></pre>
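<p>One possible approach, sketched against a hypothetical text layout — the real railpassengers.org PDFs may extract differently, so the sample text and regular expressions below are assumptions to adapt:</p>

```python
import re

import pandas as pd

# Hypothetical extracted text; the actual layout produced by
# page.get_text() may differ, so these patterns are illustrative only.
pdf_text = """
Aberdeen, MD (ABE)
18 E Bel Air Ave Aberdeen, MD 21001-3701
FY2016 37,161
FY2017 37,045
FY2018 37,867
FY2019 39,108
FY2020 19,743
FY2021 14,180
FY2022 34,040
"""

# Pull out (year, ridership) pairs and the first address-looking line.
years = re.findall(r"FY(\d{4})\s+([\d,]+)", pdf_text)
address = re.search(r"^\d+ .+$", pdf_text, flags=re.M).group(0)

row = {yr: int(count.replace(",", "")) for yr, count in years}
row["Address"] = address
df = pd.DataFrame([row])
print(df)
```

<p>Looping this over every station's extracted text and concatenating the one-row frames would build the full table.</p>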
|
<python><pandas><parsing><beautifulsoup><python-requests>
|
2024-06-05 02:56:36
| 2
| 921
|
user2813606
|
78,578,556
| 4,592,891
|
Freegpt-webui python displays "No matching distribution found for mailgw_temporary_email" when installing
|
<p>Complete error message:</p>
<pre><code>ERROR: Could not find a version that satisfies the requirement mailgw_temporary_email (from versions: none)
ERROR: No matching distribution found for mailgw_temporary_email
</code></pre>
<p>I tried searching for this package in pip, but it didn't exist at all.</p>
<p>Now the installation is stuck, is there any solution?</p>
|
<python><pip>
|
2024-06-05 02:51:30
| 1
| 980
|
aboutjquery
|
78,578,185
| 1,217,810
|
How do I carry over an import carried out in a 2nd file back to the calling file in python?
|
<p>The code below is a utility I wrote that installs and restarts a script automatically without having the user need to parse through errors to install. Ideal usage would be below:</p>
<pre><code>#file1.py
from file2 import import_and_install
succ = import_and_install("numpy") #import numpy
succ |= import_and_install("cv2", "opencv-python") #import cv2
succ |= import_and_install("openpyxl", alias="xl") #import openpyxl as xl
</code></pre>
<p>However, I soon learned that the imports executed within file2 do not carry over back to file1 after it has been called, resulting in me needing to use the code below instead:</p>
<pre><code>#alternate_file1.py
try:
print("IMPORT LIBS")
print("")
import numpy
import cv2
import openpyxl as xl
except:
succ = import_and_install("numpy") #import numpy
succ |= import_and_install("cv2", "opencv-python") #import cv2
    succ |= import_and_install("openpyxl", alias="xl") #import openpyxl as xl
if( not succ == 4096):
exit(1)
</code></pre>
<p>How do I get the dynamic imports to carry back over to the original file1? If I copy the contents of file2 over to file1 everything works as intended, but I have to copy code all the time which is annoying.</p>
<pre><code>#file2.py
import os
import sys
import importlib
import subprocess
import traceback
def try_import(import_name, alt_install_name=None, from_pkg=None, alias=None):
succ = 4096
#allow different import strategies, and force install attempt if import fails
try:
if from_pkg is None:
to_import = import_name
elif not (from_pkg is None):
to_import = from_pkg
else:
raise AttributeError("Not a valid package")
import_entry = importlib.import_module(to_import)
if not (from_pkg is None):
import_entry = import_entry.__dict__[import_name]
print(import_entry)
print(to_import)
if(not (alias is None)):
globals()[alias] = import_entry
else:
globals()[import_name] = import_entry
except AttributeError:
print("Not a valid package")
succ |= 1
except ImportError:
print("can't find the " + to_import + " module.")
succ |= 2
if not (alt_install_name is None):
to_install = alt_install_name
else:
to_install = to_import
return succ, to_install
def try_install(to_install):
#if an importerror occured ask user to install missing package
succ = 4096
should_install = input("Would you like to install? (y/n): ")
if should_install.lower() in ('y', 'yes'):
try:
ret = subprocess.check_call([sys.executable, "-m", "pip", 'install', to_install])
except ChildProcessError:
print("Failed to install package with return code " + str(ret) + ". Try again manually")
succ |= 4
else:
print("Install attempted.")
restart_script()
else:
print("You can't run the script until you install the package")
succ |= 8
return succ
def restart_script():
print("Restarting script. If restart fails try again manually.")
print('\a')
if not (os.name == 'nt'):
os.execl(sys.executable, 'python', traceback.extract_stack()[0].filename, *sys.argv[1:])
else:
p = subprocess.call([sys.executable, os.path.realpath(traceback.extract_stack()[0].filename), *sys.argv], shell=True, start_new_session=True)
sys.exit(0)
def import_and_install(import_name, alt_name=None, from_pkg=None, alias=None):
succ, to_install = try_import(import_name, alt_name, from_pkg, alias)
if( succ == (4096 | 2) ):
succ |= try_install(to_install)
return succ
</code></pre>
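<p>A minimal sketch of one way around this: assignments to file2's <code>globals()</code> are invisible to file1, so the helper can instead write into the caller's namespace, or simply return the module. The helper below is a simplified stand-in for the original and omits the install-on-failure logic:</p>

```python
import importlib


def import_and_install(name, alias=None, target_globals=None):
    """Import `name` and, optionally, bind it in the caller's namespace.

    Sketch only: the pip-install fallback of the original helper is
    omitted so the mechanism of carrying the import back is clear.
    """
    module = importlib.import_module(name)
    if target_globals is not None:
        # Bind into the dict the caller passed (e.g. file1's globals()).
        target_globals[alias or name] = module
    return module


# In file1 one would either capture the return value ...
json = import_and_install("json")
print(json.dumps({"ok": True}))  # prints {"ok": true}

# ... or hand over file1's own globals() explicitly:
import_and_install("math", alias="m", target_globals=globals())
print(m.sqrt(9.0))  # prints 3.0
```

<p>Either style keeps the dynamic import visible in the calling module without copying file2's contents around.</p>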
|
<python><python-import><python-importlib>
|
2024-06-04 23:37:18
| 1
| 567
|
Painguy
|
78,578,124
| 8,849,071
|
How are connections handled by SQLAlchemy internally
|
<h2>Context</h2>
<p>Hello, I mostly use Django for my main backend monolith. In Django, you do not need to handle db connections on your own, this is abstracted (because Django uses the Active Record pattern instead of the Data mapper pattern used by Sqlalchemy).</p>
<p>I have been doing some reading and I feel like I know the basics of the different constructs of Sqlalchemy (like engine, connection, and session). Nonetheless, we have started to scale our FastAPI + Sqlalchemy apps, but I have found this error appearing:</p>
<pre><code>sqlalchemy.exc.TimeoutError - anyio/streams/memory.py:92 - QueuePool limit of size 8 overflow 4 reached, connection timed out, timeout 10.00
</code></pre>
<p>I would like to understand why that is happening.</p>
<h2>Current setup</h2>
<p>Right now we have instances of the web server running, using the following command:</p>
<pre><code>python -m uvicorn somemodule.api.fast_api.main:app --host 0.0.0.0
</code></pre>
<p>As you can see, I'm not setting the <code>workers</code> flag, so <code>uvicorn</code> is only using 1 worker. On the SQLAlchemy engine, we are using the following options:</p>
<pre><code>SQLALCHEMY_ENGINE_OPTIONS = {
"pool_pre_ping": True,
"pool_size": 16,
"max_overflow": 4,
"pool_timeout": 10,
"pool_recycle": 300,
}
engine = create_engine(get_db_url(), **SQLALCHEMY_ENGINE_OPTIONS)
</code></pre>
<h2>Questions</h2>
<h3>Q1: does this apply to every worker of uvicorn?</h3>
<p>My guess is that if I spin two workers in uvicorn, then the connection pool is not shared between the workers, so if the pool size is set to 16, and I have two workers, then the maximum number of connections (without considering the max overflow) would be 32. Am I correct?</p>
<h3>Q2: how does the connections play with async?</h3>
<p>Right now I'm doing the usual thing of creating a session like:</p>
<pre><code>def get_db():
db = SessionLocal()
try:
yield db
finally:
db.close()
DBSessionDependency = Annotated[Session, Depends(get_db)]
</code></pre>
<p>From my current understanding, the session uses a connection from the pool. We open a new session at the beginning of the request and then it gets closed at the end of the request, so that in every request we get a clean state session (but recycling connections from the pool).</p>
<p>Now in an async fast API, we can handle way more requests concurrently, so I guess the pool size is a bottleneck, and maybe because of that, we are running out of available connections in the pool. Is this correct? My train of thought is: <code>uvicorn</code> can accept at the same time, let's say, 200 requests, those requests start being processed so that every request gets assigned a session with an attached connection from the pool.</p>
<p>So I guess that to handle a lot of requests in Async Fast API I also need to have a way bigger pool size (or use pgbouncer and have less connections between all of the workers?)</p>
<h3>Q3: is it really an advantage to set a new session for every request?</h3>
<p>We are using the repository pattern, so the access to the db of every model is centralized in a class. Would it not be just enough to create a session from inside the class and that's it? Why do I need to bother with dependency injection in Fast API and passing the session around or using something like context vars to share the session object with the repositories?</p>
<p>Let me know if you need any clarification and thanks in advance!! :D</p>
|
<python><django><sqlalchemy><fastapi><connection-pooling>
|
2024-06-04 23:13:38
| 1
| 2,163
|
Antonio Gamiz Delgado
|
78,578,045
| 4,927,864
|
How to get MySQL result values as string or number?
|
<p>I'm currently using PyMySQL to make queries to my MySQL DB. I wanted the results to be a dictionary, where the columns are the keys mapped to their associated values, and I've got that. But I've noticed that values for types like dates and decimals are returned as objects, which is not what I've encountered with other languages and libraries. An example is:</p>
<pre><code>{'date': datetime.date(2023, 12, 26), 'store_number': 7, 'total': Decimal('11336.43')}
</code></pre>
<p>Ideally, I just want something like:</p>
<pre><code>{'date': '2023-12-26', 'store_number': 7, 'total': 10036.43}
</code></pre>
<p>What would be the best way to do this? I don't necessarily need to use PyMySQL, but it seems like a popular choice. I've read and even asked ChatGPT, but I figured I'd ask real people about what they have done and know. The options I've found are:</p>
<ol>
<li>Some suggest to iterate or use list comprehensions to convert the values, but that seems a little strange, maybe even silly, to me that I need to do that or write a custom function to do this every time.</li>
<li>Using converters. This one seemed more reasonable.</li>
<li>Casting values in the query. This also seems odd to me, as I've never had to resort to doing something like this before.</li>
</ol>
<p>Any help or insight would be appreciated. Thanks!</p>
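<p>A sketch of option 1 as a small reusable helper — PyMySQL's <code>connect()</code> also accepts a <code>conv=</code> mapping of custom converters, which would be option 2; the helper below is plain Python and independent of the driver:</p>

```python
import datetime
from decimal import Decimal


def plain(value):
    """Convert driver-specific types to JSON-friendly primitives."""
    if isinstance(value, Decimal):
        return float(value)
    if isinstance(value, (datetime.date, datetime.datetime)):
        return value.isoformat()
    return value


def plain_row(row):
    """Apply plain() to every value of one result-row dict."""
    return {key: plain(value) for key, value in row.items()}


row = {"date": datetime.date(2023, 12, 26),
       "store_number": 7,
       "total": Decimal("11336.43")}
print(plain_row(row))
# prints {'date': '2023-12-26', 'store_number': 7, 'total': 11336.43}
```

<p>Note that converting <code>Decimal</code> to <code>float</code> trades exactness for convenience, which matters for money values.</p>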
|
<python><pymysql>
|
2024-06-04 22:37:10
| 1
| 2,395
|
kenshin9
|
78,577,998
| 214,526
|
How to use pandera to validate non numeric columns against numeric datatype in schema and raise error
|
<p>How can I use pandera to validate dataframe columns to ensure errors are raised if data-type does not match against the schema?</p>
<p>Say, I have the schema defined as following -</p>
<pre><code>import re
from typing import Any, Dict, List, Pattern
# import jsonschema
import pandas as pd
import numpy as np
import pandera as pa
pa_schema_dict: Dict[str, pa.Column] = {
item[0]: pa.Column(
dtype=item[1],
nullable=True,
required=True,
name=item[0],
drop_invalid_rows=True,
)
for item in [("housing_median_age", "int"), ("ocean_proximity", "int")]
}
for name, value in pa_schema_dict.items():
if value.dtype.type in ("float", "int"):
print(f"{name}: {value.dtype.type}")
# check: pa.Check = pa.Check(
# lambda x: (
# True
# if re.match(
# pattern=r"^-?\d+(\.\d+)?([eE]-?\d+)?$",
# string=str(x),
# flags=re.ASCII,
# )
# is not None
# else False
# ),
# )
check: pa.Check = pa.Check(
pd.api.types.is_numeric_dtype,
)
if value.checks:
value.checks.append(check)
else:
value.checks = [check]
pa_feature_schema: pa.DataFrameSchema = pa.DataFrameSchema(
columns=pa_schema_dict,
name="data_schema",
drop_invalid_rows=True,
coerce=True,
strict="filter",
add_missing_columns=False,
)
</code></pre>
<p>And say, my dataframe is like following:</p>
<pre><code>df: pd.DataFrame = pd.DataFrame(
{"housing_median_age": [6, 25, np.nan, 25],
"ocean_proximity": ["NEAR BAY", "INLAND", "<1H OCEAN", "NEAR OCEAN"]
})
</code></pre>
<p>Since "ocean_proximity" in schema is of type <code>int64</code> but in the dataframe it is <code>object</code>, I want an error to be raised.</p>
<p>However, when I try the below code, it does not raise any exception</p>
<pre><code>try:
validated_df: pd.DataFrame = pa_feature_schema.validate(
check_obj=df, lazy=True, inplace=True
)
except (pa.errors.SchemaError, pa.errors.SchemaDefinitionError,) as e:
print(str(e))
</code></pre>
<p>How to achieve this?</p>
<p>Since the dataframe could be any random dataframe, I get the schema through a user config file. I'm not sure how to use DataFrameModel to define schema programmatically in such case.</p>
|
<python><pandas><dataframe><pandera>
|
2024-06-04 22:22:35
| 1
| 911
|
soumeng78
|
78,577,887
| 6,141,885
|
How to define the forward pass in a custom PyTorch FasterRCNN?
|
<p>I'm trying to write a custom PyTorch FasterRCNN that's based on an existing PyTorch model, <code>fasterrcnn_resnet50_fpn</code>, but I'm getting stuck on how to correctly write the forward pass. Following <a href="https://discuss.pytorch.org/t/how-to-retrieve-the-loss-function-of-fasterrcnn-for-object-detection/174100/2?u=morepenguins" rel="nofollow noreferrer">this</a> recommendation, I'm basing the forward pass off of the <a href="https://github.com/pytorch/vision/blob/beb4bb706b5e13009cb5d5586505c6d2896d184a/torchvision/models/detection/generalized_rcnn.py#L104-L105" rel="nofollow noreferrer"><code>GeneralizedRCNN</code> forward pass</a>, because I'd like to eventually modify the loss function. However, I'm getting errors on the forward pass formulation without loss function modifications, and not sure how to proceed.</p>
<p>Here is my code:</p>
<pre class="lang-py prettyprint-override"><code>class CustomFasterRCNNResNet50FPN(nn.Module):
def __init__(self, num_classes, **kwargs):
super().__init__()
# Load the pre-trained fasterrcnn_resnet50_fpn model
self.model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights=FasterRCNN_ResNet50_FPN_Weights.DEFAULT)
# Replace the classifier with a new one for the desired number of classes
in_features = self.model.roi_heads.box_predictor.cls_score.in_features
self.model.roi_heads.box_predictor = FastRCNNPredictor(in_features, num_classes)
def forward(self, images, targets=None):
# Convert the list of images to a tensor
images = torch.stack(images)
# Create an ImageList object from the images tensor
image_list = ImageList(images, [(img.shape[-2], img.shape[-1]) for img in images])
# Pass the images through the model
features = self.model.backbone(images)
if isinstance(features, torch.Tensor):
features = OrderedDict([("0", features)])
proposals, proposal_losses = self.model.rpn(image_list, features, targets)
detections, detector_losses = self.model.roi_heads(features, proposals, image_list.image_sizes, targets)
print("Detections:", detections)
print("Detector Losses:", detector_losses)
losses = {}
losses.update(detector_losses)
losses.update(proposal_losses)
return losses
</code></pre>
<p>When I run this, I get a printout that shows <code>detections</code> is an empty list, which it shouldn't be. I know this isn't my images or my targets, because I can run them directly through an unmodified <code>fasterrcnn_resnet50_fpn</code> and I get detections. I also know it's not the initialization, because I can test that using a simpler forward pass, which also works but wouldn't allow me to eventually modify the loss function.</p>
<p>Thank you in advance for any and all help with this!</p>
|
<python><pytorch><loss-function><faster-rcnn>
|
2024-06-04 21:58:00
| 0
| 1,327
|
morepenguins
|
78,577,881
| 4,271,491
|
A task won't trigger after a BranchPythonOperator calls it
|
<p>I'm trying to achieve max performance in the case I'm working on. <br>
I'm parsing jsons and when the first three tasks:<br></p>
<ul>
<li>check_cm_files_present_task</li>
<li>check_cmts_struct_files_present_task</li>
<li>check_cmts_meas_files_present_task</li>
</ul>
<p>detect any new file in any of the three GCS locations, it should trigger the corresponding file-moving task:</p>
<ul>
<li>move_to_valid_cm</li>
<li>move_to_valid_cmts_struct</li>
<li>move_to_valid_cmts_meas</li>
</ul>
<p>and in parallel it should also trigger the dataproc cluster creation task (to save time).<br>
So far I have been unable to achieve this.<br>
Whatever task ids I set in the return message, trigger rules I set, or dependencies I rearrange, the task "create_cluster_task" won't start.</p>
<p>Here is a piece of code I'm using:</p>
<pre><code>def check_cm_files_present():
regex = re.compile(r"(.*?).gz")
blobs = bucket.list_blobs(prefix=PREFIX)
jsons = [blob for blob in blobs if regex.match(blob.name)]
if len(jsons) > 0:
return ["move_to_valid_cm","create_cluster_task"]
else:
return ["exit_task"]
with models.DAG(dag_id='job-json-to-csv',
schedule_interval="0,15,30,45 * * * *",
description='',
default_args=default_dag_args,
max_active_runs=1) as dag:
check_cm_files_present_task = BranchPythonOperator(
task_id='check_cm_files_present_task',
python_callable=check_cm_files_present,
)
check_cmts_struct_files_present_task = BranchPythonOperator(
task_id='check_cmts_struct_files_present_task',
python_callable=check_cmts_struct_files_present,
)
check_cmts_meas_files_present_task = BranchPythonOperator(
task_id='check_cmts_meas_files_present_task',
python_callable=check_cmts_meas_files_present,
)
exit_task = DummyOperator(
task_id='exit_task',
trigger_rule='one_success'
)
create_cluster_task = DummyOperator(
task_id='create_cluster_task',
trigger_rule='one_success'
)
check_cm_files_present_task >> move_to_valid_cm >> other_task1
check_cm_files_present_task >> create_cluster_task
check_cm_files_present_task >> exit_task
check_cmts_struct_files_present_task >> move_to_valid_cmts_struct >> other_task2
check_cmts_struct_files_present_task >> create_cluster_task
check_cmts_struct_files_present_task >> exit_task
check_cmts_meas_files_present_task >> move_to_valid_cmts_meas >> other_task3
check_cmts_meas_files_present_task >> create_cluster_task
check_cmts_meas_files_present_task >> exit_task
create_cluster_task >> create_dataproc_cluster
</code></pre>
<p>And here is the log from check_cm_files_present_task. It should follow the <code>["move_to_valid_cm", "create_cluster_task"]</code> branch, but it never does; the task always gets skipped.
<a href="https://i.sstatic.net/ARXnNj8J.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/ARXnNj8J.png" alt="enter image description here" /></a>
<a href="https://i.sstatic.net/VC526Azt.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/VC526Azt.png" alt="enter image description here" /></a></p>
<p>Do you have any idea why is that happening?</p>
|
<python><airflow>
|
2024-06-04 21:56:30
| 1
| 528
|
Aleksander Lipka
|
78,577,849
| 6,365,949
|
How to get the track name in Audacity scripting?
|
<p>I am working with the <a href="https://en.wikipedia.org/wiki/Audacity_(audio_editor)" rel="nofollow noreferrer">Audacity</a> mod-script-pipeline to write a script which goes through every clip of every track, and gets the name of each track.</p>
<p>My code works with everything, except getting the track name, but I can't figure out how to fetch this string using any of the scripting references in the <a href="https://manual.audacityteam.org/man/scripting_reference.html" rel="nofollow noreferrer">documentation online</a>.</p>
<p>My below code works for everything except getting the trackname, and I launch it with <code>python3 script.py -a 3</code>.</p>
<pre><code># python vin
#!/usr/bin/env python
# -*- coding: utf-8 -*-
import os
import sys
import requests
import json
import time
import re
import random
'''
Audacity mod-script-pipeline setup
'''
if sys.platform == 'win32':
print("pipe-test.py, running on windows")
TONAME = '\\\\.\\pipe\\ToSrvPipe'
FROMNAME = '\\\\.\\pipe\\FromSrvPipe'
EOL = '\r\n\0'
else:
print("pipe-test.py, running on linux or mac")
TONAME = '/tmp/audacity_script_pipe.to.' + str(os.getuid())
FROMNAME = '/tmp/audacity_script_pipe.from.' + str(os.getuid())
EOL = '\n'
print("Write to \"" + TONAME +"\"")
if not os.path.exists(TONAME):
print(" ..does not exist. Ensure Audacity is running with mod-script-pipe.")
sys.exit()
print("Read from \"" + FROMNAME +"\"")
if not os.path.exists(FROMNAME):
print(" ..does not exist. Ensure Audacity is running with mod-script-pipe.")
sys.exit()
print("-- Both pipes exist. Good.")
TOFILE = open(TONAME, 'w')
print("-- File to write to has been opened")
FROMFILE = open(FROMNAME, 'rt')
print("-- File to read from has now been opened too\r\n")
'''
Audacity Functions
'''
# Send a single command to audacity mod-script-pipeline
def send_command(command):
#print("Send: >>> \n"+command)
TOFILE.write(command + EOL)
TOFILE.flush()
# Return the command response
def get_response():
result = ''
line = ''
while line != '\n':
result += line
line = FROMFILE.readline()
return result
# Send one command, and return the response
def do_command(command):
send_command(command)
response = get_response()
#print("Rcvd: <<< \n" + response)
return response
'''
#############################################
Start processing command line args
#############################################
'''
# auto -a "number of tracks"
if '-a' in sys.argv:
numberOfTracksIndex = sys.argv.index('-a')
numberOfTracks = int(sys.argv[numberOfTracksIndex+1])
print(f"Export {numberOfTracks} tracks")
# Unmute all tracks
print("Go to first track")
do_command('FirstTrack')
time.sleep(1)
trackCount = 0
while(trackCount < numberOfTracks):
print("\nMoves the cursor to the start of the selected track.")
do_command('CursTrackStart')
do_command('CursSelStart')
time.sleep(1)
print(f"Select track: {trackCount}")
do_command(f'SelectTracks: Track={trackCount}')
time.sleep(3)
###### Getting trackname here #######
# Get trackname
print("Get track name:")
nameRsp = do_command('GetInfo: Type="Tracks"')
print("name = ")
print(nameRsp)
trackName = "apple"
# Select clips in this track
maxClips = 3
i = 1
while i <= maxClips:
print(f"select clip {i}")
do_command('SelNextClip')
time.sleep(2)
print("export audio")
i += 1
trackCount += 1
</code></pre>
<p>This is the code that gets the track names:</p>
<pre><code>###### Getting trackname here #######
# Get trackname
print("Get track name:")
nameRsp = do_command('GetInfo: Type="Tracks"')
print("name = ")
print(nameRsp)
trackName = "apple"
</code></pre>
<p>But my line <code>do_command('GetInfo: Type="Tracks"')</code> always returns an empty string.</p>
<p>How can I edit my Python code to get the track names such as my highlight "apple" text in this image?</p>
<p><a href="https://i.sstatic.net/A7Y0Ad8J.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/A7Y0Ad8J.png" alt="Enter image description here" /></a></p>
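<p>For reference, a sketch of how such a <code>GetInfo</code> response could be parsed once a non-empty reply arrives. Audacity returns a JSON array followed by a status line such as <code>BatchCommand finished: OK</code> (exact framing may vary by version; the empty string above also suggests the response may be read before it is written, so a short wait or retry loop around <code>get_response</code> may be needed):</p>

```python
import json

# Hypothetical GetInfo reply: a JSON array of track records followed
# by the pipe's status line.  The field names mirror Audacity's track
# info output but are assumptions here.
sample_response = """[
  {"name": "apple", "focused": 1, "selected": 1, "kind": "wave"},
  {"name": "banana", "focused": 0, "selected": 0, "kind": "wave"}
]
BatchCommand finished: OK
"""


def parse_tracks(response: str):
    """Strip the trailing status line and decode the JSON payload."""
    payload = response[: response.rfind("]") + 1]
    return json.loads(payload)


tracks = parse_tracks(sample_response)
print([t["name"] for t in tracks])  # prints ['apple', 'banana']
```

<p>With that in place, <code>trackName</code> could be taken from the record whose <code>focused</code> or <code>selected</code> flag matches the current track instead of being hard-coded to "apple".</p>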
|
<python><audacity>
|
2024-06-04 21:43:45
| 0
| 1,582
|
Martin
|
78,577,834
| 14,345,989
|
NumPy: get a matrix of distances between values
|
<p>If I have a set of points:</p>
<pre><code>points = np.random.randint(0, 11, size=10)
print(points)
</code></pre>
<p>Output:</p>
<pre class="lang-none prettyprint-override"><code>[ 5 4 9 7 4 1 2 10 4 2]
</code></pre>
<p>And if I want to get a matrix representing the distance from each point to each other point, I can do so like this:</p>
<pre><code>def get_diff_array(values):
dX = values - values[0]
tmp = [dX]
for i in range(1, len(dX)):
tmp.append((dX - dX[i]))
return np.array(tmp)
print(get_diff_array(points))
</code></pre>
<p>Output:</p>
<pre class="lang-none prettyprint-override"><code>[[ 0 -1 4 2 -1 -4 -3 5 -1 -3]
[ 1 0 5 3 0 -3 -2 6 0 -2]
[-4 -5 0 -2 -5 -8 -7 1 -5 -7]
[-2 -3 2 0 -3 -6 -5 3 -3 -5]
[ 1 0 5 3 0 -3 -2 6 0 -2]
[ 4 3 8 6 3 0 1 9 3 1]
[ 3 2 7 5 2 -1 0 8 2 0]
[-5 -6 -1 -3 -6 -9 -8 0 -6 -8]
[ 1 0 5 3 0 -3 -2 6 0 -2]
[ 3 2 7 5 2 -1 0 8 2 0]]
</code></pre>
<p>Is there a faster NumPy-specific way to calculate this? Currently it takes approximately 0.44 seconds for 10k points, which seems slow. This is really just a learning question to try to understand NumPy better.</p>
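<p>A vectorized sketch using broadcasting, which builds the same matrix in one step (row <code>i</code> is <code>points - points[i]</code>, matching the output above):</p>

```python
import numpy as np

points = np.array([5, 4, 9, 7, 4, 1, 2, 10, 4, 2])

# Broadcasting a column vector against a row vector yields the full
# pairwise difference matrix: diff[i, j] = points[j] - points[i].
diff = points[None, :] - points[:, None]
print(diff[0])  # prints [ 0 -1  4  2 -1 -4 -3  5 -1 -3]
```

<p>This avoids the Python-level loop entirely, though the output is still an n×n array, so memory grows quadratically with the number of points.</p>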
|
<python><numpy><matrix>
|
2024-06-04 21:36:21
| 2
| 886
|
Mandias
|
78,577,784
| 3,478,605
|
How can I split a Sum in SymPy?
|
<p>SymPy understands that <code>Sum</code> and <code>+</code> commute.</p>
<pre class="lang-py prettyprint-override"><code>from sympy import symbols, Idx, IndexedBase, Sum
i, n = symbols("i n", cls=Idx)
x = IndexedBase("x", shape=(n,))
s = Sum(x[i] + 1, (i, 1, n))
t = Sum(x[i], (i, 1, n)) + Sum(1, (i, 1, n))
assert s.equals(t) # OK
</code></pre>
<p>But for complex expressions that does not work.</p>
<pre class="lang-py prettyprint-override"><code>from sympy import symbols, Idx, IndexedBase, Sum
a, b = symbols("a b")
i, n = symbols("i n", cls=Idx)
w = IndexedBase("w", shape=(n,))
x = IndexedBase("x", shape=(n,))
y = IndexedBase("y", shape=(n,))
sum = lambda e: Sum(e, (i, 1, n))
sw = sum(w[i])
mx = sum(w[i] * x[i]) / sw
my = sum(w[i] * y[i]) / sw
d = w[i] * ((a * (x[i] - mx) - (y[i] - my)))**2
e = w[i] * (a * mx + b - my)**2
f = w[i] * 2 * (a * (x[i] - mx) - (y[i] - my)) * (a * mx + b - my)
s = sum(d + e + f)
t = sum(d) + sum(e) + sum(f)
assert s.equals(t) # The assert fails
</code></pre>
<p>How can we explain to SymPy that this transformation is actually OK?</p>
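<p>One workaround is to perform the split manually rather than asking <code>equals</code> to discover it: distribute the <code>Sum</code> over the top-level terms of its summand. This is a sketch and only splits the outermost <code>Add</code>; nested sums inside the terms are left untouched:</p>

```python
from sympy import Add, Idx, IndexedBase, Sum, symbols


def split_sum(s: Sum):
    """Distribute a Sum over the top-level Add terms of its summand."""
    return Add(*[Sum(term, *s.limits) for term in Add.make_args(s.function)])


i, n = symbols("i n", cls=Idx)
x = IndexedBase("x", shape=(n,))
s = Sum(x[i] + 1, (i, 1, n))
print(split_sum(s))
```

<p>Applying <code>split_sum</code> to both sides (or to their difference) before comparing gives SymPy the transformed form directly instead of relying on <code>equals</code> to find it.</p>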
|
<python><sympy>
|
2024-06-04 21:18:26
| 1
| 4,738
|
Valéry
|
78,577,720
| 4,751,700
|
Python: How to type a Union of TypeVarTuple
|
<p>I have a function that receives 3 parameters: <code>a: dict[str, str]</code>, <code>b:dict[str, str]</code>, and <code>c: dict[str, str]</code>.
The values/keys of these parameters are "linked". They are based on an existing protocol and some combinations are not possible.</p>
<p>The keys in <code>b</code> and <code>c</code> will differ depending on a certain key in <code>a</code>.</p>
<p>I have all the possible states and their combinations typed out.</p>
<pre class="lang-py prettyprint-override"><code>type combination1 = tuple[SomeTypeA, SomeTypeB, SomeTypeC]
type combination2 = tuple[SomeTypeA2, SomeTypeB2, SomeTypeC3]
...
</code></pre>
<pre class="lang-py prettyprint-override"><code>def function(state: combination1 | combination2):
a, b, c = state
if a["type"] == "1":
_ = b["some_key"] # this works
_ = b["some_other_key"] # typehints say that this key doesn't exist
if a["type"] == "2":
_ = b["some_key"] # typehints say that this key doesn't exist
_ = b["some_other_key"] # this works
</code></pre>
<p>The above works without a problem but my function should have a signature of <code>function(a, b, c) -> None</code> with 3 parameters instead of <code>function(state) -> None</code>. The function is used as a callback by another package and expects this signature.</p>
<p>Using <code>*args</code> the function would run but type hinting works for only one combination:</p>
<pre class="lang-py prettyprint-override"><code>def function(*args: *combination1):
a, b, c = args
if a["type"] == "1":
_ = b["some_key"] # this works
_ = b["some_other_key"] # typehints say that this key doesn't exist
</code></pre>
<p>If the following was allowed then my issue would be fixed, but it isn't.</p>
<pre class="lang-py prettyprint-override"><code>def function(*args: *combination1 | *combination2):
a, b, c = args
if a["type"] == "1":
_ = b["some_key"] # this works
_ = b["some_other_key"] # typehints say that this key doesn't exist
</code></pre>
<p>This last piece of code returns an invalid syntax error: "Unpack operator not allowed in type annotation"</p>
<p>Any ideas on how to solve this, preferably only with typehints without <code>typing.TypeGuard</code> using custom logic?</p>
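<p>A possible direction, sketched with <code>typing.overload</code> (the TypedDict names below are illustrative stand-ins for the question's types): overloads keep the three-parameter callback signature while giving the checker one view per combination:</p>

```python
from typing import Literal, TypedDict, overload


# Hypothetical stand-ins for SomeTypeA/SomeTypeB etc.
class A1(TypedDict):
    type: Literal["1"]


class B1(TypedDict):
    some_key: str


class A2(TypedDict):
    type: Literal["2"]


class B2(TypedDict):
    some_other_key: str


@overload
def function(a: A1, b: B1, c: dict) -> None: ...
@overload
def function(a: A2, b: B2, c: dict) -> None: ...


def function(a, b, c) -> None:
    # Single runtime implementation; the overloads above describe the
    # legal (a, b, c) combinations to the type checker.
    if a["type"] == "1":
        print(b["some_key"])
    else:
        print(b["some_other_key"])


function({"type": "1"}, {"some_key": "hello"}, {})  # prints "hello"
```

<p>Whether this narrows as desired inside the body depends on the checker; narrowing on <code>a["type"]</code> across three separate parameters may still need explicit <code>cast</code>s or a TypeGuard, which is the limitation the question runs into.</p>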
|
<python><python-typing>
|
2024-06-04 21:01:33
| 1
| 391
|
fanta fles
|
78,577,645
| 1,898,218
|
How to get conda to install pytorch-gpu rather than pytorch-cpu Ubuntu cuda 11.4
|
<p>I've combed through all of the similar questions but to no avail. I have an older GPU that I'm trying to get to work with CUDA and pytorch. For some reason, I can't get conda to install pytorch gpu. No matter what I do, it tries to install the cpu version of pytorch.</p>
<p>I build my conda like this - miniconda</p>
<blockquote>
<p>conda create --name tortoise python=3.9 numba inflect</p>
</blockquote>
<p>I need to force a specific version of CUDA 11.4.</p>
<pre><code>conda install pytorch=1.11.0 torchvision=0.12 torchaudio=0.11 cudatoolkit=11.4 -c pytorch -c nvidia
</code></pre>
<p>Someone said that torchvision and torchaudio could cause the cpu version to be installed.</p>
<pre><code>conda install pytorch=1.11.0 cudatoolkit=11.4 -c pytorch -c nvidia
</code></pre>
<p>Regardless of what I do, it tries to install the cpu version of pytorch.</p>
<p><a href="https://i.sstatic.net/Qkzc3bnZ.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Qkzc3bnZ.png" alt="Showing conda will install cpu version of pytorch" /></a></p>
<p>So what is it that determines that the CPU version should be installed? I have a GPU in this box.</p>
<p><a href="https://i.sstatic.net/JWga5a2C.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/JWga5a2C.png" alt="nvidia-smi output" /></a></p>
<p>conda info</p>
<p><a href="https://i.sstatic.net/ZLz26c3m.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/ZLz26c3m.png" alt="conda info" /></a></p>
|
<python><pytorch><conda>
|
2024-06-04 20:41:37
| 1
| 1,782
|
Halfstop
|
78,577,558
| 11,608,962
|
ImportError when attempting to create a header using borb PDF library in Python
|
<p>I'm trying to create a header in a PDF document using the <code>borb</code> library in Python. However, I'm encountering an ImportError when attempting to import the necessary modules. Here's my code:</p>
<pre class="lang-py prettyprint-override"><code>from borb.pdf import Page
from borb.pdf import Document
from borb.pdf import MultiColumnLayout
from borb.pdf import PageLayout
from borb.pdf.canvas.layout.layout_element import Container, Span
# Further code to create header and add content
</code></pre>
<p>The error message I'm getting is:</p>
<pre><code>ImportError: cannot import name 'Container' from 'borb.pdf.canvas.layout.layout_element'
</code></pre>
<p>I've tried various import statements, but I can't seem to get past this error. Can someone help me understand what I'm doing wrong and how I can correctly import these modules to create a header in my PDF document using <code>borb</code>?</p>
|
<python><pdf><borb>
|
2024-06-04 20:17:26
| 1
| 1,427
|
Amit Pathak
|