Dataset schema (column, type, min, max):
QuestionId          int64          74.8M to 79.8M
UserId              int64          56 to 29.4M
QuestionTitle       string length  15 to 150
QuestionBody        string length  40 to 40.3k
Tags                string length  8 to 101
CreationDate        string date    2022-12-10 09:42:47 to 2025-11-01 19:08:18
AnswerCount         int64          0 to 44
UserExpertiseLevel  int64          301 to 888k
UserDisplayName     string length  3 to 30
79,166,062
495,769
Is python's CFFI an adequate tool to parse C definitions from a header file
<p>From python, I want to fetch the details of structures/arrays/enums defined in C headers: the list of defined types, the list and types of struct members, the names and values defined in enums, the size of arrays, etc.</p> <p>I don't plan to link a C lib in python, but I wanted to use a battle-tested tool to &quot;parse&quot; C definitions so I picked <a href="https://cffi.readthedocs.io/en/stable/" rel="nofollow noreferrer">CFFI</a> and tried the following:</p> <p>Start with a dummy <code>test.h</code> file</p> <pre><code>typedef struct { int a; int b[3]; float c; } other_struct_T; typedef struct { bool i; bool j; other_struct_T k; } main_struct_T; </code></pre> <p>preprocess it once to be sure to resolve #includes, #defines, etc.</p> <p><code>gcc -E -P -xc test.h -o test.preprocessed.h</code></p> <p>Then load it with CFFI like this</p> <pre><code> from pathlib import Path from cffi import FFI u = FFI() txt = Path(&quot;test.preprocessed.h&quot;).read_text() u.cdef(txt) k = u.typeof(&quot;main_struct_T&quot;) print(k) print(k.elements) </code></pre> <p>which prints <code>&lt;ctype 'main_struct_T'&gt;</code> first. But it fails at the second one (and k seems to contain neither .length, nor .item, nor .elements, as one could expect from a ctype instance, as mentioned <a href="https://cffi.readthedocs.io/en/stable/ref.html#ffi-cdata-ffi-ctype" rel="nofollow noreferrer">here</a>)</p> <pre><code>Traceback (most recent call last): File &quot;./parse_header.py&quot;, line 14, in &lt;module&gt; print(k.elements) ^^^^^^^^^^^ AttributeError: elements </code></pre> <p>What am I missing? How would you do it differently?</p>
<python><c><python-cffi>
2024-11-07 10:42:30
1
2,260
yota
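[Editor's sketch, not part of the original post: for struct ctypes, CFFI exposes the members through `.fields`; `.item` and `.length` exist only on array ctypes, and `.elements` does not exist at all. A minimal demonstration using the same dummy header:]

```python
from cffi import FFI

ffi = FFI()
ffi.cdef("""
typedef struct { int a; int b[3]; float c; } other_struct_T;
typedef struct { bool i; bool j; other_struct_T k; } main_struct_T;
""")

t = ffi.typeof("main_struct_T")
print(t.kind)  # struct ctypes report kind 'struct'

# .fields is a list of (name, field) pairs; each field carries an
# offset and a ctype, so nested structs and arrays can be walked too.
for name, field in t.fields:
    print(name, field.type.kind, field.type.cname, "offset", field.offset)

# Array members expose .item and .length instead of .fields:
inner = ffi.typeof("other_struct_T")
b_type = dict(inner.fields)["b"].type
print(b_type.kind, b_type.length)
```

If only parsing is needed (no FFI at all), `pycparser`, the parser CFFI uses internally, gives a full AST of the preprocessed header instead.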
79,166,016
14,720,380
error: cannot repair to "manylinux_2_28_x86_64" ABI because of the presence of too-recent versioned symbols
<p>I am trying to build my python package inside a manylinux_2_28 container but I am getting the error:</p> <pre><code>error: cannot repair &quot;pynemo-0.0.0.dev0-cp311-cp311-linux_x86_64.whl&quot; to &quot;manylinux_2_28_x86_64&quot; ABI because of the presence of too-recent versioned symbols. </code></pre> <p>Using <code>auditwheel-symbols</code> I am getting:</p> <pre><code>pynemo/_core.cpython-311-x86_64-linux-gnu.so is manylinux_2_28 compliant. pynemo/ld-2.28.so is manylinux_2_28 compliant. pynemo/ld-linux-x86-64.so.2 is manylinux_2_28 compliant. pynemo/libc-2.28.so is manylinux_2_28 compliant. pynemo/libc.so.6 is manylinux_2_28 compliant. pynemo/libdl-2.28.so is not manylinux_2_28 compliant because it links the following forbidden libraries: libc.so.6 offending symbols: _dl_signal_error@@GLIBC_PRIVATE, _dl_vsym@@GLIBC_PRIVATE, _dl_addr@@GLIBC_PRIVATE, _dl_sym@@GLIBC_PRIVATE, _dl_rtld_di_serinfo@@GLIBC_PRIVATE pynemo/libdl.so.2 is not manylinux_2_28 compliant because it links the following forbidden libraries: libc.so.6 offending symbols: _dl_signal_error@@GLIBC_PRIVATE, _dl_vsym@@GLIBC_PRIVATE, _dl_addr@@GLIBC_PRIVATE, _dl_sym@@GLIBC_PRIVATE, _dl_rtld_di_serinfo@@GLIBC_PRIVATE pynemo/libgcc_s-8-20210514.so.1 is manylinux_2_28 compliant. pynemo/libgcc_s.so.1 is manylinux_2_28 compliant. 
pynemo/libm-2.28.so is not manylinux_2_28 compliant because it links the following forbidden libraries: libc.so.6 offending versions: GLIBC_PRIVATE pynemo/libm.so.6 is not manylinux_2_28 compliant because it links the following forbidden libraries: libc.so.6 offending versions: GLIBC_PRIVATE pynemo/libpthread-2.28.so is not manylinux_2_28 compliant because it links the following forbidden libraries: libc.so.6 offending symbols: __libc_siglongjmp@@GLIBC_PRIVATE, __tsearch@@GLIBC_PRIVATE, __getrlimit@@GLIBC_PRIVATE, __libc_dlclose@@GLIBC_PRIVATE, __libc_fatal@@GLIBC_PRIVATE, __sigtimedwait@@GLIBC_PRIVATE, __mmap@@GLIBC_PRIVATE, __libc_system@@GLIBC_PRIVATE, __munmap@@GLIBC_PRIVATE, __libc_fork@@GLIBC_PRIVATE, _IO_enable_locks@@GLIBC_PRIVATE, __write_nocancel@@GLIBC_PRIVATE, _dl_deallocate_tls@@GLIBC_PRIVATE, __call_tls_dtors@@GLIBC_PRIVATE, __libc_thread_freeres@@GLIBC_PRIVATE, __mprotect@@GLIBC_PRIVATE, __clock_gettime@@GLIBC_PRIVATE, __libc_fcntl64@@GLIBC_PRIVATE, __nanosleep_nocancel@@GLIBC_PRIVATE, __libc_current_sigrtmax_private@@GLIBC_PRIVATE, __tdelete@@GLIBC_PRIVATE, __libc_dlopen_mode@@GLIBC_PRIVATE, __read_nocancel@@GLIBC_PRIVATE, _dl_make_stack_executable@@GLIBC_PRIVATE, __libc_pthread_init@@GLIBC_PRIVATE, __ctype_init@@GLIBC_PRIVATE, __libc_dlsym@@GLIBC_PRIVATE, __tfind@@GLIBC_PRIVATE, __open64_nocancel@@GLIBC_PRIVATE, __libc_current_sigrtmin_private@@GLIBC_PRIVATE, __madvise@@GLIBC_PRIVATE, __pause_nocancel@@GLIBC_PRIVATE, __mktemp@@GLIBC_PRIVATE, __twalk@@GLIBC_PRIVATE, _dl_allocate_tls@@GLIBC_PRIVATE, __tunable_get_val@@GLIBC_PRIVATE, _dl_get_tls_static_info@@GLIBC_PRIVATE, __libc_alloca_cutoff@@GLIBC_PRIVATE, _dl_allocate_tls_init@@GLIBC_PRIVATE, __close_nocancel@@GLIBC_PRIVATE, __libc_allocate_rtsig_private@@GLIBC_PRIVATE, __libc_longjmp@@GLIBC_PRIVATE pynemo/libpthread.so.0 is not manylinux_2_28 compliant because it links the following forbidden libraries: libc.so.6 offending symbols: __libc_siglongjmp@@GLIBC_PRIVATE, __tsearch@@GLIBC_PRIVATE, 
__getrlimit@@GLIBC_PRIVATE, __libc_dlclose@@GLIBC_PRIVATE, __libc_fatal@@GLIBC_PRIVATE, __sigtimedwait@@GLIBC_PRIVATE, __mmap@@GLIBC_PRIVATE, __libc_system@@GLIBC_PRIVATE, __munmap@@GLIBC_PRIVATE, __libc_fork@@GLIBC_PRIVATE, _IO_enable_locks@@GLIBC_PRIVATE, __write_nocancel@@GLIBC_PRIVATE, _dl_deallocate_tls@@GLIBC_PRIVATE, __call_tls_dtors@@GLIBC_PRIVATE, __libc_thread_freeres@@GLIBC_PRIVATE, __mprotect@@GLIBC_PRIVATE, __clock_gettime@@GLIBC_PRIVATE, __libc_fcntl64@@GLIBC_PRIVATE, __nanosleep_nocancel@@GLIBC_PRIVATE, __libc_current_sigrtmax_private@@GLIBC_PRIVATE, __tdelete@@GLIBC_PRIVATE, __libc_dlopen_mode@@GLIBC_PRIVATE, __read_nocancel@@GLIBC_PRIVATE, _dl_make_stack_executable@@GLIBC_PRIVATE, __libc_pthread_init@@GLIBC_PRIVATE, __ctype_init@@GLIBC_PRIVATE, __libc_dlsym@@GLIBC_PRIVATE, __tfind@@GLIBC_PRIVATE, __open64_nocancel@@GLIBC_PRIVATE, __libc_current_sigrtmin_private@@GLIBC_PRIVATE, __madvise@@GLIBC_PRIVATE, __pause_nocancel@@GLIBC_PRIVATE, __mktemp@@GLIBC_PRIVATE, __twalk@@GLIBC_PRIVATE, _dl_allocate_tls@@GLIBC_PRIVATE, __tunable_get_val@@GLIBC_PRIVATE, _dl_get_tls_static_info@@GLIBC_PRIVATE, __libc_alloca_cutoff@@GLIBC_PRIVATE, _dl_allocate_tls_init@@GLIBC_PRIVATE, __close_nocancel@@GLIBC_PRIVATE, __libc_allocate_rtsig_private@@GLIBC_PRIVATE, __libc_longjmp@@GLIBC_PRIVATE pynemo/librt-2.28.so is not manylinux_2_28 compliant because it links the following forbidden libraries: libc.so.6 offending symbols: __recv@@GLIBC_PRIVATE, __libc_pwrite@@GLIBC_PRIVATE, __libc_fatal@@GLIBC_PRIVATE, __libc_pread@@GLIBC_PRIVATE, __pthread_barrier_init@@GLIBC_PRIVATE, __pthread_unwind@@GLIBC_PRIVATE, __pthread_get_minstack@@GLIBC_PRIVATE, __libc_dlopen_mode@@GLIBC_PRIVATE, __pthread_barrier_wait@@GLIBC_PRIVATE, __libc_dlsym@@GLIBC_PRIVATE, __socket@@GLIBC_PRIVATE, __pthread_attr_copy@@GLIBC_PRIVATE, __shm_directory@@GLIBC_PRIVATE, __fortify_fail@@GLIBC_PRIVATE, __close_nocancel@@GLIBC_PRIVATE libpthread.so.0 offending symbols: __recv@@GLIBC_PRIVATE, 
__libc_pwrite@@GLIBC_PRIVATE, __libc_fatal@@GLIBC_PRIVATE, __libc_pread@@GLIBC_PRIVATE, __pthread_barrier_init@@GLIBC_PRIVATE, __pthread_unwind@@GLIBC_PRIVATE, __pthread_get_minstack@@GLIBC_PRIVATE, __libc_dlopen_mode@@GLIBC_PRIVATE, __pthread_barrier_wait@@GLIBC_PRIVATE, __libc_dlsym@@GLIBC_PRIVATE, __socket@@GLIBC_PRIVATE, __pthread_attr_copy@@GLIBC_PRIVATE, __shm_directory@@GLIBC_PRIVATE, __fortify_fail@@GLIBC_PRIVATE, __close_nocancel@@GLIBC_PRIVATE pynemo/librt.so.1 is not manylinux_2_28 compliant because it links the following forbidden libraries: libpthread.so.0 offending symbols: __recv@@GLIBC_PRIVATE, __libc_pwrite@@GLIBC_PRIVATE, __libc_fatal@@GLIBC_PRIVATE, __libc_pread@@GLIBC_PRIVATE, __pthread_barrier_init@@GLIBC_PRIVATE, __pthread_unwind@@GLIBC_PRIVATE, __pthread_get_minstack@@GLIBC_PRIVATE, __libc_dlopen_mode@@GLIBC_PRIVATE, __pthread_barrier_wait@@GLIBC_PRIVATE, __libc_dlsym@@GLIBC_PRIVATE, __socket@@GLIBC_PRIVATE, __pthread_attr_copy@@GLIBC_PRIVATE, __shm_directory@@GLIBC_PRIVATE, __fortify_fail@@GLIBC_PRIVATE, __close_nocancel@@GLIBC_PRIVATE libc.so.6 offending symbols: __recv@@GLIBC_PRIVATE, __libc_pwrite@@GLIBC_PRIVATE, __libc_fatal@@GLIBC_PRIVATE, __libc_pread@@GLIBC_PRIVATE, __pthread_barrier_init@@GLIBC_PRIVATE, __pthread_unwind@@GLIBC_PRIVATE, __pthread_get_minstack@@GLIBC_PRIVATE, __libc_dlopen_mode@@GLIBC_PRIVATE, __pthread_barrier_wait@@GLIBC_PRIVATE, __libc_dlsym@@GLIBC_PRIVATE, __socket@@GLIBC_PRIVATE, __pthread_attr_copy@@GLIBC_PRIVATE, __shm_directory@@GLIBC_PRIVATE, __fortify_fail@@GLIBC_PRIVATE, __close_nocancel@@GLIBC_PRIVATE pynemo/libstdc++.so.6 is manylinux_2_28 compliant. pynemo/libstdc++.so.6.0.25 is manylinux_2_28 compliant. 
</code></pre> <p>So it seems like my wheel is getting multiple versions of libstdc++, libc and libpthread but I am not sure why this is happening as I am building the entire wheel from inside the container.</p> <p>Looking in the logs, it seems like all of the lib files it needs are taken from the /lib64 folder in the container:</p> <pre><code>DEBUG:auditwheel.wheel_abi:full_elftree: { &quot;pynemo/_core.cpython-311-x86_64-linux-gnu.so&quot;: { &quot;interp&quot;: null, &quot;path&quot;: &quot;pynemo/_core.cpython-311-x86_64-linux-gnu.so&quot;, &quot;realpath&quot;: &quot;pynemo/_core.cpython-311-x86_64-linux-gnu.so&quot;, &quot;needed&quot;: [ &quot;libpthread.so.0&quot;, &quot;librt.so.1&quot;, &quot;libdl.so.2&quot;, &quot;libstdc++.so.6&quot;, &quot;libm.so.6&quot;, &quot;libgcc_s.so.1&quot;, &quot;libc.so.6&quot;, &quot;ld-linux-x86-64.so.2&quot; ], &quot;rpath&quot;: [], &quot;runpath&quot;: [], &quot;libs&quot;: { &quot;libpthread.so.0&quot;: { &quot;realpath&quot;: &quot;/lib64/libpthread-2.28.so&quot;, &quot;path&quot;: &quot;/lib64/libpthread.so.0&quot;, &quot;needed&quot;: [ &quot;libc.so.6&quot;, &quot;ld-linux-x86-64.so.2&quot; ] }, &quot;libc.so.6&quot;: { &quot;realpath&quot;: &quot;/lib64/libc-2.28.so&quot;, &quot;path&quot;: &quot;/lib64/libc.so.6&quot;, &quot;needed&quot;: [ &quot;ld-linux-x86-64.so.2&quot; ] }, &quot;ld-linux-x86-64.so.2&quot;: { &quot;realpath&quot;: &quot;/lib64/ld-2.28.so&quot;, &quot;path&quot;: &quot;/lib64/ld-linux-x86-64.so.2&quot;, &quot;needed&quot;: [] }, &quot;librt.so.1&quot;: { &quot;realpath&quot;: &quot;/lib64/librt-2.28.so&quot;, &quot;path&quot;: &quot;/lib64/librt.so.1&quot;, &quot;needed&quot;: [ &quot;libpthread.so.0&quot;, &quot;libc.so.6&quot; ] }, &quot;libdl.so.2&quot;: { &quot;realpath&quot;: &quot;/lib64/libdl-2.28.so&quot;, &quot;path&quot;: &quot;/lib64/libdl.so.2&quot;, &quot;needed&quot;: [ &quot;libc.so.6&quot;, &quot;ld-linux-x86-64.so.2&quot; ] }, &quot;libstdc++.so.6&quot;: { &quot;realpath&quot;: 
&quot;/lib64/libstdc++.so.6.0.25&quot;, &quot;path&quot;: &quot;/lib64/libstdc++.so.6&quot;, &quot;needed&quot;: [ &quot;libm.so.6&quot;, &quot;libc.so.6&quot;, &quot;ld-linux-x86-64.so.2&quot;, &quot;libgcc_s.so.1&quot; ] }, &quot;libm.so.6&quot;: { &quot;realpath&quot;: &quot;/lib64/libm-2.28.so&quot;, &quot;path&quot;: &quot;/lib64/libm.so.6&quot;, &quot;needed&quot;: [ &quot;libc.so.6&quot;, &quot;ld-linux-x86-64.so.2&quot; ] }, &quot;libgcc_s.so.1&quot;: { &quot;realpath&quot;: &quot;/lib64/libgcc_s-8-20210514.so.1&quot;, &quot;path&quot;: &quot;/lib64/libgcc_s.so.1&quot;, &quot;needed&quot;: [ &quot;libc.so.6&quot; ] } } }, &quot;pynemo/ld-2.28.so&quot;: { &quot;interp&quot;: null, &quot;path&quot;: &quot;pynemo/ld-2.28.so&quot;, &quot;realpath&quot;: &quot;pynemo/ld-2.28.so&quot;, &quot;needed&quot;: [], &quot;rpath&quot;: [], &quot;runpath&quot;: [], &quot;libs&quot;: {} }, &quot;pynemo/libc-2.28.so&quot;: { &quot;interp&quot;: &quot;/lib64/ld-linux-x86-64.so.2&quot;, &quot;path&quot;: &quot;pynemo/libc-2.28.so&quot;, &quot;realpath&quot;: &quot;pynemo/libc-2.28.so&quot;, &quot;needed&quot;: [ &quot;ld-linux-x86-64.so.2&quot; ], &quot;rpath&quot;: [], &quot;runpath&quot;: [], &quot;libs&quot;: { &quot;ld-linux-x86-64.so.2&quot;: { &quot;path&quot;: &quot;/lib64/ld-linux-x86-64.so.2&quot;, &quot;realpath&quot;: &quot;/lib64/ld-2.28.so&quot;, &quot;needed&quot;: [] } } }, &quot;pynemo/libdl-2.28.so&quot;: { &quot;interp&quot;: null, &quot;path&quot;: &quot;pynemo/libdl-2.28.so&quot;, &quot;realpath&quot;: &quot;pynemo/libdl-2.28.so&quot;, &quot;needed&quot;: [ &quot;libc.so.6&quot;, &quot;ld-linux-x86-64.so.2&quot; ], &quot;rpath&quot;: [], &quot;runpath&quot;: [], &quot;libs&quot;: { &quot;libc.so.6&quot;: { &quot;realpath&quot;: &quot;/lib64/libc-2.28.so&quot;, &quot;path&quot;: &quot;/lib64/libc.so.6&quot;, &quot;needed&quot;: [ &quot;ld-linux-x86-64.so.2&quot; ] }, &quot;ld-linux-x86-64.so.2&quot;: { &quot;realpath&quot;: &quot;/lib64/ld-2.28.so&quot;, 
&quot;path&quot;: &quot;/lib64/ld-linux-x86-64.so.2&quot;, &quot;needed&quot;: [] } } }, &quot;pynemo/libgcc_s-8-20210514.so.1&quot;: { &quot;interp&quot;: null, &quot;path&quot;: &quot;pynemo/libgcc_s-8-20210514.so.1&quot;, &quot;realpath&quot;: &quot;pynemo/libgcc_s-8-20210514.so.1&quot;, &quot;needed&quot;: [ &quot;libc.so.6&quot; ], &quot;rpath&quot;: [], &quot;runpath&quot;: [], &quot;libs&quot;: { &quot;libc.so.6&quot;: { &quot;realpath&quot;: &quot;/lib64/libc-2.28.so&quot;, &quot;path&quot;: &quot;/lib64/libc.so.6&quot;, &quot;needed&quot;: [ &quot;ld-linux-x86-64.so.2&quot; ] }, &quot;ld-linux-x86-64.so.2&quot;: { &quot;realpath&quot;: &quot;/lib64/ld-2.28.so&quot;, &quot;path&quot;: &quot;/lib64/ld-linux-x86-64.so.2&quot;, &quot;needed&quot;: [] } } }, &quot;pynemo/libm-2.28.so&quot;: { &quot;interp&quot;: null, &quot;path&quot;: &quot;pynemo/libm-2.28.so&quot;, &quot;realpath&quot;: &quot;pynemo/libm-2.28.so&quot;, &quot;needed&quot;: [ &quot;libc.so.6&quot;, &quot;ld-linux-x86-64.so.2&quot; ], &quot;rpath&quot;: [], &quot;runpath&quot;: [], &quot;libs&quot;: { &quot;libc.so.6&quot;: { &quot;realpath&quot;: &quot;/lib64/libc-2.28.so&quot;, &quot;path&quot;: &quot;/lib64/libc.so.6&quot;, &quot;needed&quot;: [ &quot;ld-linux-x86-64.so.2&quot; ] }, &quot;ld-linux-x86-64.so.2&quot;: { &quot;realpath&quot;: &quot;/lib64/ld-2.28.so&quot;, &quot;path&quot;: &quot;/lib64/ld-linux-x86-64.so.2&quot;, &quot;needed&quot;: [] } } }, &quot;pynemo/libpthread-2.28.so&quot;: { &quot;interp&quot;: &quot;/lib64/ld-linux-x86-64.so.2&quot;, &quot;path&quot;: &quot;pynemo/libpthread-2.28.so&quot;, &quot;realpath&quot;: &quot;pynemo/libpthread-2.28.so&quot;, &quot;needed&quot;: [ &quot;libc.so.6&quot;, &quot;ld-linux-x86-64.so.2&quot; ], &quot;rpath&quot;: [], &quot;runpath&quot;: [], &quot;libs&quot;: { &quot;ld-linux-x86-64.so.2&quot;: { &quot;path&quot;: &quot;/lib64/ld-linux-x86-64.so.2&quot;, &quot;realpath&quot;: &quot;/lib64/ld-2.28.so&quot;, &quot;needed&quot;: [] }, 
&quot;libc.so.6&quot;: { &quot;realpath&quot;: &quot;/lib64/libc-2.28.so&quot;, &quot;path&quot;: &quot;/lib64/libc.so.6&quot;, &quot;needed&quot;: [ &quot;ld-linux-x86-64.so.2&quot; ] } } }, &quot;pynemo/librt-2.28.so&quot;: { &quot;interp&quot;: null, &quot;path&quot;: &quot;pynemo/librt-2.28.so&quot;, &quot;realpath&quot;: &quot;pynemo/librt-2.28.so&quot;, &quot;needed&quot;: [ &quot;libpthread.so.0&quot;, &quot;libc.so.6&quot; ], &quot;rpath&quot;: [], &quot;runpath&quot;: [], &quot;libs&quot;: { &quot;libpthread.so.0&quot;: { &quot;realpath&quot;: &quot;/lib64/libpthread-2.28.so&quot;, &quot;path&quot;: &quot;/lib64/libpthread.so.0&quot;, &quot;needed&quot;: [ &quot;libc.so.6&quot;, &quot;ld-linux-x86-64.so.2&quot; ] }, &quot;libc.so.6&quot;: { &quot;realpath&quot;: &quot;/lib64/libc-2.28.so&quot;, &quot;path&quot;: &quot;/lib64/libc.so.6&quot;, &quot;needed&quot;: [ &quot;ld-linux-x86-64.so.2&quot; ] }, &quot;ld-linux-x86-64.so.2&quot;: { &quot;realpath&quot;: &quot;/lib64/ld-2.28.so&quot;, &quot;path&quot;: &quot;/lib64/ld-linux-x86-64.so.2&quot;, &quot;needed&quot;: [] } } }, &quot;pynemo/libstdc++.so.6.0.25&quot;: { &quot;interp&quot;: null, &quot;path&quot;: &quot;pynemo/libstdc++.so.6.0.25&quot;, &quot;realpath&quot;: &quot;pynemo/libstdc++.so.6.0.25&quot;, &quot;needed&quot;: [ &quot;libm.so.6&quot;, &quot;libc.so.6&quot;, &quot;ld-linux-x86-64.so.2&quot;, &quot;libgcc_s.so.1&quot; ], &quot;rpath&quot;: [], &quot;runpath&quot;: [], &quot;libs&quot;: { &quot;libm.so.6&quot;: { &quot;realpath&quot;: &quot;/lib64/libm-2.28.so&quot;, &quot;path&quot;: &quot;/lib64/libm.so.6&quot;, &quot;needed&quot;: [ &quot;libc.so.6&quot;, &quot;ld-linux-x86-64.so.2&quot; ] }, &quot;libc.so.6&quot;: { &quot;realpath&quot;: &quot;/lib64/libc-2.28.so&quot;, &quot;path&quot;: &quot;/lib64/libc.so.6&quot;, &quot;needed&quot;: [ &quot;ld-linux-x86-64.so.2&quot; ] }, &quot;ld-linux-x86-64.so.2&quot;: { &quot;realpath&quot;: &quot;/lib64/ld-2.28.so&quot;, &quot;path&quot;: 
&quot;/lib64/ld-linux-x86-64.so.2&quot;, &quot;needed&quot;: [] }, &quot;libgcc_s.so.1&quot;: { &quot;realpath&quot;: &quot;/lib64/libgcc_s-8-20210514.so.1&quot;, &quot;path&quot;: &quot;/lib64/libgcc_s.so.1&quot;, &quot;needed&quot;: [ &quot;libc.so.6&quot; ] } } } } </code></pre> <p>Which are included in the container by default, so I am not sure why this is not compliant with auditwheel.</p> <p>The script I am running before building my wheel is:</p> <pre><code>yum -y install curl zip unzip tar autoconf automake libtool pkg-config perl-open python3.11-pip python3.11 -m pip install jinja2 for python in /opt/python/*/bin/python3*; do if [ -x &quot;$python&quot; ]; then echo &quot;Using Python: $python&quot; $python -m pip install jinja2 fi done ln -sf /usr/bin/python3.11 /usr/bin/python3 git clone https://github.com/microsoft/vcpkg.git ./vcpkg/bootstrap-vcpkg.sh ./vcpkg/vcpkg install </code></pre> <p>So I am installing some system packages and installing some things from vcpkg. From what I can tell, vcpkg always compiles the packages on the machine so those should be okay, I am not sure about the installed system packages (curl zip unzip tar autoconf automake libtool pkg-config perl-open).</p> <p>Is there any way I can fix the .so's that the wheel is looking for?</p>
<python><linux><python-manylinux>
2024-11-07 10:26:47
0
6,623
Tom McLean
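[Editor's sketch, not part of the original post: the audit output shows the wheel itself contains copies of glibc (`pynemo/libc-2.28.so`, `pynemo/libpthread-2.28.so`, ...), and `auditwheel` refuses to repair any wheel that vendors glibc. Since a wheel is just a zip archive, listing its shared objects before repair usually pinpoints which build step copied them in; the wheel filename below is the hypothetical one from the question.]

```python
import sys
import zipfile

# Hypothetical wheel path from the question; pass your own as argv[1].
WHEEL = sys.argv[1] if len(sys.argv) > 1 else "pynemo-0.0.0.dev0-cp311-cp311-linux_x86_64.whl"

# glibc member libraries; these must never be vendored into a manylinux wheel.
GLIBC_PREFIXES = ("libc", "libm", "libdl", "libpthread", "librt", "ld-")

def bundled_shared_objects(path):
    """List every .so entry inside a wheel (a wheel is a plain zip archive)."""
    with zipfile.ZipFile(path) as zf:
        return [n for n in zf.namelist() if ".so" in n.rsplit("/", 1)[-1]]

def glibc_members(names):
    """Entries auditwheel will reject because they belong to glibc itself."""
    return [n for n in names if n.rsplit("/", 1)[-1].startswith(GLIBC_PREFIXES)]

if __name__ == "__main__":
    try:
        names = bundled_shared_objects(WHEEL)
    except FileNotFoundError:
        names = []
    bad = set(glibc_members(names))
    for name in names:
        print(name, "<-- remove from the package" if name in bad else "")
```

The fix is then to stop the offending files being copied into the package directory (here most plausibly from the vcpkg install tree) rather than to change anything about auditwheel.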
79,165,925
3,906,713
How to interpolate pandas time series using different timestamps
<p>I am looking for a function</p> <pre><code>pandas_interpolate(df: pd.DataFrame, newTime: pd.DatetimeIndex, method: str = 'linear') -&gt; pd.DataFrame </code></pre> <p>that would take an existing dataframe with a <code>DatetimeIndex</code> index, and return a new dataframe with index given by <code>newTime</code>. For each column, the values of the new dataframe should be evaluated by interpolating the values of the original dataframe. In spirit, this function should behave similarly to <code>numpy.interp</code>. I am aware of the method <code>pandas.DataFrame.interpolate</code>, however, it interpolates existing <code>NAN</code> values, and does not accept a new index as an argument.</p> <p>So far I have 2 ideas</p> <ol> <li>Append new index and the end of the dataframe with all values being NAN, then drop duplicate indices for the exact timestamps that already exist, then use pandas interpolate method, then only select rows with the new index.</li> <li>Convert dataframe to numpy array. Loop over columns, use numpy interpolate, then convert back to dataframe.</li> </ol> <p>Both would surely work, but are quite ugly. 
Is there an intended way to do this?</p> <p><strong>Edit</strong>: Minimal example</p> <pre><code>df = pd.DataFrame({'value': [1, 2, 3]}, index=pd.DatetimeIndex(['2024-01-01', '2024-01-15', '2024-01-30'])) newTime = pd.date_range(start=df.index[0], end=df.index[-1], freq='1D') </code></pre> <p>which results in <code>newTime</code></p> <pre><code>DatetimeIndex(['2024-01-01', '2024-01-02', '2024-01-03', '2024-01-04', '2024-01-05', '2024-01-06', '2024-01-07', '2024-01-08', '2024-01-09', '2024-01-10', '2024-01-11', '2024-01-12', '2024-01-13', '2024-01-14', '2024-01-15', '2024-01-16', '2024-01-17', '2024-01-18', '2024-01-19', '2024-01-20', '2024-01-21', '2024-01-22', '2024-01-23', '2024-01-24', '2024-01-25', '2024-01-26', '2024-01-27', '2024-01-28', '2024-01-29', '2024-01-30'], dtype='datetime64[ns]', freq='D') </code></pre> <p>Then the expected output of the function should be (I have hacked it together here)</p> <pre><code>pd.DataFrame({'value': np.interp(np.arange(1, 31), [1,15,30], [1,2,3])}, index=newTime) </code></pre> <p>which is</p> <pre><code>value 2024-01-01 1.000000 2024-01-02 1.071429 2024-01-03 1.142857 2024-01-04 1.214286 2024-01-05 1.285714 2024-01-06 1.357143 2024-01-07 1.428571 2024-01-08 1.500000 2024-01-09 1.571429 2024-01-10 1.642857 2024-01-11 1.714286 2024-01-12 1.785714 2024-01-13 1.857143 2024-01-14 1.928571 2024-01-15 2.000000 2024-01-16 2.066667 2024-01-17 2.133333 2024-01-18 2.200000 2024-01-19 2.266667 2024-01-20 2.333333 2024-01-21 2.400000 2024-01-22 2.466667 2024-01-23 2.533333 2024-01-24 2.600000 2024-01-25 2.666667 2024-01-26 2.733333 2024-01-27 2.800000 2024-01-28 2.866667 2024-01-29 2.933333 2024-01-30 3.000000 </code></pre> <p><strong>Important NOTE</strong>: Original values may be offset with respect to new values, for example, they may be given with hourly precision. So it is possible that no points of the original index match the new index.</p>
<python><pandas><dataframe><numpy><interpolation>
2024-11-07 10:09:55
1
908
Aleksejs Fomins
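[Editor's sketch, not part of the original post: the idiomatic version of the poster's idea 1 is the union-reindex-interpolate pattern, which matches `np.interp` for linear-in-time data and handles new timestamps that do not coincide with the original index.]

```python
import numpy as np
import pandas as pd

def pandas_interpolate(df: pd.DataFrame, new_time: pd.DatetimeIndex,
                       method: str = "time") -> pd.DataFrame:
    """Interpolate every column of df onto new_time.

    method='time' weights by the actual time deltas (like np.interp on
    timestamps); method='linear' would treat rows as equally spaced.
    """
    union = df.index.union(new_time)  # keep the original points for accuracy
    return df.reindex(union).interpolate(method=method).reindex(new_time)

df = pd.DataFrame({"value": [1, 2, 3]},
                  index=pd.DatetimeIndex(["2024-01-01", "2024-01-15", "2024-01-30"]))
new_time = pd.date_range(df.index[0], df.index[-1], freq="1D")
out = pandas_interpolate(df, new_time)
print(out.iloc[[0, 1, 14, 29]])
```

Because the union step keeps the original timestamps during interpolation and drops them only afterwards, offset indices (e.g. hourly original data) work unchanged.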
79,165,811
19,356,117
Named symbol not found when using cupy to invoke a CUDA kernel
<p>This is my CUDA kernel: <a href="https://pastebin.com/ti95Qy2p" rel="nofollow noreferrer">https://pastebin.com/ti95Qy2p</a>, and I want to invoke the <code>compute_linear_recurrence</code> method in this kernel. But when I use this code:</p> <pre><code>import cupy as cp # code_str is code in https://pastebin.com/ti95Qy2p calc_kernel = cp.RawKernel(code_str, 'compute_linear_recurrence', backend='nvcc') </code></pre> <p>to calculate results, it crashes with <code>cupy_backends.cuda.api.driver.CUDADriverError: CUDA_ERROR_NOT_FOUND: named symbol not found</code>. And not only <code>compute_linear_recurrence</code>, but also the other methods in this kernel can't be invoked. So what happened, and how do I resolve it? <a href="https://stackoverflow.com/questions/77798014/cupy-rawkernel-cuda-error-not-found-named-symbol-not-found-cupy">This question</a> didn't help me.</p>
<python><python-3.x><cuda><cupy>
2024-11-07 09:39:01
1
1,115
forestbat
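[Editor's sketch, not part of the original post: the usual cause of `CUDA_ERROR_NOT_FOUND` with `RawKernel` is C++ name mangling. Unless a `__global__` function is declared `extern "C"`, nvcc/nvrtc emits a mangled symbol, so the lookup by plain name fails. The kernel below is a stand-in, and the launch is guarded so the sketch degrades gracefully when cupy or a CUDA device is absent.]

```python
# Kernel source: without extern "C", nvcc/nvrtc emits a C++-mangled
# symbol name and RawKernel's lookup by the plain name fails with
# CUDA_ERROR_NOT_FOUND.
code = r'''
extern "C" __global__ void scale(const float* x, float* y, float a, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) y[i] = a * x[i];
}
'''

try:
    import numpy as np
    import cupy as cp
    kernel = cp.RawKernel(code, "scale")  # compiled lazily on first launch
    x = cp.arange(8, dtype=cp.float32)
    y = cp.empty_like(x)
    kernel((1,), (8,), (x, y, np.float32(2.0), np.int32(8)))
    print(y)
except Exception:
    # cupy or a CUDA device is unavailable; the fix is in the source above.
    pass
```

Alternatively, keep C++ linkage and use `cp.RawModule(code=code_str, name_expressions=['compute_linear_recurrence'])` followed by `get_function('compute_linear_recurrence')`, which resolves the mangled name for you.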
79,165,506
1,826,066
Expand list of struct column in `polars`
<p>I have a <code>pl.DataFrame</code> with a column that is a <code>list</code> of <code>struct</code> entries. The lengths of the lists might differ:</p> <pre class="lang-py prettyprint-override"><code>pl.DataFrame( { &quot;id&quot;: [1, 2, 3], &quot;s&quot;: [ [ {&quot;a&quot;: 1, &quot;b&quot;: 1}, {&quot;a&quot;: 2, &quot;b&quot;: 2}, {&quot;a&quot;: 3, &quot;b&quot;: 3}, ], [ {&quot;a&quot;: 10, &quot;b&quot;: 10}, {&quot;a&quot;: 20, &quot;b&quot;: 20}, {&quot;a&quot;: 30, &quot;b&quot;: 30}, {&quot;a&quot;: 40, &quot;b&quot;: 40}, ], [ {&quot;a&quot;: 100, &quot;b&quot;: 100}, {&quot;a&quot;: 200, &quot;b&quot;: 200}, {&quot;a&quot;: 300, &quot;b&quot;: 300}, {&quot;a&quot;: 400, &quot;b&quot;: 400}, {&quot;a&quot;: 500, &quot;b&quot;: 500}, ], ], } ) </code></pre> <p>This looks like this:</p> <pre><code>shape: (3, 2) ┌─────┬─────────────────────────────────┐ │ id ┆ s │ │ --- ┆ --- │ │ i64 ┆ list[struct[2]] │ ╞═════╪═════════════════════════════════╡ │ 1 ┆ [{1,1}, {2,2}, {3,3}] │ │ 2 ┆ [{10,10}, {20,20}, … {40,40}] │ │ 3 ┆ [{100,100}, {200,200}, … {500,… │ └─────┴─────────────────────────────────┘ </code></pre> <p>I've tried various versions of <code>unnest</code> and <code>explode</code>, but I am failing to turn this into a long <code>pl.DataFrame</code> where the <code>list</code> is turned into rows and the <code>struct</code> entries into columns. 
This is what I want to see:</p> <pre class="lang-py prettyprint-override"><code>pl.DataFrame( { &quot;id&quot;: [1, 1, 1, 2, 2, 2, 2, 3, 3, 3, 3, 3], &quot;a&quot;: [1, 2, 3, 10, 20, 30, 40, 100, 200, 300, 400, 500], &quot;b&quot;: [1, 2, 3, 10, 20, 30, 40, 100, 200, 300, 400, 500], } ) </code></pre> <p>Which looks like this:</p> <pre><code>shape: (12, 3) ┌─────┬─────┬─────┐ │ id ┆ a ┆ b │ │ --- ┆ --- ┆ --- │ │ i64 ┆ i64 ┆ i64 │ ╞═════╪═════╪═════╡ │ 1 ┆ 1 ┆ 1 │ │ 1 ┆ 2 ┆ 2 │ │ 1 ┆ 3 ┆ 3 │ │ 2 ┆ 10 ┆ 10 │ │ 2 ┆ 20 ┆ 20 │ │ … ┆ … ┆ … │ │ 3 ┆ 100 ┆ 100 │ │ 3 ┆ 200 ┆ 200 │ │ 3 ┆ 300 ┆ 300 │ │ 3 ┆ 400 ┆ 400 │ │ 3 ┆ 500 ┆ 500 │ └─────┴─────┴─────┘ </code></pre> <p>Is there a way to manipulate the first <code>pl.DataFrame</code> into the second <code>pl.DataFrame</code>?</p>
<python><dataframe><python-polars>
2024-11-07 08:06:02
1
1,351
Thomas
79,165,357
1,537,366
update on pandas join/merge performance in python
<p>(I post this self-answered question to share my own tests.) There are a huge number of ways to join two DataFrames together in Python/Pandas. Previous performance analyses indicated that <code>DataFrame.join</code> is faster than <code>DataFrame.merge</code> and that it is best that one table has an index on the column to be joined on. None of these are true anymore as it seems that <code>DataFrame.merge</code> has improved a lot. However, I find it still slower than some alternatives. Is there an updated performance comparison between these alternative methods with the latest versions of Python/Pandas in 2024? In particular, I am interested in the most common case, which is a join of a large left table to a smaller right table which retains the left index and with no missing values (so join is the same as left join). The sizes can be about 10000000 for the left table and 100000 for the right table which may have up to 10 columns. And copy-on-write mode should be turned on because it's the future of Pandas.</p>
<python><pandas><dataframe><join><left-join>
2024-11-07 07:08:16
1
1,217
user1537366
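[Editor's sketch, not part of the original post: a minimal harness for this kind of comparison, with sizes scaled down from the question's 10,000,000 x 100,000, that checks the three common spellings give identical results before timing them.]

```python
import time

import numpy as np
import pandas as pd

# Scaled-down stand-ins for the sizes discussed in the question.
n_left, n_right = 200_000, 2_000
rng = np.random.default_rng(0)
left = pd.DataFrame({"key": rng.integers(0, n_right, n_left)})
right = pd.DataFrame({"key": np.arange(n_right), "val": rng.random(n_right)})
right_indexed = right.set_index("key")

def bench(fn):
    """Run fn once and return (result, elapsed seconds)."""
    t0 = time.perf_counter()
    result = fn()
    return result, time.perf_counter() - t0

# Three equivalent left joins that all preserve the left row order:
m, t_merge = bench(lambda: left.merge(right, on="key", how="left"))
j, t_join = bench(lambda: left.join(right_indexed, on="key"))
mapped, t_map = bench(lambda: left.assign(val=left["key"].map(right_indexed["val"])))

print(f"merge: {t_merge:.4f}s  join: {t_join:.4f}s  map: {t_map:.4f}s")
```

For a single looked-up column, `Series.map` against an indexed Series is often the fastest spelling; for several columns, `merge`/`join` amortize better.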
79,165,299
219,153
Why is the grid not shown in this matplotlib script?
<p>Using Python 3.12 and Matplotlib 3.8.4 on Ubuntu 22.04. I would like to see a uniform grid with 0.25 spacing. This snippet:</p> <pre><code>import matplotlib.pyplot as plt from matplotlib.ticker import MultipleLocator gridSpacing = MultipleLocator(0.25) plt.clf() plt.gca().set_aspect('equal') plt.gca().xaxis.set_minor_locator(gridSpacing) plt.gca().yaxis.set_minor_locator(gridSpacing) plt.gca().grid(which='minor', axis='both') plt.plot([1, 2, 3], color='r', linewidth=1, marker='o') plt.show() </code></pre> <p>produces a plot with random grid when the window is small:</p> <p><a href="https://i.sstatic.net/OsxIwm18.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/OsxIwm18.png" alt="enter image description here" /></a></p> <p>or no grid at all, when it is larger:</p> <p><a href="https://i.sstatic.net/A2XR6Zp8.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/A2XR6Zp8.png" alt="enter image description here" /></a></p> <p>How to get a uniform grid with 0.25 spacing?</p> <p>Note that I intend to use this snippet in a loop, that's why you see <code>plt.clf()</code>, etc.</p> <hr /> <p>I applied an answer from @Fatima and it works, but when the window is small, some grid lines are missing. It gets worse when I use <code>set_major_locator</code>, e.g.:</p> <pre><code>gridSpacing = MultipleLocator(0.25) plt.clf() plt.gca().set_aspect('equal') plt.gca().xaxis.set_major_locator(gridSpacing) plt.gca().yaxis.set_major_locator(gridSpacing) plt.gca().grid() plt.plot([1, 2, 3], color='r', linewidth=1, marker='o') plt.show() </code></pre> <p>which plots this:</p> <p><a href="https://i.sstatic.net/DasR2uA4.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/DasR2uA4.png" alt="enter image description here" /></a></p> <p>Grid lines are missing regardless of window size. What is the reason?</p>
<python><matplotlib><grid>
2024-11-07 06:48:17
0
8,585
Paul Jurczak
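[Editor's sketch, not part of the original post: Matplotlib locator instances keep a reference to the axis they are attached to, so one `MultipleLocator` object must not be shared between the x- and y-axis; that shared state is a plausible cause of the erratic/missing grid in both snippets. Creating one locator per axis gives a stable 0.25 grid (headless backend used here only so the sketch runs anywhere):]

```python
import matplotlib
matplotlib.use("Agg")  # headless backend so the sketch runs anywhere
import matplotlib.pyplot as plt
from matplotlib.ticker import MultipleLocator

fig, ax = plt.subplots()
ax.set_aspect("equal")
# One locator instance per axis: a locator stores a reference to its
# axis, so reusing the same object for both axes corrupts its state.
ax.xaxis.set_minor_locator(MultipleLocator(0.25))
ax.yaxis.set_minor_locator(MultipleLocator(0.25))
ax.grid(which="minor", axis="both")
ax.plot([1, 2, 3], color="r", linewidth=1, marker="o")
fig.canvas.draw()  # force tick computation so the locations can be inspected
xticks = ax.xaxis.get_minorticklocs()
print(xticks[:5])
```

The same rule applies to the `set_major_locator` variant: build a fresh `MultipleLocator(0.25)` for each axis inside the loop.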
79,165,267
16,525,263
How to explode arraytype columns in pyspark dataframe
<p>I have a pyspark dataframe as below.</p> <p>I need to explode the <code>Items</code> and <code>Value1</code> columns. This is my code at present:</p> <pre class="lang-py prettyprint-override"><code>df_ob_exploded = df.withColumn('op_it.objects', F.explode(F.array(*[F.array(F.col('op_it.objects')[i]['Items'][j].Name, F.array(*[F.col('op_it.Objects')[i]['Items'[j]['Items'][k].Name for k in range(9)]).cast('string'), F.array(*[F.from_json(F.col('op_it.objects')[i]['Items'][k].Selected, sc).getItem('Value') for k in range(9)]).cast('string'), F.from_json(F.col('op_it.objects')[i]['Items'][j].Selected,sc).getItem('Value'))for i in range(9) for j in range(9)]))).dropDuplicates() df_obj_json = df_ob_exploded.select('ID', F.col('op_it.object'[0].alias('Name'), F.col('op_it.objects')[1].alias('Items'), F.col('op_it.object'[2].alias('Value1'), F.col('op_it.objects')[1].alias('Value'))\ .na.drop(how='all', subset=['Name','Items','Value1','Value']) </code></pre> <p>I am unable to explode the <code>Items</code> and <code>Value1</code> columns.</p> <pre><code>ID Name Items Value1 Value 1 Contact [platform,chat, , ,] [,,,,,] null 1 action [,,,,,,] [,,,,,] windows 1 cycle [,,,,,,] [,,,,,] article </code></pre>
<python><apache-spark><pyspark>
2024-11-07 06:38:21
2
434
user175025
79,165,110
19,356,117
CompileException occurs when compiling a .cu file with cupy
<p>I have a .cu file with these headers:</p> <pre><code>#include &lt;/usr/include/features.h&gt; #include &lt;/usr/include/assert.h&gt; #include &lt;/usr/include/stdio.h&gt; </code></pre> <p>When I use the <code>nvcc</code> command to compile this file, it passes, but when I use <code>cupy.RawKernel()</code> to execute code in this .cu file, it crashes with this:</p> <pre><code>PyDev console: starting. Traceback (most recent call last): File &quot;/home/username/.local/share/JetBrains/IntelliJIdea2024.2/python/helpers-pro/pydevd_asyncio/pydevd_asyncio_utils.py&quot;, line 117, in _exec_async_code result = func() ^^^^^^ File &quot;&lt;input&gt;&quot;, line 1, in &lt;module&gt; File &quot;cupy/_core/raw.pyx&quot;, line 93, in cupy._core.raw.RawKernel.__call__ File &quot;cupy/_core/raw.pyx&quot;, line 100, in cupy._core.raw.RawKernel.kernel.__get__ File &quot;cupy/_core/raw.pyx&quot;, line 117, in cupy._core.raw.RawKernel._kernel File &quot;cupy/_util.pyx&quot;, line 64, in cupy._util.memoize.decorator.ret File &quot;cupy/_core/raw.pyx&quot;, line 538, in cupy._core.raw._get_raw_module File &quot;cupy/_core/core.pyx&quot;, line 2265, in cupy._core.core.compile_with_cache File &quot;cupy/_core/core.pyx&quot;, line 2283, in cupy._core.core.compile_with_cache File &quot;/home/username/.conda/envs/torchhydro1/lib/python3.11/site-packages/cupy/cuda/compiler.py&quot;, line 498, in _compile_module_with_cache return _compile_with_cache_cuda( ^^^^^^^^^^^^^^^^^^^^^^^^^ File &quot;/home/username/.conda/envs/torchhydro1/lib/python3.11/site-packages/cupy/cuda/compiler.py&quot;, line 577, in _compile_with_cache_cuda ptx, mapping = compile_using_nvrtc( ^^^^^^^^^^^^^^^^^^^^ File &quot;/home/username/.conda/envs/torchhydro1/lib/python3.11/site-packages/cupy/cuda/compiler.py&quot;, line 333, in compile_using_nvrtc return _compile(source, options, cu_path, ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File 
&quot;/home/username/.conda/envs/torchhydro1/lib/python3.11/site-packages/cupy/cuda/compiler.py&quot;, line 317, in _compile compiled_obj, mapping = prog.compile(options, log_stream) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File &quot;/home/username/.conda/envs/torchhydro1/lib/python3.11/site-packages/cupy/cuda/compiler.py&quot;, line 711, in compile raise CompileException(log, self.src, self.name, options, cupy.cuda.compiler.CompileException: /usr/include/features.h(439): catastrophic error: cannot open source file &quot;stdc-predef.h&quot; 1 catastrophic error detected in the compilation of &quot;/tmp/tmpjszp_wuk/239939fff8cdfee531aa36905d9ea0a21ebe8db4.cubin.cu&quot;. Compilation terminated. </code></pre> <p>So how do I reference the correct header files rather than putting absolute paths in <code>#include</code>?</p>
<python><c++><python-3.x><cuda><cupy>
2024-11-07 05:24:23
1
1,115
forestbat
79,164,899
4,971,515
In PyQt5, always show a QComboBox using QStyledItemDelegate in a QTableView (not just when a cell is being edited)
<p>I'm subclassing <code>QStyledItemDelegate</code> to show a dropdown <code>QCombobox</code> in a <code>QTableView</code>. The delegate detects the dataType from the table's model, and if the <code>dataType == EnumMeta</code>, shows a dropdown list of options.</p> <p>With my current implementation, the dropdown is only displayed after a user double clicks a table cell. The user must then click the dropdown again to show the available options.</p> <p><strong>Question:</strong></p> <p>How do I always show the <code>QCombobox</code>, and have it open immediately when the user clicks the table cell?</p> <p><strong>Example Code</strong></p> <pre class="lang-py prettyprint-override"><code>from enum import EnumMeta from qgis.PyQt.QtWidgets import ( QStyledItemDelegate, QWidget, QComboBox, ) from qgis.PyQt.QtGui import QPainter from qgis.PyQt.QtCore import QModelIndex, QSize, QAbstractItemModel, Qt from .shared_constants import CustomRoles class BGCustomDelegate(QStyledItemDelegate): def __init__(self, parent=None): super(BGCustomDelegate, self).__init__(parent) self.setParent(parent) def paint(self, painter: QPainter, option, index: QModelIndex) -&gt; None: QStyledItemDelegate.paint(self, painter, option, index) def sizeHint(self, option, index: QModelIndex) -&gt; QSize: return QStyledItemDelegate.sizeHint(self, option, index) def createEditor(self, parent: QWidget, option, index: QModelIndex) -&gt; QWidget: if index.data(CustomRoles.dataType.value) == EnumMeta: editor = QComboBox(parent) allowed_values_enum: EnumMeta = index.data(CustomRoles.allowedValues.value) allowed_values = [e.value for e in allowed_values_enum] editor.addItems(allowed_values) return editor else: return QStyledItemDelegate.createEditor(self, parent, option, index) def setEditorData(self, editor: QWidget, index: QModelIndex) -&gt; None: if index.data(CustomRoles.dataType.value) == EnumMeta: editor: QComboBox current_value = index.data(Qt.EditRole) combo_index = editor.findText(current_value) if 
combo_index &gt;= 0: editor.setCurrentIndex(combo_index) else: QStyledItemDelegate.setEditorData(self, editor, index) def setModelData( self, editor: QWidget, model: QAbstractItemModel, index: QModelIndex ) -&gt; None: if index.data(CustomRoles.dataType.value) == EnumMeta: editor: QComboBox current_value = editor.currentText() model.setData(index, current_value, Qt.EditRole) else: QStyledItemDelegate.setModelData(self, editor, model, index) </code></pre>
<python><pyqt><pyqt5>
2024-11-07 03:04:05
0
326
Jesse Reilly
79,164,771
901,827
Extract multiple sparsely packed responses to yes/no identifiers while preserving row information
<p>I have some data from Google Sheets that has a multi-response question, like so:</p> <pre><code> Q1 Q2 ... Multi-Response 0 ... ... ... &quot;A; B&quot; 1 ... ... ... &quot;B; C&quot; 2 ... ... ... &quot;D; F&quot; 3 ... ... ... &quot;A; B; F&quot; </code></pre> <p>(Note the whitespace, the separator is <code>'; '</code> for weird workaround reasons with the way the survey writer wrote the questions and how Google Sheets chose to output the response table)</p> <p>I'm trying to expand this, so I can do some k-modes clustering on it:</p> <pre><code> Q1 Q2 ... A B C D F 0 ... ... ... 1 1 0 0 0 1 ... ... ... 0 1 1 0 0 2 ... ... ... 0 0 0 1 1 3 ... ... ... 1 1 0 0 1 </code></pre> <p>The idea is more or less mapping each response list to a series of &quot;do you agree? yes/no&quot; questions.</p> <p>But I can't quite figure out how to transform the dataframe to that format. I tried to use <code>pivot_table</code> and <code>get_dummies</code>, but if it can do this, it's not clear to me exactly how it works.</p> <p>I can get a table of responses with</p> <pre class="lang-py prettyprint-override"><code>multi_selection_question = data.keys()[-1] expanded = data[multi_selection_question].str.split('; ', expand=True) </code></pre> <p>which yields something like</p> <pre><code> 0 1 2 0 A B None 1 B C None 2 D F None 3 A B F </code></pre> <p>And a list of questions that would be the proper column names with:</p> <pre class="lang-py prettyprint-override"><code>questions = pandas.Series(expanded.values.flatten()).unique() </code></pre> <p>But the examples for <code>pivot_table</code> or <code>get_dummies</code> that I've seen seem to require data in a different format with a more consistent column structure than what this outputs. 
Using <code>get_dummies</code> for instance makes a separate category for each <code>(column,question)</code> pair, so for the example table above - <code>2_F</code>, <code>3_F</code>, <code>1_B</code>, <code>2_B</code> etc.</p> <p>Of course I could just resort to a couple loops and build up a new dataframe row-by-row and <code>concat</code> it, but <em>usually</em> there's a better way in pandas.</p>
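One likely answer to the question above: <code>Series.str.get_dummies</code> splits on a separator and one-hot encodes in a single step, preserving the row index, which avoids the per-column categories that <code>pandas.get_dummies</code> produces. A hedged sketch with a stand-in frame (the <code>Q1</code> filler column and values are illustrative, not from the real survey):

```python
import pandas as pd

# Stand-in for the survey data; only the multi-response column matters.
df = pd.DataFrame({
    "Q1": ["w", "x", "y", "z"],
    "Multi-Response": ["A; B", "B; C", "D; F", "A; B; F"],
})

# str.get_dummies splits each cell on the separator and one-hot encodes
# the resulting tokens, keeping the original row index intact.
dummies = df["Multi-Response"].str.get_dummies(sep="; ")

# Replace the packed column with the indicator columns.
out = pd.concat([df.drop(columns="Multi-Response"), dummies], axis=1)
print(out)
```

Note that <code>sep="; "</code> matches the question's separator-with-whitespace convention directly, so no pre-stripping is needed.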
<python><pandas><google-sheets>
2024-11-07 01:36:15
2
22,476
Linear
79,164,770
9,251,158
How to run doc-tests without printing output
<p>I want to run doc-tests and get the number of failures, but not print any output. For example, I tried this:</p> <pre class="lang-py prettyprint-override"><code> with open(os.devnull, 'w') as sys.stdout: tests_failed, tests_run = doctest.testmod(some_module, optionflags=doctest.ELLIPSIS) </code></pre> <p>but this does not play nice with the test runner suite, which requires <code>sys.stdout</code> to write to a JSON file.</p> <p>How can I run doc-tests without printing any output?</p>
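One direction for the question above: instead of rebinding <code>sys.stdout</code>, drive the doctests through <code>DocTestRunner.run</code>, whose <code>out</code> parameter receives all report output. Handing it a no-op keeps <code>sys.stdout</code> untouched for the surrounding test runner. A sketch (the helper name and the throwaway demo module are illustrative):

```python
import doctest
import io
import types
from contextlib import redirect_stdout

def count_failures(module) -> tuple[int, int]:
    """Run a module's doctests silently; return (failures, attempted).

    DocTestRunner.run accepts an `out` callable for its report text, so
    nothing reaches sys.stdout and no global rebinding is needed.
    """
    runner = doctest.DocTestRunner(optionflags=doctest.ELLIPSIS)
    for test in doctest.DocTestFinder().find(module):
        runner.run(test, out=lambda text: None)
    return runner.failures, runner.tries

# Demo on a throwaway module whose only doctest deliberately fails:
demo = types.ModuleType("demo")
demo.__doc__ = ">>> 1 + 1\n3\n"
print(count_failures(demo))
```

If the JSON-writing runner only needs <code>sys.stdout</code> restored afterwards, <code>contextlib.redirect_stdout(io.StringIO())</code> around <code>doctest.testmod(...)</code> is a scoped alternative that undoes itself on exit.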
<python><stdout><doctest>
2024-11-07 01:36:06
1
4,642
ginjaemocoes
79,164,756
8,800,836
Remove specific indices in each row of a numpy ndarray
<p>I have integer arrays of the type:</p> <pre class="lang-py prettyprint-override"><code>import numpy as np seed_idx = np.asarray([[0, 1], [1, 2], [2, 3], [3, 4]], dtype=np.int_) target_idx = np.asarray([[2,9,4,1,8], [9,7,6,2,4], [1,0,0,4,9], [7,1,2,3,8]], dtype=np.int_) </code></pre> <p>For each row of <code>target_idx</code>, I want to select the elements whose indices are <em>not</em> the ones in <code>seed_idx</code>. The resulting array should thus be:</p> <pre><code>[[4,1,8], [9,2,4], [1,0,9], [7,1,2]] </code></pre> <p>In other words, I want to do something similar to <code>np.take_along_axis(target_idx, seed_idx, axis=1)</code>, but excluding the indices instead of keeping them.</p> <p>What is the most elegant way to do this? I find it surprisingly annoying to find something neat.</p>
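A compact way to express the "exclude instead of keep" selection above: invert an all-True mask with <code>np.put_along_axis</code>, then index and reshape. This assumes every row drops the same number of indices (true here, since <code>seed_idx</code> has a fixed width):

```python
import numpy as np

seed_idx = np.asarray([[0, 1], [1, 2], [2, 3], [3, 4]])
target_idx = np.asarray([[2, 9, 4, 1, 8],
                         [9, 7, 6, 2, 4],
                         [1, 0, 0, 4, 9],
                         [7, 1, 2, 3, 8]])

# Start from an all-True keep-mask, knock out the per-row indices,
# then reshape; the reshape is valid because every row loses the
# same number of entries.
mask = np.ones(target_idx.shape, dtype=bool)
np.put_along_axis(mask, seed_idx, False, axis=1)
result = target_idx[mask].reshape(target_idx.shape[0], -1)
print(result)
```

Boolean indexing flattens in row-major order, so the trailing reshape restores one row per input row.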
<python><numpy><numpy-ndarray><numpy-slicing>
2024-11-07 01:27:03
2
539
Ben
79,164,737
22,407,544
How to make a Django `url` case-insensitive?
<p>For example, if I visit <code>http://localhost:8000/detail/PayPal</code> I get a Page not found error 404 with the following message:</p> <pre><code>Using the URLconf ... Django tried these URL patterns, in this order: ... detail/&lt;slug:slug&gt; [name='processor_detail'] The current path, detail/PayPal, matched the last one. </code></pre> <p>Here is my code: <code>views.py</code>:</p> <pre class="lang-py prettyprint-override"><code>class ProcessorDetailView(DetailView): model = Processor template_name = 'finder/processor_detail.html' slug_field = 'slug' # Tell DetailView to use the `slug` model field as the DetailView slug slug_url_kwarg = 'slug' # Match the URL parameter name </code></pre> <p><code>models.py</code>:</p> <pre class="lang-py prettyprint-override"><code>class Processor(models.Model): #the newly created database model and below are the fields name = models.CharField(max_length=250, blank=True, null=True) #textField used for larger strings, CharField, smaller slug = models.SlugField(max_length=250, blank=True) ... def __str__(self): #displays some of the template information instead of 'Processot object' if self.name: return self.name[0:20] else: return '--no processor name listed--' def get_absolute_url(self): # new return reverse(&quot;processor_detail&quot;, args=[str(self.slug)]) def save(self, *args, **kwargs): #`save` model a certain way(detailed in rest of function below) if not self.slug: #if there is no value in `slug` field then... self.slug = slugify(self.name) #...save a slugified `name` field value as the value in `slug` field super().save(*args, **kwargs) </code></pre> <p><code>urls.py</code>: <code>path(&quot;detail/&lt;slug:slug&gt;&quot;, views.ProcessorDetailView.as_view(), name='processor_detail')</code></p> <p>I want that if I follow a link it either 1. doesn't matter what case I use or 2. the case in the browser url window changes to all lowercase.</p>
<python><django><django-urls>
2024-11-07 01:09:03
4
359
tthheemmaannii
79,164,713
4,755,229
How to check whether installed packages match "micromamba list" after pip broke the environment?
<p>I tried to install a package (<code>hyperfit</code>) which is only availabe via PyPI. Given that I have my whole environment set up with <code>micromamba</code>, I installed all the dependency, and tried to install the package. Well, the thing is, <code>pip</code> broke my environment, silently, without asking me. Specifically, it uninstalled the latest <code>numpy</code> I had and reverted it to old version without letting me know. (See snippet 1.)</p> <p>The problem is, when I ran <code>micromamba list</code>, the version didn't seem to be affected by the pip, as shown by the example below.</p> <pre><code>numpy 2.0.2 py312h58c1407_0 conda-forge </code></pre> <p>When I checked with python, it did import <code>1.2.6</code>, not <code>2.0.2</code>, which I had to force-reinstall the numpy.</p> <p>The thing is, I had similar experience with pip before. I don't know how many packages were altered unnoticed. How do I check if all packages are as shown in <code>micromamba list</code>, or at the very least, force <code>micromamba</code> to reinstall all the packages?</p> <h3>Snippet 1.</h3> <pre><code>❯ pip install hyperfit 2024-11-06 18:32:20 CST Collecting hyperfit Using cached hyperfit-0.1.7-py3-none-any.whl.metadata (1.5 kB) Requirement already satisfied: numpy&gt;=1.20.0 in /home/hcho/mamba/envs/py12/lib/python3.12/site-packages (from hyperfit) (2.0.2) Requirement already satisfied: scipy&gt;=1.6.0 in /home/hcho/mamba/envs/py12/lib/python3.12/site-packages (from hyperfit) (1.14.1) Requirement already satisfied: zeus-mcmc&gt;=2.3.0 in /home/hcho/mamba/envs/py12/lib/python3.12/site-packages (from hyperfit) (2.5.4) Requirement already satisfied: pandas&gt;=1.2.0 in /home/hcho/mamba/envs/py12/lib/python3.12/site-packages (from hyperfit) (2.2.2) Requirement already satisfied: emcee&gt;=3.0.0 in /home/hcho/mamba/envs/py12/lib/python3.12/site-packages (from hyperfit) (3.1.6) Requirement already satisfied: snowline&gt;=0.5.0 in 
/home/hcho/mamba/envs/py12/lib/python3.12/site-packages (from hyperfit) (0.6.3) Requirement already satisfied: python-dateutil&gt;=2.8.2 in /home/hcho/mamba/envs/py12/lib/python3.12/site-packages (from pandas&gt;=1.2.0-&gt;hyperfit) (2.9.0) Requirement already satisfied: pytz&gt;=2020.1 in /home/hcho/mamba/envs/py12/lib/python3.12/site-packages (from pandas&gt;=1.2.0-&gt;hyperfit) (2024.2) Requirement already satisfied: tzdata&gt;=2022.7 in /home/hcho/mamba/envs/py12/lib/python3.12/site-packages (from pandas&gt;=1.2.0-&gt;hyperfit) (2024.2) Requirement already satisfied: pypmc in /home/hcho/mamba/envs/py12/lib/python3.12/site-packages (from snowline&gt;=0.5.0-&gt;hyperfit) (1.2.2) Requirement already satisfied: iminuit in /home/hcho/mamba/envs/py12/lib/python3.12/site-packages (from snowline&gt;=0.5.0-&gt;hyperfit) (2.30.1) Requirement already satisfied: tqdm in /home/hcho/mamba/envs/py12/lib/python3.12/site-packages (from zeus-mcmc&gt;=2.3.0-&gt;hyperfit) (4.67.0) Requirement already satisfied: setuptools in /home/hcho/mamba/envs/py12/lib/python3.12/site-packages (from zeus-mcmc&gt;=2.3.0-&gt;hyperfit) (75.3.0) Requirement already satisfied: pytest in /home/hcho/mamba/envs/py12/lib/python3.12/site-packages (from zeus-mcmc&gt;=2.3.0-&gt;hyperfit) (8.3.3) Requirement already satisfied: matplotlib in /home/hcho/mamba/envs/py12/lib/python3.12/site-packages (from zeus-mcmc&gt;=2.3.0-&gt;hyperfit) (3.9.2) Requirement already satisfied: seaborn in /home/hcho/mamba/envs/py12/lib/python3.12/site-packages (from zeus-mcmc&gt;=2.3.0-&gt;hyperfit) (0.13.2) Requirement already satisfied: scikit-learn in /home/hcho/mamba/envs/py12/lib/python3.12/site-packages (from zeus-mcmc&gt;=2.3.0-&gt;hyperfit) (1.5.2) Requirement already satisfied: six&gt;=1.5 in /home/hcho/mamba/envs/py12/lib/python3.12/site-packages (from python-dateutil&gt;=2.8.2-&gt;pandas&gt;=1.2.0-&gt;hyperfit) (1.16.0) Requirement already satisfied: contourpy&gt;=1.0.1 in 
/home/hcho/mamba/envs/py12/lib/python3.12/site-packages (from matplotlib-&gt;zeus-mcmc&gt;=2.3.0-&gt;hyperfit) (1.3.0) Requirement already satisfied: cycler&gt;=0.10 in /home/hcho/mamba/envs/py12/lib/python3.12/site-packages (from matplotlib-&gt;zeus-mcmc&gt;=2.3.0-&gt;hyperfit) (0.12.1) Requirement already satisfied: fonttools&gt;=4.22.0 in /home/hcho/mamba/envs/py12/lib/python3.12/site-packages (from matplotlib-&gt;zeus-mcmc&gt;=2.3.0-&gt;hyperfit) (4.54.1) Requirement already satisfied: kiwisolver&gt;=1.3.1 in /home/hcho/mamba/envs/py12/lib/python3.12/site-packages (from matplotlib-&gt;zeus-mcmc&gt;=2.3.0-&gt;hyperfit) (1.4.7) Requirement already satisfied: packaging&gt;=20.0 in /home/hcho/mamba/envs/py12/lib/python3.12/site-packages (from matplotlib-&gt;zeus-mcmc&gt;=2.3.0-&gt;hyperfit) (24.1) Requirement already satisfied: pillow&gt;=8 in /home/hcho/mamba/envs/py12/lib/python3.12/site-packages (from matplotlib-&gt;zeus-mcmc&gt;=2.3.0-&gt;hyperfit) (11.0.0) Requirement already satisfied: pyparsing&gt;=2.3.1 in /home/hcho/mamba/envs/py12/lib/python3.12/site-packages (from matplotlib-&gt;zeus-mcmc&gt;=2.3.0-&gt;hyperfit) (3.2.0) Collecting numpy&gt;=1.20.0 (from hyperfit) Downloading numpy-1.26.4-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl.metadata (61 kB) Requirement already satisfied: iniconfig in /home/hcho/mamba/envs/py12/lib/python3.12/site-packages (from pytest-&gt;zeus-mcmc&gt;=2.3.0-&gt;hyperfit) (2.0.0) Requirement already satisfied: pluggy&lt;2,&gt;=1.5 in /home/hcho/mamba/envs/py12/lib/python3.12/site-packages (from pytest-&gt;zeus-mcmc&gt;=2.3.0-&gt;hyperfit) (1.5.0) Requirement already satisfied: joblib&gt;=1.2.0 in /home/hcho/mamba/envs/py12/lib/python3.12/site-packages (from scikit-learn-&gt;zeus-mcmc&gt;=2.3.0-&gt;hyperfit) (1.4.2) Requirement already satisfied: threadpoolctl&gt;=3.1.0 in /home/hcho/mamba/envs/py12/lib/python3.12/site-packages (from scikit-learn-&gt;zeus-mcmc&gt;=2.3.0-&gt;hyperfit) (3.5.0) Using cached 
hyperfit-0.1.7-py3-none-any.whl (245 kB) Downloading numpy-1.26.4-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (18.0 MB) ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 18.0/18.0 MB 93.5 MB/s eta 0:00:00 Installing collected packages: numpy, hyperfit Attempting uninstall: numpy Found existing installation: numpy 2.0.2 Uninstalling numpy-2.0.2: Successfully uninstalled numpy-2.0.2 Successfully installed hyperfit-0.1.7 numpy-1.26.4 </code></pre>
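On the CLI side, <code>pip check</code> reports dependency inconsistencies, and reinstalling suspect packages through micromamba (e.g. with a force-reinstall flag, if your micromamba version supports one) restores the conda-managed files. To audit what Python actually imports against what <code>micromamba list</code> claims, a small helper can cross-check <code>importlib.metadata</code> versions; the function name is mine, and the <code>expected</code> mapping would in practice be built from <code>micromamba list --json</code>:

```python
from importlib import metadata

def find_version_mismatches(expected: dict) -> dict:
    """Return {name: (expected, installed)} for every distribution whose
    installed version differs from the expected one; installed is None
    when the distribution is missing entirely."""
    mismatches = {}
    for name, want in expected.items():
        try:
            have = metadata.version(name)
        except metadata.PackageNotFoundError:
            have = None
        if have != want:
            mismatches[name] = (want, have)
    return mismatches

# A made-up entry just to show the report shape:
print(find_version_mismatches({"surely-not-installed-xyz": "1.0"}))
```

This catches the numpy-style case from the question, where the metadata on disk (what <code>importlib.metadata</code> and Python see) has silently diverged from micromamba's records.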
<python><pip><mamba><micromamba>
2024-11-07 00:48:53
1
498
Hojin Cho
79,164,666
1,991,502
Mypy type complaint not tracking if-branch logic. How should I fix it?
<p>I have the following code:</p> <pre><code>array: list[list[str | None]] = [] # ... item_text: str = &quot;&quot; if array[0][col] is not None: item_text = array[0][col] </code></pre> <p>In VSCode, mypy throws the following complaint</p> <pre><code>Incompatible types in assignment (expression has type &quot;str | None&quot;, variable has type &quot;str&quot;) </code></pre> <p>(see attached figure). The if branch conditioning the type of <code>array[0][col]</code> does not seem to be registered by mypy. Can I remedy this?</p> <p><a href="https://i.sstatic.net/OBVytJ18.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/OBVytJ18.png" alt="enter image description here" /></a></p>
<python><python-typing><mypy>
2024-11-07 00:12:08
0
749
DJames
79,164,381
2,528,063
HTTP 404 on flask server when just returning provided parameter
<p>I have a simple flask-app like this:</p> <pre><code>from flask import Flask app = Flask(__name__) @app.route(&quot;/getExposeIds/&lt;cookie&gt;&quot;) def getExposeIds(cookie): return cookie </code></pre> <p>I'm running the app as follows:</p> <pre><code>flask --app myscript run </code></pre> <p>When I call that url in my brower using <code>localhost:5000/getExposeIds/whatever</code> everything works as expected and my service returns <code>&quot;whatever&quot;</code>. However when I provide an (url-encoded) value like this one</p> <pre><code>aws-waf-token%3D48eba622-3632-46b6-ae55-9f0637582425%3ACQoAmXuXthkqAQAA%3AhoU4a212KQ5MCAFIDflGc6vLRzYVJJAyzQS4mY%2BinAkE00MuRpE5YMc9ayD3wiUe4WEsggWn8fGH4holoE6w8khw3YTuOXQ0mUJcwmTyBAeswFnUzqPa1XvrK1DCsAazLqsI8o9RYwTDQ%2BflHbw12xJ9yfb0E7Vx6Y6d07ATWI1FbJDGb%2BzkbuY8WCCiM%2Bi6KA0%2B9u0jD59M%2FYwoPOdM2g%3D%3D </code></pre> <p>I get an HTTP 404.</p>
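A plausible cause for the question above: the sample token contains <code>%2F</code>, which decodes to <code>/</code>, and the implicit <code>&lt;string&gt;</code> converter behind a bare <code>&lt;cookie&gt;</code> never matches a slash, hence the 404. If that diagnosis holds, switching to the <code>path</code> converter (which accepts embedded slashes) should fix it; a sketch, exercised with Flask's built-in test client:

```python
from flask import Flask

app = Flask(__name__)

# The "path" converter matches across "/" boundaries, unlike the
# default "string" converter used by a plain <cookie> placeholder.
@app.route("/getExposeIds/<path:cookie>")
def getExposeIds(cookie):
    return cookie

# Quick check: a value with an embedded slash now routes correctly.
with app.test_client() as client:
    print(client.get("/getExposeIds/a/b").get_data(as_text=True))
```

If the values can also contain characters Werkzeug treats specially in other versions, passing the token as a query parameter or request body sidesteps URL-path encoding entirely.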
<python><flask>
2024-11-06 21:38:02
1
37,422
MakePeaceGreatAgain
79,164,358
219,153
Python f-string equivalent of iterable unpacking by print instruction
<p><code>print</code> can nicely unpack a tuple removing brackets and commas, e.g.</p> <pre><code>a = (0, -1, 1) print(*a) </code></pre> <p>produces <code>0 -1 1</code>. Trying the same with f-strings fails:</p> <pre><code>print(f'{*a}') </code></pre> <p>The closest f-string option I found is:</p> <pre><code>print(f'{*a,}'.replace(',', '').replace('(', '').replace(')', '')) </code></pre> <p>Is there a more direct way to get <code>0 -1 1</code> from <code>*a</code> using f-strings?</p>
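The direct f-string equivalent of <code>print(*a)</code> is a <code>str.join</code> over the stringified items, which reproduces the space-separated rendering without any replace chains:

```python
a = (0, -1, 1)

# print(*a) separates items with spaces; join over str(...) of each
# item gives the identical text inside an f-string.
s = f"{' '.join(map(str, a))}"
print(s)  # 0 -1 1
```

One caveat: before Python 3.12, the quote character inside the f-string must differ from the outer one (as with <code>'</code> inside <code>"</code> here); 3.12+ lifts that restriction.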
<python><f-string><iterable-unpacking>
2024-11-06 21:29:09
3
8,585
Paul Jurczak
79,164,305
2,925,767
How to turn a CLI-argument-based function into a function that takes parameters in Python
<p>I'm using an example function from the google-api-python-client library that takes command line arguments and parses them using <code>argparser</code>. Here's the code this is based on (converted to python3) <a href="https://developers.google.com/youtube/v3/guides/uploading_a_video" rel="nofollow noreferrer">https://developers.google.com/youtube/v3/guides/uploading_a_video</a></p> <pre class="lang-py prettyprint-override"><code>if __name__ == '__main__': argparser.add_argument(&quot;--file&quot;, required=True, help=&quot;Video file to upload&quot;) # ...bunch of possible arguments args = argparser.parse_args() youtube = get_authenticated_service(args) try: initialize_upload(youtube, args) </code></pre> <p>The top function then passes the arguments along to the <code>run_flow</code> function in the oauth library as the <code>flags</code> parameter. That function expects command line arguments:</p> <pre><code>It presumes it is run from a command-line application and supports the following flags: </code></pre> <p>Is there a way to cleanly parameterize this function so I can easily call it from another python function? I've messed around with creating a wrapper function that sets those arguments as defaults.</p> <pre><code>def uploadVideo(file, title, description, category): # this feels hacky (yah think?) argparser.add_argument(&quot;--file&quot;, required=True, help=&quot;Video file to upload&quot;, default=file) argparser.add_argument(&quot;--title&quot;, help=&quot;Video title&quot;, default=title) </code></pre> <p>I've started writing a <code>subprocess.run</code> call too, but that doesn't seem great.</p> <p>Any suggestions?</p>
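One clean option: <code>parse_args</code> accepts an explicit argv list, so the wrapper can build one from its parameters and reuse the existing parser untouched, with no default-mutation hack and no <code>subprocess</code>. A sketch (the parser and function names below are illustrative stand-ins for the Google sample's <code>argparser</code>):

```python
import argparse

def build_parser() -> argparse.ArgumentParser:
    parser = argparse.ArgumentParser()
    parser.add_argument("--file", required=True, help="Video file to upload")
    parser.add_argument("--title", help="Video title", default="")
    return parser

def upload_video(file: str, title: str = "") -> argparse.Namespace:
    # parse_args takes an argv list, so the same parser works when
    # called from Python without touching sys.argv or defaults.
    return build_parser().parse_args(["--file", file, "--title", title])

args = upload_video("video.mp4", title="My clip")
print(args.file, args.title)
```

Alternatively, since the downstream code only reads attributes off <code>args</code>, constructing <code>argparse.Namespace(file=..., title=...)</code> directly and passing it along also works.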
<python><parameters><arguments><command-line-interface>
2024-11-06 21:06:35
1
1,085
icicleking
79,164,165
6,447,123
FSDP in Accelerate for Large-Context LLaMA Training
<p>I'm trying to train a LLaMA model with large contexts using Hugging Face's Trainer, Fully Sharded Data Parallel (FSDP), and the accelerate library to handle memory limits. My context size is very large, with max token sizes over 70k. While my setup works fine for smaller context sizes, I'm hitting a roadblock with larger ones.</p> <p>Here's a simplified version of my setup:</p> <pre class="lang-py prettyprint-override"><code>from transformers import pipeline, AutoTokenizer, AutoModelForCausalLM, Trainer, TrainingArguments from datasets import Dataset model_name = &quot;meta-llama/Llama-3.2-1B&quot; tokenizer = AutoTokenizer.from_pretrained(model_name) model = AutoModelForCausalLM.from_pretrained(model_name, use_cache=False) tokenizer.pad_token = tokenizer.eos_token dataset = load_dataset(&quot;my_private_dataset&quot;) def tokenize_function(examples): tokenized = tokenizer( prompts, padding=&quot;max_length&quot;, truncation=True, max_length=70000 ) tokenized['labels'] = tokenized['input_ids'] return tokenized tokenized_dataset = dataset.map(tokenize_function, batched=True) training_args = TrainingArguments( output_dir=&quot;./results&quot;, num_train_epochs=5, per_device_train_batch_size=1, save_steps=5000, save_strategy=&quot;epoch&quot;, save_total_limit=1, logging_dir='./logs', logging_steps=1000, eval_strategy=&quot;no&quot;, gradient_checkpointing=True, ) trainer = Trainer( model=model, args=training_args, train_dataset=tokenized_dataset, ) trainer.train() </code></pre> <p>When I try to train with accelerate for larger contexts, I get this error:</p> <blockquote> <p>RuntimeError: The size of tensor a (0) must match the size of tensor b (2048) at non-singleton dimension 2</p> </blockquote> <p>I verified the batch sizes and tensor shapes for smaller inputs and single GPU, and everything looks correct. 
The error only appears when the input size gets large.</p> <h2>Questions:</h2> <ul> <li>Has anyone encountered this type of error specifically with FSDP or accelerate for large contexts?</li> <li>Are there additional accelerate configurations or debugging steps for extremely large token sequences that could help identify the issue?</li> <li>Is FSDP right method for large context?</li> </ul>
<python><pytorch><huggingface-transformers><llama><accelerate>
2024-11-06 20:26:37
0
4,309
A.A
79,164,083
2,015,614
Type importing in dynamically loaded modules
<p>I want to dynamically load a module from within a package and to import objects from the package into this module. However, I am getting an inconsistency in the name spaces what the types is concerned.</p> <p>My package files are in directory <code>pkg</code>:</p> <pre><code>pkg: __init__.py, main.py, commands.py, queries.py </code></pre> <p>and the module to be loaded, <code>run.py</code>, resides in some other place,<code>my_directory</code>.</p> <p><code>main.py</code> is the main module, it dynamically loads <code>run.py</code>:</p> <pre><code>import importlib.util import sys script_file = 'my_directory/run.py' spec = importlib.util.spec_from_file_location('my_script', script_file) mod = importlib.util.module_from_spec(spec) sys.modules[script_file] = mod spec.loader.exec_module(mod) </code></pre> <p><code>commands.py</code>:</p> <pre><code>from queries import report, Format def list_data(fmt: Format = Format.ASCII): report(fmt) </code></pre> <p><code>queries.py</code>:</p> <pre><code>from enum import auto, Enum class Format(Enum): ASCII = auto() PYTHON = auto() def report(fmt): if fmt == Format.PYTHON: print('Format is Python') else: print('Same values:', fmt == Format.PYTHON) print('Same types:', type(fmt) == type(Format.PYTHON)) print(f'Formats: {fmt}, {Format.PYTHON}') print('Module of Format:', Format.__module__) print('Module of fmt:', fmt.__module__) </code></pre> <p><code>___init.py</code>:</p> <pre><code>from .queries import Format from .commands import list_data </code></pre> <p>and <code>run.py</code>:</p> <pre><code>from pkg import list_data, Format list_data(Format.PYTHON) </code></pre> <p>I would expect <code>Format is Python</code> to be printed, but the <code>else</code> branch is executed, and I am getting the following output:</p> <pre><code>Same values: False Same types: False Formats: Format.PYTHON, Format.PYTHON Module of Format: queries Module of fmt: pkg.queries </code></pre> <p>which indicates that the types of 
<code>Format.PYTHON</code> have different name spaces and don't compare.</p> <p>How should I modify my code to get the expected behavior?</p> <p>I apologize for the lengthy description but I could not succeed to reproduce the problem on a smaller example.</p>
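The printed module names already reveal the cause: <code>queries.py</code> is loaded twice, once as top-level <code>queries</code> (via the absolute <code>from queries import ...</code> in <code>commands.py</code>) and once as <code>pkg.queries</code>, so there are two distinct <code>Format</code> classes whose members never compare equal. The fix is to make <code>commands.py</code> use a relative import (<code>from .queries import report, Format</code>) so only <code>pkg.queries</code> exists. A minimal stdlib demo of the double-load effect (file and module names are illustrative):

```python
import importlib.util
import pathlib
import sys
import tempfile

SOURCE = "class Format:\n    pass\n"

def load(path, name):
    """Load the file at `path` as a module registered under `name`."""
    spec = importlib.util.spec_from_file_location(name, path)
    mod = importlib.util.module_from_spec(spec)
    sys.modules[name] = mod
    spec.loader.exec_module(mod)
    return mod

with tempfile.TemporaryDirectory() as tmp:
    path = pathlib.Path(tmp, "queries.py")
    path.write_text(SOURCE)
    plain = load(path, "queries")         # what `from queries import ...` sees
    packaged = load(path, "pkg.queries")  # what `from .queries import ...` sees

# Same file, two module objects, two distinct classes:
print(plain.Format is packaged.Format)  # False
```

Once the imports inside the package are consistently relative (or consistently <code>pkg.queries</code>-absolute), the enum loads exactly once and identity comparisons work as expected.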
<python><python-import>
2024-11-06 19:56:19
2
1,205
Dmitry K.
79,163,896
1,803,648
How to detect task cancellation by Task Group
<p>Given a <code>taskgroup</code> and number of running tasks, per <a href="https://docs.python.org/3/library/asyncio-task.html#terminating-a-task-group" rel="nofollow noreferrer">taskgroup docs</a> if any of the tasks raises an error, rest of the tasks in group will be cancelled.</p> <p>If some of these tasks need to perform cleanup upon cancellation, then <strong>how would one go about detecting <em>within the task</em> it's being cancelled</strong>?</p> <p>Was hoping some exception is raised in the task, but that's not the case:</p> <p>script.py:</p> <pre class="lang-py prettyprint-override"><code>import asyncio class TerminateTaskGroup(Exception): &quot;&quot;&quot;Exception raised to terminate a task group.&quot;&quot;&quot; async def task_that_needs_to_cleanup_on_cancellation(): try: await asyncio.sleep(10) except Exception: print('exception caught, performing cleanup...') async def err_producing_task(): await asyncio.sleep(1) raise TerminateTaskGroup() async def main(): try: async with asyncio.TaskGroup() as tg: tg.create_task(task_that_needs_to_cleanup_on_cancellation()) tg.create_task(err_producing_task()) except* TerminateTaskGroup: print('main() termination handled') asyncio.run(main()) </code></pre> <p>Executing, we can see no exception is raised in <code>task_that_needs_to_cleanup_on_cancellation()</code>:</p> <pre class="lang-bash prettyprint-override"><code>$ python3 script.py main() termination handled </code></pre>
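The reason nothing is caught above: cancellation raises <code>asyncio.CancelledError</code>, which since Python 3.8 derives from <code>BaseException</code>, so <code>except Exception</code> never sees it. Catching it explicitly (or using <code>finally</code>) and re-raising lets the task clean up while still completing its cancellation. A sketch using plain task cancellation to stay below 3.11; a <code>TaskGroup</code> delivers the same exception inside its tasks:

```python
import asyncio

log = []

async def worker():
    try:
        await asyncio.sleep(10)
    except asyncio.CancelledError:
        # CancelledError subclasses BaseException, not Exception, so
        # it must be caught by name; always re-raise afterwards so the
        # cancellation actually completes.
        log.append("cleanup")
        raise

async def main():
    task = asyncio.create_task(worker())
    await asyncio.sleep(0.01)
    task.cancel()
    try:
        await task
    except asyncio.CancelledError:
        pass

asyncio.run(main())
print(log)  # ['cleanup']
```

A <code>try/finally</code> works just as well when the cleanup should also run on normal exit.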
<python><task><python-asyncio>
2024-11-06 18:48:31
1
598
laur
79,163,792
2,879,529
Iterate through large XML data while multiprocessing using Python
<p>I have to periodically check a big XML file with millions of records.</p> <pre><code>context = iter(etree.iterparse(product_file_path, tag=&quot;Record&quot;, events=(&quot;start&quot;, &quot;end&quot;))) _, root = next(context) start_tag = None xml_dict = None for event, elem in context: if event == &quot;start&quot; and start_tag is None: start_tag = elem.tag if event == &quot;end&quot;: pickled_elem = etree.tostring(elem) # This will make sense later xml_dict = _etree_to_dict(pickled_elem) _update_product(self.category, xml_dict) start_tag = None xml_dict = None root.clear() </code></pre> <p>While I managed to loop through the file using <code>lxml</code> without blowing my RAM, it takes too long, so I was trying to multithread this.</p> <p>I'm very new to multithreading and I managed to cobble together a solution using Django management commands and with the help of <a href="https://stackoverflow.com/a/57796419/2879529">this answer</a></p> <pre><code>ProcPoolExc = futures.ProcessPoolExecutor ThreadPoolExc = futures.ThreadPoolExecutor class Command(BaseCommand): def handle(self, *args, **options): # ( other unnecessary code) context = iter(etree.iterparse(product_file_path, tag=&quot;Detail&quot;, events=(&quot;start&quot;, &quot;end&quot;))) _, root = next(context) start_tag = None xml_dict = None xml_dict_futures = [] product_update_futures = [] with ProcPoolExc(max_workers=threads) as ppe, ThreadPoolExc(max_workers=threads) as tpe: for event, elem in context: if event == &quot;start&quot; and start_tag is None: start_tag = elem.tag if event == &quot;end&quot;: xml_dict_futures.append(ppe.submit(_etree_to_dict, elem)) start_tag = None root.clear() for future in futures.as_completed(xml_dict_futures): xml_dict = future.result() product_update_futures.append(tpe.submit(_update_product, *(self.category, xml_dict))) for fut in futures.as_completed(product_update_futures): e = fut.exception() print(&quot;success&quot; if not e else e) </code></pre> <p>This 
works. However, it threw me back to the same Memory problem when dealing with large XML files in which I wait for a time until it crashes my computer due to insufficient RAM. I suppose it's because I'm saving everything in <code>xml_dict_futures</code> and <code>product_update_futures</code>? Is there a way I could optimize this to avoid this issue?</p> <p>I tried to use an intermediary function and <code>ThreadPoolExecutor.map</code>, but I guess I'm doing it wrong because it stops and it doesn't show anything</p> <pre><code>def _queue_update(default_category, start_tag, root, event, elem): if event == &quot;start&quot; and start_tag is None: start_tag = elem.tag if event == &quot;end&quot;: pickled_elem = etree.tostring(elem) _update_product(default_category, pickled_elem) start_tag = None root.clear() </code></pre> <p>and then</p> <pre><code>with futures.ThreadPoolExecutor(threads) as executor: executor.map( _queue_update, [(self.category, start_tag, root, event, elem) for event, elem in context] ) </code></pre>
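The memory blow-up above most likely comes from submitting every element before consuming any results, so all futures and their payloads are alive at once. One fix is windowed submission: keep a bounded set of futures in flight and top it up as results complete. The helper below is a generic sketch (names are mine); applied to the XML loop, <code>func</code> would take the <code>etree.tostring(elem)</code> bytes (which pickle cheaply across processes) and <code>root.clear()</code> would run at submit time, not at the end:

```python
import itertools
from concurrent.futures import FIRST_COMPLETED, ThreadPoolExecutor, wait

def bounded_map(func, iterable, max_workers=4, window=32):
    """Yield results while keeping at most `window` futures in flight,
    so neither pending inputs nor finished results pile up in memory."""
    items = iter(iterable)
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        pending = {pool.submit(func, x) for x in itertools.islice(items, window)}
        while pending:
            done, pending = wait(pending, return_when=FIRST_COMPLETED)
            for fut in done:
                yield fut.result()
            # Top the window back up with as many items as just finished.
            for x in itertools.islice(items, len(done)):
                pending.add(pool.submit(func, x))

print(sorted(bounded_map(lambda x: x * 2, range(10))))
```

The same pattern works with <code>ProcessPoolExecutor</code> for the CPU-bound <code>_etree_to_dict</code> step; results arrive out of order, which is fine when each one is written to the database independently.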
<python><memory><lxml><python-multithreading>
2024-11-06 18:13:26
0
373
Mærcos
79,163,625
22,407,544
Django `The current path, detail/PayPal, matched the last one` error
<p>I'm using Django's <code>DetailView</code> to display detailed information in my web app. I've set up a <code>Processor</code> model with a <code>name</code> and a <code>slug</code> field, and I'm using the <code>slug</code> field in the URL pattern and the DetailView. However, I'm running into an issue where the DetailView is not able to find the <code>Processor</code> object if the capitalization of the URL slug doesn't exactly match the <code>slug</code> field in the database.</p> <p>For example if I visit <code>localhost:8000/detail/paypal</code> I get the following error:</p> <pre><code>Using the URLconf ... Django tried these URL patterns, in this order: ... detail/&lt;slug:slug&gt; [name='processor_detail'] The current path, detail/PayPal, matched the last one. </code></pre> <p>In addition the url I entered in the url field changes to <code>localhost:8000/detail/PayPal</code>, capitalizing the letters.</p> <p>Finally, the url only works if I first visit it by clicking on a link to it from another page. After that it works perfectly normally whether I go incognito mode or not and no matter the capitalization I use in the slug. But if I go incognito mode and visit the url directly(ie, after not having visit it by clicking on a link to it from another page) it doesn't load at all whether I capitalize the slug or not. 
I hope you can understand my point.</p> <p>Here is my code:</p> <p><code>views.py</code>:</p> <pre><code>class ProcessorDetailView(DetailView): model = Processor template_name = 'finder/processor_detail.html' slug_field = 'slug' # Tell DetailView to use the `slug` model field as the DetailView slug slug_url_kwarg = 'slug' # Match the URL parameter name </code></pre> <p><code>models.py</code>:</p> <pre><code>class Processor(models.Model): #the newly created database model and below are the fields name = models.CharField(max_length=250, blank=True, null=True) #textField used for larger strings, CharField, smaller slug = models.SlugField(max_length=250, blank=True) ... def __str__(self): #displays some of the template information instead of 'Processot object' if self.name: return self.name[0:20] else: return '--no processor name listed--' def get_absolute_url(self): # new return reverse(&quot;processor_detail&quot;, args=[str(self.name)]) def save(self, *args, **kwargs): #`save` model a certain way(detailed in rest of function below) if not self.slug: #if there is no value in `slug` field then... self.slug = slugify(self.name) #...save a slugified `name` field value as the value in `slug` field super().save(*args, **kwargs) </code></pre> <p><code>urls.py</code>:</p> <p><code>path(&quot;detail/&lt;slug:slug&gt;&quot;, views.ProcessorDetailView.as_view(), name='processor_detail')</code></p> <p>If I follow a link on a separate template to the the problem url using <code>&lt;a href=&quot;{%url 'processor_detail' processor.slug%}&quot; class=&quot;details-link&quot;&gt; Details → &lt;/a&gt;</code>, for example, it works perfectly fine afterward.</p>
<python><django>
2024-11-06 17:19:19
1
359
tthheemmaannii
79,163,581
23,626,926
algorithm for undoing Bresenham lines
<p>I have a blob of points on a grid that I want to create a sensible polygon outline for. The points will be selected by the user so I can't expect them to be <em>perfectly</em> following Bresenham's algorithm for lines with weird slopes. However, I am still struggling to even get something working for an obvious &quot;nice&quot; sloped side:</p> <pre class="lang-none prettyprint-override"><code># ### ##### ####### ##### ### # </code></pre> <p>What I want is to turn those points into an SVG polygon (or path, or polyline, etc). It is supposed to be a nice neat triangle as you might expect.</p> <p>Here is the code I have tried so far:</p> <pre class="lang-py prettyprint-override"><code>import cmath s = &quot;&quot;&quot; # ### ##### ####### ##### ### # &quot;&quot;&quot; pts = [complex(c, r) for (r, rt) in enumerate(s.splitlines()) for (c, ch) in enumerate(rt) if ch == &quot;#&quot;] def centroid(pts: list[complex]) -&gt; complex: return sum(pts) / len(pts) def sort_counterclockwise(pts: list[complex], center: complex | None = None) -&gt; list[complex]: if center is None: center = centroid(pts) return sorted(pts, key=lambda p: cmath.phase(p - center)) def perimeter(pts: list[complex]) -&gt; list[complex]: out = [] for pt in pts: for d in (-1, 1, -1j, 1j, -1+1j, 1+1j, -1-1j, 1-1j): xp = pt + d if xp not in pts: out.append(pt) break return sort_counterclockwise(out, centroid(pts)) def example(all_points: list[complex], scale: float = 20) -&gt; str: p = perimeter(all_points) p.append(p[0]) vbx = max(map(lambda x: x.real, p)) + 1 vby = max(map(lambda x: x.imag, p)) + 1 return f&quot;&quot;&quot;&lt;svg viewBox=&quot;-1 -1 {vbx} {vby}&quot; width=&quot;{vbx * scale}&quot; height=&quot;{vbx * scale}&quot;&gt; &lt;polyline fill=&quot;none&quot; stroke=&quot;black&quot; stroke-width=&quot;0.1&quot; points=&quot;{&quot; &quot;.join(map(lambda x: f&quot;{x.real},{x.imag}&quot;, p))}&quot;&gt; &lt;/polyline&gt;&lt;/svg&gt;&quot;&quot;&quot; print(example(pts)) </code></pre> 
<p>It results in a horrible jagged mess:</p> <p><div class="snippet" data-lang="js" data-hide="false" data-console="true" data-babel="false" data-babel-preset-react="false" data-babel-preset-ts="false"> <div class="snippet-code"> <pre class="snippet-code-html lang-html prettyprint-override"><code>&lt;svg viewBox="-1 -1 7.0 8.0" width="140.0" height="140.0"&gt; &lt;polyline fill="none" stroke="black" stroke-width="0.1" points="0.0,3.0 0.0,2.0 0.0,1.0 1.0,2.0 2.0,2.0 2.0,3.0 3.0,3.0 4.0,3.0 4.0,4.0 5.0,4.0 6.0,4.0 4.0,5.0 3.0,5.0 2.0,5.0 2.0,6.0 1.0,6.0 0.0,7.0 0.0,6.0 0.0,5.0 0.0,4.0 0.0,3.0"&gt; &lt;/polyline&gt;&lt;/svg&gt;</code></pre> </div> </div> </p> <p>Any tips on making the algorithm respond better to making clearly-defined slopes and only produce a triangle for this?</p> <p>EDIT: Here is another test triangle, with mostly-vertical lines:</p> <pre class="lang-none prettyprint-override"><code># # # ## ## ## ### ## ## ## # # # </code></pre> <p>And here is one with both simultaneously (not a triangle, obviously):</p> <pre class="lang-none prettyprint-override"><code># # # ## ## ## ### ###### ######### ###### ### ## ## ## # # # </code></pre>
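The jaggedness comes from keeping every boundary pixel as a polygon vertex. One hedged follow-up step, an assumption rather than part of the original code, is to collapse runs of collinear vertices once the perimeter has been ordered; with a nonzero tolerance, the same cross-product test can also merge the near-collinear staircase that Bresenham-style diagonals produce. A minimal sketch reusing the question's complex-number points:

```python
def simplify(points: list[complex], tol: float = 0.0) -> list[complex]:
    """Drop interior vertices that are (near-)collinear with their neighbours."""
    if len(points) <= 2:
        return list(points)
    out = [points[0]]
    for prev, cur, nxt in zip(points, points[1:], points[2:]):
        a = cur - prev
        b = nxt - cur
        # 2D cross product of the two edge vectors; zero (within tol)
        # means prev, cur and nxt lie on one straight line
        cross = a.real * b.imag - a.imag * b.real
        if abs(cross) > tol:
            out.append(cur)
    out.append(points[-1])
    return out

# a diagonal run followed by a horizontal run collapses to three vertices
print(simplify([0+0j, 1+1j, 2+2j, 3+2j, 4+2j]))  # [0j, (2+2j), (4+2j)]
```

Raising `tol` trades exactness for smoothing, which is what the hand-drawn, not-perfectly-Bresenham input calls for.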
<python><graphics><vectorization>
2024-11-06 17:06:37
0
360
dragoncoder047
79,163,487
9,591,312
How to generate a sample with a given spearman coefficient
<p>In order to create a dataset to test a statistical calculation package, I want to be able to generate a sample that is correlated to a reference sample with a given searman coefficient.</p> <p>I managed to do it for the Pearson coefficient, but the fact that Spearman works on the rank makes it quite tricky, to say the least!</p> <p>As an example, with a code to generate spearman and pearson correlated samples:</p> <pre class="lang-py prettyprint-override"><code>from statistics import correlation import numpy as np from scipy import stats def generate_pearson_correlated_to_sample(x, correlation): &quot;&quot;&quot; Generate a variable with a specified Pearson correlation coefficient to a given sample. Parameters ---------- x : array-like The fixed first sample. correlation : float Desired Pearson correlation coefficient (-1 &lt;= correlation &lt;= 1). Returns ------- array-like The generated sample y with specified Pearson correlation to x. &quot;&quot;&quot; # standardize the pixel data x_std = (x - x.mean()) / x.std() # generate independent standard normal data z = np.random.normal(loc=0, scale=1, size=x_std.shape) # Create correlated variable (standardized) y_std = correlation * x_std + np.sqrt(1 - correlation ** 2) * z # Scale to desired standard deviation and add mean mean = x.mean() std = x.std() y = y_std * std + mean return y def generate_spearman_correlated_to_sample(x, correlation): &quot;&quot;&quot; Generate a variable with a specified Spearman correlation coefficient to a given sample. Parameters ---------- x : array-like The fixed first sample. correlation : float Desired Spearman correlation coefficient (-1 &lt;= correlation &lt;= 1). Returns ------- array-like The generated sample y with specified Spearman correlation to x. 
&quot;&quot;&quot; n_samples = len(x) # Convert x to ranks (normalized between 0 and 1) x_ranks = stats.rankdata(x) / (n_samples + 1) # Convert ranks to normal distribution normal_x = stats.norm.ppf(x_ranks) # Generate correlated normal variable normal_y = correlation * normal_x + np.sqrt(1 - correlation ** 2) * np.random.normal(0, 1, n_samples) # Convert back to uniform distribution y_uniform = stats.norm.cdf(normal_y) # Convert uniform to same marginal distribution as x using empirical CDF x_sorted = np.sort(x) y = np.interp(y_uniform, np.linspace(0, 1, n_samples), x_sorted) return y def verify_correlations(x, y): &quot;&quot;&quot; Calculate both Spearman and Pearson correlations between two variables. Parameters: ----------- x, y : array-like The two variables to check Returns: -------- tuple (spearman_correlation, pearson_correlation) &quot;&quot;&quot; spearman_corr = stats.spearmanr(x, y)[0] pearson_corr = stats.pearsonr(x, y)[0] return spearman_corr, pearson_corr # Example usage if __name__ == &quot;__main__&quot;: # Set random seed for reproducibility np.random.seed(42) # Create different types of example data x_normal = np.random.normal(0, 1, 10000) # Normal distribution x_exp = np.random.exponential(2, 10000) # Exponential distribution x_bimodal = np.concatenate([np.random.normal(-2, 0.5, 5000), np.random.normal(2, 0.5, 5000)]) # Bimodal distribution # Test with different distributions and correlations test_cases = [ (x_normal, 0.7, &quot;Normal Distribution&quot;), (x_exp, 0.5, &quot;Exponential Distribution&quot;), (x_bimodal, 0.8, &quot;Bimodal Distribution&quot;) ] # Run examples for x, target_corr, title in test_cases: print(f&quot;\nTesting with {title}&quot;) print(f&quot;Target correlation: {target_corr}&quot;) # Generate correlated sample y_spearman = generate_spearman_correlated_to_sample(x, correlation=target_corr) y_pearson = generate_pearson_correlated_to_sample(x, correlation=target_corr) # Calculate actual correlations spearman_corr, _= 
verify_correlations(x, y_spearman) _, pearson_corr = verify_correlations(x, y_pearson) print(f&quot;Achieved Spearman correlation: {spearman_corr:.4f}&quot;) print(f&quot;Achieved Pearson correlation: {pearson_corr:.4f}&quot;) </code></pre> <p>With the above code, the generated Pearson coefficient is of course not exactly equal to the targeted value due to the random nature of the code. But I find that Spearman is systematically off by a much larger amount, which makes me suspecting a problem in my code.</p> <p>I work in Python but any help is appreciated!</p>
<python><statistics><correlation><sampling><pearson-correlation>
2024-11-06 16:40:30
2
647
BayesianMonk
79,163,417
2,067,492
How can I get a keras layer to learn an AND operation
<p>To get keras to learn to detect corners from a binary image of a rectangle I reduced the problem down to classifying a 3x3 array of pixels. The top left corner, the pixels would need to look like this.</p> <pre><code>[ [0, 0, 0], [0, 1, 1], [0, 1, 1] ] </code></pre> <p>This generates the full set of all the possible input shapes.</p> <pre><code>def getData(): x = [] y = [] template = numpy.array([[ 0, 0, 0], [0, 1, 1], [0, 1, 1] ]) num = [0, 0, 0, 0, 0, 0, 0, 0, 0] for i in range(2**9): n = numpy.array( num ).reshape((3, 3)) x.append( n ) if numpy.all( n == template ): y.append(1) else: y.append(0) s = 0 j = 0 while s == 0: if num[j] == 0: num[j] = 1 s = 1 else: num[j] = 0 j += 1 if j == len(num): print(num) break return numpy.array(x), numpy.array(y) </code></pre> <p>I should be able to find a classifier from a simple single convolutional layer.</p> <pre><code>def createModel(): inp = keras.layers.Input((3, 3, 1)) cnn = keras.layers.Conv2D( 1, (3, 3), activation = None, use_bias=True)(inp) cnn = keras.layers.Conv2D( 1, (1, 1), activation = &quot;hard_sigmoid&quot;)(cnn) return keras.models.Model(inputs = [inp], outputs=[cnn]) </code></pre> <p>Using this simple model I could set the weights and get the output I desire.</p> <pre><code>dw = numpy.array([ -100, -100, -100, -100, 10, 10, -100, 10, 10]).reshape((3, 3, 1, 1)) bw = numpy.array([ -35 ]) ow = numpy.array([ 1 ]).reshape((1, 1, 1, 1)) obw = numpy.array([0]) mdl.set_weights( [dw, bw, ow, obw] ) mdl.compile( loss =&quot;mse&quot;, optimizer=keras.optimizers.Adam(learning_rate=1e-7) ) mdl.evaluate(x, y) </code></pre> <p>Which gives a loss of:</p> <blockquote> <p>16/16 ━━━━━━━━━━━━━━━━━━━━ 0s 1ms/step - loss: 9.3703e-04</p> </blockquote> <p><strong>How can I train the network to learn these weights?</strong></p> <p>The basic setup to train a network is this:</p> <pre><code>mdl = createModel() x, y = getData() mdl.compile( loss =&quot;mse&quot;, optimizer=keras.optimizers.Adam(learning_rate=1e-2) ) mdl.fit(x, 
y, epochs=100, verbose=2) </code></pre> <p>It doesn't work, it just converges to a value that produces 0 everywhere, which is reasonable considering there is 1 sample out of 512 that is non-zero.</p> <p>Some other observations.</p> <ul> <li>Initialize the network with the correct weights the model immediately converges to a loss of 0.0038, but it still correctly predicts.</li> <li>Using a weighted loss fuction just shifts the mean value</li> <li>Balancing the dataset by including more positive examples also shifts the mean value.</li> </ul> <p>Here is a complete version of the program.</p> <pre><code>#!/usr/bin/env python3 import keras import numpy def createModel(): inp = keras.layers.Input((3, 3, 1)) cnn = keras.layers.Conv2D( 1, (3, 3), activation = None, use_bias=True)(inp) cnn = keras.layers.Conv2D( 1, (1, 1), activation = &quot;hard_sigmoid&quot;)(cnn) return keras.models.Model(inputs = [inp], outputs=[cnn]) def getData(): x = [] y = [] template = numpy.array([[ 0, 0, 0], [0, 1, 1], [0, 1, 1] ]) num = [0, 0, 0, 0, 0, 0, 0, 0, 0] for i in range(2**9): n = numpy.array( num ).reshape((3, 3)) x.append( n ) if numpy.all( n == template ): y.append(1) print(&quot;found&quot;) else: y.append(0) s = 0 j = 0 while s == 0: if num[j] == 0: num[j] = 1 s = 1 else: num[j] = 0 j += 1 if j == len(num): print(num) break return numpy.array(x), numpy.array(y) mdl = createModel() x, y = getData() for ws in mdl.get_weights(): print(ws.shape) dw = numpy.array([ -100, -100, -100, -100, 10, 10, -100, 10, 10]).reshape((3, 3, 1, 1)) bw = numpy.array([ -35 ]) ow = numpy.array([ 1 ]).reshape((1, 1, 1, 1)) obw = numpy.array([0]) mdl.set_weights( [dw, bw, ow, obw] ) mdl.compile( loss =&quot;mse&quot;, optimizer=keras.optimizers.Adam(learning_rate=1e-7) ) mdl.evaluate(x, y) mdl.fit(x, y, epochs=1000, batch_size=32, verbose=2) t0 = numpy.array([[[ 0, 0, 0], [0, 1, 1], [0, 1, 1] ]]) t1 = numpy.array([[[ 1, 0, 0], [0, 1, 1], [0, 1, 1] ]]) print( mdl(t0) ) print( mdl(t1) ) </code></pre>
<python><keras>
2024-11-06 16:18:39
1
12,395
matt
79,163,372
967,621
Python equivalent of the Perl ".." flip-flop operator
<p>What is the Python equivalent of the Perl &quot;<code>..</code>&quot; (range, or flip-flop) <a href="https://perldoc.perl.org/perlop#Range-Operators" rel="nofollow noreferrer">operator</a>?</p> <pre class="lang-perl prettyprint-override"><code>for ( qw( foo bar barbar baz bazbaz bletch ) ) { print &quot;$_\n&quot; if /ar.a/ .. /az\w/; } </code></pre> <p>Output:</p> <pre><code>barbar baz bazbaz </code></pre> <p>The Python workaround that I am aware of includes <a href="https://docs.python.org/howto/functional.html#generator-expressions-and-list-comprehensions" rel="nofollow noreferrer">generator expression</a> and indexing with the help of <a href="https://docs.python.org/3/library/functions.html#enumerate" rel="nofollow noreferrer"><code>enumerate</code></a>, but this seems cumbersome:</p> <pre class="lang-py prettyprint-override"><code>import re lst = 'foo bar barbar baz bazbaz bletch'.split() idx_from = list(i for i, el in enumerate(lst) if re.search(r'ar.a', el))[0] idx_to = list(i for i, el in enumerate(lst) if re.search(r'az\w', el))[0] lst_subset = lst[ idx_from : (idx_to+1)] print(lst_subset) # ['barbar', 'baz', 'bazbaz'] </code></pre> <h2>Note:</h2> <p>I am looking for just one range. There is currently no need to have multiple ranges.</p>
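A lazy, single-pass alternative to the enumerate-and-slice workaround is a small generator that turns on at the first start match and turns off after yielding the element that matches the stop pattern. This mirrors Perl's `..`, which also tests the right operand on the same element that flipped the range on. A sketch, not a drop-in operator:

```python
import re

def flip_flop(items, start, stop):
    """Yield runs of items from a start match through a stop match, inclusive."""
    on = False
    for el in items:
        if not on and start(el):
            on = True
        if on:
            yield el
            if stop(el):
                on = False  # ready for the next range; use `break` here if only one range is needed

lst = 'foo bar barbar baz bazbaz bletch'.split()
result = list(flip_flop(lst,
                        lambda el: re.search(r'ar.a', el),
                        lambda el: re.search(r'az\w', el)))
print(result)  # ['barbar', 'baz', 'bazbaz']
```

Since the question only needs one range, replacing `on = False` with `break` stops the scan at the first stop match.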
<python><regex><range><slice>
2024-11-06 16:06:36
3
12,712
Timur Shtatland
79,163,215
1,283,836
Why PrintOptions.page_height of Selenium is ignored by some websites?
<p>Selenium has <code>print_page</code> <a href="https://www.selenium.dev/documentation/webdriver/interactions/print_page/" rel="nofollow noreferrer">method</a> that can return base64 encoded PDF representation of whatever is loaded by Selenium. That method, accepts <code>PrintOptions</code> object which as the name suggests, can set various options for the rendering of the PDF (e.g. page_height/width, margins, ...)</p> <p>For all the websites that I've tried, the <code>PrintOptions.page_height</code> property works as expected;</p> <pre><code>from selenium import webdriver import base64 selenium_driver = webdriver.Firefox() selenium_driver.get(&quot;https://www.google.com/&quot;) print_options = PrintOptions() print_options.page_height = 10 #set the height of the pdf pages base64_encoded = selenium_driver.print_page(print_options) with open(&quot;print.pdf&quot;, 'wb') as file: file.write(base64.b64decode(base64_encoded)) </code></pre> <p><a href="https://i.sstatic.net/EchCpeZP.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/EchCpeZP.png" alt="enter image description here" /></a></p> <p>But for some websites (such as <a href="https://demo.roundcubeplus.com/" rel="nofollow noreferrer">this</a> demo of <a href="https://github.com/roundcube/roundcubemail" rel="nofollow noreferrer">Roundcube</a>), <code>page_height</code> property is completely ignored and pdf is generated by another set of values (A3 sizes in this case). 
Why is this the case?</p> <p><strong>URL to test</strong> (<em>print friendly version of an email</em>): <a href="https://demo.roundcubeplus.com/?_task=mail&amp;_safe=0&amp;_uid=388&amp;_mbox=INBOX&amp;_action=print&amp;_extwin=1" rel="nofollow noreferrer">https://demo.roundcubeplus.com/?_task=mail&amp;_safe=0&amp;_uid=388&amp;_mbox=INBOX&amp;_action=print&amp;_extwin=1</a></p> <p>selenium: v4.25.0</p> <p>geckodriver: v0.35.0</p> <p><strong>update:</strong> This issue is different from <a href="https://bugzilla.mozilla.org/show_bug.cgi?id=1833964" rel="nofollow noreferrer">this bug</a> (<a href="https://stackoverflow.com/a/76285037/1283836">stackoverflow link</a>) in the sense that <code>page_height</code> is completely ignored rather than being non-exact.</p>
<python><selenium-webdriver><roundcube>
2024-11-06 15:17:41
1
2,093
wiki
79,163,025
2,359,027
Sort QTreeWidget alphabetically, except one item
<p>I have a QTreeWidget with items. One of the items I use it to provide a &quot;Select All&quot; option.</p> <p>I would like to keep that &quot;Select All&quot; item pinned on top, and then sort all other items below alphabetically. I tried creating a ROLE for the &quot;Select All&quot; item to indicate that this one has to be always on top. So I override the <strong>lt</strong> method to return False whenever the skip_sorting is enabled, else sort alphabetically (see code below). The issue I am having is that if I print the comparisons that are called ( <code>print(self.text(column), other.text(column))</code>), self is always the &quot;Select All&quot; item, but I don't see the comparison between other items. I am using pyside6 6.3.2</p> <pre><code>class TreeWidgetItem(QTreeWidgetItem): &quot;&quot;&quot; Sort items alphabetically but exclude the &quot;Select All&quot; on top &quot;&quot;&quot; def __lt__(self, other): column = self.treeWidget().sortColumn() print(self.text(column), other.text(column)) skip_sorting_self = self.data(column, _SKIP_SORT_ROLE) skip_sorting_other = other.data(column, _SKIP_SORT_ROLE) if skip_sorting_self and not skip_sorting_other: return False return self.text(column).lower() &lt; other.text(column).lower() </code></pre> <p><a href="https://i.sstatic.net/fznFIKR6.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/fznFIKR6.png" alt="enter image description here" /></a></p> <hr /> <p><strong>Update</strong>:</p> <p>Here there is a working example of what is failing:</p> <pre><code>import sys from typing import Optional from PySide6.QtCore import Qt from PySide6.QtWidgets import QApplication, QTreeWidget, QTreeWidgetItem, QWidget _SKIP_SORT_ROLE: int = (Qt.UserRole + 4) class TreeWidgetItem(QTreeWidgetItem): &quot;&quot;&quot; Sort items alphabetically but exclude the Select All &quot;&quot;&quot; def __lt__(self, other): column = self.treeWidget().sortColumn() skip_sorting_self = self.data(column, _SKIP_SORT_ROLE) text = 
self.text(column) if skip_sorting_self: text = '' # lowest possible return text &gt; other.text(column).lower() class TreeWidget(QTreeWidget): def __init__(self, parent: Optional[QWidget] = None): super(TreeWidget, self).__init__(parent) item = TreeWidgetItem(self) item.setText(0, 'Select All') item.setData(0, _SKIP_SORT_ROLE, True) for i in range(10): item = TreeWidgetItem(self) item.setText(0, f'Item {i}') for i in range(10): item = TreeWidgetItem(self) item.setText(0, f'Item {i}') self.setSortingEnabled(True) if __name__ == '__main__': app = QApplication(sys.argv) tree = TreeWidget() tree.show() sys.exit(app.exec()) </code></pre> <p>See the screenshot:</p> <p><a href="https://i.sstatic.net/65MwENLB.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/65MwENLB.png" alt="enter image description here" /></a></p>
<python><qt><sorting><pyside6><qtreewidget>
2024-11-06 14:24:13
2
1,133
laurapons
79,163,001
626,804
SQLAlchemy: is it safe to add a new column to a view, not mentioned in the model?
<p>Suppose I have a view, for example</p> <pre><code>create view MyView as select 1 as A, 2 as B </code></pre> <p>(The above SQL is Microsoft dialect but the details of the view itself are not important.)</p> <p>I wrap this view in my SQLAlchemy model definition:</p> <pre><code>class MyView(Base): __tablename__ = 'MyView' A = Column(Integer) B = Column(Integer) </code></pre> <p>The view isn't only used from within SQLAlchemy. Other applications or hand-written SQL queries may use it. Suppose I want to add a new column to the view, so it now reads</p> <pre><code>create view MyView as select 1 as A, 2 as B, 3 as C </code></pre> <p>(This new version of the view is backwards-compatible for most application queries. Only queries that use <code>select *</code> would see a different result. Queries that join the view with some other view or table also providing a column called <code>C</code>, but do not fully qualify column names, might get an error about ambiguous column names.)</p> <p>Suppose for the moment I do not plan to use the column <code>C</code> from SQLAlchemy and I don't add it to the model definition. I may do so eventually, but for whatever reason I can't change the code just yet, or at least not release the new version. Is it nonetheless safe to add the column to the view and remain compatible with the existing SQLAlchemy model file?</p> <p>Because I am asking about a view, not a table, please assume that SQLAlchemy is providing only read access. The application is not trying to update the view or delete from the view, even if that might be possible in some RDBMSes.</p> <p>There won't be any problem if, say, SQLAlchemy tries to introspect the columns in the view and complains about additional ones not mentioned in the model? I think it doesn't... but I have not found a definitive answer.</p>
<python><view><sqlalchemy>
2024-11-06 14:17:47
1
1,602
Ed Avis
79,162,993
12,466,687
How to select column range based on partial column names in Pandas?
<p>I have pandas dataframe and I am trying to <strong>select multiple columns</strong> (<strong>column range</strong> starting from <code>Test</code> to <code>Bio Ref</code>). Selection has to <strong>start</strong> from column <code>Test</code> to any column whose name starts with <code>Bio</code>. Below is the sample dataframe.</p> <p>In reality it can contain:</p> <ol> <li>any number of columns before <code>Test</code> column,</li> <li>any number of columns between <code>Test</code> &amp; <code>Bio Ref</code> like 2,3,4,5 etc.</li> <li>any number of columns after <code>Bio Ref</code>.</li> <li><code>Bio Ref</code> column can contain suffix in it but <code>Bio Ref</code> will be there as start of column name always.</li> </ol> <pre><code>df_chunk = pd.DataFrame({ 'Waste':[None,None], 'Test':['something', 'something'], '2':[None,None], '3':[None,None], 'Bio Ref':['2-50','15-100'], 'None':[None,None]}) df_chunk </code></pre> <pre><code> Waste Test 2 3 Bio Ref None 0 None something None None 2-50 None 1 None something None None 15-100 None </code></pre> <p>I have tried below codes that work:</p> <pre><code>df_chunk.columns.str.startswith('Bio') df_chunk[df_chunk.columns[pd.Series(df_chunk.columns).str.startswith('Bio')==1]] </code></pre> <p><strong>Issue:</strong> But when I try to use them for multiple column Selection then it doesn't work:</p> <pre><code>df_chunk.loc[:, 'Test':df_chunk.columns.str.startswith('Bio')] </code></pre>
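The slice endpoint passed to `.loc` must be a single label, not a boolean array, which is why the last attempt fails. One way, sketched under the assumption that the first `Bio`-prefixed column is the intended endpoint, is to resolve both column positions first and slice positionally with `iloc`:

```python
import pandas as pd

df_chunk = pd.DataFrame({
    'Waste': [None, None],
    'Test': ['something', 'something'],
    '2': [None, None],
    '3': [None, None],
    'Bio Ref': ['2-50', '15-100'],
    'None': [None, None],
})

start = df_chunk.columns.get_loc('Test')
# position of the first column whose name starts with 'Bio'
stop = next(i for i, c in enumerate(df_chunk.columns) if c.startswith('Bio'))
subset = df_chunk.iloc[:, start:stop + 1]
print(list(subset.columns))  # ['Test', '2', '3', 'Bio Ref']
```

Because `iloc` slicing is used only after the label lookups, this keeps working with any number of columns before `Test`, between `Test` and `Bio Ref`, or after `Bio Ref`, and with any suffix on the `Bio Ref` name.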
<python><pandas><dataframe>
2024-11-06 14:16:00
2
2,357
ViSa
79,162,974
774,575
How to align 2D artists with 3D points (what is the correct coordinate transform)?
<p>In this figure:</p> <p><a href="https://i.sstatic.net/65vnI3wBm.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/65vnI3wBm.png" alt="image" /></a></p> <p>Red circles are 3D points plotted with <code>Axes3D.scatter()</code>. They have a 3D position. Numbers in their bbox are plotted with <code>Axes3D.add_artist()</code>. They are derived from <code>mpl.text.Text</code>, meaning they are 2D objects with fixed viewport coordinates. I want the figures to be aligned with the circles, so the instances are passed the corresponding 3D coordinates, up to them to draw themselves at the correct viewport coordinates.</p> <p>This example has no real purpose, this is only to practice 3D in Matplotlib and how 2D and 3D widgets can be aligned, regardless of the point of view. I had a hard time reading Matplotlib 3D documentation, It seems there is no real starting point to progressively learn how to use the 3D toolkit. My current attempt is somehow working, but this is more by chance than by a logical method.</p> <p>I think I need to override the <code>draw</code> method of the 2D text and project the 3D coordinates into the 2D viewport plane, but I cannot do it correctly.</p> <p>I had to artificially scale the 2D positions to better match the circles just in order to create the image above. This scaling has no other justification, and without it the figures don't match the circles at all. 
Can you help me understand what is the correct method?</p> <pre><code>import numpy as np import matplotlib as mpl import matplotlib.pyplot as plt from mpl_toolkits.mplot3d import proj3d class Text2d_upd(mpl.text.Text): text_kw = dict(fontsize=12, ha='center', va='center') bbox = dict(boxstyle='circle', pad=0.3, facecolor='turquoise', alpha=0.5) def __init__(self, x, y, z, *args, **kwargs): self.p = x, y, z kwargs = kwargs | self.text_kw super().__init__(x, y, *args, bbox=self.bbox, **kwargs) def draw(self, renderer): # Convert 3D coords to 2D view coords M = self.axes.M x, y, z = proj3d.proj_transform(*self.p, M) # Shift artist to 2D position k = 5.4 # &lt;-- non-justified scaling self.set_position((k*x+0.5, k*y+0.5)) super().draw(renderer) a = np.pi/4 r = 3 # 2D rotation matrix c, s = np.cos(a), np.sin(a) M = np.array([[c, -s], [s, c]]) ax = plt.figure().add_subplot(projection='3d') ax.set(xlabel='x', ylabel='y', zlabel='z') # Plot v = np.array([r, 0]) for i in range(8): p = (*v, 0) ax.scatter(*p, ec='red', fc='none', s=20**2) t = Text2d_upd(*p, f'{i}', transform=ax.transAxes) ax.add_artist(t) v = M @ v ax.set_zlim(-r, r) ax.set(aspect='equal') </code></pre>
<python><matplotlib><3d><transform>
2024-11-06 14:12:49
0
7,768
mins
79,162,915
534,298
How to write a function for numpy array input whose action depends on the input's numerical value
<p>The function has this mathematical form</p> <pre><code>f(x) = 1, if x&lt;1 = g(x), for 1&lt;=x&lt;10 = 0, for x &gt;=10 </code></pre> <p>where <code>g(x)</code> is a simple function.</p> <p>It is straightforward to write such a function if the input is a <code>float</code> using <code>if/else</code>. If the input is a numpy array, is there a neat/efficient implementation without explicit loop?</p>
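`numpy.piecewise` expresses exactly this shape of definition without an explicit loop: each condition in the condition list selects a region, and the matching entry of the function list (a scalar, or a callable applied only to the selected elements) supplies the value there. `g` below is a placeholder, since the question leaves it unspecified:

```python
import numpy as np

def g(x):
    # placeholder for the question's "simple function" g(x)
    return 1.0 / x

def f(x):
    x = np.asarray(x, dtype=float)
    return np.piecewise(
        x,
        [x < 1, (x >= 1) & (x < 10), x >= 10],
        [1.0, g, 0.0],  # value for each region, in the same order
    )

print(f([0.5, 2.0, 20.0]))  # one value per region: 1, g(2.0) = 0.5, 0
```

An equivalent two-step form is `np.where(x < 1, 1.0, np.where(x < 10, g(x), 0.0))`, though that evaluates `g` on the full array, which matters if `g` is undefined outside `[1, 10)`.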
<python><numpy><numpy-ndarray>
2024-11-06 13:58:03
2
21,060
nos
79,162,743
4,529,546
scikit-learn classifiers and regressors caching training data?
<p>I have some 22,000 rows of training data. I use train_test_split to get training and testing data. I run fitting and then get some idea of how well the fitting went using various methods of estimation.</p> <p>I want to have the fitted model go back over the 22,000 rows and predict against them as if it had never seen them before. However, when I do this the regressors or classifiers get every single row 100% correct, which cannot be right given that the best I can realistically expect is around 75%.</p> <p>Do the estimators have some sort of learning data cache? How can I delete the cache but keep the trained model?</p>
<python><scikit-learn><model-fitting>
2024-11-06 13:12:45
1
1,128
Richard
79,162,721
753,376
winotify notification keeps on showing on callback
<p>I'm trying to show windows notification using winotify. My problem is it keeps on showing when I click the button that has a callback to open a web browser. Also is there a way to not show the command prompt when clicking the button?</p> <pre><code>from winotify import Notification, audio, Notifier, Registry import webbrowser import json registry = Registry(app_id=&quot;myapp&quot;, script_path=__file__) notifier = Notifier(registry) class Toast: def __init__(self): self.notify() @notifier.register_callback def toast_callback(): webbrowser.open('http://google.co.kr', new=2) return json.dumps({'message': 'link has been clicked'}) def notify(self): toast = Notification(app_id=&quot;308046B0AF4A39CB&quot;, title=&quot;Message Title&quot;, msg=&quot;Hellow World&quot;, duration=&quot;short&quot; ) toast.set_audio(audio.Reminder, loop=False) toast.add_actions(label=&quot;Button text&quot;, launch=notifier.callback_to_url(self.toast_callback)) toast.show() if __name__ == '__main__': notifier.start() myapp = Toast() </code></pre>
<python>
2024-11-06 13:05:05
0
2,852
unice
79,162,666
11,863,823
How to type Polars' Altair plots in Python?
<p>I use <code>polars</code> dataframes (new to this module) and I'm using some static typing, to keep my code tidy and clean for debugging purposes, and to allow auto-completion of methods and attributes on my editor. Everything goes well.</p> <p>However, when plotting things from dataframes with the <code>altair</code> API, as shown in the doc, I am unable to find the type of the returned object in <code>polars</code>.</p> <pre class="lang-py prettyprint-override"><code>import polars as pl import typing as tp data = {&quot;a&quot;: [0,0,0,0,1,1,1,2,2,3]} df = pl.DataFrame(data) def my_plot(df: pl.DataFrame, col: str) -&gt; tp.Any: &quot;&quot;&quot;Plot an histogram of the distribution of values of df[col]&quot;&quot;&quot; return df[col].value_counts( ).plot.bar( y=&quot;count&quot;, x=col ).properties( width=400, ) u = my_plot(df, &quot;a&quot;) u.show() </code></pre> <p>How do I type the output of this function? The doc states the output of <code>(...).plot</code> is a <code>DataFramePlot</code> object but there is no info on this type, and anyway I'm using the output of <code>(...).plot.bar(...)</code> which has a different type.</p> <p>If I run <code>type(u)</code>, I get <code>altair.vegalite.v5.api.Chart</code> but it seems sketchy to use this for static typing, and I don't want to import <code>altair</code> in my code as the <code>altair</code> methods I need are already included in <code>polars</code>.</p> <p>I couldn't find any info about this so any help is welcome</p> <p>Thanks!</p>
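Since `type(u)` reports `altair.vegalite.v5.api.Chart`, the public name to annotate with is `altair.Chart`; and because the `polars` plot namespace already depends on `altair`, importing it adds no new dependency. To still keep the import out of the runtime path, one common pattern (a sketch, hedged) is a `TYPE_CHECKING`-guarded import combined with postponed annotations:

```python
from __future__ import annotations  # annotations stay strings at runtime

from typing import TYPE_CHECKING

if TYPE_CHECKING:
    # evaluated only by static type checkers, never at runtime
    from altair import Chart

def my_plot(df, col: str) -> Chart:
    """Plot a histogram of the distribution of values of df[col]."""
    return df[col].value_counts().plot.bar(y="count", x=col).properties(width=400)

# the return annotation is recorded as a plain string, so defining the
# function does not import altair at all
print(my_plot.__annotations__["return"])  # Chart
```

Type checkers and editors resolve `Chart` normally, while the module imports fine even in an environment where `altair` is absent (calling `my_plot` still needs `polars`, of course).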
<python><python-typing><python-polars><altair>
2024-11-06 12:52:20
1
628
globglogabgalab
79,162,665
4,197,386
Restoring flax model checkpoints using orbax throws ValueError
<p>The following code blocks are being utlized to save the train state of the model during training and to restore the state back into memory.</p> <pre><code> from flax.training import orbax_utils import orbax.checkpoint directory_gen_path = &quot;checkpoints_loc&quot; orbax_checkpointer_gen = orbax.checkpoint.PyTreeCheckpointer() gen_options = orbax.checkpoint.CheckpointManagerOptions(save_interval_steps=5, create=True) gen_checkpoint_manager = orbax.checkpoint.CheckpointManager( directory_gen_path, orbax_checkpointer_gen, gen_options ) def save_model_checkpoints(step_, generator_state, generator_batch_stats): gen_ckpt = { &quot;model&quot;: generator_state, &quot;batch_stats&quot;: generator_batch_stats, } save_args_gen = orbax_utils.save_args_from_target(gen_ckpt) gen_checkpoint_manager.save(step_, gen_ckpt, save_kwargs={&quot;save_args&quot;: save_args_gen}) def load_model_checkpoints(generator_state, generator_batch_stats): gen_target = { &quot;model&quot;: generator_state, &quot;batch_stats&quot;: generator_batch_stats, } latest_step = gen_checkpoint_manager.latest_step() gen_ckpt = gen_checkpoint_manager.restore(latest_step, items=gen_target) generator_state = gen_ckpt[&quot;model&quot;] generator_batch_stats = gen_ckpt[&quot;batch_stats&quot;] return generator_state, generator_batch_stats </code></pre> <p>The training of the model was done on a GPU and loading the state onto GPU device works fine, however, when trying to load the model to cpu, the following error is being thrown by the orbax checkpoint manager's restore method</p> <pre><code>ValueError: SingleDeviceSharding with Device=cuda:0 was not found in jax.local_devices(). 
</code></pre> <p>I'm not quite sure what could be the reason, any thoughts folks?</p> <p><strong>Update</strong>: Updated to the latest version of orbax-checkpoint, 0.8.0 traceback changed to the following error</p> <pre><code>ValueError: sharding passed to deserialization should be specified, concrete and an instance of `jax.sharding.Sharding`. Got None </code></pre>
<python><jax><flax>
2024-11-06 12:52:03
1
380
yash
79,162,500
11,751,799
Gaps in a `matplotlib` plot of categorical data
<p>When I have numerical data, say index by some kind of time, it is straightforward to plot gaps in the data. For instance, if I have values at times 1, 2, 3, 5, 6, 7, I can set an <code>np.nan</code> at time 4 to break up the plot.</p> <pre class="lang-py prettyprint-override"><code>import numpy as np import matplotlib.pyplot as plt x = [1, 2, 3, 4, 5, 6, 7] y = [10, 20, 30, np.nan, 10, 20, 30] plt.plot(x, y) plt.show() plt.close() </code></pre> <p><a href="https://i.sstatic.net/CbqV1Qyr.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/CbqV1Qyr.png" alt="gap, numerical" /></a></p> <p>That sure beats the alternative of just skipping time 4!</p> <pre class="lang-py prettyprint-override"><code>import numpy as np import matplotlib.pyplot as plt x = [1, 2, 3, 5, 6, 7] y = [10, 20, 30, 10, 20, 30] plt.plot(x, y) plt.show() plt.close() </code></pre> <p><a href="https://i.sstatic.net/xVApqrwi.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/xVApqrwi.png" alt="no gap, numerical" /></a></p> <p>However, I now have a <code>y</code> variable that is categorical. Mostly, the plotting is straightforward: just use the categories as the <code>y</code>.</p> <pre class="lang-py prettyprint-override"><code>import numpy as np import matplotlib.pyplot as plt x = [1, 2, 3, 5, 6, 7] y = [&quot;cat&quot;, &quot;cat&quot;, &quot;dog&quot;, &quot;dog&quot;, &quot;cat&quot;, &quot;cat&quot;] plt.plot(x, y) plt.show() plt.close() </code></pre> <p><a href="https://i.sstatic.net/ykEZrAd0.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/ykEZrAd0.png" alt="no gap, categorical" /></a></p> <p>This puts the categories on the y-axis, just as I want. 
However, when I do my <code>np.nan</code> trick to get the gap, I get a point plotted at <code>np.nan</code>.</p> <pre class="lang-py prettyprint-override"><code>import numpy as np import matplotlib.pyplot as plt x = [1, 2, 3, 4, 5, 6, 7] y = [&quot;cat&quot;, &quot;cat&quot;, &quot;dog&quot;, np.nan, &quot;dog&quot;, &quot;cat&quot;, &quot;cat&quot;] plt.plot(x, y) plt.show() plt.close() </code></pre> <p><a href="https://i.sstatic.net/MBcQsKjp.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/MBcQsKjp.png" alt="np.nan does not produce a gap" /></a></p> <p>How can I get my plots to go <code>cat</code> <code>cat</code> <code>dog</code> on 1, 2, 3, and then <code>dog</code> <code>cat</code> <code>cat</code> on 5, 6, 7, leaving a gap at 4?</p>
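One workaround, sketched below under the assumption that the category set is known up front: map the labels to integer codes yourself, keep `np.nan` at the gap, and label the y-axis with the category names. The `categories` list and its ordering are assumptions here, not something matplotlib infers.

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")  # non-interactive backend, for illustration only
import matplotlib.pyplot as plt

x = [1, 2, 3, 4, 5, 6, 7]
y = ["cat", "cat", "dog", None, "dog", "cat", "cat"]

# Map labels to numeric codes; the gap becomes np.nan, which breaks
# the line exactly as it does with numeric data.
categories = ["cat", "dog"]  # assumed, known category order
codes = [categories.index(v) if v is not None else np.nan for v in y]

fig, ax = plt.subplots()
ax.plot(x, codes)
ax.set_yticks(range(len(categories)))
ax.set_yticklabels(categories)
```

With an interactive backend, `plt.show()` then displays `cat`/`dog` on the y-axis with a genuine gap at x=4.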
<python><numpy><matplotlib><plot>
2024-11-06 12:00:39
2
500
Dave
79,162,394
520,556
Neither `xarray.ufuncs.exp()` nor `xr.apply_ufunc.exp()` work: `AttributeError: 'function' object has no attribute 'exp'`
<p>I am trying to make some old code run, but I keep going in circles trying to exponentiate an array. Neither <code>xarray.ufuncs.exp()</code> nor <code>xr.apply_ufunc.exp()</code> works.</p> <p>Some suggested there is a bug in dask, which should be solved by <code>pip install dask==2.4.0</code>, but that did not help either.</p> <p>Please help!</p>
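For context, a sketch of the two forms that do work on recent xarray releases (where the `xarray.ufuncs` namespace has been removed): NumPy ufuncs dispatch directly on `DataArray`s, and `apply_ufunc` is a function you pass `np.exp` *to*, not a namespace containing it.

```python
import numpy as np
import xarray as xr

da = xr.DataArray([0.0, 1.0, 2.0], dims="x")

# np.exp dispatches through xarray's array protocol, so no
# xarray.ufuncs wrapper is needed on recent versions
result = np.exp(da)

# apply_ufunc takes the function as its first argument
result2 = xr.apply_ufunc(np.exp, da)
```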
<python><python-xarray>
2024-11-06 11:29:09
0
1,598
striatum
79,162,377
1,440,764
AWS Sagemaker ClientError: train channel is not specified (Manifest file error)
<p>As a test, I'm running a train_manifest and a validation_manifest that are identical and contain only one file...</p> <pre><code>{&quot;source-ref&quot;: &quot;s3://&lt;bucketname&gt;/bad_ofs/Images_final/Crushing/iO/A_2208040CA2_1430_220804-205516.jpg&quot;, &quot;bounding-box&quot;: {&quot;annotations&quot;: [{&quot;class_id&quot;: 0, &quot;top&quot;: 750, &quot;left&quot;: 7000, &quot;height&quot;: 450, &quot;width&quot;: 5500}, {&quot;class_id&quot;: 0, &quot;top&quot;: 3000, &quot;left&quot;: 7000, &quot;height&quot;: 500, &quot;width&quot;: 5500}]}, &quot;bounding-box-metadata&quot;: {&quot;objects&quot;: [{&quot;confidence&quot;: 1.0}, {&quot;confidence&quot;: 1.0}], &quot;class-map&quot;: {&quot;0&quot;: &quot;good&quot;}, &quot;type&quot;: &quot;groundtruth/object-detection&quot;, &quot;human-annotated&quot;: &quot;yes&quot;, &quot;creation-date&quot;: &quot;2024-11-05T00:00:00&quot;, &quot;job-name&quot;: &quot;labeling-job/bounding-box&quot;}} </code></pre> <p>When attempting to train the model, I get the following error...</p> <pre><code>INFO:sagemaker.image_uris:Same images used for training and inference. Defaulting to image scope: inference. INFO:sagemaker.image_uris:Defaulting to the only supported framework/algorithm version: 1. INFO:sagemaker.image_uris:Ignoring unnecessary instance type: None. 
INFO:sagemaker:Creating training-job with name: object-detection-2024-11-06-11-11-07-815 ---------------------------------------------- Train Input Config: {'DataSource': {'S3DataSource': {'S3DataType': 'AugmentedManifestFile', 'S3Uri': 's3://agilent-aws-tmp-12-data/bad_ofs/Images_final/Crushing/train_manifest.json', 'S3DataDistributionType': 'FullyReplicated', 'AttributeNames': ['source-ref', 'bounding-box']}}, 'ContentType': 'application/x-image', 'InputMode': 'Pipe'} ---------------------------------------------- Validation Input Config: {'DataSource': {'S3DataSource': {'S3DataType': 'AugmentedManifestFile', 'S3Uri': 's3://agilent-aws-tmp-12-data/bad_ofs/Images_final/Crushing/validation_manifest.json', 'S3DataDistributionType': 'FullyReplicated', 'AttributeNames': ['source-ref', 'bounding-box']}}, 'ContentType': 'application/x-image', 'InputMode': 'Pipe'} ---------------------------------------------- 2024-11-06 11:11:10 Starting - Starting the training job... 2024-11-06 11:11:23 Starting - Preparing the instances for training... 2024-11-06 11:12:08 Downloading - Downloading the training image............... 2024-11-06 11:14:40 Training - Training image download completed. Training in progress...Docker entrypoint called with argument(s): train Running default environment configuration script Nvidia gpu devices, drivers and cuda toolkit versions (only available on hosts with GPU): Wed Nov 6 11:14:49 2024 +-----------------------------------------------------------------------------------------+ | NVIDIA-SMI 550.127.05 Driver Version: 550.127.05 CUDA Version: 12.4 | |-----------------------------------------+------------------------+----------------------+ | GPU Name Persistence-M | Bus-Id Disp.A | Volatile Uncorr. ECC | | Fan Temp Perf Pwr:Usage/Cap | Memory-Usage | GPU-Util Compute M. | | | | MIG M. 
| |=========================================+========================+======================| | 0 Tesla T4 On | 00000000:00:1E.0 Off | 0 | | N/A 23C P8 9W / 70W | 1MiB / 15360MiB | 0% Default | | | | N/A | +-----------------------------------------+------------------------+----------------------+ +-----------------------------------------------------------------------------------------+ | Processes: | | GPU GI CI PID Type Process name GPU Memory | | ID ID Usage | |=========================================================================================| | No running processes found | +-----------------------------------------------------------------------------------------+ Checking for nvidia driver and cuda compatibility. CUDA Compatibility driver provided. Proceeding with compatibility check between driver, cuda-toolkit and cuda-compat. Detected cuda-toolkit version: 11.1. Detected cuda-compat version: 455.32.00. Detected Nvidia driver version: 550.127.05. Nvidia driver compatible with cuda-toolkit. Disabling cuda-compat. Running custom environment configuration script /opt/amazon/lib/python3.8/site-packages/mxnet/model.py:97: SyntaxWarning: &quot;is&quot; with a literal. Did you mean &quot;==&quot;? 
if num_device is 1 and 'dist' not in kvstore: [11/06/2024 11:14:52 INFO 140179896461120] Reading default configuration from /opt/amazon/lib/python3.8/site-packages/algorithm/default-input.json: {'base_network': 'vgg-16', 'use_pretrained_model': '0', 'num_classes': '', 'mini_batch_size': '32', 'epochs': '30', 'learning_rate': '0.001', 'lr_scheduler_step': '', 'lr_scheduler_factor': '0.1', 'optimizer': 'sgd', 'momentum': '0.9', 'weight_decay': '0.0005', 'overlap_threshold': '0.5', 'nms_threshold': '0.45', 'num_training_samples': '', 'image_shape': '300', '_tuning_objective_metric': '', '_kvstore': 'device', 'kv_store': 'device', '_num_kv_servers': 'auto', 'label_width': '350', 'freeze_layer_pattern': '', 'nms_topk': '400', 'early_stopping': 'False', 'early_stopping_min_epochs': '10', 'early_stopping_patience': '5', 'early_stopping_tolerance': '0.0', '_begin_epoch': '0'} [11/06/2024 11:14:52 INFO 140179896461120] Merging with provided configuration from /opt/ml/input/config/hyperparameters.json: {'base_network': 'resnet-50', 'epochs': '30', 'image_shape': '300', 'learning_rate': '0.001', 'mini_batch_size': '16', 'nms_threshold': '0.45', 'num_classes': '2', 'num_training_samples': '1000', 'optimizer': 'adam', 'overlap_threshold': '0.5', 'use_pretrained_model': '1'} [11/06/2024 11:14:52 INFO 140179896461120] Final configuration: {'base_network': 'resnet-50', 'use_pretrained_model': '1', 'num_classes': '2', 'mini_batch_size': '16', 'epochs': '30', 'learning_rate': '0.001', 'lr_scheduler_step': '', 'lr_scheduler_factor': '0.1', 'optimizer': 'adam', 'momentum': '0.9', 'weight_decay': '0.0005', 'overlap_threshold': '0.5', 'nms_threshold': '0.45', 'num_training_samples': '1000', 'image_shape': '300', '_tuning_objective_metric': '', '_kvstore': 'device', 'kv_store': 'device', '_num_kv_servers': 'auto', 'label_width': '350', 'freeze_layer_pattern': '', 'nms_topk': '400', 'early_stopping': 'False', 'early_stopping_min_epochs': '10', 'early_stopping_patience': '5', 
'early_stopping_tolerance': '0.0', '_begin_epoch': '0'} Process 13 is a worker. [11/06/2024 11:14:52 INFO 140179896461120] Using default worker. [11/06/2024 11:14:52 INFO 140179896461120] Loaded iterator creator application/x-image for content type ('application/x-image', '1.0') [11/06/2024 11:14:52 INFO 140179896461120] Loaded iterator creator application/x-recordio for content type ('application/x-recordio', '1.0') [11/06/2024 11:14:52 INFO 140179896461120] Loaded iterator creator image/jpeg for content type ('image/jpeg', '1.0') [11/06/2024 11:14:52 INFO 140179896461120] Loaded iterator creator image/png for content type ('image/png', '1.0') [11/06/2024 11:14:52 INFO 140179896461120] Checkpoint loading and saving are disabled. [11/06/2024 11:14:52 INFO 140179896461120] The channel 'train' is in pipe input mode under /opt/ml/input/data/train. [11/06/2024 11:14:52 INFO 140179896461120] The channel 'train' is in pipe input mode under /opt/ml/input/data/train. [11/06/2024 11:14:52 ERROR 140179896461120] Customer Error: train channel is not specified. 2024-11-06 11:15:04 Uploading - Uploading generated training model 2024-11-06 11:15:04 Failed - Training job failed --------------------------------------------------------------------------- UnexpectedStatusException Traceback (most recent call last) Cell In[118], line 70 54 od_model.set_hyperparameters( 55 base_network=&quot;resnet-50&quot;, 56 use_pretrained_model=1, (...) 
65 num_training_samples=1000 66 ) 68 # Start the training job with both train and validation channels 69 # od_model.fit({&quot;train&quot;: train_input, &quot;validation&quot;: validation_input}) ---&gt; 70 od_model.fit({&quot;train&quot;: train_input, &quot;validation&quot;: validation_input}) File ~/anaconda3/envs/tensorflow2_p310/lib/python3.10/site-packages/sagemaker/workflow/pipeline_context.py:346, in runnable_by_pipeline.&lt;locals&gt;.wrapper(*args, **kwargs) 342 return context 344 return _StepArguments(retrieve_caller_name(self_instance), run_func, *args, **kwargs) --&gt; 346 return run_func(*args, **kwargs) File ~/anaconda3/envs/tensorflow2_p310/lib/python3.10/site-packages/sagemaker/estimator.py:1376, in EstimatorBase.fit(self, inputs, wait, logs, job_name, experiment_config) 1374 forward_to_mlflow_tracking_server = True 1375 if wait: -&gt; 1376 self.latest_training_job.wait(logs=logs) 1377 if forward_to_mlflow_tracking_server: 1378 log_sagemaker_job_to_mlflow(self.latest_training_job.name) File ~/anaconda3/envs/tensorflow2_p310/lib/python3.10/site-packages/sagemaker/estimator.py:2750, in _TrainingJob.wait(self, logs) 2748 # If logs are requested, call logs_for_jobs. 2749 if logs != &quot;None&quot;: -&gt; 2750 self.sagemaker_session.logs_for_job(self.job_name, wait=True, log_type=logs) 2751 else: 2752 self.sagemaker_session.wait_for_job(self.job_name) File ~/anaconda3/envs/tensorflow2_p310/lib/python3.10/site-packages/sagemaker/session.py:5945, in Session.logs_for_job(self, job_name, wait, poll, log_type, timeout) 5924 def logs_for_job(self, job_name, wait=False, poll=10, log_type=&quot;All&quot;, timeout=None): 5925 &quot;&quot;&quot;Display logs for a given training job, optionally tailing them until job is complete. 5926 5927 If the output is a tty or a Jupyter cell, it will be color-coded (...) 5943 exceptions.UnexpectedStatusException: If waiting and the training job fails. 
5944 &quot;&quot;&quot; -&gt; 5945 _logs_for_job(self, job_name, wait, poll, log_type, timeout) File ~/anaconda3/envs/tensorflow2_p310/lib/python3.10/site-packages/sagemaker/session.py:8547, in _logs_for_job(sagemaker_session, job_name, wait, poll, log_type, timeout) 8544 last_profiler_rule_statuses = profiler_rule_statuses 8546 if wait: -&gt; 8547 _check_job_status(job_name, description, &quot;TrainingJobStatus&quot;) 8548 if dot: 8549 print() File ~/anaconda3/envs/tensorflow2_p310/lib/python3.10/site-packages/sagemaker/session.py:8611, in _check_job_status(job, desc, status_key_name) 8605 if &quot;CapacityError&quot; in str(reason): 8606 raise exceptions.CapacityError( 8607 message=message, 8608 allowed_statuses=[&quot;Completed&quot;, &quot;Stopped&quot;], 8609 actual_status=status, 8610 ) -&gt; 8611 raise exceptions.UnexpectedStatusException( 8612 message=message, 8613 allowed_statuses=[&quot;Completed&quot;, &quot;Stopped&quot;], 8614 actual_status=status, 8615 ) UnexpectedStatusException: Error for Training job object-detection-2024-11-06-11-11-07-815: Failed. Reason: ClientError: train channel is not specified., exit code: 2. 
Check troubleshooting guide for common errors: https://docs.aws.amazon.com/sagemaker/latest/dg/sagemaker-python-sdk-troubleshooting.html </code></pre> <p>The code looks like:</p> <pre><code>import boto3 import sagemaker from sagemaker import get_execution_role from sagemaker.inputs import TrainingInput from sagemaker.image_uris import retrieve # Initialize the session and role sagemaker_session = sagemaker.Session() role = get_execution_role() # Specify the S3 bucket and manifest file path bucket_name = &quot;&lt;bucketname&gt;&quot; train_manifest_s3_key = &quot;bad_ofs/Images_final/Crushing/train_manifest.json&quot; validation_manifest_s3_key = &quot;bad_ofs/Images_final/Crushing/validation_manifest.json&quot; # Define TrainingInput for training and validation data train_input = TrainingInput( s3_data=f&quot;s3://{bucket_name}/{train_manifest_s3_key}&quot;, content_type=&quot;application/x-image&quot;, s3_data_type=&quot;AugmentedManifestFile&quot;, attribute_names=[&quot;source-ref&quot;, &quot;bounding-box&quot;], input_mode=&quot;Pipe&quot; ) validation_input = TrainingInput( s3_data=f&quot;s3://{bucket_name}/{validation_manifest_s3_key}&quot;, content_type=&quot;application/x-image&quot;, s3_data_type=&quot;AugmentedManifestFile&quot;, attribute_names=[&quot;source-ref&quot;, &quot;bounding-box&quot;], input_mode=&quot;Pipe&quot; ) print('----------------------------------------------') print(&quot;Train Input Config:&quot;, train_input.config) print('----------------------------------------------') print(&quot;Validation Input Config:&quot;, validation_input.config) print('----------------------------------------------') # Retrieve the Docker container for object detection container = retrieve(&quot;object-detection&quot;, sagemaker_session.boto_region_name) # Define the estimator for SageMaker od_model = sagemaker.estimator.Estimator( container, role, instance_count=1, instance_type=&quot;ml.g4dn.xlarge&quot;, volume_size=50, max_run=3600, 
output_path=f&quot;s3://{bucket_name}/output&quot;, sagemaker_session=sagemaker_session ) # Set hyperparameters for object detection od_model.set_hyperparameters( base_network=&quot;resnet-50&quot;, use_pretrained_model=1, num_classes=2, mini_batch_size=16, epochs=30, learning_rate=0.001, optimizer=&quot;adam&quot;, overlap_threshold=0.5, nms_threshold=0.45, image_shape=300, num_training_samples=1000 ) # Start the training job with both train and validation channels # od_model.fit({&quot;train&quot;: train_input, &quot;validation&quot;: validation_input}) od_model.fit({&quot;train&quot;: train_input, &quot;validation&quot;: validation_input}) </code></pre>
<python><machine-learning><artificial-intelligence><amazon-sagemaker>
2024-11-06 11:22:30
0
2,512
RightmireM
79,162,319
11,561,121
How to correctly use and parse an XCom list of dicts in Airflow
<p>I am writing an Airflow dag which generates a dict list based on run parameters that I pass in json format using UI and then parses and uses its elements inside a <code>KubernetesPodOperator</code>. In the below code I simplified my use case to simple <code>echo</code>.</p> <p><strong>This is my code</strong></p> <pre><code>from datetime import datetime import yaml from airflow.decorators import dag, task from airflow.models.param import Param from airflow.operators.empty import EmptyOperator from airflow.providers.cncf.kubernetes.operators.pod import KubernetesPodOperator config = yaml.safe_load(open('/opt/airflow/${dag_config_filename}', 'r')) namespace = config['kubernetes']['namespace'] files = config['sftp']['files'] # list of 4 names default_args = { 'owner': 'airflow', 'depends_on_past': False, 'email': config['alerting_email'], 'email_on_failure': True, 'email_on_retry': False, 'retries': 1, 'executor_config': { 'KubernetesExecutor': { 'service_account_name': '${airflow_worker}', 'namespace': f'{namespace}', } }, } execution_date = '{{ ds_nodash }}' wizaly_files_date_ingestion = '{{ dag_run.start_date.strftime(&quot;%Y%m%d&quot;) }}' @dag( dag_id=config['dag_id'], default_args=default_args, description='Pipeline for Perfmarket ETL', tags=['perfmarket'], schedule_interval=config['schedule_interval'], start_date=datetime(2024, 10, 22), catchup=False, params={ 'entry_param': Param( default='0', type='string', enum=['0', '1', '2'], description='Run type: 0=daily, 1=rattrapage, 2=explo' ) } ) def test_dag() -&gt; None: start_task = EmptyOperator(task_id='start') @task def define_run_confs(**context): params = context['params'] entry_param = params['entry_param'] if entry_param == '0': # daily (default) files_typed = [file + '_option1' for file in config['sftp']['files']] elif entry_param == '1': files_typed = [file + '_option2' for file in config['sftp']['files']] else: files_typed = [file + '_option3' for file in config['sftp']['files']] zip_files = [ { 
'first_key': f'{file_typed}.zip', 'second_key': 'ch1' if '_type1' in file else 'ch2', 'task_id': f'import_{file}', } for file, file_typed in zip(files, files_typed) ] return zip_files configs = define_run_confs() # execution of the above function to generate my dict list. import_zip_tasks = [] # I parse my dict list and for each dict I print keys for file_info in configs: import_task = KubernetesPodOperator( task_id=f'import_zip_{file_info[&quot;task_id&quot;]}', name='import_zip', cmds=['bash', '-cx'], arguments=[ f&quot;&quot;&quot;echo {file_info['first_key']}; echo {file_info['second_key']} \ &quot;&quot;&quot; ] ) import_zip_tasks.append(import_task) end_task = EmptyOperator(task_id='end') start_task &gt;&gt; configs &gt;&gt; import_zip_tasks &gt;&gt; end_task test_dag() </code></pre> <p>The problem is that I am unable to iterate over my list because of: <code>TypeError: 'XComArg' object is not iterable</code>. The code cannot iterate over the <code>configs</code> variable. How can I transform it into a normal list?</p> <p>I tried multiple techniques but I am unable to iterate over the list.</p>
<python><airflow>
2024-11-06 11:03:15
0
1,019
Haha
79,162,282
2,473,382
Access destination property from a Glue S3 DataSink
<p>With Glue 4.0, so Python 3.10.</p> <p>Assuming I am creating a datasink this way:</p> <pre class="lang-py prettyprint-override"><code>sink = gluecontext.getSink(path=&quot;s3://bucket/key&quot;, connection_type=&quot;s3&quot;) </code></pre> <p>How can I access the path property (or equivalent if it is split in <code>bucket</code> and <code>key</code> for instance):</p> <pre class="lang-py prettyprint-override"><code>path = sink.connection_options.path # Not valid </code></pre> <p>I can of course work around it, by creating my own wrapper around the sink object, but there must be a way, possibly via the <code>_jsink</code> object? The <a href="https://docs.aws.amazon.com/glue/latest/dg/aws-glue-api-crawler-pyspark-extensions-glue-context.html#aws-glue-api-crawler-pyspark-extensions-glue-context-get-sink" rel="nofollow noreferrer">AWS API doc</a> is very secretive</p>
<python><amazon-s3><aws-glue>
2024-11-06 10:53:20
0
3,081
Guillaume
79,162,280
978,781
Python 3.13 generic classes with type parameters and inheritance
<p>I'm exploring types in Python 3.13 and can't get the generic typing hints as strict as I would like.</p> <p>The code below defines a generic Predicate class, two concrete subclasses, and a generic negation predicate.</p> <pre class="lang-py prettyprint-override"><code>class Predicate[T](Callable[[T], bool]): &quot;&quot;&quot; Base class for predicates: a function that takes a 'T' and evaluates to True or False. &quot;&quot;&quot; def __init__(self, _eval: Callable[[T], bool]): self.eval = _eval def __call__(self, *args, **kwargs) -&gt; bool: return self.eval(*args, **kwargs) class StartsWith(Predicate[str]): def __init__(self, prefix: str): super().__init__(lambda s: s.startswith(prefix)) class GreaterThan(Predicate[float]): def __init__(self, y: float): super().__init__(lambda x: x &gt; y) class Not[T](Predicate[T]): def __init__(self, p: Predicate[T]): super().__init__(lambda x: not p(x)) if __name__ == '__main__': assert StartsWith(&quot;F&quot;)(&quot;Foo&quot;) assert GreaterThan(10)(42) assert Not(StartsWith(&quot;A&quot;))(&quot;Foo&quot;) assert Not(GreaterThan(10))(3) </code></pre> <p>This results in an error:</p> <pre><code>Traceback (most recent call last): File &quot;[...]/generics_demo.py&quot;, line 36, in &lt;module&gt; class StartsWith(Predicate[str]): ~~~~~~~~~^^^^^ File &quot;&lt;frozen _collections_abc&gt;&quot;, line 475, in __new__ TypeError: Callable must be used as Callable[[arg, ...], result]. </code></pre> <p>When using <code>class StartsWith(Predicate):</code> (i.e. any predicate) it works, but that is too loosely defined to my taste.</p> <p>Any hints on how to go about this?</p>
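One way around this `TypeError` (a sketch, not the only possible design): don't inherit from `Callable` at all — `collections.abc.Callable` insists on being parameterized as `Callable[[arg, ...], result]`, which is exactly what the traceback complains about. Making `Predicate` itself generic and giving it `__call__` keeps the call syntax; the pre-3.12 `Generic[T]` spelling is used below for wider compatibility.

```python
from typing import Callable, Generic, TypeVar

T = TypeVar("T")

class Predicate(Generic[T]):
    """A function from T to bool; generic without subclassing Callable."""
    def __init__(self, _eval: Callable[[T], bool]):
        self._eval = _eval

    def __call__(self, x: T) -> bool:
        return self._eval(x)

class StartsWith(Predicate[str]):
    def __init__(self, prefix: str):
        super().__init__(lambda s: s.startswith(prefix))

class Not(Predicate[T]):
    def __init__(self, p: "Predicate[T]"):
        super().__init__(lambda x: not p(x))
```

Type checkers still accept `Predicate[str]` instances wherever a `Callable[[str], bool]` duck-types, because `__call__` has the right signature.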
<python><generics><python-typing><python-3.12>
2024-11-06 10:53:00
1
12,605
Arie
79,162,161
14,550,855
How to get a summary of a PyTorch model that uses dictionary as an input
<p>My model takes a dictionary as input, e.g. <code>x = {'image': torch.tensor, 'number': torch.tensor}</code>. The model looks like this:</p> <pre><code>class MyModel(nn.Module): def __init__(self): super(MyModel, self).__init__() self.imgmodule = ImgModule() self.nummodule = NumModule() self.predict = nn.Linear(input_size, 100) def forward(self, x): xImg = self.imgmodule(x['image']) xNum = self.nummodule(x['number']) x = self.predict(torch.cat([xImg, xNum], dim=1)) return x </code></pre> <p>How do I get a model summary, similar to the one provided by the pytorch-summary package?</p> <p>So far I have tried using it like this:</p> <pre><code>from torchsummary import summary summary(model, input_size=[(3, 224, 224), (1, )]) </code></pre> <p>But I get an error:</p> <pre><code>TypeError: MyModel.forward() takes 2 positional arguments but 3 were given </code></pre>
<python><machine-learning><deep-learning><pytorch>
2024-11-06 10:22:03
0
401
arr10
79,162,109
5,816,253
Geopandas fails reading geojson
<p>I'm trying to read a geojson I created using these steps</p> <pre class="lang-py prettyprint-override"><code>import geopandas as gpd vec_data = gpd.read_file(&quot;map.shp&quot;) vec_data.head() vec_data['LPIS_name'].unique() sel_crop = vec_data[vec_data.LPIS_name == 'Permanent Grassland'] sel_crop.to_file(&quot;Permanent_Grassland.geojson&quot;, driver='GeoJSON') feature = gpd.read_file(&quot;Permanent_Grassland.geojson&quot;) </code></pre> <p>but I'm getting the following error:</p> <pre><code>{ &quot;name&quot;: &quot;DataSourceError&quot;, &quot;message&quot;: &quot;Failed to read GeoJSON data&quot;, &quot;stack&quot;: &quot;--------------------------------------------------------------------------- DataSourceError Traceback (most recent call last) Cell In[8], line 1 ----&gt; 1 feature = gpd.read_file(path_feature) File c:\\Users\\bventura\\AppData\\Local\\anaconda3\\Lib\\site-packages\\geopandas\\io\\file.py:294, in _read_file(filename, bbox, mask, columns, rows, engine, **kwargs) 291 from_bytes = True 293 if engine == \&quot;pyogrio\&quot;: --&gt; 294 return _read_file_pyogrio( 295 filename, bbox=bbox, mask=mask, columns=columns, rows=rows, **kwargs 296 ) 298 elif engine == \&quot;fiona\&quot;: 299 if pd.api.types.is_file_like(filename): File c:\\Users\\bventura\\AppData\\Local\\anaconda3\\Lib\\site-packages\\geopandas\\io\\file.py:547, in _read_file_pyogrio(path_or_bytes, bbox, mask, rows, **kwargs) 538 warnings.warn( 539 \&quot;The 'include_fields' and 'ignore_fields' keywords are deprecated, and \&quot; 540 \&quot;will be removed in a future release. You can use the 'columns' keyword \&quot; (...) 
543 stacklevel=3, 544 ) 545 kwargs[\&quot;columns\&quot;] = kwargs.pop(\&quot;include_fields\&quot;) --&gt; 547 return pyogrio.read_dataframe(path_or_bytes, bbox=bbox, **kwargs) File c:\\Users\\bventura\\AppData\\Local\\anaconda3\\Lib\\site-packages\\pyogrio\\geopandas.py:261, in read_dataframe(path_or_buffer, layer, encoding, columns, read_geometry, force_2d, skip_features, max_features, where, bbox, mask, fids, sql, sql_dialect, fid_as_index, use_arrow, on_invalid, arrow_to_pandas_kwargs, **kwargs) 256 if not use_arrow: 257 # For arrow, datetimes are read as is. 258 # For numpy IO, datetimes are read as string values to preserve timezone info 259 # as numpy does not directly support timezones. 260 kwargs[\&quot;datetime_as_string\&quot;] = True --&gt; 261 result = read_func( 262 path_or_buffer, 263 layer=layer, 264 encoding=encoding, 265 columns=columns, 266 read_geometry=read_geometry, 267 force_2d=gdal_force_2d, 268 skip_features=skip_features, 269 max_features=max_features, 270 where=where, 271 bbox=bbox, 272 mask=mask, 273 fids=fids, 274 sql=sql, 275 sql_dialect=sql_dialect, 276 return_fids=fid_as_index, 277 **kwargs, 278 ) 280 if use_arrow: 281 meta, table = result File c:\\Users\\bventura\\AppData\\Local\\anaconda3\\Lib\\site-packages\\pyogrio\\raw.py:196, in read(path_or_buffer, layer, encoding, columns, read_geometry, force_2d, skip_features, max_features, where, bbox, mask, fids, sql, sql_dialect, return_fids, datetime_as_string, **kwargs) 56 \&quot;\&quot;\&quot;Read OGR data source into numpy arrays. 57 58 IMPORTANT: non-linear geometry types (e.g., MultiSurface) are converted (...) 
191 192 \&quot;\&quot;\&quot; 194 dataset_kwargs = _preprocess_options_key_value(kwargs) if kwargs else {} --&gt; 196 return ogr_read( 197 get_vsi_path_or_buffer(path_or_buffer), 198 layer=layer, 199 encoding=encoding, 200 columns=columns, 201 read_geometry=read_geometry, 202 force_2d=force_2d, 203 skip_features=skip_features, 204 max_features=max_features or 0, 205 where=where, 206 bbox=bbox, 207 mask=_mask_to_wkb(mask), 208 fids=fids, 209 sql=sql, 210 sql_dialect=sql_dialect, 211 return_fids=return_fids, 212 dataset_kwargs=dataset_kwargs, 213 datetime_as_string=datetime_as_string, 214 ) File c:\\Users\\bventura\\AppData\\Local\\anaconda3\\Lib\\site-packages\\pyogrio\\_io.pyx:1239, in pyogrio._io.ogr_read() File c:\\Users\\bventura\\AppData\\Local\\anaconda3\\Lib\\site-packages\\pyogrio\\_io.pyx:219, in pyogrio._io.ogr_open() DataSourceError: Failed to read GeoJSON data&quot; } </code></pre> <p>As requested, please here you can download the <a href="https://scientificnet-my.sharepoint.com/:u:/g/personal/bventura_eurac_edu/EUBQJa1eowNFs--37bQySB0BpG804b0-h5DixHvGq6-uWw?e=iFaU6X" rel="nofollow noreferrer">Geojson</a> for a better debug of the code</p> <p>In the meanwhile I tried to search online and it seems that a potential error could be the following: <em>Polygons and MultiPolygons should follow the right-hand rule</em></p>
<python><geopandas><shapefile>
2024-11-06 10:03:26
1
375
sylar_80
79,162,094
3,909,896
Using Databricks asset bundles with typer instead of argparse
<p>I want to use Databricks asset bundles - I'd like to use <code>typer</code> as a CLI tool, but I have only been able to set it up with <code>argparse</code>. Argparse seems to be able to retrieve the arguments from the databricks task, but not typer.</p> <p>I specified two entrypoints in my pyproject.toml</p> <pre><code>[tool.poetry.scripts] mypackage_ep_typer = &quot;my_package.entrypoint_typer:main&quot; mypackage_ep_argparse = &quot;my_package.entrypoint_argparse:entrypoint_generic&quot; </code></pre> <p>I'm able to use the asset bundles with an entrypoint to my ELTL applications with argparse as follows in the <strong>entrypoint_argparse.py</strong>:</p> <pre><code>def entrypoint_generic(): &quot;&quot;&quot;Execute the application.&quot;&quot;&quot; logger.info(&quot;Executing 'argparse' entrypoint...&quot;) parser = argparse.ArgumentParser(description=&quot;My module.&quot;) parser.add_argument( &quot;--applicationname&quot;, help=&quot;The name of the application to execute.&quot;, type=str, required=True, dest=&quot;applicationname&quot;, ) args = parser.parse_args() logger.info(f&quot;{args.applicationname=}&quot;) </code></pre> <p>This one deploys and runs without any issues in the asset bundle's workflow task.</p> <p>However, if I try the same with typer in <strong>entrypoint_typer.py</strong>, I cannot get it to work:</p> <pre><code>def main( applicationname: Annotated[str, typer.Option(help=&quot;Application to execute.&quot;)] ): &quot;&quot;&quot;Execute the application.&quot;&quot;&quot; logger.info(&quot;Executing 'typer' entrypoint...&quot;) logger.info(f&quot;{applicationname=}&quot;) </code></pre> <p>I can run typer locally:</p> <pre><code>&gt; poetry run python -m typer .\entrypoint_typer.py run --applicationname MYAPPNAME 2024-11-06 xx:xx:xx - root - INFO - Executing 'typer' entrypoint... 
2024-11-06 xx:xx:xx - root - INFO - applicationname='MYAPPNAME' </code></pre> <p>But when I try to deploy and run my asset bundle, I get this error when the workflow task tries to start:</p> <pre><code>TypeError: main() missing 1 required positional argument: 'applicationname' </code></pre> <p><a href="https://i.sstatic.net/DdqYAyd4.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/DdqYAyd4.png" alt="enter image description here" /></a></p>
<python><databricks><typer><databricks-asset-bundle>
2024-11-06 09:57:39
0
3,013
Cribber
79,162,038
4,451,315
Iterate over groups of PyArrow table
<p>In pandas I can iterate over groups in <code>groupby</code>:</p> <pre class="lang-py prettyprint-override"><code>In [3]: import pandas as pd In [4]: data = {'a': [1, 1, 1, 2, 2], 'b': [2, 4, 3, 5, 6]} In [5]: df = pd.DataFrame(data) In [6]: for _, sub_df in df.groupby('a'): ...: print(sub_df) ...: a b 0 1 2 1 1 4 2 1 3 a b 3 2 5 4 2 6 </code></pre> <p>Is there an efficient way to do that in PyArrow? Say I start with</p> <pre class="lang-py prettyprint-override"><code>tbl = pa.table(data) </code></pre> <p>All I could come up with is:</p> <pre class="lang-py prettyprint-override"><code>In [16]: for x in pc.unique(tbl['a']): ...: print(tbl.filter(pc.equal(tbl['a'], x))) ...: pyarrow.Table a: int64 b: int64 ---- a: [[1,1,1]] b: [[2,4,3]] pyarrow.Table a: int64 b: int64 ---- a: [[2,2]] b: [[5,6]] </code></pre> <p>but this involves scanning the whole <code>'a'</code> column multiple times... Is there a more performant way?</p>
<python><pyarrow>
2024-11-06 09:41:18
1
11,062
ignoring_gravity
79,161,932
3,668,129
Can't iterate over dataset (AttributeError: module 'numpy' has no attribute 'complex'.)
<p>I'm using:</p> <pre><code>windows python version 3.10.0 datasets==2.21.0 numpy==1.24.4 </code></pre> <p>I tried to iterate over a dataset I just downloaded:</p> <pre><code>from datasets import load_dataset dataset = load_dataset(&quot;jacktol/atc-dataset&quot;, download_mode='force_redownload') dataset['train'][0] </code></pre> <p>and got this error:</p> <pre><code>AttributeError: module 'numpy' has no attribute 'complex'. `np.complex` was a deprecated alias for the builtin `complex`. To avoid this error in existing code, use `complex` by itself. Doing this will not modify any behavior and is safe. If you specifically wanted the numpy scalar type, use `np.complex128` here. The aliases was originally deprecated in NumPy 1.20; for more details and guidance see the original release note at: https://numpy.org/devdocs/release/1.20.0-notes.html#deprecations </code></pre> <p>Which versions of numpy and datasets do I need to iterate over the dataset?</p>
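Some library in the loading path (not necessarily `datasets` itself) still references the alias NumPy removed in 1.24. The real fix is upgrading whichever dependency still uses `np.complex`; as a stopgap — an assumed workaround, not a supported API — the alias can be restored before loading:

```python
import numpy as np

# Restore the alias removed in NumPy 1.24 so that older libraries
# referencing np.complex keep working (assumed stopgap; prefer
# upgrading the library that still uses the alias).
if not hasattr(np, "complex"):
    np.complex = complex  # type: ignore[attr-defined]
```

Run this shim before `load_dataset` so the attribute exists by the time the offending code path executes.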
<python><numpy><dataset><huggingface-datasets>
2024-11-06 09:07:20
1
4,880
user3668129
79,161,931
2,219,080
In pytest set up databases mirroring and test
<p>I have a Django app that reads a read_only replica from a model in the DB. So in the <code>pytest</code> conftest fixtures, I have this <code>settings.DATABASES[&quot;read_only&quot;][&quot;TEST&quot;] = {&quot;MIRROR&quot;: &quot;default&quot;}</code> but when I instantiate fixtures, the <code>read_only</code> database doesn't have the data that I created with factoryboy.</p> <pre class="lang-py prettyprint-override"><code>@pytest.fixture() def populate_cache() -&gt; Callable[[CountryFactory], Household]: &quot;&quot;&quot; Fixture to populate the dashboard cache for a specific business area, verify creation in the default DB, and ensure readability in the read_only DB. &quot;&quot;&quot; def _populate_cache(goodcountry: CountryFactory) -&gt; Household: # Create household and related records household, individuals = create_household({&quot;business_area&quot;: afghanistan}) PaymentFactory.create_batch(5, household=household) PaymentRecordFactory.create_batch(3, household=household) # Verify data exists in the default DB payment_count_default = Payment.objects.using(&quot;default&quot;).filter(household=household).count() print(f&quot;Payments in default DB: {payment_count_default}&quot;) # Verify data accessibility in the read_only DB payment_count_read_only = Payment.objects.using(&quot;read_only&quot;).filter(household=household).count() print(f&quot;Payments in read_only DB: {payment_count_read_only}&quot;) # Assert that the data is accessible in the read_only DB assert payment_count_read_only == payment_count_default, &quot;Mismatch in Payment count between default and read_only DBs.&quot; return household return _populate_cache </code></pre> <p>and the assertion fails, with this output:</p> <blockquote> <p>Payments in default DB: 5 Payments in read_only DB: 0</p> </blockquote>
<python><django><pytest-django><factory-boy>
2024-11-06 09:07:13
0
1,267
iMitwe
79,161,843
4,036,004
How to go about linking external data to every word in a paragraph and access the external data even when the words are moved around?
<p>Apologies, there is no code yet as I'm not sure where to begin. I'm using Python.</p> <p>I am using <a href="https://github.com/linto-ai/whisper-timestamped" rel="nofollow noreferrer">whisper-timestamped</a> to transcribe an audio file and create a json file that contains the timecode (start &amp; end) of every word in an audio file.</p> <p>I want to be able to display the words (in their original sequence) in a text document and then cut and paste the words in any order I wish, then be able to access the original timecodes of each word regardless of their new position.</p> <p>For example:</p> <p><em>I'm not sure where to begin.</em></p> <p>Produces:</p> <p><em>I'm (00.00) not (00.01) sure (00.02) where (00.04) to (00.05) begin (00.07)</em></p> <p><em>Timecodes data 'stays' with each word as I reorder them.</em></p> <p><em>not (00.01) begin (00.07) I'm (00.00) sure (00.02)</em></p> <p>How would I go about trying to do this? Where is a good place to start?</p>
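A minimal sketch of one idea (the field names are hypothetical; whisper-timestamped's actual JSON keys may differ): keep each word as a small record, and reorder the records themselves rather than plain strings, so the timecodes travel with each word.

```python
# Each word is a record; its timecodes move with it wherever it goes.
words = [
    {"text": "I'm",   "start": 0.00, "end": 0.01},
    {"text": "not",   "start": 0.01, "end": 0.02},
    {"text": "sure",  "start": 0.02, "end": 0.04},
    {"text": "where", "start": 0.04, "end": 0.05},
    {"text": "to",    "start": 0.05, "end": 0.07},
    {"text": "begin", "start": 0.07, "end": 0.09},
]

# "Cut and paste" is just reordering the records:
reordered = [words[1], words[5], words[0], words[2]]
line = " ".join(f"{w['text']} ({w['start']:05.2f})" for w in reordered)
print(line)  # not (00.01) begin (00.07) I'm (00.00) sure (00.02)
```

For an interactive text document you would need an editor layer that maps visible tokens back to these records (e.g. by a per-word id), but the record list is the part that preserves the timecodes.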
<python><interactive><transcription>
2024-11-06 08:38:46
0
1,309
JulianJ
79,161,804
10,200,497
What is the best way to filter the groups that have at least N rows that meet the conditions of a mask?
<p>This is my DataFrame:</p> <pre><code>import pandas as pd

df = pd.DataFrame({
    'a': [10, 20, 30, 50, 50, 50, 4, 100],
    'b': [30, 3, 200, 25, 24, 31, 29, 2],
    'd': list('aaabbbcc')
})
</code></pre> <p>Expected output:</p> <pre><code>    a    b  d
0  10   30  a
1  20    3  a
2  30  200  a
</code></pre> <p>The grouping is by column <code>d</code>. I want to return the groups that have at least two instances of this mask</p> <pre><code>m = (df.b.gt(df.a))
</code></pre> <p>This is what I have tried. It works but I wonder if there is a better/more efficient way to do it.</p> <pre><code>out = df.groupby('d').filter(lambda x: len(x.loc[x.b.gt(x.a)]) &gt;= 2)
</code></pre>
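For comparison, an alternative I have tried on this toy frame computes the mask once and uses a grouped `transform` instead of `filter`, which avoids calling a Python lambda per group:

```python
import pandas as pd

df = pd.DataFrame({
    'a': [10, 20, 30, 50, 50, 50, 4, 100],
    'b': [30, 3, 200, 25, 24, 31, 29, 2],
    'd': list('aaabbbcc')
})

m = df.b.gt(df.a)
# Count the mask's True values per group of d, broadcast the count back
# to every row, and keep rows of groups with at least two hits.
out = df[m.groupby(df['d']).transform('sum').ge(2)]
print(out)
```

On larger frames the vectorized count tends to be noticeably faster than `GroupBy.filter` with a lambda, though it is worth benchmarking on real data.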
<python><pandas><dataframe>
2024-11-06 08:20:38
3
2,679
AmirX
79,161,751
16,815,358
Problems with updating colorbar with matplotlib.Slider listener in Jupyter
<p>The problem that I am having is during updating the colorbar of a <code>plt.imshow</code> plot. Here's the code; I will try to break it down and explain some of it afterwards.</p> <ul> <li>For the first cell in Jupyter, I have the functions, the imports and the input parameters:</li> </ul> <pre><code># Imports ##############################################################
import numpy as np
%matplotlib notebook
import matplotlib.pyplot as plt
import matplotlib.colors as colors
from matplotlib.widgets import Slider
from scipy.ndimage import label, find_objects

# Functions ############################################################
def intensity_distribution(r, z, PMax, w0, zR):
    wZ = w0 * np.sqrt(1 + (z / zR)**2)  # beam radius at z (Gaussian beam spreading)
    I0 = 2 * PMax / (np.pi * wZ**2)  # peak intensity at radius wZ
    return I0 * np.exp(-2 * r**2 / wZ**2), wZ  # Gaussian intensity distribution, beam radius

def get_circle_ROI(I, r, threshold):
    ROI = (I &gt; threshold).astype(&quot;uint8&quot;)  # binary mask for regions above the threshold
    labels, features = label(ROI)  # label connected regions
    slices = find_objects(labels)  # get bounding box slices for labeled regions
    xSlice, ySlice = slices[0]  # extract x and y slices of the largest feature
    ROIHeight = (xSlice.stop - xSlice.start) * (r[1] - r[0]) * 1e6  # convert height to micrometers
    ROIWidth = (ySlice.stop - ySlice.start) * (r[1] - r[0]) * 1e6  # convert width to micrometers
    cx = (ySlice.start + ySlice.stop) // 2  # x-coordinate of the center
    cy = (xSlice.start + xSlice.stop) // 2  # y-coordinate of the center
    centre = (r[cy] * 1e6, r[cx] * 1e6)  # convert center coordinates to micrometers
    radius = min(ROIWidth, ROIHeight) / 2  # radius is the smaller dimension's half-width
    return centre, radius

def update_plot(PMax, zOffset):
    &quot;&quot;&quot;Update the heatmap based on new parameters.&quot;&quot;&quot;
    global colorbar

    # Calculate intensity distribution at given z offset
    I, wZ = intensity_distribution(np.sqrt(R**2 + Z**2), zOffset, PMax, BEAM_RADIUS_AT_FOCUS, zR)
    I /= 1e6  # convert intensity from W/m² to W/mm²
    I += 0.01  # small offset for better visualization contrast
    max_intensity = I.max()  # maximum intensity in the current distribution

    # Calculate the on-axis peak intensity at focus in W/mm²
    I0 = (2 * PMax) / (np.pi * BEAM_RADIUS_AT_FOCUS**2)  # peak intensity in W/m² at z = 0
    I0 /= 1e6  # convert peak intensity to W/mm²

    # Calculate the Full Width at Half Maximum (FWHM) in micrometers
    centre, fwhm = get_circle_ROI(I, r, max_intensity / 2)
    _, tenth = get_circle_ROI(I, r, max_intensity / 10)

    # Clear and update plot
    ax.clear()  # clear current axes

    # Display the updated intensity distribution as a heatmap
    im = ax.imshow(I, extent=[r[0]*1e6, r[-1]*1e6, r[0]*1e6, r[-1]*1e6],
                   norm=colors.LogNorm(vmax=14000))
    ax.set_xlabel(&quot;x (μm)&quot;)  # label for x-axis
    ax.set_ylabel(&quot;y (μm)&quot;)  # label for y-axis

    # Add plot title with z offset, FWHM, and max intensity in W/mm²
    ax.set_title(f&quot;FWHM = {fwhm:.1f} μm\n&quot;
                 f&quot;Radius at 10% of total power = {tenth:.2f} μm\n&quot;
                 f&quot;Max power = {I.max():.2f} W/mm²&quot;, loc=&quot;left&quot;)

    # Draw a circle representing the FWHM boundary
    cirlcefwhm = plt.Circle(centre, fwhm, color='white', fill=False, linestyle='--', linewidth=2, label=&quot;FWHM&quot;)
    cirlce10 = plt.Circle(centre, tenth, color='white', fill=False, linestyle='--', linewidth=2, label=&quot;10% of I$_max$&quot;)
    ax.add_patch(cirlcefwhm)  # add the FWHM circle to the plot
    ax.add_patch(cirlce10)  # add the circle where power is 10% of max

    #### Problematic starts here ####
    if colorbar is not None:  # if colorbar already exists, remove it
        colorbar.remove()
    colorbar = plt.colorbar(im, ax=ax, label=&quot;Intensity (W/mm²)&quot;)  # create new colorbar in W/mm²
    fig.draw_without_rendering()  # redraw based on the recommendation of matplotlib instead of colorbar.draw_all()
    fig.canvas.draw()  # redraw figure to reflect updates
    #### Problematic ends here ####

def sliders_on_changed(val):
    ''' Slider update function '''
    power = power_slider.val * MAX_LASER_POWER / 100  # calculate current power level in watts
    z_offset = z_offset_slider.val / 1000  # convert slider z offset from mm to meters
    update_plot(power, z_offset)  # update the plot with new parameters

# Inputs ###############################################################
WAVELENGTH = 10.6e-6  # wavelength in meters
MAX_LASER_POWER = 80  # max laser power in watts
BEAM_WIDTH_AT_FOCUS = 120e-6  # beam width at focus in meters
BEAM_RADIUS_AT_FOCUS = BEAM_WIDTH_AT_FOCUS / 2  # beam radius at focus in meters
zR = np.pi * BEAM_RADIUS_AT_FOCUS**2 / WAVELENGTH  # Rayleigh range in meters
gridSize = 100  # resolution
r = np.linspace(-500e-6, 500e-6, gridSize)  # range for spatial coordinates in meters
R, Z = np.meshgrid(r, r)  # create grid for spatial coordinates
colorbar = None  # init colorbar
</code></pre> <p>The problematic part would be this one:</p> <pre><code>#### Problematic starts here ####
if colorbar is not None:  # if colorbar already exists, remove it
    colorbar.remove()
colorbar = plt.colorbar(im, ax=ax, label=&quot;Intensity (W/mm²)&quot;)  # create new colorbar in W/mm²
fig.draw_without_rendering()  # redraw based on the recommendation of matplotlib instead of colorbar.draw_all()
fig.canvas.draw()  # redraw figure to reflect updates
#### Problematic ends here ####
</code></pre> <p>I am not so sure why it is not working. What happens is that the colorbar appears for the first update and then disappears after a change in the Z-offset or the power percentage, basically after a change in any of the values.</p> <ul> <li>The second cell of the jupyter notebook has the call for the functions:</li> </ul> <pre><code>fig, ax = plt.subplots()
plt.subplots_adjust(left=0.25, bottom=0.35)  # leave space for sliders
update_plot(50, 0)

ax_power = plt.axes([0.25, 0.2, 0.65, 0.03], facecolor=&quot;lightgray&quot;)
ax_z_offset = plt.axes([0.25, 0.15, 0.65, 0.03], facecolor=&quot;lightgray&quot;)

power_slider = Slider(ax_power, 'Power (%)', 0.1, 100, valinit=50)
z_offset_slider = Slider(ax_z_offset, 'Z-Offset (mm)', 0, 5.0, valinit=0)

power_slider.on_changed(sliders_on_changed)
z_offset_slider.on_changed(sliders_on_changed)
</code></pre> <p>Running this will get you to a UI that looks like this:</p> <p><a href="https://i.sstatic.net/Kn9Wpj6G.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Kn9Wpj6G.png" alt="First update" /></a></p> <p>And then when I move any of the sliders the colorbar disappears:</p> <p><a href="https://i.sstatic.net/Uoa16OED.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Uoa16OED.png" alt="After moving slider" /></a></p> <p>The updating of the colorbar seems to be a common problem; I saw a lot of stackoverflow questions about it, I even answered one once.</p> <p>The solution, however, is not really working with the slider listeners.</p> <p>P.S.: Changing the backend does not really help. However, using <code>%matplotlib qt</code> revealed a traceback:</p> <pre><code>Traceback (most recent call last):
  File &quot;C:\ProgramData\anaconda3\Lib\site-packages\matplotlib\cbook\__init__.py&quot;, line 309, in process
    func(*args, **kwargs)
  File &quot;C:\ProgramData\anaconda3\Lib\site-packages\matplotlib\widgets.py&quot;, line 603, in &lt;lambda&gt;
    return self._observers.connect('changed', lambda val: func(val))
                                                          ^^^^^^^^^
  File &quot;C:\Users\User\AppData\Local\Temp\ipykernel_26252\2170461726.py&quot;, line 66, in sliders_on_changed
    update_plot(power, z_offset)  # update the plot with new parameters
    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File &quot;C:\Users\User\AppData\Local\Temp\ipykernel_26252\2170461726.py&quot;, line 57, in update_plot
    colorbar.remove()
  File &quot;C:\ProgramData\anaconda3\Lib\site-packages\matplotlib\colorbar.py&quot;, line 1041, in remove
    self.ax.remove()
  File &quot;C:\ProgramData\anaconda3\Lib\site-packages\matplotlib\artist.py&quot;, line 242, in remove
    self._remove_method(self)
  File &quot;C:\ProgramData\anaconda3\Lib\site-packages\matplotlib\figure.py&quot;, line 944, in delaxes
    self._axstack.remove(ax)
  File &quot;C:\ProgramData\anaconda3\Lib\site-packages\matplotlib\figure.py&quot;, line 92, in remove
    self._axes.pop(a)
KeyError: &lt;Axes: label='&lt;colorbar&gt;', ylabel='Intensity (W/mm²)'&gt;
</code></pre> <p>Some stuff that I tried:</p> <ul> <li>There is a method for <code>colorbar</code> called <code>draw_all()</code></li> <li>Matplotlib also suggests using <code>fig.draw_without_rendering()</code></li> <li>My own <a href="https://stackoverflow.com/questions/77726462/updating-a-figure-with-multiple-subplots-during-a-loop-where-subplot-must-contai/77726691#77726691">solution</a></li> <li>Setting the limits of <code>colorbar</code> without deleting the colorbar</li> </ul> <p>Can someone please help me in figuring out what's wrong? I understand ofc that the code is quite long; I can shorten it to a simple example if necessary.</p> <hr /> <p>I am using</p> <ul> <li>Jupyter 6.5.4</li> <li>Matplotlib 3.7.2</li> </ul>
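For what it's worth, a pattern I have seen recommended for this class of problem is to avoid `ax.clear()` plus `colorbar.remove()` altogether: create the `AxesImage` and the colorbar once, then only push new data and limits into the existing artists on each slider change. A minimal headless sketch (no sliders, Agg backend, illustrative data):

```python
import matplotlib
matplotlib.use("Agg")  # non-interactive backend, just for the sketch
import matplotlib.pyplot as plt
import numpy as np

fig, ax = plt.subplots()
im = ax.imshow(np.ones((10, 10)))
cbar = fig.colorbar(im, ax=ax, label="Intensity (W/mm²)")

def update(data):
    # Reuse the existing artists: the colorbar tracks its mappable,
    # so updating the image data and clim is enough to refresh it.
    im.set_data(data)
    im.set_clim(float(data.min()), float(data.max()))
    fig.canvas.draw_idle()

update(np.arange(100.0).reshape(10, 10))
print(im.get_clim())  # (0.0, 99.0)
```

Whether this maps cleanly onto the `LogNorm` and the circle patches above is an assumption on my side, but it sidesteps the `KeyError` entirely because the colorbar axes are never removed.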
<python><matplotlib><widget><colorbar>
2024-11-06 07:57:30
1
2,784
Tino D
79,161,739
1,184,652
Show product name in Django template
<p>I want to show all of the user's orders in their panel, so I have the following models. This is my product model in Django, and in my order model I have a <code>productfk</code> field that is the id of the ordered product.</p> <pre><code>class Product(models.Model):
    id = models.IntegerField(primary_key=True)
    activedate = models.DateField()
    name = models.CharField(max_length=256)
    description = models.TextField()
    # following u can set user owner for this row
    # owner = models.ForeignKey(to=User, on_delete=models.CASCADE)
    category = models.CharField(max_length=256)
    unit = models.CharField(max_length=50)
    active = models.BooleanField(default=False)
    unitprice = models.DecimalField(max_digits=18, decimal_places=0)
    quantity = models.FloatField()
    minorder = models.FloatField()
    maxorder = models.FloatField()
    readytopay = models.BooleanField(default=False)
    showquantity = models.BooleanField(default=False)
    lastupdate = models.DateField()

    def __str__(self):
        return self.name
</code></pre> <p>and following is my order model:</p> <pre><code>class Orders(models.Model):
    id = models.IntegerField(primary_key=True)
    customerfk = models.ForeignKey(to=User, on_delete=models.CASCADE)
    oxygenid = models.IntegerField()
    financialfk = models.IntegerField()
    orderdate = models.DateTimeField()
    productfk = models.IntegerField()
    unit = models.CharField(max_length=50)
    quantity = models.FloatField()
    unitprice = models.DecimalField(max_digits=18, decimal_places=0)
    discount = models.DecimalField(max_digits=18, decimal_places=0)
    totalprice = models.DecimalField(max_digits=18, decimal_places=0)
    onlinepayment = models.DecimalField(max_digits=18, decimal_places=0)
    customerdesc = models.TextField()
    companydesc = models.TextField()
    userip = models.CharField(max_length=20)
    status = models.CharField(max_length=50)
    creationdate = models.DateTimeField()

    def __str__(self):
        return self.status
</code></pre> <p>and this is my order view</p> <pre><code>@login_required(login_url='/authentication/login')
def index(request):
    unit = Unit.objects.all()
    orderstatus = OrderStatus.objects.all()
    # order = Orders.objects.all()
    order = Orders.objects.select_related('customerfk')
    paginator = Paginator(order, 20)
    page_number = request.GET.get('page')
    page_obj = Paginator.get_page(paginator, page_number)
    # currency = UserPreference.objects.get(user=request.user).currency
    context = {
        'order': order,
        'orderstatus': orderstatus,
        'unit': unit,
        'page_obj': page_obj
    }
    return render(request, 'orders/index.html', context)
</code></pre> <p>How can I show the product name in the template for each order?</p>
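Since `productfk` is a plain `IntegerField` rather than a `ForeignKey`, the template cannot follow it automatically. One sketch is to build an `{id: name}` map in the view and attach the name to each order before rendering. The `Row` class below is a plain-Python stand-in for the Django model instances, just to make the mapping idea concrete:

```python
# Hypothetical stand-ins for Product/Orders rows, to illustrate the mapping.
class Row:
    def __init__(self, **kw):
        self.__dict__.update(kw)

products = [Row(id=1, name="Oxygen"), Row(id=2, name="Nitrogen")]
orders = [Row(id=10, productfk=2), Row(id=11, productfk=1)]

# In the real view this would be:
#   product_names = dict(Product.objects.values_list('id', 'name'))
product_names = {p.id: p.name for p in products}
for o in orders:
    o.product_name = product_names.get(o.productfk, "")

# In the template: {{ order.product_name }}
print([o.product_name for o in orders])  # ['Nitrogen', 'Oxygen']
```

The longer-term fix would be converting `productfk` into `models.ForeignKey(Product, ...)`, after which `{{ order.productfk.name }}` works directly and `select_related('productfk')` avoids per-row queries.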
<python><django><django-views><django-templates>
2024-11-06 07:52:55
1
644
franchesco totti
79,161,450
10,855,529
Conditional join_where using string starts_with predicate in Polars
<p>I have two DataFrames,</p> <pre class="lang-py prettyprint-override"><code>import polars as pl

df = pl.DataFrame({
    &quot;url&quot;: [&quot;https//abc.com&quot;, &quot;https//abcd.com&quot;, &quot;https//abcd.com/aaa&quot;, &quot;https//abc.com/abcd&quot;]
})

conditions_df = pl.DataFrame({
    &quot;url&quot;: [&quot;https//abc.com&quot;, &quot;https//abcd.com&quot;, &quot;https//abcd.com/aaa&quot;, &quot;https//abc.com/aaa&quot;],
    &quot;category&quot;: [[&quot;a&quot;], [&quot;b&quot;], [&quot;c&quot;], [&quot;d&quot;]]
})
</code></pre> <p>Now I want to assign categories to the first df based on the first starts-with match of the url in the second df; that is, the output should be:</p> <div class="s-table-container"><table class="s-table"> <thead> <tr> <th>url</th> <th>category</th> </tr> </thead> <tbody> <tr> <td>https//abc.com</td> <td>['a']</td> </tr> <tr> <td>https//abcd.com</td> <td>['b']</td> </tr> <tr> <td>https//abcd.com/aaa</td> <td>['b'] - this one starts with https//abcd.com, that is the first match</td> </tr> <tr> <td>https//abc.com/abcd</td> <td>['a'] - this one starts with https//abc.com, that is the first match</td> </tr> </tbody> </table></div> <p>Current code which works is like this,</p> <pre class="lang-py prettyprint-override"><code>def add_category_column(df: pl.DataFrame, conditions_df) -&gt; pl.DataFrame:
    # Initialize the category column with empty lists
    df = df.with_columns(pl.Series(&quot;category&quot;, [[] for _ in range(len(df))], dtype=pl.List(pl.String)))

    # Apply the conditions to populate the category column
    for row in conditions_df.iter_rows():
        url_start, category = row
        df = df.with_columns(
            pl.when(
                (pl.col(&quot;url&quot;).str.starts_with(url_start)) &amp; (pl.col(&quot;category&quot;).list.len() == 0)
            )
            .then(pl.lit(category))
            .otherwise(pl.col(&quot;category&quot;))
            .alias(&quot;category&quot;)
        )

    return df
</code></pre> <p>Is there a way to achieve the same without using for loops? Could we use <code>join_where</code> here? In my attempts, <code>join_where</code> does not work with <code>starts_with</code>.</p>
<python><dataframe><python-polars>
2024-11-06 05:47:43
2
3,833
apostofes
79,161,325
558,639
asyncio.run() vs asyncio.get_event_loop().run_until_complete()
<p>I need to call an async function from within a synchronous function.</p> <p>Can someone educate me on the following: What are the salient differences between <code>sync_fn_a</code> and <code>sync_fn_b</code>, and when would I choose one over the other?</p> <pre class="lang-py prettyprint-override"><code>async def my_async_fn(arg1):
    ...  # internals are not important

def sync_fn_a(arg):
    asyncio.run(my_async_fn(arg))

def sync_fn_b(arg):
    asyncio.get_event_loop().run_until_complete(my_async_fn(arg))
</code></pre>
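A sketch of the practical difference as I understand it: `asyncio.run()` creates a brand-new event loop, runs the coroutine to completion, and closes the loop (and it raises `RuntimeError` if a loop is already running in that thread), while `run_until_complete()` on an explicitly managed loop leaves loop lifetime up to you. Note that `get_event_loop()` outside a running loop is deprecated in newer Pythons, so the explicit variant below uses `new_event_loop()`:

```python
import asyncio

async def my_async_fn(arg1):
    await asyncio.sleep(0)
    return arg1 * 2

def sync_fn_a(arg):
    # Fresh loop per call; created, run, and closed automatically.
    return asyncio.run(my_async_fn(arg))

def sync_fn_b(arg):
    # Explicit loop management: you decide when to create and close it,
    # so the same loop could be reused across many calls if kept open.
    loop = asyncio.new_event_loop()
    try:
        return loop.run_until_complete(my_async_fn(arg))
    finally:
        loop.close()

print(sync_fn_a(21), sync_fn_b(21))  # 42 42
```

In short: prefer `asyncio.run()` for one-shot entry points; reach for an explicit loop only when you need to keep state (tasks, executors) alive across multiple synchronous calls.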
<python><python-asyncio>
2024-11-06 04:23:19
2
35,607
fearless_fool
79,161,167
2,870,357
Unsuccessfully adding font in PySide6
<p>I am trying to add the font <a href="https://freefontsfamily.org/gotham-narrow-font-free-download/" rel="nofollow noreferrer">Gotham Narrow</a> to my Qt app. Below is my simple script.</p> <pre><code>app = QApplication(sys.argv)

fontpath = &quot;GothamNarrow-Bold.otf&quot;
_id = QFontDatabase.addApplicationFont(fontpath)
font = QFontDatabase.font(&quot;Gotham Narrow&quot;, &quot;Gotham Narrow Bold&quot;, 120)

print(QFontDatabase.families())
print(f&quot;id:{_id}&quot;)
print(f&quot;is_exist {QFile.exists(fontpath)}&quot;)
</code></pre> <p>PySide6 v6.8.0.2, OS: Sonoma 14.6.1, Python: 3.12.4</p> <p><code>_id</code> always returns -1; it seems that the font is not imported. I put the <code>GothamNarrow-Bold.otf</code> file at the same level as my script. <code>QFile.exists(fontpath)</code> returns True.</p> <p>Any suggestion?</p>
<python><pyside6>
2024-11-06 01:57:35
0
451
slawalata
79,161,150
58,347
How to publish an update to pip *just* for older Python versions?
<p>I have a library published on pip which previously had a minimum Python version of 3.7, and now has a minimum Python version of 3.9.</p> <p>This means that, when a user with Python 3.7 or 3.8 does <code>pip install my-package</code>, they silently get the last version that was published with 3.7 support, rather than the most recent version. This means they're missing updates that I've made since then; in particular, I just changed the format of an external file that my library fetches as part of its operation, and the old version just breaks on it.</p> <p>Is there any way to publish a new version of the library <em>just</em> for Python 3.7, so I can print a deprecation message and quit, rather than having it fail with mysterious errors? Would it work, for example, to just go back to the last 3.7 commit, add the deprecation message and kill switch, change my setuptools to advertise 3.7 support, and publish that on pip as a new version, then immediately revert back to my existing code (advertising 3.9) and publish that on top?</p>
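For reference, the mechanism that makes the proposed stub release work is the package's `Requires-Python` metadata: pip only considers releases whose constraint matches the running interpreter, and then picks the highest such version. A hypothetical stub release (the version number below is illustrative) would advertise something like:

```toml
# pyproject.toml of the hypothetical deprecation-stub release
[project]
name = "my-package"
version = "1.9.1"                  # must sort above the last 3.7-compatible release
requires-python = ">=3.7,<3.9"     # only 3.7/3.8 interpreters will resolve to this
```

Users on 3.7/3.8 would then resolve to this stub and see the deprecation message, while 3.9+ users keep resolving to the real latest release, since its own `Requires-Python` (e.g. `>=3.9`) still matches and its version is higher. So yes, the "go back, add the kill switch, publish, revert" approach should work, assuming the stub's version number is chosen carefully.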
<python><python-packaging>
2024-11-06 01:45:09
1
18,453
Tab Atkins-Bittner
79,161,133
170,005
Getting a strange error while importing pycaret in Airflow
<p>Getting this strange error while importing pycaret in an Airflow Kubernetes pod. This was working fine since deployment and there have been no changes in the environment. Does anyone know what this is about? The error occurs when running this line:</p> <p><code>from pycaret.classification import predict_model, load_model</code></p> <pre><code>[2024-11-05, 17:58:43 UTC] {logging_mixin.py:151} WARNING - /home/airflow/.local/lib/python3.8/site-packages/xgboost/compat.py:36 FutureWarning: pandas.Int64Index is deprecated and will be removed from pandas in a future version. Use pandas.Index with the appropriate dtype instead.
[2024-11-05, 17:58:43 UTC] {best-action.py:154} INFO - scikit-learn version:1.1.3
[2024-11-05, 17:58:43 UTC] {best-action.py:155} INFO - XGBoost version:1.5.1
[2024-11-05, 17:58:43 UTC] {best-action.py:156} INFO - PyCaret version:3.0.0
[2024-11-05, 17:58:44 UTC] {font_manager.py:1423} INFO - Generating new fontManager, this may take some time...
[2024-11-05, 17:58:48 UTC] {taskinstance.py:1935} ERROR - Task failed with exception
Traceback (most recent call last):
  File &quot;/home/airflow/.local/lib/python3.8/site-packages/airflow/operators/python.py&quot;, line 192, in execute
    return_value = self.execute_callable()
  File &quot;/home/airflow/.local/lib/python3.8/site-packages/airflow/operators/python.py&quot;, line 209, in execute_callable
    return self.python_callable(*self.op_args, **self.op_kwargs)
  File &quot;/home/coder/de-main/airflow/eks-airflow-dags/holding/next_best_action/best-action.py&quot;, line 161, in get_predictions
    from pycaret.classification import predict_model, load_model
  File &quot;/home/airflow/.local/lib/python3.8/site-packages/pycaret/classification/__init__.py&quot;, line 1, in &lt;module&gt;
    from pycaret.classification.functional import (
  File &quot;/home/airflow/.local/lib/python3.8/site-packages/pycaret/classification/functional.py&quot;, line 8, in &lt;module&gt;
    from pycaret.classification.oop import ClassificationExperiment
  File &quot;/home/airflow/.local/lib/python3.8/site-packages/pycaret/classification/oop.py&quot;, line 31, in &lt;module&gt;
    from pycaret.internal.pycaret_experiment.non_ts_supervised_experiment import (
  File &quot;/home/airflow/.local/lib/python3.8/site-packages/pycaret/internal/pycaret_experiment/non_ts_supervised_experiment.py&quot;, line 3, in &lt;module&gt;
    from pycaret.internal.pycaret_experiment.supervised_experiment import (
  File &quot;/home/airflow/.local/lib/python3.8/site-packages/pycaret/internal/pycaret_experiment/supervised_experiment.py&quot;, line 53, in &lt;module&gt;
    from pycaret.internal.pycaret_experiment.tabular_experiment import _TabularExperiment
  File &quot;/home/airflow/.local/lib/python3.8/site-packages/pycaret/internal/pycaret_experiment/tabular_experiment.py&quot;, line 26, in &lt;module&gt;
    import pycaret.loggers
  File &quot;/home/airflow/.local/lib/python3.8/site-packages/pycaret/loggers/__init__.py&quot;, line 3, in &lt;module&gt;
    from .dagshub_logger import DagshubLogger
  File &quot;/home/airflow/.local/lib/python3.8/site-packages/pycaret/loggers/dagshub_logger.py&quot;, line 4, in &lt;module&gt;
    from pycaret.loggers.mlflow_logger import MlflowLogger
  File &quot;/home/airflow/.local/lib/python3.8/site-packages/pycaret/loggers/mlflow_logger.py&quot;, line 10, in &lt;module&gt;
    import mlflow
  File &quot;/home/airflow/.local/lib/python3.8/site-packages/mlflow/__init__.py&quot;, line 41, in &lt;module&gt;
    from mlflow import projects  # pylint: disable=unused-import
  File &quot;/home/airflow/.local/lib/python3.8/site-packages/mlflow/projects/__init__.py&quot;, line 10, in &lt;module&gt;
    import mlflow.projects.databricks
  File &quot;/home/airflow/.local/lib/python3.8/site-packages/mlflow/projects/databricks.py&quot;, line 12, in &lt;module&gt;
    from mlflow import tracking
  File &quot;/home/airflow/.local/lib/python3.8/site-packages/mlflow/tracking/__init__.py&quot;, line 8, in &lt;module&gt;
    from mlflow.tracking.client import MlflowClient
  File &quot;/home/airflow/.local/lib/python3.8/site-packages/mlflow/tracking/client.py&quot;, line 24, in &lt;module&gt;
    from mlflow.tracking._model_registry.client import ModelRegistryClient
  File &quot;/home/airflow/.local/lib/python3.8/site-packages/mlflow/tracking/_model_registry/client.py&quot;, line 15, in &lt;module&gt;
    from mlflow.tracking._model_registry import utils, DEFAULT_AWAIT_MAX_SLEEP_SECONDS
  File &quot;/home/airflow/.local/lib/python3.8/site-packages/mlflow/tracking/_model_registry/utils.py&quot;, line 8, in &lt;module&gt;
    from mlflow.tracking._tracking_service.utils import (
  File &quot;/home/airflow/.local/lib/python3.8/site-packages/mlflow/tracking/_tracking_service/utils.py&quot;, line 184, in &lt;module&gt;
    _tracking_store_registry.register_entrypoints()
  File &quot;/home/airflow/.local/lib/python3.8/site-packages/mlflow/tracking/registry.py&quot;, line 52, in register_entrypoints
    for entrypoint in entrypoints.get_group_all(self.group_name):
  File &quot;/home/airflow/.local/lib/python3.8/site-packages/entrypoints.py&quot;, line 237, in get_group_all
    for config, distro in iter_files_distros(path=path):
  File &quot;/home/airflow/.local/lib/python3.8/site-packages/entrypoints.py&quot;, line 137, in iter_files_distros
    if folder.rstrip('/\\').endswith('.egg'):
AttributeError: 'PosixPath' object has no attribute 'rstrip'
</code></pre>
<python><airflow><python-3.8><pycaret>
2024-11-06 01:31:53
0
16,244
fixxxer
79,161,068
19,429,024
How to listen for hotkeys in a separate thread using Python with Win32 API and PySide6?
<p>I’m setting up a hotkey system for Windows in Python, using the Win32 API and PySide6. I want to register hotkeys in a HotkeyManager class and listen for them in a separate thread, so the GUI remains responsive. However, when I move the listening logic to a thread, the hotkey events are not detected correctly.</p> <p>Here’s the code that works without using threads, where hotkeys are registered and detected on the main thread:</p> <pre class="lang-py prettyprint-override"><code>from threading import Thread
from typing import Callable, Dict

from win32gui import RegisterHotKey, UnregisterHotKey, GetMessage
from win32con import VK_NUMPAD0, MOD_NOREPEAT

class HotkeyManager:
    def __init__(self):
        self.hotkey_id = 1
        self._callbacks: Dict[int, Callable] = {}

    def register_hotkey(self, key_code: int, callback: Callable):
        self._callbacks[self.hotkey_id] = callback
        RegisterHotKey(0, self.hotkey_id, MOD_NOREPEAT, key_code)
        self.hotkey_id += 1

    def listen(self):
        while True:
            print(&quot;Listener started.&quot;)
            msg = GetMessage(None, 0, 0)
            hotkey_id = msg[1]
            if hotkey_id in self._callbacks:
                self._callbacks[hotkey_id]()
</code></pre> <p>In the main code, this setup works as expected:</p> <pre class="lang-py prettyprint-override"><code>from PySide6 import QtWidgets
from win32con import VK_NUMPAD0

def on_press():
    print(&quot;Numpad 0 pressed!&quot;)

app = QtWidgets.QApplication([])

manager = HotkeyManager()
manager.register_hotkey(VK_NUMPAD0, on_press)
manager.listen()

# Initialize window
widget = QtWidgets.QMainWindow()
widget.show()

app.exec()
</code></pre> <p>When I try to move the listen() method to a separate thread, however, the hotkey doesn’t respond properly:</p> <pre class="lang-py prettyprint-override"><code>class HotkeyManager:
    def listen(self):
        def run():
            while True:
                print(&quot;Listener started.&quot;)
                msg = GetMessage(None, 0, 0)
                hotkey_id = msg[1]
                if hotkey_id in self._callbacks:
                    self._callbacks[hotkey_id]()

        thread = Thread(target=run, daemon=True)
        thread.start()
</code></pre> <p>How can I correctly listen for hotkeys in a separate thread without losing functionality? It seems that the issue may be due to the hotkeys being registered on the main thread while the listening logic runs in a secondary thread. How could I solve this so everything works as expected?</p>
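My understanding (hedged, from the Win32 `RegisterHotKey` docs) is that `WM_HOTKEY` is posted to the message queue of the thread that called `RegisterHotKey`, so registering on the main thread and calling `GetMessage` on a worker means the worker never sees those messages. A sketch that queues registrations and performs both the registration and the message loop on the same worker thread (the win32 imports stay inside the thread, so the dispatch logic itself runs anywhere):

```python
from threading import Thread
from typing import Callable, Dict, List, Tuple

class HotkeyManager:
    """Register AND listen on the same worker thread: WM_HOTKEY goes to the
    message queue of the thread that called RegisterHotKey."""

    def __init__(self):
        self._pending: List[Tuple[int, Callable]] = []  # queued before the thread starts
        self._callbacks: Dict[int, Callable] = {}

    def register_hotkey(self, key_code: int, callback: Callable):
        # Only queue here; the actual RegisterHotKey happens on the worker.
        self._pending.append((key_code, callback))

    def _dispatch(self, hotkey_id: int) -> bool:
        cb = self._callbacks.get(hotkey_id)
        if cb is None:
            return False
        cb()
        return True

    def listen(self):
        def run():
            # Windows-only imports kept inside the worker thread.
            from win32gui import RegisterHotKey, GetMessage
            from win32con import MOD_NOREPEAT
            for hotkey_id, (key_code, cb) in enumerate(self._pending, start=1):
                RegisterHotKey(0, hotkey_id, MOD_NOREPEAT, key_code)  # same thread as GetMessage
                self._callbacks[hotkey_id] = cb
            while True:
                msg = GetMessage(None, 0, 0)
                self._dispatch(msg[1])

        Thread(target=run, daemon=True).start()
```

This is a sketch, not a drop-in: callbacks fire on the worker thread, so anything touching Qt widgets should be marshalled back to the GUI thread (e.g. via a Qt signal) rather than called directly.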
<python><python-multithreading><pywin32><pywinauto>
2024-11-06 00:36:44
1
587
Collaxd
79,161,015
13,350,341
What would be the best option to render typing_extension.Self output type-hint as the class name via mkdocstrings?
<p>I'm looking for advice on how to best render <code>typing_extensions.Self</code> output type-hints, while keeping the <code>show_signature_annotations: true</code> option enabled.</p> <p>My <code>mkdocstrings</code> config follows:</p> <pre><code>plugins:
- mkdocstrings:
    handlers:
      python:
        import:
        - https://installer.readthedocs.io/en/stable/objects.inv
        rendering:
          show_signature_annotations: true
        options:
          members_order: alphabetical
</code></pre> <p>Would it be possible to translate <code>typing_extensions.Self</code> so that <code>Type</code> is rendered as the class name rather than as the <code>typing_extensions.Self</code> type-hint? IOW,</p> <pre><code>def func(self, arg: str) -&gt; Self:
    &quot;&quot;&quot;
    Arguments:
        arg: Argument.

    Returns:
        blah blah
    &quot;&quot;&quot;
</code></pre> <p>would actually output</p> <p><a href="https://i.sstatic.net/TboSujJj.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/TboSujJj.png" alt="enter image description here" /></a></p> <p>while I would possibly like <code>Type</code> to output the class name. Alternatively, I would appreciate guidance on how to handle such a situation; also, does anyone believe - otoh - that it is reasonable for it to output <code>Self</code>?</p> <p>Can anyone advise? I don't know if anyone would consider this a plausible feature request, were this not possible.</p>
<python><python-typing><mkdocs-material><mkdocstrings>
2024-11-05 23:55:49
0
3,157
amiola
79,160,988
417,896
Plotly graph loading in PySide6
<p>I am trying to render a plotly html file in pyside6, but I am getting a blank component where the plotly graph should be. If I try to load the generated file in a web browser it works.</p> <pre><code>import os
import sys
import signal

from PySide6.QtWidgets import QApplication, QMainWindow, QTabWidget, QWidget, QVBoxLayout
from PySide6.QtWebEngineWidgets import QWebEngineView
from PySide6.QtWebEngineCore import QWebEngineSettings
from PySide6.QtWebEngineCore import QWebEnginePage
from PySide6.QtCore import QUrl

import plotly.express as px
import pandas as pd

# Sample data
data = {
    'region': ['Alabama', 'Alaska', 'Arizona', 'Arkansas', 'California'],
    'value': [10, 20, 30, 40, 50],
    'iso_code': ['US-AL', 'US-AK', 'US-AZ', 'US-AR', 'US-CA']
}

# Create DataFrame
df = pd.DataFrame(data)

# Create choropleth map
fig = px.choropleth(
    df,
    locations='iso_code',        # Column with region codes (ISO codes)
    color='value',               # Column with values to map color intensity
    locationmode=&quot;USA-states&quot;,   # Specify location mode (e.g., &quot;USA-states&quot; for US)
    scope=&quot;usa&quot;,                 # Limit map scope
    color_continuous_scale=&quot;Blues&quot;
)

# Save map to HTML
html_content = fig.to_html(full_html=True)

# Create a custom page class to catch console messages
class WebEnginePage(QWebEnginePage):
    def javaScriptConsoleMessage(self, level, message, lineNumber, sourceID):
        print(f&quot;JS Console: {message} (Source: {sourceID}, Line: {lineNumber})&quot;)

class MainWindow(QMainWindow):
    def __init__(self):
        super().__init__()

        # Set up main tab widget
        self.tab_widget = QTabWidget()
        self.setCentralWidget(self.tab_widget)

        # Add choropleth map tab
        self.add_choropleth_tab()

    def add_choropleth_tab(self):
        # Create widget for the tab
        tab = QWidget()
        layout = QVBoxLayout()

        # Create QWebEngineView and load the HTML file
        web_view = QWebEngineView()

        # Enable JavaScript
        web_view.settings().setAttribute(QWebEngineSettings.WebAttribute.JavascriptEnabled, True)
        web_view.settings().setAttribute(QWebEngineSettings.WebAttribute.LocalContentCanAccessFileUrls, True)
        web_view.settings().setAttribute(QWebEngineSettings.WebAttribute.LocalContentCanAccessRemoteUrls, True)

        html_file_path = os.path.abspath(&quot;choropleth_map.html&quot;)
        local_url = QUrl.fromLocalFile(html_file_path)
        web_view.load(local_url)

        web_view.setPage(WebEnginePage())
        web_view.setHtml(html_content)

        layout.addWidget(web_view)
        tab.setLayout(layout)
        self.tab_widget.addTab(tab, &quot;Choropleth Map&quot;)

app = QApplication(sys.argv)
window = MainWindow()
window.show()
sys.exit(app.exec())
</code></pre>
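Two things worth checking (assumptions on my part, but common causes of a blank `QWebEngineView` with Plotly): first, the `setHtml(html_content)` call replaces whatever `load(local_url)` started, so only `setHtml` matters here; second, `QWebEnginePage.setHtml` silently rejects content larger than about 2 MB, and `fig.to_html(full_html=True)` embeds all of plotly.js, which exceeds that. Writing the HTML to a file and calling `load()` avoids the size cap. Only the file-writing half is runnable without Qt, so the Qt call is left as a comment:

```python
import os
import tempfile

# Stand-in for fig.to_html(full_html=True); the real string is several MB.
html_content = "<html><body>stand-in for fig.to_html(full_html=True)</body></html>"

# Write to a real file instead of web_view.setHtml(html_content);
# QWebEngineView can then load it with no size cap via:
#   web_view.load(QUrl.fromLocalFile(path))
fd, path = tempfile.mkstemp(suffix=".html")
with os.fdopen(fd, "w", encoding="utf-8") as f:
    f.write(html_content)

print(os.path.getsize(path) > 0)  # True
```

Since a `choropleth_map.html` path is already constructed in the snippet, simply writing `html_content` there and dropping the `setHtml` call may be enough.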
<python><plotly><pyside6>
2024-11-05 23:33:24
0
17,480
BAR
79,160,962
6,457,407
Creating __await__() from another awaitable object
<p>I was experimenting with creating my own awaitable object. I understand that the precise details of what <code>__await__</code> returns are async-library dependent, but I was hoping I could just copy the values from another asynchronous function.</p> <pre><code>import asyncio

class MyObject:
    async def ten(self):
        await asyncio.sleep(.5)
        print(&quot;Returning&quot;)
        return 10

    def __await__(self):
        # Just do exactly what ten() does
        yield from self.ten().__await__()
        return

async def main():
    object = MyObject()
    print(await object.ten())
    print(await object)

if __name__ == '__main__':
    asyncio.run(main())
</code></pre> <p>When I ran the code, both <code>await object.ten()</code> and <code>await object</code> paused for half a second and then printed the message. But the former returned the value 10 while the latter returned None.</p> <p>Is it possible to craft a generic <code>__await__</code> function that behaves exactly like another asynchronous function, including its return value?</p>
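From the generator protocol, I'd expect capturing the value of the `yield from` expression to do it: `yield from` evaluates to the inner generator's `StopIteration` value, which for a coroutine is its return value. A sketch of that idea (with a shorter sleep so it runs quickly):

```python
import asyncio

class MyObject:
    async def ten(self):
        await asyncio.sleep(0.01)
        return 10

    def __await__(self):
        # `yield from` evaluates to the inner coroutine's return value;
        # returning it propagates it to whoever awaits this object.
        return (yield from self.ten().__await__())

async def main():
    obj = MyObject()
    return await obj

print(asyncio.run(main()))  # 10
```

A bare `yield from ...` without the `return (...)` wrapper discards that value, which would explain getting None from `await object` while `await object.ten()` returns 10.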
<python><async-await>
2024-11-05 23:14:33
1
11,605
Frank Yellin
79,160,940
11,505,680
pandas bar chart with paired columns
<p>I have a <code>DataFrame</code> with paired columns. I want to plot it such that each <em>pair</em> of columns has a unique color, and one column of each pair has an empty fill.</p> <p>I tried this:</p> <pre class="lang-py prettyprint-override"><code>import pandas as pd df = pd.DataFrame({ ('A', '1'): [1, 2, 3], ('A', '2'): [4, 5, 6], ('B', '1'): [7, 8, 9], ('B', '2'): [10, 11, 12] }) df.plot.bar(color=['C0', 'none', 'C1', 'none'], edgecolor=['C0', 'C0', 'C1', 'C1']) </code></pre> <p>This almost works! But it applies the <code>edgecolor</code>s row-wise instead of column-wise.</p> <p><a href="https://i.sstatic.net/psVpVEfg.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/psVpVEfg.png" alt="Bar chart with paired columns (wrong)" /></a></p> <p>I asked ChatGPT to save my butt. It gave me a solution that works (see lightly modified version below), but it's very wordy. My question is, is there a simpler way to do this, ideally using <code>DataFrame.plot</code>?</p> <pre class="lang-py prettyprint-override"><code>import pandas as pd import matplotlib.pyplot as plt # Sample DataFrame df = pd.DataFrame({ ('A', '1'): [1, 2, 3], ('A', '2'): [4, 5, 6], ('B', '1'): [7, 8, 9], ('B', '2'): [10, 11, 12] }) # Define colors for each pair colors = ['C0', 'C1'] # Create a bar chart fig, ax = plt.subplots() # Number of columns num_cols = len(df.columns) # Bar width bar_width = 0.2 # Plot each pair of columns for i in range(0, num_cols, 2): color_i = colors[i//2] ax.bar(df.index + i*bar_width, df.iloc[:, i], bar_width, label=str(df.columns[i]), color=color_i, edgecolor=color_i) ax.bar(df.index + (i+1)*bar_width, df.iloc[:, i+1], bar_width, label=str(df.columns[i+1]), color='none', edgecolor=color_i) # Add labels, title, and legend ax.set_xlabel('Index') ax.set_ylabel('Values') ax.set_title('Bar chart with paired columns') ax.set_xticks(df.index + bar_width * (num_cols / 2 - 0.5)) ax.set_xticklabels(df.index) ax.legend() </code></pre> <p><a 
href="https://i.sstatic.net/JEXpwb2C.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/JEXpwb2C.png" alt="Bar chart with paired columns" /></a></p>
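A shorter route than rebuilding the chart by hand may be to let `DataFrame.plot.bar` draw everything and then recolor the bars column-wise afterwards — a sketch, assuming a recent pandas/matplotlib where `ax.containers` holds one `BarContainer` per column, in column order:

```python
import matplotlib
matplotlib.use("Agg")  # headless backend so the sketch runs anywhere
import pandas as pd

df = pd.DataFrame({
    ('A', '1'): [1, 2, 3],
    ('A', '2'): [4, 5, 6],
    ('B', '1'): [7, 8, 9],
    ('B', '2'): [10, 11, 12],
})

ax = df.plot.bar()
# pandas draws one BarContainer per column, so the fill/edge colors
# can be reassigned per column (not per row) after plotting.
for i, container in enumerate(ax.containers):
    pair_color = f"C{i // 2}"   # one color per column pair
    filled = (i % 2 == 0)       # first column of each pair is filled
    for bar in container:
        bar.set_edgecolor(pair_color)
        bar.set_facecolor(pair_color if filled else "none")
```

One caveat: the legend swatches are created before the recoloring, so they may still show the default colors and would need to be rebuilt from `ax.containers` if that matters.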
<python><pandas><matplotlib><plot>
2024-11-05 23:04:17
1
645
Ilya
79,160,935
2,603,579
Python urldecode urlencoded bytecode data
<p>Payload contains an HMAC tag as well as a nonce for AES. Client-side printing the tag and nonce result in (for example):</p> <pre><code>#tag: b'=x\x9d{_0\xf9;c8\x94inc]\xb1' #nonce: b'\x1f\xf4\xbe\xcc\xf2\x84f\xf2*\x8dP\x16\xc8\x02\xfe\xbe' requests.post(url, data=payload, headers={&quot;Content-Type&quot;: &quot;application/octet-stream&quot;}, verify=&quot;myShnazzyCertificate.pem&quot;) </code></pre> <p>Server-side, my flask api route receives a tag and nonce that have evidently been urlencoded:</p> <pre><code>data = flask.request.data ## stuff happens here, then -&gt; print(&quot;tag: &quot;, tag); print(&quot;nonce: &quot;, nonce) #tag: b'%3Dx%9D%7B_0%F9%3Bc8%94inc%5D%B1' #nonce: b'%1F%F4%BE%CC%F2%84f%F2%2A%8DP%16%C8%02%FE%BE' </code></pre> <p>How do I remove the urlencoding (or prevent it from happening?) while keeping the tag and nonce as bytecode? I tried:</p> <pre><code>tag = tag.replace(b&quot;%&quot;, bytes(r&quot;\x&quot;.encode(&quot;utf-8&quot;))) nonce = nonce.replace(b&quot;%&quot;, bytes(r&quot;\x&quot;.encode(&quot;utf-8&quot;))) </code></pre> <p>But HMAC verification failed since the tag has &quot;{&quot; and the nonce has &quot;*&quot; which also got encoded, so I'd need something more exhaustive.</p>
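Two notes that may help here: `requests` form-encodes a dict passed to `data=`, which is probably where the percent-escapes come from in the first place, so posting one raw `bytes` payload avoids the problem entirely. And on the Flask side, the escapes can be reversed directly on bytes with `urllib.parse.unquote_to_bytes` — no string round-trip, so the HMAC tag survives intact. A sketch using the tag and nonce from the question:

```python
from urllib.parse import unquote_to_bytes

# The percent-encoded values as they arrive server-side
tag = unquote_to_bytes(b'%3Dx%9D%7B_0%F9%3Bc8%94inc%5D%B1')
nonce = unquote_to_bytes(b'%1F%F4%BE%CC%F2%84f%F2%2A%8DP%16%C8%02%FE%BE')

print(tag)    # b'=x\x9d{_0\xf9;c8\x94inc]\xb1'
print(nonce)  # b'\x1f\xf4\xbe\xcc\xf2\x84f\xf2*\x8dP\x16\xc8\x02\xfe\xbe'
```

Unlike the manual `%`-to-`\x` replacement, this decodes every escape (including `%7B` and `%2A`) back to the original byte values.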
<python><python-requests><urlencode>
2024-11-05 23:03:05
2
402
Ryan Farber
79,160,913
6,635,590
Python using requests get html data after page has been altered by JS
<p>I've tried searching for something like this online but haven't actually found any solutions for my problem.</p> <p>I'm trying to make a website to be a price tracker for the products they sell, since I've just started making this website I need to input all the products into my database for them to be tracked in the first place, but the issue is, their full product sitemap doesn't seem to be up to date with their products so I can't use that, so I'm using the regular products list page.</p> <p>Now, the actual issue is that when you use a url with a parameter to pick a particular page it actually always gets the content for page 1, and then uses javascript to update the html to the actual correct content for the page number. I'm using <code>requests</code> and <code>BeautifulSoup</code> to get the page and parse through it.</p> <p>Its not entirely relevant but here is my code:</p> <pre class="lang-py prettyprint-override"><code>class CategoryScraper(): def __init__(self, url): self.url = url self.headers = { 'user-agent': 'Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:131.0) Gecko/20100101 Firefox/131.0' } self.products = [] self.html = None def get_data(self): self.products = [] self.get_html() product_list = self.html.find('div', attrs={'class': 'productList'}) product_containers = product_list.find_all('div', attrs={'class': 'itemContainer'}) for product in product_containers: anchor = product.find('a') product_name = anchor.find('div', attrs={'class': 'itemTitle'}).get_text() product_price = anchor.find('div', attrs={'class': 'itemPrice'}).find('span').get_text().split('\xa0')[1] product_url = anchor['href'] self.products.append( {'product_name': product_name, 'product_price': product_price, 'product_url': product_url}) def get_html(self): page = requests.get(self.url, headers=self.headers) self.html = BeautifulSoup(page.content, 'html5lib') def change_url(self, url): self.url = url self.get_html() def get(self): self.get_data() return self.products 
</code></pre> <p>I'm aware I might need to use a different library to wait for JavaScript to load and finish to get the page data, but I only started web scraping today so I don't really know what libraries there are and their capabilities.</p>
<javascript><python><beautifulsoup><python-requests>
2024-11-05 22:51:43
0
734
tygzy
79,160,811
2,036,464
Python: Compare html tags in RO folder with their corresponding tags in EN folder and displays in Output the unique tags from both files
<p>In short, I have two files, one in Romanian, the other has been translated into English. In the RO file there are some tags that have not been translated into EN. So I want to display in an html output all the tags in EN that have corresponding tags in RO, but also those tags in RO that do not appear in EN.</p> <p><strong>I have this files:</strong></p> <pre><code> ro_file_path = r'd:\3\ro\incotro-vezi-tu-privire.html' en_file_path = r'd:\3\en\where-do-you-see-look.html' Output = d:\3\Output\where-do-you-see-look.html </code></pre> <p><strong>TASK: Compare the 3 tags below, in both files.</strong></p> <pre><code>&lt;p class=&quot;text_obisnuit&quot;&gt;(.*?)&lt;/p&gt; &lt;p class=&quot;text_obisnuit2&quot;&gt;(.*?)&lt;/p&gt; &lt;p class=&quot;text_obisnuit&quot;&gt;&lt;span class=&quot;text_obisnuit2&quot;&gt;(.*?)&lt;/span&gt;(.*?)&lt;/p&gt; </code></pre> <p><strong>Requirements:</strong></p> <ul> <li>All tags are enclosed between: <code>&lt;!-- START ARTICLE --&gt;</code> and <code>&lt;!-- FINAL ARTICLE --&gt;</code></li> <li>Count the tags in RO and count the tags in EN, and compare.</li> <li>Then count the words in the tags in RO and compare with the number of words in the tags in EN.</li> <li>Compares the html tags in RO with the html tags in EN, in order, and displays in Output the unique tags from both files</li> </ul> <h2><strong>RO d:\3\ro\incotro-vezi-tu-privire.html</strong></h2> <pre><code>&lt;!-- ARTICOL START --&gt; &lt;p class=&quot;text_obisnuit&quot;&gt;&lt;span class=&quot;text_obisnuit2&quot;&gt;Stiu ca este dificil sa conduci la inceput, &lt;/span&gt;dar dupa 4-5 luni inveti.&lt;/p&gt; &lt;p class=&quot;text_obisnuit2&quot;&gt;Imi place sa merg la scoala si sa invat, mai ales in timpul saptamanii.&lt;/p&gt; &lt;p class=&quot;text_obisnuit&quot;&gt;Sunt un bun conducator auto, dar am facut si greseli din care am invatat.&lt;/p&gt; &lt;p class=&quot;text_obisnuit&quot;&gt;În fond, cele scrise de mine, sunt adevarate.&lt;/p&gt; &lt;p 
class=&quot;text_obisnuit&quot;&gt;Iubesc sa conduc masina.&lt;/p&gt; &lt;p class=&quot;text_obisnuit&quot;&gt;&lt;span class=&quot;text_obisnuit2&quot;&gt;Ma iubesti?&lt;/p&gt; &lt;p class=&quot;text_obisnuit&quot;&gt;&lt;span class=&quot;text_obisnuit2&quot;&gt;Stiu ca este dificil sa conduci la inceput, &lt;/span&gt;dar dupa 4-5 luni inveti.&lt;/p&gt; &lt;p class=&quot;text_obisnuit&quot;&gt;Totul se repetă, chiar și ochii care nu se vad.&lt;/p&gt; &lt;p class=&quot;text_obisnuit2&quot;&gt;BEE servesc o cafea 2 mai buna&lt;/p&gt; &lt;!-- ARTICOL FINAL --&gt; </code></pre> <h2><strong>EN d:\3\en\where-do-you-see-look.html</strong></h2> <pre><code>&lt;!-- ARTICOL START --&gt; &lt;p class=&quot;text_obisnuit2&quot;&gt;I like going to school and learning, especially during the week.&lt;/p&gt; &lt;p class=&quot;text_obisnuit&quot;&gt;I'm a good driver, but I've also made mistakes that I've learned from.&lt;/p&gt; &lt;p class=&quot;text_obisnuit&quot;&gt;Basically, what I wrote is true.&lt;/p&gt; &lt;p class=&quot;text_obisnuit&quot;&gt;I love driving.&lt;/p&gt; &lt;p class=&quot;text_obisnuit&quot;&gt;&lt;span class=&quot;text_obisinuit2&quot;&gt;I know it's difficult to drive at first, &lt;/span&gt; but after 4-5 months you learn.&lt;/p&gt; &lt;p class=&quot;text_obisnuit&quot;&gt;Everything is repeated, even the eyes that can't see.&lt;/p&gt; &lt;!-- ARTICOL FINAL --&gt; </code></pre> <h2><strong>Expected OUTPUT: d:\3\Output\where-do-you-see-look.html</strong></h2> <pre><code>&lt;!-- ARTICOL START --&gt; &lt;p class=&quot;text_obisnuit&quot;&gt;&lt;span class=&quot;text_obisnuit2&quot;&gt;Stiu ca este dificil sa conduci la inceput, &lt;/span&gt; dar dupa 4-5 luni inveti.&lt;/p&gt; &lt;p class=&quot;text_obisnuit2&quot;&gt;I like going to school and learning, especially during the week.&lt;/p&gt; &lt;p class=&quot;text_obisnuit&quot;&gt;I'm a good driver, but I've also made mistakes that I've learned from.&lt;/p&gt; &lt;p 
class=&quot;text_obisnuit&quot;&gt;Basically, what I wrote is true.&lt;/p&gt; &lt;p class=&quot;text_obisnuit&quot;&gt;&lt;span class=&quot;text_obisnuit2&quot;&gt;Ma iubesti?&lt;/p&gt; &lt;p class=&quot;text_obisnuit&quot;&gt;I love driving.&lt;/p&gt; &lt;p class=&quot;text_obisnuit&quot;&gt;&lt;span class=&quot;text_obisinuit2&quot;&gt;I know it's difficult to drive at first, &lt;/span&gt; but after 4-5 months you learn.&lt;/p&gt; &lt;p class=&quot;text_obisnuit&quot;&gt;Everything is repeated, even the eyes that can't see.&lt;/p&gt; &lt;p class=&quot;text_obisnuit2&quot;&gt;BEE servesc o cafea 2 mai buna&lt;/p&gt; &lt;!-- ARTICOL FINAL --&gt; </code></pre> <p>Python code must compares the html tags in RO with the html tags in EN and displays in Output the unique tags in both files, taking into account that most of the tags in RO have their corresponding translation in the tags in EN. But the idea of ​​the code is that the code also finds those html tags in RO that were omitted from being translated into EN.</p> <p>Here's how I came up with the solution in Python code. I followed a simple calculation.</p> <p><strong>First method:</strong></p> <p>First, you have to count all the tags in ro, then all the tags in en. Then you have to memorize each type of tag in ro, but then also in en. Then you have to count the words in each tag in ro and the words in each tag in en. Don't forget that there can be 2 identical tags, but on different lines, just like in RO. Then you have to statistically calculate the result. How much are the tags in ro minus the tags in en?</p> <p><strong>The second method</strong>, to verify the output, is to print the screen. 
Compare the entire ro part and the entire en part separately through OCR, then line by line, see which tags in ro are plus compared to the tags in en</p> <p><strong>PYTHON CODE:</strong></p> <pre><code>import re import os def extract_tags(content): start = content.find('&lt;!-- ARTICOL START --&gt;') end = content.find('&lt;!-- ARTICOL FINAL --&gt;') if start == -1 or end == -1: raise ValueError(&quot;Marcajele 'ARTICOL START' sau 'ARTICOL FINAL' lipsesc.&quot;) section_content = content[start:end] pattern = re.compile(r'&lt;p class=&quot;text_obisnuit(?:2)?&quot;&gt;(?:&lt;span class=&quot;text_obisnuit2&quot;&gt;)?.*?&lt;/p&gt;', re.DOTALL) tags = [] for idx, match in enumerate(pattern.finditer(section_content), 1): tag = match.group(0) text = re.sub(r'&lt;[^&gt;]+&gt;', '', tag).strip() if '&lt;span class=&quot;text_obisnuit2&quot;&gt;' in tag or '&lt;span class=&quot;text_obisinuit2&quot;&gt;' in tag: tag_type = 'span' elif 'class=&quot;text_obisnuit2&quot;' in tag: tag_type = 'text_obisnuit2' else: tag_type = 'text_obisnuit' tags.append({ 'index': idx, 'tag': tag, 'text': text, 'type': tag_type, 'word_count': len(text.split()) }) return tags def find_matching_pairs(ro_tags, en_tags): matched_indices = set() used_en = set() for i, ro_tag in enumerate(ro_tags): for j, en_tag in enumerate(en_tags): if j in used_en: continue if ro_tag['type'] == en_tag['type']: word_diff = abs(ro_tag['word_count'] - en_tag['word_count']) if word_diff &lt;= 3: matched_indices.add(i) used_en.add(j) break return matched_indices def fix_duplicates(output_content, ro_content): &quot;&quot;&quot;Corectează poziția tag-urilor duplicate&quot;&quot;&quot; ro_tags = extract_tags(ro_content) output_tags = extract_tags(output_content) # Găsim tag-urile care apar în RO și OUTPUT for ro_idx, ro_tag in enumerate(ro_tags): for out_idx, out_tag in enumerate(output_tags): if ro_tag['tag'] == out_tag['tag'] and ro_idx != out_idx: # Am găsit un tag care apare în poziții diferite # Verificăm dacă este 
cazul de duplicat care trebuie mutat ro_lines = ro_content.split('\n') out_lines = output_content.split('\n') if ro_tag['tag'] in ro_lines[ro_idx+1] and out_tag['tag'] in out_lines[out_idx+1]: # Mutăm tag-ul la poziția corectă out_lines.remove(out_tag['tag']) out_lines.insert(ro_idx+1, out_tag['tag']) output_content = '\n'.join(out_lines) break return output_content def generate_output(ro_tags, en_tags, original_content): start = original_content.find('&lt;!-- ARTICOL START --&gt;') end = original_content.find('&lt;!-- ARTICOL FINAL --&gt;') if start == -1 or end == -1: raise ValueError(&quot;Marcajele 'ARTICOL START' sau 'ARTICOL FINAL' lipsesc.&quot;) output_content = original_content[:start + len('&lt;!-- ARTICOL START --&gt;')] + &quot;\n&quot; matched_indices = find_matching_pairs(ro_tags, en_tags) en_index = 0 for i, ro_tag in enumerate(ro_tags): if i in matched_indices: output_content += en_tags[en_index]['tag'] + &quot;\n&quot; en_index += 1 else: output_content += ro_tag['tag'] + &quot;\n&quot; while en_index &lt; len(en_tags): output_content += en_tags[en_index]['tag'] + &quot;\n&quot; en_index += 1 output_content += original_content[end:] return output_content def main(): try: ro_file_path = r'd:\3\ro\incotro-vezi-tu-privire.html' en_file_path = r'd:\3\en\where-do-you-see-look.html' output_file_path = r'd:\3\Output\where-do-you-see-look.html' with open(ro_file_path, 'r', encoding='utf-8') as ro_file: ro_content = ro_file.read() with open(en_file_path, 'r', encoding='utf-8') as en_file: en_content = en_file.read() ro_tags = extract_tags(ro_content) en_tags = extract_tags(en_content) # Generăm primul output initial_output = generate_output(ro_tags, en_tags, en_content) # Corectăm pozițiile tag-urilor duplicate final_output = fix_duplicates(initial_output, ro_content) with open(output_file_path, 'w', encoding='utf-8') as output_file: output_file.write(final_output) print(f&quot;Output-ul a fost generat la {output_file_path}&quot;) except Exception as e: 
print(f&quot;Eroare: {str(e)}&quot;) if __name__ == &quot;__main__&quot;: main() </code></pre> <p><strong>My Python code is almost perfect, but not perfect. The problem occurs when I introduce other tags in RO, such as:</strong></p> <pre><code>&lt;!-- ARTICOL START --&gt; &lt;p class=&quot;text_obisnuit&quot;&gt;Laptopul meu este de culoare neagra.&lt;/p&gt; &lt;p class=&quot;text_obisnuit2&quot;&gt;Imi place sa merg la scoala si sa invat, mai ales in timpul saptamanii.&lt;/p&gt; &lt;p class=&quot;text_obisnuit&quot;&gt;Sunt un bun conducator auto, dar am facut si greseli din care am invatat.&lt;/p&gt; &lt;p class=&quot;text_obisnuit&quot;&gt;&lt;span class=&quot;text_obisnuit2&quot;&gt;Stiu ca este dificil sa conduci la inceput, &lt;/span&gt;dar dupa 4-5 luni inveti.&lt;/p&gt; &lt;p class=&quot;text_obisnuit&quot;&gt;În fond, cele scrise de mine, sunt adevarate.&lt;/p&gt; &lt;p class=&quot;text_obisnuit&quot;&gt;Iubesc sa conduc masina.&lt;/p&gt; &lt;p class=&quot;text_obisnuit&quot;&gt;&lt;span class=&quot;text_obisnuit2&quot;&gt;Stiu ca este dificil sa conduci la inceput, &lt;/span&gt;dar dupa 4-5 luni inveti.&lt;/p&gt; &lt;p class=&quot;text_obisnuit&quot;&gt;Totul se repetă, chiar și ochii care nu se vad.&lt;/p&gt; &lt;!-- ARTICOL FINAL --&gt; </code></pre>
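One stdlib alternative to the hand-rolled matching above is `difflib.SequenceMatcher`: align the two lists of extracted tags by a comparison key (the tag type here — the real key could be `(type, word-count bucket)` and is an assumption that would need tuning), then merge so that aligned runs emit the EN translation and RO-only runs emit the untranslated RO tag. A rough sketch on simplified stand-in data:

```python
from difflib import SequenceMatcher

# Simplified stand-ins for the extracted tag *types* of each file
ro = ["span", "text2", "text", "text", "text", "span", "span", "text", "text2"]
en = ["text2", "text", "text", "text", "span", "text"]

merged = []
sm = SequenceMatcher(None, ro, en, autojunk=False)
for op, i1, i2, j1, j2 in sm.get_opcodes():
    if op in ("delete", "replace"):
        merged += [f"RO:{t}" for t in ro[i1:i2]]   # untranslated RO tags
    if op in ("insert", "replace"):
        merged += [f"EN:{t}" for t in en[j1:j2]]   # EN-only tags
    if op == "equal":
        merged += [f"EN:{t}" for t in en[j1:j2]]   # translated pairs -> EN
print(merged)
```

This keeps the document order of both files and guarantees every EN tag appears exactly once, with leftover RO tags interleaved where the alignment says they were skipped.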
<python><python-3.x><openai-api><claude>
2024-11-05 22:01:59
2
1,065
Just Me
79,160,774
11,281,707
How to disable the caret (^) characters in the Python stacktrace?
<p>I have a script for doing test readouts that worked fine with Python 3.9, but now that we have upgraded to Python 3.12, those carets break the script. So the easiest way would be to disable them.</p> <p>Is there a way to disable the carets (^^^^^^^^^^^^^^^^^^^^) in the Python stack trace?</p> <pre><code>ERROR: test_email (tests.test_emails.EmailTestCase.test_email) ---------------------------------------------------------------------- Traceback (most recent call last): File &quot;/my-project/email/tests/test_emails.py&quot;, line 72, in test_email self.assertNotEquals(self.email.id, self.email_id) ^^^^^^^^^^^^^^^^^^^^ </code></pre>
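The carets come from the fine-grained error locations added in Python 3.11 (PEP 657), and they can be switched off process-wide with `-X no_debug_ranges` or the `PYTHONNODEBUGRANGES=1` environment variable (set it on the test runner's process) — no script changes needed. A quick sketch demonstrating the switch on a throwaway file:

```python
import os, subprocess, sys, tempfile

# A script whose error span is narrower than the line, so 3.11+ draws carets
src = "d = {}\nprint(d['missing'])\n"
with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
    f.write(src)
    path = f.name
try:
    default = subprocess.run([sys.executable, path],
                             capture_output=True, text=True).stderr
    no_ranges = subprocess.run([sys.executable, "-X", "no_debug_ranges", path],
                               capture_output=True, text=True).stderr
finally:
    os.unlink(path)

print("^" in default)    # True on 3.11+ (caret line under d['missing'])
print("^" in no_ranges)  # False: without column info, no caret line
```

The flag also shrinks the compiled code objects slightly, since the column tables that feed the carets are simply not stored.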
<python><traceback>
2024-11-05 21:43:41
4
1,015
claudius
79,160,696
10,193,760
Converting python logic to sql query (Pairing two status from one column)
<p>I need help with converting my python code to SQL:</p> <pre><code>req_id_mem = &quot;&quot; req_workflow_mem = &quot;&quot; collect_state_main = [] collect_state_temp = [] for req_id, req_datetime, req_workflow in zip(df[&quot;TICKET_ID&quot;], df[&quot;DATETIMESTANDARD&quot;], df[&quot;STATUS&quot;]): if req_id_mem == &quot;&quot; or req_id_mem != req_id: req_id_mem = req_id req_workflow_mem = &quot;&quot; collect_state_temp = [] if req_workflow_mem == &quot;&quot; and req_workflow == &quot;Open&quot; and req_id_mem == req_id: req_workflow_mem = req_workflow collect_state_temp.append(req_id) collect_state_temp.append(req_workflow) collect_state_temp.append(req_datetime) if req_workflow_mem == &quot;Open&quot; and req_workflow == &quot;Closed&quot; and req_id_mem == req_id: req_workflow_mem = req_workflow collect_state_temp.append(req_workflow) collect_state_temp.append(req_datetime) collect_state_main.append(collect_state_temp) collect_state_temp = [] </code></pre> <p>DataFrame:</p> <div class="s-table-container"><table class="s-table"> <thead> <tr> <th>TICKET_ID</th> <th>DATETIMESTANDARD</th> <th>STATUS</th> </tr> </thead> <tbody> <tr> <td>79355138</td> <td>9/3/2024 11:54:18 AM</td> <td>Open</td> </tr> <tr> <td>79355138</td> <td>9/3/2024 9:01:12 PM</td> <td>Open</td> </tr> <tr> <td>79355138</td> <td>9/6/2024 4:52:10 PM</td> <td>Closed</td> </tr> <tr> <td>79355138</td> <td>9/6/2024 4:52:12 PM</td> <td>Open</td> </tr> <tr> <td>79355138</td> <td>9/10/2024 4:01:24 PM</td> <td>Closed</td> </tr> <tr> <td>79446344</td> <td>8/27/2024 1:32:54 PM</td> <td>Open</td> </tr> <tr> <td>79446344</td> <td>9/11/2024 9:40:17 AM</td> <td>Closed</td> </tr> <tr> <td>79446344</td> <td>9/11/2024 9:40:24 AM</td> <td>Closed</td> </tr> <tr> <td>79446344</td> <td>9/11/2024 9:42:14 AM</td> <td>Open</td> </tr> </tbody> </table></div> <p>Result:</p> <ol> <li>It will Identify the first <strong>Open</strong> State of a TICKET_ID and look for the closest <strong>Closed</strong> Status</li> 
<li>It will reiterate for each case to look for an Open and Closed pair (only the first Open and the first Closed will be considered)</li> </ol> <p><strong>My problem</strong> is I'm stuck since the pairings can happen more than twice. I tried RANK in SQL, but it only returns the first pairing, not the other pairs.</p> <p>I'm also adding my attempted solution, as I recently migrated to Snowflake:</p> <pre><code>SELECT FOD.TICKET_ID, FOD.FIRSTOPENDATETIME AS OPEN_DATETIME, MIN(NC.DATETIMESTANDARD) AS CLOSED_DATETIME FROM ( SELECT TICKET_ID, MIN(DATETIMESTANDARD) AS FIRSTOPENDATETIME, STATUS FROM DB.TABLE WHERE ( (STATUS IN ('Open') AND EVENT_TYPE IN ('Ticket Open')) OR STATUS IN ('Closed') ) GROUP BY TICKET_ID, STATUS ) AS FOD LEFT JOIN DB.TABLE AS NC ON FOD.TICKET_ID = NC.TICKET_ID AND NC.STATUS = 'Closed' AND NC.DATETIMESTANDARD &gt; FOD.FIRSTOPENDATETIME WHERE FOD.STATUS = 'Open' GROUP BY FOD.TICKET_ID, FOD.FIRSTOPENDATETIME ORDER BY FOD.TICKET_ID ASC, FOD.FIRSTOPENDATETIME ASC </code></pre>
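This is a classic state machine over ordered rows; in Snowflake it is usually expressed with `MATCH_RECOGNIZE` or a window that numbers Open/Closed episodes, rather than a single join. Before writing the SQL, it may help to pin down the expected output with the same logic as an explicit pandas pass over the sample data (each Open starts an episode, the next Closed ends it, then the scan resets):

```python
import pandas as pd

df = pd.DataFrame({
    "TICKET_ID": [79355138] * 5 + [79446344] * 4,
    "DATETIMESTANDARD": pd.to_datetime([
        "9/3/2024 11:54:18 AM", "9/3/2024 9:01:12 PM", "9/6/2024 4:52:10 PM",
        "9/6/2024 4:52:12 PM", "9/10/2024 4:01:24 PM",
        "8/27/2024 1:32:54 PM", "9/11/2024 9:40:17 AM",
        "9/11/2024 9:40:24 AM", "9/11/2024 9:42:14 AM",
    ]),
    "STATUS": ["Open", "Open", "Closed", "Open", "Closed",
               "Open", "Closed", "Closed", "Open"],
})

pairs = []
for ticket, g in df.sort_values("DATETIMESTANDARD").groupby("TICKET_ID"):
    open_time = None  # state: None = waiting for Open, else waiting for Closed
    for row in g.itertuples():
        if open_time is None and row.STATUS == "Open":
            open_time = row.DATETIMESTANDARD          # first Open of an episode
        elif open_time is not None and row.STATUS == "Closed":
            pairs.append((ticket, open_time, row.DATETIMESTANDARD))
            open_time = None                          # reset for the next pair
result = pd.DataFrame(pairs, columns=["TICKET_ID", "OPEN", "CLOSED"])
print(result)
```

On the sample data this yields two pairs for 79355138 and one for 79446344 (the second consecutive Closed and the trailing unmatched Open are skipped), which is the target any SQL rewrite should reproduce.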
<python><sql><pandas><snowflake-cloud-data-platform>
2024-11-05 21:10:47
2
1,610
Maku
79,160,614
34,747
How to deploy Django app in docker with UV
<p>I am writing a Dockerfile configuration for a Django app. I am using uv to manage my dependencies in a virtualenv. The app runs normally outside the container, but when I try to run it from the container, it can't find the django package:</p> <pre class="lang-py prettyprint-override"><code>from django.core.wsgi import get_wsgi_application ModuleNotFoundError: No module named 'django' </code></pre> <p>This tells me that the recreation of the virtualenv inside the container is not working as it should. But I cannot find the problem. Here is my Dockerfile:</p> <pre class="lang-none prettyprint-override"><code>FROM ghcr.io/astral-sh/uv:python3.12-bookworm-slim WORKDIR /app # Enable bytecode compilation ENV UV_COMPILE_BYTECODE=1 # Copy from the cache instead of linking since it's a mounted volume ENV UV_LINK_MODE=copy #ENV PYTHONDONTWRITEBYTECODE 1 #ENV PYTHONUNBUFFERED 1 RUN apt-get update &amp;&amp; \ apt-get install -y --no-install-recommends gcc python3-dev libpq-dev gunicorn &amp;&amp;\ apt-get clean &amp;&amp; \ rm -rf /var/lib/apt/lists/* # Install the project's dependencies using the lockfile and settings RUN --mount=type=cache,target=/root/.cache/uv \ --mount=type=bind,source=uv.lock,target=uv.lock \ --mount=type=bind,source=pyproject.toml,target=pyproject.toml \ uv sync --frozen --no-install-project --no-dev # Then, add the rest of the project source code and install it # Installing separately from its dependencies allows optimal layer caching ADD .
/app RUN --mount=type=cache,target=/root/.cache/uv \ uv sync --frozen --no-dev #COPY --from=builder /usr/local/lib/python3.11/site-packages/ /usr/local/lib/python3.11/site-packages/ # Place executables in the environment at the front of the path ENV PATH=&quot;/app/.venv/bin:$PATH&quot; ENTRYPOINT [] RUN useradd -m appuser &amp;&amp; chown -R appuser:appuser /app USER appuser WORKDIR /app/myproject EXPOSE 9090 CMD [&quot;gunicorn&quot;, &quot;myproject.wsgi:application&quot;, &quot;--bind&quot;, &quot;0.0.0.0:9090&quot;] </code></pre> <p>I am also using the following <code>docker-compose</code> file to manage the container</p> <pre class="lang-yaml prettyprint-override"><code>services: web: # Build the image from the Dockerfile in the current directory build: . # Host the app on port 8000 ports: - 9090:9090 networks: - app_network restart: unless-stopped networks: app_network: driver: bridge </code></pre> <p>I should say that I based this Dockerfile on the example provide <a href="https://github.com/astral-sh/uv-docker-example" rel="nofollow noreferrer">by astral-sh</a>.</p> <p>Why does the installation of the dependencies not work?</p>
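One likely suspect in the Dockerfile above: gunicorn is installed with apt, so `/usr/bin/gunicorn` runs Debian's system Python, which knows nothing about the `/app/.venv` that `uv sync` populated — the `PATH` entry is irrelevant because the apt-installed script's shebang pins the system interpreter. A sketch of the usual fix (assuming gunicorn can simply become a project dependency):

```dockerfile
# Drop gunicorn from the apt line -- keep only the build/runtime libs there:
RUN apt-get update && \
    apt-get install -y --no-install-recommends gcc python3-dev libpq-dev && \
    apt-get clean && rm -rf /var/lib/apt/lists/*

# ...and install gunicorn into the uv-managed environment instead, e.g. by
# running `uv add gunicorn` locally so it lands in pyproject.toml/uv.lock.
# The existing `uv sync --frozen` layers then install it, and
# /app/.venv/bin/gunicorn (first on PATH) runs with the venv's Python,
# where django is importable.
```

This keeps the rest of the astral-sh example Dockerfile unchanged.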
<python><django><docker><docker-compose><uv>
2024-11-05 20:42:10
1
6,262
fccoelho
79,160,324
922,128
How to change a Todoist task's duration with the update_task Python and REST API?
<p>I've tried Todoist APIs' update_task and I get error code 400:</p> <blockquote> <p>&quot;Client Error: bad request for url <a href="https://api.todoist.com/rest/v2/tasks/%5B...%5D%22" rel="nofollow noreferrer">https://api.todoist.com/rest/v2/tasks/[...]&quot;</a> with</p> </blockquote> <pre><code># Define the task ID and the new duration task_id = &quot;0123456789&quot; new_duration = { &quot;amount&quot;: 30, &quot;unit&quot;: &quot;minute&quot; } # Update the task's duration api.update_task(task_id=task_id, duration=new_duration) </code></pre> <p>and</p> <pre><code>payload = { &quot;duration&quot;: { &quot;amount&quot;: 30, &quot;unit&quot;: &quot;minute&quot; } } response = requests.post(f&quot;{TODOIST_API_URL}/{task_id}&quot;, headers=headers, json=payload) </code></pre> <p>What is the correct way of using the duration args? Or is the problem somewhere else?</p> <p>The &quot;content&quot; of the task does update successfully. The task is set in the future and I also tried setting duration as null and valid from the Todoist app before calling the API but I get the same error.</p>
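As I read the Todoist REST v2 docs, the update endpoint takes the duration as two flat fields — `duration` (an integer) plus `duration_unit` (`"minute"` or `"day"`) — rather than the nested object that appears in task *responses*, and it is only accepted when the task has a due date with a time. A sketch of the payload shape (field names worth double-checking against the current API reference; the request line is left commented out since it needs a real token and task id):

```python
import json

task_id = "0123456789"  # placeholder id from the question
payload = {
    "duration": 30,             # flat integer, not {"amount": ..., "unit": ...}
    "duration_unit": "minute",  # "minute" or "day"
}
body = json.dumps(payload)
# requests.post(f"https://api.todoist.com/rest/v2/tasks/{task_id}",
#               headers={"Authorization": "Bearer <token>",
#                        "Content-Type": "application/json"},
#               data=body)
print(body)
```

If the 400 persists with this shape, checking the response body (not just the status code) usually names the offending field.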
<python><rest><todoist>
2024-11-05 18:35:13
1
476
TudorT
79,160,196
1,275,942
Maximally broad PEP-508 dependency specifier
<p>I have the following <code>pyproject.toml</code>:</p> <pre class="lang-ini prettyprint-override"><code>dependencies = [ &quot;first-example==*&quot;, &quot;second-example&quot;, ] </code></pre> <p>The first dependency gives:</p> <pre><code>configuration error: `project.dependencies[0]` must be pep508 </code></pre> <p>The second dependency gives:</p> <pre><code>warning: Missing version constraint (e.g., a lower bound) for `second-example` </code></pre> <p>How can I specify &quot;any version&quot;? (In this context, I have an editable install of another package, and I always want it to accept that editable install as satisfying the dependency.)</p>
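Under PEP 508 a bare name already means "any version" — `==*` is not valid specifier syntax (the PEP 440 wildcard form is `==1.*`, which needs a concrete release prefix). If the tool's missing-lower-bound warning is the only remaining objection, `>=0` is the conventional explicit spelling of "anything". A quick check with the `packaging` library (assumed installed; it is what most build backends validate with):

```python
from packaging.requirements import InvalidRequirement, Requirement

for spec in ("first-example==*", "first-example", "first-example>=0"):
    try:
        req = Requirement(spec)
        print(spec, "->", str(req.specifier) or "(any version)")
    except InvalidRequirement:
        print(spec, "-> not valid PEP 508")
```

So `"second-example>=0"` should satisfy both the PEP 508 validator and the lower-bound lint, while still accepting an editable install of any version.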
<python><pyproject.toml>
2024-11-05 17:49:16
0
899
Kaia
79,160,162
243,031
Milvus database is not able to load collection
<p>I am running milvusdb in docker, and it has 18K+ records in that database. VM's storage was full, I stopped docker and did <code>system prune</code> to remove unused resources. I remove temp log files.</p> <p>Docker setup is as below.</p> <pre><code>Skn@Skn:~/milvusDB$ sudo docker-compose ps Name Command State Ports ---------------------------------------------------------------------------------------------------------------- attu docker-entrypoint.sh /bin/ ... Up 0.0.0.0:8000-&gt;3000/tcp,:::8000-&gt;3000/tcp milvus-etcd etcd -advertise-client-url ... Up 2379/tcp, 2380/tcp milvus-minio /usr/bin/docker-entrypoint ... Up (healthy) 9000/tcp milvus-standalone /tini -- milvus run standalone Up 0.0.0.0:19530-&gt;19530/tcp,:::19530-&gt;19530/tcp Skn@Skn:~/milvusDB$ sudo docker ps -a CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES 43aec8f43837 zilliz/attu:v2.2.6 &quot;docker-entrypoint.s…&quot; 33 minutes ago Up 33 minutes 0.0.0.0:8000-&gt;3000/tcp, :::8000-&gt;3000/tcp attu 8ba67d8bdc52 milvusdb/milvus:v2.4.4 &quot;/tini -- milvus run…&quot; 33 minutes ago Up 33 minutes 0.0.0.0:19530-&gt;19530/tcp, :::19530-&gt;19530/tcp milvus-standalone 3957f3277d7c minio/minio:RELEASE.2020-12-03T00-03-10Z &quot;/usr/bin/docker-ent…&quot; 33 minutes ago Up 33 minutes (healthy) 9000/tcp milvus-minio 78ff48fa2acb quay.io/coreos/etcd:v3.5.5 &quot;etcd -advertise-cli…&quot; 33 minutes ago Up 33 minutes 2379-2380/tcp milvus-etcd </code></pre> <p>After that, I ran milvus in container and load try to load the data.</p> <pre><code>milvus_collection = &quot;all_products_collection&quot; from pymilvus import MilvusClient, Collection, connections connections.connect(host=&quot;127.0.0.1&quot;, port=19530) collection = Collection(milvus_collection) collection.load() RPC error: [get_loading_progress], &lt;MilvusException: (code=101, message=collection not loaded[collection=451005601884144622])&gt;, &lt;Time:{'RPC start': '2024-11-05 17:31:26.212114', 'RPC error': '2024-11-05 
17:31:26.214213'}&gt; RPC error: [wait_for_loading_collection], &lt;MilvusException: (code=101, message=collection not loaded[collection=451005601884144622])&gt;, &lt;Time:{'RPC start': '2024-11-05 17:21:25.789017', 'RPC error': '2024-11-05 17:31:26.214354'}&gt; RPC error: [load_collection], &lt;MilvusException: (code=101, message=collection not loaded[collection=451005601884144622])&gt;, &lt;Time:{'RPC start': '2024-11-05 17:21:25.772310', 'RPC error': '2024-11-05 17:31:26.214414'}&gt; Traceback (most recent call last): File &quot;&lt;stdin&gt;&quot;, line 1, in &lt;module&gt; File &quot;/home/AsmaSkinMatch/.local/lib/python3.11/site-packages/pymilvus/orm/collection.py&quot;, line 424, in load conn.load_collection( File &quot;/home/AsmaSkinMatch/.local/lib/python3.11/site-packages/pymilvus/decorators.py&quot;, line 147, in handler raise e from e File &quot;/home/AsmaSkinMatch/.local/lib/python3.11/site-packages/pymilvus/decorators.py&quot;, line 143, in handler return func(*args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^ File &quot;/home/AsmaSkinMatch/.local/lib/python3.11/site-packages/pymilvus/decorators.py&quot;, line 182, in handler return func(self, *args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^ File &quot;/home/AsmaSkinMatch/.local/lib/python3.11/site-packages/pymilvus/decorators.py&quot;, line 122, in handler raise e from e File &quot;/home/AsmaSkinMatch/.local/lib/python3.11/site-packages/pymilvus/decorators.py&quot;, line 87, in handler return func(*args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^ File &quot;/home/AsmaSkinMatch/.local/lib/python3.11/site-packages/pymilvus/client/grpc_handler.py&quot;, line 1148, in load_collection self.wait_for_loading_collection(collection_name, timeout, is_refresh=_refresh) File &quot;/home/AsmaSkinMatch/.local/lib/python3.11/site-packages/pymilvus/decorators.py&quot;, line 147, in handler raise e from e File &quot;/home/AsmaSkinMatch/.local/lib/python3.11/site-packages/pymilvus/decorators.py&quot;, line 143, in handler return func(*args, 
**kwargs) ^^^^^^^^^^^^^^^^^^^^^ File &quot;/home/AsmaSkinMatch/.local/lib/python3.11/site-packages/pymilvus/decorators.py&quot;, line 182, in handler return func(self, *args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^ File &quot;/home/AsmaSkinMatch/.local/lib/python3.11/site-packages/pymilvus/decorators.py&quot;, line 122, in handler raise e from e File &quot;/home/AsmaSkinMatch/.local/lib/python3.11/site-packages/pymilvus/decorators.py&quot;, line 87, in handler return func(*args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^ File &quot;/home/AsmaSkinMatch/.local/lib/python3.11/site-packages/pymilvus/client/grpc_handler.py&quot;, line 1168, in wait_for_loading_collection progress = self.get_loading_progress( ^^^^^^^^^^^^^^^^^^^^^^^^^^ File &quot;/home/AsmaSkinMatch/.local/lib/python3.11/site-packages/pymilvus/decorators.py&quot;, line 147, in handler raise e from e File &quot;/home/AsmaSkinMatch/.local/lib/python3.11/site-packages/pymilvus/decorators.py&quot;, line 143, in handler return func(*args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^ File &quot;/home/AsmaSkinMatch/.local/lib/python3.11/site-packages/pymilvus/decorators.py&quot;, line 182, in handler return func(self, *args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^ File &quot;/home/AsmaSkinMatch/.local/lib/python3.11/site-packages/pymilvus/decorators.py&quot;, line 122, in handler raise e from e File &quot;/home/AsmaSkinMatch/.local/lib/python3.11/site-packages/pymilvus/decorators.py&quot;, line 87, in handler return func(*args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^ File &quot;/home/AsmaSkinMatch/.local/lib/python3.11/site-packages/pymilvus/client/grpc_handler.py&quot;, line 1267, in get_loading_progress check_status(response.status) File &quot;/home/AsmaSkinMatch/.local/lib/python3.11/site-packages/pymilvus/client/utils.py&quot;, line 62, in check_status raise MilvusException(status.code, status.reason, status.error_code) pymilvus.exceptions.MilvusException: &lt;MilvusException: (code=101, message=collection not 
loaded[collection=451005601884144622])&gt; </code></pre> <p>When I check milvusdb standalone container logs, it gives error for show collections.</p> <pre><code>{&quot;log&quot;:&quot;[2024/11/06 03:47:28.041 +00:00] [INFO] [querycoordv2/services.go:56] [\&quot;show collections request received\&quot;] [traceID=70b08d84e33659c01ee8ad00aba85bdc] [collections=\&quot;[451005601884144622]\&quot;]\n&quot;,&quot;stream&quot;:&quot;stdout&quot;,&quot;time&quot;:&quot;2024-11-06T03:47:28.041529283Z&quot;} {&quot;log&quot;:&quot;[2024/11/06 03:47:28.041 +00:00] [WARN] [querycoordv2/services.go:106] [\&quot;show collection failed\&quot;] [collectionID=451005601884144622] [error=\&quot;collection not loaded[collection=451005601884144622]\&quot;]\n&quot;,&quot;stream&quot;:&quot;stdout&quot;,&quot;time&quot;:&quot;2024-11-06T03:47:28.041593484Z&quot;} {&quot;log&quot;:&quot;[2024/11/06 03:47:28.041 +00:00] [WARN] [proxy/util.go:1298] [\&quot;fail to show collections\&quot;] [collectionID=451005601884144622] [error=\&quot;collection not loaded[collection=451005601884144622]\&quot;]\n&quot;,&quot;stream&quot;:&quot;stdout&quot;,&quot;time&quot;:&quot;2024-11-06T03:47:28.041665585Z&quot;} </code></pre> <p>during the load collection, it might be request for <code>show collections</code> and that is failing.</p> <p>how can I load that collection now ?</p>
<python><vector-database><milvus>
2024-11-05 17:40:08
2
21,411
NPatel
79,160,097
16,773,063
How can I redact or remove an entire line from a PDF using PyMuPDF or other Python libraries
<p>I am currently working on formatting PDFs using pyMuPDF library in python. I have several tasks that involves formatting the PDF file as follows:</p> <ol> <li>I need to remove empty lines so that the PDF file does not look dis-oriented. Now, I have tried the following code:</li> </ol> <pre><code>def remove_extra_lines_spaces(doc): for page_index, page in enumerate(doc): blocks = page.get_text(&quot;dict&quot;, sort=True)[&quot;blocks&quot;] empty_line_count = 0 for index, block in enumerate(blocks): try: for line in block[&quot;lines&quot;]: for span in line[&quot;spans&quot;]: if span[&quot;text&quot;].strip() == &quot;&quot;: if empty_line_count &gt;= 1: page.add_redact_annot(line[&quot;bbox&quot;], cross_out=True) empty_line_count += 1 else: empty_line_count = 0 except: pass page.apply_redactions(images=2, graphics=2, text=0) return doc </code></pre> <p>Here, I am keeping track of the empty lines and if I find 2 consecutive lines, then I am removing it. But, this is not removing the entire lines. Here, is the screenshot of what it does when I am simply redacting it without applying it.</p> <p><a href="https://i.sstatic.net/zEflZk5n.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/zEflZk5n.png" alt="enter image description here" /></a></p> <p>As one can see in the uploaded picture, I am able to locate the lines with extra spaces but I am unable to remove it.</p> <p>Now, have anyone came across such problem and has already solved it or does anyone have any idea on how I go about solving it?</p>
<python><pymupdf>
2024-11-05 17:17:47
0
565
MURTUZA BORIWALA
79,160,084
1,442,731
Setting default values in Python protobuf fields
<p>I am writing some python test routines with protobuf acting as the communication medium with a firmware module. I keep tripping over an issue. I receive a response from a module expecting a value in a protobuf field and it's empty. I know that protobuf skips over fields that have the default value in the message, but it would be nice to be able to count on that field to be set in the program itself.</p> <p>Example: I have a protobuf field:</p> <pre><code> ReturnCode returnCode = 11; </code></pre> <p>On normal conditions, the return code is 0, which skips the field in the message. But I want to have something in my python code like:</p> <pre><code> response = await common.mkRequest(study_pb2.STUDY_LIST, study_list_cmd) resp = await common.decodePacket(response.resp) assert resp.returnCode==0 </code></pre> <p>Here, resp contains the decoded protobuf message that would contain returnCode. But when returnCode is the default 0, it is undefined.</p> <p>Is there any way to set (after decode) or preset (before the decode) the default values for a message so I don't have to use something like:</p> <pre><code>getattr(resp, 'returnCode', study_pb2.EOK), resp.study </code></pre> <p>This gives the default value for returnCode, but I would like to do this for the entire message without having to explicitly coding this for each field.</p>
<python><protocol-buffers>
2024-11-05 17:15:18
0
6,227
wdtj
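On the question above: in the proto3 Python bindings, a scalar field that was skipped on the wire is not undefined after decode; reading it yields the type's default, so `resp.returnCode == 0` already holds without any `getattr` fallback. A minimal sketch using a message bundled with `protobuf` itself (`Duration`, whose plain integer `seconds` field stands in for the question's `returnCode`):

```python
# Proto3 semantics, illustrated with a message shipped with the protobuf
# package itself: a scalar field holding its default value is skipped on
# the wire, but *reading* it after decode still returns that default.
from google.protobuf import duration_pb2

msg = duration_pb2.Duration(seconds=0)  # explicitly set to the default
wire = msg.SerializeToString()
print(len(wire))                        # 0 bytes: the field is not serialized

decoded = duration_pb2.Duration()
decoded.ParseFromString(wire)
print(decoded.seconds)                  # 0: the default is implied on read
```

So `assert resp.returnCode == 0` is safe as written; the value is only absent from the encoded bytes, not from the decoded message object.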
79,159,768
2,171,348
Performance using Python 3/pandas to read an .ods (OpenDocument Spreadsheet) file
<p>Environment:</p> <pre> inside docker container in WSL2/Ubuntu22.04 python 3.12 pandas 2.2.2 odfpy 1.4.1 openpyxl 3.1.3 </pre> <p>The .ods file I have on disk is 6.8MB (two sheets, one sheet has 16,000 rows, the other has 74,000 rows). I can open this file in MS excel in no time.</p> <p>I have the following code to read this file (first read it into a bytes variable):</p> <pre><code> t1 = time.perf_counter() excel = ExcelFile(BytesIO(file_content), engine=&quot;odf&quot;) t2 = time.perf_counter() data = pd.read_excel(BytesIO(file_content), engine=&quot;odf&quot;) t3 = time.perf_counter() </code></pre> <p>the following is the value of t1, t2 and t3 after the file content is read into <strong>excel</strong> and <strong>data</strong>:</p> <pre> t1: 366108.7721855 t2: 366606.7265884 t3: 367100.7166519 </pre> <p>It takes about 10 minutes to read the data as Excel or pandas dataframe. Anywhere I can tune to improve the reading performance?</p>
<python><pandas><performance><odf>
2024-11-05 15:34:10
0
481
H.Sheng
79,159,747
4,119,291
Is there a way to render gt tables as PNGs with (a) no browser and (b) without wkhtmltopdf/wkhtmltoimage? (R/Python)
<p>I have a really pretty gt table that we'd like to automate production of. Running this on our remote server has some limitations: enterprise policy is that no browsers, headless or otherwise, may be installed on the server; the admin has been unwilling to install wkhtmltopdf.</p> <p>So I can either run this locally, which I'd rather avoid, or I can schedule it to crank out an HTML table, which is a pain for the person who actually uses these images. Rendering the gt_tbl using ggplot destroys a lot of the formatting that was done.</p> <p>The R script that generates the table is set up now to output an html table. I'm open to solutions in R or Python that can run after the table is generated.</p> <p>Thanks!</p>
<python><r><wkhtmltopdf><gt><wkhtmltoimage>
2024-11-05 15:29:35
1
331
Rich Ard
79,159,314
2,726,900
Can I tell devpi to cache a list of packages?
<p>I want to use <code>devpi</code> as an extra package cache because our main JFrog Artifactory sometimes goes down, and my Airflow with PythonVirtualenvOperators should work even in these cases.</p> <p>I want to do the following:</p> <ul> <li>set <code>--index-url</code> to our main Artifactory and <code>--extra-index-url</code> to <code>devpi</code>, so that packages are downloaded from the main Artifactory whenever possible.</li> <li>somehow tell <code>devpi</code> that the <code>requirements.txt</code> that I pass to <code>PythonVirtualenvOperator</code> should be cached into my <code>devpi</code> (with all its dependencies cached too).</li> </ul> <p>Can it be done?</p>
<python><pip><devpi>
2024-11-05 13:33:18
0
3,669
Felix
79,159,200
7,462,275
How to fill spaces between subplots with a color in Matplotlib?
<p>With the following code:</p> <pre><code>nb_vars=4 fig, axs = plt.subplots(4,4,figsize=(8,8), gridspec_kw = {'wspace':0.20, 'hspace':0.20}, dpi= 100) for i_ax in axs: for ii_ax in i_ax: ii_ax.set_yticklabels([]) for i_ax in axs: for ii_ax in i_ax: ii_ax.set_xticklabels([]) </code></pre> <p>The space between the subplots is white. How is it possible to colour them? And with different colors? See for example this figure: <a href="https://i.sstatic.net/6TuiklBM.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/6TuiklBM.png" alt="enter image description here" /></a></p>
<python><matplotlib>
2024-11-05 12:55:05
1
2,515
Stef1611
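One sketch for the question above: the gaps between subplots show the figure background through, so setting the figure patch's facecolor fills them with a single colour (different colours per gap would need extra full-figure background axes drawn underneath; the colour name here is illustrative):

```python
import matplotlib
matplotlib.use("Agg")  # headless backend so the sketch runs anywhere
import matplotlib.colors
import matplotlib.pyplot as plt

fig, axs = plt.subplots(4, 4, figsize=(8, 8),
                        gridspec_kw={'wspace': 0.20, 'hspace': 0.20})
for ax in axs.flat:
    ax.set_xticklabels([])
    ax.set_yticklabels([])

# The space between subplots is the figure background showing through,
# so colouring the figure patch colours the gaps:
fig.patch.set_facecolor('lightblue')
rgba = matplotlib.colors.to_rgba('lightblue')
```

The subplot axes keep their own (default white) facecolor, so only the spacing picks up the new colour.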
79,159,186
4,508,605
How to read from a CSV and generate Excel files named after a CSV column in Python
<p>Currently i am reading sql queries from column in csv file, executing them in Snowflake and then generating sql query result in separate excel file for every query. The code is working fine.</p> <p>The excel files are generating with filename as <code>result_1.xlsx</code>, <code>result_2.xlsx</code>. But now i want to create excel file with filename as provided in second column of csv file.</p> <p>Below is my python code:</p> <pre><code># Read queries from the CSV file with UTF-8 encoding to remove BOM with open(csv_file_path, mode='r', encoding='utf-8-sig') as file: reader = csv.reader(file) queries = [row[0] for row in reader if row] # Skip empty rows # Execute each query and save results to separate Excel files for i, query in enumerate(queries): try: # Execute the query using SQLAlchemy engine df = pd.read_sql(query, engine) # Define the output Excel file name output_file = os.path.join(output_directory, f'result_{i + 1}.xlsx') # Save the DataFrame to an Excel file df.to_excel(output_file, index=False) print(f'Saved results of query {i + 1} to {output_file}') except Exception as e: print(f&quot;Error executing query {i + 1}: {e}&quot;) </code></pre>
<python><excel><csv>
2024-11-05 12:49:18
1
4,021
Marcus
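A stdlib-only sketch of the change for the question above: keep both CSV columns when reading, then build each output path from the second column. The CSV content, column order, and `/tmp/results` directory are illustrative; the commented-out pandas/SQL lines stand in for the unchanged parts of the original script.

```python
import csv
import io
import os

# Stand-in for the CSV file: query in column 1, output name in column 2.
# (Queries containing commas would need to be quoted in the real file.)
csv_text = 'select 1;,daily_sales\nselect 2;,monthly_totals\n'

with io.StringIO(csv_text) as file:
    reader = csv.reader(file)
    rows = [(row[0], row[1]) for row in reader if row]  # keep BOTH columns

output_directory = '/tmp/results'
output_files = []
for query, name in rows:
    # df = pd.read_sql(query, engine)        # unchanged from the question
    output_file = os.path.join(output_directory, f'{name}.xlsx')
    # df.to_excel(output_file, index=False)  # unchanged from the question
    output_files.append(output_file)

print(output_files)
```

The only structural change from the original loop is reading `row[1]` alongside `row[0]` and using it in place of the `result_{i + 1}` counter.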
79,159,178
7,334,912
Diffusers pipeline embeddings: not enough values to unpack
<p>I wanted to generate a image using text embedding instead of text as input using clip to tokenizes &amp; embeds.</p> <p>The code so far :</p> <pre><code>from transformers import AutoTokenizer, CLIPTextModelWithProjection model = CLIPTextModelWithProjection.from_pretrained(&quot;openai/clip-vit-base-patch32&quot;) tokenizer = AutoTokenizer.from_pretrained(&quot;openai/clip-vit-base-patch32&quot;) from diffusers import StableDiffusionPipeline, DDIMScheduler import torch path =&quot;path_to_my_model.safetensors&quot; pipe = StableDiffusionPipeline.from_single_file(f&quot;{path}&quot;, torch_dtype=torch.float16, use_safetensors=True, variant=&quot;fp16&quot;) pipe.to(&quot;cuda&quot;) import numpy as np import torch prompt = &quot;some random prompt&quot; text_input = tokenizer(prompt, padding=&quot;max_length&quot;, max_length=tokenizer.model_max_length, truncation=True, return_tensors=&quot;pt&quot;) text_embeddings = model(text_input.input_ids)[0] batch_size = len(text_input) uncond_input = tokenizer( [&quot;&quot;] * batch_size, padding=&quot;max_length&quot;, max_length=tokenizer.model_max_length, return_tensors=&quot;pt&quot;, truncation=True ) uncond_embeddings = model(uncond_input.input_ids)[0] text_embeddings = torch.cat([uncond_embeddings, text_embeddings]) output_image = pipe(prompt_embeds=text_embeddings).images[0] </code></pre> <p>I get so far <code>ValueError: not enough values to unpack (expected 3, got 2)</code></p> <p>Althought the embedding shape is [3,512], <code>text_embeddings.shape</code> <code>torch.Size([3, 512])</code></p> <p>I can't figure out where the issues is. I also tried to not concat with uncond_embedding.</p>
<python><pytorch><huggingface-transformers><huggingface><stable-diffusion>
2024-11-05 12:45:33
0
502
Felox
79,159,168
5,594,008
Ruff and E203 rule
<p>I'm having some issue with string slicing, ruff is formatting such code:</p> <pre><code>result: str custom_index = 5 result = ( result[:custom_index] + f&quot;new_value&quot; + result[custom_index + 7:] ) </code></pre> <p>and makes last line having extra whitespace <code>result[custom_index + 7 :]</code>.</p> <p>But there is another rule <code>PEP 8: E203 whitespace before ':'</code>.</p> <p>How can I disable such behaviour? I've tried to add some rules to exclude in pyproject.toml, but my guessing failed.</p> <p>Here is a <a href="https://github.com/Headmaster11/ruff_error" rel="nofollow noreferrer">test project</a> to reproduce the error. Launch <code>pre-commit run --all-files</code> and see <code>custom_file_from_project.py</code> line 12 and line 17.</p>
<python><ruff>
2024-11-05 12:43:13
0
2,352
Headmaster
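On the Ruff question above: the space before `:` in slices with complex bounds comes from the formatter (it follows Black here, and Black's position is that E203 should be disabled in the linter rather than the formatter changed). Assuming the E203 warning is raised by Ruff's linter, a sketch of the `pyproject.toml` change:

```toml
[tool.ruff.lint]
# E203 conflicts with the formatter's slice spacing; disable it in the linter.
ignore = ["E203"]
```

If the warning instead comes from the IDE's own PEP 8 inspection (the "PEP 8: E203" wording suggests PyCharm's pycodestyle check), it would need to be suppressed in that tool's settings rather than in Ruff's config.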
79,158,863
2,915,050
Call Google API within Cloud Function with Authentication
<p>I'm trying to call the Dataform API from within a Cloud Function, however the identity token I am providing is returning with a <code>'Request had invalid authentication credentials. Expected OAuth 2 access token, login cookie or other valid authentication credential.'</code></p> <p>According to the <a href="https://cloud.google.com/dataform/reference/rest/v1beta1/projects.locations.repositories.workflowInvocations/query" rel="nofollow noreferrer">documentation</a>, the SA running the Cloud Function only needs <code>dataform.workflowInvocations.query</code> permission, which I have granted and confirmed this works when using the Python Dataform library (I need to call the REST API endpoint for a technical reason that the Python library does not provide).</p> <p>This is the code I am using to make the request with an authorization token, which is what I got from Google themselves <a href="https://cloud.google.com/functions/docs/securing/authenticating#generate_tokens_programmatically" rel="nofollow noreferrer">here</a>, however using the <code>requests</code> library to make the API call.</p> <pre><code>import requests import google.auth.transport.requests import google.oauth2.id_token def get_workflow_invocation_actions(environment, git_repo, workflow_invocation_id): parent = f&quot;projects/{environment}/locations/europe-west2/repositories/{git_repo}&quot; name = f&quot;{parent}/workflowInvocations/{workflow_invocation_id}&quot; # The API endpoint url = f&quot;https://dataform.googleapis.com/v1beta1/{name}:query&quot; auth_req = google.auth.transport.requests.Request() id_token = google.oauth2.id_token.fetch_id_token(auth_req, url) # A GET request to the API response = requests.get(url, headers={'Authorization': f'Bearer {id_token}'}) # Print the response print(response.json()) </code></pre> <p>The <code>id_token</code> does populate with a valid looking token, so I am unsure what is stopping it from authenticating via the REST API.</p>
<python><google-api><google-cloud-functions><google-oauth>
2024-11-05 11:17:52
0
1,583
RoyalSwish
79,158,826
18,344,512
Printing nested HTML tables in PyQt6
<p>I have an issue when trying to print the contents of a QTableWidget in a PyQt6 application.</p> <p>It actually works, but there is a small problem: I have tables embedded in the main table and I'd like those tables to completely fill the parent cells (100% of their widths), but the child tables don't expand as expected.</p> <p>This is my code:</p> <pre class="lang-py prettyprint-override"><code>import sys from PyQt6 import QtWidgets, QtPrintSupport from PyQt6.QtGui import QTextDocument class MyWidget(QtWidgets.QWidget): def __init__(self): super().__init__() self.table_widget = QtWidgets.QTableWidget() self.button = QtWidgets.QPushButton('Print TableWidget') self.layout = QtWidgets.QVBoxLayout(self) self.layout.addWidget(self.table_widget) self.layout.addWidget(self.button) self.button.clicked.connect(self.print_table) def print_table(self): html_table = ''' &lt;table cellpadding=&quot;0&quot;&gt; &lt;tr&gt;&lt;th&gt;header1&lt;/th&gt;&lt;th&gt;header2&lt;/th&gt;&lt;th&gt;header3&lt;/th&gt;&lt;/tr&gt; &lt;tr&gt; &lt;td&gt;data1&lt;/td&gt; &lt;td&gt;data2&lt;/td&gt; &lt;td&gt;&lt;table&gt; &lt;tr&gt; &lt;th&gt;header1&lt;/th&gt;&lt;th&gt;header2&lt;/th&gt;&lt;th&gt;header3&lt;/th&gt; &lt;/tr&gt; &lt;tr&gt; &lt;td&gt;data3&lt;/td&gt;&lt;td&gt;data3&lt;/td&gt;&lt;td&gt;data3&lt;/td&gt; &lt;/tr&gt; &lt;/table&gt;&lt;/td&gt; &lt;/tr&gt; &lt;tr&gt; &lt;td&gt;data1&lt;/td&gt; &lt;td&gt;data2&lt;/td&gt; &lt;td&gt;&lt;table&gt; &lt;tr&gt; &lt;th&gt;hr1&lt;/th&gt;&lt;th&gt;hr2&lt;/th&gt;&lt;th&gt;hr3&lt;/th&gt;&lt;th&gt;hr4&lt;/th&gt; &lt;/tr&gt; &lt;tr&gt; &lt;td&gt;d3&lt;/td&gt;&lt;td&gt;d3&lt;/td&gt;&lt;td&gt;d3&lt;/td&gt;&lt;td&gt;d3&lt;/td&gt; &lt;/tr&gt; &lt;tr&gt; &lt;td&gt;d3&lt;/td&gt;&lt;td&gt;d3&lt;/td&gt;&lt;td&gt;d3&lt;/td&gt;&lt;td&gt;d3&lt;/td&gt; &lt;/tr&gt; &lt;/table&gt;&lt;/td&gt; &lt;/tr&gt; &lt;/table&gt; ''' style_sheet = ''' table { border-collapse: collapse; width: 100%; } th { background-color: lightblue; border: 1px solid gray; height: 1em; } td 
{ border: 1px solid gray; padding: 0; vertical-align: top; } ''' text_doc = QTextDocument() text_doc.setDefaultStyleSheet(style_sheet) text_doc.setHtml(html_table) prev_dialog = QtPrintSupport.QPrintPreviewDialog() prev_dialog.paintRequested.connect(text_doc.print) prev_dialog.exec() if __name__ == '__main__': app = QtWidgets.QApplication([]) widget = MyWidget() widget.resize(640,480) widget.show() sys.exit(app.exec()) </code></pre> <p>And this is what i get:</p> <p><a href="https://i.sstatic.net/CUeTUP2r.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/CUeTUP2r.png" alt="Screenshot of the result" /></a></p> <p>But this is what i want:</p> <p><a href="https://i.sstatic.net/JAB3Y92C.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/JAB3Y92C.png" alt="Expected result" /></a></p> <p>I would appreciate any suggestions about this problem, as I have no idea about how to fix it.</p>
<python><html><printing><pyqt6><qtextdocument>
2024-11-05 11:07:48
2
1,499
SergFSM
79,158,791
11,575,738
Tracking test/val loss when training a model with JAX
<p>JAX when being used for training a machine learning model, we only try to minimize the training loss.</p> <p>Whereas in my requirement, in order to assess the number of epochs or to avoid over-training, I need to know the test loss as well at every parameter update step. But the callback or debug option available in JAX explicitly suggests that I shouldn't be doing any compute intensive tasks, like finding the test loss and accuracy.</p> <p>The below optimization works flawless</p> <pre class="lang-py prettyprint-override"><code>import pennylane as qml from pennylane import numpy as np import jax from jax import numpy as jnp import optax from itertools import combinations from sklearn.datasets import load_digits from sklearn.model_selection import train_test_split from sklearn.neural_network import MLPClassifier from sklearn.metrics import log_loss import matplotlib.pyplot as plt import matplotlib.colors import warnings warnings.filterwarnings(&quot;ignore&quot;) np.random.seed(42) import time # Load the digits dataset with features (X_digits) and labels (y_digits) X_digits, y_digits = load_digits(return_X_y=True) # Create a boolean mask to filter out only the samples where the label is 2 or 6 filter_mask = np.isin(y_digits, [2, 6]) # Apply the filter mask to the features and labels to keep only the selected digits X_digits = X_digits[filter_mask] y_digits = y_digits[filter_mask] # Split the filtered dataset into training and testing sets with 10% of data reserved for testing X_train, X_test, y_train, y_test = train_test_split( X_digits, y_digits, test_size=0.1, random_state=42 ) # Normalize the pixel values in the training and testing data # Convert each image from a 1D array to an 8x8 2D array, normalize pixel values, and scale them X_train = np.array([thing.reshape([8, 8]) / 16 * 2 * np.pi for thing in X_train]) X_test = np.array([thing.reshape([8, 8]) / 16 * 2 * np.pi for thing in X_test]) # Adjust the labels to be centered around 0 and scaled to be in the 
range -1 to 1 # The original labels (2 and 6) are mapped to -1 and 1 respectively y_train = (y_train - 4) / 2 y_test = (y_test - 4) / 2 def feature_map(features): # Apply Hadamard gates to all qubits to create an equal superposition state for i in range(len(features[0])): qml.Hadamard(i) # Apply angle embeddings based on the feature values for i in range(len(features)): # For odd-indexed features, use Z-rotation in the angle embedding if i % 2: qml.AngleEmbedding(features=features[i], wires=range(8), rotation=&quot;Z&quot;) # For even-indexed features, use X-rotation in the angle embedding else: qml.AngleEmbedding(features=features[i], wires=range(8), rotation=&quot;X&quot;) # Define the ansatz (quantum circuit ansatz) for parameterized quantum operations def ansatz(params): # Apply RY rotations with the first set of parameters for i in range(8): qml.RY(params[i], wires=i) # Apply CNOT gates with adjacent qubits (cyclically connected) to create entanglement for i in range(8): qml.CNOT(wires=[(i - 1) % 8, (i) % 8]) # Apply RY rotations with the second set of parameters for i in range(8): qml.RY(params[i + 8], wires=i) # Apply CNOT gates with qubits in reverse order (cyclically connected) # to create additional entanglement for i in range(8): qml.CNOT(wires=[(8 - 2 - i) % 8, (8 - i - 1) % 8]) dev = qml.device(&quot;default.qubit&quot;, wires=8) @qml.qnode(dev) def circuit(params, features): feature_map(features) ansatz(params) return qml.expval(qml.PauliZ(0)) def variational_classifier(weights, bias, x): return circuit(weights, x) + bias def square_loss(labels, predictions): return np.mean((labels - qml.math.stack(predictions)) ** 2) def accuracy(labels, predictions): acc = sum([np.sign(l) == np.sign(p) for l, p in zip(labels, predictions)]) acc = acc / len(labels) return acc def cost(params, X, Y): predictions = [variational_classifier(params[&quot;weights&quot;], params[&quot;bias&quot;], x) for x in X] return square_loss(Y, predictions) def acc(params, X, Y): 
predictions = [variational_classifier(params[&quot;weights&quot;], params[&quot;bias&quot;], x) for x in X] return accuracy(Y, predictions) np.random.seed(0) weights = 0.01 * np.random.randn(16) bias = jnp.array(0.0) params = {&quot;weights&quot;: weights, &quot;bias&quot;: bias} opt = optax.adam(0.05) batch_size = 7 num_batch = X_train.shape[0] // batch_size opt_state = opt.init(params) X_batched = X_train.reshape([-1, batch_size, 8, 8]) y_batched = y_train.reshape([-1, batch_size]) @jax.jit def update_step_jit(i, args): params, opt_state, data, targets, batch_no, print_training = args _data = data[batch_no % num_batch] _targets = targets[batch_no % num_batch] loss_val, grads = jax.value_and_grad(cost)(params, _data, _targets) updates, opt_state = opt.update(grads, opt_state) params = optax.apply_updates(params, updates) # Print training loss every 5 steps if print_training is True def print_fn(): jax.debug.print(&quot;Step: {i}, Train Loss: {loss_val}&quot;, i=i, loss_val=loss_val) jax.lax.cond((jnp.mod(i, 1) == 0) &amp; print_training, print_fn, lambda: None) return (params, opt_state, data, targets, batch_no + 1, print_training) @jax.jit def optimization_jit(params, data, targets, print_training = True): opt_state = opt.init(params) args = (params, opt_state, data, targets, 0, print_training) (params, opt_state, _, _, _, _) = jax.lax.fori_loop(0, 10, update_step_jit, args) return params start_time = time.time() params = optimization_jit(params, X_batched, y_batched) print(&quot;Training Done! 
\nTime taken:&quot;,time.time() - start_time) var_train_acc = acc(params, X_train, y_train) print(&quot;Training accuracy: &quot;, var_train_acc) var_test_acc = acc(params, X_test, y_test) print(&quot;Testing accuracy: &quot;, var_test_acc) </code></pre> <p>Although inefficient, I tried to include more compute in the <code>print_fn()</code> function as:</p> <pre class="lang-py prettyprint-override"><code>@jax.jit def update_step_jit(i, args): params, opt_state, data, targets, batch_no, print_training = args _data = data[batch_no % num_batch] _targets = targets[batch_no % num_batch] loss_val, grads = jax.value_and_grad(cost)(params, _data, _targets) updates, opt_state = opt.update(grads, opt_state) params = optax.apply_updates(params, updates) def print_fn(): jax.debug.print(&quot;Step: {i}, Train Loss: {loss_val}&quot;, i=i, loss_val=loss_val) # Calculate accuracy and loss for training and test sets train_accuracy = acc(params, X_train, y_train) test_predictions = jnp.array([variational_classifier(params[&quot;weights&quot;], params[&quot;bias&quot;], x) for x in X_test]) test_loss = square_loss(y_test, test_predictions) test_accuracy = accuracy(y_test, test_predictions) jax.debug.print(&quot;Step: {i}, Train Accuracy {train_accuracy}&quot;, i=i, train_accuracy = train_accuracy) jax.debug.print(&quot;Step: {i}, Test Accuracy {test_accuracy}&quot;, i=i, test_accuracy = test_accuracy) jax.debug.print(&quot;Step: {i}, Test Loss {test_loss}&quot;, i=i, test_loss = test_loss) # if print_training=True, print the loss every 5 steps jax.lax.cond((jnp.mod(i, 1) == 0) &amp; print_training, print_fn, lambda: None) return (params, opt_state, data, targets, batch_no + 1, print_training) @jax.jit def optimization_jit(params, data, targets, print_training = False): opt_state = opt.init(params) args = (params, opt_state, data, targets, 0, print_training) (params, opt_state, _, _, _, _) = jax.lax.fori_loop(0, 10, update_step_jit, args) return params params = 
optimization_jit(params, X_batched, y_batched, print_training = True) var_train_acc = acc(params, X_train, y_train) var_test_acc = acc(params, X_test, y_test) print(&quot;Training accuracy: &quot;, var_train_acc) print(&quot;Testing accuracy: &quot;, var_test_acc) </code></pre> <p>It gives me a weird error like:</p> <pre><code>TracerArrayConversionError: The numpy.ndarray conversion method __array__() was called on traced array with shape float32[] The error occurred while tracing the function print_fn at C:\Users\...\AppData\Local\Temp\ipykernel_43468\2623796165.py:10 for cond. This value became a tracer due to JAX operations on these lines: operation a:f32[1] = slice[limit_indices=(1,) start_indices=(0,) strides=None] b from line C:\Users\...\AppData\Local\Temp\ipykernel_43468\1231864253.py:61:15 (ansatz) operation a:f32[1] = slice[limit_indices=(2,) start_indices=(1,) strides=None] b from line C:\Users\...\AppData\Local\Temp\ipykernel_43468\1231864253.py:61:15 (ansatz) operation a:f32[1] = slice[limit_indices=(3,) start_indices=(2,) strides=None] b from line C:\Users\mysore\AppData\Local\Temp\ipykernel_43468\1231864253.py:61:15 (ansatz) operation a:f32[1] = slice[limit_indices=(4,) start_indices=(3,) strides=None] b from line C:\Users\...\AppData\Local\Temp\ipykernel_43468\1231864253.py:61:15 (ansatz) operation a:f32[1] = slice[limit_indices=(5,) start_indices=(4,) strides=None] b from line C:\Users\...\AppData\Local\Temp\ipykernel_43468\1231864253.py:61:15 (ansatz) (Additional originating lines are not shown.) </code></pre> <p>My training time would drastically increase if I don't use JAX. 
So, with JAX, I can get the performance on the test set only at the end of the training, and there's no way I can get the test losses in the middle of the training or is there a workaround for this?</p> <p>For a minimal reproducable example, you can try to run the example given in <a href="https://pennylane.ai/qml/demos/tutorial_post-variational_quantum_neural_networks/#variational-approach" rel="nofollow noreferrer">this demo</a> code.</p>
<python><machine-learning><logging><jax>
2024-11-05 10:58:38
2
331
Sup
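One workaround for the JAX question above, sketched on a toy pure-JAX regression (the data, model, step count, and learning rate are all illustrative, not from the original PennyLane code): compute the test loss inside the traced step and write it into an array carried through `fori_loop`, instead of trying to pull a concrete value out of a tracer with a debug callback. The concrete history is read out once the loop returns.

```python
import jax
import jax.numpy as jnp

# Toy linear-regression stand-in for the train/test split in the question.
X_train = jnp.linspace(0.0, 1.0, 8).reshape(-1, 1)
y_train = 3.0 * X_train[:, 0] + 1.0
X_test = jnp.array([[0.25], [0.75]])
y_test = 3.0 * X_test[:, 0] + 1.0

def loss(params, X, y):
    pred = X @ params["w"] + params["b"]
    return jnp.mean((pred - y) ** 2)

def step(i, carry):
    params, history = carry
    grads = jax.grad(loss)(params, X_train, y_train)
    params = {k: v - 0.5 * grads[k] for k, v in params.items()}
    # Compute the test loss *inside* the traced step and store it in the
    # loop carry; concrete values are read out after the loop finishes.
    history = history.at[i].set(loss(params, X_test, y_test))
    return params, history

params = {"w": jnp.zeros(1), "b": jnp.asarray(0.0)}
history = jnp.zeros(50)
params, history = jax.lax.fori_loop(0, 50, step, (params, history))
print(float(history[0]), float(history[-1]))  # test loss shrinks over training
```

This keeps the whole loop jittable: the per-step test loss stays a traced value during training, and early-stopping or epoch-count decisions can be made on `history` afterwards (or by replacing `fori_loop` with a Python loop over jitted steps if the decision must happen mid-training).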
79,158,548
8,254,743
How to control plot size with different legend sizes in Matplotlib
<p>I want to have 2 plots of the same size. The size of the figure is not as important. The only change I am making is to the length of the labels. (In reallity I have 2 related data sets )</p> <p>A long label causes the plot to deform. How can I avoid this? I need 2 coherent plots.</p> <pre><code> import numpy as np from matplotlib import pyplot as plt def my_plot(x,ys,labels, size = (5.75, 3.2)): fig, ax1 = plt.subplots(nrows=1, ncols=1, sharex=True, figsize=size, dpi = 300) ax1.plot(x, ys[0], label = labels[0]) ax1.plot(x, ys[1], label = labels[1]) ## Add ticks, axis labels and title ax1.set_xlim(0,21.1) ax1.set_ylim(-50,50) ax1.tick_params(axis='both', which='major', labelsize=18) ax1.set_xlabel('Time', size = 18) ax1.set_ylabel('Angle', size = 18) ## Add legend outside the plot ax1.legend(ncol=1, bbox_to_anchor=(1, 0.5), loc='center left', edgecolor='w') # Dummy data x1 = np.arange(0, 24, 0.1) y1_1 = np.sin(x1)*45 y1_2 = np.cos(x1)*25 my_plot(x1, [y1_1, y1_2], [&quot;sin&quot;, &quot;cos&quot;, &quot;tan&quot;]) my_plot(x1, [y1_1, y1_2], [&quot;long_sin&quot;, &quot;long_cos&quot;, &quot;long_tan&quot;]) </code></pre> <p>resulting in the following two plots:</p> <p><strong>plot 1:</strong> <a href="https://i.sstatic.net/8M17sVxT.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/8M17sVxT.png" alt="short labels" /></a> <strong>plot 2:</strong> <a href="https://i.sstatic.net/DdalbpD4.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/DdalbpD4.png" alt="long labels" /></a></p> <p><strong>I have tried:</strong></p> <ul> <li><code>plt.tight_layout()</code></li> <li><code>ax1.set_aspect(0.1)</code></li> <li>changing size of subplots - which almost solves the issue, but not quite, as all the effective font sizes change.</li> </ul> <p>to be clear I want the plots to be separated (to be later saved in 2 files)</p>
<python><numpy><matplotlib><plot><figure>
2024-11-05 09:53:31
2
667
Frank Musterman
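One sketch for the question above (the figure size and axes rectangle are illustrative): place the axes at a fixed rectangle with `fig.add_axes` instead of `plt.subplots`, so a longer legend outside the axes cannot shrink the plot area.

```python
import matplotlib
matplotlib.use("Agg")  # headless backend so the sketch runs anywhere
import numpy as np
from matplotlib import pyplot as plt

def my_plot(x, ys, labels):
    fig = plt.figure(figsize=(8, 3.2), dpi=100)
    # Fixed axes rectangle (left, bottom, width, height, in figure fraction):
    # the axes no longer resize to make room for a longer legend.
    ax1 = fig.add_axes([0.12, 0.18, 0.55, 0.75])
    for y, label in zip(ys, labels):
        ax1.plot(x, y, label=label)
    ax1.legend(bbox_to_anchor=(1.02, 0.5), loc='center left', edgecolor='w')
    return fig, ax1

x = np.arange(0, 24, 0.1)
ys = [np.sin(x) * 45, np.cos(x) * 25]
_, short_ax = my_plot(x, ys, ["sin", "cos"])
_, long_ax = my_plot(x, ys, ["very_long_sin_label", "very_long_cos_label"])
print(short_ax.get_position().bounds == long_ax.get_position().bounds)
```

Since the axes rectangle is pinned, both plots (and their fonts) render at identical sizes; the trade-off is that a very long legend may be clipped at the figure edge, which the reserved right margin has to accommodate.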
79,158,535
10,200,497
How can I search a column's previous values for the nearest one greater than another column's value in the selected row?
<p>This is my DataFrame:</p> <pre><code>import pandas as pd df = pd.DataFrame({ 'a': [10, 20, 30, 1, 20, 3, 4, 0], 'b': [30, 3, 11, 25, 24, 31, 29, 2], 'c': [True, True, True, False, False, True, True, True] }) </code></pre> <p>Expected output is creating column <code>d</code>:</p> <pre><code> a b c d 0 10 30 True NaN 1 20 3 True 10 2 30 11 True 20 3 1 25 False NaN 4 20 24 False NaN 5 3 31 True NaN 6 4 29 True 30 7 0 2 True 4 </code></pre> <p>First of all the values in <code>b</code> that their <code>c</code> are <code>True</code> are selected. I explain the process from row <code>1</code> because it is easier to understand.</p> <p>The value in <code>b</code> is 3, then all values above it should be checked. And the NEAREST value in <code>a</code> that is greater than 3 should be selected. So 10 is selected.</p> <p>for row number <code>2</code>, the value is 11. The nearest value to this one that is greater than it in <code>a</code> is 20.</p> <p>For rows 3 and 4 since <code>c</code> is <code>False</code>. <code>NaN</code> should be selected.</p> <p>For row <code>5</code>, since there are no previous values that are greater than 31 in <code>a</code>, <code>NaN</code> is selected.</p> <p>For row <code>6</code>, the nearest value in <code>a</code> that is greater than 29 is 30.</p> <p>This is what I have tried so far. It doesn't give me the output. I think the approach that I'm taking might be correct.</p> <pre><code>t = df['a'].to_numpy() h = df['b'].to_numpy() m2 = t &lt; h[:, None] df['d'] = np.nanmax(np.where(m2, t, np.nan), axis=1) </code></pre>
<python><pandas><dataframe>
2024-11-05 09:50:36
1
2,679
AmirX
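A straightforward sketch for the question above (an explicit loop, O(n²) in the worst case but easy to verify; a vectorized version could build on the same mask idea the asker started with): for each `True` row, scan the earlier `a` values and take the most recent one greater than that row's `b`.

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({
    'a': [10, 20, 30, 1, 20, 3, 4, 0],
    'b': [30, 3, 11, 25, 24, 31, 29, 2],
    'c': [True, True, True, False, False, True, True, True],
})

a = df['a'].to_numpy()
b = df['b'].to_numpy()
c = df['c'].to_numpy()

d = np.full(len(df), np.nan)
for i in range(len(df)):
    if not c[i]:
        continue                       # False rows stay NaN
    prev = a[:i]                       # values strictly above row i
    greater = np.nonzero(prev > b[i])[0]
    if greater.size:
        d[i] = prev[greater[-1]]       # nearest (most recent) greater value
df['d'] = d
print(df['d'].tolist())                # [nan, 10.0, 20.0, nan, nan, nan, 30.0, 4.0]
```

"Nearest" here is taken as the closest preceding row, which matches the expected output in the question (e.g. row 7 picks 4, the most recent of several candidates greater than 2).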
79,158,528
3,967,146
Wrapping class method with correct type hint
<p>I am implementing <code>run</code> in subclasses of <code>ParentClass</code>, which will call <code>run</code> from its <code>__call__</code> method.</p> <p>I want the type hints from <code>Subclass.run</code> to apply to call sites of its <code>__call__</code> method.</p> <p>I would like to define the child classes (i.e. <code>MyClass</code> now):</p> <pre class="lang-py prettyprint-override"><code>class ParentClass: # This class can contain the nasty definitions necessary # to keep child classes nice looking @abstractmethod def run(self): raise NotImplementedError() class MyClass(ParentClass): # I don't want to use `metaclass=MetaClass` here # Define the logic in the method `run(...)`, # signature can vary in different child classes # Trying to avoid the need to use the `__call__` dunder def run(self, a: int = 5) -&gt; int: print(&quot;my&quot;, a) return 1 class MyClass2(ParentClass): def run(self, b: str = &quot;&quot;, c: bool = False) -&gt; int: print(&quot;my2&quot;, b, c) return 2 </code></pre> <p>How I want to use the child classes:</p> <pre class="lang-py prettyprint-override"><code>my = MyClass() my(a=6) # prints `my 6` and returns `1` my2 = MyClass2() my2(b=&quot;example&quot;, c=True) # prints `my2 example True` and returns `2` </code></pre> <p>My reason for the above is to wrap the <code>run</code> methods and do stuff before and after calling it.</p> <p>I've tried the following:</p> <pre class="lang-py prettyprint-override"><code>from abc import abstractmethod from functools import wraps from typing import Callable, TypeVar, ParamSpec, Self, Any, cast from __future__ import annotations P = ParamSpec(&quot;P&quot;) T = TypeVar(&quot;T&quot;) class ParentMeta(type): def __new__(cls: Self, name: str, bases: tuple, namespace: dict[str, Any]) -&gt; ParentMeta: def wrapper(func: Callable[P, T]) -&gt; Callable[P, T]: @wraps(func) def inner(self, *args: P.args, **kwargs: P.kwargs) -&gt; T: print(&quot;--pre--&quot;) ret = func(self, *args, **kwargs) 
print(&quot;--post--&quot;) return ret return inner if &quot;run&quot; in namespace: namespace[&quot;__call__&quot;] = wrapper(namespace[&quot;run&quot;]) del namespace[&quot;run&quot;] return cast(ParentMeta, super().__new__(cls, name, bases, namespace)) class ParentClass(object, metaclass=ParentMeta): @abstractmethod def run(self): raise NotImplementedError() class MyClass(ParentClass): def run(self, a: int = 5) -&gt; int: print(&quot;my&quot;, a) return 1 my = MyClass() my(a=6) # the problem here is that I lost the parameter/type hint in the IDE (I'm using VSCode) # (so when I'm writing `my(`, I cannot see that it has an argument `a`) </code></pre> <p>If I ditch the <code>run</code> and define the <code>__call__</code> (which I really don't want to), I do get a type hint (as it's resolved from <code>MyClass</code>):</p> <pre class="lang-py prettyprint-override"><code>from abc import abstractmethod from functools import wraps from typing import Callable, TypeVar, ParamSpec, Self, Any, cast from __future__ import annotations P = ParamSpec(&quot;P&quot;) T = TypeVar(&quot;T&quot;) class ParentMeta(type): def __new__(cls: Self, name: str, bases: tuple, namespace: dict[str, Any]) -&gt; ParentMeta: def wrapper(func: Callable[P, T]) -&gt; Callable[P, T]: @wraps(func) def inner(self, *args: P.args, **kwargs: P.kwargs) -&gt; T: print(&quot;--pre--&quot;) ret = func(self, *args, **kwargs) print(&quot;--post--&quot;) return ret return inner if &quot;__call__&quot; in namespace: namespace[&quot;__call__&quot;] = wrapper(namespace[&quot;__call__&quot;]) return cast(ParentMeta, super().__new__(cls, name, bases, namespace)) class ParentClass(object, metaclass=ParentMeta): @abstractmethod def __call__(self): raise NotImplementedError() class MyClass(ParentClass): def __call__(self, a: int = 5) -&gt; int: print(&quot;my&quot;, a) return 1 my = MyClass() my(a=6) # got the hint `(a: int = 5) -&gt; int` when typed </code></pre> <p>Can I write the above such way that I don't have to 
use <code>__call__</code> in <code>MyClass</code> and still get a proper type hint on usage?</p> <p>I'm pretty open to any simpler/more complex solutions and different version of Python (used 3.11.10).</p>
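For what it's worth, a metaclass-free runtime sketch of the same wrapping via `__init_subclass__`. It reproduces the `--pre--`/`--post--` behaviour of the metaclass version above; whether a given IDE then resolves the call signature from the wrapped `run` is an assumption I have not verified against VSCode/Pylance:

```python
from abc import abstractmethod
from functools import wraps

class ParentClass:
    def __init_subclass__(cls, **kwargs):
        super().__init_subclass__(**kwargs)
        run = cls.__dict__.get("run")
        if run is not None:
            @wraps(run)                       # copies run's name/signature metadata
            def call(self, *args, **kw):
                print("--pre--")
                ret = run(self, *args, **kw)
                print("--post--")
                return ret
            cls.__call__ = call

    @abstractmethod
    def run(self):
        raise NotImplementedError()

class MyClass(ParentClass):
    def run(self, a: int = 5) -> int:
        print("my", a)
        return 1

my = MyClass()
print(my(a=6))  # --pre-- / my 6 / --post-- / 1
```

Because `functools.wraps` sets `__wrapped__`, `inspect.signature(MyClass.__call__)` reports `run`'s parameters (`(self, a: int = 5) -> int`), which is what some tooling keys on.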
<python><python-typing><mypy><metaclass>
2024-11-05 09:48:28
1
428
gkrupp
79,158,209
8,040,369
Groupby a df column based on 2 other columns
<p>I have a df which has 3 columns, let's say Region, Country and AREA_CODE.</p> <pre><code>Region  Country  AREA_CODE
===================================
AMER    US       A1
AMER    CANADA   A1
AMER    US       B1
AMER    US       A1
</code></pre> <p>I want to get, as output, the list of AREA_CODE values for each Country under each Region, with 'ALL' as a list value as well. Something like:</p> <pre><code>{
    &quot;AMER&quot;: {
        &quot;US&quot;: [&quot;ALL&quot;, &quot;A1&quot;, &quot;B1&quot;],
        &quot;CANADA&quot;: [&quot;ALL&quot;, &quot;A1&quot;]
    }
}
</code></pre> <p>So far I have tried to group by both the Region and Country columns and then group &amp; agg by AREA_CODE, but it throws an error:</p> <pre><code>df.drop_duplicates().groupby([&quot;Region&quot;, &quot;Country&quot;]).groupby(&quot;Country&quot;)['AREA_CODE'].agg(lambda x: [&quot;ALL&quot;]+sorted(x.unique().tolist())).to_dict()
</code></pre> <p>Could someone kindly help me with this?</p> <p>Thanks,</p>
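Independent of the pandas incantation, the target nested-dict shape can be built with a plain `defaultdict` pass — shown here as a sketch over the example rows (the pandas route would be a single `groupby(['Region', 'Country'])` feeding the same dict comprehension, but that part is untested here):

```python
from collections import defaultdict

# Same rows as the example frame above.
rows = [
    ("AMER", "US", "A1"),
    ("AMER", "CANADA", "A1"),
    ("AMER", "US", "B1"),
    ("AMER", "US", "A1"),
]

# region -> country -> set of unique area codes (the set deduplicates)
tree = defaultdict(lambda: defaultdict(set))
for region, country, code in rows:
    tree[region][country].add(code)

result = {
    region: {country: ["ALL"] + sorted(codes) for country, codes in countries.items()}
    for region, countries in tree.items()
}
print(result)
# {'AMER': {'US': ['ALL', 'A1', 'B1'], 'CANADA': ['ALL', 'A1']}}
```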
<python><python-3.x><pandas>
2024-11-05 08:27:59
2
787
SM079
79,157,907
10,970,202
Jupyter notebook kernel is not connecting to correct python environment
<p>Versions: jupyter_client : 8.6.2, jupyter_core : 5.7.2, jupyter_server : 2.14.2, jupyterlab : 4.2.3</p> <p>Here are the steps I took to create a kernel in my Jupyter notebook.</p> <ol> <li><p>Create a conda env: <code>conda create --name firstEnv</code> <br> <code>conda env list</code> correctly shows firstEnv, and it is located at <code>/Users/username/miniforge3/envs/firstEnv</code></p> </li> <li><p>Add the environment to a kernel:<br> <code>conda install -c anaconda ipykernel</code> <br> <code>python -m ipykernel install --user --name firstEnv --display-name firstEnv</code> <br> Now this environment is correctly shown at the top right of the Jupyter notebook, where you can change your kernels.</p> </li> </ol> <p>But the problem is the Python path:</p> <p><code>conda activate firstEnv</code> &gt; <code>which python</code> from the command line shows <code>/Users/username/miniforge3/envs/und_me/bin/pip</code>, which is different from what I get from the Jupyter notebook. In the notebook with the correct kernel, running <code>print(sys.executable)</code> shows <code>/Users/username/miniforge3/bin/python</code></p> <p>I've followed many resources which seemed to do exactly what I've done above. Any clue as to why I am facing this issue?</p>
<python><jupyter-notebook><anaconda>
2024-11-05 06:37:31
0
5,008
haneulkim
79,157,864
12,466,687
How to correctly extract Numbers from String using regex in Python?
<p>I am trying to extract Numbers from the string only where it <strong>ends with Numbers or Decimals</strong></p> <pre><code>df = pd.DataFrame({'Names': [&quot;Absolute Neutrophil Count&quot;,&quot;Absolute Lymphocyte Count 2.9&quot;, &quot;Absolute Neutrophil Count 10.2&quot;,&quot;ESR (Modified Westergren) 8&quot;, &quot;Free Triiodothyronine (FT3) 3.59&quot;, &quot;Free Triiodothyronine FT4 4.53&quot;]}) df </code></pre> <pre><code> Names 0 Absolute Neutrophil Count 1 Absolute Lymphocyte Count 2.9 2 Absolute Neutrophil Count 10.2 3 ESR (Modified Westergren) 8 4 Free Triiodothyronine (FT3) 3.59 5 Free Triiodothyronine FT4 4.53 </code></pre> <p><strong>Desired Extraction Results:</strong></p> <pre><code>0 Missing/None 1 2.9 2 10.2 3 8 4 3.59 5 4.53 </code></pre> <p>I was trying below code but that is not giving the desired results.</p> <pre><code>df.iloc[:,0].str.extract(r'^(.*?)\s*(\d\.?\d*)?$') # '\d+\.\d+' </code></pre> <pre><code> 0 1 0 Absolute Neutrophil Count NaN 1 Absolute Lymphocyte Count 2.9 2 Absolute Neutrophil Count 1 0.2 3 ESR (Modified Westergren) 8 4 Free Triiodothyronine (FT3) 3.59 5 Free Triiodothyronine FT4 4.53 </code></pre> <p>Please use dataframe form of structure in Answer and <code>.extract</code> otherwise sometimes answers here with <code>re</code> and <code>strings</code> work but when I try to apply them on <code>df</code> then it becomes something else.</p>
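A plain-`re` sketch of a pattern with the desired behaviour: requiring whitespace before the digits plus an end-of-string anchor keeps digits embedded in tokens like `FT3`/`FT4` from matching. The same group should drop into `.str.extract` unchanged, though I have not run it through pandas here:

```python
import re

names = [
    "Absolute Neutrophil Count",
    "Absolute Lymphocyte Count 2.9",
    "Absolute Neutrophil Count 10.2",
    "ESR (Modified Westergren) 8",
    "Free Triiodothyronine (FT3) 3.59",
    "Free Triiodothyronine FT4 4.53",
]

# \s ensures the number is a separate trailing token; the optional
# non-capturing group allows either integers or decimals.
pattern = re.compile(r"\s(\d+(?:\.\d+)?)$")

def trailing_number(s):
    m = pattern.search(s)
    return m.group(1) if m else None

print([trailing_number(s) for s in names])
# [None, '2.9', '10.2', '8', '3.59', '4.53']
```

In DataFrame form that would be `df.iloc[:, 0].str.extract(r'\s(\d+(?:\.\d+)?)$')`; rows with no trailing number should come back as NaN rather than being split like `1` / `0.2`.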
<python><pandas><regex>
2024-11-05 06:15:12
3
2,357
ViSa
79,157,812
6,457,407
Writing asynchronous code without asyncio
<p>I'm trying to write code that is the equivalent of the busy loop</p> <pre><code>async def wait_for_callback():
    while True:
        &lt;do some work&gt;
        if &lt;callback has occurred&gt;:
            return &lt;result&gt;
        await asyncio.sleep(0)
</code></pre> <p>However this is intended to be a library routine, and I do not know which asynchronous library the user is using. I know that I could use <code>anyio.sleep</code> if I believe the user is going to be using <code>trio</code> or <code>asyncio</code>, but I also know those aren't the only asynchronous libraries out there.</p> <p>Is it possible to write the equivalent of <code>await asyncio.sleep(0)</code> while remaining completely agnostic as to which asynchronous library is being used?</p>
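For reference, the trick CPython's own `asyncio.sleep(0)` uses is a bare `yield` inside a `types.coroutine` generator. The sketch below is verified only under asyncio; trio rejects foreign yields, so truly library-agnostic code typically detects the running framework with the third-party `sniffio` package instead (not shown):

```python
import asyncio
import types

@types.coroutine
def _yield_once():
    # A bare yield suspends this coroutine and hands control back to the
    # driving loop; asyncio treats the yielded None as "reschedule me".
    yield

async def wait_for_flag(flag):
    while not flag:              # `flag` is any mutable container
        await _yield_once()
    return flag[0]

async def main():
    flag = []

    async def setter():
        await asyncio.sleep(0.01)
        flag.append("done")

    task = asyncio.create_task(setter())
    result = await wait_for_flag(flag)
    await task
    return result

print(asyncio.run(main()))  # done
```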
<python><asynchronous><python-asyncio>
2024-11-05 05:55:44
1
11,605
Frank Yellin
79,157,734
2,825,403
Poetry version solving is confusing
<p>I'm getting quite confused with Poetry version solving and I have no idea why what I'm doing isn't working.</p> <p>I have a project and I'm adding as a dependency another package from an internal company repository, but I'm constantly getting python version conflicts. Let's say the name of the package I'm trying to add is <code>external-package</code>, it has the following dependencies in its <code>pyproject.toml</code>:</p> <pre><code>[tool.poetry.dependencies] python = &quot;&gt;=3.9,&lt;3.10&quot; requests = &quot;^2.28.2&quot; pandas = &quot;&gt;=1.5.0&quot; requests-toolbelt = &quot;^1.0&quot; # &quot;^0.10.1&quot; pyarrow = &quot;&gt;=16.1.0,&lt;18&quot; polars = &quot;^1.12.0&quot; </code></pre> <p>My new project has the following:</p> <pre><code>[tool.poetry.dependencies] python = &quot;^3.8&quot; pandas = &quot;&gt;=1.5.0&quot; polars = &quot;^1.12.0&quot; pyarrow = &quot;&lt;18.0.0&quot; seaborn = &quot;^0.13.2&quot; matplotlib = &quot;^3.9.2&quot; requests = &quot;^2.28.2&quot; </code></pre> <p>Based on this it would seem to me that these projects' Python requirements should overlap on <code>3.9.x</code>, but this is what I get when I try to add the package:</p> <pre><code>The current project's supported Python range (&gt;=3.8,&lt;4.0) is not compatible with some of the required packages Python requirement: - external-package requires Python &gt;=3.9,&lt;3.10, so it will not be satisfied for Python &gt;=3.8,&lt;3.9 || &gt;=3.10,&lt;4.0 Because no versions of external-package match &gt;0.1.24,&lt;0.2.0 and external-package (0.1.24) requires Python &gt;=3.9,&lt;3.10, external-package is forbidden. So, because mhi-tools depends on external-package (^0.1.24), version solving failed. 
• Check your dependencies Python requirement: The Python requirement can be specified via the `python` or `markers` properties For external-package, a possible solution would be to set the `python` property to &quot;&gt;=3.9,&lt;3.10&quot; </code></pre> <p>The weird thing is that the actual python version in the <code>external-package</code> is the same as what's suggested by poetry, but this still doesn't work. Any idea what's going on here?</p>
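Worth spelling out why the 3.9.x overlap doesn't help: Poetry solves against the project's entire declared range (`^3.8`, i.e. `>=3.8,<4.0`), not just the intersection, so the sub-ranges `>=3.8,<3.9` and `>=3.10,<4.0` that `external-package` cannot satisfy are fatal. Following the suggestion in the error output, narrowing the consuming project's constraint should resolve it (assumption: dropping 3.8 and 3.10+ support is acceptable):

```toml
[tool.poetry.dependencies]
python = ">=3.9,<3.10"  # must be a subset of every dependency's Python range
```

The alternative the error hints at is keeping `^3.8` but attaching `markers` to the dependency so it is only required on the Python versions it supports.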
<python><python-poetry>
2024-11-05 05:16:57
0
4,474
NotAName
79,157,437
5,527,374
How to change timezone of datetime object using timezone abbreviation
<p>I bet this is easy, but darned if I can figure it out. Standard disclaimer that honestly, I've tried to figure this out.</p> <p>How do you change the time zone of a datetime.datetime object by using the abbreviation of the time zone? For example, if I have an object that has a timezone of EST, how do I get that same datetime for PST?</p> <p>How would you do something like this:</p> <pre><code>from datetime import datetime

now = datetime.now().astimezone()
pst = your_magic_function(now, 'PST')
</code></pre>
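A sketch of what `your_magic_function` could look like with the stdlib `zoneinfo` module (Python 3.9+). `zoneinfo` only accepts IANA names, and abbreviations are ambiguous (there are several "CST"s), so the mapping dict here is a hand-maintained assumption:

```python
from datetime import datetime
from zoneinfo import ZoneInfo

# Hand-picked mapping -- an assumption, since abbreviations are ambiguous
# and not resolvable by the tz database itself.
ABBREV_TO_ZONE = {
    "EST": "America/New_York",
    "PST": "America/Los_Angeles",
}

def to_abbrev_zone(dt, abbrev):
    """Return the same instant expressed in the zone behind `abbrev`."""
    return dt.astimezone(ZoneInfo(ABBREV_TO_ZONE[abbrev]))

eastern = datetime(2024, 11, 5, 12, 0, tzinfo=ZoneInfo("America/New_York"))
print(to_abbrev_zone(eastern, "PST"))  # 2024-11-05 09:00:00-08:00
```

`astimezone` preserves the instant, so the two datetimes compare equal; only the wall-clock representation changes.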
<python><datetime>
2024-11-05 01:30:37
1
925
tscheingeld
79,157,290
9,108,781
How to interact with Transcend consent manager shadow DOM using Selenium?
<p>I'm trying to automate interaction with a GoFundMe page's privacy consent dialog managed by Transcend. URL: <a href="https://www.gofundme.com/f/10yr-old-pitt-baby-who-needs-emergency-surgery" rel="nofollow noreferrer">https://www.gofundme.com/f/10yr-old-pitt-baby-who-needs-emergency-surgery</a></p> <p>Specifically, I need to:</p> <ol> <li>Check the &quot;Do not sell/share my information&quot; checkbox</li> <li>Click the &quot;Confirm&quot; button</li> </ol> <p>The elements are within a shadow DOM attached to a div with id=&quot;transcend-consent-manager&quot;.</p> <p>Here's my current code:</p> <pre><code>try: # Wait for the consent manager to be present manager = WebDriverWait(driver, 10).until( EC.presence_of_element_located((By.ID, &quot;transcend-consent-manager&quot;)) ) print(&quot;Found consent manager&quot;) # Wait and try to interact using JavaScript js_code = &quot;&quot;&quot; function waitForShadowRoot(callback, maxAttempts = 10) { let attempts = 0; const check = () =&gt; { attempts++; const manager = document.getElementById('transcend-consent-manager'); const root = manager ? 
manager.shadowRoot : null; if (root) { callback(root); return; } if (attempts &lt; maxAttempts) { setTimeout(check, 1000); } }; check(); } return new Promise((resolve) =&gt; { waitForShadowRoot((root) =&gt; { console.log('Found shadow root'); const checkbox = root.querySelector('input[type=&quot;checkbox&quot;]'); if (checkbox) { checkbox.click(); console.log('Clicked checkbox'); setTimeout(() =&gt; { const button = root.querySelector('button'); if (button) { button.click(); console.log('Clicked button'); resolve(true); } else { resolve(false); } }, 1000); } else { resolve(false); } }); }); &quot;&quot;&quot; print(&quot;Executing JavaScript to interact with shadow DOM...&quot;) result = driver.execute_async_script(js_code) if result: print(&quot;Successfully clicked elements&quot;) return True else: print(&quot;Failed to find or click elements&quot;) # Print debug info debug_js = &quot;&quot;&quot; const manager = document.getElementById('transcend-consent-manager'); return { manager: !!manager, shadowRoot: manager ? !!manager.shadowRoot : false, innerHTML: manager &amp;&amp; manager.shadowRoot ? manager.shadowRoot.innerHTML : 'No content' } &quot;&quot;&quot; debug_info = driver.execute_script(debug_js) print(&quot;Debug info:&quot;, debug_info) return False except Exception as e: print(f&quot;Error: {str(e)}&quot;) return False Url: https://www.gofundme.com/f/10yr-old-pitt-baby-who-needs-emergency-surgery </code></pre> <p><a href="https://i.sstatic.net/9nVQiMIK.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/9nVQiMIK.png" alt="Here is the image" /></a></p>
<python><selenium-webdriver><selenium-chromedriver><transcend-consent-management>
2024-11-04 23:34:32
1
943
Victor Wang
79,157,273
825,227
How to create a datetimeindex from integer date and time columns in Pandas
<p>I have a dataframe with columns corresponding to days since 1/1/1900 (<code>date</code>) and seconds as part of a 24h day (<code>time</code>).</p> <pre><code> date time 0 40603 34222 1 40603 34223 2 40603 34224 3 40603 34225 4 40603 34226 5 40603 34227 6 40603 34228 7 40603 34229 </code></pre> <p>What's the easiest way to create a more standard datetimeindex from this for using to create an index for the dataframe (with endgoal being to use the index to resample the dataframe).</p> <p>Desired output would look something like this:</p> <pre><code> date time idx 0 40603 34222 2011-03-03 09:30:02 1 40603 34223 2011-03-03 09:30:03 2 40603 34224 2011-03-03 09:30:04 3 40603 34225 2011-03-03 09:30:05 4 40603 34226 2011-03-03 09:30:06 5 40603 34227 2011-03-03 09:30:07 6 40603 34228 2011-03-03 09:30:08 7 40603 34229 2011-03-03 09:30:09 </code></pre> <p>Have tried versions of adding days since 1900 via <code>timedelta</code> but it feels way more complicated than it should be given my inputs (although maybe that's just working with dates in Python in general).</p> <p>Most recent attempt:</p> <pre><code>df['ref_date'] = datetime.date(1900, 1, 1) + datetime.timedelta(df.date) + datetime.timedelta(seconds=df.time) </code></pre> <p>Returns <code>TypeError: unsupported type for timedelta seconds component: Series</code></p>
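Treating day 0 as 1900-01-01 itself reproduces the 2011-03-03 date in the question; here is a stdlib sketch of the per-row arithmetic:

```python
from datetime import datetime, timedelta

# Assumption: day 0 maps to 1900-01-01 itself, which makes 40603 land on
# 2011-03-03 as in the desired output.
EPOCH = datetime(1900, 1, 1)

def to_timestamp(days, seconds):
    """Combine a day serial and seconds-past-midnight into one datetime."""
    return EPOCH + timedelta(days=days, seconds=seconds)

print(to_timestamp(40603, 34222))  # 2011-03-03 09:30:22
```

(Note 34222 s works out to 09:30:22, so the 09:30:02 in the desired output looks like a typo.) The vectorized pandas equivalent would presumably be `pd.to_datetime(df['date'], unit='D', origin='1900-01-01') + pd.to_timedelta(df['time'], unit='s')`, assigned as the index — untested here, so treat it as a sketch.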
<python><pandas><dataframe><datetime>
2024-11-04 23:22:36
2
1,702
Chris
79,157,219
12,152,992
Use Bayesian PyMC linear model on out-of-sample data
<p>I am trying to fit a linear model to data using Bayesian inference technique. For this, I thought of using PyMC. Naturally, after training a model, I want to test its performance on new data and that's where the problem occurs. I don't seem to be able to set a new dataset. Anybody with experience?</p> <p>Example script that shows the error:</p> <pre><code>import numpy as np import pymc as pm import arviz as az import matplotlib.pyplot as plt def run_model(): # Generate synthetic data np.random.seed(42) x = np.linspace(0, 10, 100) a_true = 2.5 # True slope b_true = 1.0 # True intercept y_true = a_true * x + b_true y = y_true + np.random.normal(0, 1, size=x.size) # Add some noise # Split into training and test sets x_train, x_test = x[:80], x[80:] y_train, y_test = y[:80], y[80:] # Define and fit the model with pm.Model() as linear_model: # Define x as a pm.Data variable to allow updating with pm.set_data x_shared = pm.Data(&quot;x&quot;, x_train) # Priors for slope and intercept a = pm.Normal(&quot;a&quot;, mu=0, sigma=10) b = pm.Normal(&quot;b&quot;, mu=0, sigma=10) sigma = pm.HalfNormal(&quot;sigma&quot;, sigma=1) # Expected value of y mu = a * x_shared + b # Likelihood y_obs = pm.Normal(&quot;y_obs&quot;, mu=mu, sigma=sigma, observed=y_train) # Sample from the posterior trace = pm.sample(1000, tune=1000, return_inferencedata=True, chains=1) # Predict on training data with linear_model: pm.set_data({&quot;x&quot;: x_train}) # Update data to training post_pred_train = pm.sample_posterior_predictive(trace) # Predict on test data with linear_model: pm.set_data({&quot;x&quot;: x_test}) # Update data to testing post_pred_test = pm.sample_posterior_predictive(trace) # Plot results plt.figure(figsize=(10, 5)) # Plot training data plt.scatter(x_train, y_train, c=&quot;blue&quot;, label=&quot;Training data&quot;) plt.plot(x_train, y_true[:80], &quot;k--&quot;, label=&quot;True function&quot;) # Plot posterior predictive for training data plt.plot( x_train, 
post_pred_train[&quot;y_obs&quot;].mean(axis=0), label=&quot;Posterior predictive (train)&quot;, color=&quot;red&quot;, ) # Plot test data plt.scatter(x_test, y_test, c=&quot;green&quot;, label=&quot;Test data&quot;) # Plot posterior predictive for test data plt.plot( x_test, post_pred_test[&quot;y_obs&quot;].mean(axis=0), label=&quot;Posterior predictive (test)&quot;, color=&quot;orange&quot;, ) plt.legend() plt.xlabel(&quot;x&quot;) plt.ylabel(&quot;y&quot;) plt.title(&quot;Bayesian Linear Regression with PyMC&quot;) plt.show() # Summary of the model parameters print(az.summary(trace, var_names=[&quot;a&quot;, &quot;b&quot;, &quot;sigma&quot;])) # Only execute if run as the main module if __name__ == '__main__': run_model() </code></pre> <p>Error that results:</p> <pre><code>ValueError: shape mismatch: objects cannot be broadcast to a single shape. Mismatch is between arg 0 with shape (80,) and arg 1 with shape (20,). Apply node that caused the error: normal_rv{&quot;(),()-&gt;()&quot;}(RNG(&lt;Generator(PCG64) at 0x1F23323B5A0&gt;), [80], Composite{((i0 * i1) + i2)}.0, ExpandDims{axis=0}.0) Toposort index: 4 Inputs types: [RandomGeneratorType, TensorType(int64, shape=(1,)), TensorType(float64, shape=(None,)), TensorType(float64, shape=(1,))] Inputs shapes: ['No shapes', (1,), (20,), (1,)] Inputs strides: ['No strides', (8,), (8,), (8,)] Inputs values: [Generator(PCG64) at 0x1F23323B5A0, array([80], dtype=int64), 'not shown', array([0.97974278])] Outputs clients: [[output[1](normal_rv{&quot;(),()-&gt;()&quot;}.0)], [output[0](y_obs)]] </code></pre>
<python><bayesian><pymc3><pymc>
2024-11-04 22:56:32
1
1,267
Matthi9000
79,157,183
3,247,471
Base class properties and methods from a library class are not being inherited into my subclass
<p>I can't seem to properly inherit a base class into my custom class, in this case, the <a href="https://github.com/python-zeroconf/python-zeroconf/blob/master/src/zeroconf/_services/info.py" rel="nofollow noreferrer">ServiceInfo</a> class from the <a href="https://github.com/python-zeroconf/python-zeroconf" rel="nofollow noreferrer">zeroconf library</a> into my custom MdnsServiceInfo class. While testing out workarounds I created a local clone of the class, imported it, and it works (but not optimal), and when importing the original class from the zeroconf library, it breaks.</p> <p>My custom ServiceInfo class (added one extra parameter, <code>zc</code>):</p> <pre><code>class MdnsAsyncServiceInfo(ServiceInfo): def __init__(self, zc:'Zeroconf', *args, **kwargs): super().__init__(*args, **kwargs) self._zc = zc </code></pre> <p>I cloned the ServiceInfo class locally (copied/pasted from the zeroconf info.py file, only difference is the import paths).</p> <p>original: <code>from .._cache import DNSCache</code> vs clone: <code>from zeroconf._cache import DNSCache</code>, etc</p> <p>How the clone class is imported</p> <pre><code>from mdns_discovery.service_info_base import ServiceInfo </code></pre> <p>With the cloned import, all the methods and properties I need are available.</p> <p>But when I import from the zeroconf library directly (which is what I'm trying to achieve)</p> <pre><code>from zeroconf import ServiceInfo </code></pre> <p>It doesn't pull in the properties or methods and gives errors like these:</p> <p><code>AttributeError: 'MdnsAsyncServiceInfo' object has no attribute '_get_initial_delay'</code></p> <p>Just some extra info in case it's relevant, this is the relative path of the original class in my setup (<code>out/py</code> is my Python environment):</p> <p><code>/home/ubuntu24/myproject/out/py/lib/python3.12/site-packages/zeroconf/_services/info.py</code></p> <p>Is this a common problem in Python?</p>
<python><inheritance>
2024-11-04 22:32:10
0
1,126
Raul Marquez
79,157,168
16,611,809
Does polars not distinguish between tuple and list?
<p>For Pandas I sometimes cast nested lists to tuples e.g. to be able to drop duplicates (being aware that order of the elements would matter). For Polars there does not seem to be a difference between lists and tuples. I can't find anymore info on this. Could someone elaborate this a little?</p> <pre><code>import polars as pl dftuple_pl = pl.DataFrame({&quot;col1&quot;: [(&quot;a&quot;, &quot;a&quot;), (&quot;a&quot;, &quot;a&quot;)], &quot;col2&quot;: [(&quot;b&quot;, &quot;b&quot;), (&quot;b&quot;, &quot;b&quot;)]}) dflist_pl = pl.DataFrame({&quot;col1&quot;: [[&quot;a&quot;, &quot;a&quot;], [&quot;a&quot;, &quot;a&quot;]], &quot;col2&quot;: [[&quot;b&quot;, &quot;b&quot;], [&quot;b&quot;, &quot;b&quot;]]}) print(dftuple_pl.equals(dflist_pl)) # True print(dftuple_pl.unique()) # shape: (1, 2) # ┌────────────┬────────────┐ # │ col1 ┆ col2 │ # │ --- ┆ --- │ # │ list[str] ┆ list[str] │ # ╞════════════╪════════════╡ # │ [&quot;a&quot;, &quot;a&quot;] ┆ [&quot;b&quot;, &quot;b&quot;] │ # └────────────┴────────────┘ print(dflist_pl.unique()) # shape: (1, 2) # ┌────────────┬────────────┐ # │ col1 ┆ col2 │ # │ --- ┆ --- │ # │ list[str] ┆ list[str] │ # ╞════════════╪════════════╡ # │ [&quot;a&quot;, &quot;a&quot;] ┆ [&quot;b&quot;, &quot;b&quot;] │ # └────────────┴────────────┘ </code></pre> <pre><code>import pandas as pd dftuple_pd = pd.DataFrame({&quot;col1&quot;: [(&quot;a&quot;, &quot;a&quot;), (&quot;a&quot;, &quot;a&quot;)], &quot;col2&quot;: [(&quot;b&quot;, &quot;b&quot;), (&quot;b&quot;, &quot;b&quot;)]}) dflist_pd = pd.DataFrame({&quot;col1&quot;: [[&quot;a&quot;, &quot;a&quot;], [&quot;a&quot;, &quot;a&quot;]], &quot;col2&quot;: [[&quot;b&quot;, &quot;b&quot;], [&quot;b&quot;, &quot;b&quot;]]}) print(dftuple_pd.equals(dflist_pd)) # False print(dftuple_pd.drop_duplicates()) # col1 col2 # 0 (a, a) (b, b) print(dflist_pd.drop_duplicates()) # TypeError: unhashable type: 'list' </code></pre> <p>So, is it useless (or maybe even impossible) to cast columns to tuples 
in Polars (e.g. using <code>.map_elements(tuple)</code>)?</p>
<python><tuples><python-polars>
2024-11-04 22:20:53
1
627
gernophil
79,157,113
1,931,605
Faster RCNN with Pytorch Lightning not showing better results mAP
<p>I've tried to fine-tune the model for binary class object detection Following is the code</p> <p>There are only single class in COCO dataset with label 1 to classify that as object.</p> <p>Tensorboard showing poor results. Can someone help me understand why its not getting better results. Even YOLO shows better than this. I think i'm making some mistake here</p> <p>Recall and Precision is negative here.</p> <pre class="lang-py prettyprint-override"><code>import numpy as np from torchmetrics.detection import IntersectionOverUnion from torchmetrics.detection import MeanAveragePrecision import math class CocoDNN(L.LightningModule): def __init__(self): super().__init__() self.model = models.detection.fasterrcnn_mobilenet_v3_large_fpn(weights=&quot;DEFAULT&quot;) self.metric = MeanAveragePrecision(iou_type=&quot;bbox&quot;,average=&quot;macro&quot;,class_metrics = True, iou_thresholds=[0.5, 0.75],extended_summary=True, backend=&quot;faster_coco_eval&quot;) def forward(self, images, targets=None): return self.model(images, targets) def training_step(self, batch, batch_idx): imgs, annot = batch batch_losses = [] for img_b, annot_b in zip(imgs, annot): #print(len(img_b), len(annot_b)) if len(img_b) == 0: continue loss_dict = self.model(img_b, annot_b) losses = sum(loss for loss in loss_dict.values()) #print(losses) batch_losses.append(losses) batch_mean = torch.mean(torch.stack(batch_losses)) self.log('train_loss', batch_mean, on_step=True, on_epoch=True, prog_bar=True, logger=True) return batch_mean def validation_step(self, batch, batch_idx): imgs, annot = batch targets ,preds = [], [] for img_b, annot_b in zip(imgs, annot): if len(img_b) == 0: continue if len(annot_b)&gt; 1: targets.extend(annot_b) else: targets.append(annot_b[0]) loss_dict = self.model(img_b, annot_b) if len(loss_dict)&gt; 1: preds.extend(loss_dict) else: preds.append(loss_dict[0]) self.metric.update(preds, targets) map_results = self.metric.compute() self.log('precision', 
map_results['precision'].mean().float().item(),on_step=True, on_epoch=True, prog_bar=True, logger=True) self.log('recall', map_results['recall'].mean().float().item(),on_step=True, on_epoch=True, prog_bar=True, logger=True) self.log('map_50', map_results['map_50'].float().item(),on_step=True, on_epoch=True, prog_bar=True, logger=True) self.log('map_75', map_results['map_75'].float().item(),on_step=True, on_epoch=True, prog_bar=True, logger=True) return map_results['map_75'] def configure_optimizers(self): return optim.SGD(self.parameters(), lr=0.001, momentum=0.9, weight_decay=0.0005) </code></pre> <p>Trainer code</p> <pre class="lang-py prettyprint-override"><code>dnn = CocoDNN() #version=1, logger = TensorBoardLogger(save_dir=os.getcwd(), name=&quot;runs&quot;) #limit_train_batches=100, trainer = L.Trainer(max_epochs=100,accelerator='gpu',logger=logger,log_every_n_steps=50) trainer.fit(model=dnn, train_dataloaders=TRAIN_DATALOADER,val_dataloaders=VAL_DATALOADER) </code></pre> <p><a href="https://i.sstatic.net/82qOg5JT.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/82qOg5JT.png" alt="enter image description here" /></a></p> <p><a href="https://i.sstatic.net/nSxR7NlP.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/nSxR7NlP.png" alt="enter image description here" /></a></p>
<python><deep-learning><pytorch><pytorch-lightning>
2024-11-04 21:54:53
0
10,425
Shan Khan