| QuestionId | UserId | QuestionTitle | QuestionBody | Tags | CreationDate | AnswerCount | UserExpertiseLevel | UserDisplayName |
|---|---|---|---|---|---|---|---|---|
75,987,396
| 7,991,581
|
Debug python coredump with python callstack
|
<p>I'm currently trying to debug a complex multi-threaded Python script (running under Python 3.10.10) that sometimes crashes in a production environment.</p>
<p>I can't reproduce the bug in a development environment, and the issue seems to come from low-level C libraries, so I need to inspect a coredump to understand what is happening.</p>
<p>Following this link <a href="https://developers.redhat.com/articles/2021/09/08/debugging-python-c-extensions-gdb#" rel="nofollow noreferrer">Debugging Python C extensions with GDB</a>, I need a Python binary compiled with debug symbols, as well as the Python gdb extensions, in order to find where in the script the issue occurs.</p>
<hr />
<ul>
<li><strong>Compiling a debug python version</strong></li>
</ul>
<pre><code>$ wget -c https://www.python.org/ftp/python/3.10.10/Python-3.10.10.tar.xz
$ tar -Jxvf Python-3.10.10.tar.xz
$ mkdir Python-3.10.10/build
$ cd Python-3.10.10/build
$ ../configure --with-pydebug --prefix=~/python_installs/python-3.10.10-debug
$ make EXTRA_CFLAGS="-DPy_REF_DEBUG"
$ make install
$ mkdir ~/python_installs/python-3.10.10-debug/share/gdb/auto-load/usr/bin/
$ cp build/python-gdb.py ~/python_installs/python-3.10.10-debug/share/gdb/auto-load/usr/bin/python3.10-gdb.py
</code></pre>
<hr />
<ul>
<li><strong>Reproducing the link example</strong></li>
</ul>
<p>With the debug version of Python, I'm able to reproduce the minimal example using the given <code>slow.py</code> script, although I have to manually load the <code>python3.10-gdb.py</code> file into gdb with <code>source ~/python_installs/python-3.10.10-debug/share/gdb/auto-load/usr/bin/python3.10-gdb.py</code> for the <code>py-*</code> commands to be available in gdb.</p>
<pre><code>$ gdb -args ~/python_installs/python-3.10.10-debug/bin/python3.10d slow.py
(gdb) source ~/python_installs/python-3.10.10-debug/share/gdb/auto-load/usr/bin/python3.10-gdb.py
(gdb) run
Starting program: /home/user/python_installs/python-3.10.10-debug/bin/python3.10d slow.py
[Thread debugging using libthread_db enabled]
Using host libthread_db library "/lib/x86_64-linux-gnu/libthread_db.so.1".
Slow function...
^C
Program received signal SIGINT, Interrupt.
0x00007ffff7d6f196 in __GI___select (nfds=nfds@entry=0, readfds=readfds@entry=0x0, writefds=writefds@entry=0x0, exceptfds=exceptfds@entry=0x0, timeout=timeout@entry=0x7fffffffdbc0) at ../sysdeps/unix/sysv/linux/select.c:41
(gdb) py-bt
Traceback (most recent call first):
<built-in method sleep of module object at remote 0x7ffff7a62450>
File "/home/user/python_installs/slow.py", line 5, in slow_function
time.sleep(60 * 10)
File "/home/user/python_installs/slow.py", line 6, in <module>
slow_function()
</code></pre>
<p>I can also successfully obtain the Python binary's backtrace with the <code>bt</code> command.</p>
<hr />
<ul>
<li><strong>Debugging the coredump file</strong></li>
</ul>
<p>In order to analyze the issue, I configured the system to dump core files on crashes and modified the script's launch command to use the new debug build of Python.</p>
<p>After a while the issue occurred and I could retrieve the generated coredump file.</p>
<p>I tried the same approach, but using the coredump instead of running the script directly in gdb.</p>
<ul>
<li>First try</li>
</ul>
<pre><code>$ gdb ~/python_installs/python-3.10.10-debug/bin/python3.10d 1681210136_python3.10d_157077.coredump
Reading symbols from /home/user/python_installs/python-3.10.10-debug/bin/python3.10d...
Illegal process-id: 1681210136_python3.10d_157077.coredump.
warning: Can't open file /dev/shm/DumZfE (deleted) during file-backed mapping note processing
warning: Can't open file /dev/shm/PDGPyD (deleted) during file-backed mapping note processing
[...]
[New LWP 157077]
[New LWP 157082]
[New LWP 162323]
[New LWP 157087]
[New LWP 162439]
[Thread debugging using libthread_db enabled]
Using host libthread_db library "/lib/x86_64-linux-gnu/libthread_db.so.1".
Core was generated by `/home/user/python_installs/python-3.10.10-debug/bin/python3.10d /home/user/'.
Program terminated with signal SIGABRT, Aborted.
#0 __GI_raise (sig=sig@entry=6) at ../sysdeps/unix/sysv/linux/raise.c:50
50 ../sysdeps/unix/sysv/linux/raise.c: No such file or directory.
(gdb) source ~/python_installs/python-3.10.10-debug/share/gdb/auto-load/usr/bin/python3.10-gdb.py
(gdb) bt
#0 __GI_raise (sig=sig@entry=6) at ../sysdeps/unix/sysv/linux/raise.c:50
#1 0x00007f7860199537 in __GI_abort () at abort.c:79
#2 0x000056504ac04b4c in fatal_error_exit (status=-1) at ../Python/pylifecycle.c:2553
#3 _Py_FatalErrorFormat (func=func@entry=0x56504adfb040 <__func__.18> "_enter_buffered_busy", format=format@entry=0x56504adfac80 "could not acquire lock for %s at interpreter shutdown, possibly due to daemon threads") at ../Python/pylifecycle.c:2760
#4 0x000056504ac8863b in _enter_buffered_busy (self=self@entry=0x7f785ff99190) at ../Modules/_io/bufferedio.c:294
#5 0x000056504ac8b5c9 in buffered_flush (self=0x7f785ff99190, args=<optimized out>) at ../Modules/_io/bufferedio.c:825
#6 0x000056504acc3641 in method_vectorcall_NOARGS (func=<method_descriptor at remote 0x7f785ff36ab0>, args=0x7ffc58ee14f8, nargsf=<optimized out>, kwnames=<optimized out>) at ../Objects/descrobject.c:432
#7 0x000056504ab020f5 in _PyObject_VectorcallTstate (tstate=0x56504cebdfb0, callable=<method_descriptor at remote 0x7f785ff36ab0>, args=0x7ffc58ee14f8, nargsf=1, kwnames=0x0) at ../Include/cpython/abstract.h:114
#8 0x000056504ab0226e in PyObject_VectorcallMethod (name=name@entry='flush', args=args@entry=0x7ffc58ee14f8, nargsf=<optimized out>, nargsf@entry=9223372036854775809, kwnames=kwnames@entry=0x0) at ../Objects/call.c:770
#9 0x000056504ac90b15 in _PyObject_VectorcallMethodId (kwnames=0x0, nargsf=9223372036854775809, args=0x7ffc58ee14f8, name=0x56504aed53c0 <PyId_flush>) at ../Include/cpython/abstract.h:233
#10 _PyObject_CallMethodIdNoArgs (name=0x56504aed53c0 <PyId_flush>, self=<optimized out>) at ../Include/cpython/abstract.h:239
#11 _io_TextIOWrapper_flush_impl (self=0x7f78600ca7b0) at ../Modules/_io/textio.c:3044
#12 0x000056504ac90c0d in _io_TextIOWrapper_flush (self=<optimized out>, _unused_ignored=<optimized out>) at ../Modules/_io/clinic/textio.c.h:655
#13 0x000056504acc3641 in method_vectorcall_NOARGS (func=<method_descriptor at remote 0x7f785ff386b0>, args=0x7ffc58ee1608, nargsf=<optimized out>, kwnames=<optimized out>) at ../Objects/descrobject.c:432
#14 0x000056504ab020f5 in _PyObject_VectorcallTstate (tstate=0x56504cebdfb0, callable=<method_descriptor at remote 0x7f785ff386b0>, args=0x7ffc58ee1608, nargsf=1, kwnames=0x0) at ../Include/cpython/abstract.h:114
#15 0x000056504ab0226e in PyObject_VectorcallMethod (name=<optimized out>, args=args@entry=0x7ffc58ee1608, nargsf=<optimized out>, nargsf@entry=9223372036854775809, kwnames=kwnames@entry=0x0) at ../Objects/call.c:770
#16 0x000056504ac013ea in _PyObject_VectorcallMethodId (kwnames=0x0, nargsf=9223372036854775809, args=0x7ffc58ee1608, name=0x56504aec49f0 <PyId_flush>) at ../Include/cpython/abstract.h:233
#17 _PyObject_CallMethodIdNoArgs (name=0x56504aec49f0 <PyId_flush>, self=<_io.TextIOWrapper at remote 0x7f78600ca7b0>) at ../Include/cpython/abstract.h:239
#18 flush_std_files () at ../Python/pylifecycle.c:1597
#19 0x000056504ac0491e in fatal_error (fd=fd@entry=2, header=header@entry=0, prefix=prefix@entry=0x0, msg=msg@entry=0x0, status=status@entry=-1) at ../Python/pylifecycle.c:2727
#20 0x000056504ac04b47 in _Py_FatalErrorFormat (func=func@entry=0x56504adfb040 <__func__.18> "_enter_buffered_busy", format=format@entry=0x56504adfac80 "could not acquire lock for %s at interpreter shutdown, possibly due to daemon threads") at ../Python/pylifecycle.c:2784
#21 0x000056504ac8863b in _enter_buffered_busy (self=self@entry=0x7f785ff99190) at ../Modules/_io/bufferedio.c:294
#22 0x000056504ac8b5c9 in buffered_flush (self=0x7f785ff99190, args=<optimized out>) at ../Modules/_io/bufferedio.c:825
#23 0x000056504acc3641 in method_vectorcall_NOARGS (func=<method_descriptor at remote 0x7f785ff36ab0>, args=0x7ffc58ee1878, nargsf=<optimized out>, kwnames=<optimized out>) at ../Objects/descrobject.c:432
#24 0x000056504ab020f5 in _PyObject_VectorcallTstate (tstate=0x56504cebdfb0, callable=<method_descriptor at remote 0x7f785ff36ab0>, args=0x7ffc58ee1878, nargsf=1, kwnames=0x0) at ../Include/cpython/abstract.h:114
#25 0x000056504ab0226e in PyObject_VectorcallMethod (name=name@entry='flush', args=args@entry=0x7ffc58ee1878, nargsf=<optimized out>, nargsf@entry=9223372036854775809, kwnames=kwnames@entry=0x0) at ../Objects/call.c:770
#26 0x000056504ac90b15 in _PyObject_VectorcallMethodId (kwnames=0x0, nargsf=9223372036854775809, args=0x7ffc58ee1878, name=0x56504aed53c0 <PyId_flush>) at ../Include/cpython/abstract.h:233
#27 _PyObject_CallMethodIdNoArgs (name=0x56504aed53c0 <PyId_flush>, self=<optimized out>) at ../Include/cpython/abstract.h:239
#28 _io_TextIOWrapper_flush_impl (self=0x7f78600ca7b0) at ../Modules/_io/textio.c:3044
#29 0x000056504ac90c0d in _io_TextIOWrapper_flush (self=<optimized out>, _unused_ignored=<optimized out>) at ../Modules/_io/clinic/textio.c.h:655
#30 0x000056504acc3641 in method_vectorcall_NOARGS (func=<method_descriptor at remote 0x7f785ff386b0>, args=0x7ffc58ee1988, nargsf=<optimized out>, kwnames=<optimized out>) at ../Objects/descrobject.c:432
#31 0x000056504ab020f5 in _PyObject_VectorcallTstate (tstate=0x56504cebdfb0, callable=<method_descriptor at remote 0x7f785ff386b0>, args=0x7ffc58ee1988, nargsf=1, kwnames=0x0) at ../Include/cpython/abstract.h:114
#32 0x000056504ab0226e in PyObject_VectorcallMethod (name=<optimized out>, args=args@entry=0x7ffc58ee1988, nargsf=<optimized out>, nargsf@entry=9223372036854775809, kwnames=kwnames@entry=0x0) at ../Objects/call.c:770
#33 0x000056504ac013ea in _PyObject_VectorcallMethodId (kwnames=0x0, nargsf=9223372036854775809, args=0x7ffc58ee1988, name=0x56504aec49f0 <PyId_flush>) at ../Include/cpython/abstract.h:233
#34 _PyObject_CallMethodIdNoArgs (name=0x56504aec49f0 <PyId_flush>, self=<_io.TextIOWrapper at remote 0x7f78600ca7b0>) at ../Include/cpython/abstract.h:239
#35 flush_std_files () at ../Python/pylifecycle.c:1597
#36 0x000056504ac04515 in Py_FinalizeEx () at ../Python/pylifecycle.c:1762
#37 0x000056504aaf2c26 in Py_RunMain () at ../Modules/main.c:668
#38 0x000056504aaf2c76 in pymain_main (args=args@entry=0x7ffc58ee1a30) at ../Modules/main.c:696
#39 0x000056504aaf2cf2 in Py_BytesMain (argc=<optimized out>, argv=<optimized out>) at ../Modules/main.c:720
#40 0x000056504aaf16fe in main (argc=<optimized out>, argv=<optimized out>) at ../Programs/python.c:15
(gdb) py-bt
Unable to locate python frame
</code></pre>
<p>Using the coredump file I can successfully obtain the Python binary's backtrace; however, I can't get the script's backtrace, even though the <code>py-bt</code> command is recognized.</p>
<p>I thought the issue might be that I wasn't passing the script when calling gdb, so I made a second try.</p>
<ul>
<li>Second try</li>
</ul>
<p>This time I launched gdb as I did with the <code>slow.py</code> script, and I loaded the coredump once inside gdb.</p>
<pre><code>$ gdb --args /home/user/python_installs/python-3.10.10-debug/bin/python3.10d my_complex_script.py
Reading symbols from /home/user/python_installs/python-3.10.10-debug/bin/python3.10d...
(gdb) core 1681210136_python3.10d_157077.coredump
[...]
(gdb) source ~/python_installs/python-3.10.10-debug/share/gdb/auto-load/usr/bin/python3.10-gdb.py
(gdb) bt
#0 __GI_raise (sig=sig@entry=6) at ../sysdeps/unix/sysv/linux/raise.c:50
#1 0x00007f7860199537 in __GI_abort () at abort.c:79
[...]
#40 0x000056504aaf16fe in main (argc=<optimized out>, argv=<optimized out>) at ../Programs/python.c:15
(gdb) py-bt
Unable to locate python frame
</code></pre>
<p>With this approach I get the same result as with the first one.</p>
<p>I can't find a way to get the script backtrace when using a coredump file.</p>
<hr />
<ul>
<li><strong>Questions</strong></li>
</ul>
<p>Is it possible to get the Python script's backtrace from a coredump?</p>
<p>If not, do I have to run the script within gdb and wait for the error to occur?</p>
<p>Is there a better method than gdb for debugging complex Python scripts?</p>
|
<python><c><linux><gdb><coredump>
|
2023-04-11 15:03:15
| 0
| 924
|
Arkaik
|
75,987,346
| 17,388,934
|
Unable to get coloured column header to excel for multiple pandas dataframes
|
<p>I want to write multiple dataframes to Excel and also add color to the column headers.
I have written the code below to achieve this; however, it colors only the column headers of the first dataframe, not the others.</p>
<pre><code>import numpy as np
import pandas as pd

# example data frame
df = pd.DataFrame(np.random.randn(15).reshape(5, 3))

# adding background_gradient style to df
styled_df = (df.style
               .background_gradient(subset=[0, 1],  # background_gradient method
                                    low=0, high=0.5,
                                    cmap="YlGnBu"))

df2 = pd.DataFrame({'feature 1': ['cat 1', 'cat 2', 'cat 3', 'cat 4'],
                    'feature 2': [400, 300, 200, 100],
                    'feature 3': [400, 300, 200, 100],
                    'feature 4': [400, 300, 200, 100]})

# using the function below to add color and write multiple dataframes to excel
def multiple_dfs(df_list, sheets, file_name, spaces):
    writer = pd.ExcelWriter(file_name, engine='xlsxwriter')
    row = 0
    for dataframe in df_list:
        dataframe.to_excel(writer, sheet_name=sheets, startrow=row, startcol=0)
        workbook = writer.book
        worksheet = writer.sheets[sheets]
        # Add a header format.
        header_format = workbook.add_format({
            'bold': True,
            'text_wrap': True,
            'fg_color': '#D7E4BC',
            'border': 1})
        # Write the column headers with the defined format.
        for col_num, value in enumerate(dataframe.columns.values):
            worksheet.write(0, col_num + 1, value, header_format)
        row = row + len(dataframe.index) + spaces + 2
    writer.save()

dfs = [styled_df, df2]
multiple_dfs(dfs, 'Sheet1', 'output.xlsx', 1)
</code></pre>
<p><a href="https://i.sstatic.net/0PqvF.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/0PqvF.png" alt="screenshot of output.xlsx" /></a></p>
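<p>My current guess at the cause (unverified): the header loop always writes at worksheet row 0, so the headers of every later frame land on top of the first frame's headers. A row-aware sketch of the function (my naming, keeping the rest of the logic as above):</p>

```python
import os
import tempfile

import numpy as np
import pandas as pd

def multiple_dfs_fixed(df_list, sheet, file_name, spaces):
    # Same structure as the function above, but headers are written at
    # each frame's own start row instead of always at worksheet row 0.
    with pd.ExcelWriter(file_name, engine="xlsxwriter") as writer:
        header_format = writer.book.add_format({
            'bold': True, 'text_wrap': True,
            'fg_color': '#D7E4BC', 'border': 1})
        row = 0
        for dataframe in df_list:
            dataframe.to_excel(writer, sheet_name=sheet, startrow=row, startcol=0)
            worksheet = writer.sheets[sheet]
            for col_num, value in enumerate(dataframe.columns.values):
                # column 0 holds the index written by to_excel, hence col_num + 1
                worksheet.write(row, col_num + 1, value, header_format)
            row += len(dataframe.index) + spaces + 2

out_path = os.path.join(tempfile.mkdtemp(), 'output.xlsx')
multiple_dfs_fixed([pd.DataFrame({'a': [1, 2], 'b': [3, 4]}),
                    pd.DataFrame({'feature 1': ['cat 1', 'cat 2'],
                                  'feature 2': [400, 300]})],
                   'Sheet1', out_path, 1)
out_size = os.path.getsize(out_path)
```

<p>(The <code>with pd.ExcelWriter(...)</code> block closes the file on exit, so no explicit <code>writer.save()</code> is needed.)</p>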
|
<python><excel><pandas><dataframe>
|
2023-04-11 14:59:00
| 0
| 319
|
be_real
|
75,987,248
| 18,253,588
|
Black images or memory issue with Hugging Face StableDiffusion pipeline, M1 Pro, PyTorch
|
<p>So I'm making a project for a school that offers image generation using Stable Diffusion. It was working perfectly fine until I upgraded the PyTorch version for the "stabilityai/stable-diffusion-x4-upscaler" model. Since then, all the other images generated are black because they get flagged as NSFW.</p>
<p>The code hasn't changed, but I have updated macOS to 13.3.1 recently, and I have tested many different versions of PyTorch, Diffusers, and Transformers, but I can't get it to work as it did before. Since the code didn't change, the problem has to do with the package versions.</p>
<p>With the latest PyTorch 2.0 I am able to generate working images, but I cannot use <code>torch_dtype=torch.float16</code> in the pipeline since it's not supported, and I now seem to be getting the following out-of-memory error:</p>
<p><code>RuntimeError: MPS backend out of memory (MPS allocated: 18.04 GB, other allocations: 94.99 MB, max allowed: 18.13 GB). Tried to allocate 4.00 KB on private pool. Use PYTORCH_MPS_HIGH_WATERMARK_RATIO=0.0 to disable upper limit for memory allocations (may cause system failure). INFO: Stopping reloader process [15702]</code></p>
<p>These are the models and pipelines I used.</p>
<pre class="lang-py prettyprint-override"><code>main_model_id = "runwayml/stable-diffusion-v1-5"
inpainting_model_id = "runwayml/stable-diffusion-inpainting"
upscaler_model_id = "stabilityai/stable-diffusion-x4-upscaler"

text2imgPipe = StableDiffusionPipeline.from_pretrained(
    main_model_id, torch_dtype=torch.float16).to(device)
text2imgPipe.enable_attention_slicing()

img2imgPipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    main_model_id, torch_dtype=torch.float16).to(device)
img2imgPipe.enable_attention_slicing()

inpaintingPipe = StableDiffusionInpaintPipeline.from_pretrained(
    inpainting_model_id, torch_dtype=torch.float16).to(device)
inpaintingPipe.enable_attention_slicing()

upscalerPipe = StableDiffusionUpscalePipeline.from_pretrained(
    upscaler_model_id, torch_dtype=torch.float16).to(device)
upscalerPipe.enable_attention_slicing()
</code></pre>
<p>These are the current package versions I have.</p>
<pre class="lang-bash prettyprint-override"><code>transformers 4.26.0
torchaudio 2.0.0
torchvision 0.14.1
pytorch 2.0.0
diffusers 0.12.0
huggingface-hub 0.13.4
python 3.10.10
</code></pre>
|
<python><pytorch><apple-m1><huggingface-transformers><huggingface>
|
2023-04-11 14:49:48
| 1
| 395
|
itsDanial
|
75,987,105
| 2,228,592
|
Django Returning Items outside filter when filtering by time
|
<p>I have a snippet of code designed to grab the entries that are &lt;24 hours old.</p>
<pre class="lang-py prettyprint-override"><code>yesterday = datetime.now() - timedelta(days=1)
items = get_model(name=p).objects.filter(tagid=tag.id, t_stamp__gt=yesterday).order_by('t_stamp')
for i in items:
    print(f'{i.t_stamp} - {i.t_stamp > yesterday}')
print(yesterday)
</code></pre>
<p>Yet for some reason, it returns 2 items, when it should return 0.</p>
<p>Results of the print statement above:</p>
<pre><code>2023-04-06 13:12:54.540000 - False
2023-04-06 14:12:46.976000 - False
2023-04-10 10:06:27.526066
</code></pre>
<p>As you can see, the timestamp is NOT greater than <code>yesterday</code>, so why is it being returned?</p>
<hr />
<h1>Edit</h1>
<p>After further troubleshooting, I found the issue: since <code>t_stamp</code> was a custom field that converted a timestamp to a <code>datetime</code> object, I needed to override <code>get_prep_value()</code> to convert it back into an int properly.</p>
|
<python><django>
|
2023-04-11 14:35:18
| 1
| 9,345
|
cclloyd
|
75,987,044
| 5,409,315
|
How to best use non-concurrent futures?
|
<p>In my program</p>
<ol>
<li>Objects are created expensively,</li>
<li>later modified expensively, and</li>
<li>finally, objects are used to generate debugging output in a very distant area of the code</li>
</ol>
<p>My main goal is to execute the creation and modification only when debugging. To keep the code cleaner, I want to avoid passing the same condition to all three places and instead make sure that the object is only created and modified if it is actually used (as opposed to when it is supposed to be used, because that creates potential for code rot).</p>
<p>One way of doing that would be to use <code>concurrent.futures</code>. However, that comes with the overhead of creating a thread / process, which would have to be paid even when not debugging. I want to avoid that (and I can live without the upside of a speed-up, because I would only benefit when debugging, i.e. when expectations on speed are relaxed anyway).</p>
<p>My idea is to use a "non-concurrent future", which would have little overhead on creation and modification and would only cost if actually used. It would be relatively simple to derive a <code>SynchronousExecutor</code> from <code>concurrent.futures.Executor</code> - would that be my best option?</p>
<p>One alternative I can see is working with generators, but that gets unwieldy a. because they are aimed at sequences where I need a single object and b. because the object is modified later on.</p>
<p>Another idea would be a dedicated proxy-class. That would be more involved than generators on generation, but transparent on use. Only downside: it doesn't work with overloaded operators.</p>
<pre><code>#!Python
from pathlib import Path  # arbitrary class to try out functionality
from typing import Any

class Delayed:
    def __init__(self, f, args=(), kwargs=None):
        # default to immutable/fresh containers to avoid shared state
        self.__o = None
        self.f = f
        self.args = args
        self.kwargs = kwargs if kwargs is not None else {}

    def __getattr__(self, __name: str) -> Any:
        if self.__o is None:
            f = self.f
            args = self.args
            kwargs = self.kwargs
            del self.f
            del self.args
            del self.kwargs
            self.__o = f(*args, **kwargs)
        return self.__o.__getattribute__(__name)

p = Delayed(Path, ('.',))
p.parent
p.absolute()
p / 'foo'  # error: operators bypass __getattr__
</code></pre>
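<p>For concreteness, here is a sketch of the "non-concurrent future" option I have in mind: a <code>concurrent.futures.Future</code> subclass that defers the call until the first <code>result()</code> and runs it in the calling thread (my own naming; not hardened against edge cases such as cancellation):</p>

```python
from concurrent.futures import Future

class LazyFuture(Future):
    """A Future that evaluates fn(*args, **kwargs) on first .result()."""

    def __init__(self, fn, *args, **kwargs):
        super().__init__()
        self._lazy_call = (fn, args, kwargs)

    def result(self, timeout=None):
        if self._lazy_call is not None:
            fn, args, kwargs = self._lazy_call
            self._lazy_call = None  # run at most once
            try:
                self.set_result(fn(*args, **kwargs))
            except BaseException as exc:
                self.set_exception(exc)
        return super().result(timeout)

# usage: creation is cheap; the expensive work only happens if used
calls = []
f = LazyFuture(lambda: calls.append("ran") or 42)
assert calls == []        # nothing computed yet
assert f.result() == 42   # computed here, in the calling thread
assert calls == ["ran"]   # and only once; later .result() calls are cached
```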
<p>Which would be the cleanest option?</p>
|
<python><concurrent.futures>
|
2023-04-11 14:27:52
| 0
| 604
|
Jann Poppinga
|
75,986,929
| 1,978,146
|
Mean of classes by groups of rows and columns
|
<p>I have a Pandas dataframe with a class <code>iv</code> (column <code>league</code>) that holds sequences <code>A</code> and <code>B</code> (column <code>team</code>) with different features (<code>score1</code> and <code>score2</code>). I would like to obtain a dataframe with the mean of each of the features.</p>
<pre><code>import pandas as pd

df = pd.DataFrame({
    'league': ['iv', 'iv', 'iv', 'iv', 'iv', 'iv', 'iv'],
    'team': ['A', 'A', 'A', 'B', 'B', 'B', 'B'],
    'score1': [1, 1, 1, 2, 2, 2, 2],
    'score2': [2, 2, 2, 4, 4, 4, 4]})
</code></pre>
<p><a href="https://i.sstatic.net/zjfuf.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/zjfuf.png" alt="enter image description here" /></a></p>
<p>So for <code>score1</code>, the element-wise mean of <code>A</code> and <code>B</code> (sequences of different lengths) would be <code>[1.5, 1.5, 1.5, 2]</code>, and for <code>score2</code> it would be <code>[3, 3, 3, 4]</code>.</p>
<p>The result would also need to include the column <code>league</code>, as in the real case I have more than one <code>league</code> class (e.g. <code>iv</code>, <code>v</code>, etc.).</p>
<pre><code>df_result = pd.DataFrame({'league': ['iv', 'iv', 'iv', 'iv'],
                          'score1_grouped_mean': [1.5, 1.5, 1.5, 2],
                          'score2_grouped_mean': [3, 3, 3, 4]})
</code></pre>
<p><a href="https://i.sstatic.net/iLmmw.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/iLmmw.png" alt="enter image description here" /></a></p>
<p>So, in the end, I only keep <code>league</code> and the mean of the <code>score*</code> columns.</p>
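<p>One way I can see to express this (assuming rows should be aligned by their position within each <code>team</code>, which is what the expected output implies): number each row within its <code>(league, team)</code> group with <code>cumcount()</code>, then average across teams per position.</p>

```python
import pandas as pd

df = pd.DataFrame({
    'league': ['iv', 'iv', 'iv', 'iv', 'iv', 'iv', 'iv'],
    'team': ['A', 'A', 'A', 'B', 'B', 'B', 'B'],
    'score1': [1, 1, 1, 2, 2, 2, 2],
    'score2': [2, 2, 2, 4, 4, 4, 4]})

# position of each row within its (league, team) sequence
pos = df.groupby(['league', 'team']).cumcount()

# mean across teams, per league and per position in the sequence
df_result = (df.assign(pos=pos)
               .groupby(['league', 'pos'], as_index=False)[['score1', 'score2']]
               .mean()
               .drop(columns='pos')
               .rename(columns={'score1': 'score1_grouped_mean',
                                'score2': 'score2_grouped_mean'}))
```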
|
<python><pandas>
|
2023-04-11 14:15:59
| 1
| 383
|
Noque
|
75,986,870
| 14,015,493
|
Docker compose usage of external firebird service
|
<p>The architecture of the app I am planning to create consists of a React, a Python, and a Firebird component. The React part will serve as the front-end, and the Python part will be the back-end, which communicates with the Firebird DB. The React and Python components will run in Docker and have to communicate with a Firebird server, which runs locally on 127.0.0.1.</p>
<p>The current docker-compose file looks like the following:</p>
<pre class="lang-yaml prettyprint-override"><code>version: "3.9"
services:
  gui:
    build:
      context: gui
    restart: always
    ports:
      - 80:80
    depends_on:
      - api
  api:
    build:
      context: api
    restart: always
    ports:
      - 8000:8000
      - 3050:3050
    volumes:
      - C:/Program Files/Firebird/Firebird_2_5/examples/empbuild:/app/api/empbuild:rw
</code></pre>
<p>For connecting to the local Firebird server, the following code can be used:</p>
<pre class="lang-py prettyprint-override"><code>import firebirdsql

conn = firebirdsql.connect(
    host='127.0.0.1',
    database='...',
    port=3050,
    user='sysdba',
    password='masterkey'
)
</code></pre>
<p>So, from within the Docker container, I need to establish a connection to 127.0.0.1:3050 on the host. Currently I'm getting the following error:</p>
<pre><code>ConnectionRefusedError: [Errno 111] Connection refused
</code></pre>
<p>This is because the Firebird service runs outside the container and is not mapped into the Docker containers' network. How can I establish this connection so that those two components communicate with the local Firebird server?</p>
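<p>For reference, the pattern I have seen suggested (not yet verified in this setup) is to reach the host through <code>host.docker.internal</code> rather than <code>127.0.0.1</code>, which inside a container refers to the container itself. On Linux this name has to be mapped explicitly via <code>extra_hosts</code>; the <code>3050:3050</code> mapping on <code>api</code> would also be dropped, since the Firebird server is not inside that container:</p>

```yaml
# sketch of the api service only; assumes the Firebird server listens on
# an interface reachable from the Docker bridge, not only on 127.0.0.1
api:
  build:
    context: api
  restart: always
  ports:
    - 8000:8000
  extra_hosts:
    - "host.docker.internal:host-gateway"
```

<p>Correspondingly, the back-end would use <code>firebirdsql.connect(host='host.docker.internal', ...)</code> instead of <code>127.0.0.1</code>. This is an assumption on my part, not confirmed against this exact setup.</p>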
|
<python><docker><docker-compose><firebird>
|
2023-04-11 14:09:13
| 1
| 313
|
kaiserm99
|
75,986,866
| 3,190,076
|
How to test for race conditions on Pandas DataFrames?
|
<p>I would like to use <a href="https://schedule.readthedocs.io/en/stable/" rel="nofollow noreferrer">schedule</a> to run some functions every x seconds. The functions modify a global DataFrame. I know that Pandas is not thread-safe, so I have added a lock to each function call to mitigate that.
The code below (a minimal example) works as expected, but I am not sure how to check that this code can never run into a race condition.</p>
<p>Can anyone suggest how to properly test this? Or can I simply assume the code is thread-safe since it uses <code>with lock</code>?</p>
<pre><code>import schedule
import time, datetime
import pandas as pd
from threading import Lock

data = [(3, 5, 7), (2, 4, 6), (5, 8, 9)]
df = pd.DataFrame(data, columns=['A', 'B', 'C'])
lock = Lock()

def job(lock):
    global df
    with lock:
        df = pd.concat([df, df.iloc[0:2]])
        print('J1', datetime.datetime.utcnow(), len(df), df.A.sum())

def job2(lock):
    global df
    with lock:
        df = pd.concat([df, df.iloc[0:2]])
        print('J2', datetime.datetime.utcnow(), len(df), df.A.sum())

schedule.every(0.75).seconds.do(job, lock=lock)
schedule.every(0.25).seconds.do(job2, lock=lock)

while True:
    schedule.run_pending()
    time.sleep(0.1)
</code></pre>
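<p>One deterministic check I can think of (a sketch, not a proof of thread safety): hammer the same read-modify-rebind pattern from several plain threads and assert that no update is lost. With the lock held around each rebinding of the global, the final length is exact; passing once does not prove the absence of races, but a lost update would be caught:</p>

```python
import threading

import pandas as pd

data = [(3, 5, 7), (2, 4, 6), (5, 8, 9)]
df = pd.DataFrame(data, columns=['A', 'B', 'C'])
lock = threading.Lock()

def append_rows(n):
    global df
    for _ in range(n):
        with lock:
            # read-modify-rebind of the global, guarded by the lock
            df = pd.concat([df, df.iloc[0:1]])

threads = [threading.Thread(target=append_rows, args=(200,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

# each of the 800 appends added exactly one row; none were lost
assert len(df) == 3 + 4 * 200
```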
|
<python><pandas><schedule>
|
2023-04-11 14:09:00
| 0
| 10,889
|
alec_djinn
|
75,986,854
| 10,791,217
|
Permission Denied while using Shutil
|
<p>I am moving files around using the following script and am getting Permission Denied on random folders. This is a SharePoint site that is synced with my OneDrive and therefore appears in my File Explorer. A few of the files one step above in the folder structure work, but this particular folder does not. I can also access the files manually just fine.</p>
<p>Any thoughts?</p>
<pre><code>import os
import shutil
import pptx

root_path = r'C:\Users\xyz\company'
downloads_path = r'C:\Users\xyz\Downloads'

# loop through each directory in the root directory
for dir_path, dir_names, file_names in os.walk(root_path):
    # navigate to the "Key Process Workshops" folder if it exists
    if "Key Process Workshops" in dir_names:
        kp_path = os.path.join(dir_path, "Key Process Workshops")
        # loop through each PowerPoint file in the "Key Process Workshops" folder
        for file_name in os.listdir(kp_path):
            file_path = os.path.join(kp_path, file_name)
            # check if the length of the file path is greater than 255 characters
            if len(file_path) > 255:
                # create a copy of the file in the downloads folder
                new_file_path = os.path.join(downloads_path, file_name)
                #shutil.copyfile(file_path, new_file_path)
                #copy_file = os.path.basename(file_path)
                shutil.copyfile("\\\\?\\" + file_path, new_file_path)
                # update the file_path variable to use the copy
                file_path = new_file_path
            if file_path.endswith(".pptx"):
                # extract tables from each slide
                tables = []
                prs = pptx.Presentation(file_path)
                for slide in prs.slides:
                    tables += extract_tables(slide)
                # concatenate tables and add the result to the table_results list
                result = concatenate_tables(tables)
                if result:
                    table_results += result
</code></pre>
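<p>For what it's worth, a small helper I am considering (illustrative only, names are my own): apply the long-path prefix to <em>both</em> ends of the copy and log which file fails instead of aborting, since with OneDrive "Files On-Demand" a <code>PermissionError</code> can also mean the file is cloud-only rather than locked:</p>

```python
import os
import shutil

def try_copy(src, dst):
    """Copy with the Windows long-path prefix on both paths.

    Returns True on success; on failure, logs the offending file and
    returns False so a batch run can continue past it.
    """
    src = "\\\\?\\" + os.path.abspath(src)
    dst = "\\\\?\\" + os.path.abspath(dst)
    try:
        shutil.copyfile(src, dst)
        return True
    except OSError as exc:  # PermissionError is a subclass of OSError
        print(f"skipped {src}: {exc}")
        return False

# a missing source is reported and skipped rather than crashing the loop
ok = try_copy('definitely_missing_file_xyz.pptx', 'copy_of_missing.pptx')
```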
|
<python><python-3.x><shutil>
|
2023-04-11 14:07:44
| 0
| 720
|
RCarmody
|
75,986,836
| 15,673,412
|
python - find coincidences within a certain time window
|
<p>I'm struggling with the following task in python:</p>
<p>I have 4 lists of different lengths containing timestamps (float values).</p>
<pre><code>timestamps
[[0.2, 0.6, 1.5, 4.3],
[1.1, 1.4, 3.5, 3.6, 7.9],
[0.1, 0.7, 1.3, 3.7, 12.2, 36.2],
[1.3, 1.9, 3.8, 4.0, 21.7]]
</code></pre>
<p>I want to keep only the timestamps that have a coincidence in at least 3 out of 4 lists. By coincidence I mean "having a point inside a window of arbitrary width (let's say 0.2)".</p>
<p>Therefore, in the above example the only survivors are</p>
<pre><code>[[1.5],
[1.4, 3.6],
[1.3, 3.7],
[1.3, 3.8]]
</code></pre>
<p>Because at the time <code>1.4</code> I have at least three lists that contain an element in the range <code>(1.3, 1.5)</code>, and at the time <code>3.7</code> I have at least three lists that contain an element in the range <code>(3.6, 3.8)</code>.</p>
<p>I've managed to do this with a moving window, and counting for each window center how many lists contain elements in the range. Obviously, this proves to be extremely time consuming, as my lists are really long and spaced out over a wide range.</p>
<p>Does anyone know any other way (possibly numpy powered or with any other tools) to accomplish this in a quick(ish) way?</p>
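<p>One way to avoid scanning window centres entirely is a sort-and-sweep: merge all timestamps into one sorted sequence tagged with the list index, slide a two-pointer window of the given width over it, and whenever the window covers at least three distinct lists, keep every element inside it. This is only a sketch (the function name is invented); note that under the stated rule it also keeps <code>1.1</code> from the second list, since the window (1.1, 1.3) covers three lists, even though the expected output above omits it:</p>

```python
from collections import Counter

def coincident(lists, width=0.2, min_lists=3):
    """Keep, per input list, the timestamps lying inside some window of
    total `width` that covers elements from at least `min_lists` distinct
    lists. Sort-and-sweep: O(n log n) overall instead of one range query
    per candidate window centre."""
    events = sorted((t, i) for i, ts in enumerate(lists) for t in ts)
    keep = [set() for _ in lists]
    in_window = Counter()          # events per source list in current window
    lo = 0
    for hi, (t, i) in enumerate(events):
        in_window[i] += 1
        while t - events[lo][0] > width:       # shrink window from the left
            j = events[lo][1]
            in_window[j] -= 1
            if in_window[j] == 0:
                del in_window[j]
            lo += 1
        if len(in_window) >= min_lists:        # enough distinct lists
            for tt, jj in events[lo:hi + 1]:
                keep[jj].add(tt)
    return [sorted(k) for k in keep]
```

<p>On the example data this reproduces the survivors listed above (plus the extra <code>1.1</code>), in a single pass over the sorted events.</p>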
|
<python><arrays><numpy><numpy-ndarray>
|
2023-04-11 14:06:34
| 0
| 480
|
Sala
|
75,986,798
| 6,372,859
|
Memory allocation error concatenating zeroes to arrays
|
<p>I have a large numpy array whose sub-arrays have different lengths, for example:</p>
<pre><code>[[1,2],[3,4,5,6],[7,8,9]]
</code></pre>
<p>I would like to add zeroes at the end of each sub-array smaller than the largest one, something like:</p>
<pre><code>[[1,2,0,0],[3,4,5,6],[7,8,9,0]]
</code></pre>
<p>To this end, I created the following function to complete the arrays</p>
<pre><code>import numpy as np

def add_zeroes(arr, limit):
    if len(arr) < limit:
        return np.concatenate([arr, np.zeros(limit - len(arr))])
    else:
        return arr
</code></pre>
<p>but when I apply it to my array, which consists of a list of 60578 sub-arrays, I get a MemoryError:</p>
<pre><code>MemoryError: Unable to allocate 8.59 MiB for an array with shape (1126400,) and data type float64
</code></pre>
<p>I am running on a Core i7 Windows 11, with 16Gb RAM.</p>
<p>Is there a workaround (ideally a more Pythonic one) to complete this task?</p>
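<p>If the goal is the padded rectangular array, one memory-friendly sketch (the helper name is invented) allocates the final zeros array once and fills each row in place, instead of calling <code>np.concatenate</code> per sub-array, where each call allocates a fresh array, which appears to be what exhausts memory here:</p>

```python
import numpy as np

def pad_rows(rows):
    """Pad ragged rows with trailing zeros by filling one preallocated
    array, instead of concatenating a fresh array per row."""
    width = max(len(r) for r in rows)
    out = np.zeros((len(rows), width))   # single allocation up front
    for i, r in enumerate(rows):
        out[i, :len(r)] = r
    return out
```

<p>The result is one contiguous <code>60578 × max_len</code> float64 block; if that alone is too large, passing <code>dtype=np.float32</code> to <code>np.zeros</code> roughly halves it.</p>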
|
<python><numpy><out-of-memory><numpy-ndarray>
|
2023-04-11 14:03:12
| 2
| 583
|
Ernesto Lopez Fune
|
75,986,754
| 19,553,193
|
django.db.utils.NotSupportedError: MySQL 8 or later is required (found 5.7.33). in Django
|
<p>I get this error when performing migrations.</p>
<p>I tried this command</p>
<pre><code>python manage.py makemigrations
</code></pre>
<p><strong>but the error persists</strong></p>
<pre><code>django.db.utils.NotSupportedError: MySQL 8 or later is required (found 5.7.33).
</code></pre>
<p>Is there any way to perform the migration, or should I downgrade my Python version to 3.9?</p>
<p><strong>python version</strong></p>
<pre><code>python v.3.10
</code></pre>
|
<python><django>
|
2023-04-11 13:58:29
| 7
| 335
|
marivic valdehueza
|
75,986,745
| 1,689,811
|
python lambda as callback
|
<p>I have been reading an example of Asterisk ARI in Python and could not clearly understand the following code.
Full sample code:</p>
<pre><code>#!/usr/bin/env python
"""Example demonstrating ARI channel origination.
"""
#
# Copyright (c) 2013, Digium, Inc.
#

import requests
import ari
from requests import HTTPError

OUTGOING_ENDPOINT = "SIP/blink"

client = ari.connect('http://localhost:8088/', 'hey', 'peekaboo')

#
# Find (or create) a holding bridge.
#
bridges = [b for b in client.bridges.list()
           if b.json['bridge_type'] == 'holding']
if bridges:
    holding_bridge = bridges[0]
    print "Using bridge %s" % holding_bridge.id
else:
    holding_bridge = client.bridges.create(type='holding')
    print "Created bridge %s" % holding_bridge.id


def safe_hangup(channel):
    """Hangup a channel, ignoring 404 errors.

    :param channel: Channel to hangup.
    """
    try:
        channel.hangup()
    except HTTPError as e:
        # Ignore 404's, since channels can go away before we get to them
        if e.response.status_code != requests.codes.not_found:
            raise


def on_start(incoming, event):
    """Callback for StasisStart events.

    When an incoming channel starts, put it in the holding bridge and
    originate a channel to connect to it. When that channel answers, create a
    bridge and put both of them into it.

    :param incoming:
    :param event:
    """
    # Only process channels with the 'incoming' argument
    if event['args'] != ['incoming']:
        return

    # Answer and put in the holding bridge
    incoming.answer()
    incoming.play(media="sound:pls-wait-connect-call")
    holding_bridge.addChannel(channel=incoming.id)

    # Originate the outgoing channel
    outgoing = client.channels.originate(
        endpoint=OUTGOING_ENDPOINT, app="hello", appArgs="dialed")

    # If the incoming channel ends, hangup the outgoing channel
    incoming.on_event('StasisEnd', lambda *args: safe_hangup(outgoing))
    # and vice versa. If the endpoint rejects the call, it is destroyed
    # without entering Stasis()
    outgoing.on_event('ChannelDestroyed',
                      lambda *args: safe_hangup(incoming))

    def outgoing_on_start(channel, event):
        """Callback for StasisStart events on the outgoing channel

        :param channel: Outgoing channel.
        :param event: Event.
        """
        # Create a bridge, putting both channels into it.
        bridge = client.bridges.create(type='mixing')
        outgoing.answer()
        bridge.addChannel(channel=[incoming.id, outgoing.id])
        # Clean up the bridge when done
        outgoing.on_event('StasisEnd', lambda *args: bridge.destroy())

    outgoing.on_event('StasisStart', outgoing_on_start)


client.on_channel_event('StasisStart', on_start)

# Run the WebSocket
client.run(apps="hello")
</code></pre>
<p>Why in some case it has used lambda as callback like in :</p>
<pre><code>incoming.on_event('StasisEnd', lambda *args: safe_hangup(outgoing))
# and vice versa. If the endpoint rejects the call, it is destroyed
# without entering Stasis()
outgoing.on_event('ChannelDestroyed',
lambda *args: safe_hangup(incoming))
</code></pre>
<p>and in some cases it does not use a lambda, like:</p>
<pre><code>outgoing.on_event('StasisStart', outgoing_on_start)
</code></pre>
<p>Why is it using a lambda in those cases? Why can't we pass <code>safe_hangup</code> directly, like <code>outgoing_on_start</code>?</p>
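<p>The difference is about call signatures and closures. <code>on_event</code> invokes its callback with the event's own arguments (channel, event); <code>outgoing_on_start</code> already has exactly that signature, so it can be passed directly. <code>safe_hangup</code>, however, takes a single channel, and crucially the channel to hang up is the <em>other</em> call leg, not the one the event fired on; the lambda both discards the callback arguments (<code>*args</code>) and captures which channel to act on. A minimal stand-in (not the real ARI library) demonstrating the same shape:</p>

```python
calls = []

def safe_hangup(channel):
    """Stand-in for the real hangup helper; takes a single channel."""
    calls.append(channel)

def fire_event(callback):
    """Stand-in event source: always invokes callbacks as (channel, event)."""
    callback("incoming-channel", {"type": "StasisEnd"})

outgoing = "outgoing-channel"

# safe_hangup has the wrong signature for the event framework *and* must act
# on the other call leg, not the channel the event fired on. The lambda fixes
# both: *args swallows the (channel, event) pair, and the closure captures
# which channel to hang up.
fire_event(lambda *args: safe_hangup(outgoing))
```

<p>Passing <code>safe_hangup</code> directly would make the framework call it with two arguments, and with the wrong channel.</p>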
|
<python><lambda>
|
2023-04-11 13:57:18
| 0
| 334
|
Amir
|
75,986,727
| 532,570
|
Getting the value of a ListItem/ListView in Textualize/Textual
|
<p>I am struggling with something that I can't help but feel is very basic.</p>
<p>I am using the <a href="https://github.com/Textualize/textual" rel="nofollow noreferrer">Textual framework</a>, with python, and am having difficulty getting the Selected value from a ListItem.</p>
<p>In the code below, I have the <code>ListView.Selected</code> and I would like that to appear in the 2nd vertical, but I can't seem to access its value: neither <code>event.item</code> nor <code>event.item.value</code> gives me access to the value (as a string) of that event.</p>
<pre class="lang-py prettyprint-override"><code>from textual.app import App, ComposeResult
from textual.widgets import ListView, ListItem, Label, Footer, Static
from textual.containers import Horizontal, Vertical
articles = ['dog', 'cat', 'piano']
class Reader(App):
BINDINGS = [
("f", "toggle_files", "Toggle Files"),
("q", "quit", "Quit"),
]
def createListItem(items):
listItems = []
for item in items:
listItems.append(ListItem(Label(item)))
return listItems
listItems = createListItem(articles)
def compose(self) -> ComposeResult:
with Horizontal():
with Vertical(classes="column"):
yield ListView(
*self.listItems,
id='Foo',
)
with Vertical(classes="column", id='read-pane'):
yield Static(id='read-panel')
yield Footer()
def on_mount(self) -> None:
self.screen.styles.background = "darkblue"
def on_list_view_selected( self, event: ListView.Selected ) -> None:
"""Called when the user click a file in the ListView.
https://github.com/Textualize/textual/blob/main/examples/code_browser.py
"""
reader_view = self.query_one("#read-panel", Static)
print(event.item)
reader_view.update(event.item)
if __name__ == "__main__":
app = Reader()
app.run()
</code></pre>
|
<python><textual>
|
2023-04-11 13:55:55
| 1
| 582
|
toast
|
75,986,339
| 21,420,742
|
How to get the ID from one dataframe to match the name from another datframe in Python
|
<p>I have two dataframes one has the <strong>ID</strong> and <strong>Name</strong> and one just has the <strong>Name</strong>. I want to match the name to the ID and for those without a ID leave as null.</p>
<p>DF1:</p>
<pre><code>ID Name
1 John Doe
2 Mary Sue
3 Josh Smith
4 Sarah Moore
</code></pre>
<p>DF2:</p>
<pre><code>Name
Sarah Moore
John Doe
Josh Smith
Mary Sue
Josh Gordon
</code></pre>
<p>Desired output:</p>
<pre><code>ID Name
4 Sarah Moore
1 John Doe
3 Josh Smith
2 Mary Sue
NA Josh Gordon
</code></pre>
<p>I've tried merging but have not gotten the results I needed</p>
<pre><code>df2 = df2.merge(df2,df1['ID'], on = 'Name', how = 'left')
</code></pre>
<p>Any suggestions?</p>
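<p><code>DataFrame.merge</code> takes the other frame as its first argument; the attempt above joins <code>df2</code> against itself and passes <code>df1['ID']</code> where a keyword is expected. Assuming the two frames from the example, a plain left merge of <code>df2</code> against <code>df1</code> gives the desired output (rows without a match get NaN in <code>ID</code>):</p>

```python
import pandas as pd

df1 = pd.DataFrame({"ID": [1, 2, 3, 4],
                    "Name": ["John Doe", "Mary Sue", "Josh Smith", "Sarah Moore"]})
df2 = pd.DataFrame({"Name": ["Sarah Moore", "John Doe", "Josh Smith",
                             "Mary Sue", "Josh Gordon"]})

# left-merge df2 against df1 on Name; names missing from df1 get NaN IDs
out = df2.merge(df1, on="Name", how="left")[["ID", "Name"]]
```

<p>Note that <code>ID</code> becomes float64, since integer columns cannot hold NaN by default; <code>out["ID"].astype("Int64")</code> keeps nullable integers if that matters.</p>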
|
<python><python-3.x><pandas><dataframe><numpy>
|
2023-04-11 13:16:09
| 0
| 473
|
Coding_Nubie
|
75,986,315
| 13,224,380
|
Translate SQL to Polars and Pandas
|
<p>Can someone help me translate</p>
<pre class="lang-sql prettyprint-override"><code>SELECT (TRUNC(lat/0.5)*0.5) AS ll_lat, (TRUNC(lon/0.5)*0.5) AS ll_lon, COUNT(lat) AS strikes FROM lightning GROUP BY ll_lat, ll_lon;
</code></pre>
<p>into Pandas and Polars code. I tried</p>
<pre class="lang-py prettyprint-override"><code># Pandas
df['ll_lat'] = (df['lat'] // 0.1 * 0.1).round(1)
df['ll_lon'] = (df['lon'] // 0.1 * 0.1).round(1)
df['temporalBasket'] = df['eventtime'].astype(str).str[:13]
df = df.groupby(['ll_lat', 'll_lon', 'temporalBasket']).agg(strikes=('lat', 'count'))
# 3763740 rows × 1 columns
</code></pre>
<pre class="lang-py prettyprint-override"><code># Polars
df_p_out = df_p.groupby(
(pl.col("lat") // 0.1 * 0.1).alias("ll_lat"),
(pl.col("lon") // 0.1 * 0.1).alias("ll_lon"),
pl.col("eventtime").dt.truncate("1h").alias("temporalBasket"),
).agg(strikes=pl.col("lat").count())
# shape: (3763740, 4)
</code></pre>
<p>as opposed to the SQL result's shape (10168 rows × 3 columns). My DataFrame looks like this:</p>
<p><a href="https://i.sstatic.net/MGIjP.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/MGIjP.png" alt="enter image description here" /></a></p>
|
<python><sql><pandas>
|
2023-04-11 13:13:54
| 1
| 401
|
Curious
|
75,986,302
| 20,220,485
|
What is the most efficient way of checking XML for a set of attributes?
|
<p>I have a folder <code>m_d</code> of very many identically structured XML files that I want to sort. Two examples are below. If any <code>word</code> element of an XML file contains any of the attributes <code>"N", "E", "D", "P", "L"</code>, I want to copy the file to another folder. Only 'xml file 1' would thus be copied.</p>
<p>Is there a more efficient way of doing this?</p>
<p>xml file 1</p>
<pre><code><root>
  <l id="130">
    <word id="0.5" N="I">amet</word>
    <word id="1.0" E="s"/>
    <word id="2.0" f_0="j">consectetur</word>
    <word id="2.1" t_0="a" D="B">adipiscing</word>
    <word id="2.2" P="I">elit</word>
    <word id="2.4" L="B">do</word>
  </l>
</root>
</code></pre>
<p>xml file 2</p>
<pre><code><root>
  <l id="131">
    <word id="2.0" f_0="j">consectetur</word>
    <word id="2.1" t_0="a">adipiscing</word>
  </l>
</root>
</code></pre>
<p>code</p>
<pre><code>import os
from lxml import etree
import shutil

m_d = "/m"
m = os.listdir(m_d)
a_d = "/a"

for xml_file in m:
    if not xml_file.endswith(".xml"):
        continue
    input_dir = os.path.join(m_d, xml_file)
    input_file = open(input_dir, "r", encoding="utf-8")
    output_dir = os.path.join(a_d, xml_file)
    parser = etree.XMLParser(remove_blank_text=True)
    tree = etree.parse(input_file, parser)
    root = tree.getroot()
    targets = ["N", "E", "D", "P", "L"]
    for word in root.findall('.//word'):
        for target in targets:
            if target in word.attrib:
                shutil.copy2(input_dir, output_dir)
            else:
                pass
</code></pre>
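<p>For a faster scan, one sketch is to stream each file with <code>lxml.etree.iterparse</code> and stop at the first matching <code>&lt;word&gt;</code>, rather than building the whole tree and continuing after a hit. Note the loop above also calls <code>shutil.copy2</code> once per matching attribute, re-copying the same file repeatedly. The helper name here is invented:</p>

```python
from lxml import etree

TARGETS = {"N", "E", "D", "P", "L"}

def has_target(path):
    """Stream-parse the file and stop at the first <word> that carries one
    of the target attributes, instead of building the whole tree and
    checking every word."""
    for _, word in etree.iterparse(path, tag="word"):
        if not TARGETS.isdisjoint(word.attrib.keys()):
            return True
        word.clear()               # free elements already inspected
    return False
```

<p>The copy then happens at most once per file, guarded by <code>if has_target(input_dir): shutil.copy2(input_dir, output_dir)</code>; in the original structure, a <code>break</code> after <code>shutil.copy2</code> would at least stop the repeated copies.</p>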
|
<python><xml><lxml>
|
2023-04-11 13:12:57
| 2
| 344
|
doine
|
75,986,167
| 2,082,884
|
Runtime Warning when setting an GObject property on Gobject derived instance in python
|
<p>In my new library I would like to present stimuli to a user for psychological experiments, such as circles and other visual stimuli, or sounds. I would like to base my C library on GLib and GObject, as that makes it relatively easy to create language bindings, and GLib provides many handy algorithms/data structures for my library, so I don't have to write those myself.
Many of my intended clients (researchers in the field of psychology/linguistics) might not be proficient in C, but might be relatively handy with a language such as Python. So I'm using C to meet the requirements of a psychological experimentation toolkit: I would like to know with millisecond precision when a stimulus is presented to the user. For that, C is handy, but a friendly API for psychologists calls for a Python/JavaScript API. GObject aims to create bindings to these GObject-based C libraries via GObject Introspection, and generally I'm really happy with how this process works out.</p>
<p>I'm having some problems when I set a GObject property from Python while Python itself does not hold a reference to the value (or perhaps only one in the return tuple). If I use setter methods, the code runs just fine.
So in the Python fragment below, I experience the problem when I do an assignment like this:</p>
<pre class="lang-py prettyprint-override"><code>instance.prop.property_name = Psy.SomeNewObject()
</code></pre>
<p>Setting the property like this:</p>
<pre class="lang-py prettyprint-override"><code>instance.set_property_name(Psy.SomeNewObject())
</code></pre>
<p>or</p>
<pre class="lang-py prettyprint-override"><code>my_property_value = Psy.SomeNewObject()
instance.prop.property_name = my_property_value
</code></pre>
<p>Works like intended and I don't get any errors/warnings. So a small script below is something that I have in my test directory to see whether the bindings work nicely in python. In the example below I'm creating a Canvas (could be a window, or an offscreen egl managed canvas). I'm creating a Circle object which is drawn on the canvas. Circle is derived from an Abstract class PsyVisualStimulus. Each instance of PsyVisualStimulus has a color property, the color in which we would like to draw the VisualStimulus. The Color, Canvas and VisualStimulus all derive from GObject.</p>
<pre class="lang-py prettyprint-override"><code>#!/usr/bin/env python
import gi
import tempfile
# All object in Psy.* derive from GObject and not GObject.InitiallyUnowned.
# Hence, they cannot have a floating reference. All objects/functions (should) use
# transfer annotations, to determine object lifetime, as is currently advised.
gi.require_version("Psy", "0.1")
from gi.repository import Psy
bg_color = Psy.Color(r=0.5, g=0.5, b=0.5)
fg_color = Psy.Color(r=1.0, g=0.0, b=0.0)
x, y = 0.0, 0.0
radius = 150.0
num_vertices = 100
canvas = Psy.GlCanvas.new(640, 480)
circle = Psy.Circle.new_full(canvas, x, y, num_vertices, radius)
canvas.set_background_color(bg_color) # this seems fine
circle.props.color = fg_color # this is fine as we have an ref in python
# The next line prints a warning in the shell, and I would like to fix this!!
circle.props.color = Psy.Color(r=0.5, g=0.5, b=0.5)
circle.props.color = fg_color
circle.play_for(
canvas.get_time().add(Psy.Duration.new_ms(16)), Psy.Duration.new_ms(50)
) # Draw circle for 50 ms.
canvas.iterate()
image = canvas.get_image()
image.save_path(tempfile.gettempdir() + "/red-circle.png", "png")
</code></pre>
<p>the line <code>circle.props.color = Psy.Color(r=0.5, g=0.5, b=0.5)</code> emits a runtime warning printed to the shell:</p>
<blockquote>
<p>sys:1: RuntimeWarning: Expecting to marshal a borrowed reference for
<Psy.Color object at 0x7f9e0ee26a80 (PsyColor at 0x5616651dba00)>, but
nothing in Python is holding a reference to this object. See:
<a href="https://bugzilla.gnome.org/show_bug.cgi?id=687522" rel="nofollow noreferrer">https://bugzilla.gnome.org/show_bug.cgi?id=687522</a></p>
</blockquote>
<p>The line below works precisely as intended and doesn't produce any warnings, so I must be doing something incorrectly in my C library, and I would like to have it fixed. Following the link in the warning does not bring me to something I understand, and it seems quite old and perhaps outdated.
I'll demonstrate the fragments of my C code that the Python code above runs through in order to set the <strong>color</strong> property of a <strong>PsyVisualStimulus</strong>. The Circle in the fragment above derives from PsyVisualStimulus; as PsyVisualStimulus is abstract, I cannot instantiate it directly.</p>
<pre><code>// header
#define PSY_TYPE_VISUAL_STIMULUS psy_visual_stimulus_get_type()
G_DECLARE_DERIVABLE_TYPE(
    PsyVisualStimulus, psy_visual_stimulus, PSY, VISUAL_STIMULUS, PsyStimulus)

G_MODULE_EXPORT PsyColor *
psy_visual_stimulus_get_color(PsyVisualStimulus *self);

G_MODULE_EXPORT void
psy_visual_stimulus_set_color(PsyVisualStimulus *self, PsyColor *color);

// relevant parts of the implementation.
G_DEFINE_ABSTRACT_TYPE_WITH_PRIVATE(PsyVisualStimulus,
                                    psy_visual_stimulus,
                                    PSY_TYPE_STIMULUS)

static void
psy_visual_stimulus_set_property(GObject      *object,
                                 guint         property_id,
                                 const GValue *value,
                                 GParamSpec   *pspec)
{
    PsyVisualStimulus *self = PSY_VISUAL_STIMULUS(object);

    switch ((VisualStimulusProperty) property_id) {
    // many other properties are omitted from this fragment
    case PROP_COLOR:
        psy_visual_stimulus_set_color(self, g_value_get_object(value));
        break;
    default:
        G_OBJECT_WARN_INVALID_PROPERTY_ID(object, property_id, pspec);
    }
}

static void
psy_visual_stimulus_class_init(PsyVisualStimulusClass *klass)
{
    GObjectClass *object_class = G_OBJECT_CLASS(klass);

    object_class->get_property = psy_visual_stimulus_get_property;
    object_class->set_property = psy_visual_stimulus_set_property;
    object_class->dispose      = psy_visual_stimulus_dispose;

    PsyStimulusClass *stimulus_class = PSY_STIMULUS_CLASS(klass);

    stimulus_class->play         = visual_stimulus_play;
    stimulus_class->set_duration = visual_stimulus_set_duration;

    klass->update = visual_stimulus_update;

    // The other properties are omitted.

    /**
     * PsyVisualStimulus:color
     *
     * The color `PsyColor` used to fill this object with
     */
    visual_stimulus_properties[PROP_COLOR]
        = g_param_spec_object("color",
                              "Color",
                              "The fill color for this stimulus",
                              PSY_TYPE_COLOR,
                              G_PARAM_READWRITE);

    g_object_class_install_properties(
        object_class, NUM_PROPERTIES, visual_stimulus_properties);
}

/**
 * psy_visual_stimulus_get_color:
 * @self: An instance of `PsyVisualStimulus`
 *
 * Get the color of the stimulus.
 *
 * Returns:(transfer none): the `PsyColor` used to fill the stimuli
 */
PsyColor *
psy_visual_stimulus_get_color(PsyVisualStimulus *self)
{
    PsyVisualStimulusPrivate *priv
        = psy_visual_stimulus_get_instance_private(self);

    g_return_val_if_fail(PSY_IS_VISUAL_STIMULUS(self), NULL);

    return priv->color;
}

/**
 * psy_visual_stimulus_set_color:
 * @self: an instance of `PsyVisualStimulus`
 * @color:(transfer none): An instance of `PsyVisualStimulus` that is going to
 * be used in order to fill the shape of the stimulus
 *
 * Set the fill color of the stimulus, this color is used to fill the stimulus
 */
void
psy_visual_stimulus_set_color(PsyVisualStimulus *self, PsyColor *color)
{
    PsyVisualStimulusPrivate *priv
        = psy_visual_stimulus_get_instance_private(self);

    g_return_if_fail(PSY_IS_VISUAL_STIMULUS(self) && PSY_IS_COLOR(color));

    g_clear_object(&priv->color);

    // psy_color_dup creates a deep copy, it calls g_object_new(PSY_TYPE_CIRCLE, NULL) and
    // it does not copy by increasing the ref count. Perhaps this is wrong.
    // PsyColor might be converted to a boxed type as it is quite shallow anyway,
    // but that is another issue.
    priv->color = psy_color_dup(color);
}
</code></pre>
<p>The warning seems to come from: <a href="https://gitlab.gnome.org/GNOME/pygobject/-/blob/3.16.1/gi/pygi-object.c" rel="nofollow noreferrer">https://gitlab.gnome.org/GNOME/pygobject/-/blob/3.16.1/gi/pygi-object.c</a>
line 111</p>
<p>I would really like to fix this warning, so do I perhaps have errors in the implementation of my GObject derived class? Is there some issue with the GObject annotations? I would really appreciate any help I can get.</p>
|
<python><c><pygobject><gobject>
|
2023-04-11 12:57:28
| 0
| 4,441
|
hetepeperfan
|
75,986,114
| 3,840,183
|
ffibuild generate dll with dependencies
|
<p>I'm using CFFI to generate a DLL from Python source code. It works well, but it depends on python3x.dll and needs some .pyd files.
Is it possible to package all dependencies inside the DLL or a static library?</p>
|
<python><dll><dependencies><ffi><cffi>
|
2023-04-11 12:50:05
| 0
| 591
|
Erwan Douaille
|
75,986,062
| 17,596,179
|
Cannot import local module python
|
<p>I am trying to import local modules in python but for some reason it cannot find the modules. I know that you can import them by adding them to your path with sys. But I don't want to use sys for this.
My file structure looks like this</p>
<pre><code>scraper_backend
- jobs
  - extract.py
  - load.py
  - models.py
  - transform.py
  - url_builder.py
main.py
</code></pre>
<p>my main.py looks like this.</p>
<pre><code>from datetime import datetime, timedelta

from scraper_backend.jobs import extract, load, transform


def main():
    # Extract
    wind = extract.extract_wind(datetime.now())
    solar = extract.extract_solar(datetime.now())

    # Transform
    date = extract.round_dt(datetime.now()) - timedelta(minutes=15)
    df = transform.update_file(date, wind, solar)

    # Load
    load.write_to_path(df, "energy.parquet")


main()
</code></pre>
<p>For the moment I'm using</p>
<pre><code>sys.path.append("scraper_backend//jobs")
</code></pre>
<p>But when I remove the sys.path.append it gives me <code>ModuleNotFoundError: No module named 'scraper_backend.jobs'</code>.
Does anyone know what I'm doing wrong?
Thanks for your help.</p>
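<p>Whether <code>from scraper_backend.jobs import extract</code> resolves depends on where the interpreter is launched: the directory <em>containing</em> <code>scraper_backend</code> must be on <code>sys.path</code>, which is normally the working directory of <code>python main.py</code>, so <code>main.py</code> must sit next to <code>scraper_backend</code>, not inside it. A self-contained reproduction (paths invented; the <code>sys.path.insert</code> only simulates "run from the project root"):</p>

```python
import os
import sys
import tempfile

# Recreate the layout in a temp dir: imports like
# `from scraper_backend.jobs import extract` resolve only if the directory
# containing scraper_backend is importable and the folders are packages.
root = tempfile.mkdtemp()
jobs = os.path.join(root, "scraper_backend", "jobs")
os.makedirs(jobs)
# __init__.py marks the directories as regular packages
open(os.path.join(root, "scraper_backend", "__init__.py"), "w").close()
open(os.path.join(jobs, "__init__.py"), "w").close()
with open(os.path.join(jobs, "extract.py"), "w") as f:
    f.write("def extract_wind(ts):\n    return 'wind'\n")

sys.path.insert(0, root)   # simulates launching python from the project root
from scraper_backend.jobs import extract
```

<p>In the real project, no <code>sys.path</code> surgery is needed as long as the script is started from the folder that contains <code>scraper_backend</code>.</p>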
|
<python><python-3.x><import><python-import><modulenotfounderror>
|
2023-04-11 12:45:58
| 1
| 437
|
david backx
|
75,985,838
| 2,593,480
|
How to get RTL (right-to-left) text working in VS Code integrated terminal?
|
<p>Printing in the DEBUG CONSOLE is fine, and everywhere else VS Code displays Hebrew just fine; only when the script prints to the TERMINAL is the Hebrew reversed. Why?
<a href="https://i.sstatic.net/REUYT.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/REUYT.png" alt="enter image description here" /></a></p>
<p><strong>Update:</strong> it works fine when debugging on external terminal</p>
<p><a href="https://i.sstatic.net/WinA1.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/WinA1.png" alt="enter image description here" /></a></p>
|
<python><visual-studio-code><right-to-left><hebrew>
|
2023-04-11 12:20:00
| 2
| 581
|
Yam Shargil
|
75,985,680
| 14,653,659
|
Deactivate some color categories by default
|
<p>Hello, I am creating an offline HTML dashboard that consists of multiple plots.
By default, all group options are active after the plot is created. I would like to manually deactivate some of them. Is there a way to do this?</p>
<p>E.g.
I create a chart with the following code:</p>
<pre><code>fig = px.line(test_df, x="date", y="values", color="Group")
fig
</code></pre>
<p>And the result looks like this (for simplicity I have greatly reduced the amount of data):
<a href="https://i.sstatic.net/ATUMq.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/ATUMq.png" alt="enter image description here" /></a></p>
<p>If I now click on the P the red line disappears. I would like to have this Line deactivated per default.</p>
<p><a href="https://i.sstatic.net/eXhN2.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/eXhN2.png" alt="enter image description here" /></a></p>
<p>The reason I want this is that my plot will have very many groups. While in most cases only a few are relevant, I do not want to remove the option of looking at the less relevant ones from time to time.</p>
<p>Does anyone know how this can be done?</p>
|
<python><plotly>
|
2023-04-11 12:01:31
| 0
| 807
|
Manuel
|
75,985,662
| 3,387,716
|
multiprocessing.Pool(processes=0) not allowed?
|
<p>In a parallelized script I would like to keep the ability to do the processing on a single CPU. To do that, it seems I need to duplicate the code that iterates over the results:</p>
<pre class="lang-py prettyprint-override"><code>#!/usr/bin/env python3
import multiprocessing
# argparsed parameters:
num_cpus = 1
input_filename = 'file.txt'
def worker(line):
'''Do something with a line'''
# dummy processing that requires some CPU time
for x in range(1000000):
line.split()
return line.split()
def data_iter(filename):
'''iterate over the data records'''
with open(filename) as fh:
for line in fh:
yield line
if __name__ == '__main__':
if (num_cpus >= 2):
with multiprocessing.Pool( processes = num_cpus - 1 ) as pool:
for result in pool.imap(worker, data_iter(input_filename)):
# dummy post-processing (copy#1)
print(' '.join(result))
else:
for line in data_iter(input_filename):
result = worker(line)
# dummy post-processing (copy#2)
print(' '.join(result))
</code></pre>
<p><strong>note:</strong> you can use any text file with multiple lines for testing the code.</p>
<p><strong>Is there any valid reason for <code>multiprocessing.Pool</code> not allowing <code>processes = 0</code>? Shouldn't <code>processes = 0</code> just mean to not spawn any additional process, the main one being enough?</strong></p>
<hr />
<h5>POSTSCRIPT</h5>
<p>I thought of one plausible "technical" reason for not allowing it:</p>
<p>The worker processes in the Pool can't modify the environment of the main process, which wouldn't be true if the worker function is run without spawning a new process.</p>
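<p>Whatever the rationale for the lower bound, the duplication itself can be avoided by selecting between <code>pool.imap</code> and the builtin <code>map</code> behind one generator, so the post-processing loop exists only once. A sketch (the function name is invented):</p>

```python
import multiprocessing

def process_all(worker, items, num_cpus):
    """Yield worker(item) in input order. With fewer than two CPUs, fall
    back to the builtin map so no pool (and no duplicated loop) is needed."""
    if num_cpus >= 2:
        with multiprocessing.Pool(processes=num_cpus - 1) as pool:
            yield from pool.imap(worker, items)
    else:
        yield from map(worker, items)
```

<p>The main block then shrinks to a single loop, <code>for result in process_all(worker, data_iter(input_filename), num_cpus): print(' '.join(result))</code>. This sidesteps the <code>processes=0</code> question rather than answering it.</p>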
|
<python><multiprocessing>
|
2023-04-11 11:59:44
| 2
| 17,608
|
Fravadona
|
75,985,647
| 11,106,507
|
What is the most memory efficient way to add multiple columns from different dataframes together?
|
<p>I have some very large dataframes and often run out of memory when I want to combine them via a weighted sum.</p>
<p>So what I am doing now is:</p>
<pre><code>dfA[col] = dfA[col]*wA + dfB[col]*wB + dfC[col]*wC + dfD[col]*wD
</code></pre>
<p>Where <code>wA</code> is the weight of the A-th dataframe.
I run out of memory 4 out of 5 times, so I wonder how I could do this with the smallest possible memory footprint?</p>
<p>I tried adding them one by one.
i.e.</p>
<pre><code>dfA[col] = dfA[col]*wA
dfA[col] += dfB[col]*wB
...
</code></pre>
<p>this ran out of memory 100% of the time for the same dataframes.</p>
<p>Should I iterate over the rows and use <code>iloc</code>?
Do it in numpy and convert back?</p>
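<p>One sketch that keeps the peak allocation to roughly one extra column: pull the first frame's column out as a NumPy buffer, multiply it in place, and add each remaining weighted column into that same buffer with <code>np.add(..., out=...)</code>. The <code>dfA[col] += dfB[col]*wB</code> variant still materializes a full temporary per term, and pandas may make additional copies during the <code>+=</code>; operating on the raw arrays avoids that. The helper name is invented:</p>

```python
import numpy as np
import pandas as pd

def weighted_sum_into_first(frames, weights, col):
    """Accumulate sum(w * df[col]) into the first frame, one term at a time,
    so at most one column-sized temporary exists at any moment."""
    acc = frames[0][col].to_numpy(dtype=float, copy=True)
    acc *= weights[0]                                  # in place, no temporary
    for df, w in zip(frames[1:], weights[1:]):
        np.add(acc, df[col].to_numpy() * w, out=acc)   # reuse the buffer
    frames[0][col] = acc
```

<p>If even one temporary per term is too much, <code>np.multiply(df[col].to_numpy(), w, out=scratch)</code> into a single reused scratch array removes that as well, at the cost of one extra preallocated column.</p>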
|
<python><pandas>
|
2023-04-11 11:58:28
| 3
| 1,166
|
Olli
|
75,985,611
| 1,485,926
|
Get specific tag from repository using PyGithub
|
<p>I have this very simple code using <a href="https://github.com/PyGithub/PyGithub" rel="nofollow noreferrer">PyGithub</a></p>
<pre><code>from github import Github
g = Github('<offuscated token>')
repo = g.get_repo("telefonicaid/fiware-orion")
repo.get_git_tag("3.8.0")
</code></pre>
<p>The repository is public (in fact, maybe the token is not needed...) and the tag exists, it can be checked here: <a href="https://github.com/telefonicaid/fiware-orion/tree/3.8.0" rel="nofollow noreferrer">https://github.com/telefonicaid/fiware-orion/tree/3.8.0</a></p>
<p>However, if I run that code I get at <code>repo.get_git_tag("3.8.0")</code> the following exception:</p>
<pre><code>github.GithubException.UnknownObjectException: 404 {"message": "Not Found", "documentation_url": "https://docs.github.com/rest/reference/git#get-a-tag"}
</code></pre>
<p>Maybe this is not the right way of getting a tag using PyGithub? How should it be done in that case, please?</p>
<p>Thanks in advance!</p>
|
<python><pygithub>
|
2023-04-11 11:53:44
| 2
| 12,442
|
fgalan
|
75,985,596
| 1,000,343
|
Override Theme Coloring for Just One Character (in VS Code using Python)
|
<p>I want to be able to use the terminal to see RGB coloring for just one "swatch" character (■) as this is pretty convenient. I also like the dark themes (In my case I use Dark+ theme). The problem is that I can't see the true color of the swatch because the dark theme overrides it.</p>
<p>In Python terminal <code>print('\x1b[38;2;0;0;0m■\x1b[0m')</code> and in zsh terminal <code>echo '\x1b[38;2;0;0;0m■\x1b[0m'</code> using black coloring (<em>rgb (0,0,0)</em>) both result in a medium gray:
<a href="https://i.sstatic.net/O9l8A.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/O9l8A.png" alt="enter image description here" /></a></p>
<p><a href="https://i.sstatic.net/VzRkB.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/VzRkB.png" alt="enter image description here" /></a></p>
<p>But if I change the theme (in this case Light+ theme)or just go to zsh directly the character prints as black (as expected):</p>
<p><a href="https://i.sstatic.net/KB7s0.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/KB7s0.png" alt="enter image description here" /></a></p>
<p><strong>How can I get this one character (■) to print without the color theme overriding it (and hopefully, not permanently altering the color theme) regardless of the RGB values I pass?</strong> Ideally, I'd want a solution that can work temporarily (on the fly, print, back to set theme) from Python (not sure if this is possible; I do have <code>code</code> cli installed) without overriding the settings permanently. Though, for other future searchers, it may be instructive to see how to permanently override the color theme for just one character.</p>
|
<python><python-3.x><visual-studio-code>
|
2023-04-11 11:51:08
| 2
| 110,512
|
Tyler Rinker
|
75,985,522
| 245,549
|
Is it possible to access the history of calls done by LangChain LLM object to external API?
|
<p>When we create an Agent in LangChain we provide a Large Language Model object (LLM), so that the Agent can make calls to an API provided by OpenAI or any other provider. For example:</p>
<pre><code>llm = OpenAI(temperature=0)
agent = initialize_agent(
[tool_1, tool_2, tool_3],
llm,
agent = 'zero-shot-react-description',
verbose=True
)
</code></pre>
<p>To address a single prompt of a user the agent might make several calls to the external API.</p>
<p>Is there a way to access all the calls made by LLM object to the API?</p>
<p>For example <a href="https://python.langchain.com/en/latest/modules/models/llms/examples/token_usage_tracking.html" rel="noreferrer">here</a> is described a way to get number of tokens in the request and in the response. What I need, instead, is the requests and the responses themselves (and not just number of tokens in them).</p>
|
<python><openai-api><langchain>
|
2023-04-11 11:44:33
| 3
| 132,218
|
Roman
|
75,985,464
| 9,947,412
|
Selenium EdgeDriver options
|
<p>How do I integrate selenium webdriver Edge options into EdgeDriver in python?</p>
<p>I tried to use:</p>
<pre class="lang-py prettyprint-override"><code>from selenium.webdriver.edge.options import Options
# with
from selenium.webdriver import Edge
</code></pre>
<p>PS: I do not want to use another package like <em><a href="https://pypi.org/project/msedge-selenium-tools/" rel="nofollow noreferrer">msedge</a></em> :)</p>
|
<python><selenium-webdriver><selenium-edgedriver>
|
2023-04-11 11:39:01
| 1
| 907
|
PicxyB
|
75,985,455
| 6,936,582
|
Sort dataframe by multiple columns and specify how for each column
|
<p>I'd like to sort this (I have many more columns of different data types in the real df):</p>
<pre><code>import pandas as pd
from natsort import index_natsorted
import numpy as np
data = {"version":["3.1.1","3.1.10","3.1.2","3.1.3", "4.1.6"],
"id":[2,2,2,2,1]}
df = pd.DataFrame(data)
df.sort_values(by=["id","version"], key=lambda x: np.argsort(index_natsorted(df["version"])), ignore_index=True)
version id
3.1.1 2
3.1.2 2
3.1.3 2
3.1.10 2
4.1.6 1
</code></pre>
|
<python><pandas>
|
2023-04-11 11:37:59
| 1
| 2,220
|
Bera
|
75,985,407
| 5,587,736
|
How to access Flask app context from test_client using pytest?
|
<p>I feel like I don't fully understand <code>app_context</code> and its usage. When I turn my app into a test_client, I can't seem to access the fields that are within the apps context, which means I can't test in my unit tests whether the element was successfully added to the state of the app. How do I access the app context from a <code>test_client</code>, as shown below?</p>
<p>Here are minimal sections of the relevant code:</p>
<p>App:</p>
<pre><code>from typing import Dict

from flask import Flask

def make_service():
service = Flask("my_service")
my_content: Dict[str, str] = {}
with service.app_context():
service.my_content = my_content
return service
app = make_service()
@app.route("/test/")
def example_func():
app.my_content["test"] = "testing entry"
</code></pre>
<p>Pytest fixture:</p>
<pre><code>@pytest.fixture(name="test_app")
def test_client():
with app.test_client() as testing_client:
with app.app_context():
yield testing_client
</code></pre>
<p>Pytest test function:</p>
<pre><code>def test_example_func(test_app):
response = test_app.get(
"/test/",
)
assert test_app.my_content["test"] == "testing_entry"
</code></pre>
<p>Error:</p>
<pre><code>AttributeError: 'FlaskClient' object has no attribute 'my_content'
</code></pre>
|
<python><flask><testing><pytest>
|
2023-04-11 11:33:46
| 1
| 697
|
Kroshtan
|
75,985,214
| 1,639,908
|
Why does a "bad format" error occur when importing a key into phantom app?
|
<p>I use <a href="https://solanacookbook.com/references/keypairs-and-wallets.html#how-to-generate-a-mnemonic-phrase" rel="nofollow noreferrer">example in docs solana for python</a></p>
<p>I try get key pair and import to phantom by private key:</p>
<pre><code>from solders.keypair import Keypair
from mnemonic import Mnemonic
mnemo = Mnemonic("english")
words = mnemo.generate(strength=256)
seed = mnemo.to_seed(words)
keypair1 = Keypair.from_bytes(seed)
print("keypair1: {}".format(keypair1))
keypair2 = Keypair()
print("keypair2: {}".format(keypair2))
</code></pre>
<p>Output:</p>
<pre><code>keypair1: 4pqdZVgkCaALGNqvumkcC9gH5ZRj13DupL9go2DfZMhX9c7Mi9cei9AySWHNdcaxf5cnpZ2yyHtc5dvqW9qof2LX
keypair2: 4af72sYDF4ugnoMpNjoyBTmrDpgBKJqNUJGq8Lpy6vsao9kV9nxHtKiExqSwSkgKebJQHMeYWCQbH5JscbckkMck
</code></pre>
<p><strong>Keypair1</strong> was created from the seed phrase, and when I try to import it into the Phantom app I get the "<em>bad format</em>" error.</p>
<p><strong>Keypair2</strong> was created without a seed phrase, and when I try to import it into the Phantom app the import succeeds without any error.</p>
<p>Why can't I import keypair1, and why do I get the "<em>bad format</em>" error?</p>
|
<python><solana>
|
2023-04-11 11:11:54
| 1
| 2,595
|
Leo Loki
|
75,985,210
| 10,413,428
|
Forbid QDialog to be moved by the user
|
<p>I want to prevent the user from moving a displayed custom QDialog. I could not find a solution while googling; maybe "move" is the wrong term, as most results are about mouse move events.</p>
<p>I need to use a non-frameless dialog because I need the close button.</p>
|
<python><qt><pyside><pyside6>
|
2023-04-11 11:11:14
| 0
| 405
|
sebwr
|
75,984,983
| 18,140,022
|
Polars: change a value in a dataframe if a condition is met in another column
|
<p>I have this dataframe</p>
<pre class="lang-py prettyprint-override"><code>import polars as pl
df = pl.from_repr("""
┌─────┬───────┐
│ one ┆ two │
│ --- ┆ --- │
│ str ┆ str │
╞═════╪═══════╡
│ a ┆ hola │
│ b ┆ world │
└─────┴───────┘
""")
</code></pre>
<p>And I want to change <em>hola</em> for <em>hello</em>:</p>
<pre><code>shape: (2, 2)
┌─────┬───────┐
│ one ┆ two │
│ --- ┆ --- │
│ str ┆ str │
╞═════╪═══════╡
│ a ┆ hello │ # <-
│ b ┆ world │
└─────┴───────┘
</code></pre>
<p>How can I change the values of a row based on a condition in another column?</p>
<p>For instance, with PostgreSQL I could do this:</p>
<pre class="lang-sql prettyprint-override"><code>UPDATE my_table SET two = 'hello' WHERE one = 'a';
</code></pre>
<p>Or in Spark</p>
<pre class="lang-py prettyprint-override"><code>my_table.withColumn("two", when(col("one") == "a", "hello"))
</code></pre>
<p>I've tried using <code>with_columns(pl.when(pl.col("one") == "a").then("hello"))</code> but that changes the column "one".</p>
<p>EDIT: I could spin up a SQL context and work my way through via SQL, but there must be a way to achieve this via the Python API.</p>
|
<python><dataframe><python-polars>
|
2023-04-11 10:43:17
| 3
| 405
|
user18140022
|
75,984,943
| 5,573,294
|
In python, replace triple-nested if-else with more elegant way to clean up dataframe columns
|
<pre><code>data = [[1, 2.4, 3, np.nan], [4, 5.3, 6, np.nan], [np.nan, 8, 3, np.nan]] # Example data
output_data = pd.DataFrame(data, columns=['total', 'count1', 'count2', 'count3'])
output_data
total count1 count2 count3
0 1.0 2.4 3 NaN
1 4.0 5.3 6 NaN
2 NaN 8.0 3 NaN
# grab all columns to format
all_cols = ['total', 'count1', 'count2', 'count3', 'count4']
float_cols = ['total'] # dont want to convert these to integer
# add empty cols if they do not exist, and format columns correctly
for col in all_cols:
# add empty col if it didnt exist
if not (col in output_data.columns):
output_data[col] = ''
# else convert to integer-string with NA --> ''
else:
is_row_all_nulls = sum(output_data[col].isnull()) == output_data.shape[0]
if is_row_all_nulls:
output_data[col] = output_data[col].fillna(value = '')
else:
if (col in float_cols):
output_data[col] = output_data[col].fillna('').astype(str)
else:
output_data[col] = output_data[col].fillna('').astype(str).str.split('.').str[0]
</code></pre>
<p><a href="https://i.sstatic.net/mJMD2m.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/mJMD2m.png" alt="enter image description here" /></a></p>
<p>This code gives us exactly the output that we want. Columns with all NaN values get converted to empty strings. Not shown, but missing columns would be created and filled with empty strings. All columns are converted to strings, and NAN values are replaced with empty strings. The <code>total</code> column displays as a float within a string, whereas the other columns display as an integer within the string by chopping off everything after the decimal place via <code>str.split('.').str[0]</code>.</p>
<p>However, this triple-nested <code>if else</code> solution feels very messy. We would prefer to remove the nested <code>if else</code> with a cleaner, less-nested, more sophisticated solution. How could we achieve this?</p>
<p><strong>Edit:</strong> updated with <code>count4</code> added to <code>all_cols</code></p>
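<p>To make the intent concrete, here is the direction I have been sketching: <code>reindex</code> adds any missing columns in one step, and a single per-column formatter replaces the nested <code>if else</code>. I am not sure it is the most idiomatic way, hence the question:</p>

```python
import numpy as np
import pandas as pd

data = [[1, 2.4, 3, np.nan], [4, 5.3, 6, np.nan], [np.nan, 8, 3, np.nan]]
output_data = pd.DataFrame(data, columns=['total', 'count1', 'count2', 'count3'])

all_cols = ['total', 'count1', 'count2', 'count3', 'count4']
float_cols = ['total']  # don't want to convert these to integer

# reindex creates any missing columns in one step, filled with NaN
output_data = output_data.reindex(columns=all_cols)

def fmt(col):
    if col.isna().all():
        return col.fillna('')              # all-NaN column -> empty strings
    as_str = col.fillna('').astype(str)
    if col.name in float_cols:
        return as_str                      # keep the float representation
    return as_str.str.split('.').str[0]    # chop everything after the decimal

output_data = output_data.apply(fmt)
print(output_data)
```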
|
<python><pandas><dataframe>
|
2023-04-11 10:37:34
| 1
| 10,679
|
Canovice
|
75,984,892
| 5,016,028
|
Function of matrices in Maxima
|
<p>I have two matrices Ac and Ep and a parameter k. I need to implement this matrix, which is a function of my prior matrices and k :</p>
<pre><code>ProbEnt(k)[i,j] := if (k < wmax) then binomial(Ac[i,j], k)*Ep[i,j]^k * (1-Ep[i,j])^(Ac[i,j]-k) else 0;
</code></pre>
<p>For some reason Maxima will not let me define ProbEnt as a function of the parameter k this way. Is there a way to make this work?</p>
|
<python><matrix><sage><maxima><wxmaxima>
|
2023-04-11 10:29:35
| 1
| 4,373
|
Qubix
|
75,984,808
| 11,332,693
|
Python drop duplicates by conditions
|
<p><strong>Problem Statement:</strong> A recruiter wants to assign a candidate to each job with a specific skill and city on a first-come, first-served basis. For example, if candidate P1 is selected for job 'A', then both job 'A' and candidate 'P1' should be dropped from the next selection.</p>
<p>Below is the sample dataframe</p>
<p>df</p>
<pre><code>Job Skill City Id Job_Id
A Science London P1 A_P1
A Science London P2 A_P2
B Science London P1 B_P1
B Science London P2 B_P2
C Science London P1 C_P1
C Science London P2 C_P2
D Science London P1 D_P1
D Science London P2 D_P2
E Maths London P3 E_P3
E Maths London P4 E_P4
</code></pre>
<p>Script tried:</p>
<pre><code>df.drop_duplicates(['Skill', 'City','Id']).drop_duplicates('Job')
</code></pre>
<p>Output I got:</p>
<pre><code>Job Skill City Id Job_Id
A Science London P1 A_P1
E Maths London P3 E_P3
</code></pre>
<p>Expected output :</p>
<pre><code>Job Skill City Id Job_Id
A Science London P1 A_P1
B Science London P2 B_P2
E Maths London P3 E_P3
</code></pre>
<p>Since P1 is used for A, P2 is still available for B, and hence P2 must be selected for B. Since P1 and P2 are used for A and B, they should not be available for C and D.</p>
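<p>For clarity, here is a plain greedy loop that expresses the rule and produces the expected output; it works, but it feels un-pandas-like, and I am hoping for a vectorised <code>drop_duplicates</code>-style equivalent:</p>

```python
import pandas as pd

df = pd.DataFrame({
    'Job':   list('AABBCCDDEE'),
    'Skill': ['Science'] * 8 + ['Maths'] * 2,
    'City':  ['London'] * 10,
    'Id':    ['P1', 'P2'] * 4 + ['P3', 'P4'],
})
df['Job_Id'] = df['Job'] + '_' + df['Id']

# Greedy first-come, first-served assignment: each job and each
# candidate can be used at most once
used_jobs, used_ids, rows = set(), set(), []
for _, row in df.iterrows():
    if row['Job'] not in used_jobs and row['Id'] not in used_ids:
        used_jobs.add(row['Job'])
        used_ids.add(row['Id'])
        rows.append(row)

result = pd.DataFrame(rows).reset_index(drop=True)
print(result)
```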
|
<python><pandas><duplicates>
|
2023-04-11 10:18:18
| 1
| 417
|
AB14
|
75,984,731
| 4,001,592
|
What is the meaning of the re.DEBUG flag?
|
<p>The <code>re.DEBUG</code> flag offers a peek at the inner workings of a regular expression pattern in Python, for example:</p>
<pre><code>import re
re.compile(r"(a(?:b)){1,3}(c)", re.DEBUG)
</code></pre>
<p>Returns:</p>
<pre><code>MAX_REPEAT 1 3
SUBPATTERN 1 0 0
LITERAL 97
LITERAL 98
SUBPATTERN 2 0 0
LITERAL 99
0. INFO 4 0b0 3 7 (to 5)
5: REPEAT 11 1 3 (to 17)
9. MARK 0
11. LITERAL 0x61 ('a')
13. LITERAL 0x62 ('b')
15. MARK 1
17: MAX_UNTIL
18. MARK 2
20. LITERAL 0x63 ('c')
22. MARK 3
24. SUCCESS
</code></pre>
<p>Where can I find the meaning of the opcodes (SUBPATTERN, MAX_REPEAT, etc.)? Some of them are self-explanatory, but the overall picture is unclear. What does <code>1 0 0</code> mean in <code>SUBPATTERN 1 0 0</code>?</p>
<p>Some things I've tried:</p>
<ul>
<li>Read the docs on <a href="https://docs.python.org/3/library/re.html#re.DEBUG" rel="noreferrer"><code>re.DEBUG</code></a></li>
<li>Read the <a href="https://github.com/python/cpython/blob/75a6fadf369315b27e12f670e6295cf2c2cf7d7e/Lib/re/_parser.py" rel="noreferrer">source code</a> of the parser.</li>
<li>Google <a href="https://www.google.com/search?q=re.debug+python+meaning&oq=re.debug+python+meaning" rel="noreferrer">search</a></li>
</ul>
<p><strong>Note:</strong> I know that perhaps this is not a perfect fit for a StackOverflow question, but I've written a clear problem with an MRE and my efforts at solving the issue at hand. Moreover, I think having this solved benefits the other users as well.</p>
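<p>While digging further I found that the parser can be invoked directly, which at least exposes the argument structure: for <code>SUBPATTERN</code> the arguments appear to be (group number, add_flags, del_flags, subpattern). These are CPython internals though, the module name varies between versions, and I would still like an authoritative reference:</p>

```python
try:
    from re import _parser as sre_parse  # Python 3.11+
except ImportError:
    import sre_parse                     # Python <= 3.10

parsed = sre_parse.parse(r"(a(?:b)){1,3}(c)")
# Each node is an (opcode, arguments) pair; "SUBPATTERN 1 0 0" therefore
# reads as: group 1, add_flags=0, del_flags=0 (for inline flag changes)
for op, args in parsed:
    print(op, args)
```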
|
<python><regex>
|
2023-04-11 10:09:57
| 2
| 62,150
|
Dani Mesejo
|
75,984,675
| 14,594,208
|
How to keep the values of one column per index?
|
<p>Consider the following Pandas dataframe:</p>
<pre class="lang-py prettyprint-override"><code> col_a col_b col_c
0 10 15 20
0 10 15 20
1 10 15 20
1 10 15 20
1 10 15 20
1 10 15 20
2 10 15 20
</code></pre>
<p>Now, let's consider that we'd like the following mapping:</p>
<pre><code>{
0: 'col_a',
1: 'col_b',
2: 'col_c'
}
</code></pre>
<p>The mapping essentially dictates which column we shall keep for each index!</p>
<p>Output <code>df</code>:</p>
<pre class="lang-py prettyprint-override"><code> column
0 10
0 10
1 15
1 15
1 15
1 15
2 20
</code></pre>
<p>So far, I have something like this:</p>
<pre class="lang-py prettyprint-override"><code>keep_cols = [(0, 'col_a'), (1, 'col_b'), (2, 'col_c')]
output = pd.concat([df.loc[df['index_col'] == idx, col] for idx, col in keep_cols], axis=1)
</code></pre>
<p>However, I am creating sub-dfs before actually concatenating them back, which I guess is sub-optimal as far as performance is concerned!</p>
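<p>For reference, the vectorised direction I was considering instead: translate each index label to the column position it should keep, then pick along the rows with NumPy fancy indexing. I am not sure this is the intended pandas idiom, hence the question:</p>

```python
import numpy as np
import pandas as pd

df = pd.DataFrame(
    {'col_a': [10] * 7, 'col_b': [15] * 7, 'col_c': [20] * 7},
    index=[0, 0, 1, 1, 1, 1, 2],
)
mapping = {0: 'col_a', 1: 'col_b', 2: 'col_c'}

# Map each row's index label to its target column, then to that
# column's integer position
cols = df.index.map(mapping)
pos = df.columns.get_indexer(cols)

out = pd.DataFrame(
    {'column': df.to_numpy()[np.arange(len(df)), pos]},
    index=df.index,
)
print(out)
```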
|
<python><pandas>
|
2023-04-11 10:06:45
| 3
| 1,066
|
theodosis
|
75,984,225
| 1,073,476
|
Doing ctypes.memset as of Python 3.11?
|
<p>I'm implementing a memset function that's supposed to set a bytes object buffer to zero.</p>
<p>As of Python 3.11 the buffer api functions <a href="https://docs.python.org/3/c-api/buffer.html#c.PyObject_GetBuffer" rel="nofollow noreferrer">PyObject_GetBuffer() and PyBuffer_Release()</a> are now part of the Stable ABI.</p>
<p>The below code works, but:</p>
<ul>
<li>It feels strange that I have
to define my own Py_buffer class. Isn't there one predefined
somewhere?</li>
<li>Am I using PyBuffer_Release correctly?</li>
</ul>
<pre class="lang-py prettyprint-override"><code>def memset(bytes_object):
import ctypes
# Define the Py_buffer structure
class Py_buffer(ctypes.Structure):
_fields_ = [
('buf', ctypes.c_void_p),
('obj', ctypes.py_object),
('len', ctypes.c_ssize_t),
('itemsize', ctypes.c_ssize_t),
('readonly', ctypes.c_int),
('ndim', ctypes.c_int),
('format', ctypes.c_char_p),
('shape', ctypes.POINTER(ctypes.c_ssize_t)),
('strides', ctypes.POINTER(ctypes.c_ssize_t)),
('suboffsets', ctypes.POINTER(ctypes.c_ssize_t)),
('internal', ctypes.c_void_p),
]
buf = Py_buffer()
ctypes.pythonapi.PyObject_GetBuffer(ctypes.py_object(bytes_object), ctypes.byref(buf), ctypes.c_int(0))
try:
ctypes.memset(buf.buf, 0, buf.len)
finally:
ctypes.pythonapi.PyBuffer_Release(ctypes.byref(buf))
obj = bytes("hello world", "ascii")
print("before:", repr(obj))
memset(obj)
print("after:", repr(obj))
</code></pre>
<p>Gives this output:</p>
<pre><code>before: b'hello world'
after: b'\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00'
</code></pre>
<p>For reference, here's an older way of doing memset on a buffer that uses the <a href="https://docs.python.org/3/c-api/objbuffer.html" rel="nofollow noreferrer">deprecated PyObject_AsCharBuffer</a> function. It still works as of Python 3.11 though:</p>
<pre><code>def memset_old(bytes_object: bytes):
import ctypes
if not isinstance(bytes_object, bytes):
raise TypeError(f"expected bytes, not {type(bytes_object)}")
data = ctypes.POINTER(ctypes.c_char)()
    size = ctypes.c_ssize_t()  # PyObject_AsCharBuffer writes a Py_ssize_t
ctypes.pythonapi.PyObject_AsCharBuffer(ctypes.py_object(bytes_object), ctypes.pointer(data), ctypes.pointer(size))
ctypes.memset(data, 0, size.value)
obj = bytes("hello world", "ascii")
print("old before:", repr(obj))
memset_old(obj)
print("old after:", repr(obj))
</code></pre>
|
<python><ctypes><cpython>
|
2023-04-11 09:13:01
| 1
| 429
|
johanrex
|
75,984,105
| 5,852,506
|
Access localhost from within a docker image
|
<p>I have the following .gitlab-ci.yml file:</p>
<pre><code>image: python:3.8
variables:
PIP_CACHE_DIR: "$CI_PROJECT_DIR/.cache/pip"
cache:
paths:
- .cache/pip
- venv/
before_script:
- python -V # Print out python version for debugging
- pip install --upgrade pip
- pip install virtualenv
- virtualenv venv
- source venv/bin/activate
- pip install -r requirements.txt
- cp properties_ci properties
- apt-get update
- apt-get install net-tools
stages:
- build
- test
- security
- leanix
include:
- ...
test:unit:
stage: ...
test:integration:
stage: test
script:
- echo "0"
- python app.py &
- curl 127.0.0.1:8126
- py.test tests/integration/test_integration.py
services:
- name: cassandra:3.11
</code></pre>
<p>When I launch my Python application using the command <code>python app.py</code>, I can see the following output:</p>
<pre><code>$ python app.py
WARNING:cassandra.cluster:Cluster.__init__ called with contact_points specified, but no load_balancing_policy. In the next major version, this will raise an error; please specify a load-balancing policy. (contact_points = ['cassandra'], lbp = None)
WARNING:cassandra.connection:An authentication challenge was not sent, this is suspicious because the driver expects authentication (configured authenticator = PlainTextAuthenticator)
WARNING:cassandra.connection:An authentication challenge was not sent, this is suspicious because the driver expects authentication (configured authenticator = PlainTextAuthenticator)
Creating keyspace if not exist...
Creating tables if not exist...
* Serving Flask app 'src' (lazy loading)
* Environment: production
WARNING: This is a development server. Do not use it in a production deployment.
Use a production WSGI server instead.
* Debug mode: off
INFO:werkzeug:WARNING: This is a development server. Do not use it in a production deployment. Use a production WSGI server instead.
* Running on http://127.0.0.1:8126
</code></pre>
<p>So connecting to Cassandra and creating/inserting data works well, but the integration tests cannot access the python application on localhost.</p>
<p>In my integration tests I call <a href="http://127.0.0.1:8126" rel="nofollow noreferrer">http://127.0.0.1:8126</a> but it says the following:</p>
<pre><code>Creating keyspace if not exist...
=========================== short test summary info ============================
ERROR tests/integration/test_integration.py - requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=8126): Max retries exceeded with url: /getCSRFToken (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7f316d3df700>: Failed to establish a new connection: [Errno 111] Connection refused'))
!!!!!!!!!!!!!!!!!!!! Interrupted: 1 error during collection !!!!!!!!!!!!!!!!!!!!
=============================== 1 error in 3.05s ===============================
</code></pre>
<p>Any ideas how to reach localhost inside the Docker container so that I can access my application? I also tried 127.0.0.1 and it does not work.</p>
<h2>EDIT 1</h2>
<p>As a small example, the script test_integration.py contains the following portion of code:</p>
<pre><code>...
url = "http://localhost:8126/series/"
import zipfile
with zipfile.ZipFile("tests/integration/20230323.zip", mode="r") as archive:
#Iterate through files in zip file
for zipfilename in archive.filelist:
txtdata = archive.read(zipfilename)
if (zipfilename.filename == 'OP_lastchange_CRUDE_2023-03-23.json'):
payload2 = json.dumps(json.loads(txtdata))
response = requests.request("POST", url, headers=headers, data=payload2)
assert response.status_code == 204, "Santa Claus is not coming this year !"
...
</code></pre>
<h2>EDIT 2</h2>
<p>The solution is given by @KamilCuk: starting the Python app in the background in the <code>before_script</code> section with <code>python app.py &amp;</code> and sleeping for a few seconds makes the application reachable at <code>http://localhost:8126</code>.</p>
|
<python><docker><gitlab><gitlab-ci>
|
2023-04-11 09:00:26
| 2
| 886
|
R13mus
|
75,984,086
| 1,779,532
|
How to force parameters to be positional-only, positional-or-keyword, or keyword-only when using *args and **kwargs
|
<p><a href="https://docs.python.org/3/tutorial/controlflow.html#special-parameters" rel="nofollow noreferrer">https://docs.python.org/3/tutorial/controlflow.html#special-parameters</a></p>
<pre><code>def f(pos1, pos2, /, pos_or_kwd, *, kwd1, kwd2):
----------- ---------- ----------
| | |
| Positional or keyword |
| - Keyword only
-- Positional only
</code></pre>
<p>According to the reference, / and * indicate the kind of parameter by how the arguments may be passed to the function: positional-only, positional-or-keyword, and keyword-only.</p>
<p>It is clear how to use them with normal positional or keyword parameters. However, when I use <code>*args</code> or <code>**kwargs</code> as well, it becomes very confusing to me.</p>
<p>for example,</p>
<pre class="lang-py prettyprint-override"><code>def func(a, *args, /): # SyntaxError: / must be ahead of *
pass
</code></pre>
<p>This code raises <code>SyntaxError: / must be ahead of *</code> although it is clear that <code>a</code> and <code>args</code> are positional parameters. I know <code>/</code> is unnecessary here, but what I am wondering is why the error message mentions <code>*</code> at all: I used <code>*</code> to mark <code>*args</code>, not a bare <code>*</code>.</p>
<pre class="lang-py prettyprint-override"><code>def func(a, /, *args): # No error!?
pass
</code></pre>
<p>However, this code works without error although <code>*args</code> follows <code>/</code> and is therefore indicated as positional-or-keyword (<code>*args</code> cannot take keyword arguments!). I would expect an error such as <code>SyntaxError: *args cannot take keyword arguments</code>.</p>
<p>In addition,</p>
<pre><code>def func(*, **kwargs): # SyntaxError: named arguments must follow bare *
pass
</code></pre>
<p>This code raises <code>SyntaxError: named arguments must follow bare *</code>. I know the <code>*</code> is unnecessary in this function, but the error message says named arguments must follow bare <code>*</code> although the named argument collector <code>**kwargs</code> already follows <code>*</code>.</p>
<p>I think I am missing something, so I would very much appreciate a more detailed explanation of how to use <code>/</code> and <code>*</code> when <code>*args</code> or <code>**kwargs</code> is involved.</p>
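<p>For contrast, these orderings do compile, which makes the rules even less obvious to me (the function names are just placeholders):</p>

```python
def f1(a, /, *args, **kwargs):
    # a is positional-only; *args collects the rest; f1(a=1) is a TypeError
    # because the positional-only 'a' cannot be supplied by keyword
    return a, args, kwargs

def f2(a, *args, kw_only=None):
    # everything declared after *args is keyword-only
    return a, args, kw_only

def f3(**kwargs):
    # a bare ** collector needs no preceding *
    return kwargs

print(f1(1, 2, 3, x=4))     # (1, (2, 3), {'x': 4})
print(f2(1, 2, kw_only=5))  # (1, (2,), 5)
print(f3(a=1))              # {'a': 1}
```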
|
<python><arguments><keyword-argument>
|
2023-04-11 08:58:30
| 1
| 2,544
|
Park
|
75,983,988
| 4,355,695
|
enum type create with skip logic working in workbench but not in program with psycopg2 execute
|
<p>PostgreSQL query to create an enum type, skipping over it if it's already created:</p>
<pre class="lang-sql prettyprint-override"><code>DO $$ BEGIN
CREATE TYPE operated_by_enum AS ENUM ('opA', 'opB');
EXCEPTION
WHEN duplicate_object THEN null;
END $$;
</code></pre>
<p>courtesy <a href="https://stackoverflow.com/a/48382296/4355695">https://stackoverflow.com/a/48382296/4355695</a></p>
<p>This works perfectly from DBeaver connected to my Postgres DB, which is the latest version. The enum in question already exists, so the block skips over the creation.</p>
<p>However, if I run the same query in my python program, then it fails with:</p>
<pre><code>SyntaxError: syntax error at or near "IF"
LINE 11: IF NOT EXISTS (SELECT 1 FROM pg_type WHERE typname = 'operat...
</code></pre>
<p>The sql executing code is as follows:</p>
<pre class="lang-py prettyprint-override"><code>import psycopg2

connection = psycopg2.connect(**DB_PARAMETERS)
cursor = connection.cursor()
cursor.execute(s1)
connection.commit()  # commit() lives on the connection, not the cursor
cursor.close()
</code></pre>
<p>s1 here being a string variable having the above sql block.</p>
<p>Why is this behaving differently in the different places? Prior to this I've done a variety of SQL statements and they've all worked the same whether in program or in the workbench tool. Then why here is there a difference?</p>
<p>I haven't studied PostgreSQL deeply enough to fully understand the $$ and BEGIN-END constructs; at surface level I reckon they're for executing a block of statements instead of running them one by one. I've seen that just removing (BEGIN .. END) or (do $$ .. $$) from the sql makes it fail in the workbench.</p>
<p>I've tried similarly with the other way also shared:<br />
<code>SELECT 1 FROM pg_type WHERE typname</code><br />
Getting same failure result when I try to do it from program.</p>
<p>Could there be something in psycopg2 that strips those block keywords away or something?</p>
<p>Not able to make out much from the psycopg2 documentation: <a href="https://www.psycopg.org/docs/cursor.html" rel="nofollow noreferrer">https://www.psycopg.org/docs/cursor.html</a></p>
<p>Finally, I'm looking for a way to execute an enum type creation statement in a python program such that it politely skips over in case it's already there, and it's part of a larger set of DB schema setup statements (creating tables and all).</p>
<hr />
<p><strong>Edit:</strong> I found that I can just do a DROP followed by a fresh CREATE:</p>
<pre><code>DROP TYPE IF EXISTS operated_by_enum;
CREATE TYPE operated_by_enum AS ENUM ('opA', 'opB');
</code></pre>
<p>The caveat being that there should be no existing columns in any table belonging to this type at the time. For me this was fine as I was dropping and recreating that table also anyways. I simply inserted above 2 lines after the table drop and before its recreate.</p>
<p>My question was specific to another action, and there may be valid use cases where dropping the enum isn't an option, so I'm leaving the question up. But for those whose situation allows dropping, I'd suggest taking this much simpler route.</p>
|
<python><postgresql><psycopg2>
|
2023-04-11 08:45:24
| 0
| 6,252
|
Nikhil VJ
|
75,983,861
| 4,876,058
|
Scrapy Crawl only first 5 pages of the site
|
<p>I am working on the following problem: my boss wants me to create a <code>CrawlSpider</code> in <code>Scrapy</code> that scrapes article details like <code>title</code> and <code>description</code> and paginates through only the first 5 pages.</p>
<p>I created a <code>CrawlSpider</code>, but it paginates through all the pages. How can I restrict the <code>CrawlSpider</code> to only the first (latest) 5 pages?</p>
<p>Markup of the article listing page that opens when we click the pagination next link:</p>
<p><strong>Listing page markup</strong>:</p>
<pre class="lang-html prettyprint-override"><code> <div class="list">
<div class="snippet-content">
<h2>
<a href="https://example.com/article-1">Article 1</a>
</h2>
</div>
<div class="snippet-content">
<h2>
<a href="https://example.com/article-2">Article 2</a>
</h2>
</div>
<div class="snippet-content">
<h2>
<a href="https://example.com/article-3">Article 3</a>
</h2>
</div>
<div class="snippet-content">
<h2>
<a href="https://example.com/article-4">Article 4</a>
</h2>
</div>
</div>
<ul class="pagination">
<li class="next">
<a href="https://www.example.com?page=2&keywords=&from=&topic=&year=&type="> Next </a>
</li>
</ul>
</code></pre>
<p>For this, I am using a <code>Rule</code> object with the <code>restrict_xpaths</code> argument to get all the article links; for each followed link the <code>parse_item</code> method is executed, which gets the article <code>title</code> and <code>description</code> from the <code>meta</code> tags.</p>
<pre><code>Rule(LinkExtractor(restrict_xpaths='//div[contains(@class, "snippet-content")]/h2/a'), callback="parse_item",
follow=True)
</code></pre>
<p><strong>Detail page markup</strong>:</p>
<pre><code><meta property="og:title" content="Article Title">
<meta property="og:description" content="Lorem Ipsum is simply dummy text of the printing and typesetting industry. Lorem Ipsum has been the industry's standard dummy text ever since the 1500s, when an unknown printer took a galley of type and scrambled it to make a type specimen book. It has survived not only five centuries, but also the leap into electronic typesetting, remaining essentially unchanged. It was popularised in the 1960s with the release of Letraset sheets containing Lorem Ipsum passages, and more recently with desktop publishing software like Aldus PageMaker including versions of Lorem Ipsum.">
</code></pre>
<p>After this, I added another <code>Rule</code> object to handle pagination; the <code>CrawlSpider</code> uses the next link to open the following listing page and repeats the same procedure again and again.</p>
<pre><code>Rule(LinkExtractor(restrict_xpaths='//ul[@class="pagination"]/li[@class="next"]/a'))
</code></pre>
<p>This is my <code>CrawlSpider</code> code:</p>
<pre><code>from scrapy.linkextractors import LinkExtractor
from scrapy.spiders import CrawlSpider, Rule
import w3lib.html
class ExampleSpider(CrawlSpider):
name = "example"
allowed_domains = ["example.com"]
start_urls = ["https://www.example.com/"]
custom_settings = {
'FEED_URI': 'articles.json',
'FEED_FORMAT': 'json'
}
total = 0
rules = (
# Get the list of all articles on the one page and follow these links
Rule(LinkExtractor(restrict_xpaths='//div[contains(@class, "snippet-content")]/h2/a'), callback="parse_item",
follow=True),
# After that get pagination next link get href and follow it, repeat the cycle
Rule(LinkExtractor(restrict_xpaths='//ul[@class="pagination"]/li[@class="next"]/a'))
)
def parse_item(self, response):
self.total = self.total + 1
title = response.xpath('//meta[@property="og:title"]/@content').get() or ""
description = w3lib.html.remove_tags(response.xpath('//meta[@property="og:description"]/@content').get()) or ""
return {
'id': self.total,
'title': title,
'description': description
}
</code></pre>
<p>Is there a way we can restrict the crawler to crawl only the first 5 pages?</p>
|
<python><scrapy>
|
2023-04-11 08:29:05
| 1
| 1,019
|
Ven Nilson
|
75,983,384
| 15,452,168
|
daily sunshine hours/ minutes from weather API
|
<p>I am currently using the <code>meteostat</code> weather API and extracting data, but I see that sunshine minutes are NaN for almost all countries. Am I doing something wrong, or should I look for a paid weather API? My main focus is sunshine minutes or hours per day for the last 3 years.</p>
<p>I'm using the code snippet below:</p>
<pre><code>!pip install meteostat
# Import Meteostat library and dependencies
from datetime import datetime
import matplotlib.pyplot as plt
from meteostat import Stations, Daily
# Set time period
start = datetime(2016, 1, 1)
end = datetime(2022, 12, 31)
# Get daily data
data = Daily('06660', start, end)
data = data.fetch()
data
</code></pre>
<p>where 06660 is the weather station WMO code for Switzerland. Thank you in advance.</p>
|
<python><python-3.x><openweathermap><weather><meteostat>
|
2023-04-11 07:24:38
| 1
| 570
|
sdave
|
75,983,163
| 16,383,578
|
What exactly does `psutil.net_io_counters().byte_recv` mean?
|
<p>I use ExpressVPN and my physical connection is a wired Ethernet connection. I am currently connected to the VPN and in my "Control Panel\Network and Internet\Network Connections" page there is an adapter for the Ethernet and an adapter named "Local Area Connection 2" for the VPN connection.</p>
<p>My Windows Settings application tells me that I am connected to the internet through "Local Area Connection 2" and "Ethernet" has no internet.</p>
<p>I performed a little test, I closed all programs that can use the network connection, took a value of bytes_recv, then downloaded <a href="http://ipv4.download.thinkbroadband.com/1GB.zip" rel="nofollow noreferrer">this file</a>, which is exactly 1,073,741,824 bytes long, then took another value of bytes_recv and checked the difference:</p>
<pre><code>import psutil
import requests
import sys
from io import BytesIO
def test(lan2=False):
if lan2:
bytes_recv0 = psutil.net_io_counters(pernic=True)['Local Area Connection 2'].bytes_recv
else:
bytes_recv0 = psutil.net_io_counters().bytes_recv
done = 0
r = requests.get('http://ipv4.download.thinkbroadband.com/1GB.zip', stream=True)
with BytesIO() as f:
for chunk in r.iter_content(131072):
done += len(chunk)
f.write(chunk)
sys.stdout.write('\r{}% done'.format(round(done / 1073741824 * 100, 4)))
if lan2:
bytes_recv1 = psutil.net_io_counters(pernic=True)['Local Area Connection 2'].bytes_recv
else:
bytes_recv1 = psutil.net_io_counters().bytes_recv
return (bytes_recv1 - bytes_recv0) / 1073741824
</code></pre>
<p>And I got the following results:</p>
<pre><code>In [69]: test()
100.0% doneOut[69]: 2.161009442061186
In [70]: test()
100.0% doneOut[70]: 2.1637731716036797
In [71]: test()
100.0% doneOut[71]: 2.174186712130904
In [72]: test(1)
100.0% doneOut[72]: 1.033312937244773
In [73]: test(1)
100.0% doneOut[73]: 1.0409678984433413
In [74]: test(1)
100.0% doneOut[74]: 1.0346684455871582
</code></pre>
<p>If I don't set <code>pernic</code>, it is evident that the data is somehow double counted; indeed, if I check <code>psutil.net_io_counters(pernic=True)</code>, then both <code>'Local Area Connection 2'</code> and <code>'Ethernet'</code> change over time (all other interfaces don't change).</p>
<p>If I only check <code>'Local Area Connection 2'</code> interface, then the result is more like it, but not quite. During the check I am quite sure there is only exactly 1GiB of data received, but the difference is a little bit over it, and the <code>errin</code>, <code>errout</code>, <code>dropin</code> & <code>dropout</code> counts didn't change...</p>
<p>I don't know why the number is slightly bigger than expected. I know that in telecommunications smaller chunks of data are encoded in larger ones, which adds some overhead, and this number looks similar to 64b/66b encoding; but if that were the case the ratio should be close to 1.03125, and the numbers don't quite match up.</p>
<p>Why is the difference bigger than expected?</p>
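<p>To make the mismatch explicit, here is the arithmetic I did. I am not sure at which layer the counters sit; if they count whole Ethernet frames, then per-packet TCP/IP/Ethernet headers alone would explain a ratio in this range (the header and MTU sizes below are the usual textbook values, not measured on my link):</p>

```python
observed = [1.033312937244773, 1.0409678984433413, 1.0346684455871582]

line_coding = 66 / 64  # 64b/66b physical-layer encoding -> 1.03125

# Rough per-packet header overhead for a 1500-byte MTU:
# 14 bytes Ethernet + 20 bytes IP + 20 bytes TCP over ~1460 payload bytes
header_overhead = 1 + (14 + 20 + 20) / 1460  # ~1.037

for r in observed:
    print(f"observed {r:.4f}  64b/66b {line_coding:.5f}  headers {header_overhead:.4f}")
```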
|
<python><python-3.x><network-programming><psutil>
|
2023-04-11 06:57:13
| 1
| 3,930
|
Ξένη Γήινος
|
75,983,000
| 648,045
|
ValueError: cannot reindex on an axis with duplicate labels while using assign
|
<p>I am trying to split the values inside the <code>engine_type</code> column on the <code>_</code> delimiter with the following code:</p>
<pre><code>df = pd.read_csv("/content/sample_data/used_cars.csv")
dds = df.assign(engines_type= lambda x: x['engine_type'].str.split(r'\s*_\s*').explode()).reset_index()
</code></pre>
<p>I am getting the following error</p>
<blockquote>
<p>ValueError: cannot reindex on an axis with duplicate labels</p>
</blockquote>
<p>What could be the reason for this error?</p>
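<p>A self-contained version without the CSV (the column values are made up) reproduces the same error, so it does not seem to depend on my data:</p>

```python
import pandas as pd

df = pd.DataFrame({'engine_type': ['v6_petrol', 'v8_diesel']})

try:
    df.assign(
        engines_type=lambda x: x['engine_type'].str.split(r'\s*_\s*').explode()
    )
except ValueError as e:
    # explode() duplicates the index (0, 0, 1, 1), and assign cannot
    # align such a Series back onto the frame's unique index
    print(e)
```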
<p>Thanks in advance</p>
|
<python><pandas>
|
2023-04-11 06:33:23
| 1
| 4,953
|
logeeks
|
75,982,868
| 258,279
|
Where does the axis of the great circle through two points meet the sphere?
|
<p>The great circle through two points with lat/lon φ1, λ1 and φ2, λ2 can be calculated. The axis of this great circle meets the sphere at two antipodal points. Do these points have a name? What is the formula to derive them from φ1, λ1 and φ2, λ2?</p>
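<p>My attempt so far, for reference: convert both points to 3-D unit vectors and take their cross product, which should be normal to the plane of the great circle; converting that normal back to lat/lon gives one of the two points, the other being its antipode. I believe these points are called the poles of the great circle, but I would like confirmation that this is the standard construction (the helper name below is my own):</p>

```python
import math

def pole_of_great_circle(lat1, lon1, lat2, lon2):
    """Return lat/lon (degrees) of one pole of the great circle through
    two points; the other pole is its antipode."""
    def to_xyz(lat, lon):
        lat, lon = math.radians(lat), math.radians(lon)
        return (math.cos(lat) * math.cos(lon),
                math.cos(lat) * math.sin(lon),
                math.sin(lat))

    ax, ay, az = to_xyz(lat1, lon1)
    bx, by, bz = to_xyz(lat2, lon2)
    # Cross product a x b is normal to the plane of the great circle
    nx = ay * bz - az * by
    ny = az * bx - ax * bz
    nz = ax * by - ay * bx
    r = math.sqrt(nx * nx + ny * ny + nz * nz)
    return (math.degrees(math.asin(nz / r)),
            math.degrees(math.atan2(ny, nx)))
```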
|
<python><latitude-longitude><great-circle><spherical-coordinate>
|
2023-04-11 06:13:38
| 1
| 381
|
user258279
|
75,982,281
| 149,900
|
How to do interactive "su -c command -" with AsyncSSH
|
<p>I have successfully executed <code>su -c whoami -</code> using <strong>paramiko</strong> like such:</p>
<pre class="lang-py prettyprint-override"><code>def amiroot(ssh: paramiko.client.SSHClient, root_pass: str) -> bool:
    session = ssh.get_transport().open_session()
    session.set_combine_stderr(True)
    session.get_pty()
    # print("Sending su -c whoami -")
    session.exec_command("su -c whoami -")
    stdin = session.makefile('wb', -1)
    stdout = session.makefile('rb', -1)
    while not re.search(b"[Pp]assword", session.recv(1024)):
        time.sleep(1)
    # print("Sending password")
    stdin.write(root_pass + "\n")
    stdin.flush()
    start = time.monotonic()
    while not stdout.channel.eof_received:
        time.sleep(1)
        if (time.monotonic() - start) > 10:
            # print("STDOUT timeout")
            stdout.channel.close()
            break
    # lines = list(filter(None, (lb.decode("utf-8").strip() for lb in stdout.readlines())))
    # print(lines)
    return stdout.read().decode("utf-8").strip() == "root"
</code></pre>
<p>So conceptually it works, but I need it to be done asynchronously against many targets.</p>
<p>I've chosen to use <a href="https://asyncssh.readthedocs.io/en/latest/index.html" rel="nofollow noreferrer">AsyncSSH</a>, and have gone so far as such:</p>
<pre class="lang-py prettyprint-override"><code>async def amiroot(address: str, username: str, password: str, rootpass: str):
    async with asyncssh.connect(
        address,
        username=username,
        password=password,
        known_hosts=None,
        connect_timeout=10,
        login_timeout=10,
    ) as conn:
        print(f"Connected to {address}, sending su")
        async with conn.create_process("su -c whoami -", term_type="xterm") as process:
            print(f"{address}: su sent, waiting response")  ###
            while "password" not in (await process.stdout.read()):
                await asyncio.sleep(1)
            await process.stdin.write(rootpass + "\n")
            print(f"{address}: rootpass sent, waiting response")
            print(await process.stdout.read())
</code></pre>
<p>However I got stuck at the line marked <code>###</code> and never even reached the <code>rootpass sent</code> line. For example:</p>
<pre><code>Connected to 1.2.3.4, sending su
1.2.3.4: su sent, waiting response
</code></pre>
<p>And the program just stuck there.</p>
<p>How do I invoke <code>su -c ... -</code> in AsyncSSH similarly to how I did it using paramiko?</p>
|
<python><asyncssh>
|
2023-04-11 03:55:53
| 1
| 6,951
|
pepoluan
|
75,982,050
| 3,044
|
In the Hypothesis library for Python, why does the text() strategy cause custom strategies to retry?
|
<p>I have a custom strategy built using <code>composite</code> that draws from <code>text</code> strategy internally.</p>
<p>Debugging another error (<code>FailedHealthCheck.data_too_large</code>) I realized that drawing from the <code>text</code> strategy can cause my composite strategy to be invoked roughly twice as often as expected.</p>
<p>I was able to reproduce the following minimal example:</p>
<pre class="lang-py prettyprint-override"><code>import hypothesis
from hypothesis import given

@hypothesis.strategies.composite
def my_custom_strategy(draw, n):
    """Strategy to generate lists of N strings"""
    trace("a")  # trace() is a small custom counting helper
    value = [draw(hypothesis.strategies.text(max_size=256)) for _ in range(n)]
    trace("b")
    return value

@given(my_custom_strategy(100))
def test_my_custom_strategy(value):
    assert len(value) == 100
    assert all(isinstance(v, str) for v in value)
</code></pre>
<p>In this scenario, <code>trace("a")</code> was invoked 206 times, whereas <code>trace("b")</code> was only invoked 100 times. These numbers are consistent across runs.</p>
<p>More problematic, the gap increases the more times I call text(), and super-linearly. When <code>n=200</code>, <code>trace("a")</code> is called 305 times. <code>n=400</code>, 984 times. <code>n=500</code> or greater, the test reliably pauses and then completes after the 11th iteration (with only 11 iterations, instead of 100!)</p>
<p>What's happening here?</p>
|
<python><hypothesis-test><python-hypothesis><property-based-testing>
|
2023-04-11 02:45:22
| 1
| 8,520
|
levand
|
75,982,049
| 12,931,358
|
How to convert a four dimensional Tensor to image by PIL?
|
<p>For example, if my tensor is</p>
<pre><code>t1 = torch.randn(1,3,256,256) #batch_size/ch/height/width
</code></pre>
<p>it is easily to convert to one image by <code>squeeze()</code></p>
<pre><code>import torchvision.transforms as T
transform = T.ToPILImage()
one_img = transform(t1.squeeze())
one_img.save("test1.jpg")
</code></pre>
<p>The problem is when batch_size is more than one. I was wondering if there is a function in PyTorch for something like:</p>
<pre><code>t1 = torch.randn(5,3,256,256)
print(t1.shape)
for i in range(t1.shape[0]):
    one_tensor = t1[i]  # (3,256,256)
    one_img = transform(one_tensor)
    one_img.save(f"{i}.jpg")
</code></pre>
|
<python><pytorch>
|
2023-04-11 02:44:53
| 1
| 2,077
|
4daJKong
|
75,982,032
| 4,420,797
|
Convert json file into labels.csv
|
<p>I have a <code>labels.json</code> file containing the image names and ground truth values. Due to a change in the library, I have to modify the data inside the <code>json</code> file.</p>
<p><strong>Inside Json</strong></p>
<pre><code>{"자랑스럽다_2730052.jpg": "자랑스럽다", "만족스럽다_1299150.jpg": "만족스럽다"}
</code></pre>
<p>I want to generate a <code>labels.csv</code> file that contains a <code>filename</code> column and a <code>words</code> column; the format of <code>labels.csv</code> should be like below.</p>
<pre><code>filename words
2730052.jpg 자랑스럽다
</code></pre>
<p><strong>How can I do it?</strong></p>
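<p>A possible sketch, with the dict inlined instead of being read from <code>labels.json</code>; splitting on the first <code>_</code> keeps only the numeric part of the name:</p>

```python
import pandas as pd

labels = {"자랑스럽다_2730052.jpg": "자랑스럽다", "만족스럽다_1299150.jpg": "만족스럽다"}

# drop everything up to and including the first "_"
rows = [{"filename": name.split("_", 1)[1], "words": word}
        for name, word in labels.items()]
pd.DataFrame(rows).to_csv("labels.csv", index=False)
```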
|
<python><pandas><dataframe>
|
2023-04-11 02:38:44
| 1
| 2,984
|
Khawar Islam
|
75,981,852
| 4,420,797
|
How to extract the file name from a column of paths
|
<p>I am converting a <code>.txt</code> file into <code>labels.csv</code> by adding some columns in a data frame. How can I remove <strong>images/0/</strong> from the column containing <code>images/1/19997.jpg, images/1/19998.jpg, images/1/19999.jpg</code>? <code>/0</code> is the folder name and it varies from time to time.</p>
<p><strong>Code</strong></p>
<pre><code>import pandas as pd
# Read space-separated columns without header
data = pd.read_csv('/media/cvpr/CM_24/synthtiger/results/gt.txt', sep="\s+", header=None)
# Update columns
data.columns = ['filename', 'words']
# Save to required format
data.to_csv('labels.csv')
</code></pre>
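<p>A possible sketch (with made-up values): keeping everything after the last <code>/</code> works no matter which folder the path contains:</p>

```python
import pandas as pd

data = pd.DataFrame({"filename": ["images/1/19997.jpg", "images/0/19998.jpg"],
                     "words": ["foo", "bar"]})
# keep only the basename; the folder component can be anything
data["filename"] = data["filename"].str.split("/").str[-1]
```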
|
<python><pandas><dataframe>
|
2023-04-11 01:43:48
| 1
| 2,984
|
Khawar Islam
|
75,981,727
| 21,305,238
|
TypedDict: Mark a set of keys as incompatible
|
<p>I have an interface named <code>Foo</code> which is supposed to have, aside from other common keys, either one of the two given keys, <code>bar</code> and <code>baz</code>. To let PyCharm know, I wrote two interfaces:</p>
<pre class="lang-py prettyprint-override"><code>from typing import TypedDict

class Foo1(TypedDict):
    bar: str

class Foo2(TypedDict):
    baz: int

Foo = Foo1 | Foo2

foo_instance_1: Foo = {  # Works fine
    'bar': 'foobar'
}

foo_instance_2: Foo = {  # Also fine
    'baz': 42
}

foo_instance_3: Foo = {  # Warning: Expected type 'Foo1 | Foo2', got 'dict[str, str | int]' instead
    'bar': 'foobar',
    'baz': 42
}
</code></pre>
<p>That works. However, the problem is the real interface I'm dealing with has more than just one incompatible set of keys. That being said, if there are three sets with 2, 3 and 4 keys correspondingly, I'll have to write <code>2 * 3 * 4</code> or 24 interfaces. It would be great if something like this exists:</p>
<pre class="lang-py prettyprint-override"><code>class Foo(TypedDict):
    bar: IncompatibleWith('baz', 'qux')[str]
    baz: IncompatibleWith('bar', 'qux')[int]
    qux: IncompatibleWith('bar', 'baz')[bool]
</code></pre>
<p>...or, better yet:</p>
<pre class="lang-py prettyprint-override"><code>@incompatible('bar', 'baz', 'qux')
# ...
class Foo(TypedDict):
    bar: str
    baz: int
    qux: bool
</code></pre>
<p>Real world context: I'm writing a source code generator to generate API interfaces for a site, which I do not have control of, in Python (that site's API system has an API for retrieving API documentation). These interfaces are for type hinting only. While it is true that I can just generate all combinations, that would make the file much much longer.</p>
<p>Is there a short and easy way to mark a set of keys of an interface as incompatible with each other?</p>
|
<python><pycharm><python-typing><typeddict>
|
2023-04-11 01:02:41
| 0
| 12,143
|
InSync
|
75,981,677
| 13,392,257
|
How to create db migrations for local tortoise project?
|
<p>I have a FastAPI + tortose projects and I want to run the project locally with database <code>postgres://lom:lom@localhost:5432/lom</code> (database is created)</p>
<p>My code</p>
<pre><code># lom/app.py
class App:
    storage: S3Storage

    def __init__(self):
        self.config = Config(_env_file=".env", _env_file_encoding="utf-8")
        self.__setup_sentry()
        ...

    def create_app(self, loop: asyncio.AbstractEventLoop) -> FastAPI:
        app = FastAPI()
        register_tortoise(
            app,
            modules={
                "models": [
                    "lom.core",
                    "aerich.models",
                ]
            }
</code></pre>
<p>I want to apply the current migrations and create new migrations. I am trying:</p>
<pre><code>aerich init -t <Don't understand path>
</code></pre>
<p>What aerich command should I run and which parameters should I use?</p>
<pre><code>├── lom
│ ├── app.py
│ ├── config.py
│ ├── core
│ │ ├── city.py
│ │ ├── company.py
├──
├── migrations
│ ├── 001_main.sql
│ ├── 002_cities.sql
│ ├── 003_cities_declination.sql
</code></pre>
|
<python><fastapi><tortoise-orm>
|
2023-04-11 00:45:07
| 1
| 1,708
|
mascai
|
75,981,636
| 4,930,914
|
Highlight python-docx with regex and spacy
|
<p>I want to highlight a regex pattern in the docx files in a folder using python-docx. I am able to achieve it with the normal regex code below.</p>
<p>The issue comes when I want to achieve the same through spacy nlp.</p>
<pre class="lang-py prettyprint-override"><code>from docx import Document
from docx.enum.text import WD_COLOR_INDEX
import pandas as pd
import os
import re
import spacy

nlp = spacy.load("en_core_web_sm")

path = r"/home/coder/Documents/"
doc1 = Document('test.docx')
doc = nlp(doc1)

#re_highlight = re.compile(r"[1-9][0-9]*|0") # This one works.
re_highlight = [token for token in doc if token.like_num == "TRUE"]

for filename in os.listdir(path):
    if filename.endswith(".docx"):
        file = "/home/writer/Documents/" + filename
        print(file)
        for para in doc.paragraphs:
            text = para.text
            if len(re_highlight.findall(text)) > 0:
                matches = re_highlight.finditer(text)
                para.text = ''
                p3 = 0
                for match in matches:
                    p1 = p3
                    p2, p3 = match.span()
                    para.add_run(text[p1:p2])
                    run = para.add_run(text[p2:p3])
                    run.font.highlight_color = WD_COLOR_INDEX.YELLOW
                para.add_run(text[p3:])
        doc.save(file)
</code></pre>
<p>Error:</p>
<blockquote>
<p>raise ValueError(Errors.E1041.format(type=type(doc_like)))<br />
ValueError: [E1041] Expected a string, Doc, or bytes as input, but got: <class 'docx.document.Document'></p>
</blockquote>
<p>I realize that <code>doc</code>, being an nlp object, doesn't have <code>doc.paragraphs</code>. How do I sort out this problem?</p>
<p>Kindly help.</p>
|
<python><regex><spacy>
|
2023-04-11 00:30:37
| 1
| 915
|
Programmer_nltk
|
75,981,635
| 12,349,101
|
Tkinter - Zoom with text and other elements in Canvas
|
<p>I'm trying to add support for zooming in and out inside a Canvas widget, containing both text element (created using <code>create_text</code>) and non-text elements, such as rectangles (created with <code>create_rectangle</code>), etc.</p>
<p>So far, I made the following MRE using both part of <a href="https://stackoverflow.com/a/60149696/12349101">this answer</a> and <a href="https://stackoverflow.com/a/68723812/12349101">this one</a>:</p>
<pre class="lang-py prettyprint-override"><code>import tkinter as tk
from tkinter.font import Font

root = tk.Tk()
canvas = tk.Canvas(root, width=400, height=400)
canvas.pack()

font = Font(family="Arial", size=10)
fontsize = 10

# Add elements to canvas
rectangle = canvas.create_rectangle(100, 100, 300, 200, fill='red')
oval = canvas.create_oval(150, 150, 250, 250, fill='blue')
text = canvas.create_text(200, 175, text="Hello", font=font)

def do_zoom(event):
    global fontsize
    x = canvas.canvasx(event.x)
    y = canvas.canvasy(event.y)
    factor = 1.001 ** event.delta
    if (event.delta > 0):
        fontsize *= 1.1
        font.configure(size=int(fontsize))
    elif (event.delta < 0):
        fontsize *= 0.9
        font.configure(size=int(fontsize))
    canvas.scale("all", x, y, factor, factor)

canvas.bind("<MouseWheel>", do_zoom)
canvas.bind('<ButtonPress-1>', lambda event: canvas.scan_mark(event.x, event.y))
canvas.bind("<B1-Motion>", lambda event: canvas.scan_dragto(event.x, event.y, gain=1))
root.mainloop()
</code></pre>
<p>This seems to work, but has one or two issues:</p>
<ul>
<li>Once the font size gets to <code>0</code> or <code>0.0???</code> in floats (which happens when zooming out), the font size doesn't match the actual visual size of the font, as it seems to be fixed at a larger size (can be seen on <a href="https://i.sstatic.net/nrU6B.jpg" rel="nofollow noreferrer">this gif here</a>)</li>
<li>When zooming in and out fast enough, repeatedly, a discrepancy can be seen on the font size on previous mousewheel scrolling, and the next one (can be seen by printing the font size).</li>
</ul>
<p>In short, I'm wondering if there is a way, either to fix the above (and the reasons for them happening, aside from my own conjecture) or if there is a better way to handle this.</p>
|
<python><tkinter>
|
2023-04-11 00:30:37
| 2
| 553
|
secemp9
|
75,981,439
| 11,938,023
|
How do xor a dataframe slice with the next num and insert the answer in column 'dr'
|
<p>OK, I have this data frame, which you'll notice is named <code>solve</code>, and I'm using a slice of 4:</p>
<pre><code>In [13147]: solve[::4]
Out[13147]:
rst dr
0 1 0
4 3 0
8 7 0
12 5 0
16 14 0
20 12 0
24 4 0
28 4 0
32 4 0
36 3 0
40 3 0
44 5 0
48 5 0
52 13 0
56 3 0
60 1 0
</code></pre>
<p>What I want is, in column <code>rst</code>, to xor 1 with 3 and get 2 (1^3=2), then do 3^7 = 4, and put those results in the corresponding spots in <code>dr</code>: so <code>solve.loc[0, ('dr')] = 2</code> and <code>solve.loc[4, ('dr')] = 4</code>. My current method is tedious and not automatic; here is what I'm doing:</p>
<pre><code>In [13150]: np.array(solve.loc[::4, ('rst')]) ^ np.array(solve.loc[4::4, ('rst')])
ValueError: operands could not be broadcast together with shapes (16,) (15,)
</code></pre>
<p>which is resolved with:</p>
<pre><code>In [13159]: wutwut = np.array(solve.loc[::4, ('rst')])[:15] ^ np.array(solve.loc[4::4, ('rst')])
Out[13159]:
array([ 2, 4, 2, 11, 2, 8, 0, 0, 7, 0, 6, 0, 8, 14, 2],
dtype=int8)
</code></pre>
<p>and then putting the values back into solve.loc['dr'] is an issue because I have to bust a length in manually like:</p>
<pre><code>solve.loc[:56:4, ('dr')] = wutwut
</code></pre>
<p>See, I have to set the length manually; is there a more automatic way?</p>
<p>As you can see, this is tedious and not practical because I'm working with different and changing lengths, and I need a more automatic, best-practice approach for this. I'm looking for suggestions, thanks in advance. Also, I have more advanced use cases where I xor between columns, so if anyone has strategies for that it will help me down the road as well.</p>
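<p>One length-agnostic sketch (a tiny made-up frame standing in for <code>solve[::4]</code>): xor the values against themselves shifted by one, then assign back through the slice's own index, so no length is ever hard-coded:</p>

```python
import pandas as pd

solve = pd.DataFrame({"rst": [1, 3, 7, 5], "dr": 0}, index=[0, 4, 8, 12])

s = solve["rst"]                # in the real frame: solve.loc[::4, 'rst']
vals = s.to_numpy()
xored = vals[:-1] ^ vals[1:]    # one element shorter than the slice
# assigning via s.index[:-1] sizes the target automatically
solve.loc[s.index[:-1], "dr"] = xored
```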
|
<python><pandas><numpy><xor>
|
2023-04-10 23:29:02
| 1
| 7,224
|
oppressionslayer
|
75,981,283
| 13,142,245
|
Efficient days between two dates in Pandas
|
<pre class="lang-py prettyprint-override"><code># works
pre_start['exposure_days'] = (datetime.now() - pd.to_datetime(pre_start['First_Exposure']))
# doesn't work
pre_start['exposure_days'] = (datetime.now() - pd.to_datetime(pre_start['First_Receive_Date'])).days
</code></pre>
<p>The error I get is <code>AttributeError: 'Series' object has no attribute 'days'</code>.</p>
<p>Without doing something obnoxious and inefficient like
<code>pre_start['real_exposure_days'] = pre_start['exposure_days'].apply(lambda x: x.days)</code> is there a way to get the number of days, circumventing this error?</p>
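<p>For what it's worth, the vectorized counterpart of <code>.days</code> on a timedelta Series is the <code>.dt.days</code> accessor (a small sketch with made-up dates):</p>

```python
from datetime import datetime

import pandas as pd

s = pd.to_datetime(pd.Series(["2023-01-01", "2023-04-01"]))
days = (datetime(2023, 4, 11) - s).dt.days  # Series of ints, no apply() needed
```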
|
<python><pandas><datetime>
|
2023-04-10 22:50:19
| 1
| 1,238
|
jbuddy_13
|
75,981,262
| 850,781
|
Compute moving average with non-uniform domain
|
<p><a href="https://stackoverflow.com/q/14313510/850781">How to calculate rolling / moving average using python + NumPy / SciPy?</a> discusses the situation when the observations are <em>equally spaced</em>, i.e., the index is equivalent to an integer range.</p>
<p>In my case, the observations come at arbitrary times and the interval between them can be an arbitrary float. E.g.,</p>
<pre><code>import pandas as pd
import numpy as np
df = pd.DataFrame({"y":np.random.uniform(size=100)}, index=np.random.uniform(size=100)).sort_index()
</code></pre>
<p>I want to add a column <code>yavg</code> to <code>df</code> whose value at a given index value <code>x0</code> is</p>
<pre><code>sum(df.y[x]*f(x0-x) for x in df.index) / sum(f(x0-x) for x in df.index)
</code></pre>
<p>for a given function <code>f</code>, e.g.,</p>
<pre><code>def f(x):
return np.exp(-x*x)
</code></pre>
<p>How do I do this with a minimal effort (preferably in pure <code>numpy</code>)?</p>
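<p>One way to vectorize the quoted formula in pure <code>numpy</code> is a sketch like the following; it materializes the full n×n weight matrix, so it assumes the series is small enough for that:</p>

```python
import numpy as np
import pandas as pd

def f(x):
    return np.exp(-x * x)

df = pd.DataFrame({"y": np.random.uniform(size=100)},
                  index=np.random.uniform(size=100)).sort_index()

x = df.index.to_numpy()
w = f(x[:, None] - x[None, :])     # w[i, j] = f(x_i - x_j), shape (n, n)
# row i: sum_j y_j * f(x_i - x_j) / sum_j f(x_i - x_j)
df["yavg"] = w @ df["y"].to_numpy() / w.sum(axis=1)
```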
|
<python><numpy><convolution><rolling-computation><moving-average>
|
2023-04-10 22:43:43
| 1
| 60,468
|
sds
|
75,981,048
| 4,358,137
|
How to name a model checkpoint with a metric involving a period?
|
<p>I would like to name my checkpoint like so:</p>
<pre class="lang-py prettyprint-override"><code>mcp_save = tf.keras.callbacks.ModelCheckpoint('effnet-{epoch:02d}-{val_f0.5-score:.4f}.mdl_wts.hdf5', save_best_only=True, monitor='val_f0.5-score', mode='max')
</code></pre>
<p>But because the metric has a period I get an error when it tried to save</p>
<blockquote>
<p>'Failed to format this callback filepath: "effnet-{epoch:02d}-{val_f0.5-score:.4f}.mdl_wts.hdf5". Reason: 'val_f0''</p>
</blockquote>
<p>The metric is from the <em>segmentation_models</em> PyPI package.</p>
<pre><code>fscore = sm.metrics.FScore(beta=0.5)
</code></pre>
<p>I can see the name while it is logged out by tensorflow:</p>
<blockquote>
<p>1000/1000 [==============================] - ETA: 0s - loss: 0.6205 - accuracy: 0.2607 - f0.5-score: 0.3066</p>
</blockquote>
<p>Is there a way to escape the period or provide a different string such that it can save the filename with the score?</p>
|
<python><tensorflow><keras><filenames>
|
2023-04-10 22:00:18
| 1
| 1,566
|
Seth Kitchen
|
75,981,046
| 11,092,636
|
openpyxl writing in a cell and copying previous format doesn't work: format is not applied
|
<p>To preserve the background colour and the font colour when updating a cell value with <code>openpyxl</code>, I tried to store the original background colour and font colour to then re-apply it after modifying the value since setting a value would remove the formatting. However, it does not work and I'm not sure I understand why.</p>
<p>To reproduce the issue, you need an <code>example.xlsx</code> file. You can either download it (<a href="https://wetransfer.com/downloads/f3accf7902a97bd18fb12bb08064419720230419122708/397f42" rel="nofollow noreferrer">https://wetransfer.com/downloads/f3accf7902a97bd18fb12bb08064419720230419122708/397f42</a>) or create the file yourself, in case the link expired or in case you wouldn't want to download something from a random stranger on the internet. To create the file, I highlighted columns <code>B</code>, <code>C</code>, <code>D</code> in yellow and set the font to red. Then I wrote test in the cells <code>B1</code>, <code>C1</code> and <code>D1</code>.</p>
<p>I'm trying to write <code>Hello World</code> in the cell <code>B2</code> without losing the yellow background colour and the red font.</p>
<p>MRE:</p>
<pre class="lang-py prettyprint-override"><code>import openpyxl
from openpyxl.styles import PatternFill, Font
wb = openpyxl.load_workbook('example.xlsx') # download the example.xlsx file at https://wetransfer.com/downloads/72662284d9231c96e82f65a054c262ea20230410215338/d7c23b
ws = wb.active
cell = ws.cell(row=2, column=2)
original_fill = cell.fill
original_font = cell.font
cell.value = 'Hello World'
new_fill = PatternFill(fill_type=original_fill.fill_type,
start_color=original_fill.start_color,
end_color=original_fill.end_color,
fgColor=original_fill.fgColor,
bgColor=original_fill.bgColor)
cell.fill = new_fill
new_font = Font(name=original_font.name, size=original_font.size, bold=original_font.bold, italic=original_font.italic,
vertAlign=original_font.vertAlign, underline=original_font.underline, strike=original_font.strike,
color=original_font.color)
cell.font = new_font
wb.save('example_result.xlsx')
</code></pre>
<p>I'm using <code>openpyxl==3.1.2</code> and <code>Python 3.11.1</code>.</p>
<p>I've read somewhere that this was one of the many shortcomings of <code>openpyxl</code>, if this is the case and I can't go around that issue, what library would you suggest? I would like something that isn't too long.</p>
|
<python><excel>
|
2023-04-10 21:59:19
| 1
| 720
|
FluidMechanics Potential Flows
|
75,981,036
| 3,904,557
|
Python OpenAI Whisper FileNotFoundError when running a standalone created with PyInstaller
|
<p>I made a small Python program that uses <a href="https://github.com/openai/whisper" rel="nofollow noreferrer">OpenAI whisper's library</a>. Everything works fine in my virtual environment.</p>
<p>I generated a <code>.exe</code> of the whole thing with <a href="https://pyinstaller.org/en/stable/" rel="nofollow noreferrer">PyInstaller</a>, but when I run the resulting file, I get the following error:</p>
<pre><code>Exception ignored in thread started by: <function start_interpretor at 0x000001F9EC80F430>
Traceback (most recent call last):
File "interpretor.py", line 67, in start_interpretor
File "transcriber.py", line 125, in run_transcription
File "transcriber.py", line 88, in progress_transcription
File "whisper\transcribe.py", line 121, in transcribe
File "whisper\audio.py", line 141, in log_mel_spectrogram
File "whisper\audio.py", line 94, in mel_filters
File "numpy\lib\npyio.py", line 405, in load
FileNotFoundError: [Errno 2] No such file or directory: 'C:\\Users\\Div-o\\AppData\\Local\\Temp\\_MEI236562\\whisper\\assets\\mel_filters.npz'
</code></pre>
<p>I have tried to add the datas object to the <code>.spec</code> file generated by PyInstaller:</p>
<pre><code>a = Analysis(
//...
datas=[('.venv/Lib/site-packages/whisper/assets/mel_filters.npz', '.venv/Lib/site-packages/whisper/assets'),],
//...
)
</code></pre>
<p>I even added <code>recursive_copy_metadata="whisper"</code> to the <code>EXE()</code> in the same file, but the issue persists.</p>
<p>Any idea what is causing that?</p>
|
<python><pyinstaller><openai-whisper>
|
2023-04-10 21:56:50
| 2
| 5,526
|
Ryan Pergent
|
75,980,839
| 814,730
|
Merge a Python Array Along an Axis
|
<p>I've been trying to populate a training data set for use in a Keras model. Using numpy's <code>append</code> function everything <em>works</em> fine, but it is <strong>incredibly slow</strong>. Here's what I'm doing right now:</p>
<pre class="lang-py prettyprint-override"><code>def populateData():
    images = np.zeros([1, 4, 512, 512, 6])
    for m in range(2):
        for n in range(4):
            batch_np = np.zeros([4, 512, 512, 6])
            # Doing stuff with the batch...
            # ...
            images = np.append(images, batch_np, axis=0)
</code></pre>
<p>As the size of the array grows with each pass, the amount of time numpy takes to append new data increases pretty much exponentially. So, for instance, the first pass takes around ~1 second, the third takes just over ~3 seconds. By the time I've done a dozen or more, each <code>append</code> operation takes many <em>minutes</em>(!). Based on the current pace of things, it could take days to complete.</p>
<p><a href="https://i.sstatic.net/VUdLc.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/VUdLc.png" alt="Line graph showing the time (seconds) per iteration (#) when using numpy.append. Values increase exponentially." /></a></p>
<p>I'd like to be able to get my training data set populated sometime before the next ice age. Beyond "getting better hardware", <strong>what can I do to speed up <code>np.append(...)</code></strong>? My understanding is that Numpy's <code>append</code> function copies the entire array each time this function gets called. Is there an equivalent function that does <em>not</em> perform a copy each time? Or that uses a reference value instead, and just modifies that?</p>
<p>I've attempted to rewrite some of this using Python's built-in <code>list append</code> function, but it doesn't provide <code>axis</code> support like numpy's append function. So, while that appears to be <em>much much</em> faster, it doesn't quite work for this multi-dimensional setup.</p>
<hr />
<p><strong>TL;DR: Is there a way to specify an axis when appending to a Python list? If not, is there a more optimal way to append to a N-D array along a specified axis / speed up <code>numpy.append</code>?</strong></p>
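<p>To frame possible answers: the usual pattern is to collect batches in a plain Python list (appending there is cheap) and call <code>numpy.concatenate</code> once at the end, which performs a single copy instead of one per iteration. A sketch with shrunken dimensions (<code>8×8</code> standing in for <code>512×512</code>):</p>

```python
import numpy as np

batches = []
for m in range(2):
    for n in range(4):
        batch_np = np.zeros([4, 8, 8, 6])  # stand-in for [4, 512, 512, 6]
        # ... fill the batch ...
        batches.append(batch_np)           # list append: no array copy here

images = np.concatenate(batches, axis=0)   # one copy, done once at the end
```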
|
<python><arrays><numpy><multidimensional-array><append>
|
2023-04-10 21:21:32
| 2
| 8,628
|
Sam Spencer
|
75,980,805
| 2,924,334
|
pygal: How to show the data labels in the saved png
|
<p>If I include <code>label</code> as in the code snippet below, it shows the data labels when rendered as svg and mouse-hover. However, how do I make the labels show up in a saved png?</p>
<pre><code>import pygal
chart = pygal.XY()
chart.add('line A', [{'value': (10, 2), 'label': 'A0'}, {'value': (15, 20), 'label': 'A1'}])
chart.render_to_file('chart.svg') # this shows the labels on mouse hover
chart.render_to_png('chart.png') # how do I make this to show the data labels?
</code></pre>
<p>Thank you!</p>
|
<python><pygal>
|
2023-04-10 21:16:38
| 1
| 587
|
tikka
|
75,980,255
| 9,918,823
|
Convolving two arrays in python without for loops
|
<p>I have two arrays(<code>arr_1,arr_2</code>), and need to generate an output(<code>arr_out</code>) as follows:</p>
<pre><code>arr_1 = [21, 28, 36, 29, 40]
arr_2 = [0, 225, 225, 0, 225]
arr_out = [-1, 28, 36, -1, 40]
</code></pre>
<p>The output <code>arr_out</code> should have -1 at an index if the product of the elements in <code>arr_1</code> and <code>arr_2</code> at that index is 0. Otherwise, the value from <code>arr_1</code> should be in the output array.</p>
<p>I have implemented a solution using a for loop:</p>
<pre><code>arr_out = [0] * 5
for index, value in enumerate(arr_1):
    if (value * arr_2[index]) == 0:
        arr_out[index] = -1
    else:
        arr_out[index] = value
</code></pre>
<p>Is it possible to achieve this without using a for loop?</p>
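<p>For reference, this element-wise pattern is exactly what <code>numpy.where</code> expresses (a sketch):</p>

```python
import numpy as np

arr_1 = np.array([21, 28, 36, 29, 40])
arr_2 = np.array([0, 225, 225, 0, 225])

# -1 wherever the element-wise product is 0, otherwise the value from arr_1
arr_out = np.where(arr_1 * arr_2 == 0, -1, arr_1)
```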
|
<python>
|
2023-04-10 19:52:49
| 2
| 991
|
harinsamaranayake
|
75,980,187
| 418,413
|
How can I determine if I need to install psycopg2?
|
<p>I am trying to build a generic tool to bootstrap an EMR cluster. Some of the jobs we run in PySpark on EMR require psycopg2. Others don't. For the ones that require psycopg2, we need to <code>yum install postgresql-devel</code>. Otherwise, we don't. So I'm trying to detect if psycopg2 is a dependency.</p>
<p>However, every approach I've tried so far (using pip-23.0.1) results in this output <a href="https://github.com/psycopg/psycopg2/blob/0b01ded426b7423621af68d9fc33e532a5bda4a7/setup.py#L81" rel="nofollow noreferrer">from PostgresConfig</a> when psycopg2 is a dependency:</p>
<pre><code> error: subprocess-exited-with-error
× python setup.py egg_info did not run successfully.
│ exit code: 1
╰─> [25 lines of output]
running egg_info
creating /mnt/tmp/pip-pip-egg-info-qjiuibm3/psycopg2.egg-info
writing /mnt/tmp/pip-pip-egg-info-qjiuibm3/psycopg2.egg-info/PKG-INFO
writing dependency_links to /mnt/tmp/pip-pip-egg-info-qjiuibm3/psycopg2.egg-info/dependency_links.txt
writing top-level names to /mnt/tmp/pip-pip-egg-info-qjiuibm3/psycopg2.egg-info/top_level.txt
writing manifest file '/mnt/tmp/pip-pip-egg-info-qjiuibm3/psycopg2.egg-info/SOURCES.txt'
/usr/local/lib/python3.7/site-packages/setuptools/config/setupcfg.py:516: SetuptoolsDeprecationWarning: The license_file parameter is deprecated, use license_files instead.
warnings.warn(msg, warning_class)
Error: pg_config executable not found.
pg_config is required to build psycopg2 from source. Please add the directory
containing pg_config to the $PATH or specify the full executable path with the
option:
python setup.py build_ext --pg-config /path/to/pg_config build ...
or with the pg_config option in 'setup.cfg'.
If you prefer to avoid building psycopg2 from source, please install the PyPI
'psycopg2-binary' package instead.
For further information please check the 'doc/src/install.rst' file (also at
<https://www.psycopg.org/docs/install.html>).
[end of output]
note: This error originates from a subprocess, and is likely not a problem with pip.
error: metadata-generation-failed
× Encountered error while generating package metadata.
╰─> See above for output.
note: This is an issue with the package mentioned above, not pip.
</code></pre>
<p>I've tried ideas from these posts, including <code>pip-compile</code>, <code>pip download</code> and <code>pip install --dry-run</code>:</p>
<ol>
<li><a href="https://stackoverflow.com/questions/29751572">How to find a Python package's dependencies</a></li>
<li><a href="https://stackoverflow.com/questions/11147667">Is there a way to list pip dependencies/requirements?</a></li>
</ol>
<p>Here are a few ways to reproduce the behavior I'm seeing. (Assume that <code>requirements.in</code> contains the name of the package we're trying to install. In the simplest case, it can just contain "psycopg2".)</p>
<pre><code>pip-compile requirements.in --output-file requirements.txt
# or
pip download --dest /tmp --requirement requirements.in
# or
pip install --dry-run --requirement requirements.in |
sed -n 's/^Would install //p' |
tr ' ' '\n' |
sed 's/\(.*\)-/\1==/g' > requirements.txt
</code></pre>
<p>(Following the above, I intended to either grep the resolved <code>requirements.txt</code>, or search the download dir for psycopg2. But I don't get that far.)</p>
<p>As a last-ditch, I could try doing <code>pip install --dry-run</code>, capture the stderr and parse it for the above message, but is there a more elegant way to tell if <code>psycopg2</code> is a dependency (transitive or direct) <em>without</em> triggering <code>PostgresConfig</code>?</p>
<p>A general solution for determining whether a C extension would need to be compiled would also be helpful, if you know of one.</p>
|
<python><psycopg2><python-c-api>
|
2023-04-10 19:42:13
| 0
| 77,713
|
kojiro
|
75,980,084
| 10,658,339
|
How to overcome memory issue when plotting raster in Python
|
<p>I have an i7 with 32 GB of RAM and an NVIDIA RTX 30-series GPU.
I'm trying to plot some rasters in Python and getting memory issues.
I have compressed my raster with rasterio LZW, but am still getting memory issues.
My raster is boolean and already clipped to the area of interest.
I have transformed the rasters into a dictionary of arrays.</p>
<pre><code>
for name, data in images.items():
print(name)
fig, ax = plt.subplots(1, 1, figsize=(7, 7))
if '2' in name:
plt.imshow(data, cmap='binary', vmin=0, vmax=1)
#plt.savefig(str(name)+'IG.png',format='png',transparent=True)
ax.set_yticklabels([])
ax.set_xticklabels([])
plt.show()
else:
im = ax.imshow(data, cmap='viridis', vmin=0, vmax=0.8)
cbar = fig.colorbar(im, orientation='vertical', pad=0.05)
ax.set_yticklabels([])
ax.set_xticklabels([])
plt.show()
</code></pre>
<p>And I get the error:</p>
<blockquote>
<p>MemoryError: Unable to allocate 7.30 GiB for an array with shape
(62451, 31361) and data type float32</p>
</blockquote>
<p>I really need to plot this data. Is there some library, or another way, to overcome this issue?</p>
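<p>In case it helps: one workaround is to decimate the array before handing it to <code>imshow</code>, since no screen can show 62451×31361 individual pixels anyway. A sketch with a smaller made-up array (the step size is an assumption to tune):</p>

```python
import numpy as np

# stand-in for one boolean raster (much smaller than the real 62451x31361)
data = np.random.rand(6245, 3136) > 0.5

step = 10                       # keep every 10th pixel in each direction
preview = data[::step, ::step]  # ~1/100th of the memory for plotting
```

<p>The <code>preview</code> array can then be passed to <code>plt.imshow</code> in place of the full-resolution data.</p>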
|
<python><memory><gis><raster><rasterio>
|
2023-04-10 19:27:25
| 0
| 527
|
JCV
|
75,979,891
| 979,242
|
Trying to remove a a role assigned to a GCP user
|
<p>Use Case: I am trying to delete all the roles assigned to a principal inside a GCP project.</p>
<p>As I understand it, you can't perform that operation directly.
I am referring to this: <a href="https://cloud.google.com/iam/docs/samples/iam-modify-policy-remove-member" rel="nofollow noreferrer">https://cloud.google.com/iam/docs/samples/iam-modify-policy-remove-member</a></p>
<p>To perform this operation, I would need a list of currently assigned roles for a GCP principal. I couldn't find this operation anywhere in Python. Has anyone seen this or know how to perform this operation?</p>
|
<python><google-cloud-platform><gcp-iam>
|
2023-04-10 19:00:41
| 1
| 505
|
PiaklA
|
75,979,882
| 20,947,319
|
How to fetch the id of an item in Django when using JSON to fetch from the database
|
<p>I have a Django template in which I loop through several items on the homepage. When an item is clicked, a modal (which I have included by importing it) should be shown, displaying data related to the clicked item. I am using JsonResponse to prepopulate the modal. Once the modal is shown, I want to create a checkout session, which requires the id of the item the modal refers to. I am stuck on how to get that id.
Here is the script for showing the modal and prepopulating it:</p>
<pre><code>let modal = document.getElementById("modal");
modal.style.display = "none";
function modalHandler(val, content_id) {
if (val) {
let xhr = new XMLHttpRequest();
xhr.onreadystatechange = function () {
if (this.readyState == 4 && this.status == 200) {
let data = JSON.parse(this.responseText);
document.getElementById("subtotal").innerHTML = data.subtotal;
document.getElementById("taxes").innerHTML = data.taxes;
document.getElementById("website_fee").innerHTML = data.website_fee;
document.getElementById("total").innerHTML = data.total;
fadeIn(modal);
}
};
xhr.open("GET", "/modal-content/" + content_id + "/", true);
xhr.send();
} else {
fadeOut(modal);
}
}
</code></pre>
<p>Here is the views.py which returns the JSON data:</p>
<pre><code>def modal_content_view(request, content_id):
my_model = get_object_or_404(MyModels, content_id=content_id)
print(f"the model is {my_model.username}")
data = {
'subtotal': my_model.username,
'taxes': '2.60',
'website_fee': '0.87',
'total': '23.47',
'content_id':my_model.content_id
}
return JsonResponse(data)
</code></pre>
<p>Here is the script that is supposed to fetch the data when the checkout button is clicked:</p>
<pre><code><script type="text/javascript">
var checkoutButton = document.getElementById('checkout-button');
checkoutButton.addEventListener('click', function() {
fetch("{% url 'checkout' content_id %}", {
method: 'POST',
data: JSON.stringify({
amount: "{{ cost }}" * 100,
description: "{{ title }}",
gig_id: "{{ gig_id }}",
}),
})
.then(function(response) {
return response.json();
})
.then(function(session) {
return stripe.redirectToCheckout({ sessionId: session.id });
})
.then(function(result) {
if (result.error) {
alert(result.error.message);
}
})
.catch(function(error) {
console.error('Error:', error);
});
});
</script>
</code></pre>
<p>Here is the view that handles the content_id passed from the script above:</p>
<pre><code>@csrf_exempt
def create_checkout_session(request, content_id):
model = MyModels.objects.get(content_id=content_id)
subscription = Subscription(model=model, is_subscribed=False, user=request.user)
</code></pre>
<p>And here is the onclick function that triggers the modal to show:</p>
<pre><code>onclick="modalHandler(true, '{{content.model.content_id}}')"
</code></pre>
<p>My question is: how do I pass the <code>content_id</code> from the modal to the script that handles checkout?</p>
|
<javascript><python><json><django>
|
2023-04-10 18:58:40
| 1
| 446
|
victor
|
75,979,824
| 10,181,236
|
telethon error after inserting phone number
|
<p>I'm using telethon to read messages from a specific telegram channel. I use this code to get the messages</p>
<pre><code>import configparser
from telethon import TelegramClient
SESSION_NAME = "test_session_1"
# Reading Configs
config = configparser.ConfigParser()
config.read("config.ini")
# Setting configuration values
api_id = int(config['Telegram']['api_id'])
api_hash = str(config['Telegram']['api_hash'])
phone = config['Telegram']['phone']
username = config['Telegram']['username']
messages_dict = {}
channel_list = ["channelname"]
client = TelegramClient(SESSION_NAME, api_id, api_hash)
async def main():
# get last messages of the channel
async for message in client.iter_messages(channel_list[0], limit=2):
print(message.text)
print()
# You can send messages to yourself...
#await client.send_message('me', 'Hello, myself!')
with client:
client.loop.run_until_complete(main())
</code></pre>
<p>The session starts and it asks me for my phone number. I insert it and I receive the code in Telegram, but then the program raises this exception:</p>
<pre><code>Please enter your phone (or bot token): +01XXXXXXXX
Unexpected exception in the receive loop
Traceback (most recent call last):
File "D:\Python 3\lib\site-packages\telethon\network\connection\connection.py", line 332, in _recv_loop
data = await self._recv()
File "D:\Python 3\lib\site-packages\telethon\network\connection\connection.py", line 369, in _recv
return await self._codec.read_packet(self._reader)
File "D:\Python 3\lib\site-packages\telethon\network\connection\tcpfull.py", line 25, in read_packet
packet_len_seq = await reader.readexactly(8) # 4 and 4
File "D:\Python 3\lib\asyncio\streams.py", line 679, in readexactly
await self._wait_for_data('readexactly')
File "D:\Python 3\lib\asyncio\streams.py", line 473, in _wait_for_data
await self._waiter
concurrent.futures._base.CancelledError
Unhandled exception from recv_task after cancelling <class '_asyncio.Task'> (<Task finished coro=<Connection._recv_loop() done, defined at D:\Python 3\lib\site-packages\telethon\network\connection\connection.py:325> exception=UnboundLocalError("local variable 'e' referenced before assignment")>)
Traceback (most recent call last):
File "D:\Python 3\lib\site-packages\telethon\network\connection\connection.py", line 332, in _recv_loop
data = await self._recv()
File "D:\Python 3\lib\site-packages\telethon\network\connection\connection.py", line 369, in _recv
return await self._codec.read_packet(self._reader)
File "D:\Python 3\lib\site-packages\telethon\network\connection\tcpfull.py", line 25, in read_packet
packet_len_seq = await reader.readexactly(8) # 4 and 4
File "D:\Python 3\lib\asyncio\streams.py", line 679, in readexactly
await self._wait_for_data('readexactly')
File "D:\Python 3\lib\asyncio\streams.py", line 473, in _wait_for_data
await self._waiter
concurrent.futures._base.CancelledError
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "D:\Python 3\lib\site-packages\telethon\helpers.py", line 176, in _cancel
await task
File "D:\Python 3\lib\site-packages\telethon\network\connection\connection.py", line 344, in _recv_loop
await self._recv_queue.put((None, e))
UnboundLocalError: local variable 'e' referenced before assignment
Unhandled error while receiving data
Traceback (most recent call last):
File "D:\Python 3\lib\site-packages\telethon\network\mtprotosender.py", line 505, in _recv_loop
body = await self._connection.recv()
File "D:\Python 3\lib\site-packages\telethon\network\connection\connection.py", line 299, in recv
result, err = await self._recv_queue.get()
File "D:\Python 3\lib\asyncio\queues.py", line 159, in get
await getter
concurrent.futures._base.CancelledError
Please enter the code you received: XXXXX
</code></pre>
<p>I can insert the code, but afterwards nothing happens and the program is stuck until I press Ctrl+C.
How can I solve this exception?</p>
<p>Telethon version 1.28.2</p>
|
<python><telegram><telethon>
|
2023-04-10 18:47:22
| 0
| 512
|
JayJona
|
75,979,811
| 129,899
|
Manipulating a list of strings using the Unreal Python API
|
<p>Using the Unreal Python API.
<a href="https://docs.unrealengine.com/5.0/en-US/PythonAPI/" rel="nofollow noreferrer">https://docs.unrealengine.com/5.0/en-US/PythonAPI/</a></p>
<p>In the following method I am creating a list called assetPaths.</p>
<pre><code> import unreal
def listAssetPaths() :
EAL = unreal.EditorAssetLibrary
assetPaths = EAL.list_assets('/Game')
for assetPath in assetPaths: print (assetPath)
</code></pre>
<p>The problem is that each entry in the assetPaths list contains a duplicate of the object name at the end of each path, i.e.</p>
<pre><code>/Game/ProjectName/Textures/Utility/Texture1_01a.Texture1_01a
</code></pre>
<p>It's this last <em><strong>.Texture1_01a</strong></em> that must be removed.</p>
<p>I've spent a couple of hours trying the split() method without success.
How can I repopulate the assetPaths list with the results I need?</p>
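For what it's worth, a minimal sketch of stripping that trailing object name, assuming it always follows the last dot in the path:

```python
asset_paths = ["/Game/ProjectName/Textures/Utility/Texture1_01a.Texture1_01a"]

# rsplit with maxsplit=1 splits only at the last '.', so any dots earlier
# in the path are left alone
clean_paths = [p.rsplit(".", 1)[0] for p in asset_paths]
print(clean_paths[0])  # /Game/ProjectName/Textures/Utility/Texture1_01a
```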
|
<python><string><unreal-engine5>
|
2023-04-10 18:44:38
| 1
| 7,279
|
Bachalo
|
75,979,713
| 3,541,631
|
Process elements in chunks using multiprocessing queues
|
<p>I have a multiprocessing queue; The end of the queue is signaled by using a SENTINEL value, a string.</p>
<pre><code>aq = Queue()
</code></pre>
<p>........................</p>
<p>The instance in the queue are of class A:</p>
<pre><code>class A:
id: str
desc: str
</code></pre>
<p>In a function I get elements from the queue <code>aq</code> and process them in chunks.
The first element (if there is just one) can be the SENTINEL, with nothing to process.
....</p>
<pre><code>def process():
    chunk_data = []
    all = []
    item = aq.get()
    if not isinstance(item, A):
        return
    chunk_data.append(item.id)
    while item != SENTINEL:
        # start processing in chunks:
        # add elements to the chunk list until it is full
        while len(chunk_data) < CHUNK_MAX_SIZE:  # 50
            item = aq.get()
            if item == SENTINEL:
                break
            chunk_data.append(item.id)
        # the chunk list is full, start processing
        chunk_process_ids = process_data(chunk_data)  # process chunks
        all.extend(chunk_process_ids)
        # empty the chunk list and start again
        chunk_data.clear()
</code></pre>
<p>The function works as expected, but I consider the code convoluted. I'm looking for a simpler, clearer version.</p>
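One way the loop might be flattened is with the two-argument form of `iter()`, which turns the queue into an iterator that stops at the sentinel, plus `itertools.islice` for the chunking. This is only a sketch: `queue.Queue` stands in for the multiprocessing queue (same `get()` interface), `process_data` is a trivial placeholder, and the `isinstance` check and `.id` access from the original are omitted for brevity:

```python
import itertools
import queue  # stand-in; multiprocessing.Queue exposes the same get()

SENTINEL = "SENTINEL"
CHUNK_MAX_SIZE = 50

def process_data(chunk):
    # placeholder for the real per-chunk processing
    return chunk

def process(aq):
    # iter(callable, sentinel) calls aq.get() until it returns SENTINEL
    stream = iter(aq.get, SENTINEL)
    all_ids = []
    while True:
        chunk = list(itertools.islice(stream, CHUNK_MAX_SIZE))
        if not chunk:
            break
        all_ids.extend(process_data(chunk))
    return all_ids
```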
|
<python><python-3.x><queue><python-3.8>
|
2023-04-10 18:32:01
| 2
| 4,028
|
user3541631
|
75,979,676
| 674,039
|
Why does this code work on Python 3.6 but not on Python 3.7?
|
<p>In <code>script.py</code>:</p>
<pre><code>def f(n, memo={0:0, 1:1}):
if n not in memo:
memo[n] = sum(f(n - i) for i in [1, 2])
return memo[n]
print(f(400))
</code></pre>
<p><code>python3.6 script.py</code> correctly prints <code>f(400)</code>, but with <code>python3.7 script.py</code> it stack overflows. The recursion limit is reached at <code>f(501)</code> in 3.6 and at <code>f(334)</code> in 3.7.</p>
<p>What changed between Python 3.6 and 3.7 that caused the maximum recursion depth to be exceeded earlier by this code?</p>
|
<python><recursion><stack-overflow>
|
2023-04-10 18:26:17
| 1
| 367,866
|
wim
|
75,979,563
| 5,561,058
|
Pexpect throwing incorrect EOF error for expect method
|
<p>Here is my function:</p>
<pre><code>def ExecuteList(self, myChild, intf):
for cmd, rsp, timeout in zip(self.myCommandList, self.myResponseList, self.myTimeout): # zip allows you to iterate through multiple list in parallel.
try:
myChild.sendline(cmd)
time.sleep(2)
myChild.expect(rsp, float(timeout))
time.sleep(2)
except pexpect.TIMEOUT:
return -1
intf.captureBuffer1(myChild.before)
return 0
</code></pre>
<p>I get an EOF exception thrown at <code>myChild.expect(rsp, float(timeout))</code></p>
<p>I am running the command <code>python test.py sample.txt</code> and looking for a certain match in the incoming response. The problem is that the <code>myChild.expect</code> command only reads up to 4K bytes of the file; anything past the 4K mark, the EOF exception is raised when it shouldn't be. I am unsure why it only reads the file up to 4K bytes and no more.</p>
<p>My timeout is 15 seconds.</p>
|
<python><pexpect>
|
2023-04-10 18:07:56
| 0
| 471
|
Yash Jain
|
75,979,562
| 11,564,487
|
Axial inconsistency of pandas.diff
|
<p>Consider the dataframe:</p>
<pre><code>df = pd.DataFrame({'col': [True, False]})
</code></pre>
<p>The following code works:</p>
<pre><code>df['col'].diff()
</code></pre>
<p>The result is:</p>
<pre><code>0 NaN
1 True
Name: col, dtype: object
</code></pre>
<p>However, the code:</p>
<pre><code>df.T.diff(axis=1)
</code></pre>
<p>gives the error:</p>
<pre><code>numpy boolean subtract, the `-` operator, is not supported, use the bitwise_xor, the `^` operator, or the logical_xor function instead.
</code></pre>
<p>Is that a bug?</p>
|
<python><pandas>
|
2023-04-10 18:07:46
| 2
| 27,045
|
PaulS
|
75,979,480
| 18,758,062
|
Python Relative Import Error appears only on server
|
<p>I'm trying to start a Flask server on both my local Ubuntu machine and a remote Ubuntu server, using the same commands in the terminal for both.</p>
<pre><code>virtual venv
source venv/bin/activate
pip install flask flask_cors gunicorn #... and more
FLASK_DEBUG=1 FLASK_APP=server flask run --host=0.0.0.0
</code></pre>
<p>There's no problem running the <code>flask run</code> command on my local dev machine.</p>
<p>However on the remote server, <code>flask run</code> is throwing an error</p>
<pre><code>...
File "/root/foo/src/server.py", line 31, in create_app
from .bar import bar_blueprint
ImportError: attempted relative import with no known parent package
</code></pre>
<p>Why does this error happen only on the remote server?</p>
<p><strong>Local machine:</strong></p>
<ul>
<li>Ubuntu 22.04.2</li>
<li>Python 3.10.6
<pre><code>(venv) gv@dev:~/foo/src$ python
Python 3.10.6 (main, Mar 10 2023, 10:55:28) [GCC 11.3.0] on linux
Type "help", "copyright", "credits" or "license" for more
information.
</code></pre>
</li>
</ul>
<p><strong>Remote server:</strong></p>
<ul>
<li>Ubuntu 22.10</li>
<li>Python 3.10.7
<pre><code>(venv) root@server:~/foo/src# python
Python 3.10.7 (main, Mar 10 2023, 10:47:39) [GCC 12.2.0] on linux
Type "help", "copyright", "credits" or "license" for more
information.
</code></pre>
</li>
</ul>
|
<python><flask><python-import>
|
2023-04-10 17:55:18
| 0
| 1,623
|
gameveloster
|
75,979,420
| 1,492,337
|
using llama_index with mac m1
|
<p><strong>Question #1:</strong></p>
<p>Is there a way of using Mac with M1 CPU and <code>llama_index</code> together?</p>
<p>I cannot get past the assertion error below:</p>
<pre><code>AssertionError Traceback (most recent call last)
<ipython-input-1-f2d62b66882b> in <module>
6 from transformers import pipeline
7
----> 8 class customLLM(LLM):
9 model_name = "google/flan-t5-large"
10 pipeline = pipeline("text2text-generation", model=model_name, device=0, model_kwargs={"torch_dtype":torch.bfloat16})
<ipython-input-1-f2d62b66882b> in customLLM()
8 class customLLM(LLM):
9 model_name = "google/flan-t5-large"
---> 10 pipeline = pipeline("text2text-generation", model=model_name, device=0, model_kwargs={"torch_dtype":torch.bfloat16})
11
12 def _call(self, prompt, stop=None):
~/Library/Python/3.9/lib/python/site-packages/transformers/pipelines/__init__.py in pipeline(task, model, config, tokenizer, feature_extractor, framework, revision, use_fast, use_auth_token, device, device_map, torch_dtype, trust_remote_code, model_kwargs, pipeline_class, **kwargs)
868 kwargs["device"] = device
869
--> 870 return pipeline_class(model=model, framework=framework, task=task, **kwargs)
~/Library/Python/3.9/lib/python/site-packages/transformers/pipelines/text2text_generation.py in __init__(self, *args, **kwargs)
63
64 def __init__(self, *args, **kwargs):
---> 65 super().__init__(*args, **kwargs)
66
67 self.check_model_type(
~/Library/Python/3.9/lib/python/site-packages/transformers/pipelines/base.py in __init__(self, model, tokenizer, feature_extractor, modelcard, framework, task, args_parser, device, binary_output, **kwargs)
776 # Special handling
777 if self.framework == "pt" and self.device.type != "cpu":
--> 778 self.model = self.model.to(self.device)
779
780 # Update config with task specific parameters
~/Library/Python/3.9/lib/python/site-packages/transformers/modeling_utils.py in to(self, *args, **kwargs)
1680 )
1681 else:
-> 1682 return super().to(*args, **kwargs)
1683
1684 def half(self, *args):
~/Library/Python/3.9/lib/python/site-packages/torch/nn/modules/module.py in to(self, *args, **kwargs)
1143 return t.to(device, dtype if t.is_floating_point() or t.is_complex() else None, non_blocking)
1144
-> 1145 return self._apply(convert)
1146
1147 def register_full_backward_pre_hook(
~/Library/Python/3.9/lib/python/site-packages/torch/nn/modules/module.py in _apply(self, fn)
795 def _apply(self, fn):
796 for module in self.children():
--> 797 module._apply(fn)
798
799 def compute_should_use_set_data(tensor, tensor_applied):
~/Library/Python/3.9/lib/python/site-packages/torch/nn/modules/module.py in _apply(self, fn)
818 # `with torch.no_grad():`
819 with torch.no_grad():
--> 820 param_applied = fn(param)
821 should_use_set_data = compute_should_use_set_data(param, param_applied)
822 if should_use_set_data:
~/Library/Python/3.9/lib/python/site-packages/torch/nn/modules/module.py in convert(t)
1141 return t.to(device, dtype if t.is_floating_point() or t.is_complex() else None,
1142 non_blocking, memory_format=convert_to_format)
-> 1143 return t.to(device, dtype if t.is_floating_point() or t.is_complex() else None, non_blocking)
1144
1145 return self._apply(convert)
~/Library/Python/3.9/lib/python/site-packages/torch/cuda/__init__.py in _lazy_init()
237 "multiprocessing, you must use the 'spawn' start method")
238 if not hasattr(torch._C, '_cuda_getDeviceCount'):
--> 239 raise AssertionError("Torch not compiled with CUDA enabled")
240 if _cudart is None:
241 raise AssertionError(
AssertionError: Torch not compiled with CUDA enabled
</code></pre>
<p>Obviously I have no Nvidia card, but I've read that PyTorch now supports Apple Silicon (M1) as well.</p>
<p>I'm trying to run the below example:</p>
<pre><code>from llama_index import SimpleDirectoryReader, LangchainEmbedding, GPTListIndex,GPTSimpleVectorIndex, PromptHelper
from langchain.embeddings.huggingface import HuggingFaceEmbeddings
from llama_index import LLMPredictor, ServiceContext
import torch
from langchain.llms.base import LLM
from transformers import pipeline
class customLLM(LLM):
model_name = "google/flan-t5-large"
pipeline = pipeline("text2text-generation", model=model_name, device=0, model_kwargs={"torch_dtype":torch.bfloat16})
def _call(self, prompt, stop=None):
return self.pipeline(prompt, max_length=9999)[0]["generated_text"]
def _identifying_params(self):
return {"name_of_model": self.model_name}
def _llm_type(self):
return "custom"
llm_predictor = LLMPredictor(llm=customLLM())
</code></pre>
<p><strong>Question #2:</strong></p>
<p>Assuming the answer to the above is no: I don't mind using Google Colab with a GPU, but once the index is built, will it be possible to download it and use it on my Mac?</p>
<p>i.e. something like:</p>
<p>on Google Colab:</p>
<pre><code>service_context = ServiceContext.from_defaults(llm_predictor=llm_predictor, embed_model=embed_model)
index = GPTSimpleVectorIndex.from_documents(documents, service_context=service_context)
index.save_to_disk('index.json')
</code></pre>
<p>... and later on my Mac use <code>load_from_file</code></p>
|
<python><machine-learning><pytorch><huggingface-transformers><langchain>
|
2023-04-10 17:46:58
| 2
| 433
|
Ben
|
75,979,419
| 559,426
|
How can you specify tab order in python flet?
|
<p>I have a working flet input page, but the tab order seems to have no logic to it. I would like to specify exactly what I want, but don't know how. Can anyone help?</p>
|
<python><user-interface><tabs><flet>
|
2023-04-10 17:46:57
| 1
| 1,010
|
Highland Mark
|
75,979,362
| 4,876,058
|
Remove white spaces line breaks from the extracted text Python scraping
|
<p>I am facing an issue regarding extracting text from the website page. I am using the <code>XPath</code> selector and <code>Scrapy</code> for this.</p>
<p>The page contains the markup like this:</p>
<pre><code><div class="snippet-content">
<h2>First Child</h2>
<p>Hello</p>
This is large text ..........
</div>
</code></pre>
<p>I basically need the text after the 2 immediate children. The selector which I am using is this:</p>
<pre><code>text = response.xpath('//div[contains(@class, "snippet-content")]/text()[last()]').get()
</code></pre>
<p>The text is extracted correctly, but it contains <code>whitespace</code>, <code>NBSP</code> (non-breaking space), and line-break <code>\r\n</code> characters.</p>
<p><strong>For example:</strong></p>
<p>Extracting text is like this:</p>
<pre><code>" \r\nRemarks byNBPS Deputy Prime Minister andNBPS Coordinating Minister for Economic Policies Heng Swee Keat at the Opening of the Bilingualism Carnival on 8 April 2023. "
</code></pre>
<p>Is there a way to get sanitized, clean text without all the surrounding <code>whitespace</code>, <code>line break</code>, and <code>NBSP</code> characters?</p>
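Assuming the stray characters are ordinary whitespace plus the non-breaking space (`\xa0`, which is what the NBSP renders as), a sketch of a post-extraction cleanup step (the `raw` string is an abridged stand-in for the extracted text):

```python
raw = " \r\nRemarks by\xa0Deputy Prime Minister and\xa0Coordinating Minister "

# str.split() with no argument splits on any Unicode whitespace run,
# including \r\n and \xa0, so joining with single spaces normalizes it all
clean = " ".join(raw.split())
print(clean)  # Remarks by Deputy Prime Minister and Coordinating Minister
```

Alternatively, XPath's own `normalize-space()` function can do a similar collapse directly inside the selector expression.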
|
<python><xpath><scrapy>
|
2023-04-10 17:38:49
| 1
| 1,019
|
Ven Nilson
|
75,979,268
| 4,913,254
|
How to check if a value in one column is in other column when the queried column have many values?
|
<p><strong>Question</strong></p>
<p>How to check if a value in one column is in other column when the queried column have many values?</p>
<p><strong>The minimal reproducible example</strong></p>
<pre><code>df1 = pd.DataFrame({'patient': ['patient1', 'patient1', 'patient1','patient2', 'patient2', 'patient3','patient3','patient4'],
'gene':['TYR','TYR','TYR','TYR','TYR','TYR','TYR','TYR'],
'variant': ['buu', 'luu', 'stm','lol', 'bla', 'buu', 'lol','buu'],
'genotype': ['buu,luu,hola', 'gulu,melon', 'melon,stm','melon,buu,lol', 'bla', 'het', 'het','het']})
print(df1)
patient gene variant genotype
0 patient1 TYR buu buu,luu,hola
1 patient1 TYR luu gulu,melon
2 patient1 TYR stm melon,stm
3 patient2 TYR lol melon,buu,lol
4 patient2 TYR bla bla
5 patient3 TYR buu het
6 patient3 TYR lol het
7 patient4 TYR buu het
</code></pre>
<p><strong>What I have tried</strong></p>
<pre><code>df1.variant.isin(df1.genotype)
0 False
1 False
2 False
3 False
4 True
5 False
6 False
7 False
Name: variant, dtype: bool
</code></pre>
<p>This does not work. The expected result would be:</p>
<pre><code>0 True
1 False
2 True
3 True
4 True
5 False
6 False
7 False
Name: variant, dtype: bool
</code></pre>
<p>I don't know how many different values the column genotype has. This vary a lot from 1 to 20</p>
|
<python><pandas>
|
2023-04-10 17:29:23
| 3
| 1,393
|
Manolo Dominguez Becerra
|
75,979,236
| 6,077,239
|
What is the difference between polars.collect_all and polars.LazyFrame.collect
|
<p>Starting with the example below:</p>
<pre><code>import time
import numpy as np
import polars as pl
n_index = 1000
n_a = 10
n_b = 500
n_obs = 5000000
df = pl.DataFrame(
{
"id": np.random.randint(0, n_index, size=n_obs),
"a": np.random.randint(0, n_a, size=n_obs),
"b": np.random.randint(0, n_b, size=n_obs),
"x": np.random.normal(0, 1, n_obs),
}
).lazy()
dfs = [
pl.DataFrame(
{
"id": np.random.randint(0, n_index, size=n_obs),
"a": np.random.randint(0, n_a, size=n_obs),
f"b_{i}": np.random.randint(0, n_b, size=n_obs),
"x": np.random.normal(0, 1, n_obs),
}
).lazy()
for i in range(50)
]
res = [
df.join(
dfs[i], left_on=["id", "a", "b"], right_on=["id", "a", f"b_{i}"], how="inner"
)
.group_by("a", "b")
.agg((pl.col("x") * pl.col("x_right")).sum().alias(f"x_{i}"))
for i in range(50)
]
</code></pre>
<p>The task is really to process different dataframes, do some computations on them, and then join all the results back together. The code above constructs <code>res</code>, which contains all the results as a <code>list</code>.</p>
<p>As for joining back together the results, I tried two options as follows.</p>
<p>Option 1:</p>
<pre><code>start = time.perf_counter()
res2 = pl.collect_all(res)
res3 = res2[0]
for i in range(1, 50):
res3 = res3.join(res2[i], on=["a", "b"])
time.perf_counter() - start
</code></pre>
<p>Option 2:</p>
<pre><code>start = time.perf_counter()
res4 = res[0]
for i in range(1, 50):
res4 = res4.join(res[i], on=["a", "b"])
res4 = res4.collect()
time.perf_counter() - start
</code></pre>
<p>Option 1 does <code>collect_all</code> first and then joins all individual dataframes.
Option 2 just does all things in an entirely lazy way and performs <code>collect</code> at the very end.</p>
<p>As far as I know, <code>collect</code> will do optimizations under the hood and I should expect option 1 and option 2 have similar performance. However, my benchmarking results show that <strong>option 2 takes twice as long as option 1 (21s vs. 10s on my system with 32 cores)</strong>.</p>
<p>So, <strong>is this behavior kind of as expected? Or are there some inefficiencies about the approach I took?</strong></p>
<p>One good thing about option 2 is that it is entirely lazy, which is preferable when we want an API that is entirely lazy and returns a lazy dataframe, letting users determine what to do next. But, from my experiment, performance is sacrificed by a lot. So, <strong>I am wondering whether there is a way to do something like option 2 without sacrificing performance (i.e. with performance comparable to option 1)?</strong></p>
|
<python><python-polars>
|
2023-04-10 17:23:49
| 1
| 1,153
|
lebesgue
|
75,979,218
| 929,732
|
How do you unpack a python object with a list of tuples as the return value?
|
<p>So I'm getting a response back from an application.</p>
<p>When I write the below to a file I get</p>
<pre><code>f.write(str(request.packet))
</code></pre>
<pre><code>AuthPacket([(1, [username']), (2,
[b'\11?']), (5,
[b'23']), (30, [b'8.8.8.8']), (31,
[b'4.4.4.4']), (61, [b'4.4.4.4']), (66,
[b'4.4.4.4']), ((9, 1), [b'iPhone',
b'mdm-tlv=device-phone-id=unknown',
b'mdm-tlv=device-platform=apple-ios',
b'mdm-tlv=device-platform-version=16.2',
b'mdm-tlv=device-uid=xx',
b'mdm-tlv=device-uid-global=xx',
b'mdm-tlv=ac-user-agent=xxxx (iPhone)
5.0.00246', b'audit-session-id=ac11036711153000643443dd', b'ip:source-ip=xx', b'coa-push=true']), (4,
[b'kfkdkfld']), ((3076, 146), [b'xx']), ((3076, 150),
[b'xxxx'])])
</code></pre>
<pre><code> f.write('\n\n')
for x,x1 in request.packet:
f.write(str(x))
f.write(str(x1))
f.write('\n\n')
</code></pre>
<p>I'm getting an error of</p>
<pre><code>for x,x1 in request.packet:
builtins.TypeError: cannot unpack non-iterable int object
</code></pre>
<p>If I just iterate through <code>x</code>, I'll get the first value of the tuple returned and nothing else, so I'm a bit confused about how to deal with this and get the second value of each tuple in the list in the object.</p>
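The TypeError suggests that iterating the packet yields only the keys (the attribute codes, which are ints or tuples), exactly like iterating a plain dict. A sketch with a dict stand-in shaped like the repr above (the real packet object is assumed to behave like a dict subclass; if so, `.items()` should work the same way on it):

```python
# Stand-in shaped like the repr above: attribute codes mapped to value lists
packet = {1: ["username"], 5: [b"23"], (9, 1): [b"iPhone", b"coa-push=true"]}

# `for x, x1 in packet:` fails because bare dict iteration yields keys only,
# and an int key can't be unpacked into two names. Use .items() instead:
for code, values in packet.items():
    print(code, values)
```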
|
<python>
|
2023-04-10 17:21:30
| 0
| 1,489
|
BostonAreaHuman
|
75,978,962
| 14,220,087
|
how to set class attribute only if it is created during init
|
<pre class="lang-py prettyprint-override"><code>class Foo:
def __init__(self):
self.A = 100
class Boo(Foo):
def __init__(self):
super(Boo, self).__init__()
self.B = None
x = Boo()
print(x.A, x.B)
# 100 None
</code></pre>
<p>As shown above, I've created an instance of <code>Boo</code> with attributes <code>A</code> and <code>B</code>. Now I'd like to allow assigning values only to attributes that were created during <code>__init__</code>, i.e. setting <code>x.A=0</code> will work, but setting a value on a new attribute like <code>x.C=False</code> should do nothing.</p>
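One common way to get this behavior is to override `__setattr__` and silently drop assignments to unknown attributes once `__init__` has finished. A sketch (the `_frozen` flag name is just illustrative):

```python
class Foo:
    def __init__(self):
        self.A = 100

class Boo(Foo):
    _frozen = False  # class-level default, so __setattr__ works during __init__

    def __init__(self):
        super().__init__()
        self.B = None
        self._frozen = True  # from here on, no new attributes

    def __setattr__(self, name, value):
        # silently ignore attributes that were not created during __init__
        if self._frozen and not hasattr(self, name):
            return
        super().__setattr__(name, value)

x = Boo()
x.A = 0        # existing attribute: updated
x.C = False    # new attribute: silently ignored
print(x.A, hasattr(x, "C"))  # 0 False
```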
|
<python><python-3.x>
|
2023-04-10 16:47:08
| 2
| 829
|
Sam-gege
|
75,978,880
| 1,641,112
|
spacy python package no longer runs
|
<p>Running Python 3.11.3 on macOS (Intel).</p>
<p>I had spacy working fine. I then decided to try adding gpu support with: <code>pip install -U 'spacy[cuda113]'</code> but started getting errors.</p>
<p>I uninstalled with <code>pip uninstall 'spacy[cuda113]'</code> and then reinstalled spacy with just <code>pip install spacy</code>.</p>
<p>However, I'm still getting the same errors when running a simple script with just</p>
<p><code>import spacy</code> in it:</p>
<pre><code>Traceback (most recent call last):
File "/Users/steve/workshop/python/blah.py", line 4, in <module>
import spacy
File "/usr/local/lib/python3.11/site-packages/spacy/__init__.py", line 14, in <module>
from . import pipeline # noqa: F401
^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/spacy/pipeline/__init__.py", line 1, in <module>
from .attributeruler import AttributeRuler
File "/usr/local/lib/python3.11/site-packages/spacy/pipeline/attributeruler.py", line 6, in <module>
from .pipe import Pipe
File "spacy/pipeline/pipe.pyx", line 1, in init spacy.pipeline.pipe
File "spacy/vocab.pyx", line 1, in init spacy.vocab
File "/usr/local/lib/python3.11/site-packages/spacy/tokens/__init__.py", line 1, in <module>
from .doc import Doc
File "spacy/tokens/doc.pyx", line 36, in init spacy.tokens.doc
File "/usr/local/lib/python3.11/site-packages/spacy/schemas.py", line 158, in <module>
class TokenPatternString(BaseModel):
File "/usr/local/lib/python3.11/site-packages/pydantic/main.py", line 369, in __new__
cls.__signature__ = ClassAttribute('__signature__', generate_model_signature(cls.__init__, fields, config))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/pydantic/utils.py", line 231, in generate_model_signature
merged_params[param_name] = Parameter(
^^^^^^^^^^
File "/usr/local/Cellar/python@3.11/3.11.3/Frameworks/Python.framework/Versions/3.11/lib/python3.11/inspect.py", line 2722, in __init__
raise ValueError('{!r} is not a valid parameter name'.format(name))
ValueError: 'in' is not a valid parameter name
</code></pre>
|
<python><spacy>
|
2023-04-10 16:33:27
| 2
| 7,553
|
StevieD
|
75,978,865
| 10,755,032
|
How to insert a list into a cell in pandas dataframe
|
<p>I have referred to <a href="https://stackoverflow.com/questions/26483254/python-pandas-insert-list-into-a-cell">this question</a>
but it did not help. <strong>Please don't mark this as a duplicate.</strong> I am trying to count the <code>syllables per word</code> for each text file in my <code>output</code> directory. I want to insert a list of syllables for each file.</p>
<p>My approach:</p>
<pre><code>directory = r"..\output"
result = []
i = 0
for filename in os.listdir(directory):
if filename.endswith('.txt'):
filepath = os.path.join(directory, filename)
with open(filepath, 'rb') as f:
encoding = chardet.detect(f.read())['encoding']
with open(filepath, 'r', encoding=encoding) as f:
text = f.read()
words = text.split()
for word in words:
result.append(count_syllables(word))
results.at[i,'SYLLABLE PER WORD'] = result
i += 1
</code></pre>
<p>I am getting the following error:</p>
<pre><code>---------------------------------------------------------------------------
KeyError Traceback (most recent call last)
~\AppData\Roaming\Python\Python39\site-packages\pandas\core\indexes\base.py in get_loc(self, key, method, tolerance)
3801 try:
-> 3802 return self._engine.get_loc(casted_key)
3803 except KeyError as err:
~\AppData\Roaming\Python\Python39\site-packages\pandas\_libs\index.pyx in pandas._libs.index.IndexEngine.get_loc()
~\AppData\Roaming\Python\Python39\site-packages\pandas\_libs\index.pyx in pandas._libs.index.IndexEngine.get_loc()
pandas\_libs\hashtable_class_helper.pxi in pandas._libs.hashtable.PyObjectHashTable.get_item()
pandas\_libs\hashtable_class_helper.pxi in pandas._libs.hashtable.PyObjectHashTable.get_item()
KeyError: 'SYLLABLE PER WORD'
The above exception was the direct cause of the following exception:
KeyError Traceback (most recent call last)
~\AppData\Roaming\Python\Python39\site-packages\pandas\core\frame.py in _set_value(self, index, col, value, takeable)
4209 else:
-> 4210 icol = self.columns.get_loc(col)
4211 iindex = self.index.get_loc(index)
~\AppData\Roaming\Python\Python39\site-packages\pandas\core\indexes\base.py in get_loc(self, key, method, tolerance)
3803 except KeyError as err:
-> 3804 raise KeyError(key) from err
3805 except TypeError:
KeyError: 'SYLLABLE PER WORD'
During handling of the above exception, another exception occurred:
ValueError Traceback (most recent call last)
~\AppData\Local\Temp\ipykernel_19000\1445037766.py in <module>
12 for word in words:
13 result.append(count_syllables(word))
---> 14 results.at[i,'SYLLABLE PER WORD'] = result
15 i += 1
~\AppData\Roaming\Python\Python39\site-packages\pandas\core\indexing.py in __setitem__(self, key, value)
2440 return
2441
-> 2442 return super().__setitem__(key, value)
2443
2444
~\AppData\Roaming\Python\Python39\site-packages\pandas\core\indexing.py in __setitem__(self, key, value)
2395 raise ValueError("Not enough indexers for scalar access (setting)!")
2396
-> 2397 self.obj._set_value(*key, value=value, takeable=self._takeable)
2398
2399
~\AppData\Roaming\Python\Python39\site-packages\pandas\core\frame.py in _set_value(self, index, col, value, takeable)
4222 self.iloc[index, col] = value
4223 else:
-> 4224 self.loc[index, col] = value
4225 self._item_cache.pop(col, None)
4226
~\AppData\Roaming\Python\Python39\site-packages\pandas\core\indexing.py in __setitem__(self, key, value)
816
817 iloc = self if self.name == "iloc" else self.obj.iloc
--> 818 iloc._setitem_with_indexer(indexer, value, self.name)
819
820 def _validate_key(self, key, axis: int):
~\AppData\Roaming\Python\Python39\site-packages\pandas\core\indexing.py in _setitem_with_indexer(self, indexer, value, name)
1748 indexer, self.obj.axes
1749 )
-> 1750 self._setitem_with_indexer(new_indexer, value, name)
1751
1752 return
~\AppData\Roaming\Python\Python39\site-packages\pandas\core\indexing.py in _setitem_with_indexer(self, indexer, value, name)
1793 if take_split_path:
1794 # We have to operate column-wise
-> 1795 self._setitem_with_indexer_split_path(indexer, value, name)
1796 else:
1797 self._setitem_single_block(indexer, value, name)
~\AppData\Roaming\Python\Python39\site-packages\pandas\core\indexing.py in _setitem_with_indexer_split_path(self, indexer, value, name)
1848 return self._setitem_with_indexer((pi, info_axis[0]), value[0])
1849
-> 1850 raise ValueError(
1851 "Must have equal len keys and value "
1852 "when setting with an iterable"
ValueError: Must have equal len keys and value when setting with an iterable
</code></pre>
<p>I want to iteratively insert the list of <code>syllables per word</code> into a data frame, as you can see from my approach.</p>
<p>Here is the link to the code and the dataset I am using: <a href="https://github.com/karthikbhandary2/Data-Extraction-and-NLP" rel="nofollow noreferrer">https://github.com/karthikbhandary2/Data-Extraction-and-NLP</a></p>
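The "Must have equal len keys and value when setting with an iterable" error comes up because `.at` on a not-yet-existing column falls back to label-based setting, which tries to align the list element-wise. One workaround is to collect one list per row and assign the whole column at once with object dtype — a minimal sketch with hypothetical stand-in data, not the question's actual dataset:

```python
import pandas as pd

# Hypothetical stand-ins for the question's data frame and syllable counts
results = pd.DataFrame({"URL_ID": [1, 2, 3]})
syllable_lists = [[2, 1, 3], [1, 1], [4]]  # one list per row

# Assigning the whole column at once (object dtype) sidesteps the
# element-wise alignment that .at performs on a missing column
results["SYLLABLE PER WORD"] = pd.Series(
    syllable_lists, index=results.index, dtype=object
)
```

After this, each cell of `SYLLABLE PER WORD` holds one Python list.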
|
<python><pandas>
|
2023-04-10 16:31:43
| 2
| 1,753
|
Karthik Bhandary
|
75,978,754
| 13,294,769
|
Error importing Parquet to Redshift: optional int
|
<p>I'm creating a table in Redshift as follows:</p>
<pre class="lang-py prettyprint-override"><code>from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy import Column
from sqlalchemy_redshift.dialect import INTEGER, VARCHAR
Base = declarative_base()
class SomeTable(Base):
__tablename__ = "some_table"
id = Column(INTEGER, primary_key=True)
some_column = Column(VARCHAR(128))
# ...
# code that connects to Redshift
# ...
base.metadata.create_all(engine)
</code></pre>
<p>The table in Redshift looks like this:</p>
<pre class="lang-sql prettyprint-override"><code>CREATE TABLE public.some_table (
id integer NOT NULL ENCODE az64,
some_column character varying(128) ENCODE lzo,
)
DISTSTYLE AUTO
SORTKEY ( id );
</code></pre>
<p>I have a <code>pandas.DataFrame</code> with the following schema:</p>
<pre><code>id int64
some_column object
dtype: object
</code></pre>
<p>I create a <code>.parquet</code> file and upload it to S3:</p>
<pre class="lang-py prettyprint-override"><code>from tempfile import NamedTemporaryFile
from pyarrow import Table, int64, schema, string
from pyarrow.parquet import write_table
with NamedTemporaryFile() as file:
parquet_table = Table.from_pandas(
df,
schema=schema(
[
("id", int64()),
("some_column", string()),
]
),
)
write_table(parquet_table, file)
# ...
# code to upload file to S3
# ...
</code></pre>
<p>Using <code>parquet-tools inspect</code>, I get the following schema from the uploaded file:</p>
<pre><code>############ file meta data ############
created_by: parquet-cpp-arrow version 10.0.0
num_columns: 2
num_rows: 21
num_row_groups: 1
format_version: 2.6
serialized_size: 3802
############ Columns ############
id
some_column
############ Column(id) ############
name: id
path: id
max_definition_level: 1
max_repetition_level: 0
physical_type: INT64
logical_type: None
converted_type (legacy): NONE
compression: SNAPPY (space_saved: -4%)
############ Column(some_column) ############
name: some_column
path: some_column
max_definition_level: 1
max_repetition_level: 0
physical_type: BYTE_ARRAY
logical_type: String
converted_type (legacy): UTF8
compression: SNAPPY (space_saved: -5%)
</code></pre>
<p>Finally, I try to upload the file from S3 to Redshift using SQL:</p>
<pre class="lang-py prettyprint-override"><code># ...
# code that gets a SQLAlchemy Session
# ...
session.execute(
f"COPY {SomeTable.__tablename__} "
+ f"FROM '{s3_uri}' "
+ f"IAM_ROLE '{arn_role}' "
+ "FORMAT AS PARQUET;"
)
</code></pre>
<p>I get the following error:</p>
<pre><code>psycopg2.errors.InternalError_: Spectrum Scan Error
DETAIL:
-----------------------------------------------
error: Spectrum Scan Error
code: 15007
context: File 'https://s3.eu-central-1.amazonaws.com/bucket/1681141668/some_table.parquet' has an incompatible Parquet schema for column 's3://bucket/1681141668/some_table.parquet.id'. Column type: INT, Parquet schema:
optional int
query: 2627
location: dory_util.cpp:1450
process: worker_thread [pid=19137]
-----------------------------------------------
</code></pre>
<p>How can I make the Redshift table compatible with the <code>.parquet</code> file (or vice-versa)?</p>
|
<python><python-3.x><pandas><amazon-redshift><parquet>
|
2023-04-10 16:15:04
| 0
| 1,063
|
doublethink13
|
75,978,731
| 248,959
|
Python paho MQTT loop_forever(): how to redirect output to a file while the script is running?
|
<p>I'm running a script to subscribe to topics of an MQTT broker and fetch the data associated with them. I run the script like this:</p>
<p><code>$ python3 test_mqtt_client.py</code></p>
<pre><code>import paho.mqtt.client as paho
import ssl
import random
import time
from config import BROKER_ADDRESS, PORT, CLIENT_CERT, CLIENT_KEY, CA_KEY, SUBSCRIBED_TOPICS, DEST_FOLDER

# "on_message" callback
def on_message(client, userdata, message):
    print("received message =", str(message.payload.decode("utf-8")))
    filename = str(random.randint(0, 4294967295))
    file = open(DEST_FOLDER + filename + '.json', 'w+')
    file.write(str(message.payload.decode("utf-8")))
    file.close()

client = paho.Client()
client.on_message = on_message

print("connecting to broker")
client.tls_set(
    CA_KEY,
    certfile=CLIENT_CERT,
    keyfile=CLIENT_KEY,
    cert_reqs=ssl.CERT_REQUIRED,
    tls_version=ssl.PROTOCOL_TLSv1_2,
    ciphers=None
)
client.tls_insecure_set(True)
client.connect(BROKER_ADDRESS, PORT, 60)

for x in SUBSCRIBED_TOPICS:
    client.subscribe(x)
    print('Subscribed to topic "' + x + '".')

client.loop_forever()
time.sleep(1)
</code></pre>
<p>The problem: if I try to output to a file like this:</p>
<p><code>$ python3 test_mqtt_client.py >> /tmp/test_mqtt_client.log</code></p>
<p>I don't get any content in the file <em>until</em> I interrupt the script using Ctrl+C.</p>
<p>How can I get the output of the script inside <code>/tmp/test_mqtt_client.log</code> while the script is running? I mean, before interrupting it.</p>
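When stdout is redirected to a file, Python switches from line buffering to block buffering, so output only reaches the file when the buffer fills or the process exits. Flushing each print (`print(..., flush=True)`), or running the script with `python3 -u` / `PYTHONUNBUFFERED=1`, makes the lines appear immediately. A small self-contained sketch of the buffering behaviour:

```python
import os
import tempfile

path = os.path.join(tempfile.mkdtemp(), "client.log")
f = open(path, "w")  # block-buffered, like stdout redirected to a file

print("received message = hello", file=f)
# Nothing on disk yet -- the line is still sitting in the buffer
assert open(path).read() == ""

print("received message = world", file=f, flush=True)
# flush=True pushes the whole buffer to the file
assert open(path).read() == "received message = hello\nreceived message = world\n"
f.close()
```

Running the unmodified script with `python3 -u test_mqtt_client.py >> /tmp/test_mqtt_client.log` achieves the same effect without code changes.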
|
<python><python-3.x><mqtt><paho><python-paho>
|
2023-04-10 16:12:18
| 1
| 31,891
|
tirenweb
|
75,978,533
| 871,495
|
Tensorflow2 Warning: "INVALID_ARGUMENT: You must feed a value for placeholder ..."
|
<p>I get the following warning when running the minimal code below (which trains a very simple TensorFlow 2 model). The message itself says I can ignore it, but I still have the feeling that something might be wrong, and I don't like ignoring messages like this. The message persists even if I set verbose=0.</p>
<p>I tensorflow/core/common_runtime/executor.cc:1197] [/device:CPU:0] (DEBUG INFO) Executor start aborting (this does not indicate an error and you can ignore this message): INVALID_ARGUMENT: You must feed a value for placeholder tensor 'Placeholder/_1' with dtype double and shape [5000,25,1]
[[{{node Placeholder/_1}}]]</p>
<p>I am using tensorflow 2.12.0.</p>
<p><strong>Code:</strong></p>
<pre><code>import numpy as np
from keras import Model
from keras.layers import Input, Flatten, Dense, Reshape
import tensorflow as tf
x = np.random.rand(5000, 10, 7)
y = np.random.rand(5000, 25, 1)
#############################################################
# create the dataset
#############################################################
ds = tf.data.Dataset.from_tensor_slices((x, y))
ds = ds.batch(32, drop_remainder=True)
#############################################################
# construct the model
#############################################################
inputs = []
x = Input(shape=(10, 7))
inputs.append(x)
x = Flatten()(x)
x = Dense(25)(x)
x = Reshape((25, 1))(x)
model = Model(inputs=inputs, outputs=x)
model.compile(loss="mse")
model.summary()
#############################################################
# fit the model
#############################################################
model.fit(ds, batch_size=10, verbose=1, epochs=10)
</code></pre>
<p><strong>Output</strong></p>
<pre><code>2023-04-10 17:32:46.805357: I tensorflow/core/platform/cpu_feature_guard.cc:182] This TensorFlow binary is optimized to use available CPU instructions in performance-critical operations.
To enable the following instructions: AVX2 FMA, in other operations, rebuild TensorFlow with the appropriate compiler flags.
2023-04-10 17:32:47.312460: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Could not find TensorRT
2023-04-10 17:32:50.162190: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1635] Created device /job:localhost/replica:0/task:0/device:GPU:0 with 2578 MB memory: -> device: 0, name: Quadro T1000, pci bus id: 0000:01:00.0, compute capability: 7.5
Model: "model"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
input_1 (InputLayer) [(None, 10, 7)] 0
flatten (Flatten) (None, 70) 0
dense (Dense) (None, 25) 1775
reshape (Reshape) (None, 25, 1) 0
=================================================================
Total params: 1,775
Trainable params: 1,775
Non-trainable params: 0
_________________________________________________________________
Epoch 1/10
2023-04-10 17:32:50.244149: I tensorflow/core/common_runtime/executor.cc:1197] [/device:CPU:0] (DEBUG INFO) Executor start aborting (this does not indicate an error and you can ignore this message): INVALID_ARGUMENT: You must feed a value for placeholder tensor 'Placeholder/_1' with dtype double and shape [5000,25,1]
[[{{node Placeholder/_1}}]]
2023-04-10 17:32:50.804972: I tensorflow/compiler/xla/service/service.cc:169] XLA service 0x7f6e808f79c0 initialized for platform CUDA (this does not guarantee that XLA will be used). Devices:
2023-04-10 17:32:50.804996: I tensorflow/compiler/xla/service/service.cc:177] StreamExecutor device (0): Quadro T1000, Compute Capability 7.5
2023-04-10 17:32:50.807856: I tensorflow/compiler/mlir/tensorflow/utils/dump_mlir_util.cc:269] disabling MLIR crash reproducer, set env var `MLIR_CRASH_REPRODUCER_DIRECTORY` to enable.
2023-04-10 17:32:50.909253: I tensorflow/compiler/xla/stream_executor/cuda/cuda_dnn.cc:424] Loaded cuDNN version 8600
2023-04-10 17:32:50.946872: I tensorflow/tsl/platform/default/subprocess.cc:304] Start cannot spawn child process: No such file or directory
2023-04-10 17:32:50.981030: I ./tensorflow/compiler/jit/device_compiler.h:180] Compiled cluster using XLA! This line is logged at most once for the lifetime of the process.
156/156 [==============================] - 1s 986us/step - loss: 0.2249
Epoch 2/10
156/156 [==============================] - 0s 958us/step - loss: 0.1457
Epoch 3/10
156/156 [==============================] - 0s 981us/step - loss: 0.1185
Epoch 4/10
156/156 [==============================] - 0s 929us/step - loss: 0.1026
Epoch 5/10
156/156 [==============================] - 0s 1ms/step - loss: 0.0940
Epoch 6/10
156/156 [==============================] - 0s 929us/step - loss: 0.0895
Epoch 7/10
156/156 [==============================] - 0s 960us/step - loss: 0.0872
Epoch 8/10
156/156 [==============================] - 0s 958us/step - loss: 0.0860
Epoch 9/10
156/156 [==============================] - 0s 969us/step - loss: 0.0854
Epoch 10/10
156/156 [==============================] - 0s 998us/step - loss: 0.0851
Process finished with exit code 0
</code></pre>
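The message mentions `dtype double`: `np.random.rand` returns float64 arrays, which `from_tensor_slices` embeds into the graph as float64 placeholders. The log line is informational, but one common way to keep float64 tensors out of the graph entirely (a suggestion, not confirmed as the cause of this particular message) is to cast the inputs to float32 before building the dataset:

```python
import numpy as np

# np.random.rand returns float64 ("double"); Keras models usually work in
# float32, so cast before handing the arrays to tf.data
x = np.random.rand(5000, 10, 7).astype(np.float32)
y = np.random.rand(5000, 25, 1).astype(np.float32)
```

The rest of the script (`tf.data.Dataset.from_tensor_slices((x, y))` onward) stays unchanged.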
|
<python><tensorflow><tensorflow2.0>
|
2023-04-10 15:43:12
| 2
| 415
|
Gustav-Gans
|
75,978,476
| 21,363,224
|
Replace every symbol of the word after delimiter using python re
|
<p>I would like to replace every symbol of a word after <code>-</code> with <code>*</code>.</p>
<p>For example:</p>
<pre><code>asd-wqe ffvrf => asd-*** ffvrf
</code></pre>
<p>In TS regex it could be done with <code>(?<=-\w*)\w</code> and the replacement <code>*</code>. But the default Python regex engine requires fixed-width lookbehinds.</p>
<p>The best I can imagine is to use</p>
<pre><code>(?:(?<=-)|(?<=-\w)|(?<=-\w{2}))\w
</code></pre>
<p>and repeat the lookbehind some predetermined large number of times, but that seems neither sustainable nor elegant.</p>
<p>Is it possible to use default <code>re</code> module for such a task with some more elegant pattern?</p>
<p>Demo for testing <a href="https://regex101.com/r/JJjuUw/1" rel="nofollow noreferrer">here</a>.</p>
<p>P.S. I'm aware that alternative regex engines, that support lookbehind of variable length exist, but would like to stick with default one for a moment if possible.</p>
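Since `re` rejects variable-length lookbehinds, one idiomatic alternative is a replacement callable: match the hyphen plus the following word once, and rebuild the match with one `*` per character — no lookbehind needed. A sketch:

```python
import re

def mask_after_hyphen(text):
    # Match "-" followed by word characters, and rebuild the match with
    # one "*" per masked character instead of using a lookbehind.
    return re.sub(r"-(\w+)", lambda m: "-" + "*" * len(m.group(1)), text)

assert mask_after_hyphen("asd-wqe ffvrf") == "asd-*** ffvrf"
```

This keeps everything in the standard `re` module and scales to words of any length.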
|
<python><regex><regex-lookarounds><positive-lookbehind>
|
2023-04-10 15:35:24
| 2
| 13,906
|
markalex
|
75,978,464
| 3,398,324
|
How to interpret predictions from a specific PyTorch Model
|
<p>I have obtained the prediction values from this <a href="https://github.com/allegro/allRank/issues/55" rel="nofollow noreferrer">PyTorch model</a> (<a href="https://github.com/allegro/allRank" rel="nofollow noreferrer">https://github.com/allegro/allRank</a>), or at least I think so, by running:</p>
<p><code>slates_X, slates_y = __rank_slates(val_dl, model)</code></p>
<p>But the output shape is not clear to me. The number of rows in <code>slates_y</code> corresponds to the number of <code>qids</code> in my dataset, but I would imagine it should match the number of rows instead, since I want the predicted rank of each row.</p>
<p>Does anyone know or understand, how I can get the predictions for each observation?</p>
<p>I have tried this with <code>rank_slates</code> instead (which is just a wrapper around <code>__rank_slates</code>) and got the same result.</p>
|
<python><pytorch><prediction>
|
2023-04-10 15:34:10
| 0
| 1,051
|
Tartaglia
|
75,978,448
| 19,299,757
|
AWS lambda image not showing the latest code
|
<p>I have a Docker image for an AWS Lambda function which just prints a "Welcome" message. I've set this up (via a CFN script) so the Lambda code runs whenever a file is uploaded to an S3 bucket.
The Lambda image is created successfully.
When I uploaded a test file to S3 via the CLI (using <code>aws s3 cp</code>), I could see the CloudWatch logs for the Lambda printing that "Welcome" line. So far so good.
I then added a few more lines to the Python file and pushed the image to ECR, but when I copy a file to the same S3 bucket, I am not seeing the newly added print statements from the Lambda handler in the CloudWatch logs.</p>
<p>This is the command in my Dockerfile</p>
<pre><code>CMD [ "app.lambda_handler" ]
</code></pre>
<p>This is my app.py</p>
<pre><code>import os
import boto3

def lambda_handler(event, context) -> None:
    python_path = os.environ.get('PYTHONPATH')
    secretsmanager_client = boto3.client('secretsmanager')
    secret_name = "dev-demo-lambda-secret"
    print('\nWelcome!!!!!')
</code></pre>
<p>This is my app.py after I added an additional print statement (which I am NOT seeing in the CloudWatch logs output):</p>
<pre><code>import os
import boto3

def lambda_handler(event, context) -> None:
    python_path = os.environ.get('PYTHONPATH')
    secretsmanager_client = boto3.client('secretsmanager')
    secret_name = "dev-demo-lambda-secret"
    print('\nWelcome!!!!!')
    print('\nevent = ', event)
</code></pre>
<p>This is my buildspec.yaml to create the lambda docker image.</p>
<pre><code>- aws ecr get-login-password --region $AWS_DEFAULT_REGION | docker login --username AWS --password-stdin 3xxxxxxxxxxx0.dkr.ecr.us-east-1.amazonaws.com
- docker image build -t 359772415770.dkr.ecr.us-east-1.amazonaws.com/demolambda:$MY_DOCKER_IMAGE_VERSION .
- docker push 359772415770.dkr.ecr.us-east-1.amazonaws.com/demo-lambda:$MY_DOCKER_IMAGE_VERSION
</code></pre>
<p>Is there anything I am missing here?</p>
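One likely cause (an assumption here, not confirmed by the question): Lambda resolves the image tag to a digest at deploy time, so pushing a new image under the same tag does not redeploy the function. The function has to be pointed at the new image explicitly, e.g. with `update-function-code`. A sketch of the call's shape (not executed against AWS; the function name and account ID are hypothetical):

```python
# The real call would be:
#     import boto3
#     boto3.client("lambda").update_function_code(**update_args)
# Sketch only: building the arguments, not calling AWS.
update_args = {
    "FunctionName": "demo-lambda-function",  # hypothetical function name
    "ImageUri": "123456789012.dkr.ecr.us-east-1.amazonaws.com/demo-lambda:v2",
}
```

Issuing this after each `docker push` (or pushing under a fresh tag and updating `ImageUri`) makes the function pick up the new code.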
|
<python><amazon-web-services><docker><aws-lambda>
|
2023-04-10 15:32:33
| 0
| 433
|
Ram
|
75,978,420
| 19,130,803
|
how to edit /etc/hosts/ file on nginx docker image
|
<p>I am using docker-compose for my Python web app, with nginx as one of the services. Currently I am running on localhost and it works fine. Now, instead of localhost, I want to use a domain name (<code>server_name www.mywebapp.com</code>), and I also need to edit the <code>/etc/hosts</code> file to add <code>127.0.0.1 www.mywebapp.com</code>, although the file already has the default entry <code>127.0.0.1 localhost</code>.</p>
<ul>
<li>File: default.conf.template</li>
</ul>
<pre><code>server {
    listen 8080;
    server_name ${DOMAIN_NAME};
    client_max_body_size 8G;

    location / {
        proxy_redirect off;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_http_version 1.1;
        proxy_pass http://web:5000;
    }
}
</code></pre>
<ul>
<li>File: nginx Dockerfile</li>
</ul>
<pre><code>ARG NGINX_IMG
FROM $NGINX_IMG
ARG DOMAIN_NAME
ENV DOMAIN_NAME=$DOMAIN_NAME
USER root
# RUN sed -i 's/127.0.0.1 localhost/127.0.0.1 localhost $DOMAIN_NAME/' /etc/hosts
RUN echo "127.0.0.1 $DOMAIN_NAME" >> /etc/hosts
COPY deploy/docker/proxy/default.conf.template /etc/nginx/conf.d/default.conf.template
USER nginx
CMD ["/bin/sh" , "-c" , "envsubst '$DOMAIN_NAME' < /etc/nginx/conf.d/default.conf.template > /etc/nginx/conf.d/default.conf && nginx -g 'daemon off;'"]
</code></pre>
<p>Output:</p>
<ul>
<li>Running the <code>sed</code> command fails with a "resource busy" error</li>
<li>Running the <code>echo</code> command succeeds, but checking with <code>cat /etc/hosts</code> from the nginx container terminal shows no modification for the domain name</li>
<li>Checking <code>default.conf</code>, the <code>server_name</code> does get updated by <code>envsubst</code> to <code>server_name www.mywebapp.com</code></li>
</ul>
<p>Since I cannot modify the <code>/etc/hosts</code> file, I am unable to access the app through the domain name.</p>
<p>How to edit <code>/etc/hosts</code> file on nginx docker image?</p>
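Docker manages a container's `/etc/hosts` at runtime (it bind-mounts the file when the container starts), so entries written during the image build are discarded; that would explain why the `RUN echo` succeeds yet the running container shows no change, and why `sed` hits a "resource busy" error. The usual approach is to add the entry at run time, e.g. with `extra_hosts` in docker-compose — a sketch assuming the compose service is named `nginx`:

```yaml
# docker-compose.yml fragment (hypothetical service name)
services:
  nginx:
    extra_hosts:
      - "www.mywebapp.com:127.0.0.1"
```

Note that for a browser on your machine to resolve the name, the entry belongs in the *host's* `/etc/hosts`, not the container's.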
|
<python><docker><nginx>
|
2023-04-10 15:29:00
| 0
| 962
|
winter
|
75,978,344
| 8,398,116
|
SQLALCHEMY not utilising the entire connection pool
|
<p>I am new to Flask and SQLAlchemy and am trying to write an API that can support a throughput of 400 requests/sec.</p>
<p>I am using sqlalchemy to connect to a clickhouse database.</p>
<p>my sqlalchemy settings are:</p>
<pre><code> SQLALCHEMY_BINDS_OPTIONS = {
'db': {
'pool_size': 150,
'echo_pool': True,
'max_overflow': 15,
'pool_pre_ping': True,
}
}
</code></pre>
<p>I am printing the number of connections established and recycled by SQLAlchemy with the following in logger.py:</p>
<pre><code>class ConnectionPoolHandler(logging.Handler):
    def __init__(self):
        super().__init__()
        self.open_connections = 0
        self.recycled_connections = 0

    def emit(self, record):
        if record.getMessage().startswith('Created new connection'):
            self.open_connections += 1
            print(
                f'{record.getMessage()} (Open connections: {self.open_connections}, Recycled connections: {self.recycled_connections})'
            )
        elif record.getMessage().startswith('Recycling connection'):
            self.recycled_connections += 1

# Set up SQLAlchemy connection pool logging
sqlalchemy_connection_pool_logger = logging.getLogger('sqlalchemy.pool')
sqlalchemy_connection_pool_logger.setLevel(logging.DEBUG)
handler = ConnectionPoolHandler()  # Create an instance of ConnectionPoolHandler
sqlalchemy_connection_pool_logger.addHandler(handler)
sqlalchemy_connection_pool_logger.propagate = False  # disable propagation of messages to parent logger
</code></pre>
<p>I am using gunicorn as a proxy to handle requests to the Flask API, and a <code>ThreadPoolExecutor</code> to parallelise query executions.</p>
<pre><code>def get_threaded_mock_response():
    try:
        with app.app_context():
            with ThreadPoolExecutor(max_workers=4) as executor:
                get_response_1_future = executor.submit(g1)
                get_response_2_future = executor.submit(g2)
                get_response_3_future = executor.submit(g3)
                get_response_4_future = executor.submit(g4)
                response_1 = get_response_1_future.result()
                response_2 = get_response_2_future.result()
                response_3 = get_response_3_future.result()
                response_4 = get_response_4_future.result()
        response = []
        response.append(response_1)
        response.append(response_2)
        response.append(response_3)
        response.append(response_4)
        response_data = jsonify(response)
        return response_data
    except Exception as e:
        log_app(e, logging.ERROR)
        return {"message": internal_error_message}, 500

def g1():
    time.sleep(0.2)
    return {"g1": 1}

def g2():
    time.sleep(0.2)
    return {"g2": 2}

def g3():
    time.sleep(0.2)
    return {"g3": 3}

def g4():
    time.sleep(0.2)
    return {"g4": 4}
</code></pre>
<p>I have functions that run queries in parallel as well, just as above, assuming each query takes 250ms to execute.</p>
<p>I am expecting that one connection can handle 4 queries per second, i.e. one request per connection per second, which means that, given my max allowed pool size of 150, SQLAlchemy should support 150 requests/second.</p>
<p>but my observations are:</p>
<ol>
<li>1 worker and 1 thread - 4 connections are open by sqlalchemy always and no connections are recycled.</li>
<li>16 workers and 64 threads - 15 connections are open and non recycled.</li>
<li>the ch db metrics show max 40 connection being open at peaks.</li>
</ol>
<p>I have independently tested both flask API and CH:</p>
<ol>
<li>by hitting the above mock endpoint and getting 400req/sec throughput.</li>
<li>my CH db is configured to have 500 parallel requests.</li>
<li>when I do a test of 200 requests, the cpu utilised by CH is not more than 25%.</li>
</ol>
<p>What can I do here to improve throughput, considering that my Flask app is not using even 25% of its allocated resources, and neither is the CH DB?</p>
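One thing worth noting about the observations (an interpretation, not from the question): a connection pool opens connections lazily and reuses checked-in ones, so a low "Created new connection" count does not mean the pool is undersized — it usually means idle connections were available when each query started. A toy sketch of why reuse keeps the created count far below `pool_size`:

```python
import queue

# Toy pool: a new "connection" is created only when none is idle.
created = 0
pool = queue.Queue()

def acquire():
    global created
    try:
        return pool.get_nowait()  # reuse an idle connection if available
    except queue.Empty:
        created += 1              # otherwise open a new one
        return f"conn-{created}"

def release(conn):
    pool.put(conn)

# Fast sequential queries are all served by one reused connection
for _ in range(100):
    release(acquire())
assert created == 1
```

Under real concurrency, the pool only grows to the number of *simultaneously* checked-out connections, so your 15-40 observed connections suggest the bottleneck is elsewhere (e.g. worker/thread scheduling), not the pool size.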
|
<python><flask><sqlalchemy><gunicorn><clickhouse>
|
2023-04-10 15:18:00
| 0
| 990
|
Abhilash Gopalakrishna
|
75,978,167
| 4,539,956
|
SQLmodel: "UniqueViolation, duplicate key value violates unique constraint" on update
|
<p>I have a FastAPI endpoint for updating records in the database, but it doesn't seem to work as intended, and I get the following error when I call the endpoint to update a record:</p>
<pre><code> File "/usr/src/app/app/api/controllers/records.py", line 137, in update_record_endpoint
updated_record = update_record(session, id, record)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/src/app/app/api/controllers/records.py", line 61, in update_record
session.commit()
File "/usr/local/lib/python3.11/site-packages/sqlalchemy/orm/session.py", line 1431, in commit
self._transaction.commit(_to_root=self.future)
... rest of the error ...
File "/usr/local/lib/python3.11/site-packages/sqlalchemy/engine/base.py", line 1498, in _execute_clauseelement
ret = self._execute_context(
^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/sqlalchemy/engine/base.py", line 1862, in _execute_context
self._handle_dbapi_exception(
File "/usr/local/lib/python3.11/site-packages/sqlalchemy/engine/base.py", line 2043, in _handle_dbapi_exception
util.raise_(
File "/usr/local/lib/python3.11/site-packages/sqlalchemy/util/compat.py", line 207, in raise_
raise exception
File "/usr/local/lib/python3.11/site-packages/sqlalchemy/engine/base.py", line 1819, in _execute_context
self.dialect.do_execute(
File "/usr/local/lib/python3.11/site-packages/sqlalchemy/engine/default.py", line 732, in do_execute
cursor.execute(statement, parameters)
sqlalchemy.exc.IntegrityError: (psycopg2.errors.UniqueViolation) duplicate key value violates unique constraint "record_pkey"
DETAIL: Key (id)=(bc2a4eeb-a8f0-4a6b-a976-91fb630b281b) already exists.
</code></pre>
<p>Code:</p>
<pre class="lang-py prettyprint-override"><code>from pgvector.sqlalchemy import Vector
from sqlalchemy import text

## model
class Record(SQLModel, table=True):
    id: UUID = Field(default_factory=uuid4, primary_key=True)
    text: str = Field(default=None)
    start: int = Field(default=None)
    vector: List[float] = Field(default=None, sa_column=Vector(1536))
    parent_id: UUID = Field(default=None, foreign_key="parent.id")

## controller
def update_record(session: Session, id: UUID, record: RecordUpdate):
    query = text("SELECT * FROM record WHERE id = :id")
    result = session.execute(query, {"id": id}).fetchone()
    if not result:
        return None
    db_record = Record.from_orm(result)
    if not db_record:
        return None
    for key, value in record.dict().items():
        if hasattr(db_record, key) and value is not None:
            setattr(db_record, key, value)
    session.add(db_record)
    session.commit()
    session.refresh(db_record)
    return db_record
</code></pre>
<p>In another controller I'm using <code>session.get(Parent, id)</code> and the updating process works fine, but for this specific controller I'm using <code>session.execute(text("query"))</code> because of <a href="https://stackoverflow.com/questions/75977787/fastapi-and-pgvector-invalidrequesterror-unknown-pg-numeric-type">this issue</a> and update doesn't work (<code>violates unique constraint </code>). How can I fix this issue?</p>
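The likely cause (an interpretation of the traceback, not confirmed by the question): `Record.from_orm(result)` builds a brand-new *transient* object, so `session.add()` issues an INSERT with an already-existing primary key instead of an UPDATE. Fetching the session-tracked instance (e.g. via `session.get`) or using `session.merge` avoids the duplicate INSERT. A minimal self-contained sketch of the difference, using plain SQLAlchemy with in-memory SQLite and a stand-in model (not the question's `Record`):

```python
from sqlalchemy import Column, Integer, String, create_engine
from sqlalchemy.orm import Session, declarative_base

Base = declarative_base()

class Item(Base):  # stand-in for the question's Record model
    __tablename__ = "item"
    id = Column(Integer, primary_key=True)
    text = Column(String)

engine = create_engine("sqlite://")
Base.metadata.create_all(engine)

with Session(engine) as session:
    session.add(Item(id=1, text="old"))
    session.commit()

    # Building a *new* object with an existing id and add()-ing it would
    # attempt a duplicate INSERT.  session.get() returns the instance the
    # session already tracks, so mutating it produces an UPDATE on commit.
    db_item = session.get(Item, 1)
    db_item.text = "new"
    session.commit()
    assert session.get(Item, 1).text == "new"
```

Applied to the question's controller, that would mean replacing the raw `SELECT` plus `from_orm` with `session.get(Record, id)` (or passing the reconstructed object through `session.merge` instead of `session.add`).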
|
<python><postgresql><sqlalchemy><fastapi><sqlmodel>
|
2023-04-10 14:55:22
| 1
| 903
|
Saeed Esmaili
|
75,978,141
| 11,962,592
|
ansible install issue using python3 pip
|
<p>I'm having an issue locating the installed Ansible package.</p>
<p>Following along</p>
<p><a href="https://docs.ansible.com/ansible/latest/installation_guide/intro_installation.html" rel="nofollow noreferrer">https://docs.ansible.com/ansible/latest/installation_guide/intro_installation.html</a></p>
<p>I tried running the commands from the link above; below is what I got in the environment.</p>
<pre><code>$ python3 -m pip -V
pip 9.0.3 from /usr/lib/python3.6/site-packages (python 3.6)
</code></pre>
<p>The installation with <code>python3 -m pip install --user ansible</code> looks fine as well, and the output shows the package installed:</p>
<pre><code>$ python3 -m pip show ansible
Name: ansible
Version: 4.10.0
Summary: Radically simple IT automation
Home-page: https://ansible.com/
Author: Ansible, Inc.
Author-email: info@ansible.com
License: GPLv3+
Location: /home/ansible/.local/lib/python3.6/site-packages
Requires: ansible-core
</code></pre>
<p>But when I run the command below, the binary is not found in PATH:</p>
<pre><code>ansible --version
bash: /bin/ansible: No such file or directory
</code></pre>
<p>Do pip3 packages need to be referenced in a different way to access the Ansible package? I don't see anything under <code>/bin/ansible</code> or <code>/usr/bin/ansible</code>.</p>
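With `pip install --user`, console scripts land in the user base's `bin` directory (typically `~/.local/bin` on Linux), which is often not on `PATH` by default — adding it, e.g. `export PATH="$HOME/.local/bin:$PATH"`, normally makes `ansible` resolvable. The exact directory can be computed like this:

```python
import os
import site

# pip --user installs console scripts under <userbase>/bin on POSIX
# (<userbase>/Scripts on Windows)
user_scripts = os.path.join(site.getuserbase(), "bin")
print(user_scripts)  # e.g. /home/ansible/.local/bin
```

Checking `ls` on that directory should show the `ansible` entry points.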
|
<python><python-3.x><pip><ansible>
|
2023-04-10 14:52:19
| 2
| 643
|
vinWin
|
75,978,062
| 11,235,680
|
how to set the background color of a RichTextCtrl in wxPython
|
<p>I have a chatbox and I'm trying to colour every message with a different colour</p>
<pre><code>class ChatDisplayer(RichTextCtrl):
    def __init__(self, *args, **kw):
        super().__init__(*args, **kw)

    def addMessage(self, message, red, green, blue):
        font = Font(12, DEFAULT, NORMAL, NORMAL, False)
        text_attr = RichTextAttr()
        text_attr.SetTextColour(BLACK)
        text_attr.SetFont(font)
        bg_color = Colour(red=red, green=green, blue=blue)
        text_attr.SetBackgroundColour(bg_color)
        self.BeginStyle(text_attr)
        self.AppendText(message)
        self.EndStyle()
</code></pre>
<p>This does append the text, but it does not change the colour of the text or the background. I can find the <code>SetBackgroundColour</code> method in the source code but not in the docs.</p>
<p>The main class is complex but looks like this:</p>
<pre><code>class ChatMessageGrid(FlexGridSizer):
    def __init__(self, panel):
        super().__init__(6, 2, 9, 25)
        chat_message_apikey_label = StaticText(panel, label="API key")
        chat_message_apikey_input = TextCtrl(panel)
        chat_displayer_label = StaticText(panel, label="Chat", pos=(1, 0))
        chat_displayer_input = ChatDisplayer(panel, ID_ANY, style=TE_MULTILINE|TE_READONLY)
        chat_message_system_label = StaticText(panel, label="System")
        chat_message_system_input = TextCtrl(panel)
        chat_message_label = StaticText(panel, label="your message")
        chat_message_text = TextCtrl(panel, style=TE_MULTILINE)
        chat_message_button = Button(panel, label="send")
        model_choice_label = StaticText(panel, label="model")
        model_choice_box = ComboBox(panel, value="bbb", choices=model_choices)
        self.AddMany(
            [
                chat_message_apikey_label,
                (chat_message_apikey_input, 1, EXPAND),
                chat_message_system_label,
                (chat_message_system_input, 1, EXPAND),
                (chat_displayer_label, 1, EXPAND),
                (chat_displayer_input, 1, EXPAND),
                (chat_message_label, EXPAND),
                (chat_message_text, 1, EXPAND),
                (model_choice_label, EXPAND),
                (model_choice_box, EXPAND),
                (chat_message_button, EXPAND),
            ]
        )
        self.AddGrowableRow(2, 1)
        self.AddGrowableRow(3, 1)
        self.AddGrowableCol(1, 1)
        chat_message_button.Bind(
            EVT_BUTTON,
            lambda event: self.onClick(
                event,
                chat_message_system_input,
                chat_message_apikey_input,
                chat_displayer_input,
                chat_message_text,
                model_choice_box,
                chat_message_button
            ),
        )

    def completeRequest(self, messages, model, apiKey, chat_displayer_input, message, button):
        # call the service
        userMessage = "User:\n"
        aiMessage = "Assistant:\n"
        # update the UI
        CallAfter(chat_displayer_input.addMessage, userMessage, 176, 210, 167)
        #CallAfter(chat_displayer_input.addMessage, aiMessage, 176, 210, 167)
        # re-enable the button
        button.Enable()

    def onClick(
        self,
        event,
        chat_message_system_input: TextCtrl,
        chat_message_apikey_input: TextCtrl,
        chat_message_historic_input: ChatDisplayer,
        chat_message_text: TextCtrl,
        model_choice_box: ComboBox,
        button: Button
    ):
        if not len(chat_message_apikey_input.GetValue().strip()):
            MessageBox("API key is empty!", "Error", OK | ICON_ERROR)
        elif not len(chat_message_text.GetValue().strip()):
            MessageBox("type something please!", "Error", OK | ICON_ERROR)
        else:
            apiKey = chat_message_apikey_input.GetValue().strip()
            message = chat_message_text.GetValue().strip()
            button.Disable()
            # create a thread to run the API call
            t = threading.Thread(target=self.completeRequest, args=(messages,
                model_choice_box.GetValue(), apiKey, chat_message_historic_input,
                message, button))
            t.start()
</code></pre>
|
<python><wxpython>
|
2023-04-10 14:42:23
| 1
| 316
|
Bouji
|
75,977,887
| 12,624,118
|
How to create Python requirements.text file without sub-dependencies?
|
<p>I'm quite new to Python, coming from the JS world.</p>
<p>When installing a package in JavaScript, the sub-dependencies of that package are not part of the <code>package.json</code> <code>dependencies</code> section.
For example:</p>
<pre><code>npm init -y
npm install express
</code></pre>
<p>would produce</p>
<pre><code> "scripts": {
"test": "echo \"Error: no test specified\" && exit 1"
},
"keywords": [],
"author": "",
"license": "ISC",
"dependencies": {
"express": "^4.18.2"
}
</code></pre>
<p>even though express.js has 31 dependencies</p>
<p>Reading about python dependencies. I did the following steps:</p>
<p>created virtual env:</p>
<pre><code> python -m venv my-virutal-env
</code></pre>
<p>made sure no package is installed:</p>
<pre><code> pip freeze > to-uninstall.txt
pip uninstall -r to-uninstall.txt
pip freeze > requirements.txt
</code></pre>
<p>Installed llamaIndex</p>
<pre><code> pip install llama_index
</code></pre>
<p>producing requirements again using <code>pip freeze > requirements.txt</code></p>
<p>produces:</p>
<pre><code>...
llama-index==0.5.12
...
</code></pre>
<p>where <code>...</code> stands for a lot of other packages.</p>
<p>This is very undesirable. How can I overcome this?</p>
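The closest pip built-in is `pip list --not-required --format=freeze`, which lists only distributions that no other installed distribution depends on (roughly the "top-level" packages, like `package.json` dependencies). The same set can be computed by hand with `importlib.metadata` — a sketch in which the requirement-name parsing is deliberately simplified:

```python
import re
from importlib.metadata import distributions

installed = set()
required = set()
for dist in distributions():
    name = dist.metadata["Name"]
    if not name:
        continue
    installed.add(name.lower())
    for req in dist.requires or []:
        # strip version specifiers, extras and environment markers
        required.add(re.split(r"[ ;<>=!~\[]", req, maxsplit=1)[0].lower())

# distributions nothing else depends on -- a rough "--not-required" set
top_level = sorted(installed - required)
```

Writing `top_level` (with pinned versions) to requirements.txt gives a file closer to `package.json`'s dependencies section.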
|
<python><pip><requirements.txt>
|
2023-04-10 14:16:54
| 1
| 766
|
yoty66
|
75,977,591
| 1,231,940
|
Mark rows of one dataframe based on values from another dataframe
|
<p>I have following problem. Let's say I have two dataframes</p>
<pre class="lang-py prettyprint-override"><code>df1 = pl.DataFrame({'a': range(10)})
df2 = pl.DataFrame({'b': [[1, 3], [5,6], [8, 9]], 'tags': ['aa', 'bb', 'cc']})
print(df1)
print(df2)
</code></pre>
<pre><code>shape: (10, 1)
┌─────┐
│ a │
│ --- │
│ i64 │
╞═════╡
│ 0 │
│ 1 │
│ 2 │
│ 3 │
│ 4 │
│ 5 │
│ 6 │
│ 7 │
│ 8 │
│ 9 │
└─────┘
shape: (3, 2)
┌───────────┬──────┐
│ b ┆ tags │
│ --- ┆ --- │
│ list[i64] ┆ str │
╞═══════════╪══════╡
│ [1, 3] ┆ aa │
│ [5, 6] ┆ bb │
│ [8, 9] ┆ cc │
└───────────┴──────┘
</code></pre>
<p>I need to mark/tag rows in dataframe <code>df1</code> based on values from dataframe <code>df2</code>, so that I get the following dataframe:</p>
<pre class="lang-py prettyprint-override"><code>print(pl.DataFrame({'a': range(10), 'tag': ['NA', 'aa', 'aa', 'aa', 'NA', 'bb', 'bb', 'NA', 'cc', 'cc']}))
</code></pre>
<pre><code>shape: (10, 2)
┌─────┬─────┐
│ a ┆ tag │
│ --- ┆ --- │
│ i64 ┆ str │
╞═════╪═════╡
│ 0 ┆ NA │
│ 1 ┆ aa │
│ 2 ┆ aa │
│ 3 ┆ aa │
│ 4 ┆ NA │
│ 5 ┆ bb │
│ 6 ┆ bb │
│ 7 ┆ NA │
│ 8 ┆ cc │
│ 9 ┆ cc │
└─────┴─────┘
</code></pre>
<p>So the list in column <code>b</code> of <code>df2</code> gives the start and end values of the range in column <code>a</code> of <code>df1</code> that should be tagged with the value from column <code>tags</code>.</p>
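The mapping being asked for can be sketched in plain Python (no polars API assumed): each <code>[start, end]</code> pair tags every value of <code>a</code> in that inclusive range, and everything else gets <code>'NA'</code>. In polars itself the same idea can be expressed by exploding each pair into a full integer range and left-joining on <code>a</code>.

```python
# Toy data mirroring the question: df1's column `a` and df2's
# (interval, tag) pairs.
a_values = list(range(10))
intervals = [([1, 3], "aa"), ([5, 6], "bb"), ([8, 9], "cc")]

tags = []
for a in a_values:
    tag = "NA"
    for (start, end), label in intervals:
        if start <= a <= end:  # inclusive range check
            tag = label
            break
    tags.append(tag)

print(tags)
```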
<p>Thanks</p>
|
<python><dataframe><python-polars>
|
2023-04-10 13:34:18
| 1
| 437
|
Kaster
|
75,977,475
| 5,519,012
|
Modifying Cython instance read-only properties at runtime
|
<p>I am using the Python aio grpc implementation, which uses <code>cython</code>. One of grpc's features is the <code>interceptor</code>: a class that receives each request before the grpc <code>Server</code> instance does, and can modify the requests as it wants.</p>
<p>The proper way to use <code>interceptors</code> is to pass them to the <code>Server</code> constructor on <code>__init__</code>.</p>
<p>I need to do so at <strong>runtime</strong>, which means I need to modify the <code>server._interceptors</code> list after the instance has already been created.</p>
<p>This works perfectly with the pure-Python implementation, but not with Cython.</p>
<p>This is the <a href="https://github.com/grpc/grpc/blob/ffafac3ce8ea53cbceffb53063070be813edc31f/src/python/grpcio/grpc/_cython/_cygrpc/aio/server.pyx.pxi#L875" rel="nofollow noreferrer">cython AioServer implementation</a>.</p>
<p>When trying to access the <code>_interceptors</code> field, I get <code>'grpc._cython.cygrpc.AioServer' object has no attribute 'interceptors'</code>.
When trying to replace any function, I get <code>'grpc._cython.cygrpc.AioServer' object attribute '_request_call' is read-only</code>.</p>
<p>Is there any way to modify Cython-implemented instances at runtime, or is it just not possible? I want to modify the property, or monkey-patch some function that modifies it, but the important part is that this has to happen at <strong>runtime</strong>.</p>
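A pure-Python analogue (not grpc code) of why this fails: a cdef class stores its fields in a fixed C struct and, by default, has no per-instance <code>__dict__</code>, much like a class using <code>__slots__</code>, so adding or replacing attributes at runtime is rejected.

```python
class Fixed:
    # __slots__ fixes the attribute layout, roughly as a cdef class
    # fixes its C struct layout: no per-instance __dict__ exists.
    __slots__ = ("x",)

    def __init__(self):
        self.x = 1


f = Fixed()
try:
    f.y = 2  # rejected, like setting a new field on a cdef instance
except AttributeError as exc:
    print("rejected:", exc)
```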
<p>Thanks!</p>
|
<python><runtime><grpc><cython>
|
2023-04-10 13:15:44
| 1
| 365
|
Meir Tolpin
|
75,977,435
| 13,126,794
|
Python: json_normalize gives AttributeError for list of dict values
|
<p>I have a pandas dataframe where 2 columns are nested columns holding <code>Decimal</code> values; <code>df.tail(1).to_dict('list')</code> gives this kind of data:</p>
<pre><code> {'nested_col1': [array([{'key1': 'CO', 'key2': Decimal('8.940000000')}],
dtype=object)], 'nested_col2': [array([{'key3': 'CO', 'key4': 'P14', 'key5': Decimal('8.940000000'), 'key6': None}],
dtype=object)]}
</code></pre>
<p>I am trying to explode the dataframe with this:</p>
<pre><code>df = (df.drop(cols, axis=1)
.join(pd.concat(
[pd.json_normalize(df[x].explode(), errors='ignore').applymap(
lambda x: str(x) if isinstance(x, (int, float)) else x).add_prefix(f'{x}.') for x in
cols],
axis=1)))
</code></pre>
<p>With this I am getting the error below in some cases:</p>
<pre><code> Traceback (most recent call last):
File "data_load.py.py", line 365, in <module>
df = prepare_data(data, transaction_id, cohort_no)
File "data_load.py.py", line 274, in prepare_data
df = flatten_dataframe(cols_to_explode, df)
File "data_load.py.py", line 204, in flatten_dataframe
df1 = pd.concat([pd.json_normalize(df[c].explode()) for c in cols],
File "data_load.py.py", line 204, in <listcomp>
df1 = pd.concat([pd.json_normalize(df[c].explode()) for c in cols],
File "/project1/venv/lib/python3.6/site-packages/pandas/io/json/_normalize.py", line 270, in _json_normalize
if any([isinstance(x, dict) for x in y.values()] for y in data):
File "/project1/venv/lib/python3.6/site-packages/pandas/io/json/_normalize.py", line 270, in <genexpr>
if any([isinstance(x, dict) for x in y.values()] for y in data):
AttributeError: 'float' object has no attribute 'values'
failed to run commands: exit status 1
</code></pre>
<p>Is there anything I am still missing here, or a better way to do the same?</p>
<p>Expected Output should be:</p>
<pre><code>nested_col1.key1,nested_col1.key2 nested_col2.key3 ... like this
</code></pre>
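A hedged sketch of the likely cause (with made-up column names, not the asker's data): <code>explode()</code> emits a float <code>NaN</code> for empty or missing rows, and <code>json_normalize</code> then fails when it calls <code>.values()</code> on it. Dropping non-dict entries before normalizing avoids the error.

```python
import pandas as pd

# Toy frame: one row holds a list of dicts, one row is a float NaN,
# mirroring the "'float' object has no attribute 'values'" failure.
df = pd.DataFrame(
    {"nested": [[{"key1": "CO", "key2": "8.94"}], float("nan")]}
)

exploded = df["nested"].explode()  # the NaN row stays a plain float
only_dicts = exploded[exploded.apply(lambda v: isinstance(v, dict))]
flat = pd.json_normalize(only_dicts.tolist()).add_prefix("nested.")
print(flat)
```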
|
<python><json><pandas>
|
2023-04-10 13:10:38
| 2
| 961
|
Dcook
|