Dataset schema (column: type, observed range):
QuestionId: int64, 74.8M to 79.8M
UserId: int64, 56 to 29.4M
QuestionTitle: string, lengths 15 to 150
QuestionBody: string, lengths 40 to 40.3k
Tags: string, lengths 8 to 101
CreationDate: date, 2022-12-10 09:42:47 to 2025-11-01 19:08:18
AnswerCount: int64, 0 to 44
UserExpertiseLevel: int64, 301 to 888k
UserDisplayName: string, lengths 3 to 30
77,503,020
2,587,422
Python projects depending on one another in VSCode
<p>I have a workspace in VSCode with some Python projects which depend on one another. I'd like to test changes on some projects at the same time. Is it possible to configure VSCode (maybe with a plugin) in order to do so?</p>
<python><visual-studio-code>
2023-11-17 16:00:41
1
315
Luigi D.
77,502,964
17,194,313
How do I leverage naming conventions in a Python codebase to avoid repetitive code without `eval`/`exec`?
<p>I have a number of DataFrames in Python:</p> <pre class="lang-py prettyprint-override"><code>[df1, df2, fd3, df4, ...] </code></pre> <p>And a large number of functions that act on them</p> <pre class="lang-py prettyprint-override"><code>[clean_df1, clean_df2, clean_df2, ...] </code></pre> <pre class="lang-py prettyprint-override"><code>[format_df1, format_df2, format_df2, ...] </code></pre> <p>This results in somewhat repetitive code, and I'd love to be able to write:</p> <pre class="lang-py prettyprint-override"><code>final_data = {} for i in [1,2,3,4,5,...]: final_data[f'df{i}_final'] = eval(f'df{i}.pipe(clean_df{i}).pipe(format_df{i})') </code></pre> <p>But using <code>eval</code> in this way feels wrong and LSP unfriendly...</p> <p>Is there any functionality in Python to leverage these naming conventions to avoid repetitive code without falling into code-generation/<code>eval</code>/<code>exec</code> etc...</p> <h1>Edit</h1> <p>Currently what I have is a few dictionaries:</p> <pre class="lang-py prettyprint-override"><code>data_registry = {'df1': df2, 'df2': df2, ...} cleaning_registry = {'df1': clean_df2, 'df2': clean_df2, ...} format_registry = {'df1': format_df2, 'df2': format_df2, ...} </code></pre> <p>But keeping these up to date is error prone (forgetting to add one / typos). I suppose my main question is how to auto generate these dicts or something similar (vs. 
having to manually edit them often)</p> <h1>Edit 2:</h1> <p>@matszwecja asked for more detail on how these things are defined.</p> <p>To give more context to the question - these are features in a feature engineering pipeline.</p> <p>There is a function that creates a dataframe with the feature.</p> <p>A function that normalizes it (normalization is feature specific).</p> <p>And a function that takes the un-normalized feature and formats it into a human readable value (also feature specific).</p> <p>I use dictionaries and not lists because matching on index would be more error prone - say, if two lists didn't get the matching functions in the matching index.</p> <p>Roughly these are:</p> <pre class="lang-py prettyprint-override"><code>def normalize_feature_i() -&gt; pl.Expr: return pl.max_horizontal( pl.col('feature_i').list.get(0).pipe(_default_descending, -0.03), pl.min_horizontal( pl.col('feature_i').list.get(1).pipe(_default_descending, -0.03), pl.col('feature_i').list.get(2).pipe(_default_descending, -0.03) ) ).alias('feature_i') def fmt_feature_i(col: pl.Expr) -&gt; pl.Expr: def _fmt_feature_i(x): val = min(x) i = '' if x[0]==val else '*' if x[1]==val else '**' if x[2]==val else '' return f'{val:.0%}{i}' return col.map_elements(_fmt_feature_i) </code></pre> <p>In some parts of the codebase I want to apply all the formatting to all the features, in others all the normalizations, etc...</p> <p>I thought of having a <code>class Feature()</code> to keep this logic together - but it came across as very messy (the ML code should not care about the visualization code, etc...)</p>
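One way to keep such registries in sync automatically (a sketch, not from the question): scan a namespace for names that match the convention. The helper `build_registry` and the `clean_`/`format_` prefixes below follow the question's naming convention, but the function itself is hypothetical.

```python
import re

def build_registry(namespace: dict, prefix: str) -> dict:
    """Collect objects whose names match '<prefix><df-name>' into a dict
    keyed by the df name, e.g. clean_df1 -> registry['df1']."""
    pattern = re.compile(rf"^{re.escape(prefix)}(df\d+)$")
    return {m.group(1): obj
            for name, obj in namespace.items()
            if (m := pattern.match(name))}

# Usage (assuming clean_df1, clean_df2, ... exist at module level):
# cleaning_registry = build_registry(globals(), "clean_")
# format_registry = build_registry(globals(), "format_")
```

Because the dicts are derived from the functions that actually exist, there is nothing to forget to update; a typo in a function name shows up as a missing registry key rather than a silently wrong mapping.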
<python><dataframe><python-polars>
2023-11-17 15:51:47
4
3,075
MYK
77,502,783
1,094,836
Formatted xticklabels on pandas plot show as 1971
<p>I've imported data from an Excel file and I plot it with Matplotlib. If I use only default settings I get accurate results with correct tick labels, even if not formatted the way I want.</p> <pre><code>df.plot() </code></pre> <p><a href="https://i.sstatic.net/PsRCm.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/PsRCm.png" alt="default plot" /></a></p> <p>But trying to use ConciseDateFormatter yields bizarre results</p> <pre><code>ax=df.plot() locator = mdates.AutoDateLocator(minticks=5, maxticks=10) formatter = mdates.ConciseDateFormatter(locator) ax.xaxis.set_major_locator(locator) ax.xaxis.set_major_formatter(formatter) plt.show() </code></pre> <p><a href="https://i.sstatic.net/KQTfv.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/KQTfv.png" alt="odd output" /></a></p> <p>ConciseDateFormatter makes some odd decisions here, including tagging the whole date series as 'October 1971'. The data actually spans from Feb 2021 through Sep 2023. Note that this is not the '1970' problem that others have asked about. 1970 and 1971 are different years.</p> <p>The big question: why is it selecting and displaying this weird date, and how can I fix it?</p> <p>The smaller question: how do I get it to display only month names and, where appropriate, years, replacing those random numbers on the x-axis in some locations?</p>
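A hedged guess at the mechanism: `df.plot()` draws with pandas' own period-based x-axis units, so matplotlib's date locators misread the tick positions. Plotting through matplotlib directly (or passing `x_compat=True` to `df.plot`) puts real matplotlib date numbers on the axis, which `ConciseDateFormatter` understands. A minimal sketch with synthetic data standing in for the Excel import:

```python
import matplotlib
matplotlib.use("Agg")  # non-interactive backend, for the sketch only
import matplotlib.dates as mdates
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd

# Synthetic stand-in for the question's Feb 2021 - Sep 2023 data
idx = pd.date_range("2021-02-01", "2023-09-01", freq="MS")
df = pd.DataFrame({"value": np.arange(len(idx))}, index=idx)

fig, ax = plt.subplots()
ax.plot(df.index, df["value"])  # matplotlib handles the datetime conversion

locator = mdates.AutoDateLocator(minticks=5, maxticks=10)
ax.xaxis.set_major_locator(locator)
ax.xaxis.set_major_formatter(mdates.ConciseDateFormatter(locator))
fig.canvas.draw()
```

With real date numbers on the axis, `ConciseDateFormatter` also answers the smaller question for free: it shows month names on the ticks and moves the year into the offset label.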
<python><pandas><matplotlib><datetime><xticks>
2023-11-17 15:23:41
0
487
Michael Stern
77,502,645
3,102,968
Running a Jupyter Notebook Fails with a 500 Error
<p>I'm trying to run a simple Jupyter notebook on my local machine and I strangely run into some errors.</p> <pre><code>$ python --version Python 3.11.5 </code></pre> <p>I just run the following command:</p> <pre><code>$jupyter notebook </code></pre> <p>and when I select the available notebooks, I get a 500 Error and in the console I see the following:</p> <pre><code>ImportError: cannot import name 'contextfilter' from 'jinja2' (/home/joesan/.local/lib/python3.8/site-packages/jinja2/__init__.py) </code></pre> <p>I tried to install and uninstall jinja2 but to no avail. How can I fix this?</p> <p>Additional Info:</p> <pre><code>$ jupyter --version Selected Jupyter core packages... IPython : 7.27.0 ipykernel : 6.4.1 ipywidgets : 7.6.5 jupyter_client : 7.0.3 jupyter_core : 4.8.1 jupyter_server : not installed jupyterlab : not installed nbclient : 0.5.4 nbconvert : not installed nbformat : 5.1.3 notebook : 6.4.4 qtconsole : 5.1.1 traitlets : 5.1.0 Opening in existing browser session. libva error: /usr/lib/x86_64-linux-gnu/dri/i965_drv_video.so init failed /usr/lib/python3.8/json/encoder.py:257: UserWarning: date_default is deprecated since jupyter_client 7.0.0. Use jupyter_client.jsonutil.json_default. 
return _iterencode(o, 0) [E 19:20:53.073 NotebookApp] Uncaught exception GET /notebooks/boston_housing.ipynb (127.0.0.1) HTTPServerRequest(protocol='http', host='localhost:8888', method='GET', uri='/notebooks/boston_housing.ipynb', version='HTTP/1.1', remote_ip='127.0.0.1') Traceback (most recent call last): File &quot;/home/joesan/.local/lib/python3.8/site-packages/tornado/web.py&quot;, line 1704, in _execute result = await result File &quot;/home/joesan/.local/lib/python3.8/site-packages/tornado/gen.py&quot;, line 775, in run yielded = self.gen.send(value) File &quot;/home/joesan/.local/lib/python3.8/site-packages/notebook/notebook/handlers.py&quot;, line 95, in get self.write(self.render_template('notebook.html', File &quot;/home/joesan/.local/lib/python3.8/site-packages/notebook/base/handlers.py&quot;, line 516, in render_template return template.render(**ns) File &quot;/home/joesan/.local/lib/python3.8/site-packages/jinja2/environment.py&quot;, line 1301, in render self.environment.handle_exception() File &quot;/home/joesan/.local/lib/python3.8/site-packages/jinja2/environment.py&quot;, line 936, in handle_exception raise rewrite_traceback_stack(source=source) File &quot;/home/joesan/.local/lib/python3.8/site-packages/notebook/templates/notebook.html&quot;, line 1, in top-level template code {% extends &quot;page.html&quot; %} File &quot;/home/joesan/.local/lib/python3.8/site-packages/notebook/templates/page.html&quot;, line 154, in top-level template code {% block header %} File &quot;/home/joesan/.local/lib/python3.8/site-packages/notebook/templates/notebook.html&quot;, line 115, in block 'header' {% for exporter in get_frontend_exporters() %} File &quot;/home/joesan/.local/lib/python3.8/site-packages/notebook/notebook/handlers.py&quot;, line 23, in get_frontend_exporters from nbconvert.exporters.base import get_export_names, get_exporter File &quot;/home/joesan/.local/lib/python3.8/site-packages/nbconvert/__init__.py&quot;, line 4, in &lt;module&gt; from 
.exporters import * File &quot;/home/joesan/.local/lib/python3.8/site-packages/nbconvert/exporters/__init__.py&quot;, line 3, in &lt;module&gt; from .html import HTMLExporter File &quot;/home/joesan/.local/lib/python3.8/site-packages/nbconvert/exporters/html.py&quot;, line 14, in &lt;module&gt; from jinja2 import contextfilter ImportError: cannot import name 'contextfilter' from 'jinja2' (/home/joesan/.local/lib/python3.8/site-packages/jinja2/__init__.py) </code></pre> <p>I'm using pyenv to manage environments:</p> <pre><code>$ pyenv versions system 3.8.0 * 3.11.5 (set by /home/joesan/.pyenv/version) </code></pre>
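The traceback paths all point at the `python3.8` user site-packages even though pyenv reports 3.11.5, so a plausible reading is that `jupyter` resolves to a stale install whose old `nbconvert` imports `contextfilter`, a name removed in newer Jinja2 releases. A small diagnostic sketch (not the fix itself) to confirm which interpreter and which `jinja2` are actually in play:

```python
import importlib.util
import sys

# Which interpreter is actually running? It should match the pyenv-selected one.
print(sys.executable)
print(sys.version_info[:2])

# Where would jinja2 be imported from? If this path says python3.8
# while pyenv says 3.11, the notebook server is using a stale user-site install.
spec = importlib.util.find_spec("jinja2")
print(spec.origin if spec else "jinja2 not importable")
```

If the paths disagree, reinstalling Jupyter under the pyenv interpreter (or removing the old `~/.local/lib/python3.8` packages) is the usual remedy; this is an assumption based on the mixed paths in the traceback, not something the question confirms.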
<python><jupyter-notebook>
2023-11-17 15:05:36
2
15,565
joesan
77,502,352
8,359,217
FastAPI passing params in get request via TestClient
<p>I'm currently working with FastAPI's TestClient and came across an unexpected behavior that I'd like some clarification on.</p> <p>Working Example:</p> <pre class="lang-py prettyprint-override"><code>from fastapi.testclient import TestClient from fastapi import status def my_func(client: TestClient) -&gt; None: response = client.get( app.url_path_for(&quot;v1-my-endpoint&quot;), params={&quot;xx&quot;: &quot;11&quot;, &quot;yy&quot;: &quot;22&quot;} ) assert response.status_code == status.HTTP_200_OK </code></pre> <p>Non-Working Example:</p> <pre class="lang-py prettyprint-override"><code>from fastapi.testclient import TestClient from fastapi import status def my_func(client: TestClient) -&gt; None: my_params={&quot;xx&quot;: &quot;11&quot;, &quot;yy&quot;: &quot;22&quot;} response = client.get( app.url_path_for(&quot;v1-my-endpoint&quot;), params=my_params ) assert response.status_code == status.HTTP_200_OK </code></pre> <p>The second example, where I define parameters in a variable (my_params), throws the following error:</p> <pre><code>error: Argument &quot;params&quot; to &quot;get&quot; of &quot;TestClient&quot; has incompatible type &quot;Dict[str, object]&quot;; expected &quot;Union[QueryParams, Mapping[str, Union[Union[str, int, float, bool, None], Sequence[Union[str, int, float, bool, None]]]], List[Tuple[str, Union[str, int, float, bool, None]]], Tuple[Tuple[str, Union[str, int, float, bool, None]], ...], str, bytes, None]&quot; [arg-type] </code></pre> <p>I'm curious about the difference in behavior between directly providing parameters and passing a variable containing the parameters. Why does the first example work while the second one throws an error? What's the recommended way to pass parameters when using TestClient in FastAPI?</p>
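The error comes from the type checker, not from the runtime: an inline dict literal is inferred against the parameter's expected type, while a separately assigned variable is inferred on its own and, with mixed value types, widens to `Dict[str, object]`, which no longer matches the `params` union. Annotating the variable avoids the widening. A sketch of the typing fix only (no FastAPI needed; the mixed-value dict below is a hypothetical stand-in for whatever produced `Dict[str, object]` in the real code):

```python
from typing import Mapping, Union

# Value types that httpx-style query params accept (simplified).
ParamValue = Union[str, int, float, bool, None]

# Without this annotation, a dict with mixed value types would be
# inferred as Dict[str, object], which mypy rejects for `params`.
my_params: Mapping[str, ParamValue] = {"xx": "11", "yy": 22}

# client.get(app.url_path_for("v1-my-endpoint"), params=my_params)  # now type-checks
```

So the recommended pattern is simply: annotate any params variable you build up separately, using a mapping type compatible with the client's signature.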
<python><pytest><fastapi>
2023-11-17 14:20:49
0
303
Matheus Schaly
77,502,176
12,493,753
ModuleError with pyranges when reading .pkl file in Jupyter Notebook
<p>I need to read a .pkl file in JupyterLab. The code I have is:</p> <pre><code>import dill # Loading scplus_obj scplus_obj = dill.load(open('/path-to-my-file/scplus_obj.pkl', 'rb')) </code></pre> <p>The error get is :</p> <pre><code>--------------------------------------------------------------------------- ModuleNotFoundError Traceback (most recent call last) Cell In[12], line 2 1 # Loading scplus_obj ----&gt; 2 scplus_obj = dill.load(open('/path-to-my-file/scplus_obj.pkl', 'rb')) File ~/.local/lib/python3.8/site-packages/dill/_dill.py:287, in load(file, ignore, **kwds) 281 def load(file, ignore=None, **kwds): 282 &quot;&quot;&quot; 283 Unpickle an object from a file. 284 285 See :func:`loads` for keyword arguments. 286 &quot;&quot;&quot; --&gt; 287 return Unpickler(file, ignore=ignore, **kwds).load() File ~/.local/lib/python3.8/site-packages/dill/_dill.py:442, in Unpickler.load(self) 441 def load(self): #NOTE: if settings change, need to update attributes --&gt; 442 obj = StockUnpickler.load(self) 443 if type(obj).__module__ == getattr(_main_module, '__name__', '__main__'): 444 if not self._ignore: 445 # point obj class to main File ~/.local/lib/python3.8/site-packages/dill/_dill.py:432, in Unpickler.find_class(self, module, name) 430 return type(None) #XXX: special case: NoneType missing 431 if module == 'dill.dill': module = 'dill._dill' --&gt; 432 return StockUnpickler.find_class(self, module, name) ModuleNotFoundError: No module named 'pyranges.pyranges' </code></pre> <p>I have tried uninstalling then installing pyranges module. 
I have installed pyranges in the JupyterLab.</p> <p>I have made sure the module path is included in the system paths:</p> <pre><code>import pyranges print(pyranges.__file__) import sys sys.path </code></pre> <pre><code>/home/cbgm/.local/lib/python3.8/site-packages/pyranges/__init__.py ['/home/cbgm/.local/lib/python3.8/site-packages/ray/thirdparty_files', '/home/cbgm', '/usr/lib/python38.zip', '/usr/lib/python3.8', '/usr/lib/python3.8/lib-dynload', '', '/home/cbgm/.local/lib/python3.8/site-packages', '/home/cbgm/scenicplus/src', '/usr/local/lib/python3.8/dist-packages', '/usr/lib/python3/dist-packages'] </code></pre> <p>Instead of <code>dill</code>, I tried with <code>pickle</code> as well:</p> <pre><code>import pickle with open('/path-to-my-file/scplus_obj.pkl', 'rb') as scplus_obj: data = pickle.load(scplus_obj) </code></pre> <p>Even then I get the same error:</p> <pre><code>/usr/lib/python3/dist-packages/paramiko/transport.py:219: CryptographyDeprecationWarning: Blowfish has been deprecated &quot;class&quot;: algorithms.Blowfish, --------------------------------------------------------------------------- ModuleNotFoundError Traceback (most recent call last) Cell In[1], line 3 1 import pickle 2 with open('/media/cbgm/6e648ddc-311d-4572-ac67-f8a548f153d4/Neel/Shuchun Li/70_SCENICplus_outs/01_Data/scenicplus/scplus_obj.pkl', 'rb') as scplus_obj: ----&gt; 3 data = pickle.load(scplus_obj) ModuleNotFoundError: No module named 'pyranges.pyranges' </code></pre> <p>I do not have experience with python and its packages, modules, kernels and all that. I just need to read the .pkl file in for now. To me it seems like an issue the module 'pyranges'. I have tried the solutions I have found so far based on that.</p>
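`ModuleNotFoundError: No module named 'pyranges.pyranges'` usually means the file was pickled under a different pyranges version in which the class lived at `pyranges.pyranges.<Class>`, and the installed version has moved it. One common workaround is a custom `Unpickler` that remaps the recorded module path; the mapping below is an assumption, so adjust it to wherever the class lives in your installed version:

```python
import pickle

class RenameUnpickler(pickle.Unpickler):
    # Module paths recorded in the pickle -> their current location.
    # Assumed mapping; verify against your installed pyranges layout.
    MODULE_MAP = {"pyranges.pyranges": "pyranges"}

    def find_class(self, module, name):
        module = self.MODULE_MAP.get(module, module)
        return super().find_class(module, name)

# Usage:
# with open('/path-to-my-file/scplus_obj.pkl', 'rb') as f:
#     scplus_obj = RenameUnpickler(f).load()
```

The cleaner long-term fix is to unpickle with the same pyranges version that wrote the file (e.g. pin it in a dedicated environment) and re-save the object in a version-stable format.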
<python><pickle><jupyter-lab><dill>
2023-11-17 13:53:32
1
446
Arnoneel Sinha
77,502,027
4,433,386
Infer return type annotation from other function's annotation
<p>I have a function with a complex return type annotation:</p> <pre class="lang-py prettyprint-override"><code>from typing import (Union, List) # The -&gt; Union[…] is actually longer than this example: def parse(what: str) -&gt; Union[None, int, bool, float, complex, List[int]]: # do stuff and return the object as needed def parse_from_something(which: SomeType) -&gt; ????: return parse(which.extract_string()) # … class SomeType: def extract_string(self) -&gt; str: # do stuff and return str </code></pre> <p>How can I type-annotate <code>parse_from_something</code> so that it is annotated to return the same types as <code>parse</code>, without repeating them?</p> <p>The problem I'm solving here is that one function is subject to change, but there's wrappers around it that will always return the identical set of types. I don't want to duplicate code, and because this is a refactoring and after-the-fact type annotation effort, I need to assume I'll remove possible return types from <code>parse</code> in the future, and a static type checker might not realize that <code>parse_from_something</code> can no longer return these.</p>
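A common pattern for exactly this: name the union once as a type alias and use it in both signatures, so removing a member from the alias updates every wrapper at once. A sketch using the question's names (the stand-in function bodies are illustrative, not from the question):

```python
from typing import List, Union

# Single source of truth for the return union.
ParseResult = Union[None, int, bool, float, complex, List[int]]

def parse(what: str) -> ParseResult:
    # Stand-in body for the sketch
    return int(what) if what.isdigit() else None

def parse_from_something(which: "SomeType") -> ParseResult:
    return parse(which.extract_string())

class SomeType:
    def extract_string(self) -> str:
        return "42"
```

On newer Pythons the alias can be marked explicitly (`ParseResult: TypeAlias = ...`, or `type ParseResult = ...` from 3.12), but a plain assignment like the above is understood by mypy as well.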
<python><python-typing>
2023-11-17 13:27:37
3
36,888
Marcus MΓΌller
77,501,948
6,654,730
Creating a stamp with pypdf does not work for simple PDF file, works for others
<p>I am following the <code>pypdf</code> documentation here: <a href="https://pypdf.readthedocs.io/en/latest/user/add-watermark.html#watermark-overlay" rel="nofollow noreferrer">https://pypdf.readthedocs.io/en/latest/user/add-watermark.html#watermark-overlay</a></p> <p>I want to overlay one PDF over another (or overlay text over a PDF). The below code works for certain more complex <code>overlay.pdf</code>, but doesn't work for the simple <code>overlay.pdf</code> linked below. I see the base PDF, but not the overlay. What could the issue be?</p> <p><a href="https://www.dropbox.com/scl/fo/ygqttuti4ed7cc2dz1c4u/h?rlkey=khdu0k5stkt205p4mcgca5n0i&amp;dl=0" rel="nofollow noreferrer">https://www.dropbox.com/scl/fo/ygqttuti4ed7cc2dz1c4u/h?rlkey=khdu0k5stkt205p4mcgca5n0i&amp;dl=0</a></p> <pre><code>from pypdf import PdfWriter, PdfReader stamp = PdfReader(&quot;overlay.pdf&quot;).pages[0] writer = PdfWriter(clone_from=&quot;form_1099_nec_page_4.pdf&quot;) for page in writer.pages: page.merge_page(stamp, over=True) writer.write(&quot;merged.pdf&quot;) </code></pre> <p>This <code>overlay.pdf</code> was generated with the following code, in case that's relevant.</p> <pre><code>wkhtmltopdf_path = getattr(settings, 'WKHTMLTOPDF_PATH', None) config = pdfkit.configuration(wkhtmltopdf=wkhtmltopdf_path) with open('overlay.html', 'r') as html_file: html_content = html_file.read() pdf_data = pdfkit.from_string(html_content, False, configuration=config) with open('overlay.pdf', 'wb') as pdf_file: pdf_file.write(pdf_data) </code></pre>
<python><python-3.x><pdf><pypdf>
2023-11-17 13:14:53
2
7,670
M3RS
77,501,936
1,761,587
Can async generators run concurrently?
<p>I'm trying to understand async generators and its practical use-case better. So, I wrote this code</p> <pre><code>import asyncio async def download(urls): for url in urls: print(f&quot;Downloading page at {url} started&quot;) await asyncio.sleep(2) # simulating a page download print(f&quot;Page downloaded from {url}&quot;) yield f&quot;Page downloaded from url {url}&quot; async def main(): results = [] async for i in download([&quot;foo.com&quot;, &quot;bar.com&quot;, &quot;baz.com&quot;]): results.append(i) print(&quot;All results:&quot;, results) asyncio.run(main()) </code></pre> <p>This code return this output:</p> <pre><code>Downloading page at foo.com started Page downloaded from foo.com Downloading page at bar.com started Page downloaded from bar.com Downloading page at baz.com started Page downloaded from baz.com All results: ['Page downloaded from url foo.com', 'Page downloaded from url bar.com', 'Page downloaded from url baz.com'] </code></pre> <p>The pages are not downloaded concurrently here.</p> <p>I understand that without the async generator I can simply use the following to create independent coroutines to download the pages concurrently:</p> <pre><code>async def download(url): print(f&quot;Downloading page at {url} started&quot;) await asyncio.sleep(2) # simulating a page download print(f&quot;Page downloaded from {url}&quot;) return f&quot;Page downloaded from url {url}&quot; async def main(): await asyncio.gather(*[download(x) for x in [&quot;foo.com&quot;, &quot;bar.com&quot;, &quot;baz.com&quot;]]) asyncio.run(main()) </code></pre> <p>However I do not get the point of using async generators. Can someone explain a practical use-case of async generators with some code(if possible) or point me to a blog etc.?</p> <p>Thanks</p>
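One practical pattern where an async generator does earn its keep: start the downloads concurrently as tasks, then yield each result as it completes, so the consumer can stream results with `async for` without waiting for the whole batch. A sketch (the sleep is shortened; the URLs follow the question):

```python
import asyncio

async def download(url):
    await asyncio.sleep(0.01)  # simulating a page download
    return f"Page downloaded from url {url}"

async def download_all(urls):
    # Start all downloads up front; yield each result as soon as it finishes.
    tasks = [asyncio.create_task(download(u)) for u in urls]
    for fut in asyncio.as_completed(tasks):
        yield await fut

async def main():
    results = [r async for r in download_all(["foo.com", "bar.com", "baz.com"])]
    return results

print(asyncio.run(main()))
```

Unlike `gather`, the consumer sees each page the moment it arrives, which matters when results feed further processing (parsing, writing to disk) that can overlap with the remaining downloads.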
<python><asynchronous><async-await><generator><coroutine>
2023-11-17 13:13:00
0
332
knightcool
77,501,713
13,635,877
Looking up the value associated with the previous date of same ticker
<p>I have a reproducible example that only has 4 rows, but I am using this for a 12m row dataset.</p> <p>Is there an easier way to achieve what the following code does? As you can see there are a lot of steps.</p> <pre><code>import pandas as pd df = pd.DataFrame([['1/1/2023','ABC',0.01],['1/1/2023','XYZ',0.09],['1/7/2023','ABC',0.0],['1/7/2023','XYZ',0.03]],columns=['date','ticker','price']) df['date'] = pd.to_datetime(df['date']) dates = pd.DataFrame(df['date'].unique(),columns=['date']) dates['lastweek'] = dates['date'].shift(1) tmp = df[['date','ticker']] tmp = tmp.merge(dates,on='date',how='left') tmp = tmp.merge(df[['date','ticker','price']],left_on=['lastweek','ticker'],right_on=['date','ticker']) tmp['price'] = tmp['price'].fillna(0) tmp = tmp.drop(columns=['date_y']) tmp = tmp.rename(columns={'date_x':'date','price':'lastweekprice'}) df = df.merge(tmp[['date','ticker','lastweekprice']],on=['date','ticker'],how='left') </code></pre>
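Since "previous date" here means the previous observation of the same ticker, a `groupby('ticker')` with `shift(1)` collapses all of those merge steps into one line. A sketch on the question's own sample data; it assumes each ticker has at most one row per date, so that sorting by date makes `shift` pick the prior week:

```python
import pandas as pd

df = pd.DataFrame(
    [['1/1/2023', 'ABC', 0.01], ['1/1/2023', 'XYZ', 0.09],
     ['1/7/2023', 'ABC', 0.00], ['1/7/2023', 'XYZ', 0.03]],
    columns=['date', 'ticker', 'price'])
df['date'] = pd.to_datetime(df['date'])

# Previous price per ticker; the first observation gets 0, as in the question.
df = df.sort_values(['ticker', 'date'])
df['lastweekprice'] = df.groupby('ticker')['price'].shift(1).fillna(0)
```

On a 12M-row frame this should also be much faster than the multi-merge version, since it is a single vectorized group operation.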
<python><pandas><dataframe><merge>
2023-11-17 12:39:07
1
452
lara_toff
77,501,687
6,703,592
dataframe merge un-crossed blocks
<p>I want to merge some un-crossed blocks. For example</p> <pre><code>import pandas as pd df = pd.DataFrame() df_11 = pd.DataFrame([[1,1], [1,1]], index=['1', '2'], columns=['col1', 'col2']) df_12 = pd.DataFrame([[2,2], [2,2]], index=['1', '2'], columns=['col3', 'col4']) df_21 = pd.DataFrame([[3,3], [3,3]], index=['3', '4'], columns=['col1', 'col2']) df_22 = pd.DataFrame([[4,4], [4,4]], index=['3', '4'], columns=['col3', 'col4']) </code></pre> <p>the expected result is</p> <pre><code> 'col1', 'col2', 'col3', 'col4' '1' 1,1,2,2 '2' 1,1,2,2 '3' 3,3,4,4 '4' 3,3,4,4 </code></pre> <p>I tried to use <code>join</code>:</p> <pre><code>for ele in [df_11, df_12, df_21, df_22]: df = df.join(ele, how='outer') </code></pre> <p>But it creates the <code>Nan</code> elements which will overlap with the later blocks ( lsuffix and rsuffix).</p> <p>A naive way is just collecting all columns and index and setting to <code>df</code>, which seems not a good way.</p>
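Since the blocks share no cells, two `pd.concat` calls avoid the NaN/suffix problem entirely: stitch each row block horizontally first, then stack the results vertically. A sketch on the question's data:

```python
import pandas as pd

df_11 = pd.DataFrame([[1, 1], [1, 1]], index=['1', '2'], columns=['col1', 'col2'])
df_12 = pd.DataFrame([[2, 2], [2, 2]], index=['1', '2'], columns=['col3', 'col4'])
df_21 = pd.DataFrame([[3, 3], [3, 3]], index=['3', '4'], columns=['col1', 'col2'])
df_22 = pd.DataFrame([[4, 4], [4, 4]], index=['3', '4'], columns=['col3', 'col4'])

# Horizontal within each row block, then vertical across blocks.
df = pd.concat([pd.concat([df_11, df_12], axis=1),
                pd.concat([df_21, df_22], axis=1)], axis=0)
```

This never materializes NaN cells, so there is nothing to overlap or suffix later.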
<python><pandas><dataframe>
2023-11-17 12:33:03
3
1,136
user6703592
77,501,565
1,761,587
Python generator missing a value during iteration
<p>Hello while using generators in my code, I noticed this strange behaviour where after breaking out from the loop, the <code>next()</code> call on a generator skips one value. Example code:</p> <pre><code>from itertools import cycle def endless(): yield from cycle((9,8,7,6)) total=0 e = endless() for i in e: if total&lt;30: total += i print(i, end=&quot; &quot;) else: print() print(&quot;Reached the goal!&quot;) break print(next(e), next(e), next(e)) </code></pre> <p>This outputs:</p> <pre><code>9 8 7 6 Reached the goal! 8 7 6 </code></pre> <p>Why does it skip printing <code>9</code> after breaking out from the loop. I was expecting it to print the following instead:</p> <pre><code>9 8 7 6 Reached the goal! 9 8 7 </code></pre>
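What happens here: the `for` loop itself calls `next(e)` at the top of every iteration, so it pulls the 9, binds it to `i`, and only then hits the `else: break`; that value is consumed but never used. Pulling values explicitly, only when they will actually be used, avoids losing one. A sketch of the same loop rewritten:

```python
from itertools import cycle

def endless():
    yield from cycle((9, 8, 7, 6))

total = 0
e = endless()
while total < 30:
    # next() is only called when the value will actually be used,
    # so nothing is silently consumed when the condition would fail.
    total += next(e)

print("Reached the goal!")
print(next(e), next(e), next(e))  # 9 8 7
```

The same caution applies to any `for` loop over a generator you intend to keep using after a `break`: the loop may have already advanced the generator past a value you never saw.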
<python><loops><for-loop><iterator><generator>
2023-11-17 12:12:49
1
332
knightcool
77,501,353
726,730
pyqt5 font-size and dimensions missmatch between QtDesigner (preview mode) and production
<p><a href="https://i.sstatic.net/icrtm.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/icrtm.png" alt="enter image description here" /></a></p> <p><a href="https://i.sstatic.net/KjPhV.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/KjPhV.png" alt="enter image description here" /></a></p> <p>The first screenshot is from Qt Designer (preview mode, style Fusion).</p> <p>The second screenshot is after running the <code>python my_pyqt5_app.py</code> command (first I run <code>pyuic5 -x my_pyqt5_app.ui -o my_pyqt5_app.py</code>).</p> <p>As you can see, there are differences between the two images (font size and dimensions).</p> <p>What is wrong?</p> <p>Is there something I can change in Qt Designer? If not, what code would make the two images the same?</p>
<python><pyqt5><size><font-size><qt-designer>
2023-11-17 11:35:21
1
2,427
Chris P
77,501,303
2,587,422
Python SimpleQueue not fair
<p>The Python standard library provides two implementations for the thread-safe <code>SimpleQueue</code>:</p> <ol> <li>A <a href="https://github.com/python/cpython/blob/main/Modules/_queuemodule.c#L28-L35" rel="nofollow noreferrer">pure C implementation</a></li> <li>A fallback <a href="https://github.com/python/cpython/blob/main/Lib/queue.py#L258" rel="nofollow noreferrer">Python implementation</a>, IIUC in case Cython is not present (and because apparently <a href="https://github.com/python/cpython/pull/3346/files#r161618399" rel="nofollow noreferrer">it's policy</a> to also provide a pure Python implementation)</li> </ol> <p>I was going through the code for both implementations in order to learn more. Among others, I learnt that the C implementation is reentrant while the Python one is not.</p> <p>However, there's something that has left me quite confused, from <a href="https://github.com/python/cpython/blob/main/Lib/queue.py#L263-L266" rel="nofollow noreferrer">a comment</a> in the code for the Python implementation:</p> <blockquote> <p>Note: while this pure Python version provides fairness (by using a threading.Semaphore which is itself fair, being based on threading.Condition), fairness is not part of the API contract. This allows the C version to use a different implementation.</p> </blockquote> <p>How can a thread-safe, <a href="https://github.com/python/cpython/blob/main/Lib/queue.py#L259" rel="nofollow noreferrer">&quot;FIFO queue&quot;</a> not be fair? Isn't &quot;fairness&quot; guaranteed by definition, by it being FIFO in the first place?</p> <p>Also, in what way does this &quot;allow&quot; the C version to use a different implementation? I find the wording quite confusing. Does it mean that somehow this is defined as a &quot;potentially not fair FIFO queue&quot;, and that reflects in the C implementation? Or something else?</p>
<python><queue><cython><python-multithreading>
2023-11-17 11:27:23
0
315
Luigi D.
77,501,140
2,132,157
How can I plot a 2D image and align its projection to the axes keeping the plots dimension small compared to the image?
<p>I am struggling to find a way to keep the projections of the image completely aligned to the image (as in the figure below) while at the same time reducing their dimensions so that the image takes most of the figure space.</p> <pre class="lang-py prettyprint-override"><code> import matplotlib.pyplot as plt import matplotlib.gridspec as gridspec import matplotlib.image as mpimg import numpy as np from skimage import data img = data.coins() h,w = img.shape ratio = h/w fig = plt.figure(figsize=(8, 8)) gs = gridspec.GridSpec(2, 2, width_ratios=[1*ratio, 1], height_ratios=[1/ratio, 1]) ax_center = plt.subplot(gs[1, 1]) ax_center.imshow(img) ax_left = plt.subplot(gs[1, 0]) ax_left.set_title('Left Plot') ax_left.plot(-img.mean(axis=1),range(img.shape[0])) ax_top = plt.subplot(gs[0, 1]) ax_top.plot(img.mean(axis=0)) ax_top.set_title('Top Plot') plt.tight_layout() plt.show() </code></pre> <p><a href="https://i.sstatic.net/HGhJs.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/HGhJs.png" alt="enter image description here" /></a></p> <p>Basically I would like the top plot to have a smaller height and the left plot to have a smaller width, keeping them perfectly aligned to the image.</p>
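One way to get this (a sketch): let the side plots share the image's axes via `sharex`/`sharey`, so their limits track the image exactly, and shrink them through the gridspec ratios; `imshow(..., aspect='auto')` lets the image fill its cell. The trade-off, worth stating, is that `aspect='auto'` gives up square pixels in exchange for the alignment. Random data stands in for `skimage.data.coins()` here:

```python
import matplotlib
matplotlib.use("Agg")  # non-interactive backend, for the sketch only
import matplotlib.pyplot as plt
import numpy as np

img = np.random.default_rng(0).random((300, 400))  # stand-in for data.coins()

fig = plt.figure(figsize=(8, 8))
# Small first row/column; the image cell dominates the figure.
gs = fig.add_gridspec(2, 2, width_ratios=[1, 5], height_ratios=[1, 5],
                      hspace=0.05, wspace=0.05)

ax_img = fig.add_subplot(gs[1, 1])
ax_img.imshow(img, aspect='auto')  # fill the cell instead of fixing the aspect

# Shared axes keep the projections aligned with the image.
ax_left = fig.add_subplot(gs[1, 0], sharey=ax_img)
ax_left.plot(-img.mean(axis=1), range(img.shape[0]))
ax_top = fig.add_subplot(gs[0, 1], sharex=ax_img)
ax_top.plot(img.mean(axis=0))

fig.canvas.draw()
```

Because the axes are shared, zooming or panning the image would keep the projections in lockstep as well.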
<python><matplotlib><matplotlib-gridspec>
2023-11-17 10:59:42
1
22,734
G M
77,501,093
4,131,583
Pandas - compute the sum of numerical characters of every string in a dataframe
<p>I have a dataframe with a column &quot;dname&quot;. It contains many rows of 2LD domain names, e.g.</p> <pre><code>123ask example92 what3ver ... </code></pre> <p>I want to find the sum of the digits of every string in every row:</p> <pre><code>6 11 3 ... </code></pre> <p>So I want to create a new column in the dataframe with these values.</p> <p>I have:</p> <pre><code> df['numeric'] = df.dname.apply(lambda v : sum(x, x.isdigit() for x in v)) </code></pre> <p>It does not work. Any help? Thanks in advance!</p>
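Given the expected output (1+2+3=6 for "123ask", 9+2=11 for "example92"), the goal is the sum of the digit values, and the lambda just needs a valid generator expression: convert each digit character and sum. A sketch on the question's sample values:

```python
import pandas as pd

df = pd.DataFrame({'dname': ['123ask', 'example92', 'what3ver']})

# Sum the numeric value of every digit character in each string.
df['numeric'] = df['dname'].apply(
    lambda v: sum(int(x) for x in v if x.isdigit()))
```

(If the goal had instead been the *count* of digits, `df['dname'].str.count(r'\d')` would do it without `apply`.)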
<python><pandas><dataframe>
2023-11-17 10:53:25
3
923
gevaraweb
77,500,954
859,227
Iterate over dataframe columns by index and step 2
<p>For a dataframe like this</p> <pre><code> M1_x M1_y M2_x M2_y 0 2 0.861626 2 0.980591 1 4 0.685437 4 0.647706 2 8 0.495346 8 0.303102 3 16 0.327949 16 0.133477 </code></pre> <p>I would like to plot two lines, M1 and M2 where each has X and Y points. By iterating the columns like this:</p> <pre><code>n = df_scales.shape[1]-1 // Number of columns is 4, we iterate 0,1 and 2,3 for index in n: x = df_scales.iloc[:, index] y = df_scales.iloc[:, index+1] plt.plot( x, y, marker = 'o') plt.show() </code></pre> <p>But I get this error:</p> <pre><code> for index in n: TypeError: 'int' object is not iterable </code></pre> <p>How can I fix that? Or a better method?</p>
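The error comes from iterating over the integer `n` itself; a stepped `range` gives the intended column pairs, and `range(0, df.shape[1], 2)` (not `shape[1]-1`) covers both (0,1) and (2,3). A sketch on the question's data:

```python
import matplotlib
matplotlib.use("Agg")  # non-interactive backend, for the sketch only
import matplotlib.pyplot as plt
import pandas as pd

df_scales = pd.DataFrame({
    'M1_x': [2, 4, 8, 16], 'M1_y': [0.86, 0.69, 0.50, 0.33],
    'M2_x': [2, 4, 8, 16], 'M2_y': [0.98, 0.65, 0.30, 0.13]})

fig, ax = plt.subplots()
# Step through the columns two at a time: (0,1) is M1, (2,3) is M2.
for index in range(0, df_scales.shape[1], 2):
    x = df_scales.iloc[:, index]
    y = df_scales.iloc[:, index + 1]
    ax.plot(x, y, marker='o', label=df_scales.columns[index][:2])
ax.legend()
```

Plotting both lines on one axes (instead of calling `plt.show()` inside the loop) also puts M1 and M2 on the same figure, which seems to be the intent.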
<python><dataframe>
2023-11-17 10:33:09
2
25,175
mahmood
77,500,939
2,929,914
Python Polars ComputeError when reading pickle file
<p>When I try to execute the snippet below:</p> <pre><code>import pickle f = open('my_file_path.pkl', 'rb') loaded_obj = pickle.load(f) f.close() </code></pre> <p>I'm getting the following error:</p> <pre><code>ComputeError: out-of-spec: Unable to get root as message: {err:?} </code></pre> <p>At first I thought that this was an issue with Pickle and its .load() method, but after reading the error trail this is actually related to Polars, as the last line of the error chain is from \polars package:</p> <pre><code>File ~\AppData\Local\anaconda\Lib\site-packages\polars\series\series.py:436 in __setstate__ self._s.__setstate__(state) </code></pre> <p>The awkward behavior is that this exactly same code (not just this snippet, but a whole dashboard design code) was 100% working yesterday. But after I replaced my notebook (and therefore reinstalled anaconda, python and several packages, including Polars) I'm getting the error.</p> <p>Therefore what I think is that is nothing wrong with the pickle file itself or the code, but anything related to any update on a package (Pickle, Polars?) that is affecting the execution now?</p> <p>Thank you.</p>
<python><python-polars>
2023-11-17 10:30:50
0
705
Danilo Setton
77,500,656
14,823,310
Is it safe to store supabase session access token in streamlit session state?
<p>I am using supabase authentication system for my streamlit app which is a multipage app. One page is the login and another page display the data of the logged in user.</p> <p>To retrieve the user in the page with data I have to store in session-state the token of the logged in user.</p> <p>In my login.py I Have:</p> <pre><code>user = supabase.auth.sign_in_with_password({&quot;email&quot;: email, &quot;password&quot;: password}) st.session_state[&quot;token&quot;] = supabase.auth.get_session().access_token </code></pre> <p>and in my data page I retrieve the user like this:</p> <pre><code>user = supabase.auth.get_user(st.session_state[&quot;token&quot;]) </code></pre> <p>Could you please tell me what you think ? I would like to know if this us unsafe or not.</p> <p>Many thanks for reading me.</p>
<python><token><browser-cache><streamlit><supabase>
2023-11-17 09:45:35
0
591
pacdev
77,500,616
15,948,240
Product between MultiIndex and a list
<p>I have a MultiIndex object with 2 levels:</p> <pre><code>import pandas as pd mux = pd.MultiIndex.from_tuples([(1,1), (2,3)]) &gt;&gt;&gt; MultiIndex([(1, 1), (2, 3)], ) </code></pre> <p>I want to multiply it by a list <code>l=[4,5]</code>, I tried</p> <pre><code>pd.MultiIndex.from_product([mux.values, [4,5]]) &gt;&gt;&gt; MultiIndex([((1, 1), 4), ((1, 1), 5), ((2, 3), 4), ((2, 3), 5)], ) </code></pre> <p>I managed to get my expected result with</p> <pre><code>from itertools import product pd.MultiIndex.from_tuples([(*a, b) for a, b in product(mux, [4,5])]) &gt;&gt;&gt; MultiIndex([(1, 1, 4), (1, 1, 5), (2, 3, 4), (2, 3, 5)], ) </code></pre> <p>Is there a better way to do this operation ?</p>
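To my knowledge there is no dedicated `MultiIndex` method for this flattened product, so the comprehension approach is already idiomatic; it can be spelled slightly more simply without `itertools`:

```python
import pandas as pd

mux = pd.MultiIndex.from_tuples([(1, 1), (2, 3)])
l = [4, 5]

# Nested comprehension, same result as the itertools.product version.
out = pd.MultiIndex.from_tuples([(*t, x) for t in mux for x in l])
```

`MultiIndex.from_product` treats each input as one level, which is why it produced nested tuples rather than a three-level index.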
<python><pandas><multi-index><cartesian-product>
2023-11-17 09:38:17
3
1,075
endive1783
77,500,442
12,242,085
How to add cross validation to Optuna function to tune hyperparameters for LSTM?
<p>I have code to tune hyperparameters in LSTM. How can I:</p> <ol> <li>add cross validation based on 5 folds on training dataset</li> <li>print avg <code>AUC</code> from each iteration from training dataset divided on 5 folds</li> <li>print <code>AUC</code> from test dataset (of course do not divide test dataset on folds):</li> </ol> <pre><code>def objective(trial): start_time = time.time() model = create_model(trial) history = model.fit(X_train, y_train, validation_data=(X_test, y_test), epochs=15, verbose=0) y_pred = model.predict(X_test) auc = roc_auc_score(y_test, y_pred) end_time = time.time() elapsed_time = end_time - start_time print(&quot;iteration no:&quot;, trial.number) print(&quot;AUC:&quot;, auc) print(&quot;hyperparameters:&quot;, trial.params) print(&quot;time:&quot;, elapsed_time, &quot;sec&quot;) return auc </code></pre> <p>How can I do that in Python?</p>
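The general shape, independent of the model library: inside `objective`, split `X_train` with `StratifiedKFold`, train a fresh model on each fold, average the fold AUCs, and return that average; the untouched test set is evaluated only once, after tuning. A sketch with a scikit-learn stand-in model, since the LSTM, the Optuna `trial`, and the real data are in the question and omitted here:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import StratifiedKFold

# Synthetic stand-in for the question's training data
X_train, y_train = make_classification(n_samples=300, random_state=0)

def cross_val_auc(make_model, X, y, n_splits=5):
    # Per-fold AUC on the *training* data only; the test set stays untouched.
    aucs = []
    skf = StratifiedKFold(n_splits=n_splits, shuffle=True, random_state=0)
    for tr_idx, va_idx in skf.split(X, y):
        model = make_model()            # fresh model per fold
        model.fit(X[tr_idx], y[tr_idx])
        pred = model.predict_proba(X[va_idx])[:, 1]
        aucs.append(roc_auc_score(y[va_idx], pred))
    return float(np.mean(aucs))

# Inside objective(trial), this average would be the returned value,
# with make_model replaced by `lambda: create_model(trial)`.
avg_auc = cross_val_auc(lambda: LogisticRegression(max_iter=1000),
                        X_train, y_train)
print("avg CV AUC:", avg_auc)
```

For the Keras LSTM, `make_model` would rebuild the network per fold (so weights do not leak between folds), and the per-fold AUCs can be printed before averaging to satisfy point 2.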
<python><machine-learning><lstm><cross-validation><optuna>
2023-11-17 09:10:24
0
2,350
dingaro
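A minimal sketch of the cross-validation pattern asked about above. It assumes scikit-learn's `StratifiedKFold`, and substitutes a stand-in classifier for the Keras LSTM (the `create_model(trial)` call would slot in where the stand-in is built); an Optuna `objective` would return the averaged AUC, and the test-set AUC would be computed once after tuning, without folds.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import StratifiedKFold

def cv_auc(X_train, y_train, n_splits=5):
    # 5-fold CV on the training set only; the test set stays untouched here
    skf = StratifiedKFold(n_splits=n_splits, shuffle=True, random_state=0)
    aucs = []
    for fold, (tr, va) in enumerate(skf.split(X_train, y_train)):
        model = LogisticRegression()  # stand-in for create_model(trial)
        model.fit(X_train[tr], y_train[tr])
        y_pred = model.predict_proba(X_train[va])[:, 1]
        aucs.append(roc_auc_score(y_train[va], y_pred))
        print(f"fold {fold}: AUC={aucs[-1]:.3f}")
    return float(np.mean(aucs))
```

Inside `objective(trial)`, returning `cv_auc(X_train, y_train)` makes Optuna optimize the averaged fold AUC; `roc_auc_score(y_test, best_model.predict(X_test))` is then reported once with `study.best_params`.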
77,500,330
13,798,993
Python retrieving data from its memory address
<p>I currently have a very large dataset (~15 GB) which should be used by multiple Python scripts at the same time.</p> <p>Loading single elements of the dataset takes too long, so the whole dataset has to be in memory.</p> <p>My current best approach would be to have a Python process which has all the elements loaded, and have the other Python scripts request the elements via a socket connection to localhost.</p> <p>However, then I have the problem of having to encode and decode the data sent via the socket. So my next best idea was the following, though I do not know if it is possible:</p> <ol> <li>Have the scripts send a request containing the indices of the datapoints in the dataset they would like to use via a socket to the process holding the data</li> <li>Have the process holding the data return the memory addresses of the datapoints</li> <li>Have the scripts load the elements from there</li> </ol> <p>So the question is: is this possible?</p> <p>Best regards</p>
<python><python-3.x><memory>
2023-11-17 08:50:01
1
689
Quasi
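On the question above: raw memory addresses can't be dereferenced across process boundaries, but `multiprocessing.shared_memory` (Python 3.8+) gives the intended zero-copy behaviour: the loader process creates a named block once, and other scripts attach to it by name (the name, not an address, is what you'd send over the socket). A minimal single-file sketch:

```python
import numpy as np
from multiprocessing import shared_memory

# Loader process: place the dataset into a named shared-memory block once
data = np.arange(10, dtype=np.float64)
shm = shared_memory.SharedMemory(create=True, size=data.nbytes)
np.ndarray(data.shape, dtype=data.dtype, buffer=shm.buf)[:] = data

# Any other script: attach by name and read without copying or serializing
attached = shared_memory.SharedMemory(name=shm.name)
view = np.ndarray((10,), dtype=np.float64, buffer=attached.buf)
value = float(view[3])  # zero-copy read
print(value)

del view                # release the exported buffer before closing
attached.close()
shm.close()
shm.unlink()            # done only by the owning process
```

`shm.name` is a short string, so passing it between the 15 GB holder and the consumer scripts is cheap, and each consumer reads the array in place.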
77,500,265
4,494,781
How to type hint a return value as an IntEnum
<p>I want to annotate a function to indicate that it can only return values from an <code>IntEnum</code>:</p> <pre><code>from enum import IntEnum import random class GoodIntegers(IntEnum): THE_ANSWER_TO_LIFE_THE_UNIVERSE_AND_EVERYTHING = 42 DEADBEEF = 0xdeadbeef LEET = 1337 def f() -&gt; GoodIntegers: return random.choice([42, 1337, 0xdeadbeef]) </code></pre> <p>But mypy won't trust me that my source of integers is doing the right thing:</p> <pre><code>$ mypy test.py test.py:12: error: Incompatible return value type (got &quot;int&quot;, expected &quot;GoodIntegers&quot;) [return-value] Found 1 error in 1 file (checked 1 source file) </code></pre> <p>Is there a workaround to tell mypy that I really really trust my source of integers to give me only values that match a value in my <code>IntEnum</code>?</p> <p>In my realistic example, the source of the integers is the individual elements of a <code>bytes</code> string:</p> <pre><code>class OpCodes(IntEnum): ... def get_ops_from_script(script: bytes) -&gt; List[Tuple[OpCodes, Optional[bytes]]]: ops = [] while n &lt; len(script): op: OpCodes = script[n] ... # depending on the opcode some data may be extracted from the script # then n is incremented to find the next opcode ops.append((op, data)) return ops </code></pre>
<python><mypy><python-typing>
2023-11-17 08:35:35
0
1,105
PiRK
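One hedged workaround for the question above: instead of asserting trust through annotations, route each integer through the enum constructor, which both satisfies the checker (the result is the enum type) and validates at runtime, since an unknown value raises `ValueError`. The member values below are hypothetical stand-ins:

```python
from enum import IntEnum

class OpCodes(IntEnum):
    # hypothetical members for illustration
    OP_DUP = 0x76
    OP_HASH160 = 0xA9

def as_opcode(value: int) -> OpCodes:
    # OpCodes(value) returns the matching member, so the checker sees OpCodes,
    # and a byte that is not a known opcode fails loudly instead of silently
    return OpCodes(value)

op = as_opcode(0x76)
```

When the values are fully trusted and the runtime check is unwanted, `typing.cast(OpCodes, value)` is the zero-cost alternative.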
77,500,211
16,027,663
Can't Upgrade to Python 3.12 on Windows 8.1 - No "Upgrade Now" Option
<p>I am running Python 3.7.4 currently and need to upgrade to 3.12 on my Windows 8.1 PC.</p> <p>When I run the installer I don't get an &quot;Upgrade Now&quot; option as the various guides I have read suggest:</p> <p><a href="https://i.sstatic.net/XM0ik.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/XM0ik.png" alt="enter image description here" /></a></p> <p>I just get an &quot;Install Now&quot; option, like a fresh installation. Running that causes 3.12 to be installed alongside 3.7.</p> <p>How do I upgrade to 3.12 so that 3.7 is also deleted?</p>
<python>
2023-11-17 08:24:57
0
541
Andy
77,500,127
12,104,604
How to pass string data between processes using Python's multiprocessing and shared memory with Array('c', fixed_length)
<p>I looked at <a href="https://stackoverflow.com/q/21290960/12104604">past questions</a> about sharing Python string data using multiprocessing and shared memory, but it seems unresolved.</p> <p>I know that I can share numbers and arrays between processes with <a href="https://qiita.com/t_okkan/items/4127a87177ed2b2db148" rel="nofollow noreferrer">code like this</a>.</p> <pre><code>from multiprocessing import Value, Array, Process import time def process1(count,array): for i in range(5): time.sleep(0.3) count.value = count.value + 1 array[count.value - 1] = count.value print(&quot;process1:&quot;+str(count.value)) def process2(count,array): for i in range(5): time.sleep(0.7) count.value = count.value + 2 array[count.value - 1] = count.value print(&quot;process2:&quot;+str(count.value)) if __name__ == '__main__': count = Value('i', 0) array = Array('i', 15) process_test1 = Process(target=process1, args=[count,array], daemon=True) process_test2 = Process(target=process2, args=[count,array], daemon=True) process_test1.start() process_test2.start() process_test1.join() process_test2.join() print(array[:]) print(&quot;process ended&quot;) </code></pre> <p>However, I'm not sure how to handle a string data.</p> <p>Among the answers to the <a href="https://stackoverflow.com/q/21290960/12104604">past question</a>, it was written that using <code>Array('c', fixed_length)</code> would be good. 
But, this question did not get a correct answer.</p> <p>I tried the following code, but it doesn't work.</p> <pre><code>from multiprocessing import Value, Array, Process import time def process1(count,array): for i in range(5): time.sleep(0.3) count.value = count.value + 1 array[count.value - 1] = &quot;string:&quot;+str(count.value) print(&quot;process1:&quot;+str(count.value)) def process2(count,array): for i in range(5): time.sleep(0.7) count.value = count.value + 2 array[count.value - 1] = &quot;string:&quot;+str(count.value) print(&quot;process2:&quot;+str(count.value)) if __name__ == '__main__': count = Value('i', 0) array = Array('c', 15) process_test1 = Process(target=process1, args=[count,array], daemon=True) process_test2 = Process(target=process2, args=[count,array], daemon=True) process_test1.start() process_test2.start() process_test1.join() process_test2.join() print(array[:]) print(&quot;process ended&quot;) </code></pre>
<python><multiprocessing>
2023-11-17 08:08:05
1
683
taichi
77,500,119
3,732,793
unittest example from the docs not working for dynamic list
<p>From the <a href="https://docs.python.org/3/library/unittest.html#organizing-test-code" rel="nofollow noreferrer">documentation</a> this example</p> <pre><code>import unittest class WidgetTestCase(unittest.TestCase): def setUp(self): self.widget = Widget('The widget') def tearDown(self): self.widget.dispose() def suite(): suite = unittest.TestSuite() suite.addTest(WidgetTestCase('test_default_widget_size')) suite.addTest(WidgetTestCase('test_widget_resize')) return suite if __name__ == '__main__': runner = unittest.TextTestRunner() runner.run(suite()) </code></pre> <p>does not show any error by</p> <pre><code>pytest -vv --log-level=INFO mytest.py </code></pre> <p>but it also collects 0 items.</p> <p>How would I correctly use this way to dynamically generate a list of verbose test results to later have them in the report?</p>
<python><python-unittest>
2023-11-17 08:07:08
1
1,990
user3732793
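On the question above: pytest collects 0 items because it never calls the `suite()` function - it discovers classes and methods by the `test_*` naming convention, and the docs example defines no such methods. A sketch of keeping dynamic cases visible without a hand-built suite, using `subTest` (the inputs here are hypothetical stand-ins):

```python
import unittest

class WidgetTestCase(unittest.TestCase):
    # pytest (and unittest discovery) only collect methods named test_*
    def test_dynamic_cases(self):
        for size in (10, 20, 30):        # hypothetical dynamic inputs
            with self.subTest(size=size):
                self.assertGreater(size, 0)
```

Each failing `subTest` is reported separately; if every dynamic case should appear as its own line even when passing, `pytest.mark.parametrize` is the usual alternative.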
77,499,757
5,308,802
SQLAlchemy: return ORM objects from subquery
<p>I need to join several tables, then return distinct rows by some rule based on partitions of model C.id (let's use <code>row_number()==1</code> for simplicity). As it's a window function, it cannot be directly used in <code>where</code>, so requires an outer query to filter. And it works, but it turns that moving models into a subquery, the Alchemy now returns raw rows, not models objects <code>:(</code> GPT suggested to use <code>with_entities</code>, <code>add_entity</code> but they seem to add repeated <code>from</code> extractions, full cartesian product breaking the logic instead of simple parsing of already available columns. How can I achieve it?</p> <pre class="lang-py prettyprint-override"><code># Main query subq = db.query(ModelA, ModelB, ModelC) subq = subq.filter(ModelA.some_tag == ModelB.some_tag) subq = subq.filter(ModelA.some_tag == ModelC.some_tag) subq = subq.filter(ModelA.some_tag == &quot;31415926535&quot;) # just to simplify testing # Additional field by a window function partition_col = func.row_number().over(partition_by=ModelC.id).label(&quot;partition_col_name&quot;) subq = subq.add_column(partition_col) # Outer query subq = subq.subquery() q = db.query(subq).filter(subq.c[&quot;partition_col_name&quot;] == 1) </code></pre> <hr /> <p>Trying to add extra wrapping to get the objects... But why I get wrong results?? 
SQL of <code>finalq</code> is fully correct and returns 2 rows in the psql, but Alchemy sees only the 1st on <code>o_0</code></p> <pre class="lang-py prettyprint-override"><code>from sqlalchemy import text, func from psycopg2.extensions import adapt as sqlescape from database import get_db, ModelA, ModelB, ModelC db = next(get_db()) # Main query subq = db.query(ModelA, ModelB, ModelC) subq = subq.filter(ModelA.some_tag == ModelB.some_tag) subq = subq.filter(ModelA.some_tag == ModelC.some_tag) subq = subq.filter(ModelA.some_tag == &quot;31415926535&quot;) # just to simplify testing # Additional field by a window function partition_col = func.row_number().over(partition_by=ModelC.id).label(&quot;partition_col_name&quot;) subq = subq.add_column(partition_col) subq = subq.subquery() # Outer query outerq = db.query(subq).filter(subq.c[&quot;partition_col_name&quot;] == 1) # outerq = outerq.limit(128) # Final query for ORM compiledq = outerq.statement.compile(dialect=db.bind.dialect) params = {k: sqlescape(v) for k, v in compiledq.params.items()} finalq = db.query(ModelA, ModelB, ModelC).from_statement(text(str(compiledq) % params)) print(len(outerq.all())) # 2 print(len(finalq.all())) # 1 ?! </code></pre> <h3>Update</h3> <p>This way is simpler and works fine:</p> <pre class="lang-py prettyprint-override"><code>finalq = db.query(ModelA, ModelB, ModelC).from_statement(outerq) </code></pre> <p>But I still wonder why the previous approach with custom query compilation doesn't work πŸ€”</p>
<python><sqlalchemy><orm><window-functions>
2023-11-17 06:44:01
1
1,228
AivanF.
77,499,506
4,277,485
python pandas replace any string value in all the numeric column with constant
<p>As a part of data cleaning, Dataset have 100+ column and want to check if any numeric/float columns have string values and replace all of them with constant numeric value.</p> <p>Data:</p> <div class="s-table-container"> <table class="s-table"> <thead> <tr> <th style="text-align: left;">data1</th> <th style="text-align: center;">col1</th> <th style="text-align: right;">col2</th> <th style="text-align: left;">left</th> <th style="text-align: center;">center</th> </tr> </thead> <tbody> <tr> <td style="text-align: left;">abc</td> <td style="text-align: center;">0.123</td> <td style="text-align: right;">234</td> <td style="text-align: left;">678</td> <td style="text-align: center;">-123</td> </tr> <tr> <td style="text-align: left;">abc</td> <td style="text-align: center;">0.1345</td> <td style="text-align: right;">678</td> <td style="text-align: left;">900</td> <td style="text-align: center;">-0.456</td> </tr> <tr> <td style="text-align: left;">def</td> <td style="text-align: center;">-0.454</td> <td style="text-align: right;"><strong>OVG</strong></td> <td style="text-align: left;"><strong>testing</strong></td> <td style="text-align: center;">8.67</td> </tr> <tr> <td style="text-align: left;">def</td> <td style="text-align: center;"><strong>fail</strong></td> <td style="text-align: right;"><strong>NVT</strong></td> <td style="text-align: left;"><strong>test</strong></td> <td style="text-align: center;">6.90</td> </tr> <tr> <td style="text-align: left;">def</td> <td style="text-align: center;"><strong>fail</strong></td> <td style="text-align: right;">890</td> <td style="text-align: left;">900</td> <td style="text-align: center;">532</td> </tr> </tbody> </table> </div> <p>Replace any string in any numeric column to -1111 output:</p> <div class="s-table-container"> <table class="s-table"> <thead> <tr> <th style="text-align: left;">data1</th> <th style="text-align: center;">col1</th> <th style="text-align: right;">col2</th> <th style="text-align: left;">left</th> <th 
style="text-align: center;">center</th> </tr> </thead> <tbody> <tr> <td style="text-align: left;">abc</td> <td style="text-align: center;">0.123</td> <td style="text-align: right;">234</td> <td style="text-align: left;">678</td> <td style="text-align: center;">-123</td> </tr> <tr> <td style="text-align: left;">abc</td> <td style="text-align: center;">0.1345</td> <td style="text-align: right;">678</td> <td style="text-align: left;">900</td> <td style="text-align: center;">-0.456</td> </tr> <tr> <td style="text-align: left;">def</td> <td style="text-align: center;">-0.454</td> <td style="text-align: right;"><strong>-1111</strong></td> <td style="text-align: left;"><strong>-1111</strong></td> <td style="text-align: center;">8.67</td> </tr> <tr> <td style="text-align: left;">def</td> <td style="text-align: center;"><strong>-1111</strong></td> <td style="text-align: right;"><strong>-1111</strong></td> <td style="text-align: left;"><strong>-1111</strong></td> <td style="text-align: center;">6.90</td> </tr> <tr> <td style="text-align: left;">def</td> <td style="text-align: center;"><strong>-1111</strong></td> <td style="text-align: right;">890</td> <td style="text-align: left;">900</td> <td style="text-align: center;">532</td> </tr> </tbody> </table> </div> <p>Help to code in python pandas</p>
<python><python-3.x><pandas><dataframe><replace>
2023-11-17 05:26:33
1
438
Kavya shree
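A sketch for the cleaning question above: `pd.to_numeric(errors='coerce')` turns any non-parsable cell into `NaN`, which can then be filled with the constant; columns that are allowed to stay textual are simply excluded. The column names here are a small stand-in for the 100+ real ones:

```python
import pandas as pd

df = pd.DataFrame({
    "data1": ["abc", "def", "def"],
    "col1": ["0.123", "fail", "fail"],
    "col2": ["234", "OVG", "890"],
})

text_cols = ["data1"]                       # columns that may hold strings
num_cols = [c for c in df.columns if c not in text_cols]

# any cell that cannot be parsed as a number becomes NaN, then -1111
df[num_cols] = df[num_cols].apply(pd.to_numeric, errors="coerce").fillna(-1111)
print(df)
```

If the numeric columns are not known up front, one option is to treat every column where `pd.to_numeric` parses at least some values as numeric, but that heuristic should be checked against the real data first.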
77,499,424
3,382,426
Why can I convert an RGB image to a 17 color palette, but no fewer? (Python Pillow version 10.1.0)
<p>The image used doesn't seem to matter. Here's an RGB image:<br /> <a href="https://i.sstatic.net/xLAKX.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/xLAKX.png" alt="blue/black gradient" /></a></p> <pre class="lang-py prettyprint-override"><code>import numpy as np import PIL from PIL import Image img = Image.open('sigh.png') img_P17 = img.convert('P', palette = Image.Palette.ADAPTIVE, colors = 17) # 17 color palettized image img_P17.show() # Fewer than 17 color palette conversion not working for some reason: img_P16 = img.convert('P', palette = Image.Palette.ADAPTIVE, colors = 16) # 16 color palettized image img_P16.show() #ValueError: conversion not supported </code></pre> <p><a href="https://i.sstatic.net/zVV8q.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/zVV8q.png" alt="pil_17_color_palette_vs_16_color_palette" /></a></p>
<python><python-imaging-library><color-palette>
2023-11-17 04:55:19
0
772
user3382426
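Not an explanation of the 17-color boundary itself, but the workaround usually suggested for the Pillow question above is `Image.quantize`, which accepts small palette sizes where `convert('P', ..., colors=16)` raises. A sketch on a synthetic blue/black gradient, so no file is needed:

```python
from PIL import Image

# build a small blue/black gradient in place of sigh.png
img = Image.new("RGB", (64, 64))
for x in range(64):
    for y in range(64):
        img.putpixel((x, y), (0, 0, (x + y) * 2))

img_p16 = img.quantize(colors=16)   # palettized image with at most 16 colors
print(img_p16.mode, len(img_p16.getcolors()))
```

`quantize` also takes a `method` argument (e.g. `Image.Quantize.MEDIANCUT`) if the default result looks poor.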
77,499,326
13,430,381
How to cast array data from object to float64 after reading an Excel file with Pandas/NumPy?
<p>I am trying to import a number of Excel files in a for loop and cast a column from the file as an array of type <code>float64</code>, to be used later in an lmfit function. To do so, I read the Excel files (iterated by an index), put data from one column into a list, and then cast the list as an array. Shown below:</p> <pre><code>isolated_peak_df = pd.read_excel(path + r'/Residuals_{}.xlsx'.format(i), header=None) isolated_peak = [a for a in isolated_peak_df.transpose().iloc[0].loc[0:50]] isolated_peak = np.array(isolated_peak) </code></pre> <p>This works for other purposes, but when I attempt to use the newly-created array to do some math in an lmfit function, I get the error:</p> <blockquote> <p>TypeError: Cannot cast array data from dtype('O') to dtype('float64') according to the rule 'safe'</p> </blockquote> <p>I've seen a number of questions (e.g., <a href="https://stackoverflow.com/questions/52449101/cannot-cast-array-data-from-dtypeo-to-dtypefloat64-according-to-the-rule">here</a>, <a href="https://stackoverflow.com/questions/39452792/cannot-cast-array-data-from-dtypeo-to-dtypefloat64">here</a>, <a href="https://stackoverflow.com/questions/59652764/typeerror-cannot-cast-array-data-from-dtypeo-to-dtypefloat64-according">here</a>) asking for workarounds to the same error, but none of them resolve this specific situation, and trying to use advice in the answers continues to lead to errors. 
For example,</p> <ul> <li><p>I tried to add <code>dtype='float64'</code> as an argument to the <code>np.array()</code> call, like how Adrine Correya suggested <a href="https://stackoverflow.com/a/41974538/13430381">here</a>, but the same error remained.</p> </li> <li><p>I tried adding <code>isolated_peak_df = isolated_peak_df.astype('float')</code> the line after I defined isolated_peak_df, and I also tried adding <code>pd.to_numeric(isolated_peak)</code> in both the line after I defined isolated_peak as a list and in the line after I defined isolated_peak as an array, as Nick suggested in the comments below. However, in all cases, the same error remained.</p> </li> <li><p>Additionally, as MHO suggested <a href="https://stackoverflow.com/a/41161412/13430381">here</a>, I made sure that the sizes of the array shown above matches the size of the array with which mathematical operations are being performed (and I also did a basic subtraction by making an <code>np.zeros()</code> array with the same size to verify that the issue was with the newly created array above, and not the latter).</p> </li> <li><p>And, as xagg suggested <a href="https://stackoverflow.com/a/75890729/13430381">here</a> and hpaulj advised in the comments below, I made sure that all of the values passed were indeed floating-point numbers, not strings or other non-numeric data.</p> </li> </ul> <p>Other answers provided were specific to scipy/sci-kit and not lmfit. What is going wrong?</p> <hr /> <p><strong>Edit:</strong> I now suspect that the error is less to do with the data type of the array, as the error would imply, but in the size of the parameters I am using in the lmfit function. 
By decreasing the starting value of the &quot;x&quot; and &quot;y&quot; parameters by several orders of magnitude (e.g., down to 5), the error no longer appears...</p> <pre><code>freeParams = Parameters() freeParams.add(&quot;x&quot;, value = 5 * (10 ** 20), vary=True) freeParams.add(&quot;y&quot;, value = 5 * (10 ** 15), vary=True) </code></pre> <p>To my knowledge, this shouldn't cause an issue because float64 can store decimal numbers ranging between 2.2E-308 to 1.7E+308. So I'm not sure why 5e15 or 5e20 would induce an error. The rest of the code is shown below:</p> <pre><code>epsfcn=0.01 ftol=1.e-10 xtol=1.e-10 max_nfev=300 for i in absorption.index: isolated_peak_df = pd.read_excel(path + r'/Residuals_{}.xlsx'.format(i), header=None) isolated_peak = [a for a in isolated_peak_df.transpose().iloc[0].loc[0:50]] isolated_peak = np.array(isolated_peak) def calc_residual(freeParams, isolated_peak): residual = isolated_peak[5:22] - np.zeros(17) return residual mini = minimize(calc_residual, freeParams, args=(isolated_peak,), epsfcn=epsfcn, ftol=ftol, xtol=xtol, max_nfev=max_nfev, calc_covar=True, nan_policy=&quot;omit&quot;) </code></pre> <p>Note that &quot;x&quot; and &quot;y&quot; aren't used in the above code only because I wanted to test out what was going wrong, and so I chose to do a simple subtraction operation that should have given a result (in <code>residual</code>) equal to the originally imported array (<code>isolated_peak</code>) -- ultimately, however, I will need to incorporate both parameters, &quot;x&quot; and &quot;y.&quot;</p> <hr /> <p>Edit: Full error message, per hpaulj's request:</p> <pre><code>TypeError Traceback (most recent call last) Cell In[202], line 19 16 residual = isolated_peak[5:22] - np.zeros(17) 17 return residual ---&gt; 19 mini = minimize(calc_residual, freeParams, args=(isolated_peak,), epsfcn=epsfcn, ftol=ftol, xtol=xtol, max_nfev=max_nfev, calc_covar=True, nan_policy=&quot;omit&quot;) File 
~/opt/anaconda3/lib/python3.9/site-packages/lmfit/minimizer.py:2600, in minimize(fcn, params, method, args, kws, iter_cb, scale_covar, nan_policy, reduce_fcn, calc_covar, max_nfev, **fit_kws) 2460 &quot;&quot;&quot;Perform the minimization of the objective function. 2461 2462 The minimize function takes an objective function to be minimized, (...) 2594 2595 &quot;&quot;&quot; 2596 fitter = Minimizer(fcn, params, fcn_args=args, fcn_kws=kws, 2597 iter_cb=iter_cb, scale_covar=scale_covar, 2598 nan_policy=nan_policy, reduce_fcn=reduce_fcn, 2599 calc_covar=calc_covar, max_nfev=max_nfev, **fit_kws) -&gt; 2600 return fitter.minimize(method=method) File ~/opt/anaconda3/lib/python3.9/site-packages/lmfit/minimizer.py:2369, in Minimizer.minimize(self, method, params, **kws) 2366 if (key.lower().startswith(user_method) or 2367 val.lower().startswith(user_method)): 2368 kwargs['method'] = val -&gt; 2369 return function(**kwargs) File ~/opt/anaconda3/lib/python3.9/site-packages/lmfit/minimizer.py:1693, in Minimizer.leastsq(self, params, max_nfev, **kws) 1691 result.call_kws = lskws 1692 try: -&gt; 1693 lsout = scipy_leastsq(self.__residual, variables, **lskws) 1694 except AbortFitException: 1695 pass File ~/opt/anaconda3/lib/python3.9/site-packages/scipy/optimize/_minpack_py.py:426, in leastsq(func, x0, args, Dfun, full_output, col_deriv, ftol, xtol, gtol, maxfev, epsfcn, factor, diag) 424 if maxfev == 0: 425 maxfev = 200*(n + 1) --&gt; 426 retval = _minpack._lmdif(func, x0, args, full_output, ftol, xtol, 427 gtol, maxfev, epsfcn, factor, diag) 428 else: 429 if col_deriv: TypeError: Cannot cast array data from dtype('O') to dtype('float64') according to the rule 'safe' </code></pre>
<python><arrays><pandas><numpy><lmfit>
2023-11-17 04:16:24
1
526
ttoshiro
77,499,302
832,230
Calculate Bollinger Z-score using Pandas
<p>Given a series, I want to calculate its Bollinger Z-score (not the Bollinger band) using Pandas. There exists a prior <a href="https://stackoverflow.com/q/74283043/">question</a> which discusses how to calculate the band, but not the z-score.</p> <p>Given a variable <code>series</code> of type <code>pandas.Series</code> with dtype <code>float64</code>, I have the formula for calculating the z-score using this <a href="https://stackoverflow.com/a/52187368/">answer</a>:</p> <pre><code>zs = (data - SMA(data, n=20)) / SD(data, n=20) </code></pre> <p>The above formula however is not Pandas code which I require.</p>
<python><pandas><technical-indicator>
2023-11-17 04:08:37
1
64,534
Asclepius
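A direct pandas translation of the formula in the question above, as a sketch; whether population (`ddof=0`) or sample (`ddof=1`) standard deviation is wanted depends on the convention being matched:

```python
import pandas as pd

def bollinger_zscore(series: pd.Series, n: int = 20) -> pd.Series:
    sma = series.rolling(n).mean()      # SMA(data, n)
    sd = series.rolling(n).std(ddof=0)  # SD(data, n); first n-1 values are NaN
    return (series - sma) / sd
```

Usage mirrors the pseudocode: `zs = bollinger_zscore(data, n=20)` corresponds to `zs = (data - SMA(data, n=20)) / SD(data, n=20)`.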
77,499,188
2,604,247
What is a Plain Text Translation of this SQL Query?
<p>My SQL knowledge is really basic, and need some help in <em>translating</em> this rather long query (running in a python script querying some AWS athena database, with some f-strings embedded) to plain English or flow chart. I understand concepts of SELECT * WHERE, JOINS etc., but this seems really advanced, renaming multiple tables and handling multiple cases.</p> <pre><code>SELECT usc_customer_id, escape_customer_id, use_sms, start_date, least(date('{segment_date_str}'), max(booking_date)) as &quot;max_booking_date&quot;, date('{segment_date_str}') as segment_date, count(t.job_no) as bookings, count(a.trip_no) as trips, sum(total_trip_fare) as total_fare, sum(promo_amt) as &quot;promo_amount&quot;, sum(case when promo_amt &gt; 0 then 1 else 0 end) as promo_times FROM ( SELECT m.usc_customer_id, m.escape_customer_id, use_sms, case when date(user_create_time) &gt; date('{start_date}') then date( user_create_time) else date('{start_date}') end as start_date, date(b.&quot;req_pickup_dt&quot;) as booking_date, j.job_no, j.status FROM (SELECT m.*, CASE WHEN news_sms = 1 OR third_party_sms = 1 THEN True ELSE False END AS use_sms, greatest(COALESCE(c.create_time, m.usc_customer_create_time,m.escape_customer_create_time), COALESCE(m.usc_customer_create_time, m.escape_customer_create_time, c.create_time), COALESCE(m.escape_customer_create_time, c.create_time, m.usc_customer_create_time)) as user_create_time FROM &quot;publish_cab&quot;.&quot;usc_customer&quot; c join customer_mapping m on c.customer_id = m.usc_customer_id where country_code = '65') m join &quot;publish_cab&quot;.&quot;escape_booking&quot; as b on m.escape_customer_id = b.cust_id join &quot;publish_cab&quot;.&quot;escape_job&quot; as j on b.booking_id = j.booking_id where b.bookingdate between '{start_date_short}' and ' {segment_date_str_short}' and j.createddate between '{start_date_short}' and ' {segment_date_str_short}' ) t left join ( select * from 
&quot;publish_cab&quot;.&quot;escape_trip_details&quot; where createddate between '{start_date_short}' and ' {segment_date_str_short}' ) a on t.job_no = a.job_no group by usc_customer_id, escape_customer_id, start_date, use_sms </code></pre> <p>Would be grateful if it can be translated into some sort of block diagrammatic flow chart, or corresponding pandas dataframe operations for someone who is relatively new to SQL, but in general understands programming and data structures well.</p>
<python><sql><mysql><query-optimization>
2023-11-17 03:26:12
2
1,720
Della
77,499,112
2,446,374
query github commits from a certain date using pygithub
<p>I have the following code:</p> <pre class="lang-py prettyprint-override"><code>from github import Github ... g = Github(token) query = f'org:{org_name} author:{username} since=2021-10-19T00:00:00Z' commits = g.search_commits(query, sort='author-date', order='desc') </code></pre> <p>If I have the &quot;since&quot; parameter there, I get 0 results. If I take it out, I get hundreds - and that's the problem. I'm doing some processing which can work this way, so I want it to be incremental - i.e. cache the results, and when I call it a few days later, only find the new commits since the last time I read.</p> <p>From the documentation this should work - but it doesn't. What am I doing wrong?</p>
<python><github><pygithub>
2023-11-17 02:51:31
1
3,724
Darren Oakey
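For the question above: `since` is a parameter of the REST *commits* endpoint, not of commit *search*; in a search query the date filter is the `author-date` (or `committer-date`) qualifier. A sketch of the query string, with hypothetical org and user values:

```python
org_name, username = "my-org", "octocat"   # hypothetical values
cutoff = "2021-10-19"

# search qualifiers use key:value syntax; ">=" keeps commits on/after the cutoff
query = f"org:{org_name} author:{username} author-date:>={cutoff}"
print(query)
# commits = g.search_commits(query, sort="author-date", order="desc")
```

With `since=...` in the string, GitHub treats it as a free-text term to match in commit messages, which is why it returned 0 results rather than an error.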
77,498,968
14,503,336
FastAPI CORS, wildcard in allow origins not working
<p>Simplified versions of my code, using two approaches.</p> <pre><code>from fastapi import FastAPI from fastapi.middleware.cors import CORSMiddleware from starlette.middleware import Middleware middleware = [ Middleware( CORSMiddleware, allow_origins=[&quot;*&quot;], allow_credentials=False, allow_methods=[&quot;*&quot;], allow_headers=[&quot;*&quot;], ) ] app = FastAPI(middleware=middleware) </code></pre> <p>From the documentation.</p> <pre><code>from fastapi import FastAPI, Request from fastapi.middleware.cors import CORSMiddleware app = FastAPI() app.add_middleware( CORSMiddleware, allow_origins=[&quot;*&quot;], allow_methods=[&quot;*&quot;], allow_headers=[&quot;*&quot;], ) </code></pre> <p>CORS always fails. It gives me the <code>CORS Missing Allow Origin</code> error (or just a general <code>CORS</code> error). I have double, triple, and centuple checked the headers. The request always sends an origin and FastAPI never responds with <code>Access-Control-Allow-Origin</code>.</p> <p>I feel as if I am missing something incredibly simple here. My front end uses NextJS, and I am caching most of my requests with Redis. The entire network is under Docker and I have tried CORS from both <code>localhost</code> and outside proxies. I don't think any of that info is pertinent, but you never know.</p>
<python><next.js><cors><fastapi><middleware>
2023-11-17 01:59:17
1
599
Anonyo Noor
77,498,924
179,234
Trying pip download with platform manylinux1_x86_64 still gives error for windows
<p>I have a web server (debian) that is not connected to the internet and would like to run my code in it. I have the code running in my local machine (windows) and also have it running on a dev server that has internet access to it (debian)</p> <p>To do that I want to</p> <ul> <li>download all needed pip libraries in my local machine, which is a windows machine</li> <li>Upload this libraries to the webserver, which is a debian machine</li> </ul> <p>In my local machine, I created a venv <code>python -m venv .venv</code></p> <p>and I extracted all the needed libraries using <code>pip freeze &gt; requirements.txt</code> from the dev server (to have same library versions that would work with debian)</p> <p>From my windows machine, and through the venv, I ran this command</p> <pre><code>pip download --platform manylinux1_x86_64 --no-deps --python-version 311 -r requirements.txt </code></pre> <p>That seems to be downloading for most of the libraries until I encountered this error when downloading uvloop library:</p> <pre><code>Collecting uvloop Using cached uvloop-0.19.0.tar.gz (2.3 MB) Installing build dependencies ... done Getting requirements to build wheel ... error error: subprocess-exited-with-error Γ— Getting requirements to build wheel did not run successfully. 
β”‚ exit code: 1 ╰─&gt; [18 lines of output] Traceback (most recent call last): File &quot;C:\Users\**\Downloads\requirements\backend\.venv\Lib\site-packages\pip\_vendor\pyproject_hooks\_in_process\_in_process.py&quot;, line 353, in &lt;module&gt; main() File &quot;C:\Users\**\Downloads\requirements\backend\.venv\Lib\site-packages\pip\_vendor\pyproject_hooks\_in_process\_in_process.py&quot;, line 335, in main json_out['return_val'] = hook(**hook_input['kwargs']) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File &quot;C:\Users\**\Downloads\requirements\backend\.venv\Lib\site-packages\pip\_vendor\pyproject_hooks\_in_process\_in_process.py&quot;, line 118, in get_requires_for_build_wheel return hook(config_settings) ^^^^^^^^^^^^^^^^^^^^^ File &quot;C:\Users\**\AppData\Local\Temp\pip-build-env-u9ekvm49\overlay\Lib\site-packages\setuptools\build_meta.py&quot;, line 355, in get_requires_for_build_wheel return self._get_build_requires(config_settings, requirements=['wheel']) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File &quot;C:\Users\**\AppData\Local\Temp\pip-build-env-u9ekvm49\overlay\Lib\site-packages\setuptools\build_meta.py&quot;, line 325, in _get_build_requires self.run_setup() File &quot;C:\Users\**\AppData\Local\Temp\pip-build-env-u9ekvm49\overlay\Lib\site-packages\setuptools\build_meta.py&quot;, line 341, in run_setup exec(code, locals()) File &quot;&lt;string&gt;&quot;, line 8, in &lt;module&gt; RuntimeError: uvloop does not support Windows at the moment [end of output] note: This error originates from a subprocess, and is likely not a problem with pip. error: subprocess-exited-with-error Γ— Getting requirements to build wheel did not run successfully. β”‚ exit code: 1 ╰─&gt; See above for output. note: This error originates from a subprocess, and is likely not a problem with pip. </code></pre> <p>does that mean that the --platform option is ignored? 
shouldn't it be downloading a version that works with linux?</p> <p>I also noticed in 2 other libraries that it cannot find any version for them which doesn't make sense because they are already installed in the other dev machine. The libraries are:</p> <ul> <li>onnxruntime</li> <li>pulsar-client</li> </ul>
<python><pip>
2023-11-17 01:38:34
0
1,417
Sarah
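On the pip question above: `--platform` is not ignored, but it only applies to wheels. When pip falls back to a source distribution (as with the cached `uvloop-0.19.0.tar.gz`), it tries to build it locally on Windows, which is what fails. Adding `--only-binary=:all:` forbids sdists entirely, so a cross-platform download either fetches a manylinux wheel or fails fast with a clear "no matching distribution" message - which is also the likely story for onnxruntime and pulsar-client under the old `manylinux1` tag, where a newer tag such as `manylinux2014_x86_64` may be needed. A sketch of the command (paths and tag values are assumptions to adapt):

```shell
# download Linux wheels from Windows: wheels only, no local sdist builds
pip download -r requirements.txt \
    --dest ./packages \
    --platform manylinux2014_x86_64 \
    --python-version 311 \
    --only-binary=:all: \
    --no-deps
```

Repeating `--platform` with several tags (e.g. both `manylinux2014_x86_64` and `manylinux1_x86_64`) lets pip accept whichever wheel a given package publishes.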
77,498,920
3,140,750
Python Type Narrowing with TypeGuard: Narrowing Return type that is a TypeVar
<p>Consider the following case:</p> <pre><code>from typing import TypeVar from typing_extensions import TypeGuard _T = TypeVar('_T', str, int) def is_str(val: _T) -&gt; TypeGuard[str]: return isinstance(val, str) def is_int(val: _T) -&gt; TypeGuard[int]: return isinstance(val, int) def process_str_int(data: _T) -&gt; _T: if is_str(data): # At this point, `data` is narrowed down to `str` print(&quot;Returning a string&quot;) return data elif is_int(data): print(&quot;Returning an int&quot;) return data def process_str_int_with_isinstance(data: _T) -&gt; _T: if isinstance(data, str): # At this point, `data` is narrowed down to `str` print(&quot;Returning a string&quot;) return data elif isinstance(data, int): print(&quot;Returning an int&quot;) return data process_str_int(&quot;hello&quot;) </code></pre> <p>At the point of return in <code>process_str_int</code> I get an error in pyright complaining that 'Expression of type &quot;str&quot; cannot be assigned to return type &quot;_T@process_list&quot;' (and similarly for the int case). The Python interpreter is Python 3.9.</p> <p>This does not happen with isinstance, where the TypeVar is correctly narrowed down, and thus the correct return type is inferred and matched against the return value.</p> <p>How can I achieve a similar behaviour using custom type guards?</p>
<python><python-typing><typeguards><pyright>
2023-11-17 01:37:52
1
409
Atimaharjun
77,498,859
7,077,532
Python Dataframe: For Each Row Pick One Value from Five Different Columns And Place In New Column Based on if Value is in List
<p>Let's say I'm starting with the following input table below:</p> <div class="s-table-container"> <table class="s-table"> <thead> <tr> <th>date</th> <th>id</th> <th>val1</th> <th>val2</th> <th>val3</th> <th>val4</th> <th>val5</th> <th>val_final</th> </tr> </thead> <tbody> <tr> <td>2023-08-29</td> <td>B3241</td> <td>496C</td> <td></td> <td></td> <td></td> <td></td> <td></td> </tr> <tr> <td>2023-09-08</td> <td>A3290</td> <td>349C</td> <td>078F</td> <td>274F</td> <td></td> <td></td> <td></td> </tr> <tr> <td>2023-09-12</td> <td>D2903</td> <td>349C</td> <td>072F</td> <td>307C</td> <td>170F</td> <td>201D</td> <td></td> </tr> <tr> <td>2023-09-14</td> <td>I13490</td> <td>497C</td> <td>0349</td> <td>303F</td> <td>101A</td> <td></td> <td></td> </tr> </tbody> </table> </div> <p>The code to create the initial input table is below:</p> <pre><code>import pandas as pd df = pd.DataFrame({'date':[&quot;2023-08-29&quot;,&quot;2023-09-08&quot;,&quot;2023-09-12&quot;, &quot;2023-09-14&quot;],'id':[&quot;B3241&quot;,&quot;A3290&quot;,&quot;D2903&quot;, &quot;I13490&quot;],'val1':[&quot;496C&quot;,&quot;349C&quot;,&quot;349C&quot;, &quot;497C&quot;], 'val2':[&quot;&quot;,&quot;078F&quot;,&quot;072F&quot;, &quot;0349&quot;], 'val3':[&quot;&quot;,&quot;274F&quot;,&quot;307C&quot;, &quot;303F&quot;], 'val4':[&quot;&quot;,&quot;&quot;,&quot;170F&quot;, &quot;101A&quot;], 'val5':[&quot;&quot;,&quot;&quot;,&quot;201D&quot;,&quot;&quot;]}) </code></pre> <p>I want to look at columns &quot;val1&quot; through &quot;val5&quot; and see which rows contain a value from my code list. 
I want to populate the &quot;val_final&quot; column accordingly (if the value is in the list).</p> <pre><code>code_list = ['349C', '303F', '201D', '497C'] </code></pre> <p>If multiple columns contain values from code_list I want to pick the one that's on the right most column.</p> <p>Given the above logic, my desired output table would look like this:</p> <div class="s-table-container"> <table class="s-table"> <thead> <tr> <th>date</th> <th>id</th> <th>val1</th> <th>val2</th> <th>val3</th> <th>val4</th> <th>val5</th> <th>val_final</th> </tr> </thead> <tbody> <tr> <td>2023-08-29</td> <td>B3241</td> <td>496C</td> <td></td> <td></td> <td></td> <td></td> <td></td> </tr> <tr> <td>2023-09-08</td> <td>A3290</td> <td>349C</td> <td>078F</td> <td>274F</td> <td></td> <td></td> <td>349C</td> </tr> <tr> <td>2023-09-12</td> <td>D2903</td> <td>349C</td> <td>072F</td> <td>307C</td> <td>170F</td> <td>201D</td> <td>201D</td> </tr> <tr> <td>2023-09-14</td> <td>I13490</td> <td>497C</td> <td>349C</td> <td>303F</td> <td>101A</td> <td></td> <td>303F</td> </tr> </tbody> </table> </div> <p>I searched all over StackOverflow to try and solve this problem but to no avail. I assume doing if then else is an option starting with column &quot;val5&quot; and going down to column &quot;val1&quot; but I want to learn a more efficient way to do this.</p>
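A hedged sketch of one vectorised approach (column names follow the question; the data is the sample table above): mask out every cell that is not in `code_list`, then forward-fill along each row so the right-most surviving code lands in the last column:

```python
import pandas as pd

df = pd.DataFrame({'date': ["2023-08-29", "2023-09-08", "2023-09-12", "2023-09-14"],
                   'id': ["B3241", "A3290", "D2903", "I13490"],
                   'val1': ["496C", "349C", "349C", "497C"],
                   'val2': ["", "078F", "072F", "0349"],
                   'val3': ["", "274F", "307C", "303F"],
                   'val4': ["", "", "170F", "101A"],
                   'val5': ["", "", "201D", ""]})
code_list = ['349C', '303F', '201D', '497C']

val_cols = ['val1', 'val2', 'val3', 'val4', 'val5']
vals = df[val_cols]

# where() keeps only cells whose value is in code_list (everything else
# becomes NaN); ffill(axis=1) pushes the surviving values rightwards, so
# the right-most match ends up in 'val5'.
df['val_final'] = vals.where(vals.isin(code_list)).ffill(axis=1)['val5'].fillna('')
print(df['val_final'].tolist())  # ['', '349C', '201D', '303F']
```

This avoids the per-column if/else chain entirely and scales to any number of `val` columns.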
<python><dataframe><list><filter><isin>
2023-11-17 01:09:43
1
5,244
PineNuts0
77,498,677
1,227,778
How to update Django url (to include data/param) after POST
<p>I have a search form (place names) on my home page with an auto-complete dropdown on the search input field. When the user presses enter after selecting an item in the search box the form does a post. When the results return I still have the same url (<a href="https://mywebsite.com" rel="nofollow noreferrer">https://mywebsite.com</a>). How do I get the selected search entry (a location name) to appear in the url so users can book mark it for later use? eg <a href="https://mywebsite.com/mylocation" rel="nofollow noreferrer">https://mywebsite.com/mylocation</a></p> <p>I have got the GET code working to get the location in the url - just don't know how to update the url to add the location string after the search form post.</p> <p>I am new to Django so really winging it. Have searched but found no discussion on this even in the Django docs.</p>
<python><django>
2023-11-17 00:01:50
1
495
The Huff
77,498,663
858,675
get sharepoint file programatically (I know the url from a web browser)
<p>I use SharePoint from a web browser.</p> <p>I visualize a file (xls file) that has the following URL</p> <p><a href="https://mysite.sharepoint.com/:x:/r/_layouts/15/Doc.aspx?sourcedoc=%7Bxxxxxx-xxxxx-xxxx%7D&amp;file=filename.xlsx&amp;action=default&amp;mobileredirect=true" rel="nofollow noreferrer">https://mysite.sharepoint.com/:x:/r/_layouts/15/Doc.aspx?sourcedoc=%7Bxxxxxx-xxxxx-xxxx%7D&amp;file=filename.xlsx&amp;action=default&amp;mobileredirect=true</a></p> <p>If I right-click on the document and click on 'Copy URL' I get a URL of the type:</p> <p><a href="https://mysite.sharepoint.com/:x:/g/XXXXXXXXX-sXXXXXXXX?e=yyyyy" rel="nofollow noreferrer">https://mysite.sharepoint.com/:x:/g/XXXXXXXXX-sXXXXXXXX?e=yyyyy</a></p> <p>My question is how can I programmatically get this document?</p> <p>I tried using Office365-REST-Python-Client (version 2.5.2)</p> <p>and managed to get a client context (authentication with the <strong>same</strong> username and password as used in my browser)</p> <p>with the following snippet:</p> <pre><code>import os

from office365.runtime.auth.authentication_context import AuthenticationContext
from office365.sharepoint.client_context import ClientContext

def get_ctx():
    sharepoint_url = os.environ[&quot;SP_URL&quot;]
    username = os.environ[&quot;SP_USER&quot;]
    password = os.environ[&quot;SP_PASSWORD&quot;]
    auth_ctx = AuthenticationContext(url=sharepoint_url)
    auth_ctx.acquire_token_for_user(username, password)
    ctx = ClientContext(sharepoint_url, auth_ctx)
    return ctx
</code></pre> <p>I manage to list all existing document libraries and the items in each of them</p> <pre><code>def get_doc_libraries(ctx):
    web = ctx.web
    ctx.load(web)
    ctx.execute_query()
    lists = web.lists
    ctx.load(lists)
    ctx.execute_query()
    for sp_list in lists:
        props = sp_list.properties
        if props['BaseTemplate'] == 101:  # Document libraries
            library_name = props[&quot;Title&quot;]
            doc_library = ctx.web.lists.get_by_title(library_name)
            ctx.load(doc_library)
            ctx.execute_query()
            items = doc_library.get_items()
            ctx.load(items)
            ctx.execute_query()
            paged_items = doc_library.items.paged(500, page_loaded=print_progress).get().execute_query()
            for item in paged_items:
                pass  # do_something_with_item
</code></pre> <p>I get a few thousand items, which might be realistic for this SharePoint URL, but most of the items don't have titles and I don't know how to find out whether any of these is referring to the document that I have a URL for.</p> <p>Attempts at using</p> <pre><code>def get_file_with_rel_url(ctx, url, sharepoint_url):
    sharepoint_url = sharepoint_url.rstrip(&quot;/&quot;)
    rel_url = url.replace(sharepoint_url, &quot;&quot;)
    response = File.open_binary(ctx, rel_url)
    with open(&quot;bla.xls&quot;, 'wb') as output_file:
        output_file.write(response.content)
</code></pre> <p>fail.</p> <p>I get an error message of the kind</p> <p><code>{&quot;error&quot;:{&quot;code&quot;:&quot;-2130575338, Microsoft.SharePoint.SPException&quot;,&quot;message&quot;:{&quot;lang&quot;:&quot;fr-FR&quot;,&quot;value&quot;:&quot;Le fichier /:x:/g/XXXXXX-XXXXXX n'existe pas.&quot;}}}</code></p> <p>Which means in English <code>The file /:x:/g/XXXXXX-XXXXXX doesn't exist</code></p> <p>I think the URLs given by the web browser aren't the ones I should use in the API.</p> <p>But I don't know how to determine the right URL, or how to get something like a UID for the document that I can use to fetch it.</p> <p>The file I am looking at is not a file that I own. It has been shared by somebody else, but I have read and write permissions to it (in my browser).</p>
<python><sharepoint><microsoft365>
2023-11-16 23:57:49
0
5,660
gelonida
77,498,534
10,973,108
Selenium + Cloudflare - trouble to access page
<p>I'm stuck trying to access this website - https://...</p> <p>Cloudflare is blocking the Selenium driver.</p> <p>Currently I'm using <a href="https://github.com/seleniumbase/SeleniumBase" rel="nofollow noreferrer">https://github.com/seleniumbase/SeleniumBase</a>, which supports an undetected driver, but it doesn't work.</p> <p>I've tried selenium-stealth and undetected-chromedriver, but they don't work either.</p> <p>Can you help me?</p> <p>Currently, I'm stuck on this page.</p> <p><a href="https://i.sstatic.net/4oiHW.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/4oiHW.png" alt="enter image description here" /></a></p> <p>Whenever I click the input to check if I am human, the page auto-reloads and waits for the click again, creating a loop on this page.</p> <p>If it can be resolved using Playwright or something else, that's fine for me!</p> <p>Edit: removed the URL; it was solved with a comment!</p>
<python><selenium-webdriver><selenium-chromedriver><cloudflare><seleniumbase>
2023-11-16 23:19:37
0
348
Daniel Bailo
77,498,449
11,141,816
Why V+1 in Embedding layer(`Embedding(V+1,D)(i)`) where V the vocabulary size?
<p>Suppose</p> <pre><code>from tensorflow.keras.preprocessing.text import Tokenizer

tokenizer = Tokenizer()
...
V = len(tokenizer.word_index)
</code></pre> <p>where <code>V</code> is the vocabulary size.</p> <p>I was told that the embedding layer is</p> <pre><code>x = Embedding(V+1, D)(i)
</code></pre> <p>where <code>D</code> is the dimension of the output vector. But I wasn't sure why the size of the Embedding layer had to be <code>(V+1, D)</code> instead of <code>(V, D)</code>, especially because the index of <code>tokenizer.word_index</code> starts at <code>1</code> instead of <code>0</code>, i.e.</p> <pre><code>tokenizer.word_index
{'UNK': 1, 'the': 2, ',': 3, '.': 4, 'of': 5, 'and': 6, ...}
</code></pre> <p>so the maximum index (if converted to a list) of <code>tokenizer.word_index</code> (a dictionary) is actually <code>V-1</code>.</p> <p>Why <code>V+1</code> in the Embedding layer (<code>Embedding(V+1, D)(i)</code>) where <code>V</code> is the vocabulary size?</p>
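A hedged illustration of the usual reason (assumed here, since it depends on how the sequences are fed to the layer): Keras' `pad_sequences` pads with index 0, and the `Tokenizer` assigns word indices starting at 1, so the `Embedding` layer must accept every index from 0 through V inclusive, which is V + 1 rows:

```python
# Pure-Python illustration (no TensorFlow needed): the Tokenizer's
# word_index starts at 1, and index 0 is reserved for padding by
# pad_sequences, so an Embedding layer must hold rows for 0..V.
word_index = {'UNK': 1, 'the': 2, ',': 3, '.': 4}  # toy word_index
V = len(word_index)

assert max(word_index.values()) == V   # largest real word index is V, not V-1
assert 0 not in word_index.values()    # 0 is never assigned to a word

valid_indices = set(range(0, V + 1))   # indices the layer can receive
print(len(valid_indices))              # V + 1 rows, hence Embedding(V+1, D)
```

So `V` counts the words, but the table needs one extra row because index 0 (padding) is a valid input that is never a word.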
<python><tensorflow><shapes><word-embedding>
2023-11-16 22:52:46
1
593
ShoutOutAndCalculate
77,498,423
1,175,788
How do I download non-dataframe data from a Databricks notebook?
<p>I'm trying to download a schema of a dataframe, or essentially the output of <code>df.printSchema()</code> but the schema is too big that it's getting truncated in the output. This is in a production environment too, so I can't temporarily store the output onto the Hive metastore. Any way to fully print or save the output of printSchema()?</p>
<python><databricks>
2023-11-16 22:44:56
1
3,011
simplycoding
77,498,397
9,687,785
How to set size of total memory for sparkling water cluster in Databricks
<p>I am working in Databricks with Sparkling Water 3.40.0.4; I have a total driver memory of 512 GB and six workers with 64 GB each. When I call</p> <pre><code>hc = H2OContext.getOrCreate() </code></pre> <p>the internal H2O cluster is created across six workers, but the total cluster memory size is roughly 60 GB. I can create a normal, non-sparkling water H2O cluster and pass the max_mem_size and min_mem_size arguments to the init() method, which will return a much larger sized cluster, but I can't seem to find how to do that on Databricks.</p> <pre><code>h2o.init(max_mem_size=&quot;200g&quot;) </code></pre> <p>That returns roughly 200 GB of memory for the cluster.</p> <p>I created a local Spark installation and changed the spark.driver.memory property, and changing this resulted in a larger sparkling water cluster size, but explicitly setting that property in Data Bricks has no change on the sparkling water cluster there.</p> <p>Is there a configuration I can pass to Spark or an internal H2O cluster to set a larger memory size on Databricks?</p>
<python><apache-spark><databricks><h2o><sparkling-water>
2023-11-16 22:38:15
0
683
omoshiroiii
77,498,172
18,237,126
Tkinter displaying an image on fullscreen
<p>I am trying to set a background image for a menu. However, the image comes with some white padding that I can't quite remove.</p> <p>The image loads fine and it sits behind every element I place on the screen.</p> <p>I am looking for a way to remove this padding. Or, in other words, for the image to fit the whole screen.</p> <p>This is my code</p> <pre class="lang-py prettyprint-override"><code>from tkinter import Tk, Label, Button, Menu, PhotoImage

window = Tk()
window.state('zoomed')

color1 = '#020f12'
color2 = '#05d7ff'
color3 = '#65e7ff'
color4 = 'BLACK'

background = PhotoImage(file = r'C:\Users\acout\OneDrive - Activos Reais\Uni\Git\Tkinter\cars_background.png')
background_label = Label(window, image=background)
background_label.place(x=0, y=0, relwidth=1, relheight=1)

def configure_window():
    window.geometry(&quot;800x600&quot;)  # this might interfere with window.state zoomed idk
    window.configure(background='#b3ffff')
    window.title(&quot;My Tkinter game yo - car go brrr&quot;)
    window.iconbitmap(default = r'C:\Users\acout\OneDrive - Activos Reais\Uni\Git\Tkinter\carro.ico')
    window.columnconfigure(0, weight=1)
    window.columnconfigure(1, weight=1)
    window.rowconfigure(0, weight=1)
    window.rowconfigure(1, weight=1)
    window.rowconfigure(2, weight=1)
</code></pre> <p>This is what it looks like: <a href="https://i.sstatic.net/fjFkU.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/fjFkU.jpg" alt="enter image description here" /></a></p>
<python><image><tkinter><fullscreen>
2023-11-16 21:37:45
1
314
AntΓ³nio Rebelo
77,498,034
215,120
Weird pyspark behavior when filtering on booleans
<p>We have some existing code that used to work in Spark 3.1, and is now not working in Spark 3.3. It is really trivial code, so it's blowing my mind why the filter isn't working:</p> <pre><code>widgets_df = widgets_df.filter(F.col(&quot;_id&quot;).isin(*widget_ids))
</code></pre> <p>For some reason, even if there are rows where F.col(&quot;_id&quot;) is in widget_ids, we end up with an empty widgets_df.</p> <p>So, we decided to refactor this some to debug what's going on</p> <pre><code>widgets_df = widgets_df.withColumn(&quot;IdMatch&quot;, F.col(&quot;_id&quot;).isin(widget_ids))

# print outs for debugging
widgets_df.printSchema()
widgets_df.show(vertical=True)
# We can clearly see the IdMatch column with value true on one row.
# IdMatch is of type Boolean

widgets_df = widgets_df.filter(F.col(&quot;IdMatch&quot;) == True)  # But this will drop all rows
</code></pre> <p>Again we have an empty widgets_df.</p> <p>We are likely missing something relatively trivial here. What are we missing?</p>
<python><apache-spark><pyspark><apache-spark-sql><aws-glue>
2023-11-16 21:02:30
0
18,431
rouble
77,498,009
13,033,188
Python Read from csv1 and reference csv2 to write to csv3
<p>I have two csv files, csv1 is the working file that changes, csv2 is static and used as a reference set.</p> <p>csv1 - input.csv</p> <div class="s-table-container"> <table class="s-table"> <thead> <tr> <th>Name</th> <th>Type</th> <th>Owner</th> <th>Status</th> <th>Date</th> <th>Group</th> </tr> </thead> <tbody> <tr> <td>bananas</td> <td>fruit</td> <td>Joe</td> <td>In Stock</td> <td>1/1/23</td> <td></td> </tr> <tr> <td>apples</td> <td>fruit</td> <td>jim</td> <td>Out Of Stock</td> <td>1/2/23</td> <td></td> </tr> <tr> <td>tomato</td> <td>veggie</td> <td>bob</td> <td>In Stock</td> <td>1/3/23</td> <td></td> </tr> <tr> <td>potato</td> <td>vegie</td> <td>tom</td> <td>Out Of Stock</td> <td>1/4/23</td> <td></td> </tr> <tr> <td>kiwi</td> <td>fruit</td> <td>jane</td> <td>In Stock</td> <td>1/5/23</td> <td></td> </tr> <tr> <td>chicken</td> <td>meat</td> <td>francis</td> <td>Out Of Stock</td> <td>1/6/23</td> <td></td> </tr> <tr> <td>beef</td> <td>meat</td> <td>linda</td> <td>In Stock</td> <td>1/7/23</td> <td></td> </tr> </tbody> </table> </div> <p>csv2 - reference.csv</p> <div class="s-table-container"> <table class="s-table"> <thead> <tr> <th>Type</th> <th>Group</th> </tr> </thead> <tbody> <tr> <td>fruit</td> <td>1</td> </tr> <tr> <td>veggie</td> <td>2</td> </tr> <tr> <td>meat</td> <td>3</td> </tr> </tbody> </table> </div> <p>I found this post here <a href="https://stackoverflow.com/a/14257599">https://stackoverflow.com/a/14257599</a> which helped me get started but it's the processing that doesn't seem to be working.</p> <p>I'm using this code:</p> <pre><code>with open(&quot;input.csv&quot;, &quot;r&quot;) as csv_input, open(&quot;reference.csv&quot;, &quot;r&quot;) as assign_csv, open(&quot;output.csv&quot;, &quot;w&quot;) as out_file:
    reader = csv.reader(csv_input)
    reader2 = csv.reader(assign_csv)
    writer = csv.writer(out_file)
    for error in reader:
        writer.writerow(error)
        for group in reader2:
            if group[0] in error[1]:
                error[5] = group[1]
                writer.writerow(error)
</code></pre>
<p>This reads the input and reference files just fine but at the very bottom in the if statement it's not doing anything and I'm not sure why. Basically I want it to loop through every row in the input.csv and check the value in the Type column and then loop through the reference.csv and if the text is contained there, then write to the Group column in the output.csv.</p> <p>Currently the code is essentially just replicating input.csv to output.csv without writing anything to the cells for the Group column. I know the loop logic is correct because I tried it with a separate code sample and it worked just fine so I think my problem is the if statement and where I have the writer.writerow(error) line placed.</p>
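One likely culprit, sketched below under the question's file layout: `reader2` is a one-shot iterator, so the inner `for group in reader2` loop is exhausted after the first input row and never matches again, and `writer.writerow(error)` also runs both before and after the update. A minimal sketch that reads the reference once into a dict instead (tiny stand-in files are created first so it runs end to end):

```python
import csv

# Tiny stand-ins for the question's files so the sketch runs end to end
# (same column layout as input.csv / reference.csv above).
with open("input.csv", "w", newline="") as f:
    f.write("Name,Type,Owner,Status,Date,Group\n"
            "bananas,fruit,Joe,In Stock,1/1/23,\n"
            "tomato,veggie,bob,In Stock,1/3/23,\n")
with open("reference.csv", "w", newline="") as f:
    f.write("Type,Group\nfruit,1\nveggie,2\nmeat,3\n")

# Read reference.csv ONCE into a dict: a csv.reader cannot be iterated
# a second time, which is why the nested-loop version stops matching.
with open("reference.csv", newline="") as assign_csv:
    ref = csv.reader(assign_csv)
    next(ref)  # skip the header row
    group_lookup = {row[0]: row[1] for row in ref}

with open("input.csv", newline="") as csv_input, \
        open("output.csv", "w", newline="") as out_file:
    reader = csv.reader(csv_input)
    writer = csv.writer(out_file)
    writer.writerow(next(reader))                  # copy the header through
    for row in reader:
        row[5] = group_lookup.get(row[1], row[5])  # row[1] = Type column
        writer.writerow(row)                       # write each row exactly once

print(open("output.csv").read())
```

The dict lookup also replaces the substring test `group[0] in error[1]` with an exact match on the Type column, which is usually what is wanted here.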
<python><csv><reference>
2023-11-16 20:56:43
1
413
Happy.Hartman
77,497,945
19,694,624
Pending bot reply from a slash command pycord
<p>I have a bot, and I want to type the command <code>/chat</code> and then have the bot create a thread and respond to the command there. I did that successfully, but the problem is that the message &quot;bot is thinking&quot; keeps pending even after the bot has completed everything I asked.</p> <p>So I was wondering, how can I dismiss this pending state in the code?</p> <p>My code:</p> <pre><code>import discord
from tools.config import TOKEN
from tools.gpt_helper.gpt_custom_classes import generate_response
from discord.ext import commands

bot = discord.Bot()

@bot.event
async def on_ready():
    print(f&quot;{bot.user} is ready and online!&quot;)

@bot.slash_command(name=&quot;chat&quot;, description=&quot;some desc&quot;)
async def chat(ctx, msg):
    channel = bot.get_channel(MY_CHANNEL_ID)
    await ctx.response.defer(ephemeral=True)
    resp = await generate_response(prompt=msg)
    thread = await channel.create_thread(name=&quot;wop&quot;, type=None)
    await thread.send(resp)

bot.run(TOKEN)
</code></pre> <p>I am using the g4f Python module here to interact with ChatGPT; it generates the answer to the user's input (the variable <code>msg</code> in the <code>chat()</code> function) with the line <code>resp = await generate_response(prompt=msg)</code>. And then I create a thread and respond there:</p> <pre><code>thread = await channel.create_thread(name=&quot;wop&quot;, type=None)
await thread.send(resp)
</code></pre> <p>Here is a picture of the pending bot reply</p> <p><a href="https://i.sstatic.net/Nw2lL.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Nw2lL.png" alt="enter image description here" /></a></p>
<python><python-3.x><discord><discord.py><pycord>
2023-11-16 20:43:57
2
303
syrok
77,497,941
5,527,646
How to remove string items from a list
<p>Suppose I have a list of string items like this:</p> <pre><code>myList = ['CLIPPOLY', 'ExternalCrosswalk', 'CrossReference', 'SourceCitation', 'Status', 'Waterbody',
          'N_1_EDesc', 'N_1_ETopo', 'N_1_JDesc', 'N_1_JTopo', 'N_1_Props', 'Line']
</code></pre> <p>I want to remove any item that has 'N_1' from my list. Simple right? I tried this:</p> <pre><code>for item in myList:
    if 'N_1' in item:
        myList.remove(item)
        print(&quot;Removed item: {}&quot;.format(item))
</code></pre> <p>This outputs:</p> <pre><code>Removed item: N_1_EDesc
Removed item: N_1_JDesc
Removed item: N_1_Props
</code></pre> <p>When I print out myList again, I can still see that not all of the items with 'N_1' were removed. Why is this occurring and how can I fix it?</p> <pre><code>print(myList)
</code></pre> <p>Returns</p> <pre><code>['CLIPPOLY', 'ExternalCrosswalk', 'CrossReference', 'SourceCitation', 'Status', 'Waterbody',
 'N_1_ETop', 'N_1_JTopo', 'Line']
</code></pre> <p>NOTE - I've also just tried creating a list of names I wanted removed, like this:</p> <pre><code>bad_list = ['N_1_EDesc', 'N_1_ETopo', 'N_1_JDesc', 'N_1_JTopo', 'N_1_Props']
</code></pre> <p>I then tried to remove an item if it was in the bad_list, but I had the same effect.</p> <pre><code>for item in myList:
    if item in bad_list:
        myList.remove(item)
        print(&quot;Removed item: {}&quot;.format(item))
</code></pre>
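The missed items come from mutating the list while iterating over it: `list.remove` shifts the remaining elements one slot left while the loop's internal index still advances, so the element right after each removed one is never examined (which is why adjacent 'N_1' entries survive). A sketch that builds a new list instead:

```python
myList = ['CLIPPOLY', 'ExternalCrosswalk', 'CrossReference',
          'SourceCitation', 'Status', 'Waterbody', 'N_1_EDesc',
          'N_1_ETopo', 'N_1_JDesc', 'N_1_JTopo', 'N_1_Props', 'Line']

# A comprehension never mutates the list being iterated, so no
# elements are skipped; rebind the name (or iterate over a copy,
# `for item in myList[:]`, if in-place mutation is required).
myList = [item for item in myList if 'N_1' not in item]
print(myList)
```

The same comprehension works for the `bad_list` variant: `[item for item in myList if item not in bad_list]`.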
<python><list>
2023-11-16 20:43:30
1
1,933
gwydion93
77,497,663
4,329,348
ROS 2 how have a callback and main code in same node / fille?
<p>I want to have a single file / node that will subscribe to a topic, while also letting the node run main code synchronously. The problem is that I can't seem to find the proper way to have async and sync code in ROS 2. When the node is killed, I would like to send zero velocities to the robot to stop it moving. For some reason, this doesn't work either.</p> <p>Here is what I have so far:</p> <pre><code>import threading
import sys
import rclpy
from rclpy.node import Node
from geometry_msgs.msg import Twist
from rclpy.exceptions import ROSInterruptException
import signal


class RobotNode(Node):
    def __init__(self):
        super().__init__('my_node')
        self.subscription = self.create_subscription(Image, '/camera/image_raw', self.callback, 10)
        self.publisher = self.create_publisher(Twist, '/cmd_vel', 10)
        self.rate = self.create_rate(10)
        self.robot_thread = threading.Thread(target=self.main)
        self.robot_thread.start()

    def stop(self):
        desired_velocity = Twist()
        self.publisher.publish(desired_velocity)
        self.rate.sleep()

    def callback(self, data):
        image = self.bridge.imgmsg_to_cv2(data, 'bgr8')
        cv2.namedWindow('Camera')
        cv2.imshow('Camera', image)
        cv2.waitKey(3)

    def main(self):
        while rclpy.ok():
            desired_velocity = Twist()
            desired_velocity.linear.x = 0.03
            try:
                for _ in range(10):
                    self.publisher.publish(desired_velocity)
                    self.rate.sleep()
            except Exception:
                pass


def main(args=None):
    def signal_handler(sig, frame):
        robot.stop()
        rclpy.shutdown()

    signal.signal(signal.SIGINT, signal_handler)
    rclpy.init(args=args)
    robot = RobotNode()
    rclpy.spin(robot)


if __name__ == '__main__':
    main()
</code></pre>
<python><ros><ros2>
2023-11-16 19:48:27
1
1,219
Phrixus
77,497,639
10,589,070
Python Filtering OData Entity by Parent - Requests Library
<p>I have an OData API I'm trying to query using the Python Requests library. I'm trying to filter a table based on the parent table for an incremental data pull. The children tables don't use a <code>last_update_date</code>, but I can get this date from the parent as the parent's <code>last_updated_date</code> changes even if the change is in a child table. Note that in my example, <code>employees</code> is the parent entity of <code>employee_texts</code>.</p> <p>My code looks like the below:</p> <pre><code>data = requests.get( endpoint_url + &quot;employee_texts&quot; , auth=(api_username, api_password) , params={ &quot;$filter&quot;: &quot;employees/LastModifiedDate gt 2023-11-15T00:00:00.00Z&quot;, &quot;$format&quot;: &quot;json&quot; } , verify=True ) </code></pre> <p>If I run this, I get the below error:</p> <blockquote> <p>{'error': {'code': '',<br /> 'message': 'Value cannot be null.\r\nParameter name: type'}}</p> </blockquote> <p>Please note that this code works as expected:</p> <pre><code>data = requests.get( endpoint_url + &quot;employees&quot; , auth=(api_username, api_password) , params={ &quot;$filter&quot;: &quot;LastModifiedDate gt 2023-11-15T00:00:00.00Z&quot;, &quot;$format&quot;: &quot;json&quot; } , verify=True ) </code></pre>
<python><python-requests><filtering><odata>
2023-11-16 19:45:47
1
446
krewsayder
77,497,448
893,254
Python project structure - no module named 'lib'
<p>I have never worked on or built myself a properly structured Python codebase before. I aim to do that today.</p> <p>Here's the structure I have attempted to implement:</p> <pre><code>My-Fancy-Project-Name
    /src
        /bin
            /binary1/main.py
            /binary2/my_binary.py
            /binary3
                my_scripy.py
                aux_file.txt
            /binaryother
                binary4.py
                binary5.py
                binary6.py
        /lib
            /lib1
                __init__.py
                lib1.py
            /libio
                __init__.py
                libio.py
            /libexception
                __init__.py
                exceptions.py
</code></pre> <p>This structure is somewhat systems programming language inspired. It may not be a suitable project structure in the world of Python.</p> <p>My reasoning for this structure is as follows:</p> <p>The root directory for my whole &quot;project&quot; lives one level above <code>My-Fancy-Project-Name</code>. It contains all kinds of stuff, not all written in the same language, to do various things which are all related to a single project objective.</p> <p>Broadly speaking this consists of:</p> <ul> <li>data collection and webscraping</li> <li>data preprocessing stages</li> <li>data analysis stages and visualization</li> <li>auxiliary stuff, including some metrics visualization</li> <li>a webserver to serve preprocessed data to other external clients</li> <li>a documentation folder</li> <li>some shared stuff such as a couple of data folders containing small size data files</li> <li>a <code>.venv</code> virtual environment</li> </ul> <p>Within the root directory there are other directories. One of those is called <code>My-Fancy-Project-Name</code>. (It isn't really, this is just an example name.)</p> <ul> <li>The point being: <code>My-Fancy-Project-Name</code> collects together an aggregation of Python code which all relates to one part of the project. Specifically, all the code for &quot;data collection via webscraping&quot; is contained in this directory.</li> </ul> <p>There are a number of scripts which act like executables. They perform some function.
One of those is the main webscraper, others are smaller webscrapers which perform some specific function in some sense acting like a small utility. These &quot;executables&quot; are stored in a <code>bin</code> directory.</p> <p>There is common code which is shared by each of the Python scripts (&quot;executables&quot;). This &quot;library&quot; code lives in a folder called <code>lib</code>.</p> <p>Both <code>bin</code> and <code>lib</code> live in a folder called <code>src</code>. The intention is to add a <code>doc</code> folder which is adjacent to <code>src</code>.</p> <p>I am running into issues where Python doesn't know how to &quot;get to&quot; the library folder code. This is unsurprising because I would not expect python to search any path other than the system defined paths and the current directory from where the Python script is executed.</p> <p>My question is, how <em>should</em> I structure my project in a sensible way so that I can have a folder of shared library code, called <code>lib</code>, which contains sub-modules or packages(?), which can be accessed by a set of Python &quot;executables&quot; (scripts) contained inside a <code>bin</code> directory?</p>
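A hedged quick-fix sketch of the import mechanics (path names assumed from the layout above; the more idiomatic fix is to add a `pyproject.toml` and install the project with `pip install -e .` so `lib` becomes a real installable package): put `src/` on `sys.path` at the top of each script in `bin`:

```python
import sys
from pathlib import Path

# In a real script this would just be __file__; the fallback path is a
# hypothetical stand-in so the snippet also runs outside src/bin/binary1/.
script_path = Path(globals().get("__file__", "src/bin/binary1/main.py")).resolve()

# main.py lives at src/bin/binary1/main.py, so two parents up is src/.
src_dir = script_path.parents[2]
sys.path.insert(0, str(src_dir))

# Now imports like `from lib.libio import libio` resolve, provided
# lib/ itself also contains an __init__.py (the tree above only shows
# __init__.py inside the subfolders).
print(sys.path[0])
```

This keeps the `bin`/`lib` split intact without restructuring; the packaging route replaces the `sys.path` surgery with a one-time editable install.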
<python>
2023-11-16 19:07:38
1
18,579
user2138149
77,497,397
897,224
How to autogenerate Pydantic field value and not allow the field to be set in initializer or attribute setter
<p>I want to autogenerate an ID field for my Pydantic model and I don't want to allow callers to provide their own ID value. I've tried a variety of approaches using the <code>Field</code> function, but the ID field is still optional in the initializer.</p> <pre class="lang-py prettyprint-override"><code>class MyModel(BaseModel):
    item_id: str = Field(default_factory=id_generator, init_var=False, frozen=True)
</code></pre> <p>I've also tried using <code>PrivateAttr</code> instead of <code>Field</code>, but then the ID field doesn't show up when I call <code>model_dump</code>.</p> <p>This seems like a pretty common and simple use case, but I can't find anything in the docs for how to accomplish this.</p>
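One hedged sketch (Pydantic v2 assumed; `id_generator` below is a stand-in): `frozen=True` already blocks assignment after creation, and a `mode="before"` model validator can drop any caller-supplied `item_id` so `default_factory` always wins, while the field still appears in `model_dump()`:

```python
import uuid
from pydantic import BaseModel, Field, model_validator  # Pydantic v2

def id_generator() -> str:
    # stand-in for the question's id_generator
    return uuid.uuid4().hex

class MyModel(BaseModel):
    # frozen=True rejects attribute assignment after creation
    item_id: str = Field(default_factory=id_generator, frozen=True)

    @model_validator(mode="before")
    @classmethod
    def _ignore_caller_id(cls, data):
        # Drop any caller-supplied item_id so default_factory always
        # wins (raise here instead if spoofing should be an error).
        if isinstance(data, dict):
            data.pop("item_id", None)
        return data

m = MyModel(item_id="spoofed")
print(m.item_id != "spoofed")        # True: caller value was ignored
print("item_id" in m.model_dump())   # True: still serialized

try:
    m.item_id = "new-id"
    mutated = True
except Exception:                    # ValidationError: field is frozen
    mutated = False
print(mutated)                       # False
```

Silently discarding the value versus raising on it is a design choice; the validator is the one place to make it.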
<python><pydantic>
2023-11-16 18:57:32
2
2,840
Brian
77,497,391
826,235
Pandas to_csv write json objects with no spaces
<p>I have a dataframe object with some columns of type dict, containing some nested json objects. When I use to_csv to write the dataframe to a csv file, everything works, however the json objects have spaces for formatting in them, like:</p> <p><code>{'field': 'value', 'field2': 'value2'}</code></p> <p>I want to write to csv but remove the extra spaces, to preserve space, similarly to the result from:</p> <p><code>json.dumps(obj, separators=(':',''))</code></p> <p>So the output looks like:</p> <p><code>{'field':'value','field2':'value2'}</code></p> <p>How can I achieve this with a dataframe? Or control somehow the formatting of the specific json columns?</p>
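A hedged sketch of one approach (column names below are made up): serialize the dict columns yourself with compact separators before calling `to_csv`, rather than letting pandas fall back on `str(dict)`. Note that `json.dumps`'s `separators` argument takes `(item_separator, key_separator)`, e.g. `(',', ':')` for the compact form:

```python
import json
import pandas as pd

df = pd.DataFrame({"id": [1, 2],
                   "payload": [{"field": "value", "field2": "value2"},
                               {"a": 1}]})

# to_csv stringifies dicts with str(), which keeps the spaces; dumping
# them explicitly first gives full control over the formatting.
df["payload"] = df["payload"].map(
    lambda obj: json.dumps(obj, separators=(",", ":")))

csv_text = df.to_csv(index=False)
print(csv_text)
```

This writes real JSON (double quotes) rather than the single-quoted Python-repr style, which also makes the column parseable with `json.loads` on the way back in.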
<python><json><pandas><csv>
2023-11-16 18:56:59
1
1,377
yogi
77,497,299
2,492,274
fastapi Error: AssertionError: Status code 204 must not have a response body
<p>I have this code using FastAPI:</p> <pre><code>@router.put(
    &quot;/api/v1/me/language&quot;,
    tags=[&quot;users&quot;],
    responses={
        400: {&quot;model&quot;: BadRequest},
        401: {&quot;model&quot;: Unauthorized},
        403: {&quot;model&quot;: Forbidden},
        404: {&quot;model&quot;: NotFound},
        500: {&quot;model&quot;: InternalServerError}
    },
    response_class=fastapi.responses.ORJSONResponse,
    status_code=204,
)
async def patch_me_users_language(
    request: Request,
    session=fastapi.Depends(get_db)
) -&gt; NO_CONTENT_RESPONSE:
    content_bytes = await request.body()
    content = content_bytes.decode(&quot;utf-8&quot;).upper()
    if content in set(COUNTRIES):
        db_result = DBUser.get_by_uuid(
            request.state.user_uuid, session
        )
        if db_result:
            try:
                db_result.update({&quot;language&quot;: content}, session)
                # Tried it already with 'return Response(status_code=HTTP_204_NO_CONTENT.value)'
                return
            except pydantic.error_wrappers.ValidationError as e:
                return http_err.UnprocessableEntity(detail=e.errors()).render()
        else:
            return http_err.InternalServerError(
                detail=f&quot;User for this valid token does not exist.&quot;
            ).render()
    else:
        return http_err.UnprocessableEntity(
            detail=[
                {
                    &quot;loc&quot;: [&quot;body&quot;],
                    &quot;msg&quot;: &quot;Expected the content to contain a valid language/country code&quot;,
                    &quot;type&quot;: &quot;value_error&quot;,
                }
            ]
        ).render()
</code></pre> <p>wherein <code>http_err</code> is an instance of an <code>HttpErrors</code> subclass, defined as follows:</p> <pre><code>class HttpErrors(pydantic.BaseModel):
    &quot;&quot;&quot;
    Base class for HTTP error responses.
    The render method is used to translate instances of this class
    into fastapi.responses.ORJSONResponse responses.
    &quot;&quot;&quot;
    status: int
    detail: str

    def render(self) -&gt; fastapi.responses.ORJSONResponse:
        &quot;&quot;&quot;
        Translates a class instance into a fastapi.responses.ORJSONResponse
        :return: fastapi.responses.ORJSONResponse
        &quot;&quot;&quot;
        logger.info(f&quot;Returned error response: status: {self.status}, msg: {self.content}&quot;)
        return fastapi.responses.ORJSONResponse(
            status_code=self.status,
            content=self.content()
        )

    def content(self):
        return {'detail': self.detail}

    def json_content(self):
        return json.dumps(self.content())


class BadRequest(HttpErrors):
    status = 400


class Unauthorized(HttpErrors):
    status = 401


class Forbidden(HttpErrors):
    status = 403


class NotFound(HttpErrors):
    status = 404


class UnprocessableEntity(pydantic.BaseModel):
    status = 422
    detail: List[dict]

    def render(self) -&gt; fastapi.responses.ORJSONResponse:
        return fastapi.responses.ORJSONResponse(
            status_code=self.status,
            content={'detail': self.detail}
        )
</code></pre> <p>This worked in FastAPI 0.85.2, but in 0.104.1 I get an error that says</p> <pre><code>AssertionError: Status code 204 must not have a response body
</code></pre> <p>Per the comment in the code, I tried to replace the empty <code>return</code> with <code>return Response(status_code=HTTP_204_NO_CONTENT.value)</code>, but this has no effect.</p> <p>How should I handle the empty response correctly? I assume it depends on the model definition inside the annotation (<code>router.put(...)</code>), but I couldn't find anything there.</p>
<python><fastapi>
2023-11-16 18:41:34
1
607
Stoecki
77,497,219
1,188,943
Stale element exception handling is not working in Selenium
<p>The following code cannot handle the Stale element exception error:</p> <pre><code> website_accessable = False try: driver.get(addressToVerify) website_accessable = True except WebDriverException as e: if 'net::ERR_SSL_PROTOCOL_ERROR' in str(e): print(&quot;--- Error: SSL error protocol occurred!&quot;) return website_accessable except TimeoutException: print(&quot;--- Error: Webpage opening timed out!&quot;) return website_accessable except StaleElementReferenceException: print(&quot;--- Error: Stale element referenced!&quot;) return website_accessable except Exception as e: print(&quot;Error:&quot;, str(e)) return website_accessable </code></pre> <p>and the full error is shown in the console (the second exception fired). Any ideas?</p> <p>Full Error:</p> <pre><code>Message: stale element reference: stale element not found (Session info: chrome=119.0.6045.123); For documentation on this error, please visit: https://www.selenium.dev/documentation/webdriver/troubleshooting/errors#stale-element-reference-exception Stacktrace: 0 chromedriver 0x0000000101d90d28 chromedriver + 4795688 1 chromedriver 0x0000000101d882b3 chromedriver + 4760243 2 chromedriver 0x000000010196188d chromedriver + 407693 3 chromedriver 0x0000000101967d6f chromedriver + 433519 4 chromedriver 0x000000010196a0c4 chromedriver + 442564 5 chromedriver 0x000000010196a20c chromedriver + 442892 6 chromedriver 0x00000001019a9fa2 chromedriver + 704418 7 chromedriver 0x00000001019d7ca2 chromedriver + 892066 8 chromedriver 0x00000001019a3c63 0x00000001019d7a73 </code></pre>
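A stdlib-only sketch of why handler order matters here (the classes below are stand-ins defined locally, mirroring Selenium's hierarchy, in which `StaleElementReferenceException` and `TimeoutException` both subclass `WebDriverException`): a broad `except WebDriverException` listed first intercepts the subclass before the specific handler can run. Note too that the quoted stale-element error may be raised by element access outside the `try` block shown.

```python
# Minimal stand-ins mirroring Selenium's exception hierarchy:
# StaleElementReferenceException subclasses WebDriverException.
class WebDriverException(Exception):
    pass

class StaleElementReferenceException(WebDriverException):
    pass

def handle_wrong_order():
    try:
        raise StaleElementReferenceException("stale element reference")
    except WebDriverException:              # broad clause matches first
        return "generic handler"
    except StaleElementReferenceException:  # unreachable
        return "stale handler"

def handle_right_order():
    try:
        raise StaleElementReferenceException("stale element reference")
    except StaleElementReferenceException:  # most specific clause first
        return "stale handler"
    except WebDriverException:
        return "generic handler"
```

So reordering the `except` clauses from most specific to most general changes which handler fires.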
<python><selenium-webdriver>
2023-11-16 18:27:13
0
1,035
Mahdi
77,496,981
7,243,493
update dictionary in nested array of dictionaries
<p>I want to update the list2 based on the values in the opti array, so that it checks whether the nested array has a dict with the key 'RSI_15' and then updates the dict with the key &quot;val&quot; in the same array, and so on for all the arrays in the opti array.</p> <p>Right now my output is this:</p> <pre><code>[['random', {'ind': 'RSI_15'}, {'cond': '&lt;'}, {'val': 1000}], ['random', {'ind': 'volume'}, {'cond': '&gt;'}, {'val': 1000}]] </code></pre> <p>So clearly my for loop overwrites it, but I can't crack the logic.</p> <p>My desired output is this:</p> <pre><code>[['random', {'ind': 'RSI_15'}, {'cond': '&lt;'}, {'val': 15}], ['random', {'ind': 'volume'}, {'cond': '&gt;'}, {'val': 1000}]] </code></pre> <p>Code:</p> <pre><code>list2 = [['random', {'ind': 'RSI_15'}, {'cond': '&lt;'}, {'val': 12}], ['random', {'ind': 'volume'}, {'cond': '&gt;'}, {'val': 44}]] opti = [['RSI_15', 15], ['volume', 1000]] def update_val(indicator, val, values): flag = False for v in values: if isinstance(v, dict) and flag == False: try: v['ind'] = indicator flag = True except: continue if isinstance(v, dict) and flag == True: try: if v['val'] : v['val'] = val flag = True except: continue def up(conds, opti_val): for cond in conds: for item in opti_val: update_val(item[0], item[1], cond) return conds list2 = up(list2, opti) print(list2) </code></pre>
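A self-contained sketch of one way to get the desired output (variable names reuse the question's; the approach is just one option): build a plain dict from `opti`, then patch each row's `'val'` dict based on that row's own `'ind'`, rather than pushing the indicator into every row.

```python
list2 = [['random', {'ind': 'RSI_15'}, {'cond': '<'}, {'val': 12}],
         ['random', {'ind': 'volume'}, {'cond': '>'}, {'val': 44}]]
opti = [['RSI_15', 15], ['volume', 1000]]

lookup = dict(opti)  # {'RSI_15': 15, 'volume': 1000}

for row in list2:
    # read this row's indicator, if it has one
    ind = next((d['ind'] for d in row
                if isinstance(d, dict) and 'ind' in d), None)
    if ind in lookup:
        # patch only the dict that carries 'val'
        for d in row:
            if isinstance(d, dict) and 'val' in d:
                d['val'] = lookup[ind]
```

The original loop overwrote every row because `update_val` assigned `v['ind'] = indicator` for each `opti` item in turn; the sketch above only reads `'ind'` and writes `'val'`.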
<python><arrays><dictionary>
2023-11-16 17:44:14
3
568
Soma Juice
77,496,918
9,443,671
Import PyAudio not working on M3 Pro Mac OS Sonoma (Could not import the PyAudio C module 'pyaudio._portaudio'.)
<p>So I'm running into this error which many other people seem to run into on previous MacBooks such as the M1... I've tried everything they suggest to fix it, which is to brew install portaudio from <code>--HEAD</code> and to install from source, but I always seem to run into problems with the C library one way or the other.</p> <p>The current error I'm getting from <code>import pyaudio</code> is this one:</p> <pre><code>Could not import the PyAudio C module 'pyaudio._portaudio'. Traceback (most recent call last): File &quot;&lt;stdin&gt;&quot;, line 1, in &lt;module&gt; File &quot;/Users/salah/opt/anaconda3/envs/audio_env/lib/python3.10/site-packages/pyaudio/__init__.py&quot;, line 111, in &lt;module&gt; import pyaudio._portaudio as pa ImportError: dlopen(/Users/salah/opt/anaconda3/envs/audio_env/lib/python3.10/site-packages/pyaudio/_portaudio.cpython-310-darwin.so, 0x0002): symbol not found in flat namespace '_PaMacCore_SetupChannelMap' </code></pre> <p>Any suggestions on how I might fix this? Building portaudio from source does not work for some reason...</p>
<python><pyaudio><portaudio>
2023-11-16 17:34:00
1
687
skidjoe
77,496,754
464,277
Using reindex with value_counts in pandas groupby
<p>I'd like to use value_counts with groupby, and keep all labels in the original dataframe. In the example below, for instance,</p> <pre><code>import pandas as pd df = pd.DataFrame( { 'month': [1, 1, 1, 1, 1, 1, 2, 2, 2], 'day': [1, 1, 1, 2, 2, 2, 1, 1, 1], 'value': [1, 2, 3, 4, 5, 6, 1, 1, 1], } ) df.groupby(['month', 'day'])['value'].value_counts(normalize=True) month day value 1 1 1 0.333333 2 0.333333 3 0.333333 2 4 0.333333 5 0.333333 6 0.333333 2 1 1 1.000000 </code></pre> <p>I'd like to show all values from 1 to 6 for all combinations of month and day (those that do not show up would have a value of zero).</p> <p>Adding <code>.reindex(range(1, 6))</code> produces the error:</p> <blockquote> <p>ValueError: Buffer dtype mismatch, expected 'Python object' but got 'long'</p> </blockquote>
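One hedged approach (a sketch, not necessarily the only way): instead of reindexing the flat result, pivot the `value` level into columns, force the columns to the full 1–6 range with zeros, and stack back into a Series.

```python
import pandas as pd

df = pd.DataFrame(
    {
        'month': [1, 1, 1, 1, 1, 1, 2, 2, 2],
        'day':   [1, 1, 1, 2, 2, 2, 1, 1, 1],
        'value': [1, 2, 3, 4, 5, 6, 1, 1, 1],
    }
)

counts = df.groupby(['month', 'day'])['value'].value_counts(normalize=True)

# Pivot the value level to columns, force the full 1..6 range,
# fill gaps with 0, then stack back into a (month, day, value) Series.
full = (
    counts.unstack('value', fill_value=0)
          .reindex(columns=range(1, 7), fill_value=0)
          .stack()
)
```

Every `(month, day)` pair now carries all six values, with zeros where a value never occurred.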
<python><pandas><group-by>
2023-11-16 17:04:21
3
10,181
zzzbbx
77,496,693
386,861
Switching between venvs in VSCode
<p>I'm working through a tutorial on running Python in VSCode and am struggling to get numpy to run. <a href="https://code.visualstudio.com/docs/python/python-tutorial" rel="nofollow noreferrer">https://code.visualstudio.com/docs/python/python-tutorial</a></p> <pre><code>import numpy as np msg = &quot;Roll a dice&quot; print(msg) print(np.random.randint(1,9)) </code></pre> <p>The error is:</p> <pre><code>ModuleNotFoundError: No module named 'numpy' </code></pre> <p>That's despite using the terminal to pip install numpy.</p> <p>This appears in the bottom right, so I think the venv is set right:</p> <p><a href="https://i.sstatic.net/VYCmK.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/VYCmK.png" alt="enter image description here" /></a></p> <p>But the panel in the bottom middle points to a folder elsewhere.</p> <p><a href="https://i.sstatic.net/JR2lU.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/JR2lU.png" alt="enter image description here" /></a></p> <p>What do I need to do to fix this?</p>
<python><numpy><visual-studio-code>
2023-11-16 16:56:28
1
7,882
elksie5000
77,496,677
6,599,648
Python forward a gmail message using Google API
<p>I'm trying to forward a message using Google's API client. In order to do that, I am reading the message, modifying it and would then like to send it, but I run into a problem where the message that I download has the wrong type and I don't know how to encode it. Here is my code:</p> <pre class="lang-py prettyprint-override"><code>from google.auth.transport.requests import Request from google.oauth2.credentials import Credentials from google_auth_oauthlib.flow import InstalledAppFlow from googleapiclient.discovery import build from googleapiclient.errors import HttpError import os import base64 from datetime import datetime SCOPES = ['https://www.googleapis.com/auth/gmail.readonly','https://www.googleapis.com/auth/gmail.modify', 'https://www.googleapis.com/auth/gmail.send'] creds = None # The file token.json stores the user's access and refresh tokens, and is # created automatically when the authorization flow completes for the first # time. if os.path.exists('token.json'): creds = Credentials.from_authorized_user_file('token.json', SCOPES) # If there are no (valid) credentials available, let the user log in. if not creds or not creds.valid: if creds and creds.expired and creds.refresh_token: creds.refresh(Request()) else: flow = InstalledAppFlow.from_client_secrets_file( # your creds file here. 
Please create json file as here https://cloud.google.com/docs/authentication/getting-started 'credentials.json', SCOPES) creds = flow.run_local_server(port=0) # Save the credentials for the next run with open('token.json', 'w') as token: token.write(creds.to_json()) service = build('gmail', 'v1', credentials=creds) results = service.users().messages().list(userId='me', labelIds=['INBOX']).execute() messages = results.get('messages',[]); message = messages[0] msg = service.users().messages().get(userId='me', id=message['id']).execute() print(type(msg)) msg['To'] = 'me@mydomain.com' encoded_message = base64.urlsafe_b64encode(message.as_bytes()).decode() create_message = {&quot;raw&quot;: encoded_message} message = (service.users().messages().send(userId='me', body=body).execute()) </code></pre> <p>outputs:</p> <pre><code>&lt;class 'dict'&gt; --------------------------------------------------------------------------- AttributeError Traceback (most recent call last) Cell In[57], line 32 28 print(type(msg)) 30 msg['To'] = 'me@mydomain.com' ---&gt; 32 encoded_message = base64.urlsafe_b64encode(message.as_bytes()).decode() 33 create_message = {&quot;raw&quot;: encoded_message} 35 message = (service.users().messages().send(userId='me', body=body).execute()) AttributeError: 'dict' object has no attribute 'as_bytes' </code></pre> <p>Does anyone know how to encode the downloaded message object which is a dictionary?</p>
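A sketch of the usual route: request the message with `format='raw'` from `users().messages().get(...)` (which returns the urlsafe-base64-encoded RFC 2822 bytes in the `'raw'` field, rather than the parsed dict the default format produces), decode it, edit the MIME headers with the stdlib `email` package, and re-encode for `send()`. The sample message below is fabricated purely for illustration; only the decode/modify/encode steps are the point.

```python
import base64
from email import message_from_bytes
from email.message import EmailMessage
from email.policy import default

# Stand-in for the API response of
#   users().messages().get(userId='me', id=..., format='raw').execute()
# which carries the full message as urlsafe-base64 text under 'raw'.
original = EmailMessage()
original['From'] = 'someone@example.com'
original['To'] = 'old@example.com'
original['Subject'] = 'Hello'
original.set_content('Body text')
msg = {'raw': base64.urlsafe_b64encode(original.as_bytes()).decode()}

# Decode to a real MIME object, change the recipient, re-encode.
mime = message_from_bytes(base64.urlsafe_b64decode(msg['raw']), policy=default)
mime.replace_header('To', 'me@mydomain.com')
body = {'raw': base64.urlsafe_b64encode(mime.as_bytes()).decode()}
# body can then be passed to users().messages().send(userId='me', body=body)
```

The dict returned by the default (`'full'`) format has no `.as_bytes()`, which is exactly the `AttributeError` in the traceback.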
<python><gmail-api><urlencode>
2023-11-16 16:54:11
1
613
Muriel
77,496,615
2,383,070
How to label items in a string column by conditional regex in Polars
<p>In Polars, I'm working with a DataFrame with messy string data in a column. In this example, the column can contain a phone number, email address, state, zip code, etc. (my real data actually has a hundred or more different &quot;patterns&quot; stored in this column that is a million rows long.)</p> <pre class="lang-py prettyprint-override"><code>import polars as pl df = pl.DataFrame( { &quot;data&quot;: [ &quot;123-123-1234&quot;, &quot;someone@email.com&quot;, &quot;345-345-3456&quot;, &quot;456-456-4567&quot;, &quot;12345&quot;, &quot;charlie.brown@peanuts.org&quot;, &quot;345-345-3456&quot;, &quot;CA&quot;, &quot;UT&quot;, ] } ) </code></pre> <p>I am trying to label each item based on a regular expression. For example, &quot;if it looks like a phone number, label it PHONE; if it looks like an email, label it EMAIL; etc.&quot;. The result should look something like this...</p> <pre><code>shape: (9, 2) β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β” β”‚ data ┆ label β”‚ β”‚ --- ┆ --- β”‚ β”‚ str ┆ str β”‚ β•žβ•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•ͺ══════════║ β”‚ 123-123-1234 ┆ PHONE β”‚ β”‚ someone@email.com ┆ EMAIL β”‚ β”‚ 345-345-3456 ┆ PHONE β”‚ β”‚ 456-456-4567 ┆ PHONE β”‚ β”‚ 12345 ┆ ZIP CODE β”‚ β”‚ charlie.brown@peanuts.org ┆ EMAIL β”‚ β”‚ 345-345-3456 ┆ PHONE β”‚ β”‚ CA ┆ STATE β”‚ β”‚ UT ┆ STATE β”‚ β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜ </code></pre> <p><br><br></p> <hr /> <p>My first attempt was simply to do multiple <code>str.replace</code> in line...</p> <pre class="lang-py prettyprint-override"><code>df.with_columns( ( pl.col(&quot;data&quot;) .str.replace(&quot;.*@.*&quot;, &quot;EMAIL&quot;) .str.replace(f&quot;\d\d\d-\d\d\d-\d\d\d\d&quot;, &quot;PHONE&quot;) .str.replace(f&quot;\d\d\d\d\d&quot;, &quot;ZIP CODE&quot;) 
.str.replace(f&quot;^[A-Z][A-Z]$&quot;, &quot;STATE&quot;) ).alias(&quot;label&quot;) ) </code></pre> <p>This is one way to get the result, except when scaled to <em>many</em> <code>str.replace</code> (I have over a hundred regular expressions I'm trying to label for a column over a million rows long) it is quite slow because it is rerunning the replace method so many times. I am hoping there is a different approach in Polars that can make this much faster that I haven't thought of yet. Thanks for any ideas.</p>
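For comparison, a stdlib sketch of a single-pass labeller: combine the rules into one alternation with named groups so each string is scanned once instead of once per rule (patterns and label names below are simplified stand-ins for the real rules; in Polars, similar logic could be applied per element via a UDF, or rebuilt natively as a chain of when/then expressions evaluated in one pass).

```python
import re

# One combined pattern; each alternative is a named, fully anchored group.
rules = {
    "EMAIL": r"[^@]+@[^@]+",
    "PHONE": r"\d{3}-\d{3}-\d{4}",
    "ZIP_CODE": r"\d{5}",
    "STATE": r"[A-Z]{2}",
}
combined = re.compile(
    "|".join(f"(?P<{name}>^{pat}$)" for name, pat in rules.items())
)

def label(value):
    # Exactly one alternative can match, and Match.lastgroup
    # reports which named group that was.
    m = combined.match(value)
    return m.lastgroup if m else None
```

Because alternation tries branches left to right, rule order still encodes priority, just as the order of chained `str.replace` calls did.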
<python><regex><dataframe><python-polars>
2023-11-16 16:45:38
2
3,511
blaylockbk
77,496,602
9,923,776
Django 4.2 , GenericIPAddress and CIDR
<p>I'm updating a project from:</p> <pre><code>Python 3.9 + Postgres 13 + Django 4.0.6 + psycopg2 </code></pre> <p>to</p> <pre><code>Python 3.11 + Postgresql 15 + Django 4.2 + psycopg3 </code></pre> <p>I have a big problem with GenericIPAddressField. It seems that it doesn't support CIDR anymore.</p> <p>When I try to save an ORM object with a CIDR value, I get:</p> <pre><code>ValueError: '10.255.96.48/28' does not appear to be an IPv4 or IPv6 address </code></pre> <p>I need that INET field in the DB with CIDR to make queries like:</p> <pre><code>Subnet.objects.filter().extra(where = [ &quot;cidr::inet &gt;&gt;= inet '&quot;+self.ip_address+&quot;'&quot; ]) </code></pre> <p>and other CIDR-related queries.</p>
<python><django>
2023-11-16 16:44:18
1
656
EviSvil
77,496,541
2,304,735
How to read double values from CSV file and add them to the same column in pandas?
<p>The following data is in a CSV file:</p> <pre><code>SBUX, Starbucks, YHOO, Yahoo, ABK, Ambac Financial, V, Visa, MA, Mastercard, MCD, McDonald's, MCK, McKesson, MDT, Metronic, MRK, Merk&amp;Co, MAR, Marriott International, MKTX, MarketAxess, LRCX, LAM Research, LOW, Lowe's </code></pre> <p>I want to add each row of the csv file to the same column in a pandas dataframe. I want the output to be the following:</p> <p>Dataframe:</p> <pre><code>SBUX |YHOO |ABK |V |MA |MCD |MCK |MDT | Starbucks|Yahoo|Amback Financial|Visa|Mastercard|McDonald's|McKesson|Metronic| </code></pre> <p>How can I do that with Python?</p>
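A stdlib sketch of the parsing step (the sample text below is a stand-in for the file): read each line as a `(ticker, name)` pair, drop the trailing empty field left by the line-ending comma, and build a dict that pandas would accept as one column per ticker (`pd.DataFrame(columns)`).

```python
import csv
import io

# Hypothetical stand-in for the CSV file's contents.
raw = """SBUX, Starbucks,
YHOO, Yahoo,
ABK, Ambac Financial,
"""

pairs = []
for row in csv.reader(io.StringIO(raw)):
    # strip whitespace and drop the empty trailing field from "...,"
    cells = [c.strip() for c in row if c.strip()]
    if len(cells) == 2:
        pairs.append(cells)

# ticker -> single-row column of company names
columns = {ticker: [name] for ticker, name in pairs}
```

With a real file, `open(path, newline='')` replaces the `io.StringIO` wrapper, and `pd.DataFrame(columns)` yields the ticker-per-column layout shown in the question.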
<python><python-3.x><pandas><dataframe><csv>
2023-11-16 16:34:18
1
515
Mahmoud Abdel-Rahman
77,496,454
3,763,302
Is there a more efficient way to Apply by Row, then by Column?
<p>My dataset contains 5 measurements taken daily, over a 700 day timespan. I wish to be able to group these values by the day of the week, and then apply the <code>trim_mean</code> function from <code>scipy.stats</code> to the each of the 5 measurements, using <code>1/stddev</code> as the <code>proportiontocut</code> parameter.</p> <p>My data:</p> <pre><code>import pandas as pd import numpy as np from scipy.stats import trim_mean np.random.seed(42) data = np.random.randint(0, 100, size=(5, 700)) col_names = pd.date_range('11-16-2023', periods=700) df = pd.DataFrame(data, columns=col_names) # df 2023-11-16 2023-11-17 ... 2025-10-15 0 51 92 ... 57 1 88 48 ... 32 2 89 52 ... 96 3 61 99 ... 48 4 0 7 ... 34 </code></pre> <p>Now, I can do this using the following (not very elegant) process:</p> <pre><code>df_T = df.T df_T['Day of Week'] = pd.to_datetime(df_T.index).isocalendar().day ## Room for improvement here ## # Apply calculation to each type of measurement gb = df_T.groupby('Day of Week') m0 = gb[0].apply(lambda x: trim_mean(x, proportiontocut=1/np.std(x))) m1 = gb[1].apply(lambda x: trim_mean(x, proportiontocut=1/np.std(x))) m2 = gb[2].apply(lambda x: trim_mean(x, proportiontocut=1/np.std(x))) m3 = gb[3].apply(lambda x: trim_mean(x, proportiontocut=1/np.std(x))) m4 = gb[4].apply(lambda x: trim_mean(x, proportiontocut=1/np.std(x))) results_df = pd.DataFrame([m0, m1, m2, m3, m4]) results_df.columns = columns=['Mon', 'Tue', 'Wed', 'Thu', 'Fri', 'Sat', 'Sun'] # results_df Mon Tue Wed Thu Fri Sat Sun 0 50.936170 51.712766 44.659574 49.117021 48.702128 47.414894 51.223404 1 49.244681 49.000000 49.138298 49.191489 45.872340 49.010638 47.074468 2 49.436170 46.404255 49.021277 46.553191 55.031915 51.265957 50.638298 3 43.744681 47.787234 48.574468 45.882979 47.255319 47.914894 49.606383 4 49.265957 46.255319 50.276596 50.872340 46.723404 45.255319 49.904255 </code></pre> <p>This is very inefficient and doesn't make much sense if I have a lot of measurements. 
Is there a clever way of applying/mapping my <code>trim_mean</code> function to achieve the same aim?</p>
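A sketch of collapsing the per-measurement boilerplate: a single `groupby(...).agg(func)` call applies the function to every measurement column at once. `trimmed_mean` below is a simplified, self-contained stand-in for `scipy.stats.trim_mean`; with scipy available, `gb.agg(lambda x: trim_mean(x, proportiontocut=1/np.std(x)))` should follow the same shape.

```python
import numpy as np
import pandas as pd

def trimmed_mean(x, frac=0.1):
    # Stand-in for scipy.stats.trim_mean: drop `frac` of the observations
    # from each tail, then average the rest.  The question's 1/np.std(x)
    # proportion would replace `frac` here.
    arr = np.sort(np.asarray(x, dtype=float))
    k = int(len(arr) * frac)
    return arr[k:len(arr) - k].mean()

rng = np.random.default_rng(42)
df_T = pd.DataFrame(rng.integers(0, 100, size=(700, 5)))
df_T['Day of Week'] = (np.arange(700) % 7) + 1

# One .agg call replaces the m0..m4 boilerplate: the function is applied
# to each of the 5 measurement columns within each day-of-week group.
results = df_T.groupby('Day of Week').agg(trimmed_mean)
```

The result is the same 7-row-by-5-column table as `results_df` (transposed relative to the question's layout; a final `.T` and column rename recovers it exactly).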
<python><pandas><numpy><group-by>
2023-11-16 16:21:14
1
1,232
Ted
77,496,267
3,102,121
I cannot step into the imported function from the Python module using the PyCharm debugger
<p>I have my main Python script that uses the function 'hello_world' from my imported module 'module1.py'</p> <p><a href="https://i.sstatic.net/EYItj.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/EYItj.png" alt="enter image description here" /></a></p> <p>However, if I use the PyCharm Debugger I cannot step into the 'hello_world' function.</p> <p>I am following these steps:</p> <ol> <li>Execute 'main.py' in debug mode</li> <li>Set a breakpoint in the function 'hello_world'</li> </ol> <p><a href="https://i.sstatic.net/UCiTc.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/UCiTc.png" alt="enter image description here" /></a></p> <ol start="3"> <li>Press 'Step Into' from Debug window: &quot;Frame not available&quot;</li> </ol> <p><a href="https://i.sstatic.net/Hy2gw.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Hy2gw.png" alt="enter image description here" /></a></p> <ol start="4"> <li>After all the previous steps, it goes back to main.</li> </ol> <p><a href="https://i.sstatic.net/Vsw5l.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Vsw5l.png" alt="enter image description here" /></a></p> <p>Do you have any idea what I am doing wrong?</p> <p>PyCharm: 2023.2.1 (Community Edition) Python: 3.8</p> <p>I tried to step into the 'hello_world' function, but it did not work.</p>
<python><function><debugging><module><pycharm>
2023-11-16 15:56:25
1
406
jorge
77,496,183
11,016,395
Adding hatches for certain rows and columns in clustermap
<p>I have used seaborn.clustermap() to plot clustermap as shown below</p> <pre><code>labels = [&quot;a&quot;, &quot;b&quot;, &quot;c&quot;, &quot;d&quot;, &quot;e&quot;, &quot;f&quot;, &quot;g&quot;, &quot;h&quot;, &quot;i&quot;, &quot;j&quot;, &quot;k&quot;, &quot;l&quot;, &quot;m&quot;, &quot;n&quot;, &quot;o&quot;, &quot;p&quot;, &quot;q&quot;, &quot;r&quot;, &quot;s&quot;, &quot;t&quot;, &quot;u&quot;, &quot;v&quot;] sns.clustermap(data, cmap=sns.cm.rocket_r, xticklabels=labels, yticklabels=labels) </code></pre> <p><a href="https://i.sstatic.net/gUnYQ.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/gUnYQ.png" alt="enter image description here" /></a></p> <p>It is pretty clear there are two clusters, &quot;m&quot;, &quot;o&quot;, &quot;d&quot;, &quot;n&quot;, &quot;p&quot; vs the rest. Now I want to add hatches (&quot;//&quot;) to the rows and columns of &quot;m&quot;, &quot;o&quot;, &quot;d&quot;, &quot;n&quot;, &quot;p&quot; to highlight the difference, how can I do that? Thanks.</p>
<python><matplotlib><seaborn><heatmap><clustermap>
2023-11-16 15:44:40
1
483
crx91
77,495,949
3,979,160
LibOpenShot problem with Python bindings, 'openshot' has no attribute 'GetVersion'
<p>I'm trying to compile LibOpenShot for Windows, using this guide: <a href="https://github.com/OpenShot/libopenshot/wiki/Windows-Build-Instructions" rel="nofollow noreferrer">https://github.com/OpenShot/libopenshot/wiki/Windows-Build-Instructions</a></p> <p>All goes well. Had to manually download zmq.hpp and place it in includes and also the python binding from X:\libs\libopenshot\libopenshot\build\bindings\python I manually copied to C:\Python\Lib\openshot</p> <p>Now I can import openshot in Python!</p> <p>However print(openshot.GetVersion().ToString())</p> <p>Gives me: AttributeError: module 'openshot' has no attribute 'GetVersion'</p> <p>And so does any other method....</p>
<python><windows><openshot>
2023-11-16 15:08:40
0
523
Hasse
77,495,913
9,182,743
groupby.rolling slower than with for loop
<p>I want to perform a groupby rolling mean on a large dataset.</p> <p>With a large df, doing groupby.rolling seems to be slower than using a for loop:</p> <p>This</p> <pre class="lang-py prettyprint-override"><code>grouped['value'].rolling(window=window_size, min_periods=0).mean().reset_index(level=0, drop=True) </code></pre> <p>seems to be slower than:</p> <pre class="lang-py prettyprint-override"><code>for key, grp in grouped: grp['rolling_mean'] = grp['value'].rolling(window=window_size, min_periods=0).mean() rolling_mean_df = pd.concat([rolling_mean_df, grp]) </code></pre> <p>Why is that? And what method can I use to speed up a rolling mean with groupby on a large df?</p> <pre class="lang-py prettyprint-override"><code>import matplotlib.pyplot as plt import time import numpy as np import pandas as pd # Define different sizes to test sizes = [10, 100, 1000, 10000, 20000, 30000, 50000, 100000, 200000, 500000] window_size = 10 # Lists to store execution times for each method times_groupby_1 = [] times_groupby_2 = [] for size in sizes: # Create a DataFrame for each size data = {'value': np.random.rand(size), 'group': np.random.choice(['A', 'B', 'C', 'D'], size)} sample_df = pd.DataFrame(data) grouped = sample_df.groupby('group') # Method 1: Using groupby with rolling mean (for loop method) start_time = time.time() rolling_mean_df = pd.DataFrame() for key, grp in grouped: grp['rolling_mean'] = grp['value'].rolling(window=window_size, min_periods=0).mean() rolling_mean_df = pd.concat([rolling_mean_df, grp]) end_time = time.time() times_groupby_1.append(end_time - start_time) # Method 2: Using groupby with rolling mean (direct method) start_time = time.time() sample_df['rolling_mean'] = grouped['value'].rolling(window=window_size, min_periods=0).mean().reset_index(level=0, drop=True) end_time = time.time() times_groupby_2.append(end_time - start_time) # Plotting the results plt.figure(figsize=(10, 6)) plt.plot(sizes, times_groupby_1, label='Method 1 (for loop)', marker='o') 
plt.plot(sizes, times_groupby_2, label='Method 2 (direct)', marker='o') plt.xlabel('DataFrame Size') plt.ylabel('Time (seconds)') plt.title('Comparison of Execution Times for Rolling Mean Calculation') plt.xscale('log') plt.yscale('log') plt.legend() plt.grid(True) plt.show() </code></pre> <p><a href="https://i.sstatic.net/UjTdk.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/UjTdk.png" alt="enter image description here" /></a></p>
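One alternative worth benchmarking alongside the two methods above (a sketch, not a guaranteed win): `groupby(...).transform` returns values already aligned to the original row order and index, so the `reset_index`/`concat` realignment step disappears entirely.

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({
    'value': np.arange(8.0),
    'group': list('ABAB') * 2,
})

# transform preserves the original index, so the result drops straight
# into a new column without reset_index or concat.
df['rolling_mean'] = (
    df.groupby('group')['value']
      .transform(lambda s: s.rolling(window=2, min_periods=0).mean())
)
```

Group A occupies rows 0, 2, 4, 6 with values 0, 2, 4, 6, so its window-2 means land back on those same rows as 0, 1, 3, 5.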
<python><pandas><group-by>
2023-11-16 15:02:53
2
1,168
Leo
77,495,903
214,296
How to create socket to send and receive UDP packets
<p>How may I create a socket that can both send and receive UDP packets?</p> <p>My application requires that sockets are bound to the physical network interface because it's a multicast system with multiple interfaces connecting to redundant, duplicate networks on the target device. It is necessary that my script be able to distinctly communicate with each network without interaction with the other networks. This is why binding to the interface is necessary.</p> <p>Here is what I have so far, but it keeps timing out. What am I doing wrong?</p> <pre><code>import socket s = socket.socket(socket.AF_PACKET, socket.SOCK_DGRAM, socket.IPPROTO_UDP) s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1) s.setblocking(True) s.settimeout(3.0) s.bind(('eth0', 0)) z_data, z_addr = s.recvfrom(1024) # expected target frame total length is 264 bytes Traceback (most recent call last): File &quot;&lt;stdin&gt;&quot;, line 1, in &lt;module&gt; socket.timeout: timed out </code></pre>
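For reference, a plain AF_INET sketch of a socket pair that both sends and receives UDP over loopback. Interface-level binding is a separate concern: on Linux, `SO_BINDTODEVICE` (with the interface name, usually requiring elevated privileges) on an AF_INET socket is the usual route, whereas AF_PACKET sockets as used in the question deliver raw link-layer frames that need their own header parsing and are not shown here.

```python
import socket

# Receiver: bind to an ephemeral loopback port.
receiver = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
receiver.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
receiver.settimeout(3.0)
receiver.bind(('127.0.0.1', 0))          # port 0 = let the OS pick one
port = receiver.getsockname()[1]

# Sender: an unbound UDP socket can send immediately.
sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sender.sendto(b'ping', ('127.0.0.1', port))

data, addr = receiver.recvfrom(1024)

sender.close()
receiver.close()
```

On Linux, adding `receiver.setsockopt(socket.SOL_SOCKET, socket.SO_BINDTODEVICE, b'eth0')` before `bind` ties the socket to a specific physical interface while keeping normal UDP datagram semantics.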
<python><sockets><udp><python-3.6><multicast>
2023-11-16 15:01:31
0
14,392
Jim Fell
77,495,844
9,806,500
directory of jsons to df with pathlib
<p>I've got a set of jsons in a directory that I'm trying to loop through and read to Pandas df with <code>pathlib.Path('my_jsons_dir').iterdir()</code></p> <p>The case for a single file works great</p> <pre><code>json_path = pathlib.Path('my_json_path') dict_single = json.loads(json_path.read_bytes()) df_single = pd.DataFrame.from_dict(pd.json_normalize(dict_single), orient='columns') </code></pre> <p>But the loop burps...</p> <pre><code>file_paths = pathlib.Path(file_dir) data = [] for file in file_paths.iterdir(): if file.is_file(): dict = json.loads(file.read_bytes()) df = pd.DataFrame.from_dict(pd.json_normalize(dict), orient='columns') data.append(df) </code></pre> <p>error message</p> <pre><code>JSONDecodeError Traceback (most recent call last) Cell In[34], line 8 6 for file in file_paths.iterdir(): 7 if file.is_file(): ----&gt; 8 dict = json.loads(file.read_bytes()) 9 df = pd.DataFrame.from_dict(pd.json_normalize(dict), orient='columns') 10 data.append(df) File c:\Users\mdw0523\AppData\Local\Programs\Python\Python310\lib\json\__init__.py:346, in loads(s, cls, object_hook, parse_float, parse_int, parse_constant, object_pairs_hook, **kw) 341 s = s.decode(detect_encoding(s), 'surrogatepass') 343 if (cls is None and object_hook is None and 344 parse_int is None and parse_float is None and 345 parse_constant is None and object_pairs_hook is None and not kw): --&gt; 346 return _default_decoder.decode(s) 347 if cls is None: 348 cls = JSONDecoder File c:\Users\mdw0523\AppData\Local\Programs\Python\Python310\lib\json\decoder.py:337, in JSONDecoder.decode(self, s, _w) 332 def decode(self, s, _w=WHITESPACE.match): 333 &quot;&quot;&quot;Return the Python representation of ``s`` (a ``str`` instance 334 containing a JSON document). 335 336 &quot;&quot;&quot; --&gt; 337 obj, end = self.raw_decode(s, idx=_w(s, 0).end()) ... 
354 except StopIteration as err: --&gt; 355 raise JSONDecodeError(&quot;Expecting value&quot;, s, err.value) from None 356 return obj, end JSONDecodeError: Expecting value: line 1 column 1 (char 0) </code></pre> <p>It looks like a simple typing thing, but I don't understand why <code>dict = json.loads(file.read_bytes())</code> isn't working the same way as it did in the single-file case. <code>pathlib</code> is preferred to <code>os</code> due to the need to work flexibly across Mac/Win.</p> <p>Thanks for your help!</p>
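A self-contained sketch of the likely cause: `iterdir()` yields every file in the directory, so any non-JSON file (a `.DS_Store`, a text file, ...) reaches `json.loads` and raises exactly `Expecting value: line 1 column 1 (char 0)`. Filtering by suffix — e.g. with `glob` — sidesteps it.

```python
import json
import pathlib
import tempfile

# Build a throwaway directory that mixes JSON and non-JSON files.
d = pathlib.Path(tempfile.mkdtemp())
(d / 'a.json').write_text('{"x": 1}')
(d / 'b.json').write_text('{"x": 2}')
(d / 'notes.txt').write_text('not json at all')  # would crash json.loads

# glob('*.json') only yields files with the right suffix,
# unlike iterdir(), which yields everything.
records = [json.loads(p.read_text()) for p in sorted(d.glob('*.json'))]
```

Each parsed dict can then go through `pd.json_normalize` exactly as in the single-file case. (As a side note, naming a variable `dict` shadows the built-in type for the rest of the script.)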
<python><json><pandas>
2023-11-16 14:51:42
1
635
M. Wood
77,495,593
4,177,781
Polars how to bin a datetime type column
<p>I am trying to bin a date column. I was able to do that fairly easily using the Pandas library. But I am getting the following error with Polars:</p> <pre class="lang-python prettyprint-override"><code>Traceback (most recent call last): File &quot;\workspace\polars.py&quot;, line 4, in &lt;module&gt; pl.col(&quot;ts&quot;).cut( File &quot;\venvs\polars\Lib\site-packages\polars\utils\deprecation.py&quot;, line 192, in wrapper return function(*args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^ File &quot;\venvs\polars\Lib\site-packages\polars\expr\expr.py&quot;, line 3716, in cut self._pyexpr.cut(breaks, labels, left_closed, include_breaks) TypeError: argument 'breaks': must be real number, not datetime.datetime </code></pre> <p>My code is:</p> <pre class="lang-python prettyprint-override"><code>&gt;&gt;&gt; df β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β” β”‚ ts β”‚ β”‚ --- β”‚ β”‚ date β”‚ β•žβ•β•β•β•β•β•β•β•β•β•β•β•β•‘ β”‚ 2021-10-01 β”‚ β”‚ 2021-11-01 β”‚ β”‚ 2022-03-01 β”‚ β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜ &gt;&gt;&gt; start_year = df.select(&quot;ts&quot;).min().item() &gt;&gt;&gt; end_year = df.select(&quot;ts&quot;).max().item() &gt;&gt;&gt; df = df.with_columns( pl.col(&quot;ts&quot;).cut( pl.datetime_range(start_year, end_year, interval=&quot;3mo&quot;, eager=True), ).alias(&quot;period_bins&quot;) ) </code></pre>
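A stdlib sketch of the binning idea, since `cut()` here insists on numeric breaks: either cast the column to integer timestamps first and hand `cut()` numeric breaks, or bin the dates directly against datetime breaks as below (the break dates and label names are illustrative, not from the question).

```python
import bisect
from datetime import date

# Quarter boundaries (illustrative), sorted ascending.
breaks = [date(2021, 10, 1), date(2022, 1, 1), date(2022, 4, 1)]
labels = ['pre', 'Q4-2021', 'Q1-2022', 'post']  # len(breaks) + 1 bins

def period_bin(d):
    # bisect_right counts how many breaks are <= d,
    # which indexes straight into `labels`.
    return labels[bisect.bisect_right(breaks, d)]
```

The same function could be mapped over the Polars column, or the equivalent integer-break version kept fully inside Polars expressions.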
<python><datetime><python-polars>
2023-11-16 14:20:25
4
352
arman
77,495,574
13,489,398
Video tag autoplay pauses when user navigates away from page
<p>I got a video tag that looks like this</p> <pre><code>&lt;video muted controls autoPlay src={videoSrc} width=&quot;640px&quot; height=&quot;480px&quot; onEnded={handleOnEnded} ref={videoRef} /&gt; </code></pre> <p>streaming in video from a websocket via the react state videoSrc (createObjectUrl on blobs streamed by websockets). This works fine.</p> <p>However, the problem is that before the blob finishes playing, if I change tabs, the page is still open (and still playing), the video pauses, and only resumes when I navigate back to the page. Why is this happening? Is it some browser optimisation?</p> <p>How do I prevent this from happening (I want the video to keep playing, like how when I navigate away from Youtube the video I opened continues playing)</p>
<python><reactjs><typescript><video>
2023-11-16 14:16:42
0
341
ZooPanda
77,495,512
3,551,700
Add file creation and modification date to metadata with DirectoryLoader
<p>I am working with the LangChain library and am interested in whether it is possible to load file creation and/or modification dates together with file content with DirectoryLoader and add that information to the documents' metadata. Is it possible? How to do that?</p> <p>Currently, I load only docx files, but I would also like to load other documents in the future. My current code snippet is:</p> <pre><code>loader = DirectoryLoader(dir, glob=&quot;**/*.docx&quot;, show_progress=True, silent_errors=True) docs = loader.load() </code></pre>
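A sketch of one post-processing approach, assuming (as LangChain's loaders generally do — treat this key as an assumption) that each loaded document's metadata carries the file path under `'source'`: `stat()` the file after loading and merge the timestamps into the metadata. Note that `st_ctime` is creation time on Windows but inode-change time on Linux (macOS exposes `st_birthtime` instead).

```python
import datetime
import pathlib
import tempfile

def file_dates(path_str):
    # Returns ISO-formatted modification/creation timestamps for a file.
    st = pathlib.Path(path_str).stat()
    def iso(t):
        return datetime.datetime.fromtimestamp(t).isoformat()
    return {'modified_at': iso(st.st_mtime), 'created_at': iso(st.st_ctime)}

# Usage sketch against loaded docs (hypothetical):
#   for doc in docs:
#       doc.metadata.update(file_dates(doc.metadata['source']))

# Demonstrate on a throwaway file:
tmp = tempfile.NamedTemporaryFile(suffix='.docx', delete=False)
tmp.close()
meta = file_dates(tmp.name)
```

This keeps the loader itself unchanged, so it works the same once other document types are added.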
<python><langchain><py-langchain>
2023-11-16 14:07:30
1
1,526
Primoz
77,495,492
8,869,570
How to set a list of new columns values to 0.0 in a dataframe?
<p>Suppose I have a list of columns</p> <pre><code>cols = ['col1', 'col2', 'col3'] </code></pre> <p>and a dataframe</p> <pre><code>df = pd.DataFrame() </code></pre> <p>I want to add these <code>cols</code> to <code>df</code> and set them to <code>0.0</code>.</p> <pre><code>for col in cols: df[col] = 0.0 </code></pre> <p>works, but</p> <pre><code>df[cols] = 0.0 </code></pre> <p>results in the error</p> <pre><code>KeyError: &quot;None of [Index(['col1', 'col2', 'col3'], dtype='object')] are in the [columns]&quot; </code></pre> <p>Why does this not work? When you want to query column values for a list of columns, you can pass in the list of columns in a similar way, but it seems initialization doesn't work in the same manner.</p>
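Two variants that do create all the columns in one statement (a sketch; whether `df[cols] = 0.0` creates brand-new columns in one go varies with the pandas version, and on this version it only updates existing ones, hence the `KeyError`):

```python
import pandas as pd

cols = ['col1', 'col2', 'col3']
df = pd.DataFrame()

# Option 1: assign() broadcasts each scalar and returns a new frame.
df2 = df.assign(**{c: 0.0 for c in cols})

# Option 2: reindex the columns, filling anything new with 0.0.
df3 = df.reindex(columns=[*df.columns, *cols], fill_value=0.0)
```

Both leave the row count untouched (here zero rows, since `df` is empty), while the loop version does the same thing one column at a time.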
<python><pandas><dataframe>
2023-11-16 14:03:25
1
2,328
24n8
77,495,422
2,748,523
get a websocket message to test it
<p>I am quite new to testing web-sockets with Python. I need to test existing code that I cannot change. The code is using websocket-client===0.57.0, a quite old version. It's possible that I am missing out on a simple fix.</p> <p>Here is the init function for the class:</p> <pre><code>class Client(): def __init__(self): # create websocket connection self.ws = websocket.WebSocketApp( url=&quot;wss://stream.********:9443/ws/********&quot;, on_message=self.on_message, on_error=self.on_error, on_close=self.on_close, on_open=self.on_open ) </code></pre> <p>I need to read the messages received by the web-socket in another test Python module and then assert the data values. My issue is that I don't know how to access it from outside the class. The existing code can access the data inside the class this way:</p> <pre><code> def on_message(self, message): data = loads(message) </code></pre> <p>Now I would like to write tests with pytest in a separate module. Here is my code so far, which does not work:</p> <pre><code>from **** import Client import pytest @pytest.fixture def my_client(): return Client().ws @pytest.mark.asyncio async def test_url(my_client): # how can I get the data in this code level??? # data = my_client?? </code></pre>
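A toy sketch of one testing tactic (`FakeWSApp` and the message payload below are stand-ins, not the real websocket-client objects): since `WebSocketApp` stores the callback it was given as an attribute, a test can swap in a wrapper that records each message before delegating to the original handler — without touching the production `Client`.

```python
# Minimal stand-ins for the untouchable production code.
class FakeWSApp:
    def __init__(self, on_message):
        self.on_message = on_message

class Client:
    def __init__(self):
        self.data = None
        self.ws = FakeWSApp(on_message=self.on_message)

    def on_message(self, message):
        self.data = message  # production behaviour

# --- test side: wrap the stored callback with a recording spy ---
client = Client()
received = []
original = client.ws.on_message

def spy(message):
    received.append(message)   # visible to the test's assertions
    original(message)          # keep production behaviour intact

client.ws.on_message = spy

# Simulate the socket delivering a frame:
client.ws.on_message('{"price": 1}')
```

In a real pytest module the same swap would happen inside the fixture, and the test would drive (or fake) the connection and then assert on `received`.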
<python><websocket><pytest>
2023-11-16 13:54:19
1
1,220
Ziwdigforbugs
77,495,343
3,979,160
Compiling LibOpenShot -> cannot find zmq.hpp [Win/msys64]
<p>I'm trying to compile LibOpenShot for Windows, using this guide: <a href="https://github.com/OpenShot/libopenshot/wiki/Windows-Build-Instructions" rel="nofollow noreferrer">https://github.com/OpenShot/libopenshot/wiki/Windows-Build-Instructions</a></p> <p>All goes well until I enter the cmake command for building libopenshot. It gives me this error:</p> <pre><code>In file included from X:/libs/libopenshot/libopenshot/src/Clip.cpp:23: X:/libs/libopenshot/libopenshot/src/ZmqLogger.h:22:10: fatal error: zmq.hpp: No such file or directory 22 | #include &lt;zmq.hpp&gt; | ^~~~~~~~~ </code></pre> <p>It was installed through pacman as instructed. When looking in c:\msys64\mingw64\include\ I can find a zmq.h but not a zmq.hpp. So something is wrong. I cannot find a zmq.hpp at all anywhere in c:\msys64.</p>
<python><c><windows><development-environment><openshot>
2023-11-16 13:45:11
1
523
Hasse
77,495,245
2,756,412
SALABIM how to minimize memory requirements
<p>I am using the <a href="https://www.salabim.org/" rel="nofollow noreferrer">Salabim package</a> in Python to simulate some queueing theory. Salabim <a href="https://www.salabim.org/manual/" rel="nofollow noreferrer">(documentation)</a> is based on <a href="https://simpy.readthedocs.io/en/latest/" rel="nofollow noreferrer">SimPy</a> but with many extensions and additional features. Most of my problems are solved in a few minutes with minimal memory requirements.</p> <p>However, I want to extend the model and noted that the simulation consumes about 32 GB of memory after 1 hour of simulation of a simple M/M/1 queueing problem. I expect this is due to some internal monitors capturing statistical data. I have tried to deactivate most of them, but so far without success.</p> <blockquote> <p><strong>- What statement am I missing?</strong><br /> <strong>- What can I change to minimize memory requirements?</strong></p> </blockquote> <p><em>Please note that I switch off monitoring for each class with the following code</em>:</p> <pre><code> def setup(self): self.mode.monitor(False) self.status.monitor(False) # added 17 nov, salabim team tip </code></pre> <h1>Code</h1> <pre><code># passivate/activate method # https://www.salabim.org/manual/Modelling.html#a-bank-example import salabim as sim class CarGenerator(sim.Component): def setup(self): self.mode.monitor(False) self.status.monitor(False) # added 17 nov, salabim team tip def process(self): while True: Car() self.hold(iat_distr.sample()) class Car(sim.Component): def setup(self): self.mode.monitor(False) self.status.monitor(False) # added 17 nov, salabim team tip def process(self): self.enter(waitingline) for ChargingStation in ChargingStations: if ChargingStation.ispassive(): ChargingStation.activate() break # activate at most one charging station self.passivate() class ChargingStation(sim.Component): def setup(self): self.mode.monitor(False) self.status.monitor(False) # added 17 nov, salabim team tip def process(self): while
True: while len(waitingline) == 0: self.passivate() self.car = waitingline.pop() self.hold(srv_distr.sample()) self.car.activate() N_STATION = 1 iat_distr = sim.Exponential(60 / 40) srv_distr = sim.Exponential(60 / 50) # https://www.salabim.org/manual/Reference.html#environment app = sim.App( trace=False, # defines whether to trace or not random_seed=&quot;*&quot;, # if β€œ*”, a purely random value (based on the current time) time_unit=&quot;minutes&quot;, # defines the time unit used in the simulation name=&quot;Charging Station&quot;, # name of the simulation do_reset=True, # defines whether to reset the simulation when the run method is called yieldless=True, # defines whether the simulation is yieldless or not ) # Instantiate and activate the client generator CarGenerator(name=&quot;Electric Cars Generator&quot;) # Create Queue and set monitor to stats_only waitingline = sim.Queue(name=&quot;Waiting Cars&quot;, monitor=False) waitingline.length_of_stay.monitor(value=True) waitingline.length_of_stay.reset_monitors(stats_only=True) # Instantiate the servers, list comprehension but only 1 server ChargingStations = [ChargingStation() for _ in range(N_STATION)] # Execute Simulation app.run(till=sim.inf) # Print statistics waitingline.length_of_stay.print_statistics() </code></pre> <h1>Environment</h1> <p>I am running <code>'CPython'</code> with <code>python version 3.9.18</code> and <code>SALABIM version 23.3.11.1</code> on Ubuntu 22.04 on a Windows Subsystem for Linux (but same happens on a pure Ubuntu 22.04 system)</p> <p>WSL version: 1.2.5.0<br /> Kernel version: 5.15.90.1<br /> WSLg version: 1.0.51<br /> MSRDC version: 1.2.3770<br /> Direct3D version: 1.608.2-61064218<br /> DXCore version: 10.0.25131.1002-220531-1700.rs-onecore-base2-hyp<br /> Windows version: 10.0.19045.3693</p> <h1>Packages</h1> <p>The following ENV.yml file can be used to recreate the conda environment or check package versions.</p> <pre><code>name: mem_salabim channels: - conda-forge - 
nodefaults dependencies: - _libgcc_mutex=0.1=conda_forge - _openmp_mutex=4.5=2_gnu - anyio=4.0.0=pyhd8ed1ab_0 - argon2-cffi=23.1.0=pyhd8ed1ab_0 - argon2-cffi-bindings=21.2.0=py39hd1e30aa_4 - arrow=1.3.0=pyhd8ed1ab_0 - asttokens=2.4.1=pyhd8ed1ab_0 - async-lru=2.0.4=pyhd8ed1ab_0 - attrs=23.1.0=pyh71513ae_1 - babel=2.13.1=pyhd8ed1ab_0 - backports=1.0=pyhd8ed1ab_3 - backports.functools_lru_cache=1.6.5=pyhd8ed1ab_0 - beautifulsoup4=4.12.2=pyha770c72_0 - bleach=6.1.0=pyhd8ed1ab_0 - brotli-python=1.1.0=py39h3d6467e_1 - bzip2=1.0.8=hd590300_5 - ca-certificates=2023.7.22=hbcca054_0 - cached-property=1.5.2=hd8ed1ab_1 - cached_property=1.5.2=pyha770c72_1 - certifi=2023.7.22=pyhd8ed1ab_0 - cffi=1.16.0=py39h7a31438_0 - charset-normalizer=3.3.2=pyhd8ed1ab_0 - comm=0.1.4=pyhd8ed1ab_0 - debugpy=1.8.0=py39h3d6467e_1 - decorator=5.1.1=pyhd8ed1ab_0 - defusedxml=0.7.1=pyhd8ed1ab_0 - entrypoints=0.4=pyhd8ed1ab_0 - exceptiongroup=1.1.3=pyhd8ed1ab_0 - executing=2.0.1=pyhd8ed1ab_0 - fqdn=1.5.1=pyhd8ed1ab_0 - greenlet=3.0.1=py39h3d6467e_0 - idna=3.4=pyhd8ed1ab_0 - importlib-metadata=6.8.0=pyha770c72_0 - importlib_metadata=6.8.0=hd8ed1ab_0 - importlib_resources=6.1.1=pyhd8ed1ab_0 - ipykernel=6.26.0=pyhf8b6a83_0 - ipython=8.17.2=pyh41d4057_0 - isoduration=20.11.0=pyhd8ed1ab_0 - jedi=0.19.1=pyhd8ed1ab_0 - jinja2=3.1.2=pyhd8ed1ab_1 - json5=0.9.14=pyhd8ed1ab_0 - jsonpointer=2.4=py39hf3d152e_3 - jsonschema=4.19.2=pyhd8ed1ab_0 - jsonschema-specifications=2023.7.1=pyhd8ed1ab_0 - jsonschema-with-format-nongpl=4.19.2=pyhd8ed1ab_0 - jupyter-lsp=2.2.0=pyhd8ed1ab_0 - jupyter_client=8.6.0=pyhd8ed1ab_0 - jupyter_core=5.5.0=py39hf3d152e_0 - jupyter_events=0.9.0=pyhd8ed1ab_0 - jupyter_server=2.10.0=pyhd8ed1ab_0 - jupyter_server_terminals=0.4.4=pyhd8ed1ab_1 - jupyterlab=4.0.8=pyhd8ed1ab_0 - jupyterlab_pygments=0.2.2=pyhd8ed1ab_0 - jupyterlab_server=2.25.1=pyhd8ed1ab_0 - ld_impl_linux-64=2.40=h41732ed_0 - libffi=3.4.2=h7f98852_5 - libgcc-ng=13.2.0=h807b86a_3 - libgomp=13.2.0=h807b86a_3 - 
libnsl=2.0.1=hd590300_0 - libsodium=1.0.18=h36c2ea0_1 - libsqlite=3.44.0=h2797004_0 - libstdcxx-ng=13.2.0=h7e041cc_3 - libuuid=2.38.1=h0b41bf4_0 - libzlib=1.2.13=hd590300_5 - markupsafe=2.1.3=py39hd1e30aa_1 - matplotlib-inline=0.1.6=pyhd8ed1ab_0 - mistune=3.0.2=pyhd8ed1ab_0 - nbclient=0.8.0=pyhd8ed1ab_0 - nbconvert-core=7.11.0=pyhd8ed1ab_0 - nbformat=5.9.2=pyhd8ed1ab_0 - ncurses=6.4=h59595ed_2 - nest-asyncio=1.5.8=pyhd8ed1ab_0 - notebook=7.0.6=pyhd8ed1ab_0 - notebook-shim=0.2.3=pyhd8ed1ab_0 - openssl=3.1.4=hd590300_0 - overrides=7.4.0=pyhd8ed1ab_0 - packaging=23.2=pyhd8ed1ab_0 - pandocfilters=1.5.0=pyhd8ed1ab_0 - parso=0.8.3=pyhd8ed1ab_0 - pexpect=4.8.0=pyh1a96a4e_2 - pickleshare=0.7.5=py_1003 - pip=23.3.1=pyhd8ed1ab_0 - pkgutil-resolve-name=1.3.10=pyhd8ed1ab_1 - platformdirs=4.0.0=pyhd8ed1ab_0 - prometheus_client=0.18.0=pyhd8ed1ab_1 - prompt-toolkit=3.0.41=pyha770c72_0 - prompt_toolkit=3.0.41=hd8ed1ab_0 - psutil=5.9.5=py39hd1e30aa_1 - ptyprocess=0.7.0=pyhd3deb0d_0 - pure_eval=0.2.2=pyhd8ed1ab_0 - pycparser=2.21=pyhd8ed1ab_0 - pygments=2.16.1=pyhd8ed1ab_0 - pysocks=1.7.1=pyha2e5f31_6 - python=3.9.18=h0755675_0_cpython - python-dateutil=2.8.2=pyhd8ed1ab_0 - python-fastjsonschema=2.18.1=pyhd8ed1ab_0 - python-json-logger=2.0.7=pyhd8ed1ab_0 - python_abi=3.9=4_cp39 - pytz=2023.3.post1=pyhd8ed1ab_0 - pyyaml=6.0.1=py39hd1e30aa_1 - pyzmq=25.1.1=py39h8c080ef_2 - readline=8.2=h8228510_1 - referencing=0.30.2=pyhd8ed1ab_0 - requests=2.31.0=pyhd8ed1ab_0 - rfc3339-validator=0.1.4=pyhd8ed1ab_0 - rfc3986-validator=0.1.1=pyh9f0ad1d_0 - rpds-py=0.12.0=py39h9fdd4d6_0 - send2trash=1.8.2=pyh41d4057_0 - setuptools=68.2.2=pyhd8ed1ab_0 - six=1.16.0=pyh6c4a22f_0 - sniffio=1.3.0=pyhd8ed1ab_0 - soupsieve=2.5=pyhd8ed1ab_1 - stack_data=0.6.2=pyhd8ed1ab_0 - terminado=0.18.0=pyh0d859eb_0 - tinycss2=1.2.1=pyhd8ed1ab_0 - tk=8.6.13=noxft_h4845f30_101 - tomli=2.0.1=pyhd8ed1ab_0 - tornado=6.3.3=py39hd1e30aa_1 - traitlets=5.13.0=pyhd8ed1ab_0 - types-python-dateutil=2.8.19.14=pyhd8ed1ab_0 - 
typing-extensions=4.8.0=hd8ed1ab_0 - typing_extensions=4.8.0=pyha770c72_0 - typing_utils=0.1.0=pyhd8ed1ab_0 - tzdata=2023c=h71feb2d_0 - uri-template=1.3.0=pyhd8ed1ab_0 - urllib3=2.1.0=pyhd8ed1ab_0 - wcwidth=0.2.10=pyhd8ed1ab_0 - webcolors=1.13=pyhd8ed1ab_0 - webencodings=0.5.1=pyhd8ed1ab_2 - websocket-client=1.6.4=pyhd8ed1ab_0 - wheel=0.41.3=pyhd8ed1ab_0 - xz=5.2.6=h166bdaf_0 - yaml=0.2.5=h7f98852_2 - zeromq=4.3.5=h59595ed_0 - zipp=3.17.0=pyhd8ed1ab_0 - pip: - salabim==23.3.11.1 prefix: /home/floris/miniforge3/envs/jads_salabim </code></pre>
<python><memory><simulation><simpy>
2023-11-16 13:31:01
2
906
Floris Padt
77,495,105
6,840,039
Pandas: count unique users using rolling window
<p>I have a dataframe with time and user_id columns:</p> <pre><code> time user_id 2023-02-20 00:00:20 5008662006351712 2023-02-20 00:01:25 5008662006474892 2023-02-20 00:04:28 5008662006889403 2023-02-20 00:05:33 5008662006351712 2023-02-20 00:07:36 5008662004944382 2023-02-20 00:08:37 5008662006760417 2023-02-20 00:09:38 5008662004941892 2023-02-20 00:11:40 5008662006810617 2023-02-20 00:14:50 5008662006936927 2023-02-20 00:15:52 5008662005514572 2023-02-20 00:16:58 5008662004874462 2023-02-20 00:17:01 5008662006937193 2023-02-20 00:17:05 5008662006914843 2023-02-20 00:18:05 5008662006871041 2023-02-20 00:19:06 5008662006478082 </code></pre> <p>I want to count the number of unique users in each window of window_size * &quot;5T&quot;. The problem I encountered is I can <code>resample</code> this data to &quot;5T&quot;, but I can't then use <code>rolling</code> because it works only with numeric data:</p> <pre><code>window_size = 2 df = df.resample(&quot;5T&quot;, label='right', on='time').apply(lambda x: list(set(x))).reset_index() df = df.rolling(window=window_size, min_periods=window_size, center=False).nunique() </code></pre> <p>The data after <code>resample</code> looks like:</p> <pre><code> time user_id 2023-02-20 00:05:00 [5008662006351712, 5008662006889403, 500866200... 2023-02-20 00:10:00 [5008662004941892, 5008662006760417, 500866200... 2023-02-20 00:15:00 [5008662006810617, 5008662006936927] 2023-02-20 00:20:00 [5008662006871041, 5008662006937193, 500866200... </code></pre> <p>How should I change my code to get this output?</p> <pre><code> time user_id 2023-02-20 00:05:00 NaN 2023-02-20 00:10:00 6 2023-02-20 00:15:00 6 2023-02-20 00:20:00 8 </code></pre>
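A sketch of one way to get that output, assuming only pandas (the sample data here is shortened and the ids simplified for readability). Since `rolling` indeed refuses object columns, the consecutive 5-minute bins are combined manually after `resample`; `"5min"` is the same frequency as the question's `"5T"`:

```python
import pandas as pd

df = pd.DataFrame({
    "time": pd.to_datetime([
        "2023-02-20 00:00:20", "2023-02-20 00:04:28",
        "2023-02-20 00:07:36", "2023-02-20 00:11:40",
        "2023-02-20 00:14:50", "2023-02-20 00:17:01",
    ]),
    "user_id": [101, 102, 103, 102, 104, 105],
})

window_size = 2

# Collect the set of users in each 5-minute bin ("5min" == the question's "5T").
bins = df.resample("5min", label="right", on="time")["user_id"].apply(set)

# Union each run of `window_size` consecutive bins and count distinct users;
# the first window_size - 1 rows have no complete window, hence NaN.
counts = [
    len(set().union(*bins.iloc[i - window_size + 1 : i + 1]))
    if i >= window_size - 1 else float("nan")
    for i in range(len(bins))
]
result = pd.Series(counts, index=bins.index, name="user_id")
print(result)
```

The set-union approach counts a user once even when they appear in both bins of a window, which matches the intent of `nunique` over the combined interval.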
<python><pandas>
2023-11-16 13:13:15
2
4,492
Petr Petrov
77,494,978
5,736,091
How to plot columns with a value and x-y positions as a color grid in subplots
<p>In R I would do the following to make a grid of facets with a raster-plot in each facet:</p> <pre><code># R Code DF &lt;- data.frame(expand.grid(seq(0, 7), seq(0, 7), seq(0, 5))) names(DF) &lt;- c(&quot;x&quot;, &quot;y&quot;, &quot;z&quot;) DF$I &lt;- runif(nrow(DF), 0, 1) # x y z I # 1: 0 0 0 0.70252977 # 2: 1 0 0 0.74346071 # --- # 383: 6 7 5 0.93409337 # 384: 7 7 5 0.14143277 library(ggplot2) ggplot(DF, aes(x = x, y = y, fill = I)) + facet_wrap(~z, ncol = 3) + geom_raster() + scale_fill_viridis_c() + theme(legend.position = &quot;bottom&quot;) # desired legend position should be bottom </code></pre> <p><a href="https://i.sstatic.net/DMitG.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/DMitG.png" alt="ggplot + facet_wrap + geom_raster" /></a></p> <p>How can I do that in python (using matplotlib and probably seaborn)? I tried it with the following code, but had trouble with the plotting of images which I tried with <code>plt.imshow</code>. As the data has to be reshaped for <code>plt.imshow</code> I guess I need a custom plot function for <code>g.map</code>. I tried several things, but had problem with the Axes or the color and with using the data in the custom plot function.</p> <pre><code>import pandas as pd import numpy as np import matplotlib.pyplot as plt import seaborn as sns import itertools df = pd.DataFrame(list(itertools.product(range(8), range(8), range(6))), columns=['x', 'y', 'z']) # order of values different than in R, but that shouldn't matter for plotting df['I'] = np.random.rand(df.shape[0]) # x y z I # 0 0 0 0 0.076338 # 1 0 0 1 0.148386 # 2 0 0 2 0.481053 # .. .. .. .. ... # 382 7 7 4 0.144188 # 383 7 7 5 0.700624 g = sns.FacetGrid(df, col='z', col_wrap=2, height=4, aspect=1) g.map(plt.imshow, color = 'I') # &lt;- plt.imshow does not work here. # How can this be corrected (probably with a custom plot function)? plt.show() </code></pre>
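A sketch of one approach that sidesteps `g.map` entirely and uses plain matplotlib subplots: the "custom plot function" boils down to a `pivot` (the reshape `plt.imshow` needs) per facet. The figure and file names are illustrative:

```python
import itertools

import matplotlib
matplotlib.use("Agg")  # non-interactive backend so this also runs headless
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd

df = pd.DataFrame(list(itertools.product(range(8), range(8), range(6))),
                  columns=["x", "y", "z"])
df["I"] = np.random.rand(len(df))

zs = sorted(df["z"].unique())
fig, axes = plt.subplots(2, 3, figsize=(9, 6), constrained_layout=True)

for ax, z in zip(axes.flat, zs):
    # Reshape one facet into a (y, x) grid -- this is the step imshow needs.
    grid = df[df["z"] == z].pivot(index="y", columns="x", values="I")
    im = ax.imshow(grid.to_numpy(), origin="lower", cmap="viridis", vmin=0, vmax=1)
    ax.set_title(f"z = {z}")

# One shared colorbar at the bottom, roughly matching ggplot's bottom legend.
fig.colorbar(im, ax=axes.ravel().tolist(), orientation="horizontal", label="I")
fig.savefig("facets.png")
```

`origin="lower"` makes y increase upward as in the ggplot version; sharing `vmin`/`vmax` across facets keeps the color scale comparable, like a single viridis fill scale.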
<python><pandas><matplotlib><seaborn><heatmap>
2023-11-16 12:55:47
1
1,357
Phann
77,494,964
12,016,688
Why list comprehensions create a function internally?
<p>This is disassembly of a list comprehension in <a href="/questions/tagged/python-3.10" class="post-tag" title="show questions tagged &#39;python-3.10&#39;" aria-label="show questions tagged &#39;python-3.10&#39;" rel="tag" aria-labelledby="tag-python-3.10-tooltip-container">python-3.10</a>:</p> <pre class="lang-py prettyprint-override"><code>Python 3.10.12 (main, Jun 11 2023, 05:26:28) [GCC 11.4.0] on linux Type &quot;help&quot;, &quot;copyright&quot;, &quot;credits&quot; or &quot;license&quot; for more information. &gt;&gt;&gt; import dis &gt;&gt;&gt; &gt;&gt;&gt; dis.dis(&quot;[True for _ in ()]&quot;) 1 0 LOAD_CONST 0 (&lt;code object &lt;listcomp&gt; at 0x7fea68e0dc60, file &quot;&lt;dis&gt;&quot;, line 1&gt;) 2 LOAD_CONST 1 ('&lt;listcomp&gt;') 4 MAKE_FUNCTION 0 6 LOAD_CONST 2 (()) 8 GET_ITER 10 CALL_FUNCTION 1 12 RETURN_VALUE Disassembly of &lt;code object &lt;listcomp&gt; at 0x7fea68e0dc60, file &quot;&lt;dis&gt;&quot;, line 1&gt;: 1 0 BUILD_LIST 0 2 LOAD_FAST 0 (.0) &gt;&gt; 4 FOR_ITER 4 (to 14) 6 STORE_FAST 1 (_) 8 LOAD_CONST 0 (True) 10 LIST_APPEND 2 12 JUMP_ABSOLUTE 2 (to 4) &gt;&gt; 14 RETURN_VALUE </code></pre> <p>From what I understand it creates a code object called <code>listcomp</code> which does the actual iteration and return the result list, and immediately call it. I can't figure out the need to create a separate function to execute this job. Is this kind of an optimization trick?</p>
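The hidden function is primarily about scoping, not speed: since Python 3, a comprehension gets its own scope, so its loop variable cannot leak into (or clobber) the enclosing namespace, and compiling it as a separate code object is how CPython implements that scope. A quick demonstration:

```python
x = "outer"

# The comprehension runs inside the hidden <listcomp> function,
# so its loop variable `x` is local to that function.
squares = [x * x for x in range(5)]
print(x)        # still "outer"
print(squares)  # [0, 1, 4, 9, 16]

# A plain for loop shares the enclosing scope, so its variable leaks.
for y in range(5):
    pass
print(y)        # 4
```

(CPython 3.12's PEP 709 later inlined comprehensions to avoid the function-call overhead while preserving exactly this scoping behavior.)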
<python><python-3.x><list-comprehension><cpython><python-internals>
2023-11-16 12:53:19
3
2,470
Amir reza Riahi
77,494,963
1,333,515
How do I access the camera/transform parameters in cv2.Stitcher?
<p>I am trying to stitch together a bunch of microscope images, but I need to keep track of the physical location of each image. Before stitching, I know the coordinates of each individual image, but after stitching the Python method only returns the stitched image and some status flag.</p> <p>How do I obtain the (affine) transformations that were automatically estimated in <a href="https://docs.opencv.org/4.x/d5/d48/samples_2python_2stitching_8py-example.html" rel="nofollow noreferrer">cv2.Stitcher</a>.stitch()?</p> <p>I know the full pipeline can be viewed in the C++ source code and there's a high-level flow chart, but if I have to obtain those parameters that way, I basically have to re-implement cv2.Stitcher myself...</p>
<python><opencv><computer-vision><image-stitching><opencv-stitching>
2023-11-16 12:53:18
1
501
Jan M.
77,494,961
17,220,672
mypy keeps raising an error because of Protocol
<p>I'm using the pytest framework with mypy, and when writing fixtures I created Protocols to mimic return types. (I could not import them directly because of env instantiation.)</p> <p>This example greatly simplifies my troubles:</p> <pre><code> from typing_extensions import Protocol class LoadServiceType(Protocol): z: int class LoadService: z: int # This works as expected load_serivce: LoadServiceType = LoadService() # But now I don't know why this does not work class FooType(Protocol): load_service: LoadServiceType class Boo: x:int load_service:LoadService bb: int cc: FooType = Boo() </code></pre> <p>Basically I want to use my FooType as a return type because of import issues. Is there any solution?</p>
<python><protocols><mypy>
2023-11-16 12:52:45
1
419
mehekek
77,494,902
6,283,849
How to retrieve user's username after authenticating in Gradio
<p>I have a gradio app with a custom login function. I was successful in doing authentication, but then I would like to retrieve and keep the user's username during the session. What would be the solution?</p> <pre class="lang-py prettyprint-override"><code>import gradio as gr def login(email, password): &quot;&quot;&quot;Custom authentication.&quot;&quot;&quot; # Do stuff here return True, email # Application layout with gr.Blocks(title=&quot;My App&quot;) as app: with gr.Row(): text = gr.Textbox(label=&quot;Text&quot;, placeholder=&quot;Hello world&quot;) if __name__ == &quot;__main__&quot;: app.launch(server_name=&quot;0.0.0.0&quot;, auth=login) </code></pre>
<python><authentication><gradio>
2023-11-16 12:43:37
1
6,421
Alexis.Rolland
77,494,877
13,066,054
How to add condition in pandas named aggregates
<pre><code> director_master_id company_master_id designation date_cessation appointment_original_date some_col director_name appt_chng_desig_date t_designation t_dir_category 0 2601721 2465280 Director NaT 2020-08-21 0000000 VINEET NaT director promoter 1 1111111 2465280 Director NaT 2021-09-30 7129633 VIJAY NaT additional director professional 2 2222222 2465280 Director NaT 2022-03-06 9500698 SACHDEV NaT additional director professional 3 3333333 2465280 Director NaT 2023-01-03 9748791 SHUVI NaT additional director professional 4 444444 2465280 Director NaT 2022-09-28 1469375 CHAKRABORTY NaT director independent 5 933052 2465280 Director NaT 2023-02-18 3565167 ANUP NaT NaN NaN 6 2911635 2465280 Managing Director NaT 2020-08-21 7767248 KUMAR NaT managing director promoter 7 779440 2465280 Director NaT 2021-09-30 7298703 TYLER NaT additional director professional 8 804512 2465280 Director NaT 2021-09-30 3559152 KARTIK NaT additional director professional 9 90320 2465280 Director NaT 2021-09-30 177699 GOPAL NaT additional director professional </code></pre> <p>How to do aggregate function based on other function in pandas named aggregates? In the below given code my intention is to calculate <code>num_of_promoter_directors</code> based on the condition that <code>t_dir_category</code> should be equal to <code>promoter</code> and count only those rows. 
Here the result should be 2.</p> <p>In the next line, I need to collect a list of names based on the condition that column <code>t_designation</code> is <code>managing director</code>, and the result should be <code>[KUMAR]</code>.</p> <p>I know that I'm selecting a column on top of which I'm doing the aggregates, and this code results in <code>KeyError: 't_dir_category'</code></p> <pre><code>directors_info = director_history_details.groupby('company_master_id').agg( num_of_directors=('director_master_id', 'count'), num_of_promoter_directors=('director_master_id', lambda x: x[x['t_dir_category'] == 'promoter'].count()), managing_directors=('director_name', lambda x: x[x['t_designation'] == 'managing director']['director_name'].unique()) ) </code></pre> <p>Also I know that I can calculate this individually and then join, but I'm trying to do this in a single block. Is there a way to achieve this?</p> <p>Expected output:</p> <pre><code> company_master_id num_of_directors num_of_promoter_directors managing_directors 0 2465280 10 2 [Kumar, VINEET] </code></pre>
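Named aggregation hands each lambda a single column as a Series, which is why `x['t_dir_category']` raises `KeyError`. One single-block alternative is `groupby().apply` with a function that returns a `pd.Series`, since `apply` sees the whole sub-frame. A sketch with made-up sample data (four rows, one company):

```python
import pandas as pd

df = pd.DataFrame({
    "company_master_id": [2465280] * 4,
    "director_master_id": [1, 2, 3, 4],
    "director_name": ["VINEET", "VIJAY", "KUMAR", "CHAKRABORTY"],
    "t_designation": ["director", "additional director",
                      "managing director", "director"],
    "t_dir_category": ["promoter", "professional", "promoter", "independent"],
})

def summarize(g):
    # Inside apply, `g` is the full sub-frame, so cross-column conditions work.
    return pd.Series({
        "num_of_directors": g["director_master_id"].count(),
        "num_of_promoter_directors": (g["t_dir_category"] == "promoter").sum(),
        "managing_directors": list(
            g.loc[g["t_designation"] == "managing director",
                  "director_name"].unique()
        ),
    })

out = df.groupby("company_master_id").apply(summarize).reset_index()
print(out)
```

This is typically slower than the separate-aggregations-then-join route on large frames, but it keeps everything in one block as asked.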
<python><pandas><dataframe>
2023-11-16 12:38:51
2
351
naga satish
77,494,813
18,793,790
How do I install packages and a virtual environment on a server that does not have internet using pipenv?
<p>I have a Python script that I am trying to put on a server that does not have internet access. I am using pipenv to create a virtual environment. Normally, for scripts on servers with internet access, I copy the script, Pipfile and Pipfile.lock into a directory on the server. Then I make an empty .venv folder for making a virtual environment locally. I then open CMD and run pipenv install, and that will install my virtual environment and install all of the packages. I want to do the same thing on a server that does not have internet access.</p> <p>What I have tried so far: On my local machine:</p> <ul> <li>run <code>pip freeze &gt; requirements.txt</code> -&gt; to create a requirements text file</li> <li>run <code>pip download -r requirements.txt</code> -&gt; downloads the packages into the directory</li> </ul> <p>On server:</p> <ul> <li>copy the entire directory onto my server without internet access</li> <li>add an empty .venv folder in the directory</li> <li>cd into the directory that contains the python script and requirements.txt</li> <li>run <code>pipenv install -r requirements.txt</code> (I also tried running <code>pipv install -r requirements.txt</code>)</li> </ul> <p>Pipenv is installed on the server. I am not sure what the correct steps are on the server and if I need the empty .venv folder. I also am unsure if I need to keep all files (script, requirements.txt, Pipfile, Pipfile.lock, and .whl files) in the same directory. Thanks in advance for any help!</p> <p>Also, if there is a better way altogether to copy an entire script with packages I would love to know how. I think I have tunnel vision on this and am unable to see other/better options on how to do this.</p>
<python><python-3.x><pipenv>
2023-11-16 12:30:35
1
346
Avi4nFLu
77,494,746
9,974,205
How can I move datetime to the closest five minutes?
<p>I am working with a dataframe in pandas. In this dataframe there is a column whose type is datetime. It represents the moment in which a product is sold; other columns indicate the price, the type, amount sold, amount remaining...</p> <p>To do an analysis, I want more datetime columns to check how many products are sold in intervals from 5 minutes to one hour. I need to round the datetime data up to the next five minutes.</p> <p>As an example for the 5 minutes case:</p> <p>2023-11-16 13:17:32 would become 2023-11-16 13:20:00, 2023-11-16 13:21:09 would become 2023-11-16 13:25:00 and so on.</p> <p>For the 1 hour case, both of them would become 2023-11-16 14:00:00.</p> <p>Can this be done?</p> <p>I know the functions <a href="https://www.geeksforgeeks.org/floor-ceil-function-python/" rel="nofollow noreferrer">ceil and floor</a>, but I don't know how to use them on datetime data</p>
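pandas exposes ceil/floor for datetimes directly (on `Timestamp` and, for columns, via the `.dt` accessor), so no manual arithmetic is needed:

```python
import pandas as pd

s = pd.Series(pd.to_datetime(["2023-11-16 13:17:32", "2023-11-16 13:21:09"]))

up_5min = s.dt.ceil("5min")    # round up to the next 5-minute mark
up_1h = s.dt.ceil("60min")     # round up to the next hour

print(up_5min.tolist())  # [2023-11-16 13:20:00, 2023-11-16 13:25:00]
print(up_1h.tolist())    # [2023-11-16 14:00:00, 2023-11-16 14:00:00]
```

`.dt.floor` and `.dt.round` work the same way; a value already exactly on a boundary is left unchanged by `ceil`.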
<python><pandas><dataframe><datetime><ceil>
2023-11-16 12:20:31
1
503
slow_learner
77,494,696
499,699
PyTorch For Loop Optimisations and Speedup techniques
<p>This is a problem I've now encountered three times in the past year.</p> <p>I appreciate that in certain scenarios, a vectorized solution will be better and considerably faster.</p> <p>However IMHO, there is a trade-off between discovering a vectorized solution and using what is essentially a for loop (or double for loop). Discovering the vectorized solution (if one indeed exists) may take a lot more effort, trial and error, research and so on.</p> <p>The code that is the simplest form (the double for loop in this case) almost always ends up being a bottleneck for me, but takes very little time to implement and test.</p> <p>Here's an example:</p> <pre class="lang-py prettyprint-override"><code>@torch.jit.script def seq_prob(t_samples: torch.Tensor): i = 0 probs = [0] * len(t_samples) for t_i in torch.unbind(t_samples): for t_k in torch.unbind(t_samples): is_same = torch.all(torch.isclose(t_i, t_k, rtol=1e-05, atol=1e-08, equal_nan=False)) if bool(is_same): probs[i] += 1 i += 1 return probs </code></pre> <p>The <code>torch.unbind</code> simply treats the outer dimension as the iterable. In some cases, I have spent a considerable amount of time to derive a vectorized form of the loop, which usually results in masking, cumsum, index select and a variety of built-in pytorch methods, complicating the logic compared to a for-loop but making it faster.</p> <p>Similarly, using CUDA and <code>@torch.jit</code> sometimes helps (but not always).</p> <p>Therefore my questions are:</p> <ul> <li>When using some form of for loop in pytorch (e.g., <code>torch.unbind</code> or <code>torch.chunk</code> or anything similar) whose aim is to iterate over a dimension and perform an operation</li> <li>Is there a way, a golden standard, some option, which can speed things up (vectorizing excluded)?</li> <li>If vectorizing is the only option, then what would be a good first plan of attack?
Take as an example the code shown above which computes observations of a value within a sample set given some tolerance.</li> </ul>
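For this particular pairwise loop there is a direct broadcasting form. The sketch below uses NumPy so it runs standalone, but the same three-line pattern carries over to PyTorch (`torch.isclose(t[:, None], t[None, :], ...).all(-1).sum(1)`):

```python
import numpy as np

# N samples of dimension D (sample 0 and 1 are duplicates).
samples = np.array([[1.0, 2.0], [1.0, 2.0], [3.0, 4.0]])

# (N, 1, D) vs (1, N, D) broadcasts to an (N, N, D) element-wise isclose,
# and all(-1) collapses the feature axis: same[i, k] == "sample i ~ sample k".
same = np.isclose(samples[:, None, :], samples[None, :, :],
                  rtol=1e-05, atol=1e-08).all(axis=-1)

# Row sums replace the double loop's counter (each sample matches itself too).
probs = same.sum(axis=1)
print(probs)  # [2 2 1]
```

The trade-off is O(NΒ²Γ—D) memory for the broadcast intermediate, so for large N a chunked variant (comparing one block of rows against all samples at a time) keeps the vectorized inner work while bounding memory.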
<python><for-loop><pytorch>
2023-11-16 12:12:57
3
14,954
Γ†lex
77,494,647
12,069,922
CUDA Runtime Error during Spacy Transformers NER Model Training
<p>I am currently training a custom NER model (having 90k data records), using Spacy Transformers (en_core_web_trf) and I'm encountering an issue where the training process is taking an unusually long time and eventually gets killed, throwing a CUDA runtime error.</p> <p>Here's a brief overview of my setup:</p> <pre><code>Model: Custom NER model Library: Spacy Transformers Hardware: I'm using AWS EC2 server (g4dn.8xlarge) vCPUs : 32 GB, Software: Python version 3.6.9, spacy version 3.6.1 The error message I'm receiving is: &quot;RuntimeError: CUDA out of memory. Tried to allocate 114.00 MiB (GPU 0; 14.76 GiB total capacity; 11.19 GiB already allocated; 78.75 MiB free; 12.40 GiB reserved in total by PyTorch) If reserved memory is &gt;&gt; allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF&quot; </code></pre> <p>I've tried a few troubleshooting steps such as checking GPU memory usage, killing processes that might be engaging GPU memory, and adjusting batch sizes, but the issue persists.</p> <p>I would appreciate any insights or suggestions on how to resolve this issue. Has anyone else encountered this problem and found a solution?</p> <p>Thank you in advance for your help!</p>
<python><gpu><spacy><huggingface-transformers>
2023-11-16 12:05:30
1
399
iamhimanshu0
77,494,552
5,199,660
Azure Data Factory, Batch Service, Python: Unable to append blob with csv chunks
<p>Azure Data Factory pipeline runs a Custom activity with Batch Service as a linked service, which holds Python code in it. The transformation was built for running locally, and I want to make it work on Azure, saving blobs (csv files) into Azure Storage (Az Data Lake).</p> <p>The transformation runs on chunks, and is performed within a for loop.</p> <pre><code>for i, in_chunk in enumerate(pd.read_csv(source, chunksize=6000, sep=&quot;;&quot;)): # transformation happens and spits out out_chunk parameter # holding the 6000 rows out of the entire file kwargs = {'mode':'a', 'header': False} if i&gt;0 else {} out_chunk.to_csv(cdl_file, sep=&quot;|&quot;, index=False, **kwargs) </code></pre> <p>After this, I tried different approaches as written in this question and answer, for example: <a href="https://stackoverflow.com/questions/76952568/azure-data-factory-update-csv-file-on-azure-storage-with-azure-batch-custom-st/76953595?noredirect=1#comment136616838_76953595">My original question for another issue</a></p> <p>The solution written in the question and the answer above didn't throw errors, but it didn't store the entire file, only the 6000 rows specified as a chunk.</p> <p>What am I doing wrong?
I do not understand how this should be handled.</p> <p>EDIT: As requested by JayashankarGS, I added the code I tried and the screenshot about what happened.</p> <pre><code>def transform_data(v, outputType, eventType, fileName, config, source, conn_string, dl, encoding, in_dtypes = None, chunkSize=10000, in_sep = &quot;;&quot;): folderName = 'temp' containerName = 'input' outputBlobName = folderName + &quot;/&quot; + fileName inputBlobPath = containerName + &quot;/&quot; + outputBlobName blob = BlobClient.from_connection_string(conn_str=conn_string, container_name=containerName, blob_name=outputBlobName) credential = {'connection_string': conn_string} accountName = conn_string.split(&quot;AccountName=&quot;)[1].split(&quot;;&quot;)[0] adls_path = 'abfs://' + containerName + '@' + accountName + '.dfs.core.windows.net/' + outputBlobName template = pd.DataFrame(columns = v.headers[outputType]) transformationSchema = config[outputType + &quot;_out_transformations&quot;] logging.info('Reading data chunk...') for i, in_chunk in enumerate(pd.read_csv(source, chunksize = chunkSize, sep=in_sep, encoding = 'unicode_escape', dtype=in_dtypes)): logging.info('Handle duplicates and missing fields...') in_chunk.fillna('', inplace=True) out_chunk = template.copy() out_chunk.fillna('', inplace=True) out_chunk.drop_duplicates(subset=[config[&quot;composite_key&quot;][&quot;key_partA&quot;], config[&quot;composite_key&quot;][&quot;key_partB&quot;], config[&quot;composite_key&quot;][&quot;key_partC&quot;]],inplace=True) logging.info('Start data transformation for schema: ' + outputType) for name, spec in transformationSchema.items(): out_chunk[name] = transform_column(in_chunk, spec) kwargs = {'mode': 'a', 'header': False} logging.info('Transformation was successful for ' + outputType) dateTime = time.strftime(&quot;%Y%m%d%H%M%S&quot;) if not os.path.exists(containerName + &quot;/&quot; + folderName + &quot;/&quot;): os.mkdir(containerName) os.mkdir(containerName + &quot;/&quot; +
folderName + &quot;/&quot;) print(f&quot;Uploading chunk: {len(out_chunk)}&quot;) logging.info('Trying to store transformed file in Azure Storage...') out_chunk.to_csv(adls_path, storage_options=credential, sep=dl, index=False, **kwargs) </code></pre> <p>The result of this is two files generated and stored in Azure Storage. As you can see in the results of running this with Batch Service on Azure Data Factory, it processes 10000 rows as given batch size, and then tries to do the same for the second file. The &quot;file not found&quot; error comes from a step after transformation which is a validator (ignore that warning!).</p> <p><a href="https://i.sstatic.net/8YGGm.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/8YGGm.png" alt="enter image description here" /></a></p>
<python><azure><azure-data-factory><azure-storage><azure-batch>
2023-11-16 11:49:17
1
656
Eve
77,494,543
5,841,406
seleniumbase (undetected Chrome driver): how to set request header?
<p>I am using seleniumbase with Driver(uc=True), which works well for my specific scraping use case (and appears to be the only driver that consistently remains undetected for me).</p> <p>It is fine for everything that doesn't need specific header settings.</p> <p>For one particular type of scrape I need to set the Request Header (Accept -&gt; application/json).</p> <p>This works fine, and consistently, done manually in Chrome via the Requestly extension, but I cannot work out how to put it in place for seleniumbase undetected Chrome.</p> <p>I tried using execute_cdp_cmd with Network.setExtraHTTPHeaders (with Network.enable first): this ran without error but the request appeared to ignore it. (I was, tbh, unconvinced that the uc=True support was handling this functionality properly, since it doesn't appear to have full Chromium driver capabilities.)</p> <p>Requestly has a selenium Python mechanism, but this has its own driver and I cannot see how it would integrate with seleniumbase undetected Chrome.</p> <p>The built-in seleniumbase wire=True support won't coexist with uc=True, as far as I can see.</p> <p>selenium-requests has an option to piggyback on an existing driver, but this is (to be honest) beyond my embryonic Python skills (though it does feel like this might be the answer if I knew how to put it in place).</p> <p>My scraping requires initial login, so I can't really swap from one driver to another in the course of the scraping session.</p>
<python><selenium-webdriver><http-headers><seleniumbase>
2023-11-16 11:48:38
2
1,165
MandyShaw
77,494,525
8,233,873
Altair: Apply condition to axis label color based on label membership in a list
<p>I have a scheduling plot where the Y axis is categorical (each Y-entry corresponds to a piece of equipment and the x axis displays time).</p> <p>I have a list of equipment that contains scheduling clashes. For each equipment item in the list of equipment with clashes, I want to highlight the equipment name in red on the plot.</p> <p>I can't find a way to apply a test that checks for membership in a list.</p> <p>A similar problem was posted here: <a href="https://stackoverflow.com/questions/66684882/color-some-x-labels-in-altair-plot">Color some x-labels in altair plot?</a> - but this solution involves encoding the test in a string which is interpreted by altair I guess. The &quot;is in&quot; test doesn't seem to work in this sort of construction.</p> <p>Here is a minimum working example:</p> <pre><code>import streamlit as st import altair as alt import pandas as pd data = [ {&quot;task&quot;: &quot;Task 1&quot;, &quot;start&quot;: 1, &quot;finish&quot;: 10, &quot;equipment&quot;: &quot;XXX-101&quot;}, {&quot;task&quot;: &quot;Task 2&quot;, &quot;start&quot;: 9, &quot;finish&quot;: 20, &quot;equipment&quot;: &quot;XXX-101&quot;}, {&quot;task&quot;: &quot;Task 3&quot;, &quot;start&quot;: 6, &quot;finish&quot;: 8, &quot;equipment&quot;: &quot;XXX-102&quot;}, {&quot;task&quot;: &quot;Task 4&quot;, &quot;start&quot;: 9, &quot;finish&quot;: 12, &quot;equipment&quot;: &quot;XXX-102&quot;}, {&quot;task&quot;: &quot;Task 5&quot;, &quot;start&quot;: 13, &quot;finish&quot;: 18, &quot;equipment&quot;: &quot;XXX-102&quot;}, {&quot;task&quot;: &quot;Task 6&quot;, &quot;start&quot;: 5, &quot;finish&quot;: 15, &quot;equipment&quot;: &quot;XXX-103&quot;}, {&quot;task&quot;: &quot;Task 7&quot;, &quot;start&quot;: 16, &quot;finish&quot;: 18, &quot;equipment&quot;: &quot;XXX-103&quot;}, {&quot;task&quot;: &quot;Task 8&quot;, &quot;start&quot;: 6, &quot;finish&quot;: 8, &quot;equipment&quot;: &quot;XXX-104&quot;}, {&quot;task&quot;: &quot;Task 9&quot;, &quot;start&quot;: 4, 
&quot;finish&quot;: 12, &quot;equipment&quot;: &quot;XXX-104&quot;}, {&quot;task&quot;: &quot;Task 10&quot;, &quot;start&quot;: 11, &quot;finish&quot;: 16, &quot;equipment&quot;: &quot;XXX-104&quot;}, ] dataframe = pd.DataFrame(data) equipment_has_clashes = [&quot;XXX-101&quot;, &quot;XXX-104&quot;] # determined by another algorithm chart = ( alt.Chart(dataframe) .mark_bar(height=20) .encode( x=alt.X(&quot;start&quot;).title(&quot;time&quot;), x2=(&quot;finish&quot;), y=alt.Y( &quot;equipment&quot;, axis=alt.Axis( labelColor=alt.condition( # Want to apply test equivalent to &quot;equipment is in equipment_has_clashes&quot; predicate=&quot;datum&quot;, if_true=alt.value(&quot;red&quot;), if_false=alt.value(&quot;black&quot;), ) ), ), tooltip=[alt.Tooltip(t) for t in [&quot;task&quot;, &quot;equipment&quot;, &quot;start&quot;, &quot;finish&quot;]], color=alt.Color(&quot;task&quot;, type=&quot;nominal&quot;).scale(scheme=&quot;category20&quot;), ) .properties(height=alt.Step(30)) .configure_view(strokeWidth=1) .configure_axis(domain=True, grid=True) .interactive() ) st.altair_chart(chart, use_container_width=True) </code></pre>
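A possible approach (a sketch, not verified against this exact chart): Altair's `alt.condition` accepts a raw Vega expression string as its predicate, Vega's `indexof` function tests list membership, and the axis label text is available in the expression as `datum.value`. A small hypothetical helper can build that expression from the Python list:

```python
# Hypothetical helper: build a Vega expression string that tests whether
# the axis label (datum.value) is a member of a Python list of strings.
def membership_predicate(values):
    quoted = ", ".join(f"'{v}'" for v in values)
    return f"indexof([{quoted}], datum.value) >= 0"

predicate = membership_predicate(["XXX-101", "XXX-102"])
print(predicate)  # indexof(['XXX-101', 'XXX-102'], datum.value) >= 0
```

The resulting string would then replace the placeholder in the question: `labelColor=alt.condition(predicate, if_true=alt.value("red"), if_false=alt.value("black"))`. Note the helper assumes the equipment names contain no single quotes.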
<python><pandas><streamlit><altair>
2023-11-16 11:45:55
2
313
multipitch
77,494,518
149,900
Emulate openssl s_client verify facility in Python
<p>I want to do something similar to this:</p> <pre><code>openssl s_client -verify_return_error -quiet -strict -verifyCAfile CA_file.crt server.name:4443 </code></pre> <p>The Certificate installed on <code>server.name</code> is a partial chain containing (in order): Server's cert, and the Intermediate1 cert.</p> <p>Running the above command will return (for instance) something like this:</p> <pre><code>### Successful chain traversal depth=2 C = ..., CN = ROOT, emailAddress = ... verify return:1 depth=1 C = ..., CN = Intermediate1, emailAddress = ... verify return:1 depth=0 C = ..., CN = server.name verify return:1 ### UNsuccessful chain traversal (e.g., wrong CA_file) depth=1 C = ..., CN = Intermediate1, emailAddress = ... verify error:num=20:unable to get local issuer certificate </code></pre> <p>How do I do that with Python?</p> <p>I've tried reading the documentation of <a href="https://www.pyopenssl.org/en/latest/index.html" rel="nofollow noreferrer"><code>pyOpenSSL</code></a> but it's really ... sparse. No code examples, no guide on how to do things.</p> <p>Notes:</p> <ol> <li>I just need a boolean return; <code>true</code> if certificate chain is valid, <code>false</code> if certificate chain is invalid (any reason, be it wrong local CA_file or wrong chain installed server-side)</li> <li>It's not necessary to use pyOpenSSL; any other methods that produce the boolean result I want, will do.</li> </ol>
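Since only a boolean result is needed, the standard-library `ssl` module is enough: a TLS handshake against a context loaded with the CA file either completes (chain verified) or raises `ssl.SSLCertVerificationError`. A sketch, with the host/port taken from the question as placeholders:

```python
import socket
import ssl

def chain_is_valid(host: str, port: int, ca_file: str, timeout: float = 10.0) -> bool:
    """Return True iff the server's certificate chain verifies against ca_file."""
    try:
        # create_default_context enables chain verification (CERT_REQUIRED)
        # and hostname checking, using only the given CA file as trust root.
        context = ssl.create_default_context(cafile=ca_file)
        with socket.create_connection((host, port), timeout=timeout) as sock:
            with context.wrap_socket(sock, server_hostname=host):
                return True  # handshake completed, so the chain verified
    except (ssl.SSLError, OSError):
        # Covers verification failures, unreachable hosts, bad CA file, etc.
        return False
```

This does not reproduce `s_client`'s per-depth output, only the overall verdict. If hostname verification is not wanted (pure chain validation), set `context.check_hostname = False` before wrapping; `verify_mode` stays `CERT_REQUIRED`.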
<python><openssl><pyopenssl>
2023-11-16 11:44:10
0
6,951
pepoluan
77,494,287
12,769,783
Is the name of the second parameter to __deepcopy__ required to be memo?
<p>Is it required to name the second argument of the <code>__deepcopy__</code> function <code>memo</code>, in case <code>__deepcopy__</code> is called with <code>memo</code> as a keyword argument?</p> <p>If I simply want to exclude instances from deep copies, would:</p> <pre class="lang-py prettyprint-override"><code>obj.__deepcopy__ = lambda self, _: self </code></pre> <p>be fine, or do I need to pay attention to naming the second argument of the lambda function <code>memo</code> in order to not cause any issues?</p> <p>My initial thoughts are that the parameter's name is part of the function signature, and <code>memo</code> is not documented to be a positional argument, so it should probably be safer to name the second argument <code>memo</code>, and just add a linter exception for marking the argument unused.</p>
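For what it's worth, current CPython's `copy.deepcopy` looks up `__deepcopy__` and calls it with the memo dict as a positional argument, so for a method defined on the class the parameter name should not matter (an implementation detail, not a documented guarantee):

```python
import copy

class SharedResource:
    # Second parameter deliberately not named "memo": CPython passes the
    # memo dict positionally, so this still works.
    def __deepcopy__(self, _unused):
        return self

obj = SharedResource()
clone = copy.deepcopy([obj, obj])
assert clone[0] is obj and clone[1] is obj
```

The instance-attribute form in the question is a separate trap: `obj.__deepcopy__ = lambda self, _: self` is a plain attribute, not a bound method, so `copy.deepcopy` would call it with only the memo argument and fail with a missing-argument `TypeError`; the instance-level equivalent would be a one-parameter lambda such as `obj.__deepcopy__ = lambda memo: obj`.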
<python><deep-copy>
2023-11-16 11:04:00
0
1,596
mutableVoid
77,494,052
6,399,645
How to interrupt python script that is running a bash command?
<p>I'm running a Python script, which runs a bash command <code>cmd</code> via</p> <pre class="lang-py prettyprint-override"><code>try: subprocess.run(['/bin/bash', '-ic', cmd]) except Exception as e: print(&quot;Error:&quot;, e) </code></pre> <p>I run this Python script from a bash terminal. When I press <code>Ctrl-C</code> while some bash command is being executed, it seems like only the bash command is interrupted, but the Python script keeps running (without capturing and printing the exception <code>e</code>). <strong>How do I get <code>Ctrl-C</code> or some other terminal command to interrupt or cause an exception in Python itself?</strong></p>
<python><linux><bash>
2023-11-16 10:30:26
1
434
user56834
77,493,818
4,451,315
Decorator which changes ints to strings with correct type hints
<p>I have written a decorator which changes the types of some arguments passed to a decorated function.</p> <p>For example, any argument which was <code>int</code> should become <code>str</code>:</p> <pre class="lang-py prettyprint-override"><code>from typing import Callable def decorator(func: Callable) -&gt; Callable: def wrapper(*args, **kwargs): modified_args = [str(arg) if isinstance(arg, int) else arg for arg in args] return func(*modified_args, **kwargs) return wrapper @decorator def my_function(a: int | str, b: int) -&gt; str: return a + b result = my_function('foo', 4) print(result) </code></pre> <p>Running this code outputs <code>'foo4'</code>, as expected.</p> <p>However, according to <code>mypy</code> the typing is incorrect:</p> <pre class="lang-py prettyprint-override"><code>t.py:12: error: Incompatible return value type (got &quot;Union[int, Any]&quot;, expected &quot;str&quot;) [return-value] t.py:12: error: Unsupported operand types for + (&quot;str&quot; and &quot;int&quot;) [operator] t.py:12: note: Left operand is of type &quot;Union[int, str]&quot; Found 2 errors in 1 file (checked 1 source file) </code></pre> <p>Is there a way to type <code>decorator</code> so that the within the body of <code>my_function</code>, mypy will know that any argument of type <code>int</code> has been transformed to <code>str</code>?</p>
<python><mypy><python-typing>
2023-11-16 09:57:06
1
11,062
ignoring_gravity
77,493,762
14,661,648
Flask Sock (Websocket) route runs one last time before disconnect
<p>In <code>flask-sock</code> (which is for regular websockets), if I run a <code>while</code> loop in a route function with a <code>time.sleep(10)</code> at the end, and the client disconnects during the <code>time.sleep</code>, the loop runs one last time before the route finally stops.</p> <p>I tried using try/except with a boolean flag to disable the loop, but it won't stop it. Any ideas?</p> <pre><code>@socket.route('/example') def example(websocket): while True: print('RAN') time.sleep(10) </code></pre>
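A likely cause: the server only notices the disconnect at the next socket operation, so a loop ending in a plain `time.sleep` always completes one more iteration before anything can raise. One sketch of a fix is to wait on the connection itself instead of sleeping; in flask-sock (which wraps simple-websocket), `ws.receive(timeout=...)` blocks for up to the timeout and, as far as I can tell, raises `simple_websocket.ConnectionClosed` as soon as the client goes away. The loop shape, shown with a hypothetical stand-in socket so the snippet runs anywhere:

```python
class ConnectionClosed(Exception):
    """Stand-in for simple_websocket.ConnectionClosed."""

class FakeSocket:
    """Hypothetical stand-in for flask-sock's connection object."""
    def __init__(self, ticks_before_close):
        self.ticks = ticks_before_close
    def receive(self, timeout=None):
        self.ticks -= 1
        if self.ticks < 0:
            raise ConnectionClosed

def stream(ws):
    runs = 0
    try:
        while True:
            runs += 1               # the work that preceded time.sleep(10)
            ws.receive(timeout=10)  # wakes immediately on disconnect
    except ConnectionClosed:
        return runs                 # loop ends without an extra iteration
```

In the real route, the structure would be the same with the flask-sock connection object and `except ConnectionClosed: return` (imported from `simple_websocket`, if that assumption about the package layout holds).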
<python><flask><websocket>
2023-11-16 09:47:59
1
1,067
Jiehfeng
77,493,727
18,091,040
Create and get pseudonym of a transaction of credential issue using Hyperledger Indy
<p>I am trying to dive deep into <a href="https://github.com/hyperledger/indy-sdk/blob/main/docs/how-tos/issue-credential/README.md" rel="nofollow noreferrer">this example</a> of Hyperledger Indy about how to issue a credential. Basically, an &quot;author 1&quot; creates a schema definition and stores it in the ledger. Then this author 1 generates a credential and issues it to &quot;author 2&quot;. I thought this transaction (author 1 issuing a credential to author 2) would generate a pseudonym.</p> <blockquote> <p><strong>Pseudonym:</strong> Blinded Identifier used to maintain privacy in the context of an ongoing digital relationship (Connection).</p> </blockquote> <p>I would like to know whether there is a way, using Python, to get this pseudonym from this transaction. Apparently one of the fields generated in this transaction is <code>reqId</code>, but it is not unique, as it's just a combination of author 1's and author 2's DID and some data present in the credential. I would also like to know whether there is a way to recover the author's DID from this pseudonym.</p> <p>I had a look in the <a href="https://hyperledger-indy.readthedocs.io/en/latest/search.html?q=pseudonym&amp;check_keywords=yes&amp;area=default" rel="nofollow noreferrer">documentation</a> and in <a href="https://hyperledger-indy.readthedocs.io/projects/sdk/en/latest/docs/getting-started/indy-walkthrough.html" rel="nofollow noreferrer">this getting started example</a>, but they don't give further information about pseudonyms. The only place a pseudonym is mentioned is in the introduction of the getting started example; during its development it is not mentioned at all, even though many transactions are implemented.</p>
<python><blockchain><hyperledger><hyperledger-indy>
2023-11-16 09:41:30
1
640
brenodacosta
77,493,697
1,019,455
py.typed is not included when using pyproject.toml
<p>I am publishing my package with <code>mypy</code> support. Unfortunately, I can't bundle <code>py.typed</code>, which is a blank marker file, into the package. Here is the directory structure from my <a href="https://github.com/elcolie/AGA8-python" rel="nofollow noreferrer">repository</a>.</p> <p>I have followed this <a href="https://stackoverflow.com/questions/76073605/add-py-typed-as-package-data-with-setuptools-in-pyproject-toml">answer</a>, but it does not work.</p> <pre class="lang-bash prettyprint-override"><code>. ├── LICENSE ├── README.md ├── aga8 │   ├── __init__.py │   ├── __pycache__ │   │   └── detail.cpython-311.pyc │   ├── detail.py │   ├── py.typed │   └── tests.py ├── pyproject.toml └── setup.cfg 2 directories, 9 files </code></pre> <p><code>pyproject.toml</code></p> <pre class="lang-bash prettyprint-override"><code>[project] name = &quot;aga8-python&quot; version = &quot;0.0.1&quot; authors = [ { name=&quot;Sarit Ritwirune&quot;, email=&quot;sarit@appsmiths.com&quot; }, ] description = &quot;American Gas Association Report No.8&quot; readme = &quot;README.md&quot; requires-python = &quot;&gt;=3.8&quot; classifiers = [ &quot;Programming Language :: Python :: 3&quot;, &quot;License :: OSI Approved :: MIT License&quot;, &quot;Operating System :: OS Independent&quot;, ] [tool.poetry] packages = [ {include = &quot;aga8/py.typed&quot;} ] [project.urls] &quot;Homepage&quot; = &quot;https://github.com/elcolie/AGA8-python&quot; [tool.setuptools.package-data] &quot;aga8&quot; = [&quot;py.typed&quot;] [tool.setuptools.packages.find] where = [&quot;aga8&quot;] </code></pre> <p><strong>Question:</strong><br> How do I include <code>py.typed</code>, which is a blank file, in the packaged <code>aga8</code> directory?</p> <p><strong>Temporary Solution:</strong><br> I followed the <code>humanize</code> repository; I think it adds <code>py.typed</code> just before publishing.</p>
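One likely culprit in the shown `pyproject.toml` is mixed backends and a misplaced search root: there is a `[tool.poetry]` table but no `[build-system]`, and `[tool.setuptools.packages.find] where = ["aga8"]` searches *inside* the package directory instead of the project root, so setuptools never finds the `aga8` package (and hence never applies its package data). A sketch of a setuptools-only layout, assuming setuptools is the intended backend:

```toml
[build-system]
requires = ["setuptools>=61"]
build-backend = "setuptools.build_meta"

# aga8/ sits at the project root, so search from "." (and drop [tool.poetry],
# which belongs to a different build backend)
[tool.setuptools.packages.find]
where = ["."]
include = ["aga8*"]

[tool.setuptools.package-data]
aga8 = ["py.typed"]
```

The `[project]` table from the question stays as-is; only the build/packaging tables change.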
<python><build><mypy>
2023-11-16 09:36:58
1
9,623
joe
77,493,608
19,198,552
How can I move a tkinter canvas-window by the mouse-pointer?
<p>I have a tkinter Canvas where some objects are placed and can be moved by the mouse pointer. One of these objects is a text, which is not implemented as canvas-text-item but as a text-widget in a canvas-window-item, as I want to implement syntax highlighting for the text. I know how to implement the move-feature for a canvas-text-item but I am not able to implement the same feature for a canvas-window-item. I believe the reason is that when the mouse pointer is inside the canvas-window-item, it is not inside the canvas anymore. This is my example code, where moving text1 works, but moving text2 fails:</p> <pre><code> import tkinter as tk event_x = None event_y = None def end_move_text(event, item_id): canvas_id.tag_unbind(item_id, &quot;&lt;Motion&gt;&quot; ) canvas_id.tag_unbind(item_id, &quot;&lt;ButtonRelease-1&gt;&quot;) def move_text(event, item_id): global event_x, event_y canvas_id.move(item_id, event.x-event_x, event.y-event_y) event_x, event_y = event.x, event.y def move_start_text(event, item_id): global event_x, event_y event_x, event_y = event.x, event.y canvas_id.tag_bind(item_id, &quot;&lt;Motion&gt;&quot; , lambda event : move_text (event, item_id)) canvas_id.tag_bind(item_id, &quot;&lt;ButtonRelease-1&gt;&quot;, lambda event : end_move_text(event, item_id)) root = tk.Tk() canvas_id = tk.Canvas(root, width=300, height=200) canvas_text = canvas_id.create_text(50,50, text=&quot;text1&quot;, tag=&quot;text1&quot;, font=(&quot;Courier&quot;, 10)) canvas_rect = canvas_id.create_rectangle(canvas_id.bbox(canvas_text), outline=&quot;black&quot;, tag=&quot;text1&quot;) canvas_id.tag_bind(canvas_text, &quot;&lt;Button-1&gt;&quot;, lambda event: move_start_text(event, &quot;text1&quot;)) text_widget = tk.Text(canvas_id, height=1, width=5, relief=&quot;flat&quot;, font=(&quot;Courier&quot;, 10)) text_widget.insert(&quot;1.0&quot;, &quot;text2&quot;) canvas_wind = canvas_id.create_window(150, 50) canvas_id.itemconfig(canvas_wind, window=text_widget) 
canvas_id.tag_bind(canvas_wind, &quot;&lt;Button-1&gt;&quot;, lambda event: move_start_text(event, canvas_wind)) # does not work canvas_id.grid() root.mainloop() </code></pre> <p>As a workaround I implemented a binding of Button-1 directly to the canvas, where I check if there is any canvas-item near the mouse-pointer and if yes this canvas item is moved. But then the user has to click &quot;near&quot; the object to move and not &quot;at&quot; the object to move, which is bad user experience.</p>
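One plausible fix (a sketch, untested against this exact layout): events over a canvas window item are delivered to the embedded widget, not to the canvas, so `tag_bind` on the window item never fires. Binding on the Text widget itself works, and using screen coordinates (`x_root`/`y_root`) sidesteps the widget-vs-canvas coordinate mismatch. The drag arithmetic, kept free of Tk objects so it runs headless:

```python
drag_state = {}

def start_move(x_root, y_root):
    # Remember where the drag began, in screen coordinates.
    drag_state["origin"] = (x_root, y_root)

def do_move(canvas, item_id, x_root, y_root):
    # Move the canvas item by the pointer's screen-coordinate delta.
    x0, y0 = drag_state["origin"]
    canvas.move(item_id, x_root - x0, y_root - y0)
    drag_state["origin"] = (x_root, y_root)
```

With the question's names, the bindings would be `text_widget.bind("<Button-1>", lambda e: start_move(e.x_root, e.y_root))` and `text_widget.bind("<B1-Motion>", lambda e: do_move(canvas_id, canvas_wind, e.x_root, e.y_root))`; no bindings on the canvas window item itself are needed.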
<python><tkinter><tkinter-canvas>
2023-11-16 09:25:25
2
729
Matthias Schweikart
77,493,498
9,940,188
Why does a Python system package try to import a local module?
<p>I just ran into a weird problem: a system package (smtplib) tries to import a local package / module. I wrote a script named &quot;email.py&quot; that imports smtplib, which in turn seems to try to import email.utils not from the system email package but from my script. The problem goes away if I rename the script to something else. Seems to be an easy fix, but it doesn't make sense. What if some future Python release introduces a new package that collides with a local module somewhere deep in my application?</p> <pre><code>~$ python -V Python 3.11.5 ~$ cat email.py print(&quot;I AM BEING IMPORTED&quot;) import smtplib ~$ python email.py I AM BEING IMPORTED I AM BEING IMPORTED Traceback (most recent call last): File &quot;/home/dh/email.py&quot;, line 2, in &lt;module&gt; import smtplib File &quot;/usr/local/src/Python-3.11.5/Lib/smtplib.py&quot;, line 47, in &lt;module&gt; import email.utils ModuleNotFoundError: No module named 'email.utils'; 'email' is not a package </code></pre>
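The cause is the import path, not smtplib: when you run `python email.py`, Python prepends the script's directory to `sys.path`, so that directory is searched before the standard library and the local `email.py` shadows the stdlib `email` package. A quick check (assumes no local `email.py` next to where this snippet runs):

```python
import sys
import email

# sys.path[0] is the executed script's directory, searched before the
# stdlib, so a local email.py there wins over the stdlib email package.
print(sys.path[0])
print(email.__file__)  # reveals which 'email' was actually imported

# The stdlib email is a package, so its __file__ ends in __init__.py;
# a shadowing local module's would not.
shadowed = not email.__file__.endswith("__init__.py")
```

Regarding future collisions: Python 3.11+ adds `python -P` (and the `PYTHONSAFEPATH` environment variable), which stops prepending the script directory; running your code as a package module via `python -m` avoids the problem as well.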
<python>
2023-11-16 09:08:54
1
679
musbur