Dataset schema (column: type, observed range):
QuestionId: int64, 74.8M–79.8M
UserId: int64, 56–29.4M
QuestionTitle: string, 15–150 chars
QuestionBody: string, 40–40.3k chars
Tags: string, 8–101 chars
CreationDate: date, 2022-12-10 09:42:47 – 2025-11-01 19:08:18
AnswerCount: int64, 0–44
UserExpertiseLevel: int64, 301–888k
UserDisplayName: string, 3–30 chars
76,966,216
11,278,478
Assign value to a dataframe column based on another column and condition
<p>I have a dataframe with three columns:</p> <pre><code>id,col1,col2 </code></pre> <p>Now, for records where col1 is empty, I need to copy the value of col2 into col1. How can I do this? Below is what I tried, but it is not working.</p> <pre><code>df[df[col1]==''] = df['col2'] </code></pre>
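A minimal sketch of one common fix: select the empty rows with a boolean mask and assign via `.loc` (note the quoted column name, which the snippet above is missing). The sample data here is hypothetical.

```python
import pandas as pd

# Hypothetical data mirroring the question: col1 is empty for some rows.
df = pd.DataFrame({
    "id": [1, 2, 3],
    "col1": ["a", "", ""],
    "col2": ["x", "y", "z"],
})

# Select rows where col1 is empty and assign col2's values for those rows only.
# The .loc[row_mask, column] form keeps the assignment aligned by index.
df.loc[df["col1"] == "", "col1"] = df["col2"]

print(df["col1"].tolist())  # ['a', 'y', 'z']
```

The original `df[df[col1]==''] = df['col2']` fails both because `col1` is an unquoted name and because it assigns to whole rows rather than to the `col1` column of the masked rows.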
<python><python-3.x><pandas><pyspark>
2023-08-24 04:25:00
3
434
PythonDeveloper
76,966,010
14,624,039
Relative Import --- Ultimate Version
<pre><code>|-- eval.py |-- modeling_llama.py |-- patch | |-- __init__.py | |-- flash_attn.py | |-- log_scale.py | |-- ntk.py | |-- ntk_fixed.py | |-- ntk_mixed.py | `-- rerope.py |-- prompt.py |-- test_model.sh `-- utils.py </code></pre> <p>Above is my project directory.</p> <p>The script I run is <code>eval.py</code>, which imports <code>utils.py</code>. <code>patch</code> is imported in <code>utils.py</code>. <code>patch.rerope</code> depends on <code>modelling_llama</code>, so I have to import <code>modelling_llama</code> in <code>rerope</code>. I'm confused about how to correctly import <code>modelling_llama</code> in <code>rerope</code>, since it lives one directory level higher.</p> <p>Should I use <code>import modelling_llama</code>, <code>from .. import modelling_llama</code>, or <code>import ..modeling_llama</code>?</p> <p>I've tried all of the above and found that only <code>import modelling_llama</code> works for me, but I'm still wondering why.</p>
<python><python-import><relative-import>
2023-08-24 03:08:56
3
432
Arist12
76,965,909
7,533,650
Impossible To Scrape Some Websites Using Python?
<p>I am trying to scrape a particular website <a href="https://birdeye.so/find-gems?chain=solana" rel="nofollow noreferrer">https://birdeye.so/find-gems?chain=solana</a>, but unable to load the data within the table. I am only able to get the table's headers, such as <code>Token</code>, <code>Trending</code>, etc.</p> <p>Are some pages just impossible to scrape? If so, why exactly?</p> <p>Below is my code. I've attempted to scrape this page using Selenium, but am unable to load all of the contents. What am I doing wrong?</p> <pre><code>from selenium import webdriver from webdriver_manager.chrome import ChromeDriverManager import pandas as pd from bs4 import BeautifulSoup import requests driver = webdriver.Chrome() driver.maximize_window() driver.get(&quot;https://birdeye.so/find-gems?chain=solana/&quot;) html = driver.page_source soup = BeautifulSoup(html) print(soup) </code></pre>
<python><selenium-webdriver><web-scraping><beautifulsoup>
2023-08-24 02:29:22
1
303
BorangeOrange1337
76,965,751
10,184,158
netmiko connect via remote console port
<p>We are using Opengear to connect to Cisco devices remotely via the console port, and I need to push some configuration through it. I followed this link <a href="https://stackoverflow.com/questions/70054323/netmiko-enter-configuration-terminal-mode">Netmiko - Enter configuration terminal mode</a> and wrote the code below. When I run the code it throws an error, but if I step through it in the PyCharm debugger, it works. From the device console I can see that under the debugger the device is activated and enters enable mode, but when I run the code directly there is no response from the device.</p> <p>Is there anything that I have missed?</p> <pre><code>from netmiko import ConnectHandler, redispatch device = { &quot;device_type&quot;: &quot;terminal_server&quot;, &quot;ip&quot;: &quot;****&quot;, &quot;username&quot;: &quot;****&quot;, &quot;password&quot;: &quot;****&quot;, &quot;session_log&quot;: 'netmiko_session.log', &quot;port&quot;: '22' } net_connect = ConnectHandler(**device) net_connect.write_channel(&quot;\r&quot;) net_connect.enable() net_connect.send_command_timing(&quot;enable&quot;) redispatch(net_connect, device_type=&quot;cisco_xe&quot;) output = net_connect.send_command(&quot;show version&quot;) print(output) net_connect.disconnect() </code></pre> <p>Here is the error:</p> <pre><code>Traceback (most recent call last): File &quot;C:\Users\user\venv\console_conn.py&quot;, line 24, in &lt;module&gt; redispatch(net_connect, device_type=&quot;cisco_xe&quot;) File &quot;C:\Users\user\venv\lib\site-packages\netmiko\ssh_dispatcher.py&quot;, line 487, in redispatch obj._try_session_preparation() File &quot;C:\Users\user\venv\lib\site-packages\netmiko\base_connection.py&quot;, line 963, in _try_session_preparation self.session_preparation() File &quot;C:\Users\user\venv\lib\site-packages\netmiko\cisco\cisco_ios.py&quot;, line 19, in session_preparation self.set_terminal_width(command=cmd, pattern=cmd) File 
&quot;C:\Users\user\lib\site-packages\netmiko\base_connection.py&quot;, line 1322, in set_terminal_width output = self.read_until_pattern(pattern=pattern) File &quot;C:\Users\user\venv\lib\site-packages\netmiko\base_connection.py&quot;, line 721, in read_until_pattern raise ReadTimeout(msg) netmiko.exceptions.ReadTimeout: Pattern not detected: 'terminal width 511' in output. Things you might try to fix this: 1. Adjust the regex pattern to better identify the terminating string. Note, in many situations the pattern is automatically based on the network device's prompt. 2. Increase the read_timeout to a larger value. You can also look at the Netmiko session_log or debug log for more information. </code></pre>
<python><console><netmiko>
2023-08-24 01:32:36
1
463
chun xu
76,965,685
11,850,322
Two layer if condition
<p>I'll try my best to explain what I need:</p> <pre><code>If len(x) &gt; 5: sample = func_sample(5) If sample meet conditionA: Then use sample Else go to elif bellow Elif len(x) &gt; 4: sample = func_sample(4) if sample meet conditionA: Then use sample Else go the else bellow Else: sample = func_sample(3) </code></pre> <p>I know VBA has a <code>goto</code> statement, but Python doesn't, so I'm struggling a bit here.</p>
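The goto-style fallthrough in the pseudocode above can be written as a loop over candidate sizes with an early return. `func_sample` and `meets_condition_a` below are stand-ins for the question's unspecified functions:

```python
def func_sample(n):
    # Stand-in for the question's sampler: returns a list of n items.
    return list(range(n))

def meets_condition_a(sample):
    # Stand-in for "conditionA": here, the sample's sum must be even.
    return sum(sample) % 2 == 0

def pick_sample(x):
    # Try the largest size whose length guard passes; when the sampled
    # result fails condition A, simply fall through to the next size --
    # no goto needed.
    for n in (5, 4):
        if len(x) > n:
            sample = func_sample(n)
            if meets_condition_a(sample):
                return sample
    return func_sample(3)

print(pick_sample(list(range(10))))  # [0, 1, 2, 3, 4]
```

Adding another tier later only means extending the `(5, 4)` tuple, instead of nesting a further `elif`.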
<python>
2023-08-24 01:05:48
3
1,093
PTQuoc
76,965,613
3,058,609
"There are no type variables left" after union of Pydantic models
<p>I'm doing some fancy type-related shenanigans and have defined three responses from my API.</p> <pre><code>from pydantic import BaseModel from typing import TypeVar, Literal, Generic T = TypeVar('T') class Success(BaseModel, Generic[T]): status: Literal[&quot;success&quot;] = &quot;success&quot; data: T class Fail(BaseModel): status: Literal[&quot;fail&quot;] = &quot;fail&quot; message: str class Error(BaseModel): status: Literal[&quot;error&quot;] = &quot;error&quot; message: str code: int | None </code></pre> <p>The general form of these is a union:</p> <pre><code>Response = Success[T] | Fail | Error </code></pre> <p>which I would like to also be generic, so my handler function signatures can use a specific type, like</p> <pre><code>def some_handler(message: str) -&gt; Response[int]: return Fail(message=message) # the body of this function is actually irrelevant to the error </code></pre> <p>However, doing so gives me the error:</p> <pre><code>Traceback (most recent call last): File &quot;/home/adam_smith/tmp/test/main.py&quot;, line 22, in &lt;module&gt; def some_handler(message: str) -&gt; Response[int]: TypeError: There are no type variables left in __main__.Success | __main__.Fail | __main__.Error </code></pre> <p>This is notable because if I drop BaseModel from each class, it works as expected</p> <pre class="lang-py prettyprint-override"><code>from typing import TypeVar, Literal, Generic T = TypeVar('T') class Success(Generic[T]): status: Literal[&quot;success&quot;] = &quot;success&quot; data: T class Fail: status: Literal[&quot;fail&quot;] = &quot;fail&quot; message: str class Error: status: Literal[&quot;error&quot;] = &quot;error&quot; message: str code: int | None Response = Success[T] | Fail | Error def some_handler(message: str) -&gt; Response[int]: return Fail(message=message) # works! </code></pre>
<python><python-typing><pydantic>
2023-08-24 00:32:58
0
54,569
Adam Smith
76,965,604
14,256,643
How to process multiple URLs simultaneously using the Python Requests library
<p>I prefer sticking with the Python Requests library. Although I've searched Google and Stack Overflow, I've only found solutions involving asyncio or similar libraries, without any clear guidance on combining them with the Requests library.</p> <p>Here's what I want to do: I have a list of 1000 URLs, and I want to work on 100 URLs together in each batch. This means I'd loop through the list 10 times, processing 100 URLs in each loop. This approach is much faster than processing each URL one by one. Now, what if I have 1001 URLs? How can I make this work for quantities that aren't a multiple of the batch size, like 1001 or 507?</p> <pre><code>def req_proxy(url: str, http_flag: bool): response = requests.get(url, verify=http_flag) return response url_list = [] # assume I have 1k urls for link in url_list: res = req_proxy(link, http_flag=True) print(res.text) </code></pre>
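One way to sketch the batching: a `range()` with a step handles any list length, and a thread pool runs the blocking Requests calls concurrently, so no asyncio is needed. `fetch` below is a stand-in for the question's `req_proxy` so the sketch runs without network access; in real code it would call `requests.get(url)`.

```python
from concurrent.futures import ThreadPoolExecutor

def fetch(url):
    # Stand-in for the question's req_proxy(); a real version would call
    # requests.get(url, verify=True) here and return the response.
    return f"response for {url}"

def process_in_batches(urls, batch_size=100):
    results = []
    with ThreadPoolExecutor(max_workers=batch_size) as pool:
        # range() with a step naturally handles lengths that are not a
        # multiple of batch_size: the final slice is simply shorter
        # (e.g. 1001 URLs -> 10 full batches plus one batch of 1).
        for start in range(0, len(urls), batch_size):
            batch = urls[start:start + batch_size]
            results.extend(pool.map(fetch, batch))
    return results

urls = [f"https://example.com/page/{i}" for i in range(1001)]
out = process_in_batches(urls)
print(len(out))  # 1001
```

`pool.map` preserves input order, so `out[i]` corresponds to `urls[i]` even though the requests inside a batch run concurrently.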
<python><python-3.x><asynchronous><python-requests>
2023-08-24 00:28:36
2
1,647
boyenec
76,965,517
11,596,051
Why isn't this Python code properly assigning the DOM variable?
<p>I am fairly new to Python and totally new to XML operations in Python. Here is a snippet of one of the XML files:</p> <pre><code>&lt;!--created on 2023-08-21 06:58:42 - tinyMediaManager 4.3.13--&gt; &lt;movie&gt; &lt;title&gt;El Dorado&lt;/title&gt; &lt;originaltitle&gt;El Dorado&lt;/originaltitle&gt; &lt;sorttitle/&gt; ... &lt;/movie&gt; </code></pre> <p>I need to assign the value to a variable. The program assigns the null string instead of &quot;El Dorado&quot;. Here is the code:</p> <pre><code>import xml.dom.minidom import glob Directory = &quot;/RAID/Server-Main/KODI/VideoDB/&quot; for InFile in glob.glob(Directory + &quot;*.xnfo&quot;): print(InFile) domtree = xml.dom.minidom.parse(InFile) group = domtree.documentElement Main = group.getElementsByTagName('movie') for name in Main: Title = name.getAttribute('title') print(Title) </code></pre> <p>I am not getting any error, and I know it is properly scanning in all the files, but the output is blank.</p> <p>Edit: I did a little poking, and for some reason the program never enters the for loop, meaning Main has no elements. I do not understand this at all.</p>
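Two details of the snippet would produce exactly this blank output: `domtree.documentElement` already *is* the `&lt;movie&gt;` element, so searching it for `movie` descendants finds nothing (hence the loop never runs), and `&lt;title&gt;` is a child element rather than an attribute, so `getAttribute('title')` returns `''`. A self-contained sketch with an inline version of the XML:

```python
import xml.dom.minidom

xml_text = """<?xml version="1.0"?>
<!--created on 2023-08-21 - tinyMediaManager-->
<movie>
  <title>El Dorado</title>
  <originaltitle>El Dorado</originaltitle>
  <sorttitle/>
</movie>"""

dom = xml.dom.minidom.parseString(xml_text)
# documentElement *is* the <movie> element, so searching it for 'movie'
# descendants finds nothing -- that is why the question's loop never runs.
movie = dom.documentElement

# <title> is a child element, not an attribute, so getAttribute('title')
# returns '' -- read the element's text node instead.
title = movie.getElementsByTagName("title")[0].firstChild.data
print(title)  # El Dorado
```

In the question's per-file loop, the same change means dropping the `getElementsByTagName('movie')` loop and reading `group.getElementsByTagName('title')[0].firstChild.data` directly.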
<python><xml-parsing>
2023-08-23 23:51:48
1
395
lrhorer
76,965,512
7,658,051
watershed algorithm raises ERROR: error: (-215:Assertion failed) src.type() == CV_8UC3 && dst.type() == CV_32SC1 in function 'watershed'
<p>I am following the <a href="https://docs.opencv.org/4.x/d3/db4/tutorial_py_watershed.html" rel="nofollow noreferrer">opencv watershed algorithm tutorial</a>, I am just using a different start image.</p> <p>Everything goes fine until I get to the markers.</p> <pre><code>import cv2 import numpy as np img = cv2.imread('myimage.png') assert img is not None, &quot;file could not be read, check with os.path.exists()&quot; img = cv2.cvtColor(img,cv2.COLOR_BGR2GRAY) # NOISE REMOVAL WITH ELABORATE THRESHOLD img_blur = cv2.medianBlur(img, ksize=15) THRESH_SUM = cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU ret, thresh_summed_thresh = cv2.threshold(img_blur, 127, 255, cv2.THRESH_BINARY_INV+cv2.THRESH_OTSU) kernel = np.ones((3,3), np.uint8) opening = cv2.morphologyEx(thresh_summed_thresh, cv2.MORPH_OPEN, kernel, iterations=2) dist_transform = cv2.distanceTransform(opening, cv2.DIST_L2, 5 ) dist_transform_normalized = cv2.normalize(dist_transform, None, 0, 1.0, cv2.NORM_MINMAX) # SURE FOREGROUND, BACKGROUND, UNKNOWN REGION ret, sure_foreground = cv2.threshold(dist_transform, 0.7*dist_transform.max(), 255, 0) sure_foreground_uint8 = np.uint8(sure_foreground) sure_background = cv2.dilate(opening, kernel, iterations=3) sure_background_uint8 = np.uint8(sure_background) unknown_region = cv2.subtract(sure_background, sure_foreground_uint8) # MARKERS ret, markers = cv2.connectedComponents(sure_foreground_uint8) markers = markers + 1 markers[unknown_region==255] = 0 # only to show the image on VS code markers_normalized = cv2.normalize(markers, None, 0, 255, cv2.NORM_MINMAX) markers_normalized_uint8 = np.uint8(markers_normalized) </code></pre> <p>But when I try to run the watershed algorithm,</p> <pre><code>watershed_img = cv2.watershed(img, markers) </code></pre> <p>this error is raised</p> <blockquote> <p>watershed_img = cv2.watershed(img, markers) cv2.error: OpenCV(4.8.0) /io/opencv/modules/imgproc/src/segmentation.cpp:161: error: (-215:Assertion failed) src.type() == CV_8UC3 
&amp;&amp; dst.type() == CV_32SC1 in function 'watershed'</p> </blockquote> <p>What could be the problem?</p> <p>Some info</p> <pre><code>print(img.shape, markers.shape) (405, 304) (405, 304) print(type(img), type(markers)) &lt;class 'numpy.ndarray'&gt; &lt;class 'numpy.ndarray'&gt; </code></pre>
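The assertion spells out the two required types: the source image must be 8-bit 3-channel (`CV_8UC3`) and the markers 32-bit signed (`CV_32SC1`). The code above converted `img` to grayscale, which makes it single-channel. A numpy-only sketch of the shape/dtype fix; in real code, `cv2.cvtColor(gray, cv2.COLOR_GRAY2BGR)` does the same stacking, or you can simply keep the original colour image for the `cv2.watershed` call:

```python
import numpy as np

# Hypothetical stand-ins with the question's shapes.
gray = np.random.randint(0, 256, (405, 304), dtype=np.uint8)    # 1-channel
markers = np.random.randint(0, 5, (405, 304)).astype(np.int32)  # CV_32SC1

# cv2.watershed asserts src.type() == CV_8UC3: an 8-bit *3-channel* image.
# The question's img is grayscale (CV_8UC1) -- rebuild a 3-channel view.
img_bgr = np.stack([gray, gray, gray], axis=-1)

print(img_bgr.shape, img_bgr.dtype, markers.dtype)
# (405, 304, 3) uint8 int32 -- the types watershed expects
```

The markers from `cv2.connectedComponents` are already int32, so in the question's code only the image side needs fixing.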
<python><opencv>
2023-08-23 23:49:41
1
4,389
Tms91
76,965,208
4,175,822
In Python, how can I type hint an input that is not str or bytes and is a sequence?
<p>In Python, how can I type hint an input that is not str or bytes and is a sequence? I have a function that I want to accept a sequence, but not a string or bytes input. However, this code still allows str and bytes:</p> <pre><code>import typing def validate(input_data: typing.Sequence) -&gt; None: # implementation that validates that input data meets expected constraints # one can ignore the implementation pass </code></pre>
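Python's static type system has no way to express "any `Sequence` except `str`/`bytes`" (both satisfy the `Sequence` protocol), so a common fallback is a runtime guard; static checkers will still accept a `str` argument, but the call fails loudly. A sketch:

```python
from typing import Sequence

def validate(input_data: Sequence) -> None:
    # Static typing cannot express "any Sequence except str/bytes",
    # so reject the string-like types at runtime instead.
    if isinstance(input_data, (str, bytes, bytearray)):
        raise TypeError("str/bytes input is not allowed; pass a list or tuple")
    # ... real validation of the sequence would go here ...

validate([1, 2, 3])           # accepted
try:
    validate("not allowed")   # rejected at runtime
except TypeError as e:
    print(e)
```

Some codebases additionally add an `@overload` that maps `str` to `NoReturn` so type checkers flag string arguments too, but that is a convention rather than a language feature.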
<python><python-typing>
2023-08-23 22:14:14
1
2,821
spacether
76,965,187
10,452,962
SDXL requires_aesthetics_score=True error
<p>I'm trying to figure this one out, been at it for a while and can't seem to make any headway. Any help is greatly appreciated!!! It is my first time using Stable Diffusion so maybe I'm missing something here but I was trying to follow the HuggingFace tutorial <a href="https://huggingface.co/docs/diffusers/v0.20.0/en/api/pipelines/stable_diffusion/stable_diffusion_xl#1-ensemble-of-expert-denoisers" rel="nofollow noreferrer">https://huggingface.co/docs/diffusers/v0.20.0/en/api/pipelines/stable_diffusion/stable_diffusion_xl#1-ensemble-of-expert-denoisers</a></p> <p>It was working before but now I'm trying to specify height and width...that seems to be when the problem started?</p> <p>I've also tried adding requires_aesthetics_score=True to before sending refiner to cuda but that doesn't work -- same error.</p> <pre><code>ValueError Traceback (most recent call last) Cell In[74], line 1 ----&gt; 1 refiner_image = refiner( 2 prompt=&quot;cartoon of colorful monsters frolocking in a dark spooky graveyard with tombstones and graves behind a castle&quot;, 3 num_inference_steps=n_steps, 4 denoising_end=high_noise_frac, 5 image=img 6 ).images[0] File c:\Users\Mark\anaconda3\envs\auto_content_creator\lib\site-packages\torch\utils\_contextlib.py:115, in context_decorator..decorate_context(*args, **kwargs) 112 @functools.wraps(func) 113 def decorate_context(*args, **kwargs): 114 with ctx_factory(): --&gt; 115 return func(*args, **kwargs) File c:\Users\Mark\anaconda3\envs\auto_content_creator\lib\site-packages\diffusers\pipelines\stable_diffusion_xl\pipeline_stable_diffusion_xl_img2img.py:910, in StableDiffusionXLImg2ImgPipeline.__call__(self, prompt, prompt_2, image, strength, num_inference_steps, denoising_start, denoising_end, guidance_scale, negative_prompt, negative_prompt_2, num_images_per_prompt, eta, generator, latents, prompt_embeds, negative_prompt_embeds, pooled_prompt_embeds, negative_pooled_prompt_embeds, output_type, return_dict, callback, callback_steps, 
cross_attention_kwargs, guidance_rescale, original_size, crops_coords_top_left, target_size, aesthetic_score, negative_aesthetic_score) 908 # 8. Prepare added time ids &amp; embeddings 909 add_text_embeds = pooled_prompt_embeds --&gt; 910 add_time_ids, add_neg_time_ids = self._get_add_time_ids( 911 original_size, 912 crops_coords_top_left, 913 target_size, 914 aesthetic_score, 915 negative_aesthetic_score, 916 dtype=prompt_embeds.dtype, 917 ) 918 add_time_ids = add_time_ids.repeat(batch_size * num_images_per_prompt, 1) 920 if do_classifier_free_guidance: File c:\Users\Mark\anaconda3\envs\auto_content_creator\lib\site-packages\diffusers\pipelines\stable_diffusion_xl\pipeline_stable_diffusion_xl_img2img.py:613, in StableDiffusionXLImg2ImgPipeline._get_add_time_ids(self, original_size, crops_coords_top_left, target_size, aesthetic_score, negative_aesthetic_score, dtype) 607 expected_add_embed_dim = self.unet.add_embedding.linear_1.in_features 609 if ( 610 expected_add_embed_dim &gt; passed_add_embed_dim 611 and (expected_add_embed_dim - passed_add_embed_dim) == self.unet.config.addition_time_embed_dim 612 ): --&gt; 613 raise ValueError( 614 f&quot;Model expects an added time embedding vector of length {expected_add_embed_dim}, but a vector of {passed_add_embed_dim} was created. Please make sure to enable `requires_aesthetics_score` with `pipe.register_to_config(requires_aesthetics_score=True)` to make sure `aesthetic_score` {aesthetic_score} and `negative_aesthetic_score` {negative_aesthetic_score} is correctly used by the model.&quot; 615 ) 616 elif ( 617 expected_add_embed_dim &lt; passed_add_embed_dim 618 and (passed_add_embed_dim - expected_add_embed_dim) == self.unet.config.addition_time_embed_dim 619 ): 620 raise ValueError( 621 f&quot;Model expects an added time embedding vector of length {expected_add_embed_dim}, but a vector of {passed_add_embed_dim} was created. 
Please make sure to disable `requires_aesthetics_score` with `pipe.register_to_config(requires_aesthetics_score=False)` to make sure `target_size` {target_size} is correctly used by the model.&quot; 622 ) ValueError: Model expects an added time embedding vector of length 2816, but a vector of 2560 was created. Please make sure to enable `requires_aesthetics_score` with `pipe.register_to_config(requires_aesthetics_score=True)` to make sure `aesthetic_score` 6.0 and `negative_aesthetic_score` 2.5 is correctly used by the model. </code></pre> <p>My code is:</p> <pre><code>from diffusers import StableDiffusionXLPipeline, DiffusionPipeline import torch import os base = StableDiffusionXLPipeline.from_pretrained( &quot;stabilityai/stable-diffusion-xl-base-1.0&quot;, torch_dtype=torch.float16, variant=&quot;fp16&quot;, use_safetensors=True ) base.to(&quot;cuda&quot;) refiner = DiffusionPipeline.from_pretrained( &quot;stabilityai/stable-diffusion-xl-refiner-1.0&quot;, **base.components ) refiner.to(&quot;cuda&quot;) refiner.register_to_config(requires_aesthetics_score=True) n_steps = 40 high_noise_frac = 0.8 def text_to_image(prompt): base_image = base( prompt=prompt, num_inference_steps=n_steps, denoising_end=high_noise_frac, output_type=&quot;latent&quot;, height=640, width=1536 ).images refiner_image = refiner( prompt=prompt, num_inference_steps=n_steps, denoising_end=high_noise_frac, image=base_image ).images[0] return refiner_image img = text_to_image(&quot;cartoon of colorful monsters frolocking in a dark spooky graveyard with tombstones and graves behind a castle&quot;) </code></pre>
<python><huggingface-transformers><stable-diffusion><diffusers>
2023-08-23 22:11:13
0
330
Mark Dabler
76,964,947
10,627,413
Quoting Parameters > Triple quotes to single quotes
<p>TLDR; My question is how do I read in a file that has triple quotes wrapped around each item? Drop 2 quotes but keep 1 quotes around the items and then download it as a new csv.</p> <p>Sample Dataset:</p> <p>I have a dataset in a csv format. This is what it looks like when I open it with Text Editor.</p> <div class="s-table-container"> <table class="s-table"> <thead> <tr> <th style="text-align: center;">pizza_topping_id</th> <th style="text-align: center;">customer_email</th> </tr> </thead> <tbody> <tr> <td style="text-align: center;">&quot;&quot;&quot;1&quot;&quot;&quot;</td> <td style="text-align: center;">&quot;&quot;&quot;junglehouse1@yahoo.com&quot;&quot;&quot;</td> </tr> </tbody> </table> </div> <p>Ideal output (one quote instead of 3):</p> <div class="s-table-container"> <table class="s-table"> <thead> <tr> <th style="text-align: center;">pizza_topping_id</th> <th style="text-align: center;">customer_email</th> </tr> </thead> <tbody> <tr> <td style="text-align: center;">&quot;1&quot;</td> <td style="text-align: center;">&quot;junglehouse1@yahoo.com&quot;</td> </tr> </tbody> </table> </div><h2>What I've tried using Jupyter Notebook:</h2> <h3>Attempt 1</h3> <p>I read in the file with (keeping in mind that it's wrapped in triple quotes)</p> <pre><code>df = pd.read_csv('~/Downloads/TEST.csv') </code></pre> <p>This generates a dataframe with single quotes.</p> <p>I then generate a new csv with the new dataframe wrapped in single quotes using the below code:</p> <pre><code>df3.to_csv('~/Downloads/TEST_V2.csv', header=True, index=False, line_terminator='\r\n') </code></pre> <p>However, when I open it up in TextEditor it still is triple quotes. 
After doing all that the file still looks like this:</p> <div class="s-table-container"> <table class="s-table"> <thead> <tr> <th style="text-align: center;">pizza_topping_id</th> <th style="text-align: center;">customer_email</th> </tr> </thead> <tbody> <tr> <td style="text-align: center;">&quot;&quot;&quot;1&quot;&quot;&quot;</td> <td style="text-align: center;">&quot;&quot;&quot;junglehouse1@yahoo.com&quot;&quot;&quot;</td> </tr> </tbody> </table> </div> <p>So I think to myself okay, maybe it's a parameter I need to throw into the to_csv to keep the single quotes so I try the below attempt</p> <h3>Attempt 2</h3> <pre><code>df3.to_csv('~/Downloads/TEST_V2.csv', header=True, index=False, encoding ='UTF8', line_terminator='\r\n') </code></pre> <p>My result when I reopen this new file (TEST_V2) is that it's still in triple quotes.</p>
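The tripling happens because the field values themselves contain a quote character: the CSV writer then quotes the whole field again and doubles the embedded quotes, so `"1"` becomes `"""1"""`. Once the values are plain (no embedded quote characters), `quoting=QUOTE_ALL` writes exactly one pair of quotes around each field. A stdlib-only sketch with hypothetical rows; pandas' `to_csv` accepts the same `quoting=csv.QUOTE_ALL` argument:

```python
import csv
import io

# Hypothetical rows mirroring the question, already stripped of any
# literal quote characters in the values themselves.
rows = [["pizza_topping_id", "customer_email"],
        ["1", "junglehouse1@yahoo.com"]]

buf = io.StringIO()
# QUOTE_ALL writes exactly one pair of quotes around every field; the
# triple quotes in the question come from fields that *contain* a quote
# character being quoted a second time ("1" -> """1""").
writer = csv.writer(buf, quoting=csv.QUOTE_ALL)
writer.writerows(rows)

print(buf.getvalue())  # each field wrapped in single pairs of quotes
```

With pandas, the equivalent is to strip the quote characters from the values first (e.g. `df = df.map(lambda v: str(v).strip('"'))` on recent pandas) and then `df.to_csv(path, quoting=csv.QUOTE_ALL, index=False)`.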
<python><pandas>
2023-08-23 21:18:22
4
366
Maggie Liu
76,964,929
11,278,478
dataframe add hyphen in a column based on a condition
<p>I have a column which should have data in the format below:</p> <p>xxxx-xxx</p> <p>But I see that for some records the hyphen is missing, so I need to update the data for those records. The data follows the formats below:</p> <p>xxxx-xxx (for a few records) xxxxxxx (for the remaining records)</p> <p>For the records in xxxxxxx format, I need to update the data and insert a hyphen (-) at the sixth position to convert it to the expected format (xxxx-xxx).</p> <p>My logic: I am trying to filter records whose length is not 8 and which do not contain a hyphen (-), and then insert a hyphen at the sixth place.</p> <p>Code I tried (but it gives errors):</p> <pre><code>if df.col.str.len()!=9 and '-' not in df['col']: df['col']=df['col'].str[:5] +'-' + df['col'].str[5:] </code></pre> <p>Is there an easier way to accomplish this?</p>
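A vectorised sketch of the stated logic, using hypothetical sample values: `if` cannot consume a whole boolean Series (which is what raises the error above), so build the mask and assign through `.loc` instead. Note the slice position 4 here matches the stated xxxx-xxx target, while the question's own snippet used position 5, so adjust to whichever rule is actually correct:

```python
import pandas as pd

# Hypothetical values: one already formatted, one missing the hyphen.
df = pd.DataFrame({"col": ["abcd-efg", "abcdefg"]})

# Boolean mask of rows missing the hyphen, then rewrite only those rows.
mask = ~df["col"].str.contains("-")
df.loc[mask, "col"] = (
    df.loc[mask, "col"].str[:4] + "-" + df.loc[mask, "col"].str[4:]
)

print(df["col"].tolist())  # ['abcd-efg', 'abcd-efg']
```

If the length condition also matters, combine it into the mask, e.g. `mask = (df["col"].str.len() != 8) & ~df["col"].str.contains("-")`.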
<python><python-3.x><pandas><dataframe>
2023-08-23 21:15:06
2
434
PythonDeveloper
76,964,793
1,258,407
How to dynamically add an arbitrary number of values to a specific level in a pandas multiindex using python?
<p>I have this <code>dataframe</code> with a <code>MultiIndex</code> object as its <code>index</code>.</p> <pre><code> apple level_1 level_2 level_3 level_4 level_5 bga B G 1 0 5.0 </code></pre> <p>How can I dynamically add an arbitrary number of values to level 5?</p> <p>I have this:</p> <pre><code>randData = [&quot;r1&quot;, &quot;r13&quot;, &quot;r14&quot;, &quot;r6&quot;] for x in randData: df.loc[( &quot;bga&quot;, &quot;B&quot;, &quot;G&quot;, &quot;1&quot;, x ), &quot;apple&quot;] = np.random.randn(1)[0] </code></pre> <p>This works, but for lists larger than 4 items it takes forever. I was thinking of a simple slice, something like this:</p> <pre><code>df.loc[( &quot;bga&quot;, &quot;B&quot;, &quot;G&quot;, &quot;1&quot; ), &quot;apple&quot;] = (randData, np.random.randn(4)) </code></pre> <p>But it doesn't like that (which makes sense), and I can't think of any other way to slice it that would allow me to add values to the level.</p> <p>I have thought of simply recreating the <code>MultiIndex</code> and setting the new index, but that's a bit more work than I was looking for compared with the slicing option.</p> <p>Does anyone else have any thoughts? TIA!</p>
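Rather than one `.loc` assignment per item (each of which can rebuild the index), the new rows can be created in a single shot with `MultiIndex.from_tuples` and one `concat`. The frame below is a hypothetical stand-in shaped like the question's:

```python
import numpy as np
import pandas as pd

# Hypothetical frame shaped like the question's.
idx = pd.MultiIndex.from_tuples([("bga", "B", "G", "1", "0")])
df = pd.DataFrame({"apple": [5.0]}, index=idx)

rand_data = ["r1", "r13", "r14", "r6"]

# Build the full MultiIndex for every new row at once, attach the random
# values as a DataFrame, then concatenate a single time.
new_idx = pd.MultiIndex.from_tuples(
    [("bga", "B", "G", "1", x) for x in rand_data]
)
new_rows = pd.DataFrame({"apple": np.random.randn(len(rand_data))}, index=new_idx)
df = pd.concat([df, new_rows])

print(len(df))  # 5
```

One `concat` scales with the number of added rows, whereas repeated `.loc` enlargement pays the index-rebuild cost on every iteration.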
<python><pandas><multi-index>
2023-08-23 20:46:14
1
1,874
Seth
76,964,787
6,645,564
How can I get the start positions of my regex matches from a string without also including the matches' length in the string itself?
<p>This is kind of a complicated question to explain and easier to just show as an example.</p> <p>So let's say I have this string here:</p> <pre><code>exampleString = &quot;abcauehafj['hello']jfa['hello']jasfjgafadsf&quot; </code></pre> <p>And so we have the regex pattern of:</p> <pre><code>regex = r&quot;\['hello'\]&quot; </code></pre> <p>Now, what I want to do is get the start positions of regex matches within the exampleString. This would be [10, 22], which I currently calculate using the following code:</p> <pre><code>import re matches = re.finditer(regex, exampleString) start_pos = [] for match in matches: start_pos.append(match.start()) </code></pre> <p>Now, the issue is, I do not want to include the length of the <code>['hello']</code> to the string itself. So the start_pos that I actually want would be [10, 13]. What would be the best way to do this without affecting the strings?</p>
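The wanted `[10, 13]` is each match's raw start minus the combined length of all earlier matches (the positions as they would be if the earlier matches were removed from the string). One pass over `finditer` gives that directly, without modifying the string:

```python
import re

example_string = "abcauehafj['hello']jfa['hello']jasfjgafadsf"
regex = r"\['hello'\]"

adjusted_starts = []
removed = 0  # total length of matches seen so far
for match in re.finditer(regex, example_string):
    # Position as it would be if the earlier matches were not in the string.
    adjusted_starts.append(match.start() - removed)
    removed += match.end() - match.start()

print(adjusted_starts)  # [10, 13]
```

The first match starts at 10 with nothing removed yet; the second starts at 22, and subtracting the 9 characters of the first `['hello']` yields 13.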
<python><regex>
2023-08-23 20:44:45
3
924
Bob McBobson
76,964,678
22,212,435
Is it possible to create a window from a frame?
<p>I have a frame and I want to make a Toplevel window from it. I want to make a system similar to how the web browser UI works. For example, in Google Chrome you can take a tab and make a new window from it. You can also add other tabs to that new window. That is what I have tried to &quot;demonstrate&quot; this behavior:</p> <pre class="lang-py prettyprint-override"><code>import tkinter as tk from tkinter import ttk root = tk.Tk() moving_frame = tk.Frame() moving_frame.pack() notebook = ttk.Notebook(moving_frame, width=400, height=700) notebook.pack() movingFrame = tk.Frame(notebook, bg='green', width=400, height=700) movingFrame2 = tk.Frame(notebook, bg='green', width=400, height=700) lab = tk.Label(movingFrame, text='some text') ent = tk.Entry(movingFrame) ent2 = tk.Entry(movingFrame2, width=10, bg='grey30') lab.pack() ent.pack() ent2.pack() notebook.add(movingFrame, sticky='nesw', text='tab1') notebook.add(movingFrame2, sticky='nesw', text='tab2') def window_create(e): if notebook.identify(e.x, e.y): print('tab_pressed') frame_to_window = root.nametowidget(notebook.select()) root.wm_manage(frame_to_window) notebook.bind('&lt;ButtonRelease-1&gt;', window_create) root.mainloop() </code></pre> <p>Note: This code works by pressing the tab, not by dragging it.</p> <p>I wonder if I can somehow adjust the window that is created by using <code>wm_manage</code>. This function return <code>None</code> and works not as I thought, but close enough to what I need.</p> <p>If that is not possible to make by using <code>wm_manage</code>, how should I do that?</p> <p>I thought I can create a custom class, for example from a Frame, but there is a problem: new window should be the <strong>same</strong> to the frame that it is created from. 
If something has been changed by a user, e.g., User added a text to the <code>Entry</code>; Marked some <code>checkbuttons</code>; If he has used a <code>Treeview</code> hierarchy, like this:<br /> <img src="https://i.sstatic.net/yzIDx.png" alt="" /><br /> , the new window should remember which &quot;folders&quot; are opened (that are marked as minus on the picture), which items are currently selected and so on. And that was only about the <code>Treeview</code>, similar things should be considered for every single widget. Quite a lot of work, so maybe there is a better way to do that.</p>
<python><tkinter>
2023-08-23 20:24:15
1
610
Danya K
76,964,596
3,860,847
How to determine or filter the datatype of an rdflib literal?
<p>I haven't used rdflib in a while and am writing some trivial code based on <a href="https://rdflib.readthedocs.io/en/stable/intro_to_graphs.html#basic-triple-matching" rel="nofollow noreferrer">https://rdflib.readthedocs.io/en/stable/intro_to_graphs.html#basic-triple-matching</a>,</p> <p>like</p> <pre class="lang-py prettyprint-override"><code>from rdflib import Graph graph = Graph() ttl_input = &quot;dir/file.ttl&quot; graph.parse(ttl_input, format='ttl') for s, p, o in graph.triples((None, None, None)): print(f&quot;{s} {p} {o} {o}&quot;) </code></pre> <p>I would like to limit the returned <code>graph.triples</code> to ones where the object <code>o</code> has datatype xsd:anyURI.</p>
<python><rdf><rdflib>
2023-08-23 20:08:42
1
3,136
Mark Miller
76,964,533
5,212,614
Trying to Create Plotly Dropdown Control Based on Unique Items in a Dataframe Column
<p>I have this simple dataframe.</p> <pre><code>import requests import pandas as pd from pandas import DataFrame import matplotlib.pyplot as plt import seaborn as sns # Initialise data of lists data = [{'Month': '2020-01-01', 'Expense':1000, 'ID':'123'}, {'Month': '2020-02-01', 'Expense':3000, 'ID':'123'}, {'Month': '2020-03-01', 'Expense':2000, 'ID':'123'}, {'Month': '2020-01-01', 'Expense':3000, 'ID':'456'}, {'Month': '2020-02-01', 'Expense':5000, 'ID':'456'}, {'Month': '2020-03-01', 'Expense':10000, 'ID':'456'}, {'Month': '2020-03-01', 'Expense':5000, 'ID':'789'}, {'Month': '2020-04-01', 'Expense':2000, 'ID':'789'}, {'Month': '2020-05-01', 'Expense':3000, 'ID':'789'}] df = pd.DataFrame(data) df </code></pre> <p>Based on unique IDs in the ID column, I am trying to create a dropdown control.</p> <pre><code># uniques = df['ID'].unique() # for i in uniques: # print(i) </code></pre> <p>I'm testing this code, and it looks pretty close, but it's not actually generating a chart for me.</p> <pre><code>import plotly.graph_objects as go uniques = df['ID'].unique() # plotly fig = go.Figure() # set up ONE trace fig.add_trace(go.Scatter(x=df.index, y=df[df.columns[0]], visible=True) ) updatemenu = [] buttons = [] # button with one option for each dataframe for i in uniques: #print(i) #df_single = df[df['ID']==i] #data=df_single #print(data) buttons.append(dict(method='restyle', label=i, visible=True, args=[{'y':[df['ID']==i], 'x':[df.index], 'type':'scatter'}] ) ) # some adjustments to the updatemenus updatemenu = [] your_menu = dict() updatemenu.append(your_menu) updatemenu[0]['buttons'] = buttons updatemenu[0]['direction'] = 'down' updatemenu[0]['showactive'] = True # add dropdown menus to the figure fig.update_layout(showlegend=False, updatemenus=updatemenu) fig.show() </code></pre> <p>I am leveraging two resources for this.</p> <p><a href="https://stackoverflow.com/questions/59406167/plotly-how-to-filter-a-pandas-dataframe-using-a-dropdown-menu">Plotly: How to filter a pandas dataframe using a dropdown 
menu?</a></p> <p><a href="https://plotly.com/python/dropdowns/" rel="nofollow noreferrer">https://plotly.com/python/dropdowns/</a></p> <p>Any idea what I'm doing wrong?</p>
<python><python-3.x><plotly>
2023-08-23 19:56:30
1
20,492
ASH
76,964,474
1,506,763
Numba typing error when multiplying a single vector with an array of vectors using broadcasting
<p>I'm having a problem applying <code>numba</code> to a set of functions I'm trying to optimise for performance. All the functions work fine without <code>numba</code> but I get a compilation error when I try to use <code>numba</code>.</p> <p>Here's the compilation error I'm struggling with:</p> <pre class="lang-py prettyprint-override"><code>Exception occurred: Type: TypingError Message: Failed in nopython mode pipeline (step: nopython frontend) Failed in nopython mode pipeline (step: nopython frontend) Cannot unify array(float64, 2d, C) and array(float64, 1d, C) for 'q1.2', defined at .\rotations.py (82) File &quot;rotations.py&quot;, line 82: def quaternion_mult(q1, qa): &lt;source elided&gt; quat_result[:, 0] = (q1[:, 0] * q2[:, 0]) - (q1[:, 1] * q2[:, 1]) - (q1[:, 2] * q2[:, 2]) - (q1[:, 3] * q2[:, 3]) ^ During: typing of assignment at .\rotations.py (82) File &quot;rotations.py&quot;, line 82: def quaternion_mult(q1, qa): &lt;source elided&gt; quat_result[:, 0] = (q1[:, 0] * q2[:, 0]) - (q1[:, 1] * q2[:, 1]) - (q1[:, 2] * q2[:, 2]) - (q1[:, 3] * q2[:, 3]) ^ During: resolving callee type: type(CPUDispatcher(&lt;function quaternion_mult at 0x00000290EE6FE670&gt;)) During: typing of call at .\rotations.py (102) During: resolving callee type: type(CPUDispatcher(&lt;function quaternion_mult at 0x00000290EE6FE670&gt;)) During: typing of call at .\rotations.py (102) File &quot;rotations.py&quot;, line 102: def quaternion_vect_mult(q1, vect_array): &lt;source elided&gt; temp = quaternion_mult(q1, q2) ^ </code></pre> <p>and here's the full code of the corresponding functions:</p> <pre class="lang-py prettyprint-override"><code> @njit(cache=True) def quaternion_conjugate_vect(q): &quot;&quot;&quot; return the conjugate of a quaternion or an array of quaternions &quot;&quot;&quot; return q * np.array([1, -1, -1, -1]) @njit(cache=True) def quaternion_mult(q1, qa): &quot;&quot;&quot; multiply an array of quaternions (Nx4) by a single quaternion. 
qa is always a (Nx4) array of quaternions np.ndarray q1 is always a single (1x4) quaternion np.ndarray &quot;&quot;&quot; N = max(len(qa), len(q1)) quat_result = np.zeros((N, 4), dtype=np.float64) if qa.ndim == 1: q2 = qa.copy().reshape((1, -1)) # q2 = np.reshape(q1, (1,-1)) else: q2 = qa if q1.ndim == 1: # q1 = q1.copy().reshape((1, -1)) q1 = np.reshape(q1, (1, -1)) quat_result[:, 0] = (q1[:, 0] * q2[:, 0]) - (q1[:, 1] * q2[:, 1]) - (q1[:, 2] * q2[:, 2]) - (q1[:, 3] * q2[:, 3]) quat_result[:, 1] = (q1[:, 0] * q2[:, 1]) + (q1[:, 1] * q2[:, 0]) + (q1[:, 2] * q2[:, 3]) - (q1[:, 3] * q2[:, 2]) quat_result[:, 2] = (q1[:, 0] * q2[:, 2]) + (q1[:, 2] * q2[:, 0]) + (q1[:, 3] * q2[:, 1]) - (q1[:, 1] * q2[:, 3]) quat_result[:, 3] = (q1[:, 0] * q2[:, 3]) + (q1[:, 3] * q2[:, 0]) + (q1[:, 1] * q2[:, 2]) - (q1[:, 2] * q2[:, 1]) return quat_result @njit(cache=True) def quaternion_vect_mult(q1, vect_array): &quot;&quot;&quot; Multiplies an array of x,y,z coordinates by a single quaternion q1. &quot;&quot;&quot; # q1 is the quaternion which the coordinates will be rotated by. # Add initial column of zeros to array # N = len(vect_array) q2 = np.zeros((len(vect_array), 4), dtype=np.float64) q2[:, 1::] = vect_array temp = quaternion_mult(q1, q2) result = quaternion_mult(temp, quaternion_conjugate_vect(q1)) return result[:, 1::] </code></pre> <p>I don't understand the unification error as I'm broadcasting in the multiplication so the shape should be irrelevant? All arrays are of <code>np.float64</code> so I've specified that as the type. The only difference is the shape but normal <code>numpy</code> broadcasting should work here as it does without <code>numba</code>. 
(I've added a load of extra brackets to make sure I was multiplying things correctly but they are not needed at all.)</p> <p>I assume the problem has something to do with the creation of the <code>np.zeros</code> storage array, I've added this as previously I computed each column separately and then combined with <code>np.stack</code>.</p> <p>My only other thought is that it relates to the <code>if ... else...</code> where I check if the single quaternion is of <code>shape</code> <code>(1,4)</code> instead of <code>(,4)</code>.</p> <p>I'm a bit stumped by this; other similar problems usually seem to involve some type difference, like <code>int</code> and <code>float</code> or <code>float32</code> and <code>float64</code>.</p> <p>Any help is appreciated.</p> <p>For clarity, here's an example that works without <code>numba</code> but fails with it enabled:</p> <pre class="lang-py prettyprint-override"><code>from numba import njit import numpy as np quat_single = np.random.random(4) coord_array = np.random.random([9,3]) # Note: quat_single = np.random.random([1,4]) will work with `numba` quaternion_vect_mult(quat_single, coord_array) Out[18]: array([[ 0.12035005, 1.51894951, 0.26731225], [ 1.56889141, 0.56465019, 0.18818138], [ 0.58966646, 1.09653585, -0.19548354], [ 1.15044012, 1.56034916, 0.73943456], [ 0.83003034, 1.80861828, 0.02678796], [ 1.15572912, 0.54263501, 0.16206597], [ 1.34243762, 1.0802315 , -0.20735991], [ 1.5876305 , 0.70017144, 0.80066164], [ 1.20734218, 1.2747372 , -0.47177605]]) </code></pre>
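A sketch of one likely fix, inferred from the error message: Numba cannot unify a variable that is 1-D on one control-flow path and 2-D on another, and `q1` is re-bound to a reshaped 2-D array inside the `if`. Giving the reshaped array its own name (as is already done for `qa`/`q2`) keeps every variable at a single array type. The `@njit` decorator is omitted here so the sketch runs without numba installed; the same body should compile under `@njit(cache=True)`:

```python
import numpy as np

def quaternion_mult(q1, qa):
    # normalize both inputs to 2-D under NEW names, so each variable
    # keeps exactly one array type along every control-flow path
    q2 = qa.reshape((1, -1)) if qa.ndim == 1 else qa
    q1_2d = q1.reshape((1, -1)) if q1.ndim == 1 else q1
    n = max(len(q2), len(q1_2d))
    out = np.zeros((n, 4), dtype=np.float64)
    out[:, 0] = q1_2d[:, 0]*q2[:, 0] - q1_2d[:, 1]*q2[:, 1] - q1_2d[:, 2]*q2[:, 2] - q1_2d[:, 3]*q2[:, 3]
    out[:, 1] = q1_2d[:, 0]*q2[:, 1] + q1_2d[:, 1]*q2[:, 0] + q1_2d[:, 2]*q2[:, 3] - q1_2d[:, 3]*q2[:, 2]
    out[:, 2] = q1_2d[:, 0]*q2[:, 2] + q1_2d[:, 2]*q2[:, 0] + q1_2d[:, 3]*q2[:, 1] - q1_2d[:, 1]*q2[:, 3]
    out[:, 3] = q1_2d[:, 0]*q2[:, 3] + q1_2d[:, 3]*q2[:, 0] + q1_2d[:, 1]*q2[:, 2] - q1_2d[:, 2]*q2[:, 1]
    return out

# sanity check: the identity quaternion leaves any quaternion batch unchanged
identity = np.array([1.0, 0.0, 0.0, 0.0])
batch = np.array([[0.0, 1.0, 0.0, 0.0], [0.0, 0.0, 1.0, 0.0]])
product = quaternion_mult(identity, batch)
```

The broadcasting itself was never the problem; the unification error is purely about `q1` holding two different array types across the branch.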
<python><arrays><numpy><numba><array-broadcasting>
2023-08-23 19:43:46
1
676
jpmorr
76,964,394
2,461,398
Pydantic v2 "Could not resolve reference: ..." error using custom response model
<p>The docs (at <code>&lt;host&gt;:&lt;port&gt;/docs</code>) for this example:</p> <pre class="lang-py prettyprint-override"><code>import pydantic from fastapi import FastAPI class Hour(int): def __new__(cls, *args, **kwargs): h = super().__new__(cls, *args, **kwargs) assert 0 &lt;= h &lt; 24, f&quot;Invalid hour: {h}&quot; return h class PydanticConfig: arbitrary_types_allowed = True @pydantic.dataclasses.dataclass(frozen=True, config=PydanticConfig) class Time: hour: Hour app = FastAPI() @app.get(&quot;/&quot;, response_model=Time) def read_root(): return Time(13) </code></pre> <p>Generate fine in <code>pydantic==1.10.9</code>. In <code>pydantic==2.3.0</code>, they produce:</p> <pre><code>Errors Resolver error at responses.200.content.application/json.schema.$ref Could not resolve reference: Evaluation failed on token: &quot;components&quot; Resolver error at responses.200.content.application/json.schema.$ref Could not resolve reference: Evaluation failed on token: &quot;components&quot; </code></pre> <p>Changing to <code>hour: int</code> works again, so it has something to do with the <code>NewType</code>.</p> <p>How do I get the docs / schema to generate correctly in v2?</p>
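A hedged sketch of one workaround (an alternative to the `int` subclass, not the original design): Pydantic v2 cannot emit a JSON schema for an arbitrary class like `Hour`, which leaves an unresolvable `$ref` in the OpenAPI document. A constrained integer via `Annotated` keeps the 0–23 validation while producing a plain integer schema that `/docs` can resolve, and `arbitrary_types_allowed` is no longer needed:

```python
from typing import Annotated

import pydantic
from pydantic import Field

# constrained int in place of the Hour subclass (assumption: the only
# invariant needed is 0 <= hour < 24)
Hour = Annotated[int, Field(ge=0, lt=24)]

@pydantic.dataclasses.dataclass(frozen=True)
class Time:
    hour: Hour

t = Time(13)                                       # validates the range
schema = pydantic.TypeAdapter(Time).json_schema()  # plain, resolvable schema
```

With this, `response_model=Time` should render in the docs as an object with an integer `hour` field instead of a dangling component reference.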
<python><fastapi><pydantic>
2023-08-23 19:29:55
0
1,853
capitalistcuttle
76,964,359
272,008
Python Mailgun Question - 500 Internal Server Error
<p>I am just learning Python and VS Code and am in training.</p> <p>Per Mailgun documentation, I have created a sendmail script (app.py) using Python:</p> <pre><code>import requests def send_simple_message(): return requests.post( &quot;https://api.mailgun.net/v3/YOUR_DOMAIN_NAME/messages&quot;, auth=(&quot;api&quot;, &quot;YOUR_API_KEY&quot;), data={&quot;from&quot;: &quot;Excited User &lt;mailgun@YOUR_DOMAIN_NAME&gt;&quot;, &quot;to&quot;: [&quot;bar@example.com&quot;, &quot;YOU@YOUR_DOMAIN_NAME&quot;], &quot;subject&quot;: &quot;Hello&quot;, &quot;text&quot;: &quot;Testing some Mailgun awesomness!&quot;}) </code></pre> <p>I run the Python script in Visual Studio Code from the terminal like so:</p> <pre><code> python app.py </code></pre> <p>When I test using Postman, the email is sending successfully, but in Postman, I am getting a 500 error and I'm not exactly sure why:</p> <pre><code>&lt;!doctype html&gt; &lt;html lang=en&gt; &lt;title&gt;500 Internal Server Error&lt;/title&gt; &lt;h1&gt;Internal Server Error&lt;/h1&gt; &lt;p&gt;The server encountered an internal error and was unable to complete your request. Either the server is overloaded or there is an error in the application.&lt;/p&gt; </code></pre> <p>Still trying to get the gist of using VS Code, it seems to be a bit of a learning curve though.</p>
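The HTML error page shown is the generic Flask/Werkzeug 500 response, which suggests the route wrapping `send_simple_message()` raised or returned something Flask cannot serialize; returning the `requests.Response` object directly from a view is one common cause. This is a guess, since the route code isn't shown; a hedged sketch of the unwrapping step:

```python
# Assumption: a Flask view calls send_simple_message() and returns its result.
# A view must return a string, dict, Response, or (body, status) tuple --
# not a requests.Response -- so unwrap the Mailgun reply first.
def to_view_result(resp):
    """Turn a requests-style response into a (body, status) tuple Flask accepts."""
    return resp.text, resp.status_code

# duck-typed stand-in for requests.Response, just to show the shape
class FakeResponse:
    text = '{"message": "Queued."}'
    status_code = 200

body, status = to_view_result(FakeResponse())
```

In the real app the view would then end with something like `return to_view_result(send_simple_message())`, and the Flask log (the terminal running `python app.py`) would show the actual traceback behind any remaining 500.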
<python><visual-studio-code><mailgun>
2023-08-23 19:25:03
1
1,054
Tikhon
76,964,189
13,520,498
`concurrent.futures.ThreadPoolExecutor` is behaving strangely
<p>So, I am working on a project where I'm provided with some images and some Excel sheets containing the information in the context of those images. The images and Excel sheets are day-to-day data readings. They are organized in this folder structure:</p> <p><a href="https://i.sstatic.net/5AccJ.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/5AccJ.png" alt="Dataset Organization" /></a></p> <p>Now the task at hand:</p> <ul> <li>I need to go through those day-to-day image recordings, do semantic segmentation on the images, extract object height-width-area and then put them in their respective Excel sheets, like the picture below. The created output should be organized like this: <a href="https://i.sstatic.net/4DzFC.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/4DzFC.png" alt="segmentation-results" /></a></li> </ul> <p>I have solved this part. But my main problem is that it takes around 26-32 minutes for image processing, segmentation and feature extraction for each day's readings, so it takes around 1.5-2 h in total. The data is growing, and we are taking three readings each week. So I wrote a multi-threading script that can start processing all the days simultaneously. This cuts the total time down to 26-32 minutes for all 3 days of data.</p> <p>The script works most of the time, but sometimes when I run it, the <code>futures.append(executor.submit(processAndsegmentImages, day=day))</code> call does not seem to start threads for all 3 days. I checked, and this is happening.
Sometimes it is only starting threads for 2 days or sometimes just one day.</p> <p>Here's my code:</p> <pre><code>def processAndsegmentImages(day): # doing image processing, segmentation, and other analysis return '{} images processing and segmentation completed'.format(day) if __name__ == &quot;__main__&quot;: start = time.time() rf = Roboflow(api_key=&quot;my_api_key&quot;) project = rf.workspace().project(&quot;my_project&quot;) model = project.version(1).model print('model loaded\n') import concurrent.futures days = ['day1', 'day2', 'day3'] with concurrent.futures.ThreadPoolExecutor(max_workers=min(32, os.cpu_count() + 4)) as executor: futures = [] for day in days: print('adding {} to executor'.format(day)) futures.append(executor.submit(processAndsegmentImages, day=day)) for future in concurrent.futures.as_completed(futures): print(future.result()) end = time.time() tlapsed = end-start print('total time taken: {:.2f} minutes'.format(tlapsed/60)) </code></pre> <p>The ideal output should be like this:</p> <pre><code>loading Roboflow workspace... loading Roboflow project...
model loaded adding day1 to executor adding day2 to executor adding day3 to executor created root directory: /home/arrafi/potato/segmentation_results/day2_segmentation_results created root directory: /home/arrafi/potato/segmentation_results/day1_segmentation_results created root directory: /home/arrafi/potato/segmentation_results/day3_segmentation_results starting day1 images processing from /home/arrafi/potato/segmentation_results/day1_segmentation_results/day1_raw_images/ starting day3 images processing from /home/arrafi/potato/segmentation_results/day3_segmentation_results/day3_raw_images/ starting day2 images processing from /home/arrafi/potato/segmentation_results/day2_segmentation_results/day2_raw_images/ 192 day1_images proccessed and saved at: /home/arrafi/potato/segmentation_results/day1_segmentation_results/day1_images created pred_images and json directory for day1 images starting potato segmentation ... of 192 day1_images 192 day2_images proccessed and saved at: /home/arrafi/potato/segmentation_results/day2_segmentation_results/day2_images created pred_images and json directory for day2 images starting potato segmentation ... of 192 day2_images 192 day3_images proccessed and saved at: /home/arrafi/potato/segmentation_results/day3_segmentation_results/day3_images created pred_images and json directory for day3 images starting potato segmentation ... of 192 day3_images ................and some more outputs...................... </code></pre> <p>Most of the time the script runs successfully but sometimes I notice that the script is only starting the process for one/two days. Like this below, it only started for day1 and day3:</p> <pre><code>loading Roboflow workspace... loading Roboflow project... 
model loaded adding day1 to executor adding day2 to executor adding day3 to executor created root directory: /home/arrafi/potato/segmentation_results/day1_segmentation_results created root directory: /home/arrafi/potato/segmentation_results/day3_segmentation_results starting day1 images processing from /home/arrafi/potato/segmentation_results/day1_segmentation_results/day1_raw_images/ starting day3 images processing from /home/arrafi/potato/segmentation_results/day3_segmentation_results/day3_raw_images/ 192 day1_images proccessed and saved at: /home/arrafi/potato/segmentation_results/day1_segmentation_results/day1_images created pred_images and json directory for day1 images starting potato segmentation ... of 192 day1_images 192 day3_images proccessed and saved at: /home/arrafi/potato/segmentation_results/day3_segmentation_results/day3_images created pred_images and json directory for day3 images starting potato segmentation ... of 192 day3_images ................and some more outputs...................... </code></pre> <p><a href="https://i.sstatic.net/3uXod.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/3uXod.png" alt="buggy output" /></a></p> <p>Can anyone point out why is this happening? Am I doing something wrong in calling the <code>ThreadPoolExecutor</code>? I have searched online for solutions but can't find out why is this happening because so far the code is not throwing any error. And the behavior is so random.</p>
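One thing worth ruling out (an assumption, since the body of `processAndsegmentImages` is elided): `executor.submit` does create a future for every day, but an exception raised inside a worker is silently stored in its `Future` and only re-raised when `.result()` is called, so a thread that crashes before its first `print` looks exactly like one that "never started". A minimal sketch that logs per-day failures instead of letting them vanish:

```python
import concurrent.futures

def work(day):
    # simulated worker: day2 crashes before producing any output
    if day == "day2":
        raise RuntimeError("day2 worker crashed")
    return f"{day} done"

results, errors = [], []
with concurrent.futures.ThreadPoolExecutor(max_workers=3) as executor:
    # map each future back to its day so failures can be attributed
    futures = {executor.submit(work, day): day for day in ["day1", "day2", "day3"]}
    for future in concurrent.futures.as_completed(futures):
        day = futures[future]
        try:
            results.append(future.result())
        except Exception as exc:
            errors.append(f"{day} failed: {exc!r}")  # the crash is now visible
```

Wrapping `future.result()` this way in the real script would show whether the "missing" day actually raised (for example from the shared Roboflow model object) rather than never being scheduled.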
<python><multithreading><threadpool><python-multithreading><threadpoolexecutor>
2023-08-23 18:56:06
1
1,991
Musabbir Arrafi
76,964,112
4,685,589
opencv auto georeferencing scanned map
<p>I have the following sample image:</p> <p><a href="https://i.sstatic.net/Ipun1.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Ipun1.png" alt="enter image description here" /></a></p> <p>where I am trying to locate the coords of the four corners of the inner scanned map image like:</p> <p><a href="https://i.sstatic.net/0Glby.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/0Glby.png" alt="enter image description here" /></a></p> <p>i have tried with something like this:</p> <pre><code>import cv2 import numpy as np # Load the image image = cv2.imread('sample.jpg') # Convert to grayscale gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY) # Apply Gaussian blur blurred = cv2.GaussianBlur(gray, (5, 5), 0) # Perform edge detection edges = cv2.Canny(blurred, 50, 150) # Find contours in the edge-detected image contours, _ = cv2.findContours(edges.copy(), cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE) # Iterate through the contours and filter for squares or rectangles for contour in contours: perimeter = cv2.arcLength(contour, True) approx = cv2.approxPolyDP(contour, 0.04 * perimeter, True) if len(approx) == 4: x, y, w, h = cv2.boundingRect(approx) aspect_ratio = float(w) / h # Adjust this threshold as needed if aspect_ratio &gt;= 0.9 and aspect_ratio &lt;= 1.1: cv2.drawContours(image, [approx], 0, (0, 255, 0), 2) # Display the image with detected squares/rectangles cv2.imwrite('detectedy.png', image) </code></pre> <p>but all i get is something like this:</p> <p><a href="https://i.sstatic.net/nxeUZ.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/nxeUZ.png" alt="enter image description here" /></a></p> <p>UPDATE:</p> <p>so i have found this code that should do what i require:</p> <pre><code>import cv2 import numpy as np from subprocess import call file = &quot;samplebad.jpg&quot; img = cv2.imread(file) orig = img.copy() # sharpen the image (weighted subtract gaussian blur from original) ''' 
https://stackoverflow.com/questions/4993082/how-to-sharpen-an-image-in-opencv larger smoothing kernel = more smoothing ''' blur = cv2.GaussianBlur(img, (9,9), 0) sharp = cv2.addWeighted(img, 1.5, blur, -0.5, 0) # convert the image to grayscale # gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY) gray = cv2.cvtColor(sharp, cv2.COLOR_BGR2GRAY) # smooth whilst keeping edges sharp ''' (11) Filter size: Large filters (d &gt; 5) are very slow, so it is recommended to use d=5 for real-time applications, and perhaps d=9 for offline applications that need heavy noise filtering. (17, 17) Sigma values: For simplicity, you can set the 2 sigma values to be the same. If they are small (&lt; 10), the filter will not have much effect, whereas if they are large (&gt; 150), they will have a very strong effect, making the image look &quot;cartoonish&quot;. These values give the best results based upon the sample images ''' gray = cv2.bilateralFilter(gray, 11, 17, 17) # detect edges ''' (100, 200) Any edges with intensity gradient more than maxVal are sure to be edges and those below minVal are sure to be non-edges, so discarded. Those who lie between these two thresholds are classified edges or non-edges based on their connectivity. If they are connected to &quot;sure-edge&quot; pixels, they are considered to be part of edges. 
''' edged = cv2.Canny(gray, 100, 200, apertureSize=3, L2gradient=True) cv2.imwrite('./edges.jpg', edged) # dilate edges to make them more prominent kernel = np.ones((3,3),np.uint8) edged = cv2.dilate(edged, kernel, iterations=1) cv2.imwrite('edges2.jpg', edged) # find contours in the edged image, keep only the largest ones, and initialize our screen contour cnts, hierarchy = cv2.findContours(edged.copy(), cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE) cnts = sorted(cnts, key=cv2.contourArea, reverse=True)[:10] screenCnt = None # loop over our contours for c in cnts: # approximate the contour peri = cv2.arcLength(c, True) approx = cv2.approxPolyDP(c, 0.02 * peri, True) # if our approximated contour has four points, then we can assume that we have found our screen if len(approx) &gt; 0: screenCnt = approx print(screenCnt) cv2.drawContours(img, [screenCnt], -1, (0, 255, 0), 10) cv2.imwrite('contours.jpg', img) break # reshaping contour and initialise output rectangle in top-left, top-right, bottom-right and bottom-left order pts = screenCnt.reshape(4, 2) rect = np.zeros((4, 2), dtype = &quot;float32&quot;) # the top-left point has the smallest sum whereas the bottom-right has the largest sum s = pts.sum(axis = 1) rect[0] = pts[np.argmin(s)] rect[2] = pts[np.argmax(s)] # the top-right will have the minumum difference and the bottom-left will have the maximum difference diff = np.diff(pts, axis = 1) rect[1] = pts[np.argmin(diff)] rect[3] = pts[np.argmax(diff)] # compute the width and height of our new image (tl, tr, br, bl) = rect widthA = np.sqrt(((br[0] - bl[0]) ** 2) + ((br[1] - bl[1]) ** 2)) widthB = np.sqrt(((tr[0] - tl[0]) ** 2) + ((tr[1] - tl[1]) ** 2)) heightA = np.sqrt(((tr[0] - br[0]) ** 2) + ((tr[1] - br[1]) ** 2)) heightB = np.sqrt(((tl[0] - bl[0]) ** 2) + ((tl[1] - bl[1]) ** 2)) # take the maximum of the width and height values to reach our final dimensions maxWidth = max(int(widthA), int(widthB)) maxHeight = max(int(heightA), int(heightB)) # construct our 
destination points which will be used to map the screen to a top-down, &quot;birds eye&quot; view dst = np.array([ [0, 0], [maxWidth - 1, 0], [maxWidth - 1, maxHeight - 1], [0, maxHeight - 1]], dtype = &quot;float32&quot;) # calculate the perspective transform matrix and warp the perspective to grab the screen M = cv2.getPerspectiveTransform(rect, dst) warp = cv2.warpPerspective(orig, M, (maxWidth, maxHeight)) # cv2.imwrite('./cvCropped/frame/' + file, warp) # crop border off (85px is empirical) # cropBuffer = 85 # this is for the old (phone) images cropBuffer = 105 # this is for those taken by Nick height, width = warp.shape[:2] cropped = warp[cropBuffer:height-cropBuffer, cropBuffer:width-cropBuffer] # output the result cv2.imwrite('cropped.jpg', cropped) </code></pre> <p>but because these old scanned maps have a fold in them it fails and only detects one side like:</p> <p><a href="https://i.sstatic.net/QdW7c.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/QdW7c.jpg" alt="enter image description here" /></a></p> <p>is there a way to somehow get opencv to ignore the center region?</p>
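One possible approach (a sketch, with the band width as a tunable guess; shown on a plain NumPy array so it runs without OpenCV installed): blank a vertical band of the Canny edge map around the fold before calling `cv2.findContours`, so no contour can follow the crease while the outer map border stays intact:

```python
import numpy as np

def mask_center_band(edges, band_frac=0.1):
    """Zero out a vertical band (band_frac of the width) centred on the fold."""
    masked = edges.copy()
    h, w = masked.shape[:2]
    half = int(w * band_frac / 2)
    masked[:, w // 2 - half : w // 2 + half] = 0
    return masked

# stand-in for an edge image from cv2.Canny (all-edges for demonstration)
edges = np.full((100, 200), 255, dtype=np.uint8)
masked = mask_center_band(edges)
# the centre band is suppressed; the borders are untouched
```

In the script above this would slot in as `edged = mask_center_band(edged)` right after the dilation step, before `cv2.findContours`. If the fold is not exactly central, the band position would need to be parameterized as well.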
<python><opencv>
2023-08-23 18:43:04
1
355
Dwayne Dibbley
76,964,093
13,067,104
How to interpret numerical values of datetime ticks
<p>I am looking to plot datetime on the x-axis.</p> <pre><code>import matplotlib.pyplot as plt import datetime as dt fig,ax = plt.subplots(1,1) ax.set_xlim(dt.datetime(2013,6,24,11,10), dt.datetime(2013,6,24,11,20)) </code></pre> <p>This would generate a plot with datetime values (e.g., 11:13 rather than some numerical number) on the x-axis.</p> <p>However, when I try to get the xtick values from the plot using</p> <p><code>ax.get_xticks()</code></p> <p>it returns</p> <pre><code>array([15880.46527778, 15880.46597222, 15880.46666667, 15880.46736111, 15880.46805556, 15880.46875 , 15880.46944444, 15880.47013889, 15880.47083333, 15880.47152778, 15880.47222222]) </code></pre> <p>Can someone please explain what these numbers mean, so I can interpret them. Thank you</p>
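Those tick values are Matplotlib date numbers: floating-point days since an epoch, which is 1970-01-01 UTC by default in Matplotlib ≥ 3.3 (configurable via `rcParams["date.epoch"]`; much older versions counted from 0001-01-01). `matplotlib.dates.num2date` converts them back; the arithmetic can also be checked with the standard library alone:

```python
from datetime import datetime, timedelta

tick = 15880.46527778                               # first value from get_xticks()
when = datetime(1970, 1, 1) + timedelta(days=tick)  # decodes to 2013-06-24 11:10
```

So the integer part (15880) is the day count up to 2013-06-24, and the fractional part (0.46527778 of a day) is the 11:10 time of day, matching the `set_xlim` call.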
<python><matplotlib><datetime>
2023-08-23 18:39:15
1
403
patrick7
76,964,029
10,627,413
Quoting parameters in to_csv and pd.read_csv
<p>TLDR; I <strong>want</strong> each item to be wrapped in quotes. When I wrap it in quotes in Jupyter Notebook it has double quotes but when I open it as a file manually (opens in Pages or Excel) it doesn't show double quotes. I'm unsure if I'm going crazy or if these files actually have double quotes</p> <h2>Sample Dataset:</h2> <div class="s-table-container"> <table class="s-table"> <thead> <tr> <th style="text-align: center;">pizza_topping_id</th> <th style="text-align: center;">customer_email</th> </tr> </thead> <tbody> <tr> <td style="text-align: center;">&quot;1&quot;</td> <td style="text-align: center;">&quot;junglehouse1@yahoo.com&quot;</td> </tr> </tbody> </table> </div> <p>I run this to read it into Jupyter Notebook:</p> <pre><code>df = pd.read_csv('~/Downloads/test_file.csv', dtype=str, na_values='') </code></pre> <p>It loses its quotation marks when I read in the CSV without the quoting parameter. So I then run the following:</p> <pre><code> df = pd.read_csv('~/Downloads/test_file.csv', dtype=str, quoting = 3, na_values='') </code></pre> <p>This shows the CSV with double quotes.</p> <p>I want to save this as a CSV now:</p> <pre><code>df.to_csv('~/Downloads/TEST_QUOTE_TEST.csv', header=True, index=False,quoting=3, line_terminator='\r\n') </code></pre> <p>I open it manually and this doesn't show the double quotes I had added to my CSV in Jupyter Notebook. Is there something I'm missing?</p>
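For reference, `quoting=3` is `csv.QUOTE_NONE`, which never writes quote characters, which would explain the quote-free file; `csv.QUOTE_ALL` (value 1) wraps every field on output. A sketch writing to an in-memory buffer so the raw bytes are visible (note also that Excel/Pages strip CSV quoting on display, so even a correctly quoted file can look unquoted there):

```python
import csv
import io

import pandas as pd

df = pd.DataFrame({"pizza_topping_id": ["1"],
                   "customer_email": ["junglehouse1@yahoo.com"]})

buf = io.StringIO()
df.to_csv(buf, index=False, quoting=csv.QUOTE_ALL)  # every field quoted
out = buf.getvalue()  # fields now appear as "1","junglehouse1@yahoo.com"
```

Opening the saved file in a plain-text editor (not a spreadsheet app) is the reliable way to check whether the quotes are really on disk.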
<python><pandas>
2023-08-23 18:27:14
1
366
Maggie Liu
76,963,727
7,663,296
How does socket.gethostbyname () choose from multiple IP addresses?
<p>I'm trying to figure out how gethostbyname() determines an IP address for a private host when multiple addresses are available. The results recently changed on my machine. I want to know why it suddenly changed the IP it returns, and what could cause such a change?</p> <p>See update sections below for new info. Will update Summary if relevant.</p> <h1>Summary</h1> <p>Here's the core problem. /etc/hosts has this:</p> <pre><code>10.1.2.3 zoidberg.local zoidberg # loopback </code></pre> <p>Python does this:</p> <pre><code>&gt;&gt;&gt; socket.gethostbyname ('zoidberg') '127.0.0.1' &gt;&gt;&gt; socket.gethostbyname_ex ('zoidberg') ('zoidberg.local', ['zoidberg'], ['10.1.2.3', '127.0.0.1', '192.168.1.62', '169.254.1.28', '10.1.2.3']) &gt;&gt;&gt; httpd = HTTPServer (('zoidberg', 8888), ReqHandler) &lt;-- binds to 127.0.0.1 not 10.1.2.3 </code></pre> <p>Why and how does this happen? <code>BaseServer.__init__</code> calls socket.bind which is beyond my control. Just like socket.gethostbyname it chooses 127.0.0.1, which is not associated with zoidberg in /etc/hosts. Neither are the other IPs returned by gethostbyname_ex (same machine, different network interfaces). Ping zoidberg in terminal correctly resolves to 10.1.2.3.</p> <p>Server used to bind to 10.1.2.3, but behavior changed in last few days. What's going on?</p> <h1>Description</h1> <p>For several years, the name zoidberg has been a host name for private address 10.1.2.3. My /etc/hosts file has this entry, unchanged all that time (complete /etc/hosts at bottom):</p> <pre><code>10.1.2.3 zoidberg.local zoidberg # loopback </code></pre> <p>I run a few local python servers that listen on certain ports. The address passed to TCPServer is always ('zoidberg', portnum). For years it's run fine, binding to 10.1.2.3 with those parameters. Machine runs python 3.8 on mac 10.15.</p> <p>A few days ago the servers stopped responding. Even though they were clearly running, I couldn't get any response. 
Eventually I found out with netstat -an that TCPServer is now binding to 127.0.0.1 instead of 10.1.2.3 when I call <code>TCPServer ((zoidberg, portnum))</code>. My question is why? What can cause this to happen?</p> <p>I traced through python stdlib code until I reached socket.gethostbyname (), which is how TCPServer (BaseServer) resolves the host. It seems gethostbyname is the culprit. Yes gethostbyname_ex is better, but it's stdlib code inside TCPServer that I can't modify (or shouldn't modify / won't modify). Why would gethostbyname suddenly return 127.0.0.1 for zoidberg, when for years zoidberg has resolved to 10.1.2.3? What could cause such a change?</p> <p>I went through the obvious candidates:</p> <ul> <li>/etc/hosts hasn't changed in years. only line with zoidberg is the one above. Entire hosts file posted below.</li> <li>no OS updates have been made in months. auto-updates is off, only manual.</li> <li><code>ping zoidberg</code> still correctly resolves to 10.1.2.3. whatever change gethostbyname is picking up, it's not picked up in ping.</li> <li>chrome (I know, doesn't control name resolution) was updated semi-recently via auto update. don't see how this would affect the system and gethostbyname.</li> <li>no other changes to name resolution as far as I'm aware. installed a few python modules with <code>port</code> system but that's usually contained in /opt/local. python3.8 is system python, running from /usr/local/bin. I do have /opt/local/ dirs in my PYTHONPATH for picking up packages. But surely that wouldn't change gethostbyname resolution, would it?</li> </ul> <p>My main concern is making sure the machine is free of malware. If it's just ghosts in the machine, ok, I'd like to know why but I can live with it. Should I be concerned? Or is there a more reasonable explanation? 
I generally keep things locked down: no optional services, no browsing dodgy websites, no running software of unknown origin, only downloading modules from primary repos, etc.</p> <p>Any ideas?</p> <h1>python server code</h1> <p>Should be covered above but for completeness...</p> <pre><code>from http.server import BaseHTTPRequestHandler, HTTPServer listen_address = ('zoidberg', 8888) httpd = HTTPServer (listen_address, ReqHandler) </code></pre> <p>Notes:</p> <ul> <li>HTTPServer subclasses socket.TCPServer which subclasses socket.BaseServer.</li> <li><code>BaseServer.__init__</code> calls BaseServer.server_bind.</li> <li>BaseServer.server_bind (stdlib) calls socket.socket.bind on the server address.</li> <li>socket.socket.bind is not defined in stdlib .py files, so I can't trace further; it must be in a C interface file somewhere.</li> <li><code>socket.gethostbyname (listen_address)</code> returns 127.0.0.1. Presumably this resolves 'zoidberg' the same way as socket.socket.bind.</li> <li>socket.gethostbyname is not defined in python files (C interface), so I can't trace.</li> </ul> <p>This may or may not be a python problem. Every other program tested on the system so far (ping, curl, chrome) all resolve 'zoidberg' correctly to '10.1.2.3'. Python is the only oddball at this point.</p> <h1>Contents of /etc/hosts</h1> <pre><code>## # Host Database # # localhost is used to configure the loopback interface # when the system is booting. Do not change this entry. ## 10.1.2.3 zoidberg.local zoidberg # loopback 127.0.0.1 localhost 255.255.255.255 broadcasthost ::1 localhost # Added by Docker Desktop # To allow the same kube context to work on the host and the container: 127.0.0.1 kubernetes.docker.internal </code></pre> <h1>Contents of /etc/resolv.conf</h1> <p>Looks ok to me. Not really used on mac.
Last updated Jan 2020.</p> <pre><code># # macOS Notice # # This file is not consulted for DNS hostname resolution, address # resolution, or the DNS query routing mechanism used by most # processes on this system. # # To view the DNS configuration used by this system, use: # scutil --dns # # SEE ALSO # dns-sd(1), scutil(8) # # This file is automatically generated. # nameserver 8.8.4.4 nameserver 194.168.4.100 nameserver 8.8.8.8 </code></pre> <h1>_______________________ Update 1 _______________________</h1> <p>Thanks to commenters, I realized that python only recognizes comments in /etc/hosts if the line starts with #. So my /etc/hosts entry for zoidberg wasn't treated as two names and a comment, but as four names, including # and loopback. This is what python console shows:</p> <pre><code>&gt;&gt;&gt; socket.gethostbyname_ex ('zoidberg') ('zoidberg.local', ['zoidberg','#','loopback'], ['10.1.2.3', '127.0.0.1', '192.168.1.62', '169.254.1.28', '10.1.2.3']) </code></pre> <p>Here's what's interesting / odd about that:</p> <ul> <li>despite '#' and 'loopback' being treated as aliases for zoidberg.local (at least in python), it doesn't seem to have any ill effect. Trying to ping loopback or # (to force name resolution) just results in <code>ping: cannot resolve loopback: Unknown host</code>. Whatever method ping uses to resolve hostnames appears to be different than what python socket uses.</li> <li>10.1.2.3 is both the first and last IP address in the list. No idea why it appears twice.</li> <li>127.0.0.1 is there, even though I never explicitly defined zoidberg as an alias for it. Perhaps because both 127.0.0.1 and 10.1.2.3 are registered as addresses for <code>lo0</code>, the loopback device on mac? 
perhaps python picks up 127.0.0.1 by examining the underlying network device <code>lo0</code>?</li> <li>127.0.0.1 is second in the list from gethostbyname_ex, but it's the only address returned by gethostbyname.</li> <li>what's really bizarre is that both the machine's wifi address (192.) and physical ethernet address (169.) are returned for 'zoidberg'. These addresses are never associated with zoidberg in /etc/hosts or anywhere else I know. They're not even mentioned in /etc/hosts; those addresses are assigned dynamically by other devices (routers) using DHCP on separate network interfaces (en0 and en1).</li> </ul> <p>Why would python gethostbyname_ex pick up 192 and 169 addresses from other network interfaces? Is gethostbyname_ex polling all network interfaces for addresses? That seems really strange.</p> <p>I removed the '# loopback' comment from /etc/hosts to see if it fixed things. Only result is that python no longer picks those up as aliases for zoidberg. However gethostbyname still returns 127.0.0.1 for zoidberg, and gethostbyname_ex still returns all the other IPs. Results:</p> <pre><code>&gt;&gt;&gt; socket.gethostbyname ('zoidberg') '127.0.0.1' &gt;&gt;&gt; socket.gethostbyname_ex ('zoidberg') ('zoidberg.local', ['zoidberg'], ['10.1.2.3', '127.0.0.1', '192.168.1.62', '169.254.1.28', '10.1.2.3']) </code></pre> <p>After removing '# loopback' from /etc/hosts I also tested the C call gethostbyname on my system. It returns the same IP list as python: 5 entries with 10.1.2.3 as first and last entry. Yet every other program on my system resolves 'zoidberg' to 10.1.2.3; python gethostbyname is the only one that resolves to 127.0.0.1. And only since a few days ago. Before that, python name resolution worked fine.</p> <p>Perhaps it needs a few hours for /etc/hosts updates to propagate through the name resolution system? But if that were the case, why did python immediately stop picking up '#' and 'loopback' as aliases?</p>
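Regardless of why libc's ordering changed, the bind can be made deterministic by resolving the name explicitly and passing a concrete IP to `HTTPServer`. A sketch using `getaddrinfo`, which exposes every candidate address instead of the single, platform-ordered answer that `gethostbyname` gives:

```python
import socket

def pick_ipv4(host, prefer_non_loopback=True):
    """Return one IPv4 address for host, preferring a non-loopback entry."""
    infos = socket.getaddrinfo(host, None,
                               family=socket.AF_INET,
                               type=socket.SOCK_STREAM)
    addrs = [info[4][0] for info in infos]
    if prefer_non_loopback:
        for addr in addrs:
            if not addr.startswith("127."):
                return addr
    return addrs[0]
```

With this, `HTTPServer((pick_ipv4('zoidberg'), 8888), ReqHandler)` would bind to 10.1.2.3 even when `gethostbyname` happens to prefer 127.0.0.1. (The non-loopback preference is an assumption about the desired behavior, not something the stdlib guarantees.)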
<python><sockets><localhost><bind><gethostbyname>
2023-08-23 17:43:17
0
383
Ed_
76,963,698
11,427,765
Creating dict groupby from Dataframe with node structure
<p>I have the following code:</p> <pre><code>import pandas as pd from io import StringIO data = &quot;&quot;&quot;Nod Levels Parents Amounts 8616 1 NaN 0 8636 5 8648 0 8637 5 8635 0 8631 4 8630 0 8605 5 8609 8888882 8606 5 8609 339494 8609 4 8615 0 8613 6 8620 0 8614 6 8636 0 8615 3 8642 0 8618 6 8620 49832 8619 6 8636 11122 8620 5 8648 0 8621 4 8615 0 8622 5 8621 237837 8623 5 8621 0 8624 4 8615 0 8625 5 8624 87328732 8634 4 8627 0 8639 5 8634 0 8648 4 8627 0 8630 3 8642 0 8632 4 8615 0 8627 3 8642 0 8629 5 8609 -8378383 8633 5 8632 0 8635 4 8627 0 8638 5 8634 -93198318 8638 5 8634 32323 8642 2 8616 0&quot;&quot;&quot; df1 = pd.read_csv(StringIO(data), sep=' ') dct, index = {}, {} for _, row in df1.iterrows(): a = row[&quot;Amounts&quot;] if pd.isna(row[&quot;Parents&quot;]): dct[row[&quot;Nod&quot;]] = index[row[&quot;Nod&quot;]] = {&quot;Amounts&quot;: a} else: if row[&quot;Parents&quot;] not in index: index[row[&quot;Parents&quot;]] = {} index[row[&quot;Parents&quot;]][row[&quot;Nod&quot;]] = index[row[&quot;Nod&quot;]] = {&quot;Amounts&quot;: a} def sum_dct(dct): a = 0 for k, v in dct.items(): if k != &quot;Amounts&quot;: a += sum_dct(v) if len(dct) &gt; 1: dct[&quot;Amounts&quot;] = a return dct[&quot;Amounts&quot;] for v in dct.values(): sum_dct(v) print(dct) </code></pre> <p>The idea is to group by Nod: level 1 will contain level 2, level 2 will contain level 3, and so on, and the Amounts of each node will be calculated as the sum of its sub-nodes. (Here it's only shown for 3 levels, but my data in df1 is more complex.)</p> <p>In a pivot table it is easier to see the structure of this dataset; it's very complex to make a dict or JSON file of it.</p> <p>Level 8616 has sublevel 8642, which has 2 sublevels, 8615 and 8627 …</p> <p>{ 8616: { &quot;Amounts&quot;: 3882737, 8642: { &quot;Amounts&quot;: 7388955, 8630: {&quot;Amounts&quot;: 399}, 8615: {&quot;Amounts&quot;: 111}, …… }</p>
<python><pandas><dataframe>
2023-08-23 17:37:51
1
387
Gogo78
76,963,514
1,812,732
How to tell FastAPI to get the descriptions from the docstring
<p>I want FastAPI to use my docstring. Is it possible?</p> <p>This is my code:</p> <pre><code>app = FastAPI() @app.get(&quot;/App/test2&quot;, tags=['App']) async def test2(name: str): &quot;&quot;&quot; This API returns a simple message. :param str name: The name of the person to greet. &quot;&quot;&quot; return {&quot;message&quot;: &quot;Hello &quot; + name} </code></pre> <p>And this is the swagger page.</p> <p><a href="https://i.sstatic.net/MLYlR.png" rel="noreferrer"><img src="https://i.sstatic.net/MLYlR.png" alt="swagger page" /></a></p> <p>I would prefer to put &quot;The name of the person to greet.&quot; at the bottom, next to the name parameter.</p>
<python><fastapi>
2023-08-23 17:09:17
1
11,643
John Henckel
76,963,335
1,873,237
sqlalchemy with postgres connection: .execute GRANT and DELETE not affecting database
<p>Passing &quot;SELECT&quot;-based SQL queries works, but &quot;DELETE&quot;- or &quot;GRANT&quot;-based SQL executes in Python without error yet makes no change to my database (Python 3.8, sqlalchemy 2.0.10). These &quot;DELETE&quot; and &quot;GRANT&quot; queries worked in Python 3.6, sqlalchemy 1.3.23. I could be missing something, but I don't see anything in the sqlalchemy docs/change notes to indicate that this &quot;DELETE&quot; or &quot;GRANT&quot; functionality should be lost.</p> <p>Here is an example that might shed light on where I'm going wrong. After initializing the engine (where I use '' to indicate some arbitrary input):</p> <pre><code>from sqlalchemy import create_engine from sqlalchemy import text db_name = '&lt;insert_db_name&gt;' port = ':&lt;insert_port&gt;/' host = '&lt;insert_host_endpoint&gt;' username = '&lt;insert_username&gt;' password = '&lt;insert_password&gt;' engine = create_engine('postgresql+psycopg2://'+username+':'+ password+'@'+host+port+db_name) </code></pre> <p>I am able to read and write through connection.execute or even pandas. Something like the following works just fine (reading with pandas, writing with pandas, reading with .execute):</p> <pre><code>import pandas as pd query_1 = &quot;SELECT * from '&lt;schema&gt;'.'&lt;table_name&gt;'&quot; df = pd.read_sql(query_1, engine) df.to_sql(name = '&lt;new_table_name&gt;', con = engine, schema = &lt;'schema'&gt;) with engine.connect() as con: con.execute(text(&quot;SELECT * from schema.'&lt;new_table_name&gt;'&quot;)) </code></pre> <p>Passing GRANT- or DELETE-based SQL through .execute runs in Python without errors, but there are no changes to the database--as if the Python script never ran. The following examples run in Python without error but have no effect in the database:</p> <pre><code>query_2 = &quot;GRANT SELECT ON TABLE '&lt;schema&gt;'.'&lt;table_name&gt;' TO &lt;'some_user'&gt;&quot; query_3 = &quot;DELETE FROM '&lt;schema&gt;'.'&lt;table_name&gt;' WHERE &lt;some condition&gt;&quot; with engine.connect() as con: con.execute(text(query_2)) con.execute(text(query_3)) </code></pre> <p>Again, this same code worked in Python 3.6/sqlalchemy 1.3.23 (removing the sqlalchemy &quot;text()&quot; as it wasn't necessary in older versions, as described in the sqlalchemy change notes).</p>
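<p>For comparison (plain DB-API <code>sqlite3</code>, not SQLAlchemy, so only an analogy): a data-modifying statement can run without error and still leave no trace if its transaction is never committed:</p>

```python
import os
import sqlite3
import tempfile

path = os.path.join(tempfile.mkdtemp(), "demo.db")

con = sqlite3.connect(path)
con.execute("CREATE TABLE t (id INTEGER)")
con.commit()
con.execute("INSERT INTO t VALUES (1)")  # runs without error...
con.close()  # ...but the open transaction is rolled back, not committed

con2 = sqlite3.connect(path)
rows = con2.execute("SELECT count(*) FROM t").fetchone()[0]  # no rows survive
con2.close()
```

<p>SQLAlchemy 2.x runs <code>Connection.execute()</code> inside a transaction as well, so a missing <code>con.commit()</code> (or an <code>engine.begin()</code> block) is one possible explanation for the symptom, though I can't confirm that's what changed between my two environments.</p>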
<python><postgresql><sqlalchemy><psycopg2>
2023-08-23 16:39:26
2
1,789
Docuemada
76,963,311
1,207,193
llama-cpp-python not using NVIDIA GPU CUDA
<p>I have been playing around with <a href="https://github.com/oobabooga/text-generation-webui" rel="noreferrer">oobabooga text-generation-webui</a> on my Ubuntu 20.04 with my NVIDIA GTX 1060 6GB for some weeks without problems. I have been using llama2-chat models, sharing memory between my RAM and NVIDIA VRAM. I installed it without many problems following the instructions on its repository.</p> <p>So what I want now is to use the model loader <code>llama-cpp</code> with its package <code>llama-cpp-python</code> bindings to play around with it by myself. So, using the same miniconda3 environment that oobabooga text-generation-webui uses, I started a Jupyter notebook and I could make inferences and everything is working well, <em>BUT ONLY for CPU</em>.</p> <p>A working example below:</p> <pre><code>from llama_cpp import Llama llm = Llama(model_path=&quot;/mnt/LxData/llama.cpp/models/meta-llama2/llama-2-7b-chat/ggml-model-q4_0.bin&quot;, n_gpu_layers=32, n_threads=6, n_ctx=3584, n_batch=521, verbose=True) prompt = &quot;&quot;&quot;[INST] &lt;&lt;SYS&gt;&gt; Name the planets in the solar system? &lt;&lt;/SYS&gt;&gt; [/INST] &quot;&quot;&quot; output = llm(prompt, max_tokens=350, echo=True) print(output['choices'][0]['text'].split('[/INST]')[-1]) </code></pre> <blockquote> <p>Of course! Here are the eight planets in our solar system, listed in order from closest to farthest from the Sun:</p> <ol> <li>Mercury</li> <li>Venus</li> <li>Earth</li> <li>Mars</li> <li>Jupiter</li> <li>Saturn</li> <li>Uranus</li> <li>Neptune</li> </ol> </blockquote> <blockquote> <p>Note that Pluto was previously considered a planet but is now classified as a dwarf planet due to its small size and unique orbit.</p> </blockquote> <p>I want to make inference using the GPU as well. What is wrong? Why can't I offload to the GPU as the parameter <code>n_gpu_layers=32</code> specifies, and as <code>oobabooga text-generation-webui</code> already does on the same miniconda environment without any problems?</p>
<python><python-3.x><nlp><llama><llama-cpp-python>
2023-08-23 16:35:57
11
7,852
imbr
76,963,265
1,325,133
Excluding DJStripe Logs in Django
<p>I'm trying to exclude the djstripe logs from my Django app and prevent them being printed to the console.</p> <p>I have this config in my <code>settings.py</code>, but when I perform Stripe operations I still see them on the console.</p> <pre><code>LOGGING = { &quot;version&quot;: 1, &quot;disable_existing_loggers&quot;: False, &quot;handlers&quot;: { &quot;file&quot;: { &quot;level&quot;: &quot;DEBUG&quot;, &quot;class&quot;: &quot;logging.FileHandler&quot;, &quot;filename&quot;: BASE_DIR / &quot;web_debug.log&quot;, }, &quot;console&quot;: { &quot;level&quot;: &quot;INFO&quot;, &quot;class&quot;: &quot;logging.StreamHandler&quot;, }, &quot;djstripe_file&quot;: { # File handler specifically for djstripe &quot;level&quot;: &quot;DEBUG&quot;, &quot;class&quot;: &quot;logging.FileHandler&quot;, &quot;filename&quot;: BASE_DIR / &quot;djstripe.log&quot;, }, }, &quot;loggers&quot;: { &quot;djstripe&quot;: { # Logger specifically for djstripe &quot;handlers&quot;: [&quot;djstripe_file&quot;], # Using the djstripe_file handler &quot;level&quot;: &quot;WARNING&quot;, &quot;propagate&quot;: False, }, }, &quot;root&quot;: { # Correctly defining the root logger outside the &quot;loggers&quot; dictionary &quot;handlers&quot;: [&quot;console&quot;], &quot;level&quot;: &quot;INFO&quot;, }, } </code></pre> <p>Any ideas?</p>
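<p>A standard-library sketch (no Django involved; the logger names are just illustrative) of how <code>propagate = False</code> on a parent logger stops child records from reaching the root console handler:</p>

```python
import io
import logging

stream = io.StringIO()
root = logging.getLogger()
root.addHandler(logging.StreamHandler(stream))
root.setLevel(logging.INFO)

# Cut propagation at the "djstripe" logger; a NullHandler swallows records
# so logging's last-resort stderr handler doesn't kick in.
parent = logging.getLogger("djstripe")
parent.addHandler(logging.NullHandler())
parent.propagate = False

logging.getLogger("djstripe.event").warning("stripe noise")  # silenced
logging.getLogger("other").warning("app message")            # reaches root

captured = stream.getvalue()
```

<p>If records still show up with a config like mine, one possibility is that they're emitted under a differently named logger (e.g. <code>stripe</code> rather than <code>djstripe</code>), which the <code>djstripe</code> entry wouldn't catch.</p>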
<python><django><stripe-payments>
2023-08-23 16:28:30
1
16,889
felix001
76,963,210
790,701
How to get the page_source to include shadow root elements in Appium/Selenium?
<p>I have a parser for a given web page with a lot of nested shadow-root elements developed in Appium. Since it's extremely slow, I would like to move to BeautifulSoup to dump the HTML only once and then extract the information locally instead of sending hundreds of requests to the Appium server.</p> <p>To do so I need to dump the HTML of the page generated by my Appium script.</p> <p>This is the current code that should parse the HTML:</p> <p><code>BeautifulSoup(self.driver.page_source, &quot;html.parser&quot;)</code></p> <p>The problem is that shadow-root elements are not included and I can't seem to find a way to get a single HTML tree or at least a collection of DOM documents to search in.</p> <p>Obviously I could access the shadow_root elements through Appium throughout the parsing step, but this would defeat the purpose of using BeautifulSoup in the first place.</p> <p>Any suggestion on how to get the shadow DOMs?</p>
<python><selenium-webdriver><selenium-chromedriver><appium><html-parsing>
2023-08-23 16:20:41
0
3,515
Chobeat
76,963,206
3,511,819
Transform one-dimensional Numpy array into mask suitable for array index update of 2-dimensional array
<p>Given a 2-dimensional array a, I want to update select indices specified by b to a fixed value of 1.</p> <p>test data:</p> <pre class="lang-py prettyprint-override"><code>import numpy as np a = np.array( [[0, 1, 0, 0], [0, 0, 0, 0], [0, 0, 1, 0], [1, 0, 0, 0], [0, 1, 0, 1], [0, 0, 0, 0]] ) b = np.array([1, 2, 2, 0, 3, 3]) </code></pre> <p>One solution is to transform b into a masked array like this:</p> <pre><code>array([[0, 1, 0, 0], [0, 0, 1, 0], [0, 0, 1, 0], [1, 0, 0, 0], [0, 0, 0, 1], [0, 0, 0, 1]]) </code></pre> <p>which would allow me to do <code>a[b.astype(bool)] = 1</code> and solve the problem.</p> <p>How can I transform b into the &quot;mask&quot; version below?</p>
<python><numpy>
2023-08-23 16:20:05
2
13,113
Alex
76,963,179
4,336,796
SignalR Client using Python
<p>I'm trying to develop a SignalR client using Python. While I've explored libraries like <a href="https://pypi.org/project/signalr-client/" rel="nofollow noreferrer">signalr</a> and <a href="https://pypi.org/project/signalrcore/" rel="nofollow noreferrer">signalrcore</a>, my SignalR Hub server was developed on an older version (.NET Framework 4.8), making it challenging to utilize these libraries. Additionally, my server employs the Server-Sent Events (SSE) transport type for data transmission, which these libraries don't support.</p> <p>I've managed to successfully retrieve data in my .NET code, where after establishing the connection I need to invoke an API endpoint by providing the connectionId obtained through the negotiation process with the hub. This action is intended to trigger the event flow to the SignalR client. Now I'm striving to replicate this functionality in Python.</p> <p>However, I'm encountering a roadblock as the API invocation is resulting in an &quot;invalid connectionId&quot; error.</p> <p>Here are the sequential steps I'm following in Python:</p> <ol> <li><p>Initiate negotiation to obtain connection details such as connectionToken, connectionId, and other metadata.</p> <p>Request: <code>GET https://myhuburl:443/ws/signalr/negotiate</code></p> <p>Response:</p> <pre><code>{ &quot;url&quot;: &quot;/ws/signalr/&quot;, &quot;connectionToken&quot;: &quot;xxxx&quot;, &quot;connectionId&quot;: &quot;asdf...&quot; ....
} </code></pre> </li> <li><p>Employ the connectionToken to establish a connection with the SignalR Hub.</p> <p>Request: <code>GET https://myhuburl:443/ws/signalr/connect?connectionData=[{&quot;Name&quot;:&quot;HubName&quot;}]&amp;connectionToken=xxxxxxxxx</code></p> <p>Response: Connects to SSE and starts receiving empty (<code>{}</code>) messages.</p> </li> <li><p>Attempt to trigger an event using the connectionId via an API call.</p> <p>Request: <code>GET https://myhuburl:8080/api/sr/eventssubscription/{connectionId}</code></p> <p>Response: <code>{&quot;error&quot;:&quot;Invalid connectionId&quot;} // This return success when I'm using the connectionId which I get from .Net Client console.</code></p> </li> </ol> <p>Expecting help from anyone who has knowledge of SignalR. (or DesigoCC)</p>
<python><.net><signalr><server-sent-events><signalr-hub>
2023-08-23 16:13:15
0
1,172
Suroor Ahmmad
76,963,171
3,247,006
`dir()` doesn't show the cache object attributes in Django
<p>I can use these cache methods <a href="https://docs.djangoproject.com/en/4.2/topics/cache/#django.core.cache.cache.set" rel="nofollow noreferrer">set()</a>, <a href="https://docs.djangoproject.com/en/4.2/topics/cache/#django.core.cache.cache.get" rel="nofollow noreferrer">get()</a>, <a href="https://docs.djangoproject.com/en/4.2/topics/cache/#django.core.cache.cache.touch" rel="nofollow noreferrer">touch()</a>, <a href="https://docs.djangoproject.com/en/4.2/topics/cache/#django.core.cache.cache.incr" rel="nofollow noreferrer">incr()</a>, <a href="https://docs.djangoproject.com/en/4.2/topics/cache/#django.core.cache.cache.decr" rel="nofollow noreferrer">decr()</a>, <a href="https://docs.djangoproject.com/en/4.2/topics/cache/#cache-versioning" rel="nofollow noreferrer">incr_version()</a>, <a href="https://docs.djangoproject.com/en/4.2/topics/cache/#cache-versioning" rel="nofollow noreferrer">decr_version()</a>, <a href="https://docs.djangoproject.com/en/4.2/topics/cache/#django.core.cache.cache.delete" rel="nofollow noreferrer">delete()</a>, <a href="https://docs.djangoproject.com/en/4.2/topics/cache/#django.core.cache.cache.delete_many" rel="nofollow noreferrer">delete_many()</a>, <a href="https://docs.djangoproject.com/en/4.2/topics/cache/#django.core.cache.cache.clear" rel="nofollow noreferrer">clear()</a> and <a href="https://docs.djangoproject.com/en/4.2/topics/cache/#django.core.cache.cache.close" rel="nofollow noreferrer">close()</a> as shown below:</p> <pre class="lang-py prettyprint-override"><code>from django.core.cache import cache cache.set(&quot;first_name&quot;, &quot;John&quot;) cache.set(&quot;last_name&quot;, &quot;Smith&quot;) cache.set_many({&quot;age&quot;: 36, &quot;gender&quot;: &quot;Male&quot;}) cache.get(&quot;first_name&quot;) cache.get_or_set(&quot;last_name&quot;, &quot;Doesn't exist&quot;) cache.get_many([&quot;age&quot;, &quot;gender&quot;]) cache.touch(&quot;first_name&quot;, 60) cache.incr(&quot;age&quot;) 
cache.decr(&quot;age&quot;) cache.incr_version(&quot;first_name&quot;) cache.decr_version(&quot;last_name&quot;) cache.delete(&quot;first_name&quot;) cache.delete_many([&quot;last_name&quot;, &quot;age&quot;]) cache.clear() cache.close() </code></pre> <p>But, <a href="https://docs.python.org/3/library/functions.html#dir" rel="nofollow noreferrer">dir()</a> doesn't show these cache methods as shown below:</p> <pre class="lang-py prettyprint-override"><code>from django.core.cache import cache print(dir(cache)) # Here </code></pre> <pre class="lang-none prettyprint-override"><code>[ '__class__', '__contains__', '__delattr__', '__dict__', '__dir__', '__doc__', '__eq__', '__format__', '__ge__', '__getattr__', '__getattribute__', '__gt__', '__hash__', '__init__', '__init_subclass__', '__le__', '__lt__', '__module__', '__ne__', '__new__', '__reduce__', '__reduce_ex__', '__repr__', '__setattr__', '__sizeof__', '__str__', '__subclasshook__', '__weakref__', '_alias', '_connections' ] </code></pre> <p>So, how can I show these cache methods?</p>
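<p>Given that <code>__getattr__</code> appears in that <code>dir()</code> output, the cache object is presumably a lazy proxy; a plain-Python sketch of why <code>dir()</code> misses attributes served through <code>__getattr__</code>:</p>

```python
class Proxy:
    """Serves attributes dynamically, the way a backend proxy might."""

    def __getattr__(self, name):
        # Only called when normal attribute lookup fails.
        if name in ("get", "set"):
            return lambda *args: ("called", name, args)
        raise AttributeError(name)

p = Proxy()
result = p.get("first_name")   # resolves fine at call time
visible = "get" in dir(p)      # but dir() only lists statically known names
```

<p>So one route would be to call <code>dir()</code> on the concrete backend object the proxy delegates to, though which attribute holds it is backend-specific.</p>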
<python><django><attributes><django-cache><django-caching>
2023-08-23 16:11:49
3
42,516
Super Kai - Kazuya Ito
76,962,726
12,288,028
Avoiding type errors in Python SQL statements passing integer as parameter in psycopg2
<p>Using the psycopg2 library, I encounter this specific error when I pass the argument invite_id as an integer:</p> <pre><code>def check_invitations(self, invite_id=2): try: cur = self.db_cursor cur.execute(&quot;&quot;&quot; select id from invitations where deleted_at is null and accepted_at is null and rejected_at is null and id = %s &quot;&quot;&quot;, invite_id) return cur.fetchall() except Exception as e: print('Error in check_invitations: %s' % e) </code></pre> <pre><code>Error in check_invitations: 'int' object does not support indexing </code></pre> <p>But when I convert invite_id to a string, the command executes and returns the fetchall():</p> <pre><code>def check_invitations(self, invite_id=2): try: invite_id = str(invite_id) cur = self.db_cursor cur.execute(&quot;&quot;&quot; select id from invitations where deleted_at is null and accepted_at is null and rejected_at is null and id = %s &quot;&quot;&quot;, invite_id) return cur.fetchall() except Exception as e: print('Error in check_invitations: %s' % e) </code></pre> <p>To get the command to execute with the invite_id as an integer, I have to pass the int value inside a tuple with one value:</p> <pre><code>def check_invitations(self, invite_id=2): try: cur = self.db_cursor cur.execute(&quot;&quot;&quot; select id from invitations where deleted_at is null and accepted_at is null and rejected_at is null and id = %s &quot;&quot;&quot;, (invite_id,)) return cur.fetchall() except Exception as e: print('Error in check_invitations: %s' % e) </code></pre> <p>My question, specifically, is why do I have to pass the integer inside a tuple, while the string can just be passed directly? I can make note of it and just do it, but I am attempting to understand why. What about an &quot;'int' object does not support indexing&quot; is causing this situation (not really an issue)?</p>
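<p>The error makes more sense viewed purely as Python sequence behavior: <code>execute()</code> treats its second argument as a sequence of parameters, and a string happens to be a sequence while an int is not. A hypothetical minimal repro (no database involved):</p>

```python
def fake_execute(query, params):
    """Loosely mimics a driver that indexes into its params argument."""
    return [params[i] for i in range(query.count("%s"))]

ok_string = fake_execute("id = %s", "2")   # "2"[0] works: strings index
ok_tuple = fake_execute("id = %s", (2,))   # (2,)[0] works: tuples index
try:
    fake_execute("id = %s", 2)             # 2[0] fails: ints don't index
    failed = False
except TypeError:
    failed = True
```

<p>It also suggests the string form only works by accident: each <code>%s</code> is matched positionally against the sequence, so a multi-character id would no longer line up the same way.</p>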
<python><psycopg2>
2023-08-23 15:12:35
1
486
MC Hammerabi
76,962,668
4,190,657
Unable to use kafka jars on Jupyter notebook
<p>I'm using Spark structured streaming to read data from a single-node Kafka, running the setup below locally on a Mac. I can read via spark-submit, but it does not work in a Jupyter notebook.</p> <pre><code>from pyspark.sql import SparkSession from pyspark.sql.functions import col,from_json from pyspark.sql.types import StructType, StructField, StringType, LongType, DoubleType, IntegerType, ArrayType import time spark.stop() time.sleep(30) spark = SparkSession.builder\ .appName('KafkaIntegration6')\ .config(&quot;spark.jars.packages&quot;, &quot;org.apache.spark:spark-sql-kafka-0-10_2.12:3.4&quot;)\ .getOrCreate() print('done') kafka_stream_df = spark\ .readStream\ .format(&quot;kafka&quot;)\ .option(&quot;kafka.bootstrap.servers&quot;, &quot;localhost:9092&quot;)\ .option(&quot;subscribe&quot;, &quot;TestTopic2&quot;)\ .option(&quot;startingOffsets&quot;, &quot;earliest&quot;)\ .load() </code></pre> <p><strong>Error</strong>: AnalysisException: Failed to find data source: kafka. Please deploy the application as per the deployment section of Structured Streaming + Kafka Integration Guide.</p> <p><strong>Question</strong>: I'm able to run the same code and read from Kafka using spark-submit by passing spark-sql-kafka-0-10_2.12:3.4 as --packages. However, when I try to run it from a Jupyter notebook by importing the Kafka package during session creation, I get this error. When looking at the Spark web UI on http://localhost:4040/environment/, I can see the package org.apache.spark:spark-sql-kafka-0-10_2.12:3.4 mentioned under spark.jars.packages</p>
<python><apache-spark><apache-kafka><spark-structured-streaming>
2023-08-23 15:04:13
2
305
steve
76,962,657
7,657,180
Simulate pressing Ctrl + right Shift in notepad
<p>I have a function that opens a text file and then presses <code>Ctrl + A</code> to select all the contents:</p> <pre><code>def open_in_notepad(file_path): try: subprocess.Popen(['notepad.exe', file_path]) os.system(f'powershell -command &quot;&amp; {{$wshell = New-Object -ComObject WScript.Shell; $wshell.AppActivate(\'Notepad\'); Start-Sleep -Milliseconds 500; $wshell.SendKeys(\'^a\') }}&quot;') except Exception as e: print('Error:', e) </code></pre> <p>How can I simulate pressing <code>Ctrl + right Shift</code>?</p>
<python>
2023-08-23 15:03:03
1
9,608
YasserKhalil
76,962,527
842,693
argparse: optional argument overwriting positional arguments
<p>Let's say I have a python script with arguments <code>foo [--bar baz]</code> and want to add an optional argument/switch <code>--extra-info</code> that would just print some extra information and exit. (My actual use case is a bit more complicated, but this illustrates the issue.)</p> <p>I found the following solution, but it looks more like a hack. Basically, I am making the positional argument optional if '--extra-info' is present in the argument list:</p> <pre class="lang-py prettyprint-override"><code>import sys import argparse parser = argparse.ArgumentParser() parser.add_argument('--extra-info', action='store_true') pos_nargs = '?' if '--extra-info' in sys.argv else None parser.add_argument('foo', nargs=pos_nargs) parser.add_argument('--bar') </code></pre> <p>Is there a better approach to this? I would expect there to be one, given that <code>--help</code> behaves exactly the way I want my switch to behave...</p>
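<p>A self-contained repro of the underlying conflict: with a required positional, passing <code>--extra-info</code> alone makes <code>parse_args</code> bail out:</p>

```python
import argparse

parser = argparse.ArgumentParser()
parser.add_argument('--extra-info', action='store_true')
parser.add_argument('foo')  # required positional
parser.add_argument('--bar')

try:
    parser.parse_args(['--extra-info'])  # no foo supplied
    bailed = False
except SystemExit:  # argparse prints an error and exits with code 2
    bailed = True
```

<p><code>--help</code> avoids this because it is implemented as a custom <code>argparse.Action</code> whose <code>__call__</code> prints and calls <code>parser.exit()</code> during parsing, before the required-argument check runs, which is presumably the less hacky route for a switch like mine.</p>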
<python><argparse>
2023-08-23 14:47:11
3
1,623
Michal Kaut
76,962,520
6,195,489
SQLalchemy unittesting - how to pass an in memory session to a mocked session
<p>Say I have an SQLAlchemy model like:</p> <pre><code>from sqlalchemy import Boolean, Column, false from sqlalchemy.orm import Mapped, mapped_column from app.data_structures.base import Base class User(Base): __tablename__ = &quot;users&quot; user_name: Mapped[str] = mapped_column(primary_key=True, nullable=True) flag = Column(Boolean) def __init__( self, user_name: str = None, flag: bool = false(), ) -&gt; None: self.user_name = user_name self.flag = flag </code></pre> <p>where app.data_structures.base.py is:</p> <pre><code>from contextlib import contextmanager from os import environ from os.path import join, realpath from sqlalchemy import Column, ForeignKey, Table, create_engine from sqlalchemy.orm import declarative_base, scoped_session, sessionmaker db_name = environ.get(&quot;DB_NAME&quot;) ROOT_DIR = environ.get(&quot;ROOT_DIR&quot;) db_path = realpath(join(ROOT_DIR, &quot;data&quot;, db_name)) engine = create_engine(f&quot;sqlite:///{db_path}&quot;, connect_args={&quot;timeout&quot;: 120}) session_factory = sessionmaker(bind=engine) sql_session = scoped_session(session_factory) @contextmanager def Session(): session = sql_session() try: yield session session.commit() except Exception: session.rollback() raise Base = declarative_base() </code></pre> <p>I then have a function defined elsewhere (app.helpers.db_helper.get_users_to_query) that does something like:</p> <pre><code>from logging import getLogger from typing import List from app.data_structures.base import Session def get_users_to_query(flag: bool = False) -&gt; List[User]: logger = getLogger(__name__) try: with Session() as session: usernames_to_query = [ {&quot;user_name&quot;: user.user_name} for user in session.query(User).filter( User.flag == flag ) ] except Exception as err: logger.exception(f&quot;Exception thrown in get_users_to_query {' '.join(err.args)}&quot;) usernames_to_query = [] return usernames_to_query </code></pre> <p>I am writing unittests for this app, and to try and mock out the SQLite db and sqlalchemy, I am doing the following:</p> <pre><code>from app.helpers.db_helper.get_users_to_query import get_users_to_query class TestSearchUser(UserHelperTestCase): def setUp(self): super().setUp() def tearDown(self) -&gt; None: return super().tearDown() @patch(&quot;app.data_structures.base.Session&quot;) def test_get_users_to_query(self, mock_session) -&gt; None: self.engine = create_engine(&quot;sqlite:///:memory:&quot;) self.session = Session(self.engine) Base.metadata.create_all(self.engine) mock_session.return_value.__enter__.return_value = ( self.session ) (patcher, environ_dict, environ_mock_get) = self.environ_mock_get_factory() with patcher(): fake_user1 = User( display_name=&quot;Jane Doe&quot;, user_name=&quot;jdb1&quot;, flag=False ) fake_user2 = User( display_name=&quot;John Doe&quot;, user_name=&quot;jdb2&quot;, flag=True ) with mock_session as session: session.add(fake_user1) session.add(fake_user2) session.commit() users_to_query = get_users_to_query() print(users_to_query) </code></pre> <p>But the users to query I get returned from the test are always the ones from the production DB, so the in-memory db and session I set up are not being used. What is the best approach for testing this kind of set-up? I have also tried mocking the session and setting the return values, but it didn't work.</p>
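<p>One thing worth isolating first (plain <code>unittest.mock</code>, with toy modules standing in for the app): patching a name in the module that defines it has no effect on a module that already imported that name into its own namespace:</p>

```python
import types
from unittest import mock

# Toy stand-ins for app.data_structures.base and the db_helper module.
base = types.ModuleType("base")
base.Session = lambda: "real-session"

helper = types.ModuleType("helper")
helper.Session = base.Session  # equivalent of "from base import Session"

with mock.patch.object(base, "Session", lambda: "fake-session"):
    seen_by_base = base.Session()
    seen_by_helper = helper.Session()  # still bound to the original
```

<p>So the patch target would presumably need to be the reference held by the module under test (something like <code>app.helpers.db_helper.get_users_to_query.Session</code>) rather than <code>app.data_structures.base.Session</code>, but I'd welcome confirmation of that.</p>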
<python><unit-testing><sqlalchemy><mocking><python-unittest>
2023-08-23 14:46:31
1
849
abinitio
76,962,240
3,322,273
Numpy turn hierarchy of matrices into concatenation
<p>I have the following 4 matrices:</p> <pre><code>&gt;&gt;&gt; a array([[0., 0.], [0., 0.]]) &gt;&gt;&gt; b array([[1., 1.], [1., 1.]]) &gt;&gt;&gt; c array([[2., 2.], [2., 2.]]) &gt;&gt;&gt; d array([[3., 3.], [3., 3.]]) </code></pre> <p>I'm creating another matrix that will contain them:</p> <pre><code>&gt;&gt;&gt; e = np.array([[a,b], [c,d]]) &gt;&gt;&gt; e.shape (2, 2, 2, 2) </code></pre> <p><strong>I want to &quot;cancel the hierarchy&quot; and reshape <code>e</code> into a 4x4 matrix that will look like this:</strong></p> <pre><code>0 0 1 1 0 0 1 1 2 2 3 3 2 2 3 3 </code></pre> <p>However, when I run <code>e.reshape((4,4))</code>, I get the following matrix:</p> <pre><code>&gt;&gt;&gt; e.reshape((4,4)) array([[0., 0., 0., 0.], [1., 1., 1., 1.], [2., 2., 2., 2.], [3., 3., 3., 3.]]) </code></pre> <p>Is there a way to reshape my <code>(2,2,2,2)</code> matrix into a <code>(4,4)</code> matrix by cancelling the hierarchy, rather than with the row-major flattening I'm currently getting?</p>
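<p>For reference, <code>np.block</code> builds exactly the target layout directly from the four blocks, so what I'm after is getting the same result from the already-built <code>(2, 2, 2, 2)</code> array:</p>

```python
import numpy as np

a = np.full((2, 2), 0.)
b = np.full((2, 2), 1.)
c = np.full((2, 2), 2.)
d = np.full((2, 2), 3.)

# Assembles the blocks side by side / stacked, i.e. the desired 4x4 layout.
target = np.block([[a, b], [c, d]])
```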
<python><numpy><reshape>
2023-08-23 14:15:49
2
12,360
SomethingSomething
76,962,230
3,127,242
Why is pip installed in virtual environments by default?
<p>Python's <code>venv</code> module installs <code>pip</code> in created virtual environments by default, unless instructed not to using <code>--without-pip</code>, as described in <a href="https://docs.python.org/3/library/venv.html#creating-virtual-environments" rel="nofollow noreferrer">docs.python.org</a>. Why is that so?</p> <p>If <code>pip</code> is &quot;just&quot; a package manager, I don't see why it should be coupled in any way to the Python interpreter in use or to the installed packages. I wonder why the system's pip is not used instead to manage all environments. Indeed, pip's documentation (<a href="https://pip.pypa.io/en/stable/topics/python-option/" rel="nofollow noreferrer">pip.pypa.io</a>) only describes how this can be done and does not discourage it.</p> <pre class="lang-bash prettyprint-override"><code>$ python -m venv .venv --without-pip $ python -m pip --python .venv install SomePackage </code></pre>
<python><pip><python-venv>
2023-08-23 14:14:43
2
1,122
Boyan Hristov
76,962,212
1,040,266
How to make an int8 python numpy array appear as bool to the user
<p>I need to store a bool numpy array, but it has to be compatible with an old specification (astropy.io.fits.ImageHDU) which can only store int8 (other types are possible, but int8 has the smallest footprint). The key issue is that the users will want to do</p> <pre><code>mask = np.array((True, False, False, True)) print(np.arange(4, 8)[mask]) [4, 7] </code></pre> <p>which would give a completely different result if mask were int</p> <pre><code>[5, 4, 4, 5] </code></pre> <p>Here are two (wrong) implementations to give an idea of what I need</p> <pre><code>import numpy as np class MyClass(dict): @property def mask(self): return self['mask'].astype(bool) @mask.setter def mask(self, inmask): self['mask'] = inmask.astype(np.int8) input_mask = np.array((0, 1), dtype=np.int8) obj = MyClass((('mask', input_mask),)) </code></pre> <p>The desired behaviour should have <code>obj.mask</code> and <code>obj['mask']</code> always synchronised, that is</p> <pre><code>print(obj.mask) [False, True] print(obj['mask']) [0, 1] obj['mask'][0] = 1 print(obj.mask) [True, True] obj.mask[0] = False print(obj.mask) [False, True] print(obj['mask']) [0, 1] </code></pre> <p>But the implementation fails because <code>mask</code> always returns a different instance than <code>self['mask']</code>. So, as an alternative, I tried</p> <pre><code>import numpy as np class MyClass(dict): @property def mask(self): try: return self._mask except AttributeError: self._mask = self['mask'].astype(bool) return self._mask @mask.setter def mask(self, inmask): self._mask = inmask self['mask'] = inmask.astype(np.int8) input_mask = np.array((0, 1), dtype=np.int8) obj = MyClass((('mask', input_mask),)) </code></pre> <p>This fails because <code>self._mask</code> and <code>self['mask']</code> go out of sync</p> <pre><code>obj.mask[0] = True print(obj.mask, ' - ', obj['mask']) [True True] - [0 1] </code></pre>
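<p>A minimal repro of the indexing difference that drives the whole requirement:</p>

```python
import numpy as np

data = np.arange(4, 8)
mask_bool = np.array((True, False, False, True))
mask_int8 = mask_bool.astype(np.int8)

picked_bool = data[mask_bool].tolist()  # boolean mask: keep where True
picked_int8 = data[mask_int8].tolist()  # int8 array: fancy indexing by value
```

<p>With the int8 version, <code>[1, 0, 0, 1]</code> is read as positions, giving <code>[5, 4, 4, 5]</code> instead of <code>[4, 7]</code>, which is exactly the &quot;completely different result&quot; above.</p>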
<python><arrays><numpy><casting>
2023-08-23 14:12:16
1
1,059
astabada
76,962,146
18,029,617
Issue with pandas-profiling: Unknown Error during Data Preprocessing and Analysis
<p>I've been working on data preprocessing and analysis tasks. To gain a comprehensive understanding of my data (data distribution and correlation analysis), I decided to utilize the <strong>pandas-profiling</strong> library. However, I've encountered an <strong>unfamiliar error</strong> during this process. For your reference, I've <strong>attached an image depicting the error message</strong>.</p> <p><a href="https://i.sstatic.net/DecuA.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/DecuA.png" alt="enter image description here" /></a></p> <p><strong>Explored Solutions:</strong></p> <p>It's worth mentioning that I'm executing my script within a Conda environment. In an effort to troubleshoot, I've experimented with different versions of Python, pandas, and pandas-profiling, but unfortunately, none of these efforts have successfully resolved the error.</p> <p><strong>Current Package Versions:</strong></p> <ul> <li>Python: 3.10.12</li> <li>pandas: 2.0.3</li> <li>pandas-profiling: 2.9.0</li> </ul> <p>If anyone within the community has any insights or suggestions regarding this error, I would greatly appreciate your help.</p>
<python><pandas><pandas-profiling>
2023-08-23 14:04:48
1
691
Bennison J
76,962,110
901,426
where are the sparkplug_b_pb2.Payload() methods in a Protobuf object?
<p>i am trying to untangle a Python application that uses Paho MQTT and Protobuf. while i consider myself to be a decent coder, the docs on Protobuf are just not clicking. in particular, i have an <code>on_message</code> function that captures and processes messages from the MQTT broker:</p> <pre class="lang-py prettyprint-override"><code>def on_message(client, userdata, msg): tokens = msg.topic.split(&quot;/&quot;) # print('payload: ',msg.payload) #&lt;-- this is encoded spB payload if tokens[0] == &quot;spBv1.0&quot; and tokens[1] == myGroupId and tokens[3] == myNodeName: if tokens[2] == &quot;NCMD&quot; or tokens[2] == &quot;DCMD&quot;: print(f'-----&gt;&gt; {tokens[2]} payload has arrived &lt;&lt;-----') inboundPayload = sparkplug_b_pb2.Payload() #&lt;-- trying to figure this out... # this is NOT working. AT ALL. # what i can't figure out # is why it just dumps out # of the loop! no errors, # nothing! just *boop*, # done, out. print('ok. inboundPayload set to sparkplug_b_pb2') # it never gets to here... inboundPayload.ParseFromString(msg.payload) print('ok. inboundPayload parsed from string') # or here... :( for metric in inboundPayload.metrics: # blah blah blah processing code goes here... but it never gets this far. </code></pre> <p>what i can't figure out is where to <em>find</em> the <code>ParseFromString()</code> method to see what's not working. what's frustrating is that <a href="https://softblade.de/en/mqtt-fx/" rel="nofollow noreferrer">MQTT.fx</a> sees and decodes the payloads just fine. so i know that they are coming through, but this code (which i did not write) isn't delivering the messages to the processing loops that follow. and it's making me feel quite stupid. 
i'd love to be able to prettyPrint the resulting JSON object so i can better visualize what's coming down the pipe, but that's just not happening here.</p> <p>from what i have learned so far, the <code>sparkplug_b_pb2</code> is literally yarded right out of the documentation, as is the <code>Payload()</code> definition (which is confusing as hell ATM), and pasted into the Python file. the only thread that i've been able to tease out, but leads nowhere, is the Payload 'definition'(?) itself:</p> <pre class="lang-py prettyprint-override"><code>Payload = _reflection.GeneratedProtocolMessageType('Payload', (_message.Message,), ... </code></pre> <p>which leads to <em>another</em> rabbit hole <code>_reflection.GeneratedProtocolMessageType</code> which has no information in <a href="https://googleapis.dev/python/protobuf/latest/google/protobuf/reflection.html" rel="nofollow noreferrer">the docs</a> (that i understand, anyway...). i also tried to find something on <code>GeneratedProtocolMessageType</code> to no avail. all i could find was <a href="https://protobuf.dev/getting-started/pythontutorial/#protobuf-api" rel="nofollow noreferrer">a reference and a statement</a> that it was 'beyond the scope of [the] tutorial'. :P</p> <p>so if anyone out there can help me get my bearings on this, i'd be most grateful. and i apologize for the long post. despite having done coding for years, i've never come across Protobuf or even heard of it until this last two months. welcome to Industrial Control...</p> <p>EDIT: i wrapped the section in a <code>try..except</code> block, but to no effect. 
here's the code:</p> <pre class="lang-py prettyprint-override"><code>if tokens[2] == &quot;NCMD&quot; or tokens[2] == &quot;DCMD&quot;: try: inboundPayload = sparkplug_b_pb2.Payload() inboundPayload.ParseFromString(msg.payload) print('payload: ', payload) except Exception as ex: print('cannot parse: ', str(ex.message), str(ex.args)) </code></pre> <p>EDIT 2: digging ever deeper, apparently the methods for Python are built at runtime. to that end, i tried running <code>dir(sparkplug_b_pb2)</code> and it barfed out an <code>is not defined</code> error. O_o i'm not sure what to do with this or where to go from here. for a 20yr old protocol, i am stunned at how bad the docs are for those new to the protocol. there's a lot of high-level code-fu going on here. i'm sure it would be smoother if i weren't working with an embedded system.</p>
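Two things in the EDIT snippet are worth flagging before suspecting Protobuf itself: `print('payload: ', payload)` references an undefined name (it should be `inboundPayload`), and Python 3 exceptions have no `.message` attribute, so `str(ex.message)` raises a fresh `AttributeError` inside the `except` block. paho-mqtt also runs callbacks on its network-loop thread, and depending on the client version and its `suppress_exceptions` setting, an exception raised there can vanish without a trace — which would match the silent "*boop*, done, out" behavior. A hedged sketch of a wrapper that surfaces any callback exception (the handler body below is a stand-in, not the real Sparkplug parsing):

```python
import traceback

captured = []  # collects formatted tracebacks so nothing is lost


def log_exceptions(callback):
    """Wrap an MQTT-style callback so any exception raised inside it is
    recorded and printed instead of vanishing in the network thread."""
    def wrapper(*args, **kwargs):
        try:
            return callback(*args, **kwargs)
        except Exception:
            tb = traceback.format_exc()
            captured.append(tb)
            print(tb)
    return wrapper


@log_exceptions
def on_message(client, userdata, msg):
    # stand-in body: a real handler would call inboundPayload.ParseFromString(...)
    raise ValueError("simulated parse failure")


on_message(None, None, None)  # the traceback is printed, not swallowed
```

With the wrapper applied to the real handler, a failing `ParseFromString()` would show up as a `google.protobuf.message.DecodeError` with a full traceback instead of a silent exit.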
<python><protocol-buffers><mqtt>
2023-08-23 14:01:36
1
867
WhiteRau
76,962,047
3,747,724
Join a grouped transformation back to original DataFrame
<p>I have some data in a made up example. It's a bank with a few accounts:</p> <pre><code>d = {'bank': [1, 1, 2], 'account_money': [100, 300, 500]} df = pd.DataFrame(data=d) &gt;&gt; df bank account_money 0 1 100 1 1 300 2 2 500 </code></pre> <p>For some reason, we want to do a rolling window sum:</p> <pre><code>def transform(grp_obj): result = grp_obj['account_money'].rolling(2, min_periods=0).sum() return result transformed = df.groupby('bank').pipe(transform) &gt;&gt;&gt; transformed bank 1 0 100.0 1 400.0 2 2 500.0 Name: account_money, dtype: float64 </code></pre> <p>How can I join the result back to the original df? Preferably with <code>df.join</code>?</p> <p>It should be possible since we have the bank ID, the original row number and then the desired window_sum.</p>
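One way to line the result back up with the original rows (a sketch using the example frame above): `groupby().rolling()` returns a MultiIndex of (group key, original row label), so dropping the group level restores the original index and the Series can be joined or assigned directly:

```python
import pandas as pd

df = pd.DataFrame({'bank': [1, 1, 2], 'account_money': [100, 300, 500]})

# Drop the 'bank' level that groupby added; what remains is df's own index.
rolled = (
    df.groupby('bank')['account_money']
      .rolling(2, min_periods=0)
      .sum()
      .reset_index(level=0, drop=True)
)

out = df.join(rolled.rename('window_sum'))
print(out)
```

The `rename` is only needed because joining an unnamed or same-named Series would collide with the existing `account_money` column.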
<python><pandas>
2023-08-23 13:54:24
2
304
nammerkage
76,961,921
1,710,131
VSCode Python debugging with context manager
<p>I have a code block such as the following:</p> <pre class="lang-py prettyprint-override"><code>with Validator.start_server(): return manager(foo=bar) </code></pre> <p>Where <code>Validator.start_server()</code> is set up as follows:</p> <pre class="lang-py prettyprint-override"><code>class Validator: @staticmethod def start_server(): @contextmanager def empty(bool): yield bool return empty(True) </code></pre> <p>I am trying to set a breakpoint on the <code>return manager(foo=bar)</code> line and step into the <code>manager</code> function. However, VSCode completely ignores that breakpoint and continues running. During the program execution, I can see in the debugger that there are threads/subprocesses running which I'm assuming are related to the context manager.</p> <p>This is what my <code>launch.json</code> looks like:</p> <pre class="lang-json prettyprint-override"><code>{ &quot;version&quot;: &quot;0.2.0&quot;, &quot;configurations&quot;: [ { &quot;name&quot;: &quot;context-manager-debug&quot;, &quot;type&quot;: &quot;python&quot;, &quot;request&quot;: &quot;launch&quot;, &quot;program&quot;: &quot;/path/to/__main__.py&quot;, &quot;console&quot;: &quot;integratedTerminal&quot;, &quot;justMyCode&quot;: true, &quot;subProcess&quot;: true } ] } </code></pre>
<python><vscode-debugger><with-statement><contextmanager>
2023-08-23 13:40:56
1
934
Ko Ga
76,961,919
1,629,704
typed async decorator in combination with asyncio.run
<p>Have a look at the code below; I think it explains it all. Is making the return type <code>Callable[P, Awaitable[R]]</code> the way to go, or should I comply with what mypy is telling me?</p> <pre class="lang-py prettyprint-override"><code>from typing import Awaitable, Callable, ParamSpec, TypeVar import asyncio P = ParamSpec(&quot;P&quot;) R = TypeVar(&quot;R&quot;) def a_decorator(func: Callable[P, Awaitable[R]]) -&gt; Callable[P, Awaitable[R]]: async def _wrapper(*args: P.args, **kwargs: P.kwargs) -&gt; R: print(&quot;wrapped&quot;) return await func(*args, **kwargs) return _wrapper @a_decorator async def just_a_function(value1: int) -&gt; bool: print(value1) return isinstance(value1, int) # mypy raises an error here: # Argument 1 to &quot;run&quot; has incompatible type &quot;Awaitable[bool]&quot;; expected &quot;Coroutine[Any, Any, &lt;nothing&gt;]&quot; [arg-type] asyncio.run(just_a_function(10)) </code></pre>
<python><python-asyncio><typing>
2023-08-23 13:40:10
0
1,161
sanders
76,961,817
676,192
Recursively find points with a grid-aligned tangent on spline
<p>I am trying to find the points on a spline where the tangent is perfectly vertical or horizontal, using Python and svgpathtools.</p> <p>An approach solving the equations isn't giving good results, so I am attacking the problem by approximation.</p> <p>Right now my approach is as follows:</p> <pre><code>def approx_flex_points_recursive(spline, start=0, end=1, precision=10000): number_of_steps = precision step = 1 / (number_of_steps - 1) p = start e = 10 / number_of_steps relevant_points = [] while p &lt;= end: unit = spline.unit_tangent(p) if (abs(unit.real) &lt; e) or (abs(unit.imag) &lt; e): relevant_points.append([ p, unit.real, unit.imag]) p += step return relevant_points </code></pre> <p>Which - for a high enough precision - gives me good points. I'd like to improve this by recursively looking for good enough points and then iterating with higher precision in their surroundings.</p> <p>Before I go through the trouble of hand-rolling all that, I was wondering: is there something that provides a battle-tested approach to this? I was poking around scikit optimization but would be grateful for anything that helps narrow the search and set me on the right way.</p>
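One battle-tested route is SciPy's bracketing root finders: the tangent is horizontal exactly where the imaginary part of the curve's derivative crosses zero (and vertical where the real part does), so a coarse scan for sign changes plus `brentq` on each bracket replaces the hand-rolled recursion. A sketch on a hand-built cubic Bézier (the control points are made up for the demo; with svgpathtools you would evaluate the segment's derivative instead):

```python
import numpy as np
from scipy.optimize import brentq

# Control points of a cubic Bezier as complex x + yj (a made-up symmetric arch)
p0, p1, p2, p3 = 0 + 0j, 1 + 2j, 2 + 2j, 3 + 0j


def derivative(t):
    """B'(t) of the cubic Bezier, from its control points."""
    return (3 * (1 - t) ** 2 * (p1 - p0)
            + 6 * (1 - t) * t * (p2 - p1)
            + 3 * t ** 2 * (p3 - p2))


def horizontal_tangents(n_coarse=64):
    """Coarse scan for sign changes of imag(B'(t)), each refined with brentq."""
    ts = np.linspace(0, 1, n_coarse)
    vals = np.imag([derivative(t) for t in ts])
    roots = []
    for a, b, fa, fb in zip(ts[:-1], ts[1:], vals[:-1], vals[1:]):
        if fa == 0:
            roots.append(float(a))
        elif fa * fb < 0:  # a sign change brackets exactly one crossing
            roots.append(brentq(lambda t: np.imag(derivative(t)), a, b))
    return roots


print(horizontal_tangents())  # this symmetric arch has one, at t = 0.5
```

Vertical tangents are the same scan on `np.real`. The coarse grid only has to be fine enough to separate neighboring crossings; `brentq` then delivers near machine precision without the ever-finer uniform sweeps.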
<python><bezier>
2023-08-23 13:26:55
0
5,252
simone
76,961,739
8,670,757
Create Specific 365 Day Years to Compare Data
<p>I have what seems like it should be a trivial exercise but I can't seem to crack it. I have the following dataframe:</p> <pre><code>import pandas as pd import numpy as np np.random.seed(5) size = 1000 d = {'Farmer':np.random.choice( ['Bob','Suyzn','John'], size), 'Commodity': np.random.choice( ['Corn','Wheat','Soy'], size), 'Date':np.random.choice( pd.date_range('1/1/2018','12/12/2022', freq='D'), size), 'Bushels': np.random.randint(20,100, size=(size)) } df = pd.DataFrame(d) </code></pre> <p>I use pandas to look at the different farmers &amp; the amount of the different commodities they deliver. I want the ability to look at custom years where I can provide the start_month &amp; start_day to create a new column called 'Interval' that shows the data based on what I choose. This is the function that works for that:</p> <pre><code>def calculate_intervals(df, start_month, start_day): &quot;&quot;&quot;Adds column called Interval to df. It Args: df (dataframe): Dataframe to pass that's been filtered to what you want. start_month (int): Month to start. start_day (int): Day to start. 
Returns: dataframe: Dataframe with 'Interval' added &quot;&quot;&quot; # Filter the DataFrame filtered_df = df filtered_df['Year'] = filtered_df['Date'].dt.year # Define the date ranges for each interval intervals = [] for year in filtered_df['Year'].unique(): start_date = pd.to_datetime(f&quot;{year}-{start_month}-{start_day}&quot;) end_date = pd.to_datetime(f&quot;{year+1}-{start_month}-{start_day}&quot;) - pd.DateOffset(days=1) intervals.append((start_date, end_date)) # Create a function to assign the interval label to each row def assign_interval(row): for c, (start_date, end_date) in enumerate(intervals): if start_date &lt;= row['Date'] &lt;= end_date: return f&quot;{start_date.date()} - {end_date.date()}&quot; return None # Return None if the row's date is not within any interval # Apply the interval assignment function to each row filtered_df['Interval'] = filtered_df.apply(assign_interval, axis=1) return filtered_df calculate_intervals(df, start_month=7, start_day=6) </code></pre> <p>My goal is to take it one step further. I want to create a graph that takes the start_month &amp; start_day &amp; uses that as position 0 on the y-axis. The x-axis would show the next 365 days all the way out to the end. In this example, the x-axis would start at July 6 &amp; go through July 5th of the following year. The y-axis would be the cumulative sum of bushels sold starting on July 6 through July 5th of the next year. Each custom year would be it's own line on the graph.</p> <p>I have done similar exercises to graph cumulative sum starting on Jan 1 - Dec 31 of the same year, but can't figure out how to do it with a custom start date. My goal is to get a graph that looks something like this: <a href="https://i.sstatic.net/B3NWI.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/B3NWI.jpg" alt="enter image description here" /></a></p> <p>Any suggestions? Thanks in advance!</p>
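A vectorized alternative to the row-wise `apply` (a sketch — the column names `CustomYear`/`DayOfCustomYear` are made up here): compute each row's custom-year start date directly, then the day offset into that year is the x-axis position and a groupby-cumsum gives one line per custom year. Shown on a tiny deterministic frame rather than the random one above:

```python
import pandas as pd


def add_custom_year(df, start_month, start_day):
    """Label each row with its custom year (the year the interval starts in)
    and the day offset into that year -- the x-axis position for the plot."""
    out = df.sort_values('Date').copy()
    starts = pd.to_datetime(pd.DataFrame({
        'year': out['Date'].dt.year, 'month': start_month, 'day': start_day}))
    # dates before this calendar year's start belong to the previous custom year
    year_start = starts.where(out['Date'] >= starts,
                              starts - pd.DateOffset(years=1))
    out['CustomYear'] = year_start.dt.year
    out['DayOfCustomYear'] = (out['Date'] - year_start).dt.days
    out['CumBushels'] = out.groupby('CustomYear')['Bushels'].cumsum()
    return out


demo = pd.DataFrame({
    'Date': pd.to_datetime(['2020-07-05', '2020-07-06', '2020-12-01',
                            '2021-07-05', '2021-07-06']),
    'Bushels': [10, 20, 30, 40, 50],
})
print(add_custom_year(demo, start_month=7, start_day=6))
```

Plotting is then one trace per `CustomYear` with `x=DayOfCustomYear` and `y=CumBushels`; add `Farmer`/`Commodity` to the groupby if each line should also be split by those.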
<python><pandas><time-series><plotly><line-plot>
2023-08-23 13:17:53
2
341
keg5038
76,961,723
22,221,987
QChart padding is too big which causes charts compression
<p>I create some charts and filling main window with them.</p> <p>The main thing is that i can't decrease the inner (white colored) padding of the <code>QChart</code> and this causes charts compression and Y axis values disappearing. <a href="https://i.sstatic.net/b6Ist.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/b6Ist.png" alt="" /></a></p> <p>Here is the code as a minimum reproducible example:</p> <pre><code>import sys from PySide6.QtCore import Qt from PySide6.QtWidgets import QApplication, QMainWindow, QVBoxLayout, QWidget from PySide6 import QtCharts, QtGui import random class Chart(QtCharts.QChart): def __init__(self): super().__init__() self.create_series() self.createDefaultAxes() self.legend().setVisible(True) self.legend().setAlignment(Qt.AlignLeft) self.layout().setContentsMargins(10, 10, 10, 10) def create_series(self, s=3): for j in range(s): series = QtCharts.QLineSeries() if j == 0: for i in range(150): if i % 10 == 0: series.append(i, random.randint(0, 10)) else: series.append(i, 2) self.addSeries(series) class ChartView(QtCharts.QChartView): def __init__(self, chart): super().__init__() self.setChart(chart) self.setRenderHint(QtGui.QPainter.Antialiasing) self.setRubberBand(QtCharts.QChartView.RectangleRubberBand) self.setStyleSheet(&quot;background-color: yellow;&quot;) class MainWindow(QMainWindow): def __init__(self): super().__init__() self.central_widget = QWidget() self.central_widget.setObjectName('central_widget') self.layout = QVBoxLayout(self.central_widget) self.layout.setSpacing(0) self.layout.setContentsMargins(20, 20, 20, 20) for _ in range(6): chart_view = self.create_chart_view() self.layout.addWidget(chart_view) self.setStyleSheet('#central_widget{background: red}') self.setCentralWidget(self.central_widget) def create_chart_view(self): chart = Chart() chart_view = ChartView(chart=chart) return chart_view if __name__ == &quot;__main__&quot;: app = QApplication(sys.argv) window = MainWindow() window.show() 
sys.exit(app.exec()) </code></pre> <p>I've found how to setup red and yellow areas. But can't figure out which widget can handle white area...</p>
<python><python-3.x><qt><charts><pyqt>
2023-08-23 13:15:04
1
309
Mika
76,961,679
1,379,826
Python dash: how to combine user input with other text on same line
<p>I'm trying to combine some text with user-defined text in the same row. I would like to keep the &quot;some text&quot; out of the callback definition for organization. How can I combine some strings of text with user-defined input strings? An example is below; note how <code>html.Div()</code> and <code>html.P()</code> render text on two different lines.</p> <pre><code>import dash import dash_html_components as html import dash_core_components as dcc from dash.dependencies import Input, Output app = dash.Dash() app.layout = html.Div([ dcc.Input(id='user-input', type='text', placeholder='Enter a string...'), html.Br(), html.Div([ html.P(&quot;Here's an additional paragraph for context:&quot;), ## line 1 html.Div(id='output-text') ## line 2 ]) ]) @app.callback( Output('output-text', 'children'), [Input('user-input', 'value')] ) def update_text(input_value): return f'This is the text: {input_value}' app.run_server(debug=True) </code></pre> <p>Returns:<br /> <a href="https://i.sstatic.net/kXNFc.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/kXNFc.png" alt="enter image description here" /></a></p> <p>Many thanks in advance</p>
<python><html><python-3.x><plotly-dash>
2023-08-23 13:10:36
1
1,959
Sos
76,961,667
8,548,828
Combining bokeh formatter and prefixed.Float
<p>I would like to format my axis tickers with <code>prefixed.Float</code> output. I want it to be formatted as I would do when creating a string: <code>f'{prefixed.Float(value):.2h}'</code></p> <p>There is the <code>FuncTickFormatter</code> in bokeh but that allows me to embed JS, not a python function. <code>PrintfTickFormatter(format=&quot;%4.1e&quot;)</code> comes close to what I want but still uses the ordinary printf formatting string and can't use <code>prefixed.Float</code>.</p> <p>How can I give the bokeh axis formatter a formatter which calls <code>prefixed.Float</code>?</p>
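Bokeh tick formatters run in the browser, so a Python callable like `prefixed.Float` can't be invoked from one — `FuncTickFormatter` (renamed `CustomJSTickFormatter` in Bokeh 3.x) takes JavaScript, which means porting the prefix logic to JS. As a reference for that port, here is the SI-prefix logic in plain Python (an approximation of `prefixed`'s `.2h` output — the exact spacing and prefix set are assumptions):

```python
import math

# SI prefixes by power-of-ten exponent (a subset; extend as needed)
_PREFIXES = {-12: "p", -9: "n", -6: "u", -3: "m",
             0: "", 3: "k", 6: "M", 9: "G", 12: "T"}


def si_format(value, digits=2):
    """Format a number with an SI prefix, e.g. 1500 -> '1.50 k'."""
    if value == 0:
        return f"{0:.{digits}f}"
    exp3 = int(math.floor(math.log10(abs(value)) / 3)) * 3
    exp3 = max(-12, min(12, exp3))  # clamp to the prefixes we know
    return f"{value / 10 ** exp3:.{digits}f} {_PREFIXES[exp3]}".rstrip()


print(si_format(1500))   # 1.50 k
print(si_format(0.002))  # 2.00 m
```

In the JS port, the same lookup table and `floor(log10(|x|)/3)` computation go into the formatter's `code` string, with `tick` as the input value.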
<python><bokeh>
2023-08-23 13:08:36
1
3,266
Tarick Welling
76,961,331
7,648,377
Can't lint python files with autopep8
<p>I have the following error</p> <pre><code>Traceback (most recent call last): File &quot;/Library/Frameworks/Python.framework/Versions/3.11/bin/autopep8&quot;, line 8, in &lt;module&gt; sys.exit(main()) ^^^^^^ File &quot;/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/autopep8.py&quot;, line 4528, in main results = fix_multiple_files(args.files, args, sys.stdout) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File &quot;/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/autopep8.py&quot;, line 4423, in fix_multiple_files ret = _fix_file((name, options, output)) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File &quot;/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/autopep8.py&quot;, line 4393, in _fix_file return fix_file(*parameters) ^^^^^^^^^^^^^^^^^^^^^ File &quot;/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/autopep8.py&quot;, line 3589, in fix_file fixed_source = fix_lines(fixed_source, options, filename=filename) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File &quot;/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/autopep8.py&quot;, line 3569, in fix_lines fixed_source = fix.fix() ^^^^^^^^^ File &quot;/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/autopep8.py&quot;, line 613, in fix self._fix_source(filter_results(source=''.join(self.source), File &quot;/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/autopep8.py&quot;, line 557, in _fix_source modified_lines = fix(result) ^^^^^^^^^^^ File &quot;/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/autopep8.py&quot;, line 761, in fix_e225 pycodestyle.missing_whitespace_around_operator(fixed, ts)) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ AttributeError: module 'pycodestyle' has no attribute 'missing_whitespace_around_operator'. 
Did you mean: 'whitespace_around_operator'? </code></pre> <p>This is the version of autopep8</p> <pre><code>autopep8 --version autopep8 2.0.2 (pycodestyle: 2.11.0) </code></pre>
<python><python-3.x><autopep8><pycodestyle>
2023-08-23 12:23:19
0
2,762
Andrei Lupuleasa
76,961,301
4,507,231
What is a python field relative to other Python-based language constructs?
<p>I'm using the Python OpenBabel package (<a href="https://openbabel.org/docs/current/UseTheLibrary/PythonDoc.html" rel="nofollow noreferrer">https://openbabel.org/docs/current/UseTheLibrary/PythonDoc.html</a>) and stepping through my code using the PyCharm IDE. I'm running into a problem, but I must understand the terminology used in PyCharm and the Python Language before I can fix the problem. I'm finding some of the terminology confusing.</p> <p>This is the code. On the website, this construct works, but when I tried to perform this operation, it doesn't work i.e., the <code>mol</code> instance of class <code>OBMol</code> does not change, even though it returns <code>True</code> (the API docs state that the mol obj changes):</p> <pre><code>from openbabel import openbabel as ob pdbqtFile = &quot;a string path to a pdbqt file&quot; obConversion = ob.OBConversion() obConversion.SetInAndOutFormats(&quot;pdbqt&quot;, &quot;pdb&quot;) mol = ob.OBMol() obConversion.ReadFile(mol, pdbqtFilePath) # This is where I'm confused value = mol.AddHydrogens() </code></pre> <p>This is where I'm confused:</p> <p>In the PyCharm drop-down code suggestion, <code>mol.AddHydrogens</code> has a lower-case <strong>f</strong> next to it. Looking this up, it's called a <strong>field</strong>. But if I <code>print(mol.AddHydrogens)</code> I get:</p> <p>&lt;bound method OBMol_AddHydrogens of &lt;openbabel.openbabel.OBMol; proxy of &lt;Swig Object of type 'OpenBabel::OBMol *' at 0x0000022DDB9D5B40&gt; &gt;&gt;</p> <p>Notice the word <strong>method</strong> in that output. I understand what that is, but it's been referred to as a field in the IDE. If this is a method, and it states it is, why does PyCharm not have a <strong>m</strong> in the drop-down suggestions? What is a field, and how does it differ from a method? I have a feeling, the answer will have something to do with this method being <strong>bound</strong>.</p>
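A likely explanation (an assumption about PyCharm's heuristics, since its badge logic isn't documented): SWIG builds the `OBMol` wrapper's methods at runtime, so PyCharm's static indexer can only see a generic attribute and badges it with the lowercase "f" for field; at runtime Python resolves the very same attribute to a bound method — a function with the instance pre-attached as `self`, which is exactly what the `print` output shows. A small stdlib demo with a stand-in class (`OBMolDemo` is hypothetical, not the real SWIG class):

```python
import inspect


class OBMolDemo:
    """Hypothetical stand-in for the SWIG-generated OBMol class."""
    def AddHydrogens(self):
        return True


mol = OBMolDemo()

attr = mol.AddHydrogens  # access WITHOUT parentheses: no call happens yet
print(attr)                                      # <bound method OBMolDemo.AddHydrogens of ...>
print(inspect.ismethod(attr))                    # True: a method at runtime
print(inspect.ismethod(OBMolDemo.AddHydrogens))  # False: on the class it's a plain function
print(attr())                                    # True -- the parentheses perform the call
```

So "field" versus "method" here is an IDE-inference artifact, not two different language constructs: the object is a bound method either way, and calling it (with `()`) is what matters.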
<python><openbabel>
2023-08-23 12:18:45
1
1,177
Anthony Nash
76,961,298
4,505,544
djongo(django + mongo) trouble with inspectdb, unable to inspectdb and import model
<p>I am newbie to python and django, <br> Using Django version: 4.1.10 and python version : 3.11.4 <br> I have existing mongodb database so I am trying complete migration and import models in djongo (django + mongo) with inspectdb. But I run this command <br> <code>python manage.py inspectdb</code> <br> I keep getting following err,</p> <pre><code> PS C:\Users\del\Downloads\splc\jango\splc1&gt; python manage.py inspectdb Traceback (most recent call last): File &quot;C:\Users\del\Downloads\splc\jango\splc1\manage.py&quot;, line 22, in &lt;module&gt; main() File &quot;C:\Users\del\Downloads\splc\jango\splc1\manage.py&quot;, line 18, in main execute_from_command_line(sys.argv) File &quot;C:\Users\del\AppData\Local\Programs\Python\Python311\Lib\site-packages\django\core\management\__init__.py&quot;, line 446, in execute_from_command_line utility.execute() File &quot;C:\Users\del\AppData\Local\Programs\Python\Python311\Lib\site-packages\django\core\management\__init__.py&quot;, line 420, in execute django.setup() File &quot;C:\Users\del\AppData\Local\Programs\Python\Python311\Lib\site-packages\django\__init__.py&quot;, line 24, in setup apps.populate(settings.INSTALLED_APPS) File &quot;C:\Users\del\AppData\Local\Programs\Python\Python311\Lib\site-packages\django\apps\registry.py&quot;, line 116, in populate app_config.import_models() File &quot;C:\Users\del\AppData\Local\Programs\Python\Python311\Lib\site-packages\django\apps\config.py&quot;, line 269, in import_models self.models_module = import_module(models_module_name) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File &quot;C:\Users\del\AppData\Local\Programs\Python\Python311\Lib\importlib\__init__.py&quot;, line 126, in import_module return _bootstrap._gcd_import(name[level:], package, level) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File &quot;&lt;frozen importlib._bootstrap&gt;&quot;, line 1204, in _gcd_import File &quot;&lt;frozen importlib._bootstrap&gt;&quot;, line 1176, in _find_and_load File &quot;&lt;frozen 
importlib._bootstrap&gt;&quot;, line 1147, in _find_and_load_unlocked File &quot;&lt;frozen importlib._bootstrap&gt;&quot;, line 690, in _load_unlocked File &quot;&lt;frozen importlib._bootstrap_external&gt;&quot;, line 936, in exec_module File &quot;&lt;frozen importlib._bootstrap_external&gt;&quot;, line 1074, in get_code File &quot;&lt;frozen importlib._bootstrap_external&gt;&quot;, line 1004, in source_to_code File &quot;&lt;frozen importlib._bootstrap&gt;&quot;, line 241, in _call_with_frames_removed SyntaxError: source code string cannot contain null bytes </code></pre> <p>also when I run<br> <code>python manage.py inspectdb items &gt; splc1app/models.py</code> <br>I keep getting an error which is mentioned <a href="https://github.com/doableware/djongo/issues/299" rel="nofollow noreferrer">here</a> as well.<br> What am I missing here? Any help is really appreciated.</p>
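The `SyntaxError: source code string cannot contain null bytes` means some importable `.py` file in the project contains NUL bytes — commonly a `models.py` or `__init__.py` saved in UTF-16 by a Windows editor. A small stdlib sketch to locate the offending file (point it at your project root; the throwaway directory below only demonstrates it):

```python
from pathlib import Path
import tempfile


def find_null_byte_files(root):
    """Return every .py file under root whose raw bytes contain NUL."""
    return [p for p in Path(root).rglob('*.py') if b'\x00' in p.read_bytes()]


# Throwaway demo directory standing in for the Django project root:
tmp = tempfile.mkdtemp()
Path(tmp, 'ok.py').write_bytes(b'x = 1\n')
Path(tmp, 'broken.py').write_bytes(b'x\x00 = 1\n')
print([p.name for p in find_null_byte_files(tmp)])  # ['broken.py']
```

Re-saving whatever it flags as UTF-8 (without a BOM) should let `inspectdb` get past the import stage.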
<python><django><mongodb><django-migrations><djongo>
2023-08-23 12:18:26
0
502
dEL
76,961,247
11,154,841
exifread.process_file() throws "KeyError: 'hdlr'" and "NoParser: hdlr Output is truncated." for a HEIC image even though exifread can deal with HEIC
<p>Taking the guide from <a href="https://pypi.org/project/ExifRead/" rel="nofollow noreferrer">ExifRead 3.0.0</a> which says that it can deal with HEIC images, and with the examples of <a href="https://stackoverflow.com/questions/54395735/how-to-work-with-heic-image-file-types-in-python/">How to work with HEIC image file types in Python</a>, I tried to read metadata of a HEIC file:</p> <pre class="lang-py prettyprint-override"><code>p = Path(r'C:\Users\Admin\Pictures\Apple media files\gallery\202204') l=list(p.glob('**/*.HEIC')) print(l[0]) # Open image file for reading (must be in binary mode) f = open(l[0], 'rb') # Return Exif tags # tags = exifread.process_file(f) # tags exifread.process_file(f) </code></pre> <pre class="lang-bash prettyprint-override"><code>C:\Users\Admin\Pictures\Apple media files\gallery\202204\IMG_1234.HEIC --------------------------------------------------------------------------- KeyError Traceback (most recent call last) c:\Users\Admin\anaconda3\envs\scrape\lib\site-packages\exifread\heic.py in get_parser(self, box) 170 try: --&gt; 171 return defs[box.name] 172 except (IndexError, KeyError) as err: KeyError: 'hdlr' The above exception was the direct cause of the following exception: NoParser Traceback (most recent call last) in 9 # tags 10 ---&gt; 11 exifread.process_file(f) c:\Users\Admin\anaconda3\envs\scrape\lib\site-packages\exifread\__init__.py in process_file(fh, stop_tag, details, strict, debug, truncate_tags, auto_seek) 135 136 try: --&gt; 137 offset, endian, fake_exif = _determine_type(fh) 138 except ExifNotFound as err: 139 logger.warning(err) c:\Users\Admin\anaconda3\envs\scrape\lib\site-packages\exifread\__init__.py in _determine_type(fh) ... --&gt; 173 raise NoParser(box.name) from err 174 175 def parse_box(self, box: Box) -&gt; Box: NoParser: hdlr Output is truncated. View as a scrollable element or open in a text editor. Adjust cell output settings... 
</code></pre> <p>How can I reach the metadata of HEIC files with the &quot;exifread&quot; Python module?</p>
<python><python-3.x><image><metadata><exif>
2023-08-23 12:12:44
1
9,916
questionto42
76,961,133
15,399,131
Differences between cubic 2D interpolation methods in scipy and pytorch
<p>Both SciPy and PyTorch have multiple ways of interpolating 2D images. However, for cubic interpolation they don't appear to be doing the same thing. Why are they so different?</p> <p>So far I've found that the following seem to be doing the same thing at <code>order=1</code>/<code>mode=linear</code>/<code>mode=bilinear</code> (up to floating point errors):</p> <ul> <li><code>scipy.interpolate.interpn</code></li> <li><code>scipy.ndimage.map_coordinates</code></li> <li><code>torch.nn.functional.interpolate</code> with <code>align_corners=True</code></li> <li><code>torch.nn.functional.grid_sample</code> with <code>align_corners=True</code></li> </ul> <p>However, at order 3 all of them are different from each other.</p> <p>As there are so many different functions I will accept partial answers that explain differences between some of them.</p> <p>Here's an (attempted) minimal example (still quite long). The example is simply using the interpolation methods to upsample a random image.</p> <pre class="lang-py prettyprint-override"><code># Computations import numpy as np import numpy.typing import torch from scipy.interpolate import interpn from scipy.ndimage import map_coordinates from torch.nn.functional import interpolate, grid_sample # Visualizations import matplotlib.pyplot as plt from matplotlib.colors import LogNorm def get_interpolated_values( original_size: tuple[int, int], sample_size: tuple[int, int], order: int, ) -&gt; tuple[list[np.typing.NDArray], list[str]]: '''Returns list of interpolated images with labels for which interpolation method was used.''' # Initialize source values for interpolation values = np.random.rand(*original_size) order2str = [ 'nearest', 'linear', None, 'cubic', ] # Initialize data collection labels = [] interpolated_values = [] # Scipy interpn labels.append('scipy.interpn') interpolated_values.append( interpn( points=( np.mgrid[0 : original_size[0]], np.mgrid[0 : original_size[1]], ), values=values, xi=np.stack(np.mgrid[ 0 : 
original_size[0] - 1 : sample_size[0] * 1j, 0 : original_size[1] - 1 : sample_size[1] * 1j, ], axis=-1), method=order2str[order], bounds_error=True, ) ) # Scipy map_coordinates labels.append('scipy.map_coordinates') interpolated_values.append( map_coordinates( input=values, coordinates=np.mgrid[ 0 : original_size[0] - 1 : sample_size[0] * 1j, 0 : original_size[1] - 1 : sample_size[1] * 1j, ], order=order, mode='constant', ) ) # Torch interpolate with align_corners value_tensor = torch.from_numpy(values)[None, None, ...] labels.append('torch.interpolate') interpolated_values.append( interpolate( input=value_tensor, size=sample_size, align_corners=True, mode=&quot;bi&quot; + order2str[order], ).squeeze().numpy() ) # Torch grid sample with corners labels.append('torch.grid_sample') interpolated_values.append( grid_sample( input=value_tensor, grid=torch.from_numpy( np.mgrid[ -1 : 1 : sample_size[0] * 1j, -1 : 1 : sample_size[1] * 1j, ], ).moveaxis(0, -1).unsqueeze(0).flip(-1), padding_mode='zeros', mode=&quot;bi&quot; + order2str[order], align_corners=True, ).squeeze().numpy() ) return interpolated_values, labels def visualize(interpolated_values, labels): # Visualize each interpolated result fig, ax_row = plt.subplots(1, len(interpolated_values)) for i_ax, ax in enumerate(ax_row): ax.set_title(labels[i_ax]) im = ax.imshow(interpolated_values[i_ax], vmin=0, vmax=1) plt.colorbar(im) # Visualise differences between methods fig, ax_grid = plt.subplots( len(interpolated_values), len(interpolated_values), ) fig.suptitle('Interpolation differences') for i_ax, ax_row in enumerate(ax_grid): for j_ax, ax in enumerate(ax_row): if i_ax == j_ax: # Add label in diagonal ax.text( *((n - 1) / 2 for n in interpolated_values[i_ax].shape), s=labels[i_ax], ha='center', va='center', ) # Plot difference matrix with colorbar diff = interpolated_values[i_ax] - interpolated_values[j_ax] im = ax.imshow(np.abs(diff)) plt.colorbar(im) def main(): interpolated_values, labels = 
get_interpolated_values( original_size = (4, 4), sample_size = (5, 5), order = 3, ) visualize(interpolated_values, labels) plt.show() if __name__ == &quot;__main__&quot;: main() </code></pre> <p>Here's the visualized absolute differences at order 3: <a href="https://i.sstatic.net/TFWjA.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/TFWjA.png" alt="enter image description here" /></a></p>
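A useful control experiment (SciPy only, a sketch): at linear order the two SciPy routines agree on common sample points, which localizes the order-3 disagreement to the cubic constructions themselves — `map_coordinates` prefilters the data and evaluates an interpolating B-spline with its own boundary handling, `interpn`'s `'cubic'` fits a tensor spline with different boundary conditions, and PyTorch's `'bicubic'` is (to my understanding) the Keys cubic-convolution kernel, so at order 3 these are genuinely different functions even though all of them collapse to the same bilinear formula at order 1:

```python
import numpy as np
from scipy.interpolate import interpn
from scipy.ndimage import map_coordinates

rng = np.random.default_rng(0)
values = rng.random((4, 4))
grid = (np.arange(4.0), np.arange(4.0))

# Common sample locations covering the grid.
coords = np.mgrid[0:3:5j, 0:3:5j]              # shape (2, 5, 5)
xi = np.stack(coords, axis=-1).reshape(-1, 2)  # shape (25, 2)

lin_interpn = interpn(grid, values, xi, method='linear').reshape(5, 5)
lin_mapc = map_coordinates(values, coords, order=1)
print(np.abs(lin_interpn - lin_mapc).max())    # ~0: both are plain bilinear

try:
    cub_interpn = interpn(grid, values, xi, method='cubic').reshape(5, 5)
    cub_mapc = map_coordinates(values, coords, order=3)
    # generally nonzero: different cubic constructions
    print(np.abs(cub_interpn - cub_mapc).max())
except ValueError:
    print("interpn's 'cubic' method needs a newer SciPy")
```

So the order-1 agreement in the question is expected (there is only one bilinear interpolant), while at order 3 each library is free to pick a different cubic — spline versus convolution, and different boundary conditions — and they do.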
<python><numpy><pytorch><scipy><interpolation>
2023-08-23 11:57:31
1
642
VRehnberg
76,961,067
4,170,720
Python: SSH to multiple remote machines and execute linux commands
<p>I have to login to a remote machine using SSH. The SSH prompt is different for the first login and subsequent logins. The first time, I have to type &quot;Yes&quot;. Then it will get stored in the known hosts so that subsequent logins will not ask this &quot;Yes&quot;.</p> <p>After the first login, I need to execute a command that will do a reboot. Once the device comes up after reboot after 1 minute, I have to try to login again to execute the remaining 5 commands. Some commands are not generic linux commands.</p> <p>I'm giving here the manual execution for a single remote server. The same thing I have to do for 7 remote machines.</p> <pre><code>PS C:\Users\user1&gt; ssh xyz@20.x.x.181 The authenticity of host '20.x.x.181 (20.x.x.181)' can't be established. ECDSA key fingerprint is SHA256:R++eRV4YRmKsfMUr4BZ+Hx7gQmwW/dXeFsi4O6SybII. This key is not known by any other names Are you sure you want to continue connecting (yes/no/[fingerprint])? yes Warning: Permanently added '20.x.x.181' (ECDSA) to the list of known hosts. viptela 20.9.999-sdwanaas-20.9-305 (xyz@20.x.x.181) Password: Last login: Wed Aug 23 11:23:09 UTC 2023 from 128.x.x.180 on ssh Welcome to Viptela CLI User xyz last logged in 2023-08-23T04:06:20.207088+00:00, to ip-10-0-1-94, from 44.224.28.225 using netconf-ssh xyz connected from 128.x.x.180 using ssh on vsmart-1 vsmart-1# tools support touch_testbed File is touched (please login again) Connection to 20.x.x.181 closed by remote host. Connection to 20.x.x.181 closed. 
PS C:\Users\user1&gt; PS C:\Users\user1&gt; ssh xyz@20.x.x.181 viptela 20.9.999-sdwanaas-20.9-305 (xyz@20.x.x.181) Password: Last login: Wed Aug 23 11:26:03 UTC 2023 from 128.x.x.180 on pts/0 Welcome to Viptela CLI User xyz last logged in 2023-08-23T11:26:04.046308+00:00, to ip-10-0-1-96, from 128.x.x.180 using cli-ssh xyz connected from 128.x.x.180 using ssh on vsmart-1 vsmart-1# tools internal ip_netns options &quot;exec default bash&quot; vsmart-1:~# touch /etc/viptela/testbed vsmart-1:~# touch /boot/testbed vsmart-1:~# ls -l /etc/viptela/testbed; -rw------- 1 root root 0 Aug 23 11:29 /etc/viptela/testbed vsmart-1:~# ls -l /boot/testbed; -rwx------ 1 root root 0 Aug 23 11:30 /boot/testbed vsmart-1:~# </code></pre> <p><strong>What I tried:</strong></p> <pre><code>import paramiko hostname = [] list_of_ip = &quot;20.x.x.181, 20.x.x.182, 20.x.x.183, 20.x.x.184, 20.x.x.185, 20.x.x.186, 20.x.x.187&quot; hostname = [ip.strip() for ip in list_of_ip.split(&quot;,&quot;)] # strip the spaces left after each comma username = &quot;xyz&quot; password = &quot;abcdefghijk123456789&quot; cmd1=&quot;tools support touch_testbed&quot; cmd2='tools internal ip_netns options &quot;exec default bash&quot;' cmd3=&quot;touch /etc/viptela/testbed&quot; cmd4=&quot;touch /boot/testbed&quot; cmd5=&quot;ls -l /etc/viptela/testbed;&quot; cmd6=&quot;ls -l /boot/testbed;&quot; #initialize the SSH client client = paramiko.SSHClient() #add to known hosts client.set_missing_host_key_policy(paramiko.AutoAddPolicy()) #Login first time to execute the first command for ip in hostname: print(&quot;ssh xyz@&quot;+ip) try: client.connect(hostname=ip, username=username, password=password) #device will go for reboot after this first command stdin, stdout, stderr = client.exec_command(cmd1) #print the output of the command to check if it is successful except: print(&quot;[!] 
Cannot connect to the SSH Server&quot;) exit() #Login second time to execute the remaining 5 commands for ip in hostname: print(&quot;ssh xyz@&quot;+ip) try: client.connect(hostname=ip, username=username, password=password) stdin, stdout, stderr = client.exec_command(cmd2) stdin, stdout, stderr = client.exec_command(cmd3) stdin, stdout, stderr = client.exec_command(cmd4) stdin, stdout, stderr = client.exec_command(cmd5) stdin, stdout, stderr = client.exec_command(cmd6) #print the output of the command to check if it is successful except: print(&quot;[!] Cannot connect to the SSH Server&quot;) exit() </code></pre> <p><strong>Output with error:</strong></p> <pre><code>ssh xyz@20.x.x.181 ssh xyz@20.x.x.182 ssh xyz@20.x.x.183 ssh xyz@20.x.x.184 ssh xyz@20.x.x.185 ssh xyz@20.x.x.186 ssh xyz@20.x.x.187 Exception ignored in: &lt;function BufferedFile.__del__ at 0x000001EB18C1ED30&gt; Traceback (most recent call last): File &quot;C:\Users\user1\AppData\Local\Programs\Python\Python39\lib\site-packages\paramiko\file.py&quot;, line 67, in __del__ File &quot;C:\Users\user1\AppData\Local\Programs\Python\Python39\lib\site-packages\paramiko\channel.py&quot;, line 1390, in close File &quot;C:\Users\user1\AppData\Local\Programs\Python\Python39\lib\site-packages\paramiko\channel.py&quot;, line 989, in shutdown_write File &quot;C:\Users\user1\AppData\Local\Programs\Python\Python39\lib\site-packages\paramiko\channel.py&quot;, line 965, in shutdown File &quot;C:\Users\user1\AppData\Local\Programs\Python\Python39\lib\site-packages\paramiko\transport.py&quot;, line 1920, in _send_user_message AttributeError: 'NoneType' object has no attribute 'time' </code></pre>
<python><ssh>
2023-08-23 11:49:18
0
1,261
Dipankar Nalui
76,960,817
2,166,823
Is there an optimized way to perfrom repeated sampling in Python, similar to `rep_sample_n` in R/dplyr?
<p>I'm looking for an optimized method of creating sampling distributions in Python, similar to <a href="https://infer.tidymodels.org/reference/rep_sample_n.html" rel="nofollow noreferrer"><code>rep_sample_n</code></a> in <code>dplyr</code>. Currently, I am using <code>df.sample</code> inside a list comprehension (<code>pd.concat([df.sample(size) for n in range(N)])</code>), which is syntactically intuitive, but quite slow when the number of samples increases. Most of the options I have seen here involve either a for loop or list comprehension.</p> <p>Here is an example that also keeps track of the sample replicate number for clarity:</p> <pre class="lang-py prettyprint-override"><code>df = pd.DataFrame({'value': range(3)}) sample_size = 2 replicates = 5 pd.concat([ df.sample(sample_size).assign(replicate=rep) for rep in range(replicates) ]) </code></pre> <p>Output:</p> <pre><code> value replicate 0 0 0 1 1 0 2 2 1 0 0 1 2 2 2 1 1 2 2 2 3 0 0 3 1 1 4 0 0 4 </code></pre>
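Not part of the original question, but one way to drop the per-replicate loop entirely is to draw all sample indices in a single vectorized call. This sketch assumes sampling without replacement within each replicate, matching `df.sample`'s default:

```python
import numpy as np
import pandas as pd

def rep_sample_n(df, sample_size, replicates, seed=None):
    """Draw `replicates` samples of `sample_size` rows each, without a loop."""
    rng = np.random.default_rng(seed)
    # argsort of a random matrix yields one permutation per row; the first
    # `sample_size` positions form a without-replacement sample per replicate
    idx = rng.random((replicates, len(df))).argsort(axis=1)[:, :sample_size]
    out = df.iloc[idx.ravel()].copy()
    out["replicate"] = np.repeat(np.arange(replicates), sample_size)
    return out

df = pd.DataFrame({"value": range(3)})
samples = rep_sample_n(df, sample_size=2, replicates=5, seed=0)
```

For large `replicates` this replaces N calls to `df.sample` with one indexing operation; whether it beats the list comprehension in practice is worth benchmarking on the actual data.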
<python><scikit-learn><statistics><statsmodels>
2023-08-23 11:13:18
2
49,714
joelostblom
76,960,806
17,160,160
Cumulative constraints applied to sub-set from double indexed sparse set in Pyomo
<p>This is a follow up to a previous <a href="https://stackoverflow.com/questions/76922384/cumulative-constraint-expression-over-indexed-set-in-pyomo/76923706#76923706">post</a>. However, I have revised things and thought the change warranted a separate post for clarity's sake. Apologies if this is poor form, I'll happily amend/delete as required.</p> <p>In the following minimal example, I use a sparse set (<code>WEEK_PROD_flat</code>) to formulate my decision variable <code>X</code>:</p> <p><strong>Initial Model</strong></p> <pre><code>weekly_products = { 1: ['Q24'], 2: ['Q24', 'J24'], 3: ['Q24', 'J24','F24'], 4: ['J24', 'F24'], 5: ['F24'] } from pyomo.environ import * model = ConcreteModel() model.WEEKS = Set(initialize = [1,2,3,4,5]) model.PRODS = Set(initialize = ['Q24','J24','F24']) model.WEEK_PROD = Set(model.WEEKS, initialize = weekly_products) model.WEEK_PROD_flat = Set(initialize=[(w, p) for w in model.WEEKS for p in model.WEEK_PROD[w]]) model.X = Var(model.WEEK_PROD_flat) </code></pre> <p>I am attempting to apply constraints to a sub-set of the indices from <code>WEEK_PROD_flat</code>. To do so, I first formulate the following min and max parameters:</p> <p><strong>Parameters</strong></p> <pre><code>week_min = { (2,'J24'):10, (3,'J24'):20, (3,'F24'):10, (4,'J24'):30, (4,'F24'):20, (5,'F24'):30 } week_max = { (2,'J24'):30, (3,'J24'):40, (3,'F24'):30, (4,'J24'):50, (4,'F24'):40, (5,'F24'):50 } model.weekMin = Param(model.WEEK_PROD_flat, within = NonNegativeIntegers, initialize = week_min, default = 0) model.weekMax = Param(model.WEEK_PROD_flat, within = NonNegativeIntegers, initialize = week_max, default = 0) </code></pre> <p>I am attempting to use these parameters to create a constraint that is cumulative for each (week, product) indices with the following specification:</p> <ul> <li>Each week also includes the summation of previous weeks</li> <li>'J..' and 'F..' products should be summed separately</li> <li>'Q..' 
products are excluded from the index but summed along with the associated 'J..' and 'F..' products respectively.</li> </ul> <p>The output should look like this:</p> <p><strong>Desired Output</strong></p> <pre><code>(2, 'J24'): 10 : X[1,Q24] + X[2,Q24] + X[2,J24] : 30 (3, 'J24'): 20 : X[1,Q24] + X[2,Q24] + X[3,Q24] + X[2,J24] + X[3,J24] : 40 (3, 'F24'): 10 : X[1,Q24] + X[2,Q24] + X[3,Q24] + X[3,F24] : 30 (4, 'J24'): 30 : X[1,Q24] + X[2,Q24] + X[3,Q24] + X[2,J24] + X[3,J24] + X[4,J24] :50 (4, 'F24'): 20 : X[1,Q24] + X[2,Q24] + X[3,Q24] + X[3,F24] + X[4,F24] : 40 (5, 'F24'): 30 : X[1,Q24] + X[2,Q24] + X[3,Q24] + X[3,F24] + X[4,F24] + X[5,F24] : 50 </code></pre> <p>In a previous iteration of the problem double indices were not used in the formulation of parameters or the decision variable and sparse sets were not used. I was able to create the desired output by specifying multiple rules and creating constraints for 'J' and 'F' separately.</p> <p>This would be unwieldly in my actual use case and I would like to be able to create the desired output with a single constraint declaration.</p> <p>Guidance appreciated!</p> <hr /> <p><strong>Current Attempts</strong><br /> So far, the closest I have been able to get to the desired results is below. However, this is far from ideal for a couple of reasons:</p> <p>1.) A similar rule will obviously have to be called to define constraints for each product element in <code>model.PRODS</code> that is not 'Q24'. While this is only 2 in this instance, in the actual use case, this would simply not be efficient.<br /> 2.) 
Because the constraint is indexed using the single indices from <code>model.WEEKS</code>, it was necessary to apply static bounds (0,1000 in the example shown) as the double indexed bounds from <code>weekMin</code> and <code>weekMax</code> could not be applied.</p> <p><strong>Code</strong></p> <pre><code>def cum_limit_rule(model,w): subset = {x for x in model.WEEKS if x &lt;= w} return(0, sum(model.X[w,p] for w in subset for p in model.WEEK_PROD[w] if p[0] == 'J' or p[0] == 'Q'), 1000) model.cum_limit = Constraint(model.WEEKS, rule= cum_limit_rule) </code></pre> <p><strong>Output</strong></p> <pre><code>cum_limit : Size=5, Index=WEEKS, Active=True Key : Lower : Body : Upper : Active 1 : 0.0 : X[1,Q24] : 1000.0 : True 2 : 0.0 : X[1,Q24] + X[2,Q24] + X[2,J24] : 1000.0 : True 3 : 0.0 : X[1,Q24] + X[2,Q24] + X[2,J24] + X[3,Q24] + X[3,J24] : 1000.0 : True 4 : 0.0 : X[1,Q24] + X[2,Q24] + X[2,J24] + X[3,Q24] + X[3,J24] + X[4,J24] : 1000.0 : True 5 : 0.0 : X[1,Q24] + X[2,Q24] + X[2,J24] + X[3,Q24] + X[3,J24] + X[4,J24] : 1000.0 : True </code></pre>
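This is not the original model, but the summation pattern in the desired output can be captured by a single index-generating helper, which one Pyomo rule could then reuse for every product (the constraint would be declared over the bounded subset of `WEEK_PROD_flat`, with `weekMin`/`weekMax` supplying the bounds). A plain-Python sketch of just the index logic, using the data from the question:

```python
weekly_products = {
    1: ['Q24'],
    2: ['Q24', 'J24'],
    3: ['Q24', 'J24', 'F24'],
    4: ['J24', 'F24'],
    5: ['F24'],
}

def cumulative_terms(w, p):
    """X-variable indices feeding the cumulative constraint for (w, p):
    every week up to w, keeping product p itself plus all 'Q..' products."""
    return [(wk, prod)
            for wk in sorted(weekly_products)
            if wk <= w
            for prod in weekly_products[wk]
            if prod == p or prod.startswith('Q')]
```

Inside a rule this becomes `sum(model.X[i] for i in cumulative_terms(w, p))`, bounded by `model.weekMin[w, p]` and `model.weekMax[w, p]`, so one declaration covers both 'J..' and 'F..' products without per-product rules or static bounds.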
<python><pyomo>
2023-08-23 11:12:17
1
609
r0bt
76,960,794
1,096,059
How to extract all colors from an image containing solid color blocks with open cv?
<p>As the title suggests, the image consists of solid color blocks. I want to extract all the color blocks from the image and separate them individually. I've made many attempts, but the results haven't been very satisfactory:</p> <ol> <li>Extracting by automatically calculating the threshold, unfortunately, doesn't work for all color blocks.</li> </ol> <pre class="lang-py prettyprint-override"><code>t, _ = cv2.threshold(img, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU) _, target = cv2.threshold(img, t, 255, cv2.THRESH_BINARY_INV) cnts, hierarchys = cv2.findContours(target, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE) </code></pre> <ol start="2"> <li>After edge detection and a series of morphological operations for contour extraction, there are still many unwanted extra parts.</li> </ol> <pre class="lang-py prettyprint-override"><code>gray_temp = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY) edges = 255 - cv2.Canny(gray_temp, 25, 50) kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (3, 3)) edges = cv2.erode(edges, kernel, iterations=1) cnts, hierarchys = cv2.findContours(edges, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE) </code></pre> <p>So, what I've been thinking is whether I could first extract all the colors from the image and then use the color information to extract the blocks.</p> <p>Ready to give it a try, I attempted to use numpy to retrieve color information, but the obtained color information surprised me. Pixels that appear to be the same color don't have identical RGB values.</p> <p>I felt a bit disheartened. I searched Google again and found a promising library called &quot;<a href="https://pypi.org/project/extcolors/" rel="nofollow noreferrer">extcolors</a>.&quot; I was excited, as if I had glimpsed a ray of hope. However, reality dealt me a blow - it requires setting a tolerance. 
The frustrating part is that this tolerance varies for each image, and different values yield vastly different results.</p> <p>So, my question is, is there any effective method that could allow me to obtain all the colors, or are there alternative ways to extract the color blocks with high quality?</p> <p><a href="https://i.sstatic.net/2T2Xv.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/2T2Xv.png" alt="enter image description here" /></a></p>
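The original image isn't available here, but for truly solid blocks an exact pixel census often works better than a tolerance parameter: count every distinct RGB value and keep only those covering a meaningful fraction of the image (anti-aliased edge pixels fall below the threshold). A NumPy-only sketch on a synthetic image:

```python
import numpy as np

def extract_color_blocks(img, min_frac=0.01):
    """Return (color, mask) pairs for every exact color covering at least
    `min_frac` of the pixels in an H x W x 3 uint8 image."""
    pixels = img.reshape(-1, 3)
    colors, counts = np.unique(pixels, axis=0, return_counts=True)
    keep = counts >= min_frac * len(pixels)
    return [(tuple(int(v) for v in c), np.all(img == c, axis=-1))
            for c in colors[keep]]

# synthetic test image: left half red, right half green
img = np.zeros((10, 10, 3), dtype=np.uint8)
img[:, :5] = (255, 0, 0)
img[:, 5:] = (0, 255, 0)
blocks = extract_color_blocks(img)
```

If the source image is JPEG-compressed, exact equality won't hold; quantizing first (for example with `cv2.kmeans`, or simply `img // 16 * 16`) restores solid regions before counting.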
<python><opencv><image-processing><colors><contour>
2023-08-23 11:10:48
0
916
HamGuy
76,960,761
1,773,367
Function with try except in np.vectorize returns error message
<p>I have a, vectorized, function that does a simple adjustment on a number</p> <pre><code>import pandas as pd import numpy as np @np.vectorize def adjust_number(number: int) -&gt; int: max_number = 6 default_substitue = 2 # Try to convert to int, if not possible, use default_substitue try: number = int(number) except: number = default_substitue return min(number, max_number) </code></pre> <p>I apply the function on a dataframe</p> <pre><code>df = pd.DataFrame({'numbers': [1.0, 9.0, np.nan]}) df = df.assign(adjusteded_number=lambda x: adjust_number(x['numbers'])) </code></pre> <p>This returns the expected outputs, but I also get a strange return message</p> <pre><code>c:\Users\xxx\AppData\Local\Programs\Python\Python310\lib\site-packages\numpy\lib\function_base.py:2412: RuntimeWarning: invalid value encountered in adjust_number (vectorized) outputs = ufunc(*inputs) </code></pre> <p>It is not a huge issue, but it is very annoying. The error seems to be triggered by the <code>try-except</code>. If I modify the function, removing the <code>try-except</code>, which I really cannot do without breaking the functionality, the error goes away.</p> <p>What is causing this and how can I get rid of the error message?</p>
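One way to sidestep the warning entirely, rather than suppressing it, is to drop `np.vectorize` and express the same logic with pandas, which handles NaN natively. A sketch reproducing the function's behaviour (NaN and unconvertible values become the default, everything else is capped):

```python
import numpy as np
import pandas as pd

def adjust_numbers(series, max_number=6, default_substitute=2):
    """Vector version of adjust_number: coerce to numeric, substitute the
    default for anything unconvertible (including NaN), cap at max_number."""
    nums = pd.to_numeric(series, errors="coerce")
    return nums.fillna(default_substitute).clip(upper=max_number).astype(int)

df = pd.DataFrame({"numbers": [1.0, 9.0, np.nan]})
df["adjusted_number"] = adjust_numbers(df["numbers"])
```

Alternatively, wrapping the original call in `with np.errstate(invalid="ignore"):` should silence the warning without changing results, since the warning is raised by NumPy's floating-point error machinery around the vectorized call.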
<python><pandas><numpy><vectorization>
2023-08-23 11:06:04
1
2,911
mortysporty
76,960,506
10,480,181
Conditional Filtering Pyspark
<p>I have two spark Dataframes <code>Table A - Data</code> and <code>Table B - Filter Lookup</code>. I want to apply a different filter based on a column value in Data (Table A).</p> <p>Table A - Data</p> <div class="s-table-container"> <table class="s-table"> <thead> <tr> <th>ID</th> <th>Col 1</th> <th>Col2</th> </tr> </thead> <tbody> <tr> <td>1</td> <td>A</td> <td>AA</td> </tr> <tr> <td>2</td> <td>B</td> <td>BB</td> </tr> <tr> <td>3</td> <td>A</td> <td>AB</td> </tr> <tr> <td>4</td> <td>A</td> <td>AC</td> </tr> <tr> <td>5</td> <td>B</td> <td>BA</td> </tr> <tr> <td>6</td> <td>A</td> <td>AD</td> </tr> <tr> <td>7</td> <td>B</td> <td>AB</td> </tr> </tbody> </table> </div> <p>Table B - Filter Lookup</p> <div class="s-table-container"> <table class="s-table"> <thead> <tr> <th>Col1</th> <th>Col2</th> </tr> </thead> <tbody> <tr> <td>A</td> <td>AA</td> </tr> <tr> <td>B</td> <td>BB</td> </tr> <tr> <td>A</td> <td>AB</td> </tr> <tr> <td>B</td> <td>BA</td> </tr> </tbody> </table> </div> <p>The output that I want is:</p> <p>I don't want the data table to have the col2 values defined in Filter Lookup table. So the output needs to be:</p> <div class="s-table-container"> <table class="s-table"> <thead> <tr> <th>ID</th> <th>Col1</th> <th>Col2</th> </tr> </thead> <tbody> <tr> <td>4</td> <td>A</td> <td>AC</td> </tr> <tr> <td>6</td> <td>A</td> <td>AD</td> </tr> <tr> <td>7</td> <td>B</td> <td>AB</td> </tr> </tbody> </table> </div> <p>Table A could be huge. And I want an efficient way to do it in spark. Also the lookup table can have more types of col1. So I don't want to just limit to two Col1 values (A &amp; B).</p> <p>Here is what I tried. I am basically applying the filter multiple times depending upon the values of col1. 
Is there a way to do it in one go without the loop on col1 values.</p> <pre><code>data_col1 = data.select(&quot;col1&quot;) \ .na.drop().dropDuplicates().rdd.flatMap(lambda x: x).collect() filter_col1 = filter_lookup.select(&quot;col1&quot;) \ .na.drop().dropDuplicates().rdd.flatMap(lambda x: x).collect() result_list = [] for col1 in data_col1: if col1 in filter_col1: col2_list = filter_lookup.filter(f.col(&quot;col1&quot;) == f.lit(col1)).select(&quot;col2&quot;).na.drop().dropDuplicates().rdd.flatMap(lambda x: x).collect() filtered = data\ .filter(f.col(&quot;col2&quot;).isin(col2_list) &amp; f.col(&quot;col1&quot;) == f.lit(col1))\ .dropDuplicates() result_list.append(filtered) </code></pre>
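The whole loop collapses into a single left anti-join on both columns. In PySpark that would be `data.join(filter_lookup, on=["Col1", "Col2"], how="left_anti")`, which Spark executes in one distributed pass with no `collect()`; the same idea is demonstrated here in pandas so it can run standalone:

```python
import pandas as pd

data = pd.DataFrame({
    "ID":   [1, 2, 3, 4, 5, 6, 7],
    "Col1": ["A", "B", "A", "A", "B", "A", "B"],
    "Col2": ["AA", "BB", "AB", "AC", "BA", "AD", "AB"],
})
lookup = pd.DataFrame({
    "Col1": ["A", "B", "A", "B"],
    "Col2": ["AA", "BB", "AB", "BA"],
})

# left anti-join: keep rows of `data` whose (Col1, Col2) pair is absent from `lookup`
merged = data.merge(lookup, on=["Col1", "Col2"], how="left", indicator=True)
result = merged.loc[merged["_merge"] == "left_only"].drop(columns="_merge")
```

The anti-join also scales to any number of distinct `Col1` values automatically, removing the need to collect distinct values to the driver.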
<python><apache-spark><pyspark><apache-spark-sql>
2023-08-23 10:31:42
1
883
Vandit Goel
76,960,371
3,731,823
Executing several python files on the same interactive prompt on VS Code
<p>I'm transitioning from Spyder to VS Code, and I'm having an issue executing ad-hoc code on an interactive session. I first save this to a tmp1.py and run the selection on an interactive window:</p> <pre><code>import pandas as pd df = pd.DataFrame({'a': [1,2,3,4], 'b': [1,1,2,3]}) </code></pre> <p>Then on tmp2.py I just write and run:</p> <pre><code>print(df) </code></pre> <p>It opens a <strong>new</strong> interactive window and complains <code>NameError: name 'df' is not defined</code>. How can I configure it to run on the currently active window / session? Note that I'm not using a Jupyter notebook but IPython.</p> <p>At some point I'll want to run code in parallel on several terminals, but at first I'd like to get this simple set-up working.</p>
<python><visual-studio-code><ipython>
2023-08-23 10:12:24
2
4,208
NikoNyrh
76,960,255
7,553,746
What's a Pythonic way of parsing a JSON payload which may be none?
<p>I have the following code:</p> <pre><code> payload = event.get('body', '') parsed_payload = json.loads(payload) username = parsed_payload.get('username') password = parsed_payload.get('password') print(username) print(password) </code></pre> <p>Which works fine until <code>'body': null</code> which causes the following error, presumably because the value of body is null so event.get('body', '') means the default isn't used, just <code>null</code>.</p> <blockquote> <p>[ERROR] TypeError: the JSON object must be str, bytes or bytearray, not NoneType</p> </blockquote> <p>I tried the following:</p> <pre><code> payload = event.get('body') if payload: parsed_payload = json.loads(payload) username = parsed_payload.get('username') password = parsed_payload.get('password') if username and password: print(username) print(password) </code></pre> <p>Now I get a new error:</p> <pre><code>UnboundLocalError: cannot access local variable 'username' where it is not associated with a value </code></pre> <p>At which point I could either have already assigned username and password with a none or I could have another if statement but it seems unpythonic.</p> <p>Could someone point me in the direction of best practice here please?</p>
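One idiomatic pattern is to let `or` supply the fallback, which covers both a missing `'body'` key and an explicit `null` value, then return the credentials from one place so nothing is ever unbound. A sketch (the event shape is assumed from the question):

```python
import json

def extract_credentials(event):
    """Return (username, password), tolerating a missing or null body."""
    # event.get('body') is None both for a missing key and for "body": null;
    # `or '{}'` turns either case into an empty JSON object
    parsed = json.loads(event.get("body") or "{}")
    return parsed.get("username"), parsed.get("password")

username, password = extract_credentials({"body": None})
```

If the body can also be malformed JSON you would still want a try/except around `json.loads`, but the `UnboundLocalError` disappears because `username` and `password` are always assigned (possibly to `None`).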
<python>
2023-08-23 09:58:07
1
3,326
Johnny John Boy
76,959,959
127,508
Python does not recognise a package
<p>I have a folder: backend/api</p> <p>In the folder I have the following files:</p> <pre><code>❯ exa --tree . ├── __init__.py ├── api.py ├── requirements.txt └── routers ├── __init__.py └── phone.py </code></pre> <p>In the api.py I have the following:</p> <p>from api.routers import phone</p> <p>When I start up the file I get the following error:</p> <pre><code>ModuleNotFoundError: No module named 'api.routers'; 'api' is not a package </code></pre> <p>Isn't placing an <strong>init</strong>.py file making a Python folder a package?</p>
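This usually isn't about `__init__.py` at all but about how the script is launched: running `python api/api.py` puts `backend/api` (not `backend`) on `sys.path`, so the name `api` is not importable from inside itself; running `python -m api.api` from `backend/` puts `backend` on the path and the package resolves. A self-contained reproduction of both behaviours:

```python
import os
import subprocess
import sys
import tempfile

# rebuild the layout: backend/api/{__init__.py, api.py, routers/{__init__.py, phone.py}}
with tempfile.TemporaryDirectory() as backend:
    api = os.path.join(backend, "api")
    routers = os.path.join(api, "routers")
    os.makedirs(routers)
    open(os.path.join(api, "__init__.py"), "w").close()
    open(os.path.join(routers, "__init__.py"), "w").close()
    with open(os.path.join(routers, "phone.py"), "w") as f:
        f.write("NAME = 'phone'\n")
    with open(os.path.join(api, "api.py"), "w") as f:
        f.write("from api.routers import phone\nprint(phone.NAME)\n")

    # running the file directly puts backend/api on sys.path -> 'api' not found
    direct = subprocess.run([sys.executable, os.path.join(api, "api.py")],
                            capture_output=True, text=True)
    # running it as a module from backend/ puts backend on sys.path -> works
    as_module = subprocess.run([sys.executable, "-m", "api.api"],
                               capture_output=True, text=True, cwd=backend)
```

The other common fix is switching `api.py` to a relative import (`from .routers import phone`), which also requires running it as a module rather than as a script.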
<python><python-3.x><module>
2023-08-23 09:19:10
2
8,822
Istvan
76,959,853
7,441,757
Rule of thumb for pandas to_csv chunksize; how to set chunksize?
<p>When you write a large dataframe with <code>df.to_csv(...)</code>, it sometimes clogs up your memory. Setting <code>chunksize</code> helps in that case. See the <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.to_csv.html" rel="nofollow noreferrer">docs</a>.</p> <p>What is a good rule to decide on chunksize? Now I just start at 1e6 and then lower it by orders of magnitude until it works.</p> <p>I imagine some rule of thumb on (MB-free / size_per_row * 0.9). Wondering if someone had a sensible one, ideally programetically determined.</p>
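One programmatic heuristic (not an official pandas recommendation) is exactly the rule sketched above: estimate bytes per row from the frame's own memory footprint and divide a memory budget by it:

```python
import pandas as pd

def choose_chunksize(df, budget_mb=200, safety=0.9):
    """Pick a to_csv chunksize so each chunk stays within ~budget_mb of memory."""
    bytes_per_row = df.memory_usage(deep=True).sum() / max(len(df), 1)
    return max(1, int(safety * budget_mb * 1024**2 / bytes_per_row))

df = pd.DataFrame({"a": range(1000), "b": ["x"] * 1000})
chunksize = choose_chunksize(df)
# df.to_csv("out.csv", chunksize=chunksize)
```

Note that the CSV text can be larger or smaller than the in-memory representation, so the 0.9 safety factor is a guess; the real constraint is free memory at write time, not a property of the frame.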
<python><pandas><csv>
2023-08-23 09:02:13
2
5,199
Roelant
76,959,839
11,328,614
How Python deals with redeclared functions
<p>I have occurrences of accidentally redeclared functions in a python codebase. The occurrences are simple function definitions, no <code>functools.singledispatch</code> involved. I want to fix that. However, I do not know which of the functions python actually uses. I want to keep only that function.</p> <p>Please understand, that I ask this question to understand what happens behind the scene and how to solve the issue properly. Of course, I know that redeclaring functions in python is bad coding practice. I also know that a linter can hint you on that. But if the problem is there and I want to solve that, I must understand and find out all of which occurrences should be deleted.</p> <p>I made a small test and it seems that python actually uses the last definition:</p> <pre class="lang-py prettyprint-override"><code>def func1(a: int): print(&quot;Num&quot;, a) def func1(a: int): print(&quot;A number: &quot;, a) func1(100) </code></pre> <p>-&gt;</p> <pre class="lang-bash prettyprint-override"><code>/home/user/PycharmProjects/project/.venv/bin/python /home/user/.config/JetBrains/PyCharm2023.1/scratches/redeclared_func.py A number: 100 </code></pre> <p>I just wanted to ask, to be sure that this interpretation is correct. In that case I would keep none but the last occurrence, of course. There may be a difference between, e.g. Python versions, Python interpreters etc. What happens, if a module with redeclared functions is imported and then the function is redeclared again?</p>
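The interpretation is correct, and it holds across CPython versions and alternative interpreters, because `def` is not a declaration but an ordinary assignment executed when the line is reached: the name `func1` is simply rebound, and the earlier function object becomes unreachable unless something else still references it. A small demonstration:

```python
def func1(a: int):
    return f"Num {a}"

first = func1  # keep a reference before the rebinding

def func1(a: int):  # this second `def` rebinds the name; the last one wins
    return f"A number: {a}"
```

The same applies on import: importing a module with duplicate defs leaves the last one bound, and re-running a `def` afterwards rebinds the name again. So when cleaning up, keeping only the final occurrence in each scope preserves current behaviour.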
<python><python-3.x><function-declaration><interpretation>
2023-08-23 08:59:58
1
1,132
Wör Du Schnaffzig
76,959,706
5,132,064
Why must the text be a unicode?
<p>Why is this code not working?</p> <pre><code>font = pygame.font.SysFont(None, 24) zahl = 99 img = font.render((&quot;zahl &quot;,str(zahl)), True, WHITE) screen.blit(img, (20, 20)) </code></pre> <p>I get the message: TypeError: text must be a unicode or bytes</p> <p>Best regards Joachim</p>
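The error is raised because `("zahl ", str(zahl))` is a tuple, not a string: `font.render` expects a single unicode string as its first argument, and the comma inside the parentheses builds a 2-tuple. Independent of pygame, the distinction looks like this:

```python
zahl = 99

wrong = ("zahl ", str(zahl))   # a 2-tuple, which is what the comma creates
right = f"zahl {zahl}"         # one unicode string, which font.render accepts
```

So the render call would become (untested here, since it needs a pygame display): `img = font.render(f"zahl {zahl}", True, WHITE)`.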
<python>
2023-08-23 08:41:50
1
375
Joachim
76,959,487
14,280,692
python3 subprocess.Popen call to prometheus always reports an error: No such file or directory
<p>I call prometheus by using the python3 subprocess.Popen function, always reporting an error: No such file or directory. What is Problem?</p> <p>my code as follow:</p> <pre><code>if __name__ == '__main__': subprocess.Popen([&quot;pwd&quot;]) rs = subprocess.Popen([&quot;./prometheus&quot;, &quot;--help&quot;]) </code></pre> <p>the error info:</p> <pre><code>Traceback (most recent call last): File &quot;xxxe/integration_test/test.py&quot;, line 33, in &lt;module&gt; rs = subprocess.Popen([&quot;./prometheus&quot;, &quot;--help&quot;]) File &quot;/Library/Developer/CommandLineTools/Library/Frameworks/Python3.framework/Versions/3.9/lib/python3.9/subprocess.py&quot;, line 951, in __init__ self._execute_child(args, executable, preexec_fn, close_fds, File &quot;/Library/Developer/CommandLineTools/Library/Frameworks/Python3.framework/Versions/3.9/lib/python3.9/subprocess.py&quot;, line 1821, in _execute_child raise child_exception_type(errno_num, err_msg, err_filename) FileNotFoundError: [Errno 2] No such file or directory: './prometheus' </code></pre> <p>It is normal when I execute prometheus directly in the directory where the script is located</p> <pre><code>(venv) ➜ integration_test ./prometheus --help usage: prometheus [&lt;flags&gt;] The Prometheus monitoring server Flags: -h, --[no-]help Show context-sensitive help (also try --help-long and --help-man). --[no-]version Show application version. .... </code></pre> <p>And I try to call node_export through the script, it is also normal. So why cannot find prometheus file ?</p>
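The relative path `./prometheus` is resolved against the process's current working directory, not against the directory containing the Python script, which is why it works in a shell opened in that directory but fails when the script is launched from elsewhere. Passing `cwd=` (or building an absolute path) fixes it; demonstrated here with a stand-in executable since prometheus itself isn't available:

```python
import os
import subprocess
import sys
import tempfile

with tempfile.TemporaryDirectory() as d:
    with open(os.path.join(d, "tool.py"), "w") as f:
        f.write("print('ok')\n")

    # fails: "./tool.py" is resolved against *this* process's cwd, not against d
    missing = subprocess.run([sys.executable, "./tool.py"],
                             capture_output=True, text=True)
    # works: cwd= makes the relative path valid (an absolute path works too)
    found = subprocess.run([sys.executable, "./tool.py"],
                           capture_output=True, text=True, cwd=d)
```

For the original case that would be `subprocess.Popen(["./prometheus", "--help"], cwd="/path/to/prometheus_dir")`, or an absolute path built with `os.path.join`.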
<python><subprocess><prometheus><python-3.9><cnosdb>
2023-08-23 08:10:05
1
528
Baker X
76,959,480
21,049,944
How to use Pyplot.Widgets.CheckButtons for the bar plot?
<p>I am trying to use pyplot check buttons following <a href="https://matplotlib.org/stable/gallery/widgets/check_buttons.html" rel="nofollow noreferrer">this</a> example. It works well for classical plot and scatter graphs, but when I tried the bar graph, I got:</p> <blockquote> <p>AttributeError: 'BarContainer' object has no attribute 'get_visible'</p> </blockquote> <p>I tried the documentation but I failed to find any useful alternative to the &quot;get_visible&quot; property. Do you have any advice how to do this?</p> <p>The easiest way to reproduce it is to replace &quot;plot&quot; by &quot;bar&quot; in the CheckButtons example:</p> <pre><code>import numpy as np import matplotlib.pyplot as plt from matplotlib.widgets import CheckButtons t = np.arange(0.0, 2.0, 0.01) s0 = np.sin(2*np.pi*t) s1 = np.sin(4*np.pi*t) s2 = np.sin(6*np.pi*t) fig, ax = plt.subplots() l0 = ax.bar(t, s0, visible=False, lw=2, color='black', label='1 Hz') l1 = ax.bar(t, s1, lw=2, color='red', label='2 Hz') l2 = ax.bar(t, s2, lw=2, color='green', label='3 Hz') fig.subplots_adjust(left=0.2) lines_by_label = {l.get_label(): l for l in [l0, l1, l2]} line_colors = [l.get_color() for l in lines_by_label.values()] # Make checkbuttons with all plotted lines with correct visibility rax = fig.add_axes([0.05, 0.4, 0.1, 0.15]) check = CheckButtons( ax=rax, labels=lines_by_label.keys(), actives=[l.get_visible() for l in lines_by_label.values()], label_props={'color': line_colors}, frame_props={'edgecolor': line_colors}, check_props={'facecolor': line_colors}, ) def callback(label): ln = lines_by_label[label] ln.set_visible(not ln.get_visible()) ln.figure.canvas.draw_idle() check.on_clicked(callback) </code></pre>
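A `BarContainer` is not a single Artist but a container of `Rectangle` patches, so visibility has to be toggled (and read) per patch. A minimal headless sketch of the callback logic; the CheckButtons wiring would stay as in the question, with the callback delegating to these helpers:

```python
import matplotlib
matplotlib.use("Agg")  # headless backend so this runs without a display
import matplotlib.pyplot as plt
import numpy as np

fig, ax = plt.subplots()
bars = ax.bar(np.arange(3), [1.0, 2.0, 3.0], label="1 Hz")

def container_visible(container):
    # treat the first patch's state as the whole container's visibility
    return container.patches[0].get_visible()

def toggle_container(container):
    new_state = not container_visible(container)
    for patch in container.patches:
        patch.set_visible(new_state)

toggle_container(bars)   # hide the whole bar group
```

In the callback, `ln.set_visible(not ln.get_visible())` becomes `toggle_container(lines_by_label[label])`; similarly `l.get_color()` needs a per-container substitute such as `l.patches[0].get_facecolor()`.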
<python><matplotlib><checkbox><widget>
2023-08-23 08:08:51
1
388
Galedon
76,959,447
18,904,265
How can I reduce the amount of data in a polars DataFrame?
<p>I have a csv file with a size of 28 GB, which I want to plot. Those are way too many data points obviously, so how can I reduce the data? I would like to merge about 1000 data points into one by calculating the mean. This is the structure of my DataFrame:</p> <pre class="lang-py prettyprint-override"><code>df = pl.from_repr(&quot;&quot;&quot; ┌─────────────────┬────────────┐ │ Time in seconds ┆ Force in N │ │ --- ┆ --- │ │ f64 ┆ f64 │ ╞═════════════════╪════════════╡ │ 0.0 ┆ 2310.18 │ │ 0.0005 ┆ 2313.23 │ │ 0.001 ┆ 2314.14 │ └─────────────────┴────────────┘ &quot;&quot;&quot;) </code></pre> <p>I thought about using <code>group_by_dynamic</code>, and then calculating the mean of each group, but this only seems to work when using datetimes? The time in seconds is given as a float however.</p>
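The block-averaging itself doesn't need datetimes at all. In Polars it can be done by grouping on a row index divided by the block size (via `with_row_index`, or `with_row_count` on older versions, then `group_by` on `index // 1000` with a mean aggregation). Since the operation is just a reshaped mean, here is the idea as a dependency-light NumPy sketch:

```python
import numpy as np

def block_mean(values, block=1000):
    """Average consecutive blocks of `block` samples; the tail that doesn't
    fill a whole block is dropped."""
    values = np.asarray(values, dtype=float)
    n = len(values) // block * block
    return values[:n].reshape(-1, block).mean(axis=1)

time_ds = block_mean(np.arange(0, 5, 0.0005), block=1000)
```

Applied to the CSV you would run this over both columns; better still, streaming the file with `pl.scan_csv` keeps the 28 GB from ever sitting in memory at once.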
<python><python-polars>
2023-08-23 08:03:48
2
465
Jan
76,959,268
11,803,687
sub-user join with sqlalchemy
<p>I am trying to find a good solution for the following problem</p> <ul> <li>There are main-users and sub-users which identify themselves via a parent_user_id.</li> <li>All users also get additional information from the company table</li> </ul> <p>I need to join sub-users to the main-users to get the company information, as well as doing the same for main-users</p> <p>Users (contains sub and main users)</p> <ul> <li>user_id</li> <li>login</li> <li>parent_user_id</li> <li>company_id</li> </ul> <p>Company (contains company information for all users of this company)</p> <ul> <li>company_id</li> <li>contact_address</li> </ul> <p>when I search for the user with id 7, it should return the login and the contact address of this user. This is relatively easy because I simply join from Users to Company directly. However, this user 7 can also have a sub-user, which has user_id 11 and parent_user_id 7.</p> <p>I first need to join from the parent_user_id 7 to the user 7, and then join from that user's company_id to the Company table to get the contact address.</p> <p>The challenge here is that a main-user will have a parent_user_id of &quot;NULL&quot;, therefore it cannot join from it's parent_user_id to its own user_id. This only works for sub-users.</p> <p>What I did is a join( or_( ) ) for both parent's company_id as well as the user's company_id (after joining all the users to their parents).</p> <p>Is this the most common solution for this problem? 
it seems a bit lengthy, requires aliases, outer joins and an or_ join to allow the two different types of users</p> <pre><code>user = Users.alias(&quot;user&quot;) parent = Users.alias(&quot;parent&quot;) company = Company.alias(&quot;company&quot;) joined = user.join(parent, user.c.parent_user_id == parent.c.id, isouter=True) joined = joined.join(company, or_(user.c.company_id == company.c.company_id, parent.c.company_id == company.c.company_id) sql = select(user.c.id, user.c.login, company.c.contact_address).select_from(joined) </code></pre> <p>Maybe there is a &quot;pattern&quot; for this sort of problem? I have looked at reverse cte, but I'm not even sure if it applies to this case. There are only sub-users and no deeper nesting (and no &quot;levels&quot; of depth). Main-users simply have no parent_user_id</p> <p>Edit: Ideally, I am looking for an SQLAlchemy core answer that doesnt use classes, therefore it only uses the metadata from the database using autoload</p>
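One way to drop the `or_` is `coalesce`: after the outer join to the parent, the effective company id is "the parent's, else my own", which leaves a single join condition. A runnable SQLAlchemy Core sketch against in-memory SQLite; table and column names are taken from the question, and in the autoload setup these Table objects would come from MetaData reflection instead:

```python
import sqlalchemy as sa

engine = sa.create_engine("sqlite://")
meta = sa.MetaData()
users = sa.Table(
    "users", meta,
    sa.Column("user_id", sa.Integer, primary_key=True),
    sa.Column("login", sa.String),
    sa.Column("parent_user_id", sa.Integer),
    sa.Column("company_id", sa.Integer),
)
company = sa.Table(
    "company", meta,
    sa.Column("company_id", sa.Integer, primary_key=True),
    sa.Column("contact_address", sa.String),
)
meta.create_all(engine)

with engine.begin() as conn:
    conn.execute(company.insert(), [{"company_id": 1, "contact_address": "Main St 1"}])
    conn.execute(users.insert(), [
        {"user_id": 7, "login": "main", "parent_user_id": None, "company_id": 1},
        {"user_id": 11, "login": "sub", "parent_user_id": 7, "company_id": None},
    ])

user = users.alias("u")
parent = users.alias("p")
# a sub-user's effective company is its parent's; a main-user's is its own
effective_company = sa.func.coalesce(parent.c.company_id, user.c.company_id)
joined = user.outerjoin(parent, user.c.parent_user_id == parent.c.user_id) \
             .join(company, effective_company == company.c.company_id)
stmt = sa.select(user.c.user_id, user.c.login, company.c.contact_address) \
         .select_from(joined)

with engine.connect() as conn:
    rows = [tuple(r) for r in conn.execute(stmt)]
```

The outer join and alias are still there (they are inherent to the self-referencing schema), but the `or_` disappears and the intent ("parent's company wins") is explicit in one expression.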
<python><mysql><join><sqlalchemy>
2023-08-23 07:36:27
1
1,649
c8999c 3f964f64
76,959,265
9,550,867
How to verify the best ARIMA model?
<p>I am trying to do forecasting based on <code>ARIMA</code>. Currently I am choosing the best <code>ARIMA</code> model and predicting for a certain period based on the best chosen <code>ARIMA</code> model. I am doing that by getting the <code>AIC value</code> and keeping the fact in mind that: <strong>The lesser the AIC the better</strong>. However, I need to be able to implement a way to verify for my function so that I do not solely have to rely on the <strong>least AIC value</strong>. So, there should be another way to detect that the model I chose is giving me the best results.</p> <p>To give a clear view, let's say my <code>ARIMA</code> model is supposed to give me values between 5 to 10 based on the historical data input but for some reason after finding the best model it is giving me values which lies somewhere around 1000. It is definitely unusual.</p> <p>What could be an alternative way to verify that <code>ARIMA</code> model is giving me the correct values apart from the given (least AIC) approach?</p> <p>Following is my code:</p> <pre><code>import os import pandas as pd import matplotlib.pyplot as plt import numpy as np import seaborn as sns sns.set() import statsmodels.tsa.api as smt import statsmodels.api as sm def arima_ci(df_train): df_s = df_train param_range = 3 ps = range(0, param_range) d = 1 qs = range(0, param_range) # Create a list with all possible combinations of parameters parameters = product(ps, qs) parameters_list = list(parameters) # Train many ARIMA models to find the best set of parameters def optimize_ARIMA(parameters_list, d): &quot;&quot;&quot; parameters_list - list with (p, q) tuples d - integration order &quot;&quot;&quot; results = [] best_aic = float('inf') for param in parameters_list: try: model = sm.tsa.SARIMAX(df_s, order=(param[0], d, param[1])).fit(disp=-1) except: continue aic = model.aic # Save best model, AIC and parameters if aic &lt; best_aic: best_model = model best_aic = aic best_param = param 
results.append([param, model.aic]) result_table = pd.DataFrame(results) result_table.columns = ['parameters', 'aic'] # Sort by AIC in ascending order (lower AIC is better) result_table = result_table.sort_values(by='aic', ascending=True).reset_index(drop=True) return result_table with warnings.catch_warnings(): warnings.filterwarnings(&quot;ignore&quot;) # Ignore all warnings within this block result_table = optimize_ARIMA(parameters_list, d) # result_table = optimize_ARIMA(parameters_list, d) p, q = result_table.parameters[0] best_model = sm.tsa.SARIMAX(df_s, order=(p, d, q)).fit(disp=-1) # print(best_model.summary()) # # do forecast for period? n_steps = fcast_period = 1 forecast_values = best_model.forecast(steps=n_steps) # print(forecast_values) # # forecast = best_model.get_forecast(steps=n_steps) forecast_values = forecast.predicted_mean forecast_ci = forecast.conf_int(alpha=0.05) lower_ci = forecast_ci.iloc[:, 0] upper_ci = forecast_ci.iloc[:, 1] last_date = df_train.index[-1] # Last date in original data next_month = last_date + pd.DateOffset(months=1) # Get the next month after last_date # c i arima_forecast_df = pd.DataFrame({ # 'Invoice Date': pd.date_range(start=next_month, periods=n_steps, freq='MS'), 'arima': forecast_values.astype(int), 'arima_l': lower_ci.astype(int).apply(lambda x: max(0, x)), # Adding lower confidence interval 'arima_u': upper_ci.astype(int) # Adding upper confidence interval }) return arima_forecast_df result_arima_ci = arima_ci(dfn_resampled) print(type(result_arima_ci)) result_arima_ci </code></pre> <p>In this regard, <code>dfn_resampled</code> is a Pandas series. 
In simple words, it is my training data</p> <pre><code># code dfn_resampled.info() # output &lt;class 'pandas.core.series.Series'&gt; DatetimeIndex: 73 entries, 2017-07-01 to 2023-07-01 Freq: MS Series name: Quantity Non-Null Count Dtype -------------- ----- 73 non-null int64 dtypes: int64(1) </code></pre> <p>I am avoiding the auto-arima library as that gave me poor results. Please help me with this.</p>
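AIC only ranks candidates relative to each other; it says nothing about whether the winner's forecasts are sane. Two cheap complementary checks are (a) a holdout backtest: refit on all but the last few points and score the predictions against them with MAPE or RMSE, and (b) a plausibility gate comparing the forecast against the training distribution. The gate needs no statsmodels at all; the threshold `k` below is an assumption to calibrate, not a standard value:

```python
import numpy as np

def forecast_is_plausible(history, forecast, k=3.0):
    """Flag forecasts that fall outside mean +/- k*std of the training data."""
    h = np.asarray(history, dtype=float)
    lo, hi = h.mean() - k * h.std(), h.mean() + k * h.std()
    return [bool(lo <= f <= hi) for f in np.atleast_1d(forecast)]

history = [5, 6, 7, 8, 9, 6, 7, 8]
checks = forecast_is_plausible(history, [7, 1000])
```

When the gate trips (as it would for the 1000-vs-5-to-10 case described above), fall back to the next-best (p, q) row in `result_table` rather than trusting the lowest AIC blindly; strongly trending series may also need the differencing order d tuned instead of being fixed at 1.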
<python><forecasting><arima><sarimax>
2023-08-23 07:36:12
2
1,195
raiyan22
76,959,190
9,363,181
Unable to consume data using the latest Pyflink Kafka connector
<p>I am trying to read the data from the <code>Kafka topic</code>. Kafka is set up fine. Now, when I wrote the code using <code>PyFlink</code> and no matter if I add the jars or not, the error remains the same.</p> <pre><code>from pyflink.datastream.connectors.kafka import KafkaSource, KafkaOffsetsInitializer from pyflink.datastream.stream_execution_environment import StreamExecutionEnvironment, RuntimeExecutionMode from pyflink.common import SimpleStringSchema, Configuration class SourceData(object): def __init__(self, env): self.env = env self.env.set_runtime_mode(RuntimeExecutionMode.STREAMING) self.env.set_parallelism(1) self.config = Configuration() self.config.set_string(&quot;pipeline.jars&quot;, &quot;file:///../jars/flink-sql-connector-kafka-1.17.1.jar&quot;) self.env.configure(self.config) def get_data(self): source = KafkaSource.builder() \ .set_bootstrap_servers(&quot;localhost:9092&quot;) \ .set_topics(&quot;test-topic&quot;) \ .set_starting_offsets(KafkaOffsetsInitializer.earliest()) \ .set_value_only_deserializer(SimpleStringSchema()) \ .build() self.env \ .add_source(source) \ .print() self.env.execute(&quot;source&quot;) SourceData(StreamExecutionEnvironment.get_execution_environment()).get_data() </code></pre> <p><strong>Environment</strong>:</p> <ol> <li>Flink 1.17.1</li> <li>Java 11</li> <li>Kafka Client latest one</li> <li>Python 3.10.11</li> </ol> <p>Error:</p> <pre><code>TypeError: Could not found the Java class 'org.apache.flink.connector.kafka.source.KafkaSource.builder'. The Java dependencies could be specified via command line argument '--jarfile' or the config option 'pipeline.jars' </code></pre> <p>I also tried without <code>config</code> option and using <code>env.add_jars</code> but, still the error remains the same. Do I need to configure anything else?</p> <p>The <strong>Second</strong> option I tried was copying the <code>jar</code> to the <code>pyflink&gt;lib</code> inside the <code>site-packages</code> of my virtual environment. 
After doing this, I am getting the below error:</p> <pre><code>py4j.protocol.Py4JError: An error occurred while calling o12.addSource. Trace: org.apache.flink.api.python.shaded.py4j.Py4JException: Method addSource([class org.apache.flink.connector.kafka.source.KafkaSource, class java.lang.String, null]) does not exist </code></pre>
<python><apache-kafka><apache-flink><pyflink>
2023-08-23 07:27:15
1
645
RushHour
76,959,180
10,089,181
While performing column selection using the plydata package in Python, is there any way to select only numeric columns without selecting the boolean ones?
<p>Is there any way to select only the numeric columns and not the boolean columns using the plydata package?</p> <pre><code>titanic_data &gt;&gt; select('-alive') &gt;&gt; select_if('is_numeric') &gt;&gt; select_if('-is_bool') </code></pre> <p>Also, is there any way to deselect a few columns based on the columns found in some other dataframe?</p>
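I don't know of a plydata predicate that does this in one step, but dropping to plain pandas is one possible route; a sketch with invented column names (note that pandas' `"number"` selector already excludes bool, so the explicit `exclude` just documents the intent):

```python
import pandas as pd

df = pd.DataFrame({
    "age": [22, 38],          # numeric  -> keep
    "fare": [7.25, 71.28],    # numeric  -> keep
    "alive": [True, False],   # boolean  -> drop
    "sex": ["m", "f"],        # object   -> drop
})

# numeric columns only, booleans excluded
numeric_only = df.select_dtypes(include="number", exclude="bool")

# deselect columns that also appear in another dataframe
other = pd.DataFrame(columns=["fare", "cabin"])
remaining = df.drop(columns=df.columns.intersection(other.columns))
```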
<python>
2023-08-23 07:25:49
1
404
Ransingh Satyajit Ray
76,958,817
13,994,829
streamlit: Your system has an unsupported version of sqlite3. Chroma requires sqlite3 >= 3.35.0
<p>I previously deployed an app on <strong>Streamlit Cloud</strong> that utilized <code>chromadb</code>.</p> <p>The app worked fine in the past. However, today I encountered a new error (as indicated in the title) and the app has stopped functioning.</p> <p>I attempted to troubleshoot based on <a href="https://discuss.streamlit.io/t/issues-with-chroma-and-sqlite/47950" rel="noreferrer">solutions from the Streamlit forum</a> and performed the following steps sequentially:</p> <ol> <li>Updated the <code>requirements.txt</code> file by adding <code>pysqlite3-binary</code>.</li> <li>Added the following three lines of code at the top of <code>app.py</code>:</li> </ol> <pre class="lang-py prettyprint-override"><code>__import__('pysqlite3') import sys sys.modules['sqlite3'] = sys.modules.pop('pysqlite3') </code></pre> <p>After <strong>rebooting</strong> my app, I discovered the new error:</p> <pre><code>ModuleNotFoundError: No module named 'pysqlite3' Traceback: File &quot;/home/adminuser/venv/lib/python3.9/site-packages/streamlit/runtime/scriptrunner/script_runner.py&quot;, line 552, in _run_script exec(code, module.__dict__) File &quot;/mount/src/docgpt-streamlit/app.py&quot;, line 2, in &lt;module&gt; import pysqlite3 </code></pre> <p>Subsequently, I tried adding <code>pysqlite3</code> again to <code>requirements.txt</code>, but the error persisted.</p> <p>According to the <strong>logs</strong> from <strong>manage app</strong>, I observed that Streamlit did not perform a <strong>re-pip install</strong> action.</p> <p><a href="https://i.sstatic.net/TiUaw.png" rel="noreferrer"><img src="https://i.sstatic.net/TiUaw.png" alt="enter image description here" /></a></p> <p>Could this be causing the pysqlite error? If so, how can I correctly enable the Streamlit app to automatically pip install due to my updated <code>requirements.txt</code>?</p>
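As a side note, two details tend to matter here: the pip package is `pysqlite3-binary` while the importable module is named `pysqlite3`, and the swap must run before anything imports `chromadb` (which imports `sqlite3` at import time). A guarded version of the swap that falls back to the stdlib module when the wheel is missing:

```python
import sys

try:
    __import__("pysqlite3")
    # make every later "import sqlite3" resolve to the pysqlite3 build
    sys.modules["sqlite3"] = sys.modules.pop("pysqlite3")
except ImportError:
    pass  # wheel not installed: keep the system sqlite3

import sqlite3

# whichever module won, this reports the linked SQLite version
version = sqlite3.sqlite_version
```

With the guard in place the app still starts locally (where `pysqlite3` is usually absent) while picking up the newer SQLite on the deployed host once the wheel installs.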
<python><streamlit><chromadb>
2023-08-23 06:31:40
10
545
Xiang
76,958,642
2,881,414
Using MonthLocator + ConciseDateFormatter, how can I make the labels for years in bold?
<p>I'm creating a Gantt chart using matplotlib, after populating the plot with data spanning over several years and fiddeling with the x-axis like so:</p> <pre class="lang-py prettyprint-override"><code>ax.xaxis.set_minor_locator(mdates.WeekdayLocator(byweekday=mdates.MO)) ax.xaxis.set_major_locator(mdates.MonthLocator()) ax.xaxis.set_major_formatter( mdates.ConciseDateFormatter(mdates.AutoDateLocator()) ) ax.set_xlim(MIN_DATE, MAX_DATE) ax.xaxis.remove_overlapping_locs = False </code></pre> <p>I get a decent x-axis, with the major ticks being the months and the minor ticks the weeks, and every new year, the label for the month is replaced with the year.</p> <p><a href="https://i.sstatic.net/vQ6L3.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/vQ6L3.png" alt="X-Axis with Years and Months" /></a></p> <p>The problem is now, that the year labels are a bit hard to spot. My question is, is it possible to get the years in <strong>bold</strong> just so they stand a bit out from the months?</p>
<python><matplotlib>
2023-08-23 06:00:49
0
17,530
Bastian Venthur
76,958,590
11,357,695
Plotly DASH output has no URL and unexpected behaviour
<p>I have written a DASH app, but when I run it I don't get a URL to the local host printed to my console, with the script staying open as long as the app is active. Instead, the script terminates and the final output to my console is:</p> <pre><code>starting app... &lt;IPython.lib.display.IFrame at 0x238562976a0&gt; In [19]: </code></pre> <p>I would expect:</p> <pre><code>Dash is running on http://127.0.0.1:8050/ * Serving Flask app 'app' (lazy loading) * Environment: production WARNING: This is a development server. Do not use it in a production deployment. Use a production WSGI server instead. * Debug mode: off * Running on http://127.0.0.1:8050 (Press CTRL+C to quit) </code></pre> <p>I am working in a conda environment, DASH/plotly/python versions are:</p> <pre><code>plotly 5.16.1 pypi_0 pypi dash 2.11.1 pypi_0 pypi dash-core-components 2.0.0 pypi_0 pypi dash-html-components 2.0.0 pypi_0 pypi dash-table 5.0.0 pypi_0 pypi python 3.10.9 h966fe2a_2 </code></pre> <p>DASH was installed via pip as suggested <a href="https://dash.plotly.com/installation" rel="nofollow noreferrer">here</a>, and plotly was bundled with DASH rather than explicitly installed by me - initially I installed both separately from conda forge, and this gave me the normal behaviour I would expect (URL printed to console etc), but the app itself was <a href="https://community.plotly.com/t/modulenotfounderror-no-module-named-plotly-io-json/77866" rel="nofollow noreferrer">broken</a> (so I remade my environment and got the issues described here).
The app itself does seem to run when I navigate to my local host in my browser, but I suspect it will not update as the app seems to have stopped running in the console.</p> <p>My app is:</p> <pre><code>import plotly.graph_objects as go from dash import Dash, callback, html, dcc, dash_table, Input, Output, State, MATCH, ALL app = Dash(__name__) user_files = list(app_data.keys()) app.layout = html.Div([dcc.Tabs(id = 'user_tabs', value = 'Single metric', vertical = True, children = [dcc.Tab(label = 'Single metric', children=[html.Div([dcc.Dropdown(user_files, user_files[0], id='json_dropdown') ]), html.Div([dcc.Dropdown(id='record_dropdown') ]), html.Div([dcc.Dropdown(id='bgc_dropdown') ]), html.Div([dcc.Dropdown(['vis1', 'vis2', 'vis3', 'vis4'], 'vis1', id='vis_dropdown') ]), html.Div([dcc.Dropdown(mib_values, mib_values[0], id='mib_value') ],id = 'mib_tab_style'), html.Div([dcc.Dropdown(operations, operations[0], id='mib_operation') ],id = 'operation_tab_style'), html.Div([dcc.Graph(id='single_graph')], style = {'width': '50%'}), ])] )]) if __name__ == '__main__': print ('launching app') app.run(debug = False) </code></pre>
<python><plotly><conda><plotly-dash><anaconda3>
2023-08-23 05:50:25
1
756
Tim Kirkwood
76,958,501
8,471,995
A TypeVar for a function argument which is Optional?
<p>I have a base class to specify the input and output of a function.:</p> <pre class="lang-py prettyprint-override"><code>import typing as ty import abc import typing_extensions as te In = ty.TypeVarTuple(&quot;In&quot;) Out = ty.TypeVar(&quot;Out&quot;) # Base is actually from a library. And it has no type hints. class Base: def inner_function(self, *args, **kwargs): ... def function(self, *args, **kwargs): # do various operations along with calling `inner_function` out = self.inner_function(*args, **kwargs) # do more stuff return out # This class is just for type hinting. class TypeHintedBase(Base, ty.Generic[te.Unpack[In], Out]): def function(self, *args: te.Unpack[In]) -&gt; Out: return super().function(*args) </code></pre> <p>I used <code>TypeVarTuple</code> because <code>function</code> may receive multiple inputs.</p> <p>The problem occurred when I tried to give an optional argument.</p> <pre class="lang-py prettyprint-override"><code> class Subclass(TypeHintedBase[int, int | None, int]): def inner_function(self, a: int, b: int | None = None) -&gt; int: ... Subclass().function(1) # &lt;- Type checker (pylance) complains here </code></pre> <p>In the last line, my type checker complains that it requires another positional argument. I think it's because the type-hinting on the class definition has no default value.</p> <p>But I can't think of a way to type hint that it has a default value.</p> <p>In essence I want to do something like:</p> <pre><code>class Subclass(typeHintedBase[int, int | None = None, int]): pass </code></pre> <p>Is there a method for solving this problem?</p> <p>python version: 3.10.12</p> <p>Additional context: This happened during implementation of an artificial neural network using <code>pytorch</code>. Where <code>function</code> is <code>torch.nn.Module.__call__</code> and <code>inner_function</code> is <code>torch.nn.Module.forward</code>.</p>
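A tuple-of-types parametrization cannot carry default values, so one workaround (shown as a standalone sketch, not a drop-in for the `Base` hierarchy above) is to describe the call signature with a callback `Protocol`, which can spell a default directly:

```python
from typing import Optional, Protocol


class IntFn(Protocol):
    # a callable taking an int plus an optional second int; the "= None"
    # is what tells the type checker the argument may be omitted
    def __call__(self, a: int, b: Optional[int] = None) -> int: ...


def inner(a: int, b: Optional[int] = None) -> int:
    return a if b is None else a + b


f: IntFn = inner
result_one = f(1)      # fine for the checker: b has a default
result_two = f(1, 2)
```

In the hinting-only subclass, `function` could then be annotated as an attribute of such a protocol type instead of being parametrized through `TypeVarTuple`; the trade-off is writing one protocol per signature rather than one generic base.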
<python><pytorch><python-typing><python-3.10>
2023-08-23 05:32:26
0
1,617
Inyoung Kim 김인영
76,958,423
1,903,691
How to MagicMock Airflow context in python?
<p>I am a newbie to Python/Airflow and am trying to use MagicMock for unit test cases. I was able to use it in a few cases, but in one case I am struggling to get it working.</p> <p>Method which I want to test (UT):</p> <pre><code> def resolve_sensitive_params(self, localvalue_paramset: dict, context, **kwargs): tenantname = context[strings.str_dag].params[strings.str_AF_tenant_name] api_url = &quot;https://abcd.com/uri&quot; apiurl = f&quot;{api_url}/v3/int/{tenantname}/localsecrets&quot; * * * return latest_secret_vals </code></pre> <p>In my test case method I want to mock 'context', or I need to send a sample dict/object which will work in the above method.</p> <p>I have tried a few things like the below, but they did not work for me -</p> <pre><code> context = MagicMock() context.dag.return_value = MagicMock({&quot;params&quot;:{&quot;o9AF_tenant_name&quot;:&quot;test&quot;}}) #context.return_value = {&quot;dag&quot;: &quot;test&quot;} # o9AF_tenant_name #context.return_value = MagicMock(tenant_name=&quot;testtenant&quot;) #context.dag.params.o9AF_tenant_name = &quot;testtenant&quot; #context[strings.str_dag].params[strings.str_o9AF_tenant_name] = MagicMock(&quot;okok&quot;) make_request.return_value.json.return_value = {&quot;ParameterValue&quot;: &quot;secretval&quot;} response = dag_init.resolve_sensitive_params(secret_pramset, context) print(response) </code></pre> <p>Any help would be appreciated.</p> <p>Thanks, Mahendra</p>
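Since `resolve_sensitive_params` only subscripts `context` and reads `.params`, the mock does not need to be elaborate; a plain dict holding a `MagicMock` dag is usually enough. The key names below are assumptions matching `strings.str_dag` / `strings.str_AF_tenant_name` from the snippet:

```python
from unittest.mock import MagicMock

# assumed to match strings.str_dag and strings.str_AF_tenant_name
STR_DAG = "dag"
STR_TENANT = "o9AF_tenant_name"

dag = MagicMock()
dag.params = {STR_TENANT: "testtenant"}   # a real dict, so params[...] works
context = {STR_DAG: dag}                  # a real dict, so context[...] works

# this is exactly the access pattern the method under test performs
tenant = context[STR_DAG].params[STR_TENANT]
```

A bare `MagicMock()` also supports `context[...]` through `__getitem__`, but you would have to configure `context.__getitem__.return_value.params = {...}`; the dict version reads much closer to the real Airflow context.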
<python><python-3.x><airflow><airflow-2.x>
2023-08-23 05:10:52
1
397
MHegde
76,957,974
4,013,571
Why does `json.dumps` yield an empty (stringified) dict when serializing a dataclass that subclasses `dict`?
<p>Why does the following</p> <pre class="lang-py prettyprint-override"><code>import json from dataclasses import dataclass from typing import Dict @dataclass class Foo(dict): bar: Dict example = Foo(bar={'spam': 'eggs'}) json.dumps(example) </code></pre> <p>yield an empty dict in json string form</p> <pre><code>'{}' </code></pre> <p><em>Note: I'm aware that this isn't a sensible struct, rather, just interested in the reason behind the result!</em></p>
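The reason: `json.dumps` serializes any `dict` subclass from its *mapping* contents, and `@dataclass` stores `bar` as an ordinary instance attribute (in `__dict__`) without ever inserting it into the dict part, which therefore stays empty. A sketch demonstrating this, plus one way to serialize the fields instead:

```python
import dataclasses
import json
from typing import Dict


@dataclasses.dataclass
class Foo(dict):
    bar: Dict


example = Foo(bar={"spam": "eggs"})

empty_mapping = dict(example)        # the mapping part really is empty
as_attr = example.bar                # the field lives on the instance
dumped = json.dumps(example)         # json sees the mapping, hence '{}'

# serialize the dataclass fields rather than the (empty) dict contents
via_asdict = json.dumps(dataclasses.asdict(example))
```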
<python><json><dictionary><python-dataclasses>
2023-08-23 02:57:29
0
11,353
Alexander McFarlane
76,957,870
13,849,446
Python Threads Not Writing to File
<p>I have a threaded class in which I am trying to append data to a file, but nothing happens: no error, no success. The following is the basic structure of the code:</p> <pre><code>from threading import Thread class ABC: def func1(self, user): # Do processing self.func2(lst) # lst Generated in processing def func2(self, lst): thrd_list = [] for ele in lst: x = Thread(target=self.func3, args=(ele, )) x.start() thrd_list.append(x) for thrd in thrd_list: thrd.join() def func3(self, ele): # Do some stuff and if successful write to file OUTPUT.write(f&quot;{ele}\n&quot;) with open(&quot;users.txt&quot;, &quot;r&quot;) as f: users = f.readlines() OUTPUT = open(&quot;result.txt&quot;, &quot;a&quot;) thrd_list = [] for user in users: new_obj = ABC() x = Thread(target=new_obj.func1, args=(user, )) x.start() thrd_list.append(x) for thrd in thrd_list: thrd.join() OUTPUT.close() </code></pre> <p>The data is only written when <code>OUTPUT.close()</code> is reached. I want it to append as it goes, so there is no data loss due to crashes or bugs.</p>
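The behaviour described is Python's file buffering: writes sit in the interpreter's buffer until it fills or the file is closed. If the goal is durability before `close()`, flushing after each write (inside a lock, since many threads share the handle) should help. A self-contained sketch using a temp file in place of `result.txt`:

```python
import os
import tempfile
import threading

write_lock = threading.Lock()

def append_line(out, value):
    # one thread at a time: write, flush Python's buffer, then ask the
    # OS to push the bytes to disk so a crash loses as little as possible
    with write_lock:
        out.write(f"{value}\n")
        out.flush()
        os.fsync(out.fileno())

tmp = tempfile.NamedTemporaryFile(delete=False, suffix=".txt")
tmp.close()
path = tmp.name

with open(path, "a") as out:
    threads = [threading.Thread(target=append_line, args=(out, i)) for i in range(5)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()

with open(path) as f:
    lines = f.read().splitlines()
```

`os.fsync` is optional and costs throughput; `flush()` alone already survives a Python-level crash, while `fsync` also survives an OS crash.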
<python><multithreading><file><python-multithreading><file-writing>
2023-08-23 02:22:19
3
1,146
farhan jatt
76,957,841
6,457,407
Use of _var to indicate unused variables in Python
<p>So I know that it is common in Python to use <code>_</code> to indicate a variable whose value is being ignored. So for example:</p> <pre><code>first, *_, last = my_list </code></pre> <p>In some code I've been looking at, I've started to see <code>_var</code> being used instead of just <code>_</code> for unused local variables. So for example, a method that overrides a superclass's method might have:</p> <pre><code>class MyLogger: def log(self, message, _urgency): ... </code></pre> <p>to make it clear what is being elided. And if a subclass wants to override this and bring that argument back, it is clearer what is going on. Similarly:</p> <pre><code> name, _address, city, state = my_list </code></pre> <p>makes it clearer what's being skipped, and helps if the code needs to be modified in the future.</p> <p>Both PyLint and PyCharm don't issue warning messages for unused local variables starting with <code>_</code>, so this is clearly a &quot;thing&quot;, even if it's not sanctioned.</p> <p>I'm trying to figure out whether this use of an underscore on local variables is at all sanctioned or part of any PEP or anything. This use of _ isn't going to be confused with private fields, all of which have to have an object and a period before the underscore. Private globals will typically be uppercase.</p>
<python>
2023-08-23 02:11:27
0
11,605
Frank Yellin
76,957,612
4,367,371
GCP SQL Server Python connection (sqlalchemy) Error
<p>I have the following code:</p> <pre><code>import sqlalchemy as sa db_connect_string = 'driver://username:password@server/db' engine = sa.create_engine(db_connect_string) connection = engine.connect() </code></pre> <p>In the connection string, driver is <code>mssql</code>, username is <code>sqlserver</code>, and server is the GCP public server IP.</p> <p>In the GCP console, under connections, public IP connectivity is enabled and my current machine IP is whitelisted. I tested the connection in SSMS and am able to successfully connect to the server via SSMS.</p> <p>I get the following error:</p> <pre><code>sqlalchemy.exc.InterfaceError: (pyodbc.InterfaceError) ('IM002', '[IM002] [Microsoft][ODBC Driver Manager] Data source name not found and no default driver specified (0) (SQLDriverConnect)') (Background on this error at: https://sqlalche.me/e/20/rvf5) </code></pre> <p>I have tried multiple other drivers (such as <code>mssql+pyodbc</code> and <code>SQL Server</code>) and checked my system drivers, but it still does not work with other drivers.</p> <p>The error is unfortunately nondescript and I cannot figure out why the connection is failing, but it works just fine via SSMS.</p>
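For context, `IM002` from the ODBC Driver Manager usually means no installed ODBC driver was named: the URL has to spell out the driver, since `mssql://` alone does not (you can list installed drivers with `odbcinst -q -d` on Linux, or the ODBC Data Sources app on Windows). A sketch of building a DSN-less `mssql+pyodbc` URL with the driver explicit; the IP and credentials are placeholders:

```python
from urllib.parse import quote_plus

# hypothetical values -- substitute your GCP public IP and credentials,
# and whichever "ODBC Driver 1x for SQL Server" is actually installed
odbc_str = (
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=34.123.45.67;"
    "DATABASE=mydb;"
    "UID=sqlserver;"
    "PWD=secret"
)
url = "mssql+pyodbc:///?odbc_connect=" + quote_plus(odbc_str)

# then: engine = sa.create_engine(url)
```

SSMS succeeding while pyodbc fails is consistent with this: SSMS uses its own client library and never consults the ODBC driver list.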
<python><sql><google-cloud-platform><sqlalchemy><pyodbc>
2023-08-23 00:41:13
2
3,671
Mustard Tiger
76,957,539
1,563,654
How can I receive firebase messages using python (or API)
<p>I have a fleet of devices running python and need to be able to send messages to/from them each individually. I know that it is possible to send a message to an individual InstanceID token, but I can't figure out how to register a device to get an InstanceID token and then listen for messages.</p> <p>I was hoping to have something like <a href="https://cloud.google.com/python/docs/reference/pubsub/latest/index.html#subscribing" rel="nofollow noreferrer">https://cloud.google.com/python/docs/reference/pubsub/latest/index.html#subscribing</a>, where it would listen for messages for just that client. What I want is</p> <pre><code>client = messaging.client() while true: msg = client.getMessage() print(msg) </code></pre> <p>I'm open to doing it with the REST API too, but can't find a way to do it there either. I've look at the <a href="https://firebase.google.com/docs/reference/admin/python/firebase_admin.messaging" rel="nofollow noreferrer">python docs</a> and the <a href="https://firebase.google.com/docs/reference/fcm/rest/v1/projects.messages" rel="nofollow noreferrer">API docs</a>.</p> <p>I'm open to other ideas to accomplish this, like pub/sub, but that doesn't seem to support the individual device mapping. I could have a topic per device, but there's an upper limit of topics that may be too low.</p>
<python><firebase><firebase-cloud-messaging><google-cloud-pubsub><messaging>
2023-08-23 00:17:10
1
3,921
Daniel Watrous
76,957,535
160,808
NPM won't install serialport on raspberry pi 1b
<p>I ran npm install serialport but I get a python error</p> <pre><code>alfred@alfred:~/AccentaG4/src/mpu/server $ python --version Python 3.9.2 &gt; @serialport/bindings@9.2.8 install /home/alfred/AccentaG4/src/mpu/server/node_modules/@serialport/bindings &gt; prebuild-install --tag-prefix @serialport/bindings@ || node-gyp rebuild prebuild-install warn install No prebuilt binaries found (target=11.15.0 runtime=node arch=arm libc= platform=linux) gyp ERR! configure error gyp ERR! stack Error: Command failed: /usr/bin/python -c import sys; print &quot;%s.%s.%s&quot; % sys.version_info[:3]; gyp ERR! stack File &quot;&lt;string&gt;&quot;, line 1 gyp ERR! stack import sys; print &quot;%s.%s.%s&quot; % sys.version_info[:3]; gyp ERR! stack ^ gyp ERR! stack SyntaxError: invalid syntax gyp ERR! stack gyp ERR! stack at ChildProcess.exithandler (child_process.js:299:12) gyp ERR! stack at ChildProcess.emit (events.js:193:13) gyp ERR! stack at maybeClose (internal/child_process.js:999:16) gyp ERR! stack at Socket.stream.socket.on (internal/child_process.js:403:11) gyp ERR! stack at Socket.emit (events.js:193:13) gyp ERR! stack at Pipe._handle.close (net.js:614:12) gyp ERR! System Linux 6.1.21+ gyp ERR! command &quot;/usr/local/node-v11.15.0-linux-armv6l/bin/node&quot; &quot;/usr/local/node-v11.15.0-linux-armv6l/lib/node_modules/npm/node_modules/node-gyp/bin/node-gyp.js&quot; &quot;rebuild&quot; gyp ERR! cwd /home/alfred/AccentaG4/src/mpu/server/node_modules/@serialport/bindings gyp ERR! node -v v11.15.0 gyp ERR! node-gyp -v v3.8.0 gyp ERR! not ok npm WARN ws@8.13.0 requires a peer of bufferutil@^4.0.1 but none is installed. You must install peer dependencies yourself. npm WARN ws@8.13.0 requires a peer of utf-8-validate@&gt;=5.0.2 but none is installed. You must install peer dependencies yourself. npm WARN server@1.0.0 No description npm WARN server@1.0.0 No repository field. npm ERR! code ELIFECYCLE npm ERR! errno 1 npm ERR! 
@serialport/bindings@9.2.8 install: `prebuild-install --tag-prefix @serialport/bindings@ || node-gyp rebuild` npm ERR! Exit status 1 npm ERR! npm ERR! Failed at the @serialport/bindings@9.2.8 install script. npm ERR! This is probably not a problem with npm. There is likely additional logging output above. </code></pre> <p>Here's the full log:</p> <p><a href="https://docs.google.com/document/d/1awB3tnTxyS4MEFdzmloECIoeFzl9fGo2UjV6OjwOVnE/edit?usp=sharing" rel="nofollow noreferrer">https://docs.google.com/document/d/1awB3tnTxyS4MEFdzmloECIoeFzl9fGo2UjV6OjwOVnE/edit?usp=sharing</a></p> <p>I am wondering do</p>
<python><raspberry-pi>
2023-08-23 00:13:10
1
2,311
Ageis
76,957,511
6,031,995
Pandas groupby one column, agg another and select from a third
<p>I have a dataframe with 3 columns,</p> <pre><code>filename,area,fidx 1,123,0 1,45,1 1,6546,2 1,23,3 1,435,4 .... </code></pre> <p>I want to select 4 frames <code>fidx</code> for each filename such that they are the <code>min</code>, <code>max</code>, <code>33 percentile</code> and <code>66 percentile</code> of the area. Each filename has about 100+ frames. The purpose of the selection is to have a spread, so it doesn't necessarily have to be the exact 33/66 percentile. The expected output would be</p> <pre><code>filename,area,fidx 1,123,0 1,6546,2 1,23,3 1,435,4 </code></pre> <p>I know this involves a groupby and aggregating the area, but then I'm lost.</p>
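One possible shape for this: per group, pick the rows whose `area` is nearest to each target quantile (exact percentile rows rarely exist, so "nearest" gives the spread described; column names mirror the question). Grouping by `df["filename"]` as a Series keeps all columns visible inside `apply`:

```python
import pandas as pd

df = pd.DataFrame({
    "filename": [1, 1, 1, 1, 1],
    "area": [123, 45, 6546, 23, 435],
    "fidx": [0, 1, 2, 3, 4],
})

QUANTILES = (0.0, 0.33, 0.66, 1.0)  # min, ~33rd, ~66th, max

def pick_spread(g: pd.DataFrame) -> pd.DataFrame:
    # index label of the row whose area is closest to each quantile target
    targets = [g["area"].quantile(q) for q in QUANTILES]
    idx = {(g["area"] - t).abs().idxmin() for t in targets}
    return g.loc[sorted(idx)]

out = df.groupby(df["filename"], group_keys=False).apply(pick_spread)
```

With ~100 frames per filename the four chosen rows will usually be distinct; with tiny groups duplicates collapse, which the set handles.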
<python><pandas><dataframe>
2023-08-23 00:05:51
2
14,194
Kenan
76,957,480
3,247,006
Is it necessary to use `cache.close()` after finishing with the cache in Django views?
<p>I found <a href="https://docs.djangoproject.com/en/4.2/topics/cache/#django.core.cache.cache.close" rel="nofollow noreferrer">cache.close()</a>, described as follows in <a href="https://docs.djangoproject.com/en/4.2/topics/cache/" rel="nofollow noreferrer">the doc</a>. *I'm learning <a href="https://docs.djangoproject.com/en/4.2/topics/cache/" rel="nofollow noreferrer">Django Cache</a>:</p> <blockquote> <p>You can close the connection to your cache with close() if implemented by the cache backend.</p> </blockquote> <p>So, I use <code>cache.close()</code> after I finish using the cache in Django views, as shown below:</p> <pre class="lang-py prettyprint-override"><code># &quot;views.py&quot; from django.core.cache import cache from django.http import HttpResponse def test(request): cache.set(&quot;name&quot;, &quot;John&quot;) cache.set(&quot;age&quot;, 36) print(cache.get(&quot;name&quot;)) # John print(cache.get(&quot;age&quot;)) # 36 cache.close() # Here return HttpResponse(&quot;Test&quot;) </code></pre> <p>My questions:</p> <ol> <li>Is it necessary to use <code>cache.close()</code> after finishing with the cache in Django views?</li> <li>If <code>cache.close()</code> is not used after finishing with the cache in Django views, is there anything bad about that?</li> </ol>
<python><django><django-views><django-cache><django-caching>
2023-08-22 23:58:35
1
42,516
Super Kai - Kazuya Ito
76,957,474
4,822,772
Python to detect point in polygons
<p>First I import a shapefile</p> <pre><code>import geopandas as gpd shapefile_path = &quot;data/TRI_75_SIG_DI/TRI_PARI_SIG_DI/n_tri_pari_carte_inond_s_075.shp&quot; gdf = gpd.read_file(shapefile_path) </code></pre> <p>And it works fine.</p> <p><a href="https://i.sstatic.net/wgT2A.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/wgT2A.png" alt="enter image description here" /></a></p> <p>Then I can display the polygons :</p> <pre><code>import folium m = folium.Map(location=[48.8566, 2.3522], zoom_start=12) folium.GeoJson(gdf).add_to(m) m </code></pre> <p>And it also works</p> <p><a href="https://i.sstatic.net/bvffm.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/bvffm.jpg" alt="enter image description here" /></a></p> <p>Now, I choose a point, and we can see that it is in the polygons:</p> <pre><code>import geopandas as gpd from shapely.geometry import Point point_coordinates = (2.3546, 48.8517) point = Point(point_coordinates) m = folium.Map(location=[point_coordinates[1], point_coordinates[0]], zoom_start=13) folium.Marker( location=[point_coordinates[1], point_coordinates[0]], icon=folium.Icon(color='blue'), popup=&quot;Point d'intérêt&quot; ).add_to(m) m </code></pre> <p><a href="https://i.sstatic.net/3FEfs.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/3FEfs.png" alt="enter image description here" /></a></p> <p>And I used the following code, that does not work because the result is empty, whereas there should be some polygons that contain this point:</p> <pre><code>gdf.geometry.contains(point) </code></pre> <p>Could you please help?</p>
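Two things seem worth checking first: the CRS (if `gdf.crs` is not EPSG:4326 the lon/lat point can never fall inside; `gdf.to_crs(4326)` converts it), and the fact that `contains` returns an element-wise boolean Series, so the matching rows are `gdf[gdf.contains(point)]` rather than the Series itself. Independently of geopandas, a point-in-polygon test is small enough to sanity-check with plain ray casting; the square below is a made-up polygon around central Paris, in (lon, lat) order as shapely expects:

```python
def point_in_polygon(pt, polygon):
    """Ray casting: a point is inside iff a horizontal ray going right
    from it crosses the polygon boundary an odd number of times."""
    x, y = pt
    inside = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        if (y1 > y) != (y2 > y):  # edge straddles the ray's height
            # x coordinate where the edge crosses the horizontal line at y
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

# hypothetical square around central Paris (lon, lat)
square = [(2.34, 48.84), (2.36, 48.84), (2.36, 48.86), (2.34, 48.86)]
hit = point_in_polygon((2.3546, 48.8517), square)
miss = point_in_polygon((2.40, 48.90), square)
```

If this standalone check says the point is inside a polygon's coordinates but `gdf.contains(point)` is all False, a CRS or axis-order mismatch is the most likely culprit.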
<python><point-in-polygon>
2023-08-22 23:57:13
0
1,718
John Smith
76,957,353
21,575,627
Unpacking into a list from a comma-separated right hand side
<p>The syntax:</p> <pre><code>&gt;&gt;&gt; x = 1, 2 </code></pre> <p>creates a tuple where <code>x -&gt; (1, 2)</code>. Is it possible to create a <code>list</code> in this way? I know that you can do the two-liner:</p> <pre><code>&gt;&gt;&gt; x = 1, 2 &gt;&gt;&gt; x = list(x) </code></pre> <p>Is there a more efficient operation for this purpose?</p> <p>I'm thinking no, since a simple:</p> <pre><code>&gt;&gt;&gt; 1, 2, 3 (1, 2, 3) </code></pre> <p>shows a tuple being generated, so now I am thinking that something like:</p> <pre><code>&gt;&gt;&gt; a, b = x, y </code></pre> <p>may not be wholly efficient, since the right-hand side generates a tuple first. Is this true?</p>
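For the record, the bracketed display is the single-step way to get a list, and current CPython confirms the suspicion about parallel assignment in the other direction: for a small `a, b = x, y` the compiler swaps values on the stack instead of building a tuple. A quick check:

```python
import dis

# a list display builds the list directly -- no intermediate tuple
x = [1, 2]

# starred targets also produce a list in one step
first, *rest = 1, 2, 3

# CPython compiles a 2-element parallel assignment without BUILD_TUPLE
ops = [ins.opname for ins in dis.get_instructions("a, b = x, y")]
```

So `x = list((1, 2))` is never needed for a literal; write `x = [1, 2]`, and small multiple assignments are already tuple-free at the bytecode level.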
<python>
2023-08-22 23:16:37
2
1,279
user129393192
76,957,217
2,981,639
Best practice for adding a conflicting package with Poetry
<p>I have a project which uses <code>awscli</code> - the version I have installed is <code>1.29.29</code></p> <p>I want to add <code>tox</code> so I ran <code>poetry add tox -G dev</code> and it failed</p> <pre><code>Because no versions of awscli match &gt;1.29.29,&lt;1.29.30 || &gt;1.29.30,&lt;1.29.31 || &gt;1.29.31,&lt;1.29.32 || &gt;1.29.32,&lt;2.0.0 and awscli (1.29.30) depends on colorama (&gt;=0.2.5,&lt;0.4.5), awscli (&gt;1.29.29,&lt;1.29.31 || &gt;1.29.31,&lt;1.29.32 || &gt;1.29.32,&lt;2.0.0) requires colorama (&gt;=0.2.5,&lt;0.4.5). And because awscli (1.29.31) depends on colorama (&gt;=0.2.5,&lt;0.4.5), awscli (&gt;1.29.29,&lt;1.29.32 || &gt;1.29.32,&lt;2.0.0) requires colorama (&gt;=0.2.5,&lt;0.4.5). And because awscli (1.29.32) depends on colorama (&gt;=0.2.5,&lt;0.4.5) and awscli (1.29.29) depends on colorama (&gt;=0.2.5,&lt;0.4.5), awscli (&gt;=1.29.29,&lt;2.0.0) requires colorama (&gt;=0.2.5,&lt;0.4.5). Because no versions of tox match &gt;4.10.0,&lt;5.0.0 and tox (4.10.0) depends on colorama (&gt;=0.4.6), tox (&gt;=4.10.0,&lt;5.0.0) requires colorama (&gt;=0.4.6). Thus, tox (&gt;=4.10.0,&lt;5.0.0) is incompatible with awscli (&gt;=1.29.29,&lt;2.0.0). So, because warpspeed-multiclass-model depends on both tox (^4.10.0) and awscli (^1.29.29), version solving failed. </code></pre> <p>Not providing a version for tox seems to have been interpreted as <code>&gt;4.10.0,&lt;5.0.0</code> - is that correct?</p> <p>What is the best practice - I ran <code>poetry add &quot;tox&lt;=4.10.0&quot; -G dev</code> and it found a solution (tox==3.28.0) but is there a way of solving &quot;I need tox <code>&gt;4.10.0,&lt;5.0.0</code> and you can upgrade/downgrade other packages (specifically <code>awscli</code>, <code>sagemaker</code> and <code>boto3</code> from the <code>sagemaker</code> group) to suit?</p>
<python><python-poetry><tox>
2023-08-22 22:36:06
1
2,963
David Waterworth
76,957,161
3,380,902
Pandas DataFrame : Transform columns to JSON strings
<p>I have the following Pandas DataFrame, and I'd like to transform two of the columns to JSON strings and save them in the DataFrame.</p> <pre><code>import pandas as pd # Create a list of data data = [ ['San Francisco', '10', 2023, 1000000], ['San Francisco', '8', 2022, 900000], ['Los Angeles', '12', 2023, 800000], ['Los Angeles', '9', 2022, 700000], ] # Create a DataFrame df = pd.DataFrame(data, columns=['market', 'properties', 'year', 'home_values']) # Print the DataFrame print(df) </code></pre> <p>Expected result:</p> <pre><code> market year home values 0 San Francisco 2023 {&quot;year&quot;: [2023, 2022], &quot;properties&quot;: [10, 8], &quot;home_values&quot;: [1000000, 900000]} 1 Los Angeles 2023 {&quot;year&quot;: [2023, 2022], &quot;properties&quot;: [12, 9], &quot;home_values&quot;: [800000, 700000]} </code></pre>
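One possible shape for this (assuming, per the expected output, one row per market keeping the max year): group by `market` and build the JSON string per group. Grouping by `df["market"]` as a Series keeps every column visible inside `apply`:

```python
import json

import pandas as pd

data = [
    ["San Francisco", "10", 2023, 1000000],
    ["San Francisco", "8", 2022, 900000],
    ["Los Angeles", "12", 2023, 800000],
    ["Los Angeles", "9", 2022, 700000],
]
df = pd.DataFrame(data, columns=["market", "properties", "year", "home_values"])

def summarize(g: pd.DataFrame) -> pd.Series:
    # one output row per market: latest year + the group's columns as JSON
    return pd.Series({
        "year": int(g["year"].max()),
        "home_values": json.dumps({
            "year": g["year"].tolist(),
            "properties": g["properties"].tolist(),
            "home_values": g["home_values"].tolist(),
        }),
    })

out = df.groupby(df["market"], sort=False).apply(summarize).reset_index()
```

`tolist()` converts the numpy integers to plain Python ints, so `json.dumps` accepts them without a custom encoder.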
<python><pandas><dataframe>
2023-08-22 22:22:05
1
2,022
kms
76,957,046
2,680,879
Using streamlit to build question/answer app with history
<p>It is quite simple to build a question / answer app using streamlit, but ideally I'd like to be able to allow the user to keep asking questions while continuing to show the previous questions / answers, instead of reusing the same box to ask new questions.</p> <p>The problem is that I can't use a while loop in streamlit as it runs all the code on load. I tried creating a <code>button</code> that when clicked triggered the &quot;ask&quot; code with a unique counter passed into the <code>streamlit.text_input</code> call, but it isn't working as I'd hoped.</p> <p>Is there a way to do this using streamlit?</p>
<python><streamlit>
2023-08-22 21:51:16
1
1,805
Atul Bhatia
76,956,967
13,944,456
Organize/run python notebook cells (or functions) as a flow diagram/ mermaid chart
<p>I use python as part of my daily workflow mostly for modeling and data analysis and I've been dying to use some system similar to the one I outlined in this picture (here I am using <a href="https://obsidian.md/" rel="nofollow noreferrer">obsidian</a> canvas as an example). My dream is to have something like this with one click snapshot to save current cell layouts and code for quick and dirty version control with simple notes. Are there any existing systems that come close to this? I have not messed with apache airflow but seems to be similar but much more production oriented instead of proof of concept/prototyping oriented</p> <p>ideal software would</p> <ul> <li>be lightweight</li> <li>allow drag and drop cells and flow lines</li> <li>click into the cell to open up code</li> <li>one click snapshots with timestamp and user note (ie changes made notes)</li> <li>only allow variable inheritance from upstream cell blocks.</li> </ul> <p>The idea here is that I have to quickly experiment with so many variations of data, filtering, modeling approaches, dataset validations etc. So I want to remember what has worked so far without worrying about making perfect/permanent code. Once proof of concept is established I move from notebooks to python package etc.</p> <p><a href="https://i.sstatic.net/iIM1Q.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/iIM1Q.png" alt="enter image description here" /></a></p>
<python><jupyter-notebook>
2023-08-22 21:30:03
1
372
Phillip Maire
76,956,875
4,958,604
Mac OS plist creation triggers notification
<p>Currently I am trying to set up some automation on my Mac using launchagent plist files and various other mechanisms provided by macOS.</p> <p>How I programmatically create a plist file:</p> <pre><code>import plistlib data = { 'Label': 'some label', 'ProgramArguments': [path/to/shell_script], 'RunAtLoad': True, 'KeepAlive': False } with open(my_plist_file_path, 'wb') as plist_file: plistlib.dump(data, plist_file) </code></pre> <p>So this little snippet successfully creates a plist file in <code>~/Library/LaunchAgents/</code> where all other user specific plist files are located... BUT:</p> <p>Every time I do so, I get the mac os system notification saying:</p> <pre><code>Background Items Added: ... </code></pre> <p>And my question is simply: why?</p> <p>How can I programmatically create a plist without triggering this notification?</p> <p>Version: Mac OS Ventura 13.5</p> <p>In <code>~/Library/LaunchAgents/</code> I can see many other plist files which have been created by e.g. Google Chrome / Homebrew, but these plist files never triggered the notification described above.</p>
<python><macos><notifications><applescript><plist>
2023-08-22 21:11:13
2
1,526
Creative crypter
76,956,869
1,471,980
Add dataframe rows based on external condition
<p>I have this dataframe:</p> <pre class="lang-none prettyprint-override"><code>Env   location  lob     grid  row  server     model    make  slot
Prod  USA       Market  AB3   bc2  Server123  Hitachi  dcs   1
Prod  USA       Market  AB3   bc2  Server123  Hitachi  dcs   2
Prod  USA       Market  AB3   bc2  Server123  Hitachi  dcs   3
Prod  USA       Market  AB3   bc2  Server123  Hitachi  dcs   4
Dev   EMEA      Ins     AB6   bc4  Serverabc  IBM      abc   3
Dev   EMEA      Ins     AB6   bc4  Serverabc  IBM      abc   3
Dev   EMEA      Ins     AB6   bc4  Serverabc  IBM      abc   3
Dev   EMEA      Ins     AB6   bc4  Serverabc  IBM      abc   4
Dev   EMEA      Ins     AB6   bc4  Serverabc  IBM      abc   5
Dev   EMEA      Ins     AB6   bc4  Serverabc  IBM      abc   5
Dev   EMEA      Ins     AB6   bc4  Serverabc  IBM      abc   6
UAT   PAC       Retail  AB6   bc4  Serverzzz  Cisco    ust   3
UAT   PAC       Retail  BB6   bc4  Serverzzz  Cisco    ust   4
UAT   PAC       Retail  BB6   bc4  Serverzzz  Cisco    ust   5
UAT   PAC       Retail  BB6   bc4  Serverzzz  Cisco    ust   6
</code></pre> <p>In this example:</p> <ul> <li>If model is <strong>IBM</strong>, there must be <strong>8</strong> slots: the slots start at slot=3, so they must run from 3 to 10. In this case, only slots 3 to 6 are present. <ul> <li>Therefore, I need to add 4 more rows (slot 7, 8, 9, 10).</li> </ul> </li> <li>If model is <strong>Cisco</strong>, the row count for Cisco needs to be <strong>6</strong>. Only slots 3 to 6 are present. <ul> <li>Therefore, I need to add 2 more rows.</li> </ul> </li> </ul> <p>New rows:</p> <ul> <li>must repeat the last row for the model, while incrementing the slot number</li> <li>their &quot;grid&quot; cell must indicate &quot;available&quot;.</li> </ul> <p>This needs to be done programmatically: given the model, I need to know the total number of slots, and if the number of slots falls short, I need to create new rows.</p> <p>The final dataframe needs to look like this:</p> <pre class="lang-none prettyprint-override"><code>Env   location  lob     grid       row  server     model    make  slot
Prod  USA       Market  AB3        bc2  Server123  Hitachi  dcs   1
Prod  USA       Market  AB3        bc2  Server123  Hitachi  dcs   2
Prod  USA       Market  AB3        bc2  Server123  Hitachi  dcs   3
Prod  USA       Market  AB3        bc2  Server123  Hitachi  dcs   4
Dev   EMEA      Ins     AB6        bc4  Serverabc  IBM      abc   3
Dev   EMEA      Ins     AB6        bc4  Serverabc  IBM      abc   4
Dev   EMEA      Ins     AB6        bc4  Serverabc  IBM      abc   5
Dev   EMEA      Ins     AB6        bc4  Serverabc  IBM      abc   6
Dev   EMEA      Ins     available  bc4  Serverabc  IBM      abc   7
Dev   EMEA      Ins     available  bc4  Serverabc  IBM      abc   8
Dev   EMEA      Ins     available  bc4  Serverabc  IBM      abc   9
Dev   EMEA      Ins     available  bc4  Serverabc  IBM      abc   10
UAT   PAC       Retail  AB6        bc4  Serverzzz  Cisco    ust   3
UAT   PAC       Retail  BB6        bc4  Serverzzz  Cisco    ust   4
UAT   PAC       Retail  BB6        bc4  Serverzzz  Cisco    ust   5
UAT   PAC       Retail  BB6        bc4  Serverzzz  Cisco    ust   6
UAT   PAC       Retail  available  bc4  Serverzzz  Cisco    ust   7
UAT   PAC       Retail  available  bc4  Serverzzz  Cisco    ust   8
</code></pre> <p>I tried something like this:</p> <pre class="lang-py prettyprint-override"><code>def slots(row):
    if 'IBM' in row['model']:
        number_row = 8
    if 'Cisco' in row['model']:
        number_row = 6
</code></pre> <p>How do I do this?</p>
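<p>A sketch of one way to do this with pandas (the <code>REQUIRED_SLOTS</code> mapping, the helper name <code>pad_slots</code>, and the assumption that the required slots run upward from each model's first slot are illustrative assumptions, not something given in the question):</p>

```python
import pandas as pd

# Assumed mapping: total number of slots each model must occupy.
REQUIRED_SLOTS = {"IBM": 8, "Cisco": 6}

def pad_slots(df):
    """Append rows so each (server, model) group reaches its required
    slot count. New rows copy the group's last row, increment 'slot',
    and mark 'grid' as 'available'."""
    pieces = [df]
    for (_, model), grp in df.groupby(["server", "model"]):
        required = REQUIRED_SLOTS.get(model)
        if required is None:
            continue  # no padding rule for this model
        first = grp["slot"].min()
        last_needed = first + required - 1           # e.g. 3 + 8 - 1 = 10 for IBM
        template = grp.sort_values("slot").iloc[-1]  # last existing row
        new_rows = []
        for slot in range(grp["slot"].max() + 1, last_needed + 1):
            row = template.copy()
            row["slot"] = slot
            row["grid"] = "available"
            new_rows.append(row)
        if new_rows:
            pieces.append(pd.DataFrame(new_rows))
    return pd.concat(pieces, ignore_index=True)
```

<p>Grouping on <code>server</code> as well as <code>model</code> keeps two different servers of the same model from being merged; adjust the group keys if that assumption does not match the real data.</p>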
<python><pandas><dataframe><numpy><group-by>
2023-08-22 21:08:47
6
10,714
user1471980
76,956,789
6,168,639
HTMX / Django List Of Forms - CSRF Token Issue?
<p>I've got an application where I'm listing out a bunch of forms - it loads a csrf_token into each form.</p> <p>Each form is a simple dropdown for choosing a 'color' for each item in the list.</p> <p>I have a <code>ListView</code> that returns a list of placement objects like so:</p> <pre><code>{% for placement in placements %}
  {% include 'placements/snippets/placement_update_form.html' %}
{% endfor %}
</code></pre> <p>The <code>placement_update_form.html</code> looks like this:</p> <pre><code>&lt;form id=&quot;placementUpdateForm&quot;
      action=&quot;{{ placement.get_update_url }}&quot;
      method=&quot;post&quot;
      hx-headers='{&quot;X-CSRFToken&quot;: &quot;{{ csrf_token }}&quot;}'&gt;
  &lt;input type=&quot;hidden&quot; name=&quot;id&quot; value=&quot;{{ placement.id }}&quot;&gt;
  &lt;label for=&quot;color&quot;&gt;Color&lt;/label&gt;
  &lt;select name=&quot;color&quot; id=&quot;color{{ placement.pk }}&quot; class=&quot;form-select&quot;
          hx-post=&quot;{% url 'placements:placement-update' placement.pk %}&quot;
          hx-target=&quot;closest #placementUpdateForm&quot;&gt;
    &lt;option value=&quot;&quot; {% if not placement.color %}selected{% endif %}&gt;Default&lt;/option&gt;
    {% for value, display_name in placement_colors %}
      &lt;option value=&quot;{{ value }}&quot; {% if value == placement.color %}selected{% endif %}&gt;{{ display_name }}&lt;/option&gt;
    {% endfor %}
  &lt;/select&gt;
&lt;/form&gt;
</code></pre> <p>I am using HTMX to <code>POST</code> the data to an <code>UpdateView</code> on my backend, which redirects to a <code>success_url</code> that is a <code>DetailView</code> that simply returns the <code>placement_update_template</code> and replaces that exact item in the DOM.</p> <p>It works - <em>once</em>. The <em>first</em> <code>POST</code> works and saves the color as expected.
The template is replaced in the DOM, but if you choose a different color (triggering a second <code>POST</code>) then it fails.</p> <p>Also, if I try to change the color of any of the <em>other</em> forms - it also fails with the same error.</p> <p>All subsequent <code>POST</code> requests are giving me this error:</p> <p><code>Forbidden (CSRF token from the 'X-Csrftoken' HTTP header incorrect.): /placements/1/update/ </code></p> <p>What is the best way to handle this when submitting these forms via HTMX?</p> <p>Here is a more in-depth breakdown of views/templates/logic: <a href="https://pastebin.com/YttsAYZC" rel="nofollow noreferrer">https://pastebin.com/YttsAYZC</a></p>
<python><django><forms><django-csrf><htmx>
2023-08-22 20:51:11
3
722
Hanny
76,956,776
7,437,221
Forcing a minimum number of unique variables per solution with PuLP
<p>I'm using the PuLP library in Python to build daily fantasy sports lineups for PGA. The inputs and constraints are fairly simple, and the problem is working perfectly. The code is as follows:</p> <pre class="lang-py prettyprint-override"><code># There will be an index for each player and the variable will be binary (0 or 1),
# representing whether the player is included or excluded from the roster.
# player_dict is a dictionary of each player, containing data like Name, Fpts, Salary, etc...
lp_variables = {
    player: plp.LpVariable(player, cat=&quot;Binary&quot;)
    for player, _ in self.player_dict.items()
}

# Set the objective - maximize fpts
self.problem += (
    plp.lpSum(
        self.player_dict[player][&quot;Fpts&quot;] * lp_variables[player]
        for player in self.player_dict
    ),
    &quot;Objective&quot;,
)

# Set the salary constraints
max_salary = 50000 if self.site == &quot;dk&quot; else 60000
min_salary = 0
self.problem += (
    plp.lpSum(
        self.player_dict[player][&quot;Salary&quot;] * lp_variables[player]
        for player in self.player_dict
    )
    &lt;= max_salary
)
self.problem += (
    plp.lpSum(
        self.player_dict[player][&quot;Salary&quot;] * lp_variables[player]
        for player in self.player_dict
    )
    &gt;= self.min_salary
)

# Need 6 golfers regardless of site. Pretty easy.
self.problem += (
    plp.lpSum(lp_variables[player] for player in self.player_dict) &gt;= 6
)
self.problem += (
    plp.lpSum(lp_variables[player] for player in self.player_dict) &lt;= 6
)

# Crunch!
for i in range(self.num_lineups):
    try:
        self.problem.solve(plp.PULP_CBC_CMD(msg=0))
    except plp.PulpSolverError:
        self.simDoc.update({'jobProgressLog': ArrayUnion([
            'There was an error with the solver - infeasibility reached. '
            'Only generated {} lineups out of {}. Continuing with export.'
            .format(len(self.num_lineups), self.num_lineups)])})

    score = str(self.problem.objective)
    for v in self.problem.variables():
        score = score.replace(v.name, str(v.varValue))

    player_names = [
        v.name.replace(&quot;_&quot;, &quot; &quot;)
        for v in self.problem.variables()
        if v.varValue != 0
    ]
    fpts = eval(score)
    self.lineups[fpts] = player_names

    # Force a new optimal solution on the next iteration
    self.problem += plp.lpSum(
        self.player_dict[player][&quot;Fpts&quot;] * lp_variables[player]
        for player in self.player_dict
    ) &lt;= (fpts - 0.001)
</code></pre> <p>However, I want to add an additional optional constraint that forces each solution to have a minimum number of unique players from one another.</p> <p>For example, if I were to force a minimum of 3 unique players per lineup, the following lineups would be invalid, since they differ by only 2 players:</p> <pre class="lang-none prettyprint-override"><code>[Player1, Player4, Player29, Player6, Player10]
[Player1, Player4, Player29, Player88, Player45]
</code></pre> <p>How can I go about implementing this rule using PuLP to enforce that each subsequent lineup produced has at least 3 unique players from the lineups before it?</p>
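<p>For context (this is a sketch of the standard ILP trick, not code from the question): each time a lineup is produced, add one constraint per produced lineup saying the next solution may reuse at most <code>lineup_size - num_uniques</code> of that lineup's players. The toy numbers below (6 players, lineups of 3, at least 2 unique per lineup) are made up to keep the example small:</p>

```python
import pulp as plp

# Toy data: made-up players and fantasy points.
fpts = {"P1": 9.0, "P2": 8.0, "P3": 7.0, "P4": 6.0, "P5": 5.0, "P6": 4.0}
lineup_size = 3   # 6 in the real problem
num_uniques = 2   # 3 in the real problem

prob = plp.LpProblem("lineups", plp.LpMaximize)
x = {p: plp.LpVariable(p, cat="Binary") for p in fpts}
prob += plp.lpSum(fpts[p] * x[p] for p in fpts)       # objective: max points
prob += plp.lpSum(x[p] for p in fpts) == lineup_size  # roster size

lineups = []
for _ in range(3):
    prob.solve(plp.PULP_CBC_CMD(msg=0))
    chosen = [p for p in fpts if x[p].varValue > 0.5]
    lineups.append(chosen)
    # Overlap constraint: any later solution may share at most
    # lineup_size - num_uniques players with this lineup.
    prob += plp.lpSum(x[p] for p in chosen) <= lineup_size - num_uniques

print(lineups)
```

<p>In the question's loop this would be one extra <code>self.problem +=</code> line right after the solved lineup's variables are collected (use the LP variables, not the display names), alongside or instead of the objective-reduction cut already there.</p>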
<python><mathematical-optimization><linear-programming><pulp>
2023-08-22 20:48:32
0
353
Sean Sailer