Dataset schema (column: type, observed min – max):
QuestionId: int64, 74.8M – 79.8M
UserId: int64, 56 – 29.4M
QuestionTitle: string, 15 – 150 chars
QuestionBody: string, 40 – 40.3k chars
Tags: string, 8 – 101 chars
CreationDate: date, 2022-12-10 09:42:47 – 2025-11-01 19:08:18
AnswerCount: int64, 0 – 44
UserExpertiseLevel: int64, 301 – 888k
UserDisplayName: string, 3 – 30 chars
78,638,644
442,945
How to declare a method on a Python union type alias?
<p>Suppose I have two classes, <code>A</code> and <code>B</code>:</p> <pre><code>class A: def f(self) -&gt; bool: ... class B: def f(self) -&gt; bool: ... </code></pre> <p>And later I type alias these:</p> <pre><code>from typing import TypeAlias, Union FClass: TypeAlias = Union[A, B] </code></pre> <p>So that in another place I can have a function:</p> <pre><code>def g() -&gt; FClass: ... # Returns an object that is either A or B. n = g() n.f() # This complains! </code></pre> <p>The call to <code>f()</code> here looks bad to checkers because <code>f()</code> is not declared on <code>FClass</code> even though it's declared on all possible types. Is there a way to declare that the method will always exist on classes with that alias (something akin to a Java interface)? Or is there a better way to go about this? The specific complaint:</p> <pre><code>'A | B' has no attribute 'f' </code></pre> <p>Note that <code>A</code> and <code>B</code> are out of my control, as are the internals of <code>g()</code>. The practical use case is a multi-track library (i.e. 'alpha', 'beta', 'general audience') whose tracks have different generated internal objects but share an API.</p>
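One widely used answer to this shape of problem is structural typing. A minimal sketch with typing.Protocol (the protocol name HasF and the method bodies below are illustrative, not from the question):

```python
# Sketch: declare the shared method once on a Protocol. Any class with a
# matching f() conforms structurally, so checkers accept n.f() without A or B
# having to opt in (they are out of the asker's control, as in the question).
from typing import Protocol


class HasF(Protocol):
    def f(self) -> bool: ...


class A:
    def f(self) -> bool:
        return True


class B:
    def f(self) -> bool:
        return False


def g() -> HasF:
    # Stand-in for the real g(): returns an object that is either A or B.
    return A()


n = g()
print(n.f())
```

Because protocols match structurally, A and B need no changes, which fits the constraint that both classes and g() are outside the asker's control.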
<python><python-typing>
2024-06-18 16:33:18
1
21,319
Nathaniel Ford
78,638,603
958,004
Python processes being leaked when using Reticulate and joblib
<p>I'm having an issue when using <code>joblib.Parallel</code> through <code>reticulate</code>. Basically, if I have an R script call a Python function that uses <code>joblib.Parallel</code> to parallelize code, and then run that R script, the Python executables are left running after the script has completed, and then at some point in the future, they get killed with the following warning being raised:</p> <pre><code>resource_tracker.py:314: UserWarning: resource_tracker: There appear to be 8 leaked semlock objects to clean up at shutdown resource_tracker.py:314: UserWarning: resource_tracker: There appear to be 1 leaked folder objects to clean up at shutdown </code></pre> <p>To reproduce:</p> <p><strong>Prerequisites</strong></p> <p>This needs the following packages, either globally or in your virtual environment:</p> <p><code>pip install joblib</code></p> <p><code>install.packages('reticulate')</code> (in an R interpreter)</p> <p><strong>foo.R</strong></p> <pre><code>foo &lt;- reticulate::import(&quot;foo&quot;) foo$run() </code></pre> <p>And in the same directory:</p> <p><strong>foo.py</strong></p> <pre class="lang-py prettyprint-override"><code>from joblib import Parallel, delayed def calc(i): return i**2 def run(): res = Parallel(n_jobs=2)(delayed(calc)(i) for i in range(10)) print(res) </code></pre> <p>Finally, invoke the R script using Rscript:</p> <pre class="lang-bash prettyprint-override"><code>$ Rscript foo.R </code></pre> <p>If you run the Python script directly (calling <code>run</code>) then you don't get the error.</p> <p>Because joblib doesn't kill the processes immediately, you get the warning some time after running, but you can tell immediately that something has gone wrong by using <code>ps -a | grep &quot;python&quot; -c</code> to count how many Python interpreters are left. 
Alternatively, you can force joblib to shut down the worker processes by adding this after the <code>Parallel</code> call:</p> <pre class="lang-py prettyprint-override"><code>from joblib.externals.loky import get_reusable_executor get_reusable_executor().shutdown(wait=True) </code></pre> <p>Also, very oddly, if I use the play button in VS Code, I don't see any lingering Python processes, and with the above change, no errors. But I can't work out what VS Code is doing differently compared to starting R and then calling <code>source('foo.R')</code>.</p>
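The explicit-shutdown idea above generalizes. A stdlib-only analogue with concurrent.futures (not joblib/loky; just to illustrate why a pool that is never shut down leaves worker processes behind, and how scoping the pool's lifetime avoids it):

```python
# Sketch: a process pool whose lifetime is bounded by a context manager.
# On exiting the `with` block, shutdown(wait=True) runs, so no worker
# process outlives the call -- the behavior the question wants from loky.
from concurrent.futures import ProcessPoolExecutor


def calc(i):
    return i ** 2


def run():
    # The context manager tears the pool down deterministically, instead of
    # relying on interpreter shutdown (which an embedded interpreter, as under
    # reticulate, may never perform cleanly).
    with ProcessPoolExecutor(max_workers=2) as pool:
        return list(pool.map(calc, range(10)))


if __name__ == "__main__":
    print(run())
```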
<python><r><joblib><reticulate>
2024-06-18 16:23:31
0
2,802
T. Kiley
78,638,576
3,159,428
RuntimeError: CUDA error: operation not supported when trying to move something onto CUDA
<p>Here is my code:</p> <pre><code>from transformers import AutoModelForCausalLM, AutoTokenizer, QuantoConfig import torch device = &quot;cuda:0&quot; model_id = &quot;bigscience/bloom-560m&quot; quantization_config = QuantoConfig(weights=&quot;int8&quot;) model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float32, device_map=device) tokenizer = AutoTokenizer.from_pretrained(model_id) text = &quot;Hello my name is&quot; inputs = tokenizer(text, return_tensors=&quot;pt&quot;).to(device) outputs = model.generate(**inputs, max_new_tokens=20) print(tokenizer.decode(outputs[0], skip_special_tokens=True)) </code></pre> <p>When I run, I obtain the next error:</p> <blockquote> <p>RuntimeError: CUDA error: operation not supported CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect. For debugging consider passing CUDA_LAUNCH_BLOCKING=1. Compile with TORCH_USE_CUDA_DSA to enable device-side assertions.</p> </blockquote> <p>However, when I check if CUDA is available I obtain:</p> <pre><code>print('-------------------------------') print(torch.cuda.is_available()) print(torch.cuda.device_count()) print(torch.cuda.current_device()) print(torch.cuda.device(0)) print(torch.cuda.get_device_name(0)) print('Memory Usage:') print('Allocated:', round(torch.cuda.memory_allocated(0)/1024**3,1), 'GB') print('Cached: ', round(torch.cuda.memory_reserved(0)/1024**3,1), 'GB') </code></pre> <blockquote> <p>True 1 0 &lt;torch.cuda.device object at 0x7f8bf6d4a9b0&gt; GRID T4-16Q Memory Usage: Allocated: 0.0 GB Cached: 0.0 GB</p> </blockquote> <p>I run this code on Colab, and I do not have any issues. 
I also run the code on another machine with another GPU, and it runs as expected.</p> <p>Here is the configuration of the machine where it fails:</p> <p><a href="https://i.sstatic.net/59iMq2HO.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/59iMq2HO.png" alt="enter image description here" /></a></p> <p>And the libraries:</p> <pre><code>accelerate 0.31.0 aiohttp 3.9.5 aiosignal 1.3.1 async-timeout 4.0.3 attrs 23.2.0 certifi 2024.6.2 charset-normalizer 3.3.2 datasets 2.20.0 dill 0.3.8 filelock 3.15.1 frozenlist 1.4.1 fsspec 2024.5.0 huggingface-hub 0.23.4 idna 3.7 Jinja2 3.1.4 MarkupSafe 2.1.5 mpmath 1.3.0 multidict 6.0.5 multiprocess 0.70.16 networkx 3.3 ninja 1.11.1.1 numpy 2.0.0 nvidia-cublas-cu12 12.1.3.1 nvidia-cuda-cupti-cu12 12.1.105 nvidia-cuda-nvrtc-cu12 12.1.105 nvidia-cuda-runtime-cu12 12.1.105 nvidia-cudnn-cu12 8.9.2.26 nvidia-cufft-cu12 11.0.2.54 nvidia-curand-cu12 10.3.2.106 nvidia-cusolver-cu12 11.4.5.107 nvidia-cusparse-cu12 12.1.0.106 nvidia-nccl-cu12 2.20.5 nvidia-nvjitlink-cu12 12.5.40 nvidia-nvtx-cu12 12.1.105 packaging 24.1 pandas 2.2.2 pip 24.0 psutil 5.9.8 pyarrow 16.1.0 pyarrow-hotfix 0.6 python-dateutil 2.9.0.post0 pytz 2024.1 PyYAML 6.0.1 quanto 0.2.0 regex 2024.5.15 requests 2.32.3 safetensors 0.4.3 setuptools 65.5.0 six 1.16.0 sympy 1.12.1 tokenizers 0.19.1 torch 2.3.1 tqdm 4.66.4 transformers 4.42.0.dev0 triton 2.3.1 typing_extensions 4.12.2 tzdata 2024.1 urllib3 2.2.2 xxhash 3.4.1 yarl 1.9.4 </code></pre> <p>I do not know if this is relevant, but the machine is a virtual machine (VMware) with a vGPU. 
Also, I tried to run a simple nn, just to check whether the problem was with the transformers library, but I obtained the same error when I tried to move data onto the GPU.</p> <pre><code> import torch import torch.nn as nn dev = torch.device(&quot;cuda&quot;) if torch.cuda.is_available() else torch.device(&quot;cpu&quot;) t1 = torch.randn(1,2) t2 = torch.randn(1,2).to(dev) print(t1) # tensor([[-0.2678, 1.9252]]) print(t2) # tensor([[ 0.5117, -3.6247]], device='cuda:0') t1.to(dev) print(t1) # tensor([[-0.2678, 1.9252]]) print(t1.is_cuda) # False t1 = t1.to(dev) print(t1) # tensor([[-0.2678, 1.9252]], device='cuda:0') print(t1.is_cuda) # True class M(nn.Module): def __init__(self): super().__init__() self.l1 = nn.Linear(1,2) def forward(self, x): x = self.l1(x) return x model = M() # not on cuda model.to(dev) # is on cuda (all parameters) print(next(model.parameters()).is_cuda) # True </code></pre> <blockquote> <p>Traceback (most recent call last): File “/home/admin/llm/ModelsService/test.py”, line 14, in t2 = torch.randn(1,2).to(dev) RuntimeError: CUDA error: operation not supported CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect. For debugging consider passing CUDA_LAUNCH_BLOCKING=1. Compile with TORCH_USE_CUDA_DSA to enable device-side assertions.</p> </blockquote> <p>By the way, here is some info about my CUDA toolkit:</p> <blockquote> <p>(test310) admin@appdev-llm-lnx1:~/llm/ModelsService$ nvcc --version nvcc: NVIDIA (R) Cuda compiler driver Copyright (c) 2005-2019 NVIDIA Corporation Built on Sun_Jul_28_19:07:16_PDT_2019 Cuda compilation tools, release 10.1, V10.1.243</p> </blockquote>
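A hedged workaround sketch (it does not fix the vGPU itself): torch.cuda.is_available() only reports that a device is visible, while on a virtualized GPU an actual allocation can still be refused. Probing a real allocation distinguishes the two cases and lets code fall back to CPU. The torch module is passed in as a parameter purely to keep this helper an illustrative, import-agnostic sketch:

```python
# Sketch: probe an actual CUDA allocation before committing to the device.
# torch.cuda.is_available() passes on the machine in the question, yet the
# copy to "cuda" raises RuntimeError; this helper catches exactly that case.
def pick_device(torch):
    if not torch.cuda.is_available():
        return torch.device("cpu")
    try:
        # A real allocation + copy; this is the operation that fails above.
        torch.zeros(1).to("cuda")
        return torch.device("cuda")
    except RuntimeError:
        return torch.device("cpu")
```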
<python><pytorch><huggingface-transformers>
2024-06-18 16:15:28
2
486
Fermin Pitol
78,638,542
6,867,048
Flatmap for deeply nested lists
<p>I have the following list in Python:</p> <pre><code>N = [1, [2, 2], [[3, 3], 3], [[[4, 4], 4], 4], [[[[5, 5], 5], 5], 5]] </code></pre> <p>and I want to flatten it out and get a list of lists like this</p> <pre><code>[[1],[2,2],[3,3,3],[4,4,4,4],[5,5,5,5,5]] </code></pre> <p>I tried:</p> <pre><code>sum(map(lambda x: [x, x, x], [x]), N) </code></pre> <p>but it doesn't work.</p>
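A recursive generator handles arbitrary nesting depth. A sketch that reproduces the expected output by fully flattening each top-level element:

```python
# Sketch: deep_flatten yields every scalar inside an arbitrarily nested list;
# applying it per top-level element gives one flat list per element.
def deep_flatten(x):
    if isinstance(x, list):
        for item in x:
            yield from deep_flatten(item)
    else:
        yield x


N = [1, [2, 2], [[3, 3], 3], [[[4, 4], 4], 4], [[[[5, 5], 5], 5], 5]]
result = [list(deep_flatten(elem)) for elem in N]
print(result)  # → [[1], [2, 2], [3, 3, 3], [4, 4, 4, 4], [5, 5, 5, 5, 5]]
```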
<python><list>
2024-06-18 16:06:49
5
8,893
stack0114106
78,638,521
19,363,912
Apply a consistent sort order of elements across multiple lists (clustering)
<p>I have</p> <ul> <li>dictionary <strong>spots</strong> that contains number of spots per position per day (e.g. on day 0, position 'a' has 3 spots), and</li> <li>dictionary <strong>names</strong> that contains available names per position per day (e.g. on day 0, position 'a' is occupied by John, Claire and Billy)</li> </ul> <p>Sample</p> <pre><code>import pandas as pd spots = { 0: {'a': 3, 'b': 3}, 1: {'a': 3, 'b': 3}, 2: {'a': 1}, 3: {'a': 3, 'b': 3}, 4: {'a': 4, 'b': 3}, } names = { 0: {'a': ['John', 'Claire', 'Billy'], 'b': ['Paul']}, 1: {'a': ['John', 'Billy', 'Claire']}, 2: {'a': ['Billy']}, 3: {'a': ['Claire', 'Billy'], 'b': ['Paul', 'Peter']}, 4: {'a': ['Anna', 'Claire', 'Billy'], 'b': ['Peter']}, } </code></pre> <p>I would like to sort names in the lists so that the position is the same whenever possible (e.g. Billy is available on all days and must therefore be put in the first position, unlike Anna who is available on the least amount of days and must be put at the end of names)</p> <p><strong>Expected outcome</strong></p> <pre><code>output = { 0: {'a': ['Billy', 'Claire', 'John'], 'b': ['Paul', '', '']}, 1: {'a': ['Billy', 'Claire', 'John'], 'b': ['', '', '']}, 2: {'a': ['Billy']}, 3: {'a': ['Billy', 'Claire'], 'b': ['Paul', 'Peter', '']}, 4: {'a': ['Billy', 'Claire', 'Anna', ''], 'b': ['Peter', '', '']}, } </code></pre> <p><strong>Own solution</strong></p> <pre><code>def name_per_category(names): unique_names = {'a': set(), 'b': set()} for key in names: for category in names[key]: unique_names[category].update(names[key][category]) return {category: sorted(unique_names[category]) for category in unique_names} def sort_names(spots, names): output = {} sorted_names = name_per_category(names) for key in spots: output[key] = {} for category in spots[key]: sorted_list = [''] * spots[key][category] if key in names and category in names[key]: for name in names[key][category]: # Find the index for each name index = sorted_names[category].index(name) 
print(index, name) sorted_list[index-1] = name output[key][category] = sorted_list return output output = sort_names(spots, names) </code></pre> <p><strong>Wrong output</strong></p> <pre><code>{0: {'a': ['Billy', 'Claire', 'John'], 'b': ['', '', 'Paul']}, 1: {'a': ['Billy', 'Claire', 'John'], 'b': ['', '', '']}, 2: {'a': ['Billy']}, 3: {'a': ['Billy', 'Claire', ''], 'b': ['Peter', '', 'Paul']}, 4: {'a': ['Billy', 'Claire', '', 'Anna'], 'b': ['Peter', '', '']}} </code></pre> <p>Besides producing the wrong outcome, my logic seems quite heavy. Is there a better way to reason about this type of problem?</p>
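One hedged sketch of an alternative: rank each name by the number of days it appears (ties broken alphabetically) and index each day's row by that rank. Note that its day-3/day-4 rows pad trailing slots slightly differently from the expected output above, which compacts them; the core "Billy first, Anna last" ordering is the same:

```python
# Sketch: frequency-based ranking replaces the alphabetical index() lookup
# (the source of the off-by-one and misordering in the attempt above).
from collections import Counter, defaultdict

spots = {
    0: {'a': 3, 'b': 3},
    1: {'a': 3, 'b': 3},
    2: {'a': 1},
    3: {'a': 3, 'b': 3},
    4: {'a': 4, 'b': 3},
}
names = {
    0: {'a': ['John', 'Claire', 'Billy'], 'b': ['Paul']},
    1: {'a': ['John', 'Billy', 'Claire']},
    2: {'a': ['Billy']},
    3: {'a': ['Claire', 'Billy'], 'b': ['Paul', 'Peter']},
    4: {'a': ['Anna', 'Claire', 'Billy'], 'b': ['Peter']},
}


def rank_names(names):
    # Count appearances per category, then rank: most days first, ties A-Z.
    counts = defaultdict(Counter)
    for day in names.values():
        for cat, people in day.items():
            counts[cat].update(people)
    return {
        cat: {name: rank for rank, (name, _) in enumerate(
            sorted(c.items(), key=lambda kv: (-kv[1], kv[0])))}
        for cat, c in counts.items()
    }


def sort_names(spots, names):
    ranks = rank_names(names)
    output = {}
    for day, cats in spots.items():
        output[day] = {}
        for cat, n_spots in cats.items():
            row = [''] * n_spots
            for name in names.get(day, {}).get(cat, []):
                row[ranks[cat][name]] = name  # place at the global rank
            output[day][cat] = row
    return output
```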
<python><sorting><clusterize>
2024-06-18 16:02:01
1
447
aeiou
78,638,290
9,779,999
Client error '404 Not Found' for url 'http://localhost:11434/api/chat' while using ReActAgent of llama_index.core.agent
<p>I am following this tutorial, <a href="https://youtu.be/JLmI0GJuGlY?si=eeffNvHjaRHVV6r7&amp;t=1915" rel="noreferrer">https://youtu.be/JLmI0GJuGlY?si=eeffNvHjaRHVV6r7&amp;t=1915</a>, and trying to build a simple LLM agent.</p> <p>I am on WSL2, Windows 11, and I am coding in VSC. I use Ollama to download and store my LLMs. My python is 3.9.</p> <p>My script my_main3.py is very simple:</p> <pre><code>from llama_index.llms.ollama import Ollama from llama_parse import LlamaParse from llama_index.core import VectorStoreIndex, SimpleDirectoryReader, PromptTemplate from llama_index.core.embeddings import resolve_embed_model from llama_index.core.tools import QueryEngineTool, ToolMetadata from llama_index.core.agent import ReActAgent from prompts import context from dotenv import load_dotenv load_dotenv() llm = Ollama(model=&quot;mistral&quot;, request_timeout=30.0) parser = LlamaParse(result_type=&quot;markdown&quot;) file_extractor = {&quot;.pdf&quot;: parser} documents = SimpleDirectoryReader(&quot;./data&quot;, file_extractor=file_extractor).load_data() embed_model = resolve_embed_model(&quot;local:BAAI/bge-m3&quot;) vector_index = VectorStoreIndex.from_documents(documents, embed_model=embed_model) query_engine = vector_index.as_query_engine(llm=llm) tools = [ QueryEngineTool( query_engine=query_engine, metadata=ToolMetadata( name=&quot;api_documentation&quot;, description=&quot;this gives documentation about code for an API. Use this for reading docs for the API&quot;, ), ) ] code_llm = Ollama(model=&quot;codellama&quot;) agent = ReActAgent.from_tools(tools, llm=code_llm, verbose=True, context=context) # context is from prompts.py while (prompt := input(&quot;Enter a prompt (q to quit): &quot;)) != &quot;q&quot;: result = agent.query(prompt) print(result) </code></pre> <p>Then I run Python main.py in my Terminal. 
The script runs well until the while loop.</p> <p>It prompts me to input, then in input:</p> <pre><code>Enter a prompt (q to quit): send a post request to make a new item using the api in Python. </code></pre> <p>It then throws me this error.</p> <pre><code>Traceback (most recent call last): File &quot;/home/ubuntu2022/MyUbunDev/210_AI_agent_basic/my_main3.py&quot;, line 38, in &lt;module&gt; result = agent.query(prompt) File &quot;/home/ubuntu2022/miniconda/envs/llm/lib/python3.9/site-packages/llama_index/core/instrumentation/dispatcher.py&quot;, line 102, in wrapper self.span_drop(*args, id=id, err=e, **kwargs) File &quot;/home/ubuntu2022/miniconda/envs/llm/lib/python3.9/site-packages/llama_index/core/instrumentation/dispatcher.py&quot;, line 77, in span_drop h.span_drop(*args, id=id, err=err, **kwargs) File &quot;/home/ubuntu2022/miniconda/envs/llm/lib/python3.9/site-packages/llama_index/core/instrumentation/span_handlers/base.py&quot;, line 48, in span_drop span = self.prepare_to_drop_span(*args, id=id, err=err, **kwargs) File &quot;/home/ubuntu2022/miniconda/envs/llm/lib/python3.9/site-packages/llama_index/core/instrumentation/span_handlers/null.py&quot;, line 35, in prepare_to_drop_span raise err File &quot;/home/ubuntu2022/miniconda/envs/llm/lib/python3.9/site-packages/llama_index/core/instrumentation/dispatcher.py&quot;, line 100, in wrapper result = func(*args, **kwargs) File &quot;/home/ubuntu2022/miniconda/envs/llm/lib/python3.9/site-packages/llama_index/core/base/base_query_engine.py&quot;, line 51, in query query_result = self._query(str_or_query_bundle) File &quot;/home/ubuntu2022/miniconda/envs/llm/lib/python3.9/site-packages/llama_index/core/callbacks/utils.py&quot;, line 41, in wrapper return func(self, *args, **kwargs) File &quot;/home/ubuntu2022/miniconda/envs/llm/lib/python3.9/site-packages/llama_index/core/agent/types.py&quot;, line 40, in _query agent_response = self.chat( File 
&quot;/home/ubuntu2022/miniconda/envs/llm/lib/python3.9/site-packages/llama_index/core/instrumentation/dispatcher.py&quot;, line 102, in wrapper self.span_drop(*args, id=id, err=e, **kwargs) File &quot;/home/ubuntu2022/miniconda/envs/llm/lib/python3.9/site-packages/llama_index/core/instrumentation/dispatcher.py&quot;, line 77, in span_drop h.span_drop(*args, id=id, err=err, **kwargs) File &quot;/home/ubuntu2022/miniconda/envs/llm/lib/python3.9/site-packages/llama_index/core/instrumentation/span_handlers/base.py&quot;, line 48, in span_drop span = self.prepare_to_drop_span(*args, id=id, err=err, **kwargs) File &quot;/home/ubuntu2022/miniconda/envs/llm/lib/python3.9/site-packages/llama_index/core/instrumentation/span_handlers/null.py&quot;, line 35, in prepare_to_drop_span raise err File &quot;/home/ubuntu2022/miniconda/envs/llm/lib/python3.9/site-packages/llama_index/core/instrumentation/dispatcher.py&quot;, line 100, in wrapper result = func(*args, **kwargs) File &quot;/home/ubuntu2022/miniconda/envs/llm/lib/python3.9/site-packages/llama_index/core/callbacks/utils.py&quot;, line 41, in wrapper return func(self, *args, **kwargs) File &quot;/home/ubuntu2022/miniconda/envs/llm/lib/python3.9/site-packages/llama_index/core/agent/runner/base.py&quot;, line 604, in chat chat_response = self._chat( File &quot;/home/ubuntu2022/miniconda/envs/llm/lib/python3.9/site-packages/llama_index/core/instrumentation/dispatcher.py&quot;, line 102, in wrapper self.span_drop(*args, id=id, err=e, **kwargs) File &quot;/home/ubuntu2022/miniconda/envs/llm/lib/python3.9/site-packages/llama_index/core/instrumentation/dispatcher.py&quot;, line 77, in span_drop h.span_drop(*args, id=id, err=err, **kwargs) File &quot;/home/ubuntu2022/miniconda/envs/llm/lib/python3.9/site-packages/llama_index/core/instrumentation/span_handlers/base.py&quot;, line 48, in span_drop span = self.prepare_to_drop_span(*args, id=id, err=err, **kwargs) File 
&quot;/home/ubuntu2022/miniconda/envs/llm/lib/python3.9/site-packages/llama_index/core/instrumentation/span_handlers/null.py&quot;, line 35, in prepare_to_drop_span raise err File &quot;/home/ubuntu2022/miniconda/envs/llm/lib/python3.9/site-packages/llama_index/core/instrumentation/dispatcher.py&quot;, line 100, in wrapper result = func(*args, **kwargs) File &quot;/home/ubuntu2022/miniconda/envs/llm/lib/python3.9/site-packages/llama_index/core/agent/runner/base.py&quot;, line 539, in _chat cur_step_output = self._run_step( File &quot;/home/ubuntu2022/miniconda/envs/llm/lib/python3.9/site-packages/llama_index/core/instrumentation/dispatcher.py&quot;, line 102, in wrapper self.span_drop(*args, id=id, err=e, **kwargs) File &quot;/home/ubuntu2022/miniconda/envs/llm/lib/python3.9/site-packages/llama_index/core/instrumentation/dispatcher.py&quot;, line 77, in span_drop h.span_drop(*args, id=id, err=err, **kwargs) File &quot;/home/ubuntu2022/miniconda/envs/llm/lib/python3.9/site-packages/llama_index/core/instrumentation/span_handlers/base.py&quot;, line 48, in span_drop span = self.prepare_to_drop_span(*args, id=id, err=err, **kwargs) File &quot;/home/ubuntu2022/miniconda/envs/llm/lib/python3.9/site-packages/llama_index/core/instrumentation/span_handlers/null.py&quot;, line 35, in prepare_to_drop_span raise err File &quot;/home/ubuntu2022/miniconda/envs/llm/lib/python3.9/site-packages/llama_index/core/instrumentation/dispatcher.py&quot;, line 100, in wrapper result = func(*args, **kwargs) File &quot;/home/ubuntu2022/miniconda/envs/llm/lib/python3.9/site-packages/llama_index/core/agent/runner/base.py&quot;, line 382, in _run_step cur_step_output = self.agent_worker.run_step(step, task, **kwargs) File &quot;/home/ubuntu2022/miniconda/envs/llm/lib/python3.9/site-packages/llama_index/core/callbacks/utils.py&quot;, line 41, in wrapper return func(self, *args, **kwargs) File 
&quot;/home/ubuntu2022/miniconda/envs/llm/lib/python3.9/site-packages/llama_index/core/agent/react/step.py&quot;, line 653, in run_step return self._run_step(step, task) File &quot;/home/ubuntu2022/miniconda/envs/llm/lib/python3.9/site-packages/llama_index/core/agent/react/step.py&quot;, line 463, in _run_step chat_response = self._llm.chat(input_chat) File &quot;/home/ubuntu2022/miniconda/envs/llm/lib/python3.9/site-packages/llama_index/core/llms/callbacks.py&quot;, line 130, in wrapped_llm_chat f_return_val = f(_self, messages, **kwargs) File &quot;/home/ubuntu2022/miniconda/envs/llm/lib/python3.9/site-packages/llama_index/llms/ollama/base.py&quot;, line 105, in chat response.raise_for_status() File &quot;/home/ubuntu2022/miniconda/envs/llm/lib/python3.9/site-packages/httpx/_models.py&quot;, line 761, in raise_for_status raise HTTPStatusError(message, request=request, response=self) httpx.HTTPStatusError: Client error '404 Not Found' for url 'http://localhost:11434/api/chat' For more information check: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/404 </code></pre> <p>I checked in my Edge browser that http://localhost:11434/ is running Ollama. Is this causing the clash? Also, note that I have never set up the http://localhost:11434/api/chat endpoint in my script.</p>
<python><windows-subsystem-for-linux><large-language-model><llama-index><ollama>
2024-06-18 15:14:30
2
1,669
yts61
78,638,246
3,405,291
PyTorch problem with a specific version of CUDA
<h1>Background</h1> <p>I need to test this AI model on the following CUDA server:</p> <p><a href="https://github.com/sicxu/Deep3DFaceRecon_pytorch" rel="nofollow noreferrer">https://github.com/sicxu/Deep3DFaceRecon_pytorch</a></p> <pre class="lang-bash prettyprint-override"><code>$ nvidia-smi Tue Jun 18 18:28:37 2024 +---------------------------------------------------------------------------------------+ | NVIDIA-SMI 545.23.08 Driver Version: 545.23.08 CUDA Version: 12.3 | |-----------------------------------------+----------------------+----------------------+ | GPU Name Persistence-M | Bus-Id Disp.A | Volatile Uncorr. ECC | | Fan Temp Perf Pwr:Usage/Cap | Memory-Usage | GPU-Util Compute M. | | | | MIG M. | |=========================================+======================+======================| | 0 NVIDIA GeForce RTX 3060 On | 00000000:41:00.0 Off | N/A | | 0% 40C P8 13W / 170W | 39MiB / 12288MiB | 0% Default | | | | N/A | +-----------------------------------------+----------------------+----------------------+ +---------------------------------------------------------------------------------------+ | Processes: | | GPU GI CI PID Type Process name GPU Memory | | ID ID Usage | |=======================================================================================| | 0 N/A N/A 1078 G /usr/lib/xorg/Xorg 16MiB | | 0 N/A N/A 1407 G /usr/bin/gnome-shell 3MiB | +---------------------------------------------------------------------------------------+ </code></pre> <p>But I'm receiving this warning while testing:</p> <blockquote> <p>/home/arisa/.conda/envs/deep3d_pytorch/lib/python3.6/site-packages/torch/cuda/init.py:125: UserWarning: NVIDIA GeForce RTX 3060 with CUDA capability sm_86 is not compatible with the current PyTorch installation. The current PyTorch install supports CUDA capabilities sm_37 sm_50 sm_60 sm_61 sm_70 sm_75 compute_37. 
If you want to use the NVIDIA GeForce RTX 3060 GPU with PyTorch, please check the instructions at <a href="https://pytorch.org/get-started/locally/" rel="nofollow noreferrer">https://pytorch.org/get-started/locally/</a></p> </blockquote> <p>I receive this error after the above warning:</p> <blockquote> <p>RuntimeError: CUDA error: no kernel image is available for execution on the device</p> </blockquote> <h1>Note: CUDA <code>12.2</code> vs <code>12.3</code></h1> <p>I was able to test the same AI model on Google Colab with CUDA <code>12.2</code> without any problem. I'm not sure why the server with CUDA <code>12.3</code> is a troublemaker.</p> <p><a href="https://i.sstatic.net/1967iip3.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/1967iip3.png" alt="Google Colab screenshot of CUDA version" /></a></p> <h1>Why?</h1> <p>Why does CUDA <code>12.2</code> work fine while <code>12.3</code> throws warnings and errors?</p> <h1>Building from source</h1> <p>So I thought I would build PyTorch <code>1.6.0</code> - required by the AI model - with CUDA <code>12.3</code> myself. Before attempting that, I want to know if it's <strong>possible</strong> to build PyTorch <code>1.6.0</code> against CUDA <code>12.3</code> without patching the source code:</p> <p><a href="https://github.com/pytorch/pytorch/releases/tag/v1.6.0" rel="nofollow noreferrer">https://github.com/pytorch/pytorch/releases/tag/v1.6.0</a></p>
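A hedged probe sketch: the warning quoted above is about compute capability (sm_86 on an RTX 3060) versus the capabilities baked into the installed PyTorch wheel, not about the driver's 12.2-vs-12.3 CUDA version. The helper below reports both sides of that mismatch; note that torch.cuda.get_arch_list() exists in recent PyTorch versions but may be absent in 1.6, and torch is passed in as a parameter only to keep the sketch import-agnostic:

```python
# Sketch: compare the GPU's compute capability against the architectures the
# installed PyTorch wheel was compiled for. A device capability missing from
# the compiled list produces exactly the sm_86 warning in the question.
def capability_report(torch):
    if not torch.cuda.is_available():
        return None
    major, minor = torch.cuda.get_device_capability(0)
    return {
        "device_capability": f"sm_{major}{minor}",      # e.g. 'sm_86'
        "compiled_for": torch.cuda.get_arch_list(),     # e.g. ['sm_37', ..., 'sm_75']
    }
```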
<python><pytorch><build><cuda>
2024-06-18 15:03:12
1
8,185
Megidd
78,638,237
10,868,076
How to Model Multivariate Time Series Using Nixtla's NeuralForecast in Python Without Including the Target Variable as an Input
<p>I want to model a multivariate time series using Nixtla's neuralforecast in Python. However, I do not want the target variable to be part of the model's input. Let's make an example:</p> <pre><code>feature1 = np.sin(np.linspace(0, 2*np.pi, 100)) feature2 = np.cos(np.linspace(0, 2*np.pi, 100)) target = feature1 + feature2 </code></pre> <p>I want to model the '<code>target</code>' using '<code>feature1</code>' and '<code>feature2</code>', without the <code>target</code> being part of the model input. The model shall learn that '<code>target = feature1 + feature2</code>' by looking at the past values of '<code>feature1</code>' and '<code>feature2</code>', not at the past values of '<code>target</code>'.</p> <p>Here is my current state:</p> <pre><code>data = pd.DataFrame({ 'ds': np.arange(100), 'y': feature1 + feature2, 'feature1': feature1, 'feature2': feature2, }) model = NeuralProphet() # add additional features model = model.add_future_regressor(name='feature1') model = model.add_future_regressor(name='feature2') # assume milliseconds as freq model.fit(data, freq='L') </code></pre> <p>My final goal would be to compare PatchTST and NeuralProphet.</p>
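Setting the framework aside for a moment, the modeling goal (predict y from the features' values, never from past y) can be sanity-checked with a plain least-squares design matrix. This is a numpy-only sketch of the toy problem, not a neuralforecast or NeuralProphet API example:

```python
# Sketch: with y never in the design matrix, ordinary least squares still
# recovers the generating relation target = 1*feature1 + 1*feature2 exactly,
# which is what the question wants the forecaster to learn.
import numpy as np

t = np.linspace(0, 2 * np.pi, 100)
feature1, feature2 = np.sin(t), np.cos(t)
target = feature1 + feature2

X = np.column_stack([feature1, feature2])   # features only; no past y
coef, *_ = np.linalg.lstsq(X, target, rcond=None)
print(np.round(coef, 6))  # → [1. 1.]
```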
<python><deep-learning><time-series>
2024-06-18 15:00:40
0
549
Felix Hohnstein
78,637,991
12,415,855
Click on button not possible?
<p>I try to click on a button using the following code -</p> <pre><code>import time from selenium import webdriver from selenium.webdriver.chrome.options import Options from selenium.webdriver.chrome.service import Service from selenium.webdriver.support.ui import WebDriverWait from selenium.webdriver.support import expected_conditions as EC from selenium.webdriver.common.by import By if __name__ == '__main__': print(f&quot;Checking Browser driver...&quot;) options = Options() options.add_argument(&quot;start-maximized&quot;) options.add_argument('--log-level=3') options.add_experimental_option(&quot;prefs&quot;, {&quot;profile.default_content_setting_values.notifications&quot;: 1}) options.add_experimental_option(&quot;excludeSwitches&quot;, [&quot;enable-automation&quot;]) options.add_experimental_option('excludeSwitches', ['enable-logging']) options.add_experimental_option('useAutomationExtension', False) options.add_argument('--disable-blink-features=AutomationControlled') srv=Service() driver = webdriver.Chrome (service=srv, options=options) waitWD = WebDriverWait (driver, 10) link = &quot;https://www.arbeitsagentur.de/jobsuche/jobdetail/10001-1000341861-S&quot; driver.get (link) time.sleep(3) waitWD.until(EC.element_to_be_clickable((By.XPATH, '//button[@class=&quot;ba-btn ba-btn-primary&quot;]'))).click() </code></pre> <p>But when i run this code i get the following error -</p> <pre><code>(selenium) C:\DEV\Fiverr2024\TRY\ben_hypace&gt;python temp1.py Checking Browser driver... 
Traceback (most recent call last): File &quot;C:\DEV\Fiverr2024\TRY\ben_hypace\temp1.py&quot;, line 28, in &lt;module&gt; waitWD.until(EC.element_to_be_clickable((By.XPATH, '//button[@class=&quot;ba-btn ba-btn-primary&quot;]'))).click() ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File &quot;C:\DEV\.venv\selenium\Lib\site-packages\selenium\webdriver\support\wait.py&quot;, line 105, in until raise TimeoutException(message, screen, stacktrace) selenium.common.exceptions.TimeoutException: Message: </code></pre> <p>Why is the red &quot;Alles zulassen&quot; button on this site not clicked when using Selenium?</p>
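A hedged sketch: a common reason such an XPath wait times out on consent banners is that the button lives inside a shadow root, which XPath cannot pierce. The helper below only builds the JavaScript for a querySelector chain through shadow roots; the selector names in the usage comment are illustrative guesses, not verified against this site:

```python
# Sketch: compose the JS needed to reach an element through nested shadow
# roots, since a plain //button[...] XPath never matches inside a shadow DOM.
def shadow_query_js(*selectors):
    expr = "document"
    for sel in selectors[:-1]:
        expr = f"{expr}.querySelector('{sel}').shadowRoot"
    return f"return {expr}.querySelector('{selectors[-1]}')"


# Usage with Selenium (untested sketch; the host-element selector is a guess):
#   el = driver.execute_script(shadow_query_js(
#       "bahf-cookie-disclaimer-dpl3", "button.ba-btn-primary"))
#   el.click()
```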
<python><selenium-webdriver>
2024-06-18 14:11:16
1
1,515
Rapid1898
78,637,897
1,635,909
Emulating an array-like in Python
<p>I am wondering whether <code>python</code> can have the same facilities as <code>C++</code> when defining a class that is derived from a container class, such as a <code>vector</code>. In short, if I define a class which is based on a <code>numpy</code> <code>array</code>, how many of the array's algebraic operations do I need to code?</p> <p>I expect that I have to code <code>__getitem__</code> and <code>__setitem__</code> at the very least, and some way to iterate over the data. But <a href="https://docs.python.org/3/reference/datamodel.html#emulating-container-types" rel="nofollow noreferrer">reading the doc</a>, it is not clear to me whether this is sufficient for a function such as <code>numpy.abs()</code> to be callable on an instance of the new class. Do I need to explicitly define the <code>__abs__(self)</code> method, or are there cases for which this and other functions that operate in place on arrays will be implicitly defined?</p>
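For the NumPy side of the question specifically, a hedged sketch: what usually matters for np.abs() and friends is the array protocol rather than __abs__. Defining __array__ lets NumPy functions consume the object (their results come back as plain ndarrays); preserving the wrapper type in results would additionally need __array_ufunc__ or __array_wrap__, not shown here:

```python
# Sketch: __array__ is the hook NumPy uses to convert an arbitrary object
# into an ndarray, so np.abs() works without defining __abs__ at all.
import numpy as np


class MyArray:
    def __init__(self, data):
        self._data = np.asarray(data)

    def __array__(self, dtype=None, copy=None):
        # copy is accepted for NumPy 2.x compatibility but ignored in this sketch.
        return self._data if dtype is None else self._data.astype(dtype)

    def __getitem__(self, idx):
        return self._data[idx]

    def __len__(self):
        return len(self._data)


a = MyArray([-1, -2, 3])
print(np.abs(a))  # → [1 2 3]  (a plain ndarray, not a MyArray)
```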
<python><containers>
2024-06-18 13:54:18
1
2,234
Joce
78,637,658
2,170,792
Accessing attributes of a Python Descriptor
<p>Not sure if this is feasible or not. The implementation/example below is dummy, FYI.</p> <p>I have a Python class, <em>Person</em>. Each person has a public first name and a public last name attribute with corresponding private attributes - I'm using a descriptor pattern to manage access to the underlying private attributes.</p> <p>I am using the descriptor to count the number of times the attribute is accessed as well as obtain the underlying result.</p> <pre><code>class AttributeAccessCounter: def __init__(self): self._access_count = 0 def __get__(self, instance, owner): self._access_count += 1 return getattr(instance, self.attrib_name) def __set_name__(self, obj, name): self.attrib_name = f'_{name}' @property def counter(self): return self._access_count class Person: first = AttributeAccessCounter() last = AttributeAccessCounter() def __init__(self, first, last): self._first = first self._last = last </code></pre> <p>From an instance of the class Person, how can I access the <code>_access_count</code> or property <code>counter</code>?</p> <pre><code>john = Person('John','Smith') print(john.first) # 'John' print(john.first.counter) # AttributeError: 'str' object has no attribute 'counter' </code></pre>
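A hedged sketch of one way to reach the counter: the descriptor object lives in the class __dict__, so it can be fetched without triggering __get__; adding an instance-is-None guard also makes class-level access return the descriptor itself. One caveat kept from the original design: the count is shared by all instances, because there is one descriptor per class attribute, not per Person:

```python
# Sketch: john.first goes through __get__ (and counts); the descriptor
# itself is reachable via the class, bypassing the counting path.
class AttributeAccessCounter:
    def __init__(self):
        self._access_count = 0

    def __set_name__(self, owner, name):
        self.attrib_name = f'_{name}'

    def __get__(self, instance, owner):
        if instance is None:
            # Class-level access (Person.first) returns the descriptor,
            # exposing .counter without incrementing the count.
            return self
        self._access_count += 1
        return getattr(instance, self.attrib_name)

    @property
    def counter(self):
        return self._access_count


class Person:
    first = AttributeAccessCounter()
    last = AttributeAccessCounter()

    def __init__(self, first, last):
        self._first = first
        self._last = last


john = Person('John', 'Smith')
john.first                                   # one counted access
print(type(john).__dict__['first'].counter)  # → 1
print(Person.first.counter)                  # → 1 (guard path, no increment)
```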
<python><python-descriptors>
2024-06-18 13:08:15
3
302
pemm
78,637,536
5,024,426
color half of the borders with GeoPandas
<p>I have three polygons with common borders.</p> <p>I can't figure out how to color only the inner half of the border.</p> <p>For the moment I have:</p> <pre><code>import geopandas as gpd import matplotlib.pyplot as plt fig, ax = plt.subplots() data = [ (gpd.read_file(&quot;shape1.shp&quot;), &quot;#f2a6a5&quot;), (gpd.read_file(&quot;shape2.shp&quot;), &quot;#fbebc0&quot;), (gpd.read_file(&quot;shape3.shp&quot;), &quot;#dacbab&quot;), ] for poly, color in data: poly.boundary.plot( ax=ax, color=color, linestyle=&quot;solid&quot;, linewidth=6, ) </code></pre> <p><a href="https://i.sstatic.net/IYDpolIW.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/IYDpolIW.png" alt="enter image description here" /></a></p> <p>So, I would like to color only the inner part of the border. This would allow for a two-color border between two polygons.</p>
<python><matplotlib><geopandas>
2024-06-18 12:45:42
1
1,798
djangoliv
78,637,239
7,699,037
How to type hint class defined in conftest and returned from fixture
<p>I have a class that I want to use in various test cases/files to test certain conditions. Something like the following:</p> <pre><code>class MyTester: @staticmethod def does_string_apply(my_string: str) -&gt; bool: return my_string == 'Hello World' </code></pre> <p>I want to use that class in various test cases among various test files, which is why I defined it in conftest and provided a fixture to access that class.</p> <pre><code>@pytest.fixture(name=&quot;my_tester&quot;) def fixture_my_tester() -&gt; type[MyTester]: return MyTester </code></pre> <p>That fixture is used in the test cases, but the class name is not imported. Therefore, I do not know how to type hint the fixture:</p> <pre><code>def test_something(my_tester: type['What goes here?']) -&gt; None: assert my_tester.does_string_apply(&quot;Hello&quot;) </code></pre>
<python><pytest><python-typing><fixtures>
2024-06-18 11:50:29
1
2,908
Mike van Dyke
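One common pattern (sketched here in a single file for brevity; names are taken from the question) is simply to import the class from `conftest` in the test module purely for the annotation — importing from `conftest` is legal, and the import can also be guarded with `typing.TYPE_CHECKING` to keep it annotation-only:

```python
import pytest


class MyTester:
    @staticmethod
    def does_string_apply(my_string: str) -> bool:
        return my_string == 'Hello World'


@pytest.fixture(name='my_tester')
def fixture_my_tester() -> type[MyTester]:
    return MyTester


# In a separate test module you would write:
#   from conftest import MyTester        # or guard it with typing.TYPE_CHECKING
def test_something(my_tester: type[MyTester]) -> None:
    assert my_tester.does_string_apply('Hello World')
```

Moving `MyTester` out of `conftest.py` into a regular helper module (and returning it from the fixture) avoids importing from `conftest` at all, which some teams prefer.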
78,637,203
9,251,158
How to uninstall partially installed package with poetry?
<p>I tried installing a package with poetry for the first time. It worked on a machine and failed on another because the hardware is not compatible with TensorFlow. I should have installed in a virtual environment, but didn't. I tried uninstalling with <code>poetry remove</code> and <code>poetry uninstall</code> and neither exists. I also searched online and found no results.</p> <p>How can I uninstall everything that poetry installed for this failed package?</p>
<python><installation><package><python-poetry>
2024-06-18 11:41:07
2
4,642
ginjaemocoes
78,637,002
11,098,908
Child class's attribute did not override which was defined in parent class
<p>I have a parent and two child classes defined like this</p> <pre><code>import pygame class Person(): def __init__(self): self.image = pygame.image.load('person.png').convert_alpha() self.image = pygame.transform.scale(self.image, (int(self.image.get_width() * 0.5), int(self.image.get_height() * 0.5))) print('size: ', self.image.get_size()) class Teacher(Person): def __init__(self): super().__init__() self.image = pygame.image.load('teacher.png').convert_alpha() class Doctor(Person): def __init__(self): super().__init__() self.image = pygame.image.load('doctor.png').convert_alpha() self.image = pygame.transform.scale(self.image, (int(self.image.get_width() * 1.2), int(self.image.get_height() * 0.75))) ... </code></pre> <p>The size of the picture of <code>person.png</code>, <code>teacher.png</code> and <code>doctor.png</code> is 98x106, 125x173 and 97x178 respectively.</p> <p>When I run the following code, its output confused me. It seemed the code <code>pygame.image.load()</code> and <code>pygame.transform.scale()</code> in the child classes <code>Teacher</code> and <code>Doctor</code> didn't override the attribute defined in the parent class <code>Person</code>.</p> <pre><code>pygame.display.set_mode((500, 500)) players = {'Teacher': Teacher(), 'Doctor': Doctor()} Output: pygame 2.4.0 (SDL 2.26.4, Python 3.10.9) Hello from the pygame community. https://www.pygame.org/contribute.html size: (49, 53) &lt;---- expected to be (62, 86) size: (49, 53) &lt;---- expected to be (116, 133) </code></pre> <p>What happened? What did I do wrong?</p>
<python>
2024-06-18 11:00:44
3
1,306
Nemo
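The behaviour in the question is the normal initialisation order rather than a failed override: the `print` runs inside `Person.__init__`, i.e. *before* control returns to the child's `__init__` and the child replaces `self.image`. A pygame-free sketch of the same sequence (file names stand in for loaded surfaces):

```python
class Person:
    def __init__(self):
        self.image = 'person.png'                    # parent sets its own image first
        print('inside Person.__init__:', self.image)  # prints the *parent's* image


class Teacher(Person):
    def __init__(self):
        super().__init__()            # the parent's print happens here...
        self.image = 'teacher.png'    # ...and only then is the attribute overridden


t = Teacher()
print('after construction:', t.image)  # teacher.png
```

So both `Teacher()` and `Doctor()` print the size of the scaled `person.png`; printing `self.image.get_size()` after construction (or at the end of each child's `__init__`) shows the overridden image.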
78,636,947
23,046,354
A module that was compiled using NumPy 1.x cannot be run in NumPy 2.0.0 as it may crash
<pre><code>Traceback (most recent call last): File &quot;C:\Users\mohit\OneDrive\Desktop\Front End\Flask\pythonProject1\tut2.py&quot;, line 8, in &lt;module&gt; import imutils File &quot;C:\Users\mohit\OneDrive\Desktop\Front End\Flask\pythonProject1\.venv\Lib\site-packages\imutils\__init__.py&quot;, line 8, in &lt;module&gt; from .convenience import translate File &quot;C:\Users\mohit\OneDrive\Desktop\Front End\Flask\pythonProject1\.venv\Lib\site-packages\imutils\convenience.py&quot;, line 6, in &lt;module&gt; import cv2 File &quot;C:\Users\mohit\OneDrive\Desktop\Front End\Flask\pythonProject1\.venv\Lib\site-packages\cv2\__init__.py&quot;, line 181, in &lt;module&gt; bootstrap() File &quot;C:\Users\mohit\OneDrive\Desktop\Front End\Flask\pythonProject1\.venv\Lib\site-packages\cv2\__init__.py&quot;, line 153, in bootstrap native_module = importlib.import_module(&quot;cv2&quot;) File &quot;C:\Program Files\Python312\Lib\importlib\__init__.py&quot;, line 90, in import_module return _bootstrap._gcd_import(name[level:], package, level) AttributeError: _ARRAY_API not found Traceback (most recent call last): File &quot;C:\Users\mohit\OneDrive\Desktop\Front End\Flask\pythonProject1\tut2.py&quot;, line 8, in &lt;module&gt; import imutils File &quot;C:\Users\mohit\OneDrive\Desktop\Front End\Flask\pythonProject1\.venv\Lib\site-packages\imutils\__init__.py&quot;, line 8, in &lt;module&gt; from .convenience import translate File &quot;C:\Users\mohit\OneDrive\Desktop\Front End\Flask\pythonProject1\.venv\Lib\site-packages\imutils\convenience.py&quot;, line 6, in &lt;module&gt; import cv2 File &quot;C:\Users\mohit\OneDrive\Desktop\Front End\Flask\pythonProject1\.venv\Lib\site-packages\cv2\__init__.py&quot;, line 181, in &lt;module&gt; bootstrap() File &quot;C:\Users\mohit\OneDrive\Desktop\Front End\Flask\pythonProject1\.venv\Lib\site-packages\cv2\__init__.py&quot;, line 153, in bootstrap native_module = importlib.import_module(&quot;cv2&quot;) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File 
&quot;C:\Program Files\Python312\Lib\importlib\__init__.py&quot;, line 90, in import_module return _bootstrap._gcd_import(name[level:], package, level) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ImportError: numpy.core.multiarray failed to import </code></pre> <blockquote> <p>A module that was compiled using NumPy 1.x cannot be run in NumPy 2.0.0 as it may crash. To support both 1.x and 2.x versions of NumPy, modules must be compiled with NumPy 2.0. Some module may need to rebuild instead e.g. with 'pybind11&gt;=2.12'.</p> <p>If you are a user of the module, the easiest solution will be to downgrade to 'numpy&lt;2' or try to upgrade the affected module. We expect that some modules will need time to support NumPy 2.</p> </blockquote> <p>I was trying to make a Flask app that uses the cv2 and imutils Python libraries in PyCharm, but there seems to be a problem.</p> <p>The error seems to come from the import statements:</p> <pre><code>import imutils import cv2 </code></pre> <p>I watched a tutorial online about how to install cv2 and imutils, but even after installing there seems to be a problem.</p> <p>The error is about the upgraded version of NumPy, which I don't understand, as I am not using NumPy directly.</p> <p>Still, I upgraded the version of NumPy but it does not work. I have tried multiple things that people have suggested, but nothing is working; I am not sure how to make a Docker image.</p> <p>The problem might be something related to PyCharm. I am not sure.</p>
<python><numpy><opencv><flask>
2024-06-18 10:49:29
3
365
Mohit Gupta
78,636,936
8,300,917
Avoiding Merge In Pandas
<p>I have a data frame that looks like this :</p> <p><a href="https://i.sstatic.net/51VAUJMH.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/51VAUJMH.png" alt="enter image description here" /></a></p> <p>I want to group the data frame by <strong>#PROD</strong> and <strong>#CURRENCY</strong> and replace <strong>TP</strong> with the contents of the <strong>Offshore data</strong> in the <strong>Loc</strong> column <em>Without creating two data frames and joining them.</em></p> <p>The final output will look something like:</p> <p><a href="https://i.sstatic.net/6e7RwABM.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/6e7RwABM.png" alt="enter image description here" /></a></p> <p>I was able to create the output by splitting the data frame into two (Onshore and Offshore ) and then creating a join on #PROD and #CURRENCY. However, I was wondering if there is a cleaner way to do this ?</p> <p>The Code for the Dataframe is :</p> <pre><code>import pandas as pd data=[['Offshore','NY','A','USD','ABC_USD'],['Onshore','BH','A','USD',''], ['Onshore','AE','A','USD',''],\ ['Offshore','NY','A','GBP','GBP_ABC'],['Onshore','BH','A','GBP',''], ['Onshore','AE','A','GBP',''],\ ['Onshore','BH','A','EUR',''],['Onshore','AE','A','EUR','']] df = pd.DataFrame(data, columns=['Loc', 'Country','#PROD','#CURRENCY','TP']) df </code></pre>
<python><pandas><merge>
2024-06-18 10:47:59
4
904
Number Logic
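One way to stay inside a single frame (a sketch against the question's sample data, assuming — as in that sample — only the Offshore row of each group carries a non-empty <code>TP</code>) is `groupby(...).transform`, which broadcasts the group's Offshore value to every row of the same `(#PROD, #CURRENCY)` group:

```python
import pandas as pd

data = [['Offshore', 'NY', 'A', 'USD', 'ABC_USD'], ['Onshore', 'BH', 'A', 'USD', ''],
        ['Onshore', 'AE', 'A', 'USD', ''],
        ['Offshore', 'NY', 'A', 'GBP', 'GBP_ABC'], ['Onshore', 'BH', 'A', 'GBP', ''],
        ['Onshore', 'AE', 'A', 'GBP', ''],
        ['Onshore', 'BH', 'A', 'EUR', ''], ['Onshore', 'AE', 'A', 'EUR', '']]
df = pd.DataFrame(data, columns=['Loc', 'Country', '#PROD', '#CURRENCY', 'TP'])

# Within each group, pick the first non-empty TP (the Offshore one) and broadcast it;
# groups with no Offshore row (EUR) keep the empty string.
df['TP'] = df.groupby(['#PROD', '#CURRENCY'])['TP'].transform(
    lambda s: s[s != ''].iloc[0] if (s != '').any() else '')
print(df)
```

If Onshore rows could also carry values, a mask on `Loc == 'Offshore'` applied before the `transform` would be the safer variant.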
78,636,865
13,023,201
Pandas interpretation question with double quotes in csv
<p>I have a csv file whose values are comma separated. One of the values of a column in plain text is inputted as</p> <pre class="lang-none prettyprint-override"><code>&quot;XATO,2xASSY,SSD, 6.4TB Gen2, SFF-2.5&quot;&quot;, N&quot; </code></pre> <p>which is supposed to be within double quotes so that the comma inside the value is not mistaken as a separator.</p> <p>And when I do <code>pd.read_csv</code> and then print this value, I am getting</p> <pre class="lang-none prettyprint-override"><code>XATO,2xASSY,SSD, 6.4TB Gen2, SFF-2.5&quot;, N </code></pre> <p>as the result. Which is exactly what I need, but I just can't seem to understand the interpretation logic. Why is the second-to-last double quote getting omitted by default in pandas.read_csv?</p> <p><em>Edit: one can put the first string in a csv file, using a text editor (not Excel) and then open the file in Excel or read the csv using pandas and see the output.</em></p> <p>Answer: I think pandas is reading it as the concatenation of 2 strings 'XATO,2xASSY,SSD, 6.4TB Gen2, SFF-2.5&quot;' + ', N'.</p>
<python><pandas><csv><double-quotes>
2024-06-18 10:34:22
1
371
Ritz Ganguly
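The behaviour here is the standard CSV quote-doubling rule rather than string concatenation: inside a quoted field, <code>""</code> is an escaped literal quote, and the field only ends at the final unpaired quote. The stdlib `csv` module produces the same parse as `pd.read_csv`:

```python
import csv
import io

# The raw line exactly as it appears in the file.
raw = '"XATO,2xASSY,SSD, 6.4TB Gen2, SFF-2.5"", N"\n'

# Inside the quoted field, "" collapses to a single literal quote,
# so the whole line is one field containing commas and one quote.
row = next(csv.reader(io.StringIO(raw)))
print(row)  # ['XATO,2xASSY,SSD, 6.4TB Gen2, SFF-2.5", N']
```

This doubling convention is part of RFC 4180, which both the `csv` module and pandas follow by default (`doublequote=True`).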
78,636,490
7,786,149
Counting items in an array and making counts into columns
<p>I am working in databricks where I have a dataframe as follows:</p> <p><strong>dummy_df</strong></p> <pre><code>names items Ash [c1,c2,c2,c3] Bob [c1,c2] May [] Amy [c2,c3,c3] </code></pre> <p>Where the <code>names</code> column contains strings for values and <code>items</code> is a column of arrays.</p> <p>I would like to count how many times each item appears for each name and make each count into a column. So the desired output will look something like so:</p> <pre><code>names items c1_count c2_count c3_count Ash [c1,c2,c2,c3] 1 2 1 Bob [c1,c2] 1 1 0 May [] 0 0 0 Amy [c2,c3,c3] 0 1 2 </code></pre> <p>So far my approach is to count each item separately using aggregate as so:</p> <pre><code>c1_count = dummy_df.select(&quot;*&quot;, explode(&quot;items&quot;).alias(&quot;exploded&quot;))\ .where(col(&quot;exploded&quot;).isin(['c1']))\ .groupBy(&quot;names&quot;, &quot;items&quot;)\ .agg(count(&quot;exploded&quot;).alias(&quot;c1_count&quot;)) c2_count = ... c3_count = ... </code></pre> <p>And then I concatenate a new df by taking count columns from my new 3 dataframes and adding them to my <code>dummy_df</code>. But that is very inefficient and if I were to have many items in my arrays (say 50-100), it may even be impractical.</p> <p>I wonder if there is a better way? Can I somehow calculate the count of all items and make such counts into the columns without needing to count them individually and do massive concatenation at the end?</p>
<python><pandas><apache-spark><pyspark><databricks>
2024-06-18 09:17:08
5
425
Joe
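The usual shape of this in PySpark is a single explode + pivot — roughly `df.select('names', explode_outer('items')).groupBy('names').pivot(...).count()` (hedged: `explode_outer` keeps the empty-list rows, and the resulting nulls need a `fillna(0)`). Since the question also carries the pandas tag, here is the same idea sketched on the sample data with `explode` plus `crosstab`:

```python
import pandas as pd

df = pd.DataFrame({'names': ['Ash', 'Bob', 'May', 'Amy'],
                   'items': [['c1', 'c2', 'c2', 'c3'], ['c1', 'c2'], [], ['c2', 'c3', 'c3']]})

# One row per (name, item); empty lists explode to a single NaN item.
exploded = df.explode('items')

# crosstab counts each (name, item) pair and drops the NaN items,
# so names with empty lists are restored via reindex with zeros.
counts = (pd.crosstab(exploded['names'], exploded['items'])
            .reindex(df['names'], fill_value=0)
            .add_suffix('_count'))

out = df.join(counts, on='names')
print(out)
```

Every distinct item becomes a column in one pass, so the approach scales to 50-100 items without per-item queries or a final concatenation.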
78,636,351
14,566,295
Finding the indices of continuous 0 from a sequence of 0 and 1
<p>Let's say I have a sequence of <code>0 and 1</code> as in the example below</p> <pre><code>[0,0,0,0,1,1,1,0,1,0,1,0,1,1,1] </code></pre> <p>We can think of this sequence as runs of continuous <code>0</code> interrupted by <code>1</code>.</p> <p>My goal is to fetch the index at which each run of continuous <code>0</code> begins. Therefore in this example, I should get</p> <pre><code>0, 7, 9, 11 </code></pre> <p>For the sequence below</p> <pre><code>[1,1,1,1,0, 0, 0, 1, 0] </code></pre> <p>I should get <code>4, 8</code></p> <p>For <code>[0,0,0,0,0,0]</code> I should get <code>0</code></p> <p>For <code>[1,1,1,1]</code> I should get <code>NULL</code></p> <p>Is there any Python method/function to directly achieve this?</p>
<python>
2024-06-18 08:47:42
3
1,679
Brian Smith
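There is no single built-in for this, but a one-line comprehension captures the rule implied by the examples: index `i` starts a run of zeros exactly when `seq[i] == 0` and the element before it (if any) is `1`.

```python
def zero_run_starts(seq):
    # i starts a zero-run iff seq[i] == 0 and it is not preceded by another 0
    return [i for i, v in enumerate(seq) if v == 0 and (i == 0 or seq[i - 1] == 1)]


print(zero_run_starts([0, 0, 0, 0, 1, 1, 1, 0, 1, 0, 1, 0, 1, 1, 1]))  # [0, 7, 9, 11]
print(zero_run_starts([1, 1, 1, 1, 0, 0, 0, 1, 0]))                    # [4, 8]
print(zero_run_starts([0, 0, 0, 0, 0, 0]))                             # [0]
print(zero_run_starts([1, 1, 1, 1]))                                   # []
```

An empty list plays the role of the question's <code>NULL</code>; `itertools.groupby` or a NumPy `diff`-based variant would give the same answer for large inputs.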
78,636,327
10,836,309
Pyinstaller in virtual environment still yields very large EXE file
<p>I have a Python code of 78 lines using the following packages:</p> <pre><code>import pandas as pd import base64 from bs4 import BeautifulSoup import os import win32com.client as win32 import pathlib </code></pre> <p>I ran the following commands:</p> <pre><code>venv\Scripts\activate python -m pip install pandas python -m pip install pybase64 python -m pip install bs4 python -m pip install pywin32 python -m pip install pyinstaller pyinstaller --onefile to_HTML.py </code></pre> <p>I have created a virtual environment in which I installed only the above packages.</p> <p>Yet the EXE file created is 740Mb!!</p> <p>What am I doing wrong? How can I reduce it?</p>
<python><pyinstaller>
2024-06-18 08:43:16
3
6,594
gtomer
78,636,280
5,567,893
How can I ensure whether torch.randint generate the random values at least once?
<p>I want to generate random values using PyTorch (or plain Python).<br /> I tried to use <code>torch.randperm</code> and <code>torch.randint</code> to do this.<br /> (I guess that <code>torch.randperm</code> isn't suitable for this job since it can only generate a permutation of a fixed range.)</p> <p>However, some of the options conflicted with my requirements.<br /> For example, I want to generate 15 random values from 0 to 9.<br /> I used <code>torch.randint</code> as below, but it didn't guarantee that all numbers were contained in the result at least once.</p> <pre class="lang-py prettyprint-override"><code>torch.randint(0, 10, (15, )) # tensor([5, 1, 0, 5, 6, 6, 0, 5, 4, 8, 1, 0, 1, 3, 2]) #Absence of 7 and 9 </code></pre> <p>How can I ensure the result contains all values in the range at least once?</p>
<python>
2024-06-18 08:31:50
1
466
Ssong
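`torch.randint` alone cannot guarantee coverage; the usual trick is to seed the draw with one copy of every value, fill the remainder at random, and shuffle. Sketched below with the stdlib `random` module (the torch analogue of the same idea would concatenate `torch.arange` with `torch.randint` and shuffle by indexing with `torch.randperm`):

```python
import random


def covering_sample(n_values, n_samples):
    """Draw n_samples values from range(n_values), each value appearing at least once."""
    if n_samples < n_values:
        raise ValueError('cannot cover all values with fewer samples than values')
    # One guaranteed copy of every value, plus uniformly random filler...
    out = list(range(n_values))
    out += [random.randrange(n_values) for _ in range(n_samples - n_values)]
    # ...then shuffle so the guaranteed copies are not clustered at the front.
    random.shuffle(out)
    return out


print(covering_sample(10, 15))
```

Note the result is no longer an i.i.d. uniform sample — the coverage constraint necessarily changes the distribution, which may or may not matter for the use case.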
78,636,241
3,979,057
How to manipulate/join multiple strings containing UTF-8 characters
<p>My code needs to be compatible with both Python 2.x and 3.x versions. I am getting both string as input to my function, and I need to do some manipulation on those:</p> <pre><code>if len(str1) &gt; 10: str1 = str1[:10] + '...' if six.PY3: return ' : '.join((str1, str2)) </code></pre> <p>For Python 2.x, the above join is giving error:</p> <blockquote> <p>UnicodeDecodeError: 'ascii' codec can't decode byte 0xc2 in position 0: ordinal not in range(128)</p> </blockquote> <p>What is the cleaner way of handling such cases for all versions of 2.x and 3.x? As both the string are input to my code, I need to ensure that even if either of these strings contain UTF-8 characters, they should be joined properly.</p> <p>Declaration : I am very new to Python.</p>
<python><python-3.x><encoding><utf-8><python-2.x>
2024-06-18 08:24:32
1
847
HitchHiker
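Assuming the failing input is a Python 2 byte string holding UTF-8 (which is what that `UnicodeDecodeError` during `join` suggests), the usual fix is to normalise both inputs to text before any slicing or joining — then no `six.PY3` branch is needed. A sketch that runs unchanged on 2.x and 3.x (helper names are mine):

```python
# -*- coding: utf-8 -*-


def to_text(value, encoding='utf-8'):
    # py2 str and py3 bytes both satisfy isinstance(value, bytes)
    if isinstance(value, bytes):
        return value.decode(encoding)
    return value


def join_labels(str1, str2):
    str1, str2 = to_text(str1), to_text(str2)
    if len(str1) > 10:
        str1 = str1[:10] + '...'
    return u' : '.join((str1, str2))


print(join_labels(b'caf\xc3\xa9', u'ok'))  # café : ok
```

Decoding at the boundary also makes the truncation correct: slicing decoded text cuts between characters, whereas slicing raw UTF-8 bytes can cut a multi-byte character in half.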
78,636,238
3,616,293
Wrap around 2D coordinates of numpy array
<p>I have a (5, 5) 2D Numpy array:</p> <pre><code>map_height = 5 map_width = 5 # Define a 2D np array- a = np.arange(map_height * map_width).reshape(map_height, map_width) # a ''' array([[ 0, 1, 2, 3, 4], [ 5, 6, 7, 8, 9], [10, 11, 12, 13, 14], [15, 16, 17, 18, 19], [20, 21, 22, 23, 24]]) ''' </code></pre> <p>I can wrap this array around on both of its axes using 'pad()':</p> <pre><code>a_wrapped = np.pad(array = a, pad_width = 1, mode = 'wrap') a_wrapped ''' array([[24, 20, 21, 22, 23, 24, 20], [ 4, 0, 1, 2, 3, 4, 0], [ 9, 5, 6, 7, 8, 9, 5], [14, 10, 11, 12, 13, 14, 10], [19, 15, 16, 17, 18, 19, 15], [24, 20, 21, 22, 23, 24, 20], [ 4, 0, 1, 2, 3, 4, 0]]) ''' </code></pre> <p>The 2D coordinates of (5, 5) 'a' are computed (inefficiently) as:</p> <pre><code># 2D coordinates - # 1st channel/axis = row indices &amp; 2 channel/axis = column indices. a_2d_coords = np.zeros((map_height, map_width, 2), dtype = np.int16) for row_idx in range(map_height): for col_idx in range(map_width): a_2d_coords[row_idx, col_idx][0] = row_idx a_2d_coords[row_idx, col_idx][1] = col_idx # a_2d_coords.shape # (5, 5, 2) </code></pre> <p>I want to wrap this 2D coordinates array 'a_2d_coords' as well by doing:</p> <pre><code>a_2d_coords_wrapped = np.pad(array = a_2d_coords, pad_width = 1, mode = 'wrap') # a_2d_coords_wrapped.shape # (7, 7, 4) </code></pre> <p>It also wraps the 3rd axis/dimension which should not be done! The goal is that the coords of a[1, 4] = (1, 4) and its neighbor's coords to right hand side (RHS) should be a[1, 0] = (1, 0). This is wrapping around the x-axis. Similarly, y-axis 2D coordinates should also be wrapped.</p>
<python><arrays><numpy>
2024-06-18 08:23:59
1
2,518
Arun
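`np.pad` accepts a per-axis `pad_width`, so the channel axis can simply be excluded from the wrap; `np.indices` also replaces the double loop for building the coordinate array:

```python
import numpy as np

map_height, map_width = 5, 5

# np.indices builds the row/column index grids in one vectorised call.
rows, cols = np.indices((map_height, map_width))
a_2d_coords = np.stack([rows, cols], axis=-1).astype(np.int16)   # (5, 5, 2)

# Pad only the two spatial axes; (0, 0) leaves the coordinate channel untouched.
a_2d_coords_wrapped = np.pad(a_2d_coords,
                             pad_width=((1, 1), (1, 1), (0, 0)),
                             mode='wrap')

print(a_2d_coords_wrapped.shape)   # (7, 7, 2)
print(a_2d_coords_wrapped[2, 6])   # [1 0] -> the RHS neighbour of a[1, 4]
```

In the padded frame, original cell `(r, c)` sits at `(r + 1, c + 1)`, so the right-hand neighbour of `a[1, 4]` is `wrapped[2, 6]`, which correctly holds the coordinates `(1, 0)`.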
78,636,212
4,324,329
Dual annotations/decorators of Prometheus Client & FastAPI on same function is not working as expected
<p>I have a use case where I need to have two decorator type of annotation for a single function. I have a rest endpoint for which I need to capture the metrics using prometheus client module. I'm trying something like this, but its not working.</p> <pre><code>rest_app = APIRouter(prefix='/rest') TIME_METRICS_ANNOTATION = Gauge('request_time_taken', 'Time Taken for My Endpoint') @TIME_METRICS_ANNOTATION.time() @rest_app.post(&quot;/endpoint&quot;) async def my_endpoint(request: Request): # body </code></pre> <p>I tried swapping the annotations too, but whichever is immediately above the method definition, only that is taking effect and not the other.</p> <p>I'm aware that there are some other ways of doing it as follows, but the initial approach with annotations would be cleaner code.</p> <pre><code>@rest_app.post(&quot;/endpoint&quot;) async def my_endpoint(request: Request): start_time = time.process_time() # body TIME_METRICS_ANNOTATION.set(time.process_time() - start_time) </code></pre> <p>or</p> <pre><code>@rest_app.post(&quot;/endpoint&quot;) async def my_endpoint(request: Request): my_endpoint_body(request) @TIME_METRICS_ANNOTATION.time() def my_endpoint_body(request: Request): # body </code></pre> <p>Any help is appreciated.</p> <p><strong>Side Note Query</strong> : Is there a built-in decorator for Counter type? I'm facing issues as Counter.inc() is not a callable unlike Gauge.time() or Summary.time()</p>
<python><fastapi><prometheus><python-typing><python-decorators>
2024-06-18 08:16:18
1
367
Sree Karthik S R
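Decorators apply bottom-up, and a router decorator registers exactly the function object it receives — so the only ordering that both registers and times the endpoint is the metric decorator *below* the route decorator. A framework-free sketch of that mechanic (`route` and `timed` stand in for `rest_app.post` and `TIME_METRICS_ANNOTATION.time()`):

```python
import functools

registry = {}


def route(path):
    # Stand-in for rest_app.post: registers whatever callable it is handed.
    def deco(fn):
        registry[path] = fn
        return fn
    return deco


def timed(fn):
    # Stand-in for a metric decorator: observes every call through a wrapper.
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        wrapper.calls += 1
        return fn(*args, **kwargs)
    wrapper.calls = 0
    return wrapper


@route('/endpoint')   # applied second: registers the already-wrapped function
@timed                # applied first: wraps the body
def my_endpoint():
    return 'ok'


print(registry['/endpoint']())      # ok
print(registry['/endpoint'].calls)  # 1
```

With the reversed order, `route` registers the bare function and `timed` wraps an object nobody calls — which matches the "only the decorator immediately above takes effect" symptom. Whether `Gauge.time()` wraps an `async def` cleanly depends on the prometheus_client version (an assumption worth checking); for the side question, `Counter` has no built-in call-counting decorator, so incrementing it inside a small wrapper like the one above is the usual substitute.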
78,636,124
9,902,571
Django app with automated file upload from user local filesystem
<p>I want to create a website based on Python and Django on the backend and Bootstrap, CSS, JQuery and Javascript on the frontend. The functionality will be somewhat like: user will provide a &quot;Source_Path&quot; from the user’s local filesystem irrespective of the OS used by the user. It will upload all the files from that folder to the server.</p> <p>Is this possible? Can we make this automation? I have tried using <code>os</code>. It is working on my local server, but in production, it is not working! I have deployed using apache2 web server.</p> <p>Any suggestion can help!</p>
<python><django><file-upload>
2024-06-18 07:51:49
2
1,203
Niladry Kar
78,635,838
10,413,428
Bundling python app compiled with cython with pyinstaller
<h1>Problem</h1> <p>I have an application which is bundled with pyinstaller. Now a new feature request is that parts are compiled with Cython to C libraries.</p> <p>After the compilation inside the activated virtual environment (poetry) the app runs as expected.</p> <p>BUT, when I bundle it with pyinstaller the executable afterwards can't find packages which are not imported in the main.py file. To my understanding, this is expected, because the Analysis stage of pyinstaller can't read the content of the compiled C code (in the following example <code>modules/test/test.py</code>, which is available to pyinstaller as <code>modules/test/test.cpython-311-x86_64-linux-gnu.so</code>).</p> <h3>Folder overview:</h3> <pre><code>├── compile_with_cython.py ├── main.py ├── main.spec ├── main_window.py ├── poetry.lock └── pyproject.toml </code></pre> <h3>main.py</h3> <pre><code>import sys from PySide6.QtWidgets import QApplication from main_window import MainWindow if __name__ == '__main__': app = QApplication(sys.argv) mainWin = MainWindow() mainWin.show() sys.exit(app.exec_()) </code></pre> <h3>main_window.py</h3> <p>MVP PySide6 Application which uses tomllib to load some toml file</p> <pre><code>import sys from PySide6.QtWidgets import QApplication, QMainWindow, QPushButton, QDialog, QVBoxLayout, QTextEdit from PySide6.QtCore import Slot class MainWindow(QMainWindow): def __init__(self): super().__init__() ... </code></pre> <h2>Error code</h2> <pre><code>./main Traceback (most recent call last): File &quot;main.py&quot;, line 12, in &lt;module&gt; File &quot;modules/test/test.py&quot;, line 3, in init modules.test.test ModuleNotFoundError: No module named 'tomllib' [174092] Failed to execute script 'main' due to unhandled exception! </code></pre>
<python><python-3.x><pyinstaller><cython>
2024-06-18 06:38:04
1
405
sebwr
78,635,753
14,890,683
Pydantic Inheritance Defaults
<p>How can I type-hint a parent <code>BaseModel</code> such that a child subclass can provide defaults for some of the fields:</p> <pre class="lang-py prettyprint-override"><code>from pydantic import BaseModel class Parent(BaseModel): first: str second: str third: str def run(self): print(self.first + self.second + self.third) class Child(Parent): first = &quot;1&quot; second = &quot;2&quot; c = Child(third=&quot;3&quot;) # mypy: Missing named argument &quot;first&quot; for &quot;Child&quot; &amp; Missing named argument &quot;second&quot; for &quot;Child&quot; </code></pre>
<python><inheritance><pydantic>
2024-06-18 06:15:52
2
345
Oliver
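Repeating the annotation in the child is what makes both pydantic and mypy treat the assignments as field defaults — a bare `first = "1"` is just a class attribute to mypy, and recent pydantic versions reject un-annotated overrides of annotated fields outright. A sketch (assuming pydantic is installed; the classes otherwise follow the question):

```python
from pydantic import BaseModel


class Parent(BaseModel):
    first: str
    second: str
    third: str

    def run(self) -> None:
        print(self.first + self.second + self.third)


class Child(Parent):
    first: str = '1'    # annotation repeated -> a real field with a default
    second: str = '2'


c = Child(third='3')
c.run()  # prints 123
```

With the annotations in place, mypy's pydantic plugin sees `first` and `second` as optional constructor arguments, so `Child(third="3")` type-checks.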
78,635,741
672,833
Drop-in replacement for the cgi module
<p>Python 3.13 removed the cgi module.</p> <p>While there are many documented ways to work around that issue, e.g. by using <code>urllib</code> and the <code>email</code> package, for one project I actually need a drop-in replacement, that is, a package which also offers a <code>FieldStorage</code> class which I can use instead of the removed one.</p> <p>I read about such a project a few weeks ago, but unfortunately did not bookmark it.</p> <p>tl/dr</p> <ul> <li>I look for a package which offers a <code>FieldStorage</code> class as a replacement for the removed <code>cgi.FieldStorage</code> in Python 3.13.</li> <li>I am not interested in ways to rewrite the code via <code>urllib</code> and the <code>email</code> package</li> <li><code>Werkzeug</code> is not a drop-in replacement</li> </ul> <p>Thank you!</p>
<python><cgi>
2024-06-18 06:12:55
1
6,336
Jürgen Gmach
78,635,727
568,289
How to get the Azure Automation Runbook Job Id in a Python Runbook?
<p>Powershell Runbooks allow you to get the Job ID with $PSPrivateMetadata.JobId.Guid</p> <p>How do we get it if the runbook is Python?</p>
<python><python-3.x><azure><azure-automation><azure-runbook>
2024-06-18 06:09:47
3
12,571
richard
78,635,682
4,732,111
DuckDB support for Postgres function WITH ORDINALITY
<p>Currently I'm using DuckDB to execute the following Postgres query, which contains the <strong>WITH ORDINALITY</strong> clause, on a Polars dataframe:</p> <pre><code>with suffixes as ( select ordrnbr,(nr-1)/2 N ,max(case when nr%2=1 then elem end) Nkey ,cast(max(case when nr%2=0 then elem end) as int) Nval from test t left join lateral unnest(string_to_array(t.description, '@')) WITH ORDINALITY AS a(elem, nr) ON true group by ordrnbr,(nr-1)/2 ) select concat(t.ordrnbr,'-',row_number()over(partition by t.ordrnbr order by s.N))ordrnbr , vehiclename, description, id, totprice ,coalesce(s.Nval,totprice) ind_price from test t left join suffixes s on s.ordrnbr=t.ordrnbr where s.Nval&gt;=10000 or (s.Nval is null and totPrice&gt;=10000) </code></pre> <p>When I tried executing this query, DuckDB throws an error</p> <blockquote> <p>exception:duckdb.duckdb.NotImplementedException: Not implemented Error: WITH ORDINALITY not implemented</p> </blockquote> <p>Is there any other way that we could achieve this using DuckDB?</p> <p>Any help would be appreciated.</p>
<python><python-polars><duckdb>
2024-06-18 05:54:48
1
363
Balaji Venkatachalam
78,635,654
11,163,122
Extract Google style docstring into dataclass using Sphinx Napoleon
<p>I am trying to programmatically ingest (&quot;reflect&quot;) Google style docstrings. I am using <code>sphinx.ext.napoleon</code>, as seemingly not many tools do this. I am following <a href="https://github.com/sphinx-doc/sphinx/blob/v7.3.7/sphinx/ext/napoleon/docstring.py#L120-L146" rel="nofollow noreferrer">this example</a> with the below function:</p> <pre class="lang-py prettyprint-override"><code>from sphinx.ext.napoleon import Config, GoogleDocstring def foo(arg: int | None = 5) -&gt; None: &quot;&quot;&quot;Stub summary. Args: arg(int): Optional integer defaulted to 5. &quot;&quot;&quot; docstring = GoogleDocstring(foo.__doc__) print(docstring) </code></pre> <p>However, my usage doesn't automagically convert the printed output to reST style like the Sphinx example does.</p> <p>So this leads me to my question. How can one programmatically ingest the summary, extended description, arg names, and arg descriptions from a Google Style docstring? Ideally they are converted into some sort of data structure (e.g. <code>dict</code> or <code>dataclass</code>).</p>
<python><reflection><docstring><sphinx-napoleon>
2024-06-18 05:46:05
1
2,961
Intrastellar Explorer
78,635,591
713,844
Python provider healthcheck doesn't detect virtualenv on MacOs
<p>Switching from vim to neovim. I'm running:</p> <p>MacOS Sonoma 14.4.1</p> <p>brew installed pyenv and neovim</p> <p>created a py3nvim virtualenv for neovim, as per the <code>:help provider-python</code> documentation</p> <p>But I still get this error when I run <code>:checkhealth</code></p> <pre><code>provider.python: require(&quot;provider.python.health&quot;).check() Python 3 provider (optional) - pyenv: Path: /opt/homebrew/Cellar/pyenv/2.4.3/libexec/pyenv - pyenv: Root: /Users/stan/.pyenv - Using: g:python3_host_prog = &quot;/Users/XXX/.pyenv/versions/py3nvim/bin/python&quot; - Executable: /Users/XXX/.pyenv/versions/py3nvim/bin/python - Python version: 3.11.9 - pynvim version: 0.5.0 - OK Latest pynvim is installed. Python virtualenv ~ - ERROR Failed to run healthcheck for &quot;provider.python&quot; plugin. Exception: ...0.10.0/share/nvim/runtime/lua/provider/python/health.lua:20: Usage: /Users/XXX/.pyenv/versions/py3nvim/bin/python3-config [--prefix|--exec-prefix|--includes|--libs|--cflags|--ldflags|--extension-suffix|-- help|--abiflags|--configdir|--embed] </code></pre> <p>When I run <code>/Users/XXX/.pyenv/versions/py3nvim/bin/python3-config</code> I get the following output:</p> <pre><code>Usage: /Users/XXX/.pyenv/versions/py3nvim/bin/python3-config [--prefix|--exec-prefix|--includes|--libs|--cflags|--ldflags|--extension-suffix|--help|--abiflags|--configdir|--embed] </code></pre> <p>My init.vim file contains:</p> <pre><code>let g:python3_host_prog ='/Users/XXX/.pyenv/versions/py3nvim/bin/python' let g:loaded_perl_provider = 0 let g:loaded_ruby_provider = 0 </code></pre> <p>Help is welcome, I have tried everything, including modifying the health.lua file according to gpt-4o answers, but it's making me run in circles.</p> <p>Any help would be greatly appreciated as I've now spent many hours on this issue to no avail.</p>
<python><neovim>
2024-06-18 05:19:56
1
600
stanm87
78,635,380
16,687,283
Python openpyxl connector with anchor
<p>I really would like to draw a curved arrow connector anchored to a cell with the openpyxl library, like the image below:</p> <p><a href="https://i.sstatic.net/mdeEwXrD.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/mdeEwXrD.png" alt="enter image description here" /></a></p> <p>After going through the official documentation, I cannot find a way to do this (though I found some relevant keywords in <a href="https://openpyxl.readthedocs.io/en/2.4/api/openpyxl.drawing.shapes.html" rel="nofollow noreferrer">class openpyxl.drawing.shapes.PresetGeometry2D</a>).</p> <p>How can I draw this kind of anchored connector with Python? I am open to other packages if openpyxl is incapable of this feature.</p>
<python><excel><openpyxl>
2024-06-18 03:31:18
1
553
lighthouse
78,634,875
1,298,416
Python get request produces different HTML than view source
<pre><code>import requests # Request to website and download HTML contents url='https://beacon.schneidercorp.com/Application.aspx?AppID=165&amp;LayerID=2145&amp;PageTypeID=2&amp;PageID=1104&amp;KeyValue=0527427230' req=requests.get(url, 'html.parser') </code></pre> <p>produces this result:</p> <pre><code>'&lt;!DOCTYPE html&gt;&lt;html lang=&quot;en-US&quot;&gt;&lt;head&gt;&lt;title&gt;Just a moment...&lt;/title&gt;&lt;meta http-equiv=&quot;Content-Type&quot; content=&quot;text/html; charset=UTF-8&quot;&gt;&lt;meta http-equiv=&quot;X-UA-Compatible&quot; content=&quot;IE=Edge&quot;&gt;&lt;meta name=&quot;robots&quot; content=&quot;noindex,nofollow&quot;&gt;&lt;meta name=&quot;viewport&quot; content=&quot;width=device-width,initial-scale=1&quot;&gt;&lt;style&gt;*{box-sizing:border-box;margin:0;padding:0}html{line-height:1.15;-webkit-text-size-adjust:100%;color:#313131}button,html{font-family:system-ui,-apple-system,BlinkMacSystemFont,Segoe UI,Roboto,Helvetica Neue,Arial,Noto Sans,sans-serif,Apple Color Emoji,Segoe UI Emoji,Segoe UI Symbol,Noto Color Emoji}@media (prefers-color-scheme:dark){body{background-color:#222;color:#d9d9d9}body a{color:#fff}body a:hover{color:#ee730a;text-decoration:underline}body .lds-ring div{border-color:#999 transparent transparent}body .font-red{color:#b20f03}body .pow-button{background-color:#4693ff;color:#1d1d1d}body #challenge-success-text{background-image:url(data:image/svg+xml;base64,PHN2ZyB4bWxucz0iaHR0cDovL3d3dy53My5vcmcvMjAwMC9zdmciIHdpZHRoPSIzMiIgaGVpZ2h0PSIzMiIgZmlsbD0ibm9uZSIgdmlld0JveD0iMCAwIDI2IDI2Ij48cGF0aCBmaWxsPSIjZDlkOWQ5IiBkPSJNMTMgMGExMyAxMyAwIDEgMCAwIDI2IDEzIDEzIDAgMCAwIDAtMjZtMCAyNGExMSAxMSAwIDEgMSAwLTIyIDExIDExIDAgMCAxIDAgMjIiLz48cGF0aCBmaWxsPSIjZDlkOWQ5IiBkPSJtMTAuOTU1IDE2LjA1NS0zLjk1LTQuMTI1LTEuNDQ1IDEuMzg1IDUuMzcgNS42MSA5LjQ5NS05LjYtMS40Mi0xLjQwNXoiLz48L3N2Zz4=)}body 
[... several kilobytes of inline CSS and obfuscated JavaScript from the Cloudflare challenge page, truncated: the only human-readable content is the &quot;Enable JavaScript and cookies to continue&quot; message, i.e. the response is Cloudflare's interstitial rather than the property record page ...]' </code></pre> <p>which is very different from the result of view-source:<a href="https://beacon.schneidercorp.com/Application.aspx?AppID=165&amp;LayerID=2145&amp;PageTypeID=4&amp;PageID=1108&amp;KeyValue=0527427230" rel="nofollow noreferrer">https://beacon.schneidercorp.com/Application.aspx?AppID=165&amp;LayerID=2145&amp;PageTypeID=4&amp;PageID=1108&amp;KeyValue=0527427230</a></p> <p>I'm new to web scraping...</p>
<python><html><web-scraping><request><get>
2024-06-17 22:24:55
2
341
user1298416
78,634,831
678,572
How can I structure my Python package/submodules so I do not get a circular import error?
<p>Before marking as duplicate to this one, please look at the details of my layout. I've gone through and this question does not address my issue and I haven't been able to adapt the answer: <a href="https://stackoverflow.com/questions/64807163/importerror-cannot-import-name-from-partially-initialized-module-m">ImportError: cannot import name &#39;...&#39; from partially initialized module &#39;...&#39; (most likely due to a circular import)</a></p> <p>Here's my package layout:</p> <pre><code>clairvoyance |-- __init__.py |-- bayesian | |-- __init__.py | `-- bayesian.py |-- legacy | |-- __init__.py | `-- legacy.py |-- utils | |-- __init__.py | `-- utils.py `-- visuals |-- __init__.py `-- visuals.py 5 directories, 9 files </code></pre> <p>Here's the relevant files:</p> <ul> <li><code>clairvoyance/__init__.py</code></li> </ul> <pre><code>__version__ = &quot;2024.6.17&quot; from . import bayesian from . import legacy from . import utils from . import visuals </code></pre> <ul> <li><code>clairvoyance/bayesian/__init__.py</code></li> </ul> <pre><code>from .bayesian import * </code></pre> <ul> <li><code>clairvoyance/legacy/__init__.py</code></li> </ul> <pre><code>from .legacy import * </code></pre> <ul> <li><code>clairvoyance/utils/__init__.py</code></li> </ul> <pre><code>from .utils import * </code></pre> <ul> <li><code>clairvoyance/visuals/__init__.py</code></li> </ul> <pre><code>from .visuals import * </code></pre> <p>In terms of relative imports, I have the following:</p> <ul> <li><code>utils.py</code> None</li> <li><code>bayesian.py</code> has <code>from ..utils import *</code></li> <li><code>legacy</code> has <code>from ..utils import *</code> and <code>from ..visuals import *</code></li> <li><code>visuals.py</code> has <code>from ..utils import *</code></li> </ul> <p>I made sure there are no blatant circular imports so I have a feeling it has to do with the way I've structured my submodules.</p> <p>I'm getting the following error:</p> <pre><code>ImportError: 
cannot import name 'bayesian' from partially initialized module 'clairvoyance' (most likely due to a circular import) (/Users/jolespin/miniconda3/envs/soothsayer_env/lib/python3.9/site-packages/clairvoyance/__init__.py) </code></pre>
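As an aside for future readers, the exact "partially initialized module" message is easy to reproduce with a tiny two-file package, which also shows the mechanism: a submodule reaches back into the package for a name that the package's `__init__.py` only defines after its submodule imports. The package name `demo_pkg` below is made up for illustration.

```python
import sys
import tempfile
import textwrap
from pathlib import Path

# Build a throwaway package on disk: __init__.py imports a submodule,
# and that submodule reaches back into the (still half-built) package.
root = Path(tempfile.mkdtemp())
pkg = root / "demo_pkg"
pkg.mkdir()
(pkg / "__init__.py").write_text(textwrap.dedent("""\
    from . import sub   # runs sub.py while demo_pkg is only partially built
    VERSION = "1.0"     # defined too late: sub.py asks for it above
"""))
(pkg / "sub.py").write_text("from demo_pkg import VERSION\n")

sys.path.insert(0, str(root))
err_msg = ""
try:
    import demo_pkg  # noqa: F401
except ImportError as exc:
    err_msg = str(exc)
print(err_msg)  # cannot import name 'VERSION' from partially initialized module 'demo_pkg' ...
```

In the layout above, the same failure appears if anything pulled in via the `from ..utils import *` chain ends up importing `clairvoyance` again before its `__init__.py` has finished, or if a stale installed copy of `clairvoyance` in site-packages shadows the source tree. Importing the module (`from .. import utils`) and accessing `utils.f` at call time, instead of `from ..utils import *`, defers name resolution and is a common way to break such cycles.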
<python><module><package><importerror><circular-dependency>
2024-06-17 22:04:05
0
30,977
O.rka
78,634,783
6,440,589
"requests.exceptions.HTTPError: 400 Client Error: BadRequest for url" when querying an Azure Data Explorer table
<p>I am running a <strong>Python</strong> script from the <strong>Azure Machine Learning</strong> (AML) environment. This script queries data from an <strong>Azure Data Explorer</strong> (ADX) table, using the <strong>Kusto Query Language</strong> (KQL).</p> <p>Here is an example KQL query:</p> <pre><code>QUERY = &quot;my_adx_table | where relative_timestamp &gt;= 1896 and relative_timestamp &lt;= 2396 and my_file_id == 640&quot; </code></pre> <p>Most of the time, this query works as expected, but for a handful of examples, AML returns the following error:</p> <pre><code>requests.exceptions.HTTPError: 400 Client Error: BadRequest for url: https://myclustername.myclusterregion.kusto.windows.net/v2/rest/query </code></pre> <p>According to <a href="https://stackoverflow.com/a/19671511/6440589">this SO answer</a>, <em>&quot;a 400 means that the request was malformed. In other words, the data stream sent by the client to the server didn't follow the rules</em>.&quot; I therefore assumed that I was dealing a data-related issue.</p> <p>However when trying to reproduce this error locally, I noticed that:</p> <ul> <li>running the aforementioned query directly in the ADX query pane succeeds</li> <li>calling the query from a Python script executed on my local computer succeeds as well (see code below).</li> </ul> <p><strong>Why am I getting this 400 Client Error when running the code from the AML environment?</strong></p> <hr /> <p><strong>APPENDIX: Example code to run the KQL query from a local computer:</strong></p> <pre><code>from azure.kusto.data import KustoClient, KustoConnectionStringBuilder from azure.kusto.data.helpers import dataframe_from_result_table from azure.kusto.data.exceptions import KustoServiceError QUERY = &quot;my_adx_table | where relative_timestamp &gt;= 1896 and relative_timestamp &lt;= 2396 and my_file_id == 640&quot; print(&quot;QUERY =&quot;,QUERY) adxconn = { &quot;cluster&quot;:&quot;https://myclustername.myclusterregion.kusto.windows.net&quot;, 
&quot;client_id&quot;:&quot;XXX&quot;, &quot;client_secret&quot;:&quot;YYY&quot;, &quot;authority_id&quot;:&quot;ZZZ&quot;, &quot;kusto_db&quot;:&quot;mydbname&quot;, &quot;kusto_ingest_uri&quot;: &quot;https://ingest-myclustername.myclusterregion.kusto.windows.net&quot; } kcsb = KustoConnectionStringBuilder.with_aad_application_key_authentication(adxconn['cluster'], adxconn['client_id'], adxconn['client_secret'], adxconn['authority_id']) client = KustoClient(kcsb) RESPONSE = client.execute_query(adxconn['kusto_db'], QUERY) print('response',RESPONSE) df = dataframe_from_result_table(RESPONSE.primary_results[0]) </code></pre> <p>This sample code returns a pandas dataframe containing the desired data.</p>
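One debugging aid for readers hitting this: the bare `400 Client Error` from `requests` hides the response body, which for Kusto usually carries the real failure reason in an `{"error": {...}}` envelope (that shape is an assumption based on the common Azure error format). A sketch that surfaces it; call it with `exc.response.text` inside the `except` block:

```python
import json

def kusto_error_detail(body: str) -> str:
    """Extract a readable message from a Kusto REST error body, if present."""
    try:
        payload = json.loads(body)
    except (json.JSONDecodeError, TypeError):
        return body or "<empty response body>"  # not JSON: return raw text
    err = payload.get("error", {})
    parts = [err.get("code"), err.get("message")]
    inner = err.get("innererror") or {}
    parts.append(inner.get("message"))
    detail = ": ".join(p for p in parts if p)
    return detail or body

# Example with a made-up error envelope:
sample = '{"error": {"code": "BadRequest", "message": "Syntax error", "innererror": {"message": "Query could not be parsed"}}}'
print(kusto_error_detail(sample))  # BadRequest: Syntax error: Query could not be parsed
```

In practice the surfaced detail often reveals a permissions or query-encoding difference between the two environments rather than a data problem.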
<python><azure><kql><azure-data-explorer><azure-machine-learning-service>
2024-06-17 21:47:30
1
4,770
Sheldon
78,634,762
6,760,934
Tooling with Langchain Bedrock for RAG AI-Chat Generation
<p>I have a function that takes in a Language Model, a vector store, a question, and tools, and returns a response. At the moment the tools argument is not being used, because per this <a href="https://python.langchain.com/v0.1/docs/modules/model_io/chat/function_calling/" rel="nofollow noreferrer">example</a> the function <code>.bind_tools</code> is not an attribute of the <code>llm</code> (the <code>llm</code> is defined below):</p> <pre><code>## Bedrock Client bedrock_client = boto3.client(service_name=&quot;bedrock-runtime&quot;, region_name=&quot;us-west-2&quot;) bedrock_embeddings = BedrockEmbeddings(model_id=&quot;amazon.titan-embed-text-v1&quot;, client=bedrock_client) llm=Bedrock(model_id=&quot;anthropic.claude-v2:1&quot;, client=bedrock_client, model_kwargs={'max_tokens_to_sample': 512}) </code></pre> <p>Without changing the LLM to <code>ChatOpenAI</code> as in the referenced example, how do I bind a tool to LangChain Bedrock?</p> <p>I have also tried <a href="https://python.langchain.com/v0.1/docs/use_cases/tool_use/prompting/" rel="nofollow noreferrer">tools rendering</a>, but it is not working. Below is my main get-response function:</p> <pre><code>def get_response(llm, vectorstore, question, tools ): ## create prompt / template this helps to guide the AI on what to look out for and how to answer prompt_template = &quot;&quot;&quot; System: You are a helpful ai bot, your name is Alex, you are to provide information to humans based on faq and user information, in the user information provided you are to extract the users' firstName and lastName from the json payload and recognize that as the persons name. use the currencyVerificationData to determine the number of currency accounts that the user has and if they are approved if the status is VALID, other statuses will indicate that the user is not yet approved and needs to provide more information for validation. 
use bankFilledData as the users beneficiaries, from that section of the payload you would be able to extract the beneficiaries bankName, bankAccountNumber; use accountDetails as information for bank account detail information; Human: Please use the given context to provide concise answer to the question If you don't know the answer, just say that you don't know, don't try to make up an answer. If you need clarity, ask more questions, do not refer to the json payload when answering questions just use the values you retrieve from the payload to answer &lt;context&gt; {context} &lt;/context&gt; The way you use the information is to identify users name and use it in response Question: {question} Assistant:&quot;&quot;&quot; # llm.bind_tools(tools) // not working, python error attribute not found PROMPT = PromptTemplate( template=prompt_template, input_variables=[&quot;context&quot;, &quot;question&quot;, &quot;user_information&quot;] ) qa = RetrievalQA.from_chain_type( llm=llm, chain_type=&quot;stuff&quot;, retriever=vectorstore.as_retriever( search_type=&quot;similarity&quot;, search_kwargs={&quot;k&quot;: 5} ), return_source_documents=True, chain_type_kwargs={&quot;prompt&quot;: PROMPT} ) answer=qa({&quot;query&quot;:question}) return answer['result'] </code></pre> <p>Finally what I wish to achieve is just a way to call functions based on input from said user</p>
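Since the completion-style `Bedrock` wrapper used here is an LLM class rather than a chat model, it has no `.bind_tools`; a fallback in the spirit of the linked tools-rendering guide is to describe the tools inside the prompt and parse a structured call back out of the completion text. The sketch below is framework-free and every name in it is illustrative:

```python
import json
import re

def render_tools(tools):
    """Render (name, description) pairs into a prompt section."""
    return "\n".join(f"- {name}: {desc}" for name, desc in tools)

def parse_tool_call(completion: str):
    """Find the first {...} JSON object in the model's reply and read a
    {"tool": ..., "args": {...}} call out of it; None if there isn't one."""
    match = re.search(r"\{.*\}", completion, re.DOTALL)
    if not match:
        return None
    try:
        call = json.loads(match.group(0))
    except json.JSONDecodeError:
        return None
    if isinstance(call, dict) and "tool" in call:
        return call["tool"], call.get("args", {})
    return None

reply = 'I should look up the user. {"tool": "get_beneficiaries", "args": {"user_id": 42}}'
print(parse_tool_call(reply))  # ('get_beneficiaries', {'user_id': 42})
```

The rendered tool list goes into the System section of the prompt, together with an instruction to reply with that JSON shape whenever a tool is needed; the calling loop then dispatches to the named function and feeds its result back into the next prompt.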
<python><artificial-intelligence><langchain><retrieval-augmented-generation><amazon-bedrock>
2024-06-17 21:39:05
1
670
DaviesTobi alex
78,634,687
3,177,186
Using Python's winreg, how can I make changes to the Windows Registry that I can confirm worked in Regedit?
<p>My python code is below. Online guides and Stack Overflow posts suggest this should work, but it doesn't seem to. When I change the registry, it does a new get to confirm the change was made properly and, according to that result, it shows that the value IS changing. Additionally, if I kill the Flask server and restart it, the page shows the change as well</p> <p>But I can't confirm the changes in the actual registry. When I check <code>Computer\HKEY_CURRENT_USER\Software\Microsoft\Windows\CurrentVersion\ContentDeliveryManager</code> the <code>SubscribedContent-310093Enabled</code> value never changes (yes, I'm pressing f5 to reload... even tried closing and reopening Regedit).</p> <p>The code thinks it changed something though. If I click the button to toggle the setting, kill the flask server, and re-run, it shows the opposite setting (weirdly I have to reload the actual flask server and can't just reload the webpage).</p> <p>What am I doing wrong?</p> <pre class="lang-py prettyprint-override"><code>import winreg key_path = r&quot;SOFTWARE\Microsoft\Windows\CurrentVersion\ContentDeliveryManager&quot; value_name = &quot;SubscribedContent-310093Enabled&quot; broke_val = 1 fixed_val = 0 #fix the setting or no? 
Change between runs to see if the registry changes fix_it = 0 try: # Open the registry key for reading and writing key = winreg.OpenKey(winreg.HKEY_CURRENT_USER, key_path, 0, winreg.KEY_READ | winreg.KEY_WRITE) except FileNotFoundError: # Handle if the key doesn't exist key = winreg.CreateKey(winreg.HKEY_CURRENT_USER, key_path) # Get the current value of the registry key try: current_value, _ = winreg.QueryValueEx(key, value_name) current_value = int(current_value) # Convert value to integer except FileNotFoundError: current_value = None # No previous value if the value doesn't exist note = None try: change_to = fixed_val if (fix_it == 1) else broke_val if change_to == current_value: note = &quot;no change&quot; else: # Set the new value for the registry key winreg.SetValueEx(key, value_name, 0, winreg.REG_DWORD, change_to) # Another GET just to be sure the change worked current_value, _ = winreg.QueryValueEx(key, value_name) current_value = int(current_value) # Convert value to integer if current_value != change_to: note = &quot;change failed!&quot; except Exception as e: # If the update fails, set the previous and current values to be the same print(f&quot;Error setting registry value: {e}&quot;) note = f&quot;Error setting registry value: {e}&quot; winreg.CloseKey(key) print({ &quot;current&quot;: current_value, &quot;fixed_val&quot;: fixed_val, &quot;broke_val&quot;: broke_val, &quot;is_fixed&quot;: (current_value == fixed_val), &quot;note&quot;: note }) </code></pre>
<python><registry><winreg>
2024-06-17 21:10:02
0
2,198
not_a_generic_user
78,634,536
6,227,035
Filtering dictionary elements by few initial values
<p>I have the following data structure:</p> <pre><code>Clients= { &quot;data&quot;: [ { &quot;nClients&quot;: 3 }, { &quot;name&quot;: &quot;Mark&quot;, &quot;roll_no&quot;: 1, &quot;branch&quot;: &quot;c&quot; }, { &quot;name&quot;: &quot;Cris&quot;, &quot;roll_no&quot;: 3, &quot;branch&quot;: &quot;it3&quot; }, { &quot;name&quot;: &quot;Mark&quot;, &quot;roll_no&quot;: 2, &quot;branch&quot;: &quot;it2&quot; } ] } </code></pre> <p>I am trying to figure out a function that filters out the names given few initial letters, for example <code>myFunction('Ma')</code> that would give:</p> <pre><code>{ &quot;name&quot;: &quot;Mark&quot;, &quot;roll_no&quot;: 1, &quot;branch&quot;: &quot;c&quot; }, { &quot;name&quot;: &quot;Mark&quot;, &quot;roll_no&quot;: 2, &quot;branch&quot;: &quot;it2&quot; } </code></pre> <p>I am trying this kind of syntax: <code>[client for client in Clients['data'] if 'name' in client and Clients['name'].startswith('Ma')]</code>. However, I get the following error: <code>KeyError: 'name'</code>.</p> <p>What am I doing wrong?</p> <p>Thank you!</p>
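The `KeyError` in the attempt above comes from looking up `Clients['name']` (the outer dict) where `client['name']` (the loop variable) was meant; using `.get` also guards entries such as `{"nClients": 3}` that have no `name` key. A corrected sketch:

```python
Clients = {
    "data": [
        {"nClients": 3},
        {"name": "Mark", "roll_no": 1, "branch": "c"},
        {"name": "Cris", "roll_no": 3, "branch": "it3"},
        {"name": "Mark", "roll_no": 2, "branch": "it2"},
    ]
}

def my_function(prefix):
    # Use the loop variable, and .get() so name-less entries are skipped
    return [c for c in Clients["data"] if c.get("name", "").startswith(prefix)]

print(my_function("Ma"))  # the two Mark entries
```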
<python><dictionary><filter><startswith>
2024-06-17 20:22:45
2
1,974
Sim81
78,634,505
6,631,639
Create regex-like (run length encoding) of string s for blocks of a given length k
<p>I am looking for python code to perform a run length encoding to obtain a regex-like summary of a string s, for a known length k for the blocks. How should I tackle this?</p> <p>e.g.</p> <pre><code>s=TATTTTATTTTATTTTATGTTATGTTATGTTATGTTATGTTATGTTATGTTATGTTATGTTACATTATTTTA </code></pre> <p>with k=5 could become</p> <pre><code>(TATTT)3(TATGT)9TACATTATTTTA </code></pre>
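One way to tackle this, sketched rather than optimal: greedily count how many times the k-long block at the current position repeats back-to-back; emit `(block)count` when it repeats, otherwise slide forward one character so that a repeat starting slightly later is still found. This reproduces the example output but, being greedy, is not guaranteed to give the shortest possible encoding:

```python
def block_rle(s: str, k: int) -> str:
    """Run-length encode s in blocks of length k, regex-style."""
    out = []
    i = 0
    while i + k <= len(s):
        block = s[i:i + k]
        count = 1
        while s[i + count * k:i + (count + 1) * k] == block:
            count += 1
        if count > 1:
            out.append(f"({block}){count}")
            i += count * k
        else:
            out.append(s[i])  # no repeat here: emit one char and slide
            i += 1
    out.append(s[i:])  # tail shorter than k
    return "".join(out)

s = "TATTTTATTTTATTTTATGTTATGTTATGTTATGTTATGTTATGTTATGTTATGTTATGTTACATTATTTTA"
print(block_rle(s, 5))  # (TATTT)3(TATGT)9TACATTATTTTA
```

The scan is O(n·k); for genomic-scale strings a suffix-based approach would be faster, but this matches the stated example directly.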
<python><string><bioinformatics><encode><rle>
2024-06-17 20:12:39
4
527
Wouter De Coster
78,634,282
1,911,652
Chainlit: How to know when selection in settings changes
<p>I am using the latest version of Chainlit. The Python code below correctly prints which option is chosen initially by default. It also displays all available options in the given list when the setting is clicked. But if I change the option, it never runs <code>on_option_change</code>.</p> <p>How can I handle it when the user changes an option in the settings?</p> <pre><code># To run: # chainlit run chainlit_trials.py --port 9503 # import chainlit as cl from chainlit.input_widget import Select @cl.on_chat_start async def initialize(): print(f&quot;Started. Carrying initializations.&quot;) settings = await cl.ChatSettings( [ Select( id=&quot;Options&quot;, label=&quot;Available Options&quot;, values=[&quot;option1&quot;, &quot;option2&quot;, &quot;option3&quot;], initial_index=0, on_change= on_option_change ) ] ).send() option = settings[&quot;Options&quot;] print(f&quot;Option chosen: {option}&quot;) # This function will be called whenever the user changes the database selection async def on_option_change(value): print(f&quot;Selected option: {value}&quot;) </code></pre>
<python><python-3.x><streamlit><chainlit>
2024-06-17 19:05:13
1
4,558
Atul
78,634,246
11,557,264
AWS Redshift parallel query issue in Glue script
<p>I have created a Glue script which is supposed to read data from Redshift. The code works perfectly without hash partitions, but as soon as I try to run parallel queries it throws an error like this:</p> <pre><code>Caused by: com.amazon.redshift.util.RedshiftException: ERROR: syntax error at or near &quot;)&quot; </code></pre> <p>The full error is huge; apart from this line, it ultimately points me towards <code>create_dynamic_frame.from_options</code>.</p> <p>Here's the snippet in question:</p> <pre class="lang-py prettyprint-override"><code> query = f&quot;&quot;&quot; SELECT DISTINCT id as id_list FROM test_db.public.test_table WHERE timestamp BETWEEN {start} AND {end} AND&quot;&quot;&quot; # Get list of ids from Redshift redshift_data_frame = glueContext.create_dynamic_frame.from_options( connection_type=&quot;redshift&quot;, connection_options={ &quot;sampleQuery&quot;: query, &quot;redshiftTmpDir&quot;: args[&quot;TempDir&quot;], &quot;useConnectionProperties&quot;: &quot;true&quot;, &quot;connectionName&quot;: &quot;redshift-conn&quot;, &quot;enablePartitioningForSampleQuery&quot;: True, &quot;hashfield&quot;: &quot;day&quot;, &quot;hashpartitions&quot;: str(end_date), # Last day of month &quot;aws_iam_role&quot;: args[&quot;AWS_IAM_ROLE&quot;], }, additional_options={&quot;aws_iam_role&quot;: args[&quot;AWS_IAM_ROLE&quot;]}, transformation_ctx=&quot;redshift_data_frame&quot;, ) </code></pre> <p>The same parallel queries work correctly in my other script which connects to MySQL via JDBC:</p> <pre class="lang-py prettyprint-override"><code> query = f&quot;&quot;&quot; SELECT user_id FROM test_table WHERE id='{id}' AND timestamp BETWEEN {start} AND {end} AND&quot;&quot;&quot; db_data_frame = glueContext.create_dynamic_frame.from_options( connection_type=&quot;mysql&quot;, connection_options={ &quot;url&quot;: jdbc_url, &quot;user&quot;: &quot;username&quot;, &quot;password&quot;: &quot;pass&quot;, &quot;dbtable&quot;: &quot;test_db&quot;, 
&quot;sampleQuery&quot;: query, &quot;enablePartitioningForSampleQuery&quot;: True, &quot;hashfield&quot;: &quot;day&quot;, &quot;hashpartitions&quot;: str(end_date), #last day of month &quot;redshiftTmpDir&quot;: args[&quot;TempDir&quot;], }, transformation_ctx=&quot;db_data_frame&quot;, ) </code></pre> <p>I am not that experienced with Glue and PySpark; I only jumped into all this a couple of months ago, so I feel there's something small that I might have missed, although I have been following the docs: <a href="https://docs.aws.amazon.com/glue/latest/dg/run-jdbc-parallel-read-job.html" rel="nofollow noreferrer">https://docs.aws.amazon.com/glue/latest/dg/run-jdbc-parallel-read-job.html</a>.</p> <p>Any help would be appreciated!</p>
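For what it's worth, a workaround sketch under the assumption (suggested by the JDBC parallel-read docs) that Glue parallelizes by appending a `MOD(hashfield, hashpartitions)`-style predicate after the trailing `AND`, and that the Redshift path is what mangles that appended predicate: generate the per-partition predicates yourself, run one read per predicate without `enablePartitioningForSampleQuery`, and union the resulting frames. Everything here is illustrative, not the connector's actual internals:

```python
def partition_predicates(hashfield: str, num_partitions: int):
    """One WHERE fragment per partition, mimicking a hash-based split."""
    return [f"MOD({hashfield}, {num_partitions}) = {i}" for i in range(num_partitions)]

base_query = (
    "SELECT DISTINCT id AS id_list FROM test_db.public.test_table "
    "WHERE timestamp BETWEEN 100 AND 200 AND"
)
queries = [f"{base_query} {pred}" for pred in partition_predicates("day", 4)]
for q in queries:
    print(q)
```

Each generated query can then be passed as `sampleQuery` in its own `create_dynamic_frame.from_options` call and the frames merged afterwards; this also makes the final SQL visible, which helps pin down exactly what Redshift is rejecting near the `)`.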
<python><pyspark><amazon-redshift><aws-glue>
2024-06-17 18:55:56
0
450
Shardul Birje
78,634,235
9,357,484
numpy.dtype size changed, may indicate binary incompatibility. Expected 96 from C header, got 88 from PyObject
<p>I want to call my Python module from MATLAB. I received this error:</p> <pre><code>Error using numpy_ops&gt;init thinc.backends.numpy_ops </code></pre> <p>Python Error:</p> <pre><code> ValueError: numpy.dtype size changed, may indicate binary incompatibility. Expected 96 from C header, got 88 from PyObject. </code></pre> <p>The Python script is as follows:</p> <pre><code>import spacy def text_recognizer(model_path, text): try: # Load the trained model nlp = spacy.load(model_path) print(&quot;Model loaded successfully.&quot;) # Process the given text doc = nlp(text) ent_labels = [(ent.text, ent.label_) for ent in doc.ents] return ent_labels </code></pre> <p>The MATLAB script is as follows:</p> <pre><code>% Set up the Python environment pe = pyenv; py.importlib.import_module('final_output'); % Add the directory containing the Python script to the Python path path_add = fileparts(which('final_output.py')); if count(py.sys.path, path_add) == 0 insert(py.sys.path, int64(0), path_add); end % Define model path and text to process model_path = 'D:\trained_model\\output\\model-best'; text = 'Roses are red'; % Call the Python function pyOut = py.final_output.text_recognizer(model_path, text); % Convert the output to a MATLAB cell array entity_labels = cell(pyOut); disp(entity_labels); </code></pre> <p>I found one solution, updating NumPy, which I did, but nothing changed. I am using Python 3.9 and NumPy version 2.0.0.</p> <p>The error was received when I tried to call the Python module using a MATLAB script.</p> <p>How can I fix the issue?</p>
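This particular `ValueError` is the classic symptom of compiled extension modules (here `thinc`, pulled in by spaCy) having been built against the NumPy 1.x ABI while NumPy 2.0 is installed, so upgrading NumPy further makes it worse, not better. The usual fix is to pin `numpy<2` in the environment that MATLAB's `pyenv` points at (`pip install "numpy<2"`), or to upgrade spaCy/thinc to builds compatible with NumPy 2, then restart MATLAB so the interpreter reloads. A trivial check on the version string:

```python
def is_numpy2(version: str) -> bool:
    """True when the NumPy major version is 2 or later (the new ABI)."""
    return int(version.split(".")[0]) >= 2

# The question reports NumPy 2.0.0 alongside wheels compiled for 1.x:
print(is_numpy2("2.0.0"))   # True  -> pin with: pip install "numpy<2"
print(is_numpy2("1.26.4"))  # False -> matches older compiled wheels
```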
<python><numpy><matlab><spacy>
2024-06-17 18:52:57
8
3,446
Encipher
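An editorial note on the question above: this exact "dtype size changed" message is the signature of a binary ABI mismatch, where a compiled extension (here thinc, spacy's backend) was built against a different NumPy major version than the one MATLAB's `pyenv` environment loads; a commonly reported fix is pinning NumPy below 2.0 (for instance `python -m pip install "numpy<2"` followed by reinstalling spacy) in that same environment. The sketch below only shows how one might confirm, from Python, which NumPy the interpreter actually resolves; the diagnosis is an inference, not stated in the original question.

```python
import numpy as np

def numpy_major_version():
    """Return NumPy's major version as an int (e.g. 1 or 2)."""
    return int(np.__version__.split(".")[0])

# Compiled extensions such as thinc must be built against the same
# NumPy ABI generation that is importable at runtime; checking the
# version from inside the interpreter MATLAB uses rules out a
# multiple-environment mix-up.
print("NumPy version seen by this interpreter:", np.__version__)
```

Running this via `py.importlib` from MATLAB would show whether the `pyenv` interpreter sees the same NumPy as the shell used for `pip install`.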
78,634,163
1,796,858
How to download a python whl file requiring a different version of python to the one available on the system
<p>I am trying to download a whl file. The only problem is that there's a version constraint on the install, and this constraint also applies to the download. This question is about how, from the client side, I can get around this constraint so I can download the whl file.</p> <p>In this project's pyproject.toml file the author has specified the <code>requires-python = &quot;&gt;=3.10&quot;</code> constraint. I don't have write access to this repository, however I can read it, and asking the person to change this parameter is impossible, as they are uncooperative.</p> <p>Now, I want to download, but not install, the whl file onto my dev system with Python 3.8 installed. Installing and making Python 3.10 the default Python on this system isn't an option. How do I tell pip to ignore the version requirement so I can download the whl file, without having to install a 3.10 virtual environment just to download the file each time?</p> <p>If I download it directly I get this error:</p> <pre><code>...# pip download XXXX==0.2.18 Looking in indexes: ... Collecting XXXX==0.2.18 Downloading .../XXXX.whl (50 kB) |████████████████████████████████| 50 kB 2.0 MB/s Saved ./XXXX.whl ERROR: Package 'XXXX' requires a different Python: 3.8.10 not in '&gt;=3.10' </code></pre> <p>The file that was downloaded is incomplete and not the whl package I need, which should be ~500KB or so.</p> <p>If I try to ignore the version number using the <code>--ignore-requires-python</code> parameter it gives me an error; apparently this parameter is only available for <code>pip install</code> but not <code>pip download</code>, e.g.:</p> <pre><code>...# pip download --ignore-requires-python XXXX==0.2.18 Usage: pip download [options] &lt;requirement specifier&gt; [package-index-options] ... pip download [options] -r &lt;requirements file&gt; [package-index-options] ... pip download [options] &lt;vcs project url&gt; ... pip download [options] &lt;local project path&gt; ... pip download [options] &lt;archive url/path&gt; ... no such option: --ignore-requires-python </code></pre> <p>Looking around further there's an option for specifying the <code>--python-version</code>, however it opens up a bunch of error messages:</p> <pre><code>...# pip download --python-version 3.10 XXXX==0.2.18 ERROR: When restricting platform and interpreter constraints using --python-version, --platform, --abi, or --implementation, either --no-deps must be set, or --only-binary=:all: must be set and --no-binary must not be set (or must be set to :none:). </code></pre> <p>I can't find any meaningful information on how to use this option, presumably because the assumption is that people might try to install using it, and the download use case hasn't been adequately considered by the pip developers.</p> <p>I have tried:</p> <pre><code>...# pip download --python-version 3.10 --no-deps XXXX==0.2.18 Looking in indexes: https://pypi.org/simple, ... Collecting XXXX==0.2.18 File was already downloaded /.../XXXX.whl Successfully downloaded XXXX </code></pre> <p>However this file is the incomplete file, only 52KB in size, and is unusable. Similarly I have tried <code>--only-binary=:all:</code> with no luck.</p> <p>How can I download the whl file on a Python 3.8 system? Is it possible?</p>
<python><pip>
2024-06-17 18:34:27
1
1,581
Owl
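One workaround sketch for the question above (not from the original post): bypass pip's resolver entirely and read the wheel's download URL from PyPI's JSON API (`https://pypi.org/pypi/<name>/<version>/json`), which does not enforce `requires-python` on a plain HTTP download. The helper below only selects a wheel URL from the `urls` list of that response; the field names match the documented response shape, but the filename-tag matching heuristic is an assumption.

```python
def pick_wheel_url(release_files, py_tag="py3-none-any"):
    """Pick a wheel download URL from the 'urls' list of a PyPI
    /pypi/<name>/<version>/json response.

    release_files: list of dicts with 'packagetype', 'filename', 'url'.
    py_tag: substring of the wheel filename to match (heuristic; use
    e.g. 'cp310' for CPython-3.10-specific wheels).
    """
    for f in release_files:
        if f["packagetype"] == "bdist_wheel" and py_tag in f["filename"]:
            return f["url"]
    return None

# Once a URL is picked, urllib.request.urlretrieve(url, filename)
# fetches the complete .whl regardless of the local Python version.
```

This also sidesteps the cached 52KB partial file, since pip's download cache is never consulted.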
78,633,947
10,327,849
Filter DataFrame events not in time windows DataFrame
<p>I have a DataFrame of events (Event Name - Time) and a DataFrame of time windows (Start Time - End Time). I want to get a DataFrame containing only the events not in <strong>any</strong> of the time windows. I am looking for a &quot;pythonic&quot; way to filter the DataFrame.</p> <p>Example: Events DataFrame:</p> <div class="s-table-container"><table class="s-table"> <thead> <tr> <th>Event Name</th> <th>Event Time</th> </tr> </thead> <tbody> <tr> <td>Event1</td> <td>02/01/2000 00:00:00</td> </tr> <tr> <td>Event2</td> <td>05/01/2000 10:00:00</td> </tr> <tr> <td>Event3</td> <td>07/01/2000 09:00:00</td> </tr> <tr> <td>Event4</td> <td>10/01/2000 02:00:00</td> </tr> </tbody> </table></div> <p>Time Windows DataFrame:</p> <div class="s-table-container"><table class="s-table"> <thead> <tr> <th>Time Window Name</th> <th>Start Time</th> <th>End Time</th> </tr> </thead> <tbody> <tr> <td>Window1</td> <td>01/01/2000 00:00:00</td> <td>06/01/2000 00:00:00</td> </tr> <tr> <td>Window2</td> <td>10/01/2000 01:00:00</td> <td>10/01/2000 04:00:00</td> </tr> </tbody> </table></div> <p>Result: Filtered Events DataFrame:</p> <div class="s-table-container"><table class="s-table"> <thead> <tr> <th>Event Name</th> <th>Event Time</th> </tr> </thead> <tbody> <tr> <td>Event3</td> <td>07/01/2000 09:00:00</td> </tr> </tbody> </table></div> <p>Setup:</p> <pre><code>import pandas as pd events_data = { 'Event Name': ['Event1', 'Event2', 'Event3', 'Event4'], 'Event Time': ['02/01/2000 00:00:00', '05/01/2000 10:00:00', '07/01/2000 09:00:00', '10/01/2000 02:00:00'] } time_windows_data = { 'Time Window Name': ['Window1', 'Window2'], 'Start Time': ['01/01/2000 00:00:00', '10/01/2000 01:00:00'], 'End Time': ['06/01/2000 00:00:00', '10/01/2000 04:00:00'] } events_df = pd.DataFrame(events_data) time_windows_df = pd.DataFrame(time_windows_data) events_df['Event Time'] = pd.to_datetime(events_df['Event Time'], format='%d/%m/%Y %H:%M:%S') time_windows_df['Start Time'] = pd.to_datetime(time_windows_df['Start 
Time'], format='%d/%m/%Y %H:%M:%S') time_windows_df['End Time'] = pd.to_datetime(time_windows_df['End Time'], format='%d/%m/%Y %H:%M:%S') </code></pre>
<python><pandas><dataframe><datetime><filter>
2024-06-17 17:33:44
3
301
Yakir Shlezinger
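A sketch of one pandas-only approach to the question above (one possible answer, not from the thread): build a boolean mask marking events that fall inside any window, then keep the complement. For a modest number of windows, looping over windows while staying vectorised per window with `Series.between` is simple and readable.

```python
import pandas as pd

def events_outside_windows(events, windows):
    """Keep rows of `events` whose 'Event Time' is in none of the windows."""
    in_any = pd.Series(False, index=events.index)
    for _, w in windows.iterrows():
        # between() is inclusive on both endpoints by default
        in_any |= events["Event Time"].between(w["Start Time"], w["End Time"])
    return events[~in_any]
```

With the setup from the question this leaves only Event3; for very many windows, an `IntervalIndex` over the windows would avoid the Python-level loop.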
78,633,814
11,627,201
Object actually takes up more memory when I use __slots__ in Python 3?
<p>Here is the code:</p> <pre><code>&gt;&gt;&gt; class A: ... __slots__ = (&quot;a&quot;,) ... def __init__(self): ... self.a = 1 ... &gt;&gt;&gt; class B: ... def __init__(self): ... self.b = 1 ... &gt;&gt;&gt; import pickle as pk &gt;&gt;&gt; len(pk.dumps(A())) 40 &gt;&gt;&gt; len(pk.dumps(B())) 36 &gt;&gt;&gt; a = A() &gt;&gt;&gt; a.b = 2 Traceback (most recent call last): File &quot;&lt;stdin&gt;&quot;, line 1, in &lt;module&gt; AttributeError: 'A' object has no attribute 'b' </code></pre> <p>A has the slots attribute, B does not, but A actually takes up more memory than B?</p>
<python><python-3.x>
2024-06-17 16:53:03
0
798
qwerty_99
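An editorial aside on the question above: `len(pickle.dumps(obj))` measures the serialized byte stream, not memory use. A `__slots__` instance tends to pickle slightly larger because its state is serialized as a `(dict_or_None, slots_dict)` pair, yet in RAM it is smaller, since it carries no per-instance `__dict__` at all. A sketch comparing actual in-memory sizes with `sys.getsizeof`:

```python
import sys

class A:
    __slots__ = ("a",)
    def __init__(self):
        self.a = 1

class B:
    def __init__(self):
        self.b = 1

a, b = A(), B()
# The slotted instance has no __dict__; for B, the dict must be counted
# on top of the bare object for a fair comparison.
size_a = sys.getsizeof(a)
size_b = sys.getsizeof(b) + sys.getsizeof(b.__dict__)
```

On CPython, `size_a` comes out smaller than `size_b`, which is the saving `__slots__` actually provides.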
78,633,770
1,473,517
Can Neumaier summation be sped up?
<p>Neumaier summation is an improvement of Kahan summation for accurately summing arrays of floats.</p> <pre><code>import numba as nb @nb.njit def neumaier_sum(arr): s = arr[0] c = 0.0 for i in range(1, len(arr)): t = s + arr[i] if abs(s) &gt;= abs(arr[i]): c += (s - t) + arr[i] else: c += (arr[i] - t) + s s = t return s + c </code></pre> <p>This works well but it is at least four times slower than it would be were you to add fastmath=True. Unfortunately, fastmath allows rebracketing the sum (associativity) which has the effect of ruining its accuracy so we can't do that.</p> <p>Here is a test to show the results with different ways of summing. First we make an array of length 1001.</p> <pre><code>import numpy as np n = 10 ** 3 + 1 a = np.full(n, 0.01, dtype=np.float64) a[0] = 10**10 a[-1] = -10**10 </code></pre> <p>We <code>import math</code> so we can use <code>fsum</code> to see the correct answer. We also define a version Neumaier using fastmath.</p> <pre><code># Bad version using fastmath @nb.njit(fastmath=True) def neumaier_sum_fm(arr): s = arr[0] c = 0.0 for i in range(1, len(arr)): t = s + arr[i] if abs(s) &gt;= abs(arr[i]): c += (s - t) + arr[i] else: c += (arr[i] - t) + s s = t return s + c </code></pre> <p>The results are:</p> <pre><code>math.fsum : 9.99 nb_neumaier_sum : 9.99 nb_neumaier_sum_fm: 9.99001693725586 </code></pre> <p>These are the timings:</p> <pre><code> %timeit nb_neumaier_sum_fm(a) 350 ns ± 0.983 ns per loop (mean ± std. dev. of 7 runs, 1,000,000 loops each) %timeit nb_neumaier_sum(a) 1.5 µs ± 18.8 ns per loop (mean ± std. dev. of 7 runs, 1,000,000 loops each) </code></pre> <p>Is there any way to speed up this code but have it output exactly what it does now?</p> <hr /> <p>The assembly is as follows. 
The comments were added by chatgpt so please let me know if any of it is wrong.</p> <pre><code> movq 8(%rsp), %rcx # Load the address of the array into rcx movq 16(%rsp), %rax # Load the length of the array into rax vmovsd (%rcx), %xmm1 # Load the first element of the array into xmm1 (s = arr[0]) leaq -1(%rax), %rdx # Calculate (length of array - 1) into rdx testq %rdx, %rdx # Test if rdx is zero jle .LBB0_1 # If rdx &lt;= 0, jump to .LBB0_1 movabsq $.LCPI0_0, %rsi # Load address of constant value (mask for abs) movl $1, %edx # Initialize loop counter (i = 1) vxorpd %xmm0, %xmm0, %xmm0 # Clear xmm0 (c = 0.0) vmovapd %xmm1, %xmm3 # Copy xmm1 (s) to xmm3 vmovapd (%rsi), %xmm2 # Load abs mask into xmm2 .p2align 4, 0x90 .LBB0_3: vmovsd (%rcx,%rdx,8), %xmm4 # Load arr[i] into xmm4 vandpd %xmm2, %xmm3, %xmm5 # Compute abs(s) into xmm5 incq %rdx # Increment rdx (i++) vaddsd %xmm4, %xmm3, %xmm1 # Compute t = s + arr[i] vandpd %xmm2, %xmm4, %xmm6 # Compute abs(arr[i]) into xmm6 vcmpnlesd %xmm5, %xmm6, %xmm5 # Compare abs(s) and abs(arr[i]) vsubsd %xmm1, %xmm3, %xmm6 # Compute (s - t) into xmm6 vaddsd %xmm6, %xmm4, %xmm6 # Compute (s - t) + arr[i] into xmm6 vsubsd %xmm1, %xmm4, %xmm4 # Compute (arr[i] - t) into xmm4 vaddsd %xmm4, %xmm3, %xmm3 # Compute (arr[i] - t) + s into xmm3 vblendvpd %xmm5, %xmm3, %xmm6, %xmm3 # Conditional blend based on the comparison vaddsd %xmm3, %xmm0, %xmm0 # Update c vmovapd %xmm1, %xmm3 # Update s (s = t) cmpq %rdx, %rax # Compare loop counter with array length jne .LBB0_3 # If not equal, loop again vaddsd %xmm1, %xmm0, %xmm0 # Final addition (s + c) xorl %eax, %eax # Clear eax vmovsd %xmm0, (%rdi) # Store the result retq # Return from function </code></pre>
<python><cython><numba><floating-accuracy><pythran>
2024-06-17 16:39:15
1
21,513
Simd
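For readers of the question above who are unfamiliar with why the fastmath rebracketing ruins accuracy, a pure-Python version of the same algorithm (no numba; illustration only, not a speedup) makes the difference against naive summation easy to check:

```python
def neumaier_sum(values):
    """Neumaier (improved Kahan) compensated summation."""
    s = values[0]
    c = 0.0  # running compensation for lost low-order bits
    for x in values[1:]:
        t = s + x
        if abs(s) >= abs(x):
            c += (s - t) + x  # low-order digits of x were lost in t
        else:
            c += (x - t) + s  # low-order digits of s were lost in t
        s = t
    return s + c
```

With `[1e16, 1.0, -1e16]`, naive left-to-right addition returns 0.0 (the 1.0 is absorbed), while the compensated sum recovers 1.0; fastmath may reassociate the `(s - t) + x` terms, which destroys exactly this cancellation trick.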
78,633,756
20,770,190
How to resize an image in Gradio?
<p>I'm looking for an approach to resize an image as a header in Gradio generated UI to be smaller.</p> <p>According to a closed issue on their Github, I followed the following manner:</p> <pre class="lang-py prettyprint-override"><code>import gradio as gr with gr.Blocks() as app: gr.Image(&quot;logo.png&quot;, label=&quot;Top Image&quot;).style(width=600, height=400) app.launch(server_name=&quot;0.0.0.0&quot;, server_port=7860, debug=True) </code></pre> <p>But it raises:</p> <pre><code>AttributeError: 'Markdown' object has no attribute 'style'. Did you mean: 'scale'? </code></pre> <p>I also tried using the <code>Markdown()</code> or <code>HTML()</code> method rather than <code>Image()</code> however, the issue is with this approach it cannot load an image locally.</p> <p>Here's the thing I've done so far:</p> <pre class="lang-py prettyprint-override"><code>import gradio as gr def greet(name): return f&quot;Hello {name}!&quot; # Load your local image image_path = &quot;/file/logo.png&quot; with gr.Blocks() as demo: html_header = f&quot;&quot;&quot; &lt;div style=&quot;text-align: center;&quot;&gt; &lt;img src=&quot;{image_path}&quot; alt=&quot;Header Image&quot; width=&quot;200&quot; height=&quot;100&quot;&gt; &lt;/div&gt; &quot;&quot;&quot; gr.HTML(html_header) name_input = gr.Textbox(label=&quot;Enter your name:&quot;) submit_button = gr.Button(&quot;Submit&quot;) output = gr.Textbox(label=&quot;Greeting:&quot;) submit_button.click(fn=greet, inputs=name_input, outputs=output) demo.launch() </code></pre> <p>I also tried <code>image_path = &quot;/file=logo.png&quot;</code>, <code>image_path = &quot;/file/logo.png&quot;</code>, <code>image_path = &quot;file=logo.png&quot;</code>, and <code>image_path = &quot;./logo.png&quot;</code> routes without any results.</p> <p>I should add that the logo and the .py file are next to each other.</p>
<python><gradio>
2024-06-17 16:33:41
1
301
Benjamin Geoffrey
78,633,516
181,783
Executing non-GUI threads in a separate GIL
<p>My team develops a GTK-Python application that dynamically spawns threads to handle longer running tasks. Recently, we added a feature for requesting some batch computation on demand and this has made the GUI unresponsive. My research and analysis shows that the problem seems to be contention for GIL that starves the GUI thread. The cheapest solution that we're considering is to move our custom threads to another process/GIL so that the threads spawned by the GUI framework run in one process but our <em>dynamically spawned</em> threads run in the other process.</p> <p>I've been reading the <a href="https://docs.python.org/3/library/multiprocessing.html" rel="nofollow noreferrer">multiprocessing docs</a> and nothing jumps out at me that supports such a feature out of the box. Which options should I explore?</p>
<python><multithreading><gil>
2024-06-17 15:38:58
1
5,905
Olumide
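A minimal sketch of the cheapest fix the question above is considering: run the batch computation in worker processes via `concurrent.futures.ProcessPoolExecutor`, so the GUI process's GIL is never contended by CPU-bound work. The batch function here is a placeholder assumption, not the asker's real workload.

```python
from concurrent.futures import ProcessPoolExecutor

def heavy_batch(n):
    # stand-in for the real CPU-bound batch computation
    return sum(i * i for i in range(n))

def run_batch_offloaded(sizes, workers=2):
    # Each worker process has its own interpreter and its own GIL; the
    # GUI process only waits on results (in a real app, poll the futures
    # from an idle callback instead of blocking the main loop).
    with ProcessPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(heavy_batch, sizes))

if __name__ == "__main__":
    print(run_batch_offloaded([10_000, 20_000]))
```

The `if __name__ == "__main__"` guard matters: with spawn-based start methods the worker processes re-import the main module. The trade-off versus shared threads is that arguments and results must be picklable.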
78,633,502
1,775,741
Why does the PID reported in a dmesg log message differ from that returned by os.getpid() in Python?
<p>When debugging an OOM error in a Python multiprocessing script I noticed that a dmesg log will report</p> <pre><code>[Mon Jun 17 15:26:04 2024] Memory cgroup out of memory: Killed process **1079355** (python3) total-vm:5323820kB, anon-rss:5249840kB, file-rss:3392kB, shmem-rss:8kB, UID:0 pgtables:10448kB oom_score_adj:-997 </code></pre> <p>Whereas each of the processes as reported by <code>os.getpid()</code> is in the range 471-475.</p> <p>I'll note that this is being run in a container on a kubernetes pod.</p> <p>Why the difference in process ids?</p>
<python><kubernetes><out-of-memory><rhel8><dmesg>
2024-06-17 15:36:01
0
809
Ian Danforth
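An explanatory sketch for the question above (the diagnosis is an inference, not stated in the post): a container runs in its own PID namespace, so `os.getpid()` returns the namespace-local PID (the 471-475 range), while the kernel's OOM killer logs the host-side PID (1079355). On the host, `/proc/<hostpid>/status` exposes both via the `NSpid:` line, with the outermost namespace's PID leftmost and the process's own view rightmost.

```python
def parse_nspid(status_text):
    """Extract PIDs from the NSpid line of a /proc/<pid>/status dump.

    Returns one PID per namespace level, outermost first; the last entry
    matches what os.getpid() reports inside the innermost namespace
    (i.e. inside the container).
    """
    for line in status_text.splitlines():
        if line.startswith("NSpid:"):
            return [int(p) for p in line.split()[1:]]
    return []
```

Reading `/proc/self/status` from inside the container shows only the inner PID, because the container mounts its own procfs; the mapping is visible only from the host (or an outer namespace).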
78,633,448
391,445
How to mark an *instance* of a Python class as deprecated?
<p>I have inherited the maintenance of some API code. One of the things I need to update is some environment information (think staging, production). The old names were <code>test</code> and <code>production</code>, while the new names are <code>sandbox</code>, <code>simulator</code> and <code>production</code>. The old name <code>test</code> has the same semantics as <code>simulator</code> today.</p> <p>This was implemented using a class, with instances predefined for each environment, something like:</p> <pre><code>class Environment(object): def __init__(self, name, params): self.name = name self.params = params test = Environment('test', ... params ...) production = Environment('production', ... params ...) </code></pre> <p>I can easily define the new names, and to avoid breaking existing code, do</p> <pre><code>test = simulator </code></pre> <p>But I'd like code which uses the old instance <code>test</code> to raise a Python deprecation warning.</p> <p>How can I do it? I can hack it by inserting an <code>if</code> inside the class code and raising a deprecation warning there, but is there a cleaner way?</p>
<python><instance><deprecated><deprecation-warning>
2024-06-17 15:23:23
1
7,809
Colin 't Hart
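Two common patterns fit the question above (sketches with hypothetical names, not the asker's real code): a module-level `__getattr__` (PEP 562) that warns whenever `test` is looked up on the module, or, if the alias must remain a plain attribute, a small proxy that forwards attribute access to the new instance and warns on each use:

```python
import warnings

class _DeprecatedAlias:
    """Forwarding proxy that emits DeprecationWarning on attribute access."""

    def __init__(self, target, old_name, new_name):
        self._target = target
        self._old = old_name
        self._new = new_name

    def __getattr__(self, name):
        # __getattr__ only fires for attributes not found on the proxy
        # itself, so _target/_old/_new lookups don't recurse.
        warnings.warn(
            f"'{self._old}' is deprecated; use '{self._new}' instead",
            DeprecationWarning, stacklevel=2)
        return getattr(self._target, name)

class Environment:
    def __init__(self, name):
        self.name = name

simulator = Environment("simulator")
test = _DeprecatedAlias(simulator, "test", "simulator")
```

Note the proxy fails `isinstance(test, Environment)` checks; module-level `__getattr__` avoids that by warning once and then returning the real `simulator` object.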
78,633,441
238,230
Paketo: Running npm install following Python buildpack
<p>We have added a static build step to our Python app which is triggered by <code>npm build</code> We would like to implement this in our Paketo build so that the built client-side assets are available at runtime.</p> <p>Based on the documentation it appears this should be possible by adding a <code>--post-buildpack</code> argument. However, when we tried adding this e.g.</p> <pre><code>pack build test_img --builder paketobuildpacks/builder-jammy-full --post-buildpack paketo-buildpacks/npm-install@1.1.1 </code></pre> <p>It appears to no longer detect the Python dependencies e.g.</p> <pre><code>6 of 11 buildpacks participating paketo-buildpacks/ca-certificates 3.6.7 paketo-buildpacks/node-engine 3.2.1 paketo-buildpacks/npm-install 1.4.0 paketo-buildpacks/node-run-script 1.0.16 paketo-buildpacks/node-start 1.1.5 paketo-buildpacks/procfile 5.6.8 </code></pre> <p>Therefore the Python dependencies are no longer available in the <code>PATH</code> and the app fails to start up.</p> <p>Is it possible to trigger <code>npm build</code> whilst retaining the Python layers in the buildpack?</p>
<python><node.js><buildpack><paketo>
2024-06-17 15:22:19
0
752
Gids
78,633,275
5,599,687
How can I install extras from a git subdirectory with pip?
<p><code>pip</code> supports VCS installs, as documented in <a href="https://pip.pypa.io/en/stable/topics/vcs-support/" rel="nofollow noreferrer">https://pip.pypa.io/en/stable/topics/vcs-support/</a> (related question: <a href="https://stackoverflow.com/questions/13566200/how-can-i-install-from-a-git-subdirectory-with-pip">How can I install from a git subdirectory with pip?</a>). However, it seems that it's not possible to install providing the extras, i.e. the optional dependencies (as in <a href="https://setuptools.pypa.io/en/latest/userguide/dependency_management.html#optional-dependencies" rel="nofollow noreferrer">https://setuptools.pypa.io/en/latest/userguide/dependency_management.html#optional-dependencies</a>). Is this right? Is there any workaround?</p>
<python><git><pip><version-control>
2024-06-17 14:41:50
2
751
Gabriele
78,633,207
509,868
How to print a list of numpy.float32 values?
<p>I run some simulations whose result is a <code>dict</code> like this:</p> <pre><code>results = { 'this': 5, 'that': 6, 'those': [2.34, 5.67] } </code></pre> <p>I save it in a human-readable format using code like this:</p> <pre><code>s = '' for key, value in results.items(): s += f'{key}: {value}\n' </code></pre> <p>One of my values happens to be a list of <code>numpy.float32</code> values.</p> <p>All was well until I upgraded to numpy 2.0.0. In this version, the code above produces strings like <code>[np.float32(2.34), np.float32(5.67)]</code> instead of <code>[2.34, 5.67]</code> (which I got in numpy 1.26.1).</p> <p>Should I complicate my saving code and use different code for saving values of different types? Or is there some workaround which could produce human-readable output for all types, including those composed of <code>float32</code>?</p> <hr /> <p>Additional examples (in an interactive python environment):</p> <pre> import numpy x = numpy.float32(2.3) x numpy 1 — 2.3 numpy 2 — np.float32(2.3) f'{x}' numpy 1 — '2.299999952316284' numpy 2 — '2.299999952316284' f'{[x, x]}' numpy 1 — '[2.3, 2.3]' numpy 2 — '[np.float32(2.3), np.float32(2.3)]' y={x:x*x} y numpy 1 — {2.3: 5.29} numpy 2 — {np.float32(2.3): np.float32(5.29)} </pre> <p>I can't make sense of it. Is it a bug that I could expect to be fixed, or a feature which is unlikely to change?</p>
<python><numpy><representation><numpy-2.x>
2024-06-17 14:27:14
2
28,630
anatolyg
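One workaround sketch for the question above: the `np.float32(...)` wrapper only appears in the container's `repr` of NumPy scalars, so converting them to native Python types with `.item()` before formatting restores the old-style output under both NumPy 1.x and 2.x. (NumPy 2 also offers `np.printoptions(legacy="1.25")` to restore the pre-2.0 scalar repr globally, but the conversion below keeps the saving code version-independent.)

```python
import numpy as np

def plain(value):
    """Recursively convert numpy scalars inside lists/tuples/dicts to
    native Python types, so their repr has no np.float32(...) wrapper."""
    if isinstance(value, np.generic):
        return value.item()
    if isinstance(value, (list, tuple)):
        return type(value)(plain(v) for v in value)
    if isinstance(value, dict):
        return {plain(k): plain(v) for k, v in value.items()}
    return value

results = {"this": 5, "those": [np.float32(2.5), np.float32(0.25)]}
s = "".join(f"{key}: {plain(value)}\n" for key, value in results.items())
```

Note that `.item()` widens float32 to a Python float, so inexact values like 2.34 will print with their full float64 expansion, exactly as `f'{x}'` already does.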
78,633,199
5,547,553
How to include thousand separator in numbers in polars output?
<p>I'd like to generate a html output of a dataframe with thousand separators in the output.<br> However pl.Config does not seem to do anything:</p> <pre class="lang-py prettyprint-override"><code>import polars as pl df = pl.DataFrame({'prod':['apple','banana','melon'], 'price':[7788, 1122, 4400]}) with pl.Config( thousands_separator=&quot; &quot; ): html = '&lt;html&gt;&lt;body&gt;&lt;table&gt;&lt;tr&gt;&lt;th&gt;Product&lt;/th&gt;&lt;th&gt;Price&lt;/th&gt;&lt;/tr&gt;' html += ''.join(df.with_columns((pl.lit('&lt;tr&gt;&lt;td&gt;')+ pl.col('prod')+ pl.lit('&lt;/td&gt;&lt;td class=&quot;right&quot;&gt;')+ pl.col('price').cast(pl.String)+ pl.lit('&lt;/td&gt;&lt;/tr&gt;') ).alias('x') ) .get_column('x') .to_list() ) html += '&lt;/table&gt;&lt;/body&gt;&lt;/html&gt;' print(html) </code></pre>
<python><html><dataframe><python-polars>
2024-06-17 14:26:00
3
1,174
lmocsi
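A likely explanation and workaround for the question above (a sketch): `pl.Config(thousands_separator=...)` only affects how polars renders its own tables (the DataFrame repr), not values you cast to `String` yourself. For hand-built HTML, format the numbers in Python, for example with the helper below applied via `map_elements` in place of `.cast(pl.String)`:

```python
def fmt_thousands(n, sep=" "):
    """Render an integer with a thousands separator, e.g. 7788 -> '7 788'."""
    return f"{n:,}".replace(",", sep)

# In the snippet from the question, one could then write (assumed usage):
#   pl.col('price').map_elements(fmt_thousands, return_dtype=pl.String)
# instead of pl.col('price').cast(pl.String)
```

`map_elements` runs per-row Python, which is fine for report-sized frames; for large data a pure-expression alternative would be faster but more involved.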
78,633,156
1,231,055
How to use windows_expand_args with single command click program
<p>I have a Python program and am using Click to process the command line. I have a single command program, as in the example below. On Windows I would like to pass in a valid glob expression like <code>*.py</code> to Click 8.x and have the literal string <code>&quot;*.py&quot;</code> show up in the <code>filepattern</code> variable.</p> <p>Click 8.x expands the glob patterns by default (this is different than 7.x). On Bash, if you quote the glob pattern, the shell doesn't do the expansion, and Click passes in the string. On Windows, the shell doesn't do the expansion and passes the string into Python. In this case Click expands them.</p> <p>So the question here is how do I get the string &quot;*.py&quot; passed in as a string on Windows (if there are OR are not any files that match)? I only want to allow 1 argument (so nargs=1). This all works fine in Click 7.x.</p> <p>The following example saved as <code>mycommand.py</code>:</p> <pre><code>import click @click.command(&quot;mycommand&quot;) @click.argument(&quot;filepattern&quot;, nargs=1, type=str) def mycommand(filepattern): print(filepattern) if __name__ == &quot;__main__&quot;: mycommand() </code></pre> <p>If I have a directory full of, say Python files, if I invoke this as <code>python mycommand.py somefile.py</code>, it will succeed as one value gets passed into <code>filepattern</code> and it will echo <code>somefile.py</code>.</p> <p>If I invoke as <code>python mycommand.py *.py</code> it fails with an error like:</p> <pre><code>Usage: mycommand.py [OPTIONS] FILEPATTERN Try 'mycommand.py --help' for help. Error: Got unexpected extra arguments (scratch.py mycommand.py ...) </code></pre> <p>I know there is an argument <a href="https://click.palletsprojects.com/en/8.1.x/api/#click.BaseCommand.main" rel="nofollow noreferrer"><code>windows_expand_args</code></a> for a <code>click.group</code>, but I can't puzzle out how to get it to work for a single command program.</p>
<python><glob><python-click>
2024-06-17 14:15:53
1
3,111
Brad Campbell
78,633,129
1,422,096
Execute a last line of code when the user closes the console window
<p>If we run a Python program in a console on Windows (<code>python myscript.py</code>), is there a possibility to run a last line of code when the user clicks on the &quot;X&quot; top-right button of the console window?</p> <p>I already tried something similar to <a href="https://stackoverflow.com/questions/16590599/executing-a-function-when-the-console-window-closes">Executing a function when the console window closes?</a>:</p> <pre><code>import atexit, time, subprocess def bye(): subprocess.Popen(&quot;notepad.exe&quot;) atexit.register(bye) print('Sleeping now...') time.sleep(10) # click on the top right &quot;X&quot; button of the console window: notepad doesn't start </code></pre> <p>but it doesn't work: <code>notepad</code> doesn't start.</p> <p>PS: not a duplicate of <a href="https://stackoverflow.com/questions/16590599/executing-a-function-when-the-console-window-closes">Executing a function when the console window closes?</a> because my question is specific to Windows which shows a specific behaviour.</p>
<python><windows><atexit>
2024-06-17 14:11:51
1
47,388
Basj
78,632,961
241,552
FactoryBoy: use a factory method on the model instead of __init__
<p>I have a class for which I would like to write a factory using <code>FactoryBoy</code>. However, this class doesn't produce its instances using the <code>__init__</code> method, but rather through a number of factory methods:</p> <pre class="lang-py prettyprint-override"><code>class MyClass: def __init__(self): self.field = None self.is_valid = False @staticmethod def from_str(arg): inst = MyClass() inst.field = arg inst.is_valid = True return inst </code></pre> <p>Is there a way to write a FactoryBoy factory such that it uses this static factory method instead of passing the fields to the <code>MyClass</code> initialiser?</p>
<python><factory-boy>
2024-06-17 13:41:03
1
9,790
Ibolit
78,632,780
4,451,315
How to set pandas or duckdb backend in ibis memtable?
<p>I have a function like this:</p> <pre class="lang-py prettyprint-override"><code>import ibis def my_function(df): t = ibis.memtable(df) t = t.mutate(ibis._['a'] + 1) return t.to_arrow() </code></pre> <p>My question is - if I pass it a pandas dataframe, then which backend is <code>t</code> going to use to do its calculations? The pandas one, or the duckdb one (which the docs say is the default)?</p> <p>Also:</p> <ul> <li>how do I force it to use the pandas backend?</li> <li>how do I force it to use the duckdb backend?</li> </ul> <p>Note that <code>my_function</code> needs to receive a dataframe as input, I can't change that. But inside <code>my_function</code>, I'd like to use Ibis.</p>
<python><pandas><ibis>
2024-06-17 13:01:48
1
11,062
ignoring_gravity
78,632,524
7,395,592
Pretty-print indices/coordinates of 2D Numpy array
<p>Does Numpy provide built-in capabilities to print the indices/coordinates of a 2-d Numpy array at its borders?</p> <p>What I mean is the following: Given, for example, the array</p> <pre class="lang-py prettyprint-override"><code>a = np.arange(6).reshape(2, 3).astype(float) </code></pre> <p>I would like to have a printout as follows:</p> <pre class="lang-py prettyprint-override"><code> 0 1 2 0 0.0 1.0 2.0 1 3.0 4.0 5.0 </code></pre> <p>It needn't be exactly like that, but the corresponding information should be conveyed. That is, I want to have a &quot;header column&quot; that shows the row indices (here: 0, 1) and a &quot;header row&quot; that shows the column indices (here: 0, 1, 2).</p> <p>At present, I simply use Pandas for this purpose (i.e. I convert the Numpy array to a Pandas DataFrame and then print the DataFrame). This is also suggested, for example, in <a href="https://stackoverflow.com/a/32159502/7395592">this answer</a> to a related question. However, this would always require Pandas, even if I don't use it for anything else in a given project.</p> <p>So my question, again, is: can I achieve the same printout only with Numpy (thus without Pandas)? Or, if not, is there a lightweight solution (in the sense of a few lines of code without an additional library) to achieve a similar output?</p>
<python><numpy><numpy-ndarray>
2024-06-17 12:05:06
2
6,750
simon
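NumPy itself has no such border-label option (`np.printoptions` controls number formatting, not index headers), so short of pandas, a small formatter is the usual lightweight answer to the question above. A sketch:

```python
import numpy as np

def with_indices(a, width=8):
    """Render a 2-D array with column indices on top and row indices
    down the left border, pandas-style but with no dependency."""
    header = " " * 4 + "".join(f"{j:>{width}}" for j in range(a.shape[1]))
    lines = [header]
    for i, row in enumerate(a):
        lines.append(f"{i:<4}" + "".join(f"{v:>{width}}" for v in row))
    return "\n".join(lines)

a = np.arange(6).reshape(2, 3).astype(float)
print(with_indices(a))
```

The field width is fixed for simplicity; computing it from the longest formatted element would make the columns self-sizing.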
78,632,369
14,348,996
How do I type hint a function using TypedDict that mutates a dictionary in Python?
<p>I am defining a function that takes a dictionary as an input and does some operations on it that create a new key/value pair and modify one existing value's type.</p> <p>I have a <code>TypedDict</code> for both the input and output of the function:</p> <pre><code>from typing import TypedDict class MyDict(TypedDict): field_1: int field_2: str field_3: float class InputDict(MyDict): field_4: str class OutputDict(MyDict): field_4: bytes field_5: bytes def transform_dict(input: list[InputDict]) -&gt; list[OutputDict]: output = input.copy() for x in output: x[&quot;field_4&quot;] = x[&quot;field_4&quot;].encode() x[&quot;field_5&quot;] = &quot;some_stuff&quot;.encode() return output </code></pre> <p>My linter is complaining about the lines where I do the transformation of <code>field_4</code> and <code>field_5</code>, which makes sense because <code>output</code> is a copy with type <code>list[InputDict]</code>. What is the best / type-safe way to do this? I understand why it's complaining but I can't figure out how to fix it.</p>
<python><python-typing><typeddict>
2024-06-17 11:31:55
1
1,236
henryn
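One type-safe pattern for the question above (a sketch of a possible answer): instead of mutating a shallow copy that is still typed `list[InputDict]`, construct fresh `OutputDict` values, so the checker sees the correct type from the start and no `str` field is ever assigned `bytes`:

```python
from typing import TypedDict

class MyDict(TypedDict):
    field_1: int
    field_2: str
    field_3: float

class InputDict(MyDict):
    field_4: str

class OutputDict(MyDict):
    field_4: bytes
    field_5: bytes

def transform_dict(items: list[InputDict]) -> list[OutputDict]:
    # Build new OutputDict values rather than mutating InputDict copies.
    return [
        OutputDict(
            field_1=x["field_1"],
            field_2=x["field_2"],
            field_3=x["field_3"],
            field_4=x["field_4"].encode(),
            field_5=b"some_stuff",
        )
        for x in items
    ]
```

Spelling out the unchanged fields is the price of the explicit construction; it also avoids the shallow-copy pitfall of the original, where mutating the copied dicts mutates the caller's dicts too.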
78,632,352
6,386,056
Plotly px.Timeline y marks do not adjust when using facet_row
<p>With a simple dataFrame that look like this</p> <p><a href="https://i.sstatic.net/AJFmYi18.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/AJFmYi18.png" alt="df" /></a></p> <p>The following code</p> <pre><code>fig = px.timeline(df, x_start=&quot;Target Date Start&quot;, x_end=&quot;Target Date End&quot;, y=&quot;Initiative&quot;, color=&quot;Status Bin&quot;, facet_row=&quot;Project Key&quot;, template=&quot;plotly_white&quot;) </code></pre> <p>Generates a graph that does not adjust based on facet row. I would expect only initiatives associated with a given project key to show in the y marks, but instead all initiatives are shown: <a href="https://i.sstatic.net/KPV8nIeG.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/KPV8nIeG.png" alt="enter image description here" /></a></p> <p>Note that it shows only the relevant bars but it shows all y marks as opposed to filter for relevant initiatives and also it keeps the size the same across facet_rows whereas I would expect their size to be proportional to the number of initiatives in their resp. group</p> <p>EDIT: r-beginners suggested fix</p> <p><a href="https://i.sstatic.net/AJdnzuy8.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/AJdnzuy8.png" alt="enter image description here" /></a></p>
<python><plotly><timeline><gantt-chart><plotly-express>
2024-06-17 11:28:22
1
667
Mth Clv
78,632,287
11,901,834
Only get subsection of Exception
<p>In Python, I'm persisting some errors to a database.</p> <p>I want to be able to group these errors by their error details.</p> <p>However, currently, I am persisting the exception itself as follows:</p> <pre><code>try: # logic except Exception as e: persist(e) # this just does an insert of e </code></pre> <p>This is resulting in a DB table like this:</p> <div class="s-table-container"><table class="s-table"> <thead> <tr> <th>name</th> <th>failure_reason</th> </tr> </thead> <tbody> <tr> <td>name1</td> <td>001003 (42000): 01b51169-0409-e03e-0075-e9028c3cc103: SQL compilation error:\n syntax error line 1 at position 0 unexpected 'seelect'.</td> </tr> <tr> <td>name1</td> <td>001003 (42000): 01b5116c-0409-e03e-0075-e9028c3cf86b: SQL compilation error:\n syntax error line 1 at position 0 unexpected 'seelect'.</td> </tr> <tr> <td>name2</td> <td>001003 (42000): 02b6117h-9876-e03f-0075-e9028c2ayhwb: SQL compilation error:\n syntax error line 1 at position 0 unexpected 'seelect'.</td> </tr> </tbody> </table></div> <p>As you can see, when I persist the Exception itself, it contains a UUID which stops me from easily grouping on the error.</p> <p>I've tried using the following, but what I want is something that will give me just <code>SQL compilation error: syntax error line 1 at position 0 unexpected 'seelect'.</code></p> <pre><code>e.__class__.__name__ str(e) repr(e) traceback.format_exc() </code></pre> <p>... this includes the UUID too.</p>
<python>
2024-06-17 11:11:24
2
1,579
nimgwfc
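A sketch of one pragmatic fix for the question above: the stable part of the message can be recovered by stripping the leading `code (sqlstate): <uuid>:` prefix before persisting. The prefix pattern below is inferred from the sample rows in the question, so it is an assumption, not a documented error format:

```python
import re

# "001003 (42000): 01b51169-...: SQL compilation error..." becomes
# "SQL compilation error..."
_PREFIX = re.compile(r"^\d+ \(\w+\): [0-9a-fA-F-]+: ")

def failure_reason(exc):
    """Return the exception text with the volatile statement-id stripped,
    suitable for grouping identical failures in the database."""
    return _PREFIX.sub("", str(exc), count=1)
```

If the driver exposes structured fields (some connectors provide e.g. an error number and SQLSTATE as attributes), grouping on those is more robust than regex stripping.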
78,632,056
3,906,713
Is there an intended way to encapsulate SeriesHelper in InfluxDB-python
<p>I would like to create a table in InfluxDB and then write a pandas dataframe to it. Thus, I need to develop a function:</p> <pre><code>def write_df_to_influx(df: pd.DataFrame, client: InfluxDBClient, table_name: str, tag_names: list, field_names: list) </code></pre> <p>The <a href="https://influxdb-python.readthedocs.io/en/latest/api-documentation.html#serieshelper" rel="nofollow noreferrer">official documentation</a> states that the intended way of writing entries in bulk is to use <code>SeriesHelper</code> class. The <a href="https://influxdb-python.readthedocs.io/en/latest/examples.html#tutorials-serieshelper" rel="nofollow noreferrer">documented use case</a> seems to suggest that I need to create a class inside of a class, where I would explicitly specify the bulk_size, as well as the fields and tags. I do not really understand this design choice, but my only concern is how to use it in a generic manner to write my function.</p> <p>Is it safe to create <code>__init__</code> method for <code>MySeriesHelper</code>, that would init the parent <code>SeriesHelper</code>, and also use additional parameters to pass directly to the nested <code>Meta</code> class? 
I'm not an expert in Python OOP, and I am mildly afraid that <code>SeriesHelper</code> has some hidden rules that I need to follow.</p> <pre><code>class MySeriesHelper(SeriesHelper): def __init__(self, *args, my_meta: dict = None, **kwargs): super(MySeriesHelper, self).__init__(*args, **kwargs) self.my_meta = my_meta class Meta: tags = self.my_meta['tags'] # More of the Meta class omitted </code></pre> <p>If I were able to create a generic class as described above, I would have been able to write my function more or less like this:</p> <pre><code>def write_df_to_influx(df: pd.DataFrame, client: InfluxDBClient, table_name: str, tag_names: list, field_names: list): my_meta = {'client': client, 'tags': tag_names, 'fields': field_names, 'bulk_size': len(df)} for idx, row in df.iterrows(): MySeriesHelper(my_meta=my_meta, **dict(row)) MySeriesHelper.commit() </code></pre> <p><strong>Edit</strong>: Oh, so in Python a nested class does not have access to the attributes of the enclosing class. What am I supposed to do then?</p>
<python><pandas><influxdb>
2024-06-17 10:17:14
1
908
Aleksejs Fomins
78,631,897
23,725,389
Saving Excel document removes the chart
<p>I am using OpenPyXL for processing; there are only 2 lines of code:</p> <pre><code>workbook = load_workbook(filename='input.xlsx', keep_vba=True, rich_text=True) # I will do operations here, but there is an error, so I commented this line out workbook.save('output.xlsx') </code></pre> <p>Then I get an output with a <strong>smaller size</strong>:</p> <p><a href="https://i.sstatic.net/TMEh0c9J.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/TMEh0c9J.png" alt="Output" /></a></p> <p>When I try to open output.xlsx, this notification appears:</p> <p><a href="https://i.sstatic.net/eAVu7wVv.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/eAVu7wVv.png" alt="Error Notification" /></a></p> <p>The chart was removed by Microsoft Excel. Sometimes I cannot open it in Microsoft Excel, but LibreOffice can open it with no error.</p> <p>Left: <a href="https://easyupload.io/ckzbd4" rel="nofollow noreferrer">input.xlsx</a>, right: output.xlsx. The chart was removed and Microsoft Excel reports an error:</p> <p><a href="https://i.sstatic.net/AJWiuHm8.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/AJWiuHm8.png" alt="Result" /></a></p> <p>The alert:</p> <p><a href="https://i.sstatic.net/e8qgQppv.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/e8qgQppv.png" alt="enter image description here" /></a></p> <p>Often my input file has ~4 sheets and a chart, and 1 sheet has data. My task is to write data to <em>one</em> sheet; I do not care about the others. I want to get one of these solutions:</p> <ol> <li>Allow OpenPyXL to read the charts the right way.</li> <li>Make OpenPyXL read a specific sheet and skip the other sheets.</li> <li>Prevent OpenPyXL from decreasing the size of the file (I think OpenPyXL does it for performance).</li> </ol> <p>Using another library/language is the last option.</p>
<python><excel><openpyxl>
2024-06-17 09:40:11
1
806
Nguyen Manh Cuong
78,631,285
4,451,315
Use ibis-framework to compute shifts (lags) in dataframe
<p>Say I want to do the following in Polars:</p> <pre class="lang-py prettyprint-override"><code>df.with_columns( a_1 = pl.col('a').shift(1), a_2 = pl.col('a').shift(2), b_1 = pl.col('b').shift(1), b_2 = pl.col('b').shift(2), ) </code></pre> <p>Say, starting from</p> <pre class="lang-py prettyprint-override"><code>import polars as pl df = pl.DataFrame({'a': [1,3,2,4], 'b': [5,1,2,1]}) </code></pre> <p>So, desired output:</p> <pre class="lang-py prettyprint-override"><code>shape: (4, 6) ┌─────┬─────┬──────┬──────┬──────┬──────┐ │ a ┆ b ┆ a_1 ┆ a_2 ┆ b_1 ┆ b_2 │ │ --- ┆ --- ┆ --- ┆ --- ┆ --- ┆ --- │ │ i64 ┆ i64 ┆ i64 ┆ i64 ┆ i64 ┆ i64 │ ╞═════╪═════╪══════╪══════╪══════╪══════╡ │ 1 ┆ 5 ┆ null ┆ null ┆ null ┆ null │ │ 3 ┆ 1 ┆ 1 ┆ null ┆ 5 ┆ null │ │ 2 ┆ 2 ┆ 3 ┆ 1 ┆ 1 ┆ 5 │ │ 4 ┆ 1 ┆ 2 ┆ 3 ┆ 2 ┆ 1 │ └─────┴─────┴──────┴──────┴──────┴──────┘ </code></pre> <p>How can I do this with ibis?</p> <p>I need to do this starting from the Polars dataframe <code>df</code>, so I need to connect to that using Ibis. Looking at <a href="https://ibis-project.org/backends/polars.html" rel="nofollow noreferrer">the docs</a> doesn't give much away in terms of what I'm actually supposed to do</p> <hr /> <p>Is it correct to do:</p> <pre class="lang-py prettyprint-override"><code>t = ibis.memtable(df) t = t.mutate( ibis._['a'].lead(1), ibis._['a'].lead(2), ibis._['b'].lead(1), ibis._['b'].lead(2), ) t.to_polars() </code></pre>
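For reference, the operation being requested is a plain lag: element `i` of the output is element `i - n` of the input, with nulls padding the front. A backend-independent sketch of the semantics in plain Python (no Ibis or Polars required):

```python
def shift(values, n, fill=None):
    """Lag a sequence by n positions, front-filled (Polars' shift(n))."""
    values = list(values)
    return [fill] * n + values[:len(values) - n]


print(shift([1, 3, 2, 4], 1))  # [None, 1, 3, 2]
print(shift([5, 1, 2, 1], 2))  # [None, None, 5, 1]
```

If Ibis follows the usual SQL window-function naming, `lag(n)` looks backward and `lead(n)` looks forward, so `lag` would be the counterpart of Polars' `shift(n)` here; treat that mapping as an assumption to verify against the Ibis docs.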
<python><python-polars><ibis>
2024-06-17 07:13:36
1
11,062
ignoring_gravity
78,630,737
2,975,510
Efficient scipy sparse array and kronecker product computation
<p>I want to compute the Kronecker product of a long list of small matrices (say, Pauli matrices). I tried to define the small matrices using <code>scipy.sparse.csr_array</code>, and use <code>scipy.sparse.kron</code> to perform the Kronecker product. However, the performance isn't ideal.</p> <pre><code> import numpy as np from scipy import sparse import functools import time sigma_x = sparse.csr_array(np.array([[0, 1], [1, 0]])) sigma_y = sparse.csr_array(np.array([[0, -1j], [1j, 0]])) sigma_z = sparse.csr_array(np.array([[1., 0], [0, -1.]])) sigma = [sigma_x, sigma_y, sigma_z] arguments = [sigma[i % 3] for i in range(3 * 5)] start_time = time.time() result = functools.reduce(sparse.kron, arguments) end_time = time.time() execution_time = end_time - start_time print(execution_time) </code></pre> <hr /> <p>=== Update 2 ===</p> <p>Found a simple fix: I should use <code>format='csr'</code> keyword when calling <code>sparse.kron</code>. So I define</p> <pre><code>def kron_csr(x, y): return sparse.kron(x, y, format='csr') </code></pre> <p>and then</p> <pre><code>arguments = [sigma[i % 3] for i in range(3 * 5)] result = functools.reduce(kron_csr, arguments) </code></pre> <p>would yield the result in just <strong>0.005</strong> second.</p> <hr /> <p>=== Update 1 ===</p> <p>Following a comment below, I split the computation into two steps, first compute the Kronecker product of 3 * 4 Pauli matrices, then compute the Kronecker product of that result with the last 3 Pauli matrices,</p> <pre><code>start_time = time.time() arguments = [sigma[i % 3] for i in range(3 * 4)] # reduces to 3 * 4 first = functools.reduce(sparse.kron, arguments) second = functools.reduce(sparse.kron, [sigma_x, sigma_y, sigma_z]) result = functools.reduce(sparse.kron, [first, second]) end_time = time.time() execution_time = end_time - start_time print(execution_time) </code></pre> <p>or first compute the Kronecker product of <code>sigma_x, sigma_y, sigma_z</code>, then compute the Kronecker product of 
five of these intermediate matrices,</p> <pre><code>start_time = time.time() result_1 = functools.reduce(sparse.kron, sigma) result = functools.reduce(sparse.kron, [result_1 for i in range(5)]) end_time = time.time() execution_time = end_time - start_time </code></pre> <p>The performance improves to around <strong>4~9 seconds</strong>.</p> <hr /> <p>The execution time gives something like <strong>11 seconds</strong>. The same computation using Mathematica only takes around 0.01 seconds,</p> <pre><code>Sigma = {SparseArray[( { {0, 1}, {1, 0} } )], SparseArray[( { {0, -I}, {I, 0} } )], SparseArray[( { {1, 0}, {0, -1} } )]}; ((KroneckerProduct @@ Join @@ Table[{Sigma[[1]], Sigma[[2]], Sigma[[3]]}, {i, 5}]) // Timing)[[1]] </code></pre> <p>I wonder how to improve the Python code performance (hopefully to something like that in <code>Mathematica</code>)?</p>
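As a backend-independent sanity check of the reduction pattern itself, a pairwise Kronecker product can be folded over a list of matrices with `functools.reduce`. This is a dense pure-Python sketch, practical only for a handful of factors and not a performance suggestion; it just confirms the `reduce` structure and the 2^n dimension growth:

```python
import functools


def kron(a, b):
    """Dense Kronecker product of matrices given as lists of lists:
    result[i*p + k][j*q + l] == a[i][j] * b[k][l]."""
    return [
        [x * y for x in row_a for y in row_b]
        for row_a in a
        for row_b in b
    ]


sigma_x = [[0, 1], [1, 0]]
result = functools.reduce(kron, [sigma_x] * 3)
print(len(result), len(result[0]))  # 8 8  (dimensions double per factor)
```

The sparse SciPy version is the same fold; only the pairwise `kron` (and its output format, hence the `format='csr'` fix above) changes.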
<python><math><scipy><wolfram-mathematica><sparse-matrix>
2024-06-17 02:47:26
1
2,203
Lelouch
78,630,470
10,028,567
How do I remove characters in a list?
<p>The following command when run in BASH cleans <a href="https://gutenberg.org/cache/epub/17192/pg17192.txt" rel="nofollow noreferrer">The Raven</a>.</p> <pre><code>cat The_Raven.txt | gawk '{print tolower($0)}' | tr -d &quot;\!\&quot;#$%&amp;'()*+,-./:;&lt;=&gt;?@[\\]^_\`{|}~&quot; </code></pre> <p>The following command modifies The Raven but makes the file unreadable.</p> <pre><code>cat The_Raven.txt | gawk '{print tolower($0)}' | tr -d &quot;\!\&quot;#$%&amp;'()*+,-./:;&lt;=&gt;?@[\\]^_\`{|}~«»&quot; </code></pre> <p>The following Python code uses <code>subprocess</code> to clean &quot;The Raven&quot;.</p> <pre><code>command = &quot;cat The_Raven.txt | gawk '{print tolower($0)}' | tr -d \&quot;!\\\&quot;#$%&amp;'()*+,-./:;&lt;=&gt;?@[\\\\]^_\\`{|}~\&quot;&quot; cleaned_text_from_command = subprocess.run(command, shell = True, capture_output = True, text = True, encoding = 'utf-8').stdout </code></pre> <p>Inserting <code>«»</code> after <code>~</code> in the above Python code causes the following error.</p> <pre><code>UnicodeDecodeError: 'utf-8' codec can't decode bytes in position 0-1: invalid continuation byte </code></pre> <p>How do I remove all relevant characters including <code>«»</code> if they are present?</p>
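For comparison, the whole pipeline can also be done in pure Python, which sidesteps the byte-versus-code-point issue entirely: `tr` deletes bytes, while `str.translate` operates on Unicode code points, so the multi-byte `«»` need no special handling. A sketch (reading the file with an explicit `encoding='utf-8'` is assumed to happen elsewhere):

```python
import string


def clean(text: str) -> str:
    # Delete ASCII punctuation plus the guillemets. tr works on bytes,
    # but str.translate works on code points, so «» are ordinary entries.
    table = str.maketrans("", "", string.punctuation + "«»")
    return text.lower().translate(table)


print(clean('Quoth the «Raven», "Nevermore!"'))  # quoth the raven nevermore
```

Any further odd characters can be appended to the deletion string the same way, without worrying about the subprocess's encoding.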
<python><linux><bash><subprocess>
2024-06-16 23:25:40
3
437
Tom Lever
78,630,207
819,417
Pygame indexed surface won't display
<p>I'm porting <a href="https://www.pygame.org/pcr/numpy_plasma/index.php" rel="nofollow noreferrer">this ancient code</a> to Python 3. I've applied the varied palette to the surfaces and the <code>plasma_buffer</code> matrix changes, but <code>pygame.display.flip()</code> keeps showing a black screen. The surface pixels also update, so what'd I miss?</p> <p><a href="https://i.sstatic.net/pl7aTfgP.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/pl7aTfgP.png" alt="enter image description here" /></a></p> <pre class="lang-py prettyprint-override"><code>&quot;&quot;&quot;Plasma.py -- Korruptor Jan 2001, test code for Seal Basher (v1.0) This little doobry is based on the fast flame example by Pete Shinners and is another novel neat hack on the surfarray module. The plasma algo itself is pretty simple, just a sum of four cosine values from a pre-calculated look-up table inserted into a surf buff. It's all pretty easy really. The comments explain my thinking... This is my first hack, and not really optimised apart from what I've learnt from Pete's example and whilst lurking on #Pygame. If you've got suggestions for speed increases let me know...&quot;&quot;&quot; from math import cos from numpy import array, mod, zeros, uint8 from pygame import Surface, display import pygame.transform from pygame.surfarray import blit_array # pylint: disable=no-name-in-module from pygame.locals import KEYDOWN, MOUSEBUTTONDOWN, QUIT RES = (320, 256) SMALL_X = int(RES[0] // 8) SMALL_Y = int(RES[1] // 8) PI = 3.14159 # Linear array of cosine values to be indexed and summed, initialised to zero prior to pre-calc... cos_tab = [0] * 256 # Array of indexes to be used on our cos_tab. Could be variables I suppose. Just easier to cut_n_paste! 
;-) pnt_tab = array((0, 0, 0, 0)) def main(): &quot;Inisalises display, precalculates the cosine values, and controls the update loop&quot; display.init() # Turn SDL_PIXELFORMAT_RGB888 to SDL_PIXELFORMAT_INDEX8 screen_surface: Surface = display.set_mode(RES, 0, 8).convert(8) info = display.Info() print(info) # Numeric array (working) for the display. Do all our fun stuff here... plasma_buffer: list = [[0] * SMALL_Y] * SMALL_X # Pygame Surface object which will take the surfarray data and be translated into a screen blit... plasma_surface = Surface((SMALL_X, SMALL_Y), 0, 8).convert(8) set_palette(screen_surface) plasma_surface.set_palette(screen_surface.get_palette()) screen_surface.fill(100) make_cosine() # Fruity loops... while 1: for e in pygame.event.get(): if e.type in (QUIT, KEYDOWN, MOUSEBUTTONDOWN): return add_cosine(plasma_buffer) # print(plasma_buffer[0]) blit_scaled_surface(screen_surface, plasma_buffer, plasma_surface) pygame.display.flip() def add_cosine(plasma_buffer): &quot;An Y by X loop of screen co-ords, summing the values of four cosine values to produce a color value that'll map to the previously set surface palette.&quot; # Use working indices for the cosine table, save the real ones for later... t1 = pnt_tab[0] t2 = pnt_tab[1] for y in range(0, SMALL_Y): # Save the horizontal indices for later use... t3 = pnt_tab[2] t4 = pnt_tab[3] for x in range(0, SMALL_X): # Our color value will equal the sum of four cos_table offsets. # The preset surface palette comes in handy here! We just need to output the value... # We mod by 256 to prevent our index going out of range. (C would rely on 8bit byte ints and with no mod?) color = ( cos_tab[mod(t1, 256)] + cos_tab[mod(t2, 256)] + cos_tab[mod(t3, 256)] + cos_tab[mod(t4, 256)] ) # Arbitrary values, changing these will allow for zooming etc... t3 += 3 t4 += 2 # Insert the calculated color value into our working surfarray... plasma_buffer[x][y] = color # Arbitrary values again... 
t1 += 2 t2 += 1 # Arbitrary values to move along the cos_tab. Play around for something nice... # Don't think I need these boundary checkings, but just in case someone decides to run this code for a couple of weeks non-stop... # if pnt_tab[0] &lt; 256: pnt_tab[0] += 1 else: pnt_tab[0] = 1 if pnt_tab[1] &lt; 256: pnt_tab[1] += 2 else: pnt_tab[1] = 2 if pnt_tab[2] &lt; 256: pnt_tab[2] += 3 else: pnt_tab[2] = 3 if pnt_tab[3] &lt; 256: pnt_tab[3] += 4 else: pnt_tab[3] = 4 def make_cosine(): &quot;Knock up a little pre-calculated cosine lookup table...&quot; i = 0 for i in range(0, 256): # Play with the values here for interesting results... I just made them up! :-) cos_tab[i] = 60 * (cos(i * PI / 32)) def set_palette(screen_surface): &quot;Create something trippy... Based on Pete's cmap creator, and without doubt the thing that took the longest... Aaaargh! Decent palettes are hard to find...&quot; colors = zeros((256, 3), uint8) i = 0 for i in range(0, 64): colors[i][0] = 255 colors[i][1] = i * 4 colors[i][2] = 255 - (i * 4) colors[i + 64][0] = 255 - (i * 4) colors[i + 64][1] = 255 colors[i + 64][2] = i * 4 colors[i + 128][0] = 0 colors[i + 128][1] = 255 - (i * 4) colors[i + 128][2] = 255 colors[i + 192][0] = i * 4 colors[i + 192][1] = 0 colors[i + 192][2] = 255 screen_surface.set_palette(colors) def blit_scaled_surface(screen, flame, miniflame): &quot;double the size of the data, and blit to screen -- Nicked from Shread's Fast Flame&quot; f = array(flame, uint8) print(f[0]) blit_array(miniflame, f) s2 = pygame.transform.scale(miniflame, screen.get_size()) print(s2.get_at((0,0))) screen.blit(s2, (0, 0)) if __name__ == &quot;__main__&quot;: main() </code></pre> <p>Smaller example:</p> <pre class="lang-py prettyprint-override"><code>import pygame as pg from pygame import QUIT, KEYDOWN, MOUSEBUTTONDOWN RES = (320, 256) pg.display.init() # Turn SDL_PIXELFORMAT_RGB888 to SDL_PIXELFORMAT_INDEX8 screen_surface: pg.Surface = pg.display.set_mode(RES).convert(8) info = 
pg.display.Info() print(info) PAL = [(251, 252, 253)] * 256 screen_surface.set_palette(PAL) screen_surface.fill(220) print(screen_surface.get_at((0, 0))) # (251, 252, 253, 255) pg.display.flip() # Should work according to https://www.geeksforgeeks.org/how-to-make-a-pygame-window/ running: bool = True while running: for e in pg.event.get(): if e.type in (QUIT, KEYDOWN, MOUSEBUTTONDOWN): running = False </code></pre> <p>The raw code from <a href="https://www.geeksforgeeks.org/how-to-make-a-pygame-window/" rel="nofollow noreferrer">https://www.geeksforgeeks.org/how-to-make-a-pygame-window/</a> does work on pygame 2.5.2 (SDL 2.28.3, Python 3.10.6) on Windows 11.</p>
<python><python-3.x><pygame><pygame-surface>
2024-06-16 20:32:43
1
20,273
Cees Timmerman
78,630,094
607,033
Django The included URLconf '_mod_wsgi_...' does not appear to have any patterns in it
<p>I am a Python novice and I am trying to run a Django hello world app on Windows with Apache and mod_wsgi. The above error message shows up with the following code:</p> <pre><code>import os import sys import site from django.core.wsgi import get_wsgi_application from django.conf import settings from django.urls import path from django.http import HttpResponse site.addsitedir(&quot;C:/Program files/python312/Lib/site-packages&quot;) sys.path.append('C:/Users/felhasznalo/Desktop/django') sys.path.append('C:/Users/felhasznalo/Desktop/django/todo') settings.ROOT_URLCONF=__name__ settings.ALLOWED_HOSTS = ['localhost', '127.0.0.1'] settings.SECRET_KEY = '1234' def hello(request): return HttpResponse(&quot;Hello World!&quot;) urlpatterns = [ path(&quot;&quot;, hello) ] application = get_wsgi_application() </code></pre> <p>As for the server, it works properly with this code:</p> <pre><code>def application(environ, start_response): status = '200 OK' output = b'Hello World!' response_headers = [('Content-type', 'text/plain'), ('Content-Length', str(len(output)))] start_response(status, response_headers) return [output] </code></pre> <p>So the problem is with the Django configuration, I think. I am not sure how to pass configuration parameters like <code>urlpatterns</code>. Obviously creating global variables does not work. Any idea?</p>
<python><django>
2024-06-16 19:38:56
1
26,269
inf3rno
78,630,047
827,927
How to stop numpy floats being displayed as "np.float64"?
<p>I have a large library with many doctests. All doctests pass on my computer. When I push changes to GitHub, GitHub Actions runs the same tests in Python 3.8, 3.9, 3.10 and 3.11. All tests run correctly on Python 3.8; however, on Python 3.9, 3.10 and 3.11, I get many errors of the following type:</p> <pre><code>Expected: [13.0, 12.0, 7.0] Got: [np.float64(13.0), np.float64(12.0), np.float64(7.0)] </code></pre> <p>I.e., the results are correct, but for some reason, they are displayed inside &quot;np.float64&quot;.</p> <p>In my code, I do not use np.float64 at all, so I do not know why this happens. Also, as the tests pass on my computer, I do not know how to debug the error, and it is hard to produce a minimal working example. Is there a way I can make the doctests pass again, without changing each individual test?</p>
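For context, the display change is purely a `repr` matter: list formatting calls `repr()` on each element, and NumPy 2.x scalar types print themselves as `np.float64(...)`. The mechanism can be mimicked with a stdlib-only sketch (`Wrapped` is a made-up stand-in; no NumPy involved):

```python
class Wrapped(float):
    """Stand-in mimicking how a NumPy 2.x scalar overrides __repr__."""
    def __repr__(self):
        return f"np.float64({float(self)!r})"


values = [Wrapped(13.0), Wrapped(12.0), Wrapped(7.0)]
print(values)                      # [np.float64(13.0), np.float64(12.0), np.float64(7.0)]
print([float(v) for v in values])  # [13.0, 12.0, 7.0]
```

So converting results with `float()` (or `.tolist()` on arrays) before printing is one way to keep doctests stable across NumPy versions; NumPy 2 also documents `np.set_printoptions(legacy="1.25")` to restore the old scalar repr, though that setting applies process-wide.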
<python><numpy><doctest><numpy-2.x>
2024-06-16 19:19:38
4
37,410
Erel Segal-Halevi
78,629,960
13,008,170
Multiprocessing Pool: return the minimum element
<p>I want to run a task with a multiprocessing.Pool and return only the minimum element, without taking the memory to store every output.</p> <p>My code so far:</p> <pre class="lang-py prettyprint-override"><code>with Pool() as pool: programs = pool.map(task, groups) shortest, l = min(programs, key = lambda a: len(a[0])) </code></pre> <p>This works; however, it would occupy a lot of memory for the result of pool.map(). <code>groups</code> is a set, which can be really big, and the results would take up a lot of memory.</p> <p>I would like some kind of approach like this:</p> <pre class="lang-py prettyprint-override"><code>with Pool() as pool: shortest, l = pool.execute_and_return_min(task, groups, key = lambda a: len(a[0])) </code></pre> <p>(which would internally compare the results and return the smallest element)</p> <p>or:</p> <pre class="lang-py prettyprint-override"><code>with Pool() as pool: shortest = l = None for program, k in pool.apply_and_return_as_generator(task, groups): if shortest is None or len(program) &lt; len(shortest): shortest = program l = k </code></pre> <p>(which would work like the normal pool but return values from the generator as soon as they are computed)</p> <p>I couldn't find any method of the pool to achieve something like this. Since I only want the minimum element, I do not care about the order of execution. Maybe I was not careful enough when searching.</p> <p>Any help would be appreciated. A solution with Pool() is preferred, but if you have an idea how to implement this using another technique, please go ahead as well.</p> <p>Thanks in advance!</p>
<python><multiprocessing><minimum><memory-efficient>
2024-06-16 18:48:07
1
304
gXLg
78,629,917
2,545,680
"which pip" and "pip --version" give different locations
<p>How's the following possible:</p> <pre class="lang-none prettyprint-override"><code>max@p17:~$ which pip /usr/bin/pip max@p17:~$ pip --version pip 24.0 from /home/user/.local/lib/python3.10/site-packages/pip (python 3.10) </code></pre> <p>so where's my pip installed?</p>
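The two outputs can disagree because `which` reports where the first `pip` wrapper script on `PATH` lives, while `pip --version` reports where the interpreter named in that wrapper's shebang found the `pip` package (here, a user-level install under `~/.local`). The effect can be reproduced with a throwaway wrapper; all paths in this sketch are fabricated for illustration and it assumes a POSIX `/bin/sh`:

```python
import os
import shutil
import stat
import subprocess
import tempfile

# A pip "binary" is usually a tiny wrapper script: `which` reports the
# wrapper's location, while `pip --version` reports where the interpreter
# named in the wrapper's shebang resolved the pip *package*.
d = tempfile.mkdtemp()
wrapper = os.path.join(d, "pip")
with open(wrapper, "w") as f:
    f.write("#!/bin/sh\n")
    f.write('echo "pip 24.0 from /home/user/.local/lib/python3.10/site-packages/pip (python 3.10)"\n')
os.chmod(wrapper, os.stat(wrapper).st_mode | stat.S_IEXEC)

search_path = d + os.pathsep + os.environ.get("PATH", "")
found = shutil.which("pip", path=search_path)  # what `which pip` answers
print(found)  # the wrapper itself, under the temp dir
with open(found) as f:
    print(f.readline().strip())  # the shebang decides which interpreter runs
print(subprocess.run([found], capture_output=True, text=True).stdout.strip())
```

Asking one specific interpreter directly, e.g. `python3 -m pip --version`, sidesteps the ambiguity: the package is then resolved from that interpreter's own `sys.path`, which is where `pip` actually lives.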
<python>
2024-06-16 18:31:58
1
106,269
Max Koretskyi
78,629,727
33,404
Why does my Python socket.connect() call stall for a minute with IPv6 but not with IPv4?
<p>On my Windows 10 desktop machine, I am running the following bit of Python (3.10.9) to make an HTTP POST request:</p> <pre><code>import json import requests response = requests.post( url=&quot;https://foo.bar/baz&quot;, headers={ 'Content-Type': &quot;application/json&quot;, 'Authorization': &quot;bazz&quot;, }, data=json.dumps({&quot;key&quot;:&quot;value&quot;}) ) </code></pre> <p>When I ran the code in PyCharm, I found that the call to the <code>post()</code> method consistently stalls for about 63 seconds before returning. As this is an unreasonably long time to get a response from this API, I ran the same code in the same way on a different computer and found that the call returns almost immediately with a good response.</p> <p>Having profiled the code in PyCharm, I can see that it's the call to Python's <code>socket.connect()</code> method that causes the 63-second delay.</p> <p>Following advice I've found online, I've disabled IPv6 on the Ethernet adapter that I use to access the internet:</p> <p><a href="https://i.sstatic.net/Ff7xdnVo.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Ff7xdnVo.png" alt="Screenshot of Windows Ethernet adapter with its &quot;Internet Protocol Version 6 (TCP/IPv6)&quot; feature unchecked" /></a></p> <p>Once I've done that, when I run the code, it returns immediately with no delay.</p> <p><strong>My question is - Why is this happening and how can I re-enable IPv6 on the Ethernet adapter and not experience the 63 second delay?</strong></p> <hr /> <p>Additional observations:</p> <ul> <li>The <code>https://foo.bar</code> API application seems to have both IPv6 and IPv4 addresses:</li> </ul> <pre><code>❯❯ nslookup foo.bar Server: vulcan.local Address: 192.168.1.1 Non-authoritative answer: Name: foo.bar Addresses: 2606:4700:20::****:**** 2606:4700:20::****:**** 2606:4700:20::****:**** 172.67.***.*** 104.26.***.*** 104.26.***.*** </code></pre> <ul> <li><p>When enabled, the IPv6 feature is configured to have
default values (as far as I can tell).</p> <p><a href="https://i.sstatic.net/YjTnPDlx.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/YjTnPDlx.png" alt="Screenshot of IPv6 feature properties set to obtain addresses automatically" /></a></p> <p><a href="https://i.sstatic.net/2iLnj4M6.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/2iLnj4M6.png" alt="Screenshot of IPv6 feature advanced properties set to default values" /></a></p> </li> </ul>
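A classic cause of a roughly 60-second connect is the resolver returning IPv6 addresses first while the host has no working IPv6 route, so each IPv6 candidate has to time out before an IPv4 one is tried. One way to test that hypothesis from Python, without touching the adapter, is to restrict resolution to IPv4 (the hostname below is a placeholder):

```python
import socket


def ipv4_candidates(host, port=443):
    """Resolve only A records; connection attempts would skip IPv6 entirely."""
    infos = socket.getaddrinfo(host, port, family=socket.AF_INET,
                               type=socket.SOCK_STREAM)
    # each entry is (family, type, proto, canonname, sockaddr);
    # for AF_INET, sockaddr is (address, port)
    return [sockaddr[0] for *_, sockaddr in infos]


print(ipv4_candidates("localhost"))
```

If `socket.create_connection((addr, 443))` to one of these is fast while the default dual-stack resolution stalls, broken IPv6 connectivity rather than the API is the likely culprit, and the adapter setting can stay enabled once the IPv6 route itself is repaired.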
<python><windows><network-programming><ipv6>
2024-06-16 17:11:12
0
16,911
urig
78,629,435
10,729,292
Numba throws `type error` for my gradient descent
<p>I am trying to build my MNIST classifier from scratch in Numba, but it throws a typing error for np.dot(). However, when I run these functions without the Numba JIT, I find that all the types are float32 and they run without error, so I cannot understand what's wrong here.</p> <p>The code is:</p> <pre><code>import numpy as np from numba import njit from tensorflow.keras.datasets import mnist # Load the dataset (X_train, y_train), (X_test, y_test) = mnist.load_data() # Flatten images X_train = X_train.reshape(X_train.shape[0], -1).astype(np.float32) X_test = X_test.reshape(X_test.shape[0], -1).astype(np.float32) # Normalize data X_train /= 255.0 X_test /= 255.0 # Convert labels to integers y_train = y_train.astype(np.int8) y_test = y_test.astype(np.int8) # Logistic regression model parameters num_features = X_train.shape[1] num_classes = 10 learning_rate = 0.1 num_iterations = 1000 # Numba-optimized logistic regression functions @njit def sigmoid(z): return 1.0 / (1.0 + np.exp(-z)) @njit def compute_cost(X, y, theta): m = X.shape[0] h = sigmoid(np.dot(X, theta)) cost = -(1.0 / m) * np.sum(y * np.log(h) + (1 - y) * np.log(1 - h)) return cost @njit def gradient_descent(X, y, theta, learning_rate, num_iterations): m = X.shape[0] for i in range(num_iterations): h = sigmoid(np.dot(X, theta)) gradient = np.dot(X.T, (h - y)) / m theta -= learning_rate * gradient if i % 100 == 0: cost = compute_cost(X, y, theta) print(f'Iteration {i}, Cost: {cost}') return theta @njit def predict(X, theta): h = sigmoid(np.dot(X, theta)) return np.argmax(h, axis=1) # Add bias term to X_train and X_test X_train_bias = np.hstack([np.ones((X_train.shape[0], 1), dtype=np.float32), X_train]) X_test_bias = np.hstack([np.ones((X_test.shape[0], 1), dtype=np.float32), X_test]) # Convert y_train to one-hot encoding y_train_one_hot = np.zeros((y_train.size, num_classes), dtype=np.float32) y_train_one_hot[np.arange(y_train.size), y_train] = 1.0 # Initialize theta (weights) theta = 
np.zeros((num_features + 1, num_classes), dtype=np.float32) print(type(X_train_bias[0][0]), type(y_train_one_hot[0][0]), type(theta[0][0]), type(learning_rate), type(num_iterations)) print((X_train_bias[0][0]), (y_train_one_hot[0][0]), (theta[0][0]), (learning_rate), (num_iterations)) # # Train the model theta = gradient_descent(X_train_bias, y_train_one_hot, theta, learning_rate, num_iterations) # # Predictions on test set y_pred = predict(X_test_bias, theta) # Calculate accuracy accuracy = np.mean(y_pred == y_test) * 100 print(f'Accuracy on test set: {accuracy:.2f}%') </code></pre> <p>And this is what my error looks like</p> <pre><code>--------------------------------------------------------------------------- TypingError Traceback (most recent call last) &lt;ipython-input-16-0a5c11357e7d&gt; in &lt;cell line: 74&gt;() 72 73 # # Train the model ---&gt; 74 theta = gradient_descent(X_train_bias, y_train_one_hot, theta, learning_rate, num_iterations) 75 76 # # Predictions on test set 1 frames /usr/local/lib/python3.10/dist-packages/numba/core/dispatcher.py in _compile_for_args(self, *args, **kws) 466 e.patch_message(msg) 467 --&gt; 468 error_rewrite(e, 'typing') 469 except errors.UnsupportedError as e: 470 # Something unsupported is present in the user code, add help info /usr/local/lib/python3.10/dist-packages/numba/core/dispatcher.py in error_rewrite(e, issue_type) 407 raise e 408 else: --&gt; 409 raise e.with_traceback(None) 410 411 argtypes = [] TypingError: Failed in nopython mode pipeline (step: nopython frontend) No implementation of function Function(&lt;built-in function dot&gt;) found for signature: &gt;&gt;&gt; dot(array(float32, 2d, F), array(float64, 2d, C)) There are 4 candidate implementations: - Of which 2 did not match due to: Overload in function 'dot_2': File: numba/np/linalg.py: Line 525. 
With argument(s): '(array(float32, 2d, F), array(float64, 2d, C))': Rejected as the implementation raised a specific error: TypingError: Failed in nopython mode pipeline (step: nopython frontend) No implementation of function Function(&lt;intrinsic _impl&gt;) found for signature: &gt;&gt;&gt; _impl(array(float32, 2d, F), array(float64, 2d, C)) There are 2 candidate implementations: - Of which 2 did not match due to: Intrinsic in function 'dot_2_impl.&lt;locals&gt;._impl': File: numba/np/linalg.py: Line 543. With argument(s): '(array(float32, 2d, F), array(float64, 2d, C))': Rejected as the implementation raised a specific error: TypingError: np.dot() arguments must all have the same dtype raised from /usr/local/lib/python3.10/dist-packages/numba/np/linalg.py:563 During: resolving callee type: Function(&lt;intrinsic _impl&gt;) During: typing of call at /usr/local/lib/python3.10/dist-packages/numba/np/linalg.py (582) File &quot;../usr/local/lib/python3.10/dist-packages/numba/np/linalg.py&quot;, line 582: def _dot2_codegen(context, builder, sig, args): &lt;source elided&gt; return lambda left, right: _impl(left, right) ^ raised from /usr/local/lib/python3.10/dist-packages/numba/core/typeinfer.py:1086 - Of which 2 did not match due to: Overload in function 'dot_3': File: numba/np/linalg.py: Line 784. With argument(s): '(array(float32, 2d, F), array(float64, 2d, C))': Rejected as the implementation raised a specific error: TypingError: missing a required argument: 'out' raised from /usr/local/lib/python3.10/dist-packages/numba/core/typing/templates.py:784 During: resolving callee type: Function(&lt;built-in function dot&gt;) During: typing of call at &lt;ipython-input-16-0a5c11357e7d&gt; (43) File &quot;&lt;ipython-input-16-0a5c11357e7d&gt;&quot;, line 43: def gradient_descent(X, y, theta, learning_rate, num_iterations): &lt;source elided&gt; h = sigmoid(np.dot(X, theta)) gradient = np.dot(X.T, (h - y)) / m ^ </code></pre> <p>Note: I am running this on google colab</p>
<python><numba><jit>
2024-06-16 14:48:25
0
1,558
Sadaf Shafi
78,629,411
1,382,667
Chrome driver for Selenium Can not connect to the Service
<p>I am trying to use Selenium to click a DIV and return the resulting link that should appear afterwards. In Python I can get Chrome to open, but it eventually fails to get the URL with the error <code> Can not connect to the Service D:\Software\chromedriver_126-win64\chrome.exe</code></p> <p>For what it's worth, here's my code:</p> <pre><code> from selenium import webdriver from selenium.webdriver.common.keys import Keys from selenium.webdriver.chrome.service import Service ChromeDriverPath64 = &quot;D:\\Software\\chromedriver_126-win64\\chrome.exe&quot; s = Service(ChromeDriverPath64) options = webdriver.ChromeOptions() # keep it simple for now options.page_load_strategy = 'normal' try: driver = webdriver.Chrome(service=s, options=options) driver.get(&quot;https://www.python.org&quot;) print(driver.title) except Exception as e: print(e) </code></pre> <p>The ChromeDriver version matches the Chrome version.</p> <p>Can anyone see what I'm doing wrong?</p>
<python><selenium-webdriver>
2024-06-16 14:36:54
0
333
Holly
78,629,133
561,243
Filling matplotlib stairs with different colors
<p>I have the following problem. I have some data for which I need to make a histogram.</p> <p>For this purpose I use <code>numpy.histogram</code> to generate the bins and the content and then <code>pyplot.stairs</code> for the visualization.</p> <p>So far so good.</p> <p>The next part is that I would like to have the area of the histogram between the left end and a given value x filled in one color, then from x to y in another, and finally from y to the right end in a third one.</p> <p>Something like the image below. <a href="https://i.sstatic.net/M6nH7mwp.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/M6nH7mwp.png" alt="enter image description here" /></a></p> <p>Do you know how to do it?</p> <p>Thanks!</p>
<python><matplotlib>
2024-06-16 12:35:37
1
367
toto
78,629,090
6,124,415
Setting up firebase functions with Python errors and issues
<p>Can anybody explain how to correctly set up firebase-functions with Python? (building in Google's &quot;project IDX&quot;)</p> <p>My current steps:</p> <ul> <li><p>create a &quot;blank workspace&quot;</p> </li> <li><p>run firebase init functions -&gt; that creates the &quot;functions&quot; folder as it should, but no &quot;venv&quot; is created (and because of that no dependencies are installed) -&gt; not the case for the guy in <a href="https://www.youtube.com/watch?v=mvEHYMsk_AE&amp;t=202s" rel="nofollow noreferrer">this video</a></p> </li> <li><p>cd functions -&gt;<code> python3 -m venv venv</code> -&gt; pick &quot;python3&quot; in the list (as suggested by Gemini) -&gt; installation completed</p> </li> <li><p>adding <strong>&quot;pkgs.python3&quot;</strong> to <strong>dev.nix</strong> -&gt; packages</p> </li> <li><p>test running <code>firebase init</code> again to install the missing dependencies (now that I have <strong>venv</strong> in <strong>functions</strong>) but getting an error: Requirement already satisfied: pip in ./venv/lib/python3.11/site-packages (24.0) /bin/sh: line 1: python3.12: command not found</p> </li> <li><p>so because of that I go to dev.nix and change in packages from <strong>&quot;pkgs.python3&quot;</strong> to <strong>&quot;pkgs.python312&quot;</strong> (keep in mind that the docs say that Python 3.10 and 3.11 are supported, but only that fixed the issue)</p> </li> <li><p>run firebase init again, and only now all the dependencies are installed</p> </li> <li><p>I can now deploy functions as well, BUT let's say that I want to add the <code>numpy</code> library to the project:</p> </li> <li><p>when in the &quot;/functions&quot; directory: <code>source venv/bin/activate</code></p> </li> <li><p>pip install numpy -&gt; python311Packages.pip</p> </li> <li><p>import numpy in the project, e.g.:</p> </li> </ul> <pre><code>from firebase_functions import https_fn from firebase_admin import initialize_app import numpy as np initialize_app() @https_fn.on_request() def on_request_example(req: 
https_fn.Request) -&gt; https_fn.Response: np.random.seed(42) return https_fn.Response(&quot;Hello world!&quot;) </code></pre> <ul> <li>run firebase deploy, now I get this error (because I changed to python312 in packages?):</li> </ul> <blockquote> <p>[2024-06-16 11:54:11,957] ERROR in app: Exception on /__/functions.yaml [GET] Traceback (most recent call last): File &quot;/home/user/zzz/functions/venv/lib/python3.12/site-packages/flask/app.py&quot;, line 1473, in wsgi_app response = self.full_dispatch_request() ^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File &quot;/home/user/zzz/functions/venv/lib/python3.12/site-packages/flask/app.py&quot;, line 882, in full_dispatch_request rv = self.handle_user_exception(e) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File &quot;/home/user/zzz/functions/venv/lib/python3.12/site-packages/flask/app.py&quot;, line 880, in full_dispatch_request rv = self.dispatch_request() ^^^^^^^^^^^^^^^^^^^^^^^ File &quot;/home/user/zzz/functions/venv/lib/python3.12/site-packages/flask/app.py&quot;, line 865, in dispatch_request return self.ensure_sync(self.view_functions[rule.endpoint])(**view_args) # type: ignore[no-any-return] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File &quot;/home/user/zzz/functions/venv/lib/python3.12/site-packages/firebase_functions/private/serving.py&quot;, line 122, in get_functions_yaml functions = get_functions() ^^^^^^^^^^^^^^^ File &quot;/home/user/zzz/functions/venv/lib/python3.12/site-packages/firebase_functions/private/serving.py&quot;, line 40, in get_functions spec.loader.exec_module(module) File &quot;&quot;, line 995, in exec_module File &quot;&quot;, line 488, in _call_with_frames_removed File &quot;/home/user/zzz/functions/main.py&quot;, line 7, in import numpy as np ModuleNotFoundError: No module named 'numpy'</p> <p>127.0.0.1 - - [16/Jun/2024 11:54:11] &quot;GET /__/functions.yaml HTTP/1.1&quot; 500 -</p> <p>127.0.0.1 - - [16/Jun/2024 11:54:11] &quot;GET /__/quitquitquit HTTP/1.1&quot; 200 -</p> <p>Error: 
Functions codebase could not be analyzed successfully. It may have a syntax or runtime error</p> </blockquote>
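A hedged guess at the numpy failure: Firebase installs deploy-time dependencies from functions/requirements.txt, and the local functions.yaml discovery step runs the venv's own interpreter (python3.12 in the traceback), so numpy has to be listed there and installed into that same venv rather than via the Nix python311Packages pip. A minimal sketch (paths and the version pin are illustrative):

```shell
# add numpy to the deploy-time dependency list (version pin illustrative)
mkdir -p functions
printf '%s\n' 'firebase_functions~=0.1.0' 'numpy' > functions/requirements.txt
cat functions/requirements.txt
# then install into the same venv the discovery step uses:
#   source functions/venv/bin/activate
#   pip install -r functions/requirements.txt
```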
<python><google-cloud-functions>
2024-06-16 12:12:26
0
1,155
pb4now
78,628,959
5,722,359
How to let KP_Down, KP_Up, KP_Next, KP_Prior scroll a `ttk.Treeview` widget like KeyPress-Down, KeyPress-Up, KeyPress-Next, KeyPress-Prior?
<p>I discovered that in <code>tkinter</code>, <code>'&lt;KP_Down&gt;'</code>,<code>'&lt;KP_Up&gt;'</code>, <code>'&lt;KP_Next&gt;'</code>, <code>'&lt;KP_Prior&gt;'</code> does not scroll a <code>ttk.Treeview</code> widget as <code>'&lt;KeyPress-Down&gt;'</code>, <code>'&lt;KeyPress-Up&gt;'</code>, <code>'&lt;KeyPress-Next&gt;'</code>, <code>'&lt;KeyPress-Prior&gt;'</code> respectively would by default? The sample code below shows these behaviours.</p> <p>How do I let <code>'&lt;KP_Down&gt;'</code> scroll the Treeview like how <code>'&lt;KeyPress-Down&gt;'</code> does? The same question applies to <code>'&lt;KP_Up&gt;'</code>, <code>'&lt;KP_Next&gt;'</code>, <code>'&lt;KP_Prior&gt;'</code> respective to <code>'&lt;KeyPress-Up&gt;'</code>, <code>'&lt;KeyPress-Next&gt;'</code>, <code>'&lt;KeyPress-Prior&gt;'</code>.</p> <pre><code>import random import tkinter as tk import tkinter.ttk as ttk class App(ttk.Frame): def __init__(self, master, **kwargs): super().__init__(master, **kwargs) self.master = master self.tree = None self.ysb = None self.xsb = None self.create_widgets() self.create_bindings() def create_widgets(self): self.tree = ttk.Treeview( self.master, height=10, selectmode='extended', takefocus=True, columns=(&quot;x&quot;, &quot;y&quot;, &quot;z&quot;), displaycolumns=[&quot;x&quot;, &quot;y&quot;, &quot;z&quot;]) self.tree.heading('#0', text=&quot;Points&quot;) self.tree.heading('x', text='X', anchor='center') self.tree.heading('y', text='Y', anchor='center') self.tree.heading('z', text='Z', anchor='center') self.tree.column('#0', stretch=True, width=100) self.tree.column('x', stretch=True, anchor='center', width=50) self.tree.column('y', stretch=True, anchor='center', width=50) self.tree.column('z', stretch=True, anchor='center', width=50) self.ysb = ttk.Scrollbar( self.master, orient='vertical', command=self.tree.yview) self.xsb = ttk.Scrollbar( self.master, orient='horizontal', command=self.tree.xview) self.tree.configure( yscrollcommand=self.ysb.set, 
xscrollcommand=self.xsb.set) self.columnconfigure(0, weight=1) self.rowconfigure(0, weight=1) self.tree.grid(row=0, column=0, sticky='nsew') self.ysb.grid(row=0, column=1, sticky='ns') self.grid(row=1, column=0, sticky='ew') groups = [&quot;Local&quot;, &quot;Remote&quot;] for n, group in enumerate(groups): self.tree.insert( &quot;&quot;, 'end', iid=f'G{n}', text=group, open=True) for i in range(10): self.tree.insert( f'G{n}', 'end', iid=f'G{n}-D{i}', text=f'G{n}-D{i}', values=[ random.randrange(0, 20), random.randrange(0, 20), random.randrange(0, 20) ] ) def create_bindings(self): self.tree.bind('&lt;KeyPress-Down&gt;', self.kp_down) self.tree.bind('&lt;KeyPress-Up&gt;', self.kp_up) self.tree.bind('&lt;KeyPress-Next&gt;', self.kp_down) self.tree.bind('&lt;KeyPress-Prior&gt;', self.kp_up) self.tree.bind('&lt;KP_Down&gt;', self.kp_down) self.tree.bind('&lt;KP_Up&gt;', self.kp_up) self.tree.bind('&lt;KP_Next&gt;', self.kp_down) self.tree.bind('&lt;KP_Prior&gt;', self.kp_up) def kp_down(self, event): print(f&quot;{event.keysym}&quot;) def kp_up(self, event): print(f&quot;{event.keysym}&quot;) if __name__ == '__main__': root = tk.Tk() app = App(root) root.columnconfigure(0, weight=1) root.rowconfigure(0, weight=1) app.grid(row=0, column=0, sticky='nsew') root.mainloop() </code></pre>
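One approach worth sketching (hedged): the keypad keysyms have their own event names, so the Treeview's built-in class bindings for `<Down>`/`<Up>`/`<Next>`/`<Prior>` never fire for them; a binding can re-emit the matching non-keypad event with `event_generate`. Only the pure mapping part is shown runnable here, since the binding itself needs a live Tk instance:

```python
# map keypad keysyms to their plain-key equivalents
KP_TO_PLAIN = {
    "KP_Down": "Down",
    "KP_Up": "Up",
    "KP_Next": "Next",
    "KP_Prior": "Prior",
}

def make_forwarder(widget, kp_keysym):
    """Build a handler that re-emits the non-keypad equivalent event."""
    plain = KP_TO_PLAIN[kp_keysym]

    def handler(event):
        widget.event_generate(f"<{plain}>")
        return "break"  # stop the KP_* event from propagating further

    return handler

# binding sketch (inside create_bindings, with a real Treeview):
# for kp in KP_TO_PLAIN:
#     self.tree.bind(f"<{kp}>", make_forwarder(self.tree, kp))
```

Returning `"break"` keeps the original keypad event from also reaching any other handlers.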
<python><tkinter><treeview><tcl>
2024-06-16 11:20:30
2
8,499
Sun Bear
78,628,887
4,961,540
Getting the same result for AES encryption and decryption in Python and TypeScript
<p>I am using the below code to encrypt and decrypt in typescript:</p> <pre><code>const AES = require('crypto-js/aes'); const Utf8 = require('crypto-js/enc-utf8'); export const encrypt = (text, passphrase) =&gt; { return AES.encrypt(text, passphrase).toString(); }; export const decrypt = (ciphertext, passphrase) =&gt; { const bytes = AES.decrypt(ciphertext, passphrase); const originalText = bytes.toString(Utf8); return originalText; }; </code></pre> <p>Now, the encrypted message generated in Typescript is to be decrypted in a Python environment. I used Chat GPT to come up with a solution but it gave a different result.</p> <p>In one of the StackOverflow answers it was mentioned that CryptoJS uses following method for AES encryption:</p> <blockquote> <p>Now CryptoJs derives a 32 byte long encryption key for AES-256 and a 16 byte long initialization vector (iv) from the password, encrypts the &quot;Message&quot; using this key, iv in AES mode CBC and (default) padding Pkcs7.</p> </blockquote> <p>I gave the above details as context to Chat GPT but that didn't work either. 
My Python code uses cryptography library and the code looks like below.</p> <pre><code>from cryptography.hazmat.primitives.kdf.pbkdf2 import PBKDF2HMAC from cryptography.hazmat.primitives import hashes from cryptography.hazmat.primitives.kdf.scrypt import Scrypt from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes from cryptography.hazmat.backends import default_backend from cryptography.hazmat.primitives import padding from os import urandom def derive_key_iv(password: str, salt: bytes, key_length=32, iv_length=16): # Key derivation function backend = default_backend() kdf = Scrypt( salt=salt, length=key_length + iv_length, n=2**14, r=8, p=1, backend=backend ) key_iv = kdf.derive(password.encode()) return key_iv[:key_length], key_iv[key_length:key_length+iv_length] def encrypt(message: str, password: str): salt = urandom(16) # Generate a random salt key, iv = derive_key_iv(password, salt) backend = default_backend() cipher = Cipher(algorithms.AES(key), modes.CBC(iv), backend=backend) encryptor = cipher.encryptor() padder = padding.PKCS7(algorithms.AES.block_size).padder() padded_data = padder.update(message.encode()) + padder.finalize() encrypted = encryptor.update(padded_data) + encryptor.finalize() return salt + encrypted # prepend salt for use in decryption def decrypt(token: bytes, password: str): salt = token[:16] # Extract the salt from the beginning encrypted_message = token[16:] key, iv = derive_key_iv(password, salt) backend = default_backend() cipher = Cipher(algorithms.AES(key), modes.CBC(iv), backend=backend) decryptor = cipher.decryptor() padded_plaintext = decryptor.update(encrypted_message) + decryptor.finalize() unpadder = padding.PKCS7(algorithms.AES.block_size).unpadder() plaintext = unpadder.update(padded_plaintext) + unpadder.finalize() return plaintext.decode() </code></pre> <p>What can be done to get the correct decrypted result in python?</p>
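For reference, the KDF detail that usually trips this up: when CryptoJS's `AES.encrypt` is given a passphrase string, it derives the key and IV with OpenSSL's EVP_BytesToKey (MD5, one iteration, 8-byte salt), and `.toString()` produces base64 of `"Salted__" + salt + ciphertext`, so Scrypt with a self-prepended salt cannot match it. A stdlib sketch of that derivation (the AES-CBC/PKCS7 step is only outlined in comments and would need the `cryptography` package):

```python
import hashlib

def evp_bytes_to_key(password: bytes, salt: bytes, key_len: int = 32, iv_len: int = 16):
    """OpenSSL's EVP_BytesToKey with MD5 and one iteration -- the KDF
    CryptoJS applies when AES.encrypt() is given a passphrase string."""
    derived = b""
    block = b""
    while len(derived) < key_len + iv_len:
        block = hashlib.md5(block + password + salt).digest()
        derived += block
    return derived[:key_len], derived[key_len:key_len + iv_len]

# Decryption outline for a CryptoJS token (untested sketch):
# raw = base64.b64decode(ciphertext_b64)
# assert raw[:8] == b"Salted__"
# salt, ct = raw[8:16], raw[16:]
# key, iv = evp_bytes_to_key(passphrase.encode(), salt)
# ...then AES-CBC decrypt ct with key/iv and strip PKCS7 padding.
```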
<python><typescript><encryption><aes>
2024-06-16 10:45:13
0
657
Kartik Watwani
78,628,816
625,396
Fast hashing of integer sequences in numpy/torch? (dot product with a random vector is quite good, but maybe something else?)
<p>Consider sequences of integers like (1,0,2,4,...,100). We have many(!) of them, and we need to hash them very fast in Python - so I am seeking methods using numpy/torch, i.e. without slow Python loops.</p> <p>The simple and efficient method that works is to generate a random integer vector and just take the dot products, which can be implemented in a single matrix product:</p> <pre><code>np.matmul(array_of_sequences, random_int_vector) </code></pre> <p><strong>Question 1:</strong> any ideas to do better?</p> <p>Any comments/ideas/suggestions are welcome!</p> <p>C++ people suggested something like this to me: <a href="https://www.kaggle.com/competitions/santa-2023/discussion/466399#2600835" rel="nofollow noreferrer">https://www.kaggle.com/competitions/santa-2023/discussion/466399#2600835</a> but I am not sure it is relevant to Python.</p> <p>One of the troubles with the method above is that it cannot work on an old GPU like the P100, which does not support multiplication of integer matrices. And when integers are converted to floats, there is another trouble: float multiplication may lose precision and hashing will not work correctly. When the hash is a float, there is more to explain about why: <a href="https://www.kaggle.com/competitions/santa-2023/discussion/468370" rel="nofollow noreferrer">https://www.kaggle.com/competitions/santa-2023/discussion/468370</a>. But for integer hashes there is also a problem, which is more surprising.</p>
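One float-free variant worth sketching (an assumption-laden sketch, not a drop-in: it relies on numpy's silent unsigned wraparound, i.e. it hashes modulo 2**64, which is fine for hashing but is not a cryptographic hash):

```python
import numpy as np

def hash_rows(arr: np.ndarray, seed: int = 0) -> np.ndarray:
    """Hash each row by a dot product carried out entirely in uint64, so
    overflow wraps deterministically modulo 2**64 instead of losing
    precision the way a float matmul does."""
    rng = np.random.default_rng(seed)
    coeffs = rng.integers(1, 2**63, size=arr.shape[1], dtype=np.uint64)
    # elementwise multiply + sum in uint64: wraparound is well-defined here
    return (arr.astype(np.uint64) * coeffs).sum(axis=1, dtype=np.uint64)
```

The torch analogue would use `dtype=torch.int64` tensors the same way; whether a given GPU supports integer elementwise multiply-and-sum is hardware dependent.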
<python><numpy><hash><torch>
2024-06-16 10:09:56
1
974
Alexander Chervov
78,628,679
5,790,653
sqlite insert not adding any records
<p>This is my code:</p> <pre class="lang-py prettyprint-override"><code>import os path = '/home/test/' os.system(f'rm -vf {path}myDB.db') import sqlite3 import datetime conn = sqlite3.connect(f&quot;{path}myDB.db&quot;) conn.execute( ''' CREATE TABLE logs (id INTEGER PRIMARY KEY, userid INTEGER NOT NULL, email TEXT NOT NULL, ip_address TEXT, date DATE ); ''' ) clients = [ {'userid': 26026, 'email': 'firstfake@gmail.com', 'ip': '1.1.1.158', 'date': f'{datetime.datetime.now()}'}, {'userid': 31010, 'email': 'secondfake@yahoo.com', 'ip': '1.1.1.10', 'date': f'{datetime.datetime.now()}'}, {'userid': 26076, 'email': 'lastfake@gmail.com', 'ip': '1.1.1.160', 'date': f'{datetime.datetime.now()}'}, ] for out in clients: conn.execute( f&quot;&quot;&quot; INSERT INTO logs (id, userid, email, ip_address, date) VALUES (NULL, {out['userid']}, '{out['email']}', '{out['ip']}', '{out['date']}') &quot;&quot;&quot;) conn.close() conn = sqlite3.connect(f&quot;{path}myDB.db&quot;) conn.row_factory = sqlite3.Row c = conn.cursor() c.execute('SELECT * FROM logs') output = [] for r in c.fetchall(): output.append(dict(r)) for x in output: x conn.close() </code></pre> <p>It only creates the file and the table, but does not insert any values.</p> <p>What is wrong with my code? I tried some simpler samples and they worked and did insert, but with the real-world example, they don't work.</p> <p>I also see this output after running <code>INSERT</code>:</p> <pre><code>&lt;sqlite3.Cursor object at 0x7f393d757bc0&gt; &lt;sqlite3.Cursor object at 0x7f393d7578c0&gt; &lt;sqlite3.Cursor object at 0x7f393d757bc0&gt; </code></pre>
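For comparison, a minimal stdlib sketch (in-memory database, illustrative values): `sqlite3` opens an implicit transaction for INSERTs, and closing the connection without `commit()` rolls that transaction back, which matches the symptom of an empty table. The sketch also uses a parameterized query, which sidesteps the quoting problems of the f-string version:

```python
import sqlite3

conn = sqlite3.connect(":memory:")  # in-memory stand-in for myDB.db
conn.execute("CREATE TABLE logs (id INTEGER PRIMARY KEY, email TEXT NOT NULL)")
# parameterized insert (also avoids injection/quoting issues of f-strings)
conn.execute("INSERT INTO logs (email) VALUES (?)", ("firstfake@gmail.com",))
conn.commit()  # without this, close() rolls the pending INSERT back
rows = conn.execute("SELECT email FROM logs").fetchall()
print(rows)  # [('firstfake@gmail.com',)]
conn.close()
```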
<python><sqlite>
2024-06-16 08:52:12
1
4,175
Saeed
78,628,435
17,652,621
Memory Leak Issue with Tkinter GUI application
<p>I'm facing a memory leak issue with a custom widget in a Tkinter/customtkinter application. Despite my efforts to manually delete and clear all member variables, the memory is not being fully cleaned up. Below is the relevant code for one of my custom class.</p> <pre class="lang-py prettyprint-override"><code>import tkinter import webbrowser import customtkinter as ctk from typing import List, Union, Tuple from tkinter import PhotoImage import pyperclip from utils import ( ValueConvertUtility, ) from widgets.components.thumbnail_button import ( ThumbnailButton, ) from services import ( ThemeManager, LanguageManager ) from settings import ( AppearanceSettings ) class Video(ctk.CTkFrame): &quot;&quot;&quot;A class representing a video widget.&quot;&quot;&quot; default_thumbnails: Tuple[tkinter.PhotoImage, tkinter.PhotoImage] = (None, None) def __init__( self, root: ctk.CTk, master: Union[ctk.CTkFrame, ctk.CTkScrollableFrame], width: int = 0, height: int = 0, # video info video_url: str = &quot;&quot;, video_title: str = &quot;-------&quot;, channel: str = &quot;-------&quot;, thumbnails: List[PhotoImage] = (None, None), channel_url: str = &quot;-------&quot;, length: int = 0): super().__init__( master=master, height=height, width=width, border_width=1, corner_radius=8, ) self.root = root self.master = master self.height: int = height self.width: int = width # video details self.video_url: str = video_url self.video_title: str = video_title self.channel: str = channel self.channel_url: str = channel_url self.length: int = length self.thumbnails: List[PhotoImage] = thumbnails # widgets self.info_frame: Union[ctk.CTkFrame, None] = None self.url_label: Union[ctk.CTkLabel, None] = None self.video_title_label: Union[ctk.CTkLabel, None] = None self.channel_btn: Union[ctk.CTkButton, None] = None self.video_length_label: Union[ctk.CTkLabel, None] = None self.thumbnail_btn: Union[ThumbnailButton, None] = None self.remove_btn: Union[ctk.CTkButton, None] = None from widgets 
import ContextMenu self.context_menu: Union['ContextMenu', None] = None # initialize the object self.create_widgets() self.set_widgets_texts() self.set_widgets_sizes() self.set_widgets_fonts() self.set_widgets_colors() self.set_tk_widgets_colors() self.set_widgets_accent_color() self.place_widgets() self.bind_widgets_events() # register to Theme Manger for accent color updates &amp; widgets colors updates ThemeManager.register_widget(self) LanguageManager.register_widget(self) &quot;&quot;&quot; other methods &quot;&quot;&quot; def __del__(self): &quot;&quot;&quot;Clear the Memory.&quot;&quot;&quot; del self.height del self.width # video details del self.video_url del self.video_title del self.channel del self.channel_url del self.length del self.thumbnails # widgets del self.info_frame del self.url_label del self.video_title_label del self.channel_btn del self.video_length_label del self.remove_btn del self.context_menu del self.thumbnail_btn del self def destroy_widgets(self): &quot;&quot;&quot;Destroy the child widget.&quot;&quot;&quot; self.video_length_label.destroy() self.info_frame.destroy() self.video_title_label.destroy() self.channel_btn.destroy() self.url_label.destroy() self.remove_btn.destroy() self.destroy() def kill(self): &quot;&quot;&quot;Destroy the widget.&quot;&quot;&quot; ThemeManager.unregister_widget(self) LanguageManager.unregister_widget(self) self.pack_forget() self.destroy_widgets() self.__del__() </code></pre> <p>I am trying to understand why this is not getting cleaned and what is wrong here.</p> <p><a href="https://github.com/Thisal-D/PyTube-Downloader" rel="nofollow noreferrer">My repository</a></p>
<python><python-3.x><tkinter><customtkinter>
2024-06-16 06:34:59
0
320
Thisal
78,628,433
10,994,166
Add a one-hot feature to a TensorFlow model
<p>I'm new to deep learning. I'm creating an MMOE model and adding features step by step in my <em>init</em> function.</p> <pre><code>class TIGMMOE(tfrs.Model): def __init__(self, use_cross_layer, deep_layer_sizes, num_units, num_shared_experts, projection_dim=None): super().__init__() self.embedding_dimension = 8 self._embeddings = {} #categorical embedding (common to all tasks) self._embeddings['category'] = tf.keras.Sequential( [tf.keras.layers.Embedding(num_total_pcats + 1, 32) ],name='cat_emb') self._embeddings['ptype'] = tf.keras.Sequential( [tf.keras.layers.Embedding(num_total_ptypes + 1, 128) ],name='ptype_emb') . . . def call(self, feat_inputs): features = feat_inputs anchor_embeddings = [] for feat_name in _CONTEXT_FEATURE_KEYS: anchor_embeddings.append(features[feat_name]) anchor_embeddings.append(self._embeddings['category'](features['anc_pcat_map'])) anchor_embeddings.append(self._embeddings['ptype'](features['anc_ptcode_map'])) anchor_embeddings = {} anchor_embeddings['anc_feat_vec'] = features['anc_feat_vec'] anchor_embeddings['anc_ptcode_map_emb'] = self._embeddings['category'](features['anc_pcat_map']) anchor_embeddings['anc_pcat_map_emb'] = self._embeddings['ptype'](features['anc_ptcode_map']) </code></pre> <p>Now I want to add another feature <code>x</code> to the model, which is in <code>int</code> format, and I want to add it as a <code>one-hot-encoded</code> feature. Can someone tell me how I can add this feature from TFRecords to my model?</p>
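On the TF side this is typically a single call, e.g. `tf.one_hot(features['x'], depth=num_classes)` inside `call()`, or a `tf.keras.layers.CategoryEncoding(num_tokens=..., output_mode='one_hot')` layer created in `__init__` (the feature name and depth here are placeholders for whatever the TFRecord schema uses). For intuition, the transformation itself in plain numpy:

```python
import numpy as np

def one_hot(indices, depth):
    """Pure-numpy picture of what tf.one_hot(indices, depth) returns."""
    indices = np.asarray(indices)
    out = np.zeros((indices.size, depth), dtype=np.float32)
    out[np.arange(indices.size), indices] = 1.0
    return out

print(one_hot([0, 2, 1], depth=3))
# [[1. 0. 0.]
#  [0. 0. 1.]
#  [0. 1. 0.]]
```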
<python><tensorflow><machine-learning><deep-learning>
2024-06-16 06:33:12
0
923
Chris_007
78,628,191
2,778,405
Get timezone for specific date
<p>I have a list of dates formatted like so:</p> <pre class="lang-py prettyprint-override"><code>'2024-01-01' </code></pre> <p>I need to turn them into ISO datetimes like so:</p> <pre><code>'2024-01-01T00:00:00-08:00' </code></pre> <p>I thought this would be easy; I was wrong. I've seen many questions talk about this on SO, but they tend to overlook the complexity of daylight saving time (DST).</p> <p>What can't be used in a solution:</p> <pre><code> datetime.now() </code></pre> <p>Using <code>datetime.now()</code> will give the current timezone, on the date it is today, and will not be relevant to the date in question.</p> <p>I need to return the proper offset, when converting a specific date, based on what the offset would have been in local time at the date in question. Example:</p> <pre class="lang-py prettyprint-override"><code>'2024-01-01' # becomes '2024-01-01T00:00:00-08:00' '2024-06-06' # becomes '2024-06-06T00:00:00-07:00' </code></pre> <p>I have a solution to brute force this by calculating daylight saving time starts and finishes. But that's an ugly solution; there has to be some manipulation of the <code>datetime</code> API that works better.</p>
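A sketch with the stdlib `zoneinfo` module (Python 3.9+), which applies the DST rules in force on the given date rather than today's; the `America/Los_Angeles` zone here is an assumption based on the -08:00/-07:00 offsets in the examples:

```python
from datetime import datetime
from zoneinfo import ZoneInfo  # stdlib since 3.9; reads the IANA tz database

def to_local_iso(date_str: str, tz_name: str = "America/Los_Angeles") -> str:
    # Attaching the zone to the naive midnight makes zoneinfo compute the
    # UTC offset that applied on *that* date, DST included.
    dt = datetime.fromisoformat(date_str).replace(tzinfo=ZoneInfo(tz_name))
    return dt.isoformat()

print(to_local_iso("2024-01-01"))  # 2024-01-01T00:00:00-08:00 (PST)
print(to_local_iso("2024-06-06"))  # 2024-06-06T00:00:00-07:00 (PDT)
```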
<python><datetime><timezone>
2024-06-16 03:23:06
1
2,386
Jamie Marshall
78,628,089
11,121,557
Unable to debug python C extension using valgrind
<p>I am trying to debug a C extension I made using CFFI. I am using Python 3.11 and Valgrind 3.18.1.</p> <p>As far as I can tell from the docs the only setup needed is setting the <code>PYTHONMALLOC=malloc</code> environment variable, but for some reason, valgrind just refuses to report any errors with or without it being set.</p> <p>For example:</p> <pre class="lang-py prettyprint-override"><code># test.py from myproj._myproj import lib, ffi handle = ffi.new(&quot;MyType_t **&quot;) # Call malloc to allocate some space on the heap and return a pointer to it lib.func_that_calls_malloc(handle) # Deallocate the data pointed to by the handle # Oh no! I forgot to call this function. This will result in a memory leak! # lib.func_that_frees_the_handle(handle) # del handle # CFFI manages the handle itself, so it is unnecessary to deallocate it manually. </code></pre> <pre class="lang-bash prettyprint-override"><code>$ PYTHONMALLOC=malloc valgrind --show-leak-kinds=definite --leak-check=full python test.py ==2880771== Memcheck, a memory error detector ==2880771== Copyright (C) 2002-2017, and GNU GPL'd, by Julian Seward et al. ==2880771== Using Valgrind-3.18.1 and LibVEX; rerun with -h for copyright info ==2880771== Command: /home/user/.pyenv/shims/python test.py ==2880771== $ valgrind --show-leak-kinds=definite --leak-check=full python test.py ==2881962== Memcheck, a memory error detector ==2881962== Copyright (C) 2002-2017, and GNU GPL'd, by Julian Seward et al. ==2881962== Using Valgrind-3.18.1 and LibVEX; rerun with -h for copyright info ==2881962== Command: /home/user/.pyenv/shims/python test.py ==2881962== </code></pre> <p>What am I doing wrong?</p> <p>EDIT:</p> <p>I should have mentioned this before - I am not trying to debug the CFFI extension itself. The CFFI wrapper code is autogenerated and the underlying C library that I am writing has its own tests. I am trying to test that my usage of the C extension in Python does not cause memory leaks. 
I have modified the python code example above to better illustrate this.</p>
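One thing worth ruling out (a hedged checklist, not a guaranteed fix): the banner shows `Command: /home/user/.pyenv/shims/python`, and a pyenv shim is a shell script rather than the interpreter binary, so valgrind starts out instrumenting a shell. Resolving the real binary, and tracing child processes in case the shim spawns one, removes that variable:

```shell
# resolve the actual interpreter behind any pyenv shim
real_python="$(pyenv which python 2>/dev/null || command -v python3)"
echo "tracing: $real_python"
if command -v valgrind >/dev/null 2>&1; then
  PYTHONMALLOC=malloc valgrind --trace-children=yes \
    --leak-check=full --show-leak-kinds=definite \
    "$real_python" -c 'pass'
fi
```

CPython also ships a suppression file (Misc/valgrind-python.supp in the source tree) that can quiet interpreter-internal noise once reports start appearing.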
<python><c><malloc><valgrind><cffi>
2024-06-16 01:29:07
1
376
Slav
78,628,012
21,343,992
How to create AWS EC2 instance using Boto3 for a specific account?
<p>I currently have a regular/non-organization AWS account and I create EC2 instances via Boto3 using code like this:</p> <pre><code>ec2 = boto3.resource('ec2', region_name = region) instances = ec2.create_instances(...) </code></pre> <p>I may need to change my account to an organization and create sub-accounts so I can use different credit cards to fund different accounts.</p> <p>However, I am concerned how much my existing Boto3 scripts will need to change. Looking at the reference page for <code>create_instances()</code>:</p> <p><a href="https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/ec2/service-resource/create_instances.html" rel="nofollow noreferrer">https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/ec2/service-resource/create_instances.html</a></p> <p>I don't see an argument which refers to a sub account.</p> <p>Using Boto3 how do you create an instance for a specific sub account?</p>
<python><amazon-web-services><amazon-ec2><boto3>
2024-06-16 00:23:55
1
491
rare77
78,627,973
3,347,814
How to extract a substring from a column in a DataFrame based on a column from another DataFrame?
<p>I have found a solution to my problem, but it is clearly the dumbest and most inefficient one. I was hoping that someone could help me with a proper solution.</p> <p>I have two data frames containing a column with a telephone number. df1[&quot;telephone&quot;] can have one or more numbers separated by commas and df2[&quot;telephone&quot;] has only single numbers:</p> <pre><code>df1[&quot;telephone&quot;] telephone 115879878 411564656 465464654,45646546 464665465,46456465,87972315 123165648 df2[&quot;telephone&quot;] telephone 156465456 132131321 879878999 456489798 546489798 465478978 </code></pre> <p>What I want to do is check if one of the numbers in df1[&quot;telephone&quot;] is in df2[&quot;telephone&quot;] and create a column with the matched number.</p> <p>I have managed to do this using the following code:</p> <pre><code>df1['telephone'] = df1['telephone'].astype(str) df2[&quot;telephone&quot;] = df2[&quot;telephone&quot;].astype(str) telephone_match = [] for telephone_1 in df1['telephone']: telephone_found = False for telephone_2 in df2[&quot;telephone&quot;]: if (telephone_2 in telephone_1): telephone_match.append(telephone_2) telephone_found = True continue if (not telephone_found): telephone_match.append(False) df1['matches'] = telephone_match </code></pre> <p>This works, but it takes ages to run. I'm pretty sure it is the dumbest possible method, but I have no idea how to do this efficiently. Can someone please help me?</p>
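For scale, a sketch of the set-based approach (illustrative mini-frames): a `set` turns the inner loop into O(1) lookups, and splitting on commas compares whole numbers, which also fixes a subtle bug in the substring test, where a short number could falsely match inside a longer one:

```python
import pandas as pd

# illustrative stand-ins for the real frames
df1 = pd.DataFrame({"telephone": ["115879878", "465464654,45646546", "464665465,87972315"]})
df2 = pd.DataFrame({"telephone": ["45646546", "132131321"]})

valid = set(df2["telephone"].astype(str))   # O(1) membership tests

def first_match(cell: str):
    # compare whole comma-separated numbers, not substrings
    return next((t for t in cell.split(",") if t in valid), False)

df1["matches"] = df1["telephone"].astype(str).map(first_match)
print(df1["matches"].tolist())   # [False, '45646546', False]
```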
<python><pandas><dataframe>
2024-06-15 23:56:31
5
1,143
user3347814
78,627,501
202,201
Wildly inconsistent performance of CP-SAT solver
<p>I ran the CP-SAT solver three consecutive times on my model. The wall times were:</p> <ul> <li>0h07m41s</li> <li>0h39m45s</li> <li>2h17m42s</li> </ul> <p>There is almost 18x difference in runtime from the fastest to the slowest. There were no other CPU-intensive jobs on the machine at the time.</p> <p>I understand there's some randomness in the solver, but I tried to remove that variation by seeding:</p> <pre><code>solver = cp_model.CpSolver() solver.parameters.random_seed = 11111 </code></pre> <p>Is this level of performance variation, given a fixed seed, reasonable?</p> <p>(My goal is to get before-and-after performance measurements for some model changes I am considering. These were supposed to be the &quot;before&quot; numbers, but with this much variation, I will not have any confidence that the effects of my change will be visible.)</p> <h1>To recreate</h1> <p>I'm using ortools-9.10.4067 on Python 3.10.12 on Ubuntu Desktop 22.04.4.</p> <pre><code>git clone --branch inconsistent-solver-performance https://scottj97@bitbucket.org/scottj97/builderment.git cd builderment rm seeds/map_50resources.plants.json; make MAP=seeds/map_50resources </code></pre> <p>(Or download this <a href="https://bitbucket.org/scottj97/builderment/raw/16d9ac9bf9b54c43b536bafa89d01af37b6f1652/model.txt" rel="nofollow noreferrer">exported model</a>)</p> <p>Solver elapsed wall time will appear in the log:</p> <pre><code>[2024-06-15 13:16:33.131512] Constraint solver completed after 2:17:42.137048. </code></pre>
<python><or-tools><cp-sat>
2024-06-15 18:58:24
1
1,091
ScottJ
78,627,210
417,896
How to style QTabWidget with PySide6
<p>How can I maximize the space taken up by children of QTabWidget using PySide6?</p> <p>IOW, I don't want the padding the QTabWidget container takes up before drawing the child inside of it.</p> <p>In the image you can see there is a lighter shade of gray from the background of the tab widget parent. I want to remove this lighter gray and have the child expand to fill that space.</p> <p>And as a bonus I want the tab bar showing the names of the tab to be on top of the QTabWidget draw area, not halfway between the tab area and parent area.</p> <p><a href="https://i.sstatic.net/xkJvjMiI.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/xkJvjMiI.png" alt="enter image description here" /></a></p>
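For the padding around the page area, Qt Style Sheets can target the `pane` and `tab-bar` subcontrols of `QTabWidget` (applied with `tab_widget.setStyleSheet(...)` in PySide6). A sketch; the exact values are guesses that may need tuning per platform style, since some built-in styles deliberately overlap the tab bar and pane:

```css
/* remove the frame/margins around the page area */
QTabWidget::pane {
    border: 0;
    margin: 0;
    padding: 0;
}
/* keep the tab bar above the pane instead of straddling its edge */
QTabWidget::tab-bar {
    alignment: left;
}
```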
<python><pyside><qtabwidget>
2024-06-15 16:47:35
0
17,480
BAR
78,627,164
5,507,389
FastAPI path parameter translation to enumeration member
<p>I started to learn FastAPI by reading the official documentation. I'm having a bit of difficulty understanding the example which illustrates how to work with Python enumerations. The example in the official documentation is as follows:</p> <pre><code>from enum import Enum from fastapi import FastAPI class ModelName(str, Enum): alexnet = &quot;alexnet&quot; resnet = &quot;resnet&quot; lenet = &quot;lenet&quot; app = FastAPI() @app.get(&quot;/models/{model_name}&quot;) async def get_model(model_name: ModelName): if model_name is ModelName.alexnet: return {&quot;model_name&quot;: model_name, &quot;message&quot;: &quot;Deep Learning FTW!&quot;} if model_name.value == &quot;lenet&quot;: return {&quot;model_name&quot;: model_name, &quot;message&quot;: &quot;LeCNN all the images&quot;} return {&quot;model_name&quot;: model_name, &quot;message&quot;: &quot;Have some residuals&quot;} </code></pre> <p>The documentation says that: &quot;The value of the path parameter will be an enumeration <strong>member</strong>.&quot;</p> <p>I don't understand how the path parameter is an enumeration <strong>member</strong>. For me the path parameter will correspond to the enumeration <strong>value</strong>. When I call the endpoint <code>127.0.0.1:8000/models/alexnet</code>, the path parameter <code>alexnet</code> is the value of the Enum member <code>ModelName.alexnet</code>. I'm not passing the enumeration object in any way in the request.</p> <p>So, I'm curious to understand when and how FastAPI makes the &quot;translation&quot; to the enumeration member from the path parameter in the call. 
Because it seems that the path parameter <code>alexnet</code> is implicitly translated to <code>ModelName.alexnet</code>.</p> <p><strong>UPDATE:</strong> What I also found counter-intuitive is that, if I change the Enum member as <code>ALEXNET = &quot;alexnet&quot;</code> and change the condition to <code>if model_name is ModelName.ALEXNET:</code> (i.e., in capital letters), I can still call the endpoint with <code>127.0.0.1:8000/models/alexnet</code>(in lowercase, which is how the Enum <strong>value</strong> is defined).</p>
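Conceptually (a sketch of the idea, not FastAPI's exact call chain, which goes through Pydantic validation): declaring the parameter as `ModelName` makes the framework validate the raw path segment by calling the enum class, and calling an Enum performs lookup *by value* and returns the member. That is also why renaming the member to `ALEXNET` changes nothing about the URL:

```python
from enum import Enum

class ModelName(str, Enum):
    ALEXNET = "alexnet"   # member name and value deliberately differ

# What the validation layer does with the raw path segment, in essence:
member = ModelName("alexnet")        # Enum call -> lookup by VALUE
assert member is ModelName.ALEXNET   # you receive the member, not a bare str
assert member.value == "alexnet"
```

An unknown value raises `ValueError`, which is what lets the framework reject the request with a validation error instead of running the handler.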
<python><enums><fastapi>
2024-06-15 16:29:52
1
679
glpsx
78,626,598
7,695,845
sympy.init_printing() not working in IPython terminal session
<p>I wanted to do some quick calculations and I was too lazy to open a notebook file, so I just opened an <code>IPython</code> session in the terminal. According to <a href="https://docs.sympy.org/latest/tutorials/intro-tutorial/printing.html#setting-up-pretty-printing" rel="nofollow noreferrer">sympy's documentation</a> calling <code>sympy.init_printing()</code> should initialize pretty printing in a python or IPython session. However, when I tried it, I didn't get pretty printing: <a href="https://i.sstatic.net/YF7tzsNx.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/YF7tzsNx.png" alt="" /></a></p> <p>I tried running with <code>use_latex=&quot;mathjax&quot;/True/False</code>, <code>use_latex=True/False</code>, and I tried using <code>IPython.display.display()</code>. Nothing worked. I always get the raw version of my expressions like in the image above. What am I missing here? How can I enable pretty printing in a terminal session with sympy?</p>
<python><ipython><sympy>
2024-06-15 12:28:42
0
1,420
Shai Avr
78,626,537
6,687,699
Update query not working with Celery and FastAPI
<p>I am using a FastAPI repository pattern, and it works very well in Celery, but I have an UPDATE query that doesn't execute. (The funny part is that other queries work very well.)</p> <p>Here's my query:</p> <pre><code>UPDATE_REMINDER_SENT_AT_QUERY = &quot;&quot;&quot; UPDATE public.trainings SET reminder_sent_at = :next_reminder_date AT TIME ZONE 'utc' WHERE id = :training_id RETURNING id; &quot;&quot;&quot; </code></pre> <p>Then here's my repository:</p> <pre><code> async def update_reminder_sent_at( self, training_id: UUID, next_reminder_date ): async with self.db.transaction(): record: int = await self.db.fetch_val( query=UPDATE_REMINDER_SENT_AT_QUERY, values={ &quot;training_id&quot;: training_id, &quot;next_reminder_date&quot;: next_reminder_date, }, ) logger.info(f'record {record}') </code></pre> <p>And the service:</p> <pre><code>async def update_reminder_sent_at( self, training_id: UUID, next_reminder_date ): await self.repository.update_reminder_sent_at(training_id, next_reminder_date) </code></pre> <p>Then finally called in a Celery task:</p> <pre><code>@app_celery.task def task_update( training_id: UUID, user_id: UUID, tenant_id: UUID ): &quot;&quot;&quot;Send email reminders for a specific training and user.&quot;&quot;&quot; async def send_reminder(training_id: UUID, user_id: UUID, tenant_id: UUID): next_reminder_date = datetime.utcnow() + timedelta( days=training.reminder.schedule_in_days ) await training_service.update_reminder_sent_at( training_id, next_reminder_date ) asyncio.run(send_reminder(training_id, user_id, tenant_id)) </code></pre> <p>So when the repository logs via <code>logger.info(f'record {record}')</code>, I get something like this:</p> <pre><code>celery_worker | 2024-06-15 12:00:00.388 |INFO | api.trainings.repository:update_reminder_sent_at:163 - record d1d33ada-1e98-40e7-982c-f553b09bcaa0 </code></pre> <p>Surprisingly, checking the db, there is no effect on the field. 
(The field didn't update at all).</p> <pre><code>laas_api=# select id, reminder_sent_at from trainings laas_api-# ; id | reminder_sent_at --------------------------------------+------------------ b8b6b8d3-20ca-4ced-9720-4b6585f116d9 | 3f9e08af-dacf-4979-97be-bbe3f86c5979 | fba1a12d-a8d9-418d-851a-a55e29c02996 | 679596a0-264c-4dad-aca2-37c197534621 | 24d3469d-bfa7-43f0-95b9-996105af6df0 | 45213723-0711-4d86-97ef-d9c7528bcebd | d1d33ada-1e98-40e7-982c-f553b09bcaa0 | (7 rows) </code></pre> <h2>Note</h2> <p>Here's how my db is declared:</p> <pre><code>@asynccontextmanager async def get_db(): database = Database( settings.database_url, force_rollback=True, min_size=3, max_size=20 ) await database.connect() try: yield database finally: await database.disconnect() </code></pre>
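One connection-setup detail worth double-checking, since the RETURNING round-trip succeeds but nothing persists: in the encode/databases package shown in the note, `force_rollback=True` runs the whole connection inside one transaction that is rolled back at disconnect. It exists for test isolation, and it would make an UPDATE look successful while discarding it. A non-runnable sketch of the same context manager without that flag (everything else unchanged):

```python
@asynccontextmanager
async def get_db():
    database = Database(
        settings.database_url,
        # force_rollback=True (removed here) rolls every write back on
        # disconnect -- intended for tests, not for Celery workers.
        min_size=3,
        max_size=20,
    )
    await database.connect()
    try:
        yield database
    finally:
        await database.disconnect()
```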
<python><postgresql><celery><fastapi><celerybeat>
2024-06-15 12:04:14
1
4,030
Lutaaya Huzaifah Idris
78,626,515
1,473,517
What exactly is slowing np.sum down?
<p>It is known that np.sum(arr) is quite a lot slower than arr.sum(). For example:</p> <pre><code>import numpy as np np.random.seed(7) A = np.random.random(1000) %timeit np.sum(A) 2.94 µs ± 13.8 ns per loop (mean ± std. dev. of 7 runs, 100,000 loops each) %timeit A.sum() 1.8 µs ± 40.8 ns per loop (mean ± std. dev. of 7 runs, 1,000,000 loops each) </code></pre> <p>Can anyone give a detailed code-based explanation of what np.sum(arr) is doing that arr.sum() is not?</p> <p>The difference is insignificant for much longer arrays. But it is relatively significant for arrays of length 1000 or less, for example.</p> <p>In my code I do millions of array sums so the difference is particularly significant.</p>
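A sketch of where the gap comes from (assuming a recent NumPy; the internals named in the comments follow numpy's own source layout): all three spellings end in the same C reduction, so the per-call difference is pure Python-level wrapper overhead, which is fixed cost and therefore only visible on small arrays.

```python
import numpy as np

A = np.random.default_rng(7).random(1000)

# np.sum is a Python-level wrapper (numpy/core/fromnumeric.py): it runs
# the __array_function__ dispatch, keyword normalization and a
# _wrapreduction helper before reaching the same C reduction that
# A.sum() calls directly.
assert np.sum(A) == A.sum()

# np.add.reduce skips most of that wrapper, so it times close to A.sum():
assert np.add.reduce(A) == A.sum()
```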
<python><numpy>
2024-06-15 11:53:44
2
21,513
Simd
78,626,499
5,836,440
Confusion regarding functions to generate random numbers in numpy
<p>I am confused primarily on two points:</p> <ol> <li><p>Why are there different random number generator functions in numpy that seemingly produce the same thing? e.g. np.random.rand() vs np.random.uniform(); or np.random.randn(), np.random.normal(), np.random.standard_normal() [I mean, why is the standard_normal() function even a thing? It is longer to type than specifying loc=0 and scale=1 in np.random.normal() ???]</p> </li> <li><p>What is the point of defining a random number generator at the beginning of a program, rng = np.random.default_rng(), and then calling rng.normal(), rng.random() for each instance one wants to generate a random number, rather than using np.random.rand() and np.random.normal() etc. in each case separately?</p> </li> </ol> <p>My idea is that it has something to do with how the random numbers are generated for both points, but I'm not at all sure.</p> <p>Many thanks.</p>
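On point 2, the practical difference is that a `Generator` from `default_rng` is an explicit, isolated stream (PCG64 by default) that you own, whereas the `np.random.*` functions all mutate one hidden global legacy `RandomState` (MT19937); `rand`/`randn`/`standard_normal` are historic MATLAB-style conveniences for `uniform(0, 1)` and `normal(loc=0, scale=1)` on that global state. A small sketch of the isolation:

```python
import numpy as np

rng = np.random.default_rng(42)
a = rng.normal(size=3)
b = np.random.default_rng(42).normal(size=3)
assert np.array_equal(a, b)   # same seed -> same stream, reproducibly

np.random.seed(0)             # touching the legacy global state...
c = np.random.default_rng(42).normal(size=3)
assert np.array_equal(a, c)   # ...does not perturb a Generator's stream
```

That isolation is what makes results reproducible even when imported libraries also draw from `np.random`.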
<python><numpy><random><random-seed>
2024-06-15 11:42:53
1
403
Meep
78,626,489
11,022,471
Spatially mapping connections in python
<p>I am trying to plot trade routes of commodities from one country to another in Python using <code>plotly</code>. I want the connections to transition from red at the origin to blue at the destination.</p> <p>However, the connections appear broken (non-straight lines) when I try to interpolate the colors. This could also be due to the map projection I am using. Can someone suggest how to achieve this with smooth, straight lines?</p> <pre class="lang-py prettyprint-override"><code>import numpy as np import plotly.graph_objects as go import matplotlib.colors as mcolors # Function to interpolate between two points def interpolate_points(lon1, lat1, lon2, lat2, num_points=10): lons = np.linspace(lon1, lon2, num_points) lats = np.linspace(lat1, lat2, num_points) return lons, lats # Create the figure fig = go.Figure() num_segments = 5 # Number of segments to divide each line into colors = [mcolors.to_hex(c) for c in np.linspace(mcolors.to_rgba('red'), mcolors.to_rgba('blue'), num_segments)] for i in range(len(Def_trade_Kastner)): lon1, lat1 = Def_trade_Kastner['start_lon'][i], Def_trade_Kastner['start_lat'][i] lon2, lat2 = Def_trade_Kastner['end_lon'][i], Def_trade_Kastner['end_lat'][i] lons, lats = interpolate_points(lon1, lat1, lon2, lat2, num_points=num_segments) for j in range(num_segments - 1): fig.add_trace( go.Scattergeo( lon=[lons[j], lons[j + 1]], lat=[lats[j], lats[j + 1]], mode='lines', line=dict(width=Def_trade_Kastner['Relative Emissions'][i], color=colors[j]), showlegend=False ) ) fig.update_layout( title_text='Deforestation', showlegend=False, geo=dict( visible=False, showcountries=True, projection_type='equirectangular', landcolor='rgb(243, 243, 243)', countrycolor='rgb(204, 204, 204)', ), ) fig.update_layout( autosize=False, width=1200, height=700, ) fig.show() </code></pre> <p>Longer connections are the ones where such trends are most apparent. E.g., 'Brazil to China' or 'USA to Indonesia'. 
Please see below: <a href="https://i.sstatic.net/LR53wL1d.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/LR53wL1d.png" alt="enter image description here" /></a></p> <p>You can use the following demo DataFrame points to check connections:</p> <pre class="lang-py prettyprint-override"><code>import pandas as pd Def_trade_Kastner = pd.DataFrame({ 'Producer country': ['Brazil', 'Indonesia'], 'Consumer country': ['China', 'United States'], 'Relative Emissions': [10, 2.047066], 'start_lat': [-10.333333, -2.483383], 'start_lon': [-53.200000, 117.890285], 'end_lat': [35.000074, 39.78373], 'end_lon': [104.999927, -100.445882] }) </code></pre>
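As a sanity check, the `interpolate_points` helper can be exercised on its own with the Brazil→China coordinates from the demo DataFrame (a sketch; the rendering question itself is separate):

```python
import numpy as np

def interpolate_points(lon1, lat1, lon2, lat2, num_points=10):
    # Evenly spaced waypoints along a straight line in lon/lat space.
    lons = np.linspace(lon1, lon2, num_points)
    lats = np.linspace(lat1, lat2, num_points)
    return lons, lats

# Brazil -> China, taken from the demo DataFrame above.
lons, lats = interpolate_points(-53.2, -10.333333, 104.999927, 35.000074, num_points=5)

# Endpoints are preserved and the longitudes increase monotonically,
# i.e. the waypoints themselves describe a straight, unbroken path.
assert np.isclose(lons[0], -53.2) and np.isclose(lons[-1], 104.999927)
assert np.all(np.diff(lons) > 0)
print(list(np.round(lons, 2)), list(np.round(lats, 2)))
```

Since the waypoints are collinear in lon/lat space, any break in the drawn route comes from how the segments are rendered on the projected map, not from the interpolation itself.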
<python><plotly><line-plot>
2024-06-15 11:38:23
0
743
Ep1c1aN
78,626,342
8,110,961
Python Add datetime column in dataframe from an existing column
<p>I have data in a CSV file as below:</p> <pre><code>v,vw,o,c,h,l,t,n 18043.0,374.411,374.69,374.99,374.99,373.8,1656662400000,157 12003.0,375.6296,375.15,375.84,375.9,374.95,1656663300000,98 18426.0,376.0636,375.98,376.02,376.29,375.63,1656664200000,88 4700.0,376.0772,375.88,376.11,376.34,375.85,1656665100000,43 27969.0,376.5703,376.11,376.56,376.92,375.82,1656666000000,135 17922.0,376.7123,376.69,376.48,376.89,376.46,1656666900000,95 11805.0,376.5813,376.6,376.38,376.71,376.38,1656667800000,74 19888.0,376.9877,376.28,377.11,377.2,376.28,1656668700000,100 7853.0,376.7016,377.25,376.66,377.25,376.48,1656669600000,67 36560.0,377.3454,376.69,377.05,377.8,376.69,1656670500000,175 10862.0,376.354,376.74,376.06,376.74,376.06,1656671400000,92 14740.0,375.8719,376.09,375.74,376.09,375.71,1656672300000,126 78885.0,375.9584,375.88,375.901,376.14,375.71,1656673200000,628 43363.0,376.0552,375.8,376.31,376.4,375.68,1656674100000,277 ... </code></pre> <p>where column <code>t</code> holds a UTC Unix timestamp in milliseconds. I would like to add a column to the dataframe with the timestamp in EST. Is there a way to achieve this without looping over each row? I tried a few things but I get an error similar to</p> <blockquote> <p>TypeError: cannot convert the series to &lt;class 'int'&gt;</p> </blockquote>
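One hedged sketch of a fully vectorized conversion (no per-row loop): parse the epoch milliseconds as tz-aware UTC, then convert the whole column at once. The `US/Eastern` zone switches between EST and EDT automatically:

```python
import pandas as pd

# First two t values from the sample data.
df = pd.DataFrame({'t': [1656662400000, 1656663300000]})

# unit='ms' interprets the integers as epoch milliseconds;
# utc=True makes the result tz-aware so tz_convert is valid.
df['t_est'] = pd.to_datetime(df['t'], unit='ms', utc=True).dt.tz_convert('US/Eastern')

print(df['t_est'].iloc[0])  # → 2022-07-01 04:00:00-04:00
```

The `TypeError` in the question typically arises from applying a scalar conversion such as `int(...)` or `datetime.fromtimestamp(...)` to the whole Series instead of using vectorized `pd.to_datetime`.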
<python><pandas><dataframe>
2024-06-15 10:37:31
1
385
Jack
78,626,196
1,516,331
If sending multiple HTTP GET requests using for loop, why is aiohttp still faster than requests?
<p>I am following the code on this page: <a href="https://www.twilio.com/en-us/blog/asynchronous-http-requests-in-python-with-aiohttp" rel="nofollow noreferrer">https://www.twilio.com/en-us/blog/asynchronous-http-requests-in-python-with-aiohttp</a>. Here is the first approach using <code>aiohttp</code>:</p> <pre class="lang-py prettyprint-override"><code>import aiohttp import asyncio import time start_time = time.time() async def main(): async with aiohttp.ClientSession() as session: for num in range(1, 10): pokemon_url = f'https://pokeapi.co/api/v2/pokemon/{num}' async with session.get(pokemon_url) as response: pokemon_json = await response.json() print(pokemon_json['name'], num) asyncio.run(main()) print(f'{time.time() - start_time} seconds') </code></pre> <p>If I understand it correctly, the for loop executes <code>session.get</code> synchronously, so the GET requests are sent out sequentially. In each iteration of the for loop, the <code>await</code> keyword causes the <code>main</code> coroutine to pause; the event loop has no other coroutine to execute, so all it does is wait for <code>response.json()</code> to return the result. Asynchronous programming is not giving any extra performance improvement here.</p> <p>Approach 2 is the regular approach using <code>requests</code>. It is also executed synchronously:</p> <pre class="lang-py prettyprint-override"><code>import requests import time start_time = time.time() for num in range(1, 10): url = f'https://pokeapi.co/api/v2/pokemon/{num}' response = requests.get(url) pokemon = response.json() print(pokemon['name']) print(f'{time.time() - start_time} seconds') </code></pre> <p>Approach 1 took 3.581976890563965 seconds; approach 2 took 8.428680419921875 seconds. <strong>Since both execute GET requests sequentially</strong>, why is approach 1 using <code>aiohttp</code> still faster than approach 2 using <code>requests</code>?</p>
<python><asynchronous><async-await><coroutine><aiohttp>
2024-06-15 09:30:04
1
3,190
CyberPlayerOne
78,626,109
7,978,112
Converting dict to dataclass type dynamically, and an instance back to a dict, results in empty dict
<p>I have a nested dictionary.</p> <pre class="lang-py prettyprint-override"><code>stock_price = {'lastPrice': 129.1, 'open': 126.57, 'close': 0, 'intraDayHighLow': {'min': 126.4, 'max': 129.55, 'value': 129.1}, 'weekHighLow': {'min': 49.7, 'minDate': '26-Jun-2023', 'max': 142.9, 'maxDate': '30-Apr-2024', 'value': 129.1}, } </code></pre> <p>...which is converted to a dataclass, as follows.</p> <pre class="lang-py prettyprint-override"><code>from dataclasses import dataclass def create_dataclass_from_dict(data_dict: dict, class_name:str): fields = {} for key, value in data_dict.items(): if isinstance(value, dict): nested_class_name = key.capitalize() nested_class = create_dataclass_from_dict(value, class_name=nested_class_name) fields[key] = nested_class else: fields[key] = value return dataclass(type(class_name, (), fields)) </code></pre> <p>...but when I inspect the <code>__dict__</code> of the instance, it shows an empty dictionary...</p> <pre><code>Stock = create_dataclass_from_dict(stock_price, 'Stock') my_stock = Stock() my_stock.weekHighLow.minDate # gives '26-Jun-2023' as intended. Want this!! my_stock.__dict__ # gives {} </code></pre> <p>...and so I am not able to convert it back to a dictionary using the following code:</p> <pre class="lang-py prettyprint-override"><code>import json def to_dict(obj): return json.loads(json.dumps(obj, default=lambda o: o.__dict__)) to_dict(my_stock) # gives {} </code></pre> <p>Where am I going wrong? Why is the <code>.__dict__</code> empty?</p>
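For context on why `__dict__` comes back empty: `type(class_name, (), fields)` stores every value as a *class* attribute, and class attributes never appear in an instance's `__dict__`. A minimal reproduction with a plain class:

```python
# Equivalent to type('Demo', (), {'x': 1}) inside create_dataclass_from_dict.
class Demo:
    x = 1  # lives on the class object, not on instances

d = Demo()
assert d.x == 1                 # lookup falls back from instance to class
assert d.__dict__ == {}         # the instance itself stores nothing
assert Demo.__dict__['x'] == 1  # the value is found on the class

# A json default of lambda o: o.__dict__ therefore serializes d as {}.
```

Note also that `@dataclass` only promotes *annotated* attributes to fields, and the dynamically built class has no `__annotations__`, so wrapping it in `dataclass(...)` does not move anything onto the instance either.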
<python><dictionary><python-dataclasses>
2024-06-15 08:52:40
2
1,847
reservoirinvest