Column             | Type        | Min                 | Max
-------------------+-------------+---------------------+--------------------
QuestionId         | int64       | 74.8M               | 79.8M
UserId             | int64       | 56                  | 29.4M
QuestionTitle      | string      | 15 chars            | 150 chars
QuestionBody       | string      | 40 chars            | 40.3k chars
Tags               | string      | 8 chars             | 101 chars
CreationDate       | date string | 2022-12-10 09:42:47 | 2025-11-01 19:08:18
AnswerCount        | int64       | 0                   | 44
UserExpertiseLevel | int64       | 301                 | 888k
UserDisplayName    | string      | 3 chars             | 30 chars
77,290,072
284,932
How to use transformers.Trainer on Windows without conda?
<p>I am trying to use the Trainer class from the transformers module on Windows 10, with Python 3.10, CUDA 12.1, and <em>all modules installed using</em> <strong>pip</strong>.</p> <p>nvcc --version :</p> <pre><code>nvcc: NVIDIA (R) Cuda compiler driver Copyright (c) 2005-2023 NVIDIA Corporation Built on Wed_Feb__8_05:53:42_Coordinated_Universal_Time_2023 Cuda compilation tools, release 12.1, V12.1.66 Build cuda_12.1.r12.1/compiler.32415258_0 </code></pre> <p>cuda available:</p> <pre><code>torch.cuda.is_available() #True </code></pre> <p>But when I tried to import Trainer:</p> <pre><code>from transformers import Trainer </code></pre> <p>I got a huge error log:</p> <pre><code>CUDA_SETUP: WARNING! libcudart.so not found in any environmental path. Searching in backup paths... The following directories listed in your path were found to be non-existent: {WindowsPath('/usr/local/cuda/lib64')} DEBUG: Possible options found for libcudart.so: set() CUDA SETUP: PyTorch settings found: CUDA_VERSION=121, Highest Compute Capability: 8.6. CUDA SETUP: To manually override the PyTorch CUDA version please see:https://github.com/TimDettmers/bitsandbytes/blob/main/how_to_use_nonpytorch_cuda.md CUDA SETUP: Loading binary D:\Program Files\Python310\lib\site-packages\bitsandbytes\libbitsandbytes_cuda121.so... argument of type 'WindowsPath' is not iterable CUDA SETUP: Problem: The main issue seems to be that the main CUDA runtime library was not detected. 
CUDA SETUP: Solution 1: To solve the issue the libcudart.so location needs to be added to the LD_LIBRARY_PATH variable CUDA SETUP: Solution 1a): Find the cuda runtime library via: find / -name libcudart.so 2&gt;/dev/null CUDA SETUP: Solution 1b): Once the library is found add it to the LD_LIBRARY_PATH: export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:FOUND_PATH_FROM_1a CUDA SETUP: Solution 1c): For a permanent solution add the export from 1b into your .bashrc file, located at ~/.bashrc CUDA SETUP: Solution 2: If no library was found in step 1a) you need to install CUDA. CUDA SETUP: Solution 2a): Download CUDA install script: wget https://github.com/TimDettmers/bitsandbytes/blob/main/cuda_install.sh CUDA SETUP: Solution 2b): Install desired CUDA version to desired location. The syntax is bash cuda_install.sh CUDA_VERSION PATH_TO_INSTALL_INTO. CUDA SETUP: Solution 2b): For example, &quot;bash cuda_install.sh 113 ~/local/&quot; will download CUDA 11.3 and install into the folder ~/local </code></pre> <p>and</p> <pre><code>RuntimeError: Failed to import transformers.trainer because of the following error (look up to see its traceback): CUDA Setup failed despite GPU being available. Please run the following command to get more information: python -m bitsandbytes Inspect the output of the command and see if you can locate CUDA libraries. You might need to add them to your LD_LIBRARY_PATH. 
If you suspect a bug, please take the information from python -m bitsandbytes and open an issue at: https://github.com/TimDettmers/bitsandbytes/issues </code></pre> <p>While debugging the <strong>bitsandbytes</strong> *.py files, I found that in <code>cuda_setup</code> there is a module called &quot;env_vars&quot;:</p> <p><a href="https://github.com/TimDettmers/bitsandbytes/blob/main/bitsandbytes/cuda_setup/env_vars.py" rel="nofollow noreferrer">https://github.com/TimDettmers/bitsandbytes/blob/main/bitsandbytes/cuda_setup/env_vars.py</a></p> <p>Apparently it only works with conda; is that right?</p> <p>So is there any workaround to make this work?</p>
<python><huggingface-transformers><huggingface-trainer>
2023-10-13 18:52:49
0
474
celsowm
77,290,012
1,925,652
How can pip install packages which are missing from its index?
<p>According to Chris' answer (most upvotes) to <a href="https://stackoverflow.com/questions/4888027/python-and-pip-list-all-versions-of-a-package-thats-available/26664162#26664162">this question</a>, <code>pip index versions package_name</code> is a reliable way to find the versions available for a given package.</p> <p>But I discovered something peculiar... It seems that <code>pip install</code> <strong>works</strong> for packages like <code>tb-nightly</code> &amp; <code>tf-nightly-macos</code>. Yet when I try <code>pip index versions tf-nightly-macos</code> or <code>pip index versions tb-nightly</code> then it tells me <code>ERROR: No matching distribution found for tb-nightly</code>.</p> <p>How can pip download/install packages which are missing from its index? And moreover is there a partial/full fix to see package versions for all/more packages which are visible for install/download by pip?</p> <p>P.S. <strong>My default index url is:</strong> <code>https://pypi.org/simple</code>. Also system info is: MacOS Ventura 13.4.1, M1 chips, python 3.9.13, (but likely applies elsewhere too).</p>
<python><tensorflow><pip>
2023-10-13 18:38:23
1
521
profPlum
77,290,003
5,246,617
Segmentation Fault when Using SentenceTransformer Inside Docker Container
<p>Edit: After testing on different machines, this appears to be an Apple M1/M2-specific bug.</p> <p>I am trying to run a Flask application inside a Docker container on Apple Silicon M2 (which could be the issue), where I use the SentenceTransformer model to encode sentences. However, when I call the encode method on the model, the application crashes with a segmentation fault.</p> <p>Here's the relevant code:</p> <pre class="lang-py prettyprint-override"><code>from sentence_transformers import SentenceTransformer model = SentenceTransformer('all-MiniLM-L6-v2') #Our sentences we like to encode sentences = ['This framework generates embeddings for each input sentence', 'Sentences are passed as a list of string.', 'The quick brown fox jumps over the lazy dog.'] #Sentences are encoded by calling model.encode() sentence_embeddings = model.encode(sentences) </code></pre> <p>Source: <a href="https://www.sbert.net/docs/quickstart.html" rel="noreferrer">https://www.sbert.net/docs/quickstart.html</a></p> <p>After enabling <code>faulthandler</code> via <code>import faulthandler</code> and <code>faulthandler.enable()</code>, the error traceback is:</p> <pre><code>Fatal Python error: Segmentation fault Thread 0x0000ffff640ff1a0 (most recent call first): File &quot;/usr/local/lib/python3.11/threading.py&quot;, line 331 in wait File &quot;/usr/local/lib/python3.11/threading.py&quot;, line 629 in wait File &quot;/usr/local/lib/python3.11/site-packages/tqdm/_monitor.py&quot;, line 60 in run File &quot;/usr/local/lib/python3.11/threading.py&quot;, line 1045 in _bootstrap_inner File &quot;/usr/local/lib/python3.11/threading.py&quot;, line 1002 in _bootstrap Current thread 0x0000ffffa6814020 (most recent call first): File &quot;/usr/local/lib/python3.11/site-packages/transformers/activations.py&quot;, line 78 in forward File &quot;/usr/local/lib/python3.11/site-packages/torch/nn/modules/module.py&quot;, line 1527 in _call_impl File &quot;/usr/local/lib/python3.11/site-packages/torch/nn/modules/module.py&quot;, line 1518 in _wrapped_call_impl 
File &quot;/usr/local/lib/python3.11/site-packages/transformers/models/bert/modeling_bert.py&quot;, line 452 in forward File &quot;/usr/local/lib/python3.11/site-packages/torch/nn/modules/module.py&quot;, line 1527 in _call_impl File &quot;/usr/local/lib/python3.11/site-packages/torch/nn/modules/module.py&quot;, line 1518 in _wrapped_call_impl File &quot;/usr/local/lib/python3.11/site-packages/transformers/models/bert/modeling_bert.py&quot;, line 551 in feed_forward_chunk File &quot;/usr/local/lib/python3.11/site-packages/transformers/pytorch_utils.py&quot;, line 240 in apply_chunking_to_forward File &quot;/usr/local/lib/python3.11/site-packages/transformers/models/bert/modeling_bert.py&quot;, line 539 in forward File &quot;/usr/local/lib/python3.11/site-packages/torch/nn/modules/module.py&quot;, line 1527 in _call_impl File &quot;/usr/local/lib/python3.11/site-packages/torch/nn/modules/module.py&quot;, line 1518 in _wrapped_call_impl File &quot;/usr/local/lib/python3.11/site-packages/transformers/models/bert/modeling_bert.py&quot;, line 612 in forward File &quot;/usr/local/lib/python3.11/site-packages/torch/nn/modules/module.py&quot;, line 1527 in _call_impl File &quot;/usr/local/lib/python3.11/site-packages/torch/nn/modules/module.py&quot;, line 1518 in _wrapped_call_impl File &quot;/usr/local/lib/python3.11/site-packages/transformers/models/bert/modeling_bert.py&quot;, line 1022 in forward File &quot;/usr/local/lib/python3.11/site-packages/torch/nn/modules/module.py&quot;, line 1527 in _call_impl File &quot;/usr/local/lib/python3.11/site-packages/torch/nn/modules/module.py&quot;, line 1518 in _wrapped_call_impl File &quot;/usr/local/lib/python3.11/site-packages/sentence_transformers/models/Transformer.py&quot;, line 66 in forward File &quot;/usr/local/lib/python3.11/site-packages/torch/nn/modules/module.py&quot;, line 1527 in _call_impl File &quot;/usr/local/lib/python3.11/site-packages/torch/nn/modules/module.py&quot;, line 1518 in _wrapped_call_impl File 
&quot;/usr/local/lib/python3.11/site-packages/torch/nn/modules/container.py&quot;, line 215 in forward File &quot;/usr/local/lib/python3.11/site-packages/sentence_transformers/SentenceTransformer.py&quot;, line 165 in encode File &quot;&lt;stdin&gt;&quot;, line 1 in &lt;module&gt; Extension modules: numpy.core._multiarray_umath, numpy.core._multiarray_tests, numpy.linalg._umath_linalg, numpy.fft._pocketfft_internal, numpy.random._common, numpy.random.bit_generator, numpy.random._bounded_integers, numpy.random._mt19937, numpy.random.mtrand, numpy.random._philox, numpy.random._pcg64, numpy.random._sfc64, numpy.random._generator, torch._C, torch._C._fft, torch._C._linalg, torch._C._nested, torch._C._nn, torch._C._sparse, torch._C._special, scipy._lib._ccallback_c, scipy.sparse._sparsetools, _csparsetools, scipy.sparse._csparsetools, scipy.sparse.linalg._isolve._iterative, scipy.linalg._fblas, scipy.linalg._flapack, scipy.linalg.cython_lapack, scipy.linalg._cythonized_array_utils, scipy.linalg._solve_toeplitz, scipy.linalg._decomp_lu_cython, scipy.linalg._matfuncs_sqrtm_triu, scipy.linalg.cython_blas, scipy.linalg._matfuncs_expm, scipy.linalg._decomp_update, scipy.linalg._flinalg, scipy.sparse.linalg._dsolve._superlu, scipy.sparse.linalg._eigen.arpack._arpack, scipy.sparse.csgraph._tools, scipy.sparse.csgraph._shortest_path, scipy.sparse.csgraph._traversal, scipy.sparse.csgraph._min_spanning_tree, scipy.sparse.csgraph._flow, scipy.sparse.csgraph._matching, scipy.sparse.csgraph._reordering, scipy.spatial._ckdtree, scipy._lib.messagestream, scipy.spatial._qhull, scipy.spatial._voronoi, scipy.spatial._distance_wrap, scipy.spatial._hausdorff, scipy.special._ufuncs_cxx, scipy.special._ufuncs, scipy.special._specfun, scipy.special._comb, scipy.special._ellip_harm_2, scipy.spatial.transform._rotation, scipy.ndimage._nd_image, _ni_label, scipy.ndimage._ni_label, scipy.optimize._minpack2, scipy.optimize._group_columns, scipy.optimize._trlib._trlib, scipy.optimize._lbfgsb, 
_moduleTNC, scipy.optimize._moduleTNC, scipy.optimize._cobyla, scipy.optimize._slsqp, scipy.optimize._minpack, scipy.optimize._lsq.givens_elimination, scipy.optimize._zeros, scipy.optimize.__nnls, scipy.optimize._highs.cython.src._highs_wrapper, scipy.optimize._highs._highs_wrapper, scipy.optimize._highs.cython.src._highs_constants, scipy.optimize._highs._highs_constants, scipy.linalg._interpolative, scipy.optimize._bglu_dense, scipy.optimize._lsap, scipy.optimize._direct, scipy.integrate._odepack, scipy.integrate._quadpack, scipy.integrate._vode, scipy.integrate._dop, scipy.integrate._lsoda, scipy.special.cython_special, scipy.stats._stats, scipy.stats.beta_ufunc, scipy.stats._boost.beta_ufunc, scipy.stats.binom_ufunc, scipy.stats._boost.binom_ufunc, scipy.stats.nbinom_ufunc, scipy.stats._boost.nbinom_ufunc, scipy.stats.hypergeom_ufunc, scipy.stats._boost.hypergeom_ufunc, scipy.stats.ncf_ufunc, scipy.stats._boost.ncf_ufunc, scipy.stats.ncx2_ufunc, scipy.stats._boost.ncx2_ufunc, scipy.stats.nct_ufunc, scipy.stats._boost.nct_ufunc, scipy.stats.skewnorm_ufunc, scipy.stats._boost.skewnorm_ufunc, scipy.stats.invgauss_ufunc, scipy.stats._boost.invgauss_ufunc, scipy.interpolate._fitpack, scipy.interpolate.dfitpack, scipy.interpolate._bspl, scipy.interpolate._ppoly, scipy.interpolate.interpnd, scipy.interpolate._rbfinterp_pythran, scipy.interpolate._rgi_cython, scipy.stats._biasedurn, scipy.stats._levy_stable.levyst, scipy.stats._stats_pythran, scipy._lib._uarray._uarray, scipy.stats._statlib, scipy.stats._sobol, scipy.stats._qmc_cy, scipy.stats._mvn, scipy.stats._rcont.rcont, regex._regex, sklearn.__check_build._check_build, sklearn.utils._isfinite, sklearn.utils.murmurhash, sklearn.utils._openmp_helpers, sklearn.utils._logistic_sigmoid, sklearn.utils.sparsefuncs_fast, sklearn.preprocessing._csr_polynomial_expansion, sklearn.preprocessing._target_encoder_fast, sklearn.utils._vector_sentinel, sklearn.feature_extraction._hashing_fast, sklearn.utils._random, 
sklearn.utils._seq_dataset, sklearn.metrics.cluster._expected_mutual_info_fast, sklearn.metrics._dist_metrics, sklearn.metrics._pairwise_distances_reduction._datasets_pair, sklearn.utils._cython_blas, sklearn.metrics._pairwise_distances_reduction._base, sklearn.metrics._pairwise_distances_reduction._middle_term_computer, sklearn.utils._heap, sklearn.utils._sorting, sklearn.metrics._pairwise_distances_reduction._argkmin, sklearn.metrics._pairwise_distances_reduction._argkmin_classmode, sklearn.metrics._pairwise_distances_reduction._radius_neighbors, sklearn.metrics._pairwise_fast, sklearn.linear_model._cd_fast, sklearn._loss._loss, sklearn.utils.arrayfuncs, sklearn.svm._liblinear, sklearn.svm._libsvm, sklearn.svm._libsvm_sparse, sklearn.utils._weight_vector, sklearn.linear_model._sgd_fast, sklearn.linear_model._sag_fast, scipy.io.matlab._mio_utils, scipy.io.matlab._streams, scipy.io.matlab._mio5_utils, sklearn.datasets._svmlight_format_fast, charset_normalizer.md, yaml._yaml, sentencepiece._sentencepiece, PIL._imaging (total: 163) Segmentation fault </code></pre> <p>Some points:</p> <p>The Docker container has ample memory allocated. I've tried updating the libraries (torch, transformers, and sentence-transformers). The same code works perfectly outside the Docker environment. 
How can I resolve this segmentation fault when running the code inside Docker?</p> <p>Here is a pip list of actual versions.</p> <pre><code>Package Version --------------------- --------- blinker 1.6.3 certifi 2023.7.22 charset-normalizer 3.3.0 click 8.1.7 filelock 3.12.4 Flask 3.0.0 fsspec 2023.9.2 huggingface-hub 0.17.3 idna 3.4 itsdangerous 2.1.2 Jinja2 3.1.2 joblib 1.3.2 MarkupSafe 2.1.3 mpmath 1.3.0 networkx 3.1 nltk 3.8.1 numpy 1.26.0 packaging 23.2 Pillow 10.0.1 pip 23.2.1 PyYAML 6.0.1 regex 2023.10.3 requests 2.31.0 safetensors 0.4.0 scikit-learn 1.3.1 scipy 1.11.3 sentence-transformers 2.2.2 sentencepiece 0.1.99 setuptools 65.5.1 sympy 1.12 threadpoolctl 3.2.0 tokenizers 0.14.1 torch 2.1.0 torchvision 0.16.0 tqdm 4.66.1 transformers 4.34.0 typing_extensions 4.8.0 urllib3 2.0.6 Werkzeug 3.0.0 wheel 0.41.2 </code></pre> <p>For more clarification, here is the most important part of my Dockerfile.</p> <pre><code>FROM python:3.11 RUN pip install --upgrade pip RUN pip install Flask==3.0.0 sentence-transformers==2.2.2 </code></pre>
<python><docker><apple-m1><sentence-transformers>
2023-10-13 18:36:32
4
1,093
Pavol Travnik
77,289,982
13,227,516
Python asyncssh equivalent of paramiko Channel to display SSH prompt
<p>I'm trying to establish an interactive SSH shell using asyncssh. So far I have the following code:</p> <pre><code>import asyncio, asyncssh, sys async def run_client(): async with asyncssh.connect( HOST, port=PORT, username=USER, password=PASSWORD, known_hosts=None) as conn: stdin, stdout, stderr = await conn.open_session() welcome = await stdout.readline() print(welcome) loop = asyncio.get_event_loop() try: loop.run_until_complete(run_client()) except (OSError, asyncssh.Error) as exc: sys.exit('SSH connection failed: ' + str(exc)) </code></pre> <p>The connection is established successfully, but I can't get the created session to send any data (it awaits forever). Apart from the lack of interactivity, I got the desired behavior with the following code using paramiko:</p> <pre><code>import time from paramiko import SSHClient, AutoAddPolicy with SSHClient() as client: client.set_missing_host_key_policy(AutoAddPolicy()) client.connect(hostname, port, username, password) with client.invoke_shell() as chan: chan.settimeout(2) ## wait until some data is available while not chan.recv_ready(): time.sleep(0.3) ## get the data welcome = b&quot;&quot; while chan.recv_ready(): welcome += chan.recv(2**13) time.sleep(0.3) print(welcome.decode(&quot;utf-8&quot;)) </code></pre> <p>Is there a way to do the same using asyncssh? Basically I want to get a shell prompt displayed on the python console, and have it responding to writes on stdin.</p>
<python><asyncssh>
2023-10-13 18:31:46
1
315
johnc
77,289,653
11,922,765
Combine two lists alternately
<p>I am doing a simple list comprehension: combining two lists in alternating order to make another list.</p> <pre><code>big_list = [i,j for i,j in zip(['2022-01-01','2022-02-01'],['2022-01-31','2022-02-28'])] </code></pre> <p>Expected output:</p> <pre><code>['2022-01-01','2022-01-31','2022-02-01','2022-02-28'] </code></pre> <p>Present output:</p> <pre><code>Cell In[35], line 1 [i,j for i,j in zip(['2022-01-01','2022-02-01'],['2022-01-31','2022-02-28'])] ^ SyntaxError: did you forget parentheses around the comprehension target? </code></pre>
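For context, the `SyntaxError` arises because a comprehension target like `i,j` must be parenthesized, and even then `[(i, j) for ...]` would yield a list of tuples rather than a flat list. A minimal sketch of two common ways to get the flat, alternating result (the variable names here are illustrative, not from the question):

```python
from itertools import chain

starts = ['2022-01-01', '2022-02-01']
ends = ['2022-01-31', '2022-02-28']

# Nested comprehension: iterate over the zipped pairs, then over each pair.
big_list = [d for pair in zip(starts, ends) for d in pair]

# Equivalent: flatten the pairs with itertools.chain.
big_list_chain = list(chain.from_iterable(zip(starts, ends)))
```

Both produce `['2022-01-01', '2022-01-31', '2022-02-01', '2022-02-28']`.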
<python><python-3.x><list><list-comprehension><python-zip>
2023-10-13 17:27:12
5
4,702
Mainland
77,289,577
2,612,592
Can't run test due to ImportError
<p>I have the following project structure:</p> <pre><code>| .gitignore | README.md | call_test_outside.py | main.py | setup.cfg | setup.py |- src |- PropTools | __init__.py | PropTools | __init__.py | SQLiteWrapper.py | dates.py | prices.py |- transactions.py | config | __init__.py |- settings.py |- tests | __init__.py |- transactions_tests.py </code></pre> <p><strong>Problem:</strong> When I run <code>transactions_tests.py</code> inside the <code>tests</code> folder, I get an <code>ImportError</code>:</p> <pre><code>from ..PropTools.transactions import TransactionType, InstrumentType, get_transactions ImportError: attempted relative import with no known parent package </code></pre> <p>But if I do the same from <code>call_test_outside.py</code> in the root folder, it works.</p> <h1>Codes:</h1> <p><code>transactions_tests.py</code> code (in the tests folder):</p> <pre><code>from ..PropTools.transactions import TransactionType, InstrumentType, get_transactions print(get_transactions) </code></pre> <p><code>call_test_outside.py</code> code:</p> <pre><code>from src.PropTools.tests import transactions_tests print(transactions_tests.get_transactions) </code></pre> <p>How can or should I test the package during development?</p>
<python><python-packaging>
2023-10-13 17:09:37
2
587
Oliver Mohr Bonometti
77,289,480
10,642,196
Why does the PyAV-encoded video have a different total time in seconds than the original file?
<p>I'm taking an mp4 file, decoding it frame by frame, and re-encoding it. I'm using PyAV, and the re-encoding runs on h265. When the code runs, all frames seem to be passed from the decoder to the encoder, but at the end I have a file that is 3 seconds shorter. For longer videos the difference is bigger.</p> <p>I have already checked the fps; it comes out with a very small variation, even if I use the Fraction type to avoid decimal differences.</p> <p>I'd like to know if it's a parametrization error, or if this has something to do with the encoder itself, like keyframes and that sort of thing. Is there a way to fix it?</p> <pre><code>import av container = av.open('input.mp4') # create an output container to receive the transcoded video packets output_container = av.open('output.mp4', 'w', format='mp4') # add a stream to output container with same fps as input and h265 codec output_stream = output_container.add_stream( 'libx265', rate=container.streams.video[0].average_rate) # set the output stream size output_stream.width = 640 output_stream.height = 480 print('Starting transcoding...') actual_frame = 0 container_total_frames = container.streams.video[0].frames # start decoding packets and reading frames from the container try: for packet_input in container.demux(): if packet_input.dts is None: break for frame in packet_input.decode(): try: actual_frame += 1 # convert the frame to a numpy array img = frame.to_ndarray(format='bgr24') # prepare the ndarray frame and encode it frame_nd = av.VideoFrame.from_ndarray(img, format='bgr24') packet_output = output_stream.encode(frame_nd) output_container.mux(packet_output) print('Frame: {}/{}'.format(actual_frame, container_total_frames)) except Exception as e: print(&quot;Error writing frame: &quot;, e) break except Exception as e: print('Finished transcoding') output_container.close() print('Finished transcoding') </code></pre>
<python><video><ffmpeg><pyav>
2023-10-13 16:54:02
0
595
Diego Medeiros
77,289,175
3,050,730
Why am I getting a Pylance error "Object of type 'A*' is not callable" in VSCode when using generics?
<p>I'm attempting to write a generic class in Python 3.11 that should be able to process a list of elements using different processing classes (<code>A</code> or <code>B</code> in this example). My goal is to use the type variable <code>T</code> to specify the processing class that should be used.</p> <p><strong>Clarification</strong>: Each element <code>x</code> within the list <code>X</code> requires an individual initialization of the subclass <code>A</code> or <code>B</code>, so initializing is only possible within the <code>process()</code> method.</p> <p>Here's my code:</p> <pre class="lang-py prettyprint-override"><code>from typing import List, Self, TypeVar, Generic class A: def __init__(self): pass def process(self, x): return x + 1 class B: def __init__(self): pass def process(self, x): return x + 2 T = TypeVar(&quot;T&quot;, A, B) class ListProcessor(Generic[T]): &quot;&quot;&quot;Apply a Transformer to the individual list elements.&quot;&quot;&quot; def __init__(self, processor: T): self.processor_class = processor self.processors: List[T] = [] # &lt;- Pylance complains! see error message (1) def process(self, X) -&gt; List: x_processed = [] for x in X: proc: T = self.processor_class() # &lt;- Pylance complains! see error message (2) result = proc.process(x) x_processed.append(result) self.processors.append(proc) return x_processed lp = ListProcessor(B) lp.process([1, 2, 3]) </code></pre> <p>The output of the above is:</p> <pre><code>[3, 4, 5] </code></pre> <p>which is what I expect. 
But while the code runs, the Pylance type checker complains in VSCode.</p> <ol> <li><p>For the line <code>self.processors: List[T] = []</code>, I get:</p> <blockquote> <p>Variable not allowed in type expression PylancereportGeneralTypeIssues<br /> T: ~T<br /> (constant) T: Unknown</p> </blockquote> </li> <li><p>For the line <code>proc: T = self.processor_class()</code>, I get:</p> <blockquote> <p>Object of type &quot;A*&quot; is not callablePylancereportGeneralTypeIssues<br /> Object of type &quot;B*&quot; is not callablePylancereportGeneralTypeIssues</p> </blockquote> </li> </ol> <p>I'm struggling to understand these error messages. Could someone help clarify why these errors occur and how I can fix them?</p>
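One way such Pylance complaints are commonly addressed (a sketch under assumptions, not necessarily the definitive fix): annotate the constructor parameter as <code>Type[T]</code> rather than <code>T</code>, so the checker knows the stored attribute is a class object that can be called. Reusing the question's <code>A</code>/<code>B</code> classes:

```python
from typing import Generic, List, Type, TypeVar

class A:
    def process(self, x: int) -> int:
        return x + 1

class B:
    def process(self, x: int) -> int:
        return x + 2

T = TypeVar("T", A, B)

class ListProcessor(Generic[T]):
    """Apply a processor class to the individual list elements."""

    def __init__(self, processor: Type[T]):  # Type[T]: a class, not an instance
        self.processor_class = processor
        self.processors: List[T] = []

    def process(self, X: List[int]) -> List[int]:
        x_processed = []
        for x in X:
            proc = self.processor_class()  # instantiating a Type[T] is callable
            x_processed.append(proc.process(x))
            self.processors.append(proc)
        return x_processed

lp = ListProcessor(B)
result = lp.process([1, 2, 3])  # [3, 4, 5]
```

The runtime behavior is unchanged; only the annotations differ, distinguishing "an instance of T" from "the class T itself".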
<python><generics><python-typing><pyright>
2023-10-13 15:58:50
2
523
nicrie
77,289,083
5,213,015
Django AllAuth: how to style the email.html template?
<p>Quick question.</p> <p>I'm beefing up the authentication for my Django application. I would like to have users change and verify their emails with Allauth, but I can't figure out how to style a template.</p> <p>Is there a way to style the <code>email.html</code> template?</p> <p>I tried to add <code>{{ form.as_p }}</code>, but I only get an email field. I don't know how to add everything else.</p> <p>I also tried looking at the allauth GitHub repo for the <code>email.html</code> page, but I don't understand it because there's a bunch of template code; I don't know what to strip out or keep.</p> <p>Below is what the page looks like without styling.</p> <p>Any help is gladly appreciated! Thanks!</p> <p><a href="https://i.sstatic.net/Yth6O.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Yth6O.png" alt="enter image description here" /></a></p>
<python><django><django-views><django-allauth>
2023-10-13 15:45:10
1
419
spidey677
77,289,011
21,420,742
How to make a date filter using a column from one dataframe in another
<p>I have two datasets and I need to create a filter looking 90 days back in df1 from the <code>target_date</code> in df2.</p> <p>Example:</p> <p>DF1</p> <pre><code> id name start_date end_date team 1234 John Smith 1/1/2022 2/1/2022 Sales 1234 John Smith 2/1/2022 4/2/2022 Sales 1234 John Smith 4/2/2022 8/4/2022 Admin 4321 Derek Jeter 7/23/2022 8/12/2022 Tech 4321 Derek Jeter 8/12/2022 12/12/2022 Tech 5678 Joe Dirt 1/23/2023 3/21/2023 HR 5678 Joe Dirt 3/21/2023 5/4/2023 HR 5678 Joe Dirt 5/4/2023 7/2/2023 HR </code></pre> <p>DF2</p> <pre><code>case team emp_name target_date 100 Sales J.Smith 7/1/2022 101 Tech Jeter, Derek 1/17/2023 102 HR Joe Dirt 7/1/2023 </code></pre> <p>What I would like is for the <code>target_date</code> in DF2 to be used as the basis for filtering on <code>end_date</code> in DF1. So, for example, if the <code>target_date</code> is 10/1/2023, the <code>end_date</code> would have to be on or before 7/1/2023. The dataframe would then look like this:</p> <pre><code> id name start_date end_date team 1234 John Smith 1/1/2022 2/1/2022 Sales 1234 John Smith 2/1/2022 4/2/2022 Sales 4321 Derek Jeter 7/23/2022 8/12/2022 Tech 5678 Joe Dirt 1/23/2023 3/21/2023 HR </code></pre> <p>This is what I have tried:</p> <pre><code>filtered_dfs = [] for index, row in df2.iterrows(): threshold_date = row['target_date'] - timedelta(days = 90) filtered_rows = df[(df['end_date'] &lt;= threshold_date)] filtered_dfs.append(filtered_rows) df = pd.concat(filtered_dfs,ignore_index = True) </code></pre> <p>I think I need to find a way to match the names, but I am not sure.</p>
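A hedged sketch of one way to apply the per-case cutoff without an explicit loop: merge each history row with its case's <code>target_date</code>, then keep rows whose <code>end_date</code> is at least 90 days before it. This assumes pandas, parsed datetimes, and that <code>team</code> is enough to join on; matching the free-form names (e.g. "J.Smith" vs "John Smith") would need extra normalization, which the question itself notes. The tiny frames below are an illustrative subset, not the full data:

```python
import pandas as pd

df1 = pd.DataFrame({
    'id': [1234, 1234, 4321],
    'name': ['John Smith', 'John Smith', 'Derek Jeter'],
    'end_date': pd.to_datetime(['2022-02-01', '2022-08-04', '2022-08-12']),
    'team': ['Sales', 'Admin', 'Tech'],
})
df2 = pd.DataFrame({
    'case': [100, 101],
    'team': ['Sales', 'Tech'],
    'target_date': pd.to_datetime(['2022-07-01', '2023-01-17']),
})

# Attach each case's target_date to its team's history rows, then filter
# to rows ending at least 90 days before the target.
merged = df1.merge(df2[['team', 'target_date']], on='team', how='inner')
result = merged[merged['end_date'] <= merged['target_date'] - pd.Timedelta(days=90)]
```

With these inputs, the Sales and Tech rows survive the cutoff and the Admin row drops out for lack of a matching case.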
<python><pandas><dataframe><loops><datetime>
2023-10-13 15:32:15
0
473
Coding_Nubie
77,288,976
1,422,096
How to export a private key / public key into bytes with Python cryptography module?
<p>I tried many combinations of parameters of <a href="https://cryptography.io/en/latest/hazmat/primitives/asymmetric/rsa/#cryptography.hazmat.primitives.asymmetric.rsa.RSAPrivateKey.private_bytes" rel="nofollow noreferrer"><code>rsa.RSAPrivateKey.private_bytes</code></a> but none of them work:</p> <pre><code>from cryptography.hazmat.backends import default_backend from cryptography.hazmat.primitives.asymmetric import rsa from cryptography.hazmat.primitives.serialization import Encoding, PrivateFormat, PublicFormat, NoEncryption private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048, backend=default_backend()) pvt_bytes = private_key.private_bytes(Encoding.Raw, PrivateFormat.Raw, NoEncryption()) # ERROR # ValueError: format is invalid with this key public_key = private_key.public_key() pub_bytes = public_key.public_bytes(Encoding.Raw, PublicFormat.Raw) print(pvt_bytes, pub_bytes) </code></pre> <p><strong>Question: how to export/import a private key / public key into a bytes form with the Python <code>cryptography</code> module?</strong></p> <p>The goal is to have a base64 encoded version of both like:</p> <p><code>uFWnMdqUALp2NcvKRxE8Fw2uWDbBjnSk5wveuL3DOp7Ct4AyYKpvccMEE63ooYoj4nblafAwXikakGPbCM4amg==</code></p>
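For what it's worth, <code>Encoding.Raw</code>/<code>Format.Raw</code> are only supported for certain key types (such as Ed25519), not RSA. A hedged sketch of exporting RSA keys as bytes via the standard DER formats and then base64-encoding them (assuming this byte representation is acceptable for the stated goal; recent versions of <code>cryptography</code> no longer need the <code>backend</code> argument):

```python
import base64

from cryptography.hazmat.primitives.asymmetric import rsa
from cryptography.hazmat.primitives.serialization import (
    Encoding, NoEncryption, PrivateFormat, PublicFormat,
    load_der_private_key, load_der_public_key,
)

private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)

# DER is a compact binary encoding; PKCS8 and SubjectPublicKeyInfo are the
# standard container formats for private and public keys respectively.
pvt_bytes = private_key.private_bytes(Encoding.DER, PrivateFormat.PKCS8, NoEncryption())
pub_bytes = private_key.public_key().public_bytes(Encoding.DER, PublicFormat.SubjectPublicKeyInfo)

# Base64 versions, as in the stated goal.
pvt_b64 = base64.b64encode(pvt_bytes).decode()
pub_b64 = base64.b64encode(pub_bytes).decode()

# Round-trip the bytes back into key objects.
restored_private = load_der_private_key(pvt_bytes, password=None)
restored_public = load_der_public_key(pub_bytes)
```

PEM (<code>Encoding.PEM</code>) works the same way and is already base64-based, just wrapped in header/footer lines.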
<python><cryptography><rsa><python-cryptography>
2023-10-13 15:27:15
0
47,388
Basj
77,288,951
6,125,803
pipenv install and lock fail every time on a new install of macOS Ventura
<p>I'm trying to set up a new environment and I've run into an issue with pipenv.</p> <p>Pipenv will create the virtual env, but when I try to install <em>any</em> package using the command <code>pipenv install &lt;package&gt;</code>, both the install and the lock fail; see an example of the traceback below.</p> <p>I can bypass the package install issue with <code>pipenv run pip install &lt;package&gt;</code>, but even after that, the lock will still fail.</p> <ul> <li>I've tried multiple different versions of pipenv, no difference.</li> <li>I've tried multiple different versions of Python (3.10, 3.11, 3.12).</li> <li>My copy of pipenv is installed using pip (not Homebrew).</li> <li><code>pipenv lock --pre</code> fails with the same error.</li> <li>Everything was working perfectly in my previous environment (macOS Mojave).</li> </ul> <p>I've read through the tracebacks, but I can't figure out what's going wrong. The only variable here is the new OS (Ventura), but I'm not sure what about it could be causing the problem.</p> <pre><code>Installing requests... Resolving requests... Added requests to Pipfile's [packages] ... ✔ Installation Succeeded Pipfile.lock not found, creating... Locking [packages] dependencies... Building requirements... Resolving dependencies... ✘ Locking Failed! 
⠸ Locking...False ERROR:pip.subprocessor:[present-rich] python setup.py egg_info exited with 1 [ResolutionFailure]: File &quot;/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/pipenv/resolver.py&quot;, line 645, in _main [ResolutionFailure]: resolve_packages( [ResolutionFailure]: File &quot;/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/pipenv/resolver.py&quot;, line 612, in resolve_packages [ResolutionFailure]: results, resolver = resolve( [ResolutionFailure]: ^^^^^^^^ [ResolutionFailure]: File &quot;/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/pipenv/resolver.py&quot;, line 592, in resolve [ResolutionFailure]: return resolve_deps( [ResolutionFailure]: ^^^^^^^^^^^^^ [ResolutionFailure]: File &quot;/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/pipenv/utils/resolver.py&quot;, line 892, in resolve_deps [ResolutionFailure]: results, hashes, internal_resolver = actually_resolve_deps( [ResolutionFailure]: ^^^^^^^^^^^^^^^^^^^^^^ [ResolutionFailure]: File &quot;/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/pipenv/utils/resolver.py&quot;, line 665, in actually_resolve_deps [ResolutionFailure]: resolver.resolve() [ResolutionFailure]: File &quot;/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/pipenv/utils/resolver.py&quot;, line 442, in resolve [ResolutionFailure]: raise ResolutionFailure(message=str(e)) [pipenv.exceptions.ResolutionFailure]: Warning: Your dependencies could not be resolved. You likely have a mismatch in your sub-dependencies. You can use $ pipenv run pip install &lt;requirement_name&gt; to bypass this mechanism, then run $ pipenv graph to inspect the versions actually installed in the virtualenv. Hint: try $ pipenv lock --pre if it is a pre-release dependency. 
ERROR: metadata generation failed Traceback (most recent call last): File &quot;/Library/Frameworks/Python.framework/Versions/3.10/bin/pipenv&quot;, line 8, in &lt;module&gt; sys.exit(cli()) File &quot;/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/pipenv/vendor/click/core.py&quot;, line 1130, in __call__ return self.main(*args, **kwargs) File &quot;/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/pipenv/cli/options.py&quot;, line 58, in main return super().main(*args, **kwargs, windows_expand_args=False) File &quot;/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/pipenv/vendor/click/core.py&quot;, line 1055, in main rv = self.invoke(ctx) File &quot;/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/pipenv/vendor/click/core.py&quot;, line 1657, in invoke return _process_result(sub_ctx.command.invoke(sub_ctx)) File &quot;/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/pipenv/vendor/click/core.py&quot;, line 1404, in invoke return ctx.invoke(self.callback, **ctx.params) File &quot;/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/pipenv/vendor/click/core.py&quot;, line 760, in invoke return __callback(*args, **kwargs) File &quot;/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/pipenv/vendor/click/decorators.py&quot;, line 84, in new_func return ctx.invoke(f, obj, *args, **kwargs) File &quot;/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/pipenv/vendor/click/core.py&quot;, line 760, in invoke return __callback(*args, **kwargs) File &quot;/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/pipenv/cli/command.py&quot;, line 209, in install do_install( File &quot;/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/pipenv/routines/install.py&quot;, line 297, in do_install 
raise e File &quot;/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/pipenv/routines/install.py&quot;, line 281, in do_install do_init( File &quot;/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/pipenv/routines/install.py&quot;, line 672, in do_init do_lock( File &quot;/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/pipenv/routines/lock.py&quot;, line 65, in do_lock venv_resolve_deps( File &quot;/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/pipenv/utils/resolver.py&quot;, line 833, in venv_resolve_deps c = resolve(cmd, st, project=project) File &quot;/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/pipenv/utils/resolver.py&quot;, line 702, in resolve raise RuntimeError(&quot;Failed to lock Pipfile.lock!&quot;) RuntimeError: Failed to lock Pipfile.lock! (test_project) bash-3.2$ pipenv graph requests==2.31.0 ├── certifi [required: &gt;=2017.4.17, installed: 2023.7.22] ├── charset-normalizer [required: &gt;=2,&lt;4, installed: 3.3.0] ├── idna [required: &gt;=2.5,&lt;4, installed: 3.4] └── urllib3 [required: &gt;=1.21.1,&lt;3, installed: 2.0.6] </code></pre> <p>##############################</p> <p>Here is the output of <code>pipenv --support</code></p> $ pipenv --support <p>Pipenv version: <code>'2023.10.3'</code></p> <p>Pipenv location: <code>'/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/pipenv'</code></p> <p>Python location: <code>'/Library/Frameworks/Python.framework/Versions/3.10/bin/python3.10'</code></p> <p>OS Name: <code>'posix'</code></p> <p>User pip version: <code>'23.2.1'</code></p> <p>user Python installations found:</p> <p>PEP 508 Information:</p> <pre><code>{'implementation_name': 'cpython', 'implementation_version': '3.10.11', 'os_name': 'posix', 'platform_machine': 'arm64', 'platform_python_implementation': 'CPython', 'platform_release': '22.6.0', 
'platform_system': 'Darwin', 'platform_version': 'Darwin Kernel Version 22.6.0: Fri Sep 15 13:41:28 PDT ' '2023; root:xnu-8796.141.3.700.8~1/RELEASE_ARM64_T6020', 'python_full_version': '3.10.11', 'python_version': '3.10', 'sys_platform': 'darwin'} </code></pre> <p>System environment variables:</p> <ul> <li><code>TERM_PROGRAM</code></li> <li><code>PIP_PYTHON_PATH</code></li> <li><code>TERM</code></li> <li><code>SHELL</code></li> <li><code>TMPDIR</code></li> <li><code>PIPENV_VENV_IN_PROJECT</code></li> <li><code>TERM_PROGRAM_VERSION</code></li> <li><code>TERM_SESSION_ID</code></li> <li><code>USER</code></li> <li><code>SSH_AUTH_SOCK</code></li> <li><code>__CF_USER_TEXT_ENCODING</code></li> <li><code>VIRTUAL_ENV</code></li> <li><code>PIPENV_ACTIVE</code></li> <li><code>PATH</code></li> <li><code>__CFBundleIdentifier</code></li> <li><code>PWD</code></li> <li><code>LANG</code></li> <li><code>PYTHONFINDER_IGNORE_UNSUPPORTED</code></li> <li><code>XPC_FLAGS</code></li> <li><code>PS1</code></li> <li><code>PYTHONDONTWRITEBYTECODE</code></li> <li><code>XPC_SERVICE_NAME</code></li> <li><code>HOME</code></li> <li><code>SHLVL</code></li> <li><code>LOGNAME</code></li> <li><code>PIP_DISABLE_PIP_VERSION_CHECK</code></li> <li><code>VIRTUAL_ENV_PROMPT</code></li> <li><code>_</code></li> </ul> <p>Pipenv–specific environment variables:</p> <ul> <li><code>PIPENV_VENV_IN_PROJECT</code>: <code>1</code></li> <li><code>PIPENV_ACTIVE</code>: <code>1</code></li> </ul> <p>Debug–specific environment variables:</p> <ul> <li><code>PATH</code>: <code>/Users/username/Documents/Github/test_project/.venv/bin:/Library/Frameworks/Python.framework/Versions/3.10/bin:/Library/Frameworks/Python.framework/Versions/3.11/bin:/opt/homebrew/sbin:/opt/homebrew/bin:/bin:/usr/bin:/usr/local/bin:/sbin:/usr/local/bin:/System/Cryptexes/App/usr/bin:/usr/bin:/bin:/usr/sbin:/sbin:/Applications/Little 
Snitch.app/Contents/Components:/Library/Apple/usr/bin:/var/run/com.apple.security.cryptexd/codex.system/bootstrap/usr/local/bin:/var/run/com.apple.security.cryptexd/codex.system/bootstrap/usr/bin:/var/run/com.apple.security.cryptexd/codex.system/bootstrap/usr/appleinternal/bin</code></li> <li><code>SHELL</code>: <code>/bin/bash</code></li> <li><code>LANG</code>: <code>en_US.UTF-8</code></li> <li><code>PWD</code>: <code>/Users/username/Documents/Github/test_project</code></li> <li><code>VIRTUAL_ENV</code>: <code>/Users/username/Documents/Github/test_project/.venv</code></li> </ul> <hr /> <p>Contents of <code>Pipfile</code> ('/Users/username/Documents/Github/test_project/Pipfile'):</p> <pre class="lang-ini prettyprint-override"><code>[[source]] url = &quot;https://pypi.org/simple&quot; verify_ssl = true name = &quot;pypi&quot; [packages] aiohttp = &quot;==3.8.1&quot; attr = &quot;==0.3.1&quot; attrs = &quot;==21.4.0&quot; blinker = &quot;==1.4&quot; brotli = &quot;==1.0.9&quot; ca-certs-locater = &quot;==1.0&quot; click = &quot;==8.0.4&quot; configparser = &quot;==5.2.0&quot; cryptography = &quot;==36.0.1&quot; cython = &quot;==0.29.28&quot; dl = &quot;==0.1.0&quot; docutils = &quot;==0.18.1&quot; grpc-status = &quot;==1.0.0&quot; htmlparser = &quot;==0.0.2&quot; ipython = &quot;==8.3.0&quot; ipywidgets = &quot;==7.7.0&quot; jinja2 = &quot;==3.1.2&quot; jnius = &quot;==1.1.0&quot; keyring = &quot;==23.5.0&quot; lockfile = &quot;==0.12.2&quot; lxml = &quot;==4.8.0&quot; numpy = &quot;==1.22.2&quot; oauth2client = &quot;==4.1.3&quot; ordereddict = &quot;==1.1&quot; pillow = &quot;==9.1.0&quot; pyopenssl = &quot;==22.0.0&quot; pytest = &quot;==7.0.1&quot; pyu2f = &quot;==0.1.5&quot; railroad = &quot;==0.5.0&quot; simplejson = &quot;==3.17.6&quot; sphinx = &quot;==4.5.0&quot; toml = &quot;==0.10.2&quot; tornado = &quot;==6.1&quot; unicodedata2 = &quot;==14.0.0&quot; win-inet-pton = &quot;==1.1.0&quot; xmlrpclib = &quot;==1.0.1&quot; [dev-packages] [requires] 
python_version = &quot;3.11&quot; </code></pre>
<python><python-3.x><pipenv>
2023-10-13 15:23:45
2
305
stevec
77,288,757
10,331,731
ImportError: GSSAPIProxy requires the Python gssapi library
<p>After updating to Python 3.11.5 I am getting this import error in all of my scripts. Does anyone know what I need to install or update to fix it?</p> <pre><code> ImportError: GSSAPIProxy requires the Python gssapi library: No module named 'krb5' </code></pre> <p>I already installed gssapi but keep getting this error in every script.</p>
<python><importerror><gssapi><krb5.ini>
2023-10-13 14:55:47
0
393
Learner
77,288,502
1,380,613
Python Ctypes Segmentation Fault
<p>With the help of folks here I was able to get most of my ctypes integration working in my Python script.</p> <p><a href="https://stackoverflow.com/questions/74976145/python3-processing-ctypes-data">Python3 processing ctypes data</a></p> <p>I am interfacing with the Fanuc Focas library to pull in data from CNC machines. All of the C functions that don't need me to send arguments work, but I am not able to get any of them with additional arguments to work.</p> <p>So my question is, how do I debug this? (Or how do I fix it, if you see the issue.) I have tried using gdb, but can't seem to figure that out.</p> <p>For example, I can run this with no issue:</p> <pre><code>focas.cnc_statinfo2(libh, statinfo) </code></pre> <p>I cannot get this one to work (where I have to supply the spindle number):</p> <pre><code>spindleNumber = ctypes.c_short(1) ret = focas.cnc_rdspload(libh, spindleNumber, sploadA) </code></pre> <p>I believe my problem lies with the &quot;spindleNumber&quot; variable. It is expecting a C short; I have tried:</p> <pre><code>focas.cnc_rdspload(libh, 1, sploadA) focas.cnc_rdspload(libh, '1', sploadA) focas.cnc_rdspload(libh, c_short(1), sploadA) focas.cnc_rdspload(libh, ctypes.c_short(1), sploadA) </code></pre> <p>Every one of these just results in the vague &quot;Segmentation Fault (Core Dumped)&quot; error, and it does not generate a core file anywhere that I can find. So I am struggling to do any troubleshooting.</p> <p>If it helps, here is the prototype: (<a href="https://www.inventcom.net/fanuc-focas-library/position/cnc_rdspload" rel="nofollow noreferrer">https://www.inventcom.net/fanuc-focas-library/position/cnc_rdspload</a>)</p> <pre><code>FWLIBAPI short WINAPI cnc_rdspload(unsigned short FlibHndl, short sp_no, ODBSPN *serialspindle); typedef struct odbspn { short datano; /* Spindle number. */ short type; /* Not used. */ short data[MAX_SPINDLE]; /* Spindle data. */ } ODBSPN ; /* MAX_SPINDLE is maximum number of spindle.
*/ </code></pre> <p>Full Code:</p> <pre><code>#!/usr/bin/env python3 import ctypes from pathlib import Path import os #load fanuc focus cpp library libpath = (&quot;libfwlib32.so&quot;) focas = ctypes.cdll.LoadLibrary(libpath) focas.cnc_startupprocess.restype = ctypes.c_short focas.cnc_exitprocess.restype = ctypes.c_short focas.cnc_allclibhndl3.restype = ctypes.c_short focas.cnc_freelibhndl.restype = ctypes.c_short focas.cnc_sysinfo_ex.restype = ctypes.c_short focas.cnc_rdspload.argtype = ctypes.c_short focas.cnc_rdspload.restype = ctypes.c_short ret = focas.cnc_startupprocess(0, &quot;focas.log&quot;) if ret != 0: raise Exception(f&quot;Failed to create required log file! ({ret})&quot;) #machine connection info (load dynamically) ip = &quot;172.23.4.53&quot; port = 8193 while True: #start connection to machine timeout = 10 libh = ctypes.c_ushort(0) print(f&quot;connecting to machine at {ip}:{port}&quot;) ret = focas.cnc_allclibhndl3( ip.encode(), port, timeout, ctypes.byref(libh), ) if ret != 0: time.sleep(30.0) raise Exception(f&quot;Failed to connect to cnc! ({ret})&quot;) sysinfoex = (ctypes.c_uint16 * 8)() sploadA = (ctypes.c_short * 3)() try: while True: print('--------------') print('CLASS - SYSINFO EX') print('--------------') #get sysinfo data from fanuc focus ret = focas.cnc_sysinfo_ex(libh, sysinfoex) if ret != 0: time.sleep(30.0) raise Exception(f&quot;Failed to read sysinfo_ex! 
({ret})&quot;) print('maximum controlled axes '+str(sysinfoex[0])) print('maximum spindle number '+str(sysinfoex[1])) print('maximum path number '+str(sysinfoex[2])) print('maximum machining group number '+str(sysinfoex[3])) print('controlled axes number '+str(sysinfoex[4])) print('servo axis number '+str(sysinfoex[5])) print('spindle number '+str(sysinfoex[6])) print('path number '+str(sysinfoex[7])) print('--------------') print('CLASS - SPINDLE LOAD 1') print('--------------') #get spinde 1 data from fanuc focus spindleNumber = ctypes.c_short(1) ret = focas.cnc_rdspload(libh, spindleNumber, sploadA) time.sleep(2) except: print('failed to connect') time.sleep(5) focas.cnc_exitprocess() </code></pre>
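A hedged aside on the question's setup code: `argtype` (singular) is not an attribute ctypes consults — the attribute is `argtypes`, a sequence with one entry per parameter, so a three-argument function needs all three types listed. A minimal runnable sketch using libc's `abs` as a stand-in (POSIX-only; the Focas names mentioned afterwards are taken from the question, not verified):

```python
import ctypes

# Load the C library symbols of the running process (works on Linux/macOS).
libc = ctypes.CDLL(None)

# argtypes is a *list* covering every parameter, in order; restype is the return type.
libc.abs.argtypes = [ctypes.c_int]
libc.abs.restype = ctypes.c_int

result = libc.abs(-5)
print(result)  # 5
```

By analogy, the question's call would presumably want something like `focas.cnc_rdspload.argtypes = [ctypes.c_ushort, ctypes.c_short, ctypes.POINTER(ctypes.c_short)]` — a guess based on the prototype shown, where a proper `ctypes.Structure` mirroring `ODBSPN` would be more faithful than a bare short array. With `argtypes` set, ctypes also converts plain Python ints, so `focas.cnc_rdspload(libh, 1, sploadA)` becomes valid.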
<python><ctypes>
2023-10-13 14:20:27
1
412
Developer Gee
77,288,414
18,904,265
How can I correctly instantiate with __new__ and class methods?
<p>I have a class which connects to a server through a Python module. The login takes some time, so the idea was to put its instance into a singleton pattern, so I always pass around the exact same instance. In another <a href="https://softwareengineering.stackexchange.com/questions/448082/how-can-i-nicely-pass-on-a-class-instance-in-python">question</a> I wrote about the details.</p> <p>However, I added the <code>__new__</code> method to my class as seen below, and after an error which said I provided 4 positional arguments instead of 1, I added <code>user, pass, url</code> to the <code>__new__</code> function — but this of course doesn't do anything yet. So at this point I am massively confused about how I need to rewrite my init/classmethods/<code>__new__</code> to get the same instantiation as before, with the only difference that I only instantiate once. Also I had the feeling that <code>__init__</code> gets executed every time regardless. So is there a way to just shift the content of my <code>__init__</code> to <code>__new__</code>? My main concern is: how do I assign a variable to the instance that is about to be created? Grateful for any pointers in the right direction!</p> <pre class="lang-py prettyprint-override"><code>from pybis import Openbis class OpenBisLoad: _instance = None def __new__(cls): if cls._instance is None: cls._instance = super(OpenBisLoad, cls).__new__(cls) return cls._instance def __init__(self, username: str, password: str, url: str): openbis_object = Openbis(url) openbis_object.login(username, password, save_token=True) self.openbis_object = openbis_object @classmethod def login_with_credentials(cls, username: str, password: str): url = URL return cls(username, password, url) @classmethod def login_with_env(cls): load_dotenv() return cls(os.getenv(&quot;USER&quot;), os.getenv(&quot;PASSWORD&quot;), os.getenv(&quot;URL&quot;)) </code></pre>
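A hedged sketch of the two mechanics the question is circling: `__new__` receives the same positional arguments as `__init__`, so it must accept (and can ignore) them, and since `__init__` runs on every `OpenBisLoad(...)` call it needs a guard around the expensive login. The pybis dependency is replaced here by a plain attribute assignment so the sketch runs standalone:

```python
class OpenBisLoad:
    _instance = None

    def __new__(cls, *args, **kwargs):
        # __new__ is passed the constructor arguments too; accept and ignore them.
        if cls._instance is None:
            cls._instance = super().__new__(cls)
        return cls._instance

    def __init__(self, username: str, password: str, url: str):
        # __init__ runs on *every* call to the class, so guard the expensive login.
        if getattr(self, "_initialized", False):
            return
        self.username = username   # stand-in for Openbis(url).login(...)
        self.url = url
        self._initialized = True


first = OpenBisLoad("alice", "secret", "https://example.org")
second = OpenBisLoad("bob", "other", "https://example.org")
print(first is second)    # True: same instance
print(second.username)    # 'alice': the second call did not re-run the login
```

The existing `login_with_credentials`/`login_with_env` classmethods keep working unchanged with this shape, since they just call `cls(...)`.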
<python><singleton>
2023-10-13 14:06:23
1
465
Jan
77,288,319
1,422,096
Decorator to do a test before launching each function
<p>Instead of repeating <code>if not self.connected: ... return</code> for every method of this class:</p> <pre><code>class A: def __init__(self): self.connected = False def connect(self, password): if password == &quot;ab&quot;: self.connected = True def send(self, message): if not self.connected: print(&quot;abandoned because not connected&quot;) return print(&quot;sending&quot;) def receive(self): if not self.connected: print(&quot;abandoned because not connected&quot;) return print(&quot;receiving&quot;) a = A() a.send() </code></pre> <p>is it possible to do it with a decorator like</p> <pre><code>class A: ... def requires_connected(self, func): if self.connected: func() else: print(&quot;abandoned because not connected&quot;) @requires_connected def send(self, message): print(&quot;sending&quot;) </code></pre> <p>?</p> <p>The latter doesn't work (<code>TypeError: requires_connected() missing 1 required positional argument: 'func'</code>), probably because of the arguments.</p> <p><strong>How to make this decorator work correctly?</strong></p>
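For reference, one way the decorator sketched in the question can be made to work: the error comes from defining it as a method (so `self` swallows the `func` argument) and from calling `func()` at decoration time instead of returning a wrapper. A runnable sketch that defers the `self.connected` check to call time:

```python
import functools

def requires_connected(func):
    @functools.wraps(func)
    def wrapper(self, *args, **kwargs):
        # evaluated at call time, when an instance (and its state) exists
        if not self.connected:
            print("abandoned because not connected")
            return None
        return func(self, *args, **kwargs)
    return wrapper

class A:
    def __init__(self):
        self.connected = False

    def connect(self, password):
        if password == "ab":
            self.connected = True

    @requires_connected
    def send(self, message):
        print("sending")
        return message

a = A()
a.send("hi")       # prints: abandoned because not connected
a.connect("ab")
a.send("hi")       # prints: sending
```

The key point is that at decoration time no instance exists yet, so the decorator can only take `func` and must return a function that receives `self` later.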
<python><decorator>
2023-10-13 13:52:44
3
47,388
Basj
77,288,315
130,468
Pythonic equivalent of Java marker annotations
<p>I want to search my Python classpath for all classes that can handle a certain input (type), at runtime. Then I will instantiate them and call them when I receive that input. I would like each class author to be able to mark their class as a possible handler.</p> <p>In Java, I can define the annotation <code>@Handler</code> and do this:</p> <pre class="lang-java prettyprint-override"><code>@Handler public class MyHandler implements HandlerInterface { // whatever } </code></pre> <p>then get the set of all marked classes like this with the Reflections library (or Spring Framework if I'm using it):</p> <pre class="lang-java prettyprint-override"><code>Set&lt;Class&lt;?&gt;&gt; handlerClasses = reflections.getTypesAnnotatedWith(Handler.class); </code></pre> <p>What is the Python equivalent?</p> <p><code>@decorator</code> doesn't seem to fulfill quite the same goal; it seems to be entirely for wrappers, whereas that's just one possible use of Java's <code>@</code> syntax.</p> <p>I can almost do it with inheritance instead:</p> <pre class="lang-py prettyprint-override"><code>class MyHandler(HandlerInterface): pass handlers = [ handler() for handler in HandlerInterface.__subclasses__() ] </code></pre> <p>But that solution may get intermediate classes (e.g. <code>HandlerInterface</code> -&gt; <code>HandlerType1</code> -&gt; <code>HandlerType1Implementation</code>).</p> <p>Am I missing any other options? Or is there a different design pattern I should be looking at? I may be looking at it from a too Java-centric point of view.</p>
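One common Python counterpart, sketched here: a decorator does not have to wrap anything — it can simply record the class in a registry and return it unchanged, which is close in spirit to a marker annotation plus `getTypesAnnotatedWith`, and avoids the intermediate-subclass problem:

```python
HANDLERS = []

def handler(cls):
    """Marker 'annotation': register the class and return it unmodified."""
    HANDLERS.append(cls)
    return cls

@handler
class MyHandler:
    def handle(self, value):
        return value * 2

# the registry contains only explicitly marked classes, never intermediates
handlers = [cls() for cls in HANDLERS]
print(handlers[0].handle(3))  # 6
```

The main difference from Java is that there is no classpath scan: the module defining a handler must actually be imported for the decorator to run, so discovery across packages means walking them yourself (e.g. with `pkgutil`/`importlib`) or using an entry-points mechanism.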
<python>
2023-10-13 13:52:13
1
1,568
Sam Jones
77,288,262
7,978,221
Convert JSON objects to table structure in Python
<p>I have some JSON with an 'items' list. I want to convert it to a semicolon-separated file with headers.</p> <p>I want to produce lines in this format: <code>id, date, order[x][0], order[x][1], ...</code></p> <p>My issue is that I need to include headers (the names of the fields), and some <code>order</code> items do not have all the possible elements, so I can't just join the values together.</p> <p>For the JSON below I'd expect the following output:</p> <pre><code>id;date;prefix;number;quantity;code;index 06107;2023-09-25T01:51:04Z;VO;32233809;1;;4 06107;2023-09-25T01:51:04Z;VO;31438125;1;;4 06107;2023-09-25T10:00:51Z;VO;31407983;1;14;4 06107;2023-09-25T10:00:51Z;VO;986116;6;12;4 </code></pre> <p>I'm guessing maybe some kind of pandas usage would be the solution, but I'm not sure how to make the mapping and export.</p> <pre><code> &quot;items&quot;: [ { &quot;id&quot;: &quot;06107&quot;, &quot;date&quot;: &quot;2023-09-25T01:51:04Z&quot;, &quot;order&quot;: [ { &quot;prefix&quot;: &quot;VO&quot;, &quot;number&quot;: &quot;32233809&quot;, &quot;quantity&quot;: 1, &quot;index&quot;: 4 }, { &quot;prefix&quot;: &quot;VO&quot;, &quot;number&quot;: &quot;31438125&quot;, &quot;quantity&quot;: 1, &quot;index&quot;: 4 } ] }, { &quot;id&quot;: &quot;06107&quot;, &quot;date&quot;: &quot;2023-09-25T10:00:51Z&quot;, &quot;order&quot;: [ { &quot;prefix&quot;: &quot;VO&quot;, &quot;number&quot;: &quot;31407983&quot;, &quot;quantity&quot;: 1, &quot;code&quot;: 14, &quot;index&quot;: 4 }, { &quot;prefix&quot;: &quot;VO&quot;, &quot;number&quot;: &quot;986116&quot;, &quot;quantity&quot;: 6, &quot;code&quot;: 12, &quot;index&quot;: 4 } ] } ] </code></pre>
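A pandas-free sketch with the standard library: `csv.DictWriter` with `restval=''` fills in the fields an order item is missing, which handles the "not all elements present" problem directly (field names copied from the question's expected output; the data is an abbreviated version of the question's JSON):

```python
import csv
import io

data = {"items": [
    {"id": "06107", "date": "2023-09-25T01:51:04Z", "order": [
        {"prefix": "VO", "number": "32233809", "quantity": 1, "index": 4},
        {"prefix": "VO", "number": "31407983", "quantity": 1, "code": 14, "index": 4},
    ]},
]}

fields = ["id", "date", "prefix", "number", "quantity", "code", "index"]
out = io.StringIO()
writer = csv.DictWriter(out, fieldnames=fields, delimiter=";", restval="")
writer.writeheader()
for item in data["items"]:
    for order in item["order"]:
        # merge the parent fields with each order row; missing keys become ""
        writer.writerow({"id": item["id"], "date": item["date"], **order})

print(out.getvalue())
# id;date;prefix;number;quantity;code;index
# 06107;2023-09-25T01:51:04Z;VO;32233809;1;;4
# 06107;2023-09-25T01:51:04Z;VO;31407983;1;14;4
```

With a real file, `out` would be `open("result.csv", "w", newline="")` instead of the `StringIO` used here for demonstration.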
<python><json><export><export-to-csv>
2023-10-13 13:45:18
1
1,575
Jeppe
77,288,248
2,232,418
Python - SSL Error when making API call with urllib.request
<p>I'm trying to make an HTTPS API call in a Python script that will be included in an Azure DevOps pipeline step. I'm restricted to using <code>urllib.request</code>.</p> <p>The current code I have is:</p> <pre><code>try: os.environ['HTTP_PROXY'] = '&lt;redacted&gt;' os.environ['HTTPS_PROXY'] = '&lt;redacted&gt;' req = urllib.request.Request(f'https://api.github.com/repos/{repository_name}/languages') req.add_header('Accept', 'application/vnd.github+json') req.add_header('Authorization', f'Bearer {github_token}') ssl_context = ssl.create_default_context(cafile=f'{cert_path}') with urllib.request.urlopen(req, context=ssl_context) as response: gt_response = json.load(response) except Exception as e: print(f'Error. [{e}]') </code></pre> <p>And the error I'm getting is:</p> <blockquote> <p>&lt;urlopen error EOF occurred in violation of protocol (_ssl.c:1091)&gt;</p> </blockquote> <p>I'm not an SSL expert and this has totally stumped me for what I would expect to be a simple operation. I've looked through the <a href="https://docs.python.org/3/library/urllib.request.html" rel="nofollow noreferrer">urllib.request documentation</a> but to no avail. Any help would be greatly appreciated.</p>
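Not a fix for the EOF error itself (that symptom usually points at the proxy or a TLS handshake being cut short), but one hedged debugging step is to make the proxy explicit with `ProxyHandler` instead of relying on environment variables, so urllib's request routing can be ruled in or out; the proxy address below is a placeholder, not the redacted value:

```python
import ssl
import urllib.request

# placeholder proxy; in the real pipeline this would be the redacted value
proxy = urllib.request.ProxyHandler({"https": "http://proxy.example:8080"})
https = urllib.request.HTTPSHandler(context=ssl.create_default_context())
opener = urllib.request.build_opener(proxy, https)

# opener.open(req) would then route the request through the explicit proxy;
# building the opener itself performs no network I/O, so it is safe to inspect.
print(type(opener).__name__)  # OpenerDirector
```

If the call succeeds with an explicit handler but fails with the environment variables, the problem is in how the proxy settings are picked up rather than in TLS itself.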
<python><ssl><urllib><github-api>
2023-10-13 13:42:38
0
2,787
Ben
77,288,212
9,908,011
Python socket.send on localhost alternates success and failure
<p>I am experiencing some weird behavior when calling <code>socket.send()</code>, which alternately succeeds and fails with a &quot;connection refused&quot; error:</p> <pre><code>import socket, time sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM) dest = (&quot;127.0.0.1&quot;, 1111) sock.connect(dest) counter = 1 while True: try: sent = sock.send(b&quot;message\n&quot;) print(&quot;counter {}, sent {}&quot;.format(counter, sent)) except Exception as e: print(&quot;counter {}, exception {}&quot;.format(counter, e)) finally: counter += 1 time.sleep(0.5) </code></pre> <p>Which outputs:</p> <pre><code>counter 1, sent 8 counter 2, exception [Errno 111] Connection refused counter 3, sent 8 counter 4, exception [Errno 111] Connection refused counter 5, sent 8 counter 6, exception [Errno 111] Connection refused counter 7, sent 8 counter 8, exception [Errno 111] Connection refused counter 9, sent 8 ... </code></pre> <p>I expected the send to always succeed.</p> <ul> <li><p>I want the code to run regardless of whether there is another process listening on that port or not (if there is one, e.g. Netcat <code>nc -u -l 1111</code>, <code>send()</code> in my example always succeeds).</p> </li> <li><p>The intermittent failure made me think of a buffering issue, but changing the buffer size (<code>setsockopt</code>) did not affect anything (but the kernel imposes a minimum buffer size, so one cannot really set it to 0).
Furthermore:</p> </li> <li><p>Using <code>sendto()</code> instead of <code>send()</code> (without the initial <code>connect()</code>) works as expected, i.e., sending never fails regardless of the listener being active or not.</p> </li> <li><p>Using a plain socket in C also works as expected, with <em>both</em> <code>send()</code> and <code>sendto()</code>.</p> </li> </ul> <p>I am totally puzzled, what is happening here?</p> <p>I could not find any documentation about this behavior in the python docs, nor on the C manual pages, nor searching the web. I am sorry if I am missing something obvious.</p> <p>Python 3.10.12 on Linux (Ubuntu)</p>
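For context, a hedged sketch of the workaround the question already found: on Linux, calling `connect()` on a UDP socket asks the kernel to deliver ICMP port-unreachable errors back to the socket, and the pending error is reported on the *next* send — hence the alternation. An unconnected `sendto()` never surfaces those errors. A self-contained demo against a listener we create ourselves (loopback only, so it runs anywhere):

```python
import socket

# a receiver bound to an ephemeral loopback port gives the send a live destination
receiver = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
receiver.bind(("127.0.0.1", 0))
port = receiver.getsockname()[1]

sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sent = sender.sendto(b"message\n", ("127.0.0.1", port))  # no connect() involved
payload = receiver.recvfrom(64)[0]
print(sent, payload)

sender.close()
receiver.close()
```

With `sendto()` and no `connect()`, the socket is unconnected, so there is no peer for the kernel to associate ICMP errors with, which matches the observation that the C `sendto()` variant and the Python `sendto()` variant both "always succeed".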
<python><sockets><localhost><send>
2023-10-13 13:37:26
1
310
L. Bruce
77,288,210
8,843,585
Asyncio: Detecting never awaited coroutines beforehand
<p>When <code>await</code> is not applied to a coroutine call, asyncio shows</p> <pre><code>RuntimeWarning: coroutine was never awaited </code></pre> <p>The problem is that this message is easily missed in a production environment. It's also easy to lose when the console is full of other application logs.</p> <p>It's a critical, bug-prone warning that simply goes unnoticed sometimes.</p> <p>Is there a way to detect these errors at <strong>compile time</strong>?</p> <p>If not, is there a way to raise this warning <strong>as an error</strong>, or at least improve its reporting so it isn't missed?</p>
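One hedged runtime angle: the message is issued through the standard `warnings` machinery when the un-awaited coroutine object is garbage-collected, so it can be captured (or escalated with a filter) like any other warning — useful in test suites where missing it is most costly. A sketch:

```python
import gc
import warnings

async def job():
    return 42

with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    job()          # coroutine created but never awaited...
    gc.collect()   # ...and collected, which triggers the RuntimeWarning

messages = [str(w.message) for w in caught]
print(any("was never awaited" in m for m in messages))  # True
```

For static ("compile time") detection, linters are the usual route — mypy ships an `unused-coroutine` error code, and some flake8 plugins flag discarded coroutine values — though this is worth verifying against your specific toolchain and versions. The exact moment the warning fires also depends on the garbage collector, so CPython's refcounting makes the sketch above deterministic in a way other interpreters may not.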
<python><python-asyncio><python-3.10>
2023-10-13 13:36:52
1
1,219
Ramon Dias
77,288,093
8,387,921
Pull the string between two strings in Python when the same word repeats multiple times
<p>I'm trying to extract a string from the text &quot;20231006-OPUS solution _ cfhs0230.22d OP1696574785263-792.txt&quot;. I'm interested in &quot;cfhs0230.22d&quot;. I used the method below, but it doesn't work because there are two occurrences of &quot;OP&quot; in the text. So I want to skip the first &quot;OP&quot;, use the second &quot;OP&quot; instead, and get the string between &quot;-&quot; and that second &quot;OP&quot;. How can I do that?</p> <pre><code>mystr = &quot;20231006-OPUS solution _ cfhs023 OP1696574785263-792&quot; sub1 = &quot;-&quot; sub2 = &quot;OP&quot; #this has to be second OP idx1 = mystr.index(sub1) idx2 = mystr .index(sub2) print(idx1+ len(sub1) + 1,idx2) for nx in range(idx1+ len(sub1) + 1,idx2): print(nx) </code></pre>
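Two hedged sketches using the full filename from the question: `str.index` accepts a start position, so the second occurrence can be found by searching again just past the first; and for the token the question actually wants, a regex anchored on the surrounding delimiters is usually simpler:

```python
import re

mystr = "20231006-OPUS solution _ cfhs0230.22d OP1696574785263-792.txt"

# find the second "OP" by restarting the search just past the first one
first_op = mystr.index("OP")
second_op = mystr.index("OP", first_op + 1)
between = mystr[mystr.index("-") + 1:second_op].strip()
print(between)   # 'OPUS solution _ cfhs0230.22d'

# the interesting token itself is easier to grab with a regex on its delimiters
token = re.search(r"_ (\S+) OP\d", mystr).group(1)
print(token)     # 'cfhs0230.22d'
```

The regex assumes the token always sits between `"_ "` and a space followed by `OP` plus digits, which matches the example but should be checked against other filenames.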
<python><python-3.x>
2023-10-13 13:19:23
3
399
Sagar Rawal
77,288,045
10,086,915
Simplifying a set of constraints with sympy by removing variables
<p>I'm currently writing a simplification tool for another software project and I want to simplify some math.</p> <p>I have a given a set of equations and inequalities like</p> <pre><code>x = a y + 1 = b x = y z = c y &gt; z </code></pre> <p>I want to get a set of simplified equations and inequalities with preferably less variables of the set <code>S={x,y,z}</code> and somehow normalized to 0 on one side of the equation</p> <pre><code>0 = b - (a + 1) 0 &gt; c - b + 1 </code></pre> <p>So the variables in <code>S</code> act as intermediate storage, or connections between the variables <code>{a,b,c}</code>.</p> <ul> <li>all variables and constants can be considered integer</li> <li>it is ok if no such simplification exists</li> <li>the result does not need to be optimal</li> <li>if this works for non linear equations too it would be a plus but is not a must</li> <li>I'm ok with using other python libraries than sympy</li> <li>I want to have a general approach, the given constraints are just an example, I do not know the structure of the equations in advance</li> </ul> <p>I tried to read into sympy.solve, sympy.simplify and sympy.reduce_inequalities.</p> <ul> <li>I didn't manage to get simplify to work on a set of constraints instead of a single expression.</li> <li>solve seems to be only able to handle equalities, and not also inequalities. Also I dont think i want to solve these equations, but rather remove variables.</li> <li>reduce_inequalities seems to be restricted to only one variable. I'm not sure how to execute this function only for one variable at a time s.t. it also considers the equations.</li> </ul> <p>I do not need complete source code but a rough approach how to tackle this problem or if it is at all feasible with sympy.</p> <p>Thank you very much.</p>
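One hedged sketch of the elimination idea with sympy (assuming sympy is available, and using the question's example constraints): solve the definitional equations for the intermediate variables, then substitute into the remaining equation and inequality. The inequality is handled by substituting into its left-minus-right form rather than calling a solver, which sidesteps `reduce_inequalities`'s single-variable restriction:

```python
import sympy as sp

a, b, c, x, y, z = sp.symbols("a b c x y z", integer=True)

# definitional equations give each intermediate variable in terms of a, b, c
defs = sp.solve([sp.Eq(x, a), sp.Eq(y + 1, b), sp.Eq(z, c)], [x, y, z], dict=True)[0]
# defs == {x: a, y: b - 1, z: c}

residual_eq = sp.expand((x - y).subs(defs))    # from x = y  ->  0 = a - (b - 1)
residual_ineq = sp.expand((y - z).subs(defs))  # from y > z  ->  b - 1 - c > 0

print(residual_eq)    # a - b + 1
print(residual_ineq)  # b - c - 1
```

This is only a sketch of the normalization step: picking *which* equations are "definitional" (i.e. which variables to eliminate first) is the real algorithmic choice, and for nonlinear systems `sp.solve` may return multiple or no solution branches, so each branch would need the same substitution treatment.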
<python><constraints><sympy>
2023-10-13 13:13:36
1
638
Max Ostrowski
77,288,028
5,151,362
How would you implement a thread-safe function which reads from a shared hashtable using a key and updates the value in a multi-threaded environment?
<p>Suppose we have some function which takes in a key, retrieves its value from a shared hashtable, performs some operations on it to obtain a new value, and then updates the hashtable with this new value. This function is called from multiple threads, either with the same or different keys, so some sort of race-condition protection using a mutex is necessary. I've come up with the following implementation in Python using locks, where I use a dict as a hashtable. (I know that in Python dict operations are atomic, but this is just for illustration, for the sake of the algorithm.)</p> <pre><code>class Solution: def __init__(self): self.datamap = {} self.lockmap = {} self.datamap_lock = Lock() self.lockmap_lock = Lock() def initializeKey(self, key): with self.datamap_lock: if key not in self.datamap: self.datamap[key] = (-sys.maxsize, 0) with self.lockmap_lock: self.lockmap[key] = Lock() def getLock(self, key): with self.lockmap_lock: return self.lockmap[key] def getValue(self, key): with self.datamap_lock: return self.datamap[key] def storeValue(self, key, max_, value): with self.datamap_lock: self.datamap[key] = (max_, value) def calc(self, key, param_value): self.initializeKey(key) with self.getLock(key): max_, value = self.getValue(key) # Does some operations on value to obtain a new value self.storeValue(key, max_, value) </code></pre> <p>Basically I used a mutex for the hashtable, a mutex for each key, and a mutex for the hashtable which maps from each key to its mutex. Would this implementation be correct and thread-safe? Is there a way to do it better without using so many locks/mutexes?</p>
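To the question's last point, a hedged simplification sketch: once each key's read-modify-write happens under that key's own lock, the data map does not need a separate lock of its own — only the lock *table* needs a (short) guard so two threads don't create different locks for the same key. A runnable version with a small stress test:

```python
import threading
from collections import defaultdict

class KeyedStore:
    def __init__(self):
        self.data = {}
        self._locks = defaultdict(threading.Lock)  # one lock per key, created on demand
        self._meta = threading.Lock()              # guards the lock table only

    def _lock_for(self, key):
        with self._meta:                           # short critical section
            return self._locks[key]

    def update(self, key, fn, default=0):
        # the whole read-modify-write is atomic per key
        with self._lock_for(key):
            self.data[key] = fn(self.data.get(key, default))

store = KeyedStore()

def worker():
    for _ in range(1000):
        store.update("k", lambda v: v + 1)

threads = [threading.Thread(target=worker) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(store.data["k"])  # 4000
```

The trade-off versus the question's three-lock version: this assumes every access to a key's value goes through `update` (or an equivalent method that takes the per-key lock), so the datamap-wide lock becomes redundant. Per-key locks never outlive the store here; if keys are unbounded, evicting unused locks would need extra care.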
<python><multithreading><concurrency><hashtable><mutex>
2023-10-13 13:11:56
2
843
Lew Wei Hao
77,287,948
1,914,034
pytorch - passing image original size to transform function
<p>I am using pytorch with albumentations for image transformations in a semantic segmentation model.</p> <p>For some transformations, I need to pass the original image size as parameters, but I just don't know how I can do it.</p> <p>I have the transforms function and my dataset class in separate modules, like below. In the <code>medium_transforms()</code> function in <code>transforms.py</code>, I would like to have access to the <code>original_height</code> and <code>original_width</code> that I am passing through <code>self.transforms(**result)</code> in <code>dataset.py</code>.</p> <p><strong>dataset.py</strong></p> <pre><code>from torch.utils.data import Dataset import cv2 class SegmentationDataset(Dataset): def __init__(self, imagePaths, maskPaths, transforms): self.imagePaths = imagePaths self.maskPaths = maskPaths self.transforms = transforms def __getitem__(self, idx): image = cv2.imread(self.imagePaths[idx], cv2.COLOR_BGR2RGB) mask = cv2.imread(self.maskPaths[idx], 0) result = {&quot;image&quot;: image, &quot;mask&quot;: mask, &quot;original_height&quot;: image.shape[0], &quot;original_width&quot;: image.shape[1]} # check to see if we are applying any transformations if self.transforms is not None: result = self.transforms(**result) return result </code></pre> <p><strong>transforms.py</strong></p> <pre><code>import albumentations as A from .
import config def medium_transforms(**params): print(params) #here params is an empty dict return [ A.OneOf([ A.RandomSizedCrop(min_max_height=(50, 101), height=params[&quot;original_height&quot;], width=params[&quot;original_width&quot;], p=config.MEDIUM_TRANSFORMS_PROBABILITY), A.PadIfNeeded(min_height=params[&quot;original_height&quot;], min_width=params[&quot;original_width&quot;], p=config.MEDIUM_TRANSFORMS_PROBABILITY) ], p=1) ] def compose(transforms_to_compose): # combine all augmentations into single pipeline return A.Compose([ item for sublist in transforms_to_compose for item in sublist ]) </code></pre> <p><strong>train.py</strong></p> <pre><code>import torch from . import transforms from .dataset import SegmentationDataset trainImages = [&quot;./images/test.png&quot;] trainMasks = [&quot;./masks/test.png&quot;] train_transforms = transforms.compose([ transforms.medium_transforms() ]) train_dataset = SegmentationDataset(imagePaths=trainImages, maskPaths=trainMasks, transforms=train_transforms) </code></pre>
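One hedged pattern that fits this situation (library-agnostic sketch — stand-in objects are used so it runs without cv2/albumentations): instead of passing a pre-built pipeline into the dataset, pass a *factory* that builds the pipeline per image once the size is known, so `original_height`/`original_width` are ordinary function arguments rather than transform inputs:

```python
def make_transforms(original_height, original_width):
    """Factory: in the real code this would return
    A.Compose(medium_transforms(original_height, original_width))."""
    def pipeline(**data):  # stand-in for the albumentations Compose callable
        data["target_size"] = (original_height, original_width)
        return data
    return pipeline

class SegmentationDataset:
    def __init__(self, image_shapes, transform_factory):
        self.image_shapes = image_shapes           # stand-in for reading image files
        self.transform_factory = transform_factory

    def __getitem__(self, idx):
        h, w = self.image_shapes[idx]              # image.shape[0], image.shape[1]
        transforms = self.transform_factory(h, w)  # built per image, sizes in scope
        return transforms(image="img", mask="msk")

ds = SegmentationDataset([(101, 202)], make_transforms)
sample = ds[0]
print(sample["target_size"])  # (101, 202)
```

Rebuilding the pipeline on every `__getitem__` has a small constant cost; if all images share a few sizes, caching the factory result by `(h, w)` (e.g. with `functools.lru_cache`) avoids it.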
<python><pytorch><albumentations>
2023-10-13 12:59:55
1
7,655
Below the Radar
77,287,927
8,580,574
I would like to create data with python faker with columns that are dependent on each other
<p>I have to create a synthetic dataset using faker, in which we have columns whose values are dependent on each other.</p> <p>I tried to use the standard faker providers, but it seems there is no way to use these to generate dependent data.</p> <p>For example, I have a column <strong>A</strong> and a column <strong>B</strong>. Whenever column <strong>A</strong> contains the value <strong>X</strong>, column <strong>B</strong> can only contain certain strings <strong>[ABC, DCB, VBA]</strong>. But if column <strong>A</strong> contains the value <strong>Y</strong>, then column <strong>B</strong> can only contain values from the list <strong>[QWE, ERY, DSA]</strong>.</p> <p>The above is an example, but I basically need some sort of custom logic, and I would like to integrate this using the faker providers. I tried to extend the base provider, but I couldn't find a smart way to do it. Is there an easier, built-in way to do this using faker or some other library?</p>
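A hedged sketch of the core logic, independent of any particular faker API: keep the dependency as a mapping and draw column B conditioned on the already-drawn column A. The mapping values below are the question's examples; the result could then be exposed through a custom faker provider method if desired:

```python
import random

# the value drawn for column A constrains the choices for column B
B_GIVEN_A = {
    "X": ["ABC", "DCB", "VBA"],
    "Y": ["QWE", "ERY", "DSA"],
}

def fake_row(rng):
    a = rng.choice(sorted(B_GIVEN_A))   # draw A first...
    b = rng.choice(B_GIVEN_A[a])        # ...then draw B from A's allowed values
    return {"A": a, "B": b}

rng = random.Random(42)                 # seeded for reproducible synthetic data
rows = [fake_row(rng) for _ in range(100)]
print(all(row["B"] in B_GIVEN_A[row["A"]] for row in rows))  # True
```

The general point is that the dependency lives in row-generation order, not in the individual column generators — whatever fills column B just needs the row's column A value in scope, which is why per-column providers feel awkward for this.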
<python><dataframe><faker>
2023-10-13 12:57:03
0
2,542
PEREZje
77,287,827
8,324,480
Creating QWidget class and inserting it in an QFormLayout looses alignment
<p>I want a simple widget which contains a QLineEdit, a QPushButton and a second QLineEdit to enter a directory and file name. I started by defining everything within the same QWidget, named <code>CentralWidget</code>, which I wanted to add as the central widget of a <code>QMainWindow</code>. But putting everything in the same object rapidly hurts code readability, so for once I wanted to split my GUI into multiple simpler widgets that would be arranged together.</p> <p>But when I do that, the alignment in the <code>QFormLayout</code> is lost.</p> <p>MWE: First, the &quot;working&quot; version, with everything in the same object.</p> <pre><code>from PyQt5.QtCore import QRegExp from PyQt5.QtWidgets import ( QApplication, QLineEdit, QFormLayout, QWidget, QPushButton, QGridLayout, QStyle, ) from PyQt5.QtGui import QRegExpValidator class CentralWidget(QWidget): def __init__(self): super().__init__() self.setObjectName(&quot;central_widget&quot;) # create central widget layout layout = QFormLayout() layout_dir = QGridLayout() line = QLineEdit(objectName=&quot;QLineEdit_dir&quot;) button = QPushButton(&quot;&quot;, objectName=&quot;QPushButton_dir&quot;) button.setIcon(self.style().standardIcon(QStyle.SP_DialogOpenButton)) layout_dir.addWidget(line, 0, 0, 1, 5) layout_dir.addWidget(button, 0, 6, 1, 1) layout.addRow(&quot;Directory:&quot;, layout_dir) validator = QRegExpValidator(QRegExp(r&quot;^[a-zA-Z0-9]{1,8}$&quot;)) line = QLineEdit(maxLength=8, objectName=&quot;QLineEdit_fname&quot;) line.setValidator(validator) layout.addRow(&quot;File name:&quot;, line) self.setLayout(layout) app = QApplication([]) window = CentralWidget() window.show() app.exec() </code></pre> <p><a href="https://i.sstatic.net/OWKJe.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/OWKJe.png" alt="enter image description here" /></a></p> <p>And now the version with the directory part split into a second widget:</p> <pre><code>from pathlib import Path from PyQt5.QtCore import
QRegExp from PyQt5.QtWidgets import ( QApplication, QLineEdit, QFormLayout, QWidget, QPushButton, QGridLayout, QStyle, QFileDialog, ) from PyQt5.QtGui import QRegExpValidator class CentralWidget(QWidget): def __init__(self): super().__init__() self.setObjectName(&quot;central_widget&quot;) # create central widget layout layout = QFormLayout() layout.addRow(&quot;Directory:&quot;, DirectoryDialog()) validator = QRegExpValidator(QRegExp(r&quot;^[a-zA-Z0-9]{1,8}$&quot;)) line = QLineEdit(maxLength=8, objectName=&quot;QLineEdit_fname&quot;) line.setValidator(validator) layout.addRow(&quot;File name:&quot;, line) self.setLayout(layout) class DirectoryDialog(QWidget): def __init__(self): super().__init__() self.line = QLineEdit() layout = QGridLayout() self.button = QPushButton(&quot;&quot;) self.button.setIcon(self.style().standardIcon(QStyle.SP_DialogOpenButton)) self.button.clicked.connect(self.browse_path) layout.addWidget(self.line, 0, 0, 1, 5) layout.addWidget(self.button, 0, 6, 1, 1) self.setLayout(layout) def browse_path(self): path = QFileDialog.getExistingDirectory( self, &quot;Select directory&quot;, str(Path.home()), QFileDialog.ShowDirsOnly ) if len(path) != 0: self.line.setText(path) app = QApplication([]) window = CentralWidget() window.show() app.exec() </code></pre> <p><a href="https://i.sstatic.net/Ry7Lv.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Ry7Lv.png" alt="enter image description here" /></a></p> <p>So, how can I fix the alignment, and more importantly, why did it mess up the alignment?</p>
<python><qt><pyqt><pyside>
2023-10-13 12:40:06
1
5,826
Mathieu
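A likely culprit (worth verifying in your setup): a nested `QWidget` carries its own layout, and that layout adds its *own* default contents margins on top of the form layout's spacing, which shifts the inner line edit relative to the other rows. A minimal sketch of the fix is to zero the inner layout's margins; everything else here mirrors the question's `DirectoryDialog`:

```python
import os
os.environ.setdefault("QT_QPA_PLATFORM", "offscreen")  # allow headless use

from PyQt5.QtWidgets import (
    QApplication, QGridLayout, QLineEdit, QPushButton, QStyle, QWidget,
)

class DirectoryDialog(QWidget):
    """Line edit + browse button; margins zeroed so it aligns in a QFormLayout."""

    def __init__(self):
        super().__init__()
        self.line = QLineEdit()
        self.button = QPushButton("")
        self.button.setIcon(self.style().standardIcon(QStyle.SP_DialogOpenButton))
        layout = QGridLayout()
        # The nested widget's layout adds its own default margins on top of
        # the outer QFormLayout's spacing, which misaligns the row; drop them.
        layout.setContentsMargins(0, 0, 0, 0)
        layout.addWidget(self.line, 0, 0, 1, 5)
        layout.addWidget(self.button, 0, 6, 1, 1)
        self.setLayout(layout)

app = QApplication.instance() or QApplication([])
widget = DirectoryDialog()
m = widget.layout().contentsMargins()
print(m.left(), m.top(), m.right(), m.bottom())  # 0 0 0 0
```

With the margins removed, the composite widget should occupy the same vertical envelope as a bare `QLineEdit` row.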
77,287,622
9,518,886
ModuleNotFoundError: No module named 'kafka.vendor.six.moves' in Dockerized Django Application
<p>I am facing an issue with my Dockerized Django application. I am using the following Dockerfile to build my application:</p> <pre><code>FROM python:alpine ENV PYTHONDONTWRITEBYTECODE 1 ENV PYTHONUNBUFFERED 1 ENV DJANGO_SUPERUSER_PASSWORD datahub RUN mkdir app WORKDIR /app COPY ./app . RUN mkdir -p volumes RUN apk update RUN apk add --no-cache gcc python3-dev musl-dev mariadb-dev RUN pip3 install --upgrade pip RUN pip3 install -r requirements.txt RUN apk del gcc python3-dev musl-dev CMD python3 manage.py makemigrations --noinput &amp;&amp;\ while ! python3 manage.py migrate --noinput; do sleep 1; done &amp;&amp; \ python3 manage.py collectstatic --noinput &amp;&amp;\ python3 manage.py createsuperuser --user datahub --email admin@localhost --noinput;\ python3 manage.py runserver 0.0.0.0:8000 </code></pre> <p>In my requirements.txt file:</p> <pre><code>kafka-python==2.0.2 </code></pre> <p>When I run my application inside the Docker container, I encounter the following error:</p> <pre><code>ModuleNotFoundError: No module named 'kafka.vendor.six.moves' </code></pre> <p>Compelete Error:</p> <pre><code>Traceback (most recent call last): File &quot;&lt;stdin&gt;&quot;, line 1, in &lt;module&gt; File &quot;/usr/local/lib/python3.12/site-packages/kafka/__init__.py&quot;, line 23, in &lt;module&gt; from kafka.consumer import KafkaConsumer File &quot;/usr/local/lib/python3.12/site-packages/kafka/consumer/__init__.py&quot;, line 3, in &lt;module&gt; from kafka.consumer.group import KafkaConsumer File &quot;/usr/local/lib/python3.12/site-packages/kafka/consumer/group.py&quot;, line 13, in &lt;module&gt; from kafka.consumer.fetcher import Fetcher File &quot;/usr/local/lib/python3.12/site-packages/kafka/consumer/fetcher.py&quot;, line 19, in &lt;module&gt; from kafka.record import MemoryRecords File &quot;/usr/local/lib/python3.12/site-packages/kafka/record/__init__.py&quot;, line 1, in &lt;module&gt; from kafka.record.memory_records import MemoryRecords, MemoryRecordsBuilder 
File &quot;/usr/local/lib/python3.12/site-packages/kafka/record/memory_records.py&quot;, line 27, in &lt;module&gt; from kafka.record.legacy_records import LegacyRecordBatch, LegacyRecordBatchBuilder File &quot;/usr/local/lib/python3.12/site-packages/kafka/record/legacy_records.py&quot;, line 50, in &lt;module&gt; from kafka.codec import ( File &quot;/usr/local/lib/python3.12/site-packages/kafka/codec.py&quot;, line 9, in &lt;module&gt; from kafka.vendor.six.moves import range ModuleNotFoundError: No module named 'kafka.vendor.six.moves' </code></pre> <p>I have already tried updating the Kafka package, checking dependencies, and installing the six package manually. However, the issue still persists. Can anyone provide insights on how to resolve this error?</p> <p>Thank you in advance for your help!</p>
<python><django><apache-kafka>
2023-10-13 12:11:04
7
537
Navid Sadeghi
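The traceback paths (`/usr/local/lib/python3.12/...`) show the real problem: `python:alpine` now resolves to Python 3.12, where `kafka-python` 2.0.2's vendored `six` breaks because it relies on interpreter internals removed in 3.12. One hedged fix is pinning the base image to an older interpreter (3.11 below is an assumption; any version the library still supports works), another is switching to a maintained fork such as `kafka-python-ng`:

```dockerfile
# Pin to an interpreter that kafka-python 2.0.2 still supports instead of
# the floating "python:alpine" tag, which now pulls Python 3.12.
FROM python:3.11-alpine
```

Pinning also makes the build reproducible, so a future base-image bump cannot silently reintroduce the error.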
77,287,514
8,204,956
Django ORM group by on primary key mysql
<p>In the django ORM it's easy to group by when using an aggregate function like this:</p> <pre class="lang-py prettyprint-override"><code>from django.contrib.auth.models import User User.objects.values('id').annotate(Count('id')) </code></pre> <p>However, if I want to group by <em>without</em> an aggregation function (in the case of a join, for example)</p> <pre><code>User.objects.filter(groups__name='foo') </code></pre> <p>you can use the <code>distinct</code> function, but it will apply DISTINCT <em>on all</em> columns, which is painfully slow.</p> <p>In some other DBs you can do <code>DISTINCT ON</code>, but not on mariadb/mysql.</p> <p>Is there a way to group by a specific column with the django ORM, or do I have to write raw SQL?</p>
<python><mysql><django><django-orm>
2023-10-13 11:55:02
1
938
Rémi Desgrange
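One way to avoid both the join duplicates and the all-columns DISTINCT is to filter on a primary-key subquery; in the ORM that would look something like `User.objects.filter(pk__in=User.objects.filter(groups__name='foo').values('pk'))` (an untested sketch, not verified against MySQL's planner). The raw-SQL idea it corresponds to can be demonstrated with stdlib sqlite3 (sqlite is used here purely for illustration; the query shape is portable to MySQL/MariaDB):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE auth_user (id INTEGER PRIMARY KEY, username TEXT);
    CREATE TABLE auth_group (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE user_groups (user_id INTEGER, group_id INTEGER);
    INSERT INTO auth_user VALUES (1, 'alice'), (2, 'bob');
    INSERT INTO auth_group VALUES (10, 'foo'), (11, 'foo');  -- two groups named foo
    INSERT INTO user_groups VALUES (1, 10), (1, 11), (2, 10);
""")

# A plain join duplicates alice (she belongs to two groups named 'foo'):
joined = con.execute("""
    SELECT u.username FROM auth_user u
    JOIN user_groups ug ON ug.user_id = u.id
    JOIN auth_group g ON g.id = ug.group_id
    WHERE g.name = 'foo'
""").fetchall()
print(sorted(joined))  # [('alice',), ('alice',), ('bob',)]

# Filtering on a pk subquery deduplicates without DISTINCT over every column:
deduped = con.execute("""
    SELECT u.username FROM auth_user u
    WHERE u.id IN (
        SELECT ug.user_id FROM user_groups ug
        JOIN auth_group g ON g.id = ug.group_id
        WHERE g.name = 'foo'
    )
""").fetchall()
print(sorted(deduped))  # [('alice',), ('bob',)]
```

The subquery only ever projects the primary key, so the outer query returns each user row once with no DISTINCT cost.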
77,287,238
10,139,792
Send POST request with custom non-JSON content in Python
<p>I am trying to create a post request to a specific url but the content I would like to pass is just simple text, not in JSON format, example:</p> <pre><code>POST / HTTP/1.1 Host: www.example.re Content-Type: text/plain Content-Length: 51 \r\n I really like cheese, but I hate it in JSON format! </code></pre> <p>I've tried using the python <em>requests</em> module but it doesn't allow me to define the content to be just simple text. I've found a similar question on stack overflow but it remains unanswered.</p> <p>I would really appreciate help.</p>
<python><http><python-requests>
2023-10-13 11:04:45
1
352
Tal Moshel
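The `requests` module does allow an arbitrary body: pass a plain string via `data=` and set the `Content-Type` header yourself. A sketch (the URL is the example host from the question; nothing is sent over the network here, preparing the request just shows what would go on the wire):

```python
import requests

body = "I really like cheese, but I hate it in JSON format!"
req = requests.Request(
    "POST",
    "http://www.example.re/",
    data=body,                                  # plain str, NOT json=
    headers={"Content-Type": "text/plain"},
).prepare()

print(req.headers["Content-Type"])  # text/plain
print(req.body)                     # the raw string, untouched
```

To actually send it, `requests.post(url, data=body, headers={"Content-Type": "text/plain"})` is enough; `Content-Length` is computed for you. The `json=` keyword is only for when you *want* JSON encoding.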
77,287,175
8,248,194
Getting NonExistentTimeError in pytz
<p>With this example:</p> <pre class="lang-py prettyprint-override"><code>import pandas as pd import pytz timestamp = pd.Timestamp('2023-04-30 02:20:28.594000') timezone = pytz.timezone('Africa/Casablanca') print( timestamp.tz_localize(timezone, ambiguous=True) ) </code></pre> <p>I'm getting <code>NonExistentTimeError: 2023-04-30 02:20:28.594000</code>.</p> <p>Why is this time nonexistent? I happen to have local Casablanca data with that time.</p>
<python><pandas><timezone><python-datetime><pytz>
2023-10-13 10:56:28
1
2,581
David Masip
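The time falls in a spring-forward gap in the tz database that pytz bundles: Morocco switches back to UTC+1 after Ramadan, and the exact Sunday has differed between tzdata releases, which is likely why locally observed data disagrees with pytz here (a hedged explanation; the authoritative answer is in the tzdata changelog for `Africa/Casablanca`). Whatever the cause, `tz_localize` takes a `nonexistent` argument for times inside a gap. A sketch using a gap that is stable across tzdata versions (Paris, last Sunday of March):

```python
import pandas as pd

# 02:30 on 2023-03-26 does not exist in Paris: clocks jump 02:00 -> 03:00.
ts = pd.Timestamp("2023-03-26 02:30:00")

shifted = ts.tz_localize("Europe/Paris", nonexistent="shift_forward")
print(shifted)  # 2023-03-26 03:00:00+02:00 -- moved to the first valid instant

# Other options: nonexistent="shift_backward", "NaT", or a pd.Timedelta to add.
```

Note that `ambiguous=` (which the question uses) only handles fall-back *repeated* times; gaps need `nonexistent=`.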
77,286,794
5,437,090
TypeError: SparseArray does not support item assignment via setitem
<p>Given:</p> <pre><code>import pandas as pd print(pd.__version__) # '2.1.0' df=pd.DataFrame([[1., 0., 1.5], [0., 2., 0.]], dtype=pd.SparseDtype(&quot;float32&quot;)) </code></pre> <p>I'd like to assign custom values in my sliced pandas sparse dataframe, so I do the following:</p> <pre><code>df_sliced=df.iloc[0].copy() # OK df_sliced.iloc[:]=0 # ERROR </code></pre> <p>I get the following error:</p> <pre><code>Traceback (most recent call last): File &quot;&lt;stdin&gt;&quot;, line 1, in &lt;module&gt; File &quot;/CSC_CONTAINER/miniconda/envs/env1/lib/python3.9/site-packages/pandas/core/indexing.py&quot;, line 885, in __setitem__ iloc._setitem_with_indexer(indexer, value, self.name) File &quot;/CSC_CONTAINER/miniconda/envs/env1/lib/python3.9/site-packages/pandas/core/indexing.py&quot;, line 1895, in _setitem_with_indexer self._setitem_single_block(indexer, value, name) File &quot;/CSC_CONTAINER/miniconda/envs/env1/lib/python3.9/site-packages/pandas/core/indexing.py&quot;, line 2138, in _setitem_single_block self.obj._mgr = self.obj._mgr.setitem(indexer=indexer, value=value) File &quot;/CSC_CONTAINER/miniconda/envs/env1/lib/python3.9/site-packages/pandas/core/internals/managers.py&quot;, line 399, in setitem return self.apply(&quot;setitem&quot;, indexer=indexer, value=value) File &quot;/CSC_CONTAINER/miniconda/envs/env1/lib/python3.9/site-packages/pandas/core/internals/managers.py&quot;, line 354, in apply applied = getattr(b, f)(**kwargs) File &quot;/CSC_CONTAINER/miniconda/envs/env1/lib/python3.9/site-packages/pandas/core/internals/blocks.py&quot;, line 1758, in setitem values[indexer] = value File &quot;/CSC_CONTAINER/miniconda/envs/env1/lib/python3.9/site-packages/pandas/core/arrays/sparse/array.py&quot;, line 583, in __setitem__ raise TypeError(msg) TypeError: SparseArray does not support item assignment via setitem </code></pre> <p>What's an efficient way (workaround) to assign values in <code>SparseDtype</code> in Pandas?</p>
<python><pandas><sparse-matrix>
2023-10-13 09:59:18
1
1,621
farid
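`SparseArray` simply does not implement in-place `__setitem__`, so a common workaround (a sketch, not the only option) is densify, mutate, then convert back, which is cheap when you only materialize the slice you are editing:

```python
import pandas as pd

df = pd.DataFrame([[1.0, 0.0, 1.5], [0.0, 2.0, 0.0]],
                  dtype=pd.SparseDtype("float32"))

row = df.iloc[0]                       # sparse-backed Series
dense = row.sparse.to_dense()          # densify just this one row
dense.iloc[:] = 0                      # mutate freely on the dense copy
row = dense.astype(pd.SparseDtype("float32"))  # back to sparse

print(row.dtype)           # Sparse[float32, nan]
print(float(row.iloc[0]))  # 0.0
```

If many cells must change, it can be simpler to keep the data dense while editing and convert the whole frame back to sparse once at the end.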
77,286,695
12,827,931
Join two dataframes as median (IQR)
<p>Let's assume two dataframes of medians and interquartile ranges</p> <pre><code>medians = pd.DataFrame({&quot;Median A&quot; : [16, 14, 11], &quot;Median B&quot; : [9, 8, 5]}) iqrs = pd.DataFrame({&quot;IQR A&quot; : [5, 3, 4], &quot;IQR B&quot; : [2, 2, 4]}) </code></pre> <p>What I'd like to do is to join these two dataframes so that each cell contains Median (IQR), like so:</p> <pre><code> Value A Value B 0 16 (5) 9 (2) 1 14 (3) 8 (2) 2 11 (4) 5 (4) </code></pre> <p>I guess I need to start by converting both dataframes to type <code>str</code> and applying <code>.join()</code>, but whatever I do it simply doesn't work.</p>
<python><pandas><dataframe><join>
2023-10-13 09:43:30
2
447
thesecond
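Element-wise string concatenation is the right instinct; the likely reason it "simply doesn't work" is that the two frames have different column labels, so the `+` operation aligns on columns and produces NaN everywhere. Renaming one set of columns to match before concatenating fixes that. A sketch:

```python
import pandas as pd

medians = pd.DataFrame({"Median A": [16, 14, 11], "Median B": [9, 8, 5]})
iqrs = pd.DataFrame({"IQR A": [5, 3, 4], "IQR B": [2, 2, 4]})

# Rename the IQR columns to match so the element-wise concat aligns.
paired = (medians.astype(str)
          + " ("
          + iqrs.set_axis(medians.columns, axis=1).astype(str)
          + ")")
paired.columns = ["Value A", "Value B"]

print(paired)
#   Value A Value B
# 0  16 (5)   9 (2)
# 1  14 (3)   8 (2)
# 2  11 (4)   5 (4)
```

`iqrs.values` instead of `set_axis` also works when both frames share row order, but explicit relabeling keeps the alignment visible.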
77,286,508
2,786,156
Convert bytes to a list of 2's complement 8-byte signed integers
<p><code>python 3.7</code></p> <p>Given a <code>bytes</code> object whose size is a multiple of <code>8</code>, is there a way to convert it to a <code>list</code> of 8-byte signed integers? I tried the following:</p> <pre><code>some_bytes = #retrieve bytes print(list(some_bytes)) #prints as if it were a sequence of single-byte integers </code></pre> <p>Is there a way to convert <code>bytes</code> to a list of 8-byte signed integers?</p>
<python><list><memory><integer>
2023-10-13 09:13:15
2
27,717
St.Antario
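Iterating a `bytes` object always yields one unsigned value per byte; to reinterpret the buffer as 8-byte two's-complement integers, `struct.unpack` with the `q` format code does it in one call (available on Python 3.7). A sketch:

```python
import struct

def to_signed64(buf: bytes, byteorder: str = "little") -> list:
    """Interpret buf as consecutive 8-byte two's-complement integers."""
    if len(buf) % 8:
        raise ValueError("buffer length must be a multiple of 8")
    prefix = "<" if byteorder == "little" else ">"
    # 'q' is struct's 8-byte signed (two's complement) format code.
    return list(struct.unpack(f"{prefix}{len(buf) // 8}q", buf))

payload = ((-1).to_bytes(8, "little", signed=True)
           + (5).to_bytes(8, "little", signed=True))
print(to_signed64(payload))  # [-1, 5]
```

`int.from_bytes(chunk, "little", signed=True)` over 8-byte slices is an equivalent pure-Python alternative; `struct` avoids the explicit slicing loop.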
77,286,483
636,626
Maturin project with Python bindings behind feature
<p>I'm trying to write <em>optional Python bindings</em> for a Rust library, using maturin and PyO3. The default layout created by maturin is</p> <pre><code>my-project ├── Cargo.toml ├── python │ └── my_project │ ├── __init__.py │ └── bar.py ├── pyproject.toml ├── README.md └── src └── lib.rs </code></pre> <p>where all Rust code, including the <code>#[pymodule]</code> attributes go into <code>src/lib.rs</code>:</p> <pre><code>use pyo3::prelude::*; /// Formats the sum of two numbers as string. #[pyfunction] fn sum_as_string(a: usize, b: usize) -&gt; PyResult&lt;String&gt; { Ok((a + b).to_string()) } /// A Python module implemented in Rust. #[pymodule] fn rir_generator(_py: Python, m: &amp;PyModule) -&gt; PyResult&lt;()&gt; { m.add_function(wrap_pyfunction!(sum_as_string, m)?)?; Ok(()) } </code></pre> <p>However, since I want to put all of this code behind a conditional feature, I am trying to put all of that wrapper code into <code>src/python.rs</code> and then import it into <code>src/lib.rs</code> using</p> <pre><code>#[cfg(feature = &quot;python&quot;)] pub mod python; </code></pre> <p>But building this fails with the warning</p> <blockquote> <p>Warning: Couldn't find the symbol <code>PyInit_my_project</code> in the native library. Python will fail to import this module. If you're using pyo3, check that <code>#[pymodule]</code> uses <code>my_project</code> as module name</p> </blockquote> <p>If I put the code back into <code>src/lib.rs</code>, the warning disappears.</p> <p>Is there a way to put PyO3 bindings into a submodule that is then conditionally imported using features?</p>
<python><rust><pyo3><maturin>
2023-10-13 09:10:21
1
37,219
Nils Werner
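The `PyInit_my_project` symbol only exists when the gating feature is compiled in, so maturin has to be told to enable it. One hedged fix (assuming the feature really is named `python`, as in the question) is to list it under `[tool.maturin]`, which maturin passes to cargo when building the wheel:

```toml
# pyproject.toml
[tool.maturin]
# Build the wheel with the gated bindings enabled so that
# PyInit_my_project is actually exported from the cdylib.
features = ["pyo3/extension-module", "python"]
```

For ad-hoc local builds, `maturin develop --features python` should have the same effect without touching `pyproject.toml`.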
77,286,476
2,148,773
Does the Debian python3-opencv package have advantages over the PyPI opencv-python one?
<p>Under Debian I can install the OpenCV bindings for Python using (at least) one of these two ways:</p> <ul> <li>from the <a href="https://packages.debian.org/bookworm/python3-opencv" rel="nofollow noreferrer">Debian repository</a> (with <code>apt install python3-opencv</code>)</li> <li>or from the <a href="https://pypi.org/project/opencv-python/" rel="nofollow noreferrer">PyPI repository</a> (with <code>pip install opencv-python</code>)</li> </ul> <p>Does the Debian package have any advantages over the PyPI package? For example, does it use better CPU optimizations, i.e. is it faster?</p> <p>Or is there some recommendation by the OpenCV developers regarding this choice?</p> <p>Background for this question is that I'm currently using the Debian package, but I'd prefer to switch to the PyPI package for easier maintenance; but I don't want to lose performance or miss other improvements by making that change.</p>
<python><opencv><pip><debian><pypi>
2023-10-13 09:08:51
0
6,891
oliver
77,286,470
13,396,497
Write a regular expression for a text file containing Wireshark logs and read a few lines
<p>I am trying to write regular expressions for wireshark logs captured in text file and read few lines from frames which says 'eth:ethertype:ip:tcp:http:xml' protocol in frame only.</p> <p>If above frame condition match I want to read only lines -</p> <p>1- Internet Protocol Version( for Src and Dst IP's)</p> <p>2- Transmission Control Protocol ( for Port's)</p> <p>3- Next line of Hypertext Transfer Protocol (e.g. 'HTTP/1.1 200 OK\r\n' from 1st Frame)</p> <p>4- Complete tag from 'eXtensible Markup Language' to closing &lt;/SOAP-ENV:Envelope&gt; OR &lt;/s:Envelope&gt; etc.</p> <pre><code>**Frame** 187: 852 bytes on wire (66 bits), 852 bytes captured (66 bits) on interface \Device\N, id 0 Section number: 1 Frame Number: 187 Frame Length: 852 bytes (6816 bits) Capture Length: 852 bytes (6816 bits) [Frame is marked: False] [Frame is ignored: False] [Protocols in frame: eth:ethertype:ip:tcp:http:xml] Type: IPv4 (0x0800) Internet Protocol Version 4, Src: 10.8.X.X, Dst: 1.1.X.X 0100 .... = Version: 4 .... 0101 = Header Length: 20 bytes (5) ..0. .... 
= More fragments: Not set ...0 0000 0000 0000 = Fragment Offset: 0 Time to Live: 125 Protocol: TCP (6) Header Checksum: 0xc511 [validation disabled] [Header checksum status: Unverified] Transmission Control Protocol, Src Port: 8, Dst Port: 49, Seq: 1, Ack: 1, Len: 786 Source Port: 8 Destination Port: 49 [Stream index: 234] [Conversation completeness: Incomplete, DATA (15)] [TCP Segment Len: 64] Sequence Number: 26 (relative sequence number) Sequence Number (raw): 274913 [Next Sequence Number: 60 (relative sequence number)] Acknowledgment Number: 43 (relative ack number) Hypertext Transfer Protocol HTTP/1.1 200 OK\r\n [Expert Info (Chat/Sequence): HTTP/1.1 200 OK\r\n] [HTTP/1.1 200 OK\r\n] [Severity level: Chat] [Group: Sequence] Response Version: HTTP/1.1 Status Code: 200 [Status Code Description: OK] Response Phrase: OK eXtensible Markup Language &lt;?xml version=&quot;1.0&quot; encoding=&quot;utf-8&quot; ?&gt; &lt;SOAP-ENV:Envelope xmlns:soap=&quot;http://schemas.xmlsoap.org/soap/envelope/&quot; xmlns:xsi=&quot;http://www.w3.org/2001/XMLSchema-instance&quot; xmlns:xsd=&quot;http://www.w3.org/2001/XMLSchema&quot;&gt; &lt;soap:Body&gt; &lt;initResponse xmlns=&quot;http://www.test.com/Al/&quot;&gt; &lt;status&gt; &lt;Timestamp&gt; 2023-09-26T16:51:59.6468888+08:00 &lt;/Timestamp&gt; &lt;Type&gt; NO PACKET &lt;/Type&gt; &lt;State&gt; CLEARED &lt;/State&gt; &lt;/status&gt; &lt;/initResponse&gt; &lt;/soap:Body&gt; &lt;/SOAP-ENV:Envelope&gt; **Frame** 179: 60 bytes on wire (480 bits), 60 bytes captured (480 bits) on interface , id 0 Section number: 1 Arrival Time: Sep 26, 2023 16:52:01.174796000 Standard Time [Time shift for this packet: 0.000000000 seconds] Epoch Time: 1695718321.174796000 seconds [Time delta from previous captured frame: 0.000471000 seconds] Frame Number: 179 [Frame is marked: False] [Frame is ignored: False] [Protocols in frame: eth:ethertype:ip:udp:data] Padding: 000000 Internet Protocol Version 4, Src: 192.60.X.X, Dst: 111.8.X.X 0100 .... 
= Version: 4 .... 0101 = Header Length: 20 bytes (5) 010. .... = Flags: 0x2, Don't fragment Time to Live: 64 [Header checksum status: Unverified] User Datagram Protocol, Src Port: 474, Dst Port: 271 Source Port: 474 Destination Port: 271 Length: 23 [Checksum Status: Unverified] [Stream index: 0] [Timestamps] [Time since first frame: 0.358386000 seconds] [Time since previous frame: 0.000842000 seconds] UDP payload (15 bytes) Data (15 bytes) **Frame** 190: 852 bytes on wire (66 bits), 852 bytes captured (66 bits) on interface \Device\N, id 0 Section number: 1 Frame Number: 190 Frame Length: 852 bytes (816 bits) Capture Length: 852 bytes (816 bits) [Frame is marked: False] [Frame is ignored: False] [Protocols in frame: eth:ethertype:ip:tcp:http:xml] Internet Protocol Version 4, Src: 1.1.X.X, Dst: 10.8.X.X 0100 .... = Version: 4 .... 0101 = Header Length: 20 bytes (5) 010. .... = Flags: 0x2, Don't fragment ...0 0000 0000 0000 = Fragment Offset: 0 Time to Live: 125 Protocol: TCP (6) [Header checksum status: Unverified] Transmission Control Protocol, Src Port: 8, Dst Port: 49, Seq: 1, Ack: 1, Len: 786 Source Port: 8 Destination Port: 49 [Stream index: 234] [Conversation completeness: Incomplete, DATA (15)] [TCP Segment Len: 64] Sequence Number: 26 (relative sequence number) Sequence Number (raw): 274913 [Next Sequence Number: 60 (relative sequence number)] Acknowledgment Number: 43 (relative ack number) Hypertext Transfer Protocol POST /Ser1.asmx HTTP/1.1\r\n [Expert Info (Chat/Sequence): POST /Ser1.asmx HTTP/1.1\r\n] [POST /Ser1.asmx HTTP/1.1\r\n] [Severity level: Chat] [Group: Sequence] Request Method: POST eXtensible Markup Language &lt;?xml version=&quot;1.0&quot; encoding=&quot;utf-8&quot; ?&gt; &lt;s:Envelope xmlns:soap=&quot;http://schemas.xmlsoap.org/soap/envelope/&quot; xmlns:xsi=&quot;http://www.w3.org/2001/XMLSchema-instance&quot; xmlns:xsd=&quot;http://www.w3.org/2001/XMLSchema&quot;&gt; &lt;soap:Body&gt; &lt;initResponse 
xmlns=&quot;http://www.test.com/Al/&quot;&gt; &lt;status&gt; &lt;Timestamp&gt; 2023-09-26T16:51:59.6468888+08:00 &lt;/Timestamp&gt; &lt;Type&gt; NO PACKET &lt;/Type&gt; &lt;State&gt; CLEARED &lt;/State&gt; &lt;/status&gt; &lt;/initResponse&gt; &lt;/soap:Body&gt; &lt;/s:Envelope&gt; **Frame** 192: 852 bytes on wire (66 bits), 852 bytes captured (66 bits) on interface \Device\N, id 0 Section number: 1 Frame Number: 192 Frame Length: 852 bytes (816 bits) Capture Length: 852 bytes (816 bits) [Frame is marked: False] [Frame is ignored: False] [Protocols in frame: eth:ethertype:ip:tcp:http:xml] Type: IPv4 (0x0800) Internet Protocol Version 4, Src: 1.1.X.X, Dst: 10.8.X.X 0100 .... = Version: 4 .... 0101 = Header Length: 20 bytes (5) Time to Live: 125 Protocol: TCP (6) Header Checksum: 0xc511 [validation disabled] [Header checksum status: Unverified] Transmission Control Protocol, Src Port: 8, Dst Port: 49, Seq: 1, Ack: 1, Len: 786 Source Port: 8 Destination Port: 49 [Stream index: 234] [Conversation completeness: Incomplete, DATA (15)] [TCP Segment Len: 64] Sequence Number: 26 (relative sequence number) Sequence Number (raw): 274913 [Next Sequence Number: 60 (relative sequence number)] Acknowledgment Number: 43 (relative ack number) Hypertext Transfer Protocol HTTP/1.1 200 OK\r\n [Expert Info (Chat/Sequence): HTTP/1.1 200 OK\r\n] [HTTP/1.1 200 OK\r\n] [Severity level: Chat] [Group: Sequence] Response Version: HTTP/1.1 Status Code: 200 [Status Code Description: OK] Response Phrase: OK eXtensible Markup Language &lt;?xml version=&quot;1.0&quot; encoding=&quot;utf-8&quot; ?&gt; &lt;soap:Envelope xmlns:soap=&quot;http://schemas.xmlsoap.org/soap/envelope/&quot; xmlns:xsi=&quot;http://www.w3.org/2001/XMLSchema-instance&quot; xmlns:xsd=&quot;http://www.w3.org/2001/XMLSchema&quot;&gt; &lt;soap:Body&gt; &lt;initResponse xmlns=&quot;http://www.test.com/Al/&quot;&gt; &lt;status&gt; &lt;Timestamp&gt; 2023-09-26T16:51:59.6468888+08:00 &lt;/Timestamp&gt; &lt;Type&gt; NO PACKET 
&lt;/Type&gt; &lt;State&gt; CLEARED &lt;/State&gt; &lt;/status&gt; &lt;/initResponse&gt; &lt;/soap:Body&gt; &lt;/soap:Envelope&gt; **Frame** 195: 60 bytes on wire (480 bits), 60 bytes captured (480 bits) on interface , id 0 Section number: 1 Encapsulation type: Ethernet (1) Arrival Time: Sep 26, 2023 16:52:01.174796000 Standard Time [Time shift for this packet: 0.000000000 seconds] Epoch Time: 1695718321.174796000 seconds [Time delta from previous captured frame: 0.000471000 seconds] Frame Number: 195 [Frame is marked: False] [Frame is ignored: False] [Protocols in frame: eth:ethertype:ip:udp:data] Type: IPv4 (0x0800) Padding: 000000 Internet Protocol Version 4, Src: 192.60.X.X, Dst: 111.8.X.X 0100 .... = Version: 4 .... 0101 = Header Length: 20 bytes (5) 0000 00.. = Differentiated Services Codepoint: Default (0) .... ..00 = Explicit Congestion Notification: Not ECN-Capable Transport (0) Total Length: 43 Identification: 0x0000 (0) 010. .... = Flags: 0x2, Don't fragment 0... .... = Reserved bit: Not set Time to Live: 64 Protocol: UDP (17) Header Checksum: 0x698b [validation disabled] [Header checksum status: Unverified] User Datagram Protocol, Src Port: 474, Dst Port: 271 Source Port: 474 Destination Port: 271 Length: 23 Checksum: 0xa007 [unverified] [Stream index: 0] [Timestamps] [Time since first frame: 0.358386000 seconds] [Time since previous frame: 0.000842000 seconds] UDP payload (15 bytes) Data (15 bytes) </code></pre> <p>Output from 1st Frame (which says - Protocols in frame: eth:ethertype:ip:tcp:http:xml)-</p> <pre><code>Internet Protocol Version 4, Src: 10.8.X.X, Dst: 1.1.X.X Transmission Control Protocol, Src Port: 8, Dst Port: 49, Seq: 1, Ack: 1, Len: 786 HTTP/1.1 200 OK\r\n eXtensible Markup Language &lt;?xml version=&quot;1.0&quot; encoding=&quot;utf-8&quot; ?&gt; &lt;SOAP-ENV:Envelope xmlns:soap=&quot;http://schemas.xmlsoap.org/soap/envelope/&quot; xmlns:xsi=&quot;http://www.w3.org/2001/XMLSchema-instance&quot; 
xmlns:xsd=&quot;http://www.w3.org/2001/XMLSchema&quot;&gt; &lt;soap:Body&gt; &lt;initResponse xmlns=&quot;http://www.test.com/Al/&quot;&gt; &lt;status&gt; &lt;Timestamp&gt; 2023-09-26T16:51:59.6468888+08:00 &lt;/Timestamp&gt; &lt;Type&gt; NO PACKET &lt;/Type&gt; &lt;State&gt; CLEARED &lt;/State&gt; &lt;/status&gt; &lt;/initResponse&gt; &lt;/soap:Body&gt; &lt;/SOAP-ENV:Envelope&gt; </code></pre> <p>I was trying below to capture the whole frame in python but it's not working -</p> <pre><code>with open(wireshark.txt) as myfile: content = myfile.read() text = re.search(r'^eth:ethertype:ip:tcp:http:xml$\.(.*)&lt;/[Ss][Oo][Aa][Pp]?[-]?[Ee][Nn][Vv]:?Envelope&gt;', content).group(1) </code></pre>
<python><regex>
2023-10-13 09:07:48
1
347
RKIDEV
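The attempted regex fails for two reasons: `^...$` anchors with default flags never span the multi-line frame, and `.` does not cross newlines without `re.S`. A more robust approach is to split the dump into frames first, then search only the frames whose protocol line matches. A sketch (the patterns assume the multi-line layout of Wireshark's plain-text packet-dissection export; the embedded sample is a trimmed stand-in for the real file):

```python
import re

TARGET = "eth:ethertype:ip:tcp:http:xml"

def extract_frames(text):
    """Pull the IP, TCP, first HTTP line and SOAP envelope from matching frames."""
    results = []
    # Each frame starts with a line like "Frame 187: 852 bytes ..."
    for frame in re.split(r"(?m)^(?=Frame \d+:)", text):
        if TARGET not in frame:
            continue
        out = {}
        m = re.search(r"^\s*(Internet Protocol Version [^\n]*)", frame, re.M)
        out["ip"] = m.group(1) if m else None
        m = re.search(r"^\s*(Transmission Control Protocol[^\n]*)", frame, re.M)
        out["tcp"] = m.group(1) if m else None
        # The line immediately after the "Hypertext Transfer Protocol" header.
        m = re.search(r"Hypertext Transfer Protocol\s*\n\s*([^\n]+)", frame)
        out["http"] = m.group(1) if m else None
        # Everything from the XML marker to the closing Envelope tag,
        # whatever namespace prefix (SOAP-ENV, soap, s, ...) it uses.
        m = re.search(r"eXtensible Markup Language\s*(.*?</[\w-]+:Envelope>)",
                      frame, re.S)
        out["xml"] = m.group(1).strip() if m else None
        results.append(out)
    return results

sample = """Frame 187: 852 bytes
    [Protocols in frame: eth:ethertype:ip:tcp:http:xml]
    Internet Protocol Version 4, Src: 10.8.0.1, Dst: 1.1.0.2
    Transmission Control Protocol, Src Port: 8, Dst Port: 49
    Hypertext Transfer Protocol
        HTTP/1.1 200 OK\\r\\n
    eXtensible Markup Language
        <?xml version="1.0"?>
        <SOAP-ENV:Envelope><soap:Body/></SOAP-ENV:Envelope>
Frame 188: 60 bytes
    [Protocols in frame: eth:ethertype:ip:udp:data]
"""

for hit in extract_frames(sample):
    print(hit["ip"])
    print(hit["http"])
```

Reading the whole file with `myfile.read()` and passing it to `extract_frames` keeps the per-frame logic simple instead of one giant pattern.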
77,286,299
15,176,150
How do Jupyter Notebook extensions add functionality to a notebook?
<p>Given that behind the scenes the data in a Jupyter notebook is JSON, I assume that a similar simple system is used for notebook extensions.</p> <p>However, I haven't been able to figure out exactly how they work from the <a href="https://jupyter-contrib-nbextensions.readthedocs.io/en/latest/index.html" rel="nofollow noreferrer"><code>nb_extensions</code></a> documentation.</p> <p>How do they add functionality to Jupyter, and can this be mimicked by making additions to the Jupyter JSON file?</p>
<python><json><jupyter-notebook><jupyter>
2023-10-13 08:40:41
1
1,146
Connor
77,286,105
10,431,629
Match two pandas data frames using one column in one data frame as the default
<p>I have, say, 2 data frames as below:</p> <pre><code> df1 A B C 1 a X 1 b Y 1 c U 2 d T 3 k Z </code></pre> <p>And a 2nd data frame, say</p> <pre><code> df2 A B C 1 a M1 1 d K 2 d Fr 7 y Io </code></pre> <p>So the merge rule is: if the col A and col B values match in both data frames, then data frame 1 gets priority; if not, the row from whichever data frame contains it is used.</p> <p>So the output looks like:</p> <pre><code> Outdf A B C 1 a X 1 b Y 1 c U 1 d K 2 d T 3 k Z 7 y Io </code></pre> <p>I tried using a left merge but was not able to achieve this, because rows that don't fully match come back as null or blank. How do I achieve my result? Any help please?</p>
<python><pandas><merge><grouping><multiple-columns>
2023-10-13 08:10:57
0
884
Stan
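Because the rule is "df1 wins on a shared (A, B) key, otherwise keep whatever exists", this is closer to a concat-and-deduplicate than to a merge. A sketch:

```python
import pandas as pd

df1 = pd.DataFrame({"A": [1, 1, 1, 2, 3],
                    "B": ["a", "b", "c", "d", "k"],
                    "C": ["X", "Y", "U", "T", "Z"]})
df2 = pd.DataFrame({"A": [1, 1, 2, 7],
                    "B": ["a", "d", "d", "y"],
                    "C": ["M1", "K", "Fr", "Io"]})

# Stack df1 on top of df2, then keep the *first* occurrence of each (A, B)
# pair: df1 rows win on conflicts, unmatched rows from either side survive.
out = (pd.concat([df1, df2])
         .drop_duplicates(subset=["A", "B"], keep="first")
         .sort_values(["A", "B"])
         .reset_index(drop=True))

print(out.to_string(index=False))
```

The priority lives entirely in the concat order plus `keep="first"`; swapping the frames in `pd.concat` would make df2 the winner instead.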
77,286,104
72,791
matplotlib set_major_formatter taking into account range of values
<p>I've written a UI that includes a matplotlib figure. It shows graphs and I'd like to use engineering notation (exponents that are multiples of 3) on the axes. Note that I want the powers to be written out (e.g. &quot;123×10⁶&quot;, not &quot;123 M&quot;, hence not using <a href="https://matplotlib.org/stable/api/ticker_api.html#matplotlib.ticker.EngFormatter" rel="nofollow noreferrer">EngFormatter</a>).</p> <p>I can do this with a formatting function (<code>format_eng</code>) I wrote many years ago and the <code>set_major_formatter</code> &amp; <code>fmt_x_data</code> features of matplotlib:</p> <pre class="lang-py prettyprint-override"><code>def axes_eng_format(ax, x=True, y=True, **kwargs): if x: if 'use_si' not in kwargs and 'use_latex' not in kwargs: ax.xaxis.set_major_formatter(lambda v, pos: format_eng(v, use_latex=True, **kwargs)) else: ax.xaxis.set_major_formatter(lambda v, pos: format_eng(v, **kwargs)) ax.fmt_xdata = lambda v: format_eng(v, **kwargs) if y: if 'use_si' not in kwargs and 'use_latex' not in kwargs: ax.yaxis.set_major_formatter(lambda v, pos: format_eng(v, use_latex=True, **kwargs)) else: ax.yaxis.set_major_formatter(lambda v, pos: format_eng(v, **kwargs)) ax.fmt_ydata = lambda v: format_eng(v, **kwargs) axes_eng_format(ax) </code></pre> <p>The formatting function itself shouldn't be relevant to this question (as it is just a dumb formatter that only takes a single parameter and hence cannot consider the range of values), however for reference it is here: <a href="https://gist.github.com/abudden/3252252632bd8271c49c886771d8f82d" rel="nofollow noreferrer">https://gist.github.com/abudden/3252252632bd8271c49c886771d8f82d</a></p> <p>The <a href="https://matplotlib.org/stable/api/ticker_api.html#matplotlib.ticker.FuncFormatter" rel="nofollow noreferrer">formatters only get two parameters</a>: the value (<code>v</code>) and the position on the graph (<code>pos</code>). 
Using just the value <code>v</code> is okay and works, but the 0.5 is rendered as 500×10⁻³. If the range of values on the major axis is (say) 0 to 700×10⁻³, then that's exactly how I'd like it to be rendered, but if the range of values is (like the second image below) 0 to 3.5, I'd rather it were rendered as 0.5 for consistency with the other values on the axis.</p> <p>The only way I can think of doing this at the moment is to post-process all the tick values, but that doesn't seem like it would be very robust given the user can zoom in and out etc.</p> <p>Is there a better way of achieving the effect I'm after? I'm guessing I can do something with <code>pos</code>, but I haven't been able to get my head round what would work. How should I go about using <code>pos</code> to tweak the formatting of a number (or is there a better way)?</p> <p>Y-axis looks good:</p> <p><a href="https://i.sstatic.net/tZvpw.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/tZvpw.png" alt="enter image description here" /></a></p> <p>Y axis looks not-so-good:</p> <p><a href="https://i.sstatic.net/u8ELn.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/u8ELn.png" alt="enter image description here" /></a></p>
<python><matplotlib>
2023-10-13 08:10:52
1
73,231
DrAl
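The per-tick callback only ever sees one value, but it can close over a helper that computes one shared engineering exponent from the *whole* tick list (with matplotlib, the closure would call something like `ax.get_yticks()` inside a `FuncFormatter`; that wiring is assumed here and not shown). The exponent-picking logic itself is plain arithmetic, sketched below:

```python
import math

def common_eng_exponent(tick_values):
    """Engineering exponent (a multiple of 3) shared by a whole axis."""
    magnitude = max((abs(v) for v in tick_values if v), default=1.0)
    return 3 * math.floor(math.log10(magnitude) / 3)

def format_eng_shared(value, exponent):
    """Render value using a pre-computed shared exponent."""
    if exponent == 0:
        return f"{value:g}"
    return f"{value / 10 ** exponent:g}×10^{exponent}"

# Axis 0..3.5: the shared exponent is 0, so 0.5 renders as "0.5",
# consistent with the neighbouring ticks, instead of 500×10^-3.
ticks = [0.0, 0.5, 1.0, 1.5, 2.0, 2.5, 3.0, 3.5]
exp = common_eng_exponent(ticks)
print([format_eng_shared(t, exp) for t in ticks])

# Axis 0..0.7: every tick shares 10^-3, i.e. 0, 100, ..., 700 ×10^-3.
print(common_eng_exponent([0.0, 0.1, 0.3, 0.5, 0.7]))  # -3
```

Recomputing the exponent from the current ticks on every format call keeps it correct under zooming, since matplotlib re-invokes the formatter with the new tick set.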
77,286,053
6,499,643
Installing transformers on M1 gives jax AVX error
<p>I am new to Python and trying to install transformers following instructions <a href="https://huggingface.co/docs/transformers/installation#install-with-conda" rel="nofollow noreferrer">here</a></p> <p>i did all the steps but when I got to</p> <blockquote> <p>python -c &quot;from transformers import pipeline; print(pipeline('sentiment-analysis')('we love you'))&quot;</p> </blockquote> <p>I got</p> <pre><code>RuntimeError: This version of jaxlib was built using AVX instructions, which your CPU and/or operating system do not support. You may be able work around this issue by building jaxlib from source. </code></pre> <p>So I reinstalled and installed with:</p> <pre><code>pip uninstall jax jaxlib pip install --upgrade jax jaxlib </code></pre> <p>and got exactly the same error</p> <p>Now I am on this <a href="https://jax.readthedocs.io/en/latest/developer.html" rel="nofollow noreferrer">link</a></p> <p>and did this:</p> <pre><code>pip install numpy wheel build </code></pre> <p>but I cannot run the next line</p> <pre><code>python build/build.py pip install dist/*.whl # installs jaxlib (includes XLA) </code></pre> <p>because I am getting</p> <pre><code>python: can't open file '/Users/e5028514/build/build.py': [Errno 2] No such file or directory </code></pre> <p>my mac is <a href="https://i.sstatic.net/rAUsu.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/rAUsu.png" alt="enter image description here" /></a></p>
<python><pip><conda><huggingface-transformers><apple-m1>
2023-10-13 08:02:52
1
385
m45ha
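AVX is an x86 instruction set, so hitting this error on an M1 usually means the Python interpreter itself is an x86_64 build running under Rosetta and pip is therefore pulling x86 wheels; building jaxlib from source is rarely the right fix. A quick diagnostic sketch:

```python
import platform
import sys

# On a native Apple Silicon interpreter this prints 'arm64'.
# 'x86_64' means the Python binary runs under Rosetta and pip will
# install x86 wheels (built with AVX), reproducing the error.
print(platform.machine())
print(platform.platform())
print(sys.version)
```

If it reports `x86_64`, installing a native arm64 Python (e.g. the python.org universal2 installer or an arm64 conda) and re-running `pip install jax jaxlib` in that environment is the usual remedy (hedged: exact package availability depends on the jaxlib release).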
77,285,903
972,647
Barplot grouped by class and time-interval
<p>I have data of request response times in a pandas dataframe</p> <pre><code> execution_time request_type response_time_ms URL Error 2 2023-10-12 08:52:16 Google 91.0 https://www.google.com NaN 3 2023-10-12 08:52:16 CNN 115.0 https://edition.cnn.com NaN 6 2023-10-12 08:52:27 Google 90.0 https://www.google.com NaN 7 2023-10-12 08:52:27 CNN 105.0 https://edition.cnn.com NaN 10 2023-10-12 08:52:37 Google 5111.0 https://www.google.com NaN </code></pre> <p>It contains the time of the request, request_type is simply the website name and the response time.</p> <p>What I want to achieve is a barplot that groups the median response time by website (request_type) and by a time frame, say group every 4 hrs together. This should show that response time varies by daytime.</p> <p>I managed to create the plot but the coloring is &quot;off&quot;. The issue I have is that I want the different websites to be colored differently.</p> <p>What I have till now:</p> <pre><code>df_by_time = df.groupby([&quot;request_type&quot;, pd.Grouper(key=&quot;execution_time&quot;, freq=&quot;4h&quot;)]).agg({&quot;response_time_ms&quot;: [&quot;median&quot;]}) df_by_time.plot(kind='bar', figsize=(8, 6), title='Response Times', xlabel='Type', ylabel='Response time [ms]', rot=90) </code></pre> <p>This leads to below image:</p> <p><a href="https://i.sstatic.net/GcZpP.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/GcZpP.png" alt="Response Times by hour" /></a></p> <p>I would like to:</p> <ul> <li>group the times together so each time only appears once with a stack in different color per website</li> <li>or at least in this plot have the different websites in different colors</li> <li>get rid of the &quot;none, none&quot; in the legend</li> </ul> <p>How can I achieve that?</p>
<python><pandas><seaborn><grouped-bar-chart>
2023-10-13 07:42:45
1
7,652
beginner_
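All three wishes follow from reshaping before plotting: select the single column *before* aggregating (so no `("response_time_ms", "median")` MultiIndex ends up in the legend as "none, none"), then unstack the website level into columns so each website gets its own colour. A sketch on synthetic data shaped like the question's:

```python
import pandas as pd

df = pd.DataFrame({
    "execution_time": pd.to_datetime([
        "2023-10-12 08:52:16", "2023-10-12 08:52:16",
        "2023-10-12 13:10:00", "2023-10-12 13:11:00",
    ]),
    "request_type": ["Google", "CNN", "Google", "CNN"],
    "response_time_ms": [91.0, 115.0, 90.0, 105.0],
})

# Time window first (x-axis), website second (columns after unstacking).
pivot = (df.groupby([pd.Grouper(key="execution_time", freq="4h"),
                     "request_type"])["response_time_ms"]
           .median()
           .unstack("request_type"))

print(pivot)
# pivot.plot(kind="bar") now draws one bar group per 4h window with one
# colour per website and a clean legend; pass stacked=True for stacked bars.
```

The `.agg({...: ["median"]})` form in the question is what creates the tuple column labels; `["response_time_ms"].median()` sidesteps that entirely.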
77,285,870
202,335
Troubleshooting: running a Flask file in JupyterLab
<p><a href="https://i.sstatic.net/2hhME.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/2hhME.png" alt="enter image description here" /></a>Below is the code in jupyter lab.</p> <pre><code>from flask import Flask, render_template app = Flask(__name__) @app.route('/') def home(): return render_template('home.html') if __name__ == '__main__': app.run() </code></pre> <p>When I create console for editor, I get nothing but three points(an ellipsis). When I type <a href="http://127.0.0.1:5000/" rel="nofollow noreferrer">http://127.0.0.1:5000/</a> in the web browser, I get</p> <blockquote> <p>Internal Server Error The server encountered an internal error and was unable to complete your request. Either the server is overloaded or there is an error in the application.</p> </blockquote> <p>Below is the code of home.html</p> <pre><code>&lt;!DOCTYPE html&gt; &lt;html&gt; &lt;head&gt; &lt;title&gt;My First HTML Page in Jupyter Notebook&lt;/title&gt; &lt;/head&gt; &lt;body&gt; &lt;h1&gt;Hello, World!&lt;/h1&gt; &lt;p&gt;This is my first HTML page in Jupyter Notebook.&lt;/p&gt; &lt;/body&gt; &lt;/html&gt; </code></pre> <p>Can it be caused by naming conflict? There are two &quot;app&quot;</p> <blockquote> <p>http://localhost:8888/lab/tree/stock.investing.decision.making/app/controllers/app.py</p> </blockquote> <p>If I exit jupyter lab and restart it, it becomes fine. However, when I modify the content of app.py, it doesn't work again, then I have to restart jupyter lab. Why does this happen?</p>
<python><flask><jupyter-lab>
2023-10-13 07:37:50
0
25,444
Steven
77,285,558
8,990,329
Why does Python shared memory get implicitly unlinked on exit?
<p>I wrote the following simple program to try shared memory in python:</p> <pre><code>#!/usr/bin/env python3.8 from multiprocessing import shared_memory from time import sleep PATH='/pth' if __name__ == '__main__': shm = shared_memory.SharedMemory(PATH) sleep(10) </code></pre> <p>The issue is that on program termination I get the following warning:</p> <pre><code>$ /usr/lib/python3.8/multiprocessing/resource_tracker.py:203: UserWarning: resource_tracker: There appear to be 1 leaked shared_memory objects to clean up at shutdown `warnings.warn('resource_tracker: There appear to be %d '` </code></pre> <p>And after the program exits, the shared memory file has disappeared. Why does this happen, and how can I avoid the shared memory file being unlinked?</p>
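A runnable sketch of the commonly cited workaround (assumption: POSIX platform; the segment name is made up, and both processes are simulated in one script here). The resource tracker registers segments even when a process merely attaches, then unlinks everything it tracks at exit; unregistering tells it this process is only a reader.

```python
import os
from multiprocessing import shared_memory, resource_tracker

name = f"demo_shm_{os.getpid()}"  # hypothetical segment name

# In the real scenario another process created this segment.
owner = shared_memory.SharedMemory(name=name, create=True, size=16)
owner.buf[:5] = b"hello"

# Attach to the existing segment (create defaults to False).
shm = shared_memory.SharedMemory(name)
data = bytes(shm.buf[:5])

# Workaround for https://bugs.python.org/issue38119 on Python <= 3.12:
# the tracker registers attached segments too and unlinks them at
# interpreter exit; unregister so a mere reader does not destroy the
# file.  (Python 3.13 adds SharedMemory(..., track=False) for this.)
resource_tracker.unregister(shm._name, "shared_memory")

shm.close()      # detach without destroying the segment
owner.close()
owner.unlink()   # only the owning process removes /dev/shm/<name>
```

Note that `shm._name` is a private attribute; this is an unofficial workaround for older Python versions, not a documented API.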
<python><linux><shared-memory><python-3.8>
2023-10-13 06:39:54
1
9,740
Some Name
77,285,514
15,233,792
How to launch / terminate a periodic background task using python asyncio in a Flask backend app?
<p>I have a python flask backend app. I want to create a mechanism that can let some jobs run periodic on the background, like auto-update something every 1 minute.</p> <p>But I have been blocked by some errors that made the backend didn't behave as expected.</p> <p><strong>view_func.py</strong></p> <pre class="lang-py prettyprint-override"><code>from flask import Blueprint, request from async_periodic_job import AsyncPeriodictUtils infras_bp = Blueprint(&quot;infras&quot;, __name__) @infras_bp.route(&quot;/infras/autosync&quot;, methods=[&quot;PUT&quot;]) def auto_sync(): args = request.args check_required_params(args, [&quot;autosnyc&quot;]) autosnyc = args.get(&quot;autosnyc&quot;, default=&quot;false&quot;).lower() == &quot;true&quot; if autosnyc: AsyncPeriodictUtils.start_task() else: AsyncPeriodictUtils.stop_task() return &quot;success&quot;, 200 </code></pre> <p><strong>async_periodic_job.py</strong></p> <pre class="lang-py prettyprint-override"><code>import asyncio from typing import Callable from utils import logger SECOND = 1 MINUTE = 60 * SECOND logger.debug(f&quot;Import {__file__}&quot;) periodic_jobs = dict() task_instance = None # Get default event loop in main thread loop = asyncio.get_event_loop() class AsyncPeriodictUtils: @staticmethod async def run_jobs() -&gt; None: while True: await asyncio.sleep(10) logger.info(f&quot;Called run_jobs periodicly.&quot;) logger.info(f&quot;periodic_jobs: {periodic_jobs.keys()}&quot;) for func_name, function in periodic_jobs.items(): function() logger.info(f&quot;Called function '{func_name}' periodicly.&quot;) @classmethod def create_task(cls) -&gt; None: global task_instance, loop task_instance = loop.create_task(cls.run_jobs()) @staticmethod async def cancel_task() -&gt; None: global task_instance if task_instance: task_instance.cancel() try: await task_instance except asyncio.CancelledError: logger.info(&quot;Async periodic task has been cancelled.&quot;) task_instance = None else: logger.warning(&quot;Async 
periodic task has not been started yet.&quot;) @classmethod def start_task(cls) -&gt; None: cls.create_task() global loop try: loop.run_until_complete(task_instance) except asyncio.CancelledError: pass logger.info(&quot;Async Periodic jobs launched.&quot;) @classmethod def stop_task(cls) -&gt; None: global loop try: loop.run_until_complete(cls.cancel_task()) except asyncio.CancelledError: pass logger.info(&quot;Async Periodic jobs terminated.&quot;) @classmethod def add_job(cls, function: Callable) -&gt; None: if function.__name__ in periodic_jobs: return periodic_jobs.update({function.__name__: function}) logger.info(f&quot;Added function {function.__name__} in periodic jobs.&quot;) global task_instance if not task_instance: logger.info(f&quot;Auto enable periodic jobs.&quot;) cls.start_task() @classmethod def remove_job(cls, function: Callable) -&gt; None: if function.__name__ not in periodic_jobs: logger.warning( f&quot;function {function.__name__} not in periodic jobs.&quot;) return periodic_jobs.pop(function.__name__) logger.info(f&quot;Removed function {function.__name__} in periodic jobs.&quot;) if not periodic_jobs: logger.info(f&quot;Periodic jobs list clear, auto terminate.&quot;) cls.stop_task() </code></pre> <p>When I called <code>AsyncPeriodictUtils.start_task()</code> My backend will raise an error and response 500, log shown below:</p> <pre class="lang-bash prettyprint-override"><code>[2023-10-13-06:06:19][DAAT][ERROR][__init__.py] [Error] stack: [2023-10-13-06:06:19][DAAT][ERROR][__init__.py] Traceback (most recent call last): File &quot;/opt/conda/lib/python3.10/site-packages/flask/app.py&quot;, line 1820, in full_dispatch_request rv = self.dispatch_request() File &quot;/opt/conda/lib/python3.10/site-packages/flask/app.py&quot;, line 1796, in dispatch_request return self.ensure_sync(self.view_functions[rule.endpoint])(**view_args) File &quot;/opt/conda/lib/python3.10/site-packages/daat-1.0.0-py3.10.egg/daat/app/routes/infras.py&quot;, line 134, in 
auto_sync AsyncPeriodictUtils.start_task() File &quot;/opt/conda/lib/python3.10/site-packages/daat-1.0.0-py3.10.egg/daat/infras/async_periodic_job.py&quot;, line 53, in start_task loop.run_until_complete(task_instance) File &quot;/opt/conda/lib/python3.10/asyncio/base_events.py&quot;, line 625, in run_until_complete self._check_running() File &quot;/opt/conda/lib/python3.10/asyncio/base_events.py&quot;, line 584, in _check_running raise RuntimeError('This event loop is already running') RuntimeError: This event loop is already running [2023-10-13-06:06:19][werkzeug][INFO][_internal.py] 172.23.0.3 - - [13/Oct/2023 06:06:19] &quot;PUT /infras/autosync?autosnyc=true HTTP/1.1&quot; 500 </code></pre> <p>But after above step happened, the periodic jobs seem to be launched successfully, logs shown below:</p> <pre class="lang-bash prettyprint-override"><code>[2023-10-13-06:06:23][DAAT][INFO][async_periodic_job.py] Called run_jobs periodicly. [2023-10-13-06:06:23][DAAT][INFO][async_periodic_job.py] periodic_jobs: dict_keys([]) [2023-10-13-06:06:33][DAAT][INFO][async_periodic_job.py] Called run_jobs periodicly. [2023-10-13-06:06:33][DAAT][INFO][async_periodic_job.py] periodic_jobs: dict_keys([]) [2023-10-13-06:06:33][DAAT][INFO][async_periodic_job.py] Called run_jobs periodicly. 
[2023-10-13-06:06:33][DAAT][INFO][async_periodic_job.py] periodic_jobs: dict_keys([]) </code></pre> <p>When I called <code>AsyncPeriodictUtils.stop_task()</code> My backend will raise an error and response 500, log shown below:</p> <pre class="lang-bash prettyprint-override"><code>[2023-10-13-06:06:35][DAAT][ERROR][__init__.py] [Error] stack: [2023-10-13-06:06:35][DAAT][ERROR][__init__.py] Traceback (most recent call last): File &quot;/opt/conda/lib/python3.10/site-packages/flask/app.py&quot;, line 1820, in full_dispatch_request rv = self.dispatch_request() File &quot;/opt/conda/lib/python3.10/site-packages/flask/app.py&quot;, line 1796, in dispatch_request return self.ensure_sync(self.view_functions[rule.endpoint])(**view_args) File &quot;/opt/conda/lib/python3.10/site-packages/daat-1.0.0-py3.10.egg/daat/app/routes/infras.py&quot;, line 136, in auto_sync AsyncPeriodictUtils.stop_task() File &quot;/opt/conda/lib/python3.10/site-packages/daat-1.0.0-py3.10.egg/daat/infras/async_periodic_job.py&quot;, line 62, in stop_task loop.run_until_complete(cls.cancel_task()) File &quot;/opt/conda/lib/python3.10/asyncio/base_events.py&quot;, line 625, in run_until_complete self._check_running() File &quot;/opt/conda/lib/python3.10/asyncio/base_events.py&quot;, line 584, in _check_running raise RuntimeError('This event loop is already running') RuntimeError: This event loop is already running /opt/conda/lib/python3.10/site-packages/flask/app.py:1822: RuntimeWarning: coroutine 'AsyncPeriodictUtils.cancel_task' was never awaited rv = self.handle_user_exception(e) RuntimeWarning: Enable tracemalloc to get the object allocation traceback [2023-10-13-06:06:35][werkzeug][INFO][_internal.py] 172.23.0.3 - - [13/Oct/2023 06:06:35] &quot;PUT /infras/autosync?autosnyc=false HTTP/1.1&quot; 500 - </code></pre> <p>But this didn't terminate the periodic task on the background as expected, the periodic task is still running periodically:</p> <pre class="lang-bash 
prettyprint-override"><code>[2023-10-13-06:06:43][DAAT][INFO][async_periodic_job.py] Called run_jobs periodicly. [2023-10-13-06:06:43][DAAT][INFO][async_periodic_job.py] periodic_jobs: dict_keys([]) [2023-10-13-06:06:43][DAAT][INFO][async_periodic_job.py] Called run_jobs periodicly. [2023-10-13-06:06:43][DAAT][INFO][async_periodic_job.py] periodic_jobs: dict_keys([]) [2023-10-13-06:06:53][DAAT][INFO][async_periodic_job.py] Called run_jobs periodicly. [2023-10-13-06:06:53][DAAT][INFO][async_periodic_job.py] periodic_jobs: dict_keys([]) [2023-10-13-06:06:53][DAAT][INFO][async_periodic_job.py] Called run_jobs periodicly. [2023-10-13-06:06:53][DAAT][INFO][async_periodic_job.py] periodic_jobs: dict_keys([]) [2023-10-13-06:07:03][DAAT][INFO][async_periodic_job.py] Called run_jobs periodicly. [2023-10-13-06:07:03][DAAT][INFO][async_periodic_job.py] periodic_jobs: dict_keys([]) [2023-10-13-06:07:03][DAAT][INFO][async_periodic_job.py] Called run_jobs periodicly. [2023-10-13-06:07:03][DAAT][INFO][async_periodic_job.py] periodic_jobs: dict_keys([]) [2023-10-13-06:07:13][DAAT][INFO][async_periodic_job.py] Called run_jobs periodicly. [2023-10-13-06:07:13][DAAT][INFO][async_periodic_job.py] periodic_jobs: dict_keys([]) </code></pre> <p>I've struggled trying different kinds of possibilities but still couldn't figure it out.</p> <p>Could anyone please check my code and point out what I should change, or provide any guidance / direction? Thanks</p> <p>I expect:</p> <p><code>AsyncPeriodictUtils.start_task()</code>: Launch the periodic task without raising an exception that makes the backend respond 500.</p> <p><code>AsyncPeriodictUtils.stop_task()</code>: Terminate the periodic task without raising an exception that makes the backend respond 500.</p>
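A sketch of the usual fix for the `RuntimeError: This event loop is already running`: give the periodic job its own event loop in a dedicated background thread, and have the (synchronous) Flask views schedule work onto it with `asyncio.run_coroutine_threadsafe` instead of calling `loop.run_until_complete`. Names and the shortened interval below are made up for the demo.

```python
import asyncio
import threading
import time

class PeriodicRunner:
    """Sketch: one background event loop owns the periodic task.

    Request handlers never call run_until_complete, so the
    "event loop is already running" error cannot occur.
    """

    def __init__(self, interval=0.05):
        self.interval = interval
        self.calls = 0          # stand-in for running the jobs dict
        self._task = None
        self.loop = asyncio.new_event_loop()
        threading.Thread(target=self.loop.run_forever, daemon=True).start()

    async def _job(self):
        while True:
            await asyncio.sleep(self.interval)
            self.calls += 1

    async def _start(self):
        if self._task is None:
            self._task = asyncio.get_running_loop().create_task(self._job())

    async def _stop(self):
        if self._task is not None:
            self._task.cancel()
            self._task = None

    # Safe to call from Flask view functions (other threads):
    def start(self):
        asyncio.run_coroutine_threadsafe(self._start(), self.loop).result()

    def stop(self):
        asyncio.run_coroutine_threadsafe(self._stop(), self.loop).result()

runner = PeriodicRunner()
runner.start()
time.sleep(0.2)       # let a few ticks happen
runner.stop()
ticks = runner.calls
```

In the real app, `start()`/`stop()` would replace the `run_until_complete` calls in `start_task`/`stop_task`, and `_job` would iterate the `periodic_jobs` dict.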
<python><flask><python-asyncio>
2023-10-13 06:30:22
1
2,713
stevezkw
77,285,501
8,510,149
Openpyxl, how to do two operations on x-axis labels, rotating and changing font, without one overriding the other
<p>I aim to rotate the labels on the x-axis and change font. However, in the code below, these two operations work individually, but when I do both, the rotating is undone by the font change.</p> <p>How can I do these two operations on my chart?</p> <pre><code>import openpyxl from openpyxl.chart import BarChart, Reference from openpyxl.chart.text import RichText from openpyxl.drawing.text import Paragraph, ParagraphProperties, CharacterProperties, Font as Font2 # Create a workbook and activate a sheet wb = openpyxl.Workbook() sheet = wb.active # insert some categories cell = sheet.cell(row=1, column=1) cell.value = 'Category 1.1' cell = sheet.cell(row=2, column=1) cell.value = 'Category 1.2 - limit' cell = sheet.cell(row=3, column=1) cell.value = 'Category 2' cell = sheet.cell(row=4, column=1) cell.value = 'Category 2.1 - extra' cell = sheet.cell(row=5, column=1) cell.value = 'Category 2.2 - extra2' # insert some values for i in range(5): cell = sheet.cell(row=i+1, column=2) cell.value = i+2 # create chart chart = BarChart() values = Reference(sheet, min_col = 2, min_row = 1, max_col = 2, max_row = 5) bar_categories = Reference(sheet, min_col=1, min_row=1, max_row=5) chart.add_data(values) chart.set_categories(bar_categories) chart.title = &quot; BAR-CHART &quot; chart.legend = None chart.x_axis.title = &quot; X_AXIS &quot; chart.y_axis.title = &quot; Y_AXIS &quot; # Rotate X-axis labels, operation 1 chart.x_axis.txPr = chart.x_axis.title.text.rich chart.x_axis.txPr.properties.rot = &quot;-2700000&quot; chart.x_axis.title = None # Adjust font, operation 2, font_ = Font2(typeface='Avenir Next LT Pro') cp = CharacterProperties(latin=font_, sz=900) chart.x_axis.txPr = RichText(p=[Paragraph(pPr=ParagraphProperties(defRPr=cp), endParaRPr=cp)]) sheet.add_chart(chart, &quot;E2&quot;) # save the file wb.save(&quot;barChart.xlsx&quot;) </code></pre>
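One way to keep both settings, sketched below: the second assignment to `chart.x_axis.txPr` replaces the whole rich-text object, so the rotation stored in the first one is lost. Building a single `RichText` that carries the rotation on its body properties (`RichTextProperties(rot=...)`) and the font on the paragraph's character properties should avoid the overwrite. This is a sketch based on openpyxl's drawing-text objects; verify the attribute names against your openpyxl version.

```python
from openpyxl.chart.text import RichText
from openpyxl.drawing.text import (
    RichTextProperties, Paragraph, ParagraphProperties,
    CharacterProperties, Font as Font2,
)

# One txPr object carrying BOTH settings: the rotation lives on the
# body properties, the font on the paragraph's character properties.
font_ = Font2(typeface="Avenir Next LT Pro")
cp = CharacterProperties(latin=font_, sz=900)
chart_x_txpr = RichText(
    bodyPr=RichTextProperties(rot=-2700000),
    p=[Paragraph(pPr=ParagraphProperties(defRPr=cp), endParaRPr=cp)],
)
# Then assign once, instead of twice:
# chart.x_axis.txPr = chart_x_txpr
```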
<python><excel><openpyxl>
2023-10-13 06:27:02
1
1,255
Henri
77,285,334
13,184,183
Is pyspark dataframe a pointer?
<p>I read several dataframes from different locations and then union them. To do that, I first collect them in a list.</p> <pre class="lang-py prettyprint-override"><code>from functools import reduce # spark initialization df_list = [] for path in paths: df = spark.table(path) df_list.append(df) data = reduce(lambda x, y : x.union(y), df_list) </code></pre> <p>The question is: when I read <code>df</code>, append it to <code>df_list</code>, and then reassign <code>df</code> on the next iteration of the loop, does it change the value that has already been appended to <code>df_list</code>? As far as I know, that could be the case for pointer variables.</p>
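The answer comes down to Python name semantics, not Spark: `df = spark.table(path)` rebinds the name `df` to a new object, and the reference already stored in the list is untouched (Spark DataFrames are immutable anyway). Illustrated with plain lists standing in for DataFrames:

```python
# Rebinding a name never mutates objects already stored elsewhere.
df_list = []

df = [1, 2, 3]        # stand-in for the first DataFrame
df_list.append(df)    # the list now holds a reference to this object

df = [4, 5, 6]        # rebinding: df_list[0] is unaffected
df_list.append(df)

first_still_intact = df_list[0]
```

Mutating the object in place (e.g. `df.append(...)` on a list) *would* be visible through `df_list`, but assignment with `=` never is.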
<python><apache-spark><pyspark>
2023-10-13 05:50:42
2
956
Nourless
77,285,312
2,418,375
Installing the PyCoral library fails: "No matching distribution found for pycoral~=2.0"
<p>I just got a Google Coral USB Accelerator and am trying to install it on a Windows 11 machine following the instructions at <a href="https://coral.ai/docs/accelerator/get-started/" rel="nofollow noreferrer">https://coral.ai/docs/accelerator/get-started/</a>. I got Python 3.11 from the Microsoft app store and it's working fine. But when I tried installing the PyCoral library using the command</p> <pre class="lang-none prettyprint-override"><code>python3 -m pip install --extra-index-url https://google-coral.github.io/py-repo/ pycoral~=2.0 </code></pre> <p>I got this error saying the library can't be found:</p> <pre class="lang-none prettyprint-override"><code>ERROR: Could not find a version that satisfies the requirement pycoral~=2.0 (from versions: 0.0.4, 0.0.5, 0.0.6, 0.0.7, 0.0.8, 0.0.9, 0.1.0) ERROR: No matching distribution found for pycoral~=2.0 </code></pre> <p>I'm not sure whether I'm asking for the wrong version, or if I'm using the wrong repo, or if Microsoft has &quot;customized&quot; python. In any case, it seems the installation docs need to be updated.</p>
<python><google-coral><pycoral>
2023-10-13 05:45:46
0
334
wjr
77,285,110
521,347
subject table for an INSERT, UPDATE or DELETE expected, got 'comparison_bill_data'
<p>I am trying to write an insert query using Python sqlAlchemy. I keep getting an error <code>sqlalchemy.exc.ArgumentError: subject table for an INSERT, UPDATE or DELETE expected, got 'comparison_bill_data'.</code>.My code is as follows-</p> <pre><code>from uuid import uuid4 import sqlalchemy as sa from sqlalchemy.dialects.postgresql import UUID from base import Base from src.app.database.database import SessionLocal from sqlalchemy.dialects.postgresql import insert import datetime class ComparisonBillData(Base): __tablename__ = 'comparison_bill_data' comparison_bill_data_id = sa.Column(UUID(as_uuid=True), primary_key=True, default=uuid4) bill_id = sa.Column(sa.String(), nullable=False) account_id = sa.Column(sa.String(), nullable=False) service_location_id = sa.Column(sa.String(), nullable=False) usage = sa.Column(sa.Numeric(), nullable=False) cost = sa.Column(sa.Numeric(), nullable=False) business_type = sa.Column(sa.String(), nullable=False) tenant_id = sa.Column(sa.String(), nullable=False) commodity_type = sa.Column(sa.String(), nullable=False) bill_date = sa.Column(sa.Date(), nullable=False) usage_unit = sa.Column(sa.String(), nullable=False) created_at = sa.Column(sa.TIMESTAMP(), nullable=False) updated_at = sa.Column(sa.TIMESTAMP(), nullable=False, server_default=sa.text('now()')) def __init__(self, comparison_bill_data_id,bill_id,account_id,service_location_id,usage,cost,business_type,tenant_id,commodity_type,bill_date,usage_unit,created_at): self.comparison_bill_data_id = comparison_bill_data_id self.bill_id = bill_id self.account_id=account_id self.service_location_id=service_location_id self.usage=usage self.cost=cost self.business_type=business_type self.tenant_id=tenant_id self.commodity_type=commodity_type self.bill_date=bill_date self.usage_unit=usage_unit self.created_at=created_at def upsert_comparison_bill_data(bill_data: ComparisonBillData): print(&quot;List=&quot;+ str(list)) statement = 
insert(ComparisonBillData.__tablename__).values((bill_data),) # statement = insert(Base.metadata.tables[ComparisonBillData.__tablename__]).values((bill_data),) query = statement.on_conflict_do_update( constraint=&quot;ui_comparison_bill_data_service_location_id&quot;, set_={ &quot;usage&quot;: statement.excluded.usage, &quot;cost&quot;: statement.excluded.cost } ) with SessionLocal() as session: print(&quot;Executing query&quot;) session.execute(query) if __name__ == &quot;__main__&quot;: print(&quot;Starting&quot;) data = ComparisonBillData(&quot;aebebd26-4f3e-44c5-83ef-cf722f0d81e2&quot;, &quot;bill_id_1&quot;, &quot;account_id_1&quot;, &quot;service_location_id_1&quot;, 26.3, 30.2,&quot;Office&quot;, &quot;tenant-1&quot;, &quot;electric&quot;, datetime.datetime(2023, 1, 1) , &quot;kWh&quot;,datetime.datetime(2023, 1, 1)) # data.comparison_bill_data_id = &quot;aebebd26-4f3e-44c5-83ef-cf722f0d81e2&quot; # data.cost = 26.3 # data.usage = 30.2 # data.bill_id = &quot;bill_id_1&quot; # data.account_id = &quot;account_id_1&quot; # data.service_location_id = &quot;service_location_id_1&quot; # data.tenant_id = &quot;tenant_id_1&quot; # data.commodity_type = &quot;electric&quot; # data.bill_date = datetime.datetime(2023, 1, 1) # data.usage_unit = &quot;kWh&quot; # data.created_at = datetime.datetime(2023, 1, 1) upsert_comparison_bill_data(data) </code></pre> <p>I found one of the related answers <a href="https://stackoverflow.com/questions/67377755/subject-table-for-an-insert-update-or-delete-expected">here</a> and I tried replacing the line to create insert statement with following-</p> <pre><code>statement = insert(Base.metadata.tables[ComparisonBillData.__tablename__]).values((bill_data),) </code></pre> <p>However, then I get a different error- <code>'ComparisonBillData' object has no attribute 'items'</code> Is there anything I am missing here?</p>
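Both errors point at the same two mistakes, sketched below: `insert()` expects the mapped class (or a `Table` object), not the `__tablename__` string, and `.values()` expects a dict of column values, not an ORM instance (passing the instance is what triggered the `'ComparisonBillData' object has no attribute 'items'` error). A minimal runnable sketch against in-memory SQLite with made-up names; for the Postgres upsert, build the same statement with `sqlalchemy.dialects.postgresql.insert` and chain `.on_conflict_do_update(...)` as in the question. Note the question's code also never commits; `engine.begin()` (or `session.commit()`) is needed for the row to persist.

```python
import sqlalchemy as sa
from sqlalchemy.orm import declarative_base

Base = declarative_base()

class Item(Base):
    """Minimal stand-in for ComparisonBillData."""
    __tablename__ = "item"
    id = sa.Column(sa.Integer, primary_key=True)
    name = sa.Column(sa.String, nullable=False)

engine = sa.create_engine("sqlite://")
Base.metadata.create_all(engine)

# Pass the mapped class, and a plain dict of values.
stmt = sa.insert(Item).values({"id": 1, "name": "bill_1"})

with engine.begin() as conn:   # commits on exit
    conn.execute(stmt)
    count = conn.execute(
        sa.select(sa.func.count()).select_from(Item)
    ).scalar_one()
```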
<python><sqlalchemy><fastapi>
2023-10-13 04:48:27
1
1,780
Sumit Desai
77,284,988
8,387,921
use if else statement in python selenium for a text_to_be_present_in_element
<p>I have the following line of code, which after uploading files returns the 'Upload successful!' text. But several times there is an error in uploading files, and the website element displays 'Upload Status' in place of 'Upload successful!'. So how can I use if/else logic to run the rest of the commands even if the upload failed, and continue with the next file? I did the following but it doesn't work</p> <pre class="lang-py prettyprint-override"><code>if WebDriverWait(driver, 300, 1).until( EC.text_to_be_present_in_element((By.XPATH, &quot;//*&quot;), &quot;Upload successful!&quot;) ): print(&quot;uploaded&quot;) driver.back() else: print(&quot;upload failed&quot;) driver.back() </code></pre> <p>OR</p> <pre class="lang-py prettyprint-override"><code>myStr = WebDriverWait(driver, 300, 1).until( EC.text_to_be_present_in_element((By.XPATH, &quot;//*&quot;)) ) if myStr.txt.contains(&quot;Upload successful!&quot;): print(&quot;uploaded&quot;) driver.back() else: print(&quot;upload failed&quot;) driver.back() </code></pre> <p>How can I do this?</p>
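The reason the `if/else` never takes the failure branch: `WebDriverWait(...).until(...)` either returns a truthy value or raises `TimeoutException` after the timeout; it never returns `False`. So the branching belongs in `try/except`. Sketched below with a stub wait function standing in for the Selenium call so the shape is clear (in the real code, catch `selenium.common.exceptions.TimeoutException`):

```python
# Stub stand-ins: wait_for_text plays the role of
# WebDriverWait(driver, 300, 1).until(EC.text_to_be_present_in_element(...)).
class TimeoutException(Exception):
    pass

def wait_for_text(page_text, expected):
    if expected in page_text:
        return True
    raise TimeoutException(f"{expected!r} not found")

def handle_upload(page_text):
    try:
        wait_for_text(page_text, "Upload successful!")
        result = "uploaded"
    except TimeoutException:
        result = "upload failed"
    # driver.back() would go here: it runs in either case
    return result
```

With a 300-second timeout per failed file this is slow; a shorter timeout on the failure path is usually worth considering.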
<python><selenium-webdriver>
2023-10-13 04:04:52
0
399
Sagar Rawal
77,284,983
1,232,087
How to connect to a remote Jupyter server from vscode.dev
<p><strong>Question</strong>: How do I get URL for <code>jupyter server</code> when jupyter extension is installed in <a href="https://code.visualstudio.com/docs/editor/vscode-web" rel="nofollow noreferrer">VSCode for web</a>?</p> <p>I have successfully installed <code>python</code> and <code>jupyter</code> extensions on <a href="https://code.visualstudio.com/docs/editor/vscode-web" rel="nofollow noreferrer">VSCode for web</a>. But when I try to run the simple notebook with code <code>print('Test')</code>, it displays the message <code>Select a kernel to run cells.</code>. And then it asks for entering URL for jupyter server. It also refers to this <a href="https://github.com/microsoft/vscode-jupyter/wiki/Connecting-to-a-remote-Jupyter-server-from-vscode.dev" rel="nofollow noreferrer">link</a> for more help.</p> <p><strong>NOTE</strong>: Readers of this post can simply test the above scenario by just opening <a href="https://vscode.dev" rel="nofollow noreferrer">https://vscode.dev</a> in <code>MS Edge</code> or <code>Chrome</code> browsers.</p> <p><strong>References</strong>:</p> <ul> <li><a href="https://code.visualstudio.com/blogs/2021/10/20/vscode-dev" rel="nofollow noreferrer">vscode.dev(!)</a></li> <li><a href="https://code.visualstudio.com/docs/datascience/notebooks-web#:%7E:text=Connect%20to%20a%20remote%20Jupyter%20server,-You%20can%20also&amp;text=To%20do%20so%2C%20select%20the,Select%20Existing." rel="nofollow noreferrer">Jupyter Notebooks on the web</a></li> </ul>
<python><visual-studio-code><jupyter-notebook>
2023-10-13 04:01:53
1
24,239
nam
77,284,980
525,916
How to add a new column that has the occurrence number of the value in a column?
<p>Given a Polars Dataframe:</p> <pre class="lang-py prettyprint-override"><code>import polars as pl df = pl.from_repr(&quot;&quot;&quot; ┌───────┬───────┬───────┐ │ Col_1 ┆ Col_2 ┆ Col_3 │ │ --- ┆ --- ┆ --- │ │ str ┆ str ┆ i64 │ ╞═══════╪═══════╪═══════╡ │ A ┆ a ┆ 1 │ │ B ┆ b ┆ 2 │ │ C ┆ c ┆ 3 │ │ D ┆ d ┆ 4 │ └───────┴───────┴───────┘ &quot;&quot;&quot;) </code></pre> <p>and a list:</p> <pre class="lang-py prettyprint-override"><code>display_list = ['A','B','B','B','C','D','D','A'] </code></pre> <p>I want the output to be in this format:</p> <pre><code>shape: (8, 4) ┌───────┬───────┬───────┬───────────────┐ │ Col_1 ┆ Col_2 ┆ Col_3 ┆ Occurrence_No │ │ --- ┆ --- ┆ --- ┆ --- │ │ str ┆ str ┆ i64 ┆ i64 │ ╞═══════╪═══════╪═══════╪═══════════════╡ │ A ┆ a ┆ 1 ┆ 1 │ │ B ┆ b ┆ 2 ┆ 1 │ │ B ┆ b ┆ 2 ┆ 2 │ │ B ┆ b ┆ 2 ┆ 3 │ │ C ┆ c ┆ 3 ┆ 1 │ │ D ┆ d ┆ 4 ┆ 1 │ │ D ┆ d ┆ 4 ┆ 2 │ │ A ┆ a ┆ 1 ┆ 1 │ └───────┴───────┴───────┴───────────────┘ </code></pre> <p>I want the rows to be duplicated based on the number of times the first column appears in the list. Also, I need an Occurrence_No column that acts as the counter of the occurrence of the first column in the dataframe. The order of the rows in the DataFrame does not matter.</p> <p>I am able to get the result except the Occurrence_No using this code:</p> <pre class="lang-py prettyprint-override"><code>df = df.with_columns(pl.col('Col_1').map_elements(lambda x: display_list.count(x)).alias('occur')) df = df.select(pl.exclude('occur').repeat_by('occur').explode()) </code></pre> <p>The above code creates the right number of rows based on the number of times each value occurs in the list.</p> <p>How do I add the Occurrence_No column?</p>
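The missing piece is a per-group counter over `Col_1` after the explode. In Polars, depending on version, something like `pl.col("Col_1").cum_count().over("Col_1")` (or `pl.int_range(pl.len()).over("Col_1") + 1`) should produce it; the same idea is illustrated below with pandas, where the counter is `groupby(...).cumcount() + 1`:

```python
import pandas as pd

df = pd.DataFrame({"Col_1": list("ABCD"),
                   "Col_2": list("abcd"),
                   "Col_3": [1, 2, 3, 4]})
display_list = ["A", "B", "B", "B", "C", "D", "D", "A"]

# Repeat each row by how often its Col_1 value occurs in the list...
counts = pd.Series(display_list).value_counts()
out = df.loc[df.index.repeat(df["Col_1"].map(counts))].reset_index(drop=True)

# ...then number the occurrences 1..n within each Col_1 group.
out["Occurrence_No"] = out.groupby("Col_1").cumcount() + 1
```

This yields rows A,A,B,B,B,C,D,D with counters 1,2 / 1,2,3 / 1 / 1,2 (row order differs from the sample output, which the question says does not matter).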
<python><dataframe><python-polars>
2023-10-13 04:00:56
1
4,099
Shankze
77,284,970
19,238,204
How to change x-axis ticks into pi (π) terms with Sympy Plotting (Python 3)
<p>I have this Fourier computation for n=3, n=5, and n=7. I want to make the x-axis have ticks in π terms (from -4π to 4π, with an increment that can be adjusted, for example π). I found on SO that the x-axis can be changed to π terms, but those answers all use matplotlib to plot, while this code of mine uses SymPy to plot.</p> <p>The MWE:</p> <pre><code># https://docs.sympy.org/latest/modules/series/fourier.html from sympy import pi import sympy as sm x = sm.symbols(&quot;x&quot;) # Computing Fourier Series # This illustrates how truncating to the higher order gives better convergence. g = x s = sm.fourier_series(g, (x, -pi, pi)) print('') print('Fourier series for f(x) = x over the interval (-π,π) : ') sm.pretty_print(s) s1 = s.truncate(n = 3) s2 = s.truncate(n = 5) s3 = s.truncate(n = 7) print('') print('Fourier series for f(x) = x over the interval (-π,π) with n=3 : ') sm.pretty_print(s1) print('') print('Fourier series for f(x) = x over the interval (-π,π) with n=5 : ') sm.pretty_print(s2) print('') print('Fourier series for f(x) = x over the interval (-π,π) with n=7 : ') sm.pretty_print(s3) p = sm.plot(g, s1, s2, s3, (x, -4*pi, 4*pi), show=False, legend=True) p[0].line_color = 'g' p[0].label = 'x' p[1].line_color = 'r' p[1].label = 'n=3' p[2].line_color = 'b' p[2].label = 'n=5' p[3].line_color = 'cyan' p[3].label = 'n=7' p.show() </code></pre>
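SymPy's plot wraps matplotlib, so the usual approach is to grab the underlying matplotlib axes from the plot's backend and set π-multiple ticks there. A small label formatter plus a hedged sketch of the hook-up (note `_backend` is a private SymPy attribute and its exact shape varies between SymPy versions):

```python
import math

def pi_label(x):
    """Format a tick value as a multiple of pi, e.g. '2π', '-π', '0'."""
    n = round(x / math.pi)
    if n == 0:
        return "0"
    if n == 1:
        return "\u03c0"
    if n == -1:
        return "-\u03c0"
    return f"{n}\u03c0"

# Applying it to the sympy plot (sketch; _backend is private API):
#   p = sm.plot(..., show=False)
#   p._backend.process_series()          # builds the matplotlib figure
#   ax = p._backend.ax[0]
#   ticks = [i * math.pi for i in range(-4, 5)]   # step = π, adjustable
#   ax.set_xticks(ticks)
#   ax.set_xticklabels([pi_label(t) for t in ticks])
#   p._backend.fig.savefig("fourier.png")
```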
<python><sympy>
2023-10-13 03:58:20
1
435
Freya the Goddess
77,284,954
8,942,319
Poetry install not updating scripts
<p>I made a simple change: I moved a script up a directory. Poetry doesn't seem to recognize that it's been moved, since the <code>poetry install</code> or <code>update</code> command does nothing</p> <pre><code>Installing dependencies from lock file No dependencies to install or update </code></pre> <p>and the <code>poetry run script_name</code> command throws while referencing the old path.</p> <pre><code>[tool.poetry.scripts] launch = &quot;launcher.launch:main&quot; </code></pre> <p>Was the command. It reflected a directory structure <code>launcher/launch.py</code>, where a <code>Fire</code> CLI called <code>main</code>. This worked.</p> <p>I moved <code>launch.py</code> up, removed the <code>launcher</code> directory, and updated poetry to</p> <pre><code>[tool.poetry.scripts] launch = &quot;launch:main&quot; </code></pre> <p>But <code>poetry run launch</code> throws <code>No file/folder found for package launcher</code></p> <p>And again, <code>poetry install/update</code> does nothing.</p> <p>What cache/meta data/etc do I need to delete to have poetry re-install things? I have also deleted the lock file and get the same behavior when running install.</p>
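One thing that may explain the `No file/folder found for package launcher` error: with the module moved to the project root, Poetry can no longer infer the package layout from the project name, so it may need to be told explicitly where the code lives. A hedged pyproject sketch (the `packages` entry is the assumption here; check it against your Poetry version's docs):

```toml
# pyproject.toml -- sketch, assuming launch.py now sits at the project root
[tool.poetry]
# ...name, version, etc. unchanged...
packages = [{ include = "launch.py" }]

[tool.poetry.scripts]
launch = "launch:main"
```

If stale path metadata persists in the virtualenv after that, recreating the environment (`poetry env remove` followed by `poetry install`) forces Poetry to regenerate the console-script entry points.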
<python><python-poetry>
2023-10-13 03:51:27
1
913
sam
77,284,822
10,852,841
Only keep the best trial using keras-tuner (Hyperband)
<p>How can I tell <code>keras_tuner</code> to only save the best model (overwriting each trial) and not create any checkpoints or directories for each trial?</p> <p>There has been discussion already about how to reduce the amount of disk space that <code>keras_tuner</code> uses by either <a href="https://github.com/keras-team/keras-tuner/issues/288#issuecomment-1423600748" rel="nofollow noreferrer">re-implementing</a> <code>run_trial()</code> or <a href="https://github.com/keras-team/keras-tuner/issues/288#issuecomment-1685167496" rel="nofollow noreferrer">adjusting</a> the <code>_save_trial()</code> in <code>HyperbandOracle</code> (which I'm not using). Both of these approaches attempt to implement what I want, and I tried it below</p> <pre><code>import keras_tuner as kt ### Make Hyperband Class update class HyperbandTuner(kt.Hyperband): def __init__(self, hypermodel, **kwargs): super().__init__(hypermodel, **kwargs) def run_trial(self, trial, *args, **kwargs): hp = trial.hyperparameters model = self.hypermodel.build(hp) return self.hypermodel.fit(hp, model, *args, **kwargs) </code></pre> <p>but <code>trial_XXXX</code> folders were still created. Another option I tried was passing a checkpoint to <code>tuner.search()</code> like</p> <pre><code>best_model_checkpoint = tf.keras.callbacks.ModelCheckpoint( f'../Results/{directory}/{project_name}/best_model.hdf5', save_best_only=True, monitor='val_loss', mode='min' ) </code></pre> <p>which saves the model after each trial if it's better than the previous model, but the issue with <code>trial_XXXX</code> directories persists.</p> <p>I can also delete all the trial folders <strong>after</strong> the models finish running like</p> <pre><code>import shutil ### Remove all trials ### trial_path = Path(f&quot;{directory}/{project_name}&quot;) shutil.rmtree(trial_path) </code></pre> <p>but it would be best to delete them as they're made or not make them at all. How can I do this?</p>
<python><tensorflow2.0><keras-tuner>
2023-10-13 03:01:38
0
2,379
m13op22
77,284,818
1,761,761
Should hyphen-minus (U+002D) or hyphen (U+2010) be used for ISO 8601 datetimes?
<p>The Python interpreter gives the following when generating an ISO-8601 formatted date/time string:</p> <pre><code>&gt;&gt;&gt; import datetime &gt;&gt;&gt; datetime.datetime.now().isoformat(timespec='seconds') '2023-10-12T22:35:02' </code></pre> <p>Note that the '-' character in the string is a hyphen-minus character. When going backwards to produce the datetime object, we do the following:</p> <pre><code>&gt;&gt;&gt; datetime.datetime.strptime('2023-10-12T22:35:02', '%Y-%m-%dT%H:%M:%S') datetime.datetime(2023, 10, 12, 22, 35, 2) </code></pre> <p>This all checks out.</p> <p>However, sometimes when the ISO-8601 formatted date/time string is provided from an external source, such as a parameter sent over in a GET/POST request, or in a <code>.csv</code> file, the hyphens are sent as the <code>‐</code> (U+2010) character, which causes the parsing to break:</p> <pre><code>&gt;&gt;&gt; datetime.datetime.strptime('2023‐10‐12T22:35:02', '%Y-%m-%dT%H:%M:%S') Traceback (most recent call last): File &quot;&lt;stdin&gt;&quot;, line 1, in &lt;module&gt; File &quot;/Library/Developer/CommandLineTools/Library/Frameworks/Python3.framework/Versions/3.9/lib/python3.9/_strptime.py&quot;, line 568, in _strptime_datetime tt, fraction, gmtoff_fraction = _strptime(data_string, format) File &quot;/Library/Developer/CommandLineTools/Library/Frameworks/Python3.framework/Versions/3.9/lib/python3.9/_strptime.py&quot;, line 349, in _strptime raise ValueError(&quot;time data %r does not match format %r&quot; % ValueError: time data '2023‐10‐12T22:35:02' does not match format '%Y-%m-%dT%H:%M:%S' </code></pre> <p>What is the correct standard? Is it hyphen-minus <code>-</code> U+002D as given by Python when converting via <code>.isoformat()</code>, or hyphen <code>‐</code> U+2010?</p> <p>Would it be best practice to accept both?</p>
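ISO 8601 (and its internet profile, RFC 3339) use the ASCII hyphen-minus U+002D; an upstream source emitting U+2010 is producing malformed timestamps, most often from word-processor "smart punctuation". If such input must be tolerated, the lenient-reader approach is to normalize dash-like characters before parsing, along these lines:

```python
import datetime

# Dash-like characters that turn up in data from the wild; ISO 8601
# itself uses only ASCII hyphen-minus (U+002D).
DASHES = {"\u2010", "\u2011", "\u2012", "\u2013", "\u2212"}

def normalize_dashes(s):
    """Map exotic dash characters to ASCII '-' before strptime."""
    return "".join("-" if c in DASHES else c for c in s)

raw = "2023\u201010\u201012T22:35:02"  # the hyphens here are U+2010
dt = datetime.datetime.strptime(normalize_dashes(raw), "%Y-%m-%dT%H:%M:%S")
```

This follows the robustness principle: emit only U+002D, optionally accept the lookalikes on input.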
<python><datetime><unicode><ascii><iso8601>
2023-10-13 02:58:52
3
7,380
deadlock
77,284,715
8,444,568
How to make python auto-select the right pyc file to execute?
<p>I want to distribute my python script in <code>pyc</code> format (generated by <code>python -OO -m py_compile run.py</code>)</p> <p>For different python versions, I've created <code>run.cpython-36.pyc</code>, <code>run.cpython-38.pyc</code>...</p> <p>Now, if the users are using python3.6, they need to call <code>python run.cpython-36.pyc</code>; if they are using python3.8, they need to call <code>python run.cpython-38.pyc</code>. This is not convenient. How can I make python auto-select the right <code>pyc</code> file to execute, so that users don't need to bother choosing the right version?</p>
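One lightweight approach is a tiny plain-source launcher shipped next to the `.pyc` files that builds the matching filename from `sys.version_info` and runs it. A sketch (the `run` stem and filenames are the ones from the question):

```python
import sys

def pyc_name(version_info=sys.version_info, stem="run"):
    """Bytecode file matching the interpreter, e.g. run.cpython-38.pyc."""
    return f"{stem}.cpython-{version_info[0]}{version_info[1]}.pyc"

# A one-file launcher shipped alongside the .pyc files can then do:
#   import runpy
#   runpy.run_path(pyc_name())
# so users always just call `python launch.py`, on any version.
```

`runpy.run_path` accepts compiled bytecode files directly, so no renaming is needed.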
<python><pyc>
2023-10-13 02:17:36
1
893
konchy
77,284,709
4,898,865
pythonic popup messagebox in dearpygui (asyncio or sync way)
<p>For anyone who is looking for a pythonic popup messagebox in DearPyGui, I hope this helps.</p> <p>Target:</p> <pre class="lang-py prettyprint-override"><code>do_sth_step1(...) user_response = msgbox('Confirm?') # Yes/No if user_response == 'Yes': do_sth_step2(...) </code></pre>
<python><messagebox><dearpygui>
2023-10-13 02:16:14
1
337
Atlas
77,284,576
13,176,726
Google AdMob app-ads.txt Configuration issue
<p>I'm currently facing an issue with configuring my Django application deployed on a Linode server using Nginx. My issue is in the app-ads.txt file for Google AdMob. I have set up a view in my views.py and added the corresponding URL pattern in urls.py as follows:</p> <h1>views.py</h1> <pre><code>@require_GET def ads_txt(request): content = 'google.com, pub-*****************, DIRECT, ************' return HttpResponse(content, content_type=&quot;text/plain&quot;) </code></pre> <h1>urls.py</h1> <pre><code>from django.urls import path from .views import ads_txt urlpatterns = [ path('app-ads.txt', ads_txt), ] </code></pre> <h1>Problem:</h1> <p>The view works when I access the URL without a trailing slash (<a href="https://www.example.com/app-ads.txt" rel="nofollow noreferrer">https://www.example.com/app-ads.txt</a>), but I encounter a 404 Not Found error when I include the trailing slash (<a href="https://www.example.com/app-ads.txt/" rel="nofollow noreferrer">https://www.example.com/app-ads.txt/</a>).</p> <h1>Question:</h1> <p>What is the best configuration for my Django application to handle both forms of the app-ads.txt URL? Should it be with or without the trailing slash? The documentation shows it without.</p> <p>Additionally, is there anything specific I need to do to ensure that Google's crawler can correctly detect changes to the domain URL?</p> <p>How do I make sure that the Google console and the app store have the URL required for crawling? Maybe I filled in some fields and missed others. Are there specific required fields I should check?</p> <p>Any insights or suggestions would be greatly appreciated. Thank you!</p>
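If both forms should resolve, one option is a single regex route that makes the trailing slash optional (Django matches URLconf patterns against the path with the leading slash stripped). A sketch, assuming the `ads_txt` view from the question; the regex itself is testable on its own:

```python
import re

# Accepts both app-ads.txt and app-ads.txt/ in one pattern.
APP_ADS_PATTERN = r"^app-ads\.txt/?$"

# urls.py sketch:
#   from django.urls import re_path
#   urlpatterns = [re_path(APP_ADS_PATTERN, ads_txt)]

def matches(path):
    """Check a request path (without leading slash) against the pattern."""
    return re.search(APP_ADS_PATTERN, path) is not None
```

That said, the app-ads.txt crawler is generally expected to request the file at the exact path `/app-ads.txt` without a trailing slash, so serving the no-slash form correctly (as the current setup already does) is the part that matters for AdMob verification.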
<python><django><admob><linode>
2023-10-13 01:14:00
0
982
A_K
77,284,534
963,298
Python Slack API download binary file returns html
<p>I can upload files to a slack channel via the python Slack API. I am unable to download the binary file with the url indicated in <code>message['files'][0]['url_private_download']</code>.</p> <p>Rather, I get 40KB of HTML when using <code>requests.get</code>, <code>wget -mO &lt;file-name&gt; &lt;url&gt;</code>, and <code>curl &lt;url&gt; --output &lt;file-name&gt;</code>.</p> <p>Oddly, I can enter the same url in the chrome browser and the binary file is downloaded.</p> <p>It appears as though something is going on behind the scenes. And I'm not sure how best to approach this.</p> <p>Suggestions?</p>
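A likely cause: `url_private_download` URLs require the same token the API client uses, sent as a `Bearer` header; without it, Slack serves an HTML login/redirect page, which is exactly what `requests`, `wget` and `curl` receive, while the browser succeeds because it has a session cookie. A stdlib-only sketch; the URL and token values are placeholders:

```python
import urllib.request

def build_download_request(url: str, token: str) -> urllib.request.Request:
    # Slack's url_private_download honors the same xoxb-/xoxp- token as the API;
    # without this header the response is an HTML login page, not the file bytes.
    return urllib.request.Request(url, headers={"Authorization": f"Bearer {token}"})

def download(url: str, token: str, dest: str) -> None:
    req = build_download_request(url, token)
    with urllib.request.urlopen(req) as resp, open(dest, "wb") as f:
        f.write(resp.read())
```

With `curl` the equivalent is `curl -H "Authorization: Bearer xoxb-..." <url> --output <file>`.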
<python><download><slack-api>
2023-10-13 00:56:02
2
1,061
stephen
77,284,507
789,750
Use Plotly to show multiple lines by value of another column in the dataframe
<p>I have a dataframe that looks like this:</p> <pre><code>fruit   number  date
apple   7       2013-11-4
banana  8       2016-4-6
apple   2       2018-7-22
banana  103     2020-4-4
</code></pre> <p>I want a line plot with two lines: the number of apples and the number of bananas as a function of the date. Is there a simple way to do this, or do I have to filter the dataframe by the value of the &quot;fruit&quot; column? Ideally this should adapt to the situation where the values in the fruit column, and how many distinct ones there are, are not known when the program is written.</p>
<python><pandas><dataframe><plotly>
2023-10-13 00:41:59
2
12,799
Dan
77,284,430
15,632,586
CUDA out of memory when running validation test for SciBERT
<p>I am trying to train SciBERT on a not-too-big dataset (roughly 10000 rows). I have partitioned the dataset into train, validation and test sets, with proportions of 0.6, 0.2 and 0.2 respectively. Then I tried to train the model with code like this:</p> <pre><code>from torch.optim import AdamW
from tqdm import tqdm
from statistics import mean

optim = AdamW(model.parameters(), lr=2e-5, eps=1e-8)

for epoch in range(4):
    epoch_losses = []
    validation_loss = []
    for x, y in tqdm(load_data(x_train, y_train, batch_size=10)):
        model.zero_grad()
        out = model(x, attention_mask=apply_attention_mask(x), labels=y)
        epoch_losses.append(out.loss.item())
        out.loss.backward()
        optim.step()
    print(f&quot;epoch {epoch + 1} loss: {mean(epoch_losses)}&quot;)

    for x, y in load_data(x_validation, y_validation, batch_size=10):  # validation data
        validation_output = model(x, attention_mask=apply_attention_mask(x), labels=y)
        validation_loss.append(validation_output.loss.item())
    print(f&quot;Validation for epoch {epoch + 1} loss: {mean(validation_loss)}&quot;)
</code></pre> <p>However, when I trained the model, after finishing with the training set and loading the validation set, Colab returned this error:</p> <pre><code>OutOfMemoryError: CUDA out of memory. Tried to allocate 120.00 MiB (GPU 0; 14.75 GiB total capacity; 13.84 GiB already allocated; 24.81 MiB free; 14.59 GiB reserved in total by PyTorch) If reserved memory is &gt;&gt; allocated memory try setting max_split_size_mb to avoid fragmentation.  See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
</code></pre> <p>From my understanding, when I try to validate the model, Colab just loads SciBERT onto the GPU again, which leads to this error; however, I am not sure how to fix this problem: <a href="https://i.sstatic.net/MkRL6.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/MkRL6.png" alt="enter image description here" /></a></p> <p>So, what should I do to fix this error and get the validation loss for each epoch?</p>
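A validation loop written like the one in the question builds an autograd graph for every forward pass and keeps it reachable, so GPU memory grows until it runs out; the model is not loaded twice. The standard remedy is to run validation under `torch.no_grad()` with the model in eval mode. A self-contained sketch of the pattern on a tiny stand-in model (the real fix is applying the same wrapper around the question's validation loop):

```python
import torch
from torch import nn

model = nn.Linear(4, 2)  # tiny stand-in for SciBERT

def validate(model, batches):
    model.eval()                  # disable dropout etc. for evaluation
    losses = []
    with torch.no_grad():         # no autograd graph is built or retained
        for x, y in batches:
            out = model(x)
            losses.append(nn.functional.cross_entropy(out, y).item())
    model.train()                 # restore training mode afterwards
    return sum(losses) / len(losses)

batches = [(torch.randn(10, 4), torch.randint(0, 2, (10,))) for _ in range(3)]
loss = validate(model, batches)
```

Calling `.item()` (as the question already does) detaches the scalar, but only `no_grad()` prevents the graphs from being built in the first place.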
<python><pytorch><google-colaboratory><bert-language-model>
2023-10-13 00:03:15
1
451
Hoang Cuong Nguyen
77,284,406
16,988,223
BeautifulSoup with python unable to get value of a span tag
<p>I'm trying to get the text of the main headline news from this <a href="https://elperuano.pe/" rel="nofollow noreferrer">web page</a>:</p> <p><a href="https://i.sstatic.net/JjGGJ.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/JjGGJ.png" alt="enter image description here" /></a></p> <p>Here is my code:</p> <pre><code>import requests
from bs4 import BeautifulSoup

news = &quot;&quot;
headers = {
    &quot;User-Agent&quot;: &quot;Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:109.0) Gecko/20100101 Firefox/118.0&quot;
}
url = &quot;https://elperuano.pe/&quot;
soup = BeautifulSoup(requests.get(url, headers=headers).content, 'html.parser')

# Obtener noticia principal (get the main headline)
for div in soup.findAll('span', attrs={'class': 'card-title fz18 lh30 fw500 width100'}):
    print(div.text)
</code></pre> <p>This is the only span tag that has the class name &quot;card-title fz18 lh30 fw500 width100&quot;. I don't know why this doesn't work. However, if I try to get the date of the newspaper, this works:</p> <pre><code>for div in soup.findAll('div', attrs={'class': 'lh18'}):
    n = div.text.rstrip(&quot;\n\n&quot;)
</code></pre> <p>I have tested many ways to get this, but it seems the web page is blocking it. Any ideas to fix this would be appreciated. Thanks so much.</p>
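Two things are worth checking in a case like this. First, `find_all('span', attrs={'class': 'a b c'})` compares against the exact multi-valued class string, which is fragile; a CSS selector via `select_one()` matches any element carrying all of those classes, in any order. Second, if the headline is injected by JavaScript it will not be present in the fetched HTML at all, regardless of the selector, so it pays to search the raw `response.text` for the class name. A small offline sketch of the selector point (the markup here is a stand-in for the real page, not a claim about its actual structure):

```python
from bs4 import BeautifulSoup

# Stand-in markup imitating the multi-class span from the question.
html = '<span class="card-title fz18 lh30 fw500 width100">Titular de prueba</span>'
soup = BeautifulSoup(html, "html.parser")

# CSS selectors match multi-class elements irrespective of attribute order:
tag = soup.select_one("span.card-title.fz18.lh30.fw500.width100")
print(tag.get_text(strip=True))
```

If `'card-title'` does not appear anywhere in the raw downloaded HTML, the content is rendered client-side and a tool like Selenium or Playwright (or the site's underlying JSON endpoint) is needed instead of plain `requests`.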
<python><web-scraping><beautifulsoup>
2023-10-12 23:57:21
3
429
FreddicMatters
77,284,309
5,212,614
How to Convert a Pandas Dataframe to a Spark Dataframe?
<p>I downloaded and installed the Windows 64-bit driver from the link below.</p> <p><a href="https://www.databricks.com/spark/odbc-drivers-download" rel="nofollow noreferrer">https://www.databricks.com/spark/odbc-drivers-download</a></p> <p>Then, I did <code>pip install pyspark</code></p> <p>Now, I am testing the code below, which seems very straightforward.</p> <pre><code>from pyspark.sql import SparkSession import pandas as pd # Assuming you already have a SparkSession created spark = SparkSession.builder.appName(&quot;example&quot;).getOrCreate() # Create a Pandas DataFrame pandas_df = pd.DataFrame({ 'column1': [1, 2, 3], 'column2': ['A', 'B', 'C'] }) # Convert Pandas DataFrame to PySpark DataFrame spark_df = spark.createDataFrame(pandas_df) # Show the PySpark DataFrame spark_df.show() </code></pre> <p>When I run the code, I get this error message.</p> <pre><code>PySparkRuntimeError Traceback (most recent call last) Cell In[3], line 5 2 import pandas as pd 4 # Assuming you already have a SparkSession created ----&gt; 5 spark = SparkSession.builder.appName(&quot;example&quot;).getOrCreate() 7 # Create a Pandas DataFrame 8 pandas_df = pd.DataFrame({ 9 'column1': [1, 2, 3], 10 'column2': ['A', 'B', 'C'] 11 }) File ~\anaconda3\lib\site-packages\pyspark\sql\session.py:497, in SparkSession.Builder.getOrCreate(self) 495 sparkConf.set(key, value) 496 # This SparkContext may be an existing one. --&gt; 497 sc = SparkContext.getOrCreate(sparkConf) 498 # Do not update `SparkConf` for existing `SparkContext`, as it's shared 499 # by all sessions. 
500 session = SparkSession(sc, options=self._options) File ~\anaconda3\lib\site-packages\pyspark\context.py:515, in SparkContext.getOrCreate(cls, conf) 513 with SparkContext._lock: 514 if SparkContext._active_spark_context is None: --&gt; 515 SparkContext(conf=conf or SparkConf()) 516 assert SparkContext._active_spark_context is not None 517 return SparkContext._active_spark_context File ~\anaconda3\lib\site-packages\pyspark\context.py:201, in SparkContext.__init__(self, master, appName, sparkHome, pyFiles, environment, batchSize, serializer, conf, gateway, jsc, profiler_cls, udf_profiler_cls, memory_profiler_cls) 195 if gateway is not None and gateway.gateway_parameters.auth_token is None: 196 raise ValueError( 197 &quot;You are trying to pass an insecure Py4j gateway to Spark. This&quot; 198 &quot; is not allowed as it is a security risk.&quot; 199 ) --&gt; 201 SparkContext._ensure_initialized(self, gateway=gateway, conf=conf) 202 try: 203 self._do_init( 204 master, 205 appName, (...) 215 memory_profiler_cls, 216 ) File ~\anaconda3\lib\site-packages\pyspark\context.py:436, in SparkContext._ensure_initialized(cls, instance, gateway, conf) 434 with SparkContext._lock: 435 if not SparkContext._gateway: --&gt; 436 SparkContext._gateway = gateway or launch_gateway(conf) 437 SparkContext._jvm = SparkContext._gateway.jvm 439 if instance: File ~\anaconda3\lib\site-packages\pyspark\java_gateway.py:107, in launch_gateway(conf, popen_kwargs) 104 time.sleep(0.1) 106 if not os.path.isfile(conn_info_file): --&gt; 107 raise PySparkRuntimeError( 108 error_class=&quot;JAVA_GATEWAY_EXITED&quot;, 109 message_parameters={}, 110 ) 112 with open(conn_info_file, &quot;rb&quot;) as info: 113 gateway_port = read_int(info) PySparkRuntimeError: [JAVA_GATEWAY_EXITED] Java gateway process exited before sending its port number. </code></pre> <p>What did I do wrong? What am I missing here?</p>
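The `JAVA_GATEWAY_EXITED` error almost always means PySpark could not launch a JVM: `pip install pyspark` ships Spark itself but no Java runtime, and the Databricks ODBC driver is unrelated to running a local `SparkSession`. A commonly suggested check and fix on Windows (the JDK path below is an example, not a known install location):

```
:: In a Windows command prompt, verify a JDK is visible:
java -version

:: If it is not found, install a JDK (8, 11 or 17 work with recent Spark)
:: and point Spark at it before starting Python:
set JAVA_HOME=C:\Program Files\Java\jdk-11
set PATH=%JAVA_HOME%\bin;%PATH%
```

Once `java -version` succeeds in the same shell (or the variables are set system-wide), `SparkSession.builder.getOrCreate()` should be able to start the gateway.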
<python><python-3.x><pandas><apache-spark><pyspark>
2023-10-12 23:28:23
1
20,492
ASH
77,284,251
3,138,238
Run ASGI Application with FastAPI inside a Jelastic PaaS Environment using mod_wsgi
<p>I want to use <code>FastAPI</code> inside my python project, I want to deploy it on a Jelastic PaaS. Apparently <code>mod_wsgi</code> manages only WSGI application so I was trying to run an ASGI application inside a WSGI application using <code>a2wsgi</code> like this, this is my <code>wsgi.py</code>:</p> <pre><code>import os, sys virtenv = os.path.expanduser('~') + '/virtenv/' virtualenv = os.path.join(virtenv, 'bin/activate_this.py') try: if sys.version.split(' ')[0].split('.')[0] == '3': exec(compile(open(virtualenv, &quot;rb&quot;).read(), virtualenv, 'exec'), dict(__file__=virtualenv)) else: execfile(virtualenv, dict(__file__=virtualenv)) except IOError: pass sys.path.append(os.path.expanduser('~')) sys.path.append(os.path.expanduser('~') + '/ROOT/') from fastapi import FastAPI app = FastAPI() @app.get(&quot;/&quot;) async def root(): return {&quot;message&quot;: &quot;Hello World&quot;} from a2wsgi import ASGIMiddleware application = ASGIMiddleware(app) </code></pre> <p>And these lines have been executed for my specific <code>virtenv</code>:</p> <pre><code>virtualenv virtenv source virtenv/bin/activate pip install a2wsgi pip install fastapi deactivate </code></pre> <p>But it still does not work properly. Probably I'm missing something big. The error doesn't seem &quot;to talk&quot; to me (but I'm not a python-guy):</p> <pre><code>mod_wsgi (pid=14438): Failed to exec Python script file '/var/www/webroot/ROOT/wsgi.py'. mod_wsgi (pid=14438): Exception occurred processing WSGI script '/var/www/webroot/ROOT/wsgi.py'. 
Traceback (most recent call last): File &quot;/var/www/webroot/ROOT/wsgi.py&quot;, line 24, in &lt;module&gt; from fastapi import FastAPI File &quot;/var/www/webroot/virtenv/lib/python3.12/site-packages/fastapi/__init__.py&quot;, line 7, in &lt;module&gt; from .applications import FastAPI as FastAPI File &quot;/var/www/webroot/virtenv/lib/python3.12/site-packages/fastapi/applications.py&quot;, line 16, in &lt;module&gt; from fastapi import routing File &quot;/var/www/webroot/virtenv/lib/python3.12/site-packages/fastapi/routing.py&quot;, line 22, in &lt;module&gt; from fastapi import params File &quot;/var/www/webroot/virtenv/lib/python3.12/site-packages/fastapi/params.py&quot;, line 5, in &lt;module&gt; from fastapi.openapi.models import Example File &quot;/var/www/webroot/virtenv/lib/python3.12/site-packages/fastapi/openapi/models.py&quot;, line 4, in &lt;module&gt; from fastapi._compat import ( File &quot;/var/www/webroot/virtenv/lib/python3.12/site-packages/fastapi/_compat.py&quot;, line 20, in &lt;module&gt; from fastapi.exceptions import RequestErrorModel File &quot;/var/www/webroot/virtenv/lib/python3.12/site-packages/fastapi/exceptions.py&quot;, line 3, in &lt;module&gt; from pydantic import BaseModel, create_model File &quot;/var/www/webroot/virtenv/lib/python3.12/site-packages/pydantic/__init__.py&quot;, line 12, in &lt;module&gt; from . 
import dataclasses File &quot;/var/www/webroot/virtenv/lib/python3.12/site-packages/pydantic/dataclasses.py&quot;, line 11, in &lt;module&gt; from ._internal import _config, _decorators, _typing_extra File &quot;/var/www/webroot/virtenv/lib/python3.12/site-packages/pydantic/_internal/_decorators.py&quot;, line 15, in &lt;module&gt; from ..fields import ComputedFieldInfo File &quot;/var/www/webroot/virtenv/lib/python3.12/site-packages/pydantic/fields.py&quot;, line 19, in &lt;module&gt; import annotated_types File &quot;/var/www/webroot/virtenv/lib/python3.12/site-packages/annotated_types/__init__.py&quot;, line 361, in &lt;module&gt; IsNotFinite = Annotated[_NumericType, Predicate(Not(math.isfinite))] ^^^^^^^^^^^^^^^^^^ TypeError: Not() takes no arguments </code></pre> <p>The full context:</p> <pre><code>APACHE_VERSION=2.4.57 DOCKER_EXPOSED_PORT=21,22,25,443,7979,80 MOD_WSGI_VERSION=4.9.4 OWASP_MODSECURITY_CRS_VERSION=3.3.2 PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin PYTHON_VERSION=3.12.0 STACK_VERSION=2.4.57 VERSION=3.12.0 WEBROOT=/var/www/webroot WSGI_SCRIPT=/var/www/webroot/ROOT/wsgi.py </code></pre>
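One observation that may help: the traceback dies inside `annotated_types` while FastAPI is merely being imported, which is characteristic of mod_wsgi running the script in a sub-interpreter; a number of libraries (pydantic among them) only behave correctly in the main interpreter. A commonly suggested mitigation is to force the main interpreter in the Apache configuration. The directives below are standard mod_wsgi; the paths mirror the question, and whether this resolves this exact `TypeError` is an assumption worth testing:

```
# Apache httpd configuration (sketch)
WSGIDaemonProcess myapp python-home=/var/www/webroot/virtenv
WSGIProcessGroup myapp
WSGIApplicationGroup %{GLOBAL}    # run the app in the main interpreter
WSGIScriptAlias / /var/www/webroot/ROOT/wsgi.py
```

Using `WSGIDaemonProcess` with `python-home` also replaces the `activate_this.py` dance at the top of `wsgi.py`, which modern virtualenvs no longer ship.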
<python><mod-wsgi><wsgi><jelastic><asgi>
2023-10-12 23:07:17
1
7,311
madx
77,284,187
14,192,391
Error: pyodbc.ProgrammingError: No results. Previous SQL was not a query. when using python script
<p>I am using a Python script and the pyodbc library to connect to MS SQL Server (ODBC Driver 17 for SQL Server). I use <code>DECLARE</code> to create a table variable:</p> <pre><code>SQL_QUERY = &quot;&quot;&quot;
DECLARE @MyTable TABLE (ID INT, Name NVARCHAR(255));
INSERT INTO @MyTable (ID, Name) VALUES (1, 'John'), (2, 'Jane');
SELECT * FROM @MyTable;
&quot;&quot;&quot;
cursor.execute(SQL_QUERY)
records = cursor.fetchall()  # ----- error
print(records)
</code></pre> <p>However, it gives me an error:</p> <pre><code>    records = cursor.fetchall()
              ^^^^^^^^^^^^^^^^^
pyodbc.ProgrammingError: No results.  Previous SQL was not a query.
</code></pre> <p>I also tried using <code>DECLARE</code> to create a scalar variable:</p> <pre><code>SQL_QUERY = &quot;&quot;&quot;
DECLARE @MyVariable INT;
SET @MyVariable = 42;
SELECT @MyVariable AS MyValue;
&quot;&quot;&quot;
cursor.execute(SQL_QUERY)
result = cursor.fetchone()
my_value = result.MyValue
print(f&quot;The value of the declared variable is: {my_value}&quot;)
</code></pre> <p>No error, and the result is correct.</p> <p>Why does the first query, the one with the table variable, return that error? Can anyone give suggestions? Thanks!</p>
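The difference between the two batches is the `INSERT`: it emits a "rows affected" count, so the first result the driver sees is not the `SELECT`, and pyodbc reports "Previous SQL was not a query". Two standard remedies are prefixing the batch with `SET NOCOUNT ON`, or advancing past the count with `cursor.nextset()`. A sketch of the first (the query text mirrors the question; no database is contacted here):

```python
SQL_QUERY = """
SET NOCOUNT ON;  -- suppress the 'rows affected' message the INSERT would emit
DECLARE @MyTable TABLE (ID INT, Name NVARCHAR(255));
INSERT INTO @MyTable (ID, Name) VALUES (1, 'John'), (2, 'Jane');
SELECT * FROM @MyTable;
"""

# With a live connection (not executed in this sketch):
#   cursor.execute(SQL_QUERY)
#   records = cursor.fetchall()   # now returns the two rows
#
# Alternative without NOCOUNT: loop cursor.nextset() until a result set appears.
```

The scalar-variable batch worked by accident: it contains no statement that produces a row count before the `SELECT`.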
<python><sql><odbc><pyodbc><pymssql>
2023-10-12 22:43:33
0
495
zurich_ruby
77,284,175
9,669,142
Python - linear trendline on log-log plot is not fitted well
<p>I have multiple sets of four data points and I want to create for each of these sets a linear trendline on a log-log plot between each of the points (four points meaning three trendlines in total). I use <code>curve_fit</code> from <code>scipy.optimize</code> to do this. It works for all sets that I have, except for one set:</p> <p><a href="https://i.sstatic.net/xCdqW.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/xCdqW.png" alt="enter image description here" /></a></p> <p>Most of the time this has something to do with the initial guesses (the <code>p0</code>), but after some trying with different guesses I still end up with this. I have seen in literature that for similar values these plots look just fine, so I must be missing something.</p> <p>What am I missing here to make it work? The only thing I can think of is that there is still something wrong with the guesses. I have a test-code below to copy-paste.</p> <p>The code:</p> <pre><code>from scipy.optimize import curve_fit import matplotlib.pyplot as plt import numpy as np # Two lists with the information for the line list_x = [3.139, 2.53, 0.821, 0.27] list_y = [35.56, 26.82, 10.42, 4.66] def func_exp(x, a, b): return (a * x)**b # Points point1_x = list_x[0] point1_y = list_y[0] point2_x = list_x[1] point2_y = list_y[1] point3_x = list_x[2] point3_y = list_y[2] point4_x = list_x[3] point4_y = list_y[3] # Lines between points p0_12 = (point1_x, point2_x) formula_12, pcov_12 = curve_fit(func_exp, [point1_x, point1_y], [point2_x, point2_y], maxfev=10000, p0=p0_12) p0_23 = (point2_x, point3_x) formula_23, pcov_23 = curve_fit(func_exp, [point2_x, point2_y], [point3_x, point3_y], maxfev=10000, p0=p0_23) p0_34 = (point3_x, point4_x) formula_34, pcov_34 = curve_fit(func_exp, [point3_x, point3_y], [point4_x, point4_y], maxfev=10000, p0=p0_34) # Create plot plot_x_12 = np.linspace(point1_x, point2_x, 1000) plot_y_12 = (formula_12[0] * plot_x_12)**formula_12[1] plot_x_23 = np.linspace(point2_x, 
point3_x, 1000) plot_y_23 = (formula_23[0] * plot_x_23)**formula_23[1] plot_x_34 = np.linspace(point3_x, point4_x, 1000) plot_y_34 = (formula_34[0] * plot_x_34)**formula_34[1] fig, ax1 = plt.subplots(1, 1, figsize=(10, 5)) ax1.scatter(list_x, list_y, color='black') ax1.plot(plot_x_12, plot_y_12) ax1.plot(plot_x_23, plot_y_23) ax1.plot(plot_x_34, plot_y_34) ax1.set_xscale('log', base=10) ax1.set_yscale('log', base=10) </code></pre>
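A separate point worth noting: with exactly two points per segment there is nothing to fit, so `y = (a*x)**b` through `(x1, y1)` and `(x2, y2)` can be solved in closed form, sidestepping `curve_fit` and its initial guesses entirely. (Also note that `curve_fit(f, xdata, ydata)` expects all x values and then all y values; the calls in the question pass one *point* per argument, i.e. `[x1, y1]` as xdata, which is not what the function fits.) A stdlib sketch of the closed-form segment:

```python
import math

def loglog_segment(p1, p2):
    """Exact parameters of the power law y = (a*x)**b through two points."""
    (x1, y1), (x2, y2) = p1, p2
    b = math.log(y2 / y1) / math.log(x2 / x1)   # slope on the log-log plot
    a = y1 ** (1.0 / b) / x1                    # from y1 = (a*x1)**b
    return a, b

# First segment from the question's data; the curve passes exactly
# through both endpoints, so each plotted segment joins its points.
a, b = loglog_segment((3.139, 35.56), (2.53, 26.82))
print((a * 3.139) ** b, (a * 2.53) ** b)
```

Each of the three segments can be computed this way and plotted with `np.linspace` exactly as in the question, with no optimizer involved.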
<python><scipy><curve-fitting>
2023-10-12 22:40:01
1
567
Fish1996
77,284,167
1,750,849
Conda env: confusing python versions in conda info and actual python command
<p>I created a new conda env with python 3.11 and here is the conda env info:</p> <pre><code>rakesh@Rakeshs-MBP ~ % conda activate mlearn (mlearn) rakesh@Rakeshs-MBP ~ % conda info active environment : mlearn active env location : /usr/local/Caskroom/miniconda/base/envs/mlearn shell level : 1 user config file : /Users/rks/.condarc populated config files : /Users/rks/.condarc conda version : 23.9.0 conda-build version : not installed python version : 3.11.4.final.0 virtual packages : __archspec=1=x86_64 __osx=14.0=0 __unix=0=0 base environment : /usr/local/Caskroom/miniconda/base (writable) conda av data dir : /usr/local/Caskroom/miniconda/base/etc/conda conda av metadata url : None channel URLs : https://conda.anaconda.org/conda-forge/osx-64 https://conda.anaconda.org/conda-forge/noarch https://repo.anaconda.com/pkgs/main/osx-64 https://repo.anaconda.com/pkgs/main/noarch https://repo.anaconda.com/pkgs/r/osx-64 https://repo.anaconda.com/pkgs/r/noarch package cache : /usr/local/Caskroom/miniconda/base/pkgs /Users/rks/.conda/pkgs envs directories : /usr/local/Caskroom/miniconda/base/envs /Users/rks/.conda/envs platform : osx-64 user-agent : conda/23.9.0 requests/2.31.0 CPython/3.11.4 Darwin/23.0.0 OSX/14.0 UID:GID : 501:20 netrc file : None offline mode : False </code></pre> <p>According to this the python version should be 3.11.4. But the python version for python on command line is 3.11.5:</p> <pre><code>(mlearn) rks@Rks-MBP ~ % which python /usr/local/Caskroom/miniconda/base/envs/mlearn/bin/python (mlearn) rks@Rks-MBP ~ % python --version Python 3.11.5 </code></pre> <p>Is this how it should be or am I missing something?</p>
<python><conda><miniconda>
2023-10-12 22:37:43
0
451
rks
77,284,039
14,813,970
How to properly round to the nearest integer in double-double arithmetic
<p>I have to analyse a large amount of data using Python3 (PyPy implementation), where I do some operations on quite large floats, and must check if the results are close enough to integers.</p> <p>To exemplify, say I'm generating random pairs of numbers, and checking if they form <a href="https://en.wikipedia.org/wiki/Pythagorean_triple#:%7E:text=A%20Pythagorean%20triple%20consists%20of,for%20any%20positive%20integer%20k." rel="nofollow noreferrer">pythagorean triples</a> (are sides of right triangles with integer sides):</p> <pre><code>from math import hypot from pprint import pprint from random import randrange from time import time def gen_rand_tuples(start, stop, amount): ''' Generates random integer pairs and converts them to tuples of floats. ''' for _ in range(amount): yield (float(randrange(start, stop)), float(randrange(start, stop))) t0 = time() ## Results are those pairs that results in integer hypothenuses, or ## at least very close, to within 1e-12. results = [t for t in gen_rand_tuples(1, 2**32, 10_000_000) if abs((h := hypot(*t)) - int(h)) &lt; 1e-12] print('Results found:') pprint(results) print('finished in:', round(time() - t0, 2), 'seconds.') </code></pre> <p>Running it I got:</p> <pre><code>Python 3.9.17 (a61d7152b989, Aug 13 2023, 10:27:46) [PyPy 7.3.12 with GCC 13.2.1 20230728 (Red Hat 13.2.1-1)] on linux Type &quot;help&quot;, &quot;copyright&quot;, &quot;credits&quot; or &quot;license()&quot; for more information. &gt;&gt;&gt; ===== RESTART: /home/user/Downloads/pythagorean_test_floats.py ==== Results found: [(2176124225.0, 2742331476.0), (342847595.0, 3794647043.0), (36.0, 2983807908.0), (791324089.0, 2122279232.0)] finished in: 2.64 seconds. </code></pre> <p>Fun, it ran fast, processing 10 million datapoints in a bit over 2 seconds, and I even found some matching data. 
The hypothenuse is apparently integer:</p> <pre><code>&gt;&gt;&gt; pprint([hypot(*x) for x in results]) [3500842551.0, 3810103759.0, 2983807908.0, 2265008378.0] </code></pre> <p>But not really, if we check the results using the decimal arbitrary precision module, we see the results are not actually not close enough to integers:</p> <pre><code>&gt;&gt;&gt; from decimal import Decimal &gt;&gt;&gt; pprint([(x[0]*x[0] + x[1]*x[1]).sqrt() for x in (tuple(map(Decimal, x)) for x in results)]) [Decimal('3500842551.000000228516418075'), Decimal('3810103758.999999710375341513'), Decimal('2983807908.000000217172157183'), Decimal('2265008377.999999748566051441')] </code></pre> <p>So, I think the problem is the numbers are large enough to fall in the range where python floats lack precision, so false positives are returned.</p> <p>Now, we can just change the program to use arbitrary precision decimals everywhere:</p> <pre><code>from decimal import Decimal from pprint import pprint from random import randrange from time import time def dec_hypot(x, y): return (x*x + y*y).sqrt() def gen_rand_tuples(start, stop, amount): ''' Generates random integer pairs and converts them to tuples of decimals. ''' for _ in range(amount): yield (Decimal(randrange(start, stop)), Decimal(randrange(start, stop))) t0 = time() ## Results are those pairs that results in integer hypothenuses, or ## at least very close, to within 1e-12. results = [t for t in gen_rand_tuples(1, 2**32, 10_000_000) if abs((h := dec_hypot(*t)) - h.to_integral_value()) &lt; Decimal(1e-12)] print('Results found:') pprint(results) print('finished in:', round(time() - t0, 2), 'seconds.') </code></pre> <p>Now we don't get any false positives, but we take a large performance hit. What previously took a bit over 2s, now takes over 100s. It appears decimals are not JIT-friendly:</p> <pre><code>====== RESTART: /home/user/Downloads/pythagorean_test_dec.py ====== Results found: [] finished in: 113.82 seconds. 
</code></pre> <p>I found <a href="https://stackoverflow.com/a/66654242/14813970">this answer</a> to the question, <a href="https://stackoverflow.com/q/66646850/14813970">CPython and PyPy Decimal operation performance</a>, suggesting the use of double-double precision numbers as a faster, JIT-friendly alternative to decimals, to get better precision than built-in floats. So I pip installed the doubledouble third-party module, and changed the program accordingly:</p> <pre><code>from doubledouble import DoubleDouble from decimal import Decimal from pprint import pprint from random import randrange from time import time def dd_hypot(x, y): return (x*x + y*y).sqrt() def gen_rand_tuples(start, stop, amount): for _ in range(amount): yield (DoubleDouble(randrange(start, stop)), DoubleDouble(randrange(start, stop))) t0 = time() print('Results found:') results = [t for t in gen_rand_tuples(1, 2**32, 10_000_000) if abs((h := dd_hypot(*t)) - int(h)) &lt; DoubleDouble(1e-12)] pprint(results) print('finished in:', round(time() - t0, 2), 'seconds.') </code></pre> <p>But I get this error:</p> <pre><code>======= RESTART: /home/user/Downloads/pythagorean_test_dd.py ====== Results found: Traceback (most recent call last): File &quot;/home/user/Downloads/pythagorean_test_dd.py&quot;, line 24, in &lt;module&gt; results = [t for t in gen_rand_tuples(1, 2**32, 10_000_000) if abs((h := dd_hypot(*t)) - int(h)) &lt; DoubleDouble(1e-12)] File &quot;/home/user/Downloads/pythagorean_test_dd.py&quot;, line 24, in &lt;listcomp&gt; results = [t for t in gen_rand_tuples(1, 2**32, 10_000_000) if abs((h := dd_hypot(*t)) - int(h)) &lt; DoubleDouble(1e-12)] TypeError: int() argument must be a string, a bytes-like object or a number, not 'DoubleDouble' </code></pre> <p>I think the problem is the module doesn't specify a conversion or rounding to the nearest integer method. 
The best I could write was an extremely contrived &quot;int&quot; function, that rounds a double-double to the nearest integer by doing a round-trip through string and decimals and back to DoubleDouble:</p> <pre><code>def contrived_int(dd): rounded_str = (Decimal(dd.x) + Decimal(dd.y)).to_integral_value() hi = float(rounded_str) lo = float(Decimal(rounded_str) - Decimal(hi)) return DoubleDouble(hi, lo) </code></pre> <p>But it's very roundabout, defeats the purpose of sidesteping decimals and makes the progam even slower than the full-decimal version.</p> <p>Then I ask, is there a fast way to round a double-double precision number to the nearest integer directly, without intermediate steps going through decimals or strings?</p>
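For the narrow operation asked about, rounding a double-double to the nearest integer, no string or `Decimal` round trip is needed: round the high word (exact, since `round()` on a single float is exact), fold the rounding error and the low word into a residual, and round once more. A stdlib sketch on plain `(hi, lo)` pairs; mapping this onto the `doubledouble` module's `.x`/`.y` fields is an assumption about that library's representation:

```python
def dd_nearest_int(hi: float, lo: float):
    """Nearest integer to the double-double value hi + lo, and the distance to it."""
    n = round(hi)            # exact for any float
    frac = (hi - n) + lo     # hi - n is exact (the two are within 0.5); lo is tiny
    m = round(frac)          # frac lies in roughly [-0.5, 0.5], so m is -1, 0 or 1
    return n + m, abs(frac - m)

# The false positive from the question: hypot reported 3500842551.0, but the
# true value is 3500842551.000000228...; the double-double residual exposes it.
n, dist = dd_nearest_int(3500842551.0, 2.28516418075e-07)
print(n, dist)
```

For the Pythagorean-triple check specifically, staying in integers is even simpler and exact: with `a`, `b` as Python ints, `math.isqrt(a*a + b*b) ** 2 == a*a + b*b` avoids floating point (and hence tolerances) entirely, and is JIT-friendly on PyPy.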
<python><precision><floating-accuracy><pypy><double-double-arithmetic>
2023-10-12 21:58:09
4
337
kleite
77,283,982
6,076,975
Pattern Matching callbacks for inputs of a form on multiple pages
<p>I have a use case where I am rendering a form with multiple select dropdowns. The form can be rendered with a varying number of selects (chosen from a file). I am using dash-bootstrap-components (dbc) for this. I render each form with a unique ID, and there can be several such forms on a page in different tabs. Users decide how many tabs to open, and each tab has this form.</p> <p>Form example:</p> <pre><code>dbc.Form(
    id={"type": "form", "index": 1},
    children=[list_of_selects],
)
</code></pre> <p>Select example:</p> <pre><code>dbc.Select(
    key=key,
    id={"type": "select", "index": 1},
    options=options,
    size="sm",
)
</code></pre> <p>Whenever I submit the form, I want to access the values of all the selects in that particular form. I am doing it with State:</p> <pre><code>@callback(
    Output(toaster, "is_open"),
    [Input(form, "n_submit")],
    [State({"type": "select", "index": ALL}, "value")],
    [State({"type": "select", "index": ALL}, "key")],
)
</code></pre> <p>As you can see, I am using index: ALL to get data from all the selects. The issue is that it returns the selects from all the forms, not just the one that was submitted. I was thinking of pattern matching on a particular form ID, but from the documentation I couldn't find how to do that, since the form IDs are generated dynamically and are not available when the callback is registered.</p> <p>What would be the right way to solve this problem?</p>
<python><plotly><plotly-dash>
2023-10-12 21:41:21
0
697
Arjun Singh
77,283,976
4,969,603
Add coroutine to gather array
<p>I have an array of coroutines in asyncio, all of them long-running, and from inside one of them I would like to add a new coroutine to the array being gathered. I'm not sure if this is possible. Imagine the pingLoop code needs to start a new coroutine pingSpecial() and await it; pingSpecial() is also long-running, so awaiting it directly would block pingLoop, which is not OK.</p> <pre><code>loops = [pingLoop()]
await asyncio.gather(*loops)
</code></pre>
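Inside a running coroutine you can hand a new coroutine to the event loop with `asyncio.create_task()`: it starts running immediately, does not block its creator, and can be awaited later (for example in a second `gather` round). A self-contained sketch with stand-ins for pingLoop/pingSpecial:

```python
import asyncio

async def ping_special():
    await asyncio.sleep(0)      # stand-in for longer-running work
    return "special"

async def ping_loop(pending):
    # Schedule ping_special on the loop WITHOUT awaiting it here,
    # so ping_loop itself is never blocked by it:
    pending.append(asyncio.create_task(ping_special()))
    await asyncio.sleep(0)      # stand-in for the ping loop's own work
    return "loop"

async def main():
    pending = []
    results = list(await asyncio.gather(ping_loop(pending)))
    # Second round: wait for anything that was added along the way.
    results += await asyncio.gather(*pending)
    return results

print(asyncio.run(main()))
```

On Python 3.11+, `asyncio.TaskGroup` gives the same pattern with structured error handling: tasks created via `tg.create_task()` inside the group are all awaited when the `async with` block exits.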
<python><python-asyncio>
2023-10-12 21:40:12
1
373
Miro Krsjak
77,283,841
3,657,988
Cached WMTS requests
<p>The Cartopy <a href="https://scitools.org.uk/cartopy/docs/latest/reference/generated/cartopy.mpl.geoaxes.GeoAxes.html#cartopy.mpl.geoaxes.GeoAxes.add_wmts" rel="nofollow noreferrer"><code>add_wmts</code></a> method uses <a href="https://scitools.org.uk/cartopy/docs/latest/reference/generated/cartopy.io.ogc_clients.WMTSRasterSource.html" rel="nofollow noreferrer"><code>WMTSRasterSource</code></a> under the hood. Its docstring says it uses caching for repeated retrievals and a quick, uninformed glance at the <a href="https://scitools.org.uk/cartopy/docs/latest/_modules/cartopy/io/ogc_clients.html#WMTSRasterSource" rel="nofollow noreferrer">source</a> suggests it does check some kind of local cache before making a request.</p> <p>If, however, I set up a simple figure using Cartopy in Matplotlib, repeated calls to the code generate network traffic suggesting repeated downloads from the WMTS source. Often times I am making small adjustments to a map unrelated to the imagery and don't want to download the imagery every time (or get throttled or banned).</p> <p><code>cartopy.config['data_dir']</code> is set to the default location and contains cached data of other types (SRTM elevation data, Natural Earth data, etc)</p> <pre class="lang-py prettyprint-override"><code>import cartopy.crs as ccrs import matplotlib.pyplot as plt plt.figure() ax = plt.subplot(1,1,1,projection=ccrs.Mercator()) ax.set_extent([-122.55, -122, 37.4, 37.85], crs=ccrs.PlateCarree()) ax.add_wmts('https://basemap.nationalmap.gov/arcgis/rest/services/USGSImageryOnly/MapServer/WMTS/1.0.0/WMTSCapabilities.xml', layer_name='USGSImageryOnly') plt.show() </code></pre> <p>How can I get Cartopy to use a local cache in this situation?</p>
<python><matplotlib><cartopy>
2023-10-12 21:04:26
1
1,195
Dan
77,283,747
3,846,421
Windows 10: rstudio reticulate keras tensorflow :: anaconda vs. miniconda vs. python
<p>The number of issues I'm having with a STABLE python environment operating beneath R/Rstudio is overwhelming... and the permutations of installation steps is far too flexible to inspire confidence with any approach.</p> <p>First, I used <code>reticulate</code> to <code>uninstall_miniconda</code>. I wiped the computer of any trace of python. I have updated all packages, and reinstalled <code>reticulate</code>. I had <code>reticulate</code> <code>install_miniconda</code> (maybe I should have <code>reticulate</code> <code>install_python()</code> instead, but I don't understand why I wouldn't use conda when most R [Python context] documentation uses conda management... <em>so as a new user of python in R contexts, I want the documentation to be applicable to my work),</em> ...then I installed the latest <code>tensorflow</code> package and attempted to <code>install_tensorflow()</code>, but got <code>mapply()</code> errors (crazy that there is nothing on the internet about this error). I also got <code>mapply()</code> errors when attempting to run <code>install_keras()</code> from a new install of the <code>keras</code> package.</p> <p>Further, nearly all of the reticulate documentation refers to using the &quot;r-reticulate&quot; conda environment... which again suggests that installation of miniconda or anaconda would be preferable.</p> <p>After trying different ways to activate the conda environment, or install tensorflow first, or keras first, or activating the conda environment last... etc. etc... I had no other recourse than filing an issue on Github. 
So when attempting to report this issue, the prefill text says to use the following approach:</p> <pre><code>install.packages(&quot;remotes&quot;) remotes::install_github(sprintf(&quot;rstudio/%s&quot;, c(&quot;reticulate&quot;, &quot;tensorflow&quot;, &quot;keras&quot;))) if (is.null(reticulate::virtualenv_starter())) reticulate::install_python() keras::install_keras() </code></pre> <p>Why is posit/rstudio suggesting using <code>install_python()</code> instead of <code>install_miniconda()</code>? Why did <code>install_keras()</code> finally work after <code>install_python()</code> with no further interventions (like defining the <code>RETICULATE_PYTHON</code> environment variable, or running the function <code>use_python(my_python_directory)</code>), yet everything fails after running <code>install_miniconda()</code>?</p> <p>I'm just looking for some explanation in hopes that it will facilitate my use of python with R, and in hopes that reticulate will operate predictably, or that I will be able to troubleshoot future issues without feeling like I'm on a scavenger hunt.</p> <p>Here is my <code>sessionInfo()</code> in case it matters:</p> <pre><code>&gt; sessionInfo() R version 4.1.2 (2021-11-01) Platform: x86_64-w64-mingw32/x64 (64-bit) Running under: Windows 10 x64 (build 19045) Matrix products: default locale: [1] LC_COLLATE=English_United States.1252 LC_CTYPE=English_United States.1252 LC_MONETARY=English_United States.1252 [4] LC_NUMERIC=C LC_TIME=English_United States.1252 attached base packages: [1] stats graphics grDevices utils datasets methods base loaded via a namespace (and not attached): [1] compiler_4.1.2 cli_3.6.1 tools_4.1.2 rstudioapi_0.15.0 rlang_1.1.1 </code></pre>
<python><tensorflow><keras><rstudio><miniconda>
2023-10-12 20:44:07
0
685
quickreaction
77,283,648
11,107,541
VS Code Python extension (circa v2018.19) no longer includes support for linters and formatters. Why?
<p>The Python extension for VS Code used to provide built-in support for tools like formatters and linters, including:</p> <ul> <li>Linting: Pylint, Flake8, Mypy, Bandit, Pydocstyle, Pycodestyle, Prospector, Pylama</li> <li>Formatting: autopep8, Black, YAPF</li> </ul> <p>What's happening to the built-in support for these tools in the Python extension? How can I get integrated support for these tools in VS Code going forward?</p>
<python><visual-studio-code>
2023-10-12 20:24:36
1
59,003
starball
77,283,503
376,535
Python poetry stops with exit code 1 but no error message in CICD
<p>I am running <code>poetry install -vvv --no-interaction</code> in the CICD environment as part of CI. We recently updated poetry to 1.6.1, the latest version as of today. The build was passing, but then this started to happen:</p> <pre><code>#!/bin/bash -eo pipefail
poetry install --no-interaction -vvv

Loading configuration file /root/.config/pypoetry/config.toml
Loading configuration file /root/project1/poetry.toml
Using virtualenv: /code/.venv
Installing dependencies from lock file
Finding the necessary packages for the current system
[keyring.backend] Loading KWallet
[keyring.backend] Loading SecretService
[keyring.backend] Loading Windows
[keyring.backend] Loading chainer
[keyring.backend] Loading libsecret
[keyring.backend] Loading macOS
No suitable keyring backend found
No suitable keyring backends were found
Keyring is not available, credentials will be stored and retrieved from configuration files as plaintext.
[urllib3.connectionpool] Starting new HTTPS connection (1): bitbucket.org:443
[urllib3.connectionpool] https://bitbucket.org:443 &quot;GET /org1/django-yubin.git/info/refs?service=git-upload-pack HTTP/1.1&quot; 200 None
Cloning https://bitbucket.org/org1/django-yubin.git at 'HEAD' to /code/.venv/src/django-yubin

Package operations: 0 installs, 1 update, 0 removals, 232 skipped

Error: Exited with code exit status 1
</code></pre> <p>Here the warning <code>Keyring is not available, credentials will be stored and retrieved from configuration files as plaintext</code> is shown in yellow.</p> <p>What is the problem here? How do I fix it?</p>
<python><git><pip><python-poetry>
2023-10-12 19:53:43
1
57,835
Shiplu Mokaddim
77,283,291
21,061,890
Python ctypes how to reject function call with arguments for function taking no arguments
<p>In C++, I have created a C linkage function in a shared library like this:</p> <pre class="lang-cpp prettyprint-override"><code>#include &lt;iostream&gt;

extern &quot;C&quot; int myfunc()
{
    std::cout &lt;&lt; &quot;Hello myfunc&quot; &lt;&lt; std::endl;
    return 1;
}
</code></pre> <p>Note the function does not require any arguments, and I would like to generate an error if someone tries to invoke it with arguments.</p> <p>I intend to invoke the function from python using ctypes like this:</p> <pre class="lang-py prettyprint-override"><code>#!/usr/bin/python3
from ctypes import *

libmylib = cdll.LoadLibrary(&quot;./libmylib.so&quot;)
libmylib.myfunc.argtypes = []
return_code = libmylib.myfunc()
print(f&quot;return_code={return_code:d}&quot;)
</code></pre> <p>When I run the python script it produces the expected output:</p> <pre class="lang-bash prettyprint-override"><code>Hello myfunc
return_code=1
</code></pre> <p>To test it, I changed the invocation of the function from python to additionally pass some arguments, i.e. the behaviour I want to reject. For example:</p> <pre class="lang-py prettyprint-override"><code>return_code = libmylib.myfunc(123)
# or
return_code = libmylib.myfunc(&quot;123&quot;)
</code></pre> <p>However, it still executes myfunc without any error, and appears to simply ignore that I &quot;wrongly&quot; supplied some arguments.</p> <p>I tried to change the argtypes line, but nothing seemed to have the desired behaviour:</p> <pre class="lang-py prettyprint-override"><code>libmylib.myfunc.argtypes = None
# or
libmylib.myfunc.argtypes = ()
</code></pre> <p>Is it possible for ctypes to detect and reject the arguments, or do I simply need to accept that myfunc could be called with extra arguments and I should just ignore them, and not get so fussy about it?</p> <p>I could write a wrapper shim function in python that would detect extra arguments and generate an error when the detection occurs, but really I was hoping to avoid doing that.</p>
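For reference, the wrapper shim mentioned above could be sketched like this. This is a hypothetical illustration of the concept only — `make_noargs_wrapper` and `fake_myfunc` are invented names, and the stand-in function replaces the real ctypes call so the snippet is runnable without the shared library:

```python
def make_noargs_wrapper(func):
    """Wrap a no-argument foreign function so that extra arguments
    raise TypeError instead of being silently ignored by ctypes."""
    def wrapper():  # takes no parameters, so wrapper(123) raises TypeError
        return func()
    return wrapper

# Stand-in for libmylib.myfunc (the real call would be the ctypes function):
def fake_myfunc():
    return 1

myfunc = make_noargs_wrapper(fake_myfunc)
print(myfunc())   # 1
# myfunc(123)     # raises TypeError: wrapper() takes 0 positional arguments
```

Because the wrapper's Python signature takes no parameters, the interpreter itself rejects extra arguments before ctypes is ever reached.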
<python><ctypes>
2023-10-12 19:13:32
1
313
InTheWonderlandZoo
77,282,710
6,572,639
Stream a file from S3 to a HTTP multipart endpoint
<p>I'm trying to stream a large file from an S3 bucket to an HTTP API. I read a lot of SO threads partially covering my needs but was not capable of putting things together.</p> <p>Reading the file locally works using requests toolbelt:</p> <pre><code>stream = open('/tmp/file', 'rb')
encoder = MultipartEncoder(
    {'attachments': ('file', stream, 'application/octet-stream'),
     'canRename': 'true'}
)
headers['Content-Type'] = encoder.content_type
r = requests.post(f'/api/attachments', data=encoder, headers=headers)
print(r.text)
</code></pre> <p>But reading the stream from S3 doesn't:</p> <pre><code>obj = s3_client.get_object(Bucket='bucket', Key='key')
stream = obj['Body']
encoder = MultipartEncoder(
    {'attachments': ('file', stream, 'application/octet-stream'),
     'canRename': 'true'}
)
headers['Content-Type'] = encoder.content_type
r = requests.post(f'/api/attachments', data=encoder, headers=headers)
print(r.text)
</code></pre> <blockquote> <p>self = &lt;requests_toolbelt.multipart.encoder.FileWrapper object at 0x1004d4f10&gt;</p> <p>return total_len(self.fd) - self.fd.tell()</p> <p>E TypeError: unsupported operand type(s) for -: 'NoneType' and 'int'</p> </blockquote> <p>I also tried to wrap the stream with different types, but no luck:</p> <pre><code>stream = StreamingIterator(obj['ContentLength'], obj['Body'])
# transfers data but hangs forever

stream = io.BufferedReader(obj['Body']._raw_stream, obj['ContentLength'])
# makes encoder.CustomBytesIO fail on total length calculation
</code></pre>
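For context, the error quoted above comes from the encoder computing remaining bytes as `len(fd) - fd.tell()`, which fails when the stream reports no length. The kind of length-aware wrapper being attempted could be sketched as follows — a hypothetical illustration (`LenWrapper` is an invented name), tested here with an in-memory stream rather than a real S3 body:

```python
import io

class LenWrapper:
    """Wrap a non-seekable stream and expose len()/tell() so callers
    that compute remaining bytes as len(fd) - fd.tell() can work."""
    def __init__(self, raw, length):
        self.raw = raw          # e.g. obj['Body'] from get_object
        self.length = length    # e.g. obj['ContentLength']
        self._pos = 0

    def __len__(self):
        return self.length

    def tell(self):
        return self._pos

    def read(self, size=-1):
        chunk = self.raw.read(size)
        self._pos += len(chunk)
        return chunk

body = io.BytesIO(b"hello world")   # stand-in for obj['Body']
stream = LenWrapper(body, 11)       # 11 would be obj['ContentLength']
print(len(stream) - stream.tell())  # 11
```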
<python><python-requests><boto3>
2023-10-12 17:29:49
1
1,352
Plup
77,282,683
6,672,746
PyAthena - Invalid length for parameter RoleArn
<p>I am trying to use Python, PyAthena, and SQLAlchemy to connect to Athena.</p> <p>When trying to define a SQLAlchemy cursor:</p> <pre><code>from pyathena import connect

cursor = connect(
    aws_access_key_id=aws_access_key,
    aws_secret_access_key=aws_secret_key,
    s3_staging_dir=s3_staging_dir,
    region_name=aws_region,
    schema_name=schema_name,
    role_arn=aws_workgroup
).cursor()
</code></pre> <p>I get this error:</p> <pre><code>ParamValidationError: Parameter validation failed:
Invalid length for parameter RoleArn, value: 14, valid min length: 20
</code></pre> <p>The string for <code>aws_workgroup</code>, which is correct and I cannot change, is 14 characters long. I came across <a href="https://github.com/boto/botocore/issues/1465" rel="nofollow noreferrer">this issue in botocore</a> from 2018, but it refers to aws-visualizer, and I don't know what that is or if I'm using it (or boto, for that matter).</p> <p>If I remove <code>role_arn</code> from the cursor, that code will run, but then I get this permissions error when trying to execute a query:</p> <pre><code>DatabaseError: An error occurred (AccessDeniedException) when calling the StartQueryExecution operation:
You are not authorized to perform: athena:StartQueryExecution on the resource.
After your AWS administrator or you have updated your permissions, please try again.
</code></pre> <p>I have verified the credentials in other applications, so it is not an issue with the credentials provided.</p> <p>How do I fix the <code>RoleArn</code> error? Or is there another method to connect to Athena in Python that gets around this?</p>
<python><sqlalchemy><boto3><amazon-athena><pyathena>
2023-10-12 17:25:35
0
2,171
Evan
77,282,567
2,745,116
How to build Python wheels that are independent of the MacOS version?
<p>I am regularly building and shipping Python wheels for a library, where we build different wheels for different platforms, particularly for Mac x86 and Mac ARM (M1/M2) platforms.</p> <p>A resulting wheel for ARM is usually called something like <code>mylib-16.2.0-cp39-cp39-macosx_14_0_arm64.whl</code>, i.e., it not only includes the library version (16.2.0) and Python version (3.9) but also the MacOS version (14.0).</p> <p>My colleagues who want to install the wheel are not all on the same MacOS version such that pip tells them there is no suitable wheel available. If I simply publish a renamed copy of the exact same wheel with a different MacOS version in the filename, the wheel is found by pip and installed properly.</p> <p>Is there are way to name or build or publish the wheel such that it is independent of the MacOS <em>version</em>? If I just omit the version from the filename <code>mylib-16.2.0-cp39-cp39-macosx_arm64.whl</code>, it does not work.</p>
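As an aside, the rename described above can be sketched as a pure filename transformation. This is a hypothetical helper (`retag` is an invented name); note that renaming the file alone does not touch the `WHEEL` metadata stored inside the archive — the `wheel tags` subcommand of the `wheel` package is, I believe, the tool that rewrites both the filename and the internal tags:

```python
def retag(filename, new_plat="macosx_11_0_arm64"):
    """Return the wheel filename with its platform tag (the last
    dash-separated component before .whl) replaced."""
    stem = filename[:-len(".whl")]
    parts = stem.split("-")
    parts[-1] = new_plat          # swap the platform tag only
    return "-".join(parts) + ".whl"

print(retag("mylib-16.2.0-cp39-cp39-macosx_14_0_arm64.whl"))
# mylib-16.2.0-cp39-cp39-macosx_11_0_arm64.whl
```

Lowering the version in the platform tag (e.g. to `macosx_11_0_arm64`) is what makes pip accept the wheel on any macOS at or above that version, since the tag declares a *minimum* supported OS.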
<python><macos><pip><build><python-wheel>
2023-10-12 17:07:24
1
6,176
stefanbschneider
77,282,466
1,230,694
Calculating end position of annotation on plotly bar chart
<p>I am creating a bar chart using <code>plotly</code> which only has two bars; the left bar is always a larger value than the right.</p> <p>I need to place an arrow pointing downwards above the smaller bar. The head of the arrow should touch the smaller bar, and this position is quite easy to compute. However, the tail of the arrow needs to extend upwards and stop at the y position of the taller bar, and this is proving hard to calculate, because the value for some reason is now a negative number and has no relation to the data at all.</p> <p>Below is a snippet of my code:</p> <pre><code>data = {
    &quot;Big Value&quot;: 34000,
    &quot;Small Value&quot;: 18000,
}

max_value = max(data.values())
min_value = min(data.values())
difference = max_value - min_value

arrow_start_position = max_value - difference
arrow_end_position = ??
print(arrow_end_position)

fig = go.Figure(
    data=[
        go.Bar(
            x=list(data.keys()),
            y=list(data.values())
        )
    ]
)

fig.add_annotation(
    x=1,
    y=arrow_start_position,
    text=f&quot;{difference}&quot;,
    showarrow=True,
    arrowhead=2,
    arrowsize=2,
    arrowwidth=2,
    arrowcolor=&quot;#84be54&quot;,
    ax=0,
    ay=arrow_end_position,
)
</code></pre> <p>How can I calculate the <code>arrow_end_position</code> value accurately? The y coordinate should be the max value (value of the larger bar), but for some unknown reason, in order to have the arrow head point down, <code>ay</code> needs to be negative, so now I am unable to use the max value to calculate where the line should stop.</p>
<python><plotly>
2023-10-12 16:48:42
2
3,899
berimbolo
77,282,427
1,946,418
python - library to browse and download files from a remote server
<p>I have a webserver with some folder/file structure. This list keeps changing. I can download using <code>requests</code> if I have the full path, but the list of files is dynamic; I'm looking to find the most recent file based on timestamps.</p> <p>I can scrape the HTML, but I'm hoping to avoid that. I'm looking for some sort of &quot;remote file browser, similar to <code>pathlib</code>&quot;.</p> <p>Does anyone know if there is a library that would let me browse these remote files if I give it the root of the url?</p> <p>TIA</p>
<python>
2023-10-12 16:42:37
0
1,120
scorpion35
77,282,316
1,555,615
How can Flask rest endpoint communicate to a pyqt application running on the same python program
<p>I am trying to create a simple rest endpoint that, when hit, changes the status of a tray icon that is installed by the program itself.</p> <p>I believe that I need Flask and the QApplication to run in different threads, and I tried to do this.</p> <p>However I am not familiar with python and most probably I am doing something wrong below:</p> <pre class="lang-py prettyprint-override"><code>import sys
import asyncio

from PyQt6.QtGui import QIcon
from PyQt6.QtWidgets import (
    QSystemTrayIcon,
    QApplication,
    QWidget
)
from flask import Flask


async def gui():
    app = QApplication(sys.argv)
    w = QWidget()
    trayIcon = QSystemTrayIcon(QIcon(&quot;disconnected.png&quot;), w)
    trayIcon.show()
    print(&quot;running gui&quot;)
    return app.exec()


async def rest():
    app1 = Flask(__name__)

    @app1.route(&quot;/status/connected&quot;)
    def setStatusConnected():
        return &quot;I will signal gui to change icon to connected&quot;

    @app1.route(&quot;/status/disconnected&quot;)
    def setStatusDisconnected():
        return &quot;I will signal gui to change icon to disconnected&quot;

    print(&quot;creating rest endpoint&quot;)
    return app1.run()


async def main():
    return await asyncio.gather(gui(), rest())


if __name__ == &quot;__main__&quot;:
    asyncio.run(main())
</code></pre> <p>When I run the code above I see only &quot;running gui&quot; and I can see the tray icon being installed to the status bar. I do not see &quot;creating rest endpoint&quot;.</p> <p>If I comment out <code>return app.exec()</code>, I see &quot;creating rest endpoint&quot; and I can access the endpoints, but I don't see the tray icon.</p> <p>What am I missing?</p> <p>extra info: this is my <code>Pipfile</code> (Using <code>pipenv</code>)</p> <pre><code>[[source]]
url = &quot;https://pypi.org/simple&quot;
verify_ssl = true
name = &quot;pypi&quot;

[packages]
pyqt6 = &quot;*&quot;
flask = {extras = [&quot;async&quot;], version = &quot;*&quot;}

[dev-packages]

[requires]
python_version = &quot;3.10&quot;
</code></pre>
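For reference, `asyncio.gather` cannot interleave the two coroutines above, because both `app.exec()` and `app1.run()` are blocking calls that never yield to the event loop. A common workaround is to start one of them in a background daemon thread. A minimal, framework-free sketch of that pattern (`run_in_background` is an invented helper; the blocking callable plays the role of `app1.run`):

```python
import threading

def run_in_background(blocking_fn):
    """Start a blocking callable in a daemon thread so the main thread
    stays free for Qt's blocking event loop (app.exec())."""
    t = threading.Thread(target=blocking_fn, daemon=True)
    t.start()
    return t

# Usage sketch (assumed names from the question):
#   run_in_background(lambda: app1.run(port=5000))
#   sys.exit(app.exec())
```

The daemon flag makes the Flask thread exit automatically when the Qt main loop returns.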
<python><flask><pyqt><pyqt6>
2023-10-12 16:25:41
2
11,246
Marinos An
77,282,256
2,429,869
Why do these datetime conversions using pytz not land on hours before December of 1901?
<p>Perhaps I am not using the libraries correctly, or perhaps there was some kind of standard change on December 15, 1901 concerning world wide time keeping. But I stumbled across this odd behavior while working on a time related application.</p> <pre><code>sydney = pytz.timezone(&quot;Australia/Sydney&quot;)
tokyo = pytz.timezone(&quot;Japan&quot;)
new_york = pytz.timezone(&quot;US/Eastern&quot;)

dt1 = datetime(1901, 12, 14)
dt2 = datetime(1901, 12, 15)

print(&quot;-&quot; * 50)
print(sydney.localize(dt1).astimezone(pytz.UTC))
print(tokyo.localize(dt1).astimezone(pytz.UTC))
print(new_york.localize(dt1).astimezone(pytz.UTC))
print(&quot;-&quot; * 50)
print(sydney.localize(dt2).astimezone(pytz.UTC))
print(tokyo.localize(dt2).astimezone(pytz.UTC))
print(new_york.localize(dt2).astimezone(pytz.UTC))
print(&quot;-&quot; * 50)
</code></pre> <p>I expected this code to tell me the UTC equivalent of the midnight times in these different time zones. What I found odd was that for older datetimes before Dec 1901 the results were not even falling on the hour but had minute components. After Dec 1901 it seems every conversion ended up being on the hour. Here is the output of the code above:</p> <pre><code>--------------------------------------------------
1901-12-13 13:55:00+00:00
1901-12-13 14:41:00+00:00
1901-12-14 05:00:00+00:00
--------------------------------------------------
1901-12-14 14:00:00+00:00
1901-12-14 15:00:00+00:00
1901-12-15 05:00:00+00:00
--------------------------------------------------
</code></pre> <p>Wondering if I am not using a reliable technique or if this is actually correct due to some historical/political reason?</p>
<python><datetime><timezone><pytz>
2023-10-12 16:16:15
1
1,430
Andrew Allaire
77,282,208
8,382,028
Aggregate not working in prefetched queryset in Django
<p>I have a query in which I am trying to aggregate values, so I can calculate balances more quickly than by querying multiple times to get the values needed.</p> <p>Overall I simply want the code below to run, but the aggregation that seems to be what would allow that isn't working:</p> <pre><code>header_accounts = custom_report.account_tree_header.all()
for header_account in header_accounts:
    for regular_account in header_account.associated_regular_account_tree_accounts.all():
        gl_account = regular_account.associated_account_from_chart_of_accounts
        gl_entries = gl_account.range_gl  # range entries
        # this does not work below...
        prior_credit = gl_account.old_gl.prior_credit_amount
        prior_debit = gl_account.old_gl.prior_debit_amount
</code></pre> <p>When I run the query below with aggregate instead of annotate, I get an <code>AttributeError</code>: <code>'dict' object has no attribute '_add_hints'</code></p> <p><strong>How can I do this?</strong></p> <pre><code>custom_report = AccountTree.objects.select_related().prefetch_related(
    'account_tree_total',
    'account_tree_regular',
    Prefetch('account_tree_header', queryset=AccountTreeHeader.objects.select_related(
        'associated_account_from_chart_of_accounts',
        'associated_total_account_tree_account__associated_account_from_chart_of_accounts'
    ).prefetch_related(
        'associated_regular_account_tree_accounts',
        Prefetch('associated_regular_account_tree_accounts__associated_account_from_chart_of_accounts__general_ledger',
                 queryset=GeneralLedger.objects.select_related().filter(Q(
                     accounts_payable_line_item__property__pk__in=property_pks,
                     journal_line_item__property__pk__in=property_pks,
                     _connector=Q.OR,
                 ), date_entered__date__gte=start_date,
                    date_entered__date__lte=end_date).order_by('date_entered'),
                 to_attr='range_gl'),

        # ISSUE IS HERE....
        Prefetch('associated_regular_account_tree_accounts__associated_account_from_chart_of_accounts__general_ledger',
                 queryset=GeneralLedger.objects.select_related().filter(Q(
                     accounts_payable_line_item__property__pk__in=property_pks,
                     journal_line_item__property__pk__in=property_pks,
                     _connector=Q.OR,
                 ), date_entered__date__lte=start_date).aggregate(
                     prior_credit_amount=Sum('credit_amount'),
                     prior_debit_amount=Sum('debit_amount')),
                 to_attr='old_gl'),
    )),
).get(pk=custom_report.pk)
</code></pre> <p>As a note, in the traceback the error occurs in <code>.get(pk=custom_report.pk)</code></p>
<python><python-3.x><django><django-aggregation>
2023-10-12 16:07:35
1
3,060
ViaTech
77,282,169
412,655
In a Jupyter notebook, is it possible to configure it to automatically display a (decorated) function where it's defined?
<p>Suppose I have a decorator that adds a <code>_repr_html_</code> method to an object, and I use it on a function <code>foo</code>:</p> <pre class="lang-py prettyprint-override"><code>def add_repr_html(f):
    f._repr_html_ = lambda: f&quot;This is {f.__name__}!&quot;
    return f

@add_repr_html
def foo():
    ...
</code></pre> <p>If that is the content of the code cell in Jupyter, it won't display anything. However, if I add <code>foo</code> on a separate line afterward, it will display &quot;This is foo!&quot;.</p> <p>Is there a way to configure Jupyter so that putting <code>foo</code> on a separate line isn't necessary for it to be displayed? I would like it to automatically call the <code>_repr_html_</code> method without needing the extra line.</p> <p>(Note that this is a toy example. In my actual use case, the decorator instantiates a class with a <code>_repr_html_</code> method.)</p>
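For what it's worth, one hedged variant of the toy decorator above would display the object at decoration time rather than relying on Jupyter's last-expression display. This is a sketch under the assumption that `IPython.display.display` is importable in the notebook kernel; outside IPython it degrades to the original behavior:

```python
def add_repr_html(f):
    f._repr_html_ = lambda: f"This is {f.__name__}!"
    try:
        # Assumption: running under IPython/Jupyter, where display()
        # renders rich reprs immediately; no-op elsewhere.
        from IPython.display import display
        display(f)
    except ImportError:
        pass
    return f

@add_repr_html
def foo():
    ...
```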
<python><jupyter-notebook>
2023-10-12 16:02:10
1
4,147
wch
77,282,091
8,452,246
Change the color, marker size and name of single value - Plotly scatter
<p>I have a dataframe and I am using Plotly to draw a scatter plot. I want to change the color, size and name of just a single value there, e.g. datapoint &quot;one&quot;. I am not sure how to do this. I have been trying <code>for_each_trace</code>, aiming to change the marker shape, but it never works.</p> <pre><code>import plotly.express as px
import pandas as pd

df1 = pd.DataFrame(index=['one', 'two', 'three', 'four'],
                   data=[[3, 4], [2, 5], [2, 3], [6, 4]],
                   columns=['a', 'b'])

fig1 = px.scatter(df1, x=&quot;a&quot;, y=&quot;b&quot;, text=df1.index.astype(str))
fig1.update_traces(textposition='top center')
fig1.for_each_trace(
    lambda trace: trace.update(marker_symbol=&quot;square&quot;) if trace.name == 'one' else ())
</code></pre> <p><a href="https://i.sstatic.net/mIgak.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/mIgak.png" alt="enter image description here" /></a></p>
<python><plotly><plotly-express>
2023-10-12 15:51:43
1
477
Martin Yordanov Georgiev
77,282,055
4,710,409
Chatterbot- multiple custom adapters don't work
<p>I'm using the django integration.</p> <p>settings.py</p> <pre><code>CHATTERBOT = {
    'name': 'chatbot0',
    'storage_adapter': &quot;chatterbot.storage.SQLStorageAdapter&quot;,
    'logic_adapters': [
        'chatterbot.logic.BestMatch',

        # custom adapters
        'chatbot.adapters.adapter_1',
        'chatbot.adapters.adapter_2',
    ]
}
</code></pre> <p>But adapter_2 doesn't work unless I remove adapter_1, and vice versa. What is the problem?</p>
<python><django><chatterbot>
2023-10-12 15:42:58
1
575
Mohammed Baashar
77,282,006
12,502,424
Resolve incompatible packages for a Flask application
<p>Here's my requirements.txt</p> <pre><code>aiohttp==3.8.6
aiosignal==1.3.1
async-timeout==4.0.3
attrs==23.1.0
blinker==1.6.3
certifi==2023.7.22
charset-normalizer==3.3.0
click==8.1.7
Flask==1.1.2
Flask-Login==0.6.2
Flask-SQLAlchemy==2.5.1
frozenlist==1.4.0
greenlet==3.0.0
idna==3.4
itsdangerous==2.1.2
Jinja2==3.1.2
MarkupSafe==2.1.3
multidict==6.0.4
openai==0.28.1
psycopg2==2.9.9
python-dotenv==1.0.0
requests==2.31.0
SQLAlchemy==2.0.21
tqdm==4.66.1
typing_extensions==4.8.0
urllib3==2.0.6
Werkzeug==1.0.1
yarl==1.9.2
</code></pre> <p>I'm playing whack-a-mole between Flask, Werkzeug, MarkupSafe, Jinja2, and Flask-SQLAlchemy versions. The application seemed to work previously. However, I didn't keep a clean list of deployed packages, and I kept experimenting with various installs without ever checking a &quot;clean deployment.&quot;</p> <p>There must be a better way to figure this out. Please help!</p> <p>Issues I've run into:</p> <ol> <li>ImportError: cannot import name 'escape' from 'jinja2' =&gt; reinstalled flask, jinja2</li> <li>ImportError: cannot import name '_app_ctx_stack' from 'flask' =&gt; reinstalled flask-sqlalchemy and flask</li> <li>ImportError: cannot import name 'url_decode' from 'werkzeug.urls' =&gt; can't figure out the compatibility between the three</li> </ol>
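As a diagnostic aid when playing this kind of whack-a-mole (a sketch, not a fix): record exactly which versions are installed before each experiment. This stdlib-only helper queries installed distribution versions; `report_versions` is an invented name and the package list is illustrative. `pip check` is another quick way to surface declared dependency conflicts:

```python
from importlib.metadata import version, PackageNotFoundError

def report_versions(packages):
    """Return {package: installed version, or None if not installed}."""
    out = {}
    for name in packages:
        try:
            out[name] = version(name)
        except PackageNotFoundError:
            out[name] = None
    return out

print(report_versions(["flask", "werkzeug", "jinja2", "markupsafe", "flask-sqlalchemy"]))
```

Pasting that dict into a question or commit message gives a reproducible snapshot of the environment state.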
<python><flask><pip><jinja2><werkzeug>
2023-10-12 15:35:52
1
1,199
VeeDuvv
77,281,982
5,994,623
SQLAlchemy - Relationship with "indirect" Foreign Key
<h1>Context</h1> <p>I'm using SQLAlchemy (v2) to set up a database for an application, and I'm trying to model some relationships on my data. The relationships look similar to this diagram.</p> <p><a href="https://i.sstatic.net/roJ6G.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/roJ6G.png" alt="ER-Diagram" /></a></p> <p>So I have these hierarchical many-to-one relationships, where <code>Cluster</code>s belong to <code>SubSubtype</code>s, which in turn belong to a <code>Subtype</code>. <code>Sample</code>s are similar, but they might only have a <code>Subtype</code> and not a <code>SubSubtype</code>.</p> <h1>Problem</h1> <p>Now I have the problem that in my <code>cluster</code> table, I only want one reference to the <code>subtype_id</code> (preferably on <code>subsubtype.subtype_id</code>), and similarly in my <code>Sample</code> table I only want to reference <code>subtype_id</code> once.</p> <p>This should be no problem when specifying an <a href="https://docs.sqlalchemy.org/en/20/orm/join_conditions.html#specifying-alternate-join-conditions" rel="nofollow noreferrer">alternate join condition</a> or a <a href="https://docs.sqlalchemy.org/en/20/orm/join_conditions.html#specifying-alternate-join-conditions" rel="nofollow noreferrer">custom FK-relationship</a>, which I've tried.</p> <p>I have some code (can be found close to the bottom) that does that, but when I run it, it successfully creates the database schema, but fails on inserting the objects.
I get a <code>NoForeignKeysError</code>, and SQLAlchemy tells me that</p> <blockquote> <p>there are no foreign keys linking [the subtype and cluster] tables.</p> </blockquote> <p>and to</p> <blockquote> <p>Ensure that referencing columns are associated with a ForeignKey or ForeignKeyConstraint, or specify a 'primaryjoin' expression.</p> </blockquote> <p>And while indeed, there is no direct connection, I specified a <code>primaryjoin</code> expression <em>and</em> the <code>foreign_keys</code> to use (in various combinations, with and without).</p> <p>Also, when I subsequently visualize the database schema in the generated file, I get the following Diagram, which seems to suggest, that what I want should work.</p> <p><a href="https://i.sstatic.net/Ua4gd.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Ua4gd.png" alt="DataGrip Diagram" /></a></p> <p>This actually seems like a bug with SQLAlchemy to me (bad error message at the very least), but I've learned that things like these are usually user errors, so I'm coming here for second opinions first.</p> <p>I've put the full error message (excluding the generated SQL) at the end, since details tags don't seem to work on SO.</p> <p>I also tried to use the <code>hybrid_property</code> decorator for <code>subtype</code> (and/or <code>subsubtype</code>), which would work for <code>Cluster</code>, but afaics not for <code>Sample</code>, because I still don't see how I could get at the <code>SubSubtype</code>s without the</p> <h1>Question</h1> <p>How can I model these relationships in SQLAlchemy without duplicating the references, and preferably using a declarative style.</p> <p>Changing the schema would be okay iff it is necessary to achieve the goal.</p> <h2>Similar Questions</h2> <p>I've had a look at these other questions, and while somewhat similar, they don't really help me here.</p> <ul> <li><a href="https://stackoverflow.com/questions/20842756/sql-indirect-foreign-key">Sql - Indirect Foreign Key</a><br 
/> Different setup, not SQLAlchemy specific.</li> <li><a href="https://stackoverflow.com/questions/76970237/sqlalchemy-how-to-add-indirect-relationships-through-more-than-one-model">SQLAlchemy - How to add indirect relationships through more than one model?</a><br /> Somewhat related, but using old imperative style from v1.4 and no answer.</li> <li><a href="https://stackoverflow.com/questions/48958612/sqlalchemy-association-proxy-preventing-duplicate-entries">SQLAlchemy Association Proxy - Preventing Duplicate Entries</a><br /> Different problem and old-style imperative code.</li> <li><a href="https://stackoverflow.com/questions/43089073/sqlalchemy-relationships-no-foreign-key">SQLAlchemy Relationships No Foreign Key</a><br /> Similar error message, but different setup, and also old-style imperative code.</li> </ul> <p>I haven't found much more that seems relevant, but would be happy about someone pointing me to an applicable solution.</p> <h1>Code</h1> <p>Here's a (somewhat) minimal reproducible example:</p> <pre class="lang-py prettyprint-override"><code>from pathlib import Path
from typing import List, Optional

from sqlalchemy import ForeignKey, String, create_engine, event, Engine
from sqlalchemy.orm import Mapped, mapped_column, relationship, DeclarativeBase, Session


class Base(DeclarativeBase):
    pass


@event.listens_for(Engine, &quot;connect&quot;)
def sqlite_pragma_enforce_foreign_keys(dbapi_connection, _):
    &quot;&quot;&quot;
    This is necessary to enforce foreign key constraints on SQLite.
    Cf. `SQLAlchemy docs &lt;https://docs.sqlalchemy.org/en/20/dialects/sqlite.html#foreign-key-support&gt;`_.

    :param dbapi_connection: The database connection.
    :param _: connection_record?
    &quot;&quot;&quot;
    cursor = dbapi_connection.cursor()
    cursor.execute(&quot;PRAGMA foreign_keys=ON&quot;)
    cursor.close()


class Sample(Base):
    __tablename__ = &quot;sample&quot;

    id: Mapped[str] = mapped_column(String(16), primary_key=True)
    &quot;&quot;&quot;A unique identifier for the sample, aka. the &quot;scount&quot;.&quot;&quot;&quot;
    sequence: Mapped[str] = mapped_column()
    &quot;&quot;&quot;The actual genome data/consensus sequences. May only contain valid characters, cf. :meth:`_validate_sequence`.&quot;&quot;&quot;
    subtype_id: Mapped[str] = mapped_column(ForeignKey(&quot;subtype.id&quot;))
    &quot;&quot;&quot;The ID of the sub- or sub-subtype of this sample.&quot;&quot;&quot;
    subsubtype_id: Mapped[Optional[str]] = mapped_column(ForeignKey(&quot;subsubtype.id&quot;))
    &quot;&quot;&quot;The ID of the sub-subtype of this sample.&quot;&quot;&quot;

    subtype: Mapped[&quot;Subtype&quot;] = relationship(back_populates=&quot;samples&quot;)
    &quot;&quot;&quot;The :class:`Subtype` of this sample.&quot;&quot;&quot;
    subsubtype: Mapped[Optional[&quot;SubSubtype&quot;]] = relationship(back_populates=&quot;samples&quot;)
    &quot;&quot;&quot;The :class:`SubSubtype` of this sample.&quot;&quot;&quot;


class Subtype(Base):
    __tablename__ = &quot;subtype&quot;

    id: Mapped[str] = mapped_column(String(3), primary_key=True)
    &quot;&quot;&quot;The id of this subtype.&quot;&quot;&quot;

    subsubtypes: Mapped[List[&quot;SubSubtype&quot;]] = relationship()
    &quot;&quot;&quot;A list of :class:`SubSubtype` clades under this subtype.&quot;&quot;&quot;
    clusters: Mapped[List[&quot;Cluster&quot;]] = relationship()
    &quot;&quot;&quot;A list of :class:`Cluster` objects under this sub-subtype.&quot;&quot;&quot;
    samples: Mapped[List[Sample]] = relationship()
    &quot;&quot;&quot;All :class:`Sample` objects of this subtype.&quot;&quot;&quot;


class SubSubtype(Base):
    __tablename__ = &quot;subsubtype&quot;

    subtype_id: Mapped[str] = mapped_column(ForeignKey(&quot;subtype.id&quot;), primary_key=True)
    &quot;&quot;&quot;Sub-subtypes belong to a :class:`Subtype`, which is their &quot;parent&quot;, this is identified by the ``subtype_id``.&quot;&quot;&quot;
    id: Mapped[str] = mapped_column(String(16), primary_key=True)
    &quot;&quot;&quot;The sub-subtype specific part of the id.&quot;&quot;&quot;

    subtype: Mapped[Subtype] = relationship(back_populates=&quot;subsubtypes&quot;)
    &quot;&quot;&quot;Sub-subtypes have a :class:`Subtype` as parent.&quot;&quot;&quot;
    clusters: Mapped[List[&quot;Cluster&quot;]] = relationship()
    &quot;&quot;&quot;A list of :class:`Cluster` objects under this sub-subtype.&quot;&quot;&quot;
    samples: Mapped[List[Sample]] = relationship()
    &quot;&quot;&quot;All :class:`Sample` objects of this subtype.&quot;&quot;&quot;


class Cluster(Base):
    __tablename__ = &quot;cluster&quot;

    subtype_id: Mapped[str] = mapped_column(ForeignKey(&quot;subsubtype.subtype_id&quot;), primary_key=True)
    &quot;&quot;&quot;The ID of the sub- or sub-subtype of this cluster.&quot;&quot;&quot;
    subsubtype_id: Mapped[str] = mapped_column(ForeignKey(&quot;subsubtype.id&quot;), primary_key=True)
    &quot;&quot;&quot;The ID of the sub-subtype of this cluster.&quot;&quot;&quot;
    id: Mapped[str] = mapped_column(String(10), primary_key=True)
    &quot;&quot;&quot;The cluster specific part of the name/id, e.g., in case of &quot;A1_1&quot;, it would be &quot;1&quot;.&quot;&quot;&quot;

    subtype: Mapped[&quot;Subtype&quot;] = relationship(
        primaryjoin=subtype_id == Subtype.id,
        foreign_keys=[subtype_id],
        back_populates=&quot;clusters&quot;,
    )
    &quot;&quot;&quot;The :class:`Subtype` of this cluster.&quot;&quot;&quot;
    subsubtype: Mapped[&quot;SubSubtype&quot;] = relationship(back_populates=&quot;clusters&quot;)
    &quot;&quot;&quot;The :class:`SubSubtype` of this cluster.&quot;&quot;&quot;


if __name__ == '__main__':
    engine = create_engine(&quot;sqlite:///:memory:&quot;, echo=True)
    Base.metadata.create_all(engine)

    subtype = Subtype(id=&quot;A&quot;)
    subsubtype = SubSubtype(subtype_id=&quot;A&quot;, id=&quot;1&quot;)
    cluster = Cluster(subtype_id=&quot;A&quot;, subsubtype_id=&quot;1&quot;, id=&quot;1&quot;)

    with Session(engine) as session:
        session.add_all([subtype, subsubtype, cluster])
        session.commit()
</code></pre> <h1>Exception</h1> <p>Full output:</p> <pre><code>Traceback (most recent call last):
  File &quot;/home/fynn/Desktop/HIV/HIV-Clustering/venv/lib64/python3.11/site-packages/sqlalchemy/orm/relationships.py&quot;, line 2418, in _determine_joins
    self.primaryjoin = join_condition(
                       ^^^^^^^^^^^^^^^
  File &quot;/home/fynn/Desktop/HIV/HIV-Clustering/venv/lib64/python3.11/site-packages/sqlalchemy/sql/util.py&quot;, line 123, in join_condition
    return Join._join_condition(
           ^^^^^^^^^^^^^^^^^^^^^
  File &quot;/home/fynn/Desktop/HIV/HIV-Clustering/venv/lib64/python3.11/site-packages/sqlalchemy/sql/selectable.py&quot;, line 1358, in _join_condition
    raise exc.NoForeignKeysError(
sqlalchemy.exc.NoForeignKeysError: Can't find any foreign key relationships between 'subtype' and 'cluster'.

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File &quot;/home/fynn/.config/JetBrains/PyCharm2023.2/scratches/scratch_6.py&quot;, line 138, in &lt;module&gt;
    subtype = Subtype(id=&quot;A&quot;)
              ^^^^^^^^^^^^^^^
  File &quot;&lt;string&gt;&quot;, line 4, in __init__
  File &quot;/home/fynn/Desktop/HIV/HIV-Clustering/venv/lib64/python3.11/site-packages/sqlalchemy/orm/state.py&quot;, line 561, in _initialize_instance
    manager.dispatch.init(self, args, kwargs)
  File &quot;/home/fynn/Desktop/HIV/HIV-Clustering/venv/lib64/python3.11/site-packages/sqlalchemy/event/attr.py&quot;, line 487, in __call__
    fn(*args, **kw)
  File &quot;/home/fynn/Desktop/HIV/HIV-Clustering/venv/lib64/python3.11/site-packages/sqlalchemy/orm/mapper.py&quot;, line 4391, in _event_on_init
    instrumenting_mapper._check_configure()
  File &quot;/home/fynn/Desktop/HIV/HIV-Clustering/venv/lib64/python3.11/site-packages/sqlalchemy/orm/mapper.py&quot;, line 2386, in _check_configure
    _configure_registries({self.registry}, cascade=True)
  File &quot;/home/fynn/Desktop/HIV/HIV-Clustering/venv/lib64/python3.11/site-packages/sqlalchemy/orm/mapper.py&quot;, line 4199, in _configure_registries
    _do_configure_registries(registries, cascade)
  File &quot;/home/fynn/Desktop/HIV/HIV-Clustering/venv/lib64/python3.11/site-packages/sqlalchemy/orm/mapper.py&quot;, line 4240, in _do_configure_registries
    mapper._post_configure_properties()
  File &quot;/home/fynn/Desktop/HIV/HIV-Clustering/venv/lib64/python3.11/site-packages/sqlalchemy/orm/mapper.py&quot;, line 2403, in _post_configure_properties
    prop.init()
  File &quot;/home/fynn/Desktop/HIV/HIV-Clustering/venv/lib64/python3.11/site-packages/sqlalchemy/orm/interfaces.py&quot;, line 579, in init
    self.do_init()
  File &quot;/home/fynn/Desktop/HIV/HIV-Clustering/venv/lib64/python3.11/site-packages/sqlalchemy/orm/relationships.py&quot;, line 1636, in do_init
    self._setup_join_conditions()
  File &quot;/home/fynn/Desktop/HIV/HIV-Clustering/venv/lib64/python3.11/site-packages/sqlalchemy/orm/relationships.py&quot;, line 1881, in _setup_join_conditions
    self._join_condition = jc = JoinCondition(
                                ^^^^^^^^^^^^^^
  File &quot;/home/fynn/Desktop/HIV/HIV-Clustering/venv/lib64/python3.11/site-packages/sqlalchemy/orm/relationships.py&quot;, line 2305, in __init__
    self._determine_joins()
  File &quot;/home/fynn/Desktop/HIV/HIV-Clustering/venv/lib64/python3.11/site-packages/sqlalchemy/orm/relationships.py&quot;, line 2439, in _determine_joins
    raise sa_exc.NoForeignKeysError(
sqlalchemy.exc.NoForeignKeysError: Could not determine join condition between parent/child tables on relationship Subtype.clusters - there are no foreign keys linking these tables. Ensure that referencing columns are associated with a ForeignKey or ForeignKeyConstraint, or specify a 'primaryjoin' expression.
</code></pre>
<python><database><sqlite><database-design><sqlalchemy>
2023-10-12 15:31:45
1
820
Fynn
77,281,898
6,930,441
Pandas dataframe columns unexpectedly out of order
<p>I encountered a rather unexpected result today with a pandas dataframe.</p> <p>My script takes genome sequence data (in fasta format) as input and calculates several basic metrics. I store those metrics in a pandas dataframe.</p> <p>The script starts by defining an empty dataframe with headers:</p> <pre><code>stats_df = pd.DataFrame(columns=['Assembly','Size','#Contigs','#Contigs &gt; 3000','N50','Longest_contig']) </code></pre> <p>The main body of the script then loops through all genome files and calculates all the metrics listed in the headers. That all works just fine. I then add the metrics for the genome assembly to the stats_df dataframe using these two lines:</p> <pre><code>new_df_row = pd.DataFrame({'Assembly':[assembly_name],'Size':[assembly_size_in_Mbp],'#Contigs':[num_of_contigs],'#Contigs &gt; 3000':[greater_than_3000_count],'N50':[n50],'Longest_contig':[longest_contig]}) stats_df = pd.concat([stats_df,new_df_row],ignore_index=True) </code></pre> <p>The unexpected behaviour is that when I view stats_df the columns are ordered: #Contigs, #Contigs &gt; 3000, Assembly, Longest_contig, N50, Size.</p> <p>This is different to the order of the entries in the empty dataframe at the start and in each new row added. The metrics are all in the right place; I'm just wondering what causes the columns to move around like that?</p>
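Older pandas releases could sort column labels alphabetically when concatenating frames whose columns differed, which matches the alphabetical order described above. Whatever the cause, reindexing by the original header list pins the order explicitly. A minimal sketch (column list shortened, values made up):

```python
import pandas as pd

cols = ['Assembly', 'Size', '#Contigs', 'N50', 'Longest_contig']
stats_df = pd.DataFrame(columns=cols)

# one row of (hypothetical) metrics for a single assembly
new_df_row = pd.DataFrame({'Assembly': ['asm1'], 'Size': [4.2],
                           '#Contigs': [120], 'N50': [51000],
                           'Longest_contig': [98000]})
stats_df = pd.concat([stats_df, new_df_row], ignore_index=True)

# pin the column order explicitly, whatever order concat produced
stats_df = stats_df.reindex(columns=cols)
```

The `reindex(columns=...)` call is cheap and makes the layout independent of the pandas version in use.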
<python><pandas><dataframe>
2023-10-12 15:19:53
1
456
Rainman
77,281,895
8,595,535
Summing dataframe between dates that are located in another dataframe
<p>Say I have two pandas dataframes df1 and df2 as follows:</p> <pre><code>import pandas as pd df1 = pd.DataFrame({'Name': ['A', 'B', 'C'], 'Date1':['2023-01-01', '2023-01-02', '2023-01-03'], 'Date2':['2023-01-03', '2023-01-04', '2023-01-05']}) df1.loc[:, ['Date1', 'Date2']] = df1.loc[:, ['Date1', 'Date2']].apply(pd.to_datetime, errors='coerce') df2 = pd.DataFrame({'Date': ['2023-01-01', '2023-01-02', '2023-01-03', '2023-01-04', '2023-01-05'], 'A': [1, 2, 3, 4, 5], 'B': [6, 7, 8, 9, 10], 'C': [11,12,13,14,15]}) df2['Date'] = pd.to_datetime(df2['Date']) </code></pre> <p>Is there an efficient way to add a column to df1 named 'Sum' that contains the sum of each name's data in df2 between the Date1 and Date2 given in df1 (excluding Date1 and including Date2)?</p> <p>Desired result should be:</p> <pre><code>df_result = pd.DataFrame({'Name': ['A', 'B', 'C'], 'Date1':['2023-01-01', '2023-01-02', '2023-01-03'], 'Date2':['2023-01-03', '2023-01-04', '2023-01-05'], 'Sum': [2+3, 8+9, 14+15]}) </code></pre>
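One straightforward way to compute the requested half-open window sum is a row-wise `apply` over `df1`, masking `df2` by date and picking the column named in `Name`. This is a sketch of that approach (fine for small frames; for very large ones a merge-based solution would scale better):

```python
import pandas as pd

df1 = pd.DataFrame({'Name': ['A', 'B', 'C'],
                    'Date1': ['2023-01-01', '2023-01-02', '2023-01-03'],
                    'Date2': ['2023-01-03', '2023-01-04', '2023-01-05']})
df1[['Date1', 'Date2']] = df1[['Date1', 'Date2']].apply(pd.to_datetime)

df2 = pd.DataFrame({'Date': pd.to_datetime(['2023-01-01', '2023-01-02',
                                            '2023-01-03', '2023-01-04',
                                            '2023-01-05']),
                    'A': [1, 2, 3, 4, 5],
                    'B': [6, 7, 8, 9, 10],
                    'C': [11, 12, 13, 14, 15]})

dated = df2.set_index('Date')


def window_sum(row):
    # exclusive of Date1, inclusive of Date2, per the question
    mask = (dated.index > row['Date1']) & (dated.index <= row['Date2'])
    return dated.loc[mask, row['Name']].sum()


df1['Sum'] = df1.apply(window_sum, axis=1)
# df1['Sum'] is now [5, 17, 29], matching the desired 2+3, 8+9, 14+15
```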
<python><pandas>
2023-10-12 15:19:35
2
309
CTXR
77,281,875
1,914,781
Add new row with difference of last and first row
<p>I would like to create a summary row to show the difference between the last and first rows.</p> <p>e.g.</p> <pre><code>import pandas as pd data = [ ['A',1,5], ['B',2,4], ['C',3,3], ['D',4,2], ['E',5,1], ['F',6,0] ] df = pd.DataFrame(data,columns=['name','x','y']) print(df) </code></pre> <p>Output dataframe should be:</p> <pre><code>: name x y : 0 A 1 5 : 1 B 2 4 : 2 C 3 3 : 3 D 4 2 : 4 E 5 1 : 5 F 6 0 : 6 diff 5 -5 </code></pre> <p>What's the best way to do that?</p>
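One compact way to append such a summary row is to subtract the first row from the last over the numeric columns and assign the result with `df.loc[len(df)]`. A sketch using the question's data:

```python
import pandas as pd

data = [['A', 1, 5], ['B', 2, 4], ['C', 3, 3],
        ['D', 4, 2], ['E', 5, 1], ['F', 6, 0]]
df = pd.DataFrame(data, columns=['name', 'x', 'y'])

# last row minus first row over the numeric columns (everything but 'name')
diff = df.iloc[-1, 1:] - df.iloc[0, 1:]
df.loc[len(df)] = ['diff', *diff]
```

Note this mixes a string label into the row, so the summary row is best added only for display, after any numeric processing is done.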
<python><pandas>
2023-10-12 15:17:27
4
9,011
lucky1928
77,281,818
1,616,528
find_nearest_contour is deprecated. Now what?
<p>I'm using Matplotlib contours to explore a 2d map. I'm using <a href="https://matplotlib.org/stable/api/contour_api.html#matplotlib.contour.ContourSet.find_nearest_contour" rel="nofollow noreferrer"><code>contour.find_nearest_contour</code></a> to get the range of x and y of the contour that passes close to a point <code>x0, y0</code>, as follows:</p> <pre class="lang-py prettyprint-override"><code>cs = fig.gca().contour(x, y, image, [level]) cont, seg, idx, xm, ym, d2 = cs.find_nearest_contour(x0, y0, pixel=False) min_x = cs.allsegs[cont][seg][:, 0].min() max_x = cs.allsegs[cont][seg][:, 0].max() min_y = cs.allsegs[cont][seg][:, 1].min() max_y = cs.allsegs[cont][seg][:, 1].max() </code></pre> <blockquote> <p>cont, seg, idx, xm, ym, d2 = cs.find_nearest_contour(x0, y0, pixel=False)</p> </blockquote> <p>Now Matplotlib v3.8 is throwing a <code>MatplotlibDeprecationWarning</code>, but I can't find any document that explains how to get the same functionality.</p> <p>Note that a given contour level can create multiple segments, and I also need what segment is closer to my point. I need <code>seg</code> in my line of code, practically. This is not shared from the private method <code>_find_nearest_contour</code> which was a very good candidate for replacement.</p>
<python><matplotlib><contour><deprecation-warning>
2023-10-12 15:09:23
1
329
matteo
77,281,614
1,734,097
python duplicate function parameter and execution
<p>Help, I have the following code.</p> <p>I ran <code>gspread_connect()</code> and found no issues. The problem is I don't want to execute <code>get_data_gsheet()</code> by supplying the parameters that are required by <code>gspread_connect()</code> in</p> <p><code>gc = gspread_connect(config_section_name,config_section_var,debug_mode=debug_mode)</code></p> <p>from the following code:</p> <pre><code>def gspread_connect(config_section_name,config_section_var,debug_mode=False): gc = None # json_key = os.path.dirname(os.getcwd())+'\\keyz\gsheet-automations-383508-f832c5a2f142.json' json_key = os.path.join(os.path.join(os.path.dirname(os.path.dirname(__file__)),'creds'),get_config_value(config_section_name,config_section_var)) print_text_line(1,&quot;json key: {} is exist? {}&quot;.format(json_key,os.path.exists(json_key)),debug_mode) scope = ['https://spreadsheets.google.com/feeds','https://www.googleapis.com/auth/drive'] if os.path.exists(json_key): print_text_line(1,&quot;json key exists. Creating credentials...&quot;) credentials = ServiceAccountCredentials.from_json_keyfile_name(json_key, scope) print_text_line(1,&quot;credentials created. Authorizing GSpread...&quot;) try: gc = gspread.authorize(credentials) print('Gspread authorized') except Exception as e: print_text_line(1,&quot;Authorization with gspread failed.\n{}&quot;.format(e)) else: print_text_line(1,'Failed to get credentials. 
Key is not found.\nLocation: {}'.format(json_key)) return gc def get_data_gsheet(file_id,worksheet_name,debug_mode=False): df = pd.DataFrame() try: gc = gspread_connect(config_section_name,config_section_var,debug_mode=debug_mode) sh = gc.open_by_key(file_id) try: ws = sh.worksheet(worksheet_name) print_text_line(1,&quot;Getting data from '{}' in '{}'...&quot;.format(worksheet_name,file_id),debug_mode) data = ws.get_all_records() df = pd.DataFrame(data) print(&quot;Getting data completed.\n Total row: {} row(s)&quot;.format(df.shape[0])) print_text_line(1,&quot;Sample data:\n{}&quot;.format(df.head()),debug_mode) except Exception as e: print(&quot;Failed to get data from '{}'&quot;.format(worksheet_name)) print(e) except Exception as e: print(&quot;Failed to connect with file_id '{}'&quot;.format(file_id)) print(e) return df </code></pre> <p>how do I do that?</p>
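One common pattern for this is `functools.partial`: bind the configuration arguments once at module level, then call the bound function everywhere else without repeating them. The sketch below uses a dummy stand-in for `gspread_connect` (the real one returns an authorized gspread client, and `"gsheet"`/`"key_file"` are made-up config values):

```python
from functools import partial


# stand-in for the question's gspread_connect; the real one returns a
# gspread client rather than a tuple
def gspread_connect(config_section_name, config_section_var, debug_mode=False):
    return (config_section_name, config_section_var, debug_mode)


# bind the config arguments once, up front
connect = partial(gspread_connect, "gsheet", "key_file")


def get_data_gsheet(file_id, worksheet_name, debug_mode=False):
    # no config parameters needed at the call site any more
    gc = connect(debug_mode=debug_mode)
    return gc


gc = get_data_gsheet("some-file-id", "Sheet1")
```

An alternative with the same effect is wrapping both functions in a small class that stores `config_section_name` and `config_section_var` in `__init__`.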
<python>
2023-10-12 14:43:43
0
1,099
Cignitor
77,281,582
14,829,523
Sort pandas df based on constraints
<p>I have the following dummy df.</p> <pre><code>import pandas as pd import numpy as np df = pd.DataFrame({'car': ['Pickup', 'Racer', 'Lorry', 'Luxury', 'Cabrio', 'Bicycle', 'Truck'], 'old_owner': ['1', '1', '1', '2', '2', '3', '1'], 'new_owner': ['2', '3', '2', '1', '1', '1', '1']}) print(df) &gt;&gt;&gt; car old_owner new_owner 0 Pickup 1 2 1 Racer 1 3 2 Lorry 1 2 3 Luxury 2 1 4 Cabrio 2 1 5 Bicycle 3 1 6 Truck 1 1 </code></pre> <p>The dummy df shows how I reassigned cars. Think of the numbers as account numbers. The numbers could be 14534 or 12131, so pretty random, not just 1, 2, 3, 4, ... Unfortunately, it is not sorted how I want it to be. I reassigned cars either to 1 or to &quot;non-1's&quot;. Now I want to sort it such that before I take away a car from non-1's I need to give them one from 1. In other words, non-1's should never have fewer cars at any moment than they had originally. 1 can have one less car than originally at any moment, until 1 gets another one assigned. Sorted based on that, I would like the dummy df to end up looking like this:</p> <pre><code> car old_owner new_owner 0 Pickup 1 2 1 Luxury 2 1 2 Racer 1 3 3 Bicycle 3 1 4 Lorry 1 2 5 Cabrio 2 1 6 Truck 1 1 </code></pre>
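This reads less like a sort with a key and more like a scheduling problem, so one approach is a greedy simulation: track each owner's car balance relative to their starting count and, at every step, take the first transfer that keeps the constraints satisfied. A sketch (it reproduces the desired ordering on the dummy data, but a greedy first-fit is not guaranteed optimal for every input):

```python
import pandas as pd

df = pd.DataFrame({'car': ['Pickup', 'Racer', 'Lorry', 'Luxury',
                           'Cabrio', 'Bicycle', 'Truck'],
                   'old_owner': ['1', '1', '1', '2', '2', '3', '1'],
                   'new_owner': ['2', '3', '2', '1', '1', '1', '1']})

balance = {}        # cars held relative to each owner's original count
order = []
remaining = list(df.itertuples(index=False))
while remaining:
    for i, row in enumerate(remaining):
        old = row.old_owner
        # owner '1' may dip one car below its original count; any other
        # owner must already hold a spare car before giving one away
        if balance.get(old, 0) >= (0 if old == '1' else 1):
            balance[old] = balance.get(old, 0) - 1
            balance[row.new_owner] = balance.get(row.new_owner, 0) + 1
            order.append(row)
            del remaining[i]
            break
    else:
        raise ValueError('no feasible next transfer exists')

sorted_df = pd.DataFrame(order).reset_index(drop=True)
```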
<python><pandas>
2023-10-12 14:39:22
2
468
Exa
77,281,556
8,332,501
Python where to mock complexe case
<p>I have a complex mocking case where I don't know where to patch in Python.</p> <p>file <strong>supervised_model.py</strong></p> <pre class="lang-py prettyprint-override"><code>from sklearn.ensemble import RandomForestClassifier class SModel: def __init__(self) -&gt; None: self.model = RandomForestClassifier() </code></pre> <p>file <strong>model.py</strong></p> <pre class="lang-py prettyprint-override"><code>from supervised_model import SModel class Model: def __init__(self) -&gt; None: self.smodel = SModel() </code></pre> <p>file <strong>process.py</strong></p> <pre class="lang-py prettyprint-override"><code>from model import Model class Processer: def __init__(self) -&gt; None: self.model = Model() </code></pre> <p>Finally, my test case is the following:</p> <pre class="lang-py prettyprint-override"><code>from process import Processer from unittest import mock @mock.patch(&quot;what to patch ??? for RandomForestClassifier&quot;) def test_processer(patched_rf): patched_rf.return_value.predict_proba.return_value = &quot;foo&quot; processer = Processer() assert processer.model.smodel.model.predict_proba([[1., 2.], [3., 4.]]) == &quot;foo&quot; </code></pre> <p>I have already tried:</p> <p><code>@patch(&quot;RandomForestClassifier&quot;)</code> and <code>@patch(&quot;smodel.RandomForestClassifier&quot;)</code></p> <p>But neither of these works.</p> <p>Thanks in advance</p>
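The usual rule with `mock.patch` is to patch the name in the namespace where it is *looked up*, not where it is defined, which here would be `"supervised_model.RandomForestClassifier"`. The sketch below demonstrates the principle self-contained by registering a tiny stand-in module in `sys.modules` instead of the question's real files:

```python
import sys
import types
from unittest import mock

# a tiny stand-in for the question's supervised_model.py, registered in
# sys.modules so the example needs no files on disk
supervised_model = types.ModuleType("supervised_model")
exec(
    "class RandomForestClassifier:\n"
    "    def predict_proba(self, X):\n"
    "        return 'real'\n"
    "\n"
    "class SModel:\n"
    "    def __init__(self):\n"
    "        self.model = RandomForestClassifier()\n",
    supervised_model.__dict__,
)
sys.modules["supervised_model"] = supervised_model

# patch the name where SModel looks it up, not in sklearn.ensemble
with mock.patch("supervised_model.RandomForestClassifier") as patched_rf:
    patched_rf.return_value.predict_proba.return_value = "foo"
    smodel = supervised_model.SModel()
    result = smodel.model.predict_proba([[1.0, 2.0], [3.0, 4.0]])
```

With the real files, `@mock.patch("supervised_model.RandomForestClassifier")` on `test_processer` should behave the same way, because `Processer` eventually instantiates `SModel`, which resolves the patched name.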
<python><mocking>
2023-10-12 14:37:28
0
521
thetradingdogdj
77,281,465
7,505,228
Form a &quot;chain&quot; of relationships through SQLAlchemy
<p>Say I have a database where I want to model a many-to-many relationship between users and the websites they have access to.</p> <p>I have the tables <code>users</code> and <code>websites</code>, and I create a third <code>users_have_websites</code> table with two foreign keys to model this many-to-many relationship.</p> <p>With SQLAlchemy, this looks something like this:</p> <pre class="lang-py prettyprint-override"><code>from sqlalchemy.orm import DeclarativeBase, relationship from sqlalchemy import Column, ForeignKey, Integer class Base(DeclarativeBase): pass class User(Base): __tablename__ = &quot;users&quot; id = Column(Integer, primary_key=True) link = relationship(&quot;UserHasWebsite&quot;, back_populates=&quot;user&quot;) ... # Other columns class Website(Base): __tablename__ = &quot;websites&quot; id = Column(Integer, primary_key=True) link = relationship(&quot;UserHasWebsite&quot;, back_populates=&quot;website&quot;) ... class UserHasWebsite(Base): __tablename__ = &quot;users_have_websites&quot; id = Column(Integer, primary_key=True) user_id = Column(Integer, ForeignKey(&quot;users.id&quot;)) website_id = Column(Integer, ForeignKey(&quot;websites.id&quot;)) user = relationship(&quot;User&quot;, back_populates=&quot;link&quot;) website = relationship(&quot;Website&quot;, back_populates=&quot;link&quot;) </code></pre> <p>I can get a list of <code>Website</code> instances linked to a user by calling <code>[link.website for link in user.link]</code>, but I was wondering if it is possible to declare this &quot;chained&quot; relationship in the definition of the class, so that I could directly access an attribute <code>user.websites</code>.</p>
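SQLAlchemy ships an extension for exactly this chaining: `association_proxy("link", "website")` exposes a `user.websites` collection that hops through the association rows. A minimal sketch (FK columns added so the models are runnable; most other columns omitted):

```python
from sqlalchemy import Column, ForeignKey, Integer, create_engine
from sqlalchemy.ext.associationproxy import association_proxy
from sqlalchemy.orm import Session, declarative_base, relationship

Base = declarative_base()


class User(Base):
    __tablename__ = "users"
    id = Column(Integer, primary_key=True)
    link = relationship("UserHasWebsite", back_populates="user")
    # hops through `link` and collects each association row's `.website`
    websites = association_proxy("link", "website")


class Website(Base):
    __tablename__ = "websites"
    id = Column(Integer, primary_key=True)
    link = relationship("UserHasWebsite", back_populates="website")


class UserHasWebsite(Base):
    __tablename__ = "users_have_websites"
    id = Column(Integer, primary_key=True)
    user_id = Column(Integer, ForeignKey("users.id"))
    website_id = Column(Integer, ForeignKey("websites.id"))
    user = relationship("User", back_populates="link")
    website = relationship("Website", back_populates="link")


engine = create_engine("sqlite:///:memory:")
Base.metadata.create_all(engine)
with Session(engine) as session:
    site = Website(id=1)
    user = User(id=1, link=[UserHasWebsite(website=site)])
    session.add(user)
    session.commit()
    sites = list(user.websites)   # the chained attribute in action
```

If the association table carried no extra columns at all, a plain `relationship("Website", secondary="users_have_websites")` would be the even simpler alternative.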
<python><sqlalchemy>
2023-10-12 14:24:34
1
2,289
LoicM
77,281,448
3,380,209
How to get "xy" of a transformed polygon
<p><code>poly</code> is a given polygon.</p> <pre><code>&gt;&gt;&gt; from matplotlib.figure import Figure &gt;&gt;&gt; import matplotlib as mpl &gt;&gt;&gt; fig = Figure(figsize=(10, 8)) &gt;&gt;&gt; ax = fig.add_subplot() &gt;&gt;&gt; ax.add_patch(poly) &gt;&gt;&gt; poly &lt;matplotlib.patches.Polygon object at 0x7fbf5f313a90&gt; &gt;&gt;&gt; poly.xy array([[ 24.56, -32.63], [ 30.36, -7.01], [ 24.56, -32.63]]) &gt;&gt;&gt; t = mpl.transforms.Affine2D().translate(2, 2) &gt;&gt;&gt; poly.set_transform(t + ax.transData) &gt;&gt;&gt; poly.xy array([[ 24.56, -32.63], [ 30.36, -7.01], [ 24.56, -32.63]]) &gt;&gt;&gt; </code></pre> <p>The polygon has been transformed, but it does not affect its <code>xy</code> attribute.</p> <p>How can I get the <code>xy</code> of the transformed polygon?</p>
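`set_transform()` never rewrites the stored vertices, so `xy` keeps the original data. One way to get the transformed coordinates is to apply the affine part yourself via `t.transform(...)` (leaving out `ax.transData`, which would map on to display pixels). A sketch with a made-up polygon:

```python
import numpy as np
import matplotlib as mpl
from matplotlib.patches import Polygon

poly = Polygon([[24.56, -32.63], [30.36, -7.01]], closed=True)
t = mpl.transforms.Affine2D().translate(2, 2)
poly.set_transform(t)  # affects drawing, not the stored vertices

# apply the affine transform to the raw vertices to get the moved points
xy_transformed = t.transform(poly.get_xy())
```

Appending `ax.transData` as well would instead yield display (pixel) coordinates, which is usually not what is wanted here.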
<python><matplotlib>
2023-10-12 14:22:35
2
3,100
albar
77,281,211
8,201,655
Manually Add Source Files to Conda-Build Package
<p>I am building a conda package for a pure python application. The application has a setup.py file that the build.sh file uses to locate the packages that should go into the build. This works well for all of the normal packages.</p> <p>Now, I have a &quot;nested&quot; package that I also want to include in the build. It isn't picked up by the setuptools find_packages() function, and when I tried to add it to the array I got an exception.</p> <p>The location is src/layers/core_layer where core_layer is an importable python package. I figured that my best option would be to simply copy the core_layer directory into the build PREFIX in the build.sh file, however I have not been able to get it to work.</p> <p>Running the command <code>cp -r $SRC_DIR/src/layers/core_layer $PREFIX/lib/python${PY_VER}/site-packages</code> eventually ends in an error during the conda-build command with no useful output:</p> <blockquote> <p>subprocess.CalledProcessError: Command '['/bin/bash', '-o', 'errexit', '/home/ec2-user/anaconda3/envs/X/conda-bld/X_1697117436940/work/conda_build.sh']' returned non-zero exit status 1.</p> </blockquote> <p>I verified that both of the paths actually exist. Am I doing something wrong? What is the recommended way to manually copy files into the build directory to be included in the package artifact?</p> <p>Thanks</p>
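Two things may help here: conda-build exports `SP_DIR` pointing directly at the env's site-packages (avoiding hand-building the `python${PY_VER}` path), and `set -x` in build.sh makes the actual failing command visible instead of the bare exit status 1. A hedged build.sh sketch; the `mktemp` fallbacks and the `touch`ed file exist only so it can run standalone outside conda-build:

```shell
#!/usr/bin/env bash
# -x echoes each command so the real failing line appears in the log
set -euxo pipefail

# conda-build sets SRC_DIR and SP_DIR; the fallbacks are for standalone runs
SRC_DIR="${SRC_DIR:-$(mktemp -d)}"
SP_DIR="${SP_DIR:-$(mktemp -d)}"
mkdir -p "$SRC_DIR/src/layers/core_layer"
touch "$SRC_DIR/src/layers/core_layer/__init__.py"

# copy the nested package straight into site-packages
mkdir -p "$SP_DIR"
cp -r "$SRC_DIR/src/layers/core_layer" "$SP_DIR/"
```

An alternative that avoids manual copying entirely is declaring the package in setup.py with `package_dir={"core_layer": "src/layers/core_layer"}` so `pip install .` in build.sh picks it up.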
<python><continuous-integration><conda><setuptools><conda-build>
2023-10-12 13:51:53
0
977
Jared M
77,281,193
5,759,295
Get data output from Sagemaker training step before pipeline executes
<p>I'm saving output data as a part of my training step in my sagemaker pipeline. Some of this data is later used in another step for evaluation (not for evaluating the model). Is there any way I can get the path before the execution? A pipeline variable is good enough. Just anything that lets me point to the s3 data path for later use as a ProcessingInput. Example:</p> <pre><code>estimator = HuggingFace( py_version=&quot;py310&quot;, entry_point=&quot;entrypoint.py&quot;, source_dir=os.path.join(&quot;code&quot;, &quot;nlp&quot;, &quot;train&quot;), transformers_version=&quot;4.28.1&quot;, pytorch_version=&quot;2.0.0&quot;, sagemaker_session=session, role=role, instance_count=1, instance_type=&quot;ml.p3.2xlarge&quot;, ) step_train = TrainingStep( name=&quot;TrainHuggingFaceModel&quot;, estimator=estimator, inputs={ &quot;data&quot;: TrainingInput(s3_data=&quot;PathToData&quot;), }, cache_config=CACHE_CONFIG, ) </code></pre> <p>From the training step, I would like to get the training output. If this does not work, any other suggestions? :)</p>
<python><amazon-sagemaker>
2023-10-12 13:49:22
1
558
Carl Rynegardh