Dataset schema (column: type, min to max):
- QuestionId: int64, 74.8M to 79.8M
- UserId: int64, 56 to 29.4M
- QuestionTitle: string, 15 to 150 chars
- QuestionBody: string, 40 to 40.3k chars
- Tags: string, 8 to 101 chars
- CreationDate: string (date), 2022-12-10 09:42:47 to 2025-11-01 19:08:18
- AnswerCount: int64, 0 to 44
- UserExpertiseLevel: int64, 301 to 888k
- UserDisplayName: string, 3 to 30 chars
77,371,478
159,072
How can I convert a Union type into a tensor type?
<p>I want to convert a tab-delimited text into a 2D tensor object so that I can feed the data into a CNN.</p> <p>What is the proper way to do this?</p> <p>I wrote the following:</p> <pre><code>from typing import List, Union, cast import tensorflow as tf CellType = Union[str, float, int, bool] RowType = List[CellType] # Mapping Python types to TensorFlow data types TF_DATA_TYPES = { str: tf.string, float: tf.float32, int: tf.int32, bool: tf.bool } def convert_string_to_tensorflow_object(data_string): # Split the string into lines linesStringList1d: List[str] = data_string.strip().split('\n') # Split each line into columns dataStringList2d: List[List[str]] = [] for line in linesStringList1d: rowItem: List[str] = line.split(' ') dataStringList2d.append(rowItem) # Convert the data to TensorFlow tensors listOfRows: List[RowType] = [] for rowItem in dataStringList2d: oneRow: RowType = [] for stringItem in rowItem: oneRow.append(cast(CellType, stringItem)) listOfRows.append(oneRow) # Get the TensorFlow data type based on the Python type of CellType tf_data_type = TF_DATA_TYPES[type(CellType)] listOfRows = tf.constant(listOfRows, dtype=tf_data_type) # Create a TensorFlow dataset return listOfRows if __name__ == &quot;__main__&quot;: # Example usage data_string: str = &quot;&quot;&quot; 1 ASN C 7.042 9.118 0.000 1 1 1 1 1 0 2 LEU H 5.781 5.488 7.470 0 0 0 0 1 0 3 THR H 5.399 5.166 6.452 0 0 0 0 0 0 4 GLU H 5.373 4.852 6.069 0 0 0 0 1 0 5 LEU H 5.423 5.164 6.197 0 0 0 0 2 0 &quot;&quot;&quot; tensorflow_dataset = convert_string_to_tensorflow_object(data_string) print(tensorflow_dataset) </code></pre> <p>Output:</p> <pre><code>C:\Users\pc\AppData\Local\Programs\Python\Python311\python.exe C:/git/heca_v2~~2/src/cnn_lib/convert_string_to_tensorflow_object.py Traceback (most recent call last): File &quot;C:\git\heca_v2~~2\src\cnn_lib\convert_string_to_tensorflow_object.py&quot;, line 51, in &lt;module&gt; tensorflow_dataset = convert_string_to_tensorflow_object(data_string) 
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File &quot;C:\git\heca_v2~~2\src\cnn_lib\convert_string_to_tensorflow_object.py&quot;, line 34, in convert_string_to_tensorflow_object tf_data_type = TF_DATA_TYPES[type(CellType)] ~~~~~~~~~~~~~^^^^^^^^^^^^^^^^ KeyError: &lt;class 'typing._UnionGenericAlias'&gt; Process finished with exit code 1 </code></pre> <p>Can I resolve the error?</p>
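One way past the `KeyError`, sketched under two assumptions not stated in the question: the goal is a numeric `float32` feature tensor, and non-numeric cells (the residue names like `ASN`, `LEU`) should be dropped. The core point is that `typing.Union` is only a static annotation, so the runtime dtype has to be inferred per value rather than looked up via `type(CellType)`:

```python
# Minimal sketch (assumptions: numeric tensor wanted, string cells dropped).
def parse_numeric_rows(data_string):
    rows = []
    for line in data_string.strip().splitlines():
        numeric = []
        for field in line.split():
            try:
                numeric.append(float(field))   # infer dtype per value
            except ValueError:
                pass                           # skip string cells such as "ASN"
        rows.append(numeric)
    return rows

rows = parse_numeric_rows("1 ASN C 7.042 9.118\n2 LEU H 5.781 5.488")
# tf.constant(rows, dtype=tf.float32) would then build the 2D tensor,
# with the dtype chosen explicitly instead of via TF_DATA_TYPES[type(CellType)].
```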
<python><tensorflow><union-types>
2023-10-27 04:19:38
0
17,446
user366312
77,371,442
1,277,865
M1 MacBook cannot execute python after running ln -s /usr/bin/python3 /usr/local/bin/python
<pre><code>luis@LuisdeMacBook-Pro ~ % python zsh: command not found: python luis@LuisdeMacBook-Pro ~ % ln -s /usr/bin/python3 /usr/local/bin/python ln: /usr/local/bin/python: Permission denied luis@LuisdeMacBook-Pro ~ % sudo ln -s /usr/bin/python3 /usr/local/bin/python luis@LuisdeMacBook-Pro ~ % python python: error: Failed to locate 'python'. xcode-select: Failed to locate 'python', requesting installation of command line developer tools. </code></pre> <p>python3 works fine, but every time I run the ln -s command above, the prompt below appears, even though the command line developer tools report a successful install! <a href="https://i.sstatic.net/jdqiv.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/jdqiv.png" alt="enter image description here" /></a></p>
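A hedged sketch of a fix: on macOS, `/usr/bin/python3` is Apple's trampoline binary, so a symlink to it still routes through `xcode-select`. Linking the interpreter it actually resolves to (or using a per-user alias) avoids the prompt; paths below are typical, verify them locally:

```shell
# Resolve past the /usr/bin/python3 trampoline to the real interpreter.
real_python="$(python3 -c 'import sys; print(sys.executable)')"
echo "python3 actually runs: $real_python"

# Option 1: link the real binary instead of the trampoline (path assumed):
# sudo ln -sf "$real_python" /usr/local/bin/python

# Option 2: skip symlinks entirely with a per-user alias:
# echo 'alias python=python3' >> ~/.zshrc
```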
<python><macos><ln>
2023-10-27 04:02:48
2
2,207
thecr0w
77,371,311
235,218
From inside a python function, use a var representing the func object, to get the func's parameters and the values supplied
<p>The first line of the code in the function, '<em>func</em>', below is a print statement. It prints/generates the following sort of cross between a tuple and a dictionary, (to the right of the first equals sign) containing all <em>func</em>’s parameters along with the <strong>default</strong> values for those parameters:</p> <pre><code>str(inspect.signature(func)) = (a='my', b='dog', c='chews', d='bones') </code></pre> <p>I want to modify the print statement on the first line of '<em>func</em>' (as distinguished from print Statement #2 far below):</p> <ol> <li>So that each time '<em>func</em>' is called, the print statement prints the <strong>names</strong> of the parameters <em>along with</em>: [i] the <strong>supplied</strong> values <strong>where they are different</strong> from the parameters’ <strong>default</strong> values, and [ii] the <strong>default</strong> values if no other values are supplied; and</li> <li>By replacing the string 'func' with a code variable that represents the function object, where the function object is the function in which the code/variable is located (in this case, '<em>func</em>'), as an <strong>object</strong>. 
To illustrate, assume X = PSEUDO_CODE_REPRESENTING_THIS_FUNCTION_OBJECT, and that the following line is the first line of code in the function:</li> </ol> <blockquote> <pre><code>print('str(inspect.signature(X) = ' + str(inspect.signature(X)) </code></pre> </blockquote> <p>Here is the function, '<em>func</em>' (containing a print statement on the first line of '<em>func</em>') and, following the definition of '<em>func</em>', a second print statement (on the last line of the code) supplying non-default values to '<em>func</em>':</p> <pre><code>def func(a='my', b='dog', c='chews', d='bones'): print('str(inspect.signature(func)) = ' + str(inspect.signature(func))) return a + ' ' + b + ' ' + c + ' ' + d print('Statement #2: ' + func(a='her', b='cat', c='drinks', d='milk')) </code></pre> <p>Any suggestions would be much appreciated, preferably with a link or some explanation.</p>
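A sketch under one assumption: `func` is defined at module level, so the enclosing function object can be recovered from the current frame's code-object name and globals, and the supplied-or-default values read straight from the frame's locals:

```python
import inspect

def func(a='my', b='dog', c='chews', d='bones'):
    frame = inspect.currentframe()
    # The pseudo-code "X": look the function object up by its own name.
    this_func = frame.f_globals[frame.f_code.co_name]
    # At call time, f_locals holds supplied values where given, defaults otherwise.
    bound = {name: frame.f_locals[name]
             for name in inspect.signature(this_func).parameters}
    print(f'call arguments: {bound}')
    return ' '.join((a, b, c, d))

result = func(a='her', b='cat', c='drinks', d='milk')
```

For nested functions or methods this name-in-globals lookup breaks; `inspect.getargvalues(frame)` gives the same locals without needing the function object at all.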
<python><attributes><inspect>
2023-10-27 03:04:49
1
767
Marc B. Hankin
77,371,281
1,715,544
Why does unittest's `mock.patch.start` re-run the function in which the patcher is started?
<p>Let's say we have two files:</p> <p><strong>to_patch.py</strong></p> <pre class="lang-py prettyprint-override"><code>from unittest.mock import patch def patch_a_function(): print(&quot;Patching!&quot;) patcher = patch(&quot;to_be_patched.function&quot;) patcher.start() print(&quot;Done patching!&quot;) </code></pre> <p><strong>to_be_patched.py</strong></p> <pre class="lang-py prettyprint-override"><code>from to_patch import patch_a_function def function(): pass patch_a_function() function() </code></pre> <p>And we run <code>python -m to_be_patched</code>. This will output:</p> <pre><code>Patching! Patching! </code></pre> <ol> <li>Why isn't <code>Done patching!</code> ever printed?</li> <li>Why is <code>Patching!</code> printed twice?</li> </ol> <p>I've narrowed the answer to (2) down; the call to <code>patch.start</code> seems to trigger <code>patch_a_function</code> again. I suspect this is because it's imported in <code>to_be_patched.py</code>, but am not sure why the function itself would run for a second time. Similarly, I'm not sure why the <code>Done patching!</code> line is not reached in either of the calls to <code>patch_a_function</code>. <code>patcher.start()</code> can't be blocking, because the program exits nicely instead of hanging there... right?</p> <p><strong>Edit:</strong> Huh. It looks like no one can reproduce <code>Done patching!</code> <em>not</em> being printed (which was honestly the main difficulty)—so I guess that's just a my-side problem</p>
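A likely explanation for (2), demonstrated stand-alone below: `python -m to_be_patched` registers the module under the name `__main__`, so when `patch("to_be_patched.function")` resolves its target it imports `to_be_patched` by its real name, misses the `sys.modules` cache, and executes the file body a second time. The throwaway module name here is made up for the demo:

```python
import importlib
import io
import os
import runpy
import sys
import tempfile
from contextlib import redirect_stdout

# Write a throwaway module whose body prints once per execution.
d = tempfile.mkdtemp()
with open(os.path.join(d, "demo_mod.py"), "w") as f:
    f.write('print("body executed")\n')
sys.path.insert(0, d)

buf = io.StringIO()
with redirect_stdout(buf):
    # Like `python -m ...`: the file runs under the name "__main__" ...
    runpy.run_path(os.path.join(d, "demo_mod.py"), run_name="__main__")
    # ... so importing it by its real name is a cache miss: body runs again.
    importlib.import_module("demo_mod")

executions = buf.getvalue().count("body executed")
print(executions)
```

Guarding the module-level calls in `to_be_patched.py` with `if __name__ == "__main__":` prevents the second execution of `patch_a_function()`.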
<python><python-unittest><patch><monkeypatching><python-unittest.mock>
2023-10-27 02:53:10
2
1,410
AmagicalFishy
77,371,189
7,580,944
Using sklearn KMeans without fitting
<p>Similarly to the question <a href="https://stackoverflow.com/questions/73909643/using-sklearn-kmeans-with-initial-centroids-without-fitting-model">here</a>, I want to cluster some data according to pre-computed centroids. Unlike that question, these centroids are provided by other methods.</p> <p>I could compute the assignments myself, but using the interface of scikit-learn's KMeans is extremely useful for other operations (e.g. labeling points and sampling points later).</p> <p>Basically, is there a workaround to use KMeans with <code>max_iter=0</code> and <code>n_init=0</code>?</p>
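One known workaround, sketched with the caveat that it leans on scikit-learn internals (the underscore attributes are not public API and may change between versions): `predict()` only consults the fitted attributes, so assigning the precomputed centroids directly skips fitting entirely:

```python
import numpy as np
from sklearn.cluster import KMeans

centroids = np.array([[0.0, 0.0], [10.0, 10.0]])  # from some other method

km = KMeans(n_clusters=2, n_init=1)
km.cluster_centers_ = centroids   # makes check_is_fitted() pass
km._n_threads = 1                 # internal attribute predict() relies on

labels = km.predict(np.array([[0.5, 0.1], [9.8, 10.2]]))
```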
<python><machine-learning><scikit-learn><k-means>
2023-10-27 02:16:08
1
359
Chutlhu
77,371,047
202,335
Import "flask" could not be resolvedPylancereportMissingImports)
<pre><code>from flask import Flask, request, jsonify app = Flask(__name__) @app.route('/process_user_input', methods=['POST']) def process_user_input(): # Retrieve user input from the request user_input = request.form.get('input') # Return a response response = {'processed_input': user_input} return jsonify(response) </code></pre>
<python><visual-studio-code><flask>
2023-10-27 01:28:17
1
25,444
Steven
77,371,015
7,188,690
return a difference of nested dictionary in a new column of a dataframe
<p>I have a dataframe where I want to find the difference in the unique_users and total_sessions for the latest day(2023-09-06) and previous day(2023-09-07) for a specific hour 2 and 13 separately for each key 'jio' and 'other' and for a specific Exception. I need to consider the Exception and Hour and Date to calculate the difference. Get the top two <code>'total_sessions'</code> and <code>unique_users</code></p> <pre><code>Date Hour Exception cell 0 2023/09/06 2 S1AP {'design': {'jio': {'total_sessions': 39273, 'unique_users': 30837}, 'nokia': {'total_sessions': 9523, 'unique_users': 7690}}} 1 2023/09/06 13 S1AP {'design': {'jio': {'total_sessions': 46870, 'unique_users': 39330}, 'nokia': {'total_sessions': 11745, 'unique_users': 10059}}} 2 2023/09/07 13 S1AP {'design': {'jio': {'total_sessions': 35688, 'unique_users': 29628}, 'nokia': {'total_sessions': 8759, 'unique_users': 7537}}} 3 2023/09/07 2 S1AP {'design': {'jio': {'total_sessions': 37804, 'unique_users': 29654}, 'nokia': {'total_sessions': 8738, 'unique_users': 7272}}} </code></pre> <p>I want to write a <code>generic function</code> which can be applied to similar columns such as <code>cell</code></p> <p>What I have tried :</p> <pre><code>import ast df2 = pd.json_normalize(df['cell'] df3 = pd.concat([df, df2], axis=1) df3 = df3.sort_values(['Hour', 'Exception', 'Date'], ascending=True) # group by and calculate difference diff = df3.groupby(['Hour', 'Exception'])[df2.columns].diff() # clean and join diff.columns = [&quot;diff_&quot;+ x for x in diff.columns] df3 =pd.concat([df3, diff], axis=1) df3.dropna().drop(columns=df2.columns) </code></pre> <p><code>Expected output</code>:</p> <pre><code> Date Hour Exception IMSI_Operator 2023-09-07 00:00:00 2 S1AP: NAS: [2] Detach {'design': {'jio': {'total_sessions': -1469, 'unique_users': -1183}, 'nokia': {'total_sessions': -785, 'unique_users': -418}}} 2023/09/07 0:00 13 S1AP: NAS: [2] Detach {'design': {'jio': {'total_sessions': 11182, 'unique_users': 9702}, 
'nokia': {'total_sessions': 2986, 'unique_users': 2522}}} </code></pre>
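A sketch of a generic helper built from the attempt above (column names from the question; the output is left flat, one `diff_...` column per leaf key, rather than re-nested into a dict, which is an assumption about the desired shape):

```python
import pandas as pd

def add_nested_diffs(df, col, group_keys=('Hour', 'Exception'), order_key='Date'):
    # Flatten the nested dicts: one column per leaf, e.g. design.jio.total_sessions
    flat = pd.json_normalize(df[col].tolist())
    flat.index = df.index
    merged = pd.concat([df.drop(columns=[col]), flat], axis=1)
    merged = merged.sort_values(list(group_keys) + [order_key])
    # Per (Hour, Exception) group: latest day minus previous day, per leaf
    diffs = merged.groupby(list(group_keys))[list(flat.columns)].diff()
    diffs.columns = ['diff_' + c for c in diffs.columns]
    return pd.concat([merged, diffs], axis=1).dropna(subset=diffs.columns)

# Tiny reproduction using two of the question's rows:
df = pd.DataFrame({
    'Date': ['2023/09/06', '2023/09/07'],
    'Hour': [2, 2],
    'Exception': ['S1AP', 'S1AP'],
    'cell': [{'design': {'jio': {'total_sessions': 39273}}},
             {'design': {'jio': {'total_sessions': 37804}}}],
})
out = add_nested_diffs(df, 'cell')
```

The same call works for any similarly shaped column (`cell`, `IMSI_Operator`, ...) by changing `col`.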
<python><pandas>
2023-10-27 01:16:48
1
494
sam
77,371,009
2,130,515
How to train doc2vec with pre-built vocab in gensim
<p>I have 1000 documents.</p> <p>For some purpose I need to keep specific words in the vocab. I tokenize the 1000 documents and I design a word_freq dict. e.g. {&quot;word1&quot;:100, &quot;word2&quot;: 2000, ...}</p> <p>Now I want to build a doc2vec model using this word_freq.</p> <pre><code>from gensim.models.doc2vec import Doc2Vec, TaggedDocument model = Doc2Vec(vector_size= 300, window=300, min_count=0, alpha=0.01, min_alpha=0.0007, sample=1e-4, negative=5, dm=1, epochs=20, workers=16) model.build_vocab_from_freq(word_freq=tf_idf_vocab, keep_raw_vocab=False, corpus_count=1000, update=False) N = 942100020 # is the total number of words in the whole 1000 docs. model.train(corpus_file=train_data, total_examples=model.corpus_count, total_words=N, epochs=model.epochs) </code></pre> <p>To boost the training time, I used corpus file (SentenceLine) for training where each document is a line (document' words are separated by space).</p> <p>Each document is expected to be tagged with its number in the corpus file (i.e. numeric tag.)</p> <p>As a test, I trained the model for few epochs. To get the most similar word to a given document with e.g. tag=0, I use:</p> <pre><code>doc_vector = model_copy.dv[tag] sims = model_copy.wv.most_similar([doc_vector], topn=20) </code></pre> <p><strong>I got an error in <code>doc_vector = model_copy.dv[tag]</code> said that tag=0 does not exist!</strong> I debug and it seems that model.dv is empty!</p> <pre><code>model.dv.expandos # {} </code></pre> <p>I checked the code of <code>build_vocab()</code>, at some point it call _scan_vocab() where it set the <code>model.dv</code> with tags.</p> <p>However, in <code>build_vocab_from_freq()</code> it does not call _scan_vocab() and there is no tagging!?</p> <pre><code>def _scan_vocab(...): .... 
for t, dt in doctags_lookup.items(): self.dv.key_to_index[t] = dt.index self.dv.set_vecattr(t, 'word_count', dt.word_count) self.dv.set_vecattr(t, 'doc_count', dt.doc_count) </code></pre> <p>Note that when I used <code>model.build_vocab(corpus_file=train_data, progress_per=1000)</code> to build the vocab internally, the documents are tagged with numeric numbers as I explained above!</p>
<python><nlp><gensim><word-embedding><doc2vec>
2023-10-27 01:15:12
0
1,790
LearnToGrow
77,370,889
6,471,140
Sagemaker-huggingface An error occurred (ValidationException) when calling the CreateModel operation: Requested image not found
<p>I'm trying to create an asynchronous SageMaker endpoint as <a href="https://github.com/aws/amazon-sagemaker-examples/blob/main/async-inference/Async-Inference-Walkthrough.ipynb" rel="nofollow noreferrer">shown here</a>, but using a huggingface model, so I need to use one of the huggingface images. For that I checked the official list of available options <a href="https://docs.aws.amazon.com/sagemaker/latest/dg-ecr-paths/ecr-us-east-1.html" rel="nofollow noreferrer">here</a>, but when trying to create a model based on it, I get the error:</p> <pre><code>ClientError: An error occurred (ValidationException) when calling the CreateModel operation: Requested image 763104351884.dkr.ecr.us-east-1.amazonaws.com/huggingface-pytorch-training:1.9.1-transformers4.12-gpu-py38-cu111-ubuntu20.04 not found. </code></pre> <p>I tested about 3 options from the official list and none worked (not even the example shown on that page works). Using the exact same code but switching to xgboost and its corresponding version works, so I know it is something specific to these images. Any ideas or suggestions on where to find the list of supported huggingface images?</p> <p><a href="https://i.sstatic.net/2P4hb.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/2P4hb.png" alt="enter image description here" /></a></p>
<python><amazon-web-services><machine-learning><amazon-sagemaker><huggingface>
2023-10-27 00:22:15
1
3,554
Luis Leal
77,370,866
1,330,974
How to filter pandas dataframe using lambda function with regular expression to extract date
<p>Say I have a pandas dataframe like below (with millions of rows) -</p> <pre><code>data = {'s3_path': ['s3://mybucket/date=2023-10-26/f1.txt', 's3://mybucket/date=2023-10-25/f2.txt', 's3://mybucket/date=2023-10-24/f3.txt', 's3://mybucket/date=2023-10-23/f4.txt']} df = pd.DataFrame(data) </code></pre> <p>I want to filter S3 paths that are before <code>2023-10-24</code>. What would be an efficient way to do that in pandas? Not knowing a lot about pandas, what I can think of is below, but it is not still complete:</p> <pre><code>date_cutoff_str = '2023-10-24' date_cutoff_obj = datetime.strptime(date_cutoff_str, '%Y-%m-%d') def is_before(cur_date, cutoff_date): if cur_date &lt; cutoff_date: True return False date_regex_pattern = r'\d{4}-\d{2}-\d{2}' filtered_df = df.apply(is_before, cur_date=how_do_i_get_regex_value_here, cutoff_date=date_cutoff_obj) </code></pre> <p>Any suggestion/answer would be greatly appreciated. Thank you.</p>
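One efficient sketch: extract the date with a vectorised regex (the pattern from the question) and compare as datetimes, avoiding a row-wise `apply` entirely, which matters at millions of rows:

```python
import pandas as pd

data = {'s3_path': ['s3://mybucket/date=2023-10-26/f1.txt',
                    's3://mybucket/date=2023-10-25/f2.txt',
                    's3://mybucket/date=2023-10-24/f3.txt',
                    's3://mybucket/date=2023-10-23/f4.txt']}
df = pd.DataFrame(data)

# One vectorised pass: pull out YYYY-MM-DD, parse, then boolean-mask.
dates = pd.to_datetime(df['s3_path'].str.extract(r'(\d{4}-\d{2}-\d{2})')[0])
filtered_df = df[dates < pd.Timestamp('2023-10-24')]
```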
<python><pandas>
2023-10-27 00:13:46
2
2,626
user1330974
77,370,814
12,172,744
Fastest way to extract raw Y' plane data from Y'CbCr-encoded video?
<p>I have a use-case where I'm extracting <code>I-Frames</code> from videos and turning them into <a href="https://en.wikipedia.org/wiki/Perceptual_hashing" rel="nofollow noreferrer">perceptual hashes</a> for later analysis.</p> <p>⠀</p> <p>I'm currently using <code>ffmpeg</code> to do this with a command akin to:</p> <p><code>ffmpeg -skip_frame nokey -i 'in%~1.mkv' -vsync vfr -frame_pts true -vf 'keyframes/_Y/out%~1/%%06d.bmp'</code></p> <p>and then reading in the data from the resulting images.</p> <p>⠀</p> <p>This is a bit wasteful as, to my understanding, <code>ffmpeg</code> does implicit <code>YUV -&gt; RGB</code> colour-space conversion and I'm also needlessly saving intermediate data to disk.</p> <p>Most modern video codecs utilise <a href="https://en.wikipedia.org/wiki/Chroma_subsampling" rel="nofollow noreferrer">chroma subsampling</a> and have frames encoded in a <a href="https://en.wikipedia.org/wiki/YCbCr" rel="nofollow noreferrer">Y'C<sub>b</sub>C<sub>r</sub></a> colour-space, where <strong>Y'</strong> is the <a href="https://en.wikipedia.org/wiki/Luma_(video)" rel="nofollow noreferrer">luma</a> component, and <strong>Cb</strong> <strong>Cr</strong> are the <a href="https://en.wikipedia.org/wiki/B-Y" rel="nofollow noreferrer">blue-difference</a>, <a href="https://en.wikipedia.org/wiki/R-Y" rel="nofollow noreferrer">red-difference</a> <a href="https://en.wikipedia.org/wiki/Chrominance" rel="nofollow noreferrer">chroma</a> components.</p> <p>Which in something like <code>YUV420p</code> used in <a href="https://en.wikipedia.org/wiki/Advanced_Video_Coding" rel="nofollow noreferrer"><em>h.264</em></a>/<a href="https://en.wikipedia.org/wiki/High_Efficiency_Video_Coding" rel="nofollow noreferrer"><em>h.265</em></a> video codecs is encoded as such:</p> <p><a href="https://i.sstatic.net/o4xDf.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/o4xDf.png" alt="single YUV420p encoded frame" /></a></p> <p>Where each <strong>Y'</strong> value is <code>8 
bits</code> long and corresponds to a pixel.</p> <p>⠀</p> <p>As I use gray-scale data for generating the <a href="https://en.wikipedia.org/wiki/Perceptual_hashing" rel="nofollow noreferrer">perceptual hashes</a> anyway, I was wondering if there is a way to simply grab <em>just</em> the raw <strong>Y'</strong> values from any given <code>I-Frame</code> into an array and skip all of the unnecessary conversions and extra steps?</p> <p><em>(as the <a href="https://en.wikipedia.org/wiki/Luma_(video)" rel="nofollow noreferrer">luma</a> component is essentially equivalent to the grayscale data i need for generating hashes)</em></p> <p>I came across the <code>-vf 'extractplanes=y'</code> filter in <code>ffmpeg</code> that <em>seems</em> like it might do just that, but according to <a href="https://hhsprings.bitbucket.io/docs/programming/examples/ffmpeg/manipulating_video_colors/extractplanes.html" rel="nofollow noreferrer">source</a>:</p> <blockquote> <p>&quot;...what is extracted by 'extractplanes' is not raw data of the (for example) Y plane. Each extracted is converted to grayscale. 
That is, the converted video data has YUV (or RGB) which is different from the input.&quot;</p> </blockquote> <p>which makes it seem like it's touching <a href="https://en.wikipedia.org/wiki/Chrominance" rel="nofollow noreferrer">chroma</a> components and doing some conversion anyway, in testing applying this filter didn't affect the processing time of the <code>I-Frame</code> extraction either.</p> <p>⠀</p> <p>My script is currently written in <code>Python</code>, but I am in the process of migrating it to <code>C++</code>, so I would prefer any solutions pertaining to the latter.</p> <p><code>ffmpeg</code> seems like the ideal candidate for this task, but I really am looking for whatever solution that would ingest the data fastest, preferably saving directly to <code>RAM</code>, as I'll be processing a large number of video files and discarding <code>I-Frame</code> <a href="https://en.wikipedia.org/wiki/Luma_(video)" rel="nofollow noreferrer">luma</a> pixel data once a hash has been generated.</p> <p>I would also like to associate each <code>I-Frame</code> with its corresponding frame number in the video.</p>
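One hedged option that stays with the ffmpeg CLI: `-f rawvideo -pix_fmt gray` emits only 8-bit luma, and writing to a pipe keeps everything in RAM with no BMP encoding. For YUV sources this should amount to a plane copy plus whatever range handling swscale applies, so it is worth validating against a known frame. The sketch below only assembles the command (filenames are placeholders); `subprocess` would run it:

```python
import subprocess  # command is built but not executed in this sketch

cmd = [
    "ffmpeg",
    "-skip_frame", "nokey",   # decode keyframes (I-frames) only
    "-i", "input.mkv",        # placeholder input path
    "-vsync", "vfr",
    "-f", "rawvideo",         # no container, no image encoding
    "-pix_fmt", "gray",       # 8-bit luma plane only, no Cb/Cr
    "pipe:1",                 # stream to stdout -> straight into memory
]
# luma = subprocess.run(cmd, capture_output=True).stdout
# -> n_frames * width * height bytes, one byte per pixel, frame after frame
command_line = " ".join(cmd)
```

To associate each keyframe with its original frame number, adding `-vf showinfo` logs per-frame `n`/`pts` metadata to stderr, which can be parsed alongside the pipe.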
<python><c++><ffmpeg><video-encoding><yuv>
2023-10-26 23:48:44
1
364
memeko
77,370,743
14,187,095
Kedro viz blank page
<p>I created a sample pipeline that works correctly with <code>kedro run</code>, however when I try to visualise it with <code>kedro viz</code> I'm getting basically a blank page, even though the terminal doesn't show a single error. The only detail I've found was in the inspection mode:</p> <p><a href="https://i.sstatic.net/46hpN.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/46hpN.png" alt="enter image description here" /></a></p> <p>The operating system I'm running it on is windows 10, when I launch it on WSL everything is completely normal.</p> <p>Kedro version: 0.18.14 Kedro viz version: 6.6.1</p> <p>Any ideas why it is happening and how to solve it?</p>
<python><server><localhost><visualization><kedro>
2023-10-26 23:16:13
2
307
Fatafim
77,370,620
9,067,016
A DRY approach for multiple excepts
<p>Let's assume we have the following code:</p> <pre><code>try: whatever except ValueError: something1 except (IndexError, KeyError): something2 except KeyboardInterrupt: something3 except Exception: something4 else: # no error something5 finally: finalize </code></pre> <p>So we have multiple excepts. Is there any counterpart to <code>else</code> that suits the DRY approach, i.e. a block executed only if any error happened?</p> <p>For example, we want to set some error variable/flag or log the error, but apart from that execute the different except logic, without writing a method/func <code>except_logic()</code> and calling it in every except.</p>
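Python has no "except-anything-ran" counterpart to `else`, but one common sketch keeps the shared logic in a single place: each handler stores the exception object (or sets a flag), and `finally` runs the shared "any error" code exactly once:

```python
def run(action):
    error = None
    try:
        result = action()
    except ValueError as e:
        error = e            # something1
    except (IndexError, KeyError) as e:
        error = e            # something2
    except Exception as e:
        error = e            # something4
    else:
        pass                 # something5: runs only when no error occurred
    finally:
        if error is not None:
            # shared logic for *any* error, written once
            print(f'logged once for any error: {error!r}')
    return error

ok = run(lambda: 42)     # no handler fires, error stays None
bad = run(lambda: [][1]) # IndexError handler fires, then the shared log
```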
<python><error-handling><dry>
2023-10-26 22:32:36
3
609
Vova
77,370,408
20,122,390
How does Celery's chord work in Python?
<p>I have the following piece of code:</p> <pre><code>def create_pipeline( real: DataFrame, forecast_name: str, energy_asset_id: int, family: str, forecast_range: date_range, methodologies: List[str], start_forecast: datetime, subfamily_x_forecast_id: int, uuid: str, subfamily_id: int, execution_id: int, ) -&gt; None: forecast_tasks = [] for methodology in methodologies: log.info(f&quot;Methodology: {methodology}&quot;) forecast_task = celery_app.signature( PATHS.get(methodology.lower(), &quot;app.worker.classic.forecast&quot;), args=[ real, forecast_name, energy_asset_id, family, methodology, forecast_range, start_forecast, uuid, subfamily_id, ], ) forecast_tasks.append(forecast_task) combination_task = celery_app.signature( &quot;app.worker.combinations.pipeline_combinations&quot;, args=[ energy_asset_id, family, methodologies, forecast_name, subfamily_x_forecast_id, uuid, start_forecast, execution_id, ], queue=f&quot;combinations-{settings.ENV}&quot; ) group_forecast_task = chord(forecast_tasks) group_forecast_task(combination_task) </code></pre> <p>In total there are 8 methodologies so forecast_tasks is a list with 8 tasks. What I need is for these 8 tasks to be executed and once the 8 tasks have finished, the combination_task task is executed (adding the result of group_forecast_task as the first argument), for that I implement the part of:</p> <pre><code>group_forecast_task = chord(forecast_tasks) group_forecast_task(combination_task) </code></pre> <p>I had implemented it like this in another project recently but now it doesn't work for me! The 8 forecast tasks are executed but when they finish, combination_task is never called, what could be happening? I'm not really sure how chord actually works and if I'm implementing it correctly.</p>
<python><celery><celery-task>
2023-10-26 21:33:14
1
988
Diego L
77,370,398
652,528
Python decimal sum returning wrong value
<p>I'm using decimal module to avoid float rounding errors. In my case the values are money so I want two decimal places.</p> <p>To do that I do <code>decimal.getcontext().prec = 2</code> but then I get some surprising results which make me think I'm missing something. In this code the first assertion works, but the second fails</p> <pre class="lang-py prettyprint-override"><code>from decimal import getcontext, Decimal assert Decimal(&quot;3000&quot;) + Decimal(&quot;20&quot;) == 3020 getcontext().prec = 2 assert Decimal(&quot;3000&quot;) + Decimal(&quot;20&quot;) == 3020 # fails </code></pre> <p>Since <code>3000</code> and <code>20</code> are integers I was expecting this to hold, but I get <code>3000</code> instead. Any ideas on what is happening?</p>
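What is happening: `prec` is the number of *significant digits* used by arithmetic, not a count of decimal places, so with `prec = 2` the sum 3020 is rounded to two significant digits, i.e. `3.0E+3`. For money, the usual sketch is to leave the context precision at its default and `quantize` results to cents instead:

```python
from decimal import Decimal, ROUND_HALF_UP

CENTS = Decimal('0.01')

def to_money(value):
    # Rounds to two *decimal places*, independent of context precision.
    return Decimal(value).quantize(CENTS, rounding=ROUND_HALF_UP)

total = Decimal('3000') + Decimal('20')          # full precision: 3020
share = to_money(Decimal('100') / Decimal('3'))  # 33.333... -> 33.33
```

`ROUND_HALF_UP` matches common monetary rounding; the default context uses `ROUND_HALF_EVEN`.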
<python><fixed-point>
2023-10-26 21:31:37
2
6,449
geckos
77,370,358
7,197,899
Python check if value in tuple fails with numpy array
<p>I've got a function that returns a tuple, and I need to check if a specific element is in the tuple. I don't know what types of elements will be in the tuple, but I know that I want an exact match. For example, I want</p> <pre><code>1 in (1, [0, 6], 0) --&gt; True 1 in ([1], 0, 6) --&gt; False </code></pre> <p>This should be really straightforward, right? I just check <code>1 in tuple_output_from_function</code>.</p> <p>This breaks if there is a numpy array as an element in the tuple</p> <pre><code>import numpy as np s = tuple((2, [0, 6], 1)) 4 in s --&gt; False t = tuple((2, np.array([0, 6]), 1)) 4 in t --&gt; ValueError: The truth value of an array with more than one element is ambiguous. Use a.any() or a.all() </code></pre> <p>I expect the second case to return False because 4 is not in the tuple. Even if it were in the array, I would still expect False. I can do <code>0 in t[1]</code> without error. Why does this break, and how can I make my check robust to it without <em>assuming</em> that there will be a numpy array or having to check for it explicitly?</p>
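Why it breaks: `in` calls `==` on each tuple element, and numpy overloads `==` to return an element-wise *array*; the membership test then needs that array's truth value, which raises. A hedged, type-agnostic sketch that treats any ambiguous comparison as "not a match", without ever naming numpy in the check itself:

```python
import numpy as np  # only needed here to demonstrate the failing case

def safe_in(value, items):
    for item in items:
        try:
            if bool(item == value):   # force a scalar truth value
                return True
        except (ValueError, TypeError):
            pass  # element-wise == (e.g. numpy arrays): not an exact match
    return False

t = (2, np.array([0, 6]), 1)
```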
<python><arrays><numpy><tuples>
2023-10-26 21:19:32
1
1,173
KindaTechy
77,370,195
6,523,485
Plotly Error: X-axis of scatter is grouping time series points erroneously
<h1>TL;DR</h1> <p>I am plotting time series data using a plotly scatter plot in a streamlit dashboard. The data is captured every second (actually down to microseconds) but the plot groups approximately every two minutes of data together.</p> <h2>Improper &quot;grouped every 2 minute&quot; output</h2> <p>Plotly express in a streamlit dashboard</p> <p><a href="https://i.sstatic.net/Qtz13.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Qtz13.png" alt="Actual output" /></a></p> <h2>Code</h2> <pre><code>fig = px.scatter(df, x='dateTime', y='currentHeading') fig.update_yaxes( scaleanchor=&quot;x&quot;, scaleratio=1) fig.update_xaxes(showgrid=False) fig.update_xaxes(tickformat=&quot;%H:%M:%S&quot;) </code></pre> <h2>Expected output</h2> <p>Matplotlib in a jupyter notebook:</p> <p><a href="https://i.sstatic.net/zg7Wy.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/zg7Wy.png" alt="Expected output" /></a></p> <h2>Is the data the issue? Seemingly not</h2> <p>Here is the output of a plotly express line plot, the points are spaced along the x-axis appropriately.</p> <p><a href="https://i.sstatic.net/3SUta.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/3SUta.png" alt="Plotly line works correctly" /></a></p>
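A likely cause, sketched below: if `dateTime` is still a string column, `px.scatter` treats it as categorical and spaces/bins the ticks oddly, while other trace types can mask this. Converting the column first (and dropping the `scaleanchor="x"` constraint, which locks heading units to the time axis) restores true time spacing. The pandas part runs stand-alone; the plotly calls are left as comments:

```python
import pandas as pd

# Stand-in for the telemetry frame: timestamps arrive as strings.
df = pd.DataFrame({'dateTime': ['2023-10-26 20:42:01', '2023-10-26 20:42:02'],
                   'currentHeading': [101.0, 103.5]})

df['dateTime'] = pd.to_datetime(df['dateTime'])  # string -> real datetime axis

# fig = px.scatter(df, x='dateTime', y='currentHeading')
# fig.update_xaxes(showgrid=False, tickformat='%H:%M:%S')
# Note: omit update_yaxes(scaleanchor="x", scaleratio=1) - tying degrees
# to a date axis forces plotly into the distorted tick layout shown above.
```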
<python><scatter-plot><plotly><streamlit>
2023-10-26 20:42:03
0
1,089
Harry de winton
77,370,187
6,202,092
Problem with nested loops and if/else statements in Python and Jinja2
<p>The following code is doing what it is supposed to do, but I need it to return just one instance under each category; please see the output example below.</p> <pre><code>categories = [[1], [2], [3], [4], [5], [6], [7], [8]] files = [(5, 1, 25, 1, 'file.pdf'), (6, 1, 25, 3, 'file.pdf'), (7, 1, 25, 6, 'file.pdf'),] for c in categories: print(f'Category: {c[0]}') for f in files: if c[0] == f[3]: print(f) else: print('Standard text block B') </code></pre> <p>The code returns the following.</p> <pre><code>Category: 1 (5, 1, 25, 1, 'file.pdf') Standard text block B Standard text block B Category: 2 Standard text block B Standard text block B Standard text block B Category: 3 Standard text block B (6, 1, 25, 3, 'file.pdf') Standard text block B Category: 4 Standard text block B Standard text block B Standard text block B Category: 5 Standard text block B Standard text block B Standard text block B Category: 6 Standard text block B Standard text block B (7, 1, 25, 6, 'file.pdf') Category: 7 Standard text block B Standard text block B Standard text block B Category: 8 Standard text block B Standard text block B Standard text block B </code></pre> <p>But I am looking for a solution to return the following:</p> <pre><code>Category: 1 (5, 1, 25, 1, 'file.pdf') Category: 2 Standard text block B Category: 3 (6, 1, 25, 3, 'file.pdf') Category: 4 Standard text block B Category: 5 Standard text block B Category: 6 (7, 1, 25, 6, 'file.pdf') Category: 7 Standard text block B Category: 8 Standard text block B </code></pre> <p>I want to accomplish it with loops with Jinja2, but I think the solution needs to be with Python first.</p>
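One sketch: decide per category rather than per file. Building a category-to-file lookup first means each category prints exactly one line, either its match or the fallback (this assumes at most one file per category, as in the sample data):

```python
categories = [[1], [2], [3], [4], [5], [6], [7], [8]]
files = [(5, 1, 25, 1, 'file.pdf'),
         (6, 1, 25, 3, 'file.pdf'),
         (7, 1, 25, 6, 'file.pdf')]

# Index files by their category field (f[3]); last match wins per category.
files_by_category = {f[3]: f for f in files}

lines = []
for (c,) in categories:
    lines.append(f'Category: {c}')
    lines.append(str(files_by_category.get(c, 'Standard text block B')))
print('\n'.join(lines))
```

The same dict can be passed into the Jinja2 context, so the template loop only does `files_by_category.get(c)` instead of nesting a second loop with if/else.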
<python><django-templates><jinja2>
2023-10-26 20:41:01
2
503
Enrique Bruzual
77,370,130
8,508
Is there a way to have an Argparse argument imply multiple other options?
<p>I have 2 options that can take various arguments. I want to have a special option that means &quot;both of those with specific standard data&quot;. This will be a different mode than the operation with no arguments.</p> <pre><code> parser.add_argument('--use-std-date', '-U', action='store_const', const=nominal_test_fixture_day, dest='use_date') parser.add_argument('--use-date', '-u', action='store', type=parse_date, metavar='%Y-%m-%d', dest='use_date', nargs=1) parser.add_argument('--write-out-data', '-w', action='store_true') parser.add_argument('--build-standard-data', help=&quot;This option implies both '--use-std-date', and '--write-out-data'&quot;) </code></pre> <p>In my dream world, there would be something like:</p> <pre><code>parser.add_argument('--build-standard-data', implies=&quot;-U -w&quot;) </code></pre> <p>and if the parser got <code>--build-standard-data</code> it would recurse the <code>&quot;-U -w&quot;</code> back into <code>parse_args()</code> and add the results to the return. It would probably also do something smart if it got both <code>--build-standard-data</code> and <code>--use-std-date</code> (ignore one? raise error?).</p> <p>Does anything like this exist or do I just need to add a mess of if-else statements to my main function?</p>
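argparse has no `implies=` keyword. The usual substitute is a small post-processing step after `parse_args()`, sketched here with simplified flags (the real options take values; the conflict policy, here "explicit flags are simply redundant", is a choice to make):

```python
import argparse

parser = argparse.ArgumentParser()
parser.add_argument('--use-std-date', '-U', action='store_true')
parser.add_argument('--write-out-data', '-w', action='store_true')
parser.add_argument('--build-standard-data', action='store_true',
                    help="implies both --use-std-date and --write-out-data")

args = parser.parse_args(['--build-standard-data'])

# The "implies" step: one place, no scattered if/else in main().
if args.build_standard_data:
    args.use_std_date = True
    args.write_out_data = True
```

The same logic can also live in a custom `argparse.Action` subclass whose `__call__` sets the implied attributes on the namespace, which keeps it next to the `add_argument` call.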
<python><argparse>
2023-10-26 20:26:34
0
15,639
Matthew Scouten
77,370,098
12,436,050
Streamlit application too slow after each user selection
<p>I am new to streamlit. I am creating a webservice through reading a spreadsheet (~2 million rows).<br /> I created two drop down selection using columns &quot;PRODUCTID&quot;, &quot;PRODUCTNDC&quot; from the table. However when a user is selecting the identifier from the drop down, it is reading a dataframe again and that makes it very slow. How can I improve the speed of this service?</p> <p>Below is the code:</p> <pre><code>#import libraries import pandas as pd import numpy as np import openpyxl import streamlit as st from streamlit_extras.app_logo import add_logo #page settings st.set_page_config(page_title = 'FDA-NDC Code Service', layout=&quot;wide&quot;) st.header('FDA-NDC Code Service') #st.subheader('beta version') #data reading &amp; pre-processing df = pd.read_excel(io='package.xlsx', engine='openpyxl', sheet_name= 'package') #removing leading spaces and more than 1 space in column names df = df[[&quot;PRODUCTID&quot;, &quot;PRODUCTNDC&quot;, &quot;NDCPACKAGECODE&quot;, &quot;PACKAGEDESCRIPTION&quot;, &quot;STARTMARKETINGDATE&quot;, &quot;ENDMARKETINGDATE&quot;, &quot;NDC_EXCLUDE_FLAG&quot;, &quot;SAMPLE_PACKAGE&quot;]] df['PRODUCTID'] = df['PRODUCTID'].str.strip() product_list = list(df['PRODUCTID'].unique()) df6 = df.copy() options_pid = df6['PRODUCTID'].unique().tolist() selected_pid = st.sidebar.multiselect('Search product id',options_pid) options_nid = df6['PRODUCTNDC'].unique().tolist() selected_nid = st.sidebar.multiselect('Search product ndc',options_nid) if selected_pid: df6 = df6[df6[&quot;PRODUCTID&quot;].isin(selected_pid)] elif selected_nid: df6 = df6[df6[&quot;PRODUCTNDC&quot;].isin(selected_nid)] else: st.markdown( f'&lt;p class=&quot;header_title&quot;&gt; {str(df6.shape[0])} &lt;/p&gt;', unsafe_allow_html=True, ) if selected_pid or selected_nid: st.dataframe(df6.reset_index(drop = True)) else: st.dataframe(df) </code></pre> <p>Any help is highly appreciated.</p>
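The main cost here: Streamlit re-runs the whole script on every widget interaction, so the 2-million-row spreadsheet is re-read each time. Caching the loader fixes it; in the app that is the `@st.cache_data` decorator on a `load_data()` function, mimicked below with `functools.lru_cache` so the pattern runs stand-alone (the file read is stubbed out):

```python
from functools import lru_cache

read_count = {'n': 0}

@lru_cache(maxsize=1)            # in the Streamlit app: @st.cache_data
def load_packages(path):
    read_count['n'] += 1         # stands in for the expensive pd.read_excel(...)
    return f'dataframe loaded from {path}'

load_packages('package.xlsx')    # first run: actually reads the file
load_packages('package.xlsx')    # every rerun after a widget change: cached
```

Two further hedged suggestions: compute `options_pid`/`options_nid` inside the cached function too, and consider Parquet instead of xlsx, since `read_excel` via openpyxl is slow at this scale.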
<python><pandas><dataframe><streamlit>
2023-10-26 20:19:49
1
1,495
rshar
77,370,079
4,399,016
How to read a pandas date column as date type
<p>I have this code. I can download a spreadsheet and load a worksheet as a data frame. But the Date column is not getting converted as expected.</p> <pre><code>import requests import pandas as pd def get_BH_spreadsheet(URLS, SPREADSHEET_NAME): resp = requests.get(URLS) output = open(SPREADSHEET_NAME, 'wb') output.write(resp.content) output.close() df_NA_RIG_COUNT = pd.read_excel(open(SPREADSHEET_NAME, 'rb'), sheet_name='US Oil &amp; Gas Split', index_col=None,header = 'infer', skiprows=6) df_NA_RIG_COUNT['Date'] = pd.to_datetime(df_NA_RIG_COUNT['Date'] ) return df_NA_RIG_COUNT </code></pre> <p>By passing the URL and filename, we get the data frame.</p> <pre><code>get_BH_spreadsheet('https://rigcount.bakerhughes.comstatic-files/027e0bcc-86ec-407b-9029-b5bd3bf1982b', 'North America Rotary Rig Count - Jan 2000 - Current.xlsx') </code></pre> <p>This, however does not give the desired result. What can I do differently to get proper date time type?</p> <pre><code>df_NA_RIG_COUNT['Date'] = pd.to_datetime(df_NA_RIG_COUNT['Date']) </code></pre>
<python><pandas><dataframe><datetime>
2023-10-26 20:16:57
2
680
prashanth manohar
77,370,040
6,818,619
Decode byte string to Cyrillic in Python
<p>I have a byte string like this, it should be <code>Сравнение</code> in Cyrillic characters:</p> <pre><code>a = b'&amp;#1057;&amp;#1088;&amp;#1072;&amp;#1074;&amp;#1085;&amp;#1077;&amp;#1085;&amp;#1080;&amp;#1077;' </code></pre> <p>Decoding it into UTF-8 doesn't help:</p> <pre><code>a = b'&amp;#1057;&amp;#1088;&amp;#1072;&amp;#1074;&amp;#1085;&amp;#1077;&amp;#1085;&amp;#1080;&amp;#1077;' a.decode(&quot;utf-8&quot;) # prints same &amp;#1057;&amp;#1088;... string </code></pre> <p>Which encoding is this and how to decode the string?</p> <p>I'm using Google Colab with Python 3.10.12.</p> <p><a href="https://www.online-decoder.com/bg" rel="nofollow noreferrer">This online decoder</a> after applying auto-decode says it must be decoded from UTF-8 to UTF-8.</p>
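Those `&#1057;…` sequences are not a byte encoding at all; they are HTML numeric character references, which is why decoding as UTF-8 changes nothing. The stdlib `html.unescape` resolves them:

```python
import html

a = b'&#1057;&#1088;&#1072;&#1074;&#1085;&#1077;&#1085;&#1080;&#1077;'
# The bytes are plain ASCII; the Cyrillic is hidden in the &#NNNN; references
text = html.unescape(a.decode('ascii'))
print(text)  # Сравнение
```

Each `&#NNNN;` is the decimal Unicode code point of one character (e.g. `&#1057;` is U+0421, Cyrillic capital Es).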
<python><python-3.x><utf-8><decode>
2023-10-26 20:08:28
1
390
ans
77,369,986
1,338,449
Ansible collections, google.cloud.gcp_storage_object, increase timeout
<p>I am running a playbook that uses google.cloud.gcp_storage_object to upload a file to Google Cloud Storage. Sometimes this module throws a timeout exception. What I want is to increase the timeout while calling this module, but I couldn't figure out why the module itself doesn't provide this parameter - unless I am missing something really obvious.</p> <p>Now, first things first: the module documentation can be found <a href="https://docs.ansible.com/ansible/latest/collections/google/cloud/gcp_storage_object_module.html#ansible-collections-google-cloud-gcp-storage-object-module" rel="nofollow noreferrer">here</a>. I was expecting a little bit more from Google, to be honest; for instance, at least a few more examples. As you can see, it doesn't have any parameter for timeout.</p> <p>I can see in <a href="https://github.com/ansible-collections/google.cloud/blob/master/plugins/modules/gcp_storage_object.py" rel="nofollow noreferrer">repo/../gcp_storage_object.py</a>, which is the source code for the module, that it is calling blob.upload_from_file(file_obj). As expected, it doesn't specify the optional timeout parameter at all. Yet according to the <a href="https://cloud.google.com/python/docs/reference/storage/latest/google.cloud.storage.blob.Blob#google_cloud_storage_blob_Blob_upload_from_file" rel="nofollow noreferrer">Blob class documentation</a>, it supports the timeout parameter. (To my understanding this class uses Python requests, and requests uses urllib3 to make the request.)</p> <p>So, to recap, these are my questions. Is my understanding right? Is it possible that they have developed the Ansible module without giving the option to provide a custom timeout? If not, what am I doing wrong and where should I specify the timeout? If yes, how could I handle it in a production environment? I do not think it is proper to modify the module myself; what would I do every time there is a new version of this module?
Is there any professional and elegant way?</p> <p>I hope all the above makes sense; I am a little bit tired and cloudy at the moment. Thank you in advance for your time! :)</p>
<python><ansible><google-cloud-storage><ansible-galaxy>
2023-10-26 19:56:44
0
481
laxris
77,369,969
152,781
How to persist a dask dataframe loaded with dask.dataframe.from_delayed
<p>I have a large sharded dataset stored in a custom format that would benefit greatly from <code>dask.dataframe.from_delayed</code></p> <p>However, I'm seeing odd behavior when trying to persist the resulting dataframe:</p> <pre><code>def load(file): # Just an example...Actual loading code is more complex. return pd.read_parquet(file) filenames = ['my_file'] df = dd.from_delayed([delayed(load)(f) for f in filenames]) df = df.persist() print(df.count().compute()) </code></pre> <p>This results in two consecutive 'load' tasks with each task loading data from the network from scratch: once when calling .persist() and once when running computations on the persisted dataframe.</p> <p>I would expect only one 'load' task, and then the computations would work on the persisted dataframe.</p> <p>I verified that</p> <pre><code>df = dd.read_parquet('my_file') df = df.persist() print(df.count().compute()) </code></pre> <p>correctly only schedules one read_parquet task so data is only loaded from the network once.</p> <p>Is there a workaround for this issue to ensure that after calling .persist, data isn't re-downloaded from the network?</p>
<python><pandas><dask><dask-distributed>
2023-10-26 19:52:27
1
753
kjleftin
77,369,814
226,081
Python: type annotate a function return value that returns a class defined in that function
<p>I'm wondering what the proper way to type annotate a return value of a function or method where a class is defined inside of that function.</p> <p>For example:</p> <pre class="lang-py prettyprint-override"><code>from typing import List from dataclasses import dataclass def get_users() -&gt; List['ReturnUser']: @dataclass class ReturnUser: first_name: str last_name: str return [ ReturnUser(&quot;John&quot;, &quot;Doe&quot;), ReturnUser(&quot;Jane&quot;, &quot;Doe&quot;)] </code></pre> <p>... the ReturnUser dataclass is only relevant within the scope of the function. (I don't care about the object outside of the <code>get_users()</code> function other than that I can access its properties in the calling code).</p> <p>However, Pylance shows the return value annotation <code>List['ReturnUser']</code> as a type error. My environment is unfortunately Python 3.7, but also curious about if newer approaches exist in newer Python versions?</p> <p>How do I annotate a return value that doesn't yet exist like this?</p>
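One common pattern for this is to annotate the return type with a `typing.Protocol` that describes just the attributes callers need; the locally defined dataclass then matches structurally, and the inner class never has to escape the function. (On Python 3.7, `Protocol` comes from the `typing_extensions` backport rather than `typing`.) A sketch of that idea:

```python
from dataclasses import dataclass
from typing import List, Protocol  # Python 3.7: from typing_extensions import Protocol

class User(Protocol):
    """Structural type: anything with these attributes qualifies."""
    first_name: str
    last_name: str

def get_users() -> List[User]:
    @dataclass
    class ReturnUser:  # stays private to the function
        first_name: str
        last_name: str
    return [ReturnUser("John", "Doe"), ReturnUser("Jane", "Doe")]
```

Calling code sees `List[User]` and can access `.first_name` / `.last_name` with full type-checker support, without the nested class being visible at module scope.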
<python><python-typing>
2023-10-26 19:20:30
1
10,861
Joe Jasinski
77,369,793
10,426,490
How to show Azure Function with Python runtime as a failed run?
<p>Does an Azure Function run only show as &quot;failed&quot; if an unhandled exception occurs? If all exceptions are handled, will the Function run show as &quot;successful&quot; even though some part of it failed?</p> <p><strong>Example</strong>: If the below parsing logic fails, do I need to <code>raise</code> in the <code>main()</code> function to cause the Azure Function run to show as <code>failed</code> in the Azure Function &quot;Monitor&quot; blade?</p> <pre><code>import azure.functions as func #------------------------ def parse_eventgrid_msg(msg): try: eg_msg_json = msg.get_json() blob_url = eg_msg_json['data']['blobUrl'] blob_name = blob_url.split('/')[-1] container_name = blob_url.split('/')[3] blob_source = container_name.split('-')[0] except KeyError as e: raise KeyError(f&quot;#### Missing key in EventGrid message: {e}&quot;) except IndexError as e: raise IndexError(f&quot;#### Error while parsing blob URL or container name: {e}&quot;) except Exception as e: raise Exception(f&quot;#### Unexpected error while parsing EventGrid message: {e}&quot;) results = { &quot;blob_url&quot;: blob_url, &quot;blob_name&quot;: blob_name, &quot;container_name&quot;: container_name, &quot;blob_source&quot;: blob_source, } return results #-------------- def main(msg: func.QueueMessage): # Parse EventGrid message from QueueMessage body try: parsed_msg = parse_eventgrid_msg(msg) except Exception as e: logging.error(f'#### Error parsing EventGrid message: {e}') return #&lt;--- does this need to be `raise` to get the Azure Function to show as a failed run? </code></pre>
<python><azure><azure-functions>
2023-10-26 19:16:38
1
2,046
ericOnline
77,369,695
3,420,197
How can I get 1000 samples from each class of a dataframe?
<p>I want to get a sample of each class: for example, 1000 rows for each of classes 1, 2, 3, 4, or fewer if there aren't enough rows.</p> <pre><code>import pandas as pd data = { 'Date': ['2023-10-20', '2023-10-21', '2023-10-22', '2023-10-23', '2023-10-24' ...], 'class': [4, 1, 1, 2, 3 ...], 'other_col1': [5, 6, 3, 1, 4 ...], 'other_col2': [15, 10, 72, 6, 8 ...] } df = pd.DataFrame(data) </code></pre> <p>I tried a simple approach, but I want a more sophisticated way:</p> <pre><code>sampled_data = [] for class_label, group in df.groupby('class'): if len(group) &gt;= sample_size: sampled_data.append(group.sample(sample_size)) else: sampled_data.append(group) sampled_df = pd.concat(sampled_data) </code></pre>
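In pandas, one compact way to express "n per group, or fewer" is `df.groupby('class', group_keys=False).apply(lambda g: g.sample(min(len(g), n)))`; newer pandas also offers `df.groupby('class').sample(n=...)`, though that raises when a group has fewer than `n` rows. The dependency-free sketch below shows the same idea in plain Python (function name and row shape are illustrative):

```python
import random
from collections import defaultdict

def sample_per_class(rows, key, n, seed=None):
    """Take up to n random rows per distinct value of row[key]."""
    rng = random.Random(seed)
    groups = defaultdict(list)
    for row in rows:
        groups[row[key]].append(row)
    sampled = []
    for group in groups.values():
        # min() handles classes with fewer than n rows, like the else branch above
        sampled.extend(rng.sample(group, min(n, len(group))))
    return sampled
```

The `min(n, len(group))` guard is the key trick; it replaces the explicit if/else in the loop-based version.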
<python><pandas><dataframe>
2023-10-26 18:57:22
1
1,712
Anna Andreeva Rogotulka
77,369,612
11,092,636
Issue with .dt accessor even after confirming Timestamp datatype in pandas DataFrame
<p>I am encountering an AttributeError when trying to use the .dt accessor on a pandas Series, despite confirming that the Series contains only Timestamp objects. I've condensed my problem into a Minimal Reproducible Example (MRE) below:</p> <pre class="lang-py prettyprint-override"><code>import pandas as pd # Sample data data = {'DateGreffe': ['25/10/2023', '26/10/2023', '27/10/2023']} df = pd.DataFrame(data) # Convert &quot;DateGreffe&quot; to datetime df.loc[:, &quot;DateGreffe&quot;] = pd.to_datetime(df[&quot;DateGreffe&quot;], dayfirst=True) # Check data type of &quot;DateGreffe&quot; column print(df[&quot;DateGreffe&quot;].apply(type).unique()) # Convert &quot;DateGreffe&quot; back to string format &quot;DD/MM/YYYY&quot; df.loc[:, &quot;DateGreffe&quot;] = df[&quot;DateGreffe&quot;].dt.strftime(&quot;%d/%m/%Y&quot;) print(df) </code></pre> <p>The output of the print statement confirms that the DateGreffe column only contains Timestamp objects. However, when I try to use .dt.strftime on this column, I get the following error:</p> <p>AttributeError: Can only use .dt accessor with datetimelike values</p>
<python><pandas><dataframe>
2023-10-26 18:43:47
1
720
FluidMechanics Potential Flows
77,369,454
1,902,616
pytube NameError: name 'YouTube' is not defined
<p>I had pytube working before and I've obviously messed something up, however at the moment I can't get pytube or pytube3 to work at the most basic level.</p> <p>When I run &quot;py&quot; to open Python, then I type in just:</p> <pre><code>import pytube i = &quot;https://www.youtube.com/watch?v=X&quot; yt = YouTube(i) </code></pre> <p>It doesn't matter what I put for X. The third line doesn't run and fails with:</p> <pre><code>Traceback (most recent call last): File &quot;&lt;stdin&gt;&quot;, line 1, in &lt;module&gt; NameError: name 'YouTube' is not defined </code></pre> <p>I've tried to fix this by:</p> <pre><code>pip uninstall pytube pip uninstall pytube3 pip install pytube </code></pre> <p>And also by:</p> <pre><code>pip uninstall pytube pip uninstall pytube3 pip install pytube3 </code></pre> <p>I can't seem to get either one to work at the moment.</p> <p>I couldn't find anyone else with this same error. <a href="https://stackoverflow.com/questions/70495354/nameerror-name-yt-is-not-defined">The closest I could find was this one but it seems completely different.</a> Any ideas what I can try to get this working?</p>
<python><installation><pip><namespaces><pytube>
2023-10-26 18:14:46
0
1,025
azoundria
77,369,410
1,072,283
How to handle utf-8 non-breaking spaces in CSV
<p>In a Python script I am using <code>openpyxl</code> to read an XLSX file. I move some metadata around, and ultimately writing out a new CSV. When I open that CSV in Microsoft Excel, there are whitespaces in a certain metadata field that are being replaced with the following characters: <code>†¬</code>. Here is a before and after example…</p> <p>Before (XLSX):</p> <pre><code>list of items 1.     apple 2.     orange 3.     banana </code></pre> <p>After (CSV as viewed in Excel):</p> <pre><code>list of items 1.¬†¬†¬†¬† apple 2.¬†¬†¬†¬† orange 3.¬†¬†¬†¬† banana </code></pre> <p>If I open the CSV in Google Sheets it renders just fine. If I preview it in Mac OS finder, it also looks fine. If I look at the CSV as plain text in Sublime Text it also looks fine. It is only Excel that renders these space characters in the strange fashion. Looking at the data in Sublime Text does seem to provide a hint… one would typically expect space characters to show a dot when you have selected them, so as to differentiate them from a tab whitespace. When I select these spaces however, the ones that Excel renders incorrectly do not show dots, and the one space it handles correctly does show a dot as expected.</p> <p><a href="https://i.sstatic.net/TFK0A.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/TFK0A.png" alt="enter image description here" /></a></p> <p>Looking at that selection one might assume that the non-dotted whitespace is just a tab, but it is not, you can move the cursor four increments over, so it seems to be four whitespaces. After a lot of searching I finally found that I could just copy and paste these two different whitespaces into a UTF-8 encoder. 
This revealed that the correctly rendered space is <code>\x20</code> aka &quot;space&quot; and that the ones Excel struggles with are <code>\xc2\xa0</code> aka &quot;no-break space&quot;.</p> <p>I would like to preserve the original / source metadata as pristinely as possible, so I am not inclined to replace the non-breaking spaces with regular spaces, but should I be doing so in my Python? Is that unicode character liable to be incorrectly rendered by other tools? Is there something more nefarious going on here that I am missing?</p>
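`\xc2\xa0` is the UTF-8 encoding of U+00A0 (no-break space); `¬†` is what those two bytes look like when decoded as Mac Roman, which suggests Excel is guessing the wrong encoding rather than the data being broken. Instead of replacing the characters, writing the CSV with a UTF-8 byte order mark (`utf-8-sig`) usually makes Excel detect the encoding and render the no-break spaces correctly. A minimal sketch (file name and rows are illustrative):

```python
import csv
import os
import tempfile

rows = [
    ["list of items"],
    ["1.\u00a0\u00a0\u00a0\u00a0 apple"],  # U+00A0 no-break spaces, preserved as-is
]

path = os.path.join(tempfile.gettempdir(), "items_utf8sig.csv")
# 'utf-8-sig' prepends the BOM bytes EF BB BF, which Excel uses to detect UTF-8
with open(path, "w", newline="", encoding="utf-8-sig") as f:
    csv.writer(f).writerows(rows)

written = open(path, "rb").read()
```

This keeps the source metadata pristine; the no-break spaces survive as `\xc2\xa0` in the file, and the BOM tells Excel how to read them.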
<python><excel><csv><utf-8>
2023-10-26 18:06:09
1
661
dongle
77,369,379
11,716,727
Why do I face such an error when I import data through pandas?
<p>As a beginner, I am trying to import data with the name of (Facebook_Ads_2.csv) using pandas on jupyter notebook; the output must be as shown below.</p> <p><a href="https://i.sstatic.net/gsiAL.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/gsiAL.png" alt="enter image description here" /></a></p> <p>but when I import them using the following lines of Python code:</p> <pre><code>import pandas as pd import numpy as np import seaborn as sns import matplotlib.pyplot as plt T = pd.read_csv('Facebook_Ads_2.csv') </code></pre> <p>I get the following error:</p> <pre><code>UnicodeDecodeError Traceback (most recent call last) Cell In[4], line 1 ----&gt; 1 T = pd.read_csv('Facebook_Ads_2.csv') File ~\AppData\Local\Programs\Python\Python311\Lib\site-packages\pandas\io\parsers\readers.py:912, in read_csv(filepath_or_buffer, sep, delimiter, header, names, index_col, usecols, dtype, engine, converters, true_values, false_values, skipinitialspace, skiprows, skipfooter, nrows, na_values, keep_default_na, na_filter, verbose, skip_blank_lines, parse_dates, infer_datetime_format, keep_date_col, date_parser, date_format, dayfirst, cache_dates, iterator, chunksize, compression, thousands, decimal, lineterminator, quotechar, quoting, doublequote, escapechar, comment, encoding, encoding_errors, dialect, on_bad_lines, delim_whitespace, low_memory, memory_map, float_precision, storage_options, dtype_backend) 899 kwds_defaults = _refine_defaults_read( 900 dialect, 901 delimiter, (...) 908 dtype_backend=dtype_backend, 909 ) 910 kwds.update(kwds_defaults) --&gt; 912 return _read(filepath_or_buffer, kwds) File ~\AppData\Local\Programs\Python\Python311\Lib\site-packages\pandas\io\parsers\readers.py:577, in _read(filepath_or_buffer, kwds) 574 _validate_names(kwds.get(&quot;names&quot;, None)) 576 # Create the parser. 
--&gt; 577 parser = TextFileReader(filepath_or_buffer, **kwds) 579 if chunksize or iterator: 580 return parser File ~\AppData\Local\Programs\Python\Python311\Lib\site-packages\pandas\io\parsers\readers.py:1407, in TextFileReader.__init__(self, f, engine, **kwds) 1404 self.options[&quot;has_index_names&quot;] = kwds[&quot;has_index_names&quot;] 1406 self.handles: IOHandles | None = None -&gt; 1407 self._engine = self._make_engine(f, self.engine) File ~\AppData\Local\Programs\Python\Python311\Lib\site-packages\pandas\io\parsers\readers.py:1679, in TextFileReader._make_engine(self, f, engine) 1676 raise ValueError(msg) 1678 try: -&gt; 1679 return mapping[engine](f, **self.options) 1680 except Exception: 1681 if self.handles is not None: File ~\AppData\Local\Programs\Python\Python311\Lib\site-packages\pandas\io\parsers\c_parser_wrapper.py:93, in CParserWrapper.__init__(self, src, **kwds) 90 if kwds[&quot;dtype_backend&quot;] == &quot;pyarrow&quot;: 91 # Fail here loudly instead of in cython after reading 92 import_optional_dependency(&quot;pyarrow&quot;) ---&gt; 93 self._reader = parsers.TextReader(src, **kwds) 95 self.unnamed_cols = self._reader.unnamed_cols 97 # error: Cannot determine type of 'names' File ~\AppData\Local\Programs\Python\Python311\Lib\site-packages\pandas\_libs\parsers.pyx:548, in pandas._libs.parsers.TextReader.__cinit__() File ~\AppData\Local\Programs\Python\Python311\Lib\site-packages\pandas\_libs\parsers.pyx:637, in pandas._libs.parsers.TextReader._get_header() File ~\AppData\Local\Programs\Python\Python311\Lib\site-packages\pandas\_libs\parsers.pyx:848, in pandas._libs.parsers.TextReader._tokenize_rows() File ~\AppData\Local\Programs\Python\Python311\Lib\site-packages\pandas\_libs\parsers.pyx:859, in pandas._libs.parsers.TextReader._check_tokenize_status() File ~\AppData\Local\Programs\Python\Python311\Lib\site-packages\pandas\_libs\parsers.pyx:2017, in pandas._libs.parsers.raise_parser_error() UnicodeDecodeError: 'utf-8' codec can't decode byte 
0xc5 in position 4001: invalid continuation byte </code></pre> <p><strong>Any assistance, please?</strong></p>
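Byte `0xC5` being an "invalid continuation byte" means the file is not UTF-8; it was likely saved in a legacy single-byte encoding such as Windows-1252 or Latin-1. Passing, for example, `pd.read_csv('Facebook_Ads_2.csv', encoding='cp1252')` (or `encoding='latin-1'`) typically resolves this. A small helper to guess a workable encoding; the candidate list is an assumption and may need adjusting for your data:

```python
import os
import tempfile

def sniff_encoding(path, candidates=("utf-8", "cp1252", "latin-1")):
    """Return the first candidate encoding that decodes the whole file."""
    raw = open(path, "rb").read()
    for enc in candidates:
        try:
            raw.decode(enc)
            return enc
        except UnicodeDecodeError:
            continue
    return "latin-1"  # maps every byte value, so it always succeeds; a last resort

# Demo on a file containing the problematic byte 0xC5 ('Å' in cp1252)
demo = os.path.join(tempfile.gettempdir(), "enc_demo.csv")
with open(demo, "wb") as f:
    f.write(b"name\n\xc5ngstr\xf6m\n")
guessed = sniff_encoding(demo)
```

Note that `latin-1` decodes any byte sequence without error, so it must come last in the candidate list; a successful decode does not guarantee the characters are the intended ones.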
<python><pandas><read-csv>
2023-10-26 17:59:45
1
709
SH_IQ
77,369,332
3,380,902
Find the maximum split of positive even integers given an array
<p>I am trying to solve LeetCode problem <a href="https://leetcode.com/problems/maximum-split-of-positive-even-integers/description/" rel="nofollow noreferrer">2178. Maximum Split of Positive Even Integers</a>:</p> <blockquote> <p>You are given an integer <code>finalSum</code>. Split it into a sum of a <strong>maximum</strong> number of <strong>unique</strong> positive even integers.</p> <ul> <li>For example, given <code>finalSum = 12</code>, the following splits are valid (unique positive even integers summing up to <code>finalSum</code>): <code>(12)</code>, <code>(2 + 10)</code>, <code>(2 + 4 + 6)</code>, and <code>(4 + 8)</code>. Among them, <code>(2 + 4 + 6)</code> contains the maximum number of integers. Note that <code>finalSum</code> cannot be split into <code>(2 + 2 + 4 + 4)</code> as all the numbers should be unique.</li> </ul> <p>Return <em>a list of integers that represent a valid split containing a maximum number of integers</em>. If no valid split exists for <code>finalSum</code>, return <em>an <strong>empty</strong> list</em>. You may return the integers in any order.</p> <h3>Constraints:</h3> <ul> <li>1 &lt;= <code>finalSum</code> &lt;= 10<sup>10</sup></li> </ul> </blockquote> <p>I am trying to do it using for-loops. 
For instance, I have the following <code>even</code> and <code>finalSum</code> values:</p> <pre><code>even = [2, 4, 6, 8, 10, 12, 14, 16] finalSum = 16 current_sum = 0 current_comb = [] all_combs = [] for j in range(len(even)): current_comb.append(even[j]) for k in range(j+1, len(even)): current_comb.append(even[k]) print(&quot;current combination&quot;, current_comb) current_sum = sum(current_comb) print(&quot;current sum&quot;, current_sum) if current_sum == finalSum: print(&quot;sum = finalSum&quot;) # append to all combs all_combs.append(current_comb) current_comb = [] elif current_sum &gt; finalSum: print(&quot;current sum &gt; final sum&quot;) # since array is sorted by ASC, even other value is going to be &gt;, therefore pop the last 2 elements current_comb = current_comb[:-2] print(&quot;pop last 2 elements&quot;, current_comb) </code></pre> <p>I am stuck here and trying to figure out the next steps. I prefer solving this using <code>for</code> or <code>while</code> loops.</p>
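Enumerating combinations will not scale to finalSum up to 10^10, but a greedy loop avoids it entirely: take 2, 4, 6, … while the remainder allows, then fold whatever is left into the largest term, which keeps every part even and unique. Since 2 + 4 + … + 2k = k(k+1), this uses only O(√finalSum) iterations. A sketch using a plain while loop:

```python
def maximum_even_split(final_sum: int) -> list[int]:
    if final_sum % 2 or final_sum < 2:
        return []  # an odd total cannot be a sum of even parts
    parts = []
    current, remaining = 2, final_sum
    while remaining >= current:
        parts.append(current)
        remaining -= current
        current += 2
    # leftover is even and smaller than the next candidate; adding it to the
    # largest part keeps the list strictly increasing, hence all parts unique
    parts[-1] += remaining
    return parts
```

For example, `maximum_even_split(12)` returns `[2, 4, 6]` and `maximum_even_split(16)` returns `[2, 4, 10]`.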
<python><algorithm>
2023-10-26 17:52:42
4
2,022
kms
77,369,029
469,476
parsing actual link from href using lxml
<p>Using jupyter notebook, python 3. I'm downloading some files from the web to work with them in bulk locally. The files are listed on a webpage, but the links are in an href attribute. The code I found gives me the text, but not the actual link (even though my understanding is the code was supposed to get the link).</p> <p>Here's what I have:</p> <pre><code>import os import requests from lxml import html from lxml import etree import urllib.request import urllib.parse ... web_string = requests.get(url).content parsed_content = html.fromstring(web_string) td_list = [e for e in parsed_content.iter() if e.tag == 'td'] directive_list = [] for td_e in td_list: txt = td_e.text_content() directive_list.append(txt) </code></pre> <p>This is a long web page with a bunch of entries that look like <code>&lt;a href=&quot;file1.pdf&quot;&gt; text1 &lt;/a&gt;</code></p> <p>This code returns: text1, text2, etc. instead of file1.pdf, file2.pdf</p> <p>How can I extract the link?</p>
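`text_content()` deliberately returns only the text, dropping all attributes. With lxml the attribute can be read directly, e.g. `parsed_content.xpath('//td//a/@href')` for all links at once, or `td_e.find('.//a').get('href')` per cell (hedged: the exact XPath depends on the page's structure). For completeness, a dependency-free version of the same idea using the stdlib parser:

```python
from html.parser import HTMLParser

class LinkCollector(HTMLParser):
    """Collects the href attribute of every <a> tag encountered."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            href = dict(attrs).get("href")
            if href:
                self.links.append(href)

def extract_links(html_text):
    parser = LinkCollector()
    parser.feed(html_text)
    return parser.links
```

Since hrefs like `file1.pdf` are relative, `urllib.parse.urljoin(url, href)` turns each one into an absolute URL suitable for downloading.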
<python><parsing><lxml>
2023-10-26 17:00:33
0
1,994
elbillaf
77,369,002
12,274,651
Python Gekko Differential Equation Solution wrt Distance, not Time
<p>I need to solve a differential equation for a plug flow chemical reactor with respect to distance, not time. Gekko uses <code>m.time</code> to define the time horizon and <code>y.dt()</code> to use <code>dy/dt</code> in equations. Is it possible to covert to <code>m.distance</code> and <code>y.dx()</code> instead to make the problem easier to read for problems that integrate wrt distance?</p> <p>Here is an attempt:</p> <pre class="lang-py prettyprint-override"><code>from gekko import GEKKO import numpy as np import matplotlib.pyplot as plt m = GEKKO() k = 10 m.distance = np.linspace(0,20,100) y = m.Var(value=5) t = m.Param(value=m.time) m.Equation(k*y.dx()==-t*y) m.options.IMODE=4 m.solve(disp=False) plt.plot(m.distance,y.value) plt.xlabel('distance') plt.ylabel('y') plt.show() </code></pre> <pre><code>Traceback (most recent call last): File &quot;C:\Users\johnh\Desktop\test1.py&quot;, line 11, in &lt;module&gt; m.Equation(k*y.dx()==-t*y) File &quot;C:\Users\johnh\Python311\Lib\site-packages\gekko\gk_operators.py&quot;, line 36, in __getattr__ raise AttributeError(name) AttributeError: dx. Did you mean: 'dt'? </code></pre> <p>I don't need to solve a Partial Differential Equation (PDE) for a dynamic simulation of the plug flow reactor, only solve a steady-state concentration profile along the reactor. What is an easy way to redefine Gekko in terms of distance and make the model more readable?</p>
<python><ode><gekko>
2023-10-26 16:55:44
1
744
TexasEngineer
77,368,905
16,383,578
PyQt6 custom title bar causes window to move differently from the cursor
<p>This is about my GUI program that I have been working on for over 2 months.</p> <p><a href="https://i.sstatic.net/DLJfp.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/DLJfp.png" alt="enter image description here" /></a></p> <p>I have finally completed it and I have posted it on <a href="https://codereview.stackexchange.com/questions/287608/gui-tic-tac-toe-game-with-six-ai-players-part-1-the-ui">Code Review</a> and I have also opened a GitHub <a href="https://github.com/Estrangeling/Tic-Tac-Toe" rel="nofollow noreferrer">repository</a> for it. I have implemented everything I originally intended, and I have fixed every bug that I found.</p> <p>Now I want to customize the title bar, but somehow it isn't working. Because the project is massive, it contains 113714 characters in the scripts alone, I can't post the complete code here. I also can't post a minimal reproducible example because I have no idea what caused the bug, the code I adapted is working properly, but when I put the custom title bar to the main window with all other stuff, it malfunctions.</p> <p>I found this <a href="https://stackoverflow.com/a/44249552/16383578">answer</a>, and I adapted it to PyQt6:</p> <pre><code>import sys from PyQt6.QtCore import QPoint from PyQt6.QtCore import Qt from PyQt6.QtWidgets import QApplication from PyQt6.QtWidgets import QHBoxLayout from PyQt6.QtWidgets import QLabel from PyQt6.QtWidgets import QPushButton from PyQt6.QtWidgets import QVBoxLayout from PyQt6.QtWidgets import QWidget class MainWindow(QWidget): def __init__(self): super(QWidget, self).__init__() self.layout = QVBoxLayout() self.layout.addWidget(MyBar()) self.setLayout(self.layout) self.layout.setContentsMargins(0, 0, 0, 0) self.layout.addStretch(-1) self.setMinimumSize(800, 400) self.setWindowFlags(Qt.WindowType.FramelessWindowHint) self.pressing = False class MyBar(QWidget): def __init__(self, ): super(MyBar, self).__init__() self.layout = QHBoxLayout() self.layout.setContentsMargins(0, 
0, 0, 0) self.title = QLabel(&quot;My Own Bar&quot;) btn_size = 35 self.btn_close = QPushButton(&quot;x&quot;) self.btn_close.clicked.connect(self.btn_close_clicked) self.btn_close.setFixedSize(btn_size, btn_size) self.btn_close.setStyleSheet(&quot;background-color: red;&quot;) self.btn_min = QPushButton(&quot;-&quot;) self.btn_min.clicked.connect(self.btn_min_clicked) self.btn_min.setFixedSize(btn_size, btn_size) self.btn_min.setStyleSheet(&quot;background-color: gray;&quot;) self.btn_max = QPushButton(&quot;+&quot;) self.btn_max.clicked.connect(self.btn_max_clicked) self.btn_max.setFixedSize(btn_size, btn_size) self.btn_max.setStyleSheet(&quot;background-color: gray;&quot;) self.title.setFixedHeight(35) self.title.setAlignment(Qt.AlignmentFlag.AlignCenter) self.layout.addWidget(self.title) self.layout.addWidget(self.btn_min) self.layout.addWidget(self.btn_max) self.layout.addWidget(self.btn_close) self.title.setStyleSheet( &quot;&quot;&quot; background-color: black; color: white; &quot;&quot;&quot; ) self.setLayout(self.layout) def resizeEvent(self, QResizeEvent): super(MyBar, self).resizeEvent(QResizeEvent) self.title.setFixedWidth(self.window().width()) def mousePressEvent(self, event): self.start = self.mapToGlobal(event.pos()) self.pressing = True def mouseMoveEvent(self, event): print(event.pos()) self.end = self.mapToGlobal(event.pos()) delta = self.end - self.start self.window().setGeometry( self.mapToGlobal(delta).x(), self.mapToGlobal(delta).y(), self.window().width(), self.window().height(), ) self.start = self.end def mouseReleaseEvent(self, QMouseEvent): self.pressing = False def btn_close_clicked(self): self.window().close() def btn_max_clicked(self): self.window().showMaximized() def btn_min_clicked(self): self.window().showMinimized() if __name__ == &quot;__main__&quot;: app = QApplication(sys.argv) app.setStyle(&quot;Fusion&quot;) mw = MainWindow() mw.show() sys.exit(app.exec()) </code></pre> <p>It is working properly.</p> <p>So I tried to add it 
to my project, but somehow it doesn't work properly, the relevant code is below:</p> <pre><code>class SquareButton(QPushButton): def __init__(self, text: str) -&gt; None: super().__init__() self.setFont(FONT) self.setText(text) self.setFixedSize(32, 32) class TitleBar(QWidget): def __init__(self, name: str, exitfunc: Callable) -&gt; None: super().__init__() self.hbox = make_hbox(self, 0) self.name = name self.exitfunc = exitfunc self.setObjectName(&quot;Title&quot;) self.add_icon() self.add_title() self.add_buttons() self.setFixedHeight(35) def add_icon(self) -&gt; None: self.icon = QLabel() self.icon.setPixmap(GLOBALS[&quot;Logo&quot;]) self.icon.setFixedSize(32, 32) self.hbox.addWidget(self.icon) self.hbox.addStretch() def add_title(self) -&gt; None: self.title = Label(self.name) self.hbox.addWidget(self.title) self.hbox.addStretch() def add_buttons(self) -&gt; None: self.minimize_button = SquareButton(&quot;—&quot;) self.minimize_button.clicked.connect(self.minimize) self.hbox.addWidget(self.minimize_button) self.close_button = SquareButton(&quot;X&quot;) self.close_button.clicked.connect(self.exitfunc) self.hbox.addWidget(self.close_button) def mousePressEvent(self, e: QMouseEvent) -&gt; None: self.start = self.mapToGlobal(e.pos()) self.pressing = True def mouseMoveEvent(self, e: QMouseEvent): if self.pressing: print(e.pos()) self.end = self.mapToGlobal(e.pos()) delta = self.end - self.start self.window().setGeometry( self.mapToGlobal(delta).x(), self.mapToGlobal(delta).y(), self.window().width(), self.window().height(), ) self.start = self.end def mouseReleaseEvent(self, e: QMouseEvent) -&gt; None: self.pressing = False def minimize(self) -&gt; None: self.window().showMinimized() </code></pre> <p>I don't know what caused the issue, the code should work, but when I ran it, and tried to move the window by dragging the title bar, the window moves erratically, its movement doesn't correspond to the movement of the cursor. 
At first it seemed to follow the cursor but lags behind a little bit, and then it doesn't follow the cursor at all.</p> <p>The window has a tendency to move to the lower right, and if the mouse moves up the window somehow moves down, and the window will move below the lower edge of the screen, the window seems to move in the same direction as the cursor horizontally, but not the same distance.</p> <p>So I tried to debug it by printing the positions of the events, and the working example prints all positive coordinates:</p> <pre><code>PyQt6.QtCore.QPoint(575, 17) PyQt6.QtCore.QPoint(575, 17) PyQt6.QtCore.QPoint(575, 17) PyQt6.QtCore.QPoint(575, 16) PyQt6.QtCore.QPoint(574, 17) PyQt6.QtCore.QPoint(575, 17) PyQt6.QtCore.QPoint(574, 17) PyQt6.QtCore.QPoint(571, 16) PyQt6.QtCore.QPoint(575, 16) PyQt6.QtCore.QPoint(575, 17) PyQt6.QtCore.QPoint(571, 15) PyQt6.QtCore.QPoint(573, 17) PyQt6.QtCore.QPoint(568, 16) PyQt6.QtCore.QPoint(567, 15) PyQt6.QtCore.QPoint(570, 14) PyQt6.QtCore.QPoint(567, 13) PyQt6.QtCore.QPoint(564, 11) PyQt6.QtCore.QPoint(564, 13) PyQt6.QtCore.QPoint(565, 13) PyQt6.QtCore.QPoint(565, 16) PyQt6.QtCore.QPoint(562, 13) PyQt6.QtCore.QPoint(564, 11) </code></pre> <p>But somehow the code when used in my project prints some negative coordinates:</p> <pre><code>PyQt6.QtCore.QPoint(55, -322) PyQt6.QtCore.QPoint(51, -327) PyQt6.QtCore.QPoint(47, -330) PyQt6.QtCore.QPoint(41, -334) PyQt6.QtCore.QPoint(37, -337) PyQt6.QtCore.QPoint(35, -341) PyQt6.QtCore.QPoint(30, -343) PyQt6.QtCore.QPoint(30, -346) PyQt6.QtCore.QPoint(29, -348) PyQt6.QtCore.QPoint(26, -351) PyQt6.QtCore.QPoint(25, -352) PyQt6.QtCore.QPoint(25, -354) PyQt6.QtCore.QPoint(23, -356) PyQt6.QtCore.QPoint(19, -358) PyQt6.QtCore.QPoint(16, -362) PyQt6.QtCore.QPoint(14, -365) PyQt6.QtCore.QPoint(9, -368) PyQt6.QtCore.QPoint(7, -372) PyQt6.QtCore.QPoint(3, -374) PyQt6.QtCore.QPoint(-2, -378) PyQt6.QtCore.QPoint(-1, -379) PyQt6.QtCore.QPoint(-7, -383) PyQt6.QtCore.QPoint(-7, -386) 
PyQt6.QtCore.QPoint(-10, -388) PyQt6.QtCore.QPoint(-14, -392) PyQt6.QtCore.QPoint(-17, -394) PyQt6.QtCore.QPoint(-20, -399) PyQt6.QtCore.QPoint(-22, -401) PyQt6.QtCore.QPoint(-25, -403) PyQt6.QtCore.QPoint(-30, -407) PyQt6.QtCore.QPoint(-31, -409) </code></pre> <p>I have packaged the full code to a zip file and uploaded to <a href="https://drive.google.com/file/d/1wDUiAvu47cUbS-8omZeiqZe--FFREUy5/view?usp=sharing" rel="nofollow noreferrer">Google Drive</a>, so that you may run the code and debug it. Because I use pickle data files, you may need to run <code>analyze_tic_tac_toe_states.py</code> to generate the data files before running <code>main.py</code> to call up the window.</p> <p>What caused the strange bug and how to fix it?</p>
<python><pyqt><pyqt6>
2023-10-26 16:42:23
1
3,930
Ξένη Γήινος
77,368,904
7,318,120
Methods and attributes do not show when dot is pressed (intellisense)
<p>Yesterday I upgraded from Python <code>3.11</code> to <code>3.12</code>. The editor that I use is <code>VScode</code>.</p> <p>When I use standard patterns with matplotlib (from the official docs), the intellisense does not recognise the types and therefore pressing the dot fails to expose the methods and attributes.</p> <p>Here is reproducible code:</p> <pre class="lang-py prettyprint-override"><code>import matplotlib.pyplot as plt fig, ax = plt.subplots() fruits = ['apple', 'blueberry', 'cherry', 'orange'] counts = [40, 100, 30, 55] colours = ['red', 'blue', 'red', 'orange'] ax.bar(fruits, counts, label=colours, color=colours) ax.set_ylabel('fruit supply') ax.set_title('Fruit supply by kind and color') ax.legend(title='Fruit color') plt.show() </code></pre> <p>If I <strong>explicitly typehint</strong>, then it works. But this is very clunky and seems broken.</p> <p>I now have to do this:</p> <pre class="lang-py prettyprint-override"><code>import matplotlib.pyplot as plt from matplotlib.figure import Figure # added this from matplotlib.axes import Axes # added this fig, ax = plt.subplots() fig: Figure # added this ax: Axes # added this fruits = ['apple', 'blueberry', 'cherry', 'orange'] counts = [40, 100, 30, 55] colours = ['red', 'blue', 'red', 'orange'] ax.bar(fruits, counts, label=colours, color=colours) ax.set_ylabel('fruit supply') ax.set_title('Fruit supply by kind and color') ax.legend(title='Fruit color') plt.show() </code></pre> <p>I have never had this issue with earlier versions of Python as far as I can recall.</p> <p>How can I fix this such that I do not have to explicitly typehint the <code>fig</code> and <code>ax</code>?</p>
<python><matplotlib><visual-studio-code><pylance>
2023-10-26 16:42:14
1
6,075
darren
77,368,894
5,052,605
Python: PackageNotFoundError No package metadata was found for <myproject>
<p>I have an existing Python project and am trying to debug it via Microsoft Visual Studio Code.</p> <p>Therefore I have the following launch.json:</p> <pre><code>{ &quot;version&quot;: &quot;0.2.0&quot;, &quot;configurations&quot;: [ { &quot;name&quot;: &quot;Python: myproject&quot;, &quot;type&quot;: &quot;python&quot;, &quot;request&quot;: &quot;launch&quot;, &quot;program&quot;: &quot;E:\\&lt;path&gt;\\__main__.py&quot;, &quot;cwd&quot;: &quot;${workspaceFolder}&quot;, &quot;env&quot;: {&quot;PYTHONPATH&quot;: &quot;${cwd}&quot; }, &quot;args&quot;: [ &quot;publish&quot;, &quot;--config=E:\\&lt;path&gt;\\config.yaml&quot; ], } ] } </code></pre> <p>Unfortunately it runs up to an exception:</p> <blockquote> <p>Exception has occurred: PackageNotFoundError No package metadata was found for myproject</p> </blockquote> <blockquote> <p>importlib.metadata.PackageNotFoundError: No package metadata was found for myproject</p> </blockquote> <p>I already tried</p> <blockquote> <p>pip install importlib.metadata</p> </blockquote> <p>But it doesn't help.</p> <p>The lines of code that raise the exception are in the return:</p> <blockquote> <p>import atexit, copy, getopt, importlib.metadata, json, logging, os, re, signal, shutil, sys, textwrap, time, urllib</p> <p>. . . return importlib.metadata.version(<strong>package</strong>)</p> </blockquote> <p>Using the interpreter: <a href="https://i.sstatic.net/4CCcW.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/4CCcW.png" alt="enter image description here" /></a></p> <p>Can anyone help me figure out how to fix this?</p> <p>Thanks in advance.</p> <p>Cheers</p>
<python><visual-studio-code><python-3.12>
2023-10-26 16:40:15
1
326
Airwave
77,368,828
2,687,317
Compute time diffs between each entry in one dataframe and all entries in another - pandas
<p>Again, I have 2 data frames consisting of times. I'd like to find the <strong>delta</strong> time between <strong>each</strong> df1 Stop Time and <strong>every</strong> df2 Start Time. If the Start Time in df2 is before the Stop Time in df1 I would want a delta time of 0. For instance:</p> <p>df1</p> <pre>Stop Time Site 2023-10-17 20:10:00.310 P2 2023-10-17 21:20:00.440 P1 2023-10-17 23:30:00.200 P2 2023-10-18 00:00:00.190 P1 2023-10-18 01:00:00.130 P1 2023-10-18 02:00:00.500 P2 2023-10-18 03:00:00.480 P1 2023-10-18 04:00:00.020 P2 2023-10-18 05:00:00.000 P1 2023-10-18 06:00:00.580 P2</pre> <p>df2</p> <pre>Start Time Site 2023-10-17 16:00:00.190 SMR 2023-10-17 17:05:00.050 SMR 2023-10-17 19:10:00.550 SMR 2023-10-17 21:40:00.530 SMR 2023-10-17 22:21:00.180 SMR 2023-10-18 05:21:00.090 SMR 2023-10-18 09:15:00.360 SMR 2023-10-18 11:54:00.160 SMR</pre> <p>This should generate a new df with</p> <pre> dt Site 0 P2 0 P2 0 P2 30 min P2 1hr 11 min P2 ...</pre> <p>This dataset is <code>df1.shape[0] * df2.shape[0]</code> entries long.</p> <p>This is just for entry 1 of df1, where the first 3 entries in df2 are BEFORE df1[1] so I get 0, and then I have the delta time going forward. Again, I can do this row by row, but I'm sure there's a pandas way to do this quickly.</p> <p>As an added step, I'd like to find just the minimum delta time (including the zero) for each entry in df1 against ALL entries in df2... this would be another resulting df.</p> <p>Thx,</p>
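A broadcasting-based sketch of one possible approach (the two-row frames below are hypothetical stand-ins for the data above, trimmed so the arithmetic is easy to follow):

```python
import numpy as np
import pandas as pd

# Hypothetical two-row stand-ins for df1 and df2
df1 = pd.DataFrame({"Stop Time": pd.to_datetime(
    ["2023-10-17 20:10:00", "2023-10-17 21:20:00"]), "Site": ["P2", "P1"]})
df2 = pd.DataFrame({"Start Time": pd.to_datetime(
    ["2023-10-17 16:00:00", "2023-10-17 21:40:00"]), "Site": ["SMR", "SMR"]})

# (len(df1), 1) minus (len(df2),) broadcasts to a (len(df1), len(df2)) matrix
delta = df2["Start Time"].values - df1["Stop Time"].values[:, None]
delta = np.maximum(delta, np.timedelta64(0, "ns"))  # start before stop -> 0

# Long-form frame: one row per (df1 row, df2 row) pair
pairs = pd.DataFrame({
    "dt": delta.ravel(),
    "Site": np.repeat(df1["Site"].values, len(df2)),
})

# Minimum delta (zeros included) for each df1 entry against ALL df2 entries
min_dt = pd.DataFrame({"dt": delta.min(axis=1), "Site": df1["Site"]})
```

The whole pairwise matrix is materialized at once, so this stays fast as long as `len(df1) * len(df2)` timedeltas fit in memory; the per-row minimum falls out of the same matrix with `min(axis=1)`.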
<python><pandas>
2023-10-26 16:32:27
2
533
earnric
77,368,813
13,944,524
Why joinedload() is the better choice than selectinload() in many to one relationship in SQLAlchemy
<p>I know how <a href="https://docs.sqlalchemy.org/en/20/tutorial/orm_related_objects.html#selectin-load" rel="noreferrer"><code>selectinload()</code></a> and <a href="https://docs.sqlalchemy.org/en/20/tutorial/orm_related_objects.html#joined-load" rel="noreferrer"><code>joinedload()</code></a> work. And I also know where to choose which. Based on the <a href="https://docs.sqlalchemy.org/en/20/orm/queryguide/relationships.html#what-kind-of-loading-to-use" rel="noreferrer">documentation</a>:</p> <blockquote> <p><strong>One to Many / Many to Many Collection</strong> - The selectinload() is generally the best loading strategy to use. It emits an additional SELECT that uses as few tables as possible, leaving the original statement unaffected, and is most flexible for any kind of originating query. Its only major limitation is when using a table with composite primary keys on a backend that does not support “tuple IN”, which currently includes SQL Server and very old SQLite versions; all other included backends support it.</p> <p><strong>Many to One</strong> - The joinedload() strategy is the most general purpose strategy. In special cases, the immediateload() strategy may also be useful, if there are a very small number of potential related values, as this strategy will fetch the object from the local Session without emitting any SQL if the related object is already present.</p> </blockquote> <p>Also <a href="https://docs.sqlalchemy.org/en/20/tutorial/orm_related_objects.html#joined-load" rel="noreferrer">here</a>:</p> <blockquote> <p>The joinedload() strategy is best suited towards loading related many-to-one objects, as this only requires that additional columns are added to a primary entity row that would be fetched in any case.</p> </blockquote> <p>Yes, based on their behavior, <code>joinedload</code> is in better in &quot;many-to-one&quot; relation ship as opposed to &quot;one-to-many&quot; relationship. 
That's because the primary rows won't get multiplied in the former. But the other table's rows still get multiplied. In a many-to-one relationship, the result set won't grow vertically but horizontally.</p> <p>What I'm trying to say is, I think even in a many-to-one relationship, <code>selectinload()</code> would fetch less data.</p> <p>Am I misunderstanding the concept? Are there other factors that make &quot;joinedload&quot; a better choice than &quot;selectinload&quot; in a many-to-one relationship?</p>
<python><database><sqlalchemy><orm>
2023-10-26 16:31:02
1
17,004
S.B
77,368,601
5,344,058
Combination of @statsd.timed and @swag_from decorators causes FileNotFoundError
<p>I have a working Flask endpoint decorated with <code>flasgger.swag_from</code> decorator:</p> <pre class="lang-py prettyprint-override"><code>from flasgger import swag_from @app.route(&quot;/jobs&quot;, methods=[&quot;GET&quot;]) @swag_from('swagger/get_jobs.yml') def get_jobs(request_id): # ... </code></pre> <p>This works as expected.</p> <p>I'm using DataDog and would like to <em>time</em> every call to this endpoint. Based on <a href="https://docs.datadoghq.com/metrics/custom_metrics/dogstatsd_metrics_submission/#code-examples-4" rel="nofollow noreferrer">this documentation</a>, this can easily be achieved by adding the <code>timed</code> decorator:</p> <pre class="lang-py prettyprint-override"><code>from flasgger import swag_from from datadog import statsd @app.route(&quot;/jobs&quot;, methods=[&quot;GET&quot;]) @swag_from('swagger/get_jobs.yml') @statsd.timed(&quot;api_call.duration&quot;) def get_jobs(request_id): # ... </code></pre> <p>However - once I do that, I get an error when trying to render the Swagger page:</p> <p><a href="https://i.sstatic.net/APRnb.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/APRnb.png" alt="enter image description here" /></a></p> <p>The log shows the error is a <code>FileNotFoundError</code> caused by Swagger looking for the yml file in the wrong location:</p> <blockquote> <p>FileNotFoundError: [Errno 2] No such file or directory: '/usr/local/lib/python3.7/site-packages/datadog/dogstatsd/swagger/backup_sources.yml'</p> </blockquote> <p>Based on <a href="https://stackoverflow.com/questions/20943169/flask-assets-searching-in-the-wrong-directory">this</a> post, this error sometimes happens when Flask's &quot;static_folder&quot; is configured incorrectly. I've tried changing it, using <code>Flask(__name__, static_folder='./')</code>, but the same issue persists.</p>
<python><flask><datadog><flasgger>
2023-10-26 15:59:15
1
37,882
Tzach Zohar
77,368,577
21,612,376
Convert an .xlsb file to .xlsx
<p>Suppose I have <code>file.xlsb</code>. I would like to convert it to <code>file.xlsx</code> programmatically using R. How can I do this?</p> <p>I have tried using both <code>openxlsx::read.xlsx()</code> and <code>readxl::read_excel()</code> to read the files (to then later write to <code>.xlsx</code>), but both are unable to read this file extension. The error message in <code>readxl::read_excel()</code> is a little cryptic, but I assume it's because of the <code>.xlsb</code> file extension.</p> <pre class="lang-r prettyprint-override"><code>openxlsx::read.xlsx('file.xlsb') #&gt; Error: openxlsx can only read .xlsx or .xlsm files readxl::read_excel('file.xlsb') #&gt; Error: expected &lt; </code></pre> <p>I've seen a way to do it <a href="https://pypi.org/project/xlsb2xlsx/" rel="nofollow noreferrer">using python</a>, which I imagine can be used with <code>reticulate</code>, but I am not familiar enough with python to know how to use that.</p>
<python><r>
2023-10-26 15:56:14
2
833
joshbrows
77,368,387
4,004,541
Understanding OpenCV projectPoints: reference points and origins
<p>I have been battling with projectPoints for days without understanding the basic reference point of the function. I find it hard to find the world coordinates and the reference points of all the inputs.</p> <p>Here is a small example.</p> <pre><code> import numpy as np import cv2 as cv camera_matrix = np.array([ [1062.39, 0., 943.93], [0., 1062.66, 560.88], [0., 0., 1.] ]) points_3d = xyz_to_np_point(10, 0, 0) rvec = np.zeros((3, 1), np.float32) tvec = np.zeros((3, 1), np.float32) dist_coeffs = np.zeros((5, 1), np.float32) points_2d, _ = cv.projectPoints(points_3d, rvec, tvec, camera_matrix, dist_coeffs) print(points_2d) </code></pre> <p>The camera has no rotation, therefore rvec=(0,0,0), and we take the camera position as the origin of our world, making tvec=(0,0,0). The object whose 3D-to-2D projection we want to calculate is then positioned in front of the camera at 10 units.</p> <p>Here is an illustration:</p> <p><a href="https://i.sstatic.net/YM4dg.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/YM4dg.png" alt="enter image description here" /></a></p> <p>The output of the code is then (11567.83 560.88) and not (0,0) as I would expect.</p> <p>A longer explanation. I am trying to project the location of ships in my image. I have the GPS location of both my camera and the ships. I transform the GPS coordinates into a plane by taking the distance on the X axis (X points east) and the distance on the Y axis (Y points north). Since the ships are located at sea level, I take the 3D point to be projected as (X, Y, 0).</p> <p>For the extrinsic parameters of the camera, I assume the camera is again the reference world and the tvec only considers the height of the camera over the sea level (tvec = (0,0,4)). As the rotation, I have an absolute IMU so I can calculate the rvec around my X axis (for simplicity the camera is parallel to the XY plane).</p> <p>I have done a camera calibration and obtained my camera matrix and my distortion coefficients.
I am not sure how to test the camera matrix, but I see that when I undistort my images with the distortion coefficients, curved lines become straight and the distortion is removed.</p> <p>Here is the code of my problem with some examples.</p> <pre><code>import numpy as np import cv2 as cv from scipy.spatial.transform import Rotation as R from haversine import haversine, Unit def distance_between_coordinates(c1, c2): x = haversine(c1, (c1[0], c2[1]), Unit.METERS) if c1[1] &gt; c2[1]: x = -x y = haversine(c1, (c2[0], c1[1]), Unit.METERS) if c1[1] &gt; c2[1]: y = -y dist = haversine(c1, c2, Unit.METERS) return dist, x, y def rotvec_from_euler(orientation): r = R.from_euler('xyz', [orientation[0], orientation[1], orientation[2]], degrees=True) return r.as_rotvec() if __name__ == '__main__': camera_matrix = np.array([ [1062.39, 0., 943.93], [0., 1062.66, 560.88], [0., 0., 1.] ]) dist_coeffs = np.array([-0.33520254, 0.14872426, 0.00057997, -0.00053154, -0.03600385]) camera_p = (37.4543785, 126.59113666666666) ship_p = (37.448312, 126.5781) # Other ships near to the previous one. # ship_p = (37.450693, 126.577617) # ship_p = (37.4509, 126.58565) # ship_p = (37.448635, 126.578202) camera_orientation = (206.6925, 0, 0) # Euler orientation. rvec = rotvec_from_euler(camera_orientation) tvec = np.zeros((3, 1), np.float32) _, x, y = distance_between_coordinates(camera_p, ship_p) points_3d = np.array([[[x, y, 0]]], np.float32) points_2d, _ = cv.projectPoints(points_3d, rvec, tvec, camera_matrix, dist_coeffs) print(points_2d) </code></pre> <p>I have some more coordinates of nearby ships in the same direction that should hit close to the center of the camera. If you try the other ones as well, the prediction from projectPoints changes drastically.</p> <p>For clarity, I added an illustration of the coordinate systems of the second code block.</p> <p><a href="https://i.sstatic.net/kd0aj.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/kd0aj.png" alt="enter image description here" /></a></p>
<python><opencv><computer-vision><camera-calibration><camera-projection>
2023-10-26 15:29:56
1
360
Patrick Vibild
77,368,186
2,600,531
Python Multiprocessing - detect if PID is alive
<p>I need to detect if the PID of a grandchild process is alive. Because the multiprocessing.Process() object <a href="https://stackoverflow.com/questions/29007619/python-typeerror-pickling-an-authenticationstring-object-is-disallowed-for-sec">is non-pickleable (for security reasons)</a>, I can't pass it up the process hierarchy and call <code>process.is_alive()</code>, so I need to pass the PID and use that instead.</p> <p>The following code successfully identifies that the PID is active; however, when I manually kill the process (in Ubuntu, using <code>kill</code> in bash), it erroneously continues to detect it as active. <code>process.is_alive()</code>, on the other hand, works correctly. How can I reliably detect PID state using only the PID?</p> <pre><code>import os import time from multiprocessing import Process def pid_is_alive(pid): try: os.kill(pid, 0) # Send signal 0 to check if the process exists. except ProcessLookupError: return False return True def worker(): print(&quot;worker started&quot;) print(os.getpid()) while True: time.sleep(1) print(&quot;worker running&quot;) p = Process(target=worker) p.start() while True: time.sleep(1) # Check if the PID is alive if pid_is_alive(p.pid): print(f'PID {p.pid} is alive.') else: print(f'PID {p.pid} is not alive.') # if p.is_alive(): # print(&quot;Process is alive&quot;) # else: # print(&quot;Process is not alive&quot;) </code></pre>
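One likely culprit: after the external kill, the child becomes a zombie until its parent reaps it, and signal 0 happily reaches zombies, so `os.kill(pid, 0)` keeps succeeding (`process.is_alive()` works because it reaps internally). For a direct child you can attempt a non-blocking reap first; for an arbitrary grandchild PID that is not your own child, checking `psutil.Process(pid).status()` against `psutil.STATUS_ZOMBIE` is the usual route. A POSIX-only sketch of the reaping variant (using `subprocess` instead of the multiprocessing setup above, and deliberately bypassing Popen's own bookkeeping for the demonstration):

```python
import os
import signal
import subprocess
import time

def pid_is_alive(pid):
    """True if PID exists and is not an unreaped zombie child of ours."""
    try:
        reaped, _ = os.waitpid(pid, os.WNOHANG)  # non-blocking reap attempt
        if reaped == pid:          # child exited; its zombie is now cleaned up
            return False
    except ChildProcessError:      # pid is not our child (or already reaped)
        pass
    try:
        os.kill(pid, 0)            # signal 0: existence check, nothing is sent
    except ProcessLookupError:
        return False
    return True

child = subprocess.Popen(["sleep", "60"])
alive_before = pid_is_alive(child.pid)   # process is running

child.send_signal(signal.SIGTERM)        # simulate the external `kill`
time.sleep(0.5)                          # let the child die and turn zombie
# A bare os.kill(pid, 0) would still succeed here; the reaping check doesn't:
alive_after = pid_is_alive(child.pid)
```

The second call reports `False` because `waitpid` collects the zombie before the signal-0 probe runs.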
<python><multiprocessing><pid>
2023-10-26 15:05:53
1
944
davegravy
77,368,184
850,781
Non-uniform domain moving average using sparse matrices
<p><a href="https://stackoverflow.com/q/75981262/850781">Moving average with non-uniform domain</a> works fine:</p> <pre><code>import numpy as np import pandas as pd import matplotlib.pyplot as plt def convolve(s, f): &quot;&quot;&quot;Compute the convolution of series S with a universal function F (see https://numpy.org/doc/stable/reference/ufuncs.html). This amounts to a moving average of S with weights F based on S.index.&quot;&quot;&quot; index_v = s.index.values weight_mx = f(index_v - index_v[:, np.newaxis]) weighted_sum = np.sum(s.values[:, np.newaxis] * weight_mx, axis=0) normalization = np.sum(weight_mx, axis=0) return pd.Series(weighted_sum/normalization, index=s.index) size = 1000 df = pd.DataFrame({&quot;x&quot;:np.random.normal(size=size)}, index=np.random.exponential(size=size)).sort_index() def f(x): return np.exp(-x*x*30) df[&quot;avg&quot;] = convolve(df.x, f) plt.scatter(df.index, df.avg, s=1, label=&quot;average&quot;) plt.scatter(df.index, df.x, s=1, label=&quot;random&quot;) plt.title(&quot;Moving Average for random data&quot;) plt.legend() </code></pre> <p><a href="https://i.sstatic.net/ff7Wn.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/ff7Wn.png" alt="Moving Average for random data" /></a></p> <p><em><strong>However</strong></em>, this allocates a <code>O(size^3)</code> array:</p> <blockquote> <p>MemoryError: Unable to allocate 22.0 TiB for an array with shape (14454, 14454, 14454) and data type float64</p> </blockquote> <h1>Is it possible to &quot;<em>sparsify</em>&quot; the function <code>convolve</code>?</h1> <p>Specifically, <code>f</code> normally returns non-0 for a fairly narrow band of values.</p>
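Since `f` here is effectively zero once `|dx|` exceeds a small bandwidth, and the index is sorted, one way to "sparsify" is to skip the dense weight matrix entirely: for each point, `np.searchsorted` finds the neighbours inside the band, giving O(n·k) work and O(n) memory. A sketch under that assumption (the bandwidth value is a tuning choice, not part of the original code), checked against the dense version on a small series:

```python
import numpy as np
import pandas as pd

def convolve_banded(s, f, bandwidth):
    """Moving average of s weighted by f, ignoring neighbours farther than
    `bandwidth` in index distance. Assumes s.index is sorted ascending."""
    idx = s.index.values
    vals = s.values
    # For each point, the half-open window of neighbours inside the band
    lo = np.searchsorted(idx, idx - bandwidth, side="left")
    hi = np.searchsorted(idx, idx + bandwidth, side="right")
    out = np.empty(len(s))
    for i in range(len(s)):          # O(n * k) instead of O(n^2) memory
        window = slice(lo[i], hi[i])
        w = f(idx[i] - idx[window])
        out[i] = np.dot(vals[window], w) / w.sum()
    return pd.Series(out, index=s.index)

def f(x):
    return np.exp(-x * x * 30)

rng = np.random.default_rng(0)
size = 200
s = pd.Series(rng.normal(size=size),
              index=np.sort(rng.exponential(size=size)))

# Dense reference, as in the original convolve()
weight_mx = f(s.index.values - s.index.values[:, np.newaxis])
dense = (np.sum(s.values[:, np.newaxis] * weight_mx, axis=0)
         / np.sum(weight_mx, axis=0))

banded = convolve_banded(s, f, bandwidth=1.0)  # f(1.0) ~ 1e-13: negligible
```

The truncation error is bounded by the largest discarded weight times the number of discarded points, so for this `f` a bandwidth of 1.0 already reproduces the dense result to well below float tolerance.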
<python><numpy><sparse-matrix><convolution><rolling-computation>
2023-10-26 15:05:39
1
60,468
sds
77,368,089
6,524,169
SandBoxing Python : exec with __builtins__ set as None, Accessing a method on datetime leads to a KeyError exception
<p>I was trying to limit random code execution by doing the below in Py 3.11.4 on M1 chip -:</p> <pre class="lang-py prettyprint-override"><code>import datetime import io import traceback from contextlib import redirect_stderr, redirect_stdout from typing import Dict SAFE_BUILTINS = ['abs', 'all', 'any', 'chr', 'dict', 'divmod', 'enumerate', 'filter', 'float', 'hasattr', 'int', 'isinstance', 'len', 'list', 'map', 'max', 'min', 'object', 'pow', 'print', 'range', 'reversed', 'round', 'set', 'slice', 'sorted', 'str', 'tuple', 'type', 'zip'] def _get_restricted_locals() -&gt; Dict[str, object]: _locals: Dict[str, object] = {&quot;datetime&quot;: datetime, &quot;timedelta&quot;: datetime.timedelta} safe_builtins = { k: __builtins__.__dict__.get(k) for k in SAFE_BUILTINS # type: ignore } _locals = {**_locals, **safe_builtins} return _locals def execute_python(code: str): out = io.StringIO() err = io.StringIO() _locals = _get_restricted_locals() try: with redirect_stdout(out), redirect_stderr(err): exec( code, {&quot;__builtins__&quot;: None}, _locals, ) except SyntaxError as e: print(&quot;SYNTAX ERROR&quot;, e) tbk = traceback.format_exc() print(tbk) except Exception as e: print(&quot;RUNTIME ERROR&quot;, e) tbk = traceback.format_exc() print(tbk) finally: stdout = out.getvalue() stderr = err.getvalue() return { &quot;stdout&quot;: stdout, &quot;stderr&quot;: stderr, } # Usage # The below doesn't work because of `.strftime(&quot;%Y-%m-%d&quot;)`. If we remove the same, then the snippet executes. 
_code: str = &quot;&quot;&quot; begin_date = (datetime.datetime.now() - timedelta(days=7)).strftime(&quot;%Y-%m-%d&quot;) end_date = str(datetime.datetime.now().date()) print(begin_date, end_date) &quot;&quot;&quot; # Will fail (comment out the `.strftime` for a successful execution) print(execute_python(code=_code)) </code></pre> <ul> <li><p>How can we make changes to the above snippet to make it work with <code>strftime</code>?</p> </li> <li><p>Would like to know and understand more as to what happens when we do <code>object.method()</code> on Python? -&gt; It tries to access the <code>__dict__</code> and <code>getattr</code>. What next?</p> </li> </ul> <p>I tried few ideas but not sure. Looks like because we're adjusting the <a href="https://docs.python.org/3/library/builtins.html#:%7E:text=As%20an%20implementation%20detail%2C%20most,by%20alternate%20implementations%20of%20Python." rel="nofollow noreferrer">builtins</a>, it removes the <code>__import__</code> and lot of other things critical for Python to work.</p> <p>Here's the <strong>traceback</strong></p> <pre class="lang-py prettyprint-override"><code>RUNTIME ERROR '__import__' Traceback (most recent call last): File &quot;testing_exec.py&quot;, line 279, in execute_python exec( File &quot;&lt;string&gt;&quot;, line 2, in &lt;module&gt; KeyError: '__import__' </code></pre>
<python><python-3.x>
2023-10-26 14:52:51
1
2,580
Aditya
77,367,935
2,724,080
Celery Chains: Do chains work if there is a one-to-many relationship with the functions?
<p>Chain documentation: <a href="http://docs.celeryproject.org/en/latest/userguide/tasks.html#avoid-launching-synchronous-subtasks" rel="nofollow noreferrer">http://docs.celeryproject.org/en/latest/userguide/tasks.html#avoid-launching-synchronous-subtasks</a></p> <p>Example from another thread: <a href="https://stackoverflow.com/questions/49553620/does-a-celery-chain-execute-tasks-in-a-specific-order">Does a celery chain execute tasks in a specific order?</a></p> <pre><code>@shared_task def task_a(): pass @shared_task def task_b(): pass @shared_task def task_b(): pass @shared_task def task_main(): chain = task_a.s() | task_b.s() | task_c.s() chain() </code></pre> <p>Would this chain structure still work if I have a one to many relationship in Task A?</p> <p>I.E.</p> <ul> <li>Task A is a function that searches for files</li> <li>Task B is a function that needs to be run on each file</li> <li>Task C is a function that is only run if Task B returns a positive result</li> </ul>
<python><celery>
2023-10-26 14:34:37
1
393
Chase Westlye
77,367,782
18,159,603
music21 seems to modify a MIDI file when reading/rewriting it
<p>I've got a MIDI file from the <a href="https://magenta.tensorflow.org/datasets/groove" rel="nofollow noreferrer">Groove MIDI dataset</a>. Here is an example: <a href="https://drive.google.com/file/d/1xcWT8Uodgh1TG9QxZfMUYul1bUAu54C4/view?usp=drive_link" rel="nofollow noreferrer">groove/drummer1/eval_session/1_funk-groove1_138_beat_4-4.mid</a>.</p> <p>I'm using music21 to manipulate the MIDI file, but it seems that it does not read/write the file properly. Here is a quick experiment to show that:</p> <pre class="lang-py prettyprint-override"><code>from pathlib import Path import music21 file_path = Path(&quot;groove/drummer1/eval_session/1_funk-groove1_138_beat_4-4.mid&quot;) stream = music21.converter.parse(file_path, quantizePost=False) stream.write(&quot;midi&quot;, fp=&quot;/tmp/test_write.mid&quot;) </code></pre> <p>So I am basically reading the file from disk and then immediately rewriting it. But when I open the two files in a MIDI editor, they are different: <a href="https://i.sstatic.net/ltQvR.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/ltQvR.png" alt="Two pianorolls side by side" /></a></p> <p>Am I doing something wrong? The files I'm dealing with are drum tracks; maybe there is something different to do in that case?</p> <p>Versions:</p> <ul> <li>Python: 3.11.5</li> <li>Music21: 9.1.0</li> </ul>
<python><midi><music21>
2023-10-26 14:15:35
0
1,036
leleogere
77,367,745
774,575
Numpy and linear algebra: How to code Axꞏy?
<p>I have some difficulty matching what NumPy expects, in terms of shapes, when performing the dot product with the vector representation used in linear algebra.</p> <p>Let's say I have a matrix and two column vectors represented by NumPy arrays:</p> <pre><code>import numpy as np A = np.array([[1,2], [3,1], [-5,2]]) x = np.array([[0], [2]]) y = np.array([[-2], [0], [3]]) </code></pre> <p>and I want to compute <code>Axꞏy</code>, <code>Ax</code> being a matrix multiplication and <code>ꞏ</code> being the dot product. This doesn't work:</p> <pre><code># a = Axꞏy a = (A @ x).dot(y) </code></pre> <blockquote> <p>shapes (3,1) and (3,1) not aligned: 1 (dim 1) != 3 (dim 0)</p> </blockquote> <p><code>Ax</code>:</p> <pre><code>[[4], [2], [4]] </code></pre> <p>is indeed a column vector, and the dot product of the two column vectors is the scalar <code>-8+0+12=4</code>, which is the result I'm expecting.</p> <p>What is the correct operation or reshaping required, using the vectors as they are defined?</p>
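One way to see the fix: `Ax` and `y` are both 2-D arrays of shape (3, 1), and `.dot` needs the inner dimensions to match, so either take the matrix product of the transpose, xᵀy, or flatten both columns to 1-D vectors first. A sketch using the arrays above:

```python
import numpy as np

A = np.array([[1, 2], [3, 1], [-5, 2]])
x = np.array([[0], [2]])
y = np.array([[-2], [0], [3]])

Ax = A @ x                   # shape (3, 1): still a column vector

a1 = (Ax.T @ y).item()       # (1, 3) @ (3, 1) -> (1, 1); .item() -> scalar
a2 = Ax.ravel() @ y.ravel()  # flatten both columns, then use the 1-D dot
a3 = np.vdot(Ax, y)          # vdot flattens its arguments for you
```

All three return the expected scalar 4; the first stays strictly inside matrix algebra (it is the usual xᵀy formulation), while the other two drop to 1-D vectors where `dot` is the ordinary inner product.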
<python><numpy><linear-algebra>
2023-10-26 14:10:12
1
7,768
mins
77,367,509
8,318,946
Django Rest Framework - allow other application to log in
<p>I want to share the <code>api/user/profile</code> API endpoint with other developers so they can take user data and process it in their application. I am struggling to write a Python script that successfully logs in to DRF, because I am constantly getting <code>{&quot;errors&quot;: {&quot;detail&quot;: &quot;Authentication credentials were not provided.&quot;}}</code>. I usually use the DRF browsable API to log in, and in React I use JWT to handle user authorization, but in this case the user must authorize just to get to the <code>user/profile</code> API endpoint, and I am not sure how to achieve that using Python.</p> <p>Please help me figure out what I am doing wrong in my script.</p> <pre><code>import requests import re def get_csrf_token(response_content): # Use a regular expression to extract the CSRF token from the response content match = re.search(r'csrfmiddlewaretoken&quot; value=&quot;(.+?)&quot;', response_content) if match: return match.group(1) return None def login_and_get_user_data(email, password, api_base_url, referer_url): # Create a session to handle cookies and maintain state session = requests.Session() # Step 1: Get the login page to obtain the CSRF token login_url = f&quot;{api_base_url}api-auth/login/?next=/api/users/profile/&quot; print(f&quot;Login URL: {login_url}&quot;) login_page = session.get(login_url, headers={'Referer': referer_url}) # Extract the CSRF token from the response content csrf_token = get_csrf_token(login_page.text) print(f&quot;CSRF token: {csrf_token}&quot;) if csrf_token: # Step 2: Define the login payload with the CSRF token login_payload = { &quot;email&quot;: email, &quot;password&quot;: password, &quot;csrfmiddlewaretoken&quot;: csrf_token } print(f&quot;Login Payload: {login_payload}&quot;) # Step 3: Authenticate and obtain the token response = session.post(login_url, data=login_payload, headers={'Referer': referer_url}) print(f&quot;Login Response: {response.content}&quot;) if response.status_code == 200:
user_profile_url = f&quot;{api_base_url}api/users/profile/&quot; user_profile_response = session.get(user_profile_url, headers={'Referer': referer_url}) print(f&quot;User Profile Data: {user_profile_response.content}&quot;) else: print(&quot;Login was not successful.&quot;) else: print(&quot;CSRF token not found in the response content.&quot;) # Call the function with your email, password, and API base URL email = &quot;t.user@test.com&quot; password = &quot;testpass&quot; api_base_url = &quot;https://admin.myapp.com/&quot; referer_url = api_base_url login_and_get_user_data(email, password, api_base_url, referer_url) </code></pre>
<python><django-rest-framework>
2023-10-26 13:40:00
0
917
Adrian
77,367,482
20,122,390
Why can some Celery workers stop without any evidence of failure?
<p>I have an event-driven architecture in a part of my application that is implemented in Celery workers. It trains machine learning models, so the process is usually long-running (around 48 hours). So at a certain point there is communication between workers that invoke each other's tasks and do some things. The problem is that one of my workers stopped in the middle of the process out of nowhere... there is no error log; I was simply checking the Kubernetes pod logs, and the last thing was a simple:</p> <pre><code>sync with celery@thori-at-pronosticos-worker-forecast-model-training-deployhvhhl </code></pre> <p>Nothing more... no error log. (The worker still had pending tasks to do.) I'm very sorry that I can't give more context, but there really isn't any. Any idea why this could happen with Celery? Has this happened to anyone else?</p>
<python><kubernetes><microservices><celery>
2023-10-26 13:36:55
1
988
Diego L
77,367,468
6,494,707
Replacing the original pixel values with the predicted values by model: it returns True when comparing two tennsors?
<p>I have an original input value tensor <code>patches</code> to my model</p> <pre><code>patches.shape torch.Size([64, 1280, 10]) # batch: 64, number of patches: 1280, original pixel values of each patch: 10 </code></pre> <p>My model predicts the pixel values of specific patches that are masked, and the indices of the masked patches are saved in <code>masked_indices</code> with the shape <code>torch.Size([64, 896])</code>, which means 896 patches out of 1280 are predicted by the model.</p> <p>I want to replace the original pixel values of those 896 patches with the new values predicted by the model (<code>pred_pixel_values, shape: torch.Size([64, 896, 10])</code>). I did the following:</p> <pre><code># for indexing purposes batch_range = torch.arange(batch, device=device)[:, None] pa = patches pa[batch_range, masked_indices, :]= pred_pixel_values pa.shape torch.Size([64, 1280, 10]) </code></pre> <p>I wanted to compare whether the values are replaced or not:</p> <pre><code>torch.equal(pa,patches) </code></pre> <p>but it returns True. Where am I going wrong?</p>
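The comparison returns True because `pa = patches` copies no data: both names refer to the same tensor, so the in-place write through `pa` also updates `patches` (the values *were* replaced, in both). Cloning first (`pa = patches.clone()` in torch) preserves the original for comparison. The same aliasing is easy to demonstrate with NumPy, whose `.copy()` plays the role of torch's `.clone()`:

```python
import numpy as np

patches = np.arange(12, dtype=float).reshape(3, 4)

pa = patches                 # no copy: a second name for the same buffer
pa[0, :] = -1.0              # in-place write...
same_after_alias = np.array_equal(pa, patches)   # ...mutates patches too

patches = np.arange(12, dtype=float).reshape(3, 4)  # fresh data
pa = patches.copy()          # independent copy (torch: patches.clone())
pa[0, :] = -1.0
same_after_copy = np.array_equal(pa, patches)    # original left intact
```

In the aliased case the equality check is comparing a tensor with itself, which is exactly what `torch.equal(pa, patches)` was doing above.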
<python><python-3.x><deep-learning><pytorch><pytorch-lightning>
2023-10-26 13:34:44
2
2,236
S.EB
77,367,213
5,431,734
python multithreading. How to prevent multiple threads from running on the gpu
<p>I am using multithreading to read files and obtain some data, and then I process those data with a GPU-accelerated function. My system has only one GPU, hence I think I should allow only one thread to run on the GPU at any given time.</p> <p>My program looks as follows</p> <pre><code>from multiprocessing.dummy import Pool as ThreadPool def my_fun(): urls = [url_1, url_2,....,url_n] pool = ThreadPool(1) # &lt;--- Single thread! pool.map(read_and_process, urls) pool.close() pool.join() def read_and_process(url): data = read(url) # io bound process(data) # runs on gpu </code></pre> <p>As you can see, at the moment I am making a pool with only one thread. If I change that, the program crashes (I think for the reason outlined just above). That obviously makes the multithreading setup meaningless. A simple loop would achieve the same result in the same time.</p> <p>Is there a way to benefit from multithreading on my IO-bound operation and at the same time prevent multiple threads from accessing my GPU? A lock maybe?</p> <p>In more general terms, from a design perspective, does something like this make sense?</p> <p>I wouldn't like to remove the process() call from inside read_and_process. The data each url returns is more than 600MB and I have 1000s of these. Storing them in memory with the plan of iterating over them when all are parsed and sending them to the GPU function (outside the multithreading call) will not work, as they do not fit in my memory.</p> <p><strong>Edit</strong>: I have been reading this nice <a href="https://superfastpython.com/multiprocessing-pool-mutex-lock/" rel="nofollow noreferrer">post</a> and have tried to adapt it.
So far I have changed the code above to something like:</p> <pre><code>from multiprocessing.dummy import Pool as ThreadPool from multiprocessing.dummy import Lock def my_fun(): urls = [url_1, url_2,....,url_n] lock = Lock() items = [(d, lock) for d in urls] with ThreadPool(4) as pool: pool.starmap_async(read_and_process, items) def read_and_process(url, lock): data = read(url) # io bound with lock: process(data) # runs on gpu </code></pre> <p>which looks to work, but I am still not 100% sure. Could someone please review it and give me some feedback? It will be massively appreciated.</p>
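For reference, the lock-around-the-GPU pattern from the edit can be exercised end to end with stand-in functions. `read()` and `process()` below are hypothetical placeholders that only sleep, and a counter verifies that the simulated GPU section never runs in more than one thread at a time:

```python
import threading
import time
from multiprocessing.dummy import Pool as ThreadPool

gpu_lock = threading.Lock()   # serializes access to the (simulated) GPU
count_lock = threading.Lock() # protects the concurrency counters below
in_gpu = 0
max_in_gpu = 0

def read(url):
    time.sleep(0.01)          # simulated I/O wait; overlaps across threads
    return url

def process(data):
    global in_gpu, max_in_gpu
    with count_lock:
        in_gpu += 1
        max_in_gpu = max(max_in_gpu, in_gpu)
    time.sleep(0.01)          # simulated GPU work
    with count_lock:
        in_gpu -= 1

def read_and_process(url):
    data = read(url)          # many threads may be here at once
    with gpu_lock:            # but only one thread at a time gets the GPU
        process(data)

urls = [f"url_{i}" for i in range(8)]
with ThreadPool(4) as pool:
    pool.map(read_and_process, urls)   # map() blocks until all work is done

print(max_in_gpu)  # 1 -- the lock serialized the GPU section
```

One caveat with the edited version: `starmap_async` returns immediately, and leaving the `with ThreadPool(...)` block terminates the pool, so work can be cut short unless the returned `AsyncResult` is waited on (e.g. `result.get()`); the blocking `map`/`starmap` avoid that pitfall.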
<python><multithreading><multiprocessing>
2023-10-26 12:57:30
0
3,725
Aenaon
77,367,195
4,203,480
Simple flood fill algorithm in C to fill an array of integers
<p>After trying my luck at <a href="https://codereview.stackexchange.com/questions/287543/flood-fill-algorithm-in-fortran-90">implementing it in Fortran 90</a>, I turned to C. I'm not sure whether it's my <code>queue</code> implementation or the actual <code>flood fill algorithm</code>, but it doesn't work as expected and tends to fill the whole array, irrespective of any border I'm putting in its way. My goal is to create a <code>Python</code> extension.</p> <p>Below is the code, which is based on <code>scikit-image</code>'s implementation of this algorithm.<br /> The initial intent was to implement the <code>queue</code> as a dynamic array and borrow the code from <a href="https://rosettacode.org/wiki/Queue/Definition#Dynamic_array" rel="nofollow noreferrer">rosettacode</a>, but I couldn't make it work either, so I've tried to make it more like <code>scikit-image</code>'s <code>QueueWithHistory</code> implementation, although with minor adjustments.</p> <p><code>queue.h</code></p> <pre><code>#ifndef QUEUE_H #define QUEUE_H typedef size_t DATA; /* type of data to store in queue */ typedef struct { DATA *buf; size_t head; size_t tail; size_t alloc; } queue_t, *queue; extern queue init_queue(); extern void free_queue(queue q); extern void push_queue(queue q, DATA *n); extern unsigned char pop_queue(queue q, DATA *n); #endif /* QUEUE_H */ </code></pre> <p><code>queue.c</code></p> <pre><code>#include &lt;stdio.h&gt; #include &lt;stdlib.h&gt; #include &lt;string.h&gt; #include &quot;queue.h&quot; #define QUEUE_INIT_SIZE 64 #define QUEUE_THRESHOLD 512 static void _handle_memory_allocation_failure(const char *message, queue q) { fprintf(stderr, &quot;Error: Failed to (re)allocate memory for %s\n&quot;, message); free_queue(q); // Ensure proper cleanup exit(EXIT_FAILURE); } queue init_queue() { queue q = malloc(sizeof(queue_t)); if (!q) { _handle_memory_allocation_failure(&quot;queue structure&quot;, NULL); } q-&gt;buf = malloc(sizeof(DATA) * (q-&gt;alloc =
QUEUE_INIT_SIZE)); if (!q-&gt;buf) { _handle_memory_allocation_failure(&quot;queue buffer&quot;, q); } q-&gt;head = q-&gt;tail = 0; return q; } void free_queue(queue q) { if (!q) return; free(q-&gt;buf); free(q); } static void _grow_queue(queue q) { q-&gt;alloc *= 2; DATA *new_buf = realloc(q-&gt;buf, sizeof(DATA) * q-&gt;alloc); if (!new_buf) { _handle_memory_allocation_failure(&quot;growing queue buffer&quot;, q); } q-&gt;buf = new_buf; } void push_queue(queue q, DATA *n) { if (q-&gt;alloc &lt;= q-&gt;head) { _grow_queue(q); } q-&gt;buf[q-&gt;head++] = *n; } unsigned char pop_queue(queue q, DATA *n) { if (q-&gt;tail &lt; q-&gt;head) { *n = q-&gt;buf[q-&gt;tail++]; return 1; } return 0; } </code></pre> <p><code>floodfill.h</code></p> <pre><code>#ifndef FLOODFILL_H #define FLOODFILL_H // Function declarations int flood_fill(int* image, int* offsets, size_t n_offsets, size_t start_index, int new_value, size_t image_size); #endif /* FLOODFILL_H */ </code></pre> <p><code>floodfill.c</code></p> <pre><code>#include &lt;stdlib.h&gt; #include &lt;stdio.h&gt; #include &quot;floodfill.h&quot; #include &quot;queue.h&quot; // C implementation of floodfill algorithm int flood_fill(int* image, int* offsets, size_t n_offsets, size_t start_index, int new_value, size_t image_size) { size_t current_index, neighbor_index; // Get the seed value (image value at start location) int seed_value = image[start_index]; // Short circuit to avoid infinite loop if (seed_value == new_value) { return 0; } image[start_index] = new_value; // Initialize the queue and push start position queue q = init_queue(); push_queue(q, &amp;start_index); // Break loop if all queued positions were evaluated while (pop_queue(q, &amp;current_index)) { // Look at all neighbors for (int i = 0; i &lt; n_offsets; ++i) { neighbor_index = current_index + offsets[i]; if (neighbor_index &lt; image_size) { if (image[neighbor_index] == seed_value) { // Process neighbor and push to queue image[neighbor_index] = new_value; 
push_queue(q, &amp;neighbor_index); } } } } // Ensure memory is released free_queue(q); return 0; } </code></pre> <p>The above C code is wrapped in a Python module, below is the main function that calls the C code. It's called <code>flood_fill</code> in the module API.</p> <pre><code>#define RAVEL_INDEX(tuple, h) ( \ (int)PyLong_AsLong(PyTuple_GetItem(tuple, 0)) + \ h * (int)PyLong_AsLong(PyTuple_GetItem(tuple, 1)) \ ) #define N_OFFSETS 8 static PyObject *algorithms_flood_fill(PyObject *self, PyObject *args) { PyObject *image_obj; PyObject *start_obj; int seed_value; if (!PyArg_ParseTuple(args, &quot;O!O!i&quot;, &amp;PyArray_Type, &amp;image_obj, &amp;PyTuple_Type, &amp;start_obj, &amp;seed_value)) { PyErr_SetString(PyExc_TypeError, &quot;Error parsing input&quot;); return NULL; } PyArrayObject *image_array; image_array = (PyArrayObject*)PyArray_FROM_OTF(image_obj, NPY_INT, NPY_ARRAY_IN_FARRAY); if (image_array == NULL) { // Py_XDECREF(image_array); PyErr_SetString(PyExc_TypeError, &quot;Couldn't parse the input array&quot;); return NULL; } int h = (int)PyArray_DIM(image_array, 0); int w = (int)PyArray_DIM(image_array, 1); intptr_t start_index = (intptr_t)RAVEL_INDEX(start_obj, h); // it should be safe to use PyArray_DATA as we ensured aligned contiguous Fortran array int *image = (int *)PyArray_DATA(image_array); size_t image_size = h * w; int offsets[N_OFFSETS] = {-(h+1), -h, -(h-1), -1, +1, h-1, h, h+1}; flood_fill(image, offsets, N_OFFSETS, start_index, seed_value, image_size); Py_DECREF(image_array); Py_RETURN_NONE; } </code></pre> <p>If I give it a 9x9 array of integers initialized to 0 with a row of 1 in the middle (row index 4), <code>int offsets[] = {-10, -9, -8, -1, 1, 8, 9, 10}</code>, <code>size_t n_offsets = 8</code> <code>size_t start_index = 0</code>, <code>int new_value = 3</code>, <code>size_t image_size = 81</code>, all the 0 in the array get replaced by 3, which isn't the intended behavior.</p> <pre><code>import numpy as np from mymodule import 
flood_fill a = np.zeros((9, 9), order='F', dtype=np.int32) a[4, :] = 1 flood_fill(a, (0, 0), 3) # output : # array([[3, 3, 3, 3, 3, 3, 3, 3, 3], # [3, 3, 3, 3, 3, 3, 3, 3, 3], # [3, 3, 3, 3, 3, 3, 3, 3, 3], # [3, 3, 3, 3, 3, 3, 3, 3, 3], # [1, 1, 1, 1, 1, 1, 1, 1, 1], # [3, 3, 3, 3, 3, 3, 3, 3, 3], # [3, 3, 3, 3, 3, 3, 3, 3, 3], # [3, 3, 3, 3, 3, 3, 3, 3, 3], # [3, 3, 3, 3, 3, 3, 3, 3, 3]]) </code></pre> <p>Is there something wrong with either the <code>queue</code> implementation or the <code>flood fill algorithm</code> logic itself, or maybe the way it is called by the Python wrapper, that could cause this problem?</p>
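One likely culprit, sketched in Python below: with flat indices, the single `neighbor_index < image_size` test cannot reject a step like `-1` taken from the top of a column, which silently wraps to the bottom of the previous column and lets the fill leak around the barrier row. Tracking explicit `(row, col)` pairs (or checking the column of each flat neighbor) rules that out — a hedged, minimal sketch of the bounds-checked version:

```python
from collections import deque

def flood_fill(image, start, new_value, connectivity=8):
    """Bounds-checked BFS flood fill on a list-of-lists image."""
    h, w = len(image), len(image[0])
    r0, c0 = start
    seed = image[r0][c0]
    if seed == new_value:          # short-circuit to avoid an infinite loop
        return image
    if connectivity == 8:
        steps = [(-1, -1), (-1, 0), (-1, 1), (0, -1),
                 (0, 1), (1, -1), (1, 0), (1, 1)]
    else:
        steps = [(-1, 0), (1, 0), (0, -1), (0, 1)]
    image[r0][c0] = new_value
    queue = deque([(r0, c0)])
    while queue:
        r, c = queue.popleft()
        for dr, dc in steps:
            nr, nc = r + dr, c + dc
            # Both coordinates are range-checked, so a step can never wrap
            # from the edge of one row/column into the next.
            if 0 <= nr < h and 0 <= nc < w and image[nr][nc] == seed:
                image[nr][nc] = new_value
                queue.append((nr, nc))
    return image

grid = [[0] * 9 for _ in range(9)]
grid[4] = [1] * 9                  # barrier row, as in the question
flood_fill(grid, (0, 0), 3)
print(grid[0][0], grid[4][0], grid[8][8])  # 3 1 0 -- the fill stops at the barrier
```

In the C version the equivalent fix would be to derive each neighbor's row/column from its flat index and reject steps that cross a row or column boundary, rather than relying on the flat-index range check alone.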
<python><c><algorithm><queue><flood-fill>
2023-10-26 12:54:07
1
1,180
YeO
77,367,185
5,961,077
My headless browser works on its own, but fail when used as a FastAPI endpoint
<p>I coded a class to use as a headless browser for my FastAPI app. The problem is the class works just fine when I use it directly, but it fails when used in a FastAPI endpoint.</p> <p>Here's the class</p> <pre class="lang-py prettyprint-override"><code>from playwright.async_api import async_playwright import logging logging.basicConfig(level=logging.INFO) logger = logging.getLogger(__name__) class CustomBrowser: browser = None # Class-level attribute for Playwright browser instance playwright = None # Class-level attribute for Playwright process @classmethod async def initialize_browser(cls): &quot;&quot;&quot; Initialize the Playwright browser instance asynchronously. Example: await CustomBrowser.initialize_browser() &quot;&quot;&quot; try: if cls.browser is None: if cls.playwright is None: cls.playwright = await async_playwright().__aenter__() # Initialize Playwright process cls.browser = await cls.playwright.chromium.launch(headless=True) logger.info('Browser successfully initialized.') except Exception as e: logger.error(f'Failed to initialize browser: {e}') async def get_page_content(self, url): &quot;&quot;&quot; Browse to the given URL and return the page content. Args: url (str): URL to browse. Returns: str: Page content as HTML. Example: content = await instance.get_page_content(&quot;https://example.com&quot;) &quot;&quot;&quot; if self.browser is None: raise RuntimeError(&quot;Browser not initialized. Call initialize_browser first.&quot;) context = await self.browser.new_context() page = await context.new_page() await page.goto(url) content = await page.content() await context.close() return content @classmethod async def setup_and_get_content(cls, url): &quot;&quot;&quot; Initialize the browser (if not already) and get content from the given URL. Args: url (str): URL to browse. Returns: str: Page content as HTML. 
Example: content = await CustomBrowser.setup_and_get_content(&quot;https://example.com&quot;) &quot;&quot;&quot; await cls.initialize_browser() # Initialize the browser instance = cls() # Create an instance return await instance.get_page_content(url) # Get and return the content </code></pre> <p>It works when used like this:</p> <pre class="lang-py prettyprint-override"><code>async def main(): content = await CustomBrowser.setup_and_get_content(&quot;https://example.com&quot;) print(content) asyncio.run(main()) </code></pre> <p>But it fails when I use it in a FastAPI endpoint:</p> <pre class="lang-py prettyprint-override"><code>from fastapi import FastAPI app = FastAPI() @app.get(&quot;/parse/&quot;) async def read_item(): content = await CustomBrowser.setup_and_get_content('https://example.com') return content </code></pre> <p>Here's the error</p> <pre><code> raise RuntimeError(&quot;Browser not initialized. Call initialize_browser first.&quot;) RuntimeError: Browser not initialized. Call initialize_browser first. </code></pre>
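One thing worth checking before anything else: `initialize_browser` catches every exception and only logs it, so if the launch fails inside the server process (Playwright launches often fail under a different runtime environment), the only symptom is the secondary `RuntimeError` and the real cause never surfaces. A small stdlib sketch of that failure mode — the `OSError` here is a simulated, hypothetical launch failure:

```python
import logging

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger(__name__)

class SwallowingInit:
    """Stand-in for the browser class: the except block logs the real
    failure and then carries on, so callers only ever see the misleading
    'not initialized' error."""
    resource = None

    @classmethod
    def initialize(cls):
        try:
            raise OSError("launch failed")   # simulated launch failure
        except Exception as e:
            logger.error(f"Failed to initialize: {e}")   # swallowed!

    @classmethod
    def use(cls):
        cls.initialize()
        if cls.resource is None:
            raise RuntimeError("Browser not initialized. Call initialize_browser first.")

class RaisingInit(SwallowingInit):
    @classmethod
    def initialize(cls):
        try:
            raise OSError("launch failed")
        except Exception as e:
            logger.error(f"Failed to initialize: {e}")
            raise                            # re-raise so the real error surfaces

try:
    SwallowingInit.use()
except RuntimeError as e:
    print(e)        # the misleading secondary error

try:
    RaisingInit.use()
except OSError as e:
    print(e)        # "launch failed" -- the actual cause
```

Re-raising (or at least logging `logger.exception(...)` with the traceback) inside `initialize_browser` should reveal what actually goes wrong when the class runs under uvicorn/FastAPI.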
<python><asynchronous><fastapi><playwright-python>
2023-10-26 12:52:13
1
1,411
Mehdi Zare
77,366,886
17,160,160
Create quarter: month dictionary in Python from varied arrays of quarters and months
<p><strong>Basic Problem</strong><br /> Given a list of quarters and a list of months:</p> <pre><code>quarters = ['Q1-21', 'Q2-21', 'Q3-21', 'Q4-21', 'Q1-22', 'Q2-22', 'Q3-22', 'Q4-22'] months = ['JAN-21', 'FEB-21', 'MAR-21', 'APR-21', 'MAY-21', 'JUN-21', 'JUL-21','AUG-21', 'SEP-21','OCT-21', 'NOV-21', 'DEC-21', 'JAN-22', 'FEB-22','MAR-22', 'APR-22', 'MAY-22', 'JUN-22', 'JUL-22', 'AUG-22', 'SEP-22','OCT-22', 'NOV-22', 'DEC-22'] </code></pre> <p>I want to create a dictionary that lists component months indexed by quarter.<br /> This is simple to achieve by splitting the month array into the appropriate number of sub arrays and then zipping together:</p> <pre><code>dict(zip(quarters,np.split(months, 8))) {'Q1-21': ['JAN-21', 'FEB-21', 'MAR-21'], 'Q2-21': ['APR-21', 'MAY-21', 'JUN-21'], 'Q3-21': ['JUL-21', 'AUG-21', 'SEP-21'], 'Q4-21': ['OCT-21', 'NOV-21', 'DEC-21'], 'Q1-22': ['JAN-22', 'FEB-22', 'MAR-22'], 'Q2-22': ['APR-22', 'MAY-22', 'JUN-22'], 'Q3-22': ['JUL-22', 'AUG-22', 'SEP-22'], 'Q4-22': ['OCT-22', 'NOV-22', 'DEC-22']} </code></pre> <p><strong>Extended Problem</strong><br /> This is no problem if the quarters and months lists are complete and correctly ordered (as above), but I need to be able to accommodate situations in which elements are missing. 
i.e.:</p> <pre><code>quarters = ['Q2-21', 'Q3-21', 'Q4-21', 'Q1-22', 'Q2-22', 'Q3-22'] months = ['MAY-21', 'JUN-21', 'JUL-21','AUG-21', 'SEP-21', 'OCT-21', 'NOV-21', 'JAN-22', 'FEB-22','MAR-22', 'APR-22', 'MAY-22', 'JUN-22', 'JUL-22', 'AUG-22', 'SEP-22', 'OCT-22'] </code></pre> <p>In this instance the desired output would be:</p> <pre><code>{'Q2-21': ['MAY-21', 'JUN-21'], 'Q3-21': ['JUL-21', 'AUG-21', 'SEP-21'], 'Q4-21': ['OCT-21', 'NOV-21'], 'Q1-22': ['JAN-22', 'FEB-22', 'MAR-22'], 'Q2-22': ['APR-22', 'MAY-22', 'JUN-22'], 'Q3-22': ['JUL-22', 'AUG-22', 'SEP-22'], 'Q4-22': ['OCT-22']} </code></pre> <p><strong>Initial Solution</strong><br /> Using datetimes and offsets, I have been able to achieve this by creating an appropriately formatted list of months from each listed quarter that is present in the month list and then using dictionary comprehension to associate with the relevant quarter key:</p> <pre><code>month_vals = [ [ #create appropriate month string calendar.month_name[m.month][:3].upper() + q[-3:] #list all months in quarter for m in pd.date_range(pd.to_datetime(q[-2:] + q[:2]), pd.to_datetime(q[-2:] + q[:2]) + pd.offsets.QuarterEnd(), freq='MS') #check membership in month list if calendar.month_name[m.month][:3].upper() + q[-3:] in months] for q in quarters ] #create dictionary {k:v for k,v in zip(quarters, month_vals)} </code></pre> <p>My solution does not feel very efficient and I wondered if there was a more elegant method to achieve the output?</p>
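One alternative that avoids date arithmetic entirely: since each month name determines its quarter, the quarter key can be derived from the month string itself and the months grouped on it — a sketch:

```python
# Map each three-letter month name to its quarter once; the rest is grouping.
MONTH_TO_QUARTER = {
    'JAN': 'Q1', 'FEB': 'Q1', 'MAR': 'Q1',
    'APR': 'Q2', 'MAY': 'Q2', 'JUN': 'Q2',
    'JUL': 'Q3', 'AUG': 'Q3', 'SEP': 'Q3',
    'OCT': 'Q4', 'NOV': 'Q4', 'DEC': 'Q4',
}

def group_months(months):
    grouped = {}
    for m in months:
        name, year = m.split('-')
        key = f"{MONTH_TO_QUARTER[name]}-{year}"   # e.g. 'MAY-21' -> 'Q2-21'
        grouped.setdefault(key, []).append(m)
    return grouped

months = ['MAY-21', 'JUN-21', 'JUL-21', 'AUG-21', 'SEP-21', 'OCT-21',
          'NOV-21', 'JAN-22', 'FEB-22', 'MAR-22', 'APR-22', 'MAY-22',
          'JUN-22', 'JUL-22', 'AUG-22', 'SEP-22', 'OCT-22']
print(group_months(months))
# {'Q2-21': ['MAY-21', 'JUN-21'], 'Q3-21': ['JUL-21', 'AUG-21', 'SEP-21'], ...}
```

Because the keys come from the months themselves, this also produces entries (like `Q4-22` above) for quarters whose months are present even when the quarter is missing from the `quarters` list — matching the desired output.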
<python>
2023-10-26 12:11:03
3
609
r0bt
77,366,713
9,525,238
PySide6 QPushButton with Icon in Center
<p>I'm trying to design a button that has a bigger icon, slightly above the center of the button, with some small text below it.</p> <p><a href="https://i.sstatic.net/qPlEd.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/qPlEd.png" alt="enter image description here" /></a></p> <p>Minimum example:</p> <pre><code>from pathlib import Path from PySide6.QtWidgets import (QPushButton, QMainWindow, QApplication, QVBoxLayout, QHBoxLayout, QWidget) import sys class MainWindow(QMainWindow): def __init__(self): super().__init__() self.setMinimumSize(400, 400) layoutV = QVBoxLayout() layoutH = QHBoxLayout() layoutV.addLayout(layoutH) btn = QPushButton(self) btn.setMinimumSize(200, 200) btn.setMaximumSize(200, 200) layoutH.addWidget(btn) container = QWidget() container.setLayout(layoutV) container.setStyleSheet(&quot;background: black;&quot;) self.setCentralWidget(container) path_to_image = Path(&quot;example.png&quot;) assert path_to_image.exists() btn.setStyleSheet( &quot;&quot;&quot; QPushButton{ background: #0E0E0E; border-radius: 10px; background-image: url(&quot;example.png&quot;); background-position: center; background-repeat: no-repeat; color: white; text-align: bottom; font: 18px; padding-bottom: 15px; } &quot;&quot;&quot;) btn.setText(&quot;Icon Example&quot;) app = QApplication(sys.argv) window = MainWindow() window.show() app.exec() </code></pre> <p>The problem is in the style sheets (I think).</p> <blockquote> <p>background-position: center;</p> </blockquote> <p>puts it in the dead center, as expected.</p> <p><a href="https://i.sstatic.net/HljvH.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/HljvH.png" alt="enter image description here" /></a></p> <blockquote> <p>background-position: center top;</p> </blockquote> <p>puts it on the top, but there is no way of padding-top this.</p> <p><a href="https://i.sstatic.net/ZAS8r.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/ZAS8r.png" alt="enter image description here"
/></a></p> <p>Is there a way of putting it slightly (+10/15% or 15-20px) above the center of the button? Mid-way between &quot;center&quot; and &quot;top&quot; would probably be the best. Googling did not return anything relevant except a thread from 2006 ... <a href="https://www.qtcentre.org/threads/41910-QStyleSheets-background-position-using-pixels-problem" rel="nofollow noreferrer">https://www.qtcentre.org/threads/41910-QStyleSheets-background-position-using-pixels-problem</a> with no answers.</p>
<python><pyqt6><pyside6><qtstylesheets>
2023-10-26 11:44:03
0
413
Andrei M.
77,366,710
752,976
How can I get Python stacktraces to show up properly in Google Cloud Trace?
<p>I have a web service written in Python, running on GKE. I've enabled OpenTelemetry instrumentation via the FastAPI contrib package, the OTel SDK, and the Cloud Trace exporter.</p> <p>I am able to send and view successful traces, but the exception information seems wrong. I am getting the correct message and <code>exception.type</code>, and I'm able to correlate it with my logs, but the stack trace displayed in the cloud console seems completely bogus:</p> <p><code>exception.stacktrace</code> key in &quot;Logs &amp; Events&quot; tab:</p> <pre><code> Traceback (most recent call last): File &quot;/usr/local/lib/python3.11/site-packages/opentelemetry/trace/__init__.py&quot;, line 573, in use_span yield span File &quot;/usr/local/lib/python3.11/site-packages/opentelemetry/sdk/trace/__init__.py&quot;, line 1045, in st </code></pre> <p>It seems as if it was cut off at <code>st</code>, and while seemingly describing the same exception, it doesn't have my correct stacktrace, which starts with:</p> <pre><code>Traceback (most recent call last): File &quot;/usr/local/lib/python3.11/site-packages/uvicorn/protocols/http/httptools_impl.py&quot;, line 426, in run_asgi result = await app( # type: ignore[func-returns-value] </code></pre> <p>This exception is attached to every span in this trace up to the root one:</p> <p><a href="https://i.sstatic.net/W5rAf.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/W5rAf.png" alt="google cloud trace span view showing an event dot on every span" /></a></p> <p>Is this misconfigured instrumentation? How can I see actual stacktraces in the console?</p>
<python><google-cloud-platform><fastapi><stack-trace><open-telemetry>
2023-10-26 11:43:35
0
39,520
Bartek Banachewicz
77,366,617
736,662
Using the "name" argument to group requests
<p>I have a working request like this:</p> <pre><code> resp = self.client.get(f'/api/loadValues?tsIds={load_ts_ids_in_function}&amp;resolution={resolution}' f'&amp;startUtc={set_from_date_load_values()}&amp;endUtc={set_to_date_load_values()}', headers={'X-API-KEY': 'xxx'}) </code></pre> <p>But the user's statistics are not aggregated in the Locust web GUI.</p> <p>I came across this:</p> <p><a href="https://docs.locust.io/en/stable/writing-a-locustfile.html#name-parameter" rel="nofollow noreferrer">https://docs.locust.io/en/stable/writing-a-locustfile.html#name-parameter</a></p> <p>But how do I use the name argument in the above code?</p>
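In the code above, `name` is just one more keyword argument on the request call, i.e. something like `self.client.get(f'/api/loadValues?tsIds=...', headers={'X-API-KEY': 'xxx'}, name='/api/loadValues')` should group all of these requests under a single statistics row. Why this helps can be seen by simulating Locust's bookkeeping with the stdlib:

```python
from collections import Counter

# Locust keys its statistics on `name` when it is supplied, otherwise on the
# full URL -- query string included -- so every distinct tsIds/date
# combination becomes its own row.
def record(stats, url, name=None):
    stats[name or url] += 1

stats_without_name = Counter()
stats_with_name = Counter()
for ts_ids in ('1,2', '3,4', '5,6'):
    url = f'/api/loadValues?tsIds={ts_ids}&resolution=hour'
    record(stats_without_name, url)                        # 3 separate rows
    record(stats_with_name, url, name='/api/loadValues')   # 1 aggregated row

print(len(stats_without_name), len(stats_with_name))  # 3 1
```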
<python><locust>
2023-10-26 11:30:46
1
1,003
Magnus Jensen
77,366,382
21,049,944
polars dataFrame.sort() - large memory requirements
<p>When I call <code>DataFrame.sort()</code> from the Python-polars library, RAM usage jumps to more than double its original value. This is of course a problem when dealing with large datasets (we are talking tens to hundreds of GB). Is there any workaround that (even at the cost of performance) uses less RAM?</p> <p>Thank you for any hints.</p>
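Depending on the polars version, the built-in route is the lazy/streaming engine — roughly `pl.scan_parquet(path).sort(col).sink_parquet(out)` — so the data never has to be fully materialized (the exact API and streaming support for `sort` vary by version, so treat that as a sketch). The generic trick behind such engines is an external merge sort: sort chunks that fit in RAM, spill each sorted chunk to disk, then stream-merge the chunks. A minimal stdlib illustration:

```python
import heapq
import os
import tempfile

def external_sort(values, chunk_size):
    """Sort `values` while only ever holding `chunk_size` items in memory
    at once (plus one line per chunk during the merge)."""
    chunk_files = []
    for start in range(0, len(values), chunk_size):
        # Sort one in-memory chunk and spill it to a temp file.
        chunk = sorted(values[start:start + chunk_size])
        f = tempfile.NamedTemporaryFile('w+', delete=False)
        f.write('\n'.join(map(str, chunk)))
        f.seek(0)
        chunk_files.append(f)
    # heapq.merge streams the sorted runs, holding one value per run in RAM.
    streams = [(int(line) for line in f) for f in chunk_files]
    merged = list(heapq.merge(*streams))
    for f in chunk_files:
        f.close()
        os.unlink(f.name)
    return merged

print(external_sort([5, 3, 8, 1, 9, 2, 7], chunk_size=3))  # [1, 2, 3, 5, 7, 8, 9]
```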
<python><dataframe><sorting><ram><python-polars>
2023-10-26 10:56:10
1
388
Galedon
77,366,310
4,547,189
Connection in DAG - Using Connection() to connect to MsSQL Backend in Airflow
<p>We have our secrets stored in AWS Secrets Manager. Hence I figured, let's use Connection() to create a connection to MsSQL and then use the Hook to execute the SQL query.</p> <pre><code>c = Connection( conn_id=&quot;test_mssql_1&quot;, conn_type=&quot;mssql&quot;, description=&quot;connection description&quot;, host=&quot;**************&quot;, login=&quot;**************&quot;, password=&quot;*****************&quot;, schema=&quot;***********&quot; #extra=json.dumps(dict(this_param=&quot;some val&quot;, that_param=&quot;other val*&quot;)), ) mssql_hook = c.get_hook() data_records = mssql_hook.get_records(&quot;Select top 10 * from SomeTable&quot;) </code></pre> <p>However, when executing this query, I get an error:</p> <pre><code>airflow.exceptions.AirflowNotFoundException: The conn_id `test_mssql_1` isn't defined </code></pre> <p>Any thoughts on what is incorrect or what the right approach should be? This is my first foray into the Connection() call. We don't want to use Connections from the UI since our credentials come from Secrets Manager. Thanks</p>
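A `Connection` object built inline is never registered anywhere, and `get_hook()` ends up re-resolving `conn_id` through Airflow's secrets backends / metadata DB, hence the `AirflowNotFoundException`. One workaround is to expose the connection as an `AIRFLOW_CONN_<CONN_ID>` environment variable holding a connection URI; the credentials below are hypothetical placeholders for values fetched from Secrets Manager:

```python
import os
from urllib.parse import quote

# Hypothetical credentials, in practice fetched from Secrets Manager.
login, password, host, schema = "svc_user", "p@ss/word", "db.example.com", "reporting"

# conn id `test_mssql_1` maps to env var AIRFLOW_CONN_TEST_MSSQL_1
# (upper-cased, AIRFLOW_CONN_ prefix), whose value is a connection URI.
conn_uri = (
    f"mssql://{quote(login, safe='')}:{quote(password, safe='')}"
    f"@{host}/{schema}"
)
os.environ["AIRFLOW_CONN_TEST_MSSQL_1"] = conn_uri
print(conn_uri)  # mssql://svc_user:p%40ss%2Fword@db.example.com/reporting
```

After that, a hook created with `mssql_conn_id="test_mssql_1"` should resolve; the longer-term fix is configuring the Amazon provider's Secrets Manager secrets backend in `airflow.cfg`, so conn ids resolve directly against AWS without any manual wiring.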
<python><sql-server><airflow>
2023-10-26 10:45:55
1
648
tkansara
77,366,252
17,487,457
Why is macOS pointing to two versions of Python 3?
<p>It's good that MacBooks now ship without the dead <code>python2.7</code>.</p> <p>I have Python 3 installed already:</p> <pre><code> % python -V Python 3.9.6 </code></pre> <p>However, when I create an isolated virtual environment, I get <code>Python 3.11</code> instead.</p> <pre><code>python -m venv venv </code></pre> <p>Which means packages installed in the environment go into <code>venv/lib/python3.11</code>.</p> <p>Because of this I cannot run scripts using:</p> <pre><code>python my_script_file.py </code></pre> <p>But this works:</p> <pre><code>python3 my_script_file.py </code></pre> <p>Should I completely remove <code>Python 3.9.6</code>? And for any future release, would that mean removing <code>Python 3.11</code>?</p> <p>What is the pythonic way to handle this?</p> <p><strong>UPDATE</strong></p> <p>Apparently, I have 2 versions installed:</p> <pre><code>% python -V Python 3.9.6 % python3 -V Python 3.11.6 </code></pre>
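The two names resolve to two separate installs that sit at different positions on `$PATH` (for instance a Command Line Tools 3.9 and a python.org or Homebrew 3.11); nothing needs uninstalling. Listing them and pinning the venv to an explicit interpreter sidesteps the ambiguity — a sketch:

```shell
# See every install that each name resolves to, in PATH order:
which -a python python3 || true

# Create the venv from an explicit interpreter so there is no ambiguity
# about which Python it wraps (use python3.11, python3.9, ... to pin one):
python3 -m venv venv

# From then on call the venv's own interpreter; inside an activated venv
# both `python` and `python3` point at this one, whatever the system has.
./venv/bin/python -V
```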
<python><macos><macos-sonoma>
2023-10-26 10:37:02
2
305
Amina Umar
77,366,027
15,320,579
Create nested dictionary based on some custom rules
<p>I have a Python dictionary as follows:</p> <pre><code>ip_dict = { &quot;img_folder/144-64ee3d9bb7-3.png&quot;: &quot;COMMERCIAL PROPERTY &quot;, &quot;img_folder/144-64ee3d9bb7-2.png&quot;: &quot;CBIC COMMERCIAL &quot;, &quot;img_folder/144-64ee3d9bb7-4.png&quot;: &quot;CBIC COMMERCIAL GENERAL&quot;, &quot;img_folder/144-64ee3d9bb7-1.png&quot;: &quot;Contractors Bonding&quot;, &quot;img_folder/144-64ee3d9bb7-5.png&quot;: &quot;CBIC&quot;, &quot;img_folder/Excess-Liability-8.png&quot;: &quot; Energy laswance &quot;, &quot;img_folder/144-64ee3d9bb7-0.png&quot;: &quot;CONTRACTORS BONDING AND INSURANCE &quot;, &quot;img_folder/Excess-Liability-10.png&quot;: &quot; FOLLOWING FORM&quot;, &quot;img_folder/Excess-Liability-14.png&quot;: &quot; (2) property and&quot;, &quot;img_folder/Excess-Liability-0.png&quot;: &quot; Energy &quot;, &quot;img_folder/Excess-Liability-5.png&quot;: &quot; The additional premium&quot;, &quot;img_folder/Excess-Liability-3.png&quot;: &quot;Ein Enos asurance Maral&quot;, &quot;img_folder/Excess-Liability-4.png&quot;: &quot; IV. Conditions &quot;, &quot;img_folder/Excess-Liability-13.png&quot;: &quot; FOLLOWING FORM &quot;, &quot;img_folder/Excess-Liability-12.png&quot;: &quot; FOLLOWING FORM EXCESS&quot;, &quot;img_folder/Excess-Liability-9.png&quot;: &quot; Surplus Lines&quot;, &quot;img_folder/Excess-Liability-11.png&quot;: &quot; ALL OTHER TERMS&quot;, &quot;img_folder/Excess-Liability-2.png&quot;: &quot; Il. Limit of&quot;, &quot;img_folder/Excess-Liability-6.png&quot;: &quot; (G) Notice of&quot;, &quot;img_folder/Excess-Liability-7.png&quot;: &quot;Ss So Ss The &quot;, &quot;img_folder/Excess-Liability-1.png&quot;: &quot;eee ee ee&quot; } </code></pre> <p>It contains text extracted from pages of 2 different PDF files (<code>144-64ee3d9bb7</code> and <code>Excess-Liability</code>).
I want to convert the above dictionary into a <strong>nested dictionary</strong> where the <strong>global key</strong> is the <strong>pdf name</strong> and the nested dictionary is the same as above. So the output would look like following:</p> <pre><code>op_dict = { &quot;144-64ee3d9bb7.png&quot;: { &quot;img_folder/144-64ee3d9bb7-3.png&quot;: &quot;COMMERCIAL PROPERTY &quot;, &quot;img_folder/144-64ee3d9bb7-2.png&quot;: &quot;CBIC COMMERCIAL &quot;, &quot;img_folder/144-64ee3d9bb7-4.png&quot;: &quot;CBIC COMMERCIAL GENERAL&quot;, &quot;img_folder/144-64ee3d9bb7-1.png&quot;: &quot;Contractors Bonding&quot;, &quot;img_folder/144-64ee3d9bb7-5.png&quot;: &quot;CBIC&quot;, &quot;img_folder/144-64ee3d9bb7-0.png&quot;: &quot;CONTRACTORS BONDING AND INSURANCE &quot; }, &quot;Excess Liability.png&quot;: { &quot;img_folder/Excess Liability-8.png&quot;: &quot; Energy laswance &quot;, &quot;img_folder/Excess Liability-10.png&quot;: &quot; FOLLOWING FORM&quot;, &quot;img_folder/Excess Liability-14.png&quot;: &quot; (2) property and&quot;, &quot;img_folder/Excess Liability-0.png&quot;: &quot; Energy &quot;, &quot;img_folder/Excess Liability-5.png&quot;: &quot; The additional premium&quot;, &quot;img_folder/Excess Liability-3.png&quot;: &quot;Ein Enos asurance Maral&quot;, &quot;img_folder/Excess Liability-4.png&quot;: &quot; IV. Conditions &quot;, &quot;img_folder/Excess Liability-13.png&quot;: &quot; FOLLOWING FORM &quot;, &quot;img_folder/Excess Liability-12.png&quot;: &quot; FOLLOWING FORM EXCESS&quot;, &quot;img_folder/Excess Liability-9.png&quot;: &quot; Surplus Lines&quot;, &quot;img_folder/Excess Liability-11.png&quot;: &quot; ALL OTHER TERMS&quot;, &quot;img_folder/Excess Liability-2.png&quot;: &quot; Il. 
Limit of&quot;, &quot;img_folder/Excess Liability-6.png&quot;: &quot; (G) Notice of&quot;, &quot;img_folder/Excess Liability-7.png&quot;: &quot;Ss So Ss The &quot;, &quot;img_folder/Excess Liability-1.png&quot;: &quot;eee ee ee&quot; } } </code></pre> <p>I tried the below logic but it is not working as expected:</p> <pre><code>op_dict = {} for key, value in ip_dict.items(): doc_name = key.split(&quot;/&quot;)[-1] if doc_name not in op_dict: op_dict[doc_name] = {} op_dict[doc_name][key] = value </code></pre> <p>Any help is appreciated!</p>
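The grouping loop is nearly there; the issue is that `key.split("/")[-1]` keeps the page suffix (e.g. `144-64ee3d9bb7-3.png`), so every page becomes its own top-level key. Stripping the trailing `-<page>` before grouping fixes it — a sketch on a subset of the data:

```python
import re

def pdf_name(key):
    filename = key.split("/")[-1]                    # e.g. "Excess-Liability-10.png"
    # Drop the trailing "-<page>" so all pages of one pdf share a key.
    return re.sub(r"-\d+\.png$", ".png", filename)   # -> "Excess-Liability.png"

def nest(ip_dict):
    op_dict = {}
    for key, value in ip_dict.items():
        op_dict.setdefault(pdf_name(key), {})[key] = value
    return op_dict

sample = {
    "img_folder/144-64ee3d9bb7-3.png": "COMMERCIAL PROPERTY ",
    "img_folder/144-64ee3d9bb7-2.png": "CBIC COMMERCIAL ",
    "img_folder/Excess-Liability-10.png": " FOLLOWING FORM",
    "img_folder/Excess-Liability-0.png": " Energy ",
}
result = nest(sample)
print(sorted(result))   # ['144-64ee3d9bb7.png', 'Excess-Liability.png']
```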
<python><dictionary><nested><data-munging>
2023-10-26 10:05:29
2
787
spectre
77,366,022
2,724,299
How to get comments from a worksheet as a pandas dataframe after reading in with openpyxl and
<p>I can get the data from an Excel worksheet as a dataframe by using:</p> <pre><code>from openpyxl import load_workbook import pandas as pd wb = load_workbook(filename = 'test.xlsx',data_only=True) ws = wb[wb.sheetnames[1]] df = pd.DataFrame(ws.values) </code></pre> <p>But how do I get the comments from each cell into a dataframe? Not all worksheet cells will have a comment, but the resulting dataframe must have the same dimensions as the original worksheet.</p>
<python><pandas><excel><openpyxl>
2023-10-26 10:04:50
1
738
Frash
77,366,004
13,328,010
Snakemake workflow run with conda and singularity fails to find conda
<p>I am facing an issue when running snakemake with a container and conda. Let me explain better. I am developing a workflow (locally) in snakemake, where some rules use a conda env (defined in /envs/env.yaml) and other rules use wrappers (so they are also based on conda envs).</p> <p>An example here:</p> <pre><code>rule clean_spike: input: sample_ref=&quot;{}results/bam/{{id}}.tmp.bam&quot;.format(outdir), sample_spike=&quot;{}results/bam_spike/{{id}}_spike.tmp.bam&quot;.format(outdir), sample_ref_index=&quot;{}results/bam/{{id}}.tmp.bam.bai&quot;.format(outdir), sample_spike_index=&quot;{}results/bam_spike/{{id}}_spike.tmp.bam.bai&quot;.format(outdir), output: sample_ref=&quot;{}results/bam/{{id}}.clean.bam&quot;.format(outdir), sample_spike=&quot;{}results/bam_spike/{{id}}_spike.clean.bam&quot;.format(outdir), conda: &quot;../envs/pysam.yaml&quot; log: &quot;{}results/logs/spike/{{id}}.removeSpikeDups&quot;.format(outdir), script: &quot;../scripts/remove_spikeDups.py&quot; </code></pre> <p>Locally, the pipeline runs smoothly.</p> <p>Now, I wanted to test the pipeline on another machine (server from the institute), and so I ran the command:</p> <p><code>snakemake --containerize &gt; Dockerfile</code></p> <p>I got a Dockerfile and generated an image that was uploaded to Docker Hub <a href="https://hub.docker.com/repository/docker/davidebrex/chiprxsnakemake/general" rel="nofollow noreferrer">here</a>.</p> <p>At this point, I copied the workflow to the other machine and added to the Snakefile the line:</p> <p><code>containerized: &quot;docker://davidebrex/chiprxsnakemake:latest&quot;</code></p> <p>and tried to run the pipeline with: <code>snakemake -c 10 --use-conda --use-singularity</code>. Ideally, snakemake should run the conda envs within the docker container for each rule.</p> <p>The problem is that I get an error related to the fact that conda is missing.
I checked the docker image and I can run conda and see the installed envs.</p> <p><strong>Any suggestions about what is going on?</strong></p> <p>Thank you!</p> <p>This is the error I get:</p> <pre><code>/usr/bin/bash: conda: command not found Traceback (most recent call last): File &quot;/home/davide.bressan-1/mambaforge/envs/new_snakemake/lib/python3.12/site-packages/snakemake/__init__.py&quot;, line 792, in snakemake success = workflow.execute( ^^^^^^^^^^^^^^^^^ File &quot;/home/davide.bressan-1/mambaforge/envs/new_snakemake/lib/python3.12/site-packages/snakemake/workflow.py&quot;, line 1246, in execute raise e File &quot;/home/davide.bressan-1/mambaforge/envs/new_snakemake/lib/python3.12/site-packages/snakemake/workflow.py&quot;, line 1242, in execute success = self.scheduler.schedule() ^^^^^^^^^^^^^^^^^^^^^^^^^ File &quot;/home/davide.bressan-1/mambaforge/envs/new_snakemake/lib/python3.12/site-packages/snakemake/scheduler.py&quot;, line 636, in schedule self.run(runjobs) File &quot;/home/davide.bressan-1/mambaforge/envs/new_snakemake/lib/python3.12/site-packages/snakemake/scheduler.py&quot;, line 685, in run executor.run_jobs( File &quot;/home/davide.bressan-1/mambaforge/envs/new_snakemake/lib/python3.12/site-packages/snakemake/executors/__init__.py&quot;, line 158, in run_jobs self.run( File &quot;/home/davide.bressan-1/mambaforge/envs/new_snakemake/lib/python3.12/site-packages/snakemake/executors/__init__.py&quot;, line 582, in run future = self.run_single_job(job) ^^^^^^^^^^^^^^^^^^^^^^^^ File &quot;/home/davide.bressan-1/mambaforge/envs/new_snakemake/lib/python3.12/site-packages/snakemake/executors/__init__.py&quot;, line 635, in run_single_job self.cached_or_run, job, run_wrapper, *self.job_args_and_prepare(job) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File &quot;/home/davide.bressan-1/mambaforge/envs/new_snakemake/lib/python3.12/site-packages/snakemake/executors/__init__.py&quot;, line 623, in job_args_and_prepare self.workflow.conda_base_path, 
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File &quot;/home/davide.bressan-1/mambaforge/envs/new_snakemake/lib/python3.12/site-packages/snakemake/workflow.py&quot;, line 456, in conda_base_path return Conda().prefix_path ^^^^^^^ File &quot;/home/davide.bressan-1/mambaforge/envs/new_snakemake/lib/python3.12/site-packages/snakemake/deployment/conda.py&quot;, line 653, in __init__ shell.check_output(self._get_cmd(&quot;conda info --json&quot;), text=True) File &quot;/home/davide.bressan-1/mambaforge/envs/new_snakemake/lib/python3.12/site-packages/snakemake/shell.py&quot;, line 61, in check_output return sp.check_output(cmd, shell=True, executable=executable, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File &quot;/home/davide.bressan-1/mambaforge/envs/new_snakemake/lib/python3.12/subprocess.py&quot;, line 466, in check_output return run(*popenargs, stdout=PIPE, timeout=timeout, check=True, ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File &quot;/home/davide.bressan-1/mambaforge/envs/new_snakemake/lib/python3.12/subprocess.py&quot;, line 571, in run raise CalledProcessError(retcode, process.args, subprocess.CalledProcessError: Command 'conda info --json' returned non-zero exit status 127. </code></pre> <p>I tried to change the snakemake command and remove --use-conda but then I get an error related to the fact that the tools are missing. I checked out other questions on stackoverflow but they were not helpful in my case (like this one: <a href="https://stackoverflow.com/questions/64000711/how-to-use-singularity-and-conda-wrappers-in-snakemake">here</a>).</p>
<python><docker><containers><conda><snakemake>
2023-10-26 10:02:47
0
2,424
DavideBrex
77,365,963
14,967,240
Why is 'plt.imshow' showing the original image when the actual values are changed?
<h2>What is the problem with this piece of code?</h2> <p>It modifies the values, but when it displays the modified image, it shows the original image again. Am I doing something wrong?</p> <p>And let me explain this in images too:</p> <p>1- Original image:</p> <p><a href="https://i.sstatic.net/M1C8R.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/M1C8R.png" alt="enter image description here" /></a></p> <p>2- Modified image:</p> <p><a href="https://i.sstatic.net/KTMw3.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/KTMw3.png" alt="enter image description here" /></a></p> <p>As is obvious from the histograms of the images, the values have changed, but both figures show exactly the same picture.</p> <p>This is the code that produces these figures:</p> <pre class="lang-py prettyprint-override"><code># Load the image img = cv2.imread(current_dir + &quot;/img.jpg&quot;, cv2.IMREAD_GRAYSCALE) plt.figure(&quot;Original Image&quot;, figsize=(6, 6)) plt.subplot(211) plt.title(&quot;Original Image&quot;) plt.imshow(img, &quot;gray&quot;) plt.subplot(212) hist_full = cv2.calcHist([img], [0], None, [256], [0, 256]) plt.plot(hist_full) img_min = np.amin(img) img_max = np.amax(img) gray_levels = np.arange(256, dtype=np.uint8) gray_levels[:] = np.uint8( ((gray_levels - img_min) / (img_max - img_min)) * (30 - 0) + 0 ) img_shrunk = cv2.LUT(img, gray_levels) plt.figure(&quot;Shrunk Original Image&quot;, figsize=(6, 6)) plt.subplot(211) plt.title(&quot;Shrunk Original Image&quot;) plt.imshow(img_shrunk, &quot;gray&quot;) plt.subplot(212) hist_full = cv2.calcHist([img_shrunk], [0], None, [256], [0, 256]) plt.plot(hist_full) </code></pre> <p>What is the problem?</p>
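The histograms already prove the lookup table works; the surprise is in the display, not the data. Without explicit `vmin`/`vmax`, `plt.imshow` normalizes the array to its own minimum and maximum, so an image compressed into the range [0, 30] is stretched right back to full contrast; passing `plt.imshow(img_shrunk, "gray", vmin=0, vmax=255)` shows the darkened result. A NumPy-only sketch of the same contrast shrink (no OpenCV here), confirming the values really do change:

```python
import numpy as np

# Tiny stand-in image covering the full 0-255 range
img = np.array([[0, 64], [128, 255]], dtype=np.uint8)
img_min, img_max = int(img.min()), int(img.max())

# Same contrast shrink as in the question, done with a NumPy lookup table
gray_levels = np.arange(256, dtype=np.float64)
lut = ((gray_levels - img_min) / (img_max - img_min) * 30).astype(np.uint8)
img_shrunk = lut[img]

print(img_shrunk.tolist())  # [[0, 7], [15, 30]] -- the data really changed
```

With fixed display limits (`vmin=0, vmax=255`) the shrunk image renders uniformly dark instead of being rescaled to look identical to the original.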
<python><numpy><matplotlib><image-processing>
2023-10-26 09:57:51
1
615
Mahmood
77,365,901
1,794,714
How can I fix the async call to OpenAI embeddings from Databricks?
<p>I am attempting to use the OpenAI Python package to get some embeddings for text classification.</p> <p>The regular version currently works:</p> <pre><code>import openai from openai.embeddings_utils import get_embedding def add_embeddings(data:pd.DataFrame) -&gt; pd.DataFrame: data[&quot;embedding&quot;] = data['target_column'].apply(lambda x: get_embedding(x, engine=&quot;text-embedding-ada-002&quot;)) return data </code></pre> <p>But I wish to move to an asynchronous method, since it would be more efficient in the long term. I have attempted a few different things, but my current version looks like this:</p> <pre><code>import asyncio import openai from openai.embeddings_utils import aget_embedding async def add_embeddings(data: pd.DataFrame) -&gt; pd.DataFrame: data[&quot;embedding&quot;] = await asyncio.gather(*(aget_embedding(x, engine=&quot;text-embedding-ada-002&quot;) for x in data['target_column'])) return data </code></pre> <p>When running the function (i.e. <code>data = add_embeddings(data)</code>) the async version returns a coroutine object instead of the expected text embeddings. I am assuming I am missing something simple, but my experience with async is quite basic.</p> <p>For context I am working with Azure Databricks and have attempted to utilise the following answers:</p> <ul> <li><a href="https://stackoverflow.com/questions/56925702/can-i-execute-a-function-in-apply-to-pandas-dataframe-asynchronously">Can I execute a function in &quot;apply&quot; to pandas dataframe asynchronously?</a></li> <li><a href="https://stackoverflow.com/questions/67944791/fastest-way-to-apply-an-async-function-to-pandas-dataframe">fastest way to apply an async function to pandas dataframe</a></li> <li><a href="https://stackoverflow.com/questions/67240458/how-does-await-works-in-a-for-loop">How does await works in a for-loop?</a></li> </ul>
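A coroutine object is exactly what an un-awaited async call returns, so the function has to be driven by an event loop. A minimal stdlib-only sketch of the pattern, with a hypothetical `fake_embed` standing in for `aget_embedding`:

```python
import asyncio

async def fake_embed(text):
    # Hypothetical stand-in for aget_embedding: any awaitable per row
    await asyncio.sleep(0)
    return [float(len(text))]

async def add_embeddings(rows):
    # gather schedules all coroutines concurrently and awaits them all
    return await asyncio.gather(*(fake_embed(x) for x in rows))

# Calling an async function directly only builds a coroutine object --
# exactly what the question observes. It has to be run on an event loop:
embeddings = asyncio.run(add_embeddings(["alpha", "ab"]))
print(embeddings)  # [[5.0], [2.0]]
```

In a Databricks or Jupyter notebook an event loop is usually already running (so `asyncio.run` raises), and the idiom is top-level `data = await add_embeddings(data)` directly in the cell instead.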
<python><asynchronous><databricks><openai-api>
2023-10-26 09:47:57
1
391
FitzKaos
77,365,862
13,146,029
UnboundLocalError: cannot access local variable 'Migrate' where it is not associated with a value - flask
<p>I'm trying to set up Flask Migrate to handle my migrations for my app. I use a factory to handle my set-up and when I try to run flask db init I get the following error:</p> <pre><code>Traceback (most recent call last): File &quot;/Users/grahammorby/Documents/GitHub/tagr-api/venv/bin/flask&quot;, line 8, in &lt;module&gt; sys.exit(main()) ^^^^^^ File &quot;/Users/grahammorby/Documents/GitHub/tagr-api/venv/lib/python3.12/site-packages/flask/cli.py&quot;, line 1064, in main cli.main() File &quot;/Users/grahammorby/Documents/GitHub/tagr-api/venv/lib/python3.12/site-packages/click/core.py&quot;, line 1078, in main rv = self.invoke(ctx) ^^^^^^^^^^^^^^^^ File &quot;/Users/grahammorby/Documents/GitHub/tagr-api/venv/lib/python3.12/site-packages/click/core.py&quot;, line 1682, in invoke cmd_name, cmd, args = self.resolve_command(ctx, args) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File &quot;/Users/grahammorby/Documents/GitHub/tagr-api/venv/lib/python3.12/site-packages/click/core.py&quot;, line 1729, in resolve_command cmd = self.get_command(ctx, cmd_name) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File &quot;/Users/grahammorby/Documents/GitHub/tagr-api/venv/lib/python3.12/site-packages/flask/cli.py&quot;, line 579, in get_command app = info.load_app() ^^^^^^^^^^^^^^^ File &quot;/Users/grahammorby/Documents/GitHub/tagr-api/venv/lib/python3.12/site-packages/flask/cli.py&quot;, line 313, in load_app app = locate_app(import_name, None, raise_if_not_found=False) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File &quot;/Users/grahammorby/Documents/GitHub/tagr-api/venv/lib/python3.12/site-packages/flask/cli.py&quot;, line 219, in locate_app __import__(module_name) File &quot;/Users/grahammorby/Documents/GitHub/tagr-api/wsgi.py&quot;, line 3, in &lt;module&gt; app = init_app() ^^^^^^^^^^ File &quot;/Users/grahammorby/Documents/GitHub/tagr-api/application/__init__.py&quot;, line 26, in init_app Migrate = Migrate(app, db) UnboundLocalError: cannot access local variable 'Migrate' where it is 
not associated with a value </code></pre> <p>and my application setup looks as follows:</p> <pre class="lang-py prettyprint-override"><code>from flask import Flask from config import Config from flask_sqlalchemy import SQLAlchemy from flask_marshmallow import Marshmallow from flask_bcrypt import Bcrypt from flask_jwt_extended import JWTManager from flask_migrate import Migrate # Global accessible libraries db = SQLAlchemy() ma = Marshmallow() bcrypt = Bcrypt() JWTManager = JWTManager() def init_app(): app = Flask(__name__, instance_relative_config=False) app.config.from_object(Config) # Setup plugins db.init_app(app) ma.init_app(app) bcrypt.init_app(app) JWTManager.init_app(app) Migrate = Migrate(app, db) from models import User, App with app.app_context(): # Blueprints from application.users import users_blueprint from application.apps import apps_blueprint app.register_blueprint(users_blueprint) app.register_blueprint(apps_blueprint) return app </code></pre> <p>I have tried to use it with init_app, but that caused errors as well.</p>
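The traceback points at a classic shadowing bug: inside `init_app`, the assignment `Migrate = Migrate(app, db)` makes `Migrate` a local variable of the function, so the right-hand-side lookup of the imported class fails with exactly this `UnboundLocalError`. A minimal stand-alone reproduction (the `Migrate` class here is a stand-in for `flask_migrate.Migrate`):

```python
class Migrate:
    """Stand-in for flask_migrate.Migrate."""
    def __init__(self, app=None, db=None):
        self.app, self.db = app, db

def init_app():
    # Assigning to the name 'Migrate' anywhere in this function makes it a
    # local variable, so the right-hand-side lookup fails before the
    # assignment ever runs.
    Migrate = Migrate(None, None)
    return Migrate

try:
    init_app()
    caught = None
except UnboundLocalError as exc:
    caught = type(exc).__name__

print(caught)  # UnboundLocalError

def init_app_fixed():
    migrate = Migrate(None, None)  # lower-case name: no shadowing
    return migrate

print(type(init_app_fixed()).__name__)  # Migrate
```

Renaming the instance (`migrate = Migrate(app, db)`) removes the shadowing; the same fix applies to the `JWTManager = JWTManager()` line at module level, which rebinds the imported class name too.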
<python><flask>
2023-10-26 09:42:44
1
317
Graham Morby
77,365,790
14,243,731
Jinja2 Whitespace control on {% with %} tag not working as expected
<p>I am running into the issue that a <code>{% with %}</code> tag is adding an extra line to my jinja template although I set <code>trim_blocks=True</code> and <code>lstrip_blocks=True</code> in my <code>Environment(loader=loader, trim_blocks=True, lstrip_blocks=True)</code> class.</p> <p>Here is the code:</p> <p>ddl.jinja</p> <pre><code>{{ ddl.input }} ( {% with columns = ddl.columns %} {% include 'columns.jinja' %} {% endwith %} ); </code></pre> <p>columns.jinja</p> <pre><code>{% for column in columns %} {{ column + ',' if not loop.last else column}} {% endfor %} </code></pre> <p>Output with the extra whitespace line occurring after the open parenthesis:</p> <pre><code>CREATE TABLE USER ( username char(40) NOT NULL, email varchar(80) NULL ); </code></pre> <p>Here is the data I am passing to the template:</p> <pre class="lang-py prettyprint-override"><code>generator.render_template(template=template, data={&quot;ddl&quot;: {&quot;input&quot;: &quot;CREATE TABLE USER&quot;, &quot;columns&quot;: [&quot;username char(40) NOT NULL&quot;, &quot;email varchar(80) NULL&quot;]}} </code></pre> <p>I am not sure why the extra whitespace line is being added and although I can remove it using the <code>-</code> at the end of my with block, I'd prefer not to as I am not doing that anywhere else in the codebase.</p>
<python><flask><jinja2>
2023-10-26 09:33:20
1
328
Adventure-Knorrig
77,365,770
10,722,752
how to perform rolling mean using if condition in group by operation
<p>I am trying to write code wherein, if the difference between the current date and the minimum date of the id column is more than 3 months, the rolling mean should use a 21-day window, else a 7-day window.</p> <p>Sample data:</p> <pre><code>import pandas as pd import numpy as np np.random.seed(0) dt = pd.DataFrame({'id' : [1,1,2,2,1], 'date' : ['2023-09-01', '2023-09-10', '2023-01-01', '2023-01-13', '2023-09-11'], 'rev' : np.random.randint(100, 150, 5)}) dt id date rev 0 1 2023-09-01 144 1 1 2023-09-10 147 2 2 2023-01-01 100 3 2 2023-01-13 103 4 1 2023-09-11 103 </code></pre> <p>The code I am using to get the rolling mean is:</p> <pre><code>dt.groupby('id').transform(lambda x : x['rev'].rolling(window = '21D', min_periods = 1).mean() if pd.to_datetime('today') - x['date'].min() &gt;= 90 else x['rev'].rolling(window = '7D', min_periods = 1).mean()) </code></pre> <p>But I am getting a <code>KeyError: 'date'</code> error.</p> <p>Could someone please explain how to get the rolling means?</p>
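The `KeyError` comes from `transform`, which feeds the lambda one column at a time, so `x['date']` never exists; `groupby(...).apply` receives the whole sub-DataFrame. Note also that `pd.to_datetime('today') - x['date'].min()` is a `Timedelta`, which cannot be compared to the bare integer `90`. A sketch with both fixes, on the sample data from the question (dates must be datetimes and form the index for time-based rolling):

```python
import pandas as pd

dt = pd.DataFrame({'id': [1, 1, 2, 2, 1],
                   'date': pd.to_datetime(['2023-09-01', '2023-09-10',
                                           '2023-01-01', '2023-01-13',
                                           '2023-09-11']),
                   'rev': [144, 147, 100, 103, 103]})

def conditional_rolling(g):
    # apply() sees the full group, so 'date' is available here
    old = pd.Timestamp('today') - g['date'].min() >= pd.Timedelta(days=90)
    window = '21D' if old else '7D'
    s = g.sort_values('date').set_index('date')['rev']
    return s.rolling(window, min_periods=1).mean()

out = dt.groupby('id', group_keys=False).apply(conditional_rolling)
print([round(v, 2) for v in out.tolist()])
# [144.0, 145.5, 131.33, 100.0, 101.5]  (both groups are >3 months old by now)
```

The window decision here is made once per group from its earliest date, matching the condition described in the question.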
<python><pandas><numpy>
2023-10-26 09:31:07
2
11,560
Karthik S
77,365,743
8,030,794
How do professional programs process huge amounts of data from websockets?
<p>I'm just learning Python. I would like to receive data from the exchange for all coins via a websocket. But the loop does not have time to process so much data and the queue gradually accumulates. I don’t understand how screener sites process so much in parallel and don’t slow down. Could you tell me, at least roughly, how this is implemented? Here is a simple example of how I implemented one part:</p> <pre><code>import websockets import json from multiprocessing import Manager async def method(add_symbols, bookTickers): manager = Manager() async for websocket in websockets.connect('wss://fstream.binance.com/ws'): await websocket.send(json.dumps({ &quot;method&quot;: &quot;SUBSCRIBE&quot;, &quot;params&quot;: [ el.lower() + '@bookTicker' for el in add_symbols], &quot;id&quot;: 1 })) result = await websocket.recv() for symbol in add_symbols: bookTickers[symbol] = manager.dict() async for message in websocket: result = json.loads(message) symbol = result['s'] best_bid = result['b'] best_ask = result['a'] bookTickers[result['s']]['best_bid'] = best_bid bookTickers[result['s']]['best_ask'] = best_ask </code></pre>
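A common production pattern (a sketch, not Binance-specific) is to decouple receiving from processing: the websocket loop does nothing but `queue.put_nowait(message)`, and one or more consumer tasks do the JSON parsing and storage. The receive loop then stays fast, and the number of consumers can be scaled independently. Stdlib-only illustration with two fake messages standing in for the websocket stream:

```python
import asyncio
import json

async def consumer(queue, tickers):
    while True:
        raw = await queue.get()
        if raw is None:          # sentinel: shut this worker down
            queue.task_done()
            break
        msg = json.loads(raw)    # the slow work happens off the receive path
        tickers[msg["s"]] = {"best_bid": msg["b"], "best_ask": msg["a"]}
        queue.task_done()

async def main():
    queue, tickers = asyncio.Queue(), {}
    workers = [asyncio.create_task(consumer(queue, tickers)) for _ in range(2)]
    # In the real app this loop is:  async for message in websocket: queue.put_nowait(message)
    for raw in ['{"s": "BTCUSDT", "b": "1", "a": "2"}',
                '{"s": "ETHUSDT", "b": "3", "a": "4"}']:
        queue.put_nowait(raw)
    await queue.join()                   # wait until everything is processed
    for _ in workers:
        queue.put_nowait(None)           # stop the workers
    await asyncio.gather(*workers)
    return tickers

tickers = asyncio.run(main())
print(tickers["BTCUSDT"])  # {'best_bid': '1', 'best_ask': '2'}
```

Real screener backends push this further (multiple processes or machines, a message broker instead of an in-process queue), but the shape is the same: receive, enqueue, process elsewhere.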
<python><websocket><python-asyncio>
2023-10-26 09:28:36
0
465
Fresto
77,365,740
2,986,042
How to send logs back to tkinter GUI from Robot framework?
<p>I have created GUI using python Tkinter. After clicking <code>run button</code> in the GUI, It will send selected GUI variables (eg: text box values) to Robot framework. This is my design concept</p> <pre><code>Tkinter GUI ----&gt; Run.bat ---&gt; Execute Robot framework </code></pre> <p><strong>Run.bat</strong></p> <pre><code>@py -m robot -A robot.txt --variable myval:%arg1% test_script.robot </code></pre> <p>This is working as expected. Now I want to scale my application such a way that, it will print the <code>logs</code> from Robot framework to Tkinter GUI <code>text area</code>.</p> <p>I am planning to create a <code>Thread</code> in Tkinter to capture the real time logs from Robot framework.</p> <p><strong>example:</strong></p> <pre><code>import os *** Settings *** Documentation This test file Suite Setup Suite Teardown *** Variables *** *** Test Cases *** Log To Console \nargumrnt are =&gt; ${myval} # if value is correct or not IF ${myval} send_back_to_Tkinter_GUI (&quot;Value is OK&quot;) # Send back log to GUI from robot framework ELSE send_back_to_Tkinter_GUI (&quot;Value is Not OK&quot;) # Send back log to GUI from robot framework END [Teardown] </code></pre> <p>I want to know, Is there any method available to get this logs from robot framework to tkinter? or Any suggestion from your side to send back logs to Tkinter script?</p>
<python><tkinter><robotframework><python-multithreading>
2023-10-26 09:28:00
0
1,300
user2986042
77,365,516
21,185,825
Python - textual - question marks instead of borders in DOS prompt
<p>When I run this Python code in VS Code, it displays the borders fine.</p> <p>But in a Windows command-line (DOS prompt) I get question marks where borders should display.</p> <h3>Code</h3> <p>I use the code from Textual tutorial:</p> <pre><code>class Hello(Static): BORDER_TITLE = &quot;Hello Widget&quot; def on_mount(self) -&gt; None: self.next_word() self.border_subtitle = &quot;Click for next hello&quot; def on_click(self) -&gt; None: self.next_word() def next_word(self) -&gt; None: hello = next(hellos) self.update(f&quot;{hello}, [b]World[/b]!&quot;) </code></pre> <p>with this Textual CSS (.tcss):</p> <pre class="lang-css prettyprint-override"><code>CustomWidget { width: 40; height: 9; padding: 1 2; background: $panel; color: $text; border: $secondary tall; content-align: center middle; } </code></pre> <h3>Question</h3> <p>The Unicode character with <a href="https://www.charactercodes.net/25BA" rel="nofollow noreferrer">codepoint U+25BA (Black Right-Pointing Pointer)</a> <code>▶</code> is displayed as a question mark <code>?</code>.</p> <p>So I believe it is a UTF-8 issue, but how could I fix this ?</p>
<python><utf-8><windows-console><textual>
2023-10-26 08:54:45
0
511
pf12345678910
77,365,453
16,473,860
Error Installing Pytorch+cuda121 in Poetry Virtual Environment: Source (pytorch): Authorization error accessing https://download.pytorch.org/
<p>I am looking to use the yolov5 model and opencv in my project. I am currently using Poetry as the virtual environment for my project.</p> <p>In order to operate yolov5 with gpu rather than cpu, I am looking to install cuda. I am trying to download the Cuda 12.1 version from <a href="https://pytorch.org/" rel="nofollow noreferrer">https://pytorch.org/</a>. For this, I have configured my pyproject.toml as follows:</p> <pre class="lang-ini prettyprint-override"><code>[tool.poetry] name = &quot;yolo-test&quot; version = &quot;0.1.0&quot; description = &quot;&quot; authors = [&quot;&quot;] readme = &quot;README.md&quot; packages = [{include = &quot;yolo_test&quot;}] [tool.poetry.dependencies] python = &quot;&gt;=3.11,&lt;3.13&quot; opencv-python = &quot;^4.8.0.74&quot; pyinstaller = &quot;^6.1.0&quot; pillow = &quot;^10.1.0&quot; pandas = &quot;^2.1.1&quot; requests = &quot;^2.31.0&quot; ultralytics = &quot;^8.0.201&quot; logging = &quot;^0.4.9.6&quot; # Cuda installation test [[tool.poetry.source]] name = &quot;pytorch&quot; url = &quot;https://download.pytorch.org/whl/cu121&quot; [build-system] requires = [&quot;poetry-core&quot;] build-backend = &quot;poetry.core.masonry.api&quot; </code></pre> <p>Afterwards, I tried to install it in the poetry virtual environment using the following command:</p> <pre class="lang-bash prettyprint-override"><code>poetry add torch torchvision torchaudio --source pytorch </code></pre> <p>However, during the installation, I observed the following error logs:</p> <pre><code>Source (pytorch): Authorization error accessing https://download.pytorch.org/whl/cu121/pefile/ Source (pytorch): Authorization error accessing https://download.pytorch.org/whl/cu121/macholib/ Source (pytorch): Authorization error accessing https://download.pytorch.org/whl/cu121/logging/ Source (pytorch): Authorization error accessing https://download.pytorch.org/whl/cu121/ultralytics/ ... 
</code></pre> <p>Additionally, I used <code>print(torch.__version__)</code> for verification, and as an output, I observed <code>2.1.0+cpu</code>. Could anyone assist me on how to install the Pytorch Cuda version in Poetry?</p> <p>Thank you.</p>
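The authorization errors are a symptom of Poetry querying the PyTorch index for <em>every</em> dependency (pefile, ultralytics, ...), which that index does not host. Marking the source as explicit restricts it to packages that opt in. A sketch of the relevant `pyproject.toml` pieces (the `priority` key needs Poetry >= 1.5; older releases used `secondary = true` on the source instead):

```toml
[[tool.poetry.source]]
name = "pytorch"
url = "https://download.pytorch.org/whl/cu121"
priority = "explicit"   # only consulted for packages that name this source

[tool.poetry.dependencies]
torch = { version = "^2.1.0", source = "pytorch" }
torchvision = { version = "*", source = "pytorch" }
torchaudio = { version = "*", source = "pytorch" }
```

After adjusting the file, `poetry lock` followed by `poetry install` should pull the CUDA wheels while everything else still resolves from PyPI.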
<python><deep-learning><pytorch><yolo><python-poetry>
2023-10-26 08:46:20
1
747
Kevin Yang
77,365,408
8,905,583
Defining Correct DataType for Nested JSON in Flink's Elasticsearch Sink (Python)
<p>I am working with Apache Flink and attempting to sink documents into Elasticsearch using the built-in Elasticsearch Sink. I'm having difficulty defining the correct data type for the sink to accept my documents.</p> <p>My JSON documents look as follows:</p> <pre class="lang-json prettyprint-override"><code>{ body: { &quot;foo&quot;: &quot;bar&quot;, &quot;hello&quot;: [&quot;world&quot;] }, doc_id: &quot;1234&quot; } </code></pre> <p>From the documentation and my understanding, the Elasticsearch sink expects a <code>Types.MAP()</code> input. This type requires an additional type definition for both the keys and the values. In my case, all my keys are strings, but my values are a mix of strings and nested documents.</p> <p>Below is my code to declare the sink. I'm using <code>apache-flink==1.17.1</code> on Python <code>3.10.13</code>.</p> <pre class="lang-py prettyprint-override"><code># Set up the Elasticsearch sink elasticsearch_sink = ( Elasticsearch7SinkBuilder() .set_bulk_flush_max_actions(1) .set_emitter(ElasticsearchEmitter.dynamic_index(&quot;name&quot;, &quot;id&quot;)) .set_hosts([config[&quot;elasticsearch_endpoint&quot;]]) .set_connection_username(config[&quot;elasticsearch_username&quot;]) .set_connection_password(config[&quot;elasticsearch_password&quot;]) .build() ) # Connect the stream to the sink ( input_stream .map(Entity.to_dict, Types.MAP(Types.STRING(), &quot;&lt;value type goes here&gt;&quot;)) .sink_to(elasticsearch_sink) .name(&quot;Elasticsearch Sink&quot;) ) </code></pre> <p>I'm unsure of how to declare the correct value type for nested JSON in this context. How can I correctly define the <code>Types.MAP()</code> such that it accounts for both strings and nested documents?</p> <p>Thank you for your assistance!</p>
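`Types.MAP()` requires a single, homogeneous value type, so a payload mixing strings and nested objects cannot be typed as a map directly. For a fixed schema, `Types.ROW_NAMED` is the usual PyFlink answer; a simpler workaround (a sketch, not an official Flink recipe) is to keep the stream as `Types.MAP(Types.STRING(), Types.STRING())` and serialize any nested value to a JSON string. The flattening itself, in plain Python:

```python
import json

doc = {"body": {"foo": "bar", "hello": ["world"]}, "doc_id": "1234"}

# Every value becomes a string, so Types.MAP(Types.STRING(), Types.STRING()) fits
flat = {key: value if isinstance(value, str) else json.dumps(value)
        for key, value in doc.items()}

# Simplified round-trip: treats values starting with '{' or '[' as JSON,
# which is fine here but would misparse a plain string beginning that way
restored = {key: json.loads(v) if v.startswith(("{", "[")) else v
            for key, v in flat.items()}
print(restored == doc)  # True
```

The trade-off is that Elasticsearch then receives `body` as a string field rather than a nested document, so depending on the mapping it may be preferable to define a `Types.ROW_NAMED` schema for `body` instead.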
<python><json><elasticsearch><apache-flink>
2023-10-26 08:39:37
1
838
thijsfranck
77,365,361
5,775,358
Move x-axis to data=0 but keep tick labels at the bottom
<p>I would like to move the line of the x-axis (with all the corresponding ticks) to y=0 in data coordinates. But the tick-labels should be positioned at the bottom of the figure, to prevent clutter in the graph.</p> <p><a href="https://stackoverflow.com/questions/56669981/show-axis-at-center-but-keep-labels-on-the-left/56670291#comment136388675_56670291">Show axis at center, but keep labels on the left</a></p> <pre class="lang-py prettyprint-override"><code>from matplotlib import pyplot as plt import matplotlib.transforms as transforms import numpy as np # Generate some sample data x = np.linspace(-10, 10, 100) y = np.sin(x) # Create the plot fig, ax = plt.subplots() # Plot the data ax.plot(x, y) # Move the x-axis to the bottom ax.spines['bottom'].set_position(('data', 0)) trans = transforms.blended_transform_factory(ax.transAxes,ax.transData) # plt.setp(ax.get_xticklabels(), 'transform', trans) # this returns a plot with zero height plt.show() </code></pre> <p>I would like to move the axis itself, including the ticks. This makes working with a grid and a logarithmic scale for the ticks easier. Therefore a solution with a horizontal line like <code>ax.axhline(y=0)</code> is not a good solution and is not what I am looking for.</p>
<python><numpy><matplotlib><xticks>
2023-10-26 08:33:20
1
2,406
3dSpatialUser
77,365,273
736,662
Aggregate Locust test results runtime
<p>I have two apparently similar tests in Locust, but one of them writes every call to the <code>web-gui</code>, while the other one aggregates the calls. I want both tests to behave like the latter, i.e. aggregate.</p> <p><a href="https://i.sstatic.net/wJznX.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/wJznX.png" alt="Locust GUI" /></a></p> <p>This test does not aggregate the calls:</p> <pre><code>class LoadValues(FastHttpUser): host = server_name def _run_read_ts(self, series_list, resolution, start, end): TSIDs = &quot;,&quot;.join(series_list) resp = self.client.get(f'/api/loadValues?tsIds={TSIDs}&amp;resolution={resolution}' f'&amp;startUtc={set_from_date()}&amp;endUtc={set_to_date()}', headers={'X-API-KEY': 'AKjCg9hTcYQ='}) print(&quot;Response status code:&quot;, resp.status_code) @task def test_get_ts_1(self): self._run_read_ts(random.sample(TSIDs, 1), 'PT15M', set_from_date(), set_to_date()) </code></pre> <p>But this test does aggregate:</p> <pre><code>class SaveValuesUser(FastHttpUser): host = server_name # wait_time = between(10, 15) @task(2) def save_list_values(self): data_list = [] # for i, (ts_id, ts_value) in enumerate(random.sample(TS_IDs.items(), random.randint(1, 25))): for i, (ts_id, ts_value) in enumerate(random.sample(TS_IDs.items(), 100)): data = get_data(ts_id, from_date, to_date, set_value()) data_list.append(data) json_data = json.dumps(data_list, indent=2) self.save_values(json_data) def save_values(self, json_data): print(type(json_data)) print(json_data) # Make the PUT request with authentication: response = self.client.put(&quot;/api/SaveValues&quot;, data=json_data, headers=headers) # Check the response: if response.status_code == 200: print(&quot;SaveValues successful!&quot;) print(&quot;Response:&quot;, response.json()) else: print(&quot;SaveValues failed.&quot;) print(&quot;Response:&quot;, response.text) </code></pre>
<python><locust>
2023-10-26 08:20:56
1
1,003
Magnus Jensen
77,365,266
10,232,932
ModuleNotFoundError: No module named 'ruptures.datasets'
<p>I am probably opening the 1,000,000th question about a module-not-found error. I installed <code>ruptures</code> with <code>pip install ruptures</code>, but somehow when I run my script I get the following error:</p> <pre><code>--------------------------------------------------------------------------- ModuleNotFoundError Traceback (most recent call last) Cell In[1], line 12 10 from noisy_timeseries_c import * 11 from outlier_detection_c import * ---&gt; 12 from changepoint_detection_c import * 13 from modelfactory_c import * 14 #from visualize import * 15 16 # Loading default config and updating with custom values from request body File ~/workspace/Azure/changepoint_detection_c.py:3 1 #changepoint detection 2 import pandas as pd ----&gt; 3 import ruptures as rpt 6 def changepoint(df, minimumsize): 7 df = df.reset_index() File ~/.local/lib/python3.10/site-packages/ruptures/__init__.py:3 1 &quot;&quot;&quot;Offline change point detection for Python.&quot;&quot;&quot; ----&gt; 3 from .datasets import pw_constant, pw_linear, pw_normal, pw_wavy 4 from .detection import Binseg, BottomUp, Dynp, KernelCPD, Pelt, Window 5 from .exceptions import NotEnoughPoints ModuleNotFoundError: No module named 'ruptures.datasets' </code></pre> <p>I tried to uninstall and reinstall the package, but it still does not work.</p> <p>The module version is ruptures 1.1.8.</p>
<python>
2023-10-26 08:20:00
0
6,338
PV8
77,365,178
12,176,803
Poetry cannot install custom script
<h1>Issue</h1> <p>Poetry does not install a custom script (for CLI app)</p> <h1>My setup</h1> <p>I have a very simple project on MacOS:</p> <pre><code>- .venv - pyproject.toml - main.py - src |... </code></pre> <p>Here is my <code>pyproject.toml</code></p> <pre><code>[tool.poetry] ... [tool.poetry.dependencies] ... [tool.poetry.scripts] my-cli=&quot;main:app&quot; </code></pre> <p>Details:</p> <ul> <li>I run on MacOS, zsh, python 3.10</li> <li>I ran in order: <ul> <li><code>python -m venv .venv</code></li> <li><code>pip install poetry</code></li> <li><code>source .venv/bin/activate</code></li> <li><code>poetry install</code></li> </ul> </li> </ul> <p>When I run <code>poetry install</code>, my packages get installed in the <code>.venv</code>.</p> <p>I would expect to be able to run <code>which my-cli</code>, but I get <code>my-cli not found</code>.</p> <p>When I run <code>poetry run my-cli</code>, I get the following error:</p> <pre><code>Warning: 'my-cli' is an entry point defined in pyproject.toml, but it's not installed as a script. You may get improper `sys.argv[0]`. </code></pre> <h1>What I tried</h1> <ol> <li>Deleting <code>.venv</code>, recreating, and install poetry with <code>pipx</code> instead of <code>pip</code></li> <li>Changing the <code>main.py</code> to something trivial (like simple function to print &quot;Hello World&quot;: <code>def app(): print(&quot;Hello World&quot;)</code>)</li> <li>Removing <code>poetry.lock</code> and installing again</li> <li>Changing the name of the script in <code>pyproject.toml</code> (my-cli to abc...)</li> <li>Removing the <code>[tool.poetry.scripts]</code> section, deleting <code>poetry.lock</code> and <code>.venv</code>, then recreating <code>.venv</code>, installing <code>poetry</code>, running <code>poetry install</code>, then adding <code>[tool.poetry.scripts]</code> again, and running <code>poetry install</code> again ---&gt; it says <code>No dependencies to install or update</code>. 
This would indicate that the script does not even get considered</li> </ol> <p>Note: that's my first time with poetry</p>
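A likely cause, given the layout shown (this is an assumption about the project, not something Poetry reports): `main.py` sits at the project root while `packages` only includes the `src`-style package, so the `main` module is never installed and the `my-cli` entry point has nothing to bind to. A sketch of the relevant `pyproject.toml` pieces:

```toml
[tool.poetry]
# main.py lives at the project root, so it must be packaged explicitly;
# otherwise the "main:app" entry point is registered without its module
packages = [{ include = "main.py" }]

[tool.poetry.scripts]
my-cli = "main:app"
```

After editing, run `poetry install` again so the console script gets regenerated in `.venv/bin`; `which my-cli` should then resolve inside the activated environment.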
<python><python-poetry>
2023-10-26 08:06:32
2
366
HGLR
77,365,163
5,955,479
Airflow - import DAGs from multiple repositories
<p>We have deployed airflow using the official helm chart on kubernetes, we are using KubernetesExecutor and git-sync.</p> <p>I have no problems running DAGs from a single Gitlab repository, however how are you supposed to load DAGs from a different repository? I ran into this discussion - <a href="https://github.com/apache/airflow/discussions/19381#discussioncomment-4976747" rel="nofollow noreferrer">https://github.com/apache/airflow/discussions/19381#discussioncomment-4976747</a>, which suggests using git submodules. Fair enough, however I struggle designing a proper repository structure.</p> <p>I have a base airflow repo, which I would like to have some common DAGs, plugins and tests. Then I would add other repos to this base one using git submodules. The structure I came up with looks like this</p> <pre><code>. ├── dags/ │ ├── common/ │ │ ├── common_dag_1.py │ │ ├── common_dag_2.py │ │ └── util/ │ │ └── util_code_dag_1.py │ └── added_repo_1/ │ ├── added_dag_1 │ ├── added_dag_2 │ └── util/ │ └── util_code_added_dag_1.py ├── plugins/ │ ├── hooks/ │ ├── sensors/ │ └── operators/ └── tests/ </code></pre> <p>Now a couple of questions</p> <ul> <li>A lot of our <code>added_repo</code>s currently have a lot of custom code like jupyter notebooks, sometimes it's already a python package or contains random eval scripts. Should we split or restructure them, to have only the relevant code in it?</li> <li>What about tests of the <code>added_repo</code>s? I assume they should be run using CI/CD pipelines at their original location, what should be put in the base repo tests?</li> <li>Any other tips how to improve the overall project structure?</li> </ul>
<python><git><airflow>
2023-10-26 08:03:26
0
355
user430953
77,365,129
11,716,727
How can I get zeros or ones instead of True or false using pd.get_dummies?
<p>I am a beginner working on the Titanic survival dataset, and I am trying to convert the <strong>Sex column</strong> into a single column with <strong>zeros</strong> and <strong>ones</strong> using <strong>get_dummies</strong>.</p> <pre><code> female male 0 True False 1 False True 2 True False </code></pre> <p>My problem is that <strong>instead of getting</strong> 0s and 1s, I get True/False.</p> <p>Any assistance, please?</p> <p>This is the line of code that I used in Python:</p> <pre><code>male = pd.get_dummies(training_set['Sex']) </code></pre>
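Since pandas 2.0, `get_dummies` returns boolean columns by default; its `dtype` argument switches the output back to 0/1 integers, and `drop_first=True` collapses the two sex columns into a single indicator column:

```python
import pandas as pd

sex = pd.Series(["female", "male", "female"], name="Sex")

dummies = pd.get_dummies(sex, dtype=int)  # dtype=int -> 0/1 instead of True/False
print(dummies["male"].tolist())  # [0, 1, 0]

single = pd.get_dummies(sex, dtype=int, drop_first=True)  # keeps only 'male'
print(list(single.columns))  # ['male']
```

Applied to the question's code: `pd.get_dummies(training_set['Sex'], dtype=int, drop_first=True)`.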
<python><pandas>
2023-10-26 07:58:38
1
709
SH_IQ
77,364,938
1,635,253
Forecasting with Prophet with "id" column on test test
<p>I'm building a forecasting model with Prophet in Python. My training dataset consists of the columns <strong>&quot;Date&quot;, &quot;Var1&quot;, &quot;Var2&quot;, &quot;Y&quot;</strong>, and the test set consists of the columns <strong>&quot;id&quot;, &quot;Date&quot;, &quot;Var1&quot;, &quot;Var2&quot;</strong>. The &quot;id&quot; column is unique (based on the combination of &quot;Date&quot;, &quot;Var1&quot;, and &quot;Var2&quot;). Below is my code:</p> <pre><code>dir_df_train_clean = &quot;data/df_train_clean.csv&quot; df_train_clean = pd.read_csv(dir_df_train_clean, parse_dates=[0]) dir_df_test_clean = &quot;data/df_test_clean.csv&quot; df_test_clean = pd.read_csv(dir_df_test_clean, parse_dates=[1]) split_date = '2019-01-01' df_train = df_train_clean[df_train_clean.index.get_level_values('Date') &lt; split_date] df_val = df_train_clean[df_train_clean.index.get_level_values('Date') &gt;= split_date] df_train = df_train.rename(columns={'Date':'ds','Y':'y'}) df_val = df_val.rename(columns={'Date':'ds','Y':'y'}) df_test = df_test_clean.rename(columns={'Date':'ds'}) </code></pre> <p>My model:</p> <pre><code>m = Prophet() m.add_regressor('Var1') m.add_regressor('Var2') m.fit(df_train) </code></pre> <p>And I try to predict my test set:</p> <pre><code>test_forecast = m.predict(df_test) </code></pre> <p>But as a result my <strong>&quot;id&quot;</strong> column disappears from the &quot;test_forecast&quot; dataframe. How can I keep my id column?</p> <p>I tried to merge the 'id' column back, but the index was altered: the first row (id='a1') of the test set shows '2022-07-30' in the date column, but the first row of the prediction result shows '2022-07-30' in the 'date' column.</p>
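Prophet's `predict` builds a brand-new DataFrame and drops every column it does not use, so `id` can never survive the call. Because `predict` preserves row order, the simplest pattern is to predict and then reattach the ids positionally with `.values`, which sidesteps the index-alignment problem described above. A pandas-only sketch, with a hypothetical `predict` standing in for `m.predict`:

```python
import pandas as pd

df_test = pd.DataFrame({"id": ["a1", "a2"],
                        "ds": ["2022-07-30", "2022-07-31"],
                        "Var1": [1, 2], "Var2": [3, 4]})

def predict(frame):
    # Stand-in for m.predict(): returns a fresh frame in the same
    # row order, containing only columns the model knows about.
    return pd.DataFrame({"ds": frame["ds"].values, "yhat": [10.0, 11.0]})

test_forecast = predict(df_test[["ds", "Var1", "Var2"]])
# .values reattaches by position, ignoring whatever index predict produced
test_forecast.insert(0, "id", df_test["id"].values)
print(test_forecast["id"].tolist())  # ['a1', 'a2']
```

If `df_test` has a non-default index, reset it (`df_test = df_test.reset_index(drop=True)`) before predicting so the positional pairing stays unambiguous.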
<python><pandas><facebook-prophet>
2023-10-26 07:30:09
2
347
thenoirlatte
77,364,715
3,089,084
Installing python kats on macos m1
<p>I aim to use <a href="https://facebookresearch.github.io/Kats/" rel="nofollow noreferrer">python kats</a> library to do some time series analysis. I can not get it to install using python 3.11 on my Mac.</p> <p>Basic machine info: M1 Macbook Pro Python Version: 3.11</p> <p>The process for installation:</p> <pre><code>➜ workspace mkdir kats-experiment ➜ workspace cd !$ ➜ workspace cd kats-experiment ➜ kats-experiment pipenv install --python 3.11 Creating a virtualenv for this project... Pipfile: /Users/james/workspace/kats-experiment/Pipfile Using /opt/homebrew/bin/python3 (3.11.6) to create virtualenv... ⠹ Creating virtual environment...created virtual environment CPython3.11.6.final.0-64 in 1716ms creator CPython3Posix(dest=/Users/james/.virtualenvs/kats-experiment-AHXRJQ7w, clear=False, no_vcs_ignore=False, global=False) seeder FromAppData(download=False, pip=bundle, setuptools=bundle, wheel=bundle, via=copy, app_data_dir=/Users/james/Library/Application Support/virtualenv) added seed packages: pip==23.2.1, setuptools==68.2.2, wheel==0.41.2 activators BashActivator,CShellActivator,FishActivator,NushellActivator,PowerShellActivator,PythonActivator ✔ Successfully created virtual environment! Virtualenv location: /Users/james/.virtualenvs/kats-experiment-AHXRJQ7w Creating a Pipfile for this project... Pipfile.lock not found, creating... Locking [packages] dependencies... Locking [dev-packages] dependencies... Updated Pipfile.lock (102cb99fcf0bcc5f47dd749d9fbb3e31d68924b88460c0d504129ac825be2142)! Installing dependencies from Pipfile.lock (be2142)... To activate this project's virtualenv, run pipenv shell. Alternatively, run a command inside the virtualenv with pipenv run. ➜ kats-experiment pipenv install kats Installing kats... Resolving kats... Added kats to Pipfile's [packages] ... ✔ Installation Succeeded Pipfile.lock (be2142) out of date, updating to (4c5c22)... Locking [packages] dependencies... Building requirements... Resolving dependencies... 
✘ Locking Failed! ⠋ Locking... INFO:pipenv.patched.pip._internal.operations.prepare:Collecting kats (from -r /var/folders/gn/cnpn1v5n2f76fkstdxc6vyww0000gn/T/pipenv-s569uhlm-requirements/pipenv-zqq0oo5_-constraints.txt (line 2)) INFO:pipenv.patched.pip._internal.network.download:Using cached kats-0.2.0-py3-none-any.whl (612 kB) INFO:pipenv.patched.pip._internal.operations.prepare:Collecting attrs&gt;=21.2.0 (from kats-&gt;-r /var/folders/gn/cnpn1v5n2f76fkstdxc6vyww0000gn/T/pipenv-s569uhlm-requirements/pipenv-zqq0oo5_-constraints.txt (line 2)) INFO:pipenv.patched.pip._internal.network.download:Using cached attrs-23.1.0-py3-none-any.whl (61 kB) INFO:pipenv.patched.pip._internal.operations.prepare:Collecting deprecated&gt;=1.2.12 (from kats-&gt;-r /var/folders/gn/cnpn1v5n2f76fkstdxc6vyww0000gn/T/pipenv-s569uhlm-requirements/pipenv-zqq0oo5_-constraints.txt (line 2)) INFO:pipenv.patched.pip._internal.operations.prepare:Obtaining dependency information for deprecated&gt;=1.2.12 from https://files.pythonhosted.org/packages/20/8d/778b7d51b981a96554f29136cd59ca7880bf58094338085bcf2a979a0e6a/Deprecated-1.2.14-py2.py3-none-any.whl.metadata INFO:pipenv.patched.pip._internal.network.download:Using cached Deprecated-1.2.14-py2.py3-none-any.whl.metadata (5.4 kB) INFO:pipenv.patched.pip._internal.operations.prepare:Collecting matplotlib&gt;=2.0.0 (from kats-&gt;-r /var/folders/gn/cnpn1v5n2f76fkstdxc6vyww0000gn/T/pipenv-s569uhlm-requirements/pipenv-zqq0oo5_-constraints.txt (line 2)) INFO:pipenv.patched.pip._internal.operations.prepare:Obtaining dependency information for matplotlib&gt;=2.0.0 from https://files.pythonhosted.org/packages/af/f3/fb27b3b902fc759bbca3f9d0336c48069c3022e57552c4b0095d997c7ea8/matplotlib-3.8.0-cp311-cp311-macosx_11_0_arm64.whl.metadata INFO:pipenv.patched.pip._internal.network.download:Using cached matplotlib-3.8.0-cp311-cp311-macosx_11_0_arm64.whl.metadata (5.8 kB) INFO:pipenv.patched.pip._internal.operations.prepare:Collecting numpy&lt;1.22,&gt;=1.21 
(from kats-&gt;-r /var/folders/gn/cnpn1v5n2f76fkstdxc6vyww0000gn/T/pipenv-s569uhlm-requirements/pipenv-zqq0oo5_-constraints.txt (line 2)) INFO:pipenv.patched.pip._internal.network.download:Using cached numpy-1.21.1.zip (10.3 MB) INFO:pipenv.patched.pip._internal.cli.spinners:Installing build dependencies: started INFO:pipenv.patched.pip._internal.cli.spinners:Installing build dependencies: finished with status 'done' INFO:pipenv.patched.pip._internal.cli.spinners:Getting requirements to build wheel: started INFO:pipenv.patched.pip._internal.cli.spinners:Getting requirements to build wheel: finished with status 'done' INFO:pipenv.patched.pip._internal.cli.spinners:Preparing metadata (pyproject.toml): started INFO:pipenv.patched.pip._internal.cli.spinners:Preparing metadata (pyproject.toml): finished with status 'done' INFO:pipenv.patched.pip._internal.operations.prepare:Collecting pandas&lt;=1.3.5,&gt;=1.0.4 (from kats-&gt;-r /var/folders/gn/cnpn1v5n2f76fkstdxc6vyww0000gn/T/pipenv-s569uhlm-requirements/pipenv-zqq0oo5_-constraints.txt (line 2)) INFO:pipenv.patched.pip._internal.network.download:Using cached pandas-1.3.5.tar.gz (4.7 MB) INFO:pipenv.patched.pip._internal.cli.spinners:Installing build dependencies: started INFO:pipenv.patched.pip._internal.cli.spinners:Installing build dependencies: finished with status 'done' INFO:pipenv.patched.pip._internal.cli.spinners:Getting requirements to build wheel: started INFO:pipenv.patched.pip._internal.cli.spinners:Getting requirements to build wheel: finished with status 'done' INFO:pipenv.patched.pip._internal.cli.spinners:Preparing metadata (pyproject.toml): started INFO:pipenv.patched.pip._internal.cli.spinners:Preparing metadata (pyproject.toml): finished with status 'done' INFO:pipenv.patched.pip._internal.operations.prepare:Collecting python-dateutil&gt;=2.8.0 (from kats-&gt;-r /var/folders/gn/cnpn1v5n2f76fkstdxc6vyww0000gn/T/pipenv-s569uhlm-requirements/pipenv-zqq0oo5_-constraints.txt (line 2)) 
INFO:pipenv.patched.pip._internal.network.download:Using cached python_dateutil-2.8.2-py2.py3-none-any.whl (247 kB) INFO:pipenv.patched.pip._internal.operations.prepare:Collecting pystan==2.19.1.1 (from kats-&gt;-r /var/folders/gn/cnpn1v5n2f76fkstdxc6vyww0000gn/T/pipenv-s569uhlm-requirements/pipenv-zqq0oo5_-constraints.txt (line 2)) INFO:pipenv.patched.pip._internal.network.download:Using cached pystan-2.19.1.1.tar.gz (16.2 MB) INFO:pipenv.patched.pip._internal.cli.spinners:Preparing metadata (setup.py): started INFO:pipenv.patched.pip._internal.cli.spinners:Preparing metadata (setup.py): finished with status 'done' INFO:pipenv.patched.pip._internal.operations.prepare:Collecting fbprophet==0.7.1 (from kats-&gt;-r /var/folders/gn/cnpn1v5n2f76fkstdxc6vyww0000gn/T/pipenv-s569uhlm-requirements/pipenv-zqq0oo5_-constraints.txt (line 2)) INFO:pipenv.patched.pip._internal.network.download:Using cached fbprophet-0.7.1.tar.gz (64 kB) INFO:pipenv.patched.pip._internal.cli.spinners:Preparing metadata (setup.py): started INFO:pipenv.patched.pip._internal.cli.spinners:Preparing metadata (setup.py): finished with status 'done' INFO:pipenv.patched.pip._internal.operations.prepare:Collecting scikit-learn&gt;=0.24.2 (from kats-&gt;-r /var/folders/gn/cnpn1v5n2f76fkstdxc6vyww0000gn/T/pipenv-s569uhlm-requirements/pipenv-zqq0oo5_-constraints.txt (line 2)) INFO:pipenv.patched.pip._internal.operations.prepare:Obtaining dependency information for scikit-learn&gt;=0.24.2 from https://files.pythonhosted.org/packages/40/c6/2e91eefb757822e70d351e02cc38d07c137212ae7c41ac12746415b4860a/scikit_learn-1.3.2-cp311-cp311-macosx_12_0_arm64.whl.metadata INFO:pipenv.patched.pip._internal.network.download:Using cached scikit_learn-1.3.2-cp311-cp311-macosx_12_0_arm64.whl.metadata (11 kB) INFO:pipenv.patched.pip._internal.operations.prepare:Collecting scipy&lt;1.8.0 (from kats-&gt;-r /var/folders/gn/cnpn1v5n2f76fkstdxc6vyww0000gn/T/pipenv-s569uhlm-requirements/pipenv-zqq0oo5_-constraints.txt (line 2)) 
INFO:pipenv.patched.pip._internal.network.download:Using cached scipy-1.6.1.tar.gz (27.3 MB) INFO:pipenv.patched.pip._internal.cli.spinners:Installing build dependencies: started INFO:pipenv.patched.pip._internal.cli.spinners:Installing build dependencies: finished with status 'done' INFO:pipenv.patched.pip._internal.cli.spinners:Getting requirements to build wheel: started INFO:pipenv.patched.pip._internal.cli.spinners:Getting requirements to build wheel: finished with status 'done' INFO:pipenv.patched.pip._internal.cli.spinners:Preparing metadata (pyproject.toml): started INFO:pipenv.patched.pip._internal.cli.spinners:Preparing metadata (pyproject.toml): finished with status 'error' ERROR:pip.subprocessor:[present-rich] Preparing metadata (pyproject.toml) exited with 1 [ResolutionFailure]: File &quot;/opt/homebrew/Cellar/pipenv/2023.8.26/libexec/lib/python3.11/site-packages/pipenv/resolver.py&quot;, line 646, in _main [ResolutionFailure]: resolve_packages( [ResolutionFailure]: File &quot;/opt/homebrew/Cellar/pipenv/2023.8.26/libexec/lib/python3.11/site-packages/pipenv/resolver.py&quot;, line 613, in resolve_packages [ResolutionFailure]: results, resolver = resolve( [ResolutionFailure]: ^^^^^^^^ [ResolutionFailure]: File &quot;/opt/homebrew/Cellar/pipenv/2023.8.26/libexec/lib/python3.11/site-packages/pipenv/resolver.py&quot;, line 593, in resolve [ResolutionFailure]: return resolve_deps( [ResolutionFailure]: ^^^^^^^^^^^^^ [ResolutionFailure]: File &quot;/opt/homebrew/Cellar/pipenv/2023.8.26/libexec/lib/python3.11/site-packages/pipenv/utils/resolver.py&quot;, line 845, in resolve_deps [ResolutionFailure]: results, hashes, internal_resolver = actually_resolve_deps( [ResolutionFailure]: ^^^^^^^^^^^^^^^^^^^^^^ [ResolutionFailure]: File &quot;/opt/homebrew/Cellar/pipenv/2023.8.26/libexec/lib/python3.11/site-packages/pipenv/utils/resolver.py&quot;, line 618, in actually_resolve_deps [ResolutionFailure]: resolver.resolve() [ResolutionFailure]: File 
&quot;/opt/homebrew/Cellar/pipenv/2023.8.26/libexec/lib/python3.11/site-packages/pipenv/utils/resolver.py&quot;, line 444, in resolve [ResolutionFailure]: raise ResolutionFailure(message=str(e)) [pipenv.exceptions.ResolutionFailure]: Warning: Your dependencies could not be resolved. You likely have a mismatch in your sub-dependencies. You can use $ pipenv run pip install &lt;requirement_name&gt; to bypass this mechanism, then run $ pipenv graph to inspect the versions actually installed in the virtualenv. Hint: try $ pipenv lock --pre if it is a pre-release dependency. ERROR: metadata generation failed Traceback (most recent call last): File &quot;/opt/homebrew/Cellar/pipenv/2023.8.26/libexec/bin/pipenv&quot;, line 8, in &lt;module&gt; sys.exit(cli()) ^^^^^ File &quot;/opt/homebrew/Cellar/pipenv/2023.8.26/libexec/lib/python3.11/site-packages/pipenv/vendor/click/core.py&quot;, line 1130, in __call__ return self.main(*args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^ File &quot;/opt/homebrew/Cellar/pipenv/2023.8.26/libexec/lib/python3.11/site-packages/pipenv/cli/options.py&quot;, line 58, in main return super().main(*args, **kwargs, windows_expand_args=False) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File &quot;/opt/homebrew/Cellar/pipenv/2023.8.26/libexec/lib/python3.11/site-packages/pipenv/vendor/click/core.py&quot;, line 1055, in main rv = self.invoke(ctx) ^^^^^^^^^^^^^^^^ File &quot;/opt/homebrew/Cellar/pipenv/2023.8.26/libexec/lib/python3.11/site-packages/pipenv/vendor/click/core.py&quot;, line 1657, in invoke return _process_result(sub_ctx.command.invoke(sub_ctx)) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File &quot;/opt/homebrew/Cellar/pipenv/2023.8.26/libexec/lib/python3.11/site-packages/pipenv/vendor/click/core.py&quot;, line 1404, in invoke return ctx.invoke(self.callback, **ctx.params) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File &quot;/opt/homebrew/Cellar/pipenv/2023.8.26/libexec/lib/python3.11/site-packages/pipenv/vendor/click/core.py&quot;, line 760, in 
invoke return __callback(*args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^ File &quot;/opt/homebrew/Cellar/pipenv/2023.8.26/libexec/lib/python3.11/site-packages/pipenv/vendor/click/decorators.py&quot;, line 84, in new_func return ctx.invoke(f, obj, *args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File &quot;/opt/homebrew/Cellar/pipenv/2023.8.26/libexec/lib/python3.11/site-packages/pipenv/vendor/click/core.py&quot;, line 760, in invoke return __callback(*args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^ File &quot;/opt/homebrew/Cellar/pipenv/2023.8.26/libexec/lib/python3.11/site-packages/pipenv/cli/command.py&quot;, line 209, in install do_install( File &quot;/opt/homebrew/Cellar/pipenv/2023.8.26/libexec/lib/python3.11/site-packages/pipenv/routines/install.py&quot;, line 297, in do_install raise e File &quot;/opt/homebrew/Cellar/pipenv/2023.8.26/libexec/lib/python3.11/site-packages/pipenv/routines/install.py&quot;, line 281, in do_install do_init( File &quot;/opt/homebrew/Cellar/pipenv/2023.8.26/libexec/lib/python3.11/site-packages/pipenv/routines/install.py&quot;, line 648, in do_init do_lock( File &quot;/opt/homebrew/Cellar/pipenv/2023.8.26/libexec/lib/python3.11/site-packages/pipenv/routines/lock.py&quot;, line 65, in do_lock venv_resolve_deps( File &quot;/opt/homebrew/Cellar/pipenv/2023.8.26/libexec/lib/python3.11/site-packages/pipenv/utils/resolver.py&quot;, line 786, in venv_resolve_deps c = resolve(cmd, st, project=project) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File &quot;/opt/homebrew/Cellar/pipenv/2023.8.26/libexec/lib/python3.11/site-packages/pipenv/utils/resolver.py&quot;, line 655, in resolve raise RuntimeError(&quot;Failed to lock Pipfile.lock!&quot;) RuntimeError: Failed to lock Pipfile.lock! 
</code></pre> <p>This error is not very descriptive</p> <p>If, as suggested, I run:</p> <pre><code>pipenv run pip install kats </code></pre> <p>The output is:</p> <pre><code>Collecting kats Using cached kats-0.2.0-py3-none-any.whl (612 kB) Collecting attrs&gt;=21.2.0 (from kats) Using cached attrs-23.1.0-py3-none-any.whl (61 kB) Collecting deprecated&gt;=1.2.12 (from kats) Obtaining dependency information for deprecated&gt;=1.2.12 from https://files.pythonhosted.org/packages/20/8d/778b7d51b981a96554f29136cd59ca7880bf58094338085bcf2a979a0e6a/Deprecated-1.2.14-py2.py3-none-any.whl.metadata Using cached Deprecated-1.2.14-py2.py3-none-any.whl.metadata (5.4 kB) Collecting matplotlib&gt;=2.0.0 (from kats) Obtaining dependency information for matplotlib&gt;=2.0.0 from https://files.pythonhosted.org/packages/af/f3/fb27b3b902fc759bbca3f9d0336c48069c3022e57552c4b0095d997c7ea8/matplotlib-3.8.0-cp311-cp311-macosx_11_0_arm64.whl.metadata Using cached matplotlib-3.8.0-cp311-cp311-macosx_11_0_arm64.whl.metadata (5.8 kB) Collecting numpy&lt;1.22,&gt;=1.21 (from kats) Using cached numpy-1.21.1.zip (10.3 MB) Installing build dependencies ... done Getting requirements to build wheel ... done Preparing metadata (pyproject.toml) ... done Collecting pandas&lt;=1.3.5,&gt;=1.0.4 (from kats) Using cached pandas-1.3.5.tar.gz (4.7 MB) Installing build dependencies ... done Getting requirements to build wheel ... done Preparing metadata (pyproject.toml) ... done Collecting python-dateutil&gt;=2.8.0 (from kats) Using cached python_dateutil-2.8.2-py2.py3-none-any.whl (247 kB) Collecting pystan==2.19.1.1 (from kats) Using cached pystan-2.19.1.1-cp311-cp311-macosx_13_0_arm64.whl Collecting fbprophet==0.7.1 (from kats) Using cached fbprophet-0.7.1.tar.gz (64 kB) Preparing metadata (setup.py) ... 
done Collecting scikit-learn&gt;=0.24.2 (from kats) Obtaining dependency information for scikit-learn&gt;=0.24.2 from https://files.pythonhosted.org/packages/40/c6/2e91eefb757822e70d351e02cc38d07c137212ae7c41ac12746415b4860a/scikit_learn-1.3.2-cp311-cp311-macosx_12_0_arm64.whl.metadata Using cached scikit_learn-1.3.2-cp311-cp311-macosx_12_0_arm64.whl.metadata (11 kB) Collecting scipy&lt;1.8.0 (from kats) Using cached scipy-1.6.1.tar.gz (27.3 MB) Installing build dependencies ... done Getting requirements to build wheel ... done Preparing metadata (pyproject.toml) ... error error: subprocess-exited-with-error × Preparing metadata (pyproject.toml) did not run successfully. │ exit code: 1 ╰─&gt; [123 lines of output] setup.py:461: UserWarning: Unrecognized setuptools command ('dist_info --egg-base /private/var/folders/gn/cnpn1v5n2f76fkstdxc6vyww0000gn/T/pip-modern-metadata-1apq9j1t'), proceeding with generating Cython sources and expanding templates warnings.warn(&quot;Unrecognized setuptools command ('{}'), proceeding with &quot; setup.py:563: DeprecationWarning: `numpy.distutils` is deprecated since NumPy 1.23.0, as a result of the deprecation of `distutils` itself. It will be removed for Python &gt;= 3.12. For older Python versions it will remain present. It is recommended to use `setuptools &lt; 60.0` for those Python versions. For more details, see: https://numpy.org/devdocs/reference/distutils_status_migration.html from numpy.distutils.core import setup Running from SciPy source directory. 
INFO: lapack_opt_info: INFO: lapack_armpl_info: INFO: customize UnixCCompiler INFO: libraries armpl_lp64_mp not found in ['/Users/james/.virtualenvs/kats-experiment-AHXRJQ7w/lib', '/usr/local/lib', '/usr/lib'] INFO: NOT AVAILABLE INFO: INFO: lapack_mkl_info: INFO: libraries mkl_rt not found in ['/Users/james/.virtualenvs/kats-experiment-AHXRJQ7w/lib', '/usr/local/lib', '/usr/lib'] INFO: NOT AVAILABLE INFO: INFO: lapack_ssl2_info: INFO: libraries fjlapackexsve not found in ['/Users/james/.virtualenvs/kats-experiment-AHXRJQ7w/lib', '/usr/local/lib', '/usr/lib'] INFO: NOT AVAILABLE INFO: INFO: openblas_lapack_info: INFO: libraries openblas not found in ['/Users/james/.virtualenvs/kats-experiment-AHXRJQ7w/lib', '/usr/local/lib', '/usr/lib'] INFO: NOT AVAILABLE INFO: INFO: openblas_clapack_info: INFO: libraries openblas,lapack not found in ['/Users/james/.virtualenvs/kats-experiment-AHXRJQ7w/lib', '/usr/local/lib', '/usr/lib'] INFO: NOT AVAILABLE INFO: INFO: flame_info: INFO: libraries flame not found in ['/Users/james/.virtualenvs/kats-experiment-AHXRJQ7w/lib', '/usr/local/lib', '/usr/lib'] INFO: NOT AVAILABLE INFO: INFO: accelerate_info: INFO: NOT AVAILABLE INFO: INFO: atlas_3_10_threads_info: INFO: Setting PTATLAS=ATLAS INFO: libraries tatlas,tatlas not found in /Users/james/.virtualenvs/kats-experiment-AHXRJQ7w/lib INFO: libraries tatlas,tatlas not found in /usr/local/lib INFO: libraries tatlas,tatlas not found in /usr/lib INFO: &lt;class 'numpy.distutils.system_info.atlas_3_10_threads_info'&gt; INFO: NOT AVAILABLE INFO: INFO: atlas_3_10_info: INFO: libraries satlas,satlas not found in /Users/james/.virtualenvs/kats-experiment-AHXRJQ7w/lib INFO: libraries satlas,satlas not found in /usr/local/lib INFO: libraries satlas,satlas not found in /usr/lib INFO: &lt;class 'numpy.distutils.system_info.atlas_3_10_info'&gt; INFO: NOT AVAILABLE INFO: INFO: atlas_threads_info: INFO: Setting PTATLAS=ATLAS INFO: libraries ptf77blas,ptcblas,atlas not found in 
/Users/james/.virtualenvs/kats-experiment-AHXRJQ7w/lib INFO: libraries ptf77blas,ptcblas,atlas not found in /usr/local/lib INFO: libraries ptf77blas,ptcblas,atlas not found in /usr/lib INFO: &lt;class 'numpy.distutils.system_info.atlas_threads_info'&gt; INFO: NOT AVAILABLE INFO: INFO: atlas_info: INFO: libraries f77blas,cblas,atlas not found in /Users/james/.virtualenvs/kats-experiment-AHXRJQ7w/lib INFO: libraries f77blas,cblas,atlas not found in /usr/local/lib INFO: libraries f77blas,cblas,atlas not found in /usr/lib INFO: &lt;class 'numpy.distutils.system_info.atlas_info'&gt; INFO: NOT AVAILABLE INFO: INFO: lapack_info: INFO: libraries lapack not found in ['/Users/james/.virtualenvs/kats-experiment-AHXRJQ7w/lib', '/usr/local/lib', '/usr/lib'] INFO: NOT AVAILABLE INFO: /private/var/folders/gn/cnpn1v5n2f76fkstdxc6vyww0000gn/T/pip-build-env-nu_ho94k/overlay/lib/python3.11/site-packages/numpy/distutils/system_info.py:1974: UserWarning: Lapack (http://www.netlib.org/lapack/) libraries not found. Directories to search for the libraries can be specified in the numpy/distutils/site.cfg file (section [lapack]) or by setting the LAPACK environment variable. return getattr(self, '_calc_info_{}'.format(name))() INFO: lapack_src_info: INFO: NOT AVAILABLE INFO: /private/var/folders/gn/cnpn1v5n2f76fkstdxc6vyww0000gn/T/pip-build-env-nu_ho94k/overlay/lib/python3.11/site-packages/numpy/distutils/system_info.py:1974: UserWarning: Lapack (http://www.netlib.org/lapack/) sources not found. Directories to search for the sources can be specified in the numpy/distutils/site.cfg file (section [lapack_src]) or by setting the LAPACK_SRC environment variable. 
return getattr(self, '_calc_info_{}'.format(name))() INFO: NOT AVAILABLE INFO: Traceback (most recent call last): File &quot;/Users/james/.virtualenvs/kats-experiment-AHXRJQ7w/lib/python3.11/site-packages/pip/_vendor/pyproject_hooks/_in_process/_in_process.py&quot;, line 353, in &lt;module&gt; main() File &quot;/Users/james/.virtualenvs/kats-experiment-AHXRJQ7w/lib/python3.11/site-packages/pip/_vendor/pyproject_hooks/_in_process/_in_process.py&quot;, line 335, in main json_out['return_val'] = hook(**hook_input['kwargs']) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File &quot;/Users/james/.virtualenvs/kats-experiment-AHXRJQ7w/lib/python3.11/site-packages/pip/_vendor/pyproject_hooks/_in_process/_in_process.py&quot;, line 149, in prepare_metadata_for_build_wheel return hook(metadata_directory, config_settings) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File &quot;/private/var/folders/gn/cnpn1v5n2f76fkstdxc6vyww0000gn/T/pip-build-env-nu_ho94k/overlay/lib/python3.11/site-packages/setuptools/build_meta.py&quot;, line 161, in prepare_metadata_for_build_wheel self.run_setup() File &quot;/private/var/folders/gn/cnpn1v5n2f76fkstdxc6vyww0000gn/T/pip-build-env-nu_ho94k/overlay/lib/python3.11/site-packages/setuptools/build_meta.py&quot;, line 254, in run_setup self).run_setup(setup_script=setup_script) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File &quot;/private/var/folders/gn/cnpn1v5n2f76fkstdxc6vyww0000gn/T/pip-build-env-nu_ho94k/overlay/lib/python3.11/site-packages/setuptools/build_meta.py&quot;, line 145, in run_setup exec(compile(code, __file__, 'exec'), locals()) File &quot;setup.py&quot;, line 588, in &lt;module&gt; setup_package() File &quot;setup.py&quot;, line 584, in setup_package setup(**metadata) File &quot;/private/var/folders/gn/cnpn1v5n2f76fkstdxc6vyww0000gn/T/pip-build-env-nu_ho94k/overlay/lib/python3.11/site-packages/numpy/distutils/core.py&quot;, line 136, in setup config = configuration() ^^^^^^^^^^^^^^^ File &quot;setup.py&quot;, line 499, in configuration raise 
NotFoundError(msg) numpy.distutils.system_info.NotFoundError: No BLAS/LAPACK libraries found. Note: Accelerate is no longer supported. To build Scipy from sources, BLAS &amp; LAPACK libraries need to be installed. See site.cfg.example in the Scipy source directory and https://docs.scipy.org/doc/scipy/reference/building/index.html for details. [end of output] note: This error originates from a subprocess, and is likely not a problem with pip. error: metadata-generation-failed × Encountered error while generating package metadata. ╰─&gt; See above for output. note: This is an issue with the package mentioned above, not pip. hint: See above for details. </code></pre> <p>I have tried installing openblas with</p> <pre><code>brew install openblas </code></pre> <p>This still errors.</p> <p>Any help appreciated.</p>
<python><pipenv>
2023-10-26 06:52:34
1
1,222
silverdagger
77,364,710
7,052,826
Plotly express inconsistent behavior with text size
<p>I am having trouble controlling text size within plotly.express figures.</p> <p>If I create some fake data like so, it appears to work.</p> <pre><code>df = pd.DataFrame({ &quot;x&quot;:['1','2','2','1','1','2','2','1'], 'y':[10,20,30,40,30,20,10,10], 'type':['a','a','b','b','a','a','b','b'], 'text':['00000','000','0','00','00000','000','0','00']}) fig = px.bar(df, x='x',y='y', text='text', barmode=&quot;group&quot;, color='type') fig.update_traces(textposition='outside', textfont_size=3000000, textangle=90, cliponaxis=False) fig.show() </code></pre> <p><a href="https://i.sstatic.net/eTU4h.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/eTU4h.png" alt="enter image description here" /></a></p> <p>Note that I'm setting the <code>textfont_size</code> ridiculously large, but we can see this actually reflected in the figure.</p> <h1>The problem</h1> <p>However, when I use my real data, the text size is simply small, despite the very large <code>textfont_size</code>. I cannot share my real data here, but the <code>dtypes</code> are identical to the example shown above.</p> <pre><code># my real data data = ... # can't share assert all(data.dtypes == df.dtypes) # not a problem, all identical print(data.dtypes) x object y int64 type object text object dtype: object # identical code as above fig = px.bar(data, x='x',y='y', text='text', barmode=&quot;group&quot;, color='type') fig.update_traces(textposition='outside', textfont_size=3000000, textangle=90, cliponaxis=False) fig.show() </code></pre> <p>As we can see, while everything appears (?) identical, the large <code>textfont_size</code> is ignored and instead the text is small.</p> <p><a href="https://i.sstatic.net/XvZAy.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/XvZAy.png" alt="enter image description here" /></a></p>
<python><plotly><plotly-express>
2023-10-26 06:51:45
0
4,155
Mitchell van Zuylen
77,364,550
6,323,020
AttributeError: module 'pkgutil' has no attribute 'ImpImporter'. Did you mean: 'zipimporter'?
<p>Earlier I installed some packages like <a href="https://en.wikipedia.org/wiki/Matplotlib" rel="noreferrer">Matplotlib</a>, <a href="https://en.wikipedia.org/wiki/NumPy" rel="noreferrer">NumPy</a>, pip (version 23.3.1), wheel (version 0.41.2), etc., and did some programming with those. I used the command <code>C:\Users\UserName&gt;pip list</code> to find the list of packages that I have installed, and I am using Python 3.12.0 (checked with <code>C:\Users\UserName&gt;py -V</code>).</p> <p>I need to use <a href="https://github.com/spedas/pyspedas" rel="noreferrer">pyspedas</a> to analyse some data. I am following the instructions that I received from the site to install the package, with one variation (I am not sure whether it matters: I am using <code>py</code> instead of <code>python</code>). The commands that I use, in order, are:</p> <pre class="lang-none prettyprint-override"><code>py -m venv pyspedas .\pyspedas\Scripts\activate pip install pyspedas </code></pre> <p>After the last step, I am getting the following output:</p> <pre class="lang-none prettyprint-override"><code>Collecting pyspedas Using cached pyspedas-1.4.47-py3-none-any.whl.metadata (14 kB) Collecting numpy&gt;=1.19.5 (from pyspedas) Using cached numpy-1.26.1-cp312-cp312-win_amd64.whl.metadata (61 kB) Collecting requests (from pyspedas) Using cached requests-2.31.0-py3-none-any.whl.metadata (4.6 kB) Collecting geopack&gt;=1.0.10 (from pyspedas) Using cached geopack-1.0.10-py3-none-any.whl (114 kB) Collecting cdflib&lt;1.0.0 (from pyspedas) Using cached cdflib-0.4.9-py3-none-any.whl (72 kB) Collecting cdasws&gt;=1.7.24 (from pyspedas) Using cached cdasws-1.7.43.tar.gz (21 kB) Installing build dependencies ... done Getting requirements to build wheel ... done Preparing metadata (pyproject.toml) ... 
done Collecting netCDF4&gt;=1.6.2 (from pyspedas) Using cached netCDF4-1.6.5-cp312-cp312-win_amd64.whl.metadata (1.8 kB) Collecting pywavelets (from pyspedas) Using cached PyWavelets-1.4.1.tar.gz (4.6 MB) Installing build dependencies ... done Getting requirements to build wheel ... error error: subprocess-exited-with-error × Getting requirements to build wheel did not run successfully. │ exit code: 1 ╰─&gt; [33 lines of output] Traceback (most recent call last): File &quot;C:\Users\UserName\pyspedas\Lib\site-packages\pip\_vendor\pyproject_hooks\_in_process\_in_process.py&quot;, line 353, in &lt;module&gt; main() File &quot;C:\Users\UserName\pyspedas\Lib\site-packages\pip\_vendor\pyproject_hooks\_in_process\_in_process.py&quot;, line 335, in main json_out['return_val'] = hook(**hook_input['kwargs']) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File &quot;C:\Users\UserName\pyspedas\Lib\site-packages\pip\_vendor\pyproject_hooks\_in_process\_in_process.py&quot;, line 112, in get_requires_for_build_wheel backend = _build_backend() ^^^^^^^^^^^^^^^^ File &quot;C:\Users\UserName\pyspedas\Lib\site-packages\pip\_vendor\pyproject_hooks\_in_process\_in_process.py&quot;, line 77, in _build_backend obj = import_module(mod_path) ^^^^^^^^^^^^^^^^^^^^^^^ File &quot;C:\Users\UserName\AppData\Local\Programs\Python\Python312\Lib\importlib\__init__.py&quot;, line 90, in import_module return _bootstrap._gcd_import(name[level:], package, level) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File &quot;&lt;frozen importlib._bootstrap&gt;&quot;, line 1381, in _gcd_import File &quot;&lt;frozen importlib._bootstrap&gt;&quot;, line 1354, in _find_and_load File &quot;&lt;frozen importlib._bootstrap&gt;&quot;, line 1304, in _find_and_load_unlocked File &quot;&lt;frozen importlib._bootstrap&gt;&quot;, line 488, in _call_with_frames_removed File &quot;&lt;frozen importlib._bootstrap&gt;&quot;, line 1381, in _gcd_import File &quot;&lt;frozen importlib._bootstrap&gt;&quot;, line 1354, in _find_and_load File 
&quot;&lt;frozen importlib._bootstrap&gt;&quot;, line 1325, in _find_and_load_unlocked File &quot;&lt;frozen importlib._bootstrap&gt;&quot;, line 929, in _load_unlocked File &quot;&lt;frozen importlib._bootstrap_external&gt;&quot;, line 994, in exec_module File &quot;&lt;frozen importlib._bootstrap&gt;&quot;, line 488, in _call_with_frames_removed File &quot;C:\Users\UserName\AppData\Local\Temp\pip-build-env-_lgbq70y\overlay\Lib\site-packages\setuptools\__init__.py&quot;, line 16, in &lt;module&gt; import setuptools.version File &quot;C:\Users\UserName\AppData\Local\Temp\pip-build-env-_lgbq70y\overlay\Lib\site-packages\setuptools\version.py&quot;, line 1, in &lt;module&gt; import pkg_resources File &quot;C:\Users\UserName\AppData\Local\Temp\pip-build-env-_lgbq70y\overlay\Lib\site-packages\pkg_resources\__init__.py&quot;, line 2191, in &lt;module&gt; register_finder(pkgutil.ImpImporter, find_on_path) ^^^^^^^^^^^^^^^^^^^ AttributeError: module 'pkgutil' has no attribute 'ImpImporter'. Did you mean: 'zipimporter'? [end of output] note: This error originates from a subprocess, and is likely not a problem with pip. error: subprocess-exited-with-error × Getting requirements to build wheel did not run successfully. │ exit code: 1 ╰─&gt; See above for output. note: This error originates from a subprocess, and is likely not a problem with pip. </code></pre> <p>After a little bit of googling, I came to know that this issue has been reported in multiple places, but none for this package. I did install wheel in the new environment as mentioned in the answer <a href="https://stackoverflow.com/a/56504270/6323020">here</a>, but the problem still persists.</p> <p>Instead of setting up a virtual environment, I simply executed the command <code>py -m pip install pyspedas</code>. 
But I am still getting the error.</p> <p>What I could gather is that the program has an issue with</p> <pre class="lang-none prettyprint-override"><code>Collecting pywavelets (from pyspedas) Using cached PyWavelets-1.4.1.tar.gz (4.6 MB) Installing build dependencies ... done </code></pre> <p>I am using <a href="https://en.wikipedia.org/wiki/IDLE" rel="noreferrer">IDLE</a> in Windows 11.</p>
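Editor's note: the traceback points at <code>pkg_resources</code> referencing <code>pkgutil.ImpImporter</code>, which no longer exists on Python 3.12 (it depended on the removed <code>imp</code> module). A small check, reproducible on any interpreter:

```python
import pkgutil
import sys

# pkgutil.ImpImporter was removed in Python 3.12; pkg_resources shipped with
# older setuptools still references it at import time, which is exactly the
# AttributeError shown in the traceback above.
has_imp_importer = hasattr(pkgutil, "ImpImporter")
print(sys.version_info[:2], has_imp_importer)
```

The usual workaround (an inference from the traceback, not something verified in the post) is to upgrade `pip`/`setuptools`/`wheel` inside the environment before installing, or to use a Python version that the package's pinned build dependencies support.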
<python><python-3.x><numpy><pip>
2023-10-26 06:22:24
9
2,687
sreeraj t
77,364,461
839,733
Find the length of the last word
<p>I'm working on the LeetCode <a href="https://leetcode.com/problems/length-of-last-word/description" rel="nofollow noreferrer">58. Length of Last Word</a>.</p> <blockquote> <p>Given a string s consisting of words and spaces, return the length of the last word in the string.</p> <p>A word is a maximal substring consisting of non-space characters only.</p> </blockquote> <p>Example:</p> <blockquote> <p>Input: s = &quot; fly me to the moon &quot;</p> <p>Output: 4</p> <p>Explanation: The last word is &quot;moon&quot; with length 4.</p> </blockquote> <p>This can be trivially solved as follows:</p> <pre><code>return len(s.split()[-1]) </code></pre> <p>Assuming the question is asked in an interview, it is customary to impose artificial restrictions, so, let's say that the built-in <code>split</code> or <code>splitlines</code> can't be used. No problem.</p> <pre><code>from operator import methodcaller, not_ from itertools import takewhile, dropwhile def lengthOfLastWord(s: str) -&gt; int: space = methodcaller(&quot;isspace&quot;) word = takewhile(not_(space), dropwhile(space, s[::-1])) return len(list(word)) </code></pre> <p>This gives me the following error:</p> <pre><code>TypeError: 'bool' object is not callable return len(list(word)) </code></pre> <p>I can, of course, use lambdas instead of <code>methodcaller</code>, but knowing that I need to use the same function, <code>isspace</code>, the above attempt seems more elegant. In a language like Haskell, I would be able to simply use <code>isSpace</code> and <code>not . isSpace</code> directly. Python is not Haskell, so, how do I get the same effect?</p> <p>Please note that there are other ways of solving this question, like using a reverse <code>for</code> loop. I'm not looking for other or better options, merely how to make the above code work by negating the callable returned by <code>methodcaller</code>.</p>
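Editor's note: the error arises because <code>operator.not_(space)</code> evaluates <code>not space</code> immediately — the <code>methodcaller</code> object is truthy, so this yields the plain <code>bool</code> <code>False</code>, which <code>takewhile</code> then tries to call. One sketch of a lazy negation helper (the <code>negate</code> name is my own, not from the post):

```python
from itertools import dropwhile, takewhile
from operator import methodcaller

def negate(f):
    # operator.not_(f) computes `not f` right away (a bool); this instead
    # returns a callable that negates f's result when invoked.
    return lambda *args, **kwargs: not f(*args, **kwargs)

def length_of_last_word(s: str) -> int:
    space = methodcaller("isspace")
    # Reverse the string, drop trailing spaces, then take the last word.
    word = takewhile(negate(space), dropwhile(space, s[::-1]))
    return len(list(word))

print(length_of_last_word("   fly me   to   the moon  "))  # -> 4
```

This keeps the Haskell-like `not . isSpace` composition the question is after, at the cost of one small helper.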
<python><python-itertools><python-operator>
2023-10-26 06:03:44
4
25,239
Abhijit Sarkar
77,364,235
4,451,521
How to efficiently select some data from a large trajectory set
<p><strong>The situation</strong></p> <p>I have one dataframe with one (well actually three but let's start with one) set of coordinates (X,Y) or <strong>trajectory</strong> (in two columns). For the sake to illustrate my point and to have some reproducible example let's use this code</p> <pre><code>import numpy as np import pandas as pd import plotly.graph_objects as go # Create data for trajectories n_points = 500 theta = np.linspace(0, 2 * np.pi, n_points) data = pd.DataFrame({ 'X': [10 * np.cos(theta) for theta in np.linspace(0, 2 * np.pi, n_points)], 'Y': [10 * np.sin(theta) for theta in np.linspace(0, 2 * np.pi, n_points)], 'XL': [11 * np.cos(theta) for theta in np.linspace(0, 2 * np.pi, n_points)], 'YL': [11 * np.sin(theta) for theta in np.linspace(0, 2 * np.pi, n_points)], 'XR': [9 * np.cos(theta) for theta in np.linspace(0, 2 * np.pi, n_points)], 'YR': [9 * np.sin(theta) for theta in np.linspace(0, 2 * np.pi, n_points)] }) fig = go.Figure() # Add traces for trajectories fig.add_trace(go.Scatter(x=data['X'], y=data['Y'], mode='lines+markers', name='Circle 1', line=dict(color='blue'))) fig.add_trace(go.Scatter(x=data['XL'], y=data['YL'], mode='lines+markers', name='Circle 2', line=dict(color='red'))) fig.add_trace(go.Scatter(x=data['XR'], y=data['YR'], mode='lines+markers', name='Circle 3', line=dict(color='green'))) # Define the rectangle parameters XC, YC = 7.07, 7.07 # Center of the rectangle XD, YD = 1, -1 # Direction of the rectangle width = 3 height = 4 # Calculate the corner points of the rectangle using trigonometric functions theta = np.arctan2(YD, XD) cos_theta = np.cos(theta) sin_theta = np.sin(theta) rectangle_x = [XC + 0.5 * width * cos_theta - 0.5 * height * sin_theta, XC - 0.5 * width * cos_theta - 0.5 * height * sin_theta, XC - 0.5 * width * cos_theta + 0.5 * height * sin_theta, XC + 0.5 * width * cos_theta + 0.5 * height * sin_theta, XC + 0.5 * width * cos_theta - 0.5 * height * sin_theta] rectangle_y = [YC + 0.5 * width * sin_theta + 0.5 * 
height * cos_theta, YC - 0.5 * width * sin_theta + 0.5 * height * cos_theta, YC - 0.5 * width * sin_theta - 0.5 * height * cos_theta, YC + 0.5 * width * sin_theta - 0.5 * height * cos_theta, YC + 0.5 * width * sin_theta + 0.5 * height * cos_theta] # Add a trace for the rectangle fig.add_trace(go.Scatter(x=rectangle_x, y=rectangle_y, fill='toself', name='Rectangle')) # Customize layout fig.update_layout( title='Trajectories and Rectangle Plot', xaxis_title='X-axis', yaxis_title='Y-axis', showlegend=True ) # Set axis aspect ratio to ensure circles appear as circles fig.update_xaxes(scaleanchor=&quot;y&quot;, scaleratio=1) fig.update_yaxes(scaleanchor=&quot;x&quot;, scaleratio=1) fig.show() </code></pre> <p>This gives us:</p> <p><a href="https://i.sstatic.net/K7E7v.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/K7E7v.png" alt="enter image description here" /></a></p> <p>Note that for the example I used a perfect circular trajectory, but it can be anything, as long as it is continuous.</p> <p><strong>What I want</strong></p> <p>As you can see in the plot, there is a rectangle. This rectangle is centered at a point (XC, YC) on the X,Y trajectory.</p> <p>What I want is to get <strong>only</strong> the data (I am not sure in what format, perhaps a sub-dataframe?) that is contained inside the rectangle (and, let's say, plot it for confirmation).</p> <p>I realize that I can compare all the data that I have to check whether it is inside the rectangle, but here I have 500 points and I expect to have many more in the trajectory (50,000 points), and I believe checking all of them might take too much time.</p> <p>Is there a way to efficiently select some geographical region (the rectangle) from the original data?</p> <p><em>(After note)</em> After selecting the data, I would like to transform its coordinates so that I can plot it in coordinates where (XC, YC) is the origin and the rectangle sides are parallel to X,Y. 
But first I need the data</p>
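A vectorised point-in-rotated-rectangle test is usually fast enough even for 50,000 rows. A sketch using the question's circle and rectangle parameters (the `u`/`v` names are mine):

```python
import numpy as np
import pandas as pd

# Express every point in the rectangle's own rotated frame, then an
# axis-aligned comparison selects the inside points. Everything is
# vectorised, so 50,000 rows is still cheap -- no spatial index needed.
n_points = 500
theta = np.linspace(0, 2 * np.pi, n_points)
data = pd.DataFrame({'X': 10 * np.cos(theta), 'Y': 10 * np.sin(theta)})

XC, YC = 7.07, 7.07      # rectangle centre (on the trajectory)
XD, YD = 1, -1           # rectangle direction
width, height = 3, 4
ang = np.arctan2(YD, XD)

dx, dy = data['X'] - XC, data['Y'] - YC
# u/v are coordinates in the rectangle's frame -- the same transform the
# "after note" asks for (XC,YC becomes the origin, sides axis-parallel).
u =  dx * np.cos(ang) + dy * np.sin(ang)
v = -dx * np.sin(ang) + dy * np.cos(ang)

inside = data[(u.abs() <= width / 2) & (v.abs() <= height / 2)]
print(len(inside), 'points inside')
```

The selected rows keep their original index, so `inside` is an ordinary sub-dataframe that can be plotted for confirmation.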
<python><pandas>
2023-10-26 05:05:34
2
10,576
KansaiRobot
77,364,184
1,068,378
AWS SAM Environment variable
<p>I was wondering if someone can help me here. I am developing a Python SAM application in which I need to use environment variables. At the moment I am declaring my variables in template.yaml:</p> <pre><code>Globals: Function: Timeout: 3 Environment: Variables: FMPREPKEY: &lt;myfmprepkey&gt; </code></pre> <p>However, given my project is on GitHub, I'd rather put a placeholder in the YAML and 'override' the value somehow at runtime. How can I do that?</p> <p>Kind regards, Marco</p>
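One common approach is a CloudFormation parameter with a harmless default that is overridden at deploy time. A sketch (the parameter name is an assumption):

```yaml
Parameters:
  FmPrepKey:
    Type: String
    Default: placeholder   # safe to commit

Globals:
  Function:
    Timeout: 3
    Environment:
      Variables:
        FMPREPKEY: !Ref FmPrepKey
```

The real value is then supplied outside the repo, e.g. `sam deploy --parameter-overrides FmPrepKey=<real-key>`, or kept in an untracked `samconfig.toml`.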
<python><amazon-web-services><sam>
2023-10-26 04:47:55
1
319
user1068378
77,364,104
2,419,531
How to efficiently find pairs of nodes in a graph that when removed will cause the graph to split?
<p>Consider this simple biconnected graph (graph without <a href="https://networkx.org/documentation/stable/reference/algorithms/generated/networkx.algorithms.components.articulation_points.html" rel="nofollow noreferrer">articulation points</a>):</p> <pre class="lang-py prettyprint-override"><code>import networkx as nx G = nx.diamond_graph() nx.draw(G, with_labels=True) </code></pre> <p><a href="https://i.sstatic.net/c60LB.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/c60LB.png" alt="diamond graph" /></a></p> <p>I want to find out which pairs of nodes, when removed from the graph, will cause the graph to split. For example, removing nodes 1 and 2 from this graph will cause the graph to have 2 components (node 3 and node 0). For this I am removing combinations of nodes one by one and counting the number of components left in the graph.</p> <pre class="lang-py prettyprint-override"><code>from itertools import combinations breaking_combinations = [] combos = combinations(G.nodes(), 2) for combo in combos: G_copy = G.copy() init_num_components = nx.number_connected_components(G_copy) G_copy.remove_nodes_from(combo) num_components_after_combo_removed = nx.number_connected_components(G_copy) if num_components_after_combo_removed &gt; init_num_components: breaking_combinations.append(combo) print(breaking_combinations) </code></pre> <p>The logic works fine and I am getting the expected results: <code>(1, 2)</code> in this example. But this process is so slow on big graphs (~10,000 nodes) that it takes forever to find all combinations in the graph.</p> <p>My question is: is there an efficient alternate approach I can use to achieve the same results - perhaps by using in-built networkx algorithms?</p>
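networkx ships an enumerator for minimum-size vertex cuts. For a biconnected graph the node connectivity is 2, so those cuts are exactly the separating pairs the loop above searches for. A sketch (assumes the graph is connected):

```python
import networkx as nx

G = nx.diamond_graph()

# all_node_cuts yields every minimum-size vertex cut, so there is no
# need to try all C(n, 2) combinations and recount components each time.
cuts = [tuple(sorted(c)) for c in nx.all_node_cuts(G)]
print(cuts)
```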
<python><python-3.x><graph><networkx><graph-theory>
2023-10-26 04:22:13
1
41,002
Saravana
77,364,093
7,144,427
TLSV1_ALERT_UNKNOWN_CA using self signed certificate and proxy.py?
<p>I'm trying to use a self signed certificate with <a href="https://github.com/abhinavsingh/proxy.py" rel="nofollow noreferrer">proxy.py</a> (v2.3.1). I found <a href="https://github.com/abhinavsingh/proxy.py/issues/1268" rel="nofollow noreferrer">this</a> issue but couldn't really find a solution there. My code is as follows:</p> <pre class="lang-py prettyprint-override"><code>if __name__ == '__main__': # get random available port to run proxy proxy_port = proxy.common.utils.get_available_port() # set up proxy to redirect all requests from the browser through the client with proxy.start( ['--host', '127.0.0.1', '--port', str(proxy_port), '--ca-cert-file', '/home/user/app/app-ca.pem', '--ca-key-file', '/home/user/app/app-ca.key', '--ca-signing-key-file', '/home/user/app/app-signing.key'], plugins= [b'header_modifier.BasicAuthorizationPlugin', header_modifier.BasicAuthorizationPlugin]): selenium_proxy.proxyType = ProxyType.MANUAL selenium_proxy.httpProxy = '127.0.0.1:' + str(proxy_port) selenium_proxy.sslProxy = '127.0.0.1:' + str(proxy_port) print('Proxy address: ' + selenium_proxy.httpProxy) run_app() </code></pre> <p>Where running the service I get:</p> <pre><code>Oct 26 14:56:32 raspberrypi python3[2727]: 2023-10-26 14:56:32,654 - pid:2727 [I] load_plugins:334 - Loaded plugin proxy.http.proxy.HttpProxyPlugin Oct 26 14:56:32 raspberrypi python3[2727]: 2023-10-26 14:56:32,656 - pid:2727 [I] load_plugins:334 - Loaded plugin header_modifier.BasicAuthorizationPlugin Oct 26 14:56:32 raspberrypi python3[2727]: 2023-10-26 14:56:32,656 - pid:2727 [I] load_plugins:334 - Loaded plugin __main__.BasicAuthorizationPlugin Oct 26 14:56:32 raspberrypi python3[2727]: 2023-10-26 14:56:32,657 - pid:2727 [I] listen:115 - Listening on 127.0.0.1:42591 Oct 26 14:56:32 raspberrypi python3[2727]: 2023-10-26 14:56:32,692 - pid:2727 [I] start_workers:136 - Started 4 workers Oct 26 14:56:43 raspberrypi python3[2727]: 2023-10-26 14:56:43,505 - pid:2733 [E] intercept:540 - OSError when 
wrapping client Oct 26 14:56:43 raspberrypi python3[2727]: Traceback (most recent call last): Oct 26 14:56:43 raspberrypi python3[2727]: File &quot;/usr/local/lib/python3.7/dist-packages/proxy/http/proxy/server.py&quot;, line 529, in intercept Oct 26 14:56:43 raspberrypi python3[2727]: self.wrap_client() Oct 26 14:56:43 raspberrypi python3[2727]: File &quot;/usr/local/lib/python3.7/dist-packages/proxy/http/proxy/server.py&quot;, line 559, in wrap_client Oct 26 14:56:43 raspberrypi python3[2727]: self.client.wrap(self.flags.ca_signing_key_file, generated_cert) Oct 26 14:56:43 raspberrypi python3[2727]: File &quot;/usr/local/lib/python3.7/dist-packages/proxy/core/connection/client.py&quot;, line 43, in wrap Oct 26 14:56:43 raspberrypi python3[2727]: ssl_version=ssl.PROTOCOL_TLS) Oct 26 14:56:43 raspberrypi python3[2727]: File &quot;/usr/lib/python3.7/ssl.py&quot;, line 1222, in wrap_socket Oct 26 14:56:43 raspberrypi python3[2727]: suppress_ragged_eofs=suppress_ragged_eofs Oct 26 14:56:43 raspberrypi python3[2727]: File &quot;/usr/lib/python3.7/ssl.py&quot;, line 412, in wrap_socket Oct 26 14:56:43 raspberrypi python3[2727]: session=session Oct 26 14:56:43 raspberrypi python3[2727]: File &quot;/usr/lib/python3.7/ssl.py&quot;, line 853, in _create Oct 26 14:56:43 raspberrypi python3[2727]: self.do_handshake() Oct 26 14:56:43 raspberrypi python3[2727]: File &quot;/usr/lib/python3.7/ssl.py&quot;, line 1117, in do_handshake Oct 26 14:56:43 raspberrypi python3[2727]: self._sslobj.do_handshake() Oct 26 14:56:43 raspberrypi python3[2727]: ssl.SSLError: [SSL: TLSV1_ALERT_UNKNOWN_CA] tlsv1 alert unknown ca (_ssl.c:1056) Oct 26 14:56:43 raspberrypi python3[2727]: 2023-10-26 14:56:43,511 - pid:2733 [I] access_log:397 - 127.0.0.1:42880 - CONNECT firefox.settings.services.mozilla.com:443 - 0 bytes - 123.52 ms </code></pre> <p>It seems like it doesn't like my certificate as it's not issued from a trusted certificate authority as it's self signed. 
I created the certificate and keys like so using openssl:</p> <pre><code>openssl genrsa -out app-ca.key 2048 openssl req -x509 -new -nodes -key app-ca.key -sha256 -days 1825 -out app-ca.pem openssl genrsa -out app-signing.key 2048 chown user app-* </code></pre> <p>I tried adding the certificate to my systems trusted certificates like so:</p> <pre><code>cp app-ca.pem /usr/local/share/ca-certificates/app-ca.crt update-ca-certificates </code></pre> <p>which added it to <code>/etc/ssl/certs</code>:</p> <pre><code>Updating certificates in /etc/ssl/certs... 1 added, 0 removed; done. Running hooks in /etc/ca-certificates/update.d... done. </code></pre> <p>But the proxy setup still fails.</p> <p>Additional info:</p> <pre><code>uname -a Linux raspberrypi 5.10.63-v7l+ #1496 SMP Wed Dec 1 15:58:56 GMT 2021 armv7l GNU/Linux </code></pre> <pre><code>python3 --version Python 3.7.3 </code></pre>
<python><linux><ssl><openssl>
2023-10-26 04:19:32
0
1,636
wizzfizz94
77,364,000
18,649,992
Convert for loop to jax.lax.scan
<p>How does one convert the following (to accelerate compiling)? The <code>for</code> loop version works with <code>jax.jit</code>,</p> <pre><code>import functools import jax import jax.numpy as jnp @functools.partial(jax.jit, static_argnums=0) def func(n): p = 1 x = jnp.arange(8) y = jnp.zeros((n,)) for idx in range(n): y = y.at[idx].set(jnp.sum(x[::p])) p = 2*p return y func(2) # &gt;&gt; Array([28., 12.], dtype=float32) </code></pre> <p>but will return <code>static start/stop/step</code> errors when using <code>scan</code></p> <pre><code>import numpy as np def body(p, xi): y = jnp.sum(x[::p]) p = 2*p return p, y x = jnp.arange(8) jax.lax.scan(body, 1, np.arange(2)) # &gt;&gt; IndexError: Array slice indices must have static start/stop/step ... </code></pre>
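One workaround (a sketch, not the only option) is to replace the dynamic slice `x[::p]` with a boolean mask, which keeps every array shape static under tracing and so is compatible with `scan`:

```python
import jax
import jax.numpy as jnp

x = jnp.arange(8)

def body(p, _):
    # sum(x[::p]) == sum of elements whose index is a multiple of p;
    # the mask has a static shape, so scan can trace it.
    mask = (jnp.arange(x.shape[0]) % p) == 0
    return 2 * p, jnp.sum(jnp.where(mask, x, 0))

_, y = jax.lax.scan(body, 1, None, length=2)
print(y)
```

This reproduces the for-loop version's result while keeping `p` as an ordinary traced carry value.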
<python><index-error><jax>
2023-10-26 03:47:51
1
440
DavidJ
77,363,957
268,581
Using Grouper to group dataframe by various timeframes and charting
<p>Here's a Python program which does the following:</p> <ul> <li>Makes an API call to treasury.gov to retrieve data</li> <li>Stores the data in a Pandas dataframe</li> </ul> <pre><code>import requests import pandas as pd import matplotlib.pyplot as plt # ---------------------------------------------------------------------- date = '1900-01-01' transaction_type = 'Withdrawals' transaction_catg = 'Interest on Treasury Securities' page_size = 10000 url = 'https://api.fiscaldata.treasury.gov/services/api/fiscal_service/v1/accounting/dts/deposits_withdrawals_operating_cash' url_params = f'?filter=record_date:gt:{date},transaction_type:eq:{transaction_type},transaction_catg:eq:{transaction_catg}&amp;page[size]={page_size}' response = requests.get(url + url_params) result_json = response.json() # ---------------------------------------------------------------------- df = pd.DataFrame(result_json['data']) df['record_date'] = pd.to_datetime(df.record_date) df['transaction_today_amt'] = pd.to_numeric(df['transaction_today_amt']) # ---------------------------------------------------------------------- plt.ion() # ---------------------------------------------------------------------- </code></pre> <h1>Group using <code>strftime</code></h1> <p>Let's group by month using <code>strftime</code>.</p> <pre><code>items = df.groupby(df['record_date'].dt.strftime('%Y-%m'))['transaction_today_amt'].sum() plt.figure() plt.bar(x=items.index, height=items.values) plt.xticks(rotation=90) </code></pre> <p><a href="https://i.sstatic.net/ZQYuI.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/ZQYuI.png" alt="enter image description here" /></a></p> <p>The graph looks pretty good. 
However, there are some drawbacks:</p> <ul> <li>scrunched x-axis labels</li> <li>does not use Grouper</li> </ul> <h1>Group using <code>Grouper</code></h1> <pre><code>items = df.groupby(pd.Grouper(key='record_date', freq='M'))['transaction_today_amt'].sum() plt.figure() plt.bar(x=items.index, height=items.values) plt.xticks(rotation=90) </code></pre> <p><a href="https://i.sstatic.net/gc3yu.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/gc3yu.png" alt="enter image description here" /></a></p> <p>Now the x-axis labels look good and we're using Grouper. However, the columns are thin lines.</p> <h1>Group using <code>Grouper</code>. Use pandas plot method.</h1> <pre><code>df.groupby(pd.Grouper(key='record_date', freq='M'))['transaction_today_amt'].sum().plot(kind='bar') </code></pre> <p><a href="https://i.sstatic.net/y0MMt.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/y0MMt.png" alt="enter image description here" /></a></p> <p>Now we're using <code>Grouper</code> and the column lines are thick. However, the x-axis labels include time and are pretty scrunched.</p> <h1>Question</h1> <p>What's a good way to set up this chart so that Grouper is being used without the drawbacks shown in the above approaches?</p> <p>I'd like to use Grouper because it allows for easy grouping by arbitrary timeframes.</p>
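One way to keep Grouper and the thick pandas bars while fixing the labels is to re-format the tick labels after grouping. A sketch on synthetic data (the treasury API call is not reproducible here, so a stand-in frame is assumed):

```python
import pandas as pd
import matplotlib
matplotlib.use('Agg')  # non-interactive backend for the sketch
import matplotlib.pyplot as plt

# Synthetic stand-in for the treasury rows
df = pd.DataFrame({
    'record_date': pd.date_range('2022-01-01', periods=120, freq='D'),
    'transaction_today_amt': range(120),
})

items = df.groupby(pd.Grouper(key='record_date', freq='M'))['transaction_today_amt'].sum()
ax = items.plot(kind='bar')   # thick pandas bars
# Replace the full timestamps with year-month strings
ax.set_xticklabels(items.index.strftime('%Y-%m'), rotation=90)
plt.tight_layout()
```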
<python><pandas><matplotlib>
2023-10-26 03:29:57
1
9,709
dharmatech
77,363,909
3,099,733
How to block execution until Jupyter widget's value has been changed?
<h2>Busy wait: does NOT work</h2> <p>Given the following code:</p> <pre class="lang-py prettyprint-override"><code>from ipywidgets import IntsInput from IPython.display import display in_value = IntsInput(description=&quot;value&quot;) before = in_value.value print('before', before) display(in_value) while before == in_value.value: ... # blocking until value is changed print('after', in_value.value) # never reached </code></pre> <p>Obviously, the loop will block forever as <code>in_value.value</code> won't be evaluated again. Is there any way to make it work as simply as possible?</p> <p>It looks like this is not possible, as the observer runs on the main thread. Thus the following method does not work either.</p> <h2>Blocking queue: does NOT work</h2> <pre class="lang-py prettyprint-override"><code>from ipywidgets import IntsInput from IPython.display import display from queue import Queue import threading import time q = Queue() widget = IntsInput() display(widget) def getvalue(change): q.put(change.new) widget.unobserve(getvalue, 'value') widget.observe(getvalue, 'value') print(q.get()) # pending forever </code></pre> <h2>Async: does NOT work</h2> <pre class="lang-py prettyprint-override"><code>import asyncio from IPython.display import display from ipywidgets import Text def wait_for_change(widget, value): future = asyncio.Future() def getvalue(change): # make the new value available future.set_result(change.new) widget.unobserve(getvalue, value) widget.observe(getvalue, value) return future text_input = Text() print('before', text_input.value) display(text_input) future = wait_for_change(text_input, 'value') await asyncio.ensure_future(future) # blocks forever # never reached print(text_input.value) </code></pre> <p>Is there any solution for this? I know there is a solution that uses an event callback to achieve a similar result. But what I am looking for is a method to get rid of the event callback like <code>on_submit(main_flow)</code> or <code>observe(on_change, 'value')</code>. 
I want to turn those callback-style methods into a synchronous one that allows users to display a widget and block there until they finish their input.</p>
<python><jupyter-notebook><ipywidgets>
2023-10-26 03:08:53
1
1,959
link89
77,363,673
3,162,975
OAuth 2.0 run flow for YouTube Data API v3 not working - cache problem?
<p>I want to upload videos to YouTube via the command line (prompt) using the following code in Python and the Google API &quot;<strong>YouTube Data API v3</strong>&quot;.</p> <p>When the code is executed for the first time, the <strong>OAuth 2.0 run flow</strong> starts. A screen appears in the prompt asking for the <em>user's permission</em> to upload videos to the channel automatically.</p> <p>After giving permission, a file with the token is created, which avoids the confirmation screen in subsequent runs. I did these steps with a <strong>Google Cloud account and YouTube channel</strong> and it worked.</p> <p><strong>But now it doesn't work ANYMORE.</strong></p> <pre><code>import os import json import sys import time import http.client import httplib2 from apiclient.discovery import build from apiclient.errors import HttpError from apiclient.http import MediaFileUpload from oauth2client.client import flow_from_clientsecrets from oauth2client.file import Storage from oauth2client.tools import argparser, run_flow # Explicitly tell the underlying HTTP transport library not to retry, since # we are handling retry logic ourselves. httplib2.RETRIES = 1 # Maximum number of times to retry before giving up. MAX_RETRIES = 10 # Always retry when these exceptions are raised. RETRIABLE_EXCEPTIONS = (httplib2.HttpLib2Error, IOError, http.client.NotConnected, http.client.IncompleteRead, http.client.ImproperConnectionState, http.client.CannotSendRequest, http.client.CannotSendHeader, http.client.ResponseNotReady, http.client.BadStatusLine) # Always retry when an apiclient.errors.HttpError with one of these status # codes is raised. RETRIABLE_STATUS_CODES = [500, 502, 503, 504] CLIENT_SECRETS_FILE =&quot;api-google/client_secret_xxx.apps.googleusercontent.com.json&quot; # This OAuth 2.0 access scope allows an application to upload files to the # authenticated user's YouTube channel, but doesn't allow other types of access. 
YOUTUBE_UPLOAD_SCOPE = &quot;https://www.googleapis.com/auth/youtube.upload&quot; YOUTUBE_API_SERVICE_NAME = &quot;youtube&quot; YOUTUBE_API_VERSION = &quot;v3&quot; # This variable defines a message to display if the CLIENT_SECRETS_FILE is # missing. MISSING_CLIENT_SECRETS_MESSAGE = &quot;&quot;&quot; WARNING: Please configure OAuth 2.0 To make this sample run you will need to populate the client_secrets.json file found at: %s with information from the API Console https://console.cloud.google.com/ For more information about the client_secrets.json file format, please visit: https://developers.google.com/api-client-library/python/guide/aaa_client_secrets &quot;&quot;&quot; % os.path.abspath(os.path.join(os.path.dirname(__file__), CLIENT_SECRETS_FILE)) VALID_PRIVACY_STATUSES = (&quot;public&quot;, &quot;private&quot;, &quot;unlisted&quot;) def get_authenticated_service(args): flow = flow_from_clientsecrets(CLIENT_SECRETS_FILE, scope=YOUTUBE_UPLOAD_SCOPE, message=MISSING_CLIENT_SECRETS_MESSAGE) storage = Storage(&quot;%s-oauth2.json&quot; % sys.argv[0]) credentials = storage.get() if credentials is None or credentials.invalid: credentials = run_flow(flow, storage, args) return build(YOUTUBE_API_SERVICE_NAME, YOUTUBE_API_VERSION, http=credentials.authorize(httplib2.Http())) def initialize_upload(youtube, options): tags = None if options.keywords: tags = options.keywords.split(&quot;,&quot;) body=dict( snippet=dict( title=options.title, description=options.description, tags=tags, categoryId=options.category, chennelId= &quot;xxx&quot; # id youtube channel ), status=dict( privacyStatus=options.privacyStatus, madeForKids=False, selfDeclaredMadeForKids=False ) ) # Call the API's videos.insert method to create and upload the video. 
insert_request = youtube.videos().insert( part=&quot;,&quot;.join(body.keys()), body=body, media_body=MediaFileUpload(options.file, chunksize=-1, resumable=True) ) resumable_upload(insert_request) # This method implements an exponential backoff strategy to resume a # failed upload. def resumable_upload(insert_request): response = None error = None retry = 0 while response is None: try: print(&quot;Uploading file...&quot;) status, response = insert_request.next_chunk() if response is not None: if 'id' in response: print(&quot;Video id '%s' was successfully uploaded.&quot; % response['id']) else: exit(&quot;The upload failed with an unexpected response: %s&quot; % response) except HttpError as e: if e.resp.status in RETRIABLE_STATUS_CODES: error = &quot;A retriable HTTP error %d occurred:\n%s&quot; % (e.resp.status,e.content) else: raise except (RETRIABLE_EXCEPTIONS, e): error = &quot;A retriable error occurred: %s&quot; % e if error is not None: print(error) retry += 1 if retry &gt; MAX_RETRIES: exit(&quot;No longer attempting to retry.&quot;) max_sleep = 2 ** retry sleep_seconds = random.random() * max_sleep print(&quot;Sleeping %f seconds and then retrying...&quot; % sleep_seconds) time.sleep(sleep_seconds) #upload video def carico_short(path_video, folder_video): percorso_cartella = os.path.join(path_video, folder_video) video_youtube = os.path.join(percorso_cartella, &quot;video_youtube.mp4&quot;) class ArgsNamespace: file = video_youtube title = &quot;title&quot; description = &quot;content&quot; category = &quot;24&quot; #24 è intrattenimento, prima era 22 keywords = &quot;shorts, curiosity&quot; privacyStatus = &quot;public&quot; args = ArgsNamespace() try: youtube = get_authenticated_service(args) initialize_upload(youtube, args) except HttpError as e: # Definisci e come un'eccezione HttpError print(&quot;An HTTP error %d occurred:\n%s&quot; % (e.resp.status, e.content)) # code to upload the video cartella_output = &quot;C:\\Users\\user\\Desktop\\canale 
youtube\\video-4-dopo-non-pubblicati&quot; carico_short(cartella_output, &quot;folder&quot;) </code></pre> <p>Now i have <strong>NEW</strong> cloud console account and <strong>new</strong> youtube channel. But the computer is the same.</p> <p>If i execute the code the <strong>run flow</strong> does not work: <strong>the authorization request does not appear in the prompt</strong> (that should appears the first time)</p> <p>And i have this error:</p> <pre><code>C:\Users\Giovanni\AppData\Local\Programs\Python\Python310\lib\site-packages\oauth2client\_helpers.py:255: UserWarning: Cannot access myscript.py-oauth2.json: No such file or directory warnings.warn(_MISSING_FILE_MESSAGE.format(filename)) </code></pre> <p><strong>EDIT</strong></p> <pre><code>C:\Users\Giovanni\AppData\Local\Programs\Python\Python310\lib\site-packages\oauth2client\_helpers.py:255: UserWarning: Cannot access myscript.py-oauth2.json: No such file or directory warnings.warn(_MISSING_FILE_MESSAGE.format(filename)) Traceback (most recent call last): File &quot;C:\Users\Giovanni\Desktop\canale youtube\myscript.py&quot;, line 258, in carico_short youtube = get_authenticated_service(args) File &quot;C:\Users\Giovanni\Desktop\canale youtube\myscript.py&quot;, line 99, in get_authenticated_service credentials = run_flow(flow, storage, args) File &quot;C:\Users\Giovanni\AppData\Local\Programs\Python\Python310\lib\site-packages\oauth2client\_helpers.py&quot;, line 133, in positional_wrapper return wrapped(*args, **kwargs) File &quot;C:\Users\Giovanni\AppData\Local\Programs\Python\Python310\lib\site-packages\oauth2client\tools.py&quot;, line 195, in run_flow logging.getLogger().setLevel(getattr(logging, flags.logging_level)) AttributeError: 'ArgsNamespace' object has no attribute 'logging_level' </code></pre> <p>I suspect that google/youtube has some cache.</p> <p>Maybe for YouTube the computer shows the authorization of the old account? Is that so? how can I delete it?</p> <p>Thanks a lot</p>
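Separately from any cache question, the final `AttributeError` has a plain cause: `run_flow()` expects the flags object produced by oauth2client's own `argparser`, and the hand-rolled `ArgsNamespace` lacks fields such as `logging_level`. A sketch of supplying those defaults with the standard library (the default values mirror oauth2client's, but treat them as assumptions and prefer `argparser.parse_args([])` from the library itself):

```python
import argparse

# Minimal stand-in for `oauth2client.tools.argparser.parse_args([])`:
# run_flow() reads these attributes from the flags it is given.
parser = argparse.ArgumentParser()
parser.add_argument('--auth_host_name', default='localhost')
parser.add_argument('--noauth_local_webserver', action='store_true', default=False)
parser.add_argument('--auth_host_port', nargs='*', type=int, default=[8080, 8090])
parser.add_argument('--logging_level', default='ERROR')
flags = parser.parse_args([])

print(flags.logging_level)  # the attribute run_flow() crashed on
```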
<python><google-cloud-platform><oauth-2.0><youtube><youtube-data-api>
2023-10-26 01:41:47
1
3,599
Borja
77,363,596
327,694
How to get more word suggestions from Hunspell with pyhunspell
<p>I'm using hunspell with the pyhunspell wrapper. I'm calling:</p> <pre><code>hunspell.suggest(&quot;Yokk&quot;) </code></pre> <p>But this is returning only [&quot;Yolk&quot;, &quot;Yoke&quot;]. I saw that &quot;York&quot; is in the dictionary but is not being returned. Is there a way to return more than 2 suggestions, either by increasing the distance threshold or the number of top suggestions?</p> <p>The text I'm trying to correct is &quot;New York&quot; and I have my own ranker that ranks the suggestions downstream. I just need more suggestions. I tried aspell and by default its returning 10 suggestions, one of which is in fact &quot;York&quot;.</p> <p>Note: The documentation doesn't mention any other arguments for method <code>suggest</code>. Even using the CLI I only get two suggestions:</p> <pre><code>hunspell -d en_US Hunspell 1.7.2 yokk &amp; yokk 2 0: yolk, yoke </code></pre> <p>I've checked the default dictionaries are properly loaded using:</p> <pre><code>hunspell -D SEARCH PATH: ... AVAILABLE DICTIONARIES (path is not mandatory for -d option): /Library/Spelling/en_US LOADED DICTIONARY: /Library/Spelling/en_US.aff /Library/Spelling/en_US.dic ➜ 2 subl /Library/Spelling/en_US.dic </code></pre> <p>And I've also checked that the expected &quot;York&quot; is in the dictionary:</p> <pre><code>cat /Library/Spelling/en_US.dic | grep York York/M </code></pre> <p>I wonder if there is some other configuration I can set somewhere, I can't see anything evident in either the wrapper or the CLI documentation: <a href="https://github.com/pyhunspell/pyhunspell/wiki/Documentation" rel="nofollow noreferrer">https://github.com/pyhunspell/pyhunspell/wiki/Documentation</a> <a href="https://github.com/hunspell/hunspell" rel="nofollow noreferrer">https://github.com/hunspell/hunspell</a></p>
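The suggestion count is largely governed by the affix (.aff) file rather than the API. If editing the dictionary files is an option, hunspell exposes knobs such as these (a sketch with illustrative values; check your hunspell version's man page before relying on them):

```
# additions to en_US.aff
MAXNGRAMSUGS 10   # allow more n-gram-based suggestions (default is small)
MAXDIFF 10        # similarity factor for n-gram suggestions, 0-10 (10 = most)
```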
<python><spell-checking><hunspell>
2023-10-26 01:06:20
1
5,610
Josep Valls
77,363,541
4,961,048
Python multiprocessing pool post-process
<p>I have a set of tasks to be executed (to the tune of millions) and I have to <code>POST</code> the result of each &quot;execution&quot;. The POST offers me a way to batch the results.</p> <p>I have the following skeleton :</p> <pre class="lang-py prettyprint-override"><code>def init_pool_processes(): global gathered_data gathered_data = [] class Generate: def __init__(self): self.dummy = 0 def __call__(self, message): print (f'processing for {message}') # message processed. Assume the result is message gathered_data.append(message) if len(gathered_data) == 3: # POST gathered data with a batch of (say) 3 and # clear as we do not need it any more gathered_data.clear() def process (self, message): try: pool = Pool(initializer=init_pool_processes) pool.map(self, message) finally: pool.close() pool.join() def func(): message = ['This', 'is', 'the', 'end', 'my', 'only', 'friend', 'thee', 'eend', '.'] obj = Generate() obj.process(message) print ('Processing done!!!') func() </code></pre> <p>Now clearly, some executions at the end might not end up in the POST call. So I need a way to call a function (once) for each worker that was created.</p> <p>I tried adding the following after the map call :</p> <pre class="lang-py prettyprint-override"><code>pool.map(post_process, list(range(pool._processes)), chunksize=1) </code></pre> <p>but this does not guarantee that the function post_process will be called for all workers.</p> <p>Any pointers on how to achieve this? I want to avoid a global queue/timer solutions if possible</p>
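An alternative that avoids per-worker flushing altogether is to stream results back to the parent with `imap_unordered` and batch there, so the leftover partial batch is flushed in exactly one place. A sketch (the POST is faked by collecting batches into a list):

```python
from multiprocessing import Pool

def work(message):
    # Stand-in for the real per-message processing
    return message

def process(messages, batch_size=3):
    posted = []
    with Pool(2) as pool:
        batch = []
        # Results stream back to the parent as workers finish, so all
        # batching -- including the final partial batch -- lives here.
        for result in pool.imap_unordered(work, messages):
            batch.append(result)
            if len(batch) == batch_size:
                posted.append(batch)   # here: POST the full batch
                batch = []
        if batch:                      # flush the leftovers exactly once
            posted.append(batch)
    return posted

if __name__ == '__main__':
    print(process(['This', 'is', 'the', 'end', 'my', 'only', 'friend']))
```

Workers stay stateless; only the parent accumulates, so no global queue or timer is needed.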
<python><multiprocessing>
2023-10-26 00:39:25
0
471
midi
77,363,447
395,857
Whenever I submit the first query, I always get an "error" with Gradio. But subsequent queries work fine. Why?
<p>I'm following the basic Gradio interface from the <a href="https://www.gradio.app/docs/interface" rel="nofollow noreferrer">Gradio documentation</a>:</p> <pre><code>import gradio as gr def greet(name): return &quot;Hello &quot; + name #+ &quot;!&quot; +a demo = gr.Interface(fn=greet, inputs=&quot;text&quot;, outputs=&quot;text&quot;) demo.launch() </code></pre> <p>When I go to the UI via Chrome (<a href="http://127.0.0.1:7860/" rel="nofollow noreferrer">http://127.0.0.1:7860/</a>), whenever I submit the first query right after starting the server, I always get an &quot;error&quot;:</p> <p><a href="https://i.sstatic.net/Hx3dp.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Hx3dp.png" alt="enter image description here" /></a></p> <p>The error message is:</p> <pre><code>Traceback (most recent call last): File &quot;C:\Users\fra\anaconda3\envs\project\lib\site-packages\gradio\routes.py&quot;, line 534, in predict output = await route_utils.call_process_api( File &quot;C:\Users\fra\anaconda3\envs\project\lib\site-packages\gradio\route_utils.py&quot;, line 226, in call_process_api output = await app.get_blocks().process_api( File &quot;C:\Users\fra\anaconda3\envs\project\lib\site-packages\gradio\utils.py&quot;, line 855, in __exit__ matplotlib.use(self._original_backend) File &quot;C:\Users\fra\anaconda3\envs\project\lib\site-packages\matplotlib\__init__.py&quot;, line 1276, in use plt.switch_backend(name) File &quot;C:\Users\fra\anaconda3\envs\project\lib\site-packages\matplotlib\pyplot.py&quot;, line 343, in switch_backend canvas_class = module.FigureCanvas AttributeError: module 'backend_interagg' has no attribute 'FigureCanvas' </code></pre> <p>However, subsequent queries work fine:</p> <p><a href="https://i.sstatic.net/HtJkx.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/HtJkx.png" alt="enter image description here" /></a></p> <p>Why does the first query error while subsequent queries work fine? I use Windows 10.</p>
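The traceback shows Gradio saving and restoring the matplotlib backend, and the restore failing because `backend_interagg` (PyCharm's interactive backend) has no `FigureCanvas`. A common workaround (an assumption, since it depends on the run configuration) is to pin a standard backend before launching:

```python
import matplotlib

# Pin a standard, headless backend so Gradio's save/restore of the
# backend never touches PyCharm's 'backend_interagg'.
matplotlib.use('Agg')

# ...then define greet / gr.Interface / demo.launch() exactly as before.
print(matplotlib.get_backend())
```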
<python><gradio>
2023-10-25 23:59:21
1
84,585
Franck Dernoncourt
77,363,429
1,524,372
Inconsistency in Enum with class mixing string formatting in Python 3.11
<p>I have been following the back and forth on Enum formatting in Python 3.11, see <a href="https://github.com/python/cpython/issues/100458" rel="nofollow noreferrer">https://github.com/python/cpython/issues/100458</a>.</p> <p>However I have a few mixins with classes that implement <code>__str__</code> and have been seeing inconsistent behavior among the minor versions of 3.11, specifically 3.11.0 - 3.11.3.</p> <p>With the below script, run on a few different late 3.10 to early 3.12:</p> <pre><code>import enum class A: def __init__(self, a, b): self.a = a self.b = b def __str__(self): return f'{self.a}|{self.b}' class Foo(A, enum.Enum): BAR = 'a', 'b' import sys print(sys.version) print(Foo.BAR) print(str(Foo.BAR)) print(&quot;%s&quot; % Foo.BAR) print(f&quot;{Foo.BAR}&quot;) print(&quot;{}&quot;.format(Foo.BAR)) </code></pre> <p>Output:</p> <pre><code>3.10.13 (main, Oct 25 2023, 19:27:56) [Clang 15.0.0 (clang-1500.0.40.1)] a|b a|b a|b a|b a|b 3.11.0 (main, Oct 25 2023, 19:22:31) [Clang 15.0.0 (clang-1500.0.40.1)] Foo.BAR Foo.BAR Foo.BAR Foo.BAR Foo.BAR 3.11.3 (main, Oct 25 2023, 12:37:31) [Clang 15.0.0 (clang-1500.0.40.1)] Foo.BAR Foo.BAR Foo.BAR Foo.BAR Foo.BAR 3.11.4 (main, Oct 25 2023, 00:36:48) [Clang 15.0.0 (clang-1500.0.40.1)] a|b a|b a|b a|b a|b 3.11.6 (main, Oct 25 2023, 12:41:05) [Clang 15.0.0 (clang-1500.0.40.1)] a|b a|b a|b a|b a|b 3.12.0 (main, Oct 25 2023, 19:30:35) [Clang 15.0.0 (clang-1500.0.40.1)] a|b a|b a|b a|b a|b </code></pre> <p>I am just looking to confirm that the 3.11.0 - 3.11.3 versions are some kind of regression that was fixed and that the intended behavior is the 3.11.4+ behavior going forward.</p>
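Whatever the verdict on the 3.11.0-3.11.3 behaviour, one way to make the output independent of the interpreter's enum rules is to pin the mixin's `__str__` on the enum explicitly (a sketch using the same classes):

```python
import enum

class A:
    def __init__(self, a, b):
        self.a = a
        self.b = b
    def __str__(self):
        return f'{self.a}|{self.b}'

class Foo(A, enum.Enum):
    BAR = 'a', 'b'
    __str__ = A.__str__   # opt out of EnumMeta's version-dependent choice

print(Foo.BAR, str(Foo.BAR))
```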
<python><enums><python-3.11>
2023-10-25 23:53:08
1
311
Paul
77,363,316
268,581
How to resolve issues with a bar plot x-axis being overcrowded
<p>Here's a Python program which does the following:</p> <ul> <li>Makes an API call to treasury.gov to retrieve data</li> <li>Stores the data in a Pandas dataframe</li> <li>Plots the data as a bar chart</li> </ul> <pre><code>import requests import pandas as pd import matplotlib.pyplot as plt date = '1900-01-01' transaction_type = 'Withdrawals' transaction_catg = 'Interest on Treasury Securities' page_size = 10000 url = 'https://api.fiscaldata.treasury.gov/services/api/fiscal_service/v1/accounting/dts/deposits_withdrawals_operating_cash' url_params = f'?filter=record_date:gt:{date},transaction_type:eq:{transaction_type},transaction_catg:eq:{transaction_catg}&amp;page[size]={page_size}' response = requests.get(url + url_params) result_json = response.json() df = pd.DataFrame(result_json['data']) # Convert transaction_today_amt to numeric df['transaction_today_amt'] = pd.to_numeric(df['transaction_today_amt']) df.plot.bar(x='record_date', y='transaction_today_amt') plt.show() </code></pre> <p>Here's the resulting chart that I get:</p> <p><a href="https://i.sstatic.net/g6CGa.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/g6CGa.png" alt="enter image description here" /></a></p> <p>As you can see, there are too many x-axis labels.</p> <h1>Question</h1> <p>What's a good way to setup the chart so that the x-axis labels are legible?</p>
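One straightforward fix is to keep every bar but thin out the tick labels. A sketch on synthetic data (a stand-in for the API response, which is assumed to be daily rows):

```python
import numpy as np
import pandas as pd
import matplotlib
matplotlib.use('Agg')  # non-interactive backend for the sketch
import matplotlib.pyplot as plt

# Synthetic stand-in for the treasury rows
df = pd.DataFrame({
    'record_date': pd.date_range('2015-01-01', periods=1000, freq='D').strftime('%Y-%m-%d'),
    'transaction_today_amt': np.arange(1000),
})

ax = df.plot.bar(x='record_date', y='transaction_today_amt')
# Keep roughly 20 evenly spaced labels instead of one per bar.
step = max(len(df) // 20, 1)
ax.set_xticks(range(0, len(df), step))
ax.set_xticklabels(df['record_date'][::step], rotation=90)
plt.tight_layout()
```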
<python><pandas><matplotlib><time-series><bar-chart>
2023-10-25 23:08:15
1
9,709
dharmatech
77,363,105
1,514,682
How to visualize colmap export that contains Intrinsic and Extrinsic Camera Parameters using python
<p>I have a transforms.json file which contains Intrinsic and Extrinsic for 23 cameras. I would like to visualize those using Python, colmap, pycolmap and jupyternotebook. Below is a part of the .json file</p> <pre><code>{ &quot;w&quot;: 1920, &quot;h&quot;: 1080, &quot;fl_x&quot;: 1098.8550003271516, &quot;fl_y&quot;: 1110.2997543513977, &quot;cx&quot;: 970.1319034923014, &quot;cy&quot;: 542.0541746563172, &quot;k1&quot;: -0.28118870977442023, &quot;k2&quot;: 0.06674186867742171, &quot;p1&quot;: 0.0026768267765996103, &quot;p2&quot;: -0.00237229158478273, &quot;camera_model&quot;: &quot;OPENCV&quot;, &quot;frames&quot;: [ { &quot;file_path&quot;: &quot;images/frame_00023.jpg&quot;, &quot;transform_matrix&quot;: [ [ -0.07042611592680023, -0.9713950978549236, 0.22678563900496068, 2.1881674886247935 ], [ 0.9325864609677816, 0.016566699247072256, 0.3605662730978109, 1.7471888187630829 ], [ -0.3540093996139834, 0.2368904986264339, 0.9047431882282766, -0.21938707719027645 ], [ 0.0, 0.0, 0.0, 1.0 ] ], </code></pre>
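As a minimal starting point before reaching for pycolmap, the camera positions can be read straight out of the JSON and fed to a 3-D scatter plot. A sketch (assumption: `transform_matrix` is camera-to-world, as in nerfstudio-style `transforms.json` exports, so the camera centre is the translation column; values abbreviated from the question):

```python
import json
import numpy as np

doc = json.loads('''{
  "frames": [
    {"file_path": "images/frame_00023.jpg",
     "transform_matrix": [[-0.0704, -0.9714, 0.2268,  2.1882],
                          [ 0.9326,  0.0166, 0.3606,  1.7472],
                          [-0.3540,  0.2369, 0.9047, -0.2194],
                          [ 0.0,     0.0,    0.0,     1.0   ]]}
  ]
}''')

# One (x, y, z) camera centre per frame -- ready for ax.scatter(*centres.T)
centres = np.array([np.asarray(f['transform_matrix'])[:3, 3]
                    for f in doc['frames']])
print(centres)
```

The rotation part (`[:3, :3]`) of each matrix can similarly be used to draw viewing-direction arrows or frustum wireframes.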
<python><camera-calibration><colmap>
2023-10-25 22:01:21
2
2,393
Khayam Gondal
77,363,017
8,279,172
python set element in dict by accessing non-existent element
<p>I want to add a new element to a dictionary this way:</p> <pre class="lang-python3 prettyprint-override"><code>d[&quot;key1&quot;][&quot;key2&quot;][&quot;key3&quot;] = 10 </code></pre> <p>But I have to do it this way:</p> <pre class="lang-python3 prettyprint-override"><code>d[&quot;key1&quot;] = {} d[&quot;key1&quot;][&quot;key2&quot;] = {} d[&quot;key1&quot;][&quot;key2&quot;][&quot;key3&quot;] = 10 </code></pre> <p>Because otherwise I will encounter a <code>KeyError</code>. <br /> Are there any options to add a new element in a more elegant way?</p> <p><strong>UPD</strong><br /> Actually my code contains assignments of new elements such as:</p> <pre class="lang-python3 prettyprint-override"><code>d[&quot;key1&quot;][&quot;key2&quot;][&quot;key3&quot;] = 10 d[&quot;key1&quot;][&quot;key2&quot;][&quot;key4&quot;][&quot;key5&quot;] = 15 d[&quot;key6&quot;][&quot;key7&quot;][&quot;key8&quot;][&quot;key9&quot;][&quot;key10&quot;] = 20 </code></pre> <p>So it seems that declaring a <code>defaultdict</code> with a constructor in such a way:</p> <pre class="lang-python3 prettyprint-override"><code>d = defaultdict(lambda: defaultdict(lambda: &quot;&quot;)) </code></pre> <p>will not work because the number of dict accesses is not predetermined.</p>
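A recursive defaultdict (sometimes called autovivification) handles arbitrary, non-predetermined nesting depth:

```python
from collections import defaultdict

def tree():
    # Every missing key materialises another tree, to any depth.
    return defaultdict(tree)

d = tree()
d["key1"]["key2"]["key3"] = 10
d["key1"]["key2"]["key4"]["key5"] = 15
d["key6"]["key7"]["key8"]["key9"]["key10"] = 20

print(d["key1"]["key2"]["key3"])   # 10
```

Note that merely reading a missing path also creates the intermediate dicts, which may or may not be desirable.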
<python><dictionary>
2023-10-25 21:34:57
2
1,540
wowonline