Dataset columns (min / max of observed values or lengths):

- QuestionId: int64 (min 74.8M, max 79.8M)
- UserId: int64 (min 56, max 29.4M)
- QuestionTitle: string (lengths 15 to 150)
- QuestionBody: string (lengths 40 to 40.3k)
- Tags: string (lengths 8 to 101)
- CreationDate: date string (2022-12-10 09:42:47 to 2025-11-01 19:08:18)
- AnswerCount: int64 (min 0, max 44)
- UserExpertiseLevel: int64 (min 301, max 888k)
- UserDisplayName: string (lengths 3 to 30)
75,575,852
7,817,490
Strange recurring errors downloading python 3.10.9.tgz with wget
<p>Several workmates and I have had a recurring problem building Python in a Docker container under WSL and VirtualBox. The download seems to succeed, but the unzipping then reports 'trailing garbage ignored' and the build process exits with value 2. We typically hit this problem a few times, do a manual workaround and then don't see it for a while, so it feels intermittent.</p>

<p>If downloaded via a browser, or even directly from the command line (with wget), the process works, but if you get the file by running a script that calls <code>wget</code>, it most often has this 'trailing garbage ignored' error, even though the downloaded file reports exactly the same total number of bytes.</p>

<p>I'm trying to eliminate this so our dev setup can be hassle-free, but I'm at a loss as to what is going wrong.</p>

<p>My most recent experience with this was this morning: running the wget command inside a bash script produced a file that unzipped with errors; pasting the same lines into a terminal downloaded the file and it unzipped perfectly.</p>

<pre><code># environment values
PYTHON_DOCKER_IMAGE=&quot;ken/python&quot;
PYTHON_DOCKER_TAG=&quot;3.10.9&quot;
UBI_DOCKER_IMAGE=&quot;redhat/ubi8&quot;
UBI_DOCKER_TAG=&quot;8.7&quot;

# build_local.sh
#!/usr/bin/env bash
set -euo pipefail

function build_local() {
    mkdir -p output
    wget \
        -c \
        --no-cache \
        -O output/python.tar.gz \
        &quot;https://www.python.org/ftp/python/${PYTHON_DOCKER_TAG}/Python-${PYTHON_DOCKER_TAG}.tgz&quot;
    docker build \
        --file Dockerfile.python \
        --build-arg UBI_DOCKER_IMAGE=&quot;${UBI_DOCKER_IMAGE}&quot; \
        --build-arg UBI_DOCKER_TAG=&quot;${UBI_DOCKER_TAG}&quot; \
        --tag &quot;${PYTHON_DOCKER_IMAGE}:${PYTHON_DOCKER_TAG}&quot; \
        --tag &quot;${PYTHON_DOCKER_IMAGE}:latest&quot; \
        .
    docker compose -p &quot;${PROJECT_NAME}&quot; build
}

function main() {
    build_local &quot;$@&quot;
}

main &quot;$@&quot;
</code></pre>

<p>This is the Dockerfile that builds Python. We don't use the pre-built images due to access issues on the work network.</p>

<pre><code># Dockerfile.python
[snipped basic construction from redhat ubi]

COPY output/python.tar.gz /
RUN mkdir -p /usr/local/src/python &amp;&amp; \
    tar -zxf python.tar.gz -C /usr/local/src/python --strip-components=1 &amp;&amp; \
    cd /usr/local/src/python &amp;&amp; \
    ./configure \
        --enable-loadable-sqlite-extensions \
        --enable-optimizations \
        --enable-option-checking=fatal \
        --with-system-expat \
        --with-ensurepip &amp;&amp; \
    make &amp;&amp; \
    make altinstall

[snipped a bunch of subsequent construction that runs if the previous lines run, and isn't reached if they fail.]
</code></pre>
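A quick way to catch (or rule out) silent corruption in scripted downloads is to verify the archive before handing it to Docker. Below is a minimal sketch, assuming a Python helper is acceptable in the toolchain: it compares a file's SHA-256 digest against an expected value (python.org publishes checksums for each release; the demo at the bottom uses a throwaway file rather than a real download). If the scripted wget output fails this check while the interactive download passes, you have confirmed the bytes really differ despite the matching byte counts.

```python
import hashlib


def verify_download(path, expected_sha256):
    """Return True if the file at `path` hashes to `expected_sha256`.

    Reading in fixed-size chunks keeps memory use flat for large archives.
    """
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 16), b""):
            h.update(chunk)
    return h.hexdigest() == expected_sha256


if __name__ == "__main__":
    # Demo with a throwaway file; the SHA-256 of b"hello" is well known.
    import os
    import tempfile

    fd, path = tempfile.mkstemp()
    with os.fdopen(fd, "wb") as f:
        f.write(b"hello")
    print(verify_download(
        path,
        "2cf24dba5fb0a30e26e83b2ac5b9e29e1b161e5c1fa7425e73043362938b9824"))
    os.remove(path)
```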
<python><bash><windows-subsystem-for-linux><wget>
2023-02-27 00:20:43
0
489
Atcrank
75,575,803
607,407
How can I wrap multiple `with x() as val_x, ...` in a with statement into function for reuse?
<p>I am writing what should've been a simple unit test, but it turns out I need to mock a bunch of things, and all similar tests will need those. I am using <code>mock.patch</code> from the standard library, and now the code looks like this:</p>

<pre><code>with mock.patch(&quot;project.folder.file.SomeClass.method&quot;) as mock_this, \
     mock.patch(&quot;otherproj.directory.file.AnotherClass.some_method&quot;) as mock_that, \
     ... and so on, like 4 other things ... :
    mock_this.return_value = True
    mock_that.side_effect = some_side_effect
    ... and more ...
</code></pre>

<p>So I'd like to wrap all that into some method, let's call it <code>server_mock_boilerplate</code>. And I'd like to use it as:</p>

<pre><code># I may want to pass objects for cases where patching a global is needed (sadly it is)
# I probably will only need to output a select few of the mocked things
with server_mock_boilerplate(obj1, obj2) as (mock1, mock2, mock4):
    # run test
    ...
    # Maybe change side effect during the test
    mock1.side_effect = something_else
</code></pre>

<p>But I am not sure how to handle this. I can't return from a <code>with</code> statement without closing the values, and if I just return them without activating them with <code>with</code>, the return type of <code>mock.patch</code> is not <code>MagicMock</code>.</p>

<p>So how do I wrap multiple boilerplate <code>with</code> statement entries into a function, and then use that function and output the entries into a tuple?</p>
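One standard-library route for exactly this pattern is <code>contextlib.ExitStack</code>: a generator-based context manager can enter any number of <code>mock.patch</code> contexts and yield only the mocks the test cares about. A minimal sketch follows; <code>SomeClass</code> here is a stand-in for the real dotted patch targets (e.g. "project.folder.file.SomeClass.method"):

```python
from contextlib import ExitStack, contextmanager
from unittest import mock


# Stand-in target to patch; in real tests this would live in your project.
class SomeClass:
    def method(self):
        return "real"


@contextmanager
def server_mock_boilerplate():
    """Enter several mock.patch contexts at once and yield the live mocks."""
    with ExitStack() as stack:
        mock_this = stack.enter_context(
            mock.patch(f"{__name__}.SomeClass.method"))
        # ...call stack.enter_context() for as many more patches as needed...
        mock_this.return_value = True
        yield (mock_this,)  # yield only the mocks the tests need


with server_mock_boilerplate() as (mock_this,):
    assert SomeClass().method() is True
    mock_this.side_effect = RuntimeError  # can still be changed mid-test
assert SomeClass().method() == "real"     # all patches undone on exit
```

Every context entered via `enter_context` is unwound when the `with` block exits, so the patches behave exactly as if they had been written as one long chained `with`.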
<python><python-3.x><unit-testing><mocking><with-statement>
2023-02-27 00:07:54
2
53,877
Tomáš Zato
75,575,765
18,758,062
Long-running cell in VS Code Jupyter: Reconnecting to the kernel
<p>Whenever I run a Jupyter cell in VS Code that takes a long time to complete, it never seems to finish running the cell.</p> <p>At the bottom of VS Code, I see the notification</p> <blockquote> <p>Reconnecting to the kernel myvenv (Python 3.8.16)</p> </blockquote> <p>Is it possible that this issue is due to the cell taking longer than a configurable amount of time to complete running? If so, is there a way to increase this timeout value?</p> <p>There is no problem when the cell completes running in ~5 mins.</p> <p>Pip packages: <code>jupyter_client==7.4.8</code>, <code>jupyter_core==5.1.1</code>, <code>jupyterlab-widgets==3.0.5</code>.</p> <p>VS Code extension <code>Jupyter</code> is on <code>v2023.2.1000541047</code></p>
<python><visual-studio-code><jupyter-notebook><ipython>
2023-02-26 23:58:56
1
1,623
gameveloster
75,575,730
6,367,971
Concatenate CSVs into dataframe based on timestamp
<p>I have a dir of subdirs that contain CSVs, and I want to concatenate those into a single dataframe. But I only want to do so with files that are the 'most recently' exported, based on a timestamp in the filename.</p>

<p>For example, this is a list of files contained in various sub-dirs:</p>

<pre><code>FileA_20230208_ExportedOn_20230202T0215Z
FileA_20230208_ExportedOn_20230208T0015Z
FileB_20230208_ExportedOn_20230205T0215Z
FileB_20230208_ExportedOn_20230208T2218Z
FileC_20230208_ExportedOn_20210208T0215Z
FileC_20230208_ExportedOn_20230201T0215Z
FileC_20230208_ExportedOn_20230208T2208Z
FileC_20230208_ExportedOn_20200207T0215Z
FileA_20230209_ExportedOn_20230202T1915Z
FileA_20230209_ExportedOn_20230202T0215Z
</code></pre>

<p>So the final dataframe should be a concatenation of just these 4 files:</p>

<pre><code>FileA_20230208_ExportedOn_20230208T0015Z
FileB_20230208_ExportedOn_20230208T2218Z
FileC_20230208_ExportedOn_20230208T2208Z
FileA_20230209_ExportedOn_20230202T1915Z
</code></pre>

<p>I can concatenate all of them by doing this:</p>

<pre><code>import glob
from pathlib import Path

import pandas as pd

# Dir of CSVs
files = glob.glob('/**/*.csv', recursive=True)

# Combine all CSVs into a single dataframe
df = pd.concat(
    [pd.read_csv(fp).assign(file_name=Path(fp).name) for fp in files],
    ignore_index=True,
)
</code></pre>

<p>But how do I select only the most recently timestamped files?</p>
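One way to pick the winners is to split each name on the <code>_ExportedOn_</code> marker and keep the lexicographically largest stamp per prefix; the compact ISO-like stamps sort correctly as plain strings. A sketch with the listed names (real files would carry a <code>.csv</code> suffix and need <code>Path(fp).name</code> handling first, which is an assumption left out here):

```python
files = [
    "FileA_20230208_ExportedOn_20230202T0215Z",
    "FileA_20230208_ExportedOn_20230208T0015Z",
    "FileB_20230208_ExportedOn_20230205T0215Z",
    "FileB_20230208_ExportedOn_20230208T2218Z",
    "FileC_20230208_ExportedOn_20210208T0215Z",
    "FileC_20230208_ExportedOn_20230201T0215Z",
    "FileC_20230208_ExportedOn_20230208T2208Z",
    "FileC_20230208_ExportedOn_20200207T0215Z",
    "FileA_20230209_ExportedOn_20230202T1915Z",
    "FileA_20230209_ExportedOn_20230202T0215Z",
]


def latest_per_prefix(names):
    """Keep only the name with the greatest ExportedOn stamp per prefix."""
    best = {}
    for name in names:
        prefix, stamp = name.split("_ExportedOn_")
        # Stamps like 20230208T0015Z compare correctly as strings.
        if prefix not in best or stamp > best[prefix][0]:
            best[prefix] = (stamp, name)
    return sorted(v[1] for v in best.values())


print(latest_per_prefix(files))
```

The surviving names can then drive the existing `pd.concat` comprehension unchanged.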
<python><pandas><glob>
2023-02-26 23:50:46
1
978
user53526356
75,575,624
19,429,024
How to create a recursive folder menu using pystray?
<p>I am trying to create a navigable tree of folders and subfolders using pystray. I am having trouble creating a recursive menu with pystray.</p>

<p>I am getting an error:</p>

<pre class="lang-py prettyprint-override"><code>TypeError: MenuItem.__init__() takes from 3 to 8 positional arguments but 1757 were given
</code></pre>

<p>Yes, my folder has 1757 .cfr files...</p>

<p>The code is attached below. How can I create a recursive menu using pystray where only the last .cfr file of each folder is clickable, while the other files in the folder are only for navigation?</p>

<pre class="lang-py prettyprint-override"><code>import os

import pystray


class OpenTrees:
    def __init__(self, dir_path):
        self.dir_path = dir_path
        self.submenus = self._create_submenus()
        icon = pystray.Icon('name', icon=pystray.Icon('icon.png', width=32, height=32))
        icon.menu = pystray.Menu(*self.submenus)
        icon.run()

    def _create_submenus(self, file_path=None):
        submenus = []
        file_path = file_path or self.dir_path
        for file in sorted(os.listdir(file_path)):
            full_file_path = os.path.join(file_path, file)
            if os.path.isdir(full_file_path):
                submenu = pystray.MenuItem(file, *self._create_submenus(
                    file_path=full_file_path))
                submenus.append(submenu)
            elif file.endswith('.cfr'):
                submenu = pystray.MenuItem(file, lambda: print('Hello'))
                submenus.append(submenu)
        return submenus


if __name__ == '__main__':
    OpenTrees(dir_path=r&quot;D:\Trees\AllTrees&quot;)
</code></pre>

<p>Here is a visual example of what I would like as the final result; only the first .cfr item appears in the menu, and the folders are all navigable.</p>

<p><a href="https://i.sstatic.net/dghZj.gif" rel="nofollow noreferrer"><img src="https://i.sstatic.net/dghZj.gif" alt="enter image description here" /></a></p>
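The 1757-argument TypeError comes from <code>*self._create_submenus(...)</code>: unpacking the child list spreads every child into <code>MenuItem.__init__</code>'s positional parameters. With pystray the usual shape is to wrap the children in a single submenu first, i.e. <code>pystray.MenuItem(file, pystray.Menu(*submenus))</code> (worth checking against your pystray version). The sketch below demonstrates the same recursion with stand-in <code>Menu</code>/<code>MenuItem</code> classes so it runs without pystray installed:

```python
# Minimal stand-ins for pystray.Menu / pystray.MenuItem, kept only to show
# the argument shapes: MenuItem takes (text, action_or_submenu) and Menu
# takes *items.
class Menu:
    def __init__(self, *items):
        self.items = items


class MenuItem:
    def __init__(self, text, action):
        self.text, self.action = text, action


def build_menu(tree):
    """Recursively turn {'name': subtree_or_callable} into menu items.

    Wrapping the children in one Menu (`MenuItem(name, Menu(*children))`)
    passes them as a SINGLE argument, which is what fixes the original
    `MenuItem(file, *submenus)` explosion.
    """
    items = []
    for name, value in sorted(tree.items()):
        if isinstance(value, dict):                       # a folder
            items.append(MenuItem(name, Menu(*build_menu(value))))
        else:                                             # a .cfr "file"
            items.append(MenuItem(name, value))
    return items


tree = {"AllTrees": {"oak": {"a.cfr": print, "b.cfr": print}, "c.cfr": print}}
menu = Menu(*build_menu(tree))
print(len(menu.items))  # → 1 (the single top-level folder)
```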
<python><pystray>
2023-02-26 23:24:49
1
587
Collaxd
75,575,601
13,309,379
Why is Union[List[List[int]], List[int]] cast to List[Union[List[List[int]], List[int]]]? (Correct use of type hints)
<p>I have the following code:</p>

<pre><code>from typing import Union, List, Any

v: Union[list[list[int]], list[int]] = [-1, 3, 1, 6, -5]

# Create a list of lists
if not isinstance(v[0], list):
    v = [v]  # Cast list of ints to list of list of ints

print(v)
</code></pre>

<p><code>mypy</code> complains about it in the following way:</p>

<pre><code>functions.py:5: error: Incompatible types in assignment (expression has type &quot;List[Union[List[List[int]], List[int]]]&quot;, variable has type &quot;Union[List[List[int]], List[int]]&quot;)  [assignment]
Found 1 error in 1 file (checked 1 source file)
</code></pre>

<p>I probably do not understand how type hints work, but I really cannot understand why assigning the new <code>v</code> variable this way makes mypy complain, because the end object is actually a <code>list[list[int]]</code>. Can anybody explain?</p>

<p><em>PS</em></p>

<p>I want to add a note: I encountered this issue while adding type hinting to some code that I do not own on GitHub, so I cannot really touch the &quot;logic&quot; of it.</p>
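The declared type of `v` is fixed for the whole scope, and the wrapped expression `[v]` has element type `Union[List[List[int]], List[int]]`, which is not assignable back to the union itself, hence mypy's complaint. One way around it without touching the logic is to bind the normalized value to a new name whose type is exactly `list[list[int]]`; the comprehension filters below are just one way to convince mypy of the element types (`typing.cast` is another):

```python
from typing import Union

v: Union[list[list[int]], list[int]] = [-1, 3, 1, 6, -5]

# mypy narrows by isinstance, but reassigning `v` cannot change its declared
# Union type; binding to a NEW name with the final type keeps both mypy and
# the runtime happy.
if v and isinstance(v[0], list):
    nested: list[list[int]] = [x for x in v if isinstance(x, list)]
else:
    nested = [[x for x in v if isinstance(x, int)]]

print(nested)  # → [[-1, 3, 1, 6, -5]]
```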
<python><type-hinting><mypy>
2023-02-26 23:17:43
1
712
Indiano
75,575,569
2,056,201
OpenCV not getting proper Confidence values in Java
<p>I am following this tutorial; however, my code is in Java for Android and the tutorial is in Python: <a href="https://pyimagesearch.com/2017/09/11/object-detection-with-deep-learning-and-opencv/" rel="nofollow noreferrer">https://pyimagesearch.com/2017/09/11/object-detection-with-deep-learning-and-opencv/</a></p>

<p>I had to convert the code, but it seems to be getting incorrect values for confidence. Very likely because I did not do the conversion correctly.</p>

<p>This is the original code in Python:</p>

<pre><code># loop over the detections
for i in np.arange(0, detections.shape[2]):
    # extract the confidence (i.e., probability) associated with the
    # prediction
    confidence = detections[0, 0, i, 2]

    # filter out weak detections by ensuring the `confidence` is
    # greater than the minimum confidence
    if confidence &gt; args[&quot;confidence&quot;]:
        # extract the index of the class label from the `detections`,
        # then compute the (x, y)-coordinates of the bounding box for
        # the object
        idx = int(detections[0, 0, i, 1])
        box = detections[0, 0, i, 3:7] * np.array([w, h, w, h])
        (startX, startY, endX, endY) = box.astype(&quot;int&quot;)

        # display the prediction
        label = &quot;{}: {:.2f}%&quot;.format(CLASSES[idx], confidence * 100)
        print(&quot;[INFO] {}&quot;.format(label))
        cv2.rectangle(image, (startX, startY), (endX, endY),
            COLORS[idx], 2)
        y = startY - 15 if startY - 15 &gt; 15 else startY + 15
        cv2.putText(image, label, (startX, y),
            cv2.FONT_HERSHEY_SIMPLEX, 0.5, COLORS[idx], 2)
</code></pre>

<p>This is my converted Java code:</p>

<pre><code>float confidenceThreshold = 0.8f;

// loop over the detections
for (int i = 0; i &lt; detections.rows(); i++) {
    // extract the confidence (i.e., probability) associated with the
    // prediction
    double confidence = detections.get(i, 2)[0];

    // filter out weak detections by ensuring the `confidence` is
    // greater than the minimum confidence
    if (confidence &gt; confidenceThreshold) {
        // extract the index of the class label from the `detections`,
        // then compute the (x, y)-coordinates of the bounding box for
        // the object
        int idx = (int)detections.get(i, 1)[0];
        int boxX = (int)(detections.get(i, 3)[0] * input_image.cols());
        int boxY = (int)(detections.get(i, 4)[0] * input_image.rows());
        int boxWidth = (int)(detections.get(i, 5)[0] * input_image.cols()) - boxX;
        int boxHeight = (int)(detections.get(i, 6)[0] * input_image.rows()) - boxY;

        // display the prediction
        Imgproc.putText(input_image,
            imgLabels.get((int) idx) + &quot;: &quot; + String.format(&quot;%.2f&quot;, confidence * 100) + &quot;%&quot;,
            new Point(boxX, boxY - 5), Imgproc.FONT_HERSHEY_SIMPLEX, 0.5,
            new Scalar(0, 255, 0), 2);
        Imgproc.rectangle(input_image, new Point(boxX, boxY),
            new Point(boxX + boxWidth, boxY + boxHeight),
            new Scalar(0, 255, 0), 2);
    }
}
</code></pre>

<p>I am getting negative values for confidence, and other values like <code>idx</code> don't look right either. I'm not familiar enough with Python or Java syntax to see what I am doing wrong. If someone can help spot the mistake, please answer with properly converted code.</p>

<p>Thank you</p>
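One likely culprit is the detections layout: the SSD output blob is 4-dimensional, shaped (1, 1, N, 7), so indexing it as an N-row Mat without reshaping reads the wrong floats, which would explain the garbage confidences. The NumPy sketch below makes the layout explicit (the sample values are made up); on the Java side the equivalent fix is flattening the result Mat to N×7 before calling <code>detections.get(i, col)</code>, e.g. something like <code>detections.reshape(1, (int)(detections.total() / 7))</code>, to be checked against your OpenCV version:

```python
import numpy as np

# The SSD 'detections' blob has shape (1, 1, N, 7); each row of the last
# axis is [image_id, class_id, confidence, x1, y1, x2, y2], with the box
# coordinates normalized to [0, 1].
detections = np.zeros((1, 1, 2, 7), dtype=np.float32)
detections[0, 0, 0] = [0, 15, 0.93, 0.1, 0.2, 0.4, 0.8]  # hypothetical hit
detections[0, 0, 1] = [0, 7, 0.42, 0.5, 0.5, 0.9, 0.9]   # below threshold

# Flattening to (N, 7) is what the Java code implicitly needs before row
# indexing; without this reshape, (row, col) lands on the wrong values.
flat = detections.reshape(-1, 7)
for row in flat:
    image_id, class_id, conf, x1, y1, x2, y2 = row
    if conf > 0.8:
        print(int(class_id), round(float(conf), 2))  # → 15 0.93
```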
<python><java><android><opencv>
2023-02-26 23:12:10
0
3,706
Mich
75,575,512
7,654,476
How to unite second level ManyToManyFields?
<p>I have models as follows:</p>

<pre class="lang-py prettyprint-override"><code>class Tag(models.Model):
    name = models.CharField(max_length=50)


class Element(models.Model):
    tags = models.ManyToManyField('Tag')


class Category(models.Model):
    elements = models.ManyToManyField('Element')

    @property
    def tags(self):
        ...  # how can I do this?
</code></pre>

<p>How can I get the union of all the tags that appear in elements of a given category?</p>

<p>I could do something like this:</p>

<pre class="lang-py prettyprint-override"><code>def tags(self):
    all_tags = Tag.objects.none()
    for element in self.elements.all():
        all_tags = all_tags | element.tags.all()
    return all_tags.distinct()
</code></pre>

<p>But is there a way to do this directly at database level?</p>
<python><django><django-queryset><manytomanyfield>
2023-02-26 23:00:21
1
443
shd33
75,575,439
326,389
Streamlit slider resetting when changing max_value
<p>I have 2 sliders. The first slider (μ) determines the <code>max_value</code> for the second slider (σ²). Works great with the following code, except for a <em>major issue</em>:</p>

<pre><code>mu = st.slider('$\mu$', 0.0, 5.0, 1.0, step=0.1)
var = st.slider('$\sigma^2$', 0.0, mu, 1.0, step=0.1)
</code></pre>

<p>The issue is that changing <code>mu</code> resets the value of the <code>var</code> slider back to <code>1.0</code>. How do I have the <code>var</code> slider change its <code>max_value</code> from <code>mu</code> without affecting its currently set value? Is Streamlit's <code>session_state</code> mechanism appropriate here? It's very important for this app to allow users the ability to have multiple tabs open with different values for <code>mu</code> and <code>var</code> in each tab. Is each tab a separate <code>session</code>?</p>

<p>Here's the MRE with code: <a href="https://mre-mu.streamlit.app/" rel="nofollow noreferrer">https://mre-mu.streamlit.app/</a></p>

<p><strong>Update</strong>: I was able to accomplish what I wanted with <code>session_state</code>. Is this the right use of <code>session_state</code>?</p>

<pre><code>mu = st.slider('$\mu$', 0.0, 5.0, 2.5, step=0.1)
var = st.slider('$\sigma^2$', 0.0, mu,
                min(mu, st.session_state.var) if 'var' in st.session_state else 1.0,
                step=0.1, key='var')
</code></pre>

<p>Seems to work, even across tabs (no sharing of state, which is what I needed), but the σ² slider no longer slides! The μ slider does slide and update values as you slide. But if you try to slide the σ² slider, it stops the moment values are updated and you have to click and drag it again. Any idea what's happening?</p>
<python><streamlit>
2023-02-26 22:43:55
1
52,820
at.
75,575,412
1,857,373
Continuous Y response not supported by cross_val_score() ('binary'/'multiclass' only) with IterativeImputer and BayesianRidge
<p><strong>Problem Defined, Continuous Challenge</strong></p>

<p>This new <code>imputer_regressor_bay_ridge()</code> function uses IterativeImputer to impute training data. It receives a training dataframe and immediately takes <code>data.values</code> for a NumPy array variable. The training data has many features, plus a Y response variable, and this effort only seeks to impute one single feature.</p>

<p>Apparently my continuous Y response data, which is price ($) data, is not supported in <code>cross_val_score(interative_imputer, data_array, ...)</code>.</p>

<p>So what is the advice for working with continuous data in the Y response variable, so that IterativeImputer satisfies <code>cross_val_score</code>? To support the target type, should I cast my continuous Y response to binary? No, for this is not a binary classification, so multiclass is more in line. So how to handle price data when it is the response variable?</p>

<p><strong>Error Received</strong></p>

<p>ValueError: Supported target types are: ('binary', 'multiclass'). Got 'continuous' instead.</p>

<p><strong>CODE</strong></p>

<pre><code>def imputer_regressor_bay_ridge(data, y):
    data_array = data.values                                     # looks OK
    interative_imputer = IterativeImputer(BayesianRidge())       # runs OK
    interative_imputer_fit = interative_imputer.fit(data_array)  # runs OK
    data_imputed = interative_imputer_fit.transform(data_array)  # runs OK
    cv = RepeatedStratifiedKFold(n_splits=10, n_repeats=3, random_state=1)  # runs OK
    scores = cross_val_score(interative_imputer, data_array, y,
                             scoring='accuracy', cv=cv, n_jobs=-1,
                             error_score='raise')
    return scores, data_imputed
</code></pre>

<p><strong>DATA SAMPLE</strong></p>

<pre><code>print(train_data.shape)
(1460, 250)

data_array = train_data.values
data_array
array([[-1.73086488, -0.20803433, -0.20714171, ..., -0.11785113,  0.4676514 , -0.30599503],
       [-1.7284922 ,  0.40989452, -0.09188637, ..., -0.11785113,  0.4676514 , -0.30599503],
       [-1.72611953, -0.08444856,  0.07347998, ..., -0.11785113,  0.4676514 , -0.30599503],
       ...,
       [ 1.72611953, -0.16683907, -0.14781027, ..., -0.11785113,  0.4676514 , -0.30599503],
       [ 1.7284922 , -0.08444856, -0.08016039, ..., -0.11785113,  0.4676514 , -0.30599503],
       [ 1.73086488,  0.20391824, -0.05811155, ..., -0.11785113,  0.4676514 , -0.30599503]])

y = train_data['ResponseY'].values
y.shape
(1460,)
array([ 0.34727322,  0.00728832,  0.53615372, ...,  1.07761115, -0.48852299, -0.42084081])
</code></pre>

<p><strong>Value Error</strong></p>

<p>Apparently my continuous data, which is price ($) data, is not supported in <code>cross_val_score(interative_imputer, data_array, ...)</code>:</p>

<pre><code>Cell In[27], line 5
----&gt; 5 scores, data_imputed = imputer_regressor_bay_ridge(train_data, y)

Cell In[24], line 110, in imputer_regressor_bay_ridge(data, y)
--&gt; 110 scores = cross_val_score(interative_imputer, data_array,
    111     y, scoring='accuracy', cv=cv, n_jobs=-1, error_score='raise')

[snipped intermediate joblib and sklearn cross-validation frames]

File ~/opt/anaconda3/lib/python3.9/site-packages/sklearn/model_selection/_split.py:652, in StratifiedKFold._make_test_folds(self, X, y)
    650 allowed_target_types = (&quot;binary&quot;, &quot;multiclass&quot;)
    651 if type_of_target_y not in allowed_target_types:
--&gt; 652     raise ValueError(
    653         &quot;Supported target types are: {}. Got {!r} instead.&quot;.format(
    654             allowed_target_types, type_of_target_y
    655         )
    656     )

ValueError: Supported target types are: ('binary', 'multiclass'). Got 'continuous' instead.
</code></pre>
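The error originates in RepeatedStratifiedKFold: stratification (and the 'accuracy' scorer) require class labels, and a price target is continuous. A sketch of the regression-friendly combination, using synthetic data in place of the real training set: RepeatedKFold plus a regression metric, scoring a pipeline that chains the imputer with a BayesianRidge regressor (cross_val_score needs a predictor, which IterativeImputer alone is not):

```python
import numpy as np
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import IterativeImputer
from sklearn.linear_model import BayesianRidge
from sklearn.model_selection import RepeatedKFold, cross_val_score
from sklearn.pipeline import make_pipeline

# Synthetic stand-in for the (1460, 250) training data: a near-linear
# regression problem with ~10% of the feature entries knocked out.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
y = X @ np.array([1.0, -2.0, 0.5, 0.0, 3.0]) + rng.normal(scale=0.1, size=200)
X[rng.random(X.shape) < 0.1] = np.nan

# Two changes vs. the failing code:
#  1. RepeatedKFold, not RepeatedStratifiedKFold: stratification needs class
#     labels, which is exactly what the ValueError complains about.
#  2. Score a regressor (imputer + BayesianRidge pipeline) with a regression
#     metric; 'accuracy' is undefined for a continuous target.
model = make_pipeline(IterativeImputer(estimator=BayesianRidge()),
                      BayesianRidge())
cv = RepeatedKFold(n_splits=5, n_repeats=2, random_state=1)
scores = cross_val_score(model, X, y, scoring="neg_mean_absolute_error", cv=cv)
print(scores.mean())
```

With real price data, any regression scorer works here ('neg_mean_squared_error', 'r2', etc.); casting the target to classes is not needed.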
<python><scikit-learn><cross-validation><imputation>
2023-02-26 22:35:15
1
449
Data Science Analytics Manager
75,575,346
2,377,399
How can I configure my tools to ignore or prevent updates to the execution_count field in a Jupyter Notebook
<p>I'm using the <code>Jupyter</code> extension (v2022.9.1303220346) in <code>Visual Studio Code</code> (v1.73.1).</p> <p>To reproduce this issue, make any modification to the notebook and check it into git. You'll observe that you get an extra difference for <code>execution_count</code>. For example (display from <code>Git Gui</code>):</p> <pre><code>- &quot;execution_count&quot;: 7, + &quot;execution_count&quot;: 9, </code></pre> <p>The execution count doesn't appear to be useful and is noise in the git history. Can Jupyter or VS Code be configured to stop updating this value or (better) ignore it altogether?</p>
<python><git><visual-studio-code><jupyter-notebook>
2023-02-26 22:18:34
2
6,805
AlainD
75,575,330
1,445,985
Python SELECT with parameters: The data types text and nvarchar are incompatible in the equal to operator
<p>I'm trying to find out if an entry is already in the database, like so:</p>

<pre><code>cursor.execute('SELECT id, text FROM dbo.tags WHERE text = ?', str(tag.text))
row = cursor.fetchone()
</code></pre>

<p>In Microsoft SQL Server Management Studio, I am running a SQLEXPRESS server. I have set up the table dbo.tags with a column called text, which is of data type <code>text</code>. I get the following error:</p>

<pre><code>File &quot;scraper.py&quot;, line 69, in insertTag
    cursor.execute('SELECT id, text FROM dbo.tags WHERE text = ?', str(tag.text))
pyodbc.ProgrammingError: ('42000', '[42000] [Microsoft][ODBC SQL Server Driver][SQL Server]The data types text and nvarchar are incompatible in the equal to operator. (402) (SQLExecDirectW); [42000] [Microsoft][ODBC SQL Server Driver][SQL Server]Statement(s) could not be prepared. (8180)')
</code></pre>

<p><code>text</code> is the column type, but I don't understand why it's passing <code>nvarchar</code>. The class property <code>Tag.text</code> is just a string. How can I run this select statement?</p>

<p>I have tried changing the DB data type to <code>ntext</code>, but that made no difference.</p>

<p>Please help.</p>
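The legacy SQL Server `text` type cannot be compared directly with the `nvarchar` parameters that pyodbc sends for Python strings. One workaround, sketched below with a hypothetical cursor and tag value, is to cast the column inside the query; migrating the column to `varchar(max)`/`nvarchar(max)` would be the longer-term fix:

```python
# The point is the query text: casting the legacy `text` column lets SQL
# Server compare it with the nvarchar parameter pyodbc sends for a str.
QUERY = """
    SELECT id, text
    FROM dbo.tags
    WHERE CAST(text AS nvarchar(max)) = ?
"""


def find_tag(cursor, tag_text):
    """Run the lookup with a parameterized value (cursor is hypothetical)."""
    cursor.execute(QUERY, (str(tag_text),))
    return cursor.fetchone()
```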
<python><azure><pyodbc><sql-server-express>
2023-02-26 22:15:53
0
5,037
Chud37
75,575,237
5,431,734
Package gets installed under the include folder and not under site-packages
<p>I am trying to install a package (more specifically <code>diplib</code>) with <code>conda install -c conda-forge diplib</code>, but for some reason nothing is added to <code>envs/env_name/lib/site-packages</code>. The command runs fine, no warnings/errors at all. Everything looks normal, but I am still getting a <code>ModuleNotFoundError: No module named 'diplib'</code> error. However, I can find a folder <code>diplib</code> under <code>envs/env_name/include</code> (it actually contains header files).</p>

<p>Does anyone know what has happened? What is the include folder for? Are we supposed to import packages from it, or is something wrong with the conda package, meaning I need to remove (how?) the <code>diplib</code> folder that appeared under the <code>include</code> directory?</p>

<p>This is on Ubuntu (in case it matters).</p>
<python><conda><diplib>
2023-02-26 21:53:29
1
3,725
Aenaon
75,575,084
4,865,723
Transform a pandas MultiIndex to a single Index using indention
<p>I have a <code>pandas.DataFrame</code> with a <code>pandas.MultiIndex</code> like this:</p>

<pre><code>                      Vals
Fruits     Apples     a
           Banana     b
Vegetables Tomato     c
           Onion      d
           Potato     e
Foobar     LoreIpsum  f
</code></pre>

<p>I want to transform it into this, using a single index only:</p>

<pre><code>               Vals
Fruits
    Apples     a
    Banana     b
Vegetables
    Tomato     c
    Onion      d
    Potato     e
Foobar
    LoreIpsum  f
</code></pre>

<ul> <li>New rows are inserted based on the first level.</li> <li>The values in the second level get an indentation of 4 chars.</li> </ul>

<p>I can imagine how to &quot;hack&quot; this with a lot of if-then-else and for-loops. But I assume that there is an easier pandas way to do it.</p>

<p>Here is a full MWE:</p>

<pre><code>#!/usr/bin/env python3
import pandas as pd

midx = pd.MultiIndex.from_tuples([
    ('Fruits', 'Apples'),
    ('Fruits', 'Banana'),
    ('Vegetables', 'Tomato'),
    ('Vegetables', 'Onion'),
    ('Vegetables', 'Potato'),
    ('Foobar', 'LoreIpsum')
])
df = pd.DataFrame({'Vals': list('abcdef')}, index=midx)
print(df.to_markdown())

exp_idx = pd.Index([
    'Fruits',
    '    Apples',
    '    Banana',
    'Vegetables',
    '    Tomato',
    '    Onion',
    '    Potato',
    'Foobar',
    '    LoreIpsum',
])
exp_df = pd.DataFrame({'Vals': list(' ab cde f')}, index=exp_idx)
print(exp_df.to_markdown())
</code></pre>
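One fairly direct route is a groupby over the first index level: emit a header row per group, then the indented second-level labels. A sketch reproducing the MWE's expected frame (the blank-string value for header rows matches the expected output's `list(' ab cde f')`):

```python
import pandas as pd

midx = pd.MultiIndex.from_tuples([
    ('Fruits', 'Apples'), ('Fruits', 'Banana'),
    ('Vegetables', 'Tomato'), ('Vegetables', 'Onion'),
    ('Vegetables', 'Potato'), ('Foobar', 'LoreIpsum'),
])
df = pd.DataFrame({'Vals': list('abcdef')}, index=midx)

# Build the flat index group by group: one header row whenever level 0
# changes, then the level-1 labels indented by four spaces.
labels, vals = [], []
for first, sub in df.groupby(level=0, sort=False):
    labels.append(first)
    vals.append(' ')  # blank value for the header row
    labels.extend('    ' + s for s in sub.index.get_level_values(1))
    vals.extend(sub['Vals'])

flat = pd.DataFrame({'Vals': vals}, index=pd.Index(labels))
print(flat)
```

`sort=False` keeps the groups in order of first appearance, which preserves the Fruits / Vegetables / Foobar ordering of the original index.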
<python><pandas>
2023-02-26 21:21:42
1
12,450
buhtz
75,574,972
12,436,050
Internal server error while passing parameters in API endpoint URL in Python 3.7
<p>I would like to perform a GET operation to an API endpoint. The following is the base URL:</p>

<pre><code># GET /v{version}/lists/{list-id}/terms/{term-id}/preferred-name

url = &quot;https://abcd.eu/v{version}/lists/{list-id}/terms/{term-id}/preferred-name&quot;
headers = {'Accept': 'application/json'}
params = {'version': '1', 'list-id': '102', 'term-id': '103'}

r = requests.get(url, auth=('ghfgdhg', 'hgdfhghd'), params=params)
print(r.text)
</code></pre>

<p>However, I am getting an internal server error. Any help is highly appreciated.</p>
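A likely cause: `{version}`, `{list-id}` and `{term-id}` are path segments, so passing them via `params=` appends them as a query string and leaves the literal braces in the path, which the server can then reject with a 500. A sketch of substituting them with `str.format` instead (the placeholders are renamed with underscores, since hyphens are not valid format-field names; the actual request is left commented out so nothing is sent):

```python
# {version}, {list_id} and {term_id} are PATH segments: they must be
# substituted into the URL itself, not sent as ?version=1&... parameters.
template = ("https://abcd.eu/v{version}/lists/{list_id}"
            "/terms/{term_id}/preferred-name")
url = template.format(version="1", list_id="102", term_id="103")
print(url)  # → https://abcd.eu/v1/lists/102/terms/103/preferred-name

# import requests
# r = requests.get(url, auth=("user", "password"),
#                  headers={"Accept": "application/json"})
```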
<python><python-requests>
2023-02-26 20:58:18
1
1,495
rshar
75,574,916
5,024,631
pandas: group dataframe rows into different clusters
<p>I have this dataframe:</p>

<pre><code>df = pd.DataFrame({
    'forms_a_cluster': [False, False, True, True, True, False, False, False,
                        True, True, False, True, True, True, False],
    'cluster_number': [False, False, 1, 1, 1, False, False, False,
                       2, 2, False, 3, 3, 3, False]})
</code></pre>

<p>The idea is that I have some criteria which, when certain rows have met it, selects those cases as True, and when consecutive rows meet the criteria, they then form a cluster. I want to be able to label each cluster as <code>cluster_1</code>, <code>cluster_2</code>, <code>cluster_3</code>, etc. I've given an example of the hoped-for output with the column <code>cluster_number</code>. But I have no idea how to do this, given that in the real data I have to do it many times on different datasets which have a different number of rows, and the cluster sizes will be different every time. Do you have any idea how to go about this? Thanks in advance!</p>
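One compact pandas idiom: mark the rows where a True run starts (True now but not True in the previous row), cumulative-sum those starts to number the runs, and zero out the non-cluster rows. A sketch with the sample data:

```python
import pandas as pd

df = pd.DataFrame({
    'forms_a_cluster': [False, False, True, True, True, False, False, False,
                        True, True, False, True, True, True, False]})

flag = df['forms_a_cluster']
# A cluster starts where the flag is True but the previous row was not;
# cumsum over the starts numbers the clusters 1, 2, 3, ... and .where()
# zeroes out every row that is not inside a cluster.
starts = flag & ~flag.shift(fill_value=False)
df['cluster_number'] = starts.cumsum().where(flag, 0)
print(df['cluster_number'].tolist())
```

String labels like `cluster_1` then follow with `'cluster_' + df['cluster_number'].astype(str)` on the nonzero rows; the whole thing adapts automatically to any row count and cluster sizes.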
<python><pandas><dataframe><group-by><grouping>
2023-02-26 20:47:29
2
2,783
pd441
75,574,884
179,372
Converting from dictionary to Dask HighLevelGraph
<p>Dask's <a href="https://docs.dask.org/en/stable/high-level-graphs.html" rel="nofollow noreferrer">HighLevelGraph</a> has a method to convert it to a Python <code>dict</code>; however, it is not clear how to build the <code>HighLevelGraph</code> from a <code>dict</code> (doing the reverse process). Alternatively, is there a way to get the dependencies (as in <code>HighLevelGraph.dependencies</code>) from a <code>dict</code> representing the graph?</p>
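A hedged sketch (assuming a recent dask): the dependencies can be recomputed from the plain dict with `dask.core.get_dependencies`, and the dict can be wrapped back into a `HighLevelGraph` as a single layer via the constructor.

```python
from dask.core import get_dependencies
from dask.highlevelgraph import HighLevelGraph

# a plain low-level task graph as a dict
dsk = {'a': 1, 'b': (lambda x: x + 1, 'a')}

# HighLevelGraph.dependencies-style info, recomputed from the dict itself
deps = {key: get_dependencies(dsk, key) for key in dsk}

# and the dict wrapped back up as a single-layer HighLevelGraph
hlg = HighLevelGraph(layers={'layer0': dsk}, dependencies={'layer0': set()})
```

The single-layer wrapping loses the original layer structure, of course; it only restores a valid `HighLevelGraph` object around the same tasks.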
<python><dask>
2023-02-26 20:40:35
1
20,050
Tarantula
75,574,875
11,693,768
Merge or append 2 dataframes row wise and add a check in a separate column determining which one it came from
<p>I have the following 2 dataframes, <code>df1</code>,</p> <pre><code>import pandas as pd data = { 'commonshortname': ['SNX.US', '002400.CH', 'CDW.US', 'CEC.GR', '300002.CH'], 'altshortname': ['SNX.US', '002400.SHE', 'CDW.US', 'CEC.XETRA', '300002.SHE'], 'Code': ['SNX', '002400', 'CDW', 'CEC', '300002', ...], 'Type': ['Common Stock', 'Common Stock', 'Common Stock', 'Common Stock', 'Common Stock'], 'common': [1, 1, 1, 1, 1] } df1 = pd.DataFrame(data) </code></pre> <p>and <code>df2</code> which looks like this,</p> <pre><code>data = {'altshortname': ['SEDG.US', 'MHLD.US', 'CDW.US', 'POLA.US', 'PHASQ.US'], 'Code': ['SEDG', 'MHLD', 'CDW', 'POLA', 'PHASQ'], 'Type': ['Common Stock', 'Common Stock', 'Common Stock', 'Common Stock', 'Common Stock'], 'alt': [1, 1, 1, 1, 1]} df2 = pd.DataFrame(data) </code></pre> <p>This is what they look like in dataframe form,</p> <pre><code> commonshortname altshortname Code Type common 0 SNX.US SNX.US SNX Common Stock 1 1 002400.CH 002400.SHE 002400 Common Stock 1 2 CDW.US CDW.US CDW Common Stock 1 3 CEC.GR CEC.XETRA CEC Common Stock 1 4 300002.CH 300002.SHE 300002 Common Stock 1 ... ... ... ... ... ... 
</code></pre> <p>and</p> <pre><code> altshortname Code Type alt 0 SEDG.US SEDG Common Stock 1 1 MHLD.US MHLD Common Stock 1 2 CDW.US CDW Common Stock 1 3 POLA.US POLA Common Stock 1 4 PHASQ.US PHASQ Common Stock 1 </code></pre> <p>I want to merge these 2 row wise, so that if they exist in both, the data from the top dataframe is taken and a 1 is added into the alt column for it.</p> <p>The final frame should look like this,</p> <pre><code> commonshortname altshortname Code Type common alt 0 SNX.US SNX.US SNX Common Stock 1 1 002400.CH 002400.SHE 002400 Common Stock 1 2 CDW.US CDW.US CDW Common Stock 1 1 3 CEC.GR CEC.XETRA CEC Common Stock 1 4 300002.CH 300002.SHE 300002 Common Stock 1 0 SEDG.US SEDG Common Stock 1 1 MHLD.US MHLD Common Stock 1 3 POLA.US POLA Common Stock 1 4 PHASQ.US PHASQ Common Stock 1 </code></pre> <p>Basically, if the data came from df1, there will be a 1 in the common column, if it came from df2, there will be a 1 in the alt column, and if it came from both, there will be a 1 in both columns.</p> <p>Can this be done in pandas?</p> <p>I tried to do a merge, but it keeps joining it column wise and I end up with millions of rows.</p> <pre><code>merged_df = pd.merge(df1, df2, on=['altshortname', 'Code', 'Type'], how='outer') </code></pre>
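Since each frame already carries its own indicator column (`common` / `alt`), a plain outer merge on the shared key columns produces exactly the desired shape: rows found in both frames get a 1 in both columns, and rows from only one frame get NaN in the other's column. A sketch on cut-down stand-ins for the real data:

```python
import pandas as pd

# cut-down stand-ins for the real frames
df1 = pd.DataFrame({
    'commonshortname': ['SNX.US', 'CDW.US', 'CEC.GR'],
    'altshortname':    ['SNX.US', 'CDW.US', 'CEC.XETRA'],
    'Code': ['SNX', 'CDW', 'CEC'],
    'Type': ['Common Stock'] * 3,
    'common': [1, 1, 1],
})
df2 = pd.DataFrame({
    'altshortname': ['SEDG.US', 'CDW.US'],
    'Code': ['SEDG', 'CDW'],
    'Type': ['Common Stock'] * 2,
    'alt': [1, 1],
})

# each frame keeps its own indicator column; rows present in both get 1s
merged = pd.merge(df1, df2, on=['altshortname', 'Code', 'Type'], how='outer')
```

The "millions of rows" blow-up is the classic symptom of duplicated key combinations in one or both frames; de-duplicating each frame on the join keys first (`df.drop_duplicates(subset=['altshortname', 'Code', 'Type'])`) keeps the merge one-to-one.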
<python><pandas><dataframe><merge><concatenation>
2023-02-26 20:39:31
2
5,234
anarchy
75,574,661
3,314,953
writing to a temp file in python with a for loop
<p>The code in <em>Example 1</em> below writes a single line into a temp file. I was expecting it to write multiple lines into the temp file as is shown in <em>Example 2</em>, which writes 94 lines to the console. Cause the <code>temp.write(shift(alphabet,i))</code> is in the same place of the loop as <code>print(shift(alphabet,i))</code>.</p> <p>Why?</p> <p><strong>Example 1</strong></p> <pre class="lang-py prettyprint-override"><code>import tempfile, os alphabet = b'!&quot;\'#$%&amp;()*+,-./0123456789:;&lt;=&gt;?@ABCDEFGHIJKLMNOPQRSTUVWXYZ[\]^_`abcdefghijklmnopqrstuvwxyz{|}~' def shift(seq, n): n = n % len(seq) return seq[n:] + seq[:n] def alphatemp(): temp = tempfile.NamedTemporaryFile(delete=False) print(&quot;Created file is:&quot;, temp) print(&quot;Name of the file is:&quot;, temp.name) try: for i in range(len(alphabet)): open(temp.name) # os.write(temp.name, shift(alphabet,i)) temp.write(shift(alphabet,i)) temp.seek(0) print(temp.read()) finally: # os.close(temp.name) temp.close() if __name__ == &quot;__main__&quot;: alphatemp() </code></pre> <p><strong>Example 2</strong></p> <pre class="lang-py prettyprint-override"><code>import tempfile, os alphabet = b'!&quot;\'#$%&amp;()*+,-./0123456789:;&lt;=&gt;?@ABCDEFGHIJKLMNOPQRSTUVWXYZ[\]^_`abcdefghijklmnopqrstuvwxyz{|}~' def shift(seq, n): n = n % len(seq) return seq[n:] + seq[:n] temp = tempfile.NamedTemporaryFile(delete=False) print(&quot;Created file is: &quot;, temp) print(&quot;Name of this file is: &quot;, temp.name) try: for i in range(len(alphabet)): # temp.write(shift(alphabet,i)) print(shift(alphabet,i)) # print(temp.read()) finally: temp.close() </code></pre>
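The file in Example 1 actually does receive every shift: `write()` never emits a newline, so `read()` returns all 94 shifts concatenated as one long "line" (the stray `open(temp.name)` call has no effect, and the `seek(0)`/`read()` pair leaves the file position at EOF, so the next write still appends). Adding an explicit `b'\n'` separator produces one line per shift; a sketch with a shortened stand-in alphabet:

```python
import os
import tempfile

alphabet = b'ABCDE'  # shortened stand-in for the real alphabet

def shift(seq, n):
    n = n % len(seq)
    return seq[n:] + seq[:n]

temp = tempfile.NamedTemporaryFile(delete=False)
try:
    for i in range(len(alphabet)):
        temp.write(shift(alphabet, i) + b'\n')  # newline separates the lines
    temp.seek(0)
    lines = temp.read().splitlines()
finally:
    temp.close()
    os.unlink(temp.name)  # clean up, since delete=False was requested
```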
<python><python-3.x>
2023-02-26 19:58:13
0
850
Whitequill Riclo
75,574,639
2,369,000
Setting values in a pandas multi-index cross-sectional slice
<p>I would like to set the value of a cross section to the value relative to the mean. The code below sets the values to null, but I would like the values to be -5 and 5. Is there an easily readable way to do this without looping through each column in the index?</p> <pre><code>import pandas as pd x = pd.DataFrame({'a': [1, 2, 3], 'b': [1, 2, 3]}) y = pd.DataFrame({'a': [11, 12, 13], 'b': [21, 22, 23]}) df = pd.concat({'x': x, 'y': y}, axis=1) timeslice = df.loc[1, (slice(None), 'a')].values.flatten() timeslice = timeslice[~np.isnan(timeslice)] average = np.mean(timeslice) df.loc[1, (slice(None), 'b')] = df.loc[1, (slice(None), 'a')] - average </code></pre> <pre><code> x y a b a b 0 1 1.0 11 21.0 1 2 NaN 12 NaN 2 3 3.0 13 23.0 </code></pre>
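The NaNs come from index alignment: the right-hand side is a Series carrying the column labels `(x, a)` and `(y, a)`, which do not match the `(x, b)`/`(y, b)` target, so pandas fills NaN. Assigning the underlying array sidesteps alignment. A sketch (note the MWE also needs `import numpy as np`):

```python
import numpy as np
import pandas as pd

x = pd.DataFrame({'a': [1, 2, 3], 'b': [1, 2, 3]})
y = pd.DataFrame({'a': [11, 12, 13], 'b': [21, 22, 23]})
df = pd.concat({'x': x, 'y': y}, axis=1)

timeslice = df.loc[1, (slice(None), 'a')].to_numpy(dtype=float)
average = np.nanmean(timeslice)  # NaN-safe mean, no manual filtering needed

# .values strips the (x, a)/(y, a) labels, so pandas cannot align them
# against the (x, b)/(y, b) target and fill NaN
df.loc[1, (slice(None), 'b')] = (df.loc[1, (slice(None), 'a')] - average).values
```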
<python><pandas><multi-index>
2023-02-26 19:52:28
1
461
HAL
75,574,461
4,442,337
Debug dockerized Django application running Uvicorn - ASGI on VSCode?
<p>I'm trying to run <a href="https://pypi.org/project/debugpy/" rel="nofollow noreferrer">debugpy</a> in attach mode to debug on VScode a dockerized Django app. With the following configuration on <code>launch.json</code></p> <pre class="lang-json prettyprint-override"><code>{ &quot;version&quot;: &quot;0.2.0&quot;, &quot;configurations&quot;: [ { &quot;name&quot;: &quot;Python: Django&quot;, &quot;type&quot;: &quot;python&quot;, &quot;request&quot;: &quot;attach&quot;, &quot;pathMappings&quot;: [{ &quot;localRoot&quot;: &quot;${workspaceFolder}&quot;, &quot;remoteRoot&quot;: &quot;/app&quot; }], &quot;port&quot;: 9999, &quot;host&quot;: &quot;127.0.0.1&quot; } ] } </code></pre> <p>I've been able to attach correctly to it adding the following section on the <code>manage.py</code> file:</p> <pre class="lang-py prettyprint-override"><code>#!/usr/bin/env python import os import sys from pathlib import Path if __name__ == &quot;__main__&quot;: os.environ.setdefault(&quot;DJANGO_SETTINGS_MODULE&quot;, &quot;config.settings.local&quot;) # debugpy configuration from django.conf import settings if settings.DEBUG: if os.environ.get(&quot;RUN_MAIN&quot;) or os.environ.get(&quot;WERKZEUG_RUN_MAIN&quot;): import debugpy debugpy.listen((&quot;0.0.0.0&quot;, 9999)) ... </code></pre> <p>The Django <code>Dockerfile</code> is launching the script:</p> <pre class="lang-bash prettyprint-override"><code>#!/bin/bash set -o errexit set -o pipefail set -o nounset python manage.py migrate exec python manage.py runserver 0.0.0.0:8000 </code></pre> <p>And I'm exposing the ports 8000 and 9999 running the docker image. 
So far so good.</p> <p>What I'm trying to do now is enable the same support for the ASGI application running under <a href="https://www.uvicorn.org/" rel="nofollow noreferrer">uvicorn</a>.</p> <pre class="lang-bash prettyprint-override"><code>#!/bin/bash set -o errexit set -o pipefail set -o nounset python manage.py migrate exec uvicorn config.asgi:application --host 0.0.0.0 --reload --reload-include '*.html' </code></pre> <p><code>asgi.py</code></p> <pre class="lang-py prettyprint-override"><code>&quot;&quot;&quot; ASGI config for Suite-Backend project. It exposes the ASGI callable as a module-level variable named ``application``. For more information on this file, see https://docs.djangoproject.com/en/dev/howto/deployment/asgi/ &quot;&quot;&quot; import os import sys from pathlib import Path from django.core.asgi import get_asgi_application # This allows easy placement of apps within the interior # suite_backend directory. BASE_DIR = Path(__file__).resolve(strict=True).parent.parent sys.path.append(str(BASE_DIR / &quot;suite_backend&quot;)) # If DJANGO_SETTINGS_MODULE is unset, default to the local settings os.environ.setdefault(&quot;DJANGO_SETTINGS_MODULE&quot;, &quot;config.settings.local&quot;) # This application object is used by any ASGI server configured to use this file. django_application = get_asgi_application() </code></pre> <p>Even forcing debugpy to listen on port 9999 I'm unable to attach to it. I can't provide you any logs but I think that running the VSCode debugger with &quot;Python: Django&quot; configuration cannot simply find an available listener and just dies.</p> <p>Do you have any clue or suggestion on how to setup this environment properly? I've been searching for about an hour but couldn't find any resources on the matter.</p>
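With `--reload`, uvicorn serves requests from a child process, so a `debugpy.listen()` that only runs in the launcher (as the `manage.py` hook does under `runserver`) never executes in the worker. One hedged workaround is to start the listener from `config/asgi.py` itself, behind an opt-in environment variable (`DEBUGPY` here is a made-up name) so the reloader's re-imports don't try to bind port 9999 twice:

```python
import os

def maybe_start_debugpy(port=9999):
    """Start a debugpy listener when DEBUGPY=1 (hypothetical opt-in env var)."""
    if os.environ.get("DEBUGPY") != "1":
        return False
    import debugpy  # imported lazily so the dependency stays optional
    try:
        debugpy.listen(("0.0.0.0", port))
    except RuntimeError:
        return False  # a listener is already bound (module was re-imported)
    return True

# call this near the top of config/asgi.py, before get_asgi_application()
started = maybe_start_debugpy()
```

The container must still publish port 9999, exactly as in the working `runserver` setup.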
<python><django><docker><visual-studio-code><uvicorn>
2023-02-26 19:21:44
1
2,191
browser-bug
75,574,407
20,266,647
Issue with ingesting values: pipeline called 2x more
<p>When I ingested values into the feature set, the pipeline was called twice as many times as expected (I used MLRun version 1.2.1). This seems like a bug; do you know why?</p> <p>I used this code:</p> <pre><code>import mlrun import mlrun.feature_store as fstore # mlrun: start-code import math def calc(x): x['fn2']=math.sin(x['fn2'])*100.0 print('calc') return x # mlrun: end-code mlrun.set_env_from_file(&quot;mlrun-nonprod.env&quot;) project = mlrun.get_or_create_project(project_name, context='./', user_project=False) feature_derived = fstore.get_feature_set(f&quot;{project_name}/{feature_derivedName}&quot;) ... # dataFrm has only two values feature_derived.graph.to(name=&quot;calc&quot;, handler='calc') fstore.ingest(feature_derived, dataFrm) </code></pre> <p>I got this output (method <code>calc</code> was called four times) for a dataFrm with two values:</p> <pre><code>&gt; calc &gt; calc &gt; calc &gt; calc </code></pre>
<python><data-ingestion><mlops><feature-store><mlrun>
2023-02-26 19:13:40
1
1,390
JIST
75,574,250
386,861
How to scrape data into PythonAnywhere database
<p>I have a database on PythonAnywhere and credentials in place.</p> <p>The aim is to scrape a whole load of news websites and chuck the data into a new website using Flask.</p> <p>Here's my code for a section of my website. (I've ignored the imports because it runs)</p> <pre><code>@app.route(&quot;/nationals&quot;) def scrape_nationals(): feeds = [{&quot;type&quot;: &quot;news&quot;,&quot;title&quot;: &quot;BBC&quot;, &quot;url&quot;: &quot;http://feeds.bbci.co.uk/news/uk/rss.xml&quot;}, {&quot;type&quot;: &quot;news&quot;,&quot;title&quot;: &quot;The Economist&quot;, &quot;url&quot;: &quot;https://www.economist.com/international/rss.xml&quot;}, {&quot;type&quot;: &quot;news&quot;,&quot;title&quot;: &quot;The New Statesman&quot;, &quot;url&quot;: &quot;https://www.newstatesman.com/feed&quot;}, {&quot;type&quot;: &quot;news&quot;,&quot;title&quot;: &quot;The New York Times&quot;, &quot;url&quot;: &quot;https://rss.nytimes.com/services/xml/rss/nyt/HomePage.xml&quot;}, {&quot;type&quot;: &quot;news&quot;,&quot;title&quot;: &quot;Metro UK&quot;,&quot;url&quot;: &quot;https://metro.co.uk/feed/&quot;}, {&quot;type&quot;: &quot;news&quot;, &quot;title&quot;: &quot;Evening Standard&quot;, &quot;url&quot;: &quot;https://www.standard.co.uk/rss.xml&quot;}, {&quot;type&quot;: &quot;news&quot;,&quot;title&quot;: &quot;Daily Mail&quot;, &quot;url&quot;: &quot;https://www.dailymail.co.uk/articles.rss&quot;}, {&quot;type&quot;: &quot;news&quot;,&quot;title&quot;: &quot;Sky News&quot;, &quot;url&quot;: &quot;https://news.sky.com/feeds/rss/home.xml&quot;}, {&quot;type&quot;: &quot;news&quot;, &quot;title&quot;: &quot;The Mirror&quot;, &quot;url&quot;: &quot;https://www.mirror.co.uk/news/?service=rss&quot;}, {&quot;type&quot;: &quot;news&quot;, &quot;title&quot;: &quot;The Sun&quot;, &quot;url&quot;: &quot;https://www.thesun.co.uk/news/feed/&quot;}, {&quot;type&quot;: &quot;news&quot;, &quot;title&quot;: &quot;Sky News&quot;, &quot;url&quot;: 
&quot;https://news.sky.com/feeds/rss/home.xml&quot;}, {&quot;type&quot;: &quot;news&quot;, &quot;title&quot;: &quot;The Guardian&quot;, &quot;url&quot;: &quot;https://www.theguardian.com/uk/rss&quot;}, {&quot;type&quot;: &quot;news&quot;, &quot;title&quot;: &quot;The Independent&quot;, &quot;url&quot;: &quot;https://www.independent.co.uk/news/uk/rss&quot;}, #{&quot;type&quot;: &quot;news&quot;, &quot;title&quot;: &quot;The Telegraph&quot;, &quot;url&quot;: &quot;https://www.telegraph.co.uk/news/rss.xml&quot;}, {&quot;type&quot;: &quot;news&quot;, &quot;title&quot;: &quot;The Times&quot;, &quot;url&quot;: &quot;https://www.thetimes.co.uk/?service=rss&quot;}] print(feeds) data = [] # &lt;---- initialize empty list here for feed in feeds: parsed_feed = feedparser.parse(feed['url']) #print(&quot;Title:&quot;, feed['title']) #print(&quot;Number of Articles:&quot;, len(parsed_feed.entries)) #print(&quot;\n&quot;) for entry in parsed_feed.entries: title = entry.title print(title) url = entry.link #print(entry.summary) try: summary = entry.summary[:400] or &quot;No summary available&quot; # I simplified the ternary operators here except: #print(&quot;no summary&quot;) summary = &quot;none&quot; try: date = pd.to_datetime(entry.published)# #or &quot;No data available&quot; # I simplified the ternary operators here except: #print(&quot;date&quot;) date = pd.to_datetime(&quot;01-01-1970&quot;) data.append([title, url, summary, date]) # &lt;---- append data from each entry here df = pd.DataFrame(data, columns = ['title', 'url', 'summary', 'date']) articles = pd.read_sql('nationals', con = engine) articles = articles.drop_duplicates() df = df.append(articles) df = df.drop_duplicates() df.to_sql('nationals', con = engine, if_exists = 'replace', index = False) </code></pre> <p>It works in VSCode locally, but I can't work out why my table on PythonAnywhere won't populate. What have I got wrong?</p>
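Two things worth checking (hedged guesses, since no traceback is shown): `DataFrame.append` was deprecated and removed in pandas 2.0, so on a newer pandas the route raises before `to_sql` ever runs; and free PythonAnywhere accounts can only reach external hosts on the proxy whitelist, so some feed fetches may come back empty there even though they work locally. The `append` call has a drop-in replacement:

```python
import pandas as pd

# stand-ins for the scraped rows and the rows already in the table
new_rows = pd.DataFrame({'title': ['a', 'b'], 'url': ['u1', 'u2']})
existing = pd.DataFrame({'title': ['b'], 'url': ['u2']})

# DataFrame.append was removed in pandas 2.0 -- use pd.concat instead
combined = pd.concat([new_rows, existing], ignore_index=True).drop_duplicates()
```

`combined.to_sql(...)` then proceeds as before.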
<python><pandas><flask><feedparser>
2023-02-26 18:44:26
1
7,882
elksie5000
75,574,174
6,367,971
Forward fill dataframe column if string is present in another column
<p>I have a dataframe and I want to forward fill one of the columns but only if the string is present in one of the other columns.</p> <pre><code> Type Match Matchup 0 Parent ABC_12252000_NY_Leag_Natl_en-NY_RegSeason_NYY-vs-LAA NYY-vs-LAA 1 Child ABC_12252000_NY_Leag_Natl_en-NY_RegSeason_NYY-vs-LAA NaN 2 SubChild ABC_12252000_NY_Leag_Natl_en-NY_RegSeason_NYM-vs-LAD NYM-vs-LAD 3 SubChild ABC_12252000_NY_Leag_Natl_en-NY_RegSeason_NYM-vs-LAD NaN 4 Test All_Star_Game_12252000 NaN 5 Child ABC_12252000_NY_Leag_Natl_en-NY_RegSeason_BAL-vs-BOS BAL-vs-BOS 6 Parent ABC_12252000_NY_Leag_Natl_en-NY_RegSeason_BAL-vs-BOS NaN 7 Child ABC_12252000_NY_Leag_Natl_en-NY_RegSeason_TBR-vs-COL TBR-vs-COL 8 Parent ABC_12252000_NY_Leag_Natl_en-NY_RegSeason_TBR-vs-COL NaN 9 SubChild ABC_12252000_NY_Leag_Natl_en-NY_RegSeason_TBR-vs-COL NaN </code></pre> <p>For example, if <code>Matchup</code> is present <code>Match</code>, I want to forward fill it so that the output looks like:</p> <pre><code> Type Match Matchup Parent ABC_12252000_NY_Leag_Natl_en-NY_RegSeason_NYY-vs-LAA NYY-vs-LAA Child ABC_12252000_NY_Leag_Natl_en-NY_RegSeason_NYY-vs-LAA NYY-vs-LAA SubChild ABC_12252000_NY_Leag_Natl_en-NY_RegSeason_NYM-vs-LAD NYM-vs-LAD SubChild ABC_12252000_NY_Leag_Natl_en-NY_RegSeason_NYM-vs-LAD NYM-vs-LAD Test All_Star_Game_12252000 Child ABC_12252000_NY_Leag_Natl_en-NY_RegSeason_BAL-vs-BOS BAL-vs-BOS Parent ABC_12252000_NY_Leag_Natl_en-NY_RegSeason_BAL-vs-BOS BAL-vs-BOS Child ABC_12252000_NY_Leag_Natl_en-NY_RegSeason_TBR-vs-COL TBR-vs-COL Parent ABC_12252000_NY_Leag_Natl_en-NY_RegSeason_TBR-vs-COL TBR-vs-COL SubChild ABC_12252000_NY_Leag_Natl_en-NY_RegSeason_TBR-vs-COL TBR-vs-COL </code></pre>
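One hedged approach: since `Matchup`, where present, is always embedded in the shared `Match` string, rows with the same `Match` belong together, so the first non-null `Matchup` of each `Match` group can be broadcast to the whole group (shortened stand-in data below):

```python
import pandas as pd

# shortened stand-in data: Matchup, where known, is embedded in Match
df = pd.DataFrame({
    'Match':   ['X_NYY-vs-LAA', 'X_NYY-vs-LAA', 'All_Star_Game',
                'X_BAL-vs-BOS', 'X_BAL-vs-BOS'],
    'Matchup': ['NYY-vs-LAA', None, None, 'BAL-vs-BOS', None],
})

# broadcast the first non-null Matchup to every row of its Match group;
# groups with no Matchup at all (All_Star_Game) stay NaN
df['Matchup'] = df.groupby('Match')['Matchup'].transform('first')
```

Unlike a plain `ffill`, this also fills a NaN row that happens to come *before* the populated row of its group (the Child/Parent ordering in the sample varies).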
<python><pandas><dataframe>
2023-02-26 18:30:23
1
978
user53526356
75,574,109
17,696,880
How to remove consecutively repeated strings only if these strings are in the middle of "((VERB)" and ")"?
<pre><code>import re input_text = &quot;((VERB) saltar a nosotros a nosotros) a nosotros a nosotros a nosotros ((VERB)correr a nosotros) sdsdsd ((VERB) saltar a nosotros a nosotros)&quot; input_text = re.sub(r&quot;\(\(VERB\)&quot; + r&quot;((?:\w\s*)+)&quot; + r&quot;\)&quot;, lambda x: re.sub(r&quot;(a nosotros)\s*\1+&quot;, r&quot;\1&quot;, x.group()), input_text) print(input_text) # --&gt; output </code></pre> <p>In this code I was trying to remove consecutively repeated <code>&quot;a nosotros&quot;</code> strings only if this strings are in the middle of <code>&quot;((VERB)&quot;</code> and <code>)&quot;</code>, that is, that string that captures the capturing group <code>r&quot;\(\(VERB\)&quot; + r&quot;((?:\w\s*)+)&quot; + r&quot;\)&quot;</code></p> <p>This is the output you should be getting when running this script:</p> <pre><code>&quot;((VERB) saltar a nosotros) a nosotros a nosotros a nosotros ((VERB)correr a nosotros) sdsdsd ((VERB) saltar a nosotros)&quot; </code></pre> <p>Although the code that I have placed in the question does edit the input string, what should i modify?</p>
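The inner pattern `r"(a nosotros)\s*\1+"` only collapses repeats that are glued together, because `\1+` leaves no room for the whitespace *between* occurrences; moving the whitespace inside the repeated group fixes it. A sketch (also matching the parenthesised span with `[^)]*`, which is simpler than `(?:\w\s*)+`):

```python
import re

input_text = ("((VERB) saltar a nosotros a nosotros) a nosotros a nosotros "
              "a nosotros ((VERB)correr a nosotros) sdsdsd "
              "((VERB) saltar a nosotros a nosotros)")

output = re.sub(
    r"\(\(VERB\)[^)]*\)",                        # each ((VERB) ...) group
    lambda m: re.sub(r"(a nosotros)(?:\s+\1)+",  # repeats separated by spaces
                     r"\1", m.group()),
    input_text,
)
```

The repeats outside the parentheses are untouched because the outer pattern only ever hands the `((VERB) ...)` spans to the inner substitution.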
<python><python-3.x><regex><string><regex-group>
2023-02-26 18:19:55
2
875
Matt095
75,573,824
13,518,907
"collate_fn" for Huggingface Hyperparameter Tuning
<p>I am following <a href="https://wandb.ai/matt24/vit-snacks-sweeps/reports/Hyperparameter-Search-for-HuggingFace-Transformer-Models--VmlldzoyMTUxNTg0" rel="nofollow noreferrer">this</a> tutorial on how to do hyperparameter tuning with Huggingface and Wandb. Most of it works but I don't quite understand what the &quot;collate_fn&quot; Function is doing and how I have to adjust it for my use case. My dataset is looks like this: &quot;text&quot; column of type str with the contents of a tweet and a &quot;majority_votes&quot; with int values. Here is my code:</p> <pre><code>import wandb wandb.login() %env WANDB_PROJECT=vit_snacks_sweeps %env WANDB_LOG_MODEL=true %env WANDD_NOTEBOOK_NAME=Trainer_Huggingface from transformers import BertTokenizer, BertForSequenceClassification import numpy as np from sklearn.model_selection import train_test_split with s3.open(f&quot;{bucket_name}/KFOLD1/{train_file_name}&quot;,'r') as file: data = pd.read_csv(file) with s3.open(f&quot;{bucket_name}/KFOLD1/{test_file_name}&quot;,'r') as file: test_data = pd.read_csv(file) data = data[[&quot;Text&quot;, &quot;majority_vote&quot;]] test_data = test_data[[&quot;Text&quot;, &quot;majority_vote&quot;]] data.rename(columns={'Text': 'text', 'majority_vote': 'labels'}, inplace=True) test_data.rename(columns={'Text': 'text', 'majority_vote': 'labels'}, inplace=True) # Define pre trained tokenizer and model tokenizer = BertTokenizer.from_pretrained(model_name) #model = BertForSequenceClassification.from_pretrained(model_name, num_labels=2) # ----- 1. 
Preprocess data -----# # Preprocess data X = list(data[&quot;text&quot;]) y = list(data[&quot;labels&quot;]) X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.2, random_state=11) X_train_tokenized = tokenizer(X_train, padding=True, truncation=True, max_length=512) X_val_tokenized = tokenizer(X_val, padding=True, truncation=True, max_length=512) # Create torch dataset class Dataset(torch.utils.data.Dataset): def __init__(self, encodings, labels=None): self.encodings = encodings self.labels = labels def __getitem__(self, idx): item = {key: torch.tensor(val[idx]) for key, val in self.encodings.items()} if self.labels: item[&quot;labels&quot;] = torch.tensor(self.labels[idx]) return item def __len__(self): return len(self.encodings[&quot;input_ids&quot;]) train_dataset = Dataset(X_train_tokenized, y_train) val_dataset = Dataset(X_val_tokenized, y_val) def model_init(): vit_model = model = BertForSequenceClassification.from_pretrained(model_name, num_labels=2) return vit_model # method sweep_config = { 'method': 'random' } # hyperparameters parameters_dict = { 'epochs': { 'value': 1 }, 'batch_size': { 'values': [8, 16, 32] }, 'learning_rate': { 'distribution': 'log_uniform_values', 'min': 1e-5, 'max': 1e-3 }, 'weight_decay': { 'values': [0.0, 0.2,0.3,0.4,0.5] }, } sweep_config['parameters'] = parameters_dict sweep_id = wandb.sweep(sweep_config, project='hatespeech') # define function to compute metrics from datasets import load_metric import numpy as np def compute_metrics_fn(eval_preds): metrics = dict() accuracy_metric = load_metric('accuracy') precision_metric = load_metric('precision') recall_metric = load_metric('recall') f1_metric = load_metric('f1') logits = eval_preds.predictions labels = eval_preds.label_ids preds = np.argmax(logits, axis=-1) metrics.update(accuracy_metric.compute(predictions=preds, references=labels)) metrics.update(precision_metric.compute(predictions=preds, references=labels, average='weighted')) 
metrics.update(recall_metric.compute(predictions=preds, references=labels, average='weighted')) metrics.update(f1_metric.compute(predictions=preds, references=labels, average='weighted')) return metrics import torch def collate_fn(examples): text = torch.stack([example['text'] for example in examples]) labels = torch.tensor([example['labels'] for example in examples]) return {'text': text, 'labels': labels} from transformers import TrainingArguments, Trainer def train(config=None): with wandb.init(config=config): # set sweep configuration config = wandb.config # set training arguments training_args = TrainingArguments( output_dir='hyper', overwrite_output_dir=True, report_to='wandb', # Turn on Weights &amp; Biases logging num_train_epochs=config.epochs, learning_rate=config.learning_rate, weight_decay=config.weight_decay, per_device_train_batch_size=config.batch_size, per_device_eval_batch_size=16, save_strategy='epoch', evaluation_strategy='epoch', logging_strategy='epoch', load_best_model_at_end=True, #fp16=True ) # define training loop trainer = Trainer( # model, model_init=model_init, args=training_args, data_collator=collate_fn, train_dataset=train_dataset, eval_dataset=val_dataset, compute_metrics=compute_metrics_fn ) # start training loop trainer.train() wandb.agent(sweep_id, train, count=20) </code></pre> <p>When executing the script I get the following error message:</p> <pre><code>wandb: ERROR Run jav32 errored: KeyError('text') </code></pre> <p>I tried to adjust the collate_fn function with the columns from my data frame, but it doesn't work. So how do I have to change the code to make it work?</p> <p>Thanks in advance!</p>
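The `KeyError('text')` comes from `collate_fn` indexing `example['text']`: the custom `Dataset` above yields items keyed by the tokenizer's output (`input_ids`, `token_type_ids`, `attention_mask`) plus `labels`; no `'text'` key survives tokenisation. A hedged sketch that stacks whatever tensor fields the items actually carry (this assumes equal-length sequences, which `tokenizer(..., padding=True)` produces):

```python
import torch

def collate_fn(examples):
    # stack every tensor field produced by the tokenizer; 'labels'
    # becomes a 1-D tensor of class ids
    batch = {
        key: torch.stack([ex[key] for ex in examples])
        for key in examples[0]
        if key != 'labels'
    }
    batch['labels'] = torch.stack([ex['labels'] for ex in examples])
    return batch

# tiny fake batch in the same shape the Dataset above yields
examples = [
    {'input_ids': torch.tensor([1, 2, 3]),
     'attention_mask': torch.tensor([1, 1, 1]),
     'labels': torch.tensor(0)},
    {'input_ids': torch.tensor([4, 5, 6]),
     'attention_mask': torch.tensor([1, 1, 0]),
     'labels': torch.tensor(1)},
]
batch = collate_fn(examples)
```

Since the `Dataset` already returns tensors, dropping `data_collator` entirely and letting `Trainer`'s default collator batch the dicts may work just as well.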
<python><pytorch><huggingface-transformers><collate><wandb>
2023-02-26 17:33:06
1
565
Maxl Gemeinderat
75,573,658
244,297
How to use binary search on an existing SortedList with a key function?
<p>I would like to use binary search on a <a href="https://grantjenks.com/docs/sortedcontainers/sortedlist.html#sortedlist" rel="nofollow noreferrer"><code>SortedList</code></a> with a key function, e.g. similarly to <a href="https://docs.python.org/3/library/bisect.html#bisect.bisect_right" rel="nofollow noreferrer"><code>bisect_right()</code></a> in the <a href="https://docs.python.org/3/library/bisect.html" rel="nofollow noreferrer"><code>bisect</code></a> module. However, <a href="https://grantjenks.com/docs/sortedcontainers/sortedlist.html#sortedcontainers.SortedList.bisect_right" rel="nofollow noreferrer"><code>SortedList.bisect_right()</code></a> only supports searching by value. How to make it work with a key function?</p>
<python><binary-search><sortedcontainers>
2023-02-26 17:08:53
2
151,764
Eugene Yarmash
75,573,612
12,767,387
How can I create multiple msi installers for the same script with different parameters using cx_Freeze?
<p>I want to create many msi installers from one tkinter script with different parameters to distribute it to different customers. I want this, because I have to change the default installation directory and GUID. I am using the latest stable version of the <code>cx_Freeze</code> package and <code>Python 3.7.9</code>. When I run the <code>setup()</code> function inside <code>setup.py</code> it creates the first installer without any problem, but at the second iteration I get an error:</p> <pre><code>running install_exe copying build\exe.win32-3.7\frozen_application_license.txt -&gt; build\bdist.win32\msi error: could not create 'build\bdist.win32\msi\frozen_application_license.txt': No such file or directory </code></pre> <p>I already tried to remove the <code>build</code> dir after every iteration or modify <code>argv</code> after every iteration, but that won't work.</p> <p>Here's a minimal example of the application to run and get an error. I just run <code>python setup.py</code> to create the installers:</p> <p><code>setup.py</code></p> <pre><code>import sys from cx_Freeze import setup, Executable sys.argv.append(&quot;bdist_msi&quot;) programs = { # name, GUID pairs &quot;hello&quot;: &quot;{6ae7456f-2761-43a2-8a23-1a3dd284e947}&quot;, &quot;world&quot;: &quot;{494d5953-651d-41c5-a6ef-9156c96987a1}&quot;, } for program, GUID in programs.items(): setup( name=f&quot;Hello-{program}&quot;, version=&quot;1.0&quot;, executables=[Executable(&quot;hello.py&quot;)], options={ &quot;bdist_msi&quot;: { &quot;initial_target_dir&quot;: f&quot;C:\{program}&quot;, &quot;upgrade_code&quot;: GUID, }, }, ) </code></pre> <p><code>hello.py</code></p> <pre><code>from tkinter import * from tkinter import ttk root = Tk() frm = ttk.Frame(root, padding=10) frm.grid() ttk.Label(frm, text=&quot;Hello World!&quot;).grid(column=0, row=0) ttk.Button(frm, text=&quot;Quit&quot;, command=root.destroy).grid(column=1, row=0) root.mainloop() </code></pre>
<python><tkinter><cx-freeze>
2023-02-26 17:02:56
1
542
Paweł Kowalski
75,573,482
2,758,414
Why doesn't conda see the latest version of the sqlalchemy package available in the conda-forge channel?
<p>I'm trying to install <code>sqlalchemy</code> with conda.</p> <pre><code>conda install -c conda-forge sqlalchemy </code></pre> <p>On the channel website I can see that the latest available version is 2.0.4.</p> <p><a href="https://anaconda.org/conda-forge/sqlalchemy" rel="nofollow noreferrer">https://anaconda.org/conda-forge/sqlalchemy</a></p> <p>However, when I execute the command, <code>sqlalchemy</code>'s latest available version is 1.4.39.</p> <pre><code>The following NEW packages will be INSTALLED: greenlet pkgs/main/win-64::greenlet-2.0.1-py310hd77b12b_0 sqlalchemy pkgs/main/win-64::sqlalchemy-1.4.39-py310h2bbff1b_0 </code></pre> <p>I'm on <code>win-64</code>. I'm using <code>powershell</code> but also tried conda's <code>cmd</code>.</p> <p>Why doesn't conda see the latest available version?</p>
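The install plan shows both packages resolving from `pkgs/main`, i.e. the defaults channel is still winning the solve, so conda never takes conda-forge's 2.0.4. Hedged fixes: pin the version on the channel for a one-off install, or give conda-forge strict priority:

```shell
# one-off: force both channel and version
conda install -c conda-forge sqlalchemy=2.0.4

# or permanently prefer conda-forge over defaults
conda config --add channels conda-forge
conda config --set channel_priority strict
```

With strict priority, packages available on conda-forge are always taken from there, which avoids this kind of silent fallback to older defaults builds.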
<python><sqlalchemy><anaconda><conda><conda-forge>
2023-02-26 16:42:29
1
2,747
LLaP
75,573,439
8,618,242
Error installing a pip package with setup.py build requirements
<p>I'm trying to install the <a href="https://pypi.org/project/aptdaemon" rel="nofollow noreferrer">aptdaemon</a> package on Ubuntu <code>20.04</code> as follows: <code>pip3 install aptdaemon</code> but I'm getting an error:</p> <p><code>error in setup.cfg: command 'build' has no such option 'i18n'</code></p> <p>I have installed both <code>python3-distutils</code> and <code>python3-distutils-extra</code>:</p> <pre><code>sudo apt install --reinstall python3-distutils sudo apt install --reinstall python3-distutils-extra </code></pre> <p>but still the error appears.</p> <p>Can you please tell me how can I get rid of this error in setup.py build requirements? thanks in advance.</p>
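The error usually means DistUtilsExtra is not visible at *build* time: aptdaemon's `setup.cfg` declares an `i18n` option that only exists when DistUtilsExtra extends the `build` command, and modern pip builds packages in an isolated environment where the apt-installed `python3-distutils-extra` cannot be imported, which is why reinstalling the apt packages doesn't help. Two hedged workarounds:

```shell
# use the system package instead of pip
# (aptdaemon manages the system APT anyway)
sudo apt install python3-aptdaemon

# or let pip see the system-wide DistUtilsExtra by disabling build isolation
pip3 install --no-build-isolation aptdaemon
```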
<python><pip><package>
2023-02-26 16:35:48
2
4,115
Bilal
75,573,426
1,203,556
Using Flutter's flutter_image_compress to upload a base64-encoded compressed file, but can't decompress it in Python
<p>I can successfully upload a file to s3 by</p> <ol> <li>using flutter_image_compress to compress and</li> <li>converting the file to base64.</li> </ol> <p>Now I can decode the file:</p> <pre><code> base64_string = open(&quot;img1b95b3ed9e494595b01610f06d9f074b.txt&quot;, &quot;r&quot;).readlines()[0] </code></pre> <p>Now after that all fails:</p> <pre><code> msg = base64.b64decode(base64_string) inflated = zlib.decompress(msg) inflated = zlib.decompress(msg) zlib.error: Error -3 while decompressing data: incorrect header check </code></pre> <p>So what is the proper method to decode in python from using the flutter flutter_image_compress package?</p> <p>Thanks</p> <p>PS this is now I compress in flutter:</p> <pre><code>Future&lt;Uint8List?&gt; testCompressFile(File file) async { var result = await FlutterImageCompress.compressWithFile(file.absolute.path, quality: 100, keepExif: true ); return result; } </code></pre>
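flutter_image_compress produces an ordinary encoded image (JPEG by default for `compressWithFile`), not a zlib/deflate stream, which is why `zlib.decompress` reports an incorrect header check. After base64-decoding there is nothing left to decompress: the bytes *are* the image file. A sketch with stand-in bytes:

```python
import base64

# stand-in for the base64 text uploaded from Flutter; a real payload
# would start with /9j/ (the base64 of the JPEG magic bytes FF D8 FF)
jpeg_bytes = b'\xff\xd8\xff\xe0' + b'...fake jpeg body...'
base64_string = base64.b64encode(jpeg_bytes).decode('ascii')

decoded = base64.b64decode(base64_string)  # this is the image itself
# with open('img.jpg', 'wb') as f:
#     f.write(decoded)
```

From there, Pillow's `Image.open(io.BytesIO(decoded))` will read it directly.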
<python><python-3.x><flutter><dart>
2023-02-26 16:33:36
1
78,780
Tampa
75,573,420
10,596,249
Elasticsearch: adding a keyword field not working
<p>error:</p> <pre><code> {&quot;timestamp&quot;: &quot;2023-02-26T16:06:23.388759Z&quot;, &quot;level&quot;: &quot;INFO&quot;, &quot;name&quot;: &quot;denali_syncer.opensearch&quot;, &quot;message&quot;: &quot;create index cd.io.schema.asset mapping, status 400, body: b'{\&quot;error\&quot;:{\&quot;root_cause\&quot;:[{\&quot;type\&quot;: \&quot;mapper_parsing_exception\&quot;,\&quot;reason\&quot;:\&quot;Failed to parse mapping [_doc]: mapper [metadata.cloudAccountId] cannot be changed from type [text] to [ObjectMapper]\&quot;}],\&quot;type\&quot;:\&quot;mapper_parsing_exception\&quot;,\&quot;reason\&quot;:\&quot;Failed to parse mapping [_doc]: mapper [metadata.cloudAccountId] cannot be changed from type [text] to [ObjectMapper]\&quot;,\ \&quot;caused_by\&quot;:{\&quot;type\&quot;:\&quot;illegal_argument_exception\&quot;,\&quot;reason\&quot;:\&quot;mapper [metadata.cloudAccountId] cannot be changed from type [text] to [ObjectMapper]\&quot;}},\&quot;status\&quot;:400}'&quot;} </code></pre> <p>Data for index mapping</p> <pre><code>{ &quot;cloudAccountId&quot; : {&quot;type&quot; : &quot;text&quot;}, &quot;cloudAccountId.keyword&quot; : {&quot;type&quot; : &quot;keyword&quot;}, } </code></pre> <p>Code:</p> <pre><code>url = f'{OPENSEARCH_URL}/{dataset}' response = requests.post(url=url, headers=HEADERS, data=fields_json) </code></pre> <p>I am using above code to create index and using keyword for sorting I am getting above error</p> <p>Please check how to handle this situation</p>
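The 400 arises because `cloudAccountId.keyword` is sent as its own top-level property, which Elasticsearch reads as an *object* field named `cloudAccountId` with a child `keyword`, clashing with the `text` mapping of the same name. The keyword variant belongs under the text field's `fields` instead (the standard multi-field pattern), roughly:

```json
{
  "mappings": {
    "properties": {
      "cloudAccountId": {
        "type": "text",
        "fields": {
          "keyword": { "type": "keyword", "ignore_above": 256 }
        }
      }
    }
  }
}
```

Note that an existing field's type cannot be changed in place; create a new index with this mapping and reindex into it, then sort on `cloudAccountId.keyword`.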
<python><elasticsearch>
2023-02-26 16:32:54
1
806
soubhagya
75,573,390
5,985,593
GStreamer Python: gi.repository.GLib.GError: gst_parse_error: no element "appsrc" on OSX
<p>I'm trying to setup a RTSP stream in Python with OpenCV. I found <a href="https://github.com/prabhakar-sivanesan/OpenCV-rtsp-server/issues?q=is%3Aissue%20is%3Aclosed" rel="nofollow noreferrer">this repository</a> which I started from as an example. I'm pretty sure the code in there is not entirely written by that person, because I found some older sources with the same code. However, I took that as a base to get familiar with everything.</p> <p>Here is the full (very slightly altered) Python script I'm trying to run:</p> <pre class="lang-py prettyprint-override"><code>import gi import cv2 as cv import argparse gi.require_versions({ # 'Gtk': '3.0', 'Gst': '1.0', 'GstRtspServer': '1.0', 'GLib': '2.0' }) from gi.repository import Gst, GstRtspServer, GLib class StreamFactory(GstRtspServer.RTSPMediaFactory): def __init__(self, **properties): super(StreamFactory, self).__init__(**properties) self.cap = cv.VideoCapture(0) # 0 is the default camera self.number_frames = 0 self.fps = opt.fps self.duration = 1 / self.fps * Gst.SECOND # duration of a frame in nanoseconds self.launch_string = 'appsrc name=source is-live=true block=true format=GST_FORMAT_TIME ' \ 'caps=video/x-raw,format=BGR,width={},height={},framerate={}/1 ' \ '! videoconvert ! video/x-raw,format=I420 ' \ '! x264enc speed-preset=ultrafast tune=zerolatency ' \ '! 
rtph264pay config-interval=1 name=pay0 pt=96' \ .format(opt.image_width, opt.image_height, self.fps) def on_need_data(self, src, length): if self.cap.isOpened(): ret, frame = self.cap.read() if ret: frame = cv.resize(frame, (640, 480)) data = frame.tostring() buf = Gst.Buffer.new_allocate(None, len(data), None) buf.fill(0, data) timestamp = self.number_frames * self.duration buf.pts = int(timestamp) buf.dts = int(timestamp) buf.offset = timestamp self.number_frames += 1 retval = src.emit('push-buffer', buf) if retval != Gst.FlowReturn.OK: print(retval) def do_create_element(self, url): return Gst.parse_launch(self.launch_string) def do_configure(self, rtsp_media): self.number_frames = 0 appsrc = rtsp_media.get_element().get_child_by_name('source') appsrc.connect('need-data', self.on_need_data) class GstServer(GstRtspServer.RTSPServer): def __init__(self, **properties): super(GstServer, self).__init__(**properties) self.factory = StreamFactory() self.factory.set_shared(True) self.get_mount_points().add_factory(opt.stream_uri, self.factory) self.attach(None) parser = argparse.ArgumentParser() parser.add_argument(&quot;--device_id&quot;, required=True, help=&quot;device id for the video device or video file location&quot;) parser.add_argument(&quot;--fps&quot;, required=True, help=&quot;fps of the camera&quot;, type = int) parser.add_argument(&quot;--image_width&quot;, required=True, help=&quot;video frame width&quot;, type = int) parser.add_argument(&quot;--image_height&quot;, required=True, help=&quot;video frame height&quot;, type = int) parser.add_argument(&quot;--port&quot;, default=8554, help=&quot;port to stream video&quot;, type = int) parser.add_argument(&quot;--stream_uri&quot;, default = &quot;/video_stream&quot;, help=&quot;rtsp video stream uri&quot;) opt = parser.parse_args() # GObject.threads_init() Gst.init(None) server = GstServer() loop = GLib.MainLoop() loop.run() </code></pre> <p>The result of running this is that it starts, but as soon as I 
connect to the server, it throws the following exception:</p> <p><a href="https://i.sstatic.net/MJqel.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/MJqel.png" alt="error output" /></a></p> <p>Currently I only have access to an M1 Mac, so I can't confirm it myself, but according to some issues I found (like <a href="https://github.com/prabhakar-sivanesan/OpenCV-rtsp-server/issues/7" rel="nofollow noreferrer">this one</a>), might be architecture related, although I'm not entirely positive about it because I also saw <a href="https://github.com/Homebrew/homebrew-core/issues/109283" rel="nofollow noreferrer">this issue</a> and it looks that it's indeed outdated and not related, because if I run <code>gst-launch-1.0 videotestsrc ! videoconvert ! autovideosink</code> it works just fine, unlike what's been said in that issue.</p> <p>I installed <code>gstreamer</code> + all dependencies via HomeBrew, but I also verified I get the same result building from scratch without HomeBrew.</p> <p>All the libraries I've installed so far with homebrew (I'll just post the complete list here, but keep in mind not all dependencies are related to this issue):</p> <p><a href="https://i.sstatic.net/bdRK6.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/bdRK6.png" alt="brew list" /></a></p> <p>And here is also my Python dependency list:</p> <pre><code>✖ pip3 freeze docutils==0.19 numpy==1.24.2 opencv-python==4.7.0.72 pycairo==1.23.0 PyGObject==3.42.2 six==1.16.0 </code></pre> <p>On python version: <code>3.9.16.final.0</code>, also tried latest <code>3.11</code> version. 
OSX version: <code>13.0.1 (22A400)</code></p> <p>I also verified that the <code>appsrc</code> plugin is available to gst using the <code>gst-inspect</code> command, and it looks like it is:</p> <pre><code>✖ gst-inspect-1.0 appsrc Factory Details: Rank none (0) Long-name AppSrc Klass Generic/Source Description Allow the application to feed buffers to a pipeline Author David Schleef &lt;ds@schleef.org&gt;, Wim Taymans &lt;wim.taymans@gmail.com&gt; Documentation https://gstreamer.freedesktop.org/documentation/app/appsrc.html Plugin Details: Name app Description Elements used to communicate with applications Filename /opt/homebrew/lib/gstreamer-1.0/libgstapp.dylib Version 1.22.0 License LGPL Source module gst-plugins-base Documentation https://gstreamer.freedesktop.org/documentation/app/ Source release date 2023-01-23 Binary package GStreamer Base Plug-ins source release Origin URL Unknown package origin GObject +----GInitiallyUnowned +----GstObject +----GstElement +----GstBaseSrc +----GstAppSrc Implemented Interfaces: GstURIHandler Pad Templates: SRC template: 'src' Availability: Always Capabilities: ANY Element has no clocking capabilities. URI handling capabilities: Element can act as source. Supported URI protocols: appsrc Pads: SRC: 'src' Pad Template: 'src' Element Properties: block : Block push-buffer when max-bytes are queued flags: readable, writable Boolean. Default: false blocksize : Size in bytes to read per buffer (-1 = default) flags: readable, writable Unsigned Integer. Range: 0 - 4294967295 Default: 4096 caps : The allowed caps for the src pad flags: readable, writable Caps (NULL) current-level-buffers: The number of currently queued buffers flags: readable Unsigned Integer64. Range: 0 - 18446744073709551615 Default: 0 current-level-bytes : The number of currently queued bytes flags: readable Unsigned Integer64. Range: 0 - 18446744073709551615 Default: 0 current-level-time : The amount of currently queued time flags: readable Unsigned Integer64. 
Range: 0 - 18446744073709551615 Default: 0 do-timestamp : Apply current stream time to buffers flags: readable, writable Boolean. Default: false duration : The duration of the data stream in nanoseconds (GST_CLOCK_TIME_NONE if unknown) flags: readable, writable Unsigned Integer64. Range: 0 - 18446744073709551615 Default: 18446744073709551615 emit-signals : Emit need-data, enough-data and seek-data signals flags: readable, writable Boolean. Default: true format : The format of the segment events and seek flags: readable, writable Enum &quot;GstFormat&quot; Default: 2, &quot;bytes&quot; (0): undefined - GST_FORMAT_UNDEFINED (1): default - GST_FORMAT_DEFAULT (2): bytes - GST_FORMAT_BYTES (3): time - GST_FORMAT_TIME (4): buffers - GST_FORMAT_BUFFERS (5): percent - GST_FORMAT_PERCENT handle-segment-change: Whether to detect and handle changed time format GstSegment in GstSample. User should set valid GstSegment in GstSample. Must set format property as &quot;time&quot; to enable this property flags: readable, writable, changeable only in NULL or READY state Boolean. Default: false is-live : Whether to act as a live source flags: readable, writable Boolean. Default: false leaky-type : Whether to drop buffers once the internal queue is full flags: readable, writable, changeable only in NULL or READY state Enum &quot;GstAppLeakyType&quot; Default: 0, &quot;none&quot; (0): none - GST_APP_LEAKY_TYPE_NONE (1): upstream - GST_APP_LEAKY_TYPE_UPSTREAM (2): downstream - GST_APP_LEAKY_TYPE_DOWNSTREAM max-buffers : The maximum number of buffers to queue internally (0 = unlimited) flags: readable, writable Unsigned Integer64. Range: 0 - 18446744073709551615 Default: 0 max-bytes : The maximum number of bytes to queue internally (0 = unlimited) flags: readable, writable Unsigned Integer64. Range: 0 - 18446744073709551615 Default: 200000 max-latency : The maximum latency (-1 = unlimited) flags: readable, writable Integer64. 
Range: -1 - 9223372036854775807 Default: -1 max-time : The maximum amount of time to queue internally (0 = unlimited) flags: readable, writable Unsigned Integer64. Range: 0 - 18446744073709551615 Default: 0 min-latency : The minimum latency (-1 = default) flags: readable, writable Integer64. Range: -1 - 9223372036854775807 Default: -1 min-percent : Emit need-data when queued bytes drops below this percent of max-bytes flags: readable, writable Unsigned Integer. Range: 0 - 100 Default: 0 name : The name of the object flags: readable, writable String. Default: &quot;appsrc0&quot; num-buffers : Number of buffers to output before sending EOS (-1 = unlimited) flags: readable, writable Integer. Range: -1 - 2147483647 Default: -1 parent : The parent of the object flags: readable, writable Object of type &quot;GstObject&quot; size : The size of the data stream in bytes (-1 if unknown) flags: readable, writable Integer64. Range: -1 - 9223372036854775807 Default: -1 stream-type : the type of the stream flags: readable, writable Enum &quot;GstAppStreamType&quot; Default: 0, &quot;stream&quot; (0): stream - GST_APP_STREAM_TYPE_STREAM (1): seekable - GST_APP_STREAM_TYPE_SEEKABLE (2): random-access - GST_APP_STREAM_TYPE_RANDOM_ACCESS typefind : Run typefind before negotiating (deprecated, non-functional) flags: readable, writable, deprecated Boolean. 
Default: false Element Signals: &quot;need-data&quot; : void user_function (GstElement* object, guint arg0, gpointer user_data); &quot;enough-data&quot; : void user_function (GstElement* object, gpointer user_data); &quot;seek-data&quot; : gboolean user_function (GstElement* object, guint64 arg0, gpointer user_data); Element Actions: &quot;push-buffer&quot; : GstFlowReturn user_function (GstElement* object, GstBuffer* arg0); &quot;push-buffer-list&quot; : GstFlowReturn user_function (GstElement* object, GstBufferList* arg0); &quot;push-sample&quot; : GstFlowReturn user_function (GstElement* object, GstSample* arg0); &quot;end-of-stream&quot; : GstFlowReturn user_function (GstElement* object); </code></pre> <p>Last thing I've tried is to run the command directly from CLI instead of calling it via Python and it looks like its running without crashing:</p> <pre><code>❯ gst-launch-1.0 appsrc name=source is-live=true block=true format=GST_FORMAT_TIME caps=video/x-raw,format=BGR,width=1920,height=1080,framerate=30/1 ! videoconvert ! video/x-raw,format=I420 ! x264enc speed-preset=ultrafast tune=zerolatency ! rtph264pay config-interval=1 name=pay0 pt=96 Setting pipeline to PAUSED ... Pipeline is live and does not need PREROLL ... Pipeline is PREROLLED ... Setting pipeline to PLAYING ... New clock: GstSystemClock Redistribute latency... ^Chandling interrupt. Interrupt: Stopping pipeline ... Execution ended after 0:00:46.577237000 Setting pipeline to NULL ... Freeing pipeline ... </code></pre> <p>Has anyone solved this issue or made more progress on this? I don't really know where else to open an issue related to this, because the original repo owner where I got the sample script from, doesn't seem to have a clue what is actually happening with the code. Opening another issue in the homebrew repo also seems wrong, because it doesn't look like it's a homebrew issue (anymore), and the current issue is closed and not accessible to comment on. 
GStreamer repo also doesn't allow for issues to be opened.</p> <p>I'm going to try to also verify that it's happening on an intel mac and/or x86_64 ubuntu systems when I get the chance, but in the meantime, all pointers are welcome :)</p>
<python><macos><homebrew><gstreamer>
2023-02-26 16:27:39
0
1,630
JC97
75,573,119
4,920,221
stop termination of pyspark stream when testing
<p>I have a PySpark stream that reads from Kafka and writes to an Apache Hudi table, for example:</p> <pre class="lang-py prettyprint-override"><code>def write_batch_to_hudi(batch_df, batch_id):
    new_df = here_im_getting_new_df(batch_df)
    write_to_hudi(new_df)


def stream(spark):
    df = read_stream_from_kafka(spark)
    df \
        .writeStream \
        .foreachBatch(write_batch_to_hudi) \
        .start() \
        .awaitTermination()
</code></pre> <p>I want to test it all end to end, so I mock the <code>read_stream_from_kafka</code> function to read from some JSON files, and <code>write_to_hudi</code> to do nothing:</p> <pre class="lang-py prettyprint-override"><code>def test_stream(mock_spark, mock_read_stream_from_kafka, mock_write_to_hudi):
    stream(mock_spark)
</code></pre> <p>But when I run <code>test_stream</code>, it never ends because of the <code>.awaitTermination()</code> call. How can I make the test terminate once all the files supplied by <code>mock_read_stream_from_kafka</code> have been processed?</p>
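One common pattern here: Spark's `StreamingQuery` (the object returned by `.start()`) has a `processAllAvailable()` method that blocks only until all currently available input has been processed — tests often call that followed by `stop()` instead of `awaitTermination()`, e.g. by having `stream()` return the started query. A sketch with a mock standing in for the real query so it runs without Spark (the helper name and control flow are assumptions, not the asker's API):

```python
from unittest import mock

def run_stream_for_test(query):
    """Drain whatever the mocked source provides, then stop the query --
    instead of blocking forever on awaitTermination()."""
    query.processAllAvailable()  # real StreamingQuery method: blocks until caught up
    query.stop()

# Demo with a stand-in for the StreamingQuery returned by .start():
query = mock.MagicMock()
run_stream_for_test(query)
print(query.processAllAvailable.called, query.stop.called)  # True True
```

In the real test, `stream()` would need a way to skip `awaitTermination()` (for example a flag, or returning the query for the test to drive).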
<python><pyspark><pytest>
2023-02-26 15:44:10
1
324
Kallie
75,573,094
498,504
Flask project always got error "We're sorry, but something went wrong"
<p>I'm a beginner in Flask and I have developed a face-detection application based on it. I had many problems when deploy project on a cPanel host and can solve that one by one. But now have another problem that always say me <strong>We're sorry, but something went wrong</strong> and I do not know what is problem because has no error in log file.</p> <p>I have a python_files directory included three files like <strong>app.py</strong> , <strong>passenger_wsgi.py</strong> and <strong>prediction_blueprint.py</strong> and following are the things that are inside.</p> <p><strong>app.py</strong> file :</p> <pre><code>import os from flask import Flask, render_template, request, redirect from werkzeug.utils import secure_filename from prediction_blueprint import prediction_blueprint # from time import sleep app = Flask(__name__, template_folder='../templates', static_folder='../static') app.config['TEMPLATES_AUTO_RELOAD'] = True app.register_blueprint(prediction_blueprint) @app.route('/') def index(): # put application's code here return render_template('./index.html') @app.route('/upload', methods=['POST']) def upload(): file = request.files['file'] fn = secure_filename(file.filename) # sleep(3) file.save(os.path.join('../uploads', fn)) return { &quot;success&quot;: True, &quot;message&quot;: fn } if __name__ == '__main__': app.run() </code></pre> <p><strong>passenger_wsgi.py</strong> file:</p> <pre><code>import imp import os import sys sys.path.insert(0, os.path.dirname(__file__)) wsgi = imp.load_source('wsgi', 'app.py') application = wsgi.app </code></pre> <p>and <strong>prediction_blueprint.py</strong></p> <pre><code>from flask import Blueprint, Flask, request from keras_vggface import VGGFace, utils from keras_vggface.utils import preprocess_input from mtcnn import MTCNN import cv2 as cv import numpy as np import json, requests prediction_blueprint = Blueprint('prediction_blueprint', __name__) vggface = VGGFace(model='vgg16') detector_obj = MTCNN() 
@prediction_blueprint.route('/predict', methods=['POST']) def index(): image = request.get_json().get('image') face = extract_face('../uploads/' + image).astype('float32') input_sample = np.expand_dims(face, axis=0) samples = preprocess_input(input_sample) pred = vggface.predict(samples) output = utils.decode_predictions(pred) founded_person = output[0][0][0].replace(&quot;b'&quot;, &quot;&quot;).replace(&quot;'&quot;, &quot;&quot;) result = { &quot;success&quot;: True, &quot;message&quot;: {} } if founded_person: celeb_name = get_best_title_wiki_page(founded_person) if celeb_name: result['message']['celeb_name'] = celeb_name celeb_images = get_celeb_images(celeb_name) if celeb_images: result['message']['celeb_images'] = celeb_images return result else: return { &quot;success&quot;: False, &quot;message&quot;: &quot;Best Title Not Found!&quot; } else: return { &quot;success&quot;: False, &quot;message&quot;: &quot;Not Found any things !&quot;, } def extract_face(address): img = cv.imread(address) rgb_img = cv.cvtColor(img, cv.COLOR_BGR2RGB) face = detector_obj.detect_faces(rgb_img)[0] x, y, w, h = face['box'] actual_face = img[y:y + h, x:x + w] # This crop only section of image that contain person Face actual_face = cv.resize(actual_face, (224, 224)) return np.asarray(actual_face) def get_best_title_wiki_page(celeb_name): endpoint = &quot;https://en.wikipedia.org/w/api.php&quot; parameters = { &quot;action&quot;: &quot;query&quot;, &quot;list&quot;: &quot;search&quot;, &quot;srsearch&quot;: celeb_name, &quot;format&quot;: &quot;json&quot;, &quot;origin&quot;: &quot;*&quot; } best_title = None try: response = requests.get(endpoint, params=parameters) data = json.loads(response.text) best_title = data[&quot;query&quot;][&quot;search&quot;][0][&quot;title&quot;] except Exception as e: print(e) return best_title def get_celeb_images(celeb_name): endpoint = &quot;https://en.wikipedia.org/w/api.php&quot; parameters = { &quot;action&quot;: &quot;query&quot;, 
&quot;titles&quot;: celeb_name, &quot;prop&quot;: &quot;images&quot;, &quot;iiprop&quot;: &quot;url&quot;, &quot;format&quot;: &quot;json&quot;, &quot;origin&quot;: &quot;*&quot; } try: response = requests.get(endpoint, params=parameters) data = json.loads(response.text) page_id = list(data[&quot;query&quot;][&quot;pages&quot;].keys())[0] images = data[&quot;query&quot;][&quot;pages&quot;][page_id][&quot;images&quot;] filtered_images = [image for image in images if &quot;file:&quot; + celeb_name.lower() in image[ &quot;title&quot;].lower() or &quot;file:&quot; + celeb_name.lower().replace(&quot; &quot;, &quot;&quot;) in image[ &quot;title&quot;].lower()] image_urls = [f&quot;https://en.wikipedia.org/wiki/Special:Redirect/file/{image['title'].replace('File:', '')}&quot; for image in filtered_images] cel_images = image_urls[0:4] return cel_images except Exception as e: print(&quot;Getting Images API request Failed&quot;, e) </code></pre> <p>these are all things that I have wrote , all packages are installed properly But I so not know which problem caused <strong>We're sorry, but something went wrong</strong> message?</p> <p>Of course, I have a warning like this in log.log file while I think it cannot cause the problem :</p> <pre><code>App 631448 output: /opt/passenger-5.3.7-13.el7.cloudlinux/src/helper-scripts/wsgi-loader.py:26: DeprecationWarning: the imp module is deprecated in favour of importlib; see the module's documentation for alternative uses App 631448 output: import sys, os, io, re, imp, threading, signal, traceback, socket, select, struct, logging, errno </code></pre>
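The `DeprecationWarning` in the log comes from the `imp` module used in <code>passenger_wsgi.py</code>; it is probably not the cause of the 500 error, but it is easy to silence by switching to `importlib`. A sketch of the equivalent loader (the temporary-file demo below only exists to keep the snippet self-contained — in the real file the path would be the adjacent <code>app.py</code>):

```python
import importlib.util
import os
import sys
import tempfile

def load_wsgi_module(path, name="wsgi"):
    """importlib-based replacement for the deprecated imp.load_source."""
    spec = importlib.util.spec_from_file_location(name, path)
    module = importlib.util.module_from_spec(spec)
    sys.modules[name] = module
    spec.loader.exec_module(module)
    return module

# Self-contained demo: a temporary file stands in for the real app.py.
with tempfile.TemporaryDirectory() as d:
    app_path = os.path.join(d, "app.py")
    with open(app_path, "w") as f:
        f.write("app = 'flask-app-placeholder'\n")
    application = load_wsgi_module(app_path).app

print(application)  # flask-app-placeholder
```

For the actual 500 page, Passenger usually logs the real traceback elsewhere; checking the Passenger/Apache error log (not just the app's `log.log`) tends to reveal the underlying exception.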
<python><flask>
2023-02-26 15:41:02
1
6,614
Ahmad Badpey
75,572,878
962,891
Shading regions inside an mplfinance chart
<p>I am using matplotlib v 3.7.0, mplfinance version '0.12.9b7', and Python 3.10.</p> <p>I am trying to shade regions of a plot, and although my logic seems correct, the shaded areas are not being displayed on the plot.</p> <p>This is my code:</p> <pre><code>import yfinance as yf
import mplfinance as mpf
import pandas as pd

# Download the stock data
df = yf.download('TSLA', start='2022-01-01', end='2022-03-31')

# Define the date ranges for shading
red_range = ['2022-01-15', '2022-02-15']
blue_range = ['2022-03-01', '2022-03-15']

# Create a function to shade the chart regions
def shade_region(ax, region_dates, color):
    region_dates.sort()
    start_date = region_dates[0]
    end_date = region_dates[1]

    # plot vertical lines
    ax.axvline(pd.to_datetime(start_date), color=color, linestyle='--')
    ax.axvline(pd.to_datetime(end_date), color=color, linestyle='--')

    # create fill
    xmin, xmax = ax.get_xlim()
    ymin, ymax = ax.get_ylim()
    ax.fill_between(pd.date_range(start=start_date, end=end_date),
                    ymin, ymax, alpha=0.2, color=color)
    ax.set_xlim(xmin, xmax)
    ax.set_ylim(ymin, ymax)

# Plot the candlestick chart with volume
fig, axlist = mpf.plot(df, type='candle', volume=True, style='charles',
                       title='TSLA Stock Price', ylabel='Price ($)',
                       ylabel_lower='Shares\nTraded', figratio=(2,1),
                       figsize=(10,5), tight_layout=True, returnfig=True)

# Get the current axis object
ax = axlist[0]

# Shade the regions on the chart
shade_region(ax, red_range, 'red')
shade_region(ax, blue_range, 'blue')

# Show the plot
mpf.show()
</code></pre> <p>Why are the selected regions not being shaded, and how do I fix this?</p>
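A likely explanation: unless `show_nontrading=True` is passed, mplfinance plots candles against integer row positions, not datetimes — so shapes drawn at datetime x-coordinates land far outside the visible x-range. One approach (a sketch, with a synthetic business-day index standing in for the downloaded OHLC frame; the `axvspan` call is shown as a comment because it needs a live Axes) is to convert each date to its row position first:

```python
import pandas as pd

# Synthetic business-day index standing in for df.index from yfinance
idx = pd.date_range("2022-01-03", "2022-03-31", freq="B")

def to_position(index, date):
    """Map a date to the integer row position mplfinance uses on the x-axis."""
    return index.searchsorted(pd.Timestamp(date))

start = to_position(idx, "2022-01-15")
end = to_position(idx, "2022-02-15")

# On the mplfinance Axes, shading would then use integer coordinates, e.g.:
# ax.axvspan(start, end, alpha=0.2, color="red")
print(start, end)
```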
<python><matplotlib><mplfinance>
2023-02-26 15:07:09
1
68,926
Homunculus Reticulli
75,572,698
4,879,688
How to load and inspect a gmsh mesh (*.msh file)
<p>I want to programmatically inspect <em>*.msh</em> files (get a number of nodes and elements, possibly also edges and faces). How can I do this with either <code>gmsh</code> or <code>pygmsh</code> module? All tutorials I have found so far focus mostly on mesh generation. I can not find any function to read the mesh, not to mention its inspection. Do I have to resort to <code>meshio</code>?</p>
<python><mesh><gmsh><meshio>
2023-02-26 14:36:51
2
2,742
abukaj
75,572,543
8,536,211
What to look out for when passing a generator into model.fit in tensorflow?
<p>I want to replace the x and y training data parameters in tf.keras.Model.fit with a generator. However, some subtlety seems to escape me, as the model accuracy doesn't improve with the generator when training.</p> <p>As far as I understand the documentation, the generator is supposed to yield tuples <code>(x_vals,y_vals)</code>, such that <code>x_vals</code> is a concatenation of <code>batch_size</code>-many training samples along a new 0th dimension, and 'v_vals' is the concatenation of their corresponding labels.</p> <p>As long as the generator fulfills this, as I understand it, we can just replace the x parameter in tf.keras.Model.fit with the generator and omit the y parameter, though to define an epoch, we also need to specify 'steps_per_epoch' in fit.</p> <p>There however seems to be something here I misunderstood or forgot, because starting with a model and input data that trains (i.e. its accuracy improves) and replacing the training data array with a generator as discussed, results in a model that doesn't train (i.e. 
its accuracy instead goes up a little, then however goes back down till its equal to chance).</p> <p>The corresponding code:</p> <pre><code>import numpy as np import tensorflow as tf BATCH_SIZE = 32 #Loading training data: def load_cifar(): (x_train, y_train), (x_test, y_test) = tf.keras.datasets.cifar10.load_data() assert x_train.shape == (50000, 32, 32, 3) assert x_test.shape == (10000, 32, 32, 3) assert y_train.shape == (50000, 1) assert y_test.shape == (10000, 1) #Normalize the data &amp; cast to fp32: x_train = np.true_divide(x_train,255,dtype=np.single) x_test = np.true_divide(x_test,255,dtype=np.single) y_train = y_train.astype(np.single) y_test = y_test.astype(np.single) return (x_train,y_train), (x_test,y_test) (train_x, train_y) , (validation_x, validation_y) = load_cifar() # Defining the generator: def data_generator_dummy(input_data_x:np.ndarray, input_data_y:np.ndarray, batch_size=BATCH_SIZE, ): &quot;&quot;&quot; Given the input_data's, generate infinitely by: 1. Drawing batch_size-many vectors from input_data_x and input_data_y 2. Turn the drawn vectors into a mini-batch (with shape [None]+input_data.shape) :param batch_size: :param input_data_x, input_data_y: The data on which noise shall be added :return: A generator for the input data. 
&quot;&quot;&quot; index =0 while True: # We start with a zero-vector of expected size and fill the drawn samples into it: samples_x = np.zeros( [batch_size] + list(input_data_x.shape[1:]),dtype=np.single) samples_y = np.zeros( [batch_size] + list(input_data_y.shape[1:]),dtype=np.single) for i in range(batch_size): samples_x[i] = input_data_x[index%50_000] samples_y[i] = input_data_y[index%50_000] index +=1 yield samples_x,samples_y # Basically a linear classifier: def make_model(): model = tf.keras.models.Sequential() model.add(tf.keras.layers.Flatten()) model.add(tf.keras.layers.Dense(10,tf.nn.softmax)) model.build([None] +list(train_x[0,:,:,:].shape)) return model #Training: generator = data_generator_dummy(train_x,train_y,batch_size=BATCH_SIZE) model = make_model() model.summary() optimizer_adam=tf.keras.optimizers.Adam(learning_rate=0.0005/32,beta_1=0.9,beta_2=0.999,epsilon=1e-07) model.compile(optimizer_adam,loss=&quot;sparse_categorical_crossentropy&quot;,metrics=&quot;accuracy&quot;) model.fit(generator, validation_data=(validation_x,validation_y),epochs=10, steps_per_epoch=train_x.shape[0]//BATCH_SIZE, ) # This one however works: # model.fit(train_x, train_y, validation_data=(validation_x,validation_y),epochs=30, # steps_per_epoch=train_x.shape[0]//BATCH_SIZE, # shuffle=True # ) </code></pre> <hr /> <p>The model also trains if one first let's the generator generate a long list of samples and then passes those into <code>fit</code> as <code>x</code>and <code>y</code>:</p> <pre><code>#Training: generator = data_generator_dummy(train_x,train_y,batch_size=50000) model = make_model() model.summary() optimizer_adam=tf.keras.optimizers.Adam(learning_rate=0.0005/32,beta_1=0.9,beta_2=0.999,epsilon=1e-07) model.compile(optimizer_adam,loss=&quot;sparse_categorical_crossentropy&quot;,metrics=&quot;accuracy&quot;) while True: samples_x,samples_y = next(generator) model.fit(samples_x,samples_y, validation_data=(validation_x,validation_y),epochs=10,batch_size=BATCH_SIZE 
) </code></pre>
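One difference between the two `fit` calls above is `shuffle=True`: Keras ignores the `shuffle` argument when `x` is a generator, and the dummy generator always yields the samples in file order. A plausible fix (a sketch, not a confirmed diagnosis) is to have the generator reshuffle the sample order on every pass:

```python
import numpy as np

def shuffling_generator(x, y, batch_size):
    """Yield (x_batch, y_batch) forever, reshuffling the sample order on each
    pass -- Keras ignores shuffle=True when fit() is given a generator."""
    n = len(x)
    while True:
        order = np.random.permutation(n)
        for start in range(0, n - batch_size + 1, batch_size):
            idx = order[start:start + batch_size]
            yield x[idx], y[idx]

# Shape check with toy data standing in for CIFAR-10:
x = np.zeros((100, 32, 32, 3), dtype=np.float32)
y = np.zeros((100, 1), dtype=np.float32)
gen = shuffling_generator(x, y, batch_size=32)
xb, yb = next(gen)
print(xb.shape, yb.shape)  # (32, 32, 32, 3) (32, 1)
```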
<python><tensorflow><keras>
2023-02-26 14:08:48
0
360
Sudix
75,572,272
386,861
How to turn a string column of quarterly data in pandas into something I can plot
<p>I have a dataframe that looks like this:</p> <pre><code>    Quarter       London    UK        NaN
6   Mar-May 1992  0.12305   0.098332  NaN
7   Apr-Jun 1992  0.123895  0.097854  NaN
8   May-Jul 1992  0.124076  0.098878  NaN
9   Jun-Aug 1992  0.127796  0.099365  NaN
10  Jul-Sep 1992  0.126064  0.099371  NaN
</code></pre> <p>I've tried to use a PeriodIndex on the Quarter column so I can plot the data, but it just keeps giving me errors. What can I try next?</p> <p>The code that I was trying to use:</p> <pre><code>quarter_column = df.Quarter

# create PeriodIndex
periods = pd.PeriodIndex(quarter_column, freq='Q-Mar')
pd.DataFrame(df, index=periods)
</code></pre> <p>The error was:</p> <pre><code>DateParseError: Unknown datetime string format, unable to parse: MAR-MAY 1992
</code></pre>
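The `DateParseError` arises because `Mar-May 1992` is a rolling three-month label, not a standard quarter string that `PeriodIndex` can parse. One option (a sketch; the sample data below is retyped from the question) is to anchor each label on its first month and build a monthly `PeriodIndex` from that:

```python
import pandas as pd

df = pd.DataFrame({
    "Quarter": ["Mar-May 1992", "Apr-Jun 1992", "May-Jul 1992"],
    "London": [0.12305, 0.123895, 0.124076],
})

# "Mar-May 1992" -> "Mar 1992": keep the first month and the year
anchored = df["Quarter"].str.replace(r"-\w+ ", " ", regex=True)
df.index = pd.to_datetime(anchored, format="%b %Y").dt.to_period("M")
print(df.index)
# df["London"].plot() would now plot against the period index
```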
<python><pandas>
2023-02-26 13:21:41
1
7,882
elksie5000
75,572,209
1,432,980
create a new dataframe column using the data from another column on condition
<p>I have a data frame that looks like this:</p> <pre><code>  Name      Default Expression  Override Expression
0 AACT_NAM  pystr               pyint
1 ACCT_CCY  pystr
2 ACCT_TYP  pystr
</code></pre> <p>I want to create a column, <code>_faker_method_</code>, that would contain specially transformed data, by checking whether <code>Override Expression</code> has a value and using it if so, or using the <code>Default Expression</code> column if there is no value.</p> <p>I tried to do it like this:</p> <pre><code>df['_faker_invocation_'] = df['Override Expression'].apply(
    lambda x: render_faker_expresison(df['Name'], x) if x else df['Default Expression'])
</code></pre> <p>But the logs show me that the function <code>render_faker_expression</code> receives the whole column with its index (and thus it fails in my app):</p> <pre><code>0    ACCT_NAM
1    ACCT_CCY
2    ACCT_TYP
</code></pre> <p>How can I perform the action I need?</p>
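The reason the function receives the whole column is that `df['Override Expression'].apply(...)` passes one scalar `x` at a time, but the lambda then closes over the entire `df['Name']` Series. Row-wise logic belongs in `df.apply(..., axis=1)`, where each `row` gives access to all columns of that record. A sketch (the real faker-rendering function is replaced by a trivial stand-in):

```python
import pandas as pd

df = pd.DataFrame({
    "Name": ["ACCT_NAM", "ACCT_CCY", "ACCT_TYP"],
    "Default Expression": ["pystr", "pystr", "pystr"],
    "Override Expression": ["pyint", "", ""],
})

def render_faker_expression(name, expr):
    # stand-in for the real rendering logic
    return f"{name}:{expr}"

df["_faker_invocation_"] = df.apply(
    lambda row: render_faker_expression(row["Name"], row["Override Expression"])
    if row["Override Expression"]
    else row["Default Expression"],
    axis=1,
)
print(df["_faker_invocation_"].tolist())  # ['ACCT_NAM:pyint', 'pystr', 'pystr']
```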
<python><pandas><dataframe>
2023-02-26 13:10:57
1
13,485
lapots
75,572,028
17,718,870
Unexpected timing result after micro-optimization
<p>I've been doing some experiments with micro-optimizations and got an unexpected timing result, which I couldn't wrap my head around. I would be very thankful for your suggestions.</p> <p>Following is the code:</p> <pre><code>def findSmallest(arr):
    smallest = arr[0]
    smallest_indx = 0
    for i in range(1, len(arr)):
        if arr[i] &lt; smallest:
            smallest = arr[i]
            smallest_indx = i
    return smallest_indx

def selectionSort1(arr):
    newArr = []
    for i in range(len(arr)):
        smallest = findSmallest(arr)
        newArr.append(arr.pop(smallest))
    return newArr

def selectionSort2(arr):
    newArr = []
    na = newArr.append
    for i in range(len(arr)):
        smallest = findSmallest(arr)
        na(arr.pop(smallest))
    return newArr

def selectionSort3(arr):
    ap = arr.pop
    newArr = []
    na = newArr.append
    for i in range(len(arr)):
        smallest = findSmallest(arr)
        na(ap(smallest))
    return newArr

import random as r
test = r.sample(range(0,1000000),10000)
test1 = test[:]
test2 = test[:]
test3 = test[:]

if __name__ == '__main__':
    import timeit
    print(timeit.timeit(&quot;selectionSort1(test1)&quot;, setup=&quot;from __main__ import test1, selectionSort1&quot;))
    print(timeit.timeit(&quot;selectionSort2(test2)&quot;, setup=&quot;from __main__ import test2, selectionSort2&quot;))
    print(timeit.timeit(&quot;selectionSort3(test3)&quot;, setup=&quot;from __main__ import test3, selectionSort3&quot;))
</code></pre> <p>On my computer:</p> <pre><code>3.8686506970000005  # selectionSort1
3.961112386         # selectionSort2
4.0788594190000005  # selectionSort3
</code></pre> <p>The point is that I expected that hoisting the attribute lookups (<strong>newArr.append</strong> and <strong>arr.pop</strong>) for both lists out of the loop scope would give the best result. As the results show, this isn't the case, and I would be very happy with any help. Thank you in advance :)</p> <p><em>Note: this type of optimization would mainly be relevant for very big lists.</em></p>
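One thing worth ruling out before comparing the variants: each `selectionSort*` call empties its input list via `arr.pop`, and `timeit.timeit` runs the statement 1,000,000 times on the *same* list — so every run after the first sorts an empty list, and the totals mostly compare fixed per-call overhead rather than the attribute-lookup savings, which only pay off inside long loops. A minimal demonstration of the consumption:

```python
def find_smallest(arr):
    # index of the smallest element (simplified version of findSmallest)
    smallest_idx = 0
    for i in range(1, len(arr)):
        if arr[i] < arr[smallest_idx]:
            smallest_idx = i
    return smallest_idx

def selection_sort(arr):
    out = []
    for _ in range(len(arr)):
        out.append(arr.pop(find_smallest(arr)))  # destructive: drains arr
    return out

data = [3, 1, 2]
print(selection_sort(data))  # [1, 2, 3]
print(data)                  # [] -- the input list was consumed
print(selection_sort(data))  # [] -- what later timeit iterations measure
```

Passing a fresh copy per call (e.g. `timeit.timeit("selectionSort1(test1[:])", ...)` with a smaller `number=`) would make the comparison meaningful.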
<python><optimization>
2023-02-26 12:39:31
1
869
baskettaz
75,571,961
5,195,209
Uploading multiple files to release on Github using Python
<p>I am trying to create a release and upload all files with a specific ending as an asset. My code is as follows:</p> <pre class="lang-py prettyprint-override"><code># This uploads all apkg files in the current directory to a release on GitHub. import github import dotenv import os import glob dotenv.load_dotenv() g = github.Github(os.getenv(&quot;GITHUB_TOKEN&quot;)) repo = g.get_repo(&quot;Vuizur/tatoeba-to-anki&quot;) release = None for r in repo.get_releases(): if r.title == &quot;latest&quot;: print(&quot;Found a release with the name 'latest'&quot;) release = r break if release is None: print(&quot;Creating a new release&quot;) release = repo.create_git_release( &quot;latest&quot;, &quot;latest&quot;, &quot;latest&quot;, draft=False, prerelease=False, ) # Now delete all files in the release for asset in release.get_assets(): asset.delete_asset() print(&quot;Deleted all files in the release&quot;) print(f&quot;Uploading {len(glob.glob('*.apkg'))} files to the release...&quot;) # Now upload all files in the current directory for file in glob.glob(&quot;*.apkg&quot;): print(f&quot;Uploading {file}&quot;) release.upload_asset(file) print(f&quot;Finished uploading {file}&quot;) </code></pre> <p>Unfortunately, this doesn't quite work. When I run the code, it prints:</p> <pre><code>Found a release with the name 'latest' Deleted all files in the release Uploading 37 files to the release... Uploading ara_eng.apkg </code></pre> <p>It successfully uploads the first asset, which is perfectly visible on Github. However, after that nothing happens, and no network traffic is visible on the task manager. 
It is as if the <code>upload_asset</code> function never returns, despite having finished.</p> <p>I then thought that it might be PyGithub's fault, so I replaced the <code>upload_asset</code> call with:</p> <pre class="lang-py prettyprint-override"><code>for file in glob.glob(&quot;*.apkg&quot;):
    print(f&quot;Uploading {file}&quot;)
    release_id = release.id
    url = f&quot;https://uploads.github.com/repos/Vuizur/tatoeba-to-anki/releases/{release_id}/assets?name={file}&quot;
    headers = {
        &quot;Accept&quot;: &quot;application/vnd.github.v3+json&quot;,
        # apkg files are zip files
        &quot;Content-Type&quot;: &quot;application/zip&quot;,
        # the authorization token
        &quot;Authorization&quot;: f&quot;token {os.getenv('GITHUB_TOKEN')}&quot;,
    }
    with open(file, &quot;rb&quot;) as f:
        response = requests.post(url, headers=headers, data=f)
    print(response)
</code></pre> <p>But this shows exactly the same behaviour as when using PyGithub.</p> <p>I also read a bit about shenanigans with IPv4/IPv6, so I tried disabling IPv6 using <code>requests.packages.urllib3.util.connection.HAS_IPV6 = False</code> at the beginning of my script, but this did not work either.</p> <p>Thanks for the help!</p> <p>System: Windows 11. Python: 3.10.9</p>
<python><github><post><pygithub>
2023-02-26 12:25:59
0
587
Pux
75,571,754
16,444,630
Numpy giving wrong Eigenvectors
<pre><code>from numpy.linalg import eig
import numpy as np

A = np.array([[1,2],[3,4]])
eval, evec = eig(A)
print(&quot;Eigenvectors:&quot;, evec)
</code></pre> <p>gives:</p> <pre><code>Eigenvectors: [[-0.82456484 -0.41597356]
 [ 0.56576746 -0.90937671]]
</code></pre> <p>while in Mathematica:</p> <pre><code>A = {{1, 2}, {3, 4}};
{eval, evec} = Eigensystem[A];
N[evec]
</code></pre> <p>gives:</p> <pre><code>{{0.457427, 1.}, {-1.45743, 1.}}
</code></pre> <p>And I know the Mathematica one is correct, because it gives the correct graph of the wavefunction calculated using the eigenvectors.</p> <p>And this is not just a matter of normalization, because the numpy result cannot be scaled by a factor to get the Mathematica result.</p> <p>There is a workaround using sympy, but what is the problem, and is there a way to get the correct eigenvectors in numpy itself?</p> <p><strong>Update:</strong> Both sets of eigenvectors were correct. The graph of the wavefunction was incorrect only because of another factor (I didn't transpose the matrix).</p>
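Consistent with the update, both results are correct: `numpy.linalg.eig` returns unit-norm eigenvectors as the *columns* of `evec`, while Mathematica's `Eigensystem` returns unnormalized eigenvectors as *rows*, so each NumPy column is a scalar multiple of a Mathematica row. A quick check:

```python
import numpy as np

A = np.array([[1, 2], [3, 4]])
w, v = np.linalg.eig(A)

# Columns of v are the (unit-norm) eigenvectors: A @ v[:, i] == w[i] * v[:, i]
for i in range(2):
    assert np.allclose(A @ v[:, i], w[i] * v[:, i])

# Rescaling each column so its last entry is 1 reproduces Mathematica's rows
rescaled = sorted(tuple(float(x) for x in v[:, i] / v[-1, i]) for i in range(2))
print(rescaled)  # approximately [(-1.457427, 1.0), (0.457427, 1.0)]
```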
<python><numpy>
2023-02-26 11:46:14
1
317
Sophile
75,571,703
17,696,880
How to find nested patterns within a string, and merge them into one using a regex reordering of the string?
<pre class="lang-py prettyprint-override"><code>import re #example input string: input_text = &quot;here ((PERS)the ((PERS)Andys) ) ((PERS)assása asas) ((VERB)asas (asas)) ((PERS)saasas ((PERS)Asasas ((PERS)bbbg gg)))&quot; def remove_nested_pers(match): # match is a re.Match object representing the nested pattern, and I want to remove it nested_text = match.group(1) # recursively remove nested patterns nested_text = re.sub(r&quot;\(\(PERS\)(.*?)\)&quot;, lambda m: m.group(1), nested_text) #nested_text = re.sub(r&quot;\(\(PERS\)((?:\w\s*)+)\)&quot;, lambda m: m.group(1), nested_text) # replace nested pattern with cleaned text return nested_text # recursively remove nested PERS patterns input_text = re.sub(r&quot;\(\(PERS\)(.*?)\)&quot;, remove_nested_pers, input_text) print(input_text) # --&gt; output </code></pre> <p>I need to remove the <code>((PERS) something_1)</code> that are inside another <code>((PERS) something_2)</code> , for example <code>((PERS)something_1 ((PERS)something_2))</code> should become <code>((PERS)something_1 something_2)</code></p> <p>Or for example, <code>((PERS)something_1 ((PERS)something_2 ((PERS)something_3)) ((PERS)something_4))</code>should become <code>((PERS)something_1 something_2 something_3 something_4)</code></p> <p>In this way, encapsulations within other encapsulations would be avoided.</p> <p>I've used the <code>(.*?)</code> capturing group which looks for anything (including new line characters) between the previous pattern and the next one. Although perhaps a pattern like <code>((?:\w\s*)+)</code> is better to avoid capturing elements of the sequence <code>((PERS) )</code>. 
Regardless of this, the code fails to correctly join the content of the nested patterns and eliminates necessary parts.</p> <p>This is the output the script should produce:</p> <pre><code>&quot;here ((PERS)the Andys ) ((PERS)assása asas) ((VERB)asas (asas)) ((PERS)saasas Asasas bbbg gg)&quot; </code></pre> <p>So the nested patterns <code>((PERS) )</code> should have been removed from the input text, and the remaining patterns are not modified.</p>
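Regular expressions alone struggle with arbitrary nesting, so one hedged alternative (a sketch, not necessarily the only approach) is a small stack-based scan that keeps only the outermost `((PERS)` marker and drops the markers, and their matching closing parentheses, of any `((PERS)` groups nested inside it:

```python
def flatten_pers(text: str) -> str:
    """Unwrap ((PERS) ...) groups that are nested inside another ((PERS) ...)."""
    out = []
    stack = []       # (kind, keep_closing_paren) for each open group
    pers_depth = 0   # how many ((PERS) ...) groups we are currently inside
    i = 0
    while i < len(text):
        if text.startswith("((PERS)", i):
            keep = pers_depth == 0        # only the outermost marker survives
            if keep:
                out.append("((PERS)")
            stack.append(("PERS", keep))
            pers_depth += 1
            i += len("((PERS)")
        elif text[i] == "(":
            out.append("(")               # other groups, e.g. ((VERB) ...) or (asas)
            stack.append(("OTHER", True))
            i += 1
        elif text[i] == ")":
            if stack:
                kind, keep = stack.pop()
                if kind == "PERS":
                    pers_depth -= 1
                if keep:
                    out.append(")")
            else:
                out.append(")")           # unbalanced close: keep verbatim
            i += 1
        else:
            out.append(text[i])
            i += 1
    return "".join(out)

input_text = ("here ((PERS)the ((PERS)Andys) ) ((PERS)assása asas) "
              "((VERB)asas (asas)) ((PERS)saasas ((PERS)Asasas ((PERS)bbbg gg)))")
print(flatten_pers(input_text))
```

Each `((PERS)` token opens exactly one net parenthesis, so pushing one stack entry per marker keeps the bookkeeping aligned with the plain `(`/`)` pairs around it.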
<python><regex>
2023-02-26 11:36:55
1
875
Matt095
75,571,432
1,354,439
Python Celery: 2-way communication between worker and task requester
<p>I am trying to achieve 2 way communication between the worker and the task requester, once the task has been scheduled. So far I was only able to achieve one-way communication (worker -&gt; requester):</p> <pre class="lang-py prettyprint-override"><code>@app.task(bind=True) def long_task(self: celery.Task): time.sleep(1) self.update_state(state=&quot;PROGRESS&quot;, meta={&quot;progress&quot;: 50}) time.sleep(1) self.update_state(state=&quot;PROGRESS&quot;, meta={&quot;progress&quot;: 90}) time.sleep(1) self.update_state(state=&quot;DONE&quot;, meta={&quot;progress&quot;: 100}) return &quot;done&quot; result: celery.result.AsyncResult = long_task.apply_async() result.get(on_message=on_message) # on_message callback will receive progress updates. </code></pre> <p>Is it possible to also send the additional information to the worker <strong>after</strong> <code>long_task.apply_async()</code> call?</p> <p>Unfortunately, some information needed for task completion is only available slightly later. I could create a separate connection by sending the IP of the &quot;task requester&quot;, but this is a bit painful. If there exists some 2 way connection already I would prefer to reuse it.</p>
<python><celery>
2023-02-26 10:44:58
0
5,979
Piotr Dabkowski
75,571,298
19,989,634
Like button not recording the data
<p>I have implemented a like button on my view_post page, but the likes aren't being registered. When the button is clicked, the page is redirected correctly but no likes are added.</p> <p><strong>views</strong></p> <pre><code>def get_post(request, slug): try: post = BlogPost.objects.get(slug=slug) except BlogPost.DoesNotExist: messages.error(request, 'This post does not exist.') post = None comment_form = CommentForm() return render(request, 'mhpapp/view-post.html', {'post': post, 'comment_form': comment_form,}) def like_post(request, slug): template_name = 'view-post.html' post = get_object_or_404(BlogPost, slug=slug) liked = False if post.likes.filter(id=request.user.id).exists(): post.likes.remove(request.user) liked = False else: post.likes.add(request.user) messages.success(request, (&quot;Thanks for the like...:-)&quot;)) liked = True return redirect('get_post', {'slug': slug,}) </code></pre> <p><strong>urls</strong></p> <pre><code>path('&lt;slug:slug&gt;/', views.get_post, name='viewpost'), path('&lt;slug:slug&gt;/',views.like_post, name='likepost'), </code></pre> <p><strong>html</strong></p> <pre><code> &lt;strong&gt;{{ post.total_likes }} Likes&lt;/strong&gt; {% if user.is_authenticated %} &lt;form action=&quot;{% url 'likepost' post.slug %}&quot; method=&quot;POST&quot;&gt; {% csrf_token %} {% if request.user in post.likes.all %} &lt;button class=&quot;btn btn-outline-secondary rounded-0 custom-button&quot; id=&quot;like&quot; type=&quot;sumbit&quot; name=&quot;post-id&quot; value=&quot;{{ post.slug }}&quot;&gt;&lt;i class=&quot;fa-solid fa-heart-crack&quot;&gt;&lt;/i&gt;&lt;/button&gt; {% else %} &lt;button class=&quot;btn btn-outline-secondary rounded-0 custom-button&quot; id=&quot;like&quot; type=&quot;sumbit&quot; name=&quot;post-id&quot; value=&quot;{{ post.slug }}&quot;&gt;&lt;i class=&quot;fa-solid fa-heart&quot;&gt;&lt;/i&gt;&lt;/button&gt; {% endif %} &lt;/form&gt; {% else %} {% endif %} </code></pre>
<python><django><django-views><django-forms><django-urls>
2023-02-26 10:16:45
1
407
David Henson
75,571,167
9,475,509
Download model 'en' for spacy produces "TypeError: can only concatenate list (not 'tuple') to list"
<p>While installing <code>spacy-2.3.6</code> after <code>chatterbot-1.0.8</code> in <code>virtualenv-20.19.0</code> with <code>python-3.7.0</code>, I receive the error message</p> <pre><code>OSError: [E050] Can't find model 'en'. It doesn't seem to be a shortcut link, a Python package or a valid path to a data directory </code></pre> <p>Following the suggestion from <a href="https://stackoverflow.com/a/64328808/9475509">here</a> as</p> <pre><code>spacy.cli.download(&quot;en&quot;) nlp = spacy.load('en_core_web_sm') </code></pre> <p>and from <a href="https://stackoverflow.com/a/61546898/9475509">here</a> as</p> <pre><code>python -m spacy download en_core_web_md python -m spacy link en_core_web_md en </code></pre> <p>do not solve the problem, but produce another error message</p> <pre><code>[..] File &quot;C:\venvs\chatterbot\lib\site-packages\spacy\cli\download.py&quot;, line 132, in download_model cmd = [sys.executable, &quot;-m&quot;, &quot;pip&quot;, &quot;install&quot;] + pip_args + [download_url] TypeError: can only concatenate list (not &quot;tuple&quot;) to list </code></pre> <p>which normally occurs in user code, as in <a href="https://stackoverflow.com/q/10506904/9475509">here</a>, <a href="https://stackoverflow.com/q/29336773/9475509">here</a>, and <a href="https://stackoverflow.com/q/35051508/9475509">here</a>, but in my case occurs in the <code>download.py</code> file.</p> <p>I modified the file as follows</p> <pre><code>def download_model(filename, user_pip_args=None): download_url = about.__download_url__ + &quot;/&quot; + filename pip_args = user_pip_args if user_pip_args is not None else [] #cmd = [sys.executable, &quot;-m&quot;, &quot;pip&quot;, &quot;install&quot;] + pip_args + [download_url] cmd = [sys.executable, &quot;-m&quot;, &quot;pip&quot;, &quot;install&quot;] + list(pip_args) + [download_url] return subprocess.call(cmd, env=os.environ.copy()) </code></pre> <p>and it works.</p> <pre><code>[..] Collecting en_core_web_sm==2.3.1 [..]
✔ Download and installation successful You can now load the model via spacy.load('en_core_web_sm') symbolic link created for C:\venvs\chatterbot\lib\site-packages\spacy\data\en &lt;&lt;===&gt;&gt; C:\venvs\chatterbot\lib\site-packages\en_core_web_sm ✔ Linking successful C:\venvs\chatterbot\lib\site-packages\en_core_web_sm --&gt; C:\venvs\chatterbot\lib\site-packages\spacy\data\en You can now load the model via spacy.load('en') </code></pre> <p>The questions are</p> <ul> <li>Is it legitimate to modify a package file instead of searching for the right version of the package?</li> <li>Is there a better way?</li> <li>How can I ensure that the modification will not be overwritten when updating the package?</li> </ul>
<python><pip><concatenation><spacy><chatterbot>
2023-02-26 09:49:43
1
789
dudung
75,571,163
12,579,308
cv2 imshow right click dropdown menu disappears immediately
<p><strong>Problem</strong></p> <p><img src="https://i.sstatic.net/O2uDh.gif" alt="problem illustration" /></p> <p>When I use cv imshow with the code below, I cannot use the right click to see the dropdown menu.</p> <p><strong>Code</strong></p> <pre><code>import cv2 image_path = &quot;/nas/data/IST/FODCam/230216/c6/p838_v4/85349980.jpg&quot; cv2.namedWindow(&quot;Image&quot;, cv2.WINDOW_NORMAL) cv2.resizeWindow(&quot;Image&quot;, 900, 600) img = cv2.imread(image_path) cv2.imshow(&quot;Image&quot;, img) cv2.waitKey(0) cv2.destroyAllWindows() </code></pre> <p><strong>Detailed Explanation</strong></p> <ul> <li><p>What happens is that I just see the border of the menu for a very short time. It seems like it starts opening but decides to close after a few milliseconds.</p> <p>Note that on the same computer, in the same cv setup (same conda environment), with other code, I achieve opening it and it looks like this:</p> <p><a href="https://i.sstatic.net/POSnE.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/POSnE.png" alt="cv dropdown menu image" /></a></p> </li> <li><p>I tried different cv2.namedWindow flags, and I tried to use cv2.setMouseCallback to cancel that effect on EVENT_RBUTTONDOWN. I was expecting to see the problem disappear.</p> </li> <li><p>I have realized that this is not related to the code. The same code can work as desired. Then I discovered that the problem arises when the cell containing the code in a Jupyter Notebook (in VS Code) runs again. If I restart the notebook and run the cell, it is OK. When I run it a second time without restarting, it is problematic. So my current workaround is &quot;always restart the notebook&quot;, but I still do not know why this happens.</p> </li> </ul>
<python><opencv><visual-studio-code><jupyter-notebook>
2023-02-26 09:49:14
0
341
Oguz Hanoglu
75,571,138
18,107,780
Flet packing into a macOS application
<p>I'm trying to convert a python flet application into a macos application. I'm using the flet cli with the command <code>flet pack</code>.</p> <p>The project files tree is:</p> <pre><code>PyFlutter |_ assets | |_ fonts | | |_font | |_ image.png |_ backend.py |_ credentials.log |_ main.py |_ icon.png </code></pre> <p>The command I used is:</p> <pre><code>flet pack &quot;main.py&quot; -n &quot;AltExp-beta&quot; --add-data &quot;assets:assets&quot; --add-data &quot;.:backend.py&quot; --icon &quot;icon.png&quot; --add-data &quot;.:credentials.log&quot; </code></pre> <p>The <code>.spec</code> file created is:</p> <pre><code># -*- mode: python ; coding: utf-8 -*- block_cipher = None a = Analysis( ['main.py'], pathex=[], binaries=[], datas=[('assets', 'assets'), ('.', 'backend.py'), ('.', 'credentials.log')], hiddenimports=[], hookspath=[], hooksconfig={}, runtime_hooks=[], excludes=[], win_no_prefer_redirects=False, win_private_assemblies=False, cipher=block_cipher, noarchive=False, ) pyz = PYZ(a.pure, a.zipped_data, cipher=block_cipher) exe = EXE( pyz, a.scripts, a.binaries, a.zipfiles, a.datas, [], name='AltExp-beta', debug=False, bootloader_ignore_signals=False, strip=False, upx=True, upx_exclude=[], runtime_tmpdir=None, console=False, disable_windowed_traceback=False, argv_emulation=False, target_arch=None, codesign_identity=None, entitlements_file=None, icon=['icon.png'], ) app = BUNDLE( exe, name='AltExp-beta.app', icon='icon.png', bundle_identifier=None, ) </code></pre> <p>The files that are supposed to be created are created just fine, but when I try to execute the application it gives me an error:</p> <pre><code>Unable to proceed your request [Error]: &quot;credentials.log&quot; not found </code></pre> <p>But when I execute the unix executable file, everythings works fine.</p> <p>I'll attach a demonstration:</p> <a href="https://i.sstatic.net/CYWzG.jpg" rel="nofollow noreferrer">Flet pack into .app not working</a>
<python><macos><unix><flet>
2023-02-26 09:42:03
1
457
Edoardo Balducci
75,571,055
17,696,880
How to add a whitespace before "((VERB)" only if it is not preceded by a space or the beginning of the string?
<pre><code>import re #input string example: input_text = &quot;((VERB)ayudar a nosotros) ár((VERB)ayudar a nosotros) Los computadores pueden ((VERB)ayudar a nosotros)&quot; #this give me a raise error(&quot;look-behind requires fixed-width pattern&quot;) re.error: look-behind requires fixed-width pattern #input_text = re.sub(r&quot;(?&lt;!^|\s)\(\(VERB\)&quot;, &quot; ((VERB)&quot;, input_text) #and this other option simply places a space in front of all ((VERB) ) # without caring if there is a space or the beginning of the string in front input_text = re.sub(r&quot;(^|\s)\(\(VERB\)&quot;, lambda match: match.group(1) + &quot;((VERB)&quot;, input_text) print(repr(input_text)) # --&gt; output </code></pre> <p>I have tried using <code>(^|\s)</code> as it is a capturing group that looks for the start of the string <code>^</code> or a whitespace just before the pattern <code>&quot;((VERB)&quot;</code>. Another pattern option could be with a non-capturing group <code>(?:|)</code> or better still using a context limiter like look-behind <code>(?&lt;!^|\s)</code></p> <p>This is the output you should be getting when running this script:</p> <pre><code>&quot;((VERB)ayudar a nosotros) ár ((VERB)ayudar a nosotros) Los computadores pueden ((VERB)ayudar a nosotros)&quot; </code></pre>
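The fixed-width restriction can be sidestepped entirely: a *positive* look-behind for a single non-space character, `(?<=\S)`, is fixed-width (exactly one character), so `re` accepts it, and it automatically fails both after whitespace and at position 0 where no preceding character exists. A sketch:

```python
import re

input_text = ("((VERB)ayudar a nosotros) ár((VERB)ayudar a nosotros) "
              "Los computadores pueden ((VERB)ayudar a nosotros)")

# (?<=\S) matches only when the previous character is non-whitespace,
# which is exactly "not preceded by a space or the start of the string".
result = re.sub(r"(?<=\S)\(\(VERB\)", " ((VERB)", input_text)
print(repr(result))
```

This avoids the variable-width alternation `(?<!^|\s)` that triggers the "look-behind requires fixed-width pattern" error.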
<python><python-3.x><regex><string><regex-lookarounds>
2023-02-26 09:24:36
2
875
Matt095
75,571,036
3,728,901
TorchStudio cannot run
<p>I seen <a href="https://www.youtube.com/watch?v=aNKTdMWO56w" rel="nofollow noreferrer">https://www.youtube.com/watch?v=aNKTdMWO56w</a> . Then I install TorchStudio. My PC: Windows 11 x64, Python 3.10 . I go to homepage of TorchStudio <a href="https://www.torchstudio.ai/" rel="nofollow noreferrer">https://www.torchstudio.ai/</a> , then download. My log</p> <pre><code>Microsoft Windows [Version 10.0.22621.1265] (c) Microsoft Corporation. All rights reserved. C:\Users\donhu&gt;python Python 3.10.9 (tags/v3.10.9:1dd9be6, Dec 6 2022, 20:01:21) [MSC v.1934 64 bit (AMD64)] on win32 Type &quot;help&quot;, &quot;copyright&quot;, &quot;credits&quot; or &quot;license&quot; for more information. &gt;&gt;&gt; exit() C:\Users\donhu&gt;where python C:\Users\donhu\AppData\Local\Programs\Python\Python310\python.exe C:\Users\donhu\AppData\Local\Microsoft\WindowsApps\python.exe C:\Users\donhu&gt;pip install torch Defaulting to user installation because normal site-packages is not writeable Looking in indexes: https://pypi.org/simple, https://pypi.ngc.nvidia.com Requirement already satisfied: torch in c:\users\donhu\appdata\roaming\python\python39\site-packages (1.13.1+cu117) Requirement already satisfied: typing-extensions in c:\programdata\anaconda3\lib\site-packages (from torch) (4.4.0) C:\Users\donhu&gt;pip install torchvision Defaulting to user installation because normal site-packages is not writeable Looking in indexes: https://pypi.org/simple, https://pypi.ngc.nvidia.com Requirement already satisfied: torchvision in c:\users\donhu\appdata\roaming\python\python39\site-packages (0.14.1+cu117) Requirement already satisfied: pillow!=8.3.*,&gt;=5.3.0 in c:\programdata\anaconda3\lib\site-packages (from torchvision) (9.3.0) Requirement already satisfied: typing-extensions in c:\programdata\anaconda3\lib\site-packages (from torchvision) (4.4.0) Requirement already satisfied: requests in c:\programdata\anaconda3\lib\site-packages (from torchvision) (2.28.1) Requirement already 
satisfied: torch==1.13.1 in c:\users\donhu\appdata\roaming\python\python39\site-packages (from torchvision) (1.13.1+cu117) Requirement already satisfied: numpy in c:\programdata\anaconda3\lib\site-packages (from torchvision) (1.23.5) Requirement already satisfied: idna&lt;4,&gt;=2.5 in c:\programdata\anaconda3\lib\site-packages (from requests-&gt;torchvision) (3.4) Requirement already satisfied: urllib3&lt;1.27,&gt;=1.21.1 in c:\programdata\anaconda3\lib\site-packages (from requests-&gt;torchvision) (1.26.14) Requirement already satisfied: certifi&gt;=2017.4.17 in c:\programdata\anaconda3\lib\site-packages (from requests-&gt;torchvision) (2022.12.7) Requirement already satisfied: charset-normalizer&lt;3,&gt;=2 in c:\programdata\anaconda3\lib\site-packages (from requests-&gt;torchvision) (2.0.4) C:\Users\donhu&gt;pip install torchaudio Defaulting to user installation because normal site-packages is not writeable Looking in indexes: https://pypi.org/simple, https://pypi.ngc.nvidia.com Requirement already satisfied: torchaudio in c:\users\donhu\appdata\roaming\python\python39\site-packages (0.13.1+cu117) Requirement already satisfied: torch==1.13.1 in c:\users\donhu\appdata\roaming\python\python39\site-packages (from torchaudio) (1.13.1+cu117) Requirement already satisfied: typing-extensions in c:\programdata\anaconda3\lib\site-packages (from torch==1.13.1-&gt;torchaudio) (4.4.0) C:\Users\donhu&gt;pip install torchtext Defaulting to user installation because normal site-packages is not writeable Looking in indexes: https://pypi.org/simple, https://pypi.ngc.nvidia.com Collecting torchtext Downloading torchtext-0.14.1-cp39-cp39-win_amd64.whl (1.9 MB) ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 1.9/1.9 MB 16.8 MB/s eta 0:00:00 Requirement already satisfied: numpy in c:\programdata\anaconda3\lib\site-packages (from torchtext) (1.23.5) Requirement already satisfied: requests in c:\programdata\anaconda3\lib\site-packages (from torchtext) (2.28.1) Requirement already satisfied: 
tqdm in c:\programdata\anaconda3\lib\site-packages (from torchtext) (4.64.1) Requirement already satisfied: torch==1.13.1 in c:\users\donhu\appdata\roaming\python\python39\site-packages (from torchtext) (1.13.1+cu117) Requirement already satisfied: typing-extensions in c:\programdata\anaconda3\lib\site-packages (from torch==1.13.1-&gt;torchtext) (4.4.0) Requirement already satisfied: certifi&gt;=2017.4.17 in c:\programdata\anaconda3\lib\site-packages (from requests-&gt;torchtext) (2022.12.7) Requirement already satisfied: urllib3&lt;1.27,&gt;=1.21.1 in c:\programdata\anaconda3\lib\site-packages (from requests-&gt;torchtext) (1.26.14) Requirement already satisfied: idna&lt;4,&gt;=2.5 in c:\programdata\anaconda3\lib\site-packages (from requests-&gt;torchtext) (3.4) Requirement already satisfied: charset-normalizer&lt;3,&gt;=2 in c:\programdata\anaconda3\lib\site-packages (from requests-&gt;torchtext) (2.0.4) Requirement already satisfied: colorama in c:\programdata\anaconda3\lib\site-packages (from tqdm-&gt;torchtext) (0.4.6) Installing collected packages: torchtext Successfully installed torchtext-0.14.1 C:\Users\donhu&gt;pip install graphviz Defaulting to user installation because normal site-packages is not writeable Looking in indexes: https://pypi.org/simple, https://pypi.ngc.nvidia.com Collecting graphviz Downloading graphviz-0.20.1-py3-none-any.whl (47 kB) ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 47.0/47.0 kB 1.2 MB/s eta 0:00:00 Installing collected packages: graphviz Successfully installed graphviz-0.20.1 C:\Users\donhu&gt;where python C:\Users\donhu\AppData\Local\Programs\Python\Python310\python.exe C:\Users\donhu\AppData\Local\Microsoft\WindowsApps\python.exe C:\Users\donhu&gt; </code></pre> <p><a href="https://i.sstatic.net/DZRMO.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/DZRMO.png" alt="enter image description here" /></a></p> <p>After I seen in console (CMD) show install finished, I press icon, but it show as when I no download.</p> 
<p><a href="https://i.sstatic.net/lt0KB.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/lt0KB.png" alt="enter image description here" /></a></p> <p>Then I tried: <a href="https://i.sstatic.net/n3bKN.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/n3bKN.png" alt="enter image description here" /></a></p> <p><a href="https://i.sstatic.net/57M1M.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/57M1M.png" alt="enter image description here" /></a></p> <p><a href="https://i.sstatic.net/Bm94h.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Bm94h.png" alt="enter image description here" /></a></p> <p>My install log: <a href="https://gist.github.com/donhuvy/c3371e8c40b0524f8a500a12a90a1f5f#file-install-log-L542" rel="nofollow noreferrer">https://gist.github.com/donhuvy/c3371e8c40b0524f8a500a12a90a1f5f#file-install-log-L542</a></p> <p>How can I install and then run TorchStudio successfully?</p>
<python><pytorch><torchstudio>
2023-02-26 09:21:41
1
53,313
Vy Do
75,571,009
573,082
Why do I get a "cannot unpack" error with PythonNet but not with IronPython?
<p>I tried to connect to a socket with <code>IronPython</code> and everything worked. But when I try with <code>Python.Runtime</code> I get a <code>Python.Runtime.PythonException: 'cannot unpack non-iterable ValueTuple[String,Int32] object'</code> exception. I use <code>Python.Runtime</code> for <code>python3.8</code>.</p> <p><strong>Python.Runtime</strong></p> <pre><code>Runtime.PythonDLL = &quot;python38.dll&quot;; PythonEngine.Initialize(); dynamic socketModule = Py.Import(&quot;socket&quot;); dynamic clientSocket = socketModule.create_connection((&quot;127.0.0.1&quot;, 30001)); </code></pre> <p><strong>IronPython</strong></p> <pre><code>ScriptRuntime ipy = IronPython.Hosting.Python.CreateRuntime(); dynamic sys = ipy.ImportModule(&quot;sys&quot;); sys.path.append(@&quot;C:\Python34\lib&quot;); dynamic socketModule = ipy.ImportModule(&quot;socket&quot;); </code></pre>
<python><c#><ironpython>
2023-02-26 09:14:51
0
14,501
theateist
75,570,862
6,630,397
django-two-factor-auth[phonenumbers] got a redundant migration -> psycopg2.errors.DuplicateTable: relation "two_factor_phonedevice" already exists
<p>I'm facing the following database creation table error when spinning up a <a href="https://www.djangoproject.com/" rel="nofollow noreferrer">django</a> project <em>from scratch</em> when I have <a href="https://github.com/jazzband/django-two-factor-auth" rel="nofollow noreferrer"><code>django-two-factor-auth[phonenumbers]</code></a> in my requirements.</p> <p>When I run the <code>migrate</code> command, it raises a <code>psycopg2.errors.DuplicateTable</code> error:</p> <pre class="lang-bash prettyprint-override"><code>$ python manage.py migrate Operations to perform: Apply all migrations: admin, auth, contenttypes, projectapplication, otp_static, otp_totp, sessions, two_factor Running migrations: Applying contenttypes.0001_initial... OK Applying auth.0001_initial... OK Applying admin.0001_initial... OK Applying admin.0002_logentry_remove_auto_add... OK Applying admin.0003_logentry_add_action_flag_choices... OK Applying contenttypes.0002_remove_content_type_name... OK Applying auth.0002_alter_permission_name_max_length... OK Applying auth.0003_alter_user_email_max_length... OK Applying auth.0004_alter_user_username_opts... OK Applying auth.0005_alter_user_last_login_null... OK Applying auth.0006_require_contenttypes_0002... OK Applying auth.0007_alter_validators_add_error_messages... OK Applying auth.0008_alter_user_username_max_length... OK Applying auth.0009_alter_user_last_name_max_length... OK Applying auth.0010_alter_group_name_max_length... OK Applying auth.0011_update_proxy_permissions... OK Applying auth.0012_alter_user_first_name_max_length... OK Applying projectapplication.0001_initial... OK (... project application migrations) Applying projectapplication.0032_alter_foo_bar_baz_and_more... OK Applying otp_static.0001_initial... OK Applying otp_static.0002_throttling... OK Applying otp_totp.0001_initial... OK Applying otp_totp.0002_auto_20190420_0723... OK Applying sessions.0001_initial... OK Applying two_factor.0001_initial... 
OK Applying two_factor.0002_auto_20150110_0810... OK Applying two_factor.0003_auto_20150817_1733... OK Applying two_factor.0004_auto_20160205_1827... OK Applying two_factor.0005_auto_20160224_0450... OK Applying two_factor.0006_phonedevice_key_default... OK Applying two_factor.0007_auto_20201201_1019... OK Applying two_factor.0008_delete_phonedevice... OK Applying two_factor.0009_initial...Traceback (most recent call last): File &quot;/usr/local/lib/python3.9/site-packages/django/db/backends/utils.py&quot;, line 87, in _execute return self.cursor.execute(sql) psycopg2.errors.DuplicateTable: relation &quot;two_factor_phonedevice&quot; already exists The above exception was the direct cause of the following exception: Traceback (most recent call last): File &quot;/app/manage.py&quot;, line 21, in &lt;module&gt; main() File &quot;/app/manage.py&quot;, line 17, in main execute_from_command_line(sys.argv) File &quot;/usr/local/lib/python3.9/site-packages/django/core/management/__init__.py&quot;, line 446, in execute_from_command_line utility.execute() File &quot;/usr/local/lib/python3.9/site-packages/django/core/management/__init__.py&quot;, line 440, in execute self.fetch_command(subcommand).run_from_argv(self.argv) File &quot;/usr/local/lib/python3.9/site-packages/django/core/management/base.py&quot;, line 402, in run_from_argv self.execute(*args, **cmd_options) File &quot;/usr/local/lib/python3.9/site-packages/django/core/management/base.py&quot;, line 448, in execute output = self.handle(*args, **options) File &quot;/usr/local/lib/python3.9/site-packages/django/core/management/base.py&quot;, line 96, in wrapped res = handle_func(*args, **kwargs) File &quot;/usr/local/lib/python3.9/site-packages/django/core/management/commands/migrate.py&quot;, line 349, in handle post_migrate_state = executor.migrate( File &quot;/usr/local/lib/python3.9/site-packages/django/db/migrations/executor.py&quot;, line 135, in migrate state = self._migrate_all_forwards( File 
&quot;/usr/local/lib/python3.9/site-packages/django/db/migrations/executor.py&quot;, line 167, in _migrate_all_forwards state = self.apply_migration( File &quot;/usr/local/lib/python3.9/site-packages/django/db/migrations/executor.py&quot;, line 252, in apply_migration state = migration.apply(state, schema_editor) File &quot;/usr/local/lib/python3.9/site-packages/django/db/migrations/migration.py&quot;, line 130, in apply operation.database_forwards( File &quot;/usr/local/lib/python3.9/site-packages/django/db/migrations/operations/models.py&quot;, line 96, in database_forwards schema_editor.create_model(model) File &quot;/usr/local/lib/python3.9/site-packages/django/db/backends/base/schema.py&quot;, line 447, in create_model self.execute(sql, params or None) File &quot;/usr/local/lib/python3.9/site-packages/django/db/backends/base/schema.py&quot;, line 199, in execute cursor.execute(sql, params) File &quot;/usr/local/lib/python3.9/site-packages/django/db/backends/utils.py&quot;, line 102, in execute return super().execute(sql, params) File &quot;/usr/local/lib/python3.9/site-packages/django/db/backends/utils.py&quot;, line 67, in execute return self._execute_with_wrappers( File &quot;/usr/local/lib/python3.9/site-packages/django/db/backends/utils.py&quot;, line 80, in _execute_with_wrappers return executor(sql, params, many, context) File &quot;/usr/local/lib/python3.9/site-packages/django/db/backends/utils.py&quot;, line 89, in _execute return self.cursor.execute(sql, params) File &quot;/usr/local/lib/python3.9/site-packages/django/db/utils.py&quot;, line 91, in __exit__ raise dj_exc_value.with_traceback(traceback) from exc_value File &quot;/usr/local/lib/python3.9/site-packages/django/db/backends/utils.py&quot;, line 87, in _execute return self.cursor.execute(sql) django.db.utils.ProgrammingError: relation &quot;two_factor_phonedevice&quot; already exists </code></pre> <p>This is the corresponding <a href="https://postgresql.org/" rel="nofollow 
noreferrer">PostgreSQL</a> log at that time:</p> <pre class="lang-sql prettyprint-override"><code>2023-02-26 07:59:49.104 UTC [57] LOG: database system was shut down at 2023-02-26 07:59:48 UTC 2023-02-26 07:59:49.110 UTC [1] LOG: database system is ready to accept connections 2023-02-26 08:00:58.733 UTC [72] ERROR: relation &quot;two_factor_phonedevice&quot; already exists 2023-02-26 08:00:58.733 UTC [72] STATEMENT: CREATE TABLE &quot;two_factor_phonedevice&quot; ( &quot;id&quot; bigint NOT NULL PRIMARY KEY GENERATED BY DEFAULT AS IDENTITY, &quot;name&quot; varchar(64) NOT NULL, &quot;confirmed&quot; boolean NOT NULL, &quot;throttling_failure_timestamp&quot; timestamp with time zone NULL, &quot;throttling_failure_count&quot; integer NOT NULL CHECK (&quot;throttling_failure_count&quot; &gt;= 0), &quot;number&quot; varchar(128) NOT NULL, &quot;key&quot; varchar(40) NOT NULL, &quot;method&quot; varchar(4) NOT NULL, &quot;user_id&quot; integer NOT NULL ) </code></pre> <p>And if I check in the database, the table <em>already</em> exists as:</p> <pre class="lang-sql prettyprint-override"><code>CREATE TABLE IF NOT EXISTS public.two_factor_phonedevice ( id integer NOT NULL GENERATED BY DEFAULT AS IDENTITY ( INCREMENT 1 START 1 MINVALUE 1 MAXVALUE 2147483647 CACHE 1 ), name character varying(64) COLLATE pg_catalog.&quot;default&quot; NOT NULL, confirmed boolean NOT NULL, &quot;number&quot; character varying(128) COLLATE pg_catalog.&quot;default&quot; NOT NULL, key character varying(40) COLLATE pg_catalog.&quot;default&quot; NOT NULL, method character varying(4) COLLATE pg_catalog.&quot;default&quot; NOT NULL, user_id integer NOT NULL, throttling_failure_count integer NOT NULL, throttling_failure_timestamp timestamp with time zone, CONSTRAINT two_factor_phonedevice_pkey PRIMARY KEY (id), CONSTRAINT two_factor_phonedevice_user_id_54718003_fk_auth_user_id FOREIGN KEY (user_id) REFERENCES public.auth_user (id) MATCH SIMPLE ON UPDATE NO ACTION ON DELETE NO ACTION DEFERRABLE 
INITIALLY DEFERRED, CONSTRAINT two_factor_phonedevice_throttling_failure_count_check CHECK (throttling_failure_count &gt;= 0) ); </code></pre> <p>Some more information about the migrations of the <code>two_factor</code> app:</p> <pre class="lang-bash prettyprint-override"><code>$ ls -lA /usr/local/lib/python3.9/site-packages/two_factor/migrations/ total 40 -rw-r--r-- 1 root root 1851 Feb 25 21:55 0001_initial.py -rw-r--r-- 1 root root 597 Feb 25 21:55 0002_auto_20150110_0810.py -rw-r--r-- 1 root root 1511 Feb 25 21:55 0003_auto_20150817_1733.py -rw-r--r-- 1 root root 454 Feb 25 21:55 0004_auto_20160205_1827.py -rw-r--r-- 1 root root 1768 Feb 25 21:55 0005_auto_20160224_0450.py -rw-r--r-- 1 root root 608 Feb 25 21:55 0006_phonedevice_key_default.py -rw-r--r-- 1 root root 853 Feb 25 21:55 0007_auto_20201201_1019.py -rw-r--r-- 1 root root 380 Feb 25 21:55 0008_delete_phonedevice.py -rw-r--r-- 1 root root 1992 Feb 26 08:59 0009_initial.py # &lt;-------- this one is causing the trouble -rw-r--r-- 1 root root 0 Feb 25 21:55 __init__.py drwxr-xr-x 1 root root 4096 Feb 26 08:59 __pycache__ </code></pre> <p>also:</p> <pre class="lang-bash prettyprint-override"><code>$ python manage.py showmigrations two_factor two_factor [X] 0001_initial [X] 0002_auto_20150110_0810 [X] 0003_auto_20150817_1733 [X] 0004_auto_20160205_1827 [X] 0005_auto_20160224_0450 [X] 0006_phonedevice_key_default [X] 0007_auto_20201201_1019 [X] 0008_delete_phonedevice [ ] 0009_initial # &lt;---- not run because of the above error </code></pre> <p>=&gt; Here is the content of <code>0001_initial.py</code>:</p> <pre class="lang-py prettyprint-override"><code>import django.core.validators from django.conf import settings from django.db import migrations, models class Migration(migrations.Migration): dependencies = [ migrations.swappable_dependency(settings.AUTH_USER_MODEL), ] operations = [ migrations.CreateModel( name='PhoneDevice', fields=[ ('id', models.AutoField(verbose_name='ID', serialize=False, 
auto_created=True, primary_key=True)), ('name', models.CharField(help_text='The human-readable name of this device.', max_length=64)), ('confirmed', models.BooleanField(default=True, help_text='Is this device ready for use?')), ('number', models.CharField( max_length=16, verbose_name='number', validators=[django.core.validators.RegexValidator( regex='^(\\+|00)', message='Please enter a valid phone number, including your country code ' 'starting with + or 00.', code='invalid-phone-number' )] )), ('key', models.CharField(help_text='Hex-encoded secret key', max_length=40)), ('method', models.CharField( max_length=4, verbose_name='method', choices=[('call', 'Phone Call'), ('sms', 'Text Message')] )), ('user', models.ForeignKey( help_text='The user that this device belongs to.', to=settings.AUTH_USER_MODEL, on_delete=models.CASCADE )), ], options={ 'abstract': False, }, bases=(models.Model,), ), ] </code></pre> <p>=&gt; And the content of <code>0009_initial.py</code>:</p> <pre class="lang-py prettyprint-override"><code># Generated by Django 4.1.7 on 2023-02-26 07:59 from django.conf import settings from django.db import migrations, models import django.db.models.deletion import django_otp.util import phonenumber_field.modelfields import two_factor.plugins.phonenumber.models class Migration(migrations.Migration): initial = True dependencies = [ migrations.swappable_dependency(settings.AUTH_USER_MODEL), ('two_factor', '0008_delete_phonedevice'), ] operations = [ migrations.CreateModel( name='PhoneDevice', fields=[ ('id', models.BigAutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')), ('name', models.CharField(help_text='The human-readable name of this device.', max_length=64)), ('confirmed', models.BooleanField(default=True, help_text='Is this device ready for use?')), ('throttling_failure_timestamp', models.DateTimeField(blank=True, default=None, help_text='A timestamp of the last failed verification attempt. 
Null if last attempt succeeded.', null=True)), ('throttling_failure_count', models.PositiveIntegerField(default=0, help_text='Number of successive failed attempts.')), ('number', phonenumber_field.modelfields.PhoneNumberField(max_length=128, region=None)), ('key', models.CharField(default=django_otp.util.random_hex, help_text='Hex-encoded secret key', max_length=40, validators=[two_factor.plugins.phonenumber.models.key_validator])), ('method', models.CharField(choices=[('call', 'Phone Call'), ('sms', 'Text Message')], max_length=4, verbose_name='method')), ('user', models.ForeignKey(help_text='The user that this device belongs to.', on_delete=django.db.models.deletion.CASCADE, to=settings.AUTH_USER_MODEL)), ], options={ 'db_table': 'two_factor_phonedevice', }, ), ] </code></pre> <br> <p>Prior to the <code>two_factor</code> migrations, all other app migrations went well, in that order: <code>admin</code>, <code>auth</code>, <code>contenttypes</code>, <code>projectapplication</code>, <code>otp_static</code>, <code>otp_totp</code>, <code>session</code>.</p> <p>I strictly followed the documentation for installing the two factor auth: <a href="https://django-two-factor-auth.readthedocs.io/en/stable/installation.html" rel="nofollow noreferrer">https://django-two-factor-auth.readthedocs.io/en/stable/installation.html</a></p> <p>which means that I added <code>django-two-factor-auth[phonenumbers]</code> to my <code>requirements.txt</code> + that I parametrized the <code>settings.py</code> with what is in the actual documentation.</p> <p><code>settings.py</code>:</p> <pre class="lang-py prettyprint-override"><code>INSTALLED_APPS = [ 'project.apps.ProjectConfig', 'django.contrib.admin', 'django.contrib.auth', 'django.contrib.contenttypes', 'django.contrib.sessions', 'django.contrib.messages', 'django_extensions', 'django_bootstrap5', 'import_export', 'rest_framework', 'drf_spectacular', 'django_otp', 'django_otp.plugins.otp_static', 'django_otp.plugins.otp_totp', 
'two_factor', ] MIDDLEWARE = [ ... 'django.contrib.auth.middleware.AuthenticationMiddleware', ... 'django_otp.middleware.OTPMiddleware', ] </code></pre> <p>I also added the <code>path(r'', include(two_factor.urls.urlpatterns)),</code> routes in the <code>urls.py</code>.</p> <p>How can I identify what went wrong and what is needed to make my project run without errors?</p> <hr /> <h4>Version info</h4> <ul> <li>Python 3.9.13</li> <li>django 4.1.7</li> <li>django-two-factor-auth[phonenumbers] in <code>/usr/local/lib/python3.9/site-packages</code> (1.15.0)</li> <li>django-phonenumber-field&lt;7,&gt;=1.1.0 in <code>/usr/local/lib/python3.9/site-packages</code> (from django-two-factor-auth[phonenumbers]) (6.4.0)</li> <li>django-formtools in <code>/usr/local/lib/python3.9/site-packages</code> (from django-two-factor-auth[phonenumbers]) (2.4)</li> <li>django-otp&gt;=0.8.0 in <code>/usr/local/lib/python3.9/site-packages</code> (from django-two-factor-auth[phonenumbers]) (1.1.4)</li> <li>Django&gt;=3.2 in <code>/usr/local/lib/python3.9/site-packages</code> (from django-two-factor-auth[phonenumbers]) (4.1.7)</li> <li>qrcode&lt;7.99,&gt;=4.0.0 in <code>/usr/local/lib/python3.9/site-packages</code> (from django-two-factor-auth[phonenumbers]) (7.4.2)</li> <li>phonenumbers&lt;8.99,&gt;=7.0.9 in <code>/usr/local/lib/python3.9/site-packages</code> (from django-two-factor-auth[phonenumbers]) (8.13.6)</li> <li>asgiref&lt;4,&gt;=3.5.2 in <code>/usr/local/lib/python3.9/site-packages</code> (from Django&gt;=3.2-&gt;django-two-factor-auth[phonenumbers]) (3.6.0)</li> <li>sqlparse&gt;=0.2.2 in <code>/usr/local/lib/python3.9/site-packages</code> (from Django&gt;=3.2-&gt;django-two-factor-auth[phonenumbers]) (0.4.3)</li> <li>typing-extensions in <code>/usr/local/lib/python3.9/site-packages</code> (from qrcode&lt;7.99,&gt;=4.0.0-&gt;django-two-factor-auth[phonenumbers]) (4.5.0)</li> <li>pypng in <code>/usr/local/lib/python3.9/site-packages</code> (from 
qrcode&lt;7.99,&gt;=4.0.0-&gt;django-two-factor-auth[phonenumbers]) (0.20220715.0)</li> </ul>
<python><django><totp><django-two-factor-auth>
2023-02-26 08:42:36
1
8,371
swiss_knight
75,570,857
2,908,017
How to change Label font size in Python FMX GUI app
<p>I'm using the <a href="https://github.com/Embarcadero/DelphiFMX4Python" rel="nofollow noreferrer">DelphiFMX GUI library for Python</a> and trying to change the font size on a <code>Label</code> component, but it's not working.</p> <p>I have the following code to create the <code>Form</code> and the <code>Label</code> on my Form:</p> <pre><code>from delphifmx import * class HelloForm(Form): def __init__(self, owner): self.Caption = 'Hello World' self.Width = 1000 self.Height = 500 self.Position = &quot;ScreenCenter&quot; self.myLabel = Label(self) self.myLabel.Parent = self self.myLabel.Text = &quot;Hello World!&quot; self.myLabel.Align = &quot;Client&quot; self.myLabel.TextSettings.Font.Size = 50 self.myLabel.TextSettings.HorzAlign = &quot;Center&quot; </code></pre> <p>My output Form, then looks like this: <a href="https://i.sstatic.net/bltrw.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/bltrw.png" alt="Python GUI Hello World" /></a></p> <p>My &quot;Hello World!&quot; Label should be much bigger than what it's showing. I'm setting the font size with this piece of code:</p> <pre><code>self.myLabel.TextSettings.Font.Size = 50 </code></pre>
<python><user-interface><firemonkey>
2023-02-26 08:41:28
1
4,263
Shaun Roselt
75,570,840
16,363,897
Pandas ewm correlation - not rolling
<p>I have the following pandas dataframe:</p> <pre><code> a b c 2023-01-01 35 34 17 2023-01-02 85 54 31 2023-01-03 33 8 27 2023-01-04 95 9 45 2023-01-05 71 98 7 </code></pre> <p>I want to calculate today's (2023-01-05) EWM correlations between the 3 series.</p> <p>I tried</p> <pre><code>correls = data.ewm(alpha=0.01, adjust=True).corr(method='pearson') </code></pre> <p>and it produced rolling correlations (calculated on all dates):</p> <pre><code> a b c 2023-01-01 a NaN NaN NaN b NaN NaN NaN c NaN NaN NaN 2023-01-02 a 1.000000 1.000000 1.000000 b 1.000000 1.000000 1.000000 c 1.000000 1.000000 1.000000 2023-01-03 a 1.000000 0.845674 0.694635 b 0.845674 1.000000 0.203512 c 0.694635 0.203512 1.000000 2023-01-04 a 1.000000 0.177224 0.842738 b 0.177224 1.000000 -0.362909 c 0.842738 -0.362909 1.000000 2023-01-05 a 1.000000 0.209568 0.478477 b 0.209568 1.000000 -0.748170 c 0.478477 -0.748170 1.000000 </code></pre> <p>I know I can now slice the correls dataframe to get only the latest correlations. The problem is that the real &quot;data&quot; dataframe is very large and computing rolling correlations takes a lot of time. Since I only need today's correlations, how can I avoid the <code>ewm.corr</code> function calculating rolling correlations in the first place?</p> <p>To be clear, I'm looking for a fast way to get the following output:</p> <pre><code> a b c a 1.000000 0.209568 0.478477 b 0.209568 1.000000 -0.748170 c 0.478477 -0.748170 1.000000 </code></pre> <p>Thanks</p>
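[Editorial note] The last correlation matrix can be computed directly as a weighted Pearson correlation using the EWM weights <code>(1 - alpha)**k</code> (newest row weighted 1), skipping the full rolling pass. With <code>adjust=True</code> and no NaNs, the normalisation and bias factors cancel in the correlation ratio, so this should match the last block of pandas' rolling output. A sketch:

```python
import numpy as np
import pandas as pd

def latest_ewm_corr(df: pd.DataFrame, alpha: float) -> pd.DataFrame:
    """Weighted Pearson correlation for the most recent date only.

    Intended to match df.ewm(alpha=alpha, adjust=True).corr() at the last
    timestamp (assuming no NaNs), computed in a single pass.
    """
    x = df.to_numpy(dtype=float)
    n = len(x)
    w = (1.0 - alpha) ** np.arange(n - 1, -1, -1)  # newest observation gets weight 1
    mean = np.average(x, axis=0, weights=w)
    xc = x - mean
    cov = (w[:, None] * xc).T @ xc                 # unnormalised weighted covariance
    d = np.sqrt(np.diag(cov))
    return pd.DataFrame(cov / np.outer(d, d), index=df.columns, columns=df.columns)
```

Because the same weight normalisation appears in the covariance and both variances, no debiasing is needed for the correlation itself.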
<python><pandas><dataframe>
2023-02-26 08:36:31
1
842
younggotti
75,570,839
7,959,614
Asyncio run Task conditional of another Task
<p>I want to run a task infinitely. Basically, the script needs to do the following:</p> <ul> <li>check each week if there is a match</li> <li>sleep until the match starts</li> <li>create a connection with the websocket</li> <li>check the status of a match using a subscription query</li> <li>depending on the status of the match run another subscription and log the output</li> <li>close the websocket connection at some point and start all over again.</li> </ul> <p>I wrote the following script for it:</p> <pre><code>import asyncio from gql import Client from gql.transport.websockets import WebsocketsTransport async def execute_subscription1(session): async for response in session.subscribe(subscription1): if response['status'] == 'in progress': task_2 = asyncio.create_task(execute_subscription2(session)) asyncio.run(task_2) elif response['status'] == 'complete': # task_1 is completed return None else: # status is suspended / starting soon / waiting etc try: task_2.cancel() except (asyncio.CancelledError, asyncio.InvalidStateError): pass async def execute_subscription2(session): async for response in session.subscribe(subscription2): print(response) async def graphql_connection(): transport = WebsocketsTransport(url=&quot;wss://YOUR_URL&quot;) client = Client(transport=transport, fetch_schema_from_transport=False) async with client as session: task_1 = asyncio.create_task(execute_subscription1(session)) await task_1 async def watch(game): seconds_until_game = get_time_until_game() await asyncio.sleep(seconds_until_game) await graphql_connection() async def watch_always() -&gt; None: while True: game = get_upcoming_game() asyncio.run(watch(game)) loop = asyncio.new_event_loop() loop.run_until_complete(watch_always()) </code></pre> <p>I expect that I will receive <code>response</code> from <code>session.subscribe(subscription1)</code> every minute. 
I expect that a change in the match status will only occur every 10 minutes.</p> <p>So, I only want to launch <code>task_2</code> the first time <code>response['status'] == 'in progress'</code> occurs, or the first time it occurs again after <code>task_2</code> was cancelled earlier. How can I achieve this? In addition, I read the <a href="https://docs.python.org/3/library/asyncio-exceptions.html" rel="nofollow noreferrer">documentation</a> of these exceptions, but I couldn't work out whether <code>(asyncio.CancelledError, asyncio.InvalidStateError)</code> is raised when a task that was never created is cancelled.</p> <p>Please advise.</p>
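[Editorial note] One way to start <code>task_2</code> only on a transition into <code>'in progress'</code> is to keep the task reference yourself and reset it to <code>None</code> whenever it is cancelled. A sketch, with the subscription passed in as a parameter and <code>execute_subscription2</code> represented by a <code>start_task2</code> coroutine factory (both names are placeholders):

```python
import asyncio

async def run_subscription1(session, subscription1, start_task2):
    """Launch start_task2() only on a *transition* into 'in progress'.

    `session`, `subscription1` and `start_task2` stand in for the gql session,
    the first subscription query, and a factory returning the
    execute_subscription2(session) coroutine.
    """
    task_2 = None
    async for response in session.subscribe(subscription1):
        status = response["status"]
        if status == "in progress":
            if task_2 is None:                # first time, or first time after a cancel
                task_2 = asyncio.create_task(start_task2())
        elif status == "complete":
            if task_2 is not None:
                task_2.cancel()
            return                            # subscription 1 is finished
        else:                                 # suspended / starting soon / waiting ...
            if task_2 is not None:
                task_2.cancel()
                task_2 = None                 # the next 'in progress' restarts it
```

Tracking the reference also answers the exception question: `Task.cancel()` does not raise at the call site (asyncio.CancelledError is delivered inside the cancelled coroutine), and the `task_2 is not None` guard means a task that was never created is never touched.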
<python><graphql><python-asyncio>
2023-02-26 08:36:15
1
406
HJA24
75,570,820
4,653,436
pynamodb last_evaluated_key always return null
<p>I'm using <code>Pynamodb</code> for interacting with <code>dynamodb</code>. However, last_evaluated_key always returns null even if there are multiple items. When I run this query</p> <pre><code>results = RecruiterProfileModel.profile_index.query( hash_key=UserEnum.RECRUITER, scan_index_forward=False, limit=1, ) </code></pre> <p>If I try getting this value</p> <pre><code>results.last_evaluated_key </code></pre> <p>it always returns <code>null</code>.</p> <p>However, if I dump the <code>results</code> object, it returns</p> <pre><code>{ &quot;page_iter&quot;: { &quot;_operation&quot;: {}, &quot;_args&quot;: [ &quot;recruiter&quot; ], &quot;_kwargs&quot;: { &quot;range_key_condition&quot;: null, &quot;filter_condition&quot;: null, &quot;index_name&quot;: &quot;PROFILE_INDEX&quot;, &quot;exclusive_start_key&quot;: { &quot;sk&quot;: { &quot;S&quot;: &quot;PROFILE#6ab0f5bc-7283-4236-9e37-ea746901d19e&quot; }, &quot;user_type&quot;: { &quot;S&quot;: &quot;recruiter&quot; }, &quot;created_at&quot;: { &quot;N&quot;: &quot;1677398017.685749&quot; }, &quot;pk&quot;: { &quot;S&quot;: &quot;RECRUITER#6ab0f5bc-7283-4236-9e37-ea746901d19e&quot; } }, &quot;consistent_read&quot;: false, &quot;scan_index_forward&quot;: false, &quot;limit&quot;: 1, &quot;attributes_to_get&quot;: null }, &quot;_last_evaluated_key&quot;: { &quot;sk&quot;: { &quot;S&quot;: &quot;PROFILE#95a4f201-2475-45a7-b096-5167f6a4d639&quot; }, &quot;user_type&quot;: { &quot;S&quot;: &quot;recruiter&quot; }, &quot;created_at&quot;: { &quot;N&quot;: &quot;1677398017.68518&quot; }, &quot;pk&quot;: { &quot;S&quot;: &quot;RECRUITER#95a4f201-2475-45a7-b096-5167f6a4d639&quot; } }, &quot;_is_last_page&quot;: false, &quot;_total_scanned_count&quot;: 2, &quot;_rate_limiter&quot;: null, &quot;_settings&quot;: { &quot;extra_headers&quot;: null } }, &quot;_map_fn&quot;: {}, &quot;_limit&quot;: 0, &quot;_total_count&quot;: 1, &quot;_index&quot;: 1, &quot;_count&quot;: 1, &quot;_items&quot;: [ { &quot;company_name&quot;: { 
&quot;S&quot;: &quot;fletcher-rodriguez&quot; }, &quot;created_at&quot;: { &quot;N&quot;: &quot;1677398017.685749&quot; }, &quot;last_name&quot;: { &quot;S&quot;: &quot;craig&quot; }, &quot;first_name&quot;: { &quot;S&quot;: &quot;tyler&quot; }, &quot;designation&quot;: { &quot;S&quot;: &quot;manager&quot; }, &quot;verification_status&quot;: { &quot;S&quot;: &quot;pending&quot; }, &quot;sk&quot;: { &quot;S&quot;: &quot;PROFILE#6ab0f5bc-7283-4236-9e37-ea746901d19e&quot; }, &quot;user_type&quot;: { &quot;S&quot;: &quot;recruiter&quot; }, &quot;email&quot;: { &quot;S&quot;: &quot;fowlerkathy@hernandez.com&quot; }, &quot;pk&quot;: { &quot;S&quot;: &quot;RECRUITER#6ab0f5bc-7283-4236-9e37-ea746901d19e&quot; } } ] } </code></pre> <p>You can clearly see that <code>last_evaluated_key</code> exists over there. I'm getting confused here. Please help. I've deadlines to meet. Thank you in advance.</p> <p><strong>Edit:</strong> Here is a video attachment of trying to get the value of <code>last_eveluated_key</code> in different ways and they all return <code>null</code> <a href="https://drive.google.com/file/d/1LbImYQbx4Kz2gGQcFOa0Vz1bJJyhWPuw/view?usp=share_link" rel="nofollow noreferrer">pynamodb last_evaluated_key issue Gdrive video</a></p>
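[Editorial note] PynamoDB evaluates queries lazily: <code>query()</code> returns a <code>ResultIterator</code>, and a common cause of a <code>None</code> pagination key is reading <code>last_evaluated_key</code> before the page has actually been consumed by iterating. The pattern can be illustrated with a toy iterator standing in for the real <code>ResultIterator</code> (the item strings are made up):

```python
class LazyResultIterator:
    """Toy stand-in for PynamoDB's ResultIterator: the pagination key
    is only known after the page has been fetched by iterating."""

    def __init__(self, items):
        self._items = items
        self._pos = 0
        self.last_evaluated_key = None

    def __iter__(self):
        return self

    def __next__(self):
        if self._pos >= len(self._items):
            raise StopIteration
        item = self._items[self._pos]
        self._pos += 1
        self.last_evaluated_key = {"pk": item}   # populated as items are consumed
        return item


results = LazyResultIterator(["item-1"])
assert results.last_evaluated_key is None   # before iterating: nothing fetched yet
items = list(results)                       # consume the page first ...
key = results.last_evaluated_key            # ... then the key is available
```

With the real library the same ordering applies: iterate the results (or call `next(results)`) before reading `results.last_evaluated_key`.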
<python><amazon-dynamodb><nosql><dynamodb-queries><pynamodb>
2023-02-26 08:33:09
2
10,911
Koushik Das
75,570,612
6,952,996
mock a specific file with mock_open in Python
<p>I use this code snippet (From: <a href="https://stackoverflow.com/questions/69670597/how-do-i-mock-a-file-open-for-a-specific-path-in-python">How do I mock a file open for a specific path in python?</a>)</p> <pre class="lang-py prettyprint-override"><code>builtin_open = open def my_mock_open(*args, **kwargs): if args[0] == &quot;myFile&quot;: # mocked open for path &quot;myFile&quot; return mock.mock_open()(*args, **kwargs) # unpatched version for every other path return builtin_open(*args, **kwargs) def test_myfunc(mocker): mocker.patch('builtins.open', my_mock_open) myfunc() </code></pre> <p>This works well in that it only mocks the call to <code>myFile</code> and no other files that <code>myfunc()</code> reads from. But I also want to assert that the correct data was attempted to be written to the file <code>myFile</code>. I have tried to put the mock in a context &quot;with as&quot; statement but that didn't work.</p> <p><code> AttributeError: 'function' object has no attribute assert_called_with</code></p>
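[Editorial note] One way to keep the per-path dispatch and still make assertions is to create the <code>mock_open</code> instance once, outside the wrapper, so the same mock records every call. A sketch; <code>myfunc</code> here is a stand-in that writes to <code>myFile</code>:

```python
from unittest import mock

builtin_open = open
m = mock.mock_open()                     # one shared instance, so calls are recorded on it

def my_mock_open(*args, **kwargs):
    if args[0] == "myFile":
        return m(*args, **kwargs)        # mocked handle for "myFile"
    return builtin_open(*args, **kwargs) # real open for every other path

def myfunc():                            # stand-in for the function under test
    with open("myFile", "w") as f:
        f.write("some data")

with mock.patch("builtins.open", my_mock_open):
    myfunc()

m.assert_called_with("myFile", "w")          # the file was opened as expected
m().write.assert_called_with("some data")    # and the right data was written
```

This also explains the original `AttributeError`: patching with a plain function means the patched object has no recorded call history, so `assert_called_with` does not exist on it; the history lives on the `mock_open` instance, which the snippet above creates fresh on every call and then discards.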
<python><python-3.x><mocking><pytest>
2023-02-26 07:46:04
1
937
Johnathan
75,570,578
19,504,610
pyo3: Passing a reference of Rust's Self into a method of Self with attribute `#[classmethod]`
<p>Context of my question:</p> <p>In <a href="https://docs.pydantic.dev/usage/types/#classes-with-__get_validators__" rel="nofollow noreferrer">pydantic</a>'s, you can define a <code>CustomType</code>, which is any python class with the following class method named <code>__get_validators__(cls)</code> defined. The objective is to have pydantic validate the input to the initialization of the <code>CustomType</code> instance.</p> <pre><code>class CustomType: @classmethod def __get_validators__(cls): yield cls.validate_func_one </code></pre> <p>I am attempting to create a struct in Rust, mapping it to Python class using the familiar attribute macro <code>#[pyclass]</code>. My objective is to create from Rust, a Python class that is compatible with pydantic validation scheme, e.g. having the <code>classmethod(__get_validators__)</code> defined.</p> <p>Here is my attempt:</p> <pre><code>#[pyclass(name = &quot;MyPythonClass&quot;)] #[derive(Debug, Clone)] pub struct PyClass { pub rust_object: MyRustStruct, } </code></pre> <p>The implementation of <code>PyClass</code> is as follows:</p> <pre><code>#[pymethods] impl PyClass { #[new] fn new(string: String) -&gt; PyResult&lt;PyClass &gt; { Ok(PyClass { rust_object: MyRustStruct::new(string).unwrap(), }) } // intended to be the `cls.validate` function which is yielded fn __call__(_cls: &amp;PyType, v: &amp;PyString, _values: &amp;PyDict) -&gt; PyResult&lt;PyClass&gt; { let v: &amp;str = v.extract()?; Ok(PyClass{ // PyClass::new does all the validations required in rust rust_object: PyClass::new(v).unwrap(), }) } #[classmethod] // not sure how to write this... fn __get_validator__(_cls: &amp;PyType) -&gt; PyResult&lt;PyClassContainer&gt; { Ok(Self::get_container()) } </code></pre> <p><code>PyClass</code> is a wrapper over the underlying Rust struct <code>MyRustStruct</code>, such that <code>MyRustStruct</code> can be used for other purposes, e.g. to build a binary etc. 
This is standard practice.</p> <p>To be able to <code>yield</code>, I would need two things: first, a struct that is an <code>Iterator</code> in Rust, and second, that struct has to implement <code>__iter__</code> and <code>__next__</code>.</p> <p>First:</p> <pre><code>#[pyclass(name = &quot;PyClassContainer&quot;)] #[derive(Debug)] struct PyClassContainer{ iter: Vec&lt;PyClass&gt;, } </code></pre> <p>To trivially 'implement' the Rust <code>Iterator</code> trait, I dump the <code>PyClass</code> objects into a <code>Vec</code>.</p> <p>Then lastly, the implementations of <code>__iter__</code> and <code>__next__</code> of <code>PyClassContainer</code>:</p> <pre><code>#[pymethods] impl PyClassContainer{ fn __iter__(slf: PyRef&lt;'_, Self&gt;) -&gt; PyResult&lt;Py&lt;PyClassContainer&gt;&gt; { let iter = PyClassContainer{ iter: slf.iter.clone().into_iter(), }; Py::new(slf.py(), iter) } // to use `yield`, the `__next__` method has to return an `IterNextOutput&lt;T, U&gt;` enum // https://docs.rs/pyo3/latest/pyo3/pyclass/enum.IterNextOutput.html fn __next__(slf: PyRef&lt;'_, Self&gt;) -&gt; IterNextOutput&lt;Result&lt;PyClass, &amp;'static str&gt;, PyErr&gt; { match self.iter.into_iter() { Some(Ok(item)) =&gt; IterNextOutput::Yield(Ok(item)), Some(Err(err)) =&gt; IterNextOutput::Return(PyStopIteration::new_err(err)), } } } </code></pre> <p>My intent is to have <code>__next__</code> 'yield' <code>Ok(item)</code> from the <code>.iter</code> field of <code>PyClassContainer</code>, which will return an instance of the Python object of type <code>PyClass</code> when <code>PyClass.__get_validator__</code> is called.</p> <p>However, the signature of the attribute macro <a href="https://pyo3.rs/v0.18.1/class#class-methods" rel="nofollow noreferrer"><code>#[classmethod]</code></a> does not allow passing in a reference to <code>self</code>, which makes it impossible for me to 'clone' the fields that are already available in <code>self</code>.</p>
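[Editorial note] For reference, what pydantic v1 consumes from <code>__get_validators__</code> is a generator of plain callables, not an iterator of class instances, so the binding side only needs to return functions. A minimal pure-Python model of the contract (class and validator names are illustrative):

```python
class CustomType:
    """Pure-Python model of the pydantic v1 custom-type contract."""

    def __init__(self, value):
        self.value = value

    @classmethod
    def __get_validators__(cls):
        # pydantic iterates this and calls each validator in turn;
        # note that *functions* are yielded, never CustomType instances.
        yield cls.validate

    @classmethod
    def validate(cls, v):
        if not isinstance(v, str):
            raise TypeError("string required")
        return cls(v)
```

Because only callables are yielded, a container struct holding `PyClass` values (and hence access to `self` inside `#[classmethod]`) is not required to satisfy pydantic.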
<python><rust><pyo3><python-bindings>
2023-02-26 07:37:05
0
831
Jim
75,570,567
6,820,121
write ANTLR regex when declaring variables list
<p>I've written a grammar rule for a language in ANTLR as below:</p> <pre><code>variable: idlist COLON type (EQUAL explist)? SEMI; idlist: identifier (COMMA identifier)*; explist: exp (COMMA exp)*; COLON: ':'; EQUAL: '='; SEMI: ';'; COMMA: ','; </code></pre> <p>This input is valid for the above grammar:</p> <pre><code>a, b, c: integer = 3, 4, 6; </code></pre> <p>But now I want this input:</p> <pre><code>a, b, c, d: integer = 3, 4, 6; </code></pre> <p>or this:</p> <pre><code>a, b, c: integer = 3, 4, 6, 1; </code></pre> <p>to become invalid because of the mismatch between the number of identifiers in <em>idlist</em> and expressions in <em>explist</em>. How do I rewrite my grammar? Thanks</p>
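[Editorial note] A context-free grammar cannot enforce that two lists have equal length, so the usual approach is to keep the grammar as-is and reject mismatches in a semantic pass, e.g. an ANTLR listener that compares the child counts of <code>idlist</code> and <code>explist</code>. A toy stand-in for that check, using plain string splitting instead of a generated parser:

```python
def declaration_counts_match(decl: str) -> bool:
    """Toy semantic check: len(idlist) must equal len(explist).

    A real implementation would walk the ANTLR parse tree and compare
    ctx.idlist().identifier() with ctx.explist().exp() instead.
    """
    head, _, tail = decl.partition(":")
    identifiers = [s.strip() for s in head.split(",")]
    _, eq, rhs = tail.partition("=")
    if not eq:                      # the initializer part is optional in the grammar
        return True
    expressions = [s.strip() for s in rhs.strip().rstrip(";").split(",")]
    return len(identifiers) == len(expressions)
```

In a generated listener the same comparison would run in `exitVariable`, raising or collecting an error when the counts differ.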
<python><regex><antlr><antlr4>
2023-02-26 07:33:39
2
621
Duy Duy
75,570,533
2,315,911
Python pandas groupby: how to use variables in different columns to create a new one
<p>Consider the following <code>DataFrame</code>:</p> <pre><code>df = pd.DataFrame({'c0':['1980']*3+['1990']*2+['2000']*3, 'c1':['x','y','z']+['x','y']+['x','y','z'], 'c2':range(8) }) c0 c1 c2 0 1980 x 0 1 1980 y 1 2 1980 z 2 3 1990 x 3 4 1990 y 4 5 2000 x 5 6 2000 y 6 7 2000 z 7 </code></pre> <p>I want to do the following using <code>pandas</code>'s <code>groupby</code> over <code>c0</code>:</p> <ol> <li>Group rows based on <code>c0</code> (indicate year).</li> <li>In each group, subtract the value of <code>c2</code> for <code>y</code> (in <code>c1</code>) from the values of <code>c2</code>.</li> <li>Add a new column <code>c3</code> collecting those values.</li> </ol> <p>The final result is</p> <pre><code> c0 c1 c2 c3 0 1980 x 0 -1 1 1980 y 1 0 2 1980 z 2 1 3 1990 x 3 -1 4 1990 y 4 0 5 2000 x 5 -1 6 2000 y 6 0 7 2000 z 7 1 </code></pre> <p>I was able to get the result without <code>groupby</code> like the following:</p> <pre><code>dic = {} for yr in df['c0'].unique(): cond1 = ( df['c0']==yr ) tmp = df.loc[cond1,:].copy() cond2 = ( tmp['c1']=='y' ) val = tmp.loc[cond2,'c2'].to_numpy() tmp['c3'] = tmp['c2'] - val dic[yr] = tmp pd.concat([dic['1980'],dic['1990'],dic['2000']]) </code></pre> <p>It works but does not look great. I tried <code>transform</code> and <code>apply</code> for <code>groupby</code>, but could not figure it out. Any help would be appreciated.</p>
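[Editorial note] A vectorised alternative to the per-year loop, assuming (as in the example) exactly one <code>'y'</code> row per year: look each year's <code>'y'</code> value up once, then broadcast it back with <code>map</code>:

```python
import pandas as pd

df = pd.DataFrame({'c0': ['1980']*3 + ['1990']*2 + ['2000']*3,
                   'c1': ['x', 'y', 'z'] + ['x', 'y'] + ['x', 'y', 'z'],
                   'c2': range(8)})

# value of c2 on the 'y' row of each year (assumes one 'y' row per year)
y_per_year = df.loc[df['c1'] == 'y'].set_index('c0')['c2']

df['c3'] = df['c2'] - df['c0'].map(y_per_year)
```

The same result can be written with `df.groupby('c0')['c2'].transform(...)`, but the `map` lookup avoids a Python-level callable and reproduces the loop's output directly.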
<python><pandas><dataframe><group-by>
2023-02-26 07:24:30
4
1,300
Spring
75,570,527
9,006,687
Numpyro: Error when using MCMC with a model that uses scan
<p>I am trying to get the following model to work:</p> <pre><code>def model_dynamic(self, hemp_size_t, values_t): # Unpack the values at time t t, actions_performed = values_t # Check if harvesting are performed at time step t harvest = self.is_performed(&quot;harvest-hemp&quot;, actions_performed) # Compute the states at time t + 1 hemp_size_t1 = deterministic(f&quot;hemp_size&quot;, (hemp_size_t + 0.05 * hemp_can_grow_t) * (1 - harvest)) # Compute the yield at time t test = deterministic(&quot;yield_test&quot;, hemp_size_t * harvest) # Works ok. sample(&quot;yield&quot;, Normal(test, 0.1)) # assert rng_key is not None sample(&quot;yield&quot;, Gamma(1, 0.1), sample_shape=(1, 1, )) # assert rng_key is not None sample(&quot;yield&quot;, Normal(hemp_size_t * harvest, 0.1)) # assert rng_key is not None return hemp_size_t1, None def model(self, *args, **kwargs): # Sample the initial hemp size hemp_size = jnp.zeros((1, 1)) # Create a vector of time indices time_indices = jnp.expand_dims(jnp.expand_dims(jnp.arange(0, len(self.policy)), axis=1), axis=2) # Call the scan function that unroll the model over time scan(self.model_dynamic, hemp_size, (time_indices, self.policy)) </code></pre> <p>The only observed variable is the yield, and as soon I try to sample it from a distribution I get the following error that seems to come from within the scan function:</p> <pre><code>File &quot;/usr/local/lib/python3.10/dist-packages/numpyro/contrib/control_flow/scan.py&quot;, line 47, in _subs_wrapper assert rng_key is not None AssertionError </code></pre> <p>In terms of infernce, I am using MCMC as follows:</p> <pre><code>prng = jax.random.PRNGKey(0) prng, _rng_key = random.split(prng) cond_model = numpyro.handlers.condition(model, data=data) mcmc = numpyro.infer.MCMC(self.kernel, num_chains=4, num_samples=1000, num_warmup=1000) mcmc.run(rng_key=_rng_key) </code></pre> <p><strong>I found the following fix:</strong> passing a rng_key to the sample function fixes the error.</p> 
<pre><code>rng_key = jax.random.PRNGKey(0) sample(&quot;yield&quot;, Gamma(1, 0.1), sample_shape=(1, 1, ), rng_key=rng_key) </code></pre> <p>However, I still don't understand why. I have other models that are built similarly but do not need the rng_key to be provided to the sample function.</p>
<python><numpyro>
2023-02-26 07:23:33
0
461
Theophile Champion
75,570,474
14,749,391
How to create a reusable postgresql connection in a multithreaded python application
<p>I'm working on a multithreaded project that makes many database queries but the method now is creating a new database connection in thread every time before executing the queries and closing it after. As this is heavy operation I need a way to create a connection once and reuse it multiple times through the app. I've read about the threaded connection pool class of psycopg2, but I need help implementing it.</p> <p>Consider this code, got the idea from <a href="https://stackoverflow.com/questions/69145633/how-to-initialize-a-database-connection-only-once-and-reuse-it-in-run-time-in-py">here</a>:</p> <pre><code>class Database(object): _instance = None def __new__(cls, *args, **kwagrs): if cls._instance is None: print(&quot;none&quot;) cls._instance = super().__new__(cls) cls._instance.initialize_pool(*args, **kwagrs) return cls._instance def initialize_pool(self, *args, **kwagrs): self.db_pool = ThreadedConnectionPool(maxconn=5, minconn=1, *args, **kwagrs) def get_connection(self): try: return self.db_pool.getconn() except Exception as e: print(e) def return_connection(self, conn): self.db_pool.putconn(conn) def insert_query(self, query): connection = self.get_connection() cursor = connection.cursor() cursor.execute(query) result = cursor.fetchone()[0] self.return_connection(connection) def update_query(self, query): connection = self.get_connection() cursor = connection.cursor() cursor.execute(query) result = cursor.fetchone()[0] self.return_connection(connection) </code></pre> <p>The methods are then used by another class like so:</p> <pre><code>from .db import Database import threading class MainApp: self.records = [list of records ids] self.conn_str = &quot;postgresql_connection_string&quot; self.db = Database(self.conn_str) def insert_into_db(self, query): thread = threading.Thread(target=self.db.insert_query, args=query) thread.start() def update_records(self, query): for record in self.records: thread = threading.Thread(target=self.db.update_query, 
args=query) thread.start() app = MainApp() query = &quot;INSERT ...&quot; app.insert_into_db(query) update_query = &quot;UPDATE ...&quot; app.update_records(update_query) </code></pre> <p>My main problem is that in <code>update_query</code> I receive a <code>pool exhausted</code> error, so nothing gets updated. How do I solve it?</p>
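[Editorial note] Two things in the code above can exhaust the pool: connections are never returned when <code>cursor.execute</code> raises (there is no <code>try/finally</code>), and one thread is spawned per record, so more than <code>maxconn</code> threads call <code>getconn()</code> at once and <code>ThreadedConnectionPool</code> raises <code>PoolError</code> instead of blocking. A sketch of a wrapper addressing both; the pool object is injected so the class itself stays database-agnostic:

```python
import threading
from contextlib import contextmanager

class Database:
    """Hands out pooled connections safely across threads.

    `pool` is expected to expose getconn()/putconn(), e.g. a
    psycopg2.pool.ThreadedConnectionPool; the semaphore makes callers
    block until a connection is free instead of hitting PoolError.
    """

    def __init__(self, pool, maxconn):
        self.pool = pool
        self._slots = threading.Semaphore(maxconn)

    @contextmanager
    def connection(self):
        self._slots.acquire()            # wait for a free connection
        conn = self.pool.getconn()
        try:
            yield conn
            conn.commit()
        except Exception:
            conn.rollback()
            raise
        finally:
            self.pool.putconn(conn)      # returned even on error
            self._slots.release()

    def execute(self, query, params=None):
        with self.connection() as conn:
            with conn.cursor() as cur:
                cur.execute(query, params)
```

With this shape, `insert_query`/`update_query` become calls to `execute`, and spawning more threads than `maxconn` simply queues them instead of exhausting the pool.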
<python><postgresql>
2023-02-26 07:06:23
0
772
Tony
75,570,422
4,357,210
How do I import a .pyd module written in C++ to Python using PyBind11
<p>I am working on Windows 10 with Python <code>3.9.7</code> and have anaconda setup on my laptop. I have compiled a C++ code <code>calcSim.cpp</code> where the module name is <code>calJaccSimm</code> and am able to successfully generate a .pyd file with the following extension <strong>.cp39-win_amd64.pyd</strong> as described <a href="https://stackoverflow.com/questions/60699002/how-can-i-build-manually-c-extension-with-mingw-w64-python-and-pybind11">here</a> .</p> <p>I am launching my jupyter notebook at the following location: jupyter-notebook <strong>D:\projects\sem4\code</strong> and my <code>.pyd</code> file named calcSim.cp39-win_amd64.pyd is at the same location.</p> <p>When trying to import module using:</p> <p><code>import calJaccSimm</code> I am getting <code>ModuleNotFoundError</code>.</p> <p>I have tried the following things:</p> <ol> <li><p><code>import sys</code></p> <p><code>sys.path.insert(0, 'D:\projects\sem4\code')</code></p> </li> <li><p><code>import os</code></p> <p><code>os.dll_directory(&quot;D:\projects\sem4\code&quot;)</code></p> </li> <li><p>Setup environment variables with the path '&quot;D:\projects\sem4\code&quot;'</p> </li> <li><p>Tried putting the generated <code>.pyd</code> in different locations like anaconda\DLLs and anaconda\lib\site-packages folder.</p> </li> </ol> <p>But after all this, I am still not able to load the module. Please help.</p> <p>Edit 1: I had multiple versions of python on my machine. I deleted all the versions and re-installed anaconda as well. Still facing the same issue.</p>
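[Editorial note] A likely cause, based on the details given: the file stem (<code>calcSim</code>) and the module name compiled into it (<code>calJaccSimm</code>) disagree. Python can only locate an extension module whose file name is the import name plus a valid extension suffix, and the name declared in <code>PYBIND11_MODULE(name, m)</code> must equal that stem, otherwise the init symbol is not found. So either rename the output to <code>calJaccSimm.cp39-win_amd64.pyd</code> or declare <code>PYBIND11_MODULE(calcSim, m)</code>; they have to agree. The valid suffixes on a given interpreter can be inspected:

```python
import importlib.machinery

def importable_name(filename):
    """Return the module name a compiled-extension file can be imported as,
    or None if it does not end in a valid extension suffix on this platform."""
    for suffix in importlib.machinery.EXTENSION_SUFFIXES:
        if filename.endswith(suffix):
            return filename[: -len(suffix)]
    return None

# On CPython 3.9 for Windows, EXTENSION_SUFFIXES includes '.cp39-win_amd64.pyd',
# so calcSim.cp39-win_amd64.pyd resolves to the module name `calcSim`,
# not `calJaccSimm`.
```

This check is independent of `sys.path` tweaks, which only control *where* Python searches, not *what name* the file answers to.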
<python><windows-10><python-3.9><pybind11>
2023-02-26 06:51:28
1
3,116
tushaR
75,570,417
2,218,321
How the errors of each epoch must look like in machine learning?
<p>This is the first model I created in PyTorch and I'm new to this field. I ran the model for a few epochs and this is the output of errors for each epoch for two runs of the algorithm.</p> <p><a href="https://i.sstatic.net/7x5Ue.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/7x5Ue.jpg" alt="enter image description here" /></a> <a href="https://i.sstatic.net/BOra9.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/BOra9.jpg" alt="enter image description here" /></a></p> <p>I also tried 100 epochs, and this is the result</p> <p><a href="https://i.sstatic.net/CBBeZ.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/CBBeZ.png" alt="enter image description here" /></a></p> <p>The iterations denote the epochs and error is the error rate of the last input training sample for each epoch. I would like to know if the output is OK, or it must be descending? Here is the code with dataset <a href="https://www.kaggle.com/datasets/kumargh/pimaindiansdiabetescsv" rel="nofollow noreferrer">pima-indians-diabetes.csv</a>. 
<strong>If there is anything wrong with my code, I would appreciate it if you pointed it out.</strong></p> <blockquote> <p>I also tried smaller learning rates, but nothing happened.</p> </blockquote> <p>This is the file <code>dataset.py</code>:</p> <pre><code>import pandas as pd import torch from sklearn.preprocessing import StandardScaler class DataSet: divide_rate = 0.8 file = './pima-indians-diabetes.csv' def __init__(self): dataframe = pd.read_csv(self.file) train_size = int(self.divide_rate * len(dataframe)) train_set = dataframe.iloc[:train_size, :] train_label = train_set['label'] train_feature = train_set.loc[:, train_set.columns != 'label'] sc = StandardScaler() train_feature = sc.fit_transform(train_feature) self.train_labels = torch.tensor(train_label.values, dtype=torch.float32) self.train_features = torch.tensor(train_feature, dtype=torch.float32) def __getitem__(self, index): return self.train_features[index], self.train_labels[index] def __len__(self): return len(self.train_features) </code></pre> <p>and this is the main file:</p>
model(feature) loss = loss_fn(pred, label.unsqueeze(1)) loss.backward() optimizer.step() optimizer.zero_grad() ll = loss.item() loss_per_epoch.append(ll) sns.lineplot(x=x, y=loss_per_epoch, color='green', linewidth=1.5) plt.ylabel('Error') plt.xlabel('Iteration') </code></pre> <p>As <a href="https://stackoverflow.com/users/21281213/m-h">@M H</a> mentioned, I added the average error of each epoch and this is the result</p> <p><a href="https://i.sstatic.net/aSU6v.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/aSU6v.png" alt="enter image description here" /></a></p>
<python><machine-learning><pytorch>
2023-02-26 06:50:27
1
2,189
M a m a D
75,570,151
13,194,245
Convert Pandas dataframe to an api format
<p>I have the following pandas dataframe:</p> <pre><code>import pandas as pd df = pd.DataFrame({'Year': [2023, 2024, 2022,2022, 2023,2025,2024,2025,2022,2023],'Name': ['John', 'Jim', 'John','Jim', 'John','Jim','John','Jim','John','Jim'] ,'Month': ['Jan', 'Jan', 'Feb','Jan', 'Jan','Feb','Jan','Jan','Feb','Feb'],'Day': [1, 2, 5,6, 7,4,8,24,13,15]}) df </code></pre> <p>which looks like:</p> <p><a href="https://i.sstatic.net/FBqZL.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/FBqZL.png" alt="enter image description here" /></a></p> <p>I am trying to convert this to a dictionary/API format so it can be used in a <code>get</code> request.</p> <p>I need it in a format that is grouped by <code>Name</code>, then <code>Year</code>, then <code>Month</code>, and the values are <code>Day</code>. My expected front-end output is this</p> <p><a href="https://i.sstatic.net/H7dqS.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/H7dqS.png" alt="enter image description here" /></a></p> <p>Any help creating the dictionary/API view to achieve this would be fantastic. Thanks very much</p>
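One way to build the nested structure described above (a sketch, not from the original question: `Name -> Year -> Month -> [Day]`, ready for `json.dumps` in a GET response) is nested `groupby` calls:

```python
import pandas as pd

df = pd.DataFrame({
    'Year': [2023, 2024, 2022, 2022, 2023, 2025, 2024, 2025, 2022, 2023],
    'Name': ['John', 'Jim', 'John', 'Jim', 'John', 'Jim', 'John', 'Jim', 'John', 'Jim'],
    'Month': ['Jan', 'Jan', 'Feb', 'Jan', 'Jan', 'Feb', 'Jan', 'Jan', 'Feb', 'Feb'],
    'Day': [1, 2, 5, 6, 7, 4, 8, 24, 13, 15],
})

# Name -> Year -> Month -> list of Day values
result = {
    name: {
        int(year): {month: grp['Day'].tolist()
                    for month, grp in by_year.groupby('Month')}
        for year, by_year in by_name.groupby('Year')
    }
    for name, by_name in df.groupby('Name')
}

print(result['John'][2022]['Feb'])   # [5, 13]
```

The `int(year)` cast turns numpy integers into plain Python ints so the dict serializes cleanly as JSON.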
<python><pandas>
2023-02-26 05:28:50
0
1,812
SOK
75,569,990
1,503,669
How can I use Python to get an image URL that is dynamically loaded after the page loads, when I can't find the URL in any API call?
<p>My target is to get the full image URL and download this picture.</p> <p>Page URL: <a href="https://allinone.pospal.cn/m#/details/6223421" rel="nofollow noreferrer">https://allinone.pospal.cn/m#/details/6223421</a></p> <p>The challenge is that this URL is dynamically loaded after the page is loaded.</p> <p>I can't find this information inside any of the API calls. I want to know how this piece of information is being loaded into the page.</p> <p>And what is the best way to get this full image URL? Is Selenium the only way to do it?</p> <p>In the page source there is no content inside yb-details-wrapper, so I can't get anything using BeautifulSoup alone.</p> <pre><code> &lt;div id=&quot;detailsView&quot; data-route=&quot;details&quot; class='yb-page'&gt; &lt;div class=&quot;yb-page-inner&quot;&gt; &lt;div class=&quot;yb-header-home yb-header-tpl weui-flex&quot;&gt; &lt;img class=&quot;yb-logo&quot; src=&quot;//imgw.pospal.cn/we/weidian/img/iconsV2/store.png&quot; onerror=&quot;this.onerror = null; this.src = '//imgw.pospal.cn/we/weidian/img/iconsV2/store.png'&quot; alt=&quot;&quot; /&gt; &lt;span class=&quot;yb-store-name&quot;&gt;...&lt;/span&gt; &lt;span class=&quot;yb-filler&quot;&gt;&lt;/span&gt; &lt;a href=&quot;#/search&quot; class=&quot;yb-search-lnk&quot;&gt;&lt;img src=&quot;//imgw.pospal.cn/we/weidian/img/iconsV2/searchBlack@2x.png&quot; alt=&quot;&quot; /&gt;&lt;/a&gt; &lt;/div&gt; &lt;div class=&quot;yb-details-wrapper&quot;&gt; &lt;/div&gt; &lt;/div&gt; </code></pre> <p>The target information was loaded dynamically somehow...</p> <pre><code>&lt;div class=&quot;yb-details-wrapper&quot; style=&quot;&quot;&gt; &lt;div class=&quot;yb-details&quot; p_uid=&quot;6223421&quot;&gt; &lt;div class=&quot;yb-details-inner&quot;&gt; &lt;div class=&quot;yb-details__hd&quot;&gt; &lt;div class=&quot;yb-details-photo&quot;&gt; &lt;img src=&quot;https://img.pospal.cn/productImages/4560246/1b7f3337-f67b-43c1-9b9c-b2817bdfd0e9_640x640.jpeg&quot; 
onerror=&quot;_yb.errProdImg(this)&quot; alt=&quot;&quot;&gt; &lt;/div&gt; &lt;/div&gt; &lt;div class=&quot;yb-details__bd&quot;&gt; &lt;div class=&quot;yb-details-sharebtn&quot;&gt; &lt;img src=&quot;//imgw.pospal.cn/we/mini/image/icons/share.png&quot; class=&quot;yb-details-shareicon&quot;&gt;Share &lt;/div&gt; &lt;div class=&quot;yb-details-title&quot;&gt;新裝 馬爹利XO干邑(拱橋)700ml行貨&lt;/div&gt; &lt;div class=&quot;yb-details-price&quot;&gt;MOP1360.00 &lt;del&gt;MOP1460.00&lt;/del&gt;&lt;/div&gt; &lt;div class=&quot;yb-details-numbers&quot;&gt;&lt;/div&gt; &lt;/div&gt; &lt;/div&gt; &lt;div class=&quot;yb-details-promote&quot; id=&quot;YbPromote&quot;&gt;&lt;/div&gt; &lt;/div&gt; &lt;/div&gt; </code></pre> <p><a href="https://i.sstatic.net/0cDsQ.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/0cDsQ.png" alt="enter image description here" /></a></p> <p><a href="https://i.sstatic.net/UT3Wb.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/UT3Wb.png" alt="enter image description here" /></a></p> <p><a href="https://i.sstatic.net/iZSIU.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/iZSIU.png" alt="enter image description here" /></a></p>
<python><selenium-webdriver><web-scraping><beautifulsoup>
2023-02-26 04:21:51
1
513
Panco
75,569,941
572,575
How to find the best number of layers and best hidden layer size of an MLP in GridSearchCV
<p>I want to use GridSearchCV to find the best number of layers and the best hidden layer size. I tried to set the hidden layer sizes as a list to find the best number of hidden layers.</p> <pre><code> df = pd.read_csv(r'C:\\Users\data.csv') X = df.iloc[:,:20] Y = df.iloc[:,20] X_train, X_test, y_train, y_test = train_test_split(X, Y, test_size=0.2) params = { 'estimator__hidden_layer_sizes': list(range([list(range(10,200,10))])) } clf = GridSearchCV( estimator=MLPClassifier(), param_grid=params, cv=5, n_jobs=5, verbose=1, ) clf.fit(X_train, y_train) print(clf.best_params_) </code></pre> <p>I get an error like this.</p> <pre><code>TypeError: 'list' object cannot be interpreted as an integer </code></pre> <p>How do I fix it?</p>
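For what it's worth, in scikit-learn `hidden_layer_sizes` expects a tuple of layer widths per candidate (one entry per hidden layer), so the grid needs to be a list of tuples rather than nested `range` calls; the `estimator__` prefix is only needed when the classifier is wrapped inside another estimator such as a `Pipeline`. A sketch (the candidate widths below are illustrative choices, not from the question):

```python
# Each candidate is a tuple of layer widths; tuple length = number of hidden layers.
one_layer = [(h,) for h in range(10, 200, 10)]                      # (10,), (20,), ..., (190,)
two_layer = [(h1, h2) for h1 in (50, 100, 150) for h2 in (25, 50)]  # a few 2-layer combos

params = {'hidden_layer_sizes': one_layer + two_layer}

print(len(params['hidden_layer_sizes']))   # 19 + 6 = 25 candidate architectures
```

This `params` dict can then be passed directly as `param_grid` to `GridSearchCV(estimator=MLPClassifier(), ...)`.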
<python><pandas><scikit-learn><grid-search>
2023-02-26 04:02:48
1
1,049
user572575
75,569,885
10,773,854
Error of Graph disconnected is raised when I create an autoencoder
<p>I am trying to create an autoencoder, but I get this error <code>raise ValueError( ValueError: Graph disconnected: cannot obtain value for tensor KerasTensor(type_spec=TensorSpec(shape=(None, 32), dtype=tf.float32, name='input_2'), name='input_2', description=&quot;created by layer 'input_2'&quot;) at layer &quot;dense_2&quot;. The following previous layers were accessed without issue: [] </code></p> <p>I need to use <code>AE.add_loss(recon_loss)</code> to add the loss function of my autoencoder. Here is the code I used to build that autoencoder:</p> <pre><code>from keras.layers import Lambda, Reshape, Dense, Input, Activation from keras.losses import mse from keras.models import Model from tensorflow.keras.optimizers import Adam ini = 'glorot_uniform' &quot;&quot;&quot; build autoencoder model&quot;&quot;&quot; # encoder X = Input(shape=(64,)) # output of encoder or transmitted vector # encoder enc_1 = Dense(256, activation='relu', kernel_initializer=ini)(X) enc_2 = Dense(32, activation='relu', kernel_initializer=ini)(enc_1) U = Lambda(lambda x: np.sqrt(32) * K.l2_normalize(x, axis=1))(enc_2) # decoder X_en = Input(shape=(32,)) dec_1 = Dense(512, activation='relu', kernel_initializer=ini)(X_en) dec_2 = Dense(128, activation='relu', kernel_initializer=ini)(dec_1) dec_3 = Dense(64, activation='linear', kernel_initializer=ini)(dec_2) X_hat = dec_3 # model enc and dec encoder = Model(X, U) decoder = Model(X_en, X_hat) # Define the autoencoder model autoencoder_input = X autoencoder_output = decoder(U) AE = Model(autoencoder_input, autoencoder_output) &quot;&quot;&quot;Training&quot;&quot;&quot; # loss design recon_loss = mse(X, X_hat) ### training ### AE.add_loss(recon_loss) AE.compile(optimizer=Adam(lr=0.001)) </code></pre>
<python><deep-learning><autoencoder>
2023-02-26 03:44:22
0
337
New_student
75,569,836
99,989
In Python, is there a way to do a partial application with types?
<p>I'm trying to understand how to annotate a function that may be a method:</p> <pre class="lang-py prettyprint-override"><code>from typing import Callable, Generic, Protocol, TypeVar, overload, Any from typing_extensions import ParamSpec, Self, Concatenate V_co = TypeVar(&quot;V_co&quot;, covariant=True) U = TypeVar(&quot;U&quot;, contravariant=True) P = ParamSpec(&quot;P&quot;) class Wrapped(Protocol, Generic[P, V_co]): def __call__(self, /, *args: P.args, **kwargs: P.kwargs) -&gt; V_co: ... class MethodWrapped(Protocol, Generic[U, P, V_co]): def __call__(self, u: U, /, *args: P.args, **kwargs: P.kwargs) -&gt; V_co: ... def __get__(self, instance: Any, owner: Any = None) -&gt; Wrapped[P, V_co]: ... @overload def jit(f: Callable[Concatenate[U, P], V_co]) -&gt; MethodWrapped[U, P, V_co]: # type: ignore pass @overload def jit(f: Callable[P, V_co]) -&gt; Wrapped[P, V_co]: # pyright: ignore pass def jit(f: Callable[..., Any]) -&gt; Any: pass class X: @jit def f(self, x: int) -&gt; None: pass @jit def g(x: int, y: float) -&gt; None: pass x = X() x.f(3) x.f(x=3) g(3, 4.2) g(x=3, y=4.2) # Fails! reveal_type(x.f) reveal_type(g.__call__) </code></pre> <p>Essentially, if <code>jit().__get__</code> is called with an instance, it should <em>partially apply</em> the function using that instance as the first argument. Otherwise, it should return a callable with the same signature as the function.</p> <p>(This is a question about annotations only.)</p>
<python><python-typing>
2023-02-26 03:28:08
0
33,551
Neil G
75,569,735
10,184,226
Given that f(x) is a function satisfying f(1)=1 and for any x belongs to R, f(x+5) >= f(x) + 5 and f(x+1)<=f(x)+1. if g(x) = f(x) +5 - x, find g(2022)
<p>I am struggling to solve this problem; I couldn't understand the mathematics behind it, and it would be really helpful if anyone could help me with it.</p> <p>I have been trying to think it through continuously for 12 hours now, but it's getting out of my head :( please help me.</p> <p>And I want to solve this using Python.</p> <p>Here is what I have done -</p> <pre><code>def f(x): return x # Define the function g def g(x): return f(x) + 5 - x # Find g(2022) g_2022 = g(2022) # Print the result print(&quot;g(2022) =&quot;, g_2022) </code></pre>
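For what it's worth, here is one way to reason about the mathematics (a sketch of the standard squeeze argument, not part of the original question): applying f(x+1) <= f(x)+1 five times gives f(x+5) <= f(x)+5, and combined with f(x+5) >= f(x)+5 this forces f(x+1) = f(x)+1 for every x. With f(1) = 1 that pins down f(n) = n on the positive integers, so g(2022) = f(2022) + 5 - 2022 = 5. The forced recurrence can be checked numerically:

```python
def f(n):
    # The two inequalities force f(x + 1) = f(x) + 1, and f(1) = 1,
    # so on the positive integers f(n) = n.
    value = 1
    for _ in range(1, n):
        value += 1          # apply the forced step f(x + 1) = f(x) + 1
    return value

def g(n):
    return f(n) + 5 - n

print(g(2022))   # 5
```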
<python>
2023-02-26 02:54:02
0
301
Arjun Ghimire
75,569,641
4,348,400
Transparent highlight in XKCD plot?
<p>I would like to use some XKCD style plots on my blog. Here is an example figure:</p> <pre class="lang-py prettyprint-override"><code>import matplotlib.pyplot as plt import numpy as np with plt.xkcd(): fig = plt.figure() ax = fig.add_axes((0.1, 0.2, 0.8, 0.7)) ax.spines[['top', 'right']].set_visible(False) ax.set_xticks([]) ax.set_yticks([]) ax.set_ylim([-30, 15]) # Plot of function x = np.linspace(-5, 5, num=100) y = x ** 3 - 10 * x ax.plot(x, y, color='y') # Tangent space ax.plot(x, 2 * x - 16, color='m') ax.text(3, -10, r'$T_pC$', color='m') ax.scatter(2, -12, color='m') ax.text(2, -12, r'$p$', color='m') plt.tight_layout() plt.savefig('/first_example_tangent_space.png', transparent=True, dpi=300) plt.close() </code></pre> <p>The problem is it doesn't look good in dark mode, revealing white highlights:</p> <p>Light mode:</p> <p><a href="https://i.sstatic.net/fCjVR.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/fCjVR.png" alt="enter image description here" /></a></p> <p>Dark mode:</p> <p><a href="https://i.sstatic.net/dP2o3.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/dP2o3.png" alt="enter image description here" /></a></p> <p>The highlights around the curves and text are fine, but the boxes are distracting. Is there a way to change the white highlights to transparent highlights for those boxes?</p>
<python><matplotlib><transparency>
2023-02-26 02:19:47
1
1,394
Galen
75,569,620
6,367,971
Splitting dataframe at underscore within a range
<p>I have rows in a dataframe that I want to split on a <em>range</em> of underscores, and save the split values into new columns.</p> <pre><code> Type Name Parent ABC_US_Test_en-us Child ABC_12252020_US_Test_Natl_en-us_Home-vs-Away Subchild break1 </code></pre> <p>For example, I want to split <code>ABC_12252020_US_Test_Natl_en-us_Home-vs-Away</code> into a column for <code>US_Test_Natl_en-us</code> and another column for <code>Home-vs-Away</code>, so that the output looks like:</p> <pre><code> Type Name Type Matchup Parent ABC_US_Test_en-us Child ABC_12252020_US_Test_Natl_en-us_Home-vs-Away US_Test_Natl_en-us Home-vs-Away Subchild break1 </code></pre> <p>Said another way, I want to take everything <em>between</em> the 2nd and 6th underscore and save that to a new column, and everything <em>after</em> the 6th underscore and save it to another new column.</p>
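A sketch of one approach (the helper name and the `(None, None)` fallback for short names are my own choices): split on `_`, join the tokens that sit between the 2nd and 6th underscores, and join the tokens after the 6th:

```python
def split_name(name):
    """Return (between 2nd and 6th underscore, after 6th underscore), or (None, None)."""
    parts = name.split('_')
    if len(parts) < 7:      # fewer than 6 underscores: nothing to extract
        return None, None
    return '_'.join(parts[2:6]), '_'.join(parts[6:])

print(split_name('ABC_12252020_US_Test_Natl_en-us_Home-vs-Away'))
# ('US_Test_Natl_en-us', 'Home-vs-Away')
print(split_name('ABC_US_Test_en-us'))   # (None, None)
```

In pandas this could then be applied row-wise, e.g. something like `df[['Type2', 'Matchup']] = df['Name'].fillna('').apply(lambda n: pd.Series(split_name(n)))`, with `Type2`/`Matchup` as placeholder column names.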
<python><pandas><dataframe>
2023-02-26 02:11:48
2
978
user53526356
75,569,573
15,775,069
How to implement a feature similar to Python's inspect module in Rust?
<p>I recently started learning Rust and this may be a very stupid question.</p> <p>I have a large-ish codebase in Python that sort of looks like this (this is all pseudocode as I can't post the original code, it's proprietary):</p> <pre><code>main_package/ main.py transforms/ __init__.py base_transform.py class1.py class2.py ... and so on </code></pre> <p><code>base_transform.py</code> contains a <code>BaseTransform</code> parent class with some common methods. And class1.py (any class[x].py) looks something like this:</p> <pre><code>class Class1Transform(BaseTransform): id = &quot;class_1&quot; def run(data): # do some logic # return result_value (any python type) </code></pre> <p>Now, in main.py I use Python's inspect module to get all the classes in the transforms module and then use the <code>id</code> field to selectively run the class I want to run. The <code>data</code> here is obtained from a database - always a dict.</p> <p>main.py pseudocode:</p> <pre><code>import transforms all_classes = inspect.getmembers(transforms, inspect.isclass) all_classes = list(zip(*all_classes))[1] code_to_run = sys.argv[2] for class in all_classes: if class.id == code_to_run: class().run(get_data()) </code></pre> <p>And if you call main.py with <code>python main.py class_2</code>, it will call the <code>run()</code> in the <code>class2Transform</code> from <code>class2.py</code>. You can run multiple <code>class_x</code> methods one after the other, and main.py can read a JSON file containing the series of <code>id</code>s that you want to run one after the other. But I've shown the command-line usage for simplicity.</p> <p>I started learning Rust recently and I'm not sure how something like this would be implemented in Rust. Considering I have 100+ such classes, divided into 10 sub-packages (here I have shown only one transforms/ package for simplicity). 
I suppose it's a design question, so it would be useful if I could get some pointers on how to even think about a problem like this.</p>
<python><rust>
2023-02-26 01:53:56
0
361
yuser099881232
75,569,521
13,376,511
Why does math.cos(math.pi/2) not return zero?
<p>I came across some weird behavior by <code>math.cos()</code> (Python 3.11.0):</p> <pre><code>&gt;&gt;&gt; import math &gt;&gt;&gt; math.cos(math.pi) # expected to get -1 -1.0 &gt;&gt;&gt; math.cos(math.pi/2) # expected to get 0 6.123233995736766e-17 </code></pre> <p>I suspect that floating point math might play a role in this, but I'm not sure how. And if it did, I'd assume Python just checks if the parameter equaled <code>math.pi/2</code> to begin with.</p> <p>I found <a href="https://stackoverflow.com/a/8050751/13376511">this answer</a> by Jon Skeet, who said:</p> <blockquote> <p>Basically, you shouldn't expect binary floating point operations to be exactly right when your inputs can't be expressed as exact binary values - which pi/2 can't, given that it's irrational.</p> </blockquote> <p>But if this is true, then <code>math.cos(math.pi)</code> shouldn't work either, because it also uses the <code>math.pi</code> approximation. My question is: why does this issue only show up when <code>math.pi/2</code> is used?</p>
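The short answer (a sketch of the standard explanation, consistent with the linked answer): `math.pi` is the closest double to the true pi, about 1.22e-16 too small, and what happens to that input error depends on the slope of cos at the point being evaluated:

```python
import math

# At x = pi the slope of cos is 0, so the tiny input error only moves the
# result by ~err**2 / 2 (~7.5e-33) -- far too small to pull the float off -1.0.
print(math.cos(math.pi))        # -1.0

# At x = pi/2 the slope of cos is -1, so the input error of ~6.12e-17
# passes straight through to the output instead of vanishing.
print(math.cos(math.pi / 2))    # ~6.123e-17, tiny but nonzero
```

So both results use the same approximation of pi; the difference is that the error is squashed to invisibility at pi (a flat point of the curve) and shows up at full size at pi/2 (a steep point).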
<python><floating-point><trigonometry><pi>
2023-02-26 01:35:36
3
11,160
Michael M.
75,569,496
2,118,290
Strange Behavior With python FastAPI and transmission for a URL Encoded SHA1
<p>So I was trying to put together my own bittorrent announce server in Python, and I've run into a bit of a snag. For some reason I can't seem to get FastAPI to decode the URL-encoded SHA1 field of info_hash correctly.</p> <p>As an example, I'm using the Ubuntu torrent file, and receive a request like so:</p> <pre><code>GET /announce?info_hash=%99%c8%2b%b75%05%a3%c0%b4S%f9%fa%0e%88%1dnZ2%a0%c1 </code></pre> <p>I get that it's a URL-encoded SHA1 when I tried to dump it as a string:</p> <pre><code>from fastapi import FastAPI app = FastAPI() @app.get(&quot;/announce&quot;) async def announce(info_hash: str): print(info_hash) #prints: &quot;��+�5���S���nZ2��&quot; </code></pre> <p>So I tried to pull it as raw bytes, and then re-encode it.</p> <pre><code>@app.get(&quot;/announce&quot;) async def announce(info_hash: bytes): print(info_hash) #prints: b'\xef\xbf\xbd\xef\xbf\xbd+\xef\xbf\xbd5\x05\xef\xbf\xbd\xef\xbf\xbd\xef\xbf\xbdS\xef\xbf\xbd\xef\xbf\xbd\x0e\xef\xbf\xbd\x1dnZ2\xef\xbf\xbd\xef\xbf\xbd' </code></pre> <p>That byte array looks all sorts of wrong to me. I don't get how 3 bytes (\xef\xbf\xbd) become 9.</p> <p>Even looking at the raw input to compare it, it's not making a whole lot of sense.</p> <pre><code>SHA1 99 c8 2b b73505 a3 c0 b453f9 fa 0e 88 1d6e5a32a0c1 URL %99%c8%2b%b75%05%a3%c0%b4S%f9%fa%0e%88%1dnZ2%a0%c1 </code></pre> <p>It looks like some of the bytes are encoded as ASCII, and anything outside of the ASCII range is encoded as a URL-encoded byte? IE: Hex(35) = 5</p> <p>It looks like somehow, under the hood, FastAPI is decoding the characters directly, and I've searched through the documentation and I can't figure out how to get it to stop.</p> <p>What is going on, and how do I fix it?</p>
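What appears to be happening (hedged, inferred from the bytes shown): the query value is decoded as UTF-8 text, and every byte that is not valid UTF-8 becomes the three-byte replacement character `\xef\xbf\xbd`, which is why single bytes balloon into three. Literal characters like `5`, `S`, `n`, `Z`, `2` are simply bytes in the printable ASCII range that the encoder left unescaped. A standard-library sketch of recovering the raw 20 bytes; in FastAPI one would apply this to the raw query string (e.g. `request.scope['query_string']`) rather than to a declared `str` parameter:

```python
from urllib.parse import unquote_to_bytes

raw = '%99%c8%2b%b75%05%a3%c0%b4S%f9%fa%0e%88%1dnZ2%a0%c1'
info_hash = unquote_to_bytes(raw)   # percent-decode with no text codec involved

print(len(info_hash))    # 20 -- a full SHA1 digest
print(info_hash.hex())   # 99c82bb73505a3c0b453f9fa0e881d6e5a32a0c1
```

The recovered hex matches the SHA1 bytes listed in the "Raw input" comparison above.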
<python><fastapi><bittorrent>
2023-02-26 01:29:23
0
674
Steven Venham
75,569,433
14,029,775
Dictionary as argument for groupby and aggregate dataframe in Python
<p>How can I use a dictionary as the argument for dataframe groupby and aggregate?</p> <p>For example, instead of hard coding the arguments like below...</p> <pre><code>grouped_multiple = df.groupby('experience_level','job_title').agg({'salary': ['mean', 'min', 'max']}) </code></pre> <p>I would like to use the values of dictionaries as the arguments.</p> <pre><code>a={&quot;1&quot;:&quot;experience_level&quot;, &quot;2&quot;:&quot;job_title&quot;} b={&quot;1&quot;:&quot;salary&quot;} grouped_multiple = df.groupby(a).agg({b: ['mean', 'min', 'max']}) </code></pre>
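`groupby` accepts a list of column names, so one option (a sketch on toy data, since the real frame isn't shown) is to unpack the dictionaries' values first; passing the dict itself, as in `df.groupby(a)`, is treated by pandas as a mapping of index labels to group keys, not as a list of column names:

```python
import pandas as pd

a = {"1": "experience_level", "2": "job_title"}
b = {"1": "salary"}

df = pd.DataFrame({
    'experience_level': ['junior', 'junior', 'senior'],
    'job_title': ['dev', 'dev', 'dev'],
    'salary': [10, 20, 40],
})

group_cols = list(a.values())                                   # ['experience_level', 'job_title']
agg_spec = {col: ['mean', 'min', 'max'] for col in b.values()}  # {'salary': ['mean', 'min', 'max']}

grouped_multiple = df.groupby(group_cols).agg(agg_spec)
print(grouped_multiple.loc[('junior', 'dev'), ('salary', 'mean')])   # 15.0
```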
<python><pandas><dataframe>
2023-02-26 01:09:45
1
365
Jonathan Chen
75,568,963
219,153
What is the fastest way to get the number of neighbor nodes with NetworkX?
<p>Here is the benchmark testing a few options:</p> <pre><code>import numpy as np, timeit as ti, networkx as nx from more_itertools import ilen g = nx.ladder_graph(1000) n = g.number_of_nodes() fun = f'sum(len(g.adj[i]) for i in range(n))' t = 1000 * np.array(ti.repeat(stmt=fun, setup='', globals=globals(), number=1, repeat=100)) print(f'{fun}: {np.amin(t):6.3f}ms {np.median(t):6.3f}ms ') fun = f'sum(len(g[i]) for i in range(n))' t = 1000 * np.array(ti.repeat(stmt=fun, setup='', globals=globals(), number=1, repeat=100)) print(f'{fun}: {np.amin(t):6.3f}ms {np.median(t):6.3f}ms ') fun = f'sum(sum(1 for _ in g.neighbors(i)) for i in range(n))' t = 1000 * np.array(ti.repeat(stmt=fun, setup='', globals=globals(), number=1, repeat=100)) print(f'{fun}: {np.amin(t):6.3f}ms {np.median(t):6.3f}ms ') fun = f'sum(ilen(g.neighbors(i)) for i in range(n))' t = 1000 * np.array(ti.repeat(stmt=fun, setup='', globals=globals(), number=1, repeat=100)) print(f'{fun}: {np.amin(t):6.3f}ms {np.median(t):6.3f}ms ') </code></pre> <p>and the results with Python 3.10.6 on my PC:</p> <pre><code>sum(len(g.adj[i]) for i in range(n)): 0.733ms 0.738ms sum(len(g[i]) for i in range(n)): 0.847ms 0.884ms sum(sum(1 for _ in g.neighbors(i)) for i in range(n)): 0.964ms 0.977ms sum(ilen(g.neighbors(i)) for i in range(n)): 1.335ms 1.362ms </code></pre> <p><code>len(g.adj[i])</code> is the fastest method I found. Is there a faster way to get the number of neighbor nodes with NetworkX?</p> <p>Note that I'm checking the number of neighbors for each node of the graph sequentially, which is my use case scenario. I much prefer this method than a single node micro-benchmark.</p>
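For comparison, NetworkX also exposes degrees directly through the `degree` view; for an undirected simple graph this agrees with `len(g.adj[i])`, though it is not benchmarked against the timings above:

```python
import networkx as nx

g = nx.ladder_graph(3)                      # 6 nodes, 7 edges
n = g.number_of_nodes()

via_adj = sum(len(g.adj[i]) for i in range(n))
via_degree = sum(d for _, d in g.degree())  # DegreeView yields (node, degree) pairs

print(via_adj, via_degree)                  # both 14, i.e. 2 * number_of_edges
```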
<python><performance><networkx>
2023-02-25 22:57:28
0
8,585
Paul Jurczak
75,568,863
1,219,317
How to get varying shades of red colour based on an input value in python?
<p>In Python, how can I get varying shades of the red/yellow colour based on an input value between -1 and 1? If the value is between -1 and 0, the function should return the yellow colour. If the value is between 0 and 1, the function should return various levels or shades of the red colour. More specifically, if the value is 0, then the colour returned should be the standard red colour #FF0000, but if it is 1, then the returned value should be vanishing light red.</p> <p>My code below produces colours that are not always red (or do not appear to be red).</p> <pre><code>import colorsys import matplotlib.colors as mc def get_red_shade(value): if value &lt; -1 or value &gt; 1: raise ValueError(&quot;Input value must be between -1 and 1&quot;) if value &lt;= 0: # For values between -1 and 0, return yellow return '#ffff00' # Convert the input value to a hue value between 0 and 1 hue = (1 - value) / 2 # Convert the hue value to an RGB value r, g, b = colorsys.hsv_to_rgb(hue, 1, 1) # Convert the RGB value to a hex code return mc.to_hex((r, g, b)) </code></pre> <p>I would appreciate it if the function returned the hex code of the colour. Thank you!</p>
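One observation (hedged, since the intent is inferred from the description): `hue = (1 - value) / 2` sweeps the HSV hue wheel from red through yellow and green toward cyan, which would explain the non-red colours. If the goal is a linear fade from pure red at 0 toward white ("vanishing light red") at 1, raising the green and blue channels together keeps every shade red; a stdlib-only sketch:

```python
def get_red_shade(value):
    """-1..0 (exclusive) -> yellow; 0..1 -> red fading linearly toward white."""
    if value < -1 or value > 1:
        raise ValueError("Input value must be between -1 and 1")
    if value < 0:
        return '#ffff00'
    gb = round(value * 255)          # green/blue rise together as value grows
    return f'#ff{gb:02x}{gb:02x}'    # red channel stays at full strength

print(get_red_shade(0))      # #ff0000
print(get_red_shade(0.5))    # #ff8080
print(get_red_shade(1))      # #ffffff
```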
<python><matplotlib><colors><hex><rgb>
2023-02-25 22:32:30
1
2,281
Travelling Salesman
75,568,711
13,508,045
Overload type hint with Literal and Enum not working in PyCharm
<p>I want to typehint an overload function. For that I use the <code>overload</code> decorator from <code>typing</code>. I want to set multiple possible callees based on a parameter's value. This parameter is <code>color</code>.</p> <p>I have this code:</p> <pre class="lang-py prettyprint-override"><code>from typing import Literal, overload from enum import Enum class Color(Enum): RED = 0 BLUE = 1 GREEN = 2 @overload def func( *, title: str, color: Literal[Color.RED, Color.BLUE], description: str, ) -&gt; None: ... @overload def func( *, title: str, color: Literal[Color.GREEN], ) -&gt; None: ... def func( *, title: str = None, color: Color = None, description: str = None, ) -&gt; None: print(title, color, description) func(title=&quot;hello&quot;, color=Color.GREEN, description=&quot;hello&quot;) </code></pre> <p>I want to get a warning when I try to set the <code>description</code>, even the <code>color</code> is set to <code>Color.GREEN</code>, but I don't get a warning:</p> <p><a href="https://i.sstatic.net/FMmef.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/FMmef.png" alt="no_warning_enum" /></a></p> <p>When I do the same just with strings, it works. I replaced the Literals with <code>Literal[&quot;red&quot;, &quot;blue&quot;]</code> and <code>Literal[&quot;green&quot;]</code> and changed the type of <code>color</code> to <code>str</code>:</p> <p><a href="https://i.sstatic.net/JPb7z.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/JPb7z.png" alt="warning_strings" /></a></p> <p>Accordingly, there is no error, when I don't try to set <code>description</code>, which I expect:</p> <p><a href="https://i.sstatic.net/gDRfN.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/gDRfN.png" alt="no_warnings_strings" /></a></p> <p>Python Version: 3.8 <br> IDE: PyCharm</p>
<python><pycharm><type-hinting>
2023-02-25 21:57:03
1
1,508
codeofandrin
75,568,354
13,642,459
rgb2hed not giving the same answer when applied to a full or part matrix
<p>I have an image in rgb format that is loaded into a 3D numpy array. I would like to convert this image from RGB to HED using the <code>rgb2hed</code> function from <code>skimage.color</code>.</p> <pre><code>import numpy as np from skimage.color import rgb2hed np.random.seed(42) img = np.random.randint(0, 255, (10, 10, 3)) rgb2hed(img) </code></pre> <p>This works fine and is the result I would like to end up with. The problem is that this image is very big and so I can't fit two copies of the matrix in memory at one time. What I have done to get round this is split the matrix into smaller chunks and applied it to each chunk in turn.</p> <pre><code>img = img.astype(float) x_split_inds = np.linspace(0, img.shape[0], num_tiles + 1, dtype=int) x_start = x_split_inds[:-1] x_end = x_split_inds[2:] for start, end in zip(x_start, x_end): img[start:end, :, :] = rgb2hed(img[start:end, :, :]) </code></pre> <p>However, the two matrices calculated by these methods are not the same (the latter method is wrong). What is going on?</p> <p>Thanks</p>
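One thing that looks off in the chunking code (an observation about the indexing, not a guarantee it explains every discrepancy): `x_end = x_split_inds[2:]` pairs each start with the boundary two steps ahead, so the chunks overlap, and overlapping rows that were already written back in HED space get pushed through `rgb2hed` a second time. Using `x_split_inds[1:]` gives disjoint chunks. A small numpy-only sketch of the two pairings:

```python
import numpy as np

num_tiles = 5
inds = np.linspace(0, 10, num_tiles + 1, dtype=int)    # [0 2 4 6 8 10]

buggy = [(int(a), int(b)) for a, b in zip(inds[:-1], inds[2:])]
fixed = [(int(a), int(b)) for a, b in zip(inds[:-1], inds[1:])]

print(buggy)   # [(0, 4), (2, 6), (4, 8), (6, 10)] -- windows overlap
print(fixed)   # [(0, 2), (2, 4), (4, 6), (6, 8), (8, 10)] -- each row exactly once
```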
<python><numpy><scikit-image>
2023-02-25 20:52:41
0
456
Hugh Warden
75,568,346
11,825,717
Get real-time progress updates from a Python script in R
<p>I would like to get real-time progress updates from a Python script running in the background.</p> <h3>Python script</h3> <pre class="lang-py prettyprint-override"><code>from time import sleep for i in range(10): print(f'file{i}.txt') sleep(1) print('done') </code></pre> <p>I thought I could use a <code>pipe</code> connection but it appears that <code>readLines</code> doesn't return output until <code>my_script.py</code> is done running. How do I get real-time feedback from my Python job?</p> <h3>R script</h3> <pre class="lang-r prettyprint-override"><code>con &lt;- pipe(&quot;python3 my_script.py&quot;) while(TRUE) { files &lt;- readLines(con) # waits for job to finish current_file &lt;- tail(files, 1) if(current_file == &quot;done&quot;) { print(&quot;Done processing files&quot;) break() } else { print(paste(&quot;Processing&quot;, current_file)) Sys.sleep(1) } } </code></pre>
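On the Python side, a likely culprit (an assumption about this setup, not a certainty) is that stdout is block-buffered when it feeds a pipe rather than a terminal, so nothing arrives until the buffer fills or the process exits. Printing with `flush=True` (or running `python3 -u my_script.py`) pushes each line through immediately; on the R side, reading incrementally with `readLines(con, n = 1)` on an explicitly opened connection is the matching half. A testable sketch of the flushing writer:

```python
import io
from time import sleep

def emit(out, n=3, delay=0.0):
    # flush=True defeats block buffering so each line reaches the reader at once
    for i in range(n):
        print(f'file{i}.txt', file=out, flush=True)
        sleep(delay)
    print('done', file=out, flush=True)

buf = io.StringIO()      # stand-in for sys.stdout so the behaviour is testable
emit(buf)
print(buf.getvalue().splitlines())   # ['file0.txt', 'file1.txt', 'file2.txt', 'done']
```

In the real script, `emit(sys.stdout, n=10, delay=1)` reproduces the original behaviour with flushing added.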
<python><r><pipe><reticulate>
2023-02-25 20:51:21
0
2,343
Jeff Bezos
75,568,254
18,806,499
ImportError: cannot import name 'ParameterSource' from 'click.core'
<p>I'm working on a simple Flask app, and I received this error</p> <pre><code> from click.core import ParameterSource ImportError: cannot import name 'ParameterSource' from 'click.core' (/usr/local/lib/python3.10/dist-packages/click/core.py) </code></pre> <p>I don't know why it's appearing, because everything was fine and then just...</p> <p>Here are the versions I use:</p> <pre><code>black 23.1.0 click 8.1.3 Flask 2.2.3 Python 3.10.6 pip 22.0.2 </code></pre> <p>I've been searching for a solution and found that many people can't deal with this problem; the only advice I found is that I have to update Click and black to the latest version, but I'm already using the latest version.</p> <p>What should I do? Is there any way to not use Click at all?</p> <p>UPDATE</p> <p>Here is what the full error looks like</p> <pre><code>Traceback (most recent call last): File &quot;/usr/lib/python3.10/runpy.py&quot;, line 187, in _run_module_as_main mod_name, mod_spec, code = _get_module_details(mod_name, _Error) File &quot;/usr/lib/python3.10/runpy.py&quot;, line 146, in _get_module_details return _get_module_details(pkg_main_name, error) File &quot;/usr/lib/python3.10/runpy.py&quot;, line 110, in _get_module_details __import__(pkg_name) File &quot;/home/diametr/.local/lib/python3.10/site-packages/flask/__init__.py&quot;, line 5, in &lt;module&gt; from .app import Flask as Flask File &quot;/home/diametr/.local/lib/python3.10/site-packages/flask/app.py&quot;, line 34, in &lt;module&gt; from . import cli File &quot;/home/diametr/.local/lib/python3.10/site-packages/flask/cli.py&quot;, line 15, in &lt;module&gt; from click.core import ParameterSource ImportError: cannot import name 'ParameterSource' from 'click.core' (/usr/local/lib/python3.10/dist-packages/click/core.py) </code></pre>
<python><importerror>
2023-02-25 20:35:27
6
305
Diana
75,568,249
14,386,187
Error when trying to import module from different folder
<p>I have the following directory structure:</p> <pre><code>foo.py │ └─── foodir/ │ └─── foofunc1.py │ └─── foofunc2.py </code></pre> <p>I have a file <code>foo.py</code> that I'm trying to run that imports a function <code>func1</code> from <code>foofunc1.py</code> that relies on an import <code>func2</code> from <code>foofunc2.py</code>.</p> <p>In <code>foo.py</code>, I import the function with <code>from foodir.foofunc1 import func1</code>. In <code>foofunc1.py</code>, <code>func2</code> is imported with <code>from foofunc2 import func2</code>.</p> <p>When I run <code>foo.py</code> from the root directory, python complains that it can't find the module <code>foofunc2</code>. I realize that I could change the import statement to <code>from foodir.foofunc2 import func2</code>, but what if I need to run a script from inside <code>foodir</code>? I would like to avoid a case where I need to change all the import statements in a particular file just because I'm running it from a different location.</p>
<python>
2023-02-25 20:33:52
0
676
monopoly
75,568,211
11,940,581
Module/Package resolution in Python
<p>So I have a project directory &quot;dataplatform&quot; and its contents are as follows:</p> <pre><code> ── dataplatform ├── __init__.py ├── commons │   ├── __init__.py │   ├── __pycache__ │   │   ├── __init__.cpython-38.pyc │   │   ├── get_partitions.cpython-38.pyc │   │   └── packages.cpython-38.pyc │   ├── get_partitions.py │   ├── packages.py │   ├── pipeline │   │   ├── __init__.py │   │   └── pipeline.py │   └── spark_logging.py ├── pipelines │   ├── __init__.py │   └── batch │   ├── ETL.py │   ├── ReadMe.rst │   └── main.py └── requirement.txt </code></pre> <p>I have two questions here:</p> <ol> <li><p>In the pipelines package, I try to import modules from the commons package in the <strong>main.py</strong> module by saying <code>from dataplatform.commons import *</code>. However, the IDE (PyCharm) immediately throws up an error saying that it is not permitted, as it cannot find the package dataplatform. However, dataplatform here has <code>__init__.py</code>, and is therefore a package that has commons as a sub-package. What could be going wrong there? 
When I replace the above import statement with <code>from commons import *</code> , it works just fine.</p> </li> <li><p>Now, the project working directory : When I execute the <strong>main.py</strong> script from the the dataplatform directory by passing the complete path of the main.py file to the python3 executable, it refuses to execute, with the same import error being thrown as below:</p> <p>File &quot;pipelines/batch/main.py&quot;, line 2, in from dataplatform.commons import * ModuleNotFoundError: No module named 'dataplatform'</p> </li> </ol> <p>I would like to know as to what must be the root directory (working directory) from which I should try executing the main file (on my local machine) so that the main.py file will execute successfully.</p> <p>I am keen on keeping the <strong>dataplatform</strong> package appended to every subpackage name I use in the code, as the environment on which I am running this is Hadoop Sandbox (HDP 3.1) , and for some unknown reasons, appending the <strong>dataplatform</strong> package name is required to load files from HDFS successfully (The code is zipped and stored on HDFS; call to the main.py executes the whole program correctly somehow).</p> <p>Note: Using sys.path.append is not an option.</p>
<python><python-3.x><hadoop><hdfs><hadoop2>
2023-02-25 20:25:57
1
371
halfwind22
75,568,009
12,349,101
Tkinter - How to resize frame containing a text widget (in all directions)?
<p>I'm trying to resize certain layered tkinter components, mostly because I'm curious. Right now, I seem to be stuck trying to resize a frame containing a text widget. Here is my attempt:</p> <pre class="lang-py prettyprint-override"><code>import tkinter as tk def make_draggable(widget): widget.bind(&quot;&lt;Button-1&gt;&quot;, on_drag_start) widget.bind(&quot;&lt;B1-Motion&gt;&quot;, on_drag_motion) widget.bind(&quot;&lt;Button-3&gt;&quot;, on_resize_start) widget.bind(&quot;&lt;B3-Motion&gt;&quot;, on_resize_motion) def on_drag_start(event): widget = event.widget widget._drag_start_x = event.x widget._drag_start_y = event.y def on_drag_motion(event): widget = event.widget x = widget.winfo_x() - widget._drag_start_x + event.x y = widget.winfo_y() - widget._drag_start_y + event.y widget.place(x=x, y=y) def on_resize_start(event): widget = event.widget widget._resize_start_x = event.x widget._resize_start_y = event.y widget._resize_width = widget.winfo_width() widget._resize_height = widget.winfo_height() def on_resize_motion(event): widget = event.widget width = widget._resize_width + event.x - widget._resize_start_x height = widget._resize_height + event.y - widget._resize_start_y widget.place(width=width, height=height) widget.winfo_children()[0].configure(width=width, height=height) main = tk.Tk() frame = tk.Frame(main, bd=4, bg=&quot;grey&quot;) frame.place(x=10, y=10) make_draggable(frame) notes = tk.Text(frame) notes.pack() main.mainloop() </code></pre> <p>It is based on <a href="https://stackoverflow.com/a/37281388/12349101">this</a> other answer on SO.</p> <p>This works, but only when right-clicking and dragging the mouse on the bottom and right side of the frame (the gray part). I don't know how to make it work in the other directions (e.g. top and left, and if possible, the edges too).</p> <p>How can this be done for all directions?</p> <p>Note: I'm using Python 3.8.10 and Tk version 8.6.9 (patch level), on Win10</p>
<python><tkinter><resize><tkinter-text>
2023-02-25 19:49:55
1
553
secemp9
75,567,969
3,668,129
How to plot 2 subplots of wav spectrogram files?
<p>I have 2 wav files (with the same sampling rate) and I want to display their spectrograms side by side.</p> <p>I'm using the code from here (<a href="https://librosa.org/doc/main/generated/librosa.feature.melspectrogram.html" rel="nofollow noreferrer">https://librosa.org/doc/main/generated/librosa.feature.melspectrogram.html</a>) to show a wav file spectrogram:</p> <pre><code>y, sr = librosa.load(FILE_NO_SPEAKING) mel_spec_no_speak = librosa.feature.melspectrogram(y=y, sr=sr) y, sr = librosa.load(FILE_SPEAKING) mel_spec_speak = librosa.feature.melspectrogram(y=y, sr=sr) D = np.abs(librosa.stft(y))**2 S = librosa.feature.melspectrogram(S=D, sr=sr) fig, (ax1, ax2) = plt.subplots(1, 2) S_dB = librosa.power_to_db(S, ref=np.max) img_no_speak = librosa.display.specshow(S_dB, x_axis='time', y_axis='mel', sr=sr, fmax=8000, ax=ax1) img_speak = librosa.display.specshow(S_dB, x_axis='time', y_axis='mel', sr=sr, fmax=8000, ax=ax2) #fig.colorbar(?, ax=(ax1, ax2), format='%+2.0f dB') ax1.set(title='No Speak') ax2.set(title='Speak') </code></pre> <ul> <li>But I don't know how to set the colorbar with the images of each subplot?</li> <li>In addition, I want to enlarge the figure plot size. How can I do it?</li> </ul>
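On both bullet points: `specshow` (like `imshow`) returns a mappable that can be passed as the first argument of `fig.colorbar`, and `figsize` controls the overall plot size. A minimal matplotlib-only sketch (random arrays stand in for the spectrograms, since no audio is available here):

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")  # headless backend for this sketch
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
# figsize=(width, height) in inches enlarges the whole figure
fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(12, 4), sharey=True)
img1 = ax1.imshow(rng.random((64, 64)), vmin=0, vmax=1)
img2 = ax2.imshow(rng.random((64, 64)), vmin=0, vmax=1)
ax1.set(title="No Speak")
ax2.set(title="Speak")
# one shared colorbar for both axes: hand it the mappable returned
# by the plotting call (fills the '?' in the commented-out line above)
fig.colorbar(img2, ax=(ax1, ax2), format="%+2.0f dB")
print(len(fig.axes))  # 2 subplots + 1 colorbar axis = 3
```

With librosa, `img_no_speak` / `img_speak` play the role of `img1` / `img2`. Note also that the snippet in the question passes the same `S_dB` to both `specshow` calls; each subplot presumably needs its own `power_to_db` of its own mel spectrogram.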
<python><matplotlib>
2023-02-25 19:42:19
1
4,880
user3668129
75,567,914
20,761,758
Django - No module named 'webapp'
<p><a href="https://i.sstatic.net/YHZTL.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/YHZTL.png" alt="Project Structure" /></a></p> <p><strong>Settings.py</strong></p> <pre><code>INSTALLED_APPS = [ 'webapp', 'django.contrib.admin', 'django.contrib.auth', 'django.contrib.contenttypes', 'django.contrib.sessions', 'django.contrib.messages', 'django.contrib.staticfiles', ] </code></pre> <p><strong>Apps.py</strong></p> <pre><code>class WebappConfig(AppConfig): default_auto_field = 'django.db.models.BigAutoField' name = 'webapp' </code></pre> <p><strong>Error</strong></p> <pre><code>C:\Users\agent\PycharmProjects\ExampleProj\venv\Scripts\python.exe C:\Users\agent\PycharmProjects\ExampleProj\mysite\manage.py createsuperuser Traceback (most recent call last): File &quot;C:\Users\agent\PycharmProjects\ExampleProj\mysite\manage.py&quot;, line 22, in &lt;module&gt; main() File &quot;C:\Users\agent\PycharmProjects\ExampleProj\mysite\manage.py&quot;, line 18, in main execute_from_command_line(sys.argv) File &quot;C:\Users\agent\PycharmProjects\ExampleProj\venv\lib\site-packages\django\core\management\__init__.py&quot;, line 446, in execute_from_command_line utility.execute() File &quot;C:\Users\agent\PycharmProjects\ExampleProj\venv\lib\site-packages\django\core\management\__init__.py&quot;, line 420, in execute django.setup() File &quot;C:\Users\agent\PycharmProjects\ExampleProj\venv\lib\site-packages\django\__init__.py&quot;, line 24, in setup apps.populate(settings.INSTALLED_APPS) File &quot;C:\Users\agent\PycharmProjects\ExampleProj\venv\lib\site-packages\django\apps\registry.py&quot;, line 91, in populate app_config = AppConfig.create(entry) File &quot;C:\Users\agent\PycharmProjects\ExampleProj\venv\lib\site-packages\django\apps\config.py&quot;, line 193, in create import_module(entry) File &quot;C:\Users\agent\AppData\Local\Programs\Python\Python39\lib\importlib\__init__.py&quot;, line 127, in import_module return _bootstrap._gcd_import(name[level:], 
package, level) File &quot;&lt;frozen importlib._bootstrap&gt;&quot;, line 1030, in _gcd_import File &quot;&lt;frozen importlib._bootstrap&gt;&quot;, line 1007, in _find_and_load File &quot;&lt;frozen importlib._bootstrap&gt;&quot;, line 984, in _find_and_load_unlocked ModuleNotFoundError: No module named 'webapp' Process finished with exit code 1 </code></pre> <p>Am I missing something here? Same result when using mysite.webapp... I included the traceback as requested.</p>
<python><django>
2023-02-25 19:34:15
1
665
Hosea
75,567,818
514,149
How to rotate YOLO bounding boxes in 90° steps?
<p>I have a dataset of images for a computer vision object detection project. I am using the YOLO framework, which stores the object labels (bounding boxes) for the training data in text files, one per image, that have the following format:</p> <ul> <li>one row per object</li> <li>each row is in <code>class x_center y_center width height</code> format</li> <li>box coordinates and dimensions must be in normalized format, from 0.0 - 1.0</li> <li>class numbers are zero-indexed</li> </ul> <p>Now I want to create more training images by rotating the images I have in 90° steps. Doing so for the images themselves is easy, but I also need to create new text files with the annotations, which reflect these rotations. Thus the label coordinates in the new text files need to be rotated versions of the original annotations. Using Python, how can I calculate these 90° rotations for the object annotations (i.e. for 90, 180, and 270 degrees)?</p>
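Because the coordinates are normalized to 0.0–1.0, each 90° step has a closed form: rotating the image 90° clockwise maps a box center (x, y) to (1 − y, x) and swaps width and height, and composing that step gives 180° and 270°. A sketch (the function name and the clockwise/top-left convention are choices made here, not part of YOLO):

```python
def rotate_yolo_box(cls_id, x, y, w, h, quarter_turns):
    """Rotate one normalized YOLO box by quarter_turns * 90 degrees clockwise.

    Coordinates use the usual image convention: origin at the top-left,
    x to the right, y downwards, everything in 0.0-1.0.
    """
    for _ in range(quarter_turns % 4):
        # one 90-degree clockwise step: (x, y) -> (1 - y, x), w and h swap
        x, y, w, h = 1.0 - y, x, h, w
    return cls_id, x, y, w, h

print(rotate_yolo_box(0, 0.25, 0.5, 0.2, 0.4, 1))  # (0, 0.5, 0.25, 0.4, 0.2)
print(rotate_yolo_box(0, 0.25, 0.5, 0.2, 0.4, 2))  # (0, 0.75, 0.5, 0.2, 0.4)
```

Applying the same `quarter_turns` to every row of a label file should keep the boxes aligned with an image rotated clockwise by the same amount (e.g. `np.rot90(img, k=-1)` per step), under the convention stated in the docstring.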
<python><math><yolo>
2023-02-25 19:16:13
2
10,479
Matthias
75,567,806
19,989,634
Send 'User' name and 'Form content' in email notification
<p>I'm trying to get the form to send an email notification with the 'User' username and the content of the 'Comment' itself. I have managed to get the title of the post working but nothing else. Need a little help.</p> <p><strong>Views</strong></p> <pre><code>def ViewPost(request, slug): try: post = BlogPost.objects.get(slug=slug) except BlogPost.DoesNotExist: print(&quot;Blog with this slug does not exist&quot;) post = None comments = post.comments.filter(status=True) user_comment = None if request.method == 'POST': comment_form = CommentForm(request.POST) if comment_form.is_valid(): user_comment = comment_form.save(commit=False) user_comment.post = post user_comment.user = request.user user_comment.save() messages.success(request, (&quot;Comment successfully posted, thank you...&quot;)) send_mail('You have recieved a comment on a blog post', post.title, post.user # not working , #reference comment here [ 'gmail.com']) else: comment_form = CommentForm() return render(request, 'mhpapp/view-post.html', {'post': post, 'slug': slug, 'comments': user_comment, 'comments': comments, 'comment_form': comment_form}) </code></pre> <p><strong>Comment Model</strong></p> <pre><code>class BlogComment(models.Model): post = models.ForeignKey(BlogPost, related_name=&quot;comments&quot;, on_delete=models.CASCADE) user = models.ForeignKey(User, editable=False, on_delete=models.CASCADE, default=&quot;&quot;,) name = models.CharField('Name',max_length=100, default=&quot;&quot;) text = models.TextField('Comment',max_length=1000, default=&quot;&quot;) date = models.DateTimeField(auto_now_add=True) status = models.BooleanField(default=True) class Meta: ordering = (&quot;date&quot;,) def __str__(self): return '%s -&gt; %s'%(self.post.title, self.user) </code></pre>
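Django's `send_mail` signature is `(subject, message, from_email, recipient_list)`, so the username and comment text belong inside the `message` string rather than as extra positional arguments. Composing that string is plain Python and can be sketched independently of Django (the names and sample values below are placeholders):

```python
def comment_email_body(username, post_title, comment_text):
    """Build the notification body from the comment just saved."""
    return f'{username} commented on "{post_title}":\n\n{comment_text}'

body = comment_email_body("david", "My first post", "Great write-up!")
print(body)
```

In the view, something along the lines of `send_mail('You have received a comment on a blog post', comment_email_body(request.user.username, post.title, user_comment.text), 'from@example.com', ['to@example.com'])` would then send it — the sender and recipient addresses here are stand-ins, not values from the question.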
<python><django><django-models><django-views><django-email>
2023-02-25 19:14:00
2
407
David Henson
75,567,562
12,491,345
How to automate version bump with main branch protection rules without PAT?
<p>My team has multiple low level and high level packages written in Python and they are dependent on each other, we typically install them as GitHub repositories using <code>pip</code>. Basically, I would like to automate the process of providing versions to the libraries, so the production code is easier to manage and rollback to previous versions in case of emergency.</p> <p>Right now, we use a version string in the code and git tags to annotate the releases. By using commitizen's <code>cz bump --changelog</code> we can manually update the versions in the file and when PR is approved and merged (due to main branch protection rules), the tag can be pushed too.</p> <p>I've been trying to find a way to automate the process and I've encountered multiple problems along the way.</p> <ol> <li>The <code>commitizen-action</code> is almost perfect. But, since we have main branch protection in place, a PAT and dedicated user to bypass the rules are needed. Since I'm not an admin of our organisation, adding the tokens to every repository and rotation of tokens aren't very efficient. Is there a way to circumvent that so we, as a team, can manage our repositories?</li> <li>I tried to write a custom GitHub Action executing <code>cz bump --changelog</code> when a pull request is created, so everything is still being done on the current feature branch. But then some commits after a PR review still can be pushed to the branch and the tag will point to the previous version of the code. I couldn't find a way to execute an Action just before merging the branch into main which theoretically would be something I need.</li> <li>Maybe I am missing something? Is there a way to automatically and efficiently version Python packages and release them? We also use CircleCi, but I guess we still need the tokens to push to main. We've been thinking about our private package storage, are there any tools like that e.g. 
on Azure?</li> </ol> <p>Edit: I'm also open to different approaches to versioning our releases and deployment approaches.</p>
<python><github><github-actions><commitizen>
2023-02-25 18:34:32
1
598
pdaawr
75,567,394
4,167,804
Trying to find human names in a file using nltk
<p>I'd like to extract human names from a text file. I'm getting a blank line as output for some reason. Here is my code:</p> <pre><code>import nltk import re nltk.download('names') nltk.download('punkt') from nltk.corpus import names # Create a list of male and female names from the nltk names corpus male_names = names.words('male.txt') female_names = names.words('female.txt') all_names = set(male_names + female_names) def flag_people_names(text): possible_names = [] words = nltk.word_tokenize(text) for word in words: # Split the word by ' ', '.' or '_' and check each part parts = re.split('[ _.]', word) for part in parts: if part.lower() in all_names: possible_names.append(word) break return possible_names # Read text file with open('sample.txt', 'r') as file: text = file.read() # Call function to flag possible names names = flag_people_names(text) print(names) </code></pre> <p>Here is the input file called sample.txt</p> <pre><code>James is a really nice guy Gina is a friend of james. Gina and james like to play with Andy. </code></pre> <p>I get this as the output:</p> <pre><code>[] </code></pre> <p>I'd like to get James, Gina and Andy.</p> <p>I'm on a MAC Catalina with python3.8.5. Any idea what's not working here?</p>
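One likely cause of the empty list: the nltk names corpus stores capitalized entries ('James', 'Gina'), while the lookup lowercases each token first, so `'james' in all_names` can never be true. Normalizing both sides fixes it; a corpus-free sketch (the small hard-coded set stands in for nltk's names corpus, and `str.split` stands in for `word_tokenize`):

```python
import re

all_names = {"James", "Gina", "Andy"}          # stand-in for the nltk corpus
lowered = {n.lower() for n in all_names}       # normalize the corpus once

def flag_people_names(text):
    found = []
    for word in text.split():
        for part in re.split(r"[ _.]", word):  # same splitting as the question
            if part.lower() in lowered:        # lowercase vs lowercase
                found.append(part)
                break
    return found

text = ("James is a really nice guy Gina is a friend of james. "
        "Gina and james like to play with Andy.")
print(flag_people_names(text))
# -> ['James', 'Gina', 'james', 'Gina', 'james', 'Andy']
```

With the real corpus, building `lowered = {n.lower() for n in all_names}` once and keeping the `part.lower()` comparison should make the original function return the expected names.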
<python><nlp>
2023-02-25 18:03:22
1
381
Brajesh
75,567,303
6,367,971
Fill down between rows in new dataframe column
<p>I have a dataframe where I need to create a new column (<code>Break</code>), and forward fill that column between all the <code>Break</code> rows.</p> <pre><code>Type,Name Parent,Parent1 Break,break010 Op,Op1 Unit,Unit1 Item,Item1 Break,break020 Op,Op2 Unit,Unit2 Break,break030 Op,Op3 Unit,Unit3 Parent,Parent2 Break,break010 </code></pre> <p>For example, the output should be</p> <pre><code> Type Name Break Parent Parent1 Break break010 break010 Op Op1 break010 Unit Unit1 break010 Item Item1 break010 Break break020 break020 Op Op2 break020 Unit Unit2 break020 Break break030 break030 Op Op3 break030 Unit Unit3 break030 Parent Parent2 Break break010 break010 </code></pre>
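The fill-down described above can be done without a loop: mask `Name` so that only the `Break` rows keep their value, then forward-fill. A sketch on the sample data from the question:

```python
import pandas as pd

df = pd.DataFrame({
    "Type": ["Parent", "Break", "Op", "Unit", "Item", "Break", "Op", "Unit",
             "Break", "Op", "Unit", "Parent", "Break"],
    "Name": ["Parent1", "break010", "Op1", "Unit1", "Item1", "break020", "Op2",
             "Unit2", "break030", "Op3", "Unit3", "Parent2", "break010"],
})

# keep Name only where Type == 'Break', then carry it forward
df["Break"] = df["Name"].where(df["Type"].eq("Break")).ffill()
print(df["Break"].tolist())
```

The first `Parent` row stays NaN (blank), matching the expected output. If the fill should also reset at every `Parent` row (it is ambiguous in the expected output whether `Parent2` carries `break030`), grouping by `df["Type"].eq("Parent").cumsum()` before the `ffill` would achieve that.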
<python><pandas><dataframe>
2023-02-25 17:48:15
1
978
user53526356
75,567,078
10,582,321
Move margins of subplots
<p>I would like to move all the subplots a little to the right, so that there is more space before the zero on x-axis. The <code>margins</code> results in symmetrical spacing. How can I do this?</p> <p>Here is a MWE:</p> <p><a href="https://i.sstatic.net/cVFnf.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/cVFnf.png" alt="enter image description here" /></a></p> <pre><code>import matplotlib.pyplot as plt x = [0, 1, 2, 3] y = [1, 3, 4, 2] fig, ax = plt.subplots(2,sharex=True) ax[0].plot(x, y) ax[1].plot(x, y) ax[0].margins(0.3) ax[1].margins(0.3) plt.show() </code></pre>
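`Axes.margins` is symmetric by design. One reading of the question — more data padding before 0 than after 3 — can be had by setting the x-limits directly after plotting (the padding fractions below are arbitrary choices, and `sharex=True` propagates them to both subplots):

```python
import matplotlib
matplotlib.use("Agg")  # headless backend for this sketch
import matplotlib.pyplot as plt

x = [0, 1, 2, 3]
y = [1, 3, 4, 2]

fig, ax = plt.subplots(2, sharex=True)
for a in ax:
    a.plot(x, y)
    a.margins(y=0.3)  # keep the symmetric padding on y only

# asymmetric x padding: more space to the left of 0 than to the right of 3
span = max(x) - min(x)
ax[0].set_xlim(min(x) - 0.9 * span, max(x) + 0.3 * span)
print(ax[0].get_xlim())
```

If instead the goal is to shift the subplot *boxes* within the figure window, `fig.subplots_adjust(left=...)` is the knob for that.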
<python><matplotlib><plot>
2023-02-25 17:11:02
2
330
Pedro
75,567,051
5,363,686
Shift dataframe values for all columns to make monotonically increasing
<p>I have a dataframe of measures in multiple columns that are aggregated. This means that the function they represent is a monotonically increasing one. Now, due to a reset of an apparatus, all measurements are reset to zero, after which the aggregation resumes. But to work with the data, I need to discard the reset and shift all values in all columns to mimic that the reset never occurred.</p> <p>Hence, I want this situation:</p> <p><a href="https://i.sstatic.net/3W2xo.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/3W2xo.png" alt="enter image description here" /></a></p> <p>to become</p> <p><a href="https://i.sstatic.net/CYGDU.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/CYGDU.png" alt="enter image description here" /></a></p> <p>What I want is a function that will shift all values in all columns to the last measured maximum.</p> <p>For some sample data, I have created this:</p> <pre><code>import pandas as pd import numpy as np import matplotlib.pyplot as plt date_range = pd.date_range(start='2021-01-01', end='2021-01-05', freq='1D') df1 = pd.DataFrame({'Date': date_range, 'Column 1': range(5), 'Column 2': range(5)}) date_range = pd.date_range(start='2021-01-06', end='2021-01-10', freq='1D') df2 = pd.DataFrame({'Date': date_range, 'Column 1': range(5), 'Column 2': range(5)}) df = pd.concat([df1,df2]) </code></pre> <p>which I want to become</p> <pre><code>date_range = pd.date_range(start='2021-01-01', end='2021-01-10', freq='1D') df3 = pd.DataFrame({'Date': date_range, 'Column 1': range(10), 'Column 2': range(10)}) </code></pre> <p>I know how to do this in the case where I know that df is constructed from df1 and df2:</p> <pre><code>def shift_df(df, df1, df2): columns = list(df.columns) columns.remove('Date') max_values = {} for col in columns: max_values[col] = df1[col].max() min_values = {} for col in columns: min_values[col] = df2[col].min() differences = {} for col in columns: differences[col] = max_values[col] - 
min_values[col]+1 for col in columns: df[col] = np.where(df['Date'].isin(df2['Date']), df[col] + differences[col], df[col]) return df </code></pre> <p>But I do not know how to generalize it if I only have the knowledge of df. Basically, how do I transform my function</p> <pre><code>shift_df(df, df1, df2) </code></pre> <p>to</p> <pre><code>shift_df(df) </code></pre>
<python><pandas><dataframe>
2023-02-25 17:06:40
1
11,592
Serge de Gosson de Varennes
75,567,023
11,397,243
Descriptors in python for implementing perl's tie scalar operation
<p>I need some help with descriptors in python. I wrote an automatic translator from perl to python (Pythonizer) and I'm trying to implement tied scalars, which is basically an object that acts as a scalar, but has FETCH and STORE operations that are called appropriately. I'm using a dynamic class namespace 'main' to store the variable values. I'm attempting to define <code>__get__</code> and <code>__set__</code> operations for the object, but they are not working. Any help will be appreciated!</p> <pre><code>main = type('main', tuple(), dict()) class TiedScalar(dict): # Generated code - can't be changed FETCH_CALLED_v = 0 STORE_CALLED_v = 0 def STORE(*_args): _args = list(_args) self = _args.pop(0) if _args else None TiedScalar.STORE_CALLED_v = TiedScalar.STORE_CALLED_v + 1 self[&quot;value&quot;] = _args.pop(0) if _args else None return self.get(&quot;value&quot;) #TiedScalar.STORE = lambda *_args, **_kwargs: perllib.tie_call(STORE, _args, _kwargs) def FETCH(*_args): _args = list(_args) self = _args.pop(0) if _args else None TiedScalar.FETCH_CALLED_v = TiedScalar.FETCH_CALLED_v + 1 return self.get(&quot;value&quot;) #TiedScalar.FETCH = lambda *_args, **_kwargs: perllib.tie_call(FETCH, _args, _kwargs) @classmethod def TIESCALAR(*_args): _args = list(_args) class_ = _args.pop(0) if _args else None self = {&quot;value&quot;: (_args.pop(0) if _args else None)} #self = perllib.bless(self, class_) #return perllib.add_tie_methods(self) return add_tie_methods(class_(self)) def __init__(self, d): for k, v in d.items(): self[k] = v #setattr(self, k, v) def add_tie_methods(obj): # This code is part of perllib and can be changed cls = obj.__class__ classname = cls.__name__ result = type(classname, (cls,), dict()) cls.__TIE_subclass__ = result def __get__(self, obj, objtype=None): return self.FETCH() result.__get__ = __get__ def __set__(self, obj, value): return self.STORE(value) result.__set__ = __set__ obj.__class__ = result return obj main.tied_scalar_v = 
TiedScalar.TIESCALAR(42) # Generated code assert main.tied_scalar_v == 42 main.tied_scalar_v = 100 assert main.tied_scalar_v == 100 print(TiedScalar.FETCH_CALLED_v) print(TiedScalar.STORE_CALLED_v) </code></pre> <p>Here the 2 print statements print 1 and 0, so esp STORE is not being called, and fetch is not being called enough times based on the code. Note that 'main' stores all of the user's variables, also the TiedScalar class is (mostly) generated from the user’s perl code.</p>
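One detail worth isolating from the generated code: `__get__`/`__set__` are only honoured when the descriptor lives on a *type*, and for attributes accessed on a class (like `main.tied_scalar_v`) that type is the metaclass. Assigning `main.tied_scalar_v = 100` on a plain class simply rebinds the attribute, which is why STORE never fires above. A stripped-down sketch of just the descriptor mechanics (no Perl semantics):

```python
class Tied:
    """Minimal data descriptor counting FETCH/STORE-style accesses."""
    def __init__(self, value):
        self.value = value
        self.fetched = 0
        self.stored = 0

    def __get__(self, obj, objtype=None):
        self.fetched += 1
        return self.value

    def __set__(self, obj, value):
        self.stored += 1
        self.value = value

class Meta(type):
    pass

Meta.tied_scalar_v = Tied(42)      # descriptor on the *metaclass* ...
main = Meta("main", (), {})        # ... so class-level access goes through it

assert main.tied_scalar_v == 42    # invokes Tied.__get__
main.tied_scalar_v = 100           # invokes Tied.__set__, not a plain rebind
assert main.tied_scalar_v == 100

d = Meta.__dict__["tied_scalar_v"] # __dict__ access avoids __get__
print(d.fetched, d.stored)         # 2 1
```

So for the generated `main` class, attaching the tie object's descriptor methods to `main`'s metaclass (rather than to the tied object's own class) would let both FETCH and STORE be called.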
<python><python-descriptors>
2023-02-25 17:02:27
3
633
snoopyjc
75,566,976
6,552,836
Modelling 6 separate equations into 1 equation
<p>I have 6 functions of the form <code>y = beta*(exp(alpha*x/n))</code>. Each <code>func</code> has specific parameters as displayed in the table below.</p> <p>The input is a 10x6 matrix and the output is a single value. The aim is to optimize the 10x6 matrix to produce the largest y value. Each column of the matrix has its own function (i.e. func1 for column 1, etc.).</p> <p>Is there a way I could model all 6 functions as 1 equation?</p> <pre><code>y = beta*(exp(alpha*x/n)) n alpha beta func1 1.3 434 415 func2 1.4 333 102 func3 1.1 344 512 func4 1.1 434 262 func5 1.9 243 431 func6 3.9 213 421 </code></pre> <p>I'm wondering whether a model like a linear regressor should be used in this case?</p>
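Since all six functions share one functional form, a single broadcasted NumPy expression covers them: put the per-column parameters in length-6 arrays and evaluate once. The reduction to a single value is assumed here to be the sum over the evaluated matrix (the question does not say how the 10x6 values collapse to one number):

```python
import numpy as np

n = np.array([1.3, 1.4, 1.1, 1.1, 1.9, 3.9])
alpha = np.array([434.0, 333.0, 344.0, 434.0, 243.0, 213.0])
beta = np.array([415.0, 102.0, 512.0, 262.0, 431.0, 421.0])

def y_of(X):
    """Evaluate y = beta * exp(alpha * x / n) column-wise on a 10x6 matrix.

    The length-6 parameter arrays broadcast across each row of X, so
    column j of X is always paired with the parameters of func(j+1).
    """
    return np.sum(beta * np.exp(alpha * X / n))

X = np.zeros((10, 6))  # exp(0) = 1, so each row contributes sum(beta) = 2143
print(y_of(X))         # -> 21430.0
```

No fitted model (linear regressor or otherwise) is needed just to *combine* the functions — they are already known in closed form; a fitted model would only enter if the parameters themselves had to be learned. Note also that with alpha/n around 330, `exp` overflows float64 for x much above 2, so the optimization's bounds matter.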
<python><machine-learning><linear-regression><data-modeling>
2023-02-25 16:56:04
1
439
star_it8293
75,566,800
790,785
How to resolve "Event loop is closed" error in Python unittest?
<p>While implementing Python unittest by subclassing IsolatedAsyncioTestCase, only the first test case runs successfully. For any subsequent test case it throws an error that the event loop is closed. This happens on both Windows and Mac. Could you please suggest how to make sure that the event loop is running during the execution of each test within each of the subclasses of IsolatedAsyncioTestCase that I have implemented?</p>
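`IsolatedAsyncioTestCase` creates a fresh event loop for every test and closes it afterwards, so a common cause of this symptom is a loop-bound object (an HTTP client session, a queue, a connection) created once at class or module level: it stays tied to the first, now-closed loop. Creating such resources per test in `asyncSetUp` is the usual cure; a self-contained sketch of the pattern (the queue stands in for whatever loop-bound resource the real tests share):

```python
import asyncio
import unittest

class ExampleTests(unittest.IsolatedAsyncioTestCase):
    async def asyncSetUp(self):
        # loop-bound resources go here: rebuilt on each test's fresh loop
        self.queue = asyncio.Queue()

    async def test_first(self):
        await self.queue.put(1)
        self.assertEqual(await self.queue.get(), 1)

    async def test_second(self):
        # works because the queue was created on *this* test's loop,
        # not on the (already closed) loop of test_first
        await self.queue.put(2)
        self.assertEqual(await self.queue.get(), 2)

suite = unittest.defaultTestLoader.loadTestsFromTestCase(ExampleTests)
result = unittest.TextTestRunner(verbosity=0).run(suite)
print(result.wasSuccessful())  # True
```

If the resource is expensive, `asyncTearDown` is the matching place to dispose of it per test; sharing it across tests requires sharing the loop, which `IsolatedAsyncioTestCase` deliberately does not do.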
<python><python-asyncio><python-unittest>
2023-02-25 16:27:01
1
7,039
Atharva
75,566,776
1,532,602
Issue using FPDF2 with python.barcode
<p>Using FPDF2 to generate a document that requires a Code39 barcode:</p> <pre><code>from barcode import Code39 from barcode.writer import ImageWriter from fpdf import FPDF from io import BytesIO ... pdf = PDF() pdf.add_page() rv = BytesIO() Code39(&quot;TEST&quot;,writer=ImageWriter()).write(rv) pdf.image(rv,10,10,100) pdf.output(name=&quot;test.pdf&quot;) </code></pre> <p>There are no errors; however, the barcode is not included in the document. Is there another or better approach to getting this done? Thanks!</p>
<python><fpdf>
2023-02-25 16:22:47
0
1,665
Elcid_91
75,566,743
6,552,836
Scipy Minimize - Keep getting a singular matrix error
<p>I'm trying to optimize a 20x5 matrix to maximize a return value y. There is 1 main constraint that I need to include:</p> <ol> <li>Total sum of all the elements must be between a min and max range</li> </ol> <p>However, I keep getting this singular matrix error below:</p> <pre><code>Singular matrix C in LSQ subproblem (Exit mode 6) Current function value: -3.0867160133139926 Iterations: 1 Function evaluations: 261 Gradient evaluations: 1 </code></pre> <p>I have attached the full code below. I can't seem to see what I'm doing wrong.</p> <pre><code># Import Libraries import pandas as pd import numpy as np import scipy.optimize as so import random # Define Objective function def obj_func(matrix): return np.sum(output_matrix) # Create optimizer function def optimizer_result(tot_min_sum, tot_max_sum, matrix_input): # Create constraint 1) - total matrix sum range constraints_list = [{'type': 'ineq', 'fun': lambda x: np.sum(x) - tot_min_sum}, {'type': 'ineq', 'fun': lambda x: -(np.sum(x) - tot_max_sum)}] # Create an inital matrix start_matrix = [random.randint(0, 3) for i in range(0, 20)] # Run optimizer optimizer_solution = so.minimize(cost, start_matrix, method='SLSQP', bounds=[(0, total_matrix_max_sum)] * 260, tol=0.01, options={'disp': True, 'maxiter': 100}, constraints=constraints_list, callback=callback) return optimizer_solution # Initalise constraints tot_min_sum = 0 tot_max_sum = 20000 matrix_input = np.zeros((52, 5)) matrix_input[0, 0] = 100 # Run Optimizer y = optimizer_result(total_matrix_min_sum, total_matrix_max_sum, column_sum_min_lst, column_sum_max_lst, matrix_input) print(y) </code></pre>
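Independent of the singular-matrix message, two inconsistencies in the snippet will bite: the objective ignores its argument (`obj_func` returns `np.sum(output_matrix)`, so SLSQP sees a constant function), and the start vector has 20 entries while the bounds list has 260, even though a 20x5 matrix needs 100. A dimensionally consistent skeleton (the quadratic objective is a placeholder, since the real return function isn't shown):

```python
import numpy as np
import scipy.optimize as so

shape = (20, 5)
tot_min_sum, tot_max_sum = 0.0, 50.0

def objective(flat):
    # placeholder for the (negated) return value y: SLSQP hands us the
    # flat vector, so reshape if the real function needs the matrix form
    matrix = flat.reshape(shape)
    return np.sum((matrix - 0.4) ** 2)  # minimum: every entry equals 0.4

constraints = [
    {"type": "ineq", "fun": lambda f: np.sum(f) - tot_min_sum},
    {"type": "ineq", "fun": lambda f: tot_max_sum - np.sum(f)},
]

x0 = np.full(np.prod(shape), 0.1)   # start length == bounds length == 100
res = so.minimize(objective, x0, method="SLSQP",
                  bounds=[(0, tot_max_sum)] * x0.size,
                  constraints=constraints)
print(res.success, round(float(np.sum(res.x)), 2))
```

A constant objective is one plausible route to exit mode 6: with a zero gradient everywhere, the least-squares subproblem SLSQP builds degenerates. The min/max values and the 0.4 target above are illustrative, not taken from the question.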
<python><optimization><scipy-optimize><scipy-optimize-minimize>
2023-02-25 16:17:12
2
439
star_it8293