QuestionId (int64): 74.8M to 79.8M
UserId (int64): 56 to 29.4M
QuestionTitle (string, length 15 to 150)
QuestionBody (string, length 40 to 40.3k)
Tags (string, length 8 to 101)
CreationDate (date): 2022-12-10 09:42:47 to 2025-11-01 19:08:18
AnswerCount (int64): 0 to 44
UserExpertiseLevel (int64): 301 to 888k
UserDisplayName (string, length 3 to 30)
76,031,827
5,387,770
Flattening of Nested dataframe
<p>I have a multi level nested dataframe something like the below:</p> <pre><code>DataFrame[date_time: timestamp, filename: string, label: string, description: string, feature_set: array&lt;struct&lt;direction:string,tStart:double,tEnd:double, features:array&lt;struct&lt;field1:string,field2:string,field3:string,field4:string&gt;&gt;&gt;&gt;] </code></pre> <p>and its values are:</p> <pre><code>[[datetime.datetime(2022, 8, 24, 7, 51, 54), 'filename1', 'label1', 'description of file 1', [['east', 78.23018987, 79.23010199, [['fld_val11', 'fld_val12', 'fld_Val13', 'fld_Val14']]], ['west', 78.23018987, 79.23010199, [['fld_val21', 'fld_val22', 'fld_val23', 'fld_val24']]], ['south', 78.23018987, 79.23010199, [['fld_val31', 'fld_val32', 'fld_val33', 'fld_val34']]] root |-- date_time: timestamp (nullable = true) |-- filename: string (nullable = true) |-- label: string (nullable = true) |-- description: string (nullable = true) |-- feature_set: array (nullable = true) | |-- element: struct (containsNull = true) | | |-- direction: string (nullable = true) | | |-- tStart: double (nullable = true) | | |-- tEnd: double (nullable = true) | | |-- features: array (nullable = true) | | | |-- element: struct (containsNull = true) | | | | |-- field1: string (nullable = true) | | | | |-- field2: string (nullable = true) | | | | |-- field3: string (nullable = true) | | | | |-- field4:string (nullable = true) </code></pre> <p>I am trying to flatten it in such a way that it should look like below:</p> <pre><code>-------------------+--------------------+--------------------+--------------------+--------------------+ | date_time| filename| label| description| feature_set_direction| feature_set_tStart| feature_set_tEnd| feature_set_features_Field1| feature_set_features_Field2| feature_set_features_Field3| feature_set_features_Field4| +-------------------+--------------------+--------------------+--------------------+--------------------+ |2022-08-24 13:47:47|filename1|label1| description of 
file 1|east| 78.230189787|79.23010199| fld_val11| fld_val12| fld_Val13| fld_Val14| +-------------------+--------------------+--------------------+--------------------+--------------------+ root |-- date_time: timestamp (nullable = true) |-- filename: string (nullable = true) |-- label: string (nullable = true) |-- description: string (nullable = true) |-- feature_set: array (nullable = true) |-- feature_set_element: struct (containsNull = true) |-- feature_set_element_direction: string (nullable = true) |-- feature_set_element_tStart: double (nullable = true) |-- feature_set_element_tEnd: double (nullable = true) |-- feature_set_element_features: array (nullable = true) |-- feature_set_element_features_element: struct (containsNull = true) |-- feature_set_element_features_element_field1: string (nullable = true) |-- feature_set_element_features_element_field2: string (nullable = true) |-- feature_set_element_features_element_field3: string (nullable = true) |-- feature_set_element_features_element_field4: string (nullable = true) </code></pre> <p>I tried to flatten it with the help of the code below, but it's throwing an error:</p> <pre><code>flat_df = df.select(&quot;date_time&quot;, &quot;filename&quot;, &quot;label&quot;, &quot;description&quot;, &quot;feature_set.*&quot;) </code></pre> <blockquote> <p>AnalysisException: Can only star expand struct data types. Attribute: <code>ArrayBuffer(feature_set)</code>.</p> </blockquote> <p>I also tried a couple of other approaches, such as val, but without success.</p> <pre><code>val df2 = df.select(col(&quot;date_time&quot;), col(&quot;filename&quot;), col(&quot;label&quot;), col(&quot;description&quot;), col(&quot;feature_set&quot;)) </code></pre> <blockquote> <p>SyntaxError: invalid syntax (, line 1) File :1 val df2 = df.select(col(&quot;date_time&quot;)</p> </blockquote> <p>Could anyone please suggest how to proceed?</p> <p>Thank you.</p>
<python><apache-spark><pyspark>
2023-04-17 04:57:26
1
625
Arun
76,031,802
13,557,241
Python Kubernetes Client: equivalent of kubectl api-resources --namespaced=false
<p>Via the CLI, I can use <code>kubectl api-resources --namespaced=false</code> to list all available cluster-scoped resources in a cluster.</p> <p>I am writing a custom operator with the <a href="https://github.com/kubernetes-client/python" rel="nofollow noreferrer">Python Kubernetes Client API</a>, however I can't seem to find anything in the API that allows me to do this.</p> <p>The closest I have found is the <a href="https://github.com/kubernetes-client/python/blob/master/examples/api_discovery.py" rel="nofollow noreferrer">following code</a>, which was included as an example in the repository:</p> <pre class="lang-py prettyprint-override"><code>from kubernetes import client, config def main(): # Configs can be set in Configuration class directly or using helper # utility. If no argument provided, the config will be loaded from # default location. config.load_kube_config() print(&quot;Supported APIs (* is preferred version):&quot;) print(&quot;%-40s %s&quot; % (&quot;core&quot;, &quot;,&quot;.join(client.CoreApi().get_api_versions().versions))) for api in client.ApisApi().get_api_versions().groups: versions = [] for v in api.versions: name = &quot;&quot; if v.version == api.preferred_version.version and len( api.versions) &gt; 1: name += &quot;*&quot; name += v.version versions.append(name) print(&quot;%-40s %s&quot; % (api.name, &quot;,&quot;.join(versions))) if __name__ == '__main__': main() </code></pre> <p>Unfortunately, <code>client.ApisApi()</code> doesn't have a <code>get_api_resources()</code> option.</p> <p>Does anybody know of a way that I can list all the api-resources?</p>
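Annotation: the discovery data kubectl uses is exposed per API group through the `get_api_resources()` call that each versioned API object provides (e.g. `client.CoreV1Api().get_api_resources()`), and every returned resource carries a `namespaced` flag. A sketch; the filtering helper is my own, and it only assumes the duck-typed `resources[*].name` / `resources[*].namespaced` attributes of `V1APIResourceList`:

```python
def cluster_scoped_names(resource_lists):
    """Given V1APIResourceList objects, return names of cluster-scoped resources."""
    names = []
    for rl in resource_lists:
        for r in rl.resources:
            if not r.namespaced:  # mirrors kubectl api-resources --namespaced=false
                names.append(r.name)
    return names

# Usage against a real cluster (assumes a working kubeconfig):
# from kubernetes import client, config
# config.load_kube_config()
# lists = [client.CoreV1Api().get_api_resources(),
#          client.AppsV1Api().get_api_resources(),
#          client.RbacAuthorizationV1Api().get_api_resources()]
# print(cluster_scoped_names(lists))
```

To cover every group the way `kubectl api-resources` does, you would iterate `client.ApisApi().get_api_versions().groups` (as in the discovery example quoted above) and fetch the resource list for each group/version rather than hard-coding API classes.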
<python><kubernetes>
2023-04-17 04:51:31
3
323
0xREDACTED
76,031,736
14,358,886
Pandas - Table Pivot
<p>I currently have this table</p> <p><a href="https://i.sstatic.net/1fduw.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/1fduw.png" alt="enter image description here" /></a></p> <p>and want to get this output: <a href="https://i.sstatic.net/lqTZl.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/lqTZl.png" alt="enter image description here" /></a></p> <p>The expected logic is: I want to strip any columns that have periods in the column name, after the third period. In this case, there are 4, and after stripping the column names we will have 2 unique column names (Cars and Food).</p> <p>The additional column &quot;trait&quot; is just the value before the third period, and it will be in the expected output.</p> <p>The expected output will have more rows as it is pivoted. Also assume this case applies for any number of column names that have periods in them.</p> <p>I am not able to get it yet, but my current code is pasted below.</p> <pre><code>import pandas as pd data = { 'GenoType': [1, 2, 4], 'Parameter1': [2, 2, 787], 'First.Second.Third.Car': ['Honda', 'Tesla', 'Jeep'], 'First.Second.Third.Food': ['Pizza', 'Fries', 'Grapes'], 'Red.Orange.Yellow.Car': ['Acura', 'BMW', 'Toyota'], 'Red.Orange.Yellow.Food': ['Burger', 'Potatoes', 'Wings'] } df = pd.DataFrame(data) melted_df = df.melt(id_vars=['GenoType', 'Parameter1'], var_name='Trait', value_name='Value') melted_df[['Trait', 'Column']] = melted_df['Trait'].str.rsplit('.', n=1, expand=True) pivoted_df = melted_df.pivot_table(index=['GenoType', 'Parameter1', 'Trait'], columns='Column', values='Value', aggfunc='first') pivoted_df.reset_index(inplace=True) pivoted_df.columns.name = '' # Melt the pivoted_df to obtain the final output final_df = pivoted_df.melt(id_vars=['GenoType', 'Parameter1', 'Trait'], var_name='Column', value_name='Value') final_df.sort_values(by=['GenoType', 'Parameter1', 'Trait', 'Column'], inplace=True) final_df.reset_index(drop=True, inplace=True) expanded_df = 
final_df.pivot_table(index=['GenoType', 'Parameter1', 'Trait'], columns='Column', values='Value', aggfunc='first') expanded_df.reset_index(inplace=True) expanded_df.columns.name = '' expanded_df = expanded_df.rename_axis(None, axis=1) print(expanded_df) </code></pre>
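Annotation: the melt + rsplit + pivot_table in the question already does the job; the extra melt/pivot round-trip at the end is what muddies the result. A trimmed sketch using the question's own data (splitting each dotted name on its last period, so everything before it becomes the trait):

```python
import pandas as pd

data = {
    'GenoType': [1, 2, 4],
    'Parameter1': [2, 2, 787],
    'First.Second.Third.Car': ['Honda', 'Tesla', 'Jeep'],
    'First.Second.Third.Food': ['Pizza', 'Fries', 'Grapes'],
    'Red.Orange.Yellow.Car': ['Acura', 'BMW', 'Toyota'],
    'Red.Orange.Yellow.Food': ['Burger', 'Potatoes', 'Wings'],
}
df = pd.DataFrame(data)

# Long form: one row per (row, dotted-column) pair, then split the name on
# its LAST period: the prefix is the Trait, the remainder the new column.
melted = df.melt(id_vars=['GenoType', 'Parameter1'], var_name='full', value_name='Value')
melted[['Trait', 'Column']] = melted['full'].str.rsplit('.', n=1, expand=True)

out = (melted.pivot_table(index=['GenoType', 'Parameter1', 'Trait'],
                          columns='Column', values='Value', aggfunc='first')
             .reset_index()
             .rename_axis(None, axis=1))
print(out)
```

The result has one row per (GenoType, Parameter1, Trait) combination with `Car` and `Food` as columns, which matches the described pivot.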
<python><pandas><dataframe>
2023-04-17 04:33:46
2
802
Void S
76,031,591
1,290,147
Custom metric for a CatBoost classifier using GPU & optuna
<p>I have the following objective function to run in an Optuna hyperparameter optimization:</p> <pre><code>def objective(trial, data=data): num_train_pool = data[&quot;num_train_pool&quot;] num_test_pool = data[&quot;num_test_pool&quot;] true_labels_test = data[&quot;true_labels_test&quot;] params = {'objective': 'MultiClass', 'task_type': 'GPU', 'logging_level': 'Verbose', 'bootstrap_type': 'Bayesian', 'learning_rate': trial.suggest_loguniform('learning_rate', 1e-4, 1), 'depth': trial.suggest_int('depth', 3, 12), 'l2_leaf_reg': trial.suggest_loguniform('l2_leaf_reg', 1e-2, 1), 'border_count': trial.suggest_int('border_count', 32, 255) } def custom_precision(y_true, y_pred): true_positives = ((y_true[:, 0] == -1) &amp; (y_pred[:, 0] == -1)).sum() + ((y_true[:, 1] == 1) &amp; (y_pred[:, 1] == 1)).sum() false_positives = ((y_true[:, 0] != -1) &amp; (y_pred[:, 0] == -1)).sum() + ((y_true[:, 1] != 1) &amp; (y_pred[:, 1] == 1)).sum() return true_positives / (true_positives + false_positives + 1e-8) model = catboost.CatBoostClassifier(**params, custom_metric='Precision:custom_precision', random_seed=65) model.fit(num_train_pool, verbose=0, eval_set=num_test_pool, early_stopping_rounds=5) preds = model.predict(num_test_pool) pred_labels = np.rint(preds) score = custom_precision(true_labels_test, pred_labels) </code></pre> <p>This is for an unbalanced classification problem, where my class weights are as follows: {-1: 12.5, 0:0.5, 1:10}. I only really care about obtaining a high True Positive score for classes -1 and 1, which is why I wrote that custom precision function. Also, my train Pool has a weight_vector, since not all the samples have the same &quot;quality&quot;. 
My Pool looks like this:</p> <pre><code>num_train_pool = catboost.Pool(data=X_train, label=y_train, weight=y_train_weights, feature_names=X_train.columns.to_list()) </code></pre> <p>After running my optimization, I get this:</p> <pre><code>CatBoostError: catboost/private/libs/options/loss_description.cpp:34: Invalid metric description, it should be in the form &quot;metric_name:param1=value1;...;paramN=valueN&quot; </code></pre> <p>I think I am following the CatBoost <a href="https://catboost.ai/en/docs/concepts/python-usages-examples#custom-loss-function-eval-metric" rel="nofollow noreferrer">guidelines</a> correctly, but can't figure out what is wrong with my code. Any feedback?</p>
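Annotation: `custom_metric='Precision:custom_precision'` is parsed by CatBoost as a built-in metric name plus parameters, hence the loss-description error; a Python function cannot be referenced by name in that string. Per the linked docs, a custom Python eval metric is passed as an *object* via `eval_metric=`, and Python-side metrics are not evaluated on GPU, so computing the score after training (as the last lines of the objective already do) is the pragmatic route. A hedged, self-contained version of that post-hoc precision, written for 1-D label arrays rather than the column-indexed form in the post:

```python
import numpy as np

def precision_for_classes(y_true, y_pred, classes=(-1, 1)):
    """Micro-averaged precision restricted to the classes of interest:
    TP = predictions of a watched class that are correct,
    FP = predictions of a watched class that are wrong."""
    y_true = np.asarray(y_true).ravel()
    y_pred = np.asarray(y_pred).ravel()
    tp = sum(((y_true == c) & (y_pred == c)).sum() for c in classes)
    fp = sum(((y_true != c) & (y_pred == c)).sum() for c in classes)
    return tp / (tp + fp + 1e-8)
```

In the objective you would then drop `custom_metric` from the constructor entirely and end with `return precision_for_classes(true_labels_test, pred_labels)` so Optuna maximizes it.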
<python><machine-learning><classification><catboost><optuna>
2023-04-17 03:47:09
1
5,137
Luis Miguel
76,031,569
3,398,324
Merge PyTorch predictions to original dataframe
<p>I have obtained predictions from my PyTorch model as a tensor with the following shape (torch.Size([2958, 96])). My original dataset has 2958 qids and some of them have a max number of documents of 96 (min is 47). The predictions have padded the missing ones with -1s. The shape of my original dataframe is (221567, 7).</p> <p>I would like to merge the predictions from the PyTorch model or tensor back to this dataframe, using the qid. Every row of the tensor represents a qid, while every column represents the rank of that particular document (based on the order of documents in each qid).</p> <p>Below is a minimal example (after converting the tensor to a df):</p> <pre><code>tensor = {'0': ['3', '1','2'],'1': ['2', '1','2'],'2': ['2', '1','-1']} y_pred = pd.DataFrame(tensor) data = {'qid': ['0', '0','0','1', '1','1','2', '2'],'irrelevant_col': ['foo', 'foo','foo','foo', 'bar','bar','bar', 'bar']} original_df = pd.DataFrame(data) </code></pre> <p>Notice that for qid==2, there are only 2 rows, and the tensor has a '-1' in row 2 and column 2 because of this. Also, the order of the tensor is correct in the sense that it matches the order of the items in the dataframe. This is the target output:</p> <pre><code>target = {'qid': ['0', '0','0','1', '1','1','2', '2'],'irrelevant_col': ['foo', 'foo','foo','foo', 'bar','bar','bar', 'bar'],'y_pred': ['3', '2','2','1', '1','1','2', '2']} target_df = pd.DataFrame(target) </code></pre> <p>EDIT: I fixed an incorrect column (2 instead of 3), and made the last y_pred -1.</p>
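Annotation: since rows of the prediction matrix line up with qids in order and columns with each qid's document order, a `groupby(...).cumcount()` lookup avoids any reshaping; the padding columns are simply never indexed. A sketch with stand-in data (it assumes qids map to consecutive row indices 0..n-1; otherwise build a qid-to-row mapping first):

```python
import numpy as np
import pandas as pd

# rows = qids (in dataframe order), columns = document position; -1 = padding
preds = np.array([[3, 1, 2],
                  [2, 1, 2],
                  [2, 1, -1]])

original_df = pd.DataFrame({'qid': [0, 0, 0, 1, 1, 1, 2, 2],
                            'irrelevant_col': list('abcdefgh')})

row = original_df['qid'].to_numpy()                     # which prediction row
col = original_df.groupby('qid').cumcount().to_numpy()  # position within the qid
original_df['y_pred'] = preds[row, col]                 # fancy indexing, one value per row
print(original_df)
```

Because `cumcount` only goes as high as the number of real documents per qid, the `-1` padding cells are never selected.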
<python><pytorch><prediction>
2023-04-17 03:39:59
2
1,051
Tartaglia
76,031,485
2,955,827
Inherit function arguments definition in python
<p>If I have these two functions:</p> <pre class="lang-py prettyprint-override"><code>def foo(code:int, *args, **kwargs): if code == 1: bar(*args, **kwargs) else: return None def bar(a:str, b:int, c:int=1, d:int=2): pass </code></pre> <p>I want users to be able to see what arguments can be passed to <code>foo</code>, just as if I had defined <code>foo</code> like:</p> <pre class="lang-py prettyprint-override"><code>def foo(code:int, a:str, b:int, c:int=1, d:int=2): if code == 1: bar(a, b, c, d) else: return None </code></pre> <p>What can I do to inherit <code>bar</code>'s arguments in <code>foo</code>?</p> <p><code>ParamSpec</code> is almost there but not enough for my situation: <code>foo</code> has one more argument, <code>code</code>, than <code>bar</code>.</p>
<python><type-hinting>
2023-04-17 03:13:42
0
3,295
PaleNeutron
76,031,409
3,713,336
Fastest way to loop through GCP project
<p>I am trying to loop through GCP projects, where I have more than 50,000 projects and close to 20,000 buckets. However, what is the best way to make the loop faster? Here is my code below:</p> <pre class="lang-py prettyprint-override"><code>for project in projects: #print(project) project = project['projectId'] try: buckets = client.list_buckets(project=project) for bucket in buckets: policy = bucket.get_iam_policy() print(policy) for test in policy: if 'name' in test: print(&quot;this is just an example&quot;) except... </code></pre> <pre><code>def main(): logger = logging.getLogger() logger.setLevel(logging.INFO) try: projects = build('cloudresourcemanager', 'v1', credentials=Credentials.from_authorized_user_info(info=google.auth.default()[1]), cache_discovery=False).projects().list().execute().get('projects', []) except DefaultCredentialsError: logger.exception(&quot;Error getting default credentials&quot;) return thread = 25 with concurrent.futures.ThreadPoolExecutor(max_workers=thread) as executor: futures = [executor.submit(get_bucket_policies, project) for project in projects] results = [] for future in concurrent.futures.as_completed(futures): results.extend(future.result()) logger.info(f&quot;Finished processing {len(results)} of {len(projects)} projects&quot;) for project_id, bucket_name in results: print(f&quot;{project_id} - {bucket_name}&quot;) </code></pre>
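Annotation: the listing work is I/O-bound, so the `ThreadPoolExecutor` in `main` is the right shape; what the snippet is missing is the `get_bucket_policies` worker it submits. A hedged sketch of that worker and the fan-out (the helper names are mine; `client.list_buckets(project=...)` and `bucket.get_iam_policy()` are the google-cloud-storage calls already used in the question, stubbed out in the test here):

```python
from concurrent.futures import ThreadPoolExecutor, as_completed

def get_bucket_policies(client, project):
    """Worker for one project: returns (project_id, bucket_name) pairs."""
    project_id = project['projectId']
    pairs = []
    try:
        for bucket in client.list_buckets(project=project_id):
            pairs.append((project_id, bucket.name))
    except Exception:
        pass  # one failing project shouldn't kill the whole run
    return pairs

def collect_all(client, projects, max_workers=25):
    results = []
    with ThreadPoolExecutor(max_workers=max_workers) as executor:
        futures = [executor.submit(get_bucket_policies, client, p) for p in projects]
        for future in as_completed(futures):
            results.extend(future.result())
    return results
```

At 50,000 projects, per-minute API quotas usually dominate: raising the thread count much beyond 25-50 mostly produces rate-limit errors, so batching plus retry-with-backoff tends to help more than extra threads.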
<python><performance><google-cloud-platform><devops>
2023-04-17 02:47:16
0
357
lisa_rao007
76,031,309
12,311,071
Is it possible to hint that a function parameter should not be modified?
<p>For example, I might have an abstract base class with some abstract method that takes some mutable type as a parameter:</p> <pre class="lang-py prettyprint-override"><code>from abc import * class AbstractClass(metaclass=ABCMeta): @abstractmethod def abstract_method(self, mutable_parameter: list | set): raise NotImplementedError </code></pre> <p>Is there some way of hinting to the function implementer that this parameter should not be modified in any implementation of this method?</p> <p>I imagine maybe something like <code>ReadOnly</code> could exist:</p> <pre class="lang-py prettyprint-override"><code>def abstract_method(self, mutable_parameter: ReadOnly[list]): </code></pre> <p>but I can't seem to find anything suggesting such a thing does exist.</p> <p>From looking at the typing module docs I would have assumed that <code>Final</code> was what I am looking for but PyCharm tells me <code>'Final' could not be used in annotations for function parameters</code>.</p>
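Annotation: there is no `ReadOnly[...]` for parameters in the typing module today, and `Final` is indeed only for name bindings. The conventional hint is to annotate the parameter with the read-only ABCs: `Sequence` and `AbstractSet` expose no mutating methods, so a type checker flags `.append(...)` / `.add(...)` in any implementation. A sketch:

```python
from abc import ABCMeta, abstractmethod
from typing import AbstractSet, Sequence, Union

class AbstractClass(metaclass=ABCMeta):
    @abstractmethod
    def abstract_method(self, parameter: Union[Sequence, AbstractSet]):
        # Sequence/AbstractSet define no append/add/remove, so an
        # implementation that mutates `parameter` fails type checking.
        raise NotImplementedError

class Impl(AbstractClass):
    def abstract_method(self, parameter: Union[Sequence, AbstractSet]) -> int:
        return len(parameter)  # reading is fine; mutation would be flagged
```

Callers can still pass a plain `list` or `set` (both satisfy the ABCs); the hint only constrains what implementations may *do* with the value, which matches the intent of the question.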
<python><python-typing>
2023-04-17 02:13:23
2
314
Harry
76,031,275
12,300,981
How is the error of the solutions in your residuals incorporated in the error of minimized variables
<p>I have a pretty straightforward question. Given a system of equations</p> <pre><code>x+y=1 x-y=2 x+2y=3 </code></pre> <p>Find the values of x and y. A simple minimization problem.</p> <pre><code>from scipy.optimize import minimize import numpy as np sol1=1 sol2=2 sol3=3 def fun(k): a=k[0]+k[1] b=k[0]-k[1] c=k[0]+2*k[1] residual=(a-sol1)**2+(b-sol2)**2+(c-sol3)**2 return residual min=minimize(fun,x0=(0,0)) solution=min.x error=np.diag(min.hess_inv) &gt;&gt;&gt; fun: 1.785714285714286 hess_inv: array([[ 0.21428571, -0.07142857], [-0.07142857, 0.10714286]]) jac: array([-2.98023224e-08, 0.00000000e+00]) message: 'Optimization terminated successfully.' nfev: 18 nit: 4 njev: 6 status: 0 success: True x: array([1.85714285, 0.21428571]) </code></pre> <p>So a solution is found at x=1.8+/-0.21 and y=0.2+/-0.107, with a residual of 1.78. Quite straightforward.</p> <p>But in this scenario, we assumed the solutions that the residuals were calculated against (i.e. sol1, sol2, sol3) had no errors. But what if there are errors associated with them?</p> <pre><code>from scipy.optimize import minimize import numpy as np sol1=1 sol1_error=0.5 sol2=2 sol2_error=1 sol3=3 sol3_error=1.5 def fun(k): a=k[0]+k[1] b=k[0]-k[1] c=k[0]+2*k[1] residual=(a-sol1)**2+(b-sol2)**2+(c-sol3)**2 return residual min=minimize(fun,x0=(0,0)) solution=min.x error=np.diag(min.hess_inv) </code></pre> <p>How does one incorporate the errors of the 3 solutions into the final error calculation?</p>
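Annotation: the standard way to propagate known per-equation errors is to weight each residual by 1/sigma (a chi-square fit); for a linear model the parameter covariance then comes from the weighted normal matrix (A^T W A)^-1 with W = diag(1/sigma^2), which is generally more trustworthy than BFGS's approximate `hess_inv`. A sketch under those assumptions:

```python
import numpy as np
from scipy.optimize import minimize

A = np.array([[1.0, 1.0], [1.0, -1.0], [1.0, 2.0]])  # rows: x+y, x-y, x+2y
b = np.array([1.0, 2.0, 3.0])                        # sol1, sol2, sol3
sigma = np.array([0.5, 1.0, 1.5])                    # their errors

def chi2(k):
    # each residual divided by its error before squaring
    return np.sum(((A @ k - b) / sigma) ** 2)

fit = minimize(chi2, x0=(0.0, 0.0))

# Parameter covariance for weighted linear least squares
W = np.diag(1.0 / sigma**2)
cov = np.linalg.inv(A.T @ W @ A)
perr = np.sqrt(np.diag(cov))
print(fit.x, perr)
```

Equivalently, `scipy.optimize.curve_fit(..., sigma=sigma, absolute_sigma=True)` performs the same weighting and returns this covariance matrix directly.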
<python><scipy><scipy-optimize>
2023-04-17 02:02:42
0
623
samman
76,031,100
6,087,667
pandas' `merge` drops levels on multiindex joins
<p>I don't understand why the <code>merge</code> call below dropped the zero level <code>l0</code> while <code>join</code> didn't? I don't see that behavior described in the docs. Any explanation?</p> <pre><code>import string import numpy as np import pandas as pd alph = string.ascii_lowercase n=5 inds = pd.MultiIndex.from_tuples([(i,j) for i in alph[:n] for j in range(1,n)]) t = pd.DataFrame(data=np.random.randint(0,10, len(inds)), index=inds).sort_index() t.index.names=['l0', 'l1'] t2 = pd.DataFrame(data = [222,333], index=[2,3]) t2.index.names = ['l1'] display(t, t2) display(t.merge(t2,how='left', on='l1')) display(t.join(t2,how='left', on='l1', lsuffix='_x', rsuffix='_y')) </code></pre>
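Annotation: when `merge` joins on a key that is not a full index-to-index match, it treats the operation as a column join and builds a fresh `RangeIndex` for the result, discarding the callers' indexes (so `l0` goes with them); `join(on=...)` is index-aware and keeps the caller's index. A common workaround is to move the levels into columns, merge, and restore the index:

```python
import string

import numpy as np
import pandas as pd

alph = string.ascii_lowercase
n = 5
inds = pd.MultiIndex.from_tuples([(i, j) for i in alph[:n] for j in range(1, n)])
t = pd.DataFrame(np.random.randint(0, 10, len(inds)), index=inds).sort_index()
t.index.names = ['l0', 'l1']

t2 = pd.DataFrame([222, 333], index=pd.Index([2, 3], name='l1'))

# Keep every index level: columns in, merge, index back out.
merged = (t.reset_index()
            .merge(t2.reset_index(), on='l1', how='left', suffixes=('_x', '_y'))
            .set_index(['l0', 'l1']))
print(merged.head())
```

This reproduces what `join` gives while staying in `merge`'s column-join semantics.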
<python><pandas><join><merge>
2023-04-17 01:07:37
1
571
guyguyguy12345
76,031,012
15,178,267
Django: how to create a signal using django?
<p>I have a bidding system and there is an ending date for the bidding. I want users to bid against each other until the ending date of the bidding.</p> <p>The product creators are the ones who add the ending date, which will be some date in the future. How can I write a Django signal to check whether the date that was added as the ending date has been reached, so that I can then perform some operations, like marking the highest bidder as the winner?</p> <p>This is my model</p> <pre><code> class Product(models.Model): user = models.ForeignKey(User, on_delete=models.SET_NULL, null=True) price = models.DecimalField(max_digits=12, decimal_places=2, default=0.00) type = models.CharField(choices=PRODUCT_TYPE, max_length=10, default=&quot;regular&quot;) ending_date = models.DateTimeField(auto_now_add=False, null=True, blank=True) ... class ProductBidders(models.Model): user = models.ForeignKey(User, on_delete=models.SET_NULL, null=True) product = models.ForeignKey(Product, on_delete=models.SET_NULL, null=True, related_name=&quot;product_bidders&quot;) price = models.DecimalField(max_digits=12, decimal_places=2, default=1.00) date = models.DateTimeField(auto_now_add=True) </code></pre>
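Annotation: Django signals fire on ORM events (save, delete, m2m changes), not on wall-clock time, so nothing will "trigger" when `ending_date` arrives. The usual pattern is a periodic job (Celery beat, cron, or a management command) that selects expired products, e.g. `Product.objects.filter(ending_date__lte=timezone.now())`, and closes each one. The winner-picking core can be kept framework-free so it is unit-testable; a sketch with hypothetical dict-shaped bids standing in for `ProductBidders` rows:

```python
from datetime import datetime, timezone

def pick_winner(bids, ending_date, now=None):
    """Return the highest bid placed on or before ending_date, once the
    auction has ended, else None. `bids` are dicts with 'user', 'price',
    'date' keys (stand-ins for ProductBidders fields)."""
    now = now or datetime.now(timezone.utc)
    if now < ending_date:
        return None  # auction still running
    valid = [b for b in bids if b['date'] <= ending_date]
    return max(valid, key=lambda b: b['price'], default=None)
```

The periodic task would call something like this per expired product and then persist the result (e.g. set a `winner` field and flag the product closed) so the same product is not processed twice.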
<python><django><django-models><django-views>
2023-04-17 00:41:52
1
851
Destiny Franks
76,030,877
1,884,367
Problem with migrating Metropolis-Hastings algorithm from R to Python
<p>New to Python. I am trying to port the R code for the Metropolis Hastings Algorithm found <a href="https://stephens999.github.io/fiveMinuteStats/MH_intro.html" rel="nofollow noreferrer">here</a> over to Python. I have successfully duplicated the results in R, but am struggling with the Python version. I just can't seem to obtain anything close to the R results.</p> <p>R code:</p> <pre class="lang-r prettyprint-override"><code>target &lt;- function(x) { return(ifelse(x &lt; 0, 0, exp(-x))) } x &lt;- rep(0, 10000) x[1] &lt;- 3 #initialize; I've set arbitrarily set this to 3 for (i in 2:10000){ current_x &lt;- x[i - 1] proposal &lt;- rnorm(n = 1, mean = 0, sd = 1) proposed_x &lt;- current_x + proposal A &lt;- target(proposed_x) / target(current_x) if (runif(1) &lt; A){ x[i] &lt;- proposed_x # accept move with probabily min(1,A) } else { x[i] &lt;- current_x # otherwise &quot;reject&quot; move, and stay where we are } } hist(x, xlim = c(0, 10), probability = TRUE, main = &quot;Histogram of values of x visited by MH algorithm&quot;) </code></pre> <p>Python code:</p> <pre class="lang-py prettyprint-override"><code>import math import numpy as np import matplotlib.pyplot as plt import random as random def target(x): return(np.where(x &lt; 0, 0, math.exp(-x))) list_x = [3] # start with current_x = 3 for i in range(1,10000): current_x = list_x[-1] # pull last item from list_x proposal = np.random.normal(0,1) proposed_x = current_x + proposal alpha = target(proposed_x)/target(current_x) if min(1,alpha) &lt; 1: #if np.random.uniform(0,1) &lt; alpha: #min = 0, max = 1, sample size = 1 list_x.append(proposed_x) # accept and append proposed_x to list_x else: list_x.append(current_x) # reject and append current_x to list_x # print(list_x) # plt.xlim([0,10]) plt.hist(list_x, edgecolor='black') plt.show() </code></pre>
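Annotation: the R loop accepts with probability min(1, A) by comparing a fresh uniform draw to A; the Python port's `if min(1, alpha) < 1:` instead accepts exactly when the proposal density *dropped*, which inverts the sampler (the correct line is the one left commented out). A corrected sketch, seeded for reproducibility and with the histogram left out so the chain itself is checkable:

```python
import numpy as np

rng = np.random.default_rng(0)

def target(x):
    # Exp(1) density up to a constant; zero for negative x
    return 0.0 if x < 0 else np.exp(-x)

x = [3.0]  # arbitrary start, as in the R version
for _ in range(20000):
    current = x[-1]
    proposed = current + rng.normal(0, 1)
    alpha = target(proposed) / target(current)
    if rng.uniform() < alpha:   # accept with probability min(1, alpha)
        x.append(proposed)
    else:
        x.append(current)

x = np.array(x)
print(x.mean())  # should be near 1, the mean of Exp(1)
```

With the comparison fixed, `plt.hist(x, bins=50, density=True)` reproduces the exponential-shaped histogram from the R run.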
<python><r><montecarlo><markov>
2023-04-16 23:46:30
1
425
user1884367
76,030,844
1,798,351
how to get a human readable call stack from an ipynb error traceback['stack']?
<p>I'm hitting an error in a notebook and instead of showing a full call stack, it's a truncated stack with only the last few calls and the following message printed on top:</p> <pre><code>Output exceeds the size limit. Open the full output data *in a text editor* </code></pre> <p>Which links to a text file like:</p> <pre><code>{ &quot;name&quot;: &quot;TypeError&quot;, &quot;message&quot;: &quot;default_collate: batch must contain tensors, numpy arrays, numbers, dicts or lists; found &lt;class 'anndata._core.anndata.AnnData'&gt;&quot;, &quot;stack&quot;: &quot;\u001b[0;31m---------------------------------------------------------------------------\u001b[0m\n\u001b[0;31mTypeError\u001b[0m Traceback (most recent call last)\nFile \u001b[0;32m/usr/local/python/lib/python3.8/site-packages/torch/utils/data/_utils/collate.py:128\u001b[0m, in \u001b[0;36mcollate\u001b[0;34m(batch, collate_fn_map)\u001b[0m\n\u001b[1;32m 127\u001b[0m \u001b[39mtry\u001b[39;00m:\n\u001b[0;32m--&gt; 128\u001b[0m \u001b[39mreturn\u001b[39;00m elem_type({key: collate([d[key] \u001b[39mfor\u001b[39;00m d \u001b[39min\u001b[39;00m batch], collate_fn_map\u001b[39m=\u001b[39mcollate_fn_map) \u001b[39mfor\u001b[39;00m key \u001b[39min\u001b[39;00m elem})\n\u001b[1;32m 129\u001b[0m \u001 </code></pre> <p>The issue is that <code>['stack']</code> is not a human-readable value.</p> <p>How can I get a full call stack out of this, in human readable form?</p>
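Annotation: the `stack` value is the rendered traceback with ANSI colour escape sequences (`\u001b[0;31m` and friends) embedded; stripping them yields plain text. A sketch (the `readable_stack` helper name is mine):

```python
import json
import re

# Matches colour/reset sequences like \x1b[0;31m and \x1b[0m
ANSI_ESCAPE = re.compile(r'\x1b\[[0-9;]*m')

def readable_stack(error_json: str) -> str:
    """Load the saved error object and return its traceback without ANSI codes."""
    stack = json.loads(error_json)['stack']
    return ANSI_ESCAPE.sub('', stack)

sample = json.dumps({'stack': '\u001b[0;31mTypeError\u001b[0m Traceback (most recent call last)'})
print(readable_stack(sample))
```

Real tracebacks can also contain non-colour escapes (cursor movement etc.); the broader pattern `r'\x1b\[[0-9;?]*[A-Za-z]'` catches those as well if the simple one leaves residue.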
<python><exception><jupyter><stack-trace>
2023-04-16 23:38:34
1
1,073
mkk
76,030,776
10,363,163
How to get partial subtree depending on dependency relations with SpaCy?
<p>I have parsed the dependency relations of some text with SpaCy. How can I impose a condition relating to those dependency relations when extracting the subtree of a given token/span?</p> <p>For example, I would like to get the subtree of a given token but exclude all portions of the subtree where the immediate child of my original token has a conjunction (&quot;conj&quot;) dependency relation with that token.</p> <p>To give an even more concrete example: I would like to extract the names and the corresponding attributes from the following sentence: <code>&quot;The entrepreneur and philanthropist Bill Gates and the Apple's Steve Jobs ate hamburgers.&quot;</code></p> <div class="s-table-container"> <table class="s-table"> <thead> <tr> <th>person</th> <th>attribute</th> </tr> </thead> <tbody> <tr> <td>Bill Gates</td> <td>entrepreneur and philanthropist</td> </tr> <tr> <td>Steve Jobs</td> <td>Apple's</td> </tr> </tbody> </table> </div> <p>The dependency relations look like this: <a href="https://i.sstatic.net/RXHWA.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/RXHWA.png" alt="dependency graph" /></a></p> <p>The following code succeeds at extracting the person entities but Bill Gates' subtree overlaps that of Steve Jobs:</p> <pre class="lang-py prettyprint-override"><code>import spacy nlp = spacy.load(&quot;en_core_web_trf&quot;) s = &quot;The entrepreneur and philanthropist Bill Gates and the Apple's Steve Jobs ate hamburgers.&quot; doc = nlp(s) persons = [ent for ent in doc.ents if ent.label_ == &quot;PERSON&quot;] # [Bill Gates, Steve Jobs] [[token for token in p.subtree] for p in persons] # [[The, entrepreneur, and, philanthropist, Bill, Gates, and, the, Apple, 's, Steve, Jobs], [the, Apple, 's, Steve, Jobs]] </code></pre> <p>So I would like to either get only the parts of Bill Gates' subtree where the first child has a <code>nmod</code> dependency relation, or remove those parts that are connected to a first child with the <code>conj</code> 
dependency relation. In R, the package <a href="https://github.com/vanatteveldt/rsyntax" rel="nofollow noreferrer">rsyntax</a> would get the job done so I assume something similar is already built into SpaCy.</p> <p>(Any tips for smarter ways to get the table above are also appreciated – I'm not super well-versed in SpaCy nor Python in general)</p>
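Annotation: spaCy's `Token.subtree` has no filtering hook, but a small recursive walk over `Token.children` can prune any child (and its whole branch) whose dependency label is in a blocked set. The sketch only assumes the duck-typed `children` / `dep_` attributes, so it runs on stubs here but applies to real spaCy tokens unchanged:

```python
def pruned_subtree(token, blocked=("conj", "cc")):
    """Return token plus its descendants, skipping every child whose
    dependency label is in `blocked` (the child's branch is dropped whole)."""
    out = [token]
    for child in token.children:
        if child.dep_ in blocked:
            continue
        out.extend(pruned_subtree(child, blocked))
    return out
```

For the sentence above you would call it with each person entity's root token (e.g. `pruned_subtree(persons[0].root)`): blocking `conj` drops the "Steve Jobs" branch hanging off "Gates", and blocking `cc` drops the stray "and"; the tokens left over are the attribute phrase.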
<python><graph-theory><spacy>
2023-04-16 23:20:18
0
3,489
dufei
76,030,663
1,330,719
Excluding fields on a pydantic model when it is the nested child of another model
<p>I have a pydantic model that I want to dynamically exclude fields on.</p> <p>I can do this by overriding the <code>dict</code> function on the model so it can take my custom flag, e.g.:</p> <pre class="lang-py prettyprint-override"><code>class MyModel(BaseModel): field: str def dict(self, **kwargs): if kwargs.pop('exclude_special_fields', False): return super().dict(exclude={&quot;field&quot;: True}, **kwargs) return super().dict(**kwargs) </code></pre> <p>However, this does not work if my model is a child of another model that has <code>.dict</code> called on it:</p> <pre class="lang-py prettyprint-override"><code>class AnotherModel(BaseModel): models: List[MyModel] AnotherModel(models=[...]).dict(exclude_special_fields=True) # does not work </code></pre> <p>This is because when <code>MyModel.dict()</code> is called, it isn't called with the same arguments as the parent.</p> <p>I could write a <code>dict</code> override on the parent model too to specify an exclude for any child components (e.g. <code>exclude={&quot;models&quot;: {&quot;__all__&quot;: {&quot;field&quot;: True}}}</code>), but in my real-world example, I have many parent models that use this one sub-model, and I don't want to have to write an override for each one.</p> <p>Is there any way I can ensure the child model knows when to exclude fields?</p> <p><em><strong>Extra context</strong></em></p> <p>Extra context not completely important to the question, but the reason I want to do this is to exclude certain fields on a model if it's ever returned from an API call.</p>
<python><pydantic>
2023-04-16 22:37:11
2
1,269
rbhalla
76,030,612
9,668,218
How to compare two python files with the same name in different branches in GitHub?
<p>I have 2 versions of a Python code (python files with the same name) in 2 different branches in GitHub.</p> <p>How can I compare these files in GitHub?</p> <p>I am aware of <code>git diff</code> command but I am looking for a way to compare files in the repository user interface (instead of using git commands in the terminal).</p>
<python><github><compare><branch>
2023-04-16 22:21:01
1
1,033
Mohammad
76,030,280
635,799
How to import a module in Python unit test?
<pre><code>. ├── mymodule │ ├── __init__.py │ └── foo.py ├── tests ├── __init__.py ├── utils.py └── test_module.py </code></pre> <p>I have the above directory structure for my Python package.</p> <p>Within the <code>test_module.py</code> file, I need to import <code>tests/utils.py</code>. How should I import it so that I can run the test by using both</p> <ul> <li><code>python tests/test_module.py</code></li> <li><code>pytest</code> from the root directory?</li> </ul> <p>Currently, I imported <code>tests/utils.py</code> as follows in the <code>test_module.py</code> file:</p> <pre class="lang-py prettyprint-override"><code>from utils import myfunction </code></pre> <p>Then, <code>pytest</code> will generate the following error:</p> <pre><code>E ModuleNotFoundError: No module named 'utils' </code></pre> <p>If I change to:</p> <pre class="lang-py prettyprint-override"><code>from .utils import myfunction </code></pre> <p>Then, <code>pytest</code> works fine, but <code>python tests/test_module.py</code> generates the following error:</p> <pre><code>ImportError: attempted relative import with no known parent package </code></pre> <p>How can I properly import <code>utils.py</code> so that I can use in both ways?</p>
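Annotation: with `tests/__init__.py` present, pytest imports the file as `tests.test_module` and puts the first directory *without* an `__init__.py` (the repo root) on `sys.path`, so the package-absolute `from tests.utils import myfunction` works under pytest. The direct `python tests/test_module.py` case fails only because `sys.path[0]` is then `tests/` itself, not the root; one extra line at the top of the test file covers it. A sketch (the helper name is mine; `python -m tests.test_module` from the root is the cleaner alternative that needs no path surgery):

```python
import os
import sys

def make_repo_root_importable(test_file):
    """Put the directory two levels above `test_file` (the repo root) at the
    front of sys.path, so `from tests.utils import ...` resolves when the
    file is run directly as `python tests/test_module.py`. pytest already
    does the equivalent because tests/ contains an __init__.py."""
    root = os.path.dirname(os.path.dirname(os.path.abspath(test_file)))
    if root not in sys.path:
        sys.path.insert(0, root)

# In tests/test_module.py, before any test imports:
# make_repo_root_importable(__file__)
# from tests.utils import myfunction
```

With that in place both invocations resolve the same absolute import, and no relative-import machinery is needed.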
<python><import><pytest>
2023-04-16 20:52:48
1
898
Chang
76,030,225
19,854,658
How to select separate lines in a json file for json.loads()?
<p>Anyone know an effective way to select just one line at a time of this json file in Python?</p> <p>I want to be able to write each line into a relational database but json.loads() throws an 'Extra Data' error if I don't select each line separately.</p> <p>Thanks</p> <pre><code>{&quot;flight_date&quot;:&quot;2023-04-14&quot;,&quot;flight_status&quot;:&quot;scheduled&quot;,&quot;departure.airport&quot;:&quot;Seoul (Incheon)&quot;,&quot;departure.timezone&quot;:&quot;Asia/Seoul&quot;,&quot;departure.iata&quot;:&quot;ICN&quot;,&quot;departure.icao&quot;:&quot;RKSI&quot;,&quot;departure.terminal&quot;:&quot;1&quot;,&quot;departure.gate&quot;:&quot;38&quot;,&quot;departure.scheduled&quot;:&quot;2023-04-14T12:00:00+00:00&quot;,&quot;departure.estimated&quot;:&quot;2023-04-14T12:00:00+00:00&quot;,&quot;arrival.airport&quot;:&quot;Fukuoka&quot;,&quot;arrival.timezone&quot;:&quot;Asia/Tokyo&quot;,&quot;arrival.iata&quot;:&quot;FUK&quot;,&quot;arrival.icao&quot;:&quot;RJFF&quot;,&quot;arrival.terminal&quot;:&quot;I&quot;,&quot;arrival.scheduled&quot;:&quot;2023-04-14T13:20:00+00:00&quot;,&quot;arrival.estimated&quot;:&quot;2023-04-14T13:20:00+00:00&quot;,&quot;airline&quot;:&quot;Korean Air&quot;,&quot;flight.number&quot;:&quot;5077&quot;,&quot;flight.iata&quot;:&quot;KE5077&quot;,&quot;flight.icao&quot;:&quot;KAL5077&quot;,&quot;flight.codeshared.airline_name&quot;:&quot;jin 
air&quot;,&quot;flight.codeshared.airline_iata&quot;:&quot;lj&quot;,&quot;flight.codeshared.airline_icao&quot;:&quot;jna&quot;,&quot;flight.codeshared.flight_number&quot;:&quot;223&quot;,&quot;flight.codeshared.flight_iata&quot;:&quot;lj223&quot;,&quot;flight.codeshared.flight_icao&quot;:&quot;jna223&quot;,&quot;destination&quot;:&quot;Tokyo&quot;,&quot;country&quot;:&quot;Japan&quot;,&quot;arrival_airport&quot;:&quot;Fukuoka&quot;,&quot;schedule_arrive&quot;:&quot;2023-04-14T13:20:00+00:00&quot;,&quot;temperature&quot;:16,&quot;description&quot;:1,&quot;wind_speed&quot;:24,&quot;wind_degree&quot;:240,&quot;humidity&quot;:72,&quot;feelslike&quot;:16,&quot;visibility&quot;:10,&quot;cloud_cover&quot;:25} {&quot;flight_date&quot;:&quot;2023-04-14&quot;,&quot;flight_status&quot;:&quot;scheduled&quot;,&quot;departure.airport&quot;:&quot;Seoul (Incheon)&quot;,&quot;departure.timezone&quot;:&quot;Asia/Seoul&quot;,&quot;departure.iata&quot;:&quot;ICN&quot;,&quot;departure.icao&quot;:&quot;RKSI&quot;,&quot;departure.terminal&quot;:&quot;1&quot;,&quot;departure.gate&quot;:&quot;E01&quot;,&quot;departure.scheduled&quot;:&quot;2023-04-14T12:00:00+00:00&quot;,&quot;departure.estimated&quot;:&quot;2023-04-14T12:00:00+00:00&quot;,&quot;arrival.airport&quot;:&quot;Taiwan Taoyuan International (Chiang Kai Shek International)&quot;,&quot;arrival.timezone&quot;:&quot;Asia/Taipei&quot;,&quot;arrival.iata&quot;:&quot;TPE&quot;,&quot;arrival.icao&quot;:&quot;RCTP&quot;,&quot;arrival.terminal&quot;:&quot;2&quot;,&quot;arrival.gate&quot;:&quot;C8&quot;,&quot;arrival.baggage&quot;:&quot;7B&quot;,&quot;arrival.scheduled&quot;:&quot;2023-04-14T13:35:00+00:00&quot;,&quot;arrival.estimated&quot;:&quot;2023-04-14T13:35:00+00:00&quot;,&quot;airline&quot;:&quot;Thai Airways International&quot;,&quot;flight.number&quot;:&quot;6397&quot;,&quot;flight.iata&quot;:&quot;TG6397&quot;,&quot;flight.icao&quot;:&quot;THA6397&quot;,&quot;flight.codeshared.airline_name&quot;:&quot;eva 
air&quot;,&quot;flight.codeshared.airline_iata&quot;:&quot;br&quot;,&quot;flight.codeshared.airline_icao&quot;:&quot;eva&quot;,&quot;flight.codeshared.flight_number&quot;:&quot;169&quot;,&quot;flight.codeshared.flight_iata&quot;:&quot;br169&quot;,&quot;flight.codeshared.flight_icao&quot;:&quot;eva169&quot;,&quot;destination&quot;:&quot;Taipei&quot;,&quot;country&quot;:&quot;Taiwan&quot;,&quot;arrival_airport&quot;:&quot;Taiwan Taoyuan International (Chiang Kai Shek International)&quot;,&quot;schedule_arrive&quot;:&quot;2023-04-14T13:35:00+00:00&quot;,&quot;temperature&quot;:22,&quot;description&quot;:2,&quot;wind_speed&quot;:6,&quot;wind_degree&quot;:310,&quot;humidity&quot;:88,&quot;feelslike&quot;:25,&quot;visibility&quot;:6,&quot;cloud_cover&quot;:50} {&quot;flight_date&quot;:&quot;2023-04-14&quot;,&quot;flight_status&quot;:&quot;scheduled&quot;,&quot;departure.airport&quot;:&quot;Cologne/bonn&quot;,&quot;departure.timezone&quot;:&quot;Europe/Berlin&quot;,&quot;departure.iata&quot;:&quot;CGN&quot;,&quot;departure.icao&quot;:&quot;EDDK&quot;,&quot;departure.delay&quot;:22,&quot;departure.scheduled&quot;:&quot;2023-04-14T03:55:00+00:00&quot;,&quot;departure.estimated&quot;:&quot;2023-04-14T03:55:00+00:00&quot;,&quot;arrival.airport&quot;:&quot;Vienna International&quot;,&quot;arrival.timezone&quot;:&quot;Europe/Vienna&quot;,&quot;arrival.iata&quot;:&quot;VIE&quot;,&quot;arrival.icao&quot;:&quot;LOWW&quot;,&quot;arrival.scheduled&quot;:&quot;2023-04-14T05:23:00+00:00&quot;,&quot;arrival.estimated&quot;:&quot;2023-04-14T05:23:00+00:00&quot;,&quot;airline&quot;:&quot;UPS Airlines&quot;,&quot;flight.number&quot;:&quot;274&quot;,&quot;flight.iata&quot;:&quot;5X274&quot;,&quot;flight.icao&quot;:&quot;UPS274&quot;,&quot;destination&quot;:&quot;Vienna&quot;,&quot;country&quot;:&quot;Austria&quot;,&quot;arrival_airport&quot;:&quot;Vienna 
International&quot;,&quot;schedule_arrive&quot;:&quot;2023-04-14T05:23:00+00:00&quot;,&quot;temperature&quot;:7,&quot;description&quot;:3,&quot;wind_speed&quot;:17,&quot;wind_degree&quot;:330,&quot;humidity&quot;:87,&quot;feelslike&quot;:5,&quot;visibility&quot;:10,&quot;cloud_cover&quot;:75} {&quot;flight_date&quot;:&quot;2023-04-14&quot;,&quot;flight_status&quot;:&quot;scheduled&quot;,&quot;departure.airport&quot;:&quot;Seoul (Incheon)&quot;,&quot;departure.timezone&quot;:&quot;Asia/Seoul&quot;,&quot;departure.iata&quot;:&quot;ICN&quot;,&quot;departure.icao&quot;:&quot;RKSI&quot;,&quot;departure.terminal&quot;:&quot;1&quot;,&quot;departure.gate&quot;:&quot;E01&quot;,&quot;departure.scheduled&quot;:&quot;2023-04-14T12:00:00+00:00&quot;,&quot;departure.estimated&quot;:&quot;2023-04-14T12:00:00+00:00&quot;,&quot;arrival.airport&quot;:&quot;Taiwan Taoyuan International (Chiang Kai Shek International)&quot;,&quot;arrival.timezone&quot;:&quot;Asia/Taipei&quot;,&quot;arrival.iata&quot;:&quot;TPE&quot;,&quot;arrival.icao&quot;:&quot;RCTP&quot;,&quot;arrival.terminal&quot;:&quot;2&quot;,&quot;arrival.gate&quot;:&quot;C8&quot;,&quot;arrival.baggage&quot;:&quot;7B&quot;,&quot;arrival.scheduled&quot;:&quot;2023-04-14T13:35:00+00:00&quot;,&quot;arrival.estimated&quot;:&quot;2023-04-14T13:35:00+00:00&quot;,&quot;airline&quot;:&quot;Thai Airways International&quot;,&quot;flight.number&quot;:&quot;6397&quot;,&quot;flight.iata&quot;:&quot;TG6397&quot;,&quot;flight.icao&quot;:&quot;THA6397&quot;,&quot;flight.codeshared.airline_name&quot;:&quot;eva air&quot;,&quot;flight.codeshared.airline_iata&quot;:&quot;br&quot;,&quot;flight.codeshared.airline_icao&quot;:&quot;eva&quot;,&quot;flight.codeshared.flight_number&quot;:&quot;169&quot;,&quot;flight.codeshared.flight_iata&quot;:&quot;br169&quot;,&quot;flight.codeshared.flight_icao&quot;:&quot;eva169&quot;,&quot;destination&quot;:&quot;Taipei&quot;,&quot;country&quot;:&quot;Taiwan&quot;,&quot;arrival_airport&quot;:&quot;Taiwan Taoyuan 
International (Chiang Kai Shek International)&quot;,&quot;schedule_arrive&quot;:&quot;2023-04-14T13:35:00+00:00&quot;,&quot;temperature&quot;:22,&quot;description&quot;:4,&quot;wind_speed&quot;:6,&quot;wind_degree&quot;:310,&quot;humidity&quot;:88,&quot;feelslike&quot;:25,&quot;visibility&quot;:6,&quot;cloud_cover&quot;:50} {&quot;flight_date&quot;:&quot;2023-04-14&quot;,&quot;flight_status&quot;:&quot;scheduled&quot;,&quot;departure.airport&quot;:&quot;Guangzhou Baiyun International&quot;,&quot;departure.timezone&quot;:&quot;Asia/Shanghai&quot;,&quot;departure.iata&quot;:&quot;CAN&quot;,&quot;departure.icao&quot;:&quot;ZGGG&quot;,&quot;departure.terminal&quot;:&quot;2&quot;,&quot;departure.scheduled&quot;:&quot;2023-04-14T10:10:00+00:00&quot;,&quot;departure.estimated&quot;:&quot;2023-04-14T10:10:00+00:00&quot;,&quot;arrival.airport&quot;:&quot;Xiamen&quot;,&quot;arrival.timezone&quot;:&quot;Asia/Shanghai&quot;,&quot;arrival.iata&quot;:&quot;XMN&quot;,&quot;arrival.icao&quot;:&quot;ZSAM&quot;,&quot;arrival.terminal&quot;:&quot;3&quot;,&quot;arrival.scheduled&quot;:&quot;2023-04-14T11:40:00+00:00&quot;,&quot;arrival.estimated&quot;:&quot;2023-04-14T11:40:00+00:00&quot;,&quot;airline&quot;:&quot;Hebei Airlines&quot;,&quot;flight.number&quot;:&quot;8312&quot;,&quot;flight.iata&quot;:&quot;NS8312&quot;,&quot;flight.icao&quot;:&quot;HBH8312&quot;,&quot;flight.codeshared.airline_name&quot;:&quot;xiamen 
airlines&quot;,&quot;flight.codeshared.airline_iata&quot;:&quot;mf&quot;,&quot;flight.codeshared.airline_icao&quot;:&quot;cxa&quot;,&quot;flight.codeshared.flight_number&quot;:&quot;8306&quot;,&quot;flight.codeshared.flight_iata&quot;:&quot;mf8306&quot;,&quot;flight.codeshared.flight_icao&quot;:&quot;cxa8306&quot;,&quot;destination&quot;:&quot;Shanghai&quot;,&quot;country&quot;:&quot;China&quot;,&quot;arrival_airport&quot;:&quot;Xiamen&quot;,&quot;schedule_arrive&quot;:&quot;2023-04-14T11:40:00+00:00&quot;,&quot;temperature&quot;:16,&quot;description&quot;:5,&quot;wind_speed&quot;:4,&quot;wind_degree&quot;:170,&quot;humidity&quot;:94,&quot;feelslike&quot;:16,&quot;visibility&quot;:10,&quot;cloud_cover&quot;:75} {&quot;flight_date&quot;:&quot;2023-04-14&quot;,&quot;flight_status&quot;:&quot;scheduled&quot;,&quot;departure.airport&quot;:&quot;Hangzhou&quot;,&quot;departure.timezone&quot;:&quot;Asia/Shanghai&quot;,&quot;departure.iata&quot;:&quot;HGH&quot;,&quot;departure.icao&quot;:&quot;ZSHC&quot;,&quot;departure.terminal&quot;:&quot;3&quot;,&quot;departure.scheduled&quot;:&quot;2023-04-14T09:20:00+00:00&quot;,&quot;departure.estimated&quot;:&quot;2023-04-14T09:20:00+00:00&quot;,&quot;arrival.airport&quot;:&quot;Nanning&quot;,&quot;arrival.timezone&quot;:&quot;Asia/Shanghai&quot;,&quot;arrival.iata&quot;:&quot;NNG&quot;,&quot;arrival.icao&quot;:&quot;ZGNN&quot;,&quot;arrival.terminal&quot;:&quot;T2&quot;,&quot;arrival.scheduled&quot;:&quot;2023-04-14T12:05:00+00:00&quot;,&quot;arrival.estimated&quot;:&quot;2023-04-14T12:05:00+00:00&quot;,&quot;airline&quot;:&quot;Loong Air&quot;,&quot;flight.number&quot;:&quot;3479&quot;,&quot;flight.iata&quot;:&quot;GJ3479&quot;,&quot;flight.icao&quot;:&quot;CDC3479&quot;,&quot;flight.codeshared.airline_name&quot;:&quot;xiamen 
airlines&quot;,&quot;flight.codeshared.airline_iata&quot;:&quot;mf&quot;,&quot;flight.codeshared.airline_icao&quot;:&quot;cxa&quot;,&quot;flight.codeshared.flight_number&quot;:&quot;8351&quot;,&quot;flight.codeshared.flight_iata&quot;:&quot;mf8351&quot;,&quot;flight.codeshared.flight_icao&quot;:&quot;cxa8351&quot;,&quot;destination&quot;:&quot;Shanghai&quot;,&quot;country&quot;:&quot;China&quot;,&quot;arrival_airport&quot;:&quot;Nanning&quot;,&quot;schedule_arrive&quot;:&quot;2023-04-14T12:05:00+00:00&quot;,&quot;temperature&quot;:16,&quot;description&quot;:6,&quot;wind_speed&quot;:7,&quot;wind_degree&quot;:210,&quot;humidity&quot;:94,&quot;feelslike&quot;:16,&quot;visibility&quot;:10,&quot;cloud_cover&quot;:75} {&quot;flight_date&quot;:&quot;2023-04-14&quot;,&quot;flight_status&quot;:&quot;scheduled&quot;,&quot;departure.airport&quot;:&quot;Hangzhou&quot;,&quot;departure.timezone&quot;:&quot;Asia/Shanghai&quot;,&quot;departure.iata&quot;:&quot;HGH&quot;,&quot;departure.icao&quot;:&quot;ZSHC&quot;,&quot;departure.terminal&quot;:&quot;3&quot;,&quot;departure.scheduled&quot;:&quot;2023-04-14T09:20:00+00:00&quot;,&quot;departure.estimated&quot;:&quot;2023-04-14T09:20:00+00:00&quot;,&quot;arrival.airport&quot;:&quot;Nanning&quot;,&quot;arrival.timezone&quot;:&quot;Asia/Shanghai&quot;,&quot;arrival.iata&quot;:&quot;NNG&quot;,&quot;arrival.icao&quot;:&quot;ZGNN&quot;,&quot;arrival.terminal&quot;:&quot;T2&quot;,&quot;arrival.scheduled&quot;:&quot;2023-04-14T12:05:00+00:00&quot;,&quot;arrival.estimated&quot;:&quot;2023-04-14T12:05:00+00:00&quot;,&quot;airline&quot;:&quot;Hebei Airlines&quot;,&quot;flight.number&quot;:&quot;8353&quot;,&quot;flight.iata&quot;:&quot;NS8353&quot;,&quot;flight.icao&quot;:&quot;HBH8353&quot;,&quot;flight.codeshared.airline_name&quot;:&quot;xiamen 
airlines&quot;,&quot;flight.codeshared.airline_iata&quot;:&quot;mf&quot;,&quot;flight.codeshared.airline_icao&quot;:&quot;cxa&quot;,&quot;flight.codeshared.flight_number&quot;:&quot;8351&quot;,&quot;flight.codeshared.flight_iata&quot;:&quot;mf8351&quot;,&quot;flight.codeshared.flight_icao&quot;:&quot;cxa8351&quot;,&quot;destination&quot;:&quot;Shanghai&quot;,&quot;country&quot;:&quot;China&quot;,&quot;arrival_airport&quot;:&quot;Nanning&quot;,&quot;schedule_arrive&quot;:&quot;2023-04-14T12:05:00+00:00&quot;,&quot;temperature&quot;:16,&quot;description&quot;:7,&quot;wind_speed&quot;:7,&quot;wind_degree&quot;:210,&quot;humidity&quot;:94,&quot;feelslike&quot;:16,&quot;visibility&quot;:10,&quot;cloud_cover&quot;:75} {&quot;flight_date&quot;:&quot;2023-04-14&quot;,&quot;flight_status&quot;:&quot;scheduled&quot;,&quot;departure.airport&quot;:&quot;Yancheng&quot;,&quot;departure.timezone&quot;:&quot;Asia/Shanghai&quot;,&quot;departure.iata&quot;:&quot;YNZ&quot;,&quot;departure.icao&quot;:&quot;ZSYN&quot;,&quot;departure.scheduled&quot;:&quot;2023-04-14T11:55:00+00:00&quot;,&quot;departure.estimated&quot;:&quot;2023-04-14T11:55:00+00:00&quot;,&quot;arrival.airport&quot;:&quot;Changsha&quot;,&quot;arrival.timezone&quot;:&quot;Asia/Shanghai&quot;,&quot;arrival.iata&quot;:&quot;CSX&quot;,&quot;arrival.icao&quot;:&quot;ZGHA&quot;,&quot;arrival.terminal&quot;:&quot;2&quot;,&quot;arrival.scheduled&quot;:&quot;2023-04-14T14:00:00+00:00&quot;,&quot;arrival.estimated&quot;:&quot;2023-04-14T14:00:00+00:00&quot;,&quot;airline&quot;:&quot;Chongqing 
Airlines&quot;,&quot;flight.number&quot;:&quot;2005&quot;,&quot;flight.iata&quot;:&quot;OQ2005&quot;,&quot;flight.icao&quot;:&quot;CQN2005&quot;,&quot;destination&quot;:&quot;Shanghai&quot;,&quot;country&quot;:&quot;China&quot;,&quot;arrival_airport&quot;:&quot;Changsha&quot;,&quot;schedule_arrive&quot;:&quot;2023-04-14T14:00:00+00:00&quot;,&quot;temperature&quot;:16,&quot;description&quot;:8,&quot;wind_speed&quot;:7,&quot;wind_degree&quot;:210,&quot;humidity&quot;:94,&quot;feelslike&quot;:16,&quot;visibility&quot;:10,&quot;cloud_cover&quot;:75} </code></pre>
<python><json>
2023-04-16 20:40:05
1
379
Jean-Paul Azzopardi
76,030,177
11,388,321
WordPress REST API: how to assign different categories to a post using the Python requests library
<p>I've created a Python script to create &amp; post articles to a WordPress site, but it seems that the category in the post data isn't getting assigned to the posts and they always get assigned to the <code>uncategorized</code> category. I am looking to assign only 1 category to the posts.</p> <p>Am I doing something wrong here? The WordPress REST API docs aren't really helpful.</p> <p>Here's the post creator function:</p> <pre><code># Post creator function def create_post(inputTitleSent, outputText): randomAuthor = random.choice(authorList) post_status = &quot;draft&quot; headers = { &quot;Content-Type&quot;: &quot;application/x-www-form-urlencoded&quot; } post = { &quot;title&quot;: inputTitleSent, &quot;content&quot;: outputText, &quot;status&quot;: post_status, &quot;author&quot;: randomAuthor, &quot;categories:&quot;: &quot;6&quot; } url = wp_base_url + &quot;/wp-json/wp/v2/posts&quot; response = requests.post(url, data=post, headers=headers, auth=(wp_username,wp_password)) return response </code></pre> <p>I've tried <code>&quot;categories&quot;: 6</code>, and I saw somewhere that it's supposed to be an array, so I tried <code>&quot;categories&quot;: [6]</code> and <code>&quot;categories&quot;: ['6']</code>, but the posts still get assigned to the <code>uncategorized</code> category.</p>
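Two things in the snippet are worth checking: the dictionary key is written as `"categories:"` with a trailing colon, which the REST API will not recognize, and form-encoding (`data=post`) is unreliable for list values, while the API expects `categories` as an array of integer term IDs. A sketch of the payload shape (`build_post_payload` is an illustrative helper, not WordPress API):

```python
def build_post_payload(title, content, status, author_id, category_ids):
    # key is "categories" (no trailing colon) and the value is a list of
    # integer term IDs, which is what the WP REST API expects
    return {
        "title": title,
        "content": content,
        "status": status,
        "author": author_id,
        "categories": list(category_ids),
    }

payload = build_post_payload("Hello", "Body text", "draft", 3, [6])
# sending it as JSON sidesteps form-encoding quirks with list values:
#   requests.post(url, json=payload, auth=(wp_username, wp_password))
```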
<python><wordpress><python-requests><wordpress-rest-api>
2023-04-16 20:31:03
3
810
overdeveloping
76,030,107
1,330,719
Excluding pydantic model fields only when returned as part of a FastAPI call
<h2>Context</h2> <p>I have a very complex pydantic model with a lot of nested pydantic models. I would like to ensure certain fields are never returned as part of API calls, but I would like those fields present for internal logic.</p> <h2>What I tried</h2> <p>I first tried using pydantic's <code>Field</code> function to specify the <code>exclude</code> flag on the fields I didn't want returned. This worked, however functions in my internal logic had to override this whenever they called <code>.dict()</code> by calling <code>.dict(exclude=None)</code>.</p> <p>Instead, I specified a custom flag <code>return_in_api</code> on the <code>Field</code>, with the goal being to only apply exclusions when FastAPI called <code>.dict()</code>. I tried writing a middleware to call <code>.dict()</code> and pass through my own <code>exclude</code> property based on which nested fields contained <code>return_in_api=False</code>. However FastAPI's middleware was giving me a stream for the response which I didn't want to prematurely resolve.</p> <p>Instead, I wrote a decorator that called <code>.dict()</code> on the return values of my route handlers with the appropriate <code>exclude</code> value.</p> <h2>Problem</h2> <p>One challenge is that whenever new endpoints get added, the person who added them has to remember to include this decorator, otherwise fields leak.</p> <p>Ideally I would like to apply this decorator to every route, but doing it through middleware seems to break response streaming.</p>
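One way to make the decorator approach less error-prone is to centralize the exclusion logic in a single plain function that walks the already-serialized dict (the output of `.dict()`), so every route shares one implementation. The sketch below assumes internal-only field names are known as a set; the names are illustrative, not pydantic API:

```python
def drop_internal_fields(data, internal_keys):
    # recursively remove keys flagged internal-only from a serialized payload,
    # leaving the pydantic models (and internal logic) untouched
    if isinstance(data, dict):
        return {
            key: drop_internal_fields(value, internal_keys)
            for key, value in data.items()
            if key not in internal_keys
        }
    if isinstance(data, list):
        return [drop_internal_fields(item, internal_keys) for item in data]
    return data

payload = {"id": 1, "secret": "x", "children": [{"name": "a", "secret": "y"}]}
public = drop_internal_fields(payload, {"secret"})
# public == {"id": 1, "children": [{"name": "a"}]}
```

A declarative alternative is FastAPI's `response_model` parameter: declare a public model without the internal fields and FastAPI filters every response down to it, though that means maintaining a second model per endpoint.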
<python><fastapi><pydantic><starlette>
2023-04-16 20:15:06
2
1,269
rbhalla
76,029,974
3,323,394
How to run blocking operation loops concurrently using Python asyncio?
<p>I have two unrelated blocking operations that listen to different events. When either of them returns, I need to do the appropriate handling of the underlying event it raised.</p> <p>For some reason, no matter how I schedule them using asyncio, I never get to run them concurrently. Apparently, <code>receive_json()</code> seems to block indefinitely whenever the other loop is running, which is why I suspect a concurrency problem on the <code>websocket</code> or the async loop, without being able to really pinpoint what it is or how to solve the issue.</p> <p>My current code is illustrated below in this simplified example, but I've also tried other asyncio interfaces like running them in a single loop, using timeouts, or using <code>asyncio.wait()</code>, without any more success.</p> <p>The techs used are uvicorn as an ASGI server, FastAPI for the web interface, Redis pub/sub (redis-py connector) as one awaitable and the Starlette WebSocket as the other. They run in a Docker container, hosted on a Windows machine if that's of any interest.</p> <pre class="lang-py prettyprint-override"><code> async def await_redis(p): return str(p.get_message(timeout=None)) @router.websocket('/') async def ws_endpoint(websocket: WebSocket): async def ws_loop(): while True: data = await websocket.receive_json() # Blocks here whenever rd_loop runs messages = await handler(data) r.publish('some-channel', messages) async def rd_loop(): r = Redis('host') p = r.pubsub('some-channel') while True: mess = await await_redis(p) if mess: await websocket.send_json([mess]) # The strange thing is if rd_loop exits because of exception, # ws_loop starts to receive and handle messages. await asyncio.gather(ws_loop(), rd_loop()) </code></pre>
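One thing worth checking in the code above: `p.get_message(timeout=None)` is a synchronous call, so wrapping it in an `async def` does not make it cooperative — `await await_redis(p)` still blocks the entire event loop while it waits, which would explain why `receive_json()` never runs. Pushing the blocking call onto a worker thread (e.g. `asyncio.to_thread`, Python 3.9+) frees the loop. A self-contained sketch with `time.sleep` standing in for the blocking Redis call:

```python
import asyncio
import time

def blocking_wait(label, seconds):
    # stands in for p.get_message(timeout=None): a synchronous, blocking call
    time.sleep(seconds)
    return label

async def main():
    start = time.monotonic()
    # each blocking call runs in a worker thread, so the event loop stays free
    results = await asyncio.gather(
        asyncio.to_thread(blocking_wait, "redis", 0.2),
        asyncio.to_thread(blocking_wait, "websocket", 0.2),
    )
    return results, time.monotonic() - start

results, elapsed = asyncio.run(main())
# both waits overlap: elapsed is close to 0.2 s rather than 0.4 s
```

Recent redis-py versions also ship a native `redis.asyncio` client whose pub/sub `get_message` is awaitable, which avoids the thread hop altogether.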
<python><websocket><redis><fastapi><starlette>
2023-04-16 19:40:43
1
1,512
Diane M
76,029,891
16,067,738
Python-MySQL connection: entries made in the Python terminal are not committing to the MySQL server, producing an empty table despite showing no error
<p>I am working on a small project that pertains to the connectivity of Python with MySQL. What I have envisioned is that I will use Python to store credentials like username and password in the database named <code>PENGUINX</code> on the MySQL server I connected to. So far, my code looks clean and shows no error upon execution, but when I go back to MySQL Workbench and attempt to display the rows Python has entered into the <code>Credents</code> table by running the command <code>select * from credents;</code>, it gives me an empty table, which apparently indicates that my entries are not committing to the server database. The Python-SQL connectivity shows no defect when I try to create a table or a new database with Python.</p> <pre><code>import random import mysql.connector as boss connection = boss.connect(host = 'localhost ', username = 'root', password = '********', database = 'penguinx') control = connection.cursor() control.execute('use penguinx ;') print('---------DATABASE OF PENGUIN-X USERS CREDENTIALS---------') def login_credentials(): __ask_number_of_datachunks__ = int(input('How many credentials to be entered ? : ')) for i in range(__ask_number_of_datachunks__) : ask_email = input('Enter Your Username : ') ask_pass = input('Enter Your password : ') assign_id = random.randint(1,100) control.execute('insert into credents values({},&quot;{}&quot;,&quot;{}&quot;);'.format(assign_id,ask_email,ask_pass)) control.commit() login_credentials() </code></pre> <p>Bear in mind that commit() in control.commit() is not being recognized as a function (it does not blend with the color of function code).</p>
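The likely culprit is the last line of the function: in DB-API 2.0 drivers (including mysql-connector-python), `commit()` is a method of the *connection*, not the cursor, which is why the editor does not highlight `control.commit()` as a known method. The sketch below uses the stdlib `sqlite3` driver purely because it follows the same DB-API shape; it also uses a parameterized query instead of `str.format()`, which avoids SQL injection (note that mysql.connector uses `%s` placeholders rather than sqlite3's `?`):

```python
import sqlite3

# sqlite3 stands in for mysql.connector here: both follow DB-API 2.0,
# where commit() lives on the connection object
connection = sqlite3.connect(":memory:")
cursor = connection.cursor()
cursor.execute("CREATE TABLE credents (id INTEGER, email TEXT, pw TEXT)")

# placeholders instead of str.format(): the driver quotes values safely
cursor.execute("INSERT INTO credents VALUES (?, ?, ?)", (42, "a@b.com", "secret"))

connection.commit()  # commit on the connection; cursors have no commit()
rows = cursor.execute("SELECT * FROM credents").fetchall()
# rows == [(42, "a@b.com", "secret")]
```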
<python><mysql>
2023-04-16 19:20:32
1
417
LunaticXXD10
76,029,548
906,551
Connecting to Bluetooth devices from Python for Windows?
<p>I've spent a good time searching, but cannot find a solution that does not require installing C++ compilers. Surely in 2023 there is a way to use Bluetooth from Python with pure Python or at least a pre-built module?</p>
<python><windows><bluetooth>
2023-04-16 18:15:09
1
349
Scott Thibault
76,029,219
629,283
Setting up Docker with Python 3.9 + Poetry, getting 'does not contain any element'
<p>I am trying to set up Docker for local dev and I keep getting &quot;does not contain any element&quot; when I try to use Poetry with Docker. Originally I used requirements.txt; however, I would prefer to use Poetry because where I work we have a pyserver from which we need to pull some of the .whl files.</p> <p>So I am trying to keep it basic. This is my structure and some basic code:</p> <p><a href="https://i.sstatic.net/9G93o.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/9G93o.png" alt="enter image description here" /></a></p> <p>-------------------main.py-------------</p> <pre><code>from fastapi import FastAPI app = FastAPI() @app.get(&quot;/&quot;) async def root(): return {&quot;message&quot;: &quot;Hello, World!&quot;} </code></pre> <p>----------------------uvicorn_conf.py------</p> <pre><code>from uvicorn.workers import UvicornWorker bind = &quot;0.0.0.0:8000&quot; workers = 4 worker_class = UvicornWorker </code></pre> <p>-----------Dockerfile-------</p> <pre><code>FROM python:3.9 # Set the working directory to /app WORKDIR /app # Copy the entire project directory to the container COPY . . 
# Install Poetry and project dependencies RUN pip install poetry RUN poetry config virtualenvs.create false RUN poetry install --no-dev # Start the server CMD [&quot;poetry&quot;, &quot;run&quot;, &quot;start&quot;] </code></pre> <p>---------pyproject.toml------</p> <pre><code>[tool.poetry] name = &quot;my-app&quot; version = &quot;0.1.0&quot; description = &quot;&quot; authors = [&quot;py_dev &lt;test@testing.com&gt;&quot;] readme = &quot;README.md&quot; packages = [{include = &quot;my_app&quot;}] [tool.poetry.dependencies] python = &quot;^3.9&quot; fastapi = &quot;^0.95.1&quot; uvicorn = &quot;^0.21.1&quot; [build-system] requires = [&quot;poetry-core&quot;] build-backend = &quot;poetry.core.masonry.api&quot; [tool.poetry.scripts] start = &quot;uvicorn my_app.main:app --config uvicorn_conf.py&quot; </code></pre> <p>And every time i run build, I get this error. <a href="https://i.sstatic.net/r74Hc.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/r74Hc.png" alt="enter image description here" /></a> I am not sure how to fix this. Any advice or direction on how to get it working would be brilliant. Thank you in advance.</p>
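Two things in the pyproject above are worth double-checking (hedged, since the project tree in the screenshot isn't visible here): `packages = [{include = "my_app"}]` makes Poetry look for a `my_app/` package containing an `__init__.py`, and "does not contain any element" is the error poetry-core can raise when that include resolves to no files; separately, a `[tool.poetry.scripts]` entry must reference a Python callable as `module:function`, not a shell command line. A possible shape:

```toml
# pyproject.toml (sketch, under the assumptions above)
[tool.poetry]
name = "my-app"
version = "0.1.0"
description = ""
authors = ["py_dev <test@testing.com>"]
readme = "README.md"
# requires my_app/__init__.py to exist; drop the line if there is no package
packages = [{include = "my_app"}]

[tool.poetry.scripts]
# a callable reference, not a command line; my_app/main.py would need a
# start() function that calls uvicorn.run(app, ...)
start = "my_app.main:start"
```

Alternatively, skip the script entirely and run uvicorn straight from the Dockerfile: `CMD ["poetry", "run", "uvicorn", "my_app.main:app", "--host", "0.0.0.0"]`.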
<python><docker><python-poetry>
2023-04-16 17:08:49
1
349
user629283
76,029,048
4,835,204
Code crashes when put into separate functions within a class
<p>I've ran into an odd situation in Python where code works if I put everything in the <code>__init__</code> function of a class, but if I break it up into functions within the class, the otherwise identical code crashes. Here is my current example:</p> <p>This works:</p> <pre><code># SimpleDisplay5.py import faulthandler import pypangolin as pango import OpenGL.GL as gl import numpy as np import time class Display3D(object): def __init__(self): self.win = pango.CreateWindowAndBind('Pangolin Viewer', 640, 480) gl.glEnable(gl.GL_DEPTH_TEST) pm = pango.ProjectionMatrix(640, 480, 420, 420, 320, 240, 0.1, 1000) mv = pango.ModelViewLookAt(-0, 0.5, -3, 0, 0, 0, pango.AxisY) self.s_cam = pango.OpenGlRenderState(pm, mv) ui_width = 180 handler = pango.Handler3D(self.s_cam) self.d_cam = ( pango.CreateDisplay() .SetBounds( pango.Attach(0), pango.Attach(1), pango.Attach.Pix(ui_width), pango.Attach(1), -640.0 / 480.0, ) .SetHandler(handler) ) self.points = np.random.uniform(-1.0, 1.0, (100, 3)).astype(np.float32) gl.glPointSize(5) gl.glColor3f(1.0, 0.0, 0.0) # red while not pango.ShouldQuit(): gl.glClear(gl.GL_COLOR_BUFFER_BIT | gl.GL_DEPTH_BUFFER_BIT) self.d_cam.Activate(self.s_cam) pango.glDrawPoints(self.points) pango.FinishFrame() # end while # end function # end class def main(): faulthandler.enable() np.set_printoptions(suppress=True) disp3d = Display3D() while True: time.sleep(0.5) # end while # end function if __name__ == &quot;__main__&quot;: main() </code></pre> <p>This crashes with a segmentation fault:</p> <pre><code># SimpleDisplay6.py import faulthandler import pypangolin as pango import OpenGL.GL as gl import numpy as np import time class Display3D(object): def __init__(self): self.viewerInit() self.viewerRefresh() # end function def viewerInit(self): self.win = pango.CreateWindowAndBind('Pangolin Viewer', 640, 480) gl.glEnable(gl.GL_DEPTH_TEST) pm = pango.ProjectionMatrix(640, 480, 420, 420, 320, 240, 0.1, 1000) mv = pango.ModelViewLookAt(-0, 0.5, -3, 0, 0, 0, 
pango.AxisY) self.s_cam = pango.OpenGlRenderState(pm, mv) ui_width = 180 handler = pango.Handler3D(self.s_cam) self.d_cam = ( pango.CreateDisplay() .SetBounds( pango.Attach(0), pango.Attach(1), pango.Attach.Pix(ui_width), pango.Attach(1), -640.0 / 480.0, ) .SetHandler(handler) ) self.points = np.random.uniform(-1.0, 1.0, (100, 3)).astype(np.float32) gl.glPointSize(5) gl.glColor3f(1.0, 0.0, 0.0) # red # end function def viewerRefresh(self): while not pango.ShouldQuit(): gl.glClear(gl.GL_COLOR_BUFFER_BIT | gl.GL_DEPTH_BUFFER_BIT) self.d_cam.Activate(self.s_cam) pango.glDrawPoints(self.points) pango.FinishFrame() # end while # end function # end class def main(): faulthandler.enable() np.set_printoptions(suppress=True) disp3d = Display3D() while True: time.sleep(0.5) # end while # end function if __name__ == &quot;__main__&quot;: main() </code></pre> <p>Specifically, the 2nd program crashes when I mouse over the Pangolin viewer window. Here is the terminal output for the 2nd program:</p> <pre><code>$ python3 SimpleDisplay6.py Fatal Python error: Segmentation fault Current thread 0x00007f4384b05740 (most recent call first): File &quot;SimpleDisplay6.py&quot;, line 54 in viewerRefresh File &quot;SimpleDisplay6.py&quot;, line 14 in __init__ File &quot;SimpleDisplay6.py&quot;, line 64 in main File &quot;SimpleDisplay6.py&quot;, line 73 in &lt;module&gt; Segmentation fault (core dumped) </code></pre> <p>Line 54 in the 2nd program is:</p> <pre><code> pango.FinishFrame() </code></pre> <p>Note that the only difference between the 2 programs is in the 2nd, I divided up the <code>__init__</code> content into 2 separate functions. Can somebody explain what is going on here or how this is even possible? 
Thinking of how this would break down into CPython, I would have figured it should have been identical.</p> <p>I did post a question in the Pangolin viewer repo <a href="https://github.com/stevenlovegrove/Pangolin/issues/861" rel="nofollow noreferrer">https://github.com/stevenlovegrove/Pangolin/issues/861</a>, but the more I consider the situation I'm not sure this is a Pangolin concern since the 1st example works.</p> <p>Moreover, I recall something very similar happening before when I attempted to call OpenCV functions from within a Qt frame (I don't have the code for that currently however).</p> <p>I should probably also mention I'm aware the <code>while True</code> in <code>main()</code> is never reached in either program, I left that code in since my eventual plan was to start the Pangolin viewer on a separate thread when I get this current problem worked out.</p> <p>Any suggestions as to the cause of this or what I should check next?</p>
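One plausible explanation (a hypothesis worth testing, not certain from the code alone): in the first program, the local variable `handler` stays alive for the entire render loop because `__init__`'s frame never exits; in the second, `handler` is a local of `viewerInit` and is garbage-collected the moment that function returns. If the C++ side of `SetHandler` keeps only a raw pointer to the Python-owned handler, mousing over the window then dereferences freed memory. The stdlib sketch below demonstrates the lifetime difference; the corresponding fix would be to store the handler on the instance (e.g. `self.handler = pango.Handler3D(self.s_cam)`):

```python
import weakref

class Handler:
    """Stands in for pango.Handler3D: an object native code may point at."""

class SplitInit:
    # mirrors SimpleDisplay6: the handler is a local of a helper method
    def __init__(self):
        self._setup()

    def _setup(self):
        handler = Handler()               # dies when _setup() returns
        self.probe = weakref.ref(handler)

class KeptAlive:
    # mirrors the proposed fix: the handler is stored on the instance
    def __init__(self):
        self._setup()

    def _setup(self):
        self.handler = Handler()          # lives as long as the instance
        self.probe = weakref.ref(self.handler)

split, kept = SplitInit(), KeptAlive()
# split.probe() is None -- its handler was collected when _setup() returned;
# kept.probe() still returns the Handler instance
```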
<python><pangolin>
2023-04-16 16:35:47
1
3,840
cdahms
76,029,005
4,458,718
Docker won't install numpy on Ubuntu 18.04, but the same Dockerfile runs just fine on a SageMaker notebook instance
<p>I am trying to build an image using the docker file below (using this <a href="https://github.com/aws/amazon-sagemaker-examples/tree/main/advanced_functionality/scikit_bring_your_own" rel="nofollow noreferrer">example</a>). It works just fine on my notebook instance, but when I try to run it locally I get numpy errors like the below. I've spent hours trying to use other numpy versions, install python 3.8, use ubuntu 20.04 but it just leads to other error messages. It's making me think there is some underlying error, and the package error messages are just symptoms...why would the exact same docker files produce a different error message in a different environment? I thought they were downloading from the same source and running it containerized? I thought the whole point was that the container would build the same way no matter where you run it....and like I said, it gives no 3.8 version warning when I run the same image on the notebook instance (and I've verified that the same version of python gets installed in both cases). 
Why would numpy install behave differently?</p> <pre><code>Collecting numpy==1.16.2 Downloading https://files.pythonhosted.org/packages/cf/8d/6345b4f32b37945fedc1e027e83970005fc9c699068d2f566b82826515f2/numpy-1.16.2.zip (5.1MB) Collecting scipy==1.2.1 Downloading https://files.pythonhosted.org/packages/a9/b4/5598a706697d1e2929eaf7fe68898ef4bea76e4950b9efbe1ef396b8813a/scipy-1.2.1.tar.gz (23.1MB) Complete output from command python setup.py egg_info: Traceback (most recent call last): File &quot;/usr/lib/python3/dist-packages/setuptools/sandbox.py&quot;, line 154, in save_modules yield saved File &quot;/usr/lib/python3/dist-packages/setuptools/sandbox.py&quot;, line 195, in setup_context yield File &quot;/usr/lib/python3/dist-packages/setuptools/sandbox.py&quot;, line 250, in run_setup _execfile(setup_script, ns) File &quot;/usr/lib/python3/dist-packages/setuptools/sandbox.py&quot;, line 45, in _execfile exec(code, globals, locals) File &quot;/tmp/easy_install-ua250hpf/numpy-1.24.2/setup.py&quot;, line 20, in &lt;module&gt; RuntimeError: Python version &gt;= 3.8 required. </code></pre> <p>Docker file:</p> <pre><code># Build an image that can do training and inference in SageMaker # This is a Python 3 image that uses the nginx, gunicorn, flask stack # for serving inferences in a stable way. FROM ubuntu:18.04 MAINTAINER Amazon AI &lt;sage-learner@amazon.com&gt; RUN apt-get -y update &amp;&amp; apt-get install -y --no-install-recommends \ wget \ python3-pip \ python3-setuptools \ nginx \ ca-certificates \ &amp;&amp; rm -rf /var/lib/apt/lists/* RUN python3 --version RUN ln -s /usr/bin/python3 /usr/bin/python RUN ln -s /usr/bin/pip3 /usr/bin/pip # Here we get all python packages. # There's substantial overlap between scipy and numpy that we eliminate by # linking them together. Likewise, pip leaves the install caches populated which uses # a significant amount of space. These optimizations save a fair amount of space in the # image, which reduces start up time. 
RUN pip --no-cache-dir install numpy==1.16.2 scipy==1.2.1 scikit-learn==0.20.2 pandas flask gunicorn # Set some environment variables. PYTHONUNBUFFERED keeps Python from buffering our standard # output stream, which means that logs can be delivered to the user quickly. PYTHONDONTWRITEBYTECODE # keeps Python from writing the .pyc files which are unnecessary in this case. We also update # PATH so that the train and serve programs are found when the container is invoked. ENV PYTHONUNBUFFERED=TRUE ENV PYTHONDONTWRITEBYTECODE=TRUE ENV PATH=&quot;/opt/program:${PATH}&quot; # Set up the program in the image COPY decision_trees /opt/program WORKDIR /opt/program </code></pre>
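A plausible reading of the log (hedged): the local build is downloading *source* archives ("Downloading …numpy-1.16.2.zip", "…scipy-1.2.1.tar.gz") instead of prebuilt wheels, and scipy's `setup.py egg_info` then pulls in the newest numpy (1.24.2 in the traceback), which requires Python >= 3.8 — so the visible error is indeed a symptom. Ubuntu 18.04's packaged `python3-pip` is old, and old pip versions make different wheel-selection decisions than whatever the SageMaker host has. Upgrading pip before installing usually makes it pick compatible manylinux wheels:

```dockerfile
# upgrade pip/setuptools/wheel first so prebuilt manylinux wheels are used
# instead of building numpy/scipy from source
RUN pip --no-cache-dir install --upgrade pip setuptools wheel && \
    pip --no-cache-dir install numpy==1.16.2 scipy==1.2.1 scikit-learn==0.20.2 \
        pandas flask gunicorn
```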
<python><amazon-web-services><docker><numpy><amazon-sagemaker>
2023-04-16 16:29:03
1
1,931
L Xandor
76,028,969
238,898
PyPDF: extract text from highlight annotation coordinates
<p>I want to extract the text from highlight annotations.</p> <p>PyPDF should be able to do this.<br /> They provide an example of how to get the coordinates for the highlights: <a href="https://pypdf.readthedocs.io/en/latest/user/reading-pdf-annotations.html#highlights" rel="nofollow noreferrer">https://pypdf.readthedocs.io/en/latest/user/reading-pdf-annotations.html#highlights</a></p> <p>After I have the coordinates, I understand I should use <a href="https://pypdf.readthedocs.io/en/latest/user/extract-text.html?highlight=extract_text#using-a-visitor" rel="nofollow noreferrer">extract_text()</a> with <a href="https://pypdf.readthedocs.io/en/latest/modules/PageObject.html?highlight=visitor_text#pypdf._page.PageObject.extract_text" rel="nofollow noreferrer">visitor_text()</a>.</p> <p>But the use of this function is very confusing to me, and I can't seem to wrap my head around the two examples provided.</p> <p>Would anybody be so kind as to show me the link between the following code examples?</p> <pre><code>from pypdf import PdfReader reader = PdfReader(&quot;commented.pdf&quot;) for page in reader.pages: if &quot;/Annots&quot; in page: for annot in page[&quot;/Annots&quot;]: subtype = annot.get_object()[&quot;/Subtype&quot;] if subtype == &quot;/Highlight&quot;: coords = annot.get_object()[&quot;/QuadPoints&quot;] x1, y1, x2, y2, x3, y3, x4, y4 = coords # print(coords) # [51.10668, 704.2371, 99.92829, 704.2371, 51.10668, 689.7126, 99.92829, 689.7126] # page.extract_text(visitor_text=visitor_body) # ??? how to pass these coordinates </code></pre> <pre><code>parts = [] def visitor_body(text, cm, tm, font_dict, font_size): # ??? tm: Text Matrix parts.append(text) ... 
text_body = &quot;&quot;.join(parts) print(text_body) </code></pre> <p>from the manual:</p> <blockquote> <p><strong>extract_text</strong>(<br /> <strike>*args: Any,</strike><br /> <strike>Tj_sep: Optional[str] = None,</strike><br /> <strike>TJ_sep: Optional[str] = None,</strike><br /> <strike>orientations: Union[int, Tuple[int, ...]] = (0, 90, 180, 270),</strike><br /> <strike>space_width: float = 200.0,</strike><br /> <strike>visitor_operand_before: Optional[Callable[[Any, Any, Any, Any], None]] = None,</strike><br /> <strike>visitor_operand_after: Optional[Callable[[Any, Any, Any, Any], None]]= None,</strike><br /> <strong>visitor_text</strong>: Optional[Callable[[Any, Any, Any, Any, Any], None]] = None<br /> ) → str</p> <p>Locate all text drawing commands, in the order they are provided in the content stream, and extract the text. Additionally you can provide visitor-methods to get informed on all operations and all text-objects.</p> <p><em>Parameters</em><br /> <strong>visitor_text</strong> – function to be called when extracting some text at some position.<br /> It has five arguments: text, current transformation matrix, <strong>text matrix</strong>, font-dictionary and font-size.</p> <p><em>Returns</em><br /> The extracted text</p> </blockquote>
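The link between the two snippets is that `visitor_text` is just a callback: `extract_text` calls it for every text fragment it encounters, and the fragment's position can be read off the text matrix (`tm[4]` and `tm[5]` are the x and y translation). So the QuadPoints can be captured in a closure and used to keep only fragments inside the highlight rectangle. This is a simplification that ignores `cm`, which is usually fine for simple, unrotated pages:

```python
def make_highlight_visitor(parts, quad_points):
    # QuadPoints holds four corner coordinates: x1, y1, x2, y2, x3, y3, x4, y4
    xs, ys = quad_points[0::2], quad_points[1::2]
    left, right = min(xs), max(xs)
    bottom, top = min(ys), max(ys)

    def visitor_body(text, cm, tm, font_dict, font_size):
        # tm is the text matrix; tm[4], tm[5] give the text position
        # (ignoring cm -- a simplification for unrotated pages)
        x, y = tm[4], tm[5]
        if left <= x <= right and bottom <= y <= top:
            parts.append(text)

    return visitor_body

# usage sketch, inside the annotation loop from the first snippet:
#   parts = []
#   page.extract_text(visitor_text=make_highlight_visitor(parts, coords))
#   highlighted = "".join(parts)
```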
<python><pypdf>
2023-04-16 16:22:15
0
11,338
TunaFFish
76,028,844
10,313,194
How to get post data from facebook page using python?
<p>I'm trying to get post data from a Facebook page using the Facebook Graph API like this.</p> <p><a href="https://i.sstatic.net/5QuUB.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/5QuUB.png" alt="enter image description here" /></a></p> <p>The output doesn't show anything, so I tried to use facebook-scraper, following <a href="https://medium.com/analytics-vidhya/facebook-post-scraping-and-text-analytics-9eea10563a3a" rel="nofollow noreferrer">this tutorial</a>.</p> <pre><code>from facebook_scraper import get_posts import pandas as pd post_df_full = pd.DataFrame(columns = []) for post in get_posts('bbcnews', extra_info=True, pages=5, options={&quot;comments&quot;: True}): post_entry = post print(post_entry) fb_post_df = pd.DataFrame.from_dict(post_entry, orient='index') fb_post_df = fb_post_df.transpose() post_df_full = post_df_full.append(fb_post_df) </code></pre> <p>I use this code to print <code>post_entry</code>, which is the post from the page, but it doesn't show anything. How can I fix it?</p>

<python><facebook>
2023-04-16 16:03:37
1
639
user58519
76,028,664
1,916,588
asyncio to download data in batches and return a list of lists
<p>I want to download data in batches asynchronously.</p> <p>The data for each <code>name</code> is downloaded in batches, and I'd like <code>asyncio.gather(*coroutines)</code> to return a list of lists (a list of batches for each name). So far I have this code, but it raises an exception:</p> <pre class="lang-py prettyprint-override"><code>import asyncio import datetime async def run(names): &quot;&quot;&quot;Start one coroutine for each name.&quot;&quot;&quot; coroutines = [_fetch_data(name) for name in names] return await asyncio.gather(*coroutines) # This fails! async def _fetch_data(name): &quot;&quot;&quot;Fetch data for a single symbol in batches.&quot;&quot;&quot; start_timestamp = datetime.datetime(2021, 9, 1).timestamp() end_timestamp = datetime.datetime(2021, 9, 2).timestamp() i = 1 while start_timestamp &lt; end_timestamp: batch = f&quot;Batch {i} for {name}&quot; await asyncio.sleep(2) # Some async API call, for example # If I remove the yield, it works. But I want to use the results! yield batch start_timestamp += 3600 i += 1 async def main(): names = [&quot;Jack&quot;, &quot;Jill&quot;, &quot;Bob&quot;] return await run(names) output = asyncio.run(main()) print(output) # I'd expect something like # [[&quot;Batch 1 for Jack&quot;, &quot;Batch 2 for Jack&quot;, ...], [&quot;Batch 1 for Jill&quot;, &quot;Batch 2 for Jill&quot;, ...], ...] </code></pre> <p>Unfortunately, this code returns an exception for <code>asyncio.gather(*coroutines)</code>:</p> <blockquote> <p>TypeError: An asyncio.Future, a coroutine or an awaitable is required</p> </blockquote> <p>Isn't <code>_fetch_data</code> a coroutine? What is this error trying to tell me? And how can I get past it?</p> <p>I'm trying to learn more about asyncio in Python and I'm quite sure I'm missing some basics here.</p>
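What the error is saying: a function containing `yield` is an async *generator* function, not a coroutine function, so `_fetch_data(name)` returns an async generator, which `asyncio.gather` refuses. One fix (a sketch, with `asyncio.sleep(0)` standing in for the 2-second API call) is to keep the generator and wrap each one in a small coroutine that drains it with an async comprehension:

```python
import asyncio

async def _fetch_data(name):
    """Async generator: yields batches for one name."""
    for i in range(1, 3):
        await asyncio.sleep(0)  # stands in for the real async API call
        yield f"Batch {i} for {name}"

async def _collect(agen):
    """Wrap an async generator in a coroutine that gather() can await."""
    return [batch async for batch in agen]

async def run(names):
    coroutines = [_collect(_fetch_data(name)) for name in names]
    return await asyncio.gather(*coroutines)

output = asyncio.run(run(["Jack", "Jill"]))
# output is a list of batch-lists, one per name
```

The generators still run concurrently because `gather` schedules one `_collect` coroutine per name; only the per-name draining is sequential, which matches the batch-by-batch download.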
<python><python-asyncio>
2023-04-16 15:31:15
1
12,676
Kurt Bourbaki
76,028,639
1,609,066
VSCode Python Testing `Test result not found for...` bug
<p>While working in a multi-project (VSCode would call it multi-root) python repository the VSCode python plugin for testing fails to run all the test when choosing to run all the tests, however running individual folders or tests passes. This is obviously a strange issue so I investigated this further.</p> <p>The output of the vscode python plugin in the &quot;Test Results&quot; panel is as follows:</p> <pre><code>Running tests (pytest): /home/workspace Running test with arguments: --rootdir /home/workspace --override-ini junit_family=xunit1 --junit-xml=/tmp/tmp-289km5DwtjAXzkh.xml Current working directory: /home/workspace Workspace directory: /home/workspace Run completed, parsing output Test result not found for: ./my_awesome_dir/tests/my_awesome_module/my_awesome_test.py::TestAwesomeness::test_is_awesome[input_data0-AwesomeStuff] </code></pre> <p>This issue has been mentioned before on the vscode-python repo here: <a href="https://github.com/microsoft/vscode-python/issues/18658" rel="noreferrer">https://github.com/microsoft/vscode-python/issues/18658</a></p> <p>However even though I found a solution for my problem the VSCode Python repository does not allow adding more comments to existing issues and neither do they allow opening new issues so that I can add this information over there which is much better suited. Therefore I am resorting to adding this information here which may be the second best place.</p>
<python><pytest><vscode-extensions><vscode-python>
2023-04-16 15:26:52
1
1,171
Rijul Gupta
76,028,597
13,723,501
How can I send data from thread to Gtk app?
<p>I have the following code:</p> <pre class="lang-py prettyprint-override"><code>import socket from queue import Queue import gi gi.require_version('Gtk', '3.0') from gi.repository import Gtk, GLib import threading HOST = '' # Listen on all available network interfaces PORT = 5000 # Arbitrary port number def socket_server(queue): server_socket = socket.socket(socket.AF_INET, socket.SOCK_STREAM) server_socket.bind((HOST, PORT)) server_socket.listen(1) print('Listening for incoming connections...') while True: client_socket, client_address = server_socket.accept() data = client_socket.recv(1024) print(f'Received data from client: {data.decode()}') queue.put(data.decode()) client_socket.close() class MyWindow(Gtk.Window): def __init__(self, queue): self.queue = queue Gtk.Window.__init__(self, title=&quot;GTK App&quot;) self.set_default_size(400, 300) self.label = Gtk.Label(label=&quot;Welcome to the GTK App!&quot;) self.add(self.label) def set_label_text(self): text = self.queue.get() self.label.set_text(text) if __name__ == '__main__': q = Queue() socket_thread = threading.Thread(target=socket_server, args=(q,)) socket_thread.start() win = MyWindow(q) win.connect(&quot;destroy&quot;, Gtk.main_quit) win.show_all() Gtk.main() </code></pre> <p>The programs basically runs a socket server on a thread and a gtk app on the main thread. What I'm trying to do is to, every time the socket receives a request, to send the message recived to the gtk app and the gtk app update his label with that message. I'm trying to do it with <code>queue</code> but I'm stuck on <code>self.queue.get()</code> because I can't figure how to watch for messages without block GTK Main loop. How can I watch for queue messages without blocking the main gtk loop, are there better approaches to this?</p>
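A common pattern (sketched here, GTK wiring shown only as comments since it needs a display): never call the blocking `queue.get()` on the main thread; instead drain the queue with `get_nowait()` from a callback registered via `GLib.timeout_add`, which keeps firing as long as it returns `True`. The drain logic itself is plain Python and independent of GTK:

```python
import queue

def drain_queue(q, on_message):
    """Poll the queue without blocking; usable as a GLib.timeout_add callback."""
    while True:
        try:
            msg = q.get_nowait()
        except queue.Empty:
            break
        on_message(msg)
    return True  # True keeps the GLib timeout scheduled

# In the GTK app (assumed wiring, untested here):
#   GLib.timeout_add(100, drain_queue, self.queue, self.label.set_text)
# Alternatively, skip the queue entirely and have the socket thread call
#   GLib.idle_add(win.label.set_text, data.decode())
# which schedules the label update safely on the GTK main loop.
```

`GLib.idle_add` is generally the lighter option when the worker only needs to push updates; the queue-plus-timeout variant is useful when the main loop should batch or rate-limit the incoming messages.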
<python><multithreading><sockets><gtk3>
2023-04-16 15:18:10
1
343
Parker
76,028,510
12,832,931
Convert Regularly Spaced Geographic Points into a Matrix (Keeping the Correct Order)
<p>Considering the following dataset:</p> <pre><code>freguesia Datetime C1 latitude longitude geometry 0 Parque das Nações 2022-09-07 09:30:00+00:00 72.37 38.753769 -9.095670 POINT (-9.09567 38.75377) 1 Parque das Nações 2022-09-07 09:30:00+00:00 4.65 38.753769 -9.093873 POINT (-9.09387 38.75377) 2 Parque das Nações 2022-09-07 09:30:00+00:00 433.18 38.755170 -9.101060 POINT (-9.10106 38.75517) 3 Parque das Nações 2022-09-07 09:30:00+00:00 274.43 38.755170 -9.099263 POINT (-9.09926 38.75517) 4 Parque das Nações 2022-09-07 09:30:00+00:00 212.09 38.755170 -9.097466 POINT (-9.09747 38.75517) 5 Parque das Nações 2022-09-07 09:30:00+00:00 49.86 38.755170 -9.095670 POINT (-9.09567 38.75517) </code></pre> <p>You can reproduce this entire dataframe loading the following dict:</p> <pre><code>{'freguesia': {0: 'Parque das Nações', 1: 'Parque das Nações', 2: 'Parque das Nações', 3: 'Parque das Nações', 4: 'Parque das Nações', 5: 'Parque das Nações', 6: 'Parque das Nações', 7: 'Parque das Nações', 8: 'Parque das Nações', 9: 'Parque das Nações', 10: 'Parque das Nações', 11: 'Parque das Nações', 12: 'Parque das Nações', 13: 'Parque das Nações', 14: 'Parque das Nações'}, 'Datetime': {0: '2022-09-07 09:30:00+00:00', 1: '2022-09-07 09:30:00+00:00', 2: '2022-09-07 09:30:00+00:00', 3: '2022-09-07 09:30:00+00:00', 4: '2022-09-07 09:30:00+00:00', 5: '2022-09-07 09:30:00+00:00', 6: '2022-09-07 09:30:00+00:00', 7: '2022-09-07 09:30:00+00:00', 8: '2022-09-07 09:30:00+00:00', 9: '2022-09-07 09:30:00+00:00', 10: '2022-09-07 09:30:00+00:00', 11: '2022-09-07 09:30:00+00:00', 12: '2022-09-07 09:30:00+00:00', 13: '2022-09-07 09:30:00+00:00', 14: '2022-09-07 09:30:00+00:00'}, 'C1': {0: 72.37, 1: 4.65, 2: 433.18, 3: 274.43, 4: 212.09, 5: 49.86, 6: 3.82, 7: 173.22, 8: 75.16, 9: 506.67, 10: 433.19, 11: 136.86, 12: 2.24, 13: 0.0, 14: 0.0}, 'latitude': {0: 38.7537686909, 1: 38.7537686909, 2: 38.7551697675, 3: 38.7551697675, 4: 38.7551697675, 5: 38.7551697675, 6: 38.7551697675, 7: 38.7565708166, 8: 
38.7565708166, 9: 38.7565708166, 10: 38.7565708166, 11: 38.7565708166, 12: 38.7565708166, 13: 38.7565708166, 14: 38.7565708166}, 'longitude': {0: -9.09566985629, 1: -9.09387322572, 2: -9.10105974799, 3: -9.09926311742, 4: -9.09746648685, 5: -9.09566985629, 6: -9.09387322572, 7: -9.10285637856, 8: -9.10105974799, 9: -9.09926311742, 10: -9.09746648685, 11: -9.09566985629, 12: -9.09387322572, 13: -9.09207659515, 14: -9.09027996458}} </code></pre> <p>If I load into geopandas, I get these points regularly divided:</p> <pre><code>import geopandas as gpd import matplotlib.pyplot as plt import numpy as np import pandas as pd crs={'init':'epsg:4326'} gps_data = gpd.GeoDataFrame( test, geometry=gpd.points_from_xy(test.longitude, test.latitude, crs=crs)) gps_data.plot() </code></pre> <p><a href="https://i.sstatic.net/jLFtG.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/jLFtG.png" alt="enter image description here" /></a></p> <p>My question is:</p> <p>How to convert these points into a matrix, with C1 column values, respecting the points location of the plot ?</p> <p>Expected output:</p> <pre><code>[ [173.22, 75.16, 506.67, 433.19, 136.86, 2.24, 0 , 0], [0, 433.18, 274.43, 212.09, 49.86, 3.82, 0, 0], [0, 0, 0, 0, 72.37, 4.65, 0, 0]] </code></pre>
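Since the points sit on a regular grid, one approach (a sketch, assuming pandas is available) is a pivot on latitude/longitude: each unique latitude becomes a row, each unique longitude a column, missing cells become 0, and sorting latitude descending makes the top matrix row match the top of the plot. Shown here on a tiny 2x2 grid with one cell missing:

```python
import pandas as pd

def points_to_matrix(df, value_col="C1"):
    """Pivot regularly spaced points into a 2-D grid; first row = highest latitude."""
    grid = df.pivot_table(index="latitude", columns="longitude",
                          values=value_col, fill_value=0)
    # pivot_table sorts latitude ascending; flip so north ends up on top,
    # matching the geographic plot orientation.
    return grid.sort_index(ascending=False).to_numpy()

# Tiny illustration (three points of a 2x2 grid; the missing cell becomes 0):
demo = pd.DataFrame({
    "latitude":  [38.75, 38.75, 38.76],
    "longitude": [-9.10, -9.09, -9.10],
    "C1":        [1.0, 2.0, 3.0],
})
matrix = points_to_matrix(demo)
```

Applied to the full dataframe from the question, the three latitudes and eight longitudes produce exactly the 3x8 matrix shown in the expected output.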
<python><pandas><numpy><gis><geopandas>
2023-04-16 14:59:52
1
542
William
76,028,482
359,178
Python module defining constants for month numbers?
<p>The <code>calendar</code> module defines constants e.g. <code>MONDAY</code> (0), <code>TUESDAY</code> (1), etc. for the <a href="https://docs.python.org/3/library/calendar.html#calendar.MONDAY" rel="nofollow noreferrer">days of the week</a>.</p> <p>And the source code even <a href="https://github.com/python/cpython/blob/main/Lib/calendar.py#L41" rel="nofollow noreferrer">defines</a> (but does not export) constants for <code>January</code> (1) and <code>February</code> (2), but not the rest of the months.</p> <p>I can of course throw in a dozen of these definitions in a small module of my own. But given how common these must be, I'm left wondering:</p> <p><strong>Is there a standard python module defining constants for all of the months?</strong></p>
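To my knowledge there was no exported set of month constants in the stdlib until recent Pythons (3.12 added a `calendar.Month` enum). On earlier versions, a small enum can be built from the stdlib's own month names rather than hard-coding a dozen definitions:

```python
import calendar
import enum

# Derive MONTH constants from calendar.month_name (index 0 is the empty
# string, so the real months run 1..12). On Python 3.12+ calendar.Month
# ships with the stdlib and makes this unnecessary.
Month = enum.IntEnum(
    "Month",
    {calendar.month_name[i].upper(): i for i in range(1, 13)},
)
```

Being an `IntEnum`, the members compare equal to plain ints, so `Month.MARCH` can be passed anywhere a month number is expected.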
<python><datetime><calendar>
2023-04-16 14:53:56
0
11,105
Juan A. Navarro
76,028,292
15,326,565
POST a file and json object to FastAPI route
<p>I want to post a file and a json object to a route in my FastAPI in only one request, but doing so results in errors like:</p> <pre><code>422 {&quot;detail&quot;:[{&quot;loc&quot;:[&quot;body&quot;],&quot;msg&quot;:&quot;value is not a valid list&quot;,&quot;type&quot;:&quot;type_error.list&quot;},{&quot;loc&quot;:[&quot;body&quot;],&quot;msg&quot;:&quot;value is not a valid dict&quot;,&quot;type&quot;:&quot;type_error.dict&quot;}]} </code></pre> <p>This is the client file:</p> <pre class="lang-py prettyprint-override"><code>import requests url = &quot;http://localhost:8000/&quot; file = &quot;...&quot; data = {...} files = {&quot;pdf&quot;: (file, open(file, &quot;rb&quot;), &quot;application/pdf&quot;)} response = requests.post(url, json=data, files=files) </code></pre> <p>I have also tried setting the headers to <code>multipart/form-data</code> and <code>multipart/reated</code></p> <p>This is what my server file looks like:</p> <pre class="lang-py prettyprint-override"><code>... class SomeMetadata(BaseModel): ... ... @app.post(&quot;/&quot;) def submit(someMetadata: SomeMetadata = Body(...), pdf: UploadFile = File(...)): return ... </code></pre>
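As far as I know, `requests` drops the `json=` payload once `files=` forces a multipart body, so the metadata never reaches the route and FastAPI reports 422. The usual workaround is to serialize the object into an ordinary form field and parse it server-side; the field name `metadata` below is my own choice, and the FastAPI wiring is shown only as comments (untested sketch):

```python
import json

# Client side:
#   response = requests.post(
#       url,
#       data={"metadata": json.dumps(metadata_dict)},
#       files={"pdf": (file, open(file, "rb"), "application/pdf")},
#   )
#
# Server side:
#   @app.post("/")
#   def submit(metadata: str = Form(...), pdf: UploadFile = File(...)):
#       parsed = SomeMetadata(**json.loads(metadata))
#       ...

def pack_metadata(metadata: dict) -> dict:
    """Build the form payload that travels alongside the file upload."""
    return {"metadata": json.dumps(metadata)}

def unpack_metadata(form_value: str) -> dict:
    """What the route does with the received string field."""
    return json.loads(form_value)
```

The round trip through a string field is lossless for any JSON-serializable metadata, which is why this pattern sidesteps the multipart limitation.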
<python><json><http><file-upload><fastapi>
2023-04-16 14:18:55
0
857
Anm
76,028,237
15,763,991
Setting a timeout for Calculator code: How to handle high number calculations and prevent freezing
<p>I'm making a calculator and the code freezes when you try to calculate a large number (like 9⁹⁹⁹⁹⁹⁹). To counter this, I want to set a timeout: if the calculation takes longer than 3 seconds, the result is just &quot;NAN&quot;. The code snippet where the results are calculated looks like this, but it doesn't work. If the calculation takes longer than 3 seconds, the task is not cancelled.</p> <pre><code>async def do_calculation(realcalculation, X, Y, A, B, C): try: result = str(numerize(sympy.sympify(realcalculation).subs(dict(X=X, Y=Y, A=A, B=B, C=C)))) except: result = &quot;NAN&quot; raise return result async def calculate(realcalculation, X, Y, A, B, C): loop = asyncio.get_running_loop() task = loop.create_task(do_calculation(realcalculation, X, Y, A, B, C)) try: result = await asyncio.wait_for(task, timeout=3) except asyncio.TimeoutError: task.cancel() result = &quot;NAN&quot; return result result = await calculate(realcalculation, X, Y, A, B, C) </code></pre>
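The likely reason `wait_for` never fires: `sympy.sympify` is CPU-bound and contains no `await`, so it never yields control back to the event loop, and asyncio timeouts can only trigger at await points. One sketch of a fix is to push the work onto an executor and time out on the future instead (a worker thread cannot be killed either, so for truly abandoning a stuck sympy computation a separate *process* is the usual route):

```python
import time
from concurrent.futures import ThreadPoolExecutor, TimeoutError as FutureTimeout

def calculate_with_timeout(func, *args, timeout=3.0):
    """Run a blocking calculation; give up after `timeout` seconds.
    Caveat: the worker thread keeps running in the background after a
    timeout; only a separate process can hard-kill the computation."""
    pool = ThreadPoolExecutor(max_workers=1)
    future = pool.submit(func, *args)
    try:
        return future.result(timeout=timeout)
    except FutureTimeout:
        return "NAN"
    finally:
        pool.shutdown(wait=False)
```

In the calculator this would be called as `calculate_with_timeout(do_sympy_eval, realcalculation, ...)` where `do_sympy_eval` is a plain (non-async) function wrapping the sympify/subs logic.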
<python><python-asyncio>
2023-04-16 14:06:22
1
418
EntchenEric
76,028,127
595,305
Suppress/redirect stack trace print out with logger.exception?
<p>A great way to process exceptions is to use <code>Logger.exception</code>:</p> <pre><code>try: do_something() except BaseException: logger.exception('some message') </code></pre> <p>... this not only prints out the user message, but also the exception message and exception type, and also a full stacktrace.</p> <p>My logger here has two handlers: a <code>FileHandler</code> and a <code>StreamHandler</code>.</p> <p>Using <a href="https://pypi.org/project/colorlog/" rel="nofollow noreferrer">colorlog</a> to colour my logger output according to the various levels, I can see that in the above case &quot;some message&quot; is printed according to the logger (level <code>logging.ERROR</code> I believe, i.e. red)... but the stacktrace itself is printed in ordinary console colours. It could be <code>stdout</code>, but more likely to be <code>stderr</code>.</p> <p>As it happens, I want this stacktrace in such cases to be printed out in full to my logger's file handler, but not to the stream handler which handles console output: stacktraces are fantastic for debugging purposes but by default the user shouldn't be exposed to them.</p> <p>I looked at logging docs on <a href="https://docs.python.org/3/library/logging.handlers.html#streamhandler" rel="nofollow noreferrer">StreamHandler</a>, but couldn't see a way to suppress this stacktrace printout for a given such handler.</p>
<python><exception><logging><stack-trace><stderr>
2023-04-16 13:45:47
1
16,076
mike rodent
76,028,082
4,913,254
Find duplicates in three columns and add 1 to the results in another column
<p>I want to identify duplicate values in three columns and add 1 to the results in another column.</p> <pre><code># This is my data frame, # I want to identify duplication in column1, column2 and column3 d = { 'Column1': ['1', '1', '2','3'], 'column2': [101, 101, 234, 203], 'column3': ['c', 'c', 'd','c'], 'columnx': ['0.1', '0.2', '0.1','0.2']} Column1 column2 column3 columnx 0 1 101 c 0.1 1 1 101 c 0.2 2 2 234 d 0.1 3 3 203 c 0.2 </code></pre> <p>Desired result</p> <pre><code> Column1 column2 column3 columnx 0 1 101 c 1.1 1 1 101 c 1.2 2 2 234 d 0.1 3 3 203 c 0.2 </code></pre>
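One way to do this: `DataFrame.duplicated` with `keep=False` flags every member of a duplicated group (not just the later occurrences); cast the boolean mask to int and add it to `columnx`, which first needs converting from string to float:

```python
import pandas as pd

df = pd.DataFrame({
    "Column1": ["1", "1", "2", "3"],
    "column2": [101, 101, 234, 203],
    "column3": ["c", "c", "d", "c"],
    "columnx": ["0.1", "0.2", "0.1", "0.2"],
})

# keep=False marks *all* rows of each duplicated (Column1, column2, column3)
# group, so both rows 0 and 1 get the +1.
dupes = df.duplicated(["Column1", "column2", "column3"], keep=False)
df["columnx"] = df["columnx"].astype(float) + dupes.astype(int)
```

With `keep="first"` (the default) only the second of the pair would be bumped, which is the usual mistake here.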
<python><pandas>
2023-04-16 13:36:38
3
1,393
Manolo Dominguez Becerra
76,028,053
713,200
How to remove leading zeros from version numbers containing letters using Python?
<p>I have to read version numbers from a couple of servers and remove preceding 0s from it to match with expected version number. My version number from server looks like these</p> <pre><code>15.04.03a 4.05.03.034I 12.09.02a 4.09.1.028I </code></pre> <p>so I'm expecting output to be like <code>15.4.3a</code> and so on.</p> <p>I have the below in which works well if there is no character, but doesnt work when there is a character</p> <pre><code>s = &quot;17.05.03a&quot; print(&quot;.&quot;.join(str(int(i)) for i in s.split(&quot;.&quot;))) </code></pre> <p>How can I change it to include characters as well ?</p>
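Since `int()` chokes on components like `03a`, one option is a regex that strips leading zeros per dot-separated component, but only when another digit follows, so trailing letters survive and a bare `0` stays `0`:

```python
import re

def normalize_version(version: str) -> str:
    """Drop leading zeros from each numeric part: '15.04.03a' -> '15.4.3a'."""
    # ^0+(?=\d) removes zeros only when at least one digit remains after them.
    return ".".join(re.sub(r"^0+(?=\d)", "", part) for part in version.split("."))
```

The lookahead is what keeps `1.0.2` intact while still turning `034I` into `34I`.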
<python>
2023-04-16 13:29:47
4
950
mac
76,028,018
12,828,984
numpy roll produces an unnecessary horizontal line
<p>I have the following python code where I read from a CSV file a produce a plot. My goal is to shift the data in X-axis by some extend however the x axis is phase (between 0 - 1) and shifting in this context means rolling the elements (thats why I use numpy roll). My problem is that when I use numpy roll, It produces some unnecessary line along the x axis (the red plot). How can I get rid of this?</p> <pre><code>data = np.genfromtxt('radio.csv', delimiter=',', skip_header=1) rad_ph = data[:, 0] rad_pp = data[:, 1] bin_normalized = rad_ph / max(rad_ph) shift = 0.85 bin_shifted = np.roll(bin_normalized, int(shift / np.diff(bin_normalized)[0])) plt.figure(figsize=(14,8)) ax = plt.subplot() ax.plot(bin_normalized, (rad_pp*50), linestyle='-',color='b', label='Normal', linewidth=2) ax.plot(bin_shifted, (rad_pp*50), linestyle='-',color='#9a2462', label='shifted', linewidth=2) plt.legend(prop={'size': 16}) plt.savefig('PP_overplot_radio.pdf' , dpi=300) </code></pre> <p><a href="https://i.sstatic.net/4YfCo.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/4YfCo.jpg" alt="enter image description here" /></a></p> <p>For the record, I also tried shift both the bin_normalized and rad_pp arrays, and then plot the shifted data,</p> <pre><code>rad_pp_shifted = np.roll(rad_pp, int(shift / np.diff(bin_normalized)[0])) ax.plot(bin_shifted, (rad_pp_shifted*50), linestyle='-', color='#9a2462', label='shifted', linewidth=2, drawstyle='steps-post') </code></pre> <p>As well as using slicing instead of np.roll, and then plot the shifted data using plt.step instead of plt.plot but it didnt work either.</p> <pre><code>shift = 0.85 shift_idx = int(shift / np.diff(bin_normalized)[0]) bin_shifted = np.concatenate((bin_normalized[shift_idx:], bin_normalized[:shift_idx])) rad_pp_shifted = np.concatenate((rad_pp[shift_idx:], rad_pp[:shift_idx])) plt.figure(figsize=(14,8)) ax = plt.subplot() ax.step(bin_normalized, (rad_pp*50), where='post', linestyle='-', color='b', label='Normal', 
linewidth=2) ax.step(bin_shifted, (rad_pp_shifted*50), where='post', linestyle='-', color='#9a2462', label='shifted', linewidth=2) </code></pre>
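The streak appears because after `np.roll` the x-values are no longer monotonically increasing, so the plot draws a segment from the last (high-phase) point back across the axis to the first point. A sketch of the fix: keep the phase axis untouched and roll only the *y* data, which moves each feature forward in phase by the shift amount while x stays monotonic (note also the `round()`; truncating `0.85 / 0.05` with `int()` silently gives 16 instead of 17 due to float representation):

```python
import numpy as np

# Demo data: a uniform phase axis with a single "pulse" at phase 0.10.
bin_normalized = np.linspace(0.0, 1.0, 20, endpoint=False)
rad_pp = np.zeros(20)
rad_pp[2] = 1.0

shift = 0.85
shift_idx = int(round(shift / (bin_normalized[1] - bin_normalized[0])))

# Roll the y data only; rolling x (or both arrays together) leaves x
# non-monotonic, and the line segment drawn from the last point back to
# the first is exactly the horizontal streak in the figure.
rad_pp_shifted = np.roll(rad_pp, shift_idx)

# plt.plot(bin_normalized, rad_pp_shifted)   # no wrap-around line
```

Rolling both arrays, as in the second attempt, reproduces the original non-monotonic x ordering, which is why it showed the same artifact.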
<python><numpy>
2023-04-16 13:23:05
1
398
Sara Krauss
76,027,886
11,092,636
Suffix Array, n^2*log(n) faster than n*log^2(n) even for large inputs?
<p>I learned this theory in class and decided to implement everything to make sure I understood it all; but the theoretical results don't align with the memory and time usages I obtain. I'm pretty sure this is not a shortcoming of theoretical algorithms but rather a mistake on my end and would like to investigate it.</p> <p>The naive approach to constructing a suffix array is to generate all suffixes of the input string and then sort them lexicographically to obtain the final suffix array.</p> <pre class="lang-py prettyprint-override"><code>def naive_suffix_array(text: str) -&gt; list[int]: n = len(text) suffixes = [(text[i:], i) for i in range(n)] suffixes.sort() suffix_array = [index for _, index in suffixes] return suffix_array </code></pre> <p>This algorithm has a time complexity of <code>O(n^2 * log n)</code> since we are generating all <code>n</code> suffixes of the input string <code>(O(n))</code> and then sorting them, which takes <code>O(n log n) * O(n)</code> time (because ordering two strings of size <code>n</code> takes <code>O(n)</code>). Space complexity is <code>O(n^2)</code>.</p> <p>The efficient algorithm uses a &quot;doubling&quot; technique that sorts the suffixes based on the first <code>k</code> characters, where <code>k</code> is doubled in each step until <code>k &gt;= n</code>. This approach reduces the time complexity to <code>O(n * log(n)^2)</code>. 
Space complexity is <code>O(n)</code>.</p> <pre class="lang-py prettyprint-override"><code>def efficient_suffix_array(text: str) -&gt; list[int]: text = text + &quot;$&quot; # add a special character to the end of the string n = len(text) k = 1 # Initialize rank array with character values rank = [ord(c) for c in text] while k &lt; n: # Sort the suffixes based on the first k characters (suffixes is a list of indices) suffixes = sorted( range(n), key=lambda i: (rank[i], rank[(i + k) % n]) ) # Update the rank array with new rank values new_rank = [0] * n rank_value = 0 # goes up by one when the rank changes prev_suffix = suffixes[0] # the best suffix new_rank[prev_suffix] = rank_value # the best suffix gets rank = 0 for i in range(1, n): cur_suffix = suffixes[i] if rank[cur_suffix] != rank[prev_suffix] or rank[(cur_suffix + k) % n] != rank[(prev_suffix + k) % n]: # if rank of first k characters are different, or they are the same but rank of next k characters are different rank_value += 1 new_rank[cur_suffix] = rank_value prev_suffix = cur_suffix if rank_value == n - 1: # if all suffixes are in different equivalence classes, we are done, we don't need to look at the next characters, # the prefixes are already unique break rank = new_rank k *= 2 return suffixes[1:] # the first suffix is always the $, so we don't need it </code></pre> <p>Both algorithms work (on small unit tests that I wrote manually AND on large inputs they return the same results).</p> <p>However, as you can see here, for an input of size 1000, nor the time complexity neither the space complexity of the efficient suffix array function are better: <a href="https://i.sstatic.net/zAPZD.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/zAPZD.png" alt="enter image description here" /></a></p> <p>I generate my large input with the following command:</p> <pre class="lang-py prettyprint-override"><code>import random text: str = &quot;&quot;.join(random.choice(&quot;abn&quot;) for _ in 
range(1000)) </code></pre>
<python><algorithm><suffix-array>
2023-04-16 12:54:40
1
720
FluidMechanics Potential Flows
76,027,854
4,913,660
Pandas Dataframe - compute mean on column k based on value on column j being the same, then remove duplicates
<p>Say I have a dataframe like this</p> <pre><code> columns = [&quot;Col1&quot;, &quot;Col2&quot;, &quot;Col3&quot;] values = np.array([[1, 1, 7], [1,1, 8], [1, 2, 5], [1, 2, 5.5]]) index = [&quot;foo&quot;, &quot;foo&quot;, &quot;baz&quot;,&quot;baz&quot;] df = pd.DataFrame(values, index=index, columns = columns) df </code></pre> <pre><code> Col1 Col2 Col3 foo 1.0 1.0 7.0 foo 1.0 1.0 8.0 baz 1.0 2.0 5.0 baz 1.0 2.0 5.5 </code></pre> <p>The dataframe is such that, taken any two rows, the values in the first two columns <code>Col1</code> and <code>Col2</code> are the same as long as the index is the same.</p> <p>What I would like to do is, for each unique index:</p> <ul> <li>compute the mean for <code>Col3</code> for all the rows with said index</li> <li>remove all the rows with said index, but one</li> <li>replace the <code>Col3</code> value with what calculated in the first step</li> </ul> <p>For the given example, the desired output is hence</p> <pre><code> Col1 Col2 Col3 foo 1.0 1.0 7.50 baz 1.0 2.0 5.25 </code></pre> <p>I am totally unable to find an elegant approach. I tried to make use of <code>dataframe.groupby</code>. I would need to add a fresh index, something like</p> <pre><code>df.reset_index().groupby(&quot;index&quot;, sort=False)[&quot;Col3&quot;].mean() </code></pre> <p>outputting</p> <pre><code> index foo 7.50 baz 5.25 Name: Col3, dtype: float64 </code></pre> <p>but I still have to add these values &quot;manually&quot;, as well as deleting duplicates.</p> <p>I am wondering if there is a pandas-like gracious way to achieve the aim in one sweep, thanks.</p>
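Since `Col1` and `Col2` are constant within each index, one sweep with a per-column aggregation dict does the whole job: take `"first"` for the constant columns and `"mean"` for `Col3`, grouping directly on the index level so no `reset_index` is needed:

```python
import numpy as np
import pandas as pd

columns = ["Col1", "Col2", "Col3"]
values = np.array([[1, 1, 7], [1, 1, 8], [1, 2, 5], [1, 2, 5.5]])
index = ["foo", "foo", "baz", "baz"]
df = pd.DataFrame(values, index=index, columns=columns)

# Col1/Col2 are identical within each index, so "first" keeps them intact;
# sort=False preserves the original order of appearance (foo before baz).
out = df.groupby(level=0, sort=False).agg(
    {"Col1": "first", "Col2": "first", "Col3": "mean"}
)
```

This collapses duplicates, keeps one representative row per index, and replaces `Col3` with the group mean in a single expression.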
<python><pandas>
2023-04-16 12:48:39
1
414
user37292
76,027,790
3,103,957
type object in Python
<p>In Python, an object exists for the <code>type</code> class, and that object's name is the same as the class name: <code>type</code>. I have a question regarding this. When we create classes using <code>type(classname, base_classes, attr_dict)</code>, the <code>__call__()</code> method of <code>type</code> is actually invoked. This eventually produces an object of the given class (e.g. <code>type(&quot;some_class&quot;, (), {})</code>).</p> <p>In the statement <code>type(&quot;some_class&quot;, (), {})</code>, <code>type</code> is the object (of the class <code>type</code>), and by using this <code>type</code> object, <code>__call__()</code> is invoked.</p> <p>I reckon <code>type()</code> is a class method as well, so in this case it should also be invocable as follows:</p> <pre><code>type.__call__(&quot;some_class&quot;, (),{}) </code></pre> <p>But this errors out:</p> <pre><code>TypeError: descriptor '__call__' requires a 'type' object but received a 'str' </code></pre> <p>At the same time, the below works well:</p> <pre><code>&gt;&gt;&gt; class sample: ... @classmethod ... def __call__(cls,a): ... return a*a ... &gt;&gt;&gt; sample.__call__(10) 100 </code></pre> <p>So I am confused about the above error message. Where is my understanding incorrect? Can someone please help me?</p> <p>Thanks</p>
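The gap in the reasoning: `type("some_class", (), {})` is resolved by the *metaclass* as `type(type).__call__(type, "some_class", (), {})`, so `type` itself travels in as the first (self) argument. `type.__call__` is an unbound slot descriptor, not a classmethod, so calling it directly means supplying that self argument by hand; the `sample.__call__(10)` case works only because `@classmethod` binding inserts `cls` automatically:

```python
# type(...) is really "call the metaclass's __call__ with type as self":
cls = type.__call__(type, "some_class", (), {})
# equivalently: type(type).__call__(type, "some_class", (), {})

# The failing call type.__call__("some_class", (), {}) put the *string*
# where the 'type' object (self) was expected, which is exactly what
# the TypeError message is complaining about.
```

So the error is not about `type()` being special; it is the ordinary difference between a bound classmethod and an unbound descriptor fetched off the class.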
<python><call><metaclass>
2023-04-16 12:35:29
1
878
user3103957
76,027,718
10,535,123
Create new column when using Pandas groupby with two columns and aggregate by multiple metrices?
<p>The data itself doesn't really matter.</p> <p>I have the following code:</p> <pre><code># convert timestamp to milliseconds relevant_data_pdf['milli'] = pd.to_datetime(relevant_data_pdf['timestamp']).astype(np.int64) / int(1e6) # sort (inf_day values are between 1 and 7) relevant_data_pdf = relevant_data_pdf.sort_values(['id', 'inf_day', 'milli']) # calculate the diff between two consecutive rows relevant_data_pdf['milli_diff'] = relevant_data_pdf.groupby(['id', 'inf_day'])['milli'].diff() # aggregation by multiple metrics relevant_data_pdf = relevant_data_pdf.groupby(['id', 'inf_day']).agg(avg=('milli_diff', np.mean), median=('milli_diff', np.median), max=('milli_diff', np.max), min=('milli_diff', np.min)) </code></pre> <p>The result I get is of the form:</p> <pre><code> avg median max min id inf_day 1 1 8.060000e+06 7200000.0 16500000.0 480000.0 2 1.200333e+06 1771000.0 1800000.0 30000.0 3 5.400000e+06 5400000.0 7200000.0 3600000.0 2 0 1.800000e+06 1800000.0 3600000.0 0.0 2 0.000000e+00 0.0 0.0 0.0 3 0.000000e+00 0.0 0.0 0.0 </code></pre> <p>How can I make the result one row for each <code>id</code>, such that for each id I have all the possible columns of the shape <code>{inf_day}_{metric_name}</code>?</p>
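Since the aggregated frame has a (`id`, `inf_day`) MultiIndex, `unstack("inf_day")` pivots the day level into the columns, and flattening the resulting MultiIndex columns gives the `{inf_day}_{metric}` names. A sketch on a small stand-in for the aggregated frame (values arbitrary):

```python
import pandas as pd

# Small stand-in for the (id, inf_day)-indexed aggregation result:
agg = pd.DataFrame(
    {"avg": [8.0e6, 1.2e6, 1.8e6], "median": [7.2e6, 1.7e6, 1.8e6]},
    index=pd.MultiIndex.from_tuples(
        [(1, 1), (1, 2), (2, 0)], names=["id", "inf_day"]
    ),
)

wide = agg.unstack("inf_day")  # one row per id, columns = (metric, inf_day)
# Flatten the MultiIndex columns into "{inf_day}_{metric}" strings:
wide.columns = [f"{day}_{metric}" for metric, day in wide.columns]
```

Day/metric combinations that a given id never had come out as NaN, which can be filled with `wide.fillna(0)` if preferred.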
<python><pandas><dataframe>
2023-04-16 12:21:01
1
829
nirkov
76,027,495
10,200,497
creating a column based on values of two other columns
<p>This is my dataframe:</p> <pre><code>df = pd.DataFrame({'a': [10, 20, 50], 'b': [5, 2, 20]}) </code></pre> <p>And this is the output that I need:</p> <pre><code> a b c 0 10 5 1005 1 20 2 1009.02 2 50 20 1109.922 </code></pre> <p>I want to create column <code>c</code>. I have an initial value which is 1000. The first line of <code>c</code> is calculated like this:</p> <pre><code>x = 1000 * (10 / 100) # 10 is first line of column a y = 1 + (5 / 100) # 5 is first line of column b z = 1000 - x result = (x * y) + z </code></pre> <p>The second row of <code>c</code> is:</p> <pre><code>x = 1005 * (20 / 100) # 20 is second line of column a y = 1 + (2 / 100) # 2 is second line of column b z = 1005 - x result = (x * y) + z </code></pre> <p>The same logic applies for that last row of <code>c</code>. 1005 is the result of the calculation form the first line.</p> <p>I have tried the following code but it doesn't work:</p> <pre><code>df['c'] = ((df.a / 100 * (1 + df.b / 100)).cumprod() * 1000) + 1000 </code></pre>
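A little algebra shows why a single vectorized line works: with `x = prev * a/100`, `y = 1 + b/100`, `z = prev - x`, the result is `x*y + z = prev * (1 + (a/100)*(b/100)) = prev * (1 + a*b/10000)`, i.e. each row multiplies the previous value by a per-row factor, which is exactly a cumulative product:

```python
import pandas as pd

df = pd.DataFrame({"a": [10, 20, 50], "b": [5, 2, 20]})

# result = x*y + z = prev * (1 + a*b/10000), starting from 1000,
# so the running value is 1000 times the cumulative product of factors.
df["c"] = 1000 * (1 + df["a"] * df["b"] / 10000).cumprod()
```

The attempted `(df.a / 100 * (1 + df.b / 100)).cumprod() * 1000 + 1000` fails because it multiplies the raw `a/100` terms instead of the combined `1 + a*b/10000` growth factor.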
<python><pandas>
2023-04-16 11:32:48
2
2,679
AmirX
76,026,821
8,586,803
Cannot print numpy arrays with logging module
<p>This snippet doesn't print anything:</p> <pre class="lang-py prettyprint-override"><code>import logging import numpy as np logging.info(np.eye(4)) # this doesn't work either logging.info('matrix', np.eye(4)) </code></pre> <p>But it works fine with native <code>print</code>:</p> <pre class="lang-py prettyprint-override"><code>import logging print(np.eye(4)) </code></pre> <pre><code>[[1. 0. 0. 0.] [0. 1. 0. 0.] [0. 0. 1. 0.] [0. 0. 0. 1.]] </code></pre>
<python><numpy><logging><stdout>
2023-04-16 09:05:50
1
6,178
Florian Ludewig
76,026,805
9,940,188
Logging: How to use different formatters for different handlers?
<p>I'm trying to set up two logging handlers, one of which has a custom Formatter for exceptions. Here's what I've tried:</p> <pre><code>import logging def log_something(): log.error(&quot;Normal error&quot;) try: raise RuntimeError(&quot;Exception!&quot;) except Exception as e: log.exception(e) class MyFormatter(logging.Formatter): def formatException(self, exc_info): return &quot;Too bad&quot; s_handler = logging.StreamHandler() s_handler.setFormatter(MyFormatter(fmt=&quot;My formatter: %(message)s&quot;)) f_handler = logging.FileHandler(&quot;log.txt&quot;) log = logging.getLogger() log.addHandler(f_handler) # first log.addHandler(s_handler) # second -- why does the order matter? log_something() </code></pre> <p>The output is:</p> <pre><code>My formatter: Normal error My formatter: Exception! Traceback (most recent call last): File &quot;exception.py&quot;, line 6, in log_something raise RuntimeError(&quot;Exception!&quot;) RuntimeError: Exception! </code></pre> <p>So it seems that the formatException method of my custom formatter doesn't get called. However, when I reverse the two addHandler calls like so:</p> <pre><code>log.addHandler(handler) log.addHandler(f_handler) </code></pre> <p>the output is as expected:</p> <pre><code>My formatter: Normal error My formatter: Exception! Too bad </code></pre> <p>Also in either case the logging outputs to the file and the stream are exactly the same. My questions:</p> <ol> <li><p>How come that the formatter of the handler that is added first is used for the subsequent handler as well? 
I thought that the formatter &quot;belongs&quot; to the handler, and that when I attach different formatters to different handlers they wouldn't interfere.</p> </li> <li><p>Why does the format() method get called in either case, but the formatException() method only if the handler with the custom formatter is added first?</p> </li> </ol> <p>It seems that the formatter of the second handler is ignored and that the formatter of the first handler is always used. Why is that? I thought the formatter &quot;belonged&quot; to the handler, and that different handlers could use different formatters.</p>
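The formatters do belong to their handlers; what is shared is the *record*. `Formatter.format` only calls `formatException` when `record.exc_text` is empty, and it caches its result there, so whichever handler formats the record first writes the traceback into the cache and every later formatter reuses it. A formatter that clears the cache before formatting (and restores it afterwards) behaves the same regardless of handler order. A sketch, demonstrated with in-memory streams:

```python
import io
import logging

class MyFormatter(logging.Formatter):
    def format(self, record):
        cached = record.exc_text      # traceback cached by another formatter
        record.exc_text = None        # force our own formatException() to run
        try:
            return super().format(record)
        finally:
            record.exc_text = cached  # leave the original cache for others

    def formatException(self, exc_info):
        return "Too bad"

log = logging.getLogger("demo.formatter_cache")
log.propagate = False
f_buf, s_buf = io.StringIO(), io.StringIO()
f_handler = logging.StreamHandler(f_buf)   # default formatter, added first
s_handler = logging.StreamHandler(s_buf)
s_handler.setFormatter(MyFormatter(fmt="My formatter: %(message)s"))
log.addHandler(f_handler)
log.addHandler(s_handler)                  # order no longer matters

try:
    raise RuntimeError("Exception!")
except RuntimeError as e:
    log.exception(e)
```

So `format()` runs per handler every time (answering question 2), but `formatException()` is skipped whenever a cached `exc_text` already exists, which is exactly the order-dependence observed.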
<python>
2023-04-16 09:02:39
1
679
musbur
76,026,778
5,029,589
Read only column names using python
<p>I have an Excel file (it can be any of csv, xlsx, xlsb) which can be very large (maybe 500 MB). I only want to get the column names from the file. I tried using pandas to get only the column names, but internally it seems to read or load the entire file, which takes approximately 1 minute. I don't want to load the entire file into memory; I just want to read the column names. What is the best approach to stop pandas from reading the entire file and to read only the column names? For example, I used the code below to read an xlsb file, but it still takes a long time (approx 1.5 minutes plus to get the column names of a 17.9 MB file):</p> <pre><code>filePath = &quot;/Users/aj/testing/File_1.xlsb&quot; cols=pd.read_excel(filePath, engine='pyxlsb', index_col=0, nrows=0).columns.tolist() </code></pre> <p>This seems to load the entire file, as reading the file and reading just the column names take the same time.</p>
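For the CSV case there is a genuinely cheap path: the header is just the first line, so reading one row with the stdlib `csv` module never touches the rest of the file. For xlsx, openpyxl's read-only mode can stream only the first row (shown as an untested comment sketch, assuming openpyxl is installed); for xlsb I am not aware of a comparably cheap option, since the format has to be decompressed and parsed:

```python
import csv

def csv_column_names(path, encoding="utf-8"):
    """Read only the header row of a CSV; the rest of the file is untouched."""
    with open(path, newline="", encoding=encoding) as fh:
        return next(csv.reader(fh))

# For .xlsx, something like this avoids materializing all cells
# (assumption: openpyxl is installed; untested sketch):
#   from openpyxl import load_workbook
#   wb = load_workbook(path, read_only=True)
#   header = [c.value for c in next(wb.active.iter_rows(max_row=1))]
```

`pd.read_excel(..., nrows=0)` cannot be equally fast because the Excel engines still have to parse the whole sheet stream to honor the request, which matches the timing observed.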
<python><pandas>
2023-04-16 08:54:36
1
2,174
arpit joshi
76,026,500
713,200
How to extract data from html response using python?
<p>I'm trying to extract some of the data from an HTML response I'm getting after executing an API in Python. Here is the HTML response I get:</p> <pre><code>&lt;?xml version=&quot;1.0&quot; ?&gt; &lt;mgmtResponse responseType=&quot;operation&quot; requestUrl=&quot;https://6.7.7.7/motion/api/v1/op/enablement/ethernet/summary?deviceIpAddress=10.3.4.3&quot; rootUrl=&quot;https://6.7.7.7/webacs/api/v1/op&quot;&gt; &lt;ethernetSummaryDTO&gt; &lt;CoredIdentityCapable&gt;false&lt;/CoredIdentityCapable&gt; &lt;currentIcmpLatency&gt;0&lt;/currentIcmpLatency&gt; &lt;deviceAvailability&gt;100&lt;/deviceAvailability&gt; &lt;deviceName&gt;TRP5504.130.Cored.com&lt;/deviceName&gt; &lt;deviceRole&gt;Unknown&lt;/deviceRole&gt; &lt;deviceType&gt;Cored TRP 5504&lt;/deviceType&gt; &lt;ipAddress&gt;10.3.4.3&lt;/ipAddress&gt; &lt;locationCapable&gt;false&lt;/locationCapable&gt; &lt;nrPortsDown&gt;49&lt;/nrPortsDown&gt; &lt;nrPortsUp&gt;16&lt;/nrPortsUp&gt; &lt;reachability&gt;Reachable&lt;/reachability&gt; &lt;softwareVersion&gt;7.8.1&lt;/softwareVersion&gt; &lt;stackCount&gt;0&lt;/stackCount&gt; &lt;systemTime&gt;2023-Apr-16, 12:47:51 IST&lt;/systemTime&gt; &lt;udiDetails&gt; &lt;description&gt;TRP5500 4 Slot Single Chassis&lt;/description&gt; &lt;modelNr&gt;TRP-5504&lt;/modelNr&gt; &lt;name&gt;Rack 0&lt;/name&gt; &lt;productId&gt;TRP-5504&lt;/productId&gt; &lt;udiSerialNr&gt;FOX2304P14Z&lt;/udiSerialNr&gt; &lt;vendor&gt;Cored Systems, Inc.&lt;/vendor&gt; &lt;versionId&gt;V01&lt;/versionId&gt; &lt;/udiDetails&gt; &lt;upTime&gt;87 days 20 hrs 40 mins 27 secs&lt;/upTime&gt; &lt;/ethernetSummaryDTO&gt; &lt;/mgmtResponse&gt; </code></pre> <p>Basically, I want to extract data like <code>deviceName</code> and <code>softwareVersion</code>, and `udiSerialNr' from the HTML response. 
I tried the following code:</p> <pre><code> if response.status_code == 200: #resp = response.text resp = response.json() api_resp = resp[&quot;ethernetSummaryDTO&quot;] print(api_resp) </code></pre> <p>So I tried to convert it to JSON, but I end up with the error below:</p> <pre><code>json.decoder.JSONDecodeError: Expecting value: line 1 column 1 (char 0) </code></pre> <p>How can I parse this to extract the required data?</p>
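The response shown is XML rather than JSON, which is why `response.json()` fails. A sketch (added for illustration) using the standard library's ElementTree, with element names taken from the response above; in the request code the input would be `response.text`:

```python
import xml.etree.ElementTree as ET

def parse_summary(xml_text):
    # fromstring parses the whole document; findtext with a ".//"
    # path returns the text of the first matching element at any depth
    root = ET.fromstring(xml_text)
    return {
        "deviceName": root.findtext(".//deviceName"),
        "softwareVersion": root.findtext(".//softwareVersion"),
        "udiSerialNr": root.findtext(".//udiSerialNr"),
    }
```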
<python><html><json><python-3.x>
2023-04-16 07:45:25
2
950
mac
76,026,337
2,051,572
Pandas - merging rows under certain conditions
<p>I have a table in this form:</p> <div class="s-table-container"> <table class="s-table"> <thead> <tr> <th>Cat A</th> <th>Cat B</th> <th>Year</th> <th>Area</th> <th>Names</th> </tr> </thead> <tbody> <tr> <td>A</td> <td>C</td> <td>2012</td> <td>1</td> <td>A</td> </tr> <tr> <td>A</td> <td>D</td> <td>2012</td> <td>2</td> <td>B</td> </tr> <tr> <td>A</td> <td>D</td> <td>2013</td> <td>3</td> <td>C</td> </tr> <tr> <td>A</td> <td>D</td> <td>2013</td> <td>4</td> <td>D</td> </tr> <tr> <td>B</td> <td>D</td> <td>2013</td> <td>5</td> <td>E</td> </tr> <tr> <td>B</td> <td>C</td> <td>2013</td> <td>6</td> <td>F</td> </tr> </tbody> </table> </div> <p>And I would like to merge rows if they share the same values in Cat A, Cat B, and Year, such that the merged row will have the sum of the values in the Area column, and a comma separated list from the Names column. The result should look like this:</p> <div class="s-table-container"> <table class="s-table"> <thead> <tr> <th>Cat A</th> <th>Cat B</th> <th>Year</th> <th>Area</th> <th>Names</th> </tr> </thead> <tbody> <tr> <td>A</td> <td>C</td> <td>2012</td> <td>1</td> <td>A</td> </tr> <tr> <td>A</td> <td>D</td> <td>2012</td> <td>2</td> <td>B</td> </tr> <tr> <td>A</td> <td>D</td> <td>2013</td> <td><strong>7</strong></td> <td><strong>C,D</strong></td> </tr> <tr> <td>B</td> <td>D</td> <td>2013</td> <td>5</td> <td>E</td> </tr> <tr> <td>B</td> <td>C</td> <td>2013</td> <td>6</td> <td>F</td> </tr> </tbody> </table> </div> <p>I tried the line:</p> <pre><code>df.groupby([&quot;Cat A&quot;,&quot;Cat B&quot;, &quot;Year&quot;]) </code></pre> <p>But got stuck after that... how should I continue?</p>
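A sketch (added for illustration) continuing from that `groupby`: aggregate `Area` with a sum and `Names` with a comma join.

```python
import pandas as pd

df = pd.DataFrame({
    "Cat A": ["A", "A", "A", "A", "B", "B"],
    "Cat B": ["C", "D", "D", "D", "D", "C"],
    "Year": [2012, 2012, 2013, 2013, 2013, 2013],
    "Area": [1, 2, 3, 4, 5, 6],
    "Names": list("ABCDEF"),
})

# Named aggregation: one output column per (input column, function) pair;
# sort=False keeps the original group order.
out = (
    df.groupby(["Cat A", "Cat B", "Year"], as_index=False, sort=False)
      .agg(Area=("Area", "sum"), Names=("Names", ",".join))
)
```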
<python><pandas><group-by>
2023-04-16 07:08:29
1
2,512
Ohm
76,026,333
11,210,476
dataclass __annotation__ not working with inheritance
<p>I tried this code:</p> <pre><code>from dataclasses import dataclass @dataclass class Data1: d11: str d12: float @dataclass class Data2: d21: str d22: float @dataclass class D3(Data1, Data2): pass print(D3.__annotations__) </code></pre> <p>But the annotations for <code>D3</code> are only <code>{'d11': &lt;class 'str'&gt;, 'd12': &lt;class 'float'&gt;}</code>. Why does it not include annotations for the <code>Data2</code> fields?</p>
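A note on the behavior (added for illustration): `__annotations__` is looked up like any other class attribute, so `D3`, which defines none of its own, resolves it on the first base class in the MRO that has one (`Data1`). Merged views do exist, a sketch:

```python
import typing
from dataclasses import dataclass, fields

@dataclass
class Data1:
    d11: str
    d12: float

@dataclass
class Data2:
    d21: str
    d22: float

@dataclass
class D3(Data1, Data2):
    pass

# Both of these walk the whole MRO instead of stopping at Data1:
all_hints = typing.get_type_hints(D3)
all_fields = {f.name: f.type for f in fields(D3)}
```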
<python><dictionary><object><inheritance><python-dataclasses>
2023-04-16 07:06:56
1
636
Alex
76,026,327
1,581,090
How to use the same logger in different python modules?
<p>Here is what I have in the main code:</p> <pre><code>from mf import gameplay .... if __name__ == &quot;__main__&quot;: import logging logger = logging.getLogger(__name__) logger.setLevel(logging.INFO) # create a file handler handler = logging.FileHandler('example.log') handler.setLevel(logging.INFO) # create a logging format formatter = logging.Formatter('%(asctime)s - %(worker)s - %(levelname)s - %(message)s') handler.setFormatter(formatter) # add the file handler to the logger logger.addHandler(handler) logger.info(&quot;Querying database for docs...&quot;, extra={'worker': 'id_1'}) </code></pre> <p>and in a module <code>gameplay</code> I import in the main code I have</p> <pre><code>import logging logger = logging.getLogger(&quot;__main__&quot;) logger.info(&quot;test module&quot;) </code></pre> <p>but still I do not get any log output from the logger in the module. I do not see the text &quot;test module&quot; in the logging file &quot;example.log&quot;.</p> <p>I looked <a href="https://stackoverflow.com/questions/50714316/how-to-use-logging-getlogger-name-in-multiple-modules">HERE</a>, <a href="https://docs.python.org/3/howto/logging.html" rel="nofollow noreferrer">HERE</a>, <a href="https://stackoverflow.com/questions/15727420/using-logging-in-multiple-modules">HERE</a> and <a href="https://docs.python.org/3/howto/logging.html#advanced-logging-tutorial" rel="nofollow noreferrer">HERE</a> and I think I followed what is described in all these articles but it is still not working for me. I am pretty sure I overlooked something totally simple.</p> <p>What am I missing?</p> <p>MacOS, python 3.9.6</p>
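A common pattern (a sketch added for illustration, not the original code): configure handlers on the root logger once, before any module-level logging runs, and let every module use `logging.getLogger(__name__)`; records then propagate up to the root handlers. Two things in the post work against this: the module logs at import time, before the handlers exist, and the `%(worker)s` format field fails for any record logged without `extra={'worker': ...}`.

```python
import logging

def setup_logging(log_path):
    # Configure the ROOT logger, so records from every module logger
    # propagate into this handler.
    root = logging.getLogger()
    root.setLevel(logging.INFO)
    handler = logging.FileHandler(log_path)
    handler.setLevel(logging.INFO)
    # %(name)s instead of a custom %(worker)s field, which would raise
    # for records logged without extra={'worker': ...}
    handler.setFormatter(logging.Formatter(
        "%(asctime)s - %(name)s - %(levelname)s - %(message)s"))
    root.addHandler(handler)

# in a module, e.g. mf/gameplay.py:
#   logger = logging.getLogger(__name__)
# and log inside functions, not at import time
```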
<python><logging>
2023-04-16 07:04:25
2
45,023
Alex
76,026,274
1,581,090
How to fix "logging.ini" for python?
<p>I am following the tutorial on logging <a href="https://betterprogramming.pub/how-to-implement-logging-in-your-python-application-1730315003c4" rel="nofollow noreferrer">HERE</a> and also had a look at the documentation <a href="https://docs.python.org/3/library/logging.config.html#configuration-file-format" rel="nofollow noreferrer">HERE</a>, so I created the following file <code>logging.ini</code>:</p> <pre><code>[loggers] keys=root [handlers] keys=consoleHandler [formatters] keys=extend [logger_root] level=INFO handlers=consoleHandler propagate=0 [handler_consoleHandler] class=StreamHandler level=INFO formatter=extend args=(sys.stdout) [formatter_extend] format=%(asctime)s - %(worker)s - %(levelname)s - %(message)s </code></pre> <p>which I use in python code as</p> <pre><code>import logging.config logging.config.fileConfig(fname='logger.ini') </code></pre> <p>but which results in an error</p> <pre><code> logging.config.fileConfig(fname='logger.ini') /Library/Developer/CommandLineTools/Library/Frameworks/Python3.framework/Versions/3.9/lib/python3.9/logging/config.py:71: in fileConfig formatters = _create_formatters(cp) /Library/Developer/CommandLineTools/Library/Frameworks/Python3.framework/Versions/3.9/lib/python3.9/logging/config.py:104: in _create_formatters flist = cp[&quot;formatters&quot;][&quot;keys&quot;] /Library/Developer/CommandLineTools/Library/Frameworks/Python3.framework/Versions/3.9/lib/python3.9/configparser.py:963: in __getitem__ raise KeyError(key) E KeyError: 'formatters' </code></pre> <p>On MacOS, python 3.9.6. Do I miss something?</p>
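One thing worth checking (a hedged guess, added for illustration): the file created is `logging.ini` but the code loads `fname='logger.ini'`; `configparser` silently yields an empty config for a missing file, which produces exactly `KeyError: 'formatters'`. Two smaller issues in the ini itself: `args=(sys.stdout)` is not a tuple (it needs a trailing comma), and the `%(worker)s` field will fail for records logged without `extra`. A minimal working sketch:

```python
import logging.config

CONFIG = """\
[loggers]
keys=root

[handlers]
keys=consoleHandler

[formatters]
keys=extend

[logger_root]
level=INFO
handlers=consoleHandler

[handler_consoleHandler]
class=StreamHandler
level=INFO
formatter=extend
args=(sys.stdout,)

[formatter_extend]
format=%(asctime)s - %(levelname)s - %(message)s
"""

def load_config(path):
    # Passing a filename that does not exist yields an empty parser and
    # the KeyError: 'formatters' seen above, so make sure `path` matches
    # the file on disk ('logging.ini', not 'logger.ini').
    logging.config.fileConfig(fname=path, disable_existing_loggers=False)
```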
<python><logging>
2023-04-16 06:46:56
1
45,023
Alex
76,026,112
610,569
Is there a vectorize way to iterate through a fixed n of an input x dict instead the full range of n?
<p>Given a fixed size <code>n</code> and an <code>x</code> dict of input key-value pairs, the goal is to iterate through 1...n (1-indexed), then fetch the value from x if the index exists as a key of x, otherwise insert the value -1.</p> <p>I've tried the following and it kind of works as expected:</p> <pre><code>n = 10 # Valid keys range over [1,10]; any positive integer is valid as a value. x = {1:231, 2:341, 5:123} y = {i+1:x[i+1] if i+1 in x else -1 for i in range(n)} y </code></pre> <p>[out]:</p> <pre><code>{1: 231, 2: 341, 3: -1, 4: -1, 5: 123, 6: -1, 7: -1, 8: -1, 9: -1, 10: -1} </code></pre> <p>But this seems like a very common pandas or encoding / embedding operation.</p> <p><strong>Is there a different/simpler way that can take in the sparse key-values from x and directly create y, given that we know n, without iterating through <code>O(n)</code> but instead <code>O(len(x))</code>?</strong></p> <p>The rationale: if I have billions of xs and n is substantially large, e.g. in the 1000s, then the full O(n) operation is really expensive.</p>
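Two sketches (added for illustration): if `y` only needs to answer lookups, a `defaultdict` gives a genuinely O(len(x)) construction; if a fully materialized version is acceptable, pandas `reindex` avoids the Python-level loop (the output is still size n, which is unavoidable for a dense result).

```python
from collections import defaultdict

import pandas as pd

n = 10
x = {1: 231, 2: 341, 5: 123}

# O(len(x)) construction: missing keys resolve to -1 only when looked up
y_lazy = defaultdict(lambda: -1, x)

# vectorized materialization: output is still O(n), but no Python loop
y = pd.Series(x).reindex(range(1, n + 1), fill_value=-1).to_dict()
```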
<python><pandas><dictionary><vectorization>
2023-04-16 05:56:27
2
123,325
alvas
76,025,977
15,320,579
Perform some operation if 2 pandas dataframe have same entries in python
<p>I have 2 dataframes (<strong>purchase</strong> and <strong>sales</strong>) as follows:</p> <p><strong>PURCHASE:</strong></p> <div class="s-table-container"> <table class="s-table"> <thead> <tr> <th>Name</th> <th>item</th> <th>voucher</th> <th>Amt</th> <th>Qty</th> </tr> </thead> <tbody> <tr> <td>A</td> <td>Item1</td> <td>Purchase</td> <td>10000</td> <td>100</td> </tr> <tr> <td>B</td> <td>Item2</td> <td>Purchase</td> <td>500</td> <td>50</td> </tr> <tr> <td>B</td> <td>Item1</td> <td>Purchase</td> <td>2000</td> <td>20</td> </tr> <tr> <td>C</td> <td>Item3</td> <td>Purchase</td> <td>1000</td> <td>100</td> </tr> <tr> <td>D</td> <td>Item4</td> <td>Purchase</td> <td>500</td> <td>100</td> </tr> <tr> <td>A</td> <td>Item3</td> <td>Purchase</td> <td>5000</td> <td>50</td> </tr> </tbody> </table> </div> <p><strong>SALES:</strong></p> <div class="s-table-container"> <table class="s-table"> <thead> <tr> <th>Name</th> <th>item</th> <th>voucher</th> <th>Amt</th> <th>Qty</th> </tr> </thead> <tbody> <tr> <td>A</td> <td>Item1</td> <td>Sales</td> <td>5300</td> <td>50</td> </tr> <tr> <td>B</td> <td>Item2</td> <td>Sales</td> <td>450</td> <td>40</td> </tr> <tr> <td>B</td> <td>Item1</td> <td>Sales</td> <td>1675</td> <td>15</td> </tr> <tr> <td>C</td> <td>Item3</td> <td>Sales</td> <td>1800</td> <td>100</td> </tr> </tbody> </table> </div> <p>I want an output dataframe where if the person (<code>Name</code>) sells an item, the <code>Amt</code> and <code>Qty</code> should be deducted from the <strong>purchase</strong> dataframe and a new dataframe should be created with the remaining <code>Amt</code> and <code>Qty</code> as shown below:</p> <p><strong>OUTPUT DATAFRAME:</strong></p> <div class="s-table-container"> <table class="s-table"> <thead> <tr> <th>Name</th> <th>item</th> <th>voucher</th> <th>Amt</th> <th>Qty</th> </tr> </thead> <tbody> <tr> <td>A</td> <td>Item1</td> <td>Remaining</td> <td>4700</td> <td>50</td> </tr> <tr> <td>A</td> <td>Item3</td> <td>Remaining</td> <td>5000</td> 
<td>50</td> </tr> <tr> <td>B</td> <td>Item2</td> <td>Remaining</td> <td>50</td> <td>10</td> </tr> <tr> <td>B</td> <td>Item1</td> <td>Remaining</td> <td>325</td> <td>5</td> </tr> <tr> <td>C</td> <td>Item3</td> <td>Remaining</td> <td>-800</td> <td>0</td> </tr> <tr> <td>D</td> <td>Item4</td> <td>Remaining</td> <td>500</td> <td>100</td> </tr> </tbody> </table> </div> <p>Notice that whatever items have been sold by a person (<code>Name</code>) has been deducted from the <strong>purchase</strong> dataframe and the remaining items (<code>Amt</code> and <code>Qty</code>) are stored in a new output dataframe. Also person <code>D</code> never sold any items even then it should be included in the output dataframe.</p> <p>Thanks in advance!</p> <p><strong>Dataframe</strong></p> <pre class="lang-py prettyprint-override"><code>import pandas as pd Purchases = { &quot;Name&quot;: [&quot;A&quot;, &quot;B&quot;, &quot;B&quot;, &quot;C&quot;, &quot;D&quot;, &quot;A&quot;], &quot;item&quot;: [&quot;Item1&quot;, &quot;Item2&quot;, &quot;Item1&quot;, &quot;Item3&quot;, &quot;Item4&quot;, &quot;Item3&quot;], &quot;voucher&quot;: [&quot;Purchase&quot;, &quot;Purchase&quot;, &quot;Purchase&quot;, &quot;Purchase&quot;, &quot;Purchase&quot;, &quot;Purchase&quot;], &quot;Amt&quot;: [10000, 500, 2000, 1000, 500, 5000], &quot;Qty&quot;: [100, 50, 20, 100, 100, 50], } Purchases = pd.DataFrame(Purchases) Sales = { &quot;Name&quot;: [&quot;A&quot;, &quot;B&quot;, &quot;B&quot;, &quot;C&quot;], &quot;item&quot;: [&quot;Item1&quot;, &quot;Item2&quot;, &quot;Item1&quot;, &quot;Item3&quot;], &quot;voucher&quot;: [&quot;Sales&quot;, &quot;Sales&quot;, &quot;Sales&quot;, &quot;Sales&quot;], &quot;Amt&quot;: [5300, 450, 1675, 1800], &quot;Qty&quot;: [50, 40, 15, 100], } Sales = pd.DataFrame(Sales) </code></pre>
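A sketch of one way (added for illustration), using the DataFrames constructed in the question: left-merge Sales onto Purchases on Name and item, treat missing sales as zero, and subtract.

```python
import pandas as pd

Purchases = pd.DataFrame({
    "Name": ["A", "B", "B", "C", "D", "A"],
    "item": ["Item1", "Item2", "Item1", "Item3", "Item4", "Item3"],
    "voucher": ["Purchase"] * 6,
    "Amt": [10000, 500, 2000, 1000, 500, 5000],
    "Qty": [100, 50, 20, 100, 100, 50],
})
Sales = pd.DataFrame({
    "Name": ["A", "B", "B", "C"],
    "item": ["Item1", "Item2", "Item1", "Item3"],
    "voucher": ["Sales"] * 4,
    "Amt": [5300, 450, 1675, 1800],
    "Qty": [50, 40, 15, 100],
})

# Left merge keeps every purchase row (including D, who never sold);
# unsold rows get NaN sales, replaced with 0 before subtracting.
merged = Purchases.merge(
    Sales, on=["Name", "item"], how="left", suffixes=("", "_s")
).fillna({"Amt_s": 0, "Qty_s": 0})

out = merged.assign(
    voucher="Remaining",
    Amt=merged["Amt"] - merged["Amt_s"],
    Qty=merged["Qty"] - merged["Qty_s"],
)[["Name", "item", "voucher", "Amt", "Qty"]]
```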
<python><pandas><dataframe><data-munging>
2023-04-16 04:59:52
3
787
spectre
76,025,854
17,491,224
Pandas fillna not successfully replacing nan values
<p>Attempting to fill NA values in the following manner:</p> <pre><code>df['column'].fillna(value='value',inplace=True) </code></pre> <p>I print the values before and after and get the following results:</p> <pre><code>['11', '12', '81', '21', '22', nan, '41', '71', '10', '23', '02', '20', '19', '72', '24', '53', '60', '49'] [&lt;class 'str'&gt;, &lt;class 'str'&gt;, &lt;class 'str'&gt;, &lt;class 'str'&gt;, &lt;class 'str'&gt;, &lt;class 'float'&gt;, &lt;class 'str'&gt;, &lt;class 'str'&gt;, &lt;class 'str'&gt;, &lt;class 'str'&gt;, &lt;class 'str'&gt;, &lt;class 'str'&gt;, &lt;class 'str'&gt;, &lt;class 'str'&gt;, &lt;class 'str'&gt;, &lt;class 'str'&gt;, &lt;class 'str'&gt;, &lt;class 'str'&gt;] ['11', '12', '81', '21', '22', nan, '41', '71', '10', '23', '02', '20', '19', '72', '24', '53', '60', '49'] [&lt;class 'str'&gt;, &lt;class 'str'&gt;, &lt;class 'str'&gt;, &lt;class 'str'&gt;, &lt;class 'str'&gt;, &lt;class 'float'&gt;, &lt;class 'str'&gt;, &lt;class 'str'&gt;, &lt;class 'str'&gt;, &lt;class 'str'&gt;, &lt;class 'str'&gt;, &lt;class 'str'&gt;, &lt;class 'str'&gt;, &lt;class 'str'&gt;, &lt;class 'str'&gt;, &lt;class 'str'&gt;, &lt;class 'str'&gt;, &lt;class 'str'&gt;] </code></pre> <p>Clearly the values are being identified as NaN values, but fillna is not filling the NaN floats.</p> <p>Quick note - unsure why, but when I dropped the inplace=True in my code, the code worked as intended. Regardless, the pinned answer should work!</p>
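One likely explanation (hedged, added for illustration): `df['column']` can hand back an intermediate object, and `inplace=True` then fills that instead of `df` itself, which also matches the note that things worked once `inplace` was dropped and the result used directly. Assigning the result back avoids the issue entirely:

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({"column": ["11", "12", np.nan, "41"]})

# Assign the filled Series back instead of calling
# fillna(..., inplace=True) on a column selection, which may
# operate on an intermediate copy under copy-on-write.
df["column"] = df["column"].fillna("value")
```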
<python><python-3.x><pandas>
2023-04-16 04:09:14
1
518
Christina Stebbins
76,025,807
4,107,349
Setting similarity threshold for trigram_similar Django PostgreSQL lookups?
<p>We're using <a href="https://docs.djangoproject.com/en/4.1/ref/contrib/postgres/lookups/#trigram-similarity" rel="nofollow noreferrer">Django's trigram similarity</a> <code>__trigram_similar</code> lookup and it works nicely, except we want to adjust the similarity threshold and don't know how. Queries are currently filtered like so:</p> <pre class="lang-py prettyprint-override"><code>class MarkerViewSet(viewsets.ReadOnlyModelViewSet): queryset = tablename.objects.all() def filter_queryset(self, queryset): queryset = super().filter_queryset(queryset) queryset = queryset.filter(colname__trigram_similar= search_value) </code></pre> <p>We thought we could add something like <code>similarity__gt=0.1</code> to the args for <code>filter()</code> like <a href="https://stackoverflow.com/a/44012948">this answer</a> does, but it throws the error &quot;django.core.exceptions.FieldError: Cannot resolve keyword 'similarity_gt' into field&quot;. They are using the <code>.filter()</code> function in a different way we don't understand which might be the problem. What's the argument and how can we supply it?</p> <p>Seems we might be able to <a href="https://www.postgresql.org/docs/15/pgtrgm.html#PGTRGM-FUNC-TABLE" rel="nofollow noreferrer">adjust it manually in PostgreSQL with <code>set_limit</code></a> (previously <code>SET pg_trgm.similarity_threshold</code>), but we'd prefer to set this in Django.</p>
<python><django><postgresql><django-views><django-queryset>
2023-04-16 03:46:18
1
1,148
Chris Dixon
76,025,419
2,661,251
Python string check against a particular pattern
<p>My requirement is that I want to print true if the string matches the word &quot;develop&quot; or &quot;develop[0-9]&quot;. This is the Python code I wrote:</p> <pre class="lang-py prettyprint-override"><code>import re # Target String one str = [&quot;develop&quot;,&quot;develop1&quot;,&quot;develop123&quot;,&quot;developtest&quot;,&quot;develop9_test&quot;] # pattern to find three consecutive digits # compile string pattern to re.Pattern object for i in range(0, len(str)): if re.match(r&quot;develop\d&quot;, str[i]): print(&quot;true&quot;) </code></pre> <p>I want to print true only for the strings &quot;develop&quot;, &quot;develop1&quot;, and &quot;develop123&quot;. Any help is appreciated. The above code also has an issue that I am not sure how to resolve.</p>
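A sketch (added for illustration): anchor the pattern with `re.fullmatch` and make the digits optional (`\d*`), so the bare `develop` matches but `developtest` and `develop9_test` do not — `re.match` only anchors at the start, which is why the trailing `_test` slips through.

```python
import re

# develop followed by zero or more digits, and nothing else
PATTERN = re.compile(r"develop\d*")

strings = ["develop", "develop1", "develop123", "developtest", "develop9_test"]
matches = [s for s in strings if PATTERN.fullmatch(s)]
```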
<python><regex>
2023-04-16 00:47:37
2
516
gcpdev-guy
76,025,370
5,394,072
optimization - scipy minimize with contraints on returned values
<p>I have a function for weekly revenue, which is function of few variables (say variables are spend on different days of week, sunday through saturday). Now, I need to find the values of the variables that maximizes the weekly revenue.</p> <p>In addition to above, I also need to have a constraint on the <code>total return on spend</code>, for the week.</p> <blockquote> <p>Total return on spend = total revenue/ total spend.</p> </blockquote> <blockquote> <p>Total revenue = sum of revenue from all 7 days (total spend = sum of spend on all 7 days. Note that spend on each day is a variable in optimization).</p> </blockquote> <p>I am using scipy optimize.minimize for this task.</p> <pre><code>minimize(revenue_for_week, initial, bounds = bnds, method='L-BFGS-B', constraints = constr) </code></pre> <p>The 'bnds', 'constr' in above formulation, are nothing but constraints on the spend for different days of week (say sum of all spend variables value should be $100, and each variable should be within $5 and $25)</p> <p>In addition to constraints on variables, how do I also include a constraint of the return on spend (total revenue/ total spend) ? Say my constraint is 1.2, i.e, for every $ of spend , I need to receive $1.2 as revenue.</p> <p>Because the optimizer tries different combination of spend variables, for each combination of spend (for each weekday), I will have different total revenue and so this <code>total return on spend</code> will be coming from the function to minimize ( <code>revenue_For_week</code> in above). 
How do I tell the minimize function that one of the constraints is a constraint on the return value?</p> <p>I tried the following approach in the <code>revenue_for_week</code> function:</p> <pre><code>if total_return&gt;=1.2: return -total_revenue # revenue from the spend combination else: return 10000 # a large value so minimize knows the combination is bad when the return is &lt;1.2 </code></pre> <p>but it did not work (it always returns 10000, since for the tried combinations the ROAS is &lt;1.2). The minimize function also tries only a few different values of the variables, all in the very close neighborhood of the chosen initial values. In 17 iterations or fewer it reports success (for example, the first 16 don't satisfy the constraint that all variables should sum to $100, but the 17th does; all 17 have an objective value of 10000, since none satisfy the ROAS constraint). I know that for certain other combinations the ROAS is &gt;1.2, but minimize stops as soon as the variable constraint is satisfied (even though every objective value seen so far was 10000).</p> <p>I also tried <code>method = 'SLSQP'</code> in minimize, but the result is the same as above.</p> <p>Question 2:</p> <p>How can I make scipy optimize check a different range of values than the initial one? If the initial value is 0.24, scipy checks only values in the neighborhood of 0.24, like 0.24<strong>4</strong>5345<strong>4</strong>35 or 0.24<strong>3</strong>5345<strong>5</strong>35. I want it to also look at values like 0.18 or 0.20.</p>
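Rather than returning a penalty value from the objective (which makes the objective discontinuous and hard for a gradient-based solver), the ratio can be expressed as a separate inequality constraint; note also that `L-BFGS-B` does not support the `constraints` argument at all, which may explain part of the observed behavior. A sketch (added for illustration) with a made-up concave revenue model standing in for `revenue_for_week`:

```python
import numpy as np
from scipy.optimize import minimize

def revenue(spend):
    # Hypothetical stand-in for revenue_for_week: concave in spend.
    return float(np.sum(10.0 * np.sqrt(spend)))

constraints = [
    # total spend must equal $100
    {"type": "eq", "fun": lambda s: np.sum(s) - 100.0},
    # ROAS: revenue/spend >= 1.2, rewritten as revenue - 1.2*spend >= 0
    {"type": "ineq", "fun": lambda s: revenue(s) - 1.2 * np.sum(s)},
]
bounds = [(5.0, 25.0)] * 7            # per-day spend limits
x0 = np.full(7, 100.0 / 7.0)          # feasible starting point

result = minimize(lambda s: -revenue(s), x0, method="SLSQP",
                  bounds=bounds, constraints=constraints)
```

For question 2: SLSQP is a local method and only explores near `x0`; restarting it from several different starting points (or a global method over the same bounds) is the usual way to look at regions like 0.18 or 0.20.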
<python><optimization><mathematical-optimization><scipy-optimize><scipy-optimize-minimize>
2023-04-16 00:26:34
0
738
tjt
76,025,305
18,533,248
dynamically calling an ffi function in python?
<p>I'm writing a simple program that needs to reach out to libc for some functions. My problem is that I don't know what these functions are when writing the program, i.e. the functions to be called are chosen dynamically, based on some user input.</p> <p>How can I do this? Ideally I'd like to have a Python dictionary of all the functions from the imported library.</p> <p>Thanks!</p>
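A sketch with `ctypes` (added for illustration), resolving a function chosen at runtime by name. Note that `ctypes` does not provide a listing of a library's exports, so building "a dictionary of all functions" means getting the names from somewhere else (the library's headers, or a tool like `nm -D`):

```python
import ctypes
import ctypes.util

# find_library("c") locates libc; on Linux CDLL(None) would also work
libc = ctypes.CDLL(ctypes.util.find_library("c"))

def call_libc(name, argtypes, restype, *args):
    # getattr resolves the symbol by its runtime-supplied name
    func = getattr(libc, name)
    func.argtypes = list(argtypes)
    func.restype = restype
    return func(*args)

length = call_libc("strlen", [ctypes.c_char_p], ctypes.c_size_t, b"hello")
```

Setting `argtypes`/`restype` per call is important: without them ctypes guesses, which silently corrupts pointers and 64-bit values.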
<python><shared-libraries><dllimport>
2023-04-16 00:02:43
1
501
kamkow1
76,025,304
12,035,739
Why is my PNG being read as black and white?
<p>I am reading this picture,</p> <p><a href="https://i.sstatic.net/PcjMu.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/PcjMu.png" alt="enter image description here" /></a></p> <p>Using the following Python,</p> <pre class="lang-py prettyprint-override"><code>import numpy as np from PIL import Image as im from matplotlib import pyplot as plt image = np.array(im.open(&quot;optimised.png&quot;)) print(image.shape) plt.imshow(image) plt.show() </code></pre> <p>But this plots a greyscaled image,</p> <p><a href="https://i.sstatic.net/4j53b.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/4j53b.png" alt="enter image description here" /></a></p> <p>Further, the shape is printed as <code>(57, 86)</code> where I expect this to be something like <code>(57, 86, 3)</code>.</p> <p>I opened the image in an image viewer and sure enough, it has color,</p> <p><a href="https://i.sstatic.net/GjlIl.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/GjlIl.png" alt="enter image description here" /></a></p> <p>What is happening here?</p>
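A likely cause (hedged, added for illustration): the PNG is stored in palette (`P`) mode, so `np.array` yields a 2-D array of palette indices, and matplotlib renders that single channel with its default colormap. Converting to RGB first restores the three channels:

```python
import numpy as np
from PIL import Image

def load_rgb(path):
    # .convert("RGB") expands palette ('P') or grayscale ('L')
    # images to 3 channels; without it the array stays 2-D.
    return np.array(Image.open(path).convert("RGB"))
```

Alternatively, `plt.imshow(arr, cmap="gray")` only changes the colormap; the conversion is what actually recovers the color data.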
<python><numpy><matplotlib><python-imaging-library>
2023-04-16 00:01:14
3
886
scribe
76,025,232
2,266,881
Click event function not working in PyQT6 when added dynamically
<p>I'm creating a bunch of QPushButtons in PyQt6 dynamically in a loop. They all use the same target function and the only difference between them is one argument they receive, but for some reason, when I assign them dynamically, they all get the argument of the last one assigned.</p> <p>Doing this:</p> <pre><code>for i in range(n): btn.clicked.connect(lambda: click_event(i)) </code></pre> <p>All events get assigned like this, with the last value of i as the argument:</p> <pre><code>btn.clicked.connect(lambda: click_event(n)) btn.clicked.connect(lambda: click_event(n)) btn.clicked.connect(lambda: click_event(n)) ... </code></pre> <p>But if I assign them &quot;manually&quot;:</p> <pre><code>btn.clicked.connect(lambda: click_event(0)) btn.clicked.connect(lambda: click_event(1)) btn.clicked.connect(lambda: click_event(2)) ... </code></pre> <p>They work flawlessly.</p> <p>Any idea what it could be?</p>
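This is Python's late-binding closure behavior rather than anything Qt-specific: the lambdas all share the loop variable `i` and read it only when called, after the loop has finished. Binding the current value at definition time fixes it; a Qt-free sketch of both behaviors (added for illustration):

```python
from functools import partial

def click_event(i):
    return i

# Late binding: each lambda looks up i when called, after the loop ended.
late = [lambda: click_event(i) for i in range(3)]

# Early binding: capture the current value with a default arg or partial.
fixed = [lambda i=i: click_event(i) for i in range(3)]
also_fixed = [partial(click_event, i) for i in range(3)]
```

For the Qt case, note that `clicked` also emits a `checked` bool, so a form like `btn.clicked.connect(lambda checked=False, i=i: click_event(i))` is the usual fix.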
<python><pyqt6>
2023-04-15 23:36:40
0
1,594
Ghost
76,025,196
480,118
how to gracefully kill all processes created by a python subprocess call to script which creates child processes (script, java, tee)
<p>This gets a bit complicated. But in short i have a python script that calls a bash script which launches a long running java process via yet another bash script. i've written another python process that will allow me to kill the process if need be. The problem im having is that i see the processes for the two shell script disappear, but the java process and a 'tee' processes linger around and are labeled as 'defunct' ...hopefully i can make it more clear with some code..</p> <p><strong>Python code to launch the bash/shell process</strong></p> <pre><code>#module level dict to store running processes that i may want to kill if they take too long _processes={} #call script - which will call another bash script, which will launch a java app, and tee some outputs to log and console script = ''/apps/myapp/prep_and_launch'' process = subprocess.Popen([script, 'test1', &quot;200&quot;], preexec_fn=os.setsid) _processes[script] = process ret= process.wait() </code></pre> <p><strong>python code to kill the process</strong></p> <pre><code>process = _processes[script] os.killpg(os.getpgid(process.pid), signal.SIGTERM) #process.terminate() #process.kill() </code></pre> <p><strong>#script1 - which is called by the python code and calls a second script to launch a java app:</strong></p> <pre><code>#prep_and_launch.sh #! 
/bin/bash arg1=${1:-&quot;test1&quot;} arg2=${2:-&quot;100&quot;} #do some munging, setup, and prep for java run here #call script to run java app /apps/myapp/scripts/run_java_app.sh $arg1 $arg2 </code></pre> <p><strong>script2 which launches the java and tees output</strong></p> <pre><code>#run_myapp.sh arg1=$1 arg2=$2 LOG_DIR=/apps/myapp/logs/app.log JAVA_OPTS=&quot;-Xms512m -Xmx4G -Drs.logpath=$LOG_DIR&quot; EXEC_CMD=&quot;java $JAVA_OPTS -cp $CLASSPATH com.myapp.TestApp $arg1 $arg2&quot; exec 3&gt;&amp;1 1&gt;&gt;${LOG_FILE} 2&gt;&amp;1 echo &quot;JAVA_OPTS: $JAVA_OPTS&quot;| tee /dev/fd/3 echo &quot;EXEC_CMD: $EXEC_CMD&quot;| tee /dev/fd/3 exec $EXEC_CMD 2&gt;&amp;1 | tee /dev/fd/3 </code></pre> <p>So in the end, there are 4 processes i need killed gracefully - all four are indeed killed, but the java and tee processes still linger as &quot;&quot;:</p> <p><a href="https://i.sstatic.net/cF9q0.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/cF9q0.png" alt="enter image description here" /></a></p> <p>What am i missing here to cleanly kill the java and tee processes?</p> <p><strong>EDIT: what is pid 1?</strong> definitely these are from the container build: <a href="https://i.sstatic.net/lH3J9.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/lH3J9.png" alt="enter image description here" /></a></p>
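Two separate things seem to be going on (a hedged reading, added for illustration): the shell pipeline spawns children (`java`, `tee`) that the shell does not forward signals to, and "defunct" entries are zombies that linger until some parent calls `wait()` on them. Killing the whole process group and then reaping with `wait()` handles both for the directly spawned tree; a sketch:

```python
import os
import signal
import subprocess

def terminate_tree(process, timeout=10):
    """SIGTERM the whole process group, then wait() to reap the leader
    so it does not linger as a defunct (zombie) entry."""
    pgid = os.getpgid(process.pid)
    os.killpg(pgid, signal.SIGTERM)
    try:
        return process.wait(timeout=timeout)
    except subprocess.TimeoutExpired:
        os.killpg(pgid, signal.SIGKILL)   # escalate if SIGTERM is ignored
        return process.wait()
```

In a container where your script is not PID 1, zombies reparented to PID 1 are only reaped if PID 1 reaps them (an init shim such as `tini` does this); using `exec` in the shell scripts, as `run_myapp.sh` already does for the java process, keeps the tree shallow so fewer processes need reparenting at all.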
<python><linux><bash><shell><sh>
2023-04-15 23:23:59
0
6,184
mike01010
76,025,129
6,063,706
Custom Pytorch function with hard clamp forwards and softclamp backwards
<p>I want to implement a custom differentiable function in PyTorch that acts like torch.clamp in the forward pass but in the backward pass outputs the gradients as if it were a tanh.</p> <p>I tried the following code:</p> <pre><code>import torch class ClampWithGrad (torch.autograd.Function): @staticmethod def forward (ctx, input): ctx.save_for_backward(input) return torch.clamp(input, -1, 1) @staticmethod def backward(ctx, grad_output): input, = ctx.saved_tensors grad_input = grad_output.clone() grad_input[input &lt;= -1] = (1.0 - torch.tanh(input[input &lt;= -1])**2.0) * grad_output[input &lt;= -1] grad_input[input &gt;= 1] = (1.0 - torch.tanh(input[input &gt;= 1])**2.0) * grad_output[input &gt;= 1] return grad_input </code></pre> <p>However, when I include this in my neural network, I get NaNs. How can I best implement this?</p>
<python><pytorch>
2023-04-15 22:59:02
1
1,035
Tob
76,025,095
165,659
Can't correctly decode an image frame using PyAV
<p>I'm trying to simply encode and decode a capture frame from the web-cam. I want to be able to send this over TCP but at the moment I'm having trouble performing this just locally.</p> <p>Here's my code that simply takes the frame from the web-cam, encodes, then decodes, and displays the two images in a new window. The two images look like this:</p> <p><a href="https://i.sstatic.net/6vv2q.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/6vv2q.png" alt="1" /></a></p> <p>Here's the code:</p> <pre><code>import struct import cv2 import socket import av import time import os class PerfTimer: def __init__(self, name): self.name = name def __enter__(self): self.start_time = time.perf_counter() def __exit__(self, type, value, traceback): end_time = time.perf_counter() print(f&quot;'{self.name}' taken:&quot;, end_time - self.start_time, &quot;seconds.&quot;) os.environ['AV_PYTHON_AVISYNTH'] = 'C:/ffmpeg/bin' socket_enabled = False sock = None if socket_enabled: sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM) print(&quot;Connecting to server...&quot;) sock.connect(('127.0.0.1', 8000)) # Set up video capture. print(&quot;Opening web cam...&quot;) cap = cv2.VideoCapture(0, cv2.CAP_DSHOW) cap.set(cv2.CAP_PROP_FRAME_WIDTH, 800) cap.set(cv2.CAP_PROP_FRAME_HEIGHT, 600) # Initialize the encoder. encoder = av.CodecContext.create('h264', 'w') encoder.width = 800 encoder.height = 600 encoder.pix_fmt = 'yuv420p' encoder.bit_rate = 5000 # Initialize the decoder. decoder = av.CodecContext.create('h264', 'r') decoder.width = 800 decoder.height = 600 decoder.pix_fmt = 'yuv420p' decoder.bit_rate = 5000 print(&quot;Streaming...&quot;) while(cap.isOpened()): # Capture the frame from the camera. ret, orig_frame = cap.read() cv2.imshow('Source Video', orig_frame) # Convert to YUV. img_yuv = cv2.cvtColor(orig_frame, cv2.COLOR_BGR2YUV_I420) # Create a video frame object from the num py array. 
video_frame = av.VideoFrame.from_ndarray(img_yuv, format='yuv420p') with PerfTimer(&quot;Encoding&quot;) as p: encoded_frames = encoder.encode(video_frame) # Sometimes the encode results in no frames encoded, so lets skip the frame. if len(encoded_frames) == 0: continue print(f&quot;Decoding {len(encoded_frames)} frames...&quot;) for frame in encoded_frames: encoded_frame_bytes = bytes(frame) if socket_enabled: # Get the size of the encoded frame in bytes size = struct.pack('&lt;L', len(encoded_frame_bytes)) sock.sendall(size + encoded_frame_bytes) # Step 1: Create the packet from the frame. packet = av.packet.Packet(frame) # Step 2: Decode the packet. decoded_packets = decoder.decode(packet) for packet in decoded_packets: # Step 3: Convert the pixel format from the encoder color format to BGR for displaying. frame = cv2.cvtColor(packet.to_ndarray(format='yuv420p'), cv2.COLOR_YUV2BGR_I420) # Step 4. Display frame in window. cv2.imshow('Decoded Video', frame) if cv2.waitKey(1) &amp; 0xFF == ord('q'): break # release everything cap.release() sock.close() cv2.destroyAllWindows() </code></pre>
<python><ffmpeg><pyav>
2023-04-15 22:47:51
2
2,205
Martin Blore
76,025,067
19,003,861
How to call variable processed in forloop function (views) in django template?
<p>I am trying to refer a variable <code>modela_completed</code> in my django template after some logic was applied to it.</p> <p>Since it seems templates are only a place for making a page look nice, there must be a way to assign a value to a variable in a forloop and refer it into the template.</p> <p><strong>models</strong></p> <pre><code>class ModelA(models.Model, HitCountMixin): name = models.CharField(verbose_name=&quot;Name&quot;,max_length=100, blank=True) class ModelB(models.Model): user = models.ForeignKey(UserProfile, blank=True, null=True, on_delete=models.CASCADE) venue = models.ForeignKey(ModelA, blank=True, null=True, on_delete=models.CASCADE) created_at = models.DateTimeField(auto_now_add=True, null=True, blank=True) class ModelC(models.Model): title = models.CharField(verbose_name=&quot;title&quot;,max_length=100, null=True, blank=True) venue = models.ManyToManyField(ModelA, blank=True) created_at = models.DateTimeField(auto_now_add=True, null=True, blank=True) class ModelD(models.Model): name = models.ForeignKey(ModelC,null=True, blank=True, on_delete=models.SET_NULL, related_name=&quot;related_name&quot;) user = models.ForeignKey(User, null=True, blank=True, on_delete=models.CASCADE) completed = models.BooleanField(default=False) accepted_on = models.DateTimeField(auto_now_add=True, null=True, blank=True) </code></pre> <p><strong>views</strong></p> <pre><code>def function(request, modeld_id): q_1 = ModelD.objects.get(pk=modeld_id) q_2 = q_1.name.modela.all() #count the number of q_2 in modelc modela_count = q_2 .count() modela_set = set() sum_completed = 0 # identify date the ModelC was accepted modeld_accepted_on = get_object_or_404(ModelD, pk=modeld_id) for modela in q_2 : print(modela) if modela.id not in modela_set: modela_set.add(modela.id) latest_point = None modela_completed = 0 for in_modela in modela.modelb_set.filter(user=request.user.userprofile): if latest_point is None or in_modela.created_at &gt; latest_point: latest_point = 
in_modela.created_at modela_completed = 0 if latest_point is not None and latest_point &gt; modeld_accepted_on.accepted_on: modela_completed = 1 #&lt;--- I want to be able to verify if modela_completed = 1 in my template. print(f'Points for modela- {in_modela.venue} - {modela_completed}') return render(request,&quot;template.html&quot;, {'modeld_accepted_on':modeld_accepted_on,'modela_completed':modela_completed,'modela_count':modela_count,'loyalty_card_latest':loyalty_card_latest, 'modelc':modelc,'q_2 ':q_2 }) </code></pre> <p><strong>template</strong></p> <pre><code>{%for modela in q_2 %} {%for in_modela in modela.modelb_set.all%} {{modela_completed}} #&lt;-- this equals zero on every occasion {%endfor%} {%endfor%} </code></pre> <p>Following an example like this, how can I test if <code>modela_completed=1</code>? It seems whatever I try <code>modela_completed=None</code> because this is where the variable is first mentioned in the view.</p> <p>When I print the view's values on the console, I get a different result than what the template shows.</p>
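A single scalar like `modela_completed` is overwritten on every loop iteration, so by the time `render` runs the template only ever sees the final value. One common pattern is to compute a per-object result in the view and pass the whole collection to the template. A minimal, framework-free sketch of that pattern (the venue names and dates here are invented stand-ins, not from the question's data):

```python
from datetime import datetime

# Hypothetical stand-in data: the latest ModelB timestamp per venue, and the
# datetime the ModelD was accepted (all values are made up for illustration).
accepted_on = datetime(2023, 4, 1)
latest_visit = {
    "venue_a": datetime(2023, 4, 10),  # visited after acceptance -> completed
    "venue_b": datetime(2023, 3, 20),  # visited before acceptance -> not completed
}

# Build one (venue, completed) pair per venue in the view, instead of reusing
# a single variable, then iterate the list in the template:
#   {% for venue, completed in completed_by_venue %} {{ completed }} {% endfor %}
completed_by_venue = [
    (venue, 1 if latest > accepted_on else 0)
    for venue, latest in latest_visit.items()
]
```

Passing `completed_by_venue` in the context gives the template one value per venue rather than whatever the loop last assigned.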
<python><django><django-views><django-templates>
2023-04-15 22:39:23
2
415
PhilM
76,025,059
124,367
Python - Cannot upgrade pip or install any packages due to SSL Certificate errors
<p>I have a company-owned Windows machine with Python 3.11.2. Pip is version 22.3.1 (as I write this, the current version is 23.1). The local Python folder is <code>C:\Users\{User}\AppData\Local\Programs\Python\Python311</code></p> <p>If I try to upgrade pip or install any packages, I get an SSL Error. For example, running <code>pip install pandas</code> results in the following:</p> <pre><code>pip : WARNING: Retrying (Retry(total=4, connect=None, read=None, redirect=None, status=None)) after connection broken by 'SSLError(SSLCertVerificationError(1, '[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: unable to get local issuer certificate (_ssl.c:992)'))': /simple/pandas/ At line:1 char:1 + pip install pandas + ~~~~~~~~~~~~~~~~~~ + CategoryInfo : NotSpecified: (WARNING: Retryi.../simple/pandas/:String) [], RemoteException + FullyQualifiedErrorId : NativeCommandError WARNING: Retrying (Retry(total=3, connect=None, read=None, redirect=None, status=None)) after connection broken by 'SSLError(SSLCertVerificationError(1, '[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: unable to get local issuer certificate (_ssl.c:992)'))': /simple/pandas/ WARNING: Retrying (Retry(total=2, connect=None, read=None, redirect=None, status=None)) after connection broken by 'SSLError(SSLCertVerificationError(1, '[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: unable to get local issuer certificate (_ssl.c:992)'))': /simple/pandas/ WARNING: Retrying (Retry(total=1, connect=None, read=None, redirect=None, status=None)) after connection broken by 'SSLError(SSLCertVerificationError(1, '[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: unable to get local issuer certificate (_ssl.c:992)'))': /simple/pandas/ WARNING: Retrying (Retry(total=0, connect=None, read=None, redirect=None, status=None)) after connection broken by 'SSLError(SSLCertVerificationError(1, '[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: unable to get local issuer 
certificate (_ssl.c:992)'))': /simple/pandas/ Could not fetch URL https://pypi.org/simple/pandas/: There was a problem confirming the ssl certificate: HTTPSConnectionPool(host='pypi.org', port=443): Max retries exceeded with url: /simple/pandas/ (Caused by SSLError(SSLCertVerificationError(1, '[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify faile d: unable to get local issuer certificate (_ssl.c:992)'))) - skipping ERROR: Could not find a version that satisfies the requirement pandas (from versions: none) ERROR: No matching distribution found for pandas Could not fetch URL https://pypi.org/simple/pip/: There was a problem confirming the ssl certificate: HTTPSConnectionPool(host='pypi.org', port=443): Max retries exceeded with url: /simple/pip/ (Caused by SSLError(SSLCertVerificationError(1, '[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: una ble to get local issuer certificate (_ssl.c:992)'))) - skipping WARNING: There was an error checking the latest version of pip. </code></pre> <p>Based on other posts (sorry, I didn't keep track of all of them), I have tried a few things:</p> <ol> <li>Verified I can visit the above links in a browser. 
I think this means the administrator is not blocking access?</li> <li>In Environment Variables, I went into PATH and removed the trailing slash from both python folders.</li> <li>Per this <a href="https://stackoverflow.com/questions/64759797/windows-10-unable-to-install-pip-due-to-ssl-certificate-error">post</a>, I tried <code>curl https://bootstrap.pypa.io/get-pip.py -o get-pip.py</code> followed by <code>python get-pip.py</code> and still got SSL errors.</li> <li>Per this <a href="https://stackoverflow.com/questions/71147103/pip-will-not-skip-ssl-certificate-check">post</a>, I verified I can open a browser and visit <a href="https://pypi.python.org" rel="nofollow noreferrer">pypi.python.org</a>, <a href="https://pypi.org" rel="nofollow noreferrer">pypi.org</a>, and <a href="https://files.pythonhosted.org" rel="nofollow noreferrer">files.pythonhosted.org</a></li> <li>Per another post I lost, I went into the list of local certificates via Google but I didn't understand the instructions.</li> <li>I downloaded the latest certifi file from <a href="https://pypi.org/project/certifi/" rel="nofollow noreferrer">https://pypi.org/project/certifi/</a>. However, I think it was already up to date.</li> <li>There are plenty of other SO posts asking the same question but I saw few with answers and they haven't worked yet.</li> <li>This is a company-owned work machine so the Trusted Connection flag is not a viable option.</li> </ol>
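If the root cause is a corporate TLS-intercepting proxy (common on company machines), one frequently used workaround is to declare the PyPI hosts as trusted in pip's per-user config (on Windows, `%APPDATA%\pip\pip.ini`). This disables certificate verification for those hosts, so it trades security for connectivity; a sketch, not a recommendation for untrusted networks:

```ini
[global]
trusted-host = pypi.org
               files.pythonhosted.org
               pypi.python.org
```

A cleaner option on pip 22.2+ with Python 3.10+ is `pip install --use-feature=truststore <package>` (after getting the `truststore` package installed once), which validates TLS against the Windows certificate store, where a corporate root CA is typically already present.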
<python><python-3.x><pandas><ssl><pip>
2023-04-15 22:36:05
1
11,801
PowerUser
76,024,738
10,327,984
RuntimeError: running_mean should contain 1 elements not 200
<p>I am implementing a conditional gan for image generation with text embedding from scratch and I am getting the above error exactly in the BatchNorm1d layer from the embedding_layers in the generator <br> generator class :</p> <pre><code>import torch.nn as nn class Generator(nn.Module): def __init__(self, embedding_dim=300, latent_dim=100, image_size=64, num_channels=3): super(Generator, self).__init__() self.embedding_size = embedding_dim self.latent_dim = latent_dim self.image_size = image_size # Define embedding processing layers self.embedding_layers = nn.Sequential( nn.Linear(embedding_dim,latent_dim), nn.BatchNorm1d(latent_dim), nn.LeakyReLU(0.2, inplace=True) ) # Define noise processing layers self.noise_layers = nn.Sequential( nn.Linear(latent_dim, image_size * image_size * 4), nn.BatchNorm1d(image_size * image_size * 4), nn.LeakyReLU(0.2, inplace=True) ) # Define image processing layers self.conv_layers = nn.Sequential( nn.ConvTranspose2d(latent_dim + 256, 512, kernel_size=4, stride=2, padding=1, bias=False), nn.BatchNorm2d(512), nn.ReLU(True), nn.ConvTranspose2d(512, 256, kernel_size=4, stride=2, padding=1, bias=False), nn.BatchNorm2d(256), nn.ReLU(True), nn.ConvTranspose2d(256, 128, kernel_size=4, stride=2, padding=1, bias=False), nn.BatchNorm2d(128), nn.ReLU(True), nn.ConvTranspose2d(128, 64, kernel_size=4, stride=2, padding=1, bias=False), nn.BatchNorm2d(64), nn.ReLU(True), nn.ConvTranspose2d(64, num_channels, kernel_size=4, stride=2, padding=1, bias=False), nn.Tanh() ) def get_latent_dim(self): return self.latent_dim def forward(self, embeddings,noise): # Process embedding embedding_features = self.embedding_layers(embeddings) # Process noise noise_features = self.noise_layers(noise) # Combine features features = torch.cat((embedding_features, noise_features), dim=1) features = features.view(features.shape[0], -1, self.image_size // 16, self.image_size // 16) # Generate image image = self.conv_layers(features) return image </code></pre> 
<p>discriminator class:</p> <pre><code>import torch.nn as nn class Discriminator(nn.Module): def __init__(self, embedding_dim=300, image_size=64, num_channels=3): super(Discriminator, self).__init__() # Define image processing layers self.conv_layers = nn.Sequential( nn.Conv2d(num_channels, 64, kernel_size=4, stride=2, padding=1), nn.LeakyReLU(0.2, inplace=True), nn.Conv2d(64, 128, kernel_size=4, stride=2, padding=1, bias=False), nn.BatchNorm2d(128), nn.LeakyReLU(0.2, inplace=True), nn.Conv2d(128, 256, kernel_size=4, stride=2, padding=1, bias=False), nn.BatchNorm2d(256), nn.LeakyReLU(0.2, inplace=True), nn.Conv2d(256, 512, kernel_size=4, stride=2, padding=1, bias=False), nn.BatchNorm2d(512), nn.LeakyReLU(0.2, inplace=True), nn.Conv2d(512, 1024, kernel_size=4, stride=2, padding=1, bias=False), nn.BatchNorm2d(1024), nn.LeakyReLU(0.2, inplace=True), nn.Conv2d(1024, 1, kernel_size=4, stride=1, padding=0, bias=False), nn.Sigmoid() ) # Define embedding processing layers self.embedding_layers = nn.Sequential( nn.Linear(embedding_dim, image_size * image_size), nn.BatchNorm1d(image_size * image_size), nn.LeakyReLU(0.2, inplace=True) ) def forward(self, images, embeddings): # Process image image_features = self.conv_layers(images) # Process embedding embedding_features = self.embedding_layers(embeddings) embedding_features = embedding_features.view(embedding_features.shape[0], 1, 64, 64) # Combine features features = torch.cat((image_features, embedding_features), dim=1) # Classify classification = self.classification_layers(features).view(features.shape[0], -1) validity = self.validity_layers(features).view(features.shape[0], -1) return validity, classification </code></pre> <p>train function:</p> <pre><code>import torch import torch.nn as nn import torch.optim as optim from torch.utils.data import DataLoader from torchvision.utils import save_image from tqdm import tqdm def train_gan(generator, discriminator, dataset, batch_size, num_epochs, device): &quot;&quot;&quot; 
Trains a conditional GAN with a generator and a discriminator using a PyTorch dataset containing text embeddings and images. &quot;&quot;&quot; # Set up loss functions and optimizers adversarial_loss = nn.BCELoss() generator_optimizer = optim.Adam(generator.parameters(), lr=0.0002, betas=(0.5, 0.999)) discriminator_optimizer = optim.Adam(discriminator.parameters(), lr=0.0002, betas=(0.5, 0.999)) # Set up data loader data_loader = DataLoader(dataset, batch_size=batch_size, shuffle=True) generator.to(device) discriminator.to(device) # Train the GAN for epoch in range(num_epochs): for i, data in enumerate(tqdm(data_loader)): # Load data and labels onto the device text_embeddings = data['text_embedding'].to(device) real_images = data['image'].to(device) # Generate fake images using the generator and the text embeddings noise = torch.randn(batch_size,generator.latent_dim).to(device) fake_images = generator(text_embeddings,noise) # Train the discriminator discriminator_optimizer.zero_grad() real_labels = torch.ones(real_images.size(0), 1).to(device) fake_labels = torch.zeros(fake_images.size(0), 1).to(device) real_predictions = discriminator(real_images, text_embeddings) real_loss = adversarial_loss(real_predictions, real_labels) fake_predictions = discriminator(fake_images.detach(), text_embeddings) fake_loss = adversarial_loss(fake_predictions, fake_labels) discriminator_loss = real_loss + fake_loss discriminator_loss.backward() discriminator_optimizer.step() # Train the generator generator_optimizer.zero_grad() fake_predictions = discriminator(fake_images, text_embeddings) generator_loss = adversarial_loss(fake_predictions, real_labels) generator_loss.backward() generator_optimizer.step() # Save generated images and model checkpoints every 500 batches if i % 500 == 0: with torch.no_grad(): fake_images = generator(text_embeddings[:16]).detach().cpu() save_image(fake_images, f&quot;images\generated_images_epoch_{epoch}_batch_{i}.png&quot;, normalize=True, nrow=4) 
torch.save(generator.state_dict(), f&quot;images\generator_checkpoint_epoch_{epoch}_batch_{i}.pt&quot;) torch.save(discriminator.state_dict(), f&quot;images\discriminator_checkpoint_epoch_{epoch}_batch_{i}.pt&quot;) # Print loss at the end of each epoch print(f&quot;Epoch [{epoch+1}/{num_epochs}] Discriminator Loss: {discriminator_loss.item()}, Generator Loss: {generator_loss.item()}&quot;) </code></pre> <p>main</p> <pre><code># defining hyperparamter torch.cuda.empty_cache() embedding_dim=768 img_size=512 latent_dim=200 batch_size=32 num_epochs=100 device = torch.device('cuda' if torch.cuda.is_available() else 'cpu') #define the main components generator=Generator(embedding_dim=embedding_dim, latent_dim=latent_dim, image_size=img_size) discriminator=Discriminator(embedding_dim=embedding_dim,image_size=img_size) train_gan(generator=generator, discriminator=discriminator, dataset=dataset, batch_size=batch_size, num_epochs=num_epochs, device=device,) </code></pre> <p>as for my dataset, it consists of images and text embeddings with the following shape</p> <pre><code>torch.Size([3, 512, 512]) torch.Size([1, 768]) </code></pre>
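The mismatch between "1 elements" and "200" points at the extra singleton dimension in the embeddings: each one is stored as `[1, 768]`, so a batch arrives as `[batch, 1, 768]`, and on a 3-D input `nn.BatchNorm1d` treats dimension 1 as the channel dimension. A small sketch reproducing and fixing this (shapes taken from the question; the assumed fix is squeezing the singleton dimension before the embedding layers):

```python
import torch
import torch.nn as nn

fc = nn.Linear(768, 200)   # embedding_dim -> latent_dim
bn = nn.BatchNorm1d(200)   # holds 200 running statistics

x = torch.randn(32, 1, 768)  # dataset yields [1, 768] per sample -> [32, 1, 768] per batch

# Linear maps the last dim: [32, 1, 200]. BatchNorm1d reads this as (N, C, L)
# with C=1 channel, but it holds 200 running stats -> the RuntimeError above.
try:
    bn(fc(x))
    raised = False
except RuntimeError:
    raised = True

# Squeezing the singleton dim gives [32, 768] -> [32, 200], which BatchNorm1d
# reads as (N, C) with C=200 features, matching its running_mean.
out = bn(fc(x.squeeze(1)))
```

In the posted code the equivalent change would be `embeddings = embeddings.squeeze(1)` (or `dim=1` via `view`) before calling `self.embedding_layers`.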
<python><pytorch><generative-adversarial-network>
2023-04-15 20:59:58
2
622
Mohamed Amine
76,024,686
12,603,110
Pytorch Lambda Transforms and pickling?
<p>I recently began reading and experimenting with PyTorch, and I saw that the transforms module has a <code>Lambda</code> class. Currently, as far as I am aware, composing transforms requires just a callable, and a plain lambda can suffice.</p> <p>What's the purpose of the <code>Lambda</code> class if Python lambdas work just the same? Or am I missing something?</p>
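One concrete difference shows up when transforms have to cross process boundaries (e.g. `DataLoader` workers on platforms that use the `spawn` start method): plain lambdas cannot be pickled, because pickle serializes functions by importable name and a lambda has none. A stdlib-only sketch of that limitation:

```python
import pickle

def double(x):
    # A module-level named function pickles by reference (module + name).
    return 2 * x

# Round-trips fine: unpickling looks the name up again.
roundtrip = pickle.loads(pickle.dumps(double))

# A lambda has no importable name, so pickling it fails.
try:
    pickle.dumps(lambda x: 2 * x)
    lambda_picklable = True
except (pickle.PicklingError, AttributeError, TypeError):
    lambda_picklable = False
```

Note that wrapping a lambda in `transforms.Lambda` does not by itself make it picklable; for multiprocessing-safe pipelines, named functions or callable classes are the safer choice.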
<python><pytorch><pickle>
2023-04-15 20:47:19
0
812
Yorai Levi
76,024,680
1,028,162
Problems with immediate consistency in Terraform AWS Lambda layer
<p><br /> I'm having trouble configuring my tf insrastructure of AWS Lambda + Lambda Layer. I'll start with what I want to achieve: I want to implement a Lambda that uses a Layer for dependencies (in my case - using python - installed from <code>requirements.txt</code>), and I want the layer to be built and uploaded <strong>if and only if</strong> there was a change to the <code>requirements.txt</code> file. <br /> I managed to get there using the following terraform:</p> <pre><code># Layer resource &quot;null_resource&quot; &quot;layer_code&quot; { triggers = { requirements_file = md5(file(&quot;${local.project_root}/requirements.txt&quot;)) } provisioner &quot;local-exec&quot; { command = &lt;&lt;EOF image_name=&quot;public.ecr.aws/sam/build-python3.9&quot; export_folder=&quot;tf/.deployment/layer-dependencies/python&quot; update_pip_cmd=&quot;pip install --upgrade pip&quot; install_dependencies_cmd=&quot;pip install -r requirements.txt -t $export_folder&quot; docker_cmd=&quot;$update_pip_cmd; $install_dependencies_cmd; exit&quot; echo &quot;Will build layer in '$export_folder'&quot; docker run -v &quot;${local.project_root}&quot;:/var/task &quot;$image_name&quot; /bin/sh -c &quot;$docker_cmd&quot; EOF } } data &quot;archive_file&quot; &quot;layer_code_archive&quot; { type = &quot;zip&quot; source_dir = &quot;${local.project_root}/tf/.deployment/layer-dependencies&quot; output_path = &quot;layer.zip&quot; depends_on = [ null_resource.layer_code ] } resource &quot;aws_lambda_layer_version&quot; &quot;dependencies_layer&quot; { filename = data.archive_file.layer_code_archive.output_path layer_name = &quot;${local.service_name}-layer&quot; source_code_hash = filebase64sha256(data.archive_file.layer_code_archive.output_path) skip_destroy = true } # Lambda data &quot;archive_file&quot; &quot;service_code_archive&quot; { type = &quot;zip&quot; source_dir = &quot;${local.project_root}/src&quot; output_path = &quot;service.zip&quot; } resource 
&quot;aws_lambda_function&quot; &quot;service_lambda&quot; { function_name = &quot;${local.service_name}-lambda&quot; role = aws_iam_role.lambda_exec_role.arn handler = &quot;lambda_handler.handle&quot; runtime = &quot;python3.9&quot; filename = data.archive_file.service_code_archive.output_path source_code_hash = filebase64sha256(data.archive_file.service_code_archive.output_path) layers = [ aws_lambda_layer_version.dependencies_layer.arn ] } </code></pre> <p>This almost does the trick, but it's not good enough, as it requires <strong>two runs</strong> of <code>tf apply</code>:</p> <ol> <li>Assume we are in a stable state (<code>tf plan</code> is empty), now I change <code>requirements.txt</code>.</li> <li>Running <code>tf plan</code> outputs <code>Plan: 1 to add, 0 to change, 1 to destroy.</code> <ul> <li><code>null_resource.layer_code must be replaced</code> (because <code>requirements.txt</code> has changed).</li> <li><code>data.archive_file.layer_code_archive will be read during apply</code>.</li> </ul> </li> <li>Running <code>tf apply</code> outputs <code>Apply complete! Resources: 1 added, 0 changed, 1 destroyed.</code> (layer was built locally, nothing actually changed in AWS).</li> <li>Running <code>tf plan</code> <strong>again</strong> outputs: <code>Plan: 1 to add, 1 to change, 1 to destroy.</code> <ul> <li><code>aws_lambda_layer_version.dependencies_layer must be replaced</code> (because <code>source_code_hash</code> changed, due to previous <code>tf apply</code>).</li> <li><code>aws_lambda_function.service_lambda will be updated in-place</code> (because layer will change).</li> </ul> </li> <li>Running <code>tf apply</code> <strong>again</strong> updates the layer + lambda.</li> <li>We are back in stable state (<code>tf plan</code> is empty).</li> </ol> <p>The point is that while this is eventual consistent, its <strong>not</strong> immediate consistent (what I want it to be). 
<br /> I tried setting <code>dependencies_layer.source_code_hash</code> with <code>md5(requirements.txt)</code>, and this does give immediate consistency, but the problem is that after 1 deployment, the <code>dependencies_layer.source_code_hash</code> is saved in tf state using the actual layer code, which causes <code>tf plan</code> to always be out of sync (even though it will reach the exact same code state). <br /> So, the question is: is there some option to explicitly trigger a <code>dependencies_layer</code> update when <code>layer_code</code> is changed? That would solve the issue. If not, maybe there is some other best-practice solution for this situation (a Lambda layer that depends on a requirements file)? It does not sound rare (and still I wasn't able to find anything useful). Also, is running 2 (or more) cycles of <code>tf plan</code> + <code>tf apply</code> considered bad practice? It sounds to me like it is.</p> <p>Thanks</p>
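One way to collapse the two cycles into one is to stop hashing the zip file on disk (`filebase64sha256`, which at plan time still sees the stale file) and instead use the hash the `archive_file` data source itself exposes, since that is computed after the `null_resource` rebuild in the same run. A sketch of the changed resource, assuming the rest of the configuration stays as posted:

```hcl
resource "aws_lambda_layer_version" "dependencies_layer" {
  filename   = data.archive_file.layer_code_archive.output_path
  layer_name = "${local.service_name}-layer"

  # output_base64sha256 is an attribute of the archive itself, so it is
  # (re)computed in the same plan that rebuilds the layer code, and the
  # layer replacement is planned immediately instead of one apply later.
  source_code_hash = data.archive_file.layer_code_archive.output_base64sha256

  skip_destroy = true
}
```

With this, the `requirements.txt` change, the rebuild, and the layer + function update should all land in a single `tf plan` / `tf apply` cycle.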
<python><aws-lambda><terraform><terraform-provider-aws>
2023-04-15 20:46:34
1
839
A. Kali
76,024,643
817,659
Pass a MySQLdb connection to SQLAlchemy
<p>If I create a connection that works to <code>MySQL</code> using MySQLdb like this:</p> <pre><code>connection = MySQLdb.connect( host= os.getenv(&quot;HOST&quot;), user=os.getenv(&quot;USERNAME&quot;), passwd= os.getenv(&quot;PASSWORD&quot;), db= os.getenv(&quot;DATABASE&quot;), ssl_mode = &quot;VERIFY_IDENTITY&quot;, ssl = { &quot;ca&quot;: &quot;/etc/ssl/certs/ca-certificates.crt&quot; } ) </code></pre> <p>Is there a way to pass this connection to <code>SQLAlchemy</code> <code>create_engine</code> so I <strong>don't</strong> have to do the string thing?</p> <pre><code>engine = create_engine(&quot;mysql+:MySQLdb//&quot;...&quot;) </code></pre>
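SQLAlchemy's `create_engine` accepts a `creator` argument: a zero-argument callable that returns a raw DBAPI connection, so a factory wrapping the existing `MySQLdb.connect(...)` call can be reused and no credentials need to appear in the URL (for this case the URL would be just `"mysql+mysqldb://"`). A runnable sketch of the same pattern, using stdlib `sqlite3` as a stand-in DBAPI driver:

```python
import sqlite3
from sqlalchemy import create_engine, text

def get_connection():
    # Stand-in for the question's MySQLdb.connect(...) call; with MySQL this
    # would build and return that connection instead.
    return sqlite3.connect(":memory:")

# `creator` bypasses URL-based connect arguments entirely: SQLAlchemy calls
# it whenever the connection pool needs a new raw connection.
engine = create_engine("sqlite://", creator=get_connection)

with engine.connect() as conn:
    value = conn.execute(text("SELECT 1")).scalar()
```

One caveat: the pool may call `creator` more than once, so it should be a factory that builds fresh connections, not a lambda returning one already-open connection.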
<python><sqlalchemy><mysql-python>
2023-04-15 20:40:31
0
7,836
Ivan
76,024,591
11,388,321
How can I post a response to WordPress via the REST API?
<p>I'm trying to post OpenAI's response (content message) as the content for my WordPress post, however, the WordPress part of my code doesn't seem to work, I've played around with doing it different ways but no post is created as a draft, how can I get OpenAI's response as the content of my drafted WordPress post?</p> <p>Here's the code I'm working with:</p> <pre><code>import openai import requests import json from requests.auth import HTTPBasicAuth import random import base64 # Array holding wordpress author IDs authorList = [&quot;1&quot;, &quot;2&quot;] # Randomly choose a number from authorList array randomAuthor = random.choice(authorList) # Set OpenAI API key openai.api_key = &quot;&quot; # Then, you can call the &quot;gpt-3.5-turbo&quot; model modelEngine = &quot;gpt-3.5-turbo&quot; # set your input text inputText = &quot;Write a 500 word tweet on {} stock&quot; inputTitle = &quot;Test analyst on {}&quot; # Array of keywords to generate article on keywords = [&quot;Nio (NIO)&quot;, &quot;Apple (AAPL)&quot;, &quot;Microsoft (MSFT)&quot;, &quot;Tesla (TSLA)&quot;, &quot;Meta (META)&quot;, &quot;Amazon (AMZN)&quot;] # WordPress information wp_base_url = &quot;&quot; #auth = HTTPBasicAuth(&quot;&quot;, &quot;&quot;) wp_username = &quot;&quot; wp_password = &quot;&quot; auth = wp_username + ':' + wp_password token = base64.b64encode(auth.encode()) # Post creator function def create_post(title, content, authorList): post_status = &quot;draft&quot; headers = { &quot;Authorization&quot;: &quot;Basic&quot; + token.decode('utf-8') #&quot;Accept&quot;: &quot;application/json&quot;, #&quot;Content-Type&quot;: &quot;application/json&quot; } post = json.dumps({ &quot;title&quot;: title, &quot;content&quot;: content, &quot;status&quot;: post_status, &quot;author&quot;: authorList }) url = wp_base_url + &quot;/wp-json/wp/v2/posts&quot; response = requests.post(url, json=post, headers=headers) return response # Switches and injects keywords into API request for keyword in 
keywords: # set input text with the current keyword inputSent = inputText.format(keyword) inputTitleSent = inputTitle.format(keyword) # Send an API request and get a response, note that the interface and parameters have changed compared to the old model response = openai.ChatCompletion.create( model=modelEngine, messages=[{&quot;role&quot;: &quot;user&quot;, &quot;content&quot;: inputSent }], n = 1 ) print(&quot;ChatGPT API replies for&quot;, keyword, &quot;stock:\n&quot;) for choice in response.choices: outputText = choice.message print(outputText) print(&quot;------&quot;) # Create the WordPress post using the OpenAI response create_post(inputTitleSent, choice.message, randomAuthor) print(&quot;\n&quot;) </code></pre>
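Two details in the posted code would make WordPress reject or never see the draft: the Authorization header is missing the space after `Basic`, and the payload is run through `json.dumps` and then passed to `requests.post(..., json=...)`, which serializes it a second time. (Separately, with `gpt-3.5-turbo`, `choice.message` is an object/dict; the text lives in `choice.message["content"]`.) A stdlib-only sketch of the corrected header and payload construction (user names and values are placeholders):

```python
import base64
import json

def build_headers(username, app_password):
    token = base64.b64encode(f"{username}:{app_password}".encode()).decode()
    return {
        # Note the space: "Basic <token>", not "Basic<token>".
        "Authorization": f"Basic {token}",
        "Content-Type": "application/json",
    }

def build_payload(title, content, author_id):
    # Return a plain dict; requests' json= parameter does the serialising,
    # so passing a json.dumps(...) string here would double-encode the body.
    return {
        "title": title,
        "content": content,
        "status": "draft",
        "author": author_id,
    }

headers = build_headers("editor", "app-password")
payload = build_payload("Test analyst on NIO", "draft body", "1")
# then: requests.post(wp_base_url + "/wp-json/wp/v2/posts", json=payload, headers=headers)
```

Checking `response.status_code` and `response.text` after the `requests.post` call will also surface whatever WordPress actually rejected.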
<python><wordpress>
2023-04-15 20:25:53
0
810
overdeveloping
76,024,288
12,724,372
Problems installing QuantLib in Python
<p>I'm trying to install <code>quantlib</code> in Python, using <code>PyCharm</code>. But I'm receiving this error:</p> <pre><code>ERROR: Could not find a version that satisfies the requirement QuantLib (from versions: none) ERROR: No matching distribution found for QuantLib [notice] A new release of pip available: 22.3.1 -&gt; 23.1 [notice] To update, run: pip install --upgrade pip </code></pre> <p>I've upgraded <code>pip</code>, but that was not the issue. I'm using Python 3.7.16. I've even tried installing the <code>QuantLib-Python</code> library available on <code>PyPI</code>, but the same result shows:</p> <pre><code>Collecting QuantLib-Python==1.18 Using cached QuantLib_Python-1.18-py2.py3-none-any.whl (1.4 kB) ERROR: Could not find a version that satisfies the requirement QuantLib (from quantlib-python) (from versions: none) ERROR: No matching distribution found for QuantLib [notice] A new release of pip available: 22.3.1 -&gt; 23.1 [notice] To update, run: pip install --upgrade pip </code></pre>
<python><quantlib>
2023-04-15 19:15:29
2
1,275
Devarshi Goswami
76,024,251
3,475,434
Hatch and script entry points in Python
<p>I need a script to be produced by installing the package and to be put in user <code>PATH</code>. I've followed the <a href="https://hatch.pypa.io/latest/config/metadata/#entry-points" rel="nofollow noreferrer">Hatch guide on entry points</a> and my directory looks like this:</p> <pre><code>. ├── docs │   ├── conf.py │   ├── index.rst │   ├── make.bat │   ├── Makefile │   ├── modules.rst │   ├── pylb.experiments.rst │   └── pylb.rst ├── LICENSE.txt ├── pyproject.toml ├── README.md ├── src │   └── pylb │   ├── __about__.py │   ├── experiments │   │   ├── __init__.py │   │   └── sphinx.py │   ├── __init__.py │   └── scripts │   └── __init__.py └── tests └── __init__.py </code></pre> <p>my <code>pyproject.toml</code> looks like this:</p> <pre><code>[project.scripts] pylbtestapp = &quot;pylb.scripts:main&quot; </code></pre> <p>and <code>src/pylb/scripts/__init__.py</code> is:</p> <pre class="lang-py prettyprint-override"><code>def main(): print(&quot;Hi There! This is pylb test app.&quot;) </code></pre> <p>But when I install the package with pip, it does not put the script as it should (I guess). What am I doing wrong? For more info the full repo is <a href="https://github.com/lbraglia/pylb" rel="nofollow noreferrer">here</a>.</p>
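The `[project.scripts]` table itself looks right; the script is only generated when the project is (re)installed, and it lands in the environment's `bin`/`Scripts` directory, which must be on `PATH` (for `pip install --user` on Linux that is typically `~/.local/bin`). For reference, a minimal `pyproject.toml` sketch with the pieces Hatch needs (the name and static version here are assumptions; adapt to the repo's `__about__.py` versioning):

```toml
[build-system]
requires = ["hatchling"]
build-backend = "hatchling.build"

[project]
name = "pylb"
version = "0.0.1"

[project.scripts]
pylbtestapp = "pylb.scripts:main"
```

After `pip install .` (or `pip install -e .` for development), `pylbtestapp` should resolve on the command line once that scripts directory is on `PATH`.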
<python><program-entry-point><hatch>
2023-04-15 19:07:39
0
3,253
Luca Braglia
76,024,248
13,903,942
Java | Safest way to close resources
<h2>Question</h2> <p>What's the best / safest way to close an I/O resource in Java?</p> <h3>Context</h3> <p>Coming from a Python background, which uses the <code>with</code> statement (which creates a <a href="https://docs.python.org/3/reference/datamodel.html#context-managers" rel="nofollow noreferrer">Context Manager</a>) for <a href="https://docs.python.org/3/tutorial/inputoutput.html#reading-and-writing-files" rel="nofollow noreferrer">Reading and Writing Files</a>, it <strong>ensures that the resource of the context is closed/cleaned up</strong> (technically this can be anything, but built-in I/O operations, for instance, will do a close once the resource goes out of scope, even sockets).</p> <p>So taking this code:</p> <pre><code>file = open('workfile', encoding=&quot;utf-8&quot;) read_data = file.read() file.close() </code></pre> <p><strong>The recommended way is using the <code>with</code> statement:</strong></p> <pre><code>with open('workfile', encoding=&quot;utf-8&quot;) as file: read_data = file.read() </code></pre> <p>By the time we exit the <code>with</code> scope (which is literally the previous indentation level), even if an error occurred, the file <strong>will always be closed</strong>.</p> <p>Sure, we could do this:</p> <pre><code>try: file = open('workfile', encoding=&quot;utf-8&quot;) read_data = file.read() except Exception as _: pass finally: file.close() </code></pre> <p>But in most cases a context manager is better. One reason is that someone may simply forget to write the <code>file.close()</code>, but it is also simply cleaner and safer.</p> <h3>Java example code</h3> <p>Now in Java, what is the recommended, similar way to handle this?</p> <p>Taking the <code>Scanner</code> as an example:</p> <pre><code>public class App { public static void main( String... 
args ) { Scanner reader = new Scanner(System.in); try { System.out.print(&quot;&gt;&gt;&gt; username: &quot;); if (reader.hasNextLine()) { String username = reader.nextLine(); System.out.format(&quot;Hello %s&quot;, username); } } catch (Exception error) { System.out.format(&quot;Error while reading scanner %s&quot;, error); } finally { reader.close(); } } } </code></pre> <p>As you can see, this is how I am handling it.</p> <h4>Is this the <code>Java</code> way?</h4>
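Java's closest analogue to Python's `with` is the try-with-resources statement: any `AutoCloseable` declared in the `try (...)` header is closed automatically when the block exits, normally or via an exception, in reverse declaration order, with no `finally` needed. A small sketch (reading from a string instead of `System.in` so it is self-contained):

```java
import java.util.Scanner;

public class Main {
    static String greet(String input) {
        // try-with-resources: the Scanner is closed automatically on exit,
        // even if nextLine() throws.
        try (Scanner reader = new Scanner(input)) {
            if (reader.hasNextLine()) {
                return "Hello " + reader.nextLine();
            }
            return "Hello stranger";
        }
    }

    public static void main(String[] args) {
        System.out.println(greet("alice"));
    }
}
```

One caveat specific to the posted code: closing a `Scanner(System.in)` also closes standard input for the rest of the process, so it is usually wrapped in try-with-resources exactly once, near the program's entry point.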
<python><java><memory-leaks><io><resources>
2023-04-15 19:06:34
2
7,945
Federico Baù
76,024,167
10,200,497
Creating a new column based on values of two other columns
<p>This is my dataframe:</p> <pre><code>df = pd.DataFrame({'a': [10, 20, 50], 'b': [5, -2, 20]}) </code></pre> <p>And this is the output that I want:</p> <pre><code> a b c 0 10 5 105 1 20 -2 20.58 2 50 20 12.348 </code></pre> <p>I have an initial value which is 1000. The first row of <code>c</code> is calculated like this:</p> <p>(1000 * 0.1) * 1.05</p> <p>Columns <code>a</code> and <code>b</code> are percent. That is 10 percent of 1000 is 100 and then I want to add 5 percent to 100 which gives me 105 for first row. Then for the next row of <code>c</code>, I want to calculate 20 percent of 105 which is 21 and then subtract 2 percent of it which is 20.58.</p> <p>I have tried this code but it doesn't work:</p> <pre><code>df['b'] = df['b'] / 100 df['c'] = (df.a / 100).cumprod() * 1000 df['c'] = df.c * (1 + df.b) </code></pre> <p>Feel free to edit the title. I want something more specific but couldn't figure it out.</p>
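Because each row's `c` feeds into the next one, this is a cumulative recurrence: c_i = c_{i-1} * (a_i / 100) * (1 + b_i / 100), with c_0 = 1000. The attempted code breaks because it applies the `(1 + b)` factors after the cumulative product instead of inside it. A plain-Python sketch of the recurrence:

```python
a = [10, 20, 50]   # percent of the running value to keep
b = [5, -2, 20]    # percent then added (or subtracted, if negative)

value = 1000       # initial value
c = []
for pa, pb in zip(a, b):
    # e.g. first row: (1000 * 0.10) * 1.05 = 105
    value = value * (pa / 100) * (1 + pb / 100)
    c.append(value)
```

In pandas the same per-row factor can be fed to a single `cumprod`: `df['c'] = 1000 * (df.a / 100 * (1 + df.b / 100)).cumprod()`.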
<python><pandas>
2023-04-15 18:51:56
2
2,679
AmirX
76,024,153
5,636,889
Retrieve large objects with multiprocessing.Pipe
<p>I'm current working in a RAM monitoring decorator:</p> <pre><code>import time import asyncio import multiprocessing as mp import psutil import numpy as np class MemoryMonitor: def __init__( self, threshold=115343360, # Bytes delay=1 # Seconds ): self.threshold = threshold self.delay = delay async def __memory_info(self, pid): memory = psutil.Process(pid).memory_info().rss print(f'Monitoring RAM: current value={memory}') if memory &gt; self.threshold: print(f'Memory threashold exceeded: thr={self.threshold}, current={memory}') return True return False async def __ram_monitoring(self, process): while process.is_alive(): memory_error = await asyncio.gather(self.__memory_info(process.pid)) if memory_error[0]: if process.is_alive(): process.kill() raise MemoryError(f'Memory exceeded threshold={self.threshold}') await asyncio.sleep(self.delay) def __call__(self, **monitoring_kwargs): def monitoring_decorator(func): def wrapper(*args, **kwargs): # Target function to collect retrived valued of decorated function def target(*args, **kwargs): try: cconn.send([func(*args, **kwargs), False]) except Exception as exc: cconn.send([exc, True]) pconn, cconn = mp.Pipe(duplex=False) # Call the function in a different process # to be able to kill the process if memory is exceeded process = mp.Process(target=target, args=args, kwargs=kwargs) process.start() # Run the ram monitorin process in async mode asyncio.run(self.__ram_monitoring(process=process)) # Collect the decorated function retrived valued value, error = pconn.recv() if error: raise value return value return wrapper return monitoring_decorator if __name__ == '__main__': memory_monitor = MemoryMonitor(threshold=2.5e+7) @memory_monitor() def test_function(n): for i in range(n): v = np.ones([i,i]) time.sleep(0.1) return v v = test_function(100) print(v) </code></pre> <p>I'm using:</p> <ol start="0"> <li>Linux operating system</li> <li>Asynchronous functions to collect the RAM memory value with <code>psutil</code> 
library.</li> <li><code>multiprocessing.Process</code> to run the decorated function</li> <li>And <code>multiprocessing.Pipe</code> to collect the retrieved value of the decorated function</li> </ol> <p>The process works fine, it kills the process if the RAM memory is exceeded:</p> <pre><code>Memory threashold exceeded: thr=25000000.0, current=25014272 Traceback (most recent call last): File &quot;main.py&quot;, line 98, in &lt;module&gt; v = test_function(500) File &quot;main.py&quot;, line 77, in wrapper asyncio.run(self.__ram_monitoring(process=process)) File &quot;/usr/lib/python3.8/asyncio/runners.py&quot;, line 44, in run return loop.run_until_complete(main) File &quot;/usr/lib/python3.8/asyncio/base_events.py&quot;, line 616, in run_until_complete return future.result() File &quot;main.py&quot;, line 56, in __ram_monitoring raise MemoryError(f'Memory exceeded threshold={self.threshold}') MemoryError: Memory exceeded threshold=25000000.0 </code></pre> <p>And returns the value of the decorated function:</p> <pre><code>Monitoring RAM: current value=22700032 [[1. 1. 1. 1.] [1. 1. 1. 1.] [1. 1. 1. 1.] [1. 1. 1. 1.]] Main.py DONE </code></pre> <p>The problem is when I'm trying to recover a large value (like a <code>numpy.ones([100,100])</code>); the process gets stuck in <code>cconn.send(func(*args, **kwargs))</code> and never ends.</p> <p>Question:</p> <p>How can I recover large values with <code>multiprocessing.Pipe</code> or <code>multiprocessing.Queue</code>? <a href="https://medium.com/analytics-vidhya/using-numpy-efficiently-between-processes-1bee17dcb01" rel="nofollow noreferrer">I read this post:</a>, he suggests <code>multiprocessing.Array</code> and <code>multiprocessing.Value</code>, however, I cannot always predict the type and shape of the returned value.</p> <p>Or am I using a wrong approach (asynchronous functions + multiprocessing)?</p>
<python><asynchronous><multiprocessing><out-of-memory><psutil>
2023-04-15 18:50:35
0
525
Jose
76,024,129
2,946,746
Error using Beautiful Soup to extract SEC filing header information: find returns None
<p>When I'm trying to use Beautiful Soup to get the header information from the SEC, I'm getting a return value of None. The tag I'm using is <code>&quot;&lt;SEC-HEADER&gt;&quot;</code>.</p> <p>Any idea how to get this code to work correctly?</p> <p>My code is:</p> <pre><code> import requests from bs4 import BeautifulSoup # URL of the SEC filing url = 'https://www.sec.gov/Archives/edgar/data/1166036/000110465904027382/0001104659-04-027382.txt' # Fetch the SEC filing data response = requests.get(url) content = response.content # Parse the content with Beautiful Soup soup = BeautifulSoup(content, 'html.parser') # Find the SEC-HEADER tag using a custom function def find_sec_header(tag): return tag.name == 'sec-header' sec_header = soup.find(find_sec_header) # Extract the header data if sec_header: header_data = sec_header.get_text() print(header_data) else: print(&quot;SEC-HEADER not found in the document.&quot;) </code></pre> <p>From the website link in the code you can see there is header information. 
Copy of some of the text is:</p> <pre class="lang-none prettyprint-override"><code>-----BEGIN PRIVACY-ENHANCED MESSAGE----- Proc-Type: 2001,MIC-CLEAR Originator-Name: webmaster@www.sec.gov Originator-Key-Asymmetric: MFgwCgYEVQgBAQICAf8DSgAwRwJAW2sNKK9AVtBzYZmr6aGjlWyK3XmZv3dTINen TWSM7vrzLADbmYQaionwg5sDW3P6oaM5D3tdezXMm7z1T+B+twIDAQAB MIC-Info: RSA-MD5,RSA, Qv2jQwkNcHK0jmh1uw9JmsapJjhSCceQJLdEQsrmZh5YPKB2tMIC07tAARLtOJNv RF+yO70MOHHbXjXQABtM/g== &lt;SEC-DOCUMENT&gt;0001104659-04-027382.txt : 20040913 &lt;SEC-HEADER&gt;0001104659-04-027382.hdr.sgml : 20040913 &lt;ACCEPTANCE-DATETIME&gt;20040913074905 ACCESSION NUMBER: 0001104659-04-027382 CONFORMED SUBMISSION TYPE: 8-K/A PUBLIC DOCUMENT COUNT: 7 CONFORMED PERIOD OF REPORT: 20040730 ITEM INFORMATION: Completion of Acquisition or Disposition of Assets ITEM INFORMATION: Financial Statements and Exhibits FILED AS OF DATE: 20040913 DATE AS OF CHANGE: 20040913 FILER: COMPANY DATA: COMPANY CONFORMED NAME: MARKWEST ENERGY PARTNERS L P CENTRAL INDEX KEY: 0001166036 STANDARD INDUSTRIAL CLASSIFICATION: CRUDE PETROLEUM &amp; NATURAL GAS [1311] IRS NUMBER: 270005456 FISCAL YEAR END: 1231 FILING VALUES: FORM TYPE: 8-K/A SEC ACT: 1934 Act SEC FILE NUMBER: 001-31239 FILM NUMBER: 041026639 BUSINESS ADDRESS: STREET 1: 155 INVERNESS DR WEST STREET 2: STE 200 CITY: ENGLEWOOD STATE: CO ZIP: 80112 BUSINESS PHONE: 303-925-9275 MAIL ADDRESS: STREET 1: 155 INVERNESS DR WEST STREET 2: STE 200 CITY: ENGLEWOOD STATE: CO ZIP: 80112 &lt;/SEC-HEADER&gt; &lt;DOCUMENT&gt; &lt;TYPE&gt;8-K/A &lt;SEQUENCE&gt;1 &lt;FILENAME&gt;a04-10341_18ka.htm &lt;DESCRIPTION&gt;8-K/A &lt;TEXT&gt; &lt;html&gt; &lt;head&gt; &lt;/head&gt; </code></pre>
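`html.parser` lower-cases tag names, so matching `'sec-header'` is the right instinct, but the filing is SGML-ish text rather than well-formed HTML, and HTML parsers often mangle it. Since the header is just the text between `<SEC-HEADER>` and `</SEC-HEADER>`, a plain regular expression over the raw text is a more robust sketch (the sample below is a shortened stand-in for the real filing):

```python
import re

def extract_sec_header(text):
    # Everything between <SEC-HEADER> and </SEC-HEADER>, across newlines
    m = re.search(r'<SEC-HEADER>(.*?)</SEC-HEADER>', text, re.DOTALL)
    return m.group(1) if m else None

sample = (
    "<SEC-DOCUMENT>0001104659-04-027382.txt : 20040913\n"
    "<SEC-HEADER>0001104659-04-027382.hdr.sgml : 20040913\n"
    "ACCESSION NUMBER: 0001104659-04-027382\n"
    "</SEC-HEADER>\n"
    "<DOCUMENT>...\n"
)

print(extract_sec_header(sample))
```

With the question's code this would be `extract_sec_header(response.text)`. Note also that EDGAR expects a descriptive `User-Agent` header on requests, or it may refuse to serve the document.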
<python><beautifulsoup>
2023-04-15 18:46:01
1
1,810
user2946746
76,024,124
11,397,243
Python "import as" changing global namespace
<p>I'm automatically translating some perl code to python with my <a href="https://github.com/snoopyjc/pythonizer" rel="nofollow noreferrer">pythonizer</a> tool. The perl code uses the Date::Manip package, which has a submodule Date::Manip::Date. I use a <code>Date</code> variable (which I defined in <code>builtins</code>) in my generated code to hold the namespace where I define <code>Date.Manip</code>. When I import the Date.Manip.Date package, even though I import it as <code>_Date_Manip_Date</code>, the import still sets a global <code>Date</code> variable preventing access to my <code>Date</code> namespace. I'm looking for ideas on how to avoid this issue. I can't change the name of the submodule. Here is code that reproduces the problem (note I'm using <code>Date</code> as a string in this test case and not a namespace):</p> <pre><code># date_issue.py: import sys sys.path[0:0] = '.' import Date.Manip as _Date_Manip # Date/Manip/__init__.py: Date = '2023-04-15' import Date.Manip.Date as _Date_Manip_Date assert Date == '2023-04-15', f&quot;Date got changed to {Date}&quot; # Date/Manip/Date.py (empty file) </code></pre> <p>I get this error on the assert:</p> <pre><code>AssertionError: Date got changed to &lt;module 'Date.Manip.Date' from '/mnt/c/pythonizer/play/./Date/Manip/Date.py'&gt; </code></pre> <p>I'm thinking of maybe saving the value of <code>Date</code> before the import and restoring it afterwards, and only doing this if there is a module name being imported that is the same name as any of the parent namespaces, but do you have any better ideas?</p>
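One way to make the save/restore idea reusable is a small context manager that snapshots the names you care about and puts them back after the import, deleting them again if they did not exist before. A sketch (the `Date` usage is the question's; `preserve_globals` is a made-up helper name):

```python
import contextlib

_MISSING = object()

@contextlib.contextmanager
def preserve_globals(namespace, *names):
    # Snapshot the named entries; restore (or delete) them on exit
    saved = {n: namespace.get(n, _MISSING) for n in names}
    try:
        yield
    finally:
        for n, v in saved.items():
            if v is _MISSING:
                namespace.pop(n, None)
            else:
                namespace[n] = v

# Intended use in the generated code:
# with preserve_globals(globals(), 'Date'):
#     import Date.Manip.Date as _Date_Manip_Date
```

Another option worth trying is `_Date_Manip_Date = importlib.import_module('Date.Manip.Date')`, which binds only the name you assign to and never touches other globals.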
<python><import><global><python-module>
2023-04-15 18:45:13
1
633
snoopyjc
76,023,998
3,116,231
Create an Azure Keyvault with Python
<p>I'm trying to create a keyvault with Python by running the code from <a href="https://github.com/Azure-Samples/key-vault-python-manage" rel="nofollow noreferrer">here</a>:</p> <pre><code>import os import json from azure.common.credentials import ServicePrincipalCredentials from azure.mgmt.keyvault import KeyVaultManagementClient from azure.mgmt.resource.resources import ResourceManagementClient from haikunator import Haikunator haikunator = Haikunator() WEST_US = &quot;westus&quot; GROUP_NAME = &quot;azure-sample-group&quot; KV_NAME = haikunator.haikunate() # The object ID of the User or Application for access policies. Find this number in the portal OBJECT_ID = &quot;401e9294-xxxx-xxxx-xxxx-xxxx&quot; # Manage resources and resource groups - create, update and delete a resource group, # deploy a solution into a resource group, export an ARM template. Create, read, update # and delete a resource # # This script expects that the following environment vars are set: # # AZURE_TENANT_ID: with your Azure Active Directory tenant id or domain # AZURE_CLIENT_ID: with your Azure Active Directory Application Client ID # AZURE_CLIENT_SECRET: with your Azure Active Directory Application Secret # AZURE_SUBSCRIPTION_ID: with your Azure Subscription Id # def run_example(): &quot;&quot;&quot;Resource Group management example.&quot;&quot;&quot; # # Create the Resource Manager Client with an Application (service principal) token provider # subscription_id = os.environ[&quot;AZURE_SUBSCRIPTION_ID&quot;] credentials = ServicePrincipalCredentials( client_id=os.environ[&quot;AZURE_CLIENT_ID&quot;], secret=os.environ[&quot;AZURE_CLIENT_SECRET&quot;], tenant=os.environ[&quot;AZURE_TENANT_ID&quot;], ) kv_client = KeyVaultManagementClient(credentials, subscription_id) resource_client = ResourceManagementClient(credentials, subscription_id) # You MIGHT need to add KeyVault as a valid provider for these credentials # If so, this operation has to be done only once for each credentials 
resource_client.providers.register(&quot;Microsoft.KeyVault&quot;) # Create Resource group print(&quot;\nCreate Resource Group&quot;) resource_group_params = {&quot;location&quot;: WEST_US} print_item( resource_client.resource_groups.create_or_update( GROUP_NAME, resource_group_params ) ) # Create a vault print(&quot;\nCreate a vault&quot;) vault = kv_client.vaults.create_or_update( GROUP_NAME, KV_NAME, { &quot;location&quot;: WEST_US, &quot;properties&quot;: { &quot;sku&quot;: {&quot;name&quot;: &quot;standard&quot;}, &quot;tenant_id&quot;: os.environ[&quot;AZURE_TENANT_ID&quot;], &quot;access_policies&quot;: [ { &quot;tenant_id&quot;: os.environ[&quot;AZURE_TENANT_ID&quot;], &quot;object_id&quot;: OBJECT_ID, &quot;permissions&quot;: {&quot;keys&quot;: [&quot;all&quot;], &quot;secrets&quot;: [&quot;all&quot;]}, } ], }, }, ) print_item(vault) # List the Key vaults print(&quot;\nList KeyVault&quot;) for vault in kv_client.vaults.list(): print_item(vault) # Delete Resource group and everything in it print(&quot;\nDelete Resource Group&quot;) delete_async_operation = resource_client.resource_groups.delete(GROUP_NAME) delete_async_operation.wait() print(&quot;\nDeleted: {}&quot;.format(GROUP_NAME)) def print_item(group): &quot;&quot;&quot;Print an instance.&quot;&quot;&quot; print(&quot;\tName: {}&quot;.format(group.name)) print(&quot;\tId: {}&quot;.format(group.id)) print(&quot;\tLocation: {}&quot;.format(group.location)) print(&quot;\tTags: {}&quot;.format(group.tags)) if __name__ == &quot;__main__&quot;: run_example() </code></pre> <p>After receiving comments, I assigned the <code>Contribute</code>role to the service principal.</p> <p>My only modification to the code was substituting the object id with the one from my service principal (see image below):</p> <p><a href="https://i.sstatic.net/Q22Sc.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Q22Sc.jpg" alt="enter image description here" /></a></p> <p>I'm receiving following error: <a 
href="https://i.sstatic.net/i3ZWL.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/i3ZWL.png" alt="enter image description here" /></a></p> <p><strong>EDIT:</strong> Accepted @sridev's answer because I solved the issue by running the sample code which was linked in the comments</p> <p>Here's my working code, based on <a href="https://github.com/Azure-Samples/azure-samples-python-management/tree/main/samples/keyvault" rel="nofollow noreferrer">this GitHub repo</a>, assuming that the Azure environment variables are set properly:</p> <pre><code>import os from azure.identity import DefaultAzureCredential from azure.mgmt.keyvault import KeyVaultManagementClient from azure.mgmt.resource import ResourceManagementClient def main(): TENANT_ID = os.environ.get(&quot;AZURE_TENANT_ID&quot;, None) SUBSCRIPTION_ID = os.environ.get(&quot;AZURE_SUBSCRIPTION_ID&quot;, None) GROUP_NAME = &quot;resource_group_name&quot; VAULT = &quot;vault_name&quot; LOCATION = &quot;azure_location_eg_westus&quot; OBJECT_ID = &quot;service_principal_object_id&quot; # Create client # Other authentication approaches: https://pypi.org/project/azure-identity/ resource_client = ResourceManagementClient( credential=DefaultAzureCredential(), subscription_id=SUBSCRIPTION_ID ) keyvault_client = KeyVaultManagementClient( credential=DefaultAzureCredential(), subscription_id=SUBSCRIPTION_ID ) # Create resource group resource_client.resource_groups.create_or_update(GROUP_NAME, {&quot;location&quot;: LOCATION}) # Create vault vault = keyvault_client.vaults.begin_create_or_update( GROUP_NAME, VAULT, { &quot;location&quot;: LOCATION, &quot;properties&quot;: { &quot;tenant_id&quot;: TENANT_ID, &quot;sku&quot;: {&quot;family&quot;: &quot;A&quot;, &quot;name&quot;: &quot;standard&quot;}, &quot;access_policies&quot;: [ { &quot;tenant_id&quot;: TENANT_ID, &quot;object_id&quot;: OBJECT_ID, &quot;permissions&quot;: { &quot;keys&quot;: [ &quot;encrypt&quot;, &quot;decrypt&quot;, &quot;wrapKey&quot;, 
&quot;unwrapKey&quot;, &quot;sign&quot;, &quot;verify&quot;, &quot;get&quot;, &quot;list&quot;, &quot;create&quot;, &quot;update&quot;, &quot;import&quot;, &quot;delete&quot;, &quot;backup&quot;, &quot;restore&quot;, &quot;recover&quot;, &quot;purge&quot;, ], &quot;secrets&quot;: [ &quot;get&quot;, &quot;list&quot;, &quot;set&quot;, &quot;delete&quot;, &quot;backup&quot;, &quot;restore&quot;, &quot;recover&quot;, &quot;purge&quot;, ], }, } ], &quot;enabled_for_deployment&quot;: True, &quot;enabled_for_disk_encryption&quot;: True, &quot;enabled_for_template_deployment&quot;: True, }, }, ).result() print(&quot;Create vault:\n{}&quot;.format(vault)) # Get vault vault = keyvault_client.vaults.get(GROUP_NAME, VAULT) print(&quot;Get vault:\n{}&quot;.format(vault)) # Update vault vault = keyvault_client.vaults.update( GROUP_NAME, VAULT, {&quot;tags&quot;: {&quot;category&quot;: &quot;Marketing&quot;}} ) print(&quot;Update vault:\n{}&quot;.format(vault)) # Delete vault keyvault_client.vaults.delete(GROUP_NAME, VAULT) print(&quot;Delete vault.\n&quot;) # Purge a deleted vault keyvault_client.vaults.begin_purge_deleted(VAULT, LOCATION).result() print(&quot;Purge a deleted vault.\n&quot;) # Delete Group resource_client.resource_groups.begin_delete(GROUP_NAME).result() if __name__ == &quot;__main__&quot;: main() </code></pre>
<python><azure-active-directory><azure-keyvault>
2023-04-15 18:19:15
1
1,704
Zin Yosrim
76,023,938
10,200,497
Creating a new column that shows percent change based on values of another column
<p>This is my dataframe:</p> <pre><code>df = pd.DataFrame({'a': [10, 20, 50]}) </code></pre> <p>And this is the output that I want:</p> <pre><code> a b 0 10 100 1 20 20 2 50 10 </code></pre> <p>I have an initial value which is 1000. Then I want to create column <code>b</code>. The first row of <code>b</code> is 10 percent of 1000 which is 100. The second row is 20 percent of 100 that we got previously. And the last row is 50 percent of 20.</p> <p>I have tried this code but it doesn't work:</p> <pre><code>df = df.reset_index(drop=True) df.loc[0, 'b'] = 1000 * (df.a.iloc[0] / 100) df['b'] = (df.a / 100) * df.b.shift(1) </code></pre>
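Each row's percentage is applied to the previous row's result, so the factors chain together: that is a cumulative product. The shift-based attempt cannot work in one vectorised step, because `b` would have to exist before it is computed. A sketch:

```python
import pandas as pd

df = pd.DataFrame({'a': [10, 20, 50]})
initial = 1000

# b[i] = initial * (a[0]/100) * (a[1]/100) * ... * (a[i]/100)
df['b'] = initial * (df['a'] / 100).cumprod()
print(df)   # b is 100.0, 20.0, 10.0 (up to float rounding)
```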
<python><pandas>
2023-04-15 18:07:42
1
2,679
AmirX
76,023,916
13,955,154
Layer "decoder" expects 1 input(s), but it received 3 input tensors
<p>I'm trying to train an autoencoder (and actually the fit seems to go correctly). Then I want to test my models:</p> <pre><code>encoded_imgs = encoder.predict(images[:10]) decoded_imgs = decoder.predict(encoded_imgs) </code></pre> <p>where images is an array of images (224,224) and the latent vector is equal to 1024. I'd expect encoded_imgs to be 10x1024, but instead it is 3x10x24, which results in the error of the title when I perform the decoder.predict. Why the result of the encoder has that shape?</p> <p>I'll add the structure of both encoder and decoder, while the predict uses the standard training.py library</p> <pre><code>latent_dim = 1024 encoder_inputs = Input(shape=(224, 224)) x = layers.Reshape((224, 224, 1))(encoder_inputs) # add a batch dimension x = layers.Conv2D(32, 3, activation=&quot;relu&quot;, strides=2, padding=&quot;same&quot;)(x) x = layers.MaxPool2D()(x) x = layers.Conv2D(64, 3, activation=&quot;relu&quot;, strides=2, padding=&quot;same&quot;)(x) x = layers.MaxPool2D()(x) x = layers.Conv2D(128, 3, activation=&quot;relu&quot;, strides=2, padding=&quot;same&quot;)(x) x = layers.Flatten()(x) x = layers.Dense(4096, activation=&quot;relu&quot;)(x) z_mean = layers.Dense(latent_dim, name=&quot;z_mean&quot;)(x) z_log_var = layers.Dense(latent_dim, name=&quot;z_log_var&quot;)(x) z = Sampling()([z_mean, z_log_var]) encoder = Model(encoder_inputs, [z_mean, z_log_var, z], name=&quot;encoder&quot;) latent_inputs = Input(shape=(latent_dim,)) x = layers.Dense(7 * 7 * 64, activation=&quot;relu&quot;)(latent_inputs) x = layers.Reshape((7, 7, 64))(x) x = layers.Conv2DTranspose(128, 3, activation=&quot;relu&quot;, strides=2, padding=&quot;same&quot;)(x) x = layers.UpSampling2D(size=(2, 2))(x) x = layers.Conv2DTranspose(64, 3, activation=&quot;relu&quot;, strides=2, padding=&quot;same&quot;)(x) x = layers.UpSampling2D(size=(2, 2))(x) x = layers.Conv2DTranspose(32, 3, activation=&quot;relu&quot;, strides=2, padding=&quot;same&quot;)(x) x = 
layers.Conv2DTranspose(1, 3, activation=&quot;sigmoid&quot;, padding=&quot;same&quot;)(x) decoder_outputs = layers.Reshape((224, 224))(x) decoder = Model(latent_inputs, decoder_outputs, name=&quot;decoder&quot;) </code></pre> <p>If you think that some additional information is required to answer, tell me and I'll add it.</p>
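The shape is expected: the encoder was defined as `Model(encoder_inputs, [z_mean, z_log_var, z])`, so `predict` returns a list of three arrays, each of shape `(10, 1024)` (the reported `3x10x24` is presumably `3x10x1024`). The decoder wants only the sampled `z`, the third output. A sketch of the selection, with the Keras calls commented out since they depend on the trained models:

```python
def select_latent(encoder_outputs):
    # encoder.predict(...) -> [z_mean, z_log_var, z]; keep only z
    z_mean, z_log_var, z = encoder_outputs
    return z

# encoded = encoder.predict(images[:10])          # list of 3 (10, 1024) arrays
# decoded_imgs = decoder.predict(select_latent(encoded))
```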
<python><tensorflow><keras><autoencoder>
2023-04-15 18:03:32
1
720
Lorenzo Cutrupi
76,023,766
20,266,647
MLRunPreconditionFailedError: 412 Client Error, "API is waiting for migrations to be triggered"
<p>I got this error:</p> <pre><code>MLRunPreconditionFailedError: 412 Client Error: Precondition Failed for url: http://localhost:8080/api/v1/projects: Failed creating project tutorial-&lt;name&gt; details: MLRunPreconditionFailedError('API is waiting for migrations to be triggered. Send POST request to [/api/operations/migrations](https://file+.vscode-resource.vscode-cdn.net/api/operations/migrations) to trigger it') </code></pre> <p>when I ran this code:</p> <pre><code>import mlrun ... mlrun.set_env_from_file(envFile) project = mlrun.get_or_create_project(&quot;test&quot;, &quot;./&quot;, user_project=True) </code></pre> <p>I am using MLRun CE 1.3.0 in Docker Desktop. Has anyone solved this issue?</p>
<python><mlops><mlrun>
2023-04-15 17:38:36
1
1,390
JIST
76,023,652
17,724,172
Trouble with assigning colored categories on a broken bar chart
<p>Why am I getting such extended final colors (green) parts to these bars?</p> <p>Each GUI box should take two comma-separated integers for start and duration of that category. The numbers should get a little larger for each category, so that the colors don't overlap. This can be entered in all the rows of the GUI, one pair of numbers in each box: 1,9 10,19 20,29 30,39 40,49. Any help would be greatly appreciated.</p> <pre><code>import pandas as pd import matplotlib.pyplot as plt from matplotlib.patches import Patch import tkinter as tk # Define the data as a dictionary data = { 'x_values': [], 'y_values': [(6.0, 2), (8.5, 2), (12.5, 2), (15.0, 2), (19.0, 2)] } # Define a list of colors and categories for the bars colors = ('tab:red', 'tab:orange', 'tab:purple', 'tab:blue', 'tab:green') categories = ('Cat1', 'Cat2', 'Cat3', 'Cat4', 'Cat5') jobs = ('Job1', 'Job2', 'Job3', 'Job4', 'Job5') # Create a GUI interface to get user input for x_values def get_x_values(): global data x_values = [] for i in range(len(jobs)): values_list = [] for j in range(len(categories)): x, y = x_values_entry[i][j].get().split(',') values_list.append((int(x), int(y))) x_values.append(values_list) data['x_values'] = x_values root.destroy() root = tk.Tk() # Create a grid of input boxes for the x_values x_values_entry = [] for i in range(len(jobs)): row = [] for j in range(len(categories)): if i == 0: label = tk.Label(root, text=categories[j]) label.grid(row=i, column=j+1) if j == 0: label = tk.Label(root, text=jobs[i]) label.grid(row=i+1, column=j) entry = tk.Entry(root, width=10) entry.grid(row=i+1, column=j+1) row.append(entry) x_values_entry.append(row) # Add a button to submit the x_values and close the GUI submit_button = tk.Button(root, text='Submit', command=get_x_values) submit_button.grid(row=len(jobs)+1, columnspan=len(categories)+1) # Run the GUI root.mainloop() # Define a list of colors and categories for the bars colors = ('tab:red', 'tab:orange', 'tab:purple', 'tab:blue', 
'tab:green') categories = ('Category 1', 'Category 2', 'Category 3', 'Category 4', 'Category 5') # Add the colors and categories to each row of the DataFrame for i in range(len(data['x_values'])): data['facecolors'] = [colors] * len(data['x_values']) data['categories'] = [categories] * len(data['x_values']) # Create a pandas DataFrame from the data df = pd.DataFrame(data) # Create a new figure and axis fig, ax = plt.subplots(figsize=(10,6)) # Loop through each row of the DataFrame and plot the broken bar chart for i, row in df.iterrows(): #makes no dif 0 or i here^ ax.broken_barh(row['x_values'], row['y_values'], facecolors=row['facecolors']) #ax.broken_barh(row['x_values'], row['y_values'], facecolors=row['facecolors']) # Create legend entries with color rectangles and category labels legend_entries = [Patch(facecolor=color, edgecolor='black', label=category) for color, category in zip(colors, categories)] # Add the legend to the plot ax.legend(handles=legend_entries, loc='upper right', ncol=5, bbox_to_anchor=(1.0, 1.00)) # Customize the axis labels and limits ax.set_xlabel('Days') ax.set_ylabel('Jobs') ax.set_yticks([7, 9.5, 13.5, 16, 20], labels=['Job1', 'Job2', 'Job3','Job4', 'Job5']) title = ax.set_title('Tasks and Crew-Days') title.set_position([0.5, 1.0]) #set title at center ax.set_ylim(5, 26) ax.grid(True) # Display the plot plt.show() </code></pre>
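A likely cause of the stretched green segments is that `broken_barh` takes `(xmin, xwidth)` pairs, not `(start, end)`: typing `40,49` draws a bar starting at 40 that is 49 days wide, and the last colour painted (green) then covers everything under it. If the GUI boxes are meant to hold start and end, convert before plotting; a sketch of the conversion (hypothetical helper name):

```python
def spans_to_xranges(spans):
    """Convert (start, end) pairs to the (start, width) pairs
    that matplotlib's broken_barh expects."""
    return [(start, end - start) for start, end in spans]

# e.g. the inputs from the question's GUI rows:
print(spans_to_xranges([(1, 9), (10, 19), (20, 29), (30, 39), (40, 49)]))
# [(1, 8), (10, 9), (20, 9), (30, 9), (40, 9)]
```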
<python><matplotlib>
2023-04-15 17:14:04
1
418
gerald
76,023,593
713,200
Storing a value from ios-xe router command output in python
<p>After making a connection to a cisco device using python, and executing a command I get a output as shown below, Basically I want to get only one value, that the value for <code>Services</code> which is shown as <code>1</code> below and store it in a variable.</p> <pre><code>#show ethernet cfm domain brief Domain Name Index Level Services Archive(min) EVC 2 4 1 100 </code></pre> <p>I'm not sure how to parse this, I tried converting to a list, but the list became very huge something like this and I dont know if the same will work on another cisco ios-xe device.</p> <pre><code>['Domain', 'Name', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', 'Index', 'Level', 'Services', 'Archive(min)\r\nEVC', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '2', '', '', '', '', '4', '', '', '', '', '', '', '', '1', '', '', '', '', '100'] </code></pre> <p>Is there any other reliable way to get the value of <code>services</code> which is <code>1</code> ?</p>
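Splitting on single spaces is what produces all those empty strings; `str.split()` with no argument splits on runs of whitespace instead. Counting fields from the right also survives multi-word domain names, since `Services` is always the second-to-last column of this table. A sketch (the sample output is trimmed from the question):

```python
output = """Domain Name        Index  Level  Services  Archive(min)
EVC                2      4      1         100"""

def get_services(show_output):
    # First non-empty line is the header; the next line holds the values
    lines = [line for line in show_output.splitlines() if line.strip()]
    fields = lines[1].split()   # e.g. ['EVC', '2', '4', '1', '100']
    # Services is the second-to-last column (Archive(min) is last)
    return int(fields[-2])

print(get_services(output))   # 1
```

For anything beyond one-off parsing, template-based libraries such as TextFSM (with ntc-templates) are built exactly for structured parsing of IOS-XE `show` output.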
<python><python-3.x><list><dictionary><cisco-ios>
2023-04-15 17:01:06
1
950
mac
76,023,559
4,172,765
Python selenium waiting to download file completely
<p>I am writing code to download an Android APK file. It worked. However, I faced trouble in the process of waiting for the file to download completely. I want to download many files, and each file has a different size, so I can't set the <code>time.sleep(time)</code> in general.</p> <p>I try to combine a <code>Thread</code> and a <code>While loop</code>, and it worked. But, in the cut-off internet case, my code is still in the infinite while loop because at that time, the file is not downloaded completed and it is like that <code>.crdownload</code>. Please help me with the issue.</p> <pre><code>def waitingDownload(): path = &quot;C:\\Users\\ASUS\\anaconda3\\phd_implement\\download_apk\\tmp\\*.apk&quot; files = glob.glob(path) if(len(files)!=0): return True else: return False #Setup web driver chromedriver = 'C:\\Users\\ASUS\\anaconda3\\phd_implement\\chromedriver.exe' os.environ[&quot;webdriver.chrome.driver&quot;] = chromedriver chrome_options = Options() chrome_options.add_argument(&quot;--disable-blink-features=AutomationControlled&quot;) chrome_options.add_argument(&quot;--disable-popup-blocking&quot;) prefs = {'profile.default_content_setting_values.automatic_downloads': 1,&quot;download.default_directory&quot; : &quot;C:\\Users\\ASUS\\anaconda3\\phd_implement\\download_apk\\tmp\\&quot;} chrome_options.add_experimental_option(&quot;prefs&quot;, prefs) web = webdriver.Chrome(options=chrome_options) web.maximize_window() # Download url=&quot;https://download.apkcombo.com/com.avast.android.secure.browser/Avast%20Secure%20Browser_7.5.2_apkcombo.com.apk?ecp=Y29tLmF2YXN0LmFuZHJvaWQuc2VjdXJlLmJyb3dzZXIvNy41LjIvNDYzMC5jZWZhYTAzZTYwOTY3NGY0Y2Y2YWRiMTk1N2VkMzhlODMzYTExNGQzLmFwaw==&amp;iat=1681553964&amp;sig=4cc693ff4b5dfe4adea4030b65b8e454&amp;size=198815248&amp;from=cf&amp;version=latest&amp;lang=en&amp;fp=1dce2e74bf37995f5efd03cd5877afb5&amp;ip=130.25.102.251&quot; def task(url): web.get(url) # create a thread thread= Thread(target=task(url)) # run the thread 
thread.start() # wait for the thread to finish print('Waiting for the download file...') while True: if(waitingDownload()): break thread.join() print('Download completed') web.close() </code></pre>
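A bounded wait avoids the infinite loop: poll for finished files, treat a lingering `.crdownload` as "still in progress", and give up after a deadline so a dropped connection cannot hang the script forever. A hedged sketch (the directory and extension are the question's; the helper name is made up):

```python
import glob
import os
import time

def wait_for_download(directory, ext='.apk', timeout=300, poll=1.0):
    """Return the finished files, or [] if the timeout expires."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        in_progress = glob.glob(os.path.join(directory, '*.crdownload'))
        finished = glob.glob(os.path.join(directory, '*' + ext))
        if finished and not in_progress:
            return finished
        time.sleep(poll)
    return []  # timed out, e.g. the connection dropped mid-download
```

This replaces the `while True` loop, and the `Thread` wrapper may not be needed at all. Note also that as written, `Thread(target=task(url))` calls `task(url)` immediately and passes its return value to the thread; it should be `Thread(target=task, args=(url,))`.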
<python><python-3.x><selenium-webdriver>
2023-04-15 16:53:07
1
917
ThanhLam112358
76,023,557
3,070,181
Can I create an executable Python zip archive with dependencies?
<p>I have a small python application developed on Nix that I wish to distribute to some M$ Windows users. It seems that an <a href="https://docs.python.org/3/library/zipapp.html" rel="nofollow noreferrer">executable Python zip archive</a> is an excellent way to achieve this.</p> <p>However, I would like to include dependencies that are not included with the standard Python installation, e.g. termcolor. Is this possible?</p> <p>This is the app that I am testing</p> <pre><code>from termcolor import cprint print('hello world') cprint('hola mundo', 'red') </code></pre> <p>and <em>termcolor</em> is not included in the standard Python implementation. I do not expect the users to <em>pip install</em>.</p> <p>I have built a .pyz using</p> <pre><code> python -m zipapp . -o my_app.pyz </code></pre> <p>and the app is created and it works in a virtualenv with termcolor installed. But of course fails if it is not.</p> <p>I have also tried</p> <pre><code>python -m zipapps my_app -o my_app.pyz -r requirements.txt </code></pre> <p>This creates a .pyz which includes termcolor, but when I run it, it drops into the Python REPL</p> <p>Is this possible?</p>
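Yes — `zipapp` archives whatever is in the source directory, so third-party packages can be vendored by pip-installing them into the app directory before building. The sketch below uses a local `helper.py` as a stand-in dependency so it runs without network access; for the real case you would replace it with the commented `pip install --target` line. (A `.pyz` still needs a Python interpreter on the Windows machines; for users without Python installed, tools like PyInstaller bundle one.)

```python
import pathlib
import subprocess
import sys
import zipapp

# Build an app directory; real third-party deps would be vendored with
#   python -m pip install termcolor --target build/my_app
app = pathlib.Path('build/my_app')
app.mkdir(parents=True, exist_ok=True)
(app / '__main__.py').write_text(
    "from helper import greet   # a bundled dependency (stand-in for termcolor)\n"
    "print(greet())\n"
)
(app / 'helper.py').write_text(
    "def greet():\n"
    "    return 'hola mundo'\n"
)

# Everything inside build/my_app ends up in the archive
zipapp.create_archive(app, target='my_app.pyz', interpreter='/usr/bin/env python3')

# Run it: the archive itself is put on sys.path, so helper is importable
result = subprocess.run([sys.executable, 'my_app.pyz'],
                        capture_output=True, text=True)
print(result.stdout.strip())   # hola mundo
```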
<python><python-packaging><zipapp>
2023-04-15 16:52:34
2
3,841
Psionman
76,023,530
3,034,631
All other transforms disappear after broadcasting my custom odom to base_link transform (ROS2 Foxy)
<p>I have made a node to broadcast the odometry transform and publish the odometry topic. Before I run this new node, everything looks good. However, when I run my odometry node, and broadcast the tf odom to base_link, all other transforms to base_link disappear.</p> <p>I have searched the internet a lot but I can't find how to fix this issue.</p> <p>Here is what I have in my node (simplified for posting here):</p> <pre><code>#!/usr/bin/env python3 import rclpy from rclpy.node import Node import traceback from geometry_msgs.msg import TransformStamped from sensor_msgs.msg import JointState from nav_msgs.msg import Odometry from tf2_ros import TransformBroadcaster class CalculatedNode(Node): def __init__(self): super().__init__('calculated_node') # Create publisher self.calculated_node = self.create_publisher(Odometry, '/odom', 10) self.odom_tf_broadcaster = TransformBroadcaster(self) # Subscribe to node self.subscription = self.create_subscription( JointState, '/joint_states', self.update_odom, 1) def update_odom(self, msg): try: # DO THE ODOM CALCULATIONS (using data from the /joint_states topic) # [...] # Broadcast the transform odom_trans = TransformStamped() self.odom_tf_broadcaster.sendTransform(odom_trans) # Publish Odom self.calculated_node.publish(Odometry()) except Exception as e: traceback.print_exc() print(self.get_clock().now()) # print(e) def main(args=None): rclpy.init(args=args) calculate_odom = CalculatedNode() try: rclpy.spin(calculate_odom) except KeyboardInterrupt: pass calculate_odom.destroy_node() rclpy.shutdown() if __name__ == '__main__': main() </code></pre>
<python><ros>
2023-04-15 16:47:03
1
342
guidout
76,023,269
5,561,875
How to apply a rotation to a node in trimesh
<p>I have a <code>.glb</code> glTF file that specifies a character. The character has 75 nodes, and no animation track.</p> <p>I wrote the following code to render the character, and now I would like to apply some rotation to specific nodes before I render it.</p> <p>My rendering code is:</p> <pre class="lang-py prettyprint-override"><code>import math import pyrender import numpy as np from PIL import Image import trimesh def rotation_x(angle_degrees): angle = angle_degrees * math.pi / 180 return np.array([ [1, 0, 0, 0], [0, math.cos(angle), -math.sin(angle), 0], [0, math.sin(angle), math.cos(angle), 0], [0, 0, 0, 1], ]) class Renderer: def __init__(self, character_path: str): self.mesh = trimesh.load(character_path) # Just trying some random rotations self.rotate(&quot;mixamorig:LeftUpLeg&quot;, np.array([0.707, 1.0, 0.0, 0.707])) self.rotate(&quot;mixamorig:RightHand&quot;, np.array([0.707, 5.0, 0.0, 0.707])) self.rotate(&quot;mixamorig:LeftHand&quot;, np.array([0.707, 2.0, 0.0, 0.707])) self.rotate(&quot;mixamorig:Spine&quot;, np.array([1.707, 2.0, 0.0, 0.707])) self.scene = self.make_scene() self.renderer = pyrender.OffscreenRenderer(viewport_width=1024, viewport_height=1024) def rotate(self, node_name: str, quaternion: np.ndarray): raise NotImplementedError() def make_scene(self): # Create a pyrender scene scene = pyrender.Scene() # Add the mesh to the scene for geometry in self.mesh.geometry.values(): node = pyrender.Mesh.from_trimesh(geometry, smooth=True) scene.add(node) # Set up a camera with an orthographic projection camera = pyrender.PerspectiveCamera(yfov=float(np.radians(54)), aspectRatio=1) camera_pose = np.array([ [1, 0, 0, 0], [0, 1, 0, 3.5], # Move the camera 1 meter up in the y-axis [0, 0, 1, 4], [0, 0, 0, 1], ]) @ rotation_x(-20) scene.add(camera, pose=camera_pose) # Set up a directional light light = pyrender.DirectionalLight(color=np.ones(3), intensity=3.0) light_pose = np.array([ [1, 0, 0, 0], [0, 1, 0, 5], [0, 0, 1, 3], [0, 0, 0, 1], ]) 
scene.add(light, pose=light_pose) return scene def __call__(self): color, depth = self.renderer.render(self.scene) return color, depth </code></pre> <p>I have tried many solutions, but to no avail. This solution returns no errors, but also does not perform the rotation.</p> <pre class="lang-py prettyprint-override"><code>def rotate(self, node_name: str, quaternion: np.ndarray): rotation_matrix = trimesh.transformations.quaternion_matrix(quaternion) self.mesh.graph[node_name][0].setflags(write=1) self.mesh.graph[node_name][0][:] = rotation_matrix </code></pre> <p><strong>How can I control the rotation of a specific node before rendering it?</strong></p> <p>For testing purposes, the character I am using is: <a href="https://firebasestorage.googleapis.com/v0/b/sign-mt-assets/o/3d%2Fcharacter.glb?alt=media" rel="nofollow noreferrer">https://firebasestorage.googleapis.com/v0/b/sign-mt-assets/o/3d%2Fcharacter.glb?alt=media</a></p>
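Writing into the array returned by `self.mesh.graph[node_name]` has no effect because the graph hands back a computed copy of the transform; the edit has to go back in through the graph's update API. The 4x4 rotation itself can be built from the quaternion with plain numpy. A sketch (`(w, x, y, z)` quaternion convention; the `graph.update` call is commented out because its exact signature depends on the trimesh version):

```python
import numpy as np

def quaternion_matrix(q):
    # 4x4 homogeneous rotation from a (w, x, y, z) quaternion
    w, x, y, z = np.asarray(q, dtype=float) / np.linalg.norm(q)
    return np.array([
        [1 - 2*(y*y + z*z), 2*(x*y - w*z),     2*(x*z + w*y),     0.0],
        [2*(x*y + w*z),     1 - 2*(x*x + z*z), 2*(y*z - w*x),     0.0],
        [2*(x*z - w*y),     2*(y*z + w*x),     1 - 2*(x*x + y*y), 0.0],
        [0.0,               0.0,               0.0,               1.0],
    ])

# Sketch of rotate(): compose with the node's current local transform and
# push it back through the scene graph (version-dependent API):
#   matrix = quaternion_matrix(quaternion) @ current_local_matrix
#   self.mesh.graph.update(frame_to=node_name, matrix=matrix)
```

Separately, `make_scene` adds each geometry with no pose, so the scene graph's transforms are dropped entirely; iterating the nodes (e.g. `self.mesh.graph.nodes_geometry`) and passing each node's transform as `scene.add(mesh, pose=transform)` is likely needed before any rotation becomes visible.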
<python><gltf><trimesh>
2023-04-15 15:43:55
1
6,344
Amit
76,023,246
15,724,084
inside nested loops appending list to a new list gives unexpected result
<p>I have tried to write up my issue in a reproducible manner. I cannot see the gap in my logic that causes this error. I have an inner loop which collects new elements while scraping and appends them to a list named <code>list_inner</code>. Then, in the outer loop, a list named <code>list_outer</code> appends that new list. The final result has the right number of members, but the elements of <code>list_outer</code> are all the same: the last state of <code>list_inner</code>. How can this happen? If it were a one-element list, I would understand.</p> <pre><code>import random list_inner=[] list_outer=[] for i in range(5): for r in range(random.randint(1,10)): list_inner.append(r) print(r) list_outer.append(list_inner) print(list_outer) print(list_outer) </code></pre> <p>I am sharing two results, to give an idea of what actually happens versus what I was expecting. I got this result:</p> <pre><code>0 1 2 3 [[0, 1, 2, 3]] 0 1 2 3 4 [[0, 1, 2, 3, 0, 1, 2, 3, 4], [0, 1, 2, 3, 0, 1, 2, 3, 4]] </code></pre> <p>But I was expecting this result:</p> <pre><code>[[0,1,2,3],[0,1,2,3,4]] </code></pre>
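`list_outer` ends up with identical members because the same `list_inner` object is appended on every outer iteration — Python stores references, not copies, so all five entries point at one list that keeps growing. Creating a fresh inner list inside the outer loop gives the expected output:

```python
import random

list_outer = []
for i in range(5):
    list_inner = []          # new list object each outer iteration
    for r in range(random.randint(1, 10)):
        list_inner.append(r)
    list_outer.append(list_inner)

print(list_outer)            # e.g. [[0, 1, 2, 3], [0, 1, 2, 3, 4], ...]
```

An alternative that keeps the original structure is `list_outer.append(list_inner.copy())` followed by `list_inner.clear()`, but re-creating the list is the idiomatic fix.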
<python>
2023-04-15 15:39:36
2
741
xlmaster
76,023,235
19,553,193
Cannot reinitialise DataTable in Django
<p>I'm confused on how can I infuse my json format from django to Datatable , what I tried below using loop in the script</p> <p><code>{% for json in company %}</code> <code>&lt;script&gt;..&lt;/script&gt;</code> <code>{% endfor% }</code></p> <p>but It persist error <code>DataTables warning: table id=DataTables_Table_0 - Cannot reinitialise DataTable.</code></p> <p>Is there's any wrong with my implementation, Am I correct? to fetch all data from views.py json to datatable , is there anyway?</p> <p><strong>Javascript</strong></p> <pre><code>{% block footer_scripts %} {% for row in company %} &lt;script&gt; $(function () { var dt_basic_table = $('.datatables-basic'), dt_complex_header_table = $('.dt-complex-header'), dt_row_grouping_table = $('.dt-row-grouping'), dt_multilingual_table = $('.dt-multilingual'), dt_basic; // DataTable with buttons // -------------------------------------------------------------------- if (dt_basic_table.length) { dt_basic = dt_basic_table.DataTable({ ajax: assetsPath + 'json/table-datatable_example.json', //I want to infused here the json format from views.py columns: [ { data: '' }, { data: 'id' }, { data: 'id' }, { data: 'full_name' }, { data: 'email' }, { data: 'start_date' }, { data: 'salary' }, { data: 'status' }, { data: '' } ], columnDefs: [ { // For Responsive className: 'control', orderable: false, searchable: false, responsivePriority: 2, targets: 0, render: function (data, type, full, meta) { return ''; } }, { // For Checkboxes targets: 1, orderable: false, searchable: false, responsivePriority: 3, checkboxes: true, render: function () { return '&lt;input type=&quot;checkbox&quot; class=&quot;dt-checkboxes form-check-input&quot;&gt;'; }, checkboxes: { selectAllRender: '&lt;input type=&quot;checkbox&quot; class=&quot;form-check-input&quot;&gt;' } }, { targets: 2, searchable: false, visible: false }, { // Avatar image/badge, Name and post targets: 3, responsivePriority: 4, render: function (data, type, full, meta) { var 
$user_img = full['avatar'], $name = full['full_name'], $post = full['post']; if ($user_img) { // For Avatar image var $output = '&lt;img src=&quot;' + assetsPath + 'img/avatars/' + $user_img + '&quot; alt=&quot;Avatar&quot; class=&quot;rounded-circle&quot;&gt;'; } else { // For Avatar badge var stateNum = Math.floor(Math.random() * 6); var states = ['success', 'danger', 'warning', 'info', 'primary', 'secondary']; var $state = states[stateNum], $name = full['full_name'], $initials = $name.match(/\b\w/g) || []; $initials = (($initials.shift() || '') + ($initials.pop() || '')).toUpperCase(); $output = '&lt;span class=&quot;avatar-initial rounded-circle bg-label-' + $state + '&quot;&gt;' + $initials + '&lt;/span&gt;'; } // Creates full output for row var $row_output = '&lt;div class=&quot;d-flex justify-content-start align-items-center user-name&quot;&gt;' + '&lt;div class=&quot;avatar-wrapper&quot;&gt;' + '&lt;div class=&quot;avatar me-2&quot;&gt;' + $output + '&lt;/div&gt;' + '&lt;/div&gt;' + '&lt;div class=&quot;d-flex flex-column&quot;&gt;' + '&lt;span class=&quot;emp_name text-truncate&quot;&gt;' + $name + '&lt;/span&gt;' + '&lt;small class=&quot;emp_post text-truncate text-muted&quot;&gt;' + $post + '&lt;/small&gt;' + '&lt;/div&gt;' + '&lt;/div&gt;'; return $row_output; } }, { responsivePriority: 1, targets: 4 }, { // Label targets: -2, render: function (data, type, full, meta) { var $status_number = full['status']; var $status = { 1: { title: 'Current', class: 'bg-label-primary' }, 2: { title: 'Professional', class: ' bg-label-success' }, 3: { title: 'Rejected', class: ' bg-label-danger' }, 4: { title: 'Resigned', class: ' bg-label-warning' }, 5: { title: 'Applied', class: ' bg-label-info' } }; if (typeof $status[$status_number] === 'undefined') { return data; } return ( '&lt;span class=&quot;badge ' + $status[$status_number].class + '&quot;&gt;' + $status[$status_number].title + '&lt;/span&gt;' ); } }, ], }); $('div.head-label').html('&lt;h5 
class=&quot;card-title mb-0&quot;&gt;Admin&lt;/h5&gt;'); } }); &lt;/script&gt; {% endfor %} {% endblock footer_scripts %} </code></pre> <p><strong>This is my views.py.</strong> I used the serializer to convert the data to JSON and want to fetch it inside the DataTable.</p> <pre><code>from django.core import serializers from django.shortcuts import render from main.models import ( Company,Unit ) import json def company(request): qs_json = serializers.serialize('json', Company.objects.filter().order_by('name')) context = { 'company' : qs_json, } return render(request, 'admin/company.html', context) </code></pre> <p><strong>Updated: Table headers.</strong> My tbody is built in my script, so should I use a loop there?</p> <pre><code> &lt;div class=&quot;card&quot;&gt; &lt;div class=&quot;card-datatable table-responsive pt-0&quot;&gt; &lt;table class=&quot;datatables-basic table&quot;&gt; &lt;thead&gt; &lt;tr&gt; &lt;th&gt;&lt;/th&gt; &lt;th&gt;&lt;/th&gt; &lt;th&gt;id&lt;/th&gt; &lt;th&gt;Name&lt;/th&gt; &lt;th&gt;Code&lt;/th&gt; &lt;th&gt;Address&lt;/th&gt; &lt;th&gt;Remarks&lt;/th&gt; &lt;th&gt;Status&lt;/th&gt; &lt;th&gt;Action&lt;/th&gt; &lt;/tr&gt; &lt;/thead&gt; &lt;/table&gt; &lt;/div&gt; &lt;/div&gt; </code></pre>
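The reinitialise warning usually means the DataTable init script runs more than once — here, once per row of the `{% for %}` loop. One approach (a sketch, not tied to the exact `Company` fields, which are assumptions here) is to drop the loop, serve the serialized queryset from a single JSON endpoint, and flatten Django's serializer output into the flat rows DataTables' `ajax`/`columns` options expect:

```python
import json

def serialized_to_datatable(qs_json):
    """Flatten the output of Django's serializers.serialize('json', ...) into
    the flat {"data": [...]} shape a DataTables ajax source expects.
    Hypothetical helper -- the field names depend on your Company model."""
    records = json.loads(qs_json)
    rows = []
    for rec in records:
        row = {"id": rec["pk"]}
        row.update(rec["fields"])  # e.g. name, code, address, remarks, status
        rows.append(row)
    return {"data": rows}

# In the view, return it once from a dedicated URL (no template loop needed):
#     return JsonResponse(serialized_to_datatable(qs_json))
# and point the DataTable at it:  ajax: "/admin/company/json/"
```

The script block then initializes the table exactly once, outside any `{% for %}` loop.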
<python><django>
2023-04-15 15:36:57
1
335
marivic valdehueza
76,023,117
1,488,363
Scrapy in python - save within the parse function -thread safe?
<p>I'm using scrapy to download pages. I would like to save all the downloaded pages in a single file. I have the following code for the constructor and parse:</p> <pre><code> def __init__(self): self.time = time_utils.get_current_time_hr() self.folder = f&quot;{ROOT_DIR}/data/tickers/scrapy/{self.time}/&quot; os.makedirs(self.folder, exist_ok=True) filename = self.folder + &quot;bigfile.txt&quot; self.f = open(filename, 'w') def parse(self, response): buffer = list() buffer.append(response.body.decode(&quot;utf-8&quot;) ) self.f.write(&quot;&quot;.join(buffer)) self.f.flush() </code></pre> <p>Is there a possibility that different HTML pages will be mixed together in the bigfile.txt I'm writing?</p>
<python><scrapy>
2023-04-15 15:13:30
1
1,554
dotan
76,023,089
20,646,254
How to connect DigitalOcean Function to MySQL?
<p>I have created a simple Python Function (section Functions, NOT Apps). The function needs to access a MySQL database (the database is also hosted on DigitalOcean). When running the function I get an error: &quot;2023-04-15T13:55:42.402632917Z stderr: The function exceeded its time limits of 6000 milliseconds. Logs might be missing.&quot;</p> <p>The function is very basic, so 6 seconds should be more than enough for it to run successfully. Somehow the timeout occurs while it is trying to connect to the DB.</p> <p>If I comment out the lines which connect to the DB, then the function runs just fine without errors.</p>
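Six seconds is plenty for a trivial query, so the hang is almost certainly at connection time: DigitalOcean managed databases drop traffic from anything not listed as a trusted source, and a serverless Function is not on that list by default, so the TCP connect simply blocks until the platform kills the invocation. A stdlib-only probe (host and port are placeholders) makes that visible inside your own timeout instead:

```python
import socket

def can_reach_db(host, port, timeout=3.0):
    """Quick reachability check to run before the real MySQL connect.
    If this returns False from inside the Function, the connection is being
    dropped at the network level -- check the database's 'trusted sources'
    firewall settings rather than the query code."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False
```

If the probe fails, add the Function's egress (or, temporarily while testing, an open rule) to the database's trusted sources, and pass an explicit connect timeout to your MySQL client so failures surface within the 6 s budget.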
<python><function><digital-ocean>
2023-04-15 15:07:33
1
447
TaKo
76,023,085
912,873
Native messaging not working from host to chrome extension
<p>I am working on a Chrome extension using native messaging, with the host written in Python. Communication from the extension to Python works perfectly fine, but when I try to send a message from the host to the extension, it is not received. I am getting no errors, even though I have enabled logging for the browser. I have tested on Linux and Mac with the same issue. Below is my code. Any help would be appreciated.</p> <p><strong>background.js</strong></p> <pre><code>const port = chrome.runtime.connectNative(&quot;com.google.chrome.uniphore&quot;); port.onDisconnect.addListener((p) =&gt; console.log(chrome.runtime.lastError)); port.postMessage(&quot;ping&quot;) port.onMessage.addListener(function (msg) { console.log('Received' + msg); return false; }); chrome.runtime.onInstalled.addListener((reason) =&gt; { console.log(reason); }); </code></pre> <p><strong>main.py</strong></p> <pre><code>import sys import json import struct def getMessage(): try: rawLength = sys.stdin.buffer.read(4) if len(rawLength) == 0: sys.exit(0) messageLength = struct.unpack('@I', rawLength)[0] message = sys.stdin.buffer.read(messageLength).decode('utf-8') return json.loads(message) except Exception as x: print(&quot;Error&quot;, x, file=sys.stderr) # Send an encoded message to stdout def sendMessage(encodedMessage): try: sys.stdout.buffer.write(encodedMessage[&quot;length&quot;]) sys.stdout.buffer.write(encodedMessage[&quot;content&quot;]) sys.stdout.flush() except Exception as x: print(&quot;Error&quot;, x, file=sys.stderr) def encodeMessage(messageContent): # https://docs.python.org/3/library/json.html#basic-usage # To get the most compact JSON representation, you should specify # (',', ':') to eliminate whitespace. # We want the most compact representation because the browser rejects # messages that exceed 1 MB.
encoded_content = json.dumps(messageContent).encode(&quot;utf-8&quot;) encoded_length = struct.pack(&quot;@I&quot;, len(encoded_content)) return {&quot;length&quot;: encoded_length, &quot;content&quot;: encoded_content} while True: receivedMessage = getMessage() if receivedMessage == &quot;ping&quot;: sendMessage(encodeMessage(&quot;pong&quot;)) </code></pre> <p><strong>run.sh</strong></p> <pre><code>#!/bin/sh python3 main.py &gt;&gt; hello 2&gt;&gt; err_file </code></pre> <p><strong>com.google.chrome.echo.json</strong></p> <pre><code>{ &quot;name&quot;: &quot;com.google.chrome.echo&quot;, &quot;description&quot;: &quot;echo Application&quot;, &quot;path&quot;: &quot;/Applications/native/run.sh&quot;, &quot;type&quot;: &quot;stdio&quot;, &quot;allowed_origins&quot;: [ &quot;chrome-extension://bmfbcejdknlknpncfpeloejonjoledha/&quot; ] } </code></pre>
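One concrete thing to check in main.py: both `except` blocks print `"Error", x` to stdout, which is the same stream Chrome parses for length-prefixed frames, so any exception silently corrupts the channel in the host-to-extension direction. A small sketch of stderr-only logging plus a round-trip check of the framing (the helpers mirror your `encodeMessage`, they are not a new API):

```python
import json
import struct
import sys

def log(*args):
    # diagnostics must go to stderr: stdout carries the framed messages,
    # and a single stray print() there corrupts every following frame
    print(*args, file=sys.stderr)

def encode_message(content):
    # native-messaging frame: 4-byte native-endian length + UTF-8 JSON body
    body = json.dumps(content).encode("utf-8")
    return struct.pack("@I", len(body)) + body

def decode_message(frame):
    # inverse of encode_message, handy for testing the framing round-trip
    length = struct.unpack("@I", frame[:4])[0]
    return json.loads(frame[4:4 + length].decode("utf-8"))
```

Also worth double-checking: the shell wrapper redirects stdout with `>> hello`, which diverts the framed replies away from Chrome entirely — only stderr should be redirected to a log file.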
<javascript><python><google-chrome><google-chrome-extension><chrome-native-messaging>
2023-04-15 15:06:55
0
2,841
pvnarula
76,022,985
7,987,455
Web Scraping by BeautifulSoup if the data shown only if I click "show details"
<p>I am trying to scrape data from a car-selling website. When I enter the website I see a table of cars (type, price, year), but to learn more details about a car I have to click on it, and only then are the details shown. How can I scrape the data from those details without Selenium?</p> <pre><code>import headers import requests from bs4 import BeautifulSoup page_num = 1 url = f&quot;https://www.example.com/vehicles/cars?page={page_num}&quot; req = requests.get(url, headers=headers.get_headers()).text soup = BeautifulSoup(req,&quot;html.parser&quot;) def decide_row(soup): rows = soup.find_all('div',class_=&quot;feeditem table&quot;) return rows def decide_details(rows): for car in rows: car_kilometrage = car.find('div',id='accordion_wide_0') print(car_kilometrage) decide_details(decide_row(soup)) </code></pre>
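When details only appear after a click, they are usually fetched by an XHR you can replay yourself: open the browser dev tools, click "show details", and copy the request that fills `accordion_wide_0` from the Network tab. A hedged sketch of replaying it — every URL and path segment below is a placeholder, since the real site is not shown:

```python
import json
import urllib.request

BASE = "https://www.example.com/api/vehicles"  # hypothetical endpoint

def build_details_url(car_id, base=BASE):
    # the /api/.../details path is an assumption -- replace it with the
    # actual request you observe in the dev-tools Network tab
    return f"{base}/{car_id}/details"

def fetch_car_details(car_id, base=BASE):
    # plain HTTP fetch of the JSON the accordion click would have loaded;
    # no Selenium needed once the endpoint is known
    with urllib.request.urlopen(build_details_url(car_id, base)) as resp:
        return json.loads(resp.read().decode("utf-8"))
```

If the details are instead a separate HTML page, the same idea applies: extract each car's link from the table rows and request those pages with `requests` + BeautifulSoup.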
<python><web-scraping><beautifulsoup>
2023-04-15 14:51:31
1
315
Ahmad Abdelbaset
76,022,890
6,528,055
Why are my deep learning models giving unreasonably high accuracy on test data?
<p>I'm trying to do sarcasm detection on Twitter data to replicate the results mentioned in this <a href="https://aclanthology.org/2020.wnut-1.2.pdf" rel="nofollow noreferrer">paper</a>. It is a binary classification problem. For that I used a separate set of <strong>unlabeled</strong> tweets to create the embedding matrix using a Word2Vec model. Before doing that I preprocessed the <strong>unlabeled</strong> data and removed the rare words as mentioned in the paper. Code is as follows:</p> <pre><code>model = Word2Vec(df_hing_eng['tweet_text'], vector_size=300, window=10, hs=0, negative = 1) embedding_size = model.wv.vectors.shape[1] </code></pre> <p>Next I fit a tokenizer on this <strong>unlabeled</strong> data:</p> <pre><code>tok = Tokenizer() tok.fit_on_texts(df_hing_eng['tweet_text']) vocab_size = len(tok.word_index) + 1 </code></pre> <p>Next, I created the embedding matrix as follows:</p> <pre><code>word_vec_dict={} for word in model.wv.index_to_key: word_vec_dict[word]=model.wv.get_vector(word) embed_matrix=np.zeros(shape=(vocab_size,embedding_size)) for word,i in tok.word_index.items(): embed_vector=word_vec_dict.get(word) if embed_vector is not None: embed_matrix[i]=embed_vector </code></pre> <p>Now, I'm using a separate set of <strong>labeled</strong> tweets to be used as training and test data (for the DL models). I used the same preprocessing steps as the <code>unlabeled</code> data and removed the same rare words we found in the <code>unlabeled</code> data.
Now I find the maximum length of all tweets in the <strong>labeled</strong> data.</p> <pre><code>maxi = -1 for row in df_labeled.loc[:,'tweet_text']: if len(row)&gt;maxi: maxi = len(row) </code></pre> <p>After that I used the tokenizer that I fit on the <strong>unlabeled</strong> data to create the word indices for the <strong>labeled</strong> data as follows:</p> <pre><code>encoded_tweets = tok.texts_to_sequences(df_labeled['tweet_text']) </code></pre> <p>Now I padded the <strong>labeled</strong> data to the length of the longest tweet in the <strong>labeled</strong> data.</p> <pre><code>padded_tweets = pad_sequences(encoded_tweets, maxlen=maxi, padding='post') </code></pre> <p>Finally, I split the <strong>labeled</strong> data into training and test data as follows:</p> <pre><code>x_train,x_test,y_train,y_test=train_test_split(padded_tweets, df_labeled['is_sarcastic'], test_size=0.10, random_state=42) </code></pre> <p><strong>Is there any data leakage anywhere from training to test data, or any other problem?</strong> Almost all of my DL models are giving more than 90% accuracy, contrary to the original paper which reported a maximum of 75% accuracy. The code for the DL models was written by the authors of the paper. I used the same parameters as they mentioned.</p> <p>The tokenizer was actually fit on a completely different <strong>unlabeled</strong> dataset that is absolutely separate from the (<strong>labeled</strong>) training and test data.</p> <p><strong>Edit</strong></p> <p>Further analyses of the datasets reveal the following:</p> <ol> <li>There are huge overlaps between the <strong>unlabeled</strong> embedding data and the <strong>labeled</strong> data.</li> <li>There are many duplicate rows in both the <strong>labeled</strong> and <strong>unlabeled</strong> data.</li> </ol> <p>I shared the code of an LSTM below. 
This code was originally written by the authors of the paper.</p> <pre><code>#define callbacks early_stopping = EarlyStopping(monitor='val_loss', min_delta=0.01, patience=4, verbose=1) callbacks_list = [early_stopping] #this one---&gt;LSTM lstm_out1 = 150 model = Sequential() model.add(Embedding(vocab_size, embed_dim, weights=[embed_matrix], input_length=max_tweet_len, trainable=False)) model.add(LSTM(lstm_out1, dropout=0.2, recurrent_dropout=0.2)) model.add(Dense(64, activation='relu')) model.add(Dense(1, activation='sigmoid')) adam = Adam(lr=0.001, beta_1=0.9, beta_2=0.999, epsilon=1e-08, decay=0.0) model.compile(loss='binary_crossentropy', optimizer=adam, metrics=['accuracy']) model.summary() </code></pre>
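Given the duplicates noted in the edit, the inflated accuracy most likely comes from identical tweets landing on both sides of `train_test_split`, letting the model effectively memorise test examples. A minimal pre-split dedupe sketch (plain Python over `(text, label)` pairs; with a DataFrame, `drop_duplicates` on the text column does the same job):

```python
def dedupe_labeled(rows):
    """Drop exact duplicate (text, label) rows BEFORE train_test_split.
    Duplicates split across train and test are classic leakage and can
    alone push accuracy well past the paper's reported 75%."""
    seen, out = set(), []
    for text, label in rows:
        key = (text, label)
        if key not in seen:
            seen.add(key)
            out.append((text, label))
    return out
```

Deduplicating first, then splitting, gives a test set the model has genuinely never seen; if accuracy drops toward the paper's numbers afterwards, the duplicates were the culprit.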
<python><keras><deep-learning><nlp><text-classification>
2023-04-15 14:34:58
0
969
Debbie
76,022,501
1,804,027
How can I capture raw messages that are coming out to OpenAI API?
<p>I'm using LangChain to build prompts that are later sent to the OpenAI API. I've built an agent, but it's behaving a bit differently than I expected. It looks like it's missing some of my instructions that I included in the prompt. Specifically: it seems to not remember past messages. I'm looking for a way to debug it.</p> <p>Here's more or less how I'm building the agent. I'm not including everything; it's just for general information on what I'm using.</p> <pre class="lang-py prettyprint-override"><code>memory = ConversationBufferMemory(memory_key=&quot;chat_history&quot;, return_messages=True) llm = ChatOpenAI(temperature=0) agent_chain = initialize_agent(tools, llm, agent=AgentType.CHAT_CONVERSATIONAL_REACT_DESCRIPTION, verbose=True, memory=memory) agent_chain.run(input=&quot;hi, I am Bob&quot;) </code></pre> <p>As far as I know, LangChain itself adds some parts of prompts. For example, this particular agent I use add some information about how the answer should be structured. But this is not transparent to the end user, the final prompt is being constructed inside the library. Even in verbose mode I can't see every part of the sent prompt.</p> <p>Is there any way I could retrieve complete, raw messages sent to the OpenAI API and responses from it?</p>
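LangChain exposes the fully assembled prompts through its callback hooks, which fire with exactly what is about to be sent to the API. A duck-typed sketch of such a handler — the method names follow LangChain's `BaseCallbackHandler`, but verify them against your installed version, since the callback API has changed between releases:

```python
class PromptLogger:
    """Sketch of a callback handler that records the raw prompts LangChain
    is about to send. Method names mirror langchain's BaseCallbackHandler
    (an assumption about your version -- check before relying on it)."""

    def __init__(self):
        self.prompts = []

    def on_llm_start(self, serialized, prompts, **kwargs):
        # completion-style models: prompts is a list of raw prompt strings
        self.prompts.extend(prompts)

    def on_chat_model_start(self, serialized, messages, **kwargs):
        # chat models: messages is a list of message batches
        for batch in messages:
            self.prompts.extend(str(m) for m in batch)
```

Passing it as `agent_chain.run(input="hi, I am Bob", callbacks=[PromptLogger()])` is one option (the `callbacks` keyword is itself version-dependent); recent LangChain versions also offer `langchain.debug = True`, which dumps every prompt and response to the console.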
<python><openai-api><langchain>
2023-04-15 13:15:59
2
11,299
Piotrek
76,022,365
13,078,279
Why does the jacobian of the metric tensor give zero?
<p>I am trying to compute the derivatives of the <a href="https://en.wikipedia.org/wiki/Metric_tensor" rel="nofollow noreferrer">metric tensor</a> given as follows:</p> <p><img src="https://quicklatex.com/cache3/e6/ql_b24e8838c420fed42f5ae46a5346c9e6_l3.png" alt="Metric tensor in spherical coordinates" /></p> <p>As part of this, I am using PyTorch to compute the jacobian of the metric. Here is my code so far:</p> <pre class="lang-py prettyprint-override"><code># initial coordinates r0, theta0, phi0 = (3., torch.pi/2, 0.1) coord = torch.tensor([r0, theta0, phi0], requires_grad=True) print(&quot;r, theta, phi coordinates: &quot;,coord.data) def metric_tensor(coords): r = coords[0] theta = coords[1] phi = coords[2] return torch.tensor([ [1., 0., 0.], [0., r ** 2, 0.], [0., 0., r ** 2 * torch.sin(theta) ** 2] ]) jacobian = torch.autograd.functional.jacobian(metric_tensor, coord, create_graph=True) </code></pre> <p>For reasons I don't understand, the jacobian always returns zero, even though the derivatives of the metric shouldn't all be zero. Could anyone point me to what the issue may be?</p>
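The zeros come from `torch.tensor([...])` inside `metric_tensor`: it builds a brand-new leaf tensor from the plain numeric values of `r` and `theta`, detaching the output from the autograd graph, so every derivative is zero. Building the rows with differentiable ops keeps the graph intact — a sketch:

```python
import math
import torch

def metric_tensor(coords):
    # torch.stack keeps the output connected to the graph; wrapping graph
    # tensors in a fresh torch.tensor(...) would detach them (the bug above)
    r, theta = coords[0], coords[1]
    one, zero = torch.ones_like(r), torch.zeros_like(r)
    return torch.stack([
        torch.stack([one, zero, zero]),
        torch.stack([zero, r ** 2, zero]),
        torch.stack([zero, zero, r ** 2 * torch.sin(theta) ** 2]),
    ])

coord = torch.tensor([3.0, math.pi / 2, 0.1], requires_grad=True)
jac = torch.autograd.functional.jacobian(metric_tensor, coord)
# jac has shape (3, 3, 3): d(metric)[i, j] / d(coord)[k]
```

For instance, `jac[1, 1, 0]` is d(r²)/dr = 2r = 6 at r = 3, no longer zero.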
<python><pytorch><autograd>
2023-04-15 12:49:01
1
416
JS4137
76,022,351
885,764
Python lists always assign reference not value
<p>I want to copy list values to another list because I modify the original one and use the new one. But whatever I try, it always assigns a reference, not the values.</p> <pre><code>class Chromosome: def __init__(self): self.matrix = [[0 for x in range(LENGTH)] for y in range(LENGTH)] class Population: def __init__(self, start): self.start = start self.chromosomes = [] def crossover(self): for index in range(0, POPULATION_SIZE - 1, 2): swap_point = random.randint(7, 10) matrix1 = self.chromosomes[index].matrix matrix2 = self.chromosomes[index + 1].matrix self.chromosomes[index].crossover(matrix2, swap_point) self.chromosomes[index + 1].crossover(matrix1, swap_point) </code></pre> <p>These are the lines where I want to assign values, not references:</p> <p><code>matrix1 = self.chromosomes[index].matrix[:]</code> and <code>matrix2 = self.chromosomes[index + 1].matrix[:]</code></p> <p>I tried the following:</p> <p><code>list(self.chromosomes[index + 1].matrix)</code></p> <p><code>self.chromosomes[index + 1].matrix[:]</code></p> <p><code>self.chromosomes[index + 1].matrix.copy()</code></p> <p>How can I get the values of the chromosome into matrix1 and matrix2?</p>
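All three attempts — `[:]`, `list(...)`, and `.copy()` — are shallow copies: they create a new outer list whose elements are references to the same inner row lists, so mutating a row through either name shows up in both. For a nested list of lists, `copy.deepcopy` duplicates the rows too:

```python
import copy

matrix = [[0, 1], [2, 3]]

shallow = matrix[:]            # new outer list, but the SAME inner row objects
deep = copy.deepcopy(matrix)   # new objects all the way down

matrix[0][0] = 99              # mutate a row through the original name
```

In `crossover`, that becomes `matrix1 = copy.deepcopy(self.chromosomes[index].matrix)` (and likewise for `matrix2`).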
<python><python-3.x><list><reference>
2023-04-15 12:44:48
1
2,234
Hakan SONMEZ
76,022,291
13,492,584
Get list of latest installed python packages
<p>I am trying to get an ordered list of the most recently installed packages in Python. I am using pip (I cannot use conda or a terminal). The command <code>pip list</code> returns all the currently installed packages, with no ordering. Since I installed a package which damaged the older ones, I need the list of the most recently installed packages in order to uninstall only them and reinstall them later. In simple words, I need to do a rollback. How can I do it?</p>
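pip keeps no record of install order, so any "latest first" listing has to be approximated. One heuristic (a sketch, not an official API — `_path` is an internal attribute of CPython's `PathDistribution`) is to sort distributions by the modification time of their metadata directory, which pip writes at install time:

```python
import importlib.metadata
import os

def packages_by_install_time():
    """Approximate 'most recently installed first' using the mtime of each
    distribution's *.dist-info directory. Best-effort heuristic: reinstalls
    and upgrades refresh the mtime, and _path is not a stable public API."""
    rows = []
    for dist in importlib.metadata.distributions():
        path = getattr(dist, "_path", None)
        name = dist.metadata["Name"]
        if path is None or name is None or not os.path.exists(str(path)):
            continue
        rows.append((os.path.getmtime(str(path)), name))
    rows.sort(reverse=True)
    return [name for _, name in rows]
```

For a true rollback, a cleaner route is recreating the environment from a known-good `pip freeze` snapshot, if one exists.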
<python><pip><package>
2023-04-15 12:32:25
2
644
hellomynameisA
76,022,237
2,663,585
working examples of js.amcharts in python
<p>I came across this <a href="https://pypi.org/project/js.amcharts" rel="nofollow noreferrer">link</a> for using js.amcharts in python, but couldn't find any working examples of the same. I did try googling but no luck. Can someone please help me on this? I like the 3d charts in amcharts and thus want to generate a PDF in python and embed the charts in it. Please help.</p>
<python><amcharts>
2023-04-15 12:20:33
1
2,561
Iowa