QuestionId | UserId | QuestionTitle | QuestionBody | Tags | CreationDate | AnswerCount | UserExpertiseLevel | UserDisplayName |
|---|---|---|---|---|---|---|---|---|
76,668,828 | 3,884,536 | How to format a list of ints into a hex representation? | <p>This is more a question of style and what is pythonic. I'd like to create a string out of a list of ids. My current code looks like this:</p>
<pre><code>idL = [1, 2, 64, 32, 3200, 441122]
txt = ""
for id in idL:
    if txt != "":
        txt += "-"
    txt += "{:04X}".format(id)
print(txt)
</code></pre>
<p>I guess that there is a more pythonic way to do the formatting, like a generator expression.</p>
<p>I was also thinking if something like</p>
<pre><code>txt += f"{id.hex()}"
</code></pre>
<p>works better, but I somehow failed to get capital letters that way.</p>
<p>Any ideas and suggestions?</p>
| <python><formatting><hex> | 2023-07-12 09:01:28 | 1 | 2,411 | Stefan Jaritz |
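For what it's worth, the generator-expression version the question is reaching for can be a one-liner; the `X` presentation type in an f-string produces the uppercase hex the asker was after (a sketch, not from the question itself):

```python
idL = [1, 2, 64, 32, 3200, 441122]

# str.join over a generator expression; ":04X" zero-pads to 4 digits and
# uses uppercase hex, matching the loop-based version above.
txt = "-".join(f"{i:04X}" for i in idL)
print(txt)  # 0001-0002-0040-0020-0C80-6BB22
```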
76,668,744 | 1,303,562 | Start process and get output and return code for python 3.6 | <p>I want to start a process and get its output and return code on <code>python 3.6</code>.</p>
<p>Currently I have this code, but I need an equivalent for <code>python 3.6</code> (for another machine whose <code>python</code> version I cannot upgrade):</p>
<pre><code>process = subprocess.run(['ll'], capture_output=True, text=True)
return_code = process.returncode
</code></pre>
| <python> | 2023-07-12 08:50:27 | 2 | 721 | Dana Yeger |
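A hedged sketch of a 3.6-compatible equivalent: `capture_output=` and `text=` were only added in Python 3.7, but they are shorthand for `stdout=PIPE, stderr=PIPE` and `universal_newlines=True`, both of which exist in 3.6 (the command below is a placeholder):

```python
import subprocess
import sys

# Python 3.6 equivalent of capture_output=True, text=True (both added in 3.7):
# stdout/stderr=PIPE captures the streams, universal_newlines=True decodes to str.
process = subprocess.run(
    [sys.executable, "-c", "print('hello')"],   # placeholder for the real command
    stdout=subprocess.PIPE,
    stderr=subprocess.PIPE,
    universal_newlines=True,
)
return_code = process.returncode
print(return_code, process.stdout.strip())  # 0 hello
```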
76,668,652 | 5,722,359 | How to detect a change in state of a ttk.Button caused by an event and how to use the .state method? | <p>I would like to simulate the following events:</p>
<ol>
<li>When the app frame resizes, it will toggle a change in state in a <code>ttk.Button</code> widget (i.e. <code>self.bn1</code>). If it is not in a disabled state, it will change to a disabled state, and vice versa.</li>
<li>When the state of <code>self.bn1</code> is toggled, it will similarly toggle a change in state in <code>self.bn2</code> but in an opposite sense. That is, if <code>self.bn1</code> is disabled, <code>self.bn2</code> will be enabled, and vice versa. This is the key objective.</li>
</ol>
<p>For objective 2, I want to use the following approach (I think this is the correct way but do correct me if I am wrong):</p>
<pre><code>self.bn1.bind("<Activate>", self._set_bn2_disabled)
self.bn1.bind("<Deactivate>", self._set_bn2_enabled)
</code></pre>
<p>with the intention of learning how to use the <code>Activate</code> and <code>Deactivate</code> event types. Their documentation is given <a href="https://anzeljg.github.io/rin2/book2/2405/docs/tkinter/event-types.html" rel="nofollow noreferrer">here</a>.</p>
<p>Below is my test code.</p>
<pre><code>import tkinter as tk
from tkinter import ttk


class App(ttk.Frame):

    def __init__(self, parent):
        super().__init__(parent)
        self.parent = parent
        self.after_id = None
        self._create_widget()
        self._create_bindings()

    def _create_widget(self):
        self.bn1 = ttk.Button(self, text="B1")
        self.bn1.grid(row=0, column=0, padx=5, pady=5)
        self.bn2 = ttk.Button(self, text="B2")
        self.bn2.grid(row=1, column=0, padx=5, pady=5)

    def _create_bindings(self):
        self.bind("<Configure>", self._schedule_event)
        self.bind("<<FrameMoved>>", self._change_bn1_status)
        self.bn1.bind("<Activate>", self._set_bn2_disabled)
        self.bn1.bind("<Deactivate>", self._set_bn2_enabled)

    # Event handlers
    def _schedule_event(self, event):
        if self.after_id:
            self.after_cancel(self.after_id)
        self.after_id = self.after(500, self.event_generate, "<<FrameMoved>>")

    def _change_bn1_status(self, event):
        print(f"_change_bn1_status(self, event):")
        print(f"{event.widget=}")
        print(f"{self.bn1.state()=}")
        if self.bn1.state() == () or self.bn1.state() == ('!disable'):
            self.bn1.state(('disable'))
        elif self.bn1.state() == ('disable'):
            self.bn1.state(['!disable'])

    def _set_bn2_disabled(self, event):
        self.bn2.state(['disabled'])
        print(f"{self.bn2.state()=}")

    def _set_bn2_enabled(self, event):
        self.bn2.state(['!disabled'])
        print(f"{self.bn2.state()=}")


if __name__ == '__main__':
    root = tk.Tk()
    app = App(root)
    app.pack(fill="both", expand=True)
    root.mainloop()
</code></pre>
<p>However, it raises an error in the state command.</p>
<pre><code>_change_bn1_status(self, event):
event.widget=<__main__.App object .!app>
self.bn1.state()=()
Exception in Tkinter callback
Traceback (most recent call last):
  File "/usr/lib/python3.10/tkinter/__init__.py", line 1921, in __call__
    return self.func(*args)
  File "/home/user/Coding/test.py", line 36, in _change_bn1_status
    self.bn1.state(('disable'))
  File "/usr/lib/python3.10/tkinter/ttk.py", line 588, in state
    return self.tk.splitlist(str(self.tk.call(self._w, "state", statespec)))
_tkinter.TclError: Invalid state name d
</code></pre>
<p>Documentation for querying the state of a ttk widget is given by the <code>.state</code> method described <a href="https://anzeljg.github.io/rin2/book2/2405/docs/tkinter/ttk-Widget.html" rel="nofollow noreferrer">here</a>.</p>
<p><strong>Update:</strong></p>
<p>Following the comment by @Tranbi, I have revised the <code>self._change_bn1_status</code> method to:</p>
<pre><code>def _change_bn1_status(self, event):
    print(f"\nBefore: {self.bn1.state()=}")
    if self.bn1.state() == ():
        self.bn1.state(['disabled'])
    elif 'disabled' in self.bn1.state():
        self.bn1.state(['!disabled'])
    print(f"After: {self.bn1.state()=}")
</code></pre>
<p>The state of <code>self.bn1</code> is toggling correctly but not <code>self.bn2</code>. How do I do this?</p>
| <python><tkinter><ttk> | 2023-07-12 08:38:13 | 1 | 8,499 | Sun Bear |
76,668,627 | 9,182,743 | Separate Clearly Defined clusters with DBSCAN | <p>I have a dataset with 4 features, with feature pairs (1,4) and (2,4) clearly separable.
<a href="https://i.sstatic.net/dwlWz.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/dwlWz.png" alt="enter image description here" /></a></p>
<p>I am trying to use <strong>DBSCAN</strong> to come up with the clusters; however, I am unable to create satisfactory clusters.</p>
<p>Here is the code snippet where I:</p>
<ul>
<li>iterate over all combinations of eps and min_sample values.</li>
<li>Run DBSCAN</li>
<li>save the results if the number of clusters is more than 1 and less than 7</li>
</ul>
<pre class="lang-py prettyprint-override"><code>#### STEP 4: DBSCAN ####
# Define the parameter combinations to evaluate
eps_values = [0.01, 0.03, 0.05, 0.07, 0.1, 0.15]
min_samples_values = [2, 3, 5, 7, 10, 15]

# Iterate over parameter combinations
names = []
for eps, min_samples in itertools.product(eps_values, min_samples_values):
    # Create a DBSCAN object with the current parameter values
    dbscan = DBSCAN(eps=eps, min_samples=min_samples)
    # Fit the DBSCAN model to the data and obtain the cluster labels
    cluster_labels = dbscan.fit_predict(df_t[new_features])
    if len(pd.Series(cluster_labels).unique()) > 1:
        if len(pd.Series(cluster_labels).unique()) < 7:
            name = f"eps_{eps}_mins_{min_samples}"
            df_t[name] = cluster_labels
            names.append(name)
    # Filter out the outliers (-1 label) from the cluster labels
    filtered_labels = cluster_labels[cluster_labels != -1]
    print("Eps:", eps, "Min Samples:", min_samples, "clusters:", len(pd.Series(filtered_labels).unique()))
</code></pre>
<p>Here I am plotting the results for the runs that produced more than 1 and fewer than 7 clusters. As you can see, none of the parameter combinations gave satisfactory clusters that look like the original data.
<a href="https://i.sstatic.net/naYjw.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/naYjw.png" alt="enter image description here" /></a></p>
<p><strong>Q:</strong> is it the code/setup that is making it unable to cluster properly?</p>
<p>Here is the complete code that reproduces the example, for completeness.
The steps are:</p>
<ul>
<li>import the data</li>
<li>scale using minmax</li>
<li>create new features using kernel PCA</li>
<li>run DBSCAN</li>
</ul>
<p>The first 3 steps are just to set up the dataframe to have the correct features.</p>
<pre class="lang-py prettyprint-override"><code>import pandas as pd
import requests
import zipfile
import seaborn as sns, matplotlib.pyplot as plt
from sklearn.preprocessing import MinMaxScaler
from sklearn.manifold import MDS
from sklearn.cluster import DBSCAN
from itertools import combinations
import itertools
%matplotlib inline
#### SETUP TO MAKE THE DATA ####
#### STEP1: IMPORT THE DATA ####
# Specify the URL of the ZIP file
zip_url = 'https://archive.ics.uci.edu/static/public/267/banknote+authentication.zip'
# Download the ZIP file
response = requests.get(zip_url)
# Save the ZIP file locally
zip_path = 'banknote_authentication.zip'
with open(zip_path, 'wb') as f:
    f.write(response.content)
# Extract the contents of the ZIP file
with zipfile.ZipFile(zip_path, 'r') as zip_ref:
    zip_ref.extractall()
# Specify the path to the extracted CSV file
csv_path = 'data_banknote_authentication.txt'
column_names = ['variance', 'skewness', 'curtosis', 'entropy', 'original']
features = ['variance', 'skewness', 'curtosis', 'entropy']
df = pd.read_csv(csv_path, names=column_names)
##### STEP2: SCALE THE DATA ####
mms = MinMaxScaler()
data = df.copy()
for col in features:
    data[col] = mms.fit_transform(data[[col]]).squeeze()
### STEP 3: TRANSFORM KERNEL PCA ####
embedding = MDS(n_components=4,max_iter=300, random_state=10)
X_transformed = embedding.fit_transform(data[features])
new_features = ["1","2", "3", "4"]
df_t=pd.DataFrame(X_transformed , columns=new_features)
df_t['original'] = data["original"]
### SHOW THE DATA
sns.set_context('notebook')
sns.set_style('white')
sns.pairplot(df_t, hue="original")
### CODE FOR MAKING DBSCAN AND PLOTS
#### STEP 4: DBSCAN ####
# Define the parameter combinations to evaluate
eps_values = [0.01, 0.03, 0.05, 0.07, 0.1, 0.15]
min_samples_values = [2, 3, 5, 7, 10, 15]
# Iterate over parameter combinations
names = []
for eps, min_samples in itertools.product(eps_values, min_samples_values):
    # Create a DBSCAN object with the current parameter values
    dbscan = DBSCAN(eps=eps, min_samples=min_samples)
    # Fit the DBSCAN model to the data and obtain the cluster labels
    cluster_labels = dbscan.fit_predict(df_t[new_features])
    if len(pd.Series(cluster_labels).unique()) > 1:
        if len(pd.Series(cluster_labels).unique()) < 7:
            name = f"eps_{eps}_mins_{min_samples}"
            df_t[name] = cluster_labels
            names.append(name)
    # Filter out the outliers (-1 label) from the cluster labels
    filtered_labels = cluster_labels[cluster_labels != -1]
    print("Eps:", eps, "Min Samples:", min_samples, "clusters:", len(pd.Series(filtered_labels).unique()))
###### PLOT THE DBSCAN RESULS ####
df_plot = df_t.melt(id_vars =new_features, value_vars =['original'] + names , var_name = "cluster")
df_plot['value']= df_plot['value'].astype(str)
# Create a 3 by 3 subplot grid
fig, axes = plt.subplots(nrows=3, ncols=4, figsize=(12, 12))
# Flatten the axes for easy iteration
axes = axes.flatten()
# Iterate over each cluster and create scatter plots
for i, cluster in enumerate(df_plot['cluster'].unique()):
    print(i)
    ax = axes[i]  # Select the current subplot
    # Filter data for the current cluster
    subset = df_plot[df_plot['cluster'] == cluster]
    # Create scatter plot
    sns.scatterplot(data=subset, x="2", y="4", hue='value', legend='full', ax=ax)
    # Set subplot title
    ax.set_title(f"Cluster {cluster}", fontsize=12)
    # Set axis labels
    ax.set_xlabel("x")
    ax.set_ylabel("y")
    # Remove x and y ticks
    ax.set_xticks([])
    ax.set_yticks([])
# Adjust spacing between subplots
plt.tight_layout()
plt.show()
</code></pre>
<h2>Answering Gijs Wobben question: what's the point in having MDS with the same number of components as before?</h2>
<p>From a lecture I am following, I was hoping to use MDS with the same number of dimensions to better separate the classes, helping the clustering algorithms. Here you can see how, in this example provided in the lecture, the data is reorganized in a more visually separable manner.
<a href="https://i.sstatic.net/9VSkM.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/9VSkM.png" alt="enter image description here" /></a></p>
| <python><scikit-learn><dbscan> | 2023-07-12 08:34:51 | 1 | 1,168 | Leo |
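As a side note (not from the question): instead of grid-searching `eps` blindly, a common DBSCAN heuristic is the k-distance plot: sort each point's distance to its k-th nearest neighbour and take `eps` near the elbow of that curve. A numpy-only sketch, with random data standing in for the transformed features:

```python
import numpy as np

def k_distances(X, k):
    """Distance from each point in X to its k-th nearest neighbour (brute force)."""
    diffs = X[:, None, :] - X[None, :, :]        # pairwise differences, (n, n, d)
    dists = np.sqrt((diffs ** 2).sum(axis=-1))   # Euclidean distance matrix, (n, n)
    # After sorting each row, column 0 is the distance to the point itself (0),
    # so column k holds the distance to the k-th nearest neighbour.
    return np.sort(dists, axis=1)[:, k]

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4))                    # stand-in for df_t[new_features]
curve = np.sort(k_distances(X, k=5))
# Plotting `curve` and reading off the elbow gives a data-driven eps;
# min_samples is then commonly set to k + 1.
```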
76,668,518 | 4,850,343 | Why does Jupyter %timeit report the mean instead of the best? | <p>I would like to measure the execution time of various code snippets in Python using Jupyter notebooks. Jupyter notebooks offer the <code>%timeit</code> and <code>%%timeit</code> magic to measure the execution time of a cell.</p>
<p>In the Jupyter <a href="https://ipython.readthedocs.io/en/stable/interactive/magics.html" rel="nofollow noreferrer">documentation</a> it states</p>
<blockquote>
<p><code>-r<R>: number of repeats <R>, each consisting of <N> loops, and take the best result. Default: 7</code></p>
</blockquote>
<p>Which would indicate the <em>best-of-N</em> should be reported, which is in line with <a href="https://stackoverflow.com/questions/43939345/should-we-measure-the-average-or-the-minimum-execution-time-of-a-function">general profiling best practices</a>. The <code>timeit.repeat</code> <a href="https://docs.python.org/3/library/timeit.html#timeit.Timer.repeat" rel="nofollow noreferrer">documentation</a> states this explicitly:</p>
<blockquote>
<p>Note: It's tempting to calculate mean and standard deviation from the
result vector and report these. However, this is not very useful. In a
typical case, the lowest value gives a lower bound for how fast your
machine can run the given code snippet; higher values in the result
vector are typically not caused by variability in Python's speed, but
by other processes interfering with your timing accuracy. <strong>So the <code>min()</code>
of the result is probably the only number you should be interested in.</strong>
After that, you should look at the entire vector and apply common
sense rather than statistics.</p>
</blockquote>
<p>However when we execute a cell in Jupyter with the <code>%timeit</code> magic, it reports <code>mean Β± std</code>:</p>
<pre><code>In [1]: %timeit pass
8.26 ns ± 0.12 ns per loop (mean ± std. dev. of 7 runs, 100000000 loops each)

In [2]: u = None

In [3]: %timeit u is None
29.9 ns ± 0.643 ns per loop (mean ± std. dev. of 7 runs, 10000000 loops each)
</code></pre>
<p>There is a significant difference between taking the <code>mean</code> of multiple runs and taking the <code>min</code>:</p>
<pre><code>for approach in approaches:
    function = partial(approach, *data)
    times = timeit.Timer(function).repeat(repeat=number_of_repetitions, number=10)
    approach_times_min[approach].append(min(times))
    approach_times_mean[approach].append(mean(times))
</code></pre>
<p><a href="https://i.sstatic.net/5424h.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/5424h.png" alt="enter image description here" /></a></p>
<p><a href="https://i.sstatic.net/kstCW.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/kstCW.png" alt="enter image description here" /></a></p>
<p>This can lead to wrong conclusions.</p>
<p>Why is the behaviour of <code>%timeit</code> different from what is stated in the documentation? Can I change a setting to make it report the minimum instead?</p>
| <python><performance><jupyter-notebook> | 2023-07-12 08:21:58 | 1 | 17,634 | Sebastian Wozny |
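For reference, the minimum-of-repeats measurement that the quoted documentation recommends can be taken directly with the stdlib, bypassing the magic entirely:

```python
import timeit

# repeat() returns one total time per repetition; per the timeit docs,
# min() is the most robust summary, since system noise only ever adds time.
times = timeit.repeat(stmt="u is None", setup="u = None", repeat=7, number=100_000)
best = min(times) / 100_000  # best time for a single loop, in seconds
print(f"best of 7: {best:.3g} s per loop")
```

In IPython, `result = %timeit -o <stmt>` returns a `TimeitResult` object whose `best` attribute holds the minimum, even though the printed summary shows mean ± std.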
76,668,399 | 238,671 | Get all Sum nodes in a sympy expression with EPath // | <p>I have understood that <a href="https://docs.sympy.org/latest/modules/simplify/simplify.html#sympy.simplify.epathtools.EPath" rel="nofollow noreferrer">EPath</a> is similar to XPath, but I am not sure. I want to match all the <code>Sum</code> nodes in an expression. Consider:</p>
<pre><code>import sympy
x = sympy.IndexedBase('x', real=True)
i = sympy.Symbol('i', integer=True, positive=True, finite=True)
n = sympy.Symbol('n', integer=True, positive=True, finite=True)
expr = sympy.sin(sympy.Sum(2*x[i], (i, 1, n))) ** 2 + sympy.Sum(x[i], (i, 1, n))
</code></pre>
<p>I can match the Sum in the root node with</p>
<pre><code>sympy.EPath(r"/Sum")
</code></pre>
<p>(actually it seems wrong to me, <code>/*/Sum</code> seems more correct). I can match the other one with</p>
<pre><code>sympy.EPath(r"/*/*/Sum")
</code></pre>
<p>but I want all of them. From the <a href="https://www.w3schools.com/xml/xpath_syntax.asp" rel="nofollow noreferrer">documentation</a> of xpath I read</p>
<blockquote>
<p><code>//</code> : Selects nodes in the document from the current node that match the selection no matter where they are</p>
</blockquote>
<p>but it seems not supported by sympy:</p>
<pre><code>sympy.EPath(r"//Sum")
---> ValueError: empty selector
</code></pre>
<p>If useful I have created a workaround looping on the expression tree:</p>
<pre><code>def traverse_apply(expr, atype, func):
    """
    Apply the function `func` to all the elements in the expression `expr` of type `atype`.
    Return a new expression.
    """
    if type(expr) == atype:
        return func(expr)
    if isinstance(expr, sympy.Basic):
        if not expr.is_Atom:
            args, basic = expr.args, True
        else:
            return expr
    elif hasattr(expr, '__iter__'):
        args, basic = expr, False
    else:
        return expr
    args = list(args)
    for i in range(len(args)):
        args[i] = traverse_apply(args[i], atype, func)
    if basic:
        return expr.func(*args)
    else:
        return expr.__class__(args)
</code></pre>
| <python><xpath><sympy> | 2023-07-12 08:08:31 | 1 | 17,832 | Ruggero Turra |
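Not an EPath answer, but for the specific goal of collecting or rewriting every `Sum` node, sympy's own traversal helpers may be simpler than the manual recursion above. A hedged sketch:

```python
import sympy

x = sympy.IndexedBase('x', real=True)
i = sympy.Symbol('i', integer=True, positive=True)
n = sympy.Symbol('n', integer=True, positive=True)
expr = sympy.sin(sympy.Sum(2*x[i], (i, 1, n)))**2 + sympy.Sum(x[i], (i, 1, n))

# atoms() collects every subexpression of the given class, at any depth
sums = expr.atoms(sympy.Sum)
print(len(sums))  # 2

# replace() visits every node, so it can stand in for traverse_apply
doubled = expr.replace(lambda e: isinstance(e, sympy.Sum), lambda e: 2*e)
```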
76,668,206 | 2,443,944 | Python get all final values from nested dictionary | <p>I have a nested dictionary for example:</p>
<p><code>nested_dict = {'a':{1:2, 4:5}, 3:{'b':{'c':'d'}}, 'e':5}</code></p>
<p>I'm trying to find a way to get the leaf values of the nested dictionary. So in the example I shared the final values are <code>[2,5,'d',5]</code></p>
| <python><python-3.x><dictionary> | 2023-07-12 07:43:00 | 4 | 2,227 | piccolo |
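One possible approach (not from the question) is a short recursive generator; it assumes, as in the example, that leaves should come out in insertion order:

```python
def leaves(obj):
    """Yield every non-dict value nested anywhere inside a dictionary."""
    if isinstance(obj, dict):
        for value in obj.values():
            yield from leaves(value)
    else:
        yield obj

nested_dict = {'a': {1: 2, 4: 5}, 3: {'b': {'c': 'd'}}, 'e': 5}
print(list(leaves(nested_dict)))  # [2, 5, 'd', 5]
```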
76,667,907 | 2,515,265 | How to disable a button for an amount of seconds in a Dash client side callback? | <p>In a Dash 2.11 application I need to disable a button for 5 seconds.
Here is the definition of my button:</p>
<pre><code> dbc.Button(
"Expand",
id=expansion_button_id,
className="primary-btn mt-2 btn-small",
n_clicks=0,
disabled=False,
),
</code></pre>
<p>Here is the definition of the client side callback:</p>
<pre><code> self.app.clientside_callback(
    ClientsideFunction(namespace="foo", function_name="toggle_expand_button"),
    Output(self._expand_id, "disabled"),
    [
        Input(self._expand_id, "id"),
        Input(self._expand_id, "n_clicks")
    ],
)
</code></pre>
<p>and this is my JavaScript function:</p>
<pre><code> toggle_expand_button: function(component_id, n_clicks) {
    console.log("component_id: " + component_id)
    console.log("n_clicks: " + n_clicks)
    if (!n_clicks) {
        return false
    }
    if (n_clicks > 0) {
        setTimeout(function() {
            document.getElementById(component_id).disabled = false;
        }, 5000); // 5 seconds
        return true;
    }
    return true
}
</code></pre>
<p>The button works as expected until the 5 seconds are over, but after this point no click action is fired even though the button no longer appears disabled.
What's wrong with my code?</p>
| <javascript><python><asynchronous><plotly-dash> | 2023-07-12 07:04:14 | 1 | 2,657 | Javide |
76,667,903 | 7,964,098 | Pandas does not preserve datetime type when converting pydantic objects to dataframe | <p>I've migrated my code to pydantic V2 and found an unexpected behaviour when working with pydantic objects (timezone-aware timestamps/datetimes) and pandas. I have a list of pydantic objects and I have to convert it to pandas DataFrame. When you run the code below, it should work in both versions of pydantic (v1 and v2). But the final DataFrame in v2 does not preserve the original data type.</p>
<p><strong>Example code:</strong></p>
<pre class="lang-py prettyprint-override"><code>import pandas as pd
from datetime import datetime
from pydantic import BaseModel
class MyModel(BaseModel):
    ts: datetime
m1 = MyModel(ts="2023-07-01T10:00:00Z")
m2 = MyModel(ts="2023-07-02T10:00:00Z")
# note: dict is used here instead of newer model_dump so this example runs in V1 as well
d1 = pd.DataFrame([m1.dict()])
d2 = pd.DataFrame([m2.dict()])
df_concat = pd.concat([d1, d2])
</code></pre>
<p>When you run it with pydantic v1, <code>df_concat</code> preserve the correct datatype (<code>ts datetime64[ns, UTC]</code>), but using v2, the output datatype is just <code>object</code>.</p>
<p>It seems it relates to some change in timezone handling, because in pydantic v1 we get <code>tzinfo=datetime.timezone.utc</code> but in v2 <code>tzinfo=TzInfo(UTC)</code>, and pandas probably cannot handle the latter correctly. Is this a bug (on either side, pydantic or pandas) or should I do something differently?</p>
<p>I could convert all <code>tzinfo</code> to use <code>datetime.timezone.utc</code> (like <code>m1.ts.replace(tzinfo=datetime.timezone.utc)</code>) and then I get the expected behavior (correct data types), but it's not a very convenient way to do it. I would rather find a way to preserve the original behavior from v1.</p>
<p><a href="https://github.com/pydantic/pydantic/issues/6592" rel="nofollow noreferrer">Pydantic issue reference</a></p>
| <python><pandas><datetime><pydantic> | 2023-07-12 07:03:27 | 0 | 4,017 | Nerxis |
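Pending a fix, one hedged pandas-side workaround is to normalize the column after constructing the frame: `pd.to_datetime(..., utc=True)` coerces a column of tz-aware objects, or strings (used here to simulate the failure mode), into a single tz-aware dtype:

```python
import pandas as pd

# Simulate the failure mode: a "ts" column that did not come out tz-typed
df = pd.DataFrame({"ts": ["2023-07-01T10:00:00Z", "2023-07-02T10:00:00Z"]})

# utc=True converts whatever tz-aware values are present into one proper dtype
df["ts"] = pd.to_datetime(df["ts"], utc=True)
print(df["ts"].dtype)  # datetime64[ns, UTC]
```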
76,667,874 | 6,729,010 | How can I install a Python library in ChatGPT Code Interpreter | <p>ChatGPT is the newest platform for running Python in a Jupyter-like environment. However, the installed libraries are limited, and you cannot access the internet either, so I cannot use pip to install.</p>
<p>How can I install a new library?</p>
| <python><jupyter><chat-gpt-4><code-interpreter> | 2023-07-12 06:58:34 | 3 | 41,306 | korakot |
76,667,733 | 1,838,076 | Auto rebin the histogram in plotly express based on active Colors | <p>With <code>plotly.express.histogram</code>, is there a way to force rebinning when I unselect one or more <code>colors (metrics)</code> under comparison?</p>
<p>The following code gives me the images below (but in interactive mode)</p>
<pre class="lang-py prettyprint-override"><code>import plotly.express as px
import pandas as pd
import numpy as np
dfA = pd.DataFrame({'Value': np.random.normal(0, 100, 1000)}).assign(Metric='A')
dfB = pd.DataFrame({'Value': np.random.normal(1000, 500, 1000)}).assign(Metric='B')
px.histogram(pd.concat([dfA, dfB]), color='Metric', barmode='overlay').show()
px.histogram(dfA, color='Metric', barmode='overlay').show()
</code></pre>
<p><a href="https://i.sstatic.net/mIrjY.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/mIrjY.png" alt="dfA & dfB Together" /></a>
<a href="https://i.sstatic.net/ysv1B.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/ysv1B.png" alt="dfA Alone" /></a></p>
<p>However, when I unselect B from the first plot, I get</p>
<p><a href="https://i.sstatic.net/kaZhN.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/kaZhN.png" alt="dfA & dfB but dfB unselected" /></a></p>
<p>This is different from what I get when I plot dfA alone, probably because the number of bins is untouched <strong>while the X (and Y) axes are recalibrated</strong>.</p>
<p>Is there a way to force rebinning based on the current selection?</p>
| <python><plotly><histogram><plotly-express> | 2023-07-12 06:37:15 | 1 | 1,622 | Krishna |
76,667,651 | 19,553,193 | Select the unique value in Django | <p>I have values <code>id, code, name, status, and user_id</code></p>
<pre><code>id  Code         name    status  last added
1   23-07-00001  red     1       2023-07-11 02:48:41.025713
2   23-07-00002  orange  2       2023-07-11 02:48:41.025713
3   23-07-00003  blue    3       2023-07-12 05:18:47.534430
4   23-07-00002  orange  4       2023-07-12 05:24:40.485039
</code></pre>
<p>Now I want output like this; the last-added row for each code should be retained:</p>
<pre><code>1   23-07-00001  red     1       2023-07-11 02:48:41.025713
3   23-07-00003  blue    3       2023-07-12 05:18:47.534430
4   23-07-00002  orange  4       2023-07-12 05:24:40.485039
</code></pre>
<p>I tried <code>distinct</code> but it doesn't work; any solution will be appreciated, whether it is written in raw SQL or in the default Django ORM.</p>
<pre><code>item_data = TevIncoming.objects.filter(status__in=retrieve).select_related().values_list('code', flat=True).distinct().order_by('-incoming_in').reverse()
</code></pre>
| <python><django> | 2023-07-12 06:24:09 | 2 | 335 | marivic valdehueza |
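Not Django-specific, but the deduplication rule itself (keep only the last-added row per code) can be sketched in plain Python; in the ORM the same idea is usually expressed with a subquery over the latest row per code:

```python
rows = [
    {"id": 1, "code": "23-07-00001", "name": "red",    "added": "2023-07-11 02:48:41"},
    {"id": 2, "code": "23-07-00002", "name": "orange", "added": "2023-07-11 02:48:41"},
    {"id": 3, "code": "23-07-00003", "name": "blue",   "added": "2023-07-12 05:18:47"},
    {"id": 4, "code": "23-07-00002", "name": "orange", "added": "2023-07-12 05:24:40"},
]

# Iterating in "added" order and keying by code lets later rows overwrite
# earlier duplicates, so only the last-added row per code survives.
latest = {}
for row in sorted(rows, key=lambda r: r["added"]):
    latest[row["code"]] = row

result = sorted(latest.values(), key=lambda r: r["id"])
print([r["id"] for r in result])  # [1, 3, 4]
```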
76,667,430 | 2,866,298 | strip() on multiple column not working - pandas | <p>I am writing a generic dataframe cleansing function as follows</p>
<pre><code>def cleanse_data(df, cols_to_strip):
    df.replace({'(?=.*)(\s*\[.*\]\s*)':'','\*':'','\+':'',',.*':'','β':''}, inplace=True, regex=True)
    df.columns.str.strip()
    df[cols_to_strip] = df[cols_to_strip].applymap(lambda x: x.strip())
    return df
</code></pre>
<p>The second argument takes the list of columns in the dataframe to strip() (i.e. remove their whitespace). Calling this function:</p>
<pre><code>nhl_df = cleanse_data(nhl_df,['team'])
print(nhl_df[nhl_df['team']=='Jose Sharks']) # doesn't work
print(nhl_df[nhl_df['team'].str.strip()=='Jose Sharks']) #works
</code></pre>
<p>So it seems that for some reason the stripping inside the cleansing function didn't work (though the regex replacement worked fine!). Any reason for this?</p>
| <python><pandas><dataframe> | 2023-07-12 05:43:48 | 1 | 1,906 | osama yaccoub |
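A hedged sketch of a revised function, using a small made-up `nhl_df`: it assigns the stripped column index back (the original's `df.columns.str.strip()` returns a new index without storing it) and uses the vectorized `.str.strip()`. Whether this fixes the observed symptom depends on data the question doesn't show:

```python
import pandas as pd

def cleanse_data(df, cols_to_strip):
    df = df.replace(
        # raw strings avoid invalid-escape warnings; the garbled 'β' entry in
        # the original is assumed to have been a dash character
        {r'(?=.*)(\s*\[.*\]\s*)': '', r'\*': '', r'\+': '', r',.*': '', '\u2013': ''},
        regex=True,
    )
    df.columns = df.columns.str.strip()   # assign the stripped index back
    for col in cols_to_strip:
        df[col] = df[col].str.strip()     # vectorized strip, NaN-safe
    return df

# Hypothetical data for demonstration
nhl_df = pd.DataFrame({' team ': ['  Jose Sharks ', 'Rangers* ']})
nhl_df = cleanse_data(nhl_df, ['team'])
print(nhl_df['team'].tolist())  # ['Jose Sharks', 'Rangers']
```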
76,667,412 | 4,525,932 | stop visual studio's linker from looking for python39_d.lib in debug mode | <p>I don't really know why the linker will sometimes choose to automatically search for libraries suffixed with 'd' or '_d' when I run in debug mode. I am trying out calling Python modules from C++ on Windows. I create a blank C++ project in Visual Studio 2022 and write the following code:</p>
<pre><code>#define PY_SSIZE_T_CLEAN
#include <Python.h>
#include <iostream>
int main()
{
    std::cout << "hello world";
}
</code></pre>
<p>I have Anaconda3 installed on my computer and <code>Python.h</code> lives in <code>C:\ProgramData\Anaconda3\include</code>, so I add it to my include path. The output of my debug build is <code>LNK1104 cannot open file python39_d.lib</code>. Oops! I must have forgotten to point the linker to the static lib from the Python API. But wait! All that I find in <code>C:\ProgramData\Anaconda3\libs</code> is <code>python39.lib</code>. Anaconda never shipped me a <code>python39_d.lib</code> file, which for some reason the linker searches for automatically. The crude workaround is to copy <code>python39.lib</code> and rename the copy so that it matches what the linker expects, but I would rather just link to the non-debug library.</p>
<p>But how? Is it a <code>#pragma</code> inside of <code>Python.h</code> causing this? It's a cruel trick for a python distro to ship an API that can't work without a workaround. Or is it some obscure Visual Studio setting I don't know about?</p>
| <python><c++><visual-studio><linker> | 2023-07-12 05:37:52 | 1 | 1,571 | dmedine |
76,667,317 | 1,563,831 | How to animate a scatter plot with variable number of points? | <p>I'm trying to animate a scatter plot but with a variable number of points at each iteration of my animation.</p>
<p>Animating a scatter plot has been addressed before (e.g., <a href="https://stackoverflow.com/questions/9401658/how-to-animate-a-scatter-plot">here</a> and <a href="https://stackoverflow.com/questions/41602588/how-to-create-3d-scatter-animations">here</a>). However, the number of points is always assumed to be fixed. For example, if <code>Axes3D</code> is used, then <code>axes3d.scatter._offsets3d</code> won't work if the number of points are different in each iteration of <code>FuncAnimation</code>.</p>
<p>How can I animate a scatter plot when each animation iteration contains a different number of points?</p>
| <python><matplotlib><scatter-plot><matplotlib-animation> | 2023-07-12 05:15:28 | 1 | 1,184 | Discombobulous |
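One possible 2-D approach (the 3-D case needs a different trick): `PathCollection.set_offsets` replaces the entire offsets array, so each frame may supply a different number of points. A sketch with hypothetical per-frame data:

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")  # headless backend for this sketch
import matplotlib.pyplot as plt
from matplotlib.animation import FuncAnimation

rng = np.random.default_rng(0)
frames = [rng.random((n, 2)) for n in (3, 7, 5)]  # a different point count per frame

fig, ax = plt.subplots()
ax.set(xlim=(0, 1), ylim=(0, 1))
scat = ax.scatter([], [])

def update(i):
    # set_offsets replaces the whole (N, 2) offsets array, so N may change
    scat.set_offsets(frames[i])
    return (scat,)

anim = FuncAnimation(fig, update, frames=len(frames), blit=True)
```

For `Axes3D` there is no setter that resizes as cleanly, so a common fallback is to call the artist's `remove()` and draw a fresh scatter each frame.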
76,667,099 | 9,582,542 | Inserting cell by cell into a Dataframe of unknown size | <p>The code below works fine. Instead of printing <strong>"d"</strong>, I would like to insert the values into a dataframe. The driver is pulling data out of a table whose size may vary, so I first find the row and column counts (rxc). How can I insert these values into a DataFrame as it loops through the table?</p>
<pre><code>df = pd.DataFrame()

# to identify the table rows
r = driver.find_elements_by_xpath("//*[@id='contentScrollPoint']/div[4]/div[4]/div[3]/div/div")
# to identify table columns
c = driver.find_elements_by_xpath("//*[@id='contentScrollPoint']/div[4]/div[4]/div[3]/div/div/table/tbody/tr[2]/td")
# to get row count with len method
rc = len(r)
# to get column count with len method
cc = len(c)

# to traverse through the table rows excluding headers
for i in range(2, rc + 2 + 2):
    # to traverse through the table columns
    for j in range(1, cc + 1):
        # to get all the cell data with text method
        d = driver.find_element_by_xpath("//*[@id='contentScrollPoint']/div[4]/div[4]/div[3]/div/div/table/tbody/tr[" + str(i) + "]/td[" + str(j) + "]").text
        print(d)  # add to dataframe here
</code></pre>
| <python><pandas><dataframe> | 2023-07-12 04:10:11 | 0 | 690 | Leo Torres |
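Setting the Selenium calls aside, one idiomatic pattern is to collect each row into a plain list and build the DataFrame once at the end, since growing a DataFrame cell by cell copies data on every insert. A sketch with made-up cell text standing in for the `driver.find_element...` calls:

```python
import pandas as pd

# Made-up cell text standing in for driver.find_element_by_xpath(...).text
fake_table = [["a", "b", "c"], ["d", "e", "f"]]

rows = []
for table_row in fake_table:            # outer loop: one iteration per table row
    row = [cell for cell in table_row]  # inner loop: collect the cell texts
    rows.append(row)

# Build the DataFrame once, instead of inserting cell by cell
df = pd.DataFrame(rows, columns=["col1", "col2", "col3"])
print(df.shape)  # (2, 3)
```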
76,667,072 | 6,907,424 | How to use custom SVM detector with cv2.HOGDescriptor()? | <p>I am following <a href="https://pyimagesearch.com/2015/11/16/hog-detectmultiscale-parameters-explained/" rel="nofollow noreferrer">this</a> tutorial and trying to use a custom SVM object detector instead of the <code>cv2.HOGDescriptor_getDefaultPeopleDetector()</code>. Following is the code for how to set an <code>SVM</code> detector:</p>
<pre><code>hog = cv2.HOGDescriptor()
hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())
</code></pre>
<p>So, naturally, I tried to make use of it by putting my own SVM detector there. Following is the relevant code from training:</p>
<pre><code># This problem still happens if I use the default parameters (not passing any parameters)
hog = cv2.HOGDescriptor(_winSize=(64, 64),
                        _blockSize=(16, 16),
                        _blockStride=(2, 2),
                        _cellSize=(8, 8),
                        _nbins=9)
# All the image in positive and negative images are just the cropped bounding boxes
# which are resized to (128, 128)
positive_features = np.array([hog.compute(img) for img in positive_images])
negative_features = np.array([hog.compute(img) for img in negative_images])
feature_matrix = np.concatenate((positive_features, negative_features), axis=0)
labels = np.concatenate((np.zeros(len(positive_features)), np.ones(len(negative_features))), axis=0) # This is counter-intuitive but opencv SVM seems to expect 0 for positive class and 1 for negative (when you wish to use it with cv2.detectMultiScale())
train_matrix = np.concatenate((feature_matrix, np.expand_dims(labels, 0).T), axis = 1)
np.random.seed(int(SEED))
np.random.shuffle(train_matrix)
feature_matrix = train_matrix[:, :-1]
labels = train_matrix[:, -1]
# OpenCV
feature_matrix = feature_matrix.astype(np.float32) # Convert to 32-bit floating-point
labels = labels.astype(np.int32) # Convert labels to 32-bit signed integer
model = cv2.ml.SVM_create()
model.setType(cv2.ml.SVM_C_SVC)
model.setKernel(cv2.ml.SVM_LINEAR)
model.setTermCriteria((cv2.TERM_CRITERIA_MAX_ITER, 100, 1e-6))
model.train(feature_matrix, cv2.ml.ROW_SAMPLE, labels)
model.save("model.svm")
</code></pre>
<p>During prediction:</p>
<pre><code># This problem still happens if I use the default parameters (not passing any parameters)
hog = cv2.HOGDescriptor(_winSize=(64, 64),
                        _blockSize=(16, 16),
                        _blockStride=(2, 2),
                        _cellSize=(8, 8),
                        _nbins=9)
model = cv2.ml.SVM_load("model.svm")
support_vectors = model.getSupportVectors()
coefficients = -model.getDecisionFunction(0)[0]
coefficients = np.array(coefficients).reshape(1, -1)
svmdetector = np.concatenate((support_vectors, coefficients), axis=1)
hog.setSVMDetector(svmdetector.T.flatten())
</code></pre>
<p>But this gives me an exception:</p>
<pre><code> hog.setSVMDetector(svmdetector.T.flatten())
cv2.error: OpenCV(4.7.0) /io/opencv/modules/objdetect/src/hog.cpp:120: error: (-215:Assertion failed) checkDetectorSize() in function 'setSVMDetector'
</code></pre>
<p>I checked the source code of it and found <a href="https://github.com/opencv/opencv/blob/4.x/modules/objdetect/src/hog.cpp#L120" rel="nofollow noreferrer">this</a> line is giving me the error. And this is the definition of the check function:</p>
<pre><code>bool HOGDescriptor::checkDetectorSize() const
{
size_t detectorSize = svmDetector.size(), descriptorSize = getDescriptorSize();
return detectorSize == 0 ||
detectorSize == descriptorSize ||
detectorSize == descriptorSize + 1;
}
</code></pre>
<p>I couldn't understand any obvious reason why the error is happening. I also tried another model from OpenCV and could reproduce the error message. The following are the findings:</p>
<pre><code>hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector()) # Works [shape: (3781,)]
hog.setSVMDetector(cv2.HOGDescriptor_getDaimlerPeopleDetector()) # Doesn't work [shape: (1981,)]
</code></pre>
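For what it's worth, `checkDetectorSize` passes only when the detector length equals `getDescriptorSize()` or `getDescriptorSize() + 1` (the extra slot is the bias/rho term). A small sketch of the size arithmetic (my own helper, not an OpenCV API) shows the expected length for the parameters above:

```python
def hog_descriptor_size(win=(64, 64), block=(16, 16), stride=(2, 2),
                        cell=(8, 8), nbins=9):
    # blocks per window (each axis) * cells per block * bins per cell
    blocks_x = (win[0] - block[0]) // stride[0] + 1
    blocks_y = (win[1] - block[1]) // stride[1] + 1
    cells_per_block = (block[0] // cell[0]) * (block[1] // cell[1])
    return blocks_x * blocks_y * cells_per_block * nbins

print(hog_descriptor_size())  # 22500
```

The default people detector's 3781 coefficients are exactly this formula for a 64x128 window with 8x8 stride, plus one bias term. So if the training features were computed on 128x128 crops with a 64x64 window, `hog.compute` returns descriptors for many window positions and the stacked detector cannot match `getDescriptorSize()`; resizing the training crops to `_winSize` (or setting `_winSize=(128, 128)`) should make the sizes agree.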
| <python><opencv><svm><histogram-of-oriented-gradients> | 2023-07-12 04:02:25 | 1 | 2,916 | hafiz031 |
76,666,989 | 5,246,226 | ImportError: cannot import name x from y | <p>I have a directory tree like so:</p>
<pre><code>- main
- training
- run.py
- utils.py
- __init__.py
</code></pre>
<p><code>utils.py</code> includes the following:</p>
<pre><code>import numpy as np
from ray.rllib.algorithms.callbacks import DefaultCallbacks
# Guide: https://discuss.ray.io/t/log-or-record-custom-env-data-via-rllib/4674/2
class RewardLoggerCallback(DefaultCallbacks):
    def on_episode_start(
        self, *, worker, base_env, policies, episode, env_index, **kwargs
    ):
        episode.user_data = {
            'MainRew': []
        }

    def on_episode_step(
        self, *, worker, base_env, episode, env_index, **kwargs
    ):
        # Running metrics -> keep all values
        # Final metrics -> only keep the current value
        info = episode.last_info_for()
        for k in episode.user_data.keys():
            episode.user_data[k].append(info[k])

    def on_episode_end(
        self, *, worker, base_env, policies, episode, env_index, **kwargs
    ):
        for name, value in episode.user_data.items():
            episode.custom_metrics[name + "_avg"] = np.mean(value)
            episode.custom_metrics[name + "_sum"] = np.sum(value)
            episode.hist_data[name] = value
</code></pre>
<p>However, when I try and add <code>from main.training.utils import RewardLoggerCallback</code>, I get an error of the form <code>ImportError: cannot import name RewardLoggerCallback from main.training.utils</code>. I thought it might be because I didn't add the import in the <code>__init__.py</code> file, but even adding <code>from main.training.utils import RewardLoggerCallback</code> there doesn't help.</p>
<p>Wondering what I might be missing? I have other files in the directory that I've been able to import fine before this way.</p>
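A symptom like this often comes down to Python importing a different `main.training.utils` than the file being edited: a shadowing package elsewhere on `sys.path`, a stale `.pyc`, or running from a directory where `main` is not the package you think. A small diagnostic sketch (`diagnose` is my own helper, not part of any library) prints where the module was actually loaded from:

```python
import importlib

def diagnose(module_name, expected_attr):
    # Show which file the module really comes from and whether it has the
    # attribute -- a quick way to spot a shadowing module or stale file.
    mod = importlib.import_module(module_name)
    origin = getattr(mod, "__file__", "<built-in>")
    has = hasattr(mod, expected_attr)
    print(f"{module_name} -> {origin}; has {expected_attr}: {has}")
    return has

# e.g. diagnose("main.training.utils", "RewardLoggerCallback")
diagnose("json", "dumps")  # sanity check against a stdlib module
```

If the printed `__file__` is not the file you are editing, that is the import being shadowed.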
| <python> | 2023-07-12 03:30:21 | 0 | 759 | Victor M |
76,666,946 | 341,362 | Avoiding repetition in a multi-tiered comparison function | <p>I have a comparison function which looks something like this:</p>
<pre><code>def compare(x, y):
    current = 0

    tmp = complicated_function(x, y)
    if tmp > 0:
        if current < 0:
            return 0  # incomparable
        current = 1
    elif tmp < 0:
        if current > 0:
            return 0  # incomparable
        current = -1

    tmp = complicated_function_2(x, y)
    if tmp > 0:
        if current < 0:
            return 0  # incomparable
        current = 1
    elif tmp < 0:
        if current > 0:
            return 0  # incomparable
        current = -1

    # ...
    return current
</code></pre>
<p>Basically, it does a lot of comparisons. If they are all compatible (a mix of 0s and 1s, a mix of 0s and -1s, or all 0s) then the result is returned; otherwise end early with a value of 0.</p>
<p>What's the most Pythonic method for implementing this? I'd love to do</p>
<pre><code>def compare(x, y):
    current = 0

    def inner_compare(z):
        if z > 0:
            if current == -1:
                return 0  # <------- fails here
            current = 1
        if z < 0:
            if current == 1:
                return 0  # <------- fails here
            current = -1

    inner_compare(complicated_function(x, y))
    inner_compare(complicated_function_2(x, y))
    # ...
</code></pre>
<p>but of course that won't work since the inner function can't return the value to the outer function. I could use exceptions or flags but that just looks terrible. Since the structure is the same in each case I'd like to avoid repeating myself each time, any suggestions?</p>
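One pattern that keeps the inner helper is `nonlocal` plus a boolean return, so the outer loop decides when to bail out. A sketch (the comparison callables here are stand-ins passed as a list, since `complicated_function` isn't available):

```python
def compare(x, y, comparisons):
    current = 0

    def merge(tmp):
        nonlocal current
        sign = (tmp > 0) - (tmp < 0)
        if sign and current == -sign:
            return False          # conflicting directions: incomparable
        current = current or sign
        return True

    for f in comparisons:
        if not merge(f(x, y)):
            return 0
    return current

print(compare(3, 1, [lambda a, b: a - b, lambda a, b: a * a - b * b]))  # 1
```

Raising a private `Incomparable` exception from the helper is the other common route; the boolean return just avoids using exceptions for control flow.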
| <python><python-3.x><comparison><dry> | 2023-07-12 03:19:19 | 4 | 11,589 | Charles |
76,666,937 | 21,107,707 | How to mask out everything but main leaf using opencv? | <p>I've got an image of a leaf:</p>
<p><a href="https://i.sstatic.net/7nCj2.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/7nCj2.jpg" alt="enter image description here" /></a></p>
<p>and I need to make everything except for the leaf in the middle black. How can I do this using opencv in python? A couple things:</p>
<ul>
<li>The images have really inconsistent backgrounds, so we cannot rely on masking by color or anything like that.</li>
<li>Finding contours may end up with contours merged with the background (unless done really carefully)</li>
<li>I think a possibility would be to do something with edges found by <code>cv2.Canny</code> (just a thought)</li>
</ul>
<p>Any help?</p>
| <python><opencv><deep-learning><computer-vision><semantic-segmentation> | 2023-07-12 03:17:10 | 0 | 801 | vs07 |
76,666,676 | 839,733 | Time complexity of list slicing and f-string in a loop | <p>Given a string <code>word</code>, I'm trying to estimate the Big-O time complexity of the following function.</p>
<pre><code>from typing import Iterator

def wildcards(word: str) -> Iterator[str]:
    for j in range(len(word)):
        yield f"{word[:j]}_{word[j + 1:]}"
</code></pre>
<p>I've read <a href="https://stackoverflow.com/q/34008010/839733">Is the time-complexity of iterative string append actually O(n^2), or O(n)?</a> and <a href="https://stackoverflow.com/q/37133547/839733">Time complexity of string concatenation in Python</a>, both of which are concerned with repeated string concatenation, which isn't happening here.</p>
<p><a href="https://stackoverflow.com/a/13203617/839733">This</a> answer says slicing is linear in size of the slice, so, two slices takes <code>O(n)</code> time in total. But what about the f-string?</p>
<p>Overall, it looks to me that <code>wildcards</code> is O(n<sup>2</sup>), is that right?</p>
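Yes. Each iteration copies O(n) characters: the two slices are linear in the slice length, and the f-string builds one new string from its parts (no repeated-append trap, but still O(n) per item). Consuming the whole generator is therefore O(n²) overall. A rough empirical check (my own timing sketch, list-based so it can be timed):

```python
import timeit

def wildcards_list(word):
    return [f"{word[:j]}_{word[j + 1:]}" for j in range(len(word))]

# Doubling n should roughly quadruple the time if the work is O(n^2).
for n in (1_000, 2_000, 4_000):
    t = timeit.timeit(lambda: wildcards_list("a" * n), number=5)
    print(n, round(t, 4))
```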
| <python><string><list><time-complexity><big-o> | 2023-07-12 01:43:18 | 1 | 25,239 | Abhijit Sarkar |
76,666,603 | 3,904,031 | Convert unicode series from a webpage to Chinese characters with Python | <p>This is my first experience with unicode, and also with escaping, and I'm in over my head. The source is a website's pull-down menu and I want to generate a text list of all the items using Python.</p>
<p>From <code>&#x65B0;&#x5317;&#x5E02</code> I understand that I need to make something that looks like <code>u'\u65B0\u5317\u5E02'</code> in order to see ζ°εεΈ when I print it.</p>
<p>However <code>''.join([s.replace('&#x', '\u') for s in ''.split(';')])</code> fails:</p>
<p><code>SyntaxError: (unicode error) 'unicodeescape' codec can't decode bytes in position 0-1: truncated \uXXXX escape</code></p>
<p>and <code>''.join([s.replace('&#x', '\\u') for s in '&#x65B0;&#x5317;&#x5E02'.split(';')])</code> (double backslash) gives me <code>'\\u65B0\\u5317\\u5E02'</code></p>
<p><strong>Question:</strong> What expression for <code>mystring</code> will make <code>print(mystring)</code> show 'ζ°εεΈ'?</p>
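These are HTML numeric character references, so the stdlib can decode them directly; `html.unescape` follows HTML5 rules and even tolerates a missing trailing semicolon. A sketch, with a by-hand equivalent for comparison:

```python
import html

mystring = html.unescape("&#x65B0;&#x5317;&#x5E02;")
print(mystring)  # ζ°εεΈ

# Or by hand, parsing each hex code point:
by_hand = "".join(
    chr(int(part, 16))
    for part in "&#x65B0;&#x5317;&#x5E02;".replace("&#x", " ").split(";")
    if part.strip()
)
```

No `\u` escape manipulation is needed: escapes like `\u65B0` only exist in source-code literals, not in runtime strings.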
| <python><python-3.x><unicode> | 2023-07-12 01:20:34 | 3 | 3,835 | uhoh |
76,666,488 | 22,212,435 | Everything is pixelated in python tkinter | <p>In tkinter the circles and lines I make look really rough and pixelated.</p>
<p>For example these are my circles in tkinter:
<a href="https://i.sstatic.net/e6J9c.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/e6J9c.png" alt="" /></a></p>
<p>I would like them to look more smooth like this:
<a href="https://i.sstatic.net/tKhcJ.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/tKhcJ.png" alt="enter image description here" /></a></p>
<p>P.S. Sorry if it is a stupid or obvious question but I did not see something similar asked.</p>
<p>P.P.S sorry for my bad english</p>
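Tk's canvas has no antialiasing, which is why circles look jagged. A common workaround, assuming Pillow is available, is to draw supersampled off-screen and downscale before showing the result on the canvas:

```python
from PIL import Image, ImageDraw  # Pillow

SCALE = 4  # supersampling factor
big = Image.new("RGB", (200 * SCALE, 200 * SCALE), "white")
draw = ImageDraw.Draw(big)
draw.ellipse([20 * SCALE, 20 * SCALE, 180 * SCALE, 180 * SCALE],
             outline="black", width=2 * SCALE)
smooth = big.resize((200, 200), Image.LANCZOS)  # downsample = antialias

# Then show it on the canvas (needs a running Tk window):
# from PIL import ImageTk
# photo = ImageTk.PhotoImage(smooth)
# canvas.create_image(0, 0, image=photo, anchor="nw")
```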
| <python><tkinter> | 2023-07-12 00:24:53 | 1 | 610 | Danya K |
76,666,331 | 7,846,884 | Snakemake concatenate multiple files | <p>I want to concatenate individual chromosome files (labelled chromosome 1, X and M) for each sample (samples 1-4). Each sample has 3 chromosome files.</p>
<p>Here are the final files that I want to create:</p>
<pre><code>result/sample1_merged_chr1XM.txt
result/sample2_merged_chr1XM.txt
result/sample3_merged_chr1XM.txt
result/sample4_merged_chr1XM.txt
</code></pre>
<p>using the command</p>
<pre><code>cat dirToFiles/sample1_chr_1_data.txt dirToFiles/sample1_chr_X_data.txt dirToFiles/sample1_chr_M_data.txt > result/sample1_merged.txt
cat dirToFiles/sample1_chr_2_data.txt dirToFiles/sample2_chr_X_data.txt dirToFiles/sample2_chr_M_data.txt > result/sample2_merged.txt
cat dirToFiles/sample1_chr_3_data.txt dirToFiles/sample3_chr_X_data.txt dirToFiles/sample3_chr_M_data.txt > result/sample3_merged.txt
cat dirToFiles/sample1_chr_4_data.txt dirToFiles/sample4_chr_X_data.txt dirToFiles/sample4_chr_M_data.txt > result/sample4_merged.txt
</code></pre>
<p>Here is reproducible code to create the files, followed by my Snakemake rules to merge them afterwards:</p>
<pre><code>#######################################
#Step 1: create multiple files to merge
######################################
import subprocess

subprocess.run(['mkdir', '-p', 'dirToFiles'])

SAMPLES = ["sample1", "sample2", "sample3", "sample4"]
CHROMS = ["1", "X", "M"]

for sample in SAMPLES:
    for chrom in CHROMS:
        subprocess.run(['touch', f'dirToFiles/{sample}_chr_{chrom}_data.txt'])
        # subprocess.run(['echo', sample, '>>', f'dirToFiles/{sample}_chr_{chrom}_result.txt'])  # this doesn't work

for sample in SAMPLES:
    for chrom in CHROMS:
        with open(f'dirToFiles/{sample}_chr_{chrom}_data.txt', "w") as f:
            f.write(f'dirToFiles/{sample}_chr_{chrom}_data.txt')

# expand("dirToFiles/{sample}_chr_{chrom}_data.txt", sample=SAMPLES, chrom=CHROMS),

#################################
# Step 2: snakemake rules
#################################
SAMPLES = ["sample1", "sample2", "sample3", "sample4"]
CHROMS = ["1", "X", "M"]

rule all:
    input: expand("result/{sample}_merged_chr1XM.txt", sample=SAMPLES)

rule combineFiles:
    input:
        expand("dirToFiles/{sample}_chr_{chrom}_data.txt", chrom=CHROMS, partial=True)
    output:
        "result/{sample}_merged_chr1XM.txt"
    log:
        "logs/{sample}_merged_chr1XM.txt"
    shell:
        """
        cat {input} > {output} &> {log}
        """
</code></pre>
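For the record, `partial=True` is not an argument that snakemake's `expand` accepts; the usual way to expand only `chrom` while keeping `{sample}` as a wildcard is `allow_missing=True` or doubled braces. Also, `> {output} &> {log}` makes the two redirections fight over stdout; `2> {log}` leaves stdout for the data. An untested sketch of the rule along those lines:

```
rule combineFiles:
    input:
        expand("dirToFiles/{{sample}}_chr_{chrom}_data.txt", chrom=CHROMS)
    output:
        "result/{sample}_merged_chr1XM.txt"
    log:
        "logs/{sample}_merged_chr1XM.log"
    shell:
        "cat {input} > {output} 2> {log}"
```

The doubled `{{sample}}` keeps the wildcard literal through `expand`, so each sample's rule instance concatenates exactly its own three chromosome files.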
| <python><bash><snakemake> | 2023-07-11 23:26:28 | 1 | 473 | sahuno |
76,666,225 | 11,672,860 | Combining 2 columns in a multi-index multi-column dataframe | <p>I have the following DataFrame. How can I add A and B to create a new column called "A+B_new", and keep C as a new column called "C_new"?
I am using Python 3.10/3.11 and JupyterLab.</p>
<p>Starting DataFrame:</p>
<pre><code>import pandas as pd
data = {('2022/10', 'A'): {'ABC': 7, 'CDE': 4, 'FGH': 7},
('2022/10', 'B'): {'ABC': 3, 'CDE': 3, 'FGH': 6},
('2022/10', 'C'): {'ABC': 6, 'CDE': 4, 'FGH': 5},
('2022/11', 'A'): {'ABC': 2, 'CDE': 5, 'FGH': 7},
('2022/11', 'B'): {'ABC': 5, 'CDE': 8, 'FGH': 4},
('2022/11', 'C'): {'ABC': 9, 'CDE': 3, 'FGH': 3},
('2022/12', 'A'): {'ABC': 2, 'CDE': 4, 'FGH': 5},
('2022/12', 'B'): {'ABC': 6, 'CDE': 7, 'FGH': 4},
('2022/12', 'C'): {'ABC': 3, 'CDE': 8, 'FGH': 5 }}
df = pd.DataFrame(data, index=['ABC','CDE','FGH'])
df
</code></pre>
<p>And the Resulting DataFrame should be:</p>
<pre><code>data2 = {('2022/10', 'A+B_new'): {'ABC': 10, 'CDE': 7, 'FGH': 13},
('2022/10', 'C_new'): {'ABC': 6, 'CDE': 4, 'FGH': 5},
('2022/11', 'A+B_new'): {'ABC': 7, 'CDE': 13, 'FGH': 11},
('2022/11', 'C_new'): {'ABC': 9, 'CDE': 3, 'FGH': 3},
('2022/12', 'A+B_new'): {'ABC': 8, 'CDE': 11, 'FGH': 9},
('2022/12', 'C_new'): {'ABC': 3, 'CDE': 8, 'FGH': 5}}
df_result = pd.DataFrame(data2, index=['ABC','CDE','FGH'])
df_result
</code></pre>
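One approach is to select each second-level column with `xs`, add, and reassemble the two-level columns with `concat`. A sketch (data trimmed to two months for brevity):

```python
import pandas as pd

data = {("2022/10", "A"): {"ABC": 7, "CDE": 4, "FGH": 7},
        ("2022/10", "B"): {"ABC": 3, "CDE": 3, "FGH": 6},
        ("2022/10", "C"): {"ABC": 6, "CDE": 4, "FGH": 5},
        ("2022/11", "A"): {"ABC": 2, "CDE": 5, "FGH": 7},
        ("2022/11", "B"): {"ABC": 5, "CDE": 8, "FGH": 4},
        ("2022/11", "C"): {"ABC": 9, "CDE": 3, "FGH": 3}}
df = pd.DataFrame(data)

# Add A and B per month, keep C, and rebuild the (month, name) columns.
a_plus_b = df.xs("A", axis=1, level=1) + df.xs("B", axis=1, level=1)
c_new = df.xs("C", axis=1, level=1)
result = (pd.concat({"A+B_new": a_plus_b, "C_new": c_new}, axis=1)
            .swaplevel(0, 1, axis=1)
            .sort_index(axis=1))
print(result)
```

`xs(..., level=1)` drops the second column level, so the addition aligns month-by-month automatically; `swaplevel`/`sort_index` restore the month-first column order.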
| <python><dataframe><multi-index> | 2023-07-11 22:50:35 | 1 | 677 | John Doe |
76,666,215 | 7,599,215 | How make pdfplumber treat right vertical edge of a page as a table vertical line? | <p>How can I make pdfplumber treat the right vertical edge of a page as a table vertical line?</p>
<p>I have a PDF with a cropped right edge, and the crop took away the rightmost vertical line of the table.</p>
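pdfplumber's `table_settings` accepts `explicit_vertical_lines`, which can be combined with any vertical strategy, so you can append the page's right edge (`page.bbox[2]`) yourself. A sketch (`table_settings_with_right_edge` is my helper, and the fake page object only stands in for a real pdfplumber page so the snippet runs without a PDF):

```python
def table_settings_with_right_edge(page):
    # Treat the page's right edge as one extra vertical table line,
    # on top of whatever lines pdfplumber finds itself.
    return {
        "vertical_strategy": "lines",
        "explicit_vertical_lines": [page.bbox[2]],
    }

class FakePage:            # stand-in so the sketch runs without a PDF
    bbox = (0, 0, 612, 792)

settings = table_settings_with_right_edge(FakePage())
print(settings["explicit_vertical_lines"])  # [612]

# Real usage (hypothetical):
# with pdfplumber.open("cropped.pdf") as pdf:
#     page = pdf.pages[0]
#     table = page.extract_table(table_settings_with_right_edge(page))
```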
| <python><pdfplumber> | 2023-07-11 22:48:25 | 1 | 2,563 | banderlog013 |
76,665,893 | 5,806,473 | Insert custom boolean values in JSON python | <p>I am creating a JSON using Python. It looks something like this:</p>
<pre><code>[
...
{"name":"John","is_active":true}
...
]
</code></pre>
<p>This is like a configuration JSON that I will be passing to a config file.
How can I insert <strong>true</strong> as a value?</p>
<p>I do not want 'True' or <strong>True</strong></p>
<p>I want to insert lowercase <strong>true</strong> as a value.</p>
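In Python you store the bool `True`; `json.dumps` serializes it as lowercase `true`, so no string tricks are needed:

```python
import json

config = [{"name": "John", "is_active": True}]   # Python bool, capital T
text = json.dumps(config)
print(text)  # [{"name": "John", "is_active": true}]
```

The conversion happens only at serialization time; in memory the value stays a Python `bool`.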
| <python><json><boolean> | 2023-07-11 21:32:14 | 2 | 384 | Sumukh Bhandarkar |
76,665,833 | 4,075,155 | Assign scores to sentences for their 'quality' | <p>I have scraped a lot of pages from a specific domain and I would like to identify which sentences from the text of these pages are more useful in terms of the information they carry. Is there an NLP technique to do this? An example would be:</p>
<pre><code>sent0 = "The cat is white"
sent1 = "Cat"
sent2 = "The reason why the cat is white is due to a certain type of pigmentation its fur contains"
</code></pre>
<p>Where the scores would be in descending order: sent2, sent0, sent1.</p>
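As a crude baseline before reaching for heavier NLP, counting distinct content-ish tokens already ranks the three examples the desired way (the >2-character cutoff is an arbitrary stand-in for stopword removal, and `info_score` is my own sketch):

```python
import re

def info_score(sentence):
    # distinct word tokens longer than 2 characters as a rough
    # proxy for how much information a sentence carries
    tokens = re.findall(r"[a-z]+", sentence.lower())
    return len({t for t in tokens if len(t) > 2})

sents = ["The cat is white", "Cat",
         "The reason why the cat is white is due to a certain type "
         "of pigmentation its fur contains"]
scores = [info_score(s) for s in sents]
print(scores)  # [3, 1, 12]
```

For something less crude, common next steps are length-normalized language-model perplexity or sentence-embedding centrality within the document.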
| <python><nlp><sentence> | 2023-07-11 21:19:45 | 1 | 2,380 | Lucas Azevedo |
76,665,696 | 5,431,734 | image labelling and anisotropy | <p>I have a 3d image which I want to label. Typically I use <code>label</code> from <code>scipy.ndimage</code>. I want to ask how you handle anisotropy: the z-dimension is cut more coarsely than x and y.</p>
<p>My structuring element is like a ball:</p>
<pre><code>from scipy.ndimage import label, generate_binary_structure
s = generate_binary_structure(3, 1)
labels, _ = label(img, structure=s)
</code></pre>
<p>I am looking at the plane above and below (apart from the current) to check for connecting elements</p>
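One way to encode the anisotropy is a custom structuring element: full 8-connectivity within each z-slice, but across slices only voxels directly above/below connect (no 3D diagonals, since diagonal neighbours across a coarse z-gap are physically far apart). A sketch:

```python
import numpy as np
from scipy.ndimage import label, generate_binary_structure

# Anisotropic connectivity: 8-connected in-plane, face-contact along z only.
s = np.zeros((3, 3, 3), dtype=bool)
s[1] = generate_binary_structure(2, 2)   # full in-plane neighbourhood
s[0, 1, 1] = s[2, 1, 1] = True           # direct z neighbours only

img = np.zeros((2, 3, 3), dtype=int)
img[0, 0, 0] = 1
img[1, 0, 0] = 1          # stacked directly along z -> same object
labels, n = label(img, structure=s)
print(n)  # 1
```

The alternative, when connectivity alone is not enough, is to resample the volume to (near-)isotropic voxels first and then label with an ordinary ball-like structure.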
| <python><image-processing><scikit-image><connected-components><scipy.ndimage> | 2023-07-11 20:53:36 | 0 | 3,725 | Aenaon |
76,665,625 | 4,019,495 | VS Code: python debugger unable to evaluate `[test for test in range(10) if x]` | <p>EDIT: I tested a few more things. It looks like this error only occurs with my company's internal python executable, not <code>/bin/python</code> or an anaconda install. Would like to vote to close this.</p>
<p>If I break into the VS code python debugger (this is debugpy v1.6.6) and in the debug console run:</p>
<pre><code>x = 1
[test for test in range(10) if x]
</code></pre>
<p>On the second line I get</p>
<pre><code>Traceback (most recent call last):
File "<string>", line 1, in <module>
File "<string>", line 1, in <listcomp>
NameError: name 'x' is not defined
</code></pre>
<p>Any ideas why this might be happening? Possibly related posts:</p>
<ul>
<li><a href="https://github.com/Microsoft/vscode-python/issues/2110" rel="nofollow noreferrer">https://github.com/Microsoft/vscode-python/issues/2110</a></li>
<li><a href="https://stackoverflow.com/questions/24149168/list-comprehension-scope-error-from-python-debugger">List comprehension scope error from Python debugger</a></li>
</ul>
<p>The first link suggests the issue was resolved in ~2018, however I am experiencing it today on my work machine with version 2023.2 of the VS code python extension. Any ideas?</p>
<p>Info:</p>
<ul>
<li>VS code version: 1.79.2</li>
<li>debugpy version: 1.6.6</li>
<li>python: 3.8</li>
</ul>
<p>Screenshot:</p>
<p><a href="https://i.sstatic.net/qYYeQ.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/qYYeQ.png" alt="enter image description here" /></a></p>
| <python><visual-studio-code><vscode-debugger> | 2023-07-11 20:42:02 | 0 | 835 | extremeaxe5 |
76,665,611 | 1,815,054 | Python - Loading an R-Tree from file | <p>I was able to successfully build an R-Tree spatial index and save it to a file.</p>
<pre><code>from rtree import index
idx = index.Index("gridIndex")
idx.insert(..., ...)
idx.insert(..., ...)
and so on...
</code></pre>
<p>Now I can load the saved index and use it:</p>
<pre><code>idx = index.Index('gridIndex')
hits = idx.intersection([x, y], objects=True)
</code></pre>
<p>This works only if my index is in the same directory as my python file.</p>
<p>In my case, I'm running the main python file from a different directory:</p>
<pre><code>python path/to/my_file/main.py
</code></pre>
<p>Now the index is not loaded, rather it creates an empty index in my working directory.</p>
<p>So how can I load my previously created index and specify its path?</p>
<p>I've tried to load it this way:</p>
<pre><code>idx = index.Index(os.path.join(sys.path[0], 'gridIndex')) # doesn't work
</code></pre>
<p>And also this way:</p>
<pre><code>idx = index.Index(os.path.join(sys.path[0], 'gridIndex.dat'))
</code></pre>
<p>Which creates a new empty index - two new files in the desired directory: gridIndex.dat.dat and gridIndex.dat.idx</p>
<p>In the documentation I found two methods - dumps() and loads(), but couldn't get it to work.</p>
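rtree resolves a relative basename against the current working directory, and it appends `.dat`/`.idx` itself, which is why passing `'gridIndex.dat'` produced `gridIndex.dat.dat`. So the fix is to pass an absolute basename, without any extension, built from the script's own location rather than the cwd. A sketch (`index_basename` is my helper; the rtree call is commented out):

```python
from pathlib import Path

def index_basename(filename="gridIndex", base_dir=None):
    # Absolute base path next to this script, independent of the cwd.
    # Pass the name *without* extension: rtree adds .dat/.idx itself.
    base = Path(base_dir) if base_dir else Path(__file__).resolve().parent
    return str(base / filename)

# idx = index.Index(index_basename())          # hypothetical rtree usage
# hits = idx.intersection([x, y], objects=True)
print(index_basename(base_dir="/data"))
```

If the printed path points at the directory containing `gridIndex.dat` and `gridIndex.idx`, rtree should open the existing index instead of creating an empty one.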
| <python><r-tree> | 2023-07-11 20:38:36 | 1 | 3,508 | Dusan |
76,665,482 | 3,684,433 | Why is pyinstaller sensitive to working directory? | <p>I'm building an application with PyInstaller, and I get different behaviors depending on the directory from which I run the application. For example:</p>
<pre><code>user:~$ ./project/dist/runmyapp
<fails>
user:~$ /home/user/project/dist/runmyapp
<fails>
user:~$ cd project
user:~/project$ ./dist/runmyapp
<runs correctly>
user:~/project$ /home/user/project/dist/runmyapp
<runs correctly>
</code></pre>
<p>When it fails, I get a bizarre unicode error:</p>
<pre><code>Traceback (most recent call last):
File "tokenize.py", line 330, in find_cookie
UnicodeDecodeError: 'utf-8' codec can't decode byte 0xa8 in position 128: invalid start byte
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "runmyapp", line 3, in <module>
File "<frozen importlib._bootstrap>", line 991, in _find_and_load
File "<frozen importlib._bootstrap>", line 975, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 671, in _load_unlocked
File "PyInstaller/loader/pyimod02_importers.py", line 385, in exec_module
<lots of stack trace>
File "traceback.py", line 197, in format_stack
File "traceback.py", line 211, in extract_stack
File "traceback.py", line 366, in extract
File "traceback.py", line 288, in line
File "linecache.py", line 16, in getline
File "linecache.py", line 47, in getlines
File "linecache.py", line 136, in updatecache
File "tokenize.py", line 394, in open
File "tokenize.py", line 371, in detect_encoding
File "tokenize.py", line 335, in find_cookie
SyntaxError: invalid or missing encoding declaration for '/home/user/project/dist/runmyapp'
[7407] Failed to execute script 'runmyapp' due to unhandled exception!
</code></pre>
<p>If you look at the bytes of the actual executable (./dist/runmyapp, produced by pyinstaller), it is true that byte 128 is 0xa8:</p>
<pre><code>00000080: a802 0000 0000 0000 a802 4000 0000 0000 ..........@.....
^^
</code></pre>
<p>But that's going to be the case no matter what the working directory is when it gets run. And also, it's an executable, so the python interpreter shouldn't be trying to read it like a script file (my understanding is that the executable is pyinstaller's "bootloader").</p>
<p>That leads me to believe that I'm running the executable, and it is trying to read a file at a path relative to my current working directory, and that just happens to be the executable itself. That's strange, since my own code would never reference an executable that doesn't exist yet (i.e., before pyinstaller builds it), and also, the target path should change as I move directories, but it seems to fail in all but one location.</p>
<p>There are plenty of mentions on SO of <code>UnicodeDecodeError</code> related to pyinstaller, but it seems to be a different issue. I'm stuck, since I don't get enough information from the stack trace about why it's trying to read <code>runmyapp</code> like it's a script.</p>
<p>Has anyone seen similar behavior? Any insights? Thanks!</p>
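That reading matches how `linecache` works: traceback formatting looks source files up by each frame's `co_filename`, and a bare relative name is resolved against the current working directory. In a frozen app the frame's filename is just the script name (`runmyapp`), so when the cwd happens to contain a file of that name (the executable itself), linecache tries to read the binary as Python source; in Python 3.8, `linecache.updatecache` appears to swallow only `OSError`, so the `SyntaxError` from `tokenize` escapes, matching the traceback above. A small stdlib demo of the cwd-relative lookup:

```python
import linecache
import os
import tempfile

with tempfile.TemporaryDirectory() as d:
    with open(os.path.join(d, "runmyapp"), "w") as f:
        f.write("pretend source line\n")
    old = os.getcwd()
    os.chdir(d)
    try:
        # a bare filename is resolved relative to the cwd
        line = linecache.getline("runmyapp", 1)
    finally:
        os.chdir(old)
        linecache.clearcache()

print(repr(line))  # 'pretend source line\n'
```

Run from any other directory, the same lookup finds nothing (or a different file), which is why behaviour changes with the working directory.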
<h2>Edit 07/12/2023</h2>
<p>Here is a minimal reproducible example. I am using python 3.8.10 and pyinstaller 5.13.0.</p>
<p>File tree:</p>
<pre><code>/
/pyitest/
/pyitest/__init__.py
/pyitest/__main__.py
/runtest
/setup.py
</code></pre>
<p>__main__.py:</p>
<pre class="lang-py prettyprint-override"><code>import traceback

def main():
    traceback.print_stack()

if __name__ == "__main__":
    main()
</code></pre>
<p>runtest:</p>
<pre class="lang-py prettyprint-override"><code>#!/usr/bin/env python3
from pyitest.__main__ import main
main()
</code></pre>
<p>setup.py</p>
<pre class="lang-py prettyprint-override"><code>from setuptools import setup, find_packages

package_name = 'pyitest'

setup(name=package_name,
      packages=find_packages(exclude=['tests']),
      include_package_data=True,
      setup_requires='wheel',
      entry_points={
          'console_scripts': [
              'runtest = pyitest.__main__:main'
          ]
      })
</code></pre>
<p>To build:</p>
<pre class="lang-bash prettyprint-override"><code>$ mktemp -d ./pyibuild.XXXXXX
./pyibuild.3TbKYF
$ cp runtest ./pyibuild.3TbKYF/
$ python setup.py build
$ pyi-makespec --paths build/lib ./pyibuild.3TbKYF/runtest
$ pyinstaller runtest.spec
</code></pre>
<p>To run:</p>
<pre class="lang-bash prettyprint-override"><code>$ ./dist/runtest/runtest    # works fine
$ cd ..
$ <project_root>/dist/runtest/runtest
Traceback (most recent call last):
File "tokenize.py", line 330, in find_cookie
UnicodeDecodeError: 'utf-8' codec can't decode byte 0xc0 in position 40: invalid start byte
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "runtest", line 6, in <module>
File "pyitest/__main__.py", line 5, in main
File "traceback.py", line 190, in print_stack
File "traceback.py", line 211, in extract_stack
File "traceback.py", line 366, in extract
File "traceback.py", line 288, in line
File "linecache.py", line 16, in getline
File "linecache.py", line 47, in getlines
File "linecache.py", line 136, in updatecache
File "tokenize.py", line 394, in open
File "tokenize.py", line 371, in detect_encoding
File "tokenize.py", line 335, in find_cookie
SyntaxError: invalid or missing encoding declaration for '<project_root>/dist/runtest/runtest'
[44713] Failed to execute script 'runtest' due to unhandled exception!
</code></pre>
| <python><pyinstaller> | 2023-07-11 20:13:42 | 0 | 447 | maldata |
76,665,448 | 4,534,466 | Understanding Data Loading and Manipulation in Azure ML Pipeline with PythonScriptStep | <p>I'm in the process of adapting my codebase for an Azure ML pipeline and need some assistance understanding certain aspects. I am converting one of my steps into a script and executing it through a <a href="https://learn.microsoft.com/en-us/python/api/azureml-pipeline-steps/azureml.pipeline.steps.python_script_step.pythonscriptstep?view=azure-ml-py" rel="nofollow noreferrer">PythonScriptStep</a>, but I have a few questions about how data is transferred and managed within the pipeline.</p>
<p>Here's a brief look at an example similar to my current pipeline step:</p>
<pre><code># Define pipeline step
from azureml.data import OutputFileDatasetConfig
from azureml.pipeline.steps import PythonScriptStep, EstimatorStep
raw_ds = Dataset.get_by_name(ws, 'raw_dataset') # initial data
data_store = ws.get_default_datastore()
prepped_data = OutputFileDatasetConfig('prepped')
step1 = PythonScriptStep(name = 'prepare data',
source_directory = 'scripts',
script_name = 'data_prep.py',
compute_target = 'aml-cluster',
arguments = ['--raw-ds', raw_ds.as_named_input('raw_data'),
'--out_folder', prepped_data])
</code></pre>
<p>and the corresponding <code>data_prep.py</code> script:</p>
<pre><code># data_prep.py script
from azureml.core import Run
import argparse
import os
run = Run.get_context()
parser = argparse.ArgumentParser()
parser.add_argument('--raw-ds', type=str, dest='raw_dataset_id')
parser.add_argument('--out_folder', type=str, dest='folder')
args = parser.parse_args()
output_folder = args.folder
raw_df = run.input_datasets['raw_data'].to_pandas_dataframe()
prepped_df = raw_df[['col1', 'col2', 'col3']]
os.makedirs(output_folder, exist_ok=True)
output_path = os.path.join(output_folder, 'prepped_data.csv')
prepped_df.to_csv(output_path)
</code></pre>
<p>My questions are:</p>
<ol>
<li><p>Why is it necessary to include command line arguments in <code>data_prep.py</code> if I'm already loading the data from the Run context?</p>
</li>
<li><p>Is there an alternative method to load the data, which doesn't involve using the context?</p>
</li>
<li><p>How should I handle data loading if my data is not in a tabular format, for example, if it's a JSON file?</p>
</li>
</ol>
| <python><azure><azure-machine-learning-service> | 2023-07-11 20:08:24 | 1 | 1,530 | JoΓ£o Areias |
76,665,382 | 17,835,656 | how can i open pdf as bytes by pypdfium2 in python | <p>I have a database in which I store PDF files as bytes in a LONGBLOB cell. I want to get a PDF file from the database as bytes and open it directly using pypdfium2.</p>
<p>I could not find a way to do it myself, so I need your help.</p>
<p>I use this method to open a PDF file from a path on my computer:</p>
<pre class="lang-py prettyprint-override"><code>import pypdfium2
file_path = "mm.pdf"
open_the_pdf_file = pypdfium2.PdfDocument(file_path)
determine_the_page = open_the_pdf_file.get_page(0)
</code></pre>
<p>is there any possibility to do it ?</p>
<p>thanks.</p>
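pypdfium2's `PdfDocument` accepts raw bytes or a binary buffer as well as a file path, so the blob can be opened without writing a temporary file. A sketch (the database fetch is hypothetical, and the pypdfium2 import is deferred so the helper can be defined without the library installed):

```python
import io

def open_pdf_from_blob(blob: bytes):
    # pypdfium2's PdfDocument accepts bytes / binary buffers as well as
    # file paths, so the LONGBLOB never has to touch disk.
    import pypdfium2
    return pypdfium2.PdfDocument(io.BytesIO(blob))

# blob = cursor.fetchone()[0]          # hypothetical DB fetch
# pdf = open_pdf_from_blob(blob)
# page = pdf.get_page(0)
```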
| <python><python-3.x><pdf><byte> | 2023-07-11 19:58:09 | 0 | 721 | Mohammed almalki |
76,665,367 | 4,487,457 | Dataflow Template launch failure on pickling | <p>My Dataflow pipeline is as follows:</p>
<pre class="lang-py prettyprint-override"><code>pipeline_options = PipelineOptions(
    pipeline_args, streaming=True, save_main_session=True, sdk_location="container"
)

with Pipeline(options=pipeline_options) as pipeline:
    (
        pipeline
        | f"Read event topic"
        >> io.ReadFromPubSub(topic=input_topic).with_output_types(bytes)
        | "Convert to string" >> beam.Map(lambda msg: msg.decode("utf-8"))
        | f"Transform event"
        >> beam.Map(transform_message, event_name=event_name)
        | f"Write to output topic"
        >> beam.Map(publish_to_output_topic)
    )
</code></pre>
<p>And I'm using <a href="https://cloud.google.com/dataflow/docs/guides/templates/using-flex-templates" rel="nofollow noreferrer">Flex templates</a> to deploy my pipeline. And I built it using the gcloud CLI like so</p>
<pre><code>gcloud dataflow flex-template build gs://mybucket/templates/dataflow-latest.json \
--image "us-docker.pkg.dev/project_id/dataflow/dataflow:latest" \
    --sdk-language "PYTHON"
</code></pre>
<p>and I invoke the job as so</p>
<pre><code> gcloud dataflow flex-template run "test-job" \
--template-file-gcs-location "gs://mybucket/templates/dataflow-latest.json" \
--service-account-email "dataflow@project_id.iam.gserviceaccount.com" \
--staging-location "gs://mybucket/staging/" \
--temp-location "gs://mybucket/temp/" \
--parameters event_name="foobuzz" \
--parameters sdk_container_image="us-docker.pkg.dev/project_id/dataflow/dataflowsdk:latest" \
--region "us-central2"
</code></pre>
<p>However my template is unable to fire up and I keep getting this error</p>
<pre><code>{"severity":"INFO","time":"2023/07/11 17:59:12.176036","line":"python_template.go:64","message":"Using launch args: [/dataflow/template/beam.py --runner=DataflowRunner --region=us-central2 --staging_location=gs://mybucket/staging/ --event_name=foobuzz --project=project_id --job_name=test-job --template_location=gs://my-bucket/staging/template_launches/2023-07-11_10_55_55-17803180896608448008/job_object --service_account_email=dataflow@my_project.iam.gserviceaccount.com --temp_location=gs://mybucket/temp/ --sdk_container_image=us-docker.pkg.dev/project_id/dataflow:latest]"}
{"severity":"INFO","time":"2023/07/11 17:59:27.782216","line":"exec.go:66","message":"β¬ T4: \u003cclass 'apache_beam.transforms.core.CallableWrapperDoFn'\u003e"}
{"severity":"INFO","time":"2023/07/11 17:59:27.782528","line":"exec.go:66","message":"β # T4 [54 B]"}
{"severity":"INFO","time":"2023/07/11 17:59:27.782851","line":"exec.go:66","message":"β¬ D2: \u003cdict object at 0x7fcaee860840\u003e"}
{"severity":"INFO","time":"2023/07/11 17:59:27.783157","line":"exec.go:66","message":"ββ¬ F1: \u003cfunction Map.\u003clocals\u003e.\u003clambda\u003e at 0x7fcaee86e430\u003e"}
{"severity":"INFO","time":"2023/07/11 17:59:27.783359","line":"exec.go:66","message":"βββ¬ F2: \u003cfunction _create_function at 0x7fcb0746e1f0\u003e"}
{"severity":"INFO","time":"2023/07/11 17:59:27.783562","line":"exec.go:66","message":"βββ # F2 [34 B]"}
{"severity":"INFO","time":"2023/07/11 17:59:27.784154","line":"exec.go:66","message":"βββ¬ Co: \u003ccode object \u003clambda\u003e at 0x7fcaf7968df0, file \"/usr/local/lib/python3.8/site-packages/apache_beam/transforms/core.py\", line 1900\u003e"}
{"severity":"INFO","time":"2023/07/11 17:59:27.784366","line":"exec.go:66","message":"ββββ¬ F2: \u003cfunction _create_code at 0x7fcb0746e280\u003e"}
{"severity":"INFO","time":"2023/07/11 17:59:27.784554","line":"exec.go:66","message":"ββββ # F2 [19 B]"}
{"severity":"INFO","time":"2023/07/11 17:59:27.784825","line":"exec.go:66","message":"βββ # Co [156 B]"}
{"severity":"INFO","time":"2023/07/11 17:59:27.785078","line":"exec.go:66","message":"βββ¬ D4: \u003cdict object at 0x7fcaf7949300\u003e"}
{"severity":"INFO","time":"2023/07/11 17:59:27.785230","line":"exec.go:66","message":"βββ # D4 [38 B]"}
{"severity":"INFO","time":"2023/07/11 17:59:27.785515","line":"exec.go:66","message":"βββ¬ Ce2: \u003ccell at 0x7fcaee87a520: function object at 0x7fcaee86e3a0\u003e"}
{"severity":"INFO","time":"2023/07/11 17:59:27.785693","line":"exec.go:66","message":"ββββ¬ F2: \u003cfunction _create_cell at 0x7fcb0746ea60\u003e"}
{"severity":"INFO","time":"2023/07/11 17:59:27.785883","line":"exec.go:66","message":"ββββ # F2 [19 B]"}
{"severity":"INFO","time":"2023/07/11 17:59:27.786047","line":"exec.go:66","message":"βββ # Ce2 [24 B]"}
{"severity":"INFO","time":"2023/07/11 17:59:27.786211","line":"exec.go:66","message":"βββ¬ D2: \u003cdict object at 0x7fcaee860d00\u003e"}
{"severity":"INFO","time":"2023/07/11 17:59:27.786379","line":"exec.go:66","message":"ββββ¬ F1: \u003cfunction run.\u003clocals\u003e.\u003clambda\u003e at 0x7fcaee86e3a0\u003e"}
{"severity":"INFO","time":"2023/07/11 17:59:27.786567","line":"exec.go:66","message":"βββββ¬ Co: \u003ccode object \u003clambda\u003e at 0x7fcb07d62240, file \"/dataflow/template/beam.py\", line 220\u003e"}
{"severity":"INFO","time":"2023/07/11 17:59:27.786850","line":"exec.go:66","message":"βββββ # Co [99 B]"}
{"severity":"INFO","time":"2023/07/11 17:59:27.789579","line":"exec.go:66","message":"βββββ¬ D1: \u003cdict object at 0x7fcb07e86fc0\u003e"}
{"severity":"INFO","time":"2023/07/11 17:59:27.789761","line":"exec.go:66","message":"βββββ # D1 [22 B]"}
{"severity":"INFO","time":"2023/07/11 17:59:27.789954","line":"exec.go:66","message":"βββββ¬ D2: \u003cdict object at 0x7fcaee85b0c0\u003e"}
{"severity":"INFO","time":"2023/07/11 17:59:27.790158","line":"exec.go:66","message":"ββββββ¬ T6: \u003cclass 'apache_beam.typehints.decorators.IOTypeHints'\u003e"}
{"severity":"INFO","time":"2023/07/11 17:59:27.790344","line":"exec.go:66","message":"βββββββ¬ F2: \u003cfunction _create_namedtuple at 0x7fcb074700d0\u003e"}
{"severity":"INFO","time":"2023/07/11 17:59:27.790524","line":"exec.go:66","message":"βββββββ # F2 [25 B]"}
{"severity":"INFO","time":"2023/07/11 17:59:27.790718","line":"exec.go:66","message":"ββββββ # T6 [118 B]"}
{"severity":"INFO","time":"2023/07/11 17:59:27.790898","line":"exec.go:66","message":"βββββ # D2 [143 B]"}
{"severity":"INFO","time":"2023/07/11 17:59:27.791050","line":"exec.go:66","message":"βββββ¬ D2: \u003cdict object at 0x7fcaee866600\u003e"}
{"severity":"INFO","time":"2023/07/11 17:59:27.791219","line":"exec.go:66","message":"ββββββ¬ D2: \u003cdict object at 0x7fcaef60f080\u003e"}
{"severity":"INFO","time":"2023/07/11 17:59:27.791370","line":"exec.go:66","message":"ββββββ # D2 [2 B]"}
{"severity":"INFO","time":"2023/07/11 17:59:27.791529","line":"exec.go:66","message":"βββββ # D2 [63 B]"}
{"severity":"INFO","time":"2023/07/11 17:59:27.791681","line":"exec.go:66","message":"ββββ # F1 [341 B]"}
{"severity":"INFO","time":"2023/07/11 17:59:27.791824","line":"exec.go:66","message":"βββ # D2 [358 B]"}
{"severity":"INFO","time":"2023/07/11 17:59:27.791978","line":"exec.go:66","message":"βββ¬ D2: \u003cdict object at 0x7fcaee86f200\u003e"}
{"severity":"INFO","time":"2023/07/11 17:59:27.792141","line":"exec.go:66","message":"ββββ¬ D2: \u003cdict object at 0x7fcaee800ac0\u003e"}
{"severity":"INFO","time":"2023/07/11 17:59:27.792287","line":"exec.go:66","message":"ββββ # D2 [2 B]"}
{"severity":"INFO","time":"2023/07/11 17:59:27.792453","line":"exec.go:66","message":"βββ # D2 [34 B]"}
{"severity":"INFO","time":"2023/07/11 17:59:27.805605","line":"exec.go:66","message":"Traceback (most recent call last):"}
{"severity":"INFO","time":"2023/07/11 17:59:27.805641","line":"exec.go:66","message":" File \"/usr/local/lib/python3.8/site-packages/apache_beam/internal/dill_pickler.py\", line 246, in dumps"}
{"severity":"INFO","time":"2023/07/11 17:59:27.805661","line":"exec.go:66","message":" s = dill.dumps(o, byref=settings['dill_byref'])"}
{"severity":"INFO","time":"2023/07/11 17:59:27.805675","line":"exec.go:66","message":" File \"/usr/local/lib/python3.8/site-packages/dill/_dill.py\", line 263, in dumps"}
{"severity":"INFO","time":"2023/07/11 17:59:27.805691","line":"exec.go:66","message":" dump(obj, file, protocol, byref, fmode, recurse, **kwds)#, strictio)"}
{"severity":"INFO","time":"2023/07/11 17:59:27.805707","line":"exec.go:66","message":" File \"/usr/local/lib/python3.8/site-packages/dill/_dill.py\", line 235, in dump"}
{"severity":"INFO","time":"2023/07/11 17:59:27.805722","line":"exec.go:66","message":" Pickler(file, protocol, **_kwds).dump(obj)"}
{"severity":"INFO","time":"2023/07/11 17:59:27.805735","line":"exec.go:66","message":" File \"/usr/local/lib/python3.8/site-packages/dill/_dill.py\", line 394, in dump"}
{"severity":"INFO","time":"2023/07/11 17:59:27.805749","line":"exec.go:66","message":" StockPickler.dump(self, obj)"}
{"severity":"INFO","time":"2023/07/11 17:59:27.805766","line":"exec.go:66","message":" File \"/usr/local/lib/python3.8/pickle.py\", line 487, in dump"}
{"severity":"INFO","time":"2023/07/11 17:59:27.805780","line":"exec.go:66","message":" self.save(obj)"}
{"severity":"INFO","time":"2023/07/11 17:59:27.805792","line":"exec.go:66","message":" File \"/usr/local/lib/python3.8/site-packages/dill/_dill.py\", line 388, in save"}
{"severity":"INFO","time":"2023/07/11 17:59:27.805813","line":"exec.go:66","message":" StockPickler.save(self, obj, save_persistent_id)"}
{"severity":"INFO","time":"2023/07/11 17:59:27.805828","line":"exec.go:66","message":" File \"/usr/local/lib/python3.8/pickle.py\", line 603, in save"}
{"severity":"INFO","time":"2023/07/11 17:59:27.805842","line":"exec.go:66","message":" self.save_reduce(obj=obj, *rv)"}
{"severity":"INFO","time":"2023/07/11 17:59:27.805854","line":"exec.go:66","message":" File \"/usr/local/lib/python3.8/pickle.py\", line 717, in save_reduce"}
{"severity":"INFO","time":"2023/07/11 17:59:27.805870","line":"exec.go:66","message":" save(state)"}
{"severity":"INFO","time":"2023/07/11 17:59:27.805883","line":"exec.go:66","message":" File \"/usr/local/lib/python3.8/site-packages/dill/_dill.py\", line 388, in save"}
{"severity":"INFO","time":"2023/07/11 17:59:27.805897","line":"exec.go:66","message":" StockPickler.save(self, obj, save_persistent_id)"}
{"severity":"INFO","time":"2023/07/11 17:59:27.805910","line":"exec.go:66","message":" File \"/usr/local/lib/python3.8/pickle.py\", line 560, in save"}
{"severity":"INFO","time":"2023/07/11 17:59:27.805924","line":"exec.go:66","message":" f(self, obj) # Call unbound method with explicit self"}
{"severity":"INFO","time":"2023/07/11 17:59:27.805937","line":"exec.go:66","message":" File \"/usr/local/lib/python3.8/site-packages/apache_beam/internal/dill_pickler.py\", line 216, in new_save_module_dict"}
{"severity":"INFO","time":"2023/07/11 17:59:27.805953","line":"exec.go:66","message":" return old_save_module_dict(pickler, obj)"}
{"severity":"INFO","time":"2023/07/11 17:59:27.805965","line":"exec.go:66","message":" File \"/usr/local/lib/python3.8/site-packages/dill/_dill.py\", line 1186, in save_module_dict"}
{"severity":"INFO","time":"2023/07/11 17:59:27.805980","line":"exec.go:66","message":" StockPickler.save_dict(pickler, obj)"}
{"severity":"INFO","time":"2023/07/11 17:59:27.805993","line":"exec.go:66","message":" File \"/usr/local/lib/python3.8/pickle.py\", line 971, in save_dict"}
{"severity":"INFO","time":"2023/07/11 17:59:27.806007","line":"exec.go:66","message":" self._batch_setitems(obj.items())"}
{"severity":"INFO","time":"2023/07/11 17:59:27.806019","line":"exec.go:66","message":" File \"/usr/local/lib/python3.8/pickle.py\", line 997, in _batch_setitems"}
{"severity":"INFO","time":"2023/07/11 17:59:27.806033","line":"exec.go:66","message":" save(v)"}
{"severity":"INFO","time":"2023/07/11 17:59:27.806045","line":"exec.go:66","message":" File \"/usr/local/lib/python3.8/site-packages/dill/_dill.py\", line 388, in save"}
{"severity":"INFO","time":"2023/07/11 17:59:27.806059","line":"exec.go:66","message":" StockPickler.save(self, obj, save_persistent_id)"}
{"severity":"INFO","time":"2023/07/11 17:59:27.806072","line":"exec.go:66","message":" File \"/usr/local/lib/python3.8/pickle.py\", line 560, in save"}
{"severity":"INFO","time":"2023/07/11 17:59:27.806087","line":"exec.go:66","message":" f(self, obj) # Call unbound method with explicit self"}
{"severity":"INFO","time":"2023/07/11 17:59:27.806100","line":"exec.go:66","message":" File \"/usr/local/lib/python3.8/site-packages/dill/_dill.py\", line 1824, in save_function"}
{"severity":"INFO","time":"2023/07/11 17:59:27.806114","line":"exec.go:66","message":" _save_with_postproc(pickler, (_create_function, ("}
{"severity":"INFO","time":"2023/07/11 17:59:27.806127","line":"exec.go:66","message":" File \"/usr/local/lib/python3.8/site-packages/dill/_dill.py\", line 1089, in _save_with_postproc"}
{"severity":"INFO","time":"2023/07/11 17:59:27.806141","line":"exec.go:66","message":" pickler.save_reduce(*reduction)"}
{"severity":"INFO","time":"2023/07/11 17:59:27.806153","line":"exec.go:66","message":" File \"/usr/local/lib/python3.8/pickle.py\", line 691, in save_reduce"}
{"severity":"INFO","time":"2023/07/11 17:59:27.806167","line":"exec.go:66","message":" save(func)"}
{"severity":"INFO","time":"2023/07/11 17:59:27.806178","line":"exec.go:66","message":" File \"/usr/local/lib/python3.8/site-packages/dill/_dill.py\", line 388, in save"}
{"severity":"INFO","time":"2023/07/11 17:59:27.806196","line":"exec.go:66","message":" StockPickler.save(self, obj, save_persistent_id)"}
{"severity":"INFO","time":"2023/07/11 17:59:27.806211","line":"exec.go:66","message":" File \"/usr/local/lib/python3.8/pickle.py\", line 603, in save"}
{"severity":"INFO","time":"2023/07/11 17:59:27.806224","line":"exec.go:66","message":" self.save_reduce(obj=obj, *rv)"}
{"severity":"INFO","time":"2023/07/11 17:59:27.806236","line":"exec.go:66","message":" File \"/usr/local/lib/python3.8/pickle.py\", line 692, in save_reduce"}
{"severity":"INFO","time":"2023/07/11 17:59:27.806251","line":"exec.go:66","message":" save(args)"}
{"severity":"INFO","time":"2023/07/11 17:59:27.806263","line":"exec.go:66","message":" File \"/usr/local/lib/python3.8/site-packages/dill/_dill.py\", line 388, in save"}
{"severity":"INFO","time":"2023/07/11 17:59:27.806278","line":"exec.go:66","message":" StockPickler.save(self, obj, save_persistent_id)"}
{"severity":"INFO","time":"2023/07/11 17:59:27.806291","line":"exec.go:66","message":" File \"/usr/local/lib/python3.8/pickle.py\", line 560, in save"}
{"severity":"INFO","time":"2023/07/11 17:59:27.806304","line":"exec.go:66","message":" f(self, obj) # Call unbound method with explicit self"}
{"severity":"INFO","time":"2023/07/11 17:59:27.806325","line":"exec.go:66","message":" File \"/usr/local/lib/python3.8/pickle.py\", line 886, in save_tuple"}
{"severity":"INFO","time":"2023/07/11 17:59:27.806351","line":"exec.go:66","message":" save(element)"}
{"severity":"INFO","time":"2023/07/11 17:59:27.806363","line":"exec.go:66","message":" File \"/usr/local/lib/python3.8/site-packages/dill/_dill.py\", line 388, in save"}
{"severity":"INFO","time":"2023/07/11 17:59:27.806378","line":"exec.go:66","message":" StockPickler.save(self, obj, save_persistent_id)"}
{"severity":"INFO","time":"2023/07/11 17:59:27.806391","line":"exec.go:66","message":" File \"/usr/local/lib/python3.8/pickle.py\", line 560, in save"}
{"severity":"INFO","time":"2023/07/11 17:59:27.806405","line":"exec.go:66","message":" f(self, obj) # Call unbound method with explicit self"}
{"severity":"INFO","time":"2023/07/11 17:59:27.806418","line":"exec.go:66","message":" File \"/usr/local/lib/python3.8/site-packages/apache_beam/internal/dill_pickler.py\", line 170, in save_module"}
{"severity":"INFO","time":"2023/07/11 17:59:27.806464","line":"exec.go:66","message":" dill.dill.log.info('M2: %s' % obj)"}
{"severity":"INFO","time":"2023/07/11 17:59:27.806477","line":"exec.go:66","message":"AttributeError: module 'dill._dill' has no attribute 'log'"}
</code></pre>
<p>I'm unsure where to look. It doesn't look like <a href="https://github.com/uqfoundation/dill/releases" rel="nofollow noreferrer">dill</a> or <a href="https://github.com/apache/beam/releases/tag/v2.48.0" rel="nofollow noreferrer">beam-sdk</a> have had a new release recently. I tried this a few days ago and it worked, so I'm not sure exactly what changed. It looks like there is an issue with serializing the Dataflow pipeline.</p>
| <python><google-cloud-platform><google-cloud-dataflow><apache-beam> | 2023-07-11 19:56:13 | 1 | 2,360 | Minh |
76,665,363 | 7,535,168 | Is it possible to change font size in Recycleview widgets? | <p>I'm having a recycleview of labels and I would like to change the font size of these labels using MDSlider. I'm not sure if this is even possible. The closest thing I've got is to change the font of the objects retrieved by the method <code>view_adapter.get_visible_view(index)</code>, which are the objects currently visible if I'm not mistaken. But using the code below which should be doing just that, I'm getting strange behavior as it's changing the font size of all visible labels, but also some of the labels which aren't visible.</p>
<pre><code>from kivymd.app import MDApp
from kivy.lang import Builder
from kivy.uix.screenmanager import ScreenManager
from kivy.uix.screenmanager import Screen
from kivymd.uix.boxlayout import MDBoxLayout
kv = """
<DailyService>:
bg_color: app.theme_cls.primary_dark
day: ''
service: ''
MDGridLayout:
rows: 2
MDLabel:
id: firstLabelId
halign: 'center'
text: root.day
MDLabel:
id: secondLabelId
halign: 'center'
md_bg_color: root.bg_color
text: root.service
<MainScreen>:
name: 'mainScreen'
rvid: myRv
MDRelativeLayout:
orientation: 'vertical'
MDRecycleView:
viewclass: 'DailyService'
id: myRv
RecycleBoxLayout:
default_size: None, dp(200)
default_size_hint: 1, None
size_hint_y: None
height: self.minimum_height
orientation: 'vertical'
MDSlider:
color: 'white'
orientation: 'horizontal'
size_hint: (0.2, 0.2)
pos_hint: {"x":0.4, "top": 1}
min: 10
value: 20
max: 30
on_value_normalized: root.fontSizeSlider(self.value)
MyScreenManager:
mainScreen: mainScreenId
MainScreen:
id: mainScreenId
"""
class DailyService(MDBoxLayout):
pass
class MainScreen(Screen):
def __init__(self, **kwargs):
super(MainScreen, self).__init__(**kwargs)
def fontSizeSlider(self, value):
recycleViewObj = self.ids['myRv']
index = 0
while (recycleViewObj.view_adapter.get_visible_view(index)):
idList = recycleViewObj.view_adapter.get_visible_view(index).ids
for obj in idList.values():
obj.font_size = str(int(value))+'dp'
index = index + 1
class MyScreenManager(ScreenManager):
def __init__(self, **kwargs):
super(MyScreenManager, self).__init__(**kwargs)
class MyApp(MDApp):
def on_start(self):
data = []
for i in range(10):
data.append({'day': 'DAY','service': 'SERVICE'})
self.root.ids.mainScreenId.rvid.data = data
def build(self):
self.theme_cls.theme_style = 'Dark'
self.theme_cls.primary_palette = 'Blue'
self.theme_cls.accent_palette = 'Amber'
return Builder.load_string(kv)
if __name__ == '__main__':
MyApp().run()
</code></pre>
<p>I've also tried using layout_manager along with the view_adapter and it seems to me that I'm able to extract all the objects from the Recycleview, but when attempting to change the font_size nothing happens. So my guess would be that those objects aren't actual objects that I see in my app. Here's the attempt:</p>
<pre><code>def fontSizeSlider(self, value):
recycleViewObj = self.ids['myRv']
opts = recycleViewObj.layout_manager.view_opts
for i in range(len(recycleViewObj.data)):
viewAdapter = recycleViewObj.view_adapter
viewClass = opts[i]['viewclass']
data = recycleViewObj.data[i]
dailyService = viewAdapter.get_view(i, data, viewClass)
for obj in dailyService.ids.values():
print(obj)
obj.font_size = str(int(value))+'dp'
</code></pre>
<p>Is it possible to achieve what I want with Recycleview? Just to make myself clear, I would like to change the font of all the labels in the Recycleview (not just visible ones).</p>
| <python><kivy><kivymd> | 2023-07-11 19:54:45 | 1 | 601 | domdrag |
76,665,314 | 6,599,648 | Geopandas error: GeocoderNotFound: Unknown geocoder 'geocodefarm' | <p>I'm following a tutorial on geocoding using geopandas. This is my code:</p>
<pre><code>import geopandas
place = "Sankt Gallen"
geopandas.tools.geocode(place).explore()
</code></pre>
<p>When I run the code, I get the following error:</p>
<pre><code>GeocoderNotFound: Unknown geocoder 'geocodefarm'; options are: dict_keys(['algolia', 'arcgis', 'azure', 'baidu', 'baiduv3', 'banfrance', 'bing', 'databc', 'geocodeearth', 'geocodio', 'geonames', 'google', 'googlev3', 'geolake', 'here', 'herev7', 'ignfrance', 'mapbox', 'mapquest', 'maptiler', 'nominatim', 'opencage', 'openmapquest', 'pickpoint', 'pelias', 'photon', 'liveaddress', 'tomtom', 'what3words', 'what3wordsv3', 'yandex'])
</code></pre>
<p>Does anyone know how to fix this error?</p>
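A sketch of one possible fix, assuming switching to one of the providers listed in the error message (e.g. Nominatim) is acceptable: pass the provider explicitly instead of relying on the removed <code>geocodefarm</code> default. The <code>user_agent</code> value below is an arbitrary placeholder, not a required name.

```python
import geopandas

def geocode_place(place):
    # geopandas' old default provider "geocodefarm" was removed from geopy,
    # which is what raises GeocoderNotFound; naming a provider explicitly
    # avoids it. Nominatim requires a user_agent (any descriptive string).
    return geopandas.tools.geocode(
        place, provider="nominatim", user_agent="example-geocoder"
    )
```

With that in place, `geocode_place("Sankt Gallen").explore()` should behave as in the tutorial (network access is required for the actual geocoding call).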
| <python><geopandas> | 2023-07-11 19:48:01 | 1 | 613 | Muriel |
76,665,310 | 16,912,844 | Python Run `subprocess.Popen` With Timeout and Get `stdout` at Runtime | <p>I am trying to create a function to run a <code>command</code> with <code>timeout</code>, but at the same time, get standard output (<code>stdout</code>) to display. I saw some answers, but not exactly what I am looking for.</p>
<ul>
<li><a href="https://stackoverflow.com/questions/38801202/how-to-run-a-process-with-timeout-and-still-get-stdout-at-runtime">How to run a process with timeout and still get stdout at runtime</a> (this is similar, but I need to return a <code>Popen</code> object)</li>
<li><a href="https://stackoverflow.com/questions/71644398/how-to-get-stdout-from-subprocess-popen-elegantly-with-timeout-check">How to get stdout from subprocess.Popen elegantly with timeout check?</a> (also similar)</li>
</ul>
<p>Function signature goes:</p>
<pre class="lang-py prettyprint-override"><code>def run_with_timeout(command, timeout)
</code></pre>
<p>So far I am able to get <code>stdout</code> to print at runtime, but I am not sure what a robust way to time out the application is. Is there a robust way to do the timeout, or is there a better approach to this? I tried <code>process.wait(timeout=timeout)</code> and <code>process.communicate(timeout=timeout)</code>, but they don't seem to work. I am trying to avoid using threads as well...</p>
<pre class="lang-py prettyprint-override"><code>def run_with_timeout(command, timeout):
process = subprocess.Popen(
command,
stdout=subprocess.PIPE,
stderr=subprocess.STDOUT,
universal_newlines=True,
text=True,
)
output = ''
try:
for line in iter(process.stdout.readline, ''):
output += line
print(line)
process.stdout.close()
process.out = output
except subprocess.TimeoutExpired as te:
process.kill()
return process
</code></pre>
| <python><subprocess><popen> | 2023-07-11 19:47:24 | 2 | 317 | YTKme |
76,665,289 | 17,896,651 | Access parent class vars from child class | <p>Will this work?
What are better options?</p>
<pre><code>class Liketool():
def __init__(self, parent: Instapy):
self.parent = parent
print('Liketool' + self.parent.my_name)
class Folllowtool():
def __init__(self, parent: Instapy):
self.parent = parent
print('Folllowtool' + self.parent.my_name)
class Instapy(Liketool, Folllowtool):
def __init__(self):
self.my_name = 'moshe'
super().__init__(self)
</code></pre>
<p>I don't like the use of <code>self.parent.my_name</code>. Can I somehow arrange it so that <code>self.my_name</code> sees the Instapy vars when called from <code>Folllowtool</code>?</p>
<p>WHAT I WANT TO ACHIEVE:</p>
<p>I have one session manager for social media operations.
I have a like tool and a follow tool that do specific operations.</p>
<p>I want the parent to know its child <code>Liketool</code> so that the child can access Instapy data.</p>
<p>EXAMPLE:
<a href="https://www.programiz.com/python-programming/methods/built-in/super" rel="nofollow noreferrer">https://www.programiz.com/python-programming/methods/built-in/super</a>
<a href="https://i.sstatic.net/rkyDa.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/rkyDa.png" alt="enter image description here" /></a></p>
<p>ALTERNATIVE SOLUTION:
What do you think of this?</p>
<pre><code>class Liketool:
def __init__(self, session: Instapy):
self.session = session
print('Liketool' + self.session.my_name)
class Folllowtool:
def __init__(self, session: Instapy):
self.session = session
print('Folllowtool' + self.session.my_name)
class Instapy():
def __init__(self):
self.my_name= 'moshe'
self.like = Liketool(self)
self.follow = Folllowtool(self)
</code></pre>
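For what it's worth, the composition variant can be made to run as-is by quoting the forward references: <code>Instapy</code> is not yet defined when the tool classes' annotations are evaluated, so the bare <code>session: Instapy</code> hint raises a <code>NameError</code>. A minimal runnable sketch of that shape (the <code>describe</code> method is illustrative, standing in for the original <code>print</code> calls):

```python
class Liketool:
    def __init__(self, session: "Instapy"):  # quoted: Instapy isn't defined yet
        self.session = session

    def describe(self) -> str:
        return "Liketool " + self.session.my_name

class Folllowtool:
    def __init__(self, session: "Instapy"):
        self.session = session

    def describe(self) -> str:
        return "Folllowtool " + self.session.my_name

class Instapy:
    def __init__(self):
        self.my_name = "moshe"
        self.like = Liketool(self)
        self.follow = Folllowtool(self)
```

Each tool keeps a plain reference to the shared session, so `self.session.my_name` reads Instapy's attributes without any inheritance.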
| <python><class><object> | 2023-07-11 19:44:25 | 1 | 356 | Si si |
76,665,236 | 9,542,989 | UnicodeDecodeError: 'charmap' codec can't decode byte 0x8d in position 1679: character maps to <undefined> in GitHub Action | <p>I have a GitHub action that runs to build and release a Python package that I am building. At the point of installing dependencies (on Windows compute), I am faced with the above error. This is the full stack trace,</p>
<pre><code> [21 lines of output]
Traceback (most recent call last):
File "C:\hostedtoolcache\windows\Python\3.9.13\x64\lib\site-packages\pip\_vendor\pyproject_hooks\_in_process\_in_process.py", line 353, in <module>
main()
File "C:\hostedtoolcache\windows\Python\3.9.13\x64\lib\site-packages\pip\_vendor\pyproject_hooks\_in_process\_in_process.py", line 335, in main
json_out['return_val'] = hook(**hook_input['kwargs'])
File "C:\hostedtoolcache\windows\Python\3.9.13\x64\lib\site-packages\pip\_vendor\pyproject_hooks\_in_process\_in_process.py", line 132, in get_requires_for_build_editable
return hook(config_settings)
File "C:\Users\runneradmin\AppData\Local\Temp\pip-build-env-h6r6afd3\overlay\Lib\site-packages\setuptools\build_meta.py", line 450, in get_requires_for_build_editable
return self.get_requires_for_build_wheel(config_settings)
File "C:\Users\runneradmin\AppData\Local\Temp\pip-build-env-h6r6afd3\overlay\Lib\site-packages\setuptools\build_meta.py", line 341, in get_requires_for_build_wheel
return self._get_build_requires(config_settings, requirements=['wheel'])
File "C:\Users\runneradmin\AppData\Local\Temp\pip-build-env-h6r6afd3\overlay\Lib\site-packages\setuptools\build_meta.py", line 323, in _get_build_requires
self.run_setup()
File "C:\Users\runneradmin\AppData\Local\Temp\pip-build-env-h6r6afd3\overlay\Lib\site-packages\setuptools\build_meta.py", line 487, in run_setup
super(_BuildMetaLegacyBackend,
File "C:\Users\runneradmin\AppData\Local\Temp\pip-build-env-h6r6afd3\overlay\Lib\site-packages\setuptools\build_meta.py", line 338, in run_setup
exec(code, locals())
File "<string>", line 10, in <module>
File "C:\hostedtoolcache\windows\Python\3.9.13\x64\lib\encodings\cp1252.py", line 23, in decode
return codecs.charmap_decode(input,self.errors,decoding_table)[0]
UnicodeDecodeError: 'charmap' codec can't decode byte 0x8d in position 1679: character maps to <undefined>
[end of output]
</code></pre>
<p>What could be the problem here?</p>
<p>This is the exact run that I am having the issue with,
<br>
<a href="https://github.com/MinuraPunchihewa/ai-text-to-sql/actions/runs/5523938376/jobs/10075748805" rel="nofollow noreferrer">https://github.com/MinuraPunchihewa/ai-text-to-sql/actions/runs/5523938376/jobs/10075748805</a></p>
| <python><github-actions><cicd> | 2023-07-11 19:34:30 | 0 | 2,115 | Minura Punchihewa |
76,664,986 | 1,601,580 | Unexpected chardet dependency with wandb, how to fix and why does it happen? | <p>I got this error:</p>
<pre><code>import wandb
...
Traceback (most recent call last):
File "/lfs/ampere1/0/brando9/miniconda/envs/data_quality/lib/python3.10/site-packages/requests/compat.py", line 11, in <module>
import chardet
ModuleNotFoundError: No module named 'chardet'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "/afs/cs.stanford.edu/u/brando9/ultimate-utils/ultimate-utils-proj-src/uutils/__init__.py", line 52, in <module>
from uutils.logging_uu.wandb_logging.common import setup_wandb
File "/afs/cs.stanford.edu/u/brando9/ultimate-utils/ultimate-utils-proj-src/uutils/logging_uu/wandb_logging/common.py", line 8, in <module>
import wandb
File "/lfs/ampere1/0/brando9/miniconda/envs/data_quality/lib/python3.10/site-packages/wandb/__init__.py", line 26, in <module>
from wandb import sdk as wandb_sdk
File "/lfs/ampere1/0/brando9/miniconda/envs/data_quality/lib/python3.10/site-packages/wandb/sdk/__init__.py", line 3, in <module>
from . import wandb_helper as helper # noqa: F401
File "/lfs/ampere1/0/brando9/miniconda/envs/data_quality/lib/python3.10/site-packages/wandb/sdk/wandb_helper.py", line 6, in <module>
from .lib import config_util
File "/lfs/ampere1/0/brando9/miniconda/envs/data_quality/lib/python3.10/site-packages/wandb/sdk/lib/config_util.py", line 10, in <module>
from wandb.util import load_yaml
File "/lfs/ampere1/0/brando9/miniconda/envs/data_quality/lib/python3.10/site-packages/wandb/util.py", line 49, in <module>
import requests
File "/lfs/ampere1/0/brando9/miniconda/envs/data_quality/lib/python3.10/site-packages/requests/__init__.py", line 45, in <module>
from .exceptions import RequestsDependencyWarning
File "/lfs/ampere1/0/brando9/miniconda/envs/data_quality/lib/python3.10/site-packages/requests/exceptions.py", line 9, in <module>
from .compat import JSONDecodeError as CompatJSONDecodeError
File "/lfs/ampere1/0/brando9/miniconda/envs/data_quality/lib/python3.10/site-packages/requests/compat.py", line 13, in <module>
import charset_normalizer as chardet
File "/lfs/ampere1/0/brando9/miniconda/envs/data_quality/lib/python3.10/site-packages/charset_normalizer/__init__.py", line 23, in <module>
from charset_normalizer.api import from_fp, from_path, from_bytes, normalize
File "/lfs/ampere1/0/brando9/miniconda/envs/data_quality/lib/python3.10/site-packages/charset_normalizer/api.py", line 10, in <module>
from charset_normalizer.md import mess_ratio
File "charset_normalizer/md.py", line 5, in <module>
ImportError: cannot import name 'COMMON_SAFE_ASCII_CHARACTERS' from 'charset_normalizer.constant' (/lfs/ampere1/0/brando9/miniconda/envs/data_quality/lib/python3.10/site-packages/charset_normalizer/constant.py)
(data_quality) brando9~ $ pip install chardet
Collecting chardet
Downloading chardet-5.1.0-py3-none-any.whl (199 kB)
ββββββββββββββββββββββββββββββββββββββββ 199.1/199.1 kB 7.7 MB/s eta 0:00:00
Installing collected packages: chardet
Successfully installed chardet-5.1.0
</code></pre>
<p>Why? It seems odd. It seems this fixed it:</p>
<pre><code>pip install --upgrade pip
pip install chardet # might be needed if wandb acts weird
</code></pre>
<p>That fixed it. But why did this happen in the first place?</p>
<p><a href="https://community.wandb.ai/t/odd-error-needing-chardet-with-wandb/4719" rel="nofollow noreferrer">https://community.wandb.ai/t/odd-error-needing-chardet-with-wandb/4719</a></p>
<hr />
<h1>Edit</h1>
<p>I don't know why this is needed, but this seems to work:</p>
<pre><code>pip install --upgrade pip
pip install wandb --upgrade
if ! pip show chardet > /dev/null; then
pip install chardet
fi
if ! pip show cchardet > /dev/null; then
pip install cchardet
fi
python -c "import uutils; uutils.torch_uu.gpu_test()"
</code></pre>
<p>This shouldn't be happening in the first place...</p>
| <python><wandb> | 2023-07-11 18:54:30 | 1 | 6,126 | Charlie Parker |
76,664,977 | 5,868,293 | How to calculate sample weights to match population in python | <p>I have the following dataframe</p>
<pre><code>import pandas as pd
dt = pd.DataFrame({'Age': [20,30,40], 'Gender': ['M', 'F', 'F'],
'Origin': ['Caucasian', 'Asian', 'Latino']})
</code></pre>
<p>which represent a sample from a population. I want to calculate sample weights for <code>dt</code> and add them as a new column.
My population has the following ratios for each of the features:</p>
<ul>
<li><code>Gender</code> -> M:F = 1:1 (meaning for every 1 Male there is 1 Female and vice versa)</li>
<li><code>Age</code> -> 20:30:40 = 1:2:3</li>
<li><code>Origin</code> -> caucasian:asian:latino = 1:4:2</li>
</ul>
<p>How can I calculate the weights ?</p>
<p><strong>UPDATE</strong></p>
<p>The weights should have the following property:
When you do e.g. <code>dt.groupby('Gender')['Weight'].sum()</code> you should get</p>
<pre><code>Gender
F 1
M 1
</code></pre>
<p>since the ratio for <code>gender</code> is <code>1:1</code>,</p>
<p>when you do e.g. <code>dt.groupby('Origiin')['Weight'].sum()</code> you should get</p>
<pre><code>Origin
Asian 4
Caucasian 1
Latino 2
</code></pre>
<p>and when you do e.g. <code>dt.groupby('Age')['Weight'].sum()</code> you should get</p>
<pre><code>Age
20 1
30 2
40 3
</code></pre>
| <python><pandas> | 2023-07-11 18:52:37 | 2 | 4,512 | quant |
76,664,945 | 19,251,893 | Send MDNS request with Python Scapy | <p>I have a device in my LAN that sends mDNS packets that look like this in Wireshark:</p>
<pre><code>ip src: 192.168.1.41 , ip dst: 224.0.0.251 , src port : 5353 , dst port : 5353
MDNS 228 Standard query response 0x0000 PTR, cache flush abcd.local PTR, cache flush abcd.local NSEC, cache flush 192.168.1.41.in-addr.arpa NSEC
</code></pre>
<p>That means <code>abcd.local</code> is at 192.168.1.41.</p>
<p>So I tried to send a multicast packet on that LAN to get an answer from 192.168.1.41 for <code>abcd.local</code>:</p>
<pre><code>from scapy.all import *
sr1(IP()/UDP(dport=5353)/DNS(qd=DNSQR(qtype="PTR", qname="abcd.local")))
</code></pre>
<p>The packet was sent, but I didn't see any response from 192.168.1.41. Why is that?</p>
<p>Maybe I sent the wrong mDNS request?</p>
| <python><wireshark><scapy><multicast><mdns> | 2023-07-11 18:49:00 | 0 | 345 | python3.789 |
76,664,735 | 2,221,360 | How to get signal from QPushButtons which is inside grid layout? | <p>I have a GUI that looks like the below which has many QPushButtons added dynamically during run time:</p>
<p><a href="https://i.sstatic.net/r5w4U.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/r5w4U.png" alt="scroll widget" /></a></p>
<p>Each of these buttons is added inside a <code>gridlayout</code>, and later an image is embedded in each button. What I want is to get the object name or the image name. How do I do that?</p>
<p><strong>Edit 1</strong>:</p>
<p>There is a suggestion to use <code>self.sender()</code> to get the widget's object name as mentioned in this link <a href="https://stackoverflow.com/questions/36823841/pyqt-getting-which-button-called-a-specific-function">PyQt: Getting which button called a specific function</a> which uses <code>self.button.clicked.connect(self.function)</code>. However, in my case, I have loaded more than 100 buttons, and emitting a signal from each of these buttons is not possible.</p>
<p><strong>Edit 2</strong>:
What I am trying to implement is closer to what is mentioned in this post</p>
<p><a href="https://stackoverflow.com/questions/66790247/emit-a-signal-when-any-button-is-pressed">Emit a signal when any button is pressed</a></p>
<p>Request to reopen the question.</p>
| <python><user-interface><pyqt><pyside6> | 2023-07-11 18:14:26 | 0 | 3,910 | sundar_ima |
76,664,566 | 16,491,055 | How to automatically update Certifi cacert.pem with Trusted Certificates in the Windows Certificate Store? | <p>I have installed a firewall Root CA into the Trusted Root Certification Authorities of the Local System of my Windows system. This was done for the purpose of SSL inspection.</p>
<p>When this was done, I was seeing SSL Error messages in my Python applications</p>
<pre><code>[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: unable to get local issuer certificate (_ssl.c:1108)')) WINDOWS</code></pre>
<p>This was strange, because everything else on my system, such as web browsers, was operating fine. I then realized that Python uses the Root CAs specified in the file <code>cacert.pem</code>, which is managed by the <code>certifi</code> module. It does <strong>not</strong> use the Windows certificate store. This made sense because the certificates were set up correctly in the Windows certificate store, and every other application on my system was fine.</p>
<p>To view where this file is stored, you can run the following code:</p>
<pre><code>import certifi
print(certifi.where())
</code></pre>
<p>I manually added my Root CA to the <code>cacert.pem</code> file, by copy and pasting my Root CA certificate to the bottom of the <code>cacert.pem</code> file.</p>
<p>After doing this, I have not received any SSL errors whatsoever and my Python applications are correctly using the firewall's certificates.</p>
<p>My questions are:</p>
<ul>
<li>Can I make Python just use the trusted Root certificates in my Windows store?</li>
<li>It is unnecessary and annoying to have duplicates of the same certificate. I would like Python to use the central Windows store, just like everything else, to minimize hassle and sources of error</li>
<li>At the least, is there a certifi command that I can run to have it copy everything from the Windows certificate store into the <code>cacert.pem</code> file?</li>
</ul>
<p><strong>EDIT</strong></p>
<p>From the <a href="https://github.com/certifi/python-certifi" rel="nofollow noreferrer">certifi github page</a></p>
<blockquote>
<p>Certifi does not support any addition/removal or other modification of the CA trust store content. This project is intended to provide a reliable and highly portable root of trust to python deployments. Look to upstream projects for methods to use alternate trust.</p>
</blockquote>
| <python><ssl><ssl-certificate><certifi> | 2023-07-11 17:48:47 | 1 | 771 | geekygeek |
76,664,548 | 1,494,193 | How to do complex filters on three or more DataFrames? | <p><strong>I want to populate columns using filters based on several DataFrames' values as conditions</strong>.</p>
<p><strong>Using the code bellow creates three DataFrames:</strong></p>
<pre><code>print(main_df.tail())
Date Code start_date end_date nums furniture
18934 2021-02-17 84QQP49YJ 2018-07-25 2021-02-17 three chest
18935 2021-02-17 89IA4XDIU 2018-11-27 2021-02-17 five bed
18936 2021-02-17 8LIFYW7LB 2020-05-16 2021-02-17 six sofa
18937 2021-02-17 8ZF080G5U 2020-04-08 2021-02-17 three coffe_table
18938 2021-02-17 9B9R0JIBB 2020-08-25 2021-02-17 three sofa
print(second_df.tail())
Date Code values colors
107509 2023-11-07 1E5M1DVZI 86799 green
107510 2023-11-07 3P3VTOSN0 96229 magenta
107511 2023-11-07 910TL0TMF 93852 dark_blue
107512 2023-11-07 21JZU3AB5 92032 white
107513 2023-11-07 7CBK4U5F2 52470 yellow
print(third_df.tail())
Date Shapes quantity
21896 2023-11-07 star 85367
21897 2023-11-07 circle 23269
21898 2023-11-07 oval 37599
21899 2023-11-07 squiggle 52118
21900 2023-11-07 octagon 42500
</code></pre>
<p>I'll explain the logic to understand how to filter based on these types of conditions.</p>
<p><strong>Case 1:</strong> Add new column named <em>"new_value"</em> on main_df</p>
<p>(main_df.new_value = second_df.value) when</p>
<p>(main_df.date == second_df.date) and</p>
<p>(main_df.code == second_df.code)</p>
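Case 1 is an equi-join on both keys, so it vectorizes as a plain merge. A sketch on hypothetical miniature frames (the values are invented for illustration):

```python
import pandas as pd

main_df = pd.DataFrame({"Date": ["2021-02-17", "2021-02-17"],
                        "Code": ["84QQP49YJ", "89IA4XDIU"]})
second_df = pd.DataFrame({"Date": ["2021-02-17", "2021-02-17"],
                          "Code": ["84QQP49YJ", "89IA4XDIU"],
                          "values": [11, 22]})

# equi-join on Date and Code, then rename the carried column
main_df = (main_df
           .merge(second_df[["Date", "Code", "values"]],
                  on=["Date", "Code"], how="left")
           .rename(columns={"values": "new_value"}))
```

Rows of `main_df` with no match simply get `NaN` in `new_value` under `how="left"`.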
<p><strong>Case 2:</strong> Add new column named <em>"color"</em> on main_df</p>
<p>(main_df.color = second_df.colors) when</p>
<p>(main_df.code == second_df.code) and</p>
<p>(main_df.start_date <= second_df.date <= main_df.end_date)</p>
<p>Should there be more than one match, pick the first.</p>
<p><strong>Case 3:</strong> On main_df add new column <em>"mixed_quantity"</em></p>
<p>main_df.mixed_quantity = (third_df.quantity + second_df.values) if</p>
<p>(main_df.date == second_df.date == third_df.date) and</p>
<p>(main_df.Code == second_df.Code), and</p>
<p>(third_df.Shape == 'circle'). This field will have Nans, that's ok.</p>
<p><strong>Case 4:</strong> Add a new column on main_df <em>"furniture_value"</em></p>
<p>main_df.furniture_value = second_df.values/10 if</p>
<p>(main_df.furniture == 'chest') and (main_df.date == second_df.date)</p>
<p>main_df.furniture_value = second_df.values/15 if</p>
<p>(main_df.furniture == 'bed') and (main_df.date == second_df.date)</p>
<p>main_df.furniture_value = second_df.values/20 if</p>
<p>(main_df.furniture == 'sofa') and (main_df.date == second_df.date)</p>
<p>and so on..</p>
<p>similar to a switch statement. For this I was able to use <code>iterrows</code>, but it took over 30 minutes to run on the original dataset, which is unfeasible.</p>
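For the switch-like Case 4, the divisor can be looked up with <code>.map</code> and applied in a single vectorized division instead of <code>iterrows</code>. The divisors and sample values below are illustrative; the frame stands in for `main_df` after `values` has already been joined on date:

```python
import pandas as pd

divisor = {"chest": 10, "bed": 15, "sofa": 20}  # extend for the other furniture
df = pd.DataFrame({"furniture": ["chest", "bed", "sofa"],
                   "merged_values": [100, 150, 200]})

# one lookup + one division replaces the whole per-row switch
df["furniture_value"] = df["merged_values"] / df["furniture"].map(divisor)
```

Furniture types missing from the dict map to `NaN`, which matches the "this field will have NaNs" expectation.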
<p>I have tried all manner of approaches and so much more. I'll avoid writing the overtly complex loops I've done:</p>
<pre class="lang-py prettyprint-override"><code> main_df.new_value = main_df[(main_df.date == second_df.date) & (main_df.code == second_df.code)]
second_df.date.reset_index(drop=True) == main_df.date.reset_index(drop=True)) & (third_df.Shape == 'circle')]
</code></pre>
<p>and similar attempts, and I have gotten index errors and many other errors.</p>
<pre><code> raise ValueError("Can only compare identically-labeled Series objects")
ValueError: Can only compare identically-labeled Series objects
</code></pre>
<p>I just don't know how to handle these complex conditions.</p>
<p><strong>The code to generate the dataset:</strong></p>
<p>In order to make my questions more clear I made some <strong>code to generate a dataset</strong> that has the proper conditions: structure and the size.</p>
<p>Note: I use the library <a href="https://pypi.org/project/pyjanitor/" rel="nofollow noreferrer">pyjanitor</a> to make a conditional join faster.</p>
<pre class="lang-py prettyprint-override"><code>import random
import string
from datetime import datetime
import janitor
import pandas as pd
initial_date = pd.to_datetime('5/27/2018')
today = pd.to_datetime(datetime.today()).strftime('%d-%m-%Y')
date_range = pd.date_range(start=initial_date, end=today)
date_range = pd.Series(date_range, name='Date')
shapes = ['square', 'triangle', 'rhomboid', 'trapezoid', 'pentagon', 'hexagon', 'star', 'circle',
'oval', 'squiggle', 'octagon']
def alph_num_generator(n, a=False):
def single_alphanumeric():
random_string = ''.join(
# random.choice(string.digits + string.ascii_uppercase + string.ascii_lowercase) for _ in range(8))
random.choice(string.digits + string.ascii_uppercase) for _ in range(8))
return str(random.randint(1, 9)) + random_string
if a:
random_alphanumeric_list = [single_alphanumeric() for x in range(n)]
return random_alphanumeric_list
else:
random_numeric_list = [random.randint(10_000, 99_999) for x in range(n)]
return random_numeric_list
def date_pair_list(start_r, end_r, n):
def pair_generator(begin, end):
date_list = pd.date_range(begin, end).date
first_half = random.randint(0, round(len(date_list) / 2 - 1))
second_half = random.randint(round(len(date_list) / 2 - 1),
len(date_list) - round(len(date_list) / 2 - 1))
left_day, right_day = date_list[first_half], date_list[second_half]
return [left_day, right_day]
both = [pair_generator(start_r, end_r) for x in range(n)]
left, right = [x[0] for x in both], [x[1] for x in both]
return left, right
def random_list_generator(i, n):
colors = ['red', 'green', 'magenta', 'yellow', 'light_blue', 'dark_blue', 'black', 'white',
'pink', 'orange', 'crimson', 'teal']
shapes = ['square', 'triangle', 'rhomboid', 'trapezoid', 'pentagon', 'hexagon', 'star', 'circle',
'oval', 'squiggle', 'octagon']
fruits = ['banana', 'apple', 'orange', 'pear', 'lemon', 'pineapple', 'watermelon', 'lime',
'tangerine', 'grapefruit', 'grapes']
furniture = ['table', 'chair', 'armoire', 'chest', 'bed', 'night_stand', 'coffe_table',
'sofa', 'bench', 'bookcase', 'stool']
nums = ['one', 'two', 'three', 'four', 'five', 'six', 'seven', 'eight', 'nine', 'ten', 'eleven']
map_to_type = {1: colors, 2: shapes, 3: fruits, 4: furniture, 5: nums}
def random_value_generator(j):
rand_num = random.randint(0, 10)
return map_to_type[j][rand_num]
output = [random_value_generator(i) for x in range(n)]
return output
Code = alph_num_generator(37, a=True)
start_date, end_date = date_pair_list(initial_date, today, 37)
start_date, end_date = pd.to_datetime(start_date), pd.to_datetime(end_date)
nums = random_list_generator(5, 37)
src_df = pd.DataFrame({'Code': Code, 'start_date': start_date, 'end_date': end_date, 'nums': nums})
main_df = src_df.conditional_join(date_range, ('start_date', 'Date', '<='),
('end_date', 'Date', '>='), use_numba=False)
main_df = main_df[['Date', 'Code', 'start_date', 'end_date', 'nums']] \
.sort_values(['Date', 'Code']).reset_index().drop('index', axis=1)
furniture = random_list_generator(4, int(main_df.shape[0]))
main_df = main_df.assign(furniture=furniture)
extra_code = Code + alph_num_generator(17, a=True)
secondary_code, secondary_date = pd.DataFrame({'Code': extra_code}), pd.DataFrame({'Date': date_range})
second_df = pd.merge(secondary_date, secondary_code, how='cross')
values, colors = alph_num_generator(second_df.shape[0]), random_list_generator(1, second_df.shape[0])
second_df = second_df.assign(values=values, colors=colors)
third_shapes, third_date = pd.DataFrame({'Shapes': shapes}), pd.DataFrame({'Date': date_range})
third_df = pd.merge(third_date, third_shapes, how='cross')
third_quantity = alph_num_generator(int(third_df.shape[0]))
third_df = third_df.assign(quantity=third_quantity)
del secondary_code, secondary_date, extra_code, values, colors, nums, third_date, third_quantity, third_shapes, src_df
</code></pre>
| <python><pandas><dataframe><vectorization> | 2023-07-11 17:45:23 | 0 | 401 | Gorgonzola |
76,664,151 | 13,578,682 | Function doesn't get the __doc__ attribute | <pre><code>def foo():
("abc"
"def")
def bar():
("abc"
f"def")
</code></pre>
<p>The function <code>foo</code> gets a docstring, but <code>bar</code>'s <code>__doc__</code> is <code>None</code>.</p>
<pre><code>>>> bar.__doc__
>>> foo.__doc__
'abcdef'
</code></pre>
<p>Why does the f-string prevent the function from taking a <code>__doc__</code> attribute?</p>
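The compiler only treats a bare string <em>constant</em> as a docstring; implicit concatenation with an f-string turns the whole expression into a `JoinedStr` node, which is no longer a constant, even when it has no `{...}` parts. The AST makes this visible:

```python
import ast

# first statement inside each function body
foo_stmt = ast.parse('def foo():\n    ("abc"\n     "def")\n').body[0].body[0]
bar_stmt = ast.parse('def bar():\n    ("abc"\n     f"def")\n').body[0].body[0]

# foo's expression is a plain string constant -> eligible as a docstring
assert isinstance(foo_stmt.value, ast.Constant)
# bar's is a JoinedStr (an f-string expression), so it is not a docstring
assert isinstance(bar_stmt.value, ast.JoinedStr)
```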
| <python><docstring> | 2023-07-11 16:45:34 | 2 | 665 | no step on snek |
76,664,071 | 19,392,385 | sqlite3 'cursor.execute' makes slash command unresponsive (discord.py) | <p>I am coding a Discord bot and added slash commands. I use sqlite3 to create a database.</p>
<p>One of the commands is problematic as it returns <code>The application did not respond</code> on Discord. I checked what went wrong in the code by adding some print statement and realized it was because of <code>cursor.execute</code>. Here is the code:</p>
<pre class="lang-py prettyprint-override"><code>@commands.hybrid_command(name='addoc', with_app_command=True)
# @commands.cooldown(rate=1, per=60, type=commands.BucketType.user)
async def addoc(self, ctx: CustomContext, name, age, nationality, gender, sexuality, universe, desc, picture: discord.Attachment):
""" Add OC to the database with respect to user and guild id """
# Get guild and user roles
guild_id = str(ctx.guild.id)
user_roles = extract_role_ids(ctx.message.author.roles) # roles formatting needed to only keep the actual id
print('check 1')
try:
# Check if permissions have been set for the server
roles_list = self.roles_dict[guild_id]
# Check is user is allowed to use the database
print('check 2')
if any(role in roles_list for role in user_roles):
user_id = ctx.message.author.id
print('check 3')
cursor = self.db.cursor()
print('check 4')
cursor.execute('INSERT INTO "guild_{}" VALUES(?,?,?,?,?,?,?,?,?,?)'.format(guild_id), (name, age, nationality, gender, sexuality, universe, desc, picture.url))
print('check 5')
self.db.commit()
# Close cursor
cursor.close()
self.db.close()
await ctx.send(f'Character successfully added for <@{user_id}>!')
else:
await ctx.send("**If you think you should be able to add a character to the database, contact your local admins.**")
except KeyError:
await ctx.send("**Please set the authorized roles first with `addrole` before adding an OC.**")
</code></pre>
<p>The code for some reason is too slow at check 4 and the slash command just stops (<em>check 5</em> isn't printed).</p>
<p>I did not have this issue before adding slash commands, so is there a way to fix this, perhaps by making the code faster in some way?</p>
<p>I thought it might come from the picture input, so I added <code>.url</code> to get only the link, but this didn't fix the issue.</p>
<p><strong>EDIT 1: I have good contact with the developer behind the postgreslite. He made sure everything was asynchronous and updated the library. I also tried <code>ctx.defer</code> but the bot just keep thinking indefinitely without sending a response. I said that already but I did not have any issue when it was not a slash command (<code>commands.command()</code> instead of <code>hybrid_command(name='addoc', with_app_command=True)</code></strong>.</p>
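Two things stand out in the code shown: the <code>INSERT</code> lists ten <code>?</code> placeholders but binds only eight values (which raises inside <code>execute</code>), and blocking sqlite3 calls inside an async command stall the event loop past Discord's slash-command response deadline. A common pattern — sketched here without any discord.py dependency, so the names are illustrative — is to derive the placeholder count from the row and push the blocking work off the loop with `asyncio.to_thread` (after deferring the interaction in a real command):

```python
import asyncio
import sqlite3


def insert_character(db_path, guild_id, row):
    # plain blocking sqlite3 work, safe to run in a worker thread
    with sqlite3.connect(db_path) as db:
        placeholders = ",".join("?" * len(row))  # count always matches the row
        db.execute(f'INSERT INTO "guild_{guild_id}" VALUES ({placeholders})', row)


async def add_character(db_path, guild_id, row):
    # in a real command you would `await ctx.defer()` first, then:
    await asyncio.to_thread(insert_character, db_path, guild_id, row)
```

`asyncio.to_thread` keeps the event loop responsive, so the bot can acknowledge the interaction while the insert runs.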
| <python><sqlite><discord.py> | 2023-07-11 16:33:50 | 2 | 359 | Chris Ze Third |
76,664,053 | 16,235,223 | does anyone understand where does python scientific expression fail when dealing with very large integers? | <p>The breaking example is this:</p>
<pre><code>>> (1000000000000000000 + 1e6 - 1) == (1000000000000000000 + 1e6)
True
>> (1000000000000000000 + 1000000 - 1) == (1000000000000000000 + 1000000)
False
</code></pre>
<p>I would love it if someone could shed some light on which mechanism in Python causes this behavior. My guess is that it has something to do with 64-bit integers, 32-bit integers, and the fact that scientific notation actually produces a float.</p>
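The int literals are exact arbitrary-precision integers, but `1e6` is a 64-bit float, so the whole expression is evaluated in floats. Near 10^18 the spacing between adjacent doubles (the ulp) is 128, so subtracting 1 cannot change the value:

```python
import math

big = 10**18

# pure int arithmetic is exact, so the two sides differ
assert big + 10**6 - 1 != big + 10**6

# mixing in the float 1e6 promotes everything to 64-bit floats;
# the gap between representable doubles near 1e18 is 128, so "- 1" is lost
assert math.ulp(1e18) == 128.0
assert (big + 1e6 - 1) == (big + 1e6)
```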
| <python><python-3.x> | 2023-07-11 16:32:01 | 3 | 309 | michaelgbj |
76,664,025 | 1,263,935 | XGBoost.save_model loses feature_names when saving | <p>I would like to recover the feature names from my saved models, however it seems this info is lost when the model is saved. Steps:</p>
<ul>
<li>train model</li>
<li>feature names exist</li>
<li>save model using <code>Booster.save_model</code></li>
<li>load model using <code>Booster.load_model</code></li>
<li>feature names of loaded model are <code>None</code></li>
</ul>
<p>Code to reproduce:</p>
<pre class="lang-py prettyprint-override"><code>
import pandas as pd
import xgboost as xgb
# Prep data
data = {'col_1': [3, 2, 1, 0], 'col_2': [1, 2, 3, 4], 'label': [1, 1, 0, 0]}
df = pd.DataFrame.from_dict(data)
y_train = df.pop('label')
x_train = df
# Train model
classifier = xgb.XGBClassifier(
max_depth=3,
learning_rate=0.1,
n_estimators=3)
classifier.fit(x_train, y_train)
print(f'Trained, features: {classifier.get_booster().feature_names}')
# Save model
filename = 'test_model.bst'
classifier.save_model(filename)
# Load model
bst_model = xgb.Booster()
bst_model.load_model(filename)
print(f'Loaded: {bst_model.feature_names}')
</code></pre>
<p>How can I include the feature names in my model?</p>
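One workaround — an assumption on my part, not an official XGBoost feature — is to persist the training columns in a sidecar file next to the saved model and reassign them after loading (`Booster.feature_names` accepts assignment in recent XGBoost versions):

```python
import json


def save_feature_names(path, feature_names):
    """Persist the training columns next to the saved model file."""
    with open(path, "w") as f:
        json.dump(feature_names, f)


def load_feature_names(path):
    with open(path) as f:
        return json.load(f)


# usage sketch alongside the model:
# classifier.save_model("test_model.bst")
# save_feature_names("test_model.features.json", list(x_train.columns))
# ...
# bst_model.load_model("test_model.bst")
# bst_model.feature_names = load_feature_names("test_model.features.json")
```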
| <python><xgboost> | 2023-07-11 16:29:09 | 1 | 2,367 | Paul |
76,664,017 | 5,942,100 | Create multiple calculated fields which are referenced to other columns, based on conditions using Pandas | <p>I wish to create multiple calculated fields which are referenced to other columns, based on conditions using Pandas</p>
<p><strong>Data</strong></p>
<pre><code>ID status used
AA False 3
BB True 2
</code></pre>
<p><strong>Desired</strong></p>
<pre><code>ID status used new
AA False 3 0.6
BB True 2 2
</code></pre>
<p><strong>Doing</strong></p>
<pre><code>df['new'] = df.apply(lambda row: row['used'] * 0.2 if row['status'] ==
'False' else row['used'] * 1, axis = 1)
</code></pre>
<p>Any suggestion is appreciated. The code above is not calculating off of the correct field.</p>
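Note that the `status` column holds booleans, so the comparison against the string `'False'` never matches. A vectorized alternative that avoids both the string comparison and the row-wise `apply` is `numpy.where`:

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({"ID": ["AA", "BB"], "status": [False, True], "used": [3, 2]})

# used * 0.2 where status is False, used * 1 otherwise
df["new"] = np.where(df["status"], df["used"], df["used"] * 0.2)
```

This reproduces the desired column (`0.6` for `AA`, `2` for `BB`) without the per-row lambda.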
| <python><pandas><numpy> | 2023-07-11 16:27:13 | 1 | 4,428 | Lynn |
76,663,943 | 9,658,149 | How to use OpenAI in streaming mode with Gradio and Langchain Agent with each Gradio user having a session with memory? | <p>I want to do two things:</p>
<ol>
<li>Use OpenAI API in streaming mode (Using Langchain Agent).</li>
<li>Print the answer in streaming mode using Gradio</li>
</ol>
<p>I know how to generate streaming messages using Gradio, and I know how to activate streaming mode for OpenAI; However, I don't know how to adapt the code to accomplish both things at the same time. Here is my base code that needs modification; I don't know how to play with generators and callbacks.</p>
<pre><code>from langchain.agents import load_tools
from langchain.agents import initialize_agent
from langchain.agents import AgentType
from langchain.chat_models import ChatOpenAI
import gradio as gr
import time
from langchain.callbacks.base import BaseCallbackHandler
from dotenv import load_dotenv
class MyCallbackHandler(BaseCallbackHandler):
def on_llm_new_token(self, token, **kwargs) -> None:
print(token)
def myfun(input):
llm = ChatOpenAI(streaming=True, callbacks=[MyCallbackHandler()])
tools = load_tools(["wikipedia", "llm-math"], llm=llm)
agent = initialize_agent(
tools, llm, agent=AgentType.CONVERSATIONAL_REACT_DESCRIPTION, verbose=False
)
return agent.run(input)
with gr.Blocks() as demo:
chatbot = gr.Chatbot()
msg = gr.Textbox()
clear = gr.Button("Clear")
def user(user_message, history):
return "", history + [[user_message, None]]
def bot(history):
bot_message = myfun(input=history[-1][0])
history[-1][1] = ""
for character in bot_message:
history[-1][1] += str(character)
time.sleep(0.03)
yield history
msg.submit(user, [msg, chatbot], [msg, chatbot], queue=False).then(
bot, chatbot, chatbot
)
clear.click(lambda: None, None, chatbot, queue=False)
demo.queue()
demo.launch()
</code></pre>
<p>I want to add memory to the agent, and each Gradio user has a session with memory. How can I do that with the States in Gradio?</p>
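The usual bridge between a token callback and a generator is a queue plus a worker thread: the callback pushes tokens, and the Gradio `bot` function pops and yields them. A sketch with no LangChain or Gradio dependency — the handler method name simply mirrors `on_llm_new_token`, and the agent stands in for `agent.run`:

```python
import queue
import threading


class QueueTokenHandler:
    """Collects streamed tokens so another thread can consume them lazily."""
    _DONE = object()  # sentinel marking end of stream

    def __init__(self):
        self.tokens = queue.Queue()

    def on_llm_new_token(self, token, **kwargs):
        self.tokens.put(token)

    def on_llm_end(self, *args, **kwargs):
        self.tokens.put(self._DONE)


def stream_answer(handler, run_agent):
    """Run the (blocking) agent in a thread and yield tokens as they arrive."""
    worker = threading.Thread(target=run_agent)
    worker.start()
    while True:
        token = handler.tokens.get()
        if token is handler._DONE:
            break
        yield token
    worker.join()
```

Inside `bot`, yielding from such a generator replaces the character-by-character loop, and each Gradio session can hold its own handler/memory pair via `gr.State`.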
| <python><openai-api><gradio> | 2023-07-11 16:15:25 | 0 | 2,097 | Eric Bellet |
76,663,885 | 13,799,627 | How to add tags on TrainingJob cloudwatch log stream in sagemaker notebooks? | <p>I use an AWS SageMaker notebook to train a TensorFlow ML model using my own entrypoint script.<br />
I need to tag my training with some information which I'm currently doing like this:</p>
<pre class="lang-py prettyprint-override"><code>
import tensorflow as tf
import sagemaker
from sagemaker import get_execution_role
sagemaker_session = sagemaker.Session()
region = sagemaker_session.boto_session.region_name
from sagemaker.tensorflow import TensorFlow
tags = [{'Key': 'Project',
'Value': 'my-project-1'
}]
tf_estimator = TensorFlow(entry_point='train_mlp.py',
output_path='s3://path/to/my/output',
model_name='mlp',
role=get_execution_role(),
instance_count=1,
instance_type='ml.c5.4xlarge',
framework_version=tf.__version__,
py_version='py310',
script_mode=True,
base_job_name='my-training-1',
tags=tags,
disable_profiler=True,
hyperparameters={
'epochs': 1
}
)
tf_estimator.fit()
</code></pre>
<p>When the training starts, I can go into the cloudwatch UI and see my logs for <code>my-training-1-<timestamp></code>.<br />
Now my question is: how can I add tags to this Log Stream directly from the notebook instance, without going into the CloudWatch UI, if I need to add some tags similar to the tags for the training job?</p>
<p>Thanks for any help.</p>
| <python><amazon-web-services><amazon-cloudwatch><amazon-sagemaker> | 2023-07-11 16:08:13 | 1 | 535 | Crysers |
76,663,778 | 1,014,217 | TypeError: issubclass() arg 1 must be a class when using langchain in azure | <p>I have a simple python app with streamlit and langchain, I am deploying this to Azure via CI/CD with the following YAML definition</p>
<pre><code>stages:
- stage: Build
displayName: Build stage
jobs:
- job: BuildJob
pool:
vmImage: $(vmImageName)
steps:
- task: UsePythonVersion@0
inputs:
versionSpec: '$(pythonVersion)'
displayName: 'Use Python $(pythonVersion)'
- script: |
python -m venv antenv
source antenv/bin/activate
python -m pip install --upgrade pip
pip install setup streamlit
pip install --target="./.python_packages/lib/site-packages" -r ./requirements.txt
workingDirectory: $(projectRoot)
displayName: "Install requirements"
- task: ArchiveFiles@2
displayName: 'Archive files'
inputs:
rootFolderOrFile: '$(projectRoot)'
includeRootFolder: false
archiveType: zip
archiveFile: $(Build.ArtifactStagingDirectory)/$(Build.BuildId).zip
replaceExistingArchive: true
- upload: $(Build.ArtifactStagingDirectory)/$(Build.BuildId).zip
displayName: 'Upload package'
artifact: drop
- stage: Deploy
displayName: 'Deploy Web App'
dependsOn: Build
condition: succeeded()
jobs:
- deployment: DeploymentJob
pool:
vmImage: $(vmImageName)
environment: $(environmentName)
strategy:
runOnce:
deploy:
steps:
- task: UsePythonVersion@0
inputs:
versionSpec: '$(pythonVersion)'
displayName: 'Use Python version'
- task: AzureAppServiceSettings@1
displayName: 'Set App Settings'
inputs:
azureSubscription: 'AzureAIPocPrincipal'
appName: 'test'
resourceGroupName: 'AzureAIPoc'
appSettings: |
[
{
"name": "ENABLE_ORYX_BUILD",
"value": 1
},
{
"name": "SCM_DO_BUILD_DURING_DEPLOYMENT",
"value": 1
},
{
"name": "POST_BUILD_COMMAND",
"value": "pip install -r ./requirements.txt"
}
]
- task: AzureWebApp@1
displayName: 'Deploy Azure Web App : {{ webAppName }}'
inputs:
azureSubscription: 'AzureAIPocPrincipal'
appType: 'webAppLinux'
deployToSlotOrASE: true
resourceGroupName: 'AzureAIPoc'
slotName: 'production'
appName: 'test'
package: '$(Pipeline.Workspace)/drop/$(Build.BuildId).zip'
startUpCommand: 'python -m streamlit run app/home.py --server.port 8000 --server.address 0.0.0.0'
</code></pre>
<p>My requirements file is:</p>
<pre><code>langchain==0.0.225
streamlit
openai
python-dotenv
pinecone-client
streamlit-chat
chromadb
tiktoken
pymssql
typing-inspect==0.8.0
typing_extensions==4.5.0
</code></pre>
<p>However I am getting the following error:</p>
<pre><code>TypeError: issubclass() arg 1 must be a class
Traceback:
File "/tmp/8db82251b0e58bc/antenv/lib/python3.11/site-packages/streamlit/runtime/scriptrunner/script_runner.py", line 552, in _run_script
exec(code, module.__dict__)
File "/tmp/8db82251b0e58bc/app/pages/xxv0.2.py", line 6, in <module>
import langchain
File "/tmp/8db82251b0e58bc/antenv/lib/python3.11/site-packages/langchain/__init__.py", line 6, in <module>
from langchain.agents import MRKLChain, ReActChain, SelfAskWithSearchChain
File "/tmp/8db82251b0e58bc/antenv/lib/python3.11/site-packages/langchain/agents/__init__.py", line 2, in <module>
from langchain.agents.agent import (
File "/tmp/8db82251b0e58bc/antenv/lib/python3.11/site-packages/langchain/agents/agent.py", line 26, in <module>
from langchain.chains.base import Chain
File "/tmp/8db82251b0e58bc/antenv/lib/python3.11/site-packages/langchain/chains/__init__.py", line 2, in <module>
from langchain.chains.api.base import APIChain
File "/tmp/8db82251b0e58bc/antenv/lib/python3.11/site-packages/langchain/chains/api/base.py", line 13, in <module>
from langchain.chains.api.prompt import API_RESPONSE_PROMPT, API_URL_PROMPT
File "/tmp/8db82251b0e58bc/antenv/lib/python3.11/site-packages/langchain/chains/api/prompt.py", line 2, in <module>
from langchain.prompts.prompt import PromptTemplate
File "/tmp/8db82251b0e58bc/antenv/lib/python3.11/site-packages/langchain/prompts/__init__.py", line 12, in <module>
from langchain.prompts.example_selector import (
File "/tmp/8db82251b0e58bc/antenv/lib/python3.11/site-packages/langchain/prompts/example_selector/__init__.py", line 4, in <module>
from langchain.prompts.example_selector.semantic_similarity import (
File "/tmp/8db82251b0e58bc/antenv/lib/python3.11/site-packages/langchain/prompts/example_selector/semantic_similarity.py", line 8, in <module>
from langchain.embeddings.base import Embeddings
File "/tmp/8db82251b0e58bc/antenv/lib/python3.11/site-packages/langchain/embeddings/__init__.py", line 29, in <module>
from langchain.embeddings.sagemaker_endpoint import SagemakerEndpointEmbeddings
File "/tmp/8db82251b0e58bc/antenv/lib/python3.11/site-packages/langchain/embeddings/sagemaker_endpoint.py", line 7, in <module>
from langchain.llms.sagemaker_endpoint import ContentHandlerBase
File "/tmp/8db82251b0e58bc/antenv/lib/python3.11/site-packages/langchain/llms/__init__.py", line 52, in <module>
from langchain.llms.vertexai import VertexAI
File "/tmp/8db82251b0e58bc/antenv/lib/python3.11/site-packages/langchain/llms/vertexai.py", line 14, in <module>
from langchain.utilities.vertexai import (
File "/tmp/8db82251b0e58bc/antenv/lib/python3.11/site-packages/langchain/utilities/__init__.py", line 3, in <module>
from langchain.utilities.apify import ApifyWrapper
File "/tmp/8db82251b0e58bc/antenv/lib/python3.11/site-packages/langchain/utilities/apify.py", line 5, in <module>
from langchain.document_loaders import ApifyDatasetLoader
File "/tmp/8db82251b0e58bc/antenv/lib/python3.11/site-packages/langchain/document_loaders/__init__.py", line 43, in <module>
from langchain.document_loaders.embaas import EmbaasBlobLoader, EmbaasLoader
File "/tmp/8db82251b0e58bc/antenv/lib/python3.11/site-packages/langchain/document_loaders/embaas.py", line 54, in <module>
class BaseEmbaasLoader(BaseModel):
File "/tmp/8db82251b0e58bc/antenv/lib/python3.11/site-packages/pydantic/main.py", line 204, in __new__
fields[ann_name] = ModelField.infer(
^^^^^^^^^^^^^^^^^
File "/tmp/8db82251b0e58bc/antenv/lib/python3.11/site-packages/pydantic/fields.py", line 488, in infer
return cls(
^^^^
File "/tmp/8db82251b0e58bc/antenv/lib/python3.11/site-packages/pydantic/fields.py", line 419, in __init__
self.prepare()
File "/tmp/8db82251b0e58bc/antenv/lib/python3.11/site-packages/pydantic/fields.py", line 539, in prepare
self.populate_validators()
File "/tmp/8db82251b0e58bc/antenv/lib/python3.11/site-packages/pydantic/fields.py", line 801, in populate_validators
*(get_validators() if get_validators else list(find_validators(self.type_, self.model_config))),
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/tmp/8db82251b0e58bc/antenv/lib/python3.11/site-packages/pydantic/validators.py", line 696, in find_validators
yield make_typeddict_validator(type_, config)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/tmp/8db82251b0e58bc/antenv/lib/python3.11/site-packages/pydantic/validators.py", line 585, in make_typeddict_validator
TypedDictModel = create_model_from_typeddict(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/tmp/8db82251b0e58bc/antenv/lib/python3.11/site-packages/pydantic/annotated_types.py", line 35, in create_model_from_typeddict
return create_model(typeddict_cls.__name__, **kwargs, **field_definitions)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/tmp/8db82251b0e58bc/antenv/lib/python3.11/site-packages/pydantic/main.py", line 972, in create_model
return type(__model_name, __base__, namespace)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/tmp/8db82251b0e58bc/antenv/lib/python3.11/site-packages/pydantic/main.py", line 204, in __new__
fields[ann_name] = ModelField.infer(
^^^^^^^^^^^^^^^^^
File "/tmp/8db82251b0e58bc/antenv/lib/python3.11/site-packages/pydantic/fields.py", line 488, in infer
return cls(
^^^^
File "/tmp/8db82251b0e58bc/antenv/lib/python3.11/site-packages/pydantic/fields.py", line 419, in __init__
self.prepare()
File "/tmp/8db82251b0e58bc/antenv/lib/python3.11/site-packages/pydantic/fields.py", line 534, in prepare
self._type_analysis()
File "/tmp/8db82251b0e58bc/antenv/lib/python3.11/site-packages/pydantic/fields.py", line 638, in _type_analysis
elif issubclass(origin, Tuple): # type: ignore
^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/python/3.11.3/lib/python3.11/typing.py", line 1570, in __subclasscheck__
return issubclass(cls, self.__origin__)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
</code></pre>
<p>I am not copying the app script here, as the code works locally; I think it's something more related to the Azure App Service plan environment or the venv setup in the YAML file.</p>
| <python><azure><pip><python-venv><langchain> | 2023-07-11 15:55:27 | 0 | 34,314 | Luis Valencia |
76,663,728 | 5,867,094 | Azure table storage python client returns 'not implemented' when querying the entity | <p>I have this code:</p>
<pre><code>table_client = TableClient(
endpoint=AZURE_TABLE_URL,
credential=AzureSasCredential(AZURE_TABLE_SAS_TOKEN),
table_name=AZURE_TABLE_NAME
)
entities = table_client.get_entity(partition_key=my_partition_key, row_key=my_row_key)
</code></pre>
<p>which returns</p>
<blockquote>
<p>The requested operation is not implemented on the specified resource</p>
</blockquote>
<p>I am sure this SAS token is granted all four permissions, and I can use it to create entities. I saw several similar questions for C# or JS, and I tried to follow them, but none of them works for me.</p>
| <python><azure-table-storage> | 2023-07-11 15:49:02 | 1 | 891 | Tiancheng Liu |
76,663,674 | 9,465,029 | pandas rolling cumulative weighted average | <p>I am trying to compute a rolling cumulative weighted average.</p>
<pre><code>df["cum_wt_avg"] = df['val'].mul(df['wt']).cumsum().div(df['wt'].cumsum())
</code></pre>
<p>This gives me the cumulative weighted average. How can I apply the same computation on a rolling window with a period of 2, for instance?</p>
<pre><code>df :
val wt
1 100 2
2 300 3
3 200 5
required df :
val wt cum_wt_avg rolling_cum_wt_avg_2periods
1 100 2 100 100
2 300 3 220 220
3 200 5 210 237.5
</code></pre>
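Since a weighted mean is a ratio of two sums, the rolling version falls out of two rolling sums; `min_periods=1` reproduces the partial window on the first row, matching the required column:

```python
import pandas as pd

df = pd.DataFrame({"val": [100, 300, 200], "wt": [2, 3, 5]}, index=[1, 2, 3])

# rolling numerator and denominator over a 2-period window
num = df["val"].mul(df["wt"]).rolling(2, min_periods=1).sum()
den = df["wt"].rolling(2, min_periods=1).sum()
df["rolling_cum_wt_avg_2periods"] = num / den
```

Changing the window size is just the first argument to `rolling`.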
| <python><pandas><rolling-computation> | 2023-07-11 15:40:51 | 1 | 631 | Peslier53 |
76,663,511 | 16,770,846 | If value in range, dispatch a function for the condition | <p>This question has been asked on SO and an answer was given <a href="https://stackoverflow.com/a/13628825/16770846">here</a>. However it doesnβt fully solve my problem as my problem is a little bit more complex.</p>
<p>I have a studentβs total score which represents an integer <code>0 <= x <= 100</code>. Depending on where the studentβs total score lies, a given grade will be issued and some logic will run also.</p>
<p>Of course I could write it like so:</p>
<pre><code>if 0 <= grade <= 39:
do_f()
elif 40 <= grade <= 60:
do_b()
elif 60 <= grade <= 100:
do_a()
</code></pre>
<p>You can see the problem with this already. What if more grades are added? It becomes an if else entanglement.</p>
<p>How can I rewrite this using a dispatch table or any other elegant and efficient way?</p>
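Since the bands partition a sorted range, `bisect` gives a clean dispatch table: keep the inclusive upper edge of each band (except the last) in a sorted list and index a parallel list of handlers. The handler bodies below are hypothetical placeholders:

```python
from bisect import bisect_left


def do_f():  # placeholder bodies for illustration
    return "f"


def do_b():
    return "b"


def do_a():
    return "a"


BOUNDS = [39, 60]                 # inclusive upper edges; 61-100 falls through
HANDLERS = [do_f, do_b, do_a]


def dispatch(grade):
    return HANDLERS[bisect_left(BOUNDS, grade)]()
```

Adding a grade band is now one new bound plus one new handler, with no change to `dispatch`.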
| <python> | 2023-07-11 15:21:57 | 4 | 4,252 | Chukwujiobi Canon |
76,663,422 | 607,407 | How to use python generic hints to indicate value type in parent class? | <p>I have a static member that will be some implementation of the <code>ToolInterface</code> interface. I inherit from the class that has that member and want to specify which implementation is used.</p>
<p>I tried this:</p>
<h4><code>tool_test_base.py</code></h4>
<pre><code>from typing import Generic, TypeVar
from abstract_tool import ToolInterface
TToolImpl = TypeVar("TToolImpl", ToolInterface)
class ToolTestBase(Generic[TToolImpl]):
tool_impl = None # type: TToolImpl
@test_case
    def test_do_something(self):
assert self.tool_impl.do_something() == "hello"
</code></pre>
<h4><code>abstract_tool.py</code></h4>
<pre><code>class ToolInterface:
    def do_something(self):
assert False, "Tool method not implemented"
</code></pre>
<p>Now I have a class that extends the tool test and also provides a specific implementation:</p>
<h4><code>my_tool.py</code></h4>
<pre><code>from abstract_tool import ToolInterface
class MyTool(ToolInterface):
    def do_something(self):
        return "hello"
    def do_something_special(self):
        return "π"
</code></pre>
<h4><code>my_tool_test.py</code></h4>
<pre><code>from tool_test_base import ToolTestBase
from my_tool import MyTool
class MyToolTest(ToolTestBase[MyTool]):
    def test_something_special(self):
assert self.tool_impl.do_something_special() == "π"
</code></pre>
<p>Now, by writing <code>ToolTestBase[MyTool]</code>, I was hoping to indicate that <code>TToolImpl</code> is <code>MyTool</code> and not just <code>ToolInterface</code>. However, it is not working as intended: I see <code>tool_impl</code> hints for <code>ToolInterface</code> but not for <code>MyTool</code></p>
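For reference, a `TypeVar` needs either a `bound=` keyword or at least two constraints — `TypeVar("T", ToolInterface)` raises a `TypeError` at runtime. A runnable sketch of the bound-based version (the runtime wiring works as shown; whether an IDE narrows `self.tool_impl` to `MyTool` depends on its generics support):

```python
from typing import Generic, TypeVar


class ToolInterface:
    def do_something(self) -> str:
        raise NotImplementedError


# bound=, not a lone constraint: TypeVar("T", ToolInterface) raises TypeError
TToolImpl = TypeVar("TToolImpl", bound=ToolInterface)


class ToolTestBase(Generic[TToolImpl]):
    tool_impl: TToolImpl  # annotated, assigned by concrete subclasses


class MyTool(ToolInterface):
    def do_something(self) -> str:
        return "hello"

    def do_something_special(self) -> str:
        return "special"


class MyToolTest(ToolTestBase[MyTool]):
    pass
```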
| <python><python-typing> | 2023-07-11 15:10:30 | 1 | 53,877 | TomΓ‘Ε‘ Zato |
76,663,192 | 210,867 | How to differentiate between client omitting field or setting field to None? | <p>Assuming the following:</p>
<pre><code>class User(BaseModel):
name : str
bio : Optional[str]
class UpdateUser(BaseModel):
    name : Optional[str] = Field(None)
    bio : Optional[str] = Field(None)
@app.put("/users/{id}", response_model=User)
async def put_user(pk:int, payload:UpdateUser) -> User:
...
</code></pre>
<p>If omitted fields are simply defaulted to None in <code>payload</code>, how can I tell the difference between a client omitting the field (because they don't want to update it) vs explicitly setting the field to <code>None</code>?</p>
<p>Restated a different way: How do I implement an API that doesn't require every field to be specified on an update, but is still able to differentiate between omitted fields and an explicit update of a field to <code>None</code> (when appropriate)?</p>
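Pydantic records which fields the client actually supplied (`__fields_set__`; `.dict(exclude_unset=True)` in v1, `.model_dump(exclude_unset=True)` in v2), which is the usual way to separate "omitted" from "explicitly None". The underlying idea is a sentinel default — sketched here in plain Python so it stands alone, with hypothetical field names matching the question:

```python
_UNSET = object()  # sentinel: "the client never sent this field"


class UpdateUser:
    def __init__(self, **data):
        self.name = data.get("name", _UNSET)
        self.bio = data.get("bio", _UNSET)

    def changed_fields(self):
        """Only the fields that were explicitly supplied, None included."""
        return {k: v for k, v in vars(self).items() if v is not _UNSET}


payload = UpdateUser(bio=None)        # client explicitly nulled bio, omitted name
assert payload.changed_fields() == {"bio": None}
```

The update handler then iterates only over `changed_fields()`, so omitted fields are never touched.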
| <python><fastapi><pydantic> | 2023-07-11 14:46:47 | 0 | 8,548 | odigity |
76,663,144 | 7,347,925 | Pandas update Dataframes while keep some columns as the old one | <p>I have two DataFrames and want to update the old one by the new one. However, I want to keep some specific columns as old values.</p>
<p>Here's a simple example:</p>
<pre><code>data_old = {'name': ['name1', 'name2', 'name3'],
'datetime': ['2022-10-25 14:00', '2022-10-10 00:00', np.nan],
'value': [1, 2, np.nan]}
data_new = {'name': ['name1', 'name2', 'name2', 'name3'],
'datetime': ['2022-10-25 14:00', '2022-10-10 00:00', '2022-10-10 01:00', '2022-10-10 22:00'],
'value': [1, 3, 10, 8]}
df_old = pd.DataFrame(data=data_old)
df_new = pd.DataFrame(data=data_new)
</code></pre>
<pre><code>df_old
name datetime value
0 name1 2022-10-25 14:00 1
1 name2 2022-10-10 00:00 2
2 name3 NaN NaN
df_new
name datetime value
0 name1 2022-10-25 14:00 1
1 name2 2022-10-10 00:00 3
2 name2 2022-10-10 01:00 10
3 name3 2022-10-10 22:00 8
</code></pre>
<p>I have tried <code>df_old.merge(df_new, how='right')</code>, it would overwrite the value of <code>value</code> column. Is it possible to keep the <code>value</code> column as the old one if it exists, while updating other columns?</p>
<p>The result should be like this (The <code>value</code> of the second row should be <code>2</code> instead of <code>3</code>):</p>
<pre><code>
name datetime value
0 name1 2022-10-25 14:00 1
1 name2 2022-10-10 00:00 2
2 name2 2022-10-10 01:00 10
3 name3 2022-10-10 22:00 8
</code></pre>
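A left merge of the new frame against the old one on both keys, followed by `combine_first`, keeps the old value wherever it exists:

```python
import numpy as np
import pandas as pd

df_old = pd.DataFrame({"name": ["name1", "name2", "name3"],
                       "datetime": ["2022-10-25 14:00", "2022-10-10 00:00", np.nan],
                       "value": [1, 2, np.nan]})
df_new = pd.DataFrame({"name": ["name1", "name2", "name2", "name3"],
                       "datetime": ["2022-10-25 14:00", "2022-10-10 00:00",
                                    "2022-10-10 01:00", "2022-10-10 22:00"],
                       "value": [1, 3, 10, 8]})

# bring the old value in as value_old, then let it win wherever it is not NaN
out = df_new.merge(df_old, on=["name", "datetime"], how="left", suffixes=("", "_old"))
out["value"] = out["value_old"].combine_first(out["value"])
out = out.drop(columns="value_old")
```

This yields `value = [1, 2, 10, 8]` — row two keeps the old `2` instead of the new `3`, and unmatched new rows keep their own values.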
| <python><pandas><dataframe> | 2023-07-11 14:41:16 | 2 | 1,039 | zxdawn |
76,663,044 | 4,575,197 | How to pass variables to ProcessPoolExecutor, got type error | <p>I'm trying to modify my code to apply parallelism with <code>ProcessPoolExecutor</code>. As I read on the internet, I should use the <code>map()</code> function to pass my variables to the function, but it throws a <code>TypeError</code> because it needs iterables. Should I modify the method so it only gets one row, and then feed the variables through a for loop?</p>
<p>the Error:</p>
<pre><code> 186 def _get_chunks(*iterables, chunksize):
187 """ Iterates over zip()ed iterables in chunks. """
--> 188 it = zip(*iterables)
189 while True:
190 chunk = tuple(itertools.islice(it, chunksize))
TypeError: 'float' object is not iterable
</code></pre>
<p>my Code:</p>
<pre><code>with ProcessPoolExecutor() as executor:
df = executor.map(drop_dupplicate_rows, df,'title',0.7,chunksize=100)
def drop_dupplicate_rows(_df,_column,_cuttoff):
'''
drop Duplicate rows based on difflib library
Parameters
----------
df: DataFrame
the data frame
_column: string
the column name to be checked for duplicates
_cuttoff: int
the percentage based of which rows are dropped
Returns
----------
the cleaned dataframe
'''
for k,row in enumerate(_df[f'{_column}']):
repetition_list=list()
if not np.where(_df[f'{_column}']==''):
repetition_list=difflib.get_close_matches(row,_df[f'{_column}'],cutoff=_cuttoff)
if len(repetition_list)>=2:
print(f'row:{k},link: {_df.at[k,"url"]} due to: {_column} repetition_list: ',repetition_list)
_df.drop(index=k,inplace=True)
_df.reset_index(drop=True, inplace=True)
return _df
</code></pre>
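<p>For reference, a hedged sketch (with a toy worker, not the original <code>drop_dupplicate_rows</code>): <code>executor.map()</code> zips its iterable arguments in lock-step, which is why bare scalars like <code>'title'</code> and <code>0.7</code> raise the <code>TypeError</code>. One option is to bind the scalar settings with <code>functools.partial</code> and pass only an iterable of chunks.</p>

```python
from concurrent.futures import ProcessPoolExecutor
from functools import partial

def keep_long(chunk, cutoff):
    """Toy per-chunk worker: keep strings with at least `cutoff` characters."""
    return [s for s in chunk if len(s) >= cutoff]

def parallel_clean(rows, cutoff, workers=2):
    # Split the data into one chunk per worker; bind the scalar `cutoff`
    # with partial so map() only has to iterate over the chunks.
    chunks = [rows[i::workers] for i in range(workers)]
    worker = partial(keep_long, cutoff=cutoff)
    with ProcessPoolExecutor(max_workers=workers) as executor:
        merged = []
        for part in executor.map(worker, chunks):
            merged.extend(part)
        return merged

if __name__ == "__main__":
    print(parallel_clean(["ab", "abcdef", "xyz", "toolongtitle"], 3))
```

Note that row-dropping based on similarity across the whole DataFrame does not split cleanly into independent chunks; this only illustrates the <code>map()</code> calling convention.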
| <python><pandas><parallel-processing><typeerror><process-pool> | 2023-07-11 14:30:33 | 1 | 10,490 | Mostafa Bouzari |
76,662,972 | 513,904 | safely using python eval on non-arbitrary code | <p>I would like to create a simple yaml workflow that works as metadata in a yaml environment as below. The user will create these and submit them, mostly to organize a modest number of tasks (such as specifying a chain of anomaly detectors). Imports will be parsed with <code>importlib</code>. I was planning to use <code>newglobals=None</code> and populate <code>newlocals</code> using the imports and arguments, then call <code>eval(globals=newglobals,locals=newlocals)</code>. The workflow yaml would orchestrate work and create metadata in yaml which suits our needs and it is also easy to extend to non-python shell scripts.</p>
<p>My question concerns the use of <code>eval</code>. It isn't hard to find examples online of how malicious arbitrary code could be represented and run with yaml, e.g. with module=shutil, names='remove', expr='remove' and args = '/'.</p>
<p>However, the text is potentially non-arbitrary if the user uses this workflow tool to organize their own work and stores the yaml in trusted repos. Is there an incremental danger to the yaml/eval approach compared to python if the python and yaml/eval are both managed using the same type of security? After all, I expect our organization members not to execute a file that says <code>run os.shutil.remove('/')</code>. Are there additional dangers?</p>
<pre><code>imports:
- module: mymod
names:
- func1
steps:
- expr: 'func1(foo=foo) + 2'
args:
foo: 2
</code></pre>
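<p>The steps above could be evaluated with one common defense-in-depth measure (an editorial sketch with an assumed <code>run_step</code> helper): empty out <code>__builtins__</code> and supply only a whitelisted namespace. This is not a security boundary against hostile input (well-known escapes exist); it only guards against accidents in otherwise-trusted YAML.</p>

```python
def run_step(expr, allowed):
    """Evaluate one workflow step against a whitelisted namespace.

    `allowed` holds the imported names plus the step's args; builtins are
    emptied so stray references like `open` fail with NameError. This does
    NOT make eval safe against malicious expressions.
    """
    return eval(expr, {"__builtins__": {}}, dict(allowed))

# Mirrors the YAML example: func1 imported from mymod, args foo=2.
result = run_step("func1(foo=foo) + 2", {"func1": lambda foo: foo * 3, "foo": 2})
print(result)

try:
    run_step("open('/etc/passwd')", {})
except NameError as exc:
    print("blocked:", exc)
```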
| <python><security><eval> | 2023-07-11 14:22:14 | 1 | 1,473 | Eli S |
76,662,962 | 4,665,335 | Django migrations - order of execution | <p>I have two apps and</p>
<ul>
<li>Yesterday I decided to make a model (M) in app1. In app2, I decided to create a data migration and create an instance of that model through migrations. I ran <code>makemigrations</code> and committed.</li>
<li>Today, I changed my mind and decided to remove model M completely.</li>
</ul>
<p>Diagram to illustrate my migrations.</p>
<pre><code> myproject/
 ├── app1/
 │   ├── migrations/
 │   │   ├── app1_0001.py ──> Create model M.
 │   │   └── app1_0002.py ──> Drop model M.
 ├── app2/
 │   ├── migrations/
 │   │   └── app2_0001.py ──> Data migration for Model M.
</code></pre>
<p>However, when trying to <code>python manage.py test -v 2</code>, I get the following error when migration <code>app2_0001.py</code> is being applied.</p>
<pre><code>Applying app1.app1_0001... OK
Applying app1.app1_0002... OK
Applying app2.app2_0001... ERROR
LookupError: "App 'app1' doesn't have a 'M' model."
</code></pre>
<p>Why is <code>app2.app2_0001</code> executing after <code>app1.app1_0002</code> and not before it?
Here is the <code>app2.app2_0001.py</code> file:</p>
<pre class="lang-py prettyprint-override"><code># Generated by Django 3.2.7 on 2023-06-09 13:38
from django.db import migrations
def code(apps, schema_editor):
M = apps.get_model('app1', 'M') # this is the line that errors....
M.objects.create()
class Migration(migrations.Migration):
dependencies = [
('app1', 'app1.app1_0001'),
]
operations = [
migrations.RunPython(code),
]
</code></pre>
| <python><django><django-migrations> | 2023-07-11 14:21:14 | 1 | 400 | michal111 |
76,662,918 | 9,284,651 | Create months since current month column in pandas | <p>My DF looks like below:</p>
<pre><code>IndexData Week_number
2022-12-28 53
2022-12-29 53
2022-12-30 53
2022-12-31 53
2023-01-01 1
2023-01-02 1
2023-01-03 1
2023-01-04 1
.........
2023-02-27 9
2023-02-28 9
2023-03-01 9
2023-03-02 9
........
2023-03-29 13
2023-03-30 13
2023-03-31 13
</code></pre>
<p>I need to create another column that will looks like below:</p>
<pre><code>IndexData Week_number new_column
2022-12-28 53 -9
2022-12-29 53 -9
2022-12-30 53 -9
2022-12-31 53 -9
........
2023-01-03 1 -8
2023-01-04 1 -8
.........
2023-02-27 9 -1
2023-02-28 9 -1
2023-03-01 9 Current_month
2023-03-02 9 Current_month
........
2023-03-29 13 Current_month
2023-03-30 13 Current_month
2023-03-31 13 Current_month
</code></pre>
<p>The logic for new column is:</p>
<ul>
<li>it should take the last month in the data set and label it 'Current_month', and then, based on the index date, count week numbers backwards from the most recent to the oldest date. Do you have an idea how I could solve this?</li>
</ul>
<p>Regards</p>
| <python><pandas><dataframe><data-manipulation> | 2023-07-11 14:15:20 | 3 | 403 | Tmiskiewicz |
76,662,874 | 6,335,363 | How can I declare type definitions that extend another library in Python? | <p>I have recently written a Pytest plugin that exposes several <code>mark</code> functions, but I have noticed that when I write tests that use this plugin, the marks I created aren't recognised, and can't be checked by <code>mypy</code>.</p>
<pre class="lang-py prettyprint-override"><code>import pytest
# .my_mark is not an editor suggestion, and mypy provides no
# errors if I make a mistake with the calling logic
@pytest.mark.my_mark('some', 'args')
def test_thing():
...
</code></pre>
<p>As such, I want to create a <code>.pyi</code> file that extends the type definitions for Pytest to add type checking for my marks, <a href="https://stackoverflow.com/a/51298325/6335363">much like how it can be done in TypeScript</a>. However, I am unsure how to create this definition.</p>
<p>I've attempted to create a <code>pytest-stubs</code> library containing the definition for my marks, but doing so appears to delete all of the type definitions for everything in Pytest, causing many issues in other parts of my code. Adding a <code>from pytest import *</code> in this stub file does not improve things.</p>
<pre class="lang-py prettyprint-override"><code># pytest-stubs/__init__.pyi
from typing import Callable
# This overwrites the `mark` class entirely
# and all other properties in `pytest` are considered to be deleted
# by Mypy as well
class mark:
@staticmethod
def my_mark(arg1: str, arg2: str) -> Callable[[Callable], Callable]:
...
</code></pre>
| <python><python-typing> | 2023-07-11 14:08:29 | 1 | 2,081 | Maddy Guthridge |
76,662,854 | 15,992,173 | How do I set up a proxy when I get the message: "... [WinError 10061]" using Selenium Wire and Tor browser and set header capture in Python? | <p>To be able to capture headers (the Selenium library does not support this) I decided to use the Selenium Wire library. I found the following website: <a href="https://gitlab.torproject.org/woswos/CAPTCHA-Monitor/-/snippets/60" rel="nofollow noreferrer">https://gitlab.torproject.org/woswos/CAPTCHA-Monitor/-/snippets/60</a> that explains how to use the Selenium Wire library with the Tor browser. However, when I use the code from this page I get a connection error, quote "Error connecting to SOCKS5 proxy 127.0.0.1:9150: [WinError 10061]". I also can't set header capture according to the documentation of the Selenium Wire library: <a href="https://github.com/wkeeling/selenium-wire" rel="nofollow noreferrer">https://github.com/wkeeling/selenium-wire</a>. The documentation states that this should be according to the formula:</p>
<pre><code>def interceptor(request):
del request.headers['Referer'] # Remember to delete the header first
request.headers['Referer'] = 'some_referer' # Spoof the referer
driver.request_interceptor = interceptor
driver.get(...)
# All requests will now use 'some_referer' for the referer
</code></pre>
<p>However, it does not explain what <code>request</code> is, or why the function is passed as a reference (<code>interceptor</code>) rather than called (<code>interceptor()</code>).</p>
| <python><proxy><tor><request-headers><seleniumwire> | 2023-07-11 14:07:06 | 1 | 593 | Olgierd WiΕniewski |
76,662,800 | 17,915,481 | Upload file to google drive on specific folder with progress bar and python requests | <p>This is my code for uploading to google drive with python requests without using google-drive-api:</p>
<pre class="lang-py prettyprint-override"><code>import os
import sys
import json
import requests
import collections
from tqdm import tqdm
import requests_toolbelt
from requests.exceptions import JSONDecodeError
class ProgressBar(tqdm):
def update_to(self, n: int) -> None:
self.update(n - self.n)
def upload_file(access_token:str, filename:str, filedirectory:str, folder_id: str):
metadata = {
"name": filename,
"parents": [{"id": folder_id}]
}
session = requests.session()
with open(filedirectory, "rb") as fp:
files = collections.OrderedDict(data=("metadata", json.dumps(metadata), "application/json"), file=fp)
encoder = requests_toolbelt.MultipartEncoder(files)
with ProgressBar(
total=encoder.len,
unit="B",
unit_scale=True,
unit_divisor=1024,
miniters=1,
file=sys.stdout,
) as bar:
monitor = requests_toolbelt.MultipartEncoderMonitor(
encoder, lambda monitor: bar.update_to(monitor.bytes_read)
)
r = session.post(
"https://www.googleapis.com/upload/drive/v3/files?uploadType=multipart",
data=monitor,
allow_redirects=False,
json=metadata,
headers={"Authorization": "Bearer " + access_token, "Content-Type": monitor.content_type},
)
try:
resp = r.json()
print(resp)
except JSONDecodeError:
sys.exit(r.text)
upload_file(access_token, filename, filedirectory, "12345asdfghj")
</code></pre>
<p>But the above code does not upload the file to the specified folder. I also tried some changes but nothing worked. How can I fix this?</p>
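<p>Two editorial observations worth checking (hedged, not verified against this exact account): the Drive v3 API expects <code>"parents"</code> to be a flat list of folder-ID strings (the <code>[{"id": ...}]</code> shape belongs to the older v2 API), and requests ignores <code>json=</code> whenever <code>data=</code> is also passed, so the <code>json=metadata</code> argument above never reaches the request body anyway.</p>

```python
import json

# Hypothetical example values (not from the post):
folder_id = "12345asdfghj"
filename = "report.pdf"

# v3-style metadata: "parents" is a plain list of folder-ID strings.
metadata = {"name": filename, "parents": [folder_id]}
body = json.dumps(metadata)
print(body)
```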
| <python><python-3.x><python-requests><google-drive-api> | 2023-07-11 14:00:53 | 1 | 512 | jak bin |
76,662,685 | 6,119,331 | reading file in python Class turns file to 0 byte, even when closed | <p>I have this weird behaviour with a Python class where, when the text file is read, everything within it is deleted, leaving a 0-byte text file.</p>
<p>Firstly I have created an empty text file called 'file.txt'.</p>
<p>Then the python class is as follows</p>
<pre><code>class File:
def __init__(self):
with open('/home/pi/temp/file.txt', 'r') as f:
self.fileRead = f.readlines()
f.close()
self.fileWrite = open('/home/pi/temp/file.txt', 'w')
def create(self):
self.fileWrite.write('ABC')
self.fileWrite.close()
def read(self):
for line in self.fileRead:
print(line)
</code></pre>
<p>So to create the text file I called the <code>create()</code> method.</p>
<pre><code>x = File()
x.create()
</code></pre>
<p>The file returns 3 bytes. - OK since it is just 'ABC'</p>
<p><code>-rwxrwxrwx 1 pi pi 3 Jul 11 21:41 file.txt</code></p>
<p>Now the problem is when I read the file:</p>
<pre><code>x = File()
x.read()
</code></pre>
<p>The console prints <code>ABC</code> which is correct. But when I look at the file size, it has gone to <code>0</code> byte.</p>
<p><code>-rwxrwxrwx 1 pi pi 0 Jul 11 21:43 file.txt</code></p>
<p>I closed the file in the <code>__init__</code> method and have not written anything to it other than in <code>create</code>, which is never even called in the second step. So why does the file end up empty?</p>
<p>Any idea where I made the mistakes and any better ways to do this?</p>
<p>Thanks</p>
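<p>For reference, a small self-contained demonstration (an editorial addition) of the effect being asked about: merely opening a file in <code>'w'</code> mode truncates it at <code>open()</code> time, even if nothing is ever written through that handle, which is what <code>self.fileWrite = open(..., 'w')</code> in <code>__init__</code> does.</p>

```python
import os
import tempfile

tmp = tempfile.mkdtemp()
path = os.path.join(tmp, "file.txt")

with open(path, "w") as f:
    f.write("ABC")
size_after_create = os.path.getsize(path)

# Opening in 'w' mode truncates immediately, before any write happens.
handle = open(path, "w")
handle.close()
size_after_reopen = os.path.getsize(path)

print(size_after_create, size_after_reopen)
```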
| <python><class><methods> | 2023-07-11 13:48:20 | 2 | 453 | Gabriel |
76,662,494 | 10,781,340 | docker build taking too long for pandas | <p>I have this docker file.</p>
<pre><code>FROM python:3.10-alpine3.14
LABEL maintainer="Sagun Devkota"
ENV PYTHONUNBUFFERED 1
COPY ./requirements.txt /tmp/requirements.txt
WORKDIR /app
EXPOSE 8000
RUN python -m venv /py && \
/py/bin/pip install --upgrade pip && \
apk add --update --no-cache mariadb-connector-c-dev build-base mariadb-dev && \
/py/bin/pip install -r /tmp/requirements.txt && \
rm -rf /tmp
COPY ./app /app
RUN adduser \
--disabled-password \
--no-create-home \
django-user
ENV PATH="/py/bin:$PATH"
USER django-user
</code></pre>
<p>Whenever I run <code>docker-compose build</code>, it takes around 800 seconds to build the image.
But if I add pandas to my requirements.txt file, it takes around 3000 seconds. What is causing this delay?</p>
| <python><pandas><docker> | 2023-07-11 13:29:23 | 0 | 505 | Sagun Devkota |
76,662,113 | 7,745,011 | Allow very long "from .." imports for black in order to stick to max line length? | <p>In our projects, instead of using relative imports, we usually create the code as Python packages and install them with <code>pip install -e .</code> This is nice and prevents issues with import errors on different machines/IDEs.
However, for a bigger project with a deeper structure this means quite long import statements, which is where the problem with black starts. E.g. the following (dummy) example import</p>
<pre><code>from root.machine_vision.hardware_layer.cameras.some_camera_brand. \
camera_access_class import CameraAccessClass
</code></pre>
<p>would be changed by black into</p>
<pre><code>from root.machine_vision.hardware_layer.cameras.some_camera_brand.camera_access_class import (
CameraAccessClass
)
</code></pre>
<p>This would cause issues with the maximum line length.
So far I have only found <a href="https://stackoverflow.com/a/72717617/7745011">this post</a>, which suggests turning black off for the block in question with <code># fmt: off</code> and <code># fmt: on</code>. I'm wondering if it is possible to tell black to allow backslashes in these kinds of imports.</p>
<p>On the other hand, is there maybe another way to handle such long imports? For example some way to do "part imports", which would look something like</p>
<pre><code>import root.machine_vision.hardware_layer as hwl
from hwl.cameras.some_camera_brand.camera_access_class import CameraAccessClass
</code></pre>
<p>So far I have not found a satisfactory solution...</p>
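<p>Regarding the "part imports" idea, a quick editorial sketch (using <code>os.path</code> as a stand-in for the long package path) shows why <code>from hwl import ...</code> cannot work: <code>import ... as</code> only binds a local name and does not register the alias in <code>sys.modules</code>, so <code>from</code> statements can't see it. Attribute access on the alias works instead.</p>

```python
import sys
import os.path as hwl  # stand-in for: import root.machine_vision.hardware_layer as hwl

# The alias is just a local variable, not a registered module name:
assert "hwl" not in sys.modules

alias_import_fails = False
try:
    # `from hwl import join` at module level, spelled via exec for the demo.
    exec("from hwl import join")
except ModuleNotFoundError:
    alias_import_fails = True

# Attribute access on the alias is fine:
join = hwl.join
print(alias_import_fails, join("a", "b"))
```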
| <python><lint><python-black> | 2023-07-11 12:41:11 | 1 | 2,980 | Roland Deschain |
76,662,109 | 8,126,390 | python logging module output not working in multiprocessing process | <p>I can't get logging to work correctly until I initialize logging for a second time in my python module. There is no output (printing works fine), and if I manage to get output it is not applying the formatting I specified with</p>
<pre><code>logging.basicConfig(format='%(asctime)s - %(name)s - %(levelname)s - %(message)s')
</code></pre>
<p>I've tried adding test log entries in my <code>__init__</code> which display as expected. However, I cannot log anywhere else in my class. After trying different locations for my test entries and log init, I can log in my <code>__init__</code> but not any method afterwards.</p>
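<p>One common cause worth noting (an editorial sketch, not a confirmed diagnosis of this setup): <code>logging.basicConfig()</code> silently does nothing once the root logger already has handlers, for example when a first call or an imported library configured it earlier, which makes <code>format=</code> appear to be ignored. Passing <code>force=True</code> (Python 3.8+) replaces the existing handlers.</p>

```python
import logging

# Ensure a known starting state: one handler, level WARNING.
logging.basicConfig(level=logging.WARNING, force=True)

# This second call is silently ignored: handlers already exist, so neither
# the new format nor the new level is applied.
logging.basicConfig(format='%(asctime)s - %(name)s - %(levelname)s - %(message)s',
                    level=logging.INFO)
level_without_force = logging.getLogger().level

# force=True removes the existing root handlers and reconfigures.
logging.basicConfig(format='%(asctime)s - %(name)s - %(levelname)s - %(message)s',
                    level=logging.INFO, force=True)
level_with_force = logging.getLogger().level

print(level_without_force, level_with_force)
```

In worker processes spawned with <code>multiprocessing</code>, each child gets a fresh root logger, so the configuration may also need to run inside the worker.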
| <python><python-logging> | 2023-07-11 12:40:45 | 1 | 740 | Brian |
76,661,749 | 12,790,501 | How to prevent restart on function submited using dask.distributed.client.Client when the worker gets killed due to asyncio.exceptions.CancelledError | <p>How to prevent restart on function submitted using dask.distributed.client.Client when the worker gets killed due to asyncio.exceptions.CancelledError - e.g. OOM (instantiated objects are larger than worker memory)</p>
<p><strong>Expected behavior:</strong>
The <code>Client.submit</code> method has a <code>retries</code> parameter that defaults to 0, so I would expect 0 retries.</p>
<p><strong>Observed behavior:</strong>
4 retries of the submitted function before Dask gives up.</p>
<p><strong>How to reproduce:</strong></p>
<p>This function causes the issue - the worker gets killed due to the OOM but the function is launched repeatedly</p>
<pre class="lang-py prettyprint-override"><code>def oom_script():
def generate_data() -> bytes:
"""something that represents memory allocation"""
return os.urandom(10) + b":-) " * (100_000_000 // 4)
oom_list = []
while True:
        oom_list.append(generate_data())
client.submit(oom_script)
</code></pre>
<p>This function works as expected - the function raises exception and finishes.</p>
<pre class="lang-py prettyprint-override"><code>def exception_script():
raise Exception()
client.submit(exception_script)
</code></pre>
| <python><dask><dask-distributed> | 2023-07-11 11:59:25 | 1 | 343 | Petr Synek |
76,661,695 | 2,355,176 | BAC0 Lite not able to read from MS/TP controller | <p>I am using the following code to read from a regular controller with an IP address and it works fine, but when I provide the address of an MS/TP controller in <code>net:mac</code> notation, the following error is thrown:</p>
<pre><code>Traceback (most recent call last):
  File "read.py", line 22, in <module>
    main()
  File "read.py", line 13, in main
    value = bacnet.read(payload)
  File "/home/nvidia/.local/lib/python3.6/site-packages/BAC0/core/io/Read.py", line 219, in read
    "APDU Abort Reason : {}".format(reason)
BAC0.core.io.IOExceptions.NoResponseFromController: APDU Abort Reason : Timeout
</code></pre>
<p>The code is following</p>
<pre><code>import BAC0
bacnet = BAC0.lite(ip="192.168.1.20/24", port=47808)
def main():
payload = "50:7 analogOutput 1 presentValue" # This is not working
# payload = "192.168.1.40 analogOutput 1 presentValue" # This works fine
print("Requesting {}".format(payload))
print("The value is ", bacnet.read(payload))
if __name__ == '__main__':
main()
</code></pre>
| <python><bacnet><bac0> | 2023-07-11 11:51:46 | 0 | 2,760 | Zain Ul Abidin |
76,661,631 | 1,049,393 | How do I detect the frequency of an LFO signal using numpy.fft.fft? | <p>Suppose I have an LFO signal of frequency, say, 7 Hz. Also, the input signal contains other audio data (other waves). I can get only 100 last samples, each sample is a 16-bit signed value between the range -32768 to 32767. (Sampling occurs 11025 times a second.) (The device is Critter & Guitari ETC.)</p>
<p>My question is: how, using <code>numpy</code>, could I extract the magic number 7 (the frequency of the LFO signal) from the last 100 samples?</p>
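<p>A hedged sketch of the numpy side, with an important caveat: 100 samples at 11025 Hz span only about 9 ms, which is less than one 7 Hz period, and the FFT bin spacing fs/N is about 110 Hz there, far too coarse to resolve 7 Hz. A longer capture (one second is assumed below, giving 1 Hz bins) is needed.</p>

```python
import numpy as np

fs = 11025            # sample rate
n = fs                # assumed 1-second capture; 100 samples is not enough
t = np.arange(n) / fs

# Toy int16-scale signal: a 7 Hz LFO plus an unrelated 440 Hz audio tone.
signal = 20000 * np.sin(2 * np.pi * 7 * t) + 8000 * np.sin(2 * np.pi * 440 * t)

spectrum = np.abs(np.fft.rfft(signal))
freqs = np.fft.rfftfreq(n, d=1 / fs)

# Strongest non-DC bin; here the bin spacing fs / n is exactly 1 Hz.
lfo_hz = freqs[np.argmax(spectrum[1:]) + 1]
print(lfo_hz)
```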
| <python><numpy><audio><signal-processing><fft> | 2023-07-11 11:42:56 | 0 | 947 | coderodde |
76,661,415 | 1,020,139 | How to import _RetryDict from botocore-stubs? | <p>I need to import <code>_RetryDict</code> from botocore-stubs in order to satisfy mypy for the following code:</p>
<pre><code>from dataclasses import dataclass
from botocore.config import Config
@dataclass
class COMMON_CONSTANTS:
region = "eu-west-1"
retry_config = {
'max_attempts': 10,
'mode': 'adaptive',
}
conf = Config(retries=COMMON_CONSTANTS.retry_config)
</code></pre>
<p>I have tried <code>from botocore.stubs import _RetryConfig</code>, etc., but nothing seems to work?</p>
<p><a href="https://github.com/search?q=repo%3Ayoutype%2Fbotocore-stubs%20_RetryDict&type=code" rel="nofollow noreferrer">https://github.com/search?q=repo%3Ayoutype%2Fbotocore-stubs%20_RetryDict&type=code</a></p>
| <python><python-3.x><amazon-web-services><boto3> | 2023-07-11 11:16:17 | 0 | 14,560 | Shuzheng |
76,661,057 | 409,568 | Can I use setuptools_scm to check at runtime whether the installed version is behind the Git version? | <p>I have a Python package that is set up to retrieve the version number from the Git tag when it is installed locally. All of this works really well.</p>
<p>This is what I have in my pyproject.toml:</p>
<pre class="lang-py prettyprint-override"><code>[tool.setuptools_scm]
write_to = "my_package/version.py"
git_describe_command = "git describe --dirty --tags --long --match v* --first-parent"
</code></pre>
<p>Then I use this in the module that defines the command line client so that a user can run a command <code>my_package --version</code>:</p>
<pre class="lang-py prettyprint-override"><code>try:
from .version import version as __version__
from .version import version_tuple
except ImportError:
__version__ = "unknown version"
version_tuple = (0, 0, "unknown version")
</code></pre>
<p>All of this works fine: it extracts the version of the package from the latest tag on the GitLab repo and writes it to the file "version.py" from which the above import can obtain the version and show it to the user.</p>
<p>My question is: is there a way I could check at runtime (not during setup) whether a newer version exists on the Git repo so I can tell the user to update? I would insert this check before any command run by the CLI so I can output a warning to upgrade before running the command or perhaps even fail with an exception to prevent users running old versions of a package that is under heavy development (all of this is local, not in a public repo).</p>
<p>I might have missed it but couldn't find anything about this in the docs for setuptools_scm but I reckon this is a very common use case so probably it's possible somehow? Thanks!</p>
| <python><git><setuptools-scm> | 2023-07-11 10:32:03 | 1 | 730 | tospo |
76,661,046 | 11,998,115 | Can you have a progress bar for sorting a list? | <p>I have a list containing ~50k elements of a custom data type (the latter is probably not important for my question).
I'm sorting the list using Python's builtin <code>list.sort()</code> method.</p>
<pre class="lang-py prettyprint-override"><code>myList: List[Foo] = ...
myList.sort(key=Foo.x)
</code></pre>
<p>Since the sorting takes a couple of minutes, I would like to have a progress bar for the sorting process. I haven't found any solutions online.</p>
<p>Is this even possible? I'm aware sorting algorithms may be complex and it might not be possible to measure the sorting progress at all.
However, it would be fine for my use case to have a "rough" measurement, like 25%, 50%, 75%...</p>
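<p>One rough but cheap option (an editorial sketch with an assumed <code>sort_with_progress</code> helper): since CPython's <code>list.sort()</code> evaluates the key function exactly once per element, wrapping the key in a counter gives a progress signal for the key-extraction phase. The comparison phase itself remains unmeasured, so this is only the "rough" estimate described above.</p>

```python
def sort_with_progress(items, key, report_every=10000):
    """In-place sort that reports progress of the key-extraction phase."""
    total = len(items)
    done = 0

    def counting_key(item):
        nonlocal done
        done += 1
        if done % report_every == 0 or done == total:
            print(f"keys computed: {done}/{total} ({100 * done // total}%)")
        return key(item)

    items.sort(key=counting_key)
    return done

data = [{"x": i % 7} for i in range(50)]
calls = sort_with_progress(data, key=lambda d: d["x"], report_every=25)
print(calls)
```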
| <python><python-3.x><list><sorting> | 2023-07-11 10:30:56 | 2 | 856 | M.X |
76,660,974 | 5,767,535 | Multiplying n-D and 1-D numpy arrays along common dimension | <p>I have two numpy arrays <code>x</code> and <code>y</code> with shapes <code>(m,n,p)</code> and <code>m</code> respectively.</p>
<p>For all indices <code>i</code> between <code>0</code> and <code>m-1</code>, I want to multiply every element in the block <code>x[i,:,:]</code> by <code>y[i]</code>.</p>
<p><strong>How can I achieve this in a single line?</strong></p>
<p>Related question: <a href="https://stackoverflow.com/questions/58455293/how-to-multiply-numpy-1d-with-n-d-array">How to multiply numpy 1D with N-D array?</a></p>
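<p>A one-line sketch using broadcasting (an editorial addition): add two trailing axes to <code>y</code> so its length-<code>m</code> axis lines up with the first axis of <code>x</code>.</p>

```python
import numpy as np

m, n, p = 3, 4, 5
x = np.arange(m * n * p, dtype=float).reshape(m, n, p)
y = np.array([1.0, 10.0, 100.0])

# y[:, None, None] has shape (m, 1, 1), so it broadcasts over each (n, p) block.
z = x * y[:, None, None]   # equivalently: x * y.reshape(m, 1, 1)
print(z.shape)
```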
| <python><numpy><multidimensional-array> | 2023-07-11 10:21:17 | 1 | 2,343 | Daneel Olivaw |
76,660,920 | 3,611,164 | Rotate Plotly Scatter Markers to create Wind Arrows on a Timeseries Plot | <p>Trying to create a plot displaying the wind conditions with arrows visualizing the direction of the wind at a certain timestamp.
<a href="https://i.sstatic.net/KivV9.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/KivV9.png" alt="Target Visualization" /></a></p>
<p>The above visualization could be created by converting the datetime index and fixing the scale-ratio.</p>
<p>With plotly 5.13.0, markers have gotten the <code>angle</code> property (<a href="https://plotly.com/python-api-reference/generated/plotly.graph_objects.Scatter.html#plotly.graph_objects.scatter.Marker.angle" rel="nofollow noreferrer">Plotly Reference</a>), which motivated me to give it another go using a native datetime index and scatter markers.</p>
<p>It seems the angle property can only be set once per trace, so we'll need to loop.</p>
<h3>Sample Data Generation</h3>
<pre class="lang-py prettyprint-override"><code>import pandas as pd
import numpy as np
import plotly.express as px
import plotly.graph_objects as go
np.random.seed(seed=13)
idx = pd.date_range("2023-07-11 10:05", "2023-07-11 11:00", freq='5min')
wind_speeds = np.random.random_sample(size=len(idx)) * 3
wind_directions = np.random.randint(0, high=3600, size=len(idx)) / 10.0
df = pd.DataFrame({'datetime': idx, 'V_WIND': wind_speeds, 'DIR_WIND': wind_directions})
df['datetime'] = pd.to_datetime(df.datetime)
df = df.set_index('datetime')
</code></pre>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th style="text-align: left;">datetime</th>
<th style="text-align: left;">V_WIND</th>
<th style="text-align: left;">DIR_WIND</th>
</tr>
</thead>
<tbody>
<tr>
<td style="text-align: left;">2023-07-11 10:05:00</td>
<td style="text-align: left;">2.333107</td>
<td style="text-align: left;">323.2</td>
</tr>
<tr>
<td style="text-align: left;">2023-07-11 10:10:00</td>
<td style="text-align: left;">0.712624</td>
<td style="text-align: left;">296.3</td>
</tr>
<tr>
<td style="text-align: left;">2023-07-11 10:15:00</td>
<td style="text-align: left;">2.472836</td>
<td style="text-align: left;">219.7</td>
</tr>
<tr>
<td style="text-align: left;">2023-07-11 10:20:00</td>
<td style="text-align: left;">2.897248</td>
<td style="text-align: left;">142.7</td>
</tr>
</tbody>
</table>
</div><h3>Current Attempt</h3>
<pre class="lang-py prettyprint-override"><code>def plot_wind(df: pd.DataFrame) -> None:
fig = px.line(df, y="V_WIND")
for wind_row in df.iterrows():
fig.add_trace(
go.Scatter(
x=[wind_row[0]],
y=[wind_row[1]['V_WIND']],
showlegend=False,
marker=dict(
color='red',
size=20,
symbol='triangle-down-open',
angle=[wind_row[1]['DIR_WIND']],
)))
fig.show()
plot_wind(df)
</code></pre>
<p><a href="https://i.sstatic.net/Bt1qQ.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Bt1qQ.png" alt="Output Figure without rotated Markers" /></a>
The angle property has no effect at all. Even when fixing it to a value such as 25.0, the markers look identical. How can we get them to rotate?</p>
| <python><plotly> | 2023-07-11 10:15:48 | 1 | 366 | Fabitosh |
76,660,884 | 3,979,160 | OAuth error 403 (forbidden) on Youtube Api v3 with python | <p>I'm trying to use the Youtube API v3 with python from my local machine but I keep ending up with a 403 Forbidden error after the authorisation screen.</p>
<p>I'm running this Python example code:
<a href="https://developers.google.com/youtube/v3/docs/activities/list?apix=true" rel="nofollow noreferrer">https://developers.google.com/youtube/v3/docs/activities/list?apix=true</a></p>
<p>When running it from there, it works fine. When copy the code to a .py file and running it locally I first get an error:</p>
<blockquote>
<p>AttributeError: 'InstalledAppFlow' object has no attribute 'run_console'</p>
</blockquote>
<p>But I can fix that by changing:</p>
<pre><code>credentials = flow.run_console()
</code></pre>
<p>into:</p>
<pre><code>credentials = flow.run_local_server()
</code></pre>
<p>Then it opens my browser with a permission screen (the wrong browser, so I copy the URL to the one where the correct Google account is logged in) and click on Allow. I click the channel name and then I get 403 Forbidden...</p>
<p>I tried most of the things I found online:</p>
<ul>
<li>check if API is enabled</li>
<li>sets scopes</li>
<li>restrict api-key (although I use oauth)</li>
<li>Create new project / credentials</li>
</ul>
<p>I also checked this: <a href="https://developers.google.com/explorer-help/code-samples#python" rel="nofollow noreferrer">https://developers.google.com/explorer-help/code-samples#python</a></p>
<p>I actually got the same problem with a wrapper library (one that wraps the YouTube API in a more friendly, easy-to-use interface) that I tried before. Hence my trying to use the API directly.</p>
<p>What am I Missing here?</p>
| <python><youtube> | 2023-07-11 10:12:56 | 0 | 523 | Hasse |
76,660,687 | 3,251,645 | pybind11 convert python object to C++ struct equivalent | <p>I'm trying to use pybind11 to pass an object generated in Python to C++ and have it converted to the proper <code>struct</code> types. Here's my Python class:</p>
<pre><code>class Token(object):
def __repr__(self):
return f'Token(type: {self.type}, value: {self.value!r}, line: {self.lineno}, pos: {self.pos})'
@dataclass
class TreeNode:
type: NodeType
tok: Token = None
children: list = field(default_factory=list)
def __str__(self):
return f"{self.type}" + (f"[{self.tok.value}]" if self.tok else "")
</code></pre>
<p>Here are the c++ struct equivalents:</p>
<pre><code>struct Token
{
char *type;
char *value;
int lineno;
int pos;
};
struct TreeNode
{
NodeType type;
Token tok;
std::vector<TreeNode *> children;
};
</code></pre>
<p>The pybind11 module definition:</p>
<pre><code>PYBIND11_MODULE(c_codegen, m)
{
py::enum_<NodeType>(m, "NodeType")
.value("CompilationUnit", NodeType::CompilationUnit)
.value("Block", NodeType::Block)
.value("Statement", NodeType::Statement)
.value("Expression", NodeType::Expression)
.value("Condition", NodeType::Condition)
.value("Function", NodeType::Function)
.value("FunctionCall", NodeType::FunctionCall)
.value("Arguments", NodeType::Arguments)
.value("Identifier", NodeType::Identifier)
.value("Literal", NodeType::Literal)
.value("Operator", NodeType::Operator);
py::class_<Token>(m, "Token")
.def(py::init<>())
.def_readwrite("type", &Token::type)
.def_readwrite("value", &Token::value)
.def_readwrite("lineno", &Token::lineno)
.def_readwrite("pos", &Token::pos);
py::class_<TreeNode>(m, "TreeNode")
.def(py::init<>())
.def_readwrite("type", &TreeNode::type)
.def_readwrite("tok", &TreeNode::tok)
.def_readwrite("children", &TreeNode::children);
m.def("processTreeNode", &processTreeNode, "Test func");
}
void processTreeNode(TreeNode &root)
{
std::cout << root.type << std::endl;
}
</code></pre>
<p>I want to be able to send an instance of <code>TreeNode</code> class from Python to C++ to the <code>processTreeNode</code> function and get struct instance on the other end which I can work with. This compiles but I'm getting this error:</p>
<pre><code>Traceback (most recent call last):
File "<frozen runpy>", line 198, in _run_module_as_main
File "<frozen runpy>", line 88, in _run_code
File "/path/codegen_test.py", line 30, in <module>
c_codegen.processNodeType(NodeType.Statement)
TypeError: processNodeType(): incompatible function arguments. The following argument types are supported:
1. (arg0: c_codegen.NodeType) -> None
</code></pre>
<p>The error just says the types are incompatible but doesn't tell what's wrong with my Python type I'm sending. Do I need to change my C++ structs to classes for this to work? What am I doing wrong here?</p>
| <python><c++><pybind11> | 2023-07-11 09:48:32 | 1 | 2,649 | Amol Borkar |
76,660,680 | 549,195 | How to use fixture in combination of Django TestCase when unit testing with modified Django settings? | <p>In a Django project, I want to override a setting for a unit test. From the <a href="https://docs.djangoproject.com/en/4.2/topics/testing/tools/#django.test.SimpleTestCase.modify_settings" rel="nofollow noreferrer">Django documentation</a>, it seems the recommended way seems to use a <code>TestCase</code> and using methods like <code>modify_settings</code> or <code>override_settings</code>. This is the example given:</p>
<pre class="lang-py prettyprint-override"><code>from django.test import TestCase
class MiddlewareTestCase(TestCase):
def test_cache_middleware(self):
with self.modify_settings(
MIDDLEWARE={
"append": "django.middleware.cache.FetchFromCacheMiddleware",
"prepend": "django.middleware.cache.UpdateCacheMiddleware",
"remove": [
"django.contrib.sessions.middleware.SessionMiddleware",
"django.contrib.auth.middleware.AuthenticationMiddleware",
"django.contrib.messages.middleware.MessageMiddleware",
],
}
):
response = self.client.get("/")
# ...
</code></pre>
<p>When using a Django <code>TestCase</code> (and <code>override_settings</code>), my test fixtures annotated with <code>@pytest.fixture</code> are no longer recognized.</p>
<p>This is a minimal example that reproduce my issue.</p>
<pre class="lang-py prettyprint-override"><code>from django.test import TestCase, override_settings
from rest_framework.test import APIClient
@pytest.fixture
def my_client():
return APIClient()
@pytest.mark.usefixtures("my_client")
class TestLogin(TestCase):
def test_jwt_expiration(self):
simple_jwt_settings = settings.SIMPLE_JWT
# Force a very short lifetime for the access token
simple_jwt_settings['ACCESS_TOKEN_LIFETIME'] = datetime.timedelta(milliseconds=1)
with override_settings(SIMPLE_JWT=simple_jwt_settings):
response = self.my_client.get("/login")
assert response.status_code == 200
# ...
sleep(2/1000)
# ... test that the access token expired for further calls
</code></pre>
<p>When calling <code>self.my_client</code>, I get the error:
<code>AttributeError: 'TestLogin' object has no attribute 'my_client'</code></p>
<p>I've already found this thread <a href="https://stackoverflow.com/questions/64528957/how-to-use-pytest-fixtures-with-django-testcase">How to use pytest fixtures with django TestCase</a>, but it doesn't give a solution to use a <code>pytest.fixture</code> explicitly in the context of a Django <code>TestCase</code>.</p>
<p>Any idea how to access <code>my_client</code> inside the <code>TestLogin</code> ?</p>
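<p>For context, the workaround usually suggested for this situation is an <code>autouse</code> fixture that assigns onto <code>request.cls</code>, so the attribute exists on the <code>TestCase</code> instance. Mechanically that boils down to setting a class attribute before the tests run, which this stdlib-only sketch mimics (<code>FakeClient</code> and <code>inject_client</code> are stand-ins, not pytest or DRF API):</p>

```python
import unittest

class FakeClient:
    # Stand-in for rest_framework.test.APIClient.
    def get(self, path):
        return type("Response", (), {"status_code": 200})()

def inject_client(cls):
    # Plays the role of an autouse fixture doing:
    #   @pytest.fixture(autouse=True)
    #   def _client(self, request):
    #       request.cls.my_client = APIClient()
    cls.my_client = FakeClient()
    return cls

@inject_client
class TestLogin(unittest.TestCase):
    def test_client_available(self):
        self.assertEqual(self.my_client.get("/login").status_code, 200)

suite = unittest.defaultTestLoader.loadTestsFromTestCase(TestLogin)
result = unittest.TextTestRunner(verbosity=0).run(suite)
print(result.wasSuccessful())  # True
```

<p>With the real pytest version, <code>self.my_client</code> then resolves through the class attribute, which is why the attribute error goes away.</p>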
| <python><django><unit-testing><settings><fixtures> | 2023-07-11 09:47:17 | 1 | 1,297 | skuallpa |
76,660,611 | 3,371,198 | Class variable is not shared over all instances after pickling and unpickling | <p>I have a class ConfProvider that contains a class variable which holds all config that cannot be pickled. My other classes inherit from this class and they use the _conf class variable when they call the non-picklable objects. The idea is then to clean the config when pickling my classes so I do not get any errors because the class contains unpicklable objects.</p>
<p>This is my ConfProvider class</p>
<pre><code>class ConfProvider:
_conf = None
@staticmethod
def set_conf(conf: dict) -> None:
ConfProvider._conf = conf
@staticmethod
def clear_conf() -> None:
ConfProvider._conf = {}
@classmethod
def print_conf(cls) -> None:
cls._conf["console_logger"].info(ConfProvider._conf)
</code></pre>
<p>Next to this class I have 2 other classes that inherit from the ConfProvider. Class_1 contains objects of class_2.</p>
<pre><code>class Class_1(ConfProvider):
def __init__(self):
# Do some init stuff
self.list_class_2 = [Class_2(1, 2), Class_2(3, 4)]
@Logging.debug_decorator
def save(self, debug=False) -> None:
config = self._conf
self.clear_conf()
with open(save_path, "wb") as f:
cloudpickle.dump(self, f)
# Reset non serializable attributes
self.set_conf(config)
class Class_2(ConfProvider):
def __init__(self, a, b):
# Do some init stuff
</code></pre>
<p>I want to be able to pickle the Class_1 object I create that contains several objects of Class_2. For this, I use the save method as shown above in Class_1. Now this works perfectly since the class variable is shared amongst all objects. However, once I unpickle the class_1 object and add new class_2 objects to the list in class_1, the clear_conf method only clears the _conf variable for the objects that were included before pickling. The _conf of all new objects is not cleared. How is this possible, as normally a class variable is shared over all instances?</p>
<p>EDIT:</p>
<p>After debugging a bit more, I found that before pickling <code>Class_1._conf</code> always contains the same content as <code>Class_1().__class__._conf</code>. However, once I pickle the class and unpickle it, the two are different. <code>Class_1._conf</code> has the values that were loaded when starting up, but <code>Class_1().__class__._conf</code> contains an empty dictionary.</p>
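<p>One detail that may explain the observation: cloudpickle serializes classes by value, so unpickling rebuilds a brand-new class object, and instances created afterwards reference a different <code>ConfProvider</code> than the pre-pickle ones, each with its own <code>_conf</code>. The effect can be mimicked without cloudpickle by simply creating two copies of the class (a hedged illustration, not the actual pickling code):</p>

```python
def make_conf_provider():
    # Each call builds a fresh class object, much like unpickling a
    # by-value-serialized class does.
    class ConfProvider:
        _conf = None

        @classmethod
        def set_conf(cls, conf):
            cls._conf = conf

    return ConfProvider

Original = make_conf_provider()
Rebuilt = make_conf_provider()   # stands in for the class after unpickling

Original.set_conf({"key": 1})
print(Original._conf)       # {'key': 1}
print(Rebuilt._conf)        # None  (separate class object, separate _conf)
print(Original is Rebuilt)  # False
```

<p>That matches the debugging note: after unpickling, <code>Class_1._conf</code> and <code>Class_1().__class__._conf</code> refer to attributes of two different class objects.</p>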
| <python><class-variables><cloudpickle> | 2023-07-11 09:39:38 | 1 | 402 | Lennart |
76,660,604 | 665,802 | Active Directory authentication with JWT in Python FastAPI | <p>I am trying to implement a combination of Active Directory authentication with JWT in Python FastAPI.</p>
<p>Is it possible to use Active Directory authentication for acquiring a JWT token and then use that JWT for accessing a secured endpoint? Below is the code I have tried.</p>
<pre><code>from fastapi import FastAPI, Depends, HTTPException
from fastapi.security import OAuth2PasswordBearer
from passlib.context import CryptContext
import jwt
from jwt.exceptions import ExpiredSignatureError, InvalidSignatureError
import datetime
app = FastAPI()
SECRET_KEY = "C4460C71-7A98-4356-A6DC-CC9F3FBBB63E"
pwd_context = CryptContext(schemes=["bcrypt"], deprecated="auto")
oauth2_scheme = OAuth2PasswordBearer(tokenUrl="/login")
def create_access_token(data: dict, expires_delta: datetime.timedelta):
print("DEBUG: [create_access_token()]:", data)
to_encode = data.copy()
expires = datetime.datetime.utcnow() + expires_delta
to_encode.update({"exp": expires})
encoded_jwt = jwt.encode(to_encode, data["access_token"], algorithm="HS256")
return encoded_jwt
def decode_token(token: str):
decoded_token = jwt.decode(token, SECRET_KEY, algorithms=["HS256"])
return decoded_token
async def get_current_user(token: str = Depends(oauth2_scheme)):
try:
decoded_token = decode_token(token)
username = decoded_token.get("sub")
print("DEBUG: [get_current_user()]:", username)
if not username:
raise HTTPException(status_code=401, detail="Invalid authentication token")
except (ExpiredSignatureError):
        raise HTTPException(status_code=440, detail="Token has expired")
except (InvalidSignatureError):
raise HTTPException(status_code=498, detail="Invalid token")
return {"username": username}
@app.post("/login/{access_token}")
async def login(access_token: str):
# Authenticate the user against your AD
# Generate a JWT token if authentication is successful
access_token_expires = datetime.timedelta(minutes=5)
expires = datetime.datetime.utcnow() + access_token_expires
access_token = create_access_token(
data={"sub": "user_id", "access_token": access_token}, expires_delta=access_token_expires
)
return {"access_token": access_token, "token_type": "bearer", "valid_till": f"{expires} UTC"}
@app.get("/protected_route")
async def protected_route(user: dict = Depends(get_current_user)):
print("DEBUG: [/protected_route()]:", user)
return {"message": "This route is protected, and you are authenticated successfully"}
</code></pre>
<p>It is able to generate and use the JWT; however, even if I try with wrong AD credentials through Postman, it still works.</p>
<p>In addition, is there a way I can check AD Group membership before issuing JWT?</p>
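<p>Regarding the group check: a common pattern is to bind against AD (for example with the <code>ldap3</code> package) and inspect <code>memberOf</code> before minting the JWT. The control flow is sketched below with a plain dict standing in for the directory; a real implementation would replace the lookup with an actual LDAP bind, so treat the names here as hypothetical:</p>

```python
# Hypothetical in-memory stand-in for the AD directory.
DIRECTORY = {
    "alice": {"password": "s3cret", "groups": ["api-users", "staff"]},
}

def authenticate_and_authorize(username, password, required_group):
    entry = DIRECTORY.get(username)
    if entry is None or entry["password"] != password:
        return None           # bad credentials: no token
    if required_group not in entry["groups"]:
        return None           # authenticated but not in the required group
    return {"sub": username}  # claims to embed in the JWT

print(authenticate_and_authorize("alice", "s3cret", "api-users"))  # {'sub': 'alice'}
print(authenticate_and_authorize("alice", "wrong", "api-users"))   # None
```

<p>Note that the posted <code>/login</code> route never checks the credentials at all, which would explain why wrong credentials still produce a token.</p>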
| <python><jwt><active-directory><fastapi> | 2023-07-11 09:38:53 | 0 | 971 | SavindraSingh |
76,660,594 | 8,862,235 | Unable to plot other plots with for loop plot | <p>I am having difficulties plotting data when using a for loop.</p>
<p>What I want to do is combine the following two plots:</p>
<pre><code>fig, ax = plt.subplots()
ax.scatter(x1, y1, color='blue', label='Series 1')
ax.scatter(x2, y2, color='green', label='Series 2')
</code></pre>
<p>Which gives:
<a href="https://i.sstatic.net/3waFQ.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/3waFQ.png" alt="enter image description here" /></a></p>
<p>And:</p>
<p>A second plot that uses the x1, x2, y1, y2 data, filtered to create a line between two points that have the same y value (y1 == y2) and x1 > x2.</p>
<pre><code>for x1, x2, y1, y2 in special:
ax.plot([x1, x2], [y1, y2])
</code></pre>
<p>That gives:
<a href="https://i.sstatic.net/ffdAP.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/ffdAP.png" alt="enter image description here" /></a></p>
<p>I tried using <code>plt.fill_between</code> but it seems I can't use it with scatter or I don't know how to use it properly.</p>
<p>What I get in the end is only the scatter with the lines on my screen and I want to put those lines into the scatter plot where I can see the points that do not have the line. Basically to combine them into one...</p>
<p>Here is the sample data if you want to experiment:</p>
<pre><code>x1 = [datetime.datetime(2023, 4, 6, 3, 58, 32), datetime.datetime(2023, 4, 6, 3, 58, 32), datetime.datetime(2023, 4, 6, 3, 58, 32), datetime.datetime(2023, 4, 6, 3, 58, 32), datetime.datetime(2023, 4, 6, 3, 58, 32), datetime.datetime(2023, 4, 6, 3, 58, 32), datetime.datetime(2023, 4, 7, 6, 57, 13), datetime.datetime(2023, 4, 11, 5, 57, 14), datetime.datetime(2023, 4, 11, 5, 57, 14), datetime.datetime(2023, 4, 12, 16, 57, 4), datetime.datetime(2023, 4, 12, 16, 57, 4)]
x2 = [datetime.datetime(2023, 3, 23, 2, 38, 55), datetime.datetime(2023, 3, 25, 5, 26, 27), datetime.datetime(2023, 3, 25, 6, 26, 27), datetime.datetime(2023, 3, 26, 16, 46, 48), datetime.datetime(2023, 3, 29, 12, 53, 11), datetime.datetime(2023, 3, 31, 15, 25, 33), datetime.datetime(2023, 4, 2, 5, 40, 26), datetime.datetime(2023, 4, 3, 18, 36, 11), datetime.datetime(2023, 4, 3, 18, 36, 11), datetime.datetime(2023, 4, 6, 2, 21, 47), datetime.datetime(2023, 4, 7, 6, 54, 1), datetime.datetime(2023, 4, 11, 5, 47, 34), datetime.datetime(2023, 4, 12, 16, 32, 35), datetime.datetime(2023, 4, 12, 17, 6, 9),
datetime.datetime(2023, 4, 19, 1, 46, 29)]
y1 = ['bda3fa6c0e47be8fb2af68889a76fb7b', '86bcdeef6510f5c129c036a7446c5d20', '45249523a3866a65c30daec40fff68ca', '653e4c7b5f77a81513cbe557554c79c3', '1e6358c3e2be89cf4aabbdc792de64d2', '42249fa0ce3f09182a51c4a91bf7ebd8', '4aba3f7340769eccf0ded01f1e557b98', 'eadc80f6ca1d4c3fccf94fcc202f66ff', '9f9fd071d04150f60783841b6669c48d', 'febea981a7bc1935a9bd97e6aec11f64', 'ac1590025eb4791909fd027c02e5288f']
y2 = ['653e4c7b5f77a81513cbe557554c79c3', 'febea981a7bc1935a9bd97e6aec11f64', 'ac1590025eb4791909fd027c02e5288f', '61806b2fe0aad80e37057e32c3263886', '4aba3f7340769eccf0ded01f1e557b98', '45249523a3866a65c30daec40fff68ca', 'ce07499ecb1e7c34f5c4df8fcb638856', '751d8f2ddd2262244f24c2bca8d86fdf', '46532398219ddd7b2c0e0466ecc00985', '9668ee6dd65b606af2089bd643b313c7', '1e6358c3e2be89cf4aabbdc792de64d2', '9a3755b1dced8a4f56918e5322a12dd2', 'c3e74c8067d21a9928d7ba2d1df8bec2', '9d09a0bc49064fb76cdc0e3f016aa591', 'bda3fa6c0e47be8fb2af68889a76fb7b']
special = [['2023-04-06 03:58:32', '2023-03-31 15:25:33', '45249523a3866a65c30daec40fff68ca', '45249523a3866a65c30daec40fff68ca'], ['2023-04-06 03:58:32', '2023-03-23 02:38:55', '653e4c7b5f77a81513cbe557554c79c3', '653e4c7b5f77a81513cbe557554c79c3'], ['2023-04-07 06:57:13',
'2023-03-29 12:53:11', '4aba3f7340769eccf0ded01f1e557b98', '4aba3f7340769eccf0ded01f1e557b98'],
['2023-04-12 16:57:04', '2023-03-25 05:26:27', 'febea981a7bc1935a9bd97e6aec11f64', 'febea981a7bc1935a9bd97e6aec11f64'], ['2023-04-12 16:57:04', '2023-03-25 06:26:27', 'ac1590025eb4791909fd027c02e5288f', 'ac1590025eb4791909fd027c02e5288f']]
</code></pre>
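<p>One thing worth checking before combining the two: both calls can target the same <code>ax</code>, but <code>special</code> stores its timestamps as strings while the scatter data uses <code>datetime</code> objects, so they need converting first. A hedged, stdlib-only sketch of that conversion (the plotting calls are left as comments because they need a display):</p>

```python
from datetime import datetime

special = [["2023-04-06 03:58:32", "2023-03-31 15:25:33", "hash-a", "hash-a"]]

# Parse the string pairs so the lines and the scatter share x-axis units.
segments = [
    (datetime.strptime(x1, "%Y-%m-%d %H:%M:%S"),
     datetime.strptime(x2, "%Y-%m-%d %H:%M:%S"),
     y1, y2)
    for x1, x2, y1, y2 in special
]

# With that done, everything can go on one Axes:
#   fig, ax = plt.subplots()
#   ax.scatter(x1_list, y1_list, color='blue', label='Series 1')
#   ax.scatter(x2_list, y2_list, color='green', label='Series 2')
#   for x1, x2, y1, y2 in segments:
#       ax.plot([x1, x2], [y1, y2])   # same ax, so lines overlay the dots

print(segments[0][0])  # 2023-04-06 03:58:32
```

<p>Reusing the one <code>ax</code> for both the scatter calls and the line loop is what merges the two figures into one.</p>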
| <python><matplotlib><plot><scatter-plot> | 2023-07-11 09:37:05 | 0 | 323 | Filip |
76,660,586 | 6,114,310 | Why does the content in a Streamlit widget disappear when a button is clicked? | <p>I'm building a chat application using Streamlit and OpenAI. When I click the "STOP" button, the result of the chat (displayed in res_box) disappears, which is not the desired behavior. I want the results to persist even after the "STOP" button is clicked. How can I make this happen?</p>
<pre><code>import openai
import streamlit as st
# Set OpenAI API Key
openai.api_key = "<API-KEY>"
# Header for the application
st.subheader("AI Assistant : Streamlit + OpenAI: `stream` *argument*")
# Text input for user
user_input = st.text_input("You: ", placeholder = "Ask me anything ...", key="input")
# Initialize 'stop_pressed' and 'result' in session state if they don't exist
if 'stop_pressed' not in st.session_state:
st.session_state['stop_pressed'] = False
if 'result' not in st.session_state:
st.session_state['result'] = ""
# Create an empty box to display results
res_box = st.empty()
# Submit button action
if st.button("Submit", type="primary"):
# Reset result to an empty string
result = ""
# Stop button action
if st.button("STOP", key="stop"):
st.session_state['stop_pressed'] = True
# Separate different parts of the interface
st.markdown("----")
report = []
try:
# Create a chat completion with OpenAI
response = openai.ChatCompletion.create(model='gpt-4', messages=[{"role": "assistant", "content": user_input}],
temperature = 0,
stream = True)
# Iterate through responses from OpenAI
for resp in response:
report.append(resp["choices"][0]["delta"]["content"])
result = "".join(report).strip()
result = result.replace("\n", "")
# Store the result in the session state
st.session_state['result'] = result
# Display result so far
res_box.markdown(f'*{result}*')
if st.session_state['stop_pressed']:
break
except Exception as e:
print(e)
finally:
response.close()
# If there is a result, display it
if st.session_state["result"] != "":
res_box.markdown(f'*{st.session_state["result"]}*')
# Separate different parts of the interface
st.markdown("----")
</code></pre>
<p>Inspired by: <a href="https://medium.com/@avra42/how-to-stream-output-in-chatgpt-style-while-using-openai-completion-method-b90331c15e85" rel="nofollow noreferrer">https://medium.com/@avra42/how-to-stream-output-in-chatgpt-style-while-using-openai-completion-method-b90331c15e85</a></p>
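<p>The underlying mechanics, for reference: Streamlit reruns the entire script on every widget interaction, so any output not re-rendered from <code>st.session_state</code> vanishes on the rerun triggered by the STOP click. This stdlib mimic of the rerun model (a dict standing in for <code>st.session_state</code>, a function call standing in for one script run) shows why re-drawing from stored state on every run preserves the text:</p>

```python
session_state = {}  # survives across reruns, like st.session_state

def rerun(clicked):
    # Stand-in for one top-to-bottom execution of the Streamlit script.
    if clicked == "submit":
        session_state["result"] = "streamed answer"
    # Re-render from stored state on EVERY run, not only after Submit:
    return session_state.get("result")

print(rerun("submit"))  # streamed answer
print(rerun("stop"))    # streamed answer  (persists across the rerun)
```

<p>In the posted script, moving the <code>res_box.markdown(...)</code> that reads <code>st.session_state["result"]</code> outside the Submit branch (so it runs on every rerun, including the STOP one) follows the same idea.</p>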
| <python><streamlit> | 2023-07-11 09:36:04 | 1 | 967 | henry |
76,660,575 | 15,098,472 | Optimize for-loop in pandas for calculating distances and speed of a dataframe consisting of coordinates | <p>I have a code snippet that discretizes coordinates in a DataFrame to a specific line length and assigns speed values based on the line length between consecutive points. While the code works, it seems to be inefficient and I would like to optimize it for better performance. First, here is a short example of how the dataframe looks like:</p>
<pre><code># Create a time range
time_range = pd.date_range('2023-07-11 00:00:00', periods=10, freq='5S')
# Generate random x and y coordinates
x_coordinates = np.random.rand(10)
y_coordinates = np.random.rand(10)
# Create the DataFrame
df = pd.DataFrame({'x': x_coordinates, 'y': y_coordinates}, index=time_range)
</code></pre>
<p>part of the output :</p>
<pre><code> x y
2023-07-11 00:00:00 0.529496 0.237321
2023-07-11 00:00:05 0.745546 0.351738
2023-07-11 00:00:10 0.146295 0.982289
2023-07-11 00:00:15 0.732974 0.526501
2023-07-11 00:00:20 0.713234 0.377941
2023-07-11 00:00:25 0.099409 0.630578
</code></pre>
<p>My goal is to iterate over the time steps, and as soon as the distance between the previous and current coordinates is bigger than a certain threshold, extract this position, calculate the speed and append it to an array, which will be needed for plotting later. Here is how I managed to solve this, but highly inefficiently I suppose:</p>
<pre><code># the original dataframe has more columns, than just 'x' and 'y', thus I select it here
# explicitly, moreover I create the speed column, where I calculate the speed later
df = df[['x', 'y']]
df = df.assign(speed=np.repeat(0, len(df)))
# Discretize the points according to the line length if desired
if line_length > 0:
start_ts = df.index[0]
start_pos = df.loc[start_ts, ['x', 'y']]
# I transform the dataframe to a numpy array, because I need the data as a np.array
# for plotting a line collection
discretized_df = [df.loc[start_ts].to_numpy()]
for i in range(len(df) - 1):
end_ts = df.index[i]
end_pos = df.loc[end_ts, ['x', 'y']]
if np.linalg.norm(end_pos - start_pos) >= line_length:
time_delta = (end_ts - start_ts).microseconds / 1000000
speed = np.linalg.norm(end_pos - start_pos) / time_delta
df.loc[end_ts, 'speed'] = speed
discretized_df.append(df.loc[end_ts].to_numpy())
start_ts = end_ts
# I always want to include the last point
df.loc[df.index[-1], 'speed'] = 0
discretized_df.append(df.iloc[-1].to_numpy())
# the stacking is just reshaping for the line collection
discretized_df = np.stack(discretized_df, 0)
</code></pre>
<p>I hope it is clear what I want to achieve and I appreciate any help on how to improve this!</p>
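<p>As a point of comparison, most of the per-iteration cost in the posted loop comes from the repeated <code>df.loc</code> label lookups; extracting the columns once into plain arrays and looping over those keeps the exact same threshold logic while avoiding that overhead. A hedged sketch with plain lists and <code>math.hypot</code> (timestamps reduced to seconds for brevity):</p>

```python
import math

xs = [0.0, 0.3, 0.6, 2.0, 2.1]
ys = [0.0, 0.4, 0.8, 2.0, 2.1]
ts = [0.0, 5.0, 10.0, 15.0, 20.0]   # seconds since start
line_length = 1.0

kept = [0]        # always keep the first point
speeds = [0.0]
start = 0
for i in range(1, len(xs)):
    dist = math.hypot(xs[i] - xs[start], ys[i] - ys[start])
    if dist >= line_length:          # same threshold test, no .loc lookups
        kept.append(i)
        speeds.append(dist / (ts[i] - ts[start]))
        start = i

print(kept)  # [0, 2, 3]
```

<p>In pandas terms this corresponds to pulling out <code>df['x'].to_numpy()</code> etc. before the loop; the last point can still be appended afterwards exactly as in the original.</p>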
| <python><pandas><loops> | 2023-07-11 09:34:41 | 1 | 574 | kklaw |
76,660,464 | 11,028,689 | ValueError: Shapes (None, 1) and (None, 7) are incompatible with sparse_categorical_crossentropy | <p>I am building a classifier for 7 classes where my array shapes are as follows:</p>
<pre><code>X_train data shape - (171812, 384)
y_train data shape - (171812,)
X_test data shape - (37715, 384)
y_test data shape - (37715,)
</code></pre>
<p>my model is like this:</p>
<pre><code> # parameters
input_dim= 10000
max_length =384
    output_dim = 128
DENSE_DIM = 32
LSTM1_DIM = 32
LSTM2_DIM = 16
WD = 0.001
FILTERS = 64
model_lstm = tf.keras.Sequential([
tf.keras.layers.Embedding(input_dim, output_dim, input_length=max_length),
# tf.keras.layers.LSTM(64, return_sequences=True, stateful=False, input_shape = (384,128)),
tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(LSTM1_DIM, dropout=0.2, kernel_regularizer = regularizers.l2(WD), return_sequences=True)),
tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(LSTM2_DIM, dropout=0.2, kernel_regularizer = regularizers.l2(WD))),
tf.keras.layers.Dense(DENSE_DIM, activation='relu'),
tf.keras.layers.Dense(7, activation='softmax') ])
# Set the training parameters
model_lstm.compile(loss='sparse_categorical_crossentropy',
optimizer=tf.keras.optimizers.Adam(),
metrics=[tf.keras.metrics.Accuracy()])
model_lstm.summary()
Layer (type) Output Shape Param #
=================================================================
embedding_31 (Embedding) (None, 384, 128) 1280000
bidirectional_67 (Bidirecti (None, 384, 64) 41216
onal)
bidirectional_68 (Bidirecti (None, 32) 10368
onal)
dense_108 (Dense) (None, 32) 1056
dense_109 (Dense) (None, 7) 231
=================================================================
</code></pre>
<p>I am getting <code>ValueError: Shapes (None, 1) and (None, 7) are incompatible</code> when I run:</p>
<pre><code>epochs = 12
batch_size = 250
history = model_lstm.fit(X_train, y_train,
epochs=epochs,
validation_data=(X_test, y_test),
batch_size=batch_size)
</code></pre>
<p>What should I be changing?</p>
<p>Also I am using an embedding layer for this model with 3 parameters (input_dim, output_dim, input_length). When I use an LTSM layer instead</p>
<pre><code>tf.keras.layers.LSTM(64, return_sequences=True, stateful=False, input_shape = (384,128))
</code></pre>
<p>does not have a compatible input shape?</p>
<pre><code> Input 0 of layer "lstm_106" is incompatible with the layer: expected ndim=3, found ndim=2. Full shape received: (None, 384)
Call arguments received by layer "sequential_53" " f"(type Sequential):
β’ inputs=tf.Tensor(shape=(None, 384), dtype=float32)
β’ training=True
β’ mask=None
</code></pre>
<p>Do I need to change the X_train data shape somehow?</p>
<p>Thank you.</p>
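<p>One combination worth double-checking: <code>sparse_categorical_crossentropy</code> wants integer labels of shape <code>(n,)</code>, while one-hot labels of shape <code>(n, 7)</code> belong with plain <code>categorical_crossentropy</code>; and with integer labels the metric is usually <code>metrics=['accuracy']</code> (or <code>SparseCategoricalAccuracy</code>) rather than <code>tf.keras.metrics.Accuracy()</code>, which compares raw values and can provoke shape complaints. That diagnosis is an assumption based on the traceback, since the data itself isn't shown. The two label encodings differ as follows, in plain Python:</p>

```python
num_classes = 7
y_sparse = [3, 0, 6]   # shape (n,): what sparse_categorical_crossentropy expects

def to_one_hot(labels, k):
    # Shape (n, k): what plain categorical_crossentropy would expect instead.
    return [[1 if j == lab else 0 for j in range(k)] for lab in labels]

y_onehot = to_one_hot(y_sparse, num_classes)
print(len(y_onehot), len(y_onehot[0]))  # 3 7
print(y_onehot[0])                      # [0, 0, 0, 1, 0, 0, 0]
```

<p>So with y_train of shape (171812,) containing class indices 0..6, keeping the sparse loss and switching the metric is the first thing to try.</p>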
| <python><tensorflow><keras><neural-network> | 2023-07-11 09:22:35 | 1 | 1,299 | Bluetail |
76,660,343 | 1,239,299 | Next N elements for an Iterator | <pre><code>def infinite_generator():
i = 0
while True:
i += 1
yield i
</code></pre>
<p>I have a generator function which looks like this.</p>
<pre><code>def first_second_and_fifty_items():
iterator = iter(infinite_generator())
first = next(iterator)
second = next(iterator)
next_fifty = [next(iterator) for _ in range(50)]
return first, second, next_fifty
</code></pre>
<p>And I have a second function which looks a little like this.</p>
<p>Is there a better way to take elements from the iterator?</p>
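<p>For reference, <code>itertools.islice</code> is the standard tool for taking the next N items from an iterator without writing a list comprehension of <code>next()</code> calls:</p>

```python
from itertools import islice

def infinite_generator():
    i = 0
    while True:
        i += 1
        yield i

it = infinite_generator()
first, second = islice(it, 2)        # unpack the first two items
next_fifty = list(islice(it, 50))    # then materialize the next fifty

print(first, second)                  # 1 2
print(next_fifty[0], next_fifty[-1])  # 3 52
```

<p>Note there is also no need for the <code>iter()</code> wrapper in the original; generators are already iterators.</p>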
| <python><python-3.x><iterator> | 2023-07-11 09:08:17 | 3 | 817 | user1239299 |
76,660,160 | 11,208,548 | Convert very large PDF to images with python | <p>I have an extremely large PDF containing scans that are approximately 30.000px wide (wtf!). I have a python script that works well for normal-sized PDFs but, when confronted with this large PDF, outputs only 1-pixel-wide white squares as images.</p>
<p>The problem occurs at the <code>convert_from_path</code> step, since the images reaching <code>save_img</code> are already a single pixel.</p>
<pre class="lang-py prettyprint-override"><code>from PIL import Image
from pdf2image import pdfinfo_from_path, convert_from_path
from pathlib import Path
Image.MAX_IMAGE_PIXELS = None
def pdf_to_img(pdf_path, dpi=500):
"""
Convert the PDF file to JPEG images
"""
pdf_name = pdf_path.split("/")[-1].split(".")[0]
pdf_info = pdfinfo_from_path(pdf_path, userpw=None, poppler_path=None)
page_nb = pdf_info["Pages"]
step = 2
try:
for img_nb in range(1, page_nb + 1, step):
batch_pages = convert_from_path(
pdf_path,
dpi=dpi,
first_page=img_nb,
last_page=min(img_nb + step - 1, page_nb),
)
for page in batch_pages:
save_img(page, f"{pdf_name}_{img_nb:04d}.jpg")
img_nb += 1
except Exception as e:
print(f"[pdf_to_img] Failed to convert {pdf_name}.pdf to images:\n{e} ({e.__class__.__name__})")
def save_img(
img,
img_filename,
img_path=Path("./output"),
error_msg="Failed to save img",
max_dim=2500,
img_format="JPEG",
):
try:
if img.width > max_dim or img.height > max_dim:
img.thumbnail(
(max_dim, max_dim), Image.Resampling.LANCZOS
)
img.save(img_path / img_filename, format=img_format)
return True
except Exception as e:
print(f"[save_img] {error_msg}:\n{e} ({e.__class__.__name__})")
return False
</code></pre>
<p>Do you know what I can do or how to improve my code? I have tried several libraries (<code>wand</code>, <code>reportlab</code>) with no success.. Thank you so much for your help!</p>
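<p>One plausible culprit, offered as an assumption rather than a diagnosis: a page roughly 30,000 px wide rendered at <code>dpi=500</code> asks poppler for an enormous pixmap, and failures at that size can surface as degenerate 1-pixel images. Since <code>save_img</code> caps the output at 2500 px anyway, the dpi can be derived from the page size instead of hardcoded. A sketch of the arithmetic (<code>page_width_pts</code> would come from the page size reported by <code>pdfinfo_from_path</code>, in points, 72 per inch):</p>

```python
def safe_dpi(page_width_pts, max_pixels=2500, requested_dpi=500):
    # dpi at which the rendered width equals max_pixels, capped by the request.
    width_inches = page_width_pts / 72.0
    return min(requested_dpi, max_pixels / width_inches)

# A ~416-inch-wide scan (about 30,000 px at 72 px/in page units) needs only
# ~6 dpi to come out 2500 px wide:
print(round(safe_dpi(page_width_pts=72 * 416), 2))  # 6.01
```

<p>The result would then be passed as <code>dpi=safe_dpi(...)</code> to <code>convert_from_path</code>, making the downscale in <code>save_img</code> mostly a no-op.</p>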
| <python><pdf><python-imaging-library><pdf2image> | 2023-07-11 08:46:59 | 1 | 501 | Seglinglin |
76,660,098 | 4,002,633 | Annotating a Django QuerySet with the Sum of the Max of a Subquery | <p>I'm struggling to compile a Queryset with a mildly complicated annotation, on the basis of which I'd sort the Queryset.</p>
<p>When working with such problems, I favour spinning off a generalised minimalist test bed and for this one I have this set of models:</p>
<pre class="lang-py prettyprint-override"><code>from django.db import models
class Top(models.Model):
name = models.CharField(max_length=70)
class Middle(models.Model):
name = models.CharField(max_length=70)
klass = models.IntegerField(default=1)
reports_to = models.ForeignKey(Top, related_name='middles', on_delete=models.CASCADE)
class Lower(models.Model):
name = models.CharField(max_length=70)
rank = models.IntegerField(default=1)
reports_to = models.ForeignKey(Middle, related_name='lowers', on_delete=models.CASCADE)
</code></pre>
<p>Top and Middle are unique and Lower is not: Lower objects of the same name exist under different Middles and Tops, but with a unique and sequential rank.</p>
<p>The challenge I have is expressed in English (natural language) as follows:</p>
<p>Annotate Tops with the sum of the maximum ranks of all Lowers in a given klass (or all klasses).</p>
<p>I can extract these individually for a given top and klass, for example:</p>
<pre class="lang-py prettyprint-override"><code> def test_top(top, klass=None):
l_filter = Q(reports_to__reports_to=top)
if not klass is None:
l_filter &= Q(reports_to__klass=klass)
highest_ranks = Lower.objects.filter(l_filter).values('reports_to__reports_to', 'name').annotate(highest_rank=Max('rank')).values('highest_rank')
sum_highest_ranks = highest_ranks.aggregate(total=Sum('highest_rank'))
print(f"\t\tGot sum of {sum_highest_ranks}, for {top.name} from:")
for r in highest_ranks:
print(f"\t\t\t{r}")
</code></pre>
<p>Have checked this empirically and it works well. But this involves evaluating a Queryset for each indivitual Top. I seek a Queryset of Tops, a la:</p>
<pre><code>tops = Top.objects.all().annotate(sum_highest_ranks=...).order_by('-sum_highest_ranks')
</code></pre>
<p>And cannot find the missing ...</p>
<p>I have in fact come as far as:</p>
<pre class="lang-py prettyprint-override"><code>def test_query(klass=None, top=None):
if top is None:
l_filter = Q(reports_to__reports_to=OuterRef('id'))
else:
l_filter = Q(reports_to__reports_to=top)
if not klass is None:
l_filter &= Q(reports_to__klass=klass)
hr = Lower.objects.filter(l_filter).values('reports_to__reports_to', 'name').annotate(highest_rank=Max('rank')).values('highest_rank')
if top:
print(f"\t\t{top.name} drill down to highest ranks:")
for l in hr:
print(f"\t\t\t{l}")
else:
Tops = Top.objects.annotate(high_rank_sums=Sum(Subquery(hr)))
for t in Tops:
print(f"\t\t{t.name}, {t.high_rank_sums}")
</code></pre>
<p>But this produces surprising and incorrect results. The SQL looks fine; it generates:</p>
<pre class="lang-sql prettyprint-override"><code>SELECT "DjangoAnnotation_top"."id",
"DjangoAnnotation_top"."name",
SUM(
(SELECT MAX(U0."rank") AS "highest_rank"
FROM "DjangoAnnotation_lower" U0
INNER JOIN "DjangoAnnotation_middle" U1 ON (U0."reports_to_id" = U1."id")
WHERE U1."reports_to_id" = ("DjangoAnnotation_top"."id")
GROUP BY U1."reports_to_id", U0."name")) AS "high_rank_sums"
FROM "DjangoAnnotation_top"
GROUP BY "DjangoAnnotation_top"."id",
"DjangoAnnotation_top"."name"
</code></pre>
<p>But the SUM on the MAX seems not to be the SUM of those MAX values at all. It seems not to return the SUM, but the FIRST. For example, I can evaluate the inner SELECT MAX in SQL</p>
<pre class="lang-sql prettyprint-override"><code>SELECT MAX(U0."rank") AS "highest_rank"
FROM "DjangoAnnotation_lower" U0
INNER JOIN "DjangoAnnotation_middle" U1 ON (U0."reports_to_id" = U1."id")
WHERE U1."reports_to_id" = 4
GROUP BY U1."reports_to_id", U0."name"
</code></pre>
<p>and I see:</p>
<pre><code>3
5
5
5
4
5
3
4
4
8
6
4
5
1
</code></pre>
<p>and yet the full query produces:</p>
<pre><code>1 Edward Cox 1
2 Rufus Brown 2
3 Ronald Roberts 3
4 Nancy Fernandez 3
5 Jill Tuley 4
</code></pre>
<p>Note Top 4 has a result of 3.</p>
<p>I have all this in a very small Django App here:</p>
<p><a href="https://github.com/bernd-wechner/DjangoAnnotation" rel="nofollow noreferrer">https://github.com/bernd-wechner/DjangoAnnotation</a></p>
<p>including a sample database (SQLite).</p>
<p>Any insights here would be deeply appreciated.</p>
<h2>Update</h2>
<p>Using the <a href="https://sqlitebrowser.org/" rel="nofollow noreferrer">DB Browser for SQLite</a>, I have generated SQL that will do what I want:</p>
<pre class="lang-sql prettyprint-override"><code>SELECT topname, SUM(max)
FROM
(SELECT T."name" AS topname, L."name" AS lowname, MAX(L."rank") AS max
FROM "DjangoAnnotation_top" T
INNER JOIN "DjangoAnnotation_middle" M ON M.reports_to_id = T.id
INNER JOIN "DjangoAnnotation_lower" L ON L.reports_to_id = M.id
GROUP BY T."id",T."name", L."name")
GROUP BY topname
</code></pre>
<p>This however relies on selecting from a subquery, not a strength that Django has I fear. The only solution I'm aware of is the ghastly reliance on raw SQL:</p>
<pre class="lang-py prettyprint-override"><code>def test_query(klass=None):
where_klass = "" if klass is None else f'WHERE M."klass"={klass}\n'
query = f"""
SELECT id, name, SUM(max) AS high_rank_sums
FROM
(SELECT T."id" AS id, T."name" AS name, MAX(L."rank") AS max
FROM "DjangoAnnotation_top" T
INNER JOIN "DjangoAnnotation_middle" M ON M.reports_to_id = T.id
INNER JOIN "DjangoAnnotation_lower" L ON L.reports_to_id = M.id
{where_klass}GROUP BY T."id",T."name", L."name")
GROUP BY id, name
"""
Tops = Top.objects.raw(query)
for t in Tops:
print(f"{t.name}, {t.high_rank_sums}")
</code></pre>
<p>It remains a puzzle to me why the Django Queryset above and the SQL it generates, which looks right to my eye, fails to return the SUM of the MX, but returns the FIRST of the MAX. And I wonder if there's a way, short of Raw SQL to produce SQL like this sample or functionally equivalent?</p>
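<p>Short of raw SQL, one workaround is to let the ORM do only the inner Max aggregation (the <code>values()</code> query already shown) and finish the outer Sum in Python; the grouping itself is plain dict work. A hedged sketch over tuples standing in for the <code>values_list('reports_to__reports_to', 'name', 'rank')</code> rows:</p>

```python
from collections import defaultdict

# (top_id, lower_name, rank) rows, as a values_list() might return them.
rows = [
    (4, "a", 2), (4, "a", 3),   # max rank 3 for (top 4, "a")
    (4, "b", 5),                # max rank 5 for (top 4, "b")
    (5, "a", 1),
]

highest = defaultdict(dict)
for top_id, name, rank in rows:
    highest[top_id][name] = max(highest[top_id].get(name, 0), rank)

sum_highest = {top_id: sum(maxes.values()) for top_id, maxes in highest.items()}
print(sum_highest)  # {4: 8, 5: 1}
```

<p>That trades one query per Top for a single query plus an O(rows) pass, at the cost of losing a database-side <code>order_by</code> on the annotation.</p>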
| <python><sql><django><django-queryset> | 2023-07-11 08:39:17 | 1 | 2,192 | Bernd Wechner |
76,659,946 | 2,749,397 | Modifying the Patch in the legend leaving alone the Patch in the scatter plot | <p>I can change the opacity of an item in the legend of a scatter plot</p>
<pre><code>import matplotlib.pyplot as plt
# note the alpha value vvvvvvvvvv
plt.scatter(range(10), range(10), alpha=0.15, label='Mount Resegone')
plt.scatter(range(10), [0]*10, alpha=0.15, label='Rimini Beach')
plt.gca().get_legend_handles_labels()[0][0].set_alpha(1.0)
plt.legend()
plt.show()
</code></pre>
<p>but doing that, I also change the opacity of every dot in the scatter plot.</p>
<p>I guess that there is a single instance of the Patch (correct me if I'm wrong), but my issue is changing the Patch in the legend while leaving the Patch in the scatter plot alone.</p>
<p><a href="https://i.sstatic.net/dmwYK.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/dmwYK.png" alt="enter image description here" /></a></p>
<hr />
<p><strong>Addendum</strong></p>
<hr />
<p>The solution originally suggested (<code>handle._legmarker.set_alpha(1)</code>) does not work in recent Matplotlib, raising an AttributeError</p>
<pre><code>'PathCollection' object has no attribute '_legmarker'
</code></pre>
<p>Fortunately it's even easier to have the expected behaviour</p>
<pre><code>plt.scatter(range(10), range(10), alpha=0.15, label='Mount Resegone')
plt.scatter(range(10), [0]*10, alpha=0.15, label='Rimini Beach')
plt.legend().legendHandles[0].set_alpha(1.0)
</code></pre>
<p><a href="https://i.sstatic.net/RvqOV.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/RvqOV.png" alt="enter image description here" /></a></p>
| <python><matplotlib> | 2023-07-11 08:18:31 | 1 | 25,436 | gboffi |
76,659,917 | 7,713,770 | django with docker, TCP/IP connections on port 5432 error | <p>I have a django application, and the application works fine. But now I am trying to dockerize the django application. My Dockerfile looks like:</p>
<pre><code># pull official base image
FROM python:3.9-alpine3.13
# set work directory
WORKDIR /usr/src/app
# set environment variables
ENV PYTHONDONTWRITEBYTECODE 1
ENV PYTHONUNBUFFERED 1
# install psycopg2 dependencies
RUN apk update \
&& apk add linux-headers postgresql-dev gcc python3-dev musl-dev
# install dependencies
RUN pip install --upgrade pip
COPY ./requirements.txt .
COPY ./requirements.dev.txt .
RUN pip install -r requirements.txt
# copy entrypoint.sh
COPY ./entrypoint.sh .
RUN sed -i 's/\r$//g' /usr/src/app/entrypoint.sh
RUN chmod +x /usr/src/app/entrypoint.sh
# copy project
COPY . .
# run entrypoint.sh
ENTRYPOINT ["/usr/src/app/entrypoint.sh"]
</code></pre>
<p>And my docker-compose file looks like:</p>
<pre><code>version: '3.9'
services:
app:
build:
context: .
args:
- DEV=true
ports:
- "8000:8000"
volumes:
- .:/app
command: >
sh -c "python manage.py migrate &&
python app/manage.py runserver 192.168.1.135:8000"
env_file:
- variables.env
depends_on:
- db
db:
image: postgres:13-alpine
container_name: postgres
volumes:
- dev-db-data:/var/lib/postgresql/data
environment:
- POSTGRES_DB=db
- POSTGRES_USER=user
- POSTGRES_PASSWORD=password
ports:
- '5432:5432'
volumes:
dev-db-data:
dev-static-data:
</code></pre>
<p>So when I do: docker-compose up -d --build</p>
<p>I see that the db container is running. But the app container is not running.</p>
<p>And this error occurs:</p>
<pre><code>django.db.utils.OperationalError: could not connect to server: Connection refused
dwl_backend-app-1 | Is the server running on host "localhost" (127.0.0.1) and accepting
dwl_backend-app-1 | TCP/IP connections on port 5432?
dwl_backend-app-1 | could not connect to server: Address not available
dwl_backend-app-1 | Is the server running on host "localhost" (::1) and accepting
dwl_backend-app-1 | TCP/IP connections on port 5432?
</code></pre>
<p>And of course I googled, and I found suggestions to change the postgresql.conf file and set the port to 5432. But the port number is set correctly.</p>
<p>Question: how to tackle this error?</p>
<p>This i have in settings.py file:</p>
<pre><code>DATABASES = {
'default': {
'ENGINE': 'django.db.backends.postgresql_psycopg2',
'NAME': environ.get("DATABASE_NAME"),
'USER': environ.get("DATABASE_USER"),
'PASSWORD': environ.get("DATABASE_PASSWORD"),
'HOST': environ.get("DATABASE_HOST"),
'PORT': environ.get("DATABASE_PORT"),
}
}
</code></pre>
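<p>One detail to verify: inside the Compose network, containers reach each other by service name, so <code>DATABASE_HOST</code> must be <code>db</code> (the service name), not <code>localhost</code>, because <code>localhost</code> inside the app container is the app container itself. A hedged guess at the relevant <code>variables.env</code> lines (variable names taken from the settings.py above, values from the compose file):</p>

```
# compose service name, NOT localhost/127.0.0.1
DATABASE_HOST=db
DATABASE_PORT=5432
# must match POSTGRES_DB / POSTGRES_USER / POSTGRES_PASSWORD
DATABASE_NAME=db
DATABASE_USER=user
DATABASE_PASSWORD=password
```

<p>The "Is the server running on host "localhost"" wording in the traceback suggests the host variable is currently resolving to localhost.</p>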
| <python><docker><docker-compose><dockerfile> | 2023-07-11 08:14:51 | 1 | 3,991 | mightycode Newton |
76,659,744 | 5,299,750 | Cannot create new tensorflow keras model from model subset during training | <p>I have a <a href="https://www.tensorflow.org/api_docs/python/tf/keras/Model" rel="nofollow noreferrer"><code>tensorflow.keras.Model</code></a>
and for reasons that go beyond the scope of this question,
I'm using a custom <a href="https://www.tensorflow.org/api_docs/python/tf/keras/callbacks/LambdaCallback" rel="nofollow noreferrer"><code>tensorflow.keras.callbacks.LambdaCallback</code></a>
to save the model at the end of the epoch.
As part of said callback,
I wanted to only save a subset of the model
(by cutting off the last couple of layers and a few inputs only needed for training).
To that end,
I get input layers by name from the model using <code>model.get_layer("some_name").input</code>
and <code>.output</code> respectively for some outputs.</p>
<p>Now when I try to create a new model using <code>tensorflow.keras.Model(inputs=[input_layer, other_input_layer], outputs=outputs)</code>,
I get an error message about a disconnected graph,
where the second layer doesn't find the first input layer:</p>
<pre><code>ValueError: Graph disconnected: cannot obtain value for tensor KerasTensor(type_spec=TensorSpec(...), name='some_name', description="created by layer 'some_name'") at layer "...". The following previous layers were accessed without issue: []
</code></pre>
<p>This surprises me, because
a) I just trained this model with no problem.
b) I can use this function that creates the submodel outside of the training context and it works.
c) if I clone the model and apply the function to the clone, it also works.</p>
<p>I must not understand some element of the tensorflow workings under the hood.
Does anyone know
why I would not be able to create a new model from the existing model during training?</p>
| <python><tensorflow><keras><deep-learning> | 2023-07-11 07:50:59 | 1 | 954 | Christian Steinmeyer |
76,659,607 | 18,876,759 | pyinstaller - collect local package | <p>I have a Python application which I want to bundle with PyInstaller.</p>
<p>My folder structure is like this:</p>
<pre><code>.
├── docs
│   └── manual
├── main.py
├── README.md
├── requirements.txt
├── setup.sh
├── venv
│   └── ...
└── mypackage
    ├── package1
    ├── package2
    ├── bindata1.bin
    ├── bindata2.bin
    └── __init__.py
</code></pre>
<p><code>mypackage</code> is a python package which also contains some binary data and does some dynamic importing by scanning some submodules/namespaces:</p>
<pre class="lang-py prettyprint-override"><code>import importlib
import pkgutil

# `apps` is the namespace package being scanned (imported elsewhere in mypackage)

def iter_namespace(ns_pkg):
return pkgutil.iter_modules(ns_pkg.__path__, ns_pkg.__name__ + ".")
APPS = {}
for finder, name, ispkg in iter_namespace(apps):
if not ispkg:
continue
package = importlib.import_module(name + '.app')
APPS[package.APPNAME] = package.App
</code></pre>
<p>This works fine when running the Python script directly, but not with the frozen version. My first attempt was this command: <code>pyinstaller main.py --clean --noconfirm --onefile --windowed --hidden-import pyvisa_py</code>.
Then I tried to collect the package by appending <code>--collect-all mypackage</code>, but I get these warnings:</p>
<pre><code>115 INFO: PyInstaller: 5.13.0
115 INFO: Python: 3.10.6
116 INFO: Platform: Linux-5.19.0-46-generic-x86_64-with-glibc2.35
116 INFO: wrote /opt/mysoftware/main.spec
117 INFO: Removing temporary files and cleaning cache in /root/.cache/pyinstaller
121 WARNING: Unable to copy metadata for mypackage: The 'mypackage' distribution was not found and is required by the application
121 WARNING: collect_data_files - skipping data collection for module 'mypackage' as it is not a package.
121 WARNING: collect_dynamic_libs - skipping library collection for module 'mypackage' as it is not a package.
142 INFO: Determining a mapping of distributions to packages...
760 WARNING: Unable to determine requirements for mypackage: The 'mypackage' distribution was not found and is required by the application
760 INFO: Extending PYTHONPATH with paths
['/opt/mysoftware']
898 INFO: checking Analysis
898 INFO: Building Analysis because Analysis-00.toc is non existent
898 INFO: Initializing module dependency graph...
...
</code></pre>
<p>How can I get Pyinstaller to collect my package so that dynamic loading and accessing binary data works?</p>
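<p>The "distribution was not found" warnings appear because <code>mypackage</code> is importable but not pip-installed, so metadata-based collection has nothing to inspect. One approach — a sketch, not a verified build — is to do the collection explicitly in the generated <code>main.spec</code> using PyInstaller's hook utilities:</p>

```
# main.spec (fragment) — hypothetical adaptation of the generated spec file
from PyInstaller.utils.hooks import collect_data_files, collect_submodules

hiddenimports = collect_submodules('mypackage') + ['pyvisa_py']
datas = collect_data_files('mypackage', includes=['**/*.bin'])

a = Analysis(
    ['main.py'],
    hiddenimports=hiddenimports,
    datas=datas,
    # ... remaining generated options unchanged ...
)
```

<p>Building then happens via <code>pyinstaller main.spec --clean --noconfirm</code>; the dynamically imported <code>*.app</code> modules end up in the frozen bundle because <code>collect_submodules</code> lists them as hidden imports.</p>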
| <python><pyinstaller> | 2023-07-11 07:31:37 | 2 | 468 | slarag |
76,659,423 | 12,924,273 | Use of Params in pyspark | <p>In this example, I am trying to use <code>overrides</code> as a Param object, and I want it to be used as a list of strings.</p>
<p>But I am not able to assign its value using the below code.</p>
<pre><code>class _AB(Params):
overrides = Param(Params._dummy(), "overrides", "Parameters for environment setup", typeConverter=TypeConverters.toListString)
def __init__(self, *args):
super().__init__(*args)
self._setDefault(overrides=None)
class A(_AB):
@keyword_only
def __init__(self, overrides):
super().__init__()
kwargs = self._input_kwargs
self.setParams(**kwargs)
@keyword_only
def setParams(self, overrides: List[str]):
kwargs = self._input_kwargs
print(kwargs)
return self._set(**kwargs)
def c(self):
print(self.overrides.__dict__['typeConverter'].__dict__)
for i in self.overrides:
print(i)
a = A(overrides=["dsfs", "Sdf"])
a.c()
</code></pre>
<p>It gives me an empty dictionary when I print it inside function <code>c</code>.<br />
It gives me an error:</p>
<pre><code>TypeError: 'Param' object is not iterable
</code></pre>
<p>I guess this happens because it is not able to assign a value to the <code>overrides</code> variable.</p>
| <python><pyspark><transformer-model><simpletransformers> | 2023-07-11 07:03:19 | 1 | 323 | 300 |
76,659,355 | 1,777,081 | tensorflow not installing | <p>I've tried to install tensorflow with:</p>
<pre><code>pip3 install tensorflow
</code></pre>
<p>but I receive the following error message:</p>
<pre><code>ERROR: Could not find a version that satisfies the requirement tensorflow (from versions: none)
ERROR: No matching distribution found for tensorflow
</code></pre>
<p>My Python version is 3.11.4 and pip is 23.1.2.</p>
<p>TensorFlow supports Python 3.11, so why is it not installing?</p>
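<p>Assuming the interpreter itself is the culprit: "No matching distribution found" usually means pip found no wheel for this exact interpreter/platform combination. TensorFlow ships only 64-bit wheels (Python 3.11 support arrived with TF 2.12), so a quick check of what pip actually sees can narrow it down:</p>

```python
# TensorFlow publishes only 64-bit wheels for a limited set of OS/architecture
# combinations; a 32-bit interpreter (or an unsupported platform such as
# 32-bit Windows) makes pip report "No matching distribution found".
import platform
import struct
import sys

bits = struct.calcsize("P") * 8  # pointer size reveals a 32- vs 64-bit interpreter
print(f"Python {sys.version_info.major}.{sys.version_info.minor}, "
      f"{bits}-bit, on {platform.machine()}")
```

<p>If this reports 32-bit, installing a 64-bit Python build is the fix; otherwise, upgrading pip or checking the OS/architecture against TensorFlow's supported platforms is the next step.</p>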
| <python><tensorflow> | 2023-07-11 06:53:50 | 0 | 355 | Jonas Fredriksson |
76,659,299 | 3,121,975 | What to do when your formatter and your linter are fighting | <p>I've been writing a decorator in Python:</p>
<pre><code>def dictionary_updater(key: str) -> Callable[[FieldStringer], PayloadSetter]:
"""Convert string-converter to dictionary modifier.
"""
# Create the actual decorator method and return it
def inner(func: FieldStringer) -> PayloadSetter:
# Create the method that should actually be called when the decorated function
# is invoked
def with_dict(self, payload: Payload) -> None:
payload[key] = func(self)
return with_dict
return inner
</code></pre>
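<p>For context, the decorator itself works as intended; a minimal usage sketch (with the <code>FieldStringer</code>/<code>PayloadSetter</code> aliases simplified away, since their definitions aren't shown) is:</p>

```python
from typing import Any, Dict

Payload = Dict[str, Any]

def dictionary_updater(key: str):
    # Same structure as above, with the type aliases dropped for brevity.
    def inner(func):
        def with_dict(self, payload: Payload) -> None:
            payload[key] = func(self)
        return with_dict
    return inner

class Record:
    name = "example"

    @dictionary_updater("name_field")
    def set_name(self) -> str:
        return self.name

payload: Payload = {}
Record().set_name(payload)  # stores the stringified field under "name_field"
print(payload)  # {'name_field': 'example'}
```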
<p>The issue I'm having is that <code>black</code> will try to put an empty line after the docstring, I assume because the first line of code is a function definition. However, <code>pydocstyle</code> will complain about this because there's not supposed to be an empty line between the docstring and the function body.</p>
<p>I've tried disabling the rule for each tool, respectively, but because the offending line is empty, both tools appear to ignore the inline suppression comment. Furthermore, I can't just disable the tools themselves or modify their rules because they're part of a CI/CD pipeline I have no control over. I suppose I could disable one tool or the other for the entire file, but I'd rather not do that either, as that defeats the purpose of having the tools in the first place.</p>
<p>Does anyone know how to fix this issue?</p>
| <python><python-black> | 2023-07-11 06:44:38 | 1 | 8,192 | Woody1193 |
76,658,904 | 2,975,438 | LangChain with ConversationBufferMemory in Streamlit application does not work | <p>I have a streamlit chatbot that works perfectly fine but does not remember previous chat history. I was trying to add it with langchain ConversationBufferMemory but it does not seem to work.</p>
<p>Here is a sample of chatbot I created:</p>
<pre><code>import streamlit as st
from streamlit_chat import message
from langchain.chains import ConversationChain
from langchain.llms import OpenAI
from langchain.chat_models import AzureChatOpenAI
from langchain.memory import ConversationBufferMemory
from langchain.prompts import (
ChatPromptTemplate,
MessagesPlaceholder,
SystemMessagePromptTemplate,
HumanMessagePromptTemplate
)
prompt = ChatPromptTemplate.from_messages([
SystemMessagePromptTemplate.from_template("The following is a friendly conversation between a human and an AI. The AI is talkative and provides lots of specific details from its context. If the AI does not know the answer to a question, it truthfully says it does not know."),
MessagesPlaceholder(variable_name="history"),
HumanMessagePromptTemplate.from_template("{input}")
])
def load_chain(prompt):
"""Logic for loading the chain you want to use should go here."""
llm = AzureChatOpenAI(
deployment_name = 'gpt-35-turbo',
model_name = 'gpt-35-turbo',
temperature = 0,
openai_api_key = '.....',
openai_api_base = '.....',
openai_api_version = "2023-05-15",
openai_api_type="azure"
)
memory = ConversationBufferMemory(return_messages=True)
chain = ConversationChain(
llm=llm,
verbose=True,
prompt=prompt,
memory=memory
)
return chain
chain = load_chain(prompt)
# From here down is all the StreamLit UI.
st.set_page_config(page_title="LangChain Demo", page_icon=":robot:")
st.header("LangChain Demo")
if "generated" not in st.session_state:
st.session_state["generated"] = []
if "past" not in st.session_state:
st.session_state["past"] = []
if "history" not in st.session_state:
st.session_state["history"] = []
def get_text():
input_text = st.text_input("You: ", "Hello, how are you?", key="input")
return input_text
user_input = get_text()
if user_input:
output = chain.run(input=user_input, history=st.session_state["history"])
st.session_state["history"].append((user_input, output))
st.session_state.past.append(user_input)
st.session_state.generated.append(output)
st.write(st.session_state["history"])
if st.session_state["generated"]:
for i in range(len(st.session_state["generated"]) - 1, -1, -1):
message(st.session_state["generated"][i], key=str(i))
message(st.session_state["past"][i], is_user=True, key=str(i) + "_user")
</code></pre>
<p>It looks like the bot ignores <code>ConversationBufferMemory</code> for some reason. Any help would be appreciated.</p>
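<p>A common cause here is Streamlit's execution model rather than LangChain: the whole script re-runs on every interaction, so the module-level <code>chain = load_chain(prompt)</code> rebuilds the chain — and a fresh, empty <code>ConversationBufferMemory</code> — each time. Creating the chain once and keeping it in <code>st.session_state</code> avoids that. The sketch below models the rerun behaviour in plain Python (a dict stands in for <code>st.session_state</code>, a list for the buffer memory; none of these names are LangChain APIs):</p>

```python
# Sketch: Streamlit re-executes the script per interaction. Anything created at
# module level is rebuilt, so memory held by a fresh object is lost. Storing the
# stateful object in session state (a dict here) preserves it across reruns.

class Chain:
    def __init__(self):
        self.memory = []          # stands in for ConversationBufferMemory

    def run(self, text):
        self.memory.append(text)
        return len(self.memory)   # "answers" just count remembered turns

def rerun_script(session_state, user_input):
    # Equivalent of: if "chain" not in st.session_state: st.session_state.chain = load_chain(...)
    if "chain" not in session_state:
        session_state["chain"] = Chain()
    return session_state["chain"].run(user_input)

state = {}
first = rerun_script(state, "hello")
second = rerun_script(state, "how are you?")
print(first, second)  # 1 2 — the second rerun still sees the first turn
```

<p>Translated back to the app, that means something like <code>if "chain" not in st.session_state: st.session_state.chain = load_chain(prompt)</code> and then calling <code>st.session_state.chain.run(...)</code>.</p>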
| <python><streamlit><langchain> | 2023-07-11 05:23:34 | 2 | 1,298 | illuminato |
76,658,757 | 2,825,403 | Cleaner way of getting date out of pandas timestamp | <p>Suppose I have a df like so:</p>
<pre><code>foo = pd.DataFrame(
{
'a': [1, 2, 3],
'b': ['2021-01-05 05:15', '2021-01-06 11:10', '2021-03-01 09:00']
}
)
</code></pre>
<p>And I want to convert column <code>b</code> to datetime and extract only the date part. I can do something like:</p>
<pre><code>foo['date'] = pd.to_datetime(foo.b).dt.date
</code></pre>
<p>But even though this returns an array of <code>datetime.date</code> objects, pandas doesn't recognise them as datetimes and assigns an <code>object</code> dtype to the column:</p>
<pre><code>foo.dtypes
Out:
a int64
b object
date object
dtype: object
</code></pre>
<p>I can of course get it to be a datetime by casting it to datetime again:</p>
<pre><code>foo['date'] = pd.to_datetime(pd.to_datetime(foo.b).dt.date)
</code></pre>
<p>I can also get it with string slicing:</p>
<pre><code>foo['date2'] = pd.to_datetime(foo.b.str[:11])
</code></pre>
<p>But I feel like there must be a cleaner way of getting a date out of a datetime column.</p>
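<p>If the goal is a datetime dtype with the time-of-day stripped (rather than a column of Python <code>date</code> objects), <code>.dt.normalize()</code> does it in one pass — it truncates to midnight while keeping <code>datetime64[ns]</code>:</p>

```python
import pandas as pd

foo = pd.DataFrame(
    {
        "a": [1, 2, 3],
        "b": ["2021-01-05 05:15", "2021-01-06 11:10", "2021-03-01 09:00"],
    }
)

# .dt.normalize() truncates the time-of-day to midnight but keeps the
# datetime64[ns] dtype, so no second to_datetime pass is needed.
foo["date"] = pd.to_datetime(foo["b"]).dt.normalize()
print(foo["date"].dtype)  # datetime64[ns]
```

<p>If actual <code>date</code> objects are wanted, the <code>object</code> dtype is unavoidable, since pandas has no native date-only dtype.</p>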
| <python><pandas><datetime> | 2023-07-11 04:41:17 | 1 | 4,474 | NotAName |