| QuestionId | UserId | QuestionTitle | QuestionBody | Tags | CreationDate | AnswerCount | UserExpertiseLevel | UserDisplayName |
|---|---|---|---|---|---|---|---|---|
76,853,628 | 1,078,199 | Python `select.select` vs `asyncio.wait`. Which to use? | <p>Both of these functions can be used to wait for the first of a group of files to become readable. <code>select</code> uses a UNIX syscall; <code>asyncio.wait</code> uses green threads. What are the theoretical pros and cons of each approach? Do they use the same underlying primitives (in particular, does <code>asyncio.wait</code> somehow figure out that it's waiting on file descriptors and call <code>select</code>)?</p>
<pre class="lang-py prettyprint-override"><code>import select
devices = ["/dev/input/event6", "/dev/input/event3"]
files = [open(device) for device in devices]
first_fileno = select.select([file.fileno() for file in files], [], [])[0][0]
print([file.name for file in files if file.fileno() == first_fileno][0])
</code></pre>
<pre class="lang-py prettyprint-override"><code>import aiofiles
import asyncio
import contextlib
async def read_and_return(file):
contents = await file.read(1)
return file, contents
async def main(devices):
async with contextlib.AsyncExitStack() as stack:
files = [await stack.enter_async_context(aiofiles.open(device, "rb")) for device in devices]
tasks = [asyncio.create_task(read_and_return(file)) for file in files]
completed, incompleted = await asyncio.wait(tasks, return_when=asyncio.FIRST_COMPLETED)
file, contents = list(completed)[0].result()
print(file.name)
devices = ["/dev/input/event6", "/dev/input/event3"]
asyncio.run(main(devices))
</code></pre>
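<p>For context, a quick inspection (a sketch, not a full answer) suggests that on Unix the default asyncio event loop is itself built on the <code>selectors</code> module, which wraps <code>select</code>/<code>poll</code>/<code>epoll</code>:</p>

```python
import asyncio
import selectors

# On Unix, the default event loop is a SelectorEventLoop built on the
# selectors module (which picks epoll/kqueue/poll/select per platform)
loop = asyncio.new_event_loop()
print(type(loop).__name__)        # e.g. "_UnixSelectorEventLoop" on Linux
print(selectors.DefaultSelector)  # e.g. selectors.EpollSelector on Linux
loop.close()
```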
| <python><asynchronous><concurrency><python-asyncio> | 2023-08-07 16:42:47 | 1 | 5,583 | charmoniumQ |
76,853,620 | 7,200,859 | 2D heatmap of frequency over magnitude from FFTs | <p>How can I create a 2D heat map (or similar) with frequency on the x-axis and the magnitude on the y-axis?</p>
<pre><code>import numpy as np
import matplotlib.pyplot as plt
fs = 1000
t = np.arange(0, 0.1, 1/fs)
N = len(t)
f_bin = fs / N
f = np.arange(0, fs, f_bin)
X = [np.fft.fft(np.sin(2 * np.pi * 100 * t)), np.fft.fft(np.sin(2 * np.pi * 200 * t)), np.fft.fft(np.sin(2 * np.pi * 300 * t))]
M = np.absolute(X)
fig1, (ax1) = plt.subplots(nrows=1, ncols=1)
ax1.plot(f[:len(f)//2], M[0,:len(M[0][:])//2], "r+", label="100 Hz")
ax1.plot(f[:len(f)//2], M[1,:len(M[1][:])//2], "g*", label="200 Hz")
ax1.plot(f[:len(f)//2], M[2,:len(M[2][:])//2], "b.", label="300 Hz")
ax1.set_xlabel("Frequency (Hz)")
ax1.set_ylabel("Magnitude")
ax1.set_title(f"DFT")
ax1.grid(True, which="both")
ax1.legend()
plt.show()
</code></pre>
<p>In general I search for a good way to visualize in a single picture the output of multiple (e.g., 1000 and not just three as in this example) FFTs (e.g., with <code>matplotlib.imshow()</code>?, <code>numpy.histogram2d()</code>?).</p>
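<p>One possible sketch (not the only approach) stacks the magnitude rows into a single 2-D array and hands it to <code>imshow</code>, which scales to hundreds of FFTs; the 30 tone frequencies below are made up for illustration:</p>

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")  # non-interactive backend so this runs headless
import matplotlib.pyplot as plt

fs = 1000
t = np.arange(0, 0.1, 1 / fs)
tones = np.arange(100, 400, 10)          # 30 example signals instead of 3
M = np.abs([np.fft.fft(np.sin(2 * np.pi * f0 * t)) for f0 in tones])
half = M.shape[1] // 2                   # keep the positive-frequency half

fig, ax = plt.subplots()
im = ax.imshow(M[:, :half], aspect="auto", origin="lower",
               extent=[0, fs / 2, 0, len(tones)])
ax.set_xlabel("Frequency (Hz)")
ax.set_ylabel("FFT index")
fig.colorbar(im, label="Magnitude")
fig.savefig("fft_heatmap.png")
```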
| <python><numpy><matplotlib><fft> | 2023-08-07 16:41:57 | 2 | 715 | ge45mue |
76,853,607 | 2,417,922 | Python "map": Do I have to surround it with "list"? | <p>I find that sometimes, if I want to use the result of Python <code>map</code>, I have to surround the call with <code>list</code>. For example:</p>
<pre><code>>>> sum(map( lambda x: x + 1, range(5)))
15
>>> max(map( lambda x: x + 1, range(5)))
5
>>> map( lambda x: x + 1, range(5))[2:3]
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: 'map' object is not subscriptable
</code></pre>
<p>What I'm looking for is some rule that defines when the addition of <code>list</code> is necessary.</p>
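<p>As a hedged illustration of that rule: <code>map</code> returns a lazy iterator, so single-pass consumers such as <code>sum</code> and <code>max</code> need no <code>list</code>, while anything requiring indexing or multiple passes needs the iterator materialized (or a lazy tool such as <code>itertools.islice</code>):</p>

```python
from itertools import islice

m = map(lambda x: x + 1, range(5))
# sum()/max() consume the iterator once, so no list() is needed there;
# slicing needs either list(...) or a lazy equivalent like islice
print(list(islice(m, 2, 3)))  # [3]
```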
| <python><python-3.x> | 2023-08-07 16:40:09 | 1 | 1,252 | Mark Lavin |
76,853,402 | 601,976 | Proper way to build a compiled alternation list | <p>I have a series of regex patterns that get grouped into categories. I'm trying to statically compile them into alternations, but I don't want any special meanings to get lost.</p>
<p>As an example, I'm identifying Raspberry Pi GPIO pins by name. There are the GPIO pins, 0-27 (coincidentally the same numbers as the BCM nomenclature), the voltage reference pins, and the named-function pins. Depending on which category a particular physical pin falls into, assumptions can be made; for example, a voltage-reference pin never has a pull-up status nor a GPIO/BCM number.</p>
<p>So:</p>
<pre class="lang-py prettyprint-override"><code>_cats = {
'data': (
r'gpio\.?([0-9]|[1-3][0-9]|40)',
),
'vref': (
r'v3_3',
        r'v5',
r'gnd',
),
'named': (
r'SDA\.?([01])',
        r'CE\.?0',
r'CE\.?1',
),
}
</code></pre>
<p>The first thing I want to do is combine all of the patterns into a single compiled alternation so I can check whether an input actually <em>matches</em> any of my keys. For a single string in the first dict, I could simply use:</p>
<pre class="lang-py prettyprint-override"><code>crx = re.compile((r'\A' + _cats['data'][0] + r'\Z'), re.IGNORECASE)
</code></pre>
<p>For all of them, I could do something like:</p>
<pre class="lang-py prettyprint-override"><code>crx = re.compile('|'.join([(r'\A' + rx + r'\Z') for rx in _cats['vref']]), re.IGNORECASE)
</code></pre>
<p>but this is starting to confuse me. Each regex term should be <code>^$</code> or <code>\A\Z</code> bounded, but joining them into alternations and <em>then</em> compiling them is giving me issues.</p>
<p>I'm looking for something like Emacs' <a href="https://www.emacswiki.org/emacs/RegexpOpt" rel="nofollow noreferrer"><code>regexp-opt</code></a> function.</p>
<p>I've tried variations on the theme described, and I'm getting syntax errors, patterns that don't match anything, and patterns that match too much.</p>
<p><em>Edit</em></p>
<p>Thanks for the comments which helped clarify and solve my main question, but I think the second part got lost somewhere. Specifically,</p>
<p>Is a compiled regex itself a regular expression, or is it a sort of opaque end-point? Would this (p-codish) work?</p>
<pre class="lang-py prettyprint-override"><code>rx_a = re.compile(r'(?:a|1|#)')
rx_b = re.compile(r'(?:[b-z]|[2-9]|@)')
rx_c = re.compile('|'.join([repr(rx_a), repr(rx_b)]))
</code></pre>
<p>Or something of the sort?</p>
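<p>For reference, here is one sketch of both halves: wrapping each alternative in a non-capturing group before joining (so alternation cannot leak across terms, and the anchors are applied once), and reusing a compiled pattern via its <code>.pattern</code> attribute rather than <code>repr</code>, since a compiled pattern object is an opaque object rather than a string:</p>

```python
import re

_cats = {
    'data': (r'gpio\.?([0-9]|[1-3][0-9]|40)',),
    'vref': (r'v3_3', r'v5', r'gnd'),
    'named': (r'SDA\.?([01])', r'CE\.?0', r'CE\.?1'),
}

# Wrap each term in (?:...) and anchor the whole alternation once
all_terms = [p for pats in _cats.values() for p in pats]
crx = re.compile(r'\A(?:' + '|'.join(f'(?:{p})' for p in all_terms) + r')\Z',
                 re.IGNORECASE)
print(bool(crx.match('gpio17')), bool(crx.match('bogus')))  # True False

# A compiled regex is not itself a regular expression, but its source
# string survives on .pattern and can be spliced into a new expression
rx_a = re.compile(r'(?:a|1|#)')
rx_b = re.compile(r'(?:[b-z]|[2-9]|@)')
rx_c = re.compile('|'.join([rx_a.pattern, rx_b.pattern]))
print(bool(rx_c.match('#')))  # True
```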
| <python><regex><regex-alternation> | 2023-08-07 16:10:09 | 1 | 2,010 | RoUS |
76,853,351 | 12,300,981 | Assistance in setting up a custom method for Scipy Basinhopping | <p>I'm trying to set a custom method for the local minimization. From my understanding of basinhopping, you start with an initial starting condition, minimize using a local method, then use basinhopping change the value of the parameters, and minimize locally again. I know basinhopping has a few options for local minimization methods, but I'm trying to play around with using a custom one and am having a few problems. So let's start with a MVE:</p>
<pre><code>import scipy.optimize as so
import numpy as np
from scipy.optimize import basinhopping
from scipy.optimize import OptimizeResult
data_set=[np.array([0.01252837, 0.00994032, 0.02607758, 0.02938639, 0.03470389,
0.0393117 , 0.05045751, 0.05288866]), np.array([0.01096586, 0.0093723 , 0.02996665, 0.0490254 , 0.06359686,
0.07470107, 0.07533133, 0.10770218]), np.array([0.0108 , 0.01922004, 0.0290243 , 0.03236109, 0.00761577,
0.05216742, 0.05526853, 0.06572701]), np.array([0.01744162, 0.02563377, 0.03473111, 0.04372516, 0.05533209,
0.06429533, 0.06852919, 0.08112336]), np.array([0.01664812, 0.03377632, 0.04334155, 0.05260618, 0.06893069,
0.07831481, 0.08656102, 0.0999732 ]), np.array([0.01933805, 0.02861486, 0.04197618, 0.05017609, 0.06353904,
0.07471151, 0.08393098, 0.09447883])]
prot,lig=np.array([0.28, 0.26, 0.25, 0.23, 0.21, 0.19, 0.18, 0.15]), np.array([0.14, 0.26, 0.37, 0.47, 0.63, 0.77, 0.88, 1.1 ])
def global_fun(par,protein,ligand,csp_list):
kd,wmax=par
chi2=0
for csp in csp_list:
model=wmax*((protein+ligand+kd-np.sqrt(((protein+ligand+kd)**2)-(4*protein*ligand)))/(2*protein))
chi2+=np.sum((csp-model)**2)
return chi2
sol=basinhopping(global_fun,minimizer_kwargs={'args':(prot,lig,data_set),'options':{'maxiter':100000},'bounds':((0,np.inf),)*2,'method':'Nelder-Mead'},x0=[1,0.01])
>>>
fun: 0.00879004731452548
lowest_optimization_result: final_simplex: (array([[3.20231857, 0.33548486],
[3.20238274, 0.33549024],
[3.20240774, 0.33549202]]), array([0.00879005, 0.00879005, 0.00879005]))
fun: 0.00879004731452548
message: 'Optimization terminated successfully.'
nfev: 66
nit: 34
status: 0
success: True
x: array([3.20231857, 0.33548486])
message: ['requested number of basinhopping iterations completed successfully']
minimization_failures: 0
nfev: 8794
nit: 100
x: array([3.20231857, 0.33548486])
</code></pre>
<p>It works well, providing a good solution with low chi2.</p>
<p>Basinhopping takes the inputs in <code>minimizer_kwargs</code>, and uses those for the local solver. So the bounds, arguments, and options are all defined in minimizer_kwargs and passed on. So I tried my hand at a local solver (I'm only posting the bottom half of the code, the top half is the same).</p>
<pre><code>def custom_method(args=(args,bounds,options)):
local_so=so.minimize(args=args,bounds=bounds,options=options,method='Nelder-Mead')
return OptimizeResult(x=local_so.x,fun=local_so.fun,success=local_so.success)
sol=basinhopping(global_fun,minimizer_kwargs={'args':(prot,lig,data_set),'options':{'maxiter':100000},'bounds':((0,np.inf),)*2,'method':custom_method},x0=[1,0.01])
</code></pre>
<p>This is incorrectly set up, but I don't quite know how to fix it. I don't know how to pass the arguments from <code>minimizer_kwargs</code> through to my custom method so they can be used as inputs for the local solver. I'm also unsure how <code>x0</code> is handed from basinhopping to the local solver: <code>x0</code> is the same for both on the first iteration, but after each basinhopping step the new <code>x0</code> needs to be forwarded to the local solver.</p>
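<p>For reference, a sketch of the protocol (hedged: the signature below follows SciPy's documented convention for a callable <code>method</code> — it is invoked as <code>method(fun, x0, args, **kwargs)</code>, with <code>bounds</code>, <code>callback</code>, etc. passed as keywords and the contents of the <code>options</code> dict unpacked pair by pair — and should be verified against the installed SciPy version). The callable must return an <code>OptimizeResult</code>; basinhopping then forwards each new <code>x0</code> on its own:</p>

```python
import numpy as np
import scipy.optimize as so
from scipy.optimize import OptimizeResult, basinhopping

def custom_method(fun, x0, args=(), **kwargs):
    # kwargs carries bounds/callback/jac/... plus the entries of the
    # 'options' dict unpacked pair by pair (e.g. maxiter=100000)
    opts = {'maxiter': kwargs.get('maxiter', 1000)}
    local = so.minimize(fun, x0, args=args, bounds=kwargs.get('bounds'),
                        options=opts, method='Nelder-Mead')
    return OptimizeResult(x=local.x, fun=local.fun,
                          success=local.success, nfev=local.nfev)

# Tiny smoke test on a quadratic with a known minimum at (1, -2)
def quad(p):
    return (p[0] - 1) ** 2 + (p[1] + 2) ** 2

sol = basinhopping(quad, x0=[0, 0], niter=3,
                   minimizer_kwargs={'method': custom_method,
                                     'options': {'maxiter': 100000}})
print(np.round(sol.x, 3))  # should land near [1, -2]
```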
| <python><numpy><scipy><scipy-optimize-minimize> | 2023-08-07 16:01:58 | 1 | 623 | samman |
76,853,343 | 4,551,325 | Pandas multi-indexed dataframe column sum and assignment | <p>Consider a dataframe with multi-index'ed columns:</p>
<pre><code>[ In 1]: df = pd.DataFrame(np.random.randint(0,10,[5,4]),
columns=pd.MultiIndex.from_product([['A','B'], ['a','b']]))
df
[Out 1]:
A B
a b a b
0 6 0 7 3
1 0 5 1 8
2 3 4 6 4
3 6 2 0 2
4 3 9 5 0
</code></pre>
<p>I want to insert new columns that are sums of <code>level=1</code> columns. This is my attempt which works:</p>
<pre><code>[ In 2]: df.loc[:, pd.IndexSlice['A','c']] = df.loc[:, pd.IndexSlice['A',:]].sum(axis=1)
df.loc[:, pd.IndexSlice['B','c']] = df.loc[:, pd.IndexSlice['B',:]].sum(axis=1)
df = df.sort_index(axis=1)
df
[Out 2]:
A B
a b c a b c
0 6 0 6 7 3 10
1 0 5 5 1 8 9
2 3 4 7 6 4 10
3 6 2 8 0 2 2
4 3 9 12 5 0 5
</code></pre>
<p>Is there a method that avoids looping through the <code>level=0</code> column names? Because in my real dataframe there are hundreds of text column names at <code>level=0</code>.</p>
<p>Python 3.8, pandas 1.4.4.</p>
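<p>One loop-free sketch (written against the transpose, which sidesteps the <code>groupby(axis=1)</code> form that newer pandas deprecates) computes all level-0 sums in a single groupby and concatenates them back as a new <code>c</code> column under every level-0 key:</p>

```python
import numpy as np
import pandas as pd

df = pd.DataFrame(np.random.randint(0, 10, [5, 4]),
                  columns=pd.MultiIndex.from_product([['A', 'B'], ['a', 'b']]))

# Sum the level-1 columns within each level-0 group, all at once
sums = df.T.groupby(level=0).sum().T
sums.columns = pd.MultiIndex.from_product([sums.columns, ['c']])
out = pd.concat([df, sums], axis=1).sort_index(axis=1)
print(out)
```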
| <python><pandas><multi-index> | 2023-08-07 16:01:04 | 2 | 1,755 | data-monkey |
76,853,233 | 10,460,447 | Django OnetoOne key insert on perform_update not working | <p>in my <code>models.py</code> I've</p>
<pre><code>class Product(models.Model):
code = models.CharField(primary_key=True, null=False)
status = models.CharField(max_length=50, blank=True, null=True)
</code></pre>
<h1></h1>
<pre><code>class Price(models.Model):
code = models.OneToOneField(Product, on_delete=models.CASCADE)
price_tc = models.FloatField(blank=True, null=True)
price_uc = models.FloatField(blank=True, null=True)
price_bx = models.FloatField(blank=True, null=True)
</code></pre>
<p>I want to insert the price after each insertion of a product to ensure the one-to-one relation,
so I implemented <code>perform_create</code> in <code>views.py</code>:</p>
<pre><code>class ProductView(viewsets.ModelViewSet):
queryset = Product.objects.all()
serializer_class = ProductSerializer
ordering = ["code"]
def perform_create(self, serializer):
serializer.save()
Price.objects.create(self.request.data["code"])
</code></pre>
<p>I get the error: Price.code must be a "Product" instance, so I swapped to:</p>
<pre><code> def perform_create(self, serializer):
serializer.save()
m = Product.objects.get(code=self.request.data["code"])
Price.objects.create(code=m)
</code></pre>
<p>But it is still not working.</p>
| <python><django><django-views><django-forms> | 2023-08-07 15:43:05 | 1 | 325 | Moun |
76,853,198 | 880,874 | Why does my Python script take too much time to execute? | <p>I have a very large Excel spreadsheet of astronomical observation data that comes from a SQL database query.</p>
<p>The data is sorted by <code>galaxyID</code> and there are about 70 columns and 5000+ rows.</p>
<p>90% of the data are just duplicates and I need to find which columns are causing the duplicates.</p>
<p>For example, I might have a row that is repeated 50 times where all the data is the same except for one column.</p>
<p>SO I have this Python program below that is supposed to list all the column headers that are causing duplicates for each <code>galaxyID</code>.</p>
<p>Like if <code>galaxyID 0473788</code> has 30 rows except for 2 columns causing dupes, I'd like the Python script to list something like:</p>
<p><code>galaxyID 0473788: unique column(s) causing duplicates --- observationTimeId (5874), locationName('Juno Mountain').</code></p>
<p>But when I run the program it seems to go into an infinite loop printing out every single row in the Excel Spreadsheet.</p>
<p>Does anyone see anything that's wrong?</p>
<p>Thanks!</p>
<pre><code>import pandas as pd
def find_unique_rows_by_ID(file_path):
df = pd.read_excel(file_path, engine='openpyxl')
df.sort_values(by='galaxyID', inplace=True)
grouped = df.groupby('galaxyID')
unique_rows_by_ID = {}
for ID, group in grouped:
unique_rows = group.drop_duplicates(keep=False)
unique_rows_list = []
for index, row in unique_rows.iterrows():
unique_data = {
'row': row.to_dict(),
'unique_columns': list(row.index)
}
unique_rows_list.append(unique_data)
unique_rows_by_ID[ID] = unique_rows_list
return unique_rows_by_ID
if __name__ == "__main__":
file_path = "astronomy_observation_data.xlsx"
unique_rows = find_unique_rows_by_ID(file_path)
for ID, unique_rows_list in unique_rows.items():
print(f"galaxyID: {ID}")
for unique_data in unique_rows_list:
print("Unique Row:", unique_data['row'])
print("Unique Columns:", unique_data['unique_columns'])
print("--------------------")
</code></pre>
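<p>As a hedged alternative sketch (the sample frame below is made up for illustration): within one <code>galaxyID</code>, a column "causes" duplicates exactly when it takes more than one value while the rest repeat, and <code>groupby(...).nunique()</code> surfaces those columns directly without iterating over rows:</p>

```python
import pandas as pd

# Hypothetical sample standing in for the spreadsheet
df = pd.DataFrame({
    'galaxyID': ['0473788'] * 3 + ['0473789'] * 2,
    'observationTimeId': [5874, 5875, 5876, 1, 1],
    'locationName': ['Juno Mountain'] * 3 + ['Mesa', 'Mesa'],
    'magnitude': [12.1, 12.1, 12.1, 9.9, 9.9],
})

# Count distinct values per column within each galaxyID; a count > 1
# marks a column whose variation keeps otherwise-identical rows apart
varying = df.groupby('galaxyID').nunique()
for gid, counts in varying.iterrows():
    culprits = counts[counts > 1].index.tolist()
    if culprits:
        print(f"galaxyID {gid}: column(s) causing duplicates: {culprits}")
```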
| <python><pandas> | 2023-08-07 15:37:49 | 2 | 7,206 | SkyeBoniwell |
76,853,170 | 983,447 | setting temperature in Open Llama does not work | <p>I try to generate several alternative continuations of given prompt with Open Llama, setting nonzero temperature:</p>
<pre><code>import re
import torch
from transformers import LlamaTokenizer, LlamaForCausalLM
model_path = 'openlm-research/open_llama_3b_v2'
tokenizer = LlamaTokenizer.from_pretrained(model_path)
model = LlamaForCausalLM.from_pretrained(model_path, torch_dtype=torch.float16, device_map='auto')
text = 'Once upon a time '
text_tokenized_for_llm = tokenizer(text, return_tensors="pt").input_ids
for i in range(25):
result = model.generate(input_ids=text_tokenized_for_llm, max_new_tokens=6, temperature=2)
text = tokenizer.decode(result[0])
print('->' + text + '<-')
</code></pre>
<p>However, when I run the program, all continuations are the same:</p>
<pre><code>-><s>Once upon a time 100 years ago,<-
-><s>Once upon a time 100 years ago,<-
-><s>Once upon a time 100 years ago,<-
(...)
</code></pre>
<p>What is wrong here?</p>
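<p>One suspicion worth checking (hedged, based on the transformers documentation): <code>generate</code> defaults to greedy decoding unless <code>do_sample=True</code> is passed, and <code>temperature</code> is ignored under greedy search, so every run picks the same argmax tokens. A framework-free sketch of the difference:</p>

```python
import math
import random

def sample_token(logits, temperature, rng):
    # Softmax over temperature-scaled logits, then draw one token id;
    # higher temperature flattens the distribution, lower sharpens it
    scaled = [l / temperature for l in logits]
    m = max(scaled)
    weights = [math.exp(s - m) for s in scaled]
    return rng.choices(range(len(logits)), weights=weights)[0]

rng = random.Random(0)
logits = [2.0, 1.0, 0.5]
# Greedy decoding always returns the argmax, regardless of temperature
greedy = {max(range(len(logits)), key=logits.__getitem__) for _ in range(50)}
sampled = {sample_token(logits, 2.0, rng) for _ in range(50)}
print(len(greedy), len(sampled) > 1)
```

So passing <code>do_sample=True</code> alongside <code>temperature</code> to <code>model.generate</code> would be the first thing to try.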
| <python><large-language-model> | 2023-08-07 15:32:42 | 1 | 1,737 | user983447 |
76,853,092 | 944,732 | Hierarchical varying effects model with MVN prior | <h1>What I'm trying to do</h1>
<p>I've already dealt with multivariate priors in pymc (I'm using 4.0.1), but I can't get their usage in a hierarchical model working. In my example I'm modeling a regression problem with two covariates <code>x1</code>, <code>x2</code> and an outcome <code>y</code>. There are two categorical features in the data that define hierarchy levels: <code>dim1</code> indicates the higher level category and <code>dim2</code> the lower level category.</p>
<p>The model consists of an intercept and a slope each for <code>x1</code> and <code>x2</code> respectively. There's a multivariate normal hyperprior <code>B_1</code> that defines varying intercepts and slopes by <code>dim1</code> in order [intercept, slope_x1, slope_x2] . These then inform the prior <code>B_2</code> on the lower level, where the terms vary also by <code>dim2</code>.</p>
<h1>Problem</h1>
<p>When trying to sample from the model, I'm getting the following error: "<strong>ValueError: Invalid dimension for value: 3</strong>".<br />
I interpret this as the '3' in the dims for <code>B_1</code> and <code>B_2</code> being wrong, but when evaluating the model variables manually everything seems to be fine:</p>
<p><code>B_2.eval().shape</code> correctly shows that the rv is of shape (6,2,3).<br />
The linear model in <code>mu</code> can also be evaluated without issue.</p>
<p>Checking the error traceback mentions this as the issue:</p>
<pre><code>/local_disk0/.ephemeral_nfs/cluster_libraries/python/lib/python3.9/site-packages/pymc/distributions/multivariate.py in quaddist_parse(value, mu, cov, mat_type)
133 """Compute (x - mu).T @ Sigma^-1 @ (x - mu) and the logdet of Sigma."""
134 if value.ndim > 2 or value.ndim == 0:
--> 135 raise ValueError("Invalid dimension for value: %s" % value.ndim)
136 if value.ndim == 1:
137 onedim = True
</code></pre>
<p>Which means that the dimensionality can't be larger than 2. This seems odd, because multivariate distributions with more than 2 dimensions are pretty common. That leads me to believe that the way I'm specifying the MVN priors is just wrong, but after much trying I'm pretty stuck.
What's the correct way of specifying <code>B_1</code> and <code>B_2</code> here?</p>
<h1>Code for model</h1>
<pre><code>import numpy as np
import pymc as pm
with pm.Model(coords={'dim1':np.arange(2), 'dim2':np.arange(6)}) as mdl:
d1 = pm.MutableData('d1', X['d1'].values, dims='obs_id')
d2 = pm.MutableData('d2', X['d2'].values, dims='obs_id')
# upper level
sd_1 = pm.HalfNormal.dist(1,shape=3)
chol_1, _, _ = pm.LKJCholeskyCov('chol_1', n=3, eta=1,
sd_dist=sd_1, compute_corr=True)
B_1 = pm.MvNormal('B_1', mu=[5,0,0], chol=chol_1,
dims=('dim1','3')
)
# lower level
sd_2 = pm.HalfNormal.dist(1,shape=3)
chol_2, _, _ = pm.LKJCholeskyCov('chol_2', n=3, eta=1,
sd_dist=sd_2, compute_corr=True)
B_2 = pm.MvNormal('B_2', mu=B_1, chol=chol_2,
dims=('dim2','dim1','3')
)
# regular robust regression, mu defined as linear model of covariates
sigma = pm.HalfNormal('sigma',2)
mu = pm.Deterministic(
'mu',
B_2[d2,d1,0] + # intercept
B_2[d2,d1,1] * X['x1'].values + # slope x1
B_2[d2,d1,2] * X['x2'].values, # slope x2
dims='obs_id'
)
outcome = pm.StudentT('outcome', nu=3, mu=mu, sigma=sigma, observed=y, dims="obs_id")
trace = pm.sample(draws=2000, tune=1000, target_accept=0.95, random_seed=0)
</code></pre>
<h1>Code for data generation</h1>
<pre><code>import numpy as np
import pandas as pd
from scipy import stats
import pymc as pm
rng = np.random.default_rng(0)
N = 1000
# generate covariates and categories
X = pd.DataFrame(
{
'x1': stats.halfnorm(loc=0,scale=3).rvs(N),
'x2': stats.norm(loc=0,scale=2).rvs(N),
'd1': rng.choice([0,1],size=N, p=[0.6,0.4]),
'd2': rng.choice(np.arange(6),size=N, p=[0.1,0.2,0.1,0.3,0.1,0.2]),
}
)
# means of the parameter distributions
intercept = np.array([
[5,5,6,7,6,5],
[4,4,5,6,6,4]
])
slope1 = np.array([
[0,0.7,0.3,-0.2,-1,0],
[0.5,1,-1,0.6,-0.2,0.3]
])
slope2 = np.array([
[0,0.7,0.3,-0.2,-1,0],
[0.5,1,-1,0.6,-0.2,0.3]
])*1.5
# generate some random covariance matrices
corrs = []
for _ in np.arange(6):
_,corr,_ = pm.LKJCholeskyCov.dist(eta=1,n=3,sd_dist=pm.HalfNormal.dist(1,shape=3), compute_corr=True)
corrs.append(corr.eval())
# generate outcome
y = np.zeros(N)
for d1 in [0,1]:
for d2 in np.arange(6):
ind = (X['d1']==d1)&(X['d2']==d2)
mv = stats.multivariate_normal(mean=[intercept[d1,d2], slope1[d1,d2],slope2[d1,d2]],cov=corrs[d2]).rvs(1)
y[ind] = mv[0] + X.loc[ind,'x1']*mv[1] + X.loc[ind,'x2']*mv[2] + rng.normal(loc=0,scale=1,size=ind.sum())
</code></pre>
| <python><pymc><hierarchical-bayesian> | 2023-08-07 15:23:54 | 1 | 1,036 | deemel |
76,853,089 | 4,286,383 | Missing Words in PyDictionary | <p>PyDictionary is a Dictionary Module for Python that can be used to get meanings of words:</p>
<pre><code>from PyDictionary import PyDictionary
dictionary=PyDictionary()
print (dictionary.meaning("word"))
</code></pre>
<p>Output:</p>
<blockquote>
<p>{'Noun': ['a unit of language that native speakers can identify', 'a brief statement', 'information about recent and important events', 'a verbal command for action', 'an exchange of views on some topic', 'a promise', 'a string of bits stored in computer memory', 'the divine word of God; the second person in the Trinity (incarnate in Jesus', 'a secret word or phrase known only to a restricted group', 'the sacred writings of the Christian religions'], 'Verb': ['put into words or an expression']}</p>
</blockquote>
<p>But if I search for other words like "the" (which is the most frequently used word in the English language):</p>
<pre><code>from PyDictionary import PyDictionary
dictionary=PyDictionary()
print (dictionary.meaning("the"))
</code></pre>
<p>Output:</p>
<blockquote>
<p>Error: The Following Error occured: list index out of range
None</p>
</blockquote>
<p>How is it that the most frequently used word in English doesn't appear in a dictionary? Is there a way to add articles, prepositions and conjunctions to the Python dictionary?</p>
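<p>Sources backed by WordNet-style databases typically index content words only, so articles like "the" are simply absent. One hedged workaround sketch (the fallback table and function names below are made up for illustration) wraps lookups with a small hand-written table for function words:</p>

```python
# Hypothetical fallback table for function words that many definition
# sources omit
FUNCTION_WORDS = {
    'the': {'Article': ['definite article: marks a specific, '
                        'previously mentioned noun']},
    'and': {'Conjunction': ['connects words, clauses, or sentences']},
}

def safe_meaning(word, lookup=None):
    word = word.lower()
    if word in FUNCTION_WORDS:
        return FUNCTION_WORDS[word]
    # fall through to the real dictionary (e.g. PyDictionary) if provided
    return lookup(word) if lookup is not None else None

print(safe_meaning('The'))
```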
| <python><python-3.x><pydictionary> | 2023-08-07 15:23:18 | 1 | 471 | Nau |
76,852,969 | 2,220,328 | vectorized & in numpy | <p>My use case is to use numpy for bitmap (that is, set operations using bit encoding). I use numpy arrays with <code>uint64</code>. If I have a query with 3 entries, I can then do <code>bitmap | query != 0</code> to check if any element in the query is in the set. Amazing!</p>
<p>Now comes the rub: The performance seems inferior compared to other vectorized operations.</p>
<p>Code:</p>
<pre class="lang-py prettyprint-override"><code>import numpy as np
N = 7_000_000
def generate_bitmap():
return np.random.uniform(size=(N)).astype(np.uint64)
bitmap = generate_bitmap()
# A mean operations (well known vectorized operations) runs fast.
%timeit -o np.mean(bitmap)
# <TimeitResult : 7.91 ms ± 66.2 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)>
# Compare against bitmap, >twice as slow
%timeit -o bitmap | np.uint64(6540943)
# <TimeitResult : 14.6 ms ± 136 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)>
</code></pre>
<p>Intel supports the <a href="https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html#cats=Logical" rel="nofollow noreferrer">AVX-512 instruction</a> <code>_mm512_and_epi64</code> so I was expecting the same speed.</p>
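<p>One plausible explanation (an assumption worth profiling, not a definitive answer): <code>np.mean</code> reduces ~56 MB down to a scalar in a single read, while <code>bitmap | const</code> reads ~56 MB <em>and</em> allocates-plus-writes another ~56 MB of output, roughly doubling memory traffic; at this size the OR instruction itself is rarely the bottleneck. A sketch that removes the per-call allocation:</p>

```python
import numpy as np

N = 7_000_000
# Note: uniform() in [0, 1) cast to uint64 collapses to all zeros;
# randint keeps the benchmark data non-trivial
bitmap = np.random.randint(0, 2**63, size=N, dtype=np.uint64)
q = np.uint64(6540943)

out = np.empty_like(bitmap)
np.bitwise_or(bitmap, q, out=out)     # reuses a preallocated buffer
np.bitwise_or(bitmap, q, out=bitmap)  # or fully in place
```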
| <python><numpy><bitmap><simd> | 2023-08-07 15:05:11 | 2 | 1,286 | Guillaume |
76,852,312 | 21,896,093 | Minor log ticks in seaborn.objects | <p>The default log scale in <code>seaborn.objects</code> doesn't show the <em>minor</em> log grid lines. How can they be turned on?</p>
<p><a href="https://i.sstatic.net/umFPy.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/umFPy.png" alt="without minors" /></a></p>
<pre class="lang-py prettyprint-override"><code>#Synthetic data
import numpy as np
import pandas as pd
import seaborn.objects as so
from scipy.stats import expon
data = pd.DataFrame({'rate': expon(scale=110).rvs(100),
'value': np.random.randn(100)}
)
#Default: no minor log ticks
display(
so.Plot(data, y='value', x='rate')
.add(so.Dots())
.scale(x='log')
.layout(size=(7, 3))
)
</code></pre>
| <python><seaborn><seaborn-objects> | 2023-08-07 13:42:02 | 1 | 5,252 | MuhammedYunus |
76,851,990 | 13,100,938 | Dataflow pipeline creation fails due to lack of resources in current zone | <p>I've set up a Dataflow pipeline that ingests messages from Pub/Sub, converts to a dict and prints the messages.</p>
<p>Here is the script I've written:</p>
<pre class="lang-py prettyprint-override"><code>import apache_beam as beam
import logging
import message_pb2
from apache_beam.options.pipeline_options import StandardOptions
from google.protobuf.json_format import MessageToDict
TOPIC_PATH = "projects/&lt;PROJECT ID&gt;/topics/&lt;TOPIC NAME&gt;"
def protoToDict(msg, schema_class):
message = schema_class()
if isinstance(msg, (str, bytes)):
message.ParseFromString(msg)
else:
return "Invalid Message - something isn't quite right."
return MessageToDict(message, preserving_proto_field_name=True)
pipelineOptions = beam.options.pipeline_options.PipelineOptions()
pipelineOptions.view_as(StandardOptions).streaming = True
pipeline = beam.Pipeline(options=pipelineOptions)
data = (
pipeline
| 'Read from PubSub' >> beam.io.ReadFromPubSub(topic=TOPIC_PATH)
| 'Proto to Dict' >> beam.Map(lambda pb_msg: protoToDict(pb_msg, message_pb2.Message))
| 'Log Result' >> beam.Map(lambda msg: logging.info(msg))
)
pipeline.run()
</code></pre>
<p>When I run this with:</p>
<pre><code>python -m <script name> --region=europe-west2 --runner=DataflowRunner --project=<PROJECT ID> --worker-machine-type=n1-standard-3
</code></pre>
<p>I receive this error:</p>
<pre><code>creation failed: The zone 'projects/<PROJECT ID>/zones/europe-west2-b' does not have enough resources available to fulfill the request. Try a different zone, or try again later.
</code></pre>
<p>I've seen various other sources; however, the suggestions given are along the lines of "try a different machine type until it works" or "wait until there are more resources".</p>
<p>Surely I'm doing something wrong and it's not on Google's side?</p>
| <python><google-cloud-platform><google-cloud-dataflow><apache-beam><google-cloud-pubsub> | 2023-08-07 13:01:36 | 3 | 2,023 | Joe Moore |
76,851,769 | 12,103,577 | TypeError: issubclass() arg 1 must be a class with Generic type in Python | <p>I'm trying to create a lazy container with typing in python:</p>
<pre class="lang-py prettyprint-override"><code>from typing import Generic, TypeVar

T = TypeVar("T")
class Lazy(Generic[T]):
...
a = Lazy[str]
issubclass(a, Lazy)
</code></pre>
<p>However, I'm getting a <code>TypeError: issubclass() arg 1 must be a class</code> on the last line.<br />
Any help explaining this and how I can fix it would be much appreciated.<br />
Code taken from <a href="https://stackoverflow.com/questions/48722835/custom-type-hint-annotation">Custom type hint annotation</a></p>
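<p>For context, a sketch of what's happening (assuming typing's documented helpers): <code>Lazy[str]</code> is a typing generic alias rather than a class, so <code>issubclass</code> rejects it, and <code>typing.get_origin</code> recovers the underlying runtime class:</p>

```python
from typing import Generic, TypeVar, get_origin

T = TypeVar("T")

class Lazy(Generic[T]):
    ...

a = Lazy[str]
# a is a typing generic alias, not a class, hence the TypeError
print(isinstance(a, type))              # False
print(issubclass(get_origin(a), Lazy))  # True: unwrap to Lazy first
```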
| <python><typing> | 2023-08-07 12:33:38 | 2 | 905 | George Ogden |
76,851,305 | 10,232,932 | Sklearn metrics in groupby function with attribute error for pandas dataframe | <p>I have a pandas dataframe df:</p>
<pre><code> A B type month model forecast_value actual_value
0 71022 ETF backtest 4 Arima 10 11
1 71022 ETF backtest 5 Arima 10 11
2 71022 ETF backtest 4 MA 20 11
3 71023 ADC backtest 4 Arima 5 10
4 71024 DAC backtest 4 Arima 8 20
...
</code></pre>
<p>How can I use <code>groupby</code> and a <code>sklearn.metrics</code> function without getting the error:</p>
<blockquote>
<p>AttributeError: 'Series' object has no attribute 'actual_value'</p>
</blockquote>
<p>When I run the following command:</p>
<pre><code>from sklearn.metrics import mean_squared_error
df = df.groupby(['A', 'B', 'model'], as_index=False).agg(lambda x: mean_squared_error(x.actual_value, x.forecast_value)).reset_index()
</code></pre>
<p>The desired output would be:</p>
<pre><code>A B model mean_squared_error
71022 ETF Arima 2
71022 ETF MA 81
71023 ADC Arima 25
71024 DAC Arima 144
...
</code></pre>
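<p>For reference, one sketch of why this fails and a variant that avoids it (the sample frame below reconstructs the question's data): <code>agg</code> runs the lambda once per <em>column</em>, handing it a Series, whereas <code>apply</code> hands each group's whole sub-DataFrame to the lambda:</p>

```python
import pandas as pd
from sklearn.metrics import mean_squared_error

data = pd.DataFrame({
    'A': [71022, 71022, 71022, 71023, 71024],
    'B': ['ETF', 'ETF', 'ETF', 'ADC', 'DAC'],
    'model': ['Arima', 'Arima', 'MA', 'Arima', 'Arima'],
    'forecast_value': [10, 10, 20, 5, 8],
    'actual_value': [11, 11, 11, 10, 20],
})

# .agg gives the lambda a Series per column (no .actual_value attribute);
# .apply gives it the full sub-frame, so both columns are reachable
out = (data.groupby(['A', 'B', 'model'])
           .apply(lambda g: mean_squared_error(g['actual_value'],
                                               g['forecast_value']))
           .reset_index(name='mean_squared_error'))
print(out)
```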
| <python><pandas><scikit-learn> | 2023-08-07 11:34:03 | 1 | 6,338 | PV8 |
76,851,281 | 529,218 | Setting up a SentenceTransformer with AWS Lambda | <p>Is it possible to deploy a SentenceTransformer with AWS lambda?</p>
<p>I receive following error when I try to invoke the Lambda.</p>
<p><strong>Error</strong></p>
<pre><code>{
"errorMessage": "/var/lang/lib/python3.11/site-packages/nvidia/cufft/lib/libcufft.so.10: failed to map segment from shared object",
"errorType": "OSError",
"requestId": "",
"stackTrace": [
" File \"/var/lang/lib/python3.11/importlib/__init__.py\", line 126, in import_module\n return _bootstrap._gcd_import(name[level:], package, level)\n",
" File \"<frozen importlib._bootstrap>\", line 1204, in _gcd_import\n",
" File \"<frozen importlib._bootstrap>\", line 1176, in _find_and_load\n",
" File \"<frozen importlib._bootstrap>\", line 1147, in _find_and_load_unlocked\n",
" File \"<frozen importlib._bootstrap>\", line 690, in _load_unlocked\n",
" File \"<frozen importlib._bootstrap_external>\", line 940, in exec_module\n",
" File \"<frozen importlib._bootstrap>\", line 241, in _call_with_frames_removed\n",
" File \"/var/task/main.py\", line 5, in <module>\n from sentence_transformers import SentenceTransformer\n",
" File \"/var/lang/lib/python3.11/site-packages/sentence_transformers/__init__.py\", line 3, in <module>\n from .datasets import SentencesDataset, ParallelSentencesDataset\n",
" File \"/var/lang/lib/python3.11/site-packages/sentence_transformers/datasets/__init__.py\", line 1, in <module>\n from .DenoisingAutoEncoderDataset import DenoisingAutoEncoderDataset\n",
" File \"/var/lang/lib/python3.11/site-packages/sentence_transformers/datasets/DenoisingAutoEncoderDataset.py\", line 1, in <module>\n from torch.utils.data import Dataset\n",
" File \"/var/lang/lib/python3.11/site-packages/torch/__init__.py\", line 228, in <module>\n _load_global_deps()\n",
" File \"/var/lang/lib/python3.11/site-packages/torch/__init__.py\", line 189, in _load_global_deps\n _preload_cuda_deps(lib_folder, lib_name)\n",
" File \"/var/lang/lib/python3.11/site-packages/torch/__init__.py\", line 155, in _preload_cuda_deps\n ctypes.CDLL(lib_path)\n",
" File \"/var/lang/lib/python3.11/ctypes/__init__.py\", line 376, in __init__\n self._handle = _dlopen(self._name, mode)\n"
]
}
</code></pre>
<p><strong>Handler</strong></p>
<p><em>main.py</em></p>
<pre><code>import os
import sys
import json
import logging
from sentence_transformers import SentenceTransformer
logger = logging.getLogger()
logger.setLevel(logging.INFO)
def transform(sentences):
model = SentenceTransformer('all-MiniLM-L6-v2')
embeddings = model.encode(sentences)
return embeddings
def log_sentence_embeddings(sentences, embeddings):
for sentence, embedding in zip(sentences, embeddings):
logger.info("sentence: %s", sentence)
logger.info("embedding: %s", embedding)
def handler(event, context):
logger.info("env variables: %s", os.environ)
logger.info( 'version: %s', sys.version)
logger.info("Received event: %s", json.dumps(event))
sentences = event.get("sentences")
if sentences is None:
logger.error("No 'body' field found in the event")
return {
"statusCode": 400,
"body": "No 'body' field found in the event"
}
try:
logger.info("sentences: %s", sentences)
embeddings = transform(sentences)
log_sentence_embeddings(sentences, embeddings)
response = {
"statusCode": 200,
            "body": json.dumps({"embeddings": embeddings.tolist()}),  # ndarray is not JSON serializable as-is
"headers": {
"Content-Type": "application/json"
}
}
return response
except json.JSONDecodeError as e:
logger.error("JSON Decode Error: %s", str(e))
return {
"statusCode": 400,
"body": "Invalid JSON data in 'body' field"
}
</code></pre>
<p><em><strong>Dockerfile</strong></em></p>
<p>Docker base recommended by AWS
<a href="https://hub.docker.com/r/amazon/aws-lambda-python" rel="nofollow noreferrer">https://hub.docker.com/r/amazon/aws-lambda-python</a></p>
<pre><code>FROM public.ecr.aws/lambda/python:3.11
# Copy requirements.txt
COPY requirements.txt ${LAMBDA_TASK_ROOT}
# Copy function code
COPY main.py ${LAMBDA_TASK_ROOT}
# Install the specified packages
RUN pip install -r requirements.txt
# Set the CMD to your handler (could also be done as a parameter override outside of the Dockerfile)
CMD [ "main.handler" ]
</code></pre>
<p><em><strong>requirements.txt</strong></em></p>
<pre><code>boto3==1.28.20
flask==2.2.2
sentence-transformers==2.2.2
</code></pre>
| <python><amazon-web-services><aws-lambda><sentence-transformers> | 2023-08-07 11:31:16 | 2 | 1,846 | Pradeep Sanjaya |
76,851,236 | 1,078,199 | How to run methods of inner object in Robot Framework? | <p>I have an <code>Inner</code> class with two methods, <code>get_value</code> and <code>set_value</code>, and a property <code>value</code>, all with the usual meaning. I also have a class <code>Outer</code> which creates objects of other classes (like <code>Inner</code>) and assigns them as its attributes:</p>
<pre class="lang-py prettyprint-override"><code>class Outer:
def __init__(self, configuration: int):
if configuration == 1:
self.inner = Inner("Some data")
self.other = Other("Some data")
elif ...
else:
self.inner = Inner("Some data")
self.other = Other("Some data")
</code></pre>
<p>(There will always be an <code>inner</code> and an <code>other</code>.) I would like to use <code>Outer.inner.Get Value</code> and <code>Outer.inner.Set Value ${3}</code> inside a Robot Framework test. How can I do that?</p>
<p>This is what I came up with (using "Extended variable syntax"):</p>
<ul>
<li>I added <code>get_inner</code> method to <code>Outer</code> class that returns <code>self.inner</code>.</li>
<li>Assign <code>inner</code> object to new variable <code>${MyVar}</code> in RF.</li>
<li>Then use <code>${MyVar.get_value()}</code></li>
</ul>
<p>However, with this approach I cannot call a method without assigning its result to anything (so I can't call the setter):</p>
<pre><code>*** Settings ***
Library Path.To.Outer ${1} WITH NAME Out
*** Test Cases ***
Check setting current
${MyVar} Out.Get Inner
${Initial Value} Set Variable ${MyVar.get_value()}
${New Value} Evaluate ${Initial Value}+${1}
${PS.set_current(3)} # This is not working
</code></pre>
<p><code>get_value</code> worked, but on the fourth line of code I get a <code>Keyword name cannot be empty.</code> error. In fact, my goal is to use <code>${New Value}</code> there instead of a hardcoded value, like <code>${PS.set_current(${New Value})}</code>. How should I fix my code?</p>
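<p>For reference, Robot Framework's BuiltIn library provides a <code>Call Method</code> keyword that invokes a method on an object without requiring an assignment or the extended variable syntax. A sketch using the names from the question (illustrative, not tested against the actual library):</p>

```robotframework
*** Test Cases ***
Set value via Call Method
    ${MyVar}    Out.Get Inner
    Call Method    ${MyVar}    set_value    ${3}
```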
| <python><robotframework> | 2023-08-07 11:24:04 | 0 | 1,841 | Piotr Wasilewicz |
76,851,129 | 5,431,132 | Python C-Extensions with array of integers argument | <p>I have a simple C++ file that I want to export into Python via a C-extension. I compile my .cc file and create a shared library, which I then call in Python using CDLL from ctypes. However, I am struggling to pass in a Python list correctly when I call the library function and it does not return correctly. Where am I going wrong?</p>
<pre><code>// test.cc
extern "C" {
int call_my_routine(int *array) {
int i;
int count = 0;
for(i=0; array[i]!='\0'; i++)
{
count++;
}
return count;
}
}
// test.py
from ctypes import CDLL, c_int
so_file = "./libtest.so"
my_function = CDLL(so_file)
py_list = [2,3,5,7,4,12,21,49]
array = (c_int * len(py_list))(*py_list)
print(my_function.call_my_routine(array))
</code></pre>
<p>The correct output should be 8, but it instead returns 17.</p>
<p>I compile the shared lib with</p>
<pre><code>g++ -I ./inc -fpic -c test.cc -o test.o
g++ -shared -o libtest.so test.o
</code></pre>
<p>Disclosure, I am not a C programmer.</p>
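<p>For context, the ctypes array built on the Python side does carry its own length; it is only the C function that cannot recover it from a bare <code>int*</code>, which is why scanning for <code>'\0'</code> reads past the end of the array. A small standalone sanity check (no shared library needed):</p>

```python
from ctypes import c_int

py_list = [2, 3, 5, 7, 4, 12, 21, 49]
arr = (c_int * len(py_list))(*py_list)

# len() works on the ctypes array object itself; a C callee would
# instead need the count passed as an explicit second argument
count = len(arr)
```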
| <python><c><ctypes> | 2023-08-07 11:11:14 | 0 | 582 | AngusTheMan |
76,850,444 | 534,238 | BigQuery Storage API: what happened to AppendRowsStream | <h1>Seeing no Errors but no Data Submitted</h1>
<p>I am trying to use the new Python BigQuery Storage API (<code>google.cloud.bigquery_storage</code> instead of <code>google.cloud.bigquery_storage_v1</code> or <code>google.cloud.bigquery_storage_v1beta2</code>), as can be <a href="https://cloud.google.com/python/docs/reference/bigquerystorage/latest/upgrading" rel="nofollow noreferrer">seen here</a>.</p>
<p>But there are no end-to-end documents or examples to write to BigQuery using the new API (which it seems is version 2, but is not explicitly referred to as version 2 other than in that migration document).</p>
<p>If I look at version 1 examples like <a href="https://github.com/googleapis/python-bigquery-storage/blob/main/samples/snippets/append_rows_proto2.py" rel="nofollow noreferrer">this one</a> or <a href="https://github.com/googleapis/python-bigquery-storage/blob/main/samples/snippets/append_rows_pending.py" rel="nofollow noreferrer">this one</a>, they use the <code>AppendRowsStream</code> method. That method does not exist anywhere (that I can find) in version 2.</p>
<p>I am trying to use version 2, as it seems the right one to use since it is fully released, but without much documentation and with breaking changes that I cannot understand, I'm unable to get a publish to work. I am running code which has no errors, both an Async and a Sync version, but neither are publishing anything. They just run -- without errors -- and complete with no data being pushed to the table.</p>
<h1>Code</h1>
<p>The code is quite long, and I can publish it, but for now I will just publish the high level flow (I am publishing a <em>batch</em> using a <code>PENDING</code> type stream):</p>
<ol>
<li>Create a bigquery client: <code>client = google.cloud.bigquery_storage.BigQueryWriteClient()</code></li>
<li>Create a write stream request: <code>request = google.cloud.bigquery_storage.CreateWriteStreamRequest(parent="<string_with_full_table_path>", write_stream=google.cloud.bigquery_storage.WriteStream(type_=google.cloud.bigquery_storage.WriteStream.PENDING))</code></li>
<li>Create a write stream: <code>stream = client.create_write_stream(request=request)</code></li>
<li>Create an <em>iterator</em> of append rows requests. This is just a <code>def - for - yield</code> wrapper around the append rows request immediately below.</li>
<li>Create an append rows request: <code>request = google.cloud.bigquery_storage.AppendRowsRequest(write_stream=stream.name, proto_rows=proto_data)</code>
<ul>
<li>I've skipped describing <code>proto_data</code>, but it was written correctly because I could inspect it. It is a <code>google.cloud.bigquery_storage.AppendRowsRequest.ProtoData</code> object with the proper <code>writer_schema</code> and <code>rows</code> data attached to it.</li>
</ul>
</li>
<li>Iterate through the iterator. It seems from the examples I showed above that all which is necessary is to iterate through them.</li>
<li>Finalize the stream: <code>client.finalize_write_stream(name=stream.name)</code></li>
<li>Create a batch write stream request: <code>batch_commit_request = google.cloud.bigquery_storage.BatchCommitWriteStreamsRequest( parent=table_path, write_streams=[stream.name] )</code></li>
<li>Batch commit the stream: <code>commit_response = client.batch_commit_write_streams(request=batch_commit_request)</code></li>
</ol>
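<p>Step 4's "<code>def - for - yield</code> wrapper" can be sketched in isolation, with plain dicts standing in for the <code>AppendRowsRequest</code> objects (illustrative only; the real code yields requests carrying protobuf payloads):</p>

```python
def request_iterator(stream_name, proto_payloads):
    # yield one request-shaped item per batch of rows; the real version
    # yields google.cloud.bigquery_storage.AppendRowsRequest objects
    for payload in proto_payloads:
        yield {"write_stream": stream_name, "proto_rows": payload}

requests = list(request_iterator("streams/s1", ["batch-a", "batch-b"]))
```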
<h1>Analysis</h1>
<p>I have sprinkled in logs, and I can see in the logs that the data is being generated and is within the Protobufs and the Rows. I am also getting proper responses from the client object. But I am not getting any data pushed.</p>
<p>The <em>only</em> thing that I can see that I am doing differently from the examples I pointed to earlier is that I am not including the <code>AppendRowsStream</code> action. For instance, one of the examples I linked to has these sections:</p>
<pre class="lang-py prettyprint-override"><code>from google.cloud.bigquery_storage_v1 import writer
.
.
.
append_rows_stream = writer.AppendRowsStream(write_client, request_template)
.
.
.
response_future_1 = append_rows_stream.send(request)
.
.
.
append_rows_stream.close()
</code></pre>
<p>This is the only reason I can see why my data might not be arriving in the table: since the stream is never closed, I am "committing" something that was never finished. Maybe the commit is a two-step process, so that committing an unclosed stream effectively commits nothing, because the API is trying to ensure ACID compliance.</p>
<p>But I cannot find a way to create such an <code>AppendRowsStream</code> -- or, therefore, to submit data to it or close it. This is the only wrinkle I can see that might explain why my version is not working.</p>
<h1>Summary</h1>
<p>Two questions:</p>
<ol>
<li>Is there an <code>AppendRowsStream</code> in the new <code>google.cloud.bigquery_storage</code> API? If so, where is it hidden?</li>
<li>Is there anything else you can see that I might be doing wrong?</li>
</ol>
| <python><google-cloud-platform><google-bigquery><google-bigquery-storage-api> | 2023-08-07 09:35:08 | 0 | 3,558 | Mike Williamson |
76,850,067 | 16,383,578 | How to combine millions of large ranges that overlap cross series? | <p>I am trying to convert CSV text dumps of <a href="https://dev.maxmind.com/geoip/geolite2-free-geolocation-data" rel="nofollow noreferrer">MaxMind GeoLite2 database</a> into an efficient SQLite3 database with a lot of "laziness". I will spare you all the details, I have written a <a href="https://drive.google.com/file/d/1koK3-r1UYFFgHvt030xDv4SgrN_bkoFz/view?usp=sharing" rel="nofollow noreferrer">script</a> that almost does exactly what I want, but it failed to combine the ranges after many hours, so I was forced to terminate it.</p>
<p>I won't post the full script below, because very few people would read it and too much code might lose readers' attention.</p>
<hr />
<p>At this stage, the problem can be simplified to finding all overlaps between lists of ranges that don't overlap intra-list but overlap inter-list. And they are guaranteed to be sorted. Let me show you what I mean.</p>
<p>I have many triplets in the form <code>(start, end, data)</code>, <code>start</code> & <code>end</code> are both <code>int</code>s and <code>start <= end</code>, and it is the compressed <code>dict</code>: <code>{k: data for k in range(start, end + 1)}</code>. <strong>The numbers are huge, they are uint32 for IPv4 networks and uint128 for IPv6 networks, and I have 4,673,096 such triplets for IPv4 networks and 1,685,642 such triplets for IPv6 networks.</strong></p>
<p>So I need a really efficient solution for this problem.</p>
<hr />
<p>The problem is simple: given two huge lists of triplets of the form <code>(start, end, data)</code>, each representing <code>{k: data for k in range(start, end + 1)}</code>, where triplets within the same list don't overlap, repeatedly pop the first item from one or both lists as needed, until either or both lists are empty.</p>
<p>In each iteration compare the two current items from their respective lists to find overlaps between them. If there are overlaps, split the two ranges into non-overlapping ranges, the overlapping range should have attributes from both ranges and non-overlapping portion of the original ranges should be split from the overlapping portion.</p>
<p>We then yield the ranges that are past, a range is past if its end has been reached, meaning if the end of a range is greater than the other, its respective list shouldn't be popped in the next iteration, and if the ends are equal, then both lists need to be popped in the next iteration.</p>
<p>Given two triplets <code>(As, Ae, Ad)</code> and <code>(Bs, Be, Bd)</code>, in which <code>As <= Ae and Bs <= Be</code> must always be satisfied, if we add the constraint <code>As <= Bs</code>, there are a total of eight conditions that may affect the result of comparison, I have listed them below, the first return value means the ranges that are past, the second indicate the index of the positional argument that is the current range (if <code>Ae != Be</code>), and the third the start of the current range after adjustment (if <code>Ae != Be</code>).</p>
<pre><code>{
('Ae < Bs'): [[('As', 'Ae', '(Ad, )')], 1, 'Bs'],
('As == Bs', 'Ae == Be'): [[('As', 'Ae', '(Ad, Bd)')], None, None],
('As < Bs', 'Ae < Be'): [[('As', 'Bs - 1', '(Ad, )'), ('Bs', 'Ae', '(Ad, Bd)')], 1, 'Ae + 1'],
('As < Bs', 'Ae == Be'): [[('As', 'Bs - 1', '(Ad, )'), ('Bs', 'Ae', '(Ad, Bd)')], None, None],
('As == Bs', 'Ae < Be'): [[('As', 'Ae', '(Ad, Bd)')], 1, 'Ae + 1'],
('As < Bs', 'Be < Ae'): [[('As', 'Bs - 1', '(Ad, )'), ('Bs', 'Be', '(Ad, Bd)')], 0, 'Be + 1'],
('As < Bs', 'Be == Ae'): [[('As', 'Bs - 1', '(Ad, )'), ('Bs', 'Be', '(Ad, Bd)')], None, None],
('As == Bs', 'Be < Ae'): [[('Bs', 'Be', '(Ad, Bd)')], 0, 'Be + 1'],
}
</code></pre>
<hr />
<p>There are 14 possibilities for the order and uniqueness of As, Ae, Bs and Be if As <= Bs; without that constraint there are at most 28 possibilities. I have written a completely working script that is guaranteed to get the correct result, but unfortunately it is too slow:</p>
<pre><code>from itertools import chain
def merge_worker(A, B):
if cur := B[0] < A[0]:
A, B = B, A
(As, Ae, Ad), (Bs, Be, Bd) = A, B
if Ae < Bs:
return [(As, Ae, {Ad})], cur ^ 1, cur, (Bs, Be, Bd)
if Ae < Be:
cur ^= 1
low, high, data = Ae, Be, Bd
else:
low, high, data = Be, Ae, Ad
ranges = [(As, Bs - 1, {Ad})] if As < Bs else []
ranges.append((Bs, low, {Ad, Bd}))
return (
(ranges, 1, 1, None)
if Ae == Be
else (ranges, cur, cur ^ 1, (low + 1, high, data))
)
def merge_series(line_a, line_b):
end_a = end_b = 1
while line_a and line_b:
range_a = line_a.pop(0) if end_a else current
range_b = line_b.pop(0) if end_b else current
past, end_a, end_b, current = merge_worker(range_a, range_b)
yield past
remaining = line_a or line_b
end_b = 1
while current and end_b and remaining:
past, _, end_b, current = merge_worker(current, remaining.pop(0))
yield past
if current:
yield [(*current[:2], {current[2]})]
yield ((s, e, {d}) for s, e, d in remaining)
def merge(segments):
start, end, data = next(segments)
for start2, end2, data2 in segments:
if data == data2 and end + 1 == start2:
end = end2
else:
yield start, end, data
start, end, data = start2, end2, data2
yield start, end, data
def combine_series(line_a, line_b):
return merge(chain.from_iterable(merge_series(line_a, line_b)))
</code></pre>
<p>The full script containing methods of test case generation and result verification is <a href="https://drive.google.com/file/d/1wmqcKgdbA20VcMYdHxukKJVvVHEtn4fI/view?usp=sharing" rel="nofollow noreferrer">here</a>, I have rigorously verified the correctness of my solution.</p>
<p>But unfortunately it is way too slow:</p>
<pre><code>In [393]: line_a = get_series(262144, 2**32, 2048, 'A')
In [394]: line_b = get_series(262144, 2**32, 2048, 'B')
In [395]: %time correct = list(combine_series(line_a, line_b))
CPU times: total: 46.6 s
Wall time: 49.5 s
</code></pre>
<p>For merely 524,288 triplets, it took around 50 seconds. I have literally millions of triplets to process. While it may be tempting to say for 4,673,096 triplets it will take 445.661 seconds, the reality is way more complicated, the execution time depends on the time complexity and data involved.</p>
<p>I don't know the exact structure of the actual data, I don't know what the time complexity of my function is (I guess it's O(n)?), and I don't have enough RAM to generate million-item test cases; I can't even sample 128-bit integers using <code>random.sample</code>:</p>
<pre><code>OverflowError: Python int too large to convert to C ssize_t
</code></pre>
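<p>As an aside, the <code>OverflowError</code> comes from <code>random.sample</code> needing a population length that fits in a C <code>ssize_t</code>; drawing 128-bit values one at a time with <code>getrandbits</code> avoids that limit entirely (duplicates are astronomically unlikely at this width, though not strictly impossible):</p>

```python
import random

def sample_uint128(k, seed=None):
    # getrandbits(128) returns a uniform 128-bit unsigned integer,
    # sidestepping random.sample's C ssize_t length restriction
    rng = random.Random(seed)
    return [rng.getrandbits(128) for _ in range(k)]

values = sample_uint128(4, seed=0)
```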
<p>The method is way too slow and can't reliably process the whole data.</p>
<p>How can I scale it up using <code>Cython</code>, <code>NumPy</code>, <code>Numba</code> and the like? I have already tried <code>numba.njit(cache=True, fastmath=True)</code> and it didn't work; I won't show the tracebacks here. How can I scale it up so that it can process millions of items relatively fast? Bear in mind I need to process IPv6 addresses, which are 128-bit unsigned integers and not natively supported by C.</p>
<hr />
<p>I originally thought <code>chain</code> would throw an unsupported-data-type exception when used with <code>numba.njit</code>; surprisingly, it didn't in this case. I added <code>numba.njit(cache=True, fastmath=True)</code> in front of each of the three main generator functions and there was no performance change at all; I then compiled the utility function as well, and there was still no performance boost, but no exception was thrown.</p>
<hr />
<p>I have just found out that my new method is much slower than my previous method, and my previous method failed to process the whole data:</p>
<pre><code>In [402]: line_a = get_series(262144, 2**32, 2048, 'A')
In [403]: line_b = get_series(262144, 2**32, 2048, 'B')
In [404]: %time correct = list(combine_series(line_a, line_b))
CPU times: total: 47.3 s
Wall time: 49.4 s
In [405]: line_a = get_series(262144, 2**32, 2048, 'A')
In [406]: line_b = get_series(262144, 2**32, 2048, 'B')
In [407]: %time correct = combine(line_a+line_b)
CPU times: total: 4.12 s
Wall time: 4.42 s
</code></pre>
<hr />
<p>My intentions as requested by a commenter:</p>
<p>I am ethnic Chinese but I am also a left-liberal, I hate the authoritarian regime and I plan to immigrate to United States.</p>
<p>As for the purpose, I use VPNs and they tend to be blocked. So I plan to use hundreds of public DNS over HTTPS servers to find the actual IP addresses of blocked websites and unblock them.</p>
<p>I am going to do hundreds of DNS queries at once asynchronously, and then ping the resultant IP addresses all at once asynchronously, to find the IP addresses that are accessible, then change my hosts file to unblock the websites.</p>
<p>The information from the CSV files is crucial to the effort, first these DNS over HTTPS servers themselves may be blocked and/or hijacked, I need to use the geolocation of the IP addresses to determine the reliability of the DNS servers and the resultant IP addresses.</p>
<p>If the DNS is based in China, it isn't trustworthy; if the DNS isn't operated by the Chinese but a Chinese IP responded to the address, then the DNS server's domain name is hijacked; if the DNS is far from China, it should have a lower priority than closer ones, because the latency is likely higher.</p>
<p>If a resultant IP address is owned by the Chinese, don't ever ping it and don't ever use the DNS that returned it again; if multiple IP addresses in the same network are blocked, don't test other IP addresses from the same network; the IP addresses would be ranked according to the geolocation information, and if the top one gets blocked test the next best one and repeat, until there are no good IP addresses, then the DNS query should start all over again...</p>
| <python><python-3.x><algorithm><performance> | 2023-08-07 08:41:34 | 1 | 3,930 | Ξένη Γήινος |
76,849,995 | 17,561,414 | CICD pipeline fails with InterpolationSyntaxError | <p>I'm trying to run the CI/CD pipelines, but my Python code, which is supposed to get the parameters from Azure Data Factory, fails with the error <code>configparser.InterpolationSyntaxError: '%' must be followed by '%' or '('</code>. I understand the error, but the obvious fixes do not seem to work.</p>
<p>this is my code snippet</p>
<pre><code> f = open(parametersJson,)
# returns JSON object as dict
data = json.load(f)
# Iterating through the json
parametersJsonDict = {}
for key,value in data['parameters'].items():
value = value.replace('%','%%')
parametersJsonDict[key]=value["value"]
f.close()
parametersIniDict = {}
argumentStringDict = {}
for env in envs:
parametersIniDict[env] = {}
config = configparser.ConfigParser()
config.optionxform = str
config.read(parametersIni)
argumentStringDict[env] = ""
for key, value in config[env].items():
value= repr(value)
parametersIniDict[env][key]=value
try:
value = json.loads(value)
except:
pass
if isinstance(value, Mapping) or isinstance(value, list):
argumentStringDict[env] = argumentStringDict[env] +" -" + str(key) + " " + json.dumps(value)
else:
argumentStringDict[env] = argumentStringDict[env] +" -" + str(key) + " \"" + str(value) + "\""
config = configparser.ConfigParser()
config.optionxform = str
failBuildMessage = []
failBuild="false"
for env in envs:
config[env] = {}
addIniDict = {}
removeIniDict = {}
for key,value in parametersJsonDict.items():
if key in parametersIniDict[env]:
config[env][str(key)] = str(parametersIniDict[env][key])
else:
addIniDict[key] = str(value)
for key,value in parametersIniDict[env].items():
if not key in parametersJsonDict:
removeIniDict[key] = str(value)
if len(addIniDict)>0:
failBuild ="true"
failBuildMessage.append(f"There are some overwrite parameters that need to be added in {env}, check the test artifacts")
config[env+'_add'] = {}
for key,value in addIniDict.items():
config[env+'_add'][str(key)] = f'"{str(value)}"'
if len(removeIniDict)>0:
failBuild ="true"
failBuildMessage.append(f"There are some overwrite parameters that need to be removed in {env}, check the test artifacts")
config[env+'_remove'] = {}
for key,value in removeIniDict.items():
config[env+'_remove'][str(key)] = f'"{str(value)}"'
</code></pre>
<p>I tried adding the line <code>value = repr(value)</code> to convert the variable into a raw string, but it did not work.</p>
<p>I also tried the line <code>value = value.replace("%", "%%")</code>; it still did not work.</p>
<p>Important to mention: the first time I got the error, I added the line <code>value = value.replace("%", "%%")</code> to my code, re-ran the pipeline, and it worked -- but it then failed at the point where it needs to produce the overwrite parameters from ADF.</p>
<p>When this fails, it normally produces an artifact with the parameters that I need to copy and paste into my config file. I could see that my parameters already had <code>%%</code> instead of a single <code>%</code>. After adding this to the config file, I re-ran the pipeline, but it still failed just as it did the first time, with the same <code>InterpolationSyntaxError</code>.</p>
<p>Can anyone help?</p>
<p>FULL traceback:</p>
<pre><code>Traceback (most recent call last):
File "/agent/_work/2/s/CICD/pipelines/scripts/adf_overwrite_parameter_check.py", line 42, in <module>
for key, value in config[env].items():
File "/usr/lib/python3.6/_collections_abc.py", line 744, in __iter__
yield (key, self._mapping[key])
File "/usr/lib/python3.6/configparser.py", line 1234, in __getitem__
return self._parser.get(self._name, key)
File "/usr/lib/python3.6/configparser.py", line 800, in get
d)
File "/usr/lib/python3.6/configparser.py", line 394, in before_get
self._interpolate_some(parser, option, L, value, section, defaults, 1)
File "/usr/lib/python3.6/configparser.py", line 444, in _interpolate_some
"found: %r" % (rest,))
configparser.InterpolationSyntaxError: '%' must be followed by '%' or '(', found: '%27/sites/U-Uitgevers/Gedeelde%20documenten/Business%20plannen/MJP%20(nieuwe%20fondsstructuur%20en%20uniform%20format)/Power%20BI%20-%20Mapping/mapping%20tabel%20met%20business%20logica.xlsx%27)/$value"'
</code></pre>
<p>Line 42 refers to this line: <code>for key, value in config[env].items():</code> (the line immediately after <code>argumentStringDict[env] = ""</code>).</p>
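<p>For reference, the failure can be reproduced (and avoided) in isolation -- this sketch is illustrative, not the pipeline code. <code>configparser</code>'s default <code>BasicInterpolation</code> treats <code>%</code> as an interpolation marker when a value is <em>read back</em>, so URL-encoded values like <code>%20</code> trip it; constructing the parser with <code>interpolation=None</code> disables that processing:</p>

```python
import configparser

INI = "[env]\nurl = /sites/Shared%20Documents/file.xlsx\n"

# Default parser: reading the value back raises InterpolationSyntaxError,
# because '%2' is neither '%%' nor '%('
cp = configparser.ConfigParser()
cp.read_string(INI)
try:
    cp["env"]["url"]
    raised = False
except configparser.InterpolationSyntaxError:
    raised = True

# interpolation=None returns the raw value untouched
raw = configparser.ConfigParser(interpolation=None)
raw.read_string(INI)
url = raw["env"]["url"]
```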
| <python> | 2023-08-07 08:30:17 | 1 | 735 | Greencolor |
76,849,942 | 21,891,079 | How to do state-dependent caching in a notebook? | <p>I have a Jupyter notebook with large intermediate data frames that are computationally intensive to generate. I would like to be able to cache them between sessions. However, I would like to be able to automatically recompute them if one of the steps/variables used to generate the resulting data frames has changed (e.g. new data, or data filtered differently upstream). Is there some way to link a cached variable to any number of "state" variables such that the target variable is recomputed if a change in the "state" has been detected (and loaded from cache if no change is detected)?</p>
<h3>Related Questions/Answers on SO</h3>
<ul>
<li><p><a href="https://stackoverflow.com/questions/31255894/how-to-cache-in-ipython-notebook?rq=1">How to cache in IPython Notebook?</a>, <a href="https://stackoverflow.com/questions/32210951/ipython-notebook-save-variables-after-closing?rq=3">ipython notebook save variables after closing</a></p>
<p>Answer: Use <code>%store</code></p>
<p>The cache has to be manually recomputed.</p>
</li>
<li><p><a href="https://stackoverflow.com/questions/34342155/how-to-pickle-or-store-jupyter-ipython-notebook-session-for-later?rq=3">How to pickle or store Jupyter (IPython) notebook session for later</a></p>
<p>Answer: Use <code>dill</code> to dump and load a session</p>
<p>I do not want to save the entire session because that would be too large. I want to save intermediate values that are difficult to recompute, and recompute the cells that are not so computationally expensive</p>
</li>
</ul>
<h3>Example of what I'm looking for...</h3>
<p><em>Consider this a notebook environment, with cells separated by the</em> <code># ---- #</code> <em>comments.</em></p>
<pre class="lang-py prettyprint-override"><code># Initial work, loading datasets and whatnot
df = load_dataset()
df = clean_dataset()
# Some intermediate variables that are used later on...
s1, s2, s3 = compute_intermediate_variables()
# ---- #
# Intensive computation cell
def compute(df, s1, s2, s3): # Defining the expensive computation
return some_func(df, s1, s2, s3)
... # Hopefully use `compute` in the caching operation...
</code></pre>
<p>I am looking for a caching function or notebook magic thingy that can cache the result of <code>compute</code> and recompute it <em>only</em> if a change in <code>df</code>, <code>s1</code>, <code>s2</code>, or <code>s3</code> has been detected. Therefore, rerunning the cell repeatedly should be a near-instant operation. Repeatedly opening, running, and then closing the notebook should also not be hindered by this "expensive computation cell" <em>after</em> the first time.</p>
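<p>A sketch of one way to get this behaviour (illustrative only, using <code>hashlib</code> and <code>pickle</code> rather than an existing notebook magic): key a disk cache on a hash of the pickled "state" arguments, so any change in <code>df</code>, <code>s1</code>, <code>s2</code>, or <code>s3</code> produces a new key and forces a recompute, while an unchanged state loads from disk.</p>

```python
import hashlib
import os
import pickle

def cached_compute(fn, *state, tag="result", cache_dir="nb_cache"):
    # The cache key is a hash of the pickled state; identical state hits
    # the cached file, changed state triggers a fresh compute.
    key = hashlib.sha256(pickle.dumps(state)).hexdigest()
    path = os.path.join(cache_dir, f"{tag}-{key}.pkl")
    if os.path.exists(path):
        with open(path, "rb") as f:
            return pickle.load(f)
    result = fn(*state)
    os.makedirs(cache_dir, exist_ok=True)
    with open(path, "wb") as f:
        pickle.dump(result, f)
    return result
```

<p>Two caveats: pickling large data frames just to hash them costs time itself, and the hashes are only as stable as each object's pickle representation.</p>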
| <python><caching><jupyter-notebook><ipython> | 2023-08-07 08:22:11 | 0 | 1,051 | Joshua Shew |
76,849,931 | 765,271 | why using alpha and beta as symbols causes sympy to give an error? | <p>I am not very familiar with sympy, but when using alpha and beta as symbols in an integrate command, it gives an error. Here is an example, using SymPy 1.12 with Python 3.11.3.</p>
<pre><code>>>> from sympy import *
>>> A,B,a,alpha,b,beta,x= symbols('A B a alpha b beta x ')
>>> integrate(S("(B*x+A)/(b*x+a)**(1/2)"),x)
</code></pre>
<p>No problem, gives</p>
<pre><code>Piecewise(((2*A*sqrt(a + b*x) + 2*B*(-a*sqrt(a + b*x) + (a + b*x)**(3/2)/3)/b)/b, Ne(b, 0)), ((A*x + B*x**2/2)/sqrt(a), True))
</code></pre>
<p>But when changing <code>B,A</code> to <code>beta,alpha</code>, it gives an error:</p>
<pre><code>>>> integrate(S("(beta*x+alpha)/(b*x+a)**(1/2)"),x)
</code></pre>
<p>gives</p>
<pre><code>ValueError: Error from parse_expr with transformed code: "(beta *Symbol ('x' )+Symbol ('alpha' ))/(Symbol ('b' )*Symbol ('x' )+Symbol ('a' ))**(Integer (1 )/Integer (2 ))"
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/me/.local/lib/python3.11/site-packages/sympy/core/sympify.py", line 495, in sympify
expr = parse_expr(a, local_dict=locals, transformations=transformations, evaluate=evaluate)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/me/.local/lib/python3.11/site-packages/sympy/parsing/sympy_parser.py", line 1087, in parse_expr
raise e from ValueError(f"Error from parse_expr with transformed code: {code!r}")
File "/home/me/.local/lib/python3.11/site-packages/sympy/parsing/sympy_parser.py", line 1078, in parse_expr
rv = eval_expr(code, local_dict, global_dict)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/me/.local/lib/python3.11/site-packages/sympy/parsing/sympy_parser.py", line 906, in eval_expr
expr = eval(
^^^^^
File "<string>", line 1, in <module>
TypeError: unsupported operand type(s) for *: 'FunctionClass' and 'Symbol'
>>>
</code></pre>
<p>I searched to see if <code>beta</code> and <code>alpha</code> have special meaning or use in sympy, but so far I could not find anything.</p>
<p>These integrals are from an external source. I did not write them myself, and this is how they appear in their original form.</p>
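<p>(From the traceback, the parser appears to resolve the bare name <code>beta</code> against SymPy's own namespace, where it is the Beta <em>function</em> -- a <code>FunctionClass</code> -- hence the <code>TypeError</code>. One possible way around that, sketched here as an assumption rather than a verified fix, is to hand the parser explicit locals:)</p>

```python
from sympy import Symbol, sympify

alpha, beta, a, b, x = map(Symbol, ["alpha", "beta", "a", "b", "x"])

# With locals supplied, "beta" parses as a plain Symbol instead of
# being looked up as SymPy's built-in beta function
expr = sympify("(beta*x + alpha)/(b*x + a)**(1/2)",
               locals={"alpha": alpha, "beta": beta, "a": a, "b": b, "x": x})
```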
| <python><sympy> | 2023-08-07 08:19:46 | 2 | 13,235 | Nasser |
76,849,911 | 8,040,369 | DataFrame: Get the mean of two rows to a set it to a row | <p>I have a df like,</p>
<pre><code> ds value
====================================
1 2023-07-01 17:00:00 100
2 2023-07-01 18:00:00 0
3 2023-07-01 19:00:00 300
</code></pre>
<p>I want to set the mean of 1st and 3rd rows value into the 2nd row like this</p>
<pre><code> ds value
====================================
1 2023-07-01 17:00:00 100
2 2023-07-01 18:00:00 150
3 2023-07-01 19:00:00 200
</code></pre>
<p>And sometimes there may be more than one row containing <strong>0</strong>; is there a way to handle that dynamically?</p>
<p>Any help is much appreciated</p>
<p>Thanks,</p>
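<p>A sketch of one common approach (an assumption about the intended rule, not from the post): treat zeros as missing values and interpolate between the surrounding rows, which also handles runs of several consecutive zeros. Note that linear interpolation of <code>[100, NaN, 300]</code> gives 200 for the middle row, i.e. the mean of its neighbours as stated above:</p>

```python
import pandas as pd

df = pd.DataFrame({
    "ds": pd.to_datetime(["2023-07-01 17:00:00",
                          "2023-07-01 18:00:00",
                          "2023-07-01 19:00:00"]),
    "value": [100, 0, 300],
})

# Zeros become NaN, then linear interpolation fills each gap with
# values evenly spaced between the surrounding non-zero rows
df["value"] = df["value"].replace(0, float("nan")).interpolate()
```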
| <python><pandas><dataframe> | 2023-08-07 08:16:15 | 1 | 787 | SM079 |
76,849,855 | 3,118,602 | Concatenating strings in dataframe | <p>I have a simple dataframe below:</p>
<pre><code>id sub status description
313 AF open test
234 JF open testing in progress
919 IA closed test
is completed
well done
393 QE open failed testing
888 VE open check the
test results
before closing
193 UR open waiting
</code></pre>
<p>Due to formatting, some of the descriptions are split into new rows, with the overflow text appearing under other columns (either <code>id</code> or <code>sub</code>). I would like to concatenate these back into the original row.</p>
<p>What the end result will look like:</p>
<pre><code>id sub status description
313 AF open test
234 JF open testing in progress
919 IA closed test is completed well done
393 QE open failed testing
888 VE open check the test results before closing
193 UR open waiting
</code></pre>
<p>How can I achieve this in pandas?</p>
<p>Thank you.</p>
<p>Edit: testing <code>g</code> gives:</p>
<pre><code>0 NaN
1 NaN
2 NaN
3 NaN
4 NaN
5 NaN
6 0.0
7 0.0
8 0.0
9 0.0
10 0.0
11 NaN
12 NaN
13 NaN
14 NaN
15 0.0
16 0.0
17 0.0
18 NaN
19 NaN
20 0.0
21 0.0
22 0.0
23 0.0
Name: id, dtype: object
</code></pre>
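<p>For reference, a minimal sketch of the continuation-row idea (the frame below is illustrative, not the actual data): rows whose <code>id</code> is missing are treated as continuations of the record above, labelled by a cumulative sum over the non-null mask, then joined:</p>

```python
import pandas as pd

df = pd.DataFrame({
    "id": [919, None, None, 393],
    "status": ["closed", None, None, "open"],
    "description": ["test", "is completed", "well done", "failed testing"],
})

# Each non-null id starts a new record; the cumulative sum labels its group
g = df["id"].notna().cumsum()
out = (df.groupby(g)
         .agg({"id": "first", "status": "first",
               "description": lambda s: " ".join(s)})
         .reset_index(drop=True))
```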
| <python><pandas><dataframe> | 2023-08-07 08:08:12 | 1 | 593 | user3118602 |
76,849,605 | 18,221,164 | Time triggered Azure function not getting triggered for every minute | <p>PS: Complete beginner here.</p>
<p>Because of some restrictions in the company policies, I am not able to test this function locally.
However, I was able to deploy the function to our function app.</p>
<p>I have an <code>__init__.py</code> which looks like below:</p>
<pre><code>import datetime
import logging
import azure.functions as func
def main(mytimer: func.TimerRequest) -> None:
utc_timestamp = datetime.datetime.utcnow().replace(
tzinfo=datetime.timezone.utc).isoformat()
if mytimer.past_due:
logging.info('The timer is past due!')
logging.info('Hello!!')
logging.info('Python timer trigger function ran at %s', utc_timestamp)
</code></pre>
<p>The function.json looks like below :</p>
<pre><code>{
"scriptFile": "__init__.py",
"bindings": [
{
"name": "mytimer",
"type": "timerTrigger",
"direction": "in",
"schedule": "* * * * *"
}
]
}
</code></pre>
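<p>One detail worth checking (a suggestion, not from the post): Azure Functions timer schedules use NCRONTAB expressions that include a leading <em>seconds</em> field, so an every-minute schedule is conventionally written with six fields:</p>

```json
{
    "name": "mytimer",
    "type": "timerTrigger",
    "direction": "in",
    "schedule": "0 * * * * *"
}
```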
<p>I expect the log statements to be shown every minute, but that does not seem to be the case.
Output: <a href="https://i.sstatic.net/sc87O.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/sc87O.png" alt="Output response" /></a></p>
<p>What am I doing wrong here?</p>
| <python><azure><azure-functions> | 2023-08-07 07:29:47 | 1 | 511 | RCB |
76,849,340 | 14,860,526 | Pytest cov doesn't recognize option | <p>I'm running pytest with pytest-cov with the following command line:</p>
<pre><code>python -m pytest Mytools\tests --junitxml test_reports\unittest_result.xml --cov . --cov-branch --cov-report xml:test_reports\unittest_coverage.xml --cov-config=scripts\.coveragerc || exit /b 1
</code></pre>
<p>however I get this error:</p>
<p><strong>CoverageWarning: Unrecognized option '[report] exclude_also=' in config file scripts\.coveragerc</strong></p>
<p>My .coveragerc file looks like this:</p>
<pre><code>[report]
exclude_also =
if __name__ == .__main__.:
</code></pre>
<p>Any idea on how to fix it?
many thanks</p>
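<p>For context, <code>exclude_also</code> was only added in coverage.py 7.2, so older versions report it as unrecognized. On those versions the long-standing equivalent is <code>exclude_lines</code>, which replaces the defaults and therefore usually re-adds the standard pragma (an illustrative fragment, not a verified fix for this particular setup):</p>

```ini
[report]
exclude_lines =
    pragma: no cover
    if __name__ == .__main__.:
```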
| <python><pytest><coverage.py><pytest-cov> | 2023-08-07 06:45:38 | 1 | 642 | Alberto B |
76,849,286 | 8,519,380 | How can app.tasks not start at the same time in Celery? | <p>After many tests and searches, I have not found a solution, and I hope you can guide me.<br>
My codes are available at <a href="https://github.com/arezooebrahimi/celery_distribution_tasks" rel="nofollow noreferrer">this GitHub</a> address.<br>
Due to the complexity of the main codes, I have written a simple code example with the same problem and linked it at the above address.<br>
I have a worker that contains four app.tasks with the following names:<br></p>
<ul>
<li>app_1000</li>
<li>app_1002</li>
<li>app_1004</li>
<li>app_1006</li>
</ul>
<p>Each of these app.tasks should run at most one instance at a time; for example, <code>app_1000</code> must not be executed two or three times simultaneously. Only when the current <code>app_1000</code> task has finished should the worker move on to the next <code>app_1000</code> job.
<br>
Celery config:</p>
<pre><code>broker_url='amqp://guest@localhost//'
result_backend='rpc://'
include=['celery_app.tasks']
worker_prefetch_multiplier = 1
task_routes={
'celery_app.tasks.app_1000':{'queue':'q_app'},
'celery_app.tasks.app_1002':{'queue':'q_app'},
'celery_app.tasks.app_1004':{'queue':'q_app'},
'celery_app.tasks.app_1006':{'queue':'q_app'},
'celery_app.tasks.app_timeout':{'queue':'q_timeout'},
}
</code></pre>
<p>As you can see, <code>worker_prefetch_multiplier = 1</code> is in the above configuration.</p>
<br>
I use fastapi to send the request and the sample request is as follows (to simplify the question, I only send the number of tasks that must be executed by this worker through fastapi)
<p><a href="https://i.sstatic.net/my865.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/my865.png" alt="enter image description here" /></a>
<br><br><br>
I also use the flower script to check the tasks.
<br>
After pressing the Send button in Postman, all these 20 hypothetical tasks are sent to the Worker, and at first everything is fine because each app.tasks has started a task.
<br>
<a href="https://i.sstatic.net/GmK0x.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/GmK0x.png" alt="enter image description here" /></a>
<br><br><br>
But a few minutes later, as things progress, the same app.task ends up running more than once at the same time: for example, in the photo below <code>app_1000</code> has been started twice, and in the next photo <code>app_1006</code> has been started twice and both instances are running simultaneously. This is exactly the case I do not want to occur.
<br>
<a href="https://i.sstatic.net/oJoP0.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/oJoP0.png" alt="enter image description here" /></a>
<br>
A few moments later:
<a href="https://i.sstatic.net/5MD4z.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/5MD4z.png" alt="enter image description here" /></a></p>
<br>
<p><strong>I expect app_1000 or app_1006 to run only one task at a time, but I don't know how to achieve that.</strong><br><br>
<strong>Important note</strong>: Please do not suggest creating 4 queues for the 4 app.tasks, because in my real project I have more than 100 app.tasks and managing that many queues would be very difficult.
<br>
You may wonder why, for example, app_1000 must not run concurrently with itself. The answer is complicated and would require explaining far too much of the main codebase, so please take this constraint as given.
<br>
The code is <a href="https://github.com/arezooebrahimi/celery_distribution_tasks" rel="nofollow noreferrer">on GitHub</a> (it is small and will not take much of your time).
If you want to run it, you can use the following commands:</p>
<pre><code>celery -A celery_app worker -Q q_app --loglevel=INFO --concurrency=4 -n worker@%h
celery flower --port=5566
uvicorn api:app --reload
</code></pre>
<br>
thank you
| <python><rabbitmq><celery> | 2023-08-07 06:34:32 | 2 | 778 | Sardar |
76,849,173 | 13,415,726 | how to open an xlsx file with xlrd | <p>I am trying to open an xlsx file with the xlrd module. The code is like this:</p>
<pre><code> workbook = xlrd.open_workbook('DAT_XLSX_EURUSD_M1_2018.xlsx')
</code></pre>
<p>But I get this error:</p>
<pre><code> AttributeError: module 'xlrd' has no attribute 'xlsx'
</code></pre>
<p>It was suggested to use the code below to prevent this error from happening, but it did not help:</p>
<pre><code> xlrd.xlsx.ensure_elementtree_imported(False, None)
xlrd.xlsx.Element_has_iter = True
</code></pre>
<p>My xlrd version is 2.0.1</p>
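For reference, here is a minimal self-contained check showing that <code>openpyxl</code> can read .xlsx data (it builds a tiny workbook in memory, since my original file isn't attached) — I'm wondering if switching libraries is the intended fix:

```python
import io

from openpyxl import Workbook, load_workbook

# Build a tiny .xlsx entirely in memory so the snippet needs no external file
buf = io.BytesIO()
wb = Workbook()
ws = wb.active
ws.append(["timestamp", "open"])
ws.append(["2018-01-01 00:00", 1.2003])
wb.save(buf)
buf.seek(0)

# openpyxl reads .xlsx directly; xlrd >= 2.0 only supports the old .xls format
wb2 = load_workbook(buf)
print(wb2.active["A1"].value)
```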
| <python><python-3.x><xlsx><xlrd> | 2023-08-07 06:12:47 | 1 | 438 | shabnam aboulghasem |
76,849,155 | 14,427,714 | buildx: failed to read current commit information with git rev-parse --is-inside-work-tree | <p>I am getting this error after installing docker inside an existing project. Here is my dockerfile:</p>
<pre><code>FROM python:3.11
ENV PYTHONBUFFERED=1
WORKDIR /code
COPY re.txt .
RUN pip install -r re.txt
COPY . .
EXPOSE 8001
RUN apt-get update && apt-get install -y git
RUN git config --global --add safe.directory .
CMD ["python", "manage.py", "runserver"]
</code></pre>
<p>Here is my docker-compose</p>
<pre><code>version: '3.11'
services:
django:
image: leadbracket
build: .
ports:
- "8001:8001"
</code></pre>
| <python><django><docker> | 2023-08-07 06:08:05 | 0 | 549 | Sakib ovi |
76,849,045 | 7,306,999 | Pandas: extend dataframe so that columns start at the beginning of the calendar week (Monday) | <p>Suppose I have a dataframe with a columns index that contains dates. The first date is not a Monday. As a simple example:</p>
<pre><code># Imports
import numpy as np # version 1.20.3
import pandas as pd # version 1.3.4
# Dataframe initialization
np.random.seed(0)
col_idx = pd.date_range(start="2022-12-22", end="2023-01-15", freq="D")
data = np.random.rand(5, len(col_idx))
df = pd.DataFrame(data, columns=col_idx)
</code></pre>
<p>I would like to extend the dataframe such that the first date falls on the preceding Monday (2022-12-19 in this example). Basically I want to prepend a few extra columns to ensure that my dataframe starts with a full calendar week. All data values in the newly created columns can be set to 0.0.</p>
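For concreteness, here is a naive sketch of what I'm after (I'm assuming <code>reindex</code> with <code>fill_value</code> behaves the way I expect; there may well be a cleaner idiom):

```python
import numpy as np
import pandas as pd

np.random.seed(0)
col_idx = pd.date_range(start="2022-12-22", end="2023-01-15", freq="D")
df = pd.DataFrame(np.random.rand(5, len(col_idx)), columns=col_idx)

# Roll the first date back to its Monday (Timestamp.weekday() is 0 for Monday)
start = col_idx[0] - pd.Timedelta(days=col_idx[0].weekday())
full_idx = pd.date_range(start=start, end=col_idx[-1], freq="D")

# Prepend the missing days as columns filled with 0.0
df_full = df.reindex(columns=full_idx, fill_value=0.0)
print(df_full.columns[0])  # first column is now Monday 2022-12-19
```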
<p>What would be a smart way to do this?</p>
| <python><pandas><dataframe><date> | 2023-08-07 05:43:25 | 1 | 8,674 | Xukrao |
76,848,984 | 6,901,561 | How to get mouse double click event from the title of QGroupBox? | <p>I have a QGroupBox and wanted to do some action when I double-click it.
However, I want to restrict the interaction area to only the title of the QGroupBox, not its body or the widgets inside it.</p>
<p>Found a C++ solution here - <a href="https://forum.qt.io/topic/131379/how-to-get-mouse-double-click-event-from-only-the-title-of-qgroupbox/2" rel="nofollow noreferrer">https://forum.qt.io/topic/131379/how-to-get-mouse-double-click-event-from-only-the-title-of-qgroupbox/2</a></p>
<p>How can we do this in Python?</p>
| <python><pyqt><pyside2> | 2023-08-07 05:26:35 | 0 | 309 | rpi_guru |
76,848,586 | 1,880,182 | Issue with scrolling behaviour in Tkinter | <p>I am encountering a scrolling behaviour issue in my Tkinter-based terminal-like interface. I am using the <code>scrolledtext.ScrolledText</code> widget to display output, and I want to achieve the following behaviour:</p>
<ol>
<li>When the user manually scrolls up, the text area should remain at
that position, even if the window is resized.</li>
<li>If new text is printed while the user is scrolled up, the text area should stay at the current scroll position, unless the user is
already at the bottom of the text area, in which case it should
automatically scroll down to show the new content.</li>
</ol>
<p>I have tried several approaches, including the <code>yview()</code> and the <code>see(tk.END)</code> methods, but I cannot get the desired behaviour.</p>
<p>Here is a simplified version of my code:</p>
<pre><code>import tkinter as tk
import tkinter.font as tkFont
from tkinter import scrolledtext
class TerminalApp:
def __init__(self, root_):
self.user_input_var = None
self.root = root_
self.root.title("Terminal-like Library")
self.font = tkFont.Font(family="Courier", size=12)
self.text_area = scrolledtext.ScrolledText(self.root, wrap=tk.WORD, font=self.font)
self.text_area.pack(expand=True, fill="both")
# Create the edit menu
self.edit_menu = tk.Menu(self.root, tearoff=0)
self.edit_menu.add_command(label="Copy", command=self.copy_text, accelerator="Ctrl+Shift+C")
self.edit_menu.add_command(label="Paste", command=self.paste_text, accelerator="Ctrl+Shift+V")
# Bind right-click event to show the edit menu
self.text_area.bind("<Button-3>", self.show_edit_menu)
# Bind keyboard shortcuts
self.root.bind("<Control-Shift-c>", self.copy_text)
self.root.bind("<Control-Shift-v>", self.paste_text)
self.text_area.bind("<Control-Shift-C>", self.copy_text)
self.text_area.bind("<Control-Shift-V>", self.paste_text)
def write_output(self, output):
yview = self.text_area.yview()[1]
self.text_area.configure(state="normal")
self.text_area.insert(tk.END, output)
self.text_area.configure(state="disabled")
if yview == 1.0:
self.text_area.see(tk.END)
def stop(self):
self.root.destroy()
def print(self, *args, **kwargs):
text = "".join(map(str, args))
end = kwargs.get("end", "\n")
self.write_output(text + end)
def show_edit_menu(self, event):
self.edit_menu.post(event.x_root, event.y_root)
def copy_text(self, event=None):
selected_text = self.text_area.get(tk.SEL_FIRST, tk.SEL_LAST)
if selected_text:
self.root.clipboard_clear()
self.root.clipboard_append(selected_text)
def paste_text(self, event=None):
clipboard_text = self.root.clipboard_get()
if clipboard_text:
self.text_area.insert(tk.INSERT, clipboard_text)
def input(self, prompt=""):
def adjust_entry_width(event):
self.root.update_idletasks()
entry.configure(width=(self.text_area.winfo_width() - entry.winfo_x() - 18) // self.font.measure("0"),
borderwidth=0, selectborderwidth=0, insertborderwidth=0, border=0.0)
# Update the entry width when the scrolledtext widget is resized
self.text_area.bind("<Configure>", adjust_entry_width)
def on_enter(event=None):
self.print(self.user_input_var.get())
entry.destroy()
if prompt:
self.print(prompt, end="")
yview = self.text_area.yview()[1]
self.user_input_var = tk.StringVar()
entry = tk.Entry(self.text_area, textvariable=self.user_input_var, font=self.font, relief=tk.FLAT,
borderwidth=0, border=0.0, insertborderwidth=0, selectborderwidth=0)
entry.pack(side=tk.LEFT, fill="x", padx=0, pady=0, ipady=0, ipadx=0, expand=True)
self.text_area.window_create(tk.INSERT, padx=0, pady=0, stretch=1, window=entry)
if yview == 1.0:
self.text_area.see(tk.END)
adjust_entry_width(None)
entry.bind("<Return>", on_enter)
entry.focus_set()
self.root.wait_window(entry) # Wait for the entry window to close
user_input_ = self.user_input_var.get()
return user_input_
if __name__ == "__main__":
root = tk.Tk()
app = TerminalApp(root)
app.print("Hello, world!", end=" ")
app.print("This terminal-like library is great.")
app.print("""
Lorem ipsum dolor sit amet, consectetur adipiscing elit. Curabitur eu velit nec tortor pharetra pretium ac
nec felis. Nullam faucibus libero nec dui auctor pellentesque. Suspendisse eu ipsum in nibh auctor maximus vitae
vitae urna. Phasellus facilisis nulla tempus erat bibendum faucibus. Fusce turpis leo, condimentum eu mollis ac,
sodales at justo. Pellentesque dapibus feugiat leo, ac euismod lectus fermentum malesuada. Aliquam a turpis
tempus, vulputate risus id, lobortis ipsum. Nunc tincidunt nibh a urna suscipit egestas. Phasellus tellus tellus,
bibendum in lacus non, mattis pharetra arcu. Nam a est eu odio facilisis sollicitudin sit amet fringilla nunc.
Nunc massa enim, tempor vel est et, eleifend vulputate est. Suspendisse eu libero vel odio commodo malesuada.
Fusce gravida, sem ut gravida congue, magna massa accumsan libero, ac interdum arcu sem sit amet diam. Cras eu
eros fringilla urna fringilla faucibus quis sed dui.""".replace("\n", "").replace(" ", ""))
app.print("""
Quisque vitae quam neque. Maecenas tempus dapibus turpis, a laoreet velit commodo id. Vivamus id lectus turpis.
Nulla suscipit, augue quis egestas pellentesque, nulla sem imperdiet turpis, ut porta nulla nisl in eros.
Phasellus convallis dignissim mollis. Fusce ut blandit ipsum. Etiam venenatis turpis et lobortis efficitur. Morbi
venenatis ante nisl, sed aliquet eros tincidunt nec. Sed vitae odio ut dui accumsan dictum. Vestibulum ante ipsum
primis in faucibus orci luctus et ultrices posuere cubilia curae; Nulla et nulla lectus. Vivamus euismod quam ut
lacus varius, id sodales neque semper. Duis eleifend fermentum nisl, a commodo ex. Donec id massa erat. Vivamus
sed ultrices nunc.""".replace("\n", "").replace(" ", ""))
app.print("""
Ut molestie purus id elit porta, nec ultricies lorem pharetra. Donec nec rhoncus magna. Praesent tristique lacus
et metus vulputate interdum. Phasellus eu lectus quam. Sed ac diam faucibus, commodo quam eget, aliquam tortor.
Integer rutrum consequat lorem. Sed metus eros, placerat sit amet tempor vitae, ullamcorper non risus. Sed a
aliquam neque, vitae rhoncus ipsum. Nam libero purus, mattis ut scelerisque vitae, mattis at lacus. Vivamus
fermentum diam faucibus justo lobortis cursus. Aliquam erat volutpat. Aliquam ac vestibulum mi,
sit amet condimentum neque. Phasellus eget nisi et lorem ultricies finibus venenatis at metus.""".replace("\n",
"").replace(
" ", ""))
app.print("""
Phasellus est leo, finibus non pretium sit amet, tempus id lacus. In hac habitasse platea dictumst. Etiam sed
lacinia nunc, sit amet dignissim eros. Fusce semper fermentum erat ut accumsan. Donec volutpat eget magna in
porttitor. In sed ultrices nisi, hendrerit ornare ante. Fusce imperdiet metus sit amet dolor bibendum
consectetur. Sed dapibus enim libero, id blandit augue sollicitudin id. Fusce non metus feugiat, accumsan velit
et, consectetur quam.""".replace("\n", "").replace(" ", ""))
app.print("""
Curabitur placerat placerat elit eu interdum. Proin tempus mi semper urna tempor, nec molestie metus vehicula.
Donec metus ante, rutrum ac porta aliquet, bibendum ut mi. Phasellus mattis erat mauris, et aliquet nisl faucibus
maximus. Nam ac ex turpis. Morbi pulvinar non leo in efficitur. Vivamus dignissim porttitor turpis non tempor.
Morbi enim neque, facilisis id odio id, iaculis finibus dolor. In consectetur urna et vulputate rutrum. Aliquam
eget maximus mauris. Pellentesque habitant morbi tristique senectus et netus et malesuada fames ac turpis
egestas. Pellentesque viverra risus eget maximus molestie. Curabitur pulvinar dui nec mi vehicula, id vestibulum
urna consequat.""".replace("\n", "").replace(" ", ""))
user_input = app.input("Enter a number: ")
app.print(f"Entered input: {user_input}")
root.mainloop()
</code></pre>
<p>I appreciate any guidance or suggestions to help me achieve the intended scrolling behaviour. Thank you in advance for your assistance!</p>
| <python><python-3.x><user-interface><tkinter><scroll> | 2023-08-07 03:08:41 | 1 | 541 | Eftal Gezer |
76,848,543 | 9,244,653 | What is the best way to plot confidence intervals for close curves | <p>I have to plot a figure that contains three curves that are close to each other. When I plot the confidence intervals for each curve, I get an ugly figure in which the confidence intervals overlap.</p>
<p>Here is an example.</p>
<pre><code>from matplotlib import pyplot as plt
import numpy as np
y1 = np.array([2, 3.9, 5.8, 7.2])
y2 = np.array([2, 3.7, 5.6, 7.1])
y3 = np.array([1.9, 3.6, 5.4, 6.8])
x = [2, 4, 6, 8]
ci1 = 1.96 * np.std(y1)/np.sqrt(len(x))
ci2 = 1.96 * np.std(y2)/np.sqrt(len(x))
ci3 = 1.96 * np.std(y3)/np.sqrt(len(x))
fig, ax = plt.subplots()
ax.plot(x, y1)
ax.fill_between(x, (y1-ci1), (y1+ci1), color='b', alpha=.1)
ax.plot(x, y2)
ax.fill_between(x, (y2-ci2), (y2+ci2), color='b', alpha=.1)
ax.plot(x, y3)
ax.fill_between(x, (y3-ci3), (y3+ci3), color='b', alpha=.1)
plt.show()
</code></pre>
<p><a href="https://i.sstatic.net/bTnm9.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/bTnm9.png" alt="enter image description here" /></a>
Do you know of any better way to show the curves with the confidence intervals? I tried error bars, but that was even uglier. Would it be a good alternative to show the confidence intervals in the legend, or somewhere on the x- or y-axis?</p>
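For completeness, this is the error-bar variant I tried; the only trick that made it remotely readable was offsetting the x positions slightly per curve (the 0.1 offset is arbitrary):

```python
import numpy as np
from matplotlib import pyplot as plt

x = np.array([2, 4, 6, 8])
curves = {
    "y1": np.array([2, 3.9, 5.8, 7.2]),
    "y2": np.array([2, 3.7, 5.6, 7.1]),
    "y3": np.array([1.9, 3.6, 5.4, 6.8]),
}

fig, ax = plt.subplots()
# Dodge each curve horizontally so the error bars don't sit on top of each other
for (label, y), dx in zip(curves.items(), (-0.1, 0.0, 0.1)):
    ci = 1.96 * np.std(y) / np.sqrt(len(x))
    ax.errorbar(x + dx, y, yerr=ci, capsize=3, label=label)
ax.legend()
plt.show()
```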
| <python><matplotlib><confidence-interval><errorbar> | 2023-08-07 02:52:45 | 1 | 531 | zdm |
76,848,473 | 7,306,999 | Pandas: split dataframe into weeks along columns axis | <p>Suppose I have a pandas dataframe in which the columns index contains dates. I would like to split this dataframe: I want to obtain a sub-dataframe for each calendar week.</p>
<p>Here is the initialization of an example dataframe, along with my two attempts to tackle the problem:</p>
<pre><code># Imports
import numpy as np # version 1.20.3
import pandas as pd # version 1.3.4
# Dataframe initialization
np.random.seed(0)
col_idx = pd.date_range(start="2022-12-19", end="2023-01-15", freq="D")
data = np.random.rand(5, len(col_idx))
df = pd.DataFrame(data, columns=col_idx)
#-----------------------------------------------------------------------------#
# Attempt 01 #
#-----------------------------------------------------------------------------#
# Intialize Excel writer
with pd.ExcelWriter("attempt_01.xlsx") as writer:
# Initialize row index for keeping track of location in Excel sheet
row_idx = 1
# Perform grouping along columns axis
grouper = df.groupby([df.columns.year, df.columns.week], axis=1)
for weekyear, sub_df in grouper:
# Here various operations on the data values in `sub_df` would be
# performed (not relevant to my question)
# Write `sub_df` to Excel file
sub_df.to_excel(writer, startrow=row_idx, index=False)
# Prepare for next loop iteration
row_idx += len(sub_df) + 3
#-----------------------------------------------------------------------------#
# Attempt 02 #
#-----------------------------------------------------------------------------#
# Intialize Excel writer
with pd.ExcelWriter("attempt_02.xlsx") as writer:
# Initialize row index for keeping track of location in Excel sheet
row_idx = 1
# Perform grouping along columns axis
grouper = df.groupby(pd.Grouper(freq="W-MON", axis=1))
for week, sub_df in grouper:
# Here various operations on the data values in `sub_df` would be
# performed (not relevant to my question)
# Write `sub_df` to Excel file
sub_df.to_excel(writer, startrow=row_idx, index=False)
# Prepare for next loop iteration
row_idx += len(sub_df) + 3
</code></pre>
<p>The problem with the first attempt is that a sub-dataframe with one single column for 01 Jan 2023 is generated during the last loop iteration. I instead want the 01 Jan 2023 column to be part of the second sub-dataframe (i.e. I want a sub-dataframe containing 26 Dec 2022 through 01 Jan 2023 columns, a full calendar week Monday through Sunday). Additionally this attempt raises a pandas FutureWarning which states that the <code>week</code> accessor has been deprecated (I'm using pandas version 1.3.4).</p>
<p>The second attempt generates a result that is quite confusing to me. I get sub-dataframes with 28 columns and varying numbers of rows.</p>
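One more idea I have been toying with: grouping on the ISO calendar instead, since <code>isocalendar()</code> assigns Sunday 2023-01-01 to week 52 of ISO year 2022. A sketch (transposing so I can avoid grouping along the columns axis entirely — I'm not sure this is the idiomatic way):

```python
import numpy as np
import pandas as pd

np.random.seed(0)
col_idx = pd.date_range(start="2022-12-19", end="2023-01-15", freq="D")
df = pd.DataFrame(np.random.rand(5, len(col_idx)), columns=col_idx)

# ISO calendar: 2023-01-01 (a Sunday) belongs to week 52 of ISO year 2022
iso = col_idx.isocalendar()
labels = [f"{y}-W{w:02d}" for y, w in zip(iso["year"], iso["week"])]

# Transpose, group rows by ISO-week label, then transpose each group back
sub_frames = {label: sub.T for label, sub in df.T.groupby(labels)}
for label, sub_df in sub_frames.items():
    print(label, sub_df.shape)
```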
<p>Would anyone be able to point me in the right direction? Thanks.</p>
| <python><pandas><dataframe><group-by> | 2023-08-07 02:21:57 | 2 | 8,674 | Xukrao |
76,848,449 | 9,983,652 | AttributeError: 'GPT4All' object has no attribute 'chat_completion' | <p>I just installed gpt4all 1.0.8 and am not sure how to solve this error. Thanks for the help. I am using Python 3.8.</p>
<p><a href="https://pypi.org/project/gpt4all/" rel="nofollow noreferrer">https://pypi.org/project/gpt4all/</a></p>
<pre><code>import gpt4all
gptj=gpt4all.GPT4All("ggml-gpt4all-j-v1.3-groovy")
messages = [{"role": "user", "content": "Give me a list of 10 colors and their RGB code"}]
ret = gptj.chat_completion(messages)
print(ret)
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
Cell In[3], line 2
1 messages = [{"role": "user", "content": "Give me a list of 10 colors and their RGB code"}]
----> 2 ret = gptj.chat_completion(messages)
3 print(ret)
AttributeError: 'GPT4All' object has no attribute 'chat_completion'
</code></pre>
| <python><gpt4all> | 2023-08-07 02:11:49 | 1 | 4,338 | roudan |
76,848,383 | 1,714,987 | langflow application stuck at the thinking stage without responding | <p>I have successfully installed langflow (0.4.0) and am testing the first application, "Basic Chat with Prompt and History". After entering the openai_key in the ChatOpenAI panel and setting "input key" to "text", I build the application successfully. However, when I open the chat interface and ask "hello", the application gets stuck at the thinking stage, as illustrated below:</p>
<p><a href="https://i.sstatic.net/JA2UQ.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/JA2UQ.png" alt="enter image description here" /></a></p>
<p>log information:</p>
<pre class="lang-none prettyprint-override"><code>> Entering new ConversationChain chain...
Prompt after formatting:
The following is a friendly conversation between a human and an AI. The AI is talkative and provides lots of specific details from its context. If the AI does not know the answer to a question, it truthfully says it does not know.
Current conversation:
Human: hello
AI:
/mnt/data/yangpengcheng/miniconda3/envs/langflow/lib/python3.10/site-packages/tenacity/__init__.py:338: RuntimeWarning: coroutine 'AsyncRunManager.on_retry' was never awaited
self.before_sleep(retry_state)
RuntimeWarning: Enable tracemalloc to get the object allocation traceback
[2023-08-07 01:43:05 +0000] [88942] [INFO] ('10.12.186.19', 55850) - "WebSocket /api/v1/chat/6242294b-dad0-4698-a0cf-3766e80dd008" [accepted]
[2023-08-07 01:43:05 +0000] [88942] [INFO] connection open
> Entering new LLMChain chain...
Prompt after formatting:
The following is a friendly conversation between a human and an AI. The AI is talkative and provides lots of specific details from its context. If the AI does not know the answer to a question, it truthfully says it does not know.
Current conversation:
Human: hello
AI:
> Entering new LLMChain chain...
Prompt after formatting:
The following is a friendly conversation between a human and an AI. The AI is talkative and provides lots of specific details from its context. If the AI does not know the answer to a question, it truthfully says it does not know.
Current conversation:
Human: hello
AI:
[2023-08-07 01:51:10 +0000] [88940] [CRITICAL] WORKER TIMEOUT (pid:88942)
[2023-08-07 01:51:10 +0000] [88940] [ERROR] Worker (pid:88942) was sent code 134!
[2023-08-07 01:51:10 +0000] [89080] [INFO] Booting worker with pid: 89080
[2023-08-07 01:51:10 +0000] [89080] [INFO] Started server process [89080]
[2023-08-07 01:51:10 +0000] [89080] [INFO] Waiting for application startup.
[2023-08-07 01:51:10 +0000] [89080] [INFO] Application startup complete.
</code></pre>
<p>I have also tested other applications, with the same problem.</p>
<p>The langflow is installed on a server. I am sure the server can connect to the internet.</p>
<p>There is a similar question on reddit: <a href="https://www.reddit.com/r/langflow/comments/14yfs46/chat_is_not_responding/" rel="nofollow noreferrer">langflow: Chat is not responding</a></p>
| <python><langchain><py-langchain> | 2023-08-07 01:45:52 | 1 | 830 | pengchy |
76,848,275 | 1,362,485 | Sorting values within a pandas dataframe group by | <p>I have a dataframe with three columns: account_number, balance_date, and amount.</p>
<p>I need to calculate, for each account, the amount moving average, where the amounts within each account are sorted by balance_date. This is what I do:</p>
<pre><code>df = df.sort_values(by=['account_number', 'balance_date'])
df['amount_mov_average'] = df.groupby('account_number')['amount'].rolling(3).mean().droplevel(0)
</code></pre>
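For reference, a tiny self-contained toy version of the same pattern (made-up data, and a window of 2 instead of 3):

```python
import pandas as pd

df = pd.DataFrame({
    "account_number": [1, 1, 1, 2, 2],
    "balance_date": pd.to_datetime(
        ["2023-01-03", "2023-01-01", "2023-01-02", "2023-01-02", "2023-01-01"]
    ),
    "amount": [30.0, 10.0, 20.0, 5.0, 15.0],
})

# Sort once by account and date, then compute the per-account rolling mean
df = df.sort_values(["account_number", "balance_date"]).reset_index(drop=True)
df["amount_mov_average"] = (
    df.groupby("account_number")["amount"].rolling(2).mean().droplevel(0)
)
print(df)
```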
<p>I am having problems with the calculation, and I'm not sure whether it's because the rows are not sorted by balance_date within each account.</p>
<p>Question is: do I need to sort again by balance_date after the group by, or the <code>sort_values</code> that I do before the group by is enough?</p>
<p>Am I forced to create an index with account_number and balance_date? I prefer not to add complexity creating a new index.</p>
| <python><pandas> | 2023-08-07 00:51:57 | 1 | 1,207 | ps0604 |
76,848,266 | 171,461 | Using a GCP service account to impersonate a Workspace user... without a key file | <p>I have been using a GCP service account json key file for years to impersonate a Google Workspace service account for accessing various Workspace Admin SDK APIs.</p>
<p>I would like to move away from having to export a key file if at all possible. My organization, for good reasons, as implemented an Org Policy to block exporting service account keys.</p>
<p>I would like to know how to use the Service Account running my Cloud Run service to impersonate a Workspace side Service Account please ?</p>
<p>PS: the service account on the Workspace side as been given <code>domain wide delegation</code> and my service account on the GCP side as been authorized against it with the necessary scopes.</p>
<p>PS2: I do not want to be using <code>gcloud</code> but a proper python script.</p>
<hr />
<ol>
<li>I have enabled "admin sdk"</li>
<li>I have granted "Service Account Token Creator" to the service account on the GCP side.</li>
<li>Domain Delegation is enabled on the service account on the Workspace side.</li>
<li>The GCP side service account was authorized with the necessary scopes (in my specific case, the directory API).</li>
</ol>
<hr />
<pre class="lang-py prettyprint-override"><code>import google.auth
from google.auth.transport.requests import Request
from google.auth import impersonated_credentials
def get_access_token():
SCOPES = [
"https://www.googleapis.com/auth/cloud-platform",
"https://www.googleapis.com/auth/admin.directory.group.readonly",
"https://www.googleapis.com/auth/admin.directory.user.readonly"
]
# This gets the credentials of the service account running the Cloud Run service. This works without an issue.
credentials, _project_id = google.auth.default(scopes=SCOPES)
# Impersonated user's email address
# e.g. x@y.com in the corporate workspace org
impersonated_user_email = WORKSPACE_SERVICE_ACCOUNT_EMAIL
# Set up a Credentials object with domain-wide delegation
target_credentials = impersonated_credentials.Credentials(
source_credentials=credentials,
target_principal=impersonated_user_email,
target_scopes=SCOPES
)
# Fetch the OIDC token with domain-wide delegation
target_credentials.refresh(Request())
# The above generates the "Not found; Gaia id not found for email x@y.com" exception
access_token = target_credentials.token
logging.debug({
"message": f"get_access_token",
"credentials": str(target_credentials),
"repr(credentials)": repr(target_credentials)
})
return access_token
</code></pre>
<hr />
<p>From the <a href="https://cloud.google.com/iam/docs/create-short-lived-credentials-delegated#sa-credentials-delegated-chain" rel="nofollow noreferrer">documentation</a>, I only see delegation for GCP service accounts. I am surely missing something.</p>
| <python><google-cloud-platform><google-cloud-run><google-workspace> | 2023-08-07 00:46:47 | 1 | 97,535 | jldupont |
76,848,083 | 1,056,460 | python3.11 + Celery + Gevent -> "returned NULL without setting an exception" | <p>When running a Celery worker in a container with Python 3.11 and Redis as the broker, I am getting the following exception.</p>
<p>Downgrading to Python 3.10 solves the problem, but is there another solution?</p>
<pre><code>2023-08-04 11:10:28,805] [WARNING] [Dummy-748] [log.py:232 - write()] Traceback (most recent call last):
[2023-08-04 11:10:28,806] [WARNING] [Dummy-748] [log.py:232 - write()] File "src/gevent/_waiter.py", line 122, in gevent._gevent_c_waiter.Waiter.switch
[2023-08-04 11:10:28,806] [WARNING] [Dummy-748] [log.py:232 - write()] SystemError: <built-in method switch of gevent._gevent_cgreenlet.Greenlet object at 0x7fcdb37de700> returned NULL without setting an exception
[2023-08-04 11:10:28,806] [WARNING] [Dummy-748] [log.py:232 - write()] 2023-08-04T11:10:28Z
[2023-08-04 11:10:28,806] [WARNING] [Dummy-748] [log.py:232 - write()]
[2023-08-04 11:10:28,806] [WARNING] [Dummy-748] [log.py:232 - write()] <built-in method switch of gevent._gevent_cgreenlet.Greenlet object at 0x7fcdb37de700> failed with SystemError
[2023-08-04 11:10:28,806] [ERROR] [MainThread] [consumer.py:248 - perform_pending_operations()] Pending callback raised: SystemError('<built-in method switch of gevent._gevent_cgreenlet.Greenlet object at 0x7fcdb37de700> returned NULL without setting an exception')
Traceback (most recent call last):
File "/home/.local/lib/python3.11/site-packages/celery/worker/consumer/consumer.py", line 246, in perform_pending_operations
self._pending_operations.pop()()
File "/home/.local/lib/python3.11/site-packages/vine/promises.py", line 160, in __call__
return self.throw()
^^^^^^^^^^^^
File "/home/.local/lib/python3.11/site-packages/vine/promises.py", line 157, in __call__
retval = fun(*final_args, **final_kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/.local/lib/python3.11/site-packages/kombu/message.py", line 131, in ack_log_error
self.ack(multiple=multiple)
File "/home/.local/lib/python3.11/site-packages/kombu/message.py", line 126, in ack
self.channel.basic_ack(self.delivery_tag, multiple=multiple)
File "/home/.local/lib/python3.11/site-packages/kombu/transport/virtual/base.py", line 670, in basic_ack
self.qos.ack(delivery_tag)
File "/home/.local/lib/python3.11/site-packages/kombu/transport/redis.py", line 379, in ack
self._remove_from_indices(delivery_tag).execute()
File "/home/.local/lib/python3.11/site-packages/redis/client.py", line 2120, in execute
return conn.retry.call_with_retry(
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/.local/lib/python3.11/site-packages/redis/retry.py", line 46, in call_with_retry
return do()
^^^^
File "/home/.local/lib/python3.11/site-packages/redis/client.py", line 2121, in <lambda>
lambda: execute(conn, stack, raise_on_error),
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/.local/lib/python3.11/site-packages/redis/client.py", line 1975, in _execute_transaction
self.parse_response(connection, "_")
File "/home/.local/lib/python3.11/site-packages/redis/client.py", line 2060, in parse_response
result = Redis.parse_response(self, connection, command_name, **options)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/.local/lib/python3.11/site-packages/redis/client.py", line 1286, in parse_response
response = connection.read_response()
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/.local/lib/python3.11/site-packages/redis/connection.py", line 882, in read_response
response = self._parser.read_response(disable_decoding=disable_decoding)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/.local/lib/python3.11/site-packages/redis/connection.py", line 349, in read_response
result = self._read_response(disable_decoding=disable_decoding)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/.local/lib/python3.11/site-packages/redis/connection.py", line 359, in _read_response
raw = self._buffer.readline()
^^^^^^^^^^^^^^^^^^^^^^^
File "/home/.local/lib/python3.11/site-packages/redis/connection.py", line 262, in readline
self._read_from_socket()
File "/home/.local/lib/python3.11/site-packages/redis/connection.py", line 212, in _read_from_socket
data = self._sock.recv(socket_read_size)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/.local/lib/python3.11/site-packages/gevent/_socketcommon.py", line 666, in recv
self._wait(self._read_event)
File "src/gevent/_hub_primitives.py", line 317, in gevent._gevent_c_hub_primitives.wait_on_socket
File "src/gevent/_hub_primitives.py", line 322, in gevent._gevent_c_hub_primitives.wait_on_socket
File "src/gevent/_hub_primitives.py", line 304, in gevent._gevent_c_hub_primitives._primitive_wait
File "src/gevent/_hub_primitives.py", line 46, in gevent._gevent_c_hub_primitives.WaitOperationsGreenlet.wait
File "src/gevent/_hub_primitives.py", line 46, in gevent._gevent_c_hub_primitives.WaitOperationsGreenlet.wait
File "src/gevent/_hub_primitives.py", line 55, in gevent._gevent_c_hub_primitives.WaitOperationsGreenlet.wait
File "src/gevent/_waiter.py", line 154, in gevent._gevent_c_waiter.Waiter.get
File "src/gevent/_greenlet_primitives.py", line 61, in gevent._gevent_c_greenlet_primitives.SwitchOutGreenletWithLoop.switch
File "src/gevent/_greenlet_primitives.py", line 61, in gevent._gevent_c_greenlet_primitives.SwitchOutGreenletWithLoop.switch
File "src/gevent/_greenlet_primitives.py", line 65, in gevent._gevent_c_greenlet_primitives.SwitchOutGreenletWithLoop.switch
File "src/gevent/_gevent_c_greenlet_primitives.pxd", line 35, in gevent._gevent_c_greenlet_primitives._greenlet_switch
File "src/gevent/_waiter.py", line 122, in gevent._gevent_c_waiter.Waiter.switch
SystemError: <built-in method switch of gevent._gevent_cgreenlet.Greenlet object at 0x7fcdb37de700> returned NULL without setting an exception
</code></pre>
| <python><redis><celery><gevent><greenlets> | 2023-08-06 23:10:17 | 0 | 4,131 | DanielM |
76,848,035 | 7,169,895 | How do I control the tab that is deleted? | <p>I am almost done implementing browser-like tab functionality in my PySide6 app. I want to be able to load different pages based on the button the user presses on the new tab page. The only thing I do not quite have working is when someone closes a different tab than the one they are currently on. For instance, sometimes closing a different tab closes two tabs: the selected one and the current one.</p>
<p>I know this has to do with my <code>close_tab</code> function and the default <code>tabCloseRequested</code>, but I am stumped as to how to change it. A print statement in <code>close_tab</code> shows it is only printed once, so I think PySide6's default closing behavior is conflicting with mine.</p>
<p>Could someone look over my code and tell me what I am doing wrong?</p>
<p>My code:</p>
<pre><code>from PySide6 import QtWidgets
from PySide6 import QtCore
from PySide6.QtCore import Signal, QSize
from PySide6.QtWidgets import QLabel, QWidget, QHBoxLayout, QVBoxLayout
class ShrinkTabBar(QtWidgets.QTabBar):
_widthHint = -1
_initialized = False
_recursiveCheck = False
addClicked = Signal()
def __init__(self, parent):
super(ShrinkTabBar, self).__init__(parent)
self.setElideMode(QtCore.Qt.TextElideMode.ElideRight)
self.setExpanding(False)
self.setTabsClosable(True)
self.addButton = QtWidgets.QToolButton(self.parent(), text='+')
self.addButton.clicked.connect(self.addClicked)
self._recursiveTimer = QtCore.QTimer(singleShot=True, timeout=self._unsetRecursiveCheck, interval=0)
self._tabHint = QSize(0, 0)
self._minimumHint = QSize(0, 0)
def _unsetRecursiveCheck(self):
self._recursiveCheck = False
def _computeHints(self):
if not self.count() or self._recursiveCheck:
return
self._recursiveCheck = True
opt = QtWidgets.QStyleOptionTab()
self.initStyleOption(opt, 0)
width = self.style().pixelMetric(QtWidgets.QStyle.PixelMetric.PM_TabBarTabHSpace, opt, self)
iconWidth = self.iconSize().width() + 4
self._minimumWidth = width + iconWidth
# default text widths are arbitrary
fm = self.fontMetrics()
self._minimumCloseWidth = self._minimumWidth + fm.horizontalAdvance('x' * 4) + iconWidth
self._defaultWidth = width + fm.horizontalAdvance('x' * 17)
self._defaultHeight = super().tabSizeHint(0).height()
self._minimumHint = QtCore.QSize(self._minimumWidth, self._defaultHeight)
self._defaultHint = self._tabHint = QtCore.QSize(self._defaultWidth, self._defaultHeight)
self._initialized = True
self._recursiveTimer.start()
def _updateSize(self):
if not self.count():
return
frameWidth = self.style().pixelMetric(
QtWidgets.QStyle.PixelMetric.PM_DefaultFrameWidth, None, self.parent())
buttonWidth = self.addButton.sizeHint().width()
self._widthHint = (self.parent().width() - frameWidth - buttonWidth) // self.count()
self._tabHint = QtCore.QSize(min(self._widthHint, self._defaultWidth), self._defaultHeight)
# dirty trick to ensure that the layout is updated
if not self._recursiveCheck:
self._recursiveCheck = True
self.setIconSize(self.iconSize())
self._recursiveTimer.start()
def minimumTabSizeHint(self, index):
if not self._initialized:
self._computeHints()
return self._minimumHint
def tabSizeHint(self, index):
if not self._initialized:
self._computeHints()
return self._tabHint
def tabLayoutChange(self):
if self.count() and not self._recursiveCheck:
self._updateSize()
# self._closeIconTimer.start()
def tabRemoved(self, index):
if not self.count():
self.addButton.setGeometry(1, 2,
self.addButton.sizeHint().width(), self.height() - 4)
def changeEvent(self, event):
super().changeEvent(event)
if event.type() in (event.StyleChange, event.FontChange):
self._updateSize()
def resizeEvent(self, event):
if not self.count():
super().resizeEvent(event)
return
self._recursiveCheck = True
super().resizeEvent(event)
height = self.sizeHint().height()
if height < 0:
# a tab bar without tabs returns an invalid size
height = self.addButton.height()
self.addButton.setGeometry(self.geometry().right() + 1, 2,
self.addButton.sizeHint().width(), height - 4)
# self._closeIconTimer.start()
self._recursiveTimer.start()
class ShrinkTabWidget(QtWidgets.QTabWidget):
addClicked = Signal()
def __init__(self, *args, **kwargs):
super().__init__(*args, **kwargs)
self._tabBar = ShrinkTabBar(self)
self.setTabBar(self._tabBar)
self._tabBar.tabCloseRequested.connect(self.close_tab)
self._tabBar.addClicked.connect(self.get_page_choices)
def get_page_choices(self):
selection_page = SelectionPage()
index = self.addTab(selection_page, 'New Tab')
self._tabBar.setCurrentIndex(index) # Set as current tab
def resizeEvent(self, event):
self._tabBar._updateSize()
super().resizeEvent(event)
def close_tab(self, index):
self.widget(index).deleteLater()
self._tabBar.removeTab(index)
class TabContainer(QWidget):
def __init__(self):
super(TabContainer, self).__init__()
self.setGeometry(150, 150, 650, 350)
self.tabwidget = ShrinkTabWidget(self)
self.top_vbox = QtWidgets.QHBoxLayout()
self.top_vbox.addWidget(self.tabwidget)
self.setLayout(self.top_vbox)
self.show()
class SelectionPage(QWidget):
def __init__(self):
super(SelectionPage, self).__init__()
self.vbox = QtWidgets.QVBoxLayout()
self.portfolio_btn = QtWidgets.QPushButton("Portfolio")
self.portfolio_btn.clicked.connect(self.get_portfolio_page)
self.vbox.addWidget(self.portfolio_btn)
self.setLayout(self.vbox)
def get_portfolio_page(self):
children = []
for i in range(self.vbox.count()):
child = self.vbox.itemAt(i).widget()
if child:
children.append(child)
for child in children:
child.deleteLater()
self.vbox.deleteLater()
portfolio_page = PortfolioPage()
self.vbox.addWidget(portfolio_page)
class PortfolioPage(QWidget):
def __init__(self):
super(PortfolioPage, self).__init__()
vbox = QVBoxLayout()
label = QLabel('Test')
vbox.addWidget(label)
self.setLayout(vbox)
import sys
if __name__ == "__main__":
app = QtWidgets.QApplication(sys.argv)
tab_container = TabContainer()
tab_container.show()
sys.exit(app.exec())
</code></pre>
| <python><pyside6> | 2023-08-06 22:48:20 | 1 | 786 | David Frick |
76,848,031 | 10,380,942 | Invalid MEX-file '...\mvmedian.mexw64': The specified procedure could not be found. (.mexw64 file error when running on Jupyter with matlab_kernel) | <p>I've set up a Jupyter Notebook to run MATLAB via the <code>matlab_kernel</code>, following the instructions on <a href="https://github.com/mathworks/jupyter-matlab-proxy" rel="nofollow noreferrer">this GitHub page</a>. Basic functions like <code>peaks</code> or <code>plot(...)</code> work with no issue on Jupyter.</p>
<p>However, I got an error when executing a function that calls <code>medfilt1.m</code> in the <code>signal toolbox</code> (<code>..\MATLAB\R2020a\toolbox\signal\signal\medfilt1.m</code>):</p>
<pre><code>Invalid MEX-file 'C:\Users\...\mvmedian.mexw64': The specified procedure could not be found.
</code></pre>
<p>I noticed that <code>medfilt1.m</code> calls <code>mvmedian()</code> function (<code>y = mvmedian(x,n,dim,'central',missing,padding)</code>) and <code>mvmedian.mexw64</code> file does exist in <code>..\MATLAB\R2020a\toolbox\signal\signal\private</code>.</p>
<p>I tried to check the dependencies of <code>mvmedian.mexw64</code> using <a href="https://www.dependencywalker.com/" rel="nofollow noreferrer">Dependency Walker</a> by following the steps explained in <a href="https://stackoverflow.com/questions/15338427/error-invalid-mex-file-the-specified-module-could-not-be-found">here</a> and <a href="https://www.mathworks.com/matlabcentral/answers/95906-how-do-i-use-dependency-walker-with-matlab" rel="nofollow noreferrer">MATLAB Answers</a>. There are so many dependencies (<code>.dll</code> files), and I am not sure if I need to download all of them (from <a href="https://www.dll-files.com/" rel="nofollow noreferrer">DLL-FILES.COM</a>) and add their path to Jupyter Notebook...</p>
<p>The function runs without any issue when I run the same script directly in MATLAB.
Why does it give an error when I run it on Jupyter Notebook? Is this something to do with <code>matlab_kernel</code>? How can I resolve this issue?</p>
<p>Thanks,</p>
<hr />
<p>My environment is:</p>
<pre><code>64-bit Windows 10 (x64-based processor)
MATLAB 2023a
VS Code 1.81.0
Python 3.9.17
</code></pre>
| <python><matlab><jupyter-notebook><kernel> | 2023-08-06 22:47:33 | 0 | 393 | alpha |
76,848,009 | 7,228,014 | Parsing nested children nodes using pandas.read_xml | <p>I would like to import an xml with nested structure into a pandas dataframe. I include a sample xml</p>
<pre><code><?xml version='1.0' encoding='utf-8'?>
<AuditFile xmlns="urn:OECD:StandardAuditFile-Taxation/2.00">
<MasterFiles>
<Customers>
<Customer>
<RegistrationNumber>FR16270524</RegistrationNumber>
<Name>Test guy S.A</Name>
<Address>
<StreetName>1, av des Champs Elysées</StreetName>
<City>France</City>
<PostalCode>75000</PostalCode>
<Country>FR</Country>
</Address>
<Contact>
<ContactPerson>
<FirstName>Boom</FirstName>
<LastName>Baker</LastName>
</ContactPerson>
<Telephone>+331523526</Telephone>
<Email>boom.baker@sample.com</Email>
</Contact>
<TaxRegistration>
<TaxRegistrationNumber>FR16270524</TaxRegistrationNumber>
<TaxNumber>FR16270524</TaxNumber>
</TaxRegistration>
<CustomerID>2800002252</CustomerID>
<AccountID>400100</AccountID>
<OpeningDebitBalance>32.76</OpeningDebitBalance>
<ClosingDebitBalance>0.0</ClosingDebitBalance>
</Customer>
</Customers>
</MasterFiles>
</AuditFile>
</code></pre>
<p>By 'flatten', I mean that I would like every end node to correspond to a column in the dataframe. The first column names would then be RegistrationNumber, Name, StreetName, etc.</p>
<p>The pandas documentation for the <code>read_xml()</code> method mentions, under the <code>elems_only</code> parameter, that "by default, all child elements and non-empty text nodes are returned."</p>
<p>This is not the case if the structure contains nested children. Unlike Excel, only the first level of nodes is imported, not the nested ones.</p>
<p>I read a similar question about JavaScript, where it was necessary to "flatten" the XML before importing it into a dataframe. I also looked at the previous question <a href="https://stackoverflow.com/questions/72140531/flatten-xml-data-as-a-pandas-dataframe">Flatten XML data as a pandas dataframe</a>, but the solution provided there, using XSLT, is heavy and beyond my skills.</p>
<p>2 questions:</p>
<ol>
<li>Is there a pandas functionality that I missed which addresses these nested XMLs (as Excel does)?</li>
<li>If this is not the case, is there an easier way to flatten, or should I define the flat format manually (e.g. by parsing into a dict and redefining the keys, as suggested in the stackoverflow question mentioned above)?</li>
</ol>
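<p>For illustration of the kind of flattening I mean, here is a minimal sketch using only the standard-library <code>ElementTree</code> on a cut-down version of my XML (hedged: in this simplistic version, duplicate leaf tags across branches would overwrite each other, which a real solution would need to handle):</p>

```python
import xml.etree.ElementTree as ET
import pandas as pd

xml = """<Customers>
  <Customer>
    <Name>Test guy S.A</Name>
    <Address><City>France</City><PostalCode>75000</PostalCode></Address>
  </Customer>
</Customers>"""

def flatten(elem, record):
    # Recurse into nested nodes; every leaf node becomes one column.
    for child in elem:
        tag = child.tag.split('}')[-1]  # drop a namespace prefix if present
        if len(child):
            flatten(child, record)
        else:
            record[tag] = child.text

rows = []
for customer in ET.fromstring(xml):
    record = {}
    flatten(customer, record)
    rows.append(record)

df = pd.DataFrame(rows)
print(df.columns.tolist())
```

<p>On the sample above this yields one row with the columns <code>Name</code>, <code>City</code>, <code>PostalCode</code>.</p>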
| <python><pandas><xml><readxml> | 2023-08-06 22:38:50 | 1 | 309 | JCF |
76,847,883 | 6,407,935 | How to find the most frequent element of a list that is not in another list? | <p>Find the most frequent element of list <code>A</code> that is not in list <code>B</code>:</p>
<pre><code>A = [2, 3, 3, 4, 5, 6, 5, 7, 4, 5, 7, 9, 12, 12, 23, 3, 5, 6, 7, 10, 3, 4, 5, 6, 5, 20, 21, 22, 22]
B = [2, 4, 5, 22]
</code></pre>
<p>I'm looking for the most concise solution.</p>
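<p>For reference, one candidate I have (I don't claim it is the most concise possible) uses <code>collections.Counter</code> with the exclusion applied while counting:</p>

```python
from collections import Counter

A = [2, 3, 3, 4, 5, 6, 5, 7, 4, 5, 7, 9, 12, 12, 23, 3, 5, 6, 7, 10, 3, 4, 5, 6, 5, 20, 21, 22, 22]
B = [2, 4, 5, 22]

excluded = set(B)  # set lookup keeps the filter O(1) per element
most_frequent = Counter(x for x in A if x not in excluded).most_common(1)[0][0]
print(most_frequent)  # → 3
```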
| <python><list> | 2023-08-06 21:45:46 | 2 | 525 | Rebel |
76,847,846 | 2,270,422 | Write and Read protobuf message into dynamodb having a oneof field in Python | <p>I have a message defined as:</p>
<pre><code>message CanRead {
optional string s = 1;
}
message CanWrite {
optional string s = 1;
}
message MyMessage {
optional string name = 1;
oneof capability {
CanRead r = 2;
CanWrite w = 3;
}
}
</code></pre>
<p>Now, I have two questions before implementation and deciding to use Protobuffers:</p>
<ol>
<li>Can I save the <code>MyMessage</code> binary serialization into DynamoDB, deserialize it back, and figure out the <code>capability</code> field's value and type?</li>
<li>What if I want to save just the <code>capability</code> field as an attribute in my record? Can I read the field and deserialize it back to the correct type?</li>
</ol>
<p>PS: The choice of DynamoDB cannot be changed.</p>
| <python><amazon-dynamodb><protocol-buffers> | 2023-08-06 21:31:32 | 1 | 685 | masec |
76,847,669 | 13,916,049 | TypeError: read_csv() got an unexpected keyword argument 'error_bad_lines' | <p>I have multiple sets of files in the current directory and the unique ID of each file is the substring before the first <code>.</code> character. For example, the ID for <code>A001C007.log</code> is <code>A001C007</code>.</p>
<p>For each ID, I want to run the following experiment on the set of respective files.</p>
<p>Note: I do not have write access to the source code and therefore cannot amend <code>pd.read_csv(...,error_bad_lines=False)</code></p>
<pre><code>for log in *.log; do
    id="${log%.log}"
    annotate-gene-fusion --sv-file "${id}.CNN_SVs.5K_combined.txt" \
        --output-file "${id}.gene-fusions.txt" \
        --buff-size 10000 --skip-rows 1 --ensembl-release 93 --species human
done
</code></pre>
<p>Traceback:</p>
<pre><code>INFO:pyensembl.database:Creating database: /home/melchua/.cache/pyensembl/GRCh38/ensembl93/Homo_sapiens.GRCh38.93.gtf.db
INFO:pyensembl.database:Reading GTF from /home/melchua/.cache/pyensembl/GRCh38/ensembl93/Homo_sapiens.GRCh38.93.gtf.gz
Traceback (most recent call last):
File "/scg/apps/software/eaglec/0.1.9/bin/annotate-gene-fusion", line 97, in <module>
run()
File "/scg/apps/software/eaglec/0.1.9/bin/annotate-gene-fusion", line 61, in run
db.index()
File "/scg/apps/software/eaglec/0.1.9/lib/python3.8/site-packages/pyensembl/genome.py", line 275, in index
self.db.connect_or_create(overwrite=overwrite)
File "/scg/apps/software/eaglec/0.1.9/lib/python3.8/site-packages/pyensembl/database.py", line 291, in connect_or_create
return self.create(overwrite=overwrite)
File "/scg/apps/software/eaglec/0.1.9/lib/python3.8/site-packages/pyensembl/database.py", line 213, in create
df = self._load_gtf_as_dataframe(
File "/scg/apps/software/eaglec/0.1.9/lib/python3.8/site-packages/pyensembl/database.py", line 605, in _load_gtf_as_dataframe
df = read_gtf(
File "/scg/apps/software/eaglec/0.1.9/lib/python3.8/site-packages/gtfparse/read_gtf.py", line 208, in read_gtf
result_df = parse_gtf_and_expand_attributes(
File "/scg/apps/software/eaglec/0.1.9/lib/python3.8/site-packages/gtfparse/read_gtf.py", line 151, in parse_gtf_and_expand_attributes
result = parse_gtf(
File "/scg/apps/software/eaglec/0.1.9/lib/python3.8/site-packages/gtfparse/read_gtf.py", line 82, in parse_gtf
chunk_iterator = pd.read_csv(
TypeError: read_csv() got an unexpected keyword argument 'error_bad_lines'
</code></pre>
<p>I tried to use pandas version 1.4.0 but the error persisted.</p>
<pre><code>pip install --user 'pandas==1.4.0'
Collecting pandas==1.4.0
Using cached pandas-1.4.0-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (11.7 MB)
Requirement already satisfied: python-dateutil>=2.8.1 in /scg/apps/software/eaglec/0.1.9/envs/EagleC/lib/python3.8/site-packages (from pandas==1.4.0) (2.8.2)
Requirement already satisfied: pytz>=2020.1 in /scg/apps/software/eaglec/0.1.9/envs/EagleC/lib/python3.8/site-packages (from pandas==1.4.0) (2023.3)
Requirement already satisfied: numpy>=1.18.5 in /scg/apps/software/eaglec/0.1.9/envs/EagleC/lib/python3.8/site-packages (from pandas==1.4.0) (1.24.3)
Requirement already satisfied: six>=1.5 in /scg/apps/software/eaglec/0.1.9/envs/EagleC/lib/python3.8/site-packages (from python-dateutil>=2.8.1->pandas==1.4.0) (1.16.0)
Installing collected packages: pandas
Successfully installed pandas-1.3.5
</code></pre>
<p>Example files in the current directory (for <code>A001C007</code>, <code>A001C008</code>, and <code>A002C012</code> samples):</p>
<pre><code>A001C007.CNN_SVs.10K_highres.txt A002C010.CNN_SVs.5K_combined.txt
A001C007.CNN_SVs.10K.txt A002C010.CNN_SVs.5K.txt
A001C007.CNN_SVs.50K_highres.txt A002C010.log
A001C007.CNN_SVs.50K.txt A002C012.CNN_SVs.10K_highres.txt
A001C007.log A002C012.CNN_SVs.10K.txt
A001C008.CNN_SVs.10K_highres.txt A002C012.CNN_SVs.50K_highres.txt
A001C008.CNN_SVs.10K.txt A002C012.CNN_SVs.50K.txt
A001C008.CNN_SVs.50K_highres.txt A002C012.CNN_SVs.5K_combined.txt
A001C008.CNN_SVs.50K.txt A002C012.CNN_SVs.5K.txt
A001C008.log A002C012.log
</code></pre>
<p>Expected output:</p>
<p>As per the <a href="https://github.com/XiaoTaoWang/EagleC" rel="nofollow noreferrer">software</a>, a run for a single set of files with ID <code>SK-N-AS</code> would be:</p>
<pre><code>annotate-gene-fusion --sv-file SK-N-AS.CNN_SVs.5K_combined.txt \
--output-file SK-N-AS.gene-fusions.txt \
--buff-size 10000 --skip-rows 1 --ensembl-release 93 --species human
</code></pre>
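<p>For context: <code>error_bad_lines</code> (and <code>warn_bad_lines</code>) were removed in pandas 2.0 in favour of <code>on_bad_lines</code>. Since I cannot edit the calling code, one workaround I am considering is a small compatibility wrapper installed before the tool runs (a sketch — it assumes the library resolves <code>pandas.read_csv</code> through the module attribute at call time, which I have not verified for gtfparse):</p>

```python
import pandas as pd

_orig_read_csv = pd.read_csv

def _compat_read_csv(*args, **kwargs):
    # Translate the pre-2.0 keywords into the modern on_bad_lines argument.
    if "error_bad_lines" in kwargs:
        strict = kwargs.pop("error_bad_lines")
        kwargs.pop("warn_bad_lines", None)
        kwargs.setdefault("on_bad_lines", "error" if strict else "skip")
    return _orig_read_csv(*args, **kwargs)

pd.read_csv = _compat_read_csv
```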
| <python><pandas><bash> | 2023-08-06 20:24:56 | 1 | 1,545 | Anon |
76,847,651 | 13,860,719 | Topology-matching algorithm for finding 2D lattice in a 3D lattice | <p>I have a 3D lattice with a <em>unit cell</em> (i.e. the minimum repeating unit) of 16 points. Since it's a 3D lattice, it is periodic in all 3 dimensions (x, y, z). In detail, the unit cell looks like this:
<a href="https://i.sstatic.net/ke0Ej.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/ke0Ej.png" alt="enter image description here" /></a></p>
<p>The 3 <em>lattice vectors</em> <strong>a</strong><sub>1</sub>, <strong>a</strong><sub>2</sub>, <strong>a</strong><sub>3</sub> are</p>
<pre><code>bulk_vecs = [[14.56578026795, 0.0, 0.0 ], # a1
[0.0, 8.919682340494, 0.0 ], # a2
[7.282890133975, 2.973227446831, 4.20477857933]] # a3
</code></pre>
<p>The Cartesian coordinates of the 16 points are</p>
<pre><code>bulk_coords = [[ 0.00000000, 0.00000000, 0.00000000],
[ 10.9243352, 6.34420137, 2.66488775],
[ 18.2072253, 6.34420137, 2.66488775],
[ 12.7450577, 3.17210068, 1.33244387],
[ 10.6807662, 8.91968234, 0.42187384],
[ 16.1429338, 3.37097392, 1.19181926],
[ 19.7843789, 9.31742882, 3.29420855],
[ 9.34718165, 2.97322745, 1.47306849],
[ 14.8093492, 6.34420137, 2.24301390],
[ 9.10361267, 9.11855558, 3.43483316],
[ 3.88501404, 0.39774648, 0.14062462],
[ 7.03932116, 5.94645489, 2.52426313],
[ 12.9886267, 8.91968234, 3.57545778],
[ 7.28289013, 0.00000000, 0.00000000],
[ 5.46216760, 3.17210068, 1.33244387],
[ 16.3865028, 9.11855558, 3.43483316]]
</code></pre>
<p>Now I have a reference 2D lattice that contains 32 points in unit cell. Note that here "2D" means it is only periodic in the x and y dimensions, but can still have thickness in the z direction. Its unit cell looks like this (this is only a simple example, the unit cell can be of any shape, not necessarily cubic):</p>
<p><a href="https://i.sstatic.net/JPsq6.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/JPsq6.png" alt="enter image description here" /></a></p>
<p>The 3 lattice vectors are given by</p>
<pre><code>ref_vecs = [[8.1968234, 0., 0. ], # x-direction (periodic)
[0., 13.38535656, 0. ], # y-direction (periodic)
[0., 0., 7.7280392142]] # z-thickness (non-periodic)
</code></pre>
<p>The Cartesian coordinates of the 32 points are</p>
<pre><code>ref_coords = [[ 5.65852755, 9.84826406, 0.10024035],
[ 5.66769587, 3.14583318, 0.03049278],
[ 5.84383908, 6.16622816, 0.33687635],
[ 3.06154746, 11.5036515, 0.89497370],
[ 3.04533245, 4.74684988, 0.80482405],
[ 2.81714593, 7.85388222, 0.95654678],
[ 3.09409158, 1.66514480, 1.21902173],
[-0.03484920, 9.98872688, 1.91238251],
[ 0.07586672, 6.55689489, 2.02252838],
[ 0.15624468, 13.1218508, 2.07583316],
[ 0.30441248, 2.82022739, 2.21046382],
[ 5.46226670, 11.2187312, 2.95396399],
[ 5.83993074, 5.03326172, 3.11251461],
[ 5.40963696, 8.13117939, 3.23409626],
[ 5.45612978, 1.47718658, 3.32634157],
[ 2.72591886, 6.68506441, 3.86756751],
[ 2.91602855, 3.06649570, 3.90688764],
[ 2.95946887, 9.79329729, 3.96116732],
[ 3.11916042, 12.9792231, 4.17677610],
[ 0.35863087, 4.72533071, 4.75876360],
[ 0.31594175, 11.4937246, 4.67355664],
[ 0.03089551, 1.37091216, 4.91348486],
[ 0.39020723, 8.35223658, 5.11836201],
[ 5.45925351, 3.34155067, 5.87036690],
[ 5.62981527, 13.2212649, 5.93479016],
[ 5.64259931, 6.46196620, 5.93435713],
[ 5.83825398, 9.54653517, 6.08472919],
[ 2.80592452, 4.61964086, 6.76956275],
[ 3.15087591, 11.6754589, 6.98585963],
[ 2.68542186, 1.40293444, 7.10791606],
[ 2.73378423, 8.13529985, 7.12898617],
[ 0.03114729, -0.01057378, 7.78083813]]
</code></pre>
<p>This reference 2D lattice represents something similar to a "thick layer" extracted from a 3D lattice, and its 2 lattice vectors in the x and y directions are <strong>parallel</strong> to <strong>a</strong><sub>1</sub> and <strong>a</strong><sub>2</sub> (i.e. parallel to the xy-plane). Now I want to find a parallel 2D lattice inside the current 3D lattice that best matches this reference 2D lattice. The similarity should depend on the <strong>topology</strong> of the two 2D lattice unit cells. For example, the reference unit cell could be thought of as applying small perturbations to the fractional coordinates and lattice parameters of the best-matched 2D lattice unit cell. Below is an illustration of the 2D lattice extraction from the 3D lattice:
<a href="https://i.sstatic.net/7uPss.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/7uPss.jpg" alt="enter image description here" /></a></p>
<p>To search in the space of all possible parallel 2D lattice unit cells in a 3D lattice space is difficult, especially considering that I might need to expand the 3D lattice unit cell to search in the whole space (but how much should I expand?). I would like to know if there's any algorithm that can find the best topology-matching 2D lattice unit cell in the whole 3D lattice space relatively fast. Specifically, I want to obtain the lattice vectors and the 32 Cartesian coordinates of the best-matched 2D lattice unit cell. Any suggestions or even ideas are welcome.</p>
<h1>Edit:</h1>
<p>OK I will try to explain the question in the simplest way possible. As you might have already guessed, I am working on crystallography. The 3D lattice represents a bulk crystal structure (i.e. periodic atom arrangement in all 3 dimensions) and a 2D lattice represents a surface structure (i.e. periodic atom arrangement in x and y directions, but has a non-periodic thick atom-layer in z direction). Obviously, a surface structure can be obtained by cutting the bulk structure.</p>
<p>Let's use a simple Si bulk structure (downloaded from <a href="https://next-gen.materialsproject.org/materials/mp-149?chemsys=Si" rel="nofollow noreferrer">MaterialsProject</a>) as an example, it has a cubic unit cell with 3 lattice vectors of</p>
<pre><code>bulk_vecs = [[5.44370237, 0.0, 0.0 ],
[0.0, 5.44370237, 0.0 ],
[0.0, 0.0, 5.44370237]]
</code></pre>
<p>and atomic coordinates of</p>
<pre><code>bulk_coords = [[ 1.27013575, 0.03307657, -0.02898786],
[ 0.02039057, 0.02332397, 0.79145084],
[ 0.0137774, 1.18012325, 1.85879242],
[ 1.22267376, 1.27420799, 2.72875596],
[ 1.20645874, -0.0488289, 3.52879259],
[ 0.08487146, -0.06430577, 4.44147333]]
</code></pre>
<p>Diamond has the same type of crystal structure as Si, but slightly different lattice parameters and atomic coordinates. Now let's say I have a reference diamond(001) surface structure (the <em>Miller index</em> (001) basically means the surface plane is parallel to the xy-plane). It has a 6-atom unit cell, with 3 lattice vectors of</p>
<pre><code>ref_vecs = [[2.5178270133578393, 0.0, 0.0 ],
[0.0, 2.5178270133578393, 0.0 ],
[0.0, 0.0, 5.341117665]]
</code></pre>
<p>and atomic coordinates of</p>
<pre><code>ref_coords = [[ 1.27013575, 0.03307657, -0.02898786],
[ 0.02039057, 0.02332397, 0.79145084],
[ 0.0137774, 1.18012325, 1.85879242],
[ 1.22267376, 1.27420799, 2.72875596],
[ 1.20645874, -0.0488289, 3.52879259],
[ 0.08487146, -0.06430577, 4.44147333]]
</code></pre>
<p>Since diamond has the same type of crystal structure as Si, one should be able to find a Si(001) surface unit cell from this reference diamond(001) surface unit cell. From what I could think of, a simple topology-matching algorithm could be to find a Si(001) surface unit cell in the bulk Si space, so that the sum of the fractional coordinate differences w.r.t. the reference diamond(001) surface unit cell is minimized. However, I am not sure how to search in such a large space, especially when the reference surface unit cell is large, one probably needs to expand the bulk unit cell by a lot, which means the search space becomes even larger.</p>
<p>For benchmarking purposes, I will provide the expected solution for the best-matched Si(001) surface unit cell. It should have lattice vectors close to</p>
<pre><code>sol_vecs = [[3.8492788605882797, 0.0, 0.0 ],
[0.0, 3.8492788605882797, 0.0 ],
[0.0, 0.0, 8.165553555]]
</code></pre>
<p>and atomic coordinates close to</p>
<pre><code>sol_coords = [[ 1.92463943, 0.0, 0.0 ],
[ 0.0, 0.0, 1.36092562],
[ 0.0, 1.92463943, 2.72185121],
[ 1.92463943, 1.92463943, 4.08277680],
[ 1.92463943, 0.0, 5.44370240],
[ 0.0, 0.0, 6.80462799]]
</code></pre>
<p><a href="https://i.imgur.com/AeNAk1f.jpg" rel="nofollow noreferrer">This</a> is an illustration of this problem in terms of crystallography (cannot upload to SO for some reason). As you can see in the figure, the Si(001) surface unit cell solution that I provided matches well with the reference diamond(001) surface unit cell.</p>
<p>Hint: working on the reciprocal space might be much easier than working on the real space.</p>
<h1>Final edit:</h1>
<p>OK so let me rephrase the question in the most clear and simple way: <strong>How to find a cell in the whole 3D lattice space where it contains the same number of points as the reference 2D lattice unit cell and has the minimum MAE/RMSE between the fractional coordinates of the two cells (i.e. Cartesian coordinates normalized by cell size)? The information we have is (1) the 3D lattice unit cell (2) the reference 2D lattice unit cell (3) the 2D lattice is parallel to the xy-plane of the 3D lattice</strong></p>
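<p>As a small building block for the fractional-coordinate comparison above: converting Cartesian coordinates to fractional ones is just a linear solve against the lattice vectors. A minimal NumPy sketch (assuming, as in the arrays above, that lattice vectors are stored as rows; the toy lattice here is made up for illustration):</p>

```python
import numpy as np

def to_fractional(cart, vecs):
    # cart = frac @ vecs  (rows of vecs are the lattice vectors)
    # =>  frac = cart @ inv(vecs)
    return np.asarray(cart) @ np.linalg.inv(np.asarray(vecs))

vecs = np.array([[2.0, 0.0, 0.0],
                 [0.0, 4.0, 0.0],
                 [1.0, 0.0, 3.0]])
frac = to_fractional([[3.0, 2.0, 3.0]], vecs)
print(frac)          # fractional coordinates of the test point
print(frac @ vecs)   # round-trips back to the Cartesian input
```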
| <python><algorithm><performance><math><pattern-matching> | 2023-08-06 20:18:38 | 2 | 2,963 | Shaun Han |
76,847,567 | 604,388 | How to perform OOB migration on PC with no browser? | <p>Previously I used the following code to authorise the user:</p>
<pre><code>from apiclient.discovery import build
from apiclient.errors import HttpError
from apiclient.http import MediaFileUpload
from oauth2client.client import flow_from_clientsecrets
from oauth2client.file import Storage
from oauth2client.tools import argparser, run_flow
def get_authenticated_service(self, args):
flow = flow_from_clientsecrets(os.path.dirname(os.path.abspath(__file__)) + "/" + CLIENT_SECRET_FILE,
scope='https://www.googleapis.com/auth/photoslibrary',
message='client_secrets files is missing')
credentials = None
storage = Storage(os.path.dirname(os.path.abspath(__file__)) + "/" + "token-oauth2.json")
credentials = storage.get()
if credentials is None or credentials.invalid:
credentials = run_flow(flow, storage, args)
self.token = credentials.access_token
return build('photoslibrary', 'v1',
http=credentials.authorize(httplib2.Http()))
</code></pre>
<p>(googleapiclient version is 1.6.2)</p>
<p>It was generating the url like <code>https://accounts.google.com/o/oauth2/auth?scope=https%3A%2F%2Fwww.googleapis.com%2Fauth%2Fphotoslibrary&redirect_uri=urn%3Aietf%3Awg%3Aoauth%3A2.0%3Aoob&response_type=code&client_id=x.apps.googleusercontent.com&access_type=offline</code>, so I was able to copy it to another device to perform authorization.</p>
<p>Now it returns <code>400: invalid_request. The out-of-band (OOB) flow has been blocked in order to keep users secure</code>.</p>
<p>I am reading <a href="https://developers.google.com/identity/protocols/oauth2/resources/oob-migration" rel="nofollow noreferrer">the migration doc</a>, but see no solution for my case. What is the current authorization process?</p>
| <python><python-2.7><google-api><google-api-python-client> | 2023-08-06 19:48:33 | 0 | 20,489 | LA_ |
76,847,515 | 3,947,414 | Replace capture group with line number (sed? awk? python?) | <p>I have source code in which I would like to insert filename:linenumber into various logging statements, such as:</p>
<pre><code>logger.e("", "an error happened");
</code></pre>
<p>The first string literal needs to be replaced. The string literal might not be blank. If I'm editing a file on which I've already run my replacement the line might be:</p>
<pre><code>logger.e("filename:123", "an error happened, this is the line number the last time I edited the file.")
</code></pre>
<p>The plan is to create a text processor as part of my build pipeline that does a search-and-replace for logging statements and inserts filename:linenumber. I originally planned to use awk, but having beaten myself up trying to get that to work, I will be content with using any suitable tool.</p>
<p>I have created a regex intending to capture the text to be replaced in a capture group:</p>
<pre><code>logger\.[tdiwe]\("(.*?)"
</code></pre>
<p>This should (I think) match text starting with 'logger', then a dot, then any of t, d, i, w, or e (trace/debug/info/warn/error), then a left paren, then a double-quote, then zero or more characters, then a double-quote. I believe, but am not 100% certain, that the <code>"(.*?)"</code> part of the regex puts the characters between the first set of double quotes into a capture group.</p>
<p>However, after poking at this in various tools, it seems that extracting capture groups is straightforward, but replacing them is not.</p>
<p>I did get some Python code to do what I want but this seems excessive and I've spent enough time banging at this problem that I want someone to show me the easier way.</p>
<pre><code>with open(infilename) as infile:
with open(outfilename, "w") as outfile:
linenumber = 1
for line in infile:
rslt = re.search(r'logger\.[tdiwe]\("([^"]*)', line)
if rslt:
outfile.write(line[:rslt.span(1)[0]] + f"filename:{str(linenumber)}" + line[rslt.span(1)[1]:])
else:
outfile.write(line)
linenumber += 1
</code></pre>
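<p>For reference, this is roughly the shape I am after: <code>re.sub</code> accepts a callable replacement, which avoids the manual span slicing (the file name and sample lines below are made up purely for illustration):</p>

```python
import re

pattern = re.compile(r'(logger\.[tdiwe]\(")[^"]*(")')

filename = "example.c"  # hypothetical file name, for illustration only
lines = [
    'logger.e("", "an error happened");',
    'int x = 1;',
    'logger.w("old:99", "watch out");',
]

out = [
    # n=n binds the current line number into the lambda's default argument
    pattern.sub(lambda m, n=n: f'{m.group(1)}{filename}:{n}{m.group(2)}', line)
    for n, line in enumerate(lines, start=1)
]
print(out[0])  # → logger.e("example.c:1", "an error happened");
```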
| <python><regex> | 2023-08-06 19:36:35 | 1 | 968 | Nevo |
76,847,464 | 872,130 | How can I increase the precision of the calculations in this python code | <p>I am trying to reproduce a plot on page 81 of Math Letters Jr. (which is a great recreational math book, by the way). The plot is created using a pair of simple functions that are applied iteratively (i.e. the output of each loop is the input for the following loop). Plotting the output x, y point in each loop should produce a beautiful complex point cloud. However, when I implemented the code below, it gets stuck cycling between 4 identical points after the fiftieth iteration. I assume this is due to rounding caused by insufficient precision in my variable definitions. How can I improve on this?</p>
<pre><code>import matplotlib.pyplot as plt
x = 0
y = 0
vecx = []
vecy = []
for i in range(5000):
x = 1.3 + 0.3*x + 0.6*x*y - 0.6*y - y**2
y = 0.1 - 0.7*x + 0.5*(x**2) - 0.8*x*y + 0.1*y - 0.6*(y**2)
vecx.append(x)
vecy.append(y)
plt.plot(vecx, vecy, 'b.')
</code></pre>
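<p>One thing I am unsure about: the loop above feeds the freshly computed <code>x</code> into the <code>y</code> update. If the book intends a simultaneous update (common for iterated 2-D maps), tuple assignment evaluates both right-hand sides from the old values. A sketch of that variant, hedged since I cannot verify which convention the book uses:</p>

```python
import math

x = y = 0.0
vecx, vecy = [], []
for _ in range(5000):
    # Both right-hand sides see the *old* (x, y) before rebinding.
    x, y = (1.3 + 0.3*x + 0.6*x*y - 0.6*y - y*y,
            0.1 - 0.7*x + 0.5*x*x - 0.8*x*y + 0.1*y - 0.6*y*y)
    if not (math.isfinite(x) and math.isfinite(y)):
        break  # orbit escaped to infinity; stop before it turns into NaNs
    vecx.append(x)
    vecy.append(y)
```

<p>Starting from (0, 0), the first simultaneous step gives x = 1.3 and y = 0.1, since every other term vanishes.</p>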
| <python><precision> | 2023-08-06 19:20:44 | 3 | 341 | Dessie |
76,847,340 | 6,051,652 | convert index to period and back to timestamp | <p>I'm trying to write a function that takes a dataframe with datetimeindex, converts the index to period, does something, and then converts the dataframe back to timestamp.</p>
<p>I want to make sure that the returned index does not change when I convert to period and then back to timestamp.</p>
<p>This is what I have so far:</p>
<pre><code>def generate_df(freq):
    dates = pd.date_range("2023-01-01", periods=10, freq=freq)
    return pd.DataFrame({'values': range(10)}, index=dates)


def func(df, freq):
    df_period = df.to_period(freq=freq)
    # do something to df_period
    df_timestamp = df_period.to_timestamp(freq=freq)
    # check that index is the same
    assert df.index.equals(df_timestamp.index), "index is not the same"
df = generate_df('D')
func(df,'D') # works
df = generate_df('M')
func(df, 'M') # works
df = generate_df('MS')
func(df, 'MS') # does not work
</code></pre>
<p>There are freq parameters that do not work for to_period. For example: 'MS', 'BQ'.</p>
<p>Is there a way to get the list of supported offset aliases that do not work with to_period? Any suggestions for a workaround for those aliases?</p>
<p>Edit:
I followed the recommendations in the links shared in the comments.</p>
<pre><code>dates = pd.date_range("2023-01-01", periods=10, freq='W')
t_dates = dates.to_period().to_timestamp()
</code></pre>
<p>It works for the most part but at least for freq='W' it does not work.</p>
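<p>For what it's worth, aliases like 'MS' are anchored point offsets (month start) rather than spans, which is presumably why <code>to_period</code> rejects them. One possible workaround, shown here as a sketch rather than a general solution, is to map such an alias to the matching span frequency and use the <code>how</code> argument of <code>to_timestamp</code>:</p>

```python
import pandas as pd

dates = pd.date_range("2023-01-01", periods=10, freq="MS")

# 'MS' is not a valid period frequency, but monthly periods converted back
# with how="start" land on the first day of each month again
roundtrip = dates.to_period("M").to_timestamp(how="start")
assert dates.equals(roundtrip)
```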
| <python><pandas> | 2023-08-06 18:47:16 | 0 | 1,159 | Eyal S. |
76,847,246 | 4,426,041 | How to save and load a Peft/LoRA Finetune (star-chat) | <p>I am trying to further finetune <code>Starchat-Beta</code>, save my progress, load my progress, and continue training. But whatever I do, it doesn't come together. Whenever I load my progress and continue training, my loss starts back from <em>zero</em> (3.xxx in my case).
I'll run you through my code and then the problem.</p>
<pre class="lang-py prettyprint-override"><code>tokenizer = AutoTokenizer.from_pretrained(BASEPATH)

model = AutoModelForCausalLM.from_pretrained(
    "/notebooks/starbaseplus"
    ...
)
# I get both the Tokenizer and the Foundation model from the starbaseplus repo (which I have locally).

peftconfig = LoraConfig(
    "/notebooks/starchat-beta"
    base_model_name_or_path = "/notebooks/starbaseplus",
    ...
)
model = get_peft_model(model, peftconfig)
# All Gucci so far, the model and the LoRA fine-tune are loaded from the starchat-beta repo (also local).

# important for later:
print_trainable_parameters(model)
# trainable params: 306 Million || all params: 15 Billion || trainable: 1.971%

trainer = Trainer(
    model=model,
    ...
)
trainer.train()
# I train, loss drops from 3.xx to 1.xx.

# Now, either I follow the HuggingFace docs:
model.save_pretrained("./huggingface_model")
# -> saves /notebooks/huggingface_model/adapter_model.bin 16mb.

# or an alternative I found on SO:
trainer.save_model("./torch_model")
# -> saves /notebooks/torch_model/pytorch_model.bin 60gb.
</code></pre>
<p>I have two alternatives saved to disk. Let's restart and try each of these approaches.</p>
<p>First the <a href="https://huggingface.co/docs/peft/quicktour#peftmodel" rel="nofollow noreferrer">huggingface docs</a> approach:
I now have three sets of weights.</p>
<ol>
<li>the foundation model - starbase plus</li>
<li>the chat finetune - starchat-beta</li>
<li>the 16mb saved bin - adapter_model.bin</li>
</ol>
<p>But I only have two opportunities to load weights.</p>
<ol>
<li><code>AutoModelForCausalLM.from_pretrained</code></li>
<li>either <code>get_peft_model</code> or <code>PeftModel.from_pretrained</code></li>
</ol>
<p>Neither works; training restarts at a loss of 3.x.</p>
<p>Second approach:
Load the 60 GB file instead of the <em>old</em> starchat-beta repo model.
<code>get_peft_model("/notebooks/torch_model/pytorch_model.bin", peftconfig)</code></p>
<p>Also doesn't work. The <code>print_trainable_parameters(model)</code> drops to <code>trainable: 0.02%</code> and training restarts at a loss of 3.x</p>
| <python><pytorch><huggingface-transformers><torch> | 2023-08-06 18:23:28 | 2 | 395 | Finn Luca Frotscher |
76,847,159 | 2,895,197 | Is it possible to open browser with webbrowser.open and somehow take over by selenium? | <p>I'm wondering if it's possible to use <code>webbrowser.open()</code> method, open the browser, get its handler (?), and use it in
<code>driver = webdriver.Chrome()</code> command somehow (webdriver is from selenium)?</p>
<p>Is it feasible at all?</p>
<p>I'm using Python.</p>
| <python><google-chrome><selenium-webdriver> | 2023-08-06 17:59:50 | 1 | 1,813 | psmith |
76,847,003 | 22,212,435 | Why not assigned images are not creating, but usual widgets can exist even without assign to variable? | <p>Let's start with a code example:</p>
<pre><code>import tkinter as tk
root = tk.Tk()
l = tk.Label() # default name
l1 = tk.Label(name="assign1")
l2 = tk.Label(name="assign2")
l3 = tk.Label(name="assign3")
tk.Label(name="not_assign1")
tk.Label(name="not_assign2")
tk.Label() # default name
tk.Label(name="not_assign4")
print("\n".join(root.children))
root.mainloop()
</code></pre>
<p>And output is:</p>
<p><a href="https://i.sstatic.net/veTKp.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/veTKp.png" alt="enter image description here" /></a></p>
<p>As can be seen, every label has been created, even those that were not assigned to any variable. That happens with any widget, so assigning it to a variable is optional. That is OK. Then I decided to do something similar with PhotoImages:</p>
<pre><code>import tkinter as tk
root = tk.Tk()
im = tk.PhotoImage() # default name
im1 = tk.PhotoImage(name="assign1")
im2 = tk.PhotoImage(name="assign2")
im3 = tk.PhotoImage(name="assign3")
tk.PhotoImage(name="not_assign1")
tk.PhotoImage(name="not_assign2")
tk.PhotoImage() # default name
tk.PhotoImage(name="not_assign4")
print([im for im in root.image_names() if ":" not in im]) # check ' : ' not to show default icons
root.mainloop()
</code></pre>
<p>Output is <code>['assign3', 'pyimage1', 'assign1', 'assign2']</code>. So no not_assign images at all.</p>
<p>There is actually no need for me to use unassigned images. I just want to know why this happens.</p>
<p>P.S. This question seems quite trivial, so sorry, if there are already such questions exist.</p>
| <python><tkinter> | 2023-08-06 17:22:01 | 1 | 610 | Danya K |
76,846,746 | 12,908,701 | Unable to run the server after export on another computer (migrations worked) | <p>I am lost with what is probably a very simple issue configuring a Django / MySQL project. I have a working project on a computer A, which I just want to be able to use with a computer B. On computer A, I have an Eclipse project, using PyDev and Django, a local virtual environment, and a local database running with MySQL.</p>
<p>What I have done:</p>
<ul>
<li>I exported the virtual environment (including Django) and created the same one on computer B;</li>
<li>I used Gitlab in order to get all the files with the code ;</li>
<li>I installed MySQL, and created a database with the same name, and a user with the same information (id / pwd).</li>
</ul>
<p>The migrations went well, the tables are created like I expected. However, I cannot find a way to run the server. When I try, I get multiple errors saying one (and only one) of the Apps is not loaded yet, then another error, which seems to be related to the database. Here are some of the messages:</p>
<pre><code>Traceback (most recent call last):
  File "/home/francois/snap/eclipse/67/amd64/plugins/org.python.pydev.core_10.2.1.202307021217/pysrc/_pydev_runfiles/pydev_runfiles.py", line 460, in __get_module_from_str
    mod = __import__(modname)
  File "/home/francois/eclipse-workspace/iou/members/forms.py", line 2, in <module>
    from django.contrib.auth.forms import UserCreationForm, AuthenticationForm, UsernameField
  File "/home/francois/anaconda3/envs/iou_env/lib/python3.10/site-packages/django/contrib/auth/forms.py", line 10, in <module>
    from django.contrib.auth.models import User
  File "/home/francois/anaconda3/envs/iou_env/lib/python3.10/site-packages/django/contrib/auth/models.py", line 3, in <module>
    from django.contrib.auth.base_user import AbstractBaseUser, BaseUserManager
  File "/home/francois/anaconda3/envs/iou_env/lib/python3.10/site-packages/django/contrib/auth/base_user.py", line 48, in <module>
    class AbstractBaseUser(models.Model):
  File "/home/francois/anaconda3/envs/iou_env/lib/python3.10/site-packages/django/db/models/base.py", line 108, in __new__
    app_config = apps.get_containing_app_config(module)
  File "/home/francois/anaconda3/envs/iou_env/lib/python3.10/site-packages/django/apps/registry.py", line 253, in get_containing_app_config
    self.check_apps_ready()
  File "/home/francois/anaconda3/envs/iou_env/lib/python3.10/site-packages/django/apps/registry.py", line 136, in check_apps_ready
    raise AppRegistryNotReady("Apps aren't loaded yet.")
django.core.exceptions.AppRegistryNotReady: Apps aren't loaded yet.
ERROR: Module: members.forms could not be imported (file: /home/francois/eclipse-workspace/iou/members/forms.py).
Traceback (most recent call last):
  File "/home/francois/snap/eclipse/67/amd64/plugins/org.python.pydev.core_10.2.1.202307021217/pysrc/_pydev_runfiles/pydev_runfiles.py", line 460, in __get_module_from_str
    mod = __import__(modname)
  File "/home/francois/eclipse-workspace/iou/members/migrations/0001_initial.py", line 3, in <module>
    import django.contrib.auth.models
  File "/home/francois/anaconda3/envs/iou_env/lib/python3.10/site-packages/django/contrib/auth/models.py", line 3, in <module>
    from django.contrib.auth.base_user import AbstractBaseUser, BaseUserManager
  File "/home/francois/anaconda3/envs/iou_env/lib/python3.10/site-packages/django/contrib/auth/base_user.py", line 48, in <module>
    class AbstractBaseUser(models.Model):
  File "/home/francois/anaconda3/envs/iou_env/lib/python3.10/site-packages/django/db/models/base.py", line 108, in __new__
    app_config = apps.get_containing_app_config(module)
  File "/home/francois/anaconda3/envs/iou_env/lib/python3.10/site-packages/django/apps/registry.py", line 253, in get_containing_app_config
    self.check_apps_ready()
  File "/home/francois/anaconda3/envs/iou_env/lib/python3.10/site-packages/django/apps/registry.py", line 136, in check_apps_ready
    raise AppRegistryNotReady("Apps aren't loaded yet.")
django.core.exceptions.AppRegistryNotReady: Apps aren't loaded yet.
ERROR: Module: members.migrations.0001_initial could not be imported (file: /home/francois/eclipse-workspace/iou/members/migrations/0001_initial.py).
done.
Traceback (most recent call last):
  File "/home/francois/snap/eclipse/67/amd64/plugins/org.python.pydev.core_10.2.1.202307021217/pysrc/runfiles.py", line 268, in <module>
    main()
  File "/home/francois/snap/eclipse/67/amd64/plugins/org.python.pydev.core_10.2.1.202307021217/pysrc/runfiles.py", line 95, in main
    return pydev_runfiles.main(configuration) # Note: still doesn't return a proper value.
  File "/home/francois/snap/eclipse/67/amd64/plugins/org.python.pydev.core_10.2.1.202307021217/pysrc/_pydev_runfiles/pydev_runfiles.py", line 857, in main
    PydevTestRunner(configuration).run_tests()
  File "/home/francois/snap/eclipse/67/amd64/plugins/org.python.pydev.core_10.2.1.202307021217/pysrc/_pydev_runfiles/pydev_runfiles.py", line 780, in run_tests
    get_django_test_suite_runner()(run_tests).run_tests([])
  File "/home/francois/anaconda3/envs/iou_env/lib/python3.10/site-packages/django/test/runner.py", line 723, in run_tests
    databases = self.get_databases(suite)
  File "/home/francois/anaconda3/envs/iou_env/lib/python3.10/site-packages/django/test/runner.py", line 702, in get_databases
    databases = self._get_databases(suite)
  File "/home/francois/anaconda3/envs/iou_env/lib/python3.10/site-packages/django/test/runner.py", line 690, in _get_databases
    for test in suite:
TypeError: 'NoneType' object is not iterable
</code></pre>
<p><strong>edit</strong> : here is the code for the User model:</p>
<pre><code>from django.db import models
from django.contrib.auth.models import AbstractUser


class Group(models.Model):
    name = models.CharField(max_length=100)

    def __str__(self):
        return self.name


class User(AbstractUser):
    groups = models.ManyToManyField(Group)
    current_group = models.ForeignKey(Group, on_delete=models.SET_NULL, blank=True, null=True, related_name="current_group")
</code></pre>
<p><strong>Edit 2</strong>: actually, it seems that most of the errors are related to importing the Django User class; for example, the error related to forms.py points to line 2, which is this one:</p>
<pre><code>from django.contrib.auth.forms import UserCreationForm, AuthenticationForm, UsernameField
</code></pre>
<p>Note that I have the following line in my settings.py, since I use a custom User model; however, I tried to delete this line and the issue persists:</p>
<pre><code>AUTH_USER_MODEL = 'members.User'
</code></pre>
<p>I really don't understand what is happening here, so I would be grateful if anyone can help. Many thanks!</p>
| <python><django><configuration> | 2023-08-06 16:14:21 | 1 | 563 | Francois51 |
76,846,739 | 4,434,941 | Scraping a dynamic webpage where text is loaded asyncronously | <p>I have a scraper for HBR. Recently, HBR seems to have changed their site API which seems to have broken my script</p>
<p>Previously, when I used the following script, I could get the html for the entire article</p>
<pre><code>import requests
from parsel import Selector  # Selector import assumed; could also be scrapy.Selector

url = 'https://hbr.org/2023/08/mastering-the-art-of-the-request'

headers = {
    "User-Agent": "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_13_2) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/64.0.3282.167 Safari/537.36"
}

# Send a GET request to the URL with custom headers
response = requests.get(url, headers=headers)

# Check if the request was successful (status code 200)
if response.status_code != 200:
    print("Error:", response.status_code)

sel = Selector(text=response.text)
art = sel.xpath('//div[@class="article-body standard-content"]//p').extract()
</code></pre>
<p>Now, when I use this code, it only seems to be pulling the first p tag, because the requests object is returning a subset of the html. I tried inspecting the network tab to see if there is any JSON being loaded asynchronously through an API, but I can't see any such call.</p>
<p>I would appreciate any guidance on how to fix this.</p>
<p>Thanks</p>
| <python><web-scraping><python-requests> | 2023-08-06 16:12:56 | 1 | 405 | jay queue |
76,846,613 | 1,942,868 | With FileField, opening file before inserting database | <p>I have <code>models</code>, <code>serializer</code> and viewsets with <code>FileField</code></p>
<pre><code>class Drawing(SafeDeleteModel):
    drawing = f.FileField(upload_to='uploads/')


class DrawingSerializer(ModelSerializer):
    drawing = serializers.FileField()


class DrawingViewSet(viewsets.ModelViewSet):
    queryset = m.Drawing.objects.all()
    serializer_class = s.DrawingSerializer

    def create(self, request, *args, **kwargs):
        serializer = self.get_serializer(data=request.data)
        serializer.is_valid(raise_exception=True)
        print("serializer request:", request.data['drawing'])  # this is myfile.pdf
        self.perform_create(serializer)
        print("serializer serializer", serializer.data['drawing'])
        # myfile_ccA3TjY.pdf
        try:
            doc = fitz.open(file_path)  # open to check if it is a correct pdf
        except:
            raise Exception("file is not valid pdf")
</code></pre>
<p>When uploading a file, <code>request.data['drawing']</code> is <code>myfile.pdf</code>, while <code>serializer.data['drawing']</code> is the real stored name <code>myfile_ccA3TjY.pdf</code> (when a file with the same name already exists).</p>
<p>So, to open the file with <code>fitz</code>, I need to know the real name <code>myfile_ccA3TjY.pdf</code>.</p>
<p>I have to call <code>self.perform_create</code> to learn the real file name, but that creates the row. However, I want to cancel the database insert when the file is not a valid PDF.</p>
<p>With this code, the row is inserted into the database even when the file is not a valid PDF.</p>
<p>Is there any way to open the file without calling <code>self.perform_create</code>?</p>
| <python><django> | 2023-08-06 15:39:28 | 1 | 12,599 | whitebear |
76,846,537 | 206,253 | Assigning a persistent and unique ID to instances of a python class | <p>There are a number of threads on SO dealing with a similar question, e.g.</p>
<p><a href="https://stackoverflow.com/questions/58101476/how-to-create-a-unique-and-incremental-id-in-a-python-class">How to create a unique and incremental ID in a Python Class</a></p>
<p><a href="https://stackoverflow.com/questions/8319910/auto-incrementing-ids-for-class-instances">Auto-incrementing IDs for Class Instances</a></p>
<p><a href="https://stackoverflow.com/questions/1045344/how-do-you-create-an-incremental-id-in-a-python-class">How do you create an incremental ID in a Python Class</a></p>
<p>The solution in each of these threads is to use itertools.count() to assign a unique ID to a class attribute.</p>
<p>My issue with this solution is that this guarantees a unique ID within a single interpreter session. But what about if you want to persist/serialize the instances you created and then use them in another interpreter session? The itertools.count() would start from zero again so you may create instances with the same ID.</p>
<p>The only thing that occurs to me is to bind the auto-increment to some time stamp (e.g. in milliseconds or for my application even in seconds as the instances would be created slowly). It is still not guaranteed to be unique on different environments but, at least, in the same virtual environment it should be unique.</p>
<p>This question may seem a bit rhetorical, but I welcome opinions on this as well as suggestions about possible alternatives.</p>
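<p>For comparison, here is a sketch of the options being weighed (the class name is hypothetical): a session-local counter, a UUID that stays unique across interpreter sessions, and a roughly time-ordered hybrid:</p>

```python
import itertools
import time
import uuid

class Node:
    _counter = itertools.count()

    def __init__(self):
        # unique only within one interpreter session:
        self.seq_id = next(Node._counter)
        # unique across sessions and machines, but not ordered:
        self.uid = uuid.uuid4().hex
        # roughly time-ordered and unique within one process:
        self.ts_id = f"{time.time_ns()}-{self.seq_id}"

a, b = Node(), Node()
assert a.uid != b.uid and a.ts_id != b.ts_id
```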
| <python><auto-increment> | 2023-08-06 15:22:18 | 0 | 3,144 | Nick |
76,846,418 | 10,714,490 | Twitter (X) login using Selenium triggers anti-bot detection | <p>I am currently working on automating the login process for my Twitter account using Python and Selenium.</p>
<p>However, I'm facing an issue where Twitter's anti-bot measures seem to detect the automation and <strong>immediately redirect me to the homepage</strong> when clicking the <strong>next button</strong>.</p>
<p><a href="https://i.sstatic.net/Hul0g.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Hul0g.png" alt="Next button" /></a></p>
<p>I have attempted to use <code>send_keys</code> and ActionChains to create more human-like interactions, but the problem persists.</p>
<p>Here's a simplified code snippet that illustrates my current approach:</p>
<pre class="lang-py prettyprint-override"><code># imports...
driver.get(URLS.login)
username_input = driver.find_element(By.NAME, 'text')
username_input.send_keys(username)
next_button = driver.find_element(By.XPATH, '//div[@role="button"]')
# These attempts all failed and return to the homepage
next_button.click()
next_button.send_keys(Keys.ENTER)
ActionChains(driver).move_to_element(next_button).click().perform()
</code></pre>
<p>What's weird is that besides manually clicking the next button, <strong>executing a <code>click</code> in the console</strong> also works.</p>
<p>I suspect that my automation attempts are still being detected by Twitter's security mechanisms, but I'm unsure about the root cause or how to bypass it successfully.</p>
| <python><selenium-webdriver><twitter><automation><anti-bot> | 2023-08-06 14:50:49 | 1 | 668 | KumaTea |
76,846,395 | 7,657,180 | Using variable for WebKitFormBoundary makes request invalid | <p>I have the following python code that posts a request successfully</p>
<pre><code>import requests
cookies = {'BNES_JSESSIONID': 'R2aVMfNrT78PyhSgyqW6Kj7BlBMOR7cQqB+Tq7wGQl34xSno1KKqpymu/ZLhny2lshfxmOnZiFy8Zbvw+vcSPCPOAALQXhOdFcYX61dU7WDszMVLkwHKehvnHfmubbwQG5BhwWAjqWJlKz22mi985kyPwyYEwx8y'}
headers = {'Content-Type': 'multipart/form-data; boundary=----WebKitFormBoundaryB6r6VIYgOSHWqj5I'}
data = '------WebKitFormBoundaryB6r6VIYgOSHWqj5I\r\nContent-Disposition: form-data; name="selectAser"\r\n\r\n211053300\r\n------WebKitFormBoundaryB6r6VIYgOSHWqj5I\r\nContent-Disposition: form-data; name="phoneNo"\r\n\r\n55075052\r\n------WebKitFormBoundaryB6r6VIYgOSHWqj5I\r\nContent-Disposition: form-data; name="selectAsed"\r\n\r\n120987470-1\r\n------WebKitFormBoundaryB6r6VIYgOSHWqj5I\r\nContent-Disposition: form-data; name="comReg"\r\n\r\n\r\n------WebKitFormBoundaryB6r6VIYgOSHWqj5I\r\nContent-Disposition: form-data; name="bankType"\r\n\r\n1850\r\n------WebKitFormBoundaryB6r6VIYgOSHWqj5I\r\nContent-Disposition: form-data; name="attachments"; filename=""\r\nContent-Type: application/octet-stream\r\n\r\n\r\n------WebKitFormBoundaryB6r6VIYgOSHWqj5I\r\nContent-Disposition: form-data; name="searchType"\r\n\r\n0745\r\n------WebKitFormBoundaryB6r6VIYgOSHWqj5I--\r\n'
response = requests.post('https://eservices.gov./', cookies=cookies, headers=headers, data=data)
print(response.text)
</code></pre>
<p>I tried to make it simpler by using a variable for the boundary, like this:</p>
<pre><code>import requests
def post_with_custom_boundary(c_boundary):
    url = 'https://eservices.gov./'
    cookies = {'BNES_JSESSIONID': 'FBVxGgtjjJhDGmcnG/XOglUeuJlG/mwpJH9+ICQw/bL3/2Y383HWR4f+WzRXsJBG5jHf33qCsgMBOHLeTKxJRBwlkrkKE1tGbdywVgdikWhC4VcCUd1yIIYJBFm4m/60lfkGqvgk+UfaMbPKe3CUVRvsTaMCLn+A'}
    headers = {'Content-Type': f'multipart/form-data; boundary={c_boundary}'}
    data = f'{c_boundary}\r\nContent-Disposition: form-data; name="selectAser"\r\n\r\n211053300\r\n{c_boundary}\r\nContent-Disposition: form-data; name="phoneNo"\r\n\r\n55075052\r\n{c_boundary}\r\nContent-Disposition: form-data; name="selectAsed"\r\n\r\n120987470-1\r\n{c_boundary}\r\nContent-Disposition: form-data; name="comReg"\r\n\r\n\r\n{c_boundary}\r\nContent-Disposition: form-data; name="bankType"\r\n\r\n1850\r\n{c_boundary}\r\nContent-Disposition: form-data; name="attachments"; filename=""\r\nContent-Type: application/octet-stream\r\n\r\n\r\n{c_boundary}\r\nContent-Disposition: form-data; name="searchType"\r\n\r\n0745\r\n{c_boundary}--\r\n'
    response = requests.post(url, cookies=cookies, headers=headers, data=data)
    print(response.text)
c_boundary = '------WebKitFormBoundaryg3LDfuaso7lfkkvj'
post_with_custom_boundary(c_boundary)
</code></pre>
<p>This is really weird. Any ideas on how to fix this problem?</p>
<p>Note that the first code is working well and returns this valid response
<code>تم تسجيل الطلب بنجاح</code>
which means <code>Registered Successfully</code></p>
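<p>One detail worth noting (RFC 2046): inside the body, each delimiter line is the header boundary prefixed with an extra <code>--</code>, and the closing delimiter also ends with <code>--</code>. In the working snippet, the hard-coded body lines have exactly those two extra dashes relative to the header value, so reusing one variable for both drops them. A sketch of consistent construction (the field value is just an example):</p>

```python
boundary = "----WebKitFormBoundaryg3LDfuaso7lfkkvj"  # goes in the header
delim = "--" + boundary                              # goes in the body

body = (
    f"{delim}\r\n"
    'Content-Disposition: form-data; name="phoneNo"\r\n'
    "\r\n"
    "55075052\r\n"
    f"{delim}--\r\n"                                 # closing delimiter
)
headers = {"Content-Type": f"multipart/form-data; boundary={boundary}"}
```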
| <python><python-requests> | 2023-08-06 14:42:46 | 2 | 9,608 | YasserKhalil |
76,846,366 | 6,758,862 | Exception silently ignored inside context manager decorated method | <p>The following code appears not to raise. Is that a bug or have I not correctly understood the error handling in context managers?</p>
<pre class="lang-py prettyprint-override"><code>from contextlib import contextmanager
class Dataset:
    _getToRaise = False

    def __getitem__(self, index):
        if self._getToRaise:
            print("It is supposed to raise here")
            raise ValueError
        return 0

    @contextmanager
    def getToRaise(self):
        try:
            self._getToRaise = True
            print('Entering context manager')
            yield
        finally:
            print('Exiting context manager')
            self._getToRaise = False
            return


d = Dataset()
with d.getToRaise():
    d[0]
</code></pre>
<p>Running the code in Python 3.8.17 returns:</p>
<pre><code>Entering context manager
It is supposed to raise here
Exiting context manager
</code></pre>
<p>and no exception is raised...</p>
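<p>This behavior is not specific to context managers: a <code>return</code> inside a <code>finally</code> block discards any exception that is in flight. A minimal demonstration:</p>

```python
def swallow():
    try:
        raise ValueError("boom")
    finally:
        # this return runs while ValueError is propagating and discards it
        return "swallowed"

assert swallow() == "swallowed"  # no exception escapes
```

<p>Dropping the trailing <code>return</code> from the <code>finally</code> block in <code>getToRaise</code> should let the <code>ValueError</code> propagate out of the <code>with</code> block.</p>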
| <python><exception><error-handling><contextmanager> | 2023-08-06 14:35:32 | 2 | 723 | Vasilis Lemonidis |
76,846,312 | 11,929,884 | Tabula-py: Java HotSpot(TM) 64-Bit Server VM warning: CodeCache is full | <p>I installed both the tabula-py library and also Java to try and scrape tables from PDFs. I ran some simple code below with a sample pdf I found online:</p>
<pre class="lang-py prettyprint-override"><code>from tabula import read_pdf
path = "https://sedl.org/afterschool/toolkits/science/pdf/ast_sci_data_tables_sample.pdf"
table = read_pdf(path,pages=1)
print(table[0])
</code></pre>
<p>I got the following error(s):</p>
<pre><code>Got stderr: Java HotSpot(TM) 64-Bit Server VM warning: CodeCache is full. Compiler has been disabled.
Java HotSpot(TM) 64-Bit Server VM warning: Try increasing the code cache size using -XX:ReservedCodeCacheSize=
Traceback (most recent call last):
  File "/Users/default/Desktop/Schedule Data/Extraction.py", line 21, in <module>
    tables = tabula.read_pdf('Brunswick Student Proof 1.pdf',pages = [14,20])
  File "/Users/default/Library/Python/3.9/lib/python/site-packages/tabula/io.py", line 440, in read_pdf
    raw_json: List[Any] = json.loads(output.decode(encoding))
  File "/Applications/Xcode.app/Contents/Developer/Library/Frameworks/Python3.framework/Versions/3.9/lib/python3.9/json/__init__.py", line 346, in loads
    return _default_decoder.decode(s)
  File "/Applications/Xcode.app/Contents/Developer/Library/Frameworks/Python3.framework/Versions/3.9/lib/python3.9/json/decoder.py", line 337, in decode
    obj, end = self.raw_decode(s, idx=_w(s, 0).end())
  File "/Applications/Xcode.app/Contents/Developer/Library/Frameworks/Python3.framework/Versions/3.9/lib/python3.9/json/decoder.py", line 355, in raw_decode
    raise JSONDecodeError("Expecting value", s, err.value) from None
json.decoder.JSONDecodeError: Expecting value: line 1 column 1 (char 0)
</code></pre>
<p>I have searched for a potential fix to either the CodeCache error or the JSON decoder error, and the answers have not been very helpful. Is the issue here with the Java end, the tabula library, or both?</p>
| <python><java><tabula-py> | 2023-08-06 14:22:58 | 1 | 316 | RIPPLR |
76,846,310 | 5,838,180 | How to groupby in pandas for multiple periods (season and weekday)? | <p>I have a dataframe containing ~3 years of data looking something like this:</p>
<p><a href="https://i.sstatic.net/ojLdT.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/ojLdT.png" alt="enter image description here" /></a></p>
<p>I can groupby the whole dataframe by weekdays, to get the average value for each of them, by doing</p>
<pre><code>df_weekday = df.groupby(df.index.weekday).mean()
</code></pre>
<p>But I want to have not just the average value for each weekday, but have this for every season within my dataframe of ~3 years - meaning, average value for Mon to Sun for winter, average value for Mon to Sun for spring and so on.</p>
<p>How can I do this? Thanks</p>
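<p>A sketch of one way to do this (the data here is synthetic): map each month to a season bucket and pass both groupers to <code>groupby</code>, which yields one mean per (season, weekday) pair:</p>

```python
import numpy as np
import pandas as pd

idx = pd.date_range("2020-01-01", "2022-12-31", freq="D")
df = pd.DataFrame({"value": np.random.default_rng(0).normal(size=len(idx))},
                  index=idx)

# 0 = winter (Dec-Feb), 1 = spring, 2 = summer, 3 = autumn
season = df.index.month % 12 // 3
result = df.groupby([season, df.index.weekday])["value"].mean()
# result has a two-level index: 4 seasons x 7 weekdays = 28 rows
```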
| <python><pandas><dataframe><group-by> | 2023-08-06 14:22:14 | 2 | 2,072 | NeStack |
76,846,118 | 2,741,831 | Using asyncio Futures with Flask | <pre><code>from flask import Flask, request, jsonify
from sentence_transformers import SentenceTransformer
import asyncio

loop = asyncio.get_event_loop()
app = Flask(__name__)
prio_queue = asyncio.PriorityQueue()

# Load the SentenceTransformer model
model = SentenceTransformer('all-MiniLM-L6-v2')


async def embed_task_loop():
    while True:
        # get next item
        prio, p = await prio_queue.get()
        sentences, fut = p
        try:
            # Encode the sentences using the model
            embeddings = model.encode(sentences)
            # Create a dictionary to hold the sentence-embedding pairs
            result = {
                'texts': sentences,
                'embeddings': embeddings.tolist()
            }
            #return jsonify(result), 200
            fut.set_result((jsonify(result), 200))
        except Exception as e:
            #return jsonify(error=str(e)), 500
            fut.set_result((jsonify(error=str(e)), 500))


async def add_to_prio_queue(sentences, prio):
    global prio_queue
    # add to prio queue always if prio is one, if prio is zero add only if prio queue is not larger than 10
    if prio == 1 or (prio == 0 and prio_queue.qsize() < 10):
        fut = loop.create_future()
        package = (prio, (sentences, fut))
        prio_queue.put_nowait(package)
    else:
        fut.set_result((jsonify(error='Too many requests'), 429))
    return await fut


@app.route('/embed', methods=['POST'])
def embed_sentences():
    # Get the list of sentences from the request body
    data = request.get_json(force=True)
    sentences = data.get('texts', [])
    prio = data.get('prio', 0)
    if not sentences:
        return jsonify(error='No sentences provided'), 400
    result = loop.run_until_complete(add_to_prio_queue(sentences, prio))
    return result


if __name__ == '__main__':
    # start the embed task loop
    with app.app_context():
        loop.create_task(embed_task_loop())
    app.run()
</code></pre>
<p>I have this code that contains two parts, an api that takes in sentences to be embedded (you only need to know that that is a process that takes a while) and adds them to a priority queue. High priority tasks are always processed first and low priority tasks may be rejected. The embedding threads work simultaneously and enqueuing an embedding task will return a future that can be awaited. Unfortunately, Flask and asyncio really don't like each other and so if I try to use <code>await</code> instead of <code>loop.run_until_complete</code> I get this error:</p>
<pre><code>RuntimeError: Task <Task pending name='Task-9' coro=<AsyncToSync.main_wrap() running at /home/user/miniconda3/envs/tf/lib/python3.9/site-packages/asgiref/sync.py:353> cb=[_run_until_complete_cb() at /home/user/miniconda3/envs/tf/lib/python3.9/asyncio/base_events.py:184]> got Future <Future pending> attached to a different loop
</code></pre>
<p>With <code>loop.run_until_complete</code> it sort of works, but it regularly gives me errors like these (most likely because the threads start to overlap):</p>
<pre><code>Traceback (most recent call last):
  File "/home/user/miniconda3/envs/tf/lib/python3.9/site-packages/flask/app.py", line 2529, in wsgi_app
    response = self.full_dispatch_request()
  File "/home/user/miniconda3/envs/tf/lib/python3.9/site-packages/flask/app.py", line 1825, in full_dispatch_request
    rv = self.handle_user_exception(e)
  File "/home/user/miniconda3/envs/tf/lib/python3.9/site-packages/flask/app.py", line 1823, in full_dispatch_request
    rv = self.dispatch_request()
  File "/home/user/miniconda3/envs/tf/lib/python3.9/site-packages/flask/app.py", line 1799, in dispatch_request
    return self.ensure_sync(self.view_functions[rule.endpoint])(**view_args)
  File "/home/user/projects/vector-crawler/backend/src/embed.py", line 78, in embed_sentences
    result = loop.run_until_complete(add_to_prio_queue(sentences, prio))
  File "/home/user/miniconda3/envs/tf/lib/python3.9/asyncio/base_events.py", line 623, in run_until_complete
    self._check_running()
  File "/home/user/miniconda3/envs/tf/lib/python3.9/asyncio/base_events.py", line 583, in _check_running
    raise RuntimeError('This event loop is already running')
RuntimeError: This event loop is already running
</code></pre>
<p>It also crashes pretty hard on exit, although that's not that big of a deal.
So, is there a clean way to handle and await futures in Flask?
Or is there another way to solve this?</p>
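<p>For reference, one pattern that avoids calling <code>run_until_complete</code> from inside a running loop is to run the asyncio loop in a dedicated background thread and hand coroutines to it with <code>asyncio.run_coroutine_threadsafe</code>, which returns a <code>concurrent.futures.Future</code> a sync view can block on. A minimal sketch, with a stand-in coroutine instead of the real embedding work:</p>

```python
import asyncio
import threading

# one loop for the whole process, driven by a background thread
loop = asyncio.new_event_loop()
threading.Thread(target=loop.run_forever, daemon=True).start()

async def embed(sentences):
    await asyncio.sleep(0)          # stand-in for the real encoding work
    return {s: len(s) for s in sentences}

# inside a sync Flask view function you could then do:
future = asyncio.run_coroutine_threadsafe(embed(["hello", "world"]), loop)
result = future.result(timeout=5)   # blocks this thread, not the loop
```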
| <python><flask><python-asyncio> | 2023-08-06 13:29:51 | 0 | 2,482 | user2741831 |
76,846,066 | 7,657,180 | Post request works in postman but not in python code | <p>I am trying to post a request on a website that requires credentials. I have used Postman and synced it with the browser using Interceptor (a Chrome extension) to make use of the cookies stored correctly in Postman. I have succeeded in posting the request in Postman, and here are the settings I used:
<a href="https://i.sstatic.net/J5tag.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/J5tag.png" alt="enter image description here" /></a></p>
<p>When trying to use the Python requests code generated by Postman, I got an invalid response (<code>404</code>, "something went wrong").
Here's the Python code I got from Postman (not working):</p>
<pre><code>import requests
url = "https://eservices.moj.gov.kw/execFileUpload2"

payload = {'selectAser': '211053300',
           'phoneNo': '55075052',
           'selectAsed': '120987470-1',
           'comReg': '',
           'bankType': '1850',
           'attachments': '(binary)',
           'searchType': '0745'}
files = [
]
headers = {
    'Cookie': 'BNES_JSESSIONID=FBVxGgtjjJhDGmcnG/XOglUeuJlG/mwpJH9+ICQw/bL3/2Y383HWR4f+WzRXsJBG5jHf33qCsgMBOHLeTKxJRBwlkrkKE1tGbdywVgdikWhC4VcCUd1yIIYJBFm4m/60lfkGqvgk+UfaMbPKe3CUVRvsTaMCLn+A; BNIS___utm_is1=834onz3cMbm6+rxfXql7Q9R5wUqKP9h0Og9waF061RytqqFpbicKoodJVczfRblgLKkNejO9xULfy/gV59QKOKqOE5nH62HC/3yuYJehn8hT5JL0Trtl1Q==; BNIS___utm_is2=h3TrzcqVULgLWXXvNz3L2bQtrG+U/tHgaCBTF5MLDA6zaHEpzzwCy9Rj99RJySZUYmO8bQuFJ6s=; BNIS___utm_is3=xl2RfksMuxdJJrgDWe3RlTXaGAMcCXNp8jk39UeD37JWegODHQm0CVYCakYal9jUfWovaY2FrKriBIFNQOFbyIxcgMpHKnERXJrFCsvLKrmZNPgeZdMezA==; BNIS_vid=dVdYD1HKY4P0pa5whnuuvLgo71u2130LdG9wpcgfK0yYw3B1mLAfK2pzLr7+xPH1h3PVudRNDGJQHyVY0Y9fVfhte25qE3ArdMOHRB4jcyMGq/9iCsU5goBa1Q2hVs/8Fqd53PjKs7hT81XeYQ8Jtani11zEpPgkw70aLejHmSSykk8nACMnKMr0zP4RqoujmqEbAHE3+e6Odf2XSkBCWh3jPiyUqWmU4zYGail1N6A=; JSESSIONID=6ZLK74MAMrYOWAb39OQ7EYLHtt9GjRFzwvgwACd9idbb76Fe7q4p!669384774'
}

response = requests.request("POST", url, headers=headers, data=payload, files=files)
print(response.text)
</code></pre>
<p>This is the response I got</p>
<pre><code><!DOCTYPE html PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN" "http://www.w3.org/TR/html4/loose.dtd">
<html dir="rtl">
<head>
<meta http-equiv="Content-Type" content="text/html; charset=windows-1256">
<title>::بوابة العدل الالكترونية::</title>
<link type="text/css" rel="stylesheet" href="/Styles/NMaster.css" />
<style type="text/css">
<!--
H1 {font-family:Tahoma,Arial,sans-serif;color:white;background-color:#525D76;font-size:22px;}
PRE, TT {border: 1px dotted #525D76}
A {color : black;}A.name {color : black;}
-->
</style>
</head>
<body>
<h1 style="text-align:center;">حدث خطأ ما </h1>
</body>
</html>
</code></pre>
<p>Here are some snapshots (they may help):
<a href="https://i.sstatic.net/RCq7I.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/RCq7I.png" alt="enter image description here" /></a></p>
<p><a href="https://i.sstatic.net/Bkri6.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Bkri6.png" alt="enter image description here" /></a></p>
<p><a href="https://i.sstatic.net/3h9wy.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/3h9wy.png" alt="enter image description here" /></a></p>
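<p>One possible cause, as a hedged guess: with an empty <code>files=[]</code> list, <code>requests</code> encodes the payload as <code>application/x-www-form-urlencoded</code>, while Postman's form-data mode sends <code>multipart/form-data</code>; hard-coded session cookies also expire quickly. The sketch below only <em>prepares</em> the request (nothing is sent), so the URL and cookie values are placeholders; it shows that a non-empty <code>files</code> mapping switches <code>requests</code> to multipart encoding:</p>

```python
import requests

def build_upload_request(url, payload, cookies=None):
    """Prepare (but do not send) a multipart/form-data POST, the way
    Postman's form-data mode encodes it.  Passing a non-empty `files`
    mapping is what switches requests to multipart; an empty `files=[]`
    leaves the body url-encoded, which the server may reject."""
    session = requests.Session()
    if cookies:
        session.cookies.update(cookies)
    # Empty placeholder attachment, standing in for the real file field.
    files = {"attachments": ("upload.bin", b"", "application/octet-stream")}
    req = requests.Request("POST", url, data=payload, files=files)
    return session.prepare_request(req)

# Placeholder URL and cookie value, for illustration only.
prepared = build_upload_request(
    "https://example.com/execFileUpload2",
    {"selectAser": "211053300", "searchType": "0745"},
    cookies={"JSESSIONID": "placeholder"},
)
print(prepared.headers["Content-Type"])  # multipart/form-data; boundary=...
```

<p>Obtaining a fresh session (for example by first GETting the login page with the same <code>requests.Session</code>) is more robust than copying the <code>Cookie</code> header from the browser.</p>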
| <python><python-requests><postman> | 2023-08-06 13:16:36 | 2 | 9,608 | YasserKhalil |
76,845,739 | 1,434,847 | How to restore/validate sliced & shuffled image? | <p>First of all, I have an image that was sliced into different pieces and shuffled, like the following image:</p>
<p><a href="https://i.sstatic.net/7oAC1.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/7oAC1.jpg" alt="enter image description here" /></a></p>
<p>I am not sure how the image can be restored to look like the following:</p>
<p><a href="https://i.sstatic.net/uFxEc.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/uFxEc.jpg" alt="enter image description here" /></a></p>
<p>Alternatively, is there any way to validate the output image if I brute-force all possible combinations? Thanks in advance.</p>
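<p>One hedged way to validate a candidate arrangement is to score how well adjacent tile edges line up: the correct ordering usually minimizes the pixel difference across shared edges. This is only a sketch on a synthetic NumPy gradient (a single row of vertical strips); a real photo would also need vertical-edge costs and a loader such as Pillow, and brute force only scales to a handful of tiles:</p>

```python
import itertools
import numpy as np

def edge_cost(left, right):
    # Sum of absolute differences along the shared vertical edge.
    return float(np.abs(left[:, -1].astype(int) - right[:, 0].astype(int)).sum())

def best_order(tiles):
    # Brute-force every horizontal ordering and keep the one whose
    # adjacent tile edges match best (lowest total cost).
    best, best_score = None, float("inf")
    for perm in itertools.permutations(range(len(tiles))):
        score = sum(edge_cost(tiles[a], tiles[b]) for a, b in zip(perm, perm[1:]))
        if score < best_score:
            best, best_score = perm, score
    return best

# Synthetic demo: a smooth horizontal gradient sliced into 4 vertical strips.
img = np.tile(np.arange(40, dtype=np.uint8), (10, 1))
tiles = [img[:, i * 10:(i + 1) * 10] for i in range(4)]
shuffled = [tiles[i] for i in (2, 0, 3, 1)]
order = best_order(shuffled)
restored = np.hstack([shuffled[i] for i in order])
print(np.array_equal(restored, img))  # True
```

<p>The same edge-cost idea generalizes to a 2D grid of tiles, though the permutation count grows factorially, so a greedy or Hungarian-style matcher is needed beyond a few pieces.</p>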
| <python><image-processing> | 2023-08-06 11:46:09 | 1 | 1,446 | Winston |
76,845,466 | 9,565,342 | Why can't I import parametrize from pytest | <p>I know there is a <code>parametrize</code> decorator in <code>pytest</code>, e.g.:</p>
<pre><code>@pytest.mark.parametrize("num1, num2", [(1, 1)])
def test_equal(num1, num2):
assert num1 == num2
</code></pre>
<p>However, I wonder why I can't import the <code>parametrize</code> decorator directly, i.e. if I write <code>from pytest.mark import parametrize</code> I get <code>ModuleNotFoundError: No module named 'pytest.mark'</code>.</p>
<p>Why can I call <code>pytest.mark.parametrize</code> but not write <code>from pytest.mark import parametrize</code>?</p>
<p><strong>P.S.:</strong> It's not a question about a proper usage, I'm perfectly fine using <code>@pytest.mark.parametrize</code>. It's rather about comprehension of the import process.</p>
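<p>For context, here is a small demonstration (the explanation is my reading of pytest's layout, not official documentation): <code>pytest.mark</code> is an attribute of the <code>pytest</code> package, an instance of <code>MarkGenerator</code>, and not a submodule, and <code>from a.b import c</code> only works when <code>a.b</code> is an importable module:</p>

```python
import importlib
import pytest

# `pytest.mark` is an object attribute, not a module, so the import
# machinery cannot find it as `pytest/mark.py` or `pytest/mark/`.
print(type(pytest.mark).__name__)  # MarkGenerator

try:
    importlib.import_module("pytest.mark")
except ModuleNotFoundError as exc:
    print(exc)  # No module named 'pytest.mark'

# Binding the decorator to a shorter name still works, because that is
# ordinary attribute access rather than an import:
parametrize = pytest.mark.parametrize

@parametrize("num1, num2", [(1, 1)])
def test_equal(num1, num2):
    assert num1 == num2
```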
| <python><pytest> | 2023-08-06 10:34:38 | 1 | 1,156 | NShiny |
76,845,311 | 7,946,082 | How to use type hint properly for hypothesis's "stateful testing" example? | <p>I'm trying to properly type hint hypothesis's <a href="https://hypothesis.readthedocs.io/en/latest/stateful.html#rule-based-state-machines" rel="nofollow noreferrer">stateful testing example</a>:</p>
<pre class="lang-py prettyprint-override"><code>import shutil
import tempfile
from collections import defaultdict
import hypothesis.strategies as st
from hypothesis.database import DirectoryBasedExampleDatabase
from hypothesis.stateful import Bundle, RuleBasedStateMachine, rule
class DatabaseComparison(RuleBasedStateMachine):
def __init__(self) -> None:
super().__init__()
self.tempd = tempfile.mkdtemp()
self.database = DirectoryBasedExampleDatabase(self.tempd)
self.model: dict[bytes, set[bytes]] = defaultdict(set)
keys = Bundle("keys")
values = Bundle("values")
@rule(target=keys, k=st.binary())
def add_key(self, k: bytes) -> bytes:
return k
@rule(target=values, v=st.binary())
def add_value(self, v: bytes) -> bytes:
return v
@rule(k=keys, v=values)
def save(self, k: bytes, v: bytes) -> None:
self.model[k].add(v)
self.database.save(k, v)
@rule(k=keys, v=values)
def delete(self, k: bytes, v: bytes) -> None:
self.model[k].discard(v)
self.database.delete(k, v)
@rule(k=keys)
def values_agree(self, k: bytes) -> None:
assert set(self.database.fetch(k)) == self.model[k]
def teardown(self) -> None:
shutil.rmtree(self.tempd)
TestDBComparison = DatabaseComparison.TestCase
</code></pre>
<p>However, when I run mypy (Python 3.11.3, mypy 1.3.0):</p>
<pre><code>temp.py:17: error: Need type annotation for "keys" [var-annotated]
temp.py:18: error: Need type annotation for "values" [var-annotated]
</code></pre>
<p>So how do I fix the "Bundle" thing?</p>
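<p>In recent hypothesis versions <code>Bundle</code> subclasses <code>SearchStrategy[Ex]</code> and is generic, so (assuming such a version) parameterizing the annotation with the element type should satisfy mypy's <code>var-annotated</code> check; a minimal sketch:</p>

```python
import hypothesis.strategies as st
from hypothesis.stateful import Bundle, RuleBasedStateMachine, rule

class Machine(RuleBasedStateMachine):
    # Parameterizing Bundle with the element type gives mypy the
    # annotation it asked for on the two flagged attributes.
    keys: Bundle[bytes] = Bundle("keys")
    values: Bundle[bytes] = Bundle("values")

    @rule(target=keys, k=st.binary())
    def add_key(self, k: bytes) -> bytes:
        return k
```

<p>The rest of the machine stays unchanged; only the two <code>Bundle(...)</code> assignments gain the <code>Bundle[bytes]</code> annotation.</p>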
| <python><mypy><python-typing><python-hypothesis> | 2023-08-06 09:54:36 | 1 | 513 | Jerry |
76,845,238 | 3,840,940 | how to import anaconda pandas module in visual studio code environment? | <p>I configured Apache Spark in the Visual Studio Code environment. The configuration of settings.json is shown below:</p>
<pre><code>"python.defaultInterpreterPath": "C:\\Anaconda3\\python.exe",
"terminal.integrated.env.windows": {
"PYTHONPATH": "C:/spark-3.4.1-bin-hadoop3/python;C:/spark-3.4.1-bin-hadoop3/python/pyspark;C:/spark-3.4.1-bin-hadoop3/python/lib/py4j-0.10.9.7-src.zip;C:/spark-3.4.1-bin-hadoop3/python/lib/pyspark.zip"
},
"python.autoComplete.extraPaths": [
"C:\\spark-3.4.1-bin-hadoop3\\python",
"C:\\spark-3.4.1-bin-hadoop3\\python\\pyspark",
"C:\\spark-3.4.1-bin-hadoop3\\python\\lib\\py4j-0.10.9.7-src.zip",
"C:\\spark-3.4.1-bin-hadoop3\\python\\lib\\pyspark.zip"
],
"python.analysis.extraPaths": [
"C:\\spark-3.4.1-bin-hadoop3\\python",
"C:\\spark-3.4.1-bin-hadoop3\\python\\pyspark",
"C:\\spark-3.4.1-bin-hadoop3\\python\\lib\\py4j-0.10.9.7-src.zip",
"C:\\spark-3.4.1-bin-hadoop3\\python\\lib\\pyspark.zip"
]
</code></pre>
<p>But I get errors with the following Python code:</p>
<pre><code>import pandas as pd
</code></pre>
<p>The error is</p>
<pre><code>File "c:\VSCode_Workspace\deep-learn-python\com\aaa\dl\mysql_feat.py", line 5, in <module>
import pandas as pd
File "C:\spark-3.4.1-bin-hadoop3\python\pyspark\pandas\__init__.py", line 29, in <module>
from pyspark.pandas.missing.general_functions import MissingPandasLikeGeneralFunctions
File "C:\spark-3.4.1-bin-hadoop3\python\pyspark\pandas\__init__.py", line 34, in <module>
require_minimum_pandas_version()
</code></pre>
<p>As you can see, the pandas module being imported is not the Anaconda module but the pyspark.pandas module. I think the Apache Spark configuration above causes these errors. Kindly let me know how to import the Anaconda pandas module, not pyspark.pandas, in Visual Studio Code. I have to keep this configuration, though.</p>
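<p>My own diagnosis sketch, not Spark documentation: <code>PYTHONPATH</code> contains <code>C:/spark-3.4.1-bin-hadoop3/python/pyspark</code> itself, so that directory's <code>pandas/</code> sub-package shadows Anaconda's pandas, because Python imports the first match on <code>sys.path</code>. Only <code>.../python</code> and the zip entries should be needed; the snippet below checks where <code>pandas</code> would resolve from, before and after dropping the offending entry:</p>

```python
import importlib
import importlib.util
import sys

def which_module(name):
    """Return the file a module would be loaded from, without importing it."""
    spec = importlib.util.find_spec(name)
    return spec.origin if spec else None

# Should point into the Anaconda site-packages, not ...\pyspark\pandas.
print(which_module("pandas"))

# Drop the entry that puts pyspark's own packages at top level, then re-check:
sys.path = [p for p in sys.path
            if not p.replace("\\", "/").rstrip("/").endswith("python/pyspark")]
importlib.invalidate_caches()
print(which_module("pandas"))
```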
| <python><pandas><visual-studio-code><pyspark> | 2023-08-06 09:34:26 | 1 | 1,441 | Joseph Hwang |
76,845,129 | 12,924,562 | Python Mastodon API gives same results for following and followers? | <p>I am trying to get a list of people I follow and a list of people I am followed by, but both functions return the same number.</p>
<pre><code>myid = mastodon.account_verify_credentials()["id"]
followed_by = mastodon.account_followers(myid) #Fetch users the given user is followed by.
following = mastodon.account_following(myid) # Fetch users the given user is following.
binoff = []
print ("followed_by={}, following={}, binoff={}".format(len(followed_by),len(following),len(binoff)))
</code></pre>
<p>The result is:</p>
<blockquote>
<p>followed_by=40, following=40, binoff=0<br />
followed_by=40, following=40, binoff=40</p>
</blockquote>
<p>Where have I gone wrong?</p>
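<p>A likely explanation, hedged: the Mastodon API paginates these endpoints at 40 entries per page by default, and Mastodon.py's <code>account_followers()</code> / <code>account_following()</code> return only the first page, so both lists stop at 40. <code>mastodon.fetch_remaining(first_page)</code> follows the pagination links and collects the rest. Since the API can't be called here, this sketch shows the same follow-the-next-page pattern with stub pages:</p>

```python
def fetch_remaining(first_page, fetch_next):
    # Keep following the "next page" link until the server stops returning one.
    items = list(first_page)
    page = first_page
    while (page := fetch_next(page)) is not None:
        items.extend(page)
    return items

# Stub pages standing in for the paginated follower/following responses.
pages = [[1, 2], [3, 4], [5]]

def fake_next(page):
    i = pages.index(page)
    return pages[i + 1] if i + 1 < len(pages) else None

print(len(fetch_remaining(pages[0], fake_next)))  # 5
```

<p>So the fix would look like <code>followers = mastodon.fetch_remaining(mastodon.account_followers(myid))</code>, and likewise for <code>account_following</code>.</p>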
| <python><mastodon> | 2023-08-06 09:02:50 | 1 | 386 | Rick Dearman |
76,845,012 | 6,734,243 | How to automatically start a GitHub Codespace with some Python packages? | <p>I started using Codespaces for small modifications in my open-source Python repositories. Most of them require only 2 core dependencies, <code>pre-commit</code> and <code>nox</code>. Is it possible to save a codespace configuration where these 2 libraries are automatically installed every time I create a new codespace?</p>
<p>It would be even better if this configuration could be scoped per repository for small adjustments.</p>
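<p>Codespaces reads a dev container configuration committed to the repository, <code>.devcontainer/devcontainer.json</code>, and runs <code>postCreateCommand</code> after creating the codespace, so the setup is naturally scoped per repo. A minimal sketch (the image tag is just one example):</p>

```json
{
  "image": "mcr.microsoft.com/devcontainers/python:3.11",
  "postCreateCommand": "pip install pre-commit nox"
}
```

<p>Each repository keeps its own copy of the file, so per-repo adjustments are just edits to that file.</p>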
| <python><github><github-codespaces> | 2023-08-06 08:27:08 | 1 | 2,670 | Pierrick Rambaud |
76,844,957 | 13,734,323 | How to make a Chatbot with Langchain that has access to custom data and the internet? | <p>Strangely, it doesn't find a good response. When I print <code>response["answer"]</code>, I get that there is no text to answer the query I put in, even though it gets information from the internet and the Documents in the list seem well structured. Here is the code:</p>
<pre><code>from googlesearch import search
from langchain.chains import ConversationalRetrievalChain
from langchain.memory import ConversationBufferMemory
from langchain.vectorstores import DocArrayInMemorySearch
from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.document_loaders import (
UnstructuredWordDocumentLoader,
TextLoader,
UnstructuredPowerPointLoader,
)
from langchain.tools import Tool
from langchain.utilities import GoogleSearchAPIWrapper
from langchain.chat_models import ChatOpenAI
from langchain.docstore.document import Document
import os
import openai
import sys
from dotenv import load_dotenv, find_dotenv
sys.path.append('../..')
_ = load_dotenv(find_dotenv())
google_api_key = os.environ.get("GOOGLE_API_KEY")
google_cse_id = os.environ.get("GOOGLE_CSE_ID")
openai.api_key = os.environ['OPENAI_API_KEY']
os.environ["LANGCHAIN_TRACING_V2"] = "true"
os.environ["LANGCHAIN_ENDPOINT"] = "https://api.langchain.plus"
os.environ["LANGCHAIN_API_KEY"] = os.environ['LANGCHAIN_API_KEY']
os.environ["GOOGLE_API_KEY"] = google_api_key
os.environ["GOOGLE_CSE_ID"] = google_cse_id
folder_path_docx = "DB\\DB VARIADO\\DOCS"
folder_path_txt = "DB\\BLOG-POSTS"
folder_path_pptx_1 = "DB\\PPT JUNIO"
folder_path_pptx_2 = "DB\\DB VARIADO\\PPTX"
loaded_content = []
for file in os.listdir(folder_path_docx):
if file.endswith(".docx"):
file_path = os.path.join(folder_path_docx, file)
loader = UnstructuredWordDocumentLoader(file_path)
docx = loader.load()
loaded_content.extend(docx)
for file in os.listdir(folder_path_txt):
if file.endswith(".txt"):
file_path = os.path.join(folder_path_txt, file)
loader = TextLoader(file_path, encoding='utf-8')
text = loader.load()
loaded_content.extend(text)
for file in os.listdir(folder_path_pptx_1):
if file.endswith(".pptx"):
file_path = os.path.join(folder_path_pptx_1, file)
loader = UnstructuredPowerPointLoader(file_path)
slides_1 = loader.load()
loaded_content.extend(slides_1)
for file in os.listdir(folder_path_pptx_2):
if file.endswith(".pptx"):
file_path = os.path.join(folder_path_pptx_2, file)
loader = UnstructuredPowerPointLoader(file_path)
slides_2 = loader.load()
loaded_content.extend(slides_2)
embedding = OpenAIEmbeddings()
embeddings_content = []
for one_loaded_content in loaded_content:
embedding_content = embedding.embed_query(one_loaded_content.page_content)
embeddings_content.append(embedding_content)
db = DocArrayInMemorySearch.from_documents(loaded_content, embedding)
retriever = db.as_retriever(search_type="similarity", search_kwargs={"k": 3})
search = GoogleSearchAPIWrapper()
def custom_search(query):
max_results = 3
internet_results = search.results(query, max_results)
internet_documents = [Document(page_content=result["snippet"], metadata={
"source": result["link"]}) for result in internet_results
]
return internet_documents
chain = ConversationalRetrievalChain.from_llm(
llm=ChatOpenAI(model_name="gpt-4", temperature=0),
chain_type="map_reduce",
retriever=retriever,
return_source_documents=True,
return_generated_question=True,
)
history = []
while True:
query = input("Hola, soy Chatbot. ¿Qué te gustaría saber? ")
internet_documents = custom_search(query)
small = loaded_content[:3]
combined_results = small + internet_documents
print(combined_results)
response = chain(
{"question": query, "chat_history": history, "documents": combined_results})
print(response["answer"])
history.append(("system", query))
history.append(("assistant", response["answer"]))
</code></pre>
<p>The output when I <code>print(combined_results)</code> looks fine; it should be readable enough to produce the correct answer based on the custom data and the internet. But somehow it doesn't work:</p>
<p><code>Document(page_content="Cumple diez años como referente mundial en formación digital, según prestigiosos rankings internacionales y nacionales\n\nEl Centro Universitario abrió sus puertas en el año 2011 para cubrir el vacío existente de perfiles con competencias digitales en sectores como el del diseño digital, la ingeniería del software, los videojuegos y la animación en España.\n\nPor sus aulas han pasado ya más de 4.500 estudiantes, muchos de los cuales trabajan hoy en compañías líderes de estos sectores por todo el mundo.\n\nLos alumnos de esta universidad han ganado más de 100 premios, nacionales e internacionales, en todos los ámbitos.\n\nMadrid, 25 de mayo de 2021.- Centro Universitario de Tecnología y Arte Digital, celebra este año su décimo aniversario convertido en la institución educativa referente en formación digital por los importantes reconocimientos procedentes de entidades de prestigio nacionales e internacionales, por la excelente valoración que reciben por parte del tejido industrial, en referencia a los conocimientos impartidos a sus alumnos, así como, por los numerosos premios conseguidos por el alumnado. \n\nDurante esta década, han pasado por las aulas de U-tad, situadas en el municipio madrileño de Las Rozas, más de 4.500 alumnos entre sus titulaciones de grado, postgrado y ciclo formativo de grado superior. En los últimos cinco años los alumnos de esta universidad han ganado más de 100 premios, nacionales e internacionales, en todos los ámbitos. U-tad es el Centro Universitario español con mayor número de PlayStation Awards obtenidos (11), además de cuatro galardones en el célebre South by Southwest (SXSW) de Austin y 3 Gamelab, entre muchos otros. 
En el ámbito del diseño digital, el trabajo desarrollado por una alumna consistente en una instalación de arte interactivo ha obtenido un Laus, premio que concede la Asociación de Diseñadores Gráficos y Directores de Arte (ADG-FAD), en la categoría ‘Proyecto Final de Estudios’, a la que se presentaron 250 proyectos.\n\nNumerosos alumnos del área de animación han obtenido importantes galardones y nominaciones a título personal en certámenes tan acreditados como los recientes premios Annie (galardones que entrega la\xa0‘International Animated Film Association’\xa0afincada en\xa0Los Ángeles, y que son considerados los Óscar de la animación), y Quirino, así como han participado en producciones premiadas en los Óscar o los Goya.\n\nAsimismo, estudiantes quedaron en el primer puesto en el 'European Cybersecurity Challenge' y también en el ‘Datathon’\xa0organizado por\xa0Microsoft, así como en el\xa0‘Datathon Ciudad de Madrid’.\n\nHoy, alumnos que estudiaron en U-tad están triunfando no solo en España, sino por todo el mundo, formando parte de algunas de las compañías más punteras, no solo en estudios de animación de primerísimo nivel como Walt Disney Studios, Sony Pictures ImageWorks, Skydance Animation, Cartoon Saloon, Double Negative o El Ranchito, sino también en desarrolladoras de videojuegos como Ubisoft, King, Rockstar Games, EA o como ingenieros o diseñadores en compañías como Telefónica, Microsoft, IBM, Amazon, Banco Santander, Inditex, Erretres, Ogilvy, Fjord o SoyOlivia.\n\n“Es un orgullo enorme asistir al éxito profesional de nuestros alumnos, tanto a través de los premios obtenidos, como por su extraordinario desempeño laboral. 
Estamos formando a una generación de profesionales que tendrán un peso muy importante en la transformación digital de nuestro país”, afirma Ignacio Pérez Dolset, fundador y CEO de U-tad.\n\nA lo largo de estos años, U-tad ha recibido importantes reconocimientos procedentes de entidades de prestigio nacionales e internacionales: \n\nLa revista americana ‘Animation Magazine’ la incluye en el Top 25 mundial de los mejores centros para estudiar Animación, siendo el único Centro Universitario español y uno de los cuatro europeos que forman parte de esta selección.\n\nLa Global Association for Media Education (GAMEducation) la sitúa como la sexta mejor universidad del mundo para formarse como desarrollador de videojuegos, por el nivel de aprendizaje que ofrece a sus alumnos, los proyectos que estos realizan, la empleabilidad y los premios recibidos por los egresados. \n\nLa Asociación Española de Excelencia Académica (SEDEA) valora a sus alumnos del área de Ingeniería entre los diez mejor formados de toda España. \n\nTitulaciones de alta especialización para una óptima inserción laboral\n\nU-tad surgió de la propia necesidad de la industria de contar con profesionales especializados en competencias digitales capaces de desenvolverse con éxito en sectores como el diseño digital, la ingeniería del software, los videojuegos y la animación, perfiles que por aquel entonces escaseaban en España.\xa0No en vano, fueron los primeros en lanzar un Grado Oficial en ‘Animación’. 
\n\nEn estos diez años, este Centro Universitario se ha consolidado como un referente en formación digital, innovando no solo con las disciplinas impartidas, sino también a través de su particular metodología de aprendizaje multidisciplinar basada en el desarrollo de proyectos reales, dotando así a los alumnos de los conocimientos necesarios que demanda el tejido industrial de cada sector.\n\nDurante esta década, U-tad ha sabido ir adaptando su oferta formativa a la evolución y necesidades de la industria lanzando grados, dobles grados, postgrados y ciclos formativos de grado superior muy diferenciales que ofrecen a los alumnos de un alto nivel de especialización, así como una incorporación inmediata y con todas las garantías al mercado laboral. Actualmente, este Centro Universitario oferta un total de 19 titulaciones, algunas de las cuales pueden ser cursadas íntegramente en inglés así como, en modalidad presencial y online.\n\nEjemplo de lo anterior son los Dobles Grados en ‘Ingeniería del Software y Matemática Computacional’ y, a partir del próximo mes de septiembre, el de ‘Ingeniería del Software y Física Computacional’, convirtiéndose de este modo en la única universidad en España en impartir esta titulación. También cabe destacar los postgrados en las áreas de las realidades extendidas, del Big Data, de los videojuegos y de la animación, siendo pioneros en su impartición en todos ellos. \n\nEl método U-tad: una enseñanza multidisciplinar y un claustro de profesionales en activo.\n\nFruto de su cercanía con el tejido industrial, a través de sus Comités Industriales, y de su conocimiento del sector, en U-tad se forma a los perfiles digitales con las competencias y conocimientos profesionales más demandados por las empresas a nivel global. 
De este modo, contribuyen también al desarrollo de la industria digital en España proporcionando profesionales capaces de liderarla.\n\nA través de un modelo educativo que ofrece un aprendizaje práctico basado en el desarrollo de proyectos reales, muy similares a los que el alumno va a tener que realizar en la empresa, y de un claustro formado tanto por profesionales en activo en la industria (80%) como de Académicos Doctores en su especialidad, U-tad ofrece una formación de excelencia totalmente práctica y cercana a sus alumnos. \n\nAsimismo, la vocación investigadora forma parte de su ADN. Como pioneros en formación en Realidad Virtual, U-tad lidera proyectos de investigación con finalidades prácticas donde además participan alumnos de diferentes áreas de conocimiento. Se trata de ‘Virtual Transplant Reality (VTR)’, una iniciativa pionera a nivel mundial en realidad virtual y aumentada para mejorar la calidad de vida de los pacientes pediátricos trasplantados que se está llevando a cabo en el Hospital Universitario La Paz o ‘CicerOn: VR speech coach’, una aplicación que, a través de técnicas inmersivas de realidad virtual, ayuda a las personas con síndrome de Asperger a entrenar su interacción con otras personas. \n\nDesde sus orígenes, el objetivo de U-tad ha sido el de ofrecer una formación multidisciplinar, promoviendo el desarrollo de trabajos con alumnos de diferentes grados, así como un aprendizaje basado en proyectos reales. Estas premisas convierten a este Centro Universitario en un referente de cómo debe ser la formación superior en España. \n\nSobre U-tad, Centro Universitario de Tecnología y Arte Digital:\n\nU-tad es el primer Centro Universitario especializado 100% en la formación en todas las grandes áreas asociadas a la cadena de valor de la economía digital: Ingeniería del Software, Diseño Digital, Animación, Diseño de Productos Interactivos y Videojuegos, Matemáticas, Física, Realidad Virtual, Aumentada y Mixta, Big Data, Ciberseguridad, etc. 
Una institución única en España orientada a formar a los líderes de la industria digital del presente y futuro, con profesores procedentes de las mejores empresas del sector. Un Centro de primer nivel internacional, basado en la excelencia, la innovación y la tecnología que fomenta el desarrollo del talento y prepara a sus alumnos para las profesiones del mundo digital. www.u-tad.com ", metadata={'source': 'UTAD DB\\UTAD DB VARIADO\\DOCS\\NdP U-tad_X aniversario_may21.docx'}), Document(page_content='Kompożizzjoni tal-kumitati u tad-delegazzjonijiet ... Poštovana predsjedavajuća, poštovani dame i gospodo, dvanaest godina se zna da će Katar biti domaćin\xa0...', metadata={'source': 'https://eur-lex.europa.eu/legal-content/MT/TXT/HTML/?uri=OJ:C:2023:240:FULL'}), Document(page_content='El diseño digital multimedia es una disciplina que combina diferentes formas de medios digitales, como gráficos, imágenes, vídeos, sonido y texto, para crear\xa0...', metadata={'source': 'https://u-tad.com/que-es-diseno-digital/'}), Document(page_content='5 may 2023 ... asistimos a la presentación de los 3 videojuegos que van a desarrollar los alumnos de los másteres de videojuegos durante el curso\xa0...', metadata={'source': 'https://www.hobbyconsolas.com/patrocinado/tad-son-proyectos-alumnos-masteres-videojuegos-22-23-1240272'})] El texto no proporciona ninguna noticia específica.</code></p>
<p>Can anyone help me make it work? I'd appreciate it!</p>
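<p>As far as I can tell, <code>ConversationalRetrievalChain</code> fetches its context through the <code>retriever</code> passed at construction time, so the extra <code>"documents"</code> key in the call inputs is not what the combine step reads. A hedged sketch of the alternative, merging local and web results inside a single retriever-like object, using stubs instead of the real LangChain/Google classes (the names here are illustrative, not LangChain API):</p>

```python
class MergedRetriever:
    # Stub sketch: route both sources through one retriever-like object,
    # since the chain reads context from its `retriever`, not from the inputs.
    def __init__(self, local, web, k_web=3):
        self.local, self.web, self.k_web = local, web, k_web

    def get_relevant_documents(self, query):
        # Local vector-store hits first, then the top web snippets.
        return self.local(query) + self.web(query)[: self.k_web]

# Stand-ins for db.as_retriever(...) and the Google search wrapper.
local = lambda q: [f"local:{q}:1", f"local:{q}:2"]
web = lambda q: [f"web:{q}:{i}" for i in range(5)]

docs = MergedRetriever(local, web).get_relevant_documents("noticias")
print(docs)
```

<p>With the real classes, the merged object would wrap <code>db.as_retriever(...)</code> and <code>custom_search</code>, and be passed as the chain's <code>retriever</code>.</p>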
| <python><chatbot><langchain><large-language-model> | 2023-08-06 08:07:02 | 0 | 662 | Zaesar |
76,844,921 | 12,466,687 | How to plot matplotlib objects from third party libraries in streamlit? | <p>I am using a library called <code>toad</code> from <a href="https://toad.readthedocs.io/en/latest/tutorial.html" rel="nofollow noreferrer">toad for WOE</a>, and one of its functions, <code>bin_plot()</code>, returns a plot of type <code><AxesSubplot: xlabel='mob2', ylabel='prop'></code></p>
<p>When I use this in a <code>notebook</code> I am able to get the plot, but I am unable to plot it in <strong>streamlit</strong>.</p>
<p>(Have <strong>Updated</strong> the Post with <code>sample dataset & code</code> at the bottom of this post)</p>
<p>In Notebook:
<a href="https://i.sstatic.net/EZFt9.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/EZFt9.png" alt="enter image description here" /></a></p>
<p>I tried a couple of things in <code>streamlit</code>, as mentioned below, which didn't work -</p>
<pre><code>st.pyplot(bin_plot(X_train_binned,x=X_train_binned.columns[1],target='status_perf_end'))
</code></pre>
<p>Error: AttributeError: 'AxesSubplot' object has no attribute 'savefig'</p>
<pre><code>binned_1, ax = plt.subplots()
bin_plot(X_train_binned,x=X_train_binned.columns[1],target='status_perf_end')
st.pyplot(binned_1)
</code></pre>
<p><a href="https://i.sstatic.net/q1VF4.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/q1VF4.png" alt="enter image description here" /></a></p>
<pre><code>fig_binned = plt.figure(figsize=(10, 4))
bin_plot(X_train_binned,x=X_train_binned.columns[1],target='status_perf_end')
st.pyplot(fig_binned)
</code></pre>
<p>Plots nothing</p>
<p><strong>UPDATE:</strong></p>
<p>Now when I just added <code>st.pyplot()</code> under the above code, I got this plot with an error:
<a href="https://i.sstatic.net/5nul5.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/5nul5.png" alt="enter image description here" /></a></p>
<p>I am not sure how to get this plot in streamlit error-free. I would appreciate any help.</p>
<p><strong>UPDATE</strong> Adding code for sample dataset</p>
<pre><code>from sklearn import datasets
iris_data = datasets.load_iris()
iris_df = pd.DataFrame(data = iris_data['data'], columns = iris_data['feature_names'])
iris_df['Iris_type'] = iris_data['target']
# keeping it to 2 types only and removing 3rd category
iris_df = iris_df.loc[iris_df.Iris_type != 2]
# importing toad lib
from toad.plot import bin_plot
# plotting from data
bin_plot(iris_df, iris_df.columns[0], target='Iris_type')
# I would like to run this bin_plot() in streamlit
</code></pre>
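<p>A hedged sketch of the usual fix: <code>bin_plot()</code> returns a Matplotlib <code>Axes</code>, while <code>st.pyplot()</code> expects a <code>Figure</code> (hence the missing-<code>savefig</code> error), and every <code>Axes</code> can return its parent figure via <code>get_figure()</code>. The stub below stands in for <code>toad.plot.bin_plot</code>, since toad and streamlit aren't importable here:</p>

```python
import matplotlib
matplotlib.use("Agg")  # headless backend so the sketch runs anywhere
import matplotlib.pyplot as plt

def fake_bin_plot():
    # Stands in for toad.plot.bin_plot(...), which returns an Axes, not a Figure.
    ax = plt.gca()
    ax.bar(["a", "b"], [1, 2])
    return ax

ax = fake_bin_plot()
fig = ax.get_figure()       # every Axes can hand over its parent Figure
print(type(fig).__name__)   # Figure

# In the Streamlit app this would become:
#   st.pyplot(bin_plot(iris_df, iris_df.columns[0], target='Iris_type').get_figure())
```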
| <python><matplotlib><streamlit> | 2023-08-06 07:53:07 | 1 | 2,357 | ViSa |
76,844,814 | 9,317,532 | How to resolve "django.urls.exceptions.NoReverseMatch: Reverse for 'activate' not found." | <p>I'm currently working on a local project using Django 4.2 as the backend. I'm trying to implement email-based account activation. Below is my code snippet.</p>
<p><strong>Project <em>urls.py</em></strong></p>
<pre><code>from django.contrib import admin
from django.urls import path, include
from .auth_token import CustomAuthToken
urlpatterns = [
path('admin/', admin.site.urls),
path('api-auth/', include('rest_framework.urls')),
path('api-token-auth/', CustomAuthToken.as_view()),
path('portal-user/api/', include('portal_user.urls', namespace="portal_user"))
]
</code></pre>
<p><strong>portal_user <em>urls.py</em></strong></p>
<pre><code>from django.urls import path
from .views import activate
app_name = "portal_user"
urlpatterns = [
path(
'activate/<slug:uidb64>/<slug:token>/', activate, name="account-activate"),
]
</code></pre>
<p><strong>portal_user views.py</strong></p>
<pre><code>def send_account_activation_mail(self):
mail_subject = 'Welcome to Our Platform! Activate Your Account'
mail_message = render_to_string('portal_user/account_activate_email.html', {
"first_name": f"{self.first_name}!",
"uidb": urlsafe_base64_encode(force_bytes(self.pk)),
"token": account_activation_token.make_token(self),
"activate_url": CLIENT_DOMAIN
})
try:
email = EmailMessage(
subject=mail_subject, body=mail_message, to=[self.email]
)
email.content_subtype = "html"
email.send()
except BadHeaderError:
return HttpResponse("Invalid header found!")
except Exception as e:
return HttpResponse("Something went wrong!")
def activate(request, uidb64, token):
try:
uid = force_str(urlsafe_base64_decode(uidb64))
user = PortalUser.objects.get(pk=uid)
except (TypeError, ValueError, OverflowError, PortalUser.DoesNotExist):
user = None
if user is not None and account_activation_token.check_token(user, token):
user.is_active = True
user.save()
return Response({"message": "activation_successful"}, status=status.HTTP_200_OK)
else:
return Response({"message": "activation_failed"}, status=status.HTTP_400_BAD_REQUEST)
</code></pre>
<p><strong>Account Activation Mail Template <em>account_activate_email.html</em></strong></p>
<pre><code><a href="{{ activate_url }}{% url 'portal_user:account-activate' uidb64=uid token=token %}">
{{ activate_url }}{% url 'portal_user:account-activate' uidb64=uid token=token %}
</a>
</code></pre>
<p>Whenever I try to send email to users using the <em><strong>send_account_activation_mail</strong></em> function provided in <strong>portal_user <em>views.py</em></strong>, it throws the error</p>
<blockquote>
<p>django.urls.exceptions.NoReverseMatch: Reverse for 'account-activate' with keyword arguments '{'uidb64': '', 'token': 'bshs4w-028ce1c25d313eeaaeb81e3cac46535d'}' not found. 1 pattern(s) tried: ['portal\-user/api/activate/(?P[-a-zA-Z0-9_]+)/(?P[-a-zA-Z0-9_]+)/\Z']`</p>
</blockquote>
<p>and if I remove the namespace from the template, then it throws the error</p>
<blockquote>
<p>`django.urls.exceptions.NoReverseMatch: Reverse for 'account-activate' not found. 'account-activate' is not a valid view function or pattern name.</p>
</blockquote>
<p>I have spent hours on Google regarding these errors but have been unable to solve them. I can't see where I'm going wrong. If anyone has a solution to this issue, please let me know.</p>
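<p>A hedged observation rather than a definitive fix: the error shows <code>'uidb64': ''</code>, i.e. the template variable resolved to an empty string. The context in <code>send_account_activation_mail</code> supplies the key <code>"uidb"</code>, while the template reads <code>uid</code> (<code>uidb64=uid</code>); making the names agree on both sides should let <code>{% url %}</code> reverse. A Django-free sketch of the same consistency check (the token value is a placeholder):</p>

```python
from base64 import urlsafe_b64encode

# Context for account_activate_email.html; use "uid" on BOTH sides
# (previously the dict key was "uidb" while the template read `uid`).
context = {
    "first_name": "Abhishek!",
    "uid": urlsafe_b64encode(b"42").decode().rstrip("="),
    "token": "bshs4w-028c...",  # placeholder token, for illustration only
}

template_needs = ["uid", "token"]  # variables the {% url %} tag consumes
missing = [name for name in template_needs if not context.get(name)]
print(missing)  # [] -> every template variable resolves to a non-empty value
```

<p>The same renaming applies in <code>render_to_string</code>: pass <code>"uid": urlsafe_base64_encode(force_bytes(self.pk))</code> so the template's <code>uidb64=uid</code> receives a non-empty value.</p>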
| <python><django><django-urls> | 2023-08-06 07:14:02 | 0 | 574 | Abhishek Kumar |
76,844,747 | 16,220,410 | password text field for customtkinter | <p>I am trying to integrate Python code I found here on SO, written by user @srccircumflex. His code is for <code>tkinter</code>, and I tried to modify it because I'm using <code>customtkinter</code>.</p>
<p>His code used <code>class PassEntry(Entry):</code>, so I replaced it with <code>class PassEntry(ctk.CTkEntry):</code>; I also replaced <code>Entry</code> with <code>ctk.CTkEntry</code> inside the class.</p>
<p>I am a beginner in Python, and below is what I have so far.</p>
<pre><code>import customtkinter as ctk
import tkinter as tk #using messagebox
from tkinter import filedialog
from tkinter import END, INSERT, SEL_FIRST, SEL_LAST
import PyPDF2
from PyPDF2 import PdfWriter
from sys import platform
from typing import Iterable
merger = PdfWriter()
def merge_pdfs():
#code to merge pdf files
file_password = entry_password.getpass()
...
#PASSENTRY # Copyright (c) 2022 Adrian F. Hoefflin [srccircumflex] ------------------------------
class PassEntry(ctk.CTkEntry):
def __init__(self,
master,
show: chr = "*",
delay: int = 800,
getpass_range: Iterable = None,
getpass_call: ... = None,
getpass_del: bool = False,
**tk_kwargs,
):
"""
Password entry with delayed hiding. Alternative to `Entry(master, show="*")'.
Supports all common character sets(1aA!), multy keys(^`˝) and under Linux also the alternative graphics(↑Ωł).
{Deletes the input from the widget and writes it casually into a variable. Markings and the position
of the cursor is respected.}
howto get the password:
- by protected member self._password
- by calling self.getpass (args `getpass_*' executed here)
- by calling self.get (args `getpass_*' executed here)
:param master: root tk
:param show: displayed char
:param delay: hiding delay
:param getpass_range: check password length
:param getpass_call: callable, gets `self._password' as argument
:param getpass_del: delete `self._password' and flush entry if True
:param tk_kwargs: Valid resource names: background, bd, bg, borderwidth, cursor, exportselection, fg, font, foreground, highlightbackground, highlightcolor, highlightthickness, insertbackground, insertborderwidth, insertofftime, insertontime, insertwidth, invalidcommand, invcmd, justify, relief, selectbackground, selectborderwidth, selectforeground, state, takefocus, textvariable, validate, validatecommand, vcmd, width, xscrollcommand
"""
self._password: str = ""
self.delay: int = delay
self.show: chr = show
self.getpass_range: Iterable = getpass_range
self.getpass_call: ... = getpass_call
self.getpass_del: bool = getpass_del
ctk.CTkEntry.__init__(self, master, **tk_kwargs)
self.bind("<Key>", self._run)
self.bind("<Button>", self._run)
self._external: bool = False
self.get = self.getpass
if platform == "linux":
# (
# MultyKeys, ^ ` ọ ˇ
# NoModifier, a b c d
# Shift+Key, A B C D
# AltGr+Key(AT-Layout), @ ł | ~
# AltGr+Shift+Key(AT-Layout) Ω Ł ÷ ⅜
# )
self._states = (0, 16, 17, 144, 145)
elif platform == "win32":
# (
# AltGr+Key(AT-Layout), @ \ | }
# NoModifier, a b c d
# Shift+Key, A B C D
# )
self._states = (0, 8, 9)
def _char(self, event) -> str:
def del_mkey():
i = self.index(INSERT)
self._delete(i - 1, i)
if event.keysym in ('Delete', 'BackSpace'):
return ""
elif event.keysym == "Multi_key" and len(event.char) == 2: # windows stuff
if event.char[0] == event.char[1]:
self.after(10, del_mkey)
return event.char[0]
return event.char
elif event.char != '\\' and '\\' in f"{event.char=}":
return ""
elif event.num in (1, 2, 3):
return ""
elif event.state in self._states:
return event.char
return ""
def _get(self):
return self.tk.call(self._w, 'get')
def _delete(self, first, last=None):
self.tk.call(self._w, 'delete', first, last)
def _insert(self, index, string: str) -> None:
self.tk.call(self._w, 'insert', index, string)
def _run(self, event):
if self._external and self._char(event):
self._external = False
self.clear()
def hide(index: int, lchar: int):
i = self.index(INSERT)
for j in range(lchar):
self._delete(index + j, index + 1 + j)
self._insert(index + j, self.show)
self.icursor(i)
if event.keysym == 'Delete':
if self.select_present():
start = self.index(SEL_FIRST)
end = self.index(SEL_LAST)
else:
start = self.index(INSERT)
end = start + 1
self._password = self._password[:start] + self._password[end:]
elif event.keysym == 'BackSpace':
if self.select_present():
start = self.index(SEL_FIRST)
end = self.index(SEL_LAST)
else:
if not (start := self.index(INSERT)):
return
end = start
start -= 1
self._password = self._password[:start] + self._password[end:]
elif char := self._char(event):
if self.select_present():
start = self.index(SEL_FIRST)
end = self.index(SEL_LAST)
else:
start = self.index(INSERT)
end = start
self._password = self._password[:start] + char + self._password[end:]
self.after(self.delay, hide, start, len(char))
def insert(self, index, string: str) -> None:
self._external = True
self.tk.call(self._w, 'insert', index, string)
def delete(self, first, last=None) -> None:
self._external = True
self.tk.call(self._w, 'delete', first, last)
def clear(self):
del self._password
self._password = ""
self._delete(0, END)
def getpass(self):
password = self._password
if self.getpass_range:
assert len(self._password) in self.getpass_range, f'## Password not in {self.getpass_range}'
if self.getpass_call:
password = self.getpass_call.__call__(self._password)
if self.getpass_del:
del self._password
self._password = ""
self._delete(0, END)
return password
#PASSENTRY----------------------------------------------------------------------------------
if __name__ == "__main__":
#main window
root = ctk.CTk()
root.title("Merge PDF")
#Create user input entry for filename
...
#Create user input entry for optional password
entry_password = PassEntry(
master=root,
width=200,
height=40,
border_width=1,
fg_color="white",
placeholder_text="OPTIONAL: Enter password...",
text_color="black",
font=('Arial Rounded MT Bold', 14))
entry_password.grid(row=2, column=0, padx=20, pady=(10,20))
# Create a button to merge the selected PDF files
...
root.mainloop()
</code></pre>
| <python><tkinter><customtkinter> | 2023-08-06 06:47:25 | 1 | 1,277 | k1dr0ck |
76,844,623 | 948,801 | Process list of tuples in SQL Server table using python | <p>I would like to insert <code>list of tuple</code> values into a SQL Server table using Python. I know I could loop over the list and call <code>cursor.execute</code> for each tuple, but that is not my requirement. I would like to process the whole list of tuples in a parametrized way, for optimization purposes.</p>
<p>I tried like this but it is not working.</p>
<pre><code>lt = [('a','temp_a','001'),('b','temp_b','002')]
sql = '''
EXECUTE [dbo].[mystrored_proc]
@table = %(parm)s
'''
conn = pymssql.connect(
server=server,
user=user,
password=password,
database=database)
cursor = conn.cursor()
cursor.execute(sql, {'parm': lt})
</code></pre>
| <python><sql><sql-server><stored-procedures> | 2023-08-06 06:10:18 | 1 | 699 | Sekhar |
76,844,581 | 15,320,579 | Convert complex list of dictionaries into dataframe using python | <p>I have the following list of dictionaries:</p>
<pre><code>ip_purchase = [
{
'V_TYPE': 'Purchase', 'V_No': '1', 'DATE': '20230401',
'ITEMS': [{'NAME': 'A100', 'AMOUNT': '-10000.00'}]
},
{
'V_TYPE': 'Purchase', 'V_No': '2', 'DATE': '20230401',
'ITEMS': [{'NAME': 'RTX A6000', 'AMOUNT': '-50000.00'},
{'NAME': 'Jetson Nano', 'AMOUNT': '-4000.00'},
{'NAME': 'V100', 'AMOUNT': '-45000.00'}]
},
{
'V_TYPE': 'Purchase', 'V_No': '3', 'DATE': '20230401',
'ITEMS': [{'NAME': 'A100', 'AMOUNT': '-10000.00'},
{'NAME': 'V100', 'AMOUNT': '-45000.00'}]
},
]
</code></pre>
<p>I want to convert the above list of dictionaries to a dataframe as follows:</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>V_TYPE</th>
<th>V_NO</th>
<th>DATE</th>
<th>ITEMS</th>
<th>AMOUNT</th>
</tr>
</thead>
<tbody>
<tr>
<td>Purchase</td>
<td>1</td>
<td>2023401</td>
<td>A100</td>
<td>-10000</td>
</tr>
<tr>
<td>Purchase</td>
<td>2</td>
<td>2023401</td>
<td>RTX A6000</td>
<td>-50000</td>
</tr>
<tr>
<td>Purchase</td>
<td>2</td>
<td>2023401</td>
<td>Jetson Nano</td>
<td>-4000</td>
</tr>
<tr>
<td>Purchase</td>
<td>2</td>
<td>2023401</td>
<td>V100</td>
<td>-45000</td>
</tr>
<tr>
<td>Purchase</td>
<td>3</td>
<td>2023401</td>
<td>A100</td>
<td>-10000</td>
</tr>
<tr>
<td>Purchase</td>
<td>3</td>
<td>2023401</td>
<td>V100</td>
<td>-45000</td>
</tr>
</tbody>
</table>
</div>
<p>I know I can pass a list of dictionaries straight to <code>pd.DataFrame()</code>, but the nested list in <code>ITEMS</code> is messing up the format. The <code>ip_purchase</code> list can have any number of elements (i.e. dictionaries) and the nested <code>ITEMS</code> list can also have any number of elements (i.e. dictionaries).</p>
<p>Any help is appreciated!</p>
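A sketch of one way to flatten this kind of structure (assuming pandas is available): <code>pd.json_normalize</code> with <code>record_path</code> to explode the nested <code>ITEMS</code> list and <code>meta</code> to carry the outer keys onto each exploded row.

```python
import pandas as pd

ip_purchase = [
    {"V_TYPE": "Purchase", "V_No": "1", "DATE": "20230401",
     "ITEMS": [{"NAME": "A100", "AMOUNT": "-10000.00"}]},
    {"V_TYPE": "Purchase", "V_No": "2", "DATE": "20230401",
     "ITEMS": [{"NAME": "RTX A6000", "AMOUNT": "-50000.00"},
               {"NAME": "Jetson Nano", "AMOUNT": "-4000.00"},
               {"NAME": "V100", "AMOUNT": "-45000.00"}]},
    {"V_TYPE": "Purchase", "V_No": "3", "DATE": "20230401",
     "ITEMS": [{"NAME": "A100", "AMOUNT": "-10000.00"},
               {"NAME": "V100", "AMOUNT": "-45000.00"}]},
]

# record_path explodes each ITEMS entry into its own row;
# meta repeats the outer keys (V_TYPE, V_No, DATE) on every row.
df = pd.json_normalize(ip_purchase, record_path="ITEMS",
                       meta=["V_TYPE", "V_No", "DATE"])
df = df[["V_TYPE", "V_No", "DATE", "NAME", "AMOUNT"]]
print(df)
```

This produces one row per item, with the parent voucher fields repeated, matching the table layout shown above.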
| <python><pandas><dataframe><dictionary> | 2023-08-06 05:55:08 | 4 | 787 | spectre |
76,844,550 | 880,783 | How to type a decorator factory correctly? | <p>I have already posted this example at <a href="https://github.com/python/mypy/issues/15594" rel="nofollow noreferrer">https://github.com/python/mypy/issues/15594</a>, but I suspect it may not even be a bug (instead, I may be typing incorrectly) and I hope for help by this larger community.</p>
<p>In the example code below, <code>mypy</code> does not seem to properly bind the return type of the decorated function (<code>int</code>) to <code>T</code>. Instead, I am seeing references to <code>T`-2</code>, and I am not sure what that means.</p>
<p>(<code>Pylance</code> reports <code>int</code> for the result of <code>fun()</code>, but I am not sure it checks the decorators at all.)</p>
<pre class="lang-py prettyprint-override"><code>from typing import Callable, ParamSpec, TypeVar, reveal_type
T = TypeVar("T")
P = ParamSpec("P")
def decorator_factory(
decorator_: Callable[[Callable[P, T]], Callable[P, T]]
) -> Callable[[Callable[P, T]], Callable[P, T]]:
return decorator_
@decorator_factory
def decorator(fun_: Callable[P, T]) -> Callable[P, T]:
return fun_
@decorator
def fun() -> int:
return 0
var = fun()
print(type(var)) # int
reveal_type(decorator_factory)
reveal_type(decorator)
reveal_type(fun)
reveal_type(var) # expected 'int', got 'T`-2' (Pylance reports 'int')
</code></pre>
<p>The problem disappears when I remove <code>@decorator_factory</code>, so it must be something about typing the decorator factory.</p>
<p><a href="https://github.com/python/mypy/issues/15594" rel="nofollow noreferrer">https://github.com/python/mypy/issues/15594</a> also has closely related examples based on <code>decorator.decorator</code>-style, caller-based decorators that fail with <code>T`-1</code>.</p>
| <python><python-decorators><python-typing> | 2023-08-06 05:43:44 | 0 | 6,279 | bers |
76,844,538 | 11,170,350 | The adapter was unable to infer a handler to use for the event. This is likely related to how the Lambda function was invoked | <p>I am stuck on this error and have no clue. I am trying to run code on AWS Lambda, but I am testing it locally.
Here is the code:<br />
app.py</p>
<pre><code>
from pydantic import BaseModel
from fastapi import FastAPI
from mangum import Mangum
from fastapi.responses import JSONResponse
import uvicorn
app=FastAPI()
handler=Mangum(app)
class TextModel(BaseModel):
text: str
@app.post("/")
def process_data(text: TextModel):
return JSONResponse({'result':text.text})
if __name__ == "__main__":
uvicorn.run(app, host="0.0.0.0", port=9000)
</code></pre>
<p>Dockerfile</p>
<pre><code>FROM public.ecr.aws/lambda/python:3.11
COPY app.py ${LAMBDA_TASK_ROOT}/
COPY requirements.txt .
RUN pip3 install -r requirements.txt --target "${LAMBDA_TASK_ROOT}" -U
#--no-cache-dir
CMD [ "app.handler" ]
</code></pre>
<p>When I test it locally with the following command, I get the error:</p>
<pre><code>curl "http://localhost:9000/2015-03-31/functions/function/invocations" -d '{"text":"hello world!"}'
</code></pre>
| <python><aws-lambda><fastapi><mangum> | 2023-08-06 05:39:21 | 1 | 2,979 | Talha Anwar |
76,844,478 | 1,501,285 | Is it possible to inject and move data in a binary file using Python? | <p>Let's say I have the following:</p>
<pre><code>with open("inject.bin", "ab") as binary_file:
binary_file.write(bytes(str('A'), 'utf-8'))
binary_file.write(bytes(str('C'), 'utf-8'))
</code></pre>
<p>How do I inject <code>bytes(str('B'), 'utf-8')</code> in between <code>A</code> and <code>C</code> so that I know that I can find it using <code>f.seek(1)</code>?</p>
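For context, a minimal sketch of the usual approach: files have no native "insert" operation, so you read the existing bytes, splice the new bytes in at the desired offset, and write the result back.

```python
def insert_bytes(path, offset, data):
    # Files can't insert in place: read everything,
    # splice the new bytes in, then rewrite the file.
    with open(path, "rb") as f:
        content = f.read()
    with open(path, "wb") as f:
        f.write(content[:offset] + data + content[offset:])

# Recreate the file from the question: "A" then "C".
with open("inject.bin", "wb") as f:
    f.write(b"A")
    f.write(b"C")

insert_bytes("inject.bin", 1, b"B")

with open("inject.bin", "rb") as f:
    f.seek(1)
    print(f.read(1))  # b'B'
```

For large files you would stream the tail in chunks instead of reading the whole file into memory, but the read-splice-rewrite idea is the same.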
| <python> | 2023-08-06 05:06:43 | 0 | 7,678 | Bob van Luijt |
76,844,343 | 5,212,614 | How can we map categorical codes in a dataframe back to the original data points in the original dataframe? | <p>I have a simple dataframe that looks like this.</p>
<pre><code>import pandas as pd
# Initialise data of lists
data = [{'Year': 2020, 'Airport':2000, 'Casino':5000, 'Stadium':9000, 'Size':'Small'},
{'Year': 2019, 'Airport':3000, 'Casino':4000, 'Stadium':12000, 'Size':'Medium'},
{'Year': 2018, 'Airport':5000, 'Casino':9000, 'Stadium':10000, 'Size':'Medium'},
{'Year': 2017, 'Airport':5000, 'Casino':10000, 'Stadium':15000, 'Size':'Large'}]
df = pd.DataFrame(data)
df = df.set_index(['Year'])
df
df_fin = pd.DataFrame({col: df[col].astype('category').cat.codes for col in df}, index=df.index)
print(df_fin.columns)
df_fin
</code></pre>
<p><a href="https://i.sstatic.net/5GUr5.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/5GUr5.png" alt="enter image description here" /></a></p>
<p>Then, I convert everything to categorical codes, like this.</p>
<pre><code>df_fin = pd.DataFrame({col: df[col].astype('category').cat.codes for col in df}, index=df.index)
print(df_fin.columns)
df_fin
</code></pre>
<p><a href="https://i.sstatic.net/d6MHG.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/d6MHG.png" alt="enter image description here" /></a></p>
<p>Then, I am doing a basic basic classification experiment, like this.</p>
<pre><code>import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
from sklearn.datasets import make_moons
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestClassifier,AdaBoostClassifier,GradientBoostingClassifier
X = df_fin[['Airport', 'Casino', 'Stadium', ]]
y = df_fin['Size']
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)
clf = AdaBoostClassifier(n_estimators=100)
clf.fit(X_train, y_train)
y_pred = clf.predict(X_test)
accuracy_score(y_test, y_pred)
</code></pre>
<p>Finally, if I want to make a prediction, I can do this.</p>
<pre><code>print(clf.predict([[2, 3, 3]]))
</code></pre>
<p>The result is '0' for size, which is what I would expect for 2017. However, I don't want to use the categorical codes, I want to use the original records from the original 'df'.</p>
<p>How can I make a prediction like this?</p>
<pre><code>print(clf.predict([[5000,10000,15000]]))
</code></pre>
<p>So I can get a prediction of 'Large'. Somehow I need to map the categorical codes back to the records in the original 'df'. How can I do this?</p>
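A sketch of one way to do this, reusing the names from the snippets above: instead of throwing the category mapping away, keep the per-column <code>cat.categories</code> index so raw values can be encoded to the same integer codes before calling <code>predict</code>, and predicted codes can be decoded back to labels.

```python
import pandas as pd

data = [{"Year": 2020, "Airport": 2000, "Casino": 5000, "Stadium": 9000, "Size": "Small"},
        {"Year": 2019, "Airport": 3000, "Casino": 4000, "Stadium": 12000, "Size": "Medium"},
        {"Year": 2018, "Airport": 5000, "Casino": 9000, "Stadium": 10000, "Size": "Medium"},
        {"Year": 2017, "Airport": 5000, "Casino": 10000, "Stadium": 15000, "Size": "Large"}]
df = pd.DataFrame(data).set_index("Year")

# Remember each column's category <-> code mapping.
categories = {col: df[col].astype("category").cat.categories for col in df}

def encode(col, value):
    # Raw value -> the integer code used during training.
    return categories[col].get_loc(value)

def decode(col, code):
    # Integer code (e.g. a prediction) -> original label.
    return categories[col][code]

# Encode the raw 2017 record, then decode the predicted Size code.
row = [encode("Airport", 5000), encode("Casino", 10000), encode("Stadium", 15000)]
print(row)                 # codes to feed into clf.predict([row])
print(decode("Size", 0))   # 'Large'
```

This mirrors the example in the question: the raw record <code>[5000, 10000, 15000]</code> encodes to <code>[2, 3, 3]</code>, and a predicted code of <code>0</code> decodes back to <code>'Large'</code>.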
| <python><python-3.x><dataframe><machine-learning><data-science> | 2023-08-06 03:45:02 | 1 | 20,492 | ASH |
76,844,289 | 6,675,878 | FastAPI: Background outside of route functions behaves differently? | <p>In FastAPI, I'm trying to understand why the background instance not getting created outside the route function handler and its different behaviors.</p>
<p><em>Examples:</em></p>
<p>Standard doc example working as expected:</p>
<pre><code>@app.get('/')
async def index(background_tasks: BackgroundTasks):
background_tasks.add_task(some_function_reference)
#Executes non-blocking in the same asyncio loop without any issues
return "Hello"
</code></pre>
<p>It behaves differently when adding the background_tasks outside of the route function:</p>
<pre><code>async def some_logic(background_tasks: BackgroundTasks):
#Throws a "required positional argument missing" error
background_tasks.add_task(some_function_reference)
@app.get('/')
async def index():
await some_logic()
#Executes non-blocking in the same asyncio loop
return "Hello"
</code></pre>
<p>Meanwhile, if we try to initialize the <code>BackgroundTasks</code> in the <code>some_logic</code> function, the task does not run, as follows:</p>
<pre><code>async def some_logic():
#Does not Run
background_tasks = BackgroundTasks()
background_tasks.add_task(some_function_reference)
@app.get('/')
async def index(background_tasks: BackgroundTasks):
await some_logic()
#Executes non-blocking in the same asyncio loop
return "Hello"
</code></pre>
<p>Why would these three cases be different? Why do I need to pass the background tasks from the route function to the called function?</p>
| <python><fastapi> | 2023-08-06 03:16:46 | 1 | 822 | STOPIMACODER |
76,844,260 | 14,745,738 | Selenium Automation for React Material UI Input Fields? | <p>I'm hoping someone might be able to point me in the right direction here.
I'm trying to use selenium to automate entering details for media files after they've been uploaded to a website. I'm able to navigate to the media library and parse the list for the files in question, but I'm hung up on how to send input to the fields.</p>
<p>It looks like the website was built with React Material UI, and I'm having to use xpath exclusively, which is actually working pretty well. I'm able to get to the media details page, but that's where I've hit the wall.</p>
<p>Here's the exact html for the input field I'm trying to interact with and what the page looks like.</p>
<pre><code><div class="MuiFormControl-root MuiFormControl-marginNormal MuiFormControl-fullWidth">
<label class="MuiFormLabel-root MuiInputLabel-root MuiInputLabel-formControl MuiInputLabel-animated MuiInputLabel-shrink" data-shrink="true" for="formatted-text-mask-input">Subtitle</label>
<div class="MuiInputBase-root MuiInput-root MuiInput-underline MuiInputBase-formControl MuiInput-formControl" title="Subtitle">
<input aria-invalid="false" type="text" class="MuiInputBase-input MuiInput-input" value="" data-np-intersection-state="visible">
</div>
<p class="MuiFormHelperText-root"></p>
</div>
</div>
</code></pre>
<p><a href="https://i.sstatic.net/NnSi9.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/NnSi9.png" alt="enter image description here" /></a></p>
<p>I've tried to get the input field by xpath and css selector using:</p>
<p><code>driver.find_element(By.XPATH('//*[@id="app"]/div/div/div[2]/div/div/div[4]/div/div[2]/div[1]/div[3]/div/div/input'))</code></p>
<p>But this gets the text property, not the element, and throws a TypeError.
I read that values can't be set directly with Selenium because it's meant to simulate user input.</p>
<p>Does this need to be done with
<code>webdriver.execute_script("set_attribute(arguments?)")</code>? If so, how do I pass my text using <code>execute_script</code>?</p>
<p>Or is there some other browser automation that would work with React/Material UI websites?</p>
<p>Thanks!</p>
| <python><selenium-webdriver><material-ui><selenium-chromedriver> | 2023-08-06 02:51:50 | 2 | 539 | dbarnes |
76,844,072 | 1,982,032 | Is the class which contains __getattribute__ a descriptor? | <p>In Python 3's manual:<br />
<a href="https://docs.python.org/3/howto/descriptor.html#definition-and-introduction" rel="nofollow noreferrer">https://docs.python.org/3/howto/descriptor.html#definition-and-introduction</a><br />
In general, a descriptor is an attribute value that has one of the methods in the descriptor protocol. Those methods are <code>__get__()</code>, <code>__set__()</code>, and <code>__delete__()</code>. If any of those methods are defined for an attribute, it is said to be a descriptor.</p>
<pre><code>class Room:
def __init__(self,name):
self.name = name
def __getattribute__(self,attr):
return object.__getattribute__(self,attr)
def __setattr__(self,attr,value):
return object.__setattr__(self,attr,value)
</code></pre>
<p><code>__getattribute__</code> and <code>__setattr__</code> are Python magic methods (built-in hooks). The <code>Room</code> class uses these magic methods to implement <code>__get__</code>- and <code>__set__</code>-like behavior. Strictly speaking, is the class <code>Room</code> a descriptor?</p>
| <python><python-3.x><descriptor> | 2023-08-06 00:54:05 | 2 | 355 | showkey |
76,843,984 | 11,652,655 | I have an error when running my Django application | <p>I'm following a tutorial on deploying a machine learning model via the Django framework. Here's the link to the tutorial: <a href="https://www.deploymachinelearning.com/train-ml-models/" rel="nofollow noreferrer">Link1</a>.
I followed all the steps correctly up to step 3. First, I wrote all the code myself. Seeing that my application wasn't working, I copied and pasted all the Django code from the site.
Here's the link to my project on GitHub: <a href="https://github.com/seydoug007/my_ml_service2.git" rel="nofollow noreferrer">Link2</a></p>
<p>I'm using <code>python 3.6</code>
and <code>django==2.2.4</code></p>
<p>When I run <code>python manage.py runserver</code> in my terminal, I get this error message:</p>
<pre><code>(venv) C:\Users\SEYDOU GORO\my_ml_service2\backend\server>python manage.py runserver
Watching for file changes with StatReloader
Performing system checks...
Exception in thread django-main-thread:
Traceback (most recent call last):
File "C:\Users\SEYDOU GORO\my_ml_service2\venv\lib\site-packages\django\urls\resolvers.py", line 604, in url_patterns
iter(patterns)
TypeError: 'module' object is not iterable
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "C:\Users\SEYDOU GORO\AppData\Local\Programs\Python\Python36\lib\threading.py", line 916, in _bootstrap_inner
self.run()
File "C:\Users\SEYDOU GORO\AppData\Local\Programs\Python\Python36\lib\threading.py", line 864, in run
self._target(*self._args, **self._kwargs)
File "C:\Users\SEYDOU GORO\my_ml_service2\venv\lib\site-packages\django\utils\autoreload.py", line 64, in wrapper
fn(*args, **kwargs)
File "C:\Users\SEYDOU GORO\my_ml_service2\venv\lib\site-packages\django\core\management\commands\runserver.py", line 118, in inner_run
self.check(display_num_errors=True)
File "C:\Users\SEYDOU GORO\my_ml_service2\venv\lib\site-packages\django\core\management\base.py", line 423, in check
databases=databases,
File "C:\Users\SEYDOU GORO\my_ml_service2\venv\lib\site-packages\django\core\checks\registry.py", line 76, in run_checks
new_errors = check(app_configs=app_configs, databases=databases)
File "C:\Users\SEYDOU GORO\my_ml_service2\venv\lib\site-packages\django\core\checks\urls.py", line 13, in check_url_config
return check_resolver(resolver)
File "C:\Users\SEYDOU GORO\my_ml_service2\venv\lib\site-packages\django\core\checks\urls.py", line 23, in check_resolver
return check_method()
File "C:\Users\SEYDOU GORO\my_ml_service2\venv\lib\site-packages\django\urls\resolvers.py", line 416, in check
for pattern in self.url_patterns:
File "C:\Users\SEYDOU GORO\my_ml_service2\venv\lib\site-packages\django\utils\functional.py", line 48, in __get__
res = instance.__dict__[self.name] = self.func(instance)
File "C:\Users\SEYDOU GORO\my_ml_service2\venv\lib\site-packages\django\urls\resolvers.py", line 611, in url_patterns
raise ImproperlyConfigured(msg.format(name=self.urlconf_name)) from e
django.core.exceptions.ImproperlyConfigured: The included URLconf 'server.urls' does not appear to have any patterns in it. If you see valid patterns in the file then the issue is probably caused by a circular import.
</code></pre>
<p>I don't know what's going on, but I need help to understand what's going on.</p>
| <python><django> | 2023-08-06 00:06:44 | 2 | 1,285 | Seydou GORO |
76,843,958 | 1,667,018 | Sequence or list type annotation with append | <p>I have the following setting:</p>
<pre class="lang-py prettyprint-override"><code>class A:
pass
class B(A):
pass
def add_element(lst: list[A], el: A) -> None:
lst.append(el)
lst: list[B] = [B()]
add_element(lst, B())
</code></pre>
<p>This causes an error with mypy:</p>
<pre><code>t.py:11: error: Argument 1 to "add_element" has incompatible type "List[B]"; expected "List[A]"
t.py:11: note: "List" is invariant -- see https://mypy.readthedocs.io/en/stable/common_issues.html#variance
t.py:11: note: Consider using "Sequence" instead, which is covariant
</code></pre>
<p>If I try this instead:</p>
<pre class="lang-py prettyprint-override"><code>def add_element(lst: Sequence[A], el: A) -> None:
lst.append(el)
</code></pre>
<p>I get</p>
<pre><code>t.py:10: error: "Sequence[A]" has no attribute "append"
</code></pre>
<p>I can think of ways to force this to work, but what is the correct way to do it?</p>
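For context, a sketch of the usual pattern when a function both reads and writes its list parameter: make the function generic instead of relying on covariance. The list type stays invariant, but <code>T</code> binds to <code>B</code> at the call site, so both the argument and the <code>append</code> type-check.

```python
from typing import TypeVar

T = TypeVar("T")

class A: ...
class B(A): ...

def add_element(lst: list[T], el: T) -> None:
    # Generic rather than covariant: when called with a list[B]
    # and a B, T solves to B, so mypy accepts the append too.
    lst.append(el)

lst: list[B] = [B()]
add_element(lst, B())
print(len(lst))  # 2
```

This also stays type-safe: calling <code>add_element(lst, A())</code> would force <code>T = A</code>, and mypy would then reject passing a <code>list[B]</code> where a <code>list[A]</code> is needed.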
| <python><python-typing> | 2023-08-05 23:52:10 | 1 | 3,815 | Vedran Šego |
76,843,860 | 913,749 | Unable to execute Jupyter Notebook from browser - UsageError: Line magic function `%profile` not found | <p>I am trying to set-up Notebook on my Mac. It is a new M2 Mac, if that makes a difference. Previously, I had followed the same steps to set-up the notebook on a Intel chip Mac successfully.</p>
<p>I have installed Python 3.10, created a virtual environment and installed <code>jupyter notebook</code>. When I start the notebook on the terminal, there is no error. It also automatically opens <code>http://localhost:8888/tree</code> in the browser and I can create a new notebook. But when I try to execute below commands in the cell -</p>
<pre><code>%profile saml
%idle_timeout 15
</code></pre>
<p>I get the below error</p>
<pre><code>UsageError: Line magic function `%profile` not found.
</code></pre>
<p>There is no other error in the browser or on the terminal from where I started the notebook. There is no error or stack trace on the terminal</p>
<p>What should I check to identify the actual error? Or, how can I resolve the issue and get it working?</p>
<p><strong>Update</strong></p>
<p>Added versions</p>
<pre><code>jupyter --version
Selected Jupyter core packages...
IPython : 8.14.0
ipykernel : 6.25.0
ipywidgets : 8.1.0
jupyter_client : 8.3.0
jupyter_core : 5.3.1
jupyter_server : 2.7.0
jupyterlab : 4.0.4
nbclient : 0.8.0
nbconvert : 7.7.3
nbformat : 5.9.2
notebook : 7.0.2
qtconsole : 5.4.3
traitlets : 5.9.0
</code></pre>
<p>Link for docs for IPython 8.14 - <a href="https://ipython.readthedocs.io/en/8.14.0/interactive/magics.html" rel="nofollow noreferrer">IPython 8.14 docs</a></p>
| <python><macos><jupyter-notebook><apple-silicon> | 2023-08-05 22:59:42 | 1 | 2,259 | adbdkb |
76,843,809 | 10,133,797 | Change docstring max line length while preserving newlines, indents, and words | <p>Given a function docstring, I seek to reduce its max line length. My attempt preserves newlines, indents, and words (doesn't break words), but fails to actually enforce max line length. My other attempts enforce length but fail at preserving, and code tends to complicate fast.</p>
<p>Autopep8 appears to have an <a href="https://github.com/hhatto/autopep8/issues/497" rel="nofollow noreferrer">unfixed bug</a> since 2019.</p>
<p>I need an automated way to change docstring max length - overwriting target file allowed. Am I missing something simple with <code>textwrap</code>, or is there any other module/utility?</p>
<h3>Attempt</h3>
<pre class="lang-py prettyprint-override"><code>import textwrap
def change_maxlen(txt):
# hard-coded stuff (e.g. `76`) for demo simplicity
tnew = [" "]
for l in txt.splitlines():
wrap = textwrap.fill(l[4:], width=76)
if '\n' in wrap:
pnew = "\n " + wrap.replace('\n', ' ')
else:
pnew = "\n " + wrap
tnew += [pnew]
tnew = "".join(tnew).replace('"""', '')
return tnew
</code></pre>
<p>Many other failed attempts...</p>
<h3>Example input</h3>
<pre><code> """Compute spatial support of `pf` as the interval, in number of samples,
where sum of envelope (absolute value) inside it is `1 / criterion_amplitude`
times greater than outside.
Used for avoiding boundary effects and incomplete filter decay. Accounts for
tail decay, but with lessser weight than the Heisgenberg resolution measure
for monotonic decays (i.e. no bumps/spikes). For prioritizing the main lobe,
see `dumdum.utils.measures.compute_temporal_width()`.
Parameters
----------
pf : np.ndarray, 1D
Filter, in frequency domain.
Assumes that the time-domain waveform, `ifft(pf)`, is centered at index
guarantee_decay : bool (default False)
In practice, this is identical to `True`, though `True` will still return
Returns
-------
support : int
Total temporal support of `pf`, measured in samples ("total" as opposed to
only right or left part).
"""
</code></pre>
<h3>Desired output</h3>
<pre><code> """Compute spatial support of `pf` as the interval, in number of samples,
where sum of envelope (absolute value) inside it is `1 /
criterion_amplitude` times greater than outside.
Used for avoiding boundary effects and incomplete filter decay. Accounts
for tail decay, but with lessser weight than the Heisgenberg resolution
measure for monotonic decays (i.e. no bumps/spikes). For prioritizing the
main lobe, see `dumdum.utils.measures.compute_temporal_width()`.
Parameters
----------
pf : np.ndarray, 1D
Filter, in frequency domain.
Assumes that the time-domain waveform, `ifft(pf)`, is centered at index
guarantee_decay : bool (default False)
In practice, this is identical to `True`, though `True` will still
return
Returns
-------
support : int
Total temporal support of `pf`, measured in samples ("total" as opposed
to only right or left part).
"""
</code></pre>
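For comparison, here is a sketch of how <code>textwrap.fill</code> can be made to preserve indents: group consecutive non-blank lines that share the same indent into one paragraph, then wrap each group with matching <code>initial_indent</code>/<code>subsequent_indent</code>. It is an approximation, not a drop-in fix — consecutive equally-indented short lines get merged into one paragraph.

```python
import itertools
import textwrap

def rewrap(text: str, width: int = 79) -> str:
    """Re-wrap text to `width`, preserving blank lines and indent levels."""
    def indent_of(line: str) -> int:
        return len(line) - len(line.lstrip())

    out = []
    for is_blank, block in itertools.groupby(text.splitlines(),
                                             key=lambda l: not l.strip()):
        block = list(block)
        if is_blank:
            out.extend(block)          # keep blank lines as-is
            continue
        # Consecutive lines with the same indent form one paragraph.
        for indent, para in itertools.groupby(block, key=indent_of):
            pad = " " * indent
            out.append(textwrap.fill(
                " ".join(line.strip() for line in para),
                width=width,
                initial_indent=pad,
                subsequent_indent=pad,
            ))
    return "\n".join(out)

sample = ("    Compute spatial support of `pf` as the interval, in number of "
          "samples, where the sum of the envelope inside it is large.")
print(rewrap(sample, width=40))
```

Deeper-indented lines (like numpy-style parameter descriptions under their headers) keep their own indent, since a change of indent starts a new group.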
| <python><string><multiline> | 2023-08-05 22:37:56 | 2 | 19,954 | OverLordGoldDragon |
76,843,756 | 5,953,720 | Is there a way to execute selenium "queries" via commandline interactively? | <p>Is there a way to execute Selenium "queries" via the command line interactively? Instead of creating a static file and running it, I would like to execute Selenium Python code interactively via the terminal. Is it possible?</p>
<p>For example, the Selenium browser opens up, and from the terminal I write <code>driver.get("www.google.com")</code> and the Selenium browser executes it. Then, for example, I write <code>driver.find_element("name", "test")</code> in the terminal and the Selenium browser does that, etc.</p>
| <python><selenium-webdriver><selenium-chromedriver> | 2023-08-05 22:17:28 | 1 | 1,517 | Allexj |
76,843,609 | 15,452,898 | Filter rows based on multiple advanced criteria in PySpark | <p>Currently I'm performing some calculations on a database that contains information on how loans are paid by borrowers.</p>
<p>And my goal is to create a new dataframe that will include loans that fall under the following criteria:</p>
<ul>
<li>a borrower (ID) has at least 2 loans;</li>
<li>each subsequent loan is smaller than the previous one and is issued within 15 days after the repayment of the previous loan;</li>
<li>the last loan (an outstanding loan with no ClosingDate) is bigger than the previous loan.</li>
</ul>
<p>Here is my dataframe:</p>
<pre><code>Name ID ContractDate LoanSum ClosingDate
A ID1 2022-10-10 10 2022-10-15
A ID1 2022-10-16 8 2022-10-25
A ID1 2022-10-27 25
B ID2 2022-12-12 10 2022-10-15
B ID2 2022-12-16 22 2022-11-18
B ID2 2022-12-20 9 2022-11-25
B ID2 2023-11-29 13
C ID3 2022-11-11 30 2022-11-18
</code></pre>
<p>My expected result is:</p>
<pre><code>Name ID ContractDate LoanSum ClosingDate
A ID1 2022-10-10 10 2022-10-15
A ID1 2022-10-16 8 2022-10-25
A ID1 2022-10-27 25
B ID2 2022-12-12 10 2022-10-15
B ID2 2022-12-16 22 2022-11-18
B ID2 2022-12-20 9 2022-11-25
B ID2 2023-11-29 13
</code></pre>
<p>What I have already done:
(the code below helps to catch loans but it does not include outstanding loans with no ClosingDate that are bigger than previous loans)</p>
<pre><code>cols = df.columns
w = Window.partitionBy('ID').orderBy('ContractDate')
newdf = df.withColumn('PreviousContractDate', f.lag('ContractDate').over(w)) \
.withColumn('PreviousLoanSum', f.lag('LoanSum').over(w)) \
.withColumn('Target', f.expr('datediff(ContractDate, PreviousContractDate) >= 1 and datediff(ContractDate, PreviousContractDate) < 16 and LoanSum - PreviousLoanSum < 0')) \
.withColumn('Target', f.col('Target') | f.lead('Target').over(w)) \
.filter('Target == True')
</code></pre>
<p>Any help is highly appreciated!
Thanks a lot for ideas / solutions!</p>
| <python><pyspark><filtering><data-manipulation> | 2023-08-05 21:32:32 | 0 | 333 | lenpyspanacb |
76,843,388 | 413,653 | How can I save my pandas dataframe to a sqlite3 database while preserving the datetime as the identifier? | <p>When I create the dataset originally it has the first column as a Date</p>
<pre><code>print("saving this training set for " + symbol)
print (thisDataset)
dbHelper.saveTrainingSet(thisDataset, symbol)
</code></pre>
<p>In dbHelper:</p>
<pre><code>def saveTrainingSet(trainingSet, symbol):
conn = sqlite3.connect('my_database')
c = conn.cursor()
c.execute('CREATE TABLE IF NOT EXISTS trainingsets_' + symbol + ' (' + readColumnsForSaving() +')')
conn.commit()
trainingSet.to_sql('trainingsets_' + symbol, conn, if_exists='replace', index = False)
</code></pre>
<p><a href="https://i.sstatic.net/nxmTP.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/nxmTP.png" alt="enter image description here" /></a></p>
<p>When I load the dataset, it no longer has the dates as the identifier column. How can I preserve it?</p>
<pre><code>returnedTrainingset = dbHelper.getTrainingSet(symbol)
print(returnedTrainingset)
</code></pre>
<p>In dbHelper:</p>
<pre><code>def getTrainingSet(symbol):
conn = sqlite3.connect('technical_analysis_database')
c = conn.cursor()
c.execute('SELECT * FROM trainingsets_' + symbol)
    return pd.DataFrame(c.fetchall(), columns=readColumnsForLoading())
</code></pre>
<p><a href="https://i.sstatic.net/SIm5t.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/SIm5t.png" alt="enter image description here" /></a></p>
<p>The column list is long so I just replaced it with 2 different readColumns...() method placeholders. Neither column string contains the Date column (when I tried adding it to both, the one for loading threw an exception saying there was 1 too many columns).</p>
<p>How can I save the original dataset including the date identifier column to a sqlite3 database table? And how can I retrieve it?</p>
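For reference, a minimal sketch of round-tripping a DatetimeIndex through sqlite3 with pandas alone (the table and column names here are made up): write with <code>index=True</code> plus an <code>index_label</code> so the date index is stored as a real column, then read back with <code>read_sql_query</code> using <code>index_col</code> and <code>parse_dates</code>.

```python
import sqlite3
import pandas as pd

df = pd.DataFrame(
    {"open": [1.0, 2.0], "close": [1.5, 2.5]},
    index=pd.to_datetime(["2023-08-01", "2023-08-02"]),
)
df.index.name = "Date"

conn = sqlite3.connect(":memory:")

# index=True stores the date index as a column named "Date".
df.to_sql("trainingsets_demo", conn, if_exists="replace",
          index=True, index_label="Date")

# index_col/parse_dates restore it as a DatetimeIndex on the way back.
loaded = pd.read_sql_query("SELECT * FROM trainingsets_demo", conn,
                           index_col="Date", parse_dates=["Date"])
print(loaded)
```

The key differences from the snippets above are <code>index=True</code> instead of <code>index=False</code> when saving, and letting pandas rebuild the frame from the query instead of wrapping <code>c.fetchall()</code> manually.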
| <python><sqlite> | 2023-08-05 20:15:02 | 1 | 1,537 | Mark |
76,843,366 | 2,867,882 | FTP Tab Delimited Text file unable to save as utf-8 encoded | <p>First - No, I cannot change the FTP settings. It is entirely locked down as it is embedded in a device and extremely old. It doesn't support PASV, TLS, or anything a more modern server would. The software is WS_FTP 4.8 (as far as I can tell); I can't even find a release date for that version, but it is probably somewhere around 20 years old. I have contacted the company that owns these devices to see if they will do the right thing and do a firmware update that puts a better FTP server in their hardware, but I have not received a response.</p>
<p>Issuing a set PASV gives a 502, so I know for sure it doesn't support that; I am not even sure that has anything to do with this issue. I think it has to do with whatever the underlying OS is on this device.</p>
<p>FTP login messages:</p>
<pre><code>Connection established, waiting for welcome message...
Status: Insecure server, it does not support FTP over TLS.
Status: Server does not support non-ASCII characters.
</code></pre>
<p>I am going to post the different things I have tried to do to fix this:</p>
<pre><code> with open(local_temp_file, 'wb', encoding='UTF-8', errors='replace') as local_file:
conn.retrbinary('RETR ' + filename_convention
+ yesterday + '.txt', local_file.write)
</code></pre>
<p>FTP Logs:</p>
<pre><code>*resp* '200 Type set to I.'
*resp* '200 PORT command successful.'
*cmd* 'RETR Data Log Trend_Ops_Data_Log_230804.txt'
*resp* '150 Opening BINARY mode data connection for Data Log Trend_Ops_Data_Log_230804.txt.'
</code></pre>
<p>Traceback:</p>
<pre><code>{'TypeError'}
Traceback (most recent call last):
File "c:\users\justin\onedrive\documents\epic_cleantec_work\ftp log retriever\batch_data_get.py", line 123, in get_log_data
conn.retrbinary('RETR ' + filename_convention
File "D:\Anaconda\Lib\ftplib.py", line 441, in retrbinary
callback(data)
TypeError: write() argument must be str, not bytes
</code></pre>
<p>Ok cool - str not bytes</p>
<pre><code> with open(local_temp_file, 'w', encoding='UTF-8', errors='replace') as local_file:
conn.retrlines('RETR ' + filename_convention
+ yesterday + '.txt', local_file.write)
</code></pre>
<p>FTP logs:</p>
<pre><code>*resp* '200 Type set to A.'
*resp* '200 PORT command successful.'
*cmd* 'RETR Data Log Trend_Ops_Data_Log_230804.txt'
*resp* '150 Opening ASCII mode data connection for Data Log Trend_Ops_Data_Log_230804.txt.'
</code></pre>
<p>Traceback:</p>
<pre><code>{'UnicodeDecodeError'}
Traceback (most recent call last):
File "c:\users\justin\onedrive\documents\epic_cleantec_work\ftp log retriever\batch_data_get.py", line 123, in get_log_data
conn.retrlines('RETR ' + filename_convention
File "D:\Anaconda\Lib\ftplib.py", line 465, in retrlines
line = fp.readline(self.maxline + 1)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "<frozen codecs>", line 322, in decode
UnicodeDecodeError: 'utf-8' codec can't decode byte 0xff in position 0: invalid start byte
</code></pre>
<p>Now, I do have an FTP client, and I have downloaded these files successfully with it, and my code sending them to S3 has worked. The problem is I don't know what the encoding actually is.</p>
<p>Opening it with Open Office Calc it just says Unicode.</p>
<p>I tried</p>
<pre><code>def detect_encoding(file):
detector = chardet.universaldetector.UniversalDetector()
with open(file, "rb") as f:
for line in f:
detector.feed(line)
if detector.done:
break
detector.close()
return detector.result
</code></pre>
<p>and then</p>
<pre><code>    f = open(local_temp_file, 'wb')
    conn.retrbinary('RETR ' + filename_convention
                    + yesterday + '.txt', f.write)
    f.close()
    f.encode('utf-8')
    print(detect_encoding(f))
</code></pre>
<p>Traceback:</p>
<pre><code>{'AttributeError'}
Traceback (most recent call last):
File "c:\users\justin\onedrive\documents\epic_cleantec_work\ftp log retriever\batch_data_get.py", line 139, in get_log_data
f.encode('utf-8')
^^^^^^^^
AttributeError: '_io.BufferedWriter' object has no attribute 'encode'
</code></pre>
<p>I also tried the above function with</p>
<pre><code> f = open(local_temp_file, 'wb')
conn.retrbinary('RETR ' + filename_convention
+ yesterday + '.txt', f.write)
f.close()
</code></pre>
<p>Traceback:</p>
<pre><code>{'TypeError'}
Traceback (most recent call last):
File "c:\users\justin\onedrive\documents\epic_cleantec_work\ftp log retriever\batch_data_get.py", line 140, in get_log_data
print(detect_encoding(f))
^^^^^^^^^^^^^^^^^^
File "c:\users\justin\onedrive\documents\epic_cleantec_work\ftp log retriever\batch_data_get.py", line 69, in detect_encoding
with open(file, "rb") as f:
^^^^^^^^^^^^^^^^
TypeError: expected str, bytes or os.PathLike object, not BufferedWriter
</code></pre>
<p>Ultimately, here is why this matters: these tab-delimited text files get dropped into S3 and crawled by a Glue crawler, and currently it cannot read the columns properly. It only sees a single column, because the opening of the file shows ?? in diamonds, which tells me it is encoded in some way that the crawler doesn't recognize as a CSV using a '\t' delimiter (yes, I set a classifier for this). I also need to append 2 fields (columns) to the tab-delimited file to give the site name and a timestamp for each row of data, which I can't do to just a string.</p>
<p>I am likely missing something simple, but I have scoured Google and seen a lot of SO posts, and I cannot seem to find a solution.</p>
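<p>A sketch of one way to diagnose this without third-party libraries: the 0xff at position 0 in the traceback strongly suggests a byte-order mark, i.e. the files may be UTF-16 rather than UTF-8 (an assumption worth verifying against a real file). Download in binary mode, sniff the BOM, then decode:</p>

```python
# Sketch: download raw bytes, sniff a BOM, and decode accordingly.
# The 0xFF prefix in the traceback hints at UTF-16-LE; this illustrates
# the technique, not the device's actual (unknown) encoding.
import codecs

def sniff_encoding(raw: bytes) -> str:
    """Guess an encoding from a leading byte-order mark; default to latin-1."""
    for bom, name in (
        (codecs.BOM_UTF8, "utf-8-sig"),
        (codecs.BOM_UTF16_LE, "utf-16"),  # the utf-16 codec consumes the BOM
        (codecs.BOM_UTF16_BE, "utf-16"),
    ):
        if raw.startswith(bom):
            return name
    return "latin-1"  # never fails to decode; a safe fallback for inspection

# In the real script the bytes come from conn.retrbinary(...); faked here.
raw = "Date\tValue\n2023-08-04\t1.5\n".encode("utf-16")  # encode adds a BOM
encoding = sniff_encoding(raw)
text = raw.decode(encoding)
print(encoding)                        # utf-16
print(repr(text.splitlines()[0]))      # 'Date\tValue'
```

<p>Once decoded to real text, appending the site-name and timestamp columns is ordinary string (or pandas) work, and the file can be re-written as plain UTF-8 before the S3 upload so the Glue crawler sees the tab delimiter.</p>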
| <python><unicode><utf-8><ftp> | 2023-08-05 20:09:10 | 1 | 1,076 | Shenanigator |
76,843,232 | 3,577,105 | How to generate searchable PDF using reportlab? | <p>Here's some code that generates pdfs and has been in stable use for a few years - however, I just noticed that the generated pdf is not searchable in acrobat reader. How can I make the generated pdf searchable?</p>
<p>Notice that the element containing the content to be searched is a table - maybe that's the hitch?</p>
<pre><code>from reportlab.lib import colors,utils
from reportlab.lib.pagesizes import letter,landscape,portrait
from reportlab.platypus import SimpleDocTemplate, Table, TableStyle, Paragraph, Image, Spacer
from reportlab.lib.styles import getSampleStyleSheet,ParagraphStyle
from reportlab.lib.units import inch
</code></pre>
<p>...
...</p>
<pre><code> doc = SimpleDocTemplate(pdfName, pagesize=landscape(letter),leftMargin=0.5*inch,rightMargin=0.5*inch,topMargin=1.03*inch,bottomMargin=0.5*inch) # or pagesize=letter
# self.logMsgBox.show()
# QTimer.singleShot(5000,self.logMsgBox.close)
QCoreApplication.processEvents()
elements=[]
for team in teamFilterList:
extTeamNameLower=getExtTeamName(team).lower()
radioLogPrint=[]
styles = getSampleStyleSheet()
styles.add(ParagraphStyle(
name='operator',
parent=styles['Normal'],
backColor='lightgrey'
))
headers=MyTableModel.header_labels[0:6]
if self.useOperatorLogin:
operatorImageFile=os.path.join(iconsDir,'user_icon_80px.png')
if os.path.isfile(operatorImageFile):
rprint('operator image file found: '+operatorImageFile)
headers.append(Image(operatorImageFile,width=0.16*inch,height=0.16*inch))
else:
rprint('operator image file not found: '+operatorImageFile)
headers.append('Op.')
radioLogPrint.append(headers)
## if teams and opPeriod==1: # if request op period = 1, include 'Radio Log Begins' in all team tables
## radioLogPrint.append(self.radioLog[0])
entryOpPeriod=1 # update this number when 'Operational Period <x> Begins' lines are found
## hits=False # flag to indicate whether this team has any entries in the requested op period; if not, don't make a table for this team
for row in self.radioLog:
opStartRow=False
## rprint("message:"+row[3]+":"+str(row[3].split()))
if row[3].startswith("Radio Log Begins:"):
opStartRow=True
if row[3].startswith("Operational Period") and row[3].split()[3] == "Begins:":
opStartRow=True
entryOpPeriod=int(row[3].split()[2])
# #523: handled continued incidents
if row[3].startswith('Radio Log Begins - Continued incident'):
opStartRow=True
entryOpPeriod=int(row[3].split(': Operational Period ')[1].split()[0])
## rprint("desired op period="+str(opPeriod)+"; this entry op period="+str(entryOpPeriod))
if entryOpPeriod == opPeriod:
if team=="" or extTeamNameLower==getExtTeamName(row[2]).lower() or opStartRow: # filter by team name if argument was specified
style=styles['Normal']
if 'RADIO OPERATOR LOGGED IN' in row[3]:
style=styles['operator']
printRow=[row[0],row[1],row[2],Paragraph(row[3],style),Paragraph(row[4],styles['Normal']),Paragraph(row[5],styles['Normal'])]
if self.useOperatorLogin:
if len(row)>10:
printRow.append(row[10])
else:
printRow.append('')
radioLogPrint.append(printRow)
## hits=True
if not teams:
# #523: avoid exception
try:
radioLogPrint[1][4]=self.datum
except:
rprint('Nothing to print for specified operational period '+str(opPeriod))
return
rprint("length:"+str(len(radioLogPrint)))
if not teams or len(radioLogPrint)>2: # don't make a table for teams that have no entries during the requested op period
if self.useOperatorLogin:
colWidths=[x*inch for x in [0.5,0.6,1.25,5.2,1.25,0.9,0.3]]
else:
colWidths=[x*inch for x in [0.5,0.6,1.25,5.5,1.25,0.9]]
t=Table(radioLogPrint,repeatRows=1,colWidths=colWidths)
t.setStyle(TableStyle([('FONT',(0,0),(-1,-1),'Helvetica'),
('FONT',(0,0),(-1,1),'Helvetica-Bold'),
('INNERGRID', (0,0), (-1,-1), 0.25, colors.black),
('BOX', (0,0), (-1,-1), 2, colors.black),
('BOX', (0,0), (-1,0), 2, colors.black)]))
elements.append(t)
if teams and team!=teamFilterList[-1]: # don't add a spacer after the last team - it could cause another page!
elements.append(Spacer(0,0.25*inch))
doc.build(elements,onFirstPage=functools.partial(self.printLogHeaderFooter,opPeriod=opPeriod,teams=teams),onLaterPages=functools.partial(self.printLogHeaderFooter,opPeriod=opPeriod,teams=teams))
# self.logMsgBox.setInformativeText("Finalizing and Printing...")
self.printPDF(pdfName)
</code></pre>
<p>...
...</p>
<pre><code>def printPDF(self,pdfName):
try:
win32api.ShellExecute(0,"print",pdfName,'/d:"%s"' % win32print.GetDefaultPrinter(),".",0)
except Exception as e:
estr=str(e)
</code></pre>
<p>...
...</p>
| <python><pdf><reportlab><searchable> | 2023-08-05 19:29:03 | 1 | 904 | Tom Grundy |
76,843,179 | 1,187,621 | GEKKO Python + IPOPT minimize delta-V | <p>I have the following as my GEKKO model in Python:</p>
<pre><code># Initialize model
m = GEKKO()
# Manipulating variables and initial guesses
launch = m.MV(value = np.array([2460310.5, 0, 0]), lb = np.array([2460310.5, 0, 0]), ub = np.array([2460340.5, 0, 0]))
launch.STATUS = 1
flyby = m.MV(value = np.array([2460575.5, 0, 0]), lb = np.array([2460493.5, 0, 0]), ub = np.array([2460340.5, 0, 0])) # Venus/Mars
# flyby = m.MV(value = 2460997.5, lb = 2460887.5, ub = 2460908.5) # Jupiter
flyby.STATUS = 1
arrive = m.MV(value = np.array([2460845.5, 0, 0]), lb = np.array([2460631.5, 0, 0]), ub = np.array([2460660.5])) # Venus/Mars
# arrive = m.MV(value = 2461534.5, lb = 2461250.5, ub = 2461658.5) # Jupiter
arrive.STATUS = 1
# Variables
r1 = m.Var(value = np.array([0, 0, 0]), lb = np.array([-1e10, -1e10, -1e10]), ub = np.array([1e10, 1e10, 1e10]), name = "r1")
v1 = m.Var(value = np.array([0, 0, 0]), lb = np.array([-1e5, -1e5, -1e5]), ub = np.array([1e5, 1e5, 1e5]), name = "v1")
r2 = m.Var(value = np.array([0, 0, 0]), lb = np.array([-1e10, -1e10, -1e10]), ub = np.array([1e10, 1e10, 1e10]), name = "r2")
v2 = m.Var(value = np.array([0, 0, 0]), lb = np.array([-1e5, -1e5, -1e5]), ub = np.array([1e5, 1e5, 1e5]), name = "v2")
r3 = m.Var(value = np.array([0, 0, 0]), lb = np.array([-1e10, -1e10, -1e10]), ub = np.array([1e10, 1e10, 1e10]), name = "r3")
v3 = m.Var(value = np.array([0, 0, 0]), lb = np.array([-1e5, -1e5, -1e5]), ub = np.array([1e5, 1e5, 1e5]), name = "v3")
l = m.Var(value = np.array([0, 0, 0]), lb = np.array([-1e5, -1e5, -1e5]), ub = np.array([1e5, 1e5, 1e5]), name = "launch")
imp = m.Var(value = np.array([0, 0, 0]), lb = np.array([-1e5, -1e5, -1e5]), ub = np.array([1e5, 1e5, 1e5]), name = "impulse")
# Objective function
dV = m.FV(value = m.sqrt(imp[0]**2 + imp[1]**2 + imp[2]**2), lb = 0, ub = 10000)
dV.STATUS = 1
# Slingshot maneuver
r1, v1, r2, v2, r3, v3, l_mag, imp_mag, v_final = slingshot()
m.Obj(dV) #minimize delta-V
m.options.IMODE = 6 # non-linear model
m.options.SOLVER = 3 # solver (IPOPT)
m.options.MAX_ITER = 15000
m.options.RTOL = 1e-7
m.options.OTOL = 1e-7
m.solve(disp=False) # Solve
</code></pre>
<p>When I run it, I get the following error message:</p>
<blockquote>
<p>Exception: Data arrays must have the same length, and match time discretization in dynamic problems</p>
</blockquote>
<p>I have tried the following to no avail:</p>
<ul>
<li>Modifying the data types of my m.Var variables</li>
<li>Modifying my m.MV variables to be arrays (I really just need those first values)</li>
<li>I previously had an m.time, but realized I didn't need it and took it out (same error with vs without)</li>
</ul>
<p>The 'r' values are radii and the 'v' values are velocities; 'l' and 'imp' are velocity changes.</p>
| <python><optimization><gekko> | 2023-08-05 19:16:19 | 1 | 437 | pbhuter |
76,842,987 | 3,899,975 | Customize the spacing between seaborn grouped box plots | <p>I have 3 main groups (A, B, C), and each group is subdivided in two (1 and 2). I want to plot box plots where the spacing between the main groups is larger than the spacing between the sub-groups. I have the following code, but while I can adjust the x tick labels, I cannot adjust the spacing between the box plots to line up:</p>
<pre><code>import pandas as pd
import numpy as np
import seaborn as sns
import matplotlib.pyplot as plt
# Create the NumPy arrays
FA_1 = np.random.randint(200, 230, 20)
FA_2 = np.random.randint(180, 210, 20)
FB_1 = np.random.randint(130, 160, 20)
FB_2 = np.random.randint(140, 170, 20)
FC_1 = np.random.randint(80, 110, 20)
FC_2 = np.random.randint(60, 90, 20)
# Create a list of tuples for the DataFrame
data = [
(FA_1, 'A_1'), (FA_2, 'A_2'),
(FB_1, 'B_1'), (FB_2, 'B_2'),
(FC_1, 'C_1'), (FC_2, 'C_2')
]
# Create the DataFrame
df = pd.DataFrame(data, columns=['Force', 'Config'])
# Explode the 'Force' column to stack the NumPy arrays
df = df.explode('Force', ignore_index=True)
# Create a box plot using Seaborn with custom spacing
plt.figure(figsize=(10, 6))
# Manually adjust the positions for each configuration
positions = {'A_1': 0, 'A_2': 1, 'B_1': 2, 'B_2': 3, 'C_1': 4, 'C_2': 5}
sns.boxplot(x='Config', y='Force', data=df, order=['A_1', 'A_2', 'B_1', 'B_2', 'C_1', 'C_2'], width=0.5)
plt.title('Box Plot of Force by Configuration')
plt.xlabel('Configuration')
plt.ylabel('Force')
plt.xticks(ticks=[positions[config] for config in df['Config'].unique()], labels=df['Config'].unique())
plt.tight_layout()
plt.show()
</code></pre>
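<p>Seaborn's <code>boxplot</code> places categories at fixed integer positions, so extra spacing between the main groups has to come from explicit positions. A sketch of the idea using Matplotlib's own <code>boxplot</code> (which accepts a <code>positions</code> argument) — the gap sizes here are arbitrary choices, not anything prescribed by seaborn:</p>

```python
# Sketch: compute x-positions with a wider gap between main groups,
# then hand them to matplotlib's boxplot via `positions=`.
import numpy as np
import matplotlib
matplotlib.use("Agg")  # headless backend so the sketch runs anywhere
import matplotlib.pyplot as plt

def grouped_positions(n_groups, n_sub, sub_gap=0.6, group_gap=1.5):
    """x-position for each box: subgroups sit sub_gap apart, and each new
    main group starts group_gap after the previous group's last box."""
    pos = []
    x = 0.0
    for _ in range(n_groups):
        for j in range(n_sub):
            pos.append(x + j * sub_gap)
        x = pos[-1] + group_gap
    return pos

rng = np.random.default_rng(0)
labels = ['A_1', 'A_2', 'B_1', 'B_2', 'C_1', 'C_2']
samples = [rng.integers(60, 230, 20) for _ in labels]
positions = grouped_positions(n_groups=3, n_sub=2)

fig, ax = plt.subplots(figsize=(10, 6))
ax.boxplot(samples, positions=positions, widths=0.5)
ax.set_xticks(positions)
ax.set_xticklabels(labels)
ax.set_xlabel('Configuration')
ax.set_ylabel('Force')
fig.savefig("grouped_boxes.png")
```

<p>The within-group gap stays at 0.6 while adjacent main groups sit 1.5 apart, so the pairs visually cluster.</p>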
| <python><pandas><seaborn> | 2023-08-05 18:23:26 | 1 | 1,021 | A.E |
76,842,847 | 7,169,895 | How do I get pandas DataFrame from QThreadpool Worker? | <p>I am trying to have QThreadPool gather a lot of data all at once so I can speed up my program's 'boot time.' Right now the program gets a bunch of DataFrames one after the other and then finally runs. The process of getting these DataFrames is usually a <code>requests.get</code> or some wrapper around it, so QThreadPool is a good candidate to speed things up by getting a bunch of requests' DataFrames and then printing them. However, I cannot seem to get the DataFrame after the threadpool runs the thread. What am I doing wrong? To keep things simple, I just want to print the DataFrame.</p>
<p>My minimal reproducible example code:</p>
<pre><code>
def get_data_1():
# performs a get. I changed to just return a pre-made DataFrame
return pd.DataFrame({'value': [5,10,25,53], 'price': [2.24, 2.34, 5.22, 8.66]})
def get_data_2():
# performs a get. I changed to just return a pre-made DataFrame
return pd.DataFrame({'item_id': [10,20,75,103], 'price': [10.00, 5.29, 2.33, 8.77]})
class WorkerSignals(QObject):
finished = Signal()
error = Signal(tuple)
result = Signal(object) # Might need to be Signal(DataFrame)
class Worker(QRunnable):
def __init__(self, fn, *args, **kwargs):
super(Worker, self).__init__()
# Store constructor arguments (re-used for processing)
self.fn = fn
self.args = args
self.kwargs = kwargs
self.signals = WorkerSignals()
@Slot()
def run(self):
# Retrieve args/kwargs here; and fire processing using them
try:
result = self.fn(*self.args, **self.kwargs)
except:
traceback.print_exc()
exctype, value = sys.exc_info()[:2]
self.signals.error.emit((exctype, value, traceback.format_exc()))
else:
self.signals.result.emit(result) # Return the result of the processing
finally:
self.signals.finished.emit() # Done
class MainWindow(QtWidgets.QMainWindow):
def __init__(self):
super().__init__()
self.data1 = None
self.data2 = None
self.threadpool = QThreadPool()
print("Multithreading with maximum %d threads" % self.threadpool.maxThreadCount())
self.get_data() # should set data1 and data2. Do I put a wait / finish somewhere.
def get_data(self):
        worker1 = Worker(get_data_1)
worker1.signals.result.connect(self.return_data1)
self.threadpool.start(worker1)
        worker2 = Worker(get_data_2)
worker2.signals.result.connect(self.return_data2)
self.threadpool.start(worker2)
# does something go here
print(self.data1)
print(self.data2)
def return_data1(self, df):
self.data1 = df
def return_data2(self, df):
self.data2 = df
</code></pre>
<p>Is printing</p>
<pre><code>
<PySide6.QtCore.QMetaObject.Connection object at 0x000002CD3631C500>
<PySide6.QtCore.QMetaObject.Connection object at 0x000002CD3632A940>
</code></pre>
<p>when it should be giving me a DataFrame. However, printing df in <code>return_data</code> does give me the DataFrame. Note, the DataFrames are unrelated, so I do not need to do anything complex in processing them; I am just trying to get them concurrently.</p>
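<p>Outside of Qt, the same "fire both fetches, use the results only once both are done" shape can be sketched with the standard library's <code>concurrent.futures</code> — stand-in fetch functions rather than the PySide signal machinery, but it illustrates why the prints must happen after completion instead of right after the workers start:</p>

```python
# Sketch: fetch two DataFrames concurrently and only use them after both
# futures have completed (the analogue of printing in a finished-signal slot).
from concurrent.futures import ThreadPoolExecutor
import pandas as pd

def get_data_1():
    # stands in for a requests.get(...) call
    return pd.DataFrame({'value': [5, 10, 25, 53], 'price': [2.24, 2.34, 5.22, 8.66]})

def get_data_2():
    return pd.DataFrame({'item_id': [10, 20, 75, 103], 'price': [10.00, 5.29, 2.33, 8.77]})

with ThreadPoolExecutor() as pool:
    fut1 = pool.submit(get_data_1)
    fut2 = pool.submit(get_data_2)
    data1 = fut1.result()  # blocks until the worker has actually finished
    data2 = fut2.result()

print(data1)
print(data2)
```

<p>In the Qt version the equivalent fix is to do the printing in a slot connected to the workers' <code>result</code>/<code>finished</code> signals (or after counting that both have reported in), rather than immediately after <code>threadpool.start(...)</code> — at that point the signals have not fired yet and <code>self.data1</code>/<code>self.data2</code> are still <code>None</code>.</p>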
| <python><pyside6> | 2023-08-05 17:46:12 | 0 | 786 | David Frick |
76,842,799 | 1,185,242 | Can you vectorize a lookup loop in numpy? | <p>I'm looking to set every element of an array based on looking up the value in a second array. Currently doing it this way, which works but is slow:</p>
<pre><code>import numpy as np
import cv2
colors = [
[0, 255, 0], # GREEN
[255, 0, 0], # BLUE
[0, 0, 255], # RED
]
WIDTH = 400
HEIGHT = 400
target = np.zeros((HEIGHT, WIDTH, 3), dtype=np.uint8)
color_lookup = np.random.randint(0, len(colors), (HEIGHT, WIDTH))
for i in range(target.shape[0]):
for j in range(target.shape[1]):
target[i, j] = colors[color_lookup[i, j]]
cv2.imshow("target", target)
cv2.waitKey(0)
</code></pre>
<p><a href="https://i.sstatic.net/epqQT.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/epqQT.png" alt="enter image description here" /></a></p>
<p>What is the correct way to vectorize that loop instead?</p>
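<p>For what it's worth, since <code>color_lookup</code> holds integer indices, one standard vectorization is to make <code>colors</code> a NumPy array and index it with the lookup array directly ("fancy indexing") — a sketch on a small array, with the OpenCV display part left out:</p>

```python
# Sketch: replace the per-pixel loop with NumPy fancy indexing.
import numpy as np

colors = np.array([
    [0, 255, 0],   # GREEN
    [255, 0, 0],   # BLUE
    [0, 0, 255],   # RED
], dtype=np.uint8)

HEIGHT, WIDTH = 4, 5   # small so the result is easy to inspect
rng = np.random.default_rng(0)
color_lookup = rng.integers(0, len(colors), (HEIGHT, WIDTH))

# Indexing a (3, 3) palette with an (H, W) integer array yields (H, W, 3):
target = colors[color_lookup]
print(target.shape)  # (4, 5, 3)
```

<p>This produces exactly what the double loop builds, in a single vectorized operation.</p>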
| <python><numpy><numpy-ndarray> | 2023-08-05 17:33:08 | 1 | 26,004 | nickponline |
76,842,786 | 8,967,303 | How to pass an API key to a Google client in Python | <p>How do I pass an API key to the TextToSpeechClient? I can't see in the docs where to pass an API key. I can only see the option to pass the JSON credentials file path. Can I pass an API key instead?</p>
<pre><code> def list_voices():
from google.cloud import texttospeech
client = texttospeech.TextToSpeechClient()
voices = client.list_voices()
for voice in voices.voices:
print(f"Name: {voice.name}")
for language_code in voice.language_codes:
print(f"Supported language: {language_code}")
ssml_gender = texttospeech.SsmlVoiceGender(voice.ssml_gender)
print(f"SSML Voice Gender: {ssml_gender.name}")
print(f"Natural Sample Rate Hertz: {voice.natural_sample_rate_hertz}\n")
</code></pre>
| <python><google-cloud-platform> | 2023-08-05 17:28:42 | 1 | 878 | Kiran S youtube channel |
76,842,644 | 11,794,603 | Install dependencies in pipx virtual environment | <p>So I wanted to install ipython with pipx such that I could have an ipython environment that is used anywhere I launch ipython from.</p>
<p>For example, let's say I don't want to install any global pip packages. I tried <code>pipx install ipython</code>, went into the venv pipx created for ipython, and activated it. I hoped that I could then run <code>pip install numpy</code> and that it would install numpy only in that venv (so numpy would be available in ipython, but not in python).</p>
<p>However, it seems to still be installed globally (despite the venv being activated), and is available to python and not ipython (the exact opposite of what I was going for).</p>
<p>How can I achieve the isolation that I am looking for? I thought that was the whole point of pipx.</p>
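<p>For reference, pipx's documented mechanism for adding extra packages into an app's own venv is <code>pipx inject</code> — a command sketch of that workflow (assuming pipx itself is already on PATH):</p>

```shell
# Install ipython into its own pipx-managed venv...
pipx install ipython
# ...then add numpy to that venv only, not to the global site-packages.
pipx inject ipython numpy
```

<p>After this, <code>import numpy</code> works inside that ipython but not in the global <code>python</code>, which matches the isolation described above.</p>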
| <python><pip><python-venv><pipx> | 2023-08-05 16:48:20 | 2 | 497 | HashBr0wn |