QuestionId          int64          74.8M                79.8M
UserId              int64          56                   29.4M
QuestionTitle       stringlengths  15                   150
QuestionBody        stringlengths  40                   40.3k
Tags                stringlengths  8                    101
CreationDate        stringdate     2022-12-10 09:42:47  2025-11-01 19:08:18
AnswerCount         int64          0                    44
UserExpertiseLevel  int64          301                  888k
UserDisplayName     stringlengths  3                    30
77,309,471
3,433,875
Add a new row to an existing dataframe pandas error
<p>I need to append a new row that I specify to an existing dataframe:</p> <pre><code>import pandas as pd
import random

random.seed(42)
data = {'group': ['a', 'a', 'a', 'b', 'b', 'b', 'b', 'c', 'c', 'c'],
        'value': [1, 2, 5, 3, 4, 5, 7, 4, 7, 9],
        'year': ['2010', '2010', '2010', '2011', '2011', '2011', '2011',
                 '2012', '2012', '2012']}
df = pd.DataFrame(data)
</code></pre> <p>The new row is:</p> <pre><code>new_row = {'group': 'the current group', 'year': 'current year', 'value': ''}
</code></pre> <p>so the data looks like this:</p> <pre><code>  group value  year
0     a     1  2010
1     a     2  2010
2     a     5  2010
3     a        2010
0     b     3  2011
1     b     4  2011
2     b     5  2011
3     b     7  2011
4     b        2011
0     c     4  2012
1     c     7  2012
2     c     9  2012
3     c        2012
</code></pre> <p>I have tried:</p> <pre><code>new_row = {'value': ''}
df2 = df.groupby(df['group'])
df = pd.concat([i.append(new_row, ignore_index=True) for _, i in df2])
df
</code></pre> <p>but I get the following deprecation warning:</p> <pre><code>FutureWarning: The frame.append method is deprecated and will be removed from pandas in a future version. Use pandas.concat instead.
  df = pd.concat([i.append(new_row, ignore_index=True) for _, i in df2])
</code></pre> <p>I haven't figured out how to do it without append. Any ideas?</p>
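One deprecation-safe rewrite of the question above (a sketch, not the only way): build the extra row as a one-row DataFrame and `pd.concat` it after each group, which is exactly what `DataFrame.append` used to do internally.

```python
import pandas as pd

data = {'group': ['a', 'a', 'a', 'b', 'b', 'b', 'b', 'c', 'c', 'c'],
        'value': [1, 2, 5, 3, 4, 5, 7, 4, 7, 9],
        'year': ['2010', '2010', '2010', '2011', '2011', '2011', '2011',
                 '2012', '2012', '2012']}
df = pd.DataFrame(data)

def with_blank_row(g):
    # One-row frame carrying the group's own 'group'/'year' and an empty value.
    extra = pd.DataFrame({'group': [g['group'].iat[0]],
                          'value': [''],
                          'year': [g['year'].iat[0]]})
    return pd.concat([g.reset_index(drop=True), extra], ignore_index=True)

out = pd.concat([with_blank_row(g) for _, g in df.groupby('group')])
print(out)
```

The per-group `ignore_index=True` restarts the index at 0 within each group, matching the output shown in the question.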
<python><pandas>
2023-10-17 13:35:06
4
363
ruthpozuelo
77,309,464
14,468,588
DCPError: Problem does not follow DCP rules - TV minimization using CVXPY library
<p>I want to solve a compressed sensing problem in which I wish to minimize the total variation (TV) of my object vector. Here is a representation of my code:</p> <pre><code>import numpy as np
import matplotlib.pyplot as plt
import os
import numpy as np
from scipy.optimize import minimize
import scipy.io
import pandas as pd
import math

# A and Y are (2323,6561) and (2323,1) complex array respectively.
A = scipy.io.loadmat(r'C:\Users\mohammad\Desktop\feko1\feko\A.mat')
Y = scipy.io.loadmat(r'C:\Users\mohammad\Desktop\feko1\feko\Y.mat')
A = np.array(A[&quot;A&quot;])
Y = np.array(Y[&quot;Y&quot;])

import cvxpy as cp

delta = 10

# Define the variables
s_L1 = cp.Variable(A.shape[1])

# Define the objective function
xydim = int(math.sqrt(A.shape[1]))
print(xydim)
s_L1_2d = s_L1.reshape((xydim,xydim))
dim1diff = s_L1_2d[1:,:] - s_L1_2d[:-1,:]
dim2diff = s_L1_2d[:,1:] - s_L1_2d[:,:-1]
dim1diff = cp.square(dim1diff)
dim2diff = cp.square(dim2diff)
dim2diff2 = dim2diff.T
tot_var = cp.sum(cp.sqrt(dim1diff + dim2diff2))
s_L1 = s_L1.reshape((A.shape[1],1))
objective = tot_var
constraints = [cp.norm(A@s_L1 - Y,2) &lt;= delta]

# Define the optimization problem
problem = cp.Problem(cp.Minimize(objective), constraints)

# Solve the optimization problem
problem.solve()
</code></pre> <p>but it gives the following error in jupyter:</p> <pre><code>---------------------------------------------------------------------------
DCPError                                  Traceback (most recent call last)
~\AppData\Local\Temp\ipykernel_15028\3075150186.py in &lt;module&gt;
      5
      6 # Solve the optimization problem
----&gt; 7 problem.solve()

~\AppData\Roaming\Python\Python39\site-packages\cvxpy\problems\problem.py in solve(self, *args, **kwargs)
    501         else:
    502             solve_func = Problem._solve
--&gt; 503         return solve_func(self, *args, **kwargs)
    504
    505     @classmethod

~\AppData\Roaming\Python\Python39\site-packages\cvxpy\problems\problem.py in _solve(self, solver, warm_start, verbose, gp, qcp, requires_grad, enforce_dpp, ignore_dpp, canon_backend, **kwargs)
   1070             return self.value
   1071
--&gt; 1072         data, solving_chain, inverse_data = self.get_problem_data(
   1073             solver, gp, enforce_dpp, ignore_dpp, verbose, canon_backend, kwargs
   1074         )

~\AppData\Roaming\Python\Python39\site-packages\cvxpy\problems\problem.py in get_problem_data(self, solver, gp, enforce_dpp, ignore_dpp, verbose, canon_backend, solver_opts)
    644         if key != self._cache.key:
    645             self._cache.invalidate()
--&gt; 646         solving_chain = self._construct_chain(
    647             solver=solver, gp=gp,
    648             enforce_dpp=enforce_dpp,

~\AppData\Roaming\Python\Python39\site-packages\cvxpy\problems\problem.py in _construct_chain(self, solver, gp, enforce_dpp, ignore_dpp, canon_backend, solver_opts)
    896         candidate_solvers = self._find_candidate_solvers(solver=solver, gp=gp)
    897         self._sort_candidate_solvers(candidate_solvers)
--&gt; 898         return construct_solving_chain(self, candidate_solvers, gp=gp,
    899                                        enforce_dpp=enforce_dpp,
    900                                        ignore_dpp=ignore_dpp,

~\AppData\Roaming\Python\Python39\site-packages\cvxpy\reductions\solvers\solving_chain.py in construct_solving_chain(problem, candidates, gp, enforce_dpp, ignore_dpp, canon_backend, solver_opts, specified_solver)
    215     if len(problem.variables()) == 0:
    216         return SolvingChain(reductions=[ConstantSolver()])
--&gt; 217     reductions = _reductions_for_problem_class(problem, candidates, gp, solver_opts)
    218
    219     # Process DPP status of the problem.

~\AppData\Roaming\Python\Python39\site-packages\cvxpy\reductions\solvers\solving_chain.py in _reductions_for_problem_class(problem, candidates, gp, solver_opts)
    130             append += (&quot;\nHowever, the problem does follow DQCP rules. &quot;
    131                        &quot;Consider calling solve() with `qcp=True`.&quot;)
--&gt; 132         raise DCPError(
    133             &quot;Problem does not follow DCP rules. Specifically:\n&quot; + append)
    134     elif gp and not problem.is_dgp():

DCPError: Problem does not follow DCP rules. Specifically:
The objective is not DCP.
Its following subexpressions are not:
power(power(reshape(var117, (81, 81), F)[1:81, 0:81] + -reshape(var117, (81, 81), F)[0:80, 0:81], 2.0) + power(reshape(var117, (81, 81), F)[0:81, 1:81] + -reshape(var117, (81, 81), F)[0:81, 0:80], 2.0).T, 0.5)
</code></pre> <p>You may ask about the way that I have obtained the TV. My variable vector in the convex problem is the reflectivity of an image whose rows of pixels are stacked together to create a vector.</p> <p>How can I solve this problem?</p> <p>Any help would be appreciated.</p>
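For context on the error above (hedged, since the original `.mat` data isn't available): `cp.sqrt(cp.square(x) + cp.square(y))` is rejected because DCP composition rules cannot combine a concave outer function (sqrt) with convex inner ones (square), even though the composite is convex. CVXPY provides a built-in `tv` atom for exactly this objective, and the same quantity can also be written as `cp.sum(cp.norm(..., 2, axis=...))` over stacked differences. The quantity itself, computed on a concrete NumPy image for reference:

```python
import numpy as np

def total_variation(img):
    """Isotropic TV of a 2-D array: the sum over pixels of the length of the
    discrete gradient (forward differences, cropped to a common shape)."""
    dy = img[1:, :-1] - img[:-1, :-1]   # vertical differences
    dx = img[:-1, 1:] - img[:-1, :-1]   # horizontal differences
    return np.sum(np.sqrt(dx**2 + dy**2))

img = np.array([[0., 0., 0.],
                [0., 1., 0.],
                [0., 0., 0.]])
print(total_variation(img))  # 2 + sqrt(2)
```

In CVXPY the DCP-compliant form of the objective would then be (an assumption about the intended model, not a verified fix for the poster's data): `objective = cp.tv(s_L1.reshape((xydim, xydim)))`.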
<python><cvxpy><convex-optimization>
2023-10-17 13:33:10
1
352
mohammad rezza
77,309,413
21,295,456
How to make pynput.keyboard.listener thread keep running in the background even when main app is terminated
<p>I have a main GUI app made out of <em>Tkinter</em>, which includes a button called <em>&quot;start&quot;</em>, which when pressed initiates the <code>pynput.keyboard.listener</code> thread.</p> <p>It listens to the keystrokes all right.</p> <p>But when I terminate the app (by clicking <em>'x'</em>), the keystroke listener thread also terminates.</p> <p>I want the thread to be persistent and keep listening, even when the GUI is closed.</p> <p>Here is the code.</p> <p>counter.py</p> <pre><code>import pynput.keyboard
import threading
from threading import Event
import ctypes

class KeypressCounter(threading.Thread):
    def __init__(self):
        # import pynput.keyboard
        # import threading
        # from threading import Event
        # import ctypes
        super().__init__()
        self.key_count = 0
        self.stop_event = Event()

    def run(self):
        listener = pynput.keyboard.Listener(on_press=self.on_press, daemon=False)
        listener.start()

    def on_press(self, key):
        self.key_count += 1
        if key == pynput.keyboard.Key.esc:
            print(&quot;esc pressed&quot;)
            self.on_endsession()
        print(&quot;counter:&quot;, self.key_count)

    def on_endsession(self):
        with open(&quot;key_count.txt&quot;, &quot;w&quot;) as f:
            print(&quot;writing to file&quot;)
            f.write(str(self.key_count))
        self.stop_event.set()

def main():
    keypress_counter = KeypressCounter()
    keypress_counter.start()
    while True:
        if keypress_counter.stop_event.is_set():
            break

if __name__ == &quot;__main__&quot;:
    main()
</code></pre> <p>The tkinter frame, app.py</p> <pre><code>from tkinter import *
from counter import KeypressCounter

def start():
    KeyCounter = KeypressCounter()
    KeyCounter.start()

window_size = '1200x600'

# Create the root window
window = Tk()
window.geometry(window_size)

keyBoardFrame = Frame(window, width=100, height=100, bg='green')
button = Button(keyBoardFrame, text='start', bg='red', fg='white', command=start)
button.grid(row=0, column=0, sticky='nsew')

window.mainloop()
</code></pre> <p>The counter is written to <em>key_count.txt</em> before it terminates (don't know how).</p> <p>And after the window is closed, nothing gets written to <em>key_count.txt</em>.</p> <p>But I want the thread to keep listening to all the keystrokes in the background.</p>
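A note on the question above: a thread can never outlive its process, so no listener flag can keep it running after the Tkinter process exits; the usual workaround is to run the listener in a separate, detached *process*. A minimal sketch of just the detachment mechanism (a trivial child script stands in for the pynput listener, since pynput needs a real keyboard/display):

```python
import os
import subprocess
import sys
import tempfile

# Child code stands in for the keylogger process; here it just writes a file.
out_path = os.path.join(tempfile.mkdtemp(), "child_output.txt")
child_code = f"open({out_path!r}, 'w').write('still alive')"

# start_new_session=True (POSIX) detaches the child from the parent's session,
# so closing the GUI process would not take the child down with it.
# On Windows, creationflags=subprocess.DETACHED_PROCESS plays the same role.
child = subprocess.Popen([sys.executable, "-c", child_code],
                         start_new_session=True)
child.wait()  # only for this demo; the real GUI would NOT wait
print(open(out_path).read())
```

In the app, `start()` would `Popen` a standalone `counter.py` this way instead of starting a thread, and the GUI could exit freely.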
<python><multithreading><tkinter><pynput>
2023-10-17 13:27:45
1
339
akashKP
77,309,270
7,667,649
Python numpy: don't understand the result of subtracting np.datetime64 dates
<p>If I do the following operation:</p> <pre><code>import numpy as np
np.datetime64('20230901') - np.datetime64('20230831')
</code></pre> <p>I get the result</p> <pre><code>numpy.timedelta64(70,'Y')
</code></pre> <p>I get that the letter 'Y' in this case means Year, but what does the 70 mean?</p> <p>Furthermore, if I do the same operation specifying a time:</p> <pre><code>import numpy as np
np.datetime64('20230901-01') - np.datetime64('19700101-01')
</code></pre> <p>I get the same number but in months:</p> <pre><code>numpy.timedelta64(840,'M')
</code></pre> <p>What is the meaning of that amount of years/months?</p>
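A hedged explanation of the behavior above: `'20230901'` contains no separators, so NumPy parses the whole string as a *year* number (year 20,230,901), not as `YYYYMMDD`; the subtraction 20230901 − 20230831 is therefore 70 years. The second example similarly treats the token before the hyphen as a year. ISO 8601 strings with hyphens give the intended day arithmetic:

```python
import numpy as np

# No separators: the whole string is read as a (very large) year number,
# so the result is a difference of years.
a = np.datetime64('20230901') - np.datetime64('20230831')
print(a)  # 70 years

# ISO 8601 with hyphens: read as year-month-day, as intended.
b = np.datetime64('2023-09-01') - np.datetime64('2023-08-31')
print(b)  # 1 days
```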
<python><numpy><timedelta><datetime64>
2023-10-17 13:08:05
1
479
Mauro
77,309,155
14,631,246
Python `requests.Session()`: response headers not showing `Set-Cookie` headers
<p>I was following the tutorial over at <a href="https://requests.readthedocs.io/en/latest/user/advanced/" rel="nofollow noreferrer">https://requests.readthedocs.io/en/latest/user/advanced/</a> regarding <code>requests.Session()</code>. I tried to access the response headers for an HTTP GET request that is expected to return a <code>Set-Cookie</code> header. However, it doesn't show. I am running Python 3.12 and requests@2.31.0 in case that might be the issue.</p> <pre class="lang-py prettyprint-override"><code>s = requests.Session()
r = s.get('https://httpbin.org/cookies/set/sessioncookie/123456789')
print(r.headers)  # does not display set-cookie
</code></pre> <p>I was expecting the <code>Set-Cookie</code> header to display. I have also tried the opposite, where <code>r.headers</code> displays the <code>Set-Cookie</code> headers correctly, but the <code>r.request.headers</code> for another request right after does not display the <code>Cookie</code> header.</p> <pre class="lang-py prettyprint-override"><code>r1 = s.get(&quot;https://google.com&quot;)
print(r1.headers)  # displays set-cookie

r2 = s.get(&quot;https://httpbin.org/cookies&quot;)
print(r2.request.headers)  # does not display cookie
</code></pre>
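A likely explanation for the question above (hedged): httpbin's `/cookies/set/...` endpoint responds with a 302 redirect that carries the `Set-Cookie` header, and requests follows redirects by default, so `r.headers` shows only the *final* response; the intermediate one lives in `r.history`. A self-contained demonstration with a throwaway local server instead of httpbin (server details here are illustrative):

```python
import http.server
import threading
import requests

class Handler(http.server.BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == '/set':
            # Redirect that carries the Set-Cookie header, like httpbin's endpoint.
            self.send_response(302)
            self.send_header('Set-Cookie', 'sessioncookie=123456789')
            self.send_header('Location', '/final')
            self.end_headers()
        else:
            self.send_response(200)
            self.end_headers()
            self.wfile.write(b'ok')
    def log_message(self, *args):  # silence request logging
        pass

srv = http.server.HTTPServer(('127.0.0.1', 0), Handler)
threading.Thread(target=srv.serve_forever, daemon=True).start()
base = f'http://127.0.0.1:{srv.server_address[1]}'

s = requests.Session()
r = s.get(f'{base}/set')                   # redirect is followed automatically
print('Set-Cookie' in r.headers)           # final response: no cookie header
print(r.history[0].headers['Set-Cookie'])  # the intermediate 302 has it
print(s.cookies.get('sessioncookie'))      # the Session captured it anyway
```

The second symptom in the question is consistent with cookie domain scoping: cookies set by `google.com` are not sent to `httpbin.org`, so `r2.request.headers` has no `Cookie` header.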
<python><python-requests>
2023-10-17 12:50:53
2
618
Jarrett GXZ
77,309,123
19,130,803
extend list for matching dictionary keys
<p>I am trying to combine multiple sub-<code>dictionaries</code> into a single <code>dictionary</code> based on <code>keys</code>. The dictionary <code>keys</code> are fixed (the same) and the <code>items</code> are <code>lists</code> that I am trying to combine.</p> <pre><code>fruits = ['apple', 'banana', 'cherry']
cars = ['Ford', 'BMW', 'Volvo']

d = {}
d1 = {'a': fruits, 'b': cars}
d2 = {'a': cars, 'b': fruits}

l = [d1, d2]
for i in l:
    if not d:
        d.update(i)
    else:
        for k in d.keys():
            d[k].extend(i[k])

print(f&quot;{d=}&quot;)
</code></pre> <p>Current output:</p> <pre><code>d={'a': ['apple', 'banana', 'cherry', 'Ford', 'BMW', 'Volvo'], 'b': ['Ford', 'BMW', 'Volvo', 'apple', 'banana', 'cherry', 'Ford', 'BMW', 'Volvo']}
</code></pre> <p>Expected output:</p> <pre><code>d={'a': ['apple', 'banana', 'cherry', 'Ford', 'BMW', 'Volvo'], 'b': ['Ford', 'BMW', 'Volvo', 'apple', 'banana', 'cherry']}
</code></pre> <p>The problem: I am getting extra items for <code>key b</code>.</p> <p>What am I missing?</p>
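The extra items in the question above come from aliasing, not from `extend` itself: `d.update(d1)` makes `d['b']` the *same list object* as `cars`, and `cars` is also `d2['a']`, so extending `d['a']` with `d2['a']` mutates a list that is read again one iteration later. Building a fresh list per key avoids the sharing (a sketch):

```python
fruits = ['apple', 'banana', 'cherry']
cars = ['Ford', 'BMW', 'Volvo']

d1 = {'a': fruits, 'b': cars}
d2 = {'a': cars, 'b': fruits}

d = {}
for i in (d1, d2):
    for k, v in i.items():
        # setdefault creates a brand-new list per key, so the source
        # lists (fruits, cars) are never shared with d or mutated.
        d.setdefault(k, []).extend(v)

print(d)
```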
<python>
2023-10-17 12:46:14
6
962
winter
77,309,029
10,416,012
asyncio.run not working at python azure function app
<p>I have code that works locally perfectly:</p> <pre><code># main.py
from file import my_test

...

# test.py
async def test():
    &quot;&quot;&quot;some async function&quot;&quot;&quot;
    await asyncio.sleep(1)
    return &quot;test&quot;

import asyncio
my_test = asyncio.run(test())
</code></pre> <p>Notice that test is called at <strong>module level</strong>. But in an Azure function app this simple code doesn't work:</p> <blockquote> <p>raise RuntimeError('This event loop is already running')</p> </blockquote> <p>Which, as far as I know, means that the function app is already running its own event loop, so you can't run it twice.</p> <p>So I tried to develop a custom run function that captures the event loop and awaits the value if it exists, otherwise uses plain asyncio.run.</p> <p>But I can't figure out how to make it work properly, because it needs to work both locally and in Azure, and it can't be async and not async at the same time.</p> <pre><code>def my_run(coro: Coroutine[Any, Any, T], *args, **kwargs) -&gt; T:
    &quot;&quot;&quot;Azure already uses asyncio.run and it cannot be nested,
    so if run in Azure we take their event loop&quot;&quot;&quot;
    try:
        loop = asyncio.get_running_loop()
    except RuntimeError:
        return asyncio.run(coro, *args, **kwargs)
    else:
        return loop.run_until_complete(coro(*args, **kwargs))

my_test = my_run(test())  # far from working
</code></pre> <p>Any idea?</p> <p>Thanks in advance.</p>
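One pattern that handles both situations in the question above (a sketch, with caveats): `loop.run_until_complete` cannot be called on a loop that is already running, but the coroutine can be driven by a throwaway loop in a worker thread while the calling thread blocks. Note the hedge: blocking the host's loop at module import time defeats its concurrency, so restructuring to avoid module-level awaits is usually the better fix.

```python
import asyncio
import concurrent.futures

async def test():
    """Some async function (stands in for the real coroutine)."""
    await asyncio.sleep(0)
    return "test"

def run_sync(coro):
    """Run a coroutine to completion from sync code, whether or not an
    event loop is already running in this thread."""
    try:
        asyncio.get_running_loop()
    except RuntimeError:
        # No loop here (plain local run): asyncio.run is fine.
        return asyncio.run(coro)
    # A loop is already running (e.g. the Functions host): block this
    # thread while a fresh loop in a worker thread drives the coroutine.
    with concurrent.futures.ThreadPoolExecutor(max_workers=1) as pool:
        return pool.submit(asyncio.run, coro).result()

my_test = run_sync(test())
print(my_test)
```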
<python><azure><azure-functions><event-loop><aio>
2023-10-17 12:33:28
0
2,235
Ziur Olpa
77,308,976
14,498,998
Media files are not shown: open() "/home/shahriar/Amlak/real_estate/media/images/image1.jpg" failed (13: Permission denied) - Django Nginx
<p>I'm trying to see my media files in my website, but I can't. Static files are OK but Media files, despite all my efforts, are not visible.</p> <p>Here is the nginx error log:</p> <pre><code>2023/10/17 11:59:33 [error] 48068#48068: *7 open() &quot;/home/shahriar/Amlak/real_estate/media/images/ویلا-چمستان-1_uWh54BB.jpg&quot; failed (13: Permi&gt;
2023/10/17 11:59:33 [error] 48068#48068: *6 open() &quot;/home/shahriar/Amlak/real_estate/media/images/IMG_20230828_214539_i6yw2Zc.jpg&quot; failed (13:&gt;
2023/10/17 11:59:33 [error] 48068#48068: *5 open() &quot;/home/shahriar/Amlak/real_estate/media/images/image1.jpg&quot; failed (13: Permission denied), &gt;
2023/10/17 11:59:33 [error] 48068#48068: *4 open() &quot;/home/shahriar/Amlak/real_estate/media/images/ویلا-چمستان-1_0kmP7E3.jpg&quot; failed (13: Permi&gt;
2023/10/17 11:59:33 [error] 48068#48068: *3 open() &quot;/home/shahriar/Amlak/real_estate/media/images/455008604_479875.jpg&quot; failed (13: Permission&gt;
</code></pre> <p>My nginx configuration file for the website:</p> <pre><code>location /static {
    alias /var/www/real_estate/static/;
}

location /media {
    root /home/path/to/real_estate(projectname)/;
}
</code></pre> <p>settings.py:</p> <pre><code>STATIC_URL = 'static/'
STATIC_ROOT = '/var/www/real_estate/static/'
STATICFILES_DIRS = [BASE_DIR/'static/',]

# Default primary key field type
# https://docs.djangoproject.com/en/4.2/ref/settings/#default-auto-field
DEFAULT_AUTO_FIELD = 'django.db.models.BigAutoField'

# Media Settings
MEDIA_URL = 'media/'
MEDIA_ROOT = BASE_DIR / 'media/'
</code></pre> <p>I tried giving permission to my media folder like this:</p> <pre><code>sudo chown -R www-data:www-data /var/www/real_estate/static
sudo chown -R www-data:www-data /home/shahriar/Amlak/real_estate/media
</code></pre> <p>I tried restarting nginx.service and gunicorn.service.</p> <p>The path to media files is correct, but the website seems to be unable to get them.</p>
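A hedged note on the error above: `open() ... (13: Permission denied)` with correct file ownership usually means the nginx worker user cannot *traverse* a parent directory — every directory from `/` down to the file needs the execute (`x`) bit for that user, and home directories like `/home/shahriar` often lack it for "others". A throwaway demonstration of the traversal rule (temporary paths, not the poster's):

```shell
# Directory execute permission gates traversal: a file is unreachable if ANY
# parent directory lacks +x for the accessing user, even if the file itself
# is readable.
tmp=$(mktemp -d)
mkdir -p "$tmp/parent/child"
echo hello > "$tmp/parent/child/f.txt"
chmod 711 "$tmp/parent"     # o+x without o+r: others may traverse, not list
ls -ld "$tmp/parent"
cat "$tmp/parent/child/f.txt"
```

For the post itself, the hedged fix would be `sudo chmod o+x /home/shahriar /home/shahriar/Amlak /home/shahriar/Amlak/real_estate` (each path component), or moving `MEDIA_ROOT` under `/var/www` next to the working static files.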
<python><django><linux><ubuntu><nginx>
2023-10-17 12:25:55
1
313
Alin
77,308,801
13,935,315
Nested Cross Validation with Optuna
<p>I want to perform nested cross validation using Optuna. I am wondering what your take is on my solution. Would one implement it like this, or am I making some fundamental mistakes? In particular with the branching strategy and mixing <code>trial.suggest_categorical</code> and <code>OptunaSearchCV</code>.</p> <pre class="lang-py prettyprint-override"><code>from typing import Any

import pandas as pd
from sklearn.pipeline import Pipeline
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from optuna.integration import OptunaSearchCV
import optuna
from sklearn.datasets import load_iris
from sklearn.preprocessing import StandardScaler, MinMaxScaler
from optuna.distributions import CategoricalDistribution, FloatDistribution, IntDistribution

pipe = Pipeline(
    [
        (&quot;scaling&quot;, &quot;passthrough&quot;),
        (&quot;estimator&quot;, DecisionTreeClassifier()),
    ]
)

class OptunaNestedCV:
    def __init__(self, data: pd.DataFrame) -&gt; None:
        self.data = data
        self.x = self.data.data
        self.y = self.data.target

    def __call__(self, trial, *args: Any, **kwds: Any) -&gt; Any:
        classifier_name = trial.suggest_categorical(
            &quot;classifier&quot;, [&quot;SVC&quot;, &quot;RandomForest&quot;]
        )
        if classifier_name == &quot;SVC&quot;:
            param_distributions = {
                &quot;scaling&quot;: optuna.distributions.CategoricalDistribution(
                    [MinMaxScaler(), StandardScaler()]
                ),
                &quot;estimator__C&quot;: optuna.distributions.FloatDistribution(
                    1e-10, 1e10, log=True
                ),
                &quot;estimator__degree&quot;: optuna.distributions.IntDistribution(1, 5),
            }
            estimator = pipe.set_params(estimator=SVC(gamma=&quot;auto&quot;))
            classifier_obj = OptunaSearchCV(
                estimator=estimator,
                param_distributions=param_distributions,
                cv=2,
                n_jobs=-1,
            )
        else:
            estimator = pipe.set_params(estimator=RandomForestClassifier(n_estimators=10))
            param_distributions = {
                &quot;scaling&quot;: optuna.distributions.CategoricalDistribution(
                    [MinMaxScaler(), StandardScaler()]
                ),
                &quot;estimator__max_depth&quot;: optuna.distributions.IntDistribution(1, 5),
            }
            classifier_obj = OptunaSearchCV(
                estimator=estimator,
                param_distributions=param_distributions,
                cv=2,
                n_jobs=-1,
            )
        score = cross_val_score(classifier_obj, self.x, self.y, n_jobs=1, cv=3)
        accuracy = score.mean()
        return accuracy

class OptunaCV:
    def __init__(self, data: pd.DataFrame) -&gt; None:
        self.data = data
        self.x = self.data.data
        self.y = self.data.target

    def __call__(self, trial, *args: Any, **kwds: Any) -&gt; Any:
        classifier_name = trial.suggest_categorical(
            &quot;classifier&quot;, [&quot;SVC&quot;, &quot;RandomForest&quot;]
        )
        if classifier_name == &quot;SVC&quot;:
            param_distributions = {
                &quot;scaling&quot;: optuna.distributions.CategoricalDistribution(
                    [MinMaxScaler(), StandardScaler()]
                ),
                &quot;estimator__C&quot;: optuna.distributions.FloatDistribution(
                    1e-10, 1e10, log=True
                ),
                &quot;estimator__degree&quot;: optuna.distributions.IntDistribution(1, 5),
            }
            estimator = pipe.set_params(estimator=SVC(gamma=&quot;auto&quot;))
            classifier_obj = OptunaSearchCV(
                estimator=estimator,
                param_distributions=param_distributions,
                cv=2,
                n_jobs=-1,
            )
        else:
            estimator = pipe.set_params(estimator=RandomForestClassifier(n_estimators=10))
            param_distributions = {
                &quot;scaling&quot;: optuna.distributions.CategoricalDistribution(
                    [MinMaxScaler(), StandardScaler()]
                ),
                &quot;estimator__max_depth&quot;: optuna.distributions.IntDistribution(1, 5),
            }
            classifier_obj = OptunaSearchCV(
                estimator=estimator,
                param_distributions=param_distributions,
                cv=2,
                n_jobs=-1,
            )
        classifier_obj.fit(self.x, self.y)
        score = classifier_obj.best_score_
        trial.set_user_attr(&quot;model&quot;, classifier_obj.best_estimator_)
        return score

if __name__ == &quot;__main__&quot;:
    # load data
    iris = load_iris()

    # Get performance of model
    objective = OptunaNestedCV(iris)
    study = optuna.create_study(direction=&quot;maximize&quot;)
    study.optimize(objective, n_trials=3)
    print(study.best_value)

    # Get eventual model
    study2 = optuna.create_study(direction=&quot;maximize&quot;)
    study2.optimize(OptunaCV(iris), n_trials=3)
    print(study2.best_trial.user_attrs)
</code></pre>
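For reference, the overall shape in the question (an outer `cross_val_score` wrapped around an inner tuner) is the standard nested-CV skeleton. A minimal sklearn-only sketch of that skeleton, with `GridSearchCV` standing in for `OptunaSearchCV` just to show where each loop sits (iris data and the tiny grid are illustrative assumptions):

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import GridSearchCV, cross_val_score
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

X, y = load_iris(return_X_y=True)

pipe = Pipeline([("scaling", StandardScaler()), ("estimator", SVC())])

# Inner loop: hyperparameter search (OptunaSearchCV would slot in here).
inner = GridSearchCV(pipe, {"estimator__C": [0.1, 1.0, 10.0]}, cv=2)

# Outer loop: unbiased performance estimate of the WHOLE tuning procedure.
outer_scores = cross_val_score(inner, X, y, cv=3)
print(outer_scores.mean())
```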
<python><scikit-learn>
2023-10-17 11:58:17
0
331
Jens
77,308,683
1,001,290
Guest checkout business logic for server/database with Stripe
<p>I am making a simple app which allows users to post ads to the website for a fee, as in pay per post. No account is required; it is in essence a guest checkout by default.</p> <p>The issue I am dealing with is the order and best way to implement the business logic on the server side.</p> <p>My flow is as follows:</p> <ol> <li>(Front end) User fills out a form with ad details</li> <li>(Front end) User hits submit</li> <li>(Back end) Post request is processed, new database entry is created</li> <li>(Back end) User is redirected to Stripe checkout</li> <li>Payment fails or succeeds</li> </ol> <p>Notice how in this flow I create an object/entry in my database before any payment has been made. This already feels wrong because I feel the entry shouldn't exist unless it has been paid for. A spammer could in theory fill my database with endless entries by filling the form and not completing payment. So if the Stripe payment fails in step 5, I have a choice: delete the entry, or return the user to the prefilled form. Clearly, the latter is the better option; however, if the user never finishes or fixes the payment I will still have a dangling entry in my database. What is the best way to approach this situation? And what should I do with the entry if I want to implement a pay-later option (via sending a payment URL to an email)?</p> <p>Also, if payment succeeds everything operates as normal. However, I want to use webhooks as they seem more reliable for payment status. But Stripe checkout also uses a &quot;success_url&quot;. So in the case that the checkout succeeds but the payment is rejected, I will still forward the user to a success_url, which seems strange as the user might get the impression that everything went smoothly. So, how would that case be handled?</p> <p>I feel like a lot of the trouble stems from Stripe checkout making me leave my server environment, whereby I lack control of how to change my database for each case should it arise.</p>
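A common pattern for the flow described above (a sketch under assumptions, not Stripe's prescribed design): keep the early database write, but give the row a `pending` status and a timestamp; only the webhook (e.g. a `checkout.session.completed` event) flips it to `paid`; treat the success_url page as "payment processing" rather than confirmation; and garbage-collect stale pending rows periodically. The same pending row then doubles as the target of a pay-later link. The in-memory "table" and event shape below are illustrative stand-ins:

```python
import time

# Hypothetical in-memory stand-in for the ads table.
ads = {}

def create_ad(ad_id, details):
    # Created before checkout, but explicitly not live yet.
    ads[ad_id] = {"details": details, "status": "pending", "created": time.time()}

def on_webhook(event):
    # Only the webhook — never the success_url redirect — marks an ad paid.
    if event["type"] == "checkout.session.completed":
        ad_id = event["data"]["client_reference_id"]
        if ad_id in ads:
            ads[ad_id]["status"] = "paid"

def purge_stale(max_age_seconds):
    # Periodic cleanup (e.g. a cron job) of abandoned checkouts.
    cutoff = time.time() - max_age_seconds
    stale = [k for k, v in ads.items()
             if v["status"] == "pending" and v["created"] < cutoff]
    for ad_id in stale:
        del ads[ad_id]

create_ad("ad1", "bike for sale")
on_webhook({"type": "checkout.session.completed",
            "data": {"client_reference_id": "ad1"}})
print(ads["ad1"]["status"])  # paid
```

Because only webhook-confirmed rows go live, a spammer who abandons checkout only creates pending rows that the purge removes, and the success_url never has to be trusted.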
<python><django><sqlite><stripe-payments><business-logic>
2023-10-17 11:41:46
1
412
Sjieke
77,308,674
14,282,714
Count number of groups with any certain value across rows pandas
<p>I would like to count the number of groups which have any <code>True</code> value across their rows in a column. Here I have some reproducible data:</p> <pre><code>import pandas as pd

df = {'group': [1, 1, 1, 2, 2, 2, 3, 3],
      'condition': [&quot;True&quot;, &quot;False&quot;, &quot;False&quot;, &quot;True&quot;,
                    &quot;True&quot;, &quot;False&quot;, &quot;False&quot;, &quot;False&quot;]}
pd.DataFrame(data=df, index=[0, 1, 2, 3, 4, 5, 6, 7])
</code></pre> <p>Output:</p> <pre><code>   group condition
0      1      True
1      1     False
2      1     False
3      2      True
4      2      True
5      2     False
6      3     False
7      3     False
</code></pre> <p>As we can see, in group 1 and group 2 there are rows with a True value, which means the result should be that there are 2 groups. So the expected outcome should look like this:</p> <pre><code>   groups  total_any_true
0       3               2
</code></pre> <p>So I was wondering if anyone knows how to count the total number of groups that have any certain value across their rows?</p>
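One way to get the expected frame (a sketch; note the sample stores "True"/"False" as *strings*, so they are compared with `.eq("True")` first rather than treated as booleans):

```python
import pandas as pd

df = pd.DataFrame({'group': [1, 1, 1, 2, 2, 2, 3, 3],
                   'condition': ["True", "False", "False", "True",
                                 "True", "False", "False", "False"]})

# Per-group "any True" flag, then count the flags and the distinct groups.
has_true = df['condition'].eq("True").groupby(df['group']).any()
out = pd.DataFrame({'groups': [df['group'].nunique()],
                    'total_any_true': [int(has_true.sum())]})
print(out)
```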
<python><pandas>
2023-10-17 11:39:42
3
42,724
Quinten
77,308,472
13,916,049
Subset dataframe based on index substring that matches the index of another dataframe in unspecified order
<p>I want to subset <code>anno_GSE159115</code> based on the substring of its index that matches any value in <code>adata.obs.index</code>, regardless of order. My code does not return all the matches when I use <code>isin</code>.</p> <pre><code>anno_GSE159115[anno_GSE159115.index.str.rsplit(&quot;_&quot;, 1).str[-1].isin(adata.obs.index)] </code></pre> <p>Input:</p> <p><code>anno_GSE159115</code></p> <pre><code>pd.DataFrame({'no_genes': {'SI_18854_AAACCTGCAAGTAGTA-1': 3255, 'SI_18854_AAACCTGTCCACTGGG-1': 3807, 'SI_18854_AAACCTGTCCTTTCTC-1': 2770, 'SI_18854_AAACGGGCAAACTGCT-1': 2185, 'SI_18854_AAACGGGCAAGGTTTC-1': 2741, 'SI_18854_AAACGGGGTTCACGGC-1': 4417, 'SI_18854_AAACGGGGTTCTGTTT-1': 3292, 'SI_18854_AAACGGGGTTTACTCT-1': 4335, 'SI_18854_AAAGATGAGATCTGAA-1': 3042, 'SI_18854_AAAGATGCACGTCTCT-1': 3019, 'SI_18854_AAAGATGGTGATAAAC-1': 6192, 'SI_18854_AAAGATGGTTTGGGCC-1': 1099, 'SI_18854_AAAGCAAGTACTTCTT-1': 2107, 'SI_18854_AAAGCAAGTACTTGAC-1': 2636, 'SI_18854_AAAGCAATCCTTGCCA-1': 4950, 'SI_18854_AAAGTAGGTCCAGTTA-1': 1889, 'SI_18854_AAAGTAGTCACCAGGC-1': 3031, 'SI_18854_AAAGTAGTCAGGTAAA-1': 2251, 'SI_18854_AAATGCCAGTAAGTAC-1': 1389, 'SI_18854_AAATGCCGTTCAACCA-1': 4068, 'SI_18854_AAATGCCTCAGTTAGC-1': 2400, 'SI_18854_AAATGCCTCGATAGAA-1': 2205, 'SI_18854_AACACGTGTACTCGCG-1': 2775, 'SI_18854_AACACGTTCACCAGGC-1': 2400, 'SI_18854_AACCATGAGTAAGTAC-1': 1687, 'SI_18854_AACCATGCAGCCTGTG-1': 1733, 'SI_18854_AACCATGCAGTCGTGC-1': 1785, 'SI_18854_AACCATGTCTCGGACG-1': 2397, 'SI_18854_AACCGCGAGGGAAACA-1': 2160, 'SI_18854_AACCGCGCACGAAGCA-1': 2972, 'SI_18854_AACCGCGCAGTCAGCC-1': 5695, 'SI_18854_AACGTTGCACCATCCT-1': 4423, 'SI_18854_AACTCAGGTCCGTCAG-1': 2558, 'SI_18854_AACTCAGGTGTTGAGG-1': 2443, 'SI_18854_AACTCAGTCCGCGCAA-1': 1196, 'SI_18854_AACTCCCAGAAGGTTT-1': 3257, 'SI_18854_AACTCCCAGCTTCGCG-1': 2019, 'SI_18854_AACTCCCAGGCCGAAT-1': 1526, 'SI_18854_AACTCCCAGGTGGGTT-1': 4833, 'SI_18854_AACTCCCCAGTCGTGC-1': 5175, 'SI_18854_AACTCCCGTGCATCTA-1': 3545, 'SI_18854_AACTCTTAGATGGCGT-1': 1741, 
'SI_18854_AACTCTTAGGACGAAA-1': 2242, 'SI_18854_AACTCTTCACATGGGA-1': 1975, 'SI_18854_AACTCTTTCCAGAAGG-1': 2780, 'SI_18854_AACTGGTGTCAACATC-1': 2160, 'SI_18854_AACTGGTGTGGAAAGA-1': 2631, 'SI_18854_AACTTTCAGCTAGTTC-1': 2983, 'SI_18854_AACTTTCAGGATGGAA-1': 2177, 'SI_18854_AACTTTCAGTATGACA-1': 5198, 'SI_18854_AACTTTCGTGACAAAT-1': 1687, 'SI_18854_AAGACCTCAGACACTT-1': 2319, 'SI_18854_AAGACCTTCAACTCTT-1': 2983, 'SI_18854_AAGACCTTCTTGTCAT-1': 4303, 'SI_18854_AAGCCGCGTACCGCTG-1': 4485, 'SI_18854_AAGCCGCGTTTAAGCC-1': 2863, 'SI_18854_AAGCCGCTCAGTTAGC-1': 3569, 'SI_18854_AAGCCGCTCCACGTTC-1': 6725, 'SI_18854_AAGCCGCTCGTTACAG-1': 2497, 'SI_18854_AAGCCGCTCTCGCATC-1': 3358, 'SI_18854_AAGGAGCAGTACGTTC-1': 1734, 'SI_18854_AAGGAGCGTTCTCATT-1': 1513, 'SI_18854_AAGGAGCTCCAGAGGA-1': 1614, 'SI_18854_AAGGAGCTCCCAAGTA-1': 2350, 'SI_18854_AAGGCAGCAAAGTGCG-1': 2109, 'SI_18854_AAGGCAGCACGTCTCT-1': 3231, 'SI_18854_AAGGCAGGTCCGAATT-1': 2112, 'SI_18854_AAGGCAGGTCGGCACT-1': 4298, 'SI_18854_AAGGCAGTCGGCGGTT-1': 3054, 'SI_18854_AAGGTTCAGTCGTACT-1': 3450, 'SI_18854_AAGTCTGAGGGAGTAA-1': 1631, 'SI_18854_AAGTCTGAGTGTTGAA-1': 4198, 'SI_18854_AAGTCTGGTATGAATG-1': 4625, 'SI_18854_AAGTCTGTCTCCCTGA-1': 2152, 'SI_18854_AATCCAGAGAAACGCC-1': 1746, 'SI_18854_AATCCAGGTGATAAAC-1': 2176, 'SI_18854_AATCGGTAGACAGACC-1': 3262, 'SI_18854_AATCGGTCACGGCCAT-1': 3994, 'SI_18854_AATCGGTGTGGAAAGA-1': 3387, 'SI_18854_ACACCAAAGAACAATC-1': 2056, 'SI_18854_ACACCAAAGAATTCCC-1': 5704, 'SI_18854_ACACCAACAACAACCT-1': 5949, 'SI_18854_ACACCCTCAAGCGAGT-1': 3208, 'SI_18854_ACACCCTCATCATCCC-1': 2517, 'SI_18854_ACACCCTTCGTCGTTC-1': 2313, 'SI_18854_ACACCGGTCCCTTGCA-1': 2074, 'SI_18854_ACACCGGTCGCTAGCG-1': 2623, 'SI_18854_ACACTGAAGAGGGCTT-1': 3039, 'SI_18854_ACACTGAGTAAATGAC-1': 2544, 'SI_18854_ACAGCCGCATTAGCCA-1': 1725, 'SI_18854_ACAGCCGGTACCGGCT-1': 2649, 'SI_18854_ACAGCCGTCGTTTAGG-1': 1661, 'SI_18854_ACAGCCGTCTCGCTTG-1': 3747, 'SI_18854_ACAGCCGTCTTATCTG-1': 2595, 'SI_18854_ACAGCTAAGAACAACT-1': 2623, 'SI_18854_ACAGCTAAGGACAGAA-1': 2247, 
'SI_18854_ACAGCTAAGTTTCCTT-1': 3156, 'SI_18854_ACAGCTAGTCTGCGGT-1': 1867, 'SI_18854_ACAGCTAGTGTAATGA-1': 3351, 'SI_18854_ACAGCTATCGGAATCT-1': 2811}}) </code></pre> <p><code>adata.obs.index</code></p> <pre><code>Index(['AAACCTGAGACGACGT-1', 'AAACCTGCACGGCTAC-1', 'AAACCTGCAGTATCTG-1', 'AAACCTGCAGTCAGCC-1', 'AAACCTGGTAAATGTG-1', 'AAACCTGGTAACGTTC-1', 'AAACCTGGTCAAGCGA-1', 'AAACCTGGTCAGTGGA-1', 'AAACCTGGTTCATGGT-1', 'AAACCTGGTTTAGGAA-1', 'AAACCTGTCAACCATG-1', 'AAACCTGTCCGCATCT-1', 'AAACCTGTCGCCAGCA-1', 'AAACCTGTCTATCGCC-1', 'AAACCTGTCTGACCTC-1', 'AAACGGGAGACCCACC-1', 'AAACGGGAGTTAGCGG-1', 'AAACGGGCACTTAAGC-1', 'AAACGGGCAGCAGTTT-1', 'AAACGGGCAGTTCCCT-1', 'AAACGGGGTAACGACG-1', 'AAACGGGGTAAGTTCC-1', 'AAACGGGGTAGCGCAA-1', 'AAACGGGGTCCTGCTT-1', 'AAACGGGGTCGGCTCA-1', 'AAACGGGGTCGTGGCT-1', 'AAACGGGGTTAAGATG-1', 'AAACGGGGTTGCCTCT-1', 'AAAGATGAGCGTTGCC-1', 'AAAGATGAGCTAGTCT-1', 'AAAGATGAGCTGAAAT-1', 'AAAGATGAGGTGCTTT-1', 'AAAGATGAGTATCGAA-1', 'AAAGATGCAATCACAC-1', 'AAAGATGCAATCGGTT-1', 'AAAGATGCAGATCGGA-1', 'AAAGATGCATTGAGCT-1', 'AAAGATGTCGTTTATC-1', 'AAAGATGTCTCCAGGG-1', 'AAAGATGTCTGAGGGA-1', 'AAAGCAAAGATATGGT-1', 'AAAGCAAAGCACCGCT-1', 'AAAGCAAAGCAGACTG-1', 'AAAGCAAAGCGCCTTG-1', 'AAAGCAAAGGCAAAGA-1', 'AAAGCAACAGACGCCT-1', 'AAAGCAACAGTGGAGT-1', 'AAAGCAACATACAGCT-1', 'AAAGCAAGTAAAGTCA-1', 'AAAGCAAGTGCAACTT-1', 'AAAGCAAGTTATTCTC-1', 'AAAGCAAGTTGGTAAA-1', 'AAAGCAATCAGAAATG-1', 'AAAGCAATCGCCGTGA-1', 'AAAGTAGAGATCGATA-1', 'AAAGTAGAGCAATATG-1', 'AAAGTAGCAGGCTCAC-1', 'AAAGTAGCAGTTTACG-1', 'AAAGTAGGTACGAAAT-1', 'AAAGTAGGTCACAAGG-1', 'AAAGTAGGTCAGAGGT-1', 'AAAGTAGGTCGGATCC-1', 'AAAGTAGTCAACACGT-1', 'AAAGTAGTCACAGGCC-1', 'AAAGTAGTCACCGGGT-1', 'AAAGTAGTCAGCTCTC-1', 'AAAGTAGTCCTCATTA-1', 'AAAGTAGTCTCCTATA-1', 'AAATGCCAGATCCTGT-1', 'AAATGCCAGCGATCCC-1', 'AAATGCCCACCAGGCT-1', 'AAATGCCGTACAGTGG-1', 'AAATGCCGTAGTAGTA-1', 'AAATGCCGTATCTGCA-1', 'AAATGCCTCAACACGT-1', 'AAATGCCTCAGCCTAA-1', 'AAATGCCTCGAATCCA-1', 'AAATGCCTCTGCCCTA-1', 'AAATGCCTCTGCGACG-1', 'AAATGCCTCTGTCCGT-1', 'AACACGTAGACTGTAA-1', 
'AACACGTAGCGATTCT-1', 'AACACGTCACTTACGA-1', 'AACACGTCATGCCTAA-1', 'AACACGTCATGGTAGG-1', 'AACACGTGTCAACATC-1', 'AACACGTTCAGTTGAC-1', 'AACACGTTCGAGCCCA-1', 'AACACGTTCGCACTCT-1', 'AACACGTTCTTCTGGC-1', 'AACCATGAGACCGGAT-1', 'AACCATGAGCAGCGTA-1', 'AACCATGCACTTAAGC-1', 'AACCATGCAGATTGCT-1', 'AACCATGCATGTTGAC-1', 'AACCATGGTCCAGTGC-1', 'AACCATGGTGTGCGTC-1', 'AACCATGTCCCAGGTG-1', 'AACCATGTCGGCATCG-1', 'AACCATGTCTCCAACC-1'], dtype='object') </code></pre>
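For reference on the question above: the suffix-matching approach itself does work once the split keeps the full barcode, though in recent pandas the second argument to `.str.rsplit` must be passed as the keyword `n=1` (positional was deprecated). A small sketch with made-up rows:

```python
import pandas as pd

anno = pd.DataFrame({'no_genes': [3255, 3807, 2770]},
                    index=['SI_18854_AAACCTGCAAGTAGTA-1',
                           'SI_18854_AAACCTGTCCACTGGG-1',
                           'SI_18854_AAACCTGTCCTTTCTC-1'])
wanted = pd.Index(['AAACCTGTCCACTGGG-1', 'AAACCTGTCCTTTCTC-1'])

# Split once from the right, keep the last piece: the bare barcode.
suffix = anno.index.str.rsplit('_', n=1).str[-1]
subset = anno[suffix.isin(wanted)]
print(len(subset))  # 2
```

If the real data returns fewer matches than expected, the usual culprits (hedged guesses) are sample-specific suffixes such as `-1` vs `-2` or stray whitespace; `set(suffix) - set(adata.obs.index)` shows exactly which barcodes fail to match.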
<python><pandas>
2023-10-17 11:05:13
0
1,545
Anon
77,308,447
6,851,715
Dynamically create a fake dataset based on the subset of another (real) dataset
<p>I've got a few datasets and for each, I'd like to create a fake dataset that is kind of representative of that dataset. I need to do it dynamically, based only on the type of data (numeric, object).</p> <p>Here's an example:</p> <pre><code>import pandas as pd
import random

# Create a dictionary with columns as lists
data = {
    'ObjectColumn1': [f'Object1_{i}' for i in range(1, 11)],
    'ObjectColumn2': [f'Object2_{i}' for i in range(1, 11)],
    'ObjectColumn3': [f'Object3_{i}' for i in range(1, 11)],
    'NumericColumn1': [random.randint(1, 100) for _ in range(10)],
    'NumericColumn2': [random.uniform(1.0, 10.0) for _ in range(10)],
    'NumericColumn3': [random.randint(1000, 2000) for _ in range(10)],
    'NumericColumn4': [random.uniform(10.0, 20.0) for _ in range(10)]
}

# Create the DataFrame
df = pd.DataFrame(data)
</code></pre> <p><a href="https://i.sstatic.net/BPJ3t.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/BPJ3t.png" alt="enter image description here" /></a></p> <p>Let's say the above dataset has m (=3) object columns and n (=4) numeric columns, and the dataset has x (=10) rows. I'd like to create a <strong>fake dataset of N (=10,000) rows</strong>, so that:</p> <ol> <li>ObjectColumn1, ObjectColumn2, ..., and ObjectColumn_m in the fake dataset are random selections of entries in ObjectColumn1, ObjectColumn2, ..., and ObjectColumn_m of data respectively</li> <li>ExtraObjectColumn in the fake dataset is an added fake column, which is a random selection from a given list (e.g. list = [ran1, ran2, ran3])</li> <li>all NumericColumns in the fake data are randomly generated numbers between the minimum and median of each of those columns in data respectively. For example, NumericColumn1 in the fake data would be randomly generated between 3 and 71.5</li> <li>I don't want columns to be hard-coded. Imagine m, n, x and N are all dynamic. I need to use this on many datasets and the function needs to detect the object and numeric columns and do the above dynamically. The only column that is NOT dynamic is the ExtraObjectColumn, which needs to be given a list to be created from.</li> <li>Obviously, I need this to have reasonable performance. N is usually a large number (at least 10,000)</li> </ol> <p>Here's how fake_data should look if N = 4:</p> <p><a href="https://i.sstatic.net/dLdlE.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/dLdlE.png" alt="enter image description here" /></a></p>
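The five requirements above can be sketched with a single dtype-driven loop (a sketch; the function name, seed handling, and the tiny demo frame are illustrative assumptions). Vectorized `numpy.random.Generator` calls keep it fast for N in the tens of thousands:

```python
import numpy as np
import pandas as pd

def make_fake(df, n, extra_choices, seed=None):
    """Build an n-row fake frame: object columns resample observed values,
    numeric columns draw uniformly between each column's min and median."""
    rng = np.random.default_rng(seed)
    out = {}
    for col in df.columns:  # columns detected by dtype, never hard-coded
        if pd.api.types.is_numeric_dtype(df[col]):
            lo, hi = df[col].min(), df[col].median()
            out[col] = rng.uniform(lo, hi, size=n)
        else:
            out[col] = rng.choice(df[col].to_numpy(), size=n)
    # The one non-dynamic column, sampled from the caller-supplied list.
    out['ExtraObjectColumn'] = rng.choice(extra_choices, size=n)
    return pd.DataFrame(out)

demo = pd.DataFrame({'ObjectColumn1': ['a', 'b', 'c', 'd'],
                     'NumericColumn1': [3, 10, 71, 95]})
fake = make_fake(demo, 10_000, ['ran1', 'ran2', 'ran3'], seed=0)
print(fake.shape)  # (10000, 3)
```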
<python><pandas><random><dynamic><faker>
2023-10-17 11:02:05
2
1,430
Ankhnesmerira
77,308,411
3,445,378
Can I use the python library 'xmlschema' to create XML-Documents from scratch?
<p>There are some schema definition files (xsd). They can be read with the library <a href="https://xmlschema.readthedocs.io/en/latest/" rel="nofollow noreferrer">xmlschema</a>. I can use it to validate or convert some existing XML documents. That is fine.</p> <p>But I have to create entirely new XML documents from scratch. How can the library help to reach that goal?</p> <p>Addition:</p> <p>I'd like to provide some more information. The XSD files are provided by a third party and I cannot make them public. But I can read them without a problem with:</p> <pre><code>import xmlschema schema = xmlschema.XMLSchema('schema.xsd') print(schema) </code></pre> <blockquote> <p>XMLSchema10(name='schema.xsd', namespace='some:namespace')</p> </blockquote> <p>Further, the documentation says that the root element for the XML document has the namespace <code>some:namespace</code>.</p>
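As a sketch of one common division of labour (hedged — the xmlschema calls are commented out because they need the real, non-public schema.xsd): build the tree with the standard library, then let xmlschema validate it. The library's encode()/decode() methods can also convert between Python dicts and schema-conformant XML:

```python
import xml.etree.ElementTree as ET

# Build the document with the standard library, using the namespace
# reported by the schema header
root = ET.Element('root', attrib={'xmlns': 'some:namespace'})
child = ET.SubElement(root, 'item')
child.text = 'value'
xml_text = ET.tostring(root, encoding='unicode')

# ...then let xmlschema check the result against the XSD:
# import xmlschema
# schema = xmlschema.XMLSchema('schema.xsd')
# schema.validate(xml_text)       # raises on an invalid document
# data = schema.decode(xml_text)  # XML -> dict; schema.encode(data) goes back
```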
<python><xml><xsd>
2023-10-17 10:57:09
1
1,280
testo
77,308,321
1,227,057
How to install external Python dependencies for Spark workers in a PySpark cluster?
<p>I am running a Spark job with a PySpark user-defined function (UDF) that requires the <code>requests</code> package, which is not a standard built-in Python package. When executing the job, I encounter the error <code>ModuleNotFoundError: No module named 'requests'</code>. I understand that the dependencies are missing on the workers, causing the UDF to fail.</p> <p>I tried following the instructions in the official documentation (<a href="https://spark.apache.org/docs/latest/api/python/user_guide/python_packaging.html" rel="nofollow noreferrer">https://spark.apache.org/docs/latest/api/python/user_guide/python_packaging.html</a>) using <code>pyspark.SparkContext.addPyFile()</code>, but I couldn't get it to work.</p> <p>Could someone provide a simple guide or example on how to install external Python dependencies for Spark workers in a PySpark cluster? Any help would be appreciated.</p>
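For what it's worth, the two patterns from the linked docs page can be sketched like this (all file and environment names below are placeholders; `--py-files`/`addPyFile` only covers pure-Python dependencies such as requests, while the venv-pack route also handles packages with native code):

```shell
# Option A - zip the dependencies themselves (pure-Python packages only)
pip install -t deps requests
cd deps && zip -r ../deps.zip . && cd ..
spark-submit --py-files deps.zip my_job.py      # or sc.addPyFile("deps.zip") in code

# Option B - ship a whole virtualenv to the workers
python -m venv pyspark_env
source pyspark_env/bin/activate
pip install requests venv-pack
venv-pack -o pyspark_env.tar.gz
export PYSPARK_PYTHON=./environment/bin/python
spark-submit --archives pyspark_env.tar.gz#environment my_job.py
```

The `#environment` suffix is the alias under which the unpacked archive appears in each executor's working directory, which is why `PYSPARK_PYTHON` points at `./environment/bin/python`.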
<python><apache-spark><pyspark>
2023-10-17 10:43:22
1
1,401
micmia
77,308,150
6,393,479
How to fetch data asynchronously from a multiprocessing spawn process in Python?
<p>I have a FastAPI app with a single endpoint <code>/generate</code>. This endpoint takes in requests and puts them into global <code>input_queue</code>. Now the background spawn process called <code>worker</code> gets the data from the <code>input_queue</code> and do some calculations on it real-time and stores the output in a global dictionary <code>output_store</code>.</p> <p>Now it takes some time for the <code>worker</code> to generate output and store it in global <code>output_store</code>. How to make <code>/generate</code> to wait for the output value asynchronously without using a while statement to check if the output appeared in <code>output_store</code> ?</p> <pre><code># server.py import uuid import asyncio import multiprocessing as mp import uvicorn from fastapi import FastAPI from pydantic import BaseModel class RAGRequest(BaseModel): msg: str async def _bg_process(item): chat_id, msg = item['chat_id'], item['msg'] print(f&quot;{chat_id} is running...&quot;) await asyncio.sleep(10) # calculations that take atleast 10s print(f&quot;{chat_id} is done.&quot;) return f&quot;{msg}_output&quot; #Background process async def bg_process(item, output_store): output = await _bg_process(item) chat_id = item['chat_id'] # storing this output in global dictionary output_store[chat_id] = {'chat_id': chat_id, &quot;output&quot;: output} async def _worker(input_queue, output_store): # creating tasks realtime and running them in the background async with asyncio.TaskGroup() as tg: while True: if not input_queue.empty(): item = input_queue.get() cor = bg_process(item, output_store) tg.create_task(cor) else: await asyncio.sleep(0.1) # Listener # Starts the background process that listens to incoming requests and processes them def worker(input_queue, output_store): cor = _worker(input_queue, output_store) asyncio.run(cor) app = FastAPI() @app.post(&quot;/generate&quot;) async def gen(request: RAGRequest): global input_queue global output_store chat_id = 
str(uuid.uuid1()) input_queue.put({'chat_id': chat_id, &quot;msg&quot;: request.msg}) # not using while True because it blocks the code final_output = output_store[chat_id]['output'] # need help here in fetching this item asyncronously return final_output if __name__ == &quot;__main__&quot;: mp.set_start_method(&quot;spawn&quot;) input_queue = mp.Queue() # passing inputs to listener process output_store = mp.Manager().dict() # stores all outputs from the listener process p = mp.Process(target=worker, args=(input_queue, output_store)) p.start() uvicorn.run(app, host=&quot;0.0.0.0&quot;, port = 3000) </code></pre> <p>you can use the below code to send requests to the server</p> <pre><code># send_requests.py import aiohttp, asyncio url = &quot;http://0.0.0.0:3000/generate&quot; url = &quot;http://localhost:3000/generate&quot; prompts = [ &quot;Leading Causes of heart diseases in atleast 1000 words&quot;, &quot;Give me a brief history of India in atleast 1000 words&quot;, &quot;Explain to me all the Harry potter books in atleast 1000 words&quot;, &quot;Explain the War of Roses in atleast 1000 words&quot;, &quot;Explain all the major events that happened in the World war 2 in atleast 1000 words&quot;, &quot;Explain General relativity ELI5 in atleaset 1000 words&quot;, &quot;Explain Evolution ELI5 in atleast 1000 words&quot;, &quot;What are the big questions in the Universe in atleast 1000 words?&quot;, &quot;What happens after death in atleast 1000 words&quot;, &quot;Explain the story of One Piece so far in atleast 1000 words&quot;, &quot;Write a story about a character who discovers a hidden talent for painting in atleast 1000 words.&quot;, &quot;Imagine a world where gravity doesn't exist and create a narrative about how people adapt to this new reality in atleast 1000 words.&quot;, &quot;Write a letter to your future self, ten years from now in atleast 1000 words.&quot;, &quot;Describe a dream you had last night and explain its significance in atleast 1000 
words.&quot;, &quot;Imagine you are an alien visiting Earth for the first time and write a journal entry about your observations in atleast 1000 words.&quot;, &quot;Write a script for a movie trailer about a protagonist who must save the world from an evil villain in atleast 1000 words.&quot;, &quot;Imagine you have been given the power to change one thing about the world. What would it be and why? in atleast 1000 words&quot;, &quot;Write a poem about your favorite season and the emotions it evokes in atleast 1000 words.&quot;, &quot;Create a character who is a master of time travel and write a story about their adventures in atleast 1000 words.&quot;, &quot;Imagine you are a detective solving a crime that took place in a parallel universe. How do you go about solving the case? in atleast 1000 words&quot;, &quot;Write a monologue for a character who has just discovered a deep secret about their family's past in atleast 1000 words&quot;, &quot;Imagine you are a superhero with the power to control the elements. Write a scene where you must use your powers to save a city from a natural disaster in atleast 1000 words.&quot;, &quot;Write a story about a character who is struggling to overcome a personal defeat and find a new sense of purpose in atleast 1000 words.&quot;, &quot;Create a world where memories can be transferred from one person to another. Write a story about a character who discovers this technology and the implications it has on their life in atleast 1000 words.&quot;, &quot;Imagine you are a ghost haunting an old mansion. 
Write a journal entry about your experiences and the secrets you have uncovered in atleast 1000 words.&quot;, ] prompts = [{&quot;msg&quot;:i} for i in prompts] async def send_req(session, url, payload, _id): print(f&quot;Sent req number {_id}&quot;) async with session.post(url, json = payload) as resp: r = await resp.text() # r = await resp.json() print(r) async def send_multiple(url, prompts): if not isinstance(prompts, list): prompts = [prompts] n = len(prompts) async with aiohttp.ClientSession() as session: coros = [send_req(session, url, payload, i) for payload,i in zip(prompts, range(1,n+1))] await asyncio.gather(*coros) if __name__ == &quot;__main__&quot;: loop = asyncio.get_event_loop() # sends one request # loop.run_until_complete(send_multiple(url, prompts[0])) # sends 25 requests loop.run_until_complete(send_multiple(url, prompts)) </code></pre>
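One way to avoid the polling loop, sketched here independently of the FastAPI/multiprocessing plumbing (the `deliver` helper is hypothetical): keep one asyncio.Future per chat_id in the web process; the endpoint awaits the Future, and a single background task that drains the worker's results sets it. Note the Future object itself cannot cross a process boundary — only the result data does:

```python
import asyncio

pending: dict[str, asyncio.Future] = {}

async def generate(chat_id: str) -> str:
    # endpoint side: register a Future and await it instead of polling output_store
    fut = asyncio.get_running_loop().create_future()
    pending[chat_id] = fut
    try:
        return await asyncio.wait_for(fut, timeout=30)
    finally:
        pending.pop(chat_id, None)

def deliver(chat_id: str, output: str) -> None:
    # called by one listener task that drains the manager dict / mp queue
    fut = pending.get(chat_id)
    if fut is not None and not fut.done():
        fut.set_result(output)

async def demo() -> str:
    task = asyncio.create_task(generate("abc"))
    await asyncio.sleep(0.1)      # stand-in for the worker taking some time
    deliver("abc", "msg_output")  # in the real app: done by the listener task
    return await task

result = asyncio.run(demo())
```

In the real app a single `asyncio` task inside the web process would poll (or be notified about) completed items and call `deliver()`; the per-request handlers then never busy-wait.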
<python><multiprocessing><python-asyncio><fastapi><python-multiprocessing>
2023-10-17 10:17:47
2
1,053
Uchiha Madara
77,308,083
9,530,017
How to change figure size and other size elements for a svg file
<p>I am currently working with figures that I save in <strong>SVG</strong> format, in order to include them in an HTML presentation. I'm looking for consistency in terms of fonts, line sizes etc. across the presentation.</p> <p>Hence, I need to scale all my figures by a factor of 2 in width compared to the default <code>matplotlib</code> params.</p> <p>For now, I am relying on an external Python lib, <code>svgutils</code>:</p> <pre><code> import matplotlib.pyplot as plt import svgutils.transform as sg figname = 'test.svg' fig = plt.figure() fig.savefig(figname) fig = sg.fromfile(figname) newsize = ['{}pt'.format(2 * float(i.replace('pt', ''))) for i in fig.get_size()] fig.set_size(newsize) fig.save(figname) </code></pre> <p>but this is a small project and I am not sure it will be maintained through time, so I'd like to do it only with <code>matplotlib</code>.</p> <p>If I only modify the <code>figsize</code>, all other elements (i.e. lines) do not increase in size. Is there a way to scale all parameters? Or should I go over each element defined in the <code>rcParams</code> and scale them appropriately? That looks rather long to do ...</p> <p>EDIT: Note that changing the DPI (as suggested <a href="https://stackoverflow.com/questions/47633546/relationship-between-dpi-and-figure-size">here</a>) will not work, as SVG objects do not have an associated DPI, unlike raster formats (i.e. png, jpeg, etc.).</p>
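Since the question ends on scaling rcParams wholesale, here is a hedged sketch of that route (the key list is an illustrative subset, not exhaustive): multiply the size-bearing defaults and the figsize by one factor, so the figure and everything drawn in it grow together, like the svgutils 2x rescale:

```python
import io

import matplotlib
matplotlib.use('Agg')  # headless backend; irrelevant to the SVG output itself
import matplotlib.pyplot as plt

SCALE = 2.0  # assumption: one uniform factor for the whole presentation

base_font = float(plt.rcParams['font.size'])

# an illustrative subset of the size-bearing rcParams - extend with others you rely on
size_keys = ['font.size', 'lines.linewidth', 'lines.markersize',
             'axes.linewidth', 'xtick.major.size', 'ytick.major.size',
             'xtick.major.width', 'ytick.major.width']
for key in size_keys:
    plt.rcParams[key] = float(plt.rcParams[key]) * SCALE
plt.rcParams['figure.figsize'] = [v * SCALE for v in plt.rcParams['figure.figsize']]

fig, ax = plt.subplots()
ax.plot([0, 1], [0, 1])
buf = io.BytesIO()              # fig.savefig('test.svg') in practice
fig.savefig(buf, format='svg')
plt.close(fig)
```

Because figsize and the drawn-element sizes grow by the same factor, proportions are preserved; the downside is exactly the one anticipated in the question — the key list has to be curated once.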
<python><matplotlib><svg><svgutils>
2023-10-17 10:08:57
1
1,546
Liris
77,308,067
4,288,043
wrap one pandas column heading and another column contents simultaneously
<p>I've had two styling things clash in a pandas dataframe in jupyter notebook and cancel each other out.</p> <p>One column has a very long column heading, the other column has a small column heading but the cell contents are a lot of text.</p> <p>I wanted to wrap the heading of the one column but the contents of the other.</p> <p>To reproduce:</p> <pre><code>import pandas as pd # create dataframe with one long column name and another column with a short name but long contents d = {'one': pd.Series([10, 20, 30, 40], index=['a', 'b', 'c', 'd']), 'two': pd.Series([&quot;Flexitarian palo santo JOMO next level&quot;, &quot;disrupt tempor labore enamel pin etsy kickstarter pour-over put a bird on it adaptogen&quot;, &quot;Kombucha whatever venmo authentic woke elit&quot;, &quot;90s vice fingerstache est polaroid laboris iPhone taiyaki&quot;], index=['a', 'b', 'c', 'd']), 'Raw denim prism copper mug, cornhole poutine laborum': pd.Series([10, 20, 30, 40], index=['a', 'b', 'c', 'd'])} df = pd.DataFrame(d) </code></pre> <p>This creates a sample dataframe similar to what I am working with.</p> <p>Wrap the long column heading as described here <a href="https://stackoverflow.com/a/43093281/4288043">https://stackoverflow.com/a/43093281/4288043</a> :</p> <pre><code>df.style.set_table_styles([dict(selector=&quot;th&quot;,props=[('max-width', '50px')])]) </code></pre> <p>Wrap the other column's contents as described here <a href="https://stackoverflow.com/a/70162236/4288043">https://stackoverflow.com/a/70162236/4288043</a> :</p> <pre><code>from IPython.display import display, HTML def wrap_df_text(df): return display(HTML(df.to_html().replace(&quot;\\n&quot;,&quot;&lt;br&gt;&quot;))) df['two'] = df['two'].str.wrap(20) wrap_df_text(df) </code></pre> <p>Either of these will work separately but they will not work together.</p> <p>Does anyone know how to fix it?</p>
<python><pandas><pandas-styles>
2023-10-17 10:06:15
1
7,511
cardamom
77,308,057
7,737,097
Conversion accuracy issues of E57 to LAS in Python using pye57 and laspy
<p>I am attempting to convert an .e57 pointcloud into a .las pointcloud but I am running into issues which seem to be caused by accuracy. The following images illustrate the original .e57 in CloudCompare, compared to the converted .las file also in CloudCompare. The point size is bumped up significantly to compare the differences.</p> <p><strong>Original E57</strong> <a href="https://i.sstatic.net/GOPG2.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/GOPG2.png" alt="enter image description here" /></a></p> <p><strong>Converted LAS</strong> <a href="https://i.sstatic.net/cKuHn.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/cKuHn.png" alt="enter image description here" /></a></p> <p>As you can see the points are positioned in some kind of grid pattern in the LAS file. Inspecting the LAS file in Potree exposes the issue at a significant level. See the image below. <a href="https://i.sstatic.net/TyU9s.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/TyU9s.jpg" alt="enter image description here" /></a></p> <p>My code for converting the E57 to the LAS is posted below. 
If anyone has any insights as to why this is happening and why the points seem to be grid-aligned then I'd be happy to hear more.</p> <pre class="lang-py prettyprint-override"><code>import numpy as np import pye57 import laspy e57 = pye57.E57(&quot;in.e57&quot;) # read scan at index 0 data = e57.read_scan(index=0, ignore_missing_fields=True, colors=True, intensity=True) # 'data' is a dictionary with the point types as keys assert isinstance(data[&quot;cartesianX&quot;], np.ndarray) assert isinstance(data[&quot;cartesianY&quot;], np.ndarray) assert isinstance(data[&quot;cartesianZ&quot;], np.ndarray) # other attributes can be read using: # data = e57.read_scan(0, intensity=True, colors=True, row_column=True) assert isinstance(data[&quot;cartesianX&quot;], np.ndarray) assert isinstance(data[&quot;cartesianY&quot;], np.ndarray) assert isinstance(data[&quot;cartesianZ&quot;], np.ndarray) assert isinstance(data[&quot;intensity&quot;], np.ndarray) assert isinstance(data[&quot;colorRed&quot;], np.ndarray) assert isinstance(data[&quot;colorGreen&quot;], np.ndarray) assert isinstance(data[&quot;colorBlue&quot;], np.ndarray) # the ScanHeader object wraps most of the scan information: header = e57.get_header(0) print(header.point_count) print(header.rotation_matrix) print(header.translation) # all the header information can be printed using: for line in header.pretty_print(): print(line) # Create a new LAS file las_out = laspy.create(point_format=3, file_version='1.2') # Populate the LAS file with point cloud data print(data[&quot;cartesianX&quot;]) las_out.x = data[&quot;cartesianX&quot;] las_out.y = data[&quot;cartesianY&quot;] las_out.z = data[&quot;cartesianZ&quot;] las_out.intensity = data[&quot;intensity&quot;] las_out.red = data[&quot;colorRed&quot;] las_out.green = data[&quot;colorGreen&quot;] las_out.blue = data[&quot;colorBlue&quot;] # Close the LAS file las_out.write(&quot;output/out.las&quot;) </code></pre>
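For intuition about why the points snap to a grid: the LAS format (and therefore laspy) stores X/Y/Z as scaled 32-bit integers, so every coordinate is rounded to a multiple of the header's scale factor. A small numpy sketch of that round trip:

```python
import numpy as np

x = np.random.default_rng(1).uniform(0.0, 1.0, 5)

def las_roundtrip(coords, scale, offset=0.0):
    # what a LAS writer does internally: quantise to integers, then rescale
    ints = np.round((coords - offset) / scale).astype(np.int64)
    return ints * scale + offset

coarse = las_roundtrip(x, scale=0.01)    # coarse scale -> visible 1 cm grid
fine = las_roundtrip(x, scale=0.0001)    # finer scale -> sub-mm grid

# the worst-case error is scale / 2 in both cases
print(np.abs(coarse - x).max())
print(np.abs(fine - x).max())
```

If that matches what you see, setting finer scale factors (and sensible offsets) on the LAS header before assigning x/y/z should remove the grid — in laspy 2.x that means building a `LasHeader` and setting its `scales`/`offsets` yourself rather than relying on the defaults (attribute names per laspy's 2.x API; verify against the version you have installed).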
<python><python-3.x><data-conversion><floating-accuracy><point-clouds>
2023-10-17 10:05:06
1
609
Arjan
77,307,979
11,678,191
Installing IPOPT in Linux (to be used with Pyomo)
<p>I am trying to install ipopt on a Linux machine but without success...</p> <p>I have followed the steps in <a href="https://coin-or.github.io/Ipopt/INSTALL.html" rel="nofollow noreferrer">https://coin-or.github.io/Ipopt/INSTALL.html</a>.</p> <p>I am able to successfully compile it, but in the end it seems that the bin folder is not being created; instead I have lib, share and include folders.</p> <p>My goal is to use IPOPT with Pyomo, and since I do not have the bin folder, I cannot set the PATH environment variable nor the executable parameter...</p> <p><a href="https://i.sstatic.net/zXFhX.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/zXFhX.png" alt="enter image description here" /></a></p> <p>I wonder if someone could shed some light on that, thanks</p>
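One hedged pointer, since Pyomo needs the <code>ipopt</code> executable rather than just the library: as far as I know, the AMPL executable is only built when the ASL (AMPL Solver Library) is available at configure time, which the coinbrew helper fetches for you; building by hand from the Ipopt sources alone yields only lib/include/share. A sketch (paths are placeholders):

```shell
git clone https://github.com/coin-or/coinbrew
cd coinbrew
./coinbrew fetch Ipopt --no-prompt        # also pulls ThirdParty deps incl. ASL
./coinbrew build Ipopt --prefix=$HOME/ipopt --test --no-prompt
ls $HOME/ipopt/bin                        # should now contain the 'ipopt' executable
export PATH=$HOME/ipopt/bin:$PATH         # what Pyomo's SolverFactory('ipopt') looks for
```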
<python><optimization><pyomo><ipopt>
2023-10-17 09:58:01
2
319
rapha123
77,307,953
5,379,182
SQLAlchemy session is invalidated unexpectedly causing PendingRollbackError
<p>I am using SQLAlchemy (2.0.20) as an ORM library in my FastAPI project. Whenever a request is made to the API server, the route requires a db <code>sqlalchemy.orm.session.Session</code> object. This session object is either created anew by a <code>DbSession = sessionmaker(engine)</code> or taken from the cache</p> <pre><code>engine = create_engine( connection_string(), poolclass=QueuePool, pool_size=15, pool_recycle=3600, ) DbSession = sessionmaker(engine) @cache def get_db_session() -&gt; Session: session = DbSession() return session # this is then used in the fastapi route </code></pre> <p>At some point, the API does not function anymore because this cached session object is <a href="https://docs.sqlalchemy.org/en/20/core/pooling.html#more-on-invalidation" rel="nofollow noreferrer">invalidated</a> (for unknown reason) and I get the error &quot;PendingRollbackError: Can't reconnect until invalid transaction is rolled back&quot; <a href="https://docs.sqlalchemy.org/en/20/errors.html#can-t-reconnect-until-invalid-transaction-is-rolled-back-please-rollback-fully-before-proceeding" rel="nofollow noreferrer">docs</a> whenever I want to use the session.</p> <pre><code>class SqlRepository: # this repository object is also cached across API requests (like the session) def __init__(session: Session): self.session = session def get_all_todos(self): self.session.query(TodoORM).all() # this is where the PendingRollbackError is thrown ... </code></pre> <p>So my questions are</p> <ul> <li>should I re-create the session for every API request (i.e. not cache it)?</li> <li>should I handle the <code>PendingRollbackError</code> in <code>get_all_todos</code>?</li> </ul> <p>My intention behind caching the session was that I have a long-lived session that only has to be instantiated once that uses connections from the connection pool. 
Is it a correct pattern to instantiate a repository with the session object, or should it rather have access to the sessionmaker and re-create a session every time a db transaction must be made?</p>
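On the first question, the common pattern is: cache the engine (it owns the long-lived connection pool), but give every request its own short-lived Session — which also answers the repository question in favour of handing it a fresh session (or the sessionmaker) per request. A sketch with a generator dependency and an in-memory SQLite stand-in for the real connection string:

```python
from sqlalchemy import create_engine, text
from sqlalchemy.orm import sessionmaker

engine = create_engine("sqlite://")   # cache THIS (and its pool), not the Session
DbSession = sessionmaker(engine)

def get_db_session():
    # FastAPI-style dependency: a fresh Session per request, rolled back on
    # error so no PendingRollbackError can leak into the next request
    session = DbSession()
    try:
        yield session
        session.commit()
    except Exception:
        session.rollback()
        raise
    finally:
        session.close()

# stand-in for one request handled via Depends(get_db_session)
gen = get_db_session()
session = next(gen)
value = session.execute(text("select 1")).scalar_one()
try:
    next(gen)   # resume past the yield: commit + close, as FastAPI does after the response
except StopIteration:
    pass
```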
<python><sqlalchemy><fastapi>
2023-10-17 09:55:04
0
3,003
tenticon
77,307,821
17,160,160
Dictionary creation from dataframe, replace values, remove nulls. Pandas
<p>Suppose the following dataframe in which one column (<code>df.VALUE</code>) contains only string values and several other columns which contain a combination of nulls and floats.</p> <pre><code>df = pd.DataFrame({ 'VALUE': ['A','B','C','D','E'], 'CAT1' : [np.nan,1,1,1,1], 'CAT2': [1,1,np.nan,np.nan,1]}) </code></pre> <p>I'd like to create a dictionary in which the non-null values for each column are replaced by the corresponding value from the <code>VALUE</code> column and listed with the associated column name as key:</p> <pre><code>{'CAT1': ['B', 'C', 'D', 'E'], 'CAT2': ['A', 'B', 'E']} </code></pre> <p>So far, I achieve this using <code>pd.where</code> and the <code>to_dict</code> methods to create the dictionary. This includes null values in the lists, and so a for loop with list comprehension is used to retain only the <code>str</code> values:</p> <pre><code>d1 = df.iloc[:,1:].where(df.isna(), df['VALUE'], axis=0).to_dict(orient='list') for key in d1.keys(): d1[key] = [x for x in d1[key] if type(x) == str] </code></pre> <p>List comprehension within a for loop doesn't seem to be the most efficient solution here and I hoped someone could suggest a more elegant approach, please.</p>
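For what it's worth, the post-hoc filtering can be folded away entirely by selecting with a per-column not-null mask — a sketch reusing the question's sample frame:

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({'VALUE': ['A', 'B', 'C', 'D', 'E'],
                   'CAT1': [np.nan, 1, 1, 1, 1],
                   'CAT2': [1, 1, np.nan, np.nan, 1]})

# for each category column, keep VALUE rows where that column is not null
d1 = {col: df.loc[df[col].notna(), 'VALUE'].tolist()
      for col in df.columns.drop('VALUE')}
```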
<python><pandas>
2023-10-17 09:38:41
2
609
r0bt
77,307,788
16,405,935
Join data from two data frames
<p>I have two data frames that I want to join from <code>df2</code> to <code>df</code> by using values of <code>df2</code> rows. Something like below:</p> <pre><code>import pandas as pd import numpy as np #create DataFrame df = pd.DataFrame({'Definition': ['Loan', 'Deposit'], '20231015': [28, 17], '20231016': [5, 6], '20231017': [10, 13], 'Notes':['','']}) df2 = pd.DataFrame({'BR_CD': ['Hanoi', 'Hochiminh'], 'CUS_NM': ['A', 'B'], 'AMT': ['2', '3']}) df2['Conclusion'] = &quot;[&quot; + df2['BR_CD'] + &quot;]&quot; + ' ' + df2['CUS_NM'] + ' ' + df2['AMT'] x = &quot;[&quot; + df2['BR_CD'] + &quot;]&quot; + ' ' + df2['CUS_NM'] + ' ' + df2['AMT'] for i in x: df['Notes'].iloc[1] = i df </code></pre> <p>With this code, <code>df['Notes']</code> just takes the last value of <code>i</code>:</p> <p><a href="https://i.sstatic.net/Lc2Op.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Lc2Op.png" alt="enter image description here" /></a></p> <p>But my goal is to show all values from <code>df2['Conclusion']</code>:</p> <pre><code>df3 = pd.DataFrame({'Definition': ['Loan', 'Deposit'], '20231015': [28, 17], '20231016': [5, 6], '20231017': [10, 13], 'Notes':['','[Hanoi] A 2;[Hochiminh] B 3']}) df3 </code></pre> <p><a href="https://i.sstatic.net/F1FWb.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/F1FWb.png" alt="enter image description here" /></a></p> <p>If possible, I want to put each conclusion on a new line. Thank you.</p>
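One way to get the goal output without the loop (which overwrites the cell on every pass) is to build all the conclusion strings and join them once — a sketch; swap the ';' separator for '\n' if each conclusion should sit on its own line (together with a display style that renders newlines):

```python
import pandas as pd

df = pd.DataFrame({'Definition': ['Loan', 'Deposit'],
                   '20231015': [28, 17],
                   'Notes': ['', '']})
df2 = pd.DataFrame({'BR_CD': ['Hanoi', 'Hochiminh'],
                    'CUS_NM': ['A', 'B'],
                    'AMT': ['2', '3']})

# vectorised string build, then a single join into the target cell
conclusion = '[' + df2['BR_CD'] + '] ' + df2['CUS_NM'] + ' ' + df2['AMT']
df.loc[df.index[-1], 'Notes'] = ';'.join(conclusion)
```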
<python><pandas><join>
2023-10-17 09:34:34
1
1,793
hoa tran
77,307,787
3,241,486
Run localGPT via pipenv instead of conda
<h3>Goal</h3> <p>I would like to use <strong>pipenv</strong> instead of <strong>conda</strong> to run <a href="https://github.com/PromtEngineer/localGPT" rel="nofollow noreferrer"><strong>localGPT</strong></a> on a Ubuntu 22.04.03 machine.</p> <p><em>Reason: On the server where I would like to deploy <em>localGPT</em> pipenv is already installed, but conda isn't and I lack the permissions to install it.</em></p> <h3>Approach</h3> <p>I translated the <a href="https://github.com/PromtEngineer/localGPT/blob/main/requirements.txt" rel="nofollow noreferrer">existing, up-to-date <code>requirements.txt</code></a> file:</p> <pre><code># Natural Language Processing langchain==0.0.267 chromadb==0.4.6 pdfminer.six==20221105 InstructorEmbedding sentence-transformers faiss-cpu huggingface_hub transformers protobuf==3.20.2; sys_platform != 'darwin' protobuf==3.20.2; sys_platform == 'darwin' and platform_machine != 'arm64' protobuf==3.20.3; sys_platform == 'darwin' and platform_machine == 'arm64' auto-gptq==0.2.2 docx2txt unstructured unstructured[pdf] # Utilities urllib3==1.26.6 accelerate bitsandbytes ; sys_platform != 'win32' bitsandbytes-windows ; sys_platform == 'win32' click flask requests # Streamlit related streamlit Streamlit-extras # Excel File Manipulation openpyxl </code></pre> <p>to a Pipfile (located on root), which looks like this:</p> <pre><code>[[source]] name = &quot;pypi&quot; url = &quot;https://pypi.org/simple&quot; verify_ssl = true [packages] langchain = &quot;==0.0.267&quot; chromadb = &quot;==0.4.6&quot; pdfminer.six = &quot;==20221105&quot; InstructorEmbedding = &quot;*&quot; sentence-transformers = &quot;*&quot; faiss-cpu = &quot;*&quot; huggingface_hub = &quot;*&quot; transformers = &quot;*&quot; protobuf = &quot;==3.20.3&quot; auto-gptq = &quot;==0.2.2&quot; docx2txt = &quot;*&quot; unstructured = {extras = [&quot;pdf&quot;], version = &quot;*&quot;} urllib3 = &quot;==1.26.6&quot; accelerate = &quot;*&quot; bitsandbytes = &quot;*&quot; click = 
&quot;*&quot; flask = &quot;*&quot; requests = &quot;*&quot; streamlit = &quot;*&quot; Streamlit-extras = &quot;*&quot; openpyxl = &quot;*&quot; jmespath = &quot;==1.0.1&quot; llama-cpp-python = &quot;==0.2.11&quot; [requires] python_version = &quot;3.10&quot; </code></pre> <p>Mind that I added <code>jmespath</code> and <code>llama-cpp-python</code>, because, when I did it the conda way, I needed to additionally install these two packages via pip.</p> <h3>Problem</h3> <p>So, theoretically, running</p> <pre class="lang-bash prettyprint-override"><code>pipenv install pipenv shell python ingest.py </code></pre> <p>should do the ingestion, but unfortunately I get an error :red_circle:</p> <pre><code>python ingest.py 2023-10-17 11:16:53,102 - INFO - ingest.py:121 - Loading documents from /home/*********/Documents/my-chatbot/SOURCE_DOCUMENTS 2023-10-17 11:16:53,131 - INFO - ingest.py:34 - Loading document batch concurrent.futures.process._RemoteTraceback: &quot;&quot;&quot; Traceback (most recent call last): File &quot;/usr/lib/python3.10/concurrent/futures/process.py&quot;, line 246, in _process_worker r = call_item.fn(*call_item.args, **call_item.kwargs) File &quot;/home/*******/Documents/mylocal-chatbot/ingest.py&quot;, line 40, in load_document_batch data_list = [future.result() for future in futures] File &quot;/home/*******/Documents/mylocal-chatbot/ingest.py&quot;, line 40, in &lt;listcomp&gt; data_list = [future.result() for future in futures] File &quot;/usr/lib/python3.10/concurrent/futures/_base.py&quot;, line 458, in result return self.__get_result() File &quot;/usr/lib/python3.10/concurrent/futures/_base.py&quot;, line 403, in __get_result raise self._exception File &quot;/usr/lib/python3.10/concurrent/futures/thread.py&quot;, line 58, in run result = self.fn(*self.args, **self.kwargs) File &quot;/home/*******/Documents/mylocal-chatbot/ingest.py&quot;, line 30, in load_single_document return loader.load()[0] File 
&quot;/home/*******/.local/share/virtualenvs/mylocal-chatbot-j8q8_E0e/lib/python3.10/site-packages/langchain/document_loaders/unstructured.py&quot;, line 86, in load elements = self._get_elements() File &quot;/home/*******/.local/share/virtualenvs/mylocal-chatbot-j8q8_E0e/lib/python3.10/site-packages/langchain/document_loaders/unstructured.py&quot;, line 169, in _get_elements from unstructured.partition.auto import partition File &quot;/home/*******/.local/share/virtualenvs/mylocal-chatbot-j8q8_E0e/lib/python3.10/site-packages/unstructured/partition/auto.py&quot;, line 80, in &lt;module&gt; from unstructured.partition.pdf import partition_pdf File &quot;/home/*******/.local/share/virtualenvs/mylocal-chatbot-j8q8_E0e/lib/python3.10/site-packages/unstructured/partition/pdf.py&quot;, line 12, in &lt;module&gt; from pdfminer.converter import PDFPageAggregator, PDFResourceManager ImportError: cannot import name 'PDFResourceManager' from 'pdfminer.converter' (/home/*******/.local/share/virtualenvs/mylocal-chatbot-j8q8_E0e/lib/python3.10/site-packages/pdfminer/converter.py) &quot;&quot;&quot; The above exception was the direct cause of the following exception: Traceback (most recent call last): File &quot;/home/*******/Documents/mylocal-chatbot/ingest.py&quot;, line 159, in &lt;module&gt; main() File &quot;/home/*******/.local/share/virtualenvs/mylocal-chatbot-j8q8_E0e/lib/python3.10/site-packages/click/core.py&quot;, line 1157, in __call__ return self.main(*args, **kwargs) File &quot;/home/*******/.local/share/virtualenvs/mylocal-chatbot-j8q8_E0e/lib/python3.10/site-packages/click/core.py&quot;, line 1078, in main rv = self.invoke(ctx) File &quot;/home/*******/.local/share/virtualenvs/mylocal-chatbot-j8q8_E0e/lib/python3.10/site-packages/click/core.py&quot;, line 1434, in invoke return ctx.invoke(self.callback, **ctx.params) File &quot;/home/*******/.local/share/virtualenvs/mylocal-chatbot-j8q8_E0e/lib/python3.10/site-packages/click/core.py&quot;, line 783, in invoke 
return __callback(*args, **kwargs) File &quot;/home/*******/Documents/mylocal-chatbot/ingest.py&quot;, line 122, in main documents = load_documents(SOURCE_DIRECTORY) File &quot;/home/*******/Documents/mylocal-chatbot/ingest.py&quot;, line 71, in load_documents contents, _ = future.result() File &quot;/usr/lib/python3.10/concurrent/futures/_base.py&quot;, line 451, in result return self.__get_result() File &quot;/usr/lib/python3.10/concurrent/futures/_base.py&quot;, line 403, in __get_result raise self._exception ImportError: cannot import name 'PDFResourceManager' from 'pdfminer.converter' (/home/*******/.local/share/virtualenvs/mylocal-chatbot-j8q8_E0e/lib/python3.10/site-packages/pdfminer/converter.py) </code></pre> <p>Manually (re)installing the related packages via <code>pipenv install pdfminer</code>, <code>pipenv install pdfminer.six</code> or <code>pipenv install unstructured</code> did not help.</p> <p>Any ideas how to get rid of this import error?</p>
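One hedged thing to rule out first (commands only; the version is the one localGPT's requirements pin): a second pdfminer distribution shadowing pdfminer.six inside the virtualenv, since the legacy <code>pdfminer</code> package lacks the names <code>unstructured</code> imports from <code>pdfminer.converter</code>:

```shell
pipenv run pip uninstall -y pdfminer pdfminer.six
pipenv run pip install "pdfminer.six==20221105"
# sanity check - this import should now succeed inside the env
pipenv run python -c "from pdfminer.converter import PDFPageAggregator; print('ok')"
```

If a clean reinstall still fails, the mismatch is likely between the resolved <code>unstructured</code> version and <code>pdfminer.six</code> — aligning <code>unstructured</code> with whatever version the working conda setup resolved would be the next thing to try.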
<python><conda><pipenv><pdfminer><privategpt>
2023-10-17 09:34:27
0
2,533
chamaoskurumi
77,307,743
16,545,894
Create a Polars DataFrame by appending multiple rows at a time
<p><strong>How can I make a DataFrame and then add more rows one at a time?</strong></p> <pre><code>import polars as pl df_polars = pl.DataFrame({ 'abc': [], 'def': [], 'ghi': [] }) value_to_set = 10.0 df_polars = df_polars.extend([ pl.lit(None).alias('abc'), pl.lit(value_to_set).alias('def'), pl.lit(None).alias('ghi') ]) print(df_polars) </code></pre> <p>It only works for one field at a time. What is a more efficient approach to add several rows to my dataframe?</p>
<python><python-polars>
2023-10-17 09:27:13
2
1,118
Nayem Jaman Tusher
77,307,742
18,904,265
How should I pass a class as an attribute to another class with attrs?
<p>So, I just stumbled upon a hurdle concerning the use of <code>attrs</code>, which is quite new to me (I guess this also applies to dataclasses?). I have two classes, one I want to use as an attribute for another. This is how I would do this with regular classes:</p> <pre class="lang-py prettyprint-override"><code>class Address: def __init__(self) -&gt; None: self.street = None class Person: def __init__(self, name) -&gt; None: self.name = name self.address = Address() </code></pre> <p>Now with attrs, I tried to do the following:</p> <pre class="lang-py prettyprint-override"><code>from attrs import define @define class Address: street: str | None = None @define class Person: name: str address: Address = Address() </code></pre> <p>Now if I try the following, I don't get the same result for a class and a dataclass, for which the reason wasn't obvious to me at first:</p> <pre class="lang-py prettyprint-override"><code>person_1 = Person(&quot;Joe&quot;) person_2 = Person(&quot;Jane&quot;) person_1.address.street = &quot;street&quot; person_2.address.street = &quot;other_street&quot; print(person_1.address.street) </code></pre> <p>I would expect the output to be <code>&quot;street&quot;</code>, which is what happens with a regular class. But with <code>attrs</code>, the output is <code>&quot;other_street&quot;</code>.
I then compared the hashes of <code>person_1.address</code> and <code>person_2.address</code>, and voila, they are the same.</p> <p>After some thinking this is logical: with <code>attrs</code> I instantiate Address immediately, so everyone gets the same instance of Address; with regular classes I only instantiate it when I instantiate the parent class.</p> <p>Now, there is a fix available with <code>attrs</code>:</p> <pre class="lang-py prettyprint-override"><code>from attrs import define, field @define class Address: street: str | None = None @define class Person: name: str address: Address = field(init=False) def __attrs_post_init__(self): self.address = Address() </code></pre> <p>But this seems really cumbersome to implement every time. Is there a nice solution to this? One way would be to put the instantiation of <code>Address</code> outside of the class like this:</p> <pre class="lang-py prettyprint-override"><code>address_1 = Address() person_1 = Person(&quot;Joe&quot;, address_1) </code></pre> <p>But my issue with that is that often I want to instantiate the class in an empty state (for example to separate input from computed values), and this way adds an extra step to instantiation which I need to remember.</p> <p>So in conclusion: In this case, attrs, dataclass, pydantic etc. blur the line between what belongs to the class and what belongs to the instance, and in my case that led to an hour of &quot;wtf happened here&quot;. So back to normal classes? I really like the default and validation possibilities of attrs though. Or is there a best practice way to handle this kind of setup?</p>
<python><python-attrs>
2023-10-17 09:26:37
2
465
Jan
77,307,614
4,399,016
Applying function to Pandas Data frame and merging resulting data frames into a new data frame
<p>I have this code:</p> <pre><code>import pandas as pd import yfinance as yF import datetime from functools import reduce def get_returns_change_period(tic,com,OHLC_COL1,OHLC_COL2): df_Stock = yF.download(tickers = tic, period = &quot;38y&quot;, interval = &quot;1mo&quot;, prepost = False, repair = False) df_Stock['MONTH'] = pd.to_datetime(df_Stock.index) df_Stock = df_Stock.sort_values(by='MONTH') df_Stock[com + ' % Change '+'3M'] = (((df_Stock[OHLC_COL1].shift(2) - df_Stock[OHLC_COL2]))/df_Stock[OHLC_COL2]) *100 df_Stock[com + ' % Change '+'3M'] = (df_Stock[com + ' % Change '+'3M'] * 100 )/df_Stock[OHLC_COL2] df_Stock[com + ' % Change '+'2M'] = (((df_Stock[OHLC_COL1].shift(1) - df_Stock[OHLC_COL2]))/df_Stock[OHLC_COL2]) *100 df_Stock[com + ' % Change '+'2M'] = (df_Stock[com + ' % Change '+'2M'] * 100 )/df_Stock[OHLC_COL2] df_Stock[com + ' % Change '+'M'] = (((df_Stock[OHLC_COL1].shift(0) - df_Stock[OHLC_COL2]))/df_Stock[OHLC_COL2]) *100 df_Stock[com + ' % Change '+'M'] = (df_Stock[com + ' % Change '+'M'] * 100 )/df_Stock[OHLC_COL2] return df_Stock.filter(regex='MONTH|% Change') </code></pre> <p>Everything works as expected until this point.</p> <pre><code>get_returns_change_period('XOM','EXXON MOBIL','High','Open') </code></pre> <p>Now I am trying to Apply Lambda function to this function.</p> <pre><code>df_Industry = pd.DataFrame({'ID':['1', '2'], 'Ticker': ['AIN', 'TILE'], 'Company':['Albany International', 'Interface']}) df1 = df_Industry.apply(lambda x: get_returns_change_period(x.Ticker, x.Company, 'High','Open'), axis=1) #df1 = df1.T df1 </code></pre> <p>The output has to be data frame which is merged result of multiple stocks which are returned by <code>get_returns_change_period()</code> function.</p> <p><a href="https://stackoverflow.com/a/77250672/4399016">The problem/solution here is very similar to this one</a>. But I am unable to reuse the solution given. 
Returns for multiple time periods are calculated and returned by the function here:</p> <pre><code>get_returns_change_period() </code></pre>
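One way to sketch the merge without `DataFrame.apply` (which would pack each returned DataFrame into a single cell) is to build one frame per row and align them on <code>MONTH</code>. The download is stubbed out here so the sketch runs offline; the ticker/company names and the stub's columns are only illustrative, not the real yfinance output:

```python
import pandas as pd

# Stand-in for get_returns_change_period: any function returning a frame
# with a MONTH column and per-company "% Change" columns works the same way.
def fake_returns(ticker, company):
    return pd.DataFrame({
        "MONTH": pd.to_datetime(["2023-01-01", "2023-02-01"]),
        company + " % Change M": [1.0, 2.0],
    })

df_industry = pd.DataFrame({
    "Ticker": ["AIN", "TILE"],
    "Company": ["Albany International", "Interface"],
})

# Build one frame per row, then align them on MONTH.
frames = [
    fake_returns(t, c).set_index("MONTH")
    for t, c in zip(df_industry["Ticker"], df_industry["Company"])
]
merged = pd.concat(frames, axis=1).reset_index()
print(merged.columns.tolist())
```

With the real function, replacing `fake_returns` with `get_returns_change_period(t, c, 'High', 'Open')` should give one column group per stock, outer-joined on the month index.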
<python><pandas><dataframe><merge>
2023-10-17 09:11:25
1
680
prashanth manohar
77,307,603
11,913,986
map columns of two dataframes based on array intersection of their individual columns and based on highest common element match Pyspark/Pandas
<p>I have a dataframe df1 like this:</p> <pre><code>A B AA [a,b,c,d] BB [a,f,g,c] CC [a,b,l,m] </code></pre> <p>And another one as df2 like:</p> <pre><code>C D XX [a,b,c,n] YY [a,m,r,s] UU [e,h,I,j] </code></pre> <p>I want to find out and map column C of df2 with column A of df1 based on the highest element match between the items of df2['D'] and df1['B'], and null if there is none.</p> <p>The result df will look like:</p> <pre><code>C D A common_items XX [a,b,c,n] AA [a,b,c] YY [a,m,r,s] CC [a,m] UU [e,h,I,j] Null Null </code></pre> <p>After spending a lot of time with itertools, np operations and pd.merge with 'inner', the closest I have come is:</p> <pre><code>np.intersect1d(df2.D, df1.B) keys = ['B', 'D'] intersection = df1.merge(df2[keys], on=keys) </code></pre> <p>Any solution that matches on the number of common elements of the two columns of the different dataframes, mapping the source df1['A'] to the target df2['C'], would work. [a,b,c,d] etc. are lists of strings.</p> <p>Data:</p> <pre><code>df1.to_dict('list'): {'A': ['AA', 'BB', 'CC'], 'B': [['a', 'b', 'c', 'd'], ['a', 'f', 'g', 'c'], ['a', 'b', 'l', 'm']]} df2.to_dict('list'): {'C': ['XX', 'YY', 'UU'], 'D': [['a', 'b', 'c', 'n'], ['a', 'm', 'r', 's'], ['e', 'h', 'l', 'j']]} </code></pre> <p>Anything on Pyspark/Pandas would be really helpful.</p>
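A best-overlap match can be sketched row-wise in plain pandas: for each list in df2['D'], pick the df1 row whose list shares the most elements (first match wins on ties — an assumption, since the question doesn't say how ties break). Note the question mixes 'I' and 'l' in the third list; this sketch uses 'i' so that UU genuinely has no overlap, matching the expected Null row:

```python
import pandas as pd

df1 = pd.DataFrame({'A': ['AA', 'BB', 'CC'],
                    'B': [['a', 'b', 'c', 'd'], ['a', 'f', 'g', 'c'], ['a', 'b', 'l', 'm']]})
df2 = pd.DataFrame({'C': ['XX', 'YY', 'UU'],
                    'D': [['a', 'b', 'c', 'n'], ['a', 'm', 'r', 's'], ['e', 'h', 'i', 'j']]})

def best_match(items):
    # Pick the df1 row whose list shares the most elements with `items`.
    best_a, best_common = None, []
    for a, b in zip(df1['A'], df1['B']):
        common = sorted(set(items) & set(b))
        if len(common) > len(best_common):
            best_a, best_common = a, common
    return pd.Series({'A': best_a, 'common_items': best_common or None})

result = pd.concat([df2, df2['D'].apply(best_match)], axis=1)
print(result)
```

The same idea ports to PySpark with a cross join, `array_intersect`, `size`, and a window picking the max per df2 row, though that version is untested here.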
<python><arrays><pandas><dataframe><pyspark>
2023-10-17 09:10:03
1
739
Strayhorn
77,307,584
14,566,295
InterpolationWarning: KPSS test in Python with statsmodels
<p>When I run below code, I get specific warning</p> <pre><code>import numpy as np from statsmodels.tsa.stattools import kpss kpss(np.random.choice(range(-1000,1000),10000)) </code></pre> <p>Warning message</p> <pre><code>&lt;stdin&gt;:1: InterpolationWarning: The test statistic is outside of the range of p-values available in the look-up table. The actual p-value is greater than the p-value returned. </code></pre> <p>Is it possible to ignore only this warning, without impacting the display of any other warning that may arise in other part of a code file</p>
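The `warnings` machinery can silence exactly one category while leaving everything else visible, and `catch_warnings` restores the global filter state on exit so the rest of the program is unaffected. The sketch below uses a stand-in class so it runs without statsmodels; in real code the import would be `from statsmodels.tools.sm_exceptions import InterpolationWarning` (treat that path as an assumption to verify against your statsmodels version):

```python
import warnings

class InterpolationWarning(UserWarning):
    """Stand-in for statsmodels' InterpolationWarning."""

def kpss_like():
    # Emits the warning we want to silence plus an unrelated one.
    warnings.warn("p-value outside lookup table", InterpolationWarning)
    warnings.warn("something unrelated", RuntimeWarning)
    return 0.01

with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    # filterwarnings inserts at the front of the filter list, so only
    # this one category is ignored; everything else still surfaces.
    warnings.filterwarnings("ignore", category=InterpolationWarning)
    kpss_like()

print([w.category.__name__ for w in caught])
```

For a module-wide (rather than scoped) suppression, the same `warnings.filterwarnings("ignore", category=InterpolationWarning)` call at the top of the file works too.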
<python><python-3.x><statsmodels>
2023-10-17 09:07:18
1
1,679
Brian Smith
77,307,421
3,553,923
Robust import of Python package modules in nested git submodule structure
<p>This might be a duplicate, but I didn't really find questions with this problem (nor a helpful answer).</p> <p>The setup is as follows (simplified):</p> <p>I have three different git Python projects using each other as submodules (note: I use the name <code>submodule</code> here referring to the git submodules. In Python, they are not modules, but packages, as pointed out in the answers):</p> <ul> <li>main_project</li> <li>submodule_1 (used by main_project)</li> <li>submodule_2 (used by submodule_1)</li> </ul> <p>So the tree for the main project looks like:</p> <pre><code>- main_project +- submodule_1 ++- submodule_2 </code></pre> <p>The Python code for importing submodule_2 in submodule_1 is like:</p> <pre><code>from submodule_2 import * </code></pre> <p>The Python code for importing submodule_1 in the main_project is like:</p> <pre><code>from submodule_1 import * </code></pre> <p>Now, when I run the main project, I get an error for the import of submodule_2 in submodule_1, as the paths are different from the main_project perspective.</p> <p>To fix that, one could change the import in submodule_1 to:</p> <pre><code>from submodule_1.submodule_2 import * </code></pre> <p>That way, however, the import only works from running within the main_project and not when running submodule_1 independently.</p> <p>I also experimented with changing the directories within the Python scripts, but that didn't really help, either.</p> <p>And adding the packages (submodules) to the path is also not really an option (as it removes flexibility and easy portability).</p>
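Short of packaging each repository as an installable distribution (`pip install -e .`, which is the most robust fix), a fallback import helper lets the same file work both as `submodule_1.submodule_2` inside the main project and as plain `submodule_2` standalone. A minimal sketch — the `submodule_*` names are the question's placeholders, and the demonstration uses stdlib module names:

```python
import importlib

def import_first(*candidates):
    """Return the first importable module from a list of dotted paths."""
    for name in candidates:
        try:
            return importlib.import_module(name)
        except ImportError:
            continue
    raise ImportError(f"none of {candidates!r} could be imported")

# In submodule_1 one would write something like:
# submodule_2 = import_first("submodule_1.submodule_2", "submodule_2")

# Demonstration with stdlib names:
mod = import_first("definitely_not_installed_xyz", "json")
print(mod.__name__)
```

The equivalent inline idiom is `try: from .submodule_2 import *` / `except ImportError: from submodule_2 import *`, but relative imports only work when the file is executed as part of a package.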
<python><git><import><nested><git-submodules>
2023-10-17 08:45:03
1
323
clel
77,307,413
17,082,611
Best practice for loading test data in separate python files for model training and evaluation: should I train_test_split again or load test data?
<p>Let's suppose I have a <strong>train.py</strong> file containing the logic for training a model and then saving its parameters into a directory called <code>weights/</code>:</p> <pre><code>x_train, x_test, y_train, y_test = train_test_split(x, y) model = compile() model.fit(x_train, y_train) model.save_weights(&quot;weights/&quot;) </code></pre> <p>Another file, namely <strong>evaluate.py</strong>, contains the logic for evaluating the performance of the model whose parameters will be loaded from the <code>weights/</code> directory:</p> <pre><code>x_train, x_test, y_train, y_test = train_test_split(x, y) model = compile() model.load_weights(&quot;weights/&quot;) model.evaluate(x_test, y_test) </code></pre> <p>My question is: in the <strong>evaluate.py</strong> file, is the statement <code>x_train, x_test, y_train, y_test = train_test_split(x, y)</code> correct or am I supposed to load the same test set splitted in the <strong>train.py</strong> file? In that case the <strong>train.py</strong> file would be:</p> <pre><code>x_train, x_test, y_train, y_test = train_test_split(x, y) np.save(&quot;x_test&quot;, x_test) np.save(&quot;y_test&quot;, y_test) model = compile() model.fit(x_train, y_train) model.save_weights(&quot;weights/&quot;) </code></pre> <p>while the <strong>evaluate.py</strong> file would be:</p> <pre><code>x_test = np.load(&quot;x_test&quot;) y_test = np.load(&quot;y_test&quot;) model = compile() model.load_weights(&quot;weights/&quot;) model.evaluate(x_test, y_test) </code></pre>
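Calling `train_test_split(x, y)` again in evaluate.py gives a *different* split unless the randomness is pinned, which would leak training rows into evaluation. Saving the arrays as in the question is valid; the lighter alternative is to fix the seed in both files so the split is reproducible. A numpy sketch of the idea (with scikit-learn the equivalent would be passing `random_state=42` to `train_test_split` in both scripts):

```python
import numpy as np

def split(x, y, test_frac=0.25, seed=42):
    # A fixed seed makes the permutation -- and therefore the split --
    # identical every run, whether called from train.py or evaluate.py.
    rng = np.random.RandomState(seed)
    idx = rng.permutation(len(x))
    n_test = int(len(x) * test_frac)
    test_idx, train_idx = idx[:n_test], idx[n_test:]
    return x[train_idx], x[test_idx], y[train_idx], y[test_idx]

x = np.arange(20)
y = x * 2
_, x_test_a, _, y_test_a = split(x, y)   # "train.py"
_, x_test_b, _, y_test_b = split(x, y)   # "evaluate.py", run separately
print(np.array_equal(x_test_a, x_test_b))
```

Saving the test arrays (or just the test indices) is still the safer option if the dataset itself might change between runs.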
<python><tensorflow><machine-learning><train-test-split>
2023-10-17 08:43:50
1
481
tail
77,307,336
4,451,521
How can I adjust the axis so that I don't have unused space in dash?
<p>I have the following dash script</p> <pre><code>import dash from dash import dcc, html from dash.dependencies import Input, Output import pandas as pd import plotly.graph_objects as go import random # Generate random data for demonstration random.seed(42) # For reproducibility data = pd.DataFrame({ 'fr': list(range(1, 25001)), 'X1': [random.uniform(1, 10) for _ in range(25000)], 'Y1': [random.uniform(5, 20) for _ in range(25000)], 'X2': [random.uniform(1, 10) for _ in range(25000)], 'Y2': [random.uniform(5, 20) for _ in range(25000)] }) # Initialize the Dash app app = dash.Dash(__name__) app.layout = html.Div([ html.H1(&quot;Data Filtering with Dash&quot;), # RangeSlider for selecting the range of 'fr' values html.Div([ html.Label(&quot;Select a Range of 'fr' Values:&quot;), dcc.RangeSlider( id=&quot;fr-range&quot;, min=1, max=25000, step=1, marks={i: str(i) for i in range(1, 25001, 5000)}, value=[1, 25000], ), html.Div(id='range-display') ]), # Plot displayed dcc.Graph(id='scatter-plot', config={'displayModeBar': False}, style={'width': '100%', 'height': '100%'}), ]) @app.callback( Output('scatter-plot', 'figure'), [Input('fr-range', 'value')] ) def update_plot(range_value): min_value, max_value = range_value filtered_data = data[(data['fr'] &gt;= min_value) &amp; (data['fr'] &lt;= max_value)] hover_text_1 = [ f'fr: {fr}&lt;br&gt;X1: {x:.2f}&lt;br&gt;Y1: {y:.2f}' for fr, x, y in zip(filtered_data['fr'], filtered_data['X1'], filtered_data['Y1']) ] hover_text_2 = [ f'fr: {fr}&lt;br&gt;X2: {x:.2f}&lt;br&gt;Y2: {y:.2f}' for fr, x, y in zip(filtered_data['fr'], filtered_data['X2'], filtered_data['Y2']) ] fig = go.Figure() fig.add_trace(go.Scatter( x=filtered_data['X1'], y=filtered_data['Y1'], mode='markers', text=hover_text_1, name='(X1, Y1) Data' )) fig.add_trace(go.Scatter( x=filtered_data['X2'], y=filtered_data['Y2'], mode='markers', text=hover_text_2, name='(X2, Y2) Data' )) fig.update_layout(title_text='Data') fig.update_xaxes(range=[0, 15]) # Set X-axis range 
to match the data range # fig.update_layout(aspectratio=dict(x=1, y=1)) # Set aspect ratio for square plot fig.update_xaxes(scaleanchor='y', scaleratio=1) # Set the X axis to have the same scale as the Y axis return fig if __name__ == '__main__': app.run_server(debug=True) </code></pre> <p>The script works fine but it gives me this</p> <p><a href="https://i.sstatic.net/yJgiA.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/yJgiA.png" alt="enter image description here" /></a></p> <p>which as you can see gives me a lot of unused space for X. My data only goes from around 0 to 15 but the X axis goes from -50 to 60.</p> <p>How can I eliminate this unused space?</p> <p>Take into account that the ratio of the axis X and Y has to be the same.</p> <p>I have tried to use</p> <pre><code>fig.update_xaxes(range=[0, 15]) </code></pre> <p>but that did not change anything</p>
<python><plotly-dash>
2023-10-17 08:32:55
1
10,576
KansaiRobot
77,307,174
13,330,214
Does the python library 'webdriver-manager' always install the latest chrome driver?
<p>Even reading the github page, the role of this library is confusing.</p> <ol> <li>always install the latest chrome drivers</li> <li>checks the user's Chrome browser version and installs the appropriate Chrome driver.</li> </ol> <p>Which is it?</p>
<python><selenium-webdriver><webdriver-manager><webdrivermanager-python>
2023-10-17 08:08:02
1
705
newbieeyo
77,306,999
2,890,093
Flask restx Debug "Unauthorized" response
<p>I have a Flask (2.2.3) app with Flask-RESTX (1.1.0) used as an API (without frontend). I'm using flask-azure-oauth library to authenticate users using Azure AD. The setup is:</p> <pre class="lang-py prettyprint-override"><code>from flask import Flask, current_app from flask_azure_oauth import FlaskAzureOauth from flask_restx import Api app = Flask(__name__) api = Api(app, &lt;...&gt;) CORS(app) auth = FlaskAzureOauth() auth.init_app(app) # App routes @api.route(&quot;/foo&quot;) class FooCollection(Resource): @auth('my_role') def get(self): return [&lt;...&gt;] </code></pre> <p>This was working fine, but since a few days I started to receive Unauthorized responses when passing valid token. Unfortunately I am not able to track the reason - tokens seem fine (examined manually or decoded using jwt.ms) and the only response I have from API is: <code>401 UNAUTHORIZED</code> with response body <code>{ &quot;message&quot;: null }</code>.</p> <p>I tried to add error logging and error handlers:</p> <pre><code># Logging request/response @app.before_request def log_request_info(): app.logger.debug(f&quot;{request.method} {request.path} {request.data}&quot;) @app.after_request def log_response_info(response): app.logger.debug(f&quot;{response.status}&quot;) return response # Error handling @app.errorhandler(Unauthorized) def handle_error(error): current_app.logger.debug(f&quot;Oops&quot;) &lt;...&gt; @app.errorhandler def handle_error(error): current_app.logger.debug(f&quot;Noooo...!&quot;) &lt;...&gt; </code></pre> <p>With this, request and response are logged and non-HTTP exceptions are handled by <code>handle_error</code>. But HTTP errors like 404, 401, ... 
are just passing by, ignored by both generic error handler and a specific one (<code>@app.errorhandler(Unauthorized)</code>).</p> <p>Here's some code used to validate that:</p> <pre><code>from werkzeug.exceptions import Unauthorized def unauth(): def decorator(fn): @wraps(fn) def wrapper(*args, **kwargs): raise Unauthorized(&quot;No-no&quot;) # &lt;&lt;&lt; return wrapper return decorator @api.route(&quot;/dummy&quot;) class Dummy(Resource): @unauth() def get(self): return jsonify(message=&quot;Hello there!&quot;) @app.errorhandler def handle_error(Exception): current_app.logger.debug(&quot;Intercepted&quot;) </code></pre> <p>Dummy route is protected by <code>unauth</code> decorator that denies all requests with 401 - UNAUTHORIZED and this is exactly what client receives. However <code>@app.errorhandler(Exception)</code>, which is supposed to catch ALL exceptions, still misses it. Replace <code>raise Unauthorized</code> with something like <code>1 / 0</code> and the exception will normally be caught. HTTP errors then get some special treatment!</p> <p>So how do I properly intercept and examine them? (with focus on: how do I find out why it denied token authorization)</p>
<python><azure><flask><flask-restx><azure-oauth2>
2023-10-17 07:37:57
1
10,783
Kombajn zbożowy
77,306,963
22,046,817
Are class variables shared among objects in Python?
<p>Suppose I want to implement a variable shared among objects using class variables (similar to static in Java/C++).</p> <p>When I access the class variable through an object, it shows the default value. When I then update the class variable through an object, the class variable is not updated for the other objects: the old value is shown when it is accessed through other objects or through the class name directly.</p> <p>Why is this so, and what is the way in Python to make a class/static variable that is shared among objects?</p> <pre><code># Class for Computer Science Student class CSStudent: stream = 'cse' # Class Variable def __init__(self,name): self.name = name # Instance Variable # Objects of CSStudent class a = CSStudent('Geek') b = CSStudent('Nerd') print(a.stream) # prints &quot;cse&quot; print(b.stream) # prints &quot;cse&quot; print(a.name) # prints &quot;Geek&quot; print(b.name) # prints &quot;Nerd&quot; # Class variables can be accessed using class # name also print(CSStudent.stream) # prints &quot;cse&quot; # Now if we change the stream for just a it won't be changed for b a.stream = 'ece' b.stream = 'abc' print(CSStudent.stream) # prints 'cse' print(a.stream) # prints 'ece' print(b.stream) # prints 'abc' </code></pre>
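The behavior follows from Python's attribute lookup: reading `a.stream` checks the instance's `__dict__` first, then the class; but *assigning* `a.stream = 'ece'` creates a new instance attribute that shadows the class variable on that one object. To change the value for everyone, assign on the class itself. A runnable demonstration:

```python
class CSStudent:
    stream = 'cse'            # class variable, stored on the class object
    def __init__(self, name):
        self.name = name      # instance variable

a, b = CSStudent('Geek'), CSStudent('Nerd')

a.stream = 'ece'              # creates an *instance* attribute on `a` only
assert 'stream' in a.__dict__        # the shadow lives on the instance
assert CSStudent.stream == 'cse'     # class variable untouched
assert b.stream == 'cse'             # b still falls back to the class

CSStudent.stream = 'eee'      # assigning on the class changes it for all
print(b.stream)               # b has no instance attribute -> 'eee'
print(a.stream)               # a's instance attribute still shadows -> 'ece'
```

So the "static" idiom is: always read *and write* the shared value via the class name (`CSStudent.stream = ...`), never via an instance.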
<python><class><static-variables>
2023-10-17 07:32:07
1
772
Vinay Vishwakarma
77,306,959
638,504
Expose statistics/logging remotely in Python
<p>I have Python scripts which run in the background (for a long time). Those scripts process data in some way and log output to STDOUT - nothing fancy...</p> <p>I've been playing around with <a href="https://www.dask.org/" rel="nofollow noreferrer">Dask</a> and its status page (which shows the progress if you run something in Jupyter for example). This got me thinking: is there a module for <em>remote logging/stats</em> for Python scripts?</p> <p>I am aware of remote debuggers, exception monitoring services, etc. I am just looking for something 'simple' that can be used within my own script.</p> <p>The idea:</p> <p>Basically, by loading this module, you can now connect to &quot;your Python script&quot; via the web browser. In the script you can push variable values to the UI (<code>remotelog.log('Done %d%%', done)</code>). Maybe even log data frame tables, progress bars, graphs, etc. In some sense an updatable logger that writes not to STDOUT but to a web UI with fancy formatting and in-place updates. So if, on line 10, you log a data frame, it will be updated in the web UI in the same spot. Maybe you could even define an <code>N</code> x <code>M</code> grid and show specific output in a specific place.</p> <p>Obviously one could implement web sockets or an API in the script, but it would be great if there was a plug-and-play module that does this already and only requires a <em>logging</em> call with some data, updating the web UI in the background.</p>
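The core of such a module is small enough to sketch with the standard library alone: a background HTTP thread that serves the script's current stats as JSON, which a browser (or a fancier dashboard) can poll. This is a minimal sketch, not a full UI — `stats` and `done_pct` are illustrative names:

```python
import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer

stats = {}  # the long-running script updates this dict as it works

class StatsHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        body = json.dumps(stats).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass  # keep the script's own STDOUT clean

def start_stats_server(port=0):
    # port=0 lets the OS pick a free port; it's available as server.server_port
    server = HTTPServer(("127.0.0.1", port), StatsHandler)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    return server

server = start_stats_server()
stats["done_pct"] = 42  # e.g. updated inside the processing loop
```

Pointing a browser at `http://127.0.0.1:<port>/` then shows the live values; the in-place-updating grid described above would be a front end polling this endpoint.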
<python><logging><jupyter-notebook><monitoring>
2023-10-17 07:31:09
0
2,193
ddofborg
77,306,748
1,801,060
How is QStyle.StandardPixmap defined in PyQt6
<p>How is the enumeration for QStyle.StandardPixmap defined in PyQt6? I've tried to replicate it as shown below:</p> <pre><code>from enum import Enum class bootlegPixmap(Enum): SP_TitleBarMenuButton = 0 SP_TitleBarMinButton = 1 SP_TitleBarMaxButton = 2 for y in bootlegPixmap: print(y) </code></pre> <p>I get the following output:</p> <pre><code>bootlegPixmap.SP_TitleBarMenuButton bootlegPixmap.SP_TitleBarMinButton bootlegPixmap.SP_TitleBarMaxButton </code></pre> <p>If I try and iterate over the original using the following code:</p> <pre><code>from PyQt6.QtWidgets import QStyle for x in QStyle.StandardPixmap: print(x) </code></pre> <p>I get numerical values only:</p> <pre><code>0 1 2 ... </code></pre>
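PyQt6 exposes these C++ enums as Python `enum` subclasses, and `QStyle.StandardPixmap` behaves like an `enum.IntEnum`. That would also explain the numeric output: since Python 3.11, `str()` of an `IntEnum` member is the plain number (its `repr()` and `.name` are unchanged). A stdlib replica that avoids the version-dependent `str()`:

```python
from enum import IntEnum

class BootlegPixmap(IntEnum):
    SP_TitleBarMenuButton = 0
    SP_TitleBarMinButton = 1
    SP_TitleBarMaxButton = 2

for member in BootlegPixmap:
    # str(member) prints the bare number on Python 3.11+, so print the
    # name and value explicitly for stable output across versions.
    print(member.name, member.value)
```

The same `.name` / `.value` access works on the real `QStyle.StandardPixmap` members, e.g. `QStyle.StandardPixmap.SP_TitleBarMinButton.name`.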
<python><enums><pyqt><pyqt6>
2023-10-17 06:55:46
1
2,821
user1801060
77,306,569
5,731,101
Django proxy-model returning wrong ContentType
<p>Let's say we have a model <code>Company</code> and subtype <code>Client</code> like so:</p> <pre class="lang-py prettyprint-override"><code>class Company(models.Model): name = models.CharField(max_length=100) related_companies = models.ManyToManyField('self', symmetrical=True) is_customer = models.BooleanField(default=False) class ClientManager(Manager): def get_queryset(self): return QuerySet(self.model, using=self._db).filter(is_customer=True) class Client(Company): objects = ClientManager() class Meta: proxy = True </code></pre> <p>Next you go into the CLI, create an instance and fetch its content-type</p> <pre class="lang-py prettyprint-override"><code>from django.contrib.contenttypes.models import ContentType In[38]: company = Company.objects.create(name='test', is_customer=True) In [39]: ContentType.objects.get_for_model(company) Out[39]: &lt;ContentType: contacts | company&gt; </code></pre> <p>That returns the concrete model for the company class. So far so good.</p> <p>Next, according to the docs, the following should return the Client content-type.</p> <pre class="lang-py prettyprint-override"><code>In [40]: ContentType.objects.get_for_model(company, for_concrete_model=False) Out[40]: &lt;ContentType: contacts | company&gt; </code></pre> <p>Yet, it does not. I'm still receiving the concrete model Company instead of Client.</p> <p>Do you see my mistake?</p>
<python><django><django-models>
2023-10-17 06:24:02
1
2,971
S.D.
77,306,397
1,609,428
wide to long in pandas while aligning columns
<p>Consider the following example:</p> <pre><code>pd.DataFrame({'time' : [1,2,3], 'X1-Price': [10,12,11], 'X1-Quantity' : [2,3,4], 'X2-Price': [3,4,2], 'X2-Quantity' : [1,1,1]}) Out[92]: time X1-Price X1-Quantity X2-Price X2-Quantity 0 1 10 2 3 1 1 2 12 3 4 1 2 3 11 4 2 1 </code></pre> <p>I am trying to reshape this wide dataframe into a long format. The difficulty is that I want to break down the variables by type (identified by the X1, X2 variables) while keeping price and quantity aligned on the same rows. That is, the desired output is the following</p> <pre><code>Out[93]: time type quantity price 0 1 X1 2 10 1 2 X1 3 12 2 3 X1 4 11 3 1 X2 1 3 4 2 X2 1 4 5 3 X2 1 2 </code></pre> <p>I am not sure how to do this. I tried the <code>pd.wide_to_long()</code> function but it does not work correctly.</p> <pre><code>pd.wide_to_long(b, i = 'time', j = 'value', stubnames = ['X']) Out[95]: Empty DataFrame Columns: [X1-Price, X2-Price, X2-Quantity, X1-Quantity, X] Index: [] </code></pre> <p>Do you have any idea? Thanks!</p>
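One reason `pd.wide_to_long` returns an empty frame here is that stubnames must be column *prefixes* (`X1-Price`, not `Price-X1` style suffix handling with `stubnames=['X']`). An alternative that keeps price and quantity aligned is to split the column names into a `(type, measure)` MultiIndex and stack the type level into the rows — a sketch, checked on the question's data:

```python
import pandas as pd

df = pd.DataFrame({'time': [1, 2, 3],
                   'X1-Price': [10, 12, 11], 'X1-Quantity': [2, 3, 4],
                   'X2-Price': [3, 4, 2], 'X2-Quantity': [1, 1, 1]})

long = df.set_index('time')
long.columns = long.columns.str.split('-', expand=True)  # (type, measure)
long = (long
        .stack(0)                      # move the X1/X2 level into the rows
        .rename_axis(['time', 'type'])
        .reset_index()
        .rename(columns=str.lower))
print(long)
```

Recent pandas versions emit a FutureWarning for `stack` with a level argument; `stack(0, future_stack=True)` (pandas 2.1+) should silence it, though that flag is worth verifying against your installed version.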
<python><pandas>
2023-10-17 05:42:58
3
19,485
ℕʘʘḆḽḘ
77,306,322
18,148,705
Getting Cross-Origin Request Blocked: The Same Origin Policy disallows reading the remote resource at https://ec2-ip/login
<p>I have a react frontend which is deployed on S3. Now I have mapped my domain with the S3 hosted url. So whenever someone tries to access s3 hosted url, they will get redirected to <a href="https://myproject.mycompany.com" rel="nofollow noreferrer">https://myproject.mycompany.com</a> . My backend is using Flask python, which is in Ec2(Ubuntu instance). I am using Nginx(reverse proxy) and gunicorn as well. I have done Nginx conf setup, flask-cors setup.</p> <p>Whenever I’m trying to make a login request to my backend, the request doesn’t reach Nginx. Surprisingly when I try to make an http request it works well, but not https request. I get below error when I try to make request</p> <blockquote> <p>Cross-Origin Request Blocked: The Same Origin Policy disallows reading the remote resource at https://ec2-ip/login. (Reason: CORS request did not succeed). Status code: (null).</p> </blockquote> <p>Any idea how this can be fixed ?</p>
<python><reactjs><nginx><flask><cors>
2023-10-17 05:19:01
1
335
user18148705
77,306,171
146,077
Can Boto3's DynamoDB "update_item" method handle native python objects?
<p>I'm currently using boto3 to get and put items into DynamoDB. I can get and put like this:</p> <pre><code>def get_item(key): # key is a dict like {&quot;partitionkey&quot;: &quot;abc&quot;, &quot;sortkey&quot;: &quot;123&quot;} table = boto3.resource(&quot;dynamodb&quot;).Table(DYNAMO_TABLE) response = table.get_item(Key=key) return response.get(&quot;Item&quot;) def put_item(key, item): # item is a dict of data to store, e.g. {&quot;name&quot;: &quot;uhoh&quot;, &quot;size&quot;: 100} table = boto3.resource(&quot;dynamodb&quot;).Table(DYNAMO_TABLE) return table.put_item(Item={**key, **item}) </code></pre> <p>This works great! I pass Python dictionary to the <code>put_item</code> function, and I get a Python dictionary back when I call <code>get_item</code>. I don't need to do any of the DynamoDB encoding stuff<sup>1</sup>, it just works.</p> <p>Now I'm looking at introducing <code>update_item</code> in addition to <code>put_item</code>. It seems I cannot pass these Python types to the <code>update_item</code> method in the same way; when I try I get</p> <pre><code>table.update_item(Key=my_key, UpdateExpression=&quot;SET #X = :x&quot;, ExpressionAttributeNames={&quot;#X&quot;: &quot;my_property&quot;}, ExpressionAttributeValues={&quot;:x&quot;: somedict}) </code></pre> <blockquote> <p>botocore.exceptions.ParamValidationError: Parameter validation failed: Unknown parameter in ExpressionAttributeValues.:x: &quot;my_key&quot;, must be one of: S, N, B, SS, NS, BS, M, L, NULL, BOOL</p> </blockquote> <p>My question: is there a way to have the <code>update_item</code> handle this transformation for me the same as <code>put_item</code> does? Or is there a utility that can transform back and forth between these two formats? 
At this point I'm thinking <code>update_item</code> isn't worth the extra effort and lines of code.</p> <hr /> <p><sup>1</sup> Transformation from <code>[{&quot;my_key&quot;: &quot;my_value&quot;}]</code> to <code>{&quot;L&quot;: [{&quot;M&quot;: {&quot;my_key&quot;: {&quot;S&quot;: &quot;my_value&quot;}}}]}</code> is handled auto-magically!</p>
<python><amazon-dynamodb><boto3>
2023-10-17 04:21:04
1
28,876
Kirk Broadhurst
77,306,112
2,728,074
Write dynamically-defined python function to file
<p>Consider:</p> <pre><code>def func_create(default_val): def the_function(): print(&quot;The default value is: %s&quot; % default_val) return the_function new_f = func_create(1.0) new_f() </code></pre> <p>Which prints the line:</p> <pre><code>The default value is: 1.0 </code></pre> <p>Is there a way to export the dynamically-created <code>new_f</code> to a file? pickle does not work here (locally-defined closures are not picklable), but does this mean it's impossible?</p> <p>Ideally, after exporting a function, you would be able to run something like:</p> <pre><code>from new_function_module import new_f new_f() </code></pre> <p>and it would print the line</p> <pre><code>The default value is: 1.0 </code></pre> <p>For bonus points, is there any method/library for python that replicates behaviour similar to Matlab's <a href="https://au.mathworks.com/help/symbolic/sym.matlabfunction.html" rel="nofollow noreferrer">matlabFunction</a>? That is, exports <em>human-readable</em> python code to a file (as opposed to something like a pickle object)?</p>
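For the matlabFunction-like behaviour, one option is to generate the source text yourself, baking the captured value in via `repr()`. This only works for values with a faithful `repr` (numbers, strings, simple containers); arbitrary closures would need a serializer like `dill` (third-party). A minimal sketch, with the module name taken from the question:

```python
import os
import tempfile

def export_function(path, name, default_val):
    # The captured value is baked into human-readable source via repr().
    source = (
        f"def {name}():\n"
        f"    print(\"The default value is: %s\" % {default_val!r})\n"
    )
    with open(path, "w") as f:
        f.write(source)

module_path = os.path.join(tempfile.mkdtemp(), "new_function_module.py")
export_function(module_path, "new_f", 1.0)
print(open(module_path).read())
```

Dropping the generated file on `sys.path` then makes `from new_function_module import new_f` work exactly as hoped, and the file stays readable and editable.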
<python><function><file><dynamic><export>
2023-10-17 04:03:44
2
469
Charlie
77,306,059
70,157
Interrogate Python package dependencies without using network
<p>How can I query Python's 'pip' to answer the question &quot;is this install specification already satisfied&quot;, without any network requests?</p> <p>To allow the build system to work offline, I want to interrogate the Python environment for <em>whether</em> anything would need to be installed to satisfy dependencies, and not actually make any network requests to do so.</p> <p>The dependencies are declared using the 'pyproject.toml' file (and so, suggestions to <a href="https://stackoverflow.com/questions/16294819/check-if-my-python-has-all-required-packages">compare with a 'requirements.txt' file</a> don't apply; there is no such file).</p> <pre class="lang-ini prettyprint-override"><code>[project] name = &quot;lorem&quot; dependencies = [ &quot;setuptools &gt;= 62.4.0&quot;, &quot;packaging&quot;, &quot;dolor &gt;= 0.10&quot;, ] [project.optional-dependencies] static-analysis = [ &quot;pycodestyle ~= 2.11&quot;, &quot;pydocstyle ~= 6.3&quot;, ] test = [ &quot;lorem[static-analysis]&quot;, &quot;testscenarios &gt;= 0.4&quot;, &quot;coverage&quot;, ] devel = [ &quot;lorem[test]&quot;, &quot;isort ~= 5.12&quot;, &quot;twine&quot;, ] </code></pre> <p>It's also not enough merely to check the &quot;runtime dependencies&quot;; I need to check whether the dependencies specifically for the 'test' feature, are installed.</p> <p>The installation of the dependency packages for the test suite is done with:</p> <pre class="lang-bash prettyprint-override"><code>$ python3 -m pip install .[test] Processing /home/bignose/Projects/foo [… installs some packages, reports that others are already satisfied …] </code></pre> <p>(meaning: Install this package and all dependencies, including those needed for the optional &quot;test&quot; feature.)</p> <p>But the test suite should be run without needing any network access; so I want to <em>interrogate</em> package dependencies without any network requests.</p> <p>I'm imagining a &quot;just tell me if this is already satisfied&quot; 
option:</p> <pre class="lang-bash prettyprint-override"><code>$ python3 -m pip install --no-index --fail-if-any-packages-need-to-be-installed .[test] Processing /home/bignose/Projects/foo [… reports that some packages are not installed, others are already satisfied …] $ if [ $? -eq 0 ] ; then python3 -m unittest ; else echo &quot;Test dependencies not installed; aborting.&quot; ; fi </code></pre> <p>So, the <code>python3 -m pip …</code> command here is instructed not to contact the package index (<code>--no-index</code>); it is <em>only</em> to get a success-or-failure result, for determining whether to run the test suite or abort now.</p> <p>There is no such <code>--fail-if-any-packages-need-to-be-installed</code> option, but 'pip' obviously does know this information; it's all derived from inspecting what packages are installed locally, and the set of dependencies.</p> <p>So how can I use 'pip' to do this?</p>
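Short of such a pip flag, a lightweight offline probe can be built from `importlib.metadata` plus the `packaging` library (already a declared dependency above). This checks one requirement string against the local environment; it does not recurse into extras or transitive dependencies, so the `[test]` list from pyproject.toml would need to be read and fed in requirement by requirement:

```python
from importlib.metadata import PackageNotFoundError, version
from packaging.requirements import Requirement

def is_satisfied(req_string):
    """True if the requirement is already met by the local environment."""
    req = Requirement(req_string)
    try:
        installed = version(req.name)
    except PackageNotFoundError:
        return False
    return req.specifier.contains(installed, prereleases=True)

print(is_satisfied("packaging >= 20"))
print(is_satisfied("this-dist-does-not-exist-xyz"))
```

Recent pip versions also offer `pip install --dry-run --no-index .[test]`, which exits non-zero when something would need to be fetched — worth testing against your pip version as the shell-level probe.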
<python><pip>
2023-10-17 03:42:14
0
32,600
bignose
77,306,024
7,260,269
Problem sending email with template_id with Airflow and Sendgrid
<p>I am trying to send an email with Sendgrid and <a href="https://en.wikipedia.org/wiki/Apache_Airflow" rel="nofollow noreferrer">Airflow</a> (2.4.2). I am able to send the email, but when I try to pass a <em>template_id</em>, the email is sent without the template design. It looks like I am passing the template_id in the incorrect way. How can I send the email with the template?.</p> <pre><code>from airflow import DAG from airflow.operators.python import PythonOperator from airflow.providers.postgres.hooks.postgres import PostgresHook from airflow.providers.sendgrid.utils.emailer import send_email from airflow.operators.email import EmailOperator from datetime import datetime default_args = { 'owner': 'your_name', 'start_date': datetime(2023, 10, 16, 15, 35, 0), 'retries': 1, } #Task to send an email using SendGrid def _send_email(): postgres_hook = PostgresHook(postgres_conn_id=&quot;postgres&quot;) sql_query = &quot;select email from Users where firstname='Raghav';&quot; results = postgres_hook.get_records(sql_query) for row in results: email = row email_content = f&quot;Hello Email: {email}\n&quot; send_email( to=email, subject='Test Email Subject', html_content=email_content, files=None, # List of file paths to attach to the email cc=None, # List of email addresses to CC bcc=None, # List of email addresses to BCC conn_id='sendgrid_default', template_id='d-8cefbe24e73842d7810550fc441aceb0', #didn't work kwargs={ 'template_id':'d-8cefbe24e99942d7810550fc441aceb0' } ) with DAG('user_processing', start_date=datetime(2022,1,1), schedule_interval='@daily', catchup=False) as dag: email_user = PythonOperator( task_id = 'email_user', python_callable= _send_email ) email_user </code></pre>
<python><airflow><sendgrid><directed-acyclic-graphs>
2023-10-17 03:29:43
1
341
James Hameson
77,305,875
6,303,639
Python get the total execution time of a function
<p>Suppose that I have a function:</p> <pre class="lang-py prettyprint-override"><code>def func(x): time.sleep(x) return x </code></pre> <p>It will be called wherever needed, for example, <code>func_a in class A</code>, <code>func_b in class B</code>... And all the modules start from a main function, <code>main()</code>.</p> <p>Now I want to record the call count and total time of <code>func</code>. It's similar to a <code>profiler</code>, but I only want the statistics of <code>func</code>; I do not care about other methods.</p> <p>A decorator can only get the time of a single call, which is not sufficient for the total time. Another problem is that <code>func</code> overrides a function in another package. It may be called by functions in the package, and it is not easy to add a decorator to every call site.</p> <pre class="lang-py prettyprint-override"><code>def timed(func): def wrapper(*args, **kwargs): import time st = time.time() result = func(*args, **kwargs) end = time.time() print(f&quot;execution time: {end - st}&quot;) return result return wrapper @timed def func(x): ... </code></pre> <p>Is there any simple method that I can use to get the total execution time of <code>func</code> with minimal code?</p>
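A decorator can in fact keep running totals by storing counters on the wrapper itself, so a single decoration gives both the call count and the accumulated time. A minimal sketch:

```python
import functools
import time

def track(func):
    # Accumulates call count and total wall time on the wrapper object.
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        try:
            return func(*args, **kwargs)
        finally:
            wrapper.calls += 1
            wrapper.total_time += time.perf_counter() - start
    wrapper.calls = 0
    wrapper.total_time = 0.0
    return wrapper

@track
def func(x):
    time.sleep(x)
    return x

func(0.01)
func(0.02)
print(func.calls, round(func.total_time, 3))
```

For the override case, rebinding the name in the other package (`some_package.func = track(some_package.func)`) makes internal callers go through the same wrapper — hypothetical package name; whether that rebinding is safe depends on how the package imports the function.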
<python><decorator><profiler>
2023-10-17 02:30:07
2
783
Whisht
77,305,865
1,620,040
Pandas Dataframe from a text file (table) python
<p>Considering my text file would have the following pattern:</p> <pre><code> CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES xxxxxxxxxxxx yyyyyyyyyyyyyyyyyyyyyyyyyyy zzzzzzzzzzzzzzzzzzzzzz aaaaaaaaaaaaaaaa bbbbbbbbbbb ports random name </code></pre> <p>Basically the output of the <strong>docker ps</strong> command, written to a text file. What is the most effective way to convert this into a data frame or a readable format in Python, to query the values (i.e. given a name, get the matching container ID, etc.)?</p> <p>I tried</p> <pre><code> df = pd.read_csv(&quot;docker.txt&quot;,sep=&quot; &quot;) </code></pre> <p>but it breaks down, as the delimiter is not consistent.</p>
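Because `docker ps` pads its columns with runs of spaces while individual values (the `CONTAINER ID` header, container names) may contain single spaces, splitting each line on two-or-more spaces is more robust than `sep=" "`; a sketch with made-up sample data:

```python
import re
import pandas as pd

def parse_docker_ps(text):
    """Split a column-aligned table on runs of 2+ spaces per line."""
    lines = [line.strip() for line in text.splitlines() if line.strip()]
    rows = [re.split(r"\s{2,}", line) for line in lines]
    return pd.DataFrame(rows[1:], columns=rows[0])

sample = (
    "CONTAINER ID   IMAGE          COMMAND     CREATED      STATUS      PORTS    NAMES\n"
    "xxxxxxxxxxxx   nginx:latest   \"nginx\"     2 days ago   Up 2 days   80/tcp   random name\n"
)
df = parse_docker_ps(sample)
# Querying a container ID by name:
container_id = df.loc[df["NAMES"] == "random name", "CONTAINER ID"].iloc[0]
```

`pd.read_fwf` is an alternative that infers fixed-width column boundaries, though it guesses them from the data rather than the header alignment.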
<python><pandas><dataframe>
2023-10-17 02:24:21
1
5,362
Lakshmi Narayanan
77,305,848
9,983,652
is there a command to clear all output when running the notebook?
<p>In Jupyter Notebook, I typically use the menu to clear all output. I am wondering if there is a command (magic command or lines of code) which I can place in the first cell so that it clears all output when running the notebook?</p> <p>Thanks</p>
<python><jupyter-notebook>
2023-10-17 02:18:58
1
4,338
roudan
77,305,778
2,030,532
How to vectorize <x(t-lag), y(t)> for two similar shaped numpy arrays?
<p>I have two numpy arrays of the same dimensions <code>x</code> and <code>y</code>. I would like to calculate &lt;x(t-lag), y(t)&gt; as follows for different values of lag. The following code works, but I am wondering if there is a vectorized implementation that could be faster.</p> <pre><code>def foo(x, y, lags): T = x.shape[0] res_list = [] for lag in lags: res_list.append(np.mean(x[lag:]*y[:(T - lag)])) return res_list if __name__ == '__main__': T = 1000 N = 10 lags = np.arange(0, 40) x = np.random.randn(T, N) y = np.random.randn(T, N) res = foo(x, y, lags) print(&quot;done&quot;) </code></pre>
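Each lag's value is a cross-correlation term, a sum over t of x[t+lag]·y[t], so the slice-per-lag loop can be replaced by one `np.correlate` pass per column; a sketch that is equivalent (up to floating-point error) to the loop above:

```python
import numpy as np

def foo_vectorized(x, y, lags):
    # For equal-length 1-D arrays a and b of length T,
    # np.correlate(a, b, 'full')[T-1+lag] == sum_t a[t+lag] * b[t],
    # which is exactly the numerator computed per lag in the loop version.
    T, N = x.shape
    c = sum(np.correlate(x[:, j], y[:, j], mode="full") for j in range(N))
    return [c[T - 1 + lag] / ((T - lag) * N) for lag in lags]
```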
<python><numpy>
2023-10-17 01:48:37
1
3,874
motam79
77,305,726
156,458
In Python Regex, how can I express the following pattern of strings?
<p>In Python Regex, how can I express the following pattern of strings?</p> <p>Suppose I have two subpatterns p1 and p2. I would like the pattern for any of the following cases:</p> <ul> <li>p1p2 (p1 appears immediately before p2)</li> <li>p1 (no p2)</li> <li>p2 (no p1)</li> </ul> <p>At least one of the two patterns is always present.</p>
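An ordered alternation does this: put the combined case first so the regex engine prefers the longest match. A sketch with placeholder sub-patterns `foo` and `bar` (assumptions, since p1 and p2 are not given):

```python
import re

p1, p2 = r"foo", r"bar"
# Longest alternative first, so "foobar" is not matched as just "foo".
pattern = re.compile(rf"(?:{p1}{p2}|{p1}|{p2})")

print(pattern.search("xx foobar yy").group())  # foobar
```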
<python><regex>
2023-10-17 01:26:08
0
100,686
Tim
77,305,722
4,451,521
A plotly plot with many points is very heavy and cannot unzoom
<p>I have made a plot with plotly. It has five plots in it, each with 25409 points.</p> <p>My problem is that the plot appears, but it is very sluggish when I try to zoom, and unzooming is impossible.</p> <p>Is there a more practical way to do this?</p> <p>Each trace is added like this:</p> <pre><code>hover_text1 = dff.apply(lambda row: f'({row[&quot;x&quot;]},{row[&quot;y&quot;]}) at {row[&quot;f&quot;]}', axis=1) fig.add_trace(go.Scatter(x=df.x, y= df.y, mode='lines+markers', text=hover_text1, name='left', line=dict(color='orange',width=linewidth),marker=dict(symbol='circle', size=markersize),hoverinfo='text')) </code></pre> <p>EDIT: I tried using plotly-resampler, but it did not work because it required my data to be monotonically increasing.</p> <p>It was suggested that I sort the data. This presents the following problems:</p> <ol> <li><p>My data has five plots. Each plot is data from different fields of the dataframe. I guess I would have to sort it every time I add a plot?</p> </li> <li><p>Even with only one plot, it did not work.</p> </li> </ol> <p>My plot (well, a zoom of it) must look like this</p> <p><a href="https://i.sstatic.net/szUyS.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/szUyS.png" alt="enter image description here" /></a></p> <p>however when I sort it by x I get</p> <p><a href="https://i.sstatic.net/tc7NU.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/tc7NU.png" alt="enter image description here" /></a></p> <p>and I don't know why.</p>
<python><plotly>
2023-10-17 01:24:40
1
10,576
KansaiRobot
77,305,591
433,432
Can I create a normalied image rotation matrix and then use it on multiple different images?
<p>I'm trying to store an affine transformation matrix that is independent of image size (normalized if you will) and then apply that to an image but I'm having problems with the math. It certainly seems possible to me. Here is my python code.</p> <pre class="lang-py prettyprint-override"><code>import cv2 image = cv2.imread(filepath) center = (0.4, 0.7) angle = 39 # # THIS WORKS # matrix = cv2.getRotationMatrix2D((center[0] * image.shape[1], center[1] * image.shape[0]), angle, 1.0) image2 = cv2.warpAffine(image, matrix, dsize=(image.shape[1], image.shape[0])) cv2.imwrite(&quot;/home/ken/rotated.jpg&quot;, image2) # # BUT WHAT I WANT TO DO IS THIS such that matrix can be calculated and stored... # matrix = cv2.getRotationMatrix2D(center, angle, 1.0) # # ... and then when presented with an image, convert the matrix here to # make it work with the current image size. # matrix = denormalize_matrix(matrix, image.shape) # &lt;== DESIRED PSEUDO FUNCTION image2= cv2.warpAffine(image, matrix, dsize=(image.shape[1], image.shape[0])) cv2.imwrite(&quot;/home/ken/rotated_new.jpg&quot;, image2) </code></pre> <p>Is there some way to transform this matrix once I know the actual size of the image I'm operating on?</p> <p>Cheers.</p>
<python><opencv>
2023-10-17 00:30:54
1
7,257
crowmagnumb
77,305,290
2,646,881
SQLite - create standalone sequence (get autoincrement integer without the need of having table)
<p>I am trying to fix an issue with InfluxDB, where it does not support sequences (a unique, incremental integer column). The workaround I have &quot;invented&quot; is to create a unique Id outside of Influx when transforming data into data points.</p> <p>So my workflow is like: 1.) get a set of data 2.) extract one row at a time 3.) generate a unique Id for this row with an external DB engine (Postgres, SQLite) 4.) create a data point that will be the row data + this unique Id as a separate tag 5.) write the whole dataset the standard way</p> <p>That seems like a good workaround: the ID will be unique, and creating a sequence is not a hard task for Postgres. All I need to do is create the sequence and then call <code>SELECT next_val</code> every time I need a unique Id. The Postgres engine will make sure each integer I get is unique.</p> <p>However, I do not have PostgreSQL available in the environment where my code will be running. I only have SQLite. I was so naive that I thought &quot;it's so easy and native, each SQL DB engine has to have it&quot;. Well, no. After a whole day of investigation, it seems like there is no way of using SQLite sequences outside of the table for which they were created.</p> <p>Or is there a way?</p> <p>Yes, I can use another workaround, like a stored procedure, where I add a new row to a table with an autoincrement ID column, delete the previous row and then return the ID of the last row. But I was wondering if there is a simpler / more natural way of achieving my goal (which is to have a command which, when I call it in SQLite, returns a unique autoincremented integer).</p>
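SQLite has no `CREATE SEQUENCE`, but a one-row counter table updated inside a transaction behaves like one and needs no row churn; a minimal standard-library sketch:

```python
import sqlite3

conn = sqlite3.connect(":memory:")  # or a file path
conn.execute(
    "CREATE TABLE IF NOT EXISTS seq (name TEXT PRIMARY KEY, value INTEGER NOT NULL)"
)
conn.execute("INSERT OR IGNORE INTO seq (name, value) VALUES ('point_id', 0)")

def next_val(conn, name="point_id"):
    """Increment and return the named counter, like Postgres nextval()."""
    with conn:  # one transaction; SQLite's write lock serializes writers
        conn.execute("UPDATE seq SET value = value + 1 WHERE name = ?", (name,))
        (value,) = conn.execute(
            "SELECT value FROM seq WHERE name = ?", (name,)
        ).fetchone()
    return value

print(next_val(conn), next_val(conn))  # 1 2
```

On SQLite 3.35+ the UPDATE and SELECT collapse into a single statement with `UPDATE seq SET value = value + 1 WHERE name = ? RETURNING value`.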
<python><sqlite><influxdb>
2023-10-16 22:31:18
0
418
rRr
77,305,284
1,234,419
setup unable to find zope
<p>Hello, I am unable to install crossbar.io due to a zope error. Here are the environment details:</p> <pre><code>ENV: Python 3.11.6 (main, Oct 16 2023, 14:39:12) [Clang 15.0.0 (clang-1500.0.40.1)] on darwin Type &quot;help&quot;, &quot;copyright&quot;, &quot;credits&quot; or &quot;license&quot; for more information. uname -a Darwin &lt;SR_NO&gt; 22.6.0 Darwin Kernel Version 22.6.0: Fri Sep 15 13:39:52 PDT 2023; root:xnu-8796.141.3.700.8~1/RELEASE_X86_64 x86_64 Processing dependencies for crossbar==23.1.2 Searching for zope-interface&gt;=5 Reading https://pypi.org/simple/zope-interface/ /Users/&lt;username&gt;/workspace/new-python/venv/lib/python3.11/site-packages/pkg_resources/__init__.py:123: PkgResourcesDeprecationWarning: is an invalid version and will not be supported in a future release warnings.warn( No local packages or working download links found for zope-interface&gt;=5 error: Could not find suitable distribution for Requirement.parse('zope-interface&gt;=5') (venv) ➜ crossbar git:(master) pip freeze | grep zope zope.interface==6.1 </code></pre> <p>zope.interface is installed correctly, but setup fails to recognize the installation.</p>
<python><zope><crossbar>
2023-10-16 22:30:24
0
1,403
vector8188
77,305,166
1,644,561
Can you use a HMAC-SHA256 signature in authlib in python?
<p>The <a href="https://docs.authlib.org/en/latest/flask/1/customize.html" rel="nofollow noreferrer">authlib documentation</a> discusses how to process <code>HMAC-SHA256</code> signature methods server side, but there doesn't seem to be anything about how to sign requests with this kind of signature.</p> <p>The following code fails with a <code>ValueError: Invalid signature method.</code></p> <pre class="lang-py prettyprint-override"><code>auth = OAuth1Auth( client_id=&quot;...&quot;, client_secret=&quot;...&quot;, token=&quot;...&quot;, token_secret=&quot;...&quot;, realm=&quot;...&quot;, signature_method= &quot;HMAC-SHA256&quot;, ) r = requests.post(url, auth=auth, data=payload) </code></pre> <p>Is there a way to issue requests with <code>HMAC-SHA256</code>, or is this not supported?</p>
<python><oauth><oauth-1.0a><authlib>
2023-10-16 21:56:32
1
854
avyfain
77,305,151
353,407
Sympy diffgeom: Expressions won't simplify
<p>Here is the setup, defining a manifold and a local coordinate system (x,y,z).</p> <pre><code>from sympy.diffgeom import ( Manifold, Patch, CoordSystem, Differential, WedgeProduct, LieDerivative) from sympy import symbols, simplify R3 = Manifold('R_3', 3) U = Patch('U', R3) cartesian = CoordSystem('xyz', U, symbols('x y z')) x, y, z = cartesian.base_scalars() partial_x, partial_y, partial_z = cartesian.base_vectors() dx, dy, dz = cartesian.base_oneforms() </code></pre> <p>I expect <code>simplify(WedgeProduct(dx, dy) + WedgeProduct(dy, dx))</code> to equal 0, but sympy instead reports <code>dx /\ dy + dy /\ dx</code>.</p> <p>Similarly, no simplification occurs when I try to compute the Lie derivative of a simple 2-form with respect to a vector field defined in terms of <code>partial_x, partial_y, partial_z</code>.</p>
<python><sympy>
2023-10-16 21:54:02
1
5,543
Mark
77,305,072
11,429,035
multiprocessing and writing data to disk to reduce memory costs in python?
<p>I am using <code>multiprocessing</code> to do a bunch of parallel computing and returning the results as a list. The main issue right now is that the function often runs out of memory. My current code is like</p> <pre><code>def some_func(a): b = a**2 # some calculations here return b with multiprocessing.Pool() as p: results_list = p.map(partial(some_func), a_list_of_a) with open(f&quot;results/final_data.pkl&quot;, &quot;wb&quot;) as f: pickle.dump(results_list, f) </code></pre> <p>I think one solution might be to split <code>a_list_of_a</code> into several batches and write the results to disk batch by batch to reduce the memory usage. Is that possible?</p> <p>Edited:</p> <p>The main issue with my <code>some_func</code> is that it returns a large variable, a <code>networkx</code> graph, specifically. The input <code>a_list_of_a</code> is a list of node numbers, and the return of <code>some_func</code> is a graph generated from the given nodes. The length of <code>a_list_of_a</code> is around 15,000. My memory is 64GB but the process is still killed due to running out of memory.</p> <p>Maybe there is a better way to store a <code>networkx</code> <code>Graph</code>? I do need to keep the nodes' attributes when storing the <code>Graph</code> object.</p>
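Yes: `Pool.imap` yields results lazily, so you can flush fixed-size batches to disk as they arrive instead of materializing the whole `results_list`. A sketch (the thread-based `multiprocessing.dummy.Pool` is used here only so the snippet runs anywhere; `multiprocessing.Pool` has the identical interface):

```python
import os
import pickle
import tempfile
from itertools import islice
from multiprocessing.dummy import Pool  # same API as multiprocessing.Pool

def some_func(a):
    return a ** 2  # stand-in for the expensive graph construction

def run_in_batches(inputs, batch_size, out_prefix):
    with Pool() as p:
        results = p.imap(some_func, inputs)  # lazy, order-preserving
        batches = iter(lambda: list(islice(results, batch_size)), [])
        for i, batch in enumerate(batches):
            with open(f"{out_prefix}_{i}.pkl", "wb") as f:
                pickle.dump(batch, f)  # only one batch lives in memory at once

prefix = os.path.join(tempfile.mkdtemp(), "results")
run_in_batches(range(10), 4, prefix)
```

For networkx specifically, an even leaner variant is to have `some_func` pickle its own graph to a file and return only the path; node attributes survive pickling.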
<python><multiprocessing><networkx>
2023-10-16 21:31:59
1
533
Xudong
77,305,054
7,397,195
dateparser search_dates returns impossible year
<p>I'm using dateparser's &quot;search_dates&quot; to parse text for dates and got a strange date in my result.</p> <pre><code>dateparser.__version__ '1.1.8' settings= { 'RELATIVE_BASE': datetime.datetime(2023, 7, 31, 0, 0), 'PREFER_DAY_OF_MONTH': 'first', 'PREFER_DATES_FROM': 'future', 'REQUIRE_PARTS': ['year', 'month'], 'DATE_ORDER': 'YMD' } s = 'Closing Yield, 2010 Year Treasury notes On Dec 31, 2023' search_dates(s, settings=settings) </code></pre> <p>Result:</p> <pre><code>Out[27]: [('2010 Year', datetime.datetime(4033, 7, 31, 0, 0)), ('On Dec 31, 2023', datetime.datetime(2023, 12, 31, 0, 0))] </code></pre> <p>The first item in the list yields an impossible result (year = 4033).</p> <p>Any ideas here?</p>
<python><dateparser>
2023-10-16 21:27:53
1
454
leeprevost
77,305,001
3,582,701
How to separate the EXE header and the encrypted message of a PGP SDA using Python?
<p>I have both a PGP Self-Decrypting Archive EXE file and its password. It is my understanding that it consists of an EXE header and the encrypted message.</p> <p>How can I separate out the encrypted message for further processing?</p> <p>I'm trying to use PEReader, but I have no idea how to identify either the content or the EXE header.</p>
<python><exe><pgp>
2023-10-16 21:15:37
1
506
tristobal
77,304,809
14,546,482
Pydantic 2.0 multiple Basemodel classes - how to handle class dependencies
<p>How can I make my other classes optional in pydantic 2.0? When I have data I don't get an error; however, when I send a null value I get a ValidationError Field Required type=missing error.</p> <p>For example, I could do <em><strong>profile: Optional[dict[str, str]] = None in User</strong></em> and that works, but it defeats the point of using <strong>class (Profile)</strong></p> <pre><code>from typing import Optional, Dict from pydantic import BaseModel class Profile(BaseModel): id: str profile_id: str details: str class Connections(BaseModel): connection_id: str connection_count: int connection_names: Dict[str, str] class User(BaseModel): id: str username: str password: str connections: Optional[Connections] &lt;---- not working returning ValidationError if null value sent profile: Optional[Profile] &lt;---- not working returning ValidationError if null value sent </code></pre> <p>I'm new to the updates in pydantic, so any help would be appreciated!</p>
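In Pydantic 2, `Optional[X]` only widens the allowed type to include `None`; the field is still required unless it has a default. Giving each field `= None` lets both a missing key and an explicit null validate. A sketch (class order flipped so the names exist when `User` is defined):

```python
from typing import Dict, Optional
from pydantic import BaseModel

class Profile(BaseModel):
    id: str
    profile_id: str
    details: str

class Connections(BaseModel):
    connection_id: str
    connection_count: int
    connection_names: Dict[str, str]

class User(BaseModel):
    id: str
    username: str
    password: str
    connections: Optional[Connections] = None  # default => no "missing" error
    profile: Optional[Profile] = None

user = User(id="1", username="a", password="b", connections=None)
```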
<python><python-typing><pydantic>
2023-10-16 20:33:57
1
343
aero8991
77,304,783
179,372
Dask from_delayed() causing high memory usage
<p>I'm following the instructions from (<a href="https://docs.dask.org/en/stable/delayed-collections.html" rel="nofollow noreferrer">https://docs.dask.org/en/stable/delayed-collections.html</a>) to create a custom data loader for a Dask DataFrame, which is basically this:</p> <pre class="lang-py prettyprint-override"><code>import dask.dataframe as dd from dask.delayed import delayed dfs = [delayed(load)(fn) for fn in filenames] df = dd.from_delayed(dfs, meta=types) </code></pre> <p>I have 10k filenames and I'm specifying the <code>meta</code> attribute (type are correct). The issue is that this is eating a lot of memory and making workers to be paused. If I do the same but with dask arrays it works fine, but if I do that by returning Pandas DataFrames from the <code>load()</code> function, it gets this high memory usage to the point of exhausting the system memory. Isn't this approach supposed to scale well ? Are there other ways to load DataFrames (by parts) in Dask ?</p> <p>I'm using <code>dask==2023.5.0</code>.</p>
<python><pandas><dask><dask-distributed>
2023-10-16 20:28:08
1
20,050
Tarantula
77,304,751
8,205,554
How can I combine two dictionaries with the same keys and values, ignoring string values?
<p>I have a dataset as follows,</p> <pre class="lang-py prettyprint-override"><code>[ {'sets': 3, 'muscle': 'Chest', 'body_part': 'Upper'}, {'sets': 3, 'muscle': 'Chest', 'body_part': 'Upper'}, {'sets': 4, 'muscle': 'Chest', 'body_part': 'Upper'}, {'sets': 2, 'muscle': 'Abdominal', 'body_part': 'Abdominal'}, {'sets': 3, 'muscle': 'Abdominal', 'body_part': 'Abdominal'}, {'sets': 2, 'muscle': 'Abdominal', 'body_part': 'Abdominal'}, {'sets': 3, 'muscle': 'Abdominal', 'body_part': 'Abdominal'} ] </code></pre> <p>I want to merge two dictionaries if their value is less than or equal to 2. For instance, when we look at the data above, the output should be as follows,</p> <pre class="lang-py prettyprint-override"><code>[ {'sets': 3, 'muscle': 'Chest', 'body_part': 'Upper'}, {'sets': 3, 'muscle': 'Chest', 'body_part': 'Upper'}, {'sets': 4, 'muscle': 'Chest', 'body_part': 'Upper'}, {'sets': 4, 'muscle': 'Abdominal', 'body_part': 'Abdominal'}, {'sets': 3, 'muscle': 'Abdominal', 'body_part': 'Abdominal'}, {'sets': 3, 'muscle': 'Abdominal', 'body_part': 'Abdominal'} ] </code></pre> <p>As you can see, there were two sets of &quot;Abdominal&quot; with 2 and these were combined to make 4 sets. Also, other string values (<code>muscle</code> and <code>body_part</code>) are still here.</p> <p>I tried this and found the items to replace but I still need to remove the actual values from the data.</p> <pre class="lang-py prettyprint-override"><code>replaces = {} for part in data: if part['sets'] == 2: if part['muscle'] not in replaces: replaces[part['muscle']] = [] replaces[part['muscle']].append({ 'sets': part['sets'], 'muscle': part['muscle'], 'body_part': part['body_part'] }) </code></pre> <p>How can I do this? Thanks in advance.</p>
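One pass with a per-muscle slot for a pending small entry is enough: remember the first entry whose `sets` is at or below the threshold, then fold the next small entry for the same muscle into it and drop it. A sketch checked against the example above:

```python
def merge_small_sets(data, threshold=2):
    merged = []
    pending = {}  # muscle -> entry already placed in `merged`, awaiting a partner
    for entry in data:
        muscle = entry["muscle"]
        if entry["sets"] <= threshold and muscle in pending:
            pending.pop(muscle)["sets"] += entry["sets"]  # combine, drop this one
        else:
            entry = dict(entry)  # copy so the caller's dicts stay untouched
            merged.append(entry)
            if entry["sets"] <= threshold:
                pending[muscle] = entry
    return merged
```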
<python><dictionary>
2023-10-16 20:21:45
2
2,633
E. Zeytinci
77,304,733
5,287,011
append() question - AttributeError: 'numpy.ndarray' object has no attribute 'append'
<pre><code>import numpy as np import pandas as pd from PIL import Image from glob import glob import cv2 import gc import os </code></pre> <p>It is an attempt to build a CNN to classify lung and colon cancer. Dataset: <a href="https://www.kaggle.com/datasets/andrewmvd/lung-and-colon-cancer-histopathological-images" rel="nofollow noreferrer">https://www.kaggle.com/datasets/andrewmvd/lung-and-colon-cancer-histopathological-images</a></p> <pre><code>path = PATH + 'lung_colon_image_set/lung_image_sets' classes = os.listdir(path) </code></pre> <p>You will have to define your own PATH to get the data after downloading it from Kaggle.</p> <pre><code>for cat in classes: image_dir = f'{path}/{cat}' images = os.listdir(image_dir) print('Images Count: ', len(images)) IMG_SIZE = 256 </code></pre> <p>I have the error AttributeError: 'numpy.ndarray' object has no attribute 'append' while I do NOT define any array:</p> <pre><code>X = list() Y = list() for i, cat in enumerate(classes): images = glob(f'{path}/{cat}/*.jpeg') for image in images: img = cv2.imread(image) #img1 = cv2.resize(img, (IMG_SIZE, IMG_SIZE)) #X = append(X,cv2.resize(img, (IMG_SIZE, IMG_SIZE))) #X = np.append(X,cv2.resize(cv2.imread(image), (IMG_SIZE, IMG_SIZE))) #X = np.append(X, img1) X.append(cv2.resize(img, (IMG_SIZE, IMG_SIZE))) Y.append(i) X = np.asarray(X) one_hot_encoded_Y = pd.get_dummies(Y).values </code></pre> <p>As you can see, I defined X and Y as lists.</p> <p>I tried to correct it (you can see commented rows of code), but the loop never ends (images are around 5000).</p> <p>As per a great comment, cv2.imread(image) creates an array. Thus, I am appending a list (X) with an array (list of arrays). 
There must be a way to do this, but I do not see it.</p> <pre><code>NameError Traceback (most recent call last) Cell In[37], line 16 10 img = cv2.imread(image) 11 #img = img.tolist() 12 #print(type(img)) 13 #img1 = cv2.resize(img, (IMG_SIZE, IMG_SIZE)) 14 #print(type(img1.shape)) ---&gt; 16 X.append(cv2.resize(img, (IMG_SIZE, IMG_SIZE))) 17 #X = np.append(X,cv2.resize(cv2.imread(image), (IMG_SIZE, IMG_SIZE))) 18 #X = np.append(X, img1) 19 #X=np.append(X, cv2.resize(img, (IMG_SIZE, IMG_SIZE))) 20 #Y=np.append(i, Y) 22 Y.append(i) AttributeError: 'numpy.ndarray' object has no attribute 'append' </code></pre> <p>EDIT:</p> <p>Here is the code that addresses the error:</p> <pre><code>for i, cat in enumerate(classes): images = glob(f'{path}/{cat}/*.jpeg') kk =0 #for tracking only for image in images: kk+=1 print('kk = ', kk) #tracking img1 = (cv2.resize(cv2.imread(image), (IMG_SIZE, IMG_SIZE))).tolist() X.append(img1) Y.append(i) </code></pre> <p>It fixes the error and processes normally but it kills the kernel (at about 5000-6000 elements processed out of 15,000).</p> <p>My guess is extensive memory usage during append() - 3 classes x 5000 images are being added to the list.</p> <p>How can this be improved? I can not use extend() because I still need a list (of img1 elements) for that.</p> <p>using gc() did not work (or maybe I did not use it correctly).</p> <p>Of course, I can reduce the number of images but I would rather not to.</p> <p>I use MacOS Pro M1 notebook. No GPU is possible.</p> <p>Appreciate your ideas!!</p>
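Since the image count and target size are known up front, preallocating one uint8 array and filling rows in place sidesteps both problems: no list-of-arrays growth, and no `.tolist()` conversion (nested Python lists of ints cost roughly an order of magnitude more memory than a uint8 array). A numpy-only sketch with a placeholder loader standing in for the cv2 calls:

```python
import numpy as np

IMG_SIZE = 256
N_IMAGES = 100  # placeholder; in the real code, the total image count

def load_image(i):
    # Stand-in for cv2.resize(cv2.imread(path), (IMG_SIZE, IMG_SIZE)),
    # which already returns a uint8 array of shape (IMG_SIZE, IMG_SIZE, 3).
    return np.full((IMG_SIZE, IMG_SIZE, 3), i % 256, dtype=np.uint8)

# Allocate once: one byte per pixel, no per-append copies, no Python ints.
X = np.empty((N_IMAGES, IMG_SIZE, IMG_SIZE, 3), dtype=np.uint8)
Y = np.empty(N_IMAGES, dtype=np.int64)
for i in range(N_IMAGES):
    X[i] = load_image(i)
    Y[i] = 0  # class index for this image
```

At 15,000 images this array is about 2.9 GB, which should fit comfortably in 64 GB.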
<python><memory-management><append><attributeerror>
2023-10-16 20:17:26
2
3,209
Toly
77,304,705
5,227,892
Load the python dictionary from csv file
<p>I saved a dictionary of DataFrames with keys <code>dict_keys(['df1', 'df2'])</code> into a <code>csv</code> file. <code>cat data.csv</code> outputs something like this:</p> <pre><code>,,df1,df2 quantile_10,h0,0.03636844456195831,0.105098 quantile_10,h5,0.0654495671391487,0.108322 quantile_10,h10,0.10933497846126557,0.113496 quantile_10,h15,0.1400846481323242,0.119098 quantile_10,h20,0.1513956755399704,0.068324 quantile_10,h25,0.11640003025531769,0.017974 quantile_10,h30,0.026758757233619694,9.4e-05 quantile_10,h35,0.0020915842149406673,0.0 quantile_20,h0,0.05211468040943147,0.130436 quantile_20,h5,0.08270514607429505,0.125416 quantile_20,h10,0.12569279968738556,0.125436 quantile_20,h15,0.14520362317562102,0.149596 </code></pre> <p>How can I open this dictionary in Python so that I have <code>dict['df1']</code> and <code>dict['df2']</code>, with the quantiles being row indexes?</p>
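Since the two leading unnamed columns form a two-level row index, read them back with `index_col=[0, 1]` and split the value columns into a dict; a sketch run on a few of the rows above:

```python
import io
import pandas as pd

sample = """,,df1,df2
quantile_10,h0,0.036368,0.105098
quantile_10,h5,0.065450,0.108322
quantile_20,h0,0.052115,0.130436
"""

df = pd.read_csv(io.StringIO(sample), index_col=[0, 1])
d = {name: df[name] for name in df.columns}  # d['df1'], d['df2']
```

Each `d['df1']` is then a Series indexed by (quantile, h); to get the quantiles alone as row index with one column per h, follow with `d['df1'].unstack()`.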
<python>
2023-10-16 20:12:00
1
435
Sher
77,304,601
489,088
How to use a mask to limit broadcasted operations between two numpy arrays?
<p>I have an array like so:</p> <pre><code>data = np.array([ [[10, 10, 10], [10, 10, 10], [10, 10, 10]], [[20, 20, 20], [20, 20, 20], [20, 20, 20]], [[30, 30, 30], [30, 30, 30], [30, 30, 30]], ], dtype=np.float64) </code></pre> <p>and one to divide values by, like so:</p> <pre><code>divide_by = np.array([ [[10, 10, 1]], [[1, 10, 10]], [[1, 1, 1]], ], dtype=np.float64) </code></pre> <p>I would like to divide each row (axis 0) of the <code>data</code> array by values in the <code>divide_by</code> array (sort of like a stamp), but only in positions where a given mask (which as the shape of <code>data</code>) has been set to <code>True</code>.</p> <p>So the first part I can achieve by:</p> <pre><code>divide_by = divide_by.reshape(divide_by.shape[0], divide_by.shape[2]) data /= divide_by print(data) </code></pre> <p>Which yields:</p> <pre><code>[[[ 1. 1. 10.] [10. 1. 1.] [10. 10. 10.]] [[ 2. 2. 20.] [20. 2. 2.] [20. 20. 20.]] [[ 3. 3. 30.] [30. 3. 3.] [30. 30. 30.]]] </code></pre> <p>Note that each row of the <code>data</code> array has been divided by what's in <code>divide_by</code> as if that had been applied like a stamp on top of it. Great.</p> <p>I would like to do the same now, but only apply the division in places where this mask is set to true:</p> <pre><code>mask = np.array([ [[False, True, False], [False, False, False], [True, False, False]], [[True, True, True], [False, False, True], [False, False, False]], [[True, False, False], [False, False, False], [False, False, False]], ]) </code></pre> <p>So that the expected output is:</p> <pre><code>[[[10. 1. 10.] [10. 10. 1.] [10. 10. 10.]] [[ 2. 2. 20.] [20. 20. 2.] [20. 20. 20.]] [[ 3. 30. 30.] [30. 30. 30.] [30. 30. 
30.]]] </code></pre> <p>The mask is defining a subset of places to divide by,</p> <p>But if I do:</p> <pre><code>data[mask] /= divide_by </code></pre> <p>instead of</p> <pre><code>data /= divide_by </code></pre> <p>I get:</p> <pre><code>ValueError: operands could not be broadcast together with shapes (7,) (3,3) (7,) </code></pre> <p>How can I use this mask in this particular case?</p>
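NumPy ufuncs accept a `where=` mask together with `out=`: the division is applied only at True positions and every other element of `out` keeps its value. And since `divide_by` already has shape (3, 1, 3), it broadcasts against the (3, 3, 3) data with no reshape at all. A sketch on smaller values of the same shapes:

```python
import numpy as np

data = np.full((3, 3, 3), 10.0)
data[1], data[2] = 20.0, 30.0
divide_by = np.array([[[10.0, 10.0, 1.0]],
                      [[1.0, 10.0, 10.0]],
                      [[1.0, 1.0, 1.0]]])  # shape (3, 1, 3), broadcasts per block
mask = np.zeros((3, 3, 3), dtype=bool)
mask[0, 0, 1] = True
mask[1, 0, :] = True

# Divide in place, but only where mask is True; the rest is untouched.
np.divide(data, divide_by, out=data, where=mask)
```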
<python><arrays><numpy><numpy-ndarray><array-broadcasting>
2023-10-16 19:48:50
1
6,306
Edy Bourne
77,304,463
5,924,264
Round and floating point precision
<p>We're writing data to a sql database (sqlite3 to be specific). For a particular column, <code>quantity</code>, it is specified as a <code>REAL</code> (float) column because we store the <code>quantity</code> as multiples of 1000 (so the integer quantity is divided by <code>1000</code>). So the stored quantity is accurate to the 3rd decimal place, but because of floating point precision, we sometimes see random non-zero values after the 3rd decimal place.</p> <p>My colleague is trying to use <code>round</code> to handle this, but I don't see how this will help with the issue. In fact, I don't think it does anything for this particular problem.</p> <p>My understanding is there is no way to actually resolve this issue when writing if we insist of using floats.</p>
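Right: no amount of `round()` makes a REAL column hold exact thousandths, because values like 0.001 have no finite binary representation. Since every quantity is an exact multiple of 1/1000, the usual fix is to store the integer count of thousandths in an INTEGER column and divide only at the display edge; a sketch:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (qty_thousandths INTEGER NOT NULL)")

def store(conn, thousandths):
    # e.g. the quantity 1.234 is stored exactly as the integer 1234
    conn.execute("INSERT INTO orders VALUES (?)", (thousandths,))

def load(conn):
    return [q / 1000 for (q,) in conn.execute("SELECT qty_thousandths FROM orders")]

store(conn, 1234)
store(conn, 5)
print(load(conn))  # [1.234, 0.005]
```

`decimal.Decimal` works too on the Python side, but SQLite has no decimal type, so the integer encoding keeps the column exact end to end.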
<python><sql><sqlite><floating-point><rounding>
2023-10-16 19:16:58
3
2,502
roulette01
77,304,386
5,924,264
Does a "round"'ed value guarantee equality?
<p>Suppose I have some floating point value <code>x</code>. Suppose we do <code>x = round(x, 3)</code>, i.e., we round to 3 decimal places.</p> <p>Then if we check something like:</p> <pre><code>x == 0.222 </code></pre> <p>does this guarantee there's no comparison beyond the 3rd decimal place?</p>
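Two doubles that come from the same decimal compare exactly equal, so the `==` itself is safe; the catch is that `round` operates on the binary value `x` actually holds, which may not be the decimal you think it is. A couple of checked examples:

```python
# round(0.1 + 0.2, 3) and the literal 0.3 are the very same double,
# so the comparison is exact and True.
assert round(0.1 + 0.2, 3) == 0.3

# But round sees the stored binary value: 2.675 is held as
# 2.67499999999999982..., so rounding to 2 places goes down, not up.
assert round(2.675, 2) == 2.67
```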
<python><floating-point>
2023-10-16 19:04:45
1
2,502
roulette01
77,304,366
1,386,584
Cannot download file with scrapy-playwright
<p>I was trying to download file via <a href="https://github.com/scrapy-plugins/scrapy-playwright" rel="nofollow noreferrer">scrapy-playwright</a>, but for some reason download of file fails. On URL there is pdf file that I want to download, and I can see from logs that download is started, but it is interrupted. I am not sure why is that happening since error is not clear enough.</p> <pre class="lang-py prettyprint-override"><code> import scrapy from scrapy_playwright.page import PageMethod from playwright.async_api import Dialog import logging class AwesomeSpider(scrapy.Spider): name = &quot;awesome&quot; def start_requests(self): # GET request yield scrapy.Request( url=&quot;https://www.fiat.rs/content/dam/fiat/rs/cenovnici-i-katalozi/cenovnik-fiat-500.pdf&quot;, meta=dict( playwright=True, playwright_include_page = True, playwright_page_event_handlers = { &quot;download&quot;: self.handle_download }, playwright_page_methods={ &quot;expect_download&quot;: PageMethod(&quot;expect_download&quot;, predicate=self.is_download_done, timeout=120000), }, ), errback=self.errback, ) def is_download_done(self, download): logging.info(f&quot;Is download done: {download.is_done()}&quot;) return download.is_done() async def handle_download(self, download) -&gt; None: print(&quot;File download started&quot;) print(f&quot;File: {download.suggested_filename}&quot;) await download.save_as(&quot;~/projects/nenad/temp/&quot; + download.suggested_filename) print(f&quot;Received file with path {await download.path()}&quot;) async def parse(self, response): # 'response' contains the page as seen by the browser logging.info(&quot;Parsing response&quot;) page = response.meta[&quot;playwright_page&quot;] file = await page.expect_download(timeout=120000) # screenshot contains the image's bytes await page.close() return {&quot;file&quot;: file} async def errback(self, failure): logging.info( &quot;Handling failure in errback, request=%r, exception=%r&quot;, failure.request, failure.value ) 
logging.info(failure) page = failure.request.meta[&quot;playwright_page&quot;] await page.close() await page.context.close() </code></pre> <p>Error logs, that are showing the error can be found here:</p> <pre><code>2023-10-16 20:37:01 [scrapy-playwright] INFO: Launching browser chromium 2023-10-16 20:37:01 [scrapy-playwright] INFO: Browser chromium launched 2023-10-16 20:37:01 [scrapy-playwright] DEBUG: Browser context started: 'default' (persistent=False, remote=False) 2023-10-16 20:37:02 [scrapy-playwright] DEBUG: [Context=default] New page created, page count is 1 (1 for all contexts) 2023-10-16 20:37:02 [scrapy-playwright] DEBUG: [Context=default] Request: &lt;GET https://www.fiat.rs/content/dam/fiat/rs/cenovnici-i-katalozi/cenovnik-fiat-500.pdf&gt; (resource type: document) 2023-10-16 20:37:02 [scrapy-playwright] DEBUG: [Context=default] Response: &lt;200 https://www.fiat.rs/content/dam/fiat/rs/cenovnici-i-katalozi/cenovnik-fiat-500.pdf&gt; File download started File: cenovnik-fiat-500.pdf 2023-10-16 20:37:02 [root] INFO: Handling failure in errback, request=&lt;GET https://www.fiat.rs/content/dam/fiat/rs/cenovnici-i-katalozi/cenovnik-fiat-500.pdf&gt;, exception=Error('net::ERR_ABORTED at https://www.fiat.rs/content/dam/fiat/rs/cenovnici-i-katalozi/cenovnik-fiat-500.pdf\n=========================== logs ===========================\nnavigating to &quot;https://www.fiat.rs/content/dam/fiat/rs/cenovnici-i-katalozi/cenovnik-fiat-500.pdf&quot;, waiting until &quot;load&quot;\n============================================================') 2023-10-16 20:37:02 [root] INFO: [Failure instance: Traceback: &lt;class 'playwright._impl._api_types.Error'&gt;: net::ERR_ABORTED at https://www.fiat.rs/content/dam/fiat/rs/cenovnici-i-katalozi/cenovnik-fiat-500.pdf =========================== logs =========================== navigating to &quot;https://www.fiat.rs/content/dam/fiat/rs/cenovnici-i-katalozi/cenovnik-fiat-500.pdf&quot;, waiting until &quot;load&quot; 
============================================================ /home/mrav/.local/share/virtualenvs/scrapy_google-izqaQN_l/lib/python3.8/site-packages/twisted/internet/defer.py:735:errback /home/mrav/.local/share/virtualenvs/scrapy_google-izqaQN_l/lib/python3.8/site-packages/twisted/internet/defer.py:798:_startRunCallbacks /home/mrav/.local/share/virtualenvs/scrapy_google-izqaQN_l/lib/python3.8/site-packages/twisted/internet/defer.py:892:_runCallbacks /home/mrav/.local/share/virtualenvs/scrapy_google-izqaQN_l/lib/python3.8/site-packages/twisted/internet/defer.py:1792:gotResult --- &lt;exception caught here&gt; --- /home/mrav/.local/share/virtualenvs/scrapy_google-izqaQN_l/lib/python3.8/site-packages/twisted/internet/defer.py:1693:_inlineCallbacks /home/mrav/.local/share/virtualenvs/scrapy_google-izqaQN_l/lib/python3.8/site-packages/twisted/python/failure.py:518:throwExceptionIntoGenerator /home/mrav/.local/share/virtualenvs/scrapy_google-izqaQN_l/lib/python3.8/site-packages/scrapy/core/downloader/middleware.py:54:process_request /home/mrav/.local/share/virtualenvs/scrapy_google-izqaQN_l/lib/python3.8/site-packages/twisted/internet/defer.py:1065:adapt /home/mrav/.local/share/virtualenvs/scrapy_google-izqaQN_l/lib/python3.8/site-packages/scrapy_playwright/handler.py:322:_download_request /home/mrav/.local/share/virtualenvs/scrapy_google-izqaQN_l/lib/python3.8/site-packages/scrapy_playwright/handler.py:357:_download_request_with_page /home/mrav/.local/share/virtualenvs/scrapy_google-izqaQN_l/lib/python3.8/site-packages/playwright/async_api/_generated.py:9251:goto /home/mrav/.local/share/virtualenvs/scrapy_google-izqaQN_l/lib/python3.8/site-packages/playwright/_impl/_page.py:473:goto /home/mrav/.local/share/virtualenvs/scrapy_google-izqaQN_l/lib/python3.8/site-packages/playwright/_impl/_frame.py:138:goto /home/mrav/.local/share/virtualenvs/scrapy_google-izqaQN_l/lib/python3.8/site-packages/playwright/_impl/_connection.py:61:send 
/home/mrav/.local/share/virtualenvs/scrapy_google-izqaQN_l/lib/python3.8/site-packages/playwright/_impl/_connection.py:482:wrap_api_call /home/mrav/.local/share/virtualenvs/scrapy_google-izqaQN_l/lib/python3.8/site-packages/playwright/_impl/_connection.py:97:inner_send ] Received file with path /tmp/playwright-artifacts-Of01oV/40467222-8f9f-4c24-9e67-348d60a34a68 2023-10-16 20:37:02 [scrapy-playwright] DEBUG: Browser context closed: 'default' (persistent=False, remote=False) 2023-10-16 20:37:03 [scrapy.core.engine] INFO: Closing spider (finished) </code></pre> <p>It seems to me that download of file is started, but for some reason scraper do not wait for download to be finished.</p> <p>In <em>settings.py</em> I have additional configuration for Playwright:</p> <pre><code>TWISTED_REACTOR = &quot;twisted.internet.asyncioreactor.AsyncioSelectorReactor&quot; DOWNLOAD_HANDLERS = { &quot;http&quot;: &quot;scrapy_playwright.handler.ScrapyPlaywrightDownloadHandler&quot;, &quot;https&quot;: &quot;scrapy_playwright.handler.ScrapyPlaywrightDownloadHandler&quot;, } PLAYWRIGHT_BROWSER_TYPE = &quot;chromium&quot; </code></pre> <p>Any suggestion what could be wrong is more then welcomed.</p>
<python><scrapy><playwright-python><scrapy-playwright>
2023-10-16 19:00:29
0
5,728
Ivan Vasiljevic
77,304,354
5,431,734
chaining an I/O bound and a cpu bound operation in python multiprocessing
<p>Suppose I want to fetch/read data from the filesystem and then process it with some number crunching function. The former is I/O bound, hence I will benefit from running multiple threads on it (using <code>multiprocessing.dummy.pool</code> and <code>pool.map</code>), whereas the latter, being CPU bound, will benefit from multiple processes (using <code>multiprocessing.pool</code> and <code>pool.map</code>).</p> <p>How can I chain those two, please? Is this doable? Or is the only way to do them sequentially, i.e. parse all the flat files from the disk using multithreading and, when this is finished, do the calculations using multiprocessing by feeding in the data that the multithreading step has produced?</p>
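The two stages can be chained rather than run strictly one after the other: because `imap` yields results lazily, the CPU pool can start crunching a file as soon as the I/O pool has read it. A minimal sketch follows; the file-reading and crunching bodies are placeholders, and both pools are thread pools here only so the sketch is portable and self-contained. In real use the `cpu_pool` argument would be a `multiprocessing.Pool`.

```python
from multiprocessing.dummy import Pool as ThreadPool  # threads: fine for I/O

def read_file(path):
    # I/O-bound stage: stand-in for reading a flat file from disk
    return "data-from-%s" % path

def crunch(data):
    # CPU-bound stage: stand-in for the number crunching
    return data.upper()

def pipeline(paths, io_pool, cpu_pool):
    # io_pool.imap yields lazily, so cpu_pool can start crunching
    # early files while later files are still being read
    return list(cpu_pool.imap(crunch, io_pool.imap(read_file, paths)))

if __name__ == "__main__":
    with ThreadPool(4) as io_pool, ThreadPool(4) as cpu_pool:
        print(pipeline(["a.txt", "b.txt"], io_pool, cpu_pool))
        # ['DATA-FROM-A.TXT', 'DATA-FROM-B.TXT']
```

Note that when `cpu_pool` really is a `multiprocessing.Pool`, `crunch` must be defined at module top level so it can be pickled for the worker processes.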
<python><multithreading><multiprocessing><python-multiprocessing>
2023-10-16 18:57:49
1
3,725
Aenaon
77,304,248
5,131,139
NameError when running Apache Beam pipeline on Google Dataflow when using nested functions
<p>I am working on an Apache Beam pipeline in Python, and I'm encountering a NameError when running the pipeline on Google Dataflow. The error specifically mentions that 'json_encoder' is not defined. The pipeline works fine when running it locally.</p> <p>Here's the gist of the code:</p> <pre><code>import apache_beam as beam import decimal # Simplified Apache Beam pipeline step input | &quot;Convert to string&quot; &gt;&gt; beam.Map(encode_as_task) def encode_as_task(element, cache_enabled=False): import orjson # Convert the element to a string before publishing to Pub/Sub task = dict() tasks = [] task['task'] = element[2][0] task['task'].pop('client_id', None) task['clientIds'] = element[1] tasks.append(task) task_generator_request = {&quot;tasks&quot;: tasks, &quot;cacheEnabled&quot;: cache_enabled} message = orjson.dumps(task_generator_request, default=json_encoder) return message def json_encoder(value): if isinstance(value, decimal.Decimal): return float(value) raise TypeError </code></pre> <p>This code works well locally.</p> <p>But when running in Cloud Dataflow I get the following error: <code>NameError: name 'json_encoder' is not defined. </code></p> <p>I used to get these issues when dealing with external dependencies such as orjson. I resolved those by importing the package inside the function directly, like <code>import orjson</code> in <code>encode_as_task</code>.</p> <p>But since <code>json_encoder</code> is an internal function, I'm not sure how to handle or import it.</p> <p>Side note: I am sending the requirements file using the Beam runtime argument <code> --requirements_file=requirements.txt</code> to tell Apache Beam and Dataflow about the dependencies. I'm still facing these issues.</p>
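Two approaches are commonly suggested for this kind of NameError on Dataflow: pass the pipeline option `--save_main_session` (which pickles main-module globals for the workers), or make the mapped function fully self-contained by defining the helper inside it, the same way `import orjson` was moved inside. A sketch of the second pattern, using the standard-library `json` module in place of `orjson` purely so it runs anywhere:

```python
import decimal
import json  # stand-in for orjson, only so this sketch is self-contained

def encode_as_task(element, cache_enabled=False):
    # The helper lives inside the function, so when Beam pickles
    # encode_as_task for remote workers, json_encoder travels with it.
    def json_encoder(value):
        if isinstance(value, decimal.Decimal):
            return float(value)
        raise TypeError

    task_generator_request = {"tasks": element, "cacheEnabled": cache_enabled}
    return json.dumps(task_generator_request, default=json_encoder)

print(encode_as_task([decimal.Decimal("1.5")]))
# {"tasks": [1.5], "cacheEnabled": false}
```

The `--save_main_session` route keeps the code as-is but ships everything defined in the main module to workers, which can be slower to pickle; nesting the helper keeps the dependency surface explicit.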
<python><google-cloud-dataflow><apache-beam>
2023-10-16 18:35:08
1
1,429
AnandShiva
77,304,230
11,330,134
Order dataframe by two values and assign category per n-size
<p>I have a dataframe with two columns:</p> <pre><code>data = [ (0, 'A'), (0, 'B'), (0, 'C'), (1, 'D'), (1, 'E'), (2, 'F'), (3, 'G'), (4, 'H'), (5, 'I'), ] schm = ['val', 'id'] df = spark.createDataFrame(data,schema=schm) df.show() +---+---+ |val| id| +---+---+ | 0| A| | 0| B| | 0| C| | 1| D| | 1| E| | 2| F| | 3| G| | 4| H| | 5| I| +---+---+ </code></pre> <p>Notice, the <code>id</code> values are unique but the <code>val</code> values are not. I've seen solutions grouping by the <code>val</code> but that won't work in this case.</p> <p>The sample is already in order, but say I order them by <code>val</code> then <code>id</code> and store that in a row number (<code>rn</code>) column:</p> <pre><code>from pyspark.sql.functions import row_number,lit from pyspark.sql.window import Window w = Window().orderBy('val', 'id') df = df.withColumn(&quot;rn&quot;, row_number().over(w)) df.show() +---+---+---+ |val| id| rn| +---+---+---+ | 0| A| 1| | 0| B| 2| | 0| C| 3| | 1| D| 4| | 1| E| 5| | 2| F| 6| | 3| G| 7| | 4| H| 8| | 5| I| 9| +---+---+---+ </code></pre> <p>I want to add a new column to categorize them into four buckets of equal n-size.</p> <pre><code>n_splits = 4 # Calculate count of each dataframe rows each_len = df.count() // n_splits each_len 2 </code></pre> <p>I kind of get there doing something like this:</p> <pre><code>b1 = range(0, each_len) b2 = range(b1[-1], each_len*2) b3 = range(b2[-1], each_len*3) b4 = range(b3[-1], each_len*4) df = df.withColumn('cat', when(col('rn').between(b1[1], b1[-1]), 1) .when(col('rn').between(b2[1], b2[-1]), 2) .when(col('rn').between(b3[1], b3[-1]), 3) .when(col('rn').between(b4[1], b4[-1]), 4)) df.show() +---+---+---+----+ |val| id| rn| cat| +---+---+---+----+ | 0| A| 1| 1| | 0| B| 2| 2| | 0| C| 3| 2| | 1| D| 4| 3| | 1| E| 5| 3| | 2| F| 6| 4| | 3| G| 7| 4| | 4| H| 8|null| | 5| I| 9|null| +---+---+---+----+ </code></pre> <p>However, there must be a more elegant and programmatic way. 
I figure the <code>null</code>s can be assigned the last category (<code>4</code>) as well.</p> <p>I want output like:</p> <pre><code>+---+---+---+---+ |val| id| rn|cat| +---+---+---+---+ | 0| A| 1| 1| | 0| B| 2| 1| | 0| C| 3| 2| | 1| D| 4| 2| | 1| E| 5| 3| | 2| F| 6| 3| | 3| G| 7| 4| | 4| H| 8| 4| | 5| I| 9| 4| +---+---+---+---+ </code></pre>
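For reference, Spark's window function `ntile(n)` does exactly this kind of equal-size bucketing, though it front-loads the remainder (bucket sizes 3, 2, 2, 2 for 9 rows). To reproduce the exact output shown, with the leftover rows folded into the last bucket, the category can be computed arithmetically from the row number. A plain-Python illustration of the formula, which translates directly into a single `withColumn` expression:

```python
def bucket(rn, total, n_splits):
    # Rows 1..total are assigned buckets 1..n_splits in order; each bucket
    # holds total // n_splits rows, and any remainder spills into the last.
    each_len = total // n_splits
    return min((rn - 1) // each_len + 1, n_splits)

cats = [bucket(rn, 9, 4) for rn in range(1, 10)]
print(cats)  # [1, 1, 2, 2, 3, 3, 4, 4, 4]
```

In PySpark this would look something like `df.withColumn('cat', F.least(F.floor((F.col('rn') - 1) / each_len) + 1, F.lit(n_splits)))`, where `F` is `pyspark.sql.functions` (an untested sketch; `F.ntile(4).over(w)` is the one-liner if the 3-2-2-2 split is acceptable).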
<python><pyspark>
2023-10-16 18:31:42
0
489
md2614
77,304,200
14,608,443
How to loop through a file and use the values in a bash or python script?
<p>I have a numbers.txt file which contains thousands of numbers in the following format:</p> <pre><code>00001 00002 </code></pre> <p>I need to create a Python or Bash script which will take each number and replace xyz as seen below with each of the values in a new file. Below is the format:</p> <pre><code>echo '{&quot;id&quot;:1,&quot;jsonrpc&quot;:&quot;2.0&quot;, &quot;method&quot;:&quot;dummy.endpoint&quot;,&quot;params&quot;:[123, [xyz]]}' | nc -N localhost 30000 | json_pp </code></pre> <p>So the final output would look like:</p> <pre><code>echo '{&quot;id&quot;:1,&quot;jsonrpc&quot;:&quot;2.0&quot;, &quot;method&quot;:&quot;dummy.endpoint&quot;,&quot;params&quot;:[123, [00001]]}' | nc -N localhost 30000 | json_pp echo '{&quot;id&quot;:1,&quot;jsonrpc&quot;:&quot;2.0&quot;, &quot;method&quot;:&quot;dummy.endpoint&quot;,&quot;params&quot;:[123, [00002]]}' | nc -N localhost 30000 | json_pp </code></pre> <p>How can this be done in either a Python or Bash script?</p>
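A Python sketch that builds the commands. Two details worth noting: the values are kept as strings (converting to `int` would drop the leading zeros), and the literal braces in the JSON template are doubled so `str.format` leaves them alone. In Bash, a `while read -r num; do ...; done < numbers.txt` loop would achieve the same.

```python
def build_commands(lines):
    template = (
        "echo '{{\"id\":1,\"jsonrpc\":\"2.0\", "
        "\"method\":\"dummy.endpoint\",\"params\":[123, [{num}]]}}'"
        " | nc -N localhost 30000 | json_pp"
    )
    commands = []
    for line in lines:
        num = line.strip()          # keep as a string to preserve "00001"
        if num:                     # skip blank lines
            commands.append(template.format(num=num))
    return commands

# In real use: with open("numbers.txt") as fh: cmds = build_commands(fh)
for cmd in build_commands(["00001\n", "00002\n"]):
    print(cmd)
    # import subprocess; subprocess.run(cmd, shell=True)  # to execute it
```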
<python><bash>
2023-10-16 18:25:30
3
373
IveGotCodeButImNotACoder
77,304,167
14,546,482
Using pydantic to change int to string
<p>I am sometimes getting data that is either a string or an int for 2 of my values. I'm trying to figure out the best solution for handling this, and I am using pydantic v2. I've been studying @field_validator(pre=True) but haven't been successful.</p> <p>Here's my code:</p> <pre><code>class User(BaseModel): system_id: str db_id: str id: str record: dict isValid: bool @field_validator(&quot;id&quot;, mode='before') &lt;&lt;&lt;--------- not working def transform_system_id_to_str(cls, value) -&gt; str: return str(value) @field_validator(&quot;system_id&quot;, mode='before') &lt;&lt;&lt;--------- not working def transform_system_id_to_str(cls, value: str | int) -&gt; str: if isinstance(value, str): print('yes') return value return str(value) </code></pre>
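Two things stand out in the snippet, assuming pydantic v2: both validators share the method name `transform_system_id_to_str`, so the second definition silently replaces the first, and `field_validator` methods should be wrapped with `@classmethod`. A single `mode='before'` validator can also cover several fields at once. A sketch (fields trimmed to the relevant ones):

```python
from pydantic import BaseModel, field_validator

class User(BaseModel):
    system_id: str
    db_id: str
    id: str

    @field_validator("system_id", "db_id", "id", mode="before")
    @classmethod
    def coerce_to_str(cls, value):
        # Runs before normal str validation, so ints arrive here and
        # are handed on as strings
        return str(value)

u = User(system_id=123, db_id="abc", id=456)
print(u.system_id, u.id)  # prints: 123 456  (both are str now)
```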
<python><pydantic>
2023-10-16 18:18:34
4
343
aero8991
77,304,052
960,412
Transactional Operations with google.cloud.ndb
<p>I am using Google Cloud Datastore via the <a href="https://googleapis.dev/python/python-ndb/latest/" rel="nofollow noreferrer">python-ndb</a> library (Python 3). My goal is to <em>transactionally</em> create two entities at once. For example, when a user creates an <code>Account</code> entity, also create a <code>Profile</code> entity, such that if either entity fails to be created, then neither entity should be created.</p> <p>From the <a href="https://cloud.google.com/datastore/docs/concepts/transactions#transaction_locks" rel="nofollow noreferrer">datastore documentation</a>, it can be implemented like this:</p> <pre><code>from google.cloud import datastore client = datastore.Client() def transfer_funds(client, from_key, to_key, amount): with client.transaction(): from_account = client.get(from_key) to_account = client.get(to_key) from_account[&quot;balance&quot;] -= amount to_account[&quot;balance&quot;] += amount client.put_multi([from_account, to_account]) </code></pre> <p>However, there is no example provided by the python-ndb documentation. From chatgpt, I tried something like this:</p> <pre><code>from google.cloud import ndb client = ndb.Client() def create_new_account(): with client.context(): @ndb.transactional def _transaction(): account = Account() account.put() profile = Profile(parent=account.key) profile.put() return account, profile try: account, profile = _transaction() except Exception as e: print('Failed to create account with profile') raise e return account, profile </code></pre> <p>However, I get the error:</p> <pre><code>TypeError: transactional_wrapper() missing 1 required positional argument: 'wrapped' </code></pre>
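The traceback ("missing 1 required positional argument: 'wrapped'") is the classic signature of applying a decorator *factory* without calling it: in python-ndb, `ndb.transactional` takes options and returns the actual decorator, so it should be used as `@ndb.transactional()`. A standard-library illustration of the difference (the factory here is a made-up stand-in, not ndb's real implementation):

```python
def transactional(retries=3):
    # Decorator factory: calling it returns the real decorator.
    def decorator(wrapped):
        def inner(*args, **kwargs):
            return wrapped(*args, **kwargs)  # a real library would retry here
        return inner
    return decorator

@transactional()        # correct: the factory is called first
def works():
    return "ok"

print(works())          # prints: ok

# With bare @transactional, `works` would be bound to `decorator` itself
# (the function got passed as `retries`), and calling works() would fail:
#   TypeError: decorator() missing 1 required positional argument: 'wrapped'
```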
<python><google-cloud-platform><google-app-engine><google-cloud-datastore>
2023-10-16 17:58:09
1
1,280
hyang123
77,304,020
20,947,319
How to filter an item in one queryset from appearing in another queryset in django
<p>I have a Django view where I am getting the first queryset of 'top_posts' by just fetching all the Posts from the database and cutting it down to the first four. Within the same view, I want to get &quot;politics_posts&quot; by filtering Posts whose category is &quot;politics&quot;. But I don't want a post that appears in 'top_posts' to also appear in 'politics_posts'. I have tried using exclude but it seems like it's not working. Below is my view, which is currently not working:</p> <pre><code>def homepage(request): top_posts = Post.objects.all()[:4] politics_posts = Post.objects.filter(category='news').exclude(pk__in=top_posts) context={&quot;top_posts&quot;:top_posts, &quot;politics_posts&quot;:politics_posts} return render(request, 'posts/homepage.html', context) </code></pre> <p>Any help will be highly appreciated. Thanks.</p>
<python><django><django-views><django-queryset>
2023-10-16 17:52:38
1
446
victor
77,303,939
1,726,083
How to locally test FastAPI app using AWS JWT authorizer & API Gateway
<p>I'm struggling to implement testing on my FastAPI which runs under Mangum as a Lambda behind an AWS API Gateway with the <a href="https://docs.aws.amazon.com/apigateway/latest/developerguide/http-api-jwt-authorizer.html" rel="nofollow noreferrer">AWS JWT Authorizer</a>.</p> <blockquote> <p>After validating the JWT, API Gateway passes the claims in the token to the API route’s integration. [...] For example, if the JWT includes an identity claim emailID, it's available to a Lambda integration in <code>$event.requestContext.authorizer.jwt.claims.emailID</code></p> </blockquote> <p>In the route, I can do the following:</p> <pre><code>@app.get(&quot;/claims&quot;) async def claims(request: Request): event = request.scope[&quot;aws.event&quot;] claims = event.get(&quot;requestContext&quot;,{}).get(&quot;authorizer&quot;,{}).get(&quot;jwt&quot;,{}).get(&quot;claims&quot;,{}) return claims </code></pre> <p>And that works in the API Gateway to return the claims from the JWT.</p> <p>Now I want to write a test for <code>/claims</code> that mocks the claims in the request. 
I started like this:</p> <pre><code>from fastapi.testclient import TestClient client = TestClient(app) def test_claims(): resp = client.get(&quot;/claims&quot;) assert resp.status_code == 200 </code></pre> <p>FastAPI's TestClient uses httpx, but that means the FastAPI specific <code>request</code> parameter cannot be passed directly through the client.</p> <p>I can run <code>claims</code> directly:</p> <pre><code>@pytest.mark.asyncio async def test_claims(): aws_event = { &quot;requestContext&quot;: {&quot;authorizer&quot;: {&quot;jwt&quot;: {&quot;claims&quot;: {&quot;A&quot;: &quot;B&quot;}}}} } resp = await claims( Request( scope={ &quot;type&quot;: &quot;http&quot;, &quot;aws.event&quot;: aws_event }, ) ) assert resp == aws_event[&quot;requestContext&quot;][&quot;authorizer&quot;][&quot;jwt&quot;][&quot;claims&quot;] </code></pre> <p>But now it's not even going through FastAPI's router anymore.</p> <p>I feel like there should be some way to inject scope content into the request I'm making with httpx, perhaps by wrapping / modifying <code>fastapi.TestClient</code>. But I'm not enough of a pythonista to figure it out.</p>
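One way to inject scope content without bypassing FastAPI's router is a tiny ASGI middleware that plants the fake `aws.event` in every request scope; a test could then build the client as `TestClient(FakeAwsEventMiddleware(app, aws_event))` (the wrapper name is made up here, and whether your TestClient version accepts an arbitrary ASGI app this way is worth verifying). The middleware itself is plain ASGI and framework-free:

```python
import asyncio

class FakeAwsEventMiddleware:
    """Injects a canned Lambda event into the ASGI scope, mimicking Mangum."""

    def __init__(self, app, aws_event):
        self.app = app
        self.aws_event = aws_event

    async def __call__(self, scope, receive, send):
        if scope.get("type") == "http":
            scope = dict(scope)          # copy so other requests are unaffected
            scope["aws.event"] = self.aws_event
        await self.app(scope, receive, send)

# Tiny demo without FastAPI: a fake downstream app that records its scope.
async def demo():
    seen = {}

    async def inner_app(scope, receive, send):
        seen.update(scope)

    event = {"requestContext": {"authorizer": {"jwt": {"claims": {"A": "B"}}}}}
    wrapped = FakeAwsEventMiddleware(inner_app, event)
    await wrapped({"type": "http"}, None, None)
    return seen["aws.event"]

print(asyncio.run(demo()))
```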
<python><pytest><aws-api-gateway><fastapi><mangum>
2023-10-16 17:37:13
1
16,447
erik258
77,303,722
2,382,483
Optimizing divisions in numpy
<p>I have a situation where I need to divide several different large arrays by the same array:</p> <pre class="lang-py prettyprint-override"><code>import numpy as np a = np.random.uniform(-1, 1, size=(1000000,)) b = np.random.uniform(-1, 1, size=(1000000,)) c = np.random.uniform(-1, 1, size=(1000000,)) d = np.random.uniform(1, 100, size=(1000000,)) a1 = a / d b1 = b / d c1 = c / d </code></pre> <p>I know divisions are slower than multiplications, so I was curious if it would be faster for me to write:</p> <pre class="lang-py prettyprint-override"><code>d_recip = 1 / d a1 = a * d_recip b1 = b * d_recip c1 = c * d_recip </code></pre> <p>Is there anything numpy already does under the hood to optimize situations like this? If I used cython, would something like this be optimized?</p> <p>I'm also curious how this applies when the divisor is a scalar:</p> <pre class="lang-py prettyprint-override"><code>import numpy as np a = np.random.uniform(-1, 1, size=(1000000,)) b = np.random.uniform(-1, 1, size=(1000000,)) c = np.random.uniform(-1, 1, size=(1000000,)) d = 1.65 a1 = a / d b1 = b / d c1 = c / d # faster? or has numpy already optimized this above? d_recip = 1 / d a2 = a * d_recip b2 = b * d_recip c2 = c * d_recip </code></pre> <p>I understand the reciprocal method won't produce a bit-identical result; I'm just curious how to get the most speed when the exact bits aren't a concern.</p>
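As far as I know, NumPy does not rewrite an array division into a reciprocal multiply behind your back, precisely because the two are not bit-identical under IEEE rounding. Whether the rewrite pays off is easy to measure directly (sketch, assuming NumPy is installed); note the reciprocal only helps when `d` is reused, because computing `1 / d` is itself a full division pass:

```python
import timeit
import numpy as np

rng = np.random.default_rng(42)
a = rng.uniform(-1, 1, size=1_000_000)
d = rng.uniform(1, 100, size=1_000_000)

recip = 1.0 / d                        # one division, amortized over 3 arrays
print(np.allclose(a / d, a * recip))   # True: equal to within float rounding

t_div = timeit.timeit(lambda: a / d, number=20)
t_mul = timeit.timeit(lambda: a * recip, number=20)
print(f"divide: {t_div:.4f}s   multiply by reciprocal: {t_mul:.4f}s")
```

For the scalar case, `a / 1.65` versus `a * (1 / 1.65)` raises the same question; I would not assume NumPy converts one into the other, so timing both as above is the safe check.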
<python><numpy>
2023-10-16 16:56:18
1
3,557
Rob Allsopp
77,303,511
12,553,730
How do you calculate the area metric when comparing three images using python?
<p>I have three images: A, B, and C. I want to compare the difference in area between (A, B) and (C, B). Is there a more efficient and accurate way to do this using Python?</p> <p>Here's what I am currently doing, which I believe is quite inaccurate:</p> <pre><code>from PIL import Image from pycocotools import mask as coco_mask import pandas as pd, json, cv2, numpy as np, matplotlib.pyplot as plt, os A = r&quot;/path/to/A&quot; B = r&quot;/path/to/B&quot; C = r&quot;/path/to/C&quot; for a_path in os.listdir(A): for b_path in os.listdir(B): if b_path in a_path: a_image = Image.open(A + os.sep + a_path).convert('L') b_image = Image.open(B + os.sep + b_path).convert('L') a_array = np.array(a_image) b_array = np.array(b_image) overlap_array = np.bitwise_and(a_array, b_array) overlap_pixel_count = np.sum(overlap_array &gt; 0) n_pixel_count = np.sum(b_array &gt; 0) overlap_percentage = (overlap_pixel_count / n_pixel_count) * 100 print(overlap_percentage) else: pass for c_path in os.listdir(C): for b_path in os.listdir(B): if b_path in c_path: c_image = Image.open(bounding_boxes_path + os.sep + c_path).convert('L') # Convert to grayscale b_image = Image.open(segment_masks_path + os.sep + b_path).convert('L') # Convert to grayscale c_array = np.array(c_image) b_array = np.array(b_image) overlap_array = np.bitwise_and(c_array, b_array) overlap_pixel_count = np.sum(overlap_array &gt; 0) n_pixel_count = np.sum(b_array &gt; 0) overlap_percentage = (overlap_pixel_count / n_pixel_count) * 100 print(overlap_percentage) else: pass </code></pre> <h5>A:</h5> <p><a href="https://i.sstatic.net/m7R58.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/m7R58.png" alt="A" /></a></p> <h5>B:</h5> <p><a href="https://i.sstatic.net/tJmuF.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/tJmuF.png" alt="enter image description here" /></a></p> <h5>C:</h5> <p><a href="https://i.sstatic.net/GxWnl.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/GxWnl.png" 
alt="enter image description here" /></a></p>
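One accuracy pitfall in the code above: `np.bitwise_and` on grayscale values compares bit patterns, so two overlapping pixels with values 64 and 128 AND to 0 and get missed (it only happens to work for pure 0/255 masks). Thresholding to boolean masks and using logical operations avoids that, and packaging the comparison as one function guarantees (A, B) and (C, B) use the exact same metric. A sketch (NumPy assumed) that also reports IoU, a standard area-agreement metric:

```python
import numpy as np

def overlap_stats(mask_a, mask_b):
    """Compare two binary masks; returns (% of B covered by A, IoU)."""
    a = np.asarray(mask_a) > 0          # robust to any nonzero pixel value
    b = np.asarray(mask_b) > 0
    inter = np.logical_and(a, b).sum()
    union = np.logical_or(a, b).sum()
    pct_of_b = 100.0 * inter / max(b.sum(), 1)   # guard against an empty B
    iou = inter / max(union, 1)
    return pct_of_b, iou

a = np.array([[1, 1, 0], [0, 0, 0]])
b = np.array([[1, 0, 0], [1, 0, 0]])
print(overlap_stats(a, b))   # (50.0, 0.333...)
```

Calling `overlap_stats(np.array(a_image), np.array(b_image))` and then the same with the C mask gives directly comparable numbers.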
<python><python-3.x><image><image-processing><image-segmentation>
2023-10-16 16:18:16
1
309
nikhil int
77,303,499
16,591,513
TensorRT seems to lack common functionality
<p>I've recently encountered such an amazing tool called TensorRT, but because I don't have an NVIDIA GPU on my laptop, I decided to use Google Colab instead to play around with this technology.</p> <p>I used a simple pip command to install the necessary libraries, including ones for CUDA management</p> <pre><code>pip install nvidia-tensorrt --index-url https://pypi.ngc.nvidia.com pip install pycuda </code></pre> <p>After installation everything seems to be ready for usage. However, it turns out that some of the common methods simply do not exist.</p> <p>When I tried to create a <code>tensorRT Engine</code> via</p> <pre><code>builder = trt.Builder(trt.Logger(trt.Logger.INFO)) network = builder.create_network(batch_size) engine = builder.build_cuda_engine(network) </code></pre> <p>It throws an exception, <code>'tensorrt.tensorrt.Builder' has no attribute 'build_cuda_engine'</code>, despite the fact that it's supposed to exist.</p> <p><a href="https://i.sstatic.net/hNbIS.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/hNbIS.png" alt="enter image description here" /></a></p> <p>Am I missing out on some important installation, or am I just using some deprecated version?</p>
<python><tensorrt>
2023-10-16 16:16:24
2
449
CraZyCoDer
77,303,405
1,852,526
Pandas dataframe remove square brackets from list string when writing to xlsx
<p>The location data is a list of strings that I am getting. When I use Pandas to write to xlsx, it always puts square brackets around this one column.</p> <p>This is what I have been trying:</p> <pre><code>def create_excel_with_format(headers,values,full_file_name_with_path): #Write to CSV in xlsx format with indentation. df = pd.DataFrame(data=values,columns=headers) #df = df.set_axis(df.index*2 + 1).reindex(range(len(df)*2)) #Create a blank row after every row. with pd.ExcelWriter(full_file_name_with_path) as writer: df.to_excel(writer, index=False) workbook = writer.book worksheet = writer.sheets['Sheet1'] #Write the location each comma separated in new line if exists (For Nuget exists and thirdparty no). if 'Location' in df: location = df[&quot;Location&quot;].str.join(&quot;\n&quot;) df[&quot;Location&quot;] = location.str.replace('[', repl = '', regex = False).str.replace(']', repl = '', regex = False) twrap = workbook.add_format({&quot;text_wrap&quot;: True}) idx_location = df.columns.get_loc(&quot;Location&quot;) worksheet.set_column(idx_location, idx_location, 60, twrap) header_format = workbook.add_format({ 'bold': True, 'border': False, 'text_wrap': False, 'font_size':13}) for col_num, value in enumerate(df.columns.values): worksheet.write(0, col_num, value, header_format) #pd.read_csv(full_file_name_with_path).iloc[:, 1:].apply(lambda x: x.replace(r&quot;[\[\]]&quot;,&quot;&quot;,regex=True)).to_csv(full_file_name_with_path) </code></pre> <p>When I open the Excel file, it's always like this</p> <pre><code> Location ['Pico\\Medman\\FluidTransferTests\\packages.config'] ['EchoNET\\ExternalData\\PadsServiceConsole\\PadsServiceConsole.csproj', 'EchoNET\\ExternalData\\SerialNumberDataBase\\SerialNumberDataBase.csproj', 'EchoNET\\UI\\E2C\\E2CApp\\E2CApp.csproj', 'Fixtures\\PADS\\PADSClient\\PADSClient.csproj'] </code></pre> <p>How can I remove the square brackets?</p>
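The brackets suggest the `Location` cells still hold Python lists (or their `str()` form) when `to_excel` runs. If they are real lists, joining each one with newlines before writing removes the brackets without any string surgery; if they are strings that merely *look* like lists (e.g. read back from CSV), `ast.literal_eval` would be needed first. A sketch of the list case (pandas assumed; paths shortened):

```python
import pandas as pd

def flatten_lists(col):
    # Join list cells with newlines; leave non-list cells untouched.
    return col.apply(lambda v: "\n".join(v) if isinstance(v, list) else v)

df = pd.DataFrame({"Location": [
    ["Pico\\Medman\\packages.config"],
    ["EchoNET\\a.csproj", "Fixtures\\b.csproj"],
]})
df["Location"] = flatten_lists(df["Location"])
print(df["Location"].tolist())
# ['Pico\\Medman\\packages.config', 'EchoNET\\a.csproj\nFixtures\\b.csproj']
```

Doing this conversion before `df.to_excel(...)`, together with the existing `text_wrap` format, should give one path per line in the cell.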
<python><pandas><dataframe><xlsx>
2023-10-16 15:57:38
1
1,774
nikhil
77,303,336
17,530,552
Sorting a deeply nested dictionary from lowest to highest values
<p>I created an empty dictionary labeled <code>dict_dif = {}</code> into which I loaded data via a for loop. The resulting dictionary has the following nested structure:</p> <pre><code>dict_dif[var][roi][subject][numeric_value] </code></pre> <p>This means that for each var, roi, and subject, there is a specific numeric value attached. For example, the code <code>print(dict_dif[&quot;acw&quot;][&quot;V1&quot;][&quot;10600&quot;])</code> would print <code>0.44242</code>.</p> <p>In a previous question (<a href="https://stackoverflow.com/questions/77302486/sort-values-in-a-nested-dictionary-from-min-to-max">Sort values in a nested dictionary from min. to max</a>) I asked how it is possible to sort data from min. to max. values in a dictionary where the key <code>subject</code> was missing, and instead each key <code>roi</code> contained <em>a list of values</em> (instead of a single value assigned to each <code>[subject]</code>).</p> <p><strong>Problem:</strong> My problem is that I have to remember how each <code>subject</code> is connected to each <code>numeric_value</code> <em>after</em> sorting the dictionary, and this is impossible with my previous attempt. In other words, after sorting the values in my previous attempt (see link above), I no longer know which of the sorted values corresponds to which subject.</p> <p><strong>Idea:</strong> To solve this problem, my idea is as follows. First, I create a distinct key in the dictionary <code>dict_dif</code> for each subject as shown above. Each subject key then gets the corresponding numeric value. Second, I sort this dictionary again from min. to max.
numeric values, while the subject information remains.</p> <p><strong>Question:</strong> Is it principally possible to sort this nested dictionary from the lowest to the highest numeric values, since each of the nested key structures only has <em>one value</em> (so that Python probably cannot understand how to sort the dictionary, because all my attempts so far have failed)?</p> <p>Besides, I wonder if there is a better approach (than my attempt described above) to solve this problem.</p> <p>Here is what my current dictionary looks like:</p> <pre><code>{'acw': {'V1': {'100610': -0.14749908116389868, '102311': -0.0024704496902879514, '104416': 0.08791335022154448, '105923': -0.14556719812398553, ... }}} </code></pre> <p>I would like to order the dictionary from lowest to highest values as follows:</p> <pre><code>{'acw': {'V1': {'100610': -0.14749908116389868, '105923': -0.14556719812398553, '102311': -0.0024704496902879514, '104416': 0.08791335022154448, ... }}} </code></pre> <p>I shortened the dictionary, indicated by the three dots <code>...</code>, because the full dictionary contains far too many keys.</p>
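Yes, this is possible: since Python 3.7, dicts preserve insertion order, so rebuilding each innermost dict from its items sorted by value keeps every subject attached to its number. A sketch over the structure shown:

```python
dict_dif = {"acw": {"V1": {
    "100610": -0.14749908116389868,
    "102311": -0.0024704496902879514,
    "104416": 0.08791335022154448,
    "105923": -0.14556719812398553,
}}}

# Rebuild every var -> roi -> {subject: value} dict with its items
# sorted by value; insertion order is preserved, so the pairing survives.
sorted_dif = {
    var: {
        roi: dict(sorted(subjects.items(), key=lambda kv: kv[1]))
        for roi, subjects in rois.items()
    }
    for var, rois in dict_dif.items()
}

print(list(sorted_dif["acw"]["V1"]))
# ['100610', '105923', '102311', '104416']
```

This sidesteps the list-based approach entirely: no separate bookkeeping of which value belonged to which subject is needed.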
<python><dictionary><sorting>
2023-10-16 15:48:00
0
415
Philipp
77,303,323
2,112,406
Can I overload init from C++ side in pybind11?
<p>If in C++ a class had an overloaded constructor:</p> <pre><code>class MyClass { std::string name; public: MyClass(){}; MyClass(std::string name_){ name = name_;} }; </code></pre> <p>how do I expose this to python? Can I use <code>py::overload_cast</code> and <code>py::init</code> together?</p> <p>Currently I expose the first one:</p> <pre><code>PYBIND11_MODULE(ex_module, module_handle){ module_handle.doc() = &quot;ex class&quot;; py::class_&lt;MyClass&gt;(module_handle, &quot;PyEx&quot;) .def(py::init&lt;&gt;()) ; } </code></pre> <p>(Note that I'm using C++20 and python &gt;=3.10)</p> <p>I want to be able to do:</p> <pre><code>new_obj = PyEx() new_obj = PyEx(&quot;blah&quot;) </code></pre> <p>Currently, <code>PyEx(&quot;blah&quot;)</code> isn't throwing any errors, but it's also not really passing <code>blah</code> into the C++ constructor. I check this by adding a print statement in the second constructor.</p> <p><strong>Edit:</strong> simplified the question.</p>
<python><c++><pybind11><constructor-overloading>
2023-10-16 15:45:11
1
3,203
sodiumnitrate
77,303,295
160,245
count_documents gives error: AttributeError: 'dict' object has no attribute '_txn_read_preference'
<p>I wrote a function to check if a row exists, then insert if it is not there, or update if it is there. So I'm trying to use <code>count_documents</code>. I'm using a field called <code>filetype</code> as my unique key.</p> <p>A sample document looks like this:</p> <pre class="lang-json prettyprint-override"><code>{ &quot;_id&quot;: ObjectId(&quot;652d47a64732d257ee31e846&quot;), &quot;filetype&quot;: &quot;USTreasuryBillRates&quot;, &quot;last_date_processed&quot;: &quot;2023-10-16T00:00:00.000Z&quot;, &quot;update_date&quot;: &quot;2023-10-16T10:13:45.000Z&quot; } </code></pre> <p>Python function:</p> <pre><code>def upsert_load_history(arg_file_type, arg_date, arg_db_collection): # insert if not there, otherwise update in place print(&quot;===== upsert_load_history ======&quot;) json_query = {&quot;filetype&quot;: arg_file_type} row_count = arg_db_collection.count_documents(json_query, {'limit': 1}) if row_count == 0: insert_load_history_new_rows(arg_file_type, arg_date, arg_db_collection) else: update_load_history_for_file_type(arg_file_type, arg_date, arg_db_collection) </code></pre> <p>Error:</p> <pre class="lang-none prettyprint-override"><code>===== upsert_load_history ====== Traceback (most recent call last): File &quot;C:\GitHub\ClientName\Apps\LambdaUSTreasury\SetupLoadHistoryDB.py&quot;, line 134, in &lt;module&gt; upsert_load_history(file_type, file_date, db_collection_filehistory) File &quot;C:\GitHub\ClientName\Apps\LambdaUSTreasury\SetupLoadHistoryDB.py&quot;, line 61, in upsert_load_history row_count = arg_db_collection.count_documents(json_query, {'limit': 1}) File &quot;C:\GitHub\ClientName\Apps\LambdaUSTreasury\pymongo\collection.py&quot;, line 1786, in count_documents _cmd, self._read_preference_for(session), session) File &quot;C:\GitHub\ClientName\Apps\LambdaUSTreasury\pymongo\common.py&quot;, line 870, in _read_preference_for return session._txn_read_preference() or self.__read_preference AttributeError: 'dict' object has no attribute 
'_txn_read_preference' </code></pre>
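In pymongo, the second positional parameter of `count_documents` is `session` (options such as `limit` are keyword arguments), so the options dict `{'limit': 1}` is being bound to `session`, which explains the `_txn_read_preference` error. The fix would be `arg_db_collection.count_documents(json_query, limit=1)`. A standard-library stand-in showing the mechanics (the signature below is a simplified imitation, not pymongo's real code):

```python
def count_documents(filter, session=None, **kwargs):
    # Simplified stand-in for pymongo's signature: options like `limit`
    # are keyword arguments, not a positional options dict.
    if session is not None and not hasattr(session, "_txn_read_preference"):
        raise AttributeError(
            "'%s' object has no attribute '_txn_read_preference'"
            % type(session).__name__
        )
    return kwargs.get("limit", 0)

# Wrong: the options dict lands in the `session` slot.
try:
    count_documents({"filetype": "x"}, {"limit": 1})
except AttributeError as exc:
    print(exc)  # 'dict' object has no attribute '_txn_read_preference'

# Right: pass limit as a keyword argument.
print(count_documents({"filetype": "x"}, limit=1))  # 1
```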
<python><mongodb><pymongo>
2023-10-16 15:41:01
1
18,467
NealWalters
77,303,260
15,405,732
How can I import a local module using Databricks asset bundles?
<p>I want to do something pretty simple here: import a module from the local filesystem using databricks asset bundles. These are the relevant files:</p> <p><strong>databricks.yml</strong></p> <pre class="lang-yaml prettyprint-override"><code>bundle: name: my_bundle workspace: host: XXX targets: dev: mode: development default: true resources: jobs: my_job: name: my_job tasks: - task_key: my_task existing_cluster_id: YYY spark_python_task: python_file: src/jobs/bronze/my_script.py </code></pre> <p><strong>my_script.py</strong></p> <pre class="lang-py prettyprint-override"><code>from src.jobs.common import * if __name__ == &quot;__main__&quot;: hello_world() </code></pre> <p><strong>common.py</strong></p> <pre class="lang-py prettyprint-override"><code>def hello_world(): print(&quot;hello_world&quot;) </code></pre> <p>And the following folder structure:</p> <pre><code>databricks.yml src/ ├── __init__.py └── jobs ├── __init__.py ├── bronze │   └── my_script.py └── common.py </code></pre> <p>I'm deploying this to my workspace + running it by using Databricks CLI v0.206.0 with the following commands:</p> <pre><code>databricks bundle validate databricks bundle deploy databricks bundle run my_job </code></pre> <p>I'm having trouble importing my <code>common.py</code> module. I'm getting the classic <code>ModuleNotFoundError: No module named 'src'</code> error here.</p> <p>I've added the <code>__init__.py</code> files as I typically do when doing this locally, and tried the following variations:</p> <pre><code>from src.jobs.common import * from jobs.common import * from common import * from ..common import * </code></pre> <p>I guess my issue is that I don't really know what the python path is here, since I'm deploying it on Databricks. How can I do something like this using databricks asset bundles?</p>
<python><pyspark><databricks><azure-databricks><databricks-asset-bundle>
2023-10-16 15:36:00
4
6,101
Koedlt
77,303,204
4,288,043
differences between a Pandas dataframe and Geopandas dataframe
<p>I have used Pandas a lot but recently completed a project with Geopandas for the first time. There was a dataframe of POIs which had various data columns, including a Latitude and a Longitude column, and that was converted into a Geopandas dataframe which now included a 'projection' applied to the whole dataframe and also a 'geometry' column with <code>POINTS</code> objects, each of which contained a pair of coordinates, no doubt a projection of the latitude and longitude.</p> <p>Can this geopandas dataframe now be <em>safely</em> recycled for general pandas use, adding additional columns and doing operations with it which have nothing more to do with geographic matters, or will that run into problems?</p> <p>What are otherwise the differences between the Pandas dataframe and Geopandas dataframe?</p> <p><code>&lt;class 'pandas.core.frame.DataFrame'&gt;</code> vs. <code>&lt;class 'geopandas.geodataframe.GeoDataFrame'&gt;</code></p>
<python><pandas><dataframe><geopandas>
2023-10-16 15:26:40
1
7,511
cardamom
77,303,203
13,438,431
How to make Telegram deep-linking work for messages?
<p>I'm developing a bot which creates games in Telegram.</p> <p>When a user issues a <code>/start_game</code> command, the bot replies to the user's command with a message which includes a link to the created game.</p> <p>Now when the same user issues the <code>/start_game</code> command again, the bot responds with something along the lines of:</p> <blockquote> <p>Would you like to delete your <strong>currently active game</strong> and replace it with a new one?</p> </blockquote> <p>The <code>currently active game</code> text should link to the previously created game's message via Telegram Deep Linking.</p> <p>The way I do this currently is by using this <a href="https://core.telegram.org/api/links#message-links" rel="nofollow noreferrer">deep link template from the docs</a>:</p> <pre><code>f'&lt;a href=&quot;tg://privatepost?channel={chat_id}&amp;post={message_id}&amp;single&quot;&gt;{text}&lt;/a&gt;' </code></pre> <p>Which worked fine under <code>Telethon</code>, but now that I switched over to <code>Python-Telegram-Bot</code>, it does not.</p> <p>When clicked, such a link says &quot;Unfortunately you can't access this message.
You aren't a member of the chat where it was posted.&quot;.</p> <p>Well, I thought that the problem was because Telegram Bot API prefixes groups' ids with <code>-</code> and channels' ids with <code>-100</code>, so I got rid of those.</p> <p>And now it says: &quot;Message doesn't exist&quot;.</p> <p>Actual code:</p> <pre><code>def message_mention(text: str, chat_id: int, message_id: int, escape: EscapeType | None = 'html'): text = escape_string(text, escape) if chat_id is None or message_id is None: return text chat_id_string = str(chat_id) if chat_id_string.startswith('-100'): chat_id_string = chat_id_string[4:] elif chat_id_string.startswith('-'): chat_id_string = chat_id_string[1:] return f'&lt;a href=&quot;tg://privatepost?channel={chat_id_string}&amp;post={message_id}&amp;single&quot;&gt;{text}&lt;/a&gt;' </code></pre> <p>What am I missing here?</p> <p>The <code>message_id</code> <strong>must be</strong> correct, because later in the same function if the user clicked &quot;<em>Yes, delete the old game</em>&quot;, it deletes the mentioned message fine and dandy.</p> <p><strong>UPD:</strong></p> <p>Minimal reproducible example, which attempts to link to a command just issued by the user.</p> <pre><code>async def main(): from telegram import Update from telegram.ext import ContextTypes from telegram.ext import CommandHandler from telegram.ext import Application, Defaults app = ( Application .builder() .defaults( Defaults( parse_mode=&quot;html&quot; ) ) .token(BOT_TOKEN) .build() ) def message_mention_lonami(text: str, chat_id: int, message_id: int): chat_id_string = str(chat_id) if chat_id_string.startswith('-100'): chat_id_string = chat_id_string[4:] elif chat_id_string.startswith('-'): chat_id_string = chat_id_string[1:] return f'&lt;a href=&quot;https://t.me/c/{chat_id_string}/{message_id}&quot;&gt;{text}&lt;/a&gt;' def message_mention(text: str, chat_id: int, message_id: int): chat_id_string = str(chat_id) if chat_id_string.startswith('-100'): chat_id_string 
= chat_id_string[4:] elif chat_id_string.startswith('-'): chat_id_string = chat_id_string[1:] return f'&lt;a href=&quot;tg://privatepost?channel={chat_id_string}&amp;post={message_id}&amp;single&quot;&gt;{text}&lt;/a&gt;' async def handler(update: Update, ctx: ContextTypes.DEFAULT_TYPE): chat_id = update.effective_chat.id message_id = update.effective_message.id mention1 = message_mention_lonami(&quot;Solution by lonami&quot;, chat_id=chat_id, message_id=message_id) mention2 = message_mention(&quot;Default solution&quot;, chat_id=chat_id, message_id=message_id) await ctx.bot.send_message( chat_id, &quot;\n&quot;.join([ &quot;is this your message?&quot;, mention1, mention2, ]), ) app.add_handler(CommandHandler(&quot;test_mention&quot;, handler)) async with app: try: await app.start() await app.updater.start_polling() await asyncio.Future() finally: await app.updater.stop() await app.stop() if __name__ == '__main__': import asyncio asyncio.new_event_loop().run_until_complete(main()) </code></pre> <p>Results in the client app:</p> <p><a href="https://i.sstatic.net/MX3Fe.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/MX3Fe.png" alt="Message does not exist alert" /></a> <a href="https://i.sstatic.net/xkqLy.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/xkqLy.png" alt="Actual link of the both solutions" /></a></p>
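As a side note, the prefix-stripping logic duplicated in both helpers can be factored out and tested on its own. A minimal sketch (pure string handling, no bot framework; note that `t.me/c/...` links are only defined for supergroups/channels, so whether any per-message link format can work at all for a basic `-`-prefixed group is part of what needs verifying):

```python
def internal_chat_id(chat_id: int) -> str:
    """Strip the Bot API marker from a chat id.

    Bot API ids: channels/supergroups are prefixed with -100,
    basic groups with a plain minus sign.
    """
    s = str(chat_id)
    if s.startswith("-100"):
        return s[4:]
    if s.startswith("-"):
        return s[1:]
    return s  # positive ids (private chats) pass through unchanged


def message_link(chat_id: int, message_id: int) -> str:
    # t.me/c/<internal_id>/<message_id> -- only resolves for
    # supergroups/channels the viewer is a member of.
    return f"https://t.me/c/{internal_chat_id(chat_id)}/{message_id}"
```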
<python><telegram><telegram-bot><python-telegram-bot><telethon>
2023-10-16 15:26:33
1
2,104
winwin
77,303,136
3,251,425
Python UrlLib3 - Can't download file due to SSL Error even when ssl verification is disabled
<p>I am unable to download a file using this piece of code:</p> <pre class="lang-py prettyprint-override"><code>import requests response = requests.get('https://download.inep.gov.br/informacoes_estatisticas/indicadores_educacionais/taxa_transicao/tx_transicao_municipios_2019_2020.zip', stream=True, verify=False) with open('tx_transicao_municipios_2019_2020.zip', 'wb') as f: for chunk in response.iter_content(chunk_size=1024): if chunk: f.write(chunk) </code></pre> <p>I keep getting this error even when <strong>verify=False</strong> is set:</p> <blockquote> <p>urllib3.exceptions.SSLError: [SSL: UNEXPECTED_EOF_WHILE_READING] EOF occurred in violation of protocol (_ssl.c:1006)</p> </blockquote> <p>When using Chrome, I am able to download the file.</p> <p>Using <strong>verify=certifi.where()</strong> doesn't work either.</p> <h2>Environment</h2> <ul> <li>Windows 10 Enterprise 22H2 (19045.3448);</li> <li>Python v3.11.5;</li> <li>OpenSSL v3.0.9;</li> <li>Urllib3 v2.0.6;</li> <li>Requests v2.31.0;</li> <li>Certifi v2023.7.22;</li> </ul> <p>Also tried on macOS Catalina (10.15) and macOS Big Sur (11.x) with no success.</p> <p>What am I doing wrong here?</p>
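For what it's worth, `verify=False` only disables certificate validation; it does not relax the TLS handshake itself, and the OpenSSL 3 that Python 3.11 links against is stricter about servers that misbehave at the protocol level (e.g. closing without a close_notify). A hedged sketch of a more permissive `SSLContext` — the option names are real stdlib attributes, but whether this particular server needs them is an assumption:

```python
import ssl

ctx = ssl.create_default_context()
# Equivalent of verify=False: skip certificate checks.
ctx.check_hostname = False
ctx.verify_mode = ssl.CERT_NONE
# OpenSSL 3 turns a missing close_notify into UNEXPECTED_EOF_WHILE_READING;
# this flag restores the old tolerant behaviour (absent on older builds,
# hence the getattr fallback).
ctx.options |= getattr(ssl, "OP_IGNORE_UNEXPECTED_EOF", 0)
try:
    # Some legacy servers additionally need the security level lowered.
    ctx.set_ciphers("DEFAULT@SECLEVEL=1")
except ssl.SSLError:
    pass  # cipher-string syntax not accepted by this OpenSSL build

# Stdlib usage (network call left commented out on purpose):
# import urllib.request
# url = "https://download.inep.gov.br/.../tx_transicao_municipios_2019_2020.zip"
# data = urllib.request.urlopen(url, context=ctx).read()
```

To keep using `requests`, the same context can be handed to a custom `HTTPAdapter` whose `init_poolmanager` override passes `ssl_context=ctx` through to urllib3 (urllib3 v2 accepts it).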
<python><python-requests><openssl><urllib3><certifi>
2023-10-16 15:15:29
2
1,312
Jackson Mourão
77,302,958
10,430,394
Updating height_ratios of gridspec multiplot after initial creation
<p>I am trying to change the height ratios of my stacked triple plot after it has been created, using the <code>.set_height_ratios()</code> method of a <code>GridSpec</code> object. The reason I need this is that I have a method in my class which multiplies data by a certain factor to make it &quot;more visible&quot; in a certain range (start, end, factor). Once this multiplication has been performed, the height ratios of my plot need to update so that the top axes, which contains the experimental and calculated data, has the same scale as the third axes, which contains the difference between the experimental and calculated data. This is an example of what I am creating right now:</p> <p><a href="https://i.sstatic.net/uOOBG.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/uOOBG.png" alt="Example plot with cut off difference peak (green)" /></a></p> <p>As you can see, if the difference is multiplied by the factor and it maintains the same scale as ax1, which contains the experimental/calculated data, the marked peak will be cut off, since the current height ratio can no longer accommodate the full peak if the scales of ax1 and ax3 are identical (meaning, the same number of pixels per axis unit).</p> <p>So what I tried to do is to use the <code>set_height_ratios()</code> method to increase the height ratio of ax3 (bottom) if a multiplication has been performed, so that it can still fully accommodate the cut-off peak.
But I have found out that this only works if I use <code>constrained_layout=True</code> and <code>plt.tight_layout()</code>.</p> <p>GitHub Issue: <a href="https://github.com/matplotlib/matplotlib/issues/8434" rel="nofollow noreferrer">https://github.com/matplotlib/matplotlib/issues/8434</a></p> <p>I cannot use those options, because they completely break my multiplot and don't function with <code>plt.subplots_adjust(hspace=0)</code>.</p> <p>Here's some example code that shows how updating the height ratios only works with <code>constrained_layout</code> and <code>plt.tight_layout()</code>:</p> <pre class="lang-py prettyprint-override"><code>import matplotlib.pyplot as plt import matplotlib.gridspec as gridspec fig = plt.figure(figsize=(6, 4), constrained_layout=False) # set this to true gs = gridspec.GridSpec(3, 1, height_ratios=[1,1,1]) ax1 = plt.subplot(gs[0]) ax2 = plt.subplot(gs[1]) ax3 = plt.subplot(gs[2]) ax1.plot([1, 2, 3], [3, 2, 1], label='Subplot 1') ax2.plot([1, 2, 3], [1, 2, 3], label='Subplot 2') ax3.plot([1, 2, 3], [2, 1, 2], label='Subplot 3') print(&quot;Original height ratios:&quot;, gs.get_height_ratios()) new_height_ratios = [5, 1, 1] gs.set_height_ratios(new_height_ratios) print(&quot;Updated height ratios:&quot;, gs.get_height_ratios()) # plt.tight_layout() # uncomment this plt.show() </code></pre> <p>My actual code is a bit long and so I didn't want to put it all in here, but I have a GitHub repository that contains all the relevant files <a href="https://github.com/p3rAsperaAdAstra/Pawley-Fit-Plot" rel="nofollow noreferrer">here</a> if you're interested in the whole thing.</p> <p>My question is: Can I get this to work without <code>constrained_layout</code> and <code>plt.tight_layout</code>?</p> <p>EDIT: I have managed to get it working by just doing all the multiplications first, before creating the figure/GridSpec object. But I would still like to know whether it is possible to update the height ratios without using the aforementioned options.</p>
<python><matplotlib><plot>
2023-10-16 14:46:28
0
534
J.Doe
77,302,772
192,801
Cannot access task instance to get xcom
<p>I am new to using xcoms in Airflow.</p> <p>Here's what my code looks like:</p> <pre class="lang-py prettyprint-override"><code>with DAG(default_args=default_args, max_active_runs=1, dag_id=&quot;my_dag&quot;, catchup=False, start_date=start_date, render_template_as_native_obj=True, schedule_interval=None) as my_dag: init_task = GKEStartPodOperator( task_id=&quot;init_task&quot;, name=&quot;init_task&quot;, retries=0, do_xcom_push=True, project_id=PROJECT_ID, location=REGION, cluster_name=CLUSTER, image=image, arguments=[], in_cluster=False, is_delete_operator_pod=True, startup_timeout_seconds=300, namespace=NAMESPACE, service_account_name=SA, ) task_1 = PythonOperator( task_id=&quot;task_1&quot;, do_xcom_push=False, op_kwargs={ 'connector_key':&quot;e_conns&quot;, }, python_callable=lambda: do_task(), ) task_2 = PythonOperator( task_id=&quot;task_2&quot;, do_xcom_push=False, op_kwargs={ 'connector_key':&quot;f_conns&quot;, }, python_callable=lambda: do_task(), ) init_task &gt;&gt; task_1 &gt;&gt; task_2 def do_task(**kwargs) -&gt; int: ti = kwargs['ti'] connectors = ti.xcom_pull(task_ids='init_task')[kwargs['connector_key']] #remainder removed (not relevant) </code></pre> <p>I can see that the xcom push from <code>init_task</code> is successful, but the <code>do_task</code> method is unable to pull xcom values. I get an error <code>KeyError: 'ti'</code> on <code>ti = kwargs['ti']</code>.</p> <p>I've seen numerous ways to handle xcoms and I'm a little confused. I thought that Airflow would inject the task instance for the key <code>ti</code> into <code>kwargs</code>, but it doesn't seem like the task instance is ever getting to the method for the PythonOperator.</p> <p>Is there a way to get the task instance into the method?</p> <p>(Airflow is version 2.4.3 on GCP Composer)</p>
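The likely culprit is `python_callable=lambda: do_task()`: Airflow 2 inspects the callable's signature and only forwards the context (and `op_kwargs`) that the callable can accept. A zero-argument lambda accepts nothing, so `do_task` ends up being called with empty `kwargs`. A framework-free sketch of the mechanism (the operator internals are simplified here):

```python
def do_task(**kwargs):
    ti = kwargs["ti"]  # raises KeyError when nothing was forwarded
    return ti, kwargs["connector_key"]

# What the DAG currently registers: a wrapper that accepts no arguments.
wrapped = lambda: do_task()

# The operator cannot pass anything into a zero-arg callable, so the
# inner call runs with empty kwargs -- reproducing the observed error:
try:
    wrapped()
    failure = None
except KeyError as exc:
    failure = str(exc)

# Registering do_task itself (python_callable=do_task) lets the operator
# forward op_kwargs plus context keys such as 'ti' into **kwargs:
result = do_task(ti="fake-task-instance", connector_key="e_conns")
```

In the DAG that means `python_callable=do_task` (no lambda), with `do_task` defined before the operators that reference it.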
<python><airflow><google-cloud-composer><airflow-xcom>
2023-10-16 14:21:55
4
27,696
FrustratedWithFormsDesigner
77,302,592
458,742
Django settings.py can't import my local module
<p>My Django tree looks like this</p> <pre><code>git/src ├── myproject │   ├── settings.py │   ├── mysharedlib.py │   └── urls.py └── www ├── myotherlib.py ├── urls.py └── views.py </code></pre> <p>It works except that in <code>settings.py</code> I have</p> <pre><code>import mysharedlib </code></pre> <p>But this raises an exception</p> <pre><code>ModuleNotFoundError: No module named 'mysharedlib' </code></pre> <p>Importing <code>myotherlib</code> from <code>views.py</code> raises a similar error.</p> <p>Why isn't this working?</p>
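Python resolves `import mysharedlib` against the directories on `sys.path`, not relative to the file doing the importing. Assuming `manage.py` lives in `git/src`, that directory (not `myproject/`) is what ends up on the path, so the module has to be imported through its package. A self-contained sketch that rebuilds the layout in a temp dir:

```python
import importlib
import os
import sys
import tempfile

# Recreate git/src/myproject/mysharedlib.py in a scratch directory.
src = tempfile.mkdtemp()               # stands in for git/src
pkg = os.path.join(src, "myproject")
os.makedirs(pkg)
open(os.path.join(pkg, "__init__.py"), "w").close()
with open(os.path.join(pkg, "mysharedlib.py"), "w") as f:
    f.write("VALUE = 42\n")

sys.path.insert(0, src)                # roughly what running manage.py does

# A bare import fails: myproject/ itself is not a sys.path entry.
try:
    importlib.import_module("mysharedlib")
    bare_import_ok = True
except ModuleNotFoundError:
    bare_import_ok = False

# Importing through the package works.
mod = importlib.import_module("myproject.mysharedlib")
```

So in `settings.py`, `from myproject import mysharedlib` (or a relative `from . import mysharedlib`) should resolve; likewise `from www import myotherlib` from `views.py`.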
<python><django><python-import>
2023-10-16 13:57:30
2
33,709
spraff
77,302,486
17,530,552
Sort values in a nested dictionary from min. to max
<p>I created a dictionary named <code>dict_dif</code> and loaded nested data into it. The dictionary has the following nested structure:</p> <ol> <li>Variables (four different variables)</li> <li>Regions (eleven different regions for each of the four variables)</li> <li>Data points (60 data points per region).</li> </ol> <p>The dictionary looks as follows (where I shorten the variables, regions and data points to provide a small and better overview):</p> <pre><code>'variable_one': {'V1': [0.2883961843782473, -0.2564336831277686, -0.1502803806437477, -0.3404171090176428, 'V2': [0.3053319292730037, -0.12252840653652636, -0.18759593722968465, -0.04368054841152297, 'variable_two': {'V1': [0.05129822246425414, ... </code></pre> <p><strong>Aim:</strong> I would like to sort the numeric values from the lowest to the highest for each variable and region, respectively. For example, in the sorted dictionary the numeric values should span from the lowest to the highest for region V1, for region V2, and so on, individually for each variable.</p> <p>I tried the following code (among others) that I found on the internet:</p> <pre><code>sorted_dict = {var : dict(sorted(roi.items(), key = lambda ele: ele[1])) for var, roi in dict_dif.items()} </code></pre> <p>However, that code seems to sort the regions based on each region’s first value. Is there a way to adjust this code (or a different one) so that the order of the variables and regions remains the same, while only sorting the regions’ values?</p>
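`sorted(roi.items(), ...)` reorders the regions themselves; to sort only the innermost lists while keeping variable and region order, the `sorted()` call has to target the value lists inside a nested comprehension. A sketch on shortened dummy data:

```python
dict_dif = {
    "variable_one": {"V1": [0.29, -0.26, -0.15], "V2": [0.31, -0.12]},
    "variable_two": {"V1": [0.05, -0.40, 0.10]},
}

# Dicts preserve insertion order, so variables and regions stay put;
# only each region's list of data points is sorted low-to-high.
sorted_dict = {
    var: {region: sorted(values) for region, values in regions.items()}
    for var, regions in dict_dif.items()
}
```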
<python><dictionary><sorting>
2023-10-16 13:41:47
1
415
Philipp
77,302,438
13,866,965
Python function for plotting distributions works with one DataFrame but not another - no output or error
<p>I have the following python function to make graphs.</p> <pre><code> def plot_distributions(dataframe, title=None): # Set the style of seaborn sns.set(style=&quot;whitegrid&quot;) # Plotting distributions for each column plt.figure(figsize=(15, 8)) # Loop through each column and plot distribution for i, column in enumerate(dataframe.columns): plt.subplot(2, 3, i + 1) sns.histplot(dataframe[column], kde=True) # Add mean and median lines mean_value = dataframe[column].mean() median_value = dataframe[column].median() plt.axvline(mean_value, color='red', linestyle='dashed', linewidth=2, label=f'Mean: {mean_value:.2f}') plt.axvline(median_value, color='green', linestyle='dashed', linewidth=2, label=f'Median: {median_value:.2f}') plt.title(column) plt.legend() # Set the title of the overall plot if title: plt.suptitle(title, y=1.02, size=16) plt.tight_layout() plt.show() </code></pre> <p>Then I have two dataframes, with the exact same six features and lenght but with different data.</p> <pre><code>plot_distributions(jaro_df) </code></pre> <pre><code>plot_distributions(transformer_df) </code></pre> <p>The very weird thing is that using <code>jaro_df</code> works perfectly while <code>transformer_df</code> the script it is just loading and does not output anything, not even an error. How is that possible? I tried both on google colab, jupyter notebook and vscode.</p>
<python><pandas><matplotlib><debugging><seaborn>
2023-10-16 13:35:33
1
451
arrabattapp man
77,302,270
1,627,466
Alternative to applying slicing of dataframe in pandas
<p>I am trying to rank authors using an intersection of two metrics (where averaging wouldn't make sense) for each observation.</p> <p>So far the only way I have managed to do it is to apply a slice of the dataframe per row and take the number of unique authors, but it's really slow.</p> <p>I was wondering if there is a faster solution to this problem of ranking with multiple criteria.</p> <pre><code>df = pd.DataFrame([[&quot;a&quot;, 2, 0], [&quot;b&quot;, 3, 0.15], [&quot;b&quot;, 3, 0.4], [&quot;c&quot;, 7, 0.49], [&quot;d&quot;, 3, 0.17]], columns=['author', 'metric1', 'metric2']) df['score'] = df.apply(lambda x: len(df[(df['metric2']&gt;= x['metric2']) &amp; (df['metric1']&gt;= x['metric1'])]['author'].unique()), axis=1) </code></pre>
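One way to avoid the per-row slicing is to build the full pairwise dominance matrix once with numpy broadcasting and then count distinct authors per row via a one-hot author matrix. Still O(n²) in memory, so it only helps up to moderate sizes, but it drops the Python-level loop. A sketch on the sample data:

```python
import numpy as np
import pandas as pd

df = pd.DataFrame(
    [["a", 2, 0], ["b", 3, 0.15], ["b", 3, 0.4], ["c", 7, 0.49], ["d", 3, 0.17]],
    columns=["author", "metric1", "metric2"],
)

m1 = df["metric1"].to_numpy()
m2 = df["metric2"].to_numpy()
codes, _ = pd.factorize(df["author"])          # author -> 0..k-1

# dominates[i, j] is True when row j is >= row i on both metrics.
dominates = (m1 >= m1[:, None]) & (m2 >= m2[:, None])

# One-hot (n, k) author matrix; the matmul counts, for each row i, how
# many dominating rows each author contributes. Nonzero columns are the
# distinct authors, matching len(...unique()) in the apply version.
onehot = np.eye(codes.max() + 1, dtype=np.int64)[codes]
df["score"] = np.count_nonzero(dominates.astype(np.int64) @ onehot, axis=1)
```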
<python><pandas><dataframe><apply>
2023-10-16 13:12:08
3
423
user1627466
77,302,216
3,057,377
Bokeh sticky crosshair
<p>In <a href="http://bokeh.org/" rel="nofollow noreferrer">Bokeh</a>, is there a tool similar to <a href="https://plotly.com/python/hover-text-and-formatting/#spike-lines" rel="nofollow noreferrer">spike lines in plotly</a> ? The crosshair tool is close to that but is it possible to make it stick to a plot?</p>
<python><bokeh>
2023-10-16 13:04:19
1
1,380
nico
77,302,128
15,893,581
pulp: Schedule equipment replacement, maximizing Total Cost (considering depreciation)
<p>Given <code>init_age=3</code> years, <code>init_cost=4</code> of equipment, and <code>new_eq_cost=18</code>, schedule the replacement time, given yearly data for <code>cost= np.array([31, 30, 28, 28, 27, 26, 26, 25, 24, 24, 23 ])</code> &amp; <code>depreciation= np.array([ 8, 9, 9, 10, 10, 10, 11, 12, 14, 16, 18])</code>.</p> <p>I tried to use <a href="https://coin-or.github.io/pulp/" rel="nofollow noreferrer">PuLP</a>, but it seems that either my objective or my constraints or both are not correct; besides, I cannot output the <a href="https://stackoverflow.com/questions/33134522/converting-conditional-constraints-to-linear-constraints-in-linear-programming?noredirect=1&amp;lq=1">binary</a> decision <code>y_variable</code> (<code>true</code> if replacement is needed):</p> <pre><code>import numpy as np from pulp import * ##M=100000 time_count= 12 tm= np.array([0,1,2,3,4,5,6,7,8,9,10,11]) cost_count= 12 cost= np.array([4,31, 30, 28, 28, 27, 26, 26, 25, 24, 24, 23 ]) depreciation= np.array([0, 8, 9, 9, 10, 10, 10, 11, 12, 14, 16, 18]) #depreciated_cost= np.array([cost[i]-depreciation[i] for i in range(time_count)]) ##print(&quot;depreciated_cost:&quot;, depreciated_cost) # subject to constraints init_age=3 # 3 yr init_cost= 4 new_cost= 18 # &lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt; # ... FROM 1st case ##replace_cost= init_cost - new_cost + cost[0] - depreciation[0] # if is negative - then should be replaced ##print(replace_cost) # Create the model model = LpProblem(name=&quot;equipment replacement schedule&quot;, sense=LpMinimize) # TODO: Maximize Cost # Initialize the DECISION VARIABLES x_variables = LpVariable.dicts(&quot;x_time_cost&quot;, [(t,j) for t in tm for j in range(cost_count)],lowBound=0, upBound=None, cat='Integer') y_variables = LpVariable.dicts(&quot;indicator_cons&quot;, [(j) for j in tm ], cat='Binary') # true - if replace needed # Add the CONSTRAINTS to the model ??? for t in tm: # IF SUM OK ?
model += (lpSum([x_variables[(t, j)] - depreciation[j] for j in range(cost_count)]) &lt;= new_cost, &quot;Constraint on Cost{} for replacement&quot;.format(t)) for j in range(cost_count): model += ( lpSum([x_variables[(t, j)] - depreciation[j] for t in tm]) &gt;= 1 * y_variables[j], &quot;One_assignment_constraint{}&quot;.format(j)) # Add the OBJECTIVE function to the model ??? model += lpSum(x_variables) # The problem data is written to an .lp file model.writeLP(&quot;Equipment_Replacement.lp&quot;) # Solve the problem status = None time_limit = 900 status = model.solve(PULP_CBC_CMD(msg=0, timeLimit=time_limit, threads=1)) print(&quot;-------------------------------------------------------&quot;) print(f&quot;status: {model.status}, {LpStatus[model.status]}&quot;) print(f&quot;objective: {model.objective.value()}&quot;) for var in model.variables(): if var.value()==9.0: print(f&quot;{var.name}: {var.value()}&quot;) for name, constraint in model.constraints.items(): print(f&quot;{name}: {constraint.value()}&quot;) </code></pre> <p>I included <code>init_cost</code> of the equipment as the first element of the array <code>cost</code> - was I right? And how do I output the binary conditional variables pointing to the year of replacement? Output of the maximized total cost is also needed.</p> <p>The solution should show:</p> <ol> <li>use the equipment with <code>init_age</code> 3 years for 2 years, then replace it with new &amp; use it for the 3rd, 4th, 5th, 6th year, again replace with new &amp; use it for the 7th, 8th, 9th, 10th year</li> <li>the maximized total cost should be 169 (the given correct answer) - but I cannot achieve it.</li> </ol> <p>I am trying to solve this scheduling problem for the first time &amp; do not fully understand it yet, even how to correctly write the mathematical model.
Though debugging <code>Constraint_on_Cost_for_replacement</code> seems to show the time points for replacement (when the accounting becomes negative), I still cannot show it with the binary <code>y_variable</code> (if it is even needed here?), and I still doubt that using <code>LpMinimize</code> is adequate if the task is to formulate a cost-maximization problem. How can I correct my model to reach the given correct answer?</p> <p>P.S. Or could <a href="https://gekko.readthedocs.io/en/latest/model_methods.html#logical-functions" rel="nofollow noreferrer">GEKKO Logical Functions</a> be easier to use for a task like mine?</p>
<python><optimization><pulp>
2023-10-16 12:52:09
1
645
JeeyCi
77,302,112
6,221,742
plpython3u anaconda environment
<p>I am trying to use an Anaconda environment inside my plpython3u function in Postgres 15. Here is what I tried so far:</p> <pre><code>DROP FUNCTION use_anaconda_env(); CREATE OR REPLACE FUNCTION use_anaconda_env() RETURNS text LANGUAGE plpython3u AS $$ import os # Set the PYTHONPATH environment variable to point to your Anaconda environment os.environ['PYTHONPATH'] = '/root/miniconda3/envs/py39' # Your Python code here # For example, check the Python version import sys python_version = sys.version # Return the Python version return python_version $$; SELECT * FROM use_anaconda_env(); </code></pre> <p>Output</p> <pre><code>&quot;3.11.2 (main, Mar 13 2023, 12:18:29) [GCC 12.2.0]&quot; </code></pre> <p>The function works, but the output is the version of the system Python, not the one in my Anaconda environment. How can I point plpython3u at my Anaconda environment?</p>
<python><anaconda><system-paths><postgres-plpython>
2023-10-16 12:50:37
0
339
AndCh
77,302,092
4,451,521
Hovering message with plotly
<p>I have a script that does a plot with plotly</p> <pre><code>import plotly.graph_objects as go import pandas as pd # Create a DataFrame with X, Y, and f values df= pd.DataFrame({'x': [1, 2, 3, 4, 5], 'y': [10, 12, 5, 8, 6], 'x2': [1, 3, 4, 5, 7],'y2':[2,5,3,1,8],'f':[1,2,3,4,5]}) hover_text1 = df.apply(lambda row: f'({row[&quot;x&quot;]},{row[&quot;y&quot;]}) at {row[&quot;f&quot;]}', axis=1) # Line/marker styling used by both traces linewidth = 2 markersize = 8 fig=go.Figure() fig.add_trace(go.Scatter(x=df.x, y= df.y, mode='lines+markers', text=hover_text1, name='data1', line=dict(color='orange',width=linewidth),marker=dict(symbol='circle', size=markersize),hoverinfo='text')) fig.add_trace(go.Scatter(x=df.x2, y= df.y2, mode='lines+markers', name='data2', line=dict(color='green',width=linewidth),marker=dict(symbol='circle', size=markersize))) # Customize the layout or add additional settings as needed fig.update_layout(title='Plotly Scatter Plot', xaxis_title='X-axis', yaxis_title='Y-axis', showlegend=True) # Show the plot fig.show() </code></pre> <p>Unfortunately I cannot capture an image of the hovering, but if you run it you will see two lines (one orange, the other green). When you hover over the orange one you get the &quot;(X,Y) at f&quot; message, which is fine.</p> <p>But when you hover over the green one you get the default &quot;(X,Y)&quot; and then, in inverted color, the name of the curve.
However, once you customize the hover message this name disappears (as in the orange one).</p> <p>My question is: how can I customize the hover message as in the orange one, but without losing the nice inverted-color label with the data name?</p> <p>EDIT (clarification): For clarity I changed the name of <code>hover_text2</code> to <code>hover_text1</code> (since it goes with <code>data1</code>).</p> <p>When hovering over <code>data2</code> I see (X,Y) and the name &quot;data2&quot; in inverted colors.</p> <p>What I want is to see:</p> <ul> <li>In <code>data1</code>: (X,Y) at f and then the nice &quot;data1&quot; name in inverted colors (as you see in data2)</li> </ul>
<python><plotly>
2023-10-16 12:48:25
2
10,576
KansaiRobot
77,302,055
4,551,325
Parsing xml response to pandas dataframe and dictionary
<p>My response is a well-formed .xml file with the following structure:</p> <pre><code>&lt;?xml version=&quot;1.0&quot; encoding=&quot;UTF-8&quot;?&gt; &lt;Response&gt; &lt;Record key=&quot;XXXXX&quot; req_sym=&quot;AAPL-US&quot;&gt; &lt;Fields&gt; &lt;Field id=&quot;7000&quot; name=&quot;HEADLINE&quot; value=&quot;THIS IS THE FIRST STORY&quot; /&gt; &lt;Field id=&quot;7001&quot; name=&quot;SOURCE&quot; value=&quot;EDG&quot; /&gt; &lt;Field id=&quot;7004&quot; name=&quot;STORY_DATE&quot; value=&quot;20231010&quot; /&gt; &lt;/Fields&gt; &lt;/Record&gt; &lt;Record key=&quot;YYYYY&quot; req_sym=&quot;AI-US&quot;&gt; &lt;Fields&gt; &lt;Field id=&quot;7000&quot; name=&quot;HEADLINE&quot; value=&quot;THIS IS THE SECOND STORY&quot; /&gt; &lt;Field id=&quot;7001&quot; name=&quot;SOURCE&quot; value=&quot;EDG&quot; /&gt; &lt;Field id=&quot;7004&quot; name=&quot;STORY_DATE&quot; value=&quot;20231010&quot; /&gt; &lt;/Fields&gt; &lt;/Record&gt; &lt;Record key=&quot;ZZZZZ&quot; req_sym=&quot;MSFT-US&quot;&gt; &lt;Fields&gt; &lt;Field id=&quot;7000&quot; name=&quot;HEADLINE&quot; value=&quot;THIS IS THE THIRD STORY&quot; /&gt; &lt;Field id=&quot;7001&quot; name=&quot;SOURCE&quot; value=&quot;EDG&quot; /&gt; &lt;Field id=&quot;7004&quot; name=&quot;STORY_DATE&quot; value=&quot;20231010&quot; /&gt; &lt;/Fields&gt; &lt;/Record&gt; &lt;Request&gt; &lt;RequestedReports&gt;'search'&lt;/RequestedReports&gt; &lt;SearchGUID&gt;'D207EF2FB41023D5'&lt;/SearchGUID&gt; &lt;RequestedStartDate&gt;'20231010'&lt;/RequestedStartDate&gt; &lt;RequestedEndDate&gt;'20231016'&lt;/RequestedEndDate&gt; &lt;/Request&gt; &lt;Error description=&quot;&quot; code=&quot;200&quot;/&gt; &lt;/Response&gt; </code></pre> <p>How do I parse this response into:</p> <ol> <li>a pandas dataframe with each &lt;Record&gt; as a row, with 'Field id' as index and ['HEADLINE', 'SOURCE', 'STORY_DATE'] as columns</li> <li>a dictionary to store the metadata inside the &lt;Request&gt; tag</li> </ol>
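With the stdlib's `xml.etree.ElementTree` this is a couple of loops: collect each `<Record>`'s attributes plus its `Field` name/value pairs into one dict per row, and turn the `<Request>` children into a metadata dict. A sketch on a trimmed copy of the response (the quote characters around the request values appear to be part of the payload, so they are stripped here):

```python
import xml.etree.ElementTree as ET

import pandas as pd

xml_text = """<Response>
  <Record key="XXXXX" req_sym="AAPL-US">
    <Fields>
      <Field id="7000" name="HEADLINE" value="THIS IS THE FIRST STORY"/>
      <Field id="7001" name="SOURCE" value="EDG"/>
      <Field id="7004" name="STORY_DATE" value="20231010"/>
    </Fields>
  </Record>
  <Request>
    <RequestedReports>'search'</RequestedReports>
    <SearchGUID>'D207EF2FB41023D5'</SearchGUID>
  </Request>
  <Error description="" code="200"/>
</Response>"""

root = ET.fromstring(xml_text)

# 1. One row per <Record>: record attributes + field name/value pairs.
rows = []
for record in root.iter("Record"):
    row = {"key": record.get("key"), "req_sym": record.get("req_sym")}
    for field in record.iter("Field"):
        row[field.get("name")] = field.get("value")
    rows.append(row)
df = pd.DataFrame(rows)

# 2. <Request> metadata as a plain dict: tag -> text.
meta = {child.tag: child.text.strip("'") for child in root.find("Request")}
```

On the full response this yields a 3-row frame; `df.set_index("key")` (or any other column) is then a one-liner, depending on which index is really wanted.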
<python><xml><dataframe>
2023-10-16 12:42:13
3
1,755
data-monkey
77,301,932
3,550,786
Efficient way to drop rows based on custom function
<p>I am processing a Pandas dataframe and I want to filter out the rows that don't meet a specific condition. The dataframe has a <code>content</code> column that contains examples of JavaScript code. The goal is to parse each snippet, remove its comments and only keep it if its length is above a certain threshold (e.g. above 200 characters after parsing).</p> <p>I have the following code:</p> <pre><code>def computeSnippetLength(self, row): try: snippet = row['content'] (... parsing code ...) snippetLength = len(parsedSnippet) except Exception as error: if self.debug: print(error) print(row['filename']) return False return True if snippetLength &gt; 200 else False </code></pre> <p>So now I want to remove the rows whose content doesn't meet the 200-character minimum. I have a relatively simple solution using <code>apply</code>:</p> <pre><code>def removeSmallFiles(self): self.df = self.df[self.df.apply(lambda row: self.computeSnippetLength(row), axis=1) == True] </code></pre> <p>However, I was looking for a more elegant and efficient solution, ideally without using <code>apply</code>, which pretty much creates a new Series and removes the rows based on its values. Is there a simpler way using <code>drop</code>, for example? I couldn't come up with a more efficient solution for this problem.</p>
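`df[mask]` with a boolean Series is already the idiomatic filter; the cost in the current version comes from running the parser row by row inside `apply`, not from the indexing, and `drop` offers no shortcut. If the parsing step can be expressed as a string operation, the mask becomes cheap. A sketch with a naive comment-stripping regex standing in for the real parser (it is *not* a correct JS tokenizer; it will mangle comment-like text inside string literals):

```python
import re

import pandas as pd

df = pd.DataFrame({
    "filename": ["a.js", "b.js", "c.js"],
    "content": [
        "// short\nlet x = 1;",
        "/* header */ " + "console.log('x');" * 20,
        "y" * 300,
    ],
})

# Hypothetical stand-in for the real parsing code.
comment_re = re.compile(r"//[^\n]*|/\*.*?\*/", re.DOTALL)
stripped = df["content"].map(lambda s: comment_re.sub("", s))

# Boolean mask in one shot; .copy() avoids SettingWithCopy warnings later.
df = df[stripped.str.len() > 200].copy()
```

If the parser has to stay a Python-level function, `apply` (or an equivalent list comprehension building the mask) is about as good as it gets.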
<python><pandas>
2023-10-16 12:23:31
1
1,455
GRoutar
77,301,900
3,449,093
Why doesn't re-raising the exception via e work, while raising it explicitly does?
<p>I'm catching an exception and I want to re-raise it. I've tried the following two ways:</p> <pre><code>except exceptions.MyException as e: raise exceptions.MyException(message=e.message) </code></pre> <p>and:</p> <pre><code>except exceptions.MyException as e: raise e(message=e.message) </code></pre> <p>Further down, I catch it like this:</p> <pre><code>except ArticleNotAcceptable as e: Messages.flash(e.message) </code></pre> <p>While the first version works fine, the second doesn't: it causes <code>'NoneType' object has no attribute 'id'</code>, which doesn't tell me much. Can anyone please explain why?</p>
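Inside the `except` block, `e` is bound to the exception *instance*, not the class, and instances of ordinary exception classes are not callable, so `e(message=...)` raises a `TypeError` mid-handler (the `'NoneType' object has no attribute 'id'` message is presumably the surrounding framework tripping over that secondary failure). A minimal stand-in:

```python
class MyException(Exception):
    def __init__(self, message=""):
        super().__init__(message)
        self.message = message

# The failing form: calling the instance.
try:
    try:
        raise MyException(message="boom")
    except MyException as e:
        raise e(message=e.message)      # e is an instance -> TypeError
except TypeError as exc:
    err = str(exc)

# Calling the class works; type(e) recovers it generically.
try:
    try:
        raise MyException(message="boom")
    except MyException as e:
        raise type(e)(message=e.message)
except MyException as exc2:
    msg = exc2.message
```

When nothing about the exception changes, a bare `raise` inside the handler is simpler still and keeps the original traceback.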
<python>
2023-10-16 12:17:44
3
1,399
Malvinka
77,301,650
10,353,865
How the element types of a DataFrame column relate to the original array's element type
<p>In the code below I start with a numpy string array (typecode 'U') and then create a pandas df from it. Afterwards I look at the type of an individual element and it is just an ordinary Python str - hence it seems to &quot;forget&quot; the numpy datatype (which should be &lt;class 'numpy.str_'&gt;). However, if I do the same with a numpy int array it remembers the dtype and spits out &lt;class 'numpy.int64'&gt;. (I also have an ordinary Python list as a comparison.)</p> <pre><code>a = np.array([&quot;Hi&quot;]) type(a[0]) #&lt;class 'numpy.str_'&gt; b = [&quot;Hi&quot;] type(b[0]) #&lt;class 'str'&gt; df_a = pd.DataFrame({&quot;val&quot;: a}) df_b = pd.DataFrame({&quot;val&quot;: b}) type(df_a[&quot;val&quot;][0]) # &lt;class 'str'&gt; type(df_b[&quot;val&quot;][0]) # &lt;class 'str'&gt; # NOTE: Same type although originally created with different types ###compare with numbers: a = np.array([1]) type(a[0]) # &lt;class 'numpy.int64'&gt; b = [1] type(b[0]) # &lt;class 'int'&gt; df_a = pd.DataFrame({&quot;val&quot;: a}) df_b = pd.DataFrame({&quot;val&quot;: b}) type(df_a[&quot;val&quot;][0]) # &lt;class 'numpy.int64'&gt; type(df_b[&quot;val&quot;][0]) # &lt;class 'numpy.int64'&gt; ###compare with numbers - but without default size: a = np.array([1],dtype='i2') type(a[0]) # &lt;class 'numpy.int16'&gt; b = [1] type(b[0]) # &lt;class 'int'&gt; df_a = pd.DataFrame({&quot;val&quot;: a}) df_b = pd.DataFrame({&quot;val&quot;: b}) type(df_a[&quot;val&quot;][0]) # &lt;class 'numpy.int16'&gt; type(df_b[&quot;val&quot;][0]) # &lt;class 'numpy.int64'&gt; </code></pre> <p>So my general question: if I create a numpy array whose individual elements have type &quot;x&quot;, under which circumstances is an element in the created pd.DataFrame of the same type? And why does the string type convert in the manner displayed?</p>
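The pattern comes down to which dtypes pandas can actually store in a column: numeric numpy dtypes survive the `DataFrame` constructor unchanged, but pandas has no fixed-width unicode dtype, so a `'<U…'` array is converted to an `object` column whose elements are plain Python `str`. A sketch (behaviour of the classic NumPy-backed pandas; the newer Arrow-backed string dtypes change this picture):

```python
import numpy as np
import pandas as pd

# '<U2' numpy array -> object column -> elements are built-in str.
s_str = pd.DataFrame({"val": np.array(["Hi"])})["val"]

# int16 is a dtype pandas can hold, so it round-trips and scalar
# access hands back a numpy scalar of that exact width.
s_i16 = pd.DataFrame({"val": np.array([1], dtype="i2")})["val"]

# A plain Python list is converted using the platform default int
# (int64 on Linux/macOS, historically int32 on Windows).
s_int = pd.DataFrame({"val": [1]})["val"]
```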
<python><pandas>
2023-10-16 11:40:03
1
702
P.Jo