Column              Type           Min                    Max
QuestionId          int64          74.8M                  79.8M
UserId              int64          56                     29.4M
QuestionTitle       stringlengths  15                     150
QuestionBody        stringlengths  40                     40.3k
Tags                stringlengths  8                      101
CreationDate        stringdate     2022-12-10 09:42:47    2025-11-01 19:08:18
AnswerCount         int64          0                      44
UserExpertiseLevel  int64          301                    888k
UserDisplayName     stringlengths  3                      30
75,382,158
10,065,556
sqlalchemy.exc.ProgrammingError: (psycopg2.errors.DatatypeMismatch) column is of type bit but expression is of type integer
<p>I'm trying to insert a BIT value into my Postgres column called <code>isFile</code>. The following error is raised when passing <code>IsFile=1</code>:</p> <pre><code>sqlalchemy.exc.ProgrammingError: (psycopg2.errors.DatatypeMismatch) column &quot;IsFile&quot; is of type bit but expression is of type integer </code></pre> <p>For some reason it's not being cast to the BIT type (unlike with other DBAPIs).</p> <p>I tried to pass a BIT object instead of <code>1</code>:</p> <pre class="lang-py prettyprint-override"><code>from sqlalchemy.dialects.postgresql import BIT IsFile=BIT(1) </code></pre> <p>which fails with:</p> <pre><code>sqlalchemy.exc.ProgrammingError: (psycopg2.ProgrammingError) can't adapt type 'BIT' </code></pre>
<python><postgresql><sqlalchemy><psycopg2>
2023-02-08 06:39:13
0
994
ScriptKiddieOnAComputer
75,382,057
4,796,963
A random string of ascii_letters + digits, but only starts with a letter
<p>I'm using the following simple code to generate a random string of length 10</p> <pre><code>from random import choice from string import ascii_letters, digits ''.join(choice(ascii_letters + digits) for i in range(10)) </code></pre> <p>The problem is that sometimes the first character of the string is a digit. I don't want that. I want the first character to always be a letter, and I don't care what comes after it.</p> <p>I can solve this problem by joining two strings (one of length 1 and the other of length 9), generating the first one from <code>ascii_letters</code> alone. However, I was wondering if there's a simpler approach.</p>
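One way to do this (a sketch, not from the original post; `random_identifier` is a hypothetical helper name): draw the first character from `ascii_letters` alone and the remainder from the combined pool.

```python
from random import choice
from string import ascii_letters, digits

def random_identifier(length=10):
    # first character: letters only; remaining characters: letters + digits
    pool = ascii_letters + digits
    return choice(ascii_letters) + ''.join(choice(pool) for _ in range(length - 1))

token = random_identifier()
```

For the tail, `random.choices(pool, k=length - 1)` would be an equivalent, slightly shorter alternative.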
<python><python-3.x><string><random>
2023-02-08 06:26:26
4
1,297
AhmedWas
75,382,000
15,781,591
How to insert string into plotting function in python
<p>I have the following code that creates a histogram from my data using seaborn in python:</p> <pre><code>ax=sns.histplot(data=data, y=value, x=category) ax.figure.set_size_inches(7, 10) plt.title('Title'.format(field)) plt.show() </code></pre> <p>Now I want to create a for loop that makes a different type of plot for my data from a list of plot types. Specifically, I want to create a histogram, a boxplot, and a violin plot from my data using seaborn.</p> <p>And so, I have this list of seaborn plot types:</p> <pre><code>plot_type_list = ['histplot', 'boxplot', 'violinplot'] </code></pre> <p>I want to loop through this list of plot types and try each plot type on my data. I tried the following:</p> <pre><code>for plot_type in plot_type_list: ax=sns.plot_type(data=data, y=value, x=category) ax.figure.set_size_inches(7, 10) plt.title('Title'.format(field)) plt.show() </code></pre> <p>My reasoning was that the &quot;plot_type&quot; in <code>sns.plot_type()</code> would be substituted by each string in my plot_type_list, thus creating each type of plot from my data. However, this attempt returns the following error:</p> <pre><code>AttributeError: module 'seaborn' has no attribute 'plot_type' </code></pre> <p>I see that Python is not inserting my plot_type string into the seaborn plotting function as I intended, but just preserving &quot;plot_type&quot; as &quot;plot_type&quot; rather than substituting &quot;histplot&quot;, &quot;boxplot&quot;, and &quot;violinplot&quot;. How can I fix my code so that each plot type is inserted into the seaborn plotting function, letting my for loop create a histplot, boxplot, and violinplot?</p>
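The standard Python idiom for dispatching on a function name held as a string is `getattr(module, name)`. A minimal stdlib demonstration (using `math` rather than seaborn so it runs without plotting dependencies):

```python
import math

# getattr turns a string into the attribute of the same name, the same way
# getattr(sns, 'histplot') would yield the sns.histplot function
results = {name: getattr(math, name)(2.25) for name in ['sqrt', 'floor', 'ceil']}
```

Applied to the question above, the loop body would become `ax = getattr(sns, plot_type)(data=data, y=value, x=category)`.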
<python><matplotlib><seaborn>
2023-02-08 06:18:28
1
641
LostinSpatialAnalysis
75,381,941
19,238,204
How to Generate Random Year in Python with format YYYY?
<p>I have this code to generate a fake name, age, and address. What line would generate a fake year in YYYY format?</p> <pre><code>def create_fake_users(n): &quot;&quot;&quot;Generate fake users.&quot;&quot;&quot; faker = Faker() for i in range(n): user = User(name=faker.name(), age=random.randint(20, 80), address=faker.address().replace('\n', ', '), phone=faker.phone_number(), email=faker.email()) db.session.add(user) db.session.commit() print(f'Added {n} fake users to the database.') </code></pre>
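A stdlib sketch of one option (Faker also exposes `faker.year()`, which returns a four-digit year string, but the range-based version below needs no extra package; the bounds are arbitrary choices):

```python
import random

def fake_year(start=1950, end=2023):
    # four-digit year as a string, e.g. '1987'
    return f"{random.randint(start, end):04d}"

year = fake_year()
```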
<python><python-3.x>
2023-02-08 06:09:36
2
435
Freya the Goddess
75,381,861
3,386,779
Turn off screenshots in Robot Framework
<p>I have created an application in Robot Framework and want to disable screenshots.</p> <p>With Selenium2Library I tried <code>run_on_failure=Capture Page Screenshot</code>, but screenshots are still taken for successful cases.</p> <p>I need to stop screenshots being taken for both success and failure cases.</p>
<python><robotframework>
2023-02-08 05:57:03
1
7,263
user3386779
75,381,833
3,169,868
Pandas extract phrases in string that occur in a list
<p>I have a data frame with a column <code>text</code> which has strings as shown below</p> <pre><code>text my name is abc xyz is a fruit abc likes per </code></pre> <p>I also have a list of phrases as shown below</p> <pre><code>['abc', 'fruit', 'likes per'] </code></pre> <p>I want to add a column <code>terms</code> to my data frame which contains those phrases in the list that occur in the <code>text</code> string, so the result in this case would be</p> <pre><code>text terms my name is abc ['abc'] xyz is a fruit ['fruit'] abc likes per ['abc', 'likes per'] </code></pre> <p>Can I do this without using regex?</p>
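A regex-free sketch using a plain substring test inside `apply`, reproducing the sample frame from the question:

```python
import pandas as pd

df = pd.DataFrame({'text': ['my name is abc', 'xyz is a fruit', 'abc likes per']})
phrases = ['abc', 'fruit', 'likes per']

# keep every phrase that occurs as a substring of the row's text
df['terms'] = df['text'].apply(lambda s: [p for p in phrases if p in s])
```

Note that `in` matches raw substrings, so `'abc'` would also match inside `'abcdef'`; word-boundary matching would need tokenization or regex.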
<python><pandas><list><apply>
2023-02-08 05:53:43
2
1,432
S_S
75,381,812
9,768,643
filter a pandas df with multiple check of different column equal
<pre><code>,unique_system_identifier,call_sign,date1,date2,date3,date4 0,3929436,WQZL268,14-06-2023,,14-06-2023, 1,3929436,WQZL268,,,, 2,3929437,WQZL269,14-06-2023,,14-06-2023, 3,3929437,WQZL269,,,, 4,3929438,WQZL270,14-06-2023,,14-06-2023, 5,3929438,WQZL270,,,, 6,3929439,WQZL271,14-06-2023,,14-06-2023, 7,3929439,WQZL271,,,, 8,3929440,WQZL272,14-06-2023,,14-06-2023, 9,3929440,WQZL272,,,, 10,3929441,WQZL273,14-06-2023,,14-06-2023, 11,3929441,WQZL273,,,, 12,3929442,WQZL274,14-06-2023,,14-06-2023, 13,3929442,WQZL274,,,, 14,3929443,WQZL275,14-06-2023,,14-06-2023, </code></pre> <p>I have a df like the one above and need to keep only the rows where <code>date1</code> and <code>date3</code> differ, or where <code>date2</code> and <code>date4</code> differ (rows where both pairs differ should also be kept). How can I do this with pandas?</p> <p>Note: the columns come in as pandas <code>object</code> dtype, not as datetime/string.</p>
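One way to express the filter (a sketch on a reduced version of the frame; `fillna('')` makes two missing cells compare equal, since NaN-vs-NaN comparisons would otherwise count as "different"):

```python
import pandas as pd

df = pd.DataFrame({
    'call_sign': ['WQZL268', 'WQZL269', 'WQZL270'],
    'date1': ['14-06-2023', '14-06-2023', None],
    'date2': [None, None, None],
    'date3': ['14-06-2023', '15-06-2023', None],
    'date4': [None, '01-01-2023', None],
})

# rows where date1 differs from date3, OR date2 differs from date4
mask = (df['date1'].fillna('') != df['date3'].fillna('')) | (df['date2'].fillna('') != df['date4'].fillna(''))
changed = df[mask]
```

The `|` keeps rows where either pair differs, which automatically includes rows where both pairs differ.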
<python><pandas>
2023-02-08 05:50:59
1
836
abhi krishnan
75,381,775
19,321,677
How to randomly sample a value within a given segment?
<p>I want to create a new column &quot;sample_group_B&quot; which randomly samples a purchase price value from group B within the same segment as group A. How do I do this in pandas?</p> <pre><code>segment | purchase price | group High | 100 | A High | 105 | A High | 103 | B High | 104 | B Low | 10 | A Low | 9 | B Low | 50 | B Low | 55 | B </code></pre> <p>I want to create a new column that randomly samples the purchase price of group B within the respective segment such as:</p> <pre><code>segment | purchase price | group | sample_group_B High | 100 | A | sample a value from (103 or 104) High | 105 | A | sample a value from (103 or 104) Low | 10 | A | sample a value from (9 or 50 or 55) </code></pre> <p>I tried np.random() but it returned a bunch of NaNs.</p>
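A sketch of one approach: collect group B's prices per segment into a lookup, then map each group A row's segment to that list and draw one value (column names shortened, and no random seed fixed, so the drawn values vary):

```python
import random
import pandas as pd

df = pd.DataFrame({
    'segment': ['High', 'High', 'High', 'High', 'Low', 'Low', 'Low', 'Low'],
    'price':   [100, 105, 103, 104, 10, 9, 50, 55],
    'group':   ['A', 'A', 'B', 'B', 'A', 'B', 'B', 'B'],
})

# list of group-B prices available in each segment
b_prices = df[df['group'] == 'B'].groupby('segment')['price'].agg(list)

# for each group-A row, look up its segment's B prices and sample one
a = df[df['group'] == 'A'].copy()
a['sample_group_B'] = a['segment'].map(b_prices).map(random.choice)
```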
<python><pandas>
2023-02-08 05:44:59
2
365
titutubs
75,381,758
6,928,142
How to update arguments of an already started apscheduler job?
<p>I have a function called 'run' that runs for several hours and receives an argument 'param', as shown in the code below. How can I change param while the job is running?</p> <pre><code>sched1 = BackgroundScheduler() sched1.add_job(run, 'interval', hours = 5, args=[param]) sched1.start() </code></pre> <p>Regards</p>
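Two common routes here: APScheduler's `Job.modify(args=[...])` on the `Job` object returned by `add_job` changes the arguments used for subsequent runs; to affect a run that is already in progress, a mutable container the function re-reads works without any scheduler API. A scheduler-free sketch of the container idea:

```python
# the job keeps a reference to this dict, so later mutations are visible to it
params = {'param': 'initial'}

def run(params):
    # read the current value on every use, rather than capturing it once
    return params['param']

first = run(params)
params['param'] = 'updated'   # e.g. done from another thread while the job runs
second = run(params)
```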
<python><apscheduler>
2023-02-08 05:43:09
1
355
SergioABP
75,381,750
7,052,826
Different user running same Python version can't import package
<p>I have an Ubuntu (18.04.6 LTS) server with multiple users. On my default user, I can do the following:</p> <pre><code>defaultUser@server:~$ python Python 2.7.17 (default, Mar 18 2022, 13:21:42) [GCC 7.5.0] on linux2 Type &quot;help&quot;, &quot;copyright&quot;, &quot;credits&quot; or &quot;license&quot; for more information. &gt;&gt;&gt; import sys, os &gt;&gt;&gt; sys.version '2.7.17 (default, Mar 18 2022, 13:21:42) \n[GCC 7.5.0]' &gt;&gt;&gt; sys.executable '/usr/bin/python' &gt;&gt;&gt; import numpy &gt;&gt;&gt; </code></pre> <p>Using another user, invoking the same Python executable, I cannot import numpy anymore:</p> <pre><code>anotherUser@server:/home/mjpvanzuijlen$ python Python 2.7.17 (default, Mar 18 2022, 13:21:42) [GCC 7.5.0] on linux2 Type &quot;help&quot;, &quot;copyright&quot;, &quot;credits&quot; or &quot;license&quot; for more information. &gt;&gt;&gt; import sys, os &gt;&gt;&gt; sys.version '2.7.17 (default, Mar 18 2022, 13:21:42) \n[GCC 7.5.0]' &gt;&gt;&gt; sys.executable '/usr/bin/python' &gt;&gt;&gt; import numpy Traceback (most recent call last): File &quot;&lt;stdin&gt;&quot;, line 1, in &lt;module&gt; ImportError: No module named numpy </code></pre> <p>The contents of <code>sys.path</code> are the same for both users, though the order is not.</p> <pre><code>['', '/usr/lib/python2.7', '/usr/lib/python2.7/plat-x86_64-linux-gnu', '/usr/lib/python2.7/lib-tk', '/usr/lib/python2.7/lib-old', '/usr/lib/python2.7/lib-dynload', '/home/defaultUser/.local/lib/python2.7/site-packages', '/usr/local/lib/python2.7/dist-packages', '/usr/lib/python2.7/dist-packages'] </code></pre> <p>As I understand it, both users are running the exact same Python executable. Why can I import numpy as one user but not as the other?</p>
<python>
2023-02-08 05:42:17
0
4,155
Mitchell van Zuylen
75,381,665
9,357,484
Precision, recall, F1 score all have zero value for the minority class in the classification report
<p>I got a warning while using SVM and MLP classifiers from the scikit-learn package:</p> <blockquote> <p>C:\Users\cse_s\anaconda3\lib\site-packages\sklearn\metrics\_classification.py:1327: UndefinedMetricWarning: Precision and F-score are ill-defined and being set to 0.0 in labels with no predicted samples. Use <code>zero_division</code> parameter to control this behavior. _warn_prf(average, modifier, msg_start, len(result))</p> </blockquote> <p>Code for splitting the dataset</p> <pre><code>from sklearn.model_selection import train_test_split X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, stratify=y) </code></pre> <p>Code for the SVM classifier</p> <pre><code>from sklearn import svm from sklearn.metrics import classification_report SVM_classifier = svm.SVC(kernel=&quot;rbf&quot;, probability = True, random_state=1) SVM_classifier.fit(X_train, y_train) SVM_y_pred = SVM_classifier.predict(X_test) print(classification_report(y_test, SVM_y_pred)) </code></pre> <p>Code for the MLP classifier</p> <pre><code>from sklearn.neural_network import MLPClassifier MLP = MLPClassifier(random_state=1, learning_rate = &quot;constant&quot;, learning_rate_init=0.3, momentum = 0.2 ) MLP.fit(X_train, y_train) R_y_pred = MLP.predict(X_test) target_names = ['No class', 'Yes Class'] print(classification_report(y_test, R_y_pred, target_names=target_names)) </code></pre> <p>The warning is the same for both classifiers</p>
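As the warning text says, the `zero_division` parameter of `classification_report` controls what gets reported for a class that receives no predicted samples. A small self-contained sketch of the situation:

```python
from sklearn.metrics import classification_report

y_true = [0, 0, 0, 1]
y_pred = [0, 0, 0, 0]   # class 1 is never predicted

# zero_division=0 pins the undefined precision to 0.0 and silences the warning
report = classification_report(y_true, y_pred, zero_division=0, output_dict=True)
```

Note that `zero_division` only controls the reported value; all-zero precision/recall for the minority class still means the model never predicts it, which usually points at class imbalance (class weighting, resampling, or threshold tuning) rather than a metrics problem.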
<python><scikit-learn><svm><mlp>
2023-02-08 05:25:29
1
3,446
Encipher
75,381,655
1,639,926
Looking for python packages that can calculate nodes/region/edges while respecting X,Y coordinates of nodes
<p>I have a list of cities (nodes) plotted in a 2D plane, each given by an X,Y coordinate. I now want to add roads (edges) to it, but the roads cannot intersect. I want to create the greatest number of roads possible, by count, not by total length.</p> <p>In more general graph-theory parlance, I think I want the maximum number of edges (or regions?? maybe it's the same thing), where edges do not intersect in 2 dimensions, for a given set of nodes at X,Y points.</p> <p>In a brief look at NetworkX, it seems that they generate graphs by making &quot;nodes&quot;, but nodes can be &quot;anywhere&quot; and you cannot force nodes to be at a certain location with respect to each other (they have abstracted too far!).</p> <p>Edit: <a href="https://stackoverflow.com/questions/11804730/networkx-add-node-with-specific-position">networkx add_node with specific position</a> suggests that you can plot them in a given location. @Stef thanks!!</p> <ol> <li>Am I thinking about the problem correctly?</li> <li>Can I visualize my nodes/edges using some Python package that can automatically calculate the proper edges given a set of nodes?</li> <li>Is automatically finding the maximum number of non-intersecting edges a thing (and what is this called so I can find out more about it?)</li> </ol> <p>Very possibly similar to this question, but that question wasn't really answered and is from 8 years ago (<a href="https://stackoverflow.com/questions/27892692/algorithm-for-finding-minimal-cycle-basis-of-planar-graph">Algorithm for finding minimal cycle basis of planar graph</a>)</p>
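For context: the maximum number of non-crossing straight-line edges on fixed points is attained by any triangulation of the point set; with n points and h of them on the convex hull, every triangulation has exactly 3n − 3 − h edges and 2n − 2 − h triangles. A Delaunay triangulation via SciPy is a common concrete choice; a sketch with hypothetical city coordinates:

```python
import numpy as np
from scipy.spatial import Delaunay

# six hypothetical cities: four on the convex hull, two interior
pts = np.array([[0, 0], [5, 0], [4, 4], [0, 3], [2, 1], [1, 2]])
tri = Delaunay(pts)

# collect the unique undirected edges from all triangles
edges = set()
for a, b, c in tri.simplices:
    for u, v in ((a, b), (b, c), (c, a)):
        edges.add((min(u, v), max(u, v)))
```

The term to search for is "triangulation of a point set" (Delaunay being the canonical one); the edge list can then be fed into `networkx` with `pos` fixed to the coordinates for drawing.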
<python><graph-theory>
2023-02-08 05:23:42
0
862
user1639926
75,381,553
1,344,579
Pelican *_SAVE_AS settings set to "" are still generating pages
<p>According to the documentation for Pelican 4.8.0, I should be able to prevent the creation of certain pages by setting their value to an empty string.</p> <p>I am the only author of my blog, and have set up a simple about-me page with more detail than the author page provides. Within the <a href="https://docs.getpelican.com/en/latest/settings.html#url-settings" rel="nofollow noreferrer">docs</a> is this note:</p> <blockquote> <p>If you do not want one or more of the default pages to be created (e.g., you are the only author on your site and thus do not need an Authors page), set the corresponding *_SAVE_AS setting to '' to prevent the relevant page from being generated.</p> </blockquote> <p>Using this as my guidance, I've set the following values in my config file:</p> <pre><code>AUTHORS_SAVE_AS = &quot;&quot; AUTHOR_SAVE_AS = &quot;&quot; </code></pre> <p>Unfortunately, both the overall <code>authors.html</code> and individual author <code>author/NewGuy.html</code> are still being generated.</p> <p>I am using Pelican <code>4.8.0</code>.</p> <p>Is there a new/better way to prevent these pages from being generated? Sadly, the documentation doesn't appear to be 100% accurate in this case.</p>
<python><pelican>
2023-02-08 05:04:27
1
3,473
NewGuy
75,381,464
1,828,539
Plot multiple multi-plot panels with seaborn
<p>I'm trying to plot three panels of 3 plots in a single figure, but all I get is an empty grid and the individual 3-plot panels. What am I doing wrong?</p> <pre><code>def plot_qc_metrics(adata, key): pg.qc_metrics(adata, min_genes=100, mito_prefix='MT-') pg.filter_data(adata) fig, ax = plt.subplots(1, 3, figsize=(15, 5), sharey=True) fig.suptitle(key, fontsize=16) sns.histplot(data = adata.obs, x = 'n_counts', bins=50, log_scale=True, ax=ax[0]) ax[0].set(xlim=(1, 100000), xlabel='log nUMI') data = adata.obs['n_genes'] sns.histplot(data = adata.obs, x = 'n_genes', bins=50, log_scale=True, ax=ax[1]) ax[1].set(xlim=(1, 100000), xlabel='log nGenes') data = np.log2(adata.obs['percent_mito']) sns.histplot(data = data, x=data, bins=100, ax=ax[2]) # ax[2].set(xlim=(np.min(data), np.max(data)), xlabel='log2(percent_mito)') ax[2].xaxis.set_major_formatter(ticker.FuncFormatter(lambda x, pos: '$2^{%.0f}$' % x)) return fig, ax </code></pre> <pre><code> adata_dict = { 'org_1': org_1, 'org_2': org_2, 'org_3': org_3 } fig, ax = plt.subplots(nrows = 3, ncols = len(adata_dict), figsize=(15, 5), sharey=True) for i, (key, adata) in enumerate(adata_dict.items()): subfig, subax = plot_qc_metrics(adata, key) ax[i] = subax plt.tight_layout() plt.show() </code></pre> <p><a href="https://i.sstatic.net/NtypE.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/NtypE.png" alt="enter image description here" /></a></p>
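For context: Matplotlib axes cannot be re-parented into another figure after creation, so assigning `ax[i] = subax` only overwrites array entries and leaves the outer grid empty. The usual restructuring is to create one figure up front and pass each row of axes into the plotting helper. A seaborn-free sketch of that shape (data and panel contents are placeholders, not the question's QC metrics):

```python
import matplotlib
matplotlib.use('Agg')  # headless backend for the sketch
import matplotlib.pyplot as plt
import numpy as np

def plot_qc_metrics(ax_row, values, key):
    # draw into caller-owned axes instead of creating a new figure
    ax_row[0].hist(values, bins=30)
    ax_row[1].hist(np.log10(values), bins=30)
    ax_row[2].boxplot(values)
    ax_row[0].set_ylabel(key)

rng = np.random.default_rng(0)
datasets = {'org_1': rng.lognormal(3, 1, 200),
            'org_2': rng.lognormal(3, 1, 200),
            'org_3': rng.lognormal(3, 1, 200)}

# one figure, one row of three axes per dataset
fig, axes = plt.subplots(nrows=len(datasets), ncols=3, figsize=(15, 9))
for ax_row, (key, values) in zip(axes, datasets.items()):
    plot_qc_metrics(ax_row, values, key)
```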
<python><matplotlib><seaborn>
2023-02-08 04:44:27
0
2,376
Carmen Sandoval
75,381,444
4,112,504
solve deflection curve using scipy.solve_ivp error: 'Required step size is less than spacing between numbers.'
<p>I encountered a failure on solving deflection curve using scipy.solve_ivp when I used a big length for the beam(pipe). The following is my script.</p> <pre><code># -*- coding: utf-8 -*- # # Calculate deflection curve under gravity # import numpy as np from scipy.integrate import solve_ivp import matplotlib.pyplot as plt # round pipe's sectional area def area_round(D,t): d = D - 2 * t return np.pi*(D*D-d*d)/4.0 # area moment of inertia section properties of round pipe def moi_round(D,t): d = D - 2 * t return np.pi/64.0*(D**4-d**4) # gravity force per unit length for round pipe def G_round(dens,D,t): g = 9.8 # gravity coef, N/kg A = area_round(D,t) return A*dens*g # Calculate 2D plane(xy) bending of a beam undering self-gravity # minus y is the gravity direction # - Young's modules E # - area moment of inertia: Iz # x: coordinate position along x-axis # u=[y,dydx]: deflection at y direction , first order derivative on x def deflection(x,u,E,Iz,G,LG): y,dy_dx = u MG = np.select([x&lt;=LG,x&gt;LG],[G*(0.5*LG**2+0.5*x**2-LG*x),0]) # distributed force ddy_ddx = np.power(1.0+dy_dx*dy_dx,3.0/2.0)*MG/E/Iz return [dy_dx,ddy_ddx] # solve initial value problem using numerical method # Given x list, calcuate y and y' def solve_deflection(E,Iz,Gy,Lg,x_list): # initial value y0 = 0.0 dy_dx_x0 = 0.0 sol = solve_ivp(deflection,t_span=[0.0,Lg],y0=[y0,dy_dx_x0],method='RK45',t_eval=x_list,vectorized=True,args=(E,Iz,Gy,Lg),rtol=1.0e-3,atol=1.0e-4) print(&quot;sol:&quot;,sol) return sol.y[0,:],sol.y[1,:] # test of solving deflection equation # unit sytem: length mm force:N pressure:N/mm^2/MPa def test(): dens = 7880.0e-9 # Kg/mm^3 E = 210e3 # elastic modulus, N/mm^2 # round pipe Dp = 60.0 # mm, outer diameter tp = 3.0 # mm ,thickness print(&quot;Dp:&quot;,Dp,' tp:',tp) Iz= moi_round(Dp,tp) # section const Gy = -G_round(dens,Dp,tp) # gravity force per unit print(&quot; Iz:&quot;,Iz,' Gy:',Gy) # Lg = 18799.1 # mm, this length is OK Lg = 18799.2 # mm, this length leads to failure 
print(&quot;Lg=&quot;,Lg) xlist = np.linspace(0.0,Lg,num=100,endpoint=True) res = solve_deflection(E, Iz, Gy=Gy, Lg=Lg, x_list=xlist) plt.plot(xlist,res[0]) plt.show() if __name__ == '__main__': test() pass </code></pre> <p>When the length is 10000mm, it can give correct result as the following graph <a href="https://i.sstatic.net/UHvQq.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/UHvQq.png" alt="enter image description here" /></a></p> <p>But when I changed it to 20000mm, it failed and output the following messages,</p> <pre><code>sol: message: 'Required step size is less than spacing between numbers.' nfev: 944 njev: 0 nlu: 0 sol: None status: -1 success: False t: array([ 0. , 202.02020202, 404.04040404, 606.06060606, 808.08080808, 1010.1010101 , 1212.12121212, 1414.14141414, 1616.16161616, 1818.18181818, 2020.2020202 , 2222.22222222, 2424.24242424, 2626.26262626, 2828.28282828, 3030.3030303 , 3232.32323232, 3434.34343434, 3636.36363636, 3838.38383838, 4040.4040404 , 4242.42424242, 4444.44444444, 4646.46464646, 4848.48484848, 5050.50505051, 5252.52525253, 5454.54545455, 5656.56565657, 5858.58585859, 6060.60606061, 6262.62626263, 6464.64646465, 6666.66666667, 6868.68686869, 7070.70707071, 7272.72727273, 7474.74747475, 7676.76767677, 7878.78787879, 8080.80808081, 8282.82828283, 8484.84848485, 8686.86868687, 8888.88888889]) t_events: None y: array([[ 0.00000000e+00, -3.66155907e+00, -1.45618988e+01, -3.25951385e+01, -5.76794150e+01, -8.97565181e+01, -1.28794030e+02, -1.74786625e+02, -2.27730361e+02, -2.87640987e+02, -3.54556806e+02, -4.28538678e+02, -5.09670018e+02, -5.98056798e+02, -6.93827544e+02, -7.97133338e+02, -9.08147821e+02, -1.02706718e+03, -1.15411018e+03, -1.28951811e+03, -1.43356488e+03, -1.58659673e+03, -1.74895654e+03, -1.92104924e+03, -2.10335332e+03, -2.29642078e+03, -2.50087718e+03, -2.71742163e+03, -2.94684282e+03, -3.19021861e+03, -3.44843893e+03, -3.72265946e+03, -4.01447603e+03, -4.32592470e+03, -4.65948174e+03, 
-5.01806360e+03, -5.40502733e+03, -5.82494500e+03, -6.28452879e+03, -6.79299154e+03, -7.36408180e+03, -8.02047137e+03, -8.80421033e+03, -9.81478548e+03, -1.15113253e+04], [ 0.00000000e+00, -3.61399978e-02, -7.16869806e-02, -1.06770973e-01, -1.41512819e-01, -1.76024795e-01, -2.10415154e-01, -2.44799321e-01, -2.79258778e-01, -3.13875788e-01, -3.48738932e-01, -3.83943102e-01, -4.19589504e-01, -4.55785660e-01, -4.92645404e-01, -5.30288885e-01, -5.68842566e-01, -6.08439223e-01, -6.49217947e-01, -6.91324142e-01, -7.34921284e-01, -7.80235186e-01, -8.27461362e-01, -8.76842769e-01, -9.28683414e-01, -9.83348358e-01, -1.04126372e+00, -1.10291665e+00, -1.16888799e+00, -1.24022057e+00, -1.31734604e+00, -1.40101902e+00, -1.49267854e+00, -1.59444811e+00, -1.70913565e+00, -1.84023352e+00, -1.99191927e+00, -2.17052401e+00, -2.38680965e+00, -2.65797898e+00, -3.01398821e+00, -3.51651158e+00, -4.31409773e+00, -5.92506745e+00, -1.41760305e+01]]) y_events: None Traceback (most recent call last): File &quot;mwe.py&quot;, line 82, in &lt;module&gt; test() File &quot;mwe.py&quot;, line 76, in test plt.plot(xlist,res[0]) File &quot;\lib\site-packages\matplotlib\pyplot.py&quot;, line 3019, in plot return gca().plot( File &quot;\lib\site-packages\matplotlib\axes\_axes.py&quot;, line 1605, in plot lines = [*self._get_lines(*args, data=data, **kwargs)] File &quot;\lib\site-packages\matplotlib\axes\_base.py&quot;, line 315, in __call__ yield from self._plot_args(this, kwargs) File &quot;\lib\site-packages\matplotlib\axes\_base.py&quot;, line 501, in _plot_args raise ValueError(f&quot;x and y must have same first dimension, but &quot; ValueError: x and y must have same first dimension, but have shapes (100,) and (45,) </code></pre> <p>I have read similar question on <a href="https://stackoverflow.com/questions/59634279/solve-ivp-error-required-step-size-is-less-than-spacing-between-numbers">solve_ivp error: 'Required step size is less than spacing between numbers.'</a> But I can not understand 
it.</p> <p>I also tried different <code>atol</code> values but could not resolve the issue. I have been struggling with this for several days.</p> <p>Can anyone help me out? Thank you.</p>
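For context on this failure mode: status −1 with that message usually means the solution develops a singularity (plausibly here, the deflection growing without bound at that length under this model), so the solver stops early and `sol.t` contains only the `t_eval` points it actually reached (45 rather than 100), which is why the plot call then fails on mismatched shapes. Checking `sol.success` and plotting `sol.t` against `sol.y[0]` avoids the crash. A minimal reproduction with a deliberately blowing-up ODE (y' = y², whose exact solution 1/(1 − x) diverges at x = 1):

```python
import numpy as np
from scipy.integrate import solve_ivp

# finite-time blow-up: the solver cannot step past x = 1
sol = solve_ivp(lambda x, y: y**2, t_span=[0.0, 2.0], y0=[1.0],
                t_eval=np.linspace(0.0, 2.0, 100))

# always check sol.success, and plot sol.t (the points actually reached)
# rather than the originally requested grid
```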
<python><scipy><ode><runge-kutta>
2023-02-08 04:40:01
0
340
Jilong Yin
75,381,432
17,610,082
MYSQL Database Copy Using Python/Django
<p>I need to create a copy of the database in my MySQL server using a Django application.</p> <p>After a little research, I found mysqldump to be the best approach:</p> <pre class="lang-py prettyprint-override"><code>backup_file_path = f&quot;/tmp/{src_database_name}_backup.sql&quot; backup_db_command = f&quot;mysqldump -h {SQL_DB_HOST} -P 3306 -u {SQL_DB_USER} -p{SQL_DB_PASSWORD} {src_database_name} &gt; {backup_file_path}&quot; print(backup_db_command) # TODO: remove with os.popen(backup_db_command, &quot;r&quot;) as p: r = p.read() print(f&quot;Backup Output: {r}&quot;) restore_command = f&quot;mysql -u root -p{SQL_DB_PASSWORD} {dest_database_name} &lt; {backup_file_path}&quot; with os.popen(restore_command, &quot;r&quot;) as p: r = p.read() print(f&quot;Restore Output: {r}&quot;) </code></pre> <p><strong>My Queries:</strong></p> <ol> <li>Are there any issues with this approach?</li> <li>Is there a better approach to copying a DB using either Python or the Django ORM?</li> </ol>
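One refinement worth considering (a sketch, not a drop-in): build the command as an argument list for `subprocess.run` instead of an interpolated shell string, which avoids shell-quoting surprises, and keep the password out of `argv`, since command lines are visible to other users via `ps`. The hypothetical helper below only constructs the argument list, so its shape is easy to verify without a running MySQL server:

```python
def build_dump_command(host, user, db_name, outfile):
    # --result-file replaces the shell redirection; the password is deliberately
    # absent (pass it via the MYSQL_PWD env var or a .my.cnf file instead)
    return ['mysqldump', '-h', host, '-P', '3306', '-u', user,
            f'--result-file={outfile}', db_name]

cmd = build_dump_command('db.example.com', 'app', 'src_db', '/tmp/src_db_backup.sql')
# e.g.: subprocess.run(cmd, env={**os.environ, 'MYSQL_PWD': password}, check=True)
```

`check=True` also raises on a non-zero exit code, where `os.popen` silently swallows failures.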
<python><mysql><sql><django><database>
2023-02-08 04:37:42
1
1,253
DilLip_Chowdary
75,381,387
5,254,055
Appending to lists within loops in Python3 (again)
<p>I'm having difficulty adding to a list iteratively.</p> <p>Here's a MWE:</p> <pre><code># Given a nested list of values, or sets sets = [[1, 2, 3], [1, 2, 4], [1, 2, 5]] # add a value to each sublist giving the number of that set in the list. n_sets = len(sets) for s in range(n_sets): (sets[s]).insert(0, s) # Now repeat those sets reps times reps = 4 expanded_sets = [item for item in sets for i in range(reps)] # then assign a repetition number to each occurance of a set. rep_list = list(range(reps)) * n_sets for i in range(n_sets * reps): (expanded_sets[i]).insert(0, rep_list[i]) expanded_sets </code></pre> <p>which returns</p> <pre><code>[[3, 2, 1, 0, 0, 1, 2, 3], [3, 2, 1, 0, 0, 1, 2, 3], [3, 2, 1, 0, 0, 1, 2, 3], [3, 2, 1, 0, 0, 1, 2, 3], [3, 2, 1, 0, 1, 1, 2, 4], [3, 2, 1, 0, 1, 1, 2, 4], [3, 2, 1, 0, 1, 1, 2, 4], [3, 2, 1, 0, 1, 1, 2, 4], [3, 2, 1, 0, 2, 1, 2, 5], [3, 2, 1, 0, 2, 1, 2, 5], [3, 2, 1, 0, 2, 1, 2, 5], [3, 2, 1, 0, 2, 1, 2, 5]] </code></pre> <p>instead of the desired</p> <pre><code>[[0, 0, 1, 2, 3], [1, 0, 1, 2, 3], [2, 0, 1, 2, 3], [3, 0, 1, 2, 3], [0, 1, 1, 2, 4], [1, 1, 1, 2, 4], [2, 1, 1, 2, 4], [3, 1, 1, 2, 4], [0, 2, 1, 2, 5], [1, 2, 1, 2, 5], [2, 2, 1, 2, 5], [3, 2, 1, 2, 5]] </code></pre> <p>Just for fun, the first loop returns an expected value of <code>sets</code></p> <pre><code>[[0, 1, 2, 3], [1, 1, 2, 4], [2, 1, 2, 5]] </code></pre> <p>but after the second loop <code>sets</code> changed to</p> <pre><code>[[3, 2, 1, 0, 0, 1, 2, 3], [3, 2, 1, 0, 1, 1, 2, 4], [3, 2, 1, 0, 2, 1, 2, 5]] </code></pre> <p>I suspect the issue has something to do with copies and references. I've tried adding <code>.copy()</code> and slices in various places, but with the indexed sublists I haven't come across a combo that works. 
I'm running Python 3.10.6.</p> <p>Thanks for looking!</p> <p>Per suggested solution, <code>[list(range(reps)) for _ in range(n_sets)]</code> doesn't correctly replace the <code>list(range(reps)) * n_sets</code>, since it gives <code>[[0, 1, 2, 3], [0, 1, 2, 3], [0, 1, 2, 3]]</code> instead of the desired <code>[0, 1, 2, 3, 0, 1, 2, 3, 0, 1, 2, 3]</code>. Do I need to flatten, or is there a syntax with the <code>_</code> notation that gives me a single list?</p> <p>Further update . . . replacing</p> <pre><code>rep_list = list(range(reps)) * n_sets </code></pre> <p>with</p> <pre><code>rep_list_nest = [list(range(reps)) for _ in range(n_sets)] rep_list = [i for sublist in rep_list_nest for i in sublist] </code></pre> <p>gives the <strong>same</strong> undesired result for <code>expanded_sets</code>.</p>
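For reference, the root cause is that `[item for item in sets for i in range(reps)]` repeats *references* to the same three inner lists, so each `insert` hits one shared list four times. Building a fresh list per repetition avoids the aliasing entirely; a sketch producing the desired output:

```python
sets = [[1, 2, 3], [1, 2, 4], [1, 2, 5]]
reps = 4

expanded_sets = []
for set_num, s in enumerate(sets):
    for rep_num in range(reps):
        # a brand-new list each time: [rep, set_number, *original values]
        expanded_sets.append([rep_num, set_num] + s)
```

Because `[rep_num, set_num] + s` concatenates into a new list, the originals in `sets` are never mutated.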
<python><python-3.x><append><nested-lists>
2023-02-08 04:28:02
1
567
zazizoma
75,381,362
4,414,359
Uploading Google API python library to Lambda
<p>I have a Jupyter notebook in which I've built a script for extracting data from a Google Sheet using these two imports:</p> <pre><code>from googleapiclient.discovery import build from google.oauth2 import service_account </code></pre> <p>I'm trying to copy it to AWS Lambda and I'm having trouble uploading these three libraries to a layer:</p> <p><code>google-api-python-client</code></p> <p><code>google-auth-httplib2</code></p> <p><code>google-auth-oauthlib</code></p> <p>I downloaded them from pypi.org. They all only have one download option and don't specify which version of Python 3 they're compatible with, except <code>google-api-python-client</code>, which notes that &quot;Python 3.7, 3.8, 3.9, 3.10 and 3.11 are fully supported and tested.&quot;</p> <p>I just checked, and it looks like my Jupyter notebook is running Python 3.10. I've also copied the script into VSCode, and these libraries also appear to only work in Python 3.10. Which is weird, since at least one of them should still work in all versions. It makes me think I'm doing something wrong.</p> <p>Also, it doesn't look like Lambda supports 3.10? So is there no way to run Google libraries on it? Or do I need to use older libraries?</p>
<python><amazon-web-services><aws-lambda><google-api>
2023-02-08 04:23:34
1
1,727
Raksha
75,381,208
4,183,877
Serve TailwindCSS with django_plotly_dash
<p>I have a Dash app in Django being served via <a href="https://github.com/GibbsConsulting/django-plotly-dash" rel="nofollow noreferrer"><code>django-plotly-dash</code></a> and I'm using Tailwind for the styling across the site. Tailwind seems to be working everywhere except for the Dash app, where it is kind of working, but seems to be overwritten by the Bootstrap at some points.</p> <p>I can see the Tailwind styling without any issues if I run the Dash app on its own, but not when embedded in Django.</p> <p>Here's the view inside Django (and <a href="https://github.com/hubbs5/tailwind-dash" rel="nofollow noreferrer">the code for this basic example</a>): <a href="https://i.sstatic.net/2rEXm.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/2rEXm.png" alt="dash app not showing Tailwind" /></a></p> <p>And here it is (with garish colors to see the difference) while running Dash and Tailwind without Django: <a href="https://i.sstatic.net/oTvfC.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/oTvfC.png" alt="Ugly Dash app with Tailwind" /></a></p> <p>Some of the Tailwind styling is being applied, such as the <code>container mx-auto</code> bit of the Dash layout, but others (e.g. 
coloring) are being dropped.</p> <p>Here's the code for the Dash app, which is split into <code>layout.py</code>, <code>callbacks.py</code>, and <code>dashboard.py</code>:</p> <p><code>layout.py</code>:</p> <pre><code>from dash import dcc, html layout = html.Div( className=&quot;bg-green-100 container mx-auto my-auto px-15 py-5&quot;, children=[ html.Div( className=&quot;bg-red-100 py-5&quot;, children=[ dcc.Dropdown( id=&quot;symbol-input&quot;, options=[ {&quot;label&quot;: &quot;Apple&quot;, &quot;value&quot;: &quot;AAPL&quot;}, {&quot;label&quot;: &quot;Tesla&quot;, &quot;value&quot;: &quot;TSLA&quot;}, {&quot;label&quot;: &quot;Meta&quot;, &quot;value&quot;: &quot;META&quot;}, {&quot;label&quot;: &quot;Amazon&quot;, &quot;value&quot;: &quot;AMZN&quot;} ], searchable=True, value=&quot;AAPL&quot;, ) ]), html.Div( className=&quot;max-w-full shadow-2xl rounded-lg border-3&quot;, id=&quot;price-chart&quot; ) ] ) </code></pre> <p><code>callbacks.py</code>:</p> <pre><code>from dash import dcc, html from dash.dependencies import Input, Output import yfinance as yf import plotly.express as px def register_callbacks(app): @app.callback( Output(&quot;price-chart&quot;, &quot;children&quot;), Input(&quot;symbol-input&quot;, &quot;value&quot;), ) def get_data(symbol): df = yf.Ticker(symbol).history() fig = px.line( x=df.index, y=df.Close, title=f&quot;Price for {symbol}&quot;, labels={ &quot;x&quot;: &quot;Date&quot;, &quot;y&quot;: &quot;Price ($)&quot;, } ) return dcc.Graph( id=&quot;price-chart-1&quot;, figure=fig ) </code></pre> <p><code>dashboard.py</code>:</p> <pre><code>from django_plotly_dash import DjangoDash from .layout import layout from .callbacks import register_callbacks app = DjangoDash(&quot;Dashboard&quot;) app.css.append_css({&quot;external_url&quot;: &quot;/static/css/output.css&quot;}) app.layout = layout register_callbacks(app) </code></pre> <p>The Tailwind CSS is in <code>/static/css/output.css</code> and is linked as the stylesheet in the 
<code>base.html</code>. To ensure it's working correctly in Django, I put a simple homepage up and copied code from <a href="https://play.tailwindcss.com/" rel="nofollow noreferrer">Tailwind's site</a> to confirm that it works. Again, it's partially coming through in the Dash app, but seems to get overwritten.</p>
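A likely contributor worth checking (an assumption, not confirmed by the post): Tailwind's JIT build only emits classes it finds in the files listed under `content` in `tailwind.config.js`, and Dash layouts live in `.py` files, which template-only globs miss. Any class Tailwind never generated then falls through to the Bootstrap stylesheet that `django-plotly-dash` loads. A sketch of a config that also scans the Python sources (paths are hypothetical):

```js
/** tailwind.config.js — scan both Django templates and Dash layout code */
module.exports = {
  content: [
    './templates/**/*.html',
    './**/templates/**/*.html',
    './dash_apps/**/*.py',   // hypothetical path to layout.py, callbacks.py
  ],
  theme: { extend: {} },
  plugins: [],
};
```

After adding the globs, rebuilding `output.css` should make the color utilities (e.g. `bg-green-100`) appear in the generated stylesheet.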
<python><css><django><tailwind-css><plotly-dash>
2023-02-08 03:50:52
1
1,305
hubbs5
75,381,130
7,267,480
Very slow work with a dataframe, how to avoid it
<p>I have an issue with a part of my code that seems to run slowly.</p> <p>I suppose it's because I iterate through a dataframe. Here is the code:</p> <pre><code># creating a dataframe for ALL data df_all = pd.DataFrame() for idx, x in enumerate(all_data[0]): peak_indx_E = ... ... # TODO: speed up! # it works slow because of this? How to avoid this problem if I need to output a dataframe temp = pd.DataFrame( { 'idx_global_num': idx, ... 'peak_sq_divE': peak_sq_divE }, index=[idx] ) df_all = pd.concat([df_all, temp]) </code></pre> <p>Can you give me a suggestion on how to speed up the execution? I suppose the <code>pd.concat</code> operation is the slow part.</p> <p>How can I solve this issue?</p>
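The standard fix for this pattern: append plain dicts to a Python list inside the loop and build the DataFrame once at the end. `pd.concat` inside a loop is quadratic, because it copies the entire accumulated frame on every iteration. A sketch of the pattern (the loop body is reduced to stand-in fields, since the original computation is elided):

```python
import pandas as pd

all_data_0 = [10, 20, 30]  # stand-in for all_data[0]

rows = []
for idx, x in enumerate(all_data_0):
    # ... per-item computation would go here ...
    rows.append({'idx_global_num': idx, 'peak_sq_divE': x * 0.5})

# one allocation instead of len(all_data_0) concatenations
df_all = pd.DataFrame(rows)
```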
<python><pandas><performance>
2023-02-08 03:31:54
1
496
twistfire
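A common fix for the pattern in the question above: append plain dicts to a list inside the loop and build the DataFrame once at the end. Repeated `pd.concat` copies the whole accumulated frame on every iteration (quadratic), while a single construction is linear. A sketch — the column names and the per-item computation are placeholders, since the real ones are elided in the question:

```python
import pandas as pd

# Stand-in for all_data[0]; the real per-item computation is elided
items = range(1000)

rows = []
for idx, x in enumerate(items):
    # ... expensive per-item computation would go here ...
    rows.append({
        "idx_global_num": idx,   # hypothetical columns
        "peak_sq_divE": x * 2,
    })

# One DataFrame construction at the end instead of 1000 concats
df_all = pd.DataFrame(rows)
```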
75,381,109
998,070
Generate a new set of points along a line
<p>I have a Python project where I need to redraw a line many times with the points in random places but keeping the line's shape and point count roughly the same. The final output will be using polygonal points and <strong>not</strong> Bezier paths (though I wouldn't be opposed to using Bezier as an intermediary step).</p> <p>This animation is demonstrating how the points could move along the line to different positions while maintaining the general shape. <a href="https://i.sstatic.net/ODMTE.gif" rel="nofollow noreferrer"><img src="https://i.sstatic.net/ODMTE.gif" alt="Point Count animation" /></a></p> <p>I also have a working example below where I'm moving along the line and picking random new points between existing points (the red line, below). It works okay, but I'd love to hear some other approaches I might take if someone knows of a better one?</p> <p><em>Though this code is using <code>matplotlib</code> to demonstrate the line, the final program will not.</em></p> <pre class="lang-py prettyprint-override"><code>import numpy as np from matplotlib import pyplot as plt import random from random import (randint,uniform) def move_along_line(p1, p2, scalar): distX = p2[0] - p1[0] distY = p2[1] - p1[1] modX = (distX * scalar) + p1[0] modY = (distY * scalar) + p1[1] return [modX, modY] x_coords = [213.5500031,234.3809357,255.211853,276.0427856,296.8737183,317.7046204,340.1997681,364.3751221,388.5505066,414.8896484,444.5192261,478.5549622,514.5779419,545.4779053,570.3830566,588.0241699,598.2469482,599.772583,596.758728,593.7449341,590.7310791,593.373291,610.0373535,642.1326294,677.4451904,710.0697021,737.6887817,764.4020386,791.1152954,817.8284912,844.541687,871.2550049,897.9682007,924.6813965,951.3945923,978.1078491,1009.909546,1042.689941,1068.179199,1089.543091] y_coords = 
[487.3099976,456.8832703,426.4565125,396.0297852,365.6030273,335.1763,306.0349426,278.1913452,250.3477478,224.7166748,203.0908051,191.2358704,197.6810608,217.504303,244.4946136,276.7698364,312.0551453,348.6885986,385.4395447,422.1904297,458.9414063,495.5985413,527.0128479,537.1477661,527.6642456,510.959259,486.6988525,461.2799683,435.8611145,410.4422913,385.023468,359.6045532,334.18573,308.7669067,283.3480835,257.929184,239.4429474,253.6099091,280.1803284,310.158783] plt.plot(x_coords,y_coords,color='b') plt.scatter(x_coords,y_coords,s=2) new_line_x = [] new_line_y = [] for tgt in range(len(x_coords)-1): #tgt = randint(0, len(x_coords)-1) next_pt = tgt+1 new_pt = move_along_line([x_coords[tgt],y_coords[tgt]], [x_coords[next_pt],y_coords[next_pt]], uniform(0, 1)) new_line_x.append(new_pt[0]) new_line_y.append(new_pt[1]) plt.plot(new_line_x,new_line_y,color='r') plt.scatter(new_line_x,new_line_y,s=10) ax = plt.gca() ax.set_aspect('equal') plt.show() </code></pre> <p><a href="https://i.sstatic.net/Rq0Fr.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Rq0Fr.png" alt="enter image description here" /></a></p> <p>Thank you very much!</p>
<python><matplotlib><graphics><geometry>
2023-02-08 03:24:45
1
424
Dr. Pontchartrain
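One alternative to picking a random point inside each original segment is to work in arc length: compute the cumulative distance along the polyline, draw random positions along the total length, and map them back to x/y with `np.interp`. New points then always lie exactly on the original line, so the shape is preserved while the spacing and count can vary freely. A sketch (the function name is mine):

```python
import numpy as np

def resample_polyline(x, y, n_points, rng=None):
    """Place n_points at random arc-length positions along a polyline,
    pinning the two endpoints so the overall shape is kept."""
    rng = np.random.default_rng(rng)
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    # cumulative arc length of the original line
    seg = np.hypot(np.diff(x), np.diff(y))
    s = np.concatenate([[0.0], np.cumsum(seg)])
    # random positions along the total length, endpoints fixed
    t = np.sort(rng.uniform(0.0, s[-1], n_points - 2))
    t = np.concatenate([[0.0], t, [s[-1]]])
    return np.interp(t, s, x), np.interp(t, s, y)
```

Calling this with the `x_coords`/`y_coords` from the question and plotting the result should give a red line that hugs the blue one; raising `n_points` densifies it without changing the shape.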
75,381,105
1,828,539
Why do these plots with same parameters look so different? logging in matplotlib vs seaborn
<p>With seaborn.histplot:</p> <pre><code>import seaborn as sns plot = sns.histplot(data = adata.obs, x = 'n_counts', bins=50, log_scale=True) plot.set_xlim(1, 100000) </code></pre> <p><a href="https://i.sstatic.net/42Om0.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/42Om0.png" alt="enter image description here" /></a></p> <p>With plt.hist</p> <pre><code>adata = org_1 data = adata.obs['n_counts'] plt.hist(data, bins=50, range=(1, 100000)) plt.xscale(&quot;log&quot;) </code></pre> <p><a href="https://i.sstatic.net/Ktb7s.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Ktb7s.png" alt="enter image description here" /></a></p> <p>With plt.hist, but logging the data before passing it to plotting function: Tangent - how can I get the x axis to be in 10^n notation? (as in first plot)</p> <pre><code>data = np.log10(adata.obs['n_counts']) plt.hist(data, bins=50) plt.xlabel('log nUMI') </code></pre> <p><a href="https://i.sstatic.net/QnIlm.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/QnIlm.png" alt="enter image description here" /></a></p> <p>With plt.hist, but logging the data before passing it to plotting function, but specifying range to be as in plots 1 and 2:</p> <pre><code>data = np.log10(adata.obs['n_counts']) plt.hist(data, bins=50, range = (1, 10000)) plt.xlabel('log nUMI') </code></pre> <p><a href="https://i.sstatic.net/0xXti.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/0xXti.png" alt="enter image description here" /></a></p>
<python><matplotlib><seaborn>
2023-02-08 03:23:48
2
2,376
Carmen Sandoval
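The plots differ because seaborn's `log_scale=True` bins the data in log space (bin edges equally spaced on the log axis), whereas `plt.hist(data, bins=50, range=(1, 100000))` builds 50 equal-width *linear* bins and only draws the axis logarithmically, so nearly all the mass lands in the first few bins. To reproduce the seaborn look in matplotlib, pass log-spaced edges explicitly — a sketch on synthetic data:

```python
import numpy as np

rng = np.random.default_rng(0)
data = rng.lognormal(mean=3, sigma=1, size=10_000)

# Edges equally spaced in log space, like seaborn's log_scale=True
bins = np.logspace(np.log10(data.min()), np.log10(data.max()), 51)
counts, edges = np.histogram(data, bins=bins)

# plt.hist(data, bins=bins); plt.xscale("log")  # same shape as the seaborn plot
```

For the tangent: keep the data unlogged and use `plt.xscale("log")` — the log locator/formatter then labels ticks as powers of ten automatically. Once you `np.log10` the data yourself, the axis is linear and the labels stay plain numbers.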
75,380,950
1,106,951
Convert complex comma-separated string into Python dictionary
<p>I am getting following string format from csv file in Pandas</p> <blockquote> <p>&quot;title = matrix, genre = action, year = 2000, rate = 8&quot;</p> </blockquote> <p>How can I change the string value into a python dictionary like this:</p> <pre><code>movie = &quot;title = matrix, genre = action, year = 2000, rate = 8&quot; movie = { &quot;title&quot;: &quot;matrix&quot;, &quot;genre&quot;: &quot;action&quot;, &quot;year&quot;: &quot;1964&quot;, &quot;rate&quot;:&quot;8&quot; } </code></pre>
<python>
2023-02-08 02:47:27
3
6,336
Behseini
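A minimal sketch for the question above, assuming the fields are always `key = value` pairs separated by commas:

```python
movie = "title = matrix, genre = action, year = 2000, rate = 8"

movie_dict = {
    key.strip(): value.strip()
    for key, value in (pair.split("=", 1) for pair in movie.split(","))
}
```

`split("=", 1)` keeps any further `=` characters inside the value; if a value can itself contain a comma, a `csv` reader or a regex is needed instead.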
75,380,909
7,590,993
Raspberry pi wiringPi c not able to send serial data
<p>I have a working python file for my RaspberryPi which sends serial communication</p> <pre><code>import serial # pyserial from time import sleep ser = serial.Serial('/dev/ttyAMA0', baudrate=9600, parity=serial.PARITY_NONE, stopbits=serial.STOPBITS_TWO) on_packet = bytearray() on_packet.append(0x00) on_packet.append(0xFF) on_packet.append(0x53) on_packet.append(0xC3) while 1: ser.write(on_packet) sleep(.042) ser.close() </code></pre> <p>I have installed WiringPi library from <a href="https://github.com/WiringPi/WiringPi/tree/master/wiringPi" rel="nofollow noreferrer">https://github.com/WiringPi/WiringPi/tree/master/wiringPi</a> I am using the equivalent in c</p> <pre><code>#include &lt;wiringPi.h&gt; #include &lt;wiringSerial.h&gt; int fd; char on_packet[] = {0x00, 0xFF, 0x53, 0xC3}; int main () { if ((fd = serialOpen(&quot;/dev/ttyAMA0&quot;, 9600)) &lt; 0) { printf(&quot;Unable to open serial device\n&quot;); return 1; } while(1) { serialPuts(fd, on_packet); delay(42); } serialClose(fd); return 0; } </code></pre> <p>Are these both equivalent? The C example I provided doesn't send the serial output.</p>
<python><c><raspberry-pi><serial-port>
2023-02-08 02:39:33
0
3,672
Hariom Singh
75,380,890
1,332,521
Python: Parse dita map file and contents and output all href values
<p>I'm a newbie Python programmer, and I was looking for a script or snippet to help. I have to parse a dita map/xml file and for every xml file, output that filename, open that file and search for referenced <code>.dita</code>, <code>.ditamap</code>, or <code>.xml</code> file, output their filename, and recurse into those files. The ideas is to output a file of all the files referenced by that <code>.ditamap/xml</code> file and its children. This file will feed a list for zipping that group of files to send for processing. I found some sample code but I get no output!</p> <pre><code>import os import glob root_dir ='~/test_folder' for filename in glob.glob(root_dir + '**/*.xml', recursive=True) print(filename) </code></pre> <p>Here is a sample ditamap file:</p> <pre><code>&lt;?xml version=&quot;1.0&quot; encoding=&quot;utf-8&quot;?&gt;&lt;?Inspire CreateDate=&quot;2019-04-04T16:06:14&quot; ModifiedDate=&quot;2022-11-11T16:44:57&quot;?&gt;&lt;!DOCTYPE bookmap PUBLIC &quot;-//OASIS//DTD DITA BookMap//EN&quot; &quot;bookmap.dtd&quot;&gt; &lt;bookmap id=&quot;bookmap_e90eb827-7421-4491-8df3-5fea34a44931&quot; xml:lang=&quot;en-US&quot;&gt; &lt;booktitle id=&quot;booktitle_a78ddf49-09d7-4d3d-925c-d42d9ff7f360&quot;&gt; &lt;mainbooktitle id=&quot;mainbooktitle_0a34f716-bedc-4c5d-b198-dfd5006a3174&quot;&gt;About the Documentation&lt;/mainbooktitle&gt; &lt;/booktitle&gt; &lt;bookmeta&gt; &lt;prodinfo&gt; &lt;prodname /&gt; &lt;vrmlist&gt; &lt;vrm version=&quot;1&quot; /&gt; &lt;/vrmlist&gt; &lt;!--Do not change: Must be Manual--&gt; &lt;brand&gt;Manual&lt;/brand&gt; &lt;/prodinfo&gt; &lt;!--sets task labels (1st othermeta tag below)--&gt; &lt;othermeta content=&quot;yes&quot; name=&quot;task-labels&quot; /&gt; &lt;othermeta content=&quot;about&quot; name=&quot;bundle&quot; /&gt; &lt;bookid&gt; &lt;!--Revision--&gt; &lt;volume&gt;A0X&lt;/volume&gt; &lt;/bookid&gt; &lt;bookrights&gt; &lt;copyrfirst&gt; &lt;!--Format of copyright year is yyyy - mm--&gt; &lt;year&gt;2019 - 
04&lt;/year&gt; &lt;/copyrfirst&gt; &lt;bookowner&gt; &lt;!--Do not change organization--&gt; &lt;organization&gt;Dell&lt;/organization&gt; &lt;/bookowner&gt; &lt;/bookrights&gt; &lt;/bookmeta&gt; &lt;chapter href=&quot;subjectscheme_6b1f4589-e73e-49be-806d-0d064f3efd01.xml&quot; format=&quot;ditamap&quot; outputclass=&quot;subjectscheme&quot; processing-role=&quot;resource-only&quot; scope=&quot;external&quot; /&gt; &lt;chapter href=&quot;atm-About_user_guide_891d23dc-a186-422d-af40-75249dd31f87.xml&quot;&gt; &lt;topicmeta&gt; &lt;navtitle&gt;About the &lt;keyword conref=&quot;lib-Boomi_Keywords_0346af2b-13d7-491e-bec9-18c5d89225bf.xml#GUID-0207C7F1-40FD-4537-BE59-1D6DA46B9A1D/BOOMI_DELL&quot; /&gt;&lt;keyword conref=&quot;lib-Boomi_Keywords_0346af2b-13d7-491e-bec9-18c5d89225bf.xml#GUID-0207C7F1-40FD-4537-BE59-1D6DA46B9A1D/BOOMI_ATOMSPHERE&quot; /&gt; User Guide&lt;/navtitle&gt; &lt;/topicmeta&gt; &lt;topicref href=&quot;atm-Content_browsing_2c16a734-5cf8-416c-8978-0062ac04e430.xml&quot;&gt; &lt;topicmeta&gt; &lt;navtitle&gt;Content browsing&lt;/navtitle&gt; &lt;/topicmeta&gt; &lt;/topicref&gt; &lt;topicref href=&quot;atm-Content_searching_acdba241-6d33-41bc-8886-0907906fed64.xml&quot;&gt; &lt;topicmeta&gt; &lt;navtitle&gt;Content searching&lt;/navtitle&gt; &lt;othermeta name=&quot;mini-toc&quot; content=&quot;yes&quot; /&gt; &lt;/topicmeta&gt; &lt;/topicref&gt; &lt;topicref href=&quot;atm-Creating_a_documentation_account_c4ddf038-e007-4ee3-bef9-9f4eb06d0f89.dita&quot; /&gt; &lt;topicref href=&quot;atm-Collections_of_your_favorite_topics_5dd10ed2-b689-4628-bc2c-bc35dd4f571e.xml&quot;&gt; &lt;topicref id=&quot;topicref_bb2f9a40-0266-44b5-a061-39eca24b5d41&quot; href=&quot;atm-sharing_saved_collections_d41e734f-4b2e-4c1e-82e7-91617d1008ae.dita&quot; navtitle=&quot;atm-Sharing_saved_collections&quot; type=&quot;task&quot; /&gt; &lt;/topicref&gt; &lt;topicref id=&quot;topicref_8a2ba548-6595-4cc5-af12-afa2631abfbb&quot; 
href=&quot;atm-Using_table_filters_178c0de0-ddee-4073-b828-476ad13345c4.dita&quot; type=&quot;task&quot; /&gt; &lt;topicref href=&quot;atm-Team_welcomes_your_feedback_848e635e-0132-43d8-b22d-bbdf87ca398a.xml&quot;&gt; &lt;topicmeta&gt; &lt;navtitle&gt;The &lt;keyword conref=&quot;lib-Boomi_Keywords_0346af2b-13d7-491e-bec9-18c5d89225bf.xml#GUID-0207C7F1-40FD-4537-BE59-1D6DA46B9A1D/BOOMI_ATOMSPHERE&quot;&gt;The T&lt;/keyword&gt; documentation team welcomes your feedback&lt;/navtitle&gt; &lt;/topicmeta&gt; &lt;/topicref&gt; &lt;topicref href=&quot;atm-Other_ways_to_get_help_09adc783-784f-4f15-87f9-672d8030b689.xml&quot;&gt; &lt;topicmeta&gt; &lt;navtitle&gt;Other ways to get help&lt;/navtitle&gt; &lt;/topicmeta&gt; &lt;/topicref&gt; &lt;/chapter&gt; &lt;chapter&gt; &lt;topicref href=&quot;atm-Terms_of_use_78ffba54-261d-428d-afcd-a9db3ce51123.dita&quot; /&gt; &lt;/chapter&gt; &lt;chapter&gt; &lt;topicref&gt; &lt;topicref href=&quot;atm-API_licensing_df074d66-3a10-4df5-8dd5-0a3e13373d0e.dita&quot; /&gt; &lt;/topicref&gt; &lt;/chapter&gt; &lt;backmatter&gt; &lt;topicref href=&quot;r-boo-Copyright_Boomi_Online_Help_9eea563b-53a2-4d69-b6e7-7372bf7d5440.xml&quot; navtitle=&quot;Copyright&quot;&gt; &lt;topicmeta&gt; &lt;navtitle&gt;CopyrightBoomiOnlineHelp&lt;/navtitle&gt; &lt;/topicmeta&gt; &lt;/topicref&gt; &lt;topicref href=&quot;atm-About_reltable_72640fe6-ae6d-490c-b369-7adbcb67bc99.xml&quot; linking=&quot;normal&quot; print=&quot;no&quot; toc=&quot;no&quot;&gt; &lt;topicmeta&gt; &lt;navtitle&gt;reltable&lt;/navtitle&gt; &lt;/topicmeta&gt; &lt;/topicref&gt; &lt;/backmatter&gt; &lt;/bookmap&gt; </code></pre> <p>If anyone can help or have a similar script that would traverse and parse the files, that would be great!</p> <p>Any help is greatly appreciated!</p> <p>Thanks,</p> <p>Russ</p>
<python><xml><dita>
2023-02-08 02:35:13
2
329
Russ Urquhart
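The glob snippet above prints nothing for two likely reasons: the `for` line is missing its colon, and `root_dir + '**/*.xml'` concatenates without a path separator while `~` is never expanded (`glob` does not understand `~`; `os.path.expanduser` does). For the actual task — walking `href`/`conref` references recursively — here is a sketch using `xml.etree.ElementTree` (the function name is mine; files that do not exist on disk are silently skipped):

```python
import xml.etree.ElementTree as ET
from pathlib import Path

def collect_refs(path, seen=None):
    """Recursively collect `path` plus every .dita/.ditamap/.xml file it
    references via href or conref attributes."""
    seen = set() if seen is None else seen
    path = Path(path)
    if path in seen or not path.exists():
        return seen
    seen.add(path)
    for el in ET.parse(path).getroot().iter():
        for attr in ("href", "conref"):
            ref = el.get(attr)
            if not ref:
                continue
            ref = ref.split("#", 1)[0]          # drop fragment ids
            if ref.endswith((".dita", ".ditamap", ".xml")):
                collect_refs(path.parent / ref, seen)
    return seen
```

Writing `sorted(p.name for p in collect_refs("bookmap.ditamap"))` to a text file would then give the list to feed the zip step.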
75,380,812
10,210,261
Discovering Bluetooth devices in Python with M1 MacBook
<p>Is there any way to discover nearby Bluetooth devices using Python on macOS running on the newer Apple Silicon?</p> <p>I tried <code>Pybluez</code>, but <code>lightblue</code>, one of its dependencies, doesn't seem to be supported.</p>
<python><bluetooth><apple-m1><pybluez>
2023-02-08 02:20:25
1
311
Sam Lerman
75,380,805
766,704
Regex for finding numbers with or without a comma - but no other trailing symbols
<p>I need a regex that will parse a string and give me all of the numbers in it EXCEPT for things like percentage or currency.</p> <p>So for example:</p> <pre><code>Mario is 1,400 - which is 16% older than Luigi --&gt; 1,400 Mario is 1400 - which is 500 years older than peach ---&gt; 1400 500 Peach is 20% older than Luigi at 5000 and owes him 25$ ---&gt; 5000 </code></pre> <p>The closest I have gotten is here: <a href="https://regex101.com/r/4wkttj/1" rel="nofollow noreferrer">https://regex101.com/r/4wkttj/1</a></p> <pre><code>\b\d+(?:[\.,]\d+)?\b\s+(?!%|percent) </code></pre> <p>which gives me the first instance correctly, but not the second. additionally, If I try to put it in excel I get an error that the regex is invalid (even though it works fine on regex testing sites).</p> <pre><code>=REGEXMATCH(A5, &quot;\b\d+(?:[\.,]\d+)?\b\s+(?!%|percent)&quot;) Function REGEXMATCH parameter 2 value &quot;\b\d+(?:[\.,]\d+)?\b\s+(?!%|percent)&quot; is not a valid regular expression. </code></pre>
<python><regex><excel-formula>
2023-02-08 02:19:02
5
353
Ortal
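Two separate issues in the question above. First, Google Sheets' `REGEXMATCH` runs on RE2, which has no lookaround support at all — that is why a pattern that validates on regex101 (PCRE flavor) is rejected as "not a valid regular expression"; inside Sheets the pattern would need restructuring (e.g. capturing the number and matching the disallowed suffix separately), since `(?!...)` is simply unavailable there. Second, the Python pattern can be tightened so the trailing check also excludes `$` and so comma-grouped thousands are matched as one number. A sketch:

```python
import re

# A number (optional thousands commas / decimals) NOT followed by %, $ or "percent"
pattern = r"\b\d+(?:,\d{3})*(?:\.\d+)?\b(?!\s*(?:%|\$|percent))"

def plain_numbers(text):
    return re.findall(pattern, text)
```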
75,380,736
3,136,710
Python guessing game exercise - confused about the description and the math
<p>My questions are on the bottom.</p> <p>This exercise is based on a guessing game where the user (you) inputs a lower and upper bound int and then the PC picks a random number within those boundaries. Your job is to guess the number and the program tells the user if the guess is greater than, less than, or equal to the number the PC has chosen. It's most strategic to make guesses from the middle. The number of tries is recorded.</p> <p>Here is the code:</p> <pre><code>import random smaller = int(input(&quot;Enter the smaller number: &quot;)) larger = int(input(&quot;Enter the larger number: &quot;)) myNumber = random.randint(smaller, larger) count = 0 while True: count += 1 userNumber = int(input(&quot;Enter your guess: &quot;)) if userNumber &lt; myNumber: print(&quot;Too small!&quot;) elif userNumber &gt; myNumber: print(&quot;Too large!&quot;) else: print(&quot;Congratulations! You've got it in&quot;, count, &quot;tries!&quot;) break </code></pre> <p>Here is a sample output:</p> <pre><code>#Output example Enter the smaller number: 1 Enter the larger number: 100 Enter your guess: 50 Too small! Enter your guess: 75 Too large! Enter your guess: 63 Too small! Enter your guess: 69 Too large! Enter your guess: 66 Too large Enter your guess: 65 You've got it in 6 tries! 
</code></pre> <p>Now here is the exercise description:</p> <p>&quot;Modify the guessing-game program so that the user thinks of a number that the computer must guess.</p> <p>The computer must make no more than the minimum number of guesses, and it must prevent the user from cheating by entering misleading hints.</p> <p>Use <code>I'm out of guesses, and you cheated</code> and <code>Hooray, I've got it in X tries</code> as your final output.</p> <p>(Hint: Use the <code>math.log</code> function to compute the minimum number of guesses needed after the lower and upper bounds are entered.)&quot;</p> <hr /> <p>Here are two sample outputs I should be able to reproduce:</p> <pre><code>Enter the smaller number: 0 Enter the larger number: 10 0 10 Your number is 5 Enter =, &lt;, or &gt;: &lt; 0 4 Your number is 2 Enter =, &lt;, or &gt;: &gt; 3 4 Your number is 3 Enter =, &lt;, or &gt;: = Hooray, I've got it in 3 tries! </code></pre> <p>and</p> <pre><code>Enter the smaller number: 0 Enter the larger number: 50 0 50 Your number is 25 Enter =, &lt;, or &gt;: &lt; 0 24 Your number is 12 Enter =, &lt;, or &gt;: &lt; 0 11 Your number is 5 Enter =, &lt;, or &gt;: &lt; 0 4 Your number is 2 Enter =, &lt;, or &gt;: &lt; 0 1 Your number is 0 Enter =, &lt;, or &gt;: &gt; 1 1 Your number is 1 Enter =, &lt;, or &gt;: &gt; I'm out of guesses, and you cheated! 
</code></pre> <hr /> <p>Now here is my code:</p> <pre><code>from math import log2 #get the lower and upper bounds that the PC can guess between smaller = int(input(&quot;Enter the smaller number: &quot;)) larger = int(input(&quot;Enter the larger number: &quot;)) #this should calculate the maximum number of tries the PC can do max_tries = round(log2(larger-smaller+1)) print(&quot;PC should guess in no more than %s tries\n&quot; % max_tries) count = 0 while count &lt;= max_tries: count += 1 pc_guess = (larger+smaller) // 2 #This is the PC's guess print(&quot;%d %d&quot; % (smaller, larger)) #boundry print(&quot;Your number is &quot;, pc_guess) #printing PC's guess op = input(&quot;Enter =, &lt;, or &gt;: &quot;) #giving PC more info if op == '&lt;': #the number is less than the PC's guess larger = pc_guess - 1 elif op == '&gt;': #the number is greater than the PC's guess smaller = pc_guess + 1 elif op == '=': print(&quot;Hooray I've got it in&quot;, count, &quot;tries&quot;)#PC guessing correct break if count &gt; max_tries: print(&quot;I'm out of guesses, and you cheated&quot;) </code></pre> <p>Am I even thinking about this exercise correctly?</p> <p>Furthermore, I don't understand how to get the PC to get the right answer in &quot;no more than the minimum number of guesses,&quot; which I calculated as <code>log2(larger-smaller+1)</code> in my code. Calculating the average of the upper and lower bound still results in greater tries than the log2 result.</p> <p>Also, am I even calculating the PC's best guess correctly (the average calculation)?</p>
<python>
2023-02-08 02:05:31
1
652
Spellbinder2050
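The exercise above is binary search: guessing the midpoint halves the candidate interval each time, and with k guesses (each answered =, &lt;, or &gt;) the computer can distinguish at most 2^k − 1 numbers, so the guaranteed bound for n = larger − smaller + 1 candidates is ceil(log2(n + 1)). Note that `round(log2(n))`, as in the posted code, comes out one too low for some ranges (e.g. n = 64 needs 7 guesses, not 6). Cheating shows up as the interval emptying (smaller &gt; larger) or the guess budget running out. A sketch of the guessing core, with the interactive I/O replaced by a secret number so it can run standalone:

```python
from math import ceil, log2

def computer_guesses(smaller, larger, secret):
    """Return the number of guesses used, or None if the hints were
    inconsistent or the budget ran out ("you cheated")."""
    n = larger - smaller + 1
    max_tries = ceil(log2(n + 1))
    count = 0
    while smaller <= larger and count < max_tries:
        count += 1
        guess = (smaller + larger) // 2        # always guess the midpoint
        if guess == secret:
            return count                       # "Hooray, I've got it in count tries!"
        elif guess < secret:                   # user answers ">"
            smaller = guess + 1
        else:                                  # user answers "<"
            larger = guess - 1
    return None                                # "I'm out of guesses, and you cheated"
```

In the interactive version, the `guess < secret` / `guess > secret` branches simply become the user's `>` and `<` replies.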
75,380,478
2,491,592
For each group add a new shifted column based on the value in another column
<p>Given the dataframe below, I am trying to add a new shifted column based on yes/no value in the column start. However, my attempts are not really effective.</p> <pre><code>id Name type start AAA A xx yes AAA B yy no AAA C xx no BBB C xx yes BBB D zz no BBB B yy no </code></pre> <p>In the dataframe above, given the value &quot;no&quot; in column &quot;start&quot;, I would like to add a new shifted column with the value from &quot;Name&quot;, as well as change the value on the column &quot;Name&quot; itself.</p> <p>Example of the expected output (the column start can be deleted after the operation)</p> <pre><code>id Name type NAG AAA A xx B AAA A yy C BBB C xx D BBB C zz B </code></pre> <p>Even better (but this I can also fix it using a dictionary afterwards, probably not worth including it unless you have a better solution):</p> <pre><code>id Name type NAG typeNAG AAA A xx B yy AAA A xx C xx BBB C xx D zz BBB C xx B yy </code></pre> <p>My very poor attempt:</p> <pre class="lang-py prettyprint-override"><code>def n_issue(row): if row['start'] == &quot;no&quot;: return row['issueLabel'] else: pass ag[&quot;nag&quot;] = ag(n_issue, axis=1) </code></pre> <p>But using the above I cannot shift the column..</p> <p>Any solution is very much appreciated!</p>
<python><pandas><dataframe>
2023-02-08 01:07:44
2
315
K3it4r0
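Instead of a row-wise `apply`, the reshape in the question above can be done with one grouped broadcast: take the Name/type of each group's `start == "yes"` row, spread it over the whole group with `transform("first")`, then keep only the `"no"` rows, whose own Name/type become NAG/typeNAG. A sketch — it assumes exactly one `"yes"` row per id:

```python
import pandas as pd

df = pd.DataFrame({
    "id":    ["AAA", "AAA", "AAA", "BBB", "BBB", "BBB"],
    "Name":  ["A", "B", "C", "C", "D", "B"],
    "type":  ["xx", "yy", "xx", "xx", "zz", "yy"],
    "start": ["yes", "no", "no", "yes", "no", "no"],
})

mask = df["start"].eq("yes")
# Name/type of the group's "yes" row, broadcast to every row of the group
firsts = df[["Name", "type"]].where(mask).groupby(df["id"]).transform("first")

out = (df.loc[~mask]
         .assign(NAG=df["Name"], typeNAG=df["type"],
                 Name=firsts["Name"], type=firsts["type"])
         .loc[:, ["id", "Name", "type", "NAG", "typeNAG"]]
         .reset_index(drop=True))
```

`where(mask)` blanks out every non-start row, so `transform("first")` picks up the first non-null value per group — i.e. the start row — for each column.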
75,380,343
6,293,211
Import png into Excel and change its height and width
<p>I use the following code to import the <code>image.png</code> file into the excel sheet <code>image_in_excel.xlsx</code> at cell location <code>D5</code>. I also attempt to change the width and height of the imported image.</p> <p><strong>Issue</strong>: The image imports successfully. However, irrespective of whatever I set the 'width' and 'height' in python code, the imported image always measures to be Height=11.25&quot; and Width= 22.01&quot; in Excel (<strong>see attached photo</strong>). Also the aspect ratio seems to be locked.</p> <p><strong>Question</strong>: How to import png to xlsx file at a particular location and then change the imported photo's dimension?</p> <pre><code>import xlsxwriter # Create a new Excel file workbook = xlsxwriter.Workbook('image_in_excel.xlsx') worksheet = workbook.add_worksheet() # Insert the PNG image image_file = 'image.png' worksheet.insert_image('D5', image_file, {'width': 1, 'height': 20}) # Save the Excel file workbook.close() </code></pre> <p><a href="https://i.sstatic.net/rpANc.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/rpANc.png" alt="enter image description here" /></a></p>
<python><excel><image><xlsxwriter>
2023-02-08 00:33:11
1
466
Sinha
75,380,282
5,212,614
How can we use Feature Importance to Find the 'Worst' Features?
<p>I have some data at work that is confidential so I can't share it here, but the dataset below illustrates the point quite well. Basically, I want to run a feature importance exercise to find the top independent features (in this case, RM, LSTAT, and DIS) that have the most influence on the dependent feature (MDEV). This is done! My question is...how can I use this model to find the IDs associated with the top independent features (RM, LSTAT, and DIS)?</p> <p><strong>After viewing the plot, is it simply sorting the dataframe, in descending order, by RM, LSTAT, and DIS, because these are the top most influential features that impact the dependent feature? I don't think it works like that, but maybe that's all it is. In this case, I am assuming RM, LSTAT, and DIS are the 'worst' features, given the context of my business needs.</strong></p> <pre><code>from sklearn.datasets import load_boston import pandas as pd import numpy as np import matplotlib import matplotlib.pyplot as plt import seaborn as sns import statsmodels.api as sm from sklearn.model_selection import train_test_split from sklearn.linear_model import LinearRegression from sklearn.feature_selection import RFE from sklearn.linear_model import RidgeCV, LassoCV, Ridge, Lasso from sklearn.ensemble import RandomForestRegressor #Loading the dataset x = load_boston() df = pd.DataFrame(x.data, columns = x.feature_names) df[&quot;MEDV&quot;] = x.target X = df.drop(&quot;MEDV&quot;,1) #Feature Matrix y = df[&quot;MEDV&quot;] #Target Variable df.head() df['id'] = df.groupby(['MEDV']).ngroup() df = df.sort_values(by=['MEDV'], ascending=True) df.head(10) names = df.columns reg = RandomForestRegressor() reg.fit(X, y) print(&quot;Features sorted by their score:&quot;) print(sorted(zip(map(lambda x: round(x, 4), reg.feature_importances_), names), reverse=True)) features = names importances = reg.feature_importances_ indices = np.argsort(importances) plt.title('Feature Importances') plt.barh(range(len(indices)), 
importances[indices], color='#8f63f4', align='center') plt.yticks(range(len(indices)), features[indices]) plt.xlabel('Relative Importance') plt.show() </code></pre> <p><a href="https://i.sstatic.net/dpwQK.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/dpwQK.png" alt="enter image description here" /></a></p> <p><a href="https://i.sstatic.net/TWHfv.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/TWHfv.png" alt="enter image description here" /></a></p>
<python><python-3.x><machine-learning><data-science><feature-selection>
2023-02-08 00:18:47
0
20,492
ASH
75,380,233
7,882,846
Non-Linear Least Square Fitting Using Python
<p>I am trying to find the optimal parameter values based on 2 dimensional data. <br> I searched for other Q&amp;A threads and one of the questions looked similar to mine and the answer seemed to be duable <a href="https://stackoverflow.com/questions/40046961/non-linear-least-squares-fitting-2-dimensional-in-python">link</a> for me to replicate but I get the following error: <br><br> <code>TypeError: only size-1 arrays can be converted to Python scalars</code> <br></p> <p>I rarely changed the codes but customized a bit only but seems like my equation does not accept Numpy arrays. Is there a way to address this issue and derive the parameter values? <br> Below is the code:</p> <pre><code>from scipy.optimize import curve_fit import numpy as np import matplotlib.pyplot as plt t_data = np.array([0,5,10,15,20,25,27]) y_data = np.array([1771,8109,22571,30008,40862,56684,59101]) def func_nl_lsq(t, *args): K, A, L, b = args return math.exp(L*x)/((K+A*(b**x))) popt, pcov = curve_fit(func_nl_lsq, t_data, y_data, p0=[1, 1, 1, 1]) plt.plot(t_data, y_data, 'o') plt.plot(t_data, func_nl_lsq(t_data, *popt), '-') plt.show() print(popt[0], popt[1], popt[2], popt[3]) </code></pre> <p>For more clarifications, I also attach a screenshot of the (1) equation that I want to run NLS fitting and the data observations from the <a href="https://pubsonline.informs.org/doi/abs/10.1287/isre.1.1.23" rel="nofollow noreferrer">journal paper</a> that I've read.</p> <p><a href="https://i.sstatic.net/GhtTJ.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/GhtTJ.png" alt="enter image description here" /></a> ... 
(1)</p> <div class="s-table-container"> <table class="s-table"> <thead> <tr> <th>year(x_data)</th> <th>value(y_data)</th> </tr> </thead> <tbody> <tr> <td>1960</td> <td>1771</td> </tr> <tr> <td>1965</td> <td>8109</td> </tr> <tr> <td>1970</td> <td>22571</td> </tr> <tr> <td>1975</td> <td>30008</td> </tr> <tr> <td>1980</td> <td>40862</td> </tr> <tr> <td>1985</td> <td>56684</td> </tr> <tr> <td>1987</td> <td>59101</td> </tr> </tbody> </table> </div>
<python><curve-fitting><scipy-optimize><nonlinear-optimization>
2023-02-08 00:09:29
1
439
Todd
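The `TypeError` above comes from `math.exp`, which only accepts scalars: `curve_fit` passes the whole `t` array into the model, so the NumPy equivalents (`np.exp`, array `**`) are needed — and the body should use the parameter `t`, not the undefined `x`. A sketch with that fix; the starting values are my rough guesses read off the data (y(0) = 1/(K+A) ≈ 1771 suggests K + A ≈ 5.6e-4), and bounds keep `b` positive so `b**t` stays real:

```python
import numpy as np
from scipy.optimize import curve_fit

t_data = np.array([0, 5, 10, 15, 20, 25, 27], dtype=float)
y_data = np.array([1771, 8109, 22571, 30008, 40862, 56684, 59101], dtype=float)

def func_nl_lsq(t, K, A, L, b):
    # np.exp and ** are elementwise, so this works on the whole t array
    return np.exp(L * t) / (K + A * b**t)

p0 = [3e-4, 3e-4, 0.13, 0.5]                      # rough guesses from the data
popt, pcov = curve_fit(func_nl_lsq, t_data, y_data, p0=p0,
                       bounds=([1e-12, 1e-12, -1, 1e-6], [1, 1, 1, 1]),
                       max_nfev=100_000)
```

With four free parameters and seven points, the fit is quite sensitive to `p0`, so expect to tune the starting values against the paper's estimates.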
75,380,186
17,487,457
How to drop one of any two highly correlated features having low correlation with target
<p>I am working with the breast cancer dataset included in the <a href="https://scikit-learn.org/stable/modules/generated/sklearn.datasets.load_breast_cancer.html#sklearn.datasets.load_breast_cancer" rel="nofollow noreferrer">scikit-learn's</a> package, loaded like so:</p> <pre class="lang-py prettyprint-override"><code>import numpy as np import pandas as pd import matplotlib.pyplot as plt import seaborn as sns from sklearn.datasets import load_breast_cancer data = load_breast_cancer() df = pd.DataFrame(data.data, columns=data.feature_names) df['target'] = data.target df['target'] = df['target'].map({0:'malignant', 1:'benign'}) #data.head() </code></pre> <p>I can calculate and plot the the features correlation like so:</p> <pre class="lang-py prettyprint-override"><code>corr_mat = df.corr() mask = np.triu(np.ones_like(corr_mat, dtype=bool)) heatmap = sns.heatmap(corr_mat, vmin=-1, vmax=1, mask=mask, cmap='BrBG') </code></pre> <p><a href="https://i.sstatic.net/kdVsw.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/kdVsw.png" alt="enter image description here" /></a></p> <p>That said, suppose I set <code>abs(0.7)</code> as threshold to determine if features <code>i</code> and <code>j</code> are highly-correlated, so I can drop one of them. But then instead of dropping in any order, I want to make sure I drop the one with low correlation to the target variable <code>df['target']</code>.</p> <p>Something like:</p> <pre class="lang-py prettyprint-override"><code>for i in range(len(corr_mat.columns)): for j in range(i): if abs(corr_mat.iloc[i, j]) &gt; 0.7: # if correlation of corr_mat.columns[i] to df['target'] &lt; corr_mat.columns[j] to df['target'] # drop corr_mat.columns[i] # else drop corr_mat.columns[j] </code></pre> <p>Can someone help with this?</p>
<python><machine-learning><scikit-learn><classification><feature-selection>
2023-02-07 23:59:11
1
305
Amina Umar
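One way to complete the loop sketched above: precompute each feature's absolute correlation with the target (which must be numeric — so map the target back to 0/1 before this step), then, for each pair over the threshold, mark for dropping the feature whose correlation with the target is weaker, skipping pairs where one side is already marked. A sketch (the function name is mine):

```python
import numpy as np
import pandas as pd

def drop_correlated(df, target, threshold=0.7):
    """From each highly correlated feature pair, drop the feature whose
    |correlation| with the (numeric) target is lower."""
    X = df.drop(columns=[target])
    corr = X.corr().abs()
    target_corr = X.corrwith(df[target]).abs()
    to_drop = set()
    cols = corr.columns
    for i in range(len(cols)):
        for j in range(i):
            a, b = cols[i], cols[j]
            if a in to_drop or b in to_drop:
                continue                    # already handled via another pair
            if corr.iloc[i, j] > threshold:
                to_drop.add(a if target_corr[a] < target_corr[b] else b)
    return df.drop(columns=sorted(to_drop)), sorted(to_drop)
```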
75,380,064
251,276
How do I create a Dask DataFrame partition by partition and write portions to disk while the DataFrame is still incomplete?
<p>I have python code for data analysis that iterates through hundreds of datasets, does some computation and produces a result as a pandas DataFrame, and then concatenates all the results together. I am currently working with a set of data where these results are too large to fit into memory, so I'm trying to switch from pandas to Dask.</p> <p>The problem is that I have looked through the Dask documentation and done some Googling and I can't really figure out how to create a Dask DataFrame iteratively like how I described above in a way that will take advantage of Dask's ability to only keep portions of the DataFrame in memory. Everything I see assumes that you either have all the data already stored in some format on disk, or that you have all the data in memory and now want to save it to disk.</p> <p>What's the best way to approach this? My current code using pandas looks something like this:</p> <pre><code>def process_data(data) -&gt; pd.DataFrame: # Do stuff return df dfs = [] for data in datasets: result = process_data(data) dfs.append(result) final_result = pd.concat(dfs) final_result.to_csv(&quot;result.csv&quot;) </code></pre>
<python><pandas><dataframe><dask>
2023-02-07 23:37:06
1
10,920
Colin
75,380,003
220,225
Clean setup of pip-tools doesn't compile very basic pyproject.toml
<p>Using a completely new <code>pip-tools</code> setup always results in a <code>Backend subprocess exited</code> error.</p> <p><code>pyproject.toml</code>:</p> <pre class="lang-ini prettyprint-override"><code>[project] dependencies = [ 'openpyxl &gt;= 3.0.9, &lt; 4', ] </code></pre> <p>Running pip-tools in an empty directory that only contains the above pyproject.toml:</p> <pre class="lang-bash prettyprint-override"><code>% python -m venv .venv % source .venv/bin/activate % python -m pip install pip-tools % pip-compile -v -o requirements.txt --resolver=backtracking pyproject.toml Creating venv isolated environment... Installing packages in isolated environment... (setuptools &gt;= 40.8.0, wheel) Getting build dependencies for wheel... Backend subprocess exited when trying to invoke get_requires_for_build_wheel Failed to parse .../pyproject.toml </code></pre> <p>No <code>requirements.txt</code> gets created.</p> <p>Ideas on what might be missing here are appreciated.</p>
<python><pip-tools>
2023-02-07 23:28:10
1
600
Windowlicker
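The opaque `Backend subprocess exited` error above is typically the build backend rejecting the `[project]` table during metadata generation: PEP 621 makes `name` mandatory (and `version`, unless it is listed in `dynamic`), so a table containing only `dependencies` fails. A minimal `pyproject.toml` that `pip-compile` should accept might look like this (the project name and version are placeholders):

```toml
[project]
name = "my-project"      # required by PEP 621
version = "0.1.0"        # required unless declared in `dynamic`
dependencies = [
    "openpyxl >= 3.0.9, < 4",
]

[build-system]
requires = ["setuptools >= 61", "wheel"]
build-backend = "setuptools.build_meta"
```

(setuptools gained PEP 621 `[project]` support in version 61, hence the pin in `[build-system]`.)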
75,379,948
19,840,632
What is correct mapped annotation for `JSON` in SQLAlchemy 2.x version?
<p>The new version of SQLAlchemy introduced the ability to use type annotations using Mapped[Type] (<a href="https://docs.sqlalchemy.org/en/20/orm/declarative_tables.html#using-annotated-declarative-table-type-annotated-forms-for-mapped-column" rel="noreferrer">link</a>).</p> <p>My question is, what should we use as an annotation for <code>sqlalchemy.types.JSON</code>? Is just <code>dict</code> will be ok?</p> <p>I'm just using dict, but I want to understand the correct option for this.</p>
<python><json><sqlalchemy>
2023-02-07 23:18:35
3
459
May Flower
75,379,889
18,476,381
Python groupby/convert a triple join table to nested dictionary
<p>From a SQL stored procedure that performs a join on 3 tables I get the data below.</p> <pre><code> data = [ {&quot;service_order_number&quot;: &quot;ABC&quot;, &quot;item_id&quot;: 0, &quot;ticket_id&quot;: 10}, {&quot;service_order_number&quot;: &quot;ABC&quot;, &quot;item_id&quot;: 0, &quot;ticket_id&quot;: 11}, {&quot;service_order_number&quot;: &quot;ABC&quot;, &quot;item_id&quot;: 1, &quot;ticket_id&quot;: 12}, {&quot;service_order_number&quot;: &quot;DEF&quot;, &quot;item_id&quot;: 3, &quot;ticket_id&quot;: 13}, {&quot;service_order_number&quot;: &quot;DEF&quot;, &quot;item_id&quot;: 3, &quot;ticket_id&quot;: 14}, {&quot;service_order_number&quot;: &quot;DEF&quot;, &quot;item_id&quot;: 3, &quot;ticket_id&quot;: 15}] </code></pre> <p>I would like to group the data on service_order_number and item_id to return a list of dicts like below.</p> <pre><code>[ { &quot;service_order_number&quot;: &quot;ABC&quot;, &quot;line_items&quot;: [ { &quot;item_id&quot;: 0, &quot;tickets&quot;: [ { &quot;ticket_id&quot;: 10 }, { &quot;ticket_id&quot;: 11 } ] }, { &quot;item_id&quot;: 1, &quot;tickets&quot;: [ { &quot;ticket_id&quot;: 12 } ] } ] }, { &quot;service_order_number&quot;: &quot;DEF&quot;, &quot;line_items&quot;: [ { &quot;item_id&quot;: 3, &quot;tickets&quot;: [ { &quot;ticket_id&quot;: 13 }, { &quot;ticket_id&quot;: 14 }, { &quot;ticket_id&quot;: 15 } ] } ] } ] </code></pre> <p>The hierarchy would be service_order_number &gt; item_id &gt; ticket_id</p> <p>Is there an easy way to convert this data into my desired structure?</p>
<python><json><dictionary><nested>
2023-02-07 23:08:11
1
609
Masterstack8080
75,379,771
17,696,880
How to validate if a date indicated as a string belongs to an interval of 2 dates indicated in another string?
<pre class="lang-py prettyprint-override"><code>import os, datetime content = os.listdir(&quot;when&quot;) print(content) #this print... #['2022_-_12_-_29 12pp33 am _--_ 2023_-_01_-_25 19pp13 pm.txt', '2023-02-05 00pp00 am.txt'] for i in range(len(content)): content[i] = content[i].replace(&quot;_-_&quot;, &quot;-&quot;).replace(&quot;pp&quot;, &quot;:&quot;) print(content) #I prepare the input to use it to search #this print... #['2022-12-29 12:33 am _--_ 2023-01-25 19:13 pm.txt', '2023-02-05 00:00 am.txt'] input_to_search_in_folder = &quot;2022_-_01_-_05 12:33 am&quot; #file data to find in the 'when' folder </code></pre> <p>I have changed the <code>:</code> to <code>pp</code> (referring to point-point) because you cannot place <code>:</code> in folders or/and files, at least not in Windows</p> <pre><code>2022_-_12_-_29 12pp33 am _--_ 2023_-_01_-_25 19pp13 pm initial date _--_ final date </code></pre> <p>In this case <code>input_to_search_in_folder = &quot;2022_-_01_-_05 12:33 am&quot;</code> does not match a file with a specific date name. But if it belongs to the interval of days indicated in the file name <code>'2022_-_12_-_29 12pp33 am _--_ 2023_-_01_-_25 19pp13 pm.txt'</code></p> <p>How could I validate that this date <code>&quot;2022_-_01_-_05 12:33 am&quot;</code> does belong to that time interval <code>'2022_-_12_-_29 12pp33 am _--_ 2023_-_01_-_25 19pp13 pm'</code> or if it's this date <code>'2023-02-05 00:00 am'</code>?</p> <p>If the validation is successful, the program should print the content inside that .txt (in this case inside the <code>2022_-_12_-_29 12pp33 am _--_ 2023_-_01_-_25 19pp13 pm.txt</code> )</p> <pre class="lang-py prettyprint-override"><code>text_file = open(&quot;when/&quot; + , &quot;r&quot;) data_inside_this_file = text_file.read() text_file.close() #And finally prints the content of the .txt file that matches the date specified in the 'input_to_search_in_folder' variable print(repr(data_inside_this_file)) </code></pre>
<python><python-3.x><regex><date><datetime>
2023-02-07 22:49:07
2
875
Matt095
75,379,715
4,931,020
Python: creating a class instance via static method vs class method
<p>Let's say I have a class and would like to implement a method which creates an instance of that class. What I have is 2 options:</p> <ol> <li>static method,</li> <li>class method.</li> </ol> <p>An example:</p> <pre><code>class DummyClass: def __init__(self, json): self.dict = json @staticmethod def from_json_static(json): return DummyClass(json) @classmethod def from_json_class(cls, json): return cls(json) </code></pre> <p>Both of the methods work:</p> <pre><code>dummy_dict = {&quot;dummy_var&quot;: 124} dummy_instance = DummyClass({&quot;test&quot;: &quot;abc&quot;}) dummy_instance_from_static = dummy_instance.from_json_static(dummy_dict) print(dummy_instance_from_static.dict) &gt; {'dummy_var': 124} dummy_instance_from_class = DummyClass.from_json_class(dummy_dict) print(dummy_instance_from_class.dict) &gt; {'dummy_var': 124} </code></pre> <p>What I often see in other people's code is the <code>classmethod</code> design instead of <code>staticmethod</code>. Why is this the case?</p> <p>Or, rephrasing the question to possibly get a more comprehensive answer: what are the pros and cons of creating a class instance via <code>classmethod</code> vs <code>staticmethod</code> in Python?</p>
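One concrete difference the example above doesn't surface: under inheritance, `cls` in the classmethod resolves to the subclass, while the staticmethod's hard-coded `DummyClass(...)` does not. A sketch reusing the question's class with a hypothetical `ChildClass`:

```python
class DummyClass:
    def __init__(self, json):
        self.dict = json

    @staticmethod
    def from_json_static(json):
        return DummyClass(json)   # hard-coded: always builds DummyClass

    @classmethod
    def from_json_class(cls, json):
        return cls(json)          # cls is whichever class the call went through


class ChildClass(DummyClass):
    pass


print(type(ChildClass.from_json_static({})).__name__)  # DummyClass
print(type(ChildClass.from_json_class({})).__name__)   # ChildClass
```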
<python><oop><static-methods><class-method>
2023-02-07 22:42:18
1
739
kaksat
75,379,579
4,159,530
"Invalid operation" error in resizeGL after window closed
<ul> <li>Window 10 64-bit</li> <li>Python 3.8</li> <li>PyQt 6.3.0</li> <li>PySide 6.3.1</li> <li>PyOpenGL 3.1.6</li> </ul> <p>When I close the window, I get an &quot;Invalid operation&quot; error in resizeGL() on the line with the glViewport() function. This happens with PyQt6 and PySide6.</p> <pre class="lang-py prettyprint-override"><code>import sys from OpenGL.GL import * from PyQt6.QtCore import Qt from PyQt6.QtOpenGL import QOpenGLWindow from PyQt6.QtWidgets import QApplication class OpenGLWindow(QOpenGLWindow): def __init__(self): super().__init__() def initializeGL(self): glClearColor(0.2, 0.2, 0.2, 1) def resizeGL(self, w, h): glViewport(0, 0, w, h) def paintGL(self): glClear(GL_COLOR_BUFFER_BIT) if __name__ == &quot;__main__&quot;: QApplication.setAttribute(Qt.ApplicationAttribute.AA_UseDesktopOpenGL) app = QApplication(sys.argv) w = OpenGLWindow() w.show() sys.exit(app.exec()) </code></pre> <p>Debugging details:</p> <pre><code>Traceback (most recent call last): File &quot;E:\_Projects\OpenGL\basic-examples\animations\set-swap-interval-opengl21-pyqt6-python\opengl_window.py&quot;, line 15, in resizeGL glViewport(0, 0, w, h) File &quot;E:\ProgramFiles\Python\Python38\lib\site-packages\OpenGL\error.py&quot;, line 230, in glCheckError raise self._errorClass( OpenGL.error.GLError: GLError( err = 1282, description = b'invalid operation', baseOperation = glViewport, cArguments = (0, 0, 160, 160) ) </code></pre>
<python><pyqt><pyside><pyside6><pyqt6>
2023-02-07 22:21:41
0
1,212
8Observer8
75,379,387
4,035,296
Python tkinter canvas events not triggering on scrolled region
<p>I am making a Python program using the Tkinter GUI and the canvas element. I attached a scrollbar to the canvas element so that the user can scroll to unseen regions of the canvas. I created a dotted grid that allows the user to hover over the dots; if a dot is clicked, the program draws a circle over it. Also, as the mouse enters and leaves each dot, a dashed circle is drawn and erased.</p> <p><a href="https://i.sstatic.net/A7ZSl.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/A7ZSl.png" alt="enter image description here" /></a></p> <p>I also have print procedures that show debugging information about the actions performed:</p> <p><a href="https://i.sstatic.net/TNRtQ.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/TNRtQ.png" alt="enter image description here" /></a></p> <p>Everything works on the initially visible portion of the canvas. However, when I scroll down, I noticed that the bound click and hover events do work, but the canvas graphics are not being drawn and nothing appears.</p> <p><a href="https://i.sstatic.net/w7SF3.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/w7SF3.png" alt="enter image description here" /></a></p> <p>I can't understand why the graphics were initially drawn successfully, with the events bound and working, but now the canvas graphics just won't appear. Is there some issue between the scroll bar and the canvas? 
Here is the code for the initialization of the canvas and scroll bars:</p> <pre><code> def __init__(self, parent, model, settings, *args, **kwargs): super().__init__(parent, *args, **kwargs) self.width = 625 self.height = 500 self.canvas = tk.Canvas(self, width=self.width, height=self.height, background='white', cursor='arrow') self.canvas.grid(row=0, column=0) self.focusDotImage = -1 print(&quot;self.focusDotImage: &quot;, self.focusDotImage) # configure the scroll region self.canvas.configure(scrollregion=(0, 0, self.width * 2, self.height * 2)) # create scrollbars and connect to canvas xScroll = tk.Scrollbar( self, command=self.canvas.xview, orient=tk.HORIZONTAL ) xScroll.grid(row=1, column=0, sticky='new') yScroll = tk.Scrollbar(self, command=self.canvas.yview) yScroll.grid(row=0, column=1, sticky='nsw') self.canvas.configure(yscrollcommand=yScroll.set) self.canvas.configure(xscrollcommand=xScroll.set) # Draw Dotted Grid width self.space = 25 self.dotRadius= 1 self.nodeRadius = 5 for i in range(0, self.width): for j in range(0, self.height): dotItem = self.canvas.create_rectangle(i * self.space-self.dotRadius, j * self.space-self.dotRadius, i * self.space + self.dotRadius, j * self.space + self.dotRadius, fill='lightgrey') self.canvas.tag_bind(dotItem, '&lt;Enter&gt;', self._on_dot_enter) self.canvas.tag_bind(dotItem, '&lt;Leave&gt;', self._on_dot_leave) self.canvas.bind('&lt;Button-1&gt;', self._on_click) </code></pre> <p>Per comment request, I built a small runnable demo to showcase the weird effects of the canavas/scrolling issue:</p> <pre><code>import tkinter as tk def _on_click(event): print(&quot;On click imagine item...&quot;, ) global image_item3 canvas.delete(image_item3) def _on_click_canvas(e): print(&quot;coord {}, {}&quot;.format(e.x, e.y)) canvas.create_oval(e.x, e.y, (e.x+10), (e.y+10), fill='white') # Create root and canvas root = tk.Tk() width = 1024 height = 768 canvas = tk.Canvas( root, background='black', width=width, height=height, ) 
canvas.grid(row=0, column=0) image_item = canvas.create_oval((200, 200), (300, 300), fill='white') image_item2 = canvas.create_oval((300, 300), (400, 400), fill='white') global image_item3 image_item3 = canvas.create_oval((200, 900), (300, 1000), fill='white') canvas.tag_bind(image_item3, '&lt;Button-1&gt;', _on_click) canvas.bind('&lt;Button-1&gt;', _on_click_canvas) # configure the scroll region canvas.configure(scrollregion=(0, 0, width * 2, height * 2)) # create scrollbars and connect to canvas xscroll = tk.Scrollbar( root, command=canvas.xview, orient=tk.HORIZONTAL ) xscroll.grid(row=1, column=0, sticky='new') yscroll = tk.Scrollbar(root, command=canvas.yview) yscroll.grid(row=0, column=1, sticky='nsw') canvas.configure(yscrollcommand=yscroll.set) canvas.configure(xscrollcommand=xscroll.set) root.mainloop() </code></pre>
<python><tkinter><tkinter-canvas>
2023-02-07 21:55:27
1
571
Salvo
75,379,195
1,476,285
What workflow can I use for moving from Poetry environment from dev to prod?
<p>I've been coding small personal projects using <em>conda</em> for years. Recently I've been working on a higher volume of projects without the need for scientific packages, so I decided to switch over to standard Python. I am now using poetry as my package manager.</p> <p>My development environment is working fine, and I'm liking poetry overall. But it's time to start rolling out my apps to several different machines. I'm letting some of my workmates use my apps. I don't know how to go about rolling my projects out to them, as they are not developers.</p> <p>Part of my app has a system_tray.py which loads on startup (Windows), and is basically the launcher for all the additional *.py files that perform different functions. This is the issue I need to solve during the small rollout.</p> <p>system_tray.py</p> <pre><code>... args = ['C:\\poetry\\Cache\\virtualenvs\\classroom-central-3HYMdsQi-py3.11\\Scripts\\python.exe', 'ixl_grade_input.py'] subprocess.call(args) ... </code></pre> <p>This obviously triggers the running of a separate *.py file. What I'm not sure of is what workflow would deal with this situation. How would I make this work in a development environment, and be able to roll this out into a production environment?</p> <p>In my case, I can just manually modify everything and install Python on the individual machines along with pip install of the required modules, but this can't be the way a real rollout would go, and I'm looking to learn a more efficient method.</p> <p>I'm aware of, but have yet to use, a requirements.txt file. I first thought that perhaps you just set up virtualenvs to model after the production environment configurations. However, after trying to research this, it seems as though people roll their virtualenvs out into the production environments as well. I also understand that I can configure poetry config to install the venv into the project directory. Then just use a relative path in the &quot;args&quot;?</p>
<python><virtualenv><python-poetry>
2023-02-07 21:31:34
1
413
pedwards
75,379,185
880,874
How can I loop through my arrays and build a list of SQL INSERT commands?
<p>I'm trying to build some SQL INSERT commands using data from arrays.</p> <p>My problem is, I can't figure out how to iterate through all of them at the same time.</p> <p>Here is my beginning code:</p> <pre><code>import os raceValues = [ &quot;Human&quot;, &quot;Elf&quot;, &quot;Orc&quot; ] classValues = [ &quot;Fighter&quot;, &quot;Mage&quot;, &quot;Cleric&quot; ] alignmentValues = [ &quot;Good&quot;, &quot;Neutral&quot;, &quot;Evil&quot; ] for x in insertValues: characterData = &quot;&quot;&quot;INSERT INTO game.Characters(race, class, alignment) VALUES '{raceValues}', '{classValues}', '{alignmentValues}' &quot;&quot;&quot; for command in characterData.splitlines(): command = command.format(**data) print(command) </code></pre> <p>So for the above, I'm trying to get 3 INSERT statements using the data from the 3 arrays I defined.</p> <p>Is there a way to do this in Python 3?</p> <p>Thanks!</p>
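If the goal is one INSERT per index position (row i takes the i-th element of each list), `zip` walks the three lists in lockstep. A sketch — the f-string interpolation mirrors the question, but with a live database connection you would prefer parameterized queries (e.g. `cursor.execute(sql, params)`) so values are escaped for you:

```python
race_values = ["Human", "Elf", "Orc"]
class_values = ["Fighter", "Mage", "Cleric"]
alignment_values = ["Good", "Neutral", "Evil"]

statements = []
# zip yields one (race, class, alignment) triple per row
for race, char_class, alignment in zip(race_values, class_values, alignment_values):
    statements.append(
        "INSERT INTO game.Characters(race, class, alignment) "
        f"VALUES ('{race}', '{char_class}', '{alignment}');"
    )

for statement in statements:
    print(statement)
```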
<python><python-3.x>
2023-02-07 21:30:21
2
7,206
SkyeBoniwell
75,379,184
10,963,057
plotly range slider without showing line in small
<p>I want to use Plotly to generate a line chart with a range slider. The range slider shows the displayed line again. This code is just an example; in my case, I have a lot of subplots and everything is shown twice. Is it possible to show nothing, or only the date, in the range slider?</p> <pre><code>import plotly.express as px import yfinance as yf yf.pdr_override() df = yf.download(tickers='aapl' ,period='1d',interval='1m') fig = px.line(df, x = df.index, y = 'Close', title='Apple Stock Price') fig.update_layout( xaxis=dict( rangeselector=dict( buttons=list([ dict(count=1, label=&quot;1m&quot;, step=&quot;month&quot;, stepmode=&quot;backward&quot;), dict(step=&quot;all&quot;) ]) ), rangeslider=dict( visible=True ), type=&quot;date&quot; ) ) fig.show() </code></pre>
<python><plotly><visualization><rangeslider>
2023-02-07 21:30:19
1
1,151
Alex
75,379,119
9,816,919
How can I partition `itertools.combinations` such that I can process the results in parallel?
<p>I have a massive quantity of combinations (86 choose 10, which yields 3.5 trillion results) and I have written an algorithm which is capable of processing 500,000 combinations per second. I would not like to wait 81 days to see the final results, so naturally I am inclined to separate this into many processes to be handled by my many cores.</p> <p>Consider this naive approach:</p> <pre class="lang-py prettyprint-override"><code>import itertools from concurrent.futures import ProcessPoolExecutor def algorithm(combination): # returns a boolean in roughly 1/500000th of a second on average def process(combinations): for combination in combinations: if algorithm(combination): # will be very rare (a few hundred times out of trillions) if that matters print(&quot;Found matching combination!&quot;, combination) combination_generator = itertools.combinations(eighty_six_elements, 10) # My system will have 64 cores and 128 GiB of memory with ProcessPoolExecutor(workers=63) as executor: # assign 1,000,000 combinations to each process # it may be more performant to use larger batches (to avoid process startup overhead) # but eventually I need to start worrying about running out of memory group = [] for combination in combination_generator: group.append(combination) if len(group) &gt;= 1_000_000: executor.submit(process, group) group = [] </code></pre> <p>This code &quot;works&quot;, but it has virtually no performance gain over a single-threaded approach, since it is bottlenecked by the generation of the combinations <code>for combination in combination_generator</code>.</p> <p><strong>How can I pass this computation off to the child-processes so that it can be parallelized? How can each process generate a specific subset of <code>itertools.combinations</code>?</strong></p> <p>p.s. 
I found <a href="https://stackoverflow.com/questions/57075046/is-there-a-function-fn-that-returns-the-nth-combination-in-an-ordered-list-of">this answer</a>, but it only deals with generating single specified elements, whereas I need to efficiently generate millions of specified elements.</p>
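One standard trick is to key each chunk on its first element: a worker receives only `(first, remaining_elements, r)` and regenerates its slice of the space locally with `itertools.combinations`, so nothing large ever crosses the process boundary. A small sketch with a counting stand-in for the real `algorithm`; in the 86-choose-10 case you would submit the `chunk_args(...)` entries to the `ProcessPoolExecutor`, and could key on the first *two* elements for finer-grained load balancing, since later chunks are much smaller than earlier ones:

```python
from itertools import combinations
from math import comb

def chunk_args(elements, r):
    """Split combinations(elements, r) into independent chunks keyed by the
    first element; only (first, remaining, r) needs to reach each worker."""
    return [(elements[i], elements[i + 1:], r) for i in range(len(elements))]

def count_chunk(first, rest, r):
    """Stand-in for the real worker: regenerate this chunk's combinations."""
    n = 0
    for tail in combinations(rest, r - 1):
        n += 1  # algorithm((first,) + tail) would run here
    return n

# sanity check on a small instance: the chunks cover the space exactly once
total = sum(count_chunk(*args) for args in chunk_args(list(range(12)), 4))
print(total == comb(12, 4))  # True
```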
<python><math><combinations>
2023-02-07 21:22:11
2
853
Gaberocksall
75,379,085
459,975
Why is Pandas to_datetime stripping nanoseconds with one format string, but not another?
<p>I am attempting to convert a timestamp with nanoseconds to a Pandas datetime object via <code>pandas.to_datetime</code>.</p> <p>The following does what I expect:</p> <pre><code>print(pandas.to_datetime('2023-01-02 03:04:05.012345678', format='%Y-%m-%d %H:%M:%S.%f')) &gt; 2023-01-02 03:04:05.012345678 </code></pre> <p>However when I use the following slightly different time format, nanoseconds are stripped:</p> <pre><code>print(pandas.to_datetime('2023.01.02D03:04:05.012345678', format='%Y.%m.%dD%H:%M:%S.%f')) &gt; ValueError: unconverted data remains: 678 </code></pre> <p>What is the cause of this?</p> <p>This is with Python v3.9.0, Pandas v1.5.0</p>
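Pandas appears to route ISO-like formats (dashes, space separator) through its own nanosecond-capable parser, while other formats fall back to strptime-style parsing, whose `%f` stops at microseconds — hence the `unconverted data remains: 678`. One workaround grounded in the first (working) example above is to normalize the string into that shape before parsing:

```python
import pandas as pd

s = "2023.01.02D03:04:05.012345678"
# rewrite only the date separators and the "D": the count=2 limit on replace
# leaves the fractional-second "." untouched
iso = s.replace(".", "-", 2).replace("D", " ", 1)

ts = pd.to_datetime(iso, format="%Y-%m-%d %H:%M:%S.%f")
print(ts)  # 2023-01-02 03:04:05.012345678
```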
<python><pandas><datetime>
2023-02-07 21:16:33
1
4,601
Chuu
75,379,032
3,162,724
Python - Azure function service bus trigger batch processing
<p>I am using the Azure Functions service bus trigger in Python to receive messages in batches from a service bus queue. Even though this process is not well documented for Python, I managed to enable batch processing by following the GitHub PR below.</p> <p><a href="https://github.com/Azure/azure-functions-python-library/pull/73" rel="nofollow noreferrer">https://github.com/Azure/azure-functions-python-library/pull/73</a></p> <p>Here is the sample code I am using:</p> <p><strong>function.json</strong></p> <pre><code>{ &quot;scriptFile&quot;: &quot;__init__.py&quot;, &quot;bindings&quot;: [ { &quot;name&quot;: &quot;msg&quot;, &quot;type&quot;: &quot;serviceBusTrigger&quot;, &quot;direction&quot;: &quot;in&quot;, &quot;cardinality&quot;: &quot;many&quot;, &quot;queueName&quot;: &quot;&lt;some queue name&gt;&quot;, &quot;dataType&quot;: &quot;binary&quot;, &quot;connection&quot;: &quot;SERVICE_BUS_CONNECTION&quot; } ] } </code></pre> <p><strong>__init__.py</strong></p> <pre><code>import logging import azure.functions as func from typing import List def main(msg: List[func.ServiceBusMessage]): message_length = len(msg) if message_length &gt; 1: logging.warn('Handling multiple requests') for m in msg: #some call to external web api </code></pre> <p><strong>host.json</strong></p> <pre><code>{ &quot;version&quot;: &quot;2.0&quot;, &quot;extensionBundle&quot;: { &quot;id&quot;: &quot;Microsoft.Azure.Functions.ExtensionBundle&quot;, &quot;version&quot;: &quot;[3.3.0, 4.0.0)&quot; }, &quot;extensions&quot;: { &quot;serviceBus&quot;: { &quot;prefetchCount&quot;: 100, &quot;messageHandlerOptions&quot;: { &quot;autoComplete&quot;: true, &quot;maxConcurrentCalls&quot;: 32, &quot;maxAutoRenewDuration&quot;: &quot;00:05:00&quot; }, &quot;batchOptions&quot;: { &quot;maxMessageCount&quot;: 100, &quot;operationTimeout&quot;: &quot;00:01:00&quot;, &quot;autoComplete&quot;: true } } } } </code></pre> <p>After using this code, I can see that the service bus trigger is picking up messages in a batch of 100 (or sometimes &lt; 100) based
on the <strong>maxMessageCount</strong> but I have also observed that most of the messages are ending up in the dead letter queue with the <code>MaxDeliveryCountExceeded</code> reason code. I have tried with different values of <strong>MaxDeliveryCount</strong> from 10-20 but I had the same result. So my question is do we need to adjust/optimize the MaxDeliveryCount in case of batch processing of service bus messages ? How both of them are related ? What kind of change can be done in the configuration to avoid this dead letter issue ?</p>
<python><azure><azure-functions><azureservicebus><azure-python-sdk>
2023-02-07 21:11:23
1
5,992
Niladri
75,379,028
528,369
Fix mypy error (got "signedinteger[Any]", expected "int")
<p>Mypy says</p> <pre><code>first.py:11: error: Incompatible return value type (got &quot;signedinteger[Any]&quot;, expected &quot;int&quot;) [return-value] </code></pre> <p>for the code</p> <pre><code>import numpy as np import numpy.typing as npt def first_true_pos(x: npt.NDArray) -&gt; int: &quot;&quot;&quot; for logical ndarray x, returns the index of the first True value if any, otherwise -1 &quot;&quot;&quot; if len(x) &lt; 1: return -1 i = np.argmax(x) if x[i]: return i return -1 </code></pre> <p>How can the code be fixed to satisfy mypy?</p>
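A minimal fix: `np.argmax` returns a NumPy integer scalar (`np.intp`), which mypy does not treat as a builtin `int`; wrapping it in `int(...)` satisfies the annotation without changing behavior:

```python
import numpy as np
import numpy.typing as npt

def first_true_pos(x: npt.NDArray) -> int:
    """Index of the first True value in x if any, otherwise -1."""
    if len(x) < 1:
        return -1
    i = int(np.argmax(x))  # cast the np.intp result to a builtin int for mypy
    if x[i]:
        return i
    return -1

print(first_true_pos(np.array([False, True, True])))  # 1
```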
<python><mypy>
2023-02-07 21:10:59
0
2,605
Fortranner
75,379,000
2,654,794
python one liner for assignment in loop
<p>In two lines I can do something like:</p> <pre><code>for object in object_list: object.param = 10 </code></pre> <p>Is there any way to do this in one line?</p> <p>something like:</p> <pre><code>(object.param = 10) for object in object_list </code></pre>
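One-statement versions do exist — e.g. a throwaway comprehension over `setattr` — though the two-line loop above is generally considered more idiomatic; comprehensions used only for side effects are usually discouraged. A sketch with a hypothetical `Obj` class:

```python
class Obj:
    def __init__(self):
        self.param = 0

object_list = [Obj() for _ in range(3)]

# setattr(...) returns None; the list built here is discarded immediately
[setattr(o, "param", 10) for o in object_list]

print([o.param for o in object_list])  # [10, 10, 10]
```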
<python><one-liner>
2023-02-07 21:07:23
1
346
pongoS
75,378,976
4,784,683
Qt - does it make sense to call QTreeView::dataChanged() from within QTreeView::currentChanged()?
<p>I have existing code that I'm modifying.</p> <p>It derives a class, <code>MyTreeView</code>, from <code>QTreeview</code> and re-implements <code>currentChanged()</code>.</p> <p>Does it make sense to call <code>dataChanged()</code> here? Since <code>currentChanged()</code> is only called when the selection changes, it seems to be pointless to call <code>dataChanged()</code></p> <pre><code>class MyTreeView(QTreeView): def currentChanged(self, current, previous): #self.setCurrentIndex(current) logging.debug(f&quot;on current index {current}, \ object {QModelIndex.internalPointer(current)}&quot;) self.dataChanged(current,current) self.obj_changed_signal.emit(current) </code></pre>
<python><c++><qt>
2023-02-07 21:05:39
0
5,180
Bob
75,378,968
7,760,910
unable to fetch record based on a specific key in dynamo db and python
<p>I have a child class where I am trying to fetch the dynamo db data based on a specific key, I tried multiple variations of passing a value to the key but somehow I am getting an error on the response.</p> <pre><code>class AdGroupDetailsRepository(Repository): def __init__(self, client, table_name): super().__init__(client) self.table_name = table_name def _exec_find_by_id(self, id: str): logger = get_provisioner_logger() logger.info(&quot;Table name is %s&quot; % self.table_name) dynamo_table = self.client.Table(self.table_name) logger.info(&quot;Connected to the dynamo table...&quot;) logger.info(&quot;id is %s&quot; % id) item = dynamo_table.get_item(Key={'ad_group': &quot;bigdata&quot;}) logger.info(&quot;fetched account: &quot;, item['Items']) return AdGroupDetails(id, item['account'], item['role']) </code></pre> <p><strong>Error</strong>:</p> <p><a href="https://i.sstatic.net/JNhfN.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/JNhfN.png" alt="enter image description here" /></a></p> <p>I am just trying to fetch the results which is nothing but the dictionary in dynamo's case and out of that I need to fetch the specific column values.</p> <p>Dynamo table schema: <code>ad_group, account, role</code></p> <p>I am using the <code>dynamo resource</code> to connect to the dynamo table. Also, I am running it via lambda functions for the APIs.</p>
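Two things worth checking, sketched against a stubbed response since nothing below talks to AWS: boto3's `Table.get_item` returns the record under the key `'Item'` (singular) — `'Items'` is what `query` and `scan` return, so `item['Items']` raises `KeyError` — and the `Key=` dict must contain every element of the table's primary key (the sort key too, if the table defines one):

```python
# shape of a boto3 Table.get_item response (stubbed values for illustration)
response = {
    "Item": {"ad_group": "bigdata", "account": "acct-123", "role": "bigdata-role"},
    "ResponseMetadata": {"HTTPStatusCode": 200},
}

item = response.get("Item")  # "Item" is absent entirely when no record matches
if item is None:
    raise LookupError("no record found for that key")

account, role = item["account"], item["role"]
print(account, role)
```

Separately, `logging.info("fetched account: ", item['Items'])` passes the value as a %-formatting argument with no placeholder; `logging.info("fetched account: %s", ...)` is the usual form.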
<python><amazon-web-services><amazon-dynamodb>
2023-02-07 21:05:05
1
2,177
whatsinthename
75,378,950
1,783,632
subprocess.run gives me error "UnicodeDecodeError: 'ascii' codec can't decode byte 0xe2 in position 0: ordinal not in range(128)" python 3.6
<p>I wrote a script in python3.9 and used subprocess.run to run shell scripts from inside Python. As a requirement, I needed to downgrade to <strong>python3.6</strong>, and some of the code needed to change.</p> <p>What hasn't changed is the way I run the subprocesses, but for some reason, I now get an error like:</p> <pre><code>UnicodeDecodeError: 'ascii' codec can't decode byte 0xe2 in position 0: ordinal not in range(128) </code></pre> <p>This is the line that failed; I tried to add <em><strong>&quot;encoding='utf_8' &quot;</strong></em> based on what I read, but it didn't help:</p> <pre><code>smc_shell_script = subprocess.run(['sh','smc.sh'], stdout=subprocess.PIPE, stderr=subprocess.STDOUT, universal_newlines=True, encoding='utf_8') </code></pre> <p>When I reuse this code to run other commands, some of them work and some do not, which is probably a result of the output of the command I am running.</p> <p>Any help will be appreciated.</p>
<python><python-3.x><python-3.6>
2023-02-07 21:03:31
1
6,237
Shahar Hamuzim Rajuan
75,378,903
7,951,365
Create clusters depending on scores performance
<p>I have data from students who took a test that has 2 sections: the 1st section tests their digital skills at level 2, and the second section tests their digital skills at level 3. I need to come up with 3 clusters of students depending on their scores to place them in 3 different skill levels (1, 2 and 3) --&gt; code sample below</p> <pre><code>import pandas as pd data = [12,24,14,20,8,10,5,23] # initialize data of lists. data = {'Name': ['Marc','Fay', 'Emile','bastian', 'Karine','kathia', 'John','moni'], 'Scores_section1': [12,24,14,20,8,10,5,23], 'Scores_section2' : [20,4,1,0,18,9,12,10], 'Sum_all_scores': [32,28,15,20,26,19,17,33]} # Create DataFrame df = pd.DataFrame(data) # print dataframe. df </code></pre> <p>I thought about using K-means clustering, but following a tutorial online, I'd need to use x,y coordinates. Should I use scores_section1 as x, and Scores_section2 as y or vice-versa, and why?</p> <p>Many thanks in advance for your help!</p>
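On the x/y point: KMeans is not limited to two "coordinates" — each score column is simply a feature, and neither one is privileged as x or y (that framing comes from 2-D plotting tutorials), so the order does not matter. A sketch assuming scikit-learn is available:

```python
import numpy as np
from sklearn.cluster import KMeans

scores_section1 = [12, 24, 14, 20, 8, 10, 5, 23]
scores_section2 = [20, 4, 1, 0, 18, 9, 12, 10]

# each student is one point with two features; KMeans treats the columns
# symmetrically, so there is no "right" choice of x vs. y
X = np.column_stack([scores_section1, scores_section2])

labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)
print(labels)  # one cluster id (0, 1 or 2) per student
```

Note that KMeans labels are arbitrary: mapping cluster 0/1/2 onto skill levels 1/2/3 is a separate step (e.g. by ranking the clusters' mean scores).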
<python><cluster-analysis><k-means>
2023-02-07 20:57:12
1
490
Kathia
75,378,747
10,266,106
Moving Window Calculation Across Multiple Arrays
<p>I have several two-dimensional data arrays loaded into NumPy arrays, all of which have identical dimensions. These shared dimensions are 2606x and 1228y.</p> <p>I am interested in computing calculations between the first two arrays (a1 &amp; a2) across a moving window, using a window sized 2x by 2y, with the resultant calculation then applied to the third array. Specifically, the workflow would be:</p> <ol> <li>Finding the maximum &amp; minimum value of a1 within the current piece of this moving window</li> <li>Selecting the corresponding array indices of these values</li> <li>Extracting the values at these indices of a2</li> <li>Cast the calculation result to each of the indices in the third array (a3) inside the moving window.</li> </ol> <p>I know that this process involves the following pieces of code to obtain the values I require:</p> <pre><code>idx1 = np.where(a1 == a1.max()) idx2 = np.where(a1 == a1.min()) val1 = a2[idx1[1], idx1[2]] val2 = a2[idx2[1], idx2[2]] </code></pre> <p>What additional code is required to perform this moving window along the identically sized arrays?</p>
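A sketch assuming non-overlapping 2x2 blocks (2606 and 1228 are both even) and using the difference of the two extracted a2 values as a stand-in for the unspecified calculation — note that for a 2-D array, argmax/argmin positions unravel to (row, col), i.e. indices 0 and 1:

```python
import numpy as np

def block_transfer(a1, a2, size=2):
    """Per non-overlapping size x size block: locate a1's max/min, read a2
    at those positions, and broadcast the (assumed) result into a3's block."""
    a3 = np.empty(a1.shape, dtype=float)
    h, w = a1.shape
    for i in range(0, h, size):
        for j in range(0, w, size):
            block = a1[i:i + size, j:j + size]
            # convert the flat argmax/argmin back to 2-D offsets in the block
            rmax, cmax = np.unravel_index(np.argmax(block), block.shape)
            rmin, cmin = np.unravel_index(np.argmin(block), block.shape)
            vmax = a2[i + rmax, j + cmax]
            vmin = a2[i + rmin, j + cmin]
            a3[i:i + size, j:j + size] = vmax - vmin  # stand-in calculation
    return a3

a1 = np.arange(16).reshape(4, 4)
a3 = block_transfer(a1, a1 * 10)
print(a3[0, 0])  # 50.0: a2 at a1's block max (5 -> 50) minus at its min (0 -> 0)
```

If the window is instead meant to slide one cell at a time (overlapping), the same body works with `range(h - size + 1)` steps of 1, at the cost of writing each cell repeatedly.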
<python><arrays><numpy><numpy-ndarray>
2023-02-07 20:37:21
1
431
TornadoEric
75,378,726
3,919,277
How can I set up Python virtual environment in VScode so that it views locally-installed modules?
<p>I have created a virtual environment via the command line</p> <pre><code>python3.11 -m venv . source ./bin/activate python -m pip install NAME_OF_MODULE source deactivate </code></pre> <p>I can see the installed modules when I run <code>pip freeze</code> (prior to deactiving). So far so good.</p> <p>Then I launch VSCode, open a file and using the command palette, I click <code>Python: Select Interpreter</code>. I then navigate in the bin folder of the virtual environment to the Python installation, which consists of a short-cut / alias pointing to a global Python installation.</p> <p>When I do this, I cannot import Python modules located in the virtual environment, only those in the global environment. In other words, it appears to be selecting the global environment.</p> <p>Do I need to set this up within VSCode (<code>Python: Create Environment</code>) ? If so, I can only get so far as the official instructions (<a href="https://code.visualstudio.com/docs/python/environments" rel="nofollow noreferrer">https://code.visualstudio.com/docs/python/environments</a>) do not cover installing packages within a virtual environment.</p> <p>Thanks</p>
<python><visual-studio-code>
2023-02-07 20:35:01
1
421
Nick Riches
75,378,642
11,814,875
Scrapy redirecting [302 to 200] but url isn't updating
<p>I'm trying to fetch redirected url from a url using Scrapy</p> <p>Response status changes from 302 to 200 but still the url isn't changing.</p> <pre class="lang-py prettyprint-override"><code>from scrapy import Spider from scrapy.crawler import CrawlerProcess class MySpider(Spider): name = 'test' start_urls = ['https://news.google.com/rss/articles/CBMilwFodHRwczovL3d3dy5waW5rdmlsbGEuY29tL2VudGVydGFpbm1lbnQvYnRzLWppbi1zaGFyZXMtYmVoaW5kLXRoZS1zY2VuZXMtb2YtdGhlLWFzdHJvbmF1dC1zdGFnZS13aXRoLWNvbGRwbGF5LWJhbmQtcGVyZm9ybXMtb24tc25sLXdpdGgtd29vdHRlby0xMjA4MjQx0gGbAWh0dHBzOi8vd3d3LnBpbmt2aWxsYS5jb20vZW50ZXJ0YWlubWVudC9idHMtamluLXNoYXJlcy1iZWhpbmQtdGhlLXNjZW5lcy1vZi10aGUtYXN0cm9uYXV0LXN0YWdlLXdpdGgtY29sZHBsYXktYmFuZC1wZXJmb3Jtcy1vbi1zbmwtd2l0aC13b290dGVvLTEyMDgyNDE_YW1w'] def parse(self, response): yield { 'url': response.url, } process = CrawlerProcess(settings={ &quot;FEEDS&quot;: { &quot;items.json&quot;: { &quot;format&quot;: &quot;json&quot;, &quot;overwrite&quot;: True }}, 'ROBOTSTXT_OBEY': False, 'FEED_EXPORT_ENCODING': 'utf-8', 'REDIRECT_ENABLED': True, 'REQUEST_FINGERPRINTER_IMPLEMENTATION': '2.7' }) process.crawl(MySpider) process.start() </code></pre> <p>Console Output</p> <pre><code>2023-02-08 01:44:25 [scrapy.utils.log] INFO: Scrapy 2.8.0 started (bot: scrapybot) 2023-02-08 01:44:25 [scrapy.utils.log] INFO: Versions: lxml 4.9.2.0, libxml2 2.9.12, cssselect 1.2.0, parsel 1.7.0, w3lib 2.1.1, Twisted 22.10.0, Python 3.8.10 (tags/v3.8.10:3d8993a, May 3 2021, 11:48:03) [MSC v.1928 64 bit (AMD64)], pyOpenSSL 23.0.0 (OpenSSL 3.0.8 7 Feb 2023), cryptography 39.0.1, Platform Windows-10-10.0.22621-SP0 2023-02-08 01:44:25 [scrapy.crawler] INFO: Overridden settings: {'FEED_EXPORT_ENCODING': 'utf-8', 'REQUEST_FINGERPRINTER_IMPLEMENTATION': '2.7'} 2023-02-08 01:44:25 [scrapy.utils.log] DEBUG: Using reactor: twisted.internet.selectreactor.SelectReactor 2023-02-08 01:44:25 [scrapy.extensions.telnet] INFO: Telnet Password: 013f29d178b8cbb6 2023-02-08 01:44:25 
[scrapy.middleware] INFO: Enabled extensions: ['scrapy.extensions.corestats.CoreStats', 'scrapy.extensions.telnet.TelnetConsole', 'scrapy.extensions.feedexport.FeedExporter', 'scrapy.extensions.logstats.LogStats'] 2023-02-08 01:44:26 [scrapy.middleware] INFO: Enabled downloader middlewares: ['scrapy.downloadermiddlewares.httpauth.HttpAuthMiddleware', 'scrapy.downloadermiddlewares.downloadtimeout.DownloadTimeoutMiddleware', 'scrapy.downloadermiddlewares.defaultheaders.DefaultHeadersMiddleware', 'scrapy.downloadermiddlewares.useragent.UserAgentMiddleware', 'scrapy.downloadermiddlewares.retry.RetryMiddleware', 'scrapy.downloadermiddlewares.redirect.MetaRefreshMiddleware', 'scrapy.downloadermiddlewares.httpcompression.HttpCompressionMiddleware', 'scrapy.downloadermiddlewares.redirect.RedirectMiddleware', 'scrapy.downloadermiddlewares.cookies.CookiesMiddleware', 'scrapy.downloadermiddlewares.httpproxy.HttpProxyMiddleware', 'scrapy.downloadermiddlewares.stats.DownloaderStats'] 2023-02-08 01:44:26 [scrapy.middleware] INFO: Enabled spider middlewares: ['scrapy.spidermiddlewares.httperror.HttpErrorMiddleware', 'scrapy.spidermiddlewares.offsite.OffsiteMiddleware', 'scrapy.spidermiddlewares.referer.RefererMiddleware', 'scrapy.spidermiddlewares.urllength.UrlLengthMiddleware', 'scrapy.spidermiddlewares.depth.DepthMiddleware'] 2023-02-08 01:44:26 [scrapy.middleware] INFO: Enabled item pipelines: [] 2023-02-08 01:44:26 [scrapy.core.engine] INFO: Spider opened 2023-02-08 01:44:26 [scrapy.extensions.logstats] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min) 2023-02-08 01:44:26 [scrapy.extensions.telnet] INFO: Telnet console listening on 127.0.0.1:6023 2023-02-08 01:44:26 [scrapy.downloadermiddlewares.redirect] DEBUG: Redirecting (302) to &lt;GET 
https://news.google.com/rss/articles/CBMilwFodHRwczovL3d3dy5waW5rdmlsbGEuY29tL2VudGVydGFpbm1lbnQvYnRzLWppbi1zaGFyZXMtYmVoaW5kLXRoZS1zY2VuZXMtb2YtdGhlLWFzdHJvbmF1dC1zdGFnZS13aXRoLWNvbGRwbGF5LWJhbmQtcGVyZm9ybXMtb24tc25sLXdpdGgtd29vdHRlby0xMjA4MjQx0gGbAWh0dHBzOi8vd3d3LnBpbmt2aWxsYS5jb20vZW50ZXJ0YWlubWVudC9idHMtamluLXNoYXJlcy1iZWhpbmQtdGhlLXNjZW5lcy1vZi10aGUtYXN0cm9uYXV0LXN0YWdlLXdpdGgtY29sZHBsYXktYmFuZC1wZXJmb3Jtcy1vbi1zbmwtd2l0aC13b290dGVvLTEyMDgyNDE_YW1w?hl=en-IN&amp;gl=IN&amp;ceid=IN:en&gt; from &lt;GET https://news.google.com/rss/articles/CBMilwFodHRwczovL3d3dy5waW5rdmlsbGEuY29tL2VudGVydGFpbm1lbnQvYnRzLWppbi1zaGFyZXMtYmVoaW5kLXRoZS1zY2VuZXMtb2YtdGhlLWFzdHJvbmF1dC1zdGFnZS13aXRoLWNvbGRwbGF5LWJhbmQtcGVyZm9ybXMtb24tc25sLXdpdGgtd29vdHRlby0xMjA4MjQx0gGbAWh0dHBzOi8vd3d3LnBpbmt2aWxsYS5jb20vZW50ZXJ0YWlubWVudC9idHMtamluLXNoYXJlcy1iZWhpbmQtdGhlLXNjZW5lcy1vZi10aGUtYXN0cm9uYXV0LXN0YWdlLXdpdGgtY29sZHBsYXktYmFuZC1wZXJmb3Jtcy1vbi1zbmwtd2l0aC13b290dGVvLTEyMDgyNDE_YW1w&gt; 2023-02-08 01:44:27 [scrapy.core.engine] DEBUG: Crawled (200) &lt;GET https://news.google.com/rss/articles/CBMilwFodHRwczovL3d3dy5waW5rdmlsbGEuY29tL2VudGVydGFpbm1lbnQvYnRzLWppbi1zaGFyZXMtYmVoaW5kLXRoZS1zY2VuZXMtb2YtdGhlLWFzdHJvbmF1dC1zdGFnZS13aXRoLWNvbGRwbGF5LWJhbmQtcGVyZm9ybXMtb24tc25sLXdpdGgtd29vdHRlby0xMjA4MjQx0gGbAWh0dHBzOi8vd3d3LnBpbmt2aWxsYS5jb20vZW50ZXJ0YWlubWVudC9idHMtamluLXNoYXJlcy1iZWhpbmQtdGhlLXNjZW5lcy1vZi10aGUtYXN0cm9uYXV0LXN0YWdlLXdpdGgtY29sZHBsYXktYmFuZC1wZXJmb3Jtcy1vbi1zbmwtd2l0aC13b290dGVvLTEyMDgyNDE_YW1w?hl=en-IN&amp;gl=IN&amp;ceid=IN:en&gt; (referer: None) 2023-02-08 01:44:27 [scrapy.core.scraper] DEBUG: Scraped from &lt;200 
https://news.google.com/rss/articles/CBMilwFodHRwczovL3d3dy5waW5rdmlsbGEuY29tL2VudGVydGFpbm1lbnQvYnRzLWppbi1zaGFyZXMtYmVoaW5kLXRoZS1zY2VuZXMtb2YtdGhlLWFzdHJvbmF1dC1zdGFnZS13aXRoLWNvbGRwbGF5LWJhbmQtcGVyZm9ybXMtb24tc25sLXdpdGgtd29vdHRlby0xMjA4MjQx0gGbAWh0dHBzOi8vd3d3LnBpbmt2aWxsYS5jb20vZW50ZXJ0YWlubWVudC9idHMtamluLXNoYXJlcy1iZWhpbmQtdGhlLXNjZW5lcy1vZi10aGUtYXN0cm9uYXV0LXN0YWdlLXdpdGgtY29sZHBsYXktYmFuZC1wZXJmb3Jtcy1vbi1zbmwtd2l0aC13b290dGVvLTEyMDgyNDE_YW1w?hl=en-IN&amp;gl=IN&amp;ceid=IN:en&gt; {'url': 'https://news.google.com/rss/articles/CBMilwFodHRwczovL3d3dy5waW5rdmlsbGEuY29tL2VudGVydGFpbm1lbnQvYnRzLWppbi1zaGFyZXMtYmVoaW5kLXRoZS1zY2VuZXMtb2YtdGhlLWFzdHJvbmF1dC1zdGFnZS13aXRoLWNvbGRwbGF5LWJhbmQtcGVyZm9ybXMtb24tc25sLXdpdGgtd29vdHRlby0xMjA4MjQx0gGbAWh0dHBzOi8vd3d3LnBpbmt2aWxsYS5jb20vZW50ZXJ0YWlubWVudC9idHMtamluLXNoYXJlcy1iZWhpbmQtdGhlLXNjZW5lcy1vZi10aGUtYXN0cm9uYXV0LXN0YWdlLXdpdGgtY29sZHBsYXktYmFuZC1wZXJmb3Jtcy1vbi1zbmwtd2l0aC13b290dGVvLTEyMDgyNDE_YW1w?hl=en-IN&amp;gl=IN&amp;ceid=IN:en'} 2023-02-08 01:44:27 [scrapy.core.engine] INFO: Closing spider (finished) 2023-02-08 01:44:27 [scrapy.extensions.feedexport] INFO: Stored json feed (1 items) in: items.json 2023-02-08 01:44:27 [scrapy.statscollectors] INFO: Dumping Scrapy stats: {'downloader/request_bytes': 1561, 'downloader/request_count': 2, 'downloader/request_method_count/GET': 2, 'downloader/response_bytes': 104182, 'downloader/response_count': 2, 'downloader/response_status_count/200': 1, 'downloader/response_status_count/302': 1, 'elapsed_time_seconds': 1.162381, 'feedexport/success_count/FileFeedStorage': 1, 'finish_reason': 'finished', 'finish_time': datetime.datetime(2023, 2, 7, 20, 14, 27, 736096), 'httpcompression/response_bytes': 302666, 'httpcompression/response_count': 1, 'item_scraped_count': 1, 'log_count/DEBUG': 4, 'log_count/INFO': 11, 'response_received_count': 1, 'scheduler/dequeued': 2, 'scheduler/dequeued/memory': 2, 'scheduler/enqueued': 2, 'scheduler/enqueued/memory': 2, 'start_time': 
datetime.datetime(2023, 2, 7, 20, 14, 26, 573715)} 2023-02-08 01:44:27 [scrapy.core.engine] INFO: Spider closed (finished) </code></pre> <p>item.json</p> <pre class="lang-json prettyprint-override"><code>[ {&quot;url&quot;: &quot;https://news.google.com/rss/articles/CBMilwFodHRwczovL3d3dy5waW5rdmlsbGEuY29tL2VudGVydGFpbm1lbnQvYnRzLWppbi1zaGFyZXMtYmVoaW5kLXRoZS1zY2VuZXMtb2YtdGhlLWFzdHJvbmF1dC1zdGFnZS13aXRoLWNvbGRwbGF5LWJhbmQtcGVyZm9ybXMtb24tc25sLXdpdGgtd29vdHRlby0xMjA4MjQx0gGbAWh0dHBzOi8vd3d3LnBpbmt2aWxsYS5jb20vZW50ZXJ0YWlubWVudC9idHMtamluLXNoYXJlcy1iZWhpbmQtdGhlLXNjZW5lcy1vZi10aGUtYXN0cm9uYXV0LXN0YWdlLXdpdGgtY29sZHBsYXktYmFuZC1wZXJmb3Jtcy1vbi1zbmwtd2l0aC13b290dGVvLTEyMDgyNDE_YW1w?hl=en-IN&amp;gl=IN&amp;ceid=IN:en&quot;} ] </code></pre> <p>I expect the url to <code>https://www.pinkvilla.com/entertainment/bts-jin-shares-behind-the-scenes-of-the-astronaut-stage-with-coldplay-band-performs-on-snl-with-wootteo-1208241</code></p> <p>I've tried setting params such as dont_redirect, handle_httpstatus_list, etc. but nothing working out</p> <p>What am I missing? Any guidance would be helpful</p>
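For readers hitting the same redirect loop: at the time of this question, the opaque segment after `/rss/articles/` was URL-safe base64 whose payload embeds the destination URL, so it could be unpacked locally without following any redirect at all. This is an observed encoding, not a documented API, so Google may change it; the token below is copied verbatim from the log above.

```python
import base64
import re

# Opaque article id from the question's feed item (copied verbatim,
# query string dropped):
token = "CBMilwFodHRwczovL3d3dy5waW5rdmlsbGEuY29tL2VudGVydGFpbm1lbnQvYnRzLWppbi1zaGFyZXMtYmVoaW5kLXRoZS1zY2VuZXMtb2YtdGhlLWFzdHJvbmF1dC1zdGFnZS13aXRoLWNvbGRwbGF5LWJhbmQtcGVyZm9ybXMtb24tc25sLXdpdGgtd29vdHRlby0xMjA4MjQx0gGbAWh0dHBzOi8vd3d3LnBpbmt2aWxsYS5jb20vZW50ZXJ0YWlubWVudC9idHMtamluLXNoYXJlcy1iZWhpbmQtdGhlLXNjZW5lcy1vZi10aGUtYXN0cm9uYXV0LXN0YWdlLXdpdGgtY29sZHBsYXktYmFuZC1wZXJmb3Jtcy1vbi1zbmwtd2l0aC13b290dGVvLTEyMDgyNDE_YW1w"

# The id is URL-safe base64; the decoded payload embeds the destination
# URL(s) as printable runs, so a regex can pull them out:
decoded = base64.urlsafe_b64decode(token + "=" * (-len(token) % 4))
urls = [m.decode() for m in re.findall(rb"https?://[\x20-\x7e]+", decoded)]
print(urls[0])
```

The first match is the plain article URL and the second the AMP variant; this only recovers the target URL, it does not fetch the page.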
<python><python-3.x><web-scraping><scrapy>
2023-02-07 20:25:26
1
491
rish_hyun
75,378,245
3,403,293
GeoPandas.explore MultiPolygon not serializable
<p>I have my own dataset containing MultiPolygon definitions, and I want to plot them using GeoPandas <code>explore</code> function. I have followed the steps outlined at: <a href="https://geopandas.org/en/stable/getting_started/introduction.html" rel="nofollow noreferrer">https://geopandas.org/en/stable/getting_started/introduction.html</a>, but at step 10 of their Jupyter notebook, I get an error</p> <pre><code>TypeError: Object of type MultiPolygon is not JSON serializable </code></pre> <p>Here is an example row from my GeoDataFrame:</p> <div class="s-table-container"> <table class="s-table"> <thead> <tr> <th></th> <th>_id</th> <th>geometry</th> </tr> </thead> <tbody> <tr> <td>0</td> <td>425c...</td> <td>MULTIPOLYGON (((-13143224.221 4028014.525, -13...</td> </tr> </tbody> </table> </div> <p>I have <code>geopandas</code> version 0.12.2 and <code>folium</code> version 0.14.0 installed.</p> <p>I have also tried to convert the <code>geometry</code> column using <code>json.dumps</code> on the output of <code>shapely.geometry.mapping</code> before calling <code>explore</code>, but then I get this warning:</p> <pre><code>UserWarning: Geometry column does not contain geometry. </code></pre> <p>Trying to plot the mapped values using <code>explore</code> gives me this error:</p> <pre><code>TypeError: Input must be valid geometry objects </code></pre> <p>This all seems contradictory, so I have no idea what is going on.</p>
<python><geopandas><shapely>
2023-02-07 19:40:51
1
320
wikenator
75,378,240
4,772,565
How to use absolute import in source code when using requirements.txt, Python3?
<p>I have a Python project with this directory structure:</p> <pre><code>my_project | |_venv/ | |_requirements.txt | |_src | |_main.py | |_input_data.py </code></pre> <p>The contents of <code>requirements.txt</code> are:</p> <pre><code>numpy pydantic </code></pre> <p>The contents of <code>input_data.py</code> are:</p> <pre class="lang-py prettyprint-override"><code>class A: name = &quot;a1&quot; </code></pre> <p>The contents of <code>main.py</code> are:</p> <pre class="lang-py prettyprint-override"><code>from src.input_data import A print(f&quot;{A.name}&quot;) </code></pre> <p>I run <code>pip install -r requirements.txt</code> first, and then I run this command:</p> <pre><code>python src/main.py </code></pre> <p>I get the following error message:</p> <pre><code>Traceback (most recent call last): File &quot;.../my_project/src/main.py&quot;, line 1, in &lt;module&gt; from src.input_data import A ModuleNotFoundError: No module named 'src' </code></pre> <p>It seems like the project itself is not recognised as a package. Could you please help me with a solution?</p> <p>I know that using a <code>setup.py</code> and the command <code>pip install -e .</code> can make it work. However, I want to stick with <code>requirements.txt</code> in order to use <code>docker</code>.</p> <p>I found a work-around, which is moving <code>main.py</code> to the root directory <code>my_project</code>. But are there other solutions that do not change the directory structure given above?</p> <p>Thanks in advance.</p>
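A hedged sketch of why `python src/main.py` fails and one way around it without `setup.py`: running the script directly puts `my_project/src` on `sys.path`, so the package `src` itself is invisible, while running it as a module from the project root puts `my_project` on the path. The layout below is rebuilt in a temp dir using the names from the question.

```python
import os
import subprocess
import sys
import tempfile

# Rebuild the question's layout in a temp dir (file contents are the asker's):
proj = os.path.join(tempfile.mkdtemp(), "my_project")
os.makedirs(os.path.join(proj, "src"))
with open(os.path.join(proj, "src", "input_data.py"), "w") as f:
    f.write('class A:\n    name = "a1"\n')
with open(os.path.join(proj, "src", "main.py"), "w") as f:
    f.write("from src.input_data import A\nprint(A.name)\n")

# "python src/main.py" puts my_project/src on sys.path, hiding the
# package "src".  Running main as a *module* from the project root puts
# my_project on sys.path instead, and the absolute import resolves:
out = subprocess.run(
    [sys.executable, "-m", "src.main"],
    cwd=proj, capture_output=True, text=True,
)
print(out.stdout.strip())  # a1
```

The same idea carries over to Docker: set `WORKDIR` to the project root and run `python -m src.main`, or export `PYTHONPATH` pointing at the project root.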
<python><pip><python-import><requirements.txt>
2023-02-07 19:40:09
2
539
aura
75,378,119
303,056
Pulumi runtime can't find pulumi library
<p>I keep hitting this problem where the <code>pulumi</code> CLI tool runs, but whenever I try to do anything with it like <code>pulumi up</code> or <code>pulumi preview</code> it gives me this error that it can't find its own library:</p> <pre><code> Traceback (most recent call last): File &quot;/home/ubuntu/.pulumi/bin/pulumi-language-python-exec&quot;, line 16, in &lt;module&gt; import pulumi ModuleNotFoundError: No module named 'pulumi' </code></pre> <p>The last time I solved this I realized the root cause was that I'm using conda to manage my python environments. The tool suggests <code>pip install pulumi</code>, which doesn't really work because then it'll just complain about <code>No module named 'pulumi_aws'</code> etc.</p> <p>I've forgotten how I got around the problem, so I'm putting the question up here so that when I figure it out I'll have a good place to post the solution. Or maybe somebody already knows the answer.</p>
<python><conda><pulumi><pulumi-python>
2023-02-07 19:29:19
1
42,939
Leopd
75,378,061
528,369
How to use mypy to ensure that a NumPy array of floats is passed as function argument?
<p>Can mypy check that a NumPy array of floats is passed as a function argument? For the code below mypy is silent when an array of integers or booleans is passed.</p> <pre><code>import numpy as np import numpy.typing as npt def half(x: npt.NDArray[np.cfloat]): return x/2 print(half(np.full(4,2.1))) print(half(np.full(4,6))) # want mypy to complain about this print(half(np.full(4,True))) # want mypy to complain about this </code></pre>
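A hedged aside on the annotation itself: `np.cfloat` is an alias of `complex128`, so the signature in the question describes a complex array rather than floats. A `float64` sketch follows, with a caveat about what mypy can actually see at the call sites shown.

```python
import numpy as np
import numpy.typing as npt

# np.float64 (rather than np.cfloat, which is complex128) is the usual
# parameter for "array of floats":
def half(x: npt.NDArray[np.float64]) -> npt.NDArray[np.float64]:
    return x / 2

# Caveat: in numpy's bundled stubs, np.full is typed to return an
# Any-dtype array, and NDArray[Any] matches every annotation, so mypy
# stays silent for np.full(4, 6) no matter what `half` declares.  The
# mismatch only becomes visible when the argument's dtype is statically
# known, e.g.:
ints: npt.NDArray[np.int64] = np.zeros(4, dtype=np.int64)
floats: npt.NDArray[np.float64] = np.zeros(4)
print(half(floats))   # fine at runtime and for mypy
print(half(ints))     # runs, but mypy should flag the int64 array
```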
<python><numpy><numpy-ndarray><mypy>
2023-02-07 19:23:42
1
2,605
Fortranner
75,378,037
8,187,191
expo-av recording to flask api
<p>I am trying to build an app using React-Native that will record a sound on a mobile device, using expo-av, and send it to a Flask API.</p> <p><strong>The problem</strong> is that the data the API receives from expo-av is in an unexpected format. I would like to retrieve the signal/wave data of the sound, but when I plot the data to inspect it I get plot-A below, instead of something more like plot-B.</p> <p>This resource was very useful and contains a fully reproducible example: <a href="https://www.tderflinger.com/en/react-native-audio-recording-flask" rel="nofollow noreferrer">https://www.tderflinger.com/en/react-native-audio-recording-flask</a></p> <p>How can I retrieve in Python (flask) the sample data that could help me produce a plot like <a href="https://stackoverflow.com/questions/18625085/how-to-plot-a-wav-file">these</a>? And what does plot-A represent?</p> <p><strong>plot-A:</strong></p> <p><a href="https://i.sstatic.net/xhJyN.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/xhJyN.png" alt="enter image description here" /></a></p> <p><strong>plot-B</strong></p> <p><a href="https://i.sstatic.net/C4nEO.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/C4nEO.png" alt="enter image description here" /></a></p> <p><strong>Python code for plot-A:</strong></p> <pre><code>import numpy as np import matplotlib.pyplot as plt # This is a preview of the data from expo-av # I made the flask api print(request.get_data()) and copied it from the terminal string = b'\x00\x00\x00\x18ftyp3gp4\x00\x00\ ... x00\x00\x00(' # binary to np array data = np.frombuffer(string, dtype=np.uint8) plt.plot(data) plt.show() </code></pre> <p>Thank you</p>
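A sketch of what plot-A represents: the bytes expo-av posts are a container file, not raw PCM samples, so `np.frombuffer` is plotting the file's structure. The header shown in the question identifies the container (only the bytes visible in the question are used here).

```python
# First bytes from the question's dump; an ISO base-media "box" starts
# with a 4-byte big-endian size, a 4-byte type, then (for ftyp) a brand:
header = b"\x00\x00\x00\x18ftyp3gp4"

size = int.from_bytes(header[:4], "big")   # length of the first box
box_type = header[4:8].decode("ascii")     # box type
brand = header[8:12].decode("ascii")       # major brand

print(size, box_type, brand)  # 24 ftyp 3gp4 -> a 3GP container file
```

So the `request.get_data()` payload should be saved to disk (e.g. as `.3gp`) and decoded to PCM first, for instance with ffmpeg or pydub, before a waveform like plot-B can be drawn.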
<python><react-native><flask><expo>
2023-02-07 19:20:42
1
1,028
Claudio Paladini
75,378,025
3,763,616
How to complete a self join in python polars vs pandas sql?
<p>I am trying to use python polars over pandas sql for a large dataframe as I am running into memory errors. There are two where conditions that are utilized in this dataframe but can't get the syntax right.</p> <p>Here is what the data looks like:</p> <div class="s-table-container"> <table class="s-table"> <thead> <tr> <th>Key</th> <th>Field</th> <th>DateColumn</th> </tr> </thead> <tbody> <tr> <td>1234</td> <td>Plumb</td> <td>2020-02-01</td> </tr> <tr> <td>1234</td> <td>Plumb</td> <td>2020-03-01</td> </tr> <tr> <td>1234</td> <td>Pear</td> <td>2020-04-01</td> </tr> </tbody> </table> </div> <pre><code>import pandas as pd import datetime as dt import pandasql as ps d = {'Key': [1234, 1234, 1234, 1234, 1234, 1234, 1234, 1234, 1234, 1234, 1234, 2456, 2456, 2456, 2456, 2456, 2456, 2456, 2456, 2456, 2456, 2456, 3754, 3754, 3754, 3754, 3754, 3754, 3754, 3754, 3754, 3754, 3754], 'Field':[ &quot;Plumb&quot;, &quot;Plumb&quot;, &quot;Pear&quot;, &quot;Plumb&quot;, &quot;Orange&quot;, &quot;Pear&quot;, &quot;Plumb&quot;, &quot;Plumb&quot;, &quot;Pear&quot;, &quot;Apple&quot;, &quot;Plumb&quot;, &quot;Orange&quot;, &quot;Orange&quot;, &quot;Apple&quot;, &quot;Apple&quot;, &quot;Pear&quot;, &quot;Apple&quot;, &quot;Plumb&quot;, &quot;Plumb&quot;, &quot;Orange&quot;, &quot;Orange&quot;, &quot;Pear&quot;, &quot;Plumb&quot;, &quot;Pear&quot;, &quot;Plumb&quot;, &quot;Pear&quot;, &quot;Apple&quot;, &quot;Plumb&quot;, &quot;Orange&quot;, &quot;Pear&quot;, &quot;Apple&quot;, &quot;Pear&quot;, &quot;Apple&quot;], 'DateColumn':[ '2020-02-01', '2020-03-01', '2020-04-01', '2020-05-01', '2020-06-01', '2020-07-01', '2020-08-01', '2020-09-01', '2020-10-01', '2020-11-01', '2020-12-01', '2020-02-01', '2020-03-01', '2020-04-01', '2020-05-01', '2020-06-01', '2020-07-01', '2020-08-01', '2020-09-01', '2020-10-01', '2020-11-01', '2020-12-01', '2020-02-01', '2020-03-01', '2020-04-01', '2020-05-01', '2020-06-01', '2020-07-01', '2020-08-01', '2020-09-01', '2020-10-01', '2020-11-01', '2020-12-01' ]} 
df = pd.DataFrame(data=d) df['DateColumn'] = pd.to_datetime(df['DateColumn']) df['PreviousMonth'] = df['DateColumn'] - pd.DateOffset(months=1) df_output = ps.sqldf(&quot;&quot;&quot; select a.Key ,a.Field ,b.Field as PreviousField ,a.DateColumn ,b.DateColumn as PreviousDate from df as a, df as b where a.Key = b.Key and b.DateColumn = a.PreviousMonth &quot;&quot;&quot;) print(df_output.head()) Key Field DateColumn PreviousDate 0 1234 Plumb 2020-03-01 00:00:00.000000 2020-02-01 00:00:00.000000 1 1234 Pear 2020-04-01 00:00:00.000000 2020-03-01 00:00:00.000000 2 1234 Plumb 2020-05-01 00:00:00.000000 2020-04-01 00:00:00.000000 3 1234 Orange 2020-06-01 00:00:00.000000 2020-05-01 00:00:00.000000 4 1234 Pear 2020-07-01 00:00:00.000000 2020-06-01 00:00:00.000000 </code></pre> <p>I have tried to do</p> <pre><code>data_output = df.join(df, left_on='Key', right_on='Key') </code></pre> <p>But unable to find a good example on how to put the two conditions on the join condition.</p>
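The SQL above is just an equi-join on `(Key, PreviousMonth) = (Key, DateColumn)`, which both libraries express directly. In polars the analogous call should be `df.join(df, left_on=["Key", "PreviousMonth"], right_on=["Key", "DateColumn"], suffix="_prev")` (from memory of the polars API; check the installed version's docs). A runnable pandas sketch on a slice of the question's data:

```python
import pandas as pd

# Small slice of the question's data:
df = pd.DataFrame({
    "Key": [1234, 1234, 1234],
    "Field": ["Plumb", "Plumb", "Pear"],
    "DateColumn": ["2020-02-01", "2020-03-01", "2020-04-01"],
})
df["DateColumn"] = pd.to_datetime(df["DateColumn"])
df["PreviousMonth"] = df["DateColumn"] - pd.DateOffset(months=1)

# Both "where" conditions become join keys; no row-by-row filtering:
out = df.merge(
    df,
    left_on=["Key", "PreviousMonth"],
    right_on=["Key", "DateColumn"],
    suffixes=("", "_prev"),
)
print(out[["Field", "Field_prev"]])
```

Because the month-offset condition is an equality after computing `PreviousMonth`, no cross join is needed, which is what usually causes the memory blow-up with `pandasql`.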
<python><python-polars>
2023-02-07 19:19:05
1
489
Drthm1456
75,377,986
20,266,647
Exception during ingestion, MLRunAccessDeniedError
<p>I get this error during data ingest/write:</p> <pre><code>~/.pythonlibs/jist-jupyter/lib/python3.7/site-packages/mlrun/errors.py in raise_for_status(response, message) 82 raise STATUS_ERRORS[response.status_code]( 83 error_message, response=response ---&gt; 84 ) from exc 85 except KeyError: 86 raise MLRunHTTPError(error_message, response=response) from exc MLRunAccessDeniedError: 403 Client Error: Forbidden for url: http://mlrun-api:8080/api/v1/projects/fs-test/feature-sets/test/references/latest?versioned=False: Failed storing feature-set fs-test/test details: {'reason': &quot;MLRunAccessDeniedError('Not allowed to create resource /projects/fs-test/feature-sets/test')&quot;} </code></pre> <p>This is the source code that generated the issue:</p> <pre><code>import mlrun import mlrun.projects as prj import mlrun.feature_store as fstore from mlrun.datastore.targets import ParquetTarget,CSVTarget, NoSqlTarget ... feature_set=fstore.FeatureSet(name=fsName, entities=entity_list, timestamp_key='sysdate') feature_set.set_targets(targets=[ParquetTarget(name=&quot;s1&quot;)],with_defaults=False) feature_set.save() </code></pre> <p>Do you know how to solve this issue?</p>
<python><mlops><feature-store><mlrun>
2023-02-07 19:14:33
1
1,390
JIST
75,377,927
5,659,966
Django filter on list of column values
<p>I'm using Django 3.2 and Postgres 13.</p> <p>I have a simple model:</p> <pre class="lang-py prettyprint-override"><code>from django.db import models class Member(models.Model): user = models.ForeignKey(&quot;User&quot;, on_delete=models.CASCADE) group = models.ForeignKey(&quot;Group&quot;, on_delete=models.CASCADE) class Meta: constraints = [ models.UniqueConstraint( fields=[&quot;user&quot;, &quot;group&quot;], name=&quot;unique_user_group_tuple&quot; ) ] </code></pre> <p>The important info here is the <code>UniqueConstraint</code>, which also creates a unique index.</p> <p>Let's say I have a list of <code>(user_id, group_id)</code> tuples:</p> <pre class="lang-py prettyprint-override"><code>member_tuples = [(1, 1), (2, 1), (2, 4)] </code></pre> <p>I would like to retrieve all existing <code>Member</code> objects matching these tuples, <strong>while using the index</strong>. In PG, I would write:</p> <pre class="lang-sql prettyprint-override"><code>SELECT * from members where (user_id, group_id) in ((1, 1), (2, 1), (2, 4)); </code></pre> <p>My question is: how to get something similar in Django ? Obviously, my main concern is to keep calling the <code>unique_user_group_tuple</code> index, to avoid a sequential scan</p> <p>For now, the best solution I found is to do:</p> <pre class="lang-py prettyprint-override"><code>filters = Q() for user_id, group_id in member_tuples: filters |= Q(user_id=user_id, group_id=group_id) existing_members = Member.objects.filter(filters) </code></pre> <p>which translates to</p> <pre class="lang-sql prettyprint-override"><code>SELECT * from members WHERE (issue_id = 1 AND group_id = 1) OR (issue_id = 2 AND group_id = 1) OR (issue_id = 2 AND group_id = 4); </code></pre> <p>This also queries the index, but it does not creates the query I expect.</p> <p>So, I try to achieve 2 different things:</p> <ul> <li>find a way to annotate a record, e.g. 
return something like: <code>SELECT (user_id, group_id) as member_tuple FROM member</code></li> <li>use a collection of records in a <code>WHERE</code> clause: <code>WHERE something IN ((1,1), (2,1) (2,4));</code></li> </ul>
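The row-value `WHERE` clause itself can be built safely with placeholders and handed to the ORM as a raw query. A runnable sketch of the query construction, with SQLite standing in for Postgres (both support row-value `IN`; table and column names mirror the question):

```python
import sqlite3

# Row-value IN, the Postgres form the asker wants, demonstrated on
# SQLite (supported there since 3.15).  The placeholder list is built
# from the tuples, so all values stay bound as parameters:
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE members (user_id INTEGER, group_id INTEGER)")
con.executemany("INSERT INTO members VALUES (?, ?)",
                [(1, 1), (2, 1), (2, 4), (3, 9)])

member_tuples = [(1, 1), (2, 1), (2, 4)]
placeholders = ", ".join("(?, ?)" for _ in member_tuples)
params = [value for pair in member_tuples for value in pair]
rows = con.execute(
    "SELECT user_id, group_id FROM members"
    f" WHERE (user_id, group_id) IN (VALUES {placeholders})",
    params,
).fetchall()
print(rows)  # [(1, 1), (2, 1), (2, 4)]
```

In Django the same SQL could go through `Member.objects.raw()` with `%s` placeholders; whether Postgres then actually uses the unique index is best confirmed with `EXPLAIN`, since the `Q`-object OR form shown above may already be served by it.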
<python><django><django-orm>
2023-02-07 19:09:05
0
780
Lotram
75,377,863
14,012,215
EOF occurred in violation of protocol (_ssl.c:2396) when connecting to odoo 15
<p>I am running a Django app that connects to an Odoo instance, like so:</p> <pre><code>userlog = request.user if request.method == 'GET': # return render(request, 'inventory_updates.html') products = models.execute_kw(db, uid, password, 'product.product', 'search_read', [[['type', 'in', ['product', 'consu']]]], … {'fields': ['name']}) return render(request, 'inventory_updates.html', {'products': products}) </code></pre> <p>For some reason, at certain times, when making a GET request to the Django view (which by default searches/connects to Odoo), this error pops up:</p> <pre><code>EOF occurred in violation of protocol (_ssl.c:2396) </code></pre> <p>Here is how I am connecting to the server:</p> <pre><code>models = xmlrpc.client.ServerProxy('{}/xmlrpc/2/object'.format(url),allow_none=True,verbose=False, use_datetime=True,context=ssl._create_unverified_context()) common = xmlrpc.client.ServerProxy('{}/xmlrpc/2/common'.format(url),allow_none=True,verbose=False, use_datetime=True,context=ssl._create_unverified_context()) </code></pre> <p>The server is Ubuntu 20, running Nginx, and both apps are hosted on the same server. The root issue, I believe, is that the SSL certificate is self-signed. But I can't figure out why the issue only happens sometimes and not consistently.</p> <p>I have checked the Python TLS version and checked the TLS settings in the Nginx config files.</p>
<python><ssl><odoo>
2023-02-07 19:02:43
0
433
apexprogramming
75,377,514
5,016,440
Plotly: changing axis range in surface subplots
<p>I have the following minimal reproducible example and can't find how to change the z-axis range for the subplots</p> <pre><code>import plotly.graph_objects as go from plotly.subplots import make_subplots import numpy as np main_fig = make_subplots( rows= 4, cols= 4, specs = [[{'type': 'surface'} for k in range(4)] for j in range(4)], horizontal_spacing = 0.05, vertical_spacing=0.05) axis = np.arange(50) for k, spec in enumerate(range(16)): fig = go.Surface(x=axis, y=axis, z=axis[:,None] * axis[None,:], showscale=False) main_fig.append_trace(fig, row = k//4 + 1, col = k%4 + 1) main_fig.update_traces(showlegend=False) main_fig.update_layout(height=800, width=800) </code></pre> <p>I tried with variations of <code>fig.update_layout</code> inside the loop, so far without luck.</p>
<python><plotly>
2023-02-07 18:27:04
2
455
nestor556
75,377,479
12,845,199
Accessing multiple anchor elements inside text
<p>I have the following XPath:</p> <pre><code>//a[@class='product-cardstyles__Link-sc-1uwpde0-9 bSQmwP hyperlinkstyles__Link-j02w35-0 coaZwR'] </code></pre> <p>This XPath finds a lot of anchor tags similar to the following HTML sample:</p> <pre><code>&lt;a href=&quot;/produto/10669/acucar-refinado-da-barra-pacote-1kg&quot; class=&quot;product-cardstyles__Link-sc-1uwpde0-9 bSQmwP hyperlinkstyles__Link-j02w35-0 coaZwR&quot; font-size=&quot;16&quot; font-family=&quot;primaryMedium&quot;&gt;Açúcar Refinado DA BARRA Pacote 1kg&lt;/a&gt; </code></pre> <p>What I want is not to access its href, but to grab the following string inside the anchor elements:</p> <pre><code>Açúcar Refinado DA BARRA Pacote 1kg </code></pre> <p>Sample code of what I am currently doing:</p> <pre><code> elements_list = EC.presence_of_all_elements_located((By.XPATH,&quot;//a[@class='product-cardstyles__Link-sc-1uwpde0-9 bSQmwP hyperlinkstyles__Link-j02w35-0 coaZwR']&quot;)) print(elements_list) # How do I extract the text? </code></pre> <p>If needed I could share the entire source code for reproduction.</p>
<python><selenium><xpath><webdriverwait>
2023-02-07 18:23:42
2
1,628
INGl0R1AM0R1
75,377,374
105,589
OpenAI Embeddings API: How to get an embedding and calculate cosine similarity?
<p>I have an OpenAI embedding generated from their API.</p> <p>I see examples of putting that vector into Postgres or Sqlite and then running a query against it.</p> <p>I'm looking for simple code in Python where I can take a text string and measure how close its embedding is to another by cosine distance. I believe that cosine distance is used in databases because it is simpler to calculate: would using Euclidean distance be a more accurate estimate of the &quot;closeness&quot; of the strings? If there is a better distance function to run, I'm interested in seeing that as well.</p>
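A dependency-light sketch of the similarity computation itself. OpenAI's embeddings are reportedly normalised to unit length, which makes cosine similarity a plain dot product and means cosine and Euclidean distance rank neighbours identically, so neither is "more accurate" for ranking:

```python
import numpy as np

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors (1.0 = same direction)."""
    a = np.asarray(a, dtype=float)
    b = np.asarray(b, dtype=float)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# For unit vectors, ||a - b||^2 == 2 - 2 * cos(a, b), so sorting
# candidates by Euclidean distance or by cosine gives the same order;
# cosine is simply cheaper when vectors are pre-normalised.
print(cosine_similarity([0.1, 0.3, 0.5], [0.2, 0.1, 0.4]))
```

To find the closest stored text, compute the query string's embedding via the API and take the argmax of this function over the stored vectors.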
<python><openai-api>
2023-02-07 18:13:21
3
4,091
xrd
75,377,281
12,694,438
Method suggestions don't work for `nltk.corpus.wordnet` (vscode)
<p>When I write <code>nltk.corpus.wordnet.</code>, method suggestions don't show up and I can't see the signatures of the functions that I call, even though the code executes fine.</p> <p>This happens both in notebooks and in regular <code>.py</code> files.</p> <hr /> <pre class="lang-py prettyprint-override"><code>import nltk nltk.corpus.wordnet </code></pre> <p>This code prints <code>&lt;WordNetCorpusReader in '/home/user/nltk_data/corpora/wordnet.zip/wordnet/'&gt;</code></p> <p>Is the corpus stored in a .zip file that gets unzipped at runtime? Is that why it's invisible to the IDE?</p> <hr /> <p>I also can't find an API reference for this module. They have a reference for <code>nltk.corpus.reader.wordnet</code> but not for the one without <code>reader</code>. There's just no way for me to even know the names of the methods that are available.</p>
<python><nltk><wordnet>
2023-02-07 18:01:45
1
944
splaytreez
75,377,265
4,414,359
How do I add Python libraries to AWS Lambda?
<p>I just made my first function that fetches data from an excel sheet in Google Sheets. I got an error:</p> <p><code>&quot;errorMessage&quot;: &quot;Unable to import module 'lambda_function': No module named 'googleapiclient'&quot;</code></p> <p>So I googled how to upload Python modules (<a href="https://www.youtube.com/watch?v=HBt8MXHcaPI" rel="nofollow noreferrer">https://www.youtube.com/watch?v=HBt8MXHcaPI</a>) and it said to create a virtual env in something like VSCode, pip install the libraries that I'll need, then zip them and add them as a layer to Lambda.</p> <p>I did that, twice. (It just looked like a whole bunch of libraries were being installed, so I looked up how to remove all of them (<code>pip freeze | xargs pip uninstall -y</code>) and tried again). So here's the starting point, and after doing <code>pip install google-api-python-client</code> <a href="https://i.sstatic.net/gsUZG.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/gsUZG.png" alt="enter image description here" /></a> <a href="https://i.sstatic.net/v2oic.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/v2oic.png" alt="enter image description here" /></a> <a href="https://i.sstatic.net/BOzqT.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/BOzqT.png" alt="enter image description here" /></a></p> <p>I guess I'm a little confused about whether I should be zipping up literally all of that, or just the stuff that has <code>google</code> in the name. I tried it both ways and neither seemed to work. I'm still getting that error.</p>
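One detail that commonly causes this exact `No module named ...` error with layers: Lambda mounts a Python layer's `python/` directory onto `sys.path`, so the zip must contain the packages under a top-level `python/` folder, not at the archive root. A stdlib sketch of the packaging step; the `googleapiclient` folder here is a stand-in for the real output of `pip install google-api-python-client -t build/python`:

```python
import pathlib
import tempfile
import zipfile

build = pathlib.Path(tempfile.mkdtemp())
pkg = build / "python" / "googleapiclient"   # stand-in for the pip -t output
pkg.mkdir(parents=True)
(pkg / "__init__.py").write_text("")

layer = build / "layer.zip"
with zipfile.ZipFile(layer, "w") as zf:
    for path in (build / "python").rglob("*"):
        # relative_to(build) keeps the required python/ prefix in the zip:
        zf.write(path, path.relative_to(build))

print(zipfile.ZipFile(layer).namelist())
```

Zipping "everything pip installed" is fine (google-api-python-client needs all of its dependencies), as long as it all sits under that `python/` prefix and the layer's runtime matches the function's Python version.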
<python><amazon-web-services><aws-lambda><google-api>
2023-02-07 18:00:40
4
1,727
Raksha
75,377,184
2,575,155
how to improve the performance python script for extracting big size(3-4GB)of oracle table
<p>I'm connecting to oracle database using python script and extracting around 10 tables. one table is having 3Gb of data it took around 4 hours to extract with below code and upload it to S3. How can we improve the performance of the below python script?</p> <p>Different file format other than csv will improve the performance like parquet?</p> <p>Any suggestions or solutions will be highly appreciated.</p> <p>Below is the code I tried:</p> <pre><code>def extract_handler(): # Parameters defined in cloudwatch event env = os.environ['Environment'] if 'Environment' in os.environ else 'sit' # FTP parameters host = f&quot;/{env}/connet_HOSTNAME&quot; username = f&quot;/{env}/connect_USERNAME&quot; password = f&quot;/{env}/connect_PASSWORD&quot; host = get_parameters(host) username = get_parameters(username) password = get_parameters(password) today = date.today() current_date = today.strftime(&quot;%Y%m%d&quot;) con = None cur = None tables = [&quot;table1&quot;, &quot;table2&quot;,&quot;table3&quot;.........&quot;table10&quot;] bucket = &quot;bucket_name&quot; for table in tables: try: con = cx_Oracle.connect(username, password, host, encoding=&quot;UTF-8&quot;) cur = con.cursor() logging.info('Successfully established the connection to Oracle db') table_name = table.split(&quot;.&quot;)[1] logging.info(&quot;######## Table name:&quot;+ table +&quot; ###### &quot;) logging.info(&quot;****** PROCESSING:&quot; +table_name+&quot; *********&quot;) cur.execute(&quot;SELECT count(*) FROM {}&quot;.format(table)) count = cur.fetchone()[0] logging.info(&quot;Count:&quot;, count) if count &gt; 0: cur1 = con.cursor() # Define the desired timestamp format timestamp_format = '%Y/%m/%d %H:%M:%S' # Execute a query to read a table cur1.execute( &quot;select * from {} where TRUNC(DWH_CREATED_ON)=TRUNC(SYSDATE)-1&quot;.format(table)) batch_size = 10000 rows = cur1.fetchmany(batch_size) csv_file = f&quot;/tmp/{table_name}.csv&quot; with open(csv_file, &quot;w&quot;, newline=&quot;&quot;) 
as f: # Add file_date column as the first column writer = csv.DictWriter(f, fieldnames=['file_date'] + [col[0] for col in cur1.description], delimiter='\t') writer.writeheader() logging.info(&quot;Header added to the table:&quot; + table + &quot;######&quot;) while rows: for row in rows: row_dict = {'file_date': current_date} for i, col in enumerate(cur1.description): if col[1] == cx_Oracle.DATETIME: if row[i] is not None: row_dict[col[0]] = row[i].strftime(timestamp_format) else: row_dict[col[0]] = &quot;&quot; else: row_dict[col[0]] = row[i] with open(csv_file, &quot;a&quot;, newline=&quot;&quot;) as f: # Add file_date column as the first column writer = csv.DictWriter(f, fieldnames=['file_date'] + [col[0] for col in cur1.description], delimiter='\t') writer.writerow(row_dict) # Fetch the next batch of 100 rows rows = cur1.fetchmany(batch_size) logging.info(&quot;Records written to the temp file for the table :&quot; + table + &quot;######&quot;) s3_path = &quot;NorthernRegion&quot; + '/' + table_name + '/' + current_date + '/' + table_name + '.csv' s3_client = boto3.client('s3', region_name='region-central-1') s3_client.upload_file('/tmp/' + table_name + '.csv', bucket, s3_path) logging.info(table + &quot;File uploaded to S3 ######&quot;) else: logging.info('Table not having data') return 'Data is not refreshed yet, Hence quitting..' if cur1: cur1.close() except Exception as err: #Handle or log other exceptions such as bucket doesn't exist logging.error(err) finally: if cur: cur.close() if con: con.close() return &quot;Successfully processed&quot; </code></pre>
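A few hot spots stand out in the loop above: a `COUNT(*)` pre-pass over the whole table, re-opening the CSV file once per row, and building a dict per row. A hedged sketch of the streaming alternative, with SQLite standing in for Oracle (cx_Oracle cursors support the same `fetchmany`/`arraysize` pattern, and raising `Cursor.arraysize` cuts network round trips):

```python
import csv
import os
import sqlite3
import tempfile

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t (a INTEGER, b TEXT)")
con.executemany("INSERT INTO t VALUES (?, ?)",
                [(i, f"row{i}") for i in range(25_000)])

cur = con.cursor()
cur.arraysize = 10_000          # cx_Oracle: fetch batch size over the wire
cur.execute("SELECT a, b FROM t")

path = os.path.join(tempfile.mkdtemp(), "t.csv")
with open(path, "w", newline="") as f:
    writer = csv.writer(f, delimiter="\t")
    writer.writerow([col[0] for col in cur.description])
    while True:
        rows = cur.fetchmany(10_000)
        if not rows:
            break
        writer.writerows(rows)  # no per-row dicts, file stays open

with open(path) as f:
    print(sum(1 for _ in f))    # 25001 (header + 25,000 rows)
```

On the Oracle side, the per-value timestamp formatting can also move into the query (`TO_CHAR(col, 'YYYY/MM/DD HH24:MI:SS')`), and a columnar format such as Parquet via pyarrow usually produces a much smaller, faster S3 upload than CSV, at the cost of an extra dependency.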
<python><python-3.x><oracle-database>
2023-02-07 17:53:09
0
736
marjun
75,377,023
5,349,476
Hough lines missing some lines
<p>I'm trying to detect lines in an irregular image using a relatively low <code>threshold</code> of 5. The result I get is the following:</p> <p><a href="https://i.sstatic.net/KX0mH.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/KX0mH.png" alt="enter image description here" /></a></p> <p>where red lines are the computed lines. However, I was expecting the yellow lines to satisfy the parameters as well. Does anyone know why the yellow lines aren't detected? Here's my code:</p> <pre class="lang-python prettyprint-override"><code># img rho = 1 # distance resolution in pixels of the Hough grid theta = np.pi / 180 # angular resolution in radians of the Hough grid threshold = 5 # minimum number of votes (intersections in Hough grid cell) min_line_length = 200 # minimum number of pixels making up a line max_line_gap = 500 # maximum gap in pixels between connectable line segments low_threshold = 50 high_threshold = 150 edge_image = img.copy() edge_image = cv2.GaussianBlur(edge_image, (3, 3), 1) edges = cv2.Canny(edge_image, low_threshold, high_threshold) line_image = np.copy(edges) # creating a blank to draw lines on line_image = cv2.cvtColor(line_image, cv2.COLOR_GRAY2BGR) lines = cv2.HoughLinesP(img, rho, theta, threshold, np.array([]), min_line_length, max_line_gap) for line in lines: for x1,y1,x2,y2 in line: cv2.line(line_image,(x1,y1),(x2,y2),(0,0,255),1) </code></pre>
<python><opencv><signal-processing><technical-indicator>
2023-02-07 17:37:23
1
2,742
John M.
75,376,998
3,168,283
Error: invalid command 'bdist_wheel' with setuptools==67.1.0
<p>I'm getting the following error after having upgraded <code>setuptools</code> to version 67.1.0:</p> <pre><code>% python setup.py bdist_wheel usage: setup.py [global_opts] cmd1 [cmd1_opts] [cmd2 [cmd2_opts] ...] or: setup.py --help [cmd1 cmd2 ...] or: setup.py --help-commands or: setup.py cmd --help error: invalid command 'bdist_wheel' </code></pre> <p>Have others come across this issue? The error was not there before the upgrade.</p> <p>Within my <code>setup.py</code>, I have:</p> <pre><code>#!/usr/bin/env python import setuptools if __name__ == &quot;__main__&quot;: setuptools.setup() </code></pre> <p>and the version of <code>wheel</code> is 0.38.4.</p> <p>Thanks in advance.</p>
<python><pypi>
2023-02-07 17:34:21
0
683
takachanbo
75,376,787
2,755,116
Flatten list of whatever objects
<p>I want to flatten a list, whatever the values of the list are:</p> <p>Example:</p> <pre><code>[1, 2, 1] --&gt; [1, 2] [[1, 2], [2, 1]] --&gt; [1, 2] </code></pre> <p>Right now I have code with separate cases depending on the type of the objects in the list (numbers in the first example, lists in the second).</p> <p>Is there a universal solution?</p>
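Both examples above also de-duplicate, so one reading of the task is "flatten arbitrarily nested iterables and keep the first occurrence of each value". A sketch under that assumption (strings are deliberately treated as atoms, not iterables):

```python
from collections.abc import Iterable

def flatten_unique(items):
    """Flatten arbitrarily nested lists/tuples and de-duplicate,
    preserving first-seen order; strings/bytes count as atoms."""
    seen, out = set(), []

    def walk(obj):
        if isinstance(obj, Iterable) and not isinstance(obj, (str, bytes)):
            for element in obj:
                walk(element)
        elif obj not in seen:
            seen.add(obj)
            out.append(obj)

    walk(items)
    return out

print(flatten_unique([1, 2, 1]))          # [1, 2]
print(flatten_unique([[1, 2], [2, 1]]))   # [1, 2]
```

The single `isinstance(..., Iterable)` test replaces the per-type cases: any nesting depth and any mix of scalars and lists goes through the same branch.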
<python><list><flatten>
2023-02-07 17:13:54
1
1,607
somenxavier
75,376,695
6,775,101
Generics, inheritance - TypeVar bound class fails mypy check
<p>hope i won't leave out anything important. My very much simplified situation is like this:</p> <p>In my domain I have some defined data structures:</p> <pre><code>@dataclass class Model: var_1: str var_2: str class Book(Model): ... class Page(Model): ... </code></pre> <p>I want to have 2 abstraction steps defining how the data is processed, like :</p> <pre><code>PARAMETERS = TypeVar(&quot;PARAMETERS&quot;, bound=Model) RESULT = TypeVar(&quot;RESULT&quot;, bound=Model) class Finder(Generic[PARAMETERS, RESULT], metaclass=abc.ABCMeta): def run(self, parameters_dict: Dict[str, str]) -&gt; RESULT: parameters = self.parse_parameters(parameters_dict) return self.do_stuff(parameters) @abc.abstractmethod def parse_parameters(self, parameters_dict: Dict[str, str]) -&gt; PARAMETERS: ... @abc.abstractmethod def do_stuff(self, parameters: PARAMETERS) -&gt; RESULT: ... BookParameter = TypeVar(&quot;BookParameter&quot;, bound=Book) BookResult = TypeVar(&quot;BookResult&quot;, bound=Page) class BookFinder(Generic[BookParameter, BookResult], Finder[BookParameter, BookResult], abc.ABC): def parse_parameters(self, parameters_dict: Dict[str, str]) -&gt; BookParameter: return Book(**parameters_dict) @abc.abstractmethod def do_stuff(self, parameters: BookParameter) -&gt; BookResult: ... </code></pre> <p>and then use it like:</p> <pre><code>class ItalianBookFinder(BookFinder[Book, Page]): def do_stuff(self, parameters: Book) -&gt; Page: # define this class LatinBook(Book): var_3: str class LatinBookFinder(BookFinder[LatinBook, Page]): def parse_parameters(self, parameters_dict: Dict[str, str]) -&gt; LatinBook: # define both this def do_stuff(self, parameters: LatinBook) -&gt; Page: # and this too class EnglishPageFinder(PageFinder[EnglishPage, PageFinderResult]): .... 
</code></pre> <p>(I'm omitting the <code>RESULT</code> part, which is similar to <code>PARAMETERS</code>; this text is long enough already.)</p> <p>But when I run the mypy check on my code I get the error:</p> <pre><code>Incompatible return value type (got &quot;Book&quot;, expected &quot;BookParameter&quot;) [return-value] </code></pre> <p>which leads me to think I am not doing this correctly. I might be missing some important part, or have a design flaw here; if anyone knows, I'll be happy for any input. Thank you.</p>
<python><generics><inheritance><mypy>
2023-02-07 17:03:57
1
716
jiripi
75,376,519
661,740
Pytest mocker failing to find Path
<p>I am working with someone else's testing code, and they make extensive use of mocker. The problem is that I changed the underlying code so it tests for the existence of a file using Path().is_file().</p> <p>Now I need to mock Path().is_file() so it returns True. I tried this:</p> <pre><code>from pathlib import Path @pytest.fixture(scope=&quot;function&quot;) def mock_is_file (mocker): # mock the AlignDir existence validation mocker.patch ('Path.is_file') return True </code></pre> <p>I'm getting this error:</p> <pre><code>E ModuleNotFoundError: No module named 'Path' /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/unittest/mock.py:1161: ModuleNotFoundError </code></pre> <p>What is the correct way to patch Path.is_file()?</p>
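A sketch of the fix using only the standard library (`unittest.mock` instead of the `mocker` fixture, so it runs standalone): `patch` takes the full dotted path to the attribute — `'pathlib.Path.is_file'` — whereas `'Path.is_file'` is interpreted as a module named `Path`, which is exactly the `ModuleNotFoundError` above.

```python
from pathlib import Path
from unittest.mock import patch

# patch by the full dotted path to the class attribute
with patch("pathlib.Path.is_file", return_value=True):
    assert Path("/no/such/file").is_file() is True

# outside the context manager the real implementation is restored
print(Path("/no/such/file").is_file())  # False
```

With pytest-mock the equivalent would presumably be `mocker.patch('pathlib.Path.is_file', return_value=True)` inside the fixture.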
<python><pytest><pytest-mock>
2023-02-07 16:49:46
1
3,511
Greg Dougherty
75,376,379
1,132,408
Inserting pandas dataframe with float values into Oracle
<p>I can insert normal values like integer, string into Oracle from pandas data frame but when I try float type values , sqlalchemy gives below errors</p> <pre><code>sqlalchemy.exc.ArgumentError: Oracle FLOAT types use 'binary precision', which does not convert cleanly from decimal 'precision'. Please specify this type with a separate Oracle variant, such as Float(precision=53).with_variant(oracle.FLOAT(binary_precision=176), 'oracle'), so that the Oracle specific 'binary_precision' may be specified accurately. </code></pre> <p>here is the complete source code</p> <pre><code>import pandas as pd import os import sqlalchemy as sa from sqlalchemy import text from sqlalchemy.dialects import oracle oracle_db = sa.create_engine('oracle://username:password@instance/?service_name=MBCDFE') conn = oracle_db.connect() df = pd.DataFrame({ #&quot;column1&quot;: [1, 1, 1], #- inserting into DB works with just numbers &quot;column1&quot;: [1.2, 1.2, 1.2], &quot;column2&quot;: [&quot;a&quot;, &quot;bb&quot;, &quot;c&quot;], &quot;column3&quot;: [&quot;K&quot;, &quot;L&quot;, &quot;M&quot;] }) df.to_sql(&quot;python&quot;, conn, if_exists=&quot;replace&quot;, index=True) conn.commit() conn.close() </code></pre> <p>any help will be really appreciated. many thanks.</p>
<python><pandas><dataframe><sqlalchemy><cx-oracle>
2023-02-07 16:35:44
2
335
Justin Mathew
75,376,266
1,581,090
How to get the complete response from a telnet command using python?
<p>I am trying to use python 3.10.9 on windows to create a telnet session using <a href="https://docs.python.org/3/library/telnetlib.html" rel="nofollow noreferrer">telnetlib</a>, but I have trouble to read the complete response.</p> <p>I create a telnet session like</p> <pre><code>session = telnetlib.Telnet(host, port, timeout) </code></pre> <p>and then I write a command like</p> <pre><code>session.write(command + b&quot;\n&quot;) </code></pre> <p>and then I wait some really long time (like 5 seconds) before I try to read the response using</p> <pre><code>session.read_some() </code></pre> <p>but I only get half of the response back!</p> <p>The complete response is e.g.</p> <pre><code>Invalid arguments Usage: $IMU,START,&lt;SAMPLING_RATE&gt;,&lt;OUTPUT_RATE&gt; where SAMPLING_RATE = [1 : 1000] in Hz OUTPUT_RATE = [1 : SAMPLING_RATE] in Hz </code></pre> <p>but all I read is the following:</p> <pre><code>b'\x1b[0GInvalid arguments\r\n\r\nUsage: $IMU,START,&lt;' </code></pre> <p>More than half of the response is missing! How to read the complete response in a <strong>non-blocking</strong> way?</p> <p>Other strange read methods:</p> <ul> <li>read_all: blocking</li> <li>read_eager: same issue</li> <li>read_very_eager: sometimes works, sometimes not. Seems to contain a repetition of the message ...</li> <li>read_lazy: does not read anything</li> <li>read_very_lazy: does not read anything</li> </ul> <p>I have not the slightest idea what all these different read methods are for. The <a href="https://docs.python.org/3/library/telnetlib.html" rel="nofollow noreferrer">documentation</a> is not helping at all.</p> <p>But <code>read_very_eager</code> seems to work sometimes. But sometimes I get a response like</p> <pre><code>F FI FIL FILT FILTE FILTER </code></pre> <p>and so on. But I am reading only once, not adding the output myself!</p> <p>Maybe there is a more simple-to-use module I can use instead if <code>telnetlib</code>?</p>
<python><telnet><telnetlib>
2023-02-07 16:26:30
1
45,023
Alex
75,376,241
6,895,840
Mutagen using forward slash as delimiter
<p>I'm using mutagen to collect information about given MP3 files. It's working but there is one problem. When a song has multiple artists it uses a forward slash as a delimiter. So the TPE1 tag may return the following when the song is, e.g., a collaboration between AC/DC and Ministry:</p> <pre><code>['Ministry/AC/DC'] </code></pre> <p>This is problematic when trying to isolate separate artists from the tag. Splitting on <code>/</code> won't work because this will give three results: Ministry, AC and DC. This is my code:</p> <pre><code>import re import mutagen class MusicData: def __init__(self, root, filepath): self.fullpath = root + '\\' + filepath self.prepath = re.sub(r'\\[^\\]*$', '', self.fullpath) + '\\' self.filename = self.fullpath.replace(self.prepath, '') file = mutagen.File(self.fullpath) self.duration = file.info.length self.title = self.extractKey(file, 'TIT2')[0] self.artist = self.extractKey(file, 'TPE1')[0] self.album = self.extractKey(file, 'TALB')[0] self.year = self.extractKey(file, 'TDRC')[0] self.genre = self.extractKey(file, 'TCON')[0] self.publisher = self.extractKey(file, 'TPUB')[0] self.key = self.extractKey(file, 'TKEY')[0] self.bpm = self.extractKey(file, 'TBPM')[0] def extractKey(self, file, key): if(key in file): if(type(file.tags[key].text[0]) == mutagen.id3._specs.ID3TimeStamp): return [str(file.tags[key].text[0])] else: return file.tags[key].text else: return [&quot;&quot;] </code></pre> <p>The documentation on mutagen is very brief and is making me none the wiser. How do I properly collect the artists from a given file using mutagen?</p>
<python><mutagen>
2023-02-07 16:24:06
1
1,156
Anteino
75,376,225
15,045,363
Saving partial result of df.apply after exception
<p>How to get the partial result computed by <code>df.apply()</code> when an Exception is raised within the function?</p> <h2>Context</h2> <p>I have a Pandas dataframe where values are missing. To get these values I have to call a REST API for each row (which takes time). To do that, I use the <code>apply()</code> function.<br /> Exceptions can be thrown during the API call, or the script can be stopped with CTRL-C.<br /> I would like to save the results that have been acquired so far before closing the process, however <code>apply()</code> will not return a value if an exception is raised. Do I have to use a for loop?</p> <p>The example code is the following:</p> <pre class="lang-py prettyprint-override"><code>import random import time import pandas as pd # Simulate a request to a slow REST API taking parameters, returning an integer and where exceptions can be raised def getApiValue_sim(p1:int, p2:int) -&gt; int: time.sleep(1) #Simulate slow request i = random.randint(p1,p2) #Simulate REST API reply if not i%4: raise Exception(&quot;Multiple of 4&quot;) #Simulate exception raising return i dataframe = pd.DataFrame({'P1':[1,2,3],'P2':[11, 12, 13],'Reply':[pd.NA,pd.NA,pd.NA]}) try: dataframe['Reply'] = dataframe.apply(lambda row : getApiValue_sim(row['P1'],row['P2']), axis=1) #Does not return a value if an exception occurs finally: print(dataframe) </code></pre>
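One way to keep partial progress — not the only one — is to pre-allocate the result container and fill it row by row inside a `try`/`finally`; every row completed before the exception survives. A dependency-free sketch with a deterministic stand-in for the API call (with pandas you would assign into `dataframe.at[idx, 'Reply']` the same way):

```python
def get_api_value(p1, p2):
    # deterministic stand-in for the slow REST call; raises for some inputs
    val = p1 + p2
    if val % 4 == 0:
        raise RuntimeError("simulated API failure")
    return val

params = [(1, 10), (2, 12), (3, 13)]
results = [None] * len(params)  # pre-filled so partial progress survives
try:
    for idx, (p1, p2) in enumerate(params):
        results[idx] = get_api_value(p1, p2)
except (RuntimeError, KeyboardInterrupt):
    pass  # rows completed before the failure are already stored
finally:
    print(results)  # [11, 14, None] -- the third call raised
```

The explicit loop trades `apply()`'s convenience for control over interruption; `KeyboardInterrupt` is caught here so a CTRL-C also leaves the partial results intact.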
<python><pandas><exception>
2023-02-07 16:22:43
0
865
Maxime Charrière
75,376,167
9,536,103
Equivalent function of pytesseract.image_to_data in tesserocr
<p>I am currently using <code>pytesseract.image_to_data</code> on several images but this is incredibly slow so I was thinking of moving over to <code>tesserocr</code>.</p> <p>I can't seem to find an equivalent function though that gives me the positions of all the pieces of text on the page as well as breaking them into levels, paragraphs, line numbers and word numbers.</p>
<python><image-processing><ocr><python-tesseract>
2023-02-07 16:17:32
0
1,151
Daniel Wyatt
75,376,061
9,773,920
Export redshift table data to csv file tabs using lambda python
<p>I have a table metric_data that has data in the below format:</p> <p><a href="https://i.sstatic.net/1oKfu.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/1oKfu.png" alt="enter image description here" /></a></p> <p>I want to export this data into a CSV file in S3 with separate tabs for components. So I will have 1 file with 3 tabs - COMP-01, COMP-02, COMP-03.</p> <p>The UNLOAD function is able to export all the data from the table to one CSV file, but how can I export the data as separate tabs in the CSV file? Below is the UNLOAD command I am using:</p> <pre><code>unload ('select * from mydb.metric_data') to 's3://mybucket/demo/folder/file.xlsx' iam_role 'arn:aws:iam::0123456789012:role/MyRedshiftRole'; </code></pre> <p>This command generates one CSV file with all the data from the table. How can I export the data as separate sheets in a single file?</p> <p><strong>UPDATE</strong>: as CSV doesn't support multiple sheets, I am trying to implement the same with Excel. So I updated the Unload command to generate an Excel file and it produces one file with all the table data.</p>
<python><pandas><csv><aws-lambda><amazon-redshift>
2023-02-07 16:08:50
1
1,619
Rick
75,376,021
11,436,357
Why playwright isn't modifying request body?
<p>I need to modify the request body that is sent by the browser, i do it like this:</p> <pre class="lang-py prettyprint-override"><code>async def handle_uri_route(route: Route): response = await route.fetch() response_json = await response.json() reg_notification_uri = response_json['confirmUri'] await page.route(reg_notification_uri, modify_notification_uri) await route.continue_() async def modify_notification_uri(route: Route): post_data = route.request.post_data_json post_data['notificationUi'] = 'https://httpbin.org/anything' await route.continue_(post_data=urlencode(post_data)) pprint(route.request.post_data_json) await page.route(re.compile(r'api\/common\/v1\/register_notification_uri\?'), handle_uri_route) </code></pre> <p><code>pprint</code> displays the changed data, but if i open the devtools, then i see that the request hasn't really changed.<br /> What am I doing wrong?</p>
<python><playwright><playwright-python>
2023-02-07 16:05:10
0
976
kshnkvn
75,376,016
17,897,456
How to compute the next minute of a time in Python?
<p>So I have a <code>datetime.datetime</code> object and I want to compute the next minute of it. What I mean by next minute is the same time but at the very beginning of the next minute. For example, the next minute of 16:38:23.997 is 16:39:00.000.</p> <p>I can do that easily by adding 1 to the minute and setting every smaller value to 0 (seconds, milliseconds, etc.), but I'm not satisfied with this approach, because I would need to handle the carry-over myself by checking whether the minute goes past 59, whether the hour goes past 23... and it ends up being overcomplicated for what I want to do.</p> <p>Is there a &quot;simple&quot; Pythonic way to achieve this?</p>
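A simple way is to truncate to the minute with `replace()` and then add a `timedelta`; the datetime arithmetic handles every carry (hour, day, month, year) automatically:

```python
from datetime import datetime, timedelta

def next_minute(dt):
    # zero the sub-minute fields, then let timedelta handle all carries
    return dt.replace(second=0, microsecond=0) + timedelta(minutes=1)

print(next_minute(datetime(2023, 2, 7, 16, 38, 23, 997000)))  # 2023-02-07 16:39:00
print(next_minute(datetime(2023, 12, 31, 23, 59, 59)))        # 2024-01-01 00:00:00
```

Note one edge case: an input already at an exact minute boundary (16:38:00.000) maps to 16:39:00 here; if it should map to itself instead, compare `dt` with `dt.replace(second=0, microsecond=0)` first.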
<python><datetime>
2023-02-07 16:04:30
1
710
Mateo Vial
75,375,938
3,654,588
Cython: efficient memoryview of type "object" such that it can be used in nogil function
<p>I am currently using the new NUMPY_1_7 C API and Cython 0.29+. Usage of types like <code>cnp.ndarray</code> is deprecated and it is preferred to use a Cython memoryview instead.</p> <p>However, part of my code stores an ndarray of type 'object'. E.g.</p> <pre><code>cdef class A: cdef cnp.ndarray buff cdef void A_func(self): self.buff = np.empty(10, dtype='object') cdef int idx for idx in range(10): self.buff[idx] = CythonClass(5) cdef void nogil_func(self) nogil: cdef void** buff = &lt;void**&gt; self.buff cdef int idx for idx in range(10): (&lt; CythonClass &gt; buff[idx]).do_something_nogil() cdef class CythonClass: def __cinit__(self, int n): self.n = n cdef int do_something_nogil(self) nogil: cdef int result = 0 cdef int idx for idx in range(5): result += idx return result </code></pre> <p>I have a few questions about how to convert the <code>cdef cnp.ndarray buff</code> line into a Cython memoryview, such that it still allows me to leverage the data structure in nogil blocks:</p> <ol> <li>How would I go about converting <code>buff</code> into a Cython memoryview that supports the storage of an &quot;object&quot;? Is the way below fine?</li> <li>Are there alternatives?</li> <li>Are there tradeoffs to the alternatives in terms of speed and efficiency?</li> </ol> <p>Here is my attempt:</p> <pre><code>cdef class A: cdef object[:] buff cdef void A_func(self): self.buff = np.empty(10, dtype='object') cdef int idx for idx in range(10): self.buff[idx] = CythonClass(5) cdef void nogil_func(self) nogil: cdef void** buff = &lt;void**&gt; self.buff cdef int idx for idx in range(10): (&lt;CythonClass&gt; buff[idx]).do_something_nogil() </code></pre> <p>which results in the following error: <code>operand of type '__Pyx_memviewslice' where arithmetic or pointer type is required</code></p>
<python><cython><cythonize>
2023-02-07 15:58:22
0
1,302
ajl123
75,375,918
10,270,590
How to have multiple input and output connection for an Airflow DAG task use a global variable pandas data frame with in @task.external_python?
<h2>GOAL</h2> <ul> <li>I use the Docker 2.4.1 version of Airflow</li> <li>I use my external python virtual environment for each task</li> <li>I have a normal python integer that I want to pass on from task to task.</li> <li>I should start form 1 graph point the &quot;start&quot; than it should push it's result to x, y, z than all of the x,y,z should go to &quot;compare&quot; to pick and print out the highest value.</li> </ul> <h2>CODE</h2> <pre><code>from __future__ import annotations import logging import os import shutil import sys import tempfile import time from pprint import pprint from datetime import timedelta import pendulum from airflow import DAG from airflow.decorators import task log = logging.getLogger(__name__) PYTHON = sys.executable BASE_DIR = tempfile.gettempdir() ''''For Tasks that are essntial and we want to know about the 1st faliure!''' my_default_args = { 'owner': 'Anonymus', 'email': ['random@random.com'], 'email_on_failure': True, 'email_on_retry': False, #only allow if it was allowed in the scheduler #'retries': 1, #only allow if it was allowed in the scheduler #'retry_delay': timedelta(minutes=1) } with DAG( dag_id='sample_many_task_connections', # https://crontab.guru/ # 0-7, where 0 or 7 is Sunday # min HOUR DAY_OF_MONTH MONTH DAY_OF_WEEK # * * * * * schedule='12 11 * * *', # IT IS AT UTC. EX.: 11:12am UTC = 11:12am GMT = 12:12am BST start_date=pendulum.datetime(2023, 1, 1, tz=&quot;UTC&quot;), # this is from whre it starts counting time to run taks, NOT like cron catchup=False, #execution_timeout=timedelta(seconds=60), default_args=my_default_args, tags=['sample_tag', 'sample_tag2'], ### !!! 
also add 'xRetry' to tags so we see if a DAG has rety feature in it ) as dag: #@task.external_python(task_id=&quot;test_external_python_venv_task&quot;, python=os.fspath(sys.executable)) @task.external_python(task_id=&quot;start&quot;, python='/opt/airflow/v1/bin/python3') def start(): # this could be any function name start = 1 print(start) return start @task.external_python(task_id=&quot;random_function_x&quot;, python='/opt/airflow/v1/bin/python3') def random_function_x(start): import random print('start: ', start) x = random.randint(1, 100) print('x: ', x) x += start print('x += start: ', x) return x @task.external_python(task_id=&quot;random_function_y&quot;, python='/opt/airflow/v1/bin/python3') def random_function_y(start): import random print('start: ', start) y = random.randint(1, 100) print('y: ', y) y += start print('y += start: ', y) return y @task.external_python(task_id=&quot;random_function_z&quot;, python='/opt/airflow/v1/bin/python3') def random_function_z(start): import random print('start: ', start) z = random.randint(1, 100) print('z: ', z) z += start print('z += start: ', z) return z @task.external_python(task_id=&quot;compare&quot;, python='/opt/airflow/v1/bin/python3') def compare(x,y,z): # pick the largest value and return it from x y z and return what value was te largest if x &gt; y and x &gt; z: print('x is the largest', x) return x elif y &gt; x and y &gt; z: print('y is the largest', y) return y else: print('z is the largest', z) return z compare([random_function_x(start()), random_function_y(start()), random_function_z(start())]) </code></pre> <h2>ERROR</h2> <pre><code>error DAG Import Errors (1) Broken DAG: [/opt/airflow/dags/sample_many_task_connections.py] Traceback (most recent call last): File &quot;/usr/local/lib/python3.8/inspect.py&quot;, line 3037, in bind return self._bind(args, kwargs) File &quot;/usr/local/lib/python3.8/inspect.py&quot;, line 2952, in _bind raise TypeError(msg) from None TypeError: missing a required 
argument: 'y' </code></pre> <h2>Tried</h2> <ul> <li>Based on previous Q&amp;A I could not resolve this issue - <a href="https://stackoverflow.com/questions/75374582/how-to-use-a-python-list-as-global-variable-pandas-data-frame-with-in-task-exte">How to use a python list as global variable pandas data frame with in @task.external_python?</a> &amp; <a href="https://stackoverflow.com/questions/75361423/how-to-use-a-python-list-as-global-variable-python-list-with-in-task-external-p">How to use a python list as global variable python list with in @task.external_python?</a></li> </ul>
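Incidentally, the `TypeError: missing a required argument: 'y'` in the traceback is independent of Airflow: the final line passes one list to a callable declared with three positional parameters. A plain-Python reduction of the bug:

```python
def compare(x, y, z):
    # same shape as the task: three positional parameters
    return max(x, y, z)

vals = [3, 7, 5]

try:
    compare(vals)        # one argument (a list) -> 'y' and 'z' are missing
except TypeError as exc:
    print(exc)

print(compare(*vals))    # unpacking supplies all three -> 7
```

So in the DAG the call would presumably need to drop the square brackets — `compare(random_function_x(s), random_function_y(s), random_function_z(s))` with `s = start()` computed once so the three tasks share one upstream — rather than wrapping the three task calls in a list.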
<python><airflow><directed-acyclic-graphs><airflow-2.x>
2023-02-07 15:56:16
1
3,146
sogu
75,375,906
240,795
Can pyinstaller make a desktop file (with an icon) for an Ubuntu python app
<p>I'm building a python app which is to pack up as a single executable and work on Windows, MacOS and Linux. Have made a lot of progress and am using a workflow on Github to build using pyinstaller for each OS. Most things are working fine.</p> <p>Right now I am working on getting an icon onto the executable instead of the default system icon.</p> <p>I have a <code>spec</code> file for pyinstaller and I have a section where the icon is mentioned:</p> <pre><code>exe = EXE( pyz, a.scripts, a.binaries, a.zipfiles, a.datas, [], name='my_app_name', debug=False, bootloader_ignore_signals=False, strip=False, upx=True, upx_exclude=[], runtime_tmpdir=None, console=False, disable_windowed_traceback=False, argv_emulation=False, target_arch=None, codesign_identity=None, entitlements_file=None, icon='images/my_icon.ico' ) </code></pre> <p>This seems to work well for Windows and the output exe file has my icon which is great. My question is, is there a way to do this for Linux. I know that normally for Linux you need to build a <code>.desktop</code> file, so I guess the question is three-fold:</p> <ol> <li>Is there a way to give a file an icon without a desktop file (in Linux)?</li> </ol> <p>or</p> <ol start="2"> <li>Is there a way to somehow build and connect a desktop file to my Linux file in pyinstaller?</li> </ol> <p>or</p> <ol start="3"> <li>Is there some python way to self-create a desktop file for my python app?</li> </ol> <p>Thanks</p>
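As far as I know, PyInstaller does not generate `.desktop` files itself (its `icon=` option only affects the Windows exe and macOS bundle), so option 3 is the usual route: have the app (or a post-build script) write a freedesktop-style entry into `~/.local/share/applications/`. A minimal sketch — every path and name below is a placeholder:

```python
from pathlib import Path
import tempfile

DESKTOP_TEMPLATE = """\
[Desktop Entry]
Type=Application
Name=my_app_name
Exec={exec_path}
Icon={icon_path}
Terminal=false
Categories=Utility;
"""

def write_desktop_file(target_dir, exec_path, icon_path):
    # freedesktop.org Desktop Entry spec: filename must end in .desktop
    entry = DESKTOP_TEMPLATE.format(exec_path=exec_path, icon_path=icon_path)
    path = Path(target_dir) / "my_app_name.desktop"
    path.write_text(entry)
    return path

# demo in a temporary directory instead of ~/.local/share/applications
with tempfile.TemporaryDirectory() as d:
    p = write_desktop_file(d, "/opt/my_app_name/my_app_name", "/opt/my_app_name/my_icon.png")
    print(p.read_text().splitlines()[0])  # [Desktop Entry]
```

In a real install the icon file would be bundled with the one-file executable (via the spec file's `datas`) and extracted or referenced at first run.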
<python><linux><pyinstaller><desktop>
2023-02-07 15:55:00
2
2,140
Kibi
75,375,612
11,668,258
How to convert Neo4j response into JSON
<p>I want to convert a stream of records returned by the Neo4j Python driver into JSON. Simply put, I want to convert the results of Cypher queries in Neo4j into a JSON format. Below is the output I got from Neo4j:</p> <pre><code>&lt;Record n=&lt;Node id=3099 labels=frozenset({'BusinessData'}) properties={'Description': 'sample description.', 'x-NodeCreated': neo4j.time.Date(2023, 1, 11), 'name': 'Example', 'GUID': 'KL12822', 'Notes': 'Deployed', 'x-LastEdit': '16445676677MN'}&gt;&gt; </code></pre> <p>The result from a query looks something like the above, and when I perform a simple json.dumps() it fails with TypeError: Object of type is not JSON serializable.</p> <pre><code>json_data = json.dumps(info,default = str) </code></pre> <p>What is the correct way to convert the query results to JSON?</p>
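One common pattern — sketched here with a stand-in class, since the real driver isn't imported — is a `default=` hook for `json.dumps` that turns mapping-like objects into plain dicts and stringifies everything else (such as `neo4j.time.Date`). The driver's `Node` objects support `.items()`, which is what the hook relies on:

```python
import json

def to_jsonable(obj):
    # Node-like objects behave as mappings; everything else becomes a string
    if hasattr(obj, "items"):
        return dict(obj.items())
    return str(obj)  # fallback, e.g. for neo4j temporal types

# hypothetical stand-in for a neo4j Node, for illustration only
class FakeNode:
    def __init__(self, props):
        self._props = props
    def items(self):
        return self._props.items()

record = {"n": FakeNode({"name": "Example", "GUID": "KL12822"})}
print(json.dumps(record, default=to_jsonable))
# {"n": {"name": "Example", "GUID": "KL12822"}}
```

With the real driver it is often simpler still to build plain dicts up front, e.g. `[dict(record["n"]) for record in result]`, and dump those.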
<python><json><python-3.x><neo4j><neo4j-python-driver>
2023-02-07 15:33:27
1
916
Tono Kuriakose
75,375,549
5,235,665
Adding Python library to Flask app requirements.txt
<p>Long time Java dev who inherited a Python (Flask) application that is in dire need of some maintenance. Instead of using env vars or system properties or <em>any</em> kind of configuration (!!!) all the connections and credentials are <strong>hardcoded</strong> right there in the source code. Yikes.</p> <p>Trying to get <code>python-dotenv</code> loaded and used. So I tried to install it using <code>pip3</code> (I'm on a Mac):</p> <pre><code> myuser@mymac my-database-service % pip3 install python-dotenv Defaulting to user installation because normal site-packages is not writeable Collecting python-dotenv Downloading python_dotenv-0.21.1-py3-none-any.whl (19 kB) Installing collected packages: python-dotenv WARNING: The script dotenv is installed in '/Users/myuser/Library/Python/3.8/bin' which is not on PATH. Consider adding this directory to PATH or, if you prefer to suppress this warning, use --no-warn-script-location. Successfully installed python-dotenv-0.21.1 WARNING: You are using pip version 20.2.3; however, version 23.0 is available. You should consider upgrading via the '/Library/Developer/CommandLineTools/usr/bin/python3 -m pip install --upgrade pip' command. </code></pre> <p>It <em>looks</em> like it succeeded, however I don't see anything changed in my project. Nothing added, no new folders, etc.</p> <p>Do I now just manually add <code>python-dotenv-0.21.1</code> to my <code>requirements.txt</code>? Can someone explain it like I'm five (ELI5) and help this old Java dog out getting <code>python-dotenv</code> properly installed and usable inside my project?</p>
<python><macos><pip><dotenv>
2023-02-07 15:29:24
2
845
hotmeatballsoup
75,375,485
3,025,555
Socket IO returns 127.0.0.1 as host address, randomly, instead of the public IP Of the device
<p>Trying to get host-IP Through Python, using socket module - Occasionally, i get address 127.0.0.1 and not the real IP Address - i.e 10.210.24.24</p> <p>I've adjusted my code per the answer in:</p> <p><a href="https://stackoverflow.com/questions/72331707/socket-io-returns-127-0-0-1-as-host-address-and-not-192-168-0-on-my-device">Socket IO returns 127.0.0.1 as host address and not 192.168.0.* on my device</a></p> <p>But still get random encounters of 127.0.0.1.</p> <p>The issue occurs in automation - Which SSH's into the remote host (10.210.24.24 for example), And runs the below code snippet.</p> <p>Trying to reproduce manually - i always get the correct IP Address.</p> <p>My code is as follows:</p> <pre><code> self.host_name = socket.gethostname() try: host_ip = socket.gethostbyname(self.host_name) if host_ip == &quot;127.0.0.1&quot;: s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM) s.connect((&quot;8.8.8.8&quot;, 80)) host_ip = s.getsockname()[0] self.host_ip = host_ip except Exception as e: # In case DNS Resolving fails logger.warning(&quot;Failed setting '%s' IP | %s&quot;, self.host_name, str(e)) self.host_ip = &quot;&quot; </code></pre> <p>I'm trying to understand what am I missing here:</p> <ol> <li><p>Why would &quot;randomly&quot;, i would get 127.0.0.1 instead of the actual host IP. Given the fact the host definitely has IP Assigned (Since i'm using SSH root@ in order to run the script)</p> </li> <li><p>How to overcome this issue and make sure i always get the correct IP Address?</p> </li> <li><p>In case more debug info is needed - What logs should i look into once this issue does occur? I can add &quot;fail handling&quot; to at least dump those logs when the issue occurs in my automation.</p> </li> </ol> <p>System is Debian 11, Python 3.9.</p> <p>Thanks</p>
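For completeness, a minimal version of the UDP-socket technique from the question, wrapped so the socket is always closed. `connect()` on a UDP socket sends no packets — it only makes the kernel pick the outbound interface — so `getsockname()` returns the routable address. By contrast, `gethostbyname(hostname)` often returns 127.0.0.1 whenever the hostname resolves through an `/etc/hosts` loopback entry, which is a common cause of the seemingly random behaviour:

```python
import socket

def get_host_ip():
    # connect() on SOCK_DGRAM sends nothing; it only selects the
    # outbound interface, so getsockname() yields the routable address
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    try:
        s.connect(("8.8.8.8", 80))
        return s.getsockname()[0]
    except OSError:
        return ""  # no route available; caller decides the fallback
    finally:
        s.close()

print(get_host_ip())
```

Skipping `gethostbyname` entirely and always using this routing-based lookup removes the dependency on how `/etc/hosts` happens to be configured on each remote machine.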
<python><python-3.x><socket.io><dns>
2023-02-07 15:25:16
0
1,225
Adiel
75,375,458
5,197,329
pygame triggers "pycharm not responding" popup message, how to prevent/supress this message?
<p>I have created a small game in python using pygame. The game is a 2-player game, and has various player modes including random cpu. When I allow two cpu players to play against each other it works well, except for the problem that pycharm shows a &quot;pycharm not responding&quot; popup message after around 5 seconds or so with the option to wait or force exit.</p> <p>The issue is that the game is actually running just fine in the background, so the only issue is the popup.</p> <p>Does anyone know why this popup appears? and how I can prevent the popup from appearing?</p>
<python><pycharm>
2023-02-07 15:23:25
1
546
Tue
75,375,442
14,269,252
what is version of spark and pyspark compatible with python 3.10
<p>I built a virtual environment and installed these versions of packages. It throws the error below because the packages are not compatible. What are the best versions of Spark and PySpark compatible with my version of Python?</p> <pre><code>java version: openjdk 11.0.11 2021-04-20 spark version : 2.4.6 Pyspark version: 2.4.6 Python version : 3.10 </code></pre> <p>The code :</p> <pre><code>s3 = s3fs.S3FileSystem() spark = SparkSession.builder.getOrCreate() </code></pre> <p>The error :</p> <pre><code>Py4JError: org.apache.spark.api.python.PythonUtils.isEncryptionEnabled does not exist in the JVM </code></pre>
<python><apache-spark><pyspark>
2023-02-07 15:22:12
1
450
user14269252
75,375,354
9,944,937
add values to each array in a 2D numpy array
<p>I'm looking for a way to efficiently add a fixed number of <code>np.nan</code> values at the beginning of each array in a numpy array of shape (3,123...):</p> <pre><code>Original numpy array: [[1,2,3...], [4,5,6...], [7,8,9...]] New numpy array: [[nan,nan,1,2,3...], [nan,nan,4,5,6...], [nan,nan,7,8,9...]] </code></pre> <p>I tried with a for loop:</p> <pre class="lang-py prettyprint-override"><code>import numpy as np original_array = np.random.rand(3,3) gap = np.empty(shape=(2,)) gap.fill(np.nan) new_array = np.zeros(shape=(3,5)) for i,row in enumerate(original_array): new_array[i] = np.concatenate([gap,row]) </code></pre> <p>It works but this is probably not the best way to do it.</p>
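One vectorized alternative, assuming the goal is always a fixed-width NaN prefix, is `np.pad` with `constant_values=np.nan` — no Python-level loop or pre-allocated gap array needed:

```python
import numpy as np

original = np.arange(1.0, 10.0).reshape(3, 3)
# pad 2 columns on the left of axis 1, nothing anywhere else
padded = np.pad(original, pad_width=((0, 0), (2, 0)), constant_values=np.nan)
print(padded.shape)  # (3, 5)
print(padded[0])     # nan, nan, then the original first row
```

Note that the input must already be a float array (NaN has no integer representation); `np.pad` returns a new array rather than modifying in place.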
<python><arrays><numpy>
2023-02-07 15:15:39
2
1,101
Fabio Magarelli
75,375,250
18,221,164
Getting a 401 response while using Requests package
<p>I am trying to access a server over my internal network under <code>https://prodserver.de/info</code>. I have the code structure as below:</p> <pre><code>import requests from requests.auth import * username = 'User' password = 'Hello@123' resp = requests.get('https://prodserver.de/info/', auth=HTTPBasicAuth(username,password)) print(resp.status_code) </code></pre> <p>While trying to access this server via browser, it works perfectly fine. What am I doing wrong?</p>
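One thing worth checking when the browser works but `requests` gets a 401: the server may be negotiating a scheme other than Basic (NTLM/Kerberos via browser single sign-on, or Digest) — the `WWW-Authenticate` header of the 401 response says which. For reference, `HTTPBasicAuth` only ever adds one static header, which this sketch reproduces by hand:

```python
import base64

username, password = "User", "Hello@123"
# HTTPBasicAuth boils down to exactly this header
token = base64.b64encode(f"{username}:{password}".encode()).decode()
headers = {"Authorization": f"Basic {token}"}
print(headers["Authorization"])
# a server expecting NTLM or Digest simply rejects this header with 401
```

In practice: inspect `resp.headers.get('WWW-Authenticate')` on the 401; `requests.auth.HTTPDigestAuth` covers Digest, and the third-party `requests-ntlm` package covers NTLM.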
<python><python-requests><http-error>
2023-02-07 15:07:32
2
511
RCB
75,375,222
6,414
PyCharm [Intellij] auto-import from Binary Skeletons instead of standard python library
<p>I'm trying to import Decimal from <code>decimal</code> but when I try and do this using Intellij it just says I can import from <code>_decimal</code> instead which is in the Binary Skeletons.</p> <p>I'm using Poetry and Python 3.10, and it's almost certainly something wrong with my setup, I just can't work out what.</p>
<python><intellij-idea><pycharm><python-3.10>
2023-02-07 15:05:12
2
8,037
Henry B
75,375,139
2,403,819
Can I get the black python code format tool to recognize what directory my code is in
<p>I am trying to integrate the black python code formatting tool into my workflow. As a test I have created a directory with the following structure.</p> <pre><code>hello |_ pyproject.toml |_ hello |_main.py </code></pre> <p>The pyproject.toml file has the following information in it.</p> <pre><code>[tool.poetry] name = &quot;hello&quot; version = &quot;0.1.0&quot; description = &quot;&quot; authors = [&quot;my Name &lt;name@gmail.com&gt;&quot;] readme = &quot;README.rst&quot; [tool.poetry.dependencies] python = &quot;^3.10&quot; [tool.poetry.group.dev.dependencies] pytest = &quot;^7.2.1&quot; flake8 = &quot;^6.0.0&quot; mypy = &quot;^1.0.0&quot; black = &quot;^23.1.0&quot; [build-system] requires = [&quot;poetry-core&quot;] build-backend = &quot;poetry.core.masonry.api&quot; classifiers = [ &quot;Development Status :: 4 - Beta&quot;, &quot;Programming Language :: Python :: 3&quot;, &quot;Programming Language :: Python :: 3.10&quot;, &quot;License :: OSI Approved :: MIT License&quot;, &quot;Operating System :: MacOS&quot;, &quot;Operating System :: POSIX :: Linux&quot;, ] [tool.black] line-length = 90 target-version = ['py38', 'py39', 'py310'] include = ['\.pyi?$', 'hello'] exclude = ''' /( \.eggs | \.git | \.hg | \.mypy_cache | \.tox | \.venv | _build | buck-out | build | dist # The following are specific to Black, you probably don't want those. | blib2to3 | tests/data | profiling )/ ''' </code></pre> <p>As you can see, I include the name of my source code directory, <code>hello</code>, in the <code>include</code> line. From the uppermost <code>hello</code> directory, if I type <code>black hello</code> it will look into the lowermost <code>hello</code> directory and format any code in that directory. If I <code>cd</code> to the lowermost <code>hello</code> directory and type <code>black</code> or <code>black main.py</code> it will format the <code>main.py</code> code. 
However, is there a way to use the <code>pyproject.toml</code> file to tell black where my source code is, such that from the uppermost <code>hello</code> directory I can just type <code>black</code> and it will look into the lowermost <code>hello</code> directory without me explicitly pointing it there from the command line?</p> <p>Presently when I type <code>black</code> from the uppermost <code>hello</code> directory I get the message <code>Usage of black [OPTIONS] SRC ... One of 'SRC' or 'code' is required</code></p>
<python><python-3.x><python-black>
2023-02-07 14:59:13
1
1,829
Jon
75,375,121
7,194,271
how to solve "Unresolved reference 'unpack'" in python3.7?
<p>When I check the code in <a href="https://www.kaggle.com/code/hengck23/3hr-tensorrt-nextvit-example" rel="nofollow noreferrer">kaggle 3hr-tensorrt-nextvit-example</a>, I find an error that I cannot solve: <strong>Unresolved reference</strong> '<strong>unpack</strong>'. What should I do?</p> <pre><code>unc_data = unpack(unpack_fmt, cast(bytes, item.LUTData)) </code></pre>
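"Unresolved reference" here just means the name was never imported in that scope: `unpack` lives in the standard-library `struct` module (the Kaggle notebook presumably imports it earlier, perhaps via a star-import). Adding the explicit import resolves it:

```python
from struct import pack, unpack

# round-trip two little-endian unsigned shorts
data = pack("<HH", 1, 2)
print(unpack("<HH", data))  # (1, 2)
```

With that import in place the notebook line `unc_data = unpack(unpack_fmt, cast(bytes, item.LUTData))` resolves; note that `cast` (presumably `typing.cast`) needs its own import as well.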
<python><python-3.x><reference>
2023-02-07 14:57:48
1
380
Hong Cheng
75,375,116
243,031
docker python:3.9 image gives error for scripts
<p>We are using <code>python:3.9</code> image for the base, and run some command on that.</p> <p><strong>Base Image</strong></p> <pre><code>######################## # Base Image Section # ######################## # # Creates an image with the common requirements for a flask app pre-installed # Start with a smol OS FROM python:3.9 # Install basic requirements RUN apt-get -q update -o Acquire::Languages=none &amp;&amp; apt-get -yq install --no-install-recommends \ apt-transport-https \ ca-certificates &amp;&amp; \ apt-get autoremove -yq &amp;&amp; apt-get clean &amp;&amp; rm -rf &quot;/var/lib/apt/lists&quot;/* # Install CA certs # Prefer the mirror for package downloads COPY [&quot;ca_certs/*.crt&quot;, &quot;/usr/local/share/ca-certificates/&quot;] RUN update-ca-certificates &amp;&amp; \ mv /etc/apt/sources.list /etc/apt/sources.list.old &amp;&amp; \ printf 'deb https://mirror.company.com/debian/ buster main contrib non-free\n' &gt; /etc/apt/sources.list &amp;&amp; \ cat /etc/apt/sources.list.old &gt;&gt; /etc/apt/sources.list # Equivalent to `cd /app` WORKDIR /app # Fixes a host of encoding-related bugs ENV LC_ALL=C.UTF-8 # Tells `apt` and others that no one is sitting at the keyboard ENV DEBIAN_FRONTEND=noninteractive # Set a more helpful shell prompt ENV PS1='[\u@\h \W]\$ ' ##################### # ONBUILD Section # ##################### # # ONBUILD commands take effect when another image is built using this one as a base. # Ref: https://docs.docker.com/engine/reference/builder/#onbuild # # # And that's it! The base container should have all your dependencies and ssl certs pre-installed, # and will copy your code over when used as a base with the &quot;FROM&quot; directive. ONBUILD ARG BUILD_VERSION ONBUILD ARG BUILD_DATE # Copy our files into the container ONBUILD ADD . . 
# pre_deps: packages that need to be installed before code installation and remain in the final image ONBUILD ARG pre_deps # build_deps: packages that need to be installed before code installation, then uninstalled after ONBUILD ARG build_deps # COMPILE_DEPS: common packages needed for building/installing Python packages. Most users won't need to adjust this, # but you could specify a shorter list if you didn't need all of these. ONBUILD ARG COMPILE_DEPS=&quot;build-essential python3-dev libffi-dev libssl-dev python3-pip libxml2-dev libxslt1-dev zlib1g-dev g++ unixodbc-dev&quot; # ssh_key: If provided, writes the given string to ~/.ssh/id_rsa just before Python package installation, # and deletes it before the layer is written. ONBUILD ARG ssh_key # If our python package is installable, install system packages that are needed by some python libraries to compile # successfully, then install our python package. Finally, delete the temporary system packages. ONBUILD RUN \ if [ -f setup.py ] || [ -f requirements.txt ]; then \ install_deps=&quot;$pre_deps $build_deps $COMPILE_DEPS&quot; &amp;&amp; \ uninstall_deps=$(python3 -c 'all_deps=set(&quot;'&quot;$install_deps&quot;'&quot;.split()); to_keep=set(&quot;'&quot;$pre_deps&quot;'&quot;.split()); print(&quot; &quot;.join(sorted(all_deps-to_keep)), end=&quot;&quot;)') &amp;&amp; \ apt-get -q update -o Acquire::Languages=none &amp;&amp; apt-get -yq install --no-install-recommends $install_deps &amp;&amp; \ if [ -n &quot;${ssh_key}&quot; ]; then \ mkdir -p ~/.ssh &amp;&amp; chmod 700 ~/.ssh &amp;&amp; printf &quot;%s\n&quot; &quot;${ssh_key}&quot; &gt; ~/.ssh/id_rsa &amp;&amp; chmod 600 ~/.ssh/id_rsa &amp;&amp; \ printf &quot;%s\n&quot; &quot;StrictHostKeyChecking=no&quot; &gt; ~/.ssh/config &amp;&amp; chmod 600 ~/.ssh/config || exit 1 ; \ fi ; \ if [ -f requirements.txt ]; then \ pip3 install --no-cache-dir --compile -r requirements.txt || exit 1 ; \ elif [ -f setup.py ]; then \ pip3 install --no-cache-dir --compile 
--editable . || exit 1 ; \ fi ; \ if [ -n &quot;${ssh_key}&quot; ]; then \ rm -rf ~/.ssh || exit 1 ; \ fi ; \ fi </code></pre> <p>We built this image last year and it worked fine, but when we decided to pick up the latest changes and build a new base image, the build started failing on the last <code>RUN</code> command.</p> <pre><code>DEBU[0280] Deleting in layer: map[] INFO[0281] Cmd: /bin/sh INFO[0281] Args: [-c if [ -f setup.py ] || [ -f requirements.txt ]; then install_deps=&quot;$pre_deps $build_deps $COMPILE_DEPS&quot; &amp;&amp; uninstall_deps=$(python3 -c 'all_deps=set(&quot;'&quot;$install_deps&quot;'&quot;.split()); to_keep=set(&quot;'&quot;$pre_deps&quot;'&quot;.split()); print(&quot; &quot;.join(sorted(all_deps-to_keep)), end=&quot;&quot;)') &amp;&amp; apt-get -q update -o Acquire::Languages=none &amp;&amp; apt-get -yq install --no-install-recommends $install_deps &amp;&amp; if [ -n &quot;${ssh_key}&quot; ]; then mkdir -p ~/.ssh &amp;&amp; chmod 700 ~/.ssh &amp;&amp; printf &quot;%s\n&quot; &quot;${ssh_key}&quot; &gt; ~/.ssh/id_rsa &amp;&amp; chmod 600 ~/.ssh/id_rsa &amp;&amp; printf &quot;%s\n&quot; &quot;StrictHostKeyChecking=no&quot; &gt; ~/.ssh/config &amp;&amp; chmod 600 ~/.ssh/config || exit 1 ; fi ; if [ -f requirements.txt ]; then pip3 install --no-cache-dir --compile -r requirements.txt || exit 1 ; elif [ -f setup.py ]; then pip3 install --no-cache-dir --compile --editable . 
|| exit 1 ; fi ; if [ -n &quot;${ssh_key}&quot; ]; then rm -rf ~/.ssh || exit 1 ; fi ; fi] INFO[0281] Running: [/bin/sh -c if [ -f setup.py ] || [ -f requirements.txt ]; then install_deps=&quot;$pre_deps $build_deps $COMPILE_DEPS&quot; &amp;&amp; uninstall_deps=$(python3 -c 'all_deps=set(&quot;'&quot;$install_deps&quot;'&quot;.split()); to_keep=set(&quot;'&quot;$pre_deps&quot;'&quot;.split()); print(&quot; &quot;.join(sorted(all_deps-to_keep)), end=&quot;&quot;)') &amp;&amp; apt-get -q update -o Acquire::Languages=none &amp;&amp; apt-get -yq install --no-install-recommends $install_deps &amp;&amp; if [ -n &quot;${ssh_key}&quot; ]; then mkdir -p ~/.ssh &amp;&amp; chmod 700 ~/.ssh &amp;&amp; printf &quot;%s\n&quot; &quot;${ssh_key}&quot; &gt; ~/.ssh/id_rsa &amp;&amp; chmod 600 ~/.ssh/id_rsa &amp;&amp; printf &quot;%s\n&quot; &quot;StrictHostKeyChecking=no&quot; &gt; ~/.ssh/config &amp;&amp; chmod 600 ~/.ssh/config || exit 1 ; fi ; if [ -f requirements.txt ]; then pip3 install --no-cache-dir --compile -r requirements.txt || exit 1 ; elif [ -f setup.py ]; then pip3 install --no-cache-dir --compile --editable . || exit 1 ; fi ; if [ -n &quot;${ssh_key}&quot; ]; then rm -rf ~/.ssh || exit 1 ; fi ; fi] error building image: error building stage: failed to execute command: starting command: fork/exec /bin/sh: exec format error </code></pre> <p>We label the images by date so we know when they were working; the base image built on <code>12-09-22</code> works fine.</p> <p>Something new in <code>python:3.9</code> causes this issue; the same script was working before.</p>
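<p>Since <code>exec format error</code> usually means a binary was built for a different CPU architecture than the machine running it, I also checked the host architecture, comparing it by hand against the <code>Architecture</code> field reported by <code>docker image inspect python:3.9</code>:</p>

```python
# Host CPU architecture. I compare this by hand against the output of:
#   docker image inspect --format '{{.Architecture}}' python:3.9
# (docker reports e.g. 'amd64'/'arm64', platform.machine() reports
#  e.g. 'x86_64'/'aarch64')
import platform

print(platform.machine())
```

<p>Could the new <code>python:3.9</code> tag simply have been pulled for the wrong platform?</p>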
<python><docker><image><containers>
2023-02-07 14:57:14
0
21,411
NPatel
75,375,099
7,192,318
ctypes.c_int.from_address does not work in RStudio
<p>I am trying to count references to an object in Python from RStudio. I use the following function:</p> <pre><code> ctypes.c_int.from_address(id(an_object)).value </code></pre> <p>This works perfectly in PyCharm and Jupyter, as shown below:</p> <p><a href="https://i.sstatic.net/YcbTx.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/YcbTx.png" alt="enter image description here" /></a></p> <p>while the result in RStudio is:</p> <p><a href="https://i.sstatic.net/Sy038.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Sy038.png" alt="enter image description here" /></a></p> <p>The question is: why is the result incorrect in RStudio, and how can I fix it?</p> <p>I also tried to use the</p> <pre><code>sys.getrefcount </code></pre> <p>function, but it does not work in RStudio either!</p> <p>I also did it without using the &quot;id&quot; function, as shown below:</p> <p><a href="https://i.sstatic.net/6gRzD.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/6gRzD.png" alt="enter image description here" /></a></p> <p><a href="https://i.sstatic.net/Dszz9.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Dszz9.png" alt="enter image description here" /></a></p> <p>But the result in RStudio is still not correct! It may occasionally happen in PyCharm (I have not seen it; perhaps there is no guarantee), but in RStudio something is completely wrong!</p> <p>Why is this important, and why do I care about it?</p> <p>Consider the following example:</p> <p><a href="https://i.sstatic.net/eD0q3.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/eD0q3.png" alt="enter image description here" /></a></p> <p>Sometimes it is important to know the state of &quot;a&quot; before changing &quot;b&quot;.</p> <p>The big problem in RStudio is that the result increases randomly! In PyCharm and other Python tools I have not seen that happen.</p> <p>I am not an expert on this, so please correct me if I am wrong.</p>
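<p>To rule out mistakes on my side, this is the minimal snippet I run in every environment. Note my assumptions: this is plain CPython behaviour, reading <code>ob_refcnt</code> through <code>ctypes</code> is implementation-specific, and <code>c_int</code> only reads the low 4 bytes of the count on 64-bit little-endian builds:</p>

```python
import ctypes
import sys

a = [1, 2, 3]
b = a  # second reference to the same list object

# sys.getrefcount counts one extra reference for its own argument,
# so it reports one more than the "real" count
print(sys.getrefcount(a))

# CPython-specific: read ob_refcnt directly from the object's address
print(ctypes.c_int.from_address(id(a)).value)
```

<p>In PyCharm and Jupyter the two numbers differ by exactly one, as expected; in RStudio they do not.</p>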
<python><rstudio>
2023-02-07 14:55:13
1
625
Masoud
75,375,096
1,056,179
Finding the date diff between all pairs of dates in an array without using two loops
<p>I have a huge array containing dates as its elements:</p> <pre><code>arr = ['01/01/2020', '15/11/2021', '05/07/2018', '01/03/2020', '10/10/2022', '07/02/2015', ....] </code></pre> <p>I would like to find the date difference between all pairs, but using two loops has a time complexity of <strong>O(n^2)</strong>. I think there should be a faster way, maybe using a dynamic programming approach.</p> <p>Could you solve it in a faster way?</p>
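<p>For reference, this is my current two-loop version (the dates are in <code>dd/mm/YYYY</code> format; the variable names are just illustrative, and I have shortened the array here):</p>

```python
from datetime import datetime
from itertools import combinations

arr = ['01/01/2020', '15/11/2021', '05/07/2018', '01/03/2020']

# parse once, then diff every unordered pair
dates = [datetime.strptime(s, '%d/%m/%Y') for s in arr]

# O(n^2): one difference (in days) per unordered pair of dates
diffs = [abs((d2 - d1).days) for d1, d2 in combinations(dates, 2)]
print(len(diffs))  # n*(n-1)//2 pairs: 6 for these four dates
```

<p>Parsing is linear, but the pairwise loop is what I would like to speed up.</p>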
<python><algorithm>
2023-02-07 14:54:53
2
2,059
Amir Jalilifard