Column schema (name — dtype, min … max):
QuestionId — int64, 74.8M … 79.8M
UserId — int64, 56 … 29.4M
QuestionTitle — string, lengths 15 … 150
QuestionBody — string, lengths 40 … 40.3k
Tags — string, lengths 8 … 101
CreationDate — date string, 2022-12-10 09:42:47 … 2025-11-01 19:08:18
AnswerCount — int64, 0 … 44
UserExpertiseLevel — int64, 301 … 888k
UserDisplayName — string, lengths 3 … 30
78,033,804
12,178,630
How to separate two touching blobs?
<p>I have written the following code to detect blobs based on their <code>HSV</code> values. Everything works fine, except that when two blobs intersect (touch) they are detected as one instead of two. I have read <a href="https://stackoverflow.com/questions/19916765/opencv-blob-detection-separate-close-blobs">here</a> that this can be solved by using a <code>4-neighborhood</code> and <a href="https://scikit-image.org/docs/stable/auto_examples/applications/plot_morphology.html" rel="nofollow noreferrer">morphological filtering</a> operations, but I have not succeeded in implementing that in my code. I tried the <code>erode</code> operation, but it did not help: combined with <code>dilate</code> (its opposite operation) no result was achieved, and if I keep only <code>erode</code> all the blobs are removed and the result is a black image.</p> <p>This is the input image: <a href="https://i.sstatic.net/nNBZe.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/nNBZe.png" alt="enter image description here" /></a></p> <p>The ones on the left are combined as one blob, and I want to separate them, so that I have 7 blobs instead of 6.</p> <pre><code>import cv2
import numpy as np

img = cv2.imread('./lemon.png')
hsv_img = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
fruit_mask = cv2.inRange(hsv_img, *hsv_bounds[plant_name])
fruit_mask = cv2.cvtColor(fruit_mask, cv2.COLOR_GRAY2BGR)
result = cv2.bitwise_and(img, fruit_mask)

counter = {}
counter['lemon'] = 0

image_gray = cv2.cvtColor(result, cv2.COLOR_BGR2GRAY)
image_gray = cv2.GaussianBlur(image_gray, (5, 5), 0)
image_edged = cv2.Canny(image_gray, 50, 100)
# kernel = np.ones((4, 4), np.uint8)
image_edged = cv2.dilate(image_edged, None, iterations=1)
# kernel = np.ones((4, 4), np.uint8)
image_edged = cv2.erode(image_edged, None, iterations=1)

cnts = cv2.findContours(
    image_edged.copy(), cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
cnts = cnts[0]

for c in cnts:
    if cv2.contourArea(c) &lt; 200:
        continue
    hull = cv2.convexHull(c)
    img_mask = cv2.drawContours(result, [hull], 0, (0, 0, 255), 1)
    counter['lemon'] += 1

print(counter)
cv2.imwrite('./blob_testing/detected_55.png', img_mask)
</code></pre> <p>Update: Thanks to the comments, I understood my mistake: I applied <code>erode</code> to the drawn contours (not the Canny edges), and that fixed the problem. The result looks like this:</p> <p><a href="https://i.sstatic.net/2Ln8X.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/2Ln8X.png" alt="enter image description here" /></a></p> <p>I still need to draw the red contours around the blobs so that I can count them.</p>
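A dependency-free sketch of why eroding the filled mask (rather than the Canny edge image) separates touching blobs: a pure-NumPy 4-neighbourhood erosion stands in for `cv2.erode`, and the 5×9 toy mask is an assumption, not the question's image.

```python
import numpy as np

def erode4(mask: np.ndarray) -> np.ndarray:
    """4-neighbourhood binary erosion (a pure-NumPy stand-in for cv2.erode)."""
    p = np.pad(mask, 1)
    # A pixel survives only if it and all four direct neighbours are set.
    return (p[1:-1, 1:-1] & p[:-2, 1:-1] & p[2:, 1:-1]
            & p[1:-1, :-2] & p[1:-1, 2:]).astype(mask.dtype)

# Toy mask: two 3x3 blobs joined by a one-pixel bridge at column 4,
# so a connected-components pass would count them as a single blob.
mask = np.zeros((5, 9), dtype=np.uint8)
mask[1:4, 1:4] = 1   # left blob
mask[1:4, 5:8] = 1   # right blob
mask[2, 4] = 1       # the touching point that merges them

eroded = erode4(mask)
# The thin bridge has no full 4-neighbourhood, so it disappears, while
# each blob keeps its centre pixel -- the two blobs come apart.
```

Eroding an edge image instead just erases the thin edges themselves, which is why the original attempt produced a black image.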
<python><opencv><image-processing><scikit-image><canny-operator>
2024-02-21 11:56:23
1
314
Josh
78,033,500
4,093,755
I have an Excel sheet which has four columns in which each cell of a column is related to the next column's cell in a cascaded manner
<p>I have an Excel sheet with four columns in which each cell of a column is related to the next column's cell in a cascaded manner.</p> <p>The Excel sheet looks like this:</p> <pre><code>  Base Version   OS  Package Name             Description  Version
0            A  NaN           NaN                     NaN      NaN
1          NaN    B           NaN                     NaN      NaN
2          NaN  NaN      b-01.zip  description about B-01      NaN
3          NaN  NaN      b-02.zip  description about B-02      NaN
4            X  NaN           NaN                     NaN      NaN
5          NaN    Y           NaN                     NaN      NaN
6          NaN  NaN      y-01.zip  description about Y-01      NaN
7          NaN  NaN      y-02.zip  description about Y-02      NaN
</code></pre> <p>I want to have a DataFrame that should look like this when output to an Excel sheet:</p> <pre><code>  Base Version   OS  Package Name             Description  Version
2            A    B      b-01.zip  description about B-01      NaN
3          NaN  NaN      b-02.zip  description about B-02      NaN
6            X    Y      y-01.zip  description about Y-01      NaN
7          NaN  NaN      y-02.zip  description about Y-02      NaN
</code></pre> <p>Is there a way in Pandas to achieve this?</p>
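One way to get the target frame, sketched on a hypothetical frame mirroring the sheet (the Description and Version columns are dropped for brevity): forward-fill the cascading key columns, keep only the package rows, then blank the repeated keys.

```python
import numpy as np
import pandas as pd

# Hypothetical data mirroring the question's sheet.
df = pd.DataFrame({
    "Base Version": ["A", None, None, None, "X", None, None, None],
    "OS": [None, "B", None, None, None, "Y", None, None],
    "Package Name": [None, None, "b-01.zip", "b-02.zip",
                     None, None, "y-01.zip", "y-02.zip"],
})

out = df.copy()
# 1. Forward-fill the cascading key columns.
out[["Base Version", "OS"]] = out[["Base Version", "OS"]].ffill()
# 2. Keep only the rows that actually carry a package.
out = out[out["Package Name"].notna()]
# 3. Blank the repeated keys so each group shows them once, as in the target.
dup = out.duplicated(subset=["Base Version", "OS"])
out.loc[dup, ["Base Version", "OS"]] = np.nan
```

The original row index (2, 3, 6, 7) is preserved automatically, matching the desired output.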
<python><pandas><dataframe>
2024-02-21 11:02:21
2
349
Biplab
78,033,462
1,711,271
Compute the ratio between the number of rows where A=True, to the number of rows where A=False
<p>I have a Polars dataframe:</p> <pre><code>df = pl.DataFrame(
    {
        &quot;nrs&quot;: [1, 2, 3, None, 5],
        &quot;names&quot;: [&quot;foo&quot;, &quot;ham&quot;, &quot;spam&quot;, &quot;egg&quot;, None],
        &quot;random&quot;: np.random.rand(5),
        &quot;A&quot;: [True, True, False, False, False],
    }
)
</code></pre> <p>How can I compute the ratio between the number of rows where <code>A==True</code>, to the number of rows where <code>A==False</code>? Note that <code>A</code> is always <code>True</code> or <code>False</code>. I found a solution, but it seems a bit clunky:</p> <pre><code>ntrue = df.filter(pl.col('A')==1).shape[0]
ratio = ntrue/(df.shape[0]-ntrue)
</code></pre>
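The ratio can be derived from the column's mean alone, since mean = ntrue/n implies ratio = mean/(1 − mean). A dependency-free check of that identity on the question's data:

```python
# The question's A column, as a plain list so the sketch needs no dependencies.
a = [True, True, False, False, False]

ntrue = sum(a)                      # True counts as 1
ratio = ntrue / (len(a) - ntrue)    # the two-pass version from the question

m = ntrue / len(a)                  # one aggregate: the mean of A
assert abs(ratio - m / (1 - m)) < 1e-12   # ratio == mean / (1 - mean)
```

In Polars this collapses to a single expression along the lines of `df.select(pl.col("A").mean() / (1 - pl.col("A").mean()))` (exact spelling may vary by version).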
<python><python-3.x><dataframe><count><python-polars>
2024-02-21 10:56:12
2
5,726
DeltaIV
78,033,441
512,183
Poetry: [Errno 2] No such file or directory: 'python'
<p>macOS 14.2.1 on M1, Python 3.11.4, Poetry 1.7.1. I have set the config <code>virtualenvs.in-project = true</code>.</p> <p>If I run <code>poetry install</code> or <code>poetry env info</code> I get the error:</p> <blockquote> <p>[Errno 2] No such file or directory: 'python'</p> </blockquote> <p>If I run <code>poetry check</code> I get:</p> <blockquote> <p>All set!</p> </blockquote> <p>What am I missing?</p>
<python><python-poetry>
2024-02-21 10:52:30
1
1,574
eriq
78,033,431
7,414,939
Python Linter to catch user shadowed function names
<p>Assume the following scenario:</p> <pre><code>import functools

def my_decorator(func):
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        # Code
    return wrapper

def f():
    # function code
    return

f = my_decorator(f)
</code></pre> <p><code>f = my_decorator(f)</code> shadows the function <code>f</code> and that will lead to unexpected behavior on <code>f()</code> calls down the line.</p> <p><strong>Linters that I tried so far and results:</strong></p> <ul> <li><strong>MyPy:</strong> Can catch the shadowing if <code>f</code> changes type completely (ex. <code>f = &quot;I am a string now&quot;</code>) but not the shadowing in question.</li> <li><strong>Pylint:</strong> Same as <strong>MyPy</strong></li> <li><strong>Ruff:</strong> Same as <strong>MyPy</strong></li> </ul> <p>All the linters mentioned are able to catch built-in variable shadowing.<br/> <em>So the linters are failing in the scenario of interest because <code>f</code> ultimately doesn't change type passing through the decorator method.</em></p> <p><strong>Questions:</strong></p> <ul> <li>Is there any tweak that I am missing for any of those linters that can possibly make them more pedantic/strict?</li> <li>Is there any alternative that may work that I have not considered?</li> </ul>
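Not a linter tweak, but the shadowing line can be removed entirely: the `@` decorator syntax performs the same rebinding atomically at definition time, so there is no separate assignment for a reader or linter to overlook. A sketch (the delegating wrapper body is an assumption standing in for `# Code`):

```python
import functools

def my_decorator(func):
    @functools.wraps(func)          # preserves f.__name__, __doc__, etc.
    def wrapper(*args, **kwargs):
        return func(*args, **kwargs)  # assumption: wrapper just delegates
    return wrapper

@my_decorator   # same rebinding as f = my_decorator(f), but no later line to miss
def f():
    return "original"
```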
<python><mypy><pylint><linter><ruff>
2024-02-21 10:50:32
0
23,254
John Moutafis
78,033,347
11,561,992
Communicate with python process
<p>I have a python prog which does something like that:</p> <pre class="lang-python prettyprint-override"><code>while (true) {
    Number1 = input(&quot;What's your first number?&quot;)
    Number2 = input(&quot;What's your second number?&quot;)
    print(&quot;Your numbers are: &quot;, Number1, Number2)
    Sum = input(&quot;Whats the sum?&quot;)
    if Sum == Number1 + Number2:
        print(&quot;You win&quot;)
        exit()
    else
        print(&quot;You failed, try again!&quot;)
}
</code></pre> <p>This is obviously not the exact code, just an illustration of what it does; it's more complicated than a sum, so I won't be able to calculate the answer in advance and will need to retry more than once. I can't change that python script, only execute it.</p> <p>I want to write a script that gives it the first two inputs, reads back what's printed, then gives it the third input and reads back what's printed, and starts again, until I win.</p> <p>This is on a linux box where I don't have many rights, with python3.6.</p> <p>I've tried two methods:</p> <ul> <li>Using a bash script and a python script together: the bash script calls my python script and the main one, and uses an output file and pipes to transfer input/output. It doesn't sound like a proper way to do it, but it &quot;mostly&quot; works (I can always see the first round, sometimes it goes further). Spent yesterday afternoon on that:</li> </ul> <pre class="lang-bash prettyprint-override"><code>#!/bin/bash

# Create temporary files for communication
output_file=$(mktemp)

# Trap to ensure cleanup of temporary files on script exit
cleanup() {
    rm -f &quot;$output_file&quot;
}
trap cleanup EXIT

# Run the Python scripts with temporary files
python3 /tmp/tmp1/crack.py &lt; &quot;$output_file&quot; | /path/to/python/prog &gt; &quot;$output_file&quot; &amp;

# Wait for the background process to finish
wait

# Display the contents of the output file
cat &quot;$output_file&quot;
</code></pre> <p>crack.py:</p> <pre class="lang-python prettyprint-override"><code>#!/usr/bin/env python3
# Python file: go get input, send output, construct password.
sum = &quot;&quot;
answer = &quot;&quot;
while not &quot;You win&quot; in answer:
    print(0)
    print(3)
    # Weirdly, works better if I comment the input() lines.
    # input()
    # input()
    sum += input()[10]
    print(sum)
    # answer = input()
</code></pre> <p>EDIT: If I comment ALL the input() lines (including <code>sum += input()</code>) then it works, but obviously I have to change the while loop because I can't check the ending condition, so I have to do a fixed number of loops, randomly chosen. All the output from the python prog ends up in the output file, and since it does print my first two inputs it's sufficient for what I need, but it really bothers me that I can't properly handle the prog's output and react accordingly.</p> <ul> <li>Using only another python script with the subprocess module (spent the morning reading docs about it):</li> </ul> <pre class="lang-python prettyprint-override"><code>#!/usr/bin/python3
from subprocess import PIPE, Popen
import subprocess
import sys
from tempfile import TemporaryFile

p = Popen([sys.executable, '/path/to/main/python/exec'], stdout=PIPE, stdin=PIPE)
stdout, stderr = p.communicate(input=&quot;1\n200\noops\n&quot;)
print(stdout)
p.kill()
</code></pre> <p>Doesn't work at all. Looks like the process runs in the background and I only get the output when, and only when, it terminates. Definitely not what I want. It also raises encoding errors (even with the encoding arg in Popen, it says some output received from the main script is not utf-8). I've tried <code>p.stdout</code> instead of <code>communicate</code> too, and <code>subprocess.run</code> instead of <code>Popen</code>, but all of these seem to wait for the subprocess to terminate before returning output, so they won't do what I want.</p> <p>I tried using coproc too, but it's not installed on the machine (and I can't install anything).</p> <p>What would be the proper way to handle input/output of the python prog?</p>
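A line-by-line interaction sketch using only the stdlib: `Popen` with pipes, writing answers and reading one reply at a time, instead of `communicate()` (which waits for the child to exit). The inline child program is an assumption standing in for the real game script.

```python
import subprocess
import sys

# Hypothetical child standing in for the game script: reads two numbers,
# echoes them, then asks for their sum.
CHILD = (
    "a = input()\n"
    "b = input()\n"
    "print('Your numbers are:', a, b)\n"
    "s = input()\n"
    "print('You win' if int(s) == int(a) + int(b) else 'You failed')\n"
)

proc = subprocess.Popen(
    [sys.executable, "-u", "-c", CHILD],   # -u: unbuffered child stdout
    stdin=subprocess.PIPE, stdout=subprocess.PIPE,
    text=True, bufsize=1,                  # line-buffered pipes on our side
)

proc.stdin.write("1\n200\n")               # send the two numbers
proc.stdin.flush()
echoed = proc.stdout.readline()            # read exactly one reply line
first, second = echoed.split()[-2:]        # parse the numbers back out
proc.stdin.write(f"{int(first) + int(second)}\n")  # answer with the sum
proc.stdin.flush()
result = proc.stdout.readline().strip()
proc.wait()
```

The key points are flushing after each write and reading line by line, so neither side blocks waiting for the other; `communicate()` by design closes stdin and collects everything at once, which is why it cannot hold a dialogue.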
<python><linux><bash><input><output>
2024-02-21 10:39:34
0
305
Ablia
78,033,145
2,386,113
How to get immediate neighbors using a kd-tree irrespective of the spacing?
<p>I want to find the immediate neighbours around a given point in a multidimensional space (up to 7 dimensions).</p> <p><strong>Important facts about the space:</strong></p> <ul> <li><strong>non-linear spacing</strong> among points within a single dimension. As shown in the screenshot below, the distance between the points varies <a href="https://i.sstatic.net/5kUnh.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/5kUnh.png" alt="List item" /></a></li> <li><strong>unequal spacing</strong> among different dimensions</li> </ul> <p>(sample code to generate a grid of unequal spacing among dimensions)</p> <pre><code>x_values = np.linspace(-0.3, 0.3, 5)
y_values = np.linspace(-0.3, 0.3, 5)
z_values = np.linspace(1, 6, 6)  # unequal spacing (large spacing in z-direction)
</code></pre> <p><strong>MWE:</strong></p> <pre><code>import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D
from sklearn.neighbors import KDTree
import numpy as np

# Define ranges for X, Y, and Z values
x_values = np.linspace(-0.3, 0.3, 5)
y_values = np.linspace(-0.3, 0.3, 5)
z_values = np.linspace(1, 6, 6)  # unequal spacing (large spacing in z-direction)
# z_values = np.linspace(-0.3, 0.3, 5)  # equal spacing case

# Create meshgrid to generate combinations of X, Y, and Z values
X, Y, Z = np.meshgrid(x_values, y_values, z_values)

# Reshape the meshgrid arrays to create a single array of all combinations
points = np.column_stack((X.ravel(), Y.ravel(), Z.ravel()))

# Create a KDTree object with the sample points
kdtree = KDTree(points, leaf_size=30, metric='euclidean')

# Query point for which nearest neighbors will be found
# query_point = np.array([[0, 0, 0]])  # test query point for equal spacing
query_point = np.array([[0, 0, 2]])  # test query point for unequal spacing

# Find the indices of the nearest neighbors and their distances
distances, indices = kdtree.query(query_point, k=27)

# Plot all points in 3D
fig = plt.figure(figsize=(8, 6))
ax = fig.add_subplot(111, projection='3d')
ax.scatter(points[:, 0], points[:, 1], points[:, 2], color='blue', label='All Points')

# Plot the query point in 3D
ax.scatter(query_point[:, 0], query_point[:, 1], query_point[:, 2], color='red', label='Query Point')

# Plot the nearest neighbors in 3D
nearest_neighbors = points[indices[0]]  # Get nearest neighbors using indices
ax.scatter(nearest_neighbors[:, 0], nearest_neighbors[:, 1], nearest_neighbors[:, 2], color='green', label='Nearest Neighbors')

# Connect the query point with its nearest neighbors in 3D
for neighbor in nearest_neighbors:
    ax.plot([query_point[0, 0], neighbor[0]],
            [query_point[0, 1], neighbor[1]],
            [query_point[0, 2], neighbor[2]],
            color='gray', linestyle='--')

ax.set_xlabel('X')
ax.set_ylabel('Y')
ax.set_zlabel('Z')
ax.set_title('KD-Tree Nearest Neighbors in 3D')
ax.legend()
plt.show()
</code></pre> <p><strong>Results with the above code:</strong></p> <p><a href="https://i.sstatic.net/MuFzFm.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/MuFzFm.png" alt="enter image description here" /></a></p> <p><strong>Required results:</strong> Immediate neighbors should be selected from each dimension irrespective of their actual distance.</p> <p><a href="https://i.sstatic.net/u14gXm.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/u14gXm.png" alt="enter image description here" /></a></p>
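Since the data is a rectilinear grid, adjacency can be computed per axis by index rather than by Euclidean distance, which makes the KD-tree (and its sensitivity to spacing) unnecessary. A sketch of that idea, using the question's grid (the helper name is an assumption):

```python
import itertools
import numpy as np

def immediate_neighbors(query, axes):
    """All grid points adjacent to `query` along each axis, by index,
    so the per-axis spacing is irrelevant (hypothetical helper)."""
    per_axis = []
    for q, ax in zip(query, axes):
        i = int(np.argmin(np.abs(ax - q)))                # grid line at/nearest q
        idx = range(max(i - 1, 0), min(i + 1, len(ax) - 1) + 1)
        per_axis.append([ax[j] for j in idx])
    # Cartesian product of the candidate coordinates, minus the query itself.
    return [p for p in itertools.product(*per_axis)
            if not np.allclose(p, query)]

x = np.linspace(-0.3, 0.3, 5)
y = np.linspace(-0.3, 0.3, 5)
z = np.linspace(1, 6, 6)   # much coarser spacing in z, as in the question

nbrs = immediate_neighbors((0.0, 0.0, 2.0), [x, y, z])
```

For an interior point this yields the 3³ − 1 = 26 surrounding lattice points (3ᵈ − 1 in d dimensions), clipped at the grid boundaries, matching the "required results" picture regardless of how stretched any axis is.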
<python><scikit-learn><sklearn-pandas><nearest-neighbor>
2024-02-21 10:08:59
1
5,777
skm
78,033,119
11,584,327
How to interactively control the amplitude and frequency of a signal with two scale sliders using Tkinter
<p>Just starting out with python, I want to model a triangular signal and be able to control its amplitude and frequency in an interactive way using scale/slider widgets, with <code>Tkinter</code> if possible.</p> <p>Inappropriately, my (updated) code below generates two signals (amplitude and frequency) which are controlled by the sliders independently of each other. This isn't surprising because I didn't link the two. However, there are no examples on the internet explaining in a didactic way how to do it (<code>matplotlib</code>, in particular <code>.set_ydata</code>, allows this interactivity -<a href="https://matplotlib.org/stable/gallery/widgets/slider_demo.html" rel="nofollow noreferrer">see here</a>- but I would really like to understand the binding process in detail and whether this is possible with <code>Tkinter</code>).</p> <p>So, the <strong>question</strong> is: How to bind the 2 sliders so that they control a single signal, that is to say when we move one, the other variable remains in its last position?</p> <p>Thanks for help or any advice</p> <p><strong>Code:</strong></p> <pre><code># import modules
from tkinter import ttk
import tkinter as tk

# create window
root = tk.Tk()  # object root
root.title('Oscilloscope')
root.geometry(&quot;1200x600+200+100&quot;)

# exit button
btn_exit = tk.Button(root, text='Exit', command=root.destroy, height=2, width=15)
btn_exit.place(x=1100, y=500, anchor=tk.CENTER)

# canvas
canvas = tk.Canvas(root, width=800, height=400, bg='white')
canvas.place(x=600, y=250, anchor=tk.CENTER)
for x in range(0, 800, 50):
    canvas.create_line(x, 0, x, 400, fill='darkgray', dash=(2, 2))  # vertical dashed lines every 50 units
for x in range(0, 400, 50):
    canvas.create_line(0, x, 800, x, fill='darkgray', dash=(2, 2))  # horizontal dashed lines every 50 units
canvas.create_line(400, 0, 400, 800, fill='black', width=2)  # vertical line at x = 400
canvas.create_line(0, 200, 800, 200, fill='black', width=2)  # horizontal line at y = 200
canvas.create_rectangle(3, 3, 800, 400, width=2, outline='darkgrey')

# parameters triangular signal
amplitude = 200
frequency = 10
nb_pts = 20  # must be necessary 2 fold the frequency for a triangular signal
offset = 200

# function for drawing the triangular signal
def draw_triangular(canvas, amplitude, frequency, offset, nb_pts):
    xpts = 1000 / (nb_pts-1)
    line = []
    for i in range(nb_pts):
        x = i * xpts
        y = amplitude * ((2 * (i * frequency / nb_pts) % 2 - 1)) + offset
        line.extend((x, y))
    canvas_line = canvas.create_line(line, fill=&quot;red&quot;, width=3)
    canvas.after(50, canvas.delete, canvas_line)

# vertical widget scale for amplitude ###############################
def amplitude_value(new_value):
    # show value
    label_amplitude.configure(text=f&quot;Amplitude {new_value}&quot;)

def select_amplitude():
    sel = &quot;Value = &quot; + str(value.get(amplitude))

value_amp = tk.IntVar()
frm = ttk.Frame(root, padding=10)
frm.place(x=100, y=250, anchor=tk.CENTER)
scale_amplitude = tk.Scale(frm, variable=value_amp, command=amplitude_value,
                           from_=200, to=-200, length=400, showvalue=0,
                           tickinterval=50, orient=tk.VERTICAL)
scale_amplitude.pack(anchor=tk.CENTER, padx=10)
label_amplitude = ttk.Label(root, text=&quot;Amplitude&quot;, font=(&quot;Arial&quot;))
label_amplitude.place(x=110, y=480, anchor=tk.CENTER)

def update_amplitude():
    amplitude = scale_amplitude.get()
    draw_triangular(canvas, amplitude, frequency, offset, nb_pts)
    root.after(50, update_amplitude)
    return amplitude

# horizontal widget scale for frequency #############################
def frequency_value(new_value):
    # show value
    label_frequency.configure(text=f&quot;Frequency {new_value}&quot;)

def select_frequency():
    sel = &quot;Value = &quot; + str(value.get(frequency))

value_freq = tk.IntVar()
frm = ttk.Frame(root, padding=10)
frm.place(x=600, y=520, anchor=tk.CENTER)
scale_frequency = tk.Scale(frm, variable=value_freq, command=frequency_value,
                           from_=-50, to=50, length=800, showvalue=0,
                           tickinterval=10, orient=tk.HORIZONTAL)
scale_frequency.pack(anchor=tk.CENTER, padx=10)
label_frequency = ttk.Label(root, text=&quot;Frequency&quot;, font=(&quot;Arial&quot;))
label_frequency.place(x=600, y=560, anchor=tk.CENTER)

def update_frequency():
    frequency = scale_frequency.get()
    draw_triangular(canvas, amplitude, frequency, offset, nb_pts)
    root.after(50, update_frequency)
    return frequency

# reset function
def reset_values():
    value_amp.set(0)
    amplitude_value(0)
    value_freq.set(0)
    frequency_value(0)

# reset button
btn_reset = tk.Button(root, text='Reset', command=reset_values, height=2, width=15)
btn_reset.place(x=1100, y=400, anchor=tk.CENTER)

update_amplitude()
update_frequency()
root.mainloop()
</code></pre> <p><strong>Edit</strong></p> <p>Thanks to acw1668's answer, the two sliders now interact and the signal is fully controllable, as updated in the new version of the code below (note that the triangular signal is now built using <code>signal</code> from <code>scipy</code>).</p> <p>Thanks for all</p> <p><strong>Updated code</strong></p> <pre><code># import modules
from tkinter import ttk
import tkinter as tk
import numpy as np
from scipy import signal as sg

# create window
root = tk.Tk()  # object root
root.title('Oscilloscope')
root.geometry(&quot;1200x600+200+100&quot;)

# exit button
btn_exit = tk.Button(root, text='Exit', command=root.destroy, height=2, width=15)
btn_exit.place(x=1100, y=500, anchor=tk.CENTER)

# canvas of internal grid
canvas = tk.Canvas(root, width=800, height=400, bg='white')
canvas.place(x=600, y=250, anchor=tk.CENTER)
for x in range(0, 800, 50):
    canvas.create_line(x, 0, x, 400, fill='darkgray', dash=(2, 2))
for x in range(0, 400, 50):
    canvas.create_line(0, x, 800, x, fill='darkgray', dash=(2, 2))
canvas.create_line(400, 0, 400, 800, fill='black', width=2)
canvas.create_line(0, 200, 800, 200, fill='black', width=2)
canvas.create_rectangle(3, 3, 800, 400, width=2, outline='darkgrey')

# parameters triangular signal
nb_pts = 2500
x_range = 800
offset = 200

# updated draw_triangular()
def draw_triangular(canvas, amplitude, frequency, offset, nb_pts):
    canvas.delete(&quot;line&quot;)  # clear current plot
    x_pts = x_range / (nb_pts-1)
    line = []
    for i in range(nb_pts):
        x = (i * x_pts)
        y = amplitude * sg.sawtooth(2 * np.pi * frequency * i/nb_pts, width=0.5) + offset
        line.extend((x, y))
    canvas.create_line(line, fill=&quot;red&quot;, width=3, tag=&quot;line&quot;)

# function to be called when any of the scales is changed
def on_scale_changed(*args):
    amplitude = value_amp.get()
    frequency = value_freq.get()
    draw_triangular(canvas, amplitude, frequency, offset, nb_pts)

# vertical widget scale for amplitude ###############################
def select_amplitude():
    sel = &quot;Value = &quot; + str(value.get(amplitude))

value_amp = tk.IntVar()
frm = ttk.Frame(root, padding=10)
frm.place(x=100, y=250, anchor=tk.CENTER)
scale_amplitude = tk.Scale(frm, variable=value_amp, command=on_scale_changed,
                           from_=200, to=-200, length=400, showvalue=1,
                           tickinterval=50, orient=tk.VERTICAL)
scale_amplitude.pack(anchor=tk.CENTER, padx=10)
label_amplitude = ttk.Label(root, text=&quot;Amplitude&quot;, font=(&quot;Arial&quot;))
label_amplitude.place(x=110, y=480, anchor=tk.CENTER)

# horizontal widget scale for frequency #############################
def select_frequency():
    sel = &quot;Value = &quot; + str(value.get(frequency))

value_freq = tk.IntVar()
frm = ttk.Frame(root, padding=10)
frm.place(x=600, y=480, anchor=tk.CENTER)
scale_frequency = tk.Scale(frm, variable=value_freq, command=on_scale_changed,
                           from_=0, to=50, length=800, showvalue=1,
                           tickinterval=5, orient=tk.HORIZONTAL)
scale_frequency.pack(anchor=tk.CENTER, padx=10)
label_frequency = ttk.Label(root, text=&quot;Frequency&quot;, font=(&quot;Arial&quot;))
label_frequency.place(x=600, y=530, anchor=tk.CENTER)

# reset function
def reset_values():
    value_amp.set(0)
    value_freq.set(0)
    canvas.delete(&quot;line&quot;)

# reset button
btn_reset = tk.Button(root, text='Reset', command=reset_values, height=2, width=15)
btn_reset.place(x=1100, y=400, anchor=tk.CENTER)

root.mainloop()
</code></pre>
<python><tkinter><widget><waveform>
2024-02-21 10:05:01
1
902
denis
78,032,524
10,686,345
Airflow - NameError: name 'ti' is not defined
<p>I am trying to execute an Airflow script that consists of a couple of functions. I want to pass the value of 'program_no' as an argument in the spark submit request; I get it in my DAG from an api call via context in the get_conf method. I am trying to pass it like {ti.xcom_pull(task_ids='parameterized_task')} but I am getting an error - NameError: name 'ti' is not defined. Please help me resolve this issue.<br /> I have also tried to pass {prog_no} instead of {ti.xcom_pull(task_ids='parameterized_task')} but I get the same error - prog_no not defined.</p> <pre class="lang-py prettyprint-override"><code>dag = DAG(
    dag_id=os.path.basename(__file__).replace(&quot;.py&quot;, &quot;&quot;),
    default_args=default_args,
    start_date=datetime(2023, 12, 21),
    schedule_interval=None,
    description='Event based job for calculating missed sales based allowances for retroactive program setup'
)

def get_conf(**context):
    global prog_no
    #ti = context['ti']
    prog_no = context['dag_run'].conf['program_no']
    return prog_no

parameterized_task = PythonOperator(
    task_id=&quot;parameterized_task&quot;,
    python_callable=get_conf,
    provide_context=True,
    dag=dag
)

sparkway_request = SimpleHttpOperator(
    task_id='sparkway_request',
    endpoint=Variable.get('sparkway_api_endpoint'),
    method=&quot;POST&quot;,
    http_conn_id=&quot;SPARKWAY_CONN&quot;,
    data=json.dumps({
        &quot;cmd&quot;: &quot;sparkway-submit --master kubernetes --job-name spark-allowance-calculation --class com.xxx.CalculationApplication --spark-app s3a://xyz.jar --arguments SuspendedProgramStatus --num-executors 2 --executor-cores 2 --executor-memory 3G --driver-memory 3G&quot;,
        &quot;arguments&quot;: f&quot;{dag.latest_execution_date},{ti.xcom_pull(task_ids='parameterized_task')}&quot;,
        &quot;type&quot;: &quot;job&quot;
    }),
    headers={
        &quot;Authorization&quot;: f&quot;Bearer {Variable.get('sparkway_token')}&quot;,
        &quot;Content-Type&quot;: &quot;application/json&quot;,
        &quot;X-CSRF-TOKEN&quot;: Variable.get('sparkway_csrf_token')
    },
    response_check=lambda response: handle_sparkway_response(response),
    log_response=True,
    dag=dag
)
</code></pre> <p>parameterized_task &gt;&gt; sparkway_request</p>
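`ti` only exists inside Airflow's Jinja template context at run time, not at DAG-parse time, which is why the f-string raises NameError. Since `data` is a templated field of SimpleHttpOperator, the pull can be written as a literal Jinja string that Airflow renders when the task runs. A sketch of just that string (the task id mirrors the question; `ds` is the standard execution-date macro):

```python
import json

# Leave the braces to Jinja: Airflow renders templated fields at run time,
# when `ti` (the TaskInstance) actually exists.
data = json.dumps({
    "arguments": "{{ ds }},{{ ti.xcom_pull(task_ids='parameterized_task') }}",
    "type": "job",
})
```

Note the f-string is gone: the `{{ … }}` must reach Airflow untouched so the templating engine, not Python, substitutes the values.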
<python><airflow><spark-submit>
2024-02-21 08:29:39
2
457
Ayush Goyal
78,032,313
3,099,733
How to create a TypeGuard to ensure value not None in python?
<p>Given the following code</p> <pre class="lang-py prettyprint-override"><code>val = object.data
assert val, 'data should not be None'
</code></pre> <p>I use this a lot in my project to ensure an <code>Optional[T]</code> value is not None.</p> <p>I created the following method to keep the code DRY.</p> <pre class="lang-py prettyprint-override"><code>def not_none(val: T, msg: str) -&gt; AssertTypeOrSomething[T]:
    assert val, msg
    return val
</code></pre> <p>It works at runtime, but how do I write the right type annotation to pass the static check?</p>
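For the return-value pattern no TypeGuard is needed: annotating the parameter as `Optional[T]` and the return as plain `T` is what narrows the result for the checker. A sketch (the helper name follows the question; raising instead of `assert` is a choice, not a requirement):

```python
from typing import Optional, TypeVar

T = TypeVar("T")

def not_none(val: Optional[T], msg: str = "value should not be None") -> T:
    # Returning T (not Optional[T]) is what convinces the static checker.
    if val is None:
        raise AssertionError(msg)
    return val

maybe: Optional[int] = 5
definitely: int = not_none(maybe)   # checker sees int, not Optional[int]
```

One subtle difference from the original: `assert val` also rejects falsy values such as `0` and `''`, while the explicit `is None` test keeps them.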
<python><python-typing>
2024-02-21 07:44:47
1
1,959
link89
78,032,224
11,840,002
PySpark and Databricks addFile and SparkFiles.get Exception java.io.FileNotFoundException
<p>I am trying to:</p> <ol> <li>Load an SSL certificate from S3 to a cluster.</li> <li><code>addFile</code> so all nodes see the file.</li> <li>Create a connection URL to IBM db2 with JDBC.</li> </ol> <p>Steps 1 and 2 work successfully. I can open the file with <code>with open(cert_filepath, &quot;r&quot;) as file</code> and print it.</p> <p>But in Step 3 I get the following error:</p> <blockquote> <p>org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 0.0 failed 4 times, most recent failure: Lost task 0.3 in stage 0.0 (TID 3) (10.239.124.48 executor 1): com.ibm.db2.jcc.am.DisconnectNonTransientConnectionException: [jcc][t4][2043][11550][4.27.25] Exception java.io.FileNotFoundException: Error opening socket to server / on port 53,101 with message: /local_disk0/spark-c30d1a7f-deea-4184-a9f9-2b6c9eab6c5e/userFiles-761620cf-2cb1-4623-9677-68694f0e4b3c/dwt01_db2_ssl.arm (No such file or directory). ERRORCODE=-4499, SQLSTATE=08001</p> </blockquote> <p>The port is <code>53101</code>; the error message just formats it with a comma.</p> <p>The essential part of the code:</p> <pre class="lang-py prettyprint-override"><code>sc = SparkContext.getOrCreate()

s3_client = boto3.client(&quot;s3&quot;)
s3_client.download_file(
    Bucket=&quot;my-bucket-s3&quot;,
    Key=&quot;my/path/db2/dwt01_db2_ssl.arm&quot;,
    Filename=&quot;dwt01_db2_ssl.arm&quot;,
)
sc.addFile(&quot;dwt01_db2_ssl.arm&quot;)
cert_filepath = SparkFiles.get(&quot;dwt01_db2_ssl.arm&quot;)

user_name = cls.get_aws_secret(secret_name=cls._db2_username_aws_secret_name, key=&quot;username&quot;, region=&quot;eu-central-1&quot;)
password = cls.get_aws_secret(secret_name=cls._db2_password_aws_secret_name, key=&quot;password&quot;, region=&quot;eu-central-1&quot;)

driver = &quot;com.ibm.db2.jcc.DB2Driver&quot;
jdbc_url = f&quot;jdbc:db2://{hostname}:{port}/{database}:sslConnection=true;sslCertLocation={cert_filepath};&quot;

df = (
    spark.read.format(&quot;jdbc&quot;)
    .option(&quot;driver&quot;, driver)
    .option(&quot;url&quot;, jdbc_url)
    .option(&quot;dbtable&quot;, f&quot;({query}) as src&quot;)
    .option(&quot;user&quot;, user_name)
    .option(&quot;password&quot;, password)
    .load()
)
</code></pre> <p>I can't figure out the cause of this <code>FileNotFoundException</code>, since the file is clearly there when I read and print it.</p>
<python><apache-spark><amazon-s3><pyspark><databricks>
2024-02-21 07:26:19
1
1,658
eemilk
78,032,156
13,613,776
sqlalchemy.orm.exc.DetachedInstanceError: Parent instance <User at 0x197950dead0> is not bound to a Session;
<p>Hey, I am trying to print all stocks which users have.</p> <pre><code>class User(Base, TableNameMixin):
    user_id: int = Column(BIGINT, primary_key=True, autoincrement=False)
    stocks: Mapped[List[&quot;Stock&quot;]] = relationship(back_populates=&quot;user&quot;)

class Stock(Base, TableNameMixin):
    stock_id: int = Column(Integer, primary_key=True)
    user_id: int = Column(BIGINT, ForeignKey(&quot;users.user_id&quot;))
    symbol: str = Column(String(10), nullable=True)
    quantity: int = Column(Integer, server_default=text(&quot;34&quot;))
    user: Mapped[&quot;User&quot;] = relationship(back_populates=&quot;stocks&quot;, foreign_keys=[user_id])

    def __repr__(self):
        return f&quot;&lt;Stock(symbol={self.symbol}, quantity={self.quantity})&gt;&quot;
</code></pre> <p>This is how I defined my db model. This is my function to read a user's stocks:</p> <pre><code>async def select_user(user_id: int) -&gt; Optional[User]:
    async with async_session() as session:
        select_stmt = select(User).where(User.user_id == user_id)
        result = await session.execute(select_stmt)
        user = result.scalar_one_or_none()
        try:
            if user.stocks:
                print(&quot;yes&quot;)
        except Exception as e:
            await message.answer(str(e))
</code></pre> <p>Please correct me if I am doing something wrong.</p> <p><strong>Full error:</strong> <code>Parent instance &lt;User at 0x2277a012190&gt; is not bound to a Session; lazy load operation of attribute 'stocks' cannot proceed (Background on this error at: https://sqlalche.me/e/20/bhk3)</code></p>
<python><sqlalchemy>
2024-02-21 07:12:19
0
448
Rishu Pandey
78,032,069
188,331
Free up Python script-driven Selenium WebDriver's cache in Ubuntu?
<p>I am using Selenium WebDriver to scrape website data 24 hours non-stop. I found that the cache size is increasing non-stop.</p> <pre><code>4879M /tmp/snap-private-tmp/snap.chromium/tmp
</code></pre> <p>My Python script <strong>scrap.py</strong> is as follows:</p> <pre><code>from bs4 import BeautifulSoup
from selenium import webdriver
from selenium.webdriver.chrome.options import Options
from selenium.webdriver.chrome.service import Service
from selenium.common.exceptions import WebDriverException
from time import sleep

options = Options()
options.add_argument('--headless')
options.add_argument('--disable-gpu')
options.add_argument('--no-sandbox')
options.add_argument('--disable-dev-shm-usage')
driver = webdriver.Chrome(service=Service(), options=options)
driver.get(&quot;https://example.com/&quot;)
sleep(3)
html = driver.page_source
soup = BeautifulSoup(html, features='html.parser')
</code></pre> <p>This script is set to run 1000 times using this command (I know this is not a smart way):</p> <pre><code>for run in {1..1000}; do python scrap.py; done
</code></pre> <p>How can I free up the disk space used after executing the Python script? Should I just remove the contents of the folder?</p> <hr /> <p>Currently, I am using this command to free up cache entries modified more than 1 day ago:</p> <pre><code>sudo find /tmp/snap-private-tmp/snap.chromium/tmp/ -name '.org.chromium.Chromium*' -type d -mtime +0 -exec rm -rf {} +
</code></pre> <p>which removed 4GB of data.</p>
<python><selenium-webdriver>
2024-02-21 06:54:06
0
54,395
Raptor
78,031,873
1,029,902
Keeping macros when writing to .xlsm using Python on Ubuntu
<p>I have a fairly straightforward script. It is supposed to replace all the data in one sheet of an <code>.xlsm</code> workbook. The workbook contains macros which I would like to keep. Unfortunately, <strong>I cannot find a way to do this on Ubuntu</strong> (not Windows).</p> <p>I cannot use <code>xlwings</code> because it needs an instance of Excel which I cannot get on Ubuntu.</p> <p>I have tried to use <code>openpyxl</code> with <code>keep_vba</code> but the macros are gone when it is done writing. How can I possibly keep my macros if I am running this script on Ubuntu?</p> <p>This is my script:</p> <pre><code>import pandas as pd import re import os import xlrd import openpyxl from datetime import datetime from openpyxl import load_workbook from openpyxl.utils.dataframe import dataframe_to_rows all_data=pd.read_csv('full.csv') file_path = 'data.xlsm' wb = load_workbook(filename=file_path, read_only = False, keep_vba = True) sheet=wb['Sheet1'] sheet.delete_rows(sheet.min_row, sheet.max_row) for r_idx, row in enumerate(dataframe_to_rows(all_data, index=False), sheet.min_row): print(f&quot;{r_idx}...&quot;) for c_idx, value in enumerate(row, sheet.min_column): sheet.cell(row=r_idx, column=c_idx, value=value) wb.save(file_path) </code></pre> <p>Also, the part of the script that is writing to the sheet is taking really long. I am referring to the <code>for</code> loop. If there is a more efficient way of doing it, please let me know. Right now it is taking almost 3 hours to write 150k rows. The rows are very simple. The csv rows look like this and that is all that is going into the sheet.</p> <pre><code>Symbol,Date,Open,High,Low,Close,Volume A,2024-Feb-16,133.59,136.27,133.59,134.84,1066800 AA,2024-Feb-16,27.34,28.03,27.16,27.4,4686200 AAA,2024-Feb-16,25.22,25.22,25.0999,25.124,7100 AAAU,2024-Feb-16,19.78,19.96,19.76,19.915,3564400 </code></pre> <p>Any help would be appreciated.</p>
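On the speed point, most of those 3 hours are probably spent in the per-value `sheet.cell(...)` calls; `worksheet.append()` writes a whole row per call and is typically much faster. A rough sketch (a fresh workbook is used here so the snippet is self-contained; with the question's file you would keep `load_workbook(..., keep_vba=True)` and call `sheet.append()` after `delete_rows()` the same way):

```python
import pandas as pd
from openpyxl import Workbook
from openpyxl.utils.dataframe import dataframe_to_rows

# stand-in for pd.read_csv('full.csv')
all_data = pd.DataFrame({
    "Symbol": ["A", "AA"],
    "Close": [134.84, 27.4],
})

wb = Workbook()
sheet = wb.active

# one append() per row instead of one cell() call per value
for row in dataframe_to_rows(all_data, index=False, header=True):
    sheet.append(row)
```
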
<python><pandas><excel><linux><openpyxl>
2024-02-21 06:12:55
0
557
Tendekai Muchenje
78,031,665
8,589,908
Reading and Writing numpy Float Complex 128
<p>I have a block of code given to me that generates a file containing a numpy Float Complex 128 array from streamed data from an SDR. Another part of this same program can read in the file containing the numpy Float Complex 128 data using <code>np.load</code>.</p> <p>When I try to generate my own Float Complex 128 array by writing the same SDR data to a Float Complex 128 file and then read it in, I get the error:</p> <pre><code>File &quot;c:\Users\Owner\test_pyrtlsdr\.venv\Lib\site-packages\numpy\lib\npyio.py&quot;, line 462, in load raise ValueError(&quot;Cannot load file containing pickled data &quot; ValueError: Cannot load file containing pickled data when allow_pickle=False </code></pre> <p>The code I have been given that is able to generate a file looks like:</p> <pre><code>async def run_readonly(sample_config: SampleConfig, filename: str, max_samples: int): chunk_size = sample_config.read_size nrows = max_samples // sample_config.read_size if nrows * chunk_size &lt; max_samples: nrows += 1 samples = np.zeros((nrows, chunk_size), dtype=np.complex128) sample_config = SampleConfig(read_size=chunk_size) reader = SampleReader(sample_config) async with reader: await reader.open_stream() i = 0 count = 0 async for _samples in reader: if count == 0: print(f'{_samples.size=}') samples[i,:] = _samples count += _samples.size # print(f'{i}\t{reader.aio_queue.qsize()=}\t{count=}') i += 1 if count &gt;= max_samples: break samples = samples.flatten()[:max_samples] np.save(filename, samples) </code></pre> <p>I have confirmed that the file generated from above is of <code>dtype=complex128</code>.</p> <p>In another class, I receive 250000 samples from the SDR every 0.1 seconds, and should keep appending samples to a file as they come in:</p> <pre><code>class Processor: def __init__(self): self.f = open('testing_gain.fc32', 'wb') def process(self, samples: SamplesT): #for testing - log to file self.f.write(samples.astype(np.complex128).tobytes()) </code></pre> <p>The problem with the above is that <code>f.write</code> does not allow me to set pickle to false. I could use <code>np.save</code>, which does allow <code>allow_pickle=False</code>, but <code>np.save</code> doesn't allow for the constant appending every time it is called.</p> <p>I tried using np.save and writing out only when a certain number of samples had been read, but it is not very elegant:</p> <pre><code> save_flag = False if (self.stateful_index &lt; 50000 and not save_flag): self.test_samp = np.append(self.test_samp, samples) else: save_flag = True #self.f2.write(self.test_samp.astype(np.complex128).tobytes, allow_pickle=False) np.save(self.f2, self.test_samp, allow_pickle=False) </code></pre> <p>Surely there is a cleaner way I can save and append <code>samples</code> to a file with pickle set to False?</p>
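For what it's worth, `np.load` only understands the `.npy` format; when pointed at a file of raw bytes (which is what `f.write(samples.tobytes())` produces) it falls back to trying to unpickle, which is where the `allow_pickle=False` message comes from. One way to keep the constant appending is to stay with raw bytes on the writer side and read back with `np.fromfile`, supplying the dtype explicitly: no header and no pickle involved. A small self-contained sketch (a temporary file stands in for `testing_gain.fc32`):

```python
import os
import tempfile

import numpy as np

path = os.path.join(tempfile.mkdtemp(), "testing_gain.fc32")

# writer side: append each chunk as it arrives, exactly like f.write(...)
chunk1 = np.array([1 + 2j, 3 + 4j], dtype=np.complex128)
chunk2 = np.array([5 + 6j], dtype=np.complex128)
for chunk in (chunk1, chunk2):
    with open(path, "ab") as f:
        chunk.tofile(f)  # raw bytes, no .npy header

# reader side: the dtype is supplied by hand, so pickle never comes into it
data = np.fromfile(path, dtype=np.complex128)
```
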
<python><numpy>
2024-02-21 05:03:39
0
499
Radika Moonesinghe
78,031,540
21,323,912
"Compiled regex exceeds size limit of 10485760 bytes." error when defining the pattern property in a Pydantic Field
<h1>Original question</h1> <p>I'm trying to define a Pydantic model with a string field that matches the following regex pattern: <code>^(\/[\w-]{1,255}){1,64}\/?$</code>, which should be used to validate expressions that match the form: <code>/alphanumeric-slugs-no-longer-than-255-chars/separated-by-slashes/no-more-than-64-slugs/</code>.</p> <p>Basically, I want to match 1 to 64 slugs, each of max length 255, separated by slashes. This is because after validation, I want to split the field into slugs by the slashes and save each slug in a database.</p> <p>However, when I define my Pydantic model with the field as an <code>Annotated</code> string <code>Field</code> specifying the regex pattern like so:</p> <pre class="lang-py prettyprint-override"><code>from typing import Annotated from pydantic import BaseModel, Field class MyModel(BaseModel): my_field: Annotated[str, Field(pattern=r&quot;^(\/[\w-]{1,255}){1,64}\/?$&quot;)] </code></pre> <p>I'm met with the following error traceback:</p> <pre><code>pydantic_core._pydantic_core.SchemaError: Error building &quot;function-after&quot; validator: SchemaError: Error building &quot;function-wrap&quot; validator: SchemaError: Error building &quot;model&quot; validator: SchemaError: Error building &quot;model-fields&quot; validator: SchemaError: Field &quot;full_path&quot;: SchemaError: Error building &quot;default&quot; validator: SchemaError: Error building &quot;str&quot; validator: SchemaError: Compiled regex exceeds size limit of 10485760 bytes.
</code></pre> <p>I tried to define a <code>field_validator</code> instead, and use the <code>re.match</code> function from the standard library to add the regex validation like so:</p> <pre class="lang-py prettyprint-override"><code>from pydantic import BaseModel, field_validator class MyModel(BaseModel): my_field: str @field_validator(&quot;my_field&quot;) @classmethod def my_field_matches_regex(cls, v: str) -&gt; str: if not re.match(r&quot;^(\/[\w-]{1,255}){1,64}\/?$&quot;, v): raise ValueError(&quot;Invalid format, must match the regex pattern: ^(\/[\w-]{1,255}){1,64}\/?$&quot;) return v </code></pre> <p>This approach actually worked. However, I was hoping to take advantage of the simplicity of the <code>Field</code> when defining the regex pattern. I'm not sure why this error is thrown with the <code>Field</code>'s <code>pattern</code> property and not with the <code>re.match</code> approach.</p> <hr /> <h1>More information about the problem</h1> <ul> <li>Python version: <code>3.12</code></li> <li>Pydantic version: <code>2.6.1</code></li> </ul> <p>As @InSync pointed out below, this is not a runtime error but a compile time error emitted by Rust. Further digging into Pydantic's documentation does in fact show that it uses the <a href="https://github.com/rust-lang/regex" rel="nofollow noreferrer">Rust regex implementation</a> as its default <a href="https://docs.pydantic.dev/2.6/api/pydantic_core_schema/#pydantic_core.core_schema.CoreConfig" rel="nofollow noreferrer"><code>regex_engine</code></a>.</p> <p>The original question is asking <em>why</em> does this behavior happen when working with a Pydantic's <code>Field</code> and not with <code>re.match</code>. 
It turns out these two approaches use different underlying regex engines.</p> <h2>Testing the two approaches</h2> <p>I tested the two engines with the same pattern <code>^(\/[\w-]{1,255}){1,64}\/?$</code> on regex101 and got these results:</p> <ul> <li><a href="https://regex101.com/r/UWieis/1" rel="nofollow noreferrer">Python engine</a>: passes 15 tests. Tested for valid characters, invalid characters, 64 slugs, more than 64 slugs, 255 characters per slug, more than 255 characters per slug, longest valid combination possible, extra characters after longest valid combination. Average test completion is 0 ms. The most inefficient query (last one) took 65156 steps and resolved in 5 ms. Was not able to replicate a catastrophic backtracking error.</li> <li><a href="https://regex101.com/r/kDngyJ/1" rel="nofollow noreferrer">Regex engine</a>: got <code>engine error</code> - <code>Compiled regex exceeds size limit of 10485760 bytes.</code></li> </ul> <h3>Bonus</h3> <p>Tested the pattern <code>^(\/[\w-]+)+\/?$</code> and got these results:</p> <ul> <li><a href="https://regex101.com/r/uHIeML/1" rel="nofollow noreferrer">Python engine</a>: passes 10 tests (couldn't replicate string length tests). The most inefficient query is just writing a lot of valid characters until an invalid character. Average test completion is 0 ms. Got one with more than 10000 in length that took 40292 steps and resolved in 2 ms. Was not able to replicate a catastrophic backtracking error.</li> <li><a href="https://regex101.com/r/QrCq6O/1" rel="nofollow noreferrer">Regex engine</a>: passed the same 10 tests. Average test completion is 0 ms. Same inefficient string resolved in 2 ms.</li> </ul> <p>Tested the pattern <code>^(\/[\w-]+){64}\/?$</code> and this one was more surprising. Got these results:</p> <ul> <li><a href="https://regex101.com/r/pEsmDW/1" rel="nofollow noreferrer">Python engine</a>: passes 13 tests, fails 1 very inefficient test of more than 64 slugs 10000 characters long each. Average test completion is 0 ms.</li> <li><a href="https://regex101.com/r/fuCIqk/1" rel="nofollow noreferrer">Regex engine</a>: passes the 14 tests. However, all tests are suddenly much slower (30 ms - 60 ms to complete). The most inefficient query takes upwards of 130 ms to be resolved.</li> </ul> <h2>Motivation for the original question</h2> <p>I was not able to find the error <code>Compiled regex exceeds size limit of 10485760 bytes.</code> when working on Python with Pydantic. This question is aimed at helping others who encounter the same error understand why it happens (I didn't know that this was a Rust error until it was pointed out to me), and how they can fix it (in Pydantic, that is).</p>
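If keeping the declarative `Field(pattern=...)` style matters, recent Pydantic 2.x releases (including the 2.6.1 used here) let a model opt into Python's `re` module via the `regex_engine` config key, so the bounded repetition never reaches the Rust regex compiler. A sketch (the exact minimum Pydantic version for `regex_engine` is an assumption; check your version's `ConfigDict` docs):

```python
from pydantic import BaseModel, ConfigDict, Field

class MyModel(BaseModel):
    # fall back to Python's re module for this model only, so the bounded
    # repetition is not compiled by the Rust regex crate at all
    model_config = ConfigDict(regex_engine="python-re")

    my_field: str = Field(pattern=r"^(\/[\w-]{1,255}){1,64}\/?$")

m = MyModel(my_field="/valid-slug/another-slug/")
```
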
<python><regex><pydantic>
2024-02-21 04:21:23
1
556
amoralesc
78,031,403
5,782,416
How to static type check dicts for standard API Response types?
<p>I am looking to learn how to make a more robust API in Python using pythonic methods. My goal is to achieve a more strict API response, to reduce the amount of parsing errors on our frontend and increase standardisation for response types (error states, etc).</p> <p>For the sake of argument, I am using Python 3.11 and Flask. Below is sample code:</p> <pre class="lang-py prettyprint-override"><code>from flask import Flask, jsonify from typing import TypedDict, Any, assert_type, Union app = Flask(__name__) # This is our forced API Response that we want to type check. # Simplified Union data for argument's sake. class APIResponse(TypedDict): success: bool data: Union[dict, list] @app.get(&quot;/&quot;) def index(): response = { &quot;success&quot;: True, &quot;data&quot;: { &quot;hello&quot;: &quot;World!&quot; } } assert_type(response, APIResponse) return jsonify(response) if __name__ == &quot;__main__&quot;: app.run(debug=True) </code></pre> <p>No errors, as expected. But when changing response to the following:</p> <pre class="lang-json prettyprint-override"><code>{ &quot;success&quot;: &quot;notABool&quot;, &quot;dat&quot;: { &quot;hello&quot;, &quot;World!&quot; } # a Set, does not satisfy our type union, an easy (but platonic) mistake } </code></pre> <p>I also get no errors (warnings) showing up in my IDE.</p> <p>I've read the following to try to find answers, but some assume MyPy and others create really lax static type checking (dict is a dict, therefore is good):</p> <ul> <li><a href="https://stackoverflow.com/questions/71264869/validate-dict-against-typeddict-with-mypy">Validate `Dict` against `TypedDict` with MyPy</a></li> <li><a href="https://stackoverflow.com/questions/53221949/python-type-hinting-on-imported-flask-modules">Python type hinting on imported Flask modules</a></li> </ul> <p>I understand I could use PyDantic, but I would rather not wait for runtime errors to find out that an API response was not implemented correctly.</p> <p>Thanks for any 
help.</p>
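For completeness: `assert_type` only asserts what the checker has already inferred, it does nothing at runtime, and a bare dict literal assigned to an unannotated name is inferred as a plain `dict`, so nothing validates its shape. Annotating the variable with the TypedDict makes mypy/pyright check the literal structurally; a sketch (this is still purely static, so a type checker has to actually be running on the code):

```python
from typing import TypedDict, Union

class APIResponse(TypedDict):
    success: bool
    data: Union[dict, list]

# annotating the variable makes mypy/pyright check the literal's shape:
# every key must exist, no extras, and every value must match its type
response: APIResponse = {
    "success": True,
    "data": {"hello": "World!"},
}

# a checker would reject both of these (wrong value type / misspelled key):
# bad: APIResponse = {"success": "notABool", "data": {"hello": "World!"}}
# bad2: APIResponse = {"success": True, "dat": {"hello": "World!"}}
```
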
<python><flask><python-typing>
2024-02-21 03:23:52
1
1,664
Jack Hales
78,031,380
12,314,521
How to vectorize this 2 loops in Pytorch (Difficult)
<p>How to vectorize this:</p> <pre><code>vocab_size = 20 batch_size = 2 input_len = 5 output_len = 10 input_ids = torch.randint(0, vocab_size, (batch_size, input_len)) output_ids = torch.randint(0, vocab_size, (batch_size, output_len)) print(input_ids) print(output_ids) tensor([[ 0, 8, 7, 12, 8], [14, 15, 9, 7, 10]]) tensor([[ 2, 8, 3, 15, 2, 19, 7, 1, 19, 8], [10, 8, 0, 7, 16, 0, 6, 2, 16, 13]]) </code></pre> <p>Basically, the new value in <code>output_ids</code> will be <code>vocab_size</code> + the index of the k-th occurrence of that value in <code>input_ids</code>, because that value might appear multiple times in both <code>input_ids</code> and <code>output_ids</code>. So if a value appears for the second time in <code>output_ids</code>, it will be replaced by <code>vocab_size</code> + the index of its second occurrence in <code>input_ids</code> (though my code below only gets the first appearance). I changed some values in the output by hand as an example (the 21 and 24 in the first row of the output).</p> <p>This is what I want:</p> <pre><code>#%% for i in range(batch_size): for k, value in enumerate(output_ids[i]): if value in input_ids[i] and value not in [0, 1, 2]: # means that I will ignore values 0, 1, 2 output_ids[i][k] = vocab_size + torch.where(input_ids[i] == value)[0][0] output_ids tensor([[ 2, 21, 3, 15, 2, 19, 22, 1, 19, 24], [24, 8, 0, 23, 16, 0, 6, 2, 16, 13]]) </code></pre>
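A broadcast-based sketch that reproduces the posted loop (i.e. always the *first* occurrence in `input_ids`, since that is what the loop does; the k-th-occurrence variant would additionally need a `cumsum` over repeat counts). The first-index computation uses a masked minimum rather than `argmax`, so the "first match" behavior does not depend on any backend tie-breaking:

```python
import torch

def remap(output_ids: torch.Tensor, input_ids: torch.Tensor,
          vocab_size: int, ignore=(0, 1, 2)) -> torch.Tensor:
    input_len = input_ids.shape[1]
    # (batch, out_len, in_len) table of all output/input equalities
    match = output_ids.unsqueeze(2) == input_ids.unsqueeze(1)
    # first matching input index per output token: non-matches become
    # input_len, then a min over the input axis picks the earliest match
    idx = torch.arange(input_len, device=input_ids.device)
    first_idx = torch.where(match, idx, torch.tensor(input_len)).amin(dim=2)
    has_match = first_idx < input_len
    ignored = torch.isin(output_ids, torch.tensor(ignore, device=output_ids.device))
    replace = has_match & ~ignored
    return torch.where(replace, vocab_size + first_idx, output_ids)

# 8 sits at input index 1, so it becomes 20 + 1 = 21; 2 is ignored
result = remap(torch.tensor([[2, 8, 3]]), torch.tensor([[0, 8, 7]]), 20)
```
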
<python><pytorch><vectorization>
2024-02-21 03:16:17
1
351
jupyter
78,031,374
10,567,465
Trigger a property setter when an attribute of an injected class changes in python
<p>I have this python problem (simplified here for the sake of the example) where I have 2 classes: one child class (to be injected) and one parent class. In the parent class, I have an attribute that stores an instance of the child class (via dependency injection). This attribute has a getter and a setter.</p> <p>What I am trying to achieve is to have something happen when an (any) attribute in the child instance changes (outside the parent class). However, in the below example, nothing happens. The only way to trigger the setter is if I replace the child class with another instance (which I don't want).</p> <pre><code>class Spline3D: def __init__(self, num=20): self.num = num class Spline3DViewer: def __init__(self, spline3D): self._spline3D = spline3D @property def spline3D(self): return self._spline3D @spline3D.setter def spline3D(self, value): self._spline3D = value print(f&quot;Updated self.spline3D.num is = {self._spline3D.num}&quot;) spline3D = Spline3D(num=30) spline3DViewer = Spline3DViewer(spline3D=spline3D) spline3D.num = 40 ### This should, technically, trigger the @spline3D.setter in Spline3DViewer and print &quot;Updated self.spline3D.num is = 40&quot; but nothing happens </code></pre> <p>However, this works (not the solution I want):</p> <pre><code>spline3D2 = Spline3D(num=40) spline3DViewer.spline3D = spline3D2 </code></pre> <p>I'm clearly doing something wrong but I can't seem to find the solution. Any help is appreciated.</p> <p>Kind regards,</p>
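A property on the viewer genuinely cannot see this: `spline3D.num = 40` runs `Spline3D`'s own attribute machinery and never touches the viewer's setter. One common workaround is a small observer hook in the child class, so any attribute assignment notifies a registered callback. A minimal hand-rolled sketch; the names mirror the question, and the `_on_change` slot is an invention for illustration:

```python
class Spline3D:
    def __init__(self, num=20):
        self._on_change = None  # observer slot, filled in by whoever watches us
        self.num = num

    def __setattr__(self, name, value):
        object.__setattr__(self, name, value)
        callback = getattr(self, "_on_change", None)
        if callback is not None and not name.startswith("_"):
            callback(name, value)  # notify after every public-attribute change


class Spline3DViewer:
    def __init__(self, spline3D):
        self._spline3D = spline3D
        spline3D._on_change = self._spline3D_changed  # subscribe

    def _spline3D_changed(self, name, value):
        print(f"Updated self.spline3D.{name} is = {value}")


spline3D = Spline3D(num=30)
viewer = Spline3DViewer(spline3D=spline3D)
spline3D.num = 40  # now actually triggers the viewer's handler
```

For many observed attributes, a descriptor or a library such as `traitlets` does the same job with less boilerplate, but the idea is identical: the notification has to originate in the object being mutated.
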
<python><dependency-injection><properties><getter-setter>
2024-02-21 03:12:57
1
365
TheNomad
78,031,336
2,707,864
Sympy: Extract the two functions solutions of a homogeneous second order linear ODE
<p>I have a homogeneous second-order ODE, whose general solution is <code>ur(r) = C1*u1(r) + C2*u2(r)</code>.</p> <p>How can I get <code>u1(r)</code> and <code>u2(r)</code> as separate functions by setting <code>(C1=1, C2=0)</code> and <code>(C1=0, C2=1)</code>, respectively?</p> <p>This is what I tried (and other variants as well)</p> <pre><code>import IPython.display as disp import sympy as sym sym.init_printing(latex_mode='equation') r = sym.symbols('r', real=True, positive=True) ur = sym.Function('ur', real=True) def Naxp(ur): return r**2 * sym.diff(ur(r), r, 2) + r * sym.diff(ur(r), r) - ur(r) Naxp_ur = Naxp(ur) EqNaxpur = sym.Eq(Naxp_ur) #print('Radial ODE:') #display(Naxp_ur) ur_sol = sym.dsolve(Naxp_ur, ur(r)) ur_sol1 = ur_sol.rhs #display(sym.Eq(ur(r), ur_sol1)) # I still have to find how to extract the two basis functions #u1 = sym.Function('u1', real=True) C1, C2 = sym.symbols('C1, C2', real=True) u1 = ur_sol1.subs([(C1, 1), (C2, 0)]) print(u1) </code></pre> <p>and I always get a result as if <code>subs</code> were ignored</p> <pre><code>C1/r + C2*r </code></pre>
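The subs appear to be ignored because `sym.symbols('C1, C2', real=True)` creates *new* symbols carrying a `real=True` assumption, while the constants `dsolve` puts into the solution are plain `Symbol('C1')`/`Symbol('C2')` with no assumptions, and symbols with different assumptions never compare equal. Pulling the constants out of the solution itself avoids the mismatch; a sketch:

```python
import sympy as sym

r = sym.symbols('r', real=True, positive=True)
ur = sym.Function('ur', real=True)

ode = r**2 * sym.diff(ur(r), r, 2) + r * sym.diff(ur(r), r) - ur(r)
ur_sol1 = sym.dsolve(ode, ur(r)).rhs          # C1/r + C2*r

# use the constants dsolve actually generated, not freshly created symbols
C1, C2 = sorted(ur_sol1.free_symbols - {r}, key=lambda s: s.name)

u1 = ur_sol1.subs([(C1, 1), (C2, 0)])
u2 = ur_sol1.subs([(C1, 0), (C2, 1)])
```
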
<python><sympy><differential-equations>
2024-02-21 02:59:10
1
15,820
sancho.s ReinstateMonicaCellio
78,031,001
2,681,662
django if any of many to many has filed with value of False
<p>Say I have a model of a library and I want to hide some books if either the book is marked as hidden or any of authors is marked as hidden or any of categories is marked as hidden.</p> <pre><code>class Category(CustomModel): name = models.CharField(max_length=32, unique=True, null=False, blank=False) ... is_hidden = models.BooleanField(default=False) class Person(CustomModel): first_name = models.CharField(max_length=64, null=False, blank=False) ... is_hidden = models.BooleanField(default=False) class Book(CustomModel): title = models.CharField(max_length=128, null=False, blank=False) authors = models.ManyToManyField(Person, null=False, blank=False) translators = models.ManyToManyField(Person, null=True, blank=True) ... categories = models.ManyToManyField(Category, null=False, blank=False) ... is_hidden = models.BooleanField(default=False) </code></pre> <p>Now I tried these queries to exclude all books with either, <code>is_hidden=True</code>, <code>authors__is_hidden=True</code>, <code>translators__is_hidden=True</code>, or <code>categories__is_hidden=True</code>:</p> <p>as this:</p> <pre><code>books = Book.objects.filter( is_hidden=False, authors__is_hidden=False, translators__is_hidden=False, categories__is_hidden=False).distinct() </code></pre> <p>and this:</p> <pre><code>books = Book.objects.filter( is_hidden=False, authors__is_hidden__in=[False], translators__is_hidden__in=[False], categories__is_hidden__in=[False]).distinct() </code></pre> <p>but I cannot find a way to achieve it. I wish to achieve my goal without a loop since the number of entries can get as high as 80K. How to make the query?</p>
<python><django>
2024-02-21 00:34:23
1
2,629
niaei
78,030,969
8,010,921
Type declaration for dynamically created properties (setattr)
<p>I am writing a class which dynamically sets its attributes according to a <code>dict</code> passed to the constructor:</p> <pre class="lang-py prettyprint-override"><code>class Test: def __init__(self, track_data:dict[Any,Any]) -&gt; None: valid_attributes = [member.value for member in fields] for field,value in track_data.items(): if field not in valid_attributes: raise Exception(f&quot;{field} is not a valid field&quot;) setattr(self,field,value) t = Test({'id': 524545, 'name': 'TV', 'channels': 2}) </code></pre> <p><code>valid_attributes</code> is actually a <code>list</code> built from the following <code>Enum</code> and it allows me to filter out <code>key</code>s (i.e. potential <code>attributes</code>) which are not useful for my program:</p> <pre><code>class fields(Enum): ID = &quot;id&quot; NAME = &quot;name&quot; CHANNELS = &quot;channels&quot; DESCRIPTION = &quot;description&quot; DURATION = &quot;duration&quot; </code></pre> <p>So in this example <code>t</code> will have three attributes (<code>id</code>,<code>name</code>,<code>channels</code>) which can be accessed with <code>t.name</code> or similar.</p> <p>Now the problem is <code>typing</code> these attributes... in fact:</p> <pre><code>Type of &quot;name&quot; is unknown </code></pre> <p>The problem is pretty much the same as <a href="https://github.com/python/mypy/issues/5719#issue-366075896=" rel="nofollow noreferrer">this one</a>.</p> <p>As far as I understand declaring a <code>getter</code> or a <code>@property</code> could solve the issue, but the whole point was to have an <code>Enum</code> containing tens of elements and just implementing them easily in the <code>Test</code> class.</p> <p>I tried using <code>property(lambda self: str(self.key()))</code> right after the <code>setattr</code> declaration and explicitly converting each value to a string, but nothing changed.</p> <p>Any suggestions?</p>
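One static-typing-friendly compromise is to declare the attributes as class-level annotations, which type checkers do read, and drive the runtime filtering from `__annotations__` instead of (or alongside) the `Enum`. A sketch; the value types shown are assumptions, since the question only gives the field names:

```python
from typing import Any

class Test:
    # class-level annotations: a checker now knows t.name is str, t.id is
    # int, and so on (the concrete types here are guesses for illustration)
    id: int
    name: str
    channels: int
    description: str
    duration: float

    def __init__(self, track_data: dict[str, Any]) -> None:
        valid_attributes = self.__class__.__annotations__
        for field, value in track_data.items():
            if field not in valid_attributes:
                raise ValueError(f"{field} is not a valid field")
            setattr(self, field, value)

t = Test({'id': 524545, 'name': 'TV', 'channels': 2})
```

The annotations alone add no runtime cost (no default values are created), and the valid-field check now stays in sync with what the checker believes the class has.
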
<python><python-typing>
2024-02-21 00:23:53
1
327
ddgg
78,030,967
5,769,814
EditorCamera's look_at method ruins the zoom-in/zoom-out commands
<p>I have an <code>EditorCamera</code> in my code and I was trying to set its initial rotation in a certain direction. I couldn't get it to work using <code>.rotation</code>, so I used the <code>look_at</code> method instead:</p> <pre><code>import ursina as ur app = ur.Ursina(title=&quot;Cube&quot;) ground = ur.Entity(model='quad', scale=60, texture='white_cube', texture_scale=(60, 60), rotation_x=90, y=-5, color=ur.color.light_gray) cube = ur.Entity(model='cube', color=ur.color.black, position=(0, 0, 0), scale=(3, 3, 3)) ur.camera.position = 3, 3, -5 ur.EditorCamera() ur.camera.look_at((1.5, 1.5, 1.5)) app.run() </code></pre> <p>This works, but if I use the mouse wheel to move forward/backward in the scene, the camera goes forward/backward on the z-axis, instead of relative to the camera looking direction. How do I fix this issue?</p> <h1>EDIT</h1> <p>Given @pokepetter's answer, I'm updating the code. The camera now moves correctly on zoom. However, I'm trying to place it to the right, above, and a bit in front of the cube and face the center of the cube. The result would hopefully be that it sees the cube's front-top-right corner at a roughly 45° angle, but I haven't been able to get it to do that:</p> <pre><code>app = ur.Ursina(title=&quot;Cube&quot;) ground = ur.Entity(model='quad', scale=60, texture='white_cube', texture_scale=(60, 60), rotation_x=90, y=-5, color=ur.color.light_gray) cube = ur.Entity(model='cube', texture='brick', color=ur.color.red, position=(1.5, 1.5, 1.5), scale=(3, 3, 3)) camera = ur.EditorCamera() camera.position = 5, 5, -5 camera.look_at(cube) # camera.look_at(cube.position) # camera.look_at(ur.Vec3(1.5, 1.5, 1.5)) # camera.look_at(cube.bounds.center) app.run() </code></pre>
<python><python-3.x><ursina>
2024-02-21 00:23:14
1
1,324
Mate de Vita
78,030,853
1,767,106
JAX jax.grad on simple function that takes an array: `ConcretizationTypeError: Abstract tracer value encountered where concrete value is expected`
<p>I'm trying to implement this function and use JAX to automatically build the gradient function:</p> <pre><code>$f(x) = \sum\limits_{k=1}^{n-1} [100 (x_{k+1} - x_k^2)^2 + (1 - x_k)^2]$ </code></pre> <p>(sorry, I don't know how to format math on stackoverflow. Some sister sites allow TeX, but apparently this site does not?)</p> <pre class="lang-py prettyprint-override"><code>import jax import jax.numpy as jnp # x is an array, which does not handle type hints well. def rosenbrock(n: int, x: any) -&gt; float: f = 0 # i is 1-indexed to match document. for i in range(1, n): # adjust 1-based indices to 0-based python indices. xi = x[i-1].item() xip1 = x[i].item() fi = 100 * (xip1 - xi**2)**2 + (1 - xi)**2 f = f + fi return f # with n=2. def rosenbrock2(x: any) -&gt; float: return rosenbrock(2, x) grad_rosenbrock2 = jax.grad(rosenbrock2) x = jnp.array([-1.2, 1], dtype=jnp.float32).reshape(2,1) # this line fails with the error given below grad_rosenbrock2(x) </code></pre> <p>This last line results in:</p> <pre><code>ConcretizationTypeError: Abstract tracer value encountered where concrete value is expected: traced array with shape float32[1]. The problem arose with the `float` function. If trying to convert the data type of a value, try using `x.astype(float)` or `jnp.array(x, float)` instead. See https://jax.readthedocs.io/en/latest/errors.html#jax.errors.ConcretizationTypeError </code></pre> <p>I'm trying to follow the docs, and I'm confused. This is my first time using JAX or Autograd, can someone help me resolve this? Thanks!</p>
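The `ConcretizationTypeError` is triggered by the `.item()` calls: `jax.grad` traces the function with abstract values, and `.item()` demands a concrete Python scalar mid-trace. Dropping `.item()` (and the Python loop, while at it) gives a traceable, vectorized version; a sketch:

```python
import jax
import jax.numpy as jnp

# f(x) = sum_{k=1}^{n-1} [100 (x_{k+1} - x_k^2)^2 + (1 - x_k)^2],
# written with slicing so no Python loop or .item() is needed
def rosenbrock(x):
    xk, xk1 = x[:-1], x[1:]
    return jnp.sum(100.0 * (xk1 - xk**2) ** 2 + (1.0 - xk) ** 2)

grad_rosenbrock = jax.grad(rosenbrock)

x = jnp.array([-1.2, 1.0])  # plain 1-D vector; no (2, 1) reshape needed
g = grad_rosenbrock(x)
```
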
<python><numpy><jax><autograd>
2024-02-20 23:41:28
1
20,816
clay
78,030,813
7,265,114
Fill between areas with gradient color in matplotlib
<p>I try to plot the following plot with gradient color ramp using matplotlib fill_between. I can manage to get the color below the horizontal line correctly, but I had difficult time to do the same for above horizontal line. The idea is to keep color above and below the horizontal line up to y values boundary, and the rest removed. Any help would be very great.</p> <p>Here is the same code and data.</p> <pre><code>import numpy as np import matplotlib.pyplot as plt text = &quot;54.38,53.99,65.39,66.22,57.65,49.17,42.72,42.07,44.88,46.56,55.27,57.28,60.54,65.87,37.61,44.21,50.62,56.15,52.65,52.17,57.71,61.21,60.77,50.74,62.9,56.51,62.1,52.79,53.96,50.33,48.44,43.72,39.03,36.18,41.6,50.44,50.67,53.68,50.05,47.92,40.43,29.87,24.78,15.53,17.37,20.56,20.3,30.22,38.89,45.04,45.87,47.99,50.97,54.59,53.69,48.09,48.56,47.62,52.78,57.26,48.77,51.6,56.6,58.37,48.39,43.63,42.15,40.05,33.62,48.73,45.29,51.47,50.73,51.52,54.86,49.18,51.03,50.26,45.73,46.7,52.0,42.17,49.93,53.08,51.34,52.44,54.06,50.56,51.04,55.47,52.71,52.36,53.59,65.08,62.74,59.23,56.07,49.21,46.67,40.62,44.59,44.89,35.57,37.67,49.74,46.52,38.47,42.08,49.73,53.82,60.76,56.2,56.41,53.83,59.9,51.06,46.9,49.11,36.01,46.72,56.53,59.04,58.52,60.78,60.02,51.78,54.78,52.88,51.73,59.76,67.84,66.63,60.97,53.69,53.17,52.44,46.54,49.08,43.62,40.81,41.64,43.3,46.36,58.07,54.95,50.82,54.17,51.37,55.26,53.55,47.57,36.05,38.66,35.3,51.02,54.96,59.34,53.47,48.34,50.25,54.06,49.99,47.44,41.59,37.58,42.39,41.41,41.84,47.77,52.75,54.84,49.63,51.5,56.26,52.47,49.35,47.13,35.94,28.42,33.14,44.38,56.38,59.63,60.86,64.35,54.59,63.34,74.68,66.29,58.33,57.64,64.64,61.49,59.11,53.72,60.37,56.66,56.92,61.58,57.21,58.12,61.93,45.75,54.77,52.95,50.06,54.54,52.64,48.31,49.9,56.49,54.99,51.83,61.78,50.93,52.92,58.76,58.82,51.26,48.29,41.18,50.69,54.0,45.13,48.72,45.32,42.29,30.57,41.28,53.54,55.47,57.54,53.48,50.01,52.42,55.38,53.12,53.31,56.26,57.56,53.87,53.48,53.82,56.8,58.31,60.45,63.22,68.44,76.04,72.14,75.31,64.74,56.5,60.42,54.05,55.62
,52&quot; y = np.array([float(i) for i in text.split(&quot;,&quot;)]) x = np.arange(len(y)) fig, ax = plt.subplots(nrows=1, ncols=1, figsize=(10,6)) cmap =&quot;Spectral&quot; # Define above and below threshold below = y.copy() below[below&lt;=50]=0 above = y.copy() above[above&gt;50]=0 ymin = 10 ymax = 80 xi, yi = np.meshgrid(x, np.linspace(ymin, ymax, 100)) ax.contourf( xi, yi, yi, cmap=cmap, levels=np.linspace(ymin, ymax, 100) ) ax.axhline(50, linestyle=&quot;--&quot;, linewidth=1.0, color=&quot;k&quot;) ax.fill_between(x=x, y1=y, color=&quot;w&quot;) ax.plot(x,y, color=&quot;k&quot;) ax.set_ylim(ymin, ymax) </code></pre> <p><a href="https://i.sstatic.net/N0hrO.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/N0hrO.png" alt="enter image description here" /></a></p>
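One approach (a sketch, not the only way): draw the vertical gradient once with `imshow`, then clip it to the region enclosed between the curve and the horizontal line, using the polygon that `fill_between` produces. That keeps the gradient both above and below the line, up to the curve, and blanks everything else. Synthetic data stands in for the series so the snippet is self-contained:

```python
import matplotlib
matplotlib.use("Agg")  # headless backend so the sketch runs anywhere
import matplotlib.pyplot as plt
import numpy as np

# synthetic series standing in for the question's data
y = 50 + 15 * np.sin(np.linspace(0, 6 * np.pi, 200))
x = np.arange(len(y))
ymin, ymax, threshold = 10, 80, 50

fig, ax = plt.subplots(figsize=(10, 6))

# one full-axes vertical gradient, colored by the y coordinate
gradient = ax.imshow(
    np.linspace(ymin, ymax, 256).reshape(-1, 1),
    extent=[x.min(), x.max(), ymin, ymax],
    origin="lower", aspect="auto", cmap="Spectral",
)

# invisible fill between the threshold line and the curve; its polygon is
# used as a clip path, so the gradient survives only inside that region,
# on both sides of the line
band = ax.fill_between(x, threshold, y, facecolor="none", edgecolor="none")
gradient.set_clip_path(band.get_paths()[0], transform=ax.transData)

ax.axhline(threshold, linestyle="--", linewidth=1.0, color="k")
ax.plot(x, y, color="k")
ax.set_ylim(ymin, ymax)
```
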
<python><matplotlib>
2024-02-20 23:24:47
1
1,141
Tuyen
78,030,679
11,462,274
Keep the token active by updating its expiration date to not receive the error Token has been expired or revoke
<p>Based on <a href="https://developers.google.com/drive/api/quickstart/python?hl=pt-br" rel="nofollow noreferrer">Google's documentation</a> I tried to use this code to keep my token active:</p> <pre class="lang-python prettyprint-override"><code> import os from google.auth.transport.requests import Request from google.oauth2.credentials import Credentials from google_auth_oauthlib.flow import InstalledAppFlow SCOPES = [&quot;https://www.googleapis.com/auth/drive&quot;] creds = None if os.path.exists(&quot;token.json&quot;): creds = Credentials.from_authorized_user_file(&quot;token.json&quot;, SCOPES) if not creds or not creds.valid: if creds and creds.expired and creds.refresh_token: creds.refresh(Request()) else: flow = InstalledAppFlow.from_client_secrets_file( &quot;client_secrets.json&quot;, SCOPES ) creds = flow.run_local_server(port=0) with open(&quot;token.json&quot;, &quot;w&quot;) as token: token.write(creds.to_json()) </code></pre> <p>But to my surprise (my limited knowledge led me to believe that everything was OK), even though the code was executed every half hour every day without interruption, today my token lost its validity and I had to delete the <code>token.json</code> file
for a new token to be created.</p> <pre class="lang-none prettyprint-override"><code>Traceback (most recent call last): File &quot;c:\Users\Computador\Desktop\Squads Python\squads_sw.py&quot;, line 796, in &lt;module&gt; download_file(&quot;109fDhTJNR2iHeA4bgcHTcWtZZ2dShKR-&quot;, file_base_name) File &quot;c:\Users\Computador\Desktop\Squads Python\squads_sw.py&quot;, line 429, in download_file creds.refresh(Request()) File &quot;C:\Users\Computador\AppData\Local\Programs\Python\Python310\lib\site-packages\google\oauth2\credentials.py&quot;, line 335, in refresh ) = reauth.refresh_grant( File &quot;C:\Users\Computador\AppData\Local\Programs\Python\Python310\lib\site-packages\google\oauth2\reauth.py&quot;, line 351, in refresh_grant _client._handle_error_response(response_data, retryable_error) File &quot;C:\Users\Computador\AppData\Local\Programs\Python\Python310\lib\site-packages\google\oauth2\_client.py&quot;, line 73, in _handle_error_response raise exceptions.RefreshError( google.auth.exceptions.RefreshError: ('invalid_grant: Token has been expired or revoked.', {'error': 'invalid_grant', 'error_description': 'Token has been expired or revoked.'}) </code></pre> <p>What is the error with the code that does not keep the token updated so that it is not necessary to authorize it again every X days?</p>
<python><oauth-2.0><google-oauth>
2024-02-20 22:45:50
1
2,222
Digital Farmer
78,030,469
3,534,782
Multiprocessing pool, whether using map or async, hangs on last couple processes
<p>I have a function that I am applying to hundreds of chunks of my larger dataset -- each individual job only takes a minute or two, and outputs a file if it is successful. This holds true for the first ~740 out of my ~750 individual jobs, but then the last several jobs, especially the very last one, seem to take increasingly longer, for no apparent reason (no errors/Exceptions). Originally, my function simply returned a result which was being appended to a global list. I thought this might be the issue, so I changed the function to output a file, which I can access later on. However, that did not fix the lag issue. Next, I thought it might have to do with my reliance on the imap_unordered function within a context manager, which should take care of joining spawned processes and closing the pool:</p> <pre><code> with get_context(&quot;spawn&quot;).Pool(processes=processes) as p: max_ = len(coverage_counting_job_params) print(&quot;max_ is {}&quot;.format(max_)) with tqdm(total=max_) as pbar: for _ in p.imap_unordered(get_edit_info_for_barcode_in_contig_wrapper, coverage_counting_job_params): pbar.update() total_contigs += 1 total_time = time.perf_counter() - start_time total_seconds_for_contig[total_contigs] = total_time </code></pre> <p>After looking at some other posts here, I thought perhaps something was going on with the joining, and switched to using the async function in order to manage it more manually:</p> <pre><code> def update(result): pbar.update() num_files_made = len(glob(&quot;{}/*.tsv&quot;.format(coverage_processing_folder))) #print(&quot;Made {}&quot;.format(num_files_made)) if num_files_made == max_: print(&quot;All {} expected files are present!&quot;.format(max_)) return for i in range(pbar.total): pool.apply_async( get_edit_info_for_barcode_in_contig_wrapper, args=(coverage_counting_job_params[i],), callback=update) # wait for completion of all tasks: print(&quot;Closing pool...&quot;) pool.close() print(&quot;Joining pool...&quot;)
pool.join() </code></pre> <p>I am using spawn because I am using polars within my function and it seems it only works with the &quot;spawn&quot; type of context.</p> <p>However, even using apply_async still had the same issue.</p> <p>My multiprocessing job should take only 50 minutes or so based on the per-job time, when using 30 cores, but instead is taking upwards of 2 hours because of this weird hanging. I am going crazy trying to figure out how to get around this issue, as the lag makes the software painfully slow for end-users.</p> <p>Any ideas what might be going on?</p> <p>Update:</p> <ul> <li>When I kill my original job because it is hanging, I sometimes get errors about leaked semaphores like &quot;There appear to be 7 leaked semaphore objects to clean up at shutdown&quot;</li> </ul>
<python><multiprocessing><freeze><lag><pool>
2024-02-20 21:47:54
0
419
ekofman
78,030,364
3,750,694
Applying CLAHE on RGB images
<p>I applied CLAHE to improve the image visibility (original image). However, even after applying CLAHE, the visibility is still not improved in the shadow areas (CLAHE image). Please suggest how I can improve the quality of my imagery. Below is my script:</p> <pre><code># Input and output folder paths input_folder = r'D:\imagery_shadow\clip_3b' output_folder = r'D:\CLAHE\clahe' # List all image files in the input folder image_files = [f for f in os.listdir(input_folder) if f.startswith('shadow_image') and f.endswith('.tif')] # Iterate over each image file for image_file in image_files: # Construct full paths for input and output images original_image_path = os.path.join(input_folder, image_file) process_image_path = os.path.join(output_folder, image_file.replace('.tif', '_clahe_V3.tif')) # Convert the image to LAB Color img_cv = cv2.imread(original_image_path, cv2.IMREAD_COLOR) lab_img = cv2.cvtColor(img_cv, cv2.COLOR_RGB2LAB) # Split the LAB image into L, A, and B channels lab_planes = cv2.split(lab_img) # Apply CLAHE to L channel clahe = cv2.createCLAHE(clipLimit=2, tileGridSize=(3, 3)) clahe_images = [clahe.apply(plane) for plane in lab_planes] # Combine the CLAHE enhanced L-channel with A and B channels updated_lab_img2 = cv2.merge(clahe_images) # Convert LAB image back to RGB color space processed_rgb_img = cv2.cvtColor(updated_lab_img2, cv2.COLOR_LAB2RGB) cv2.imwrite(process_image_path,processed_rgb_img) </code></pre> <p><a href="https://i.sstatic.net/u0ULe.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/u0ULe.png" alt="Image 1" /></a> (original image)</p> <p><a href="https://i.sstatic.net/eVDc9.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/eVDc9.png" alt="Image 2" /></a> (clahe image)</p>
<python><image-processing>
2024-02-20 21:17:50
0
703
user30985
78,030,332
7,578,588
VSCode Python debugger runs but nothing happens
<p>I have recently bought a new laptop and set up VSCode and Python so I can continue coding. Everything looks fine, took a bit of messing around to get virtual environments working, but apart from that no problem EXCEPT my code does not run.</p> <p>To be specific, when I open my code and click &quot;Run | Start Debugging (F5)&quot;, for an instant the little toolbar of debug buttons (run, step-over, etc) pops up and disappears, but the code in the window does not run, there is no output to the terminal, nothing happens!</p> <p>A screenshot of my entire VSCode screen is below. <a href="https://i.sstatic.net/8LDIO.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/8LDIO.png" alt="VSCode Window" /></a></p> <p>My launch.json file is below.</p> <pre><code> { // Use IntelliSense to learn about possible attributes. // Hover to view descriptions of existing attributes. // For more information, visit: https://go.microsoft.com/fwlink/?linkid=830387 &quot;version&quot;: &quot;0.2.0&quot;, &quot;configurations&quot;: [ { &quot;type&quot;: &quot;chrome&quot;, &quot;request&quot;: &quot;launch&quot;, &quot;name&quot;: &quot;Launch Chrome against localhost&quot;, &quot;file&quot;: &quot;${workspaceFolder}/web/_test/book.html&quot; }, { &quot;name&quot;: &quot;Python: Current File&quot;, &quot;type&quot;: &quot;debugpy&quot;, &quot;request&quot;: &quot;launch&quot;, &quot;program&quot;: &quot;${file}&quot;, &quot;console&quot;: &quot;integratedTerminal&quot; }, { &quot;name&quot;: &quot;Python: Flask (development mode)&quot;, &quot;type&quot;: &quot;debugpy&quot;, &quot;request&quot;: &quot;launch&quot;, &quot;module&quot;: &quot;flask&quot;, &quot;env&quot;: { &quot;FLASK_APP&quot;: &quot;flask_run.py&quot;, &quot;FLASK_ENV&quot;: &quot;development&quot; }, &quot;args&quot;: [ &quot;run&quot; ], &quot;jinja&quot;: true } ] } </code></pre> <p>I have all the python extensions installed.</p> <p><a href="https://i.sstatic.net/YeFTL.png" rel="nofollow noreferrer"><img 
src="https://i.sstatic.net/YeFTL.png" alt="enter image description here" /></a></p>
<python><visual-studio-code><vscode-debugger>
2024-02-20 21:09:45
2
2,130
Mark Kortink
78,030,103
901,426
Python split() fails when using delimiter
<p>I'm trying to parse a feedback string from a device. I'm just trying to run a simple <code>split()</code>, but it throws this error: <code>ValueError: not enough values to unpack (expected 2, got 1)</code>. This makes no sense to me.</p> <pre class="lang-py prettyprint-override"><code> workingResult = &quot;SIMState: READY\nSIMMCC: 262\nSIMMNC: 01\nRegState: Registered, home network\nSigQuality: Good\nSigQualitydBm: -72\nAccessTec: 4G\nLastUpdate: 19.02.20_12:25:37&quot; strip1 = workingResult.replace('\n', ',') strip2 = list(strip1.split(',')) finalDict = {} for item in strip2: print(f'{type(item)}, {item}') key, value = item.split(': ') #&lt;-- problem child print(key,value) finalDict[key] = value </code></pre> <p>If I try the 'problem child' line with a single assignment, <code>key = item.split(': ')</code>, I get the single line as one would expect. This should be a simple dump into two vars, but I don't understand why I get the error. I also tried trimming the delimiter to just <code>':'</code> with no space, but got the same error.</p> <p>What am I missing here?</p>
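For what it's worth, my current guess is that the round trip through <code>replace('\n', ',')</code> is the culprit: the value of <code>RegState</code> ('Registered, home network') itself contains a comma, so splitting on ',' produces the fragment ' home network', which has no ': ' in it. A sketch of what seems to sidestep this, splitting on the newline directly and using <code>partition</code> (which never raises):

```python
# Sketch: split on '\n' directly instead of converting '\n' to ',' first,
# so values that themselves contain a comma survive intact.
workingResult = ("SIMState: READY\nSIMMCC: 262\nSIMMNC: 01\n"
                 "RegState: Registered, home network\nSigQuality: Good\n"
                 "SigQualitydBm: -72\nAccessTec: 4G\n"
                 "LastUpdate: 19.02.20_12:25:37")

finalDict = {}
for item in workingResult.split('\n'):
    key, sep, value = item.partition(': ')  # partition never raises
    if sep:                                 # skip anything without the delimiter
        finalDict[key] = value

print(finalDict['RegState'])  # → Registered, home network
```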
<python><split>
2024-02-20 20:20:45
1
867
WhiteRau
78,030,097
8,120,585
Is there a way in the Python GitHub library to check if the branch for which a PR has been raised is out-of-date from its base branch or not?
<ul> <li>My Python script is getting a list of PRs and then it needs to run a check if the branches being used in those PRs are out-of-date from their base branch or not.</li> <li>I need to perform this action remotely i.e. without cloning the repo to my local machine.</li> </ul> <p><a href="https://i.sstatic.net/obTSP.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/obTSP.png" alt="enter image description here" /></a></p> <ul> <li><p>The image attached shows what we see in the GUI if we go to the PR manually.</p> </li> <li><p>My script searches for PRs (that have the string: DEVELOPERS-XXXX). Once I have the paginated list, it uses them one by one:</p> </li> </ul> <pre><code> from github import Github github_t = os.environ.get('GITHUB_TOKEN') github_c = Github(github_t) git_i = github_c.search_issues('DEVELOPERS-XXXX', state='open') for search in git_i: </code></pre> <ul> <li>I am unable to find an attribute that can help me check if the branch used in the PR is out-of-date from its base branch.</li> </ul>
<python><github><github-api><pygithub>
2024-02-20 20:18:18
1
438
Syed Faraz Umar
78,029,940
17,040,989
legend in bokeh UMAP shows only one entry
<p>Hi there, I just managed to get this plot done in <code>bokeh</code>, so I imagine there are many things that could be improved. Nonetheless, what bothers me the most is that I cannot figure out how to get entries for all of my eight populations into the UMAP plot...<br /> Right now it shows only one entry, and I don't know whether it is associated with the correct population; I set it with <code>legend_label</code>.</p> <p>What I actually want to show is a legend with all eight populations (EUR, SIB, AFR, SAS, CEA, OCE, MENA and AME) and their associated colors. <em>See</em> below for the code I used and an example of the plot. Any help is appreciated!</p> <pre><code>import numpy as np import pandas as pd import plotly.express as px import bokeh.plotting as bp from bokeh.plotting import ColumnDataSource, figure, show from umap import UMAP umap = pd.read_csv(&quot;SGDP_download/SGDP_bi_snps_norm-2.eigenvec&quot;, sep=&quot;\t&quot;) umap.rename(columns={&quot;#IID&quot;: &quot;#ID&quot;}, inplace=True) loc = pd.read_csv(&quot;SGDP_download/pca_loc_fix_python-order.txt&quot;) colors = pd.read_csv(&quot;SGDP_download/bokeh_colors.txt&quot;) eigenval = pd.read_csv(&quot;SGDP_download/SGDP_bi_snps_norm-2.eigenval&quot;, header=None) pve = round(eigenval / (eigenval.sum(axis=0))*100, 2) pve.head() umap.sort_values('#ID', inplace=True) umap.insert(loc=1, column='#LOC', value=loc) umap.rename(columns={'#ID': 'ID', '#LOC': 'LOC'}, inplace=True) regions_umap = umap.iloc[:, 2:12] umap_plot = UMAP(n_components=2, init=&quot;random&quot;, random_state=15) umap_proj = umap_plot.fit_transform(regions_umap) #umap_proj.view() #umap_proj.shape df = pd.DataFrame(umap_proj, columns=['UMAP1', 'UMAP2']) df.insert(loc=0, column='population', value=loc) df.insert(loc=1, column='color', value=colors) df.index = umap[&quot;ID&quot;] source=ColumnDataSource(df) #source df
TOOLS=&quot;hover,crosshair,pan,wheel_zoom,zoom_in,zoom_out,box_zoom,undo,redo,reset,tap,save,box_select,poly_select,lasso_select,examine,help&quot; fig = figure(tools=TOOLS, x_axis_label='UMAP1', y_axis_label='UMAP2') fig.scatter(x=df['UMAP1'], y=df['UMAP2'], color=df['color'], size=5, legend_label='population', fill_alpha=0.6, line_color=None) fig.legend.location = &quot;top_left&quot; fig.legend.title = &quot;metapopulations&quot; show(fig) </code></pre> <p><a href="https://i.sstatic.net/kUtB1.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/kUtB1.png" alt="enter image description here" /></a></p> <p>P.S. As a side note, is it possible to have the legend at the bottom of the plot with the legend title centered?</p> <p><strong>EDIT</strong> this is what the df looks like @droumis</p> <p><a href="https://i.sstatic.net/wqrF9.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/wqrF9.png" alt="enter image description here" /></a></p>
<python><legend><bokeh><legend-properties><runumap>
2024-02-20 19:43:54
2
403
Matteo
78,029,932
5,710,525
How to interrupt Linux kernel module without terminating the interfacing userspace process
<p>I've written a kernel module implementing a device driver that uses <code>down_interruptible</code> to handle access to shared resources. I have a userspace application that communicates with the kernel module. I'd like to be able to interrupt the kernel module without terminating the userspace application (this will finish on its own after the kernel module is interrupted). This can occur in one of 2 ways: (1) the userspace application knows it's time to finish and interrupts the kernel module, or (2) the user invoking the userspace application interrupts the application in which case the application should pass along the interrupt to the kernel module but otherwise ignore the interrupt. Currently, I've only been able to interrupt both the userspace application and the kernel module, or neither. I'm new to kernel module development and it could very well be that I'm going about this the wrong way. If that's the case, please feel free to mention other, better ways to do this.</p> <p>Here's a minimal example that illustrates the problem I'm facing. 
The kernel module code is:</p> <pre class="lang-c prettyprint-override"><code>#include &lt;linux/cdev.h&gt; #include &lt;linux/device.h&gt; #include &lt;linux/fs.h&gt; #include &lt;linux/init.h&gt; #include &lt;linux/module.h&gt; #include &lt;linux/semaphore.h&gt; #include &lt;linux/slab.h&gt; #include &lt;linux/types.h&gt; #define NAME &quot;testmod&quot; #define MAJOR -1 static struct class *testmod_class; static struct cdev *testmod_cdev; struct semaphore *lock; static ssize_t read(struct file *filp, char __user *buf, size_t count, loff_t *f_pos) { up(lock); printk(KERN_INFO &quot;semaphore unlocked\n&quot;); printk(KERN_INFO &quot;semaphore count: %d\n&quot;, lock-&gt;count); return 0; } static ssize_t write(struct file *filp, const char __user *buf, size_t count, loff_t *f_pos) { ssize_t ret; if ((ret = down_interruptible(lock)) != 0) { printk(KERN_INFO &quot;interrupt received\n&quot;); } printk(KERN_INFO &quot;semaphore count: %d\n&quot;, lock-&gt;count); ret = count; return ret; } static const struct file_operations fops = { .owner = THIS_MODULE, .read = read, .write = write, }; static int testmod_init(void) { int ret; if ((testmod_class = class_create(NAME)) == NULL) { ret = -1; goto out; } if ((testmod_cdev = cdev_alloc()) == NULL) { ret = -1; goto cleanup_class; } testmod_cdev-&gt;ops = &amp;fops; testmod_cdev-&gt;owner = THIS_MODULE; if (cdev_add(testmod_cdev, MAJOR, 1) == -1) { ret = -1; goto cleanup_cdev; } if ((device_create(testmod_class, NULL, MAJOR, NULL, NAME)) == NULL) { ret = -1; goto cleanup_class; } if ((lock = kzalloc(sizeof(struct semaphore), GFP_KERNEL)) == NULL) { ret = -1; goto cleanup_dev; } sema_init(lock, 1); ret = 0; goto out; cleanup_dev: device_destroy(testmod_class, MAJOR); cleanup_class: class_destroy(testmod_class); cleanup_cdev: cdev_del(testmod_cdev); out: return ret; } static void testmod_exit(void) { device_destroy(testmod_class, MAJOR); class_destroy(testmod_class); cdev_del(testmod_cdev); } module_init(testmod_init); 
module_exit(testmod_exit); MODULE_LICENSE(&quot;GPL&quot;); </code></pre> <p>The userspace application is:</p> <pre class="lang-py prettyprint-override"><code>#!/usr/bin/env python import os from signal import signal, SIGINT, SIG_IGN signal(SIGINT, SIG_IGN) fd = os.open(&quot;/dev/testmod&quot;, os.O_WRONLY) while True: os.write(fd, b&quot;1&quot;) os.close(fd) </code></pre> <p>If I run the userspace application and interrupt it with ctrl-C, nothing happens, which is unsurprising since I've told it to ignore the interrupt. If I comment out the <code>signal(...)</code> line, using ctrl-C interrupts the kernel module write but also stops the userspace application.</p> <p>The actual application performs these writes in threads (<code>threading.Thread</code>), though I didn't do that in this minimal example to try to keep things simple. But, if there are additional considerations when using threads, that would be good to know.</p>
<python><linux><linux-kernel><linux-device-driver><interrupt>
2024-02-20 19:42:36
0
464
MattHusz
78,029,856
243,031
Mysql execute many wont work when add on duplicate
<p>I have a MySQL table and I want to run <a href="https://dev.mysql.com/doc/connector-python/en/connector-python-api-mysqlcursor-executemany.html" rel="nofollow noreferrer"><code>executemany</code></a> for that table. I want to make sure that, if there is a duplicate entry, it is updated instead. I wrote the query as below, with the data.</p> <pre><code>a = 'INSERT INTO tabl (switch_id, readiness, message) VALUES (%s, %s, %s) ON DUPLICATE KEY UPDATE readiness=%s, message=%s' b = [(12780, 'not_ready', 'StatusDB data', 'not_ready', 'StatusDB data.'), (12781, 'not_ready', 'StatusDB data.', 'not_ready', 'StatusDB data.')] </code></pre> <p>When I try to run executemany on these data with <code>CONN</code>, my database connection object, it gives an error.</p> <pre><code>&gt;&gt;&gt; with CONN.cursor() as c: ... c.executemany(a, [b[0]]) ... Traceback (most recent call last): File &quot;&lt;console&gt;&quot;, line 2, in &lt;module&gt; File &quot;/home/myuser/var/virtualenvs/venv//lib/python3.8/site-packages/django/db/backends/utils.py&quot;, line 69, in executemany return self._execute_with_wrappers(sql, param_list, many=True, executor=self._executemany) File &quot;/home/myuser/var/virtualenvs/venv//lib/python3.8/site-packages/django/db/backends/utils.py&quot;, line 75, in _execute_with_wrappers return executor(sql, params, many, context) File &quot;/home/myuser/var/virtualenvs/venv//lib/python3.8/site-packages/django/db/backends/utils.py&quot;, line 89, in _executemany return self.cursor.executemany(sql, param_list) File &quot;/home/myuser/var/virtualenvs/venv//lib/python3.8/site-packages/django/db/backends/mysql/base.py&quot;, line 83, in executemany return self.cursor.executemany(query, args) File &quot;/home/myuser/var/virtualenvs/venv//lib/python3.8/site-packages/pymysql/cursors.py&quot;, line 182, in executemany return self._do_execute_many( File &quot;/home/myuser/var/virtualenvs/venv//lib/python3.8/site-packages/pymysql/cursors.py&quot;, line 205, in _do_execute_many v = 
values % escape(next(args), conn) TypeError: not all arguments converted during string formatting </code></pre> <p>When I remove the <code>ON DUPLICATE</code> syntax, it works fine.</p> <pre><code>&gt;&gt;&gt; with CONN.cursor() as c: ... c.executemany(a[:-49], [b[0][:3]]) ... Traceback (most recent call last): File &quot;/home/myuser/var/virtualenvs/venv//lib/python3.8/site-packages/django/db/backends/utils.py&quot;, line 89, in _executemany return self.cursor.executemany(sql, param_list) File &quot;/home/myuser/var/virtualenvs/venv//lib/python3.8/site-packages/django/db/backends/mysql/base.py&quot;, line 83, in executemany return self.cursor.executemany(query, args) File &quot;/home/myuser/var/virtualenvs/venv//lib/python3.8/site-packages/pymysql/cursors.py&quot;, line 182, in executemany return self._do_execute_many( File &quot;/home/myuser/var/virtualenvs/venv//lib/python3.8/site-packages/pymysql/cursors.py&quot;, line 220, in _do_execute_many rows += self.execute(sql + postfix) File &quot;/home/myuser/var/virtualenvs/venv//lib/python3.8/site-packages/pymysql/cursors.py&quot;, line 153, in execute result = self._query(query) File &quot;/home/myuser/var/virtualenvs/venv//lib/python3.8/site-packages/pymysql/cursors.py&quot;, line 322, in _query conn.query(q) File &quot;/home/myuser/var/virtualenvs/venv//lib/python3.8/site-packages/pymysql/connections.py&quot;, line 558, in query self._affected_rows = self._read_query_result(unbuffered=unbuffered) File &quot;/home/myuser/var/virtualenvs/venv//lib/python3.8/site-packages/pymysql/connections.py&quot;, line 822, in _read_query_result result.read() File &quot;/home/myuser/var/virtualenvs/venv//lib/python3.8/site-packages/pymysql/connections.py&quot;, line 1200, in read first_packet = self.connection._read_packet() File &quot;/home/myuser/var/virtualenvs/venv//lib/python3.8/site-packages/pymysql/connections.py&quot;, line 772, in _read_packet packet.raise_for_error() File 
&quot;/home/myuser/var/virtualenvs/venv//lib/python3.8/site-packages/pymysql/protocol.py&quot;, line 221, in raise_for_error err.raise_mysql_exception(self._data) File &quot;/home/myuser/var/virtualenvs/venv//lib/python3.8/site-packages/pymysql/err.py&quot;, line 143, in raise_mysql_exception raise errorclass(errno, errval) pymysql.err.IntegrityError: (1062, &quot;Duplicate entry '12780' for key 'tabl.PRIMARY'&quot;) The above exception was the direct cause of the following exception: Traceback (most recent call last): File &quot;&lt;console&gt;&quot;, line 2, in &lt;module&gt; File &quot;/home/myuser/var/virtualenvs/venv//lib/python3.8/site-packages/django/db/backends/utils.py&quot;, line 69, in executemany return self._execute_with_wrappers(sql, param_list, many=True, executor=self._executemany) File &quot;/home/myuser/var/virtualenvs/venv//lib/python3.8/site-packages/django/db/backends/utils.py&quot;, line 75, in _execute_with_wrappers return executor(sql, params, many, context) File &quot;/home/myuser/var/virtualenvs/venv//lib/python3.8/site-packages/django/db/backends/utils.py&quot;, line 89, in _executemany return self.cursor.executemany(sql, param_list) File &quot;/home/myuser/var/virtualenvs/venv//lib/python3.8/site-packages/django/db/utils.py&quot;, line 90, in __exit__ raise dj_exc_value.with_traceback(traceback) from exc_value File &quot;/home/myuser/var/virtualenvs/venv//lib/python3.8/site-packages/django/db/backends/utils.py&quot;, line 89, in _executemany return self.cursor.executemany(sql, param_list) File &quot;/home/myuser/var/virtualenvs/venv//lib/python3.8/site-packages/django/db/backends/mysql/base.py&quot;, line 83, in executemany return self.cursor.executemany(query, args) File &quot;/home/myuser/var/virtualenvs/venv//lib/python3.8/site-packages/pymysql/cursors.py&quot;, line 182, in executemany return self._do_execute_many( File &quot;/home/myuser/var/virtualenvs/venv//lib/python3.8/site-packages/pymysql/cursors.py&quot;, line 220, in 
_do_execute_many rows += self.execute(sql + postfix) File &quot;/home/myuser/var/virtualenvs/venv//lib/python3.8/site-packages/pymysql/cursors.py&quot;, line 153, in execute result = self._query(query) File &quot;/home/myuser/var/virtualenvs/venv//lib/python3.8/site-packages/pymysql/cursors.py&quot;, line 322, in _query conn.query(q) File &quot;/home/myuser/var/virtualenvs/venv//lib/python3.8/site-packages/pymysql/connections.py&quot;, line 558, in query self._affected_rows = self._read_query_result(unbuffered=unbuffered) File &quot;/home/myuser/var/virtualenvs/venv//lib/python3.8/site-packages/pymysql/connections.py&quot;, line 822, in _read_query_result result.read() File &quot;/home/myuser/var/virtualenvs/venv//lib/python3.8/site-packages/pymysql/connections.py&quot;, line 1200, in read first_packet = self.connection._read_packet() File &quot;/home/myuser/var/virtualenvs/venv//lib/python3.8/site-packages/pymysql/connections.py&quot;, line 772, in _read_packet packet.raise_for_error() File &quot;/home/myuser/var/virtualenvs/venv//lib/python3.8/site-packages/pymysql/protocol.py&quot;, line 221, in raise_for_error err.raise_mysql_exception(self._data) File &quot;/home/myuser/var/virtualenvs/venv//lib/python3.8/site-packages/pymysql/err.py&quot;, line 143, in raise_mysql_exception raise errorclass(errno, errval) django.db.utils.IntegrityError: (1062, &quot;Duplicate entry '12780' for key 'tabl.PRIMARY'&quot;) </code></pre> <p>It gives the duplicate-entry error, but it is ultimately able to reach the database, while with the original query it gives the error about the parameters.</p> <p>If I loop through the parameters and run <code>cursor.execute</code>, it works fine with the <code>ON DUPLICATE</code> syntax.</p> <p>Why is it not able to match all the parameters when we use <code>executemany</code>?</p>
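One workaround I'm considering (not yet verified against pymysql's source) is to rewrite the <code>ON DUPLICATE</code> clause using MySQL's <code>VALUES(col)</code> references, so the statement keeps all its <code>%s</code> placeholders inside the <code>VALUES (...)</code> clause and each parameter tuple matches them exactly:

```python
# Hypothetical rewrite: the update clause reuses the inserted values via
# VALUES(col), so each row tuple carries only the three insert parameters.
a = ("INSERT INTO tabl (switch_id, readiness, message) VALUES (%s, %s, %s) "
     "ON DUPLICATE KEY UPDATE readiness=VALUES(readiness), message=VALUES(message)")
b = [(12780, 'not_ready', 'StatusDB data'),
     (12781, 'not_ready', 'StatusDB data.')]

# Sanity check: every row tuple now matches the placeholder count
assert all(len(row) == a.count('%s') for row in b)
```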
<python><mysql><pymysql><executemany>
2024-02-20 19:23:09
1
21,411
NPatel
78,029,820
1,852,526
Write two strings into one column using pandas
<p>I want to write 2 strings into one column within an xlsx file using pandas, like</p> <blockquote> <p><a href="https://www.nuget.org/packages/Newtonsoft.Json/" rel="nofollow noreferrer">https://www.nuget.org/packages/Newtonsoft.Json/</a> - release notes tab</p> </blockquote> <p>where only the hyperlink part is clickable. The '- release notes tab' part should not be part of the hyperlink. Something like in the screenshot.</p> <p><a href="https://i.sstatic.net/GNzZo.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/GNzZo.png" alt="two strings one with url format" /></a></p> <p>I am trying something like this. The column I am trying to write is called 'Known Issues'. Please see the <code>if 'Known Issues' in df:</code> line.</p> <pre><code>def create_excel_with_format(headers,values,full_file_name_with_path): #Write to CSV in xlsx format with indentation. df = pd.DataFrame(data=values,columns=headers) #df = df.set_axis(df.index*2 + 1).reindex(range(len(df)*2)) #Create a blank row after every row. with pd.ExcelWriter(full_file_name_with_path) as writer: #For any list items inside of the excel file we remove the square brackets[] for index,row in df.iterrows(): for col in list(df.columns): if isinstance(row[col], list): row[col] = &quot;, &quot;.join(row[col]) df.to_excel(writer, index=False) workbook = writer.book worksheet = writer.sheets['Sheet1'] #Write the location each comma separated in new line if exists (For Nuget exists and thirdparty no). 
if 'Location' in df: df[&quot;Location&quot;] = df[&quot;Location&quot;].str.join(&quot;\n&quot;) twrap = workbook.add_format({&quot;text_wrap&quot;: True}) idx_location = df.columns.get_loc(&quot;Location&quot;) worksheet.set_column(idx_location, idx_location, 60, twrap) if 'Known Issues' in df: df[&quot;Known Issues&quot;]=df[&quot;Known Issues&quot;].str.join(&quot; - release notes tab&quot;) #DOES NOT WORK header_format = workbook.add_format({ 'bold': True, 'border': False, 'text_wrap': False, 'font_size':13}) for col_num, value in enumerate(df.columns.values): worksheet.write(0, col_num, value, header_format) </code></pre>
<python><pandas><xlsx>
2024-02-20 19:16:28
1
1,774
nikhil
78,029,817
6,462,301
How to type hint a subset of values from a known set in python
<p>Consider the following set:</p> <p><code>s = set([&quot;x&quot;, &quot;y&quot;, &quot;z&quot;])</code></p> <p>How do I create a type hint for a variable that is any subset of the elements in s (short of explicitly creating a Union of every possible subset)?</p>
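For context, this sketch is roughly what I'm after, assuming <code>typing.Literal</code> is the right tool (a checker such as mypy would flag values outside the allowed set; nothing is enforced at runtime):

```python
from typing import Literal, Set, get_args

Element = Literal["x", "y", "z"]   # the allowed values
Subset = Set[Element]              # any subset of them

def describe(subset: Subset) -> int:
    return len(subset)

ok: Subset = {"x", "z"}            # accepted by a type checker
# bad: Subset = {"x", "w"}         # "w" would be flagged by mypy/pyright
print(describe(ok))  # → 2
```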
<python><set><python-typing>
2024-02-20 19:16:02
1
1,162
rhz
78,029,628
7,344,164
Adding nested lists to shared dict in Multiprocessing Pool and saving to JSON after the context
<p>I'm trying to run a process in a multiprocessing Pool as:</p> <pre><code>from multiprocessing.pool import Pool from multiprocessing import Manager manager = Manager() lock = manager.Lock() data_dict = manager.dict({data_subset: {}}) with Pool(processes=cpu_count()-2) as p: with tqdm(total=len(paths)) as pbar: for v in p.imap_unordered( partial(extract_video, root_dir=opt.data_path, dataset=dataset, output_path=opt.output_path, data_dict=data_dict, data_subset=data_subset), paths ): pbar.update() print(data_dict.copy()) # Save the data_dict to a JSON file with open(os.path.join(opt.output_path, &quot;data_dict.json&quot;), 'w') as json_file: print('writing to json') json.dump(data_dict.copy(), json_file) </code></pre> <p>And here is the extract_video function:</p> <pre><code>def extract_video(video, root_dir, dataset, output_path, data_dict, data_subset): try: # some code to create image crops and other variables # .... for j, crop in enumerate(crops): image_path = os.path.join(output_path, id, &quot;{}_{}.png&quot;.format(i, j)) cv2.imwrite(image_path, crop) # Update data_dict label = 0 # Assuming label is 0, you can modify this based on your logic # Tried without lock as well with lock: if id not in data_dict[data_subset]: data_dict[data_subset][id] = {'label': label, 'list': []} # tried manager.list() as well data_dict[data_subset][id]['list'] += [image_path] # tried append method on list as well # data_dict[data_subset][id] = {'label': label, 'list': [image_path]} # Tried this by adding image_path directly to the list as well print(data_dict) except Exception as e: print(&quot;Error:&quot;, e) </code></pre> <p>Now the problem is that all the code in the Pool context works fine if I comment out the lines adding nested objects to data_dict. But when I uncomment those lines, I get some exceptions. I tried debugging the code snippets with a dummy dict and list as discussed in <a href="https://bugs.python.org/issue36119" rel="nofollow noreferrer">this</a> thread, but with no success.</p>
<python><json><multiprocessing><python-multiprocessing>
2024-02-20 18:34:07
1
14,299
DevLoverUmar
78,029,515
17,580,381
Managed dictionary does not behave as expected in multiprocessing
<p>From what I've managed to figure out so far, this may only be a problem in macOS.</p> <p>Here's my MRE:</p> <pre><code>from multiprocessing import Pool, Manager from functools import partial def foo(d, n): d.setdefault(&quot;X&quot;, []).append(n) def main(): with Manager() as manager: d = manager.dict() with Pool() as pool: pool.map(partial(foo, d), range(5)) print(d) if __name__ == &quot;__main__&quot;: main() </code></pre> <p><strong>Output:</strong></p> <pre><code>{'X': []} </code></pre> <p><strong>Expected output:</strong></p> <pre><code>{'X': [0, 1, 2, 3, 4]} </code></pre> <p><strong>Platform:</strong></p> <pre><code>macOS 14.3.1 Python 3.12.2 </code></pre> <p>Maybe I'm doing something fundamentally wrong but I understood that the whole point of the Manager was to handle precisely this kind of scenario.</p> <p><strong>EDIT</strong></p> <p>There is another <em>hack</em> which IMHO should be unnecessary but even this doesn't work (produces identical output):</p> <pre><code>from multiprocessing import Pool, Manager def ipp(d, lock): global gDict, gLock gDict = d gLock = lock def foo(n): global gDict, gLock with gLock: gDict.setdefault(&quot;X&quot;, []).append(n) def main(): with Manager() as manager: d = manager.dict() lock = manager.Lock() with Pool(initializer=ipp, initargs=(d, lock)) as pool: pool.map(foo, range(5)) print(d) if __name__ == &quot;__main__&quot;: main() </code></pre>
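For completeness, the only variant I've found that does produce the expected output is mutating a local copy and assigning it back into the managed dict, which (if I understand the proxies correctly) suggests <code>setdefault(...)</code> hands back a plain copy of the nested list rather than a managed one — names below match my MRE:

```python
from multiprocessing import Pool, Manager
from functools import partial

def foo(d, lock, n):
    with lock:               # serialize the read-modify-write cycle
        xs = d.get("X", [])  # the proxy returns a plain local copy
        xs.append(n)
        d["X"] = xs          # assign back so the change reaches the manager

def main():
    with Manager() as manager:
        d = manager.dict()
        lock = manager.Lock()
        with Pool(2) as pool:
            pool.map(partial(foo, d, lock), range(5))
        return sorted(d["X"])

if __name__ == "__main__":
    print(main())  # → [0, 1, 2, 3, 4]
```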
<python><multiprocessing>
2024-02-20 18:08:59
1
28,997
Ramrab
78,029,447
1,485,926
Release of references in PyObject when that object is a dictionary, list or text string
<p>I have some code using the <a href="https://docs.python.org/3/c-api/index.html" rel="nofollow noreferrer">Python C API</a> like this:</p> <pre><code>PyObject* result = PyObject_CallMethod(/* some function with some arguments */); // Do something with result Py_XDECREF(result); </code></pre> <p>I'm almost sure this code is fine from the point of view of reference releasing when <code>result</code> is a <code>PyObject*</code> representing a simple type (<code>PyBool</code>, <code>PyLong</code>, <code>PyFloat</code>, etc.) but I'm unsure in the following cases:</p> <ol> <li>Dictionaries (i.e. <code>result</code> is of <code>PyDict</code> type)</li> <li>Lists (i.e. <code>result</code> is of <code>PyList</code> type)</li> <li>Text strings (i.e. <code>result</code> is of <code>PyUnicodeObject</code> type)</li> </ol> <p>(In cases 1 and 2 my doubt arises because dictionaries/lists also use <code>PyObject*</code> for their members and I'm not sure if <code>Py_XDECREF()</code> is smart enough to remove all references used internally in the dictionary/list or if I have to implement some kind of recursive removal function)</p> <p>Is my approach also right for these types?</p> <p><strong>EDIT:</strong> to answer the question in the comment <em>is the code you are using making references to the objects that are referenced by the dict or list?</em> a closer version to reality is the following one.</p> <p>As you can see, the processing of the <code>result</code> is done in a recursive way. 
Not sure if <code>key</code>, <code>value</code> and the result of <code>PyList_GetItem(result, ix)</code> are creating references that should be freed in some way...</p> <pre><code>void process(PyObject* result) { if (PyDict_Check(result)) { PyObject *key, *value; Py_ssize_t pos = 0; while (PyDict_Next(result, &amp;pos, &amp;key, &amp;value)) { process(value); } } else if (PyList_Check(result)) { Py_ssize_t size = PyList_Size(result); for (Py_ssize_t ix = 0; ix &lt; size; ++ix) { process(PyList_GetItem(result, ix)); } } else if (PyUnicode_Check(result)) { const char* str = PyUnicode_AsUTF8(result); // do something with str } else { // processing other types } } // code entry point PyObject* result = PyObject_CallMethod(/* some function with some arguments */); process(result); Py_XDECREF(result); </code></pre>
<python><python-c-api>
2024-02-20 17:56:03
0
12,442
fgalan
78,029,445
250,540
Pythonic optimization of per-pixel image processing
<p>I am processing an image in Python, where I need to calculate a metric for &quot;how much is happening in an image&quot;. The metric finds horizontal stripes of non-zero values, computes the sum of the values inside each stripe, and then sums the squares of those per-stripe sums.</p> <p>A naive implementation which visits every pixel of the image is painfully slow (as expected). What is the best way to rewrite it in a Pythonic way, ideally so that it can utilize multiple CPUs under the hood?</p> <pre><code> merit = 0.0 for y in range(height): segment_sum = 0 for x in range(width): if test_image[y,x]&gt;0: segment_sum+=test_image[y,x] elif segment_sum&gt;0: if(segment_sum&gt;1000):merit+=segment_sum*segment_sum #Ignore short segments segment_sum = 0 return merit </code></pre>
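For reference, a candidate NumPy rewrite I'm evaluating, built on run boundaries and a cumulative sum (zero-padding each row so runs close at the borders; note this version also counts a segment touching the right edge, which the loop above silently drops):

```python
import numpy as np

def merit_vectorized(test_image):
    img = np.asarray(test_image, dtype=np.float64)
    img = np.where(img > 0, img, 0.0)
    padded = np.pad(img, ((0, 0), (1, 1)))  # zero column each side: runs cannot span rows
    flat = padded.ravel()
    pos = flat > 0
    starts = np.flatnonzero(pos & ~np.roll(pos, 1))    # run begins here
    ends = np.flatnonzero(pos & ~np.roll(pos, -1)) + 1 # one past the run's end
    csum = np.concatenate(([0.0], np.cumsum(flat)))
    sums = csum[ends] - csum[starts]                   # per-run totals
    return float(np.sum(sums[sums > 1000] ** 2))       # ignore short segments

arr = np.array([[0, 600, 700, 0, 2000, 0],
                [5, 5, 0, 0, 0, 0]])
print(merit_vectorized(arr))  # → 5690000.0 (1300**2 + 2000**2; the 10-run is ignored)
```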
<python><numpy><opencv>
2024-02-20 17:55:40
1
6,601
BarsMonster
78,029,336
1,111,886
Qt event for focus changed with tab key
<p>I have a UI that I would like to implement a form of auto-fill for. I have implemented this by using an event filter to populate the QLineEdit when it's given focus, based on the previously focused QLineEdit.</p> <p>The issue is that I want this autofill to occur only when the user is using the tab key to cycle through the form. If they are manually clicking around then the autofill will likely be a hindrance.</p> <pre><code>def eventFilter(self, selected_input, event): if event.type() == QEvent.FocusIn: if (self.previous_input is not None #and selected_via_tab_key and selected_input.text() == &quot;&quot;): new_value = int(self.previous_input.text()) + 1 selected_input.setText(str(new_value)) self.previous_input = selected_input return False </code></pre> <p>Is there any way to detect how the focus was changed within this event, or another event I could monitor for just a tab focus change?</p>
<python><pyqt5>
2024-02-20 17:35:31
1
4,367
Fr33dan
78,029,324
1,485,926
How can I create lambda PyObject from its Python code in a C string?
<p>I have the following code in Python, which uses the <a href="https://pypi.org/project/pyjexl" rel="nofollow noreferrer"><code>pyjexl</code> module</a>:</p> <pre><code>import pyjexl jexl = pyjexl.JEXL() jexl.add_transform(&quot;lowercase&quot;, lambda x: str(x).lower()) </code></pre> <p>I want to do the same using the <a href="https://docs.python.org/3/c-api/index.html" rel="nofollow noreferrer">Python C API</a>. Something like this:</p> <pre class="lang-c prettyprint-override"><code>Py_Initialize(); PyObject* pyjexlModule = PyImport_ImportModule(&quot;pyjexl&quot;); PyObject* jexl = PyObject_CallMethod(pyjexlModule, &quot;JEXL&quot;, NULL); const char* myLambda = &quot;lambda x: str(x).lower()&quot;; PyObject* lambda = ... /* something using myLambda */ PyObject_CallMethod(jexl, &quot;add_transform&quot;, &quot;sO&quot;, &quot;lowercase&quot;, lambda); </code></pre> <p>(Simplified version of the code, e.g., <code>NULL</code> checking and <code>Py_XDECREF()</code> have been omitted)</p> <p>The problem I'm trying to solve is how to get a <code>PyObject</code> representing a lambda function whose Python code is contained in the C string <code>myLambda</code>.</p> <p>How can I achieve this?</p> <p>I have tried with the suggestion by @DavidW using this:</p> <pre class="lang-c prettyprint-override"><code>PyObject* globals = PyDict_New(); PyObject* locals = PyDict_New(); PyObject* lambda = PyRun_String(myLambda, Py_eval_input, globals, locals); </code></pre> <p>But I think it's not working, as the resulting <code>lambda</code> variable (inspected in the debugger) appears to be <code>None</code>:</p> <p><a href="https://i.sstatic.net/3nc1V.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/3nc1V.png" alt="Debugger screenshot" /></a></p>
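For reference, `PyRun_String` with `Py_eval_input` is the C-level counterpart of Python's `eval()`, and evaluating a lambda expression that way should yield a callable rather than `None`, which suggests the debugger view may be misleading. A pure-Python illustration of the expected semantics (added here for reference, not from the original post):

```python
my_lambda_src = "lambda x: str(x).lower()"

# eval() mirrors PyRun_String(..., Py_eval_input, globals, locals) at the C level
fn = eval(my_lambda_src, {}, {})

assert callable(fn)
print(fn("HeLLo"))  # hello
```

If the C call really returns `NULL`/`None`, checking `PyErr_Occurred()` right after `PyRun_String` would show whether an exception was raised during evaluation.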
<python><python-c-api>
2024-02-20 17:34:03
1
12,442
fgalan
78,029,273
2,791,346
Code standards check before commit in PyCharm
<p>I would like to have some analysis done on Python code before I commit, to gently “enforce” the standards.</p> <p>I would like to have checks such as:</p> <ul> <li>no print in new code</li> <li>No methods longer than 50 lines</li> <li>No more than 4 arguments in methods</li> <li>…</li> </ul> <p>(inside the team we could expand the list as needed)</p> <p>These should be warnings only, because I don’t want to prevent commits when they are needed.</p> <p>I was looking into PyCharm Settings / Version Control / Commit / Analyze code before commit, but this works only sometimes and only accepts regexes.</p> <p>Is there any better and simpler way to do this?</p>
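For checks this specific, one option is a small script built on the standard-library `ast` module, run from a pre-commit hook that only prints warnings. The thresholds below are the ones from the list above; the function name and message format are invented for this sketch:

```python
import ast

def check_source(source, max_len=50, max_args=4):
    """Collect warnings: print() calls, over-long functions, too many arguments."""
    warnings = []
    for node in ast.walk(ast.parse(source)):
        # rule: no print in new code
        if (isinstance(node, ast.Call) and isinstance(node.func, ast.Name)
                and node.func.id == "print"):
            warnings.append(f"line {node.lineno}: print() call")
        if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef)):
            # rule: no methods longer than max_len lines
            length = node.end_lineno - node.lineno + 1
            if length > max_len:
                warnings.append(f"line {node.lineno}: {node.name} is {length} lines long")
            # rule: no more than max_args arguments
            n_args = (len(node.args.args) + len(node.args.posonlyargs)
                      + len(node.args.kwonlyargs))
            if n_args > max_args:
                warnings.append(f"line {node.lineno}: {node.name} takes {n_args} arguments")
    return warnings

sample = "def f(a, b, c, d, e):\n    print(a)\n"
for warning in check_source(sample):
    print(warning)
```

Off the shelf, the same rules map onto existing linters (pylint and flake8 plugins cover print statements, function length, and argument counts), which can be run as non-blocking warnings via the pre-commit framework.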
<python><git><pycharm><code-standards>
2024-02-20 17:27:17
1
8,760
Marko Zadravec
78,029,128
2,015,734
readthedocs build fails with `Sphinx version error`
<p>readthedocs started to throw an error <code>Sphinx version error</code> during build, although there were no changes to the readthedocs configuration and dependencies are all pinned.</p> <p>Here's the <a href="https://github.com/dask/dask-image" rel="nofollow noreferrer">repository</a> and a specific <a href="https://github.com/dask/dask-image/pull/344" rel="nofollow noreferrer">PR</a> for which the doc build fails. Here's the <a href="https://readthedocs.org/projects/dask-image/builds/23503914/" rel="nofollow noreferrer">error message in the readthedocs build</a>:</p> <p>(shortened)</p> <pre><code>Running Sphinx v4.5.0 loading translations [en]... done ... Sphinx version error: The sphinxcontrib.applehelp extension used by this project needs at least Sphinx v5.0; it therefore cannot be built with this version. </code></pre> <p>HOWEVER, sphinx is already pinned to &gt;5 in the repository, but version 4.5 seems to be installed during doc build.</p> <p>The error looks very similar to <a href="https://stackoverflow.com/questions/77818509/readthedocs-sphinx-build-failing-due-to-version-error">this question</a>, however in my case there's no such misconfiguration and the fully pinned doc environment defined in <code>continuous_integration/environment-doc.yml</code> hasn't changed since the failures started.</p> <p>What's interesting is that <code>cat continuous_integration/environment-doc.yml</code> on the <a href="https://readthedocs.org/projects/dask-image/builds/23503914/" rel="nofollow noreferrer">readthedocs</a> shows something different from <code>continuous_integration/environment-doc.yml</code> in the repository. Namely, there's a conda dependency sphinx added at the end, as if readthedocs added lines in the background.</p> <p>Any help is greatly appreciated!</p>
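One common mitigation (an assumption on my part, not something taken from the build logs above) is to pin the build environment explicitly in a v2 `.readthedocs.yaml`, so the build uses the modern spec rather than legacy defaults that may inject their own Sphinx pin into a conda environment. The OS/tool versions and paths below are guesses to adapt to the repository:

```yaml
# Hypothetical .readthedocs.yaml (v2); os/tool versions and paths are assumptions
version: 2
build:
  os: ubuntu-22.04
  tools:
    python: "mambaforge-22.9"
conda:
  environment: continuous_integration/environment-doc.yml
sphinx:
  configuration: docs/conf.py
```

Listing `sphinx` (with its pin) explicitly in the conda environment file itself is also worth trying, since the appended dependency in the log suggests the service adds one only when it thinks none is declared.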
<python><continuous-integration><python-sphinx><read-the-docs>
2024-02-20 16:56:26
0
358
malbert
78,029,050
2,778,860
Installing gst-python in macOS
<p>I installed GStreamer 1.22.10 on macOS (Sonoma) using the <a href="https://gstreamer.freedesktop.org/download/#macos" rel="nofollow noreferrer">official binaries</a>. I realized that gst-python is not automatically installed, so I cloned the GStreamer Gitlab repository and followed the <a href="https://gitlab.freedesktop.org/gstreamer/gstreamer/-/tree/main/subprojects/gst-python" rel="nofollow noreferrer">related instructions</a> in the gst-python subproject to install it.</p> <p>I'm using Python (3.12) installed via homebrew, so I gave its pygi overrides directory to meson during setup:</p> <pre><code>meson setup builddir -Dpygi-overrides-dir=/opt/homebrew/lib/python3.12/site-packages/gi/overrides </code></pre> <p>Pygobject3 is also installed via brew in the path: <code>/opt/homebrew/Cellar/pygobject3/3.46.0_1</code></p> <p>Now when I do <code>python -c &quot;import gi; gi.require_version('Gst', '1.0');&quot;</code>, I get an error that the namespace Gst is not available.</p> <pre><code>Traceback (most recent call last): File &quot;&lt;string&gt;&quot;, line 1, in &lt;module&gt; File &quot;/opt/homebrew/lib/python3.12/site-packages/gi/__init__.py&quot;, line 126, in require_version raise ValueError('Namespace %s not available' % namespace) ValueError: Namespace Gst not available </code></pre> <p>Any help is appreciated.</p> <p><strong>Note:</strong> If GStreamer is installed using brew, gst-python is included but that is not my preferred approach because the brew version does not contain some of the packages I need, like webrtc.</p>
<python><macos><gstreamer><pygobject>
2024-02-20 16:45:10
1
1,715
chronosynclastic
78,029,020
640,916
PIL's Image.rotate gives me 90 degrees instead of 270 degrees
<p>Python 3.11.8 PIL 10.2.0</p> <p>I'm trying to rotate this image by 270 degrees:</p> <p><a href="https://i.sstatic.net/X6jhX.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/X6jhX.jpg" alt="enter image description here" /></a></p> <p>My code:</p> <pre><code>from PIL import Image image = Image.open(&quot;test.jpg&quot;) image = image.rotate(270, expand=True) image.save(&quot;test_out.jpg&quot;) </code></pre> <p>But the result is:</p> <p><a href="https://i.sstatic.net/hebXl.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/hebXl.jpg" alt="enter image description here" /></a></p> <p>which to me is a 90 degree rotation</p> <p>I suspected that auto-orientation and exif data is messing with me, so I tried these commands separately before the rotation but they did not help getting the expected result either:</p> <pre><code>image = ImageOps.exif_transpose(image) image.getexif().clear() </code></pre> <p>When I do it with another tool:</p> <pre><code>convert test.jpg -rotate 270 test_out.jpg </code></pre> <p>I get the expected result:</p> <p><a href="https://i.sstatic.net/rCHQd.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/rCHQd.jpg" alt="enter image description here" /></a></p>
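The discrepancy is consistent with the two tools measuring angles in opposite directions: Pillow's `Image.rotate` documents its angle as counter-clockwise, while ImageMagick's `-rotate` is clockwise, so `rotate(270)` corresponds to `convert -rotate 90`. A tiny grid, standing in for image pixels, makes the directions concrete (an illustration added here, not code from the question):

```python
grid = [[1, 2],
        [3, 4]]

def rotate_ccw(g):
    # 90 degrees counter-clockwise, the direction Pillow's Image.rotate uses
    return [list(row) for row in zip(*g)][::-1]

def rotate_cw(g):
    # 90 degrees clockwise, the direction ImageMagick's -rotate uses
    return [list(row) for row in zip(*g[::-1])]

assert rotate_ccw(grid) == [[2, 4], [1, 3]]
assert rotate_cw(grid) == [[3, 1], [4, 2]]

# three counter-clockwise quarter turns equal one clockwise quarter turn,
# i.e. Pillow's rotate(270) lands where ImageMagick's -rotate 90 does
assert rotate_ccw(rotate_ccw(rotate_ccw(grid))) == rotate_cw(grid)
```

So to reproduce `convert -rotate 270`, the Pillow call would be `image.rotate(90, expand=True)` (or `image.rotate(-270, expand=True)`).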
<python><image-processing><python-imaging-library>
2024-02-20 16:38:36
1
7,819
djangonaut
78,028,992
8,385,813
TypeError: len() of unsized object in pyclustering library
<p>I am using the pyclustering library to perform K-means. The datasets I am using are being read in CSV format as shown in the code below. I have tried passing X_scaled as a numpy array and as a list using tolist(). However, I constantly get this error:</p> <pre><code>TypeError: len() of unsized object </code></pre> <p>Version of pyclustering: 0.10.1.2</p> <p>The code is below:</p> <pre><code>from pyclustering.cluster.kmeans import kmeans from pyclustering.utils.metric import distance_metric, type_metric import matplotlib.pyplot as plt import numpy as np from sklearn.preprocessing import StandardScaler # Define a function to convert distance metric names to functions def get_distance_metric(metric_name): if metric_name == 'euclidean': return distance_metric(type_metric.EUCLIDEAN) elif metric_name == 'squared euclidean': return distance_metric(type_metric.EUCLIDEAN_SQUARE) elif metric_name == 'manhattan': return distance_metric(type_metric.MANHATTAN) elif metric_name == 'chebyshev': return distance_metric(type_metric.CHEBYSHEV) elif metric_name == 'canberra': return distance_metric(type_metric.CANBERRA) elif metric_name == 'chi-square': return distance_metric(type_metric.CHI_SQUARE) else: raise ValueError(f&quot;Unsupported distance metric: {metric_name}&quot;) # Define the distance measures dictionary distance_measures = {'euclidean': 0, 'squared euclidean': 1, 'manhattan': 2, 'chebyshev': 3, 'canberra': 5, 'chi-square': 6} # Example of running the modified code datasets = main_datasets df = datasets['circles0.3.csv'] original_labels = df['label'].values if 'label' in df.columns else None X = df.drop(columns=['label'], errors='ignore').values scaler = StandardScaler() X_scaled = scaler.fit_transform(X) # Set the number of clusters k = 3 # Experiment with various distance metrics for metric_name, metric_code in distance_measures.items(): # Get the distance metric function distance_metric_func = get_distance_metric(metric_name) # Perform K-means clustering with the selected distance metric # centers, clusters = 
kmeans(X_scaled.tolist(), k, metric=distance_metric_func) centers, clusters = kmeans(X_scaled, k, metric=distance_metric_func) # Plot the clusters plt.figure() plt.title(f'K-means Clustering with {metric_name}') plt.xlabel('X') plt.ylabel('Y') plt.scatter([point[0] for point in X_scaled], [point[1] for point in X_scaled], c=clusters, cmap='viridis') plt.scatter([center[0] for center in centers], [center[1] for center in centers], marker='x', c='red', s=100) plt.show() </code></pre> <p>Can anybody help me out with what the issue might be with this code?</p>
<python><cluster-analysis><k-means><unsupervised-learning>
2024-02-20 16:34:56
1
340
Arnab Sinha
78,028,664
2,459,855
Setting File Type for Emailed Attachments
<p>A Python script for sending email with attachments is working, but all attachments, if any, are arriving as .eml files rather than as jpg, pdf, txt, etc. They do open correctly, but I would prefer that they bear their actual name and type. Although I made a minor change for Python 3 ( <em>attach( changed to add_attachment(</em> ), it did work for me on Python 2.7.</p> <p>Also, I keep coming across references that set_payload has been deprecated, but I can't see what to use instead.</p> <pre><code># Attach any files files = '''/Users/Mine/Desktop/RKw.jpeg\n/Users/Mine/Desktop/PG_2022.pdf'''.splitlines() for file in files: attachment = open(file, &quot;rb&quot;) part = MIMEBase(&quot;application&quot;, &quot;octet-stream&quot;) part.set_payload((attachment).read()) encoders.encode_base64(part) part.add_header(&quot;Content-Disposition&quot;, f&quot;attachment; filename= {file.split('/')[-1]}&quot;) msg.add_attachment(part) </code></pre>
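The snippet mixes the legacy `MIMEBase`/`encoders` workflow with the modern `EmailMessage.add_attachment()` API. With `EmailMessage`, `add_attachment()` takes the raw bytes plus a `maintype`/`subtype` (this replaces `set_payload` and `encode_base64` entirely), and `mimetypes` can guess the content type from the filename. A self-contained sketch; the file names here are stand-ins for the real paths:

```python
import mimetypes
import tempfile
from email.message import EmailMessage
from pathlib import Path

# stand-in files so the sketch runs anywhere; substitute the real paths
tmp = Path(tempfile.mkdtemp())
(tmp / "photo.jpg").write_bytes(b"\xff\xd8 not a real jpeg")
(tmp / "doc.pdf").write_bytes(b"%PDF-1.4 not a real pdf")

msg = EmailMessage()
msg["Subject"] = "Files attached"
msg.set_content("See attachments.")

for path in [tmp / "photo.jpg", tmp / "doc.pdf"]:
    ctype, _ = mimetypes.guess_type(path.name)
    maintype, subtype = (ctype or "application/octet-stream").split("/", 1)
    msg.add_attachment(
        path.read_bytes(),        # raw bytes; base64 encoding happens internally
        maintype=maintype,
        subtype=subtype,
        filename=path.name,       # the name and type the recipient's client shows
    )

print([p.get_filename() for p in msg.iter_attachments()])  # ['photo.jpg', 'doc.pdf']
```

Passing a pre-built `MIMEBase` part to `add_attachment()` is what makes the parts arrive as generic `.eml`-like blobs; handing over bytes plus an explicit filename keeps the real name and type.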
<python><email><attachment><email-attachments><mime>
2024-02-20 15:49:50
1
1,127
JAC
78,028,444
1,501,700
Different current folders in Debug and Run modes
<p>I have Python code in Visual Studio Code:</p> <pre><code>import os current_working_directory = os.getcwd() print(current_working_directory) </code></pre> <p>It prints my Windows home directory when I run it. When I debug, it prints the folder where the code file is located. Why? How can I get the same behavior in both cases?</p>
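The usual way to make path handling independent of where VS Code launches the process is to anchor on the script's own location (a generic sketch; separately, as an assumption about the setup, the debugger's working directory can also be pinned with `"cwd": "${fileDirname}"` in `launch.json`):

```python
import os

# where the process happened to be launched from (this differs between Run and Debug)
launch_dir = os.getcwd()

# the script's own folder is stable regardless of how the process was started
script_dir = os.path.dirname(os.path.abspath(__file__))

os.chdir(script_dir)   # pin the cwd so relative paths behave the same everywhere
assert os.path.samefile(os.getcwd(), script_dir)
print(os.getcwd())
```

With the cwd pinned like this (or in `launch.json`), relative paths resolve identically in both Run and Debug modes.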
<python><visual-studio-code>
2024-02-20 15:17:42
0
18,481
vico
78,028,390
4,527,660
Whitelisting an IP in GCP using a Cloud Function is not working
<p>I am writing a cloud function that gets triggered from Slack with an IP, after which we whitelist that IP in the GCP firewall. Because GCP firewall rules cannot simply be appended to, we have to fetch the current source ranges, append the new IP, and then update the rule.</p> <p>I am facing the error below:</p> <p><code>Error updating firewall rule: FirewallsClient.update() got an unexpected keyword argument 'body'</code></p> <p>I have tried a lot to fix this but keep getting one error or another, partly due to my limited understanding of the client library.</p> <p>My code:</p> <pre><code>def updateSecurityGroupRule(ip): project_id = 'development-404511' firewall_rule_name = 'bastion-host-ssh' client = compute_v1.FirewallsClient() try: request = client.get(project=project_id, firewall=firewall_rule_name) current_firewall_rule = request except Exception as e: print(f&quot;Error retrieving current firewall rule: {e}&quot;) return False if isinstance(current_firewall_rule, compute_v1.Firewall) and hasattr(current_firewall_rule, 'source_ranges'): # Assuming there is at least one source range in the list current_ip_range = current_firewall_rule.source_ranges[0] if current_firewall_rule.source_ranges else None else: print(f&quot;Unexpected response from client.get(): {current_firewall_rule}&quot;) return False new_source_ranges = [current_ip_range, f'{ip}/32'] firewall_update_mask = FieldMask(paths=['source_ranges']) firewall_update_request = compute_v1.Firewall( name=firewall_rule_name, source_ranges=new_source_ranges ) print(f&quot;Updating firewall rule with new source ranges: {new_source_ranges}&quot;) try: client.update(project=project_id, firewall=firewall_rule_name, body=firewall_update_request, updateMask=firewall_update_mask) print(&quot;Firewall rule updated successfully.&quot;) return True except Exception as e: print(f&quot;Error updating firewall rule: {e}&quot;) return False </code></pre> <p>When I run it and watch the logs, I see that the script works up to this line: <code>print(f&quot;Updating firewall rule with new source ranges: {new_source_ranges}&quot;)</code></p> <p>It gets the current IPs from the firewall and appends the new IP, but after that it never updates the firewall rule.</p> <p>Can someone tell me what is wrong here, and whether there is a better way to update a firewall rule with a new IP?</p>
<python><google-cloud-platform><google-cloud-functions><google-compute-api>
2024-02-20 15:07:54
1
303
Waseem Mir
78,028,104
15,176,150
How can you colour a matplotlib plot to show point density?
<p>I'm plotting some partial dependency plots using matplotlib. I have 1,000 samples which I produce regression predictions for. These predictions are plotted on the y-axis. To get an overview of partial dependency for the whole dataset I'm fixing all variables in my samples bar one. The un-fixed variable is shown on the x-axis and is varied for all samples.</p> <p>On my plot I want to show the mean value that my 1,000 samples take for a given x value, and the distribution of values plotted as a colour gradient at that x value. At the moment I have the following:</p> <p><a href="https://i.sstatic.net/UieXH.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/UieXH.png" alt="Demonstration of point method." /></a></p> <p>This is functional, because it shows the density of predictions for a given x value, but ugly. I want something that does the same job of showing density at each x value, but that is a smooth gradient that fills the plot, not individual points.</p> <p>How would I achieve something like this?</p>
<python><matplotlib><plot>
2024-02-20 14:24:00
2
1,146
Connor
78,028,090
836,723
Regex is not working in Python Playwright page.wait_for_url()?
<p>I found a strange difference between the Python and JavaScript implementations of pattern matching in <code>page.waitForURL()</code> / <code>page.wait_for_url()</code>.</p> <p>In the Python version this code doesn't work:</p> <pre class="lang-py prettyprint-override"><code>from playwright.sync_api import sync_playwright with sync_playwright() as p: browser = p.chromium.launch() page = browser.new_page() page.goto(&quot;https://playwright.dev/python/docs/api/class-page&quot;) page.wait_for_url(r&quot;docs/api&quot;) browser.close() </code></pre> <p>In the JavaScript version it works fine:</p> <pre class="lang-js prettyprint-override"><code>// @ts-check const playwright = require('playwright'); (async () =&gt; { // Try to add 'playwright.firefox' to the list ↓ for (const browserType of [playwright.chromium, playwright.webkit]) { const browser = await browserType.launch(); const context = await browser.newContext(); const page = await context.newPage(); await page.goto('https://playwright.dev/python/docs/api/class-page'); await page.waitForURL(/docs\/api/); await browser.close(); } })(); </code></pre>
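This likely comes down to how each binding interprets a plain string: in Playwright, a string URL is matched as a glob pattern against the full URL, while a compiled regular expression (`re.compile(...)` in Python, a regex literal in JS) is searched. So `r"docs/api"` is still just a string to the Python API; `page.wait_for_url(re.compile(r"docs/api"))` would be the direct counterpart of the JS code. A stdlib-only illustration (note `fnmatch` only approximates Playwright's glob rules):

```python
import re
from fnmatch import fnmatch

url = "https://playwright.dev/python/docs/api/class-page"

# treated as a glob, a bare string must match the WHOLE url, so it fails
assert not fnmatch(url, "docs/api")
assert fnmatch(url, "**/docs/api/**")

# a compiled pattern is searched within the url, like the JS regex literal
assert re.compile(r"docs/api").search(url)
print("compiled pattern matches; bare string does not")
```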
<python><playwright><playwright-python>
2024-02-20 14:21:59
2
1,818
Maksim Shamihulau
78,028,035
6,498,753
Basemap with joint histograms plot
<p>Here is the code returning the figure below:</p> <pre><code>import seaborn as sns plt.figure(figsize=(8, 8)) gs = plt.GridSpec(3, 3) ax_main = plt.subplot(gs[1:3, :2]) ax_lon = plt.subplot(gs[0, :2]) ax_lat = plt.subplot(gs[1:3, 2]) m = Basemap(projection='merc', resolution='i', llcrnrlon=llcrnrlon, llcrnrlat=llcrnrlat, urcrnrlon=urcrnrlon, urcrnrlat=urcrnrlat, ax=ax_main) try: m.drawcoastlines(linewidth=0.5) except: pass m.drawcountries() m.drawmapboundary() lon_range = urcrnrlon - llcrnrlon lat_range = urcrnrlat - llcrnrlat lat_interval = round(0.2 * lat_range, 2) lon_interval = round(0.2 * lon_range, 2) parallels = np.arange(-90, 90, lat_interval) meridians = np.arange(-180, 180, lon_interval) m.drawparallels(parallels, labels=[0, 1, 0, 0], fontsize=8, dashes=[1, 5]) m.drawmeridians(meridians, labels=[0, 0, 1, 0], fontsize=8, dashes=[1, 5]) m.fillcontinents(color='#F0F0F0', lake_color='#F0F0F0') x1, y1 = m(LON_1, LAT_1) x2, y2 = m(LON_2, LAT_2) ax_main.scatter(x1, y1, c='steelblue', alpha=.8, s=8, label='Data 1') ax_main.scatter(x2, y2, c='red', alpha=.8, s=8, label='Data 2') ax_main.set_xlabel('Longitude', fontsize=12, labelpad=10) ax_main.set_ylabel('Latitude', fontsize=12, labelpad=10) ax_main.legend(loc='upper right') # Histogram of longitude ax_lon.hist(LON_1, bins=20, histtype='step', density=True, color='steelblue', align='mid', label='Data 1') ax_lon.hist(LON_2, bins=20, histtype='step', density=True, color='red', align='mid', label='Data2') ax_lon.set_xticks([]) ax_lon.set_yticks([]) #ax_lon.legend(loc='upper right') # Histogram of latitude ax_lat.hist(LAT_1, bins=20, histtype='step', density=True, color='steelblue', align='mid', orientation='horizontal', label='Data 1') ax_lat.hist(LAT_2, bins=20, histtype='step', density=True, color='red', align='mid', orientation='horizontal', label='Data 2') # ax_lat.tick_params(axis='y', rotation=-90) ax_lat.set_yticks([]) ax_lat.set_xticks([]) #ax_lat.legend(loc='upper right') plt.tight_layout() plt.show() 
</code></pre> <p><a href="https://i.sstatic.net/mniZy.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/mniZy.png" alt="enter image description here" /></a></p> <p>I cannot adjust the histograms' size with sharex/sharey option otherwise they are not properly displayed (do not know why, but I think it has to do with Basemap). Is there a way to adjust the top histogram size such that its x-axis (y-axis) is the same as the x-axis (y-axis) of the map (right histo)?</p>
<python><matplotlib><plot><histogram><matplotlib-basemap>
2024-02-20 14:14:00
1
461
Roland
78,028,014
11,659,631
pick a specific contour level in seaborn/matplotlib
<p>I have a dataset with x and y variables. I plotted them in a regular plot:<a href="https://i.sstatic.net/wDtCv.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/wDtCv.png" alt="enter image description here" /></a></p> <p>Now I want to plot probability-density contour levels, e.g. a contour such that 50% of the values fall inside it. I tried seaborn with the following code:</p> <pre><code>import seaborn as sns import pandas as pd import numpy as np import matplotlib.pyplot as plt # define my x and y axes np.random.seed(0) x = np.random.randn(100) y = np.random.randn(100) # Create a joint plot with scatter plot and KDE contour lines sns.jointplot(x = x, y = y, kind = &quot;scatter&quot;, color = 'b') sns.kdeplot(x = x, y = y, color = &quot;r&quot;, levels = 5) plt.ylim(0, 17.5) plt.xlim(0, 20) # Show the plot plt.show() </code></pre> <p>and the result is: <a href="https://i.sstatic.net/bpuy2.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/bpuy2.png" alt="enter image description here" /></a></p> <p>But I would like to choose the contour level values myself. I searched a long time for a solution but didn't really find one… Is there a simple way of doing this?</p>
<python><matplotlib><seaborn><contour>
2024-02-20 14:10:45
1
338
Apinorr
78,027,998
15,313,661
Reading a .shp and .shx file from an Azure Data Lake/Blob Container
<p>I am using our Azure Data Factory to load a ZIP from a public api. I then unpack that ZIP using a copy activity, resulting in a bunch of .shp/.shx files.</p> <p>From a Python Script, I then want to use the geopandas package to read the data into a variable. To achieve that I use the following packages:</p> <p>azure-datalake-store</p> <p>azure-storage-blob</p> <pre class="lang-py prettyprint-override"><code>import os from azure.storage.blob import BlobServiceClient, BlobClient, ContainerClient storage_conection_string = &quot;myconnectionstring&quot; blob_service_client = BlobServiceClient.from_connection_string(storage_conection_string) local_path = r&quot;mypath&quot; local_file_name = &quot;file.shp&quot; download_file_path = os.path.join(local_path, local_file_name) container_client = blob_service_client.get_container_client(container= &quot;mycontainer&quot;) with open(file=download_file_path, mode=&quot;wb&quot;) as download_file: download_file.write(container_client.download_blob(&quot;fileonblobstorage&quot;).readall()) </code></pre> <p>This correctly downloads the named file to my local storage. However, I would prefer to directly load it into the geodataset, rather than saving it locally, and then reading it in again.</p> <p>This works using a .csv file, but the .shp file returns an engine error. Since it is stored as a binary file, I assume it is an encoding issue. But I can't seem to figure out a way to solve it. Ultimately, I'd like to get to this:</p> <pre class="lang-py prettyprint-override"><code>import geopandas as gpd gdf = gpd.read_file(container_client) </code></pre> <p>The above returns a plain error, same one I got with the .csv file. However, wrapping into BytesIO solved the issue for the .csv file into a pandas df, but returns an engine error for the .shp file in a geodataframe.</p> <p>Lastly, to load properly, the .shx file accompanying the .shp file also has to be loaded. 
This is usually done automatically by the geopandas package when the files are in the same folder (which is the case in the blob container as well). However, the second file would probably have to be parsed as well.</p> <p>EDIT: We don't have DataBrick or a Spark Engine on our infrastructure. The Python Script runs on a local data warehousing software on a virtual machine.</p>
<python><encoding><azure-blob-storage><geopandas><shapefile>
2024-02-20 14:08:38
1
536
James
78,027,996
6,760,756
Prevent windows from popping up when an exe is run from Python
<p>I’m using Python to batch-run a scientific application (DNDC), compiled for Windows, via the <code>subprocess.run()</code> function:</p> <pre class="lang-py prettyprint-override"><code>import subprocess subprocess.run(r'root_path_to_dndc/exe/console/DNDC95.exe -s path_to_input.txt -output path_to_output -root root_path_to_dndc') </code></pre> <p>DNDC occasionally pops up a window with an error message that blocks the Python code from continuing until I manually click the “OK” button. Is there any method to catch the error message and then kill the process? This matters because I need to run the code above a very large number of times (&gt; 100k).</p> <p>I have tried a <code>try/except</code> block in my code, but the problem is that Python detects no error, since the DNDC .exe is launched successfully. I therefore need to figure out how to catch the error-message windows that DNDC fires.</p> <p>Details:</p> <ul> <li>The batch script is written with Python 3.12.</li> <li>The batch script is run on a remote Unix server with the &quot;wine&quot; app.</li> <li>I have no admin rights on the server (cannot run Disable-WindowsErrorReporting)</li> <li>DNDC is a scientific application (for carbon cycle simulations), available from github (<a href="https://github.com/BrianBGrant/DNDCv.CAN/blob/master/DNDC_Jan2024.zip" rel="nofollow noreferrer">https://github.com/BrianBGrant/DNDCv.CAN/blob/master/DNDC_Jan2024.zip</a>)</li> </ul> <p>Thanks for your help.</p>
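One pragmatic mitigation, independent of catching the dialog itself, is a watchdog timeout: if a run blocks on a popup, kill it and move on to the next one. `subprocess.run` supports this directly. The hanging child below is a stand-in for a stuck DNDC run; in practice the timeout would be set generously above a normal simulation's runtime:

```python
import subprocess
import sys

# stand-in for the DNDC command line: a child process that never returns
cmd = [sys.executable, "-c", "import time; time.sleep(3600)"]

try:
    subprocess.run(cmd, timeout=1, check=True)   # watchdog timeout in seconds
    status = "ok"
except subprocess.TimeoutExpired:
    status = "timed out"   # the child has been killed; log the input and continue

print(status)  # timed out
```

A run that hits the timeout can be recorded (e.g. the input file name) so the failing cases are easy to inspect afterwards.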
<python><subprocess><exe>
2024-02-20 14:08:10
1
718
Rami
78,027,881
6,803,114
Efficient way to replace parts of words in a document where the replacement words are defined in an Excel file
<p>I have a huge .txt file, which I have read using <code>readlines()</code> in Python.</p> <p>Sample:</p> <p><code>lines = [&quot;XX_A_Name, A_Bad_Joke = '67' , \n&quot;, &quot;A quick Black.Jack fox JJ_Value over XX_A_Name a.lazy.dog\n&quot;]</code></p> <p>Replacements are defined in an Excel file:</p> <pre><code>OldName New_replacement XX_A_ ZZ_B_ A_Bad_Joke C_Good_Joke Black.Jack Yellow.flower JJ_Value KK_Sum a.lazy.dog very.huge </code></pre> <p>I want to read the Excel file and replace, in the txt file, all occurrences of the values listed in the OldName column.</p> <p>For example, the sample output should look like:</p> <p><code>[&quot;ZZ_B_Name, C_Good_Joke = '67' , \n&quot;, &quot;A quick Yellow.flower fox KK_Sum over ZZ_B_Name very.huge\n&quot;]</code></p> <p>I want to create a function for this, because this is just a sample portion of the txt file I've pasted here; there are multiple .txt files which have to be loaded, replaced, and saved as new .txt files.</p> <p>I am seeking help for an efficient and quick way to do this.</p> <p>Reading the Excel file can be done using <code>pd.read_excel()</code> and the <code>re</code> package can be used for the replacement, but I am not able to understand how everything fits together.</p>
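A common pattern for many substitutions in one pass is to build a single alternation regex from the mapping and replace via a callback. Below, the mapping is hard-coded where in reality it would come from `pd.read_excel(...)`, e.g. `dict(zip(df['OldName'], df['New_replacement']))` (column names taken from the question):

```python
import re

# in reality: df = pd.read_excel("replacements.xlsx"); replacements = dict(zip(...))
replacements = {
    "XX_A_": "ZZ_B_",
    "A_Bad_Joke": "C_Good_Joke",
    "Black.Jack": "Yellow.flower",
    "JJ_Value": "KK_Sum",
    "a.lazy.dog": "very.huge",
}

# longest keys first so longer matches win over shorter overlapping ones;
# re.escape stops the dots in "Black.Jack" from acting as regex wildcards
pattern = re.compile("|".join(
    re.escape(old) for old in sorted(replacements, key=len, reverse=True)))

def replace_all(line):
    # one regex pass per line instead of one str.replace() pass per mapping entry
    return pattern.sub(lambda m: replacements[m.group(0)], line)

lines = ["XX_A_Name, A_Bad_Joke = '67' , \n",
         "A quick Black.Jack fox JJ_Value over XX_A_Name a.lazy.dog\n"]
new_lines = [replace_all(line) for line in lines]
print(new_lines)
```

For many large files, iterating the open file object line by line (instead of `readlines()`, which loads everything into memory) keeps memory flat, and since each file is independent the per-file work parallelizes with `multiprocessing` if needed.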
<python><python-3.x><replace>
2024-02-20 13:51:16
3
7,676
Shubham R
78,027,676
2,409,868
Mock fails to block expensive resources
<p>In the following minimal code samples I show what should be two identical classes. Both versions work correctly in the original source from which this extract has been drawn.</p> <p>The problem is a normally valid tkinter error message that appears to be wrongly generated. Please note there are many SO questions about this all of which relate to the <strong>correct</strong> production of the error message.</p> <p>One of the classes <code>Factory</code> always fails on test. The other <code>PostInit</code> has two tests; one works and the other fails. The tests were written for the pytest framework.</p> <p>The failure is the tkinter error:</p> <blockquote> <p>RuntimeError: Too early to create variable: no default root window</p> </blockquote> <p>The passing test <code>test_post_init</code> mocks and blocks the whole of <code>tkinter</code>. When the mock is commented out in <code>test_tk_unmocked</code> the tkinter error is produced. I assume this demonstrates the correct production of the error message.</p> <p>In the always failing <code>test_factory</code> tkinter is again completely mocked but still manages to produce an error message from deep inside Tk/Tcl.</p> <p>How is <code>test_factory</code> managing to bypass the mocking of <code>tkinter</code> in the line <code>monkeypatch.setattr(patterns_so, &quot;tk&quot;, MagicMock())</code>?</p> <pre><code>&quot;&quot;&quot;patterns_so.py&quot;&quot;&quot; from dataclasses import dataclass, field import tkinter as tk @dataclass class PostInit: # Passes if tkinter is mocked. textvariable: tk.StringVar = None def __post_init__(self): self.textvariable = tk.StringVar() @dataclass class Factory: # Always fails. _textvariable: tk.StringVar = field( default_factory=tk.StringVar, init=False, repr=False ) </code></pre> <pre><code>&quot;&quot;&quot;test_patterns_so.py&quot;&quot;&quot; from unittest.mock import MagicMock import patterns_so class TestFacade: def test_post_init(self, monkeypatch): # Passes. 
monkeypatch.setattr(patterns_so, &quot;tk&quot;, MagicMock()) patterns_so.PostInit() def test_tk_unmocked(self, monkeypatch): # Expected fail. # monkeypatch.setattr(patterns_so, &quot;tk&quot;, MagicMock()) patterns_so.PostInit() def test_factory(self, monkeypatch): # Unexpected fail. monkeypatch.setattr(patterns_so, &quot;tk&quot;, MagicMock()) patterns_so.Factory() </code></pre> <h3>Postscript</h3> <p>The fundamental problem appears to be premature capture of the factory by the mechanics of the dataclasses module.</p> <p>Very near the bottom of the docs† on <code>dataclasses</code> there is a note which says <code>init=False</code> will be ignored when a default_factory is present and so it <strong>will</strong> be included in the generated <code>__init__</code>. I confirmed this with further testing.</p> <p>Naïve me thought the mock would intercept the factory call. In fact the generated <code>__init__</code> does run at instantiation (and then calls <code>__post_init__</code>), but <code>field(default_factory=tk.StringVar, ...)</code> captures a direct reference to <code>tk.StringVar</code> when the class is created (that is, at import time), <em><strong>before</strong></em> the mock is installed. Instantiation then calls that already-captured factory, bypassing the patched module attribute.</p> <p>In cases where the mock is intended to prevent an expensive or persistent resource from being started it will fail: the resource will be started.</p> <p>Any method which forcibly delays the lookup of mocked attributes will work. Either Riccardo Bucco's lambda or my example using <code>__post_init__</code> ensures success.</p> <p>Thanks to <a href="https://stackoverflow.com/users/5296106/riccardo-bucco">Riccardo Bucco</a> for guiding me to this deeper understanding of dataclasses.</p> <p>†bottom of the docs: aka ‘The small print’.</p>
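A standard-library-only experiment (added here for illustration) separates the two moments involved: `field(default_factory=...)` captures a reference to the factory when the class body executes (i.e. at import time), while the factory is only *called* at instantiation. Rebinding the name afterwards, which is essentially what the mock does, cannot intercept the captured reference:

```python
from dataclasses import dataclass, field

calls = []

def real_factory():
    calls.append("real")
    return "real"

@dataclass
class Demo:
    x: str = field(default_factory=real_factory)   # reference captured here

assert calls == []           # nothing called yet: not at class creation...

def fake_factory():
    return "fake"

real_factory = fake_factory  # "mock" the module-level name after the fact

d = Demo()                   # ...the factory runs now, at instantiation,
assert d.x == "real"         # but it is the originally captured one
assert calls == ["real"]
print(d.x)  # real
```

This is why the `__post_init__` variant is mockable: it looks up `tk` through the module's globals at call time, after the patch is in place.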
<python><mocking><python-dataclasses>
2024-02-20 13:21:24
1
1,380
lemi57ssss
78,027,669
5,368,083
Download multiple files concurrently
<p>I'm using fsspec to interact with remote filesystems, in my case its GCS, but I believe the solution would be general.</p> <p>For a single file, I'm using the following code (if you need the helper function code, it's <a href="https://gist.github.com/eliorc/4edcd45cd20a513aea7682e5142a0824" rel="nofollow noreferrer">here</a>)</p> <pre class="lang-py prettyprint-override"><code>def open_any_file(filepath: str, mode: str = &quot;r&quot;, **kwargs) -&gt; t.Generator[t.IO, None, None]: &quot;&quot;&quot; Open file and close it after use. Works for local, remote, http, https, s3, gcs, etc. :param filepath: Filepath. :param mode: Mode. :param kwargs: Keyword arguments. :return: File object. &quot;&quot;&quot; protocol, path = get_protocol_and_path(filepath) filepath = PurePosixPath(path) filesystem = fsspec.filesystem(protocol) load_path = get_filepath_str(filepath, protocol) # Figure out content type if &quot;content_type&quot; not in kwargs and filepath.suffix == &quot;.json&quot;: kwargs[&quot;content_type&quot;] = &quot;application/json&quot; with filesystem.open(load_path, mode=mode, **kwargs) as f: yield f </code></pre> <p>Assuming I have a thousand JSONs to download, what would be the most efficient way to do so? Should I go for parallelization? threading? Async?</p> <p>What would be the optimal choice in terms of execution-time, and what would be the implementation for it?</p>
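For a large batch of small downloads the work is I/O-bound, so a thread pool is usually the simplest efficient choice. A minimal sketch, with a placeholder standing in for the real `open_any_file` helper:

```python
from concurrent.futures import ThreadPoolExecutor

def download_one(path: str) -> str:
    # Placeholder for the real fsspec read, e.g.:
    #   with open_any_file(path) as f: return f.read()
    return f"contents of {path}"

def download_many(paths, max_workers=32):
    # Threads sidestep process start-up and pickling costs; for pure
    # I/O waiting they scale well despite the GIL.
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        return list(pool.map(download_one, paths))

paths = [f"gs://bucket/file_{i}.json" for i in range(1000)]
results = download_many(paths[:5])
```

fsspec's own batched reads (e.g. `fs.cat([...])`, which on async-capable backends such as GCS fetches many paths concurrently) may be faster still; worth benchmarking both against the real bucket.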
<python><fsspec>
2024-02-20 13:20:10
1
12,767
bluesummers
78,027,629
6,783,015
Can jax.vmap() do a hstack()?
<p>As the title says, I currently manually <code>hstack()</code> the first axis of a 3D array returned by <code>jax.vmap()</code>. In my code, the copy operation in <code>hstack()</code> is a currently a speed bottleneck. Can I avoid this by instructing <code>jax.vmap()</code> to do this directly?</p> <p>Here is a simplified example:</p> <pre class="lang-py prettyprint-override"><code>import jax import jax.numpy as jnp def f(a, b, c): return jnp.array([[a.sum(), b.sum()], [c.sum(), 0.]]) # Returns a 2x2 array def arr(m, n): return jnp.arange(m*n).reshape((m, n)) m = 3 a = arr(m, 2) b = arr(m, 5) c = arr(m, 7) fv = jax.vmap(f) vmap_output = fv(a, b, c) desired_output = jnp.hstack(fv(a, b, c)) print(vmap_output) print(desired_output) </code></pre> <p>This yields:</p> <pre class="lang-py prettyprint-override"><code># vmap() output [[[ 1. 10.] [ 21. 0.]] [[ 5. 35.] [ 70. 0.]] [[ 9. 60.] [119. 0.]]] # Desired output [[ 1. 10. 5. 35. 9. 60.] [ 21. 0. 70. 0. 119. 0.]] </code></pre> <p>If this is not possible, I would resort to pre-allocating an array and simply writing to the columns manually, but I hope to avoid this. Thanks for any clue!</p> <hr /> <p><strong>Update from @jakevdp's answer</strong></p> <p>Alright, it isn't possible. So I resort to writing to the columns, but this fails as well:</p> <pre class="lang-py prettyprint-override"><code>def g(output, idx, a, b, c): block = jnp.array([[a.sum(), b.sum()], [c.sum(), 0.]]) # Returns a 2x2 array jax.lax.dynamic_update_slice_in_dim(output, block, idx*2, axis=1) # Defined above: jax, jnp, m, a, b, c g_output = jnp.zeros((2, 2*m)) idxs = jnp.arange(m) gv = jax.vmap(g, in_axes=(None, 0, 0, 0, 0)) gv(g_output, idxs, a, b, c) print(g_output) </code></pre> <p>This yields:</p> <pre class="lang-py prettyprint-override"><code>[[0. 0. 0. 0. 0. 0.] [0. 0. 0. 0. 0. 0.]] </code></pre> <p>So writing to <code>g_output</code> in the function <code>g</code> is not retained. Is there a way around this?</p>
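One possible workaround for the copy concern: `hstack` of the stacked `vmap` output is equivalent to a transpose plus reshape, which JAX under `jit` can often fuse rather than materialising an intermediate copy. The equivalence, shown here with NumPy since the array semantics are identical:

```python
import numpy as np

m = 3
vmap_like = np.arange(m * 4, dtype=float).reshape(m, 2, 2)  # (m, 2, 2) as vmap returns

hstacked = np.hstack(list(vmap_like))                # the manual version
fused = vmap_like.transpose(1, 0, 2).reshape(2, -1)  # same result, one op chain

assert np.array_equal(hstacked, fused)
```

On the update: JAX arrays are immutable, so `jax.lax.dynamic_update_slice_in_dim` returns a *new* array rather than mutating `output`; its return value must be captured and returned, which is why the zeros survive unchanged.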
<python><arrays><jax><google-jax>
2024-02-20 13:15:23
1
1,172
marnix
78,027,613
14,114,654
Create a column based on similarity to other rows
<p>I have a df with guests, sorted by family. The number of members in a family is not fixed. Two different families may have the same familyhead name (e.g. &quot;abba&quot;).</p> <pre><code> family fam_head name meeting1 meeting2 meeting3 meeting4 0 1 True abba 1 1 1 1 1 1 ben 1 1 1 1 2 1 berry 1 3 2 jack 1 1 4 2 True joe 1 1 5 3 razia 1 1 6 3 True riri 1 7 4 True abba ... ... </code></pre> <p>For each family head, how could I create a column in normal English (with use of commas + 'and' before the last person's name). It needs to list the events only when members within a family are invited to different things:</p> <pre><code>1 abba Hi abba, delighted for you, ben and berry to come. meeting1, meeting2 and meeting 3: abba and ben (events with same members grouped) meeting4: abba, ben and berry 2 joe Hi joe, delighted for you and jack to come. (no need to specify events since all members invited to same meetings) 3 riri Hi riri, delighted for you and razia to come. meeting1 and meeting 4: razia meeting2: riri 4 abba Hi abba, ... </code></pre>
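The core of the problem separates into two plain-Python helpers, independent of pandas: grouping meetings by identical attendee sets, and joining names with commas plus a final "and". A hypothetical sketch (the pandas part would first build the per-meeting attendee sets, e.g. with a groupby per family):

```python
from collections import defaultdict

def natural_join(names):
    # "abba" / "abba and ben" / "abba, ben and berry"
    names = list(names)
    if len(names) <= 1:
        return "".join(names)
    return ", ".join(names[:-1]) + " and " + names[-1]

def group_meetings(invites):
    # invites: {meeting: attendee set}; meetings with exactly the same
    # attendees collapse into one line of the message.
    groups = defaultdict(list)
    for meeting, names in invites.items():
        groups[frozenset(names)].append(meeting)
    return groups

invites = {
    "meeting1": {"abba", "ben"},
    "meeting2": {"abba", "ben"},
    "meeting3": {"abba", "ben"},
    "meeting4": {"abba", "ben", "berry"},
}
for members, meetings in group_meetings(invites).items():
    print(f"{natural_join(sorted(meetings))}: {natural_join(sorted(members))}")
```

When all meetings fall into a single group, the per-event lines can simply be omitted, matching the "joe" case above.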
<python><pandas><group-by>
2024-02-20 13:12:52
1
1,309
asd
78,027,549
5,553,963
How to reconnect to RabbitMQ after the RMQ goes down
<p>I have a python code that connects to RMQ and consumes messages. I just noticed that when the RMQ goes down my code just hangs at the close connection (inside aioamqp library) and stuck there and the only solution is to restart my code.</p> <p>I added different exceptions but non solved my problem.</p> <p>My code:</p> <pre class="lang-py prettyprint-override"><code>def main(*args, **kwargs): log.info('main args: %s', kwargs) try: close_event=asyncio.Event() event_loop = asyncio.get_event_loop() event_loop.create_task(sim_server.run()) #launch rpc server and wait till finish event_loop.run_until_complete(rpc_server(**kwargs, on_close=close_event)) log.info(&quot;after event_loop.run_until_complete&quot;) except ProcessorException as e: log.error('Processor exception: %s', e) except Exception as e: log.exception('Unexpected exception: %s', e) except KeyboardInterrupt: pass finally: log.info(&quot;inside finally&quot;) #sent event to stop rpc server close_event.set() #block till all tasks are done event_loop.run_forever() event_loop.close() async def rpc_server(url: str, exchange:str , queue_name: str, service_id: str, on_close: asyncio.Event): log.info('amqp: %s, exchange: %s, queue_name: %s, service_id: %s', url, exchange, queue_name, service_id) #aioamqp expects the virtualhost with the /. 
This is not accourding to the AMQP spec if '%2f' in url: url=url.replace('%2f', '/') o=urlparse(url) backoff=1.0 max_attempts=5 retry_interval=1 while True: #reconnection loop try: transport, protocol = await aioamqp.connect(host=o.hostname, port=o.port, login=o.username, password=o.password, virtualhost=o.path[1:]) channel = await protocol.channel() await channel.queue_declare(queue_name=queue_name) #bind to shared queue await channel.queue_bind(queue_name=queue_name, exchange_name=exchange, routing_key=queue_name) #bind to unique queue await channel.queue_bind(queue_name=queue_name, exchange_name=exchange, routing_key=service_id) await channel.basic_consume(on_request, queue_name=queue_name) log.info('Awaiting RPC requests') #wait till on_close event await on_close.wait() log.info('close amqp connection') await protocol.close() transport.close() except (OSError, aioamqp.exceptions.AmqpClosedConnection) as e: #AMQP server not running or starting up log.error('AMQP connection error: %s', e) if not max_attempts: raise ProcessorException('AMPQ error: max attempt reached: %s', e) log.info('retry in %s seconds', backoff) await asyncio.sleep(backoff) max_attempts-=1 backoff*=2 except ConnectionResetError as e: log.error('Connection reset by broker: %s', e) log.info('Retrying indefinitely until the broker is up...') await asyncio.sleep(retry_interval) # Adjust the retry interval as needed except Exception as e: log.error('Unexpected exception: %s', e) import traceback traceback.print_exc() else: log.info(&quot;inside break&quot;) break log.info(&quot;after while&quot;) </code></pre> <p>Log:</p> <pre><code>2024-02-20 12:27:39,835 ~~INFO ~~__main__ ~~Awaiting RPC requests ~~[processor.py:376] 2024-02-20 12:27:50,768 ~~WARNING ~~aioamqp.protocol ~~Connection lost exc=ConnectionResetError(104, 'Connection reset by peer') ~~[protocol.py:115] 2024-02-20 12:27:50,769 ~~INFO ~~aioamqp.protocol ~~Close connection ~~[protocol.py:314] </code></pre> <p>Thats it, that is all the logs 
that I get after stopping/killing the RMQ. The aioamqp library is getting stuck and I don't know how to restart the connection.</p>
<python><rabbitmq><amqp>
2024-02-20 13:02:35
0
5,315
AVarf
78,027,325
17,575,465
Stable baselines 3 throws ValueError when episode is truncated
<p>So I'm trying to train an agent on my custom <code>gymnasium</code> environment trough <code>stablebaselines3</code> and it kept crashing seemingly random and throwing the following <code>ValueError</code>:</p> <pre><code>Traceback (most recent call last): File &quot;C:\Users\bo112\PycharmProjects\ecocharge\code\Simulation Env\prototype_visu.py&quot;, line 684, in &lt;module&gt; model.learn(total_timesteps=time_steps, tb_log_name=log_name) File &quot;C:\Users\bo112\PycharmProjects\ecocharge\venv\lib\site-packages\stable_baselines3\ppo\ppo.py&quot;, line 315, in learn return super().learn( File &quot;C:\Users\bo112\PycharmProjects\ecocharge\venv\lib\site-packages\stable_baselines3\common\on_policy_algorithm.py&quot;, line 277, in learn continue_training = self.collect_rollouts(self.env, callback, self.rollout_buffer, n_rollout_steps=self.n_steps) File &quot;C:\Users\bo112\PycharmProjects\ecocharge\venv\lib\site-packages\stable_baselines3\common\on_policy_algorithm.py&quot;, line 218, in collect_rollouts terminal_obs = self.policy.obs_to_tensor(infos[idx][&quot;terminal_observation&quot;])[0] File &quot;C:\Users\bo112\PycharmProjects\ecocharge\venv\lib\site-packages\stable_baselines3\common\policies.py&quot;, line 256, in obs_to_tensor vectorized_env = vectorized_env or is_vectorized_observation(obs_, obs_space) File &quot;C:\Users\bo112\PycharmProjects\ecocharge\venv\lib\site-packages\stable_baselines3\common\utils.py&quot;, line 399, in is_vectorized_observation return is_vec_obs_func(observation, observation_space) # type: ignore[operator] File &quot;C:\Users\bo112\PycharmProjects\ecocharge\venv\lib\site-packages\stable_baselines3\common\utils.py&quot;, line 266, in is_vectorized_box_observation raise ValueError( ValueError: Error: Unexpected observation shape () for Box environment, please use (1,) or (n_env, 1) for the observation shape. 
</code></pre> <p>I don't know why the observation shape/content would change though, since it doesn't change how the state gets its values at all.</p> <p>I figured out that it crashes, whenever the agent 'survives' a whole episode for the first time and truncation gets used instead of termination. Is there some kind of weird quirk for returning <code>truncated</code> and <code>terminated</code> that I don't know about? Because I can't find the error in my step function.</p> <pre><code> def step(self, action): ... # handling the action etc. reward = 0 truncated = False terminated = False # Check if time is over/score too low - else reward function if self.n_step &gt;= self.max_steps: truncated = True print('truncated') elif self.score &lt; -1000: terminated = True # print('terminated') else: reward = self.reward_fnc_distance() self.score += reward self.d_score.append(self.score) self.n_step += 1 # state: [current power, peak power, fridge 1 temp, fridge 2 temp, [...] , fridge n temp] self.state['current_power'] = self.d_power_sum[-1] self.state['peak_power'] = self.peak_power for i in range(self.n_fridges): self.state[f'fridge{i}_temp'] = self.d_fridges_temp[i][-1] self.state[f'fridge{i}_on'] = self.fridges[i].on if self.logging: print(f'score: {self.score}') if (truncated or terminated) and self.logging: self.save_run() return self.state, reward, terminated, truncated, {} </code></pre> <p>This is the general setup for training my models:</p> <pre><code>hidden_layer = [64, 64, 32] time_steps = 1000_000 learning_rate = 0.003 log_name = f'PPO_{int(time_steps/1000)}k_lr{str(learning_rate).replace(&quot;.&quot;, &quot;_&quot;)}' vec_env = make_vec_env(env_id=ChargeEnv, n_envs=4) model = PPO('MultiInputPolicy', vec_env, verbose=1, tensorboard_log='tensorboard_logs/', policy_kwargs={'net_arch': hidden_layer, 'activation_fn': th.nn.ReLU}, learning_rate=learning_rate, device=th.device(&quot;cuda&quot; if th.cuda.is_available() else &quot;cpu&quot;), batch_size=128) 
model.learn(total_timesteps=time_steps, tb_log_name=log_name) model.save(f'models/{log_name}') vec_env.close() </code></pre> <p>As mentioned above, episodes <strong>only</strong> get <code>truncated</code> when it also throws the <code>ValueError</code> and vice versa, so I'm pretty sure it has to be that.</p> <hr /> <p><strong>EDIT:</strong></p> <p>From the answer below, I found the problem was to simply put all my float/Box values of <code>self.state</code> into numpy arrays before returning them like following:</p> <pre><code>self.state['current_power'] = np.array([self.d_power_sum[-1]], dtype='float32') self.state['peak_power'] = np.array([self.peak_power], dtype='float32') for i in range(self.n_fridges): self.state[f'fridge{i}_temp'] = np.array([self.d_fridges_temp[i][-1]], dtype='float32') self.state[f'fridge{i}_on'] = self.fridges[i].on </code></pre> <p>(Note: the dtype specification is not necessary in itself, it's just important for using the <code>SubprocVecEnv</code> from <code>stable_baselines3</code>)</p>
<python><reinforcement-learning><openai-gym><stable-baselines>
2024-02-20 12:29:06
1
369
maxxel_
78,027,324
4,858,640
pydantic: JSON dictionary type?
<p>I want to use <code>pydantic</code> to validate that some incoming data is a valid JSON dictionary. There is already the predefined <code>pydantic.Json</code> type but this seems to be only for validating Json strings. Then of course I could use <code>Dict[str, Any]</code> but that allows values that are not valid in JSON. What I really want is the following:</p> <pre class="lang-py prettyprint-override"><code>from typing import ( Dict, List, Union, ) from pydantic import ( BaseModel, ) Json = Union[None, str, int, bool, List['Json'], Dict[str, 'Json']] class MyModel(BaseModel): field: Json </code></pre> <p>But that fails with a recursion error. How can I achieve this?</p>
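Pydantic aside, the recursion the model needs to express can be written as a plain validator, which also documents what counts as a valid JSON value (note that `float` belongs in the scalar set alongside `int`):

```python
def is_json_value(value) -> bool:
    # JSON scalars: null, booleans, numbers, strings.
    if value is None or isinstance(value, (bool, int, float, str)):
        return True
    if isinstance(value, list):
        return all(is_json_value(v) for v in value)
    if isinstance(value, dict):
        return all(isinstance(k, str) and is_json_value(v)
                   for k, v in value.items())
    return False

assert is_json_value({"a": [1, 2.5, None, {"b": True}]})
assert not is_json_value({1: "non-string key"})
assert not is_json_value({"a": object()})
```

Recent pydantic versions reportedly can express the recursive alias directly (e.g. via `TypeAliasType` in v2), but that depends on the installed version; the standalone check above works regardless.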
<python><pydantic>
2024-02-20 12:29:05
1
3,242
Peter
78,027,259
1,219,317
Can ARIMA take input sequence in Python?
<p>In Python, when you train an LSTM model you can train it on part of the data. Then, at inference, you can give it whatever input you like (for example, 10 recent timesteps that are not part of the training set) and it will produce an output. Can ARIMA operate in the same way? Can we give it an input sequence, or does it only use the training data to predict the next steps?</p> <p>Below is my code:</p> <pre><code>import pandas as pd from statsmodels.tsa.arima.model import ARIMA import torch import sys import math # Read the dataset from CSV file df = pd.read_csv('sm_data.csv', header=None) # Take the first 102 rows as training data and the rest as test data train_data = df.iloc[:102, :] test_data = df.iloc[102:, :] # Iterate over each trend forecast_results = {} for column in df.columns: # Fit ARIMA model on training data model = ARIMA(train_data[column], order=(10,1,0)) model_fit = model.fit() # Forecast 36 months ahead forecast = model_fit.forecast(steps=36) # Store forecast results forecast_results[column] = forecast # Convert forecast results to DataFrame forecast_df = pd.DataFrame(forecast_results) # Save forecast results to CSV forecast_df.to_csv('forecast_results.csv', index=False) </code></pre>
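On the conceptual point: a fitted ARIMA is just a fixed set of coefficients plus a recursion over recent lags, so it *can* be applied to a fresh window; in statsmodels this is what `ARIMAResults.apply(new_endog)` is for (applying fitted parameters to new data without refitting). The mechanics, stripped to a bare AR(p) recursion with assumed, already-fitted coefficients:

```python
import numpy as np

def ar_forecast(recent, coefs, steps):
    """Roll a fitted AR(p) recursion forward from any recent window.

    recent: the last p (or more) observations; need not be training data.
    coefs:  fitted lag coefficients, newest lag first (assumed given).
    """
    history = list(recent)
    p = len(coefs)
    out = []
    for _ in range(steps):
        lags = history[-p:][::-1]       # newest observation first
        nxt = float(np.dot(coefs, lags))
        out.append(nxt)
        history.append(nxt)             # feed the forecast back in
    return out

# A persistence model (coefficient 1.0 on the last value) just repeats it:
print(ar_forecast([3.0, 7.0], coefs=[1.0], steps=3))  # [7.0, 7.0, 7.0]
```

So the answer is yes in principle; the `forecast()` call in the code above continues from the training sample, while `apply()`/`append()` on the fitted results object let you continue from new observations.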
<python><machine-learning><time-series><statsmodels><arima>
2024-02-20 12:16:51
2
2,281
Travelling Salesman
78,027,110
10,167,486
I have a WebSocket server. It works when connecting with Postman, but not in my React project
<p>I have a WebSocket server built with Python, deployed in an EC2 container. Connecting through Postman works perfectly, but when connecting via WebSocket in React, it always throws this error:</p> <p>WebSocket connection to 'wss://xxx:3500/' failed:</p> <p>with no further description.</p> <pre class="lang-js prettyprint-override"><code> useEffect(() =&gt; { const socket = new WebSocket( process.env.NODE_ENV === 'production' ? &quot;wss://xxxx:3500&quot; : &quot;ws://localhost:3500&quot;, ); setSocket(socket); socket.addEventListener(&quot;message&quot;, (event: MessageEvent&lt;string&gt;) =&gt; { handleWebSocketMessage(event, setFutures, setStocks, setBonds, setDollars) }); socket.addEventListener(&quot;error&quot;, (event) =&gt; { console.error(&quot;WebSocket error:&quot;, event); }); }, []); </code></pre> <p>The way I fixed this problem temporarily was disabling the SSL certificates in my Python WebSocket server and connecting through ws instead of wss. But this is no solution, as I need it to be secure. The thing is that when the server is secure, it only works through Postman.</p> <p>I've already tried debugging by connecting to a mock wss server from my React WebSocket connection and it worked perfectly, so I concluded that the client isn't the problem. What could be happening?</p>
<python><reactjs><next.js><websocket><server>
2024-02-20 11:51:01
2
365
Juan Pedro Pont Vergés
78,027,086
1,668,622
Is it possible to check for a given python file whether it's pyc file is valid and up to date?
<p>For a project which gets deployed to a RO filesystem I want to also deploy pre-compiled <code>.pyc</code> files for all of the Python source.</p> <p>Thanks to the <code>compileall</code> module this already works, but due to whatever reasons caused by a complex build machinery, those <code>.pyc</code> files <em>can</em> become invalid accidentally (because of unwanted file modifications, source relocation, etc)</p> <p>So what I'd like to do is to check if there's an already existing <code>pyc</code> file for each with would have been created, if it's being found and if it's valid (i.e. correct source path embedded, correct bytecode, etc).</p> <p>Is there an integrated way to check &quot;precompilation validity&quot; (like <code>python3 -m compileall --check</code>)? Or what would be a pragmatic way to do this by hand? (e.g. by inspecting some trace or whatever).</p> <p>The most reliable way I can imagine would be to compare a readily deployed tree to a copy where I deleted and re-created all files using <code>compileall</code>..</p>
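There is no built-in `compileall --check`, but for the default timestamp-based pycs the validity check the import system performs is small enough to reimplement by hand. A sketch (hash-based pycs from PEP 552 carry a source hash in place of the mtime and would need separate handling):

```python
import importlib.util
import os
import struct

def pyc_is_fresh(source_path: str) -> bool:
    """Replicate the import system's timestamp check for one source file."""
    pyc_path = importlib.util.cache_from_source(source_path)
    if not os.path.exists(pyc_path):
        return False
    with open(pyc_path, "rb") as f:
        magic, flags, mtime, size = struct.unpack("<4sLLL", f.read(16))
    if magic != importlib.util.MAGIC_NUMBER:
        return False  # compiled by a different interpreter version
    if flags & 0b11:
        return False  # hash-based pyc: different validation, not handled here
    st = os.stat(source_path)
    return (mtime == (int(st.st_mtime) & 0xFFFFFFFF)
            and size == (st.st_size & 0xFFFFFFFF))
```

Walking the deployed tree and calling this for every `.py` should flag any pyc that the read-only runtime would silently refuse to use; it does not re-verify the bytecode itself, only the header the importer checks.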
<python><python-3.x><bytecode><precompile><pyc>
2024-02-20 11:47:13
1
9,958
frans
78,027,044
11,167,163
Django IIS Deployment - HTTP Error 404.0 - Not Found
<p>I need explanation on how to deploy Django Application on Windows IIS Server.</p> <p>I am struggling to follow the following <a href="https://blog.nonstopio.com/deploy-django-application-on-windows-iis-server-93aee2864c41" rel="nofollow noreferrer">tutorial</a></p> <p>I have a project which looks like this :</p> <pre><code>[My_App] --&gt; [My_App_Venv] --&gt; [abcd] |-&gt; manage.py |-&gt; [abcd] |-&gt; settings.py |-&gt; [static] |-&gt; ... </code></pre> <blockquote> <p><strong>[</strong> example <strong>]</strong> is used to represent folders</p> </blockquote> <p>I Need to understand how I should setup my IIS website :</p> <h2>What should be FastCGI Application settings</h2> <p>Full Path :</p> <pre><code>C:\xxxxx\My_App\My_App_Venv\Scripts\python.exe </code></pre> <p>Arguments :</p> <pre><code>C:\xxxxx\My_App\My_App_Venv\Lib\site-packages\wfastcgi.py </code></pre> <p><strong>Environment Variables :</strong></p> <pre><code> 1. DJANGO_SETTINGS_MODULE : abcd.settings (I think this is ok) 2. PYTHONPATH : C:\xxxxx\My_App (I am not sure about this) 3. WSGI_HANDLER : abcd.wsgi.application (I think this is ok) </code></pre> <h2>Create and Configure a New IIS Web Site</h2> <p>what should be the physical Path ?</p> <pre><code>C:\xxxxx\My_App ? C:\xxxxx\My_App\abcd ? C:\xxxxx\My_App\abcd\abcd ? </code></pre> <h2>Configure the mapping module ## (which i assume is correct)</h2> <pre><code>Request path: * Module: FastCgiModule Executable : C:\xxxxx\My_App\My_App_Venv\Scripts\python.exe|C:\xxxxx\My_App\My_App_Venv\Lib\site-packages\wfastcgi.py Name : Django Handler </code></pre> <hr /> <p>output</p> <p><a href="https://i.sstatic.net/oCSew.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/oCSew.png" alt="enter image description here" /></a></p> <p>Hope someone can help to understand what I am doing wrong, I might use wrong path somewhere.</p> <p>Note : When I do By hand :</p> <pre><code>cd C:/xxxxx/My_App/My_App_Venv .\\scripts\Activate cd.. 
cd abcd python manage.py runserver </code></pre> <p>The website is well displayed on : <a href="http://127.0.0.1:8000/" rel="nofollow noreferrer">http://127.0.0.1:8000/</a></p>
<python><django><iis>
2024-02-20 11:41:51
1
4,464
TourEiffel
78,026,990
8,515,825
How can I uninstall Jupyter Notebook if I don't know the package manager?
<p>I am having troubles with running Jupyter Notebook and want to reinstall it.</p> <p>When I run <code>which jupyter</code>, I get <code>/opt/homebrew/bin/jupyter</code>.</p> <p>However, trying to run <code>brew uninstall jupyter</code> yields <code>Error: No such keg: /opt/homebrew/Cellar/jupyter</code>.</p> <p>Indeed, when I type <code>ls -l /opt/homebrew/bin/jupyter</code> I get <code>-rwxr-xr-x 1 my_username admin 247 Feb 20 11:34 /opt/homebrew/bin/jupyter</code>, showing that the directory above is an actual executable file and not a symlink.</p> <p>Trying to uninstall it with pip also doesn't work: Running <code>pip uninstall jupyter jupyterlab jupyter-notebook</code> yields</p> <pre><code>WARNING: Skipping jupyter as it is not installed. WARNING: Skipping jupyterlab as it is not installed. WARNING: Skipping jupyter-notebook as it is not installed. </code></pre> <p>and same for <code>pip3</code>.</p> <p>What is the safest way to uninstall jupyter in this case?</p> <hr /> <p>In case it matters, <code>sw_vers</code> gives</p> <pre><code>ProductName: macOS ProductVersion: 14.2.1 BuildVersion: 23C71 </code></pre>
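One way to narrow down which installation owns the launcher: console scripts generated by pip record their interpreter in the shebang line, so reading the first line of the file usually reveals the environment (and hence the right `pip`) to uninstall from. A small helper; the paths in the comment are illustrative, not verified:

```python
from pathlib import Path
from typing import Optional

def interpreter_of(script_path: str) -> Optional[str]:
    """Return the interpreter path from a launcher script's shebang, if any."""
    first_line = Path(script_path).read_text(errors="replace").splitlines()[0]
    return first_line[2:].strip() if first_line.startswith("#!") else None

# e.g. interpreter_of("/opt/homebrew/bin/jupyter") might return something like
# "/opt/homebrew/opt/python@3.12/bin/python3.12"; that interpreter's
# `-m pip uninstall jupyter` is then the one that can remove the package.
```

If the shebang points at a Homebrew-installed Python, the package likely arrived via that Python's pip, which would explain why `brew` has no `jupyter` keg while the plain `pip`/`pip3` on your PATH (a different interpreter) reports it as not installed.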
<python><jupyter-notebook><homebrew><uninstallation><package-managers>
2024-02-20 11:34:13
0
455
chickenNinja123
78,026,989
12,439,683
Narrow down type-hint of return value to child class if function only hints to parent class
<p>I have an external function that works like a factory and returns instances of different classes with a common parent type: e.g.</p> <pre class="lang-py prettyprint-override"><code>class Printer: def make(self, blueprint:str) -&gt; Blueprint: ... class PrintA(Blueprint): def a_specific(self) -&gt; int: ... class PrintB(Blueprint): def b_specific(self, args): &quot;&quot;&quot;docstring&quot;&quot;&quot; a : PrintA = Printer().make(&quot;A&quot;) b : PrintB = Printer().make(&quot;B&quot;) # TypeHint: &quot;(variable) a: Blueprint&quot; # TypeHint: &quot;(variable) b: Blueprint&quot; # Usage: result = a.a_specific() # function &amp; type of result not detected : Any b.b_specific(arg) # function not detected, parameters and description unknown : Any </code></pre> <p>The problem is that <code>a</code> and <code>b</code> have <code>Blueprint</code> as their given type hint. <strong>How can I force the correct type hint onto the variables</strong>, so that the child functions can be detected by my IDE?</p> <hr /> <p>I tried to use <code># type: ignore[override]</code> but that does not seem applicable for this purpose as its no error; is there by chance another magic comment command to indicate that the type checker should trust the human annotation?</p> <hr /> <p>Notes: I cannot adjust Printer or the Blueprints itself. Modify a matching <code>*.pyi</code> file is possible but should be minimal. The amount of Blueprints is large.</p> <p>Not sure if relevant but using VS-Code and Python Extension for type hinting.</p> <p>At least in my example PrintA and PrintB cannot be instantiated directly via Python and <code>Printer</code> has to be used to initialize them.</p>
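When the factory's return annotation can't be changed, the standard tools for asserting the narrower type at the call site are `typing.cast` (a no-op at runtime, trusted by the checker) or an `assert isinstance(...)`, which both narrows the type for the checker and verifies it at runtime. Sketched with stand-in classes:

```python
from typing import cast

class Blueprint: ...

class PrintA(Blueprint):
    def a_specific(self) -> int:
        return 42

def make(blueprint: str) -> Blueprint:  # stand-in for Printer.make
    return PrintA()

# Option 1: cast -- purely static, no runtime check.
a = cast(PrintA, make("A"))
result = a.a_specific()

# Option 2: isinstance assert -- checkers narrow the type after it,
# and it fails loudly if the factory ever returns something else.
b = make("A")
assert isinstance(b, PrintA)
result2 = b.a_specific()
```

Neither requires touching `Printer` or the blueprints, and both give the IDE full completion on the child's methods.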
<python><mypy><python-typing><pyright>
2024-02-20 11:34:09
3
5,101
Daraan
78,026,944
5,506,167
IronPython garbage collection - How does it provides compatibility with C-extensions?
<p>In this part of the talk on GIL by Larry Hastings, there is an explanation about how <a href="https://code.google.com/archive/p/ironclad/" rel="nofollow noreferrer">ironclad</a> provides C-extension compatibility with IronPython. This is the interesting <a href="https://youtu.be/4zeHStBowEk?list=LL&amp;t=1444" rel="nofollow noreferrer">part of the talk</a>:</p> <blockquote> <p>We implemented something called Ironclad which was an implementation of the Python C-API for IronPython and IronPython doesn't have a GIL and it doesn't use reference counting but we maintain binary compatibility with existing binaries.</p> <p>What you have with those existing binaries is you have C objects (effectively python objects implemented in C) that expect to use reference counting so we had a hybrid system. For objects that were returned from the C extension we artificially inflated their reference counting by one so that the <strong>macros</strong> compiled into the binaries would never be triggered by getting down to zero and then we had a <strong>proxy</strong> <strong>object</strong> that if garbage collection was entered for this object then we would decrement the reference count to zero.</p> <p>Technically we had a leak though because if you pass in references to python objects to the C extension, the C extension could keep references to those alive and essentially what Larry is saying is that if you move to something like Mark and sweep for the main python C interpreter, those C objects are opaque to the the garbage collector. It can't know what internal references you have.</p> </blockquote> <p>My questions:</p> <p>1- If GC is implemented in the interpreter itself (e.g. IronPython) what are the macros he is referring to? (That we should care about and increment ref count for the sake of it!)</p> <p>2- What is the role of the proxy object? It is a proxy for a python object implemented in a C extension? Why don't we decrement refcount directly on the original object?</p>
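On question 1: the macros are `Py_INCREF`/`Py_DECREF`, which are compiled inline into the extension binary and therefore cannot be patched at runtime; inflating the count is the only way to keep their "refcount reached zero, run dealloc" branch from ever firing. A toy model of the scheme as described in the talk (drastically simplified, my own naming):

```python
class CObjectModel:
    """Toy model of a C-implemented object under Ironclad's scheme."""

    def __init__(self):
        self.refcnt = 1 + 1      # one real reference + one artificial pin
        self.deallocated = False

    def decref(self):
        # What the Py_DECREF macro baked into the binary effectively does.
        self.refcnt -= 1
        if self.refcnt == 0:
            self.deallocated = True

obj = CObjectModel()
obj.decref()                 # the extension drops its last real reference
assert not obj.deallocated   # still pinned: the compiled macro never saw zero
obj.decref()                 # the proxy's GC hook releases the artificial pin
assert obj.deallocated       # now the managed GC decides the real lifetime
```

Which also suggests an answer to question 2: the proxy exists because the managed garbage collector, not the refcount, owns the object's lifetime; the final decrement has to happen when *that* GC collects the proxy, which the opaque C object cannot signal on its own.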
<python><garbage-collection><ironpython><python-c-api><reference-counting>
2024-02-20 11:25:59
2
1,962
Saleh
78,026,859
10,595,871
VScode can't find python unittest
<p>I've set up a test in VScode, if I normally run it in the powershell it works fine, but I'm struggling in finding how to properly set it as a test in vscode.</p> <p>I've already tried all of the thing I found so:</p> <ul> <li>addedd an <code>__init__.py</code> file in the test directory</li> <li>in the command palette tried the configure test -&gt; unittest -&gt; .root -&gt; test_, nothing happens</li> <li>uninstalled python extension, reinstalled it and doing the process again</li> <li>this is my settings.json:</li> </ul> <pre><code>{ &quot;python.testing.unittestArgs&quot;: [ &quot;-v&quot;, &quot;-s&quot;, &quot;./Tutto&quot;, &quot;-p&quot;, &quot;*test*.py&quot; ], &quot;python.testing.pytestEnabled&quot;: false, &quot;python.testing.unittestEnabled&quot;: true, &quot;python.testing.pytestArgs&quot;: [ &quot;.&quot; ] } </code></pre> <p>No errors, it just can't find tests.</p> <p>The project is in the Tutto folder, the scripts are in a folder named src and the test file in another folder inside src named test (where there is the test_blabla.py and the <strong>init</strong>.py)</p> <p>edit: I found a workaround by replacing <code>&quot;./Tutto&quot;</code> in the <code>setting.json</code> with the entire path of the test folder, and now at least the test extension is working fine, but 2 things still remains:</p> <ul> <li>I don't have the run test and debug test functionalities in the code</li> <li>Now the entire VSCODE is pointing at the tests for the &quot;Tutto&quot; project, but what if I have another project with its own test folder? If I replace the path with &quot;.&quot; it does not work</li> </ul>
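Discovery problems are easier to debug outside the editor, since the extension ultimately drives the same loader as `python -m unittest discover`. This sketch reproduces a minimal layout (using `tests/` rather than `test/` to avoid shadowing the stdlib `test` package, which can itself break discovery) and runs discovery programmatically with the same `-s`/`-p` arguments as the settings:

```python
import os
import tempfile
import unittest

root = tempfile.mkdtemp()
pkg = os.path.join(root, "tests")
os.makedirs(pkg)
open(os.path.join(pkg, "__init__.py"), "w").close()
with open(os.path.join(pkg, "test_demo.py"), "w") as f:
    f.write(
        "import unittest\n"
        "class TDemo(unittest.TestCase):\n"
        "    def test_ok(self):\n"
        "        self.assertTrue(True)\n"
    )

# Same start dir / pattern that settings.json hands to the extension:
suite = unittest.defaultTestLoader.discover(start_dir=root, pattern="*test*.py")
print(suite.countTestCases())  # 0 here means the layout or pattern is the problem
```

When the loader finds the tests but VS Code doesn't, the usual culprit is the `-s` start directory being wrong relative to the *opened workspace folder*; that also addresses the multi-project concern, since each project can carry its own `.vscode/settings.json` with a workspace-relative start directory.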
<python><unit-testing><visual-studio-code>
2024-02-20 11:12:40
1
691
Federicofkt
78,026,771
1,606,657
python - handle exceptions with curses wrapper
<p>I'm following the documentation <a href="https://docs.python.org/3/howto/curses.html" rel="nofollow noreferrer">https://docs.python.org/3/howto/curses.html</a> to setup a <code>curses</code> app in Python. According to the documentation it is recommended to use the <code>wrapper</code> as it auto initializes everything and handles exceptions.</p> <p>The example provided is fairly basic by using a single function only. I'd like to use a class to wrap the curses functionality and trigger changes to the TUI. How would I use the wrapper in that scenario? Below is an example of what I've tried so far but running the code outputs the traceback in a weird alignment and puts the terminal in a broken state which then requires a <code>reset</code>.</p> <pre><code>import curses from curses import wrapper class Test: def __init__(self, stdscr): self._stdscr = stdscr @classmethod def create(cls): _cls = wrapper(cls) return _cls def start(self): self._screen = curses.newpad(100, 100) self._screen.addstr(2, 2, 'test title', curses.A_STANDOUT) screen_rows, screen_cols = self._stdscr.getmaxyx() self._screen.refresh(0, 0, 0, 0, screen_rows - 1, screen_cols - 1) while True: self._stdscr.getch() 1/0 # example of uncaught exception t = Test.create() t.start() </code></pre> <p>Output</p> <pre><code>Traceback (most recent call last): File &quot;/tmp/test/test.py&quot;, line 27, in &lt;module&gt; t.start() File &quot;/tmp/test/test.py&quot;, line 23, in start 1/0 ~^~ ZeroDivisionError: division by zero </code></pre> <p>Essentially I'm looking for a solution to handle any uncaught exceptions</p>
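The likely issue with `wrapper(cls)` is that `wrapper` tears the terminal down as soon as the wrapped callable returns, i.e. as soon as `__init__` finishes, so `start()` runs after curses has been de-initialised and its exceptions escape unhandled. Keeping the whole lifetime inside a single wrapped function preserves both the cleanup and the exception handling; a sketch (not run here, as curses needs a real terminal):

```python
import curses

class App:
    def __init__(self, stdscr):
        self._stdscr = stdscr

    def start(self):
        # ... all curses calls happen while the wrapper is still active ...
        self._stdscr.getch()

def main(stdscr):
    # Construct *and* run inside the wrapper: any exception raised in
    # start() propagates out through wrapper(), which restores the
    # terminal first, so the traceback prints on a sane screen.
    App(stdscr).start()

# Entry point, commented out here because it requires a TTY:
# curses.wrapper(main)
```

Uncaught exceptions anywhere in `App` then surface normally after the terminal has been reset, with no need for a manual `reset`.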
<python><curses>
2024-02-20 11:01:32
1
6,352
wasp256
78,026,674
16,627,522
Pybind11/Nanobind: How to return class object and use methods from Python. How can I cast the C++ object to something whose methods I can use in Py
<p>I'm not sure why I can't find any good discussion of this, I have been searching since yesterday.</p> <p>How do you return a class object from C++ binding and use its methods in Python i.e.:</p> <pre><code>class A { ... void foo() { py::print(&quot;Hello world&quot;); } } class B { ... A bar() { a = A(); // What do I return here if I want to return a and use it in Python // return a } } </code></pre> <p>I want to then bind B and use it in Python</p> <pre><code>b_object = B() a = b_object.bar() a.foo() </code></pre> <p>How do I do this in Pybind, Nanobind or Boost Python (if the syntax for the latter is similar).</p> <p><strong>I can't use lambdas when creating the <code>B</code> binding. The use case is actually more complex than what I've shown here.</strong></p>
<python><c++><boost-python><pybind11><nanobind>
2024-02-20 10:44:50
1
634
Tommy Wolfheart
78,026,618
237,105
Cannot connect slot to pqtgraph InfiniteLine signal with connectSlotsByName
<p>I'm using the following code to call a function whenever the infinite line is dragged. It does not call the function.</p> <pre class="lang-py prettyprint-override"><code> import sys import pyqtgraph as pg from PyQt5.QtCore import pyqtSlot as Slot, Qt, QMetaObject from PyQt5.QtWidgets import * from PyQt5.QtCore import * from PyQt5.QtGui import * class MyMainWindow(QMainWindow): def __init__(self, parent=None): super().__init__(parent=parent) self.setGeometry(300, 300, 400, 300) self.setWindowTitle('Hello World') self.plot_widget = pg.PlotWidget() self.plot_item = self.plot_widget.plot([1,0,2], pen='b', name='p0') self.vline = pg.InfiniteLine(movable=True, angle=90) self.vline.setObjectName('vline') self.plot_widget.addItem(self.vline) self.setCentralWidget(self.plot_widget) QMetaObject.connectSlotsByName(self) @Slot(object) def on_vline_sigDragged(self, obj): print('dragged') if __name__ == &quot;__main__&quot;: app = QApplication(sys.argv) ui = MyMainWindow() ui.show() sys.exit(app.exec_()) </code></pre> <p>But when I assign the slot manually via</p> <pre class="lang-py prettyprint-override"><code> self.vline.sigDragged.connect(self.on_vline_sigDragged) </code></pre> <p>it works fine.</p> <p>Why is that so?</p>
<python><pyqt><signals><signals-slots><pyqtgraph>
2024-02-20 10:37:46
1
34,425
Antony Hatchkins
78,026,604
4,269,851
Python break loop with one line of code without using enumerate()
<p>If I want to add a break to my <code>for</code> or <code>while</code> loop at a certain iteration, can it be done without adding a counter?</p> <pre><code> i=1 for line in zip(*input): print(format_row.format(*line)) if i &gt; 3: break i += 1 </code></pre> <p>also without <code>enumerate()</code>:</p> <pre><code> for i, line in enumerate(zip(*input)): print(format_row.format(*line)) if i &gt; 3: break </code></pre> <p><strong>EDIT:</strong></p> <p>Please suggest a solution without including additional libraries.</p> <p>I don't want to use enumerate because I have to edit the code in the <code>for</code> statement too much; it's a temporary break, for example to print N lines rather than 1000 lines, so debugging can be fast. I want to add a quick <code>break</code> and then remove it without needing to modify the original code of the <code>for</code> loop.</p> <p>I want to add 2 lines maximum: one outside the <code>for</code> loop, another inside the <code>for</code> loop. How can I make this work? The function will be stored elsewhere and reused many times, so it is not a problem; the function can be as long as required.</p> <pre><code> # function does not count as extra code because it is reusable def check(num): if num &gt; 3: return True else: num += i # line 1 i = 1 for line in zip(*input): print(format_row.format(*line)) # line 2 if check(i): break </code></pre>
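For what it's worth, the standard library (so no additional third-party libraries) already offers a one-word cap on a loop via `itertools.islice`. A small sketch, with made-up data standing in for `zip(*input)` — it does touch the `for` line, but only that line, and removing the limit later is trivial:

```python
from itertools import islice

rows = [("a", 1), ("b", 2), ("c", 3), ("d", 4), ("e", 5)]  # stand-in for zip(*input)

printed = []
# islice caps the iterable itself, so the loop body and the surrounding
# code stay untouched; deleting "islice(..., 4)" restores the full loop.
for line in islice(rows, 4):
    printed.append(line)

print(printed)  # [('a', 1), ('b', 2), ('c', 3), ('d', 4)]
```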
<python><loops><break>
2024-02-20 10:35:00
1
829
Roman Toasov
78,026,487
412,137
SpecTree - is there a way to hide an endpoint from swagger
<p>I am currently working on a Flask project and using Spectree version 1.2.8 for API validation. For certain endpoints, I want to restrict their visibility on the Swagger UI documentation generated by Spectree to keep them private or for internal use only.</p> <p>Could anyone guide me on whether it's possible to exclude specific routes from being displayed in the Swagger UI with Spectree 1.2.8?</p> <p>I've searched through the Spectree documentation and tried looking into the configuration options, but I haven't found a clear solution that addresses this need directly.</p> <p>Thank you in advance for your assistance!</p>
<python><flask>
2024-02-20 10:19:19
1
2,767
Nadav
78,026,458
127,508
How to convert binary Postgres data to text with Polars?
<p>I am extracting data from Postgres with Polars.</p> <p>The table looks like this:</p> <pre><code> | Column | Type | |----------------+--------------------------+ | seq_id | bigint | | id | uuid | | version | integer | | data | jsonb | | timestamp | timestamp with time zone | </code></pre> <p>The problem is that when I execute a &quot;SELECT * FROM table;&quot; query I get id and data as binary in Polars.</p> <p>The fix is to execute:</p> <pre class="lang-sql prettyprint-override"><code>SELECT id::text, data::text, ... FROM table; </code></pre> <p>The problem with this is that I need to care about the fields in the table, and I would like to have a job that does not have the list of fields spelled out.</p> <p>Is there a way to instruct Polars to convert binary to text automatically?</p> <p>I use the ADBC connector and read_database_uri from Python:</p> <pre class="lang-py prettyprint-override"><code>database_query = &quot;SELECT id::text, data::text, ... FROM table;&quot; return pl.read_database_uri( query=database_query, uri=database_uri, engine=&quot;adbc&quot; ) </code></pre> <p>Is there an easy way that does not require spelling out fields and ::text for each binary column?</p>
<python><postgresql><python-polars>
2024-02-20 10:15:17
1
8,822
Istvan
78,026,404
3,650,983
List of supported backbone models
<p>I'm running the following example:</p> <pre><code>from transformers import TimmBackboneConfig, TimmBackbone # Initializing a timm backbone configuration = TimmBackboneConfig(&quot;resnet50&quot;) # Initializing a model from the configuration model = TimmBackbone(configuration) # Accessing the model configuration configuration = model.config </code></pre> <p>from <a href="https://huggingface.co/docs/transformers/main/en/main_classes/backbones" rel="nofollow noreferrer">here</a></p> <p>Now I want to use another backbone model from the supported list on the same page:</p> <ul> <li>BEiT</li> <li>BiT</li> <li>ConvNet</li> <li>ConvNextV2</li> <li>DiNAT</li> <li>DINOV2</li> <li>FocalNet</li> <li>MaskFormer</li> <li>NAT</li> <li>ResNet</li> <li>Swin Transformer</li> <li>Swin Transformer v2</li> <li>ViTDet</li> </ul> <p>Where can I find the mapping for this list, i.e. what is the string that I should use for each of the models?</p> <p>For example:</p> <ul> <li>&quot;ResNet&quot; -&gt; 'resnet50'</li> <li>&quot;BEiT&quot; -&gt; ???</li> </ul> <p>I have tried to search for it in some of the links on the page and in Google but didn't find this information anywhere.</p>
<python><machine-learning><pytorch><huggingface-transformers><huggingface>
2024-02-20 10:07:59
1
4,119
ChaosPredictor
78,026,392
8,832,641
Get Azure Application ID from Enterprise Application Name in python
<p>I have an Azure Application name and I would like to know if there is a way to get the application ID of the enterprise application in Python.</p> <p>I am able to authenticate myself by passing in the clientID/ApplicationID and secret. But I would like to know if there is a way to fetch the client ID/application ID using the Azure Application name, so that I don't have to hardcode it.</p>
<python><azure>
2024-02-20 10:06:01
1
1,117
Padfoot123
78,026,316
11,861,874
Excel Data to be updated using python in a loop
<p>I have the below table to update in Excel using Python. I tried groupby and also a loop with the iterrows method, but failed to populate the values.</p> <div class="s-table-container"><table class="s-table"> <thead> <tr> <th>AGE</th> <th>INCOME</th> <th>Total</th> </tr> </thead> <tbody> <tr> <td>32</td> <td>50,000</td> <td></td> </tr> <tr> <td>34</td> <td>55,000</td> <td></td> </tr> <tr> <td>32</td> <td>43,000</td> <td></td> </tr> <tr> <td>32</td> <td>48,000</td> <td></td> </tr> <tr> <td>34</td> <td>38,000</td> <td></td> </tr> </tbody> </table></div> <p>The aim here is to calculate and update the Total column in Excel. So Age=32 will have 141,000, which is the total of all entries for &quot;32&quot;, and similarly 93,000 for &quot;34&quot;.</p>
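Assuming the sheet is loaded with pandas (e.g. via `read_excel`), `groupby(...).transform('sum')` yields one value per row rather than per group, so it can be assigned straight into the Total column. A sketch using the numbers from the table:

```python
import pandas as pd

# The same rows as in the question's table.
df = pd.DataFrame({
    "AGE": [32, 34, 32, 32, 34],
    "INCOME": [50000, 55000, 43000, 48000, 38000],
})

# transform("sum") keeps the original row count (unlike groupby().sum()),
# so every row receives the total for its AGE group.
df["Total"] = df.groupby("AGE")["INCOME"].transform("sum")

print(df["Total"].tolist())  # [141000, 93000, 141000, 141000, 93000]
```

Writing it back out would then be something like `df.to_excel("output.xlsx", index=False)` (filename assumed).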
<python><excel>
2024-02-20 09:55:57
1
645
Add
78,026,306
532,054
Django Rest Framework basic set of APIs around authentication
<p>I'm a Django newbie and I'm looking to expose basic services like forgotten password and password change.</p> <p>I would expect to have those services for free, but looking here and there it seems we have to write them by hand. Is that correct?</p> <p>Is there any explanation for that?</p>
<python><django><authentication><django-rest-framework>
2024-02-20 09:54:15
1
1,771
lorenzo
78,026,256
12,304,000
filter out dagster assets based on group_name
<p>In my assets file, I have 3 assets, differentiated by their <strong>group_name</strong>.</p> <p><strong>assets/my_assets.py:</strong></p> <pre><code>@asset( group_name=&quot;group1&quot; ) def group1_data(context: AssetExecutionContext): x = 1 + 3 @asset( group_name=&quot;group1&quot; ) def group1_full_data(context: AssetExecutionContext): x = 1 + 6 @asset( group_name=&quot;group2&quot; ) def group2_data(context: AssetExecutionContext): x = 1 + 1 </code></pre> <p><strong>assets/__init__.py:</strong></p> <pre><code>all_assets = load_assets_from_modules([my_assets]) </code></pre> <p>Now when I load them using load_assets_from_modules, I always end up loading all assets together. Is it not possible to load only those with a specific group name?</p> <p>Because I want to run 2 different jobs for 2 different groups:</p> <pre><code>from dagster import define_asset_job, load_assets_from_modules from ..assets import my_assets my_group1_job = define_asset_job(name=&quot;group1_job&quot;, selection=load_assets_from_modules([my_assets]), description=&quot;Loads only group1 data&quot;)</code></pre>
<python><assets><jobs><directed-acyclic-graphs><dagster>
2024-02-20 09:47:36
1
3,522
x89
78,026,226
1,613,983
"Incorrect number of bindings supplied" when inserting into SQLite DB
<p>I'm trying to check whether a row exists and contains some expected values, but I keep getting this error:</p> <blockquote> <p>Incorrect number of bindings supplied. The current statement uses 1, and there are 5 supplied.</p> </blockquote> <p>Here's a minimal reproduction of the error:</p> <pre><code>import sqlite3 db_file = &quot;test.db&quot; con = sqlite3.connect(db_file) cur = con.cursor() cur.execute('DROP TABLE IF EXISTS image') cur.execute(''' CREATE TABLE image( id INTEGER PRIMARY KEY, listing_id TEXT NOT NULL, url TEXT NOT NULL, image BLOB NOT NULL )''') res = con.execute(''' SELECT listing_id, url FROM image WHERE listing_id=? ''', 'id123') </code></pre> <p>Result:</p> <pre><code>--------------------------------------------------------------------------- ProgrammingError Traceback (most recent call last) Cell In[1], line 15 7 cur.execute('DROP TABLE IF EXISTS image') 8 cur.execute(''' 9 CREATE TABLE image( 10 id INTEGER PRIMARY KEY, (...) 13 image BLOB NOT NULL 14 )''') ---&gt; 15 res = con.execute(''' 16 SELECT listing_id, url FROM image WHERE listing_id=? 17 ''', 'id123') ProgrammingError: Incorrect number of bindings supplied. The current statement uses 1, and there are 5 supplied. </code></pre> <p>What am I doing wrong here?</p>
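The parameters argument of `execute` must be a sequence, and a bare string *is* a sequence — of its five characters, which is exactly where the "uses 1 ... 5 supplied" count comes from. Wrapping the value in a one-element tuple fixes it; a sketch against an in-memory database:

```python
import sqlite3

con = sqlite3.connect(":memory:")
cur = con.cursor()
cur.execute("CREATE TABLE image(id INTEGER PRIMARY KEY, listing_id TEXT, url TEXT)")
cur.execute("INSERT INTO image(listing_id, url) VALUES (?, ?)",
            ("id123", "http://example.com/a.png"))

# ("id123",) supplies exactly one binding; the bare string "id123"
# would supply one binding per character, i.e. five.
rows = cur.execute(
    "SELECT listing_id, url FROM image WHERE listing_id=?", ("id123",)
).fetchall()

print(rows)  # [('id123', 'http://example.com/a.png')]
```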
<python><sqlite>
2024-02-20 09:44:17
1
23,470
quant
78,026,207
15,136,864
Stream connection lost: RxEndOfFile(-1, 'End of input stream (EOF)' when connecting from Lambda to Amazon MQ using Python + AMQP
<p>My goal is to create a client responsible for sending messages from AWS Lambda to Amazon MQ (ActiveMQ Classic) through AMQP protocol. I've done everything according to the <a href="https://docs.aws.amazon.com/amazon-mq/latest/developer-guide/amazon-mq-rabbitmq-pika.html" rel="nofollow noreferrer">documentation</a>.</p> <pre class="lang-py prettyprint-override"><code>class BasicPikaClient: def __init__(self, rabbitmq_broker_id, rabbitmq_user, rabbitmq_password, region): ssl_context = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT) ssl_context.set_ciphers(&quot;ECDHE+AESGCM:!ECDSA&quot;) ssl_context.load_verify_locations(cafile=&quot;ca.pem&quot;) url = f&quot;amqps://{rabbitmq_user}:{rabbitmq_password}@{rabbitmq_broker_id}.mq.{region}.amazonaws.com:5671&quot; parameters = pika.URLParameters(url) parameters.ssl_options = pika.SSLOptions(context=ssl_context) self.connection = pika.BlockingConnection(parameters) self.channel = self.connection.channel() class BasicMessageSender(BasicPikaClient): def declare_queue(self, queue_name): print(f&quot;Trying to declare queue({queue_name})...&quot;) self.channel.queue_declare(queue=queue_name) def send_message(self, exchange, routing_key, body): channel = self.connection.channel() channel.basic_publish(exchange=exchange, routing_key=routing_key, body=body) print( f&quot;Sent message. Exchange: {exchange}, Routing Key: {routing_key}, Body: {body}&quot; ) def close(self): self.channel.close() self.connection.close() </code></pre> <p>The documentation seems to be outdated so I had to change <code>ssl_context = ssl.SSLContext(ssl.PROTOCOL_TLSv1_2)</code> with <code>ssl_context = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)</code>. The <code>ca.pem</code> file comes from the <a href="https://www.amazontrust.com/repository/" rel="nofollow noreferrer">Amazon Trust Services Repository</a>. 
I've chosen this guy (don't ask me why, I'm just guessing):</p> <p><a href="https://i.sstatic.net/2JBAG.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/2JBAG.png" alt="enter image description here" /></a></p> <p>When running the script I get the following exception: <code>pika.exceptions.IncompatibleProtocolError: StreamLostError: (&quot;Stream connection lost: RxEndOfFile(-1, 'End of input stream (EOF)')&quot;,)</code></p> <pre class="lang-py prettyprint-override"><code>basic_message_sender = BasicMessageSender( &quot;b-177801af-xxx-1&quot;, &quot;xxx&quot;, &quot;xxx&quot;, &quot;eu-central-1&quot;, ) basic_message_sender.declare_queue(&quot;xxx&quot;) basic_message_sender.send_message( exchange=&quot;&quot;, routing_key=&quot;xxx&quot;, body=b&quot;Hello World!&quot; ) basic_message_sender.close() </code></pre>
<python><amazon-web-services><ssl><amazon-mq>
2024-02-20 09:41:22
1
310
Sevy
78,026,120
1,493,192
Merge two dataframe in pandas with different size
<p>I want to find a method to join two dataframes in pandas in an elegant way, without nested loops. The first dataframe has the following structure, including a &quot;week_of_year&quot; column</p> <p><a href="https://i.sstatic.net/yPrGa.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/yPrGa.png" alt="enter image description here" /></a></p> <p>The second, smaller dataframe contains the maximum value recorded in each week</p> <p><a href="https://i.sstatic.net/1WWTO.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/1WWTO.png" alt="enter image description here" /></a></p> <p>What I would like to do is merge the two dataframes so that in the first dataframe the value 34 appears repeatedly for all the rows that have the week value 12, and so on</p>
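Since the frames are only shown as screenshots, the column names below are guesses, but the shape of the solution is a plain left merge on the week column, which repeats each weekly maximum across every matching row:

```python
import pandas as pd

# Column names are assumptions; the real frames are only visible as images.
daily = pd.DataFrame({"week_of_year": [12, 12, 13], "value": [5.0, 6.0, 4.0]})
weekly_max = pd.DataFrame({"week_of_year": [12, 13], "max_value": [34, 40]})

# how="left" keeps every row of the larger frame and broadcasts the
# single per-week maximum onto each of those rows.
merged = daily.merge(weekly_max, on="week_of_year", how="left")

print(merged["max_value"].tolist())  # [34, 34, 40]
```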
<python><pandas><dataframe>
2024-02-20 09:29:58
1
8,048
Gianni Spear
78,025,987
21,049,944
polars - what does "fortran-like indexing" mean and how to enforce it?
<p>I am trying to use <code>to_numpy(allow_copy=False)</code> on my polars dataframe, but I am getting:</p> <p><code>RuntimeError: copy not allowed: only numeric data without nulls in Fortran-like order can be converted without copy</code></p> <p>Since I checked that <code>df.select(pl.all().is_null().sum())</code> gives 0 for every column (I have 3), I suppose that the problem is in the fortran-like indexing. How to enforce it?</p> <p>I would also like to know what fortran-like indexing means for polars (I failed to find any sources on that).</p> <p>I was unfortunately not able to reproduce an example, since the zero copy mode works just fine for basic examples and my data creation process is too long to reproduce.</p> <p>The exception is also raised for the <code>order = &quot;c&quot;</code> and <code>writable = False</code> options.</p> <p>EDIT: After some thinking I decided to reformulate this question in a more general form <a href="https://stackoverflow.com/questions/78039906/polars-is-there-a-procedure-that-ensures-zero-copy-when-calling-df-to-numpy">here</a>.</p>
<python><numpy><python-polars><zero-copy>
2024-02-20 09:09:18
0
388
Galedon
78,025,976
5,868,293
Create a column for cumulative sum for each string value in a column pandas
<p>I have the following pandas dataframe</p> <pre><code>import pandas as pd import random random.seed(42) pd.DataFrame({'index': list(range(0,10)), 'cluster': [random.choice(['S', 'C', ]) for l in range(0,10)]}) index cluster 0 0 S 1 1 S 2 2 C 3 3 S 4 4 S 5 5 S 6 6 S 7 7 S 8 8 C 9 9 S </code></pre> <p>I would like to create a new column for each unique value of the <code>cluster</code> column, holding the cumulative count of appearances of that value.</p> <p>The output pandas dataframe should look like this:</p> <pre><code>pd.DataFrame({'index': list(range(0,10)), 'cluster': [random.choice(['S', 'C', ]) for l in range(0,10)], 'cumulative_S': [1,2,2,3,4,5,6,7,7,8], 'cumulative_C': [0,0,1,1,1,1,1,1,2,2]}) index cluster cumulative_S cumulative_C 0 0 S 1 0 1 1 S 2 0 2 2 C 2 1 3 3 S 3 1 4 4 S 4 1 5 5 S 5 1 6 6 S 6 1 7 7 S 7 1 8 8 C 7 2 9 9 S 8 2 </code></pre> <p>How can I achieve that?</p>
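One way to get all the running counts at once (using the cluster sequence shown in the example output, rather than re-seeding random): one-hot encode the column with `get_dummies` and take a column-wise cumulative sum:

```python
import pandas as pd

# The cluster sequence copied from the example output above.
df = pd.DataFrame({
    "index": range(10),
    "cluster": ["S", "S", "C", "S", "S", "S", "S", "S", "C", "S"],
})

# One indicator column per unique cluster value, then a running total
# down each indicator column; add_prefix gives the requested names.
cum = pd.get_dummies(df["cluster"], dtype=int).cumsum().add_prefix("cumulative_")
df = pd.concat([df, cum], axis=1)

print(df["cumulative_S"].tolist())  # [1, 2, 2, 3, 4, 5, 6, 7, 7, 8]
print(df["cumulative_C"].tolist())  # [0, 0, 1, 1, 1, 1, 1, 1, 2, 2]
```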
<python><pandas>
2024-02-20 09:07:53
3
4,512
quant
78,025,970
12,314,521
How to assign a tensor of values to another tensor of values by a tensor of indices (PyTorch)
<p>Given:</p> <ul> <li>a tensor A of zeroes with shape (batch_size, vocab_size), let's say (16, 10000)</li> <li>a tensor B of indices with shape (batch_size, seq_len), let's say (16, 20)</li> <li>a tensor C of values with shape (batch_size, seq_len), i.e. (16, 20)</li> </ul> <ol> <li><p>Now I want to replace values in A by C at indices B, like: A[B] = C</p> </li> <li><p>I want to replace values in A by C at indices B, except at some indices. For example, B = [[0, 1, 2, 3], [0, 1, 2, 4]]: replace the values of A by C at indices B except indices 3 and 5 (applied to all rows). Now B can't be expressed by a tensor, because the rows don't have equal length after filtering. Something like this: A[B[valid_indices]] = C[valid_indices]</p> </li> </ol> <p>I tried using a for loop, but it requires two nested loops and takes too long.</p> <pre><code>for i,row in enumerate(probs): valid_indices = torch.tensor([idx[0] for idx in enumerate(encoder_input_ids[i]) if idx[1] not in [vocab['&lt;pad&gt;'],vocab['&lt;unk&gt;'], vocab['&lt;/s&gt;']]]) valid_ids = torch.tensor([idx[0] for idx in enumerate(encoder_input_ids[i]) if idx[1] not in [vocab['&lt;pad&gt;'],vocab['&lt;unk&gt;'], vocab['&lt;/s&gt;']]]) # print(valid_ids) # value = probs_c[i][valid_indices] # probs[i][tmp] = value #probs_c[i] </code></pre>
<python><pytorch><tensor>
2024-02-20 09:07:19
1
351
jupyter
78,025,938
769,405
Are DBT Python models with Snowflake but without Snowpark possible?
<p>A significant chunk of one of my pipelines utilises DBT Python models that write to Snowflake. Originally these were workbooks in Databricks. We have moved everything to Snowflake recently. The code originally generated asynchronous inserts to the DB in a loop. However, Snowpark Session library is not threadsafe, so these inserts are now synchronous. Is there a way to build DBT Python models which will operate &quot;outside&quot; of Snowpark, so that I can use Snowflake Python Connector for async DB calls and not be forced to use Snowpark Session library?</p> <p>The original Databricks code looked like the following, but Snowpark Session library is not threadsafe</p> <pre><code>pool = ThreadPool(5) pool.map( lambda statement: execute_statement(statement), statements_list) </code></pre>
<python><snowflake-cloud-data-platform><dbt>
2024-02-20 09:00:54
0
1,074
mikelus
78,025,925
4,521,319
PySpark Error - SparkContext can only be used on the driver, not in code that it run on workers
<p>I have a PySpark job that reads data into a dataframe and tokenizes a column as a part of the transformations. I have the following:</p> <p><code>Main.py</code> - reads configs and creates sparkSession</p> <p><code>Extractor.py</code> - reads the data from source and returns the dataframe to <code>main.py</code></p> <p><code>Transformer.py</code> - in case any transformations are required, <code>main.py</code> sends the dataframe to <code>Transformer</code>, where tokenization is performed along with other transformations, and it returns the dataframe to <code>Main.py</code></p> <p>I don't have any issues with doing other transformations like casting columns, but whenever I run the job with tokenization set to true, I am getting a weird error:</p> <pre><code>Could not serialize object: RuntimeError: It appears that you are attempting to reference SparkContext from a broadcast variable, action, or transformation. SparkContext can only be used on the driver, not in code that it run on workers. 
For more information, see SPARK-5063. </code></pre> <p>Here is my <code>Transformer.py</code> tokenization code:</p> <pre><code>def perform_transformations(self, df): if 'tokenization' in config: self.log.info(&quot;sending to tokenization&quot;) self.log.info(tokenize_cols) df = self.tokenize_data_from_snowflake(df) def tokenize_data_from_snowflake(self, data): self.logger.info(f&quot;Begin tokenize process for data&quot;) for column in tokenize_cols: self.logger.info(f&quot;processing for column: {column}&quot;) data = self.get_tokenized_df(column, data=data) self.logger.info(data.schema) self.logger.info(f&quot;Tokenize process for data completed successfully&quot;) return data def chunks(self, iterable, chunk_size): iterator = iter(iterable) for chunk in iter(lambda: list(islice(iterator, chunk_size)), []): yield chunk def get_tokenized_df(self, column, data): if data.count() == 0: return data schema = data.schema def map_partitions(partition): for current_group in self.chunks(partition, 20): self.logger.info(f&quot;printing column: {column}&quot;) self.logger.info(f&quot;printing currentGroup: {current_group}&quot;) clear_text_batch = [] tokenizer_value_map = {} for row in current_group: clear_text_batch.append( row[&quot;A&quot;][:32] if column == &quot;A&quot; else row[column] ) self.logger.info(f&quot;clear_text_batch: {clear_text_batch}&quot;) if len(clear_text_batch) &gt; 0: # Ensure you are calling the tokenizer with the correct column tokenizer_value_map.update(self.call_tokenizer(column, clear_text_batch)) self.logger.info(tokenizer_value_map) self.logger.info(&quot;returned to map partitions&quot;) for i, row in enumerate(current_group): current_group[i] = self.process_row(row, column, tokenizer_value_map) yield current_group # print(f&quot;updated_rows: {current_group}&quot;) return data.rdd.mapPartitions(map_partitions).flatMap(lambda x: x).toDF(schema) def update_row(self, row, column, value): row_dict = row.asDict() row_dict[column] = value return 
Row(**row_dict) def process_row(self, row, column, tokenizer_value_map): self.logger.info(&quot;in process_row&quot;) clear_text_value = row[column] tokenized_value = tokenizer_value_map.get( clear_text_value[:32] if column == &quot;A&quot; else clear_text_value, &quot;No Token Received&quot; ) self.logger.info(f&quot;values -&gt; {clear_text_value} : {tokenized_value}&quot;) return self.update_row(row, &quot;TOKENIZED_&quot; + column, tokenized_value) def call_tokenizer(self, column_name, cleartext_data): #call tokenizer service code </code></pre> <p>The logic is to take the existing values of the column that needs to be tokenized, call a tokenizer service, and then update the row with the tokenized value. The bulk of this logic is in the <code>get_tokenized_df</code> method. The error is being thrown at</p> <pre><code>return data.rdd.mapPartitions(map_partitions).flatMap(lambda x: x).toDF(schema) </code></pre> <p>I had no issue with running this logic before, when <code>Transformer.py</code> itself was reading the data into the dataframe and sending it to another class for transformation. I am seeing this error after I did some refactoring of the code. I did check the Jira page mentioned in the error: <a href="https://issues.apache.org/jira/browse/SPARK-5063" rel="nofollow noreferrer">https://issues.apache.org/jira/browse/SPARK-5063</a> but couldn't figure out anything from it. So I am not sure how I should be debugging/resolving this.</p> <p>Any help would be appreciated. Thank you!</p>
<python><apache-spark><pyspark><apache-spark-sql>
2024-02-20 08:58:59
0
925
Hemanth Annavarapu
78,025,817
16,436,095
Too fast checking local ports with python socket
<p>This code example is given in the book &quot;CPython Internals&quot;.</p> <pre class="lang-py prettyprint-override"><code>from queue import Queue import socket import time timeout = 1.0 def check_port(host: str, port: int, results: Queue): sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM) sock.settimeout(timeout) result = sock.connect_ex((host, port)) if result == 0: results.put(port) sock.close() if __name__ == '__main__': start = time.time() host = &quot;localhost&quot; results = Queue() for port in range(80, 100): check_port(host, port, results) while not results.empty(): print(&quot;Port {0} is open&quot;.format(results.get())) print(&quot;Completed scan in {0} seconds&quot;.format(time.time() - start)) </code></pre> <p>If <code>host == 'localhost'</code>, then this script works very quickly (all ports (from 1 to 65535) checked in about 1.5 secs!); moreover, the duration of the timeout does not play any role.</p> <p>If I set the host &quot;8.8.8.8&quot; or some other external one, then the script execution time looks correct. For example, checking ports from 440 to 444 with <code>timeout == 1</code> on 8.8.8.8 takes about 4 secs.</p> <p>Why is checking the availability of local ports so fast? (I use Ubuntu, if it's important)</p>
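A plausible explanation, which can be checked directly: on the loopback interface a closed port is answered by the kernel with an immediate RST, so `connect_ex` fails at once and the timeout never comes into play. Against a remote host like 8.8.8.8, filtered ports usually drop the SYN silently, so each probe waits out the full timeout. A small check of the local case:

```python
import socket
import time

# Port 1 on loopback is almost certainly closed; the local kernel refuses
# the connection itself, so there is no waiting for the 1-second timeout.
sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.settimeout(1.0)
start = time.time()
result = sock.connect_ex(("127.0.0.1", 1))
elapsed = time.time() - start
sock.close()

print(result, round(elapsed, 4))  # a nonzero errno, far below the timeout
```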
<python><sockets><python-sockets>
2024-02-20 08:38:50
1
370
maskalev
78,025,739
815,859
Call a function on Raspberry Pi Pico from PC using pyserial
<p>I have a small python file in Raspberry Pi Pico that can control servos via PCA9685 servo controller:</p> <pre><code>import time from adafruit_servokit import ServoKit import busio import board import time kit = ServoKit(channels=16, i2c=(busio.I2C(board.GP1, board.GP0))) def rotateservo(): kit.servo[0].angle = 100 time.sleep(1) kit.servo[0].angle = 0 time.sleep(1) kit.servo[0].angle = 180 </code></pre> <p>The function works fine when called from Pico. I am looking for a way to call the function from a PC using pyserial</p> <pre><code>import serial ser = serial.Serial('COM5', 115200, timeout=timeout) ser.write('rotateservo()'.encode()) ser.close() </code></pre> <p>This simply outputs the number 13 which I assume is the length of the string. How do I call a function on Pi Pico using pyserial?</p>
<python><pyserial><raspberry-pi-pico>
2024-02-20 08:22:18
1
795
Monty Swanson
78,025,726
714,564
Selenium printing with chrome, keeps showing a save popup
<p>I need some automated script that will print a page. This script has to run on Chrome, as FF does not give me a decent output for this printing task. I tried several things, but for some reason it still requires a filename, and it ignores what I use for both the savefile.default_directory and download.default_directory settings. This is the script I am currently working on (needless to say, I tried removing and adding parts, tried setting the TMP_DIR to my actual default download directory, and so on).</p> <pre><code>import json from selenium import webdriver def main(): TMP_DIR = &quot;F:/Downloads&quot; printer = &quot;Microsoft Print to PDF&quot; options = webdriver.ChromeOptions() print_settings = { &quot;recentDestinations&quot;: [{ &quot;id&quot;: printer, &quot;origin&quot;: &quot;local&quot;, &quot;account&quot;: &quot;&quot;, }], &quot;selectedDestinationId&quot;: printer, &quot;version&quot;: 2, &quot;isHeaderFooterEnabled&quot;: False, &quot;isLandscapeEnabled&quot;: True } options.add_argument(&quot;--start-maximized&quot;) options.add_argument('--window-size=1920,1080') # options.add_argument(&quot;--headless&quot;) options.add_argument('--enable-print-browser') options.add_argument(&quot;--kiosk&quot;) options.add_argument(&quot;--kiosk-printing&quot;) options.add_argument(&quot;--safebrowsing-disable-download-protection&quot;); options.add_argument(&quot;--safebrowsing-disable-extension-blacklist&quot;); options.add_experimental_option(&quot;prefs&quot;, { &quot;printing.print_preview_sticky_settings.appState&quot;: json.dumps(print_settings), &quot;savefile.default_directory&quot;: TMP_DIR, # Change default directory for downloads &quot;download.default_directory&quot;: TMP_DIR, # Change default directory for downloads &quot;download.prompt_for_download&quot;: False, # To auto download the file &quot;download.directory_upgrade&quot;: True, &quot;profile.default_content_setting_values.automatic_downloads&quot;: 1, &quot;safebrowsing.enabled&quot;: True })
driver = webdriver.Chrome(executable_path=&quot;C:/Users/97252/Downloads/chromedriver-win64/chromedriver-win64/chromedriver.exe&quot;, chrome_options=options) driver.get(&quot;https://www.google.com/&quot;) driver.execute_script(&quot;window.print();&quot;) if __name__ == '__main__': main() </code></pre> <p>Using chrome / chromedriver 121.0.6167.184 Python 3.9.13 selenium==3.11.0</p> <p>is there anything else I can do to force the download? Bonus if I can set the filename, however at this point I'll settle for anything.</p>
<python><selenium-webdriver><selenium-chromedriver>
2024-02-20 08:19:39
2
671
GuruYaya
78,025,589
13,320,357
Scheduled cron job not getting executed
<p>I have a cron job scheduled as follows inside an EC2 instance operating on Amazon Linux OS:</p> <pre><code>51 7 * * * /usr/bin/python3 /home/ec2-user/set_access_token.py &gt; /home/ec2-user/access_token_logs.txt </code></pre> <p>The job is not getting executed despite providing the native path of python3 and specifying the time to run in UTC.</p> <p>I am not getting any cron logs inside the /var/log folder</p>
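Without logs this is guesswork, but two common culprits are cron's minimal environment and the redirect capturing only stdout while errors go to stderr. A sketch of a more debuggable crontab entry (paths unchanged from the question; the `crond` check is an assumption about Amazon Linux, where cronie is not always installed):

```shell
# Append instead of truncate, and capture stderr too, so tracebacks survive.
51 7 * * * /usr/bin/python3 /home/ec2-user/set_access_token.py >> /home/ec2-user/access_token_logs.txt 2>&1

# Worth verifying the daemon itself is present and running:
#   sudo systemctl status crond
#   sudo yum install -y cronie && sudo systemctl enable --now crond
```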
<python><python-3.x><amazon-web-services><amazon-ec2><cron>
2024-02-20 07:55:17
1
415
Anuj Panchal
78,025,330
11,267,783
Python link between properties
<p>I wanted to know if there is an easy way to link properties in an object. This is my code:</p> <pre class="lang-py prettyprint-override"><code>import numpy as np obj = {} obj[&quot;data&quot;] = np.linspace(1,10) obj[&quot;first&quot;] = obj[&quot;data&quot;][0] # obj[&quot;first&quot;] = 1 obj[&quot;last&quot;] = obj[&quot;data&quot;][-1] # obj[&quot;last&quot;] = 10 obj[&quot;data&quot;] = np.linspace(20,100) # obj[&quot;first&quot;] = 20 # obj[&quot;last&quot;] = 100 </code></pre> <p>Is there a way to automatically update the first and last properties when data has changed? I don't want to set obj[&quot;first&quot;] and obj[&quot;last&quot;] each time.</p>
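If a small class is acceptable instead of a plain dict, Python's `@property` gives exactly this behaviour: `first` and `last` are computed from `data` on every access, so they can never go stale. A sketch (the class name is arbitrary):

```python
import numpy as np


class Series:
    """Holds an array; first/last are derived on access,
    so reassigning data automatically "updates" them."""

    def __init__(self, data):
        self.data = data

    @property
    def first(self):
        return self.data[0]

    @property
    def last(self):
        return self.data[-1]


obj = Series(np.linspace(1, 10))
print(obj.first, obj.last)   # 1.0 10.0
obj.data = np.linspace(20, 100)
print(obj.first, obj.last)   # 20.0 100.0
```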
<python>
2024-02-20 07:04:32
1
322
Mo0nKizz
78,025,039
11,098,908
Matplotlib: ValueError: 'steps' is not a valid value for ls
<p>The following code came from the <a href="https://www.packtpub.com/en-au/product/matplotlib-30-cookbook-9781789135718?type=print&amp;gad_source=1&amp;gclid=CjwKCAiAlcyuBhBnEiwAOGZ2S5AG1tIdmdDusS18Q4CYS6Cv8c3TLClPdVPEMOz85nEEyw5kpOYxmRoCJLoQAvD_BwE" rel="nofollow noreferrer">Matplotlib 3.0 Cookbook</a></p> <pre><code>import numpy as np import matplotlib.pyplot as plt fig = plt.figure(figsize=(12,6)) ax1 = plt.subplot(321) ax2 = plt.subplot(322) ax3 = plt.subplot(323) ax4 = plt.subplot(324) ax5 = plt.subplot(325) ax6 = plt.subplot(326) x = np.linspace(0, 10, 20) color_list = ['xkcd:sky blue', 'green', '#1F1F1F1F', 'b', '#1C0B2D', 'tab:pink', 'C4'] for i, color in enumerate(color_list): y = x - (-5*i + 15) ax1.plot(x, y, color) ax1.set_title('colors demo') line_style = ['-', '--', '-.', ':', '.'] for i, ls in enumerate(line_style): y = x - (-5*i + 15) line, = ax2.plot(x, y, ls) ax2.set_title('line style demo') plt.setp(line, ls='steps') # PROBLEMATIC CODE marker_list = ['.', ',', 'o', 'v', '^', 's', 'p', '*', 'h', 'H', 'D'] for i, marker in enumerate(marker_list): y = x - (-5*i + 15) ax3.plot(x, y, marker) ax3.set_title('marker demo') y = x # reset y to x ax4.plot(x, y-10, 'k-d') ax4.plot(x, y-5, 'c--') ax4.plot(x, y, '|') ax4.plot(x, y+5, '-.') ax4.plot(x, y+10, color='purple', ls=':', marker='3') ax4.plot(x, y+15, color='orange', linestyle=':', marker='1') ax4.set_title('combination demo') ax5.plot(x, y-10, 'y-D', linewidth=2, markersize=4, markerfacecolor='red', markeredgecolor='k',markeredgewidth=1) ax5.plot(x, y-5, 'm-s', lw=4, ms=6, markerfacecolor='red', markeredgecolor='y', markeredgewidth=1) ax5.set_title('Line and Marker Sizes Demo') dash_capstyles = ['butt','projecting','round'] for i, cs in enumerate(dash_capstyles): y = x - (-5*i + 15) ax6.plot(x, y, ls='--', lw=10, dash_capstyle=cs) ax6.set_title('dash capstyle demo') plt.tight_layout(); plt.show() </code></pre> <p>Although the book showed the output as per below <a 
href="https://i.sstatic.net/LNv6l.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/LNv6l.png" alt="enter image description here" /></a></p> <p>I couldn't replicate the result, but got the following error (for the line <code>plt.setp(line, ls='steps')</code>) instead:</p> <pre><code>ValueError Traceback (most recent call last) Cell In[114], line 25 23 line, = ax2.plot(x, y, ls) 24 ax2.set_title('line style demo') ---&gt; 25 plt.setp(line, ls='steps') ValueError: 'steps' is not a valid value for ls; supported values are '-', '--', '-.', ':', 'None', ' ', '', 'solid', 'dashed', 'dashdot', 'dotted' </code></pre> <p>Could someone please help me fix the error and draw the step line (highlighted in yellow in the image above)?</p>
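The book targets an older Matplotlib: `'steps'` used to be accepted as a *linestyle* but was later removed, and stepping is now controlled by the separate *drawstyle* property. A sketch of a modern equivalent of the problematic line (run headless here via the Agg backend):

```python
import matplotlib
matplotlib.use("Agg")  # headless backend so this runs without a display
import matplotlib.pyplot as plt
import numpy as np

x = np.linspace(0, 10, 20)
fig, ax = plt.subplots()
line, = ax.plot(x, x - 15, "--")

# In Matplotlib 3.x, stepping is a drawstyle, not a linestyle;
# "steps" is equivalent to "steps-pre".
plt.setp(line, drawstyle="steps")

print(line.get_drawstyle())
```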
<python><matplotlib>
2024-02-20 05:51:08
1
1,306
Nemo
78,025,006
1,453,157
How to open content in a new tab using Python + Selenium
<p>I am using Python + Selenium.</p> <p>Browsing this page</p> <p><a href="https://rovo.co/explore/activities" rel="nofollow noreferrer">https://rovo.co/explore/activities</a></p> <p>you will see many activities.</p> <p>Using Python + Selenium, how can I open each activity in a new tab, and switch to the new tab?</p> <p>I tried the following code, but it doesn't open in a new tab.</p> <pre><code>link = WebDriverWait(driver, 30).until(
    EC.element_to_be_clickable(driver.find_elements(By.CLASS_NAME, &quot;css-vurnku&quot;)[0]))
actions = ActionChains(driver)
actions.key_down(Keys.CONTROL)
actions.click(on_element=link)
actions.perform()
</code></pre> <p>Any help is highly appreciated.</p>
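[Editor's note] One possible approach (my own sketch, not from the question): instead of simulating Ctrl+click, Selenium 4 can open a tab directly with `driver.switch_to.new_window('tab')` and then navigate to each activity's href. The helper below only assumes the standard WebDriver API (`window_handles`, `switch_to`, `get`):

```python
def open_in_new_tab(driver, url):
    """Open `url` in a fresh browser tab, switch to it, return its handle."""
    before = set(driver.window_handles)        # handles that already existed
    driver.switch_to.new_window('tab')         # Selenium 4+: opens a blank tab and focuses it
    driver.get(url)                            # load the page in that tab
    return (set(driver.window_handles) - before).pop()   # the newly created handle
```

With a real driver you would first collect the links, e.g. `hrefs = [a.get_attribute('href') for a in driver.find_elements(By.CSS_SELECTOR, 'a.css-vurnku')]` (selector assumed from the question), loop `open_in_new_tab(driver, href)`, and switch back with `driver.switch_to.window(original_handle)`.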
<python><selenium-webdriver>
2024-02-20 05:42:50
3
503
J K
78,024,843
9,997,385
multipart POST with python requests unsupported media type
<p>Trying to recreate the following cURL with Python requests. I see <a href="https://stackoverflow.com/questions/17415084/multipart-data-post-using-python-requests-no-multipart-boundary-was-found">here</a> that I &quot;should NEVER set <code>&quot;content-type&quot;: 'multipart/form-data'</code>&quot; header. Instead, allow the requests module to do it.</p> <pre><code>curl -X POST \
  'https://example.com/v1/endpoint' \
  -H 'accept: application/json' \
  -H 'Content-Type: multipart/form-data' \
  -H 'Authorization: Bearer &lt;BEARER_TOKEN_HERE&gt;' \
  -F 'name=test5' \
  -F 'type=notebook' \
  -F 'content=@some-file.json;type=application/json'
</code></pre> <p>Here is my code:</p> <pre><code>headers = {
    &quot;accept&quot;: &quot;application/json&quot;,
    &quot;Authorization&quot;: &quot;Bearer &lt;BEARER_TOKEN_REDACTED&gt;&quot;
}

with open(path, mode=&quot;r&quot;) as f:
    data = f.read()

stuff = {
    &quot;name&quot;: &quot;test1&quot;,
    &quot;type&quot;: &quot;notebook&quot;,
    &quot;content&quot;: f&quot;{data};type=application/json&quot;
}

upload_resp = requests.post(
    url=&quot;https://example.com/endpoint&quot;,
    data=stuff,
    headers=headers
)
return upload_resp
</code></pre> <p>With this I get an HTTP <code>415</code> code: <code>Unsupported media type</code>.</p> <p>When I set <code>headers</code> as:</p> <pre><code>headers = {
    &quot;accept&quot;: &quot;application/json&quot;,
    &quot;Authorization&quot;: &quot;Bearer &lt;BEARER_TOKEN_REDACTED&gt;&quot;,
    &quot;content-type&quot;: &quot;multipart/form-data&quot;
}
</code></pre> <p>I get an HTTP <code>400</code> with: <code>Missing initial multi part boundary</code> which makes sense as we are setting the content-type, not requests, so it'll be invalid (as per the linked SO thread above).</p> <p>What am I missing here?</p>
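[Editor's note] A likely cause (my reading, not confirmed by the source): passing everything via `data=` sends `application/x-www-form-urlencoded`, and the `;type=application/json` suffix is curl's `-F` syntax, not part of the file content. `requests` builds a proper multipart body when the file goes through the `files=` argument. A sketch, with placeholder endpoint, token, and file bytes; `prepare()` is used only to show the generated header without sending anything:

```python
import requests

headers = {
    "accept": "application/json",
    "Authorization": "Bearer <BEARER_TOKEN>",  # placeholder
    # no Content-Type here: requests adds multipart/form-data + boundary itself
}

# (None, value) sends a plain form field (like curl's -F 'name=test5');
# a 3-tuple sends a file part as (filename, bytes_or_file_object, content_type).
files = {
    "name": (None, "test5"),
    "type": (None, "notebook"),
    "content": ("some-file.json", b'{"cells": []}', "application/json"),  # stand-in bytes
}

prepared = requests.Request(
    "POST", "https://example.com/v1/endpoint", headers=headers, files=files
).prepare()
print(prepared.headers["Content-Type"])  # multipart/form-data; boundary=...
```

For the real upload, replace the stand-in bytes with `("some-file.json", open(path, "rb"), "application/json")` and call `requests.post(url, headers=headers, files=files)` directly.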
<python>
2024-02-20 04:43:33
0
643
A. Gardner
78,024,426
9,614,610
How to make two mouseMoveEvents work together
<p>I have a QGraphicsEllipseItem and a QGraphicsScene, both of which have custom mouseMoveEvent and mousePressEvent functions. However, for some reason, the mouseMoveEvent only seems to work for the QGraphicsScene. My guess is that the QGraphicsScene event is overriding the QGraphicsEllipseItem event. But, the mousePressEvent works for both for some reason. Is there any way to make the mouseMoveEvent work for both of them as well?</p> <pre><code>from PyQt5.QtWidgets import *


class CustomEllipseItem(QGraphicsEllipseItem):
    def __init__(self, *args, **kwargs):
        super(CustomEllipseItem, self).__init__(*args, **kwargs)

    def mousePressEvent(self, event):
        print(&quot;EllipseItem mousePressEvent&quot;)
        super().mousePressEvent(event)

    def mouseMoveEvent(self, event):
        print(&quot;EllipseItem mouseMoveEvent&quot;)
        super().mouseMoveEvent(event)


class CustomScene(QGraphicsScene):
    def __init__(self, *args, **kwargs):
        super(CustomScene, self).__init__(*args, **kwargs)

    def mousePressEvent(self, event):
        print(&quot;Scene mousePressEvent&quot;)
        super().mousePressEvent(event)

    def mouseMoveEvent(self, event):
        print(&quot;Scene mouseMoveEvent&quot;)
        super().mouseMoveEvent(event)


if __name__ == '__main__':
    app = QApplication([])
    scene = CustomScene()
    view = QGraphicsView(scene)
    ellipse_item = CustomEllipseItem(0, 0, 100, 100)
    scene.addItem(ellipse_item)
    view.show()
    app.exec_()
</code></pre>
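[Editor's note] My understanding of this behavior (not stated in the question): `QGraphicsItem`'s default `mousePressEvent` *ignores* the press unless the item is movable or selectable, so the item never becomes the mouse grabber and subsequent move events stop at the scene. (The custom override still prints on press because it prints before calling `super()`.) The common fix is to make the item movable, or to call `event.accept()` in the item's `mousePressEvent` instead of `super()`. A minimal sketch (the offscreen platform is set only so it runs without a display):

```python
import os
os.environ.setdefault("QT_QPA_PLATFORM", "offscreen")  # headless; drop in a real app

from PyQt5.QtWidgets import (QApplication, QGraphicsEllipseItem, QGraphicsItem,
                             QGraphicsScene)

app = QApplication([])
scene = QGraphicsScene()
ellipse_item = QGraphicsEllipseItem(0, 0, 100, 100)

# With ItemIsMovable set, the default mousePressEvent accepts the press, the
# item becomes the mouse grabber, and the scene's super().mouseMoveEvent(event)
# forwards each move to the item, so both overrides fire.
ellipse_item.setFlag(QGraphicsItem.ItemIsMovable)
scene.addItem(ellipse_item)
```

If the item should not actually move, the alternative is to end the item's `mousePressEvent` with `event.accept()` rather than `super().mousePressEvent(event)`.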
<python><pyqt><pyqt5>
2024-02-20 02:03:27
1
417
Vlad