76,500,406
6,242,138
subprocess.run while running as systemd service
<p>I have a Python script which is run periodically (with the help of a systemd timer). It performs some checks and, if needed, sends me an e-mail with the help of DMA. Internally I use <code>subprocess.run()</code> to call the <code>mail</code> Linux command:</p> <pre><code>body = str(&quot;Something is wrong&quot;) subprocess.run([&quot;mail&quot;, &quot;-s&quot;, &quot;[myscript@myserver] - ALERT&quot;, &quot;mymail@domain.com&quot;], input = body.encode()) </code></pre> <p>AFAIK <code>subprocess.run()</code> should wait until the <code>mail</code> invocation finishes. Internally <code>mail</code> calls <code>sendmail</code>, which is provided by DMA. I've noticed strange behavior: when I call my script directly from the command line, I can check in <code>journalctl -r</code> that my mail is actually forwarded to DMA and successfully sent to mymail@domain.com:</p> <pre><code>Jun 17 17:00:59 myserver dma[859]: &lt;mymail@domain.com&gt; delivery successful Jun 17 17:00:57 myserver dma[859]: using SMTP authentication for user mymail Jun 17 17:00:57 myserver dma[859]: Server supports LOGIN authentication Jun 17 17:00:57 myserver dma[859]: Server does not support STARTTLS Jun 17 17:00:57 myserver dma[859]: Server greeting successfully completed Jun 17 17:00:57 myserver dma[859]: SSL initialization successful Jun 17 17:00:57 myserver dma[859]: Server supports STARTTLS Jun 17 17:00:57 myserver dma[859]: Server greeting successfully completed Jun 17 17:00:57 myserver dma[859]: trying remote delivery to smtp.domain.com [AAA.BBB.CCC.DDD] pref 0 Jun 17 17:00:57 myserver dma[859]: using smarthost (smtp.domain.com:587) Jun 17 17:00:57 myserver dma[859]: &lt;mymail@domain.com&gt; trying delivery Jun 17 17:00:57 myserver dma[858]: mail to=&lt;mymail@domain.com&gt; queued as b40adc.5577f82d5a80 Jun 17 17:00:57 myserver dma[858]: new mail from user=root uid=8 envelope_from=&lt;mymail@domain.com&gt; </code></pre> <p>But when the same script is called by the systemd timer, the 
<code>dma</code> process hangs in the middle and the mail is left in the spool:</p> <pre><code>Jun 17 18:00:16 myserver dma[9078]: trying remote delivery to smtp.domain.com [AAA.BBB.CCC.DDD] pref 0 Jun 17 18:00:16 myserver dma[9078]: using smarthost (smtp.domain.com:587) Jun 17 18:00:16 myserver dma[9078]: &lt;mymail@domain.com&gt; trying delivery Jun 17 18:00:16 myserver dma[9077]: mail to=&lt;mymail@domain.com&gt; queued as b404d6.55e1c6b52a80 Jun 17 18:00:16 myserver dma[9077]: new mail from user=root uid=8 envelope_from=&lt;mymail@domain.com&gt; </code></pre> <p>I thought it might be because the parent script finishes before the mail is sent, so I added a 10-second <code>sleep()</code> right after the <code>subprocess.run()</code> call, and that worked fine!</p> <p>So I wonder why this happens and how it can be worked around properly, without tricks like <code>sleep()</code>?</p>
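A likely explanation (worth verifying against the unit's settings): `subprocess.run()` only waits for `mail`, which hands the message to dma's queue and exits, while dma forks a background process to do the actual SMTP delivery. When the script, i.e. the unit's main process, exits, systemd's default `KillMode=control-group` terminates every process still in the unit's cgroup, including the still-delivering dma; the `sleep()` merely gives it time to finish. One fix is on the systemd side (e.g. `KillMode=process`, at the cost of unsupervised stray processes); a Python-side alternative is to deliver synchronously over SMTP so nothing outlives the script. A minimal sketch, with placeholder host and credentials:

```python
import smtplib
from email.message import EmailMessage

def build_alert(body: str, sender: str, recipient: str) -> EmailMessage:
    # Build the same message that `mail -s` was constructing.
    msg = EmailMessage()
    msg["Subject"] = "[myscript@myserver] - ALERT"
    msg["From"] = sender
    msg["To"] = recipient
    msg.set_content(body)
    return msg

def send_alert(msg: EmailMessage, host: str = "smtp.domain.com", port: int = 587) -> None:
    # Blocks until the server accepts the message, so nothing is left
    # running in the unit's cgroup when the script exits.
    with smtplib.SMTP(host, port) as smtp:
        smtp.starttls()
        smtp.login("mymail", "app-password")  # placeholder credentials
        smtp.send_message(msg)

msg = build_alert("Something is wrong", "myscript@myserver", "mymail@domain.com")
```

`send_alert()` is deliberately not invoked in the sketch; its point is that it blocks until the smarthost accepts the message, which is exactly the property that makes it safe under systemd.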
<python><systemd>
2023-06-18 11:52:30
0
373
Drobot Viktor
76,500,405
8,410,477
How to interpolate monthly frequency sample data's missing values with interp1d(x, y) from scipy
<p>I have created monthly sample data <code>data</code>, in which there are missing values in some months, and I hope to fill them in with the <code>interp1d()</code> method. I have implemented it with the following code, but the result is still empty, and I don’t know where the problem lies. May I ask how to modify the code? Many thanks.</p> <pre><code>import pandas as pd import numpy as np from scipy.interpolate import interp1d # Create an example DataFrame data = pd.DataFrame({ 'value': [1.0, 1.2, np.nan, 1.4, 1.6, np.nan, 1.8, 2.0, np.nan, 2.2, 2.4, np.nan] }, index=pd.date_range('2000-01-01', periods=12, freq='M')) # Convert the index to a DateTimeIndex data.index = pd.to_datetime(data.index) # Convert the DateTimeIndex to a PeriodIndex with monthly frequency x = data.index.to_period('M') # Convert the period index to integers x = x.astype(int) # Convert the 'y' column to a numpy array y = data['value'].values # Create the interpolation function f = interp1d(x, y, kind='linear', fill_value=&quot;extrapolate&quot;) # Create a boolean mask that selects the missing values in the 'value' column mask = np.isnan(data['value']) # Create an array with the 'x' values where 'y' is missing x_new = pd.date_range(start=data.index.min(), end=data.index.max(), freq='M')[mask] # Convert the 'x_new' values to dates with monthly frequency x_new_dates = pd.date_range(start=x_new.min(), end=x_new.max(), freq='M') # Interpolate the missing 'y' values y_new = f(x_new_dates.astype(int)) # Create a new column 'value_c' and fill it with the original data # Insert the interpolated 'y' values into the new column data.loc[x_new_dates, 'value_interpolated'] = y_new # Print the DataFrame print(data) </code></pre> <p>Out:</p> <pre><code> value value_interpolated 2000-01-31 1.0 NaN 2000-02-29 1.2 NaN 2000-03-31 NaN NaN 2000-04-30 1.4 NaN 2000-05-31 1.6 NaN 2000-06-30 NaN NaN 2000-07-31 1.8 NaN 2000-08-31 2.0 NaN 2000-09-30 NaN NaN 2000-10-31 2.2 NaN 2000-11-30 2.4 NaN 2000-12-31 NaN NaN </code></pre>
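One probable culprit in the code above: the interpolator is built from the full <code>y</code> array, NaNs included, so evaluations near a NaN sample come back NaN, and the <code>data.loc[x_new_dates, ...]</code> assignment writes those NaNs into the new column. Fitting <code>interp1d</code> only on the known points sidesteps this. A minimal numpy-only sketch of that idea, where integer month positions stand in for the question's <code>PeriodIndex.astype(int)</code> values:

```python
import numpy as np
from scipy.interpolate import interp1d

# Same 12 monthly samples as in the question.
y = np.array([1.0, 1.2, np.nan, 1.4, 1.6, np.nan,
              1.8, 2.0, np.nan, 2.2, 2.4, np.nan])
x = np.arange(len(y))   # month positions; equally spaced like PeriodIndex.astype(int)
mask = np.isnan(y)

# Fit only on the known points, then evaluate at the missing positions.
f = interp1d(x[~mask], y[~mask], kind="linear", fill_value="extrapolate")
filled = y.copy()
filled[mask] = f(x[mask])
```

For the interior gaps alone, `data['value'].interpolate(method='linear')` in pandas does the same job; the trailing NaN needs extrapolation, which is what `fill_value="extrapolate"` supplies here.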
<python><pandas><dataframe><scipy><interpolation>
2023-06-18 11:52:12
1
10,141
ah bon
76,500,401
7,973,301
How to rescale x-axis limited Matplotlib plots
<p>When I plot data using Matplotlib and limit the x-axis to a smaller range, Matplotlib still uses the same y-axis limits as before the x-axis limitation. For example</p> <pre class="lang-py prettyprint-override"><code>import numpy as np import matplotlib.pyplot as plt x = np.linspace(-10,10,100) y = x**2 fig, ax = plt.subplots() ax.plot(x,y) ax.set_xlim(-2, 2) plt.show() </code></pre> <p>gives the following plot:</p> <p><a href="https://i.sstatic.net/VX5Qb.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/VX5Qb.png" alt="Plot image" /></a></p> <p>I would expect that the y-axis approximately ranges from ~0 to ~4, since these are the new data limits in the x-limited region. Is this considered to be a bug? And if not, is there a simple way to achieve the desired output (without truncation of the NumPy array to the desired x-range)?</p>
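This is documented behavior rather than a bug: matplotlib autoscales each axis from the data limits of everything plotted, and <code>set_xlim</code> does not trigger a y-rescale. There is no simple built-in toggle for "autoscale y to the visible x-range" that I would rely on, but recomputing the y-limits from the points inside the current x-window is a short workaround. A sketch, where the 5% padding factor is an arbitrary choice:

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")   # headless backend so the sketch runs anywhere
import matplotlib.pyplot as plt

x = np.linspace(-10, 10, 100)
y = x**2

fig, ax = plt.subplots()
ax.plot(x, y)
ax.set_xlim(-2, 2)

# Rescale y from only the data inside the current x-window.
xmin, xmax = ax.get_xlim()
visible = y[(x >= xmin) & (x <= xmax)]
pad = 0.05 * (visible.max() - visible.min())   # arbitrary 5% margin
ax.set_ylim(visible.min() - pad, visible.max() + pad)
```

This keeps the full arrays plotted (no truncation), so panning back out later still shows all the data.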
<python><matplotlib>
2023-06-18 11:50:57
0
970
Padix Key
76,500,363
5,489,190
Huge differences in accuracy between Matlab and Python (Tensorflow) ANN of the same topology
<p>So far I have worked with neural networks mainly in Matlab. However, some limitations led me to try Python. My choice fell on the Tensorflow library. I am comparing the accuracy of prediction on the very same data set and ANN topology in both environments. First <strong>Matlab</strong> implementation:</p> <pre><code>net = feedforwardnet([18 12 23],'trainrp'); rng('default') %Just for repeatability net.divideParam.trainRatio = 75/100; net.divideParam.valRatio = 15/100; net.divideParam.testRatio = 15/100; net.performFcn='mae'; net.layers{1}.transferFcn = 'logsig'; net.layers{2}.transferFcn = 'softmax'; net.layers{3}.transferFcn = 'softmax'; net.trainParam.showWindow = 0; [net,tr] = train(net,input,target); output=net(input); f=mean(mean(abs(target-output)./((abs(target)+abs(output))/2)))*100; </code></pre> <p>The final error <code>f=8.0164</code>, which is an acceptable value. Now I reproduce the same ANN using <code>tensorflow</code> in <strong>Python</strong>:</p> <pre><code>#For repeatability SEED = 777 random.seed(SEED) tf.random.set_seed(SEED) np.random.seed(SEED) # split the data into train and test set train_x, test_x, train_y, test_y = train_test_split( input_array, output_array, test_size=0.15, random_state=7, shuffle=True) # split the train into train and validation set train_x, val_x, train_y, val_y = train_test_split( train_x, train_y, test_size=0.17, random_state=7, shuffle=True) # Model function definition (21 features in every observation, 11 features in output, the same as in Matlab) model = tf.keras.Sequential([ tf.keras.layers.Input(shape=(21,), name=&quot;Input&quot;), tf.keras.layers.Dense(18, activation = 'sigmoid', name=&quot;Hidden1&quot;), tf.keras.layers.Dense(12, activation ='softmax', name=&quot;Hidden2&quot;), tf.keras.layers.Dense(23, activation ='softmax', name=&quot;Hidden3&quot;), tf.keras.layers.Dense(11, activation = 'linear', name=&quot;Output&quot;)]) # Compilation of model model.compile(optimizer='RMSprop', 
loss='MeanAbsoluteError') # fitting the model model.fit(train_x, train_y, epochs=200, batch_size=64, validation_data=(val_x, val_y),verbose=0) values = model.predict(input_array, verbose=0) Error=abs(values-output_array)/output_array*100 f=np.mean(Error) </code></pre> <p>In this case <code>f=94.4482</code>, which is a huge difference. Could you help me figure out why Python can't handle it? I was trying to mess with <code>epochs</code> and <code>batch_size</code>, but the difference is like +/- 5%, so nothing special. The dataset I use is a bit big and I can't share it, but I hope you can use any dummy data to reproduce my MWE.</p> <h1>Edit:</h1> <p>I still have not solved the problem, but I have a clue that maybe you can use to help me figure it out. Matlab uses some kind of automatic data preprocessing (normalization?) that I think Python does not do. When I switch it off by:</p> <pre><code>net.inputs{1}.processFcns={}; net.output.processFcns={}; </code></pre> <p>Then <code>f=35.5123</code>. Still much lower than in Python, but it suggests that my Python code may be missing some auxiliary operation before the data is sent to the Dense layer.</p>
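The edit's clue is consistent with Matlab's defaults: <code>feedforwardnet</code> preprocesses inputs and targets with <code>mapminmax</code> (each feature rescaled to [-1, 1]), which the Keras model above never does. A numpy sketch of that scaling, to apply before <code>model.fit</code>; the random matrix is a stand-in for the real 21-feature input, and <code>keras.layers.Normalization</code> or sklearn's <code>MinMaxScaler</code> are ready-made alternatives:

```python
import numpy as np

def mapminmax(x, lo=-1.0, hi=1.0):
    # Per-feature linear rescaling to [lo, hi], mimicking the default
    # `mapminmax` preprocessing that Matlab's feedforwardnet applies.
    xmin = x.min(axis=0)
    xmax = x.max(axis=0)
    span = np.where(xmax > xmin, xmax - xmin, 1.0)  # guard constant columns
    return lo + (hi - lo) * (x - xmin) / span

rng = np.random.default_rng(0)
raw = rng.uniform(5, 500, size=(100, 21))  # stand-in for the real input_array
scaled = mapminmax(raw)
```

Note that Matlab applies the same preprocessing to the targets too, so a faithful comparison would scale <code>output_array</code> before training and invert the scaling on the predictions before computing the error.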
<python><matlab><tensorflow><keras><neural-network>
2023-06-18 11:43:42
0
749
Karls
76,500,312
10,771,559
Multiple line graph
<p>I have a dataframe that looks like this:</p> <pre><code>No._trees prop._robin prop._dove 1 0.5 0.6 2 0.6 0.2 </code></pre> <p>The mean proportion of birds, whether robins or doves, will only ever be between 0 and 1. I want to create a graph where the number of trees is the x-axis variable, the y-axis goes from 0 to 1, and there are two lines, one for the robins and one for the doves. I am struggling because all of the examples I have read have the y-variable as a dataframe column, whereas in my case I just want the axis to go from 0 to 1.</p> <p>Reproducible dataframe:</p> <pre><code>import pandas as pd d = {'No._trees': [1, 2], 'prop._robin': [0.5, 0.6], 'prop._dove':[0.6,0.2]} df = pd.DataFrame(data=d) </code></pre>
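The y-axis range is independent of which columns are plotted: passing a list of column names to <code>DataFrame.plot</code> draws one line per column, and <code>ylim</code> pins the axis to [0, 1]. A sketch using the reproducible dataframe from the question (the Agg backend is only there so it runs headless):

```python
import pandas as pd
import matplotlib
matplotlib.use("Agg")
import matplotlib.pyplot as plt

d = {'No._trees': [1, 2], 'prop._robin': [0.5, 0.6], 'prop._dove': [0.6, 0.2]}
df = pd.DataFrame(data=d)

# One line per proportion column, trees on the x-axis, fixed 0-1 y-axis.
ax = df.plot(x='No._trees', y=['prop._robin', 'prop._dove'], ylim=(0, 1))
ax.set_ylabel('proportion of birds')
```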
<python><matplotlib>
2023-06-18 11:31:08
1
578
Niam45
76,500,261
9,142,914
How to visualize an OpenCV imshow() window on a VPS server?
<p>I am running this code on my remote VPS server:</p> <pre><code>import cv2 from ultralytics import YOLO model = YOLO('yolov5nu.pt') cap = cv2.VideoCapture('Bowling1.mp4') while cap.isOpened(): success, frame = cap.read() if success: results = model(frame) annotated_frame = results[0].plot() cv2.imshow(&quot;YOLOv5 Inference&quot;, annotated_frame) if cv2.waitKey(1) &amp; 0xFF == ord(&quot;q&quot;): break else: break cap.release() cv2.destroyAllWindows() </code></pre> <p>But then I get:</p> <pre><code>[W NNPACK.cpp:51] Could not initialize NNPACK! Reason: Unsupported hardware. 0: 384x640 (no detections), 90.2ms Speed: 2.2ms preprocess, 90.2ms inference, 0.4ms postprocess per image at shape (1, 3, 640, 640) qt.qpa.xcb: could not connect to display qt.qpa.plugin: Could not load the Qt platform plugin &quot;xcb&quot; in &quot;/root/anaconda3/envs/yolo5/lib/python3.9/site-packages/cv2/qt/plugins&quot; even though it was found. This application failed to start because no Qt platform plugin could be initialized. Reinstalling the application may fix this problem. </code></pre> <p>When I simply double-click on the video 'Bowling1.mp4' it displays in VSCode and I can watch it:</p> <p><a href="https://i.sstatic.net/CZdXe.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/CZdXe.jpg" alt="enter image description here" /></a></p> <p>Can't I visualize the VideoCapture output the same way?</p> <p>Thanks</p>
<python><user-interface><visual-studio-code><vps><vscode-remote>
2023-06-18 11:17:17
1
688
ailauli69
76,500,170
20,740,043
Create subplots by overlapping two dataframes of different shapes and column names, for every group/id
<p>I have the below <strong>two dataframes</strong> with <strong>different shapes</strong> and <strong>column names</strong>:</p> <pre><code>#Load the required libraries import pandas as pd import matplotlib.pyplot as plt #Create dataset_1 data_set_1 = {'id': [1, 1, 1,1, 1, 1, 1, 1, 1, 2, 2, 2, 2, 3, 3, 3, 3, 3, 3, 3,3, 4, 4, 4, 4,], 'cycle_1': [0.0, 0.2,0.4, 0.6, 0.8, 1,1.2,1.4,1.6, 0.0, 0.2,0.4, 0.6, 0.0, 0.2,0.4, 0.6, 0.8,1.0,1.2,1.4, 0.0, 0.2,0.4, 0.6, ], 'Salary_1': [6, 7, 7, 7,8,9,10,11,12, 3, 4, 4, 4, 2, 8,9,10,11,12,13,14, 1, 8,9,10,], 'Children_1': ['Yes', 'No', 'Yes', 'Yes', 'Yes', 'Yes', 'No','No', 'Yes', 'Yes', 'Yes', 'Yes', 'No', 'Yes', 'No','Yes', 'Yes', 'No','No', 'Yes','Yes', 'Yes', 'Yes', 'No','Yes', ], 'Days_1': [141, 123, 128, 66, 66, 120, 141, 52, 52, 141, 96, 120,120, 141, 15,123, 128, 66, 120, 141, 141, 141, 141,123, 128, ], } #Convert to dataframe_1 df_1 = pd.DataFrame(data_set_1) print(&quot;\n df_1 = \n&quot;,df_1) #Create dataset_2 data_set_2 = {'id': [1, 1, 1, 1, 1, 1,1, 1, 1, 1, 1, 1, 2, 2, 2, 2, 2, 2, 2, 3, 3, 3, 3, 3, 3, 3,3, 4, 4, 4, 4, 4,4,], 'cycle_2': [0.0, 0.2,0.4, 0.6, 0.8, 1,1.2,1.4,1.6,1.8,2.0,2.2, 0.0, 0.2,0.4, 0.6,0.8,1.0,1.2, 0.0, 0.2,0.4, 0.6, 0.8,1.0,1.2,1.4, 0.0, 0.2,0.4, 0.6, 0.8,1.0,], 'Salary_2': [7, 8, 8, 8,8,9,14,21,12,19,14,20, 1, 6, 3, 8,4,9,8, 6, 4,9,10,4,12,13,6, 1, 4,9,10,9,4,], 'Children_2': ['Yes', 'No', 'Yes', 'Yes', 'Yes', 'Yes', 'No','No', 'Yes', 'Yes', 'Yes', 'No', 'Yes', 'Yes', 'Yes', 'No', 'Yes', 'Yes', 'Yes', 'Yes', 'No','Yes', 'Yes', 'No','No', 'Yes','Yes', 'Yes', 'Yes', 'No','Yes', 'Yes','Yes',], 'Days_2': [141, 123, 128, 66, 66, 120, 141, 52,96, 120, 141, 52, 141, 96, 120,120, 141, 52,96, 141, 15,123, 128, 66, 120, 141, 141, 141, 141,123, 128, 66,67,], } #Convert to dataframe_2 df_2 = pd.DataFrame(data_set_2) print(&quot;\n df_2 = \n&quot;,df_2) </code></pre> <p>Now, here I wish to plot the cycle_1 vs Salary_1, and overlap it with cycle_2 vs Salary_2, for every id in different subplots.</p> <p>Thus I 
need to use subplot function as such:</p> <pre><code>## Plot for all id's plt_fig_verify = plt.figure(figsize=(10,8)) ## id1: plt.subplot(4,1,1) plt.plot(df_1.groupby(by=&quot;id&quot;).get_group(1)['cycle_1'], df_1.groupby(by=&quot;id&quot;).get_group(1)['Salary_1'], 'b', linewidth = '1', label ='id1: Salary_1 of df_1') plt.plot(df_2.groupby(by=&quot;id&quot;).get_group(1)['cycle_2'], df_2.groupby(by=&quot;id&quot;).get_group(1)['Salary_2'], 'r', linewidth = '1', label ='id1: Salary_2 of df_2') plt.xlabel('cycle') plt.ylabel('Salary') plt.legend() ## id2: plt.subplot(4,1,2) plt.plot(df_1.groupby(by=&quot;id&quot;).get_group(2)['cycle_1'], df_1.groupby(by=&quot;id&quot;).get_group(2)['Salary_1'], 'b', linewidth = '1', label ='id2: Salary_1 of df_1') plt.plot(df_2.groupby(by=&quot;id&quot;).get_group(2)['cycle_2'], df_2.groupby(by=&quot;id&quot;).get_group(2)['Salary_2'], 'r', linewidth = '1', label ='id2: Salary_2 of df_2') plt.xlabel('cycle') plt.ylabel('Salary') plt.legend() ## id3: plt.subplot(4,1,3) plt.plot(df_1.groupby(by=&quot;id&quot;).get_group(3)['cycle_1'], df_1.groupby(by=&quot;id&quot;).get_group(3)['Salary_1'], 'b', linewidth = '1', label ='id3: Salary_1 of df_1') plt.plot(df_2.groupby(by=&quot;id&quot;).get_group(3)['cycle_2'], df_2.groupby(by=&quot;id&quot;).get_group(3)['Salary_2'], 'r', linewidth = '1', label ='id3: Salary_2 of df_2') plt.xlabel('cycle') plt.ylabel('Salary') plt.legend() ## id4: plt.subplot(4,1,4) plt.plot(df_1.groupby(by=&quot;id&quot;).get_group(4)['cycle_1'], df_1.groupby(by=&quot;id&quot;).get_group(4)['Salary_1'], 'b', linewidth = '1', label ='id4: Salary_1 of df_1') plt.plot(df_2.groupby(by=&quot;id&quot;).get_group(4)['cycle_2'], df_2.groupby(by=&quot;id&quot;).get_group(4)['Salary_2'], 'r', linewidth = '1', label ='id4: Salary_2 of df_2') plt.xlabel('cycle') plt.ylabel('Salary') plt.legend() plt.show() </code></pre> <p>The plot looks as such:</p> <p><a href="https://i.sstatic.net/NV3uq.png" rel="nofollow noreferrer"><img 
src="https://i.sstatic.net/NV3uq.png" alt="enter image description here" /></a></p> <p>However, here I need to write the code for the subplot function four times, <strong>with different column names</strong>, i.e. once for each of the four id's of the dataframes, and then overlap.</p> <p>Is there a way to use some iterative function, writing the subplot call only once, and still get all four overlapped <code>subplots</code>?</p> <p>Can somebody please let me know how to achieve this task in <code>Python</code>?</p>
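Since the four subplot blocks differ only in the id, a loop over the axes returned by <code>plt.subplots</code> removes the repetition. A sketch with small stand-in frames shaped like <code>df_1</code>/<code>df_2</code>; the real dataframes drop in unchanged, because the loop relies only on the <code>id</code>, <code>cycle_*</code>, and <code>Salary_*</code> columns:

```python
import pandas as pd
import matplotlib
matplotlib.use("Agg")   # headless backend so the sketch runs anywhere
import matplotlib.pyplot as plt

# Small stand-ins with the same column layout as the question's df_1/df_2.
df_1 = pd.DataFrame({'id': [1, 1, 2, 2, 3, 3, 4, 4],
                     'cycle_1': [0.0, 0.2] * 4,
                     'Salary_1': [6, 7, 3, 4, 2, 8, 1, 8]})
df_2 = pd.DataFrame({'id': [1, 1, 2, 2, 3, 3, 4, 4],
                     'cycle_2': [0.0, 0.2] * 4,
                     'Salary_2': [7, 8, 1, 6, 6, 4, 1, 4]})

ids = sorted(df_1['id'].unique())
fig, axes = plt.subplots(len(ids), 1, figsize=(10, 8))

for ax, i in zip(axes, ids):
    g1 = df_1[df_1['id'] == i]
    g2 = df_2[df_2['id'] == i]
    ax.plot(g1['cycle_1'], g1['Salary_1'], 'b', linewidth=1,
            label=f'id{i}: Salary_1 of df_1')
    ax.plot(g2['cycle_2'], g2['Salary_2'], 'r', linewidth=1,
            label=f'id{i}: Salary_2 of df_2')
    ax.set_xlabel('cycle')
    ax.set_ylabel('Salary')
    ax.legend()
```

Deriving `ids` from the data also means the figure adapts automatically if a fifth id appears.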
<python><pandas><matplotlib><group-by>
2023-06-18 10:54:01
1
439
NN_Developer
76,500,139
4,451,315
Filter and aggregate based on other dataframe
<p>Say I have</p> <pre class="lang-py prettyprint-override"><code>df1 = pl.DataFrame({'start': [1., 2., 4.], 'end': [2., 4., 6.]}) df2 = pl.DataFrame({'idx': [1., 1.7, 2.3, 2.5, 3., 4.], 'values': [3, 1, 4, 2, 3, 5]}) </code></pre> <p>They look like this:</p> <pre class="lang-py prettyprint-override"><code>In [8]: df1 Out[8]: shape: (3, 2) ┌───────┬─────┐ │ start ┆ end │ │ --- ┆ --- │ │ f64 ┆ f64 │ ╞═══════╪═════╡ │ 1.0 ┆ 2.0 │ │ 2.0 ┆ 4.0 │ │ 4.0 ┆ 6.0 │ └───────┴─────┘ In [9]: df2 Out[9]: shape: (6, 2) ┌─────┬────────┐ │ idx ┆ values │ │ --- ┆ --- │ │ f64 ┆ i64 │ ╞═════╪════════╡ │ 1.0 ┆ 3 │ │ 1.7 ┆ 1 │ │ 2.3 ┆ 4 │ │ 2.5 ┆ 2 │ │ 3.0 ┆ 3 │ │ 4.0 ┆ 5 │ └─────┴────────┘ </code></pre> <p>I would like to end up with something like this:</p> <pre class="lang-py prettyprint-override"><code>In [6]: expected = pl.DataFrame({ ...: 'start': [1., 2., 4.], ...: 'end': [2., 4.5, 6.], ...: 'sum_values': [4, 9, 5] ...: }) In [7]: expected Out[7]: shape: (3, 3) ┌───────┬─────┬────────────┐ │ start ┆ end ┆ sum_values │ │ --- ┆ --- ┆ --- │ │ f64 ┆ f64 ┆ i64 │ ╞═══════╪═════╪════════════╡ │ 1.0 ┆ 2.0 ┆ 4 │ │ 2.0 ┆ 4.5 ┆ 9 │ │ 4.0 ┆ 6.0 ┆ 5 │ └───────┴─────┴────────────┘ </code></pre> <p>Here's an inefficient way of doing it I came up with, using <code>map_rows</code>:</p> <pre class="lang-py prettyprint-override"><code>( df1.with_columns( df1.map_rows( lambda row: df2.filter( pl.col(&quot;idx&quot;).is_between(row[0], row[1], closed=&quot;left&quot;) )[&quot;values&quot;].sum() )[&quot;map&quot;].alias(&quot;sum_values&quot;) ) ) </code></pre> <p>It gives the correct output, but because it uses <code>map_rows</code> and a Python lambda function, it's not as performant as it could be.</p> <p>Is there a way to write this using polars native expressions API?</p>
<python><python-polars>
2023-06-18 10:48:07
1
11,062
ignoring_gravity
76,500,017
5,400,597
Faust streaming agents stop event consumption after some days
<p>I have an application using faust-streaming. The application has around 20 agents, each consuming events from different Kafka topics. The problem is, after several days, the agents stop consuming and I have to restart the application container.</p> <p>Here is an example agent:</p> <pre><code>@app.agent(event_sent_topic, concurrency=20) async def send_event_agent(stream): &quot;&quot;&quot;&quot;&quot;&quot; task_name = asyncio.current_task().get_name() interface = Interface() async for records in stream.take(10, 1): for record in records: send_event_tasks_queue[record.user_id].append(task_name) await _wrap_send_event(record=record, interface=interface) await asyncio.sleep(0) # Skipping current event loop run for giving execution chance to other tasks. </code></pre> <p>The container logs now contain only some warnings like the ones below:</p> <pre><code>* m_consumer.m_agent -----&gt; ============================================================ ['Stack for &lt;coroutine object movie_updated_agent at 0x7f303428bce0&gt; (most recent call last):\n File &quot;/project/m_consumer/movie_updated_consumer.py&quot;, line 22, in movie_updated_agent\n async for records in stream.take(faust_config.max_stream_take, 1):\n File &quot;async_generator_asend&quot;, line -1, in [rest of traceback truncated]\n'] * z_consumer.retry_send_product_agent -----&gt; ============================================================ ['Stack for &lt;coroutine object retry_send_product_agent at 0x7f30341506b0&gt; (most recent call last):\n File &quot;/project/z_consumer/retry_consumer.py&quot;, line 29, in retry_send_product_agent\n async for records in stream.take(faust_config.max_stream_take, 1):\n File &quot;async_generator_asend&quot;, line -1, in [rest of traceback truncated]\n', 'Stack for &lt;coroutine object retry_send_product_agent at 0x7f303428bf00&gt; (most recent call last):\n File &quot;/project/z_consumer/retry_consumer.py&quot;, line 29, in retry_send_product_agent\n async for records in 
stream.take(faust_config.max_stream_take, 1):\n File &quot;async_generator_asend&quot;, line -1, in [rest of traceback truncated]\n', 'Stack for &lt;coroutine object retry_send_product_agent at 0x7f3034150af0&gt; (most recent call last):\n File &quot;/project/z_consumer/retry_consumer.py&quot;, line 29, in retry_send_product_agent\n async for records in stream.take(faust_config.max_stream_take, 1):\n File &quot;async_generator_asend&quot;, line -1, in [rest of traceback truncated]\n', 'Stack for &lt;coroutine object retry_send_product_agent at 0x7f30341508d0&gt; (most recent call last):\n File &quot;/project/z_consumer/retry_consumer.py&quot;, line 29, in retry_send_product_agent\n async for records in stream.take(faust_config.max_stream_take, 1):\n File &quot;async_generator_asend&quot;, line -1, in [rest of traceback truncated]\n', 'Stack for &lt;coroutine object retry_send_product_agent at 0x7f3034150490&gt; (most recent call last):\n File &quot;/project/z_consumer/retry_consumer.py&quot;, line 29, in retry_send_product_agent\n async for records in stream.take(faust_config.max_stream_take, 1):\n File &quot;async_generator_asend&quot;, line -1, in [rest of traceback truncated]\n'] * z.z_consumer.send_user_agent -----&gt; ============================================================ ['Stack for &lt;coroutine object send_user_agent at 0x7f30341f8710&gt; (most recent call last):\n File &quot;/project/z_consumer/consumer.py&quot;, line 50, in send_user_agent\n async for records in stream.take(faust_config.max_stream_take, 1):\n File &quot;async_generator_asend&quot;, line -1, in [rest of traceback truncated]\n', 'Stack for &lt;coroutine object send_user_agent at 0x7f30341f8dd0&gt; (most recent call last):\n File &quot;/project/z_consumer/consumer.py&quot;, line 50, in send_user_agent\n async for records in stream.take(faust_config.max_stream_take, 1):\n File &quot;async_generator_asend&quot;, line -1, in [rest of traceback truncated]\n', 'Stack for &lt;coroutine 
object send_user_agent at 0x7f30341f9010&gt; (most recent call last):\n File &quot;/project/z_consumer/consumer.py&quot;, line 50, in send_user_agent\n async for records in stream.take(faust_config.max_stream_take, 1):\n File &quot;async_generator_asend&quot;, line -1, in [rest of traceback truncated]\n', 'Stack for &lt;coroutine object send_user_agent at 0x7f30341f8830&gt; (most recent call last):\n File &quot;/project/z_consumer/consumer.py&quot;, line 50, in send_user_agent\n async for records in stream.take(faust_config.max_stream_take, 1):\n File &quot;async_generator_asend&quot;, line -1, in [rest of traceback truncated]\n', 'Stack for &lt;coroutine object send_user_agent at 0x7f30341f8ef0&gt; (most recent call last):\n File &quot;/project/z_consumer/consumer.py&quot;, line 50, in send_user_agent\n async for records in stream.take(faust_config.max_stream_take, 1):\n File &quot;async_generator_asend&quot;, line -1, in [rest of traceback truncated]\n', 'Stack for &lt;coroutine object send_user_agent at 0x7f30341f95b0&gt; (most recent call last):\n File &quot;/project/z_consumer/consumer.py&quot;, line 50, in send_user_agent\n async for records in stream.take(faust_config.max_stream_take, 1):\n File &quot;async_generator_asend&quot;, line -1, in [rest of traceback truncated]\n', 'Stack for &lt;coroutine object send_user_agent at 0x7f30341f8950&gt; (most recent call last):\n File &quot;/project/z_consumer/consumer.py&quot;, line 50, in send_user_agent\n async for records in stream.take(faust_config.max_stream_take, 1):\n File &quot;async_generator_asend&quot;, line -1, in [rest of traceback truncated]\n', 'Stack for &lt;coroutine object send_user_agent at 0x7f30341f9130&gt; (most recent call last):\n File &quot;/project/z_consumer/consumer.py&quot;, line 50, in send_user_agent\n async for records in stream.take(faust_config.max_stream_take, 1):\n File &quot;async_generator_asend&quot;, line -1, in [rest of traceback truncated]\n', 'Stack for &lt;coroutine object 
send_user_agent at 0x7f30341f96d0&gt; (most recent call last):\n File &quot;/project/z_consumer/consumer.py&quot;, line 50, in send_user_agent\n async for records in stream.take(faust_config.max_stream_take, 1):\n File &quot;async_generator_asend&quot;, line -1, in [rest of traceback truncated]\n', 'Stack for &lt;coroutine object send_user_agent at 0x7f30341f84d0&gt; (most recent call last):\n File &quot;/project/z_consumer/consumer.py&quot;, line 50, in send_user_agent\n async for records in stream.take(faust_config.max_stream_take, 1):\n File &quot;async_generator_asend&quot;, line -1, in [rest of traceback truncated]\n', 'Stack for &lt;coroutine object send_user_agent at 0x7f30341f8a70&gt; (most recent call last):\n File &quot;/project/z_consumer/consumer.py&quot;, line 50, in send_user_agent\n async for records in stream.take(faust_config.max_stream_take, 1):\n File &quot;async_generator_asend&quot;, line -1, in [rest of traceback truncated]\n', 'Stack for &lt;coroutine object send_user_agent at 0x7f30341f9250&gt; (most recent call last):\n File &quot;/project/z_consumer/consumer.py&quot;, line 50, in send_user_agent\n async for records in stream.take(faust_config.max_stream_take, 1):\n File &quot;async_generator_asend&quot;, line -1, in [rest of traceback truncated]\n', 'Stack for &lt;coroutine object send_user_agent at 0x7f30341f85f0&gt; (most recent call last):\n File &quot;/project/z_consumer/consumer.py&quot;, line 50, in send_user_agent\n async for records in stream.take(faust_config.max_stream_take, 1):\n File &quot;async_generator_asend&quot;, line -1, in [rest of traceback truncated]\n', 'Stack for &lt;coroutine object send_user_agent at 0x7f30341f97f0&gt; (most recent call last):\n File &quot;/project/z_consumer/consumer.py&quot;, line 50, in send_user_agent\n async for records in stream.take(faust_config.max_stream_take, 1):\n File &quot;async_generator_asend&quot;, line -1, in [rest of traceback truncated]\n', 'Stack for &lt;coroutine object 
send_user_agent at 0x7f30341f8b90&gt; (most recent call last):\n File &quot;/project/z_consumer/consumer.py&quot;, line 50, in send_user_agent\n async for records in stream.take(faust_config.max_stream_take, 1):\n File &quot;async_generator_asend&quot;, line -1, in [rest of traceback truncated]\n', 'Stack for &lt;coroutine object send_user_agent at 0x7f30341f9370&gt; (most recent call last):\n File &quot;/project/z_consumer/consumer.py&quot;, line 50, in send_user_agent\n async for records in stream.take(faust_config.max_stream_take, 1):\n File &quot;async_generator_asend&quot;, line -1, in [rest of traceback truncated]\n', 'Stack for &lt;coroutine object send_user_agent at 0x7f303427e7b0&gt; (most recent call last):\n File &quot;/project/z_consumer/consumer.py&quot;, line 50, in send_user_agent\n async for records in stream.take(faust_config.max_stream_take, 1):\n File &quot;async_generator_asend&quot;, line -1, in [rest of traceback truncated]\n', 'Stack for &lt;coroutine object send_user_agent at 0x7f30341f8170&gt; (most recent call last):\n File &quot;/project/z_consumer/consumer.py&quot;, line 50, in send_user_agent\n async for records in stream.take(faust_config.max_stream_take, 1):\n File &quot;async_generator_asend&quot;, line -1, in [rest of traceback truncated]\n', 'Stack for &lt;coroutine object send_user_agent at 0x7f30341f8cb0&gt; (most recent call last):\n File &quot;/project/z_consumer/consumer.py&quot;, line 50, in send_user_agent\n async for records in stream.take(faust_config.max_stream_take, 1):\n File &quot;async_generator_asend&quot;, line -1, in [rest of traceback truncated]\n', 'Stack for &lt;coroutine object send_user_agent at 0x7f30341f9490&gt; (most recent call last):\n File &quot;/project/z_consumer/consumer.py&quot;, line 50, in send_user_agent\n async for records in stream.take(faust_config.max_stream_take, 1):\n File &quot;async_generator_asend&quot;, line -1, in [rest of traceback truncated]\n'] * 
zb_core.transport_layer.zb_product_sent_consumer.consumer.send_product_agent -----&gt; ============================================================ ['Stack for &lt;coroutine object send_product_agent at 0x7f3034108dd0&gt; (most recent call last):\n  File &quot;/project/z_consumer/consumer.py&quot;, line 54, in send_product_agent\n    await _wrap_send_product_to_zb(record=record, zb_interface=interface)\n  File &quot;/project/z_consumer/consumer.py&quot;, line 34, in _wrap_send_product_to_zb\n    await asyncio.sleep(0)  # Skipping current event loop run for giving execution chance to other tasks.\n  File &quot;/usr/local/lib/python3.11/asyncio/tasks.py&quot;, line 630, in sleep\n    await __sleep0()\n  File &quot;/usr/local/lib/python3.11/asyncio/tasks.py&quot;, line 624, in __sleep0\n    yield\n',
 [... 19 more identical stacks for send_product_agent coroutines at other addresses, all parked on the same await asyncio.sleep(0) in _wrap_send_product_to_zb ...]]
[2023-06-15 08:38:54,210] [1] [WARNING] Executing &lt;Task pending name='&lt;coroutine object MethodQueueWorker._method_queue_do_work at 0x7f30346f8160&gt;' coro=&lt;Service._execute_task() running at /usr/local/lib/python3.11/site-packages/mode/services.py:843&gt; wait_for=&lt;Future pending cb=[Task.task_wakeup()] created at /usr/local/lib/python3.11/asyncio/base_events.py:427&gt; cb=[Service._on_future_done()] created at /usr/local/lib/python3.11/asyncio/tasks.py:670&gt; took 0.297 seconds
[2023-06-15 08:38:54,462] [1] [INFO] Timer Monitor.sampler woke up too late, with a drift of +0.336312010884285 runtime=4.301220178604126e-05 sleeptime=1.336312010884285
[2023-06-15 08:38:54,838] [1] [WARNING] Executing &lt;Task pending name='Task-190' coro=&lt;Agent._execute_actor() running at /usr/local/lib/python3.11/site-packages/faust/agents/agent.py:674&gt; cb=[Task.task_wakeup()] created at /usr/local/lib/python3.11/site-packages/faust/agents/agent.py:664&gt; took 0.102 seconds
[... the same MethodQueueWorker._method_queue_do_work warnings and Monitor.sampler drift messages repeat roughly every 1.3 s until 08:39:05 ...]
[2023-06-15 08:39:06,858] [1] [WARNING] [^--Consumer]: wait_empty: Waiting for tasks
[2023-06-15 08:55:59,973] [1] [INFO] Timer Monitor.sampler woke up too late, with a drift of +0.32391348481178284 runtime=4.1179358959198e-05 sleeptime=1.3239134848117828
[2023-06-15 08:56:01,013] [1] [WARNING] Executing &lt;Task pending name='Task-763' coro=&lt;AIOKafkaConnection._read() running at /usr/local/lib/python3.11/site-packages/aiokafka/conn.py:525&gt; wait_for=&lt;Future pending cb=[Task.task_wakeup()] created at /usr/local/lib/python3.11/asyncio/base_events.py:427&gt; cb=[AIOKafkaConnection._on_read_task_error(&lt;weakref at 0...x7f30230ac330&gt;)()] created at /usr/local/lib/python3.11/site-packages/aiokafka/util.py:26&gt; took 0.291 seconds </code></pre> <p>If you are wondering, the <code>_wrap_send_event</code> job is to
send an HTTP request to an endpoint using the aiohttp library.</p>
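Every dumped stack above is parked on the `await asyncio.sleep(0)` inside `_wrap_send_product_to_zb`, which suggests the tasks were spawned but never resumed, so the consumer's `wait_empty` can never drain them. One defensive sketch (hypothetical names, not the project's real code): bound each send with `asyncio.wait_for` so a stalled HTTP call fails fast instead of leaving a pending task behind forever.

```python
import asyncio

async def do_http_post(record) -> None:
    # Placeholder for the real aiohttp request; here we simulate an
    # endpoint that never answers, which is one way tasks can pile up.
    await asyncio.sleep(3600)

async def send_one(record, timeout_s: float = 10.0) -> bool:
    try:
        # Bound the whole send: if the endpoint stalls, the task fails fast
        # instead of sitting pending and blocking the worker's shutdown/rebalance.
        await asyncio.wait_for(do_http_post(record), timeout=timeout_s)
        return True
    except asyncio.TimeoutError:
        return False

async def main() -> list[bool]:
    # Three simulated records, each with a deliberately tiny timeout.
    return await asyncio.gather(*(send_one(r, timeout_s=0.01) for r in range(3)))

results = asyncio.run(main())  # every simulated send times out -> [False, False, False]
```

With a bound like this, a dead endpoint surfaces as explicit failures (which can be logged or retried) rather than as an ever-growing set of suspended coroutines.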
<python><python-3.x><django><asynchronous><faust>
2023-06-18 10:13:29
0
1,239
Mr Alihoseiny
76,500,007
726,730
save a QWidget as an image
<p>In PyQt5 I try:</p> <pre class="lang-py prettyprint-override"><code>fileName = &quot;qtable_widget.png&quot;
pixmap = QtGui.QPixmap(self.parent_self.main_self.ui_scheduled_transmitions_create_window.review_table.size())
self.parent_self.main_self.ui_scheduled_transmitions_create_window.review_table.render(pixmap)
pixmap.save(&quot;qtable_widget.png&quot;, &quot;PNG&quot;, -1)
</code></pre> <p>But this captures only the visible viewport of the QTableWidget, which has a vertical scrollbar. Is there any way to capture the whole QTableWidget, including the rows scrolled out of view?</p>
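One common workaround (a sketch, not the only way): temporarily resize the table to the full size of its contents so that `render()` sees every row, then restore the original geometry. This assumes the whole table fits into a single pixmap in memory; `grab_full_table` is a hypothetical helper name.

```python
import os
import sys

os.environ.setdefault("QT_QPA_PLATFORM", "offscreen")  # lets this demo run headless
from PyQt5 import QtGui, QtWidgets

def grab_full_table(table: QtWidgets.QTableWidget) -> QtGui.QPixmap:
    """Render the whole table, including rows hidden behind the scrollbar."""
    frame = 2 * table.frameWidth()
    full_w = table.verticalHeader().width() + frame + sum(
        table.columnWidth(c) for c in range(table.columnCount()))
    full_h = table.horizontalHeader().height() + frame + sum(
        table.rowHeight(r) for r in range(table.rowCount()))
    old_size = table.size()
    table.resize(full_w, full_h)           # grow past the viewport
    pixmap = QtGui.QPixmap(table.size())
    table.render(pixmap)                   # nothing is scrolled out of view now
    table.resize(old_size)                 # restore the on-screen geometry
    return pixmap

app = QtWidgets.QApplication(sys.argv)
table = QtWidgets.QTableWidget(200, 3)     # 200 rows: far more than fit on screen
pix = grab_full_table(table)
pix.save("qtable_widget.png", "PNG")
```

The resize is not visible to the user if it happens within one event-loop turn, but very large tables can produce huge pixmaps, so this approach trades memory for simplicity.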
<python><pyqt5><screenshot><qtablewidget>
2023-06-18 10:11:17
1
2,427
Chris P
76,499,877
20,740,043
Create subplot, by overlapping two dataframes, for every group/id
<p>I have the below two dataframe:</p> <pre><code>#Load the required libraries import pandas as pd import matplotlib.pyplot as plt #Create dataset_1 data_set_1 = {'id': [1, 1, 1, 1, 1, 1,1, 1, 1, 1, 1, 1, 2, 2, 2, 2, 2, 2, 2, 3, 3, 3, 3, 3, 3, 3,3, 4, 4, 4, 4, 4,4,], 'cycle': [0.0, 0.2,0.4, 0.6, 0.8, 1,1.2,1.4,1.6,1.8,2.0,2.2, 0.0, 0.2,0.4, 0.6,0.8,1.0,1.2, 0.0, 0.2,0.4, 0.6, 0.8,1.0,1.2,1.4, 0.0, 0.2,0.4, 0.6, 0.8,1.0,], 'Salary': [6, 7, 7, 7,8,9,10,11,12,13,14,15, 3, 4, 4, 4,4,5,6, 2, 8,9,10,11,12,13,14, 1, 8,9,10,11,12,], 'Children': ['Yes', 'No', 'Yes', 'Yes', 'Yes', 'Yes', 'No','No', 'Yes', 'Yes', 'Yes', 'No', 'Yes', 'Yes', 'Yes', 'No', 'Yes', 'Yes', 'Yes', 'Yes', 'No','Yes', 'Yes', 'No','No', 'Yes','Yes', 'Yes', 'Yes', 'No','Yes', 'Yes','Yes',], 'Days': [141, 123, 128, 66, 66, 120, 141, 52,96, 120, 141, 52, 141, 96, 120,120, 141, 52,96, 141, 15,123, 128, 66, 120, 141, 141, 141, 141,123, 128, 66,67,], } #Convert to dataframe_1 df_1 = pd.DataFrame(data_set_1) print(&quot;\n df_1 = \n&quot;,df_1) #Create dataset_2 data_set_2 = {'id': [1, 1, 1, 1, 1, 1,1, 1, 1, 1, 1, 1, 2, 2, 2, 2, 2, 2, 2, 3, 3, 3, 3, 3, 3, 3,3, 4, 4, 4, 4, 4,4,], 'cycle': [0.0, 0.2,0.4, 0.6, 0.8, 1,1.2,1.4,1.6,1.8,2.0,2.2, 0.0, 0.2,0.4, 0.6,0.8,1.0,1.2, 0.0, 0.2,0.4, 0.6, 0.8,1.0,1.2,1.4, 0.0, 0.2,0.4, 0.6, 0.8,1.0,], 'Salary': [7, 8, 8, 8,8,9,14,21,12,19,14,20, 1, 6, 3, 8,4,9,8, 6, 4,9,10,4,12,13,6, 1, 4,9,10,9,4,], 'Children': ['Yes', 'No', 'Yes', 'Yes', 'Yes', 'Yes', 'No','No', 'Yes', 'Yes', 'Yes', 'No', 'Yes', 'Yes', 'Yes', 'No', 'Yes', 'Yes', 'Yes', 'Yes', 'No','Yes', 'Yes', 'No','No', 'Yes','Yes', 'Yes', 'Yes', 'No','Yes', 'Yes','Yes',], 'Days': [141, 123, 128, 66, 66, 120, 141, 52,96, 120, 141, 52, 141, 96, 120,120, 141, 52,96, 141, 15,123, 128, 66, 120, 141, 141, 141, 141,123, 128, 66,67,], } #Convert to dataframe_2 df_2 = pd.DataFrame(data_set_2) print(&quot;\n df_2 = \n&quot;,df_2) </code></pre> <p>Now, here I wish to plot the <code>cycle</code> vs <code>Salary</code>, and overlap for 
two dataframes for every <code>id</code>, in one single plot. Thus I use the subplot function as follows:</p> <pre><code>## Plot for all id's
plt_fig_verify = plt.figure(figsize=(10,8))

## id1:
plt.subplot(4,1,1)
plt.plot(df_1.groupby(by=&quot;id&quot;).get_group(1)['cycle'],
         df_1.groupby(by=&quot;id&quot;).get_group(1)['Salary'],
         'b', linewidth = '1', label ='id1: df_1')
plt.plot(df_2.groupby(by=&quot;id&quot;).get_group(1)['cycle'],
         df_2.groupby(by=&quot;id&quot;).get_group(1)['Salary'],
         'r', linewidth = '1', label ='id1: df_2')
plt.xlabel('cycle')
plt.ylabel('Salary')
plt.legend()

## id2:
plt.subplot(4,1,2)
plt.plot(df_1.groupby(by=&quot;id&quot;).get_group(2)['cycle'],
         df_1.groupby(by=&quot;id&quot;).get_group(2)['Salary'],
         'b', linewidth = '1', label ='id2: df_1')
plt.plot(df_2.groupby(by=&quot;id&quot;).get_group(2)['cycle'],
         df_2.groupby(by=&quot;id&quot;).get_group(2)['Salary'],
         'r', linewidth = '1', label ='id2: df_2')
plt.xlabel('cycle')
plt.ylabel('Salary')
plt.legend()

## id3:
plt.subplot(4,1,3)
plt.plot(df_1.groupby(by=&quot;id&quot;).get_group(3)['cycle'],
         df_1.groupby(by=&quot;id&quot;).get_group(3)['Salary'],
         'b', linewidth = '1', label ='id3: df_1')
plt.plot(df_2.groupby(by=&quot;id&quot;).get_group(3)['cycle'],
         df_2.groupby(by=&quot;id&quot;).get_group(3)['Salary'],
         'r', linewidth = '1', label ='id3: df_2')
plt.xlabel('cycle')
plt.ylabel('Salary')
plt.legend()

## id4:
plt.subplot(4,1,4)
plt.plot(df_1.groupby(by=&quot;id&quot;).get_group(4)['cycle'],
         df_1.groupby(by=&quot;id&quot;).get_group(4)['Salary'],
         'b', linewidth = '1', label ='id4: df_1')
plt.plot(df_2.groupby(by=&quot;id&quot;).get_group(4)['cycle'],
         df_2.groupby(by=&quot;id&quot;).get_group(4)['Salary'],
         'r', linewidth = '1', label ='id4: df_2')
plt.xlabel('cycle')
plt.ylabel('Salary')
plt.legend()

plt.show()
</code></pre> <p>The result looks as such:</p> <p><a href="https://i.sstatic.net/ySyzR.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/ySyzR.png" alt="enter image
description here" /></a></p> <p>However, I had to write the subplot code four times, once for each of the four ids, overlapping the two dataframes in each.</p> <p>Is there a way to write the subplot code only once, in some iterative form, and still get all four overlapped subplots?</p> <p>Can somebody please let me know how to achieve this task in <code>Python</code>?</p>
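The four near-identical blocks can be collapsed into one loop over the ids with `plt.subplots`. A sketch (`plot_overlap` is a hypothetical helper; it assumes both frames contain the same set of ids, as in the sample data):

```python
import matplotlib
matplotlib.use("Agg")  # non-interactive backend so the sketch also runs headless
import matplotlib.pyplot as plt
import pandas as pd

def plot_overlap(df_1: pd.DataFrame, df_2: pd.DataFrame):
    ids = sorted(df_1["id"].unique())
    fig, axes = plt.subplots(len(ids), 1, figsize=(10, 8))
    for ax, i in zip(axes, ids):
        # Draw the same id from both frames onto the same axes.
        for df, colour, src in ((df_1, "b", "df_1"), (df_2, "r", "df_2")):
            g = df[df["id"] == i]
            ax.plot(g["cycle"], g["Salary"], colour, linewidth=1,
                    label=f"id{i}: {src}")
        ax.set_xlabel("cycle")
        ax.set_ylabel("Salary")
        ax.legend()
    fig.tight_layout()
    return fig, axes

# Tiny stand-in data with the same shape as the question's frames:
df_1 = pd.DataFrame({"id": [1, 1, 2, 2, 3, 3, 4, 4],
                     "cycle": [0, 1] * 4,
                     "Salary": list(range(8))})
df_2 = df_1.assign(Salary=df_1["Salary"] + 1)
fig, axes = plot_overlap(df_1, df_2)
```

Because the number of subplots comes from the data, the same function works unchanged if a fifth id appears later.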
<python><pandas><dataframe><matplotlib><group-by>
2023-06-18 09:34:21
2
439
NN_Developer
76,499,872
2,426,635
Pyinstaller - SQLAlchemy with SQLServer
<p>I have a command line script I'm trying to build to an .exe file using pyinstaller. I'm using SQLAlchemy to connect to a database.</p> <pre><code>import sys
from urllib.parse import quote_plus
from sqlalchemy import create_engine

# Press the green button in the gutter to run the script.
username = sys.argv[1]
password = sys.argv[2]
hostname = sys.argv[3]
engine = 'mssql+pytds'
port = sys.argv[4]
technical_name = sys.argv[5]

uri = f&quot;{engine}://{username}:{quote_plus(password)}@{hostname}:{port}/{technical_name}&quot;
pool = create_engine(uri)
</code></pre> <p>I run</p> <pre><code>pyinstaller --onefile main.py
</code></pre> <p>But when I try to execute the resulting main.exe in the dist/ folder, I get the error:</p> <blockquote> <p>sqlalchemy.exc.NoSuchModuleError: Can't load plugin: sqlalchemy.dialects:mssql.pytds</p> </blockquote> <p>How can I include mssql.pytds for sqlalchemy? I have sqlalchemy-pytds and python-tds installed in my environment and in requirements.txt.</p> <p>Edit: The above code executes successfully when I simply run</p> <p><code>python main.py user pw 127.0.0.1 1433 dbname</code></p>
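PyInstaller's static analysis cannot see dialects that SQLAlchemy resolves through setuptools entry points at run time, which is consistent with the `NoSuchModuleError` appearing only in the frozen exe. A hedged sketch: register the dialect explicitly before `create_engine`, and additionally collect the module as a hidden import at build time. The module and class names below are taken from sqlalchemy-pytds and should be verified against the installed version.

```python
from sqlalchemy.dialects import registry

# Register the pytds dialect explicitly so the frozen exe does not rely on
# entry-point discovery. registry.register is lazy: it only records the
# mapping here and imports the module on first use.
registry.register("mssql.pytds", "sqlalchemy_pytds.dialect", "MSDialect_pytds")

# At build time, also make sure the module is bundled, e.g.:
#   pyinstaller --onefile --hidden-import sqlalchemy_pytds main.py
```

With both in place, `create_engine("mssql+pytds://...")` should resolve the dialect inside the bundle as well as in the plain interpreter.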
<python><sql-server><sqlalchemy>
2023-06-18 09:32:37
1
626
pwwolff
76,499,865
1,018,861
Splitting a lazyframe into two frames by fraction of rows to make a train-test split
<p>I have a <code>train_test_split</code> function in Polars that can handle an eager DataFrame. I wish to write an equivalent function that can take a LazyFrame as input and return two LazyFrames without evaluating them.</p> <p>My function is as follows. It shuffles all rows, and then splits them using row indexing based on the height of the full frame.</p> <pre class="lang-py prettyprint-override"><code>import polars as pl


def train_test_split(
    df: pl.DataFrame, train_fraction: float = 0.75
) -&gt; tuple[pl.DataFrame, pl.DataFrame]:
    &quot;&quot;&quot;Split polars dataframe into two sets.

    Args:
        df (pl.DataFrame): Dataframe to split
        train_fraction (float, optional): Fraction that goes to train. Defaults to 0.75.

    Returns:
        Tuple[pl.DataFrame, pl.DataFrame]: Tuple of train and test dataframes
    &quot;&quot;&quot;
    df = df.with_columns(pl.all().shuffle(seed=1))
    split_index = int(train_fraction * df.height)
    df_train = df[:split_index]
    df_test = df[split_index:]
    return df_train, df_test


df = pl.DataFrame({&quot;a&quot;: [1, 2, 3, 4], &quot;b&quot;: [4, 3, 2, 1]})
train, test = train_test_split(df)

# this is what the above looks like:
train = pl.DataFrame({'a': [2, 3, 4], 'b': [3, 2, 1]})
test = pl.DataFrame({'a': [1], 'b': [4]})
</code></pre> <p>LazyFrames, however, have unknown height, so we have to do this another way. I have a few ideas, but run into issues with each:</p> <ol> <li>Use <code>df.sample(frac=train_fraction, with_replacement=False, shuffle=False)</code>. This way I could get the train part, but wouldn't be able to get the test part.</li> <li>Add a &quot;random&quot; column, where each row gets assigned a random value between 0 and 1. Then I can filter on values below my train_fraction and above train_fraction, and assign these to my train and test datasets respectively. 
But since I don't know the length of the dataframe beforehand, and (afaik) Polars doesn't have a native way of creating such a column, I would need to <code>.map_elements</code> an equivalent of <code>np.random.uniform</code> on each row, which would be very time consuming.</li> <li>Add a <code>.with_row_index()</code> and filter on rows larger than some fraction of the total, but here I also need the height, and creating the row count might be expensive.</li> </ol> <p>Finally, I might be going about this the wrong way: I could count the total number of rows beforehand, but I don't know how expensive that is considered to be.</p> <p>Here's a big dataframe to test on (running my function eagerly on it takes ~1 sec):</p> <pre class="lang-py prettyprint-override"><code>N = 50_000_000
df_big = pl.DataFrame(
    [
        pl.int_range(N, eager=True),
        pl.int_range(N, eager=True),
        pl.int_range(N, eager=True),
        pl.int_range(N, eager=True),
        pl.int_range(N, eager=True),
    ],
    schema=[&quot;a&quot;, &quot;b&quot;, &quot;c&quot;, &quot;d&quot;, &quot;e&quot;],
)
</code></pre>
<python><dataframe><python-polars>
2023-06-18 09:30:34
1
3,252
TomNorway
76,499,857
343,159
Struggling to create a unique datetime index in Pandas dataset
<p>I have a dataframe that has a <code>time</code> property. This property is in seconds, but with nanosecond precision.</p> <p>I was struggling to make this unique, but with help from a robot, managed to come up with this:</p> <pre><code># Convert the time column to nanoseconds and add a sequence number for trades
df['time_ns'] = pd.to_datetime(df['time'], unit='s').values.astype(np.int64) + \
    np.arange(len(df)) % (10 ** 9)
df.set_index('time_ns', inplace=True)

# Convert the time_ns column to a DatetimeIndex with nanosecond precision
df.index = pd.to_datetime(df.index, unit='ns')

# Get a list of the non-unique timestamps
non_unique = df.index[df.index.duplicated(keep=False)].unique()

# Print the non-unique timestamps
print(&quot;Non-unique values:&quot;)
print(non_unique)

dataset = PandasDataset(df, target=&quot;price&quot;)
</code></pre> <p>Now, there are no non-unique values. However, the frequency calculation when creating the dataset is falling over, due to this in <code>/pandas/tseries/frequencies.py</code>:</p> <pre><code>if not self.is_unique_asi8:
    return None
</code></pre> <p>Digging into this with the penetrating insight into Python I have developed over the last two weeks 🤣, I have discovered that this property, too, is an indication of uniqueness.</p> <p>So how do I configure the dataset so that the index is considered unique, at nanosecond precision? The incoming dataframe's index, it seems, is now unique.</p>
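A side note on the de-duplication itself: the `np.arange(len(df)) % 10**9` trick shifts every row, including rows whose timestamps were already unique. A smaller, hedged alternative is to offset only the duplicates, by their occurrence number within each identical timestamp. (This does not address the frequency inference itself, which may separately require passing an explicit `freq` to `PandasDataset`, since irregular trade timestamps have no inferable frequency.)

```python
import pandas as pd

# Three trades, the first two sharing an identical second-resolution timestamp.
df = pd.DataFrame(
    {"price": [101.0, 101.5, 102.0]},
    index=pd.to_datetime([1_686_000_000, 1_686_000_000, 1_686_000_001], unit="s"),
)

# Rank each row within its group of identical timestamps (0, 1, 2, ...).
dup_rank = df.groupby(level=0).cumcount().to_numpy()

# Push duplicates forward by that many nanoseconds; rows that were already
# unique get offset 0 and keep their exact original timestamp.
df.index = df.index + pd.to_timedelta(dup_rank, unit="ns")
```

This leaves unique timestamps bit-for-bit unchanged, which keeps the data closer to the original trade times than offsetting every row by its position.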
<python><pandas><numpy><time-series><gluonts>
2023-06-18 09:28:54
0
12,750
serlingpa
76,499,775
3,416,774
`Exception has occurred: OSError` when the file is there
<p>When running <code>import ssdeep</code> (<a href="https://github.com/MacDue/ssdeep-windows-32_64" rel="nofollow noreferrer">source</a>), I get this error:</p> <pre class="lang-none prettyprint-override"><code>Exception has occurred: OSError
/usr/local/lib/python3.10/dist-packages/ssdeep_windows_32_64-0.0.1-py3.10.egg/ssdeep/bin\fuzzy_64.dll: cannot open shared object file: No such file or directory
  File &quot;/home/WE1S/preprocessing/libs/fuzzyhasher/fuzzyhasher.py&quot;, line 19, in &lt;module&gt;
    import ssdeep
  File &quot;/home/20230614_1549_test/modules/import/scripts/test.py&quot;, line 3, in &lt;module&gt;
    from libs.fuzzyhasher.fuzzyhasher import FuzzyHasher
OSError: /usr/local/lib/python3.10/dist-packages/ssdeep_windows_32_64-0.0.1-py3.10.egg/ssdeep/bin\fuzzy_64.dll: cannot open shared object file: No such file or directory
</code></pre> <p>However, the file is there:</p> <pre class="lang-none prettyprint-override"><code>ls '/usr/local/lib/python3.10/dist-packages/ssdeep_windows_32_64-0.0.1-py3.10.egg/ssdeep/bin'
fuzzy.dll  fuzzy_64.dll
</code></pre> <p>Why is that? I'm on <a href="https://en.wikipedia.org/wiki/Windows_Subsystem_for_Linux" rel="nofollow noreferrer">WSL 2</a>.</p>
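Note the mixed separators in the failing path: `.../ssdeep/bin\fuzzy_64.dll`. This Windows port apparently joins its DLL path with a hard-coded backslash, so on Linux/WSL the loader is asked for a file literally named `bin\fuzzy_64.dll` (and a Windows `.dll` could not be loaded by the Linux dynamic loader anyway). A small demonstration of the separator difference:

```python
from pathlib import PurePosixPath, PureWindowsPath

p = r"ssdeep/bin\fuzzy_64.dll"

# On Windows both separators split the path into components...
win_parts = PureWindowsPath(p).parts    # ('ssdeep', 'bin', 'fuzzy_64.dll')

# ...but on POSIX the backslash is an ordinary filename character, so the
# loader looks for a single file literally called 'bin\fuzzy_64.dll':
posix_parts = PurePosixPath(p).parts    # ('ssdeep', 'bin\\fuzzy_64.dll')
```

On WSL the usual fix is to install a Linux build instead of the Windows port, typically the `ssdeep` PyPI package on top of the system libfuzzy (package names vary by distribution).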
<python><file><oserror>
2023-06-18 09:05:27
1
3,394
Ooker
76,499,736
17,638,206
Extracting Arabic text from a text file
<p>I have a txt file that includes ['صفحه رقم ا من ٤'] which is the output of EasyOCR. I want to extract &quot;ا&quot;, which is after the substring &quot;صفحه رقم&quot;. I am using this code:</p> <pre><code># Open the text file for reading
with open(r'file.txt', 'r', encoding='utf-8') as f:
    text = f.read()

# Extract the digit after the string &quot;صفحه رقم&quot;
page_num_str = 'صفحه رقم'
start_idx = text.find(page_num_str)
if start_idx != -1:
    substr = text[start_idx+len(page_num_str):]
    page_num = ''.join(filter(str.isdigit, substr.split()[0]))
    print(&quot;Page number extracted: &quot;, page_num)
</code></pre> <p>Here is the output:</p> <pre><code>Page number extracted:
</code></pre> <p>As you can see, nothing is extracted! I don’t know why, but I have tried to output the value of</p> <pre><code>start_idx = text.find(page_num_str)
</code></pre> <p>The output was 2 instead of 0. What is the problem? Here is <a href="https://drive.google.com/file/d/1Yh_4AaxrslR2XZY4C3Eecb4ev2Q8If0A/view?usp=sharing" rel="nofollow noreferrer">the text file uploaded</a>.</p>
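Two things appear to be going on here: `find` returning 2 is expected, because the file starts with `['`; and the token after the marker is the letter alef ("ا"), not a digit, so `filter(str.isdigit, ...)` drops every character and leaves an empty string. (Arabic-Indic digits such as ٤, by contrast, do pass `str.isdigit`.) A minimal reproduction using an inline string of the same shape as the file:

```python
# Inline stand-in with the same shape as the EasyOCR output file.
text = "['صفحه رقم ا من ٤']"

marker = "صفحه رقم"
start = text.find(marker)   # 2, because the text begins with "[' " -- not an error

# Take the raw whitespace-separated token after the marker instead of
# digit-filtering it:
token = text[start + len(marker):].split()[0]

# The page "number" here is the letter alef, so str.isdigit is False for it;
# that is why ''.join(filter(str.isdigit, ...)) came back empty.
token_is_digit = token.isdigit()

# Arabic-Indic digits, however, do satisfy str.isdigit:
arabic_four = "٤"
```

So if OCR sometimes reads the page number as a letter, extracting the raw token (and validating it separately) is more robust than keeping only `isdigit` characters.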
<python><string><text>
2023-06-18 08:51:49
2
375
AAA
76,499,576
20,740,043
Create subplot, for every group/id of a dataframe
<p>I have the below dataframe:</p> <pre><code>#Load the required libraries import pandas as pd import matplotlib.pyplot as plt #Create dataset data = {'id': [1, 1, 1, 1, 1, 1,1, 1, 1, 1, 1, 1, 2, 2, 2, 2, 2, 2, 2, 3, 3, 3, 3, 3, 3, 3,3, 4, 4, 4, 4, 4,4,], 'cycle': [0.0, 0.2,0.4, 0.6, 0.8, 1,1.2,1.4,1.6,1.8,2.0,2.2, 0.0, 0.2,0.4, 0.6,0.8,1.0,1.2, 0.0, 0.2,0.4, 0.6, 0.8,1.0,1.2,1.4, 0.0, 0.2,0.4, 0.6, 0.8,1.0,], 'Salary': [6, 7, 7, 7,8,9,10,11,12,13,14,15, 3, 4, 4, 4,4,5,6, 2, 8,9,10,11,12,13,14, 1, 8,9,10,11,12,], 'Children': ['Yes', 'No', 'Yes', 'Yes', 'Yes', 'Yes', 'No','No', 'Yes', 'Yes', 'Yes', 'No', 'Yes', 'Yes', 'Yes', 'No', 'Yes', 'Yes', 'Yes', 'Yes', 'No','Yes', 'Yes', 'No','No', 'Yes','Yes', 'Yes', 'Yes', 'No','Yes', 'Yes','Yes',], 'Days': [141, 123, 128, 66, 66, 120, 141, 52,96, 120, 141, 52, 141, 96, 120,120, 141, 52,96, 141, 15,123, 128, 66, 120, 141, 141, 141, 141,123, 128, 66,67,], } #Convert to dataframe df = pd.DataFrame(data) print(&quot;\n df = \n&quot;,df) </code></pre> <p>Now, here I wish to plot the <code>cycle</code> vs <code>Salary</code>, for all <code>id</code>'s of the dataframe in one single plot. 
Thus I use the subplot function as follows:</p> <pre><code>## Plot for all id's
plt_fig_verify = plt.figure(figsize=(10,8))

## id1:
plt.subplot(4,1,1)
plt.plot(df.groupby(by=&quot;id&quot;).get_group(1)['cycle'],
         df.groupby(by=&quot;id&quot;).get_group(1)['Salary'],
         'b', linewidth = '1', label ='id1')
plt.xlabel('cycle')
plt.ylabel('Salary')
plt.legend()

## id2:
plt.subplot(4,1,2)
plt.plot(df.groupby(by=&quot;id&quot;).get_group(2)['cycle'],
         df.groupby(by=&quot;id&quot;).get_group(2)['Salary'],
         'b', linewidth = '1', label ='id2')
plt.xlabel('cycle')
plt.ylabel('Salary')
plt.legend()

## id3:
plt.subplot(4,1,3)
plt.plot(df.groupby(by=&quot;id&quot;).get_group(3)['cycle'],
         df.groupby(by=&quot;id&quot;).get_group(3)['Salary'],
         'b', linewidth = '1', label ='id3')
plt.xlabel('cycle')
plt.ylabel('Salary')
plt.legend()

## id4:
plt.subplot(4,1,4)
plt.plot(df.groupby(by=&quot;id&quot;).get_group(4)['cycle'],
         df.groupby(by=&quot;id&quot;).get_group(4)['Salary'],
         'b', linewidth = '1', label ='id4')
plt.xlabel('cycle')
plt.ylabel('Salary')
plt.legend()

plt.show()
</code></pre> <p>However, I had to write the subplot code four times, once for each of the four ids in the dataframe.</p> <p>Is there a way to write the subplot code only once, in some iterative form, and still get all four subplots?</p> <p>Can somebody please let me know how to achieve this task in <code>Python</code>?</p>
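The repetition can be removed by iterating over `df.groupby("id")` directly, since each iteration yields an id and its sub-frame. A sketch (`plot_per_id` is a hypothetical helper name):

```python
import matplotlib
matplotlib.use("Agg")  # non-interactive backend so the sketch also runs headless
import matplotlib.pyplot as plt
import pandas as pd

def plot_per_id(df: pd.DataFrame):
    groups = df.groupby("id")
    fig, axes = plt.subplots(groups.ngroups, 1, figsize=(10, 8))
    # Iterating a GroupBy yields (id_value, sub_frame) pairs in sorted id order.
    for ax, (id_value, g) in zip(axes, groups):
        ax.plot(g["cycle"], g["Salary"], "b", linewidth=1, label=f"id{id_value}")
        ax.set_xlabel("cycle")
        ax.set_ylabel("Salary")
        ax.legend()
    fig.tight_layout()
    return fig, axes

# Tiny stand-in data with the same shape as the question's frame:
df = pd.DataFrame({"id": [1, 1, 2, 2, 3, 3, 4, 4],
                   "cycle": [0, 1] * 4,
                   "Salary": list(range(8))})
fig, axes = plot_per_id(df)
```

Using `groups.ngroups` for the subplot count means the figure adapts automatically if the data gains or loses ids.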
<python><pandas><dataframe><matplotlib><group-by>
2023-06-18 08:06:38
2
439
NN_Developer
76,499,565
1,017,348
Python does not find module installed with pipx
<p><a href="https://en.wikipedia.org/wiki/Debian#Forks_and_derivatives" rel="nofollow noreferrer">Debian Stable</a> wants me to install Python modules using pipx. So I do</p> <pre class="lang-none prettyprint-override"><code>pipx install auditwheel pipx ensurepath python3 -m pipx ensurepath python3 </code></pre> <p>Output:</p> <pre class="lang-none prettyprint-override"><code>Python 3.11.2 (main, Mar 13 2023, 12:18:29) [GCC 12.2.0] on linux Type &quot;help&quot;, &quot;copyright&quot;, &quot;credits&quot; or &quot;license&quot; for more information. </code></pre> <p>And:</p> <pre class="lang-python prettyprint-override"><code>import auditwheel </code></pre> <p>Output:</p> <pre class="lang-none prettyprint-override"><code>Traceback (most recent call last): File &quot;&lt;stdin&gt;&quot;, line 1, in &lt;module&gt; ModuleNotFoundError: No module named 'auditwheel' </code></pre> <p>What am I doing wrong?</p>
<python><debian><pipx>
2023-06-18 08:02:30
4
8,427
Joachim W
76,499,333
14,343,465
brew install dbt-snowflake: /usr/local/Cellar/dbt-snowflake/1.5.1/libexec/bin/pip install -v --no-deps --no-binary :all: --use-feature=no-bina
<p>brew is hanging when installing the dbt-snowflake:</p> <p><code>brew install dbt-snowflake --overwrite -s</code></p> <pre class="lang-bash prettyprint-override"><code>==&gt; Installing dbt-snowflake from dbt-labs/dbt ==&gt; python3 -m venv /usr/local/Cellar/dbt-snowflake/1.5.1/libexec ==&gt; /usr/local/Cellar/dbt-snowflake/1.5.1/libexec/bin/python -m pip install pip==22.3.1 ==&gt; /usr/local/Cellar/dbt-snowflake/1.5.1/libexec/bin/pip install -v --no-deps --no-binary :all: --use-feature=no-bina ==&gt; /usr/local/Cellar/dbt-snowflake/1.5.1/libexec/bin/pip install -v --no-deps --no-binary :all: --use-feature=no-bina ==&gt; /usr/local/Cellar/dbt-snowflake/1.5.1/libexec/bin/pip install -v --no-deps --no-binary :all: --use-feature=no-bina ^C </code></pre> <p>system</p> <pre><code>$brew --version Homebrew 4.0.23-12-ge986264 Homebrew/homebrew-core (git revision c78e92f14ff; last commit 2023-02-03) Homebrew/homebrew-cask (git revision 8f932465f7; last commit 2023-02-03) $uname -a Darwin me-MacBook-Pro.local 22.4.0 Darwin Kernel Version 22.4.0: Mon Mar 6 21:00:17 PST 2023; root:xnu-8796.101.5~3/RELEASE_X86_64 x86_64 </code></pre>
<python><macos><homebrew><dbt>
2023-06-18 06:40:59
1
3,191
willwrighteng
76,499,319
16,383,578
What is the fastest way to find intersection of two NumPy arrays while preserving order?
<p>I have two one-dimensional NumPy arrays, A and B, of the same length. I want to find the intersection of the two arrays, meaning I want to find all the elements of A that are also present in B.</p> <p>The result should be a boolean array that is <code>True</code> when an element in array A at the index is also a member of array B, preserving the order so that I can use the result to index another array.</p> <p>If not for the boolean mask constraint, I would convert both arrays to sets and use the set intersection operator (<code>&amp;</code>). However, I have tried using <code>np.isin</code> and <code>np.in1d</code>, and found that using plain Python list comprehension is much faster.</p> <p>Given the setup:</p> <pre class="lang-py prettyprint-override"><code>import numba import numpy as np primes = np.array([ 2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37, 41, 43, 47, 53, 59, 61, 67, 71, 73, 79, 83, 89, 97, 101, 103, 107, 109, 113, 127, 131, 137, 139, 149, 151, 157, 163, 167, 173, 179, 181, 191, 193, 197, 199, 211, 223, 227, 229, 233, 239, 241, 251, 257, 263, 269, 271, 277, 281, 283, 293, 307, 311, 313, 317, 331, 337, 347, 349, 353, 359, 367, 373, 379, 383, 389, 397, 401, 409, 419, 421, 431, 433, 439, 443, 449, 457, 461, 463, 467, 479, 487, 491, 499, 503, 509, 521, 523, 541, 547, 557, 563, 569, 571, 577, 587, 593, 599, 601, 607, 613, 617, 619, 631, 641, 643, 647, 653, 659, 661, 673, 677, 683, 691, 701, 709, 719, 727, 733, 739, 743, 751, 757, 761, 769, 773, 787, 797, 809, 811, 821, 823, 827, 829, 839, 853, 857, 859, 863, 877, 881, 883, 887, 907, 911, 919, 929, 937, 941, 947, 953, 967, 971, 977, 983, 991, 997], dtype=np.int64) @numba.vectorize(nopython=True, cache=True, fastmath=True, forceobj=False) def reverse_digits(n, base): out = 0 while n: n, rem = divmod(n, base) out = out * base + rem return out flipped = reverse_digits(primes, 10) def set_isin(a, b): return a in b vec_isin = np.vectorize(set_isin) </code></pre> <p><code>primes</code> contains all prime 
numbers under 1000 with a total of 168. I chose it because it is of decent size and predetermined. I have performed various tests:</p> <pre><code>In [2]: %timeit np.isin(flipped, primes) 51.3 µs ± 1.55 µs per loop (mean ± std. dev. of 7 runs, 10,000 loops each) In [3]: %timeit np.in1d(flipped, primes) 46.2 µs ± 386 ns per loop (mean ± std. dev. of 7 runs, 10,000 loops each) In [4]: %timeit setp = set(primes) 12.9 µs ± 133 ns per loop (mean ± std. dev. of 7 runs, 100,000 loops each) In [5]: %timeit setp = set(primes.tolist()) 6.84 µs ± 175 ns per loop (mean ± std. dev. of 7 runs, 100,000 loops each) In [6]: %timeit setp = set(primes.flat) 11.5 µs ± 54.6 ns per loop (mean ± std. dev. of 7 runs, 100,000 loops each) In [7]: setp = set(primes.tolist()) In [8]: %timeit [x in setp for x in flipped] 23.3 µs ± 739 ns per loop (mean ± std. dev. of 7 runs, 10,000 loops each) In [9]: %timeit [x in setp for x in flipped.tolist()] 12.1 µs ± 76.6 ns per loop (mean ± std. dev. of 7 runs, 100,000 loops each) In [10]: %timeit [x in setp for x in flipped.flat] 19.7 µs ± 249 ns per loop (mean ± std. dev. of 7 runs, 10,000 loops each) In [11]: %timeit vec_isin(flipped, setp) 40 µs ± 317 ns per loop (mean ± std. dev. of 7 runs, 10,000 loops each) In [12]: %timeit np.frompyfunc(lambda x: x in setp, 1, 1)(flipped) 25.7 µs ± 418 ns per loop (mean ± std. dev. of 7 runs, 10,000 loops each) In [13]: %timeit setf = set(flipped.tolist()) 6.51 µs ± 44 ns per loop (mean ± std. dev. of 7 runs, 100,000 loops each) In [14]: setf = set(flipped.tolist()) In [15]: %timeit np.array(sorted(setf &amp; setp)) 9.42 µs ± 78.9 ns per loop (mean ± std. dev. of 7 runs, 100,000 loops each) </code></pre> <p><code>setp = set(primes.tolist()); [x in setp for x in flipped.tolist()]</code> takes about 19 microseconds, which is faster than NumPy methods. 
I am wondering why this is the case, and if there is a way to make it even faster.</p> <p>(I wrote all the code, and I used AI suggested edit feature to edit the question)</p>
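For reference, the winning list-comprehension approach above can be wrapped as a reusable function that builds the hash set once and writes straight into a boolean buffer; a sketch with small stand-in arrays (timings on your data may differ, so profile it):

```python
import numpy as np

def isin_via_set(a: np.ndarray, b: np.ndarray) -> np.ndarray:
    """True where the element of `a` is also in `b`, order preserved."""
    bset = set(b.tolist())  # hash the lookup side once, as plain Python ints
    return np.fromiter((x in bset for x in a.tolist()), dtype=bool, count=len(a))

a = np.array([13, 4, 31, 8, 71])
b = np.array([2, 3, 5, 7, 11, 13, 31, 71])
mask = isin_via_set(a, b)
```

`np.fromiter` fills a preallocated bool array, so the intermediate Python list of the comprehension never materializes, and the result can index another array directly.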
<python><arrays><numpy>
2023-06-18 06:36:38
1
3,930
Ξένη Γήινος
76,499,172
9,261,745
Docker fails to make a request to an official Python Docker image
<p>I use a Dockerfile to build an image and container. I haven't changed anything, but since yesterday I cannot build it. My Dockerfile is:</p> <pre><code>FROM python:3.8-slim-buster WORKDIR /src COPY requirements.txt /src RUN pip install -r requirements.txt COPY app/ /src/app ENV PYTHONPATH=/src EXPOSE 8000 WORKDIR /src/app </code></pre> <p>I got the below error:</p> <pre><code>ERROR: failed to solve: DeadlineExceeded: DeadlineExceeded: python:3.8-slim-buster: failed to copy: httpReadSeeker: failed open: failed to do request: Get &quot;https://production.cloudflare.docker.com/registry-v2/docker/registry/v2/blobs/sha256/cf/cf7d11e02afd21125c5378d971bc42cfed2b697d7c116b14eed0baad617839da/data?verify=1687069048-Wqw%2FLtaBudeh5Szh3s0QeRVV6q0%3D&quot;: dial tcp 104.18.123.25:443: i/o timeout </code></pre> <p>I don't know why this happens, and I cannot find anything about this kind of error. Would you help me out here?</p> <p>Thanks</p>
<python><docker><dockerfile>
2023-06-18 05:36:26
2
457
Youshikyou
76,499,079
6,496,679
How can I add the path to torch to a Pybind11 extension?
<p>I'm writing a python extension in C++ using pybind11 and I'm trying to make the <code>setup.py</code> file and this is what I have so far:</p> <pre class="lang-py prettyprint-override"><code>from glob import glob import setuptools from pybind11.setup_helpers import Pybind11Extension, build_ext ext_modules = [ Pybind11Extension( &quot;my_ext&quot;, sorted(glob(&quot;src/my_ext/**/*.cc&quot;)), ) ] setuptools.setup( name=&quot;my_ext&quot;, version=&quot;0.1&quot;, package_dir={&quot;&quot;: &quot;src/my_ext/python/&quot;}, cmdclass={&quot;build_ext&quot;: build_ext}, ext_modules=ext_modules, ) </code></pre> <p>However, when I run <code>pip install .</code>, I get this error:</p> <pre><code>In file included from src/my_ext/cc/thing.cc:7: src/my_ext/cc/thing.h:9:10: fatal error: 'torch/torch.h' file not found #include &lt;torch/torch.h&gt; ^~~~~~~~~~~~~~~ 1 error generated. error: command '/usr/bin/clang' failed with exit code 1 </code></pre> <p>Is there any argument I can pass to Pybind11Extension to allow it to find torch and build successfully?</p>
<python><c++><python-packaging><pybind11>
2023-06-18 04:54:15
1
849
DataOrc
76,498,857
16,043,284
What is the difference between mapped_column and Column in SQLAlchemy?
<p>I am new to SQLAlchemy and I see that in the documentation the older version (Column) can be swapped directly with the newer &quot;mapped_column&quot;.</p> <p>Is there any advantage to using mapped_column over Column? Could you stick to the older 'Column'?</p>
<python><sqlalchemy>
2023-06-18 02:43:40
1
683
user16043284
76,498,736
7,091,646
Efficient way to collect all unique substrings of a string
<p>I need to identify all substrings in a string with a minimum size and repeats. The caveat is that I don't want substrings returned that are themselves substrings of other returned substrings. In other words the set of substrings needs to be a disjoint set. The function below works but is very inefficient. 97.9% of the time is spent re-calculating the suffix array and LCP. I settled on this because after I remove the last substring from the string and re-calculate the SA and LCP, I can guarantee that no substrings of the last substring would be added. Is there a more efficient way to do this that would require calculating the SA and LCP once?</p> <pre><code>from typing import Dict, List, NamedTuple import numpy as np import pandas as pd from pydivsufsort import divsufsort, kasai class Substring(NamedTuple): length: int count: int def find_unique_repeat_substrings(s: bytes, min_length: int = 20, min_repeats: int = 10) -&gt; Dict[str, Substring]: string_dict = dict() K = len(s) while K&gt;=min_length: sa = divsufsort(s) lcp = kasai(s, sa) K_loc = np.argmax(lcp) K=np.max(lcp) #calculate number of repeats loc = K_loc+1 while lcp[loc]==K: loc += 1 cnt = loc-K_loc+1 longest_string = s[sa[K_loc]:sa[K_loc]+K] #add substring to dict if cnt &gt;= min_repeats and K&gt;=min_length: string_dict[longest_string.decode()] = Substring(length=K, count=cnt) #remove substring s = s.replace(longest_string, b&quot;&quot;) # Replacing with bytes return(string_dict) s = &quot;this string is repeated three times in this sentence. string string.&quot;.encode() string_dict = find_unique_repeat_substrings(s,min_length = 4, min_repeats=2) string_dict </code></pre> <p>{' string ': Substring(length=8, count=2), 'this': Substring(length=4, count=2)}</p>
<python><string><algorithm><suffix-array>
2023-06-18 01:36:35
1
1,399
Eric
76,498,715
6,458,245
How do I exclude a tensor from torch.no_grad?
<p>If I am using the torch.no_grad decorator like this:</p> <pre><code>import torch x = torch.tensor([1.], requires_grad = True) @torch.no_grad() # Or like this: with torch.no_grad() def func(x): a = torch.tensor([1.]) b = a * 2 b.backward() print(a.grad) ........ return x ** 2 z = func(x) print(z.requires_grad) </code></pre> <p>What if I want to have no_grad (to save time/memory on computation) for everything except for a select few tensors? How do I do what I did above?</p>
<python><pytorch>
2023-06-18 01:20:57
0
2,356
JobHunter69
76,498,660
15,542,245
Using list of indexes to append underscores to string tokens
<p>The design of this is not meeting expectations:</p> <pre><code># Explanation: # Read through splits until an index from indexes is reached. Apply an underscore to the split token, with no space, if the split is followed by another index # Therefore the line output should be: '7 Waitohu Road _York_Bay Co Manager _York_Bay Asst Co Dir _Central_Lower_Hutt General Hand _Wainuiomata School Caretaker' # A list of suburb words and their index positions in the line uniqueList = ['York', 3, 'Bay', 4, 'York', 7, 'Bay', 8, 'Central', 12, 'Lower', 13, 'Hutt', 14, 'Wainuiomata', 17] # Using indexes = uniqueList[1::2] to reduce uniqueList down to just indexes indexes = [3, 4, 7, 8, 12, 13, 14, 17] # The line example line = '7 Waitohu Road York Bay Co Manager York Bay Asst Co Dir Central Lower Hutt General Hand Wainuiomata School Caretaker' # Split the line into tokens for counting indexes splits = line.split(' ') # Read index for i in range(len(indexes)): check = indexes[i] for j in range(len(splits)): if j == check and (i + 1 &lt; len(indexes)): # Determine if next index is incremental next = indexes[i + 1] if 1 == next - check: splits[j] = '_' + splits[j] + '_' + splits[j + 1] else: if j == check: splits[j] = '_' + splits[j] # Results here newLine = ' '.join(splits) print(newLine) </code></pre> <p>Output:</p> <pre><code>7 Waitohu Road _York_Bay Bay Co Manager _York_Bay Bay Asst Co Dir _Central_Lower _Lower_Hutt Hutt General Hand _Wainuiomata School Caretaker </code></pre> <p>How to:</p> <ul> <li>Not output/remove the doubled-up words <code>Bay</code> and <code>Hutt</code></li> <li>Deal with an additional underscored word to get <code>_Central_Lower_Hutt</code></li> </ul>
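One way to get the expected line is to walk the token list once and collapse each run of consecutive indexes into a single underscored token, instead of patching tokens in place; a sketch using the data above:

```python
def mark_runs(line: str, indexes: list) -> str:
    """Join each run of consecutively-indexed tokens into one '_a_b_c' token."""
    splits = line.split(' ')
    idx_set = set(indexes)
    out = []
    i = 0
    while i < len(splits):
        if i in idx_set:
            run = [splits[i]]
            while i + 1 in idx_set:   # extend over consecutive indexes
                i += 1
                run.append(splits[i])
            out.append('_' + '_'.join(run))
        else:
            out.append(splits[i])
        i += 1
    return ' '.join(out)

indexes = [3, 4, 7, 8, 12, 13, 14, 17]
line = '7 Waitohu Road York Bay Co Manager York Bay Asst Co Dir Central Lower Hutt General Hand Wainuiomata School Caretaker'
newLine = mark_runs(line, indexes)
```

Because each token in a run is consumed as part of the run, the duplicated `Bay`/`Hutt` tokens never appear, and a three-word run comes out as `_Central_Lower_Hutt` automatically.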
<python><list><branching-strategy>
2023-06-18 00:47:48
1
903
Dave
76,498,347
907,967
PyYAML dumps Python dictionary with tags
<p>I don't understand why yaml.dump adds tags and weird information into the result:</p> <p>Source file.yml content is:</p> <pre><code>ssh_keys: user1: - key: key1 state: state1 </code></pre> <p>I want to add the following key to the file:</p> <pre><code>key_to_add = { 'key': 'key2', 'state': 'state2' } </code></pre> <p>Here is my code:</p> <pre><code>import yaml with open('file.yml', 'r') as file: data = yaml.load(file) print(key_to_add) data['ssh_keys']['user1'].append(key_to_add) print(data) print(type(data)) print(yaml.dump(data)) </code></pre> <p>The unexpected result:</p> <pre><code>{'key': 'key2', 'state': 'state2'} {'ssh_keys': {'user1': [{'key': 'key1', 'state': 'state1'}, {'key': 'key2', 'state': 'state2'}]}} &lt;class 'dict'&gt; ssh_keys: user1: - key: 'key1' state: 'state1' - key: !!python/object/new:ansible.parsing.yaml.objects.AnsibleUnicode args: - key2 state: _column_number: 26 _data_source: /home/........... _line_number: 34 state: !!python/object/new:ansible.parsing.yaml.objects.AnsibleUnicode args: - state2 state: _column_number: 24 _data_source: /home/........... _line_number: 35 </code></pre> <p>I was expecting the following result:</p> <pre><code>ssh_keys: user1: - key: 'key1' state: 'state1' - key: 'key2' state: 'state2' </code></pre>
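The `!!python/object/new` tags indicate the appended values are not plain `str` objects but Ansible's `AnsibleUnicode` subclass (presumably the real `key2`/`state2` values came from data loaded through Ansible), and `yaml.dump` serializes the subclass faithfully. Converting everything to plain builtins before dumping avoids this; a stdlib-only sketch, using a hypothetical `str` subclass to stand in for `AnsibleUnicode`:

```python
def to_plain(obj):
    """Recursively convert str/dict/list subclasses to plain builtins."""
    if isinstance(obj, dict):
        return {to_plain(k): to_plain(v) for k, v in obj.items()}
    if isinstance(obj, list):
        return [to_plain(v) for v in obj]
    if isinstance(obj, str):
        return str(obj)  # drops AnsibleUnicode (or any other str subclass)
    return obj

class TaggedStr(str):  # stand-in for ansible.parsing.yaml.objects.AnsibleUnicode
    pass

data = {"ssh_keys": {"user1": [{"key": TaggedStr("key2"), "state": TaggedStr("state2")}]}}
plain = to_plain(data)
```

`yaml.safe_dump(plain, default_flow_style=False)` should then emit the clean block style you expected; on the input side, `yaml.safe_load` avoids pulling tagged classes in at all.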
<python><yaml><pyyaml>
2023-06-17 22:12:05
2
1,130
timothepoznanski
76,498,203
8,453,556
De-duplication of pandas dataframe based on key and value columns
<p>Let's say I have a dataframe with 3 column types:</p> <ol> <li>Key Columns</li> <li>Value Columns</li> <li>Other Columns</li> </ol> <p>Objective: only keep a row if it has a new value for a given key.</p> <p>Say I generate some random data:</p> <pre><code>import pandas as pd import numpy as np import string import random N_keys = 3 N_rows = 20 key_columns = ['key1', 'key2' ] val_columns = ['val1', 'val2',] oth_columns = ['oth1', 'oth2'] random.seed(42) np.random.seed(42) keys = [ tuple([random.choice(string.ascii_letters) for _ in range(len(key_columns))]) for _ in range(N_keys)] last_val_dict = {} data = [] for _ in range(N_rows): # Pick a key k = random.choice(keys) if k not in last_val_dict: v = [random.choice(string.ascii_letters) for _ in range(len(val_columns))] else: if random.random() &gt; 0.5: v = [random.choice(string.ascii_letters) for _ in range(len(val_columns))] else: v = last_val_dict[k] last_val_dict[k] = v o = [np.random.randn() for _ in range(len(oth_columns))] data.append({col:val for col, val in zip( key_columns+val_columns+oth_columns, list(k)+v+o )}) df = pd.DataFrame(data) # S,I --&gt; I f df.loc[5, 'val1'] = 'I' df.loc[5, 'val2'] = 'f' </code></pre> <p>This dataframe has 3 keys</p> <pre><code>&gt;&gt;&gt;&gt; keys &gt;&gt;&gt;&gt; ('O', 'h'), ('b', 'V'), ('r', 'p') </code></pre> <p>and the dataframe looks like this</p> <pre><code>&gt;&gt;&gt;&gt; df &gt;&gt;&gt;&gt; key1 key2 val1 val2 oth1 oth2 0 O h i V 0.496714 -0.138264 1 O h I f 0.647689 1.523030 (*) This should stay since for (O, h) we went from (i, V) -&gt; (I, f) 2 r p B c -0.234153 -0.234137 3 O h I f 1.579213 0.767435 (*) This should NOT stay since for (O, h) we remained at (I, f) 4 O h b J -0.469474 0.542560 5 O h I f -0.463418 -0.465730 (*) This should stay since we went from (b, J) -&gt; (I, f) even though (I, f) is seen before 6 b V o C 0.241962 -1.913280 7 r p B c -1.724918 -0.562288 8 O h k S -1.012831 0.314247 9 b V o C -0.908024 -1.412304 10 O h k S 1.465649
-0.225776 11 b V o C 0.067528 -1.424748 12 b V o C -0.544383 0.110923 13 b V Z c -1.150994 0.375698 14 r p B c -0.600639 -0.291694 15 O h y f -0.601707 1.852278 16 r p B c -0.013497 -1.057711 17 r p x K 0.822545 -1.220844 18 O h c Q 0.208864 -1.959670 19 O h f o -1.328186 0.196861 </code></pre> <p>The best logic I could come up with is the following</p> <pre><code>df_new = pd.concat( [key_df[ (key_df[val_columns] != key_df[val_columns].shift(1)).all(axis=1) ] \ for keys, key_df in df.groupby(key_columns) ] ,axis=0) </code></pre> <p>and it does seem to work. <strong>But I'm not sure if this could be improved upon or if I'm missing something.</strong></p> <pre><code>&gt;&gt;&gt;&gt; df_new &gt;&gt;&gt;&gt; key1 key2 val1 val2 oth1 oth2 0 O h i V 0.496714 -0.138264 1 O h I f 0.647689 1.523030 4 O h b J -0.469474 0.542560 5 O h I f -0.463418 -0.465730 8 O h k S -1.012831 0.314247 15 O h y f -0.601707 1.852278 18 O h c Q 0.208864 -1.959670 19 O h f o -1.328186 0.196861 6 b V o C 0.241962 -1.913280 13 b V Z c -1.150994 0.375698 2 r p B c -0.234153 -0.234137 17 r p x K 0.822545 -1.220844 </code></pre>
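The per-group concat can be avoided: a grouped shift gives every row the previous values within its key group in one pass, so the same keep-rule becomes a single mask. A sketch on a tiny stand-in frame, using the same `.all(axis=1)` rule as the loop above (note this keeps the original row order instead of grouping the output by key):

```python
import pandas as pd

def dedup(df, key_columns, val_columns):
    # Previous row's values within each key group (NaN on a group's first row).
    prev = df.groupby(key_columns)[val_columns].shift()
    # NaN != x is True, so each group's first row is always kept.
    changed = (df[val_columns] != prev).all(axis=1)
    return df[changed]

df = pd.DataFrame({
    "key":  ["a", "a", "a", "b", "b"],
    "val1": [1, 1, 2, 5, 5],
    "val2": [9, 9, 8, 7, 7],
})
out = dedup(df, ["key"], ["val1", "val2"])
```

This avoids Python-level iteration over groups entirely, which matters once the number of keys grows.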
<python><pandas><duplicates>
2023-06-17 21:20:20
1
491
Sahil Puri
76,498,156
7,579,270
Python to PHP byte string conversion
<p>I am trying to convert this Python code into PHP but I got stuck on handling bytes data. Can someone help me fix this PHP code so I can get the same result as Python which is <strong>2734</strong>?</p> <p>PYTHON VERSION</p> <pre><code>indexes = [8, 13, 23, 13, 37, 23, 31, 11, 8, 37, 23, 27, 12, 9, 21, 20, 6, 8, 12, 16, 12, 17, 35, 37, 35, 28, 13, 35, 9, 31, 2, 25] sha_1_sign = 'cf493b66e356e588c545df7dda24ed4d404f1c90' sha_1_b = sha_1_sign.encode(&quot;ascii&quot;) checksum = sum([sha_1_b[number] for number in indexes]) print(checksum) #result 2734 </code></pre> <p>PHP VERSION</p> <pre><code>$indexes = [8, 13, 23, 13, 37, 23, 31, 11, 8, 37, 23, 27, 12, 9, 21, 20, 6, 8, 12, 16, 12, 17, 35, 37, 35, 28, 13, 35, 9, 31, 2, 25]; $sha_1_sign = 'cf493b66e356e588c545df7dda24ed4d404f1c90'; $sha_1_b = hex2bin($sha_1_sign); $checksum = 0; foreach ($indexes as $number) { if (isset($sha_1_b[$number])) { $checksum += ord($sha_1_b[$number]); } } echo $checksum; #result 2047 </code></pre>
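The difference is in what gets indexed: the Python version indexes the 40 ASCII characters of the hex string, while `hex2bin()` first decodes it to 20 raw bytes, so every index of 20 or more falls off the end and is silently skipped by the `isset` check (hence the smaller 2047). The PHP fix is to drop `hex2bin` and use `ord($sha_1_sign[$number])` on the hex string directly. A quick check of the two interpretations in Python:

```python
indexes = [8, 13, 23, 13, 37, 23, 31, 11, 8, 37, 23, 27, 12, 9, 21, 20,
           6, 8, 12, 16, 12, 17, 35, 37, 35, 28, 13, 35, 9, 31, 2, 25]
sha_1_sign = 'cf493b66e356e588c545df7dda24ed4d404f1c90'

# What the Python original does: index the 40 ASCII hex characters.
checksum = sum(ord(sha_1_sign[i]) for i in indexes)

# What the PHP port does: decode to raw bytes first - only 20 of them,
# so indexes >= 20 no longer exist.
raw = bytes.fromhex(sha_1_sign)
```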
<python><php>
2023-06-17 21:06:55
1
476
Mihai Galan
76,498,061
1,188,943
Dot character in FFMPEG in Python
<p>I'm using ffmpeg with python3 with filenames like the below:</p> <pre><code>27. Air Space </code></pre> <p>My code is something like this:</p> <pre><code>for folder, dirs, files in os.walk(rootdir): for file in files: fullpath = os.path.join(folder, file) filename = '&quot;&quot;' + fullpath + '&quot;&quot;' actual_filename = fullpath.replace(&quot;.mp4&quot;,&quot;&quot;)+&quot;.wav&quot; os.system('ffmpeg -i {} -acodec pcm_s16le -ar 16000 {}'.format(filename, actual_filename)) </code></pre> <p>The problem is FFMPEG cannot find the file and it gives me an error:</p> <pre><code>No such file or directory </code></pre>
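The usual culprit here is shell quoting: `'""' + fullpath + '""'` wraps the name in two double quotes on each side, which the shell collapses into an empty string plus the unquoted path, so names with dots and spaces break apart. Passing an argument list to `subprocess.run` bypasses the shell, so no quoting is needed at all; a sketch of the command construction only, since actually running it requires ffmpeg to be installed:

```python
import subprocess

def build_cmd(src: str, dst: str) -> list:
    # Each list element reaches ffmpeg as-is: no shell, no quoting rules.
    return ["ffmpeg", "-i", src, "-acodec", "pcm_s16le", "-ar", "16000", dst]

fullpath = "27. Air Space.mp4"
cmd = build_cmd(fullpath, fullpath.replace(".mp4", "") + ".wav")
# subprocess.run(cmd, check=True)  # uncomment where ffmpeg is available
```

`check=True` also raises on a non-zero ffmpeg exit code instead of failing silently like `os.system`.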
<python><ffmpeg>
2023-06-17 20:35:13
2
1,035
Mahdi
76,497,985
7,267,480
Fitting a chi2 distribution, estimation of dof, difference with the mean value of the data
<p>I have data to process and, in theory, I know the distribution of that data - it's chi2.</p> <p>I know this data was produced by sampling from a chi2 distribution, and I know the parameters used for the sampling - e.g. the mean value and the number of degrees of freedom of the chi2.</p> <p>But in my case I need to recover those parameters from the &quot;measured data&quot;, assuming a chi2 distribution: estimate the mean parameter and the degrees of freedom. <a href="https://i.sstatic.net/tyPGf.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/tyPGf.png" alt="enter image description here" /></a></p> <p>When I try to fit this function for the pdf:</p> <pre><code>def chi2_pdf(x, dof, loc = 0, scale=1): return chi2.pdf(x, df=dof, loc = loc, scale=scale) </code></pre> <p>I get parameters that are really hard to interpret (loc, scale - what is the meaning of those parameters?). I assumed that loc stands for the mean value and scale for some rescaling factor, but in my case I got implausible numbers that confused me a lot, e.g. a negative loc value and a dof much larger than in the example data.</p> <p><strong>So the main question is: how do I fit the distribution correctly and get reasonable estimates of the parameters?</strong> Maybe I need to rescale the data before fitting and passing it to the function calculating the pdf?</p> <p>I have tried to use curve_fit to estimate the parameters of scipy.stats.chi2.pdf (see the code below), but I don't know how to interpret the parameters I get, e.g. negative loc or scale values - what do they stand for?</p> <p>E.g. for the provided case:</p> <blockquote> <p>Estimated dof, LS: 7793.619571961899 Estimated scale: 0.1006724349475489 Estimated loc:-720.5363121803032</p> </blockquote> <p>Assuming that for the chi2 distribution the mean value is the dof (?)
- I have completely wrong answer..</p> <p>How to make sure that the estimated parameters are correct?</p> <p>Here is a code with the example data and a plot of that data to reproduce:</p> <pre><code>from distfit import distfit from scipy.stats import chi2 from scipy.optimize import curve_fit # getting the data data_csv_link = 'https://drive.google.com/uc?export=download&amp;id=1Cs7-dTs6TesVvSNd7g0aJOj4IQqsOuCb' data_df: pd.DataFrame = pd.read_csv(data_csv_link, ' ', header=None) data = data_df[0].to_list() print(data) # plotting the distribution of gg fig, ax = plt.subplots() ax.hist(data , density=True, bins=50, alpha=0.5, cumulative=False, label = 'Data') #kernel density estimation of the curve (just for visualization purposes) #kde = gaussian_kde(data) x = np.linspace(min(data), max(data), 100) #ax.plot(x, kde(x), 'b-', label='KDE of data') # how to find optimal values of average value of parameter I am measuring? # &amp; dof for chi-2 assuming that the data is distributed like chi-2? # if we use scipy.stats - what are the best parameters to fit given data with scipy? # chi2.pdf(x, dof, scale) df = 64 # real mean parameter used for generation.. loc = 0 scale = 1 ax.plot(x, chi2.pdf(x, df=df, loc = loc, scale=scale), 'r-', label = 'Estimate curve') ax.set_title('Distr. of data') ax.legend() # try to find using distfit dist = distfit(distr=['chi2', 'norm']) results = dist.fit_transform(data_df[0]) dist.plot() dist.plot_summary() print(dist.summary) </code></pre>
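On the meaning of the parameters: every scipy.stats distribution gets a generic location-scale transform, so `chi2.pdf(x, df, loc, scale)` is the density of `loc + scale*Y` with `Y ~ chi2(df)`; the mean is `loc + scale*df`, not `loc`. With all three parameters free, the optimizer can trade them off (a huge dof against a small scale and a negative loc, exactly the numbers quoted), so it helps to pin loc to zero (e.g. `floc=0` in `chi2.fit`). With loc fixed, a simple method-of-moments estimate already recovers dof and scale; a sketch on synthetic data (the true dof of 64 here is only a stand-in):

```python
import numpy as np

rng = np.random.default_rng(0)
data = rng.chisquare(df=64, size=100_000)  # stand-in "measured" data, true dof = 64

# For loc = 0: mean = scale*dof and var = 2*scale^2*dof, so
#   scale = var / (2*mean)   and   dof = mean / scale
m, v = data.mean(), data.var()
scale = v / (2 * m)
dof = m / scale
```

These make good starting values (or a sanity check) for a maximum-likelihood fit such as `scipy.stats.chi2.fit(data, floc=0)`.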
<python><curve-fitting><distribution>
2023-06-17 20:15:16
1
496
twistfire
76,497,937
6,077,239
DuckDB - Rank correlation is much slower than regular correlation
<p>Comparing the following two code sections with the only difference as the second one first computes <code>rank</code>, the second section results in much slower performance than the first one (~5x).</p> <p>Although the second section involves a few more extra computations (<code>rank</code>), I don't expect and understand such a big difference. Am I implementing the query very inefficiently (I don't so as well because I think DuckDB should have query optimizations in place)?</p> <p>Note: Similar computations using Polars will not result in such a big slow down (code not shown here, otherwise people will be overwhelmed with too many codes).</p> <pre><code>import time import duckdb import numpy as np import polars as pl ## example dataframe rng = np.random.default_rng(1) nrows = 5_000_000 df = pl.DataFrame( dict( id=rng.integers(1, 1_000, nrows), id2=rng.integers(1, 10, nrows), id3=rng.integers(1, 500, nrows), v1=rng.normal(0, 1, nrows), v2=rng.normal(0, 1, nrows), ) ) ## pearson correlation start = time.perf_counter() res = duckdb.sql( &quot;&quot;&quot; WITH cte AS ( SELECT df.id2, df.id3, df2.id3 AS id3_right, df.v1, df2.v1 AS v1_right, df.v2, df2.v2 AS v2_right FROM df JOIN df AS df2 ON ( df.id = df2.id AND df.id2 = df2.id2 AND df.id3 &gt; df2.id3 AND df.id3 &lt; df2.id3 + 30 ) ) SELECT id2, id3, id3_right, corr(v1, v1_right) AS v1, corr(v2, v2_right) AS v2 FROM cte GROUP BY id2, id3, id3_right &quot;&quot;&quot; ).pl() time.perf_counter() - start # 19.462523670867085 ## rank correlation start = time.perf_counter() res2 = duckdb.sql( &quot;&quot;&quot; WITH cte AS ( SELECT df.id2, df.id3, df2.id3 AS id3_right, RANK() OVER (g ORDER BY df.v1) AS v1, RANK() OVER (g ORDER BY df2.v1) AS v1_right, RANK() OVER (g ORDER BY df.v2) AS v2, RANK() OVER (g ORDER BY df2.v2) AS v2_right FROM df JOIN df AS df2 ON ( df.id = df2.id AND df.id2 = df2.id2 AND df.id3 &gt; df2.id3 AND df.id3 &lt; df2.id3 + 30 ) WINDOW g AS (PARTITION BY df.id2, df.id3, df2.id3) ) SELECT id2, id3, 
id3_right, corr(v1, v1_right) AS v1, corr(v2, v2_right) AS v2 FROM cte GROUP BY id2, id3, id3_right &quot;&quot;&quot; ).pl() time.perf_counter() - start # 104.54312287131324 </code></pre>
<python><sql><duckdb>
2023-06-17 19:58:32
1
1,153
lebesgue
76,497,897
1,811,073
Include newline characters between nested XML Tags (BeautifulSoup)
<p>I'd like to test that some (de)serialization code is 100% symmetrical but I've found that, when serializing my data as a nested Tag, the inner Tags don't have newline characters between them like the original Tag did.</p> <p>Is there some way to force newline characters between nested Tags <em>without</em> having to prettify -&gt; re-parse?</p> <p>Example:</p> <pre><code>import bs4 from bs4 import BeautifulSoup tag_string = &quot;&lt;OUTER&gt;\n&lt;INNER/&gt;\n&lt;/OUTER&gt;&quot; tag = BeautifulSoup(tag_string, &quot;xml&quot;).find(&quot;OUTER&quot;) print(tag) ... &lt;OUTER&gt; ... &lt;INNER/&gt; ... &lt;/OUTER&gt; new_tag = bs4.Tag(name=&quot;OUTER&quot;) new_tag.append(bs4.Tag(name=&quot;INNER&quot;, can_be_empty_element=True)) tag == new_tag ... False print(new_tag) ... &lt;OUTER&gt;&lt;INNER/&gt;&lt;/OUTER&gt; </code></pre> <p>In order to get a 100% symmetrical serialization, I have to:</p> <ul> <li>prettify the Tag into a string</li> <li>replace &quot;\n &quot; with &quot;\n&quot;</li> <li>parse this string as a new XML Tag</li> <li>find the <code>OUTER</code> Tag</li> </ul> <pre><code>new_tag = BeautifulSoup( new_tag.prettify().replace(&quot;\n &quot;, &quot;\n&quot;), &quot;xml&quot; ).find(&quot;OUTER&quot;) tag == new_tag ... True print(new_tag) ... &lt;OUTER&gt; ... &lt;INNER/&gt; ... 
&lt;/OUTER&gt; </code></pre> <p>Justification for this need:</p> <ul> <li>I'm deserializing thousands of tags into objects using some test data specific to me</li> <li>as I developed this feature, I found that there were many mismatches between input / output for several of the attributes</li> <li>mismatches are not acceptable because it causes the closed-source binary that reads this data to perform operations that should be avoided unless there's a genuine difference between that program's database and the entries in this XML</li> <li>I was able to handle these mismatches for my test data, but I am not able to guarantee (de)serialization symmetry for all users of this code</li> <li>as a result, I think it's best if the test suit enumerates tags for the users' data asserting that input == output and, to facilitate this, I've had to do this kinda annoying hackaround (prettify -&gt; re-parse) which I'd like to avoid if possible</li> </ul>
<python><xml><beautifulsoup>
2023-06-17 19:46:44
2
876
aweeeezy
76,497,775
3,449,093
Finding the closest ellipse to a closed curve
<p>I have an array of points, described by their <code>x,y</code> coordinates, that represents an isophote. It was determined as a contour of the image. It's a closed curve.</p> <p>I need to find the closest ellipse so that I can then calculate how different my isophote is from that ellipse. I have no idea where to start.</p> <p>Could you point me in the right direction?</p>
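A common starting point is an algebraic least-squares fit of a general conic to the points (the Fitzgibbon-style "direct ellipse fit" adds a constraint that guarantees an ellipse, but the unconstrained version is only a few lines and often suffices for near-elliptical contours). A sketch on synthetic points; the residuals of the fitted conic then quantify how non-elliptical the isophote is:

```python
import numpy as np

def fit_conic(x, y):
    """Least-squares fit of a*x^2 + b*x*y + c*y^2 + d*x + e*y = 1."""
    A = np.column_stack([x**2, x * y, y**2, x, y])
    coeffs, *_ = np.linalg.lstsq(A, np.ones_like(x), rcond=None)
    return coeffs

# Synthetic stand-in isophote: an exact ellipse, so residuals should vanish.
t = np.linspace(0, 2 * np.pi, 200, endpoint=False)
x = 3 * np.cos(t) + 1.0
y = 2 * np.sin(t) - 0.5
coeffs = fit_conic(x, y)
residuals = np.column_stack([x**2, x * y, y**2, x, y]) @ coeffs - 1
```

For real isophotes, the maximum or RMS of `residuals` is one simple "distance from ellipticity" measure; geometric (point-to-ellipse) distance is more accurate but needs an iterative solver.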
<python>
2023-06-17 19:11:24
1
1,399
Malvinka
76,497,721
3,084,842
Fortran equivalent of numpy.roll function
<p>Python's <a href="https://numpy.org/doc/stable/reference/generated/numpy.roll.html" rel="nofollow noreferrer"><code>numpy.roll</code></a> function rolls arrays along an axis. Example:</p> <pre><code>from numpy import roll, array x = array([[1,0,2],[2,1,9],[5,5,1]]) print(roll(x, 1, axis=0)) </code></pre> <p>transforms the matrix <code>x</code> into</p> <pre><code>array([[5, 5, 1], [1, 0, 2], [2, 1, 9]]) </code></pre> <p>I try to do this in Fortran by slicing a matrix row and attaching it to the existing matrix:</p> <pre><code>program myfun implicit none integer, parameter :: N=3 real, dimension(N,N) :: m integer :: i, j m = 0 do i = 1, N m(i, i) = 1.0 enddo m(3,1) = 2 m(3,2) = 9 m(1,3) = 5 m(1,2) = 2 m(2,3) = 5 print *, '', [m(:,N), m] end program myfun </code></pre> <p>This produces matrix</p> <pre><code>[5, 5, 1] [1, 0, 2] [2, 1, 9] [5, 5, 1] &lt;- need to remove this row (how?) </code></pre> <p>Is this the best way to perform <code>numpy.roll</code> in Fortran? If so, how do I delete the last row of the matrix?</p>
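Fortran actually has a built-in counterpart: the CSHIFT intrinsic circularly shifts an array along a chosen dimension, so something like `m = cshift(m, shift=-1, dim=1)` should reproduce `np.roll(x, 1, axis=0)` with no slicing or row deletion at all (note that CSHIFT's positive shift moves elements toward lower indices, the opposite sign convention from `np.roll`, so double-check the sign). The slicing identity the question is reaching for can be verified on the NumPy side:

```python
import numpy as np

x = np.array([[1, 0, 2],
              [2, 1, 9],
              [5, 5, 1]])

# np.roll(x, 1, axis=0) is just "last row first, remaining rows below":
rolled = np.vstack([x[-1:], x[:-1]])
```

The Fortran analogue of this slicing form, without CSHIFT, would build the result from `m(N,:)` and `m(1:N-1,:)` rather than appending a row and then deleting it.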
<python><numpy><fortran>
2023-06-17 18:59:48
2
3,997
Medulla Oblongata
76,497,264
16,383,578
How to generate twin primes in NumPy?
<p>I want to generate <a href="https://en.wikipedia.org/wiki/Twin_prime" rel="nofollow noreferrer">twin primes</a> using NumPy. Two prime numbers are twin primes if their difference is 2.</p> <p>My idea is simple: generate the prime numbers using the Sieve of Eratosthenes, then filter out adjacent prime numbers that don't have a difference of 2.</p> <p>Sieve of Erastothenes code implementation is <a href="https://stackoverflow.com/questions/76495180/how-to-find-emirp-numbers-using-numpy">here</a>.</p> <p>This is the implementation of the twin prime filter I have written:</p> <pre class="lang-py prettyprint-override"><code>import numpy as np def twin_primes(n: int) -&gt; np.ndarray: primes = prime_wheel_sieve(n) mask = np.isin(primes + 2, primes) mask |= np.concatenate([[False], mask])[:-1] return primes[mask] </code></pre> <p>The expected output should be:</p> <pre class="lang-py prettyprint-override"><code>In [367]: twin_primes(1000) Out[367]: array([ 3, 5, 7, 11, 13, 17, 19, 29, 31, 41, 43, 59, 61, 71, 73, 101, 103, 107, 109, 137, 139, 149, 151, 179, 181, 191, 193, 197, 199, 227, 229, 239, 241, 269, 271, 281, 283, 311, 313, 347, 349, 419, 421, 431, 433, 461, 463, 521, 523, 569, 571, 599, 601, 617, 619, 641, 643, 659, 661, 809, 811, 821, 823, 827, 829, 857, 859, 881, 883], dtype=int64) </code></pre> <p>I first use membership checking to find prime numbers that, when incremented by 2, are also prime. This would find the first number in all twin prime pairs, but it will fail to find the second number of the pairs.</p> <p>This problem boils down to flipping all <code>False</code> values in a NumPy array immediately after <code>True</code> values, since in all twin prime pairs the second number must be the next prime after the first.</p> <p>I am unsure of how to flip every <code>False</code> after <code>True</code>, and Google is of no help. 
So I right-rotated the mask by one to get the second number in all pairs and combined the two masks using the bitwise OR operator.</p> <p>Is there a simpler way to flip all <code>False</code> after <code>True</code>?</p> <p>(as with my previous two questions, I used AI suggested edits)</p>
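A hedged sketch of one alternative that avoids the membership test entirely: since the prime array is sorted, `np.diff` marks the first member of every twin pair directly, and OR-ing that mask with a shifted copy marks the second. The plain sieve below is only a stand-in for `prime_wheel_sieve`; any sorted prime array works the same way.

```python
import numpy as np

def twin_primes(n):
    # plain sieve standing in for prime_wheel_sieve; any sorted
    # array of primes can be substituted here
    sieve = np.ones(n + 1, dtype=bool)
    sieve[:2] = False
    for k in range(2, int(n ** 0.5) + 1):
        if sieve[k]:
            sieve[k * k :: k] = False
    primes = np.flatnonzero(sieve)

    # gap2[i] is True when primes[i + 1] - primes[i] == 2, i.e. it
    # marks the FIRST member of each twin pair; padding and OR-ing
    # a shifted copy marks the second member as well
    gap2 = np.diff(primes) == 2
    mask = np.concatenate([gap2, [False]]) | np.concatenate([[False], gap2])
    return primes[mask]
```

This sidesteps the "flip False after True" question altogether, because the pair structure is read off the gaps rather than reconstructed from a membership mask.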
<python><arrays><python-3.x><numpy>
2023-06-17 17:02:38
2
3,930
Ξένη Γήινος
76,497,070
8,519,830
How to get a list of interpolated values at fixed timesteps from a time sorted list without constant timesteps
<p>I have a long file containing temperature data and I want to align the temperatures by interpolation to fixed time steps (5-minute spacing).</p> <pre><code>17.06.2023 16:11:59 : tfa 27.2 17.06.2023 16:19:47 : tfa 27.4 17.06.2023 16:27:36 : tfa 28.2 17.06.2023 16:35:50 : tfa 28.7 17.06.2023 16:42:52 : tfa 28.1 </code></pre> <p>As a start, the regular times for the interpolation would be like: <code>times = pd.date_range(beginTime, endTime, freq='5min')</code></p> <p>I tried <code>np.interp</code> but I'm stuck on how to deal with the two time axes.</p> <p>The result should be a list or a dataframe with interpolated values at the 5-minute timesteps. I saw multiple questions here, but either the proposed solution was long code or only one value was interpolated.</p>
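One way this can be sketched, assuming the file's exact line format: `np.interp` only needs both time axes as plain numbers, so converting both sets of timestamps to int64 nanoseconds puts them on a common scale.

```python
import numpy as np
import pandas as pd

# hypothetical sample in the file's format
raw = """17.06.2023 16:11:59 : tfa 27.2
17.06.2023 16:19:47 : tfa 27.4
17.06.2023 16:27:36 : tfa 28.2
17.06.2023 16:35:50 : tfa 28.7
17.06.2023 16:42:52 : tfa 28.1"""

rows = [line.split(" : tfa ") for line in raw.splitlines()]
t = pd.to_datetime([r[0] for r in rows], format="%d.%m.%Y %H:%M:%S")
temps = np.array([float(r[1]) for r in rows])

# fixed 5-minute grid covering the measured range
grid = pd.date_range(t[0].ceil("5min"), t[-1].floor("5min"), freq="5min")

# np.interp works on plain numbers, so feed it int64 nanosecond timestamps
values = np.interp(grid.astype("int64"), t.astype("int64"), temps)
result = pd.Series(values, index=grid)
print(result)
```

The `Series` index carries the regular timestamps, so the result can be used directly as a dataframe column or converted with `result.tolist()`.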
<python><pandas><numpy>
2023-06-17 16:13:54
1
585
monok
76,496,874
11,901,732
Find mean of truncated normal distribution in R
<p>How can I find the mean of a truncated normal distribution in R, where the lower bound <code>a</code> is 1, <code>sigma</code> is 2, and <code>mu</code> is 2.5? I have used the <code>truncnorm</code> library, but it does not have any functions for moments. In Python, I tried the following code:</p> <pre class="lang-py prettyprint-override"><code>import numpy as np from scipy.stats import truncnorm a, b = 1, np.inf mean, var, skew, kurt = truncnorm.stats(a, b, moments='mvsk') print(mean) </code></pre> <p>which gives <code>mean = 1.52513528</code>. How can I achieve the same result in R?</p>
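For reference, the mean of a lower-truncated normal has a closed form, E[X | X &gt; a] = mu + sigma * phi(alpha) / (1 - Phi(alpha)) with alpha = (a - mu)/sigma, which translates directly to R as `mu + sigma * dnorm(alpha) / (1 - pnorm(alpha))`. A stdlib-only Python sketch of that formula:

```python
import math

def truncnorm_mean(a, mu=0.0, sigma=1.0):
    # E[X | X > a] for X ~ N(mu, sigma^2), lower truncation only:
    #   mu + sigma * phi(alpha) / (1 - Phi(alpha)),  alpha = (a - mu) / sigma
    alpha = (a - mu) / sigma
    phi = math.exp(-0.5 * alpha * alpha) / math.sqrt(2 * math.pi)
    Phi = 0.5 * (1 + math.erf(alpha / math.sqrt(2)))
    return mu + sigma * phi / (1 - Phi)

# truncating the STANDARD normal at a = 1 reproduces the question's
# SciPy value 1.52513528... (note the SciPy call above never passes
# loc=2.5, scale=2, which is why mu and sigma do not affect it)
print(truncnorm_mean(1))
```

In R the same number is `dnorm(1) / (1 - pnorm(1))`, and the question's actual parameters would be `2.5 + 2 * dnorm(-0.75) / (1 - pnorm(-0.75))` with alpha = (1 - 2.5)/2.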
<python><r><statistics><normal-distribution><truncated>
2023-06-17 15:21:04
1
5,315
nilsinelabore
76,496,627
17,638,206
Can't use DBnet in EasyOCR
<p>I am trying to use DBnet model with EasyOCR, through using:</p> <pre><code>reader = Reader(['ar'], gpu = False,detect_network = 'dbnet18') </code></pre> <p>EasyOCR has downloaded the model, however, when detecting text I got the following error:</p> <pre><code>--------------------------------------------------------------------------- RuntimeError Traceback (most recent call last) Cell In[116], line 1 ----&gt; 1 easy_ocr(res,r&quot;C:\Users\PC\OCR-App 1\data\interim\iscore\file.txt&quot;,r&quot;C:\Users\PC\OCR-App 1\data\interim\iscore\out.jpg&quot;) Cell In[115], line 15, in easy_ocr(image, out_txt, out_image) 12 ts = time.time() 13 #min_size=10,width_ths=0.5,add_margin=0.1,paragraph = False,text_threshold =0.15,low_text=0.2 14 #text_threshold =0.15,low_text=0.2,add_margin=0.09 ---&gt; 15 results = reader.readtext(image,paragraph = True) 16 te = time.time() 17 td = te - ts File c:\Users\PC\anaconda3\envs\easyocr\Lib\site-packages\easyocr\easyocr.py:452, in Reader.readtext(self, image, decoder, beamWidth, batch_size, workers, allowlist, blocklist, detail, rotation_info, paragraph, min_size, contrast_ths, adjust_contrast, filter_ths, text_threshold, low_text, link_threshold, canvas_size, mag_ratio, slope_ths, ycenter_ths, height_ths, width_ths, y_ths, x_ths, add_margin, threshold, bbox_min_score, bbox_min_size, max_candidates, output_format) 446 ''' 447 Parameters: 448 image: file path or numpy-array or a byte stream object 449 ''' 450 img, img_cv_grey = reformat_input(image) --&gt; 452 horizontal_list, free_list = self.detect(img, 453 min_size = min_size, text_threshold = text_threshold,\ 454 low_text = low_text, link_threshold = link_threshold,\ 455 canvas_size = canvas_size, mag_ratio = mag_ratio,\ 456 slope_ths = slope_ths, ycenter_ths = ycenter_ths,\ 457 height_ths = height_ths, width_ths= width_ths,\ ... 
231 &quot;Input type is {}, but 'deform_conv_{}.*.so' is not imported successfully.&quot;.format(device_, device_), 232 ) 233 return output RuntimeError: Input type is cpu, but 'deform_conv_cpu.*.so' is not imported successfully. </code></pre>
<python><ocr><easyocr>
2023-06-17 14:23:33
1
375
AAA
76,496,565
16,383,578
How to reverse strings in a NumPy array?
<p>I want to reverse the order of characters in each string element of a NumPy array. For example, given the following input:</p> <pre class="lang-py prettyprint-override"><code>array(['2', '3', '5', '7', '11', '13', '17', '19', '23', '29', '31', '37', '41', '43', '47', '53', '59', '61', '67', '71', '73', '79', '83', '89', '97'], dtype='&lt;U2') </code></pre> <p>I want to obtain the following output (<em><strong>without using Python <code>for</code> loop</strong></em>):</p> <pre class="lang-py prettyprint-override"><code>array(['2', '3', '5', '7', '11', '31', '71', '91', '32', '92', '13', '73', '14', '34', '74', '35', '95', '16', '76', '17', '37', '97', '38', '98', '79'], dtype='&lt;U2') </code></pre> <p>I know that I can use <code>arr[::-1]</code> to reverse the order of elements in a NumPy array, but that isn't the topic of this question, and <code>np.array([e[::-1] for e in arr])</code> is inefficient and against the point of NumPy.</p> <p>The array was created using a vectorized version of the base conversion function <a href="https://stackoverflow.com/questions/76495180/how-to-find-emirp-numbers-using-numpy"><code>np.vectorize(to_base_str)</code></a>.</p> <p>How can I reverse the order of characters in each string element of a NumPy array using vectorization? I have searched online but have not found a solution. Note that <code>arr[..., ::-1]</code> does not work for string elements in a NumPy array.</p> <p>(Code is mine, but I did use the &quot;AI suggested edits&quot; feature)</p>
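A sketch of the usual `view('U1')` trick. One wrinkle: fixed-width NumPy strings pad with NUL characters, so after reversing each row the padding ends up in front; rotating each row left by its pad width restores trailing padding. This assumes the strings contain no embedded NULs.

```python
import numpy as np

def reverse_strings(arr):
    # works on '<U...' arrays: each character is 4 bytes, padding is NUL
    arr = np.ascontiguousarray(arr)
    w = arr.dtype.itemsize // 4
    v = arr.view('U1').reshape(len(arr), w)
    lengths = (v != '').sum(axis=1)       # true length of each string
    rev = v[:, ::-1]                      # reversed; padding now leads
    # rotate each row left by its pad width so the NULs trail again
    idx = (np.arange(w) + (w - lengths)[:, None]) % w
    out = np.take_along_axis(rev, idx, axis=1)
    return out.copy().view(arr.dtype).ravel()
```

Everything here is a bulk NumPy operation, so there is no per-element Python loop; the cost is one reshaped view, one fancy-indexing pass, and one copy.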
<python><arrays><numpy>
2023-06-17 14:09:11
2
3,930
Ξένη Γήινος
76,496,465
2,983,568
How can I make VS Code's suggest widget auto-select the first item while not auto-expanding the suggestion details?
<p>Coding with VS Code and the <code>Python</code>, <code>Pylance</code>, <code>Black</code>, and <code>Jupyter</code> extensions, I would like to always see IntelliSense suggestions and never see documentation pop-ups unless I explicitly request them (e.g. with <code>Ctrl+space</code>). I have attempted various combinations of settings related to quick suggestions and selection mode, but every time I show the documentation once, it continues to appear every time I select a suggestion, which is very frustrating. Here's a screenshot of what I'm trying to achieve:</p> <p><img src="https://i.sstatic.net/sHAhF.png" alt="Screenshot of desired behavior" /></p> <p>Is there a setting I'm missing that will allow me to achieve this behavior and keep it consistent?</p>
<python><visual-studio-code><jupyter><editor><settings>
2023-06-17 13:44:37
1
4,665
evilmandarine
76,496,461
755,229
How to add geo data to a Google contact via the People API?
<p>I am trying to add geo data to the contacts I am inserting/updating in Google Contacts using the Python/Java API, but I cannot find any reference as to where this latitude/longitude data should go. Of course, adding them as a custom field works fine, but then it is not really added as a real geo data point.</p> <p>As you can see here: <a href="https://developers.google.com/people/api/rest/v1/people#address" rel="nofollow noreferrer">https://developers.google.com/people/api/rest/v1/people#address</a>, there is no mention of geo data at all.</p>
<javascript><python><geolocation><google-people-api>
2023-06-17 13:43:23
0
4,424
Max
76,496,257
4,865,723
Translation of Qt buttons not working with QTranslator
<p>Usually I translate my Python applications with GNU gettext. But I realized that standard elements (e.g. <code>QDialogButtonBox.Yes</code>) in a Qt GUI are not translated.</p> <p>So I found examples with <code>QTranslator</code>. I'm not sure if this is the correct approach. I would expect that Qt handles the translation itself and realizes which locale is currently active. I would expect a simple switch like <code>.use_current_local()</code>.</p> <p>I couldn't find such a Qt option, so I tried the <code>QTranslator</code> examples without understanding them in all details. The example here does run without errors, but the buttons are not translated and look like this.</p> <p>I tried three variants for the value of the locale variable: explicit <code>de_DE.UTF8</code>, <code>QLocale.system().name</code> (returns <code>de_DE</code>) and <code>qt_de_DE</code>. The latter seems weird but comes from a real-world example which I don't understand.</p> <p><a href="https://i.sstatic.net/kajeA.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/kajeA.png" alt="enter image description here" /></a></p> <pre><code>#!/usr/bin/env python3 from PyQt5.QtWidgets import QApplication, QDialogButtonBox from PyQt5.QtCore import QTranslator, QLocale, QLibraryInfo def get_translator(): translator = QTranslator() # Variant 1 loc = QLocale.system().name() # Variant 2 # loc = 'de_DE.utf-8', # qt_%s' % locale, # Variant 3 # loc = 'qt_de_DE', print(f'{loc=}') # &lt;-- de_DE translator.load( loc, QLibraryInfo.location(QLibraryInfo.TranslationsPath) ) return translator if __name__ == '__main__': app = QApplication([]) t = get_translator() app.installTranslator(t) buttonBox = QDialogButtonBox( QDialogButtonBox.Ok | QDialogButtonBox.Cancel | QDialogButtonBox.Yes | QDialogButtonBox.No ) buttonBox.show() app.exec() </code></pre> <p>Please note I'm not interested in using Qt to replace my GNU gettext setup. I just want to have the standard Qt elements also translated. 
All other strings I set myself are still translated via GNU gettext.</p> <p>I do use Debian 12 here. The <code>qttranslations5-l10n</code> package is installed. Other Qt applications (e.g. <code>backintime-qt</code>) do work and translate standard GUI elements like buttons without problems.</p>
<python><pyqt5><localization><qtranslator>
2023-06-17 12:47:24
1
12,450
buhtz
76,496,218
354,051
Parallel processing of tasks in python using multiprocessing.Pool
<p>Here is my code for executing tasks in parallel. The program executes all the shell commands to resize all the icons using the magick program.</p> <pre class="lang-py prettyprint-override"><code>import glob from subprocess import run import subprocess from multiprocessing import Process, Pool, cpu_count import time import os class ParallelProcess: def __init__(self, files): &quot;&quot;&quot;Constructor Args: files (List[str]): A list of icon files. &quot;&quot;&quot; self.files = files def parallel_resize(self, file): &quot;&quot;&quot;Task function for multiprocessing Pool. Args: file (TYPE): Icon file path &quot;&quot;&quot; print(&quot;Resizing : {}&quot;.format(file)) cmnd = &quot;magick {} -resize 100x100^ -gravity center -extent 100x100 ./out/{}&quot;.format(file, os.path.basename(file)) try: data = run(cmnd, capture_output=True, shell=True) errors = data.stderr.splitlines() if errors: for e in errors: print(e.decode()) exit() except subprocess.CalledProcessError as e: print(e.stderr) exit() def execute_parallel(self): '''Execute tasks in parallel. ''' start_time = time.perf_counter() try: pool = Pool() pool.map(self.parallel_resize, self.files, 4) except Exception as e: print(e.stderr) end_time = time.perf_counter() print(&quot;Program finished in {} seconds.&quot;.format(end_time-start_time)) if __name__ == '__main__': t = ParallelProcess(glob.glob(&quot;./rabbits/*.ico&quot;)) t.execute_parallel() </code></pre> <p>The results are:</p> <pre class="lang-py prettyprint-override"><code>Serial processing finished in 5.773 seconds. Parallel processing finished in 2.01 seconds. chunk size = 1 Parallel processing finished in 2.181 seconds. chunk size = 4 Parallel processing finished in 2.2651441 seconds. chunk size = 8 </code></pre> <p>On my Intel i9, 8 processors (4 logical) laptop, the results of using different chunk sizes are shown. I am not able to figure out how to choose the correct chunk size. 
I will be using the above code to process thousands of images.</p> <p>How do I handle interrupts? I have purposely copied a corrupt icon file into the directory. When magick processes that file, errors are produced. In that case, the processing should immediately stop, but I'm not able to figure it out correctly, although I'm still trying.</p> <p>Practically, I can think of two ways to stop:</p> <ol> <li>As soon as an error is produced, finish the files in the queue and stop.</li> <li>Immediately stop, even drop the tasks in the queue.</li> </ol> <p>So, how do you handle an error interruption correctly?</p>
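The second stopping strategy above can be sketched with `imap_unordered`, which yields results as workers finish, so an exception raised in a task surfaces in the parent as soon as its result is consumed; `pool.terminate()` then drops everything still queued. The `resize_one` function here is a hypothetical stand-in for the magick call, not the question's actual worker.

```python
from multiprocessing import Pool

def resize_one(n):
    # stand-in for the magick subprocess call; raises on the
    # "corrupt" input to simulate a failed conversion
    if n == 5:
        raise RuntimeError(f"bad file: {n}")
    return n * n

def run_all(items, workers=4):
    done = []
    with Pool(workers) as pool:
        try:
            # imap_unordered yields results as workers finish, so the
            # first exception surfaces here instead of at the very end
            for result in pool.imap_unordered(resize_one, items):
                done.append(result)
        except RuntimeError as exc:
            pool.terminate()  # option 2: stop now, drop queued tasks
            print("aborted:", exc)
    return done
```

For option 1 (drain the queue, then stop), the worker would instead return an error marker rather than raise, and the parent would stop submitting new chunks once a marker is seen.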
<python><python-multiprocessing>
2023-06-17 12:35:30
2
947
Prashant
76,495,956
16,719,690
Should we use flask-migrate in the init_app function or separately in our run script?
<p>I am reading through Grinberg's Flask Development 2nd ed., and I have a question about flask migrate &amp; a &quot;large app structure&quot;.</p> <p>Previously, I had used migrate like any other extension, like</p> <pre class="lang-py prettyprint-override"><code># __init__.py ... from flask_migrate import Migrate ... migrate = Migrate() ... def create_app(): app = Flask(__name__) ... migrate.init_app(app, db) ... return app </code></pre> <p>This is suggested in the <a href="https://flask-migrate.readthedocs.io/en/latest/#command-reference" rel="nofollow noreferrer">Flask-Migrate docs</a>, but I understand it is obviously not the only way to do it. And then I had an entrypoint script, or root-level application script, which I call with <code>python run.py</code>:</p> <pre class="lang-py prettyprint-override"><code># root-level application script from app import create_app app = create_app() if __name__ == '__main__': app.run() </code></pre> <p>In the Flask Web Development book however, which walks through the <a href="https://github.com/miguelgrinberg/flasky" rel="nofollow noreferrer">Flasky app</a>, Miguel shows on page 93 a setup where migrate is actually imported and initialised with <code>app</code> and <code>db</code> in the <strong>root-level application script</strong> and NOT the app-level <code>__init__.py</code> file. What I mean is this:</p> <pre><code>flasky ├── app/ │ ├── api/ │ ├── auth/ │ ├── main/ │ ├── static/ │ ├── templates/ │ └── __init__.py # App-level init script ├── migrations/ └── flasky.py # Root-level application script </code></pre> <p>There is not an explanation as to why this approach was chosen. I was previously running my app by calling the root-level application script, not using the <code>flask run</code> CLI command. 
When I tried having migrate at the root level, it of course was not recognised because I had not exported <code>FLASK_APP=run.py</code>, so the <code>flask</code> CLI command was unaware of <code>db</code> or <code>migrate</code>. This means I would need to change how I run my application.</p> <p>My question then is, what might be the reasoning for putting migrate in the root-level app script vs in my <code>__init__.py</code> file's <code>create_app()</code> function? Given that Miguel wrote the extension, he presumably decided on this for a reason, but I am not sure why this extension is treated differently. Could it be something to do with separating the database-related code away from the rest of the application code?</p> <p>Sorry if this is not the right place, but I can't exactly make an issue on the flasky github because it is just a question and not an issue.</p>
<python><flask><flask-migrate>
2023-06-17 11:21:41
2
357
crabulus_maximus
76,495,891
13,245,310
How to type-annotate a function that returns either a decorator, a context manager, or a custom object?
<p>I have a library that allows users to add functionality to their tests. The library can be used in three different ways:</p> <pre class="lang-py prettyprint-override"><code>def test() -&gt; None: with custom_functionality(): print(&quot;test&quot;) @custom_functionality def test_deco() -&gt; None: print(&quot;test_deco&quot;) def test_manual() -&gt; None: custom_func = custom_functionality() custom_func.start() print(&quot;test_manual&quot;) custom_func.stop() </code></pre> <p>This works because of the following logic inside my library:</p> <pre class="lang-py prettyprint-override"><code>from typing import ContextManager, Any, Callable, Optional, TypeVar, Union from typing_extensions import ParamSpec P = ParamSpec('P') T = TypeVar('T') class ContextDecorator: def __init__(self, func: Optional[Callable[P, T]] = None) -&gt; None: self.func = func def __call__(self) -&gt; Union[Callable[P, T], T]: self.start() x = self.func() self.stop() return x def start(self) -&gt; None: self.__enter__() def stop(self) -&gt; None: self.__exit__() def __enter__(self) -&gt; None: pass def __exit__(self, *args: Any) -&gt; None: pass def custom_functionality(func: Optional[Callable[P, T]] = None): if func: def wrapper() -&gt; Callable[P, T]: context_deco = ContextDecorator(func) return context_deco() return wrapper else: return ContextDecorator() </code></pre> <p>My question: How do I annotate the return-type of the <code>custom_functionality</code>-function?</p> <p>As far as I can tell, there are three return-types here:</p> <ul> <li><code>Callable[[], Callable[P, T]]</code>, because of the standard decorator</li> <li><code>Callable[[Callable[P, T]], Callable[P, T]</code> for the context manager</li> <li><code>ContextDecorator</code> to ensure the <code>start</code> and <code>stop</code> methods are properly exposed/typed</li> </ul> <p>A <code>Union</code> of the three doesn't work, because MyPy can't figure out when to use which.</p> <p><strong>Note</strong>: This code is a 
simplified version of what I need, but it works as is. The provided tests can be run with <code>pytest</code>.</p>
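One common way to express "decorator when given a function, context manager otherwise" is `typing.overload`: the overloads describe the two call shapes, and the single runtime implementation carries a `Union` return. A minimal runnable sketch (using `Callable[..., T]` for brevity; `ParamSpec` would additionally preserve argument signatures):

```python
from typing import Any, Callable, Optional, TypeVar, Union, overload

T = TypeVar("T")

class ContextDecorator:
    def start(self) -> None:
        pass

    def stop(self) -> None:
        pass

    def __enter__(self) -> "ContextDecorator":
        self.start()
        return self

    def __exit__(self, *exc: Any) -> None:
        self.stop()

@overload
def custom_functionality(func: Callable[..., T]) -> Callable[..., T]: ...
@overload
def custom_functionality(func: None = ...) -> ContextDecorator: ...

def custom_functionality(
    func: Optional[Callable[..., T]] = None,
) -> Union[Callable[..., T], ContextDecorator]:
    if func is not None:
        # decorator path: wrap the call in start()/stop()
        def wrapper(*args: Any, **kwargs: Any) -> T:
            with ContextDecorator():
                return func(*args, **kwargs)
        return wrapper
    # bare-call path: hand back the manager for with-blocks or
    # manual start()/stop()
    return ContextDecorator()
```

With the overloads in place, MyPy resolves `@custom_functionality` to the first signature and `custom_functionality()` to the second, so `start`/`stop` stay visible on the returned object without a `Union` at the call site.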
<python><mypy><python-typing>
2023-06-17 11:03:23
1
2,143
Bert Blommers
76,495,744
198,825
Should I call Authlib's parse_id_token on every request?
<p>I'm following Auth0's guide to <a href="https://auth0.com/docs/quickstart/webapp/python/interactive" rel="nofollow noreferrer">adding login to a Python application</a> and implementing it in my FastAPI application.</p> <p>I've successfully completed every step of the guide and my application is working as expected. I am able to login and a token is placed in my session.</p> <p>One thing I'm failing to grasp is: how does this token get validated in subsequent requests?</p> <p>The example in guide shows a snippet that places the session in the rendered template:</p> <pre class="lang-py prettyprint-override"><code>@app.route(&quot;/&quot;) def home(): return render_template(&quot;home.html&quot;, session=session.get('user'), pretty=json.dumps(session.get('user'), indent=4)) </code></pre> <p>And the template checks the existence of the token:</p> <pre class="lang-py prettyprint-override"><code>{% if session %} &lt;h1&gt;Welcome {{session.userinfo.name}}!&lt;/h1&gt; &lt;p&gt;&lt;a href=&quot;/logout&quot;&gt;Logout&lt;/a&gt;&lt;/p&gt; {% else %} &lt;h1&gt;Welcome Guest&lt;/h1&gt; &lt;p&gt;&lt;a href=&quot;/login&quot;&gt;Login&lt;/a&gt;&lt;/p&gt; {% endif %} </code></pre> <p>But surely just testing for the existence of the token in the session is not enough, right?</p> <p>After digging around in Authlib's code I've found the method <code>parse_id_token</code> and so I've added middleware that calls it in every request to the webapp.</p> <pre class="lang-py prettyprint-override"><code>@app.middleware(&quot;http&quot;) async def authenticate_request(request: Request, call_next): subject = request.session.get(&quot;subject&quot;) userinfo = await oauth.auth0.parse_id_token(get, nonce=get['userinfo']['nonce']) ... return await call_next(request) </code></pre> <p>Is this the correct way to validate all requests after the user has successfully logged in?</p>
<python><fastapi><auth0><authlib>
2023-06-17 10:19:19
1
7,885
noamt
76,495,515
3,919,531
Install local python wheel with yocto
<p>I compiled the tensorrt python binding for a specific version of python. It doesn't have other python dependencies. Now I would like to install the compiled .whl file in a yocto image. I tried with the following recipe:</p> <pre><code>SUMMARY = &quot;NVIDIA® TensorRT™, an SDK for high-performance deep learning inference, includes a deep learning inference optimizer and runtime that delivers low latency and high throughput for inference applications.&quot; HOMEPAGE = &quot;https://github.com/NVIDIA/TensorRT&quot; SRC_URI = &quot;file://tensorrt-8.2.1.9-cp38-none-linux_aarch64.whl&quot; LICENSE = &quot;Proprietary&quot; LIC_FILES_CHKSUM = &quot;file://${COMMON_LICENSE_DIR}/Proprietary;md5=0557f9d92cf58f2ccdd50f62f8ac0b28&quot; DEPENDS += &quot;python3 python3-pip&quot; FILES_${PN} += &quot;\ ${libdir}/${PYTHON_DIR}/site-packages/* \ &quot; do_install() { pip install ${S}/tensorrt-8.2.1.9-cp38-none-linux_aarch64.whl } </code></pre> <p>Unfortunately the recipe fails at the do_install step: <code>ERROR: Execution of '/home/user/Desktop/tegra-demo-distro/build/tmp/work/aarch64-oe4t-linux/tensorrt/8.2.1-r0/temp/run.do_install.56472' failed with exit code 127</code></p>
<python><yocto><python-wheel>
2023-06-17 09:14:40
1
1,298
Damien
76,495,395
755,229
How to remove or add labels to Google contacts using the People API?
<p>How would you add a label to or delete a label from Google contacts using the People API Python library? I cannot find much Python reference material for using the Google API. I know that you can add a label if you include it in a contact when adding or updating the contact, but I cannot find a method to remove a label directly or update it.</p>
<python><google-people-api>
2023-06-17 08:37:35
1
4,424
Max
76,495,306
10,694,340
What is the rationale behind the naming of the squeeze and unsqueeze operations?
<p>I was wondering if there is some (mathematical, historical, etc) reason behind the operations to remove and add singleton dimensions being called <code>squeeze</code> and <code>unsqueeze</code>. I am mainly asking from a torch perspective but it seems that those names are used also in many other programming languages and libraries.</p> <p>Especially because you could easily make a case for the naming to be reversed and you would not notice much of a difference. It would make even more sense to me that an operation that removes singleton dimensions would be called <code>un-squeeze</code>.</p> <p>It seems that something more descriptive like <code>add_singleton_dimension</code> and <code>remove_singleton_dimension</code> would help to better convey the meaning of those operations (like the numpy function <code>expand_dims</code> does).</p> <p>So, is this because the names <code>squeeze</code> and <code>unsqueeze</code>:</p> <ul> <li>have been inherited from other programming languages/paradigms</li> <li>are defined and used in mathematical language</li> <li>retrocompatibility with existing libraries</li> </ul>
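For concreteness, the two operations side by side in NumPy terms (torch's `unsqueeze` corresponds to NumPy's `expand_dims`):

```python
import numpy as np

a = np.zeros((3, 1, 2))

squeezed = np.squeeze(a, axis=1)        # remove the singleton axis
unsqueezed = np.expand_dims(a, axis=0)  # what torch calls unsqueeze

print(squeezed.shape)    # (3, 2)
print(unsqueezed.shape)  # (1, 3, 1, 2)
```

The metaphor, as usually told, is compressing the array flat (squeezing out axes of extent 1) versus releasing it back to a higher rank, which is one reading under which the current naming is consistent.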
<python><numpy><pytorch><naming-conventions>
2023-06-17 08:12:15
1
778
Kevin Spaghetti
76,495,180
16,383,578
How to find emirp numbers using NumPy?
<p>I want to find prime numbers whose digits reversed is also a prime number, including palindromic primes, and I also wish to find primes with such property in other bases, not just decimal. Prime numbers whose reversal of digits are also prime numbers are called emirp and more information about it is on this <a href="https://en.wikipedia.org/wiki/Emirp" rel="nofollow noreferrer">link</a>.</p> <p>I have already implemented a solution that gives the correct output, which is shown below. I would like to implement the same thing using NumPy.</p> <pre class="lang-py prettyprint-override"><code>import numpy as np from itertools import cycle def prime_wheel_sieve(n: int) -&gt; np.ndarray: wheel = cycle([4, 2, 4, 2, 4, 6, 2, 6]) primes = np.ones(n + 1, dtype=bool) primes[:2] = False for square, step in ((4, 2), (9, 6), (25, 10)): primes[square::step] = False k = 7 while (square := k * k) &lt;= n: if primes[k]: primes[square :: 2 * k] = False k += next(wheel) return np.where(primes)[0] DIGITS = { **{n: chr(n + 48) for n in range(10)}, **{n: chr(n + 87) for n in range(10, 36)} } def to_base_str(n: int, base: int) -&gt; str: if not 2 &lt;= base &lt;= 36: raise ValueError s = [] while n: n, d = divmod(n, base) s.append(DIGITS[d]) return &quot;&quot;.join(s[::-1]) def to_base(n: int, base: int) -&gt; str | tuple: if base &lt; 2: raise ValueError if base &lt;= 36: return to_base_str(n, base) s = [] while n: n, d = divmod(n, base) s.append(d) return tuple(s[::-1]) def from_base(s: str | tuple, base: int) -&gt; int: if base &lt; 2: raise ValueError if base &lt;= 36: return int(s, base) return sum(d * base ** i for i, d in enumerate(s[::-1])) def emirp(n: int, base: int = 10) -&gt; np.ndarray: primes = prime_wheel_sieve(n) primes = primes[primes &gt; base] elist = [to_base(p, base) for p in primes] eset = set(elist) return primes[[i for i, e in enumerate(elist) if e[::-1] in eset]] </code></pre> <p>Examples:</p> <pre><code>In [90]: emirp(1000, 2) Out[90]: array([ 3, 5, 7, 11, 13, 
17, 23, 29, 31, 37, 41, 43, 47, 53, 61, 67, 71, 73, 83, 97, 101, 107, 113, 127, 131, 151, 163, 167, 173, 181, 193, 197, 199, 223, 227, 229, 233, 251, 257, 263, 269, 277, 283, 307, 313, 331, 337, 349, 353, 359, 373, 383, 409, 421, 431, 433, 443, 449, 461, 463, 479, 487, 491, 503, 509, 521, 571, 577, 599, 601, 617, 619, 631, 643, 653, 661, 677, 683, 691, 701, 709, 727, 739, 757, 773, 797, 821, 823, 827, 839, 853, 857, 881, 883, 907, 911, 937, 941, 947, 953, 967]) In [91]: emirp(1000, 10) Out[91]: array([ 11, 13, 17, 31, 37, 71, 73, 79, 97, 101, 107, 113, 131, 149, 151, 157, 167, 179, 181, 191, 199, 311, 313, 337, 347, 353, 359, 373, 383, 389, 701, 709, 727, 733, 739, 743, 751, 757, 761, 769, 787, 797, 907, 919, 929, 937, 941, 953, 967, 971, 983, 991]) In [92]: emirp(1000, 5) Out[92]: array([ 7, 11, 13, 17, 19, 23, 29, 31, 37, 41, 47, 59, 61, 67, 71, 73, 79, 83, 89, 97, 101, 103, 107, 109, 113, 127, 131, 149, 151, 157, 163, 167, 191, 193, 211, 223, 227, 229, 233, 239, 251, 257, 269, 271, 277, 281, 293, 317, 331, 337, 347, 349, 353, 359, 367, 379, 397, 421, 431, 439, 443, 449, 457, 461, 463, 479, 491, 503, 521, 523, 547, 563, 571, 577, 599, 601, 613, 617, 619, 631, 701, 751, 761, 881, 911]) </code></pre> <hr /> <h2>Update</h2> <p>I just found out <code>np.base_repr</code>, it almost does exactly the same thing as my custom function, except it doesn't handle bases larger than 36, as with <code>int</code>, and it doesn't take an array as input. So it is useless here. And I only found it because I Googled a topic completely irrelevant to it.</p>
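A hedged sketch of a fully vectorised alternative: reverse the digits arithmetically, with no string round-trip at all, and then reuse the `np.isin` membership test. Note that leading zeros drop out (e.g. 40 reversed becomes 4), which is harmless here since a prime greater than the base never ends in the digit 0.

```python
import numpy as np

def reverse_digits(arr, base=10):
    # vectorised arithmetic digit reversal; works in any base and
    # avoids converting each element to a string
    a = arr.copy()
    out = np.zeros_like(a)
    while np.any(a > 0):
        active = a > 0  # rows that still have digits left
        out[active] = out[active] * base + a[active] % base
        a[active] //= base
    return out
```

The emirp filter then becomes `primes[np.isin(reverse_digits(primes, base), primes)]` applied to the sieve output, with the `primes > base` pre-filter kept as in the original code.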
<python><arrays><python-3.x><numpy>
2023-06-17 07:27:31
2
3,930
Ξένη Γήινος
76,495,173
1,475,482
I am streaming binance data in python but the prices are rounded down when inserted into my mariadb db
<p>I can stream the data fine and it is returning the results I want, however the data is getting rounded after a couple of decimal places when inserted into the database.</p> <p>A snippet of the data is below.</p> <pre><code>{ &quot;stream_type&quot;:&quot;btcusdt@kline_1m&quot;, &quot;event_type&quot;:&quot;kline&quot;, &quot;event_time&quot;:1686983029769, &quot;symbol&quot;:&quot;BTCUSDT&quot;, &quot;kline&quot;:{ &quot;kline_start_time&quot;:1686982980000, &quot;kline_close_time&quot;:1686983039999, &quot;symbol&quot;:&quot;BTCUSDT&quot;, &quot;interval&quot;:&quot;1m&quot;, &quot;first_trade_id&quot;:false, &quot;last_trade_id&quot;:false, &quot;open_price&quot;:&quot;26738.94000000&quot;, &quot;close_price&quot;:&quot;26777.99000000&quot;, &quot;high_price&quot;:&quot;26778.00000000&quot;, &quot;low_price&quot;:&quot;26738.94000000&quot;, &quot;base_volume&quot;:&quot;52.18289000&quot;, &quot;number_of_trades&quot;:1613, &quot;is_closed&quot;:false, &quot;quote&quot;:&quot;1396485.64284410&quot;, &quot;taker_by_base_asset_volume&quot;:&quot;37.43018000&quot;, &quot;taker_by_quote_asset_volume&quot;:&quot;1001618.64984080&quot;, &quot;ignore&quot;:&quot;0&quot; }, &quot;unicorn_fied&quot;:[ &quot;binance.com&quot;, &quot;0.12.2&quot; ] } </code></pre> <p>Here is my code to retrieve the above.</p> <pre><code>#import from unicorn_binance_websocket_api.manager import BinanceWebSocketApiManager import time import pandas as pd import sqlalchemy from sqlalchemy import create_engine #create engine to connect to DB engine = sqlalchemy.create_engine(&quot;mariadb+mariadbconnector://test_user:StrongPassword@127.0.0.1:3306/test_db&quot;) #define coins to collect data from and start stream symbols = ['BTC','COMBO'] symbols = [symbol+'usdt' for symbol in symbols] ubwa = BinanceWebSocketApiManager(exchange=&quot;binance.com&quot;, output_default=&quot;UnicornFy&quot;) ubwa.create_stream(['kline_1m'], symbols, 
output=&quot;UnicornFy&quot;) #sort data from stream into dataframes and insert into db def SQLimport(data): time = data['event_time'] coin = data['symbol'] open_price = data['kline']['open_price'] close_price = data['kline']['close_price'] low_price = data['kline']['low_price'] high_price = data['kline']['high_price'] volume = data['kline']['taker_by_base_asset_volume'] frame = pd.DataFrame([[time,open_price,close_price,low_price,high_price,volume]], columns = ['time','open_price','close_price','low_price','high_price','volume']) frame.time = pd.to_datetime(frame.time, unit='ms') frame.open_price = frame.open_price.astype(float) frame.close_price = frame.close_price.astype(float) frame.low_price = frame.low_price.astype(float) frame.high_price = frame.high_price.astype(float) frame.volume = frame.volume.astype(float) frame['amplitude'] = (frame.high_price - frame.low_price) / (frame.high_price + frame.low_price) /2 frame.to_sql(coin, engine, index=False, if_exists='append') while True: data = ubwa.pop_stream_data_from_stream_buffer() if data: if len(data) &gt; 3: SQLimport(data) print(data) </code></pre> <p>This imports the data into the db and also prints it out on the screen. However, the values printed show the full prices, e.g. 26738.94000000, whereas if this record is returned from the db it shows 26738.94; it seems to be rounding the prices or not inserting them fully. I have manually inserted data and it will accept it fine, but through my script the open, close, high and low columns are rounded to 2 decimal places.</p> <p>I also set up an SQLite db and inserted the data there too, with the same results.</p> <p>I have googled all night, read the pandas docs as best as I can, and am stumped.</p> <p>Am I missing something obvious?</p> <p>Thank you for any guidance.</p>
<python><pandas><mariadb><rounding><binance>
2023-06-17 07:24:57
0
380
John Spencer
76,495,043
573,082
Why can't my module be found even though I use __init__.py?
<p>I tried to read how <a href="https://docs.python.org/3/reference/import.html" rel="nofollow noreferrer"><code>import</code></a> works and use what is suggested, but I still have questions:</p> <p>If I <code>cd</code> to the <code>project</code> folder and run <code>python ./test/test1.py</code> I get <code>ModuleNotFoundError: No module named 'lib'</code>. <br></p> <ul> <li>If I add <code>print(os.getcwd())</code> it does print the current directory, which is <code>project</code>. So why can it not find the <code>lib</code> folder?</li> <li>If I <code>cd</code> to <code>project/test</code>, add <code>import sys; sys.path.append('..')</code> to the beginning of <code>test1.py</code> and then run <code>python test1.py</code>, it works. But why can't I run it from one level above?</li> <li>I also tried adding <code>__init__.py</code> to the <code>lib</code> and <code>test</code> folders, but I get the same error.</li> <li>If I try changing to a relative path <code>from .lib.lib1 ...</code> and run <code>test1.py</code> from the <code>project</code> folder, I get an <code>ImportError: attempted relative import with no known parent package</code> error.</li> </ul> <p>What am I doing wrong?</p> <p><strong>test1.py</strong></p> <pre><code>from lib.lib1 import MyClass </code></pre> <p><strong>Project structure</strong></p> <pre><code>project lib lib1.py lib2.py test test1.py # from lib.lib1 import MyClass main.py </code></pre>
<python><python-3.x><python-import>
2023-06-17 06:43:28
2
14,501
theateist
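One likely explanation for the behavior in the question above: `python ./test/test1.py` puts the script's directory (`test/`), not the current working directory, at the front of `sys.path`, so `lib` is never importable no matter what `os.getcwd()` prints. Running the file as a module from the project root (`python -m test.test1`) puts the root on `sys.path` instead. The sketch below reproduces the question's layout in a temporary directory to demonstrate this; the file contents are minimal stand-ins.

```python
import os
import subprocess
import sys
import tempfile

# Recreate the question's layout in a temp dir.  `python ./test/test1.py`
# fails because script mode puts test/ (the script's directory) on
# sys.path, not the cwd; `python -m test.test1` run from the project
# root puts the root on sys.path, so `lib.lib1` resolves.
root = tempfile.mkdtemp()
os.makedirs(os.path.join(root, "lib"))
os.makedirs(os.path.join(root, "test"))
for pkg in ("lib", "test"):
    open(os.path.join(root, pkg, "__init__.py"), "w").close()
with open(os.path.join(root, "lib", "lib1.py"), "w") as f:
    f.write("class MyClass:\n    pass\n")
with open(os.path.join(root, "test", "test1.py"), "w") as f:
    f.write("from lib.lib1 import MyClass\nprint('ok')\n")

result = subprocess.run(
    [sys.executable, "-m", "test.test1"],
    cwd=root, capture_output=True, text=True,
)
print(result.stdout.strip())
```

The point is that the interpreter invocation, not the script's location, decides what lands on `sys.path`, which is why running from one level above only works in module mode.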
76,495,042
4,723,732
Why does my Python match case not insert into the list?
<p>My code(solving challenge)</p> <pre><code>if __name__ == '__main__': N = int(input()) for _ in range(N): cmd, *args = input().split() data =[] match cmd: case &quot;insert&quot;: data.insert(int(args[1]),args[0]) case &quot;print&quot;: print(data) case &quot;remove&quot;: data.remove(args[0]) case &quot;append&quot;: data.append(args[0]) case &quot;sort&quot;: data.sort() case &quot;reverse&quot;: data.reverse() case &quot;pop&quot;: data.pop() case _: raise ValueError(f&quot;Illegal command: {cmd}&quot;) </code></pre> <p>I run it</p> <pre><code>4 append 1 append 2 insert 3 1 print [] </code></pre> <p>and I have empty list. How to fix empty list? Should I switch to <code>if elif</code> to prevent data resetting?</p>
<python>
2023-06-17 06:43:19
1
8,250
Richard Rublev
76,494,942
7,250,111
How to reestablish ROUTER/DEALER connection in ZMQ?
<pre><code># ROUTER import time import random import zmq context = zmq.Context.instance() client = context.socket(zmq.ROUTER) client.setsockopt(zmq.TCP_KEEPALIVE,1) client.bind(&quot;tcp://127.0.0.1:99999&quot;) for _ in range(100): ident = random.choice([b'A', b'A', b'B']) work = b&quot;This is the workload&quot; client.send_multipart([ident, work]) time.sleep(0.5) </code></pre> <pre><code>#CLIENT import zmq context = zmq.Context.instance() worker = context.socket(zmq.DEALER) worker.setsockopt(zmq.IDENTITY, b'A') worker.connect(&quot;tcp://127.0.0.1:99999&quot;) while True: request = worker.recv() print(request) </code></pre> <p>My goal is to make multiple clients receive data from ROUTER, determined by identities(A, B). It works fine no matter which starts first. However, once a client is halted and restarted, it can't recconnect to ROUTER or receive data anymore. Is this pattern not designed to deal with this or are there options to make a client reconnect ROUTER again?</p>
<python><zeromq>
2023-06-17 06:05:22
1
2,056
maynull
76,494,932
813,596
Very poor Marshmallow performance
<p>I have a Flask+Marshmallow+SQLAlchemy based back-end application, and while profiling it I realized that the majority of the CPU time is spent doing simple Marshmallow schema serialization.</p> <p>Consider the following example:</p> <pre class="lang-py prettyprint-override"><code>from marshmallow import Schema, fields class TestSchema(Schema): a = fields.Int() b = fields.Int() c = fields.Int() d = fields.Int() test_data = {&quot;a&quot;: 123, &quot;b&quot;: 456, &quot;c&quot;: 789, &quot;d&quot;: 111} schema = TestSchema() start_time = datetime.datetime.now() for i in range(0, 10): schema.load(test_data) end_time = datetime.datetime.now() delta = end_time - start_time ms = delta.total_seconds() * 1000 print(str(ms)) # 1.013669 </code></pre> <p>This simple schema, loaded 10 times takes over 1ms (!) on my 4.38 GHz Core i7. This is just 4 integers, no complex types, no nested fields, no fancy validation.</p> <p>In my case, I have a couple of mid-sized objects that are loaded/dumped to/from SQLALchemy models. A couple of dozen fields and several levels of nesting each (around 100 fields total per object including nested), takes around <strong>50+</strong> ms of the pure CPU time to load. 
This is before any json serialization, any computation or business logic..</p> <p>What this means in practice, is simple Marshmallow schema work and nothing else, would allow only for 10-20 requests per second before the CPU reaches 100%.</p> <p>To make sure I am not crazy, I have re-written this example in C#:</p> <pre class="lang-cs prettyprint-override"><code>class TestSchema { public int Id; public int ItemId; public float Value; public int Level; } JObject testData = new JObject(); testData[&quot;Id&quot;] = 123; testData[&quot;ItemId&quot;] = 123; testData[&quot;Value&quot;] = 123.456f; testData[&quot;Level&quot;] = 123; testData.ToObject&lt;TestSchema&gt;(); // First time Newtonsoft library caches reflection Stopwatch sw = new Stopwatch(); sw.Start(); for(int i=0; i&lt;10; i++) { var a = testData.ToObject&lt;TestSchema&gt;(); } sw.Stop(); double ms = sw.ElapsedTicks / 10000d; Console.WriteLine($&quot;{ms}&quot;); // 0.0767 </code></pre> <p>C# which I have always considered slower than most languages/frameworks, performs basically the same work 13 times faster on the same machine. What's curious is that Newtonsoft statically caches a bunch of slow reflection calls and subsequent serialization calls are much faster. Marshmallow doesn't seem to be doing any of that.</p> <p>I have looked into Toasted Marshmallow but the project seems to be outdated and abandoned. I honestly do not understand how something like allocating very small amount of memory, finding the attribute by string name, checking numeric type, and copying the number into a new spot on stack, could take so long.</p> <p>Is there any way to speed it up? Are there any tips or tricks? Or is building high performance web apis with Python and Marshmallow impossible?</p>
<python><performance><marshmallow>
2023-06-17 06:01:23
0
618
splattru
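Independent of Marshmallow itself, the measurement in the question above is worth tightening: ten iterations timed with `datetime.now()` folds one-time costs (exactly the reflection caching the C# snippet deliberately excludes) into the result. A small stand-alone timing helper, sketched here with a warm-up call and `time.perf_counter()`; the workload lambda is just a placeholder:

```python
import time

def bench(fn, n=1000):
    """Return the mean per-call time of fn in milliseconds."""
    fn()  # warm-up call so one-time setup/caching is excluded from timing
    start = time.perf_counter()
    for _ in range(n):
        fn()
    return (time.perf_counter() - start) * 1000 / n

per_call_ms = bench(lambda: sum(i * i for i in range(100)))
```

Used on `schema.load(test_data)`, this gives a steady-state per-call figure that is directly comparable to the cached Newtonsoft number.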
76,494,929
7,446,003
PK becoming lost from Django URL
<p>I have 2 views defined as follows:</p> <pre><code>class IpsychCaseView(LoginRequiredMixin, TemplateView): model = IpsychCase template_name = &quot;ipsych/ipsych_scz.html&quot; def get(self, *args, **kwargs): p = 'antipsychotic' return self.render_to_response( { &quot;page_title&quot;: p.capitalize() ```}) def post(self, *args, **kwargs): p='antipsychotic' ipsychcaseform = IpsychCaseForm(data=self.request.POST) ipsychcaseform = IpsychCaseForm if ipsychcaseform.is_valid(): print('case valid') ipsychcase_instance = ipsychcaseform.save() ipsychcase_instance.user = self.request.user ipsychcase_instance.save() if ipsychvisitform.is_valid(): ipsychvisit_instance = ipsychvisitform.save(commit=False) ipsychvisit_instance.ipsychcase = ipsychcase_instance ipsychvisit_instance.save() u = reverse(&quot;ipsych_results&quot;, kwargs = {&quot;ipsychcase_id&quot;: ipsychvisit_instance.ipsychvisit_id}) print(f'the url being passed to redirect is {u}') return redirect(u) class ResultsView(TemplateView): template_name = &quot;ipsych/ipsych_results.html&quot; def get(self, request, *args, **kwargs): ipsychcase_id = kwargs[&quot;ipsychcase_id&quot;] results = ip.scz_alg(ipsychcase_id) context = { &quot;ipsychcase_id&quot;: ipsychcase_id, &quot;results_intro&quot;: results[&quot;results_intro&quot;], # Add other result values to the context as needed } return self.render_to_response(context) def get_context_data(self, *args, **kwargs): context = super().get_context_data(*args, **kwargs) print(f'the id passed to results view is {context[&quot;ipsychcase_id&quot;]}') context[&quot;ipsychcase_id&quot;] = self.kwargs[&quot;ipsychcase_id&quot;] results = ip.scz_alg(context[&quot;ipsychcase_id&quot;]) # context['se_heatmap_image'] = results['se_heatmap_image'] # context['meds_info'] = results['meds_info'] # context['mse'] = results['mse'] # context['score_image'] = results['score_image'] # context['results_intro'] = results['results_intro'] for key, value in results.items(): context[key] = 
value print(context[&quot;ipsychcase_id&quot;]) print(context.keys()) return context def post(self, *args, **kwargs): # if 'download_report' in self.request.POST: context = super().get_context_data() print(f'This is the id in the results post {context[&quot;ipsychcase_id&quot;]}') context[&quot;ipsychcase_id&quot;] = self.kwargs[&quot;ipsychcase_id&quot;] results = ip.scz_alg(context[&quot;ipsychcase_id&quot;], imagetype = 'png') response = download_report(results, self.request.user) return response </code></pre> <p>the urls.py is:</p> <pre><code> path( &quot;ipsych/&quot;, view=ipsych_views.IpsychCaseView.as_view(), name=&quot;ipsych&quot;, ), path( route='ipsych/processed/&lt;ipsychcase_id&gt;/', view=ipsych_views.ResultsView.as_view(), name='ipsych_results'), </code></pre> <p>The template contains some ajax to allow for tabbed forms:</p> <pre><code> &lt;script src=&quot;https://ajax.googleapis.com/ajax/libs/jquery/3.6.0/jquery.min.js&quot;&gt;&lt;/script&gt; &lt;script src=&quot;https://cdnjs.cloudflare.com/ajax/libs/popper.js/1.14.7/umd/popper.min.js&quot;&gt;&lt;/script&gt; &lt;script src=&quot;https://stackpath.bootstrapcdn.com/bootstrap/4.3.1/js/bootstrap.min.js&quot;&gt;&lt;/script&gt; &lt;script type=&quot;text/javascript&quot; src=&quot;{% static 'js/jquery.formset.js' %}&quot;&gt;&lt;/script&gt; &lt;script&gt; $(document).ready(function() { // Submit all forms when the submit button is clicked $('#tabbed-form').submit(function(event) { event.preventDefault(); console.log('Form submitted'); // Collect the data from all the forms var formData = new FormData(); formData.append('csrfmiddlewaretoken', $('input[name=&quot;csrfmiddlewaretoken&quot;]').val()); console.log('1 step'); $('#tabbed-form .tab-pane').each(function(index) { var formFields = $(this).find('input, select, textarea').serialize(); var fieldsArray = formFields.split('&amp;'); fieldsArray.forEach(function(field) { var keyValue = field.split('='); formData.append(keyValue[0], 
decodeURIComponent(keyValue[1])); }); console.log(formData); }); // Send the data using AJAX $.ajax({ url: '{% url &quot;ipsych&quot; %}', type: 'POST', data: formData, processData: false, contentType: false, success: function(response) { window.location.href = '/ipsych/processed/{{ipsychcase_id}}/'; }, error: function(xhr, status, error) { // Handle the error } }); }); }); &lt;/script&gt; </code></pre> <p>the url being passed to redirect is /ipsych/processed/8b429f6d-8538-4af6-90b9-102168887e34/</p> <p>When I enter <code>http://localhost:8000/ipsych/processed/8b429f6d-8538-4af6-90b9-102168887e34/</code> into the browser, the subsequent view is generated correctly. However, when I click the Submit button that triggers POST (which prints out the above), I get the following error:</p> <pre><code>Page not found (404) &quot;/Users/a/Documents/websites/website/ipsych/processed&quot; does not exist Request Method: GET Request URL: http://localhost:8000/ipsych/processed// Raised by: django.views.static.serve </code></pre> <p>Why is the <code>ipsychcase_id</code> getting lost?</p>
<python><django><url><view>
2023-06-17 05:59:11
2
422
RobMcC
76,494,919
2,575,970
Alternate code to suppress "PerformanceWarning: DataFrame is highly fragmented."
<p><strong>Below is my current code:</strong></p> <pre><code>row_data = dict(zip(keys, text)) df[row_data['EXAMINATION'].replace(&quot;:&quot;,&quot;&quot;)] = row_data['FINDING'].strip() df[row_data['EXAMINATION'].replace(&quot;:&quot;,&quot;&quot;) + ' Further Comments'] = row_data['FURTHER COMMENTS'].strip() </code></pre> <p><strong>Warning:</strong></p> <p>C:\Users\XXX\AppData\Local\Temp\ipykernel_10588\1664916535.py:38: PerformanceWarning: DataFrame is highly fragmented. This is usually the result of calling <code>frame.insert</code> many times, which has poor performance. Consider joining all columns at once using pd.concat(axis=1) instead. To get a de-fragmented frame, use <code>newframe = frame.copy()</code> df[row_data['EXAMINATION'].replace(&quot;:&quot;,&quot;&quot;)] = row_data['FINDING'].strip()</p> <p><strong>Replacement:</strong></p> <pre><code>df = pd.concat([df,pd.DataFrame([row_data['FINDING'].strip()], columns = [row_data['EXAMINATION'].replace(&quot;:&quot;,&quot;&quot;)])],axis=1,ignore_index=True) df = pd.concat([df,pd.DataFrame([row_data['FURTHER COMMENTS'].strip()], columns = [row_data['EXAMINATION'].replace(&quot;:&quot;,&quot;&quot;) + ' Further Comments'])],axis=1,ignore_index=True) </code></pre> <p>The replacement code however does not work as expected. I compared the output and they differ. I fail to understand why. 
Need some Python expert to advice please.</p> <h2>Sample value of row_data(each row is a result of one iteration of for-loop) :</h2> <pre><code>{'EXAMINATION': 'PSA', 'FINDING': '1.80ug/L', 'FURTHER COMMENTS': 'Normal range 0.00-2.99ug/L'} {'EXAMINATION': 'FIT Test', 'FINDING': 'YYY', 'FURTHER COMMENTS': 'XXX'} {'EXAMINATION': 'Height:', 'FINDING': '1.78m', 'FURTHER COMMENTS': 'BB'} {'EXAMINATION': 'Weight:', 'FINDING': '82kg', 'FURTHER COMMENTS': 'AA'} </code></pre> <p>What I am trying is to flatten this in the dataframe:</p> <p><a href="https://i.sstatic.net/wpDbC.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/wpDbC.png" alt="enter image description here" /></a></p> <p><strong>Full Code that I am using :</strong></p> <pre><code> document = Document(f) table = document.tables[2] keys = None for i, row in enumerate(table.rows): text = (cell.text for cell in row.cells) # Establish the mapping based on the first row # headers; these will become the keys of our dictionary if i == 0: keys = tuple(text) continue row_data = dict(zip(keys, text)) df[row_data['EXAMINATION'].replace(&quot;:&quot;,&quot;&quot;)] = row_data['FINDING'].strip() df[row_data['EXAMINATION'].replace(&quot;:&quot;,&quot;&quot;) + ' Further Comments'] = row_data['FURTHER COMMENTS'].strip() #df = pd.concat([df,pd.DataFrame([row_data['FINDING'].strip()], # columns = [row_data['EXAMINATION'].replace(&quot;:&quot;,&quot;&quot;)])],axis=1,ignore_index=True) #df = pd.concat([df,pd.DataFrame([row_data['FURTHER COMMENTS'].strip()], # columns = [row_data['EXAMINATION'].replace(&quot;:&quot;,&quot;&quot;) + ' Further Comments'])],axis=1,ignore_index=True) df1 = pd.concat([df1,df], axis=0, ignore_index=True) </code></pre>
<python><pandas><dataframe>
2023-06-17 05:56:51
1
416
WhoamI
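The warning in the question above comes from inserting columns one at a time, and the replacement attempt fails partly because `ignore_index=True` with `pd.concat(axis=1)` resets the column labels to integers. A hedged alternative sketch: accumulate each document's fields into one plain dict, then build the single-row frame in one call (sample values taken from the question):

```python
import pandas as pd

rows = [
    {"EXAMINATION": "PSA", "FINDING": "1.80ug/L",
     "FURTHER COMMENTS": "Normal range 0.00-2.99ug/L"},
    {"EXAMINATION": "Height:", "FINDING": "1.78m", "FURTHER COMMENTS": "BB"},
]

# Flatten every row_data dict into one dict, then construct the frame
# once; no per-column insert means no fragmentation warning.
flat = {}
for row_data in rows:
    name = row_data["EXAMINATION"].replace(":", "")
    flat[name] = row_data["FINDING"].strip()
    flat[name + " Further Comments"] = row_data["FURTHER COMMENTS"].strip()

df = pd.DataFrame([flat])
```

In the full loop from the question, `flat` would be rebuilt per document and each resulting one-row frame appended to a list, with a single `pd.concat(axis=0)` at the end.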
76,494,788
4,169,924
Access the value of a subclass's class variable, declared in an abstract superclass, from a concrete method defined in that abstract superclass
<p>I try to achieve the following:</p> <ul> <li>Require <code>class_variable</code> to be &quot;implemented&quot; in <code>ConcreteSubClass</code> of <code>AbstractSuperClass</code>, i.e. make <code>AbstractSuperClass.class_variable</code> abstract</li> <li>Define implemented (concrete) <code>method</code> in <code>AbstractSuperClass</code> which accesses the &quot;implemented&quot; value of <code>ConcreteSubClass.class_variable</code></li> </ul> <p>I would like to do this so that I won't have to implement <code>method</code> within all <code>ConcreteSubClass</code>es of <code>AbstractSuperClass</code>.</p> <p>If I run the below code:</p> <pre><code>from abc import ABC class AbstractSuperClass(ABC): class_variable: int def method(self): return self.instance_variable * AbstractSuperClass.class_variable class ConcreteSubClass(AbstractSuperClass): class_variable: int = 2 def __init__(self, instance_variable): self.instance_variable = instance_variable concrete_subclass = ConcreteSubClass(instance_variable=2) print(concrete_subclass.method()) </code></pre> <p>The code fails with:</p> <pre><code>Traceback (most recent call last): File &quot;&lt;input&gt;&quot;, line 19, in &lt;module&gt; File &quot;&lt;input&gt;&quot;, line 8, in function AttributeError: type object 'AbstractSuperClass' has no attribute 'class_variable' </code></pre> <p>Which is reasonable, because the value of <code>class_variable</code> is not assigned in <code>AbstractSuperClass</code>, but suspicious because <code>AbstractSuperClass</code> <em>has</em> attribute <code>class_variable</code>.</p> <p>I would like to achieve the below:</p> <pre><code>def method(self): return self.instance_variable * &lt;refer to cls of the concrete subclass this method will be called&gt;.class_variable </code></pre> <p>How can I do this?</p>
<python><abstract-class><subclass><superclass><class-variables>
2023-06-17 05:03:05
2
667
bugfoot
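The traceback in the question above follows from looking the attribute up on `AbstractSuperClass` explicitly, where only the annotation exists. Normal attribute lookup walks the MRO, so referring to `type(self).class_variable` (or simply `self.class_variable`) finds the concrete subclass's value; a sketch:

```python
from abc import ABC

class AbstractSuperClass(ABC):
    class_variable: int  # annotation only; concrete subclasses must assign it

    def method(self):
        # Look the attribute up via the *instance's* class, not the
        # abstract base: attribute lookup walks the MRO, so this finds
        # ConcreteSubClass.class_variable at call time.
        return self.instance_variable * type(self).class_variable

class ConcreteSubClass(AbstractSuperClass):
    class_variable: int = 2

    def __init__(self, instance_variable):
        self.instance_variable = instance_variable

result = ConcreteSubClass(instance_variable=2).method()
```

Note this does not make the attribute enforced at instantiation time the way abstract methods are; a forgotten assignment only fails when `method` is first called.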
76,494,691
9,357,484
Need to know how to Integrate Gazebo with Python code
<p>Suppose I have a Python code for the A* algorithm, and I would like to create an environment in the Gazebo simulator with a starting and goal point. In this environment, a robot must move from the start to the goal position, using the A* algorithm to decide which action to take. The robot's actions are up, down, left, and right.</p> <p>What are the step-by-step operations I need to perform to complete this project? I have the general idea, but I am unsure where to start or which topics to focus on. Any help would be appreciated. Thank you.</p>
<python><ros><gazebo-simu>
2023-06-17 04:18:18
1
3,446
Encipher
76,494,620
22,009,322
Modules are not installing/importing using pyodide (v0.23.2)
<p>I am new to Python and trying to display some diagrams on a webpage using Pyscript. However, importing modules with Pyodide isn't working. I've tried every single example found in the documentation as well as this thread: <a href="https://stackoverflow.com/questions/63958996/how-do-i-import-modules-in-a-project-using-pyodide-without-errors">How do I import modules in a project using pyodide without errors?</a> but it's still not working.</p> <p>I've tried &quot;loadPackage&quot;, &quot;py-env&quot; and &quot;micropip.install&quot;. Am I missing something? I am completely confused. I am getting the &quot;ModuleNotFoundError: The module 'pandas' is included in the Pyodide distribution, but it is not installed.&quot; error again and again: <a href="https://i.sstatic.net/uG9u1.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/uG9u1.png" alt="enter image description here" /></a></p> <p>The example of the code below:</p> <pre class="lang-html prettyprint-override"><code>&lt;!DOCTYPE html&gt; &lt;html lang=&quot;en&quot;&gt; &lt;head&gt; &lt;link rel=&quot;stylesheet&quot; href=&quot;https://pyscript.net/latest/pyscript.css&quot; /&gt; &lt;script defer src=&quot;https://pyscript.net/latest/pyscript.js&quot;&gt;&lt;/script&gt; &lt;meta charset=&quot;utf-8&quot; /&gt; &lt;title&gt;PyScript Demo&lt;/title&gt; &lt;/head&gt; &lt;body&gt; &lt;script type=&quot;text/javascript&quot; src=&quot;https://cdn.jsdelivr.net/pyodide/v0.23.2/full/pyodide.js&quot;&gt;&lt;/script&gt; &lt;script type=&quot;text/javascript&quot;&gt; async function main() { const pyodide = await loadPyodide() await pyodide.loadPackage(&quot;micropip&quot;); const micropip = pyodide.pyimport(&quot;micropip&quot;); await micropip.install(&quot;pandas&quot;); await micropip.install(&quot;matplotlib&quot;); await pyodide.runPython(` import pandas as pd import matplotlib.pyplot as plt `); } main(); &lt;/script&gt; &lt;py-env&gt; - pandas - matplotlib.pyplot &lt;/py-env&gt; &lt;h1&gt;Hello, 
Geeks&lt;/h1&gt; &lt;py-script&gt; import pandas as pd import matplotlib.pyplot as plt df = pd.DataFrame([['g1', 'c1', 10], ['g1', 'c2', 12], ['g1', 'c3', 13], ['g2', 'c1', 8], ['g2', 'c2', 10], ['g2', 'c3', 12]], columns=['group', 'column', 'val']) df.pivot(index=&quot;column&quot;, columns=&quot;group&quot;, values=&quot;val&quot;).plot(kind='bar') plt.show() plt.close() &lt;/py-script&gt; &lt;/body&gt; &lt;/html&gt; </code></pre> <p>Thank you!</p>
<javascript><python><pyscript><pyodide>
2023-06-17 03:42:57
1
333
muted_buddy
76,494,580
652,528
Is there an easy way to check if a generic type is a subtype of another type in Python or mypy?
<p>I want to explore the subtyping relations with generics, for simple classes I can use</p> <pre><code>issubclass(bool, int) # True </code></pre> <p>But for generics I get an error</p> <pre><code>issubclass(list[bool], list[int]) Traceback (most recent call last): File &quot;&lt;stdin&gt;&quot;, line 1, in &lt;module&gt; TypeError: issubclass() argument 2 cannot be a parameterized generic </code></pre> <p>Is there any interactive way of doing this?</p>
<python><generics><mypy>
2023-06-17 03:27:36
0
6,449
geckos
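There is no built-in interactive check; `issubclass` deliberately rejects parameterized generics. For exploration only, a naive helper can be sketched with `typing.get_origin`/`get_args`. Note that it treats every parameter as covariant and assumes both sides are equally parameterized, which is unsound in general: mypy would reject `list[bool]` where `list[int]` is expected, because `list` is invariant.

```python
from typing import get_args, get_origin

def generic_issubclass(sub, sup):
    # Compare the origins (list[bool] -> list), then the type arguments
    # pairwise.  A rough exploratory check, not a real variance rule.
    sub_origin = get_origin(sub) or sub
    sup_origin = get_origin(sup) or sup
    if not issubclass(sub_origin, sup_origin):
        return False
    return all(issubclass(s, t) for s, t in zip(get_args(sub), get_args(sup)))
```

For answers that match the type checker's actual rules, querying mypy itself (e.g. via `reveal_type` on a small snippet) is the reliable route.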
76,494,559
3,742,823
Passing Dicts using Pointers in Python HuggingFace
<p>AFAIK, in Python objects are passed by reference, then why do HuggingFace keeps using pointers to pass objects? Example snippet below taken from the tutorial at this link: <a href="https://huggingface.co/learn/nlp-course/chapter3/4?fw=pt" rel="nofollow noreferrer">https://huggingface.co/learn/nlp-course/chapter3/4?fw=pt</a></p> <pre><code>raw_inputs = [ &quot;I've been waiting for a HuggingFace course my whole life.&quot;, &quot;I hate this so much!&quot;, ] inputs = tokenizer(raw_inputs, padding=True, truncation=True, return_tensors=&quot;pt&quot;) print(inputs) from transformers import AutoModel checkpoint = &quot;distilbert-base-uncased-finetuned-sst-2-english&quot; model = AutoModel.from_pretrained(checkpoint) outputs = model(**batch) # &lt;-- What does this even mean? print(outputs.loss, outputs.logits.shape) </code></pre>
<python><pointers><huggingface-transformers><huggingface>
2023-06-17 03:16:29
1
3,281
The Wanderer
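Nothing in the snippet above is pass-by-pointer: `**batch` is Python's dictionary-unpacking syntax, which spreads the mapping's items into keyword arguments of the call. A dependency-free sketch (the tokenizer and model below are stand-ins, not the real HuggingFace objects):

```python
def fake_tokenizer(text, padding=False, truncation=False, return_tensors=None):
    # Stand-in for the HuggingFace tokenizer's mapping-like output.
    return {"input_ids": [101, 2023, 102], "attention_mask": [1, 1, 1]}

batch = fake_tokenizer("I hate this so much!", padding=True)

def fake_model(input_ids=None, attention_mask=None):
    return len(input_ids)

# fake_model(**batch) is exactly equivalent to
# fake_model(input_ids=batch["input_ids"], attention_mask=batch["attention_mask"])
out = fake_model(**batch)
```

The real tokenizer output (`BatchEncoding`) is dict-like, which is why `model(**batch)` forwards `input_ids`, `attention_mask`, etc. as named arguments.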
76,494,541
9,760,446
Append list element in pandas dataframe column depending on value of another column
<p>Whenever I try to append to a list given some condition, the value ends up being <code>None</code> instead of the list with the appended value. Example:</p> <pre><code># sample data data = {&quot;bool_col&quot;: [True, False, True, True, False]} my_df = pd.DataFrame.from_dict(data) # instantiate column of empty lists my_df[&quot;list_col&quot;] = [[] for r in range(len(my_df))] # append value to list_col when bool_col is True my_df[&quot;list_col&quot;] = my_df.apply(lambda x: x[&quot;list_col&quot;].append(&quot;truth!&quot;) if x[&quot;bool_col&quot;] else [], axis=1) my_df </code></pre> <p>I've tried wrapping <code>x[&quot;list_col&quot;]</code> in <code>list()</code> prior to calling <code>append()</code> to no avail. I'm not sure how to do this while retaining whatever list values may already be present and appending a new one.</p>
<python><pandas><list>
2023-06-17 03:07:03
3
1,962
Arthur Dent
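The `None` values in the question above come from `list.append`, which mutates in place and returns `None`; the `apply` then stores that return value. A sketch of one way to keep existing list contents while appending conditionally, by building and returning the new list per row:

```python
import pandas as pd

data = {"bool_col": [True, False, True, True, False]}
my_df = pd.DataFrame.from_dict(data)
my_df["list_col"] = [[] for _ in range(len(my_df))]

# Return a NEW list per row instead of relying on append's return value
# (append mutates in place and returns None).
my_df["list_col"] = [
    existing + ["truth!"] if flag else existing
    for existing, flag in zip(my_df["list_col"], my_df["bool_col"])
]
```

Whatever values the lists already held are preserved, because `existing + ["truth!"]` concatenates rather than replaces.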
76,494,387
19,157,137
Sphinx not recognized as a command within a Python Poetry project
<p>I am trying to set up Sphinx documentation for my Python project using Poetry as the package manager. However, when I run <code>poetry run sphinx-quickstart</code> in my project directory, I receive the following error message:</p> <pre><code>'sphinx-quickstart' is not recognized as an internal or external command, operable program or batch file. </code></pre> <p>Here is my <code>pyproject.toml</code> file:</p> <pre><code>[tool.poetry] name = &quot;testing-project&quot; version = &quot;0.1.0&quot; description = &quot;&quot; authors = [&quot;User &lt;user@gmail.com&gt;&quot;] readme = &quot;README.md&quot; packages = [{include = &quot;testing_project&quot;}] [tool.poetry.dependencies] python = &quot;^3.9&quot; Sphinx = { version = &quot;4.2.0&quot;, optional = true } sphinx-rtd-theme = { version = &quot;1.0.0&quot;, optional = true } sphinxcontrib-napoleon = { version = &quot;0.7&quot;, optional = true } cython = &quot;^0.29.35&quot; [tool.poetry.extras] docs = [&quot;Sphinx&quot;, &quot;sphinx-rtd-theme&quot;, &quot;sphinxcontrib-napoleon&quot;] [build-system] requires = [&quot;poetry-core&quot;] </code></pre> <p>Tree Structure:</p> <pre><code>. ├── Testing_project │ ├── testing_project │ │ └── __init__.py │ ├── docs │ ├── poetry.lock │ ├── pyproject.toml │ └── tests │ └── __init__.py └── README.md </code></pre> <p>I have already ensured that Sphinx is installed by running <code>pip show sphinx</code> and confirming its presence. However, it seems that the <code>sphinx-quickstart</code> command is not recognized within my Poetry project. I also do not want to use a <code>pipenv</code> or a <code>poetry env create</code> virtual environments since I will be utilizing Docker for that.</p> <p>Can someone help me understand why I am encountering this error and how to resolve it?</p>
<python><dependencies><virtualenv><python-sphinx><python-poetry>
2023-06-17 01:47:29
0
363
Bosser445
76,494,386
4,115,031
Is `sudo pip install` ok if I've activated a virtual environment?
<p>I have seen some posts on Stack Overflow advising against running <code>sudo pip install</code> [<a href="https://stackoverflow.com/questions/61452582/why-is-using-sudo-pip-a-bad-idea">1</a>]. However, if I have already activated a virtual environment, does that change anything?</p>
<python><pip><virtualenv>
2023-06-17 01:47:23
3
12,570
Nathan Wailes
76,494,371
9,280,722
How do I check if two slices of an array intersect?
<p>I have an 2d-array <code>arr = np.zeros((9,9), dtype=object)</code> later i will have two slices that are in shape of (5,1) and (1,5) os they are always 1-dimensional.</p> <pre><code>a = arr[1:2, 3:8] # Red b = arr[0:5, 4:5] # Blue </code></pre> <p>How to tell mathematically in code whether the two slices are intersecting vertically at a point or not?</p> <p><a href="https://i.sstatic.net/yEXPq.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/yEXPq.png" alt="visualization of the array" /></a></p> <p>previous attempts: I tried to use <code>np.may_share_memory(a,b)</code> but it returns True even they are not intersected.</p>
<python><numpy><math>
2023-06-17 01:41:23
2
324
Ahmed4end
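`np.may_share_memory` answers a memory-layout question, not a geometric one: two slices of the same base array usually may share memory regardless of where they sit. Treating each 2-D slice as an axis-aligned rectangle of half-open index ranges reduces the question to interval overlap on both axes. A sketch, with the slice bounds passed explicitly because they are not recoverable from the sliced array itself:

```python
def slices_intersect(a, b):
    # Each slice is ((row_start, row_stop), (col_start, col_stop)),
    # using the same half-open convention as Python slicing.
    # Two axis-aligned rectangles overlap iff they overlap on BOTH axes.
    (ar0, ar1), (ac0, ac1) = a
    (br0, br1), (bc0, bc1) = b
    rows_overlap = ar0 < br1 and br0 < ar1
    cols_overlap = ac0 < bc1 and bc0 < ac1
    return rows_overlap and cols_overlap

red = ((1, 2), (3, 8))   # arr[1:2, 3:8]
blue = ((0, 5), (4, 5))  # arr[0:5, 4:5]
```

For the pictured case the two 1-wide strips cross at `arr[1, 4]`, so the check returns `True`.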
76,494,320
6,186,481
How do I run "python manage.py migrate" in a Visual Studio Django Web Project?
<p>I am trying to run the default Django Web Project using Visual Studio 2022. I have reached this point:</p> <p><img src="https://i.sstatic.net/2R0i0.png" alt="enter image description here" /></p> <p>I open a Developer PowerShell window and run &quot;python manage.py migrate&quot;. However, this fails because manage.py is in a sub-directory.</p> <p>So I run &quot;python DjangoWebProject\manage.py migrate&quot;. This fails with &quot;ImportError: Couldn't import Django. Are you sure it's installed and available on your PYTHONPATH environment variable? Did you forget to activate a virtual environment?&quot;</p> <p>What should I do to get this working?</p>
<python><django><visual-studio>
2023-06-17 01:15:26
0
652
Hal Heinrich
76,494,315
3,328,933
(Z3Py) Using all_smt to generate all solutions of a model
<p>The online <a href="https://theory.stanford.edu/%7Enikolaj/programmingz3.html#sec-blocking-evaluations" rel="nofollow noreferrer">Programming Z3 book</a> contains a full implementation of an <code>all_smt</code> function that generate an iterator of all valid solutions for a model. I am using this <code>all_smt</code> function verbatim, as well as a naive implementation <code>all_smt_slow_method</code> that performs the same thing. My <code>all_smt_slow_method</code> function produces the correct answer, while <code>all_smt</code> produces an incorrect answer. I suspect that the available <code>all_smt</code> implementation is correct, and that I'm calling it wrong somehow.</p> <p>The problem:</p> <pre><code>1. x, y, z are integers between 0 and 11, inclusive. 2. The difference between x and y must be less than or equal to 8 3. The difference between y and z must be less than or equal to 8 How many valid (x, y, z) pairs satisfy the above constraints? </code></pre> <p>My solution:</p> <pre><code>from z3 import * def all_smt(s, initial_terms): def block_term(s, m, t): s.add(t != m.eval(t, model_completion=True)) def fix_term(s, m, t): s.add(t == m.eval(t, model_completion=True)) def all_smt_rec(terms): if sat == s.check(): m = s.model() yield m for i in range(len(terms)): s.push() block_term(s, m, terms[i]) for j in range(i): fix_term(s, m, terms[j]) yield from all_smt_rec(terms[i:]) s.pop() yield from all_smt_rec(list(initial_terms)) def all_smt_slow_method(s, initial_terms, x, y, z): s.add(initial_terms) manual_counter = 0 while s.check() == sat: s.add(Or(x != s.model()[x], y != s.model()[y], z != s.model()[z])) manual_counter += 1 return manual_counter def main(): maximum_delta = 8 range = 12 x, y, z = Ints(&quot;x y z&quot;) initial_terms = [ 0 &lt;= x, x &lt; range, 0 &lt;= y, y &lt; range, 0 &lt;= z, z &lt; range, ] initial_terms += [ x - y &lt;= maximum_delta, y - x &lt;= maximum_delta, y - z &lt;= maximum_delta, z - y &lt;= maximum_delta, ] manual_counter 
= all_smt_slow_method(Solver(), initial_terms, x, y, z) all_smt_generator = all_smt(Solver(), initial_terms) all_smt_counter = sum(1 for x in all_smt_generator) print(f&quot;{range ** 3 = } {manual_counter = } {all_smt_counter = }&quot;) if __name__ == &quot;__main__&quot;: main() </code></pre> <p><code>manual_counter</code> evaluates to 1468, which I believe be the correct answer. The result of all_smt has a size of 111, which is incorrect.</p>
<python><z3><smt><z3py><theorem-proving>
2023-06-17 01:12:50
1
1,429
My other car is a cadr
76,494,218
6,554,099
Why do multiple test files cause Django issues when running in parallel?
<p>I am using django 4.1 on windows. running my tests sequentially (python manage.py test) works fine. but when I run my tests in parallel (python manage.py test --parallel) I get the following error</p> <pre><code>File &quot;C:\Users\SomeGuy\miniconda3\envs\SomeProject\lib\multiprocessing\process.py&quot;, line 315, in _bootstrap self.run() File &quot;C:\Users\SomeGuy\miniconda3\envs\SomeProject\lib\multiprocessing\process.py&quot;, line 108, in run self._target(*self._args, **self._kwargs) File &quot;C:\Users\SomeGuy\miniconda3\envs\SomeProject\lib\multiprocessing\pool.py&quot;, line 109, in worker initializer(*initargs) File &quot;C:\Users\SomeGuy\miniconda3\envs\SomeProject\lib\site-packages\django\test\runner.py&quot;, line 420, in _init_worker process_setup(*process_setup_args) TypeError: process_setup() missing 1 required positional argument: 'self' </code></pre> <p>I created a project and made a folder underneath the main app folder for my tests called &quot;tests&quot; next to manage.py</p> <p>I have two test files</p> <p>test_losing_hair.py:</p> <pre><code>from django.test import TestCase class WhyTest(TestCase): def test_counting_scheme(self): self.assertTrue(True) </code></pre> <p>and test_rapidly.py:</p> <pre><code>from django.test import TestCase class WontThisWorkTest(TestCase): def test_counting_scheme(self): self.assertTrue(True) </code></pre> <p>What Am I doing wrong here?</p> <p>I've noticed running it in parallel works if I choose one file or the other, but both at the same time creates the error above.</p>
<python><python-3.x><django>
2023-06-17 00:23:16
1
309
DNS_Jeezus
76,493,880
7,394,787
How to explain the logic of this NumPy indexing output?
<p>I am learning numpy from the start page : <a href="https://numpy.org/devdocs/user/quickstart.html" rel="nofollow noreferrer">https://numpy.org/devdocs/user/quickstart.html</a> There is a confusing part that makes me stop.</p> <pre><code>&gt;&gt;&gt; a = np.arange(12).reshape(3, 4) &gt;&gt;&gt; a array([[ 0, 1, 2, 3], [ 4, 5, 6, 7], [ 8, 9, 10, 11]]) &gt;&gt;&gt; b1 = np.array([False, True, True]) # first dim selection &gt;&gt;&gt; b2 = np.array([True, False, True, False]) # second dim selection &gt;&gt;&gt; a[b1, b2] array([ 4, 10]) </code></pre> <p>Could you please provide any hints or explains to help me understand this logic? The output that I expect is</p> <pre><code>array([[ 4, 6], [ 8, 10]]) </code></pre>
<python><numpy>
2023-06-16 22:18:26
1
305
Z.Lun
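When several boolean masks appear in one index, NumPy converts each mask to integer indices and pairs them elementwise (rows `[1, 2]` with columns `[0, 2]`, giving `a[1, 0]` and `a[2, 2]`), rather than taking their cross product. The expected 2x2 block needs `np.ix_`; a sketch:

```python
import numpy as np

a = np.arange(12).reshape(3, 4)
b1 = np.array([False, True, True])   # rows 1 and 2
b2 = np.array([True, False, True, False])  # columns 0 and 2

# Two masks in one index are PAIRED after conversion to integer
# indices: this selects a[1, 0] and a[2, 2].
paired = a[b1, b2]

# np.ix_ builds the open mesh, giving the cross-product selection.
grid = a[np.ix_(b1, b2)]
```

`paired` is `[4, 10]`, matching the output in the question, while `grid` is the `[[4, 6], [8, 10]]` block the asker expected.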
76,493,864
519,836
Simulation in Simpy not working - yielded process never returns
<p>I am trying to write a simple simulation of a queuing system:</p> <ol> <li>An input process generates customers regularly at a specified rate.</li> <li>A queuing system reacts when a customer is generated by the input process.</li> </ol> <p>The input process is:</p> <pre><code>import simpy class InputProcess(object): def __init__(self, env: simpy.Environment, name: str, freq: float): self.env = env self.name = name self.period = 1/freq self.counter = 0 self.action = env.process(self.run()) def create_cust(self): cust_name = f&quot;{self.name}-{self.counter}&quot; self.counter += 1 return cust_name def run(self): while True: new_cust = self.create_cust() print(&quot;customer %s created at %d&quot; % (new_cust.name, self.env.now)) yield self.env.timeout(self.period, new_cust) </code></pre> <p>The process created out of <code>run</code> will emit an event every <code>period</code> units of time.</p> <p>The queue:</p> <pre><code>from __future__ import annotations from simpy import Environment from simpy.events import AnyOf, Event from input_process import InputProcess class Queue(object): def __init__(self, env: Environment, period: float): self.env = env self.period = period self.inputs = [] env.process(self.listen()) def add_input(self, node: InputProcess): print(&quot;ADDED&quot;) self.inputs.append(node.action) def listen(self): while True: if len(self.inputs) &gt; 0: print(&quot;YES&quot;) yield AnyOf(self.env, self.inputs) print(&quot;TRIGGERED&quot;) </code></pre> <p>The simulation:</p> <pre><code>import simpy from input_process import InputProcess from queue import Queue def run(): env = simpy.Environment() gen = InputProcess(env, &quot;T1&quot;, 1/2) queue = Queue(env, 3) queue.add_input(gen) env.run(until=25) if __name__ == &quot;__main__&quot;: run() </code></pre> <p>The output:</p> <pre><code>ADDED traffic generator originated job 'T1-0' at 0 YES traffic generator originated job 'T1-1' at 2 traffic generator originated job 'T1-2' at 4 traffic generator 
originated job 'T1-3' at 6 traffic generator originated job 'T1-4' at 8 traffic generator originated job 'T1-5' at 10 traffic generator originated job 'T1-6' at 12 traffic generator originated job 'T1-7' at 14 traffic generator originated job 'T1-8' at 16 traffic generator originated job 'T1-9' at 18 traffic generator originated job 'T1-10' at 20 traffic generator originated job 'T1-11' at 22 traffic generator originated job 'T1-12' at 24 </code></pre> <h3>The problem</h3> <p>As you can see, the queue tries to pause and wait for any input's events. In the current setting, there is only one input to the queue: the <code>InputProcess</code>.</p> <p>Even though the simpy process of <code>InputProcess</code> does regularly yield events, the <code>Queue</code> never moves away from the <code>yield</code>, indicating that it keeps waiting for an event to be released by the <code>InputProcess</code>, but that process does regularly creates events.</p> <p>I am not understanding what is wrong here.</p>
<python><python-3.x><simpy>
2023-06-16 22:15:27
1
16,943
Andry
76,493,823
616,728
Pydantic Type with arbitrary type list in it
<p>I have defined a standard API response Pydantic type as follows:</p> <pre class="lang-py prettyprint-override"><code>class ApiResponse(BaseModel): success: bool data: Optional[List[Any]] = [] message: Optional[str] = None meta: Optional[dict] = {} </code></pre> <p>However, the <code>data</code> list will be a list of pre-defined Pydantic types, depending on the endpoint. For example, if the endpoint is <code>/users</code>, I want Pydantic to enforce that the <code>data</code> list must be a list of <code>User</code> objects. I attempted to modify the <code>ApiResponse</code> class as follows:</p> <pre class="lang-py prettyprint-override"><code>class User(BaseModel): name: str favorite_sandwich: str class ApiResponse(SubType): success: bool data: Optional[List[SubType]] = [] message: Optional[str] = None meta: Optional[dict] = {} </code></pre> <p>I then want to be able to define the endpoint with:</p> <pre class="lang-py prettyprint-override"><code>@router.get(&quot;/users&quot;, response_model=ApiResponse[User]) async def get_groups() -&gt; ApiResponse[User]: users = models.User.all() count = models.User.count() return {&quot;success&quot; : True, &quot;data&quot;: users, &quot;meta&quot; : {&quot;count&quot;: count}} </code></pre> <p>Is there a way to accomplish this?</p>
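One commonly used approach (a sketch, not verified against the asker's app — the `User` field values below are made up): parametrize the response model with a type variable. This is Pydantic v2 style; on Pydantic v1 the base class would instead be `pydantic.generics.GenericModel`.

```python
from typing import Generic, List, Optional, TypeVar

from pydantic import BaseModel

T = TypeVar("T")


class User(BaseModel):
    name: str
    favorite_sandwich: str


# A generic response model parametrized by the item type; ApiResponse[User]
# then enforces that `data` is a list of User objects.
class ApiResponse(BaseModel, Generic[T]):
    success: bool
    data: Optional[List[T]] = []
    message: Optional[str] = None
    meta: Optional[dict] = {}


resp = ApiResponse[User](
    success=True,
    data=[{"name": "Frank", "favorite_sandwich": "BLT"}],  # hypothetical values
    meta={"count": 1},
)
```

`ApiResponse[User]` can then be passed to FastAPI's `response_model=` as in the question.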
<python><fastapi><python-3.8><pydantic>
2023-06-16 22:06:52
1
2,748
Frank Conry
76,493,610
13,721,819
How do I change the axis numbers in a matplotlib colormap image?
<p>I have some python code that applies a 2-D function over a range of x and y values, and uses matplotlib to plot the results as a colormap. However, the axes on this plot show the integer indexes of the output array. I would instead like these axes to show the range of the x and y values, from <code>-1.0</code> to <code>1.0</code>, like a typical graphing application would look.</p> <p>How do I set the range of the axes?</p> <pre class="lang-py prettyprint-override"><code>import matplotlib.pyplot as plt # let x, y values be in range of (-1, 1) x, y = np.mgrid[-1:1:.05, -1:1:.05] # Apply the function to the values z = x * y # Get matplotlib figure and axis fig, ax = plt.subplots(figsize=(3, 3), ncols=1) # Plot the colormap pos_neg_clipped = ax.imshow(z, cmap='RdBu', interpolation='none') # Display the image plt.show() </code></pre> <p>Output:</p> <p><a href="https://i.sstatic.net/dDsIg.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/dDsIg.png" alt="Output Image" /></a></p>
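A sketch of the usual fix: pass `extent=` to `imshow` so the image is mapped onto data coordinates, and `origin="lower"` so y increases upward like a normal graph (note `imshow` draws array rows along y, so the array may need transposing depending on the intended orientation — harmless here since `x * y` is symmetric).

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")  # render off-screen
import matplotlib.pyplot as plt

# let x, y values be in range of (-1, 1)
x, y = np.mgrid[-1:1:.05, -1:1:.05]
z = x * y

fig, ax = plt.subplots(figsize=(3, 3))
# extent=[left, right, bottom, top] maps pixel indices onto data coordinates;
# origin="lower" puts row 0 at the bottom instead of the top.
im = ax.imshow(z.T, cmap="RdBu", interpolation="none",
               extent=[-1, 1, -1, 1], origin="lower")
fig.colorbar(im, ax=ax)
```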
<python><numpy><matplotlib>
2023-06-16 21:15:52
1
612
Wilson
76,493,482
6,457,407
Why is the memory view of a numpy record readonly?
<p>Why is Python telling me that the memory view of a record is readonly?</p> <pre class="lang-py prettyprint-override"><code>&gt;&gt;&gt; x = np.zeros(1, dtype='d,d,i') &gt;&gt;&gt; x array([(0., 0., 0)], dtype=[('f0', '&lt;f8'), ('f1', '&lt;f8'), ('f2', '&lt;i4')]) &gt;&gt;&gt; memoryview(x).readonly False &gt;&gt;&gt; memoryview(x[0]).readonly True </code></pre> <p>Obviously, <code>x[0]</code> isn't readonly, since</p> <pre><code>&gt;&gt;&gt; x[0][0] += 1 &gt;&gt;&gt; x[0] (1., 0., 0) </code></pre> <p>Memory view doesn't have trouble with normal arrays:</p> <pre><code>&gt;&gt;&gt; y = np.zeros((3, 4)) &gt;&gt;&gt; memoryview(y).readonly False &gt;&gt;&gt; memoryview(y[0]).readonly False </code></pre> <p>Likewise, the legacy <code>__array_interface__</code> knows that <code>x[0]</code> is read-write:</p> <pre><code>&gt;&gt;&gt; x.__array_interface__['data'] # returns tuple (address, read-only) (105553143159680, False) &gt;&gt;&gt; x[0].__array_interface__['data'] (105553143159680, False) </code></pre> <p>My actual issue is in C code. Fortunately all my issues there can also be shown in pure Python.</p> <p>I'm trying to read and write numpy records in C code, and I just need the address of the data. I can find the address of the data just fine using <a href="https://numpy.org/doc/stable/reference/arrays.interface.html" rel="nofollow noreferrer"><code>__array_interface__</code></a> and its corresponding C-side <code>__array_struct__</code>. But there is a note on that page saying that this is legacy, and that new code should be using the buffer protocol.</p> <p>But the <a href="https://docs.python.org/3/c-api/buffer.html#bufferobjects" rel="nofollow noreferrer">buffer protocol</a> (which is <a href="https://docs.python.org/3/c-api/memoryview.html" rel="nofollow noreferrer">mimicked</a> by <code>memoryview</code> in Python) thinks the record is read-only. I have to specifically ask for a readonly buffer.
Yes, I could get the address of the data from the &quot;readonly&quot; buffer, and write to it anyway, but that feels dirty.</p> <hr /> <p>Updated to respond to @tdelaney's comment:</p> <p>As an experiment, I wrote a small C function that requested a read-only memory buffer, found the start address, and incremented the double there even though it wasn't supposed to.</p> <pre><code>void foo(PyObject *object) { Py_buffer view; PyObject_GetBuffer(object, &amp;view, PyBUF_CONTIG_RO); if (view.obj) { double *data = (double *)view.buf; *data += 1; } PyBuffer_Release(&amp;view); } </code></pre> <p>I could then look at the resulting array in Python.</p> <p>For arrays of records, both <code>foo(x)</code> and <code>foo(x[1])</code> correctly incremented an array element.</p> <p>For the two-dimensional array of doubles, both <code>foo(y)</code> and <code>foo(y[1])</code> correctly incremented an array element. As expected, <code>foo(y[1][2])</code> did nothing.</p> <p>So for records, the <code>np.void</code> is <em>not</em> copied. At least in this case.</p>
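As a hedged illustration of one workaround (not an explanation of the underlying design): taking a length-1 slice instead of a scalar keeps an ndarray *view* into the same memory, and that view's buffer stays writable.

```python
import numpy as np

x = np.zeros(1, dtype="d,d,i")

# x[0] is an np.void scalar; its exported buffer reports itself read-only
# (at least on the numpy version used in the question).
print(memoryview(x[0]).readonly)

# x[0:1] is an ndarray view into the same memory; its buffer is writable.
row = x[0:1]
assert not memoryview(row).readonly

row["f0"] += 1  # writes through to x
```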
<python><swig><memoryview>
2023-06-16 20:48:25
0
11,605
Frank Yellin
76,493,324
2,680,053
Pandas get_group with value None
<p>I have a DataFrame with a column that contains some <code>None</code> values. I'd like to be able to do <code>df.groupby('A').get_group(None)</code>, but I get a <code>KeyError</code> on <code>None</code> even though <code>df['A'].unique()</code> contains <code>None</code>. Can I make this work without converting <code>None</code>'s to a string or other value?</p>
<python><pandas><dataframe>
2023-06-16 20:15:35
2
1,548
Marc Bacvanski
76,493,293
7,516,523
Iterate over numpy array to get sub-arrays
<p>Given the following numpy array:</p> <pre><code>arr = np.array([0, 1, 2, 3, 4, 5]) </code></pre> <p>what iterable would return sub-arrays of length <code>x</code> from <code>arr</code>? (Given that <code>len(arr)</code> is a multiple of <code>x</code>)</p> <pre><code>x = 2 sub_arrays = [sub_arr for sub_arr in iterable(arr, x)] </code></pre> <blockquote> <p>sub_arrays = [ np.ndarray( [0, 1] ), np.ndarray( [2, 3] ), np.ndarray( [4, 5] ) ]</p> </blockquote> <p>I know that array slicing is possible with <code>start</code>, <code>stop</code>, and <code>step</code> arguments, but that returns individual elements:</p> <pre><code>x = 2 sub_elements = [sub_elem for sub_elem in arr[::x]] </code></pre> <blockquote> <p>sub_elements = [0, 2, 4]</p> </blockquote>
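A sketch (valid given the stated guarantee that `len(arr)` is a multiple of `x`): `reshape(-1, x)` produces a 2-D view whose rows are the sub-arrays, so iterating over it yields them without copying; `np.split` is an alternative that returns a list directly.

```python
import numpy as np

arr = np.array([0, 1, 2, 3, 4, 5])
x = 2

# reshape(-1, x) infers the row count; iterating a 2-D array yields its rows.
sub_arrays = [sub_arr for sub_arr in arr.reshape(-1, x)]

# Equivalent via np.split, which returns a list of views:
sub_arrays2 = np.split(arr, len(arr) // x)
```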
<python><arrays><numpy>
2023-06-16 20:10:08
2
345
Florent H
76,493,288
9,386,819
In pandas, how is it possible for a Boolean series of len(100) to be applied to a dataframe of len(10) without it throwing an error?
<p>Apologies for not being able to provide the data. Somebody else wrote this code, and I don't understand how it's working.</p> <p>There's a dataframe (<code>df</code>) that's say, 100 samples long. They grouped it:</p> <p>[EDIT TO QUESTION: I forgot to include that the groupby statement ended with an index reset. Adding that below.]</p> <p><code>grouped_df = df.groupby('col_a').sum()['col_b'].sort_values().reset_index()</code></p> <p>This resulted in a DataFrame object of length 10.</p> <p>Then they created a Boolean series to use as a mask. They created it from the original dataframe (<code>df</code>) based on values in a third column:</p> <p><code>mask = df['col_c'] &gt; 10</code></p> <p>This resulted in a Boolean series of length 100—same length as <code>df</code>, naturally.</p> <p>Then they applied <code>mask</code> (len=100) to <code>grouped_df</code> (len=10), and the result was a DataFrame object of length 5.</p> <p>How does that work? What is happening? How can you apply a Boolean series to a dataframe as a mask when the lengths don't match up?</p>
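The behavior can be reproduced with toy data (a sketch with made-up values, since the original data isn't available): a boolean `Series` used as an indexer is aligned on the *index labels*, not on position. After `reset_index()` the grouped frame has a RangeIndex 0..9, so pandas reindexes the length-100 mask down to those ten labels (emitting a "Boolean Series key will be reindexed" warning) and discards the other 90 entries instead of raising.

```python
import pandas as pd

grouped_df = pd.DataFrame({"col_b": range(10)})        # RangeIndex 0..9
mask = pd.Series([i % 2 == 0 for i in range(100)])     # RangeIndex 0..99

# The mask is reindexed to grouped_df's index (labels 0..9); only those
# ten boolean values are consulted, so no length error is raised.
result = grouped_df[mask]
```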
<python><pandas><dataframe><boolean><series>
2023-06-16 20:09:01
1
414
NaiveBae
76,493,160
610,505
How to use a type variable for both code and type annotation?
<p>I'm using this pattern often:</p> <pre><code>import typing T = typing.TypeVar('T') class Base(typing.Generic[T]): _type: typing.Type[T] def func(self) -&gt; T: return self._type(42) the_type: type = int class A(Base[the_type]): _type = the_type the_type = str class B(Base[the_type]): _type = the_type </code></pre> <p>Is there any way to avoid passing the type to both <code>Base[]</code> and as <code>_type</code>?</p> <p>I want something like this:</p> <pre><code>import typing T = typing.TypeVar('T') class Base(typing.Generic[T]): def func(self) -&gt; T: return T(42) class A(Base[int]): pass class B(Base[str]): pass </code></pre> <p>but that's not how it works of course.</p>
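One commonly used workaround (a sketch relying on the documented `typing.get_origin`/`get_args` helpers plus the `__orig_bases__` attribute that class creation records): recover the type argument in `__init_subclass__`, so it only has to be written once.

```python
import typing

T = typing.TypeVar("T")


class Base(typing.Generic[T]):
    _type: type

    def __init_subclass__(cls, **kwargs):
        super().__init_subclass__(**kwargs)
        # `class A(Base[int])` records Base[int] in cls.__orig_bases__;
        # pull the concrete argument out of it at subclass-creation time.
        for base in getattr(cls, "__orig_bases__", ()):
            if typing.get_origin(base) is Base:
                (cls._type,) = typing.get_args(base)

    def func(self) -> T:
        return self._type(42)


class A(Base[int]):
    pass


class B(Base[str]):
    pass
```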
<python><python-typing>
2023-06-16 19:42:30
1
7,635
eepp
76,493,134
6,077,239
DuckDB slower than Polars in single table over + groupby context
<p>For the following toy example which involves both calculations <code>over</code> window and <code>groupby</code> aggregations, <code>DuckDB</code> performs nearly 3x slower than <code>Polars</code> in <code>Python</code>. Both give exactly the same results.</p> <p>Is this kind of benchmarking result as expected, because <code>DuckDB</code> is designed and should be used more for cross-dataframe/table operations?</p> <p>Or, is it just because the inefficiency comes from the way my SQL query is written?</p> <pre><code>import time import duckdb import numpy as np import polars as pl ## example dataframe rng = np.random.default_rng(1) nrows = 10_000_000 df = pl.DataFrame( dict( id=rng.integers(1, 100, nrows), id2=rng.integers(1, 1_000, nrows), v1=rng.normal(0, 1, nrows), v2=rng.normal(0, 1, nrows), v3=rng.normal(0, 1, nrows), v4=rng.normal(0, 1, nrows), ) ) ## polars start = time.perf_counter() res = ( df.select( [ &quot;id&quot;, &quot;id2&quot;, pl.col(&quot;v1&quot;) - pl.col(&quot;v1&quot;).mean().over([&quot;id&quot;, &quot;id2&quot;]), pl.col(&quot;v2&quot;) - pl.col(&quot;v2&quot;).mean().over([&quot;id&quot;, &quot;id2&quot;]), pl.col(&quot;v3&quot;) - pl.col(&quot;v3&quot;).mean().over([&quot;id&quot;, &quot;id2&quot;]), pl.col(&quot;v4&quot;) - pl.col(&quot;v4&quot;).mean().over([&quot;id&quot;, &quot;id2&quot;]), ] ) .groupby([&quot;id&quot;, &quot;id2&quot;]) .agg( [ (pl.col(&quot;v1&quot;) * pl.col(&quot;v2&quot;)).sum().alias(&quot;ans1&quot;), (pl.col(&quot;v3&quot;) * pl.col(&quot;v4&quot;)).sum().alias(&quot;ans2&quot;), ] ) ) time.perf_counter() - start # 1.0977217499166727 ## duckdb start = time.perf_counter() res2 = ( duckdb.sql( &quot;&quot;&quot; SELECT id, id2, v1 - mean(v1) OVER (PARTITION BY id, id2) as v1, v2 - mean(v2) OVER (PARTITION BY id, id2) as v2, v3 - mean(v3) OVER (PARTITION BY id, id2) as v3, v4 - mean(v4) OVER (PARTITION BY id, id2) as v4, FROM df &quot;&quot;&quot; ) .aggregate( &quot;id, id2, sum(v1 * v2) as ans1, sum(v3 * v4) 
as ans2&quot;, &quot;id, id2&quot;, ) .pl() ) time.perf_counter() - start # 3.549897135235369 </code></pre>
<python><duckdb>
2023-06-16 19:36:39
2
1,153
lebesgue
76,493,088
781,938
How do I specify a pip-installed, editable dependency in a conda environment YAML file?
<p>I'm developing a Python package in a conda environment defined by a YAML environment file. My question is: is there a way to specify this local package as an editable, pip-installed dependency in the YAML file?</p> <p>My conda env YAML file looks something like the following (<a href="https://conda.io/projects/conda/en/latest/user-guide/tasks/manage-environments.html#create-env-file-manually" rel="nofollow noreferrer">env files documented here</a>):</p> <pre><code>name: ... channels: - conda-forge - defaults dependencies: - &lt;packages on conda-forge&gt; - pip: - &lt;packages on PyPI&gt; - git+https://github.com/&lt;some packages on github&gt; </code></pre> <p>Currently, I first create this env with <code>conda env create -f env.yml</code> and then manually install the package with <code>pip install --no-build-isolation --editable .</code>.</p> <p>I'm wondering if it's possible to add an entry under <code> - pip:</code> for my package in the environment file. I've searched the documentation and can't find anything on whether or how this is possible. Any help appreciated!</p>
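Not an authoritative answer, but a sketch of what is commonly reported to work: conda passes the `pip:` entries through to pip as requirements-file lines, and requirements syntax accepts editable installs, so an entry like the following may do it. Hedged caveats: whether `.` resolves relative to the YAML file or to the working directory has varied between conda versions, and a per-entry flag like `--no-build-isolation` is not expressible in this list.

```yaml
name: myenv
channels:
  - conda-forge
  - defaults
dependencies:
  - pip
  - pip:
      - -e .   # editable install of the package in this directory
```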
<python><pip><anaconda><conda><development-environment>
2023-06-16 19:28:38
1
6,130
william_grisaitis
76,493,076
6,727,914
How to prefix `print` output for each thread in Python?
<p>How to prefix <code>print</code> output for each thread in python ?</p> <pre><code>from threading import Thread def func1(): print('Working foo') def func2(): print(&quot;Working bar&quot;) if __name__ == '__main__': Thread(target = func1).start() Thread(target = func2).start() </code></pre> <p>Expected output:</p> <pre><code>[Foo] Working foo [Bar] Working bar </code></pre> <p>The project is very large so I want to avoid changing the print to logger.</p> <p>I tried:</p> <pre><code>import sys import threading # Custom stream to add prefix to stdout class PrefixStream: def __init__(self, prefix): self.prefix = prefix def write(self, text): sys.stdout.write(self.prefix + &quot; &quot; + text) # Thread 1 function def thread1_function(): # Redirect stdout to custom stream with prefix sys.stdout = PrefixStream(&quot;[Thread 1]&quot;) # Thread 1 logic print(&quot;Thread 1 stdout message&quot;) # Thread 2 function def thread2_function(): # Redirect stdout to custom stream with prefix sys.stdout = PrefixStream(&quot;[Thread 2]&quot;) # Thread 2 logic print(&quot;Thread 2 stdout message&quot;) # Create and start the threads thread1 = threading.Thread(target=thread1_function) thread2 = threading.Thread(target=thread2_function) thread1.start() thread2.start() # Wait for both threads to complete thread1.join() thread2.join() # Restore stdout to default sys.stdout = sys.__stdout__ </code></pre> <p>But this is not working at all</p>
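A sketch of why the attempt fails and one way around it: `sys.stdout` is process-global, so each thread's reassignment races with the others and the last one wins for every thread. Instead, install a single wrapper stream once and make the *prefix* per-thread state via `threading.local` (names below are illustrative, not from any library):

```python
import sys
import threading


class ThreadPrefixStream:
    """One process-wide stdout wrapper; the prefix is per-thread state."""

    def __init__(self, original):
        self.original = original
        self._local = threading.local()

    def set_prefix(self, prefix):
        self._local.prefix = prefix

    def write(self, text):
        prefix = getattr(self._local, "prefix", "")
        # print() emits its line ending as a separate write("\n");
        # only prefix writes that carry content.
        if text.strip():
            self.original.write(prefix + text)
        else:
            self.original.write(text)

    def flush(self):
        self.original.flush()


stream = ThreadPrefixStream(sys.stdout)
sys.stdout = stream


def func1():
    stream.set_prefix("[Foo] ")
    print("Working foo")


def func2():
    stream.set_prefix("[Bar] ")
    print("Working bar")


t1 = threading.Thread(target=func1)
t2 = threading.Thread(target=func2)
t1.start(); t2.start()
t1.join(); t2.join()

sys.stdout = stream.original  # restore
```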
<python><multithreading><logging><printing>
2023-06-16 19:26:37
2
21,427
TSR
76,492,984
4,787,126
Python defined-or operator
<p>Is there some conditional operator in Python which would evaluate the first operand and return its value if it's not <code>None</code>, or if it's <code>None</code>, evaluate and return the second operand? I know that <code>a or b</code> can almost do this, except that it does not strictly distinguish <code>None</code> and <code>False</code>. Looking for something similar to <code>//</code> operator in Perl.</p> <p>The goal is to write both <code>a</code> and <code>b</code> only once, therefore alternative <code>a if a is not None else b</code> doesn't work either - both <code>a</code> and <code>b</code> can be expensive expressions.</p>
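There is no such operator in current Python (PEP 505's None-aware operators remain deferred), but the usual workaround keeps both laziness and the strict `None` test by deferring the operands as callables — a sketch:

```python
def coalesce(*thunks):
    """Return the first thunk's result that is not None, evaluating later
    thunks only if needed (mimics Perl's // with lazily evaluated operands)."""
    value = None
    for thunk in thunks:
        value = thunk()
        if value is not None:
            return value
    return value


calls = []


def expensive():
    calls.append("evaluated")
    return "fallback"


# False is a defined value, so unlike `a or b` the fallback is never evaluated:
result = coalesce(lambda: False, expensive)
```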
<python>
2023-06-16 19:10:26
2
19,053
Zbynek Vyskovsky - kvr000
76,492,882
18,756,733
How to split multiple column values into separate rows under the corresponding primary key in pandas?
<p>I have this dataset:</p> <pre><code>pd.DataFrame(data={'ID':['1','2','3'],'Genre':['Adventure|Children|Fantasy','Horror','Comedy|Drama']}) ID Genre 0 1 Adventure|Children|Fantasy 1 2 Horror 2 3 Comedy|Drama </code></pre> <p>I want it to look like this:</p> <pre><code> ID Genre 0 1 Adventure 1 1 Children 2 1 Fantasy 3 2 Horror 4 3 Comedy 5 3 Drama </code></pre> <p>How can I do it with Pandas?</p>
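A sketch using `str.split` plus `explode` (available since pandas 0.25): split each delimited string into a list, emit one row per element with the ID repeated, and rebuild a clean index.

```python
import pandas as pd

df = pd.DataFrame(data={"ID": ["1", "2", "3"],
                        "Genre": ["Adventure|Children|Fantasy", "Horror",
                                  "Comedy|Drama"]})

# str.split("|") turns each cell into a list; explode emits one row per
# list element, repeating the other columns; reset_index renumbers 0..n.
out = (df.assign(Genre=df["Genre"].str.split("|"))
         .explode("Genre")
         .reset_index(drop=True))
```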
<python><pandas>
2023-06-16 18:49:05
2
426
beridzeg45
76,492,808
5,036,928
Get dest for all args in argparse
<p>I would like to get the <code>dest</code> values for all the arguments defined for the parser. I.e. in the case of:</p> <pre><code>parser = argparse.ArgumentParser() parser.add_argument('arg1') parser.add_argument('arg2') parser.add_argument('arg3') parser.add_argument('arg4') </code></pre> <p>I would like to return <code>['arg1', 'arg2', 'arg3', 'arg4']</code>.</p> <p><code>parser.parse_args()</code> takes stuff from sys.argv which is not what I'm looking for. How can I achieve this?</p>
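A sketch that reads the parser's registered actions (hedged: `_actions` is an internal attribute — stable in practice across argparse versions, but not part of the documented API):

```python
import argparse

parser = argparse.ArgumentParser()
parser.add_argument("arg1")
parser.add_argument("arg2")
parser.add_argument("arg3")
parser.add_argument("arg4")

# Every add_argument() call registers an Action whose .dest is the target
# attribute name; skip the implicit -h/--help action the parser adds itself.
dests = [action.dest for action in parser._actions if action.dest != "help"]
```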
<python><command-line><command-line-arguments><argparse>
2023-06-16 18:34:14
2
1,195
Sterling Butters
76,492,801
5,317,819
pybind11 and eigen gives "error: use of deleted function ‘bool pybind11::detail::eigen_map_caster<MapType>::load"
<p>I am trying to compile a simple c++ program with pybind11 and eigen, but when I compile it I get (see below the full error message)</p> <pre><code>/usr/include/pybind11/cast.h:1195:51: error: use of deleted function ‘bool pybind11::detail::eigen_map_caster&lt;MapType&gt;::load(pybind11::handle, bool) [with MapType = Eigen::Map&lt;Eigen::Matrix&lt;double, -1, 3, 1&gt; &gt;]’ 1195 | if ((... || !std::get&lt;Is&gt;(argcasters).load(call.args[Is], call.args_convert[Is]))) | ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ </code></pre> <p>Here is my c++ file</p> <pre class="lang-cpp prettyprint-override"><code>#include &lt;pybind11/pybind11.h&gt; #include &lt;pybind11/eigen.h&gt; namespace py = pybind11; using MatrixX3d = Eigen::Matrix&lt;double, Eigen::Dynamic, 3, Eigen::RowMajor&gt;; PYBIND11_MODULE(test, m) { m.def(&quot;func&quot;, [](const Eigen::Map&lt;MatrixX3d&gt;&amp; x) { // ... }); } </code></pre> <p>and my cmake file</p> <pre><code>cmake_minimum_required(VERSION 3.12) project(test) set(CMAKE_CXX_STANDARD 17) find_package(pybind11 CONFIG REQUIRED) find_package(Eigen3 REQUIRED) add_library(test MODULE test.cpp) target_link_libraries(test PUBLIC pybind11::module Eigen3::Eigen) </code></pre> <p>I am following the documentation <a href="https://pybind11.readthedocs.io/en/latest/advanced/cast/eigen.html" rel="nofollow noreferrer">https://pybind11.readthedocs.io/en/latest/advanced/cast/eigen.html</a>. What did I do wrong?</p> <p>The full error message is</p> <pre><code> * Executing task: CMake: build build task started.... 
/usr/bin/cmake --build /home/me/dev/test/buildDebug --config Debug --target all -j 10 -- Consolidate compiler generated dependencies of target test [ 50%] Building CXX object CMakeFiles/test.dir/test.cpp.o In file included from /usr/include/pybind11/attr.h:13, from /usr/include/pybind11/pybind11.h:13, from /home/me/dev/test/test.cpp:1: /usr/include/pybind11/cast.h: In instantiation of ‘bool pybind11::detail::argument_loader&lt;Args&gt;::load_impl_sequence(pybind11::detail::function_call&amp;, std::index_sequence&lt;Is ...&gt;) [with long unsigned int ...Is = {0}; Args = {const Eigen::Map&lt;Eigen::Matrix&lt;double, -1, 3, 1, -1, 3&gt;, 0, Eigen::Stride&lt;0, 0&gt; &gt;&amp;}; std::index_sequence&lt;Is ...&gt; = std::integer_sequence&lt;long unsigned int, 0&gt;]’: /usr/include/pybind11/cast.h:1173:34: required from ‘bool pybind11::detail::argument_loader&lt;Args&gt;::load_args(pybind11::detail::function_call&amp;) [with Args = {const Eigen::Map&lt;Eigen::Matrix&lt;double, -1, 3, 1, -1, 3&gt;, 0, Eigen::Stride&lt;0, 0&gt; &gt;&amp;}]’ /usr/include/pybind11/pybind11.h:214:42: required from ‘void pybind11::cpp_function::initialize(Func&amp;&amp;, Return (*)(Args ...), const Extra&amp; ...) [with Func = pybind11_init_test(pybind11::module_&amp;)::&lt;lambda(const Eigen::Map&lt;Eigen::Matrix&lt;double, -1, 3, 1&gt; &gt;&amp;)&gt;; Return = void; Args = {const Eigen::Map&lt;Eigen::Matrix&lt;double, -1, 3, 1, -1, 3&gt;, 0, Eigen::Stride&lt;0, 0&gt; &gt;&amp;}; Extra = {pybind11::name, pybind11::scope, pybind11::sibling}]’ /usr/include/pybind11/pybind11.h:100:19: required from ‘pybind11::cpp_function::cpp_function(Func&amp;&amp;, const Extra&amp; ...) 
[with Func = pybind11_init_test(pybind11::module_&amp;)::&lt;lambda(const Eigen::Map&lt;Eigen::Matrix&lt;double, -1, 3, 1&gt; &gt;&amp;)&gt;; Extra = {pybind11::name, pybind11::scope, pybind11::sibling}; &lt;template-parameter-1-3&gt; = void]’ /usr/include/pybind11/pybind11.h:1048:22: required from ‘pybind11::module_&amp; pybind11::module_::def(const char*, Func&amp;&amp;, const Extra&amp; ...) [with Func = pybind11_init_test(pybind11::module_&amp;)::&lt;lambda(const Eigen::Map&lt;Eigen::Matrix&lt;double, -1, 3, 1&gt; &gt;&amp;)&gt;; Extra = {}]’ /home/me/dev/test/test.cpp:11:10: required from here /usr/include/pybind11/cast.h:1195:51: error: use of deleted function ‘bool pybind11::detail::eigen_map_caster&lt;MapType&gt;::load(pybind11::handle, bool) [with MapType = Eigen::Map&lt;Eigen::Matrix&lt;double, -1, 3, 1&gt; &gt;]’ 1195 | if ((... || !std::get&lt;Is&gt;(argcasters).load(call.args[Is], call.args_convert[Is]))) | ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ In file included from /home/me/dev/test/test.cpp:2: /usr/include/pybind11/eigen.h:393:10: note: declared here 393 | bool load(handle, bool) = delete; | ^~~~ In file included from /usr/include/pybind11/attr.h:13, from /usr/include/pybind11/pybind11.h:13, from /home/me/dev/test/test.cpp:1: /usr/include/pybind11/cast.h: In instantiation of ‘typename pybind11::detail::make_caster&lt;T&gt;::cast_op_type&lt;typename std::add_rvalue_reference&lt;_Tp&gt;::type&gt; pybind11::detail::cast_op(pybind11::detail::make_caster&lt;T&gt;&amp;&amp;) [with T = const Eigen::Map&lt;Eigen::Matrix&lt;double, -1, 3, 1&gt; &gt;&amp;; typename pybind11::detail::make_caster&lt;T&gt;::cast_op_type&lt;typename std::add_rvalue_reference&lt;_Tp&gt;::type&gt; = Eigen::Map&lt;Eigen::Matrix&lt;double, -1, 3, 1&gt; &gt;; pybind11::detail::make_caster&lt;T&gt; = pybind11::detail::type_caster&lt;Eigen::Map&lt;Eigen::Matrix&lt;double, -1, 3, 1&gt; &gt;, void&gt;; typename std::add_rvalue_reference&lt;_Tp&gt;::type = 
const Eigen::Map&lt;Eigen::Matrix&lt;double, -1, 3, 1&gt; &gt;&amp;]’: /usr/include/pybind11/cast.h:1207:51: required from ‘Return pybind11::detail::argument_loader&lt;Args&gt;::call_impl(Func&amp;&amp;, std::index_sequence&lt;Is ...&gt;, Guard&amp;&amp;) &amp;&amp; [with Return = void; Func = pybind11_init_test(pybind11::module_&amp;)::&lt;lambda(const Eigen::Map&lt;Eigen::Matrix&lt;double, -1, 3, 1&gt; &gt;&amp;)&gt;&amp;; long unsigned int ...Is = {0}; Guard = pybind11::detail::void_type; Args = {const Eigen::Map&lt;Eigen::Matrix&lt;double, -1, 3, 1, -1, 3&gt;, 0, Eigen::Stride&lt;0, 0&gt; &gt;&amp;}; std::index_sequence&lt;Is ...&gt; = std::integer_sequence&lt;long unsigned int, 0&gt;]’ /usr/include/pybind11/cast.h:1184:65: required from ‘std::enable_if_t&lt;std::is_void&lt;_Dummy&gt;::value, pybind11::detail::void_type&gt; pybind11::detail::argument_loader&lt;Args&gt;::call(Func&amp;&amp;) &amp;&amp; [with Return = void; Guard = pybind11::detail::void_type; Func = pybind11_init_test(pybind11::module_&amp;)::&lt;lambda(const Eigen::Map&lt;Eigen::Matrix&lt;double, -1, 3, 1&gt; &gt;&amp;)&gt;&amp;; Args = {const Eigen::Map&lt;Eigen::Matrix&lt;double, -1, 3, 1, -1, 3&gt;, 0, Eigen::Stride&lt;0, 0&gt; &gt;&amp;}; std::enable_if_t&lt;std::is_void&lt;_Dummy&gt;::value, pybind11::detail::void_type&gt; = pybind11::detail::void_type]’ /usr/include/pybind11/pybind11.h:233:71: required from ‘void pybind11::cpp_function::initialize(Func&amp;&amp;, Return (*)(Args ...), const Extra&amp; ...) [with Func = pybind11_init_test(pybind11::module_&amp;)::&lt;lambda(const Eigen::Map&lt;Eigen::Matrix&lt;double, -1, 3, 1&gt; &gt;&amp;)&gt;; Return = void; Args = {const Eigen::Map&lt;Eigen::Matrix&lt;double, -1, 3, 1, -1, 3&gt;, 0, Eigen::Stride&lt;0, 0&gt; &gt;&amp;}; Extra = {pybind11::name, pybind11::scope, pybind11::sibling}]’ /usr/include/pybind11/pybind11.h:100:19: required from ‘pybind11::cpp_function::cpp_function(Func&amp;&amp;, const Extra&amp; ...) 
[with Func = pybind11_init_test(pybind11::module_&amp;)::&lt;lambda(const Eigen::Map&lt;Eigen::Matrix&lt;double, -1, 3, 1&gt; &gt;&amp;)&gt;; Extra = {pybind11::name, pybind11::scope, pybind11::sibling}; &lt;template-parameter-1-3&gt; = void]’ /usr/include/pybind11/pybind11.h:1048:22: required from ‘pybind11::module_&amp; pybind11::module_::def(const char*, Func&amp;&amp;, const Extra&amp; ...) [with Func = pybind11_init_test(pybind11::module_&amp;)::&lt;lambda(const Eigen::Map&lt;Eigen::Matrix&lt;double, -1, 3, 1&gt; &gt;&amp;)&gt;; Extra = {}]’ /home/me/dev/test/test.cpp:11:10: required from here /usr/include/pybind11/cast.h:43:100: error: use of deleted function ‘pybind11::detail::eigen_map_caster&lt;MapType&gt;::operator MapType() [with MapType = Eigen::Map&lt;Eigen::Matrix&lt;double, -1, 3, 1&gt; &gt;]’ 42 | return std::move(caster).operator | ~~~~~~~~~~~~~~~~~~~~~~~~~~ 43 | typename make_caster&lt;T&gt;::template cast_op_type&lt;typename std::add_rvalue_reference&lt;T&gt;::type&gt;(); | ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^~ In file included from /home/me/dev/test/test.cpp:2: /usr/include/pybind11/eigen.h:394:5: note: declared here 394 | operator MapType() = delete; | ^~~~~~~~ gmake[2]: *** [CMakeFiles/test.dir/build.make:76: CMakeFiles/test.dir/test.cpp.o] Error 1 gmake[1]: *** [CMakeFiles/Makefile2:83: CMakeFiles/test.dir/all] Error 2 gmake: *** [Makefile:91: all] Error 2 build finished with error(s). * The terminal process terminated with exit code: 2. * Terminal will be reused by tasks, press any key to close it. </code></pre>
<python><c++><eigen><pybind11>
2023-06-16 18:32:56
1
747
T.L
76,492,738
3,557,939
Incorrect sys.path when executing orgmode python code block under venv
<p>When executing a Python code block with the <code>session</code> modifier, Python uses global modules but not local ones, although <code>venv</code> is activated. But executing the same code block without the <code>session</code> modifier uses local modules as expected.</p> <p>The steps to reproduce the issue:</p> <pre><code>$ mkdir test $ cd test $ pyenv local 3.10.6 $ python -m venv .venv $ . .venv/bin/activate $ pip install numpy scipy $ emacs test.org </code></pre> <p>Where <code>test.org</code> is:</p> <pre class="lang-py prettyprint-override"><code>#+begin_src python :results output try: import numpy print('numpy imported') except ModuleNotFoundError as err: print(f'ERROR: {err}') try: import scipy print('scipy imported') except ModuleNotFoundError as err: print(f'ERROR: {err}') import sys print(sys.executable) print(sys.path) #+end_src #+begin_src python :session :results output try: import numpy print('numpy imported') except ModuleNotFoundError as err: print(f'ERROR: {err}') try: import scipy print('scipy imported') except ModuleNotFoundError as err: print(f'ERROR: {err}') import sys print(sys.executable) print(sys.path) #+end_src </code></pre> <p>After executing <code>org-babel-execute-buffer</code>, the results for the first code block should be:</p> <pre><code>#+RESULTS: : numpy imported : scipy imported : /home/sd/projects/test/.venv/bin/python : ['', '/home/sd/.pyenv/versions/3.10.6/lib/python310.zip', '/home/sd/.pyenv/versions/3.10.6/lib/python3.10', '/home/sd/.pyenv/versions/3.10.6/lib/python3.10/lib-dynload', '/home/sd/projects/test/.venv/lib/python3.10/site-packages'] </code></pre> <p>Here <code>sys.path</code> contains <code>site-packages</code> from <code>.venv</code>, so only local packages are being used.</p> <p>For the second code block (with session), the results should be:</p> <pre><code>#+RESULTS: : numpy imported : ERROR: No module named 'scipy' : /home/sd/.pyenv/versions/3.10.6/bin/python : ['',
'/home/sd/.pyenv/versions/3.10.6/lib/python310.zip', '/home/sd/.pyenv/versions/3.10.6/lib/python3.10', '/home/sd/.pyenv/versions/3.10.6/lib/python3.10/lib-dynload', '/home/sd/.local/lib/python3.10/site-packages', '/home/sd/.pyenv/versions/3.10.6/lib/python3.10/site-packages'] </code></pre> <p>Here, <code>numpy</code> was imported but <code>scipy</code> was not found because <code>sys.path</code> contains <code>site-packages</code> from <code>.local</code> and <code>.pyenv</code> directories, but not from <code>.venv</code>, so Python uses only global packages, but no local ones.</p> <p>In the first case (without session), <code>sys.executable</code> equals <code>: /home/sd/projects/test/.venv/bin/python</code>, which is correct. In the second case (with session), <code>sys.executable</code> shows <code>/home/sd/.pyenv/versions/3.10.6/bin/python</code>, so Python does not recognize that it is under a virtual environment.</p> <p>How can this issue be fixed without manually setting <code>sys.path</code>? In both cases, the executable should be the same for both the simple code block and the code block with session.</p>
<python><emacs><org-mode><python-venv>
2023-06-16 18:20:05
0
572
sdorof
76,492,727
8,451,248
requests module hangs after coming out of sleep on Windows
<p>When I take my computer out of sleep mode and try to run a Python script using the requests module, it inevitably hangs. Everything else works fine. Restarting the computer allows requests to work again.</p> <p>This is probably some kind of socket issue between Windows and the requests module, but how can I prevent it?</p>
<python><windows><python-requests>
2023-06-16 18:18:19
1
310
ouai
76,492,622
9,338,509
Does an AWS Java Lambda layer automatically start the JVM?
<p>I am new to AWS Lambda layers/extensions. Does an AWS Java Lambda layer automatically start the JVM, or do we need to start it manually? I am trying to create a Java Lambda layer and call its methods from a Lambda function; however, when I try to call a Java method from the Lambda I get <code>Runtime exited with error: signal: segmentation fault</code>. Python code (lambda):</p> <pre><code>def call_java_method(): #jpype.addClassPath(JAR) - Not sure what path I should give here? class_name = 'com.java.example.SampleJavaClass' my_class = JClass(class_name) instance = my_class() result = instance.sampleMethod() print(result) </code></pre> <p>Java Code:</p> <pre><code>package com.java.example; public class SampleJavaClass { public String sampleMethod() { return &quot;sampleMethod() called!&quot;; } } </code></pre>
<python><java><aws-lambda><aws-lambda-layers><aws-lambda-extensions>
2023-06-16 18:00:33
0
553
lakshmiravali rimmalapudi
76,492,608
443,854
Surprising behavior of the `with` keyword in Python
<p>I wanted to modify context manager behavior of an existing instance of a class (say, a database connection object). My initial idea was to monkey-patch <code>__enter__</code> and <code>__exit__</code> on the instance. To my surprise, that did not work. Monkey-patching the class achieves the desired effect (with a caveat that I am not sure that updating <code>__class__</code> is a good idea).</p> <p>What is the reason for this behavior of the <code>with</code> keyword? Essentially, I am looking for an explanation of why I should not be surprised. I could not find how the <code>with</code> is implemented, and I did not get the answer by reading <a href="https://peps.python.org/pep-0343/" rel="nofollow noreferrer">PEP 343</a>.</p> <p>A runnable piece of code to illustrate.</p> <pre><code>import types class My: def __enter__(self): print('enter') def __exit__(self, a, b, c): print('exit') def add_behavior_to_context_manager(c): # Does not work c_enter = c.__enter__ c_exit = c.__exit__ def __enter__(self): print('enter!') c_enter() return c def __exit__(self, exc_type, exc_value, exc_tb): c_exit(exc_type, exc_value, exc_tb) print('exit!') c.__enter__ = types.MethodType(__enter__, c) c.__exit__ = types.MethodType(__exit__, c) return c def add_behavior_by_modifying_class(c): # Works class MonkeyPatchedConnection(type(c)): def __enter__(self): print('enter!') return super().__enter__() def __exit__(wrapped, exc_type, exc_value, exc_tb): super().__exit__(exc_type, exc_value, exc_tb) print('exit!') c.__class__ = MonkeyPatchedConnection return c my = add_behavior_to_context_manager(My()) print('Methods called on the instance of My work as expected: ') my.__enter__() my.__exit__(None, None, None) print('Instance methods are ignored by the &quot;with&quot; statement: ') with add_behavior_to_context_manager(My()): pass print('Instead, class methods are called by the &quot;with&quot; statement: ') with add_behavior_by_modifying_class(My()): pass </code></pre> <p>And the 
output:</p> <pre><code>Methods called on the instance of My work as expected: enter! enter exit exit! Instance methods are ignored by the &quot;with&quot; statement: enter exit Instead, class methods are called by the &quot;with&quot; statement: enter! enter exit exit! </code></pre>
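The behavior above follows from Python's "special method lookup" rule: for implicit invocations (the `with` statement, `len()`, operators, etc.), CPython reads the dunder from `type(c)`, bypassing the instance `__dict__`. A minimal sketch of the same rule using `len()`:

```python
class C:
    pass

c = C()
c.__len__ = lambda: 3          # bound on the instance only

try:
    len(c)                      # implicit lookup reads type(c).__len__
except TypeError as e:
    print("instance attribute ignored:", e)

C.__len__ = lambda self: 3      # bound on the class
print(len(c))                   # now works: 3
```

This is why monkey-patching `__enter__`/`__exit__` on the instance is invisible to `with`, while patching the class (or swapping `__class__`) takes effect.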
<python><with-statement><contextmanager>
2023-06-16 17:58:19
1
7,543
user443854
76,492,577
895,029
Add and format pandas timestamp labels
<p>Related to <a href="https://stackoverflow.com/questions/45056579/is-it-possible-to-format-the-labels-using-set-xticklabels-in-matplotlib">this SO question</a>, but instead of numeric x-axis labels, I'm struggling to add timestamp labels. As an example</p> <pre class="lang-py prettyprint-override"><code>import pandas as pd import numpy as np import matplotlib.pyplot as plt idx = pd.date_range(&quot;2023-06-15&quot;, &quot;2023-06-16&quot;, freq=&quot;1min&quot;, inclusive=&quot;left&quot;) df = pd.DataFrame(np.random.rand(len(idx), 5), index=idx) fig, ax = plt.subplots() im = ax.imshow(df.transpose(), aspect=&quot;auto&quot;) fig.colorbar(im, ax=ax) </code></pre> <p>I'd specifically like to just show the time component, and say just on every 15 minute interval. I am trying to do something like</p> <pre class="lang-py prettyprint-override"><code>ax.xaxis.set_major_locator(mdates.MinuteLocator(interval=15)) ax.xaxis.set_major_formatter(mdates.DateFormatter(&quot;%H:%M:%S&quot;)) </code></pre> <p>But this just gives</p> <p><code>Locator attempting to generate 138241 ticks ([-0.5, ..., 1439.5]), which exceeds Locator.MAXTICKS (1000).</code></p> <p>I've tried and failed with various auto formatters and locators. Any help would be appreciated.</p>
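A likely explanation for the MAXTICKS error: `imshow` places the image on integer pixel coordinates 0..1439, and the date locator interprets those integers as matplotlib date numbers spanning thousands of days. One workaround (a sketch, not from the original post: the `extent` mapping and `origin` choice are my assumptions) is to give the image real date coordinates first, so the locator sees a one-day span:

```python
import matplotlib
matplotlib.use("Agg")  # headless backend, assumed for a script context
import matplotlib.dates as mdates
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd

idx = pd.date_range("2023-06-15", "2023-06-16", freq="1min", inclusive="left")
df = pd.DataFrame(np.random.rand(len(idx), 5), index=idx)

fig, ax = plt.subplots()
# Map the image onto date coordinates via `extent`; the x-axis then spans
# about one day in date units instead of 1440 integer column indices.
extent = (mdates.date2num(idx[0]), mdates.date2num(idx[-1]), 0, df.shape[1])
im = ax.imshow(df.to_numpy().T, aspect="auto", extent=extent, origin="lower")
ax.xaxis.set_major_locator(mdates.MinuteLocator(interval=15))
ax.xaxis.set_major_formatter(mdates.DateFormatter("%H:%M:%S"))
fig.canvas.draw()  # force tick computation
```

With a one-day span, `MinuteLocator(interval=15)` yields roughly 96 ticks, comfortably under MAXTICKS.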
<python><pandas><matplotlib>
2023-06-16 17:53:44
1
4,506
rwb
76,492,575
1,306,892
Calculate the signed area of piecewise constant functions without using integration
<p>I have defined a step function in Python using the following code. The function takes in an array <code>a</code> and <code>x</code> values, applies some calculations, and returns a step function <code>f</code>. Additionally, I have defined two helper functions <code>rect</code> and <code>psi_j_n</code>. I'd like to calculate the signed area of the product of <code>step_function</code> and <code>psi_j_n(x, -10, 0)</code> without using the integral because it is a rectangle, and that's the area I'm looking for.</p> <p>My initial attempt:</p> <pre><code>signed_area = 0 for x_values in x: signed_area += step_function(x_values, a) * psi_j_n(x_values, -10, 0) signed_area </code></pre> <p>is incorrect because I am missing the length of the base of the rectangle. When I calculate the area by hand, I should get 0.012.</p> <p><strong>Update</strong> I used the following code to define the step function:</p> <pre><code>import numpy as np import matplotlib.pyplot as plt # Define the step function def step_function(x, a): def rect(x): return np.where((x &gt;= 0) &amp; (x &lt; 1), 1, 0) f = np.sum([a[k-1] * rect(x - k) for k in range(1, len(a) + 1)], axis=0) return f # Set the random seed for reproducibility np.random.seed(42) # Generate random values for a_k N = 10 a = np.array([-0.25091976, 0.90142861, 0.46398788, 0.19731697, -0.68796272, -0.68801096, -0.88383278, 0.73235229, 0.20223002, 0.41614516]) #a = np.random.uniform(-1, 1, size=N) # Define the x-values for plotting x = np.arange(0, N + 1, 0.01) # Evaluate the step function at x y = step_function(x, a) # Plot the step function plt.plot(x, y) plt.xlabel('x') plt.ylabel('f(x)') plt.title('Step Function Plot') plt.grid(True) plt.show() </code></pre> <p>It produces the picture</p> <p><a href="https://i.sstatic.net/PunKw.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/PunKw.png" alt="enter image description here" /></a></p> <p>The following code defines instead the function <code>psi_j_n(x, j, n)</code>:</p> 
<pre><code>def psi(x): if 0 &lt;= x &lt; 0.5: return 1 elif 0.5 &lt;= x &lt; 1: return -1 else: return 0 def psi_j_n(x, j, n): return 2**(j/2) * psi(2**j * x - n) </code></pre> <p>Then, I would like to calculate the product of <code>step_function(x, a)</code> and <code>psi_j_n(x, j, n)</code>.</p>
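The missing factor in the attempted sum is the base of each rectangle, i.e. the grid spacing `dx` of the `x` array. For piecewise-constant functions a Riemann sum `sum(f * g) * dx` recovers the exact signed area. A sketch with toy stand-ins (the question's `psi` is scalar-only, so in the real code it would need to be vectorized with `np.where` or applied in a loop):

```python
import numpy as np

dx = 0.01                          # grid spacing, matching np.arange(0, N + 1, 0.01)
x = np.arange(0, 2, dx)
f = np.where(x < 1, 2.0, 3.0)      # toy piecewise-constant "step_function"
g = np.where(x < 1, 1.0, 0.0)      # toy stand-in for psi_j_n
# Each grid point contributes height * base, so multiply the sum by dx:
signed_area = np.sum(f * g) * dx
print(signed_area)                 # area of the 2-by-1 rectangle on [0, 1)
```

Applying the same `* dx` correction to the original loop should bring the hand-computed 0.012 and the numerical result into agreement.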
<python><numpy>
2023-06-16 17:53:38
1
1,801
Mark
76,492,543
3,137,789
mypy: how to specialize a child attribute type?
<p>Here is what I'm trying to do: <code>Parent.x</code> can accept a <code>Literal</code> &quot;a&quot;, &quot;b&quot;, or &quot;c&quot;. I would like a Child class that inherits from <code>Parent</code>, but whose <code>x</code> attribute can only be &quot;a&quot;. Here is one of my attempts:</p> <pre><code>from typing import Literal class Parent: x: Literal[&quot;a&quot;, &quot;b&quot;, &quot;c&quot;] def __init__(self, x: Literal[&quot;a&quot;, &quot;b&quot;, &quot;c&quot;]) -&gt; None: self.x = x class Child(Parent): x: Literal[&quot;a&quot;] child = Child(&quot;c&quot;) # this should return an error, but doesn't </code></pre> <p>As you can see, despite declaring the <code>x</code> attribute in Child as only accepting &quot;a&quot;, I can still instantiate it with &quot;c&quot;. I'm using mypy 1.3.0 with the <code>--strict</code> flag enabled.</p>
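Re-declaring the attribute in `Child` does not narrow the inherited `__init__` parameter, so mypy still checks `Child("c")` against `Literal["a", "b", "c"]`. One way to make the constructor narrow along with the attribute is to parameterize the parent with a `TypeVar` (a sketch of one possible design, not the only one):

```python
from typing import Generic, Literal, TypeVar

T = TypeVar("T", bound=Literal["a", "b", "c"])

class Parent(Generic[T]):
    x: T

    def __init__(self, x: T) -> None:
        self.x = x

class Child(Parent[Literal["a"]]):
    pass

child = Child("a")    # accepted
# child = Child("c")  # mypy --strict rejects this call: incompatible argument
print(child.x)
```

Here `Child`'s inherited `__init__` has parameter type `Literal["a"]`, so the invalid instantiation is caught statically while runtime behavior is unchanged.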
<python><mypy>
2023-06-16 17:46:41
0
1,327
MyUsername112358
76,492,535
3,821,009
Polars shift by date
<p>Say I have this:</p> <pre><code>numpy.random.seed(1) df = (polars .DataFrame(dict( dt=numpy.random.randint(datetime.datetime(2023, 1, 1).timestamp(), datetime.datetime(2023, 1, 31).timestamp(), 100) )) .select( polars.from_epoch('dt').sort() ) .filter( (polars.col('dt').dt.day().cos() * 17).floor() % 3 == 0 ) .pipe(lambda df: df.with_columns( j=polars.lit(numpy.random.randint(10, 99, df.height)), )) .with_columns( k=polars.col('j').last().over(polars.col('dt').dt.date()), ) ) </code></pre> <p>which produces:</p> <pre><code> dt (datetime[μs]) j (i64) k (i64) 2023-01-01 12:48:34 30 30 2023-01-04 09:37:05 42 75 2023-01-04 15:13:42 22 75 2023-01-04 22:58:20 75 75 2023-01-07 00:18:27 70 20 2023-01-07 02:42:28 34 20 2023-01-07 06:32:09 92 20 2023-01-07 09:38:43 12 20 2023-01-07 20:59:16 20 20 2023-01-08 05:25:04 64 76 2023-01-08 09:10:17 92 76 2023-01-08 10:53:40 96 76 2023-01-08 14:29:28 80 76 2023-01-08 16:11:37 76 76 2023-01-10 12:59:38 81 58 2023-01-10 21:21:29 58 58 2023-01-11 14:33:55 64 52 2023-01-11 18:54:01 25 52 2023-01-11 22:00:55 15 52 2023-01-11 22:34:28 27 52 2023-01-11 23:41:27 52 52 2023-01-13 04:07:50 30 23 2023-01-13 08:20:19 58 23 2023-01-13 09:44:07 32 23 2023-01-13 14:18:54 23 23 2023-01-20 08:42:19 63 94 2023-01-20 17:07:37 94 94 shape: (27, 3) </code></pre> <p>where I've calculated <code>k</code> to be the last value of <code>j</code> for each of the dates. 
I'd like to add another column <code>l</code> which would have the value of <code>k</code> for the previous available date, i.e.:</p> <pre><code> dt (date) j (i64) k (i64) l (i64) 2023-01-01 30 30 null 2023-01-04 42 75 30 2023-01-04 22 75 30 2023-01-04 75 75 30 2023-01-07 70 20 75 2023-01-07 34 20 75 2023-01-07 92 20 75 2023-01-07 12 20 75 2023-01-07 20 20 75 2023-01-08 64 76 20 2023-01-08 92 76 20 2023-01-08 96 76 20 2023-01-08 80 76 20 2023-01-08 76 76 20 2023-01-10 81 58 76 2023-01-10 58 58 76 2023-01-11 64 52 58 2023-01-11 25 52 58 2023-01-11 15 52 58 2023-01-11 27 52 58 2023-01-11 52 52 58 2023-01-13 30 23 52 2023-01-13 58 23 52 2023-01-13 32 23 52 2023-01-13 23 23 52 2023-01-20 63 94 23 2023-01-20 94 94 23 shape: (27, 4) </code></pre> <p>That is, instead of &quot;shift by a constant number of rows&quot; which the standard <a href="https://pola-rs.github.io/polars/py-polars/html/reference/dataframe/api/polars.DataFrame.shift.html#polars.DataFrame.shift" rel="nofollow noreferrer">shift</a> function provides, I'm looking for a &quot;shift by date&quot; functionality. How would I go about doing that?</p>
<python><dataframe><python-polars>
2023-06-16 17:45:28
2
4,641
levant pied
76,492,521
12,603,110
How to create huggingface tokenizer from a "char_to_idx" dict?
<p>Given a dictionary <code>char_to_idx</code> how can one create a tokenizer such that the ids of the tokens are <strong>guaranteed</strong> to be the same as in char_to_idx?</p> <pre class="lang-py prettyprint-override"><code>char_to_idx = {'a': 0, 'b': 1, 'c': 2, 'd': 3} tokenizer = tokenizers.Tokenizer(tokenizers.models.Unigram()) # ??? print(tokenizer.get_vocab()) # {'a': 0, 'b': 1, 'c': 2, 'd': 3} </code></pre>
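If fixed ids are the requirement, a `WordLevel` model accepts an explicit vocabulary mapping verbatim, unlike `Unigram`, whose ids come out of training. A sketch (the `unk_token="a"` choice is an assumption for brevity; in practice you would reserve a dedicated `[UNK]` entry in the dict):

```python
import tokenizers

char_to_idx = {"a": 0, "b": 1, "c": 2, "d": 3}

# WordLevel takes the vocab as-is, so token ids are guaranteed to match:
tokenizer = tokenizers.Tokenizer(
    tokenizers.models.WordLevel(vocab=char_to_idx, unk_token="a")
)
print(tokenizer.get_vocab())
```

To actually tokenize character by character you would also attach a suitable pre-tokenizer; the snippet above only demonstrates that the id mapping is preserved.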
<python><nlp><huggingface-tokenizers>
2023-06-16 17:43:25
1
812
Yorai Levi
76,492,506
10,491,381
Piecewise Python CPLEX
<p>This is my optimization problem:</p> <p>My company sells 2 items A and B, at $20 and $30 respectively.</p> <ul> <li>If the production of A &gt; 0, then the cost of maintenance is 20</li> <li>If the production of A &gt; 200, then the cost of maintenance is 30</li> <li>If the production of A &gt; 300, then the cost of maintenance is 50</li> </ul> <p>How can I maximize my profit?</p> <p>These are discontinuous piecewise constraints.</p> <p>How should I model this using CPLEX? Here is my first attempt, using <b>binary</b> variables. It does not work when a &gt;= 300:</p> <pre><code>DOcplexException: Model did not solve successfully </code></pre> <p>Complete code, using <b>binary</b> variables:</p> <pre><code>import cplex import docplex.mp from docplex.mp.model import Model model = Model(name='LP_example') a = model.integer_var(name='a') b = model.integer_var(name='b') z = model.binary_var(name='z') k = model.binary_var(name='k') t = model.binary_var(name='t') model.maximize(20 * a + 30 * b - z * 20 + k * 30 + t * 50 ) model.add_constraint(a + b &lt;= 1000) model.add_constraint(a &gt;= 200) model.add_constraint(b &gt;= 150) model.add_constraint(model.if_then(a &lt;= 200, z == 1)) model.add_constraint(model.if_then(a &gt;= 200, k == 1)) model.add_constraint(model.if_then(a &gt;= 300, k == 0)) model.add_constraint(model.if_then(a &gt;= 300, t == 1)) model.solve() for v in model.iter_integer_vars(): print(v,&quot; = &quot;,v.solution_value) for v in model.iter_binary_vars(): print(v,&quot; = &quot;,v.solution_value) </code></pre> <p>How do I use the <b>model.piecewise</b> function?</p> <p>Edit: I have something based on this example: <a href="https://github.com/AlexFleischerParis/zoodocplex/blob/master/zoopiecewise.py" rel="nofollow noreferrer">https://github.com/AlexFleischerParis/zoodocplex/blob/master/zoopiecewise.py</a></p> <p>What do you think about that?</p> <pre><code>f = model.piecewise(0, [(0, 0),(200,200)], z==1) model.maximize(f(a) * 20 + b * 30 - z * 20 ) 
</code></pre> <p>Thanks a lot Sebastian, with your help, I have found this:</p> <pre><code># Define the piecewise linear function f = model.piecewise(0, [(0, 20), (200, 30), (300, 50)],1) model.maximize(20 * a - f(a) + 30 * b) </code></pre>
<python><linear-programming><cplex>
2023-06-16 17:42:06
1
347
harmonius cool
76,492,474
12,603,110
Decoded text of huggingface Unigram tokenizer has extra spaces
<p><code>decoded</code> should be equal to <code>text</code>, but:</p> <pre class="lang-py prettyprint-override"><code>import tokenizers text = &quot;Hello World!&quot; tokenizer = tokenizers.Tokenizer(tokenizers.models.Unigram()) tokenizer.train_from_iterator(text) encoded = tokenizer.encode(text) decoded = tokenizer.decode(encoded.ids) print(decoded) # 'H e l l o W o r l d !' </code></pre> <p>How can I change the tokenizer to produce the desired output?</p>
<python><nlp><huggingface-tokenizers>
2023-06-16 17:37:54
0
812
Yorai Levi
76,492,422
4,075,155
Modify the data type of last layer of transformer model - llama huggingface
<p>I'm using torch to load the <code>decapoda-research/llama-7b-hf</code> from hf, which is a <code>'transformers.models.llama.modeling_llama.LlamaForCausalLM'</code> and the only way I manage to load it is by using <code>load_in_8bit</code>.</p> <p>because of that, when I try to do inferences with the model I get:</p> <pre><code>RuntimeError: &quot;log_softmax_lastdim_kernel_impl&quot; not implemented for 'Half' </code></pre> <p>How can I change the number of bits of the last layer of the model to prevent that error?</p>
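This error generally means a float16 tensor reached a CPU op that has no half-precision kernel, rather than a problem specific to the last layer. Common workarounds, offered as general PyTorch practice and not a llama-specific recipe, are to run the forward pass on the GPU or to upcast before the offending op (e.g. casting the logits, or the relevant head, to float32). A minimal reproduction of the dtype issue and the upcast:

```python
import torch

logits = torch.randn(2, 5, dtype=torch.float16)  # stand-in for half-precision model output

# CPU kernels for log_softmax are not implemented for float16,
# so upcast just before the op (or keep the computation on the GPU):
log_probs = torch.log_softmax(logits.float(), dim=-1)
print(log_probs.dtype)  # torch.float32
```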
<python><torch><huggingface>
2023-06-16 17:28:08
0
2,380
Lucas Azevedo
76,492,324
7,530,245
In shiny for python how can I use panel_conditional to check if an input contains a value?
<p>Let's say I have an input 'state' that can contain multiple values. How do I check if the input state contains 'Alabama'. I assumed I could just use 'in' like this but it doesn't seem to work:</p> <pre><code> from shiny import ui, App, render app_ui = ui.page_fluid( ui.input_selectize(&quot;state&quot;, &quot;State&quot;, choices = [ &quot;Alabama&quot;, &quot;Alaska&quot;, &quot;Arizona&quot;, &quot;Arkansas&quot;, &quot;California&quot;, &quot;Colorado&quot;, &quot;Connecticut&quot;, &quot;Delaware&quot;, &quot;Florida&quot;, &quot;Georgia&quot;, &quot;Hawaii&quot;, &quot;Idaho&quot;, &quot;Illinois&quot;, &quot;Indiana&quot;, &quot;Iowa&quot;, &quot;Kansas&quot;, &quot;Kentucky&quot;, &quot;Louisiana&quot;, &quot;Maine&quot;, &quot;Maryland&quot;, &quot;Massachusetts&quot;, &quot;Michigan&quot;, &quot;Minnesota&quot;, &quot;Mississippi&quot;, &quot;Missouri&quot;, &quot;Montana&quot;, &quot;Nebraska&quot;, &quot;Nevada&quot;, &quot;New Hampshire&quot;, &quot;New Jersey&quot;, &quot;New Mexico&quot;, &quot;New York&quot;, &quot;North Carolina&quot;, &quot;North Dakota&quot;, &quot;Ohio&quot;, &quot;Oklahoma&quot;, &quot;Oregon&quot;, &quot;Pennsylvania&quot;, &quot;Rhode Island&quot;, &quot;South Carolina&quot;, &quot;South Dakota&quot;, &quot;Tennessee&quot;, &quot;Texas&quot;, &quot;Utah&quot;, &quot;Vermont&quot;, &quot;Virginia&quot;, &quot;Washington&quot;, &quot;West Virginia&quot;, &quot;Wisconsin&quot;, &quot;Wyoming&quot;, &quot;Washington D.C.&quot; ], multiple = True), ui.panel_conditional(&quot;'Alabama' in input.state&quot;, &quot;TEST&quot;) ) def server(input, output, session): ... app = App(app_ui, server) </code></pre>
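As I read the py-shiny docs, the first argument of `ui.panel_conditional` is evaluated in the browser as a JavaScript expression, not as Python, so Python's `in` operator is not available there. In JavaScript a multi-select value arrives as an array of strings, and membership is tested with `Array.prototype.includes`:

```javascript
// Sketch of the browser-side check panel_conditional performs:
// the selectize value is an array of strings, so use includes().
const state = ["Alabama", "Alaska"];
console.log(state.includes("Alabama")); // true
```

So the condition string would presumably read `ui.panel_conditional("input.state.includes('Alabama')", "TEST")` rather than using `in`.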
<python><py-shiny>
2023-06-16 17:11:51
1
1,467
Dave Rosenman
76,492,319
3,247,006
Only a model label is not translated in Django Admin
<p>I'm trying to translate the entire Django Admin from <strong>English</strong> to <strong>French</strong>.</p> <p>This is my <code>django-project</code> below. *I use <strong>Django 4.2.1</strong>:</p> <pre class="lang-none prettyprint-override"><code>django-project |-core | |-settings.py | └-urls.py |-my_app1 | |-models.py | |-admin.py | └-apps.py |-my_app2 └-locale └-fr └-LC_MESSAGES |-django.po └-django.mo </code></pre> <p>And, this is <code>core/settings.py</code> below:</p> <pre class="lang-py prettyprint-override"><code># &quot;core/settings.py&quot; MIDDLEWARE = [ ... 'django.contrib.sessions.middleware.SessionMiddleware', 'django.middleware.locale.LocaleMiddleware', 'django.middleware.common.CommonMiddleware', ... ] LANGUAGE_CODE = 'en-us' TIME_ZONE = 'UTC' USE_I18N = True USE_TZ = True from django.utils.translation import gettext_lazy as _ LANGUAGES = ( ('en', _('English')), ('fr', _('French')) ) </code></pre> <p>And, this is <code>core/urls.py</code> below:</p> <pre class="lang-py prettyprint-override"><code># &quot;core/urls.py&quot; from django.contrib import admin from django.urls import path from django.conf.urls.i18n import i18n_patterns urlpatterns = i18n_patterns( path('admin/', admin.site.urls) ) </code></pre> <p>And, this is <code>my_app1/models.py</code> below:</p> <pre class="lang-py prettyprint-override"><code># &quot;my_app1/models.py&quot; from django.db import models from django.utils.translation import gettext_lazy as _ class Person(models.Model): name = models.CharField(max_length=20, verbose_name=_(&quot;name&quot;)) class Meta: verbose_name = _('person') verbose_name_plural = _('persons') </code></pre> <p>And, this is <code>my_app1/apps.py</code> below:</p> <pre class="lang-py prettyprint-override"><code># &quot;my_app1/apps.py&quot; from django.apps import AppConfig from django.utils.translation import gettext_lazy as _ class MyApp1Config(AppConfig): default_auto_field = 'django.db.models.BigAutoField' name = 'my_app1' verbose_name = 
_('my_app1') </code></pre> <p>And, this is <code>locale/fr/LC_MESSAGES/django.po</code> below:</p> <pre class="lang-none prettyprint-override"><code># &quot;locale/fr/LC_MESSAGES/django.po&quot; ... #: .\core\settings.py:140 msgid &quot;English&quot; msgstr &quot;Anglais&quot; #: .\core\settings.py:141 msgid &quot;French&quot; msgstr &quot;Français&quot; #: .\my_app1\apps.py:7 msgid &quot;my_app1&quot; msgstr &quot;mon_app1&quot; #: .\my_app1\models.py:6 msgid &quot;name&quot; msgstr &quot;nom&quot; #: .\my_app1\models.py:12 #, fuzzy msgid &quot;person&quot; msgstr &quot;personne&quot; #: .\my_app1\models.py:13 #, fuzzy msgid &quot;persons&quot; msgstr &quot;personnes&quot; ... </code></pre> <p>But, only the model label <code>person</code> is not translated at <code>http://localhost:8000/fr/admin/my_app1/person/add/</code> as shown below:</p> <p><a href="https://i.sstatic.net/6Cxrw.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/6Cxrw.png" alt="enter image description here" /></a></p> <p>So, how can I translate the model label <code>person</code>?</p>
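A likely cause, judging from the dump shown: the `person`/`persons` entries carry `#, fuzzy` markers, and both `msgfmt` and Django's `compilemessages` skip fuzzy translations by default, which matches the symptom that exactly those two strings stay untranslated. A sketch of the fix is to delete the fuzzy markers from those entries:

```
#: .\my_app1\models.py:12
msgid "person"
msgstr "personne"

#: .\my_app1\models.py:13
msgid "persons"
msgstr "personnes"
```

then rerun `django-admin compilemessages` to regenerate `django.mo` and restart the server.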
<python><django><django-models><django-admin><django-i18n>
2023-06-16 17:11:22
1
42,516
Super Kai - Kazuya Ito