| QuestionId | UserId | QuestionTitle | QuestionBody | Tags | CreationDate | AnswerCount | UserExpertiseLevel | UserDisplayName |
|---|---|---|---|---|---|---|---|---|
77,767,645
| 10,737,147
|
Matplotlib event handling within a class
|
<p>Could someone please shed some light on this?</p>
<p>This code is supposed to print out the event details on a mouse button press. If I connect the <code>button_press_event</code> slot to <code>on_button_press</code> (outside the class), the code works as expected. If I instead use the class method <code>self.on_button_press</code>, nothing is printed out. Why is that?</p>
<pre><code>import matplotlib.pyplot as plt

class Modifier:
    def __init__(self, initial_line):
        self.initial_line = initial_line
        self.ax = initial_line.axes
        canvas = self.ax.figure.canvas
        cid = canvas.mpl_connect('button_press_event', self.on_button_press)

    def on_button_press(self, event):
        print(event)

def on_button_press(event):
    print(event)

fig, ax = plt.subplots()
ax.set_aspect('equal')
initial = ax.plot([1,2,3], [4,5,6], color='b', lw=1, clip_on=False)
Modifier(initial[0])
plt.show()
</code></pre>
|
<python><matplotlib><events>
|
2024-01-05 23:19:22
| 1
| 437
|
XYZ
|
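A likely explanation for the question above: matplotlib's callback registry stores bound-method callbacks as weak references, so when the `Modifier` instance is never assigned to a name, it is garbage-collected and the connected callback silently disappears. A minimal sketch that keeps the instance alive (the `Agg` backend is used here only so the sketch runs headless; variable names are illustrative):

```python
import matplotlib
matplotlib.use("Agg")  # non-interactive backend so the sketch runs anywhere
import matplotlib.pyplot as plt

class Modifier:
    def __init__(self, line):
        self.ax = line.axes
        # Store the connection id; more importantly, the *instance* itself
        # must stay referenced or the bound-method callback is collected.
        self.cid = self.ax.figure.canvas.mpl_connect(
            'button_press_event', self.on_button_press)

    def on_button_press(self, event):
        print(event)

fig, ax = plt.subplots()
line, = ax.plot([1, 2, 3], [4, 5, 6], color='b', lw=1, clip_on=False)
mod = Modifier(line)  # keeping `mod` alive prevents garbage collection
```

With a module-level function there is no instance to collect, which is why the non-class version appears to work.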
77,767,585
| 247,542
|
How to add organization access to Google Drive folder?
|
<p>How do you programmatically grant access to everyone in an organization to a shared folder on Google Drive using the v3 Drive Python API?</p>
<p><a href="https://developers.google.com/drive/api/guides/manage-shareddrives" rel="nofollow noreferrer">These docs</a> clearly explain how to grant access to a specific user, but that doesn't seem to cover granting access to your entire group (i.e. the "Anyone in this group with the link can view" option under the "General access" section).</p>
<p>This is what I've tried:</p>
<pre><code>from apiclient import discovery

credentials = get_google_drive_credentials()
http = credentials.authorize(httplib2.Http())
drive_service = discovery.build('drive', 'v3', http=http)
body = {
    'name': 'My Folder',
    'mimeType': "application/vnd.google-apps.folder",
    'supportsAllDrives': True,
    'allowFileDiscovery': True,
    'type': 'domain',
    'domain': 'MyDomain',
}
new_folder = drive_service.files().create(body=body).execute()
</code></pre>
<p>And while it creates the folder successfully, when I view the new folder on Google Drive, it still shows up under "My Drive", owned by me, and shared with no one.</p>
<p>Why is it ignoring the <code>domain</code> type and not sharing it with everyone in my group?</p>
|
<python><google-drive-api><google-drive-shared-drive>
|
2024-01-05 23:00:51
| 1
| 65,489
|
Cerin
|
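A probable fix for the question above: in the Drive v3 API, sharing is controlled by the Permissions resource, not by fields on the file itself, so `type`, `domain`, and `allowFileDiscovery` in the `files().create` body are silently ignored. A sketch of a follow-up `permissions().create` call (the helper name and the domain value are illustrative, not from the question):

```python
def share_folder_with_domain(drive_service, folder_id, domain):
    """Grant everyone in `domain` read access to the folder.

    `drive_service` is a built Drive v3 client, as in the question.
    """
    permission = {
        'type': 'domain',
        'role': 'reader',            # or 'writer' / 'commenter'
        'domain': domain,            # e.g. 'example.com'
        'allowFileDiscovery': True,  # findable without the link
    }
    return drive_service.permissions().create(
        fileId=folder_id, body=permission).execute()
```

After the folder is created, something like `share_folder_with_domain(drive_service, new_folder['id'], 'example.com')` would apply the sharing.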
77,767,517
| 3,671,543
|
Combine rows and columns to create a 2x2 table for Fisher's exact test
|
<p>I need to perform a test of independence on the following crosstab <code>ct</code> using python:</p>
<p><a href="https://i.sstatic.net/YPr8A.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/YPr8A.png" alt="enter image description here" /></a></p>
<p>Since there are some values less than 5, I cannot perform the chi-square test of independence. Instead, I need to perform Fisher's exact test.</p>
<p>Since Fisher's exact test implementation on Scipy supports only a 2x2 table, I implemented the following solution:</p>
<pre><code>import numpy as np
from scipy.stats import fisher_exact

# Combine rows and columns to create a 2x2 table
table_2x2 = np.array([[ct[1][4] + ct[2][4] + ct[1][3] + ct[2][3], ct[3][4] + ct[4][4] + ct[3][3] + ct[4][3]],
                      [ct[1][2] + ct[2][2] + ct[1][1] + ct[2][1], ct[3][2] + ct[4][2] + ct[3][1] + ct[4][1]]])

# Perform Fisher's exact test on the 2x2 table
odds_ratio, p_value = fisher_exact(table_2x2)

# Display the results
print(f'Odds Ratio: {odds_ratio}')
print(f'P-value: {p_value}')
</code></pre>
<p>Is this a valid solution? If not, is there any other suggestion for implementing this in Python?</p>
|
<python><scipy><statistics><data-analysis>
|
2024-01-05 22:36:27
| 2
| 689
|
Oussema Chaabouni
|
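On the question above: collapsing categories into a 2x2 table is a common workaround, but hand-typed `ct[i][j]` sums are easy to get wrong; slicing and summing is less error-prone. A sketch with a made-up 4x4 table standing in for the real crosstab (the values here are placeholders):

```python
import numpy as np
from scipy.stats import fisher_exact

# Hypothetical 4x4 crosstab; replace with the real `ct` values.
ct = np.array([[5, 2, 1, 0],
               [3, 4, 2, 1],
               [1, 2, 6, 3],
               [0, 1, 2, 7]])

# Collapse the first two rows/columns vs. the last two into a 2x2 table.
table_2x2 = np.array([[ct[:2, :2].sum(), ct[:2, 2:].sum()],
                      [ct[2:, :2].sum(), ct[2:, 2:].sum()]])

odds_ratio, p_value = fisher_exact(table_2x2)
```

Whether collapsing is statistically defensible depends on whether the merged categories are meaningful for the hypothesis; if they are not, an exact test for general r x c tables (available in R's `fisher.test`, for instance) may be preferable to forcing a 2x2 shape.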
77,767,450
| 2,893,712
|
APScheduler Runs Job Twice
|
<p>I have an APScheduler job that I want to run on Fridays at 8:05 AM with a jitter of 15 minutes. Here is my code:</p>
<pre><code>import time

import requests
from apscheduler.schedulers.background import BackgroundScheduler

sched = BackgroundScheduler()

def WOL():
    requests.get("https://example.com/WOL")

sched.add_job(WOL, 'cron', hour=8, minute=5, jitter=900, day_of_week='fri')
sched.start()

try:
    while True:
        time.sleep(2)
except (KeyboardInterrupt, SystemExit):
    sched.shutdown()
</code></pre>
<p>However, this job runs twice. Here are the logs from today's run:</p>
<blockquote>
<p>2024-01-05 08:04:17,657 - INFO - Running job "WOL (trigger: cron[day_of_week='fri', hour='8', minute='5'], next run at: 2024-01-05 08:13:47 PST)" (scheduled at 2024-01-05 08:04:17.646354-08:00)</p>
<p>2024-01-05 08:04:19,848 - INFO - Job "WOL (trigger: cron[day_of_week='fri', hour='8', minute='5'], next run at: 2024-01-05 08:13:47 PST)" executed successfully</p>
<p>2024-01-05 08:13:47,596 - INFO - Running job "WOL (trigger: cron[day_of_week='fri', hour='8', minute='5'], next run at: 2024-01-12 08:12:15 PST)" (scheduled at 2024-01-05 08:13:47.591469-08:00)</p>
<p>2024-01-05 08:13:49,608 - INFO - Job "WOL (trigger: cron[day_of_week='fri', hour='8', minute='5'], next run at: 2024-01-12 08:12:15 PST)" executed successfully</p>
</blockquote>
<p>It ran at 8:04am and 8:13am. Am I doing something wrong with how I am scheduling it?</p>
|
<python><cron><apscheduler>
|
2024-01-05 22:13:59
| 0
| 8,806
|
Bijan
|
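On the double run above: with `jitter`, APScheduler perturbs each computed fire time, so a run that fires early (08:04 for the nominal 08:05 slot) can be followed by the trigger selecting the same nominal slot again with a different jitter (08:13), which matches the logs. One defensive workaround (an assumption on my part, not official APScheduler guidance) is to make the job idempotent per day:

```python
import datetime

_last_run_date = None

def wol_guarded():
    """Fire the real WOL request at most once per calendar day."""
    global _last_run_date
    today = datetime.date.today()
    if _last_run_date == today:
        return False  # already ran today; skip the jitter duplicate
    _last_run_date = today
    # requests.get("https://example.com/WOL") would go here
    return True
```

The guard would be scheduled exactly as before: `sched.add_job(wol_guarded, 'cron', hour=8, minute=5, jitter=900, day_of_week='fri')`.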
77,767,405
| 10,200,497
|
Creating a new column when values in another column are not duplicated
|
<p>This is my DataFrame:</p>
<pre><code>import pandas as pd
import numpy as np

df = pd.DataFrame(
    {
        'a': [98, 97, 100, 101, 103, 110, 108, 109, 130, 135],
        'b': [3, 3, 3, 3, 3, 3, 3, 3, 3, 3],
        'c': [np.nan, np.nan, 1.0, 1.0, 1.0, 2.0, 2.0, 2.0, 3.0, 3.0],
        'd': [92, 92, 92, 92, 92, 92, 92, 92, 92, 92],
    }
)
</code></pre>
<p>And this is the expected output. I want to create column <code>x</code>:</p>
<pre><code> a b c d x
0 98 3 NaN 92 92
1 97 3 NaN 92 92
2 100 3 1.0 92 94
3 101 3 1.0 92 94
4 103 3 1.0 92 94
5 110 3 2.0 92 104
6 108 3 2.0 92 104
7 109 3 2.0 92 104
8 130 3 3.0 92 124
9 135 3 3.0 92 124
</code></pre>
<p>Steps:</p>
<p>a) When <code>c</code> is not duplicated, <code>df['x'] = df.a - (df.b * 2)</code></p>
<p>b) If <code>df.c == np.nan</code>, <code>df['x'] = df.d</code></p>
<p>For example:</p>
<p>The first new value in <code>c</code> is row <code>2</code>. So <code>df['x'] = 100 - (3 * 2)</code> which is 94 and <code>df['x'] = 94</code> until a new value in <code>c</code> appears which is row <code>5</code>. For row <code>5</code>, <code>df['x'] = 110 - (3 * 2)</code> which is 104. And the logic continues.</p>
<p>This is what I have tried:</p>
<pre><code>df['x'] = df.a - (df.b * 2)
df.loc[df.c.isna(), 'x'] = df.d
df['x'] = df.x.cummax()
</code></pre>
|
<python><pandas><dataframe>
|
2024-01-05 22:00:40
| 2
| 2,679
|
AmirX
|
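A sketch for the question above that avoids `cummax` (which only gives the right answer while `x` happens to be non-decreasing): compute `x` on the first row of each run of `c`, forward-fill it, and fall back to `d` where `c` is NaN. Column names follow the question.

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({
    'a': [98, 97, 100, 101, 103, 110, 108, 109, 130, 135],
    'b': [3] * 10,
    'c': [np.nan, np.nan, 1.0, 1.0, 1.0, 2.0, 2.0, 2.0, 3.0, 3.0],
    'd': [92] * 10,
})

# True only on the first row of each new (non-NaN) value of c.
is_first = df['c'].ne(df['c'].shift()) & df['c'].notna()

df['x'] = np.where(is_first, df['a'] - df['b'] * 2, np.nan)
df['x'] = df['x'].ffill().fillna(df['d']).astype(int)
```

The forward-fill carries each freshly computed value down the run, and the rows with NaN in `c` (which precede any run) pick up `d` from the `fillna`.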
77,767,299
| 21,420,742
|
Getting the names separated into own columns in python
|
<p>I have a dataset where I need to separate the names into <code>first name</code>, <code>middle name</code>, and <code>last name</code>. The issue I am having is that on some occasions people have two last/middle names, or spaces creating longer names.
This is a sample of what I have:</p>
<pre><code>name
John Smith
Jack A Doe
Jane Marie Jones Smith
</code></pre>
<p>I have found that doing <code>df[['firstname','middlename','middlename1','lastname']] = df['name'].str.split(expand=True)</code> and then combining the middle names together with logic works, but if the file I am using updates with a name like <strong>Josh Jacob Jingle Hiemer Schmidt</strong>, it throws a ValueError saying <code>Columns must be same length as key.</code> because I didn't accommodate someone with 5 names. I tried doing<br />
<code>name_parts = df['name'].str.split(expand=True)</code></p>
<p><code>df['first_name'] = name_parts[0]</code></p>
<p><code>df['last_name'] = name_parts.iloc[:,1]</code></p>
<p><code>df['middle_name'] = name_parts.iloc[:,1:-1].apply(lambda row: " ".join(row.dropna()), axis=1)</code></p>
<p>When I do this, I only seem to get the last name. The desired output should look like this:</p>
<pre><code>first_name middle_name last_name
John Smith
Jack A Doe
Jane Marie Jones Smith
Josh Jacob Jingle Hiemer Schmidt
</code></pre>
<p>Any help would be appreciated. Thank you in advance.</p>
|
<python><python-3.x><pandas><dataframe>
|
2024-01-05 21:28:53
| 4
| 473
|
Coding_Nubie
|
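For the name-splitting question above, taking the first and last tokens and joining whatever sits in between sidesteps the variable-column-count problem entirely; no fixed number of columns ever has to be declared. Column names follow the desired output:

```python
import pandas as pd

df = pd.DataFrame({'name': ['John Smith', 'Jack A Doe',
                            'Jane Marie Jones Smith',
                            'Josh Jacob Jingle Hiemer Schmidt']})

parts = df['name'].str.split()
df['first_name'] = parts.str[0]
df['last_name'] = parts.str[-1]
# Everything between the first and last token becomes the middle name(s).
df['middle_name'] = parts.str[1:-1].str.join(' ')
```

The `.str` accessor slices the per-row token lists, so two-token names yield an empty middle name and five-token names fold three tokens into it.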
77,767,278
| 15,412,256
|
Complex One-Hot Encoding Dummies Generation From Lists
|
<p>I have a pandas DataFrame like this:</p>
<pre class="lang-py prettyprint-override"><code> username Category
0 user1 ["stackoverflow", "cross_validted"]
1 user2 []
2 user3 ["stackoverflow"]
</code></pre>
<p>I want to generate the binary dummy columns for each unique user and each unique Category:</p>
<pre class="lang-py prettyprint-override"><code> username stackoverflow cross_validted
0 user1 1 1
1 user2 NaN NaN
2 user3 1 0
</code></pre>
<p>I have tried the pandas explode method to extract all the possible elements from the lists but I was stuck with the duplicated usernames.</p>
|
<python><pandas><encoding><aggregation-framework>
|
2024-01-05 21:22:24
| 0
| 649
|
Kevin Li
|
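A sketch for the dummies question above (keeping the question's spelling of `cross_validted`): explode, one-hot encode, collapse back to one row per username, then blank out users whose list was empty, matching the NaN row in the desired output:

```python
import pandas as pd

df = pd.DataFrame({
    'username': ['user1', 'user2', 'user3'],
    'Category': [['stackoverflow', 'cross_validted'], [], ['stackoverflow']],
})

exploded = df.explode('Category')
dummies = pd.get_dummies(exploded['Category'])

# Collapse the exploded rows back to one row per username.
out = dummies.groupby(exploded['username'].values).max().astype(float)
out.index.name = 'username'

# Users with an empty Category list have no dummy set at all -> NaN row.
out[out.sum(axis=1) == 0] = float('nan')
```

`explode` leaves a single NaN row for the empty list, which `get_dummies` encodes as all zeros; the final masking turns that all-zero row into NaN so empty-list users stay distinguishable from users who merely lack a given category.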
77,767,224
| 12,553,730
|
Do I use the `--hyp` for training YOLO-v5 model with Albumentations?
|
<p>I have integrated various <em>Albumentations</em> into my YOLO-v5 model by modifying the <code>augmentations.py</code> file. In the <em>Albumentations</em> class, I have included a list of transformations.</p>
<p>Here is my code in the Albumentations class (<code>/utils/augmentations.py</code>):</p>
<pre><code>class Albumentations:
    # YOLOv5 Albumentations class (optional, only used if package is installed)
    def __init__(self, size=640):
        self.transform = None
        prefix = colorstr('albumentations: ')
        try:
            import albumentations as A
            check_version(A.__version__, '1.0.3', hard=True)  # version requirement

            T = [
                A.RandomResizedCrop(height=size, width=size, scale=(0.8, 1.0), ratio=(0.9, 1.11), p=0.1),
                A.Blur(p=0.1),
                A.MedianBlur(p=0.1),
                A.ToGray(p=0.1),
                A.CLAHE(p=0.1),
                A.RandomBrightnessContrast(p=0.1),
                A.RandomGamma(p=0.1),
                A.ImageCompression(quality_lower=75, p=0.1),
                A.HueSaturationValue(hue_shift_limit=25, sat_shift_limit=40, val_shift_limit=0, p=0.1),
                A.ColorJitter(p=0.1), A.Defocus(p=0.1), A.Downscale(p=0.1), A.Emboss(p=0.1),
                A.FancyPCA(p=0.1), A.GaussNoise(p=0.1), A.HueSaturationValue(p=0.1), A.ToRGB(p=0.1),
                A.ISONoise(p=0.1), A.ImageCompression(p=0.1), A.MultiplicativeNoise(p=0.1),
                A.Posterize(p=0.1), A.RGBShift(p=0.1), A.RandomBrightnessContrast(p=0.1), A.CLAHE(p=0.1),
                A.RandomGamma(p=0.1), A.RingingOvershoot(p=0.1), A.Sharpen(p=0.1), A.UnsharpMask(p=0.1)
            ]  # transforms
            self.transform = A.Compose(T, bbox_params=A.BboxParams(format='yolo', label_fields=['class_labels']))
            LOGGER.info(prefix + ', '.join(f'{x}'.replace('always_apply=False, ', '') for x in T if x.p))
        except ImportError:  # package not installed, skip
            pass
        except Exception as e:
            LOGGER.info(f'{prefix}{e}')
</code></pre>
<p>When training a YOLO model with these Albumentations, do I need to include the <code>--hyp</code> option, or can I train without it while still incorporating the Albumentations into the training process?</p>
<ul>
<li><p><code>python train.py --img 512 --batch 16 --epochs 1000 --data consider.yaml --weights yolov5s.pt --hyp hyp.scratch-med.yaml --cache --cuda</code></p>
</li>
<li><p><code>python train.py --img 512 --batch 16 --epochs 1000 --data consider.yaml --weights yolov5s.pt --cache --cuda</code></p>
</li>
</ul>
|
<python><python-3.x><yolo><yolov5>
|
2024-01-05 21:07:19
| 1
| 309
|
nikhil int
|
77,767,157
| 4,343,563
|
Python Create new row for each value within lists in two columns?
|
<p>I am trying to compare the values of lists within two columns, so I would like to create a separate row for each list index in those columns, where the row name contains the index of the list. My dataframe looks like:</p>
<pre><code> File Color File Value True Value
1 blue ['123', 'abc', 'tvr'] ['123', 'abc', 'tvr']
2 green ['jlak', 'abc', 'vds'] ['123', 'ffs', 'tvr']
3 red ['vssf', 'blue', '15a'] ['15a', 'asd', '15a', '234']
4 black 'fvg' 'fvg'
</code></pre>
<p>I want the data frame to look like:</p>
<pre><code> File Color File Value True Value
1 blue_0 '123' '123'
1 blue_1 'abc' 'abc'
1 blue_2 'tvr' 'tvr'
2 green_0 'jlak' '123'
2 green_1 'abc' 'ffs'
2 green_2 'vds' 'tvr'
3 red_0 'vssf' '15a'
3 red_1 'blue' 'asd'
3 red_2 '15a' '15a'
3 red_3 None '234'
4 black 'fvg' 'fvg'
</code></pre>
<p>I have tried using the solution from here: <a href="https://stackoverflow.com/questions/59640789/how-to-create-a-new-row-for-each-comma-separated-value-in-a-column-in-pandas">How to create a new row for each comma separated value in a column in pandas</a> but I keep getting nulls and don't know how to apply it when I have some rows that aren't lists.</p>
<pre><code>(data.assign(category = data['File Value'].explode('File Value').reset_index(drop=True)))
</code></pre>
<p>I have tried subsetting the dataframe to get only the rows that have lists:</p>
<pre><code>sub = data[data.Color != 'black']
(sub.assign(category = sub['File Value'].explode('File Value').reset_index(drop=True)))
</code></pre>
<p>But I again get a column of all nulls</p>
<p><strong>UPDATE:</strong></p>
<p>I am using explode to separate the column values into separate rows, but don't know how to modify the 'File' and 'Color' column to keep track of values</p>
<pre><code>sub = data[data.Color != 'black']
sub['File Value'].explode()
</code></pre>
|
<python><pandas>
|
2024-01-05 20:49:55
| 2
| 700
|
mjoy
|
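A sketch for the row-expansion question above: normalise every cell to a list, pad the shorter list with `None`, explode both columns together (pandas 1.3+ supports multi-column `explode` when the per-row lists have equal length), then suffix `Color` with the position, but only for rows that were actually expanded:

```python
import pandas as pd

data = pd.DataFrame({
    'File': [1, 2, 3, 4],
    'Color': ['blue', 'green', 'red', 'black'],
    'File Value': [['123', 'abc', 'tvr'], ['jlak', 'abc', 'vds'],
                   ['vssf', 'blue', '15a'], 'fvg'],
    'True Value': [['123', 'abc', 'tvr'], ['123', 'ffs', 'tvr'],
                   ['15a', 'asd', '15a', '234'], 'fvg'],
})

def as_list(v):
    return v if isinstance(v, list) else [v]

def pad(row):
    fv, tv = as_list(row['File Value']), as_list(row['True Value'])
    n = max(len(fv), len(tv))
    return pd.Series({'File Value': fv + [None] * (n - len(fv)),
                      'True Value': tv + [None] * (n - len(tv))})

data[['File Value', 'True Value']] = data.apply(pad, axis=1)
out = data.explode(['File Value', 'True Value'], ignore_index=True)

# Suffix Color with the list position, but only where a row was expanded.
pos = out.groupby('File').cumcount()
expanded = out.groupby('File')['File'].transform('size') > 1
out.loc[expanded, 'Color'] = out['Color'] + '_' + pos.astype(str)
```

The padding is what makes the mismatched-length row (`red`) produce the `None` entry in `File Value` from the desired output, and the grouped `cumcount` numbers the suffixes per original row.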
77,767,005
| 6,395,618
|
Tensorflow Out Of Range while saving weights
|
<p>I am getting an <code>OutOfRangeError</code> when I am trying to save weights, and I don't know why. The model seems to be <strong>674.82 MB</strong> and I have <strong>1 TB</strong> of hard-disk space on my instance, so I can't figure out why I am getting this error. Below are my model and the code to save weights. Let me know if you need anything else to debug the error. The model saves weights fine if I reduce the hidden layer size from 512 to 128. Does anyone know what could be causing this error?</p>
<p><strong>Model Architecture:</strong></p>
<pre><code>Model: "trxster_prob_512H"
__________________________________________________________________________________________________
Layer (type) Output Shape Param # Connected to
==================================================================================================
en_input_layer (InputLayer [(None, 12)] 0 []
)
en_pos_embed_layer (Positi (None, 12, 512) 159744 ['en_input_layer[0][0]']
onalEncodingLayer)
en_sub_layer1 (EncoderSubL (None, 12, 512) 1050316 ['en_pos_embed_layer[0][0]',
ayer) 8 'en_pos_embed_layer[0][0]']
en_drop_layer1 (Dropout) (None, 12, 512) 0 ['en_sub_layer1[0][0]']
en_sub_layer2 (EncoderSubL (None, 12, 512) 1050316 ['en_drop_layer1[0][0]',
ayer) 8 'en_drop_layer1[0][0]']
en_drop_layer2 (Dropout) (None, 12, 512) 0 ['en_sub_layer2[0][0]']
en_sub_layer3 (EncoderSubL (None, 12, 512) 1050316 ['en_drop_layer2[0][0]',
ayer) 8 'en_drop_layer2[0][0]']
en_drop_layer3 (Dropout) (None, 12, 512) 0 ['en_sub_layer3[0][0]']
en_sub_layer4 (EncoderSubL (None, 12, 512) 1050316 ['en_drop_layer3[0][0]',
ayer) 8 'en_drop_layer3[0][0]']
en_drop_layer4 (Dropout) (None, 12, 512) 0 ['en_sub_layer4[0][0]']
en_sub_layer5 (EncoderSubL (None, 12, 512) 1050316 ['en_drop_layer4[0][0]',
ayer) 8 'en_drop_layer4[0][0]']
en_drop_layer5 (Dropout) (None, 12, 512) 0 ['en_sub_layer5[0][0]']
de_input_layer (InputLayer [(None, 4)] 0 []
)
en_sub_layer6 (EncoderSubL (None, 12, 512) 1050316 ['en_drop_layer5[0][0]',
ayer) 8 'en_drop_layer5[0][0]']
de_pos_embed_layer (Positi (None, 4, 512) 159744 ['de_input_layer[0][0]']
onalEncodingLayer)
en_drop_layer6 (Dropout) (None, 12, 512) 0 ['en_sub_layer6[0][0]']
de_sub_layer1 (DecoderSubl (None, 4, 512) 1890560 ['de_pos_embed_layer[0][0]',
ayer) 0 'en_drop_layer6[0][0]']
de_drop_layer1 (Dropout) (None, 4, 512) 0 ['de_sub_layer1[0][0]']
de_sub_layer2 (DecoderSubl (None, 4, 512) 1890560 ['de_drop_layer1[0][0]',
ayer) 0 'en_drop_layer6[0][0]']
de_drop_layer2 (Dropout) (None, 4, 512) 0 ['de_sub_layer2[0][0]']
de_sub_layer3 (DecoderSubl (None, 4, 512) 1890560 ['de_drop_layer2[0][0]',
ayer) 0 'en_drop_layer6[0][0]']
de_drop_layer3 (Dropout) (None, 4, 512) 0 ['de_sub_layer3[0][0]']
de_sub_layer4 (DecoderSubl (None, 4, 512) 1890560 ['de_drop_layer3[0][0]',
ayer) 0 'en_drop_layer6[0][0]']
de_drop_layer4 (Dropout) (None, 4, 512) 0 ['de_sub_layer4[0][0]']
de_sub_layer5 (DecoderSubl (None, 4, 512) 1890560 ['de_drop_layer4[0][0]',
ayer) 0 'en_drop_layer6[0][0]']
de_drop_layer5 (Dropout) (None, 4, 512) 0 ['de_sub_layer5[0][0]']
de_sub_layer6 (DecoderSubl (None, 4, 512) 1890560 ['de_drop_layer5[0][0]',
ayer) 0 'en_drop_layer6[0][0]']
de_drop_layer6 (Dropout) (None, 4, 512) 0 ['de_sub_layer6[0][0]']
de_output_layer (TimeDistr (None, 4, 250) 128250 ['de_drop_layer6[0][0]']
ibuted)
==================================================================================================
Total params: 176900346 (674.82 MB)
Trainable params: 176900346 (674.82 MB)
Non-trainable params: 0 (0.00 Byte)
__________________________________________________________________________________________________
</code></pre>
<p><strong>Training and saving weights:</strong></p>
<pre><code>EPOCHS = 1
stop_early = tf.keras.callbacks.EarlyStopping(monitor='val_loss', patience=10, restore_best_weights=True)

trxster.fit(train_ds.take(2),
            epochs=EPOCHS,
            validation_data=val_ds.take(1),
            callbacks=[stop_early])

trxster.save_weights('./saved_models/weights/trxster_sm_prob/trxster_wts')
</code></pre>
<p><strong>Error Trace:</strong></p>
<pre><code>--------------------------------------------------------------------------- OutOfRangeError Traceback (most recent call last) File <command-1080604003390190>, line 2
1 # trxster.save('./saved_models/trxster_transformer_model.h5')
----> 2 trxster.save_weights('./saved_models/weights/trxster_sm_prob/trxster_wts')
File /databricks/python/lib/python3.10/site-packages/keras/src/utils/traceback_utils.py:70, in filter_traceback.<locals>.error_handler(*args, **kwargs)
67 filtered_tb = _process_traceback_frames(e.__traceback__)
68 # To get the full stack trace, call:
69 # `tf.debugging.disable_traceback_filtering()`
---> 70 raise e.with_traceback(filtered_tb) from None
71 finally:
72 del filtered_tb
File /databricks/python/lib/python3.10/site-packages/tensorflow/python/eager/execute.py:60, in quick_execute(op_name, num_outputs, inputs, attrs, ctx, name)
53 # Convert any objects of type core_types.Tensor to Tensor.
54 inputs = [
55 tensor_conversion_registry.convert(t)
56 if isinstance(t, core_types.Tensor)
57 else t
58 for t in inputs
59 ]
---> 60 tensors = pywrap_tfe.TFE_Py_Execute(ctx._handle, device_name, op_name,
61 inputs, attrs, num_outputs)
62 except core._NotOkStatusException as e:
63 if name is not None:
OutOfRangeError: {{function_node
__wrapped__SaveV2_dtypes_771_device_/job:localhost/replica:0/task:0/device:CPU:0}} saved_models/weights/trxster_sm_prob/trxster_wts_temp/part-00000-of-00001.data-00000-of-00001.tempstate10989366996057447488; File too large [Op:SaveV2]
</code></pre>
|
<python><tensorflow><keras><deep-learning>
|
2024-01-05 20:18:10
| 0
| 2,606
|
Krishnang K Dalal
|
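On the `OutOfRangeError` above: `File too large [Op:SaveV2]` comes from the filesystem rather than from a lack of disk space. The `/databricks/...` paths in the trace suggest a Databricks environment, and FUSE-mounted paths there (workspace files, `/dbfs`) have historically rejected large single-file writes. A hedged workaround sketch: checkpoint to node-local disk first, then copy the produced files over. The helper is generic and the paths are illustrative, not from the question.

```python
import os
import shutil
import tempfile

def save_via_local(save_fn, prefix, target_dir):
    """Call `save_fn(local_prefix)` against fast node-local storage, then
    copy every file it produced into `target_dir`."""
    tmp = tempfile.mkdtemp()
    save_fn(os.path.join(tmp, prefix))
    os.makedirs(target_dir, exist_ok=True)
    copied = []
    for fname in sorted(os.listdir(tmp)):
        shutil.copy(os.path.join(tmp, fname), target_dir)
        copied.append(fname)
    return copied
```

Here it would be invoked as `save_via_local(trxster.save_weights, 'trxster_wts', './saved_models/weights/trxster_sm_prob')`, keeping the multi-gigabyte checkpoint write on local disk where no single-file size cap applies.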
77,766,938
| 836,941
|
is it possible to build a tree from expanded Airflow DAG tasks? (dynamic task mapping over dynamic task mapping output)
|
<p>I want to generate dynamic tasks from the dynamic task output. Each mapped task returns a list, and I'd like to create a separate mapped task for each element of the list, so the process will look like this:
<a href="https://i.sstatic.net/qApiz.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/qApiz.png" alt="Airflow dynamic task tree" /></a>
Is it possible to expand on the output of the dynamically mapped task so it will result in a sequence of map operations instead of a map and then reduce?</p>
<h1>What I tried:</h1>
<p>In my local environment, I'm using:</p>
<pre><code>Astronomer Runtime 9.6.0 based on Airflow 2.7.3+astro.2
Git Version: .release:9fad9363bb0e7520a991b5efe2c192bb3405b675
</code></pre>
<p>For the sake of the experiment, I'm using three tasks with a single string as an input and a string list as an output.</p>
<h2>1. Expand over a group with expanded task (map over a group with mapped tasks):</h2>
<pre><code>import datetime
import logging

from airflow.decorators import dag, task, task_group


@dag(schedule_interval=None, start_date=datetime.datetime(2023, 9, 27))
def try_dag3():
    @task
    def first() -> list[str]:
        return ["0", "1"]

    first_task = first()

    @task_group
    def my_group(input: str) -> list[str]:
        @task
        def second(input: str) -> list[str]:
            logging.info(f"input: {input}")
            result = []
            for i in range(3):
                result.append(f"{input}_{i}")
            # ['0_0', '0_1', '0_2']
            # ['1_0', '1_1', '1_2']
            return result

        second_task = second.expand(input=first_task)

        @task
        def third(input: str, input1: str = None):
            logging.info(f"input: {input}, input1: {input1}")
            return input

        third_task = third.expand(input=second_task)

    my_group.expand(input=first_task)


try_dag3()
</code></pre>
<p>but it causes <code>NotImplementedError: operator expansion in an expanded task group is not yet supported</code></p>
<h2>2. expand over the expanded task result (map over a mapped tasks):</h2>
<pre><code>import datetime
import logging

from airflow.decorators import dag, task


@dag(start_date=datetime.datetime(2023, 9, 27))
def try_dag1():
    @task
    def first() -> list[str]:
        return ["0", "1"]

    first_task = first()

    @task
    def second(input: str) -> list[str]:
        logging.info(f"source: {input}")
        result = []
        for i in range(3):
            result.append(f"{input}_{i}")
        # ['0_0', '0_1', '0_2']
        # ['1_0', '1_1', '1_2']
        return result

    # this expands fine into two tasks from the list returned by first_task
    second_task = second.expand(input=first_task)

    @task
    def third(input: str):
        logging.info(f"source: {input}")
        return input

    # this doesn't expand - there are two mapped tasks, and the input value is a list, not a string
    third_task = third.expand(input=second_task)


try_dag1()
</code></pre>
<p>but the result of the <code>second</code> task is not expanded, and the <code>third</code> task input is a string list instead:
<a href="https://i.sstatic.net/pOY1w.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/pOY1w.png" alt="dag1 graph" /></a>
<code>third[0]</code> task log:
<code>[2024-01-05, 11:40:30 UTC] {try_dag1.py:30} INFO - source: ['0_0', '0_1', '0_2']</code></p>
<h2>3. Expand over the expanded task with const input (to test if the structure is possible):</h2>
<pre><code>import datetime
import logging

from airflow.decorators import dag, task


@dag(start_date=datetime.datetime(2023, 9, 27))
def try_dag0():
    @task
    def first() -> list[str]:
        return ["0", "1"]

    first_task = first()

    @task
    def second(input: str) -> list[str]:
        logging.info(f"input: {input}")
        result = []
        for i in range(3):
            result.append(f"{input}_{i}")
        # ['0_0', '0_1', '0_2']
        # ['1_0', '1_1', '1_2']
        return result

    second_task = second.expand(input=first_task)

    @task
    def third(input: str, input1: str = None):
        logging.info(f"input: {input}, input1: {input1}")
        return input

    third_task = third.expand(input=second_task, input1=["a", "b", "c"])


try_dag0()
</code></pre>
<p>It looks like the mapped tasks can be expanded over a constant list passed to <code>input1</code>, but the <code>input</code> value is a non-expanded list:
<a href="https://i.sstatic.net/IQ06I.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/IQ06I.png" alt="dag0 graph" /></a>
<code>third[0]</code> task log:
<code>[2024-01-05, 12:51:39 UTC] {try_dag0.py:33} INFO - input: ['0_0', '0_1', '0_2'], input1: a</code></p>
|
<python><airflow><airflow-2.x>
|
2024-01-05 20:01:15
| 1
| 8,862
|
zacheusz
|
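On the mapped-over-mapped question above: expanding directly over a mapped task hands each downstream instance one whole returned list (exactly what `try_dag1` showed), and expansion inside an expanded task group raises the `NotImplementedError` from `try_dag3`. As of the Airflow 2.7 line cited in the question, a true per-branch tree is not supported; the usual workaround is an explicit flatten/reduce task between the two maps, so the second `expand` sees one flat list. The flattening itself is plain Python:

```python
def flatten(nested):
    """Collapse the list-of-lists produced by a mapped task into one flat
    list, so a downstream `.expand(input=...)` maps once per element."""
    return [item for sub in nested for item in sub]

# The XCom values second.expand(...) would return in the question:
print(flatten([["0_0", "0_1", "0_2"], ["1_0", "1_1", "1_2"]]))
# ['0_0', '0_1', '0_2', '1_0', '1_1', '1_2']
```

Wrapped as an `@task`, it sits between the maps, e.g. `third.expand(input=flatten_task(second.expand(input=first_task)))`, giving a map-reduce-map chain of six `third` instances rather than a tree.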
77,766,843
| 2,142,728
|
Coroutines started with asyncio.create_task not running when followed by time.sleep - Why?
|
<p>I'm facing an issue with coroutines in Python's asyncio module. I've noticed that coroutines started with asyncio.create_task seem to not run as expected when I introduce a time.sleep immediately after starting them. I'm puzzled by this behavior and would appreciate some insights into why this might be happening.</p>
<p>Here's a simplified example of the code structure:</p>
<pre class="lang-py prettyprint-override"><code>import asyncio
import time


async def my_coroutine():
    # ... coroutine logic ...
    ...


# Some function (which I NEED to be non-async)
def main():
    loop = asyncio.get_event_loop()
    task = loop.create_task(my_coroutine())

    # Introducing time.sleep seems to affect coroutine execution
    while not_ready():
        time.sleep(0.1)  # If this line is removed, the coroutine runs as expected


main()
</code></pre>
<p>Can an event loop run on multiple threads? if so, how is it possible?</p>
|
<python><python-asyncio>
|
2024-01-05 19:40:20
| 1
| 3,774
|
caeus
|
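For the asyncio question above: an event loop runs in a single thread and only advances coroutines between `await` points, so `time.sleep` blocks that thread and the task created by `create_task` never gets a chance to run. If the blocking wait must stay synchronous, handing it to a worker thread keeps the loop free. A sketch (the coroutine body is illustrative):

```python
import asyncio
import time

results = []

async def my_coroutine():
    results.append("coroutine ran")

def blocking_wait():
    time.sleep(0.1)  # safe here: runs in a worker thread, not the loop thread

async def main():
    task = asyncio.create_task(my_coroutine())
    # Hand the blocking call to the default executor; while that worker
    # thread sleeps, the event loop is free to schedule my_coroutine.
    await asyncio.get_running_loop().run_in_executor(None, blocking_wait)
    await task

asyncio.run(main())
```

As for the closing question: a given event loop always runs in exactly one thread; other threads can only hand it work, e.g. via `asyncio.run_coroutine_threadsafe`.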
77,766,721
| 8,040,369
|
DataFrame: Convert the columns in a df to list of row in Python dataframe
|
<p>I have a pandas dataframe like below</p>
<pre><code>id name value1 value2 value3 Type
=======================================================
1 AAA 1.0 1.5 1.8 NEW
2 BBB 2.0 2.3 2.5 NEW
3 CCC 3.0 3.6 3.7 NEW
</code></pre>
<p>I have to convert the above df into something like below so that I can join it to another df based on name (which will be unique in my case):</p>
<pre><code>Type AAA BBB CCC
================================================================
NEW [1.0, 1.5, 1.8] [2.0, 2.3, 2.5] [3.0, 3.6, 3.7]
</code></pre>
<p>Is there any way to achieve this instead of having too many looping statements.</p>
<p>Any help is much appreciated.</p>
<p>Thanks,</p>
|
<python><pandas><dataframe>
|
2024-01-05 19:11:10
| 4
| 787
|
SM079
|
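A loop-free sketch for the question directly above: pack the value columns into one list per row, then pivot `name` into columns keyed by `Type`:

```python
import pandas as pd

df = pd.DataFrame({'id': [1, 2, 3], 'name': ['AAA', 'BBB', 'CCC'],
                   'value1': [1.0, 2.0, 3.0], 'value2': [1.5, 2.3, 3.6],
                   'value3': [1.8, 2.5, 3.7], 'Type': ['NEW'] * 3})

# One list per row, then rotate name -> columns with Type as the key.
df['values'] = df[['value1', 'value2', 'value3']].values.tolist()
out = df.pivot(index='Type', columns='name', values='values').reset_index()
out.columns.name = None
```

`pivot` requires the `Type`/`name` pairs to be unique, which holds here since `name` is unique.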
77,766,597
| 8,040,369
|
DataFrame: Transform a DF rows into a key with list of value pairs in Python
|
<p>I have a pandas dataframe like below</p>
<pre><code>id name value1 value2 value3
=======================================================
1 AAA 1.0 1.5 1.8
2 BBB 2.0 2.3 2.5
3 CCC 3.0 3.6 3.7
</code></pre>
<p>I have to convert the above df into something like below so that I can join it to another df based on name (which will be unique in my case):</p>
<pre><code>name value
=========================
AAA [1.0, 1.5, 1.8]
BBB [2.0, 2.3, 2.5]
CCC [3.0, 3.6, 3.7]
</code></pre>
<p>Is there any way to achieve this instead of having too many looping statements.</p>
<p>Any help is much appreciated.</p>
<p>Thanks,</p>
|
<python><pandas>
|
2024-01-05 18:41:51
| 1
| 787
|
SM079
|
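For this variant no pivot is needed; collecting the value columns row-wise gives the key/list pairs directly, again without loops:

```python
import pandas as pd

df = pd.DataFrame({'id': [1, 2, 3], 'name': ['AAA', 'BBB', 'CCC'],
                   'value1': [1.0, 2.0, 3.0], 'value2': [1.5, 2.3, 3.6],
                   'value3': [1.8, 2.5, 3.7]})

# .values.tolist() turns the three value columns into one list per row.
out = pd.DataFrame({
    'name': df['name'],
    'value': df[['value1', 'value2', 'value3']].values.tolist(),
})
```

The resulting frame can be merged onto the other df with `other.merge(out, on='name')`.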
77,766,570
| 696,206
|
What is the proper Pythonic Approach to Collection.forEach?
|
<p>I inevitably end up writing a lot of functions that look something along the lines of</p>
<pre class="lang-py prettyprint-override"><code>def forEach(function: typing.Callable, collection: typing.Iterable) -> None:
    for item in collection:
        function(item)
</code></pre>
<p>Just about every language I've worked in has had some form of <code>forEach</code> function, whether it's <a href="https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Array/forEach" rel="nofollow noreferrer">Array.forEach</a> in Javascript, <a href="https://docs.oracle.com/javase/8/docs/api/java/lang/Iterable.html#forEach-java.util.function.Consumer-" rel="nofollow noreferrer">Iterable.forEach</a> in Java, or <a href="https://learn.microsoft.com/en-us/dotnet/api/system.collections.generic.list-1.foreach?view=net-8.0" rel="nofollow noreferrer">List.forEach</a> in C#. Python lists don't have that. I've talked with some people and we've hashed out stuff like <code>any(function(item) for item in collection)</code> or <code>[function(item) for item in collection]</code>, but stuff that looks like it should work, like <code>map(function, collection)</code>, doesn't do anything until iterated upon.</p>
<p>Now, I'm sure there's 18 billion pip packages that may be used, but is there a more elegant/pythonic way to do this in vanilla? My example above does the trick, but it still seems odd that I have to either write a function like this or write out the loop every time this would get called.</p>
|
<python><list>
|
2024-01-05 18:36:05
| 1
| 715
|
Tubbs
|
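On the forEach question above: the consensus idiom in Python is simply the `for` statement itself, which is arguably why the stdlib never grew a `list.for_each`. When an eagerly-consumed `map` is genuinely wanted, the itertools-recipes trick of draining the iterator into a zero-length `deque` works:

```python
from collections import deque

items = [1, 2, 3]
seen = []

for item in items:      # the idiomatic "forEach"
    seen.append(item)

# Eagerly consume a lazy map() without building a result list.
deque(map(seen.append, items), maxlen=0)
```

`deque(..., maxlen=0)` iterates the map to exhaustion while discarding every element, so the side effects run without allocating the throwaway list a comprehension would build.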
77,766,567
| 2,437,443
|
SQLAlchemy / Python - SSL errors: "decryption failed or bad record mac" | "EOF detected"
|
<p>I have a Flask Python application where I'm using the multiprocessing library to execute a function in parallel with the main thread. The purpose of the function is to upload files to OneDrive. The function also needs to read and write from my DB.</p>
<p>My app uses a PostgreSQL database and SQLAlchemy. In order to access the DB from the temp threads, I'm creating a new SQLAlchemy engine every time the function is executed. I'm getting a couple of different errors when these temp threads are executing their function. The errors are very intermittent.</p>
<h5>This error occurs most often (full stack trace further down), and doesn't actually seem to cause any side effects in my app. Code continues executing:</h5>
<pre><code>psycopg2.OperationalError: SSL error: decryption failed or bad record mac
</code></pre>
<h5>This error occurs less often, but it causes a screen crash and prevents some of the function execution:</h5>
<pre><code>sqlalchemy.exc.OperationalError: (psycopg2.OperationalError) SSL SYSCALL error: EOF detected
</code></pre>
<p>I've tried to research these errors, but many of the posts I've found have different enough setups that I'm not sure if they apply in my case. I've tried some of the solutions suggested in those posts but they are not preventing the errors. I'm not very experienced with SQLAlchemy so I'm flying blind here.</p>
<h5>Here is my code that spawns the temp threads:</h5>
<pre class="lang-py prettyprint-override"><code> # Initialize the class that contains my upload function
vmb_client = vmb_controller()
# Upload files asynchronously
from multiprocessing import Process
p = Process(target=vmb_client.upload_file, args=(<arguments>))
p.start()
</code></pre>
<h5>Here is a simplified version of my upload function:</h5>
<pre class="lang-py prettyprint-override"><code>def upload_file(self, corp_index, filename):
    #
    # Set up new DB connection
    #
    from sqlalchemy import create_engine
    from sqlalchemy.orm import sessionmaker

    engine = create_engine(db_uri)
    Session = sessionmaker(bind=engine)
    sess = Session()

    #
    # This does the API call to upload the file. It doesn't involve any DB interaction
    #
    results = self.call_upload_file(<arguments>)

    #
    # Track the uploaded file in the DB
    #
    command = f"""
        -- Insert new item in DB
        INSERT INTO corporate.vmb_items (corporation_key, onedrive_item_id, parent_id, type, name, created_datetime,
                                         modified_datetime, path, web_url, child_count, size)
        VALUES ({corp_index}, '{res.get('id')}', '{res.get('parent_id')}', 'file', '{res.get('name').replace("'", "''")}',
                '{res.get('created_datetime')}', '{res.get('modified_datetime')}', '{res.get('path').replace("'", "''")}',
                '{res.get('web_url').replace("'", "''")}', {res.get('child_count')}, '{res.get('size')}');"""
    sess.execute(command)

    #
    # Update parent folder
    #
    command = f"""-- Update child count of parent folder
        UPDATE corporate.vmb_items AS i SET child_count = (SELECT COUNT(vmb_item_key) FROM corporate.vmb_items AS sub WHERE sub.parent_id = '{res.get('parent_id')}')
        WHERE i.onedrive_item_id = '{res.get('parent_id')}';"""
    sess.execute(command)

    #
    # Cleanup
    #
    sess.commit()
    sess.close()
    engine.dispose()
    return results
</code></pre>
<p>Based on other posts I saw related to the errors I'm getting, I tried a couple different parameters in my <code>create_engine</code> call, but those didn't make any difference:</p>
<pre class="lang-py prettyprint-override"><code>from sqlalchemy.pool import NullPool
corporate_engine = create_engine(db_uri, pool_pre_ping=True, poolclass=NullPool)
</code></pre>
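<p>For reference, a common pattern for this situation is to create the engine <em>inside</em> the child process (so no pooled connections are inherited from the parent) and to use bound parameters instead of f-strings. This is a minimal sketch, not the exact schema: the table and column names below are placeholders.</p>

```python
from sqlalchemy import create_engine, text
from sqlalchemy.pool import NullPool


def upload_file_sketch(db_uri, corp_index, item_id):
    # Engine is created inside the worker, so no pooled connections
    # are shared with (or inherited from) the parent process.
    engine = create_engine(db_uri, poolclass=NullPool)
    with engine.begin() as conn:  # commits on success, rolls back on error
        conn.execute(
            text(
                "INSERT INTO vmb_items (corporation_key, onedrive_item_id) "
                "VALUES (:corp, :item)"
            ),  # bound parameters avoid manual quoting/escaping bugs
            {"corp": corp_index, "item": item_id},
        )
    engine.dispose()
```

<p>In the real code this function would be the <code>target</code> of the <code>Process</code>; each worker then owns its connections outright.</p>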
|
<python><flask><sqlalchemy><python-multiprocessing><psycopg2>
|
2024-01-05 18:35:12
| 1
| 2,297
|
user2437443
|
77,766,301
| 1,441,053
|
Polars DataFrame enum and categorical type memory usage
|
<p>When I call <code>df.estimated_size()</code> on a <code>Categorical</code> or <code>Enum</code> column, it appears to have the same memory footprint as an <code>int32</code> of the same length, even though there are only four members in the category.</p>
<p>Is this memory size accurate? <code>shrink_to_fit()</code> doesn't improve the memory size.</p>
<ol>
<li>Is there a way to reduce memory footprint of categoricals with small number of members?</li>
<li>aside: is there a better way to convert int Ids to categoricals instead?</li>
<li>casting to str first?</li>
</ol>
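<p>The size is expected if, as I assume, the column is dictionary-encoded: one physical <code>u32</code> code per row plus a small dictionary of the member strings, so the per-row cost does not shrink with the number of members. A back-of-the-envelope model of that layout (my assumption, consistent with what <code>estimated_size()</code> reports):</p>

```python
# Model the memory of a dictionary-encoded (categorical-style) column:
# one u32 code per row + the dictionary of unique member strings.
n_rows = 1_000_000
members = ["north", "south", "east", "west"]  # only 4 members

index_bytes = 4 * n_rows                   # u32 code stored per row
dict_bytes = sum(len(m) for m in members)  # string payload of the dictionary

total_bytes = index_bytes + dict_bytes
# The dictionary is negligible; the column costs ~4 bytes/row regardless
# of how few members there are.
```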
|
<python><python-polars>
|
2024-01-05 17:37:46
| 1
| 802
|
user1441053
|
77,766,233
| 2,381,348
|
python subprocess.run with awk failing
|
<p><strong>My function:</strong></p>
<pre><code>def run(cmd = ""):
sp = subprocess.run(shlex.split(cmd), shell = False, capture_output = True)
pprint(sp)
print(sp.stdout.decode("utf-8", 'ignore'))
</code></pre>
<p>This function was working normally. Then I suddenly came across <code>awk</code> and it gave me an error.</p>
<p>Command passed to the function:</p>
<pre><code>run(cmd = "ssh myserver \"tail -10 server.log | awk '{print $7}' | sort | uniq -c | sort -nr | head\"")
</code></pre>
<p>To simplify this further,</p>
<pre><code>run(cmd = "ssh myserver \"tail -10 server.log | awk '{print $7}'\"")
</code></pre>
<p>Mark the ssh above. All the commands passed to ssh need to run on the remote machine, not the local one; hence splitting the pipeline into multiple subprocesses is not going to help. The error thrown for both of the above commands is</p>
<pre><code>awk: line 2: missing } near end of file
</code></pre>
<p>Both the above commands work fine from the terminal. <br><br>
I suspected the pipe (|) or the single quotes (' ') might be the culprit, but the code below works fine.</p>
<pre><code>run(cmd = "ssh myserver \"tail -10 server.log | grep something\"")
run(cmd = "ssh myserver \"tail -10 server.log | grep 'something'\"")
</code></pre>
<p>Can anyone help me out with <code>awk</code> in the above?</p>
<p>The associated question mentioned <a href="https://stackoverflow.com/questions/9393425/python-how-to-execute-shell-commands-with-pipe-but-without-shell-true">here</a> is not related and won't help in my case: the moment <code>tail -10 server.log</code> changes to <code>tail -10000 server.log</code> or <code>cat server.log</code>, multiple subprocesses would download the whole content of <code>server.log</code> from the remote server, which might be in GBs, and then process it locally, while the output of the whole command</p>
<pre><code>ssh myserver cat server.log | awk '{print $7}' | sort | uniq -c | sort -nr | head
</code></pre>
<p>might be 15-20 lines.</p>
<p>Since the command is run over ssh, which, as mentioned in the comments, concatenates its arguments into one string, multiple subprocesses were intentionally not used.</p>
<p>I tried multiple snippets after suggestions from comment which doesn't work and throw same or similar error as mentioned above. Both the snippet in the question and suggestions work for normal scenarios and only fail for <code>awk</code> with the above mentioned error.</p>
<p>Try 1:</p>
<pre><code>cmd = "ssh myserver \"tail -10 server.log | awk '{print $7}'\""
sp = subprocess.run(shlex.split(cmd), shell = False, capture_output = True)
pprint(sp)
</code></pre>
<p>Try 2:</p>
<pre><code>cmd = ["ssh", "myserver", "tail -10 server.log | awk '{print $7}'"]
sp = subprocess.run(cmd, shell = False, capture_output = True)
pprint(sp)
</code></pre>
<p>Try 3:<br>
(Exact suggestion from the comment)</p>
<pre><code>cmd = ['ssh', 'myserver', '''tail -10 server.log | awk '{print $7}'''']
sp = subprocess.run(cmd, shell = False, capture_output = True)
pprint(sp)
</code></pre>
<p>Try 4:</p>
<pre><code>cmd = ["ssh", "myserver", """tail -10 server.log | awk '{print $7}'"""]
sp = subprocess.run(cmd, shell = False, capture_output = True)
pprint(sp)
</code></pre>
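<p>To see where the quoting actually ends up, it helps to inspect what <code>shlex.split</code> hands to <code>ssh</code> before any remote shell is involved. The single quotes around the <code>awk</code> program do survive the local split, which narrows the problem down to how the remote side re-parses the joined string (for example, a non-POSIX login shell on the server). A quick check:</p>

```python
import shlex

cmd = "ssh myserver \"tail -10 server.log | awk '{print $7}'\""
parts = shlex.split(cmd)
# The remote command arrives as ONE argument, single quotes intact;
# shlex (POSIX mode) keeps single quotes literal inside double quotes.
remote = parts[2]
```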
|
<python><awk><subprocess>
|
2024-01-05 17:23:19
| 0
| 3,551
|
RatDon
|
77,766,180
| 2,379,009
|
Adding scatter animations across frames in plotly
|
<p>I want to add scatter points over an animation I am doing with px.imshow:</p>
<pre><code># video is a [n_frames, height, width, 3] array
fig = px.imshow(video, animation_frame=0, binary_string=True, height=height)
for i in range(video.shape[0]):
str_plot = go.Scatter(x=[kp1[0, i], kp2[0, i]], y=[kp1[1, i], kp2[1, i]], mode='markers+lines',
marker=dict(color='red', size=5), xaxis='x', yaxis='y')
fig['frames'][i]['data'] = list(fig['frames'][i]['data']) + [str_plot]
</code></pre>
<p>This mostly works: I can see both the scatter points and the image when I move the slider, but not when the animation plays by itself. I assume there is a better way to do this. Please help.</p>
<p>Thanks!</p>
|
<python><animation><plotly><plotly-express>
|
2024-01-05 17:14:27
| 0
| 2,173
|
DankMasterDan
|
77,766,158
| 17,176,270
|
Cannot render a value to html from FastAPI JSONResponse with HTMX
|
<p>I'm making a proof of concept of a client for an API service running FastAPI. I need FastAPI to serve both the API and HTML from different endpoints.</p>
<p>My FastAPI code:</p>
<pre><code>from fastapi import FastAPI, Request
from fastapi.responses import HTMLResponse, JSONResponse
from fastapi.templating import Jinja2Templates
app = FastAPI()
templates = Jinja2Templates(directory="templates")
@app.get("/", response_class=HTMLResponse)
async def home(request: Request):
return templates.TemplateResponse("index.html", {"request": request})
@app.get("/api/v1", response_class=JSONResponse)
async def api_home():
data = {"key": "value"}
return data
</code></pre>
<p>In this code my client calls the API endpoint and gets a JSON response:</p>
<pre><code> <div class="container">
<h1 class="h2">API Client</h1>
<a hx-get="/api/v1" hx-target="#content" hx-swap="innerHTML" class="btn btn-primary">Fetch data</a>
<div id="content">{{ key | default("No message received") }}</div>
</div>
</code></pre>
<p>But as a result it renders:</p>
<pre><code>{"key": "value"}
</code></pre>
<p>I need to render only <code>value</code> in my template. How to do it?</p>
|
<python><jinja2><fastapi><htmx>
|
2024-01-05 17:10:06
| 1
| 780
|
Vitalii Mytenko
|
77,766,049
| 893,254
|
Clarification of the parameter "alpha"
|
<p>The Pandas <a href="https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.ewm.html" rel="nofollow noreferrer"><code>ewm</code></a> function (Exponentially Weighted Moving Average/Stddev/etc.) takes a parameter <code>alpha</code>.</p>
<p>If I understand the intent correctly, this parameter <code>alpha</code> is used to multiply previous values before adding the current value to calculate the value of the function.</p>
<p>For example, in the case of the mean: the next value of the mean is given by the previous value multiplied by alpha, plus the next data value:</p>
<pre><code>mean_next = mean_previous * alpha + next_data
</code></pre>
<p>Can anyone confirm?</p>
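<p>For reference, with <code>adjust=False</code> the recurrence pandas uses weights the <em>previous mean</em> by <code>1 - alpha</code> and the <em>new value</em> by <code>alpha</code> (with the default <code>adjust=True</code> the weights are renormalized instead of applying this recurrence directly). A quick check of the <code>adjust=False</code> form:</p>

```python
import pandas as pd

s = pd.Series([1.0, 2.0, 3.0])
alpha = 0.5

pandas_ewm = s.ewm(alpha=alpha, adjust=False).mean().tolist()

# Reproduce the adjust=False recurrence by hand:
# y_t = (1 - alpha) * y_{t-1} + alpha * x_t
y = s.iloc[0]
manual = [y]
for x in s.iloc[1:]:
    y = (1 - alpha) * y + alpha * x
    manual.append(y)
```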
|
<python><pandas>
|
2024-01-05 16:48:16
| 1
| 18,579
|
user2138149
|
77,766,048
| 1,473,517
|
Getting a very simple stablebaselines3 example to work
|
<p>I tried to model the simplest coin-flipping game, where you have to predict whether it is going to be a head. Sadly it won't run, giving me:</p>
<pre><code>Using cpu device
Traceback (most recent call last):
File "/home/user/python/simplegame.py", line 40, in <module>
model.learn(total_timesteps=10000)
File "/home/user/python/mypython3.10/lib/python3.10/site-packages/stable_baselines3/ppo/ppo.py", line 315, in learn
return super().learn(
File "/home/user/python/mypython3.10/lib/python3.10/site-packages/stable_baselines3/common/on_policy_algorithm.py", line 264, in learn
total_timesteps, callback = self._setup_learn(
File "/home/user/python/mypython3.10/lib/python3.10/site-packages/stable_baselines3/common/base_class.py", line 423, in _setup_learn
self._last_obs = self.env.reset() # type: ignore[assignment]
File "/home/user/python/mypython3.10/lib/python3.10/site-packages/stable_baselines3/common/vec_env/dummy_vec_env.py", line 77, in reset
obs, self.reset_infos[env_idx] = self.envs[env_idx].reset(seed=self._seeds[env_idx], **maybe_options)
TypeError: CoinFlipEnv.reset() got an unexpected keyword argument 'seed'
</code></pre>
<p>Here is the code:</p>
<pre><code>import gymnasium as gym
import numpy as np
from stable_baselines3 import PPO
from stable_baselines3.common.vec_env import DummyVecEnv
class CoinFlipEnv(gym.Env):
def __init__(self, heads_probability=0.8):
super(CoinFlipEnv, self).__init__()
self.action_space = gym.spaces.Discrete(2) # 0 for heads, 1 for tails
self.observation_space = gym.spaces.Discrete(2) # 0 for heads, 1 for tails
self.heads_probability = heads_probability
self.flip_result = None
def reset(self):
# Reset the environment
self.flip_result = None
return self._get_observation()
def step(self, action):
# Perform the action (0 for heads, 1 for tails)
self.flip_result = int(np.random.rand() < self.heads_probability)
# Compute the reward (1 for correct prediction, -1 for incorrect)
reward = 1 if self.flip_result == action else -1
# Return the observation, reward, done, and info
return self._get_observation(), reward, True, {}
def _get_observation(self):
# Return the current coin flip result
return self.flip_result
# Create the environment with heads probability of 0.8
env = DummyVecEnv([lambda: CoinFlipEnv(heads_probability=0.8)])
# Create the PPO model
model = PPO("MlpPolicy", env, verbose=1)
# Train the model
model.learn(total_timesteps=10000)
# Save the model
model.save("coin_flip_model")
# Evaluate the model
obs = env.reset()
for _ in range(10):
action, _states = model.predict(obs)
obs, rewards, dones, info = env.step(action)
print(f"Action: {action}, Observation: {obs}, Reward: {rewards}")
</code></pre>
<p>What am I doing wrong?</p>
<p>This is in version 2.2.1.</p>
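<p>For reference, the traceback points at the Gymnasium API: <code>reset()</code> must accept <code>seed</code> and <code>options</code> keyword arguments and return an <code>(observation, info)</code> tuple, and <code>step()</code> must return five values <code>(obs, reward, terminated, truncated, info)</code>. A framework-free sketch of just those signatures (no <code>gymnasium</code> import; the class name and logic are illustrative):</p>

```python
import random


class CoinFlipSketch:
    """Shape of the env API that Gymnasium-based SB3 expects."""

    def __init__(self, heads_probability=0.8):
        self.heads_probability = heads_probability
        self.flip_result = 0

    def reset(self, seed=None, options=None):
        if seed is not None:
            random.seed(seed)       # a real env would call super().reset(seed=seed)
        self.flip_result = 0
        return self.flip_result, {}  # (observation, info)

    def step(self, action):
        self.flip_result = int(random.random() < self.heads_probability)
        reward = 1 if self.flip_result == action else -1
        terminated, truncated = True, False  # one-step episodes
        return self.flip_result, reward, terminated, truncated, {}
```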
|
<python><machine-learning><reinforcement-learning><openai-gym><stable-baselines>
|
2024-01-05 16:47:55
| 2
| 21,513
|
Simd
|
77,765,955
| 3,159,288
|
VSCode - Renaming Symbol Skips Test Files in Python Project
|
<p>In my Python project, when I use VSCode to <a href="https://code.visualstudio.com/docs/editor/refactoring#_rename-symbol" rel="nofollow noreferrer">rename a symbol</a> the change is not propagated to "test files". More specifically, if a <code>.py</code> file lives in a directory named <code>test</code> or is named <code>test_*.py</code> then the symbol is not renamed in that file. How do I get VSCode to propagate the renamed symbol to test files? There must be some setting I'm not aware of. Or perhaps this is a bug in VSCode's Python extension?</p>
<p>To give a more concrete example of my situation, if I have the following file structure:</p>
<pre><code>my_project/
test/
__init__.py
test_thing.py
__init__.py
thing.py
something_else.py
</code></pre>
<p>Where <code>thing.py</code> contains:</p>
<pre class="lang-py prettyprint-override"><code>class Thing:
...
</code></pre>
<p>And <code>test_thing.py</code> contains:</p>
<pre class="lang-py prettyprint-override"><code>from my_project.thing import Thing
</code></pre>
<p>If I rename <code>Thing</code> to something else, <code>test_thing.py</code> does not experience the change even though other, non-test files (e.g. <code>something_else.py</code>), are changed to reflect the refactoring.</p>
<p>Here's my full <code>settings.json</code>. From what I can tell there isn't anything that should be messing with this:</p>
<pre class="lang-json prettyprint-override"><code>{
"[python]": {
"editor.codeActionsOnSave": {
"source.fixAll": "explicit"
},
"editor.defaultFormatter": "ms-python.black-formatter",
"editor.formatOnSave": true,
"editor.formatOnType": true
},
"[toml]": {
"editor.formatOnSave": false
},
"cSpell.enabled": false,
"cSpell.userWords": [
"callstack",
"dataframes",
"docstrings",
"kwargs",
"Numpy",
],
"editor.formatOnPaste": true,
"editor.formatOnSave": true,
"editor.formatOnType": true,
"editor.inlineSuggest.enabled": true,
"editor.minimap.enabled": false,
"editor.rulers": [
88,
120
],
"evenBetterToml.formatter.alignComments": false,
"evenBetterToml.formatter.arrayTrailingComma": true,
"evenBetterToml.formatter.compactEntries": false,
"explorer.confirmDragAndDrop": false,
"files.insertFinalNewline": true,
"git.autofetch": true,
"github.copilot.enable": {
"*": true,
"markdown": true,
"plaintext": true,
"scminput": false
},
"jupyter.askForKernelRestart": false,
"python.analysis.autoFormatStrings": true,
"python.analysis.autoImportCompletions": true,
"python.languageServer": "Pylance",
"python.venvFolders": [
"~/.mambaforge/envs"
],
"terminal.integrated.inheritEnv": false,
"terminal.integrated.scrollback": 3000,
"window.nativeTabs": true
}
</code></pre>
|
<python><visual-studio-code>
|
2024-01-05 16:28:54
| 0
| 852
|
rmorshea
|
77,765,919
| 519,422
|
Pandas dataframe: writing specific values to a file with specific formatting?
|
<p>I want to update external files. I want to do this by writing new data from Pandas dataframes to the files in a particular format.</p>
<p>Here is an example of the first two blocks of data in a file (what I want my new data from the dataframes to look like when I write the data to the files). "<code>identifieri</code>" is a unique name for a block of data and dataframe.</p>
<pre><code>(Lines of comments, then)
identifier1 label2 = i \ label3 label4
label5
A1 = -5563.88 B2 = -4998 C3 = -203.8888 D4 = 5926.8
E5 = 24.99876 F6 = 100.6666 G7 = 30.008 H8 = 10.9999
J9 = 1000000 K10 = 1.0002 L11 = 0.1
M12
identifier2 label2 = i \ label3 label4
label5
A1 = -788 B2 = -6554 C3 = -100.23 D4 = 7526.8
E5 = 20.99876 F6 = 10.6666 G7 = 20.098 H8 = 10.9999
J9 = 1000000 K10 = 1.0002 L11 = 0.000
M12
...
</code></pre>
<p>In the dataframes, the new data are in two columns called <code>Labels</code> and <code>Numbers</code>. Each label goes with one number. For example, there are labels <code>A1</code>, <code>B2</code>, <code>C3</code>, etc. (see the example above), and for the "identifier2" data block the corresponding numbers are <code>-788</code>, <code>-6554</code>, <code>-100.23</code>. However, there are also other data that I don't need. I only need data from specific rows.</p>
<p>So far I have tried to get the specific labels and numbers I need from a dataframe and then write them in append mode to a file. I can print one label and its number from a dataframe:</p>
<pre><code>df.set_index("Labels", inplace=True)
print("Label 1", df.loc["label1", 'Numbers'])
</code></pre>
<p>My plan is to get each label and number I need and then append them to a file along with appropriate spaces. However, when I try this:</p>
<pre><code>df.loc["label1", 'Numbers'].to_csv('file.txt', mode='a', index = False, sep=' ')
</code></pre>
<p>I get: <code>AttributeError: 'float' object has no attribute 'to_csv'</code></p>
<p>It seems like my method is clunky and it will be hard to figure out how to add spaces and newlines. If anyone knows of a more foolproof and sophisticated method, I would appreciate any advice.</p>
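<p>One way to sidestep <code>to_csv</code> entirely is to format each block with f-strings (<code>.loc</code> on a single cell returns a scalar float, hence the <code>AttributeError</code>). A rough sketch, assuming the dataframe is indexed by <code>Labels</code>; the fixed-width fields and the four-pairs-per-line layout here are illustrative, not the exact file format:</p>

```python
import pandas as pd

df = pd.DataFrame(
    {"Labels": ["A1", "B2", "C3", "D4"],
     "Numbers": [-788.0, -6554.0, -100.23, 7526.8]}
).set_index("Labels")

lines = []
row = []
for label in ["A1", "B2", "C3", "D4"]:
    # left-align each value in a 12-char field after "label = "
    row.append(f"{label} = {df.loc[label, 'Numbers']:<12g}")
    if len(row) == 4:                 # four "label = value" pairs per line
        lines.append("   " + " ".join(row))
        row = []
block = "\n".join(lines)
# 'block' can now be appended with open(path, 'a').write(block + '\n')
```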
|
<python><python-3.x><regex><dataframe><file-io>
|
2024-01-05 16:21:27
| 1
| 897
|
Ant
|
77,765,887
| 13,496,839
|
convert *.py to *.Pyd and getting a no module error in import?
|
<p>I have a Cython file (main_file_001.pyx) located at d:/exe_project/all_files/python_files/. I am attempting to compile this Cython file into a Python extension module and import it into another program. However, I'm encountering difficulties in the import process.</p>
<pre><code>try:
from setuptools import setup
from setuptools import Extension
except ImportError:
from distutils.core import setup
from distutils.extension import Extension
from Cython.Distutils import build_ext
pat = "d:/exe_project/all_files/python_files/main_file_001.pyx"
ext_modules = [Extension("my_code_cython", [pat])]
setup(
name='Generic model class',
cmdclass={'build_ext': build_ext},
ext_modules=ext_modules
)
</code></pre>
<p>After running <code>python setup.py build_ext --inplace</code>, I find a compiled module named <code>my_code_cython.cp37-win_amd64.pyd</code> in the same directory.</p>
<p>When I try to import the compiled module using <code>import my_code_cython</code> in another Python program, I receive a "no module found" error.</p>
<p>Expected Outcome:</p>
<p>I expect to successfully import and use the functions or classes defined in the Cython module in my Python program without any import errors.</p>
<p>Additional Information:</p>
<p>I'm using pycahrm community edition, windows 10,Cython version 3.0.0a11.
The Python version associated with my Cython compilation and import attempts is Python 3.7.
I've tried adding the directory containing the .pyd file to the PYTHONPATH, but the issue persists.</p>
<p>The <code>main_file_001.pyx</code> code is below:</p>
<pre><code>def say_hello_to():
print("Hello ")
</code></pre>
<p><code>print(sys.path)</code> produces the following result:</p>
<pre><code>['D:\\exe_project\\all_files\\python_files', 'D:\\exe_project', 'C:\\Program Files\\Python37\\python37.zip', 'C:\\Program Files\\Python37\\DLLs', 'C:\\Program Files\\Python37\\lib', 'C:\\Program Files\\Python37', 'D:\\exe_project\\lib\\site-packages', 'c:\\python\\DLLs']
</code></pre>
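<p>A quick way to check whether the interpreter that fails can resolve the module at all is <code>importlib.util.find_spec</code>, which returns <code>None</code> when nothing on <code>sys.path</code> matches — for instance because the <code>.pyd</code> was built for a different Python version than the one running, as the <code>cp37</code> tag suggests. A sketch (the missing module name is made up):</p>

```python
import importlib.util


def where_is(module_name):
    """Return the file a module would be loaded from, or None if unresolvable."""
    spec = importlib.util.find_spec(module_name)
    return spec.origin if spec is not None else None


# A stdlib module resolves to a file; an unknown name does not.
found = where_is("json")
missing = where_is("my_code_cython_definitely_missing")
```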
|
<python><python-3.x><cython><cythonize>
|
2024-01-05 16:15:45
| 0
| 686
|
Bala
|
77,765,806
| 11,431,171
|
FastAPI Application Deployment on Azure App Service: 503 Service Unavailable Error
|
<p><strong>Issue Summary:</strong>
I am experiencing a 503 Service Unavailable error when deploying a FastAPI application on Azure App Service (Linux). The application runs fine locally but encounters issues when deployed to Azure.</p>
<p><strong>Environment:</strong></p>
<p>FastAPI version: 0.108.0
Gunicorn version: 21.2.0
Uvicorn version: 0.25.0
Deployment: Azure App Service on Linux</p>
<p><strong>Application Structure:</strong>
The application is a basic FastAPI setup:</p>
<p>python</p>
<pre><code># app.py
from fastapi import FastAPI
from pydantic import BaseModel
app = FastAPI()
@app.get("/")
async def read_root():
return {"Hello": "World"}
@app.get("/items/{item_id}")
async def read_item(item_id: int):
return {"item_id": item_id}
</code></pre>
<p><strong>Requirements (requirements.txt):</strong></p>
<pre><code>fastapi==0.108.0
gunicorn==21.2.0
uvicorn==0.25.0
</code></pre>
<p><strong>Deployment Details:</strong>
I deployed this application to Azure App Service. I didn't use a Procfile, but I set the startup command in the App Service configuration under General Settings → Startup:</p>
<pre><code>gunicorn -w 4 -k uvicorn.workers.UvicornWorker -b :80 app:app
</code></pre>
<p><strong>Issue Encountered:</strong>
When accessing the application URL (e.g., <a href="https://test-api-qa.azurewebsites.net/items/1" rel="nofollow noreferrer">https://test-api-qa.azurewebsites.net/items/1</a>), I receive a 503 Service Unavailable error.</p>
<pre><code>2024-01-05T15:21:11.509Z ERROR - Container test-api-qa_1_ebb38e7f didn't respond to HTTP pings on port: 8000, failing site start. See container logs for debugging.
2024-01-05T15:21:11.514Z INFO - Stopping site test-api-qa because it failed during startup.
2024-01-05T15:27:21.578Z INFO - 3.10_20230810.1.tuxprod Pulling from appsvc/python
2024-01-05T15:27:21.589Z INFO - Digest: sha256:6e7907b272357dfda9a8c141b01fc30851ffc4448c6c41b81d6d6d63d2de0472
2024-01-05T15:27:21.590Z INFO - Status: Image is up to date for 10.1.0.4:13209/appsvc/python:3.10_20230810.1.tuxprod
2024-01-05T15:27:21.603Z INFO - Pull Image successful, Time taken: 0 Seconds
2024-01-05T15:27:21.634Z INFO - Starting container for site
2024-01-05T15:27:21.635Z INFO - docker run -d --expose=8000 --name test-api-qa_1_966ce6a6 -e WEBSITE_USE_DIAGNOSTIC_SERVER=false -e WEBSITE_SITE_NAME=test-api-qa -e WEBSITE_AUTH_ENABLED=False -e PORT=8000 -e WEBSITE_ROLE_INSTANCE_ID=0 -e WEBSITE_HOSTNAME=test-api-qa.azurewebsites.net -e WEBSITE_INSTANCE_ID=b11d819b8697339cf9054171dc33b13ea31a05e43536bee7f778d0e63d5fd0c5 -e HTTP_LOGGING_ENABLED=1 appsvc/python:3.10_20230810.1.tuxprod gunicorn -w 4 -k uvicorn.workers.UvicornWorker -b :80 app:app
2024-01-05T15:27:22.048Z INFO - Initiating warmup request to container test-api-qa_1_966ce6a6_msiProxy for site test-api-qa
2024-01-05T15:27:22.057Z INFO - Container test-api-qa_1_966ce6a6_msiProxy for site test-api-qa initialized successfully and is ready to serve requests.
2024-01-05T15:27:22.058Z INFO - Initiating warmup request to container test-api-qa_1_966ce6a6 for site test-api-qa
2024-01-05T15:27:52.435Z INFO - Waiting for response to warmup request for container test-api-qa_1_966ce6a6. Elapsed time = 30.3869829 sec
2024-01-05T15:28:07.503Z INFO - Waiting for response to warmup request for container test-api-qa_1_966ce6a6. Elapsed time = 45.4549341 sec
2024-01-05T15:28:22.783Z INFO - Waiting for response to warmup request for container test-api-qa_1_966ce6a6. Elapsed time = 60.7352662 sec
2024-01-05T15:28:37.855Z INFO - Waiting for response to warmup request for container test-api-qa_1_966ce6a6. Elapsed time = 75.8073171 sec
2024-01-05T15:28:52.924Z INFO - Waiting for response to warmup request for container test-api-qa_1_966ce6a6. Elapsed time = 90.8760207 sec
2024-01-05T15:29:07.990Z INFO - Waiting for response to warmup request for container test-api-qa_1_966ce6a6. Elapsed time = 105.9423549 sec
2024-01-05T15:29:23.059Z INFO - Waiting for response to warmup request for container test-api-qa_1_966ce6a6. Elapsed time = 121.0115291 sec
2024-01-05T15:29:38.128Z INFO - Waiting for response to warmup request for container test-api-qa_1_966ce6a6. Elapsed time = 136.0803053 sec
2024-01-05T15:29:53.194Z INFO - Waiting for response to warmup request for container test-api-qa_1_966ce6a6. Elapsed time = 151.1457954 sec
2024-01-05T15:30:08.258Z INFO - Waiting for response to warmup request for container test-api-qa_1_966ce6a6. Elapsed time = 166.2100867 sec
2024-01-05T15:30:23.323Z INFO - Waiting for response to warmup request for container test-api-qa_1_966ce6a6. Elapsed time = 181.2752611 sec
2024-01-05T15:30:38.387Z INFO - Waiting for response to warmup request for container test-api-qa_1_966ce6a6. Elapsed time = 196.3392949 sec
2024-01-05T15:30:53.452Z INFO - Waiting for response to warmup request for container test-api-qa_1_966ce6a6. Elapsed time = 211.40458 sec
2024-01-05T15:31:08.525Z INFO - Waiting for response to warmup request for container test-api-qa_1_966ce6a6. Elapsed time = 226.4769965 sec
2024-01-05T15:31:12.550Z ERROR - Container test-api-qa_1_966ce6a6 for site test-api-qa did not start within expected time limit. Elapsed time = 230.5018293 sec
2024-01-05T15:31:12.554Z ERROR - Container test-api-qa_1_966ce6a6 didn't respond to HTTP pings on port: 8000, failing site start. See container logs for debugging.
2024-01-05T15:31:12.557Z INFO - Stopping site test-api-qa because it failed during startup.
</code></pre>
<p><strong>Attempts to Resolve:</strong></p>
<ul>
<li>Checked that the application works locally.</li>
<li>Verified that the Procfile is not used and the startup command is set correctly in Azure.</li>
<li>Considered potential port configuration issues.</li>
</ul>
<p><strong>Questions:</strong></p>
<ol>
<li>Why might Azure App Service be failing to serve requests to my
FastAPI application?</li>
<li>Is there a mismatch between the container's
exposed port and the port that Azure expects the app to be served
on?</li>
<li>Are there additional configurations or logs that I should look
into for troubleshooting this issue?</li>
</ol>
<p>Any insights or suggestions from the community would be greatly appreciated!</p>
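<p>For what it's worth, the log itself hints at a mismatch: the container is started with <code>-e PORT=8000</code> and health-pinged on port 8000, while the startup command binds gunicorn to port 80. A startup command that binds to the pinged port would be (a sketch; worker count is illustrative):</p>

```shell
gunicorn -w 4 -k uvicorn.workers.UvicornWorker -b 0.0.0.0:8000 app:app
```

<p>Alternatively, binding to <code>0.0.0.0:$PORT</code> makes the command follow whatever port App Service injects.</p>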
|
<python><azure><azure-web-app-service><fastapi>
|
2024-01-05 15:58:59
| 1
| 501
|
fortanu82
|
77,765,751
| 7,766,158
|
Handling asyncronous function in a loop
|
<p>I have the following code :</p>
<pre><code>async def async_function():
await some_function()
# TODO : Doesn't work
def sync_function():
for i in range (100):
async_function()
</code></pre>
<p>In the past I had no loop in <code>sync_function</code>, so I managed to call <code>async_function</code> using <code>asyncio.run()</code>. Now that I have this for loop, I can't find the proper way to handle my asynchronous function.</p>
<p>The current code throws the following warning</p>
<pre><code>RuntimeWarning: coroutine 'async_function' was never awaited
</code></pre>
<p>And part of the logic in it never runs.</p>
<p>What would be the right way to do this?</p>
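<p>A common pattern is to collect all the coroutines and drive them with a single event loop, e.g. via <code>asyncio.gather</code> inside one <code>asyncio.run</code> call. A minimal sketch (the body of <code>async_function</code> here is a stand-in for the real <code>some_function</code>):</p>

```python
import asyncio


async def async_function(i):
    await asyncio.sleep(0)   # stand-in for "await some_function()"
    return i * 2


def sync_function():
    async def main():
        # Schedule all 100 coroutines concurrently and await them all;
        # gather preserves the original order of results.
        return await asyncio.gather(*(async_function(i) for i in range(100)))
    return asyncio.run(main())


results = sync_function()
```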
|
<python><python-asyncio>
|
2024-01-05 15:49:18
| 1
| 1,931
|
Nakeuh
|
77,765,572
| 15,991,297
|
Check if Value Greater than Previous Row by Group
|
<p>With the dataframe below I want to compare the values in Val1 with the value in the preceding row, by group. I then want to create a new column containing the word "Abv" if the value is greater than the preceding row and "Blw" if it is lower. If the values are equal the new column should be blank.</p>
<p>Here's an extract:</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>Ref1</th>
<th>Val1</th>
</tr>
</thead>
<tbody>
<tr>
<td>A</td>
<td>1</td>
</tr>
<tr>
<td>A</td>
<td>2</td>
</tr>
<tr>
<td>A</td>
<td>3</td>
</tr>
<tr>
<td>A</td>
<td>4</td>
</tr>
<tr>
<td>B</td>
<td>1</td>
</tr>
<tr>
<td>B</td>
<td>1</td>
</tr>
<tr>
<td>B</td>
<td>2</td>
</tr>
<tr>
<td>B</td>
<td>0</td>
</tr>
</tbody>
</table>
</div>
<p>Desired result:</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>Ref1</th>
<th>Val1</th>
<th>AbvBlw</th>
</tr>
</thead>
<tbody>
<tr>
<td>A</td>
<td>1</td>
<td></td>
</tr>
<tr>
<td>A</td>
<td>2</td>
<td>Abv</td>
</tr>
<tr>
<td>A</td>
<td>3</td>
<td>Abv</td>
</tr>
<tr>
<td>A</td>
<td>4</td>
<td>Abv</td>
</tr>
<tr>
<td>B</td>
<td>1</td>
<td></td>
</tr>
<tr>
<td>B</td>
<td>1</td>
<td></td>
</tr>
<tr>
<td>B</td>
<td>2</td>
<td>Abv</td>
</tr>
<tr>
<td>B</td>
<td>0</td>
<td>Blw</td>
</tr>
</tbody>
</table>
</div>
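<p>A vectorized way to get this is <code>groupby(...).diff()</code> combined with <code>np.select</code>; <code>diff</code> is <code>NaN</code> on the first row of each group, which falls through to the blank default. A sketch against the extract above:</p>

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({"Ref1": list("AAAABBBB"),
                   "Val1": [1, 2, 3, 4, 1, 1, 2, 0]})

diff = df.groupby("Ref1")["Val1"].diff()   # NaN on each group's first row
df["AbvBlw"] = np.select([diff > 0, diff < 0], ["Abv", "Blw"], default="")
```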
|
<python><pandas><numpy>
|
2024-01-05 15:16:57
| 1
| 500
|
James
|
77,765,509
| 2,102,025
|
Pandas dataframe: find closest time before timestamp
|
<p>I have two Pandas dataframes with "datetime" column.</p>
<p>For the <code>df</code> dataframe I would like to add a column with the seconds-difference from the nearest <code>dflogs["datetime"]</code> BEFORE the <code>df["datetime"]</code>. This means I cannot use <code>merge_asof direction="nearest"</code>.</p>
<p>For example
<code>df["datetime"]</code>:</p>
<pre><code>2023-11-15T18:00:00
2023-11-20T19:00:00
2023-11-20T20:00:00
2023-11-20T21:00:00
</code></pre>
<p><code>dflogs["datetime"]:</code></p>
<pre><code>2023-11-17T18:00:00
2023-11-20T20:00:00
</code></pre>
<p>Expected output:</p>
<pre><code>2023-11-15T18:00:00 None (Nothing before)
2023-11-20T19:00:00 262800 (2023-11-17T18:00:00)
2023-11-20T20:00:00 0 (2023-11-20T20:00:00)
2023-11-20T21:00:00 3600 (2023-11-20T20:00:00)
</code></pre>
<p>I was thinking around a function like this (not working correctly):</p>
<pre><code>def check_time_diff(item):
item["timediff"] = (item["datetime"] - dflogs['datetime']).min() / pd.Timedelta(seconds=1)
return item
df = df.apply(check_time_diff, axis=1)
</code></pre>
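<p>Note that <code>merge_asof</code> supports this directly: the default <code>direction='backward'</code> matches the nearest key at or before each left key, so <code>direction="nearest"</code> is not needed. A sketch on the sample data (both frames must be sorted on the key):</p>

```python
import pandas as pd

df = pd.DataFrame({"datetime": pd.to_datetime(
    ["2023-11-15T18:00:00", "2023-11-20T19:00:00",
     "2023-11-20T20:00:00", "2023-11-20T21:00:00"])})
dflogs = pd.DataFrame({"log_dt": pd.to_datetime(
    ["2023-11-17T18:00:00", "2023-11-20T20:00:00"])})

# direction='backward' (the default): nearest log at or before each row;
# rows with no earlier log get NaT, hence NaN seconds.
merged = pd.merge_asof(df, dflogs, left_on="datetime", right_on="log_dt",
                       direction="backward")
df["timediff"] = (merged["datetime"] - merged["log_dt"]).dt.total_seconds()
```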
|
<python><pandas>
|
2024-01-05 15:05:22
| 1
| 579
|
Scripter
|
77,765,417
| 14,839,602
|
Write text to a text file in RTL format
|
<p>I have an array of text that I want to write to a .txt file from right to left, since it is in Arabic or Kurdish. How can I do that using Python?</p>
<p><strong>Method to write to a file:</strong></p>
<pre><code>with open(current_extracted_text_output_path, 'w', encoding='utf-8') as text_file:
for line in extracted_lines:
print(line)
text_file.write(line + '\n')
</code></pre>
<p><strong>Output:</strong></p>
<p><a href="https://i.sstatic.net/JiR87.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/JiR87.png" alt="enter image description here" /></a></p>
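<p>It is worth noting that a plain <code>.txt</code> file carries no direction metadata; the viewer decides how to lay each line out. What can be done from Python is to prefix each line with the Unicode right-to-left mark (U+200F), which nudges direction-aware viewers to render the line RTL. A sketch using a temporary file (the path and sample lines are illustrative):</p>

```python
import os
import tempfile

RLM = "\u200f"  # RIGHT-TO-LEFT MARK: hints the display direction
extracted_lines = ["سڵاو", "بەخێربێن"]

path = os.path.join(tempfile.mkdtemp(), "out.txt")
with open(path, "w", encoding="utf-8") as text_file:
    for line in extracted_lines:
        text_file.write(RLM + line + "\n")

with open(path, encoding="utf-8") as f:
    written = f.read().splitlines()
```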
|
<python><text-files><right-to-left>
|
2024-01-05 14:51:06
| 1
| 434
|
Hama Sabah
|
77,765,355
| 18,206,974
|
How to use FastAPI's lifespan to manage connection pool creation and relase?
|
<p>I want to use FastAPI's <code>lifespan</code> to create a Redis connection pool.</p>
<p>Currently my implementation looks like that:</p>
<ol>
<li>My API has the following dependencies:</li>
</ol>
<pre class="lang-py prettyprint-override"><code>@router.post("/items")
async def post_items(r: RedisDependency): ...
</code></pre>
<ol start="2">
<li><code>RedisDependency</code> is:</li>
</ol>
<pre class="lang-py prettyprint-override"><code>RedisDependency = Annotated[Redis, Depends(get_redis)]
</code></pre>
<ol start="3">
<li><code>get_redis</code> function is:</li>
</ol>
<pre class="lang-py prettyprint-override"><code>import redis.asyncio as aredis
from . import pool
async def get_redis() -> aredis.Redis:
r = aredis.Redis(connection_pool=pool, decode_responses=True)
try:
yield r
finally:
await r.aclose()
</code></pre>
<ol start="4">
<li>The pool is imported into my redis module from the <code>db/__init__</code> file, where it is initialized:</li>
</ol>
<pre class="lang-py prettyprint-override"><code>import redis.asyncio as aredis
from app.core.config import settings
def create_redis_pool():
return aredis.ConnectionPool.from_url(settings.REDIS_URL)
pool = create_redis_pool()
</code></pre>
<p>With the current implementation the pool is created outside FastAPI's lifespan. I've seen examples where people use something like this:</p>
<pre class="lang-py prettyprint-override"><code>@asynccontextmanager
async def lifespan(app: FastAPI):
app.state.r = await init_redis()
yield
await app.state.r.aclose()
</code></pre>
<p>and then use it via:</p>
<pre class="lang-py prettyprint-override"><code>@router.post("/items")
async def post_items(req: Request):
redis = req.app.state.r
...
</code></pre>
<p>which defeats the convenience of <code>Depends(get_resource)</code>.</p>
<p>So the question is: is there a better way to use <code>lifespan</code> that doesn't involve the awkward <code>request.app.state.resource</code> access in the API?</p>
|
<python><fastapi>
|
2024-01-05 14:39:41
| 1
| 336
|
Vladyslav Chaikovskyi
|
77,765,298
| 12,390,973
|
how to compare variables in PYOMO?
|
<p>I have a dummy energy dispatch code which I have created just to learn how to <strong>compare variables</strong> in <strong>PYOMO</strong> and also how to use <strong>Binary variables</strong> in <strong>PYOMO</strong>.</p>
<pre><code>from pyomo.environ import *
import random
import pandas as pd
import numpy as np

# Create a concrete model
model = ConcreteModel()

idx = 20
np.random.seed(idx)
model.m_index = Set(initialize=list(range(idx)))
model.load_profile = Param(model.m_index, initialize=dict(zip(model.m_index, np.random.randint(10, 350, idx))))
model.solar_profile = Param(model.m_index, initialize=dict(zip(model.m_index, np.random.uniform(0, 0.6, idx))))

# Parameters: Generator capacities
model.gen1_cap = Param(initialize=100)
model.gen2_cap = Param(initialize=100)
model.gen3_cap = Param(initialize=100)
model.backup_cap = Param(initialize=300)

# Parameters: Cost of running generators
model.gen1_cost = Param(initialize=10)
model.gen2_cost = Param(initialize=10)
model.gen3_cost = Param(initialize=10)
model.backup_cost = Param(initialize=-3)

# Variable: Energy given out by generators
model.gen1_use = Var(model.m_index, domain=NonNegativeReals)
model.gen2_use = Var(model.m_index, domain=NonNegativeReals)
model.gen3_use = Var(model.m_index, domain=NonNegativeReals)
model.backup_use = Var(model.m_index, domain=NonNegativeReals)
model.gen3_status = Var(model.m_index, domain=Binary)

# Objective function: Maximise the total cost
def production_cost(model):
    total_cost = sum(
        model.gen1_use[m] * model.gen1_cost +
        model.gen2_use[m] * model.gen2_cost +
        model.gen3_use[m] * model.gen3_cost * model.gen3_status[m] +
        model.backup_use[m] * model.backup_cost
        for m in model.m_index)
    return total_cost

model.obj = Objective(rule=production_cost, sense=maximize)

def gen3_on_off(model, m):
    if model.solar_profile[m] >= 0.30:  # This condition works fine
    # if model.gen1_use[m] + model.gen2_use[m] <= 0.90 * model.load_profile[m]:  # This condition gives error
        return model.gen3_status[m] == 1
    else:
        return model.gen3_status[m] == 0

model.gen3_on_off = Constraint(model.m_index, rule=gen3_on_off)

def energy_balance(model, m):
    eq = model.gen1_use[m] + model.gen2_use[m] + model.gen3_use[m] + model.backup_use[m] >= model.load_profile[m]
    return eq

model.energy_balance = Constraint(model.m_index, rule=energy_balance)

def gen1_max(model, m):
    eq = model.gen1_use[m] <= model.gen1_cap
    return eq

model.gen1_max = Constraint(model.m_index, rule=gen1_max)

def gen2_max(model, m):
    eq = model.gen2_use[m] <= model.gen2_cap
    return eq

model.gen2_max = Constraint(model.m_index, rule=gen2_max)

def gen3_max(model, m):
    eq = model.gen3_use[m] <= model.gen3_cap * model.solar_profile[m]
    return eq

model.gen3_max = Constraint(model.m_index, rule=gen3_max)

def backup_max(model, m):
    eq = model.backup_use[m] <= model.backup_cap
    return eq

model.backup_max = Constraint(model.m_index, rule=backup_max)

def gen2_on_off(model, m):
    eq = model.gen2_use[m] <= model.gen2_cap
    return eq
# model.gen2_max = Constraint(model.m_index, rule=gen2_max)

Solver = SolverFactory('gurobi')
Solver.options['LogFile'] = "gurobiLog"
print('\nConnecting to Gurobi Server...')
results = Solver.solve(model)

if (results.solver.status == SolverStatus.ok):
    if (results.solver.termination_condition == TerminationCondition.optimal):
        print("\n\n***Optimal solution found***")
        print('obj returned:', round(value(model.obj), 2))
    else:
        print("\n\n***No optimal solution found***")
        if (results.solver.termination_condition == TerminationCondition.infeasible):
            print("Infeasible solution")
        exit()
else:
    print("\n\n***Solver terminated abnormally***")
    exit()

load = []
gen1 = []
gen2 = []
gen3 = []
solar_profile = []
backup = []
gen3_status = []

for i in range(20):
    load.append(value(model.load_profile[i]))
    gen1.append(value(model.gen1_use[i]))
    gen2.append(value(model.gen2_use[i]))
    gen3.append(value(model.gen3_use[i]))
    solar_profile.append(value(model.solar_profile[i]))
    backup.append(value(model.backup_use[i]))
    gen3_status.append(value(model.gen3_status[i]))

df = pd.DataFrame({
    'Load profile': load,
    'Gen 1': gen1,
    'Gen 2': gen2,
    'Gen 3': gen3,
    'Solar profile': solar_profile,
    'Backup': backup,
    'Gen 3 Status': gen3_status
    # 'Gen 2 Status': gen2_status,
})
df.to_excel('binary.xlsx')
print(df)
</code></pre>
<p>Constraint gen3_on_off is the one where I am facing an issue. If you run this program as it is, it will work, but when I comment out the first <strong>if statement</strong> and uncomment the second <strong>if statement</strong>:</p>
<pre><code>if model.gen1_use[m] + model.gen2_use[m] <= 0.90 * model.load_profile[m]:
</code></pre>
<p>then this gives an error:</p>
<pre><code>WARNING: DEPRECATED: Chained inequalities are deprecated. Use the inequality()
function to express ranged inequality expressions.
WARNING: DEPRECATED: Chained inequalities are deprecated. Use the inequality()
function to express ranged inequality expressions.
ERROR: Rule failed when generating expression for constraint gen3_on_off with
index 0: TypeError: Relational expression used in an unexpected Boolean
context.
The inequality expression:
84.3 <= gen1_use[0] + gen2_use[0]
contains non-constant terms (variables) that were evaluated in an
unexpected Boolean context at
File
'C:\Users\nvats\PycharmProjects\microgrid-v1.0\RTC\test.py',
line 52:
if model.gen1_use[m] + model.gen2_use[m] >= 0.30 *
model.load_profile[m]: # This condition gives error
Evaluating Pyomo variables in a Boolean context, e.g.
if expression <= 5:
is generally invalid. If you want to obtain the Boolean value of
the expression based on the current variable values, explicitly
evaluate the expression using the value() function:
if value(expression) <= 5:
or
if value(expression <= 5):
ERROR: Constructing component 'gen3_on_off' from data=None failed: TypeError:
Relational expression used in an unexpected Boolean context.
The inequality expression:
84.3 <= gen1_use[0] + gen2_use[0]
contains non-constant terms (variables) that were evaluated in an
unexpected Boolean context at
File
'C:\Users\nvats\PycharmProjects\microgrid-v1.0\RTC\test.py',
line 52:
if model.gen1_use[m] + model.gen2_use[m] >= 0.30 *
model.load_profile[m]: # This condition gives error
Evaluating Pyomo variables in a Boolean context, e.g.
if expression <= 5:
is generally invalid. If you want to obtain the Boolean value of
the expression based on the current variable values, explicitly
evaluate the expression using the value() function:
if value(expression) <= 5:
or
if value(expression <= 5):
Process finished with exit code 1
</code></pre>
<p>Is there any way I can add this condition to the constraint:</p>
<pre><code>if model.gen1_use[m] + model.gen2_use[m] >= 0.90 * model.load_profile[m]:
</code></pre>
<p>and set the binary variable to 1 or 0 based on True or False?</p>
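The standard MILP way to express "set the binary to 1 when an expression with variables crosses a threshold" is not a Python <code>if</code> (which would need variable values at model-build time) but a big-M constraint pair. Since a solver isn't assumed here, the sketch below only checks the big-M logic in plain Python; the function name <code>bigm_feasible</code> and the constants <code>M</code>/<code>eps</code> are illustrative assumptions, not part of the original model.

```python
# Big-M encoding of: gen3_status[m] = 1  iff  gen1_use[m] + gen2_use[m] >= 0.90 * load[m].
# In Pyomo this would become two Constraint rules per index m, where M bounds
# the left-hand side and eps is a small tolerance at the threshold:
#
#   gen1_use[m] + gen2_use[m] - 0.90 * load[m] <= M * gen3_status[m]
#   gen1_use[m] + gen2_use[m] - 0.90 * load[m] >= eps - M * (1 - gen3_status[m])
#
# The helper below evaluates both inequalities for candidate values, showing
# which binary value each assignment permits.

def bigm_feasible(gen_use: float, load: float, status: int,
                  M: float = 1000.0, eps: float = 1e-6) -> bool:
    """Return True if (gen_use, status) satisfies both big-M constraints."""
    lhs = gen_use - 0.90 * load
    upper_ok = lhs <= M * status
    lower_ok = lhs >= eps - M * (1 - status)
    return upper_ok and lower_ok

# Above the threshold only status=1 is feasible; below it only status=0.
print(bigm_feasible(95.0, 100.0, 1))   # True  (95 >= 90)
print(bigm_feasible(95.0, 100.0, 0))   # False
print(bigm_feasible(50.0, 100.0, 0))   # True  (50 < 90)
print(bigm_feasible(50.0, 100.0, 1))   # False
```

With the big-M pair in place, the solver itself forces the binary, so the constraint rule returns inequalities instead of branching on variable values.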
|
<python><binary><pyomo>
|
2024-01-05 14:30:06
| 1
| 845
|
Vesper
|
77,765,233
| 1,900,317
|
Accessing Flask application context within python-socketio Client()'s event handler
|
<p>I am trying to access the Flask application context inside <code>python-socketio</code>'s event handler <code>on_tick()</code>, and I am presented with this error:</p>
<pre><code>(...)
File "H:\Python\autotrader2\venv\lib\site-packages\werkzeug\local.py", line 316, in __get__
obj = instance._get_current_object() # type: ignore[misc]
File "H:\Python\autotrader2\venv\lib\site-packages\werkzeug\local.py", line 513, in _get_current_object
raise RuntimeError(unbound_message) from None
RuntimeError: Working outside of application context.
This typically means that you attempted to use functionality that needed
the current application. To solve this, set up an application context
with app.app_context(). See the documentation for more information.
</code></pre>
<p>I have tried to extract a self-sufficient miniature version of a larger program:</p>
<pre><code>import socketio
from flask import current_app

class MySocketIOClient:

    def __init__(self):
        self.sio = socketio.Client()
        self.sio.connect("https://feeds.sio.server",
                         headers={"User-Agent": "python-socketio[client]/socket"},
                         auth={"user": "user_id", "token": "session_token"},
                         transports="websocket")

    def execute(self):
        # with current_app.app_context():  # Does not work
        with current_app.test_request_context('/'):  # Does not work
            self.sio.on('update', self.on_tick)
        # self.sio.wait()  # Does not work either

    def on_tick(self, tick):
        print(tick)
        # Error: working outside application context
        print("APP_TIMEZONE={}".format(current_app.config.get('APP_TIMEZONE')))

if __name__ == '__main__':
    MySocketIOClient().execute()
</code></pre>
<p>How to access Flask's application context (or request context) in the event handler?</p>
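The usual fix is to stop relying on the <code>current_app</code> proxy inside a callback that runs outside any request, and instead capture the concrete app object and enter its context explicitly in the handler. Flask itself isn't imported in the sketch below; <code>DummyApp</code> is a stand-in for <code>flask.Flask</code> (only <code>config</code> and <code>app_context()</code> are mimicked), and <code>make_on_tick</code> is a hypothetical name — the shape of the closure is the point.

```python
from contextlib import contextmanager

class DummyApp:
    """Stand-in for a Flask app: a config dict plus an app_context() manager."""
    def __init__(self, config):
        self.config = config

    @contextmanager
    def app_context(self):
        yield self  # the real Flask pushes onto its context stack here

def make_on_tick(app):
    # The handler closes over the concrete `app`, so no ambient lookup of
    # current_app is needed when socketio invokes it later.
    def on_tick(tick):
        with app.app_context():
            return app.config.get("APP_TIMEZONE")
    return on_tick

app = DummyApp({"APP_TIMEZONE": "UTC"})
handler = make_on_tick(app)
print(handler({"price": 1.0}))  # UTC
```

Applied to the code above, that would mean passing the real <code>Flask</code> app into <code>MySocketIOClient.__init__</code>, storing it as <code>self.app</code>, and writing <code>with self.app.app_context():</code> inside <code>on_tick</code>.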
|
<python><flask><flask-socketio><python-socketio>
|
2024-01-05 14:19:41
| 1
| 505
|
Adi
|
77,765,210
| 10,362,396
|
Django Migrations Not Using Configured Database Connection
|
<p>I am using Supabase PostgreSQL along with Django. By default it uses the <code>public</code> schema, but now I want to link the <code>user_id</code> field in my model to <code>auth.users.id</code>, where <code>auth</code> is the schema, <code>users</code> is the table name and <code>id</code> is the UUID field.</p>
<p>The model definitions are here:</p>
<pre><code># auth/models.py
class SupabaseUser(models.Model):
    id = models.UUIDField(
        primary_key=True,
        default=uuid.uuid4,
        verbose_name="User ID",
        help_text="Supabase managed user id",
        editable=False,
    )

    class Meta:
        managed = False
        db_table = "users"
</code></pre>
<pre><code># myapp/models.py
class MyModel(models.Model):
    user = models.ForeignKey(
        SupabaseUser,
        on_delete=models.CASCADE,
        verbose_name="Supabase User",
        help_text="Supabase user associated with the account",
        null=False,
    )
</code></pre>
<p>In the settings I have two database connections</p>
<pre><code>DATABASES = {
    "default": dj_database_url.config(),
    "supabase_auth": dj_database_url.config(),
}

DATABASES["supabase_auth"]["OPTIONS"] = {
    "options": "-c search_path=auth",
}
</code></pre>
<p>And of course model router is configured as</p>
<pre><code>from django.db.models import Model
from django.db.models.options import Options

class ModelRouter:
    @staticmethod
    def db_for_read(model: Model, **kwargs):
        return ModelRouter._get_db_schema(model._meta)

    @staticmethod
    def db_for_write(model: Model, **kwargs):
        return ModelRouter._get_db_schema(model._meta)

    @staticmethod
    def allow_migrate(db, app_label, model: Model, model_name=None, **kwargs):
        return True

    @staticmethod
    def _get_db_schema(options: Options) -> str:
        if options.app_label == "auth":
            return "supabase_auth"
        return "default"
</code></pre>
<p>When I use the <code>./manage.py shell</code> and run the following script, it works</p>
<pre><code>from auth.models import SupabaseUser
assert SupabaseUser.objects.count() == 1
</code></pre>
<p>But when I apply migrations for <code>myapp</code>, I am getting the following error:</p>
<pre><code>$ ./manage.py makemigrations myapp && ./manage.py migrate myapp
Operations to perform:
Apply all migrations: myapp
Running migrations:
Applying myapp.0013_mymodel_user...Traceback (most recent call last):
File "/mnt/Projects/myapp/backend/.venv/lib/python3.11/site-packages/django/db/backends/utils.py", line 103, in _execute
return self.cursor.execute(sql)
^^^^^^^^^^^^^^^^^^^^^^^^
psycopg2.errors.UndefinedTable: relation "users" does not exist
....
snip
....
django.db.utils.ProgrammingError: relation "users" does not exist
</code></pre>
|
<python><django><orm>
|
2024-01-05 14:16:27
| 1
| 2,057
|
tbhaxor
|
77,765,051
| 1,391,441
|
What is the proper replacement for scipy.interpolate.interp1d?
|
<p>The class <a href="https://docs.scipy.org/doc/scipy/reference/generated/scipy.interpolate.interp1d.html" rel="noreferrer">scipy.interpolate.interp1d</a> has a message that reads:</p>
<blockquote>
<p>Legacy</p>
<p>This class is considered legacy and will no longer receive updates. This could also mean it will be removed in future SciPy versions.</p>
</blockquote>
<p>I'm not sure why this is marked for deprecation, but I use it heavily in my code to return an interpolating function, which <code>numpy.interp()</code> does not handle. What is the proper replacement for <code>scipy.interpolate.interp1d</code>?</p>
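For context, the SciPy documentation points users of <code>interp1d</code> toward <code>numpy.interp</code> for one-off evaluation and <code>scipy.interpolate.make_interp_spline</code> (with <code>k=1</code> for linear) when a callable is wanted. As a dependency-free illustration of the "returns a function" behaviour, here is a minimal sketch covering only the default <code>kind='linear'</code> case; <code>make_linear_interp</code> is an invented name, not a SciPy API.

```python
from bisect import bisect_right

def make_linear_interp(xs, ys):
    """Return a callable doing piecewise-linear interpolation over (xs, ys).

    xs must be strictly increasing. Unlike interp1d, this does no
    extrapolation: values outside [xs[0], xs[-1]] raise ValueError.
    """
    if len(xs) != len(ys) or len(xs) < 2:
        raise ValueError("need at least two (x, y) pairs of equal length")

    def f(x):
        if x < xs[0] or x > xs[-1]:
            raise ValueError(f"{x} outside interpolation range")
        # Locate the segment containing x, clamping the right endpoint.
        i = min(bisect_right(xs, x), len(xs) - 1)
        x0, x1 = xs[i - 1], xs[i]
        y0, y1 = ys[i - 1], ys[i]
        return y0 + (y1 - y0) * (x - x0) / (x1 - x0)
    return f

f = make_linear_interp([0.0, 1.0, 2.0], [0.0, 10.0, 0.0])
print(f(0.5))  # 5.0
print(f(1.5))  # 5.0
```

For anything beyond linear interpolation (cubic, extrapolation, vectorised evaluation), the SciPy replacements above are the appropriate route rather than hand-rolling.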
|
<python><python-3.x><scipy><interpolation>
|
2024-01-05 13:52:37
| 1
| 42,941
|
Gabriel
|
77,764,890
| 17,580,381
|
sigwait not behaving as expected
|
<p><strong>Unix only</strong></p>
<pre><code>from threading import Thread
from signal import signal, alarm, sigwait, SIGALRM

class Check(Thread):

    def __init__(self):
        super().__init__()
        signal(SIGALRM, Check.handler)

    @staticmethod
    def handler(*_):
        print("Hello")

    def run(self):
        for _ in range(5):
            alarm(1)
            print("Waiting...")
            sigwait((SIGALRM,))
            print("done")

if __name__ == "__main__":
    (check := Check()).start()
    check.join()
</code></pre>
<p><strong>Expected behaviour:</strong></p>
<p>I expect the following output to be repeated 5 times:</p>
<pre><code>Waiting...
Hello
done
</code></pre>
<p>However, "done" is never printed because the runtime is "blocking" on <em>sigwait()</em>.</p>
<p>"Hello" is printed, so I know that the signal handler has been invoked.</p>
<p>If SIGALRM has been signalled (via <em>alarm()</em>) and handled, why does <em>sigwait()</em> not return?</p>
<p><strong>Platform:</strong></p>
<pre><code>macOS 14.2.1
Python 3.12.1
</code></pre>
<p><strong>EDIT to include call to pthread_sigmask</strong></p>
<pre><code>from threading import Thread
from signal import signal, alarm, sigwait, pthread_sigmask, SIGALRM, SIG_BLOCK

class Check(Thread):

    def __init__(self):
        super().__init__()
        signal(SIGALRM, self.handler)

    def handler(self, *_):
        print("Hello")

    def run(self):
        mask = SIGALRM,
        pthread_sigmask(SIG_BLOCK, mask)
        for _ in range(5):
            alarm(1)
            print("Waiting...")
            sigwait(mask)
            print("done")

if __name__ == "__main__":
    (check := Check()).start()
    check.join()
</code></pre>
<p><strong>Behaviour is identical to original code</strong></p>
|
<python><signals>
|
2024-01-05 13:26:16
| 2
| 28,997
|
Ramrab
|
77,764,754
| 8,865,579
|
Gunicorn over Flask sending Parallel requests to same route
|
<p>I am running a Flask application (few users, organization level). I have a high-end machine with 72 cores and 515703 MB of RAM.</p>
<p>I am currently running my application as (no nginx):</p>
<pre><code>gunicorn --workers=10 --threads=12 -b 0.0.0.0:5000 "run:app" --access-logfile user_activity_logs/user_activity.log --preload
</code></pre>
<p>i.e. 120 process threads (Ideally, there is flexibility ~ 2*72 + 1 process threads).</p>
<p>There is a paginate API that needs to send a large amount of data, approx. 60-70 MB of rows from a CSV, in small chunks of 1-2 MB.</p>
<p>For these, I am sending n (no. of pages) requests in parallel from the frontend. But on some dates, n is around 25-30 pages or even more.</p>
<p>From the <code>networks</code> tab and <code>htop</code>, what I am seeing is that the initial 4 requests are fast and parallel, but the later requests are slower and follow a pattern: around 4 requests at a time in parallel, with the rest kept pending.</p>
<p><strong>Frontend Code:</strong></p>
<pre><code>const page_size = 20000;

async function fetchInitDetails() {
    return await fetchApi('/paginate-refdata/init', {
        "method": "GET",
        "data": {
            'date': $('#date').val(),
            'exchange': $('#exchange').val(),
            'page_size': page_size
        }
    })
}

async function fetchAllPages(initDetails) {
    let columns = [];
    for (let i = 0; i < show_columns.length; i++) {
        if (initDetails.columns.includes(show_columns[i])) {
            if (show_columns[i] == 'expiry_date') {
                columns.push({
                    headerName: titleCase(show_columns[i]),
                    field: show_columns[i],
                    filter: 'agTextColumnFilter',
                    minWidth: 150,
                    sort: 'desc',
                    sortIndex: 0
                })
            } else {
                columns.push({
                    headerName: titleCase(show_columns[i]),
                    field: show_columns[i],
                    filter: 'agTextColumnFilter',
                    minWidth: 150,
                })
            }
        }
    }
    for (let i = 0; i < initDetails.columns.length; i++) {
        if (!show_columns.includes(initDetails.columns[i])) {
            columns.push({
                headerName: titleCase(initDetails.columns[i]),
                field: initDetails.columns[i],
                filter: 'agTextColumnFilter',
                minWidth: 150,
                hide: true
            })
        }
    }
    grid.ref_db.gridOptions.columnDefs = columns;

    const { total_pages } = initDetails;
    const parallelRequests = Array.from({ length: total_pages }, (_, index) => index + 1);
    let pages = 1;
    // (data.total - data.cnt) / 15000 * 10
    const responses = await Promise.all(parallelRequests.map(async (page) => {
        const response = await fetchApi('/paginate-refdata', {
            "method": "GET",
            "data": {
                'date': $('#date').val(),
                'exchange': $('#exchange').val(),
                'page_size': page_size,
                'page': page
            }
        });
        show_loader(pages/initDetails.total_pages * 100, (initDetails.total_pages - pages++) * 1.2, rerender_html=false);
        // pages++;
        return response;
    }));
    return responses;
}

fetchInitDetails()
    .then(initDetails => fetchAllPages(initDetails))
    .then(responses => {
        let rows_data = []
        for (let i = 0; i < responses.length; i++) {
            for (let j = 0; j < responses[i]['data'].length; j++) {
                rows_data.push(responses[i]['data'][j]);
            }
        }
        grid['ref_db'].renderGrid(rows_data);
        Swal.fire({
            toast: true,
            position: 'bottom',
            icon: 'success',
            title: 'Data loaded',
            showConfirmButton: false,
            timer: 5000,
        });
    })
    .catch(error => {
        grid['ref_db'].renderGrid([]);
    })
    .finally(() => {
        onclose();
    })
</code></pre>
<p><strong>Backend Code:</strong></p>
<pre><code>@blueprint.route('/paginate-refdata')
@login_required
def paginate_refdata():
    try:
        date = request.args['date'].replace('-', '')
        exchange = request.args['exchange'].lower()
        page = int(request.args['page'])
        page_size = int(request.args.get('page_size', '10'))
    except:
        return jsonify({ 'msg': 'Please provide date, exchange & page', 'status': False }), 400

    refdata_df = get_or_load_refdata_from_cache(exchange, date)
    total_pages = int(len(refdata_df) / page_size) + 1
    curr_page = paginate_array(refdata_df, page_size, page).to_dict('records')

    result = {
        'page': page,
        'total_pages': total_pages,
        'page_size': len(curr_page),
        'total': len(refdata_df),
        'data': curr_page,
    }

    response = Response(gzip.compress(JSON.dumps(result).encode('utf-8')), content_type='application/json')
    response.headers['Content-Encoding'] = 'gzip'
    return response
</code></pre>
<p>I want this webserver to utilise the available resources and process all 30 requests in parallel. Currently it takes around 2-3 minutes because only about 4 workers seem to be used, whereas I expect it to finish in under 10 seconds, since one request takes around 3 seconds.</p>
<p>Can anyone please help me and let me know where I am going wrong?</p>
<p><strong>EDIT :</strong></p>
<p>I tested with ab and it seems concurrent, as all processes now seem to have non-zero CPU% in htop:</p>
<p><code>seq 1 300 | xargs -n 1 -P 300 -I {} ab -n 1 -c 1 "10.40.1.56:5000/…{}&page_size=10000"</code></p>
<p>but from the browser only 6 requests are initially in parallel in the logs, and the other requests then come in incrementally.</p>
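That "6 at a time from the browser" observation is consistent with a client-side cap rather than a server one: most browsers limit HTTP/1.1 to roughly six concurrent connections per origin, so a wider gunicorn pool cannot help those requests. The sketch below reproduces the same bounded fan-out pattern in Python; <code>fetch_page</code> is a hypothetical stub standing in for an HTTP call, not part of the application above.

```python
from concurrent.futures import ThreadPoolExecutor

def fetch_page(page: int) -> dict:
    """Stand-in for one paginate-refdata HTTP request (fake payload)."""
    return {"page": page, "rows": [page] * 3}

def fetch_all(total_pages: int, concurrency: int = 6) -> list:
    # A pool of ~6 workers mirrors the per-origin connection cap the browser
    # imposes on HTTP/1.1; results come back in request order via pool.map.
    with ThreadPoolExecutor(max_workers=concurrency) as pool:
        return list(pool.map(fetch_page, range(1, total_pages + 1)))

pages = fetch_all(30)
print(len(pages))        # 30
print(pages[0]["page"])  # 1
```

If that cap is indeed the bottleneck, serving over HTTP/2 (which multiplexes many requests on one connection) or requesting fewer, larger pages tends to matter more than adding gunicorn workers.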
|
<python><flask><promise><gunicorn><htop>
|
2024-01-05 12:58:45
| 0
| 2,277
|
DARK_C0D3R
|
77,764,734
| 17,721,722
|
Better way to insert very large data into PostgreSQL tables
|
<p>What is the best way to insert very large amounts of data into a PostgreSQL table?<br />
OS: Ubuntu 22.04 LTS<br />
DB: PostgreSQL 14<br />
Framework: Python 3.11 Django</p>
<p>For now I am using an INSERT INTO statement of 100,000 rows at a time. It takes 2 minutes for the whole process of inserting an average of 1,000,000 rows, which is within my acceptable range. But I want to know if there is a better way to do this.</p>
<p>It was working fine, but somehow it now takes more time and sometimes gives errors like:</p>
<pre><code>OperationalError: (psycopg2.OperationalError) server closed the connection unexpectedly
</code></pre>
<pre><code>from django.db import connection

cursor = connection.cursor()

batch_size = 100000
offset = 0
while True:
    transaction_list_query = f"SELECT * FROM {source_table} LIMIT {batch_size} OFFSET {offset};"
    cursor.execute(transaction_list_query)
    transaction_list = dictfetchall(cursor)
    if not transaction_list:
        break

    data_to_insert = []
    for transaction in transaction_list:
        # Some Process Intensive Calculations

    insert_query = f'INSERT INTO {per_transaction_table} ({company_ref_id_id_column}, {rrn_column}, {transaction_type_ref_id_id_column}, {transactionamount_column}) VALUES {",".join(data_to_insert)} ON CONFLICT ({rrn_column}) DO UPDATE SET {company_ref_id_id_column} = EXCLUDED.{company_ref_id_id_column};'
    cursor.execute(insert_query)
    offset += batch_size
</code></pre>
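For PostgreSQL bulk loads, <code>COPY</code> is generally much faster than multi-row <code>INSERT</code>; with psycopg2 that is <code>cursor.copy_expert</code> fed from an in-memory buffer. Since no live database is assumed here, the sketch below only builds the buffer that would be streamed — the actual copy call is shown as a comment, and the table/column names are placeholders echoing the question, not verified schema.

```python
import csv
import io

def build_copy_buffer(rows):
    """Serialize rows (a list of tuples) into an in-memory CSV buffer for COPY."""
    buf = io.StringIO()
    writer = csv.writer(buf)
    writer.writerows(rows)
    buf.seek(0)  # rewind so COPY reads from the start
    return buf

rows = [(1, "RRN001", 2, 99.50), (1, "RRN002", 3, 10.00)]
buf = build_copy_buffer(rows)

# With a real psycopg2 cursor, the buffer would then be streamed in one call:
# cursor.copy_expert(
#     "COPY per_transaction_table (company_ref_id, rrn, txn_type, amount) "
#     "FROM STDIN WITH (FORMAT csv)",
#     buf,
# )
print(buf.getvalue().splitlines()[0])  # 1,RRN001,2,99.5
```

Note that <code>COPY</code> has no <code>ON CONFLICT</code> clause; the usual pattern when upserts are needed is to COPY into an unlogged staging table and then run a single <code>INSERT ... SELECT ... ON CONFLICT DO UPDATE</code> from it.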
|
<python><django><postgresql>
|
2024-01-05 12:54:13
| 1
| 501
|
Purushottam Nawale
|
77,764,710
| 4,504,711
|
How to set multiple values in a Pandas dataframe at once?
|
<p>I have a large dataframe and a list of many locations I need to set to a certain value. Currently I'm iterating over the locations to set the values one by one:</p>
<pre><code>import pandas as pd
import numpy as np
#large dataframe
column_names = np.array(range(100))
np.random.shuffle(column_names)
row_names = np.array(range(100))
np.random.shuffle(row_names)
df = pd.DataFrame(columns=column_names, index=row_names)
#index values to be set
ix = np.random.randint(0, 100,1000)
#column values to be set
iy = np.random.randint(0, 100,1000)
#setting the specified locations to 1, one by one
for k in range(len(ix)):
    df.loc[ix[k], iy[k]] = 1
</code></pre>
<p>This appears to be prohibitively slow. For the above example, on my machine, the last for loop takes 0.35 seconds. On the other hand, if I do</p>
<pre><code>df.loc[ix, iy]=1
</code></pre>
<p>only takes 0.035 seconds so it is ten times faster. Unfortunately, it does not give the correct result, as it sets all combinations of elements of <code>ix</code> and <code>iy</code> to 1. I was wondering whether there is a similarly fast way to set values of many locations at once, avoiding the iteration over the locations?</p>
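The underlying distinction is that label-based <code>df.loc[list, list]</code> selects the cross product, while NumPy-style "fancy" indexing on positional arrays pairs the two index arrays elementwise, which is the behaviour wanted here. A dependency-free illustration on a nested list follows; the pandas route it stands for (an assumption to verify against your version, since <code>to_numpy()</code> can return a copy for mixed dtypes) would be <code>pos_ix = df.index.get_indexer(ix); pos_iy = df.columns.get_indexer(iy); df.to_numpy()[pos_ix, pos_iy] = 1</code>.

```python
# 4x4 grid of zeros standing in for the DataFrame's underlying array.
grid = [[0] * 4 for _ in range(4)]
ix = [0, 2, 3]
iy = [1, 1, 0]

# Pairwise semantics: exactly len(ix) cells are set, one per (row, col) pair...
for r, c in zip(ix, iy):
    grid[r][c] = 1

# ...whereas cross-product semantics (what df.loc[ix, iy] does) would set
# len(ix) * len(iy) = 9 cells instead of 3.
pairwise_cells = sum(map(sum, grid))
print(pairwise_cells)  # 3
```

When the array route is viable (single dtype, positions obtainable), the assignment is one vectorised operation, which is why it approaches the speed of the incorrect <code>df.loc[ix, iy] = 1</code> call.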
|
<python><pandas>
|
2024-01-05 12:48:43
| 2
| 2,842
|
Botond
|
77,764,700
| 497,132
|
Very slow list_objects_v2 in minio
|
<p>I'm using the latest MinIO in Docker on two servers. One is on my local network, the other is on a remote server. The configuration is default, and the volume is located on an HDD, not an SSD.</p>
<p>The bucket contains around 400k objects, ~70 GB.</p>
<p>I have the following code (with the latest boto3):</p>
<pre><code>paginator = s3_client.get_paginator('list_objects_v2')
page_iterator = paginator.paginate(Bucket=bucket_name)
for page in page_iterator:
    keys = [obj['Key'] for obj in page.get('Contents', [])]
    ...
</code></pre>
<p>And every iteration is very slow in most (but not 100%) cases on both servers; iterating over 400k objects can take hours. Sometimes it works fine.</p>
<p>On the other hand, PUT/HEAD are very fast, so I don't think it's a disk problem (especially on two different servers with different hardware). CPU/RAM load is low. There is absolutely no other parallel requests to minio (these are internal servers and they are not used yet).</p>
<p>How to speed it up? Maybe I'm doing something wrong and there is a better way to get all keys in bucket?</p>
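One general speed-up worth noting: each <code>ListObjectsV2</code> page depends on the previous page's continuation token, so a single listing is inherently serial, but listings of disjoint prefixes are independent and can run concurrently. The sketch below shows the fan-out shape only; <code>list_keys</code> is a hypothetical stub — with boto3 it would wrap <code>s3_client.get_paginator('list_objects_v2').paginate(Bucket=bucket_name, Prefix=p)</code>, and the prefixes themselves must come from your key naming scheme.

```python
from concurrent.futures import ThreadPoolExecutor

def list_keys(prefix: str) -> list:
    """Stand-in for one serial paginator pass over a single prefix."""
    fake_store = {"a/1", "a/2", "b/1", "c/1", "c/2", "c/3"}
    return sorted(k for k in fake_store if k.startswith(prefix))

def list_bucket_by_prefix(prefixes, workers: int = 8) -> list:
    # Each prefix is listed independently; pool.map preserves prefix order,
    # so the concatenation is deterministic.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        results = pool.map(list_keys, prefixes)
    return [key for chunk in results for key in chunk]

keys = list_bucket_by_prefix(["a/", "b/", "c/"])
print(len(keys))  # 6
```

This only helps if keys are spread across prefixes you can enumerate; if all 400k objects share one flat namespace, the listing stays a single serial token chain.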
|
<python><amazon-web-services><amazon-s3><boto3><minio>
|
2024-01-05 12:47:03
| 1
| 16,835
|
artem
|
77,764,671
| 13,854,431
|
Retrieve data from interactive brokers from inside a docker container
|
<p>I have a docker container and a script with the following code:</p>
<pre><code>import backtrader as bt
import pandas as pd

class MyStrategy(bt.Strategy):

    def __init__(self):
        print('initializing strategy')

    def next(self):
        print('next')

def start():
    print('starting backtrader')
    cerebro = bt.Cerebro()
    store = bt.stores.IBStore(host = '127.0.0.1', port=7497, _debug = True)
    data = store.getdata(dataname='EUR.USD', sectype = 'CASH', exchange = 'IDEALPRO', timeframe = bt.TimeFrame.Seconds)
    print(data.LIVE)
    cerebro.resampledata(data, timeframe = bt.TimeFrame.Seconds, compression = 15)
    # spy_prices = pd.read_csv("SPY.csv", index_col='Date', parse_dates=True)
    # feed_spy = bt.feeds.PandasData(dataname=spy_prices)
    # cerebro.adddata(feed_spy)
    cerebro.addstrategy(MyStrategy)
    cerebro.run()

start()
</code></pre>
<p>I am using the paper trading account running on port 7497 and configurations:
<a href="https://i.sstatic.net/fcwZi.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/fcwZi.png" alt="enter image description here" /></a></p>
<p>Output of the script above is the following:</p>
<pre><code>starting backtrader
4
<error id=-1, errorCode=502, errorMsg=Couldn't connect to TWS. Confirm that "Enable ActiveX and Socket Clients" is enabled on the TWS "Configure->API" menu.>
<error id=-1, errorCode=502, errorMsg=Couldn't connect to TWS. Confirm that "Enable ActiveX and Socket Clients" is enabled on the TWS "Configure->API" menu.>
<error id=-1, errorCode=502, errorMsg=Couldn't connect to TWS. Confirm that "Enable ActiveX and Socket Clients" is enabled on the TWS "Configure->API" menu.>
<error id=-1, errorCode=502, errorMsg=Couldn't connect to TWS. Confirm that "Enable ActiveX and Socket Clients" is enabled on the TWS "Configure->API" menu.>
initializing strategy
</code></pre>
<p>Since the connection with TWS fails, the <strong>def next(self):</strong> is not run. The <strong>def next(self)</strong> is run whenever the data is loaded and for example spy_prices is used.</p>
<p>I think the connection is unsuccessful with these <strong>store = bt.stores.IBStore(host = '127.0.0.1', port=7497, _debug = True)</strong> settings because I am working inside a Docker container. Should the host be different, given that this is the default host? Or should I map a port inside the Docker container?</p>
<p>The dockerfile is the following:</p>
<pre><code># Arguments
ARG IMAGE_VARIANT=slim-buster
ARG PYTHON_VERSION=3.9.5
# Base image
FROM python:${PYTHON_VERSION}-${IMAGE_VARIANT} as py3
# Working directory
WORKDIR data
# Copy requirements.txt
COPY requirements.txt .
# Pip and apt-get install packages
RUN python -m pip install --upgrade pip
RUN apt-get update && \
apt-get install -y libpq-dev gcc
# Pip install dependencies
RUN pip install pandas
RUN pip install backtrader
RUN pip install IbPy2
# Remove requirements file, no longer needed
RUN rm requirements.txt
</code></pre>
<p>Maybe since TWS (Trader Workstation of Interactive Brokers) is running on port 7497, I have to connect to that port from Docker, by exposing a random port (maybe 7400) and then using <code>docker run -it -p 7400:7497 -v ${PWD}:/data image_name bash</code> and then <code>python main.py</code>.</p>
<pre><code> Add EXPOSE 7400 to docker file
Changing to: store = bt.stores.IBStore(host = 'host.docker.internal', port=7400, _debug = True)
Run in terminal: docker run -it -p 7400:7497 -v ${PWD}:/data image_name bash
Run in terminal/container: python main.py
</code></pre>
<p><code>-p 7400:7497</code> maps the ports. In the end I would like to do this using docker compose, because I don't want to work from the terminal but from VS Code. (The proposed method does not work, by the way.)</p>
|
<python><docker><network-programming><port>
|
2024-01-05 12:41:06
| 1
| 457
|
Herwini
|
77,764,495
| 5,799,799
|
How to resolve Quarto version upgrade breaking panel-tabset and pandas.to_html() interaction (1.2->1.3)
|
<p>Rendering the below code chunk with Quarto cli version 1.2 generates 3 tabs labeled a, b and c with a nicely rendered table in each tab. However when I upgrade to version 1.3 all the tables appear in tab a.</p>
<p>Is there a way I should be using panel-tabset differently in version 1.3? Or, if this was hacky and never really supported, are there any alternative ways Quarto can display tabs? I can't find any good alternatives to what I have achieved here in the documentation.</p>
<pre><code>---
title: "Untitled"
format: html
---
# Test doc
::: {.panel-tabset}
```{python}
# | output: asis
import pandas as pd
import random
from IPython.display import display, Markdown, Latex
df = pd.DataFrame({"group":["a"]*3 + ["b"]*3 + ["c"]*3,"value":[random.randint(1,10) for i in range(9)]})
for group in df["group"].drop_duplicates():
    display(Markdown(f"\n"))
    display(Markdown(f"## {group}\n"))
    display(Markdown(f"\n"))
    print(df[df["group"]==group].to_html())
```
:::
</code></pre>
|
<python><markdown><quarto>
|
2024-01-05 12:09:53
| 1
| 435
|
DataJack
|
77,764,228
| 425,678
|
Pandas/SciPy - High Commit Memory Usage - Windows
|
<p><em>Update: After posting this question I discovered the same issue was occurring with SciPy. I have updated the title so others may discover this post. I have left the original question unchanged. A solution is provided below.</em></p>
<p>Original Question:</p>
<p>Is there a way to reduce the commit memory usage of Pandas on Windows when multiprocessing?</p>
<p>I have written a multiprocessing application using Python on Windows and am getting MemoryErrors, driven largely by importing Pandas in each sub-process. When importing Pandas I notice in taskmgr an increase in commit memory of roughly 0.5GB. When multiprocessing across 16 cores, this becomes a significant issue as 8GB of commit memory is used up before the "real" code runs. This leaves insufficient commit memory for the application, causing MemoryErrors to be thrown. The private working memory doesn't increase in the same way, so RAM is not the constraint.</p>
<p>I will look to have the pagefile increased, but this is not an ideal solution as the application will be rolled out to others. Is there a way on Windows to reduce the import size, for example by sharing the import across sub-processes? Or are there any particular versions/flags that can reduce the allocation made by pandas?</p>
<p><strong>Python</strong>: 3.11 (64bit), <strong>Pandas</strong>: 2.0.3, <strong>OS</strong>: Windows 10 Enterprise</p>
|
<python><pandas><windows>
|
2024-01-05 11:24:37
| 1
| 810
|
user425678
|
77,763,904
| 3,809,375
|
"RuntimeError: Event loop is closed" when using unit IsolatedAsyncioTestCase to test fastapi routes
|
<p>Consider this mcve:</p>
<p><strong>requirements.txt:</strong></p>
<pre><code>fastapi
httpx
motor
pydantic[email]
python-bsonjs
uvicorn==0.24.0
</code></pre>
<p><strong>main.py:</strong></p>
<pre><code>import asyncio
import unittest
from typing import Optional

import motor.motor_asyncio
from bson import ObjectId
from fastapi import APIRouter, Body, FastAPI, HTTPException, Request, status
from fastapi.testclient import TestClient
from pydantic import BaseModel, ConfigDict, EmailStr, Field
from pydantic.functional_validators import BeforeValidator
from typing_extensions import Annotated

# -------- Model --------
PyObjectId = Annotated[str, BeforeValidator(str)]

class ItemModel(BaseModel):
    id: Optional[PyObjectId] = Field(alias="_id", default=None)
    name: str = Field(...)
    email: EmailStr = Field(...)
    model_config = ConfigDict(
        populate_by_name=True,
        arbitrary_types_allowed=True,
        json_schema_extra={
            "example": {"name": "Jane Doe", "email": "jdoe@example.com"}
        },
    )

# -------- Router --------
mcve_router = APIRouter()

@mcve_router.post(
    "",
    response_description="Add new item",
    response_model=ItemModel,
    status_code=status.HTTP_201_CREATED,
    response_model_by_alias=False,
)
async def create_item(request: Request, item: ItemModel = Body(...)):
    db_collection = request.app.db_collection
    new_bar = await db_collection.insert_one(
        item.model_dump(by_alias=True, exclude=["id"])
    )
    created_bar = await db_collection.find_one({"_id": new_bar.inserted_id})
    return created_bar

@mcve_router.get(
    "/{id}",
    response_description="Get a single item",
    response_model=ItemModel,
    response_model_by_alias=False,
)
async def show_item(request: Request, id: str):
    db_collection = request.app.db_collection
    if (item := await db_collection.find_one({"_id": ObjectId(id)})) is not None:
        return item
    raise HTTPException(status_code=404, detail=f"item {id} not found")

if __name__ == "__main__":
    app = FastAPI()
    app.include_router(mcve_router, tags=["item"], prefix="/item")
    app.db_client = motor.motor_asyncio.AsyncIOMotorClient(
        "mongodb://127.0.0.1:27017/?readPreference=primary&appname=MongoDB%20Compass&ssl=false"
    )
    app.db = app.db_client.mcve_db
    app.db_collection = app.db.get_collection("bars")

    class TestAsync(unittest.IsolatedAsyncioTestCase):
        async def asyncSetUp(self):
            self.client = TestClient(app)

        async def asyncTearDown(self):
            self.client.app.db_client.close()

        def run_async_test(self, coro):
            loop = asyncio.get_event_loop()
            return loop.run_until_complete(coro)

        def test_show_item(self):
            bar_data = {"name": "John Doe", "email": "johndoe@example.com"}
            create_response = self.client.post("/item", json=bar_data)
            self.assertEqual(create_response.status_code, 201)
            created_item_id = create_response.json().get("id")
            self.assertIsNotNone(created_item_id)

            response = self.client.get(f"/item/{created_item_id}")
            self.assertEqual(response.status_code, 200)

    unittest.main()
</code></pre>
<p>When I try to run it I'll get this crash:</p>
<pre><code>(venv) d:\mcve>python mcve.py
E
======================================================================
ERROR: test_show_item (__main__.TestBarRoutesAsync.test_show_item)
----------------------------------------------------------------------
Traceback (most recent call last):
File "D:\software\python\3.11.3-amd64\Lib\unittest\async_case.py", line 90, in _callTestMethod
if self._callMaybeAsync(method) is not None:
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\software\python\3.11.3-amd64\Lib\unittest\async_case.py", line 112, in _callMaybeAsync
return self._asyncioRunner.run(
^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\software\python\3.11.3-amd64\Lib\asyncio\runners.py", line 118, in run
return self._loop.run_until_complete(task)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\software\python\3.11.3-amd64\Lib\asyncio\base_events.py", line 653, in run_until_complete
return future.result()
^^^^^^^^^^^^^^^
File "d:\mcve\mcve.py", line 87, in test_show_item
response = self.client.get(f"/item/{created_item_id}")
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\mcve\venv\Lib\site-packages\starlette\testclient.py", line 502, in get
return super().get(
^^^^^^^^^^^^
File "D:\mcve\venv\Lib\site-packages\httpx\_client.py", line 1055, in get
return self.request(
^^^^^^^^^^^^^
File "D:\mcve\venv\Lib\site-packages\starlette\testclient.py", line 468, in request
return super().request(
^^^^^^^^^^^^^^^^
File "D:\mcve\venv\Lib\site-packages\httpx\_client.py", line 828, in request
return self.send(request, auth=auth, follow_redirects=follow_redirects)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\mcve\venv\Lib\site-packages\httpx\_client.py", line 915, in send
response = self._send_handling_auth(
^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\mcve\venv\Lib\site-packages\httpx\_client.py", line 943, in _send_handling_auth
response = self._send_handling_redirects(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\mcve\venv\Lib\site-packages\httpx\_client.py", line 980, in _send_handling_redirects
response = self._send_single_request(request)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\mcve\venv\Lib\site-packages\httpx\_client.py", line 1016, in _send_single_request
response = transport.handle_request(request)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\mcve\venv\Lib\site-packages\starlette\testclient.py", line 344, in handle_request
raise exc
File "D:\mcve\venv\Lib\site-packages\starlette\testclient.py", line 341, in handle_request
portal.call(self.app, scope, receive, send)
File "D:\mcve\venv\Lib\site-packages\anyio\from_thread.py", line 288, in call
return cast(T_Retval, self.start_task_soon(func, *args).result())
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\software\python\3.11.3-amd64\Lib\concurrent\futures\_base.py", line 456, in result
return self.__get_result()
^^^^^^^^^^^^^^^^^^^
File "D:\software\python\3.11.3-amd64\Lib\concurrent\futures\_base.py", line 401, in __get_result
raise self._exception
File "D:\mcve\venv\Lib\site-packages\anyio\from_thread.py", line 217, in _call_func
retval = await retval_or_awaitable
^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\mcve\venv\Lib\site-packages\fastapi\applications.py", line 1054, in __call__
await super().__call__(scope, receive, send)
File "D:\mcve\venv\Lib\site-packages\starlette\applications.py", line 116, in __call__
await self.middleware_stack(scope, receive, send)
File "D:\mcve\venv\Lib\site-packages\starlette\middleware\errors.py", line 186, in __call__
raise exc
File "D:\mcve\venv\Lib\site-packages\starlette\middleware\errors.py", line 164, in __call__
await self.app(scope, receive, _send)
File "D:\mcve\venv\Lib\site-packages\starlette\middleware\exceptions.py", line 62, in __call__
await wrap_app_handling_exceptions(self.app, conn)(scope, receive, send)
File "D:\mcve\venv\Lib\site-packages\starlette\_exception_handler.py", line 55, in wrapped_app
raise exc
File "D:\mcve\venv\Lib\site-packages\starlette\_exception_handler.py", line 44, in wrapped_app
await app(scope, receive, sender)
File "D:\mcve\venv\Lib\site-packages\starlette\routing.py", line 746, in __call__
await route.handle(scope, receive, send)
File "D:\mcve\venv\Lib\site-packages\starlette\routing.py", line 288, in handle
await self.app(scope, receive, send)
File "D:\mcve\venv\Lib\site-packages\starlette\routing.py", line 75, in app
await wrap_app_handling_exceptions(app, request)(scope, receive, send)
File "D:\mcve\venv\Lib\site-packages\starlette\_exception_handler.py", line 55, in wrapped_app
raise exc
File "D:\mcve\venv\Lib\site-packages\starlette\_exception_handler.py", line 44, in wrapped_app
await app(scope, receive, sender)
File "D:\mcve\venv\Lib\site-packages\starlette\routing.py", line 70, in app
response = await func(request)
^^^^^^^^^^^^^^^^^^^
File "D:\mcve\venv\Lib\site-packages\fastapi\routing.py", line 299, in app
raise e
File "D:\mcve\venv\Lib\site-packages\fastapi\routing.py", line 294, in app
raw_response = await run_endpoint_function(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\mcve\venv\Lib\site-packages\fastapi\routing.py", line 191, in run_endpoint_function
return await dependant.call(**values)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "d:\mcve\mcve.py", line 57, in show_item
if (item := await db_collection.find_one({"_id": ObjectId(id)})) is not None:
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\mcve\venv\Lib\site-packages\motor\metaprogramming.py", line 75, in method
return framework.run_on_executor(
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\mcve\venv\Lib\site-packages\motor\frameworks\asyncio\__init__.py", line 85, in run_on_executor
return loop.run_in_executor(_EXECUTOR, functools.partial(fn, *args, **kwargs))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\software\python\3.11.3-amd64\Lib\asyncio\base_events.py", line 816, in run_in_executor
self._check_closed()
File "D:\software\python\3.11.3-amd64\Lib\asyncio\base_events.py", line 519, in _check_closed
raise RuntimeError('Event loop is closed')
RuntimeError: Event loop is closed
----------------------------------------------------------------------
Ran 1 test in 0.074s
FAILED (errors=1)
</code></pre>
<p>The line that's producing that crash is <code>response = self.client.get(f"/item/{created_item_id}")</code> but I don't understand what the issue is.</p>
<p>Btw, I'm not interested in using pytest at all; the main purpose of this question is to figure out what's wrong and how to fix the current MCVE.</p>
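<p>For context, I can reproduce the same <code>RuntimeError</code> in isolation with plain asyncio (a sketch, nothing to do with FastAPI or Motor), which suggests something in the test is trying to run work on a loop after it was closed:</p>

```python
import asyncio

# Reproducing the error in isolation: any attempt to run work on a loop
# that has already been closed raises the same RuntimeError.
loop = asyncio.new_event_loop()
loop.close()
coro = asyncio.sleep(0)
try:
    loop.run_until_complete(coro)
except RuntimeError as e:
    print(e)  # Event loop is closed
finally:
    coro.close()  # suppress the "never awaited" warning
```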
<p>Thanks in advance!</p>
|
<python><python-3.x><unit-testing><asynchronous><fastapi>
|
2024-01-05 10:23:46
| 2
| 9,975
|
BPL
|
77,763,824
| 1,059,860
|
Tokenize a string in Python
|
<p>This is what I'm trying to do:</p>
<pre><code>from typer import Option, Typer, echo
app = Typer(name="testapp")
@app.command("run", help="Run a command in my special environment")
def run(
command: str,
) -> None:
print (command)
</code></pre>
<p>When running this testapp using</p>
<p><code>testapp run "foobar --with baz --exclude bar --myfancyotheroption"</code></p>
<p>It works perfectly well because it accepts "command" as a string; instead, I wish to be able to tokenize it and just run it as</p>
<p><code>testapp run foobar --with baz --exclude bar --myfancyotheroption</code></p>
<p>In the current context, I might have anything as a part of the command, so how do I approach this?</p>
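<p>For what it's worth, with the quoted form I can already tokenize the string inside <code>run</code> using the standard library (a sketch; what I'm really after is accepting the unquoted form):</p>

```python
import shlex

# With the quoted form, the single string can at least be split into
# shell-style tokens with the standard library:
command = "foobar --with baz --exclude bar --myfancyotheroption"
tokens = shlex.split(command)
print(tokens)  # ['foobar', '--with', 'baz', '--exclude', 'bar', '--myfancyotheroption']
```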
|
<python><typer>
|
2024-01-05 10:08:42
| 1
| 2,258
|
tandem
|
77,763,574
| 17,525,745
|
httpstat pip installed but not working as a command httpstat
|
<p>I'm trying to use the Python library "<a href="https://pypi.org/project/httpstat/" rel="nofollow noreferrer">httpstat</a>".</p>
<p><a href="https://i.sstatic.net/S6HT4.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/S6HT4.png" alt="enter image description here" /></a></p>
<p>The official docs say that if I install it through pip I can use httpstat as a command, but I can't do that. It just throws an error that the command isn't recognized.</p>
<p><a href="https://i.sstatic.net/U5fPX.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/U5fPX.png" alt="enter image description here" /></a></p>
<p>As you can see it's already installed.</p>
<p><a href="https://i.sstatic.net/fZkQ2.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/fZkQ2.png" alt="enter image description here" /></a></p>
<p>But I can't use it as a command.</p>
|
<python><pip>
|
2024-01-05 09:19:12
| 1
| 476
|
UNRIVALLEDKING
|
77,763,387
| 1,922,291
|
Missing module error when installing lmdb (windows)
|
<p>I am trying to install wikipedia2vec on Windows.
I am running "pip install wikipedia2vec".</p>
<p>It throws an error: Building py-lmdb from source on Windows requires the "patch-ng" python module while collecting lmdb (Full stack trace below)</p>
<p>I have already run "pip install patch-ng" and that package installed successfully.</p>
<p>I dug around the web and found a suggestion to install VS Build Tools. Done that and it did not help (that seemed like a long shot).</p>
<p>I am out of ideas as to what might be causing it...</p>
<p>Here's the full stack trace:
<a href="https://i.sstatic.net/LAOkb.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/LAOkb.png" alt="enter image description here" /></a></p>
|
<python><pip>
|
2024-01-05 08:40:19
| 1
| 428
|
QuarterlyQuotaOfQuotes
|
77,763,369
| 7,133,942
|
How to align two plots in Matplotlib
|
<p>I have the following matplotlib code</p>
<pre><code>import matplotlib.pyplot as plt
import numpy as np
# Time of day values
time_of_day = [
"00:00", "01:00", "02:00", "03:00", "04:00", "05:00", "06:00", "07:00", "08:00",
"09:00", "10:00", "11:00", "12:00", "13:00", "14:00", "15:00", "16:00", "17:00",
"18:00", "19:00", "20:00", "21:00", "22:00", "23:00"
]
# Electricity load values in W
electricity_load = np.array([
5460, 177, 163, 745, 770, 1049, 1090, 868, 277, 3117, 3416, 1383, 1551, 4324, 969, 431, 703, 1201, 898, 4969,7839, 259, 410, 617
])
# Convert electricity load values to kW
electricity_load_kW = electricity_load / 1000
# Price values in Cent/kWh
price_values = [
27.1, 26.5, 26.0, 25.1, 26.6, 27.5, 34.4, 51.3, 45.3, 44.3, 44.3, 41.3, 38.1, 35.5,
33.9, 37, 41.4, 48.6, 53.4, 48.6, 43.4, 38.7, 37.8, 27.4,
]
# Create subplots within the same figure
fig, (ax1, ax2) = plt.subplots(2, 1, figsize=(12, 10), sharex=True)
# Plotting the bar diagram
ax1.bar(time_of_day, electricity_load_kW[:len(time_of_day)], color='#FFD700', edgecolor='black')
# Labeling the first plot
ax1.set_ylabel('Electricity Load (kW)', fontsize=18)
ax1.grid(axis='y', linestyle='--', alpha=0.7)
# Plotting the step line plot
ax2.step(time_of_day, price_values, where='post', color='limegreen', linewidth=6)
# Labeling the second plot
ax2.set_xlabel('Time of Day', fontsize=18)
ax2.set_ylabel('Price (Cent/kWh)', fontsize=18)
ax2.grid(axis='y', linestyle='--', alpha=0.7)
# Adjust x-axis limits to eliminate empty space
ax1.set_xlim(time_of_day[0], time_of_day[-1])
# Increase thickness and rotation of x-tick labels
for ax in [ax1, ax2]:
ax.tick_params(axis='x', which='both', labelsize=14, width=2)
# Set x-tick labels with rotation and horizontal alignment
plt.sca(ax1)
plt.xticks(rotation=45, ha='right')
plt.sca(ax2)
plt.xticks(rotation=45, ha='right')
# Tighten the layout
plt.tight_layout()
# Save the figure and plot it
plt.savefig('C:/Users/wi9632/Desktop/temp_combined.png', dpi=100)
plt.show()
</code></pre>
<p>So I have a bar diagram on the upper part, and on the lower part there is a step line plot. What I want is for the two subplots to be horizontally aligned: the x-tick positions (00:00 to 01:00, 01:00 to 02:00, etc.) should be at the same x-position in both plots, which is currently not the case.</p>
|
<python><matplotlib>
|
2024-01-05 08:36:49
| 1
| 902
|
PeterBe
|
77,763,199
| 4,892,210
|
Not able to get live streaming from DJI Tello on ubuntu 22
|
<p>I am trying to do live video streaming using a DJI Tello drone. Initially I was able to run the same code on a Windows machine, but when I tried on Ubuntu 22 I am not able to get any OpenCV popup for live streaming.</p>
<p>Here is the output of code :</p>
<pre><code>(base) nirbhay@nirbhay:~/Desktop/Tello_code$ python LiveStream.py
[INFO] tello.py - 129 - Tello instance was initialized. Host: '192.168.10.1'. Port: '8889'.
[INFO] tello.py - 438 - Send command: 'command'
[INFO] tello.py - 462 - Response command: 'ok'
Drone Battery Percentage:
100
Live Stream Address:
udp://@0.0.0.0:11111
[INFO] tello.py - 438 - Send command: 'streamon'
[INFO] tello.py - 462 - Response streamon: 'ok'
non-existing PPS 0 referenced
non-existing PPS 0 referenced
decode_slice_header error
no frame!
non-existing PPS 0 referenced
non-existing PPS 0 referenced
decode_slice_header error
no frame!
non-existing PPS 0 referenced
non-existing PPS 0 referenced
decode_slice_header error
no frame!
non-existing PPS 0 referenced
non-existing PPS 0 referenced
decode_slice_header error
no frame!
non-existing PPS 0 referenced
non-existing PPS 0 referenced
decode_slice_header error
no frame!
non-existing PPS 0 referenced
non-existing PPS 0 referenced
decode_slice_header error
no frame!
error while decoding MB 22 30, bytestream 127
error while decoding MB 58 37, bytestream -6
</code></pre>
<p>Here is the code:</p>
<pre><code>from djitellopy import tello
import cv2
import time
#import libh264decoder
drone = tello.Tello()
drone.connect()
print("Drone Battery Percentage:")
print(drone.get_battery())
print("Live Stream Address:")
print(drone.get_udp_video_address())
drone.streamon()
while True:
img=drone.get_frame_read().frame
#print("image:",img)
img=cv2.resize(img,(660,420))
cv2.imshow("Image",img)
#time.sleep(3)
cv2.waitKey(1)
#time.sleep(1)
</code></pre>
<p>python packages list and version:</p>
<pre><code>djitellopy 2.5.0
opencv-python 4.9.0.80
</code></pre>
<p>OS</p>
<pre><code>Ubuntu 22.04.3 LTS
</code></pre>
|
<python><video-streaming><dji-sdk>
|
2024-01-05 07:59:17
| 0
| 321
|
NIrbhay Mathur
|
77,763,138
| 7,113,481
|
pymongo insert all data in string format
|
<p>I am importing a CSV and inserting it into MongoDB using pymongo, but for some reason every field is stored as a string where I was expecting a double. Below is the Python code.</p>
<pre><code>def saveToMongo():
print("inside saveToMongo")
collection = mydb['country']
header = ['country_id','country_name','zone_id','minLat','maxLat','minLong','maxLong']
csvFile = open('country.csv', 'r')
reader = csv.DictReader(csvFile)
print(reader)
for each in reader:
row = {}
for field in header:
row[field] = each[field]
#print(row)
collection.insert(row)
</code></pre>
<p>and here is the csv file</p>
<pre><code>country_id,country_name,zone_id,minLat,maxLat,minLong,maxLong
2,Bangladesh,1,20.6708832870000,26.4465255803000,88.0844222351000,92.6727209818000
3,"Sri Lanka",1,5.9683698592300,9.8240776636100,79.6951668639000,81.7879590189000
4,Pakistan,1,23.6919650335000,37.1330309108000,60.8742484882000,77.8374507995000
5,Bhutan,1,26.7194029811000,28.2964385035000,88.8142484883000,92.1037117859000
</code></pre>
<p>I am unable to understand why Python is storing the data in string format.</p>
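<p>While debugging, a minimal check (a sketch with inline data, not my real file) seems to confirm the values are strings before they ever reach pymongo, so numeric fields would need explicit conversion:</p>

```python
import csv
import io

# csv.DictReader always yields strings; numeric fields must be converted
# explicitly before inserting if doubles are wanted in MongoDB.
sample = "minLat,maxLat\n20.67,26.44\n"
row = next(csv.DictReader(io.StringIO(sample)))
print(type(row["minLat"]))  # <class 'str'>

converted = {k: float(v) for k, v in row.items()}
print(type(converted["minLat"]))  # <class 'float'>
```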
<p>When I try to insert the data using mongoimport I get the below error:</p>
<pre><code>fatal error: unrecognized DWARF version in .debug_info at 6
runtime stack:
panic during panic
runtime stack:
stack trace unavailable
</code></pre>
|
<python><python-3.x><mongodb><csv><pymongo>
|
2024-01-05 07:47:38
| 2
| 1,426
|
gaurav singh
|
77,762,764
| 12,935,622
|
How to remove a portion from a 3d plane?
|
<p>I'm plotting in matplotlib. I want to remove a circle on the planes where they intersect with the catenoid surfaces. It's ok to have the intersection hard coded, as long as it looks fine visually.</p>
<pre><code>import numpy as np
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D
# Parametric equations for the catenoid surface
def catenoid(u, v, a=1):
x = a * np.cosh(u/a) * np.cos(v)
y = a * np.cosh(u/a) * np.sin(v)
z = u
return x, y, z
# Generate u and v values
u = np.linspace(-2, 2, 100)
v = np.linspace(0, 2 * np.pi, 100)
u, v = np.meshgrid(u, v)
# Create the figure
fig = plt.figure(figsize=(12, 12))
ax = fig.add_subplot(111, projection="3d", computed_zorder=False)
ax.set_box_aspect(aspect=(6, 4, 3))
ax.set_axis_off()
# Plot the first catenoid surface without gridlines
x1, y1, z1 = catenoid(u, v)
surf1 = ax.plot_surface(x1, y1, z1, cmap='viridis', edgecolor='grey', alpha=0.5, vmin=-2, vmax=2)
# Plot the second catenoid surface with an offset along the y-axis without gridlines
x2, y2, z2 = catenoid(u, v)
x2_offset = x2 + 8 # Adjust the offset value as needed
surf2 = ax.plot_surface(x2_offset, y2, z2, cmap='viridis', edgecolor='grey', alpha=0.5, vmin=-2, vmax=2)
# Plot the first plane with top and bottom colors of the first catenoid surface
xx, yy = np.meshgrid(np.linspace(-5, 13, 100), np.linspace(-5, 5, 100))
zz = yy * 0 - 2
ax.plot_surface(xx, yy, zz, zorder=-1, alpha=0.5, cmap='viridis', edgecolor='none', vmin=-2, vmax=2)
zz = yy * 0 + 2
ax.plot_surface(xx, yy, zz, zorder=1, alpha=0.5, cmap='viridis', edgecolor='none', vmin=-2, vmax=2)
# Show the plot
plt.show()
</code></pre>
<p>Currently, it looks like:
<a href="https://i.sstatic.net/H0idc.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/H0idc.png" alt="enter image description here" /></a></p>
<p>I want to get rid of the grey parts, so the plane is not covering the catenoid openings:
<a href="https://i.sstatic.net/1nk42.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/1nk42.png" alt="enter image description here" /></a></p>
|
<python><matplotlib><3d><geometry><visualization>
|
2024-01-05 06:04:35
| 1
| 1,191
|
guckmalmensch
|
77,762,538
| 224,511
|
Python unittests: how to get an AsyncMock to block until a certain point
|
<p>I have some async Python code I want to test, which include something like</p>
<pre><code>async def task(self):
while True:
await self.thing.change()
...
self.run_callbacks()
</code></pre>
<p>In my test file, I'm starting the task in the setUp() function like so:</p>
<pre><code>def setUp(self):
...
self.loop = asyncio.get_event_loop()
self.task = self.loop.create_task(self.object_under_test.task())
</code></pre>
<p>then wanting to verify that the callbacks ran:</p>
<pre><code>def test_callback(self):
callback = mock.MagicMock()
self.object_under_test.add_callback('cb1', callback)
# ???
callback.assert_called_once()
</code></pre>
<p>Where I'm stuck is how to mock <code>thing.change()</code> in the task. It needs to block until I tell it to (at ???) at which point it will return. If I mock it after starting the task, then it won't make any difference as the task is already waiting for the un-mocked function to return. If I mock it prior to starting the task, then it will always return something and not block.</p>
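<p>One direction I've been sketching (standalone and hypothetical, not my real code) is backing the mock with an <code>asyncio.Event</code> so the test controls when the awaited call returns:</p>

```python
import asyncio
from unittest import mock

async def demo():
    gate = asyncio.Event()
    thing = mock.AsyncMock()
    thing.change.side_effect = gate.wait  # change() now blocks until gate.set()

    waiter = asyncio.ensure_future(thing.change())
    await asyncio.sleep(0)   # let the call start and suspend on the event
    assert not waiter.done()

    gate.set()               # this would be the "???" release point
    await waiter
    thing.change.assert_awaited_once()
    print("released")

asyncio.run(demo())
```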
<p>Any suggestions how I might achieve this?</p>
|
<python><asynchronous><python-asyncio><python-unittest><python-unittest.mock>
|
2024-01-05 04:35:50
| 1
| 3,859
|
askvictor
|
77,762,482
| 13,645,093
|
how to iterate through the dataset from hugging face?
|
<p>I have a dataset saved on a local drive, and I can load and convert this dataset to JSON or CSV format and save it back to my local drive. I want to know if there is a way I can stream this data and iterate through it item by item. How can I iterate through ds? I can save it as a CSV and process the data row by row, or can I stream the data and iterate through each item?</p>
<pre><code>from datasets import load_from_disk
import json
ds = load_from_disk(f"local_path"...)
ds.to_json("mydata.json")
</code></pre>
|
<python><huggingface-datasets>
|
2024-01-05 04:19:51
| 1
| 689
|
ozil
|
77,762,467
| 9,008,162
|
How to fix ArgumentError: List argument must consist only of tuples or dictionaries?
|
<p>This is my method:</p>
<pre><code>def print_relative_strength_by_ticker(self, tickers): # print relative strength
tickers_list = tickers.split(" ")
if len(tickers_list) == 1:
query = "SELECT ticker, rs FROM companyinfo WHERE ticker = %s"
params = [tickers_list[0]] # Wrap the single ticker in a list or a tuple
else:
query = "SELECT ticker, rs FROM companyinfo WHERE ticker IN %(tickers_tuple)s" # The key 'tickers' is a string that represents the named parameter in the SQL query (%(tickers)s)
params = {'tickers_tuple': tuple(tickers_list)} # creates a dictionary with a single key-value pair
df = self.db.read_sql(query, params=params)
for index, row in df.iterrows():
print("Ticker:", row['ticker'], "\tRS:", row['rs'])
</code></pre>
<p>I updated Spyder but did not make any changes to my code. Now I'm getting an error I've never had before, and I'm not sure if it's SQLAlchemy-related or not; how can I handle this?</p>
<pre><code>s.print_relative_strength_by_ticker('estc')
Traceback (most recent call last):
Cell In[27], line 1
s.print_relative_strength_by_ticker('estc')
File D:\Stocks/Code\stock_analysis_class.py:111 in print_relative_strength_by_ticker
df = self.db.read_sql(query, params=params)
File D:\Stocks/Code\mysql_class.py:33 in read_sql
df = pd.read_sql(query, conn, params=params, index_col=index_col)
File C:\ProgramData\Anaconda3\lib\site-packages\pandas\io\sql.py:682 in read_sql
return pandas_sql.read_query(
File C:\ProgramData\Anaconda3\lib\site-packages\pandas\io\sql.py:1776 in read_query
result = self.execute(sql, params)
File C:\ProgramData\Anaconda3\lib\site-packages\pandas\io\sql.py:1599 in execute
return self.con.exec_driver_sql(sql, *args)
File C:\ProgramData\Anaconda3\lib\site-packages\sqlalchemy\engine\base.py:1767 in exec_driver_sql
distilled_parameters = _distill_raw_params(parameters)
File lib\\sqlalchemy\\cyextension\\util.pyx:38 in sqlalchemy.cyextension.util._distill_raw_params
File lib\\sqlalchemy\\cyextension\\util.pyx:17 in sqlalchemy.cyextension.util._check_item
ArgumentError: List argument must consist only of tuples or dictionaries
</code></pre>
|
<python><mysql><sqlalchemy>
|
2024-01-05 04:11:20
| 1
| 775
|
saga
|
77,762,392
| 8,076,158
|
How do I set environment variables in Code Cells in Scientific Mode?
|
<p>Pycharm is not reflecting the environment variable I set in a code cell. The following code runs in Pycharm (hitting play on the entire script):</p>
<pre><code>#%% <- Scientific Mode: on
import os
from pathlib import Path
CACHE_DIR = Path("/home/me/numba_cache")
CACHE_DIR.touch(exist_ok=True)
os.environ["NUMBA_CACHE_DIR"] = str(CACHE_DIR)
from numba import jit
@jit(nopython=True, cache=True)
def square(x):
return x ** 2
print(square(5))
</code></pre>
<p>However when I run the code cell, I get this error:</p>
<pre><code>Traceback (most recent call last):
File "/home/me/.pycharm_helpers/pydev/pydevconsole.py", line 364, in runcode
coro = func()
^^^^^^
File "<input>", line 11, in <module>
File "/home/me/.pyenv/versions/3.11.1/envs/my_env/lib/python3.11/site-packages/numba/core/decorators.py", line 234, in wrapper
disp.enable_caching()
File "/home/me/.pyenv/versions/3.11.1/envs/my_env/lib/python3.11/site-packages/numba/core/dispatcher.py", line 863, in enable_caching
self._cache = FunctionCache(self.py_func)
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/me/.pyenv/versions/3.11.1/envs/my_env/lib/python3.11/site-packages/numba/core/caching.py", line 601, in __init__
self._impl = self._impl_class(py_func)
^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/me/.pyenv/versions/3.11.1/envs/my_env/lib/python3.11/site-packages/numba/core/caching.py", line 337, in __init__
raise RuntimeError("cannot cache function %r: no locator available "
RuntimeError: cannot cache function 'square': no locator available for file '<input>'
</code></pre>
<p>How do I get this to work? I've tried setting the env var in the run configuration; however, that only works when the script is run, not for an individual code cell.</p>
|
<python><pycharm><numba>
|
2024-01-05 03:42:29
| 0
| 1,063
|
GlaceCelery
|
77,762,324
| 4,582,026
|
How to detect only yellow objects in an image with Python OpenCV?
|
<p>I am attempting to detect only yellow objects in a screenshot I've taken, and I'm using the following code in Python.</p>
<pre><code>import cv2
import numpy as np
img=cv2.imread('screenshot.png')
# B, G, R boundaries
yellow = ([0, 200, 200], [60, 255, 255])
boundaries = [yellow]  # the loop below expects a list of (lower, upper) pairs
for (lower, upper) in boundaries:
lower = np.array(lower, dtype="uint8")
upper = np.array(upper, dtype="uint8")
mask = cv2.inRange(img, lower, upper)
result= cv2.bitwise_and(img, img, mask=mask)
ret, thresh = cv2.threshold(mask, 40, 255, 0)
contours, hierarchy = cv2.findContours(thresh, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
if len(contours) != 0:
'do something'
</code></pre>
<p>It should highlight my detected objects and the background should be black - <code>len(contours) > 0</code> if any yellow is detected. However, looking into the mask/output arrays, they are always completely black so <code>len(contours) = 0</code>. I'm unsure what I am doing wrong here.</p>
|
<python><opencv><image-processing><computer-vision>
|
2024-01-05 03:12:35
| 1
| 549
|
Vik
|
77,762,317
| 23,190,147
|
Python google search not returning expected output
|
<p>I'm trying to do a google search using python, it's basically web scraping. I'm searching using a specific phrase, and I'm expecting a detailed output. I also would like to avoid using other web scraping modules, such as <code>bs4</code> (BeautifulSoup4) or <code>urllib</code>.</p>
<p>I'm using the <code>googlesearch-python</code> module, but it's not working as expected:</p>
<pre><code>from googlesearch import search
search("my search", advanced=True)
</code></pre>
<p>but I get this output:</p>
<p><generator object search at 0x000001B4F198F290></p>
<p>and when I try using the print() function:</p>
<pre><code>from googlesearch import search
print(search("my search", advanced=True))
</code></pre>
<p>it still returns:</p>
<p><generator object search at 0x000001B4F198F290>.</p>
<p>(Advanced is an argument that makes the search more detailed and return more information.)</p>
<p>Sometimes, I also get no output at all (meaning that the program ends immediately, as if there were 0 lines of code in it), and other times I get the generator object output. Is there a way to return a clean, detailed output with the <code>googlesearch-python</code> module? (This question is a follow up to my previous question about the <code>googlesearch</code> module: "Python google search 'advanced' argument unexpected")</p>
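<p>From what I've read, the generator object output just means nothing has been iterated yet; a plain-Python sketch of the same behaviour (no googlesearch involved):</p>

```python
# Printing a generator object only shows its repr; iterating it is what
# actually produces the values.
def results():
    yield "result 1"
    yield "result 2"

print(results())        # <generator object results at 0x...>
for item in results():  # iteration yields the values one by one
    print(item)
```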
|
<python><web-scraping><google-search>
|
2024-01-05 03:10:44
| 1
| 450
|
5rod
|
77,762,241
| 1,101,750
|
How do you use Python's string interning?
|
<p>I was curious about the performance benefits on dictionary lookups when using Python's <code>sys.intern</code>. As a toy example to measure that, I implemented the following code snippets:</p>
<p>Without interning:</p>
<pre class="lang-py prettyprint-override"><code>import random
from uuid import uuid4
keys = [str(uuid4()) for _ in range(1_000_000)]
values = [random.random() for _ in range(1_000_000)]
my_dict = dict(zip(keys, values))
keys_sample = random.choices(keys, k=50_000)
def get_values(d, ks):
return [d[k] for k in ks]
</code></pre>
<p>Test results using ipython:</p>
<pre><code>%timeit get_values(my_dict, keys_sample)
8.92 ms ± 17.1 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
</code></pre>
<p>With interning:</p>
<pre class="lang-py prettyprint-override"><code>import sys
import random
from uuid import uuid4
keys = [sys.intern(str(uuid4())) for _ in range(1_000_000)]
values = [random.random() for _ in range(1_000_000)]
my_dict = dict(zip(keys, values))
keys_sample = random.choices(keys, k=50_000)
def get_values(d, ks):
return [d[k] for k in ks]
</code></pre>
<p>Test results:</p>
<pre><code>%timeit get_values(my_dict, keys_sample)
8.83 ms ± 17.1 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
</code></pre>
<p>So, no meaningful difference between both cases. I tried pumping up the dict size and sampling, but the results stayed on par. Am I using <code>sys.intern</code> incorrectly? Or is the testing flawed?</p>
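<p>As a sanity check on my understanding (a separate sketch, not part of the benchmark), interning does at least make the identity fast path observable in CPython:</p>

```python
import sys

# Strings built at runtime are distinct objects even when equal;
# interned copies are shared, so "is" becomes a valid fast check.
a = "".join(["he", "llo!"])
b = "".join(["he", "llo!"])
print(a == b, a is b)        # True False (CPython)

ai, bi = sys.intern(a), sys.intern(b)
print(ai == bi, ai is bi)    # True True
```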
|
<python><string><string-interning>
|
2024-01-05 02:40:17
| 2
| 475
|
Sergi
|
77,761,900
| 1,285,061
|
Python find min and max in realtime stream without full array
|
<p>I have a continuous stream of values coming in, many millions of records.
I need to find the minimum and maximum of what has arrived so far, in real time, as numbers keep coming in. The whole data array is not available, the data that has arrived is not stored, and the min/max range is also unknown.</p>
<p>I tried something like this, but it isn't working perfectly. Is there a better way to solve this using libraries such as <code>numpy</code> or <code>scipy</code>?</p>
<pre><code>import numpy as np
rng = np.random.default_rng()
test = rng.choice(np.arange(-100,100, dtype=int), 10, replace=False)
testmax = 0
testmin = 0
for i in test: #simulates a stream
if i < testmax:
testmin = i
if i > testmax:
testmax = i
if i < testmin:
testmin = i
print (test, 'min: ',testmin, 'max: ', testmax)
>>> print (test, 'min: ',testmin, 'max: ', testmax)
[ 39 -32 61 -18 -53 -57 -69 98 -88 -47] min: -47 max: 98 #should be -88 and 98
>>>
>>> print (test, 'min: ',testmin, 'max: ', testmax)
[-65 -53 1 2 26 -62 82 70 39 -44] min: -44 max: 82 #should be -65 and 82
>>>
</code></pre>
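<p>For reference, here is a sketch of the behaviour I'd expect on the first sample above (seeding the running extremes from infinities rather than 0 is my assumption of the fix):</p>

```python
import math

# Seed from +/- infinity so streams that never cross 0 are still tracked.
stream = [39, -32, 61, -18, -53, -57, -69, 98, -88, -47]  # simulated arrivals
running_min, running_max = math.inf, -math.inf
for x in stream:
    running_min = min(running_min, x)
    running_max = max(running_max, x)
print(running_min, running_max)  # -88 98
```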
|
<python><numpy><scipy><signal-processing>
|
2024-01-05 00:25:56
| 3
| 3,201
|
Majoris
|
77,761,818
| 214,526
|
resolving sys.path in test
|
<p>My code structure is the following:</p>
<pre><code><root>
+-- src
| +- module1.py
| |
| +- module2.py
| |
| + __init__.py
|
+-- test
+- test1
| +- test1.py
| |
| + __init__.py
|
+- test2.py
|
+- __init__.py
</code></pre>
<p><code>test/__init__.py</code> contains following:</p>
<pre><code>import sys
sys.path.append(".")
</code></pre>
<p>and <code>test/test1/__init__.py</code> contains following:</p>
<pre><code>import sys
sys.path.append("..")
</code></pre>
<p>Both <code>test1.py</code> and <code>test2.py</code> contains <code>from src import module1, module2</code></p>
<p>When I run <code>pytest test1</code>, it works fine, but when I run just <code>pytest</code>, test2 fails at the import statement because it could not resolve <code>src</code>.</p>
<p>I've tried modifying <code>test/__init__.py</code> to append ".." to the sys.path - it does not help.</p>
<p>Also, I have tried the import statement as <code>from ..src import module1, module2</code> - it does not work either.</p>
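<p>One thing I've been wondering (a sketch, not verified against my layout): whether appending an absolute path derived from the file's own location, rather than <code>"."</code>/<code>".."</code> which resolve against whatever directory pytest is launched from, would behave consistently:</p>

```python
import sys
from pathlib import Path

# Hypothetical replacement for test/__init__.py: append the repository
# root as an absolute path instead of a launch-directory-relative string.
here = Path(globals().get("__file__", ".")).resolve()
root = here.parent.parent          # would be <root> when this file lives in test/
sys.path.append(str(root))
print(root.is_absolute())  # True
```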
<p>What is the way to resolve sys.path in this case?</p>
|
<python><python-3.x><pytest>
|
2024-01-04 23:56:48
| 1
| 911
|
soumeng78
|
77,761,606
| 788,775
|
How to use pylint with optional objects
|
<p>I have a class:</p>
<pre><code>class MyClass:
LOOKUP: Optional[Dict] = None # it is supposed to be a
# dictionary, but each derived class
# will prepare own version. It has to be done later.
# None value indicates that initialization is not done yet.
@classmethod
def do_smthn(cls):
if cls.LOOKUP is None:
# the LOOKUP is not initialized for this class yet, so
cls.LOOKUP = cls.prepare_lookup()
# now this class has own cls.LOOKUP dictionary, I can use it
assert cls.LOOKUP is not None
return cls.LOOKUP[42]
</code></pre>
<p>pylint complains: E1136: Value 'cls.LOOKUP' is unsubscriptable (unsubscriptable-object)</p>
<p>Well, I understand why pylint is not happy. I can disable pylint warning on each line where I use LOOKUP - but that's not convenient.</p>
<p>I tried to annotate the LOOKUP with 'dict' type (the above code already contains the annotation) - it doesn't help. pylint still reports same problem. But I still want to tell pylint somehow that LOOKUP is a dictionary.</p>
<p>Is there a way to do it?</p>
|
<python><pylint>
|
2024-01-04 22:46:18
| 1
| 2,817
|
lesnik
|
77,761,566
| 23,190,147
|
Python google search "advanced" argument unexpected
|
<p>I'm trying to perform a google search using python, using the <code>googlesearch</code> python module. I don't just want urls to be outputted, I also want a description of the information in the url, basically web scraping.</p>
<p>Here's the code that includes the advanced argument:</p>
<pre><code>from googlesearch import search
print(search('my search', advanced=True)) #advanced argument is true, to make the search detailed
</code></pre>
<p>but I get this error:</p>
<p>TypeError: search() got an unexpected keyword argument 'advanced'</p>
<p>and I don't know why this is occurring.</p>
<p>I got this code from <a href="https://pypi.org/project/googlesearch-python" rel="nofollow noreferrer">https://pypi.org/project/googlesearch-python</a>. I would also like to note that I don't want to use other web scraping libraries, such as <code>bs4</code> (BeautifulSoup4) or <code>urllib</code>, especially since I want to figure out why the advanced argument isn't being accepted.</p>
|
<python><web-scraping><google-search>
|
2024-01-04 22:33:55
| 1
| 450
|
5rod
|
77,761,520
| 8,284,452
|
How to replace part of a Timestamp using .where()?
|
<p>I have a dataframe that contains a few timestamps. I am trying to find certain timestamps that don't meet a condition and compute their new timestamp value based on pieces from both another timestamp and the current timestamp being tested.</p>
<pre><code>df = pd.DataFrame(data={'col1': [pd.Timestamp(2021, 1, 1, 12), pd.Timestamp(2021, 1, 2,
12), pd.Timestamp(2021, 1, 3, 12)],
'col2': [pd.Timestamp(2021, 1, 4, 12), pd.Timestamp(2021, 1, 5,
12), pd.Timestamp(2021, 1, 6, 12)]})
print(df)
# col1 col2
# 0 2021-01-01 12:00:00 2021-01-04 12:00:00
# 1 2021-01-02 12:00:00 2021-01-05 12:00:00
# 2 2021-01-03 12:00:00 2021-01-06 12:00:00
</code></pre>
<p>I'm trying to do something like this:</p>
<pre><code>testDate = pd.Timestamp(2021, 1, 2, 16)
df['newCol'] = df['col1'].where(df['col1'].dt.date <= testDate.date(), pd.Timestamp(year=testDate.year, month=testDate.month, day=testDate.day, hour=df['col1'].dt.hour))
</code></pre>
<p>I get an error though about ambiguity:</p>
<pre><code>ValueError: The truth value of a Series is ambiguous. Use a.empty, a.bool(), a.item(), a.any() or a.all().
</code></pre>
<p>If I remove the last bit <code>hour=df['col1'].dt.hour</code>, the code will run, so I know it has to do with that, but I don't understand why it's complaining about truthiness since that little piece of the code isn't testing any conditions, it's just assigning. I thought it was because I'm trying to compute the new value using the values that are being iterated over, but if I try this process using integers instead of timestamps, it runs just fine:</p>
<pre><code>df = pd.DataFrame(data={'col1': [1,2,3], 'col2': [4,5,6]})
print(df)
# col1 col2
# 0 1 4
# 1 2 5
# 2 3 6
testInt = 2
df['newCol'] = df['col1'].where(df['col1'] < testInt, df['col1'] + 2)
print(df)
# col1 col2 newCol
# 0 1 4 1
# 1 2 5 4
# 2 3 6 5
</code></pre>
<p>What is the proper way to do what I want to do?</p>
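The error arises because `pd.Timestamp(..., hour=df['col1'].dt.hour)` tries to build a single scalar Timestamp from a whole Series, and the constructor's internal validation ends up truth-testing that Series. One sketch of a fix (assuming the intent is to keep each row's time-of-day while replacing the date part with testDate's) builds the replacement as a full Series first, then hands it to `where()`:

```python
import pandas as pd

df = pd.DataFrame({'col1': [pd.Timestamp(2021, 1, 1, 12),
                            pd.Timestamp(2021, 1, 2, 12),
                            pd.Timestamp(2021, 1, 3, 12)]})
testDate = pd.Timestamp(2021, 1, 2, 16)

# normalize() strips the time-of-day, so this keeps each row's clock time
# while swapping in testDate's date -- one Timestamp per row, not one scalar
replacement = testDate.normalize() + (df['col1'] - df['col1'].dt.normalize())
df['newCol'] = df['col1'].where(df['col1'].dt.date <= testDate.date(), replacement)
print(df['newCol'].tolist())
```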
|
<python><pandas><dataframe><timestamp><series>
|
2024-01-04 22:20:57
| 1
| 686
|
MKF
|
77,761,282
| 214,296
|
Python socket transmitting on correct interface but with wrong source address
|
<p>I have a multicast system with multiple network interfaces. Packets transmitted using the socket I created are being sent out the correct interface, but the packet's source address is incorrect. The source address should match the interface that was used to transmit the packet over an isolated network, but it's using the source address from the interface on the private network (which ultimately connects to the internet).</p>
<p>What do I need to change about my socket configuration, so that transmitted packets are sent with the isolated network interface's IP address as the source address?</p>
<pre><code>import socket
src_ip = '172.17.0.1' # isolated network interface IP address which should be source address but isn't
dst_ip = '225.17.0.18' # destination IP address
port = 30000
msg = bytes([0x11, 0x22, 0x33, 0x44, 0x55, 0x66, 0x77, 0x88, 0x88, 0xAA, 0xBB, 0xCC, 0xDD ])
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP)
sock.connect((dst_ip, port))
sock.setsockopt(socket.SOL_IP, socket.IP_MULTICAST_IF, socket.inet_aton(src_ip))
sock.setsockopt(socket.SOL_IP, socket.IP_ADD_MEMBERSHIP, socket.inet_aton(dst_ip) + socket.inet_aton(src_ip))
sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
sock.sendall(msg)
</code></pre>
<p>The packet gets transmitted as expected, except for the source address. Any thoughts? I'm using Python 3.6. I know it's old but it's what they're using here.</p>
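One thing worth trying (a sketch under the assumption that an explicit bind is what's missing): `bind()` the socket to the desired interface address before `connect()`, since for UDP the bound address becomes the source address of outgoing packets. `IP_MULTICAST_IF` selects the egress interface, but on an unbound socket the local address is already fixed by the routing decision made at `connect()` time — note that in the snippet above `connect()` runs before `IP_MULTICAST_IF` is even set. Loopback stands in for `src_ip` so the sketch runs anywhere:

```python
import socket

# Bind before connect: the bound address is what the kernel stamps as the
# source address on outgoing packets. Loopback stands in for src_ip here.
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
sock.bind(('127.0.0.1', 0))
sock.connect(('127.0.0.1', 30000))   # UDP connect just sets the default peer
source_addr = sock.getsockname()[0]
print(source_addr)                   # the source address is now pinned
sock.close()
```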
|
<python><sockets><ip-address><multicast><transmission>
|
2024-01-04 21:24:17
| 2
| 14,392
|
Jim Fell
|
77,761,161
| 1,601,831
|
SQLAlchemy string relationships cause 'undefined name' complaints by flake8 and mypy
|
<pre><code># order.py
from typing import List

from sqlalchemy import ForeignKey
from sqlalchemy.orm import Mapped, mapped_column, relationship

# Base = DeclarativeBase subclass, defined elsewhere (assumed)

class Order(Base):
__tablename__ = "Order"
id: Mapped[int] = mapped_column(primary_key=True)
items: Mapped[List["Item"]] = relationship(back_populates="order")
# item.py
class Item(Base):
__tablename__ = "Item"
id: Mapped[int] = mapped_column(primary_key=True, autoincrement=True)
order_id: Mapped[int] = mapped_column(ForeignKey("Order.id"))
order: Mapped["Order"] = relationship(back_populates="items")
</code></pre>
<p>I'm using sqlalchemy 2.0.23 and not using plugins for mypy or flake8.</p>
<p>Neither module should need to import the other since I'm using string relationships like "Item" in the Order class and "Order" in the relationship in the Item class.</p>
<p>flake8 reports an F821 error on both files because of the undefined name.<br />
mypy reports a similar error on both.</p>
<p>I can configure flake8 to ignore the F821. I'm not sure how I'd do similar in mypy. But these are important rules that shouldn't be turned off to get SQLAlchemy classes through linters.</p>
<p>I want to keep my classes in separate files. Is there a way to correctly define them so that linters like these won't complain? Adding imports to both files quiets these linters, but results in a circular import problem so the code won't run.</p>
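A common pattern for this is a `typing.TYPE_CHECKING` guard: each module imports the other only for the benefit of the type checker, so there is no runtime circular import yet flake8 and mypy can resolve the name. A minimal stand-in sketch — SQLAlchemy is omitted for brevity, and `order` is a hypothetical sibling module that does not even need to be importable at runtime:

```python
from __future__ import annotations

from typing import TYPE_CHECKING, Optional

if TYPE_CHECKING:
    # evaluated only by type checkers -- never executed, so no import cycle
    from order import Order  # hypothetical sibling module

class Item:
    def __init__(self) -> None:
        # resolvable by linters via the guarded import, lazy at runtime
        self.order: Optional[Order] = None

item = Item()
print(item.order)  # runs fine even though `order` was never imported
```

The same guard dropped into `order.py` and `item.py` (importing `Item` and `Order` respectively) should satisfy both linters while keeping the string-based `relationship()` arguments working at runtime.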
|
<python><sqlalchemy><relationship><mypy><flake8>
|
2024-01-04 20:56:12
| 1
| 417
|
dam
|
77,761,069
| 5,287,011
|
Dictionary where values can be either a list or a list of lists
|
<p>I am trying to plot the result of truck routing optimization using networkx library. First I must prepare the data and I am stuck.</p>
<p>Here is small example of the initial data:</p>
<pre><code>Vehicle Route
0 V0 [0,3](V0)
1 V0 [3,0](V0)
2 V1 [0,3](V1)
3 V1 [0,8](V1)
4 V1 [2,0](V1)
5 V1 [3,2](V1)
6 V1 [8,0](V1)
</code></pre>
<p>There are two trucks: V0 and V1. The route for V0 is: 0 to 3 to 0. But V1 has TWO routes: 0 to 8 to 0 AND 0 to 3 to 2 to 0.</p>
<p>Every tour must start and end at node 0.</p>
<p>I must solve two issues:</p>
<ol>
<li>For each truck (V0, V1) I must find all the tours within a given list that starts and end with zero</li>
<li>The flow of tours can not have breaks: the end of each sub-tour [0,8] must be the beginning of the next one [8,0]</li>
</ol>
<p>The final result must look like:</p>
<pre><code>{'V0': [0,3,0], 'V1': [[0,3,2,0], [0,8,0]]}
</code></pre>
<p>I have a code but it does not produce what I want:</p>
<pre><code>tours = {'V0': ['0,3', '3,0'], 'V1': ['0,3', '0,8', '2,0', '3,2', '8,0']}
def convert_to_list(s):
return list(map(int, s.split(',')))
# Function to process each value in the 'tours' dictionary
def process_value(value):
result = []
current_list = []
for pair in value:
pair_list = convert_to_list(pair)
if not current_list or pair_list[0] == 0:
current_list.extend(pair_list)
elif pair_list[1] == 0:
current_list.append(pair_list[0])
current_list.append(0)
result.append(current_list)
current_list = [0]
else:
current_list.append(pair_list[0])
if current_list:
current_list.append(0)
result.append(current_list)
return result
# Create the 'tours1' dictionary using the processing functions
tours1 = {key: process_value(value) for key, value in tours.items()}
# Print the result
print(tours1)
</code></pre>
<p>OUTPUT:</p>
<pre><code>{'V0': [[0, 3, 3, 0], [0, 0]], 'V1': [[0, 3, 0, 8, 2, 0], [0, 3, 8, 0], [0, 0]]}
</code></pre>
<p>Any ideas?</p>
<p>Thank you!</p>
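One way to get the desired structure (a sketch, assuming the edges for each vehicle always chain back to node 0) is to build a per-vehicle adjacency map and repeatedly walk from node 0 until the walk returns there, consuming edges as they are used:

```python
from collections import defaultdict

def build_tours(edges):
    # map start node -> list of end nodes, in input order
    adjacency = defaultdict(list)
    for pair in edges:
        a, b = map(int, pair.split(','))
        adjacency[a].append(b)
    tours = []
    while adjacency[0]:          # every tour must start at node 0
        tour = [0]
        node = 0
        while True:
            node = adjacency[node].pop(0)  # consume the next edge
            tour.append(node)
            if node == 0:        # back at the depot: tour complete
                break
        tours.append(tour)
    # single tour -> flat list, several tours -> list of lists
    return tours[0] if len(tours) == 1 else tours

tours = {'V0': ['0,3', '3,0'], 'V1': ['0,3', '0,8', '2,0', '3,2', '8,0']}
result = {k: build_tours(v) for k, v in tours.items()}
print(result)  # {'V0': [0, 3, 0], 'V1': [[0, 3, 2, 0], [0, 8, 0]]}
```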
|
<python><list><dictionary><nested-lists>
|
2024-01-04 20:33:30
| 3
| 3,209
|
Toly
|
77,760,902
| 929,382
|
Fix for "TypeError: cannot pickle 'weakref' object" when using Python Multiprocessing in a Class
|
<p>I am currently working on a Python FastAPI application that spawns busy loops in parallel threads using the multiprocessing library. When I first started building the application, I created the threads during start-up in the main.py file. This worked great and I could spawn all the threads I needed with no issues.</p>
<p>Eventually I needed specific FastAPI endpoints to also be able to control those threads, so instead of spawning the threads on application startup, I created a singleton class file that could control the threads and be accessed by both the startup process and by FastAPI endpoints.</p>
<p>The issue is that as soon as I moved the Process method from the main.py file to the .core.thread_utils.py file, I now get the following error when I spawn more than one thread:</p>
<pre><code> File "/home/user/.local/lib/python3.8/site-packages/starlette/routing.py", line 677, in lifespan
async with self.lifespan_context(app) as maybe_state:
File "/home/user/.local/lib/python3.8/site-packages/starlette/routing.py", line 566, in __aenter__
await self._router.startup()
File "/home/user/.local/lib/python3.8/site-packages/starlette/routing.py", line 654, in startup
await handler()
File "/home/user/Projects/test/test-api/api/main.py", line 127, in startup_event
thread_utils.launch_all_threads()
File "/home/user/Projects/test/test-api/api/core/thread_utils.py", line 66, in launch_all_threads
p.start()
File "/usr/lib/python3.8/multiprocessing/process.py", line 121, in start
self._popen = self._Popen(self)
File "/usr/lib/python3.8/multiprocessing/context.py", line 224, in _Popen
return _default_context.get_context().Process._Popen(process_obj)
File "/usr/lib/python3.8/multiprocessing/context.py", line 284, in _Popen
return Popen(process_obj)
File "/usr/lib/python3.8/multiprocessing/popen_spawn_posix.py", line 32, in __init__
super().__init__(process_obj)
File "/usr/lib/python3.8/multiprocessing/popen_fork.py", line 19, in __init__
self._launch(process_obj)
File "/usr/lib/python3.8/multiprocessing/popen_spawn_posix.py", line 47, in _launch
reduction.dump(process_obj, fp)
File "/usr/lib/python3.8/multiprocessing/reduction.py", line 60, in dump
ForkingPickler(file, protocol).dump(obj)
TypeError: cannot pickle 'weakref' object
</code></pre>
<p>So, the code that worked fine in main.py, as soon as it's put into a separate class, now throws an error. Also, I can spawn <strong>one</strong> thread with no issues at all, but <strong>more than one</strong> thread throws the 'cannot pickle' TypeError. I do not understand why.</p>
<p>After some research on specifically how Python forks new threads, I believe I am hitting <a href="https://github.com/python/cpython/issues/91090" rel="nofollow noreferrer">this error</a>. Also, the multiprocessing documentation mentions '<a href="https://docs.python.org/3/library/multiprocessing.html#the-spawn-and-forkserver-start-methods" rel="nofollow noreferrer">safe importing</a>' but I'm not sure if/how that applies to this situation because attempts to apply this produce the same error or no threads at all. I could be doing that wrong...</p>
<p>The code I'm using looks something like this:</p>
<p><strong>main.py:</strong></p>
<pre><code>from .core.thread_utils import thread_utils
thread_utils = thread_utils()
...
@app.on_event("startup") # this is part of FastAPI
async def startup_event():
# previous thread creation went here and worked perfectly
thread_utils.launch_all_threads()
</code></pre>
<p><strong>core/thread_utils.py:</strong></p>
<pre><code>from multiprocessing import Process
from ..core import busy_loop
class thread_utils:
_self = None
def __new__(cls):
if cls._self is None:
cls._self = super().__new__(cls)
return cls._self
def launch_runner_thread(self, t:int):
busy_loop(t)
def launch_all_threads(self):
t_list = [1,2]
for t in t_list:
p = Process(target=self.launch_runner_thread, args=(t,))
p.start()
</code></pre>
<p>Any help, direction, or working examples of how to implement multiprocessing in a separate class would be very appreciated!</p>
<p><strong>WORKING CODE</strong> (thanks to @pts' suggestion below!)</p>
<p><strong>main.py</strong></p>
<pre><code>from .core.thread_utils import launch_all_threads
...
@app.on_event("startup")
async def startup_event():
launch_all_threads()
</code></pre>
<p><strong>core/thread_utils.py:</strong></p>
<pre><code>from multiprocessing import Process
from ..core import busy_loop
def launch_runner_thread(t:int):
busy_loop(t)
def launch_all_threads():
t_list = [1,2]
for t in t_list:
p = Process(target=launch_runner_thread, args=(t,))
p.start()
</code></pre>
<p>I will go ahead and update the title of this question to match the problem/solution and give @pts the credit for answering. Thank you again!</p>
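The underlying mechanism can be reproduced in isolation: passing a bound method as a <code>Process</code> target forces the whole instance through the pickler, and any unpicklable attribute trips the same <code>TypeError</code>. The sketch below uses a weakref as the stand-in for that attribute, matching the error text — an assumption about what the FastAPI singleton ends up holding indirectly:

```python
import pickle
import weakref

class Holder:
    def __init__(self):
        self.ref = weakref.ref(self)  # weak references cannot be pickled

    def work(self):
        pass

h = Holder()
try:
    # pickling a bound method pickles its __self__ instance as well
    pickle.dumps(h.work)
    failed = False
except TypeError as exc:
    failed = True
    print(exc)  # cannot pickle a weakref object

print(failed)  # True
```

Switching to module-level functions, as in the working code above, means only the function's qualified name is pickled, so the singleton never goes through the pickler at all.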
|
<python><class><multiprocessing><fastapi>
|
2024-01-04 19:55:23
| 1
| 593
|
utdream
|
77,760,717
| 1,546,990
|
Mocking signals function in unit test
|
<p>I'm using Django version 4.2.6.</p>
<p>I have a file, <code>signals.py</code>, that includes a pre-save handler for my entity:</p>
<pre><code>from django.db.models.signals import pre_save

from .models import MyEntity  # assumed import location

def do_stuff(sender, instance, **kwargs):
... stuff...
pre_save.connect(do_stuff, sender=MyEntity)
</code></pre>
<p>... so that when an instance of <code>MyEntity</code> is either created or updated, the <code>do_stuff()</code> function is executed.</p>
<p>How do I mock <code>do_stuff()</code> in unit tests? I have a number of tests that create and update <code>MyEntity</code> instances, so <code>do_stuff()</code> is being called during the execution of the unit test. My implementation of <code>do_stuff()</code> is making a call to an external source amongst other functionality, so I want to mock it when the unit test runs.</p>
<p>I have tried declaring the following a part of my unit test:</p>
<pre><code>@mock.patch("application.package.signals.do_stuff", autospec=True,)
</code></pre>
<p>... but when I execute the test, I can see that <code>do_stuff()</code> is still being executed.</p>
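One likely reason the patch has no effect: `pre_save.connect(do_stuff, ...)` captured a direct reference to the function at import time, so later rebinding `application.package.signals.do_stuff` changes the module attribute but not the reference the signal already holds. A minimal stand-in (no Django) demonstrating the mechanism:

```python
class Signal:
    """Tiny stand-in for a Django signal registry."""
    def __init__(self):
        self.receivers = []

    def connect(self, fn):
        self.receivers.append(fn)   # stores the function object itself

    def send(self):
        return [fn() for fn in self.receivers]

pre_save = Signal()

def do_stuff():
    return "real"

pre_save.connect(do_stuff)   # direct reference captured here, at import time

# Rebinding the module-level name -- which is all mock.patch does -- has no
# effect on the reference the signal already stored:
do_stuff = lambda: "mocked"
result = pre_save.send()
print(result)   # ['real']
```

With real Django, the usual workaround is `pre_save.disconnect(signals.do_stuff, sender=MyEntity)` in `setUp` (reconnecting in `tearDown`), or patching something `do_stuff` itself calls rather than `do_stuff`.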
|
<python><django>
|
2024-01-04 19:19:40
| 1
| 2,069
|
GarlicBread
|
77,760,651
| 5,924,264
|
How to converge derived classes when the overridden methods are defined the same way but the base class is different
|
<p>I have the following design pattern:</p>
<pre><code>class Base:
# definition omitted
class Derived(Base):
# definition omitted
</code></pre>
<pre><code>class Mock1(Base):
def __init__(self, input, args):
self._input = input
super(Mock1, self).__init__(args)
# my_func is defined in Base so we're overriding it
def my_func(self, for_val):
        # definition omitted
class Mock2(Derived):
def __init__(self, input, args):
self._input = input
super(Mock2, self).__init__(args)
# my_func is defined in Base (note it's not defined in Derived) so we're overriding it
def my_func(self, for_val):
        # definition omitted
</code></pre>
<p>My gripe is with <code>Mock1, Mock2</code>, which inherit <code>Base, Derived</code> respectively and seem to contain duplicated code because <code>my_func</code> is defined the same way in both; the only difference between the two classes is which class they inherit from. Is there a cleaner way to write this?</p>
<p>I believe there is a way to dynamically change which class to inherit but from my understanding it's frowned upon.</p>
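One conventional answer is to factor the shared override into a mixin and let each Mock compose the mixin with whichever base it needs; a sketch with minimal stand-in classes (the method bodies are placeholders, since the originals are omitted):

```python
class Base:
    def my_func(self, for_val):
        return ("base", for_val)

class Derived(Base):
    pass

class MyFuncMixin:
    # the shared override lives in exactly one place
    def my_func(self, for_val):
        return ("mock", for_val)

# The mixin comes first so its my_func wins in the MRO
class Mock1(MyFuncMixin, Base):
    pass

class Mock2(MyFuncMixin, Derived):
    pass

print(Mock1().my_func(1))  # ('mock', 1)
print(Mock2().my_func(2))  # ('mock', 2)
```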
|
<python><design-patterns>
|
2024-01-04 19:08:19
| 1
| 2,502
|
roulette01
|
77,760,478
| 1,471,980
|
how do you style data frame and write to excel in pandas
|
<p>I need to change the background color of the dataframe headers to orange:</p>
<p>My <code>df</code> looks like this:</p>
<pre><code>Campus Server Port
AZ server12 Eth1
AZ1 server12 Eth2
AZ2 server23 Gi2
NV Server34 Eth2
</code></pre>
<p>I tried this:</p>
<pre><code>df1=df.style.set_table_style(
[{
'selector': 'th',
'props': [('background-color', '#FFA500')]
}])
</code></pre>
<p>When I try to write this <code>df1</code> to Excel, I don't see the color of the header's background changing:</p>
<pre><code>with pd.ExcelWriter('C:/documements') as writer:
    df1.to_excel(writer, index=False)
</code></pre>
<p>Any ideas what I am doing wrong here?</p>
|
<python><pandas><excel><dataframe><pandas-styles>
|
2024-01-04 18:34:22
| 1
| 10,714
|
user1471980
|
77,760,449
| 10,750,537
|
pcolormesh() different behavior when using OO vs state machine interfaces
|
<p>I need to use the <a href="https://stackoverflow.com/a/14261698/10750537">OO interface</a> of matplotlib (3.1.2) to draw some pictures in Python, but I get unpleasant results when calling pcolormesh(). The issue disappears when using the state-machine interface. However, as far as I understand, I cannot use the latter because I need to manage several axes.</p>
<p>Please consider the following example code that shows the issue:</p>
<pre><code>import matplotlib
from matplotlib import pyplot
import numpy as np
OO = True #Set to False to make it work
#
f = 7.5e8 + np.arange(4096) * (1.25e9 - 7.5e8) / 4096
t = np.arange(1024) / 1000
sxx = np.random.rand(f.size,t.size)
#
if OO:
fig = matplotlib.pyplot.figure()
axes = matplotlib.pyplot.axes(figure=fig)
spectrogram = axes.pcolormesh(t, f, sxx, shading='auto')
fig.colorbar(spectrogram)
else:
spectrogram = matplotlib.pyplot.pcolormesh(t, f, sxx, shading='auto')
matplotlib.pyplot.colorbar(spectrogram)
matplotlib.pyplot.show()
</code></pre>
<p>The result is poor (e.g., almost no ticks along the y-axis), as shown below. Editing the x/y limits or ticks does not fix it.</p>
<p><a href="https://i.sstatic.net/wIvMB.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/wIvMB.png" alt="Glitches" /></a></p>
<p>Nevertheless, if "OO" is set to "False" in the code above, the result is fine (see below)
<a href="https://i.sstatic.net/8ASrM.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/8ASrM.png" alt="Ok" /></a></p>
<p>How can I get the OO interface working as the state-machine interface?</p>
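For comparison, here is a sketch that stays fully in the OO interface by obtaining the Axes from `plt.subplots()` (or `fig.add_subplot`) rather than calling `pyplot.axes(figure=fig)`, which can leave pyplot's notion of the current axes out of sync with the figure you created. The array sizes are shrunk so the sketch runs quickly; the assumption is that only the Axes-creation call matters here:

```python
import matplotlib
matplotlib.use("Agg")  # render off-screen for this sketch
import matplotlib.pyplot as plt
import numpy as np

f = 7.5e8 + np.arange(64) * (1.25e9 - 7.5e8) / 64
t = np.arange(32) / 1000
sxx = np.random.rand(f.size, t.size)

fig, axes = plt.subplots()              # Figure and Axes from one OO call
spectrogram = axes.pcolormesh(t, f, sxx, shading='auto')
fig.colorbar(spectrogram, ax=axes)
fig.canvas.draw()                       # force tick computation
print(len(axes.get_yticklabels()))      # several ticks along the y-axis
```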
|
<python><matplotlib>
|
2024-01-04 18:27:00
| 0
| 381
|
JtTest
|
77,760,239
| 12,352,239
|
Unable to import module 'index': No module named 'awswrangler'
|
<p>I am deploying a Python lambda to integrate with an open search endpoint. I am following this <a href="https://docs.aws.amazon.com/solutions/latest/constructs/aws-lambda-opensearch.html" rel="nofollow noreferrer">CDK example</a>.</p>
<p>I have a directory containing my lambda handler as well as a <code>requirements.txt</code> file in a directory that is uploaded like this:</p>
<pre><code>const lambdaFunction: lambda.Function = new lambda.Function(this, id, {
runtime: lambda.Runtime.PYTHON_3_9,
handler: 'index.handler',
code: lambda.Code.fromAsset(`lambda`)}
);
</code></pre>
<p>my <code>requirements.txt</code> has the following content:</p>
<pre><code>awswrangler[opensearch]
boto3
</code></pre>
<p>When I invoke the lambda I get an "Internal server error" and when I review the cloudwatch logs I see the following:</p>
<pre><code>[ERROR] Runtime.ImportModuleError: Unable to import module 'index': No module named 'awswrangler'
Traceback (most recent call last):
</code></pre>
<p>I am trying to access opensearch via awswrangler but am getting a dependency error - my lambda imports look like the following:</p>
<pre><code>import awswrangler as wr
# this is configured in CDK: https://docs.aws.amazon.com/solutions/latest/constructs/aws-lambda-opensearch.html
open_search_domain_endpoint = os.environ.get('DOMAIN_ENDPOINT')
os_client = wr.opensearch.connect(
host=open_search_domain_endpoint,
)
</code></pre>
<p>It seems as though my <code>requirements.txt</code> file is not getting installed. How can I verify my libraries are being installed, to avoid this module import error?</p>
|
<python><amazon-web-services><aws-lambda>
|
2024-01-04 17:45:16
| 1
| 480
|
219CID
|
77,760,238
| 2,725,810
|
Intermittently slow CPU performance in AWS Lambda
|
<p>Consider the following AWS Lambda function:</p>
<pre class="lang-py prettyprint-override"><code>import time
import pickle
import gc
def ms_now():
return int(time.time_ns() / 1000000)
class Timer():
def __init__(self):
self.start = ms_now()
def stop(self):
return ms_now() - self.start
with open('model_embeddings.pkl', 'rb') as file:
model_embeddings = pickle.load(file)
def get_embeddings(texts):
timer = Timer()
embeddings = model_embeddings.encode(texts) # The line of interest
print(f"Time: {timer.stop()}ms")
return embeddings
def lambda_handler(event, _):
gc.disable() # Disable garbage collection
result = get_embeddings(event['texts']).tolist()
return {
'statusCode': 200,
'headers': {
'Content-Type': 'application/json'
},
'result': result[0][0],
}
</code></pre>
<p>The function is deployed as an image and is given 10,240 MB of RAM, of which only 800 MB is used. The function outputs the timing of a single line that performs a CPU-intensive task of running input strings through a neural network (HuggingFace sentence transformer) that has been initialized and loaded into memory.</p>
<p>I run this function repeatedly with the same input from AWS Lambda console. The output time is usually around 350ms. However, some runs are as slow as 2100ms. Note that these spikes cannot be explained by either cold starts (since only one line's performance is being measured) or garbage collection kicking in (since garbage collection is disabled).</p>
<p>What can cause such spikes?</p>
<p><strong>EDIT 1.</strong> I ran it locally (under WSL under Windows) and never got such a high spike. So it must be something AWS Lambda-specific...</p>
<p><strong>EDIT 2.</strong> Today I cannot reproduce the spikes with the above. However, if I return <code>result</code> in the response instead of <code>result[0][0]</code>, then it's slow. Now this is really strange. What does the size of the response have to do with the line I am timing? (For some background: <code>texts</code> is a list of 114 strings. <code>embeddings</code> is a NumPy matrix with 114 rows and 384 columns. <code>tolist()</code> converts this matrix into a list of lists.)</p>
<p><strong>EDIT 3.</strong> If I use a short input first, the problem disappears! Namely:</p>
<pre class="lang-py prettyprint-override"><code># Pre-warming. I have no clue why it helps to avoid spikes!
timer = Timer()
embeddings = model_embeddings.encode(texts[:1])
print(f"Time: {timer.stop()}ms")
timer = Timer()
embeddings = model_embeddings.encode(texts)
print(f"Time: {timer.stop()}ms")
</code></pre>
<p>What is going on?</p>
<p>P.S. The <a href="https://repost.aws/questions/QU188JSeI_QGK2e83vodXyYw/intermittently-slow-cpu-performance-in-aws-lambda" rel="nofollow noreferrer">question at re:Post</a></p>
|
<python><amazon-web-services><performance><aws-lambda>
|
2024-01-04 17:45:13
| 0
| 8,211
|
AlwaysLearning
|
77,760,171
| 4,343,563
|
Check if values in list in different dataframe columns match?
|
<p>I am trying to check each value in a list in two separate columns. My dataframe looks like:</p>
<pre><code> attribute value1 value2
Address ['a','b','c'] ['a','b','c']
Count ['1', 2, 3] ['1','2','3']
Color ['bl','cr','r'] ['bl','rd','r']
</code></pre>
<p>I want a dataframe that looks like:</p>
<pre><code> attribute value1 value2 match
Address ['a','b','c'] ['a','b','c'] [True, True, True]
Count ['1', 2, 3] ['1','2','3'] [True, False, False]
Color ['bl','cr','r'] ['bl','rd','r'] [True, False, True]
</code></pre>
<p>I am trying to create a lambda statement but can't figure out the syntax:</p>
<pre><code>if isinstance(data['value1'][0], list):
for i in range(0, len(data.value1)):
data['match'] = data['value2'].apply(lambda ls: [x for x in ls if x == data['value1'][i]])
</code></pre>
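A sketch of the element-wise comparison (assuming every row holds two equal-length lists): zip the two columns row by row and compare positionally, which avoids indexing `df['value1'][i]` from inside the lambda:

```python
import pandas as pd

df = pd.DataFrame({
    "attribute": ["Address", "Count", "Color"],
    "value1": [["a", "b", "c"], ["1", 2, 3], ["bl", "cr", "r"]],
    "value2": [["a", "b", "c"], ["1", "2", "3"], ["bl", "rd", "r"]],
})

# pair up the two lists in each row and compare element by element
df["match"] = [
    [x == y for x, y in zip(v1, v2)]
    for v1, v2 in zip(df["value1"], df["value2"])
]
print(df["match"].tolist())
```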
|
<python><pandas>
|
2024-01-04 17:33:55
| 1
| 700
|
mjoy
|
77,760,041
| 3,002,584
|
Creating numpy ndarray gives error buffer is too small for requested array - but it isn't
|
<ul>
<li><sub>[Searched for duplicates but couldn't find any referring to this simple issue on Windows, so that's a self-answer]</sub></li>
</ul>
<p>I'm trying to create a simple 1D numpy <code>ndarray</code>, but gets an error:</p>
<pre class="lang-py prettyprint-override"><code>>>> np.ndarray(shape=(3,), buffer=np.array([1,2,3]))
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: buffer is too small for requested array
</code></pre>
<p>Why do I get an error? the <code>shape</code> corresponds to the passed <code>buffer</code>, isn't it?</p>
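The likely mismatch (given the Windows tag) is dtype sizes: `np.ndarray` defaults to `float64`, which needs 8 bytes per element, while `np.array([1, 2, 3])` produces platform integers — 4 bytes on Windows with NumPy < 2.0 — so the 3-element buffer supplies only 12 of the 24 bytes requested. Passing the buffer's own dtype sidesteps the mismatch on any platform:

```python
import numpy as np

buf = np.array([1, 2, 3])
# Match the requested dtype to what the buffer actually contains, instead
# of letting np.ndarray assume float64 (8 bytes per element).
arr = np.ndarray(shape=(3,), dtype=buf.dtype, buffer=buf)
print(arr)  # [1 2 3]
```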
|
<python><windows><numpy>
|
2024-01-04 17:10:03
| 1
| 10,690
|
OfirD
|
77,759,931
| 6,603,473
|
Title color for figure (not axes) in Matplotlib
|
<p><strong>Edit:</strong> <em>Solution below, original question misses a bit of key information</em></p>
<p>I am having difficulties changing the title color for my figure in Matplotlib.</p>
<p>I have a figure containing subplots:</p>
<pre><code>import matplotlib as mpl
import matplotlib.pyplot as plt
fig, ax = plt.subplots(3, 4, sharex='all', sharey='all',
figsize=(20, 9), dpi=600)
</code></pre>
<p>The figure has 10 subplots with data using the same symbols on each plot, so I want a joint legend for the figure, not for each axis.</p>
<p>I have blanked the upper right and middle right axes - I want the legend to go here:</p>
<pre><code>ax[0,3].axis('off')
ax[1,3].axis('off')
</code></pre>
<p>I've created my legend elements:</p>
<pre><code>legend_elements = [Line2D([0], [0], marker='x', color='w', markeredgecolor='b', label='Measured data points'),
Line2D([0], [0], marker='o', color='w', markeredgecolor='g', label='Means of data points'),
Line2D([0], [0], color='r', label='Curve fitted to model')]
</code></pre>
<p>When writing my legend, I'm having real difficulty changing the color of the title text. I have tried the following examples:</p>
<pre><code>legend = fig.legend(handles=legend_elements, fontsize=10, labelcolor='c',
loc='lower left', bbox_to_anchor=(0.77, 0.63, 1, 1))
legend.set_title('Legend\n', color='c')
TypeError: Legend.set_title() got an unexpected keyword argument 'color'
</code></pre>
<p>...</p>
<pre><code>legend = fig.legend(handles=legend_elements, fontsize=10, labelcolor='c',
loc='lower left', bbox_to_anchor=(0.77, 0.63, 1, 1),
title='Legend\n', title_fontsize=10)
plt.setp(legend.get_title(), color='c')
AttributeError: 'NoneType' object has no attribute 'get_title'
</code></pre>
<p>...</p>
<pre><code>fig.setp(legend.get_title(), color='c')
AttributeError: 'Figure' object has no attribute 'setp'
</code></pre>
<p>...</p>
<pre><code>legend._legend_title_box._text.set_color('c')
AttributeError: 'NoneType' object has no attribute '_legend_title_box'
</code></pre>
<p>I've also looked at <a href="https://matplotlib.org/stable/api/font_manager_api.html#matplotlib.font_manager.FontProperties" rel="nofollow noreferrer">FontProperties</a>, something along the lines of the following, but FontProperties doesn't have a color kwarg as far as I can tell:</p>
<pre><code>import matplotlib.font_manager as font_manager
legend_title_props = font_manager.FontProperties(color='c')
legend = fig.legend(handles=legend_elements, fontsize=10, labelcolor='c',
loc='lower left', bbox_to_anchor=(0.77, 0.63, 1, 1),
title='Legend\n', title_fontsize=10,
title_fontproperties=legend_title_props)
TypeError: FontProperties.__init__() got an unexpected keyword argument 'color'
</code></pre>
<p>Does anyone have a workaround? I can't seem to find any other options online. I've got a feeling it stems from the fact that my legend is being stored as a 'NoneType' object, but I can't work out what I'm doing wrong to cause this.</p>
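For reference, on a Legend object returned by `fig.legend` the title is a regular Text artist reachable via `get_title()`, and `set_color` on that Text works; a minimal sketch:

```python
import matplotlib
matplotlib.use("Agg")  # render off-screen for this sketch
import matplotlib.pyplot as plt
from matplotlib.lines import Line2D

fig, ax = plt.subplots()
legend_elements = [Line2D([0], [0], color='r', label='Curve fitted to model')]
legend = fig.legend(handles=legend_elements, title='Legend')
legend.get_title().set_color('c')      # the title is a Text artist
print(legend.get_title().get_color())  # c
```

Since `fig.legend` does return a Legend instance, a `legend` variable that is `None` in the real script suggests something earlier is shadowing or reassigning it, which would be worth checking first.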
|
<python><matplotlib>
|
2024-01-04 16:54:10
| 2
| 322
|
georussell
|
77,759,902
| 19,130,803
|
check value in column is list type
|
<p>I have a dataframe with 2 columns, <code>field</code> and <code>value</code>. I am performing some checks on each field's value. For <code>field a</code> I need to check that its corresponding value is always of <code>list</code> type, storing the result in the <code>status</code> column.</p>
<p>Below is code:</p>
<pre><code>import numpy as np
import pandas as pd
from pandas.api.types import is_list_like
data = {
"field": ["a", "b", "c"],
"value": [[1, "na", -99], 20, 80],
}
df = pd.DataFrame(data)
print("Initial DF")
print(f"{df=}")
condlist = [df["field"] == "a", df["field"] == "b", df["field"] == "c"]
choicelist = [
df["value"].apply(is_list_like).any(),
df["value"].isin([10, 20, 30, 40]),
df["value"].between(50, 100),
]
df["status"] = np.select(condlist, choicelist, False)
print("After check DF")
print(f"{df=}")
</code></pre>
<p>But I get this error:</p>
<pre><code> df["value"].between(50, 100),
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "pandas/_libs/ops.pyx", line 107, in pandas._libs.ops.scalar_compare
TypeError: '>=' not supported between instances of 'list' and 'int'
</code></pre>
<p>What am I missing?</p>
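Note that `np.select` evaluates every array in `choicelist` for all rows before selecting, so `between(50, 100)` runs against the list value in row "a" and fails. A row-wise sketch that evaluates only the rule matching each row's field:

```python
import pandas as pd
from pandas.api.types import is_list_like

df = pd.DataFrame({"field": ["a", "b", "c"],
                   "value": [[1, "na", -99], 20, 80]})

def check(row):
    # evaluate only the rule that applies to this row, so list values
    # never reach the numeric comparisons
    if row["field"] == "a":
        return is_list_like(row["value"])
    if row["field"] == "b":
        return row["value"] in [10, 20, 30, 40]
    if row["field"] == "c":
        return 50 <= row["value"] <= 100
    return False

df["status"] = df.apply(check, axis=1)
print(df["status"].tolist())  # [True, True, True]
```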
|
<python><pandas><dataframe>
|
2024-01-04 16:49:53
| 2
| 962
|
winter
|
77,759,855
| 3,767,239
|
Type checking an optional attribute whose value is related to that of another attribute
|
<p>I have a function which computes a <code>Result</code> and this computation can either be <code>success</code>ful or not. In case it was successful, some <code>data</code> that summarizes the result of the computation will be returned as well. In case it was unsuccessful, this data will be <code>None</code>. Now the problem is that even though I verify the status of the computation (<code>success</code>), the type checker (<code>mypy</code>) cannot infer the coupling between <code>success</code> and <code>data</code>. This is summarized by the following code:</p>
<pre class="lang-py prettyprint-override"><code>from dataclasses import dataclass
from typing import Optional
@dataclass
class Result:
success: bool
data: Optional[int] # This is not None if `success` is True.
def compute(inputs: str) -> Result:
if inputs.startswith('!'): # Oops, some condition that prevents the computation.
return Result(success=False, data=None)
return Result(success=True, data=len(inputs))
def check(inputs: str) -> bool:
return (result := compute(inputs)).success and result.data > 2
assert check('123')
assert not check('12')
assert not check('!123')
</code></pre>
<p>Running <code>mypy</code> against this code gives the following error:</p>
<pre><code>test.py:18: error: Unsupported operand types for < ("int" and "None") [operator]
test.py:18: note: Left operand is of type "Optional[int]"
</code></pre>
<p>I considered the following solutions, but I'm not really happy with either of them. So, I'm wondering if there's a better way to solve this.</p>
<h3><a href="https://docs.python.org/3/library/typing.html#typing.cast" rel="nofollow noreferrer"><code>typing.cast</code></a></h3>
<p>The function <code>check</code> could be modified to use <code>cast(int, result.data)</code> to enforce the logical relationship between <code>success</code> and <code>data</code>. However, having to resort to <code>cast</code> feels like a sign of something being wrong with the code (in this case, at least). Also, I would have to use <code>cast</code> each time this relationship is used in the code. It would be better to solve it in one place.</p>
<h3>Check <code>result.data is not None</code></h3>
<p>In the above example, the relationship between <code>success</code> and <code>data</code> is quite simple: <code>success == data is not None</code>. So, I could remove the attribute <code>success</code> altogether and, instead, check for <code>result.data is not None</code>.</p>
<pre class="lang-py prettyprint-override"><code>def check(inputs: str) -> bool:
return (result := compute(inputs)).data is not None and result.data > 2
</code></pre>
<p>While this works, the real use case is more complex and there are various data fields, e.g., <code>data_x</code>, <code>data_y</code>, and <code>data_z</code>. In this case, <code>success == all(d is not None for d in [data_x, data_y, data_z])</code>. Using this as a check is too verbose and so I would refactor it into a <code>property</code> of the <code>Result</code> class. For the above example this would be:</p>
<pre class="lang-py prettyprint-override"><code>@dataclass
class Result:
data: Optional[int]
@property
def success(self) -> bool:
return self.data is not None
</code></pre>
<p>However, when the <code>is not None</code> check is moved into a property, <code>mypy</code> cannot infer anymore that <code>result.data</code> really is not <code>None</code> when <code>result.success</code> is <code>True</code>.</p>
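A pattern mypy does narrow natively is a tagged union: model success and failure as separate classes and let `isinstance` discriminate, which removes the Optional coupling entirely. A sketch of the example rewritten that way:

```python
from dataclasses import dataclass
from typing import Literal, Union

@dataclass
class Success:
    success: Literal[True]
    data: int          # no Optional: data always exists on a Success

@dataclass
class Failure:
    success: Literal[False]

Result = Union[Success, Failure]

def compute(inputs: str) -> Result:
    if inputs.startswith('!'):
        return Failure(success=False)
    return Success(success=True, data=len(inputs))

def check(inputs: str) -> bool:
    result = compute(inputs)
    # isinstance narrows the union, so mypy knows result.data is an int here
    return isinstance(result, Success) and result.data > 2

print(check('123'), check('12'), check('!123'))  # True False False
```

The extra `data_x`, `data_y`, `data_z` fields would all become plain (non-Optional) fields on `Success`, so every access is narrowed by the single `isinstance` check.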
|
<python><mypy><python-typing>
|
2024-01-04 16:40:30
| 1
| 36,629
|
a_guest
|
77,759,545
| 9,366,726
|
I get a Max retries exceeded (_ssl.c:2427) error when trying to inject with LlamaIndex data in a pinecone DB
|
<p>I get a Max retries exceeded with url:</p>
<pre><code>(Caused by SSLError(SSLEOFError(8, 'EOF occurred in violation of protocol (_ssl.c:2427)')))
</code></pre>
<p>When trying to inject data with LlamaIndex into a Pinecone DB I get the following error:</p>
<pre><code>
LlamaIndex_Doc_Helper-JJYEcwwZ\Lib\site-packages\urllib3\util\retry.py", line 515, in increment
raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type]
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
urllib3.exceptions.MaxRetryError: HTTPSConnectionPool(host='llamaindex-documentation-helper-ksad0bm.svc.gcp-starter.pinecone.io', port=443): Max retries exceeded with url: /vectors/upsert (Caused by SSLError(SSLEOFError(8, 'EOF occurred in violation of protocol (_ssl.c:2427)')))
Upserted vectors: 0%| | 0/18 [00:02<?, ?it/s]
</code></pre>
<p>My Python code:</p>
<pre><code>#--- Indexing in Pinecone
# Init. Pinecone Index object
index_name = "llamaindex-documentation-helper"
pinecone_index = pinecone.Index(index_name=index_name)
# Init. vector store object
vector_store = PineconeVectorStore(pinecone_index=pinecone_index)
# Init storage context
storage_context = StorageContext.from_defaults(vector_store=vector_store)
# Creates
index = VectorStoreIndex.from_documents(
documents=documents, # documents = dir_reader.load_data()
storage_context=storage_context, # storage_context = StorageContext.from_defaults(vector_store=vector_store)
service_context=service_context, # service_context = ServiceContext.from_defaults(llm=llm, embed_model=embed_model, node_parser=node_parser)
show_progress=True,
)
</code></pre>
<p>I disabled firewalls, VPN, and antivirus programs. I am using Windows 11 Pro with Python 3.12 in a pipenv, and <code>certifi</code> is up to date (<code>pip install --upgrade certifi</code>).</p>
<p>I can also ping the server:</p>
<pre><code>LlamaIndex Tutorial\LlamaIndex Doc Helper> ping llamaindex-documentation-helper-ksad0bm.svc.gcp-starter.pinecone.io
Pinging ingress.gcp-starter.pinecone.io [34.160.88.44] with 32 bytes of data:
Reply from 34.160.88.44: bytes=32 time=10ms TTL=114
Reply from 34.160.88.44: bytes=32 time=9ms TTL=114
Reply from 34.160.88.44: bytes=32 time=11ms TTL=114
Reply from 34.160.88.44: bytes=32 time=9ms TTL=114
Ping statistics for 34.160.88.44:
Packets: Sent = 4, Received = 4, Lost = 0 (0% loss),
Approximate round trip times in milli-seconds:
Minimum = 9ms, Maximum = 11ms, Average = 9ms
</code></pre>
<p>Any comments on how to maybe fix the issue would be greatly appreciated</p>
|
<python><llama-index><pinecone>
|
2024-01-04 15:52:07
| 1
| 476
|
Alex Ricciardi
|
77,759,542
| 9,318,323
|
SqlAlchemy why does engine create more connection than "permitted"
|
<p><strong>Situation</strong></p>
<p>I am migrating some code from pyodbc to SqlAlchemy. I noticed that during some tests I get more active connections than previously. The old pyodbc code stays at 2 connections at most, while the new SqlAlchemy code reaches 7.</p>
<p>To check connections I just run a query from the db using SSMS while my tests are running.
I am using pyodbc==5.0.1 and SQLAlchemy==2.0.23, Windows 10.</p>
<p><strong>Goal</strong></p>
<p>I want to keep the number of connections at 5 at most. I tried tinkering with <code>poolclass=NullPool</code>, <code>pool_size</code>, <code>max_overflow</code>, <code>pool_recycle</code>, <code>pool_timeout</code>, etc., but nothing works. How do I keep the number of connections below a certain limit at all times?</p>
<p>Here's a rough example of my Python code:</p>
<pre><code>import pandas as pd
import sqlalchemy as sa
connection_url = sa.engine.URL.create("mssql+pyodbc", query={"odbc_connect": "Driver={ODBC Driver 17..."})
engine = sa.create_engine(
connection_url,
fast_executemany=fast_executemany,
# poolclass=NullPool # this
pool_size=1, # or this
max_overflow=2 # and this do not influence anything
)
def get_connection():
global engine
return engine.connect()
# old code
# return pyodbc.connect("Driver={ODBC Driver 17...")
def sql_query(query, params=None):
connection = get_connection()
res = pd.read_sql_query(query, connection, params=params)
connection.close()
def some_test_mock():
res = sql_query("1")
res = sql_query("2")
res = sql_query("3")
for i in range(2):
for j in range(2):
res = sql_query(f"{i} {j}")
res = sql_query("4")
</code></pre>
<p>Query to check connections:</p>
<pre><code>SELECT
DB_NAME(dbid) as DBName,
COUNT(dbid) as NumberOfConnections,
loginame as LoginName
FROM
sys.sysprocesses with (nolock)
WHERE
dbid > 0
and ecid=0
and loginame = 'testsqluser'
GROUP BY
dbid, loginame
--the number of connections climb from 1,2,3 to 7
--previously it would climb and stay at 2
</code></pre>
|
<python><sql-server><sqlalchemy>
|
2024-01-04 15:51:41
| 1
| 354
|
Vitamin C
|
77,759,524
| 17,597,213
|
A more efficient regular expression for extracting astrological house data?
|
<p>I wrote a script to extract astrological house data from a birth chart PDF. I am wondering if there is a more efficient regular expression I could be using.</p>
<p><strong>Current pattern:</strong></p>
<pre class="lang-py prettyprint-override"><code>house_pattern = r'([A-Z]{2}|[A-Z][a-z]+\.|[0-9]|[0-9]{2})\s+([a-z])\s+(\d+°+.\d+\'+.\d+\")'
</code></pre>
<p><strong>Example dataset:</strong></p>
<pre><code>Houses (Plac.) Declination
Asc. j 3°23'49" 23°23'37" S
2 k 13°38'12" 16°43'48" S
3 l 25°39'11" 1°43'39" S
IC a 28°32'56" 10°57'28" N
5 b 23° 5'14" 18°32'35" N
6 c 13°27'11" 22°24'45" N
Desc. d 3°23'49" 23°23'37" N
8 e 13°38'12" 16°43'48" N
9 f 25°39'11" 1°43'39" N
MC g 28°32'56" 10°57'28" S
11 h 23° 5'14" 18°32'35" S
12 i 13°27'11" 22°24'45" S
</code></pre>
<p><strong>Desired (and current) results:</strong>
The format is: House number or point, zodiac sign, degree/arcminute/arcsecond.</p>
<pre><code>('Asc.', 'j', '3°23\'49"')
('2', 'k', '13°38\'12"')
('3', 'l', '25°39\'11"')
('IC', 'a', '28°32\'56"')
('5', 'b', '23° 5\'14"')
('6', 'c', '13°27\'11"')
('Desc.', 'd', '3°23\'49"')
('8', 'e', '13°38\'12"')
('9', 'f', '25°39\'11"')
('MC', 'g', '28°32\'56"')
('11', 'h', '23° 5\'14"')
('12', 'i', '13°27\'11"')
</code></pre>
<p>Currently, '<em>the house number / point</em>' group uses a ton of overly specific OR statements. How can I refactor to get the same results?</p>
<p>I tried the pattern above and tested on regex101.com. I did a bit of google searching but I'm struggling to find an answer similar enough to fit my use case. My current pattern does return the correct results but seems as though it could be way better.</p>
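Since the house label is always the first whitespace-delimited token on each line, one option is to stop enumerating its possible forms and match any non-space run; the single lowercase sign letter and the angle that follow keep the header line and the declination column from matching. A sketch, verified only against the sample rows above:

```python
import re

data = '''Asc. j 3°23'49" 23°23'37" S
2 k 13°38'12" 16°43'48" S
5 b 23° 5'14" 18°32'35" N
MC g 28°32'56" 10°57'28" S'''

# label = any non-space token; sign = one lowercase letter;
# angle = degrees (optionally space-padded), minutes, seconds
house_pattern = r'(\S+)\s+([a-z])\s+(\d+°\s?\d+\'\d+")'

matches = re.findall(house_pattern, data)
for m in matches:
    print(m)
```

The declination values never match because the letter after them (`N`/`S`) is uppercase, so the second group fails there, just as in the original pattern.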
|
<python><regex>
|
2024-01-04 15:49:21
| 2
| 617
|
Jordan
|
77,759,361
| 16,503,741
|
Pydantic refusing a model that clearly has an annotation
|
<p>I have a clearly annotated pydantic model:</p>
<pre><code>class CurrencyCode(TripleApiModel, RootModel[str]):
"""3-character ISO-4217 currency code"""
root: Annotated[
str,
Field(
"USD",
description="3-character ISO-4217 currency code",
examples=["USD"],
max_length=3,
min_length=3,
pattern=r"^[A-Z]{3}$",
),
]
</code></pre>
<p>But when I use the model in another class as a type, I get an error.</p>
<pre><code>class MyTestModel(BaseModel):
currency_code: CurrencyCode
</code></pre>
<pre><code>E pydantic.errors.PydanticUserError: A non-annotated attribute was detected: `currency_code = <class 'api_models.models.CurrencyCode'>`. All model fields require a type annotation; if `currency_code` is not meant to be a field, you may be able to resolve this error by annotating it as a `ClassVar` or updating `model_config['ignored_types']`.
E
E For further information visit https://errors.pydantic.dev/2.5/u/model-field-missing-annotation
</code></pre>
<p>I have followed the instructions at the link provided by the error, but none of the suggestions eliminate the error.</p>
<p>How can I use my bespoke class as a type for pydantic?</p>
|
<python><pydantic>
|
2024-01-04 15:23:29
| 0
| 339
|
Jelkimantis
|
77,759,260
| 2,923,617
|
Django AppRegistryNotReady when running Unittests from vscode
|
<p>My UnitTest test cases in a Django project run fine when running them directly from the Django utility (<code>python manage.py test</code>). However, when trying to run them from VSCode's test explorer, I get <code>django.core.exceptions.AppRegistryNotReady: Apps aren't loaded yet.</code>. I found several similar questions that were already answered here on SO, but none of the solutions seem to work. Here's what I tried:</p>
<ul>
<li><p>import my classes (e.g. <code>models</code>) within the <code>SetUpTestData</code> method and the actual test methods of the test class, rather than at the top of the file. This seemed to help a bit, in the sense that the tests now do show up in the test explorer (previously they didn't), but I still get an error when running them.</p>
</li>
<li><p>set the <code>DJANGO_SETTINGS_MODULE</code> environment variable at the top of the file containing the test class:</p>
<pre><code> import os
os.environ.setdefault("DJANGO_SETTINGS_MODULE", "my_project.settings")
</code></pre>
</li>
<li><p>add a <code>.env</code> file to my root directory, containing <code>DJANGO_SETTINGS_MODULE = my_project.settings</code></p>
</li>
<li><p>add a configuration with purpose <code>debug-test</code> to <code>launch.json</code>:</p>
<pre><code> "configurations": [
{
"name": "Django test",
"type": "python",
"request": "launch",
"program": "${workspaceFolder}\\manage.py",
"args": [
"test"
],
"purpose": ["debug-test"],
"django": true,
"justMyCode": true
}
]
</code></pre>
</li>
<li><p>remove <code>tests.py</code> from my app folder, as my tests are stored in a separate folder (and apparently Django can't have both)</p>
</li>
</ul>
<p>My folder structure looks like this:</p>
<pre><code>root
|_ my_app
|_ tests
|_ __init__.py
|_ test_models.py
|_ my_project
</code></pre>
<p>For the sake of completeness, I have the following in settings.json:</p>
<pre><code>{
"python.testing.unittestArgs": [
"-v",
"-s",
".",
"-p",
"test_*.py"
],
"python.testing.pytestEnabled": false,
"python.testing.unittestEnabled": true
}
</code></pre>
<p>Is there any way to make VSCode's test explorer play nice with Django and UnitTest?</p>
|
<python><django><visual-studio-code>
|
2024-01-04 15:08:00
| 1
| 486
|
NiH
|
77,759,146
| 5,798,201
|
Issue installing openai in Colab
|
<p>Hi, after the Christmas break I have problems installing openai in Google Colab.</p>
<p>I'm doing this, which worked until mid-December 2023:</p>
<pre><code>!pip install --upgrade pip
!pip install --upgrade --quiet openai
import openai
</code></pre>
<p>Now I get this error:</p>
<pre><code>ImportError Traceback (most recent call last)
<ipython-input-6-727253859e14> in <cell line: 4>()
2 get_ipython().system('pip install --upgrade --quiet openai')
3
----> 4 import openai
5 frames
/usr/local/lib/python3.10/dist-packages/openai/_utils/_streams.py in <module>
1 from typing import Any
----> 2 from typing_extensions import Iterator, AsyncIterator
3
4
5 def consume_sync_iterator(iterator: Iterator[Any]) -> None:
ImportError: cannot import name 'Iterator' from 'typing_extensions' (/usr/local/lib/python3.10/dist-packages/typing_extensions.py)
</code></pre>
<p>I tested installing older versions of openai and also upgrading and uninstalling/reinstalling typing_extensions, but so far without success.</p>
|
<python><google-colaboratory><openai-api>
|
2024-01-04 14:49:11
| 3
| 1,812
|
Tobi
|
77,759,057
| 2,932,907
|
PyEZ - Getting RpcTimeoutError after succesful commit even though timeout value is set to a high value
|
<p>I've got a Python (3.10) script using the PyEZ library to connect to Junos devices and commit various display set commands. Although both of these steps are successful, after the candidate configuration is committed I get an RpcTimeoutError, no matter what timeout value I set for the Device class and inside the commit() method of the Config class. I just don't get why this happens. The commit is done well before the timeout expires, and the commit_config() method should therefore return True.</p>
<p>The display set commands I commit:</p>
<pre><code>delete interfaces ge-0/0/0 unit 500
delete class-of-service interfaces ge-0/0/0 unit 500
delete routing-options rib inet6.0 static route <ipv6 route>,
</code></pre>
<p>The error:</p>
<pre><code>Error: RpcTimeoutError(host: hostname, cmd: commit-configuration, timeout: 360)
</code></pre>
<p>Relevant code is below:</p>
<pre><code>DEVICE_TIMEOUT = 360 # RPC timeout value in seconds
DEVICE_AUTOPROBE = 15
class JunosDeviceConfigurator:
def __init__(self, user=NETCONF_USER, password=NETCONF_PASSWD) -> None:
self.user = user
self.password = password
self.device = None
Device.auto_probe = DEVICE_AUTOPROBE
Device.timeout = DEVICE_TIMEOUT
def connect(self) -> bool:
try:
self.device = Device(
host=self._hostname,
user=self.user,
passwd=self.password,
port=22, huge_tree=True,
gather_facts=True,
timeout=DEVICE_TIMEOUT)
self.device.open()
self.device.timeout = DEVICE_TIMEOUT
self.logger.info(f'Connected to {self._hostname}')
return True
except ConnectRefusedError as err:
self.logger.error(f'Connection refused to {self._hostname}: {str(err)}')
return False
except ConnectError as err:
self.logger.error(f'Connection to {self._hostname} failed: {str(err)}')
return False
except Exception as err:
self.logger.error(f'Error connecting to {self._hostname}: {str(err)}')
return False
def commit_config(self, commands: list, mode = 'exclusive'):
if not self.device:
self.connect()
try:
with Config(self.device, mode=mode) as cu:
for command in commands:
cu.load(command, format='set')
cu.commit(timeout=DEVICE_TIMEOUT)
return True
except Exception as e:
self.logger.error(f'Error: {str(e)}')
return False
</code></pre>
|
<python><pyez>
|
2024-01-04 14:36:01
| 1
| 503
|
Beeelze
|
77,758,924
| 3,405,291
|
matplotlib figures are empty
|
<p>I run this Python code: <a href="https://github.com/calculix/beso" rel="nofollow noreferrer">https://github.com/calculix/beso</a></p>
<h2>Config</h2>
<p>The <code>beso_conf.py</code> file is modified. These variables:</p>
<pre class="lang-py prettyprint-override"><code>path_calculix = "C:\\Users\\m3\\Downloads\\calculix_2.21_4win\\ccx_static.exe" # path to the CalculiX solver
file_name = "C:\\Users\\m3\\AppData\\Local\\Temp\\result.inp" # file with prepared linear static analysis
cpu_cores = 1 # 0 - use all processor cores, N - will use N number of processor cores
</code></pre>
<h2>Run</h2>
<pre><code>cd C:\Users\m3\repos\beso # Go to BESO repository folder
pip install virtualenv --user # Install virtual env, also: https://github.com/pypa/pip/issues/11845
python -m venv virtual_env # Create a virtual env
virtual_env\Scripts\activate.bat # Activate the virtual env
pip install numpy # Install a dependency
pip install matplotlib # Install another dependency
python beso_main.py # Run BESO. It uses `beso_conf.py` config file.
</code></pre>
<h2>Visualize</h2>
<p>Assuming BESO went through 62 iterations, we can visualize the last iteration by:</p>
<pre><code>c:\Users\m3\Downloads\calculix_2.21_4win\cgx_STATIC.exe -c file062_state1.inp
</code></pre>
<h1>Problem</h1>
<p>For some reason the <code>matplotlib</code> Python figures are empty. But I cannot figure out why:</p>
<p><a href="https://i.sstatic.net/NcBwa.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/NcBwa.png" alt="Empty plot" /></a></p>
<p>They should display how BESO iterations progress, like below:</p>
<p><a href="https://i.sstatic.net/aUpDO.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/aUpDO.png" alt="What plot should display" /></a></p>
|
<python><matplotlib>
|
2024-01-04 14:14:56
| 1
| 8,185
|
Megidd
|
77,758,901
| 8,384,910
|
Multispectral image to RGB in Python
|
<p>I would like to convert a multispectral image to RGB. After parsing said image, it's an ndarray of shape <code>(w, h, c)</code>, where c is the hundred-or-so wavelengths that were measured. Null measurements are -999, not 0.</p>
<p>For a single pixel, it is easy to use the <a href="https://www.colour-science.org/" rel="nofollow noreferrer"><code>colour</code></a> library to turn the "intensities" of each wavelength into an sRGB colour:</p>
<pre class="lang-py prettyprint-override"><code>import colour
pixel_intensities = {380: 0.048, 385: 0.051, 390: 0.055}
sd = colour.SpectralDistribution(pixel_intensities)
cmfs = colour.MSDS_CMFS["CIE 1931 2 Degree Standard Observer"]
illuminant = colour.SDS_ILLUMINANTS["D65"]
XYZ = colour.sd_to_XYZ(sd, cmfs, illuminant)
red, green, blue = colour.XYZ_to_sRGB(XYZ)
</code></pre>
<p>A naive solution for applying this to the entire image would be to go through each pixel and run it through the function one at a time. I found that it was very slow.</p>
<p>Another naive solution would be to cherry pick only three wavelengths: one for the red channel, one for the green channel, and one for the blue channel. This yielded decent results and executed almost instantly in comparison. However, I found that the colours were not very accurate.</p>
<p>Is there an alternative solution that also requires little code?</p>
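The per-pixel conversion is just a weighted sum over wavelengths, so it vectorizes into a single <code>einsum</code> over the whole cube. A hedged numpy sketch, assuming you can supply colour-matching-function and illuminant arrays sampled at the image's wavelengths (the <code>colour</code> library can provide properly sampled ones); the matrix is the standard XYZ-to-linear-sRGB (D65) matrix, and the -999 nulls are zeroed first:

```python
import numpy as np


def msi_to_srgb(cube, cmfs, illuminant, k=None):
    """Vectorized spectral -> sRGB.

    cube:       (w, h, c) intensities, nulls encoded as -999
    cmfs:       (c, 3) x̄, ȳ, z̄ observer samples at the image wavelengths
    illuminant: (c,) relative spectral power of the illuminant
    """
    cube = np.where(cube == -999, 0.0, cube)
    # XYZ_i = k * sum_over_lambda( S(l) * I(l) * cmf_i(l) )
    weighted = cmfs * illuminant[:, None]          # (c, 3)
    if k is None:
        k = 100.0 / weighted[:, 1].sum()           # perfect reflector -> Y = 100
    xyz = k * np.einsum('whc,ci->whi', cube, weighted) / 100.0
    # Linear XYZ -> linear sRGB (D65), then the sRGB gamma encoding.
    m = np.array([[ 3.2406, -1.5372, -0.4986],
                  [-0.9689,  1.8758,  0.0415],
                  [ 0.0557, -0.2040,  1.0570]])
    rgb = np.clip(xyz @ m.T, 0.0, 1.0)
    return np.where(rgb <= 0.0031308, 12.92 * rgb, 1.055 * rgb ** (1 / 2.4) - 0.055)
```

With real CMF/illuminant samples this reproduces the per-pixel `colour` pipeline in one pass; toy arrays only exercise the shapes.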
|
<python><colors>
|
2024-01-04 14:11:58
| 1
| 9,414
|
Richie Bendall
|
77,758,714
| 393,010
|
How to type annotate converting envvar to integer?
|
<p>How would I get mypy to accept this code?</p>
<pre><code>try:
DEBUG = int(os.getenv("DEBUG")) > 0
except ValueError:
DEBUG = False
</code></pre>
<p>Current diagnostic is <code>mypy: Argument 1 to "int" has incompatible type "str | None"; expected "str | Buffer | SupportsInt | SupportsIndex | SupportsTrunc" [arg-type]</code></p>
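`os.getenv` is typed `str | None` because the variable may be unset; passing a string default selects the `str`-only overload, which satisfies mypy and handles the unset case in one go. A sketch (wrapped in a function only to make it reusable and testable):

```python
import os


def debug_enabled(var: str = "DEBUG") -> bool:
    # getenv with a default returns plain `str`, not `str | None`,
    # so int() type-checks; ValueError still covers non-numeric values.
    try:
        return int(os.getenv(var, "0")) > 0
    except ValueError:
        return False
```

The original module-level form works the same way: `DEBUG = int(os.getenv("DEBUG", "0")) > 0` inside the existing `try`/`except`.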
|
<python><mypy>
|
2024-01-04 13:39:38
| 1
| 5,626
|
Moberg
|
77,758,645
| 10,847,096
|
Get accelerate package to log test results with huggingface Trainer
|
<p>I am fine-tuning a T5 model on a specific dataset and my code looks like this:</p>
<pre class="lang-py prettyprint-override"><code>accelerator = Accelerator(log_with='wandb')
tokenizer = T5Tokenizer.from_pretrained('t5-base')
model = T5ForConditionalGeneration.from_pretrained('t5-base')
accelerator.init_trackers(
project_name='myProject',
config={
# My configs
}
)
# Then I do some preparations towards the fine-tuning
trainer_arguments = transformers.Seq2SeqTrainingArguments(
# Here I pass many arguments
)
trainer = transformers.Seq2SeqTrainer(
# Here I pass the arguments along side other needed arguments
)
# THEN FINALLY I TRAIN, EVALUATE AND TEST LIKE SO:
trainer.train()
trainer.evaluate( #evaluation parameters# )
trainer.predict( #test arguments# )
</code></pre>
<p>Now to my main issue: when I check the <code>wandb</code> site for my project, I only see logging for the <code>trainer.train()</code> phase, but not for the <code>trainer.evaluate()</code> or <code>trainer.predict()</code> phases.<br><br></p>
<p>I've scoured the web trying to find a solution but could not find any.<br></p>
<p>How do I get wandb/accelerate to log all of my phases?
<br>
Thanks!</p>
<p>For the full code, you can see it here:
<a href="https://github.com/zbambergerNLP/principled-pre-training/blob/master/fine_tune_t5.py" rel="nofollow noreferrer">https://github.com/zbambergerNLP/principled-pre-training/blob/master/fine_tune_t5.py</a></p>
|
<python><huggingface><wandb><accelerate>
|
2024-01-04 13:28:04
| 1
| 993
|
Ofek Glick
|
77,758,605
| 5,220,257
|
Get full text from a LayoutLM
|
<p>I am using LayoutLM to read receipts and get text from the invoices. I am using this model from HuggingFace "philschmid/lilt-en-funsd". Given below is the code snippet:</p>
<pre><code>def run_inference(image_path, model=model, processor=processor, output_image=True):
# Load image from the path
image = Image.open(image_path).convert("RGB")
# get predictions
encoding = processor(image, return_tensors="pt")
del encoding["pixel_values"]
outputs = model(**encoding)
predictions = outputs.logits.argmax(-1).squeeze().tolist()
labels = [model.config.id2label[prediction] for prediction in predictions]
boxes = encoding["bbox"][0].tolist()
model_name = model.name_or_path.split('/')[-1]
if output_image:
image_with_boxes = draw_boxes(image, encoding["bbox"][0], labels)
b_answer_boxes = [encoding["bbox"][0][i].detach().numpy() for i, label in enumerate(labels) if label == "B-ANSWER"]
b_answer_texts = extract_text_from_boxes(image, b_answer_boxes, image_path, model_name)
return draw_boxes(image, encoding["bbox"][0], labels), b_answer_texts
else:
return draw_boxes(image, encoding["bbox"][0], labels), []
</code></pre>
<p>The issue is that it does extract the "B-ANSWER" tags correctly, but they are split into multiple boxes, as shown in the image below:</p>
<p><a href="https://i.sstatic.net/TFhST.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/TFhST.jpg" alt="enter image description here" /></a></p>
<p>I would like to only extract the items, quantity, and price from the receipt. Any help on this would be much appreciated, thanks!</p>
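One hedged post-processing idea is to stitch the split B-ANSWER boxes back together before OCR-ing each region: group boxes whose vertical centres line up and whose horizontal gap is small, then take the union of each group. The `y_tol`/`x_gap` thresholds below are guesses to tune to the receipt resolution:

```python
def merge_line_boxes(boxes, y_tol=5, x_gap=20):
    """Merge (x0, y0, x1, y1) boxes that sit on the same text line
    and nearly touch horizontally into a single bounding box."""
    merged = []
    for box in sorted(boxes, key=lambda b: (b[1], b[0])):
        for i, m in enumerate(merged):
            same_line = abs((box[1] + box[3]) / 2 - (m[1] + m[3]) / 2) <= y_tol
            adjacent = box[0] - m[2] <= x_gap and m[0] - box[2] <= x_gap
            if same_line and adjacent:
                # Grow the existing box to cover both.
                merged[i] = (min(m[0], box[0]), min(m[1], box[1]),
                             max(m[2], box[2]), max(m[3], box[3]))
                break
        else:
            merged.append(tuple(box))
    return merged
```

Running the question's `extract_text_from_boxes` on the merged boxes should then return whole line items (item, quantity, price) rather than fragments.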
|
<python><deep-learning><ocr><huggingface-transformers><text-extraction>
|
2024-01-04 13:23:11
| 1
| 1,490
|
Asim
|
77,758,565
| 3,387,716
|
How to iterate over the remaining items in a dict while looping through that dict?
|
<p>Is there a cleaner way to iterate over the follow-up keys of a dict when iterating over a dict?</p>
<pre class="lang-py prettyprint-override"><code>d = { "a": 1, "b": 2, "c": 3 }
k = list(d.keys())
for i in range(len(k)):
print(k[i] + ":")
for j in range(i+1, len(k)):
print("\t" + k[j])
</code></pre>
<pre class="lang-none prettyprint-override"><code>a:
b
c
b:
c
c:
</code></pre>
<p>It feels like it should be possible to use iterators.</p>
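Yes — `enumerate` plus `itertools.islice` produces the same output without the manual index arithmetic, iterating the dict directly instead of a key list (a sketch):

```python
from itertools import islice

d = {"a": 1, "b": 2, "c": 3}


def followers(d):
    """Map each key to the keys that come after it in iteration order."""
    return {k: list(islice(d, i + 1, None)) for i, k in enumerate(d)}


for key, rest in followers(d).items():
    print(f"{key}:")
    for f in rest:
        print(f"\t{f}")
```

Note that `islice` still walks past the skipped keys each time, so the overall work is the same O(n²) as the index version; it only tidies the code.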
|
<python><loops><dictionary>
|
2024-01-04 13:17:52
| 4
| 17,608
|
Fravadona
|
77,758,563
| 9,112,151
|
Rotate the array to the right by k steps
|
<p>Given an integer array nums, rotate the array to the right by k steps, where k is non-negative.</p>
<p>Example:</p>
<p>Input: nums = [1,2,3,4,5,6,7], k = 3</p>
<p>Output: [5,6,7,1,2,3,4]</p>
<p>Explanation:</p>
<p>rotate 1 steps to the right: [7,1,2,3,4,5,6]</p>
<p>rotate 2 steps to the right: [6,7,1,2,3,4,5]</p>
<p>rotate 3 steps to the right: [5,6,7,1,2,3,4]</p>
<p>How to solve the task with O(n) time complexity and O(1) space complexity (if possible)?</p>
<p>P.S. I know solutions with O(kn) time complexity and O(1) space complexity only.</p>
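The classic O(n)-time, O(1)-space answer is the three-reversal trick: reverse the whole array, then reverse the first k elements, then reverse the rest. A sketch:

```python
def rotate(nums, k):
    """Rotate nums right by k steps in-place: O(n) time, O(1) extra space."""
    n = len(nums)
    if n == 0:
        return
    k %= n  # k may exceed n

    def reverse(lo, hi):
        # In-place reversal of nums[lo..hi].
        while lo < hi:
            nums[lo], nums[hi] = nums[hi], nums[lo]
            lo += 1
            hi -= 1

    reverse(0, n - 1)  # [1..7], k=3 -> [7,6,5,4,3,2,1]
    reverse(0, k - 1)  #              [5,6,7,4,3,2,1]
    reverse(k, n - 1)  #              [5,6,7,1,2,3,4]
```

Each element is swapped at most twice, and only a constant number of indices are kept outside the array.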
|
<python>
|
2024-01-04 13:17:29
| 2
| 1,019
|
Альберт Александров
|
77,758,436
| 12,319,746
|
Autogen response in a variable
|
<pre><code>import autogen
from nicegui import ui, context
from uuid import uuid4
# AutoGen Configuration
config_list = [
{
'model': 'gpt-4',
'api_key': ''
}
]
llm_config = {
'seed': 42,
'config_list': config_list,
'temperature': 0.2
}
# Initialize AutoGen Agents
assistant = autogen.AssistantAgent(name='Albert', llm_config=llm_config)
user_proxy = autogen.UserProxyAgent(name='user_proxy', human_input_mode="NEVER", max_consecutive_auto_reply=1, is_termination_msg=lambda x: x.get("content", "").rstrip().endswith("TERMINATE"), code_execution_config={"work_dir": "web"}, llm_config=llm_config)
@ui.page('/')
def main():
messages = []
user_id = str(uuid4()) # Unique ID for each user session
@ui.refreshable
def chat_messages():
for name, text in messages:
ui.chat_message(text=text, name=name, sent=name == 'You')
if context.get_client().has_socket_connection:
ui.run_javascript('setTimeout(() => window.scrollTo(0, document.body.scrollHeight), 0)')
async def send():
user_message = task_input.value
messages.append(('You', user_message)) # Append user's message to the messages list
chat_messages.refresh() # Refresh chat messages to display the latest message
task_input.value = '' # Clear the input field after sending the message
try:
response = await user_proxy.initiate_chat(assistant, message=user_message)
if response and 'content' in response[0]:
assistant_response = response[0]['content']
messages.append(('Albert', assistant_response)) # Append assistant's response to messages
else:
messages.append(('Albert', "Assistant did not provide a response."))
except Exception as e:
messages.append(('Albert', f"Error: {e}"))
finally:
chat_messages.refresh()
with ui.scroll_area().classes('w-full h-60 p-3 bg-white overflow-auto'):
chat_messages()
with ui.footer().style('position: fixed; left: 0; bottom: 0; width: 100%; background: white; padding: 10px; box-shadow: 0 -2px 5px rgba(0,0,0,0.1);'):
task_input = ui.input().style('width: calc(100% - 100px);')
ui.button('Send', on_click=send).style('width: 90px;')
ui.run(title='Chat with Albert')
</code></pre>
<p>I'm trying to use this <a href="https://nicegui.io/" rel="nofollow noreferrer">GUI</a> over Autogen. However, I cannot figure out where the response comes from; the response variable doesn't seem to hold it. When there is an exception, it is printed in the UI; when everything works, Autogen prints the answer in the terminal but not in the UI.</p>
|
<python><artificial-intelligence><large-language-model><ms-autogen>
|
2024-01-04 12:55:32
| 1
| 2,247
|
Abhishek Rai
|
77,758,337
| 2,123,706
|
PerformanceWarning: DataFrame is highly fragmented when creating new columns
|
<p>I have a 1M-row DataFrame with one column that is always 5000 characters long (A-Z, 0-9).</p>
<p>I parse the long column into 972 columns using:</p>
<pre><code>def parse_long_string(df):
df['a001'] = df['long_string'].str[0:2]
df['a002'] = df['long_string'].str[2:4]
df['a003'] = df['long_string'].str[4:13]
df['a004'] = df['long_string'].str[13:22]
df['a005'] = df['long_string'].str[22:31]
df['a006'] = df['long_string'].str[31:40]
....
df['a972'] = df['long_string'].str[4994:]
return(df)
</code></pre>
<p>When I call the function, I get the following warning:</p>
<pre><code>PerformanceWarning: DataFrame is highly fragmented. This is usually the result of calling `frame.insert` many times, which has poor performance. Consider joining all columns at once using pd.concat(axis=1) instead. To get a de-fragmented frame, use newframe = frame.copy()
</code></pre>
<p>Reading <a href="https://stackoverflow.com/questions/68292862/performancewarning-dataframe-is-highly-fragmented-this-is-usually-the-result-o">PerformanceWarning: DataFrame is highly fragmented. This is usually the result of calling `frame.insert` many times, which has poor performance</a>, this issue arises when creating > 100 columns and not specifying the data type of the new column, but each column is automatically a string.</p>
<p>Is there a way around this other than : <code>warnings.simplefilter(action='ignore', category=pd.errors.PerformanceWarning)</code> ?</p>
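Yes — build all the slices as plain Series in a dict and attach them with a single `pd.concat`, which is what the warning itself suggests. A sketch with a handful of illustrative offsets (the real table would list all 972 fields):

```python
import pandas as pd

# Field layout: column name -> (start, stop) slice into the long string.
# These offsets are illustrative, taken from the question's first few fields.
FIELDS = {"a001": (0, 2), "a002": (2, 4), "a003": (4, 13)}


def parse_long_string(df: pd.DataFrame) -> pd.DataFrame:
    s = df["long_string"]
    parts = {name: s.str[lo:hi] for name, (lo, hi) in FIELDS.items()}
    # One concat instead of 972 individual inserts -> no fragmentation.
    return pd.concat([df, pd.DataFrame(parts)], axis=1)
```

Because the 972 Series are assembled outside the frame and joined once, pandas allocates the result in one step instead of re-inserting a block per column.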
|
<python><dataframe><calculated-columns>
|
2024-01-04 12:37:42
| 1
| 3,810
|
frank
|
77,758,277
| 13,518,907
|
CMAKE in requirements.txt file: Install llama-cpp-python for Mac
|
<p>I have put my application into a Docker container and therefore I have created a requirements.txt file. Now I need to install <code>llama-cpp-python</code> for Mac, as I am loading my LLM with <code>from langchain.llms import LlamaCpp</code>.</p>
<p>My installation command specifically for Mac is:</p>
<p><code>CMAKE_ARGS="-DLLAMA_METAL=on" FORCE_CMAKE=1 pip install llama-cpp-python</code></p>
<p>But it does not work if I put this in my "requirements.txt" file.</p>
<p>My requirements.txt file looks as follows:</p>
<pre><code>chromadb==0.4.14
langchain==0.0.354
pandas==2.0.3
python-dotenv==1.0.0
python_box==7.1.1
PyYAML==6.0.1
streamlit==1.29.0
torch==2.1.0
sentence-transformers==2.2.2
faiss-cpu==1.7.4
CMAKE_ARGS="-DLLAMA_METAL=on" FORCE_CMAKE=1 pip install llama-cpp-python # Does not work like this
</code></pre>
<p>My Dockerfile looks like this after the comment from @ivvija:</p>
<pre><code>FROM python:3.11.5
WORKDIR /app
COPY . .
RUN pip3 install -r requirements.txt
RUN CMAKE_ARGS="-DLLAMA_METAL=on" FORCE_CMAKE=1 pip install llama-cpp-python
EXPOSE 8501
HEALTHCHECK CMD curl --fail http://localhost:8501/_stcore/health
ENTRYPOINT ["streamlit", "run", "streamlit_app.py", "--server.port=8501", "--server.address=0.0.0.0"]
</code></pre>
<p>This results in the following error: <code>ERROR: Could not build wheels for llama-cpp-python, which is required to install pyproject.toml-based projects</code></p>
<p>How can I do this?</p>
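`CMAKE_ARGS=... FORCE_CMAKE=1` is shell syntax for setting environment variables, so it cannot live in `requirements.txt`; in a Dockerfile it belongs on the `RUN` line (or in `ENV`). Two further points, hedged: `-DLLAMA_METAL=on` targets Apple's Metal API and does not apply inside a Linux container, and building the wheel needs `cmake` and a C/C++ toolchain installed in the image — a likely cause of the "Could not build wheels" error. A sketch of the adjusted Dockerfile:

```dockerfile
FROM python:3.11.5

# Toolchain needed to compile the llama-cpp-python wheel inside the image.
RUN apt-get update && apt-get install -y --no-install-recommends \
        build-essential cmake \
    && rm -rf /var/lib/apt/lists/*

WORKDIR /app
COPY . .
RUN pip3 install -r requirements.txt

# Metal is macOS-only; inside the Linux container build the default CPU backend.
RUN FORCE_CMAKE=1 pip install llama-cpp-python

EXPOSE 8501
HEALTHCHECK CMD curl --fail http://localhost:8501/_stcore/health
ENTRYPOINT ["streamlit", "run", "streamlit_app.py", "--server.port=8501", "--server.address=0.0.0.0"]
```

The Metal flag would only matter when installing directly on the Mac host, outside Docker.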
|
<python><macos><llama-cpp-python><llamacpp>
|
2024-01-04 12:28:49
| 2
| 565
|
Maxl Gemeinderat
|
77,758,047
| 10,128,276
|
Encountering ImportError with tesserocr: Symbol not found in flat namespace '__ZN9tesseract11TessBaseAPID1Ev'
|
<p>I'm attempting to use tesserocr in my Python project, but when I try to import it I run into an ImportError (initially I was getting <code>No module named 'tesserocr'</code>). The error message points to a missing symbol related to the Tesseract library. Here is the full error message:</p>
<pre><code>ImportError: dlopen(/Volumes/WorkSpace/Backend/Reveratest/revera_api/venv/lib/python3.8/site-packages/tesserocr.cpython-38-darwin.so, 0x0002): symbol not found in flat namespace '__ZN9tesseract11TessBaseAPID1Ev'</code></pre>
<p>Here's what I've done so far:</p>
<ol>
<li><p>Installed tesseract using brew: <code>brew install tesseract</code></p>
</li>
<li><p>Installed tesserocr via pip within a virtual environment: <code>pip install tesserocr</code></p>
</li>
<li><p>Confirmed that tesseract command-line works correctly. I'm running
this on macOS M1 and my Python version is 3.8.</p>
</li>
</ol>
<p>How can I resolve this issue so that I can use tesserocr in my project? Are there specific paths or configurations I'm possibly missing?</p>
<p>Any assistance or pointers as to what might be causing this error would be greatly appreciated!</p>
|
<python><macos><tesseract><importerror>
|
2024-01-04 11:43:51
| 1
| 617
|
Jim Khan
|
77,757,962
| 13,094,333
|
Understanding Python async await coroutines (blocking loop example)
|
<p>I am trying to understand coroutines. I know that they run on a single thread and that the <code>await</code> keyword means "give control back to other running coroutines and give them a chance to run".</p>
<p>I have the following example:</p>
<pre class="lang-py prettyprint-override"><code>import asyncio
import time
async def long_function():
# it takes almost 1 sec to execute it
for _ in range(50000000):
pass
async def count():
for x in range(10):
print(x)
#await asyncio.sleep(0)
await long_function()
async def gather():
await asyncio.gather(count(), count())
asyncio.run(gather())
</code></pre>
<p>I would expect this code to run asynchronously and give output like this:</p>
<pre><code>0
0
1
1
2
2
...
</code></pre>
<p>However it gives:</p>
<pre><code>0
1
2
3
...
0
1
2
3
...
</code></pre>
<p>When I uncomment the <code>asyncio.sleep(0)</code> line, the output is interleaved as expected.</p>
<p>Can anyone explain why using <code>await long_function()</code> is not enough to pass control to the other coroutine, and how this works at a lower level?</p>
<p>Thank you.</p>
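The key point is that `await long_function()` transfers control into the coroutine much like an ordinary function call; a switch only happens when something inside actually suspends back to the event loop, and a pure CPU loop never does. `asyncio.sleep(0)` exists precisely to insert such a suspension point. A sketch showing the difference:

```python
import asyncio

order = []


async def long_function(tag):
    for _ in range(5):
        sum(range(10000))        # stand-in for CPU-bound work
        order.append(tag)
        await asyncio.sleep(0)   # explicit suspension point: yield to the loop


async def main():
    await asyncio.gather(long_function("A"), long_function("B"))


asyncio.run(main())
print(order)  # strict alternation, e.g. ['A', 'B', 'A', 'B', ...]
```

Without the `sleep(0)` line, each coroutine runs its loop to completion before the other starts, which is exactly the sequential output observed in the question.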
|
<python><async-await><python-asyncio>
|
2024-01-04 11:30:23
| 2
| 412
|
Jack Scandall
|
77,757,929
| 1,095,967
|
"Go to Definition" not working in VS Code with Robot Framework Language Server
|
<p>When I use the context menu on a keyword in a file and select "Go to Definition", I always get a "No Definition Found" pop-up.</p>
<p>I've got the Robot Framework Language Server extension installed. Python (3.11) and Pylance.</p>
<p>I've tried disabling/reenabling extensions and restarting code and changing the Python language server.</p>
<p>How can I re-enable the "Go to Definition" functionality, please?</p>
<p>I've tried the following:
<a href="https://stackoverflow.com/questions/37341849/vscode-go-to-definition-not-working">VSCode "go to definition" not working</a></p>
<p>This functionality was previously working with my VS Code / Python / Robot Framework setup.</p>
<p><a href="https://i.sstatic.net/Ro7w1.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Ro7w1.png" alt="enter image description here" /></a></p>
|
<python><visual-studio-code><robotframework>
|
2024-01-04 11:24:15
| 1
| 661
|
MaxRussell
|
77,757,731
| 1,119,391
|
uscxml (statecharts) and Python
|
<p>Are there any examples of <code>uscxml</code> (statechart) code using the Python bindings?</p>
<p>E.g. I'm having the following problem: running <code>uscxml-browser -v <scxml-chart-file></code> works (showing me it's going through the transitions), but how do I run this as Python code?</p>
|
<python><statechart>
|
2024-01-04 10:54:55
| 0
| 335
|
stustd
|
77,757,473
| 4,393,414
|
Prevent "smem" command to use terminal width, Need to use custom width
|
<p>I want <code>smem</code> to capture its whole output, i.e. the command name should be captured in full regardless of terminal width.</p>
<p>I see there is a <code>-a</code> option, but it is tied to the current terminal width.</p>
<p>I want to use a custom width, so I tried setting <code>env={'COLUMNS':'512'}</code> in <code>subprocess.check_output()</code>, but this does not work for the <code>smem</code> command, although it works for <code>top</code>.</p>
<p><code>smem</code> should use a width of <code>512</code>, but it only uses up to the terminal width.</p>
<p>I am using a <code>Linux</code> environment.</p>
|
<python><linux><subprocess>
|
2024-01-04 10:19:02
| 1
| 745
|
Sanket
|