| QuestionId int64 74.8M-79.8M | UserId int64 56-29.4M | QuestionTitle stringlengths 15-150 | QuestionBody stringlengths 40-40.3k | Tags stringlengths 8-101 | CreationDate stringdate 2022-12-10 09:42:47 to 2025-11-01 19:08:18 | AnswerCount int64 0-44 | UserExpertiseLevel int64 301-888k | UserDisplayName stringlengths 3-30 ⌀ |
|---|---|---|---|---|---|---|---|---|
78,108,880
| 2,998,077
|
Error on Google Colab on an installed package
|
<p>In Google Colab, I am running the lines below.</p>
<pre><code>!pip3 install -U scipy
!git clone https://github.com/jnordberg/tortoise-tts.git
%cd tortoise-tts
!pip3 install transformers==4.19.0
!pip3 install -r requirements.txt
!python3 setup.py install
import torch
import torchaudio
import torch.nn as nn
import torch.nn.functional as F
import IPython
from tortoise.api import TextToSpeech # problem here
from tortoise.utils.audio import load_audio, load_voice, load_voices
tts = TextToSpeech()
</code></pre>
<p>This line has problem:</p>
<pre><code>from tortoise.api import TextToSpeech
</code></pre>
<p>The error message says:</p>
<pre class="lang-none prettyprint-override"><code>---------------------------------------------------------------------------
ModuleNotFoundError Traceback (most recent call last)
<ipython-input-4-0f6e1f002713> in <cell line: 9>()
7 import IPython
8
----> 9 from tortoise.api import TextToSpeech
10 from tortoise.utils.audio import load_audio, load_voice, load_voices
11
3 frames
/content/tortoise-tts/tortoise/models/xtransformers.py in <module>
8 from collections import namedtuple
9
---> 10 from einops import rearrange, repeat, reduce
11 from einops.layers.torch import Rearrange
12
ModuleNotFoundError: No module named 'einops'
---------------------------------------------------------------------------
NOTE: If your import is failing due to a missing package, you can
manually install dependencies using either !pip or !apt.
To view examples of installing some common dependencies, click the
"Open Examples" button below.
---------------------------------------------------------------------------
</code></pre>
<p>I've tried <code>!pip3 install einops</code>, which reports "Requirement already satisfied...", but the problem persists.</p>
<p>What went wrong and how can it be corrected?</p>
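A quick way to narrow this down is to check which modules are actually importable in the *current* runtime, since in Colab a `!pip install` can land in a different environment than the running kernel (a restart is sometimes needed after `setup.py install`). This is a small diagnostic sketch, not a fix specific to tortoise-tts; the commented usage line is a hypothetical example:

```python
import importlib.util

def missing_modules(names):
    """Return the names that cannot be imported in the current runtime.

    Useful in Colab to confirm whether a `!pip install` actually reached
    the interpreter you are running in.
    """
    return [n for n in names if importlib.util.find_spec(n) is None]

# Hypothetical usage for the question's dependencies:
# missing_modules(["einops", "torch", "torchaudio", "transformers"])
```

If `einops` shows up as missing here but `pip` says it is installed, the kernel and the pip environment are out of sync and restarting the runtime usually resolves it.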
|
<python><google-colaboratory>
|
2024-03-05 15:37:03
| 0
| 9,496
|
Mark K
|
78,108,839
| 2,599,709
|
restricts doesn't seem to be uploaded when using a private endpoint for matching engine
|
<p>I have a matching engine set up in Google Cloud that is on a VPC. I'm uploading vectors and trying to use the <code>restricts</code> attribute to save some metadata for limiting searches later on.</p>
<p>When I create my datapoint, it seems like the restricts attribute is there:</p>
<pre class="lang-py prettyprint-override"><code>metadata = [
aiplatform_v1.IndexDatapoint.Restriction(namespace='color', allow_list=['red']),
aiplatform_v1.IndexDatapoint.Restriction(namespace='size', allow_list=['medium'])
]
emb_query = [[.5, .5]]
# Create datapoint
id = 101
datapoints = [
aiplatform_v1.IndexDatapoint(
datapoint_id=str(id),
feature_vector=emb_query[0],
restricts=metadata
)
]
print(id, datapoints[0])
</code></pre>
<p>This prints:</p>
<pre><code>(101,
datapoint_id: "101"
feature_vector: 0.5
feature_vector: 0.5
restricts {
namespace: "color"
allow_list: "red"
}
restricts {
namespace: "size"
allow_list: "medium"
})
</code></pre>
<p>So far so good. Now I create the request and send it:</p>
<pre class="lang-py prettyprint-override"><code>upsert_request = aiplatform_v1.UpsertDatapointsRequest(
index=index_name, datapoints=datapoints
)
client.index_client.upsert_datapoints(request=upsert_request)
</code></pre>
<p>Now, for whatever reason, when I query it back, the <code>MatchNeighbor</code> doesn't have the <code>restricts</code> attribute set:</p>
<pre class="lang-py prettyprint-override"><code># Fetch the endpoint
my_index_endpoint = aiplatform.MatchingEngineIndexEndpoint(
index_endpoint_name=INDEX_ENDPOINT_NAME
)
# Execute the request
response = my_index_endpoint.match(
deployed_index_id=DEPLOYED_INDEX_ID,
queries=[QUERY_EMBEDDING],
# The number of nearest neighbors to be retrieved
num_neighbors=1,
)
# print(response)
response[0]
# [MatchNeighbor(id='101', distance=1.0, feature_vector=None, crowding_tag=None, restricts=None, numeric_restricts=None)]
</code></pre>
<p>So why is the <code>restricts</code> attribute now None?</p>
|
<python><gcloud><google-ai-platform>
|
2024-03-05 15:30:36
| 1
| 4,338
|
Chrispresso
|
78,108,792
| 16,759,116
|
Why is builtin sorted() slower for a list containing descending numbers if each number appears twice consecutively?
|
<p>I sorted four similar lists. List <code>d</code> consistently takes much longer than the others, which all take about the same time:</p>
<pre class="lang-none prettyprint-override"><code>a: 33.5 ms
b: 33.4 ms
c: 36.4 ms
d: 110.9 ms
</code></pre>
<p>Why is that?</p>
<p>Test script (<a href="https://ato.pxeger.com/run?1=fZHBasMwDEDZ1V8h2CHxSNM4YbAY-iUhFLtxWkPsBMc9jNIv2aWX7Sf2J_uayUkaGIwadBDSexbSx9fw7k-9vd0-z77dvP08fbeuN-C1UdqDNkPvPDg1KOEJsbCDfJ9lWQgiMKs0bLfAoO0daNAWnLBHFVtaAzxDlSXAEsgTKBJI0xSzfVmWIWoiVzx_gM-GWZIvklVxQIWoON8w7P_noWL9cGGLSYO-rCYN4vIx_hfO7zCbBkMFCZNbYVQYPhLy0EScBLYbPdqPXS9FN8a0Cj31VAmrxZLRNp73GnfCyEZwGHHXqokRpQnYs5HK7RilEzU4bX3cRpcgunK4BM0LUwXw15S1VzBjROcbLqe8n_QX" rel="noreferrer">Attempt This Online!</a>):</p>
<pre class="lang-py prettyprint-override"><code>from timeit import repeat
n = 2_000_000
a = [i // 1 for i in range(n)] # [0, 1, 2, 3, ..., 1_999_999]
b = [i // 2 for i in range(n)] # [0, 0, 1, 1, 2, 2, ..., 999_999]
c = a[::-1] # [1_999_999, ..., 3, 2, 1, 0]
d = b[::-1] # [999_999, ..., 2, 2, 1, 1, 0, 0]
for name in 'abcd':
lst = globals()[name]
time = min(repeat(lambda: sorted(lst), number=1))
print(f'{name}: {time*1e3 :5.1f} ms')
</code></pre>
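The timing gap is consistent with how Timsort detects presorted "natural runs": a descending run must be *strictly* decreasing, so the duplicated elements in <code>d</code> break it into roughly n/2 two-element runs, while <code>c</code> is a single run. The sketch below counts runs the way Timsort's scan sees them; it is an illustration of the rule, not CPython's actual implementation:

```python
def natural_runs(lst):
    """Count natural runs as Timsort's scan sees them: maximal
    non-decreasing stretches, or *strictly* decreasing stretches.
    Equal neighbours terminate a descending run, which is why a
    duplicated descending list decays into ~n/2 tiny runs."""
    n = len(lst)
    if n == 0:
        return 0
    runs, i = 1, 1
    while i < n:
        if lst[i] < lst[i - 1]:
            # strictly decreasing run: extend only while each element drops
            while i < n and lst[i] < lst[i - 1]:
                i += 1
        else:
            # non-decreasing run: equal elements are allowed here
            while i < n and lst[i] >= lst[i - 1]:
                i += 1
        if i < n:  # a new run starts at index i
            runs += 1
            i += 1
    return runs
```

On the question's inputs, `a`, `b`, and `c` each count as one run (sorted ascending, ascending with duplicates, and strictly descending respectively), while `d` produces a run per duplicate pair, so `sorted()` falls back to real merge work.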
|
<python><algorithm><performance><sorting><time-complexity>
|
2024-03-05 15:21:50
| 2
| 10,901
|
no comment
|
78,108,709
| 11,281,877
|
Hybrid deep learning model combining backbone model and handcrafted features
|
<p>I have RGB images and I'd like to build a regression model that predicts 'Lodging_score', combining DenseNet121 as the backbone with handcrafted features from a CSV file. Running the script below, I get the following error: <code>ValueError: Layer "model" expects 2 input(s), but it received 1 input tensors. Inputs received: [&lt;tf.Tensor 'IteratorGetNext:0' shape=(None, None, None, None) dtype=float32&gt;]</code>. I would appreciate your help; I've been struggling for days.</p>
<pre><code>#Step 1: Import the required libraries
import tensorflow as tf
from tensorflow.keras.applications import DenseNet121
from tensorflow.keras.layers import Dense, Dropout, Input, Concatenate, GlobalAveragePooling2D
from tensorflow.keras.models import Model
from tensorflow.keras.optimizers import Adam
from tensorflow.keras.preprocessing.image import ImageDataGenerator
from sklearn.model_selection import train_test_split
import pandas as pd
import numpy as np
modelID = 'd121_HCF'
#Step 2: Load and preprocess the image data
image_dir = r'/path_to_images_folder'
annotations_file = '/path_to/annotation.csv'
features_file = 'handcrafted_features.csv'
# Load image filenames and labels from annotations file
annotations_df = pd.read_csv(annotations_file)
image_filenames = annotations_df['Image_filename'].tolist()
labels = annotations_df['Lodging_score'].tolist()
# Load handcrafted features
features_df = pd.read_csv(features_file)
features_df.set_index('Image_filename', inplace=True)
# Get common image filenames
common_filenames = list(set(image_filenames).intersection(features_df.index))
#print(len(common_filenames))
# Filter the annotation and feature dataframes based on common filenames
annotations_df = annotations_df[annotations_df['Image_filename'].isin(common_filenames)]
features_df = features_df.loc[common_filenames]
features_df = features_df.drop(columns=['plot_id','project_id','Lodging_score'])# dropping columns that are not features
# Split the data into train, val, and test sets
train_filenames, test_filenames, train_labels, test_labels = train_test_split(
annotations_df['Image_filename'].tolist(),
annotations_df['Lodging_score'].tolist(),
test_size=0.2,
random_state=42)
val_filenames, test_filenames, val_labels, test_labels = train_test_split(
test_filenames,
test_labels,
test_size=0.5,
random_state=42)
# Preprocess handcrafted features
train_features = features_df.loc[train_filenames].values
val_features = features_df.loc[val_filenames].values
test_features = features_df.loc[test_filenames].values
# Normalize handcrafted features
train_features = (train_features - train_features.mean(axis=0)) / train_features.std(axis=0)
val_features = (val_features - train_features.mean(axis=0)) / train_features.std(axis=0)
test_features = (test_features - train_features.mean(axis=0)) / train_features.std(axis=0)
# Convert the label arrays to numpy arrays
train_labels = np.array(train_labels)
val_labels = np.array(val_labels)
test_labels = np.array(test_labels)
# Preprocess handcrafted features
train_features = train_features[:len(train_filenames)]
val_features = val_features[:len(val_filenames)]
test_features = test_features[:len(test_filenames)]
# Define image data generator with augmentations
image_size = (75, 200)
batch_size = 32
image_data_generator = ImageDataGenerator(
rescale=1./255,
rotation_range=20,
width_shift_range=0.1,
height_shift_range=0.1,
horizontal_flip=True)
train_data = pd.DataFrame({'filename': train_filenames, 'Lodging_score': train_labels})
train_generator = image_data_generator.flow_from_dataframe(
train_data,
directory=image_dir,
x_col='filename',
y_col='Lodging_score',
target_size=image_size,
batch_size=batch_size,
class_mode='raw',
shuffle=False)
val_generator = image_data_generator.flow_from_dataframe(
pd.DataFrame({'filename': val_filenames, 'Lodging_score': val_labels}),
directory=image_dir,
x_col='filename',
y_col='Lodging_score',
target_size=image_size,
batch_size=batch_size,
class_mode='raw',
shuffle=False)
# Create test generator
test_generator = image_data_generator.flow_from_dataframe(
pd.DataFrame({'filename': test_filenames, 'Lodging_score': test_labels}),
directory=image_dir,
x_col='filename',
y_col='Lodging_score',
target_size=image_size,
batch_size=batch_size, # Keep the batch size the same as the other generators
class_mode='raw',
shuffle=False)
#Step 3: Build the hybrid regression model
# Load DenseNet121 pre-trained on ImageNet without the top layer
base_model = DenseNet121(include_top=False, weights='imagenet', input_shape=image_size + (3,))
# Freeze the base model's layers
base_model.trainable = False
# Input layers for image data and handcrafted features
image_input = Input(shape=image_size + (3,))
features_input = Input(shape=(train_features.shape[1],))
# Preprocess image input for DenseNet121
image_preprocessed = tf.keras.applications.densenet.preprocess_input(image_input)
# Extract features from the base model
base_features = base_model(image_preprocessed, training=False)
base_features = GlobalAveragePooling2D()(base_features)
# Combine base model features with handcrafted features
combined_features = Concatenate()([base_features, features_input])
# Add dense layers for regression
x = Dropout(0.5)(combined_features)
x = Dense(128, activation='relu')(x)
x = Dropout(0.5)(x)
output = Dense(1, activation='linear')(x)
# Create the model
model = Model(inputs=[image_input, features_input], outputs=output)
# Compile the model
model.compile(optimizer=Adam(learning_rate=0.001), loss='mean_squared_error')
#Step 4: Train the model with early stopping
# Define early stopping callback
early_stopping = tf.keras.callbacks.EarlyStopping(
monitor='val_loss', patience=5, restore_best_weights=True)
# Convert numpy arrays to tensors
train_features_tensor = tf.convert_to_tensor(train_features, dtype=tf.float32)
val_features_tensor = tf.convert_to_tensor(val_features, dtype=tf.float32)
test_features_tensor = tf.convert_to_tensor(test_features, dtype=tf.float32)
# Train the model
history = model.fit(
train_generator,
steps_per_epoch=len(train_generator),
epochs=50,
validation_data=([val_generator.next()[0], val_features], val_labels),
validation_steps=len(val_generator),
callbacks=[early_stopping])
# Evaluate the model on the test set
loss = model.evaluate([test_generator.next()[0], test_features], test_labels, verbose=0)
predictions = model.predict([test_generator.next()[0], test_features])
</code></pre>
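The error arises because <code>flow_from_dataframe</code> yields only <code>(images, labels)</code> while the model expects two inputs. One common fix is to wrap the image generator so each batch also carries the matching slice of handcrafted features. This is a sketch that assumes the generators are built with <code>shuffle=False</code> (as in the question) so the row-aligned feature array stays in sync with the image order:

```python
def two_input_generator(image_gen, features):
    """Wrap a Keras-style image generator so each batch yields
    ((images, features_slice), labels) for a two-input model.

    Assumes shuffle=False so `features` (row-aligned with the
    dataframe) matches the image order batch by batch.
    """
    i = 0
    n = len(features)
    while True:
        images, labels = next(image_gen)
        batch = len(labels)
        yield (images, features[i:i + batch]), labels
        i = (i + batch) % n
```

Then training would look like `model.fit(two_input_generator(train_generator, train_features), steps_per_epoch=len(train_generator), validation_data=two_input_generator(val_generator, val_features), validation_steps=len(val_generator), ...)` instead of passing `val_generator.next()[0]`, which consumes only a single batch.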
|
<python><tensorflow><deep-learning>
|
2024-03-05 15:09:19
| 1
| 519
|
Amilovsky
|
78,108,436
| 386,861
|
Setting order of categorical columns in Altair histogram
|
<p>I'm trying to change the order of the columns (bars) in this histogram.</p>
<pre><code>import altair as alt
import pandas as pd

data = {'Age group': {0: '0 - 4', 1: '12 - 16', 2: '5 - 11', 3: 'not specified'},
'Count': {0: 81.0, 1: 86.0, 2: 175.0, 3: 0.0}}
dp = pd.DataFrame(data)
alt.Chart(dp).mark_bar().encode(
x=alt.X('Age group:N', title='Age group'),
y='Count:Q',
tooltip=['Age group', 'Count']
).properties(title='Number of children in voucher', width=400)
</code></pre>
<p><a href="https://i.sstatic.net/wHMlp.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/wHMlp.png" alt="enter image description here" /></a></p>
<p>I just want to change the order of the categories and can't find a way to do it.</p>
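Altair supports an explicit category order through the <code>sort</code> argument of <code>alt.X</code>. The sketch below keeps the Altair call as a comment (so it stays library-free) and demonstrates the explicit-order idea in plain Python; the particular ordering list is an assumption to adjust:

```python
# The explicit order we want the x-axis categories to appear in
# (an assumed ordering; adjust to taste).
age_order = ['0 - 4', '5 - 11', '12 - 16', 'not specified']

# In the chart this is passed via `sort`, e.g.:
#   x=alt.X('Age group:N', title='Age group', sort=age_order)

def reorder(categories, order):
    # Sort category labels by their position in the explicit order list.
    return sorted(categories, key=order.index)
```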
|
<python><pandas><altair>
|
2024-03-05 14:26:27
| 2
| 7,882
|
elksie5000
|
78,108,381
| 3,990,451
|
Selenium Python: how to wait for the presence of individual elements after the table is found
|
<p>I wait for the presence of headers of a table.</p>
<pre><code>headers = wait.until(EC.presence_of_all_elements_located((By.XPATH, "//table[@class='react-table']/thead/tr/th/span")))
</code></pre>
<p><code>headers</code> is then a Python list of web elements.</p>
<p>However when I iterate over that list with a variable <strong>h</strong> and I use <code>h.text</code>, a stale element reference exception is thrown.</p>
<p>I believe I need to wait explicitly on the elements making up the list one by one.</p>
<p>Is there a way to call <code>wait.until(EC.presence_of_element_located....)</code> given that I have the webelements?</p>
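A common pattern is to re-locate the elements on each attempt instead of holding references that go stale when the React table re-renders. The helper below is a library-free sketch of that idea: <code>fetch</code> should re-find the elements and read their <code>.text</code> every call; the commented Selenium usage reuses names from the question:

```python
def texts_with_retry(fetch, attempts=3):
    """Call `fetch()` (which should re-locate the elements and read
    their .text) until it succeeds, retrying on stale-element-style
    exceptions raised mid-read."""
    last_error = None
    for _ in range(attempts):
        try:
            return fetch()
        except Exception as exc:  # with Selenium: StaleElementReferenceException
            last_error = exc
    raise last_error

# Hypothetical Selenium usage (names from the question):
# texts = texts_with_retry(lambda: [el.text for el in driver.find_elements(
#     By.XPATH, "//table[@class='react-table']/thead/tr/th/span")])
```

The key point is that the lambda re-runs `find_elements` each attempt, so a re-render between the wait and the `.text` read no longer matters.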
|
<python><selenium-chromedriver>
|
2024-03-05 14:17:21
| 1
| 982
|
MMM
|
78,108,326
| 1,595,417
|
Deadlock in Python garbage collection on exception
|
<p>I have encountered a strange situation where a program won't exit due to the way python handles exceptions. In this situation, I have an object which owns a Thread, and this Thread is only shut down when the object's <code>__del__</code> method is called. However, if the program "exits" due to an exception involving this object, the exception itself will hold a reference to the object in its stack trace, which prevents the object from being deleted. Since the object isn't deleted, the Thread is never shut down, and so the program can't fully exit and hangs forever. Here is a small repro:</p>
<pre class="lang-py prettyprint-override"><code>import threading
class A:
def __init__(self):
self._event = threading.Event()
self._thread = threading.Thread(target=self._event.wait)
self._thread.start()
def __del__(self):
print('del')
self._event.set()
self._thread.join()
def main():
a = A()
# The stack frame created here holds a reference to `a`, which
# can be verified by looking at `gc.get_referrers(a)` post-raise.
raise RuntimeError()
main() # hangs indefinitely
</code></pre>
<p>A workaround is to break the reference chain by eating the exception and raising a new one:</p>
<pre class="lang-py prettyprint-override"><code>error = False
try:
main()
except RuntimeError as e:
error = True
if error:
# At this point the exception should be unreachable; however in some
# cases I've found it necessary to do a manual garbage collection.
import gc; gc.collect()
# Sadly this loses the stack trace, but that's what's necessary.
raise RuntimeError()
</code></pre>
<p>Funnily enough, a similar issue occurs without any exceptions at all, simply by leaving a reference to <code>a</code> in the main module:</p>
<pre class="lang-py prettyprint-override"><code>A() # This is fine, prints 'del'
a = A() # hangs indefinitely
</code></pre>
<p>What is going on here? Is this a python (3.10) bug? And is there a best practice for avoiding these sorts of issues? It really took me a long time to figure out what was happening!</p>
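The usual workaround is not to tie thread shutdown to <code>__del__</code> at all: make the worker a daemon thread (so it cannot keep the interpreter alive) and expose an explicit <code>close()</code> or context manager for deterministic shutdown. A sketch of the question's class rewritten that way:

```python
import threading

class A:
    # Sketch: daemon=True means the thread cannot block interpreter exit,
    # and close() replaces the unreliable __del__-based shutdown.
    def __init__(self):
        self._event = threading.Event()
        self._thread = threading.Thread(target=self._event.wait, daemon=True)
        self._thread.start()

    def close(self):
        self._event.set()
        self._thread.join()

    def __enter__(self):
        return self

    def __exit__(self, *exc):
        self.close()
```

Used as `with A() as a: ...`, the thread is joined even when an exception propagates, so neither the traceback's reference to `a` nor a module-level `a = A()` can hang the program.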
|
<python><exception><garbage-collection><python-multithreading><deadlock>
|
2024-03-05 14:07:27
| 1
| 1,019
|
xpilot
|
78,108,088
| 2,287,486
|
Pandas loop for checking the most recent data with conditions
|
<p>This question shouldn't be closed as a duplicate: I have already checked <a href="https://stackoverflow.com/questions/32459325/select-row-by-max-value-in-group-in-a-pandas-dataframe">Select row by max value in group in a pandas dataframe</a>, and I have already applied what its answer explains. The approach there doesn't go back to check for empty values and doesn't have the condition of looking at most five years back, so my question has a few more conditions than the proposed duplicate.</p>
<p>Following is the example datasets:</p>
<pre><code>df1 = pd.DataFrame(
data=[['Afghanistan','2015','5.1'],
['Afghanistan','2016','6.1'],
['Djibouti','2021',''],
['Djibouti','2020',''],
['Djibouti','2019','30'],
['Egypt','2019',''],
['Egypt','2018',''],
['Egypt','2015','37'],
['Bahrain','2020','32'],
['Bahrain','2021','']],
columns=['Country', 'Reference Year', 'value'])
</code></pre>
<p>I am trying to get the most recent data using the following codes.</p>
<pre><code>most_recent = df1[df1.groupby('Country')['Reference Year'].transform('max') == df1['Reference Year']]
most_recent['Reference Year'] = 'Most recent data'
</code></pre>
<p>However, it only checks the row with the highest year, which meets just one of my conditions. For example, if the highest year has empty data, it has to go back one year at a time and fetch the first non-empty value. My top year is 2021, so it can go back as far as 2016 to look for the latest data and must stop there. Only if there are no values down to 2016 should it give an empty value. Please help!</p>
<p>Here my expected output is :</p>
<pre><code>Afghanistan Most recent data 6.1
Djibouti Most recent data 30
Egypt Most recent data
Bahrain Most recent data 32
</code></pre>
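The rule "newest non-empty value, looking back no further than 2016" can be expressed without looping over years one at a time. Here it is as a plain-Python sketch over the question's data (the same idea translates to pandas as: filter non-empty rows with year >= cutoff, sort by year descending, then <code>groupby('Country').first()</code>, re-adding countries that end up with no rows):

```python
rows = [('Afghanistan', 2015, '5.1'), ('Afghanistan', 2016, '6.1'),
        ('Djibouti', 2021, ''), ('Djibouti', 2020, ''), ('Djibouti', 2019, '30'),
        ('Egypt', 2019, ''), ('Egypt', 2018, ''), ('Egypt', 2015, '37'),
        ('Bahrain', 2020, '32'), ('Bahrain', 2021, '')]

def most_recent(rows, cutoff=2016):
    """Per country: the newest non-empty value with year >= cutoff,
    or '' if no such value exists."""
    out = {}
    # Walk from the newest year downwards; keep the first usable value.
    for country, year, value in sorted(rows, key=lambda r: r[1], reverse=True):
        if country not in out and value != '' and year >= cutoff:
            out[country] = value
    # Countries with nothing usable in the window get an empty value.
    for country, _, _ in rows:
        out.setdefault(country, '')
    return out
```

Note that Egypt comes out empty: its only non-empty value is from 2015, which falls outside the five-year window, matching the expected output.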
|
<python><pandas><loops>
|
2024-03-05 13:30:19
| 1
| 579
|
khushbu
|
78,107,849
| 2,741,831
|
use auxiliary input with keras in middle of network
|
<p>I have the following models, one is used for calculating trajectories, the other for judging them. The scoring model has all layers, the targeting one only the first half. Both need the targeting parameters to calculate a trajectory and then scoring it respectively. In order for the scoring model to have access to the targeting data, it needs it as an auxiliary input (since the targeting output only has 2 values). I tried achieving this using a concatenate layer, but it gives me an error:</p>
<pre><code># targeting_model
inp=layers.Input(4)
tdense1=layers.Dense(1024, activation='relu')(inp)
tdense2=layers.Dense(1024, activation='relu')(tdense1)
tdense3=layers.Dense(1024, activation='relu')(tdense2)
tout=layers.Dense(2, activation='linear')(tdense3)
# scoring_model
auxinp=layers.Input(4)
sinp=layers.Concatenate()([tout,auxinp])
sdense1=layers.Dense(1024, activation='relu')(sinp)
sdense2=layers.Dense(1024, activation='relu')(sdense1)
sout=layers.Dense(1, activation='linear')(sdense2)
targeting_model = tf.keras.Model(inputs=inp, outputs=tout, name="targeting_model")
print(targeting_model.summary())
scoring_model = tf.keras.Model(inputs=inp, outputs=sout, name="scoring_model")
</code></pre>
<p>Here's the error:</p>
<pre><code>ValueError`: Graph disconnected: cannot obtain value for tensor KerasTensor(type_spec=TensorSpec(shape=(None, 4), dtype=tf.float32, name='input_2'), name='input_2', description="created by layer 'input_2'") at layer "concatenate". The following previous layers were accessed without issue: ['dense', 'dense_1', 'dense_2', 'dense_3']`
</code></pre>
<p>But the <code>tout</code> layer is fully connected and <code>auxinp</code> is just an input layer. So why is it saying the graph is disconnected?</p>
<p>EDIT: Figured it out. I needed to add <code>auxinp</code> to <code>inputs</code> in <code>Model()</code>.</p>
|
<python><tensorflow><keras>
|
2024-03-05 12:49:08
| 1
| 2,482
|
user2741831
|
78,107,822
| 3,829,943
|
How to refuse to respond using Flask?
|
<p>I'm using Flask to implement a web service.</p>
<pre class="lang-py prettyprint-override"><code>@app.route('/myservice',methods=['POST'])
def myservice():
if abnormal_activity_detected():
refuse to make any response.
</code></pre>
<p>I'd like to know how to properly refuse to make a response from the server side.</p>
<p>Thank you in advance!</p>
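At the HTTP level a server always answers with <em>some</em> status, so the closest idiomatic refusal is <code>abort()</code> with an error code (sending literally nothing would mean closing the socket, which Flask's development server doesn't expose directly). A sketch, with <code>abnormal_activity_detected</code> stubbed out since its real implementation isn't shown in the question:

```python
from flask import Flask, abort

app = Flask(__name__)

def abnormal_activity_detected():
    # Stub for the question's detector; always "abnormal" here
    # so the refusal path is exercised.
    return True

@app.route('/myservice', methods=['POST'])
def myservice():
    if abnormal_activity_detected():
        abort(403)  # refuse the request with 403 Forbidden
    return 'ok'
```

Other status codes (429 for rate limiting, 503 for overload) may fit better depending on what "abnormal" means; a reverse proxy in front of Flask is the usual place to drop connections outright.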
|
<python><web-services><flask>
|
2024-03-05 12:44:51
| 0
| 984
|
Chiron
|
78,107,775
| 610,569
|
How to increase the width of hidden linear layers in Mistral 7B model?
|
<p>After installing</p>
<pre><code>!pip install -U bitsandbytes
!pip install -U transformers
!pip install -U peft
!pip install -U accelerate
!pip install -U trl
</code></pre>
<p>And then some boilerplates to load the Mistral model:</p>
<pre><code>from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig,HfArgumentParser,TrainingArguments,pipeline, logging
from peft import LoraConfig, PeftModel, prepare_model_for_kbit_training, get_peft_model
from datasets import load_dataset
from trl import SFTTrainer
import torch
bnb_config = BitsAndBytesConfig(
load_in_4bit= True,
bnb_4bit_quant_type= "nf4",
bnb_4bit_compute_dtype= torch.bfloat16,
bnb_4bit_use_double_quant= False,
)
base_model="mistralai/Mistral-7B-v0.1"
model = AutoModelForCausalLM.from_pretrained(
base_model,
quantization_config=bnb_config,
torch_dtype=torch.bfloat16,
device_map="auto",
trust_remote_code=True,
)
model.config.use_cache = False # silence the warnings
model.config.pretraining_tp = 1
model.gradient_checkpointing_enable()
tokenizer = AutoTokenizer.from_pretrained(base_model, trust_remote_code=True)
tokenizer.padding_side = 'right'
tokenizer.pad_token = tokenizer.eos_token
tokenizer.add_eos_token = True
tokenizer.add_bos_token, tokenizer.add_eos_token
</code></pre>
<p>We can see the layers/architecture:</p>
<pre><code>>>> model
</code></pre>
<p>[out]:</p>
<pre><code>MistralForCausalLM(
(model): MistralModel(
(embed_tokens): Embedding(32000, 4096)
(layers): ModuleList(
(0-31): 32 x MistralDecoderLayer(
(self_attn): MistralAttention(
(q_proj): Linear4bit(in_features=4096, out_features=4096, bias=False)
(k_proj): Linear4bit(in_features=4096, out_features=1024, bias=False)
(v_proj): Linear4bit(in_features=4096, out_features=1024, bias=False)
(o_proj): Linear4bit(in_features=4096, out_features=4096, bias=False)
(rotary_emb): MistralRotaryEmbedding()
)
(mlp): MistralMLP(
(gate_proj): Linear4bit(in_features=4096, out_features=14336, bias=False)
(up_proj): Linear4bit(in_features=4096, out_features=14336, bias=False)
(down_proj): Linear4bit(in_features=14336, out_features=4096, bias=False)
(act_fn): SiLU()
)
(input_layernorm): MistralRMSNorm()
(post_attention_layernorm): MistralRMSNorm()
)
)
(norm): MistralRMSNorm()
)
(lm_head): Linear(in_features=4096, out_features=32000, bias=False)
)
</code></pre>
<h2>Is there any way to increase the width size of the Linear4bit layers?</h2>
<p>E.g., suppose we want the model to take in another 800 hidden nodes per layer (4096 to 4896) to get:</p>
<pre><code>MistralForCausalLM(
(model): MistralModel(
(embed_tokens): Embedding(32000, 4896)
(layers): ModuleList(
(0-31): 32 x MistralDecoderLayer(
(self_attn): MistralAttention(
(q_proj): Linear4bit(in_features=4896, out_features=4896, bias=False)
(k_proj): Linear4bit(in_features=4896, out_features=1024, bias=False)
(v_proj): Linear4bit(in_features=4896, out_features=1024, bias=False)
(o_proj): Linear4bit(in_features=4896, out_features=4896, bias=False)
(rotary_emb): MistralRotaryEmbedding()
)
(mlp): MistralMLP(
(gate_proj): Linear4bit(in_features=4896, out_features=14336, bias=False)
(up_proj): Linear4bit(in_features=4896, out_features=14336, bias=False)
(down_proj): Linear4bit(in_features=14336, out_features=4896, bias=False)
(act_fn): SiLU()
)
(input_layernorm): MistralRMSNorm()
(post_attention_layernorm): MistralRMSNorm()
)
)
(norm): MistralRMSNorm()
)
(lm_head): Linear(in_features=4896, out_features=32000, bias=False)
)
</code></pre>
<p>Note: it’s okay if the additional hidden nodes in the <code>Linear4bit</code> layers are randomly initialized.</p>
|
<python><huggingface-transformers><attention-model><mistral-7b>
|
2024-03-05 12:36:47
| 1
| 123,325
|
alvas
|
78,107,726
| 2,215,094
|
Fake pathlib.Path in class variable using pyfakefs
|
<p>I have a class variable of type <code>pathlib.Path</code>.</p>
<pre class="lang-py prettyprint-override"><code>from pathlib import Path
class MyClass:
FILE_PATH = Path('/etc/ids.json')
</code></pre>
<p>I know that <em>pyfakefs</em> is not able to mock this automatically. So in my test I use its <code>Patcher</code> class (I also tried other ways.) to reload the corresponding module.</p>
<pre class="lang-py prettyprint-override"><code>from pathlib import Path
from pyfakefs.fake_filesystem_unittest import Patcher
from pyfakefs.fake_pathlib import FakePathlibModule
from . import my_class
def test_class_variable(fs):
# my_class.MyClass.FILE_PATH = Path('/etc/ids.json')
with Patcher(modules_to_reload=[my_class]):
assert type(my_class.MyClass.FILE_PATH) is FakePathlibModule.PosixPath
</code></pre>
<p>But it still isn't mocked.</p>
<p>If I uncomment the commented line, the test succeeds.</p>
<p>What should I do to mock the class variable?</p>
|
<python><pyfakefs>
|
2024-03-05 12:28:47
| 1
| 385
|
Jan Schatz
|
78,107,698
| 3,433,875
|
Showing change in a treemap in matplotlib
|
<p>I am trying to create this:</p>
<p><a href="https://i.sstatic.net/Duw5g.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Duw5g.png" alt="Treemap in matplotlib" /></a></p>
<p>The data for the chart is:</p>
<pre><code>import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
data = {
"year": [2004, 2022, 2004, 2022, 2004, 2022],
"countries" : [ "Denmark", "Denmark", "Norway", "Norway","Sweden", "Sweden",],
"sites": [4,10,5,8,13,15]
}
df= pd.DataFrame(data)
df['diff'] = df.groupby(['countries'])['sites'].diff()
df['diff'].fillna(df.sites, inplace=True)
df
</code></pre>
<p>I am aware that there are packages that draw treemaps (squarify and plotly, to name a couple), but I have not figured out how to produce the one above, where each year's value (or the difference, to be exact) is stacked onto the previous one. It would be fantastic to learn how to do it in pure matplotlib, if it is not too complex.</p>
<p>Does anyone have any pointers? I haven't found a lot of info on treemaps on Google.</p>
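Before any styling, the core of a hand-rolled treemap is just computing rectangles whose areas are proportional to the values. A minimal "slice" layout (a single split direction, which matches the side-by-side-countries look of the target image better than a squarified layout) is a few lines, and its output can be drawn directly with <code>matplotlib.patches.Rectangle</code>:

```python
def slice_layout(values, x, y, w, h, horizontal=True):
    """Split the rectangle (x, y, w, h) into sub-rectangles proportional
    to `values`, laid side by side. Returns (x, y, w, h) tuples suitable
    for matplotlib.patches.Rectangle."""
    total = sum(values)
    rects = []
    for v in values:
        frac = v / total
        if horizontal:
            rects.append((x, y, w * frac, h))
            x += w * frac
        else:
            rects.append((x, y, w, h * frac))
            y += h * frac
    return rects
```

For the year-on-year chart, one approach is to lay the countries out horizontally by their 2022 totals, then split each country's rectangle vertically into its 2004 value and the 2004-2022 diff, colouring the two parts differently.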
|
<python><matplotlib><treemap>
|
2024-03-05 12:22:22
| 1
| 363
|
ruthpozuelo
|
78,107,528
| 2,641,825
|
Using doctest with pandas, how to fix the number of columns in the output example?
|
<p>I have a package with many methods that output pandas data frames.
I would like to test the examples with pytest and doctest as explained on the <a href="https://docs.pytest.org/en/4.6.x/doctest.html#output-format" rel="nofollow noreferrer">pytest doctest integration page</a>.</p>
<p>Pytest compares against pandas' printed output, and pandas truncates wide frames to the display width, so the number of columns shown can differ from the number of columns written in the example.</p>
<pre><code> >>> import pandas
>>> df = pandas.DataFrame({"variable_1": range(3)})
>>> for i in range(2, 8): df["variable_"+str(i)] = range(3)
>>> df
variable_1 variable_2 variable_3 variable_4 variable_5 variable_6 variable_7
0 0 0 0 0 0 0 0
1 1 1 1 1 1 1 1
2 2 2 2 2 2 2 2
</code></pre>
<p><code>pytest --doctest-modules</code> returns the following error because it displays 6 columns instead of 7</p>
<pre><code>Differences (unified diff with -expected +actual):
@@ -1,4 +1,6 @@
- variable_1 variable_2 variable_3 variable_4 variable_5 variable_6 variable_7
-0 0 0 0 0 0 0 0
-1 1 1 1 1 1 1 1
-2 2 2 2 2 2 2 2
+ variable_1 variable_2 variable_3 ... variable_5 variable_6 variable_7
+0 0 0 0 ... 0 0 0
+1 1 1 1 ... 1 1 1
+2 2 2 2 ... 2 2 2
+<BLANKLINE>
+[3 rows x 7 columns]
</code></pre>
<p>Is there a way to fix the number of columns? Does doctest always assume a fixed terminal width for the output?</p>
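One common fix is to pin pandas' display options so the repr no longer depends on the detected terminal width; placed at module level in <code>conftest.py</code>, it applies to every collected doctest. A sketch (the option names are from pandas' public <code>set_option</code> API; the width value is an arbitrary choice):

```python
# conftest.py (sketch): make pandas reprs deterministic for doctests.
import pandas

pandas.set_option("display.max_columns", None)  # never elide columns with "..."
pandas.set_option("display.width", 1000)        # wide enough to avoid wrapping
```

Alternatively, doctest option flags such as <code>NORMALIZE_WHITESPACE</code> can relax the comparison, but pinning the display options keeps the examples byte-for-byte reproducible.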
|
<python><pandas><pytest><doctest>
|
2024-03-05 11:52:21
| 0
| 11,539
|
Paul Rougieux
|
78,107,353
| 5,618,856
|
How to make a list (or any other value) immutable: a real constant in Python?
|
<p>In an <a href="https://stackoverflow.com/questions/11111632/python-best-cleanest-way-to-define-constant-lists-or-dictionarys">old discussion</a> I found the reference to typing <code>Final</code>. I tried the example <a href="https://typing.readthedocs.io/en/latest/spec/qualifiers.html#semantics-and-examples" rel="nofollow noreferrer">from the docs</a>:</p>
<pre><code>y: Final[Sequence[str]] = ['a', 'b']
y.append('x') # Error: "Sequence[str]" has no attribute "append"
z: Final = ('a', 'b') # Also works
</code></pre>
<p>But contrary to the docs, it mutates <code>y</code> to <code>['a', 'b', 'x']</code> without error. I'm on Python 3.11. What's going on? And moreover: what's the state of the art for creating immutable constants in Python in 2024?</p>
<p>Currently I use <code>@dataclass(frozen=True)</code> to accomplish immutability, but there might be more straightforward solutions.</p>
<p>The old discussions do not cover the modern features of Python 3.10+.</p>
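The key point is that <code>Final</code> (like all typing annotations) is enforced only by static type checkers such as mypy or pyright; the interpreter ignores it at runtime, which is why the list happily mutates. For actual runtime immutability the standard-library tools are tuples, <code>frozenset</code>, and <code>types.MappingProxyType</code>:

```python
from types import MappingProxyType

NAMES = ('a', 'b')                         # tuple: no append at runtime
CONFIG = MappingProxyType({'retries': 3})  # read-only view over a dict

# Mutation attempts fail at runtime, not just in the type checker:
try:
    CONFIG['retries'] = 5
except TypeError:
    mutation_blocked = True
```

Combining both worlds (e.g. `NAMES: Final = ('a', 'b')`) gives a type-checker error on rebinding plus a runtime error on mutation, which is about as constant as Python gets.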
|
<python><python-3.x><constants><immutability><python-typing>
|
2024-03-05 11:22:28
| 1
| 603
|
Fred
|
78,107,266
| 6,081,921
|
Join PySpark ML predictions to identifier data
|
<p>I'm building a classification model using PySpark and its ML library. In my input dataframe, I have an identifier column (called <code>erp_number</code>) that I want to exclude from building the model (I don't want it to be a feature of the model), but I want to add it back when the predictions are output.</p>
<pre><code>def create_predictions(data, module):
data = data.drop("erp_number")
# Identify categorical columns
categorical_columns = [field.name for field in data.schema.fields if isinstance(field.dataType, StringType)]
# Numerical columns, excluding the categorical and target columns
numerical_columns = [field.name for field in data.schema.fields if field.name not in categorical_columns and field.name != module]
# Create a list of StringIndexers and OneHotEncoders
stages = []
for categorical_col in categorical_columns:
string_indexer = StringIndexer(inputCol=categorical_col, outputCol=categorical_col + "_index", handleInvalid="keep")
encoder = OneHotEncoder(inputCols=[string_indexer.getOutputCol()], outputCols=[categorical_col + "_vec"])
stages += [string_indexer, encoder]
# Add VectorAssembler to the pipeline stages
feature_columns = [c + "_vec" for c in categorical_columns] + numerical_columns
assembler = VectorAssembler(inputCols=feature_columns, outputCol="features")
stages += [assembler]
# Add the GBTClassifier to the pipeline stages
gbt = GBTClassifier(labelCol=module, featuresCol="features", predictionCol="prediction")
stages += [gbt]
# Create a Pipeline
pipeline = Pipeline(stages=stages)
# Fit the pipeline to the data
model = pipeline.fit(data)
# Apply the model to the data
predictions = model.transform(data)
return predictions
</code></pre>
<p>I tried to drop the column from the dataframe, but it looks like there is no equivalent of pandas <code>concat</code> or dplyr <code>bind_cols</code> to attach it back afterwards.
I also tried to exclude <code>erp_number</code> from the <code>feature_columns</code> list, but that generates an error in the pipeline.</p>
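Since Spark's <code>model.transform(data)</code> carries every input column through to the predictions DataFrame, there is no need to drop <code>erp_number</code> and join it back: it is enough to keep it out of the feature lists (the pipeline error likely came from <code>erp_number</code> still being picked up as a StringType categorical). The column-selection logic is sketched here as a plain function over a toy schema so the idea is testable without Spark:

```python
def split_feature_columns(all_cols, string_cols, id_col, target):
    """Exclude the identifier and target from the model's features while
    leaving them in the DataFrame (Spark's transform() preserves all
    input columns alongside the prediction)."""
    categorical = [c for c in string_cols if c != id_col]
    numerical = [c for c in all_cols
                 if c not in string_cols and c not in (id_col, target)]
    return categorical, numerical
```

In the question's function this amounts to removing the `data = data.drop("erp_number")` line and adding `and field.name != "erp_number"` to both list comprehensions; the returned `predictions` DataFrame then still contains `erp_number` next to `prediction`.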
|
<python><pyspark>
|
2024-03-05 11:10:25
| 1
| 954
|
cyrilb38
|
78,107,055
| 4,269,851
|
Python changing dictionary values in a loop
|
<p>The shortest way to change dict values (not using one-liners) is:</p>
<pre><code># process array with loop
dct = {1: 'one', 2: 'two', 3: 'three'}
for key, value in dct.items():
dct[key] = 'new'
print(dct)
#{1: 'new', 2: 'new', 3: 'new'}
</code></pre>
<p>According to the Python manual, <code>.values()</code> '<em>Return[s] a new view of the dictionary’s values</em>', and a <a href="https://docs.python.org/3/library/stdtypes.html#dict-views" rel="nofollow noreferrer">view</a> '<em>provide[s] a dynamic view on the dictionary’s entries, which means that <strong>when the dictionary changes, the view reflects these changes</strong>.</em>'</p>
<p>Why does it not work the other way around: why does changing the view <strong>not</strong> dynamically change the dictionary?</p>
<pre><code>dct = {1: 'one', 2: 'two', 3: 'three'}
for value in dct.values():
value = 'new'
print(dct)
#{1: 'one', 2: 'two', 3: 'three'}
</code></pre>
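<p>To illustrate what I am observing (my assumption: assigning to the loop variable merely rebinds a local name, while assigning through the key mutates the dict), a minimal sketch:</p>

```python
dct = {1: 'one', 2: 'two'}

# Rebinding the loop variable: only the local name 'value' changes
for value in dct.values():
    value = 'new'
assert dct == {1: 'one', 2: 'two'}  # dict untouched

# Assigning through the key: the dictionary itself changes
for key in dct:
    dct[key] = 'new'
print(dct)  # {1: 'new', 2: 'new'}
```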
|
<python><python-3.x><loops><dictionary>
|
2024-03-05 10:40:09
| 2
| 829
|
Roman Toasov
|
78,107,054
| 1,120,977
|
`convert_time_zone` function to retrieve the values based on the timezone specified for each row in Polars
|
<p>I'm attempting to determine the time based on the timezone specified in each row using <code>Polars</code>. Consider the following code snippet:</p>
<pre class="lang-py prettyprint-override"><code>import polars as pl
from datetime import datetime
from polars import col as c
df = pl.DataFrame({
"time": [datetime(2023, 4, 3, 2), datetime(2023, 4, 4, 3), datetime(2023, 4, 5, 4)],
"tzone": ["Asia/Tokyo", "America/Chicago", "Europe/Paris"]
}).with_columns(c.time.dt.replace_time_zone("UTC"))
df.with_columns(
tokyo=c.time.dt.convert_time_zone("Asia/Tokyo").dt.hour(),
chicago=c.time.dt.convert_time_zone("America/Chicago").dt.hour(),
paris=c.time.dt.convert_time_zone("Europe/Paris").dt.hour()
)
</code></pre>
<p>In this example, I've computed the time separately for each timezone to achieve the desired outcome, which is [11, 22, 6], corresponding to the hour of the <code>time</code> column according to the <code>tzone</code> timezone. Even then it is difficult to collect the information from the correct column.</p>
<p>Unfortunately, the following simple attempt to dynamically pass the timezone from the <code>tzone</code> column directly into the <code>convert_time_zone</code> function does not work:</p>
<pre class="lang-py prettyprint-override"><code>df.with_columns(c.time.dt.convert_time_zone(c.tzone).dt.hour())
# TypeError: argument 'time_zone': 'Expr' object cannot be converted to 'PyString'
</code></pre>
<p>What would be the most elegant approach to accomplish this task?</p>
|
<python><datetime><timezone><python-polars>
|
2024-03-05 10:39:59
| 2
| 2,631
|
Sungmin
|
78,106,979
| 1,006,955
|
u# format character removed from Python 3.12 C-API, how to account for it?
|
<p>A bunch of unicode-related functionality was removed from the Python 3.12 C-API. Unfortunately for me, there's a very old piece of code (~2010) in our library that uses these and I need to migrate this functionality somehow over to 3.12 since we're looking to upgrade to 3.12 eventually. One thing I'm specifically struggling with is the removal of the <code>u#</code> parameter. The following piece of code would parse any positional parameters passed to <code>foo</code> (including unicode strings), and store them in <code>input</code>:</p>
<pre><code>static PyObject *
foo(PyObject *self, PyObject *args) {
Py_UNICODE *input;
Py_ssize_t length;
if (!PyArg_ParseTuple(args, "u#", &input, &length)) {
return NULL;
}
...
}
</code></pre>
<p>However, according to the <a href="https://docs.python.org/3/c-api/arg.html" rel="nofollow noreferrer">docs</a>, the <code>u#</code> has been removed:</p>
<blockquote>
<p>Changed in version 3.12: <code>u</code>, <code>u#</code>, <code>Z</code>, and <code>Z#</code> are removed because they used a legacy Py_UNICODE* representation.</p>
</blockquote>
<p>and the current code simply throws something like <code>bad-format-character</code> when it is compiled and used from pure Python.</p>
<p><code>Py_UNICODE</code> is just <code>wchar_t</code> so that's easily fixed. But with the removal of <code>u#</code> I am not sure how to get <code>PyArg_ParseTuple</code> to accept unicode input arguments. Using <code>s#</code> instead of <code>u#</code> does not work since it won't handle anything widechar. How do I migrate this call in Python 3.12?</p>
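<p>For reference, my current workaround attempt (an assumption on my part, not verified against all call sites) is to accept the argument with <code>U</code> and convert explicitly via <code>PyUnicode_AsWideCharString</code>:</p>

```c
static PyObject *
foo(PyObject *self, PyObject *args) {
    PyObject *obj;
    /* "U" accepts any str object without touching its internal storage */
    if (!PyArg_ParseTuple(args, "U", &obj)) {
        return NULL;
    }
    Py_ssize_t length;
    wchar_t *input = PyUnicode_AsWideCharString(obj, &length);
    if (input == NULL) {
        return NULL;
    }
    /* ... existing code using input/length ... */
    PyMem_Free(input);  /* the caller owns this buffer */
    Py_RETURN_NONE;
}
```

<p>but I am unsure whether this is the intended migration path.</p>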
|
<python><python-c-api>
|
2024-03-05 10:27:58
| 1
| 7,498
|
Nobilis
|
78,106,890
| 5,790,653
|
How to find if one value of dictionary has certain value in a list of dicts
|
<p>This is my list:</p>
<pre class="lang-py prettyprint-override"><code>list1 = [
{'id': 1, 'custom': 'one'},
{'id': 2, 'custom': 'two'},
{'id': 3, 'custom': 'one'},
{'id': 1, 'custom': 'two'},
{'id': 3, 'custom': 'two'},
{'id': 4, 'custom': 'one'},
{'id': 5, 'custom': 'two'},
]
</code></pre>
<p>I want to find which <code>id</code>s only ever have the <code>custom</code> value <code>two</code>. If they have <code>one</code>, or both <code>one</code> and <code>two</code>, that's OK. The problem ids are the ones that have <em>only</em> <code>two</code>.</p>
<p>This is my attempt:</p>
<pre class="lang-py prettyprint-override"><code>for x in list1:
uid = x['id']
if uid == x['id'] and x['custom'] == 'two':
print(x)
</code></pre>
<p>Expected output should be <code>id</code>s 5 and 2.</p>
<p>Current output prints 1, 2, 3 and 5.</p>
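<p>A set-based attempt I also sketched (collecting every <code>custom</code> value per <code>id</code> and keeping the ids whose value set is exactly <code>{'two'}</code>):</p>

```python
from collections import defaultdict

list1 = [
    {'id': 1, 'custom': 'one'},
    {'id': 2, 'custom': 'two'},
    {'id': 3, 'custom': 'one'},
    {'id': 1, 'custom': 'two'},
    {'id': 3, 'custom': 'two'},
    {'id': 4, 'custom': 'one'},
    {'id': 5, 'custom': 'two'},
]

# Gather all custom values seen for each id
values_by_id = defaultdict(set)
for d in list1:
    values_by_id[d['id']].add(d['custom'])

# Keep ids that only ever had 'two'
only_two = [uid for uid, vals in values_by_id.items() if vals == {'two'}]
print(only_two)  # [2, 5]
```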
|
<python>
|
2024-03-05 10:13:05
| 4
| 4,175
|
Saeed
|
78,106,793
| 10,750,541
|
How to group data and also specify the percentiles in a go.box?
|
<p>I am trying to achieve a plotly Figure like the one further down, but instead of the whiskers showing the min and max, I want them to show the 10th and 90th percentiles, and I cannot figure out a way to make it work.</p>
<p>There is some inspiration <a href="https://stackoverflow.com/questions/70966883/how-to-specify-the-percentiles-in-a-pyplot-box">here</a> and <a href="https://stackoverflow.com/questions/60588385/plotly-how-to-group-data-and-specify-colors-using-go-box-instead-of-px-box">here</a>, which show respectively that there is a way to manipulate the boxplot and to group the data, but I have not figured out how it works.</p>
<p>I have a piece of code that I would like to share and would appreciate some help.</p>
<pre><code>import numpy as np
import pandas as pd
import plotly.graph_objects as go
from itertools import cycle
# generate a dataframe
num_rows = 100
ids = np.random.randint(1, 1000, size=num_rows)
categories = np.random.choice(['A', 'B', 'C', 'D'], size=num_rows)
phases, durations = [], []
for id in ids:
phases.extend([1, 2, 3])
durations.extend(np.random.randint(100, 1001, size=3))
data = {
'id': np.repeat(ids, 3),
'category': np.repeat(categories, 3),
'phase': phases,
'duration': durations}
df = pd.DataFrame(data)
df = df.sample(frac=1).reset_index(drop=True)
# calculate statistics
p10 = lambda x: x.quantile(0.10)
p25 = lambda x: x.quantile(0.25)
p50 = lambda x: x.quantile(0.50)
p75 = lambda x: x.quantile(0.75)
p90 = lambda x: x.quantile(0.90)
to_display = df.groupby(['phase', 'category'], as_index=False).agg(p_10 = ('duration', p10),
p_25 = ('duration', p25),
p_75 = ('duration', p75),
median = ('duration', p50),
p_90 = ('duration', p90),
avg = ('duration', 'mean')
)
# create plot
palette = cycle(['black', 'grey', 'red', 'blue'])
fig_grouped = go.Figure()
for i, cat in enumerate(df['category'].unique()):
# print(i, cat)
df_plot = df[df['category']==cat]
fig_grouped.add_trace(go.Box(y = df_plot['duration'],
x = df_plot['phase'],
name = cat, boxpoints=False,
marker_color=next(palette)))
fig_grouped.update_traces(boxmean=True)
fig_grouped.update_layout(boxmode='group')
temp = to_display[to_display['category']==cat]
q1 = list(temp['p_25'].values)
median = list(temp['median'].values)
q3 = list(temp['p_75'].values)
lowerfence = list(temp['p_10'].values)
upperfence = list(temp['p_90'].values)
avg = list(temp['avg'].values)
print(cat)
print('q1', q1)
print('median', median)
print('q3', q3)
print('lowerfence', lowerfence)
print('upperfence', upperfence)
print('avg', avg)
fig.update_traces(q1 = q1,
median = median,
q3 = q3,
lowerfence = lowerfence,
upperfence = upperfence
)
fig_grouped.show()
</code></pre>
<p>The code above gives me this:
<a href="https://i.sstatic.net/5fMMl.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/5fMMl.png" alt="plotly graph_objects boxplot" /></a></p>
<p>and it is very close to what I need, except that the boxes should reflect the requested percentiles, which does not seem to work.</p>
<p>PS: the <code>to_display</code> dataframe has the following format</p>
<pre><code> phase category p_10 p_25 p_75 p_90 avg
0 1 A 231.8 376.25 803.75 920.6 591.964286
1 1 B 124.0 200.50 669.50 886.4 468.913043
2 1 C 203.2 318.50 784.50 901.6 535.739130
3 1 D 175.0 294.75 821.00 882.5 528.961538
4 2 A 326.5 448.25 764.25 842.8 586.928571
5 2 B 169.0 321.50 825.50 933.6 599.304348
6 2 C 138.8 352.50 808.50 936.2 556.260870
7 2 D 166.5 376.50 783.50 872.0 590.961538
8 3 A 260.2 419.00 707.00 828.2 564.928571
9 3 B 190.0 528.00 836.50 962.0 614.043478
10 3 C 343.4 450.00 812.00 894.6 630.652174
11 3 D 204.5 364.00 833.25 929.0 604.384615
</code></pre>
|
<python><pandas><plotly><boxplot><plotly.graph-objects>
|
2024-03-05 09:56:47
| 0
| 532
|
Newbielp
|
78,106,617
| 859,604
|
Drawbacks of system-site-packages?
|
<p>An advantage of using the <a href="https://docs.python.org/3/library/venv.html" rel="nofollow noreferrer">venv</a> option <code>--system-site-packages</code> is that it avoids having multiple copies of a given package on one’s system, when several packages rely on the same version of that package (especially handy with big and common packages such as pandas, numpy or tensorflow).</p>
<p>Beyond Python and in general, it seems to me to make sense for reasons of simplicity and economy of resources, when several services on my system rely on some-given-data-source-or-program, to store the data or program once in a central place and refer every service to that central place, rather than copying it all around multiple times (assuming here that the data is used solely as a source and not written to, otherwise other considerations come into play). I think that this is the generally accepted state of mind; for example, in Debian, when I use <code>apt</code> to install a package <code>p</code> that depends on package <code>d</code>, <code>apt</code> will check if <code>d</code> is installed already and not download it again if so.</p>
<p>I view the option <code>--system-site-packages</code> as implementing this idea of “put my common dependency somewhere only once and let everyone use that common spot”. But, given that it is not the default option, I suspect that there are problems with this. Thus, I wonder about the <em>disadvantages</em> of using <code>--system-site-packages</code>. Why and when should I <em>not</em> use it? In other words, if I use it systematically without thinking further, when will I run into trouble?</p>
<p>Note that I am <em>not</em> asking why using virtual environments make sense in general: I understand that they enable concurrent usage of packages whose sets of dependencies are incompatible with each other. I am asking for the drawbacks of using virtual environments systematically <em>with the option <code>--system-site-packages</code></em>, versus using virtual environments systematically without that option.</p>
|
<python><python-venv>
|
2024-03-05 09:30:08
| 0
| 1,115
|
Olivier Cailloux
|
78,106,497
| 17,624,474
|
Pull PubSub Message through Proxy server - Python
|
<p>I am using the script below to pull messages. I developed it as per the documentation, but I am facing an error.</p>
<pre><code>import json
from googleapiclient.discovery import build
from httplib2 import Http
import httplib2
from oauth2client.service_account import ServiceAccountCredentials
# Replace placeholders with your project ID, topic name, subscription name, and proxy details
project_id = 'v-acp' # Replace with your GCP project ID
topic_name = 'd_ack' # Replace with your Pub/Sub topic name
subscription_name = 'dpull' # Replace with your desired subscription name
proxy_host = '192.173.10.2' # Replace with your proxy server address
proxy_port = 8095 # Replace with your proxy server port
# Create credentials (replace with your own authentication method)
credentials = ServiceAccountCredentials.from_json_keyfile_name(
'key.json',
scopes=['https://www.googleapis.com/auth/pubsub']
)
# Configure HTTP connection with proxy
proxy_info = httplib2.ProxyInfo(proxy_type=httplib2.socks.PROXY_TYPE_HTTP_NO_TUNNEL,
proxy_host=proxy_host,
proxy_port=proxy_port)
http = Http(proxy_info=proxy_info)
# Build the Pub/Sub API client
service = build('pubsub', 'v1', http=credentials.authorize(http))
# Pull messages from the subscription
def pull_messages():
request = service.projects().subscriptions().pull(
subscription=f'projects/{project_id}/subscriptions/{subscription_name}'
)
response = request.execute()
if 'receivedMessages' in response:
for message in response['receivedMessages']:
# Process message data (message['message']['data']) and acknowledgment ID (message['ackId'])
print(f"Received message: {message['message']['data']}")
# Acknowledge message using service.projects().subscriptions().acknowledge()
# Call the pull_messages function to retrieve messages
pull_messages()
</code></pre>
<p>I am getting the following error:</p>
<p>json returned "You have passed an invalid argument to the service (argument=max_messages).". Details: "You have passed an invalid argument to the service (argument=max_messages)."</p>
|
<python><proxy><google-cloud-pubsub><publish-subscribe><http-proxy>
|
2024-03-05 09:07:08
| 1
| 312
|
Moses01
|
78,106,328
| 10,639,382
|
Updating pandas dataframe using Dash
|
<p>Hi, I am wondering if it's possible to develop a dashboard where some inputs (from check boxes/radio items etc.) can update a pandas dataframe in the backend.
Basically I am trying to develop a data labeling tool. For example, an image is shown in the dashboard and you pick several classes using check boxes, after which you click on a submission button which populates the entries into a pandas dataframe that can eventually be saved to a csv file.</p>
<pre><code>app.layout = html.Div([
# Dropdown selection that will be used to select some image
html.Div([
        dcc.Dropdown(id = "image-folders", options = [{"label" : i, "value" : i} for i in folders], value = folders[0])
]),
# Graph goes here ...
html.Div([
        dcc.Graph(id = "image-graph", figure = {}) # Returns a plotly graph through a callback
    ]),
# Checkboxes go here - Selected values should be updated in the dataframe
html.Div([
dcc.Checklist(id = "labels-checklist", options = [{"label" : "Agriculture" , "value" : "Agriculture"}, {"label" : "Residential", "value" : "Residential"}])
    ]),
# Submission button goes here - Click submit to append selected values from checklist to a dataframe and then move on to the next image using the drop down and do the same
html.Button(id = "submission-button", n_clicks = 0, children = "Submit")
])
# Callback & function to update figure
@callback(Output("image-graph", "figure"), Input("image-folders", "value"))
def update_figure(img_folder):
file_path = os.path.join(root, img_folder)
img = skimage.io.imread(file_path)
fig = px.imshow(img)
return fig
</code></pre>
<p>Now I basically need to write another callback that takes values from the "labels-checklist" and appends them to a data frame. This doesn't necessarily need to be shown in the dashboard, but I read in a few places that callbacks need both an Input and an Output. So if it appears in some dash table, that's fine, but it's not really necessary in my case. Any idea how to do this?</p>
<pre><code># Ideally no Output
@callback([Input("labels-checklist", "value"), Input("submission-button", "n_clicks")])
def populate_dataframe(checklist_value, n_clicks):
    # need help: what goes here and what output do I have to give?
</code></pre>
|
<python><plotly-dash>
|
2024-03-05 08:35:51
| 1
| 3,878
|
imantha
|
78,106,285
| 6,212,718
|
Polars dataframe: rolling_sum look ahead
|
<p>I want to calculate <code>rolling_sum</code>, but not over x rows above the current row, but over the x rows below the current row.</p>
<p>My solution is to sort the dataframe with <code>descending=True</code> before applying the <code>rolling_sum</code> and sort back to <code>descending=False</code>.</p>
<p><strong>My solution:</strong></p>
<pre><code>import polars as pl
# Dummy dataset
df = pl.DataFrame({
"Date": [1, 2, 3, 4, 5, 1, 2, 3, 4, 5],
"Close": [-1, 1, 2, 3, 4, 4, 3, 2, 1, -1],
"Company": ["A", "A", "A","A", "A", "B", "B", "B", "B", "B"]
})
# Solution using sort twice
(
df
.sort(by=["Company", "Date"], descending=[True, True])
.with_columns(
pl.col("Close").rolling_sum(3).over("Company").alias("Cumsum_lead")
)
.sort(by=["Company", "Date"], descending=[False, False])
)
</code></pre>
<p><strong>Is there a better solution?</strong></p>
<p>With better I mean:</p>
<ul>
<li>more computational efficient and/or</li>
<li>less code / easier to read</li>
</ul>
<p>Thanks!</p>
<p><strong>EDIT:</strong></p>
<p>I just thought of one other solution which avoids sorting / reversing the column altogether: using <code>shift</code></p>
<pre><code>(
df
.with_columns(
pl.col("Close")
.rolling_sum(3)
.shift(-2)
.over("Company").alias("Cumsum_lead"))
)
</code></pre>
|
<python><python-polars>
|
2024-03-05 08:27:01
| 1
| 1,489
|
FredMaster
|
78,106,213
| 1,367,655
|
How to do a nested loop over a variable length list of generators?
|
<p>I have a list of generators <code>L = [gen_1,...,gen_n]</code> with variable length <code>n</code>.</p>
<p>How can I implement in a compact way the following:</p>
<pre><code>for el_1 in gen_1:
for ...
for el_n in gen_n:
do_something([el_1,...,el_n])
</code></pre>
<p>That should be a fairly common problem, but somehow I can't figure it out and also could not find anything online so far.</p>
<p>Unlike in the simpler case of iterating over the Cartesian product of a list of generators <a href="https://stackoverflow.com/questions/533905/how-to-get-the-cartesian-product-of-multiple-lists">as in this question</a>, variable-depth for loops allow executing code at any level.</p>
<p>As an example problem, maybe we can assume the simple case of tracking the index of each loop's generator:</p>
<pre><code>i_1 = 0
for el_1 in gen_1:
i_1 += 1
i_2 = 0
for ...
i_n = 0
for el_n in gen_n:
i_n += 1
do_something([el_1,...,el_n],[i_1,...,i_n])
</code></pre>
<p>This is just an example of calling some code in a specific place in the nested for loop. Specific being bound by the length of the list of the generators.</p>
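<p>For concreteness, a recursive sketch of what I am after (my assumption: each level's per-iteration code can live in the loop body; note that plain generators are exhausted after one pass, so inner levels would need to be materialized with <code>list()</code> first):</p>

```python
def nested(iterables, action, prefix=()):
    """Emulate a variable-depth nest of for loops over `iterables`."""
    if not iterables:
        action(list(prefix))
        return
    first, *rest = iterables
    for el in first:
        # code for the loop at depth len(prefix) can run here
        nested(rest, action, prefix + (el,))

out = []
nested([list("ab"), list("xy")], out.append)
print(out)  # [['a', 'x'], ['a', 'y'], ['b', 'x'], ['b', 'y']]
```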
|
<python><generator><combinations><variable-length><nested-for-loop>
|
2024-03-05 08:13:47
| 0
| 980
|
Radio Controlled
|
78,106,147
| 2,964,472
|
Python date formatting ValueError: unconverted data remains:
|
<p>I was just iterating over a log and trying to find the difference between the timestamps of log lines.
Below is my unit test code for the problem. I am trying to understand why this is not being parsed in the given format. Any help will be greatly appreciated.</p>
<pre><code>#import datetime
#import pandas as pd
from datetime import datetime
string1 = "02-23-24 13:37:46 0006847636: MKAP: Download"
string2 = "02-23-24 11:33:26 0000352403: MKAP: Download"
#%m-%d-%y %H:%M:%S %f
if __name__ == '__main__':
print(string2[:28].strip())
if string2[30:] == "MKAP: Download":
print("True")
dt1 = datetime.strptime(string1[:28], "%m-%d-%y %H:%M:%S %f")
dt2 = datetime.strptime(string2[:28], "%m-%d-%y %H:%M:%S %f")
delta = dt2 - dt1
print("Test", delta)
else:
print("False")
</code></pre>
<p>Error:</p>
<pre><code>"C:\Scripts\python.exe" "ScriptFilter/test1.py"
Traceback (most recent call last):
File "ScriptFilter/test1.py", line 8, in <module>
dt1 = datetime.strptime(str_dt1, "%m-%d-%y %H:%M:%S %f")
File "lib\_strptime.py", line 577, in _strptime_datetime
tt, fraction, gmtoff_fraction = _strptime(data_string, format)
File "Python37\lib\_strptime.py", line 362, in _strptime
data_string[found.end():])
ValueError: unconverted data remains: 7636
</code></pre>
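<p>A workaround I am testing (my assumption: the 10-digit field is a record counter rather than fractional seconds, so it should not be fed to <code>%f</code> at all). Slicing off just the timestamp part:</p>

```python
from datetime import datetime

string1 = "02-23-24 13:37:46 0006847636: MKAP: Download"
string2 = "02-23-24 11:33:26 0000352403: MKAP: Download"

# The first 17 characters are exactly "MM-DD-YY HH:MM:SS"
dt1 = datetime.strptime(string1[:17], "%m-%d-%y %H:%M:%S")
dt2 = datetime.strptime(string2[:17], "%m-%d-%y %H:%M:%S")
print(dt1 - dt2)  # 2:04:20
```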
|
<python><datetime><strptime>
|
2024-03-05 08:01:07
| 1
| 1,606
|
NarasimhaTejaJ
|
78,106,118
| 12,042,622
|
Extending a CMake-based C++ project to Python using Pybind11
|
<p>I have a (large) C++ project built with CMake, and I am trying to use pybind11 on it. The targets include:</p>
<ol>
<li>to build and run an executable (like a normal C++ project);</li>
<li>to call some C++ methods through python.</li>
</ol>
<p>Therefore, I tried <em>cmake_example</em> (<a href="https://github.com/pybind/cmake_example" rel="nofollow noreferrer">https://github.com/pybind/cmake_example</a>), which is a simple demo provided by pybind11. I can successfully call the C++ method through Python (target 2) but have trouble building an executable (target 1). To ensure that everything is clear, I've included the reproduction code below.</p>
<p><strong>Project structure</strong>: It is just the structure of <em>cmake_example</em>.</p>
<pre><code>├── CMakeLists.txt
├── pybind11
├── setup.py
└── src
└── main.cpp
</code></pre>
<p>src/main.cpp: It is basically the same as in <em>cmake_example</em> except a trivial main function is added.</p>
<pre><code>#include <pybind11/pybind11.h>
#include <iostream>
#define STRINGIFY(x) #x
#define MACRO_STRINGIFY(x) STRINGIFY(x)
int add(int i, int j) {
return i + j;
}
int main(int argc, char **argv) {
std::cout << "Hello World!!!\n";
return 0;
}
namespace py = pybind11;
PYBIND11_MODULE(cmake_example, m) {
m.doc() = R"pbdoc(
Pybind11 example plugin
-----------------------
.. currentmodule:: cmake_example
.. autosummary::
:toctree: _generate
add
subtract
)pbdoc";
m.def("add", &add, R"pbdoc(
Add two numbers
Some other explanation about the add function.
)pbdoc");
m.def("subtract", [](int i, int j) { return i - j; }, R"pbdoc(
Subtract two numbers
Some other explanation about the subtract function.
)pbdoc");
#ifdef VERSION_INFO
m.attr("__version__") = MACRO_STRINGIFY(VERSION_INFO);
#else
m.attr("__version__") = "dev";
#endif
}
</code></pre>
<p>CMakeLists.txt: It is basically the same as in <em>cmake_example</em> except <code>add_executable</code> is added to build executable.</p>
<pre><code>cmake_minimum_required(VERSION 3.4...3.18)
project(cmake_example)
set(CMAKE_CXX_STANDARD 17)
add_subdirectory(pybind11)
pybind11_add_module(cmake_example src/main.cpp)
# EXAMPLE_VERSION_INFO is defined by setup.py and passed into the C++ code as a
# define (VERSION_INFO) here.
target_compile_definitions(cmake_example
PRIVATE VERSION_INFO=${EXAMPLE_VERSION_INFO})
add_executable(exec_example src/main.cpp)
</code></pre>
<p>setup.py: It is the same as in <em>cmake_example</em>, but I've still included the code here.</p>
<pre><code>import os
import re
import subprocess
import sys
from pathlib import Path
from setuptools import Extension, setup
from setuptools.command.build_ext import build_ext
# Convert distutils Windows platform specifiers to CMake -A arguments
PLAT_TO_CMAKE = {
"win32": "Win32",
"win-amd64": "x64",
"win-arm32": "ARM",
"win-arm64": "ARM64",
}
# A CMakeExtension needs a sourcedir instead of a file list.
# The name must be the _single_ output extension from the CMake build.
# If you need multiple extensions, see scikit-build.
class CMakeExtension(Extension):
def __init__(self, name: str, sourcedir: str = "") -> None:
super().__init__(name, sources=[])
self.sourcedir = os.fspath(Path(sourcedir).resolve())
class CMakeBuild(build_ext):
def build_extension(self, ext: CMakeExtension) -> None:
# Must be in this form due to bug in .resolve() only fixed in Python 3.10+
ext_fullpath = Path.cwd() / self.get_ext_fullpath(ext.name)
extdir = ext_fullpath.parent.resolve()
# Using this requires trailing slash for auto-detection & inclusion of
# auxiliary "native" libs
debug = int(os.environ.get("DEBUG", 0)) if self.debug is None else self.debug
cfg = "Debug" if debug else "Release"
# CMake lets you override the generator - we need to check this.
# Can be set with Conda-Build, for example.
cmake_generator = os.environ.get("CMAKE_GENERATOR", "")
# Set Python_EXECUTABLE instead if you use PYBIND11_FINDPYTHON
# EXAMPLE_VERSION_INFO shows you how to pass a value into the C++ code
# from Python.
cmake_args = [
f"-DCMAKE_LIBRARY_OUTPUT_DIRECTORY={extdir}{os.sep}",
f"-DPYTHON_EXECUTABLE={sys.executable}",
f"-DCMAKE_BUILD_TYPE={cfg}", # not used on MSVC, but no harm
]
build_args = []
# Adding CMake arguments set as environment variable
# (needed e.g. to build for ARM OSx on conda-forge)
if "CMAKE_ARGS" in os.environ:
cmake_args += [item for item in os.environ["CMAKE_ARGS"].split(" ") if item]
# In this example, we pass in the version to C++. You might not need to.
cmake_args += [f"-DEXAMPLE_VERSION_INFO={self.distribution.get_version()}"]
if self.compiler.compiler_type != "msvc":
# Using Ninja-build since it a) is available as a wheel and b)
# multithreads automatically. MSVC would require all variables be
# exported for Ninja to pick it up, which is a little tricky to do.
# Users can override the generator with CMAKE_GENERATOR in CMake
# 3.15+.
if not cmake_generator or cmake_generator == "Ninja":
try:
import ninja
ninja_executable_path = Path(ninja.BIN_DIR) / "ninja"
cmake_args += [
"-GNinja",
f"-DCMAKE_MAKE_PROGRAM:FILEPATH={ninja_executable_path}",
]
except ImportError:
pass
else:
# Single config generators are handled "normally"
single_config = any(x in cmake_generator for x in {"NMake", "Ninja"})
# CMake allows an arch-in-generator style for backward compatibility
contains_arch = any(x in cmake_generator for x in {"ARM", "Win64"})
# Specify the arch if using MSVC generator, but only if it doesn't
# contain a backward-compatibility arch spec already in the
# generator name.
if not single_config and not contains_arch:
cmake_args += ["-A", PLAT_TO_CMAKE[self.plat_name]]
# Multi-config generators have a different way to specify configs
if not single_config:
cmake_args += [
f"-DCMAKE_LIBRARY_OUTPUT_DIRECTORY_{cfg.upper()}={extdir}"
]
build_args += ["--config", cfg]
if sys.platform.startswith("darwin"):
# Cross-compile support for macOS - respect ARCHFLAGS if set
archs = re.findall(r"-arch (\S+)", os.environ.get("ARCHFLAGS", ""))
if archs:
cmake_args += ["-DCMAKE_OSX_ARCHITECTURES={}".format(";".join(archs))]
# Set CMAKE_BUILD_PARALLEL_LEVEL to control the parallel build level
# across all generators.
if "CMAKE_BUILD_PARALLEL_LEVEL" not in os.environ:
# self.parallel is a Python 3 only way to set parallel jobs by hand
# using -j in the build_ext call, not supported by pip or PyPA-build.
if hasattr(self, "parallel") and self.parallel:
# CMake 3.12+ only.
build_args += [f"-j{self.parallel}"]
build_temp = Path(self.build_temp) / ext.name
if not build_temp.exists():
build_temp.mkdir(parents=True)
subprocess.run(
["cmake", ext.sourcedir, *cmake_args], cwd=build_temp, check=True
)
subprocess.run(
["cmake", "--build", ".", *build_args], cwd=build_temp, check=True
)
# The information here can also be placed in setup.cfg - better separation of
# logic and declaration, and simpler if you include description/version in a file.
setup(
name="cmake_example",
version="0.0.1",
author="Dean Moldovan",
author_email="dean0x7d@gmail.com",
description="A test project using pybind11 and CMake",
long_description="",
ext_modules=[CMakeExtension("cmake_example")],
cmdclass={"build_ext": CMakeBuild},
zip_safe=False,
extras_require={"test": ["pytest>=6.0"]},
python_requires=">=3.7",
)
</code></pre>
<p>I followed their installation instructions (i.e., <code>pip install ./cmake_example</code>), and I can call the <code>add</code> method through Python (e.g., <code>import cmake_example</code>, <code>cmake_example.add(1, 2)</code>). Now the problem is that I also need the project to be built and run regularly. So I built the project via</p>
<pre><code>mkdir build
cd build
cmake ..
make
</code></pre>
<p>but I got</p>
<pre><code>/home/jjt/work/BN/cmake_example/src/main.cpp:2:10: fatal error: pybind11/pybind11.h: No such file or directory
2 | #include <pybind11/pybind11.h>
| ^~~~~~~~~~~~~~~~~~~~~
compilation terminated.
make[2]: *** [CMakeFiles/exec_example.dir/build.make:76: CMakeFiles/exec_example.dir/src/main.cpp.o] Error 1
make[1]: *** [CMakeFiles/Makefile2:128: CMakeFiles/exec_example.dir/all] Error 2
make: *** [Makefile:91: all] Error 2
</code></pre>
<p><code>pybind11</code> is a submodule in <em>cmake_example</em>, and the path of <code>pybind11.h</code> is <code>pybind11/include/pybind11/pybind11.h</code>.</p>
<p>Tried <code>make VERBOSE=1</code> and got:</p>
<pre><code>/usr/bin/c++ -std=gnu++17 -MD -MT CMakeFiles/exec_example.dir/src/main.cpp.o -MF CMakeFiles/exec_example.dir/src/main.cpp.o.d -o CMakeFiles/exec_example.dir/src/main.cpp.o -c /home/jjt/work/BN/cmake_example/src/main.cpp
/home/jjt/work/BN/cmake_example/src/main.cpp:2:10: fatal error: pybind11/pybind11.h: No such file or directory
2 | #include <pybind11/pybind11.h>
| ^~~~~~~~~~~~~~~~~~~~~
compilation terminated.
make[2]: *** [CMakeFiles/exec_example.dir/build.make:76: CMakeFiles/exec_example.dir/src/main.cpp.o] Error 1
make[2]: Leaving directory '/home/jjt/work/BN/cmake_example/build'
make[1]: *** [CMakeFiles/Makefile2:128: CMakeFiles/exec_example.dir/all] Error 2
make[1]: Leaving directory '/home/jjt/work/BN/cmake_example/build'
make: *** [Makefile:91: all] Error 2
</code></pre>
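<p>My current guess (untested assumption) is that only the pybind11 module target gets the include path, so the plain executable needs it too, e.g.:</p>

```cmake
add_executable(exec_example src/main.cpp)
# Presumably gives exec_example the pybind11 headers plus an embedded interpreter link
target_link_libraries(exec_example PRIVATE pybind11::embed)
```

<p>but I am not sure whether <code>pybind11::embed</code> or <code>pybind11::headers</code> is the right target here.</p>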
|
<python><c++><cmake><pybind11>
|
2024-03-05 07:55:25
| 0
| 751
|
Joxixi
|
78,105,605
| 313,273
|
Setting an event loop fails on IPython 8.22.1 / Python 3.11
|
<p>Is the following behavior normal, or is there a problem with my Python install? From the past hours of reading I just did, it really <em>should</em> work. Otherwise, what am I doing wrong?</p>
<pre><code>Python 3.11.6 (main, Oct 8 2023, 05:06:43) [GCC 13.2.0]
Type 'copyright', 'credits' or 'license' for more information
IPython 8.22.2 -- An enhanced Interactive Python. Type '?' for help.
In [1]: import asyncio
In [2]: asyncio.get_event_loop()
---------------------------------------------------------------------------
RuntimeError Traceback (most recent call last)
Cell In[2], line 1
----> 1 asyncio.get_event_loop()
File /usr/lib/python3.11/asyncio/events.py:677, in BaseDefaultEventLoopPolicy.get_event_loop(self)
674 self.set_event_loop(self.new_event_loop())
676 if self._local._loop is None:
--> 677 raise RuntimeError('There is no current event loop in thread %r.'
678 % threading.current_thread().name)
680 return self._local._loop
RuntimeError: There is no current event loop in thread 'MainThread'.
In [3]: asyncio.set_event_loop(asyncio.new_event_loop())
In [4]: asyncio.get_event_loop()
---------------------------------------------------------------------------
RuntimeError Traceback (most recent call last)
Cell In[4], line 1
----> 1 asyncio.get_event_loop()
File /usr/lib/python3.11/asyncio/events.py:677, in BaseDefaultEventLoopPolicy.get_event_loop(self)
674 self.set_event_loop(self.new_event_loop())
676 if self._local._loop is None:
--> 677 raise RuntimeError('There is no current event loop in thread %r.'
678 % threading.current_thread().name)
680 return self._local._loop
RuntimeError: There is no current event loop in thread 'MainThread'.
</code></pre>
<p>As you can see, line <code>[2]</code> says that there was no current event loop. So I created one on <code>[3]</code>. But on <code>[4]</code> there still isn't an event loop.</p>
<p>Is it me or is it the interpreter?</p>
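<p>For comparison, outside IPython the pattern I expected to rely on does work, since <code>asyncio.run()</code> creates and tears down its own loop:</p>

```python
import asyncio

async def main():
    await asyncio.sleep(0)
    return 42

# Creates a fresh event loop, runs the coroutine, then closes the loop
result = asyncio.run(main())
print(result)  # 42
```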
|
<python><python-3.x><python-asyncio>
|
2024-03-05 06:02:07
| 1
| 2,429
|
eje211
|
78,105,399
| 10,200,497
|
What is the best way to filter groups by two lambda conditions and create a new column based on the conditions?
|
<p>This is my DataFrame:</p>
<pre><code>import pandas as pd
df = pd.DataFrame(
{
'a': ['x', 'x', 'x', 'x', 'y', 'y', 'y', 'y', 'z', 'z', 'z', 'p', 'p', 'p', 'p'],
'b': [1, -1, 1, 1, -1, 1, 1, -1, -1, -1, -1, 1, 1, 1, 1]
}
)
</code></pre>
<p>And this the expected output. I want to create column <code>c</code>:</p>
<pre><code> a b c
0 x 1 first
1 x -1 first
2 x 1 first
3 x 1 first
4 y -1 second
5 y 1 second
6 y 1 second
7 y -1 second
11 p 1 first
12 p 1 first
13 p 1 first
14 p 1 first
</code></pre>
<p>Groups are defined by column <code>a</code>. I want to filter <code>df</code> and choose the groups whose first <code>b</code> is 1 OR whose second <code>b</code> is 1.</p>
<p>I did this by this code:</p>
<pre><code>df1 = df.groupby('a').filter(lambda x: (x.b.iloc[0] == 1) | (x.b.iloc[1] == 1))
</code></pre>
<p>And for creating column <code>c</code> for <code>df1</code>, again groups should be defined by <code>a</code> and then if for each group first <code>b</code> is 1 then <code>c</code> is <code>first</code> and if the second <code>b</code> is 1 then <code>c</code> is <code>second</code>.</p>
<p>Note that for group <code>p</code>, both first and second <code>b</code> is 1, for these groups I want <code>c</code> to be <code>first</code>.</p>
<p>Maybe the way that I approach the issue is totally wrong.</p>
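<p>For comparison, one possible sketch (my own, not necessarily the best way) that does the filter and the labelling in one pass with <code>groupby.transform</code>:</p>

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({
    'a': ['x', 'x', 'x', 'x', 'y', 'y', 'y', 'y', 'z', 'z', 'z', 'p', 'p', 'p', 'p'],
    'b': [1, -1, 1, 1, -1, 1, 1, -1, -1, -1, -1, 1, 1, 1, 1],
})

g = df.groupby('a')['b']
first = g.transform('first').eq(1)                # group's first b == 1
second = g.transform(lambda s: bool(len(s) > 1 and s.iloc[1] == 1)).astype(bool)

mask = first | second
out = df[mask].copy()
# "first" wins when both conditions hold (e.g. group p)
out['c'] = np.where(first[mask], 'first', 'second')
```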
|
<python><pandas><dataframe><group-by>
|
2024-03-05 04:56:30
| 5
| 2,679
|
AmirX
|
78,105,348
| 1,117,320
|
How to reduce python Docker image size
|
<p>My Python-based Docker image is 6.35GB. I tried a multi-stage build and several other options I found while searching (like cache cleanup), but nothing helped.</p>
<p>I might be missing something really important.</p>
<pre><code># Use a smaller base image
FROM python:3.11.4-slim as compiler
# Set the working directory in the container
WORKDIR /app
RUN python -m venv /opt/venv
# Enable venv
ENV PATH="/opt/venv/bin:$PATH"
COPY ./requirements.txt /app/requirements.txt
#RUN pip install -Ur requirements.txt
RUN pip install pip==23.1.2 \
&& pip install -r requirements.txt \
&& rm -rf /root/.cache
FROM python:3.11.4-slim as runner
WORKDIR /app/
COPY --from=compiler /opt/venv /opt/venv
# Enable venv
ENV PATH="/opt/venv/bin:$PATH"
COPY . /app/
# Expose the port the app runs on
EXPOSE 8501
# Define environment variables
ENV STREAMLIT_THEME_BASE dark
ENV STREAMLIT_THEME_SECONDARY_BACKGROUND_COLOR #3A475C
ENV STREAMLIT_THEME_BACKGROUND_COLOR #2d3748
# Run the Streamlit app
CMD ["streamlit", "run", "welcome.py"]
</code></pre>
<p>requirements.txt</p>
<pre><code>ibm-generative-ai>=2.0.0
pydantic>=2.6.1
langchain>=0.1.0
streamlit==1.27.2
ray==2.7.1
chromadb>=0.4.14
python-dotenv==1.0.0
beautifulsoup4==4.12.2
sentence-transformers==2.2.2
ibm-watson>=6.1.0
markdown==3.5.2
ibm-generative-ai[langchain]
</code></pre>
<p>app folder contains multiple .py which does language model processing</p>
<p>Sizes of the individual packages inside the image:</p>
<pre><code>1.5G /app/venv/lib/python3.11/site-packages/torch
420M /app/venv/lib/python3.11/site-packages/triton
170M /app/venv/lib/python3.11/site-packages/ray
126M /app/venv/lib/python3.11/site-packages/pyarrow
86M /app/venv/lib/python3.11/site-packages/transformers
79M /app/venv/lib/python3.11/site-packages/pandas
73M /app/venv/lib/python3.11/site-packages/sympy
29M /app/venv/lib/python3.11/site-packages/kubernetes
25M /app/venv/lib/python3.11/site-packages/streamlit
24M /app/venv/lib/python3.11/site-packages/onnxruntime
23M /app/venv/lib/python3.11/site-packages/sqlalchemy
17M /app/venv/lib/python3.11/site-packages/networkx
16M /app/venv/lib/python3.11/site-packages/pip
15M /app/venv/lib/python3.11/site-packages/langchain
14M /app/venv/lib/python3.11/site-packages/torchvision
14M /app/venv/lib/python3.11/site-packages/nltk
14M /app/venv/lib/python3.11/site-packages/altair
12M /app/venv/lib/python3.11/site-packages/uvloop
12M /app/venv/lib/python3.11/site-packages/tokenizers
9.3M /app/venv/lib/python3.11/site-packages/pydeck
9.0M /app/venv/lib/python3.11/site-packages/pygments
6.7M /app/venv/lib/python3.11/site-packages/setuptools
5.9M /app/venv/lib/python3.11/site-packages/aiohttp
5.6M /app/venv/lib/python3.11/site-packages/pydantic_core
5.3M /app/venv/lib/python3.11/site-packages/watchfiles
5.1M /app/venv/lib/python3.11/site-packages/mpmath
5.0M /app/venv/lib/python3.11/site-packages/safetensors
4.3M /app/venv/lib/python3.11/site-packages/tornado
3.7M /app/venv/lib/python3.11/site-packages/chromadb
3.6M /app/venv/lib/python3.11/site-packages/pydantic
3.5M /app/venv/lib/python3.11/site-packages/regex
3.0M /app/venv/lib/python3.11/site-packages/sentencepiece
2.8M /app/venv/lib/python3.11/site-packages/tzdata
2.8M /app/venv/lib/python3.11/site-packages/pytz
2.6M /app/venv/lib/python3.11/site-packages/joblib
2.5M /app/venv/lib/python3.11/site-packages/rich
2.5M /app/venv/lib/python3.11/site-packages/msgpack
2.5M /app/venv/lib/python3.11/site-packages/greenlet
2.4M /app/venv/lib/python3.11/site-packages/bcrypt
1.7M /app/venv/lib/python3.11/site-packages/fsspec
1.5M /app/venv/lib/python3.11/site-packages/oauthlib
1.4M /app/venv/lib/python3.11/site-packages/fastapi
1.3M /app/venv/lib/python3.11/site-packages/jinja2
1.2M /app/venv/lib/python3.11/site-packages/yarl
1.1M /app/venv/lib/python3.11/site-packages/websockets
1.1M /app/venv/lib/python3.11/site-packages/pyasn1
1.1M /app/venv/lib/python3.11/site-packages/jsonschema
1.1M /app/venv/lib/python3.11/site-packages/httptools
1.0M /app/venv/lib/python3.11/site-packages/anyio
1004K /app/venv/lib/python3.11/site-packages/urllib3
932K /app/venv/lib/python3.11/site-packages/frozenlist
860K /app/venv/lib/python3.11/site-packages/click
820K /app/venv/lib/python3.11/site-packages/markdown
804K /app/venv/lib/python3.11/site-packages/httpx
788K /app/venv/lib/python3.11/site-packages/httpcore
780K /app/venv/lib/python3.11/site-packages/starlette
712K /app/venv/lib/python3.11/site-packages/humanfriendly
696K /app/venv/lib/python3.11/site-packages/uvicorn
696K /app/venv/lib/python3.11/site-packages/toolz
680K /app/venv/lib/python3.11/site-packages/langsmith
632K /app/venv/lib/python3.11/site-packages/pypika
628K /app/venv/lib/python3.11/site-packages/watchdog
620K /app/venv/lib/python3.11/site-packages/posthog
568K /app/venv/lib/python3.11/site-packages/tqdm
548K /app/venv/lib/python3.11/site-packages/h11
540K /app/venv/lib/python3.11/site-packages/idna
540K /app/venv/lib/python3.11/site-packages/gitdb
512K /app/venv/lib/python3.11/site-packages/multidict
484K /app/venv/lib/python3.11/site-packages/requests
480K /app/venv/lib/python3.11/site-packages/importlib_resources
468K /app/venv/lib/python3.11/site-packages/marshmallow
444K /app/venv/lib/python3.11/site-packages/typer
412K /app/venv/lib/python3.11/site-packages/packaging
372K /app/venv/lib/python3.11/site-packages/wrapt
348K /app/venv/lib/python3.11/site-packages/referencing
340K /app/venv/lib/python3.11/site-packages/soupsieve
328K /app/venv/lib/python3.11/site-packages/coloredlogs
328K /app/venv/lib/python3.11/site-packages/certifi
284K /app/venv/lib/python3.11/site-packages/flatbuffers
264K /app/venv/lib/python3.11/site-packages/rsa
260K /app/venv/lib/python3.11/site-packages/orjson
248K /app/venv/lib/python3.11/site-packages/validators
192K /app/venv/lib/python3.11/site-packages/smmap
188K /app/venv/lib/python3.11/site-packages/tenacity
188K /app/venv/lib/python3.11/site-packages/build
188K /app/venv/lib/python3.11/site-packages/asgiref
156K /app/venv/lib/python3.11/site-packages/toml
136K /app/venv/lib/python3.11/site-packages/tzlocal
124K /app/venv/lib/python3.11/site-packages/overrides
120K /app/venv/lib/python3.11/site-packages/markupsafe
120K /app/venv/lib/python3.11/site-packages/backoff
108K /app/venv/lib/python3.11/site-packages/pyproject_hooks
108K /app/venv/lib/python3.11/site-packages/cachetools
108K /app/venv/lib/python3.11/site-packages/blinker
96K /app/venv/lib/python3.11/site-packages/filelock
84K /app/venv/lib/python3.11/site-packages/mdurl
80K /app/venv/lib/python3.11/site-packages/mmh3
68K /app/venv/lib/python3.11/site-packages/deprecated
64K /app/venv/lib/python3.11/site-packages/zipp
64K /app/venv/lib/python3.11/site-packages/attrs
60K /app/venv/lib/python3.11/site-packages/sniffio
48K /app/venv/lib/python3.11/site-packages/aiolimiter
24K /app/venv/lib/python3.11/site-packages/aiosignal
</code></pre>
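<p>From the listing above, CUDA-enabled <code>torch</code> (1.5G) plus <code>triton</code> (420M) dominate the image; they are pulled in transitively by <code>sentence-transformers</code>. One commonly suggested approach (a sketch, not tested against this exact requirements set) is to pre-install the CPU-only torch wheel before resolving the rest, using the CPU wheel index that the PyTorch install instructions document:</p>

```dockerfile
# Sketch: pre-install CPU-only torch so pip keeps the small wheel
# (verify the index URL against the PyTorch install docs for your version)
RUN pip install torch --index-url https://download.pytorch.org/whl/cpu \
    && pip install -r requirements.txt \
    && rm -rf /root/.cache
```

<p>with the caveat that this only helps if nothing in the app actually needs GPU support.</p>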
|
<python><docker><dockerfile>
|
2024-03-05 04:38:49
| 3
| 4,690
|
ambikanair
|
78,105,199
| 683,482
|
Given Data Frame containing string values, how to find the correlation between a group of values in a pandas dataframe column?
|
<p>I have a dataframe df:</p>
<pre><code>ID District Var1 (Average Down Time) Var2 (Incident Count)
0206571-017 TSUEN WAN 1.2 4
0206571-017 TSUEN WAN 2.1 6
0206571-017 TSUEN WAN 3.0 7
0206571-017 TSUEN WAN 1.3 8
0206571-019 TSING YI 2.1 9
0206571-018 CENTRAL 3.2 13
</code></pre>
<p>As a data analyst,</p>
<ol>
<li>I want to find the correlation coefficient value between <code>Var1</code> and <code>Var2</code> for every <code>ID</code></li>
<li>I would like to find which district has the highest Average Down Time / Incident Count, exported as a separate CSV</li>
</ol>
<p>Given that some columns contain string values (e.g. ID / District), please suggest a way to encode or group them so that I can run <code>df.corr()</code> and output a correlation matrix.</p>
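<p>A sketch of what is being asked (assuming the sample rows above): the string columns do not need numeric encoding for this; they can be used directly as group keys, with <code>corr()</code> applied to the numeric columns only.</p>

```python
import pandas as pd

df = pd.DataFrame({
    'ID': ['0206571-017'] * 4 + ['0206571-019', '0206571-018'],
    'District': ['TSUEN WAN'] * 4 + ['TSING YI', 'CENTRAL'],
    'Var1': [1.2, 2.1, 3.0, 1.3, 2.1, 3.2],   # Average Down Time
    'Var2': [4, 6, 7, 8, 9, 13],              # Incident Count
})

# 1) Var1/Var2 correlation per ID (NaN for IDs with fewer than 2 rows)
per_id = df.groupby('ID').apply(lambda g: g['Var1'].corr(g['Var2']))

# 2) district with the highest average down time (exportable via .to_csv)
worst = df.groupby('District')['Var1'].mean().idxmax()
```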
|
<python><dataframe><correlation>
|
2024-03-05 03:38:30
| 1
| 3,076
|
Jeff Bootsholz
|
78,105,102
| 2,744,242
|
Merge multiple sublists where the value is not zero
|
<p>What is the best way in Python to perform a merge where the value is not 0, resulting in the following outcome?</p>
<pre><code>list = [
[0, 0, 0, '', 0, 0, 0, 0, 0],
[0, 0, 0, 0, 'b', 0, 0, 0, 0],
[0, 0, 0, 0, 0, 0, 'c', 0, 0],
[0, 0, 0, 0, 0, 0, 0, '', 0],
[0, 0, 0, 0, 0, 0, 0, 0, '']
]
[[0, 0, 0, '', 'b', 0, 'c', '', '']] # result
</code></pre>
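<p>One possible sketch (note that <code>list</code> above shadows the built-in, so it is renamed <code>rows</code> here): transpose with <code>zip</code> and keep, for each column, the first value that is not 0.</p>

```python
rows = [
    [0, 0, 0, '', 0, 0, 0, 0, 0],
    [0, 0, 0, 0, 'b', 0, 0, 0, 0],
    [0, 0, 0, 0, 0, 0, 'c', 0, 0],
    [0, 0, 0, 0, 0, 0, 0, '', 0],
    [0, 0, 0, 0, 0, 0, 0, 0, ''],
]

# For each column, take the first value != 0 ('' counts as a value),
# defaulting to 0 when the whole column is zeros.
merged = [next((v for v in col if v != 0), 0) for col in zip(*rows)]
result = [merged]  # wrap to match the nested shape shown above
```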
|
<python>
|
2024-03-05 03:01:20
| 1
| 13,406
|
rafaelcb21
|
78,104,696
| 1,447,953
|
New column, sampled from list, based on column value
|
<pre><code>values = [1,2,3,2,3,1]
colors = ['r','g','b']
expected_output = ['r', 'g', 'b', 'g', 'b', 'r'] # how to create this in pandas?
df = pd.DataFrame({'values': values})
df['colors'] = expected_output
</code></pre>
<p>I want to make a new column in my dataframe where the colors are selected based on values in an existing column. I remember doing this in xarray with a vectorised indexing trick, but I can't remember if the same thing is possible in pandas. It feels like it should be a basic indexing task.</p>
<p>The current answers are a nice start, thanks! They take a bit too much advantage of the numerical nature of "values" though. I'd rather something generic that would also work if say</p>
<pre><code>values = ['a', 'b', 'c', 'b', 'c', 'a']
</code></pre>
<p>I guess the "map" method probably still works.</p>
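<p>Indeed, a sketch with <code>map</code> that also works for non-numeric values (pairing the distinct values with colors in sorted order is my own assumption; adjust the mapping rule as needed):</p>

```python
import pandas as pd

values = ['a', 'b', 'c', 'b', 'c', 'a']   # non-numeric keys work too
colors = ['r', 'g', 'b']

# Build an explicit value -> color mapping, then look it up per row.
mapping = dict(zip(sorted(set(values)), colors))
df = pd.DataFrame({'values': values})
df['colors'] = df['values'].map(mapping)
```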
|
<python><pandas><dataframe>
|
2024-03-05 00:13:24
| 3
| 2,974
|
Ben Farmer
|
78,104,587
| 12,309,386
|
Polars efficiently extract subset of fields from array of JSON objects (list of structs)
|
<p><strong>Question</strong>: using Polars, how can I more efficiently extract a subset of fields from an array of JSON objects?</p>
<p><strong>Background</strong></p>
<p>I have a large (~ 300GB) jsonlines/ndjson file where each line represents a JSON object which in turn contains nested JSON.
I am interested in just a subset of the fields from each JSON object, so my goal is to extract these fields from the ndjson and save as parquet for downstream processing.</p>
<p>I am evaluating two options:</p>
<ol>
<li>Using Clickhouse (actually <a href="https://github.com/chdb-io/chdb" rel="nofollow noreferrer">chdb</a>)</li>
<li>Using Polars</li>
</ol>
<p>I've experimented with the Clickhouse approach and have working code that is extremely performant. However, Polars is preferable because it's being used in the rest of my project and my team is already familiar with it, whereas Clickhouse would introduce a new and unfamiliar element.</p>
<p>A simplified view of a single JSON object (i.e. one line of the ndjson, formatted for easier viewing):</p>
<pre><code>{
"id": "834934509",
"baseElements":
[
{
"baseElements":
[
{
"url": "https://acme.com",
"owner":
{
"attributes":
[
{
"type": "name",
                        "display": "Acme Consulting"
},
{
"type": "id",
"display": "A345B"
}
],
"text": "The Acme Consulting Agency London"
}
}
],
"url": "https://baselocation.com/acme"
},
{
"url": "https://baselocation.com/values",
"valueContent":
{
"display": "CUSTOM SPIRIT LIMITED",
"reference": "Client/529d807b46da995395ad3364cbf37701"
}
},
{
"url": "https://backuplocation.com/values",
"valueContent":
{
"display": "UNDERMAKER INNOVATION",
"reference": "Client/08afa4cc57d67bb625d794d60937a770"
}
}
],
"standing": "good",
"deleted": false
}
</code></pre>
<p>I want to extract <code>id</code> and <code>standing</code> (pretty straightforward). And from the outer <code>baseElements</code> array I want to get an array with just the <code>url</code> and <code>valueContent</code> fields of the nested objects.</p>
<p>Here's my code:</p>
<pre class="lang-py prettyprint-override"><code>import polars as pl
lf_sample = pl.scan_ndjson('data/input/myjson.jsonl')\
.select(
'id',
pl.col('baseElements').list.eval(
pl.struct(
pl.element().struct.field('url'),
pl.element().struct.field('valueContent'))
).alias('baseElements'),
'standing'
)
lf_sample.collect(streaming=True).write_parquet('data/output/myparquet.parquet',
compression='snappy',
use_pyarrow=True
)
</code></pre>
<p>Functionally, this works as I expect it to.</p>
<p>However, it is significantly slower than the Clickhouse approach. Some stats below using a subset of the full dataset, run on an AWS <code>r6g.2xlarge</code> EC2 instance (8 vCPUs, 64GB RAM) running Amazon Linux 2023:</p>
<div class="s-table-container"><table class="s-table">
<thead>
<tr>
<th>Sample size (#rows)</th>
<th>Clickhouse</th>
<th>Polars</th>
</tr>
</thead>
<tbody>
<tr>
<td>10k</td>
<td>0.37s</td>
<td>4.67s</td>
</tr>
<tr>
<td>20k</td>
<td>0.52s</td>
<td>18.74s</td>
</tr>
</tbody>
</table></div>
<p>The complete file is 130 million rows/records. Processing with Clickhouse takes ~ 40m, given the numbers above I have not even attempted with Polars.</p>
<p>If I simplify the Polars code to simply extract the entire <code>baseElements</code> array as-is, the speed is comparable to the Clickhouse approach, so I figure it's the <code>pl.col('baseElements').list.eval(...)</code> that's introducing some inefficiency. I attempted using <code>parallel=True</code> for the <code>list.eval(...)</code> but that seems to make it run even longer.</p>
<p>So, is there a better way to do what I'm attempting with Polars, or should I just resign myself to using Clickhouse?</p>
<p>For reference, here's the Clickhouse approach:</p>
<pre class="lang-py prettyprint-override"><code>from chdb import session as chs
sess = chs.Session()
query = """
SET flatten_nested=0;
select
id,
baseElements,
standing,
from file('data/input/myjson.jsonl',
'JSONEachRow',
'id String,
baseElements Nested(
url Nullable(String),
valueContent Tuple(reference Nullable(String))
),
standing String
'
)
into outfile 'data/output/myparquet.parquet'
truncate
format Parquet
settings output_format_parquet_compression_method='snappy', output_format_parquet_string_as_string=1;
"""
sess.query(query)
</code></pre>
|
<python><json><dataframe><python-polars>
|
2024-03-04 23:39:18
| 1
| 927
|
teejay
|
78,104,505
| 14,250,641
|
Large Pandas dataframe finding overlapping regions
|
<p>I have a pandas DataFrame with genomic regions represented by their chromosome, start position, and stop position. I'm trying to identify overlapping regions within the same chromosome and compile them along with their corresponding labels. I'm not sure whether my approach is correct, and I also need it to be efficient since my DataFrame is very large (3 million rows), so a for loop is not ideal.</p>
<p>Here's a sample df and expected output df:</p>
<pre><code>import pandas as pd
# Sample DataFrame
data = {
'chromosome': ['chr1', 'chr1', 'chr1', 'chr1', 'chr1'],
'start': [10, 15, 35, 45, 55],
'stop': [20, 25, 55, 56, 60],
'hg_38_locs': ['chr1:10-20', 'chr1:15-25', 'chr1:35-55', 'chr1:45-56', 'chr1:55-60'],
'main_category': ['label1', 'label2', 'label2', 'label3', 'label1']
}
Output:
overlapping_regions overlapping_labels
0 (chr1:10-20, chr1:15-25) (label1, label2)
1 (chr1:10-20, chr1:35-55) (label1, label2)
2 (chr1:15-25, chr1:35-55) (label2, label2)
3 (chr1:35-55, chr1:45-56) (label2, label3)
4 (chr1:45-56, chr1:55-60) (label3, label1)
</code></pre>
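<p>One possible sketch using a per-chromosome self-merge (hedged: this reports only pairs whose intervals numerically overlap, which differs slightly from the sample output above, and the quadratic self-merge may need a chunked or interval-tree approach, e.g. pyranges, at 3 million rows):</p>

```python
import pandas as pd

data = {
    'chromosome': ['chr1'] * 5,
    'start': [10, 15, 35, 45, 55],
    'stop': [20, 25, 55, 56, 60],
    'hg_38_locs': ['chr1:10-20', 'chr1:15-25', 'chr1:35-55', 'chr1:45-56', 'chr1:55-60'],
    'main_category': ['label1', 'label2', 'label2', 'label3', 'label1'],
}
df = pd.DataFrame(data).reset_index()

# Self-merge within each chromosome, keep each pair once (index_a < index_b),
# and test for interval overlap (inclusive endpoints here; adjust as needed).
m = df.merge(df, on='chromosome', suffixes=('_a', '_b'))
m = m[(m['index_a'] < m['index_b'])
      & (m['start_a'] <= m['stop_b'])
      & (m['start_b'] <= m['stop_a'])]

out = pd.DataFrame({
    'overlapping_regions': list(zip(m['hg_38_locs_a'], m['hg_38_locs_b'])),
    'overlapping_labels': list(zip(m['main_category_a'], m['main_category_b'])),
})
```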
|
<python><pandas><dataframe><bioinformatics>
|
2024-03-04 23:08:45
| 1
| 514
|
youtube
|
78,104,367
| 19,565,276
|
Hash function for sets of IDs
|
<p>Is there a hash algorithm/function in Python that is able to convert sets of unique strings to a single hash string?</p>
<p>For example, a set of <code>{a,b,c}</code> would return some unique ID, the same unique ID as for <code>{c,a,b}</code> or <code>{b,c,a}</code> etc.</p>
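<p>A sketch of one common approach: sort the members so the digest is order-independent, join with a separator unlikely to appear in the strings (to avoid collisions such as <code>{"ab","c"}</code> vs <code>{"a","bc"}</code>), and hash with <code>hashlib</code>:</p>

```python
import hashlib

def set_id(items):
    # Sorting makes {a,b,c}, {c,a,b}, {b,c,a} hash identically;
    # the \x1f separator keeps member boundaries unambiguous.
    joined = "\x1f".join(sorted(items))
    return hashlib.sha256(joined.encode("utf-8")).hexdigest()
```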
|
<python><algorithm><hash><set>
|
2024-03-04 22:27:01
| 1
| 311
|
Lucien Chardon
|
78,104,362
| 4,119,262
|
Why is a test (100/100) leading to an unexpected output?
|
<p>I am trying to learn Python, and I'm working on this exercise:</p>
<ul>
<li>I take a fraction from the user (X/Y) and return a result</li>
<li>If Y is greater than X, I prompt the user to provide another fraction</li>
<li>If X or Y are not integers, I prompt the user to provide another fraction</li>
<li>If X is greater than Y, I prompt the user to provide another fraction</li>
<li>If the fraction leads to a result over 99%, I print F</li>
<li>If the fraction leads to a result below 1%, I print E</li>
</ul>
<p>My code is as follows:</p>
<pre><code>z = input("Fraction: ")
k = z.split("/")
while True:
try:
x = int(k[0])
y = int(k[1])
if y >= x:
result = round((x / y) * 100)
else:
z = input("Fraction: ")
# if x and y are integer and y greater or equal to x, then divide, and round x and y
except (ValueError, ZeroDivisionError):
z = input("Fraction: ")
# if x and y are not integer or y is zero, prompt the user again
else:
break
# exit the loop if the condition is met and print (either F, E or the result of the division)
if result >= 99:
print("F")
elif 0 <= result <= 1:
print("E")
else:
print(f"{result}%")
</code></pre>
<p>The input of <code>100/100</code> leads to another prompt, when it should lead to <code>F</code> as an output.</p>
<p>I do not understand why.</p>
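<p>For reference, a hypothetical refactor (names are my own) that folds the parsing into one function re-run on every attempt, which makes the <code>100/100</code> case easy to check in isolation:</p>

```python
def grade(fraction: str):
    """Return 'F', 'E' or 'NN%' for a valid X/Y fraction, else None."""
    try:
        x_str, y_str = fraction.split("/")
        x, y = int(x_str), int(y_str)
    except ValueError:            # not exactly two parts, or not integers
        return None
    if y == 0 or x > y:           # ZeroDivision guard and the X <= Y rule
        return None
    result = round(x / y * 100)
    if result >= 99:
        return "F"
    if result <= 1:
        return "E"
    return f"{result}%"
```

<p>With this, <code>grade("100/100")</code> returns <code>F</code>, which suggests the re-prompt in the original likely comes from the loop reading a new <code>z</code> without ever re-running <code>k = z.split("/")</code>, so a rejected first input is retried forever.</p>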
|
<python><while-loop><break>
|
2024-03-04 22:26:12
| 1
| 447
|
Elvino Michel
|
78,104,317
| 881,637
|
I'm receiving an "AttributeError: 'NoneType' object has no attribute '_instantiate_plugins'" error
|
<p>I have a straightforward Flask app based on a straightforward tutorial, and I'm trying to get my DB credentials out of the code and into an environment variable. I've got <code>DB_CONNECTION_STR</code> defined in <code>.env</code>, which is at the root of my application. The following is the top of my <code>database.py</code> file. This should be a simple fix; I might be missing an obvious typo.</p>
<pre class="lang-py prettyprint-override"><code>from sqlalchemy import create_engine, text
import os
DB_CONNECTION_STR = os.environ.get('DB_CONNECTION_STR')
# These both print the right value:
print(DB_CONNECTION_STR)
print(os.environ.get('DB_CONNECTION_STR'))
# This works:
db_connection_string = "mysql+pymysql://ehcazu8mvtvv4i44ghs3:pscale_pw_i960dmnsHwOCf3IWmKZi4AzOMVi1ZueGLYRiPZAy3i9@aws.connect.psdb.cloud/joviancareers?charset=utf8mb4"
# Trying either of these two gives me a "AttributeError: 'NoneType' object has no attribute '_instantiate_plugins'"
# db_connection_string = DB_CONNECTION_STR
# db_connection_string = os.environ.get('DB_CONNECTION_STR')
engine = create_engine(db_connection_string, connect_args={
"ssl": {
"ca": "/etc/ssl/certs/ca-certificates.crt"
}
})
</code></pre>
<p>Here is the error I'm getting:</p>
<pre class="lang-bash prettyprint-override"><code>Traceback (most recent call last):
File "/home/taylor/src/jovian-carrers/app.py", line 2, in <module>
from database import load_jobs
File "/home/taylor/src/jovian-carrers/database.py", line 17, in <module>
engine = create_engine(db_connection_string,
File "<string>", line 2, in create_engine
File "/home/taylor/src/jovian-carrers/venv/lib/python3.10/site-packages/sqlalchemy/util/deprecations.py", line 298, in warned
return fn(*args, **kwargs)
File "/home/taylor/src/jovian-carrers/venv/lib/python3.10/site-packages/sqlalchemy/engine/create.py", line 518, in create_engine
u, plugins, kwargs = u._instantiate_plugins(kwargs)
AttributeError: 'NoneType' object has no attribute '_instantiate_plugins'
</code></pre>
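<p>A likely explanation (hedged): <code>os.environ</code> only sees variables that are actually exported, and a <code>.env</code> file is not read automatically unless something loads it (e.g. python-dotenv's <code>load_dotenv()</code>, or <code>flask run</code>'s built-in dotenv support, which does not apply when the module is imported directly). If the variable resolves to <code>None</code> at the moment <code>create_engine</code> runs, <code>create_engine(None)</code> produces exactly this <code>_instantiate_plugins</code> error. A stdlib-only sketch that fails fast and pinpoints where <code>None</code> appears:</p>

```python
import os

def get_connection_string(var: str = "DB_CONNECTION_STR") -> str:
    value = os.environ.get(var)
    if value is None:
        # create_engine(None) is what raises the _instantiate_plugins error
        raise RuntimeError(f"{var} is not set; was the .env file actually loaded?")
    return value
```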
|
<python><flask><sqlalchemy>
|
2024-03-04 22:12:42
| 1
| 1,160
|
Taylor Huston
|
78,104,294
| 7,090,501
|
AzureOpenAI upload a file from memory
|
<p>I am building an assistant and I would like to give it a dataset to analyze. I understand that I can upload a file that an assistant can use with the following code:</p>
<pre class="lang-py prettyprint-override"><code>from openai import AzureOpenAI
import pandas as pd
client = AzureOpenAI(**credentials_here)
pd.DataFrame({
"A": [1, 2, 3, 4, 5],
"B": [6, 7, 8, 9, 10],
"C": [11, 12, 13, 14, 15],
}).to_csv('data.csv', index=False)
file = client.files.create(
file=open(
"data.csv",
"rb",
),
purpose="assistants",
)
</code></pre>
<p>I would prefer to upload the file from a data structure in memory. How can I upload a data from memory using the AzureOpenAI client?</p>
<p>I read that OpenAI allows users to provide <a href="https://github.com/openai/openai-python/tree/main?tab=readme-ov-file#file-uploads" rel="nofollow noreferrer">bytes-like objects</a> so I hoped I could do this with <code>pickle.dumps</code></p>
<pre class="lang-py prettyprint-override"><code>import pickle
df = pd.DataFrame({
"A": [1, 2, 3, 4, 5],
"B": [6, 7, 8, 9, 10],
"C": [11, 12, 13, 14, 15],
})
file = client.files.create(
file=pickle.dumps(df),
purpose="assistants"
)
</code></pre>
<p>This snippet does not throw an error using the OpenAI client. I get the below through the AzureOpenAI client.</p>
<pre><code>openai.BadRequestError: Error code: 400 - {'error': {'message': "Invalid file format. Supported formats: ['c', 'cpp', 'csv', 'docx', 'html', 'java', 'json', 'md', 'pdf', 'php', 'pptx', 'py', 'rb', 'tex', 'txt', 'css', 'jpeg', 'jpg', 'js', 'gif', 'png', 'tar', 'ts', 'xlsx', 'xml', 'zip']", 'type': 'invalid_request_error', 'param': None, 'code': None}}
</code></pre>
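<p>For what it's worth, the linked README also describes passing a <code>(filename, contents)</code> tuple, which gives the service a filename extension it can map onto its supported-formats list (pickled bytes have no recognised format, which would explain the 400). A sketch (the <code>client.files.create</code> call is left commented out since it needs live credentials):</p>

```python
import pandas as pd

df = pd.DataFrame({
    "A": [1, 2, 3, 4, 5],
    "B": [6, 7, 8, 9, 10],
    "C": [11, 12, 13, 14, 15],
})

# Serialise to CSV bytes entirely in memory, no temp file needed.
payload = ("data.csv", df.to_csv(index=False).encode("utf-8"))
# file = client.files.create(file=payload, purpose="assistants")
```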
|
<python><openai-api><azure-openai><openai-assistants-api>
|
2024-03-04 22:06:10
| 1
| 333
|
Marshall K
|
78,103,980
| 610,569
|
What is the expected inputs to Mistral model's embedding layer?
|
<p>After installing</p>
<pre><code>!pip install -U bitsandbytes
!pip install -U transformers
!pip install -U peft
!pip install -U accelerate
!pip install -U trl
</code></pre>
<p>And then some boilerplates to load the Mistral model:</p>
<pre><code>from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig,HfArgumentParser,TrainingArguments,pipeline, logging
from peft import LoraConfig, PeftModel, prepare_model_for_kbit_training, get_peft_model
from datasets import load_dataset
from trl import SFTTrainer
import torch
bnb_config = BitsAndBytesConfig(
load_in_4bit= True,
bnb_4bit_quant_type= "nf4",
bnb_4bit_compute_dtype= torch.bfloat16,
bnb_4bit_use_double_quant= False,
)
base_model="mistralai/Mistral-7B-v0.1"
model = AutoModelForCausalLM.from_pretrained(
base_model,
quantization_config=bnb_config,
torch_dtype=torch.bfloat16,
device_map="auto",
trust_remote_code=True,
)
model.config.use_cache = False # silence the warnings
model.config.pretraining_tp = 1
model.gradient_checkpointing_enable()
tokenizer = AutoTokenizer.from_pretrained(base_model, trust_remote_code=True)
tokenizer.padding_side = 'right'
tokenizer.pad_token = tokenizer.eos_token
tokenizer.add_eos_token = True
tokenizer.add_bos_token, tokenizer.add_eos_token
</code></pre>
<p>We can access the embedding layers (tokenizers -> dense layer output) from:</p>
<pre><code>print(type(model.model.embed_tokens))
model.model.embed_tokens
</code></pre>
<p>[out]:</p>
<pre><code>torch.nn.modules.sparse.Embedding
Embedding(32000, 4096)
</code></pre>
<p>But when I try to feed in some strings it's not the right expected types, e.g.</p>
<pre><code>model.model.embed_tokens(tokenizer("Hello world"))
</code></pre>
<p>[out]:</p>
<pre><code>---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-30-66f4114cc3e1> in <cell line: 3>()
1 import numpy as np
2
----> 3 model.model.embed_tokens(tokenizer("Hello world"))
4 frames
/usr/local/lib/python3.10/dist-packages/torch/nn/functional.py in embedding(input, weight, padding_idx, max_norm, norm_type, scale_grad_by_freq, sparse)
2231 # remove once script supports set_grad_enabled
2232 _no_grad_embedding_renorm_(weight, input, max_norm, norm_type)
-> 2233 return torch.embedding(weight, input, padding_idx, scale_grad_by_freq, sparse)
2234
2235
TypeError: embedding(): argument 'indices' (position 2) must be Tensor, not BatchEncoding
</code></pre>
<p>or</p>
<pre><code>model.model.embed_tokens(tokenizer("Hello world").input_ids)
</code></pre>
<p>[out]:</p>
<pre><code>---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-31-95e95b326f0d> in <cell line: 3>()
1 import numpy as np
2
----> 3 model.model.embed_tokens(tokenizer("Hello world").input_ids)
4 frames
/usr/local/lib/python3.10/dist-packages/torch/nn/functional.py in embedding(input, weight, padding_idx, max_norm, norm_type, scale_grad_by_freq, sparse)
2231 # remove once script supports set_grad_enabled
2232 _no_grad_embedding_renorm_(weight, input, max_norm, norm_type)
-> 2233 return torch.embedding(weight, input, padding_idx, scale_grad_by_freq, sparse)
2234
2235
TypeError: embedding(): argument 'indices' (position 2) must be Tensor, not list
</code></pre>
<p>And this seems to be the right type it's expecting:</p>
<pre><code>model.model.embed_tokens(torch.tensor(tokenizer("Hello world").input_ids))
</code></pre>
<p>[out]:</p>
<pre><code>tensor([[-4.0588e-03, 1.6499e-04, -4.6997e-03, ..., -1.8597e-04,
-9.9945e-04, 4.0531e-05],
[-1.9684e-03, 1.6098e-03, -4.2343e-04, ..., -2.7924e-03,
1.1673e-03, -1.0529e-03],
[-2.3346e-03, 2.0752e-03, -1.4114e-03, ..., 8.4305e-04,
-1.0376e-03, -2.0294e-03],
[-1.5640e-03, 9.3460e-04, 1.8692e-04, ..., 1.1749e-03,
3.3760e-04, 3.3379e-05]], device='cuda:0', dtype=torch.bfloat16,
grad_fn=<EmbeddingBackward0>)
</code></pre>
<p><strong>Is there a way to specify the return type when using the <code>tokenizer()</code> instead of casting the output of the tokenizer into a torch tensor?</strong></p>
|
<python><huggingface-transformers><huggingface-tokenizers><mistral-7b>
|
2024-03-04 20:45:31
| 1
| 123,325
|
alvas
|
78,103,543
| 2,406,499
|
How to get a total count of records in dataframe B for each record on df A based on conditions
|
<p>I have 2 df's that look like this</p>
<p>services_df</p>
<div class="s-table-container"><table class="s-table">
<thead>
<tr>
<th>Service id</th>
<th>Domain</th>
</tr>
</thead>
<tbody>
<tr>
<td>111</td>
<td><a href="http://www.abc.com" rel="nofollow noreferrer">www.abc.com</a></td>
</tr>
<tr>
<td>222</td>
<td>xyz.com</td>
</tr>
<tr>
<td>333</td>
<td><a href="http://www.opq.com" rel="nofollow noreferrer">www.opq.com</a></td>
</tr>
<tr>
<td>444</td>
<td>rst.com</td>
</tr>
</tbody>
</table></div>
<p>subscriptions_df</p>
<div class="s-table-container"><table class="s-table">
<thead>
<tr>
<th>Sub id</th>
<th>Domain</th>
<th>Status</th>
</tr>
</thead>
<tbody>
<tr>
<td>11</td>
<td>abc.com</td>
<td>Active</td>
</tr>
<tr>
<td>22</td>
<td>abc.com</td>
<td>Active</td>
</tr>
<tr>
<td>33</td>
<td>www.xyz.com</td>
<td>Cancelled</td>
</tr>
<tr>
<td>44</td>
<td>rst.com</td>
<td>suspended</td>
</tr>
</tbody>
</table></div>
<p>I need to add a new <strong>Total Active/Suspended Subs</strong> column to df A containing the total of active subs from df B for the corresponding domain, and I need this to be as efficient as possible, since df A and df B are quite large (60-100K records each)</p>
<div class="s-table-container"><table class="s-table">
<thead>
<tr>
<th>Service id</th>
<th>Domain</th>
<th>Total Active/Suspended Subs</th>
</tr>
</thead>
<tbody>
<tr>
<td>111</td>
<td>abc.com</td>
<td>2</td>
</tr>
<tr>
<td>222</td>
<td>xyz.com</td>
<td>0</td>
</tr>
<tr>
<td>333</td>
<td>opq.com</td>
<td>#N/A</td>
</tr>
<tr>
<td>444</td>
<td>rst.com</td>
<td>1</td>
</tr>
</tbody>
</table></div>
<p>I came up with this function which works but is not very efficient</p>
<pre><code>def numberOfActiveSubsTiedToDomainInServices(domain):
#remove www and trim spaces
domain = domain.replace('www.','').replace(' ','')
#retrieve a count of active uber active services tied to the domain found in either domain or the domain in the service description
try:
return len(subscriptions_df.loc[(subscriptions_df['Domain'].astype(str).replace('www.','').replace(' ','') == domain) & (subscriptions_df['Status'].isin(['Active','Suspended']))])
except:
return '#N/A'
services_df['Total Active/Suspended Subs'] = services_df['Domain'].map(numberOfActiveSubsTiedToDomainInServices)
</code></pre>
<p>The problem with this approach is that it is extremely time-consuming, and I have to do similar counts on other columns.</p>
<p>Is there a more Pythonic way to do this that is more efficient?</p>
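<p>A vectorised sketch (assuming the sample frames above): normalise the domain once per column, filter by status, count with <code>groupby(...).size()</code>, and <code>map</code> the counts back. Note that domains with no matching subs come out as <code>NaN</code> rather than distinguishing the 0 vs #N/A cases shown above; <code>fillna</code> can be applied as desired.</p>

```python
import pandas as pd

services_df = pd.DataFrame({
    'Service id': [111, 222, 333, 444],
    'Domain': ['www.abc.com', 'xyz.com', 'www.opq.com', 'rst.com'],
})
subscriptions_df = pd.DataFrame({
    'Sub id': [11, 22, 33, 44],
    'Domain': ['abc.com', 'abc.com', 'www.xyz.com', 'rst.com'],
    'Status': ['Active', 'Active', 'Cancelled', 'suspended'],
})

def norm(s: pd.Series) -> pd.Series:
    # vectorised: strip the www. prefix and whitespace once per column
    return s.str.replace('www.', '', regex=False).str.strip()

active = subscriptions_df[
    subscriptions_df['Status'].str.lower().isin(['active', 'suspended'])
]
counts = active.groupby(norm(active['Domain'])).size()
services_df['Total Active/Suspended Subs'] = norm(services_df['Domain']).map(counts)
```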
|
<python><pandas><dataframe><google-colaboratory>
|
2024-03-04 19:09:17
| 1
| 1,268
|
Francisco Cortes
|
78,103,535
| 2,157,783
|
Pip: uninstall ALL pkgs installed during `-e` installation
|
<p>I just installed a GitHub repo with <code>-e .</code>. That's what I intended to do, only I didn't notice I was in my <code>base</code> conda env rather than in the intended environment.</p>
<p>That package, in turn, installed a sh*tload of dependencies (something north of ~20 GB). Now I need to uninstall all that stuff. Is there a way to tell <code>pip</code> to do that without breaking my conda <code>base</code> env?
I was thinking: is there a log of installed packages <em>with timestamps</em>, so that I can tell pip to remove everything installed from a certain date onwards?</p>
|
<python><pip><anaconda><conda>
|
2024-03-04 19:06:15
| 1
| 680
|
MadHatter
|
78,103,459
| 7,746,472
|
Import from parallel folder in Python
|
<p>I can't figure out the way to import from a parallel folder.</p>
<p>Importing from the same directory works fine:</p>
<pre><code>project_folder
-subfolder1
-my_prog.py
-credentials.py
</code></pre>
<p>credentials.py contains</p>
<pre><code>api_key = "supersecretapikey"
</code></pre>
<p>and my_prog.py contains</p>
<pre><code>import credentials
</code></pre>
<p>This works fine when I run <code>$ python my_prog.py</code>.</p>
<p>However, I want to move credentials to another subfolder, like so:</p>
<pre><code>project_folder
-subfolder1
-my_prog.py
-subfolder2
-credentials.py
</code></pre>
<p>I can't get the import to work in this setup. I have tried the obvious thing, which to me is:</p>
<pre><code>from ..subfolder2 import credentials
</code></pre>
<p>This results in the error</p>
<blockquote>
<p>ImportError: attempted relative import with no known parent package</p>
</blockquote>
<p>which I don't understand.</p>
<p>I've tried placing <code>__init__.py</code> files in project_folder, subfolder1 and subfolder2, which seems to make no difference.</p>
<p>I prefer not to use environment variables, as I need to run this on a remote server.</p>
<p>I'm using Python 3.12.2</p>
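<p>One common workaround is to put <code>project_folder</code> itself on <code>sys.path</code> before importing, so <code>subfolder2</code> resolves as a top-level package. A self-contained sketch (it rebuilds the layout in a temp directory purely for demonstration):</p>

```python
import sys
import tempfile
from pathlib import Path

# Recreate the layout from the question in a temp dir.
root = Path(tempfile.mkdtemp())          # stands in for project_folder
(root / "subfolder2").mkdir()
(root / "subfolder2" / "credentials.py").write_text('api_key = "supersecretapikey"\n')

# In my_prog.py this line would be:
# sys.path.insert(0, str(Path(__file__).resolve().parent.parent))
sys.path.insert(0, str(root))
from subfolder2 import credentials       # absolute import; no __init__.py needed

print(credentials.api_key)
```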
|
<python><python-3.x><python-import>
|
2024-03-04 18:51:44
| 2
| 1,191
|
Sebastian
|
78,103,405
| 10,491,381
|
Real case ODE using solve_ivp
|
<p>I'm new to ODEs (differential equations) and need to model y' = a·y.</p>
<p>Here is the scenario:
assume that a population increases by 100% per unit of time (rate = 1).
For example, if the population is 1 at time 1,
it will be 2 persons at time 2, 4 persons at time 3, and 8 persons at time 4 (it doubles every unit of time).</p>
<p>The speed of change of the population is therefore proportional to the size of the population.</p>
<p>How can I model this using scipy's solve_ivp and draw the right curve?
Thank you</p>
<p>This is my try out :</p>
<pre><code>import numpy as np
from scipy.integrate import odeint
import matplotlib.pyplot as plt
def model(y,t):
k = 1 # rate 100%
dydt = k * y
return dydt
# Initial condition
y0 = 1
# Interval , max point, number of points
t = np.linspace(1,3,3)
# Solve Diff eq
y = odeint(model,y0,t)
# Draw
plt.plot(t,y)
plt.xlabel('time')
plt.ylabel('y(t) nb persons population')
plt.show()
</code></pre>
<p>The result is wrong:</p>
<p><a href="https://i.sstatic.net/PEzA8.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/PEzA8.png" alt="enter image description here" /></a></p>
<p>Could you please point me in the right direction ? Thank you</p>
<p>This is my second attempt; the result still looks wrong to me. Is there something else about ODEs I should understand?</p>
<pre><code>import numpy as np
from scipy.integrate import odeint
from matplotlib import pyplot as plt
k = 1.0 ## k is a rate meaning 100%
def f(y,x): ## This function returns derivatives
return k*y
xs = np.arange(1,4,1) ## population values from hour 1 to hour 3
yO = 1 ## At the beginning, the population is one person
ys = odeint(f,yO,xs)
print(xs)
print(ys)
plt.plot(xs,ys)
plt.plot(xs,ys,'b*')
plt.xlabel('time in hours')
plt.ylabel('population')
plt.title('population grows 100% per hour')
plt.show()
</code></pre>
<p><a href="https://i.sstatic.net/Oitmi.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Oitmi.png" alt="enter image description here" /></a></p>
<p>I think it could also be solved with a geometric series, but I am trying to understand ODEs; for now, I fail.</p>
<p>Here is the link I used :
<a href="https://www.epythonguru.com/2020/09/ODE-Growth-model-in-Python.html" rel="nofollow noreferrer">https://www.epythonguru.com/2020/09/ODE-Growth-model-in-Python.html</a></p>
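<p>One thing I worked out on paper (my own derivation, sketched below): "increases by 100% per hour" in the doubling sense means y(t) = y0 · 2^(t − t0), which is the ODE y' = k·y with k = ln(2), not k = 1. With k = 1 the exact solution multiplies the population by e ≈ 2.718 each hour, so the solver output above may actually be correct for the equation I gave it:</p>

```python
import math

# Doubling every hour: y(t) = y0 * 2**(t - t0), i.e. y' = k*y with k = ln(2)
def exact(t, k=math.log(2), t0=1.0, y0=1.0):
    return y0 * math.exp(k * (t - t0))

print(round(exact(2), 6))  # 2.0
print(round(exact(3), 6))  # 4.0
print(round(exact(4), 6))  # 8.0

# With k = 1 instead, each hour multiplies the population by e:
print(round(math.exp(1), 3))  # 2.718
```

<p>So the closed form above is what the numerical solution should be checked against.</p>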
|
<python><scipy><ode>
|
2024-03-04 18:35:25
| 0
| 347
|
harmonius cool
|
78,103,022
| 10,985,257
|
Explanation of reportAttributeAccessIssue of pylance
|
<p>I have the following class:</p>
<pre class="lang-py prettyprint-override"><code>class ProcessRunner:
def __init__(self, network: nn.Module) -> None:
self.network: nn.Module = network
def run(self, processes: List[Process]):
for process in processes:
process.network = self.network
self.network = process.update()
</code></pre>
<p>which, inside the <code>run</code> method, produces the following errors for <code>self.network</code> and <code>process.update()</code>:</p>
<pre><code>Cannot assign member "network" for type "Process"
Type "Module | None" cannot be assigned to type "Module"
"None" is incompatible with "Module"PylancereportAttributeAccessIssue
</code></pre>
<p>and</p>
<pre><code>Cannot assign member "network" for type "ProcessRunner*"
"None" is incompatible with "Module"PylancereportAttributeAccessIssue
</code></pre>
<p>I think I understand that these issues come, at least partially, from my Protocol class <code>Process</code>, which looks like this:</p>
<pre class="lang-py prettyprint-override"><code>@runtime_checkable
class Process(Protocol):
network: nn.Module
def update(self, **kwargs):
...
</code></pre>
<p>But I do not understand completely what causes the issue here:</p>
<ol>
<li>The second issue might appear because the <code>update</code> method doesn't return the network here. One solution could be to keep the network only in the <code>ProcessRunner</code> class, but how can a method access it in that case?</li>
<li>I have no clue what triggers the first issue. The attribute in the Protocol class as well as in the <code>ProcessRunner</code> class is of type <code>nn.Module</code> in both cases. Why is it not satisfied then? Or is this caused by the second issue?</li>
</ol>
|
<python><python-typing><pyright>
|
2024-03-04 17:15:41
| 1
| 1,066
|
MaKaNu
|
78,102,952
| 745,903
|
Is there a standard class / type that represents valid Python identifiers?
|
<p>Being interpreted and laissez-faire, Python often blurs the boundary between what's an entity in the language (like a variable or class) and what's simply an entry in a runtime dictionary. Perhaps the most prominent example is keyword arguments: one can switch between calling a function with individual arguments, whose names are Python identifiers, and with a single dict, whose keys are strings:</p>
<pre><code>In [1]: def viking(spam, eggs):
...: print(f"Eating {spam}")
...: print(f"Eating {eggs}")
...: print(f"Eating more {spam}")
...:
In [2]: viking(spam="spammedy", eggs="eggedy")
Eating spammedy
Eating eggedy
Eating more spammedy
In [3]: viking(**{"spam": "spammedy", "eggs": "eggedy"})
Eating spammedy
Eating eggedy
Eating more spammedy
</code></pre>
<p>Both styles aren't equally general, though: while <code>spam</code> and <code>eggs</code> are valid identifiers, a string like <code>"5+a.@))"</code> does not correspond to something that could actually be a Python variable name.</p>
<p>As <a href="https://stackoverflow.com/questions/26534634/attributes-which-arent-valid-python-identifiers">discussed elsewhere</a>, Python itself makes a point of <em>not</em> preventing an attribute from being set to something that wouldn't be a legal identifier, but assume I wish to write some macro code that does enforce this. Is there somewhere in the standard library a class whose values correspond precisely to legal identifiers? Or at least a function that can be called to check whether or not a string represents such an identifier?</p>
<p>After all, these sort of considerations matter – if not elsewhere then at least when it comes to <em>parsing Python code</em> from plain text.</p>
<p>It turns out rather tricky to search the internet for something like that!</p>
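<p>For concreteness, here is the shape of the predicate I'm after, hand-rolled from what I've found so far (<code>str.isidentifier</code> looks promising, but I'm not certain it excludes reserved words, hence the extra <code>keyword</code> check):</p>

```python
import keyword

def usable_as_kwarg(name: str) -> bool:
    # A string works as a keyword-argument name iff it is a legal
    # identifier and not a reserved word.
    return name.isidentifier() and not keyword.iskeyword(name)

print(usable_as_kwarg("spam"))     # True
print(usable_as_kwarg("5+a.@))"))  # False
print(usable_as_kwarg("class"))    # False: legal token shape, but reserved
```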
|
<python><keyword-argument>
|
2024-03-04 17:03:55
| 1
| 121,021
|
leftaroundabout
|
78,102,923
| 14,338,504
|
Is it possible to execute snowflake python code (snowpark.Session) through airflow's SnowflakeOperator?
|
<p>I have an airflow DAG using <code>SnowflakeOperator</code> with <code>sql="myscript.sql"</code>.</p>
<p>I'd like another task in this DAG to use Snowpark <code>Session</code> code, i.e. dataframe code.</p>
<p>Is there an easy way to set this up?</p>
<p>I've read the docs for <a href="https://airflow.apache.org/docs/apache-airflow-providers-snowflake/stable/operators/snowflake.html" rel="nofollow noreferrer">SnowflakeOperator</a> and I see there's a sql= parameter but it's unclear from the docs how well this works with the rest of the snowflake ecosystem.</p>
|
<python><snowflake-cloud-data-platform><airflow>
|
2024-03-04 16:57:26
| 1
| 651
|
blake
|
78,102,877
| 18,690,626
|
PyTorch throws batch size error with any value but 1
|
<p>I made a neural network model with PyTorch that somewhat resembles VGG19. Every time I set a batch size other than 1, I get the error:</p>
<pre><code>ValueError: Expected input batch_size (1) to match target batch_size (16).
</code></pre>
<p>This does not happen with a batch size of 1; training starts without any complication.</p>
<p>I simply want to change the batch size without getting an error.</p>
<p>I could not find (or understand) a solution to this issue by myself. I provide the model script below.</p>
<p>Note: in the <code>train()</code> method of the model, a default batch size of 16 is given. With that configuration it throws the error shown above; when I instead pass a batch size of 1 to the method, it works fine.</p>
<pre><code>import os
import glob
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import torch
import torch.nn as nn
import torch.nn.functional as F
from torch.utils.data import DataLoader
from torchvision import datasets, transforms
class ICModel(nn.Module):
def __init__(self):
super().__init__()
# CNN
self.conv1 = nn.Conv2d(3, 64, kernel_size=3, stride=2)
self.conv2 = nn.Conv2d(64, 128, kernel_size=3, stride=2)
self.conv3 = nn.Conv2d(128, 256, kernel_size=3, stride=2)
self.conv4 = nn.Conv2d(256, 512, kernel_size=3, stride=2)
# FULLY CONNECTED LAYERS
self.fc1 = nn.Linear(61952, 256)
self.fc2 = nn.Linear(256, 64)
self.out = nn.Linear(64, 2)
def forward(self, x):
# CONV - 1
x = F.relu(self.conv1(x))
x = F.max_pool2d(x, kernel_size=3, stride=1)
# CONV - 2
x = F.relu(self.conv2(x))
x = F.max_pool2d(x, kernel_size=3, stride=1)
# CONV - 3
x = F.relu(self.conv3(x))
x = F.max_pool2d(x, kernel_size=3, stride=1)
# CONV - 4
x = F.relu(self.conv4(x))
flattened_size = x.shape[0] * x.shape[1] * x.shape[2] * x.shape[3]
x = x.view(-1, flattened_size)
# FULLY CONNECTED LAYERS
x = F.relu(self.fc1(x))
x = F.relu(self.fc2(x))
x = self.out(x)
return F.log_softmax(x, dim=1)
def train(self, dataset_dir='', epochs=5, batch_size=16, seed=35, learning_rate=0.001, model_weights_path=''):
if dataset_dir == '':
raise Exception("Please enter a valid dataset directory path!")
train_correct = []
train_losses = []
torch.manual_seed(seed)
# CRITERION AND OPTIMIZER SETUP
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(self.parameters(), lr=learning_rate)
optim_width, optim_height = 224, 224
data_transforms = transforms.Compose([
transforms.Resize((optim_width, optim_height)), # Resize images to average dimensions
transforms.ToTensor(), # Convert images to PyTorch tensors
transforms.Normalize(mean=[0.456, 0.456, 0.456], std=[0.456, 0.456, 0.456]) # Normalize images
])
dataset = datasets.ImageFolder(root=dataset_dir, transform=data_transforms)
train_loader = DataLoader(dataset=dataset, batch_size=batch_size, shuffle=True)
for epoch in range(epochs):
trn_corr = 0
for b, (X_train, y_train) in enumerate(train_loader):
b += 1
y_pred = self(X_train)
loss = criterion(y_pred, y_train)
predicted = torch.max(y_pred, dim=1)[1]
batch_corr = (predicted == y_train).sum()
trn_corr += batch_corr.item()
optimizer.zero_grad()
loss.backward()
optimizer.step()
if b % 4 == 0:
print(f'Epoch: {epoch} Batch: {b} Loss: {loss.item()}')
train_losses.append(loss)
train_correct.append(trn_corr)
if (model_weights_path != '') & os.path.exists(model_weights_path) & os.path.isdir(model_weights_path):
torch.save({
'model_state_dict': self.state_dict(),
'optimizer_state_dict': optimizer.state_dict()
}, model_weights_path)
def test(self, dataset_dir='', batch_size=16):
if dataset_dir == '':
raise Exception("Please enter a valid dataset directory path!")
optim_width, optim_height = 224, 224
test_losses = []
tst_crr = 0
criterion = nn.CrossEntropyLoss()
data_transforms = transforms.Compose([
transforms.Resize((optim_width, optim_height)), # Resize images to average dimensions
transforms.Grayscale(),
transforms.ToTensor(), # Convert images to PyTorch tensors
transforms.Normalize(mean=[0.456], std=[0.456]) # Normalize images
])
dataset = datasets.ImageFolder(root=dataset_dir, transform=data_transforms)
test_loader = DataLoader(dataset=dataset, batch_size=batch_size, shuffle=True)
with torch.no_grad():
for b, (X_test, y_test) in enumerate(test_loader):
y_val = self(X_test)
predicted = torch.max(y_val.data, dim=1)[1]
tst_crr += (predicted == y_test).sum()
loss = criterion(y_val, y_test)
test_losses.append(loss.item())
test_results = {
'true_positive': tst_crr,
'false_positive': len(dataset.imgs) - tst_crr
}
return test_results, test_losses
</code></pre>
<p>And this is the full traceback of the error :</p>
<pre><code>Traceback (most recent call last):
File "/Users/eaidy/Repos/ML/inclination-classification-pytorch/src/main.py", line 12, in <module>
ic_model.train(dataset_dir=train_dataset_absolute_path, epochs=1, batch_size=16, learning_rate=1e-5)
File "/Users/eaidy/Repos/ML/inclination-classification-pytorch/src/models/cnn_model.py", line 94, in train
loss = criterion(y_pred, y_train)
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/eaidy/Repos/ML/inclination-classification-pytorch/.venv/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1511, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/eaidy/Repos/ML/inclination-classification-pytorch/.venv/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1520, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/eaidy/Repos/ML/inclination-classification-pytorch/.venv/lib/python3.11/site-packages/torch/nn/modules/loss.py", line 1179, in forward
return F.cross_entropy(input, target, weight=self.weight,
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/eaidy/Repos/ML/inclination-classification-pytorch/.venv/lib/python3.11/site-packages/torch/nn/functional.py", line 3059, in cross_entropy
return torch._C._nn.cross_entropy_loss(input, target, weight, _Reduction.get_enum(reduction), ignore_index, label_smoothing)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
ValueError: Expected input batch_size (1) to match target batch_size (16).
</code></pre>
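<p>A sketch of the shape arithmetic in <code>forward()</code> that I believe produces the mismatch. For a 224×224 input, the conv/pool stack ends at 512×11×11 (which matches <code>fc1</code>'s 61,952 in-features), but <code>flattened_size</code> is computed from the whole batch tensor, batch dimension included:</p>

```python
# Conv/pool output shape for a batch of 16 resized 224x224 images:
batch, c, h, w = 16, 512, 11, 11

per_sample = c * h * w        # 61952, matching nn.Linear(61952, 256)
print(per_sample)

# forward() folds the batch dimension into flattened_size ...
flattened_size = batch * c * h * w
# ... so x.view(-1, flattened_size) produces exactly ONE row:
rows = (batch * c * h * w) // flattened_size
print(rows)  # 1, the "input batch_size (1)" in the error message
```

<p>If that is right, flattening per sample while keeping the batch dimension, e.g. <code>x = x.view(x.size(0), -1)</code>, should let any batch size train; I haven't run the exact model, so treat this as a sketch.</p>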
|
<python><machine-learning><pytorch><neural-network>
|
2024-03-04 16:51:12
| 1
| 620
|
eaidy
|
78,102,827
| 6,498,649
|
cannot understand python match case
|
<p>Why do the two examples give two different results? If <code>str()</code> is equal to <code>''</code>, the result should be the same.</p>
<pre><code> # example 1
match 'ciao':
case str():
print('string')
case _:
print('default')
# >>> string
# example 2
match 'ciao':
case '':
print('string')
case _:
print('default')
# >>> default
# but ...
assert str() == ''
</code></pre>
|
<python><match><case>
|
2024-03-04 16:40:38
| 1
| 403
|
Dario Colombotto
|
78,102,745
| 9,707,286
|
Langchain CSVLoader
|
<p>Not a coding question, but a documentation omission that is nowhere mentioned online at this point: when using the Langchain <code>CSVLoader</code>, which column is vectorized via the OpenAI embeddings I am using?</p>
<p>I ask because, with the code below, I vectorized a sample CSV, ran searches (on Pinecone) and consistently received back DISsimilar responses. How do I know which column Langchain actually picks to vectorize?</p>
<pre><code>loader = CSVLoader(file_path=file, metadata_columns=['col2', 'col3', 'col4','col5'])
langchain_docs = loader.load()
text_splitter = RecursiveCharacterTextSplitter(chunk_size=2000, chunk_overlap=100)
docs = text_splitter.split_documents(langchain_docs)
for doc in docs:
doc.metadata.pop('source')
doc.metadata.pop('row')
my_index = pc_store.from_documents(docs, embeddings, index_name=PINECONE_INDEX_NAME)
</code></pre>
<p>I am assuming the <code>CSVLoader</code> then picks <code>col1</code> to vectorize. But searches of Pinecone are terrible, leading me to think some other column is being vectorized.</p>
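<p>My working mental model, sketched in plain <code>csv</code> code (hedged: this is how I understand the loader's behaviour, not something I've confirmed in the docs): every column <em>not</em> named in <code>metadata_columns</code> is serialised into <code>page_content</code> as <code>column: value</code> lines, and that whole string is what gets embedded:</p>

```python
import csv
import io

raw = "col1,col2,col3\nhello,a,b\n"     # toy stand-in for my CSV
metadata_columns = {"col2", "col3"}

for row in csv.DictReader(io.StringIO(raw)):
    # Non-metadata columns become the embedded text ...
    content = "\n".join(f"{k}: {v}" for k, v in row.items()
                        if k not in metadata_columns)
    # ... while metadata columns are stored alongside, not embedded.
    metadata = {k: v for k, v in row.items() if k in metadata_columns}

print(content)   # col1: hello
print(metadata)  # {'col2': 'a', 'col3': 'b'}
```

<p>If that model is right, with <code>metadata_columns=['col2', 'col3', 'col4', 'col5']</code> only <code>col1</code> ends up in the embedded text; printing <code>langchain_docs[0].page_content</code> would confirm what was actually vectorized.</p>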
|
<python><loader><langchain><pinecone>
|
2024-03-04 16:29:06
| 1
| 747
|
John Taylor
|
78,102,664
| 22,221,987
|
Importing classes from different module levels causes typehint and argument warning disabling
|
<p>I have this file structure:</p>
<pre><code> - package
- sub_package_1
- __init__.py
- sub_module_1.py
- sub_package_2
- __init__.py
- sub_module_2.py
</code></pre>
<p>In <code>sub_module_1.py</code> we have this code:</p>
<pre><code>class A:
def __init__(self):
self.b = B()
def method_a(self, var: int):
pass
class B:
def __init__(self):
pass
def method_b(self, var: int):
pass
class C: # This class always located in different module, but for example lets place it here
def __init__(self):
self.a = A()
def method_c(self, var: int):
pass
a = A()
a.b.method_b() # here we have an argument warning (method_b is waiting for an argument)
c = C()
c.a.b.method_b() # and also here, the same warning
</code></pre>
<p>And in <code>sub_module_2.py</code> we have this code:</p>
<pre><code>from package.sub_package_1.sub_module_1 import A
class C:
def __init__(self):
self.a = A()
def method_c(self, var: int):
pass
c = C()
c.a.b.method_b() # but here we don't have a warning
</code></pre>
<p>I've left some comments in the code to show the problem.<br />
When I work in the same module (or even in the same package), I see all the warnings caused by method arguments. But if I move one of the classes outside the package (like I did with the <code>C</code> class), I lose all warnings. I still get autocompletion, type hints etc., but no warning when I don't pass an argument to <code>c.a.b.method_b()</code>.</p>
<p>I feel like this is an import issue, but I can't find the solution.<br />
What is the reason for this behaviour and how can I fix it?</p>
|
<python><python-3.x><import><path><sys>
|
2024-03-04 16:14:45
| 2
| 309
|
Mika
|
78,102,616
| 4,406,532
|
Timeline in Python - creating spaces between dates lines
|
<p>Following is the code and its output.</p>
<ol>
<li>Is there a way to create space between the date line and the event name below it, so that they do not overlap as seen in the figure?</li>
<li>There is a space between the 'positive' event names and their tick. Is there a way to eliminate that space (and put them right on the tick, like the 'negative' events)?</li>
</ol>
<p>Code:</p>
<pre><code>import matplotlib.pyplot as plt
import numpy as np
import pandas as pd

df = pd.DataFrame(
{
'event': ['Qfpgv KFJPF fpnkmkq',
'Ltzx cqwup xnywi bxfjzgq ol mwwqmg ukszs',
'MUTD ysrfzad Urmiv lqyjexdq xqkqzfx vqtwrgh',
'Vxdys vjxqnqojq qvqoshjmhv dmyzf fj wvtrjv',
'Kcxtm-Bix Nzvlqj ajmydgbxk',
'Nrsbo! ukguvle xavahfg tqyikwqg, UZSP tgrlqfr',
'Rjxetf/uzpqwhwr qtshxvlp tljybtncbq qvqybnjgq dzqj',
'Qwvbt-Khspqw olfypkbvh tljmyyvz ajmy zazvqfm',
'UHW Umkqtqm zvhq tljybtncbq',
'Wwscye rukqdf, vfyvqmf udzvqmcv tljybtncbq',
'Twljq uqtrjxwh hyvnvwbl tljmyyvz rbykqkwqjg djzv Kqkmv xnyzqmv.',
'Qfpgv Qnwroj rymqzqm tljybtncbq kxqj vq Kqmnjp kxqkz.',
'Vwkqr jvqjg fqtwp, Jvccjvj CQM Sgqhojif mblqjc',
'Qxltj dqg Vqsue tljmyyvz jvtsqjwuj wkhruwqlqj, ixdro xqjolvkphw',
'Rwkq Vqwdqlqj odujhg jvswhuh fduuleolqj',
'Nzvq nqfupxqj jtsqjwuj',
'Vqolqjyphfwhv sohwwhuqvhtg jtsqjwuj',
'Ulnwj ri gihw dqg rqooih OY wlg ihghovdfh orqjv',
'Fkxohqjdoahoo sohwwhu ydplqhuv rqh wkhhtxdo jtsqjwuj'
],
'date': ['1984', '1987', '1991', '1994', '1997', '1998', '1999', '2002', '2004',
'2005', '2007', '2009', '2010', '2012', '2013', '2014', '2017', '2019', '2021']
}
)
df['date'] = pd.to_datetime(df['date'])
df['date'] = df['date'].dt.year
levels = np.tile(
[-5, 5, -3, 3, -1, 1, -7, 7, -4, 4, -2, 2, -6, 6, -3, 3, -1, 1, -5, 5, -3, 3, -1, 1, 5],
int(np.ceil(len(df) / 6))
)[:len(df)]
fig, ax = plt.subplots(figsize=(12.8, 4), constrained_layout=True)
ax.vlines(df['date'], 0, levels, color="tab:red") # The vertical stems.
ax.plot( # Baseline and markers on it.
df['date'],
np.zeros_like(df['date']),
"-o",
color="k",
markerfacecolor="w"
)
# annotate lines
for d, l, r in zip(df['date'], levels, df['event']):
lines = r.split(' ')
line1 = ' '.join(lines[:len(lines)//2])
line2 = ' '.join(lines[len(lines)//2:])
ax.annotate(
line1 + '\n' + line2,
xy=(d, l),
xytext=(-5, np.sign(l) * 15), # Increase the y-offset for more vertical space
textcoords="offset points",
horizontalalignment="center",
verticalalignment="bottom",
fontsize=8 # Adjust the font size to fit the annotations within the plot
)
ax.text(0.5, 1.3, "PLOT PLOT PLOT", transform=ax.transAxes,
fontsize=16, fontweight='bold', ha='center')
ax.get_yaxis().set_visible(False) # Remove the y-axis
ax.spines['left'].set_visible(False)
ax.spines['right'].set_visible(False)
ax.spines['top'].set_visible(False)
plt.show()
</code></pre>
<p><a href="https://i.sstatic.net/S185U.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/S185U.png" alt="Here is the image" /></a></p>
|
<python><matplotlib><time-series><timeline>
|
2024-03-04 16:06:36
| 1
| 2,293
|
Avi
|
78,102,584
| 8,541,953
|
Fail to restore conda environment
|
<p>I have installed some dependencies that broke my conda environment. With the environment activated, I run <code>conda list --revisions</code> and get a list. Based on the dates, I would like to roll back to revision 10.</p>
<p>Following the instructions in the conda documentation, I am trying to restore the environment to that revision with:
<code>conda install --rev 10</code></p>
<p>When doing so, I am getting this error:</p>
<pre><code>(icca) ➜ ~ conda install --rev 10
PackagesNotFoundError: The following packages are missing from the target environment:
- conda-forge/noarch::packaging==23.0=pyhd8ed1ab_0
- conda-forge/osx-arm64::libcurl==7.87.0=h9049daf_0
- conda-forge/osx-arm64::libbrotlicommon==1.0.9=h1a8c8d9_8
- conda-forge/noarch::jinja2==3.1.2=pyhd8ed1ab_1
- conda-forge/noarch::python-dateutil==2.8.2=pyhd8ed1ab_0
- conda-forge/noarch::ipython_genutils==0.2.0=py_1
- conda-forge/osx-arm64::matplotlib-base==3.6.3=py39h35e9e80_0
- conda-forge/noarch::json5==0.9.5=pyh9f0ad1d_0
- conda-forge/noarch::bleach==6.0.0=pyhd8ed1ab_0
- conda-forge/noarch::pycparser==2.21=pyhd8ed1ab_0
- conda-forge/osx-arm64::lz4-c==1.9.4=hb7217d7_0
- conda-forge/noarch::asttokens==2.2.1=pyhd8ed1ab_0
- conda-forge/osx-arm64::libblas==3.9.0=16_osxarm64_openblas
- anaconda/osx-arm64::bottleneck==1.3.4=py39heec5a64_0
- conda-forge/noarch::pyopenssl==23.0.0=pyhd8ed1ab_0
- conda-forge/noarch::pkgutil-resolve-name==1.3.10=pyhd8ed1ab_0
- anaconda/osx-arm64::numpy-base==1.22.3=py39h974a1f5_0
- conda-forge/osx-arm64::llvm-openmp==15.0.7=h7cfbb63_0
- conda-forge/osx-arm64::libopenblas==0.3.21=openmp_hc731615_3
- conda-forge/osx-arm64::liblapack==3.9.0=16_osxarm64_openblas
- conda-forge/osx-arm64::krb5==1.20.1=h69eda48_0
- conda-forge/noarch::executing==1.2.0=pyhd8ed1ab_0
- conda-forge/noarch::importlib-metadata==6.0.0=pyha770c72_0
- conda-forge/noarch::font-ttf-ubuntu==0.83=hab24e00_0
- conda-forge/osx-arm64::contourpy==1.0.7=py39haaf3ac1_0
- anaconda/osx-arm64::blas==1.0=openblas
- conda-forge/noarch::idna==3.4=pyhd8ed1ab_0
- conda-forge/osx-arm64::libwebp-base==1.2.4=h57fd34a_0
- conda-forge/osx-arm64::libdeflate==1.17=h1a8c8d9_0
- conda-forge/osx-arm64::unicodedata2==15.0.0=py39h02fc5c5_0
- conda-forge/noarch::parso==0.8.3=pyhd8ed1ab_0
- conda-forge/noarch::backports.functools_lru_cache==1.6.4=pyhd8ed1ab_0
- conda-forge/noarch::nest-asyncio==1.5.6=pyhd8ed1ab_0
- conda-forge/osx-arm64::tornado==6.1=py39h5161555_1
- conda-forge/noarch::python-fastjsonschema==2.16.2=pyhd8ed1ab_0
- conda-forge/noarch::wcwidth==0.2.6=pyhd8ed1ab_0
- conda-forge/noarch::beautifulsoup4==4.11.2=pyha770c72_0
- conda-forge/noarch::pyparsing==3.0.9=pyhd8ed1ab_0
- conda-forge/noarch::terminado==0.17.1=pyhd1c38e8_0
- conda-forge/noarch::entrypoints==0.4=pyhd8ed1ab_0
- conda-forge/noarch::ptyprocess==0.7.0=pyhd3deb0d_0
- conda-forge/noarch::anyio==3.6.2=pyhd8ed1ab_0
- conda-forge/noarch::appnope==0.1.3=pyhd8ed1ab_0
- conda-forge/osx-arm64::gettext==0.21.1=h0186832_0
- conda-forge/noarch::pandocfilters==1.5.0=pyhd8ed1ab_0
- anaconda/osx-arm64::scipy==1.7.3=py39h2f0f56f_0
- conda-forge/osx-arm64::xorg-libxau==1.0.9=h27ca646_0
- conda-forge/osx-arm64::freetype==2.12.1=hd633e50_1
- conda-forge/osx-arm64::pandoc==2.19.2=hce30654_1
- conda-forge/noarch::typing-extensions==4.4.0=hd8ed1ab_0
- conda-forge/noarch::pytz==2022.7.1=pyhd8ed1ab_0
- conda-forge/osx-arm64::matplotlib==3.6.3=py39hdf13c20_0
- conda-forge/noarch::ipympl==0.9.3=pyhd8ed1ab_0
- conda-forge/noarch::font-ttf-dejavu-sans-mono==2.37=hab24e00_0
- anaconda/noarch::threadpoolctl==2.2.0=pyh0d69192_0
- conda-forge/osx-arm64::libkml==1.3.0=h41464e4_1015
- conda-forge/noarch::tomli==2.0.1=pyhd8ed1ab_0
- conda-forge/osx-arm64::snappy==1.1.9=h17c5cce_2
- conda-forge/osx-arm64::libjpeg-turbo==2.1.4=h1a8c8d9_0
- conda-forge/osx-arm64::libbrotlienc==1.0.9=h1a8c8d9_8
- conda-forge/noarch::decorator==5.1.1=pyhd8ed1ab_0
- conda-forge/noarch::fonts-conda-ecosystem==1=0
- conda-forge/noarch::nbformat==5.7.3=pyhd8ed1ab_0
- conda-forge/noarch::mistune==2.0.4=pyhd8ed1ab_0
- conda-forge/noarch::defusedxml==0.7.1=pyhd8ed1ab_0
- conda-forge/noarch::jupyterlab_server==2.19.0=pyhd8ed1ab_0
- conda-forge/noarch::send2trash==1.8.0=pyhd8ed1ab_0
- conda-forge/noarch::nbconvert-pandoc==7.2.9=pyhd8ed1ab_0
- conda-forge/osx-arm64::curl==7.87.0=h9049daf_0
- anaconda/osx-arm64::numexpr==2.8.1=py39h144ceef_2
- conda-forge/noarch::babel==2.11.0=pyhd8ed1ab_0
- conda-forge/osx-arm64::jpeg==9e=he4db4b2_2
- conda-forge/noarch::urllib3==1.26.14=pyhd8ed1ab_0
- conda-forge/noarch::prompt-toolkit==3.0.36=pyha770c72_0
- conda-forge/noarch::pure_eval==0.2.2=pyhd8ed1ab_0
- conda-forge/osx-arm64::lcms2==2.14=h481adae_1
- conda-forge/osx-arm64::freexl==1.0.6=h1a8c8d9_1
- conda-forge/noarch::zipp==3.12.0=pyhd8ed1ab_0
- conda-forge/noarch::webencodings==0.5.1=py_1
- conda-forge/noarch::jupyter_client==7.3.4=pyhd8ed1ab_0
- conda-forge/osx-arm64::zstd==1.5.2=hf913c23_6
- conda-forge/osx-arm64::pillow==9.4.0=py39h8bbe137_0
- conda-forge/osx-arm64::nspr==4.35=hb7217d7_0
- conda-forge/osx-arm64::fontconfig==2.14.2=h82840c6_0
- conda-forge/osx-arm64::libcblas==3.9.0=16_osxarm64_openblas
- conda-forge/noarch::jupyter_server==1.23.5=pyhd8ed1ab_0
- conda-forge/noarch::sniffio==1.3.0=pyhd8ed1ab_0
- conda-forge/osx-arm64::libev==4.33=h642e427_1
- conda-forge/noarch::backcall==0.2.0=pyh9f0ad1d_0
- conda-forge/noarch::font-ttf-source-code-pro==2.038=h77eed37_0
- conda-forge/noarch::ipython==8.9.0=pyhd1c38e8_0
- conda-forge/osx-arm64::xorg-libxdmcp==1.1.3=h27ca646_0
- conda-forge/osx-arm64::c-ares==1.18.1=h3422bc3_0
- conda-forge/osx-arm64::pixman==0.40.0=h27ca646_0
- conda-forge/osx-arm64::blosc==1.21.3=h1d6ff8b_0
- conda-forge/osx-arm64::libedit==3.1.20191231=hc8eb9b7_2
- conda-forge/osx-arm64::libgfortran5==11.3.0=hdaf2cc0_27
- conda-forge/noarch::nbclient==0.7.2=pyhd8ed1ab_0
- conda-forge/noarch::nbconvert==7.2.9=pyhd8ed1ab_0
- conda-forge/noarch::backports==1.0=pyhd8ed1ab_3
- conda-forge/noarch::pickleshare==0.7.5=py_1003
- conda-forge/noarch::font-ttf-inconsolata==3.000=h77eed37_0
- conda-forge/osx-arm64::libbrotlidec==1.0.9=h1a8c8d9_8
- conda-forge/noarch::six==1.16.0=pyh6c4a22f_0
- conda-forge/noarch::stack_data==0.6.2=pyhd8ed1ab_0
- conda-forge/noarch::jupyterlab_pygments==0.2.2=pyhd8ed1ab_0
- conda-forge/osx-arm64::zeromq==4.3.4=hbdafb3b_1
- conda-forge/osx-arm64::bzip2==1.0.8=h3422bc3_4
- conda-forge/osx-arm64::ipykernel==5.5.5=py39h32adebf_0
- conda-forge/noarch::affine==2.4.0=pyhd8ed1ab_0
- conda-forge/noarch::click-plugins==1.1.1=py_0
- conda-forge/osx-arm64::nss==3.78=h1483a63_0
- conda-forge/osx-arm64::kiwisolver==1.4.4=py39haaf3ac1_1
- conda-forge/osx-arm64::brotli==1.0.9=h1a8c8d9_8
- conda-forge/osx-arm64::libxcb==1.13=h9b22ae9_1004
- conda-forge/osx-arm64::libgfortran==5.0.0=11_3_0_hd922786_27
- conda-forge/osx-arm64::pthread-stubs==0.4=h27ca646_1001
- conda-forge/noarch::pexpect==4.8.0=pyh1a96a4e_2
- conda-forge/osx-arm64::libpng==1.6.39=h76d750c_0
- conda-forge/noarch::notebook-shim==0.2.2=pyhd8ed1ab_0
- conda-forge/noarch::platformdirs==2.6.2=pyhd8ed1ab_0
- conda-forge/noarch::websocket-client==1.5.0=pyhd8ed1ab_0
- conda-forge/osx-arm64::libssh2==1.10.0=h7a5bd25_3
- conda-forge/noarch::attrs==22.2.0=pyh71513ae_0
- conda-forge/osx-arm64::libglib==2.74.1=h4646484_1
- conda-forge/noarch::jedi==0.18.2=pyhd8ed1ab_0
- conda-forge/osx-arm64::json-c==0.16=hc449e50_0
- conda-forge/noarch::nbconvert-core==7.2.9=pyhd8ed1ab_0
- conda-forge/osx-arm64::fonttools==4.38.0=py39h02fc5c5_1
- conda-forge/noarch::soupsieve==2.3.2.post1=pyhd8ed1ab_0
- conda-forge/osx-arm64::poppler==22.12.0=h9564b9f_1
- conda-forge/osx-arm64::brotli-bin==1.0.9=h1a8c8d9_8
- conda-forge/noarch::pygments==2.14.0=pyhd8ed1ab_0
- conda-forge/noarch::matplotlib-inline==0.1.6=pyhd8ed1ab_0
- conda-forge/noarch::jsonschema==4.17.3=pyhd8ed1ab_0
- conda-forge/noarch::snuggs==1.4.7=py_0
- conda-forge/noarch::typing_extensions==4.4.0=pyha770c72_0
- conda-forge/osx-arm64::libnghttp2==1.51.0=hae82a92_0
- conda-forge/osx-arm64::argon2-cffi==21.1.0=py39h5161555_0
- conda-forge/osx-arm64::brotlipy==0.7.0=py39h5161555_1001
- conda-forge/noarch::requests==2.28.2=pyhd8ed1ab_0
- conda-forge/osx-arm64::expat==2.5.0=hb7217d7_0
- conda-forge/noarch::prometheus_client==0.16.0=pyhd8ed1ab_0
- conda-forge/noarch::click==8.1.3=unix_pyhd8ed1ab_2
- conda-forge/osx-arm64::cairo==1.16.0=h73a0509_1014
- conda-forge/noarch::traitlets==5.9.0=pyhd8ed1ab_0
- conda-forge/noarch::notebook==6.5.2=pyha770c72_1
- conda-forge/osx-arm64::libsodium==1.0.18=h27ca646_1
- conda-forge/noarch::charset-normalizer==2.1.1=pyhd8ed1ab_0
- conda-forge/noarch::importlib_resources==5.10.2=pyhd8ed1ab_0
- conda-forge/osx-arm64::giflib==5.2.1=h27ca646_2
- conda-forge/noarch::nbclassic==0.4.8=pyhd8ed1ab_0
- conda-forge/noarch::poppler-data==0.4.11=hd8ed1ab_0
- conda-forge/noarch::cycler==0.11.0=pyhd8ed1ab_0
- conda-forge/noarch::pysocks==1.7.1=pyha2e5f31_6
- conda-forge/noarch::cligj==0.7.2=pyhd8ed1ab_1
- conda-forge/noarch::tinycss2==1.2.1=pyhd8ed1ab_0
- conda-forge/noarch::munkres==1.1.4=pyh9f0ad1d_0
- anaconda/noarch::joblib==1.1.0=pyhd3eb1b0_0
- conda-forge/noarch::fonts-conda-forge==1=0
</code></pre>
<p>Why is it failing to restore? I am not sure how to proceed from here.</p>
|
<python><anaconda><conda><miniconda>
|
2024-03-04 16:00:02
| 0
| 1,103
|
GCGM
|
78,102,413
| 9,462,829
|
Change server header on all endpoints (Flask + Nginx + Gunicorn)
|
<p>I'm working on a Flask app that uses gunicorn and nginx and should hide its server header. So far I've only managed to do that for the homepage, like this:</p>
<p><code>gunicorn.conf.py</code></p>
<pre><code>import gunicorn
gunicorn.SERVER = ''
</code></pre>
<p><code>nginx.conf</code></p>
<pre><code>events {
worker_connections 1024;
}
http{
include /etc/nginx/mime.types;
# include /etc/nginx/conf.d/*.conf;
server{
#server_tokens off;
proxy_pass_header Server; # get server from gunicorn
# let the browsers know that we only accept HTTPS
add_header Strict-Transport-Security max-age=2592000;
listen 80;
add_header Content-Security-Policy $CSPheader;
gzip on;
location / {
proxy_pass http://127.0.0.1:5000;
proxy_set_header Host $host;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
client_max_body_size 5M;
}
location /static/ {
alias /home/app/static/;
proxy_set_header Cookie $http_cookie;
add_header X-Content-Type-Options nosniff;
}
}
}
</code></pre>
<p>So, in my "/" page I'm getting</p>
<p><a href="https://i.sstatic.net/lHjx8.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/lHjx8.png" alt="enter image description here" /></a></p>
<p>But elsewhere I'm displaying my server:</p>
<p><a href="https://i.sstatic.net/8dyLD.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/8dyLD.png" alt="enter image description here" /></a></p>
<p>I'm not sure about how communication between nginx and gunicorn works, but I seem to be having a similar problem to <a href="https://stackoverflow.com/questions/51138814/nginx-not-returning-cache-control-header-from-upstream-gunicorn">this post</a>, but I'm not sure how to use this information.</p>
<p>Any help to actually hide my server header would be really appreciated. Thanks!</p>
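<p>For completeness, my current understanding of why <code>/static/</code> still shows the header: those responses are served by nginx itself, so gunicorn's empty <code>Server</code> header never applies there, and <code>server_tokens off;</code> would only strip the version number, not the header itself. The sketch below is unverified and assumes the third-party headers-more module is available:</p>

```nginx
# Hypothetical sketch, assuming nginx was built with (or can load)
# the headers-more module:
load_module modules/ngx_http_headers_more_filter_module.so;

http {
    server {
        # Removes the Server header from every response nginx emits,
        # including /static/ and proxied responses.
        more_clear_headers Server;
    }
}
```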
|
<python><flask><nginx><gunicorn>
|
2024-03-04 15:30:50
| 0
| 6,148
|
Juan C
|
78,102,355
| 1,251,549
|
How perform task in airflow that executes regardless of failed tasks but fail the DAG when error appeared in previous tasks?
|
<p>The scenario is simple: an airflow DAG starts a cluster, submits a job and terminates the cluster. The problem is that the cluster must always be terminated, even when a previous task failed. If I add <code>TriggerRule.ALL_DONE</code>, I guarantee that the cluster is terminated, but then the DAG is marked successful. If I change the <code>TriggerRule</code>, the DAG fails but the cluster keeps running.</p>
<p>So is there a way to tell airflow: execute this task regardless of whether previous tasks failed, but still mark the DAG as failed?</p>
|
<python><directed-acyclic-graphs><airflow-2.x>
|
2024-03-04 15:25:08
| 0
| 33,944
|
Cherry
|
78,102,296
| 13,086,128
|
How to group dataframe rows into list in polars group_by
|
<pre class="lang-py prettyprint-override"><code>import polars as pl
df = pl.DataFrame({
"Letter": ["A", "A", "B", "B", "B", "C", "C", "D", "D", "E"],
"Value": [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
})
</code></pre>
<p>I want to group by <code>Letter</code> and collect the corresponding <code>Value</code> entries in a list.</p>
<p>Related Pandas question: <a href="https://stackoverflow.com/questions/22219004/how-to-group-dataframe-rows-into-list-in-pandas-groupby">How to group dataframe rows into list in pandas groupby</a></p>
<p>I know the pandas code will not work here:</p>
<pre class="lang-py prettyprint-override"><code>df.group_by("a")["b"].apply(list)
</code></pre>
<blockquote>
<p>TypeError: 'GroupBy' object is not subscriptable</p>
</blockquote>
<p>The expected output is:</p>
<pre><code>┌────────┬───────────┐
│ Letter ┆ Value │
│ --- ┆ --- │
│ str ┆ list[i64] │
╞════════╪═══════════╡
│ A ┆ [1, 2] │
│ B ┆ [3, 4, 5] │
│ C ┆ [6, 7] │
│ D ┆ [8, 9] │
│ E ┆ [10] │
└────────┴───────────┘
</code></pre>
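<p>Spelled out in plain Python, the grouping I'm after is just:</p>

```python
from collections import defaultdict

letters = ["A", "A", "B", "B", "B", "C", "C", "D", "D", "E"]
values = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]

# Collect each letter's values into a list, preserving row order.
groups = defaultdict(list)
for letter, value in zip(letters, values):
    groups[letter].append(value)

print(dict(groups))
# {'A': [1, 2], 'B': [3, 4, 5], 'C': [6, 7], 'D': [8, 9], 'E': [10]}
```

<p>If I understand the polars API correctly, the equivalent would be <code>df.group_by("Letter", maintain_order=True).agg(pl.col("Value"))</code>, though I haven't verified the ordering behaviour.</p>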
|
<python><dataframe><python-polars>
|
2024-03-04 15:17:10
| 3
| 30,560
|
Talha Tayyab
|
78,101,969
| 2,256,700
|
Require decorated function to accept argument matching bound `TypeVar` without narrowing to that type
|
<p>If I define my decorator like this</p>
<pre class="lang-py prettyprint-override"><code>T = TypeVar('T', bound=Event)
def register1(evtype: Type[T]) -> Callable[[Callable[[T], None]], Callable[[T], None]]:
def decorator(handler):
# register handler for event type
return handler
return decorator
</code></pre>
<p>I get a proper error if I use it on the wrong function:</p>
<pre class="lang-py prettyprint-override"><code>class A(Event):
pass
class B(Event):
pass
@register1(A) # Argument of type "(ev: B) -> None" cannot be assigned to parameter of type "(A) -> None"
def handler1_1(ev: B):
pass
</code></pre>
<p>However, it does not work if I apply the decorator multiple times:</p>
<pre class="lang-py prettyprint-override"><code>@register1(A) # Argument of type "(B) -> None" cannot be assigned to parameter of type "(A) -> None"
@register1(B)
def handler1_3(ev: A|B):
pass
</code></pre>
<p>I kind of want the decorators to build up a <code>Union</code> of allowed/required argument types.</p>
<p>I think <code>ParamSpec</code> is the way to solve it, but how can I use <code>ParamSpec</code> to not overwrite the argument type but also require that the argument type matches the type that is in the decorator argument?</p>
<p>Using <code>ParamSpec</code> does not result in any type error:</p>
<pre class="lang-py prettyprint-override"><code>P = ParamSpec("P")
def register2(evtype: Type[T]) -> Callable[[Callable[P, None]], Callable[P, None]]:
def decorator(handler):
# ...
return handler
return decorator
@register2(A) # This should be an error
def handler2_1(ev: B):
pass
</code></pre>
<p>If I add another <code>TypeVar</code> and use a <code>Union</code>, it does work for the double-decorated and even triple-decorated function, but not for single-decorated functions.</p>
<pre class="lang-py prettyprint-override"><code>T2 = TypeVar('T2')
def register3(evtype: Type[T]) -> Callable[[Callable[[Union[T,T2]], None]], Callable[[Union[T,T2]], None]]:
def decorator(handler):
# ...
return handler
return decorator
# Expected error:
@register3(A) # Argument of type "(ev: B) -> None" cannot be assigned to parameter of type "(A | T2@register3) -> None"
def handler3_1(ev: B):
pass
# Wrong error:
@register3(A) # Argument of type "(ev: A) -> None" cannot be assigned to parameter of type "(A | T2@register3) -> None"
def handler3_2(ev: A):
pass
# Works fine
@register3(A)
@register3(B)
def handler3_3(ev: A|B):
pass
</code></pre>
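<p>For experimentation, here is the first decorator as a self-contained script. All the errors discussed are static-only, so at runtime every registration succeeds regardless of the annotation (the <code>HANDLERS</code> registry is an assumption for illustration):</p>

```python
from typing import Callable, Type, TypeVar

class Event: ...
class A(Event): ...
class B(Event): ...

T = TypeVar("T", bound=Event)

# Hypothetical registry, just to make the decorator observable at runtime.
HANDLERS: dict[type, list[Callable]] = {}

def register1(evtype: Type[T]) -> Callable[[Callable[[T], None]], Callable[[T], None]]:
    def decorator(handler: Callable[[T], None]) -> Callable[[T], None]:
        HANDLERS.setdefault(evtype, []).append(handler)
        return handler
    return decorator

@register1(A)
def handler1_1(ev: A) -> None:
    pass
```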
<p>While writing this question, I came closer and closer to the solution, and I will provide my own solution in an answer.</p>
<p>However, I'm interested if there are better ways to solve this.</p>
|
<python><mypy><python-decorators><python-typing><pyright>
|
2024-03-04 14:29:06
| 2
| 989
|
MaPePeR
|
78,101,835
| 12,164,800
|
How to find all function calls a defined function makes? (including recursive and futher down the stack calls)
|
<p>I'm using Python's AST module and am stuck on a given problem (it may be that AST isn't the right tool for this).</p>
<p>I want to determine if a given function (let's say "print" as an example) is called, by a defined function.</p>
<p>For example:</p>
<p>file_one.py</p>
<pre class="lang-py prettyprint-override"><code>def some_function():
print("hello!")
</code></pre>
<p>file_two.py</p>
<pre class="lang-py prettyprint-override"><code>from file_one import some_function
def one():
print("hi!")
def two():
one()
def three():
some_function()
def four():
three()
</code></pre>
<p>How can I parse <code>file_two.py</code> and figure out which functions (in this case all of them) call <code>print</code>?</p>
<p>At the moment, I have something like this:</p>
<pre class="lang-py prettyprint-override"><code>import ast
class Visitor(ast.NodeVisitor):
def visit_FunctionDef(self, node: ast.AST):
for child in ast.walk(node):
            if isinstance(child, ast.Call) and child.func.id == "print":
print(f"Found a function that calls print!")
if __name__ == "__main__":
with open("file_two.py") as file:
tree = ast.parse(file.read())
Visitor().visit(tree)
</code></pre>
<p>But this only works for the function <code>file_two.one</code> and no others.</p>
|
<python><abstract-syntax-tree>
|
2024-03-04 14:09:27
| 0
| 457
|
houseofleft
|
78,101,812
| 10,499,034
|
How to change internal grid margin in matplotlib scatterplot
|
<p>I need to be able to add a margin below 0 on the Y axis, to make the second figure below look like the first figure, with whitespace before 0. I have tried setting <code>use_sticky_edges</code> to false and have tried:</p>
<pre><code>plt.margins(0.2,0.2)
</code></pre>
<p>That allows me to change the space on the X axis but the Y axis won't change. How can I make this work?</p>
<p>This is what I have:</p>
<p><a href="https://i.sstatic.net/r4Jli.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/r4Jli.png" alt="enter image description here" /></a></p>
<p>This is what I need and how it normally does by default but for some reason it did not in the plot above:
<a href="https://i.sstatic.net/00wH8.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/00wH8.png" alt="enter image description here" /></a></p>
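<p>A sketch of what usually fixes this: <code>margins()</code> only takes effect while autoscaling is active, so an explicit <code>set_ylim</code>/<code>plt.ylim</code> earlier in the script, or an artist with sticky edges, can silently pin the lower bound at 0. Re-enabling autoscale after setting the margin makes the y padding appear:</p>

```python
import matplotlib
matplotlib.use("Agg")  # non-interactive backend, just for this sketch
import matplotlib.pyplot as plt

fig, ax = plt.subplots()
ax.scatter([1, 2, 3, 4], [0, 3, 6, 10])

ax.use_sticky_edges = False   # let margins win over sticky artists
ax.margins(x=0.2, y=0.2)      # request padding on both axes
ax.autoscale()                # must be on, or margins are ignored
```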
|
<python><matplotlib>
|
2024-03-04 14:06:33
| 1
| 792
|
Jamie
|
78,101,782
| 3,114,229
|
python file from task scheduler as system
|
<p>I have created a python script to monitor the current version of MS Edge. When run, the script creates a .txt file called previous_version.txt and writes the version number in the text file. This works well when I run the file manually.</p>
<p>Now I'm trying to add this script to Task Scheduler as a system task, so that it runs when I'm not logged in (it's supposed to run remotely on a server PC).
This, however, does not work:
Task Scheduler claims it completed the task successfully, but the run ends with return code 2147942401 and the .txt file is never created.</p>
<p>Please see attached pictures below.</p>
<p><a href="https://i.sstatic.net/yICFb.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/yICFb.png" alt="enter image description here" /></a></p>
<p><a href="https://i.sstatic.net/z9hoP.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/z9hoP.png" alt="enter image description here" /></a></p>
<p>My python script:</p>
<pre><code>import os
import win32api
def local_has_changed():
filepath = r"C:\Program Files (x86)\Microsoft\Edge\Application\msedge.exe"
info = win32api.GetFileVersionInfo(filepath, "\\")
ms = info["FileVersionMS"]
new_majorversion = str(win32api.HIWORD(ms))
path = os.path.dirname(__file__)
file = str(path)+"\\previous_version.txt"
if not os.path.exists(file):
open(file, 'w+').close()
filehandle = open(file, 'r')
old_majorversion = str(filehandle.read())
filehandle.close()
if new_majorversion != old_majorversion:
print('Version changed from '+old_majorversion+' to '+new_majorversion)
filehandle = open(file, 'w')
filehandle.write(str(new_majorversion))
filehandle.close()
local_has_changed()
</code></pre>
<p>The file runs well manually on both the remote server and my local PC, but Task Scheduler fails on both. I'm running Windows 11.</p>
|
<python><scheduled-tasks><pywin32>
|
2024-03-04 14:01:51
| 2
| 419
|
Martin
|
78,101,699
| 103,682
|
AttributeError: module 'builtins' has no attribute 'RemovedIn20Warning'
|
<p>I am trying to run tests and treat certain types of warnings as errors. No matter which warning I configure in pytest.ini, I get the same exception; just the type changes. What do I need to do to get past this exception?</p>
<pre><code>ERROR: while parsing the following warning configuration:
error::RemovedIn20Warning
This error occurred:
Traceback (most recent call last):
File "/Users/edv/Library/Caches/pypoetry/virtualenvs/api-gX2kb1zU-py3.11/lib/python3.11/site-packages/_pytest/config/__init__.py", line 1761, in parse_warning_filter
category: Type[Warning] = _resolve_warning_category(category_)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/edv/Library/Caches/pypoetry/virtualenvs/api-gX2kb1zU-py3.11/lib/python3.11/site-packages/_pytest/config/__init__.py", line 1800, in _resolve_warning_category
cat = getattr(m, klass)
^^^^^^^^^^^^^^^^^
AttributeError: module 'builtins' has no attribute 'RemovedIn20Warning'
</code></pre>
<p>pytest.ini</p>
<pre><code>[pytest]
filterwarnings =
always::DeprecationWarning
#ignore::pytest.PytestAssertRewriteWarning
#ignore::pluggy.PluggyTeardownRaisedWarning
#ignore::PytestUnknownMarkWarning
#ignore::PydanticDeprecatedSince20
#ignore::PytestAssertRewriteWarning
ignore:.*Setting backref.*
error::RemovedIn20Warning
</code></pre>
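<p>The traceback explains the failure: pytest looks a bare class name up in <code>builtins</code>, but if the filter names a full dotted module path, pytest imports that module and resolves the class there. Assuming <code>RemovedIn20Warning</code> is SQLAlchemy's (which is consistent with the <code>Setting backref</code> filter above), a sketch of the fixed line:</p>

```ini
[pytest]
filterwarnings =
    always::DeprecationWarning
    ignore:.*Setting backref.*
    error::sqlalchemy.exc.RemovedIn20Warning
```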
|
<python><pytest><python-poetry>
|
2024-03-04 13:47:22
| 2
| 17,645
|
epitka
|
78,101,282
| 19,838,445
|
pydantic validate all Literal fields
|
<p>I have multiple pydantic 2.x models, and instead of applying validation to each literal field on each model like this:</p>
<pre class="lang-py prettyprint-override"><code>class MyModel(BaseModel):
name: str = ""
description: Optional[str] = None
sex: Literal["male", "female"]
@field_validator("sex", mode="before")
@classmethod
def strip_sex(cls, v: Any, info: ValidationInfo):
if isinstance(v, str):
return v.strip()
return v
</code></pre>
<p>I want to use approach similar to this <a href="https://docs.pydantic.dev/latest/concepts/validators/#annotated-validators" rel="nofollow noreferrer">Annotated Validators</a></p>
<p>How can I achieve automatic validation on all <code>Literal</code> fields?</p>
<pre class="lang-py prettyprint-override"><code>def strip_literals(v: Any) -> Any:
if isinstance(v, str):
return v.strip()
return v
# doesn't work
# LiteralType = TypeVar("LiteralType", bound=Literal)
# LiteralStripped = Annotated[Literal, BeforeValidator(strip_literals)]
class MyModel(BaseModel):
name: str = ""
description: Optional[str] = None
sex: LiteralStripped["male", "female"]
</code></pre>
<p>I want something like above, but cannot actually define proper validation handlers on literals.</p>
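<p>One possibility (a sketch, assuming pydantic 2.x): instead of annotating each field, put a single <code>model_validator(mode="before")</code> on a shared base class and use <code>typing.get_origin</code> to strip only the fields whose annotation is a <code>Literal</code>:</p>

```python
from typing import Any, Literal, Optional, get_origin
from pydantic import BaseModel, model_validator

class StripLiterals(BaseModel):
    # Runs on the raw input dict before field validation; cls.model_fields
    # lets us target exactly the Literal-annotated fields.
    @model_validator(mode="before")
    @classmethod
    def _strip_literal_fields(cls, data: Any) -> Any:
        if isinstance(data, dict):
            for name, field in cls.model_fields.items():
                value = data.get(name)
                if get_origin(field.annotation) is Literal and isinstance(value, str):
                    data[name] = value.strip()
        return data

class MyModel(StripLiterals):
    name: str = ""
    description: Optional[str] = None
    sex: Literal["male", "female"]
```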
|
<python><python-typing><pydantic><literals>
|
2024-03-04 12:37:17
| 1
| 720
|
GopherM
|
78,101,227
| 10,268,534
|
Why can't I type a `list` parameter in a class with an existing `list` method?
|
<p>I'm working with Python 3.10 and I have this class (which I have simplified):</p>
<pre class="lang-py prettyprint-override"><code>class Greetings:
def list(self) -> list[str]:
return ['ab', 'cd']
def hello(self, names: list[str]) -> None:
for name in names:
print("Hello", name)
</code></pre>
<p>While testing it, I got this error:</p>
<pre><code>... in Greetings
def hello(self, names: list[str]) -> None:
E TypeError: 'function' object is not subscriptable
</code></pre>
<p>I know that the issue comes from my <code>list</code> method, which Python is trying to use in the typing of the <code>names</code> parameter. But I don't understand why this is happening, or whether it is an issue with the Python language. It is supposed that starting with Python 3.10 I can use <code>list</code> for typing instead of importing <code>List</code> from the <code>typing</code> module.</p>
<p>Any guess?</p>
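<p>For what it's worth, a sketch of the two usual workarounds: inside the class body, the name <code>list</code> is already bound to your method by the time <code>hello</code>'s annotations are evaluated, so either defer evaluation with a string annotation or reach for <code>builtins.list</code> explicitly:</p>

```python
import builtins

class Greetings:
    def list(self) -> "list[str]":  # string annotations are evaluated lazily
        return ["ab", "cd"]

    # builtins.list bypasses the method that shadows the builtin name
    def hello(self, names: builtins.list[str]) -> None:
        for name in names:
            print("Hello", name)

g = Greetings()
```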
|
<python><python-typing><python-3.10>
|
2024-03-04 12:29:13
| 1
| 478
|
pakobill
|
78,101,176
| 2,386,113
|
Accessing class present in a different folder in Python
|
<p>I want to access a configuration class in my script. The configuration class is present in a folder which is one level up and then in a separate folder. My folder and file organization is below:</p>
<pre><code>.
├── configs
│ ├── config.json
│ ├── config_manager.py
│ └── __init__.py
└── simulaton_scripts
└── test_script.py
</code></pre>
<p><strong>Requirement:</strong> My program is present in <code>test_script.py</code>, I want to access the <code>ConfigurationManager</code> class which is present in the file <code>config_manager.py</code></p>
<p>I tried to put an <code>__init__.py</code> file in the <code>configs</code> with the following lines:</p>
<pre><code>from . import configs
from .configs import *
</code></pre>
<p><strong>MWE:</strong></p>
<pre><code>import numpy as np
# Config
from configs.config_manager import ConfigurationManager as cfg
</code></pre>
<p>The above code is throwing an exception:</p>
<blockquote>
<p>Exception has occurred: ModuleNotFoundError (note: full exception trace is shown but execution is paused at: )
No module named 'configs'
File "D:\code\test_script.py", line 9, in (Current frame)
from configs.config_manager import ConfigurationManager as cfg
ModuleNotFoundError: No module named 'configs'</p>
</blockquote>
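<p>The usual cause: <code>configs</code> can only be imported if its parent directory (the project root) is on <code>sys.path</code>, and running <code>test_script.py</code> directly puts <code>simulaton_scripts</code> there instead. Either run it as a module from the root (<code>python -m simulaton_scripts.test_script</code>) or prepend the root yourself, as in this sketch. (The <code>from . import configs</code> lines are not needed in <code>__init__.py</code>; an empty file is enough to mark the package.)</p>

```python
import sys
from pathlib import Path

# The directory one level above simulaton_scripts/ is the project root
# that contains the configs/ package.
project_root = Path(__file__).resolve().parents[1]
if str(project_root) not in sys.path:
    sys.path.insert(0, str(project_root))

# Now this import can resolve:
# from configs.config_manager import ConfigurationManager as cfg
```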
|
<python>
|
2024-03-04 12:21:01
| 1
| 5,777
|
skm
|
78,101,036
| 20,954
|
WeakMethod in WeakSet
|
<p>I would like to use the functionality of a <a href="https://docs.python.org/3/library/weakref.html#weakref.WeakSet" rel="nofollow noreferrer"><code>weakref.WeakSet</code></a>, but in this set I would like to store bound methods, so I have to use <a href="https://docs.python.org/3/library/weakref.html#weakref.WeakMethod" rel="nofollow noreferrer"><code>weakref.WeakMethod</code></a>.</p>
<p>Here's a stripped down example:</p>
<pre class="lang-py prettyprint-override"><code>import weakref
class C:
def method(self): pass
ci = C()
print("weakMethod:", weakref.WeakMethod(ci.method))
print("s1:", weakref.WeakSet([C.method]))
print("s2:", weakref.WeakSet([ci.method]))
print("s3:", weakref.WeakSet([weakref.WeakMethod(ci.method)]))
</code></pre>
<p>which gives me (with Python 3.12.2)</p>
<pre><code>weakMethod: <weakref at 0x7f569e9308d0; to 'C' at 0x7f569e96dca0>
s1: {<weakref at 0x7f56ac12a520; to 'function' at 0x7f569e98ade0 (method)>}
s2: set()
s3: set()
</code></pre>
<p>As you can see in the first line, <code>WeakMethod</code> works as expected, but storing it in a <code>WeakSet</code> yields an empty <code>s3</code>.</p>
<p>Side note: <code>s2</code> is empty as expected, but storing a weak reference to an unbound method as in <code>s1</code> works.</p>
<p>Obvious workaround: Use a <code>set</code> instead of a <code>WeakSet</code> and replicate its functionality.</p>
<p><strong>Question:</strong> Is there a more elegant way of combining the functionality of <code>WeakSet</code> and <code>WeakMethod</code>?</p>
|
<python><python-3.x><weak-references>
|
2024-03-04 11:54:33
| 1
| 3,184
|
Tom Pohl
|
78,100,922
| 2,443,944
|
pytorch split array by list of indices
|
<p>I want to split a torch array by a list of indices.</p>
<p>For example say my input array is <code>torch.arange(20)</code></p>
<pre><code>tensor([ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17,
18, 19])
</code></pre>
<p>and my list of indices is <code>splits = [1,2,5,10]</code></p>
<p>Then my result would be:</p>
<pre><code>(tensor([0]),
tensor([1, 2]),
tensor([3, 4, 5, 6, 7]),
tensor([ 8, 9, 10, 11, 12, 13, 14, 15, 16, 17]))
</code></pre>
<p>Assume my input array is always long enough to be bigger than the sum of my list of indices.</p>
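<p>Treating <code>splits</code> as chunk lengths, one sketch converts them to cumulative cut points and slices there, dropping whatever lies past their sum. Shown here with NumPy; if torch is available, the same cut points should work via <code>torch.tensor_split(t[:18], [1, 3, 8])</code> or plain slicing:</p>

```python
import numpy as np

def split_by_sizes(arr, sizes):
    # Cumulative sums turn chunk lengths into cut points:
    # [1, 2, 5, 10] -> cuts at [1, 3, 8, 18]; slice off the tail first
    # so elements beyond sum(sizes) are dropped.
    cuts = np.cumsum(sizes)
    return np.split(arr[: cuts[-1]], cuts[:-1])
```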
|
<python><numpy><pytorch><numpy-slicing>
|
2024-03-04 11:30:09
| 4
| 2,227
|
piccolo
|
78,100,828
| 14,045,537
|
Does Per-User Retrieval support the open-source vector store ChromaDB?
|
<p>From the <code>langchain</code> documentation - <a href="https://python.langchain.com/docs/use_cases/question_answering/per_user" rel="nofollow noreferrer">Per-User Retrieval</a></p>
<blockquote>
<p>When building a retrieval app, you often have to build it with multiple users in mind. This means that you may be storing data not just for one user, but for many different users, and they should not be able to see eachother’s data. This means that you need to be able to configure your retrieval chain to only retrieve certain information.</p>
</blockquote>
<p>The documentation has an example implementation using <code>PineconeVectorStore</code>. Does chromadb support multiple users? If yes, can anyone help with an example of how the per-user retrieval can be implemented using the open source <code>ChromaDB</code>?</p>
|
<python><langchain><chromadb><vectorstore>
|
2024-03-04 11:14:09
| 2
| 3,025
|
Ailurophile
|
78,100,752
| 4,852,094
|
Using a type to define unpacked args for a method
|
<p>say I have a type:</p>
<pre><code>class MyType(Generic[T, U]):
def foo(self, *args: T) -> U:
pass
m = MyType[tuple[int, bool], str]
</code></pre>
<p>I want to be able to provide the args like:</p>
<pre><code>m.foo(1, True)
</code></pre>
<p>instead of</p>
<pre><code>m.foo((1, True))
</code></pre>
<p>Is there any way to do this using the generic, so that if I have a type parameter that represents multiple args I can unpack them in the method?</p>
<p>I see I can use:</p>
<pre><code>P = ParamSpec("P")
</code></pre>
<p>but then I'd want to put a type constraint on the P.args so that it was equal to T.</p>
|
<python><python-typing>
|
2024-03-04 11:01:34
| 1
| 3,507
|
Rob
|
78,100,683
| 12,436,050
|
Group by and aggregate the columns in pandas
|
<p>I have following dataframe.</p>
<pre><code> org_id org_name category org_status created_on modified_on location_id loc_status street_x city country
0 ORG-100023310 advanceCOR GmbH Industry,Pharmaceutical company ACTIVE 2016-10-18T15:38:34.322+02:00 2022-11-02T08:23:13.989+01:00 LOC-100052061 ACTIVE Fraunhoferstrasse 9a, Martinsried Planegg Germany
1 ORG-100023310 advanceCOR GmbH Industry,Pharmaceutical company ACTIVE 2016-10-18T15:38:34.322+02:00 2022-11-02T08:23:13.989+01:00 LOC-100032442 ACTIVE Lochhamer Strasse 29a, Martinsried Planegg Germany
</code></pre>
<p>I am trying to group this dataframe by the <code>org_id</code> column, and get all the unique values separated by ' | ' in the final dataframe.</p>
<p>The expected output is:</p>
<pre><code> org_id org_name category org_status created_on modified_on location_id loc_status street_x city country
0 ORG-100023310 advanceCOR GmbH Industry,Pharmaceutical company ACTIVE 2016-10-18T15:38:34.322+02:00 2022-11-02T08:23:13.989+01:00 LOC-100052061 | LOC-100032442 ACTIVE Fraunhoferstrasse 9a, Martinsried | Lochhamer Strasse 29a, Martinsried Planegg Germany
</code></pre>
<p>I have tried the following, but it produces an error.</p>
<pre><code>join_unique = lambda x: '|'.join(x.unique())
df2 = df.groupby(['org_id'], as_index=False).agg(join_unique)
</code></pre>
<p>Error:</p>
<pre><code>FutureWarning: ['loc_status', 'street_x', 'city', 'country'] did not aggregate successfully. If any error is raised this will raise in a future version of pandas. Drop these columns/ops to avoid this warning.
df2 = df.groupby(['org_id'], as_index=False).agg(join_unique)
</code></pre>
<p>How can I get the desired output? Any help is highly appreciated.</p>
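<p>A sketch of what tends to resolve this: cast each group's values to <code>str</code> before <code>unique()</code>/<code>join</code> so non-string columns can't make the aggregation fall over, and join with <code>' | '</code> to match the expected output (the frame here is trimmed to a few columns for brevity):</p>

```python
import pandas as pd

df = pd.DataFrame({
    "org_id": ["ORG-100023310", "ORG-100023310"],
    "location_id": ["LOC-100052061", "LOC-100032442"],
    "city": ["Planegg", "Planegg"],
})

# astype(str) first so non-string columns cannot break the join;
# ' | ' as the separator matches the expected output.
join_unique = lambda x: " | ".join(x.astype(str).unique())
out = df.groupby("org_id", as_index=False).agg(join_unique)
```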
|
<python><pandas><dataframe>
|
2024-03-04 10:49:13
| 2
| 1,495
|
rshar
|
78,100,661
| 10,967,961
|
Scraping hierarchical website in a specific category
|
<p>I am trying to scrape the following page: <a href="https://esco.ec.europa.eu/en/classification/skill_main" rel="nofollow noreferrer">https://esco.ec.europa.eu/en/classification/skill_main</a>. In particular, I would like to click on all plus buttons under S-skills until there are no more plus buttons to click, and then save the page source. Having found by inspecting the page that the plus button is under the CSS selector <code>.api_hierarchy.has-child-link</code>, I have tried the following:</p>
<pre><code>from selenium.common.exceptions import StaleElementReferenceException
driver = webdriver.Chrome(service=Service(ChromeDriverManager().install()))
driver.get("https://esco.ec.europa.eu/en/classification/skill_main")
driver.implicitly_wait(10)
wait = WebDriverWait(driver, 20)
# Define a function to click all expandable "+" buttons
def click_expand_buttons():
while True:
try:
# Find all expandable "+" buttons
expand_buttons = wait.until(EC.presence_of_all_elements_located(
(By.CSS_SELECTOR, ".api_hierarchy.has-child-link"))
)
# If no expandable buttons are found, we are done
if not expand_buttons:
break
# Click each expandable "+" button
for button in expand_buttons:
try:
driver.implicitly_wait(10)
driver.execute_script("arguments[0].click();", button)
# Wait for the dynamic content to load
time.sleep(1)
except StaleElementReferenceException:
# If the element is stale, we find the elements again
break
except StaleElementReferenceException:
continue
# Call the function to start clicking "+" buttons
click_expand_buttons()
html_source = driver.page_source
# Save the HTML to a file
with open("/Users/federiconutarelli/Desktop/escodata/expanded_esco_skills_page.html", "w", encoding="utf-8") as file:
file.write(html_source)
# Close the browser
driver.quit()
</code></pre>
<p>However, the code above keeps closing and opening the "+" of the first level. This is likely because, with my limited knowledge of scraping, I asked Selenium to click on the plus buttons for as long as plus buttons exist, and when the page refreshes back to its original state, the script keeps doing it indefinitely. Now my question is: how can I open all the plus signs (until there are no plus signs left) only for S-skills:</p>
<pre><code><a href="#overlayspin" class="change_right_content" data-version="ESCO dataset - v1.1.2" data-link="http://data.europa.eu/esco/skill/335228d2-297d-4e0e-a6ee-bc6a8dc110d9" data-id="84527">S - skills</a>
</code></pre>
<p>?</p>
|
<python><html><selenium-webdriver><web-scraping>
|
2024-03-04 10:45:40
| 1
| 653
|
Lusian
|
78,100,348
| 13,628,676
|
How to define a dict that maps types with callables that recieve arguments of that type?
|
<p>In Python, I have trouble defining a type. I would like something like this:</p>
<pre class="lang-py prettyprint-override"><code>def foo(x: int) -> int:
return x
def bar(x: str) -> str:
return x
# [...] MyDictType would be defined here...
my_dict: MyDictType = {
int: foo, # Good
str: bar, # Good
float: bar # WRONG! Type warning!
}
</code></pre>
<p>MyDictType should indicate that it is a dict that maps a type with a callable that receives a variable of that type. I was wondering about how to do it using generics... but I am not sure. I have tried this:</p>
<pre class="lang-py prettyprint-override"><code>T = TypeVar("T")
MyDictType = dict[Type[T], Callable[[T], T]]
</code></pre>
<p>However, the editor does not show the error I was expecting in the dict's third entry...</p>
<p>How should I define MyDictType?</p>
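<p>As far as I know this cannot be expressed with a plain <code>dict</code> annotation: in <code>dict[Type[T], Callable[[T], T]]</code> the <code>T</code> is solved once for the whole dict (or simply erased in the alias), not once per entry. A common workaround is to wrap the dict in a tiny class whose <code>add</code> method binds <code>T</code> per call, so each insertion is checked individually; a sketch (<code>TypedHandlerMap</code> is a made-up name):</p>

```python
from typing import Callable, TypeVar

T = TypeVar("T")

class TypedHandlerMap:
    def __init__(self) -> None:
        self._handlers: dict[type, Callable] = {}

    # T is bound per call: add(int, foo) checks foo against Callable[[int], int]
    def add(self, key: type[T], handler: Callable[[T], T]) -> None:
        self._handlers[key] = handler

    def dispatch(self, value: T) -> T:
        return self._handlers[type(value)](value)

m = TypedHandlerMap()
m.add(int, lambda x: x + 1)
m.add(str, lambda s: s.upper())
```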
|
<python><generics><python-typing>
|
2024-03-04 09:54:15
| 1
| 538
|
asmartin
|
78,100,152
| 2,205,880
|
How to disable introspection on graphene-python?
|
<p>The <a href="https://docs.graphene-python.org/en/latest/execution/queryvalidation/#disable-introspection" rel="nofollow noreferrer">docs</a> give this explanation:</p>
<pre><code>validation_errors = validate(
schema=schema.graphql_schema,
document_ast=parse('THE QUERY'),
rules=(
DisableIntrospection,
    ),
)
</code></pre>
<p>... but leave a "THE QUERY" placeholder for the <code>document_ast</code> parameter.</p>
<p>What should be used so that introspection is disabled for all queries?</p>
|
<python><graphql><graphene-python>
|
2024-03-04 09:20:11
| 1
| 341
|
user2205880
|
78,100,006
| 9,879,534
|
How to change CPU affinity on Linux with Python in real time?
|
<p>I know I can use <code>os.sched_setaffinity</code> to set affinity, but it seems that I can't use it to change affinity in realtime. Below is my code:</p>
<p>First, I have a cpp program</p>
<pre class="lang-cpp prettyprint-override"><code>// test.cpp
#include <iostream>
#include <thread>
#include <vector>
void workload() {
unsigned long long int sum = 0;
for (long long int i = 0; i < 50000000000; ++i) {
sum += i;
}
std::cout << "Sum: " << sum << std::endl;
}
int main() {
unsigned int num_threads = std::thread::hardware_concurrency();
std::cout << "Creating " << num_threads << " threads." << std::endl;
std::vector<std::thread> threads;
for (unsigned int i = 0; i < num_threads; ++i) {
threads.push_back(std::thread(workload));
}
for (auto& thread : threads) {
thread.join();
}
return 0;
}
</code></pre>
<p>Then, I compile it</p>
<pre><code>g++ test.cpp -O0
</code></pre>
<p>and I'll get an <code>a.out</code> file in the same directory.</p>
<p>Then, still in the same directory, I have a python file</p>
<pre class="lang-py prettyprint-override"><code># test.py
from subprocess import Popen
import os
import time
a = set(range(8, 16))
b = set(range(4, 12))
if __name__ == "__main__":
proc = Popen("./a.out", shell=True)
pid = proc.pid
print("pid", pid)
tic = time.time()
while True:
if time.time() - tic < 10:
os.sched_setaffinity(pid, a)
print("a", os.sched_getaffinity(pid))
else:
os.sched_setaffinity(pid, b)
print("b", os.sched_getaffinity(pid))
res = proc.poll()
if res is None:
time.sleep(1)
else:
break
</code></pre>
<p><code>a.out</code> runs for a long time, and my expectation for <code>test.py</code> is: in the first 10 seconds, I would see CPUs 8~15 busy while 0~7 are idle; after 10 seconds, I would see CPUs 4~11 busy while the others are idle. Observing with <code>htop</code>, the first 10 seconds indeed matched my expectation. However, after 10 seconds I could see <code>b {4, 5, 6, 7, 8, 9, 10, 11}</code> printed every second, as if I had successfully set the affinity; yet in <code>htop</code> I still found CPUs 8~15 busy while 0~7 were idle until the program stopped normally, which means I failed to set the affinity.</p>
<p>I'd like to ask why this happens. I read the <a href="https://www.man7.org/linux/man-pages/man2/sched_setaffinity.2.html" rel="nofollow noreferrer">manual</a> but didn't find anything mentioning it. And it seems that Python's <code>os.sched_setaffinity</code> doesn't return anything, so I can't see the result.</p>
<p>I'm using an AMD CPU, but I don't think that matters.</p>
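<p>Two things seem to be at play here, and both are worth hedging: with <code>shell=True</code>, <code>proc.pid</code> is the shell's PID rather than <code>a.out</code>'s (passing <code>["./a.out"]</code> without a shell avoids that), and on Linux <code>sched_setaffinity</code> acts on a single kernel task (thread), not on a whole process, so the worker threads <code>a.out</code> already spawned keep their old mask. A sketch that applies the mask to every thread listed under <code>/proc/&lt;pid&gt;/task</code>:</p>

```python
import os

def set_affinity_all_threads(pid: int, cpus: set[int]) -> None:
    # Each thread of the process has its own TID under /proc/<pid>/task;
    # sched_setaffinity must be applied to every one of them.
    for tid in os.listdir(f"/proc/{pid}/task"):
        try:
            os.sched_setaffinity(int(tid), cpus)
        except ProcessLookupError:
            pass  # thread exited while we were iterating
```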
|
<python><linux><affinity>
|
2024-03-04 08:55:43
| 1
| 365
|
Chuang Men
|
78,099,646
| 3,682,549
|
pydantic error: subclass of BaseModel expected (type=type_error.subclass; expected_class=BaseModel)
|
<p>For the code given below I am getting a pydantic error:</p>
<pre><code>from langchain.chains import LLMChain
from langchain.output_parsers import PydanticOutputParser
from langchain.prompts import PromptTemplate
from pydantic import BaseModel, Field
query = "Do you offer vegetarian food?"
class LineList(BaseModel):
lines: list[str] = Field(description="Lines of text")
class LineListOutputParser(PydanticOutputParser):
def __init__(self) -> None:
super().__init__(pydantic_object=LineList)
def parse(self, text: str) -> list[str]:
lines = text.strip().split("\n")
return lines
output_parser = LineListOutputParser()
QUERY_PROMPT = PromptTemplate(
input_variables=["question"],
template="""You are an AI language model assistant. Your task is to generate five
different versions of the given user question to retrieve relevant documents from a vector
database. By generating multiple perspectives on the user question, your goal is to help
the user overcome some of the limitations of the distance-based similarity search.
Provide these alternative questions separated by newlines. Only provide the query, no numbering.
Original question: {question}""",
)
llm_chain = LLMChain(llm=llm, prompt=QUERY_PROMPT, output_parser=output_parser)
queries = llm_chain.invoke(query)
</code></pre>
<p>I am getting the below error:</p>
<pre><code>---------------------------------------------------------------------------
ValidationError Traceback (most recent call last)
Cell In[93], line 20
16 lines = text.strip().split("\n")
17 return lines
---> 20 output_parser = LineListOutputParser()
22 QUERY_PROMPT = PromptTemplate(
23 input_variables=["question"],
24 template="""You are an AI language model assistant. Your task is to generate five
(...)
29 Original question: {question}""",
30 )
32 llm_chain = LLMChain(llm=llm, prompt=QUERY_PROMPT, output_parser=output_parser)
Cell In[93], line 13, in LineListOutputParser.__init__(self)
12 def __init__(self) -> None:
---> 13 super().__init__(pydantic_object=LineList)
File ~\anaconda3\Lib\site-packages\langchain_core\load\serializable.py:120, in Serializable.__init__(self, **kwargs)
119 def __init__(self, **kwargs: Any) -> None:
--> 120 super().__init__(**kwargs)
121 self._lc_kwargs = kwargs
File ~\anaconda3\Lib\site-packages\pydantic\v1\main.py:341, in BaseModel.__init__(__pydantic_self__, **data)
339 values, fields_set, validation_error = validate_model(__pydantic_self__.__class__, data)
340 if validation_error:
--> 341 raise validation_error
342 try:
343 object_setattr(__pydantic_self__, '__dict__', values)
ValidationError: 1 validation error for LineListOutputParser
pydantic_object
subclass of BaseModel expected (type=type_error.subclass; expected_class=BaseModel)
</code></pre>
<p>I am using pydantic-2.5.3</p>
|
<python><pydantic><langchain>
|
2024-03-04 07:47:16
| 3
| 1,121
|
Nishant
|
78,099,555
| 4,399,016
|
Pandas Resample 2M and 3M for every month
|
<p>I have this code to calculate Returns:</p>
<pre><code>import yfinance as yf
import numpy as np
import pandas as pd
df = yf.download('SPY', '2023-01-01')
df = df[['Close']]
df['d_returns'] = np.log(df.div(df.shift(1)))
df.dropna(inplace = True)
df_1M = pd.DataFrame()
df_2M = pd.DataFrame()
df_3M = pd.DataFrame()
df_1M['1M cummreturns'] = df.d_returns.cumsum().apply(np.exp)
df_2M['2M cummreturns']= df.d_returns.cumsum().apply(np.exp)
df_3M['3M cummreturns'] = df.d_returns.cumsum().apply(np.exp)
df1 = df_1M[['1M cummreturns']].resample('1M').max()
df2 = df_2M[['2M cummreturns']].resample('2M').max()
df3 = df_3M[['3M cummreturns']].resample('3M').max()
df1 = pd.concat([df1, df2, df3], axis=1)
df1
</code></pre>
<p>This gives the following:</p>
<pre><code> 1M cummreturns 2M cummreturns 3M cummreturns
Date
2023-01-31 1.067381 1.067381 1.067381
2023-02-28 1.094428 NaN NaN
2023-03-31 1.075022 1.094428 NaN
2023-04-30 1.092196 NaN 1.094428
2023-05-31 1.103356 1.103356 NaN
2023-06-30 1.164014 NaN NaN
2023-07-31 1.202116 1.202116 1.202116
2023-08-31 1.198677 NaN NaN
2023-09-30 1.184785 1.198677 NaN
2023-10-31 1.145738 NaN 1.198677
2023-11-30 1.198466 1.198466 NaN
2023-12-31 1.251746 NaN NaN
2024-01-31 1.290032 1.290032 1.290032
2024-02-29 1.334174 NaN NaN
2024-03-31 1.346699 1.346699 NaN
2024-04-30 NaN NaN 1.346699
</code></pre>
<p>How to get valid values in <code>2M cummreturns</code> and <code>3M cummreturns</code> columns for every row?</p>
<p>For instance, <code>2023-02-28</code> row represents <code>Feb-2023</code> month. The columns <code>2M cummreturns</code> and <code>3M cummreturns</code> need to have max returns in the next 2 Months and 3 Months time respectively starting from <code>Feb-2023</code> the same way <code>1M cummreturns</code> gives max returns in the next 1 Month time.</p>
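<p>One sketch of the forward-looking column for every row: resampling with <code>'2M'</code>/<code>'3M'</code> creates non-overlapping bins, which is why the NaNs appear. A rolling max over a reversed monthly series instead gives "max over the next k months" at every month (stand-in values below, month-end index as in the question):</p>

```python
import pandas as pd

# Stand-in monthly cumulative returns on a month-end index
monthly = pd.Series(
    [1.067, 1.094, 1.075, 1.092],
    index=pd.to_datetime(
        ["2023-01-31", "2023-02-28", "2023-03-31", "2023-04-30"]
    ),
)

def forward_max(s: pd.Series, k: int) -> pd.Series:
    # Reverse, take a k-wide trailing rolling max, reverse back:
    # each month now holds the max of itself and the next k-1 months.
    return s[::-1].rolling(k, min_periods=1).max()[::-1]

two_month = forward_max(monthly, 2)
```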
|
<python><pandas><data-wrangling><pandas-resample>
|
2024-03-04 07:25:11
| 1
| 680
|
prashanth manohar
|
78,099,283
| 188,331
|
Building a custom tokenizer via HuggingFace Tokenizers library from scratch, some vocabularies are added, but some are not
|
<p>I am trying to create a custom tokenizer via the HuggingFace Tokenizers library from scratch, following <a href="https://huggingface.co/learn/nlp-course/chapter6/8?fw=pt" rel="nofollow noreferrer">this tutorial</a>.</p>
<p>My dataset consists of 80 million Chinese sentences. The structure of my <code>SentencePieceBPETokenizer</code>-based custom tokenizer consists of a custom pre-tokenizer, normalizer and decoder.</p>
<p><strong>Normalizer</strong>: The normalizer is responsible for cleaning up the sentences. It uses <code>NFKC()</code>, <code>Replace(Regex("\s+"), " ")</code> and <code>Lowercase()</code> in sequence.</p>
<pre><code>class CustomNormalizer:
def normalize(self, normalized: NormalizedString):
normalized.nfkc()
normalized.filter(lambda char: not char.isnumeric())
normalized.replace(Regex("\s+"), " ")
normalized.lowercase()
</code></pre>
<p><strong>Pre-Tokenizer</strong>: The pre-tokenizer is responsible for word segmenting, splitting sentences into vocabularies specifically. The codes of pre-tokenizer can be found <a href="https://github.com/huggingface/tokenizers/blob/b24a2fc1781d5da4e6ebcd3ecb5b91edffc0a05f/bindings/python/examples/custom_components.py" rel="nofollow noreferrer">here</a>.</p>
<pre><code>class JiebaPreTokenizer:
    def jieba_split(self, i: int, normalized_string: NormalizedString) -> List[NormalizedString]:
        splits = []
        # we need to call `str(normalized_string)` because jieba expects a str,
        # not a NormalizedString
        for token, start, stop in jieba.tokenize(str(normalized_string)):
            splits.append(normalized_string[start:stop])
        return splits
        # We can also easily do it in one line:
        # return [normalized_string[w[1] : w[2]] for w in jieba.tokenize(str(normalized_string))]

    def odd_number_split(
        self, i: int, normalized_string: NormalizedString
    ) -> List[NormalizedString]:
        # Just an odd example...
        splits = []
        last = 0
        for (i, char) in enumerate(str(normalized_string)):
            if char.isnumeric() and int(char) % 2 == 1:
                splits.append(normalized_string[last:i])
                last = i
        # Don't forget the last one
        splits.append(normalized_string[last:])
        return splits

    def pre_tokenize(self, pretok: PreTokenizedString):
        # Let's call split on the PreTokenizedString to split using `self.jieba_split`
        pretok.split(self.jieba_split)
        # Here we can call `pretok.split` multiple times if we want to apply
        # different algorithm, but we generally just need to call it once.
        pretok.split(self.odd_number_split)
</code></pre>
<p><strong>Decoder</strong>: The decoder is just joining the texts if needed.</p>
<pre><code>class CustomDecoder:
    def decode(self, tokens: List[str]) -> str:
        return "".join(tokens)

    def decode_chain(self, tokens: List[str]) -> List[str]:
        return [f" {t}" for t in tokens]
</code></pre>
<p>Building code of the Tokenizer is:</p>
<pre><code>from tokenizers import SentencePieceBPETokenizer
special_tokens = ["<unk>", "<pad>", "<cls>", "<sep>", "<mask>"]
tk_tokenizer = SentencePieceBPETokenizer()
tk_tokenizer.normalizer = Normalizer.custom(CustomNormalizer())
tk_tokenizer.pre_tokenizer = PreTokenizer.custom(JiebaPreTokenizer())
tk_tokenizer.decoder = Decoder.custom(CustomDecoder())
</code></pre>
<p>To prove that the pre-tokenization process works, I used this code:</p>
<pre><code>text = "件衫巢𠵼𠵼,幫我燙吓喇"
print(tk_tokenizer.pre_tokenizer.pre_tokenize_str(text))
</code></pre>
<p>which outputs:</p>
<pre><code>[('件衫', (0, 2)), ('巢𠵼𠵼', (2, 5)), (',', (5, 6)), ('幫', (6, 7)), ('我', (7, 8)), ('燙', (8, 9)), ('吓', (9, 10)), ('喇', (10, 11))]
</code></pre>
<p>which correctly segments the words.</p>
<p>However, after the training of the custom tokenizer using the following codes, some vocabularies like '巢𠵼𠵼' (means <em>wrinkle</em>) and '燙' (means <em>ironing</em>) cannot be identified:</p>
<pre><code># Helper function
def get_training_corpus(batch_size=1000):
    for i in range(0, len(dataset), batch_size):
        yield dataset[i : i + batch_size]

tk_tokenizer.train_from_iterator(
    get_training_corpus(),
    vocab_size=60000,
    min_frequency=1,
    show_progress=True,
    special_tokens=special_tokens
)
</code></pre>
<p>After training, I tested the functionality of the Tokenizer:</p>
<pre><code>encoding = tk_tokenizer.encode("件衫巢𠵼𠵼,幫我燙吓喇")
print(encoding.ids, encoding.tokens, encoding.offsets)
</code></pre>
<p>which outputs:</p>
<pre><code>[3248, 13, 350, 406, 191, 222] ['件衫', ',', '幫', '我', '吓', '喇'] [(0, 2), (5, 6), (6, 7), (7, 8), (9, 10), (10, 11)]
</code></pre>
<p><strong>About Vocab Size</strong></p>
<p>The final vocabulary size of the custom tokenizer is 58,685. The reason for setting the limit at 60,000 is that the GPT vocabulary size is 40,478 while the GPT-2 vocabulary size is 50,257. Modern Chinese has approximately 106,230 words, but fewer than half of them are commonly used.</p>
<p><strong>The Vocab Missing Problem</strong></p>
<p>I know why '巢𠵼𠵼' cannot be identified: the dataset does not contain that word. However, the dataset has over 100 instances of '燙'. In theory, the tokenizer should be able to store it as a vocabulary entry instead of presenting it as an unknown token.</p>
<p><strong>My Question</strong></p>
<p>My question is: how can I improve the tokenizer code so that the word "燙" reappears in the tokenizer's vocabulary list? Thanks.</p>
|
<python><huggingface-tokenizers>
|
2024-03-04 06:24:06
| 0
| 54,395
|
Raptor
|
78,099,223
| 2,161,073
|
Display Nested Categories with Single Model
|
<pre><code>class ProductCategory(models.Model):
    name = models.CharField(max_length=50, blank=True, null=True)
    parent_category = models.ForeignKey(
        'self', null=True, blank=True, on_delete=models.CASCADE)
    category_image = models.ImageField(
        upload_to='categories/product/imgs/', verbose_name=_("Category Image"), blank=True, null=True,
        help_text=_("Please use our recommended dimensions: 120px X 120px"))
    category_description = models.TextField(verbose_name=_("Category Description"))
    slug = models.SlugField(
        blank=True, null=True, allow_unicode=True, unique=True, verbose_name=_("Slugfiy"))
    date = models.DateTimeField(auto_now_add=True, blank=True, null=True)
</code></pre>
<p>On the website home page I need to show the categories in a nested menu. Can anybody help me do this?</p>
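<p>Since <code>parent_category</code> is a self-referential foreign key, rendering a nested menu reduces to ordinary tree construction. Below is a minimal sketch of that grouping logic, independent of Django: the <code>(name, parent_name)</code> pairs stand in for something like <code>ProductCategory.objects.values_list("name", "parent_category__name")</code> (a hypothetical query, not from the original post).</p>

```python
def build_tree(categories, parent=None):
    """Group (name, parent_name) pairs into a nested list of dicts."""
    return [
        {"name": name, "children": build_tree(categories, name)}
        for name, parent_name in categories
        if parent_name == parent
    ]

# sample data standing in for the queryset
cats = [
    ("Clothes", None),
    ("Shirts", "Clothes"),
    ("Jeans", "Clothes"),
    ("Electronics", None),
]
tree = build_tree(cats)
print(tree[0]["name"], [c["name"] for c in tree[0]["children"]])
```

<p>A recursive template include can then walk each node's <code>children</code> list to render the menu.</p>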
|
<python><django>
|
2024-03-04 06:06:58
| 0
| 324
|
Devi A
|
78,099,166
| 1,230,724
|
Running many `pip install -r requirements.txt` concurrently
|
<p>I'd like to test multiple python projects concurrently. Each project is located in specific directory. Part of the testing is setting up a virtualenv with python packages (via <code>pip install -r requirements.txt</code>). Each project has its respective <code>requirements.txt</code>.</p>
<p>Each project gets its own virtualenv (<code>virtualenv -p $PYTHON_VER ./.env</code> + <code>source .env/bin/activate</code> + <code>pip install -r requirements.txt</code>).</p>
<p>Is it safe to run many of these <code>pip install</code>s concurrently given that the downloaded packages are cached (<code>pip cache dir</code> tells me the global cache is at <code>~/.cache/pip</code>) or would I need to disable the pip cache (<code>--no-cache-dir</code>) for parallel runs of <code>pip install</code>?</p>
|
<python><pip>
|
2024-03-04 05:49:22
| 1
| 8,252
|
orange
|
78,099,132
| 1,609,428
|
How to extract and plot the immediate neighbors (and the neighbors of neighbors) in a networkx graph?
|
<p>Consider the following example</p>
<pre class="lang-py prettyprint-override"><code>import networkx as nx

G = nx.Graph()
G.add_edges_from(
    [('A', 'B'), ('A', 'C'), ('D', 'B'), ('E', 'C'),
     ('H', 'C'), ('Y', 'I')])
nx.draw_networkx(G)
</code></pre>
<p><a href="https://i.sstatic.net/WwKPK.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/WwKPK.png" alt="Network graph showing two sets of connected nodes." /></a></p>
<p>I would like to extract and plot the subgraphs containing 'A' and all of its neighbors, then all the neighbors of the neighbors, and so on, <code>n</code> times (obtaining <code>n</code> different charts, that is). To be clear: step 0 is the node <code>A</code> itself, step 1 is the network <code>B-A-C</code>, step 2 is the network <code>D-B-A-C-E-H</code> (<code>Y</code> and <code>I</code> have been removed because they are not neighbors of neighbors of <code>A</code>).</p>
<p>I have been unable to do so with <code>networkx</code>. Using <code>all_neighbors()</code> extracts all immediate neighbors (<code>B</code> and <code>C</code>) but somehow loses <code>A</code> in the process, so the graph is incomplete.</p>
<pre class="lang-py prettyprint-override"><code>zn = G.subgraph(nx.all_neighbors(G, 'A'))
nx.draw_networkx(zn)
</code></pre>
<p><a href="https://i.sstatic.net/srJTY.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/srJTY.png" alt="enter image description here" /></a></p>
<p>What can be done here?</p>
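<p>For the n-hop extraction itself, networkx has a built-in, <code>nx.ego_graph</code>, which keeps the center node plus everything within <code>radius</code> hops. A sketch on the graph from the question:</p>

```python
import networkx as nx

G = nx.Graph()
G.add_edges_from(
    [('A', 'B'), ('A', 'C'), ('D', 'B'), ('E', 'C'),
     ('H', 'C'), ('Y', 'I')])

# radius=n keeps 'A' plus every node within n hops of it
for n in range(3):
    step = nx.ego_graph(G, 'A', radius=n)
    print(n, sorted(step.nodes))
```

<p>Each <code>step</code> is itself a graph, so it can be drawn with <code>nx.draw_networkx(step)</code>, one chart per radius.</p>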
|
<python><graph><networkx>
|
2024-03-04 05:36:54
| 3
| 19,485
|
ℕʘʘḆḽḘ
|
78,099,041
| 755,934
|
sqlalchemy timestamps give inconsistent times for multiple columns
|
<p>I am using the SQLAlchemy ORM with Flask. Included below is a simplified version of my model:</p>
<pre><code>class RenterLead(BaseModel):
    __tablename__ = "renter_leads"

    uuid = db.Column(db.String, nullable=False, primary_key=True)
    owner = db.Column(db.Integer, db.ForeignKey("users.id"), nullable=False)
    name = db.Column(db.String)
    email = db.Column(db.String)
    phone_number = db.Column(db.String)
    inserted_at = db.Column(db.DateTime, nullable=False, default=datetime.utcnow,
                            server_default=func.now())
    modified_at = db.Column(db.DateTime, nullable=False, default=datetime.utcnow,
                            server_default=func.now())
</code></pre>
<p>I noticed that, when using the model, the <code>inserted_at</code> and <code>modified_at</code> timestamps were slightly different by a few ms after inserting a new record (which is not what I want, nor what I expect). Note that, after creating the table using such a schema, if you insert a record <em>directly into the DB without using the ORM</em>, the <code>inserted_at</code> and <code>modified_at</code> timestamps are actually exactly the same. This is the desired behaviour, and I performed this test to validate my understanding that the problem I was observing was caused by the ORM and did not exist at the DB layer.</p>
<p>I think I know why the timestamps are different (the python functions <code>datetime.utcnow</code> are being evaluated twice, once per column, so we get slightly different values). What I don't know is what I should do. I suppose I can remove <code>default</code> and use <code>server_default</code> only. But this can cause other issues, for example the data not existing on uncommitted models. So what's the right way to handle this in SQLAlchemy? I imagine I'm not the first person to run into this particular issue.</p>
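<p>One way to guarantee a single shared value (a sketch, not necessarily the canonical SQLAlchemy pattern) is to drop the per-column Python defaults and stamp both columns from one <code>before_insert</code> event, so the timestamp is computed exactly once per row. The model below is a simplified stand-in using in-memory SQLite:</p>

```python
import datetime

from sqlalchemy import Column, DateTime, Integer, String, create_engine, event
from sqlalchemy.orm import Session, declarative_base

Base = declarative_base()

class Lead(Base):
    __tablename__ = "leads"
    id = Column(Integer, primary_key=True)
    name = Column(String)
    inserted_at = Column(DateTime, nullable=False)
    modified_at = Column(DateTime, nullable=False)

@event.listens_for(Lead, "before_insert")
def stamp_timestamps(mapper, connection, target):
    # evaluated once per row, then shared by both columns
    now = datetime.datetime.utcnow()
    target.inserted_at = now
    target.modified_at = now

engine = create_engine("sqlite://")
Base.metadata.create_all(engine)

with Session(engine) as session:
    lead = Lead(name="x")
    session.add(lead)
    session.commit()
    inserted, modified = lead.inserted_at, lead.modified_at

print(inserted == modified)
```

<p>This also keeps the value available on the uncommitted object, which a pure <code>server_default</code> would not.</p>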
|
<python><postgresql><sqlalchemy>
|
2024-03-04 05:00:15
| 1
| 5,624
|
Daniel Kats
|
78,098,939
| 1,592,764
|
Python/Telethon on SQLite database operational error: unable to open database file
|
<p>I have a python/telethon Telegram bot project consisting of a single python file, an SQLite database, and a config file. The bot runs fine on my local system in a virtual environment (all three files are in the same directory), but when I attempt to run the python file on an Ubuntu server, I run into the following error.</p>
<p>Project files are in <code>/usr/local/bin/project/</code>:</p>
<pre><code>Traceback (most recent call last):
File "/usr/local/bin/project/script.py", line 19, in <module>
client = TelegramClient(session_name, API_ID, API_HASH).start(bot_token=BOT_TOKEN)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/dist-packages/telethon/client/telegrambaseclient.py", line 289, in __init__
session = SQLiteSession(session)
^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/dist-packages/telethon/sessions/sqlite.py", line 47, in __init__
c = self._cursor()
^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/dist-packages/telethon/sessions/sqlite.py", line 242, in _cursor
self._conn = sqlite3.connect(self.filename,
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
sqlite3.OperationalError: unable to open database file
</code></pre>
<p>Below are the contents of script.py:</p>
<pre><code>import configparser
import re
from telethon import TelegramClient, Button, events
import sqlite3
from datetime import datetime
import traceback

print("Initializing configurations...")
config = configparser.ConfigParser()
config.read('config.ini', encoding='utf-8')

API_ID = config.get('default','api_id')
API_HASH = config.get('default','api_hash')
BOT_TOKEN = config.get('default','bot_token')

session_name = "sessions/Bot"

# Start the bot session
client = TelegramClient(session_name, API_ID, API_HASH).start(bot_token=BOT_TOKEN)

@client.on(events.NewMessage(pattern="(?i)/start"))
async def start(event):
    sender = await event.get_sender()
    SENDER = str(sender.id)
    await event.reply('Hello!.')

##### MAIN
if __name__ == '__main__':
    try:
        print("Initializing Database...")
        # Connect to local database
        db_name = 'database.db'
        conn = sqlite3.connect(db_name, check_same_thread=False)

        # Create the cursor
        crsr = conn.cursor()
        print("Connected to the database")

        # Command that creates the "customers" table
        sql_command = """CREATE TABLE IF NOT EXISTS customers (
            id INTEGER PRIMARY KEY AUTOINCREMENT,
            lname VARCHAR(200),
            fname VARCHAR(200),
            note VARCHAR(200));"""
        crsr.execute(sql_command)
        print("All tables are ready")

        print("Bot Started")
        client.run_until_disconnected()
    except Exception as error:
        print('Cause: {}'.format(error))
</code></pre>
<p>These are the contents of <code>config.ini</code>:</p>
<pre><code>; DO NOT MODIFY THE LINE BELOW!!! ([default])
[default]
; EDITABLE FIELDS:
api_id = 000000
api_hash = 000000000000000000
bot_token = 00000000000000000000000000
</code></pre>
<p>I checked permissions on all three files, and am getting the following:</p>
<pre><code>-rw-r--rw- 1 root root 10601 Mar 4 00:52 script.py
-rw-rw-rw- 1 root root 716800 Mar 4 00:52 database.db
-rw-r--rw- 1 root root 195 Mar 4 00:52 config.ini
</code></pre>
|
<python><sqlite><permissions><telethon>
|
2024-03-04 04:19:52
| 1
| 1,695
|
Marcatectura
|
78,098,830
| 9,357,484
|
Version incompatibility between Spacy, Cuda, Pytorch and Python
|
<p>I want to run spaCy on GPU. The spaCy configuration I installed is below:</p>
<p>Name: spacy</p>
<p>Version: 3.7.4</p>
<p>The CUDA configuration that I have on my Ubuntu 20.04.1 LTS machine is:</p>
<pre class="lang-none prettyprint-override"><code>nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2019 NVIDIA Corporation
Built on Sun_Jul_28_19:07:16_PDT_2019
Cuda compilation tools, release 10.1, V10.1.243
</code></pre>
<p>I am restricted from upgrading CUDA. The PyTorch version installed on the machine is "2.2.1+cu121", which has no support for the CUDA version I have.</p>
<p>I tried to downgrade PyTorch and found that I need to downgrade the Python version as well.</p>
<p>My current Python version is 3.12.2. A Python version compatible with PyTorch 1.4.0 would have to satisfy >=3.6,<3.7.0.</p>
<p>If I downgrade both PyTorch and Python, they are no longer compatible with the spaCy version.</p>
<p>I need to do a transformer-based NER task in spaCy, so I am not sure what spaCy's minimum requirements are.</p>
<p>How can I handle this version incompatibility?</p>
|
<python><pytorch><nlp><spacy>
|
2024-03-04 03:32:25
| 0
| 3,446
|
Encipher
|
78,098,740
| 12,314,521
|
Does padded rows (fake inputs) affect backpropagation?
|
<p>Each row of my data doesn't have the same size.</p>
<p>Ideally, the shape of the input data would be <code>(batch_size, N, dim)</code>.</p>
<p>But each row in the batch does not have the same dimension, e.g. it can be <code>(k, dim)</code> with k < N.</p>
<p>To feed to my model I have to add some fake rows called padded rows.</p>
<p>These inputs go through some functions, layers in my model.</p>
<p>In the end, the loss function takes the mean of each row as input -> <code>(batch_size, dim)</code> . But I don't want to consider the padded row in the reduction.</p>
<p>-> So I computed the mean as:</p>
<ul>
<li>assigned all the padded rows to zero.</li>
<li><code>torch.sum(..)/(number_of_non_padded_rows)</code></li>
</ul>
<p>My question is: as the padded rows are included in the argument of <code>torch.sum(...)</code>, does the model try to modify the weights based on these fake (padded) rows?
My second thought is: because I assigned the padded rows a constant value, which is zero, does that somehow get rid of the gradients of these padded rows during training?</p>
<p>Edit:
Based on the chain rule, the derivative of a constant is zero, so I guess my second thought is right and my approach is correct.</p>
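<p>That second thought can be checked directly on a toy example: if the padded rows are zeroed via a 0/1 mask before the sum, they receive exactly zero gradient. The shapes below are made up for illustration:</p>

```python
import torch

# toy batch: 1 sample, N=3 rows, dim=2; the last row is padding
x = torch.tensor([[[1., 2.], [3., 4.], [5., 6.]]], requires_grad=True)
mask = torch.tensor([[True, True, False]])           # (batch, N)

masked = x * mask.unsqueeze(-1)                      # padded rows zeroed
counts = mask.sum(dim=1, keepdim=True).clamp(min=1)  # number of real rows
mean = masked.sum(dim=1) / counts                    # (batch, dim)

mean.sum().backward()
print(x.grad[0, 2])   # padded row: zero gradient
```

<p>The multiplication by 0 zeroes the gradient flowing to the padded entries, so the weights are never updated on their account.</p>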
|
<python><deep-learning><pytorch><backpropagation>
|
2024-03-04 02:52:44
| 0
| 351
|
jupyter
|
78,098,472
| 1,394,353
|
How can I filter all rows of a polars dataframe that partially match strings in another?
|
<p>I want to delete all rows of a dataframe that match one or more rows in a filtering dataframe.</p>
<p>Yes, I know about filter by <a href="https://stackoverflow.com/questions/77476875/python-polars-how-to-delete-rows-from-a-dataframe-that-match-a-given-regex"><strong>one</strong> regex</a> and I also know how join can be leveraged when there is a <a href="https://stackoverflow.com/questions/77421496/polars-filter-dataframe-by-another-dataframe-by-row-elements">full match</a> on a column. This isn't a direct match, except through looping the filter dataframe row by row.</p>
<p>It is a relatively trivial problem in sql to apply this filter in bulk, on the server, without looping with client-side code:</p>
<p>given:</p>
<h5>data.csv</h5>
<pre><code>filename,col2
keep.txt,bar
skip.txt,foo
keep2.txt,zoom
skip3.txt,custom1
discard.txt,custom2
file3.txt,custom3
discard2.txt,custom4
file4.txt,custom5
</code></pre>
<h5>filter.csv:</h5>
<pre><code>skip
discard
skip
</code></pre>
<p>Here's the sql using postgres. It will, and that is the key point here, scale very well.</p>
<h5>withsql.sql</h5>
<pre><code>\c test;
DROP TABLE IF EXISTS data;
DROP TABLE IF EXISTS filter;
CREATE TABLE data (
filename CHARACTER(50),
col2 CHARACTER(10),
skip BOOLEAN DEFAULT FALSE
);
\copy data (filename,col2) FROM './data.csv' WITH (FORMAT CSV);
CREATE TABLE filter (
skip VARCHAR(20)
);
\copy filter FROM './filter.csv' WITH (FORMAT CSV);
update filter set skip = skip || '%';
update data set skip = TRUE where exists (select 1 from filter s where filename like s.skip);
delete from data where skip = TRUE;
select * from data;
</code></pre>
<p><code>psql -f withsql.sql</code></p>
<p>this gives as output:</p>
<pre><code>You are now connected to database "test" as user "djuser".
...
UPDATE 4
DELETE 4
filename | col2 | skip
----------------------------------------------------+------------+------
filename | col2 | f
keep.txt | bar | f
keep2.txt | zoom | f
file3.txt | custom3 | f
file4.txt | custom5 | f
(5 rows)
</code></pre>
<p>Now, I can do with polars, but the only thing I can think of is using a loop on the filter.csv:</p>
<h5>withpolars.py</h5>
<pre><code>import polars as pl

df_data = pl.read_csv("data.csv")
df_filter = pl.read_csv("filter.csv")

for row in df_filter.iter_rows():
    df_data = df_data.filter(~pl.col('filename').str.contains(row[0]))

print("data after:\n", df_data)
</code></pre>
<p>The output is correct, but can I do this without looping, somehow? And... just curious how some of these bulk SQL approaches map to dataframes.</p>
<pre><code>data after:
shape: (4, 2)
┌───────────┬─────────┐
│ filename ┆ col2 │
│ --- ┆ --- │
│ str ┆ str │
╞═══════════╪═════════╡
│ keep.txt ┆ bar │
│ keep2.txt ┆ zoom │
│ file3.txt ┆ custom3 │
│ file4.txt ┆ custom5 │
└───────────┴─────────┘
</code></pre>
|
<python><dataframe><python-polars><bulkupdate>
|
2024-03-04 00:50:51
| 2
| 12,224
|
JL Peyret
|
78,098,383
| 1,592,380
|
Ipywidget selection box not opening
|
<p><a href="https://i.sstatic.net/lGa7V.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/lGa7V.png" alt="enter image description here" /></a></p>
<p>I have a jupyter notebook and I'm running leafmap. I'm trying to add a filepicker that will open up a widget to select a local file. I have the following in a cell:</p>
<pre><code>import ipywidgets as widgets
from ipyfilechooser import FileChooser
import os
padding = "0px 0px 0px 5px"
style = {"description_width": "initial"}
tool_output = widgets.Output(
layout=widgets.Layout(max_height="150px", max_width="500px", overflow="auto")
)
file_type = widgets.ToggleButtons(
options=["GeoTIFF", "COG", "STAC", "Microsoft"],
tooltips=[
"Open a local GeoTIFF file",
"Open a remote COG file",
"Open a remote STAC item",
"Create COG from Microsoft Planetary Computer",
],
)
file_type.style.button_width = "110px"
file_chooser = FileChooser(
os.getcwd(), sandbox_path=m.sandbox_path, layout=widgets.Layout(width="454px")
)
file_chooser.filter_pattern = ["*.tif", "*.tiff"]
file_chooser.use_dir_icons = True
source_widget = widgets.VBox([file_chooser])
</code></pre>
<p>When I run the cell, no selection box opens. What am I doing wrong?</p>
|
<python><jupyter-notebook><ipywidgets>
|
2024-03-04 00:11:00
| 1
| 36,885
|
user1592380
|
78,098,132
| 2,180,100
|
Is there a more pythonic way to evaluate "for some"?
|
<p>Consider the pseudocode</p>
<pre><code>if P(S) for some S in Iterator:
Do something
</code></pre>
<p>When I write this in python I usually end up writing something like</p>
<pre><code>for S in Iterator:
    if P(S):
        Do something
        break
</code></pre>
<p>Or if I want to avoid nesting an if in a loop</p>
<pre><code>i = 0
while i < len(S) and not P(S[i]):
    i += 1
if i < len(S):
    Do something
</code></pre>
<p>Both of these are <em>irritating</em> to use. The code is direct yet my intent in pseudocode feels somewhat obfuscated. I might instead try to write</p>
<pre><code>if [P(S) for S in Iterator if P(S)]:
    Do something
</code></pre>
<p>but that list comprehension will enumerate the entire iterator (I believe) and the P(S) is mentioned twice, and overloading the if statement with the list feels ugly.</p>
<p>Is there a better/more pythonic way to write that pseudocode in python?</p>
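<p>The built-in <code>any()</code> with a generator expression is the direct translation of "for some": it short-circuits at the first element satisfying <code>P</code> and never enumerates the rest of the iterator. A small self-contained sketch with an illustrative predicate:</p>

```python
def P(x):
    return x > 3

items = [1, 2, 5, 7]

# short-circuits: stops consuming the iterator at the first hit
if any(P(s) for s in items):
    print("found")

# if "Do something" needs the matching element, next() gives the first witness
match = next((s for s in items if P(s)), None)
print(match)
```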
|
<python><syntactic-sugar><control-structure>
|
2024-03-03 22:14:58
| 0
| 729
|
Sidharth Ghoshal
|
78,098,063
| 4,119,262
|
Unexpected result when testing conditional: checks if inputs starts with 0 or ends with numbers
|
<p>I am trying to learn Python, and in this context I am working on a problem.
The problem aims to check that a string of characters:</p>
<ul>
<li>contains 2 characters at minimum and 6 at maximum</li>
<li>starts with 2 letters</li>
<li>if it contains numbers, these appear at the end of the string</li>
<li>the numbers do not start with a 0</li>
<li>no punctuation is used in the string</li>
</ul>
<p>From what I understand, only two parts of the code are causing the issue, I am hence presenting these two issues only:</p>
<p>My code is as follows:</p>
<pre><code>def number_end_plate(s):
    if s.isalpha():
        return False
    else:
        number = ""
        for char in s:
            if char.isnumeric():
                number = number + char
            else:
                number = number
        result = number.startswith("0")
        return result
# Function "number_end_plate" checks if the numbers start with zero. If "True" then this will be rejected in the above function

def numbers_after_letter(s):
    if s.isalpha():
        return True
    elif len(s) == 4 and s == s[3:4].isnumeric():
        return True
    elif len(s) == 4 and s == s[3:3].isalpha() and s == s[4:4].isnumeric():
        return True
    elif len(s) == 5 and s == s[3:5].isnumeric():
        return True
    elif len(s) == 5 and s == s[3:4].isalpha() and s == s[5:5].isnumeric():
        return True
    elif len(s) == 5 and s == s[3:3].isalpha() and s == s[4:5].isnumeric():
        return True
    elif len(s) == 6 and s == s[3:6].isnumeric():
        return True
    elif len(s) == 6 and s == s[3:3].isalpha() and s == s[4:6].isnumeric():
        return True
    elif len(s) == 6 and s == s[3:4].isalpha() and s == s[5:6].isnumeric():
        return True
    elif len(s) == 6 and s == s[3:5].isalpha() and s == s[6:6].isnumeric():
        return True
# Function checks if the numbers (if any) are always at the end of the string
</code></pre>
<p>I am failing the following tests, and I do not understand why I get "<code>Invalid</code>" as a result:</p>
<ul>
<li>AK88</li>
<li>ECZD99</li>
<li>IKLMNV</li>
</ul>
|
<python><string><if-statement>
|
2024-03-03 21:48:43
| 1
| 447
|
Elvino Michel
|
78,097,971
| 999,137
|
Langchain : How to store memory with streaming?
|
<p>I have a simple RAG app and cannot figure out how to store memory with streaming. Should <code>save_context</code> be part of the chain? Or do I have to handle it using some callback?</p>
<p>At the end of the example is <code>answer_chain</code>, where the last step is skipped. I believe it should be something at the end, but I cannot figure out what. I want to run a callback when streaming is finished.</p>
<p>Also, I split the chain into two steps, because when there is one big streaming chain, it sends documents and so on to stdout, which does not make sense; I only want messages. Is handling it with two separate chains the proper way?</p>
<p>Any ideas?</p>
<pre class="lang-py prettyprint-override"><code>import uuid
from typing import Iterator

import dotenv
from langchain_core.messages import get_buffer_string
from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import ChatPromptTemplate, PromptTemplate, format_document
from langchain_core.runnables import RunnableLambda, RunnablePassthrough, RunnableParallel
from langchain_core.runnables.utils import Output

from document_index.vector import get_retriever
from operator import itemgetter
from memory import get_memory
from model import get_model

dotenv.load_dotenv()

model = get_model()

condense_question_prompt = """Given the following conversation and a follow up question, rephrase the follow up question to be a standalone question, in its original language.
Chat History:
{chat_history}
Follow Up Input: {question}
Standalone question:"""
CONDENSE_QUESTION_PROMPT = PromptTemplate.from_template(condense_question_prompt)

initial_prompt = """
You are helpful AI assistant.
Answer the question based only on the context below.
### Context start ###
{context}
### Context end ###
Question: {question}
"""
ANSWER_PROMPT = ChatPromptTemplate.from_template(initial_prompt)

DEFAULT_DOCUMENT_PROMPT = PromptTemplate.from_template(template="{page_content}")

retriever = get_retriever()

def _get_memory_with_session_id(session_id):
    return get_memory(session_id)

def _combine_documents(docs, document_prompt=DEFAULT_DOCUMENT_PROMPT, document_separator="\n\n"):
    doc_strings = [format_document(doc, document_prompt) for doc in docs]
    return document_separator.join(doc_strings)

def search(session_id, query) -> Iterator[Output]:
    memory = _get_memory_with_session_id(session_id)

    def _save_context(inputs, answer):
        memory.save_context(inputs, {"answer": answer})

    loaded_memory = RunnablePassthrough.assign(
        chat_history=RunnableLambda(memory.load_memory_variables) | itemgetter("history"),
    )

    standalone_question = {
        "standalone_question": {
            "question": lambda x: x["question"],
            "chat_history": lambda x: get_buffer_string(x["chat_history"]),
        }
        | CONDENSE_QUESTION_PROMPT
        | model
        | StrOutputParser()
    }

    retrieved_documents = {
        "docs": itemgetter("standalone_question") | retriever,
        "question": lambda x: x["standalone_question"],
    }

    preparation_chain = loaded_memory | standalone_question | retrieved_documents

    memory.load_memory_variables({})
    inputs = {"question": query}
    docs = preparation_chain.invoke(inputs)

    answer_chain = (
        {"docs": RunnablePassthrough()}
        | {
            "context": lambda x: _combine_documents(x["docs"]),
            "question": itemgetter("question"),
        }
        | ANSWER_PROMPT
        | model
        | StrOutputParser()
        # | RunnableLambda(_save_context, ????query_argument, ????MODEL_ANSWER)
    )

    return answer_chain.stream(docs)

if __name__ == "__main__":
    session_id = str(uuid.uuid4())
    query = "Where to buy beer?"
    for result in search(session_id, query):
        print(result, end="")
</code></pre>
|
<python><streaming><langchain>
|
2024-03-03 21:14:05
| 1
| 971
|
rtyshyk
|
78,097,965
| 10,985,257
|
Assign values via two subsequent masking operations in pytorch
|
<p>I have generated two different masks based on values:</p>
<pre class="lang-py prettyprint-override"><code>import torch
values = torch.tensor([0, 0.5, 0.99, 0.87])
saved_values = values + torch.tensor([0.1, -0.4, 0, 0.1])
result = torch.zeros_like(values)
mask1 = values > 0
mask2 = ~torch.greater(saved_values[mask1], values[mask1])
</code></pre>
<p>Now I have tested if I am able to grep the data with the help of the masks:</p>
<pre class="lang-py prettyprint-override"><code>>>> result[mask1][mask2]
tensor([0., 0.])
</code></pre>
<p>seems to work, so I used broadcasting for testing further:</p>
<pre class="lang-py prettyprint-override"><code>>>> result[:] = 5
>>> result[mask1][mask2]
tensor([5., 5.])
</code></pre>
<p>Seems to work also, so I finally tested the values with the masks:</p>
<pre class="lang-py prettyprint-override"><code>>>> values[mask1][mask2]
tensor([0.5000, 0.9900])
</code></pre>
<p>Seems to work as well, so I try to assign the values based on the masks:</p>
<pre class="lang-py prettyprint-override"><code>result = torch.zeros_like(values)
result[mask1][mask2] = values[mask1][mask2]
</code></pre>
<p>Doesn't throw an error so I assumed it works, checked twice:</p>
<pre class="lang-py prettyprint-override"><code>>>> result[mask1][mask2]
tensor([0., 0.])
>>> result
tensor([0., 0., 0., 0.])
</code></pre>
<p>It seems the values are not saved properly due to some reference issues.</p>
<p>How can I achieve the wanted behavior?</p>
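<p>The root cause is that chained advanced indexing, <code>result[mask1][mask2] = ...</code>, writes into a temporary copy produced by the first indexing, so the original tensor never changes. Folding the second mask back into a full-size boolean mask turns it into a single indexed assignment, which does stick. A sketch on the question's data:</p>

```python
import torch

values = torch.tensor([0, 0.5, 0.99, 0.87])
saved_values = values + torch.tensor([0.1, -0.4, 0, 0.1])
result = torch.zeros_like(values)

mask1 = values > 0
mask2 = ~torch.greater(saved_values[mask1], values[mask1])

# fold mask2 (defined on the mask1-selected rows) back into a full-size mask
combined = torch.zeros_like(mask1)
combined[mask1] = mask2

result[combined] = values[combined]   # one advanced-indexing write -> persists
print(result)  # tensor([0.0000, 0.5000, 0.9900, 0.0000])
```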
|
<python><pytorch>
|
2024-03-03 21:12:22
| 1
| 1,066
|
MaKaNu
|
78,097,805
| 7,846,884
|
how to set output directory in shell in snakemake workflow
|
<p>The <code>--output_dir</code> flag in my shell command lets files be written to that directory,
but I keep getting the error:</p>
<pre><code>SyntaxError:
Not all output, log and benchmark files of rule bismark_cov contain the same wildcards. This is crucial though, in order to avoid that two or more jobs write to the same file.
File "extra_bismark_methyl_analysis.smk", line 35, in <module>
</code></pre>
<pre><code>bismark_methylation_extractor {input.bam_path} --parallel 4 \
--paired-end --comprehensive \
--bedGraph --zero_based --output_dir {params.out_dir}
</code></pre>
<p>Please see the full workflow I used:</p>
<pre><code>import os
import glob
from datetime import datetime

#import configs
configfile: "/lila/data/greenbaum/users/ahunos/apps/lab_manifesto/configs/config_snakemake_lilac.yaml"

# Define the preprocessed files directory
preprocessedDir = '/lila/data/greenbaum/projects/methylSeq_Spectrum/data/preprocessed/WholeGenome_Methyl/OUTDIR/bismark/deduplicated/*.bam'
dir2='/lila/data/greenbaum/projects/methylSeq_Spectrum/data/preprocessed/Capture_Methyl/OUTDIR/bismark/deduplicated/*.bam'

# Create the pattern to match BAM files
def get_bams(nfcore_OUTDIR):
    bam_paths = glob.glob(nfcore_OUTDIR, recursive=True)
    return bam_paths

#combine bam files
bam_paths = get_bams(nfcore_OUTDIR=preprocessedDir) + get_bams(nfcore_OUTDIR=dir2)
print(bam_paths)

#get sample names
SAMPLES = [os.path.splitext(os.path.basename(f))[0] for f in bam_paths]
print(f"heres SAMPLES \n{SAMPLES}")

contexts=['CpG','CHH','CHG']
suffixes=['bismark.cov.gz','M-bias.txt', '.bedGraph.gz']

rule all:
    input:
        expand('results/{sample}/{sample}.{suffix}', sample=SAMPLES, suffix=suffixes, allow_missing=True),
        expand('results/{sample}/{sample}_splitting_report.txt', sample=SAMPLES,allow_missing=True),
        expand('results/{sample}/{C_context}_context_{sample}.txt', sample=SAMPLES, C_context=contexts,allow_missing=True)

rule bismark_cov:
    input:
        bam_path=lambda wildcards: wildcards.bam_paths
    output:
        'results/{sample}/{sample}.{suffix}',
        'results/{sample}/{sample}_splitting_report.txt',
        'results/{sample}/{C_context}_context_{sample}.txt'
    params:
        out_dir='results/{sample}'
    shell:
        """
        bismark_methylation_extractor {input.bam_path} --parallel 4 \
        --paired-end --comprehensive \
        --bedGraph --zero_based --output_dir {params.out_dir}
        """
</code></pre>
|
<python><snakemake>
|
2024-03-03 20:13:57
| 1
| 473
|
sahuno
|
78,097,764
| 6,943,622
|
Apply Operations to Make All Array Elements Equal to Zero
|
<p>So I attempted this leetcode problem, and I was able to come up with a solution that passes 1017/1026 test cases. The remaining ones failed because the time limit was exceeded, not because the solution was incorrect. So I am wondering if anyone has ideas on how to optimize my approach. I know it has to do with the fact that I am finding the min value in the sublist every single time. I'm pretty sure this can be optimized with some prefix-sum logic, but I am not sure how to introduce that into my approach.</p>
<p>Here's the question:</p>
<blockquote>
<p>You are given a 0-indexed integer array nums and a positive integer k.</p>
<p>You can apply the following operation on the array any number of
times:</p>
<p>Choose any subarray of size k from the array and decrease all its
elements by 1. Return true if you can make all the array elements
equal to 0, or false otherwise.</p>
<p>A subarray is a contiguous non-empty part of an array.</p>
<p>Example 1:</p>
<p>Input: nums = [2,2,3,1,1,0], k = 3 Output: true</p>
<p>Explanation: We can do
the following operations:</p>
<ul>
<li>Choose the subarray [2,2,3]. The resulting array will be nums = [1,1,2,1,1,0].</li>
<li>Choose the subarray [2,1,1]. The resulting array will be nums = [1,1,1,0,0,0].</li>
<li>Choose the subarray [1,1,1]. The resulting array will be nums = [0,0,0,0,0,0].</li>
</ul>
<p>Example 2:</p>
<p>Input: nums = [1,3,1,1], k = 2 Output: false</p>
<p>Explanation: It is not
possible to make all the array elements equal to 0.</p>
</blockquote>
<p>Here's my code:</p>
<pre><code>def checkArray(self, nums: List[int], k: int) -> bool:
    i, j = 0, k
    while i + k <= len(nums):
        sublist = nums[i:j]  # Make a copy of the sublist
        smallest = min(sublist)
        for x in range(len(sublist)):
            sublist[x] -= smallest  # Modify the values in the sublist
            if x > 0 and sublist[x] < sublist[x-1]:
                return False
        nums[i:j] = sublist  # Assign the modified sublist back to the original list
        i += 1
        j += 1
    return sum(nums) == 0
</code></pre>
<p>And here's my intuition so my approach can be followed:</p>
<blockquote>
<p>What I did was use a sliding window. Within each window, we are doing
2 things, we are simulating applying operations that would reduce the
elements to 0 and we are also optimising by checking if the window is
valid. A window is valid if no element to the left of the currently
iterated over element is greater after the simulation is complete. It
makes sense because if an element to the left is greater than it, that
means that element to the left cannot be reduced to 0 within a window
of size k. By simulation, I simply mean instead of continuously
subtracting 1 from all elements in the window until the smallest
reaches 0, we simply find the smallest element in the window and
subtract it from all elements in the window. It yields the same result
but more efficiently.</p>
</blockquote>
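The prefix-sum intuition above can be made concrete with a difference array: instead of materialising and re-scanning each window, record where each operation's effect starts and stops, and carry the running total of applied decrements forward. An editor's sketch of that standard O(n) technique (not the asker's code):

```python
def check_array(nums, k):
    n = len(nums)
    diff = [0] * (n + 1)  # where each operation's effect stops applying
    applied = 0           # total decrements currently covering index i
    for i in range(n):
        applied += diff[i]
        need = nums[i] - applied  # remaining value once prior windows are applied
        if need < 0:
            return False          # earlier windows over-decremented this element
        if need > 0:
            if i + k > n:
                return False      # no room for a window of size k starting here
            applied += need       # start `need` operations at index i ...
            diff[i + k] -= need   # ... whose effect ends at index i + k
    return True

print(check_array([2, 2, 3, 1, 1, 0], 3))  # True
print(check_array([1, 3, 1, 1], 2))        # False
```

Each index is visited once and the per-window `min`/copy work disappears, which removes the O(n·k) factor that causes the time-limit failures.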
|
<python><arrays><sliding-window><prefix-sum>
|
2024-03-03 19:57:35
| 1
| 339
|
Duck Dodgers
|
78,097,735
| 3,930,599
|
PubSub async publishing
|
<p>I wanted to make sure of something (I haven't found it anywhere in the documentation): if I am using PubSub async publishing as part of my service's request-handling logic (Python Django) -</p>
<pre><code>*handling request*
publish async PubSub messages
*handling request*
returning response
</code></pre>
<p>Is it guaranteed that the async PubSub publisher won't delay my response to that specific request? When exactly does it publish the events?</p>
<p>Thanks</p>
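For context, the Pub/Sub client's async publish returns a future immediately while a background thread handles batching and the network round-trip, so the handler is only delayed if it waits on the future. A minimal, library-free model of that pattern (this is not the google-cloud-pubsub client itself; `send_to_broker` is a made-up placeholder):

```python
# publish() hands the message to a background thread and returns a future
# immediately; the request handler is only delayed if it chooses to wait
# on future.result().
import time
from concurrent.futures import ThreadPoolExecutor

executor = ThreadPoolExecutor(max_workers=4)

def send_to_broker(data):
    time.sleep(0.2)  # hypothetical stand-in for the network round-trip
    return f"message-id-for-{data}"

def publish(data):
    return executor.submit(send_to_broker, data)  # non-blocking hand-off

start = time.perf_counter()
future = publish("event-1")
enqueue_time = time.perf_counter() - start

print(enqueue_time < 0.05)  # handing off the message does not block
print(future.result())      # blocks only here, if/when the result is needed
```

The real client behaves analogously: `publisher.publish(topic, data)` returns a future backed by a background batching thread, and messages are sent when the batch's size or latency thresholds are reached (the thresholds are client batch settings), so the publish call itself should not delay the response.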
|
<python><google-cloud-platform><google-cloud-pubsub>
|
2024-03-03 19:47:45
| 0
| 757
|
Itai Bar
|
78,097,730
| 1,874,170
|
Calling SHGetKnownFolderPath from Python?
|
<p>I've written this minimal reproducible example to calculate the Desktop folder on Windows "the hard way" (using <code>SHGetKnownFolderPath</code>), but I seem to end up with a Success error code while the output buffer only yields <code>b'C'</code> when dereferenced via the <code>.value</code> property of <a href="https://docs.python.org/3/library/ctypes.html#ctypes.c_char_p" rel="nofollow noreferrer"><code>c_char_p</code></a>. What am I doing wrong?</p>
<p>My code does this:</p>
<ol>
<li>Converts the desired GUID into the cursed <a href="https://learn.microsoft.com/en-us/windows/win32/api/guiddef/ns-guiddef-guid" rel="nofollow noreferrer"><code>_GUID</code> struct format</a> according to Microsoft's specification</li>
<li>Allocates <code>result_ptr = c_char_p()</code> which is initially a NULL pointer but will be overwritten with the pointer to the result</li>
<li>Calls <a href="https://learn.microsoft.com/en-us/windows/win32/api/shlobj_core/nf-shlobj_core-shgetknownfolderpath" rel="nofollow noreferrer"><code>SHGetKnownFolderPath</code></a> with the desired GUID struct, no flags, on the current user, passing our <code>result_ptr</code> by reference so its value can be overwritten</li>
<li>If <code>SHGetKnownFolderPath</code> indicated success, dereferences <code>result_ptr</code> using <code>.value</code></li>
</ol>
<p>I'm getting a result which is only a single char long, but I thought that <code>c_char_p</code> is supposed to be the pointer to the start of a null-terminated string.</p>
<p>Is Windows writing a bogus string into my pointer, am I reading its value out wrongly, or have I made some other error in building my function?</p>
<pre class="lang-py prettyprint-override"><code>import contextlib
import ctypes
import ctypes.wintypes
import functools
import os
import pathlib
import types
import uuid
try:
    wintypes_GUID = ctypes.wintypes.GUID
except AttributeError:
    class wintypes_GUID(ctypes.Structure):
        # https://learn.microsoft.com/en-us/windows/win32/api/guiddef/ns-guiddef-guid
        # https://github.com/enthought/comtypes/blob/1.3.1/comtypes/GUID.py
        _fields_ = [
            ('Data1', ctypes.c_ulong),
            ('Data2', ctypes.c_ushort),
            ('Data3', ctypes.c_ushort),
            ('Data4', ctypes.c_ubyte * 8)
        ]

        @classmethod
        def _from_uuid(cls, u):
            u = uuid.UUID(u)
            u_str = f'{{{u!s}}}'
            result = wintypes_GUID()
            errno = ctypes.oledll.ole32.CLSIDFromString(u_str, ctypes.byref(result))
            if errno == 0:
                return result
            else:
                raise RuntimeError(f'CLSIDFromString returned error code {errno}')

DESKTOP_UUID = 'B4BFCC3A-DB2C-424C-B029-7FE99A87C641'

def get_known_folder(uuid):
    # FIXME this doesn't work, seemingly returning just b'C' no matter what
    result_ptr = ctypes.c_char_p()
    with _freeing(ctypes.oledll.ole32.CoTaskMemFree, result_ptr):
        errno = ctypes.windll.shell32.SHGetKnownFolderPath(
            ctypes.pointer(wintypes_GUID._from_uuid(uuid)),
            0,
            None,
            ctypes.byref(result_ptr)
        )
        if errno == 0:
            result = result_ptr.value
            if len(result) < 2:
                import warnings
                warnings.warn(f'result_ptr.value == {result!r}')
            return pathlib.Path(os.fsdecode(result))
        else:
            raise RuntimeError(f'Shell32.SHGetKnownFolderPath returned error code {errno}')

@contextlib.contextmanager
def _freeing(freefunc, obj):
    try:
        yield obj
    finally:
        freefunc(obj)

assert get_known_folder(DESKTOP_UUID) ==\
    pathlib.Path('~/Desktop').expanduser(),\
    f'Result: {get_known_folder(DESKTOP_UUID)!r}; expected: {pathlib.Path("~/Desktop").expanduser()!r}'
</code></pre>
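A result of exactly one byte (<code>b'C'</code>) is the classic symptom of reading a wide (UTF-16) string through a narrow char pointer: the second byte of <code>'C'</code> is 0x00, which <code>c_char_p</code> treats as the terminator. <code>SHGetKnownFolderPath</code> returns a <code>PWSTR</code>, so <code>c_wchar_p</code> is the matching type. A small cross-platform demonstration of the mismatch (hypothetical path, no Windows APIs involved):

```python
# Why a wide string read through c_char_p yields b'C': each wide character
# stores 'C' as 0x43 followed by zero byte(s), and a narrow char pointer
# stops at the first zero byte.
import ctypes

buf = ctypes.create_unicode_buffer("C:\\Users\\me\\Desktop")  # wide-char buffer

as_narrow = ctypes.cast(buf, ctypes.c_char_p)
as_wide = ctypes.cast(buf, ctypes.c_wchar_p)

print(as_narrow.value)  # b'C' -- truncated at the first NUL byte
print(as_wide.value)    # C:\Users\me\Desktop -- the full string
```

Declaring <code>result_ptr = ctypes.c_wchar_p()</code> (and dropping the <code>os.fsdecode</code>, since <code>.value</code> then already yields <code>str</code>) is the likely fix.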
|
<python><ctypes><c-strings><shell32.dll><lpwstr>
|
2024-03-03 19:45:57
| 3
| 1,117
|
JamesTheAwesomeDude
|
78,097,632
| 8,203,926
|
Concurrent Futures vs Asyncio Difference
|
<p>I am trying to optimize some Python code by making it asynchronous. To do that I tried the asyncio and concurrent.futures libraries.</p>
<p>Here are my codes:</p>
<pre><code>async def get_rds_instances(session, region, engine_types):
    report_rds = []
    mandatory_tags = {'Use-Case'}
    client = session.client('rds', region_name=region)
    try:
        await asyncio.sleep(1)
        response = client.describe_db_instances()
        report_rds.append(response)
    except (ClientError, Exception) as e:
        print(e)
    return report_rds

async def main():
    ... some arguments definition
    session = get_rds_session(profile_name)
    regions = session.get_available_regions('rds')
    try:
        reports = await asyncio.gather(*[get_rds_instances(session=session, region=region, engine_types=engine_types) for region in regions])
    except Exception as e:
        print(f"An error occurred: {str(e)}")
    ... process report

if __name__ == "__main__":
    asyncio.run(main())
</code></pre>
<p>Without asyncio this code completed in ~22 seconds. With asyncio it completed in ~20 seconds.</p>
<p>Yet, here the things getting interesting, I used concurrent.futures:</p>
<pre><code>def get_rds_instances(session, region, engine_types):
    report_rds = []
    mandatory_tags = {'Use-Case'}
    client = session.client('rds', region_name=region)
    try:
        response = client.describe_db_instances()
        report_rds.append(response)
    except (ClientError, Exception) as e:
        print(e)
    return report_rds

def main():
    ... some arguments definition
    session = get_rds_session(profile_name)
    regions = session.get_available_regions('rds')
    try:
        args = ((session, region, engine_types) for region in regions)
        with concurrent.futures.ThreadPoolExecutor() as executor:
            reports = executor.map(lambda p: get_rds_instances(*p), args)
    except Exception as e:
        print(f"An error occurred: {str(e)}")
    ... process report

if __name__ == "__main__":
    main()
</code></pre>
<p>And this code completed in ~3 seconds.</p>
<p>So, my question: is this difference normal, or am I missing something or doing something wrong with asyncio?</p>
<p>Edit:</p>
<p><a href="https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/rds/client/describe_db_instances.html" rel="nofollow noreferrer">describe_db_instances</a></p>
<p>Thanks!</p>
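One likely explanation: boto3's <code>describe_db_instances</code> is a blocking, synchronous call, so inside <code>asyncio.gather</code> each coroutine still executes its network call serially on the event loop thread (the <code>await asyncio.sleep(1)</code> yields, but the HTTP call itself never does), whereas <code>ThreadPoolExecutor</code> genuinely overlaps the blocking calls. asyncio can match that by pushing the blocking call onto worker threads with <code>asyncio.to_thread</code>; a library-free sketch where <code>blocking_call</code> stands in for the boto3 call:

```python
import asyncio
import time

def blocking_call(region):
    # Stand-in for a blocking SDK call such as client.describe_db_instances()
    time.sleep(0.2)
    return region

async def fetch_all(regions):
    # to_thread runs each blocking call on a worker thread, so gather
    # genuinely overlaps them instead of serialising them on the event loop.
    return await asyncio.gather(
        *(asyncio.to_thread(blocking_call, r) for r in regions)
    )

start = time.perf_counter()
results = asyncio.run(fetch_all(["us-east-1", "us-west-2", "eu-west-1"]))
elapsed = time.perf_counter() - start

print(results)
print(elapsed < 0.55)  # ~0.2 s concurrent rather than ~0.6 s sequential
```

The general rule: coroutines only interleave at genuine <code>await</code> points, so wrapping blocking library code in <code>async def</code> buys nothing by itself; either the library must be natively async (e.g. aioboto3) or the blocking calls must be delegated to threads.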
|
<python><asynchronous><python-asyncio><concurrent.futures>
|
2024-03-03 19:10:50
| 1
| 972
|
Umut TEKİN
|
78,097,600
| 5,370,979
|
Convert Python code to verify webhook request into C#
|
<p>I am following the documentation of a payment provider (Lemon Squeezy) where my application needs to verify that webhook requests are indeed coming from the provider. The documentation only has PHP, Node.js, and Python code examples. My application is written in C#.</p>
<p>This is the Python code provided in the documentation:</p>
<pre><code>import hashlib
import hmac
signature = request.META['HTTP_X_SIGNATURE']
secret = '[SIGNING_SECRET]'
digest = hmac.new(secret.encode(), request.body, hashlib.sha256).hexdigest()
if not hmac.compare_digest(digest, signature):
    raise Exception('Invalid signature.')
</code></pre>
<p>The signing secret is being used to generate a hash of the payload and send the hash in the X-Signature header of the request. The signing secret is a password that I have set on the provider side and I'm storing locally and accessing through the configuration dependency injection. The signature is retrieved from the header, and the request body taken from the request.</p>
<p>With the help of Copilot, and some research online this is what I have at the moment.</p>
<pre><code>var signature = Request.Headers["X-Signature"];
string secret = _configuration.GetValue<string>("Keys:LMWebhookSigningSecret");
string requestBody = await new StreamReader(HttpContext.Request.Body).ReadToEndAsync();

using (var hmac = new HMACSHA256(Encoding.UTF8.GetBytes(secret)))
{
    byte[] computedHash = hmac.ComputeHash(Encoding.UTF8.GetBytes(requestBody));
    string digest = BitConverter.ToString(computedHash).Replace("-", "").ToLower();

    if (!digest.Equals(signature))
    {
        throw new Exception("Invalid signature.");
    }
}
</code></pre>
<p>I'm not sure if my code is equivalent to the Python code but I think something is wrong because the digest is never equal to the signature.</p>
<p>This is the documentation btw: <a href="https://docs.lemonsqueezy.com/help/webhooks" rel="nofollow noreferrer">https://docs.lemonsqueezy.com/help/webhooks</a></p>
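A stdlib-only round-trip of the Python scheme may help pin down the C# mismatch: the digest must be computed over the exact raw body bytes. The secret and body below are made-up test values:

```python
# Provider signs the raw body bytes with HMAC-SHA256; the receiver verifies
# by recomputing the hex digest over the *same* bytes.
import hashlib
import hmac

secret = "test-signing-secret"
body = b'{"event":"order_created"}'  # raw request body bytes, untouched

# Provider side: the signature sent in the X-Signature header
signature = hmac.new(secret.encode(), body, hashlib.sha256).hexdigest()

# Receiver side: recompute and compare in constant time
digest = hmac.new(secret.encode(), body, hashlib.sha256).hexdigest()
print(hmac.compare_digest(digest, signature))  # True

# Any change to the bytes (re-serialization, extra whitespace, different
# encoding) produces a different digest:
tampered = hmac.new(secret.encode(), body + b"\n", hashlib.sha256).hexdigest()
print(hmac.compare_digest(tampered, signature))  # False
```

If the C# digest never matches, the usual suspects are <code>Request.Body</code> having already been consumed by model binding before <code>ReadToEndAsync</code> runs (yielding an empty string; buffering and rewinding the stream, or reading the body before binding, avoids this), or hashing a re-encoded copy of the body rather than the raw bytes the provider signed.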
|
<python><c#><webhooks><digital-signature>
|
2024-03-03 19:00:55
| 0
| 461
|
nerdalert
|
78,097,574
| 2,382,483
|
How to setup the netCDF4 package in multistage docker build?
|
<p>I have an existing dockerfile that runs a python program involving netCDF4. Here's a simplified version:</p>
<pre><code>ARG BASE_IMG=python:3.11-slim
ARG VENV="/opt/venv"
# ------------------------------ #
FROM $BASE_IMG
ARG VENV
RUN apt-get update && \
    apt-get upgrade && \
    apt-get install -y python3-dev libhdf5-dev libnetcdf-dev
RUN python -m venv $VENV
ENV PATH="$VENV/bin:$PATH"
RUN pip install numpy~=1.23.5 netcdf4~=1.6.4 h5py~=3.9.0
COPY test.py test.py
ENTRYPOINT ["python", "-m", "test"]
</code></pre>
<p>My full dockerfile involves some c++ compilation as well, and I want to convert this into a multistage build so the compilation tools don't end up in my final image. While I'm at it, I figured I could also <code>pip install</code> my python packages in the compile stage as well, and move the whole venv over to the final stage like so:</p>
<pre><code>ARG BASE_IMG=python:3.11-slim
ARG VENV="/opt/venv"
FROM $BASE_IMG as compile-image
ARG VENV
RUN apt-get update && \
    apt-get upgrade && \
    apt-get install -y python3-dev libhdf5-dev libnetcdf-dev
RUN python -m venv $VENV
ENV PATH="$VENV/bin:$PATH"
RUN pip install numpy~=1.23.5 netcdf4~=1.6.4 h5py~=3.9.0
# ------------------------------ #
FROM $BASE_IMG
ARG VENV
RUN apt-get update && \
    apt-get upgrade && \
    apt-get install -y libhdf5-dev libnetcdf-dev
COPY --from=compile-image $VENV $VENV
ENV PATH="$VENV/bin:$PATH"
COPY test.py test.py
ENTRYPOINT ["python", "-m", "test"]
</code></pre>
<p>This works great, <em>except</em> copying the netCDF4 package over this way seems to result in a large slow down in netcdf read/write operations. I can make an identical Dockerfile to the one above where I just install netCDF4 directly in the final stage, and I <em>don't</em> see this slow down, so I'm thinking there is some sort of external c lib the netCDF4 package is using that I also need to copy over. Does anyone know how to determine whether netCDF4 has linked to all its libs correctly, or what I need to copy over specifically to make this work?</p>
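To see whether the copied extension resolves the same shared libraries in both image variants, one option is to run <code>ldd</code> over the compiled extension in each image and diff the output. A diagnostic fragment for the final stage (an editor's sketch; the glob and venv path assume the python3.11 / <code>/opt/venv</code> layout above):

```dockerfile
# Temporary diagnostic: list the shared libraries the netCDF4 extension
# resolves, and fail the build if any of them are missing.
RUN SO=$(find /opt/venv/lib/python3.11/site-packages -name '_netCDF4*.so' | head -n 1) && \
    ldd "$SO" && \
    ! ldd "$SO" | grep -i 'not found'
```

If the <code>ldd</code> output differs between the two variants (different paths or missing entries), the copied venv is binding to different system libraries than the in-stage install did, which would be consistent with the slowdown.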
|
<python><docker><netcdf><netcdf4>
|
2024-03-03 18:52:08
| 1
| 3,557
|
Rob Allsopp
|
78,097,487
| 315,168
|
PyLance in Visual Studio Code does not recognise Poetry virtual env dependencies
|
<p>I am using Poetry to manage a Python project. I create a virtual environment for Poetry using a normal <code>poetry install</code> and <code>pyproject.toml</code> workflow. Visual Studio Code and its PyLance do not pick up project dependencies in Jupyter Notebook.</p>
<ul>
<li>Python stdlib modules are recognised</li>
<li>The modules of my application are recognised</li>
<li>The modules in the dependencies and libraries my application uses are not recognised</li>
</ul>
<p>Instead, you get an error</p>
<pre><code>Import "xxx" could not be resolved Pylance (reportMissingImports)
</code></pre>
<p>An example screenshot with some random imports that shows what is recognised and what is not (the tradeexecutor package is the Poetry project, while some random Python package dependencies are not recognised):</p>
<p><a href="https://i.sstatic.net/zkVPb.png" rel="noreferrer"><img src="https://i.sstatic.net/zkVPb.png" alt="enter image description here" /></a></p>
<p><a href="https://i.sstatic.net/LuPom.png" rel="noreferrer"><img src="https://i.sstatic.net/LuPom.png" alt="enter image description here" /></a></p>
<p>The notebook still runs fine within Visual Studio Code, so the problem is specific to PyLance, the virtual environment is definitely correctly set up.</p>
<p>Some Python Language Server output (if relevant):</p>
<pre><code>2024-03-01 10:15:40.628 [info] [Info - 10:15:40] (28928) Starting service instance "trade-executor"
2024-03-01 10:15:40.656 [info] [Info - 10:15:40] (28928) Setting pythonPath for service "trade-executor": "/Users/moo/code/ts/trade-executor"
2024-03-01 10:15:40.657 [info] [Info - 10:15:40] (28928) Setting environmentName for service "trade-executor": "3.10.13 (trade-executor-8Oz1GdY1-py3.10 venv)"
2024-03-01 10:15:40.657 [info] [Info - 10:15:40] (28928) Loading pyproject.toml file at /Users/moo/code/ts/trade-executor/pyproject.toml
2024-03-01 10:15:40.657 [info] [Info - 10:15:40] (28928) Pyproject file "/Users/moo/code/ts/trade-executor/pyproject.toml" has no "[tool.pyright]" section.
2024-03-01 10:15:41.064 [info] [Info - 10:15:41] (28928) Found 763 source files
2024-03-01 10:15:41.158 [info] [Info - 10:15:41] (28928) Background analysis(4) root directory: file:///Users/moo/.vscode/extensions/ms-python.vscode-pylance-2024.2.2/dist
2024-03-01 10:15:41.158 [info] [Info - 10:15:41] (28928) Background analysis(4) started
2024-03-01 10:15:41.411 [info] [Info - 10:15:41] (28928) Indexer background runner(5) root directory: file:///Users/moo/.vscode/extensions/ms-python.vscode-pylance-2024.2.2/dist (index)
2024-03-01 10:15:41.411 [info] [Info - 10:15:41] (28928) Indexing(5) started
2024-03-01 10:15:41.662 [info] [Info - 10:15:41] (28928) scanned(5) 1 files over 1 exec env
2024-03-01 10:15:42.326 [info] [Info - 10:15:42] (28928) indexed(5) 1 files over 1 exec
</code></pre>
<p>Also looks like PyLance correctly finds the virtual environment in the earlier Python Language Server output:</p>
<pre><code>2024-03-03 19:36:56.784 [info] [Info - 19:36:56] (41658) Pylance language server 2024.2.2 (pyright version 1.1.348, commit cfb1de0c) starting
2024-03-03 19:36:56.789 [info] [Info - 19:36:56] (41658) Server root directory: file:///Users/moo/.vscode/extensions/ms-python.vscode-pylance-2024.2.2/dist
2024-03-03 19:36:56.789 [info] [Info - 19:36:56] (41658) Starting service instance "trade-executor"
2024-03-03 19:36:57.091 [info] [Info - 19:36:57] (41658) Setting pythonPath for service "trade-executor": "/Users/moo/Library/Caches/pypoetry/virtualenvs/trade-executor-8Oz1GdY1-py3.10/bin/python"
2024-03-03 19:36:57.093 [info] [Info - 19:36:57] (41658) Setting environmentName for service "trade-executor": "3.10.13 (trade-executor-8Oz1GdY1-py3.10 venv)"
2024-03-03 19:36:57.096 [info] [Info - 19:36:57] (41658) Loading pyproject.toml file at /Users/moo/code/ts/trade-executor/pyproject.toml
</code></pre>
<p>How can I diagnose the issue further and then fix it?</p>
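One common remedy, assuming the interpreter path shown in the second log above is the right venv: pin the workspace interpreter explicitly so the notebook kernel and Pylance agree on the same environment (the path below is copied from the log's "Setting pythonPath" line; adjust for your machine):

```jsonc
// .vscode/settings.json (workspace)
{
  // Interpreter path taken from the Pylance log above
  "python.defaultInterpreterPath": "/Users/moo/Library/Caches/pypoetry/virtualenvs/trade-executor-8Oz1GdY1-py3.10/bin/python"
}
```

Note the two logs disagree: the first sets <code>pythonPath</code> to the project directory itself rather than an interpreter binary, which would explain why stdlib and workspace modules resolve while site-packages do not.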
|
<python><visual-studio-code><python-poetry><pylance>
|
2024-03-03 18:22:15
| 3
| 84,872
|
Mikko Ohtamaa
|
78,097,421
| 170,966
|
How to propagate opentelemetry span context to http request headers in B3 format?
|
<p>Our organization uses a few different tracing mechanisms. The prominent one is B3. But some services also use Datadog. For the purpose of this question, I am mainly concerned with B3.</p>
<p>I don't want to take a dependency on any specific vendor, so I imported the <code>opentelemetry</code> Python SDK.</p>
<p>Now, I want to propagate the span to an HTTP service call in B3 format.</p>
<p>How do I do that? I searched and found <code>py-zipkin</code>, which unfortunately is an <code>opentracing</code>-compatible lib, not <code>opentelemetry</code>.</p>
|
<python><b3>
|
2024-03-03 18:00:04
| 1
| 7,644
|
feroze
|
78,097,305
| 9,315,690
|
How can I call a method on a handler created by logging.config.dictConfig in Python?
|
<p>I'm trying to set up a particular logging scheme for an application I'm building. For it, I want to be able to rotate logs arbitrarily on a custom condition. As such, the built-in options of rotating based on time (using <code>TimedRotatingFileHandler</code>) or log size (using <code>RotatingFileHandler</code>) are not sufficient. Both <code>TimedRotatingFileHandler</code> and <code>RotatingFileHandler</code> do however have the method <code>doRollover</code> which I could use to implement what I want. The problem is that I'm using <code>logging.config.dictConfig</code> to set up my log configuration, like so:</p>
<pre><code>config = {
    "version": 1,
    "formatters": {
        "default_formatter": {
            "format": logging.BASIC_FORMAT,
        },
    },
    "handlers": {
        "file": {
            "class": "logging.handlers.RotatingFileHandler",
            "level": "NOTSET",
            "formatter": "default_formatter",
            "backupCount": 5,
            "filename": "log.txt",
            "encoding": "utf8",
        },
    },
    "loggers": {
        "": {
            "level": "NOTSET",
            "handlers": ["file"],
        },
    },
}
logging.config.dictConfig(config)
</code></pre>
<p>This way, <code>logging.config.dictConfig</code> is responsible for instantiating <code>RotatingFileHandler</code>, and so I never get the chance to retain a reference to the class instance (i.e., the object). As such, it is not clear how I could go about calling methods upon the object.</p>
<p>How could I go about calling a method (in my case, <code>doRollover</code>) on an object instantiated as a handler by <code>logging.config.dictConfig</code>? Alternatively, if that is not possible, how can I manually provide a handler object that I have instantiated by calling the constructor directly given this configuration?</p>
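In fact <code>dictConfig</code> attaches the handler objects it builds to the configured loggers, so the instance is reachable afterwards through the logger's <code>handlers</code> list (and, on Python 3.12+, via <code>logging.getHandlerByName("file")</code>). A runnable sketch of the question's configuration, using a temporary directory so it is self-contained:

```python
# Retrieve a dictConfig-created handler from the root logger and call
# doRollover() on it manually.
import logging
import logging.config
import logging.handlers
import os
import tempfile

logdir = tempfile.mkdtemp()
logfile = os.path.join(logdir, "log.txt")

config = {
    "version": 1,
    "handlers": {
        "file": {
            "class": "logging.handlers.RotatingFileHandler",
            "level": "NOTSET",
            "backupCount": 5,
            "filename": logfile,
        },
    },
    "loggers": {
        "": {"level": "NOTSET", "handlers": ["file"]},
    },
}
logging.config.dictConfig(config)

# The "" logger is the root logger, so the instance lives on its handlers list.
root = logging.getLogger()
handler = next(
    h for h in root.handlers
    if isinstance(h, logging.handlers.RotatingFileHandler)
)

logging.warning("before rollover")
handler.doRollover()  # rotate on your own arbitrary condition
logging.warning("after rollover")

print(sorted(os.listdir(logdir)))  # ['log.txt', 'log.txt.1']
```

This answers the alternative question too: there is no need to construct the handler manually, since the configured object itself is retrievable after the fact.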
|
<python><logging><python-logging><log-rotation>
|
2024-03-03 17:23:11
| 2
| 3,887
|
Newbyte
|