QuestionId int64 74.8M 79.8M | UserId int64 56 29.4M | QuestionTitle stringlengths 15 150 | QuestionBody stringlengths 40 40.3k | Tags stringlengths 8 101 | CreationDate stringdate 2022-12-10 09:42:47 2025-11-01 19:08:18 | AnswerCount int64 0 44 | UserExpertiseLevel int64 301 888k | UserDisplayName stringlengths 3 30 ⌀ |
|---|---|---|---|---|---|---|---|---|
77,330,113 | 10,595,871 | Bulk vectors in elasticsearch using python | <p>I'm new to elasticsearch and I think I'm missing some hints.
I'm trying to put some data into an index using python, that's my code:</p>
<pre><code>from elasticsearch import Elasticsearch, helpers
import json
df = pd.read_excel('analisi rna.xlsx')
</code></pre>
<p>data are something like this:</p>
<pre><code>name content doc_vector
0 some_text list of embeddings of the text in col 2
</code></pre>
<p>17 rows only</p>
<pre><code>client = Elasticsearch("https://something",
api_key="xxx" )
client.ping()
True
mapping = {
"mappings": {
"properties": {
"name": {
"type": "text"
},
"content": {
"type": "text"
},
"doc_vector": {
"type": "dense_vector",
"dims": 768
}
}
}
}
response = client.indices.create(
index="prova_vettori",
body=mapping,
ignore=400 # ignore 400 already exists code
)
response
ObjectApiResponse({'acknowledged': True, 'shards_acknowledged': True, 'index': 'prova_vettori'})
json_str = df.to_json(orient='records')
json_records = json.loads(json_str)
action_list = []
for row in json_records:
record ={
'_index': "prova_vettori",
'_source': row
}
action_list.append(record)
helpers.bulk(client, action_list)
BulkIndexError: 17 document(s) failed to index.
</code></pre>
<p>I don't understand what I am doing wrong. I've tried to follow many other similare questions (part of the code is infact copied and pasted from other answers), as well as youtube tutorial, but nothing seems to work.</p>
<p>Thanks!</p>
| <python><elasticsearch> | 2023-10-20 10:15:19 | 0 | 691 | Federicofkt |
77,329,890 | 9,893,918 | DAG-level Params for string and integer | <p>I have a Airflow <code>SparkSubmitOperator</code> and I want to add DAG-level Params to configure:</p>
<pre><code>executor_cores
executor_memory
application_args
</code></pre>
<p>I’ve wrote next code:</p>
<pre><code>dag = DAG(
'spark_submit_ex',
params={
"executor_cores": Param(2, type="integer", minimum=2),
"executor_memory": Param("4g", type="string"),
"some_id": Param("file_sink", type="string")
},
….
)
spark_job = SparkSubmitOperator(
executor_cores='{{ params.executor_cores }}',
driver_memory='{{ params.driver_memory }}',
application='0.0.1-SNAPSHOT.jar',
application_args=['{{ params.some_id }}'],
…..
</code></pre>
<p>In airflow logs I see the correct value for <code>application_args</code> and incorrect for <code>executor_cores</code> and <code>driver_memory</code>:</p>
<pre><code>- Spark-Submit cmd: spark-submit
…..
--executor-cores {{ params.executor_cores }} --executor-memory {{ params.executor_memory }}
….
0.0.1-SNAPSHOT.jar --workspaceId file_sink
</code></pre>
<p>I also tried to use double quotes ( <code>executor_cores='{{ params.executor_cores }}'</code>) and DAG did not start in this case.</p>
| <python><airflow> | 2023-10-20 09:42:18 | 1 | 835 | Vadim |
77,329,798 | 18,771,355 | SimCLR/ResNet18 : last fractional batch mecanism not functional ? (tensor shapes incompatible) | <p>I'm implementing a SimCLR/ResNet18 architecture over a custom dataset.</p>
<p>I know that</p>
<blockquote>
<p>Number of Iterations in One Epoch=Batch Size/Total Training Dataset Size</p>
</blockquote>
<p>And if the result is floating then the size of the last batch is adapted for the leftovers (the 'fractional batch').
However in my case, this last mechanism does not seem to work.
My dataset is of size 7000.
If I give a batch size of 100, I then have 7000/70=100 iterations, without fractional batch and the training goes on.
However, if I give a batch size of 32 for instance, then I have the following error (full stack trace)</p>
<pre class="lang-py prettyprint-override"><code>/home/wlutz/PycharmProjects/hiv-image-analysis/venv/bin/python /home/wlutz/PycharmProjects/hiv-image-analysis/main.py
2023-10-20 11:12:22.106008: I tensorflow/core/util/port.cc:111] oneDNN custom operations are on. You may see slightly different numerical results due to floating-point round-off errors from different computation orders. To turn them off, set the environment variable `TF_ENABLE_ONEDNN_OPTS=0`.
2023-10-20 11:12:22.107921: I tensorflow/tsl/cuda/cudart_stub.cc:28] Could not find cuda drivers on your machine, GPU will not be used.
2023-10-20 11:12:22.133919: E tensorflow/compiler/xla/stream_executor/cuda/cuda_dnn.cc:9342] Unable to register cuDNN factory: Attempting to register factory for plugin cuDNN when one has already been registered
2023-10-20 11:12:22.133941: E tensorflow/compiler/xla/stream_executor/cuda/cuda_fft.cc:609] Unable to register cuFFT factory: Attempting to register factory for plugin cuFFT when one has already been registered
2023-10-20 11:12:22.133955: E tensorflow/compiler/xla/stream_executor/cuda/cuda_blas.cc:1518] Unable to register cuBLAS factory: Attempting to register factory for plugin cuBLAS when one has already been registered
2023-10-20 11:12:22.138715: I tensorflow/core/platform/cpu_feature_guard.cc:182] This TensorFlow binary is optimized to use available CPU instructions in performance-critical operations.
To enable the following instructions: AVX2 AVX512F AVX512_VNNI FMA, in other operations, rebuild TensorFlow with the appropriate compiler flags.
2023-10-20 11:12:22.737271: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Could not find TensorRT
/home/wlutz/PycharmProjects/hiv-image-analysis/venv/lib/python3.9/site-packages/pl_bolts/__init__.py:11: FutureWarning: In the future `np.object` will be defined as the corresponding NumPy scalar.
if not hasattr(numpy, tp_name):
/home/wlutz/PycharmProjects/hiv-image-analysis/venv/lib/python3.9/site-packages/pl_bolts/__init__.py:11: FutureWarning: In the future `np.bool` will be defined as the corresponding NumPy scalar.
if not hasattr(numpy, tp_name):
/home/wlutz/PycharmProjects/hiv-image-analysis/venv/lib/python3.9/site-packages/pl_bolts/models/self_supervised/amdim/amdim_module.py:34: UnderReviewWarning: The feature generate_power_seq is currently marked under review. The compatibility with other Lightning projects is not guaranteed and API may change at any time. The API and functionality may change without warning in future releases. More details: https://lightning-bolts.readthedocs.io/en/latest/stability.html
"lr_options": generate_power_seq(LEARNING_RATE_CIFAR, 11),
/home/wlutz/PycharmProjects/hiv-image-analysis/venv/lib/python3.9/site-packages/pl_bolts/models/self_supervised/amdim/amdim_module.py:92: UnderReviewWarning: The feature FeatureMapContrastiveTask is currently marked under review. The compatibility with other Lightning projects is not guaranteed and API may change at any time. The API and functionality may change without warning in future releases. More details: https://lightning-bolts.readthedocs.io/en/latest/stability.html
contrastive_task: Union[FeatureMapContrastiveTask] = FeatureMapContrastiveTask("01, 02, 11"),
/home/wlutz/PycharmProjects/hiv-image-analysis/venv/lib/python3.9/site-packages/pl_bolts/losses/self_supervised_learning.py:228: UnderReviewWarning: The feature AmdimNCELoss is currently marked under review. The compatibility with other Lightning projects is not guaranteed and API may change at any time. The API and functionality may change without warning in future releases. More details: https://lightning-bolts.readthedocs.io/en/latest/stability.html
self.nce_loss = AmdimNCELoss(tclip)
available_gpus: 0
/home/wlutz/PycharmProjects/hiv-image-analysis/venv/lib/python3.9/site-packages/torchvision/models/_utils.py:208: UserWarning: The parameter 'pretrained' is deprecated since 0.13 and may be removed in the future, please use 'weights' instead.
warnings.warn(
/home/wlutz/PycharmProjects/hiv-image-analysis/venv/lib/python3.9/site-packages/torchvision/models/_utils.py:223: UserWarning: Arguments other than a weight enum or `None` for 'weights' are deprecated since 0.13 and may be removed in the future. The current behavior is equivalent to passing `weights=None`.
warnings.warn(msg)
Dim MLP input: 512
/home/wlutz/PycharmProjects/hiv-image-analysis/venv/lib/python3.9/site-packages/pytorch_lightning/trainer/connectors/accelerator_connector.py:478: LightningDeprecationWarning: Setting `Trainer(gpus=0)` is deprecated in v1.7 and will be removed in v2.0. Please use `Trainer(accelerator='gpu', devices=0)` instead.
rank_zero_deprecation(
GPU available: False, used: False
TPU available: False, using: 0 TPU cores
IPU available: False, using: 0 IPUs
HPU available: False, using: 0 HPUs
/home/wlutz/PycharmProjects/hiv-image-analysis/venv/lib/python3.9/site-packages/pytorch_lightning/callbacks/model_checkpoint.py:613: UserWarning: Checkpoint directory /home/wlutz/PycharmProjects/hiv-image-analysis/saved_models exists and is not empty.
rank_zero_warn(f"Checkpoint directory {dirpath} exists and is not empty.")
/home/wlutz/PycharmProjects/hiv-image-analysis/main.py:330: UnderReviewWarning: The feature LinearWarmupCosineAnnealingLR is currently marked under review. The compatibility with other Lightning projects is not guaranteed and API may change at any time. The API and functionality may change without warning in future releases. More details: https://lightning-bolts.readthedocs.io/en/latest/stability.html
scheduler_warmup = LinearWarmupCosineAnnealingLR(optimizer, warmup_epochs=10, max_epochs=max_epochs,
| Name | Type | Params
------------------------------------------
0 | model | AddProjection | 11.5 M
1 | loss | ContrastiveLoss | 0
------------------------------------------
11.5 M Trainable params
0 Non-trainable params
11.5 M Total params
46.024 Total estimated model params size (MB)
Optimizer Adam, Learning Rate 0.0003, Effective batch size 160
Epoch 0: 100%|█████████▉| 218/219 [04:03<00:01, 1.12s/it, loss=3.74, v_num=58, Contrastive loss_step=3.650]Traceback (most recent call last):
File "/home/wlutz/PycharmProjects/hiv-image-analysis/main.py", line 388, in <module>
trainer.fit(model, data_loader)
File "/home/wlutz/PycharmProjects/hiv-image-analysis/venv/lib/python3.9/site-packages/pytorch_lightning/trainer/trainer.py", line 608, in fit
call._call_and_handle_interrupt(
File "/home/wlutz/PycharmProjects/hiv-image-analysis/venv/lib/python3.9/site-packages/pytorch_lightning/trainer/call.py", line 38, in _call_and_handle_interrupt
return trainer_fn(*args, **kwargs)
File "/home/wlutz/PycharmProjects/hiv-image-analysis/venv/lib/python3.9/site-packages/pytorch_lightning/trainer/trainer.py", line 650, in _fit_impl
self._run(model, ckpt_path=self.ckpt_path)
File "/home/wlutz/PycharmProjects/hiv-image-analysis/venv/lib/python3.9/site-packages/pytorch_lightning/trainer/trainer.py", line 1112, in _run
results = self._run_stage()
File "/home/wlutz/PycharmProjects/hiv-image-analysis/venv/lib/python3.9/site-packages/pytorch_lightning/trainer/trainer.py", line 1191, in _run_stage
self._run_train()
File "/home/wlutz/PycharmProjects/hiv-image-analysis/venv/lib/python3.9/site-packages/pytorch_lightning/trainer/trainer.py", line 1214, in _run_train
self.fit_loop.run()
File "/home/wlutz/PycharmProjects/hiv-image-analysis/venv/lib/python3.9/site-packages/pytorch_lightning/loops/loop.py", line 199, in run
self.advance(*args, **kwargs)
File "/home/wlutz/PycharmProjects/hiv-image-analysis/venv/lib/python3.9/site-packages/pytorch_lightning/loops/fit_loop.py", line 267, in advance
self._outputs = self.epoch_loop.run(self._data_fetcher)
File "/home/wlutz/PycharmProjects/hiv-image-analysis/venv/lib/python3.9/site-packages/pytorch_lightning/loops/loop.py", line 199, in run
self.advance(*args, **kwargs)
File "/home/wlutz/PycharmProjects/hiv-image-analysis/venv/lib/python3.9/site-packages/pytorch_lightning/loops/epoch/training_epoch_loop.py", line 213, in advance
batch_output = self.batch_loop.run(kwargs)
File "/home/wlutz/PycharmProjects/hiv-image-analysis/venv/lib/python3.9/site-packages/pytorch_lightning/loops/loop.py", line 199, in run
self.advance(*args, **kwargs)
File "/home/wlutz/PycharmProjects/hiv-image-analysis/venv/lib/python3.9/site-packages/pytorch_lightning/loops/batch/training_batch_loop.py", line 88, in advance
outputs = self.optimizer_loop.run(optimizers, kwargs)
File "/home/wlutz/PycharmProjects/hiv-image-analysis/venv/lib/python3.9/site-packages/pytorch_lightning/loops/loop.py", line 199, in run
self.advance(*args, **kwargs)
File "/home/wlutz/PycharmProjects/hiv-image-analysis/venv/lib/python3.9/site-packages/pytorch_lightning/loops/optimization/optimizer_loop.py", line 202, in advance
result = self._run_optimization(kwargs, self._optimizers[self.optim_progress.optimizer_position])
File "/home/wlutz/PycharmProjects/hiv-image-analysis/venv/lib/python3.9/site-packages/pytorch_lightning/loops/optimization/optimizer_loop.py", line 249, in _run_optimization
self._optimizer_step(optimizer, opt_idx, kwargs.get("batch_idx", 0), closure)
File "/home/wlutz/PycharmProjects/hiv-image-analysis/venv/lib/python3.9/site-packages/pytorch_lightning/loops/optimization/optimizer_loop.py", line 370, in _optimizer_step
self.trainer._call_lightning_module_hook(
File "/home/wlutz/PycharmProjects/hiv-image-analysis/venv/lib/python3.9/site-packages/pytorch_lightning/trainer/trainer.py", line 1356, in _call_lightning_module_hook
output = fn(*args, **kwargs)
File "/home/wlutz/PycharmProjects/hiv-image-analysis/venv/lib/python3.9/site-packages/pytorch_lightning/core/module.py", line 1754, in optimizer_step
optimizer.step(closure=optimizer_closure)
File "/home/wlutz/PycharmProjects/hiv-image-analysis/venv/lib/python3.9/site-packages/pytorch_lightning/core/optimizer.py", line 169, in step
step_output = self._strategy.optimizer_step(self._optimizer, self._optimizer_idx, closure, **kwargs)
File "/home/wlutz/PycharmProjects/hiv-image-analysis/venv/lib/python3.9/site-packages/pytorch_lightning/strategies/strategy.py", line 234, in optimizer_step
return self.precision_plugin.optimizer_step(
File "/home/wlutz/PycharmProjects/hiv-image-analysis/venv/lib/python3.9/site-packages/pytorch_lightning/plugins/precision/precision_plugin.py", line 119, in optimizer_step
return optimizer.step(closure=closure, **kwargs)
File "/home/wlutz/PycharmProjects/hiv-image-analysis/venv/lib/python3.9/site-packages/torch/optim/lr_scheduler.py", line 68, in wrapper
return wrapped(*args, **kwargs)
File "/home/wlutz/PycharmProjects/hiv-image-analysis/venv/lib/python3.9/site-packages/torch/optim/optimizer.py", line 373, in wrapper
out = func(*args, **kwargs)
File "/home/wlutz/PycharmProjects/hiv-image-analysis/venv/lib/python3.9/site-packages/torch/optim/optimizer.py", line 76, in _use_grad
ret = func(self, *args, **kwargs)
File "/home/wlutz/PycharmProjects/hiv-image-analysis/venv/lib/python3.9/site-packages/torch/optim/adam.py", line 143, in step
loss = closure()
File "/home/wlutz/PycharmProjects/hiv-image-analysis/venv/lib/python3.9/site-packages/pytorch_lightning/plugins/precision/precision_plugin.py", line 105, in _wrap_closure
closure_result = closure()
File "/home/wlutz/PycharmProjects/hiv-image-analysis/venv/lib/python3.9/site-packages/pytorch_lightning/loops/optimization/optimizer_loop.py", line 149, in __call__
self._result = self.closure(*args, **kwargs)
File "/home/wlutz/PycharmProjects/hiv-image-analysis/venv/lib/python3.9/site-packages/pytorch_lightning/loops/optimization/optimizer_loop.py", line 135, in closure
step_output = self._step_fn()
File "/home/wlutz/PycharmProjects/hiv-image-analysis/venv/lib/python3.9/site-packages/pytorch_lightning/loops/optimization/optimizer_loop.py", line 419, in _training_step
training_step_output = self.trainer._call_strategy_hook("training_step", *kwargs.values())
File "/home/wlutz/PycharmProjects/hiv-image-analysis/venv/lib/python3.9/site-packages/pytorch_lightning/trainer/trainer.py", line 1494, in _call_strategy_hook
output = fn(*args, **kwargs)
File "/home/wlutz/PycharmProjects/hiv-image-analysis/venv/lib/python3.9/site-packages/pytorch_lightning/strategies/strategy.py", line 378, in training_step
return self.model.training_step(*args, **kwargs)
File "/home/wlutz/PycharmProjects/hiv-image-analysis/main.py", line 316, in training_step
loss = self.loss(z1, z2)
File "/home/wlutz/PycharmProjects/hiv-image-analysis/venv/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/home/wlutz/PycharmProjects/hiv-image-analysis/venv/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1527, in _call_impl
return forward_call(*args, **kwargs)
File "/home/wlutz/PycharmProjects/hiv-image-analysis/main.py", line 243, in forward
denominator = device_as(self.mask, similarity_matrix) * torch.exp(similarity_matrix / self.temperature)
RuntimeError: The size of tensor a (64) must match the size of tensor b (48) at non-singleton dimension 1
Process finished with exit code 1
</code></pre>
<p>Here is some code (error happens at last line):</p>
<pre class="lang-py prettyprint-override"><code>train_config = Hparams()
reproducibility(train_config)
model = SimCLR_pl(train_config, model=resnet18(pretrained=False), feat_dim=512)
transform = Augment(train_config.img_size)
data_loader = get_stl_dataloader(train_config.batch_size, transform)
accumulator = GradientAccumulationScheduler(scheduling={0: train_config.gradient_accumulation_steps})
checkpoint_callback = ModelCheckpoint(filename=filename, dirpath=save_model_path, every_n_epochs=2,
save_last=True, save_top_k=2, monitor='Contrastive loss_epoch', mode='min')
trainer = Trainer(callbacks=[accumulator, checkpoint_callback],
gpus=available_gpus,
max_epochs=train_config.epochs)
trainer.fit(model, data_loader)
</code></pre>
<p>and here are my classes:</p>
<pre class="lang-py prettyprint-override"><code>class Hparams:
def __init__(self):
self.epochs = 10 # number of training epochs
self.seed = 33333 # randomness seed
self.cuda = True # use nvidia gpu
self.img_size = 224 # image shape
self.save = "./saved_models/" # save checkpoint
self.load = False # load pretrained checkpoint
self.gradient_accumulation_steps = 5 # gradient accumulation steps
self.batch_size = 70
self.lr = 3e-4 # for ADAm only
self.weight_decay = 1e-6
self.embedding_size = 128 # papers value is 128
self.temperature = 0.5 # 0.1 or 0.5
self.checkpoint_path = '/media/wlutz/TOSHIBA EXT/Image Analysis/VIH PROJECT/models' # replace checkpoint path here
class SimCLR_pl(pl.LightningModule):
def __init__(self, config, model=None, feat_dim=512):
super().__init__()
self.config = config
self.model = AddProjection(config, model=model, mlp_dim=feat_dim)
self.loss = ContrastiveLoss(config.batch_size, temperature=self.config.temperature)
def forward(self, X):
return self.model(X)
def training_step(self, batch, batch_idx):
(x1, x2) = batch
z1 = self.model(x1)
z2 = self.model(x2)
loss = self.loss(z1, z2)
self.log('Contrastive loss', loss, on_step=True, on_epoch=True, prog_bar=True, logger=True)
return loss
def configure_optimizers(self):
max_epochs = int(self.config.epochs)
param_groups = define_param_groups(self.model, self.config.weight_decay, 'adam')
lr = self.config.lr
optimizer = Adam(param_groups, lr=lr, weight_decay=self.config.weight_decay)
print(f'Optimizer Adam, '
f'Learning Rate {lr}, '
f'Effective batch size {self.config.batch_size * self.config.gradient_accumulation_steps}')
scheduler_warmup = LinearWarmupCosineAnnealingLR(optimizer, warmup_epochs=10, max_epochs=max_epochs,
warmup_start_lr=0.0)
return [optimizer], [scheduler_warmup]
class AddProjection(nn.Module):
def __init__(self, config, model=None, mlp_dim=512):
super(AddProjection, self).__init__()
embedding_size = config.embedding_size
self.backbone = default(model, models.resnet18(pretrained=False, num_classes=config.embedding_size))
mlp_dim = default(mlp_dim, self.backbone.fc.in_features)
print('Dim MLP input:', mlp_dim)
self.backbone.fc = nn.Identity()
# add mlp projection head
self.projection = nn.Sequential(
nn.Linear(in_features=mlp_dim, out_features=mlp_dim),
nn.BatchNorm1d(mlp_dim),
nn.ReLU(),
nn.Linear(in_features=mlp_dim, out_features=embedding_size),
nn.BatchNorm1d(embedding_size),
)
def forward(self, x, return_embedding=False):
embedding = self.backbone(x)
if return_embedding:
return embedding
return self.projection(embedding)
class ContrastiveLoss(nn.Module):
"""
Vanilla Contrastive loss, also called InfoNceLoss as in SimCLR paper
"""
def __init__(self, batch_size, temperature=0.5):
super().__init__()
self.batch_size = batch_size
self.temperature = temperature
self.mask = (~torch.eye(batch_size * 2, batch_size * 2, dtype=bool)).float()
def calc_similarity_batch(self, a, b):
representations = torch.cat([a, b], dim=0)
similarity_matrix = F.cosine_similarity(representations.unsqueeze(1), representations.unsqueeze(0), dim=2)
return similarity_matrix
def forward(self, proj_1, proj_2):
"""
proj_1 and proj_2 are batched embeddings [batch, embedding_dim]
where corresponding indices are pairs
z_i, z_j in the SimCLR paper
"""
batch_size = proj_1.shape[0]
z_i = F.normalize(proj_1, p=2, dim=1)
z_j = F.normalize(proj_2, p=2, dim=1)
similarity_matrix = self.calc_similarity_batch(z_i, z_j)
sim_ij = torch.diag(similarity_matrix, batch_size)
sim_ji = torch.diag(similarity_matrix, -batch_size)
positives = torch.cat([sim_ij, sim_ji], dim=0)
nominator = torch.exp(positives / self.temperature)
# print(" sim matrix ", similarity_matrix.shape)
# print(" device ", device_as(self.mask, similarity_matrix).shape, " torch exp ", torch.exp(similarity_matrix / self.temperature).shape)
denominator = device_as(self.mask, similarity_matrix) * torch.exp(similarity_matrix / self.temperature)
all_losses = -torch.log(nominator / torch.sum(denominator, dim=1))
loss = torch.sum(all_losses) / (2 * self.batch_size)
return loss
class ImageDataResourceDataset(VisionDataset):
train_list = ['train_X_v1.bin', ]
test_list = ['test_X_v1.bin', ]
def __init__(self, root: str, transform: Optional[Callable] = None, ):
super().__init__(root=root, transform=transform)
self.data = self.__loadfile(self.train_list[0])
def __len__(self) -> int:
return self.data.shape[0]
def __getitem__(self, idx):
img = self.data[idx]
img = np.transpose(img, (1, 2, 0))
img = Image.fromarray(img)
img = self.transform(img)
return img
def __loadfile(self, data_file: str) -> np.ndarray:
path_to_data = os.path.join(os.getcwd(), 'datasets', data_file)
everything = np.fromfile(path_to_data, dtype=np.uint8)
images = np.reshape(everything, (-1, 3, 224, 224))
images = np.transpose(images, (0, 1, 3, 2))
return images
</code></pre>
<p>For records, my dataset has 7000 RGB images of size 224x224.</p>
<p>How come my last 'fractional' batch is not supported ?
Many thanks for your help.</p>
| <python><deep-learning><pytorch><resnet> | 2023-10-20 09:30:00 | 1 | 316 | Willy Lutz |
77,329,701 | 1,682,470 | Sphinx: custom Pygments' lexer not found | <p>I created <code>Pygments</code> customized lexer and style:</p>
<ul>
<li><code>acetexlexer.py</code> (lexer file),</li>
<li><code>acedracula.py</code> (style file),</li>
</ul>
<p>that work pretty well since the following command returns the expected result:</p>
<pre><code>pygmentize -O style=acedracula -x -l acetexlexer.py:AceTexLexer test.tex
</code></pre>
<p><a href="https://i.sstatic.net/eg4yE.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/eg4yE.png" alt="enter image description here" /></a></p>
<p>But I can't get them to work with Sphinx, despite extensive research on the Internet.</p>
<p>I tried for instance the following setup, partially based on:</p>
<ul>
<li><a href="https://github.com/AdaCore/aunit/blob/v21.0.0/doc/share/ada_pygments.py" rel="nofollow noreferrer">https://github.com/AdaCore/aunit/blob/v21.0.0/doc/share/ada_pygments.py</a></li>
<li><a href="https://github.com/AdaCore/aunit/blob/v21.0.0/doc/share/conf.py" rel="nofollow noreferrer">https://github.com/AdaCore/aunit/blob/v21.0.0/doc/share/conf.py</a></li>
</ul>
<p>that I found from:</p>
<p><a href="https://github.com/sphinx-doc/sphinx/issues/9544" rel="nofollow noreferrer">https://github.com/sphinx-doc/sphinx/issues/9544</a></p>
<ol>
<li><p>At the root of the <code>source</code> of the project, I created a <code>_pygments</code> subdirectory containing the lexer and style files
(<code>acetexlexer.py</code> and <code>acedracula.py</code>).</p>
</li>
<li><p>The relevant lines of <code>acetexlexer.py</code> are:</p>
<pre><code> from pygments.lexer import inherit, bygroups
from pygments.token import Text, Comment, Operator, Keyword, Name, String, \
Number, Punctuation, Generic, Other, Whitespace
from pygments.lexers.markup import TexLexer
__all__ = ['AceTexLexer']
class AceTexLexer(TexLexer):
aliases = ['xtex', 'xlatex']
</code></pre>
</li>
<li><p>The relevant lines of my <code>conf.py</code> are:</p>
<pre><code> import os
import sys
sys.path.insert(0, os.path.abspath('.'))
sys.path.append(os.path.abspath("./_ext"))
sys.path.append(os.path.abspath("./_pygments"))
def setup(app):
from acetexlexer import AceTexLexer
app.add_lexer('xlatex', acetexlexer.AceTexLexer)
</code></pre>
</li>
</ol>
<p>But, when I run:</p>
<pre><code>sphinx-build -v -j auto source build/html source/test.md
</code></pre>
<p>I get:</p>
<blockquote>
<p>[...]/source/test.md:16: WARNING: Pygments lexer name 'xlatex' is not known.</p>
</blockquote>
<p>What am I doing wrong?</p>
| <python><python-sphinx><lexer><pygments> | 2023-10-20 09:14:09 | 0 | 674 | Denis Bitouzé |
77,329,606 | 4,795,075 | Save custom field in Keras model | <p>Consider the situation in which I have a trained Keras Sequential model. I save the model using</p>
<pre><code>keras.saving.save_model(model, path, save_format="...")
</code></pre>
<p>Before saving, however, I set a custom <code>list[str]</code> attribute in the model this way:</p>
<pre><code>setattr(model, "custom_attr", ["one", "two", "three"])
</code></pre>
<p>And finally, when I reload the model object (from another project) with <code>keras.saving.load_model</code>, I would like to have my custom attribute available via <code>model.custom_attr</code>. However, this doesn't work as custom_attr doesn't exist anymore after reloading the model.</p>
<p><strong>Is there any way to do that?</strong></p>
<p>I looked up a bit and it seems you can specify a <code>custom_objects</code> parameter when reloading the model, but that method seems to be limited to custom layers or custom loss functions defined in a custom model class. My setting is completely different as I have a normal <code>Sequential</code> model.</p>
| <python><keras> | 2023-10-20 08:58:54 | 2 | 356 | leqo |
77,329,554 | 1,142,881 | What's the difference between connection and connect, and when should I use one over the other? | <p>I'm using peewee extensively with the <code>play_house.db_url.connect</code> to connect to a database. This method allows for a lot of flexibility and it is very straightforward when opening a single connection to a database i.e., open and close a single connection.</p>
<p>However, this gets unclear when attempting to reuse the Pooled versions of the URL. For instance, if I do:</p>
<pre><code>from playhouse.db_url import connect
db_url = 'postgresql+pool://...?max_connections=20&stale_timeout=300'
db = connect(db_url)
</code></pre>
<p>what is db ?? a single connection or a connection pool? in case it is the later, how do I from a multi-threaded application e.g. Flask, acquire a separate connection from the pool? using <a href="https://docs.peewee-orm.com/en/latest/peewee/api.html#Database.connect" rel="nofollow noreferrer">connect</a> or <a href="https://docs.peewee-orm.com/en/latest/peewee/api.html#Database.connection" rel="nofollow noreferrer">connection</a>? which one and why?</p>
<p>or do I instead, every time I need a new connection should do over again? or is this creating a new separate pool?</p>
<pre><code>db = connect(db_url)
</code></pre>
<p>and if so will calling <code>db.close_all()</code> apply to all the opened connections?</p>
| <python><peewee> | 2023-10-20 08:51:00 | 1 | 14,469 | SkyWalker |
77,329,511 | 13,560,598 | usage of tf.gather to index lists containing non-Tensor types | <p>Consider the following code. I'd like to know how I can gather non-Tensor types from a list.</p>
<pre><code>import tensorflow as tf
class Point(tf.experimental.ExtensionType):
xx: tf.Tensor
def __init__(self,xx):
self.xx = xx
super().__init__()
list1 = [ 1, 2, 3, 4]
list2 = [ Point(1), Point(2), Point(3), Point(4) ]
# this works
out1 = tf.gather(list1,[0,2])
print('First gather ',out1)
# this throws: ValueError: Attempt to convert a value (Point(xx=<tf.Tensor:
# shape=(), dtype=int32, numpy=1>)) with an unsupported type
# (<class '__main__.Point'>) to a Tensor.
out2 = tf.gather(list2,[0,2])
print('Second gather ',out2)
</code></pre>
| <python><list><tensorflow> | 2023-10-20 08:45:34 | 3 | 593 | NNN |
77,329,213 | 10,737,147 | Import DLL | Ctypes | <p>I am trying to import the functions of the DLL comes in the package XFLR5.
<a href="https://sourceforge.net/projects/xflr5/files/6.59/" rel="nofollow noreferrer">https://sourceforge.net/projects/xflr5/files/6.59/</a></p>
<p>I have 2 questions.</p>
<ol>
<li>The function names look very different to what is being declared in the source file
<a href="https://sourceforge.net/p/xflr5/code/HEAD/tree/trunk/xflr5/XFoil-lib/xfoil.cpp" rel="nofollow noreferrer">https://sourceforge.net/p/xflr5/code/HEAD/tree/trunk/xflr5/XFoil-lib/xfoil.cpp</a>
Some functions are listed below which came out from</li>
</ol>
<p>objdump -p XFoil.dll</p>
<pre><code> [ 0] ??0XFoil@@QEAA@AEBV0@@Z
[ 1] ??0XFoil@@QEAA@XZ
[ 2] ??1XFoil@@UEAA@XZ
[ 3] ??4XFoil@@QEAAAEAV0@AEBV0@@Z
[ 4] ??_7XFoil@@6B@
[ 5] ?CheckAngles@XFoil@@QEAA_NXZ
[ 6] ?ClSpec@XFoil@@QEBANXZ
[ 7] ?DeRotate@XFoil@@QEAANXZ
[ 8] ?ExecMDES@XFoil@@QEAAXXZ
[ 9] ?ExecQDES@XFoil@@QEAA_NXZ
[ 10] ?Gauss@XFoil@@AEAA_NHQEAY05NQEAN@Z
</code></pre>
<p>Given the above, What is the method from above list corresponds to the constructor of this class</p>
<pre><code>XFoil::XFoil()
{
m_pOutStream = nullptr;
//------ primary dimensioning limit parameters
</code></pre>
<ol start="2">
<li>When I import the DLL and, try to call function naca4 I get the below error.</li>
</ol>
<pre><code>void XFoil::naca4(int ides, int nside)
{
int n1=0, n2=0, n3=0, n4=0, ib=0, i=0;
// double xx[nside], yt[nside], yc[nside], xb[2*nside], yb[2*nside]
</code></pre>
<pre><code>from ctypes import *
lib= WinDLL("XFoil.dll")
naca4 = getattr(lib, '?naca4@XFoil@@QEAAXHH@Z')
</code></pre>
<pre><code># >>> naca4(2412,100)
# Traceback (most recent call last):
# File "<stdin>", line 1, in <module>
# OSError: exception: access violation writing 0x000000000009314C
</code></pre>
<p>Could someone please point out what is causing this error and how to avoid this ?
Also, do I have to use getattr(lib, 'xxxx') for each and every function before calling them or will they become apparent automatically when the constructor is called ?</p>
| <python><dll><ctypes> | 2023-10-20 07:55:43 | 0 | 437 | XYZ |
77,329,074 | 16,473,860 | Python's "No module named 'ultralytics'" Error | <p>I've written a simple code using yolov5 and opencv.
I then deployed it into an exe file using pyinstaller.
However, this deployed file exits immediately upon execution.
When I ran this file directly from the terminal, I encountered an error stating <strong>"No module named 'ultralytics'"</strong>.</p>
<p>Below is my code.</p>
<pre><code>import cv2
import torch
from PIL import Image
import os
# model load
model = torch.hub.load('ultralytics/yolov5', 'custom', path='model.pt')
# opencv cam
cap = cv2.VideoCapture(0)
while True:
ret, frame = cap.read()
if not ret:
break
pil_image = Image.fromarray(frame)
results = model(pil_image)
output_frame = results.render()[0]
cv2.imshow('Object Detection', output_frame)
if cv2.waitKey(1) & 0xFF == ord('q'):
break
cap.release()
cv2.destroyAllWindows()
</code></pre>
| <python><deep-learning><pytorch><yolo><yolov5> | 2023-10-20 07:33:09 | 1 | 747 | Kevin Yang |
77,329,023 | 2,005,559 | Parsing a python dict to a list of values | <p>I am writing a python code, to parse bibtex file:</p>
<pre><code>import bibtexparser #to be installed with pip --pre
bibtex_str = """
@ARTICLE{Cesar2013,
author = {Jean César},
title = {An amazing title},
year = {2013},
volume = {12},
pages = {12--23},
journal = {Nice Journal}
}
"""
db = bibtexparser.parse_string(bibtex_str)
print((db.entries[0].fields_dict))
tuples = tuple([
db.entries[0].fields_dict.get(entry)
for entry in ["id", "author", "title"]
])
print(tuples)
</code></pre>
<p>The result is:</p>
<pre><code>{'author': Field(key=`author`, value=`Jean César`, start_line=2), 'title': Field(key=`title`, value=`An amazing title`, start_line=3), 'year': Field(key=`year`, value=`2013`, start_line=4), 'volume': Field(key=`volume`, value=`12`, start_line=5), 'pages': Field(key=`pages`, value=`12--23`, start_line=6), 'journal': Field(key=`journal`, value=`Nice Journal`, start_line=7)}
(None, Field(key=`author`, value=`Jean César`, start_line=2), Field(key=`title`, value=`An amazing title`, start_line=3))
</code></pre>
<p>But, I want the final tuple just as values, eg:</p>
<pre><code>(None, 'Jean Cesar', `An amazing title`)
</code></pre>
<p>I know how to deal with a key-val dict. But I have no idea what the Field is doing here.</p>
<p>How can I get this?</p>
| <python> | 2023-10-20 07:22:34 | 1 | 3,260 | BaRud |
77,328,794 | 9,091,991 | Django app for loop through list containing a single dictionary | <p>I am creating a Django app to use json to parse data from the internet using an API. The Result is displayed in the form a a list containing a dictionary as follows</p>
<p>The following is the code used in the views page</p>
<pre><code> def home(request):
import requests
import json
####
api_request=requests.get("xyz...URL")
try:
api=json.loads(api_request.content)
except Exception as e:
api="Error"
return render(request, 'home.html',{"api": api})
</code></pre>
<p>I am using the following code to render api on the home page. It renders flawlessly</p>
<pre><code>{% extends 'base.html' %}
{% block content %}
<h1>Hello World!</h1>
{{api.values}}
{% endblock %}
</code></pre>
<p>The output is in the form of a list containing a dictionary as follows</p>
<pre><code>[{'A': 53875881, 'B': 'cl', 'CH': -0.38, 'CHP': -0.00216, 'CLP': 175.46}]
</code></pre>
<p>I would like to get these values as follows</p>
<pre><code> A : 53875881
B : 'cl'
CH : -0.38
CHP: -0.00216
CLP: 175.46
</code></pre>
<p>I have tried the following code to loop through the dictionary contained in a list.I am not getting any output. Just an empty webpage</p>
<pre><code>{% extends 'base.html' %}
{% block content %}
{% if api %}
{% if api == "Error..." %}
check your ticker symbol
{% elif api != "Error..." %}
{% for element in api %}
{% for key, value in api.items %}
{{ key }}:{{ value }}
{% endfor %}
{% endfor %}
{% endif %}
{% endif %}
{% endblock %}
</code></pre>
<p>I have tried other codes to loop directly through the list also.</p>
<pre><code>{% extends 'base.html' %}
{% block content %}
{% if api %}
{% if api == "Error..." %}
check your ticker symbol
{% elif api != "Error..." %}
{% for key, value in api.items %}
{{ key }}:{{ value }}
{% endfor %}
{% endif %}
{% endif %}
{% endblock %}
</code></pre>
<p>I request someone to take a look and guide me. I am unable to figure this one out</p>
| <python><django><list><loops> | 2023-10-20 06:44:25 | 1 | 1,285 | Raghavan vmvs |
77,328,739 | 9,647,709 | subtraction from a dictionary of dictionaries in a pandas dataframe | <p>I have a dataframe where I want to find the difference in the unique_users for the latest day(2023-09-06) and previous day(2023-09-07) for a specific hour 2 and 13 separately for each key 'bsnl' and 'other' and for a specific <code>Exception</code>. I need to consider the <code>Exception</code> and <code>Hour</code> and <code>Date</code> to calculate the difference.</p>
<pre><code> DateTime Date Hour Exception IMSI_Operator
0 2023-09-06 02:00:00 2023-09-06 00:00:00 2 s2ap {'bsnl': {'total_sessions': 50007, 'unique_users': 38880}, 'Other': {'total_sessions': 50, 'unique_users': 32}}
1 2023-09-06 13:00:00 2023-09-06 00:00:00 13 s2ap {'bsnl': {'total_sessions': 60004, 'unique_users': 49816}, 'Other': {'total_sessions': 34, 'unique_users': 22}}
2 2023-09-07 13:00:00 2023-09-07 00:00:00 13 s2ap {'bsnl': {'total_sessions': 45224, 'unique_users': 37525}, 'Other': {'total_sessions': 32, 'unique_users': 27}}
3 2023-09-07 02:00:00 2023-09-07 00:00:00 2 s2ap {'bsnl': {'total_sessions': 47713, 'unique_users': 37284}, 'Other': {'total_sessions': 43, 'unique_users': 27}}
</code></pre>
<p>What I have tried:</p>
<pre><code>import pandas as pd
import json
# Sample DataFrame
data = {
'Date': ['2023-09-06 00:00:00', '2023-09-06 00:00:00', '2023-09-07 00:00:00', '2023-09-07 00:00:00'],
'Hour': [2, 13, 13, 2],
'Exception': ['s2ap', 's2ap', 's2ap', 's2ap'],
'IMSI_Operator': [
{'bsnl': {'total_sessions': 50007, 'unique_users': 38880}, 'Other': {'total_sessions': 50, 'unique_users': 32}},
{'bsnl': {'total_sessions': 60004, 'unique_users': 49816}, 'Other': {'total_sessions': 34, 'unique_users': 22}},
{'bsnl': {'total_sessions': 45224, 'unique_users': 37525}, 'Other': {'total_sessions': 32, 'unique_users': 27}},
{'bsnl': {'total_sessions': 47713, 'unique_users': 37284}, 'Other': {'total_sessions': 43, 'unique_users': 27}}
]
}
result_df = pd.DataFrame(data)
# Convert 'Date' to datetime
result_df['Date'] = pd.to_datetime(result_df['Date'])
# Filter for the latest day
latest_day = result_df[result_df['Date'] == result_df['Date'].max()]
# Filter for the previous day
previous_day = result_df[result_df['Date'] == (result_df['Date'].max() - pd.DateOffset(days=1))]
result = {}
for key in latest_day['IMSI_Operator'].values[0].keys():
latest_unique_users = latest_day['IMSI_Operator'].values[0][key]['unique_users']
previous_unique_users = previous_day['IMSI_Operator'].values[0][key]['unique_users']
result[key] = {
'unique_users_diff': latest_unique_users - previous_unique_users,
}
print(result)
</code></pre>
<p>But I would like to get the difference in the original dataframe itself. The previous date <code>2023-09-06</code> rows are not neeed in the final dataframe.
my current code is not considering the specific hour and specific exception for the calcualtion.</p>
<p>Expected output:</p>
<pre><code>Hour Exception Date IMSI_Operator bsnl_unique_users_diff Other_unique_users_diff
2 s2ap 2023-09-07 {'bsnl': {'total_sessions': 60004, 'unique_users': 49816}, 'Other': {'total_sessions': 34, 'unique_users': 22}} -1596 5
13 s2ap 2023-09-07 {'bsnl': {'total_sessions': 50007, 'unique_users': 38880}, 'Other': {'total_sessions': 50, 'unique_users': 32}} -12291 -5
</code></pre>
| <python><pandas> | 2023-10-20 06:30:43 | 2 | 414 | Sam |
77,328,548 | 3,542,535 | Dynamically get a list of keys to access the deepest non-dict values in a nest dictionary | <p>I have a dictionary where I want to return a dynamic list of the keys that would access any non-dict values. I do not know the structure of the dictionary beforehand and nesting could go n-levels.</p>
<pre><code>{
"parent": {
"a": 1,
"b": {
"nested": "first"
}
}
}
</code></pre>
<p>From this dictionary, I want to return the following list of lists:</p>
<pre><code>[
["parent", "a"],
["parent", "b", "nested"]
]
</code></pre>
<p>I plan to use this list with <code>functools</code> and <code>reduce</code> to get specific values out of the dictionary to build a new dictionary. I've tried a recursion function to lop through the dictionary items and check for <code>isinstance(value, dict)</code>, but I'm having issues keeping the path together. Any help is much appreciated, thanks!</p>
| <python><dictionary><recursion> | 2023-10-20 05:40:46 | 1 | 413 | alpacafondue |
77,328,530 | 672,452 | Fix Python3 syntax error in CheckMK plugin | <p>I'm using CheckMK 2.2.0 and its plugin for Nginx to monitor some hosts. The agent is running on a host using Python 3.4.2 that can not be updated. When running the Nginx plugin on this host, I'm getting a syntax error:</p>
<pre><code># python3 nginx_status.py
File "nginx_status.py", line 126
config: dict = {}
^
SyntaxError: invalid syntax
</code></pre>
<p>The code looks like:</p>
<pre><code>def main(): # pylint: disable=too-many-branches
config_dir = os.getenv("MK_CONFDIR", "/etc/check_mk")
config_file = config_dir + "/nginx_status.cfg"
config: dict = {}
if os.path.exists(config_file):
with open(config_file) as open_config_file:
config_src = open_config_file.read()
exec(config_src, globals(), config)
</code></pre>
<p>Running this script on another host using Python 3.11.2 it works. But as I've said, I'm not able to update the elder Python version. I'm a PHP programmer, but have no knowledge of Python.</p>
<p>What is this type of code <code>config: dict = {}</code> and how to fix it to run on Python 3.4?</p>
| <python><python-3.x><syntax-error><python-3.4> | 2023-10-20 05:36:46 | 1 | 7,782 | rabudde |
77,328,216 | 1,857,373 | Class MRJob Python TypeError: super(type, obj): obj must be an instance or subtype of type | <p><strong>Problem</strong></p>
<p>Running Class MRJob custom code in Python 3.x. The MRJob class is define. Unit testing in Jyputer Notebook ok running fine on unit test. Saved Jyputer Notebook into Python .py file to run on console for final unit test.</p>
<p>Seeking help on the Type Error for super on configure_args object, and what I did wrong not to have the instance object be a correct instance or subtype of JoinJob() object.</p>
<p><strong>Main Process for running instance of class</strong></p>
<p>Issued a new object instance on JoinJoin() class:</p>
<pre><code> instance = JoinJob()
instance.testJob()
</code></pre>
<p><strong>Class</strong>
class JoinJob() has defined a function, configure_args, which is the class method definition.</p>
<pre><code> def configure_args(self)
</code></pre>
<p><strong>Area of line 122 for Type Error</strong></p>
<p>This class, MapReduceJoinJob, subclasses MRJob, and has /// line 122 /// where the Type Error is raised. It can be observed that the function definition for configure_args() is properly setup to issue a super on the main class, JoinJob and its defined configure_args() to override it. However, no custom overriding code is customized on this example.</p>
<pre><code>class MapReduceJoinJob(MRJob):
OUTPUT_PROTOCOL = RawValueProtocol
/// line 122 ///
def configure_args(self):
super(JoinJob, self).configure_args()
</code></pre>
<p>In main python routine, called the driver class, MapReduceJoinJob(), by creating a new instance of MapReduceJoinJob(), to then process and invoke the</p>
<pre><code>if __name__ == "__main__":
driver_reduce_join()
instance = MapReduceJoinJob() #// This is the area of Type Error
</code></pre>
<p><strong>Error</strong></p>
<pre><code>TypeError: super(type, obj): obj must be an instance or subtype of type
line 122, in configure_args
super(JoinJob, self).configure_args()
TypeError: super(type, obj): obj must be an instance or subtype of type
</code></pre>
<p><strong>MRJob class definition</strong></p>
<pre><code># MRJob MapReduce using MRJob (MRUnit in Python)
# MRJob / MRUnit Test unit class
class JoinJob():
///line 122///
def configure_args(self):
JoinJob.configure_args()
def showResults(self, df_h_data, df_v_data, combine_row):
print('MRJob Test Case : Columns from Dataset Homocides Non-Fatal')
print('\n')
df_h_data['HOMICIDE'].iloc[:9]
print(df_h_data)
print('MRJob Test Case : Columns from Dataset Victim Demographics')
print('\n')
df_v_data['HOMICIDE'].iloc[:8]
print('MRJob Test Case : Combined Map Reduced Dataset Homocides Non-Fatal & Victim Demographics')
print('\n')
combine_row
print(combine_row)
# test classs function
def testJob(self):
h_data, v_data = MapReduceJoin()
combined_row = CombineDatasets(h_data, v_data)
df_combine_row = pd.DataFrame(combined_row)
by='BATTERY'
combine_row = SortVictimData(df_combine_row, sort_order=False, column=by)
df_h_data = pd.DataFrame(h_data)
df_v_data = pd.DataFrame(v_data)
self.showResults(df_h_data, df_v_data, combine_row)
#%%
# MRJob MapReduce using MRJob (MRUnit in Python)
# Test: test differnt row filter, test sort
class MapReduceJoinJob(MRJob):
OUTPUT_PROTOCOL = RawValueProtocol
def configure_args(self):
super(JoinJob, self).configure_args()
instance = JoinJob()
instance.testJob()
</code></pre>
| <python><class><instance><mrjob> | 2023-10-20 03:44:24 | 1 | 449 | Data Science Analytics Manager |
77,328,200 | 11,743,016 | Disable a Plotly Dash component during callback execution, then re-enabling it after the callback has finished | <p>I have Plotly Dash components that need to be disabled after a callback has been triggered. The same components need to be enabled after the callback has finished running. I have tried the approach below. While the component gets disabled, it does not get enabled after finishing the callback function.</p>
<pre class="lang-py prettyprint-override"><code>
from dash.exceptions import PreventUpdate
from dash_extensions.enrich import (
Output,
DashProxy,
html,
Input,
MultiplexerTransform,
)
from time import sleep
app = DashProxy(__name__, transforms=[MultiplexerTransform()])
app.layout = html.Div(
[
html.Button(id="button", children="input-button", style={"color": "#FF0000"}),
html.Div(id="trigger", children=None, style={"display": "none"})
],
)
@app.callback(
[
Output("button", "disabled"),
Output("button", "style"),
Output("trigger", "children"),
],
[
Input("trigger", "children")
],
prevent_initial_call=True
)
def enable_components_after_export(trigger):
if trigger == 1:
return [
False, {"color": "#FF0000"}, 0
]
raise PreventUpdate
@app.callback(
[
Output("button", "disabled"),
Output("button", "style")
],
[
Input("button", "n_clicks"),
],
prevent_initial_call=True
)
def disable_components_on_export_button_click(button):
if (button is not None and button > 0):
return [
True, {"color": "#808080"}
]
raise PreventUpdate
@app.callback(
[
Output("trigger", "children")
],
[
Input("button", "n_clicks"),
],
prevent_initial_call=True
)
def callback_function(button):
if button is not None and button > 0:
sleep(5)
return [1]
raise PreventUpdate
if __name__ == "__main__":
app.run(debug=False)
</code></pre>
<p>What would be the correct method to disable components during callbacks, and then re-enabling them after finishing the callback? Also, I recall that Dash does not support multiple callback outputs, which is why DashProxy and MultiplexerTransform is used, though I am not sure if that is causing my problem.</p>
<p>Package versions</p>
<pre><code>dash 2.12.1 pypi_0 pypi
dash-bootstrap-components 1.5.0 pypi_0 pypi
dash-core-components 2.0.0 pypi_0 pypi
dash-extensions 0.1.11 pypi_0 pypi
dash-html-components 2.0.0 pypi_0 pypi
dash-renderer 1.9.1 pypi_0 pypi
</code></pre>
| <python><user-interface><callback><plotly-dash> | 2023-10-20 03:37:39 | 2 | 349 | disguisedtoast |
77,328,167 | 2,955,541 | Speeding Up NumPy Array Generation from SymPy Poly Expression | <p>I have a multivariable polynomial expression that I would like to convert into a square <code>numpy</code> matrix:</p>
<pre><code>import sympy
n = 225
x, C = sympy.symbols(f'x:{n+1}'), sympy.symbols('C')
expr = -1
for i in range(1, n+1):
expression += x[i]
penalty = sympy.Poly(C*(expr)**2)
</code></pre>
<p>So, <code>C</code> is just some constant (e.g., <code>C=0.5</code>) and the <code>penalty</code> would be the expansion of:</p>
<img src="https://latex.codecogs.com/png.image?%5Clarge&space;%5Cdpi%7B150%7DC%5Cleft(x_%7B1%7D+x_%7B2%7D+x_%7B3%7D+...+x_%7B224%7D+x_%7B225%7D-1%5Cright)%5E%7B2%7D">
<p>Ultimately, what I am looking for is the corresponding <code>numpy</code> matrix that represents this expanded expression (minus/dropping the constant term).</p>
<p>Also, note that we assume:</p>
<img src="https://latex.codecogs.com/png.image?%5Clarge&space;%5Cdpi%7B200%7Dx_i=x_i%5E2">
<p>In other words, all linear terms are equal to their squared equivalent (we'll see more of this below).</p>
<p>To provide a more concrete example, let's say <code>n=3</code> then the <code>penalty</code> will be:</p>
<img src="https://latex.codecogs.com/png.image?%5Clarge&space;%5Cdpi%7B200%7DC%5Cleft(x_%7B1%7D+x_%7B2%7D+x_%7B3%7D-1%5Cright)%5E%7B2%7D">
<p>And this expands to:</p>
<img src="https://latex.codecogs.com/png.image?%5Clarge&space;%5Cdpi%7B200%7DCx_1%5E2+2Cx_1x_2+2Cx_1x_3-2Cx_1+Cx_2%5E2+2Cx_2x_3-2Cx_2+Cx_3%5E2-2Cx_3+C">
<p>Due to the relationship/equality of linear term mentioned above, this further simplifies to:</p>
<img src="https://latex.codecogs.com/png.image?%5Clarge&space;%5Cdpi%7B200%7D-Cx_1%5E2+2Cx_1x_2+2Cx_1x_3-Cx_2%5E2+2Cx_2x_3-Cx_3+C">
<p>And dropping the constant term, <code>C</code>, would allow us to represent the coefficients of this polynomial as a 3x3 matrix with the squared terms positioned along the diagonal and the cross terms along the off diagonals:</p>
<img src="https://latex.codecogs.com/png.image?%5Clarge&space;%5Cdpi%7B200%7D%5Cbegin%7Bbmatrix%7D-C&2&2%5C%5C0&-C&2%5C%5C0&0&-C%5C%5C%5Cend%7Bbmatrix%7D">
<p>or its equivalent acceptable symmetric form:</p>
<img src="https://latex.codecogs.com/png.image?%5Clarge&space;%5Cdpi%7B200%7D%5Cbegin%7Bbmatrix%7D-C&1&1%5C%5C1&-C&1%5C%5C1&1&-C%5C%5C%5Cend%7Bbmatrix%7D">
<p>And this is the 2D <code>numpy</code> array that I would like to generate but, unfortunately, when <code>n=225</code> this takes a long time and eventually causes Python to crash. Is there a more efficient approach that I can take for any <code>n</code> that is large (less than <code>1000</code>)?</p>
| <python><sympy> | 2023-10-20 03:24:48 | 1 | 6,989 | slaw |
77,328,072 | 1,844,518 | How to test that the method of an instance has been called? | <p>Let's say I have the following piece of code:</p>
<pre class="lang-py prettyprint-override"><code>class MyClass:
def __init__(self, value=1):
self.value = value
self.destructive_stuff()
def destructive_stuff(self):
print("Running destructive commands on the host...")
def compute_value(self):
return self.value * 10
def main():
sample = MyClass(value=5)
print(f"Computed value: {sample.compute_value()}")
if __name__ == "__main__":
main()
</code></pre>
<p>I want to test the <code>main()</code> function. Specifically, I want to check that the <code>compute_value()</code> method has been called. I'm using a Mock since I don't want a real instance of <code>MyClass</code> initialized, because it would run <code>destructive_stuff()</code>:</p>
<pre class="lang-py prettyprint-override"><code>import unittest
from unittest.mock import patch
import my_sample
class TestPieq(unittest.TestCase):
@patch("my_sample.MyClass")
def test_called(self, mock_myclass):
my_sample.main()
mock_myclass.compute_value.assert_called()
</code></pre>
<p>When I run this, though, it fails:</p>
<pre><code>$ python3 -m unittest tests.py
Computed value: <MagicMock name='MyClass().compute_value()' id='140298914799936'>
F
======================================================================
FAIL: test_called (tests.TestSample)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/usr/lib/python3.10/unittest/mock.py", line 1379, in patched
return func(*newargs, **newkeywargs)
File "/tmp/pieq/tests.py", line 9, in test_called
mock_myclass.compute_value.assert_called()
File "/usr/lib/python3.10/unittest/mock.py", line 898, in assert_called
raise AssertionError(msg)
AssertionError: Expected 'compute_value' to have been called.
----------------------------------------------------------------------
Ran 1 test in 0.002s
FAILED (failures=1)
</code></pre>
<p>If I target the method directly in the <code>patch</code> directive, my test passes... but it uses the real class, so it calls the <code>destructive_stuff()</code> method!</p>
<pre><code>class TestSample(unittest.TestCase):
@patch("my_sample.MyClass.compute_value")
def test_called(self, mock_myclass_compute):
my_sample.main()
mock_myclass_compute.assert_called()
</code></pre>
<pre><code>$ python3 -m unittest tests.py
Running destructive commands on the host...
Computed value: <MagicMock name='compute_value()' id='140289139827584'>
.
----------------------------------------------------------------------
Ran 1 test in 0.001s
OK
</code></pre>
<p>How to achieve what I want?</p>
| <python><unit-testing><testing><mocking> | 2023-10-20 02:51:51 | 2 | 432 | Pierre |
77,327,854 | 7,898,913 | How to use secondary source as a fallback for Python package installs? | <p>My users have some software that depends on a package distribution in a private index and its source code in a private git repository. Because it is more inconvenient for my users to authenticate to the private index (e.g., local dev environment), they can instead install from a git source. At the same time if the software is deployed in an index-authenticated environment (e.g., prod environment), it should install from the index instead.</p>
<p>Is there a way to specify this in <code>pyproject.toml</code> or <code>pip install</code>?<sup>1</sup></p>
<hr />
<p><sup>[1]: A non-answer is <code>pip install git+https://path/to/package/dependency</code>. This misunderstands my request for a way to specify a fallback.</sup></p>
| <python><pip><package><pyproject.toml> | 2023-10-20 01:27:29 | 1 | 2,338 | Keto |
77,327,708 | 1,285,061 | How do we fit previous models' data into the last model? | <p>How do we fit previous models' data into the last model?
The last <code>fit</code> in this code isn't working.
I am trying to build a fairly complicated model, about 40 models. How do I <code>fit</code> data only to the last model that feeds to all previous models?
I start with <code>a_in</code> <code>b_in</code> processed together, and add another single input <code>s_in</code> to create final output of 3; to be matched with <code>c_out</code>. I need to produce 40 something instances like this, and keep them chaining them.</p>
<pre><code>#Input
a_in=np.array([[3,4,5],[2,3,4],[1,2,3],[4,6,8]])
b_in=np.array([[9,5,3],[1,0,2],[7,3,1],[6,6,6]])
s_in = np.array([[5],[7],[9],[0]])
#Output
c_out = np.array([[6,2,3],[0,2,7],[9,6,1],[8,6,5]])
a = keras.Input(shape=(3,), dtype=tf.float64)
aModel = keras.layers.Flatten()(a)
b = keras.Input(shape=(3,), dtype=tf.float64)
bModel = keras.layers.Flatten()(b)
s = keras.Input(shape=(1,), dtype=tf.float64)
aModel = keras.layers.Dense(10, activation='sigmoid')(a)
aModel = keras.layers.Dense(1, activation='sigmoid')(aModel)
aModel = keras.Model(inputs=a, outputs=aModel, name="a")
bModel = keras.layers.Dense(10, activation='sigmoid')(b)
bModel = keras.layers.Dense(1, activation='sigmoid')(bModel)
bModel = keras.Model(inputs=b, outputs=bModel, name="b")
combine = keras.layers.concatenate([aModel.output, bModel.output, s], dtype=tf.float64) #inject s in the middle stage
cModel = keras.layers.Dense(10, activation='sigmoid')(combine)
cModel = keras.layers.Dense(3, activation='sigmoid')(cModel)
cModel = keras.Model(inputs=combine, outputs=cModel, name="c")
cModel.compile(optimizer='adam', loss='mean_absolute_error', metrics='accuracy')
aModel.summary()
bModel.summary()
cModel.summary()
keras.utils.plot_model(cModel, "baby-architecture.png", show_shapes=True)
cModel.fit([a_in, b_in, s_in],[c_out],epochs=1, shuffle=False, verbose=1)
</code></pre>
<p>Summary:</p>
<pre><code>Model: "a"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
input_1 (InputLayer) [(None, 3)] 0
dense (Dense) (None, 10) 40
dense_1 (Dense) (None, 1) 11
=================================================================
Total params: 51 (204.00 Byte)
Trainable params: 51 (204.00 Byte)
Non-trainable params: 0 (0.00 Byte)
_________________________________________________________________
Model: "b"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
input_2 (InputLayer) [(None, 3)] 0
dense_2 (Dense) (None, 10) 40
dense_3 (Dense) (None, 1) 11
=================================================================
Total params: 51 (204.00 Byte)
Trainable params: 51 (204.00 Byte)
Non-trainable params: 0 (0.00 Byte)
_________________________________________________________________
Model: "c"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
input_4 (InputLayer) [(None, 3)] 0
dense_4 (Dense) (None, 10) 40
dense_5 (Dense) (None, 3) 33
=================================================================
Total params: 73 (292.00 Byte)
Trainable params: 73 (292.00 Byte)
Non-trainable params: 0 (0.00 Byte)
</code></pre>
<p><a href="https://i.sstatic.net/IAuFA.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/IAuFA.png" alt="enter image description here" /></a></p>
<p>Error:</p>
<pre><code>ValueError: Layer "c" expects 1 input(s), but it received 3 input tensors.
Inputs received: [<tf.Tensor 'IteratorGetNext:0' shape=(None, 3) dtype=int64>,
<tf.Tensor 'IteratorGetNext:1' shape=(None, 3) dtype=int64>,
<tf.Tensor 'IteratorGetNext:2' shape=(None, 1) dtype=int64>]
</code></pre>
| <python><tensorflow><machine-learning><keras> | 2023-10-20 00:25:05 | 1 | 3,201 | Majoris |
77,327,679 | 1,887,919 | Fastest way to add matrices of different shapes in Python/Numba | <p>I want to "add" two matrices, a matrix <code>a</code> with shape (<code>K,T</code>) and a matrix <code>b</code> of shape <code>(K,N)</code>, to result in a matrix of shape <code>(K,T,N</code>)</p>
<p>The following works ok:</p>
<pre class="lang-py prettyprint-override"><code>
import numpy as np
from numba import njit
@njit
def add_matrices(a, b):
K, T, N = a.shape[0], a.shape[1], b.shape[1]
result_matrix = np.empty((K, T, N))
for k in range(K):
for t in range(T):
for n in range(N):
result_matrix[k, t, n] = a[k, t] + b[k, n]
return result_matrix
K = 10
T = 11
N = 12
a = np.ones((K,T))
b = np.ones((K,N))
result = add_matrices(a, b)
</code></pre>
<p>Is there a faster (vectorized?) way to do it that doesn't require the for loops, which I think is slowing down the function, especially for larger values of <code>K, T,N</code>?</p>
| <python><numpy><matrix><vectorization><numba> | 2023-10-20 00:13:29 | 2 | 923 | user1887919 |
77,327,589 | 523,124 | How can I modify a customized build class derived from build_py so that it builds in a temporary directory? | <p>I have a setup.py file with a customized build class derived from build_py, which is imported from either distutils (for older setups) or setuptools:</p>
<pre><code>try:
warnings.filterwarnings('ignore',
message='.*distutils package is deprecated.*',
category=DeprecationWarning)
from distutils.core import setup
from distutils.command.build_py import build_py
except:
from setuptools import setup
from setuptools.command.build_py import build_py
class MyBuild(build_py):
#... existing custom code here...
#... new custom code here...
def run(self):
build_py.run(self)
if __name__ == '__main__':
setup(
name="...",
version="...",
#...
cmdclass={'build_py': MyBuild}
)
</code></pre>
<p>I've run into the problem that when running from a VM client (though oddly not when running outside the client), <code>pip install .</code> fails, apparently because it tries to set up a build directory within the source directory and this fails due to permissions issues. Other people have found <code>pip install .</code> failing under different circumstances. For what it's worth, I'm using Python 3.11 on Windows, and I've seen the failure with both pip 22.3.1 and pip 23.3. <strong>(UPDATE: I see the same problems with Python 3.9 and 3.10. Using older versions of pip causes another problem: "ImportError: cannot import name 'Mapping' from 'collections".)</strong></p>
<p>I was thinking that I could retrieve the name of a temporary directory via <code>tempfile.gettempdir()</code> and set the build directory to this value. The problem is that I can't quite figure out how to do this based on the <a href="https://epydoc.sourceforge.net/stdlib/distutils.cmd.Command-class.html" rel="nofollow noreferrer">distutils documentation</a>, the <a href="https://setuptools.pypa.io/en/latest/userguide/extension.html" rel="nofollow noreferrer">Setup tools documentation</a>, and existing examples elsewhere on the web, including Stack Overflow. In particular, I'm not clear on how <code>initialize_options</code>, <code>finalize_options</code>, and <code>set_undefined_options</code> are supposed to work and whether I should be using <code>set_undefined_options</code> at all.</p>
<p>I tried this code, inserted where <code>"#... new custom code here..."</code> is located in my example above:</p>
<pre><code> def initialize_options(self):
self.build_base = None
self.build_lib = None
print('Set self.build_base and self.build_lib to None')
def finalize_options(self):
if not self.build_base:
self.build_base = tempfile.gettempdir()
print('Set self.build_base to ', self.build_base)
if not self.build_lib:
self.build_lib = os.path.join(self.build_base, 'lib')
print('Set self.build_lib to ', self.build_lib)
</code></pre>
<p>But then I got an exception when I executed <code>pip install .</code> in the directory containing the setup.py file:</p>
<pre><code> Installing build dependencies ... done
Getting requirements to build wheel ... error
error: subprocess-exited-with-error
× Getting requirements to build wheel did not run successfully.
│ exit code: 1
╰─> [55 lines of output]
Set self.build_base and self.build_lib to None
Set self.build_base to <...>\AppData\Local\Temp
Set self.build_lib to <...>\AppData\Local\Temp\lib
Traceback (most recent call last):
File "C:\pyvenv\<...>\Lib\site-packages\pip\_vendor\pep517\in_process\_in_process.py", line 351, in <module>
main()
File "C:\pyvenv\<...>\Lib\site-packages\pip\_vendor\pep517\in_process\_in_process.py", line 333, in main
json_out['return_val'] = hook(**hook_input['kwargs'])
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\pyvenv\<...>\Lib\site-packages\pip\_vendor\pep517\in_process\_in_process.py", line 118, in get_requires_for_build_wheel
return hook(config_settings)
^^^^^^^^^^^^^^^^^^^^^
File "C:\<...>\AppData\Local\Temp\pip-build-env-42o2vdhl\overlay\Lib\site-packages\setuptools\build_meta.py", line 355, in get_requires_for_build_wheel
return self._get_build_requires(config_settings, requirements=['wheel'])
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\<...>\AppData\Local\Temp\pip-build-env-42o2vdhl\overlay\Lib\site-packages\setuptools\build_meta.py", line 325, in _get_build_requires
self.run_setup()
File "C:\<...>\AppData\Local\Temp\pip-build-env-42o2vdhl\overlay\Lib\site-packages\setuptools\build_meta.py", line 341, in run_setup
exec(code, locals())
File "<string>", line 138, in <module>
File "C:\<...>\AppData\Local\Temp\pip-build-env-42o2vdhl\overlay\Lib\site-packages\setuptools\_distutils\core.py", line 185, in setup
return run_commands(dist)
^^^^^^^^^^^^^^^^^^
File "C:\<...>\AppData\Local\Temp\pip-build-env-42o2vdhl\overlay\Lib\site-packages\setuptools\_distutils\core.py", line 201, in run_commands
dist.run_commands()
File "C:\<...>\AppData\Local\Temp\pip-build-env-42o2vdhl\overlay\Lib\site-packages\setuptools\_distutils\dist.py", line 969, in run_commands
self.run_command(cmd)
File "C:\<...>\AppData\Local\Temp\pip-build-env-42o2vdhl\overlay\Lib\site-packages\setuptools\dist.py", line 989, in run_command
super().run_command(command)
File "C:\<...>\AppData\Local\Temp\pip-build-env-42o2vdhl\overlay\Lib\site-packages\setuptools\_distutils\dist.py", line 988, in run_command
cmd_obj.run()
File "C:\<...>\AppData\Local\Temp\pip-build-env-42o2vdhl\overlay\Lib\site-packages\setuptools\command\egg_info.py", line 318, in run
self.find_sources()
File "C:\<...>\AppData\Local\Temp\pip-build-env-42o2vdhl\overlay\Lib\site-packages\setuptools\command\egg_info.py", line 326, in find_sources
mm.run()
File "C:\<...>\AppData\Local\Temp\pip-build-env-42o2vdhl\overlay\Lib\site-packages\setuptools\command\egg_info.py", line 548, in run
self.add_defaults()
File "C:\<...>\AppData\Local\Temp\pip-build-env-42o2vdhl\overlay\Lib\site-packages\setuptools\command\egg_info.py", line 586, in add_defaults
sdist.add_defaults(self)
File "C:\<...>\AppData\Local\Temp\pip-build-env-42o2vdhl\overlay\Lib\site-packages\setuptools\command\sdist.py", line 113, in add_defaults
super().add_defaults()
File "C:\<...>\AppData\Local\Temp\pip-build-env-42o2vdhl\overlay\Lib\site-packages\setuptools\_distutils\command\sdist.py", line 249, in add_defaults
self._add_defaults_python()
File "C:\<...>\AppData\Local\Temp\pip-build-env-42o2vdhl\overlay\Lib\site-packages\setuptools\command\sdist.py", line 125, in _add_defaults_python
self.filelist.extend(build_py.get_source_files())
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\<...>\AppData\Local\Temp\pip-build-env-42o2vdhl\overlay\Lib\site-packages\setuptools\_distutils\command\build_py.py", line 303, in get_source_files
return [module[-1] for module in self.find_all_modules()]
^^^^^^^^^^^^^^^^^^^^^^^
File "C:\<...>\AppData\Local\Temp\pip-build-env-42o2vdhl\overlay\Lib\site-packages\setuptools\_distutils\command\build_py.py", line 293, in find_all_modules
if self.py_modules:
^^^^^^^^^^^^^^^
File "C:\<...>\AppData\Local\Temp\pip-build-env-42o2vdhl\overlay\Lib\site-packages\setuptools\_distutils\cmd.py", line 107, in __getattr__
raise AttributeError(attr)
AttributeError: py_modules. Did you mean: 'find_modules'?
[end of output]
</code></pre>
<p>I tried adding this line to the end of <code>finalize_options</code>:</p>
<pre><code> # self.set_undefined_options('build_py', ('build_base', 'build_base'),
# ('build_lib', 'build_lib'))
</code></pre>
<p>but this just got me into a nasty recursive loop.</p>
<p>What should I be doing instead?</p>
| <python><pip><setuptools><distutils> | 2023-10-19 23:43:20 | 0 | 2,258 | Alan |
77,327,474 | 13,324,244 | MemoryError when using requests_cache (with sqlite backend) inside Celery worker | <p>I have <code>requests-cache==1.1.0</code> and <code>celery==5.3.4</code> installed on M1 MacBook.</p>
<p>Nothing special about the config on either, but posting here just incase its somehow helpful</p>
<p>Celery config:</p>
<pre><code>broker_url="redis://localhost",
result_backend="redis://localhost",
task_ignore_result=True,
</code></pre>
<p>Cached session config:</p>
<pre><code>CachedSession("http_cache.db")
</code></pre>
<p>Whenever I try to run a celery task that makes a get request via <code>CachedSession</code> I get the following error:</p>
<pre><code>File "***/site-packages/requests_cache/backends/sqlite.py", line 294, in __getitem__
cur = con.execute(f'SELECT value FROM {self.table_name} WHERE key=?', (key,))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
MemoryError
</code></pre>
<p>From the debugger right before the line above I see:</p>
<pre><code>(Pdb) self.table_name
'responses'
(Pdb) key
'32805c2791b85726'
(Pdb) con.execute(f'SELECT value FROM {self.table_name} WHERE key=?', (key,))
*** MemoryError
</code></pre>
<p>I cant execute any other queries on the connection either</p>
<pre><code>(Pdb) con.execute('SELECT * FROM sqlite_master')
*** MemoryError
</code></pre>
<p>The task runs when I execute it directly in a shell, but when it gets run through celery worker it runs into the <code>MemoryError</code> when trying to execute on the connection.</p>
<p>The task also runs fine when I use regular <code>requests</code> or when I change the backend to <code>Redis</code> for example.</p>
<p>I am hoping to use the <code>Sqlite</code> backend as I need to indefinitely persist the responses. I have a feeling I can switch to <code>Mongo</code> backend and not have this issue but wanted to post it here before I commit to adding that dependency.</p>
<p>Really lost on this one, any ideas?</p>
| <python><sqlite><python-requests><celery> | 2023-10-19 23:03:33 | 1 | 1,228 | sarartur |
77,327,247 | 1,930,402 | How to join two PySpark DataFrames on a common list column | <p>I have two PySpark DataFrames, df1 and df2, with a column named 'conditions' that contains lists of strings. I want to join these DataFrames based on the common elements in the 'conditions' column, where the entire list in one DataFrame matches the entire list in the other DataFrame.</p>
<p>Here's a simplified example of the data structures:</p>
<p>df1:</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th style="text-align: left;">id</th>
<th style="text-align: left;">conditions</th>
</tr>
</thead>
<tbody>
<tr>
<td style="text-align: left;">User1</td>
<td style="text-align: left;">["abc", "apple"]</td>
</tr>
<tr>
<td style="text-align: left;">User2</td>
<td style="text-align: left;">["banana", "orange"]</td>
</tr>
<tr>
<td style="text-align: left;">User3</td>
<td style="text-align: left;">["cherry", "pear"]</td>
</tr>
</tbody>
</table>
</div>
<p>df2</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>conditions</th>
<th>weight</th>
</tr>
</thead>
<tbody>
<tr>
<td>["abc","apple"]</td>
<td>10</td>
</tr>
<tr>
<td>["banana"]</td>
<td>21</td>
</tr>
<tr>
<td>["cherry"]</td>
<td>15</td>
</tr>
<tr>
<td>["strawberry"]</td>
<td>30</td>
</tr>
<tr>
<td>["banana","orange"]</td>
<td>20</td>
</tr>
</tbody>
</table>
</div>
<p>I want to create a new DataFrame that includes rows from df1 and df2 where the 'conditions' list matches exactly. In this example, the expected output would be:</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>id</th>
<th>conditions</th>
<th>weight</th>
</tr>
</thead>
<tbody>
<tr>
<td>User1</td>
<td>["abc", "apple"]</td>
<td>10</td>
</tr>
<tr>
<td>User2</td>
<td>["banana", "orange"]</td>
<td>20</td>
</tr>
</tbody>
</table>
</div>
<p>I tried this, but the results are wrong.</p>
<p><code>joined_results=df1.join(df2,on="conditions")</code></p>
| <python><pyspark> | 2023-10-19 21:44:50 | 1 | 1,509 | pnv |
77,327,165 | 3,469,243 | Creating a categorical heatmap with sparklines? | <p>Does anyone know of an example of how to create a categorical heat map with individual sparklines within each cell? Or have a suggestion on how to use matplotlib's annotation to produce this (or something similar)?</p>
<p>Essentially turning this: <a href="https://matplotlib.org/stable/gallery/images_contours_and_fields/image_annotated_heatmap.html" rel="nofollow noreferrer">Matplotlib heatmap annotation</a></p>
<p>into this: <a href="https://www.secviz.org/content/combination-heatmap-and-sparklines%3Fsize=_original.html" rel="nofollow noreferrer">Heatmap with sparkline</a></p>
<p><a href="https://i.sstatic.net/mnJu3.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/mnJu3.png" alt="enter image description here" /></a></p>
| <python><matplotlib><heatmap><sparklines> | 2023-10-19 21:21:45 | 1 | 2,486 | As3adTintin |
77,326,989 | 5,516,822 | How to specify null value character for `csv.reader`? | <p>I want to read "csv" file with <code>\u0000</code> as null, <code>\u0001</code> as line terminator and <code>u0002</code> as delimiter. For last two there is two parameters:</p>
<pre><code>csv.reader(csvfile, lineterminator='\u0001', delimiter='\u0002')
</code></pre>
<p>Is there parameter for null value character?</p>
| <python><csv> | 2023-10-19 20:45:15 | 2 | 1,283 | Makrushin Evgenii |
77,326,740 | 13,676,462 | optimizing 3 variables in a differential equation based on available data point from solution of differential equation | <p>I have following data points (I call them actual data points):</p>
<pre><code>y_data = np.array([0, 32.1463583, 33.1915926, 37.9100309, 39.2501778, 40.8225707, 48])
t_data = np.array([0, 26.75, 72.25, 163.4166667, 209.25, 525, 1250])
</code></pre>
<p>and I have a differential equation which is includes y and t:</p>
<pre><code>dy/dt=(1-y)/((a+b*t)*exp(-E/3060.8))
</code></pre>
<p>my goal is to optimize <strong>a, b , and E</strong> such that the solution of the above differential equation best fit my actual data (y_data).
to do this. I had the following steps in my mind:</p>
<pre><code>1- set initial guess for a=1, b=20, E=10000
2- create a for loop ( for 1000 iterations)
3- solve differential equation using ODEint with initial guess
4- Find difference between calculated value of y from solution of differential equation at time equal to "t" with actual "y" value
5- print error
6- update a to a , b, E and repeat from step 3
</code></pre>
<p>and here is my code :</p>
<pre><code>import numpy as np
from scipy.integrate import odeint
# Given data
y_data = np.array([0, 32.1463583, 33.1915926, 37.9100309, 39.2501778, 40.8225707, 48])
t_data = np.array([0, 26.75, 72.25, 163.4166667, 209.25, 525, 1250])
# Initial guess for parameters
a = 1
b = 20
E = 10000
# Number of iterations
iterations = 1000
# Tolerance for convergence
tolerance = 1e-6
# Step size for numerical gradient approximation
h = 1e-6
# Perform optimization
for i in range(iterations):
# Calculate gradients using numerical gradient approximation
def calculate_error(a, b, E):
def model(y, t):
return (1 - y) / ((a + b * t) * np.exp(-E / 3060.8))
y_solution = odeint(model, y_data[0], t_data)
error = np.mean(np.abs(y_solution[:, 0] - y_data))
return error
print (error)
grad_a = (calculate_error(a + h, b, E) - calculate_error(a, b, E)) / h
grad_b = (calculate_error(a, b + h, E) - calculate_error(a, b, E)) / h
grad_E = (calculate_error(a, b, E + h) - calculate_error(a, b, E)) / h
# Update parameters
a -= 0.1 * grad_a
b -= 0.1 * grad_b
E -= 0.1 * grad_E
# Check for convergence
if abs(grad_a) < tolerance and abs(grad_b) < tolerance and abs(grad_E) < tolerance:
break
# Output the optimized parameters
print("Optimized Parameters (a, b, E):", a, b, E)
</code></pre>
<p>The problem is that this code doesn't work and keep giving me constant error of "32.223947830786685". This is somehow indicating to me that the update function to update a, b and E doesn't work properly.
For reference, I used a function in this for</p>
<pre><code>(calculate_error(a + h, b, E) - calculate_error(a, b, E))
</code></pre>
<p>to manually calculate the gradient of a, b , E.</p>
<p>Any suggestion as how to fix my issue? or any alternative solution to find bets a, b , E parameters?</p>
| <python><optimization><gradient-descent><odeint> | 2023-10-19 19:57:33 | 0 | 923 | Yellow_truffle |
77,326,539 | 7,351,855 | Infinite axline in 3D | <p>I am trying to draw a scene like in the picture below using Python matplotlib, but I got stuck on drawing the infinite line (the black & dotted one in the picture). In 2D, this line can be drawn using <code>axline</code>, but I couldn't find an alternative in 3D.</p>
<p>Is there any solution to this?</p>
<p><a href="https://i.sstatic.net/WdRu9.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/WdRu9.png" alt="enter image description here" /></a></p>
| <python><matplotlib><matplotlib-3d> | 2023-10-19 19:21:56 | 1 | 830 | Matej |
77,326,473 | 2,410,605 | selenium 4.13 python unable to change default download directory | <p>I feel like I'm in a viscous cycle of fixing one thing and breaking another. I've recently upgraded to Selenium 4.13 to take advantage of Selenium Manager's auto chromedriver feature. I think I finally have it installed correctly, but now I cannot change my default download directory no matter what I try. I've read through many examples and tried them all and the downloads just keep loading to the original default directory. Below is my setup, can anybody spot where I may be going wrong? I don't know if I'm having default "slash issues" or something, but I've tried several variations of c:\dev, c:\dev, and c:\\dev -- none of them seem to have any affect.</p>
<pre><code>from bs4 import BeautifulSoup
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.common.keys import Keys
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
from datetime import date
import os
import re
import time
import logging
# begin copy
#user ID and Pwd to access the web site
download_path = "c:\dev"
# Set download folder
op = webdriver.ChromeOptions()
config = {"download.default_directory" : download_path}
op.add_experimental_option("prefs", config)
op.headless = False
##Call Chrome Browser
browser = webdriver.Chrome()
browser.get("https://www.kyote.org/mc/login.aspx?url=kplacementMgmt.aspx")
#end copy
</code></pre>
| <python><selenium-webdriver> | 2023-10-19 19:09:22 | 0 | 657 | JimmyG |
77,326,470 | 7,055,769 | class not seeing its properties | <p>my class</p>
<pre><code>from django.db import models
from django.contrib.auth.models import User
class Task(models.Model):
id: models.UUIDField(unique=True, auto_created=True)
content: models.CharField(default="")
deadline: models.IntegerField(default=None)
creationDate: models.DateField(auto_now_add=True)
author: models.ForeignKey(User, on_delete=models.CASCADE)
def __str__(self) -> str:
return str(self.id) + str(self.content)
</code></pre>
<p>my error:</p>
<blockquote>
<p>File "/Users/ironside/Documents/Python/PythonDjango-Project/api/models.py", line 13, in <strong>str</strong></p>
</blockquote>
<blockquote>
<p>return str(self.id) + str(self.content)</p>
</blockquote>
<blockquote>
<p>AttributeError: 'Task' object has no attribute 'content'</p>
</blockquote>
<p>I tested, and all other properties give the same error ('content' changes to property name).</p>
<p>It's as if it only sees id as valid property.</p>
<p>I'd like to be able to print the id + content or any other property</p>
<p>Edit:</p>
<p>Updated model:</p>
<pre><code>class Task(models.Model):
id: str = models.UUIDField(
unique=True,
auto_created=True,
primary_key=True,
default=uuid.uuid4,
editable=False,
)
content: str = models.CharField(
default="",
max_length=255,
)
deadline: int = models.IntegerField(
default=None,
null=True,
blank=True,
)
creationDate: models.DateField(auto_now_add=True)
author: models.ForeignKey(User, on_delete=models.CASCADE)
def __str__(self) -> str:
return str(self.id) + str(self.content)
</code></pre>
<p>new error</p>
<blockquote>
<p>django.db.utils.OperationalError: table api_task has no column named content</p>
</blockquote>
<p>when doing <code>print(Task.objects.all())</code></p>
| <python><django><django-models><django-rest-framework> | 2023-10-19 19:08:20 | 2 | 5,089 | Alex Ironside |
77,326,270 | 2,231,299 | how to display parallel output results within Jupyter Notebook, using AsyncSSH + IPyWidget? | <p>ChatGPT is running in circles now and it keeps failing at the task: having multiple boxes monitoring in real time the output of a remote python script.</p>
<p>So far, here is the notebook code:</p>
<pre><code>import asyncssh
import asyncio
from ipywidgets import Output, HBox
import traceback
class MySSHClientSession(asyncssh.SSHClientSession):
def __init__(self, output_widget):
super().__init__()
self._output_widget = output_widget
def data_received(self, data, datatype):
if datatype == asyncssh.EXTENDED_DATA_STDERR:
self._output_widget.append_stderr(data)
else:
self._output_widget.append_stdout(data)
def connection_lost(self, exc):
if exc:
self._output_widget.append_stderr(f"SSH session error: {exc}")
async def run_remote_command(host, username, password, command, output_widget):
try:
async with asyncssh.connect(host, username=username, password=password, known_hosts=None) as conn:
chan,session = await conn.create_session(lambda: MySSHClientSession(output_widget), command)
await chan.wait_closed()
except Exception as e:
output_widget.append_stderr(f"Error connecting to {host}: {str(e)}\n")
async def main():
host_infos=[parse_creds(i) for i in range(6)]
cmds=[f"python /scripts/print_hostname.py {P}" for P in range(1,7)]
outputs = [Output(layout={'border': '1px solid white', 'width': '200px'}) for _ in host_infos]
tasks = [run_remote_command(host_info[0], host_info[1], host_info[2], command, out) for host_info, command, out in zip(host_infos, cmds, outputs)]
display(HBox(outputs))
await asyncio.gather(*tasks)
# Run the asynchronous function
asyncio.create_task(main())
</code></pre>
<p>while troubleshooting, we simplified the code of <code>print_hostname.py</code> to the following:</p>
<pre><code>import time
print("Début du script sur la machine.")
for i in range(5):
print(f"Étape {i} sur la machine.")
time.sleep(4)
print("Fin du script sur la machine.")
</code></pre>
<p>I don't know what to try anymore. We went from Threads, to pure asyncio, to managing the output in a <code>while True</code> loop.</p>
<p>And I think the <code>while True</code> loop is the key, but I can't figure out how to implement it in the above code?</p>
| <python><ipywidgets><asyncssh> | 2023-10-19 18:29:41 | 1 | 875 | Myoch |
77,326,158 | 3,825,948 | Function Not Found in Sanic Server | <p>In Sanic server, a have a function defined in a file called controller.py with path /views/controller.py. My Sanic app is created in main.py at the root level (/) and can't find this function. The function has the decorator</p>
<pre><code>@app.post("/get_x")
</code></pre>
<p>What do I have to do to make the Sanic app aware of this function's path? I'm getting a 404 error at the moment when calling this function from the browser. I can't seem to find any good documentation or examples on this. Any help would be greatly appreciated. Thanks.</p>
| <python><routes><sanic> | 2023-10-19 18:06:10 | 1 | 937 | Foobar |
77,326,149 | 14,829,523 | Sorting algorithm on dataframe with swapping rows | <p>I have the following dummy df:</p>
<pre><code>import pandas as pd
data = {
'address': [1234, 24389, 4384, 4484, 1234, 24389, 4384, 188],
'old_account': [200, 200, 200, 300, 200, 494, 400, 100],
'new_account': [300, 100, 494, 200, 400, 200, 200, 200]
}
df = pd.DataFrame(data)
print(df)
address old_account new_account
0 1234 200 300
1 24389 200 100
2 4384 200 494
3 4484 300 200
4 1234 200 400
5 24389 494 200
6 4384 400 200
7 188 100 200
</code></pre>
<p><strong>A)</strong> I want to sort it such that I have <code>200</code> at <code>old_account</code> and directly in the next row at <code>new_account</code> again:</p>
<pre><code>200 xxx
xxx 200
</code></pre>
<p><strong>B)</strong> I further want to sort the non-200s such that I start somewhere let's say with <code>300</code> and browse through the whole df looking for <code>300</code>s and do the switches:</p>
<pre><code>200 300
300 200
200 300
...
</code></pre>
<p>Only once there are no <code>300</code>s anymore I would go to the next, let's say <code>400</code>..</p>
<pre><code>200 300
300 200
200 300
...
200 400
400 200
200 400
...
</code></pre>
<p><code>df</code> above should look like this:</p>
<pre><code> address old_account new_account
0 1234 200 300
1 4484 300 200
2 24389 200 100
3 188 100 200
4 4384 200 494
5 24389 494 200
6 1234 200 400
7 4384 400 200
</code></pre>
<p>As you can see, the 200s are diagonal to each other and so are the non-200s.</p>
<p>The following code works only for A). <strong>I did not manage to also make it consider B)</strong>
I have the following code:</p>
<pre><code>import pandas as pd
# Create the initial DataFrame
df= pd.read_csv('dummy_data.csv', sep=';')
# Initiate sorted df
sorted_df = pd.DataFrame(columns=df.columns)
while not df.empty:
# Find the first row where '200' is in 'old_account'
idx_old = df.index[df['old_account'] == 200].min()
if pd.notna(idx_old):
# Add the corresponding row to the sorted result
sorted_df = pd.concat([sorted_df, df.loc[[idx_old]]], ignore_index=True)
# Remove the row from the original DataFrame
df = df.drop(index=idx_old)
# Find the matching row where '200' is in 'new_account'
idx_new = df.index[df['new_account'] == 200].min()
if pd.notna(idx_new):
# Add the corresponding row to the sorted result
sorted_df = pd.concat([sorted_df, df.loc[[idx_new]]], ignore_index=True)
# Remove the row from the original DataFrame
df = df.drop(index=idx_new)
else:
break # If no matching row is found, exit the loop
else:
break # If no more '200' in 'old_account' is found, exit the loop
# Reset the index of the sorted DataFrame
sorted_df.reset_index(drop=True, inplace=True)
print(sorted_df)
</code></pre>
| <python><pandas><dataframe><sorting> | 2023-10-19 18:04:12 | 1 | 468 | Exa |
77,326,060 | 4,234,062 | Find & replace values in presenter notes using pptx-python | <p>I have a dictionary of values that I'd like to use to find & replace in the presenter notes of a powerpoint presentation. My code works for replacing the values in the slides themselves, but I can't figure out the presenter notes. This is what I have so far - any help would be appreciated!</p>
<pre><code>from pptx import Presentation
prs = Presentation('templates/template_python.pptx')
#dictionary of key-values to find & replace in presentation & presenter notes
replacements = {
'client_name': 'example',
'date_range': '2023-01-01 to 2023-01-30',
'country_abb': country,
}
slides = [slide for slide in prs.slides]
shapes = []
for slide in slides:
for shape in slide.shapes:
shapes.append(shape)
#find & replace text in presentation - **WORKING**
def replace_text(replacements, shapes):
for shape in shapes:
for match, replacement in replacements.items():
if shape.has_text_frame:
if (shape.text.find(match)) != -1:
text_frame = shape.text_frame
for paragraph in text_frame.paragraphs:
whole_text = "".join(run.text for run in paragraph.runs)
whole_text = whole_text.replace(str(match), str(replacement))
for idx, run in enumerate(paragraph.runs):
if idx != 0:
p = paragraph._p
p.remove(run._r)
if(not(not paragraph.runs)):
paragraph.runs[0].text = whole_text
#run function to find & replace text in slides
replace_text(replacements, shapes)
#find & replace text in presenter notes **NOT WORKING**
def replace_in_presenter_notes(presenter_notes, replacements):
for key, value in replacements.items():
presenter_notes = presenter_notes.replace(key, str(value))
return presenter_notes
# Iterate through the slides and update presenter notes
for slide in prs.slides:
for shape in slide.shapes:
if hasattr(shape, 'notes_text_frame') and shape.notes_text_frame.text:
shape.notes_text_frame.text = replace_in_presenter_notes(shape.notes_text_frame.text, replacements)
</code></pre>
| <python><powerpoint><python-pptx> | 2023-10-19 17:48:55 | 1 | 739 | chris_aych |
77,325,943 | 6,379,197 | Tensorflow 2.11.0 Cannot dlopen some GPU libraries | <p>I am tryng to use GPU for tensorflow. I have installed tensorflow by the following command:</p>
<pre><code>pip install tensorflow.
</code></pre>
<p>The code to access gpu on tensorflow is as follows:</p>
<pre><code>import tensorflow as tf
from tensorflow.compat.v1.keras import backend as K
def set_gpu_option(which_gpu, fraction_memory):
config = tf.compat.v1.ConfigProto()
config.gpu_options.allow_growth = False
config.gpu_options.per_process_gpu_memory_fraction = fraction_memory
config.gpu_options.visible_device_list = which_gpu
K.set_session(tf.compat.v1.Session(config=config))
return set_gpu_option('0', 0.9)
</code></pre>
<p>But I am getting error as follows:</p>
<pre><code>2023-10-19 13:10:02.225759: W tensorflow/core/common_runtime/gpu/gpu_device.cc:1934] Cannot dlopen some GPU libraries. Please make sure the missing libraries mentioned above are installed properly if you would like to use GPU. Follow the guide at https://www.tensorflow.org/install/gpu for how to download and setup the required libraries for your platform.
Skipping registering GPU devices...
</code></pre>
<p>On server, I have checked GPU driver is installed as nvidia-smi gives the correct output and cuda-11.2 is installed. Is there anything to install on the GPU server?</p>
| <python><tensorflow><gpu> | 2023-10-19 17:31:09 | 0 | 2,230 | Sultan Ahmed |
77,325,768 | 8,869,570 | Querying sqlite3 table based on inferring another column | <p>I have a sqlite3 table <code>table_name</code> with columns <code>start, end, id, val_end</code> and note that <code>val_end</code> is not a nullable column. Suppose there's a value <code>val_start</code> such that <code>val_start</code> for a row is the <code>val_end</code> for the previous row, e.g.,</p>
<pre><code>start end id val_end val_start (doesn't actually exist in the table)
0 10 0 10.1 0
10 20 0 10.3 10.1
</code></pre>
<p>I want to query the table for <code>val_start</code> at where <code>end=input_time</code> based on manipulating <code>val_end</code>.</p>
<p>I tried this query:</p>
<pre><code> query = """
select b.id, b.val_end as val_start
from table_name b
join table_name a
on a.start = b.end
and a.id = b.id
WHERE a.end = :input
"""
connector.cursor().execute(query, {"input": input_time}).fetchall()
</code></pre>
<p>But it keeps returning empty results even though there are valid columns and rows. Is there something wrong with my query?</p>
<p>Example, suppose <code>input_time=20</code>, the query should give:</p>
<pre><code>[(0, 10.1)]
</code></pre>
<p>(I don't care about the format, but usually sql reads give me a list of tuples)</p>
<p>Here's a specific example (note I simplified the example to debug) where it seems to be failing</p>
<pre><code>start id end val_end
20230905160000 1 20230906080000 10.1
20230905160000 2 20230906080000 10.3
</code></pre>
<pre><code> query = """
select b.id, b.val_end as val_start
from table_name b
join table_name a
on (a.start = b.end
and a.id = b.id)
"""
print(connector.cursor().execute(query).fetchall())
</code></pre>
<p>produces</p>
<pre><code>[]
</code></pre>
<p>Example with original query:</p>
<pre><code> query = """
select b.id, b.val_end as val_start
from table_name b
join table_name a
on a.start = b.end
and a.id = b.id
WHERE a.end = :input
"""
connector.cursor().execute(query, {"input": 20230906080000}).fetchall()
</code></pre>
<p>should return:</p>
<pre><code>[(1, 0,)
(2, 0,)
]
</code></pre>
<p>but it returns</p>
<pre><code>[]
</code></pre>
| <python><sql><sqlite> | 2023-10-19 17:02:15 | 0 | 2,328 | 24n8 |
77,325,701 | 7,534,658 | How to handle python type hint for class method overrides | <p>I want to give type hint for a class inheritance scenario:</p>
<pre class="lang-py prettyprint-override"><code>T = TypeVar("T")
U = TypeVar("U")
class Mother(Generic[T, V]):
def process(self, x: T) -> V:
...
class Child(Mother[int, str]):
def process(self, x: int) -> str:
...
</code></pre>
<p>it feel like duplication because I need to provide the type hint twice (at the <code>class</code> line and below at the method definition line) - whereas I was hoping it could be inferred by python.
If I remove, let's say, the type hint in the method - then it seems that mypy is not capable of inferring it from the class type (weirdly though, if I provide an incompatible type hint, it is complaining).
Is there any way around that?</p>
| <python><generics><inheritance><python-typing> | 2023-10-19 16:49:55 | 0 | 631 | p9f |
77,325,636 | 1,841,839 | How to load an existing vector db into Langchain? | <p>I have the following code which loads my pdf file generates embeddings and stores them in a vector db. I can then use it to preform searches on it.</p>
<p>The issue is that every time i run it the embeddings are regrated and stored in the db along with the ones already created.</p>
<p>Im trying to figurer out How to load an existing vector db into Langchain. rather then recreating them every time the app runs.</p>
<p><a href="https://i.sstatic.net/FUOqO.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/FUOqO.png" alt="enter image description here" /></a></p>
<h1>load it</h1>
<pre><code>def load_embeddings(store, file):
# delete the dir
# shutil.rmtree(store) # I have to delete it or it just loads double data
loader = PyPDFLoader(file)
text_splitter = CharacterTextSplitter(
separator="\n",
chunk_size=1000,
chunk_overlap=200,
length_function=len,
is_separator_regex=False,
)
pages = loader.load_and_split(text_splitter)
return DocArrayHnswSearch.from_documents(
pages, GooglePalmEmbeddings(), work_dir=store + "/", n_dim=768
)
</code></pre>
<h1>use it</h1>
<pre><code>db = load_embeddings("linda_store", "linda.pdf")
embeddings = GooglePalmEmbeddings()
query = "Have I worked with Oauth?"
embedding_vector = embeddings.embed_query(query)
docs = db.similarity_search_by_vector(embedding_vector)
for i in range(len(docs)):
print(i, docs[i])
</code></pre>
<h1>issue</h1>
<p>This works fine but if I run it again it just loads the file again into the vector db. I want it to just use the db after I have created it and not create it again.</p>
<p>I cant seem to find a method for loading it I tried</p>
<pre><code>db = DocArrayHnswSearch.load("hnswlib_store/", embeddings)
</code></pre>
<p>But thats a no go.</p>
| <python><langchain><palm-api><hnswlib> | 2023-10-19 16:38:56 | 1 | 118,263 | Linda Lawton - DaImTo |
77,325,334 | 3,256,651 | Running a gams file from python - gams system directory error | <p>I would like to run GAMS mdoel from python through the gams python API.</p>
<p>I insalled the API and it works fine:</p>
<pre><code>import gams
print(f'API OK -- Version {gams.__version__}')'''
</code></pre>
<p>I get:</p>
<pre><code>API OK -- Version 45.1.0
</code></pre>
<p>However when I try to:</p>
<pre><code>from gams import GamsWorkspace
ws = GamsWorkspace()
</code></pre>
<p>I get:</p>
<pre><code>GamsException: GAMS System directory not found or invalid.
</code></pre>
<p>I tried putting the gams installation path to no avail. The error mentions to use "findthisgams.exe" but I cannot find it.</p>
<p>(gams is intalled and I am able to run it from the GUI)</p>
<p>This is similar to this question: <a href="https://stackoverflow.com/questions/75266078/how-to-run-gams-from-python">How to run GAMS from python?</a></p>
<p>still I am stuck.</p>
| <python><gams-math> | 2023-10-19 15:50:57 | 1 | 1,922 | esperluette |
77,325,323 | 1,914,781 | apply function which return dataframe | <p>Below code works but wish if get better way to implement.</p>
<p>The idea is combine first dataframe with another dataframe, which depends on first dataframe's row values.</p>
<pre><code>import pandas as pd
def getdf(x):
df2 = pd.DataFrame(
{'rkey': ['X', 'Y', 'Z'],
'rval': [x, x*2, x*3]})
return df2
def combine(df):
data = []
dfout = pd.DataFrame()
for i in range(len(df)):
df1 = df.iloc[i, :].to_frame().transpose().reset_index()
df2 = getdf(df1['lval'].values[0])
df3 = df1.join(df2, how='outer',lsuffix='', rsuffix='')
#print(df3)
dfout = pd.concat([dfout,df3],axis=0,ignore_index=True)
#dfout.reset_index()
dfout = dfout[dfout.columns.drop('index')]
return dfout
df1 = pd.DataFrame(
{'key': ['A','B','C'],
'lval': [1,3,5]})
print(df1)
</code></pre>
<p>output:</p>
<pre><code> key lval
0 A 1
1 B 3
2 C 5
key lval rkey rval
0 A 1 X 1
1 NaN NaN Y 2
2 NaN NaN Z 3
3 B 3 X 3
4 NaN NaN Y 6
5 NaN NaN Z 9
6 C 5 X 5
7 NaN NaN Y 10
8 NaN NaN Z 15
</code></pre>
| <python><pandas> | 2023-10-19 15:48:18 | 1 | 9,011 | lucky1928 |
77,325,233 | 7,318,120 | how to convert a dict to a dataclass (reverse of asdict)? | <p>the dataclasses module lets users make a dict from a dataclass reall conveniently, like this:</p>
<pre class="lang-py prettyprint-override"><code>from dataclasses import dataclass, asdict
@dataclass
class MyDataClass:
''' description of the dataclass '''
a: int
b: int
# create instance
c = MyDataClass(100, 200)
print(c)
# turn into a dict
d = asdict(c)
print(d)
</code></pre>
<p>But i am trying to do the reverse process: dict -> dataclass.</p>
<p>The best that i can do is unpack a dict back into the predefined dataclass.</p>
<pre class="lang-py prettyprint-override"><code># is there a way to convert this dict to a dataclass ?
my_dict = {'a': 100, 'b': 200}
e = MyDataClass(**my_dict)
print(e)
</code></pre>
<p>How can i achieve this without having to pre-define the dataclass (if it is possible) ?</p>
| <python><python-dataclasses> | 2023-10-19 15:35:31 | 4 | 6,075 | darren |
77,325,131 | 2,504,762 | pyarrow not able to handle nulls for required fields when writing to parquet files | <p>I am trying to create a pyarrow table and then write that into parquet files.</p>
<pre class="lang-py prettyprint-override"><code>def test_pyarow():
import pyarrow as pa
import pyarrow.parquet
import pandas as pd
fields = [pa.field('id', pa.string(), nullable=False),
pa.field('name', pa.string(), nullable=False)]
array = [pa.array(['10', '11', '12', '13']),
pa.array(['AAA', None, 'BBB', 'CCC'])]
table = pa.Table.from_arrays(array, schema=pa.schema(fields))
pyarrow.parquet.write_table(table, 'test_arrow.parquet', compression='SNAPPY', use_compliant_nested_type=True)
df = pd.read_parquet("/Users/fki/Documents/git/Demo/bq_api/test_arrow.parquet", engine='pyarrow')
print("\n\n\n")
print(df)
</code></pre>
<p><strong>when nullable is True:</strong></p>
<pre><code> id name
0 10 AAA
1 11 None
2 12 BBB
3 13 CCC
</code></pre>
<p><strong>when nullable is False:</strong></p>
<pre><code> id name
0 10 AAA
1 11 BBB
2 12 CCC
3 13 AAA
</code></pre>
| <python><pyarrow> | 2023-10-19 15:24:33 | 0 | 13,075 | Gaurang Shah |
77,325,120 | 776,543 | Accessing model attributes results in Self not defined error | <p>Can anyone help me understand why I am receiving the error <code>self is not defined</code> while accessing a class attribute? Thanks for your help</p>
<pre><code> class Receipt(models.Model):
""
receipt_id = models.AutoField(
primary_key=True,
db_comment=""
)
store_name = models.TextField(
max_length=255,
default="Unidentified",
db_comment=""
)
total_amt = models.DecimalField(
max_digits=6,
decimal_places=2,
default=0.00,
db_comment=""
)
def create_from_document(document):
""
if (document is None):
return False
self.store_name = document.store_name
self.total_amt = document.total_amt
self.save()
return self
</code></pre>
| <python><django> | 2023-10-19 15:22:44 | 1 | 1,312 | SeaSky |
77,325,095 | 22,407,544 | Why does my Django view return 'Reverse for 'initiate_transcription' with arguments '('',)' not found'? | <p>A user uploads a file on page 1 and the site redirects them to page 2 where they can click a button which triggers the service. I'm trying to create a unique URL for each user upload and I know the error has to do with the <code>{% url 'initiate_transcription' session_id %}</code> in HTML page 2 form but i'm not sure what to change. Here is the code:</p>
<p>urls.py:</p>
<pre><code>from django.urls import path
from . import views
urlpatterns = [
path("", views.transcribeSubmit, name = "transcribeSubmit"),
path("init-transcription/<str:session_id>/", views.initiate_transcription, name = "initiate_transcription"),
]
</code></pre>
<p>views.py:</p>
<pre><code>@csrf_protect
def transcribeSubmit(request):
if request.method == 'POST':
form = UploadFileForm(request.POST, request.FILES)
if form.is_valid():
uploaded_file = request.FILES['file']
fs = FileSystemStorage()
filename = fs.save(uploaded_file.name, uploaded_file)
request.session['uploaded_file_name'] = filename
request.session['uploaded_file_path'] = fs.path(filename)
session_id = str(uuid.uuid4())
request.session['session_id'] = session_id
# Render the 'transcribe-complete.html' template to a string
return JsonResponse({'redirect': reverse('initiate_transcription', args=[session_id])})
else:
else:
form = UploadFileForm()
@csrf_protect
def initiate_transcription(request, session_id):
if request.method == 'POST':
try:
# get the file's name and path from the session
file_name = request.session.get('uploaded_file_name')
file_path = request.session.get('uploaded_file_path')
audio_language = request.POST.get('audio_language')
output_file_type = request.POST.get('output_file_type')
if file_name and file_path:
with open(file_path, 'rb') as f:
path_string = f.name
transcript = transcribe_file(path_string,audio_language, output_file_type )
file_extension = ('.' + (str(file_name).split('.')[-1]))
transcript_name = file_name.replace(file_extension, f'.{output_file_type}')
transcript_path = file_path.replace((str(file_path).split('\\')[-1]), transcript_name)
# Save transcript to a file
if os.path.exists(transcript_path):
file_location = transcript_path
rawdata = open(file_location, 'rb').read(1000)
result = chardet.detect(rawdata)
charenc = result['encoding']
with open(file_location, 'r', encoding=charenc) as f:
file_data = f.read()
transcribed_doc = TranscribedDocument(
audio_file=file_path,
output_file=transcript_path
)
transcribed_doc.save()
# Create a FileResponse
response = HttpResponse(file_data, content_type='text/plain; charset=utf-8')#text/plain
response['Content-Disposition'] = 'attachment; filename="' + transcript_name + '"'
return response
else:
return JsonResponse({'status': 'error', 'error': 'No file uploaded'})
except Exception as e:
error_message = f"Error occurred: {e}"
return render(request, 'transcribe/transcribe-complete.html')
def transcribe_file(path, audio_language, output_file_type ):
#transcription logic
</code></pre>
<p>HTML page1:</p>
<pre><code><form method="post" action="{% url 'transcribeSubmit' %}" enctype="multipart/form-data" >
{% csrf_token %}
<label for="transcribe-file" class="transcribe-file-label">
...
</form>
</code></pre>
<p>HTML page2:</p>
<pre><code><form id="initiate-transcription-form" method="post" action="{% url 'initiate_transcription' session_id %}" enctype="multipart/form-data">
{% csrf_token %}
...
</form
</code></pre>
<p>JS page1:</p>
<pre><code>const fileInput = document.querySelector('#transcribe-file');
fileInput.addEventListener('change', function (event) {
if (event.target.files.length > 0) {
console.log(fileInput.value);
const fileName = event.target.value;
const fileExtension = fileName.split('.').pop().toLowerCase();
const allowedExtensions = ['m4a', 'wav', 'mp3', 'mpeg', 'mp4', 'webm', 'mpga', 'ogg', 'flac'];
if (!allowedExtensions.includes(fileExtension)) {
const uploadField = document.querySelector('.transcribe-file-label');
const originalLabelText = uploadField.innerHTML;
uploadField.style.color = '#ad0f0f';
uploadField.textContent = 'Invalid file type. Please try again';
setTimeout(function () {
uploadField.style.color = '';
uploadField.innerHTML = originalLabelText;
// Clear the file input field
fileInput.value = '';
return;
}, 5000);
} else {
const form = document.querySelector('form');
const xhr = new XMLHttpRequest();
const formData = new FormData(form);
xhr.open('POST', form.action);
xhr.upload.onprogress = function (event) {
if (event.lengthComputable) {
let percentComplete = (event.loaded / event.total) * 100;
let progressBar = document.getElementById('myBar');
progressBar.style.width = percentComplete + '%';
console.log('Upload progress: ' + percentComplete + '%');
}
};
xhr.onload = function () {
if (xhr.status == 200) {
console.log('Upload complete');
const response = JSON.parse(xhr.responseText);
// Update the content of the current page with the HTML from the server
if (response.redirect) {
window.location.href = response.redirect;
}
//document.body.innerHTML = response.html;
} else {
console.error('Upload failed');
}
};
xhr.send(formData);
}
}
});
</code></pre>
<p>Here is the error:</p>
<pre><code>NoReverseMatch at /transcribe/init-transcription/854eae4d-3167-4e45-8b17-20a14b142aad/
Reverse for 'initiate_transcription' with arguments '('',)' not found. 1 pattern(s) tried: ['transcribe/init\\-transcription/(?P<session_id>[^/]+)/\\Z']
</code></pre>
<p>I can provide the traceback if necessary.</p>
| <javascript><python><html><django><forms> | 2023-10-19 15:19:52 | 1 | 359 | tthheemmaannii |
77,325,058 | 853,462 | Run `multiprocessing.Pool.initialize` on a method of the forked class | <pre class="lang-py prettyprint-override"><code>import os
from multiprocessing import Pool
class A:
def initialize(self):
print('initialize', self, os.getpid())
def run(self):
print('run ', self, os.getpid())
a = A()
print('root ', a, os.getpid())
with Pool(1, initializer=a.initialize) as pool:
pool.apply(a.run)
</code></pre>
<p>Out:</p>
<pre><code>root <__main__.A object at 0x7b644c0af1c0> 386
initialize <__main__.A object at 0x7b644c0af1c0> 3264
run <__main__.A object at 0x7b644c0ae080> 3264
</code></pre>
<p>I want to initialize the forked process of a <code>Pool</code>, in particular to occupy a significant amount of memory that the the parent doesn't need. However, it seems like <code>a.initialize</code> runs on the parent's <code>a</code> within the child process, which would confuse me because <code>a</code> isn't or shouldn't be in shared memory?</p>
<p>How can I adapt the fork of <code>a</code> once during <code>Pool</code> initialization, such that <code>a.run</code> applies on these changes.</p>
<p>EDIT: elegant ways to share information between <code>pool.apply</code>s other than global variables are very welcome.</p>
| <python><python-3.x><python-multiprocessing> | 2023-10-19 15:16:21 | 1 | 5,685 | Herbert |
77,324,859 | 9,983,652 | how to convert list of tuple into a string? | <p>I have a list of tuple and I'd like to convert the list into a string of tuple pair separted by comman. Not sure how to do it?</p>
<p>For example if I have list of this</p>
<pre><code>a=[(830.0, 930.0), (940.0, 1040.0)]
</code></pre>
<p>I'd like to convert it to a string like this</p>
<pre><code>b="(830.0, 930.0), (940.0, 1040.0)"
</code></pre>
<pre><code>a=[(830.0, 930.0), (940.0, 1040.0), (1050.0, 1150.0), (1160.0, 1260.0), (1270.0, 1370.0), (1380.0, 1480.0), (1490.0, 1590.0)]
b=','.join(a)
b
----> 2 b=','.join(a)
3 b
TypeError: sequence item 0: expected str instance, tuple found
</code></pre>
| <python><list><tuples> | 2023-10-19 14:53:25 | 5 | 4,338 | roudan |
77,324,551 | 4,105,440 | Radial text annotation polar plot with clockwise direction | <p>I want to place some text on a radius of a polar plot. When using the default theta zero location and direction it works as expected</p>
<pre class="lang-py prettyprint-override"><code>fig = plt.figure(figsize=(6,6))
ax = fig.add_subplot(111, polar=True)
ax.set_yticklabels("")
ax.annotate('test',
xy=(np.deg2rad(90), 0.5),
fontsize=15,
rotation=90)
</code></pre>
<p><a href="https://i.sstatic.net/Kav7X.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Kav7X.png" alt="enter image description here" /></a></p>
<p>However this fails when changing the direction</p>
<pre class="lang-py prettyprint-override"><code>fig = plt.figure(figsize=(6,6))
ax = fig.add_subplot(111, polar=True)
ax.set_theta_zero_location("N")
ax.set_theta_direction(-1)
ax.set_yticklabels("")
ax.annotate('test',
xy=(np.deg2rad(90), 0.5),
fontsize=15,
rotation=90)
</code></pre>
<p><a href="https://i.sstatic.net/X6o93.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/X6o93.png" alt="enter image description here" /></a></p>
<p>It seems that the x and y are correct but the rotation angle is not. In theory, the transformation between clockwise and anti-clockwise should bring theta into -theta but that obviously does not work here. I have tried any possible transformation but it seems that something weird is happening..</p>
<p>What am I missing?</p>
| <python><matplotlib><polar-coordinates><plot-annotations> | 2023-10-19 14:12:56 | 1 | 673 | Droid |
77,324,210 | 14,301,545 | QtCreator; AttributeError: 'Constant' object has no attribute 'id'; JSONDecodeError: running pyside6-metaobjectdump | <p>QtCreator does not want to run an application that I wrote a few months ago on another computer. Interestingly, the application itself works OK if I run it directly from the disk, or e.g. via PyCharm. Does anyone know what the reason could be?</p>
<p>Minimal (non)working example:</p>
<p>main.py:</p>
<pre><code>import sys
from pathlib import Path
from PySide6.QtCore import QObject, Signal, Property
from PySide6.QtGui import QGuiApplication
from PySide6.QtQml import QQmlApplicationEngine
import os
class PyBackground(QObject):
def __init__(self):
QObject.__init__(self)
self.d1 = {'0': 'zero', '1': 'one', '2': 'two'}
property_d1_changed = Signal()
@Property('QVariant', notify=property_d1_changed)
def property_d1(self):
return self.d1
if __name__ == "__main__":
app = QGuiApplication(sys.argv)
engine = QQmlApplicationEngine()
py_background = PyBackground()
engine.rootContext().setContextProperty("py_background", py_background)
engine.load(os.fspath(Path(__file__).resolve().parent / "main.qml")),
if not engine.rootObjects():
sys.exit(-1)
sys.exit(app.exec())
</code></pre>
<p>main.qml:</p>
<pre><code>import QtQuick
import QtQuick.Controls
import QtQuick.Window
Window {
width: 640
height: 480
visible: true
title: qsTr("Hello World")
property var d1: py_background.property_d1
Text {
id: name
text: qsTr("text: " + d1['2'])
x: 20
y: 20
}
}
</code></pre>
<p>error message (Compile Output):</p>
<pre><code>Error parsing C:\DANE\QtProjects\test1\main.py: 'Constant' object has no attribute 'id'
Traceback (most recent call last):
File "C:\Users\danie\AppData\Roaming\Python\Python311\site-packages\PySide6\scripts\metaobjectdump.py", line 428, in <module>
json_data = parse_file(file, context, args.suppress_file)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\danie\AppData\Roaming\Python\Python311\site-packages\PySide6\scripts\metaobjectdump.py", line 388, in parse_file
visitor.visit(ast_tree)
File "C:\Program Files\Python311\Lib\ast.py", line 418, in visit
return visitor(node)
^^^^^^^^^^^^^
File "C:\Program Files\Python311\Lib\ast.py", line 426, in generic_visit
self.visit(item)
File "C:\Program Files\Python311\Lib\ast.py", line 418, in visit
return visitor(node)
^^^^^^^^^^^^^
File "C:\Users\danie\AppData\Roaming\Python\Python311\site-packages\PySide6\scripts\metaobjectdump.py", line 220, in visit_ClassDef
self.visit(b)
File "C:\Program Files\Python311\Lib\ast.py", line 418, in visit
return visitor(node)
^^^^^^^^^^^^^
File "C:\Users\danie\AppData\Roaming\Python\Python311\site-packages\PySide6\scripts\metaobjectdump.py", line 238, in visit_FunctionDef
self._parse_function_decorator(node.name, d)
File "C:\Users\danie\AppData\Roaming\Python\Python311\site-packages\PySide6\scripts\metaobjectdump.py", line 315, in _parse_function_decorator
type = _python_to_cpp_type(_name(node.args[0]))
^^^^^^^^^^^^^^^^^^^
File "C:\Users\danie\AppData\Roaming\Python\Python311\site-packages\PySide6\scripts\metaobjectdump.py", line 71, in _name
return node.id
^^^^^^^
AttributeError: 'Constant' object has no attribute 'id'
JSONDecodeError: running pyside6-metaobjectdump on C:\DANE\QtProjects\test1\main.py: Expecting value: line 1 column 1 (char 0)
15:16:02: The process "C:\Users\danie\AppData\Roaming\Python\Python311\Scripts\pyside6-project.exe" exited with code 1.
Error while building/deploying project test1 (kit: Desktop Qt 6.5.3 MinGW 64-bit)
When executing step "Run PySide6 project tool"
15:16:02: Elapsed time: 00:01.
</code></pre>
<p>It seems that there is some problem with properties, especially when I want to send python dictionary to qml. Above code is working when run directly from explorer.</p>
<p>INFO:</p>
<ul>
<li>Python 3.11.6</li>
<li>PySide 6.5.3</li>
<li>Qt 6.5.3</li>
<li>QtCreator 11.0.3</li>
<li>Windows 10 Home, 22H2, x64</li>
<li>laptop HP Omen 15-en0xxx, 16GB RAM, GTX1650Ti (fresh format)</li>
</ul>
<p>EDIT:
.pyproject file:</p>
<pre><code>{
"files": [
"main.py",
"main.qml"
]
}
</code></pre>
| <python><qt><qml><pyside6> | 2023-10-19 13:29:22 | 1 | 369 | dany |
77,324,061 | 10,430,394 | How to place inset axes of image in upper right corner | <p>I am trying to place an inset axes exactly in the upper right of my existing axes. This is usually very simple if you use the <code>bbox_to_anchor</code> keyword in conjunction with the <code>transform</code> of the parent axes.</p>
<pre class="lang-py prettyprint-override"><code>axins = inset_axes(ax, width="40%", height="40%",
borderpad=0,
bbox_to_anchor=(0,0,1,1),
bbox_transform=ax.transAxes)
</code></pre>
<p>But in my case, the inset axes contains an image using <code>ax.imshow()</code>. This will always set the aspect ratio of the inset axes to the aspect ratio of the image <em>which is what I want</em>. However, if the aspect ratio of <code>ax</code> and the image do not match, then the image will be added not completely in the corner of the parent axes:</p>
<p><a href="https://i.sstatic.net/LjlTG.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/LjlTG.png" alt="incorrect placing" /></a></p>
<p>I know that this is because I am using the transform of the parent axes and that it is because of imshow shortening the width of the image/inset axes, but no matter what I use to calculate the offset that needs to be added in <code>bbox_to_anchor</code>, I cannot figure out the formula that gives me the right amount to add in <code>bbox_to_anchor=(0+num,0,1,1)</code>.</p>
<p>It must be something obtained from <code>ax.get_position()</code>, multiplied by the percentage (<code>width=40%, height=40%</code>) and something involving the transform <code>transform=ax.transAxes</code>, but I do not know what to multiply with what in order to find the correct value for <code>num</code>.</p>
<p>How do I calculate the offset so that I can provide the right value for <code>num</code> to <code>bbox_to_anchor</code>?</p>
<pre class="lang-py prettyprint-override"><code>from matplotlib import pyplot as plt
from mpl_toolkits.axes_grid1.inset_locator import inset_axes
import numpy as np
X,Y,R = 200,200,3
pixels = np.random.uniform(0,1,X*Y*R)
image = pixels.reshape(X, Y, R)
fig, ax = plt.subplots(figsize=(6,4))
axins = inset_axes(ax, width="40%", height="40%", borderpad=0, bbox_to_anchor=(0,0,1,1), bbox_transform=ax.transAxes)
axins.imshow(image)
plt.show()
</code></pre>
| <python><matplotlib> | 2023-10-19 13:09:27 | 1 | 534 | J.Doe |
77,323,993 | 774,133 | Groupby returns Series, then DataFrames | <p>Please consider this code:</p>
<pre><code>import pandas as pd
import numpy as np
df = pd.DataFrame({"a" : [1, 1, 2, 2],
"b" : [1, 1, 2, 3],
"s1": [1, np.nan,3, np.nan],
"s2": [np.nan,2, 3, np.nan]})
print(df)
def compute(g):
print(type(g))
print(g)
return g
df.groupby(["a", "b"]).transform(lambda g: compute(g))
</code></pre>
<p>Basically, I want group df using two columns, then apply a function to each group, that needs to be a dataframe with columns "s1" and "s2".</p>
<p>The output of the previous code is:</p>
<pre><code> a b s1 s2
0 1 1 1.0 NaN
1 1 1 NaN 2.0
2 2 2 3.0 3.0
3 2 3 NaN NaN
<class 'pandas.core.series.Series'>
0 1.0
1 NaN
Name: s1, dtype: float64
<class 'pandas.core.series.Series'>
0 NaN
1 2.0
Name: s2, dtype: float64
<class 'pandas.core.frame.DataFrame'>
s1 s2
0 1.0 NaN
1 NaN 2.0
<class 'pandas.core.frame.DataFrame'>
s1 s2
2 3.0 3.0
<class 'pandas.core.frame.DataFrame'>
s1 s2
3 NaN NaN
</code></pre>
<p>As you can see, in the first two iterations the function receives a pd.Series, specifically, it receives "s1", then "s2", then the three dataframes.</p>
<p>I cannot understand why the function received the two columns separately at the beginning of the iteration.</p>
| <python><pandas> | 2023-10-19 13:01:23 | 0 | 3,234 | Antonio Sesto |
77,323,928 | 13,217,286 | In Polars, is there a better way to only return items within a string if they match items in a list using .is_in? | <p>Is there a better way to only return each <code>pl.element()</code> in a polars list if it matches an item contained within another list?</p>
<p>While it works, I believe there's probably a more concise/better way:</p>
<pre class="lang-py prettyprint-override"><code>import polars as pl
terms = ['a', 'z']
(pl.LazyFrame({'a':['x y z']})
.select(pl.col('a')
.str.split(' ')
.list.eval(pl.when(pl.element().is_in(terms))
.then(pl.element())
.otherwise(None))
.list.drop_nulls()
.list.join(' ')
)
.collect()
)
</code></pre>
<pre><code>shape: (1, 1)
┌─────┐
│ a │
│ --- │
│ str │
╞═════╡
│ z │
└─────┘
</code></pre>
<p>For posterity's sake, it replaces my previous attempt using .map_elements():</p>
<pre class="lang-py prettyprint-override"><code>import polars as pl
import re
terms = ['a', 'z']
(pl.LazyFrame({'a':['x y z']})
.select(pl.col('a')
.map_elements(lambda x: ' '.join(list(set(re.findall('|'.join(terms), x)))),
return_dtype = pl.String)
)
._fetch()
)
</code></pre>
| <python><regex><dataframe><python-polars> | 2023-10-19 12:53:35 | 2 | 320 | Thomas |
77,323,830 | 6,067,528 | Why is this python priority queue failing to heapify? | <p>Why is this priority queue failing to heapify? Where (150, 200, 200) are the priority values assigned to the dictionaries</p>
<pre><code>import heapq
priority_q = [
(150, {'intel-labels': {'timestamp': 150}}),
(200, {'intel-labels': {'timestamp': 200}}),
(200, {'intel-labels': {'timestamp': 200, 'xx': 'xx'}})
]
heapq.heapify(priority_q)
print( heapq.nlargest(2, priority_q))
</code></pre>
<p>The exception:</p>
<pre><code>Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: '<' not supported between instances of 'dict' and 'dict'
</code></pre>
<p>The below, however, works..</p>
<pre><code>priority_q = [
(150, {'intel-labels': {'timestamp': 150}}),
(200, {'intel-labels': {'timestamp': 200}}),
(201, {'intel-labels': {'timestamp': 200, 'xx': 'xx'}})
]
heapq.heapify(priority_q)
</code></pre>
<p>Why is this?</p>
| <python><heapq> | 2023-10-19 12:38:26 | 3 | 1,313 | Sam Comber |
77,323,754 | 14,512,983 | Create or Maintain a single CHANGELOG from multiple Git repositories | <p>I tried to look up any examples or GitHub actions if there is a way to generate or maintain a single CHANGELOG.md file for all GitHub repositories (say more than 10).</p>
<p>I would like to know if this can be achieved through any automation work (through Python) if possible. Any help (any ideas, reference examples, etc.) would be appreciated.</p>
<p>EDIT: I have tried a small Python script to achieve this:</p>
<pre><code>import requests
# GitHub access token
access_token = "YOUR_ACCESS_TOKEN"
# List of repositories you want to collect changelog data from
repositories = ["owner/repo1", "owner/repo2"]
# Initialize an empty changelog
changelog = ""
# Loop through repositories
for repo in repositories:
url = f"https://api.github.com/repos/{repo}/releases"
headers = {"Authorization": f"token {access_token}"}
response = requests.get(url, headers=headers)
if response.status_code == 200:
releases = response.json()
for release in releases:
# Format the release data as needed
changelog += f"## {repo} {release['tag_name']}\n"
changelog += release['body'] + "\n\n"
# Write the consolidated changelog to a file
with open("changelog.md", "w") as changelog_file:
changelog_file.write(changelog)
</code></pre>
| <python><github><github-actions> | 2023-10-19 12:26:17 | 0 | 818 | Ranji Raj |
77,323,740 | 14,485,257 | Find the segment ordinal number and the inner index for a given search index on a splitted array | <p>I have a pandas dataframe having 1 column:</p>
<pre><code>df = pd.DataFrame({"Value": [10,9,5,11,2,8,6,7,4,2,1,9]})
</code></pre>
<p>If I take a subset of this, the starting index value changes from 0,1,2,3,4,5,6,7,8,9,10,11 to 3,4,5,6,7:</p>
<pre><code>df = df[3:8]
</code></pre>
<p>When I try to convert this into a numpy array using .to_numpy() as follows, then its indices get reset to 0,1,2,3,4. But I need to have them as 3,4,5,6,7 itself.</p>
<pre><code>df_mod = df.to_numpy()
</code></pre>
<p>Can anyone please help to create this numpy array with the same indices as the pandas dataframe from which it was converted?</p>
<p><em><strong>EDIT {further context}:</strong></em></p>
<ul>
<li><p>I've a numpy array having 2880 index points. There's a particular index point in it - 1440 at which I need a marker.</p>
</li>
<li><p>Now I need to split this main array into multiple segments - say 100 of them. And I need to identify in which out of these 100 segments is the marker present and at what index point of this particular segment is the marker present - say it's at index point 60.</p>
</li>
<li><p>I would be splitting the main array into multiple segments other than 100 as well as needed. Hence, I need a modular code which would be able to implement this for any no. of segments. I need the segment no. and the index of the identified segment in which the marker is present.</p>
</li>
<li><p>I thought that retaining the indices of the pandas df from which I obtained my numpy array would've helped in achieving this, but looks like the numpy array cannot have any other starting index value apart from 0, unlike pandas dataframes.</p>
</li>
</ul>
<p>This is what I'm currently doing to create the segments:</p>
<pre><code># Convert pandas df to numpy array
signal = df['value'].values
# Split the signal into n no. of parts
num_parts = 100
segment_length = len(signal) // num_parts
segments = [signal[i:i + segment_length] for i in range(0, len(signal), segment_length)]
</code></pre>
<p>Kindly suggest how to achieve this.</p>
| <python><arrays><pandas><dataframe><numpy> | 2023-10-19 12:24:16 | 3 | 315 | EnigmAI |
77,323,575 | 1,701,504 | How do I change the project from GCP text to speech python library? | <p>I am playing with the playing with the gcp python sdk sample (<a href="https://github.com/GoogleCloudPlatform/python-docs-samples/blob/main/texttospeech/snippets/README.rst" rel="nofollow noreferrer">https://github.com/GoogleCloudPlatform/python-docs-samples/blob/main/texttospeech/snippets/README.rst</a>). I am able to login with gcloud auth login and set the project correctl.</p>
<p>But the library is point to a wrong project ID when I run the sample:</p>
<pre><code> raise exceptions.from_grpc_error(exc) from exc
google.api_core.exceptions.PermissionDenied: 403 This API method requires billing to be enabled. Please enable billing on project #841741138139 by visiting https://console.developers.google.com/billing/enable?project=xxxxxxx then retry. If you enabled billing for this project recently, wait a few minutes for the action to propagate to our systems and retry. [links {
description: "Google developers console billing"
url: "https://console.developers.google.com/billing/enable?project=xxxx"
</code></pre>
<p>The above trace is showing I am using the wrong project ID.</p>
<p>I tried to set <code>GCP_PROJECT</code> in the env but no use.</p>
<p>How can I set the correct ID in the python client?</p>
| <python><google-cloud-platform> | 2023-10-19 12:00:27 | 1 | 3,215 | Kintarō |
77,323,156 | 9,811,964 | Cluster people based on spatial coordinates with constraints | <p>I have a pandas dataframe <code>df</code>. The columns <code>latitude</code> and <code>longitude</code> represent the spatial coordinates of people.</p>
<pre><code>import pandas as pd
data = {
"latitude": [49.5619579, 49.5619579, 49.56643220000001, 49.5719721, 49.5748542, 49.5757358, 49.5757358, 49.5757358, 49.57586389999999, 49.57182530000001, 49.5719721, 49.572026, 49.5727859, 49.5740071, 49.57500899999999, 49.5751017, 49.5751468, 49.5757358, 49.5659508, 49.56611359999999, 49.5680586, 49.568089, 49.5687609, 49.5699217, 49.572154, 49.5724688, 49.5733994, 49.5678048, 49.5702381, 49.5707702, 49.5710414, 49.5711228, 49.5713705, 49.5723685, 49.5725714, 49.5746149, 49.5631496, 49.5677449, 49.572268, 49.5724273, 49.5726773, 49.5739391, 49.5748542, 49.5758151, 49.57586389999999, 49.5729483, 49.57321150000001, 49.5733375, 49.5745175, 49.574758, 49.5748055, 49.5748103, 49.5751023, 49.57586389999999, 49.56643220000001, 49.5678048, 49.5679685, 49.568089, 49.57182530000001, 49.5719721, 49.5724688, 49.5740071, 49.5757358, 49.5748542, 49.5758151, 49.5758151, 49.5758151, 49.5758151, 49.5758151, 49.5758151, 49.5758151, 49.5758151, 49.5619579, 49.5628938, 49.5630028, 49.5633175, 49.56397639999999, 49.5642962, 49.56643220000001, 49.5679685, 49.570056, 49.5619579, 49.5724688, 49.5745175, 49.5748055, 49.5748055, 49.5748542, 49.5748542, 49.5751023, 49.5751023],
"longitude": [10.9995758, 10.9995758, 10.9999593, 10.9910787, 11.0172739, 10.9920322, 10.9920322, 10.9920322, 11.0244747, 10.9910398, 10.9910787, 10.9907713, 10.9885155, 10.9873742, 10.9861229, 10.9879312, 10.9872357, 10.9920322, 10.9873409, 10.9894231, 10.9882496, 10.9894035, 10.9887881, 10.984756, 10.9911384, 10.9850981, 10.9852771, 10.9954673, 10.9993329, 10.9965937, 10.9949475, 10.9912959, 10.9939141, 10.9916605, 10.9983124, 10.992722, 11.0056254, 10.9954016, 11.017472, 11.0180908, 11.0181911, 11.0175466, 11.0172739, 11.0249866, 11.0244747, 11.0200454, 11.019251, 11.0203055, 11.0183162, 11.0252416, 11.0260046, 11.0228523, 11.0243391, 11.0244747, 10.9999593, 10.9954673, 10.9982288, 10.9894035, 10.9910398, 10.9910787, 10.9850981, 10.9873742, 10.9920322, 11.0172739, 11.0249866, 11.0249866, 11.0249866, 11.0249866, 11.0249866, 11.0249866, 11.0249866, 11.0249866, 10.9995758, 11.000319, 10.9990996, 10.9993819, 11.004145, 11.0039476, 10.9999593, 10.9982288, 10.9993409, 10.9995758, 10.9850981, 11.0183162, 11.0260046, 11.0260046, 11.0172739, 11.0172739, 11.0243391, 11.0243391]
}
df = pd.DataFrame(data)
</code></pre>
<p>I want to cluster people based on their spatial coordinates. Each cluster must contain exactly 9 people. However, I want to avoid that people with the same spatial coordinates slip into the same cluster. This can happen, because the dataset contains some location coordinates that are exactly the same and therefore automatically assigned to the same cluster. So the goal is to prevent exactly that when clustering. It may be necessary to automatically move the persons to an adjacent cluster in a subsequent process.</p>
<p>To cluster the people I used <code>k-means-constrained</code> with <code>!pip install k-means-constrained</code>.</p>
<pre><code>from k_means_constrained import KMeansConstrained
coordinates = np.column_stack((df["latitude"], df["longitude"]))
# Define the number of clusters and the number of points per cluster
n_clusters = len(df) // 9
n_points_per_cluster = 9
# Perform k-means-constrained clustering
kmc = KMeansConstrained(n_clusters=n_clusters, size_min=n_points_per_cluster, size_max=n_points_per_cluster, random_state=42)
kmc.fit(coordinates)
# Get cluster assignments
df["cluster"] = kmc.labels_
</code></pre>
<p>To validate the result I check how many people have been clustered in the same cluster although they have same spatial coordinates:</p>
<pre><code>duplicate_rows = df[df.duplicated(subset=["cluster", "latitude", "longitude"], keep=False)]
duplicate_indices = duplicate_rows.index.tolist()
# Group by specified columns and count occurrences
count_occurrences = df.iloc[duplicate_indices].groupby(['latitude', 'longitude', 'cluster']).size().reset_index(name='count')
print("Number of rows with identical values in specified columns:")
print(count_occurrences)
</code></pre>
<p>For example the print-statement looks like this:</p>
<pre><code>Number of rows with identical values in specified columns:
latitude longitude cluster count
0 49.5619579000000030 10.9995758000000006 0 2
1 49.5748054999999965 11.0260046000000003 9 2
2 49.5748541999999972 11.0172738999999993 9 2
3 49.5751022999999975 11.0243391000000006 9 2
4 49.5757357999999968 10.9920322000000006 0 3
5 49.5758150999999998 11.0249866000000001 7 8
</code></pre>
<p>In total we have (8+3+2+2+2+2) people who are clusters with a neighbor from the same building. I want to minimize this number. <code>count = 2</code> or less works fine for me. Its not perfect, but I can deal with that. But <code>count > 2</code> (for example index 5) is not okay. Too many people with same spatial coordinates in the same cluster.</p>
| <python><pandas><machine-learning><cluster-analysis><k-means> | 2023-10-19 10:52:38 | 1 | 1,519 | PParker |
77,323,105 | 5,002,316 | Can NetworkX actually handle None weights as hidden edges in dijkstra's algorithm? | <p>Based on the sparse and opaque documentation of NetworkX, when calculating shortest paths, I've tried assigning weights using function to limit traversals to specific modes of travel. This weight function assigns the proper weight if the edge is among the traversable types, and <code>None</code> if the edge is any other type.</p>
<p>According to the documentation, using a <code>None</code> weight makes the edges hidden, so the traversal will ignore them. But I'm having weird problems when trying to do this. Specifically, although a path exists using the specified edge weight (and <code>has_path</code> confirms this) when I run <code>single_source_dijkstra</code> with these weights, it fails to find the path.</p>
<p>So, for both my integrated multimodal network and for my isolated road network, the code</p>
<pre><code>nx.has_path(fullNetwork,orig,dest)
nx.has_path(roadNetwork,orig,dest)
</code></pre>
<p>returns <code>True</code>. Then running it unweighted using</p>
<pre><code>roadTime,roadPath = nx.single_source_dijkstra(roadNetwork, source=orig, target=dest)
</code></pre>
<p>returns a proper time and path, and furthermore</p>
<pre><code>roadTime,roadPath = nx.single_source_dijkstra(roadNetwork, source=orig, target=dest, weight='walkTime')
</code></pre>
<p>also returns a time and path. This is all as it should be.</p>
<p>I've confirmed that every road edge has a positive float value for 'walkTime', but some other modes of transportation do not. So I wanted to use the following weight function to run shortest paths that uses this 'walkTime' weight and ignores non-road edges:</p>
<pre><code>def roadWalkTime(u,v,attr):
if attr.get('modality','poo') == 'road':
return attr.get('walkTime',None)
else:
return None
roadTime,roadPath = nx.single_source_dijkstra(fullNetwork, source=orig, target=dest, weight=roadWalkTime)
</code></pre>
<p>but instead I get</p>
<pre><code> File C:\miniforge3\envs\GAT\lib\site-packages\networkx\algorithms\shortest_paths\weighted.py:747 in multi_source_dijkstra
raise nx.NetworkXNoPath(f"No path to {target}.") from err
NetworkXNoPath: No path to destin_0.
</code></pre>
<p>I'm doing this with random pairs of origins and destinations, and the result does not depend on any specific nodes...it always happens. So I thought I was doing something wrong with the weight function call, so I defined a static weight and tried again.</p>
<pre><code>for u,v,attr in fullNetwork.edges(data=True):
coreNetwork[u][v]['roadWalkTime'] = attr['walkTime'] if attr.get('modality','blah')=='road' else None
</code></pre>
<p>Now, when I try</p>
<pre><code>roadTime,roadPath = nx.single_source_dijkstra(fullNetwork, source=orig, target=destinNode, weight='roadWalkTime')
</code></pre>
<p>I still get that same error, and even if I isolate the road network, and confirm that a path DOES exist, and all edge have positive float weights called 'walkTime', the call</p>
<pre><code>roadTime,roadPath = nx.single_source_dijkstra(roadNetwork, source=orig, target=destinNode, weight='roadWalkTime')
</code></pre>
<p>fails even though</p>
<pre><code>roadTime,roadPath = nx.single_source_dijkstra(roadNetwork, source=orig, target=destinNode, weight='walkTime')
</code></pre>
<p>does not.</p>
<p>So, is there something I'm doing wrong in specifying or using that weight function? Or, because it even fails when static, is there something wrong with my definition of 'roadWalkTime'?</p>
| <python><networkx><dijkstra> | 2023-10-19 10:46:26 | 0 | 1,287 | Aaron Bramson |
77,322,856 | 9,234,092 | Filter langchain vector database using as_retriever search_kwargs parameter | <p>How to <strong>filter a langchain vector database using search_kwargs parameter</strong> from the <em>as_retriever</em> function ?</p>
<p>Here is an example of what I would like to do :</p>
<pre class="lang-py prettyprint-override"><code># Let´s say I have the following vector database
db = {'3c3bc745': Document(page_content="This is my text A", metadata={'Field_1': 'S', 'Field_2': 'R'}),
'14f84778': Document(page_content="This is my text B", metadata={'Field_1': 'S', 'Field_2': 'V'}),
'bd0022c9-449b': Document(page_content="This is my text C", metadata={'Field_1': 'Z', 'Field_2': 'V'})}
# Filter the vector database
retriever = db.as_retriever(search_kwargs={'filter': dict(Field_1='Z'), 'k': 1})
# Create the conversationnal chain
chain = ConversationalRetrievalChain.from_llm(llm=ChatOpenAI(temperature=0.0,
model_name='gpt-3.5-turbo',
deployment_id="chat"),
retriever=retriever)
chat_history = []
prompt = "Which sentences do you have ?"
# Expect to get only "This is my text C" but I get also get the two other page_content elements
chain({"question": prompt, "chat_history": chat_history})
</code></pre>
| <python><langchain><information-retrieval><large-language-model><vector-database> | 2023-10-19 10:10:35 | 3 | 703 | JeanBertin |
77,322,847 | 2,549,828 | Mocking instance methods in python unittest | <p>I'm a software developer for several years but new to python. I'm writing a unit test (so no database connection present) that involves a model in Django that accesses another model referenced via a foreign key connection. I want to mock the method that accesses this connection and replace the result with a hard coded response, that is different for each instance though.</p>
<p>Here's a minimal example:</p>
<pre class="lang-py prettyprint-override"><code>### tests/test_MyTestCase.py
from unittest import TestCase
from djangoapi.models import *
class MyTestCase(TestCase):
def setUp(self):
self.instance1 = MyModel()
self.instance2 = MyModel()
foreignKey1 = MySubModel()
foreignKey1.my_value = 1
# Mock that self.instance1.submodel_set.all() returns [foreignKey1]
foreignKey2 = MySubModel()
foreignKey2.my_value = 2
# Mock that self.instance2.submodel_set.all() returns [foreignKey2]
def testSomething(self):
self.assertEqual(self.instance1.get_max_value(), 1)
self.assertEqual(self.instance2.get_max_value(), 2)
### models.py
from django.db import models
class MyModel(models.Model):
def get_max_value(self):
value = 0
# the return value of self.submodel_set.all() is what I want to mock
for model in self.submodel_set.all():
value = max(value, model.my_value)
return value
class Submodel(models.Model):
my_model = models.ForeignKey(MyModel, null=True, on_delete=models.SET_NULL)
my_value = models.IntegerField()
</code></pre>
<p>I tried several combinations of <code>the @patch decorator</code>, <code>Mock()</code> and <code>MagicMock()</code> but could not get it to work. Thank you in advance!</p>
| <python><python-3.x><django><django-rest-framework><python-unittest> | 2023-10-19 10:09:36 | 1 | 1,148 | Phocacius |
77,322,763 | 1,193,138 | python-configparser not found on ubuntu 22.04 | <p>I'm trying to install this <a href="https://github.com/pimoroni/displayhatmini-python" rel="nofollow noreferrer">https://github.com/pimoroni/displayhatmini-python</a> on a raspberry pi running Ubuntu 22.04 and when I run the install.sh it fails on this:</p>
<pre><code>E: Unable to locate package python-configparser
</code></pre>
<p>Unfortunately, I'm not that familiar with python. I'm trying to install it using this:</p>
<pre><code>apt install python-configparser
</code></pre>
<p>but I get this:</p>
<pre><code>Reading package lists... Done
Building dependency tree... Done
Reading state information... Done
E: Unable to locate package python-configparser
</code></pre>
<p><code>python3 -V</code> gives me <code>Python 3.10.12</code></p>
<p>So I also tried <code>apt-get install -y python3-configparser</code> but that also comes up as not found.</p>
<p>Would I need to uninstall python3 and install python instead? Not sure how to resolve.</p>
<p>Thanks.</p>
<p><strong>Update:</strong></p>
<p>Thanks to the comment below, I have installed this but the error persists:</p>
<pre><code>E: Unable to locate package python-configparser
./install.sh: line 145: python: command not found
Error parsing configuration...
</code></pre>
<p>This is the code it refers to:</p>
<pre><code>apt_pkg_install python-configparser
CONFIG_VARS=`python - <<EOF
from configparser import ConfigParser
c = ConfigParser()
c.read('library/setup.cfg')
p = dict(c['pimoroni'])
# Convert multi-line config entries into bash arrays
for k in p.keys():
fmt = '"{}"'
if '\n' in p[k]:
p[k] = "'\n\t'".join(p[k].split('\n')[1:])
fmt = "('{}')"
p[k] = fmt.format(p[k])
print("""
LIBRARY_NAME="{name}"
LIBRARY_VERSION="{version}"
""".format(**c['metadata']))
print("""
PY3_DEPS={py3deps}
PY2_DEPS={py2deps}
SETUP_CMDS={commands}
CONFIG_TXT={configtxt}
""".format(**p))
EOF`
</code></pre>
| <python><raspberry-pi><python-3.10><ubuntu-22.04> | 2023-10-19 09:56:17 | 1 | 1,072 | omega1 |
77,322,726 | 11,737,958 | How to filter words based on the position using regex in python | <p>I am new to python. I use python 3.8 version. I am trying to filter only the first whole word and then last word from the string except the numbers globally for any format using regex.</p>
<p>Thanks in advance</p>
<pre><code> s = 'abcd xyz efgh 1691 2191.7 1296 15.4 efgh eghj'
print(re.search(r'[A-Za-z]+',s))
s1 = 'abcd xyz:efgh 1691 2191.7 1296 15.4 efgh'
print(re.search(r'(\w+)',s))
</code></pre>
<p><strong>Output:</strong></p>
<pre><code><re.Match object; span=(0, 4), match='abcd'>
<re.Match object; span=(0, 4), match='abcd'>
</code></pre>
<p><strong>Expected Output:</strong></p>
<p>first word</p>
<pre><code><re.Match object; span=(0, 4), match='abcd xyz efgh'>
<re.Match object; span=(0, 4), match='abcd xyz:efgh'>
</code></pre>
| <python><regex> | 2023-10-19 09:52:06 | 1 | 362 | Kishan |
77,322,627 | 4,469,565 | How to convert numbers from an image to a csv file using python | <p>I am trying to extract the words and numbers from this image:</p>
<p><a href="https://i.sstatic.net/0tTTs.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/0tTTs.png" alt="enter image description here" /></a></p>
<p>my desired output is a csv table that directly imitates the table in the image.</p>
<p>Current code:</p>
<pre><code>import cv2
import os,argparse
import pytesseract
from PIL import Image
pytesseract.pytesseract.tesseract_cmd = r'C:\Program Files\Tesseract-OCR\tesseract.exe'
ap=argparse.ArgumentParser()
ap.add_argument("-i","--image",
required=True,
help="Path to the image folder")
ap.add_argument("-p","--pre_processor",
default="thresh",
help="the preprocessor usage")
args=vars(ap.parse_args())
images=cv2.imread(args["image"])
#convert to grayscale image
gray=cv2.cvtColor(images, cv2.COLOR_BGR2GRAY)
#checking whether thresh or blur
if args["pre_processor"]=="thresh":
cv2.threshold(gray, 0,255,cv2.THRESH_BINARY| cv2.THRESH_OTSU)[1]
if args["pre_processor"]=="blur":
cv2.medianBlur(gray, 3)
filename = "{}.bmp".format(os.getpid())
cv2.imwrite(filename, gray)
text = pytesseract.image_to_string(Image.open(filename), config='--psm 7')
os.remove(filename)
print(text)
#cv2.imshow("Image Input", images)
#cv2.imshow("Output In Grayscale", gray)
cv2.waitKey(0)
</code></pre>
<p>processed image output:</p>
<p><a href="https://i.sstatic.net/ugyR2.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/ugyR2.png" alt="enter image description here" /></a></p>
<p>My output is currently:</p>
<pre><code>: RF
</code></pre>
<p>Is there anything obvious i am doing wrong? Is the blur from the words and numbers on the image too difficult to parse?</p>
| <python><python-imaging-library><ocr><tesseract><python-tesseract> | 2023-10-19 09:40:02 | 0 | 944 | Joey |
77,322,418 | 1,632,812 | empty recordset. Why? | <p>In a Odoo 14 shell, I'm doing this</p>
<pre><code>env['model_A'].search(['model_B_id', '=', '17' ])
</code></pre>
<p>I know for sure that there are several records in model_A with the field model_B_id set to 17</p>
<p>I get an empty recordset back</p>
<pre><code>model_A()
</code></pre>
<p>Why ?</p>
| <python><odoo><odoo-14> | 2023-10-19 09:09:44 | 1 | 603 | user1632812 |
77,322,412 | 2,123,706 | Extract all keys from nested Python dictionary | <p>I have a nested Python dictionary, with lower levels containing lists of dictionaries. The nesting goes up to 7 levels.</p>
<p>My sample data:</p>
<pre><code>d = {'a':'ewr0',
'b':1234,
'c':{'c1':1234,'c2':456},
'd':[{'d1':123, 'd2':[12,23,34,45,56], 'd3':{'d4':98,'d5':87}},{'d6':1,'d7':[1,2,3,4,5,6],'d8':{'d9':10,'d10':11}}]}
</code></pre>
<p>I would like to loop through each key, print out/append to list what type of value is held.</p>
<ul>
<li>If the value is a dictionary, say that it is a dictionary, and in the nested one perform the same test.</li>
<li>If the value is a list, cycle through each element of the list, and state what type each element is. And depending on the type, carry out the same analysis.</li>
</ul>
<p>For the above, I would like to have a result:</p>
<pre><code>a: string
b: int
c: dict of length 2
c1: int
c2: int
d: list of 2 elements
ele 1: dict of length 3
d1: int
d2: list of 5 int/string elements
d3: dict of length 2
d4: int
d5: int
ele 2: dict of length 3
d6: int
d7: list of 6 int/string elements
d8: dicts of length 2
d9: int
d10: int
</code></pre>
<p>I am able to extract key info from the first level, but not sure how to get the lower levels with:</p>
<pre><code>for key, value in d.items():
print(f'{key} holds value of type:{type(value)}')
if isinstance(value, dict):
print(f'\tdict has length {len(value)}')
for key2, value2 in value:
print(f'\t{key} holds value of type:{type(value)}')
if isinstance(value, list):
print(f'\tthe list has {len(value)} elements')
for j in range(len(value)):
print(f'\t{type(value[j])}')
</code></pre>
<p>What do I need to do to search deeper into the dictionary, without adding more <code>if's</code>? How can I extract information if the levels deeper than what is currently shown? Any suggestions?</p>
| <python><dictionary> | 2023-10-19 09:08:38 | 2 | 3,810 | frank |
77,322,360 | 3,420,542 | AWS Lambda HTTP 503 when opening amazon urls | <p>Hello I developed an AWS Lambda which makes an http request to amazon url to get the web page content.
The problem I have regards the HTTP 503 error I get every time I run the lambda. Running the code locally it works fine but on AWS it doesn't.</p>
<p>I am using <code>Python 3.7</code> and <code>urllib</code> module</p>
<p>Below the source code</p>
<pre><code>import urllib.request
url = 'https://www.amazon.it/dp/B0786QNS9B'
reqq = urllib.request.Request(url=url, headers={'User-Agent': 'Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/42.0.2311.90 Safari/537.36'})
reddit_file = urllib.request.urlopen(reqq)
</code></pre>
<p>I changed several times the <code>User-agent</code> value but it doesn't work.</p>
<p>I was wondering if the problem could be related to the IP address used by my AWS account when I run the lambda, I checked what's the IP used every time and I see it's always the same, so basically it's a static ip address.</p>
<p>I'm not using the API Gateway service, just the lambda only.</p>
<p>Is it possible to use a different IP address for each run of the lambda function?
Will this solve my issue?</p>
<p>Thanks</p>
| <python><http><aws-lambda><urllib><http-status-code-503> | 2023-10-19 09:01:41 | 0 | 748 | xXJohnRamboXx |
77,322,139 | 1,210,075 | blob.generate_signed_url() cloud urls return SignatureDoesNotMatch after a few days occasionally | <p>When I sign a google cloud storage url in App Engine, the url returns <code>SignatureDoesNotMatch</code> after a few days, but only sometimes.</p>
<p>Based on <a href="https://stackoverflow.com/questions/55231676/cloud-storage-download-urls-fail-after-three-days-maybe-due-to-content-type">Cloud Storage download URLs fail after three days, maybe due to Content-Type?</a>, I thought adding content_type would fix it.</p>
<p>Here was my code:</p>
<pre><code>credentials, _project_id = google.auth.default()
storage_client = storage.Client()
bucket = storage_client.bucket(UPLOAD_BUCKET)
blob = bucket.blob(blob_name)
signing_credentials = impersonated_credentials.Credentials(
source_credentials=credentials,
target_principal=SERVICE_ACCOUNT_EMAIL,
target_scopes = 'https://www.googleapis.com/auth/devstorage.read_only',
lifetime=2 )
url = blob.generate_signed_url(
version="v4",
expiration=timedelta(seconds=MEDIA_EXPIRATION), # 7 days is max
method="GET",
content_type=blob.content_type, # I think this is setting content_type=None, since blob.reload() is necessary to get a non-None value.
credentials=signing_credentials
)
# note: I'm not sure if I could have just used credentials=credentials instead of the impersonated_credentials.
</code></pre>
<p>A small side note: blob.content_type was actually returning None. You have to do <code>blob.reload()</code> to get the content_type. But adding a content_type caused downloads to fail since browsers don't include content_type in the headers when asking for images.</p>
| <python><google-app-engine><google-cloud-storage> | 2023-10-19 08:30:36 | 1 | 2,929 | Scott Driscoll |
77,321,879 | 11,688,559 | Deploying Python functions that compile via Numba on the Google Cloud Platform: state must be maintained | <p>I am running timeseries forecasts for multiple products using the <a href="https://nixtla.github.io/statsforecast/" rel="nofollow noreferrer">StatsForecast library from Nixtla</a>. It is a brilliant library. Its most attractive feature would be the speed and reliability of its autoARIMA script. According to the documentation, it owes its speed to the underlying functions using Numba to compile the Python functions.</p>
<p>It runs the forecasts at the promised speed on my local machine. However, I deployed the same script a 2nd generation Google Cloud function where it is evident that the autoARIMA is compiled every time it is called.</p>
<p>As far as my knowledge goes, this is because the server less nature of Google Cloud Functions does not allow a function to maintain state. As such, it will never maintain a compiled state.</p>
<p>I need the most simple approach to deploy the script to the Google Cloud platform such that the autoARIMA function can compile. In other words, I need the easiest way to maintain state for the function. My proposal is to use a scheduled Virtual Machine (VM) from the Compute Engine. I believe it should maintain state and allow the function to compile only one.</p>
<p>Please verify whether such an approach would work, share similar experiences and suggest solutions that are better than the current proposal.</p>
<h2>EDIT:</h2>
<p>The virtual machine approach will definitely work. However, I am considering different approaches. It seems that one can cache the compiled functions as explained <a href="https://nixtla.github.io/statsforecast/docs/how-to-guides/numba_cache.html" rel="nofollow noreferrer">here</a>. So for a cloud function, I might be able to use an external caching service such as Redis or Memcached. The overhead of calling the API to such a cache client might lead to the run-time not improving. Another approach might be to serialize the compiled function and to import it from cloud Storage.</p>
| <python><google-cloud-platform><virtual-machine><google-compute-engine><numba> | 2023-10-19 07:49:30 | 1 | 398 | Dylan Solms |
77,321,812 | 1,864,294 | ThreadPoolExecutor exits before queue is empty | <p>My goal is to concurrently crawl URLs from a queue. Based on the crawling result, the queue may be extended. Here is the MWE:</p>
<pre><code>import queue
from concurrent.futures import ThreadPoolExecutor
import time
def get(url): # let's assume that the HTTP magic happens here
time.sleep(1)
return f'data from {url}'
def crawl(url, url_queue: queue.Queue, result_queue: queue.Queue):
data = get(url)
result_queue.put(data)
if 'more' in url:
url_queue.put('url_extended')
url_queue = queue.Queue()
result_queue = queue.Queue()
for url in ('some_url', 'another_url', 'url_with_more', 'another_url_with_more', 'last_url'):
url_queue.put(url)
with ThreadPoolExecutor(max_workers=8) as executor:
while not url_queue.empty():
url = url_queue.get()
executor.submit(crawl, url, url_queue, result_queue)
while not result_queue.empty():
data = result_queue.get()
print(data)
</code></pre>
<p>In this MWE, two URLs require another crawl: <code>'url_with_more'</code> and <code>'another_url_with_more'</code>. They are added to the <code>url_queue</code> while crawling.</p>
<p>However, this solution ends before those two 'more' URLs are processed; after running, the <code>url_queue</code> remains to have two entries.</p>
<p>How can I make sure that the ThreadPoolExecutor does not exit too early? Have I misunderstood ThreadPoolExecutor?</p>
| <python><concurrent.futures> | 2023-10-19 07:37:33 | 2 | 20,605 | Michael Dorner |
77,321,688 | 1,123,094 | calling sg.popup when SystemTray (psgtray) running | <pre class="lang-py prettyprint-override"><code>import PySimpleGUI as sg
from psgtray import SystemTray
import pyaudio
import threading
def findAudioDevices():
p = pyaudio.PyAudio()
info = p.get_host_api_info_by_index(0)
numdevices = info.get('deviceCount')
device_info = ""
for i in range(0, numdevices):
if (p.get_device_info_by_host_api_device_index(0, i).get('maxInputChannels')) > 0:
device_info += f"Input Device ID {i} - {p.get_device_info_by_host_api_device_index(0, i).get('name')}\n"
sg.popup('Audio Devices', device_info)
# PySimpleGUI setup
menu_def = ['File', ['Show Audio Devices', 'Open Config', 'Exit']]
tooltip = 'Tooltip'
# Initialize the tray
tray = SystemTray(menu=menu_def, single_click_events=False, tooltip=tooltip, icon=r'Icon.png')
tray.show_message('Sound Switch', 'Sound Switchs Started!')
# Event loop
while True:
menu_item = tray.key
if menu_item == 'Exit':
break
elif menu_item == 'Show Audio Devices':
findAudioDevices() # This function should probably be modified to display a popup instead of printing to console
elif menu_item == 'Open Config':
os.system(f"notepad {config_path}")
tray.close()
</code></pre>
<p>leads to this error</p>
<pre><code>An error occurred when calling message handler
Traceback (most recent call last):
File "C:\Users\willwade\.pyenv\pyenv-win\versions\3.11.4\Lib\site-packages\pystray\_win32.py", line 398, in _dispatcher
return int(icon._message_handlers.get(
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\willwade\.pyenv\pyenv-win\versions\3.11.4\Lib\site-packages\pystray\_win32.py", line 210, in _on_notify
descriptors[index - 1](self)
File "C:\Users\willwade\.pyenv\pyenv-win\versions\3.11.4\Lib\site-packages\pystray\_base.py", line 308, in inner
callback(self)
File "C:\Users\willwade\.pyenv\pyenv-win\versions\3.11.4\Lib\site-packages\pystray\_base.py", line 434, in __call__
return self._action(icon, self)
^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\willwade\.pyenv\pyenv-win\versions\3.11.4\Lib\site-packages\psgtray\psgtray.py", line 138, in _on_clicked
self.window.write_event_value(self.key, item.text)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
AttributeError: 'NoneType' object has no attribute 'write_event_value'
</code></pre>
<p>So it looks like its ignoring sg and using psgtray - but I dont get it. I have a feeling its something to do with the window thread but I have no idea how to fix this?</p>
| <python><pysimplegui><system-tray> | 2023-10-19 07:15:40 | 1 | 2,250 | willwade |
77,321,623 | 13,560,598 | Tensorflow float64 error while running in eager execution | <p>I'm using TF 2.13.0 and I'm getting an error only when eager execution is enabled. Is there a workaround?</p>
<p>The error is</p>
<pre><code>tensorflow.python.framework.errors_impl.InvalidArgumentError: TensorArray dtype is float64 but Op is trying to write dtype float32
</code></pre>
<p>The code is</p>
<pre><code>import tensorflow as tf
# when the next line is uncommented, we get an error
tf.config.run_functions_eagerly(True)
@tf.function(input_signature=[tf.TensorSpec(shape=None,dtype=tf.float64)])
def TrySine(dev):
mytensor = tf.map_fn(fn=lambda t,dev=dev: tf.math.sin(dev*3.14/180.0), elems=tf.ones(shape=(8,),dtype='float64'))
return mytensor
output = TrySine(dev=5.0)
print(output)
</code></pre>
| <python><tensorflow><eager-execution> | 2023-10-19 07:03:01 | 1 | 593 | NNN |
77,320,932 | 1,609,428 | how to constrain a (cubic) regression model to pass through certain points? | <p>Consider the following example:</p>
<pre><code>import statsmodels.formula.api as smf
import random
import pandas as pd
df = pd.DataFrame({'y' : [x**2 + random.gauss(2) for x in range(10)],
'x' : [x for x in range(10)]})
model = smf.ols(data = df, formula = 'y ~ x + I(x**2) + I(x**3)').fit()
df['pred'] = model.predict(df)
df.set_index('x').plot()
</code></pre>
<p><a href="https://i.sstatic.net/o1mKq.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/o1mKq.png" alt="enter image description here" /></a></p>
<p>As you can see, I fit a cubic model to my data and the fit is overall pretty good. However, I would like to constrain my cubic model to have the following values at two specific x points:</p>
<ul>
<li><code>f(0) = 10</code></li>
<li><code>f(8) = 60</code></li>
</ul>
<p>How can I do that in <code>statsmodels</code> or <code>sklearn</code>?
Thanks!</p>
| <python><scikit-learn><statsmodels> | 2023-10-19 04:16:38 | 1 | 19,485 | ℕʘʘḆḽḘ |
77,320,798 | 1,686,628 | Connecting MongoDB from localhost | <p>I have an EC2 instance (<code>EC2_VM</code>) where I can connect mongodb (<code>MONGO_HOST</code>)from</p>
<pre><code>with sshtunnel.open_tunnel(
(EC2_VM, 22),
ssh_username="ec2-user",
ssh_pkey=EC2_KEY,
remote_bind_address=(MONGO_HOST, 27017),
local_bind_address=("0.0.0.0", 27017),
) as tunnel:
print(tunnel.local_bind_port)
# list database names
client = MongoClient(
"mongodb://%s:%s@%s" % (MONGO_USER, MONGO_PASSWORD, "127.0.0.1"),
port=tunnel.local_bind_port,
)
names = client.list_database_names()
print(names)
</code></pre>
<p>But I am getting below error. Any idea whats wrong here? It looks like it has set up the tunnel properly but cant seem to reach mongo still. TIA</p>
<pre><code> File "/lib/python3.11/site-packages/pymongo/topology.py", line 269, in _select_servers_loop
raise ServerSelectionTimeoutError(
pymongo.errors.ServerSelectionTimeoutError: 127.0.0.1:27017: timed out, Timeout: 30s, Topology Description: <TopologyDescription id: 6530a0a24577402c25f1997e, topology_type: Unknown, servers: [<ServerDescription ('127.0.0.1', 27017) server_type: Unknown, rtt: None, error=NetworkTimeout('127.0.0.1:27017: timed out')>]>
</code></pre>
<p>anyone who has docdb set up in aws, how do you guys connect to it from localhost? It works with MongoDB Compass because it provides the ssh tunnel option from GUI but when I am trying to do it via script using pymongo, it's not working</p>
<p>TIA</p>
| <python><pymongo><ssh-tunnel><aws-documentdb> | 2023-10-19 03:27:20 | 1 | 12,532 | ealeon |
77,320,733 | 1,588,847 | Preserving DatetimeIndex `freq` in MultiIndex in Pandas | <p>I have a <code>pd.DatetimeIndex</code> with a <code>freq</code> which I want to use as part of a <code>pd.MultiIndex</code>, but when I do so the freq gets set to <code>None</code>. Can I preserve the freq?</p>
<pre><code>[1]: dates = pd.date_range('2000-01-01', '2000-03-31', freq='M')
labls = pd.Index(['AA','BB'])
multi = pd.MultiIndex.from_product([dates, labls])
</code></pre>
<p>As expected, <code>dates</code> has a monthly <code>freq='M'</code>:</p>
<pre><code>[2]: dates
[2]: DatetimeIndex(['2000-01-31', '2000-02-29', '2000-03-31'],
dtype='datetime64[ns]', freq='M')
</code></pre>
<p>Level 0 in the multi-index is a DatetimeIndex, but the freq is <code>None</code>, presumably due to the repeated values.</p>
<pre><code>[3]: multi.get_level_values(0)
[3]: DatetimeIndex(['2000-01-31', '2000-01-31', '2000-02-29', '2000-02-29',
'2000-03-31', '2000-03-31'],
dtype='datetime64[ns]', freq=None)
</code></pre>
<p>Is there any way to recover the freq that went in?</p>
<p>PS: I see that if you use <code>pd.PeriodIndex</code> instead of DatetimeIndex, then the freq <em>is</em> preserved.</p>
<p>Unfortunately PeriodIndex won't work for me, since I use both month-start (MS) and month-end (M) frequencies with DatetimeIndex, and they're not differentiated in PeriodIndex.</p>
| <python><pandas><multi-index> | 2023-10-19 03:09:44 | 0 | 2,124 | Jetpac |
77,320,676 | 10,431,629 | Update a Table using a Python Dictionary keys and values for where Clause | <p>I wish to update a SQL Server table. Its a big table (with multiple items to be updated at one shot) and I have an issue in updating that by putting where statements manually. So I created a Python dictionary to use the key value pairs as set value for the value (if it exists) for where condition from the key.
If the dictionary key is blank or null it sets the key as value or else it sets the value. (just to add both values are expected to be in the same column which I want to update).</p>
<p>So the problem in concrete is a follows:</p>
<p>Say I have created a dictionary as follows:</p>
<pre><code>d = {'a': '', 'b':'x1', 'c': 'y1', 'd':'', 'e':'13f', 'f':'o'}
</code></pre>
<p>So say I have a table like SAMPLETABLE and I want to update a column SAMPLECOL of the table as follows:</p>
<pre><code> UPDATE SAMPLE TABLE
SET SAMPLECOL = 'a'
WHERE SAMPLECOL = 'a'
SET SAMPLECOL = 'x1'
WHERE SAMPLECOL = 'b'
SET SAMPLECOL = 'y1'
WHERE SAMPLECOL = 'c'
SET SAMPLECOL = 'd'
WHERE SAMPLECOL = 'd'
SET SAMPLECOL = '13f'
WHERE SAMPLECOL = 'e'
SET SAMPLECOL = 'o'
WHERE SAMPLECOL = 'f'
</code></pre>
<p>So I tried in Python like this:</p>
<pre><code> for key, values in d.items():
if len(d[key]) > 1:
sql = """UPDATE SAMPLETABLE
SET SAMPLECOL = '{}' where id
='{}'""".format(d[key],key)
else:
d[key] = key
sql = """UPDATE SAMPLETABLE
SET SAMPLECOL = '{}' where id = '{}'""".format(d[key],
key)
</code></pre>
<p>But its not working as a single update statement what I want. Not sure where I am going wrong. Any help appreciated.</p>
| <python><sql-server><pandas><dictionary><sql-update> | 2023-10-19 02:49:30 | 1 | 884 | Stan |
77,320,560 | 471,478 | map numpy array by numpy array | <p>I want to map the values in an array by some other array which maps the values from that array to new values by index.</p>
<p>Example:</p>
<pre><code>arr = np.array([0, 1, 2, 3, 0, 4, 3, 3, 1, 4, 0, 0, 0, 2])
tbl = np.array([0, 1, 1, 0, 2])
res = np.array([tbl[x] for x in arr])
print(arr) # [0 1 2 3 0 4 3 3 1 4 0 0 0 2]
print(tbl) # [0 1 1 0 2]
print(res) # [0 1 1 0 0 2 0 0 1 2 0 0 0 1]
</code></pre>
<p>Is there a faster way to do this using numpy?</p>
<p>I am expecting <code>tbl</code> (and thus the number of different values in <code>arr</code>) to be very small (tens of values) but <code>arr</code> itself to be very large (millions of entries).</p>
| <python><arrays><numpy> | 2023-10-19 02:12:39 | 1 | 12,364 | scravy |
77,320,495 | 5,792,426 | How to bypass the TTL of 30 seconds while waiting response of post request from external server | <p>I am building a view in Django , which will send a POST request to chat-gpt API, the problem that I am facing that the response from chat-gpt is taking more than 30 seconds (we are having a long prompts)
the idea that I have in mind is:</p>
<ol>
<li>The client sends a request to the server.</li>
<li>The server writes the request to a message queue and returns a
message ID to the client.</li>
<li>Another worker is listening to the message queue. It retrieves the
request from the queue, sends it to OpenAI,and then writes the
response back to the message queue.</li>
<li>The client periodically sends requests to the server to ask for the response using the previously received message ID.</li>
<li>The server responds with "pending" until it finds the response in the message queue, at which point it returns the actual response to the client.</li>
</ol>
<p>the problem is that I have no idea how to achieve that ...
I am using gke for hosting the application, I already have some cronjob using some views as well
any idea how to deal with this will be so much appreciated
here is an example of the view request:</p>
<pre class="lang-py prettyprint-override"><code>import openai
from app.forms_prompt import PromptForm
from app.models import ModelName
from django.conf import settings
from django.contrib.auth.decorators import login_required
from django.http import HttpRequest
from django.http import HttpResponse
from django.shortcuts import get_object_or_404
from django.shortcuts import redirect
from django.shortcuts import render
from django.utils.translation import gettext_lazy as _
@login_required
def form_prompt(request: HttpRequest, pk: int) -> HttpResponse:
instance = get_object_or_404(ModelName, pk=pk)
openai.api_key = settings.OPENAI_KEY
form = PromptForm(request.POST or None, instance=setkeyword)
# check if form data is valid
if form.is_valid():
prompt = form.cleaned_data["text"]
response = openai.ChatCompletion.create(
model="gpt-4",
messages=[
{"role": "user", "content": prompt},
],
)
instance.specific_field = response["choices"][0]["message"]["content"]
form.save()
return redirect("view_instance_name", instance.pk)
return render(request, "view_prompt_name.html", context)
</code></pre>
<p>any suggestion how to follow the solution, will be very helpful, thank you</p>
| <python><django><kubernetes><google-cloud-platform><openai-api> | 2023-10-19 01:48:36 | 1 | 557 | ladhari |
77,320,462 | 2,954,547 | Row indexing/subsetting with only part of a MultiIndex | <p>Given the following DataFrame and MultiIndex instances:</p>
<pre class="lang-py prettyprint-override"><code>import pandas as pd
df = pd.DataFrame(
{'x': [1,2,3]},
index=pd.MultiIndex.from_tuples(
[('a', 'p', 1), ('b', 'z', 3), ('a', 'q', 2)],
names=['l1', 'l2', 'l3'],
),
)
idx = pd.MultiIndex.from_tuples(
[('a', 'q'), ('a', 'p')],
names=['l1', 'l2'],
)
</code></pre>
<p>How do I subset <code>df</code> using <code>idx</code>? My naive attempt failed (Pandas v2.0.3) :</p>
<pre class="lang-py prettyprint-override"><code>df.loc[idx]
</code></pre>
<pre class="lang-none prettyprint-override"><code>ValueError: operands could not be broadcast together with shapes (2,2) (3,) (2,2)
</code></pre>
<p>Ideally I'd get a nicer error message than that, but clearly <code>.loc</code> isn't meant to subset using a partial MultiIndex.</p>
<p>I came up with two workarounds, but they're both kind of ugly for various reasons:</p>
<pre class="lang-py prettyprint-override"><code>df.loc[
pd.IndexSlice[tuple(idx.get_level_values(n) for n in idx.names if n in df.index.names)]
]
</code></pre>
<pre class="lang-py prettyprint-override"><code>df.join(pd.DataFrame({'y': 1}, index=idx)).drop('y')
</code></pre>
<p>Is there an idiomatic way to do this? Or is this ultimately a Pandas feature request?</p>
| <python><pandas><dataframe> | 2023-10-19 01:37:30 | 1 | 14,083 | shadowtalker |
77,320,311 | 6,759,459 | HTTP401 Error in Twilio WhatsApp when retrieving Audio Message | <p>I am trying to access the audio message a user sends over WhatsApp via the Twilio API.</p>
<p>Here's my error:</p>
<blockquote>
<p>ERROR:services.audio_processing:HTTP error: 401 Client Error:
Unauthorized for url:
<a href="https://api.twilio.com/2010-04-01/Accounts/ACa6d0fade9bfb1f8bfdd3238b6ca522b1/Messages/MM503f349644b344315426b6cb68e37ea9/Media/ME0bdd87db3d7c7e7fef2e153694b669e1" rel="nofollow noreferrer">https://api.twilio.com/2010-04-01/Accounts/ACa6d0fade9bfb1f8bfdd3238b6ca522b1/Messages/MM503f349644b344315426b6cb68e37ea9/Media/ME0bdd87db3d7c7e7fef2e153694b669e1</a>
INFO:twilio.http_client:-- BEGIN Twilio API Request --
INFO:twilio.http_client:POST Request:
<a href="https://api.twilio.com/2010-04-01/Accounts/ACa6d0fade9bfb1f8bfdd3238b6ca522b1/Messages.json" rel="nofollow noreferrer">https://api.twilio.com/2010-04-01/Accounts/ACa6d0fade9bfb1f8bfdd3238b6ca522b1/Messages.json</a></p>
</blockquote>
<p>My code for the function I use to extract the audio is as follows:</p>
<pre><code>import os
import logging
import uuid
import requests
import urllib.request
from config.env_var import *
import openai
from twilio.rest import Client
from pydub import AudioSegment
logger = logging.getLogger(__name__)
# Find your Account SID and Auth Token at twilio.com/console
client = Client(TWILIO_ACCOUNT_SID, TWILIO_AUTH_TOKEN)
def process_audio(media_url):
ogg_file_path = f'{OUTPUT_DIR}/{uuid.uuid1()}.ogg'
mp3_file_path = f'{OUTPUT_DIR}/{uuid.uuid1()}.mp3'
try:
# Authenticated request to get the media file
response = client.request("GET", media_url)
print("Response status code: ", response.status_code)
if response.status_code == 200:
with open(ogg_file_path, 'wb') as file:
file.write(response.content)
else:
raise Exception(f'Failed to download file: {response.status_code}')
# Load the OGG file
audio_file = AudioSegment.from_ogg(ogg_file_path)
# Export the file as MP3
audio_file.export(mp3_file_path, format="mp3")
with open(mp3_file_path, 'rb') as audio_file:
transcript = openai.Audio.transcribe(
'whisper-1', audio_file, api_key=OPENAI_API_KEY)
return {
'status': 1,
'text': transcript['text']
}
except Exception as e:
logger.error(f'Error at transcript_audio: {e}')
return {
'status': 0,
'text': 'Transcription failed'
}
finally:
for path in [ogg_file_path, mp3_file_path]:
if os.path.exists(path):
os.unlink(path)
</code></pre>
<p>I suspected the error was due to not using the right credentials, but I checked my Twilio Auth Token and Twilio Account SID and that wasn't it.</p>
| <python><twilio><chatbot><openai-whisper> | 2023-10-19 00:30:06 | 1 | 926 | Ari |
77,320,301 | 1,634,986 | Drawing text along an arbitrary path using matplotlib TextPath | <p>Similar to drawing text along an arbitrary path using SVG <code>textPath</code>, I would like to do something similar using <code>matplotlib</code> From <a href="https://stackoverflow.com/a/44521963/1634986">this SO answer</a>, it is possible to do by subclassing the <code>Text</code> class. However, I'd like to use <code>TextPath</code> instead as it scales better as the plot domain changes. The following example is not very general, but at least it shows the basic idea.</p>
<pre class="lang-py prettyprint-override"><code>import numpy as np
from matplotlib import pyplot as plt
from matplotlib import text as mtext
from matplotlib import patches as mpatch
from matplotlib import transforms as mtrans
fig, ax = plt.subplots(1, 1, figsize=(7, 7), dpi=100)
N = 100
cx = -np.cos(np.linspace(0, 2*np.pi, N))
cy = np.sin(np.linspace(0, 2*np.pi, N))
ax.plot(cx, cy, color='black')
tpath = mtext.TextPath((0, 0), 'T', .5)
tw = tpath.get_extents().width
th = tpath.get_extents().height
angle = np.rad2deg(np.arctan2(cy[0], cx[0]))
tpath = tpath.transformed(
mtrans.Affine2D()
.translate(-0.5 * tw, -0.5 * th)
.rotate_deg(angle - 90)
.translate(cx[0], cy[0])
)
tpath2 = mtext.TextPath((0, 0), 'E', .5)
tw2 = tpath2.get_extents().width
th2 = tpath2.get_extents().height
angle2 = np.rad2deg(np.arctan2(cy[5], cx[5]))
tpath2 = tpath2.transformed(
mtrans.Affine2D()
.translate(-0.5 * tw, -0.5 * th)
.rotate_deg(angle2 - 90)
.translate(cx[5], cy[5])
)
tpath3 = mtext.TextPath((0, 0), 'S', .5)
tw3 = tpath3.get_extents().width
th3 = tpath3.get_extents().height
angle3 = np.rad2deg(np.arctan2(cy[11], cx[11]))
tpath3 = tpath3.transformed(
mtrans.Affine2D()
.translate(-0.5 * tw, -0.5 * th)
.rotate_deg(angle3 - 90)
.translate(cx[11], cy[11])
)
tpath4 = mtext.TextPath((0, 0), 'T', .5)
tw4 = tpath4.get_extents().width
th4 = tpath4.get_extents().height
angle4 = np.rad2deg(np.arctan2(cy[16], cx[16]))
tpath4 = tpath4.transformed(
mtrans.Affine2D()
.translate(-0.5 * tw, -0.5 * th)
.rotate_deg(angle4 - 90)
.translate(cx[16], cy[16])
)
tpatch = mpatch.PathPatch(tpath, facecolor='red', edgecolor='black',
linewidth=3)
tpatch2 = mpatch.PathPatch(tpath2, facecolor='red', edgecolor='black',
linewidth=3)
tpatch3 = mpatch.PathPatch(tpath3, facecolor='red', edgecolor='black',
linewidth=3)
tpatch4 = mpatch.PathPatch(tpath4, facecolor='red', edgecolor='black',
linewidth=3)
ax.add_patch(tpatch)
ax.add_patch(tpatch2)
ax.add_patch(tpatch3)
ax.add_patch(tpatch4)
plt.show()
</code></pre>
<p><a href="https://i.sstatic.net/VD6Bo.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/VD6Bo.png" alt="enter image description here" /></a></p>
<p>Obviously, the spacing of the text in this example is hard-coded. To generalize this, you would need to take any path and be able to calculate the angles at various points where the text will go. In order to do that, you will need to have some idea of the size of the text to properly space it. One thought I had was to perhaps subclass <code>TextPath</code> or create a <code>Path</code> subclass that can be handled by <code>TextToPath</code>. Looking at the source code for those classes shows the use of the <code>matplotlib._text_helpers</code> module that has methods to calculate the text size. What I do not understand is what units the text sizes are calculated in. I think it might be pixels, but I cannot be sure as it is not documented anywhere.</p>
<p>I am hoping there is someone who has a better idea about how to handle the text size issue. Beyond that, any other ideas on how to approach adding text along a curved path would also be appreciated.</p>
| <python><matplotlib> | 2023-10-19 00:24:36 | 0 | 384 | nawendt |
77,320,211 | 310,399 | pathlib replace() fails to move existing files | <p>I have a script that dumps output into a "RUNNING" directory tree. When the script completes, I want to move the output into a PASS or FAIL directory tree, depending on how things went.</p>
<p>I'm having a devil of a time getting pathlib replace() to perform the move. For example:</p>
<pre><code>source_dir = pathlib.Path('/home/me/output/12345-12345/RUNNING/20231018T160743')
dest_dir = pathlib.Path('/home/me/output/12345-12345/PASS/20231018T160743')
source_dir.replace(dest_dir)
FileNotFoundError: [Errno 2] No such file or directory: '/home/me/output/12345-12345/RUNNING/20231018T160743' -> '/home/me/output/12345-12345/PASS/20231018T160743'
</code></pre>
<p>This is nuts because I can verify the source dir is a directory and exists, and I can verify the destination dir does NOT exist before attempting the move:</p>
<pre><code>source_dir = pathlib.Path('/home/me/output/12345-12345/RUNNING/20231018T160743')
dest_dir = pathlib.Path('/home/me/output/12345-12345/PASS/20231018T160743')
print(f"Moving source dir: {type(source_dir)} to dest dir: '{dest_dir}'")
print(f"source dir is dir: {source_dir.is_dir()}")
print(f"source dir exists: {source_dir.exists()}")
print(f"dest dir is dir: {dest_dir.is_dir()}")
print(f"dest dir exists: {dest_dir.exists()}")
source_dir.replace(dest_dir)
>>> source dir is dir: True
>>> source dir exists: True
>>> dest dir is dir: False
>>> dest dir exists: False
</code></pre>
<p>Per some other questions on Stack Overflow, I've even enlisted psutil to make sure I don't have some open files that are preventing the directory to be moved.</p>
<pre><code>import psutil
proc = psutil.Process()
print(f"{proc.open_files()=}")
</code></pre>
<p>Turned out I had a few logging File Handlers still open. I've closed them and still get the same error.</p>
| <python><pathlib> | 2023-10-18 23:54:07 | 1 | 16,427 | JS. |
77,320,064 | 3,821,009 | polars read_json schema named colums with inferred types | <p>The docs for <a href="https://pola-rs.github.io/polars/py-polars/html/reference/api/polars.read_json.html#polars.read_json" rel="nofollow noreferrer">polars.read_json</a> say:</p>
<blockquote>
<p>schema : Sequence of str, (str,DataType) pairs, or a {str:DataType,} dict
The DataFrame schema may be declared in several ways:</p>
<ul>
<li>As a dict of {name:type} pairs; if type is None, it will be auto-inferred.</li>
<li>As a list of column names; in this case types are automatically inferred.</li>
<li>As a list of (name,type) pairs; this is equivalent to the dictionary form.</li>
</ul>
</blockquote>
<p>However I'm unable to use schema with any of the three suggested ways that still rely on automatic type inferrence:</p>
<pre><code>import polars as pl
import io
print(pl.__version__)
js = io.StringIO('{"j":null,"k":1,"l":null}')
print(pl.read_json(js))
try:
print(pl.read_json(js, schema='j k l'.split()))
except Exception as e:
print(e)
try:
print(pl.read_json(js, schema={key:None for key in 'j k l'.split()}))
except Exception as e:
print(e)
try:
print(pl.read_json(js, schema=[(key, None) for key in 'j k l'.split()]))
except Exception as e:
print(e)
</code></pre>
<p>produces:</p>
<pre><code>0.19.9
k (i64)
1
shape: (1, 1)
argument 'schema': 'list' object cannot be converted to 'PyDict'
argument 'schema': A NoneType object is not a recognised polars DataType. Hint: use the class without instantiating it.
argument 'schema': 'list' object cannot be converted to 'PyDict'
</code></pre>
<p>Is there a way to do what I'm after?</p>
| <python><python-polars> | 2023-10-18 23:02:24 | 0 | 4,641 | levant pied |
77,319,957 | 3,103,957 | Necesscity of pip inside conda environment | <p>When Anaconda installed, additional packages can be installed in the environments using <code>conda install</code> command. But I see people using <code>pip</code> command inside an activated environment created/managed by conda.</p>
<p>Are there any reasons why it becomes necessary to use <code>pip</code> when we have Anaconda installed already?</p>
| <python><pip><anaconda> | 2023-10-18 22:28:01 | 1 | 878 | user3103957 |
77,319,904 | 13,084,917 | Why am I getting the same result in different elements with Selenium? | <p>I have a simple question. I am not sure why - maybe a really silly mistake - but I couldn't figure out why this code gives me the same result.</p>
<p>The code:</p>
<pre><code>rows = table.find_elements(By.XPATH, "//div[@role='row']")
for row in rows:
print(row)
print(row.find_element(By.XPATH, '//*[@role="gridcell"][1]'))
print(row.find_element(By.XPATH, '//*[@role="gridcell"][1]/span'))
print(row.find_element(By.XPATH, '//*[@role="gridcell"][1]/span/a[1]'))
name = row.find_element(By.XPATH, '//div[@role="gridcell"][1]/span/a[1]')
</code></pre>
<p>The output:</p>
<pre><code><selenium.webdriver.remote.webelement.WebElement (session="0f06562267ccaa7755c3e9a20542161e", element="8828F8096DD88B9082D0630AAD96DF91_element_74")>
<selenium.webdriver.remote.webelement.WebElement (session="0f06562267ccaa7755c3e9a20542161e", element="8828F8096DD88B9082D0630AAD96DF91_element_96")>
<selenium.webdriver.remote.webelement.WebElement (session="0f06562267ccaa7755c3e9a20542161e", element="8828F8096DD88B9082D0630AAD96DF91_element_97")>
<selenium.webdriver.remote.webelement.WebElement (session="0f06562267ccaa7755c3e9a20542161e", element="8828F8096DD88B9082D0630AAD96DF91_element_98")>
<selenium.webdriver.remote.webelement.WebElement (session="0f06562267ccaa7755c3e9a20542161e", element="8828F8096DD88B9082D0630AAD96DF91_element_75")>
<selenium.webdriver.remote.webelement.WebElement (session="0f06562267ccaa7755c3e9a20542161e", element="8828F8096DD88B9082D0630AAD96DF91_element_96")>
<selenium.webdriver.remote.webelement.WebElement (session="0f06562267ccaa7755c3e9a20542161e", element="8828F8096DD88B9082D0630AAD96DF91_element_97")>
<selenium.webdriver.remote.webelement.WebElement (session="0f06562267ccaa7755c3e9a20542161e", element="8828F8096DD88B9082D0630AAD96DF91_element_98")>
</code></pre>
<p>As you can see, the row is different but after everything is same, what is the problem here? (It is like this for every row in rows)</p>
| <python><selenium-webdriver> | 2023-10-18 22:13:34 | 1 | 884 | omerS |
77,319,901 | 9,760,446 | Show percentage of total in pandas pivot table with multiple columns based on single field in dataframe | <p>Example dataframe, also available as <a href="https://onecompiler.com/python/3zqvhgc2g" rel="nofollow noreferrer">a fiddle</a>:</p>
<pre><code>import pandas as pd
d = {
"year": [2021, 2021, 2021, 2021, 2022, 2022, 2022, 2023, 2023, 2023, 2023],
"type": ["A", "B", "B", "A", "A", "B", "A", "B", pd.NA, "B", "A"],
"observation": [22, 11, 67, 44, 2, 16, 78, 9, 10, 11, 45]
}
df = pd.DataFrame(d)
df_pivot = pd.pivot_table(
df,
values="observation",
index="year",
columns="type",
aggfunc="count"
)
</code></pre>
<p>The pivot table produces the desired output by <em>count</em> (this is intentional, I do <em>not</em> want the sum of observations, I want a row count):</p>
<pre><code>>>> print(df_pivot)
type A B
year
2021 2 2
2022 2 1
2023 1 2
</code></pre>
<p>However, I would like to show the percentage divided into total for each row by types "A" and "B" (the values of the "type" column in the dataframe). Note that not all rows have a type, some are NA (one is NA in this sample data to illustrate this). It's fine to ignore these unpopulated values in calculations. This also means that the "total" may be different in each row and is based on the sum of counted values in each type (i.e., count of A + count of B for each year).</p>
<p>I have tried multiple ways but it only seems to work when I isolate each specific type one at a time. I have not been able to figure out how to do it where it has similar output only showing the percentage of the total instead of the count. My lambda functions for <code>aggfunc</code> seem to result in incorrect values not reflective of the correct percentages.</p>
<p>Example desired output:</p>
<pre><code>>>> print(df_desired_output)
type A B
year
2021 0.50 0.50
2022 0.66 0.33
2023 0.33 0.66
</code></pre>
<p>How do I get this desired output?</p>
| <python><pandas> | 2023-10-18 22:11:26 | 2 | 1,962 | Arthur Dent |
77,319,872 | 4,707,978 | Text to Tag similarity word2vec | <p>Our users will give a 2 to 3 sentence description about their profession.
Example user A (profile description): <code>I am a data scientist living in Berlin, I like Japanese food and I am also interested in arts.</code></p>
<p>Then they also give a description about what kind of person they are looking for.
Example user B (looking for description): <code>I am looking for a data scientist, sales guy and an architect for my new home</code>.</p>
<p>We want to match these on the basis that user A is a data scientist and user B is looking for a data scientist.</p>
<p>At first we required the user to hand select the tags they want to be matched on.
And example of the kind of tags we provided:</p>
<pre><code>Environmental Services
Events Services
Executive Office
Facilities Services
Human Resources
Information Services
Management Consulting
Outsourcing/Offshoring
Professional Training & Coaching
Security & Investigations
Staffing & Recruiting
Supermarkets
Wholesale
Energy & Mining
Mining & Metals
Oil & Energy
Utilities
Manufacturing
Automotive
Aviation & Aerospace
Chemicals
Defense & Space
Electrical & Electronic Manufacturing
Food Production
Industrial Automation
Machinery
Japanese Food
...
</code></pre>
<p>This system kinda works but we have a lot of tags and want to create more 'distant' relations.</p>
<p>So we need:</p>
<ul>
<li>to know which parts are important, we could use POS-tagging for this, to extract the 'data science', 'japanese food' etc?</li>
<li>and then compare the vectors of each part; e.g. 'data science' with 'statistics' is a good match, and 'japanese food' and 'asian food' is a good match.</li>
<li>and set a threshold.</li>
<li>and this should result in a more convenient way of matching right?</li>
</ul>
| <python><machine-learning><artificial-intelligence><word2vec><part-of-speech> | 2023-10-18 22:04:40 | 2 | 3,431 | Dirk |
77,319,805 | 1,887,919 | Vectorization of complicated matrix calculation in Python | <p>I have a 3x3 matrix which I can calculate as follows</p>
<pre class="lang-py prettyprint-override"><code> e_matrix = np.array([m[i]*m[j]-n[i]*n[j] for i in range(3) for j in range(3)])
</code></pre>
<p>where <code>m</code> and <code>n</code> are both length-3 vectors. This works ok ✅</p>
<p>Now suppose that <code>m</code> and <code>n</code> are matrices of shape <code>(K,3)</code>. I want to do an analogous calculation to the above and get an <code>e_matrix</code> of shape <code>(3,3,K)</code>.</p>
<p>I realise I can just do a naive approach e.g.</p>
<pre class="lang-py prettyprint-override"><code>
e_matrix = np.zeros((3,3,K))
for k in range(K):
e_matrix[:,:,k] = np.array([m[k,i]*m[k,j]-n[k,i]*n[k,j] for i in range(3) for j in range(3)])
</code></pre>
<p>Does a vectorized approach exist? Ideally one that works with Numba/JIT compiling (so no e.g. <code>np.tensordot</code>).</p>
| <python><numpy><matrix><vectorization><numba> | 2023-10-18 21:49:03 | 3 | 923 | user1887919 |
77,319,729 | 160,245 | How to store decimals and dates using PyMongo from a Python Dictionary (to MongoDB) | <p>Here is a code fragment that I used to test. I'm trying to determine what happens when you use dictionary vs JSON style, and when you use a variable.</p>
<pre><code> from decimal import *
from pymongo import MongoClient
my_int_a = "21"
my_int_b = 21
row_dict = {}
row_dict['key'] = 'my string (default)'
row_dict['myInteger'] = int(21)
row_dict['myInteger2'] = 21
row_dict['myInteger3'] = my_int_a
row_dict['myInteger4'] = my_int_b
# row_dict['myCurrencyDecimal'] = Decimal(25.97)
row_dict['myCurrencyDouble'] = 25.97
row_dict['myDateInsertedFmt'] = datetime.datetime.now().strftime("%Y-%m-%dT%H:%M:%S.000Z")
row_dict['myDateInsertedNow'] = datetime.datetime.now()
db_collection_test_datatypes.insert_one(row_dict) db_collection_test_datatypes.insert_one(row_dict)
json_dict = {
'key': 'my json key',
'myInteger': 21,
'myCurrencyDouble': 25.97
}
db_collection_test_datatypes.insert_one(json_dict)
</code></pre>
<p>This is what I see in the database:</p>
<pre><code>{
"_id": ObjectId("653052a351416c5223bbeee7"),
"key": "my string (default)",
"myInteger": NumberInt("21"),
"myInteger2": NumberInt("21"),
"myInteger3": "21",
"myInteger4": NumberInt("21"),
"myCurrencyDouble": 25.97,
"myDateInsertedFmt": "2023-10-18T16:48:19.000Z",
"myDateInsertedNow": ISODate("2023-10-18T16:48:19.411Z")
} and
{
"_id": ObjectId("65305109485c0394d26e8983"),
"key": "my json key",
"myInteger": NumberInt("21"),
"myCurrencyDouble": 25.97
}
</code></pre>
<p>NOTE: There is no ISODate wrapper on the formatted date. Somewhere I saw that the above is the format to use for storing a date in MongoDB. So if I have a date, I need to load it to a date type. (I have changed this question several times since I first posted it, where it wasn't an ISODATE).</p>
<p>When I try to use the decimal (which is commented out above), I get this error:</p>
<blockquote>
<p>bson.errors.InvalidDocument: cannot encode object:
Decimal('25.969999999999998863131622783839702606201171875'), of type:
<class 'decimal.Decimal'></p>
</blockquote>
<p>Do I have to store the floating point value?</p>
<p>And Part 2 of my question, if I use JSON with quotes, then numbers arestored as strings? JSON seems to lack standards, many JSONs I see have all values in quotes, even the numbers. How would I deal with that?</p>
<p>In my actual application, I'm parsing a CSV into strings, but then I noticed that even the numbers were being stored as string instead of numbers, which I is why I wrote the above test and asked this question.</p>
| <python><python-3.x><mongodb><decimal><pymongo> | 2023-10-18 21:32:09 | 2 | 18,467 | NealWalters |
77,319,639 | 3,377,314 | Running a shiny python app from a script with `reload` | <p>I was trying to run the example python shiny app directly from a script</p>
<pre><code>"""Test shiny app."""
import argparse
from shiny import App, render, ui
app_ui = ui.page_fluid(
ui.input_slider("n", "N", 0, 100, 20),
ui.output_text_verbatim("txt"),
)
def server(input, output, _session):
@output
@render.text
def txt():
return f"n*2 is {input.n() * 2}."
app = App(app_ui, server)
if __name__ == "__main__":
ap = argparse.ArgumentParser(description="Parse args passed into the Shiny app")
ap.add_argument("--host", help="URL of host", default="127.0.0.1")
ap.add_argument("--port", help="Port to use", type=int, default=8888)
ap.add_argument("--reload", help="Enable auto-reload", action=argparse.BooleanOptionalAction, default=True)
args = ap.parse_args()
app.run(host=args.host, port=args.port, reload=args.reload)
</code></pre>
<p>When I run this script from a python environment which has shiny installed as</p>
<pre><code>python test_app.py
</code></pre>
<p>I see the following error</p>
<pre><code>WARNING: Current configuration will not reload as not all conditions are met,please refer to documentation.
WARNING: You must pass the application as an import string to enable 'reload' or 'workers'.
</code></pre>
<p>It works well if I set <code>reload=False</code>. Is there a workaround for this?</p>
<p>(EDIT: Fixed argparse issue pointed out by @relent95 in the answer)</p>
| <python><py-shiny> | 2023-10-18 21:12:53 | 1 | 969 | Devil |
77,319,595 | 4,212,875 | Python script containing connection to psycopg2 exits without error on Windows Command Prompt | <p>When I try to run a python script on windows command prompt using</p>
<pre><code>"C:\path_to_conda_env\anaconda3\envs\virutal_env\python.exe" "C:\path_to_script\script.py"
</code></pre>
<p>the script executes until it hits a connection I'm making to using <code>psycopg2</code> and then exits without any messages. However, running the same command above on Anaconda prompt works with no issues. What could be the cause of this?</p>
| <python><python-3.x><windows><command-line> | 2023-10-18 21:05:27 | 0 | 411 | Yandle |
77,319,523 | 19,123,103 | ValueError: putmask: mask and data must be the same size | <p>I have a pandas dataframe and I wanted to replace its index values using the <code>where()</code> method, but I got the following error.</p>
<blockquote>
<p>ValueError: putmask: mask and data must be the same size</p>
</blockquote>
<p>How do I solve this error?</p>
<p>Steps to reproduce the error:</p>
<pre class="lang-py prettyprint-override"><code>df = pd.DataFrame({'col': [1, 3, 5]})
df.index.where(lambda x: x>0, 10) # ValueError: putmask: mask and data must be the same size
</code></pre>
<p>In the above example, I expect to replace index=0 by index=10.</p>
| <python><pandas><valueerror> | 2023-10-18 20:50:36 | 1 | 25,331 | cottontail |
77,319,516 | 21,024,780 | Lazy import from a module (a.k.a. lazy evaluation of variable) | <p>Lazy imports in Python have been long discussed and some proposals (for example the <a href="https://peps.python.org/pep-0690/" rel="nofollow noreferrer">PEP609 - Lazy Imports</a>) have been made to make it a built-in (optional) feature in the future.</p>
<p>I am developing a CLI package, so startup time is very important, and I would like to speed it up by lazy loading some of the modules I am using.</p>
<p><strong>What I have so far</strong><br>
By modifying the <a href="https://docs.python.org/3/library/importlib.html#implementing-lazy-imports" rel="nofollow noreferrer">function to implement lazy imports</a> from Python's <a href="https://docs.python.org/3/library/importlib.html" rel="nofollow noreferrer">importlib documentation</a>, I built the following <code>LazyImport</code> class:</p>
<pre class="lang-py prettyprint-override"><code>import importlib.util
import sys
from types import ModuleType
class LazyImport:
def __init__(self):
pass
def __new__(
cls,
name: str,
) -> type(ModuleType):
try:
return sys.modules[name]
except KeyError:
spec = importlib.util.find_spec(name)
if spec:
loader = importlib.util.LazyLoader(spec.loader)
spec.loader = loader
module = importlib.util.module_from_spec(spec)
sys.modules[name] = module
loader.exec_module(module)
return module
else:
raise ModuleNotFoundError(f"No module named '{name}'") from None
</code></pre>
<p><em>Note:</em> This is the best way I could think of to turn the function to a class, but I'm welcoming feedback on this too if you have a better way.</p>
<p>This works just fine for top-level module imports:</p>
<p>Instead of importing (for example) <code>xarray</code> as</p>
<pre class="lang-py prettyprint-override"><code>import xarray as xr
</code></pre>
<p>I would run</p>
<pre class="lang-py prettyprint-override"><code>xr = LazyImport('xarray')
</code></pre>
<p>and everything works as expected, with the difference that the <code>xarray</code> module is added to <code>sys.modules</code> but it is not loaded in memory yet (the module scripts are not run yet).
<br>
The module gets loaded into memory (so the module scripts run) only when the variable <code>xr</code> is first referenced (for example by calling a method/submodule or simply by referencing it as it is).
So, for the example above, any of these statements would load the <code>xarray</code> module into memory:</p>
<ul>
<li><code>xr.DataArray([1,2,3])</code></li>
<li><code>print(xr)</code></li>
<li><code>xr</code></li>
</ul>
<p><strong>What I want</strong><br>
Now I would like to be able to achieve the same result, but when I load a Class, function or variable from a module.
<br>
So (for example) instead of importing the <code>xarray.DataArray</code> Class through:</p>
<pre class="lang-py prettyprint-override"><code>from xarray import DataArray as Da
</code></pre>
<p>I want to have something like:</p>
<pre class="lang-py prettyprint-override"><code>Da = LazyImport('DataArray', _from='xarray')
</code></pre>
<p>so that the <code>xarray</code> module is added to <code>sys.modules</code> but not loaded in memory yet, and will get loaded only when I first reference the <code>Da</code> variable. The <code>Da</code> variable will reference the <code>DataArray</code> Class of the <code>xarray</code> module.</p>
<p><strong>What I tried</strong><br>
I tried some options such as</p>
<pre class="lang-py prettyprint-override"><code>xr = LazyImport('xarray')
Da = getattr(xr, 'DataArray')
</code></pre>
<p>or by modifying the <code>LazyImport</code> class, but every time I reference <code>xr</code> the <code>xarray</code> module gets loaded in memory. I could not manage to create a <code>Da</code> variable without loading <code>xarray</code> in memory.</p>
<p>Referred to the example, what I need is basically a lazy evaluation of the <code>Da</code> variable that evaluates (to the <code>DataArray</code> Class of the <code>xarray</code> module) only when I first reference <code>Da</code> (and therefore runs the module scripts only at this point).<br></p>
<p>Also, I don't want any method to be called on the variable <code>Da</code> to be evaluated (something like <code>Da.load()</code> for example), but I want the variable to be directly evaluated when first referenced.</p>
<p>I looked at some external libraries (such as <a href="https://github.com/scientific-python/lazy_loader" rel="nofollow noreferrer">lazy_loader</a>), but I haven't found one that allows lazy importing of Classes and variables from external modules (modules other than the one you are developing).</p>
<p>Does anyone know a solution for the implementation of lazy imports from a module?</p>
| <python><python-3.x><python-import><python-importlib> | 2023-10-18 20:49:05 | 2 | 478 | atteggiani |
77,319,444 | 19,299,757 | How to specify API Gateway URL in cloudformation script | <p>I am using cloudformation script yaml file to create resources on AWS dev account.</p>
<p>My goal is to create a lambda function with API gateway endpoint. The lambda function is created successfully and also the gateway within it. But when I tried to specify the API end point URL in the script, I am greeted with the following error.</p>
<pre><code>Resource handler returned message: "Invalid API identifier specified 442645024664:test-api-
gateway-ms (Service: ApiGateway, Status Code: 404, Request ID: 47e0c115-02c4-4dfd-b13a-
b149c72807cf)" (RequestToken: a3fdecd7-41fd-cfef-f538-2da3bac33ec2, HandlerErrorCode:
NotFound)
</code></pre>
<p>Below is my stack from template.yaml.</p>
<pre><code>Parameters:
MyApi:
Type: String
Description: "This is a demo API"
AllowedValues: [ "dev-demo-lambda-api-poc" ]
Resources:
MyDemoLambdaApiFunction:
Type: AWS::Serverless::Function
Properties:
Description: >
Currently does not support S3 upload event.
Handler: app.lambda_handler
Runtime: python3.11
CodeUri: .
MemorySize: 1028
Events:
MyDemoAPI:
Type: Api
Properties:
Path: /test
Method: GET
Tracing: Active
Deployment:
Type: AWS::ApiGateway::Deployment
Properties:
Description: Demo Lambda API Gateway deployment
RestApiId: !Ref MyApi
Outputs:
MyApiUrl:
Description: API Gateway URL
Value:
Fn::Sub: https://dev-demo-lambda-api-poc
</code></pre>
<p>Setting aside the error, Iam bit confused here when creating the endpoint for the API. What's the correct way of providing the end point URL in the CFN script?</p>
<p>I believe I should first Deploy this before I can have the end point?</p>
<p>Any help is much appreciated.</p>
| <python><amazon-web-services><aws-lambda><aws-cloudformation><aws-api-gateway> | 2023-10-18 20:35:51 | 1 | 433 | Ram |
77,319,419 | 788,022 | Redeploying Google cloud function looses python dependencies | <p>I have a simple cloud function that contains the main.py file, and a requirements.txt.</p>
<p>This is the content of the main.py:</p>
<pre><code>from superpackage.subpackage import TheSuperClass
def watch_function(event, context):
print(f"Event ID: {context.event_id}")
print(f"Event type: {context.event_type}")
print("Bucket: {}".format(event["bucket"]))
print("File: {}".format(event["name"]))
print("Metageneration: {}".format(event["metageneration"]))
print("Created: {}".format(event["timeCreated"]))
print("Updated: {}".format(event["updated"]))
a = TheSuperClass()
</code></pre>
<p>And this is is the content of the requirements.txt file, it just references a repo that contains a python package. Using this package locally works fine.</p>
<pre><code>-e git+https://github.com/someorg/superpackage.git@v0.2#egg=superpackage
</code></pre>
<p>Using normal deploy the function works fine, and the superpackage.subpackage can be used. This is initial command for deploy.</p>
<pre><code>gcloud functions deploy peter-function \
--runtime python311 \
--trigger-resource the-bucket-we-watch \
--entry-point watch_function \
--trigger-event google.storage.object.finalize
</code></pre>
<p>Uploading file to the-bucket-we-watch triggers the function, and all is fine.</p>
<p>Now we come to the problem. I make a small change in the main.py, and redeploy the function, with the same name, and the same command as above. It fails to deploy with error. It somehow does not include the requirements.txt on redeploy. Deploying the change with new function name works as expected. I have updated the gcloud sdk to the latest version and tried again before asking this question :)</p>
<pre><code>Deploying function (may take a while - up to 2 minutes)...failed.
ERROR: (gcloud.functions.deploy) OperationError: code=3, message=Function failed on loading user code. This is likely due to a bug in the user code. Error message: Traceback (most recent call last):
File "/layers/google.python.pip/pip/bin/functions-framework", line 8, in <module>
sys.exit(_cli())
^^^^^^
File "/layers/google.python.pip/pip/lib/python3.11/site-packages/click/core.py", line 1157, in __call__
return self.main(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/layers/google.python.pip/pip/lib/python3.11/site-packages/click/core.py", line 1078, in main
rv = self.invoke(ctx)
^^^^^^^^^^^^^^^^
File "/layers/google.python.pip/pip/lib/python3.11/site-packages/click/core.py", line 1434, in invoke
return ctx.invoke(self.callback, **ctx.params)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/layers/google.python.pip/pip/lib/python3.11/site-packages/click/core.py", line 783, in invoke
return __callback(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/layers/google.python.pip/pip/lib/python3.11/site-packages/functions_framework/_cli.py", line 37, in _cli
app = create_app(target, source, signature_type)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/layers/google.python.pip/pip/lib/python3.11/site-packages/functions_framework/__init__.py", line 288, in create_app
spec.loader.exec_module(source_module)
File "<frozen importlib._bootstrap_external>", line 940, in exec_module
File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed
File "/workspace/main.py", line 1, in <module>
from superpackage.subpackage import TheSuperClass
ModuleNotFoundError: No module named 'superpackage'. Please visit https://cloud.google.com/functions/docs/troubleshooting for in-depth troubleshooting documentation.
</code></pre>
| <python><google-cloud-functions><google-cloud-sdk> | 2023-10-18 20:30:57 | 1 | 1,255 | pjotr_dolphin |
77,319,397 | 10,886,283 | How to avoid overlapping between boxes and whiskers in boxplot? | <p>When trying not to show outlines while plotting boxes in a boxplot, whiskers may overlap. Is there a way to avoid it? (perhaps changing the order patches are display and send whiskers behind)</p>
<p>Consider this minimal reproducible example:</p>
<pre><code>import matplotlib.pyplot as plt
bp = plt.boxplot([1,2,3], patch_artist=True)
plt.setp(bp['boxes'], color='#5454c6')
for item in ['whiskers', 'caps']:
plt.setp(bp[item], lw=5)
plt.show()
</code></pre>
<p><a href="https://i.sstatic.net/s6i2y.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/s6i2y.png" alt="enter image description here" /></a></p>
| <python><matplotlib><boxplot><overlap> | 2023-10-18 20:26:59 | 1 | 509 | alpelito7 |
77,319,342 | 3,973,175 | Adding color bar to 1D heatmap | <p>I have found an excellent answer from <a href="https://stackoverflow.com/questions/45841786/creating-a-1d-heat-map-from-a-line-graph">Creating a 1D heat map from a line graph</a> but I need a color reference/side bar to show what numerical values each color represents.</p>
<p>There is a comment asking this same question on that page, but that comment was never answered.</p>
<p>I have modified the answer's code to my needs, slightly:</p>
<pre><code># https://stackoverflow.com/questions/45841786/creating-a-1d-heat-map-from-a-line-graph
import matplotlib.pyplot as plt
import numpy as np; np.random.seed(1)
plt.rcParams["figure.figsize"] = 5,2
x = np.arange(0,99)
y = np.random.uniform(-10, 10, 99)
fig, (ax,ax2) = plt.subplots(nrows=2, sharex=True)
extent = [x[0]-(x[1]-x[0])/2., x[-1]+(x[1]-x[0])/2.,0,1]
ax.imshow(y[np.newaxis,:], cmap="gist_rainbow", aspect="auto", extent=extent)
ax.set_yticks([])
ax.set_xlim(extent[0], extent[1])
ax2.plot(x,y)
plt.tight_layout()
plt.show()
</code></pre>
<p>but I don't know how to add the color bar. <code>ax.imshow</code> doesn't have an option for this. <code>ax</code> doesn't have a method that seems to do this.</p>
<p>I'm using matplotlib 3.8.0 and python 3.10.12</p>
<p>How can I add a colorbar to reference the colors in the top plot to a numerical value?</p>
| <python><matplotlib><colorbar><imshow> | 2023-10-18 20:16:54 | 0 | 6,227 | con |
77,319,296 | 2,334,833 | JQ Query - Restructuring a nested JSON | <p>I have a JSON which looks like this:</p>
<pre><code>{
"orders": [
{
"order": [
{
"items": [
{
"name": "Item 1",
"id": [],
"type": [
{
"name": "Color",
"value": [
{
"value": "blue"
}
]
},
{
"name": "model",
"value": [
{
"value": "Stereo"
}
]
}
]
}
]
},
{
"items": [
{
"name": "Item 2",
"id": [],
"type": [
{
"name": "Color",
"value": [
{
"value": "Yellow"
}
]
},
{
"name": "model",
"value": [
{
"value": "NewModel"
}
]
}
]
}
]
}
],
"id": "715874"
},
{
"order": [
{
"items": [
{
"name": "Item 6",
"id": [],
"type": [
{
"name": "Range",
"value": [
{
"value": "10"
}
]
},
{
"name": "Type",
"value": [
{
"value": "AllRegion"
}
]
}
]
}
]
},
{
"items": [
{
"name": "Item 4",
"id": [],
"type": [
{
"name": "Color",
"value": [
{
"value": "Yellow"
}
]
},
{
"name": "model",
"value": [
{
"value": "OldModel"
}
]
}
]
}
]
}
],
"id": "715875"
}
]
}
</code></pre>
<p>I am trying to convert to this format</p>
<pre><code>{
"order": [
{
"items": [
{
"name": "Item 1",
"type": {
"Color": "blue",
"model": "Stereo"
}
},
{
"name": "Item 2",
"type": {
"Color": "Yellow",
"model": "NewModel"
}
}
],
"id": "715874"
},
{
"items": [
{
"name": "Item 6",
"type": {
"Color": "blue",
"Type": "AllRegion"
}
},
{
"name": "Item 4",
"type": {
"Color": "Yellow",
"model": "OldModel"
}
}
],
"id": "715875"
}
]
}
</code></pre>
<p>I tried using queries like this, not able to achieve the format i am looking for..</p>
<p>.orders[] | { orders : [ .order[] |{ .order[].items : .], id : .id } ]}</p>
<p>{ orders : [ .orders[] |{ order : [.items[].name[] ], id : .id } ]}</p>
<p>Please help. thanks in Advance. I am parsing this in Python, if any other Lib can parse this similar to JQ thats also helpfull.</p>
| <python><json><jq> | 2023-10-18 20:09:43 | 3 | 329 | Aksanth |
77,319,282 | 1,445,660 | AWS RDS Postgres error - "no pg_hba.conf entry for host..." | <p>I have two rds postgres databases. One in eu-west-1 (Ireland) and one in us-east-1 (N. Virginia). The one in eu-west-1 suddenly throws this error:</p>
<pre><code>
sqlalchemy.exc.OperationalError: (psycopg2.OperationalError) connection to server at "database-2.<xxx>.eu-west-1.rds.amazonaws.com" (<db ip>), port 5432 failed: FATAL: password authentication failed for user "postgres"
connection to server at "database-2.<xxx>.eu-west-1.rds.amazonaws.com" (<db ip>), port 5432 failed: FATAL: no pg_hba.conf entry for host "<my ip>", user "postgres", database "postgres", no encryption
</code></pre>
<p>This is my code:</p>
<pre><code>engine_sync = create_engine(
"postgresql://<user>:<password>:@database-2.<xxx>.eu-west-1.rds.amazonaws.com/postgres")
Session = sessionmaker(bind=engine)
session = Session()
session.query(Game).filter_by(name="Alley Cat").first()
</code></pre>
<p>I didn't make any change, it just started happening.</p>
| <python><postgresql><amazon-web-services><sqlalchemy><amazon-rds> | 2023-10-18 20:07:51 | 1 | 1,396 | Rony Tesler |
77,319,279 | 740,488 | langchain csvLoader error openai.error.InvalidRequestError: '$.input' is invalid | <p>I am using a simple langchain CSVLoader to load a sample data that I downloaded from <a href="https://www.stats.govt.nz/assets/Uploads/Annual-enterprise-survey/Annual-enterprise-survey-2021-financial-year-provisional/Download-data/annual-enterprise-survey-2021-financial-year-provisional-csv.csv" rel="nofollow noreferrer">here</a></p>
<p>My simple code is here: <a href="https://pastebin.com/RNT1bpTM" rel="nofollow noreferrer">https://pastebin.com/RNT1bpTM</a>
When I start my script, I get the error:</p>
<pre><code>openai.error.InvalidRequestError: '$.input' is invalid. Please check the API reference: https://platform.openai.com/docs/api-reference.
</code></pre>
<p>I also tried loading csv with specifying arguments but it's not any better.
Do you know what am I missing? Csv format seems valid.</p>
<p>Thanks</p>
| <python><openai-api><langchain><py-langchain> | 2023-10-18 20:07:32 | 0 | 1,220 | gospodin |
77,319,228 | 1,391,441 | Conflict between logging module, Latex and matplotlib | <p>The following example:</p>
<pre><code>import logging
import matplotlib.pyplot as plt
import matplotlib.pyplot as mpl
mpl.rc('font', family='serif', serif="DejaVu Serif")
mpl.rc('text', usetex=True)
# Set up logging module
level = logging.INFO
frmt = '%(message)s'
handlers = [
logging.FileHandler("test.log", mode='a'),
logging.StreamHandler()]
logging.basicConfig(level=level, format=frmt, handlers=handlers)
fig = plt.figure()
plt.scatter(.5, .5)
plt.xlabel("test label")
plt.savefig("del.png")
</code></pre>
<p>generates dozens of warnings:</p>
<pre><code>No LaTeX-compatible font found for the serif fontfamily in rcParams. Using default.
</code></pre>
<p>All these go away and the output stays apparently exactly the same if I comment out the <code>serif="DejaVu Serif"</code> argument. I can also leave that argument in and comment out the <code>logging</code> block, and the warnings also go away.</p>
<p>I'm running elementary OS (based on Ubuntu 22.04.3) with apparently all the required fonts and LaTeX packages installed.</p>
<p>Is this a conflict between the <code>logging</code> module and my system's fonts? Is this an issue with <code>logging</code> and LaTeX? What is going on here?</p>
| <python><matplotlib><logging><latex> | 2023-10-18 19:56:04 | 1 | 42,941 | Gabriel |
77,319,204 | 9,811,964 | Shift rows with identical spatial coordinates into a different cluster in pandas dataframe | <p>I have a pandas dataframe <code>df</code>. The columns <code>latitude</code> and <code>longitude</code> represent the spatial coordinates of people.</p>
<pre><code>import pandas as pd
data = {
"latitude": [49.5619579, 49.5619579, 49.56643220000001, 49.5719721, 49.5748542, 49.5757358, 49.5757358, 49.5757358, 49.57586389999999, 49.57182530000001, 49.5719721, 49.572026, 49.5727859, 49.5740071, 49.57500899999999, 49.5751017, 49.5751468, 49.5757358, 49.5659508, 49.56611359999999, 49.5680586, 49.568089, 49.5687609, 49.5699217, 49.572154, 49.5724688, 49.5733994, 49.5678048, 49.5702381, 49.5707702, 49.5710414, 49.5711228, 49.5713705, 49.5723685, 49.5725714, 49.5746149, 49.5631496, 49.5677449, 49.572268, 49.5724273, 49.5726773, 49.5739391, 49.5748542, 49.5758151, 49.57586389999999, 49.5729483, 49.57321150000001, 49.5733375, 49.5745175, 49.574758, 49.5748055, 49.5748103, 49.5751023, 49.57586389999999, 49.56643220000001, 49.5678048, 49.5679685, 49.568089, 49.57182530000001, 49.5719721, 49.5724688, 49.5740071, 49.5757358, 49.5748542, 49.5758151, 49.5758151, 49.5758151, 49.5758151, 49.5758151, 49.5758151, 49.5758151, 49.5758151, 49.5619579, 49.5628938, 49.5630028, 49.5633175, 49.56397639999999, 49.5642962, 49.56643220000001, 49.5679685, 49.570056, 49.5619579, 49.5724688, 49.5745175, 49.5748055, 49.5748055, 49.5748542, 49.5748542, 49.5751023, 49.5751023],
"longitude": [10.9995758, 10.9995758, 10.9999593, 10.9910787, 11.0172739, 10.9920322, 10.9920322, 10.9920322, 11.0244747, 10.9910398, 10.9910787, 10.9907713, 10.9885155, 10.9873742, 10.9861229, 10.9879312, 10.9872357, 10.9920322, 10.9873409, 10.9894231, 10.9882496, 10.9894035, 10.9887881, 10.984756, 10.9911384, 10.9850981, 10.9852771, 10.9954673, 10.9993329, 10.9965937, 10.9949475, 10.9912959, 10.9939141, 10.9916605, 10.9983124, 10.992722, 11.0056254, 10.9954016, 11.017472, 11.0180908, 11.0181911, 11.0175466, 11.0172739, 11.0249866, 11.0244747, 11.0200454, 11.019251, 11.0203055, 11.0183162, 11.0252416, 11.0260046, 11.0228523, 11.0243391, 11.0244747, 10.9999593, 10.9954673, 10.9982288, 10.9894035, 10.9910398, 10.9910787, 10.9850981, 10.9873742, 10.9920322, 11.0172739, 11.0249866, 11.0249866, 11.0249866, 11.0249866, 11.0249866, 11.0249866, 11.0249866, 11.0249866, 10.9995758, 11.000319, 10.9990996, 10.9993819, 11.004145, 11.0039476, 10.9999593, 10.9982288, 10.9993409, 10.9995758, 10.9850981, 11.0183162, 11.0260046, 11.0260046, 11.0172739, 11.0172739, 11.0243391, 11.0243391]
}
df = pd.DataFrame(data)
</code></pre>
<p>In order to avoid clustering people who live at the same spatial coordinates I added additional columns to <code>df</code>:</p>
<pre><code># add a new feature
df['feature_dub'] = df.groupby(['latitude', 'longitude']).cumcount()
df['IsDuplicate'] = df.groupby(['latitude', 'longitude'])['feature_dub'].transform('count') > 1
df['IsDorm'] = df.groupby(['latitude', 'longitude'])['feature_dub'].transform('count') > 6
</code></pre>
<p>In the code above I assume that if <code>6</code> people have the exact spatial coordinates they live in a dorm.</p>
<p>Next step I do:</p>
<pre><code>df['feature_dorm'] = 0
# Filter rows where 'IsDorm' is True
mask = df['IsDorm']
# Use cumcount to count non-zero occurrences of 'IsDorm' within each group of 'latitude' and 'longitude'
df.loc[mask, 'feature_dorm'] = df[mask].groupby(['latitude', 'longitude']).cumcount() + 1
</code></pre>
<p>Now I am ready to apply the cluster algorithm and hope that it does its job</p>
<pre><code>from k_means_constrained import KMeansConstrained
coordinates = np.column_stack((df["latitude"], df["longitude"], df['feature_dub'], df['feature_dorm']))
# Define the number of clusters and the number of points per cluster
n_clusters = len(df) // 9
n_points_per_cluster = 9
# Perform k-means-constrained clustering
kmc = KMeansConstrained(n_clusters=n_clusters, size_min=n_points_per_cluster, size_max=n_points_per_cluster, random_state=42)
kmc.fit(coordinates)
# Get cluster assignments
df["cluster"] = kmc.labels_
# Print the clusters
for cluster_num in range(n_clusters):
cluster_data = df[df["cluster"] == cluster_num][["latitude", "longitude", "feature_dub", "feature_dorm",]]
print(f"Cluster {cluster_num + 1}:")
print(cluster_data)
</code></pre>
<p>After applying a cluster algorithm called <code>KMeansConstrained</code> I get an additional column <code>cluster</code>. Each cluster contains 9 rows of people who live very close to each other:</p>
<pre><code>import pandas as pd
data = {
"latitude": [49.5619579, 49.5619579, 49.56643220000001, 49.5719721, 49.5748542, 49.5757358, 49.5757358, 49.5757358, 49.57586389999999, 49.57182530000001, 49.5719721, 49.572026, 49.5727859, 49.5740071, 49.57500899999999, 49.5751017, 49.5751468, 49.5757358, 49.5659508, 49.56611359999999, 49.5680586, 49.568089, 49.5687609, 49.5699217, 49.572154, 49.5724688, 49.5733994, 49.5678048, 49.5702381, 49.5707702, 49.5710414, 49.5711228, 49.5713705, 49.5723685, 49.5725714, 49.5746149, 49.5631496, 49.5677449, 49.572268, 49.5724273, 49.5726773, 49.5739391, 49.5748542, 49.5758151, 49.57586389999999, 49.5729483, 49.57321150000001, 49.5733375, 49.5745175, 49.574758, 49.5748055, 49.5748103, 49.5751023, 49.57586389999999, 49.56643220000001, 49.5678048, 49.5679685, 49.568089, 49.57182530000001, 49.5719721, 49.5724688, 49.5740071, 49.5757358, 49.5748542, 49.5758151, 49.5758151, 49.5758151, 49.5758151, 49.5758151, 49.5758151, 49.5758151, 49.5758151, 49.5619579, 49.5628938, 49.5630028, 49.5633175, 49.56397639999999, 49.5642962, 49.56643220000001, 49.5679685, 49.570056, 49.5619579, 49.5724688, 49.5745175, 49.5748055, 49.5748055, 49.5748542, 49.5748542, 49.5751023, 49.5751023],
"longitude": [10.9995758, 10.9995758, 10.9999593, 10.9910787, 11.0172739, 10.9920322, 10.9920322, 10.9920322, 11.0244747, 10.9910398, 10.9910787, 10.9907713, 10.9885155, 10.9873742, 10.9861229, 10.9879312, 10.9872357, 10.9920322, 10.9873409, 10.9894231, 10.9882496, 10.9894035, 10.9887881, 10.984756, 10.9911384, 10.9850981, 10.9852771, 10.9954673, 10.9993329, 10.9965937, 10.9949475, 10.9912959, 10.9939141, 10.9916605, 10.9983124, 10.992722, 11.0056254, 10.9954016, 11.017472, 11.0180908, 11.0181911, 11.0175466, 11.0172739, 11.0249866, 11.0244747, 11.0200454, 11.019251, 11.0203055, 11.0183162, 11.0252416, 11.0260046, 11.0228523, 11.0243391, 11.0244747, 10.9999593, 10.9954673, 10.9982288, 10.9894035, 10.9910398, 10.9910787, 10.9850981, 10.9873742, 10.9920322, 11.0172739, 11.0249866, 11.0249866, 11.0249866, 11.0249866, 11.0249866, 11.0249866, 11.0249866, 11.0249866, 10.9995758, 11.000319, 10.9990996, 10.9993819, 11.004145, 11.0039476, 10.9999593, 10.9982288, 10.9993409, 10.9995758, 10.9850981, 11.0183162, 11.0260046, 11.0260046, 11.0172739, 11.0172739, 11.0243391, 11.0243391],
"cluster": [0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 2, 2, 2, 2, 2, 2, 2, 2, 2, 3, 3, 3, 3, 3, 3, 3, 3, 3, 4, 4, 4, 4, 4, 4, 4, 4, 4, 5, 5, 5, 5, 5, 5, 5, 5, 5, 6, 6, 6, 6, 6, 6, 6, 6, 6, 7, 7, 7, 7, 7, 7, 7, 7, 7, 8, 8, 8, 8, 8, 8, 8, 8, 8, 9, 9, 9, 9, 9, 9, 9, 9, 9]
}
df = pd.DataFrame(data)
</code></pre>
<p>Since the dataset contains many dorms or large buildings, it can happen that multiple people have the exact same latitude and longitude and therefore end up in the same cluster. Therefore we need some post-processing: Each time there are <strong>more than 2 people</strong> with same spatial coordinates I want to shift one (or more) of them into an adjacent cluster. Keep in mind, I don not want to shift them somewhere, but into a cluster which is next (or relatively close) to its original cluster.</p>
<p>First of all I try to find the critical parts:</p>
<pre><code>duplicate_rows = df[df.duplicated(subset=["cluster", "latitude", "longitude"], keep=False)]
duplicate_indices = duplicate_rows.index.tolist()
# Group by specified columns and count occurrences
count_occurrences = df.iloc[duplicate_indices].groupby(['latitude', 'longitude', 'cluster']).size().reset_index(name='count')
print("Number of rows with identical values in specified columns:")
print(count_occurrences)
</code></pre>
<p>I get this:</p>
<pre><code>Number of rows with identical values in specified columns:
latitude longitude cluster count
0 49.5619579000000030 10.9995758000000006 0 2
1 49.5748054999999965 11.0260046000000003 9 2
2 49.5748541999999972 11.0172738999999993 9 2
3 49.5751022999999975 11.0243391000000006 9 2
4 49.5757357999999968 10.9920322000000006 0 3
5 49.5758150999999998 11.0249866000000001 7 8
</code></pre>
<p>Indices 0, 1, 2 and 3 are fine. Just each 2 people who share same spatial coordinates. I don't want to shift people from there, because value <code>2</code> (and less) in <code>count</code> works fine for me! In general, if possible, I want each person to <strong>stay in its original cluster</strong> and not move the person at all, because it is the optimal cluster for this person. However, both index 4 and 5 are problems. The goal is to shift some people of index 4 and 5 into adjacent clusters to minimize the number of value in <code>count</code>.</p>
<p><strong>What I have so far:</strong></p>
<p>Based on a <a href="https://stackoverflow.com/questions/77160797/automatically-shift-rows-with-same-spatial-coordinates-into-a-different-cluster/77163618?noredirect=1#comment136281941_77163618">previous question on stack overflow</a> I have the following code which shifts <strong>all</strong> people with same spatial coordinates:</p>
<pre><code>CLUSTER_SIZE = 9
df = df.drop(columns=["cluster"])
df_copy = df.copy(deep=True)
dfs = []
while True:
for cluster_number in range(1 + df_copy.shape[0] // CLUSTER_SIZE):
# Select a sample without duplicated coordinates
while True:
tmp = df_copy.sample(n=CLUSTER_SIZE, replace=False)
if (
tmp.drop_duplicates(subset=["latitude", "longitude"]).shape[0]
== CLUSTER_SIZE
):
break
# Add new cluster number
tmp["cluster"] = cluster_number
dfs.append(tmp)
# Remove sample from original dataframe
df_copy = df_copy.drop(labels=tmp.index)
if df_copy.shape[0] <= CLUSTER_SIZE:
df_copy["cluster"] = cluster_number + 1
dfs.append(df_copy)
break
# Check that no cluster contains duplicates
for item in dfs:
if item.duplicated(subset=["latitude", "longitude"]).sum():
# Start again
df_copy = df.copy(deep=True)
dfs = []
break
else: # if no duplicates found in any cluster, exit loop
break
new_df = pd.concat(dfs).sort_values(
by=["cluster", "latitude", "longitude"],
ignore_index=True,
)
</code></pre>
| <python><pandas><dataframe><cluster-analysis> | 2023-10-18 19:52:49 | 0 | 1,519 | PParker |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.