QuestionId | UserId | QuestionTitle | QuestionBody | Tags | CreationDate | AnswerCount | UserExpertiseLevel | UserDisplayName |
|---|---|---|---|---|---|---|---|---|
77,017,833 | 102,221 | Error installing smbus module with Python 3.10 | <p>I have Ubuntu running on a Raspberry Pi with Python 3.10 installed on it.
When I attempt to install the <code>smbus</code> library I get the following error:</p>
<pre><code>pi@raspberrypi:~$ python3 -m pip install smbus
Defaulting to user installation because normal site-packages is not writeable
Collecting smbus
Using cached smbus-1.1.post2.tar.gz (104 kB)
Preparing metadata (setup.py) ... done
Building wheels for collected packages: smbus
Building wheel for smbus (setup.py) ... error
error: subprocess-exited-with-error
× python setup.py bdist_wheel did not run successfully.
│ exit code: 1
╰─> [5 lines of output]
running bdist_wheel
running build
running build_ext
building 'i2c' library
error: [Errno 2] No such file or directory: 'make'
[end of output]
note: This error originates from a subprocess, and is likely not a problem with pip.
ERROR: Failed building wheel for smbus
Running setup.py clean for smbus
Failed to build smbus
</code></pre>
<p>I suppose I should install some package that will help build the i2c library, but which package should it be?</p>
<p>Attempting to run <code>sudo python3 -m pip install smbus</code> doesn't help much either:</p>
<pre><code>pi@raspberrypi:~$ sudo python3 -m pip install smbus
/usr/bin/python3: No module named pip
</code></pre>
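<p>Editor's note: the <code>No such file or directory: 'make'</code> line suggests the C build tools are missing from the system. A small sketch for checking which tools are present (the apt package names in the comment are Debian/Ubuntu guesses, not verified against this exact image):</p>

```python
import shutil

# The smbus wheel build shells out to `make`; if it isn't on PATH the
# build fails exactly as shown above. Check which build tools exist:
for tool in ("make", "gcc"):
    print(tool, "ok" if shutil.which(tool) else "MISSING")

# If anything is missing, installing the usual build packages should help
# (assumed Debian/Ubuntu package names):
#   sudo apt install build-essential python3-dev i2c-tools
```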
| <python><python-3.x><pip><smbus> | 2023-08-31 16:48:26 | 1 | 1,113 | BaruchLi |
77,017,804 | 5,583,772 | Polars: "ValueError: could not convert value 'Unknown' as a Literal" | <p>I have a line of code in polars that worked prior to my most recent update of the polars package to '0.19.0'. This example ran before:</p>
<pre><code>import polars as pl

df = pl.DataFrame(
    {
        "a": [5, 6, 7, 8, 9],
        "b": [5, 6, 7, 8, 9],
        "c": [5, 6, 7, 8, None],
    }
)
cols_1 = ["a", "b"]
cols_2 = ["c"]
df = df.filter(pl.all(pl.col(cols_1 + cols_2).is_not_null()))
</code></pre>
<p>But now raises the error:</p>
<pre><code>ValueError: could not convert value 'Unknown' as a Literal
</code></pre>
| <python><python-polars> | 2023-08-31 16:44:20 | 1 | 556 | Paul Fleming |
77,017,728 | 371,577 | How is Python multithreading efficient with the non-async requests library? | <p>I would just like to understand this: I'm using a Python script that sends 3 HTTP GET requests to a URL that I made wait for 5 seconds.</p>
<p>I am using the requests library and the threading Python module like so:</p>
<pre class="lang-py prettyprint-override"><code>#!/usr/bin/env python3
import time
import requests
from threading import Thread
url = 'https://mydebug-url.example.com/sleep/5'
payload = {}
def myfunc(i):
    response = requests.get(url, json=payload)
    print(f'Got back from {i} {response.status_code}. Response was {response.json()}')

if __name__ == '__main__':
    start = time.time()
    threads = []
    for i in [1, 2, 3]:
        thread = Thread(target=myfunc, args=[i])
        threads.append(thread)

    for thread in threads:
        thread.start()

    # Wait for all threads to finish
    for thread in threads:
        thread.join()

    end = time.time()
    print(f'took {end-start} seconds')
</code></pre>
<p>And the output is:</p>
<pre><code>Got back from 2 200. Response was {'message': 'Slept 5 seconds'}
Got back from 1 200. Response was {'message': 'Slept 5 seconds'}
Got back from 3 200. Response was {'message': 'Slept 5 seconds'}
took 5.35151219367981 seconds
</code></pre>
<p>Now my question is, how is that efficient?
This code, using threading, takes only a little more than 5 seconds, so it's almost as if the requests run in parallel.</p>
<p>What puzzles me is that it's efficient, although:</p>
<ul>
<li>Python has the GIL, so I'd expect it to execute them in a single thread</li>
<li>the requests library is blocking (non-async), so one thread rapidly jumping between the 3 tasks while waiting wouldn't have worked either.</li>
</ul>
<p>So, what am I missing? Where can I read more about it?</p>
<p>My interest is that I could keep using threading and still get the performance benefits, instead of changing to an async library for requests, in case that proves more difficult in the current codebase.</p>
<p>Thanks!</p>
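<p>Editor's note: the behaviour can be reproduced without any network at all. Blocking C-level waits such as <code>time.sleep</code> (and the socket reads inside <code>requests.get</code>) release the GIL, so the waits overlap even though only one thread runs Python bytecode at a time. A self-contained sketch:</p>

```python
import time
from threading import Thread

def blocking_task():
    # time.sleep releases the GIL while waiting, just like the socket
    # reads inside requests.get do
    time.sleep(1)

threads = [Thread(target=blocking_task) for _ in range(3)]
start = time.time()
for t in threads:
    t.start()
for t in threads:
    t.join()
print(f"took {time.time() - start:.2f} seconds")  # ~1s total, not ~3s
```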
| <python><multithreading> | 2023-08-31 16:33:42 | 0 | 797 | StefanH |
77,017,686 | 19,130,803 | Change text color of a column of dbc.Table | <p>I am developing a Dash application. I have a dataframe coming from the backend, and in the frontend I display that dataframe using the <code>dbc.Table</code> component.</p>
<pre><code>import dash_bootstrap_components as dbc
import pandas as pd
df = pd.DataFrame(
    {
        "First Name": ["Arthur", "Ford", "Zaphod", "Trillian"],
        "Last Name": ["Dent", "Prefect", "Beeblebrox", "Astra"],
        "Result": ["Failed", "Passed", "Passed", "Failed"],
    }
)
table = dbc.Table.from_dataframe(df, striped=True, bordered=True, hover=True)
</code></pre>
<p>Currently, the table is displayed with the default text color. I am trying to display the <code>Result</code> column with</p>
<pre><code>Failed in Red color
Passed in Green color
</code></pre>
<p>Is this possible?</p>
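<p>Editor's note: one possible approach (a sketch, not tested against a live Dash app) is to skip <code>dbc.Table.from_dataframe</code> and build the rows yourself, so each <code>dash.html.Td</code> can carry its own <code>style</code>. The value-to-color mapping itself is plain Python:</p>

```python
# Hypothetical helper: map a Result value to the inline style you would
# pass to dash.html.Td(value, style=...) when building the table by hand
COLOR_MAP = {"Failed": "red", "Passed": "green"}

def result_cell_style(value):
    # Fall back to the inherited color for any unexpected value
    return {"color": COLOR_MAP.get(value, "inherit")}

results = ["Failed", "Passed", "Passed", "Failed"]
print([result_cell_style(v) for v in results])
```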
| <python><pandas><plotly-dash> | 2023-08-31 16:25:21 | 1 | 962 | winter |
77,017,603 | 9,536,103 | Cumulative distinct count in pandas | <p>I have a dataframe with a column called <code>group</code> and another column called <code>country</code>. I want to create a new column that outputs a cumulative count of distinct values in the <code>country</code> column while grouping on the <code>group</code> column.</p>
<p>Original dataframe:</p>
<pre><code>group country
A usa
A germany
A germany
A france
A usa
B germany
B germany
B france
B germany
B france
</code></pre>
<p>New dataframe:</p>
<pre><code>group country num_distinct_values
A usa 1
A germany 2
A germany 2
A france 3
A usa 3
B germany 1
B germany 1
B france 2
B germany 2
B france 2
</code></pre>
<p>I have tried using the duplicated method along with cumcount and cumsum, but I can't seem to use these methods to get what I want.</p>
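<p>Editor's note: one approach that appears to produce the desired output (a sketch; verify on your real data): mark the first occurrence of each (group, country) pair with <code>duplicated</code>, then take a grouped cumulative sum of those flags:</p>

```python
import pandas as pd

df = pd.DataFrame({
    "group": list("AAAAABBBBB"),
    "country": ["usa", "germany", "germany", "france", "usa",
                "germany", "germany", "france", "germany", "france"],
})
# True the first time a country appears within its group, False after that;
# the grouped cumsum of those flags is the running distinct count
first_seen = ~df.duplicated(["group", "country"])
df["num_distinct_values"] = first_seen.groupby(df["group"]).cumsum()
print(df)
```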
| <python><pandas> | 2023-08-31 16:10:57 | 2 | 1,151 | Daniel Wyatt |
77,017,536 | 1,958,391 | Swin-Transformer-TF not working with generator | <p>Swin-Transformer-TF from <a href="https://github.com/rishigami/Swin-Transformer-TF/tree/main" rel="nofollow noreferrer">https://github.com/rishigami/Swin-Transformer-TF/tree/main</a> fails when I use a generator for training. I tried many generators, and many generator output options (lists, Tensors, etc.); everything crashes.</p>
<pre><code>import tensorflow as tf
import numpy as np
!git clone https://github.com/rishigami/Swin-Transformer-TF.git
import sys
sys.path.append('./Swin-Transformer-TF')
from swintransformer import SwinTransformer
</code></pre>
<p>create the model:</p>
<pre><code>img_adjust_layer = tf.keras.layers.Lambda(
    lambda data: tf.keras.applications.imagenet_utils.preprocess_input(
        tf.cast(data, tf.float32), mode="torch"),
    input_shape=[224, 224, 3])

pretrained_model = SwinTransformer('swin_tiny_224', num_classes=2,
                                   include_top=True, pretrained=0,
                                   use_tpu=0)

model = tf.keras.Sequential([
    img_adjust_layer,
    pretrained_model,
    tf.keras.layers.Dense(2, activation='softmax')
])

model.compile(
    optimizer=tf.keras.optimizers.Adam(learning_rate=1e-5, epsilon=1e-8),
    loss='categorical_crossentropy',
    metrics=['accuracy']
)
model.summary()
</code></pre>
<p>Training the model without a generator works:</p>
<pre><code>x_train = np.zeros((20,224,224,3))
y_train = np.zeros((20,2))
model.fit(x_train, y_train,
steps_per_epoch=2, epochs=1)
</code></pre>
<p>Adding a generator (and I tried a few) crashes. The input shape is read as <code>(None, None, None, 3)</code>:</p>
<pre><code>def __data_generation():
    for i in range(3000):
        yield np.zeros((20, 224, 224, 3)), np.zeros((20, 2))

gen = __data_generation()
x, y = next(gen)
print(x.shape, y.shape)
model.fit(gen, epochs=1, steps_per_epoch=4)
AssertionError Traceback (most recent call last)
<ipython-input-7-fb53f3a3d1ae> in <cell line: 7>()
5 x,y=next(gen)
6 print(x.shape,y.shape)
----> 7 model.fit(gen, epochs=1, steps_per_epoch=4)
4 frames
/usr/local/lib/python3.10/dist-packages/keras/utils/traceback_utils.py in error_handler(*args, **kwargs)
68 # To get the full stack trace, call:
69 # `tf.debugging.disable_traceback_filtering()`
---> 70 raise e.with_traceback(filtered_tb) from None
71 finally:
72 del filtered_tb
/usr/local/lib/python3.10/dist-packages/keras/engine/training.py in tf__train_function(iterator)
13 try:
14 do_return = True
---> 15 retval_ = ag__.converted_call(ag__.ld(step_function), (ag__.ld(self), ag__.ld(iterator)), None, fscope)
16 except:
17 do_return = False
/content/./Swin-Transformer-TF/swintransformer/model.py in tf__call(self, x)
8 do_return = False
9 retval_ = ag__.UndefinedReturnValue()
---> 10 x = ag__.converted_call(ag__.ld(self).forward_features, (ag__.ld(x),), None, fscope)
11
12 def get_state():
/content/./Swin-Transformer-TF/swintransformer/model.py in tf__forward_features(self, x)
8 do_return = False
9 retval_ = ag__.UndefinedReturnValue()
---> 10 x = ag__.converted_call(ag__.ld(self).patch_embed, (ag__.ld(x),), None, fscope)
11
12 def get_state():
/content/./Swin-Transformer-TF/swintransformer/model.py in tf__call(self, x)
9 retval_ = ag__.UndefinedReturnValue()
10 (B, H, W, C) = ag__.converted_call(ag__.converted_call(ag__.ld(x).get_shape, (), None, fscope).as_list, (), None, fscope)
---> 11 assert ag__.and_(lambda : ag__.ld(H) == ag__.ld(self).img_size[0], lambda : ag__.ld(W) == ag__.ld(self).img_size[1]), f"Input image size ({ag__.ld(H)}*{ag__.ld(W)}) doesn't match model ({ag__.ld(self).img_size[0]}*{ag__.ld(self).img_size[1]})."
12 x = ag__.converted_call(ag__.ld(self).proj, (ag__.ld(x),), None, fscope)
13 x = ag__.converted_call(ag__.ld(tf).reshape, (ag__.ld(x),), dict(shape=[-1, ag__.ld(H) // ag__.ld(self).patch_size[0] * (ag__.ld(W) // ag__.ld(self).patch_size[0]), ag__.ld(self).embed_dim]), fscope)
AssertionError: in user code:
File "/usr/local/lib/python3.10/dist-packages/keras/engine/training.py", line 1284, in train_function *
return step_function(self, iterator)
File "/usr/local/lib/python3.10/dist-packages/keras/engine/training.py", line 1268, in step_function **
outputs = model.distribute_strategy.run(run_step, args=(data,))
File "/usr/local/lib/python3.10/dist-packages/keras/engine/training.py", line 1249, in run_step **
outputs = model.train_step(data)
File "/usr/local/lib/python3.10/dist-packages/keras/engine/training.py", line 1050, in train_step
y_pred = self(x, training=True)
File "/usr/local/lib/python3.10/dist-packages/keras/utils/traceback_utils.py", line 70, in error_handler
raise e.with_traceback(filtered_tb) from None
File "/tmp/__autograph_generated_filegq78ow1y.py", line 10, in tf__call
x = ag__.converted_call(ag__.ld(self).forward_features, (ag__.ld(x),), None, fscope)
File "/tmp/__autograph_generated_filehkbim9xc.py", line 10, in tf__forward_features
x = ag__.converted_call(ag__.ld(self).patch_embed, (ag__.ld(x),), None, fscope)
File "/tmp/__autograph_generated_file3nqntnhc.py", line 11, in tf__call
assert ag__.and_(lambda : ag__.ld(H) == ag__.ld(self).img_size[0], lambda : ag__.ld(W) == ag__.ld(self).img_size[1]), f"Input image size ({ag__.ld(H)}*{ag__.ld(W)}) doesn't match model ({ag__.ld(self).img_size[0]}*{ag__.ld(self).img_size[1]})."
AssertionError: Exception encountered when calling layer 'swin_tiny_224' (type SwinTransformerModel).
in user code:
File "/content/./Swin-Transformer-TF/swintransformer/model.py", line 422, in call *
x = self.forward_features(x)
File "/content/./Swin-Transformer-TF/swintransformer/model.py", line 411, in forward_features *
x = self.patch_embed(x)
File "/usr/local/lib/python3.10/dist-packages/keras/utils/traceback_utils.py", line 70, in error_handler **
raise e.with_traceback(filtered_tb) from None
File "/tmp/__autograph_generated_file3nqntnhc.py", line 11, in tf__call
assert ag__.and_(lambda : ag__.ld(H) == ag__.ld(self).img_size[0], lambda : ag__.ld(W) == ag__.ld(self).img_size[1]), f"Input image size ({ag__.ld(H)}*{ag__.ld(W)}) doesn't match model ({ag__.ld(self).img_size[0]}*{ag__.ld(self).img_size[1]})."
AssertionError: Exception encountered when calling layer 'patch_embed' (type PatchEmbed).
in user code:
File "/content/./Swin-Transformer-TF/swintransformer/model.py", line 336, in call *
assert H == self.img_size[0] and W == self.img_size[1], f"Input image size ({H}*{W}) doesn't match model ({self.img_size[0]}*{self.img_size[1]})."
AssertionError: Input image size (None*None) doesn't match model (224*224).
Call arguments received by layer 'patch_embed' (type PatchEmbed):
• x=tf.Tensor(shape=(None, None, None, 3), dtype=float32)
Call arguments received by layer 'swin_tiny_224' (type SwinTransformerModel):
• x=tf.Tensor(shape=(None, None, None, 3), dtype=float32)
</code></pre>
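<p>Editor's note: the <code>(None, None, None, 3)</code> shape suggests Keras cannot infer static spatial dimensions from a plain Python generator, and this model asserts on them. Wrapping the generator in <code>tf.data.Dataset.from_generator</code> with an explicit <code>output_signature</code> (a sketch, not tested against this particular repo) pins the 224x224 shape:</p>

```python
import numpy as np
import tensorflow as tf

def data_generation():
    for _ in range(3000):
        yield (np.zeros((20, 224, 224, 3), np.float32),
               np.zeros((20, 2), np.float32))

# The TensorSpec shapes tell Keras the spatial dims are exactly 224x224,
# so the model's size assertion can pass
ds = tf.data.Dataset.from_generator(
    data_generation,
    output_signature=(
        tf.TensorSpec(shape=(None, 224, 224, 3), dtype=tf.float32),
        tf.TensorSpec(shape=(None, 2), dtype=tf.float32),
    ),
)
x, y = next(iter(ds))
print(x.shape, y.shape)
# model.fit(ds, epochs=1, steps_per_epoch=4) would now see 224x224 inputs
```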
| <python><tensorflow><transformer-model><swin-transformer> | 2023-08-31 15:59:31 | 1 | 2,144 | Naomi Fridman |
77,017,429 | 6,385,519 | Folium: How to show a map from a saved html file in ipynb | <p>I have a folium map saved locally as a html file, now I want to read that <code>map.html</code> file to show a map in another ipynb cell/notebook.</p>
<p>Minimum code to generate the local file:</p>
<pre><code>import folium
m = folium.Map(location=[45.5236, -122.6750])
m # to view the map if needed
m.save('map.html')
</code></pre>
<p>Essentially, I'm trying to find any way to save a map locally (it doesn't need to be HTML) and read it later in some other ipynb notebook.</p>
<p>I am able to find ways of sharing it on the <a href="https://gis.stackexchange.com/questions/389371/how-can-i-share-folium-map-on-the-web">web</a> but my use case is limited to ipynb.</p>
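<p>Editor's note: one option that appears to work for locally saved maps (assuming a standard Jupyter setup where the notebook server can serve files next to the notebook) is embedding the saved file with <code>IPython.display.IFrame</code>:</p>

```python
from IPython.display import IFrame

# Assume map.html was produced earlier with m.save("map.html") and sits
# in the same directory as the notebook, so the server can serve it
frame = IFrame(src="map.html", width=700, height=450)
frame  # the last expression in a cell renders the iframe
```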
| <python><folium> | 2023-08-31 15:46:23 | 1 | 2,605 | Suraj Shourie |
77,017,408 | 5,108,149 | Raspberry Pi transmits corrupted data | <p>I have a Raspberry Pi Zero trying to read GPS data.</p>
<p>With this command:</p>
<pre><code>sudo cat /dev/ttyAMA0
</code></pre>
<p>I get the following correct data:</p>
<pre><code>$GPGSV,1,1,0*49
$GPRMC,,V,,,,,,,,,,N*53
$GPVTG,,,,,,,,,N*30
$GPGGA,,,,,,0,00,99.99,,,,,,*48
$GPGLL,,,,,,V,N*64
$GPGSA,A,1,,,,,,,,,,,,,99.99,99.99,99.99*30
</code></pre>
<p>But in the Python script:</p>
<pre><code>import serial
import time
import string

while True:
    port = "/dev/ttyAMA0"
    ser = serial.Serial(port, baudrate=9600, timeout=0.5)
    newdata = ser.readline().strip().decode('UTF-8')
    print(newdata)
</code></pre>
<p>the data is corrupted:</p>
<pre><code>GPGSA,A,1,,,,,,,,,,,,,99.99,99.99,99.99*30
SV,1,1,0*49
RMC,,V,,,,,,,,,,N*53
PVTG,,,,,,,,,N*30
,,,,,,0,00,99.99,,,,,,*48
L,,,,,,V,N*64
</code></pre>
<p>I've spent two days trying to figure out what went wrong with no success. Any tip is much appreciated.</p>
<p>(RaspberryPi OS Lite didn't have <code>serial</code> preinstalled, so I installed it like this:)</p>
<pre><code>sudo apt-get install python-serial
</code></pre>
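<p>Editor's note: a common cause of this kind of corruption is re-opening the serial port on every loop iteration, which discards whatever was sitting in the UART buffer mid-sentence. A sketch of the fix; the parsing part works on any file-like object, so it can be tried without hardware:</p>

```python
def read_nmea(stream):
    # Works on any iterable of byte lines (a pyserial Serial object,
    # a file, a BytesIO, ...), so the parsing can be tested off-device
    for raw in stream:
        yield raw.strip().decode("utf-8", errors="replace")

# With pyserial, open the port ONCE, outside the loop:
# import serial
# with serial.Serial("/dev/ttyAMA0", baudrate=9600, timeout=0.5) as ser:
#     for sentence in read_nmea(ser):
#         print(sentence)

import io
demo = io.BytesIO(b"$GPGSV,1,1,0*49\r\n$GPRMC,,V,,,,,,,,,,N*53\r\n")
print(list(read_nmea(demo)))
```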
| <python><raspberry-pi><raspberry-pi-zero> | 2023-08-31 15:43:51 | 1 | 1,653 | Blendester |
77,017,255 | 104,324 | Attempt to speedup python unzip using isal_zlib fails | <p>I have a long-running series of scripts to unpack and analyse a very large dataset. Using pyinstrument I have found that the ZipFile library is taking up a large amount of time. The files I'm unzipping are actually GLF files, taken from a Tritech Gemini Sonar.</p>
<p>The GLF file itself is actually a zip file with no compression. Inside are a further three files, two of which are important - the cfg and the dat.</p>
<p>The CFG file I can read using just python's open() command, by following the zip file format found on <a href="https://en.wikipedia.org/wiki/ZIP_(file_format)" rel="nofollow noreferrer">Wikipedia</a>. This is good as it's pretty quick to find the file times I need. I do need to cut off 5 bytes at either end of the data block to make it work. However, the dat file doesn't seem to work in the same way. The dat file contains the images I need to look through and it seems that within this file, each image is compressed using the deflate algorithm.</p>
<p>Now, looking at the zip file local headers, all the files have a bigger compressed size than the original size: the act of adding them to a zipfile has actually increased the size. When I look at the header for the dat file it claims to use DEFLATE as the compression type (with 0 as the compression level). This means that running deflate on the dat and cfg files has increased their size (so it appears).</p>
<p>When I read the dat file I get more data than I need, and yet the beginning and end of this dat file are actually the same as those of an unzipped dat file! Very strange!</p>
<p>I've written a short test to show what is happening. This test fails with:</p>
<p>igzip_lib.IsalError: Error -5 Gzip/zlib wrapper specifies unsupported compress method</p>
<pre><code>import os
import struct
from zipfile import ZipFile

from isal import isal_zlib
from pytritech.glf import GLF


def read_raw(glf_path):
    """ Using just normal file reads, find the dat and cfg files."""
    with open(glf_path, 'rb') as f:
        glf_file_size = os.path.getsize(glf_path)
        ds = 0

        while ds < glf_file_size:
            file_sig = int(struct.unpack("<I", f.read(4))[0])
            assert file_sig == 0x04034b50
            f.seek(14, 1)
            compressed_file_size = int(struct.unpack("<I", f.read(4))[0])
            uncompressed_file_size = int(struct.unpack("<I", f.read(4))[0])
            filename_len = int(struct.unpack("<H", f.read(2))[0])
            extra_field_length = int(struct.unpack("<H", f.read(2))[0])
            filename = str(f.read(filename_len))
            f.seek(extra_field_length, 1)
            f.seek(5, 1)  # TODO - this really shouldn't be here but it works!

            if ".cfg" in filename:
                # we've found the file we need, so read it and return the cfg.
                # We know that this file at any rate is uncompressed.
                # This CFG is correct - it's a bunch of XML therefore the
                # compression is minimal. This doesn't seem to be the case for
                # our dat file however!
                cfg = f.read(uncompressed_file_size).decode("utf-8")
                f.seek(5, 1)  # TODO - this really shouldn't be here but it works!
            elif ".dat" in filename:
                raw_dat = f.read(compressed_file_size)
                return raw_dat
            elif ".xml" in filename:
                xml_dat = f.read(compressed_file_size)
            else:
                f.seek(uncompressed_file_size, 1)

            ds += 30 + filename_len + extra_field_length + uncompressed_file_size
def test_raw():
    """ Try and find the difference between a raw read and a ZipFile read."""
    glf_path = "./pytritech_testdata/log_2022-07-22-025519.glf"
    assert os.path.exists(glf_path)

    raw_dat = read_raw(glf_path)
    # Remove the last 5 bytes from raw_dat and the end bit matches!
    raw_dat = raw_dat[:-5]
    dat = None

    zobj = ZipFile(glf_path, "r")
    print("Zobj", zobj, dir(zobj), zobj.compression, zobj.compresslevel)

    for zname in zobj.namelist():
        if "dat" in zname:
            with zobj.open(zname) as f:
                print("fileobj", dir(f), type(f), f._compress_type, f._decompressor, f._orig_file_size, f._orig_compress_size)
                dat = f.read()

    with GLF(glf_path) as glf:
        print("Num images", len(glf.images))

    assert(raw_dat[:20] == dat[:20])
    assert(raw_dat[-20:] == dat[-20:])

    decomped = isal_zlib.decompress(raw_dat)
    assert(raw_dat == dat)
</code></pre>
<p>I thought the line <code>decomped = isal_zlib.decompress(raw_dat)</code> might solve things. It seems to me that a level 0 deflate compression has been applied to that file, which has made it larger (as it was already quite compressed, being full of compressed images), but for some reason isal_zlib doesn't want to inflate it.</p>
<p>Is there another speedy way to inflate this binary blob, please? It's a big bottleneck (my script takes days to weeks to run).</p>
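<p>Editor's note: zip members store <em>raw</em> DEFLATE data with no zlib header, which is what error -5 ("unsupported compress method") typically indicates. Passing <code>wbits=-15</code> tells the decoder the stream is headerless. The stdlib <code>zlib</code> shows the idea (isal_zlib mirrors this API, though that mapping is an assumption here):</p>

```python
import zlib

original = b"example sonar frame " * 64

# Level-0 DEFLATE (stored blocks) is what the GLF writer appears to use;
# note it makes the payload slightly LARGER, matching the headers seen above
comp = zlib.compressobj(level=0, wbits=-15)   # wbits=-15 => raw stream, no header
raw = comp.compress(original) + comp.flush()
print(len(original), "->", len(raw))

# zlib.decompress with default wbits expects a zlib header and fails;
# wbits=-15 decodes the headerless zip-member payload
restored = zlib.decompress(raw, -15)
assert restored == original
```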
| <python><performance><zip><compression><zlib> | 2023-08-31 15:25:38 | 1 | 692 | Oni |
77,017,251 | 11,659,920 | Reference a dict from another file in Lambda | <p>I initially had an index.py file with contents like</p>
<pre><code>import boto3
import json
import os
BUCKETNAME = os.environ['bucketname']
PUBLICKEYSBUCKETNAME = os.environ['publickkeysbucketname']
TRANSFERACCESSADMINROLE = os.environ['transferaccessadminrole']
TRANSFERACCESSREADROLE = os.environ['transferaccessreadrole']
TRANSFERACCESSHUBADMINROLE = os.environ['transferaccesshubadminrole']
PRIMARY_REGION = os.environ['AWS_REGION']
SSM_PREFIX = os.environ['transferssmparameter']
def lambda_handler(event, context):
    auth_response = {}
    data_ret = {}
    ...
    ...
    data_pol = {
        "Version": "2012-10-17",
        "Statement": [
            {"Sid": "AllowListAccessToBucket",
             "Action": ["s3:ListBucket"],
             "Effect": "Allow",
             "Resource": ["arn:aws:s3:::" + BUCKETNAME, "arn:aws:s3:::" + PUBLICKEYSBUCKETNAME],
             "Condition": {"StringLike": {"s3:prefix": [event["username"] + "/*",
                                                        event["username"]]}}},
            {"Sid": "GetAccessForPublicKeys",
             "Effect": "Allow",
             "Action": ["s3:GetObject", "s3:GetObjectAcl", "s3:GetObjectVersion", "s3:GetBucketLocation"],
             "Resource": ["arn:aws:s3:::" + PUBLICKEYSBUCKETNAME + "/" + event["username"],
                          "arn:aws:s3:::" + PUBLICKEYSBUCKETNAME + "/" + event["username"] + "/*"]},
            {"Sid": "PublicKeysPubFilePutAccess",
             "Effect": "Allow",
             "Action": ["s3:PutObject", "s3:PutObjectAcl"],
             "Resource": ["arn:aws:s3:::" + PUBLICKEYSBUCKETNAME + "/" + event["username"] + "/*.pub"]},
            {"Sid": "TransferDataBucketAccess",
             "Effect": "Allow",
             "Action": ["s3:PutObject", "s3:PutObjectAcl", "s3:GetObject", "s3:GetObjectAcl", "s3:GetObjectVersion", "s3:GetBucketLocation", "s3:DeleteObject", "s3:DeleteObjectVersion"],
             "Resource": ["arn:aws:s3:::" + BUCKETNAME + "/" + event["username"],
                          "arn:aws:s3:::" + BUCKETNAME + "/" + event["username"] + "/*"]},
        ],
    }

    data_ret["Policy"] = json.dumps(data_pol)  ##### <---- Policy used here
    data_ret["HomeDirectoryType"] = "PATH"
    #data_ret["HomeDirectoryDetails"] = json.dumps(directorymapping)
    data_ret["HomeDirectory"] = "/" + BUCKETNAME + "/" + event["username"]
    return data_ret
</code></pre>
<p>I want policies to be defined in another file. I added another file <code>policies.py</code> and its contents initially were</p>
<pre><code>data_pol = {
    "Version": "2012-10-17",
    "Statement": [
        {"Sid": "AllowListAccessToBucket",
         "Action": ["s3:ListBucket"],
         "Effect": "Allow",
         "Resource": ["arn:aws:s3:::" + BUCKETNAME, "arn:aws:s3:::" + PUBLICKEYSBUCKETNAME],
         "Condition": {"StringLike": {"s3:prefix": [event["username"] + "/*",
                                                    event["username"]]}}},
        {"Sid": "GetAccessForPublicKeys",
         "Effect": "Allow",
         "Action": ["s3:GetObject", "s3:GetObjectAcl", "s3:GetObjectVersion", "s3:GetBucketLocation"],
         "Resource": ["arn:aws:s3:::" + PUBLICKEYSBUCKETNAME + "/" + event["username"],
                      "arn:aws:s3:::" + PUBLICKEYSBUCKETNAME + "/" + event["username"] + "/*"]},
        {"Sid": "PublicKeysPubFilePutAccess",
         "Effect": "Allow",
         "Action": ["s3:PutObject", "s3:PutObjectAcl"],
         "Resource": ["arn:aws:s3:::" + PUBLICKEYSBUCKETNAME + "/" + event["username"] + "/*.pub"]},
        {"Sid": "TransferDataBucketAccess",
         "Effect": "Allow",
         "Action": ["s3:PutObject", "s3:PutObjectAcl", "s3:GetObject", "s3:GetObjectAcl", "s3:GetObjectVersion", "s3:GetBucketLocation", "s3:DeleteObject", "s3:DeleteObjectVersion"],
         "Resource": ["arn:aws:s3:::" + BUCKETNAME + "/" + event["username"],
                      "arn:aws:s3:::" + BUCKETNAME + "/" + event["username"] + "/*"]},
    ],
}
</code></pre>
<p>and I removed the policy declaration from <code>index.py</code>. I also added <code>import policies</code> to the top of <code>index.py</code>. I updated the call to the policy to</p>
<pre><code>data_ret["Policy"] = json.dumps(policies.data_pol)
</code></pre>
<p>in <code>index.py</code>. This resulted in the error</p>
<pre><code> "errorMessage": "name 'BUCKETNAME' is not defined",
</code></pre>
<p>I then added the following to <code>policies.py</code></p>
<pre><code>import os
BUCKETNAME = os.environ['bucketname']
PUBLICKEYSBUCKETNAME = os.environ['publickkeysbucketname']
</code></pre>
<p>The next error was</p>
<pre><code> "errorMessage": "name 'event' is not defined",
</code></pre>
<p><code>event</code> is the input to the lambda function.</p>
<p>So I added the <code>def lambda_handler</code> line, and the full contents of my <code>policies.py</code> file are now</p>
<pre><code>import os
BUCKETNAME = os.environ['bucketname']
PUBLICKEYSBUCKETNAME = os.environ['publickkeysbucketname']
def lambda_handler(event, context):
    data_pol = {
        "Version": "2012-10-17",
        "Statement": [
            {"Sid": "AllowListAccessToBucket",
             "Action": ["s3:ListBucket"],
             "Effect": "Allow",
             "Resource": ["arn:aws:s3:::" + BUCKETNAME, "arn:aws:s3:::" + PUBLICKEYSBUCKETNAME],
             "Condition": {"StringLike": {"s3:prefix": [event["username"] + "/*",
                                                        event["username"]]}}},
            {"Sid": "GetAccessForPublicKeys",
             "Effect": "Allow",
             "Action": ["s3:GetObject", "s3:GetObjectAcl", "s3:GetObjectVersion", "s3:GetBucketLocation"],
             "Resource": ["arn:aws:s3:::" + PUBLICKEYSBUCKETNAME + "/" + event["username"],
                          "arn:aws:s3:::" + PUBLICKEYSBUCKETNAME + "/" + event["username"] + "/*"]},
            {"Sid": "PublicKeysPubFilePutAccess",
             "Effect": "Allow",
             "Action": ["s3:PutObject", "s3:PutObjectAcl"],
             "Resource": ["arn:aws:s3:::" + PUBLICKEYSBUCKETNAME + "/" + event["username"] + "/*.pub"]},
            {"Sid": "TransferDataBucketAccess",
             "Effect": "Allow",
             "Action": ["s3:PutObject", "s3:PutObjectAcl", "s3:GetObject", "s3:GetObjectAcl", "s3:GetObjectVersion", "s3:GetBucketLocation", "s3:DeleteObject", "s3:DeleteObjectVersion"],
             "Resource": ["arn:aws:s3:::" + BUCKETNAME + "/" + event["username"],
                          "arn:aws:s3:::" + BUCKETNAME + "/" + event["username"] + "/*"]},
        ],
    }
</code></pre>
<p>And now the error message I get is</p>
<pre><code> "errorMessage": "'function' object has no attribute 'data_pols'",
</code></pre>
<p>What am I missing here? All I want is to get <code>data_pol</code>, defined in <code>policies.py</code>, into <code>index.py</code>.</p>
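<p>Editor's note on the likely cause: module-level code in <code>policies.py</code> runs at import time, before any <code>event</code> exists, and defining a second <code>lambda_handler</code> just hides the dict inside a function (hence the "'function' object has no attribute" error). Exposing the policy as a function that takes <code>event</code> sidesteps both problems. A sketch (abbreviated to one statement; <code>build_policy</code> is a name introduced here):</p>

```python
# policies.py (sketch)
import os

def build_policy(event):
    # Env vars and the event are read at call time, not import time
    bucket = os.environ["bucketname"]
    keys_bucket = os.environ["publickkeysbucketname"]
    user = event["username"]
    return {
        "Version": "2012-10-17",
        "Statement": [{
            "Sid": "AllowListAccessToBucket",
            "Action": ["s3:ListBucket"],
            "Effect": "Allow",
            "Resource": [f"arn:aws:s3:::{bucket}", f"arn:aws:s3:::{keys_bucket}"],
            "Condition": {"StringLike": {"s3:prefix": [f"{user}/*", user]}},
        }],  # ... remaining statements as in the original dict
    }

# index.py would then use:
#   import policies
#   data_ret["Policy"] = json.dumps(policies.build_policy(event))
```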
| <python><aws-lambda><lambda> | 2023-08-31 15:25:22 | 1 | 467 | Jason Stanley |
77,017,062 | 13,494,917 | How to avoid accessing table at same time as another instance? | <p>I have a Log table that I send data to when an event happens. Sometimes there are multiple events that happen at basically the same time. Sometimes this causes duplicates in a value that I'm grabbing and incrementing before sending it back to the table with some other data.
The problem arises when two instances do this at nearly the same time. I've tried solutions to make sure only one instance runs, but I've had no luck. So what else can I do?</p>
<p>If I started a transaction like so:</p>
<pre><code>with conn.begin() as transaction:
    # do what I need
    transaction.commit()
</code></pre>
<p>Will this lock the other instance out until this transaction is done? If it does, then we're almost there, but I foresee needing some error handling so that the instance that gets locked out reruns its process.</p>
<p>I'm using Python, pandas and SQLAlchemy, and I'm interacting with an Azure SQL Database / SQL Server database.</p>
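<p>Editor's note: whether the second instance blocks or errors depends on the isolation level and locking hints, but either way the usual pattern is to wrap the transactional block in a retry loop with backoff. A generic stdlib sketch; the exception type here is a stand-in (with SQLAlchemy you would catch <code>OperationalError</code>):</p>

```python
import random
import time

def run_with_retry(work, attempts=5):
    # Re-run the transactional block when the DB reports a lock/deadlock
    # conflict; exponential backoff plus jitter makes a repeat collision
    # between the two instances unlikely
    last_err = None
    for attempt in range(attempts):
        try:
            return work()
        except RuntimeError as err:   # stand-in for sqlalchemy.exc.OperationalError
            last_err = err
            time.sleep((2 ** attempt) * 0.01 + random.random() * 0.01)
    raise last_err

calls = {"n": 0}
def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise RuntimeError("deadlock victim")  # simulated lock conflict
    return "committed"

print(run_with_retry(flaky))
```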
| <python><sql-server><pandas><sqlalchemy><azure-sql-database> | 2023-08-31 15:03:15 | 0 | 687 | BlakeB9 |
77,016,846 | 816,721 | Poetry script, command not found | <p>I created a project using poetry. My project structure is like this -</p>
<pre><code>/
    pyproject.toml
    winshift/
        __init__.py
        cli.py
        ... more stuff
</code></pre>
<p>My file <code>cli.py</code> includes a function <code>main</code> and in my <em>pyproject.toml</em> I defined -</p>
<pre><code>...
[tools.poetry.scripts]
winshift-cli = "winshift.cli:main"
</code></pre>
<p>But when I try to run <code>poetry run winshift-cli</code> I get -</p>
<pre><code>Command not found: winshift-cli
</code></pre>
<p>I have no idea why this doesn't work. I am able to run my <em>cli</em> in other ways and it works properly:</p>
<ul>
<li><code>poetry run python -m winshift.cli</code></li>
<li><code>poetry run python winshift/cli.py</code></li>
<li><code>python -m winshift.cli</code></li>
</ul>
<p>I know that scripts are added to the <code>$(poetry env info -p)/bin</code> folder, but in this case that is not happening. I have no clue what is wrong.</p>
<p>Here is the full code for reference: <a href="https://github.com/rkmax/winshift-python" rel="nofollow noreferrer">https://github.com/rkmax/winshift-python</a></p>
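<p>Editor's note: the section header in the snippet above reads <code>[tools.poetry.scripts]</code>, but Poetry only reads the singular <code>[tool.poetry.scripts]</code> table, which would explain the entry point silently not being installed. The corrected fragment:</p>

```toml
[tool.poetry.scripts]
winshift-cli = "winshift.cli:main"
```

<p>After fixing the header, re-running <code>poetry install</code> should (re)generate the script in the env's bin folder.</p>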
| <python><python-poetry> | 2023-08-31 14:35:01 | 1 | 18,233 | rkmax |
77,016,767 | 6,195,489 | SQLalchemy: how to patch session from context manager and replace with in memory DB session | <p>Say I have an SQLalchemy model like:</p>
<pre><code>from sqlalchemy import Boolean, Column, false
from sqlalchemy.orm import Mapped, mapped_column

from app.data_structures.base import Base


class User(Base):
    __tablename__ = "users"

    user_name: Mapped[str] = mapped_column(primary_key=True, nullable=True)
    flag = Column(Boolean)

    def __init__(
        self,
        user_name: str = None,
        flag: bool = false(),
    ) -> None:
        self.user_name = user_name
        self.flag = flag
<p>where app.data_structures.base.py:</p>
<pre><code>from contextlib import contextmanager
from os import environ
from os.path import join, realpath

from sqlalchemy import Column, ForeignKey, Table, create_engine
from sqlalchemy.orm import declarative_base, scoped_session, sessionmaker

db_name = environ.get("DB_NAME")
ROOT_DIR = environ.get("ROOT_DIR")
db_path = realpath(join(ROOT_DIR, "data", db_name))
engine = create_engine(f"sqlite:///{db_path}", connect_args={"timeout": 120})
session_factory = sessionmaker(bind=engine)
sql_session = scoped_session(session_factory)


@contextmanager
def Session():
    session = sql_session()
    try:
        yield session
        session.commit()
    except Exception:
        session.rollback()
        raise
</code></pre>
<p>I then have a function defined (app.helpers.db_helper.get_users_to_query), elsewhere that does something like:</p>
<pre><code>from app.data_structures.base import Session


def get_usernames_to_query(flag: bool = False) -> List[User]:
    logger = getLogger(__name__)
    try:
        with Session() as session:
            usernames_to_query = [
                {"user_name": user.user_name}
                for user in session.query(User).filter(
                    User.flag == flag
                )
            ]
    except Exception as err:
        logger.exception(f"Exception thrown in get_users_to_query {' '.join(err.args)}")
        usernames_to_query = []
    return usernames_to_query
</code></pre>
<p>I am trying to unit test this function by using a separate in-memory session rather than the production DB.</p>
<p>I patch the Session() context manager where the function imports it, and then mock the <code>__enter__</code> return value to be the in-memory session:</p>
<pre><code>@patch("app.helpers.db_helper.Session")
def test_get_users_to_query(self, mock_session) -> None:
    self.engine = create_engine("sqlite:///:memory:")
    self.session = Session(self.engine)
    print(type(mock_session))
    print("session id", id(self.session))
    mock_session.__enter__ = Mock(return_value=self.session)
    mock_session.__exit__ = Mock(return_value=None)
    Base.metadata.create_all(self.engine)

    (patcher, environ_dict, environ_mock_get) = self.environ_mock_get_factory()
    with patcher():
        fake_user1 = User(
            display_name="Jane Doe", user_name="jdb1", flag=False
        )
        fake_user2 = User(
            display_name="John Doe", user_name="jdb2", flag=True
        )
        with mock_session as session:
            session.add(fake_user1)
            session.add(fake_user2)
            session.commit()
            users = session.query(User).filter(User.flag == True).all()
            print([user.user_name for user in users])
            print(id(session), type(session))

        users_to_query = get_users_to_query()
        print("Users:", users_to_query)
</code></pre>
<p>In the test above the mocked Session context manager gets replaced with self.session, as I would expect. So <code>print([user.user_name for user in users])</code> prints fake_user2's username as expected.</p>
<p>But in the call to <code>get_users_to_query</code> it is not replaced: it doesn't query the in-memory DB, and <code>get_users_to_query</code> returns an empty list.</p>
<p>Can anyone explain why this is, and how to get the session in <code>get_users_to_query</code> to be the mocked session with the in memory DB (i.e. self.session)?</p>
<p>Thanks!</p>
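<p>Editor's note on the likely cause: <code>with Session() as session</code> first <em>calls</em> the patched object, then invokes <code>__enter__</code> on the call's <em>result</em>, so setting <code>mock_session.__enter__</code> directly never fires inside <code>get_users_to_query</code>. The context-manager protocol has to be configured on <code>mock_session.return_value</code> instead. A minimal stdlib sketch:</p>

```python
from unittest.mock import MagicMock

mock_session = MagicMock()
in_memory_session = object()   # stands in for the real in-memory Session

# `with Session() as s` is Session().__enter__(), so the protocol must
# live on mock_session.return_value, not on mock_session itself
mock_session.return_value.__enter__.return_value = in_memory_session

with mock_session() as session:
    assert session is in_memory_session
print("patched context manager yields the in-memory session")
```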
| <python><unit-testing><sqlalchemy><mocking><python-unittest> | 2023-08-31 14:25:14 | 0 | 849 | abinitio |
77,016,724 | 10,232,932 | Add a numpy array with one value less to a pandas column | <p>I have a pandas dataframe df:</p>
<pre><code>A B C
1 2 NaN
2 3 NaN
4 5 NaN
</code></pre>
<p>How can I add a numpy array whose shape is one element shorter than the length of the dataframe? In this case, the numpy array is:</p>
<pre><code>[10 20]
</code></pre>
<p>And I want to start at the second row and fill the column C:</p>
<pre><code>A B C
1 2 NaN
2 3 10
4 5 20
</code></pre>
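<p>For reference, the frames above can be reproduced, and one position-based direction sketched, like this (an illustration, not necessarily the only idiomatic way):</p>

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({"A": [1, 2, 4], "B": [2, 3, 5], "C": np.nan})
arr = np.array([10, 20])

# Position-based assignment that skips the first row and fills the rest of C
df.iloc[1:, df.columns.get_loc("C")] = arr
print(df)
```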
| <python><pandas><numpy> | 2023-08-31 14:20:22 | 1 | 6,338 | PV8 |
77,016,618 | 543,087 | Websocket automatically closes after response in Python | <p>The following code runs as a server that browser clients connect to.
However, after each request the browser shows that the websocket connection is closed. I want to keep the connection open in the browser clients, since reopening it is slow due to network issues etc.
Earlier, with Node.js websocket servers, the connections were never closed.</p>
<p>Can anybody tell me where and how the socket may be getting closed:</p>
<pre><code># WSS (WS over TLS) server example, with a self-signed certificate
from common import *
from datetime import datetime
import numpy as np
import os
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers
from pathlib import Path
import re
import time
import os.path
from dateutil.relativedelta import relativedelta

now = datetime.now()
dt_string = now.strftime("%d/%m/%Y %H:%M:%S")
print("Started time=", dt_string)


def decode_batch_predictions(pred):
    input_len = np.ones(pred.shape[0]) * pred.shape[1]
    # Use greedy search. For complex tasks, you can use beam search
    results = keras.backend.ctc_decode(pred, input_length=input_len, greedy=True)[0][0][
        :, :8
    ]
    # Iterate over the results and get back the text
    output_text = []
    for res in results:
        condition = tf.less(res, 0)
        res = tf.where(condition, 1, res)
        res = tf.strings.reduce_join(num_to_char(res)).numpy().decode("utf-8")
        output_text.append(res)
    return output_text


characters = [' ', '2', '3', '4', '5', '6', '7', '8', '9', 'A', 'B', 'C', 'D',
              'E', 'F', 'G', 'H', 'J', 'K', 'L', 'M', 'N', 'P', 'Q', 'R', 'S',
              'T', 'U', 'V', 'W', 'X', 'Y', 'Z']
characters = np.asarray(characters, dtype='&lt;U1')

num_to_char = layers.StringLookup(
    vocabulary=characters, mask_token=None, invert=True
)

prediction_model = tf.keras.models.load_model('model_prediction2')
opt = keras.optimizers.Adam()
prediction_model.compile(optimizer=opt)

gg_hashmap = None

frameinfo = getframeinfo(currentframe())


async def hello(websocket, path):
    global gg_hashmap
    print(datetime.now().strftime('%Y-%m-%d %H:%M:%S'))
    json_data = await websocket.recv()
    obj = json.loads(json_data)
    # print(obj, flush=True)
    if ("l" in obj and obj["l"] == 'license'):
        res = {
            'status': 1,
            'python': 1,
        }
        json_string = json.dumps(res)
        await websocket.send(json.dumps(json_string))
    else:
        print("In else pat")


def start_server():
    ssl_context = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
    ssl_context.load_cert_chain("bundle.pem", "key.pem")
    ip = ''
    if os.name == 'nt':
        ip = '127.0.0.1'
    else:
        ip = "0.0.0.0"
    start_server = websockets.serve(
        hello, ip, 31334, ssl=ssl_context
    )
    asyncio.get_event_loop().run_until_complete(start_server)
    asyncio.get_event_loop().run_forever()


def main():
    print("Entered main")
    global gg_hashmap
    gg_hashmap = getHash()
    start_server()
    print("Server started")


main()
</code></pre>
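<p>For reference, the <code>websockets</code> library closes the connection when the handler coroutine returns, and <code>hello</code> returns after a single <code>recv()</code>. Below is a hedged sketch of the loop shape that keeps a handler alive, using a hypothetical stand-in class instead of a live socket:</p>

```python
import asyncio

class FakeWebSocket:
    # Hypothetical stand-in: a real `websockets` connection is likewise an
    # async iterator over incoming messages.
    def __init__(self, messages):
        self._messages = list(messages)
    def __aiter__(self):
        return self
    async def __anext__(self):
        if not self._messages:
            raise StopAsyncIteration
        return self._messages.pop(0)

received = []

async def hello(websocket):
    # Loop instead of a single recv(): the handler (and the connection)
    # stays alive until the client disconnects.
    async for json_data in websocket:
        received.append(json_data)

asyncio.run(hello(FakeWebSocket(["a", "b", "c"])))
print(received)  # -> ['a', 'b', 'c']
```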
| <python><python-3.x><websocket> | 2023-08-31 14:06:24 | 0 | 1,241 | user5858 |
77,016,586 | 3,616,293 | np.where comparison between 2 arrays | <p>I have 4 numpy arrays as:</p>
<pre><code>mat_row.shape, mat_col.shape
# ((3000, 2000), (3000, 2000))
nearest_neighb_x.shape, nearest_neighb_y.shape
# (2040,) (2040,)
</code></pre>
<p>The idea is to perform a lookup for unique values of <code>nearest_neighb_x</code> and <code>nearest_neighb_y</code> existing in <code>mat_col</code> and <code>mat_row</code> with condition checks (xc and yc should lie within a specific range):</p>
<p>Currently, I am using two <code>for</code> loops for this (inefficient and slow):</p>
<pre><code>x1, x2 = 822, 1414
y1, y2 = 760, 2516

# Python3 list to contain candidate (xc, yc) cam/image space pxs on object-
cand_x = []
cand_y = []

for xp in np.unique(nearest_neighb_x):
    for yp in np.unique(nearest_neighb_y):
        # Get (x, y) coord in cam/image space-
        yc = np.where(mat_row == yp)[0][0]
        xc = np.where(mat_col == xp)[1][0]
        if (xc >= (x1 + 45)) & (xc <= (x2 - 45)):
            cand_x.append(xc)
        if (yc >= (y1 + 45)) & (yc <= (y2 - 45)):
            cand_y.append(yc)

cand_x = np.array(cand_x)
cand_y = np.array(cand_y)
</code></pre>
<p>Is there an efficient implementation that avoids the <code>for</code> loops?</p>
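<p>As background for the <code>[0][0]</code>/<code>[1][0]</code> indexing above, <code>np.where</code> on a 2-D array returns parallel arrays of row and column indices for every match:</p>

```python
import numpy as np

mat = np.array([[1, 2],
                [3, 2]])
rows, cols = np.where(mat == 2)  # all coordinates where the value matches
print(rows, cols)                # rows of matches, columns of matches
```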
| <python><numpy><numpy-ndarray> | 2023-08-31 14:02:34 | 0 | 2,518 | Arun |
77,016,585 | 4,575,197 | Multiplying and summing same-named columns in 2 different DataFrames with pandas | <p>It's easy to multiply 2 columns, but somehow my code returns NaN and I don't understand why.</p>
<p>So I have these columns in df2 that also exist with the same names in df1. My goal is to multiply the matching columns and then sum the products together.</p>
<pre><code>df1.columns=['ISIN', 'DSCD', 'GEOGN', 'SIC', 'GEOGC', 'COMMONSHARESOUTSTANDING',
'year', 'date', 'month', 'Mcap', 'TotalAssets', 'NItoCommon',
'NIbefEIPrefDiv', 'PrefDiv', 'NIbefPrefDiv', 'Sales',
'GainLossAssetSale', 'PPT', 'LTDebt', 'CommonEquity', 'PrefStock',
'OtherIncome', 'TotalLiabilities', 'PreTaxIncome', 'IncomeTaxes',
'OtherTA', 'OtherLiabilities', 'CashSTInv', 'OtherCA', 'OtherCL',
'TotalDiv', 'Country', 'Industry', 'IsSingleCountry', 'Mcap_w',
'NItoCommon_w', 'NIbefEIPrefDiv_w', 'PrefDiv_w', 'NIbefPrefDiv_w',
'Sales_w', 'GainLossAssetSale_w', 'PPT_w', 'LTDebt_w', 'CommonEquity_w',
'PrefStock_w', 'OtherIncome_w', 'TotalLiabilities_w', 'PreTaxIncome_w',
'IncomeTaxes_w', 'OtherTA_w', 'OtherLiabilities_w', 'CashSTInv_w',
'OtherCA_w', 'OtherCL_w', 'TotalDiv_w', 'fair_value', 'month__1',
'month__2', 'month__3', 'month__4', 'month__5', 'month__6', 'month__7',
'month__8', 'month__9', 'month__10', 'month__11', 'month__12',
'constant', 'V']
</code></pre>
<p>df2 has only one row, while df1 has more than one.</p>
<pre><code>df2.columns = ['constant', 'TotalAssets', 'NItoCommon_w', 'NIbefEIPrefDiv_w',
'NIbefPrefDiv_w', 'Sales_w', 'GainLossAssetSale_w', 'PPT_w', 'LTDebt_w',
'CommonEquity_w', 'PrefStock_w', 'OtherIncome_w', 'TotalLiabilities_w',
'PreTaxIncome_w', 'IncomeTaxes_w', 'OtherTA_w', 'OtherLiabilities_w',
'CashSTInv_w', 'OtherCA_w', 'OtherCL_w', 'TotalDiv_w', 'month__1',
'month__2', 'month__3', 'month__4', 'month__5', 'month__6', 'month__7',
'month__8', 'month__9', 'month__10', 'month__11', 'month__12',
'Country', 'predicted_Mcap']
</code></pre>
<p>i need to multiply and sum these columns:</p>
<pre><code>'TotalAssets', 'NItoCommon_w', 'NIbefEIPrefDiv_w',
'NIbefPrefDiv_w', 'Sales_w', 'GainLossAssetSale_w', 'PPT_w', 'LTDebt_w',
'CommonEquity_w', 'PrefStock_w', 'OtherIncome_w', 'TotalLiabilities_w',
'PreTaxIncome_w', 'IncomeTaxes_w', 'OtherTA_w', 'OtherLiabilities_w',
'CashSTInv_w', 'OtherCA_w', 'OtherCL_w', 'TotalDiv_w'
</code></pre>
<p>my code:</p>
<pre><code># Add a column to df1 to store the result
df1['V'] = 0

for col in df2.iloc[:, 1:21]:
    df1['V'] += df1[col] * df2[col]
</code></pre>
<p>But the result is all NaN, which is strange to me.</p>
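<p>A likely cause (an assumption, since the indexes of the two frames aren't shown) is pandas index alignment: multiplying two Series aligns them on their index labels first, and labels present in only one operand produce NaN. A minimal sketch of both the failure mode and a scalar workaround:</p>

```python
import pandas as pd

df1 = pd.DataFrame({"x": [1.0, 2.0, 3.0]})    # index 0, 1, 2
df2 = pd.DataFrame({"x": [10.0]}, index=[7])  # a single row, index 7

print(df1["x"] * df2["x"])          # aligns on index labels -> all NaN
print(df1["x"] * df2["x"].iloc[0])  # scalar multiply -> 10, 20, 30
```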
| <python><pandas> | 2023-08-31 14:02:25 | 1 | 10,490 | Mostafa Bouzari |
77,016,505 | 2,178,942 | Converting pandas to multiple dictionaries | <p>I have a pandas dataset that looks like this:</p>
<pre><code>check_d = pickle.load(a_file)
</code></pre>
<p><a href="https://i.sstatic.net/Mb1jp.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Mb1jp.png" alt="enter image description here" /></a></p>
<p>when I type:</p>
<pre><code>check_d["conv2"]
</code></pre>
<p>I get:</p>
<pre><code>n00005787_9986.JPEG [-0.0, -0.0, -0.0, -0.0, -0.0, -0.0, -0.0, -0....
n00006484_9999.JPEG [-0.0, -0.0, -0.0, -0.0, 9.846705, -0.0, -0.0,...
n00007846_99456.JPEG [-0.0, -0.0, -0.0, -0.0, -0.0, -0.0, -0.0, -0....
n00015388_9991.JPEG [-0.0, 9.055762, -0.0, 3.3329132, -0.0, -0.0, ...
n00017222_9935.JPEG [-0.0, -0.0, -0.0, -0.0, -0.0, 13.863088, -0.0...
...
n15093298_9.JPEG [-0.0, -0.0, 29.058922, -0.0, -0.0, -0.0, -0.0...
n15102359_129.JPEG [-0.0, -0.0, 17.293856, -0.0, -0.0, 201.57753,...
n15102455_9991.JPEG [-0.0, 64.55209, -0.0, -0.0, -0.0, -0.0, -0.0,...
n15102894_997.JPEG [-0.0, -0.0, -0.0, 73.35719, 5.8023753, -0.0, ...
n00004475_6590.JPEG [-0.0, 33.83642, -0.0, 62.97699, 8.908251, -0....
Name: conv2, Length: 19892, dtype: object
</code></pre>
<p>and <code>type(check_d["conv2"])</code> is <code>pandas.core.series.Series</code></p>
<p>I want to convert this to 4 dictionaries: "conv2", "conv4", "fc6" and "fc8".</p>
<p>So in each dictionary, keys are <code>n*****_****.JPEG</code>, and values are the corresponding value from the pandas file. How can I do that?</p>
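<p>One possible direction (assuming each column really is a Series indexed by the filenames, as the output suggests): <code>Series.to_dict()</code> turns the index into keys. A toy sketch:</p>

```python
import pandas as pd

# Tiny stand-in for one column of the loaded object, indexed by filename
s = pd.Series([1.5, 2.5],
              index=["n00005787_9986.JPEG", "n00006484_9999.JPEG"],
              name="conv2")
d = s.to_dict()  # index labels become keys, values stay values
print(d)
```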
| <python><python-3.x><pandas><dictionary><pickle> | 2023-08-31 13:53:03 | 2 | 1,581 | Kadaj13 |
77,016,386 | 10,232,932 | Running the same function several times for a filtered dataframe in python | <p>I have a pandas dataframe df:</p>
<pre class="lang-none prettyprint-override"><code>TypeA TypeB timepoint value
A AB 1 10
A AB 2 10
A AC 1 5
A AC 2 15
A AC 3 10
...
D DB 1 1
D DB 2 1
</code></pre>
<p>How can I run a function several times on the unique combinations of 'TypeA' and 'TypeB' and store the results in a new dataframe? Let's assume the following function:</p>
<pre class="lang-py prettyprint-override"><code>import numpy as np

def running_mean(x, N):
    cumsum = np.cumsum(np.insert(x, 0, 0))
    return (cumsum[N:] - cumsum[:-N]) / float(N)
</code></pre>
<p>Normally, I would use a <code>for</code>-loop, but I think that is not a good idea (and I lose the speed benefits of vectorized functions):</p>
<pre class="lang-py prettyprint-override"><code>df5 = pd.DataFrame()

for i in df['TypeA'].unique().tolist():
    df2 = df[df['TypeA'] == i]
    for j in df2['TypeB'].unique().tolist():
        df3 = df2[df2['TypeB'] == j].copy()
        df3['moving_av'] = np.nan
        moving_av = running_mean(df3['value'].values, 2)
        df3.iloc[1:1 + len(moving_av), df3.columns.get_loc('moving_av')] = moving_av
        df5 = pd.concat([df5, df3])

df = pd.merge(df, df5, how='left', on=['TypeA', 'TypeB', 'timepoint', 'value'])
</code></pre>
<p>My desired output is:</p>
<pre class="lang-none prettyprint-override"><code>TypeA TypeB timepoint value moving_av
A AB 1 10 NaN
A AB 2 10 10
A AC 1 5 NaN
A AC 2 15 10
A AC 3 10 12.5
...
D DB 1 1 NaN
D DB 2 1 1
</code></pre>
<p>Please note that the function above is only an example; I am looking for a solution that also works for a bigger function.</p>
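<p>For comparison, here is a sketch of one loop-free direction on the sample rows above; <code>rolling(2).mean()</code> happens to reproduce the 2-point running mean, and a more complex window function could slot into the same <code>transform</code> (an illustration, not a claim that this covers every bigger function):</p>

```python
import pandas as pd

df = pd.DataFrame({
    "TypeA": ["A", "A", "A", "A", "A"],
    "TypeB": ["AB", "AB", "AC", "AC", "AC"],
    "timepoint": [1, 2, 1, 2, 3],
    "value": [10, 10, 5, 15, 10],
})
# Apply the window function once per (TypeA, TypeB) group; transform keeps
# the original row order and index.
df["moving_av"] = (df.groupby(["TypeA", "TypeB"])["value"]
                     .transform(lambda s: s.rolling(2).mean()))
print(df)
```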
| <python><pandas><dataframe> | 2023-08-31 13:38:25 | 0 | 6,338 | PV8 |
77,016,334 | 2,595,216 | New way of dark mode in PyQt6/PySide6 | <p>There is an 'old' way of setting dark mode in PyQt6/PySide6, answered here: <a href="https://stackoverflow.com/questions/73060080/how-do-i-use-qt6-dark-theme-with-pyside6">How do I use QT6 Dark Theme with PySide6?</a></p>
<p>But I found the official Qt blog post <a href="https://www.qt.io/blog/dark-mode-on-windows-11-with-qt-6.5" rel="nofollow noreferrer">https://www.qt.io/blog/dark-mode-on-windows-11-with-qt-6.5</a> and I am not sure how to apply the 'new' way in PySide6.</p>
| <python><pyside6><pyqt6> | 2023-08-31 13:32:51 | 2 | 553 | emcek |
77,016,167 | 2,641,825 | How to install Jupyter Lab on Debian 12 bookworm? | <ul>
<li><p>In Debian 11 I had installed Jupyter Lab with pip in my user directory. On Debian 12 this is not possible anymore, since <a href="https://www.debian.org/releases/stable/i386/release-notes/ch-information.en.html#python3-pep-668" rel="nofollow noreferrer">Debian decided to follow PEP 668</a> and Python packages are marked as externally managed: you can only install system-wide with the package manager (APT).</p>
</li>
<li><p>I installed python3-jupyterlab-server with APT, but Jupyter Lab fails to start</p>
<pre><code> $ sudo apt reinstall python3-jupyterlab-server
...
...
Preparing to unpack .../python3-jupyterlab-server_2.16.5-1_all.deb ...
Unpacking python3-jupyterlab-server (2.16.5-1) over (2.16.5-1) ...
Setting up python3-jupyterlab-server (2.16.5-1) ...
$ jupyter lab
Traceback (most recent call last):
File "/usr/local/bin/jupyter-lab", line 5, in <module>
from jupyterlab.labapp import main
ModuleNotFoundError: No module named 'jupyterlab'
</code></pre>
</li>
<li><p>I installed Jupyter Lab with pip in a virtual environment and can successfully start it from there. Is this the only way?</p>
</li>
</ul>
<p>I understand the value of virtual environments or conda for maintaining a lot of data analysis packages. But Jupyter Lab is now a core component of the system, so I guess it should be installed system-wide with the system package manager (APT on Debian). I'm wondering what's wrong in my config: why can't I start Jupyter Lab when it is installed from APT?</p>
<p>Related questions:</p>
<ul>
<li><a href="https://stackoverflow.com/q/75608323/2641825">How do I solve "error: externally-managed-environment" every time I use pip 3?</a></li>
<li><a href="https://stackoverflow.com/q/75602063/2641825">pip install -r requirements.txt is failing: "This environment is externally managed"</a></li>
</ul>
| <python><debian><jupyter-lab> | 2023-08-31 13:15:09 | 1 | 11,539 | Paul Rougieux |
77,016,149 | 4,239,879 | Mock a class method before making an object of that class in setUp | <p>How can I use mock to avoid executing the <code>some_thing</code> method of <code>Settings</code> class and make it return some arbitrary string?</p>
<p>My class:</p>
<pre><code>class Settings():
    """
    Represents settings. Will read the settings file if available.
    If not: will create it with defaults
    """
    def __init__(self):
        self.settings_file = os.path.join(pathlib.Path.home(), "settings.json")
        self.settings = self.get_defaults()

    def get_defaults(self):
        """ Get default settings """
        return {
            "other_path": self.some_thing()
        }

    def some_thing(self):
        return "value"
</code></pre>
<p>My test module:</p>
<pre><code>"""
Unit tests for settings module
"""
import unittest
import unittest.mock as mock

from src.config.settings import Settings


class TestSettings(unittest.TestCase):
    """ Class containing tests for settings module """

    def setUp(self):
        """ Use this method to setup the tests """
        print()  # this just prints a newline after module name printed by pytest
        self.settings = Settings()

    def test_get_settings_file(self):
        """
        Tests get settings file
        """
        assert self.settings.settings_file == r"C:\users\root\.settings.json"
</code></pre>
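<p>One common way to do this (sketched against a trimmed copy of the class, since the real import path is <code>src.config.settings</code>): <code>patch.object</code> replaces the method on the class itself, so any instance created while the patch is active (for example, in <code>setUp</code>) gets the stub:</p>

```python
from unittest.mock import patch

class Settings:
    # Trimmed copy of the class above, just for a self-contained demo
    def __init__(self):
        self.settings = self.get_defaults()
    def get_defaults(self):
        return {"other_path": self.some_thing()}
    def some_thing(self):
        return "value"

# Patch the method on the class *before* instantiating:
with patch.object(Settings, "some_thing", return_value="fake"):
    s = Settings()

print(s.settings)           # the stubbed value was used
print(Settings().settings)  # patch is reverted outside the with-block
```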
| <python><mocking><integration-testing> | 2023-08-31 13:13:06 | 1 | 2,360 | Ivan |
77,016,132 | 8,913,316 | Import from another file inside the same module and running from a main.py outside the module throws an import error | <p>Given the following tree:</p>
<pre><code>├── main.py
└── my_module
├── a.py
├── b.py
└── __init__.py
</code></pre>
<p>a.py:</p>
<pre><code>def f():
    print('Hello World.')
</code></pre>
<p>b.py:</p>
<pre><code>from a import f

def f2():
    f()

if __name__ == '__main__':
    f2()
</code></pre>
<p>main.py:</p>
<pre><code>from my_module.b import f2
if __name__ == '__main__':
    f2()
</code></pre>
<p>When I run b.py, "Hello World." is printed successfully. However, when I run main.py, I get the following error:</p>
<pre><code>Traceback (most recent call last):
File "/home/user/main.py", line 1, in <module>
from my_module.b import f2
File "/home/user/my_module/b.py", line 1, in <module>
from a import f
ModuleNotFoundError: No module named 'a'
</code></pre>
<p>I would expect the same output when executing main.py and b.py.</p>
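<p>A sketch of one common fix, importing through the package name in b.py (verified end-to-end in a throwaway directory; running b.py directly would then need <code>python -m my_module.b</code> from the top directory, so that the package root stays on <code>sys.path</code>):</p>

```python
import subprocess
import sys
import tempfile
import textwrap
from pathlib import Path

# Build the same layout in a throwaway directory, but with b.py importing
# through the package name instead of `from a import f`.
root = Path(tempfile.mkdtemp())
pkg = root / "my_module"
pkg.mkdir()
(pkg / "__init__.py").write_text("")
(pkg / "a.py").write_text("def f():\n    print('Hello World.')\n")
(pkg / "b.py").write_text(textwrap.dedent("""\
    from my_module.a import f

    def f2():
        f()
"""))
(root / "main.py").write_text("from my_module.b import f2\nf2()\n")

out = subprocess.run([sys.executable, "main.py"], cwd=root,
                     capture_output=True, text=True)
print(out.stdout)  # -> Hello World.
```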
| <python> | 2023-08-31 13:09:43 | 2 | 388 | Keredu |
77,016,079 | 10,909,217 | Construct filtering expression based on lexicographical ordering | <p>I have a dataset with integer columns for <code>'year'</code>, <code>'month'</code> and <code>'day'</code>.</p>
<p>For a given <code>start_date</code> and <code>end_date</code> (pandas Timestamps), I want to build an expression to filter for all the rows between these dates.</p>
<p>I have the following working solution, which implements precedence of year over month over day manually:</p>
<pre><code>import pandas as pd
import pyarrow.dataset as ds
import pyarrow.compute as pc


def get_partition_filter(start_time: pd.Timestamp,
                         end_time: pd.Timestamp) -> pc.Expression:
    # Extract year, month, and day from start and end times
    start_year, start_month, start_day = start_time.year, start_time.month, start_time.day
    end_year, end_month, end_day = end_time.year, end_time.month, end_time.day
    # Construct filter
    return ((ds.field("year") > start_year) | ((ds.field("year") == start_year) & (ds.field("month") > start_month)) |
            ((ds.field("year") == start_year) & (ds.field("month") == start_month) & (ds.field("day") >= start_day))) & \
           ((ds.field("year") < end_year) | ((ds.field("year") == end_year) & (ds.field("month") < end_month)) |
            ((ds.field("year") == end_year) & (ds.field("month") == end_month) & (ds.field("day") <= end_day)))
</code></pre>
<p>I also have the following hack, which circumvents lexicographical ordering by treating the dates as one integer each and constructing a derived field.</p>
<pre><code>def get_partition_filter_new(start_time: pd.Timestamp,
                             end_time: pd.Timestamp) -> pc.Expression:
    # Construct ints from dates
    start_date_int, end_date_int = (
        int(time.strftime('%Y%m%d')) for time in (start_time, end_time)
    )
    # Construct the derived field
    derived_field = (
        ds.field("year") * 10**4 +
        ds.field("month") * 10**2 +
        ds.field("day")
    )
    # Construct the filter expression
    return (derived_field >= start_date_int) & (derived_field <= end_date_int)
</code></pre>
<p>Is there a native way to do something like this in <code>pyarrow</code>? To rephrase the question, I am looking for something like this (pseudo-code):</p>
<pre><code># PSEUDO-CODE
(start_year_int, start_month_int, start_day_int) <= (ds.field("year"), ds.field("month"), ds.field("day")) <= (end_year_int, end_month_int, end_day_int)
</code></pre>
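<p>As a sanity check on the "hack": the packed integer orders exactly like the <code>(year, month, day)</code> tuple (lexicographic comparison), because month &lt;= 12 &lt; 100 and day &lt;= 31 &lt; 100 each fit in their two decimal digits. A quick plain-Python check:</p>

```python
from datetime import date

def as_int(d):
    # Same packing as the derived field: year*10**4 + month*10**2 + day
    return d.year * 10**4 + d.month * 10**2 + d.day

a, b = date(1999, 12, 31), date(2000, 1, 1)
# Tuple comparison and packed-integer comparison agree:
assert ((a.year, a.month, a.day) < (b.year, b.month, b.day)) == (as_int(a) < as_int(b))
print(as_int(a), as_int(b))  # -> 19991231 20000101
```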
| <python><pyarrow> | 2023-08-31 13:01:42 | 1 | 1,290 | actual_panda |
77,016,033 | 11,329,736 | Snakemake ignores custom number of threads | <p>For all of my rules in my snakefile, <code>snakemake</code> ignores the number of threads that I set:</p>
<pre><code>rule trim:
    input:
        r1="reads/{sample}_R1.fastq.gz",
        r2="reads/{sample}_R2.fastq.gz",
    output:
        multiext("trimmed/{sample}",
                 "_val_1.fq.gz",
                 "_R1.fastq.gz_trimming_report.txt",
                 "_val_2.fq.gz",
                 "_R2.fastq.gz_trimming_report.txt",),
    threads: config["resources"]["trim"]["cpu"]
    resources:
        runtime=config["resources"]["trim"]["time"]
    conda:
        "envs/read-processing.yaml"
    log:
        "logs/trim_galore/{sample}.log",
    shell:
        "trim_galore -j {threads} -q 20 --basename {wildcards.sample} "
        "-o trimmed --paired {input.r1} {input.r2} 2> {log}"
</code></pre>
<p>When I run a dry run I get this for the shell command:</p>
<pre><code>trim_galore -j 1 -q 20 --basename control_2_input_hyp -o trimmed --paired reads/control_2_input_hyp_R1.fastq.gz reads/control_2_input_hyp_R2.fastq.gz 2> logs/trim_galore/control_2_input_hyp.log
</code></pre>
<p>The <code>-j</code> flag is the number of threads to use and should be 8, as set in <code>config</code>. Even when I change <code>config["resources"]["trim"]["cpu"]</code> to 8, it still sets <code>-j</code> to 1, and this happens for all my rules.</p>
<p>Why does <code>snakemake</code> only use one thread?</p>
<p>I am using <code>snakemake</code> version 7.32.3.</p>
| <python><snakemake> | 2023-08-31 12:55:44 | 1 | 1,095 | justinian482 |
77,015,931 | 13,023,647 | Get data from ICMP package | <p>Could you please tell me how to get more detailed information from an ICMP packet?
Right now I'm using this code:</p>
<pre><code>import scapy.layers.inet
from scapy.all import *


def gettingDataFromICMPTraffic(pkt):
    if pkt.haslayer(scapy.layers.inet.ICMP):
        type_8 = pkt.getlayer(scapy.layers.inet.ICMP).type
        if type_8 == 8:
            print(pkt.getlayer(scapy.layers.inet.ICMP))


def main():
    pkts = rdpcap('icmp_yes.pcap')
    for pkt in pkts:
        gettingDataFromICMPTraffic(pkt)


if __name__ == '__main__':
    main()
</code></pre>
<p>I get some information in the form:</p>
<pre><code>ICMP 192.168.34.163 > 192.168.34.118 echo-request 0 / Raw
ICMP 192.168.34.163 > 192.168.34.118 echo-request 0 / Raw
ICMP 192.168.34.163 > 192.168.34.136 echo-request 0 / Raw / Padding
ICMP 192.168.34.163 > 192.168.34.136 echo-request 0 / Raw / Padding
</code></pre>
<p>I would like to get more information regarding the <code>Sequence Number</code> parameters, as is done in <code>Wireshark</code>.</p>
<p><a href="https://i.sstatic.net/NU0AJ.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/NU0AJ.jpg" alt="enter image description here" /></a></p>
<p>I looked in the <a href="https://scapy.readthedocs.io/en/latest/api/scapy.layers.inet.html#scapy.layers.inet.ICMP" rel="nofollow noreferrer">documentation</a> and didn't find anything.</p>
| <python><python-3.x><scapy><icmp> | 2023-08-31 12:42:47 | 1 | 374 | Alex Rebell |
77,015,800 | 3,274,630 | Why does Python's rstrip remove more characters than expected? | <p>According to the docs the function will strip that sequence of chars from the right side of the string.<br />
The expression <code>'https://odinultra.ai/api'.rstrip('/api')</code> should result in the string <code>'https://odinultra.ai'</code>.</p>
<p>Instead here is what we get in Python 3:</p>
<pre><code>>>> 'https://odinultra.ai/api'.rstrip('/api')
'https://odinultra.'
</code></pre>
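<p>For comparison, <code>rstrip</code> treats its argument as a <em>set</em> of characters, not a suffix, and Python 3.9+ has <code>removesuffix</code> for the exact-substring case:</p>

```python
url = 'https://odinultra.ai/api'
# rstrip strips any of the characters '/', 'a', 'p', 'i' one by one from
# the right, which also eats the trailing 'ai' of the domain:
print(url.rstrip('/api'))        # -> 'https://odinultra.'
# removesuffix drops an exact trailing substring instead:
print(url.removesuffix('/api'))  # -> 'https://odinultra.ai'
```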
| <python><string><strip> | 2023-08-31 12:23:08 | 1 | 448 | Corneliu Maftuleac |
77,015,775 | 8,671,089 | TypeError: 'NoneType' object is not iterable while reading list in python | <p>I am trying to write a function which copies all the elements from one list into another list, but sometimes my list can be None. I tried the code below with a check for None.
Does anyone know how to do this? Thanks in advance.</p>
<pre><code>from typing import List

fruits = ["lemon", "pear", "watermelon", "tomato"]
food = None


def check(items: List[str]):
    c = ["apple", "banana",
         *[value for value in items if items is not None]]
    print(*c)


check(fruits)
check(food)
</code></pre>
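<p>The filter clause in the comprehension runs per element, i.e. only after iteration over <code>None</code> has already failed. A sketch that guards before iterating (treating <code>None</code> like an empty list is an assumption here; adjust if it should be an error instead):</p>

```python
from typing import List, Optional

def check(items: Optional[List[str]]) -> List[str]:
    # `items or []` substitutes an empty list before unpacking, so the
    # per-element filter (which never protected the iteration) is not needed.
    c = ["apple", "banana", *(items or [])]
    print(*c)
    return c

check(["lemon", "pear", "watermelon", "tomato"])
check(None)
```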
| <python> | 2023-08-31 12:19:58 | 1 | 683 | Panda |
77,015,717 | 4,451,315 | too-many-args for union of callables | <p>If I run mypy on</p>
<pre class="lang-py prettyprint-override"><code>from typing import Callable
a: Callable[[str, str], str] | Callable[[str], str]
a('a', 'b')
</code></pre>
<p>then I get</p>
<pre class="lang-py prettyprint-override"><code>main.py:5: error: Too many arguments [call-arg]
</code></pre>
<p>But, why?</p>
<p>How can I say that <code>a</code> can be a function which either takes two <code>str</code> arguments, or just one?</p>
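<p>mypy rejects the call because only arguments that every member of the union accepts are safe, and a two-argument call doesn't fit the one-argument member. As a runtime illustration (not a static-typing fix in itself; mypy would still want per-branch narrowing or casts), the arity can be branched on before calling:</p>

```python
import inspect

# Hypothetical helper: call a with as many arguments as its signature takes,
# so each branch only ever exercises one member of the union.
def apply(a, x='a', y='b'):
    if len(inspect.signature(a).parameters) == 2:
        return a(x, y)
    return a(x)

print(apply(lambda s: s.upper()))  # one-argument member
print(apply(lambda s, t: s + t))   # two-argument member
```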
| <python><mypy><python-typing> | 2023-08-31 12:11:00 | 0 | 11,062 | ignoring_gravity |
77,015,563 | 5,374,161 | Amazon SES creds working with smtplib but not with boto3 | <p>I'm trying to send emails using <strong>Amazon SES</strong>. Created the roles and API keys and policies and tested it out using Python <code>smtplib</code>. Pasting the snippet below:</p>
<pre><code>import smtplib

fromaddr = "some@email.com"
toaddrs = "other@email.com"
msg = """From: some@email.com

Hello, this is test from python.
"""

smtp_server = 'email-smtp.us-east-1.amazonaws.com'
smtp_username = 'SES_KEY'
smtp_password = 'SES_SECRET'
smtp_port = '25'
smtp_do_tls = True

server = smtplib.SMTP(
    host=smtp_server,
    port=smtp_port,
    timeout=10
)
server.set_debuglevel(10)
server.starttls()
server.ehlo()
server.login(smtp_username, smtp_password)
server.sendmail(fromaddr, toaddrs, msg)
print(server.quit())
</code></pre>
<p>It works as expected. I receive the email and the following output:</p>
<pre><code>send: 'ehlo servername.com\r\n'
reply: '250-email-smtp.amazonaws.com\r\n'
reply: '250-8BITMIME\r\n'
reply: '250-STARTTLS\r\n'
reply: '250-AUTH PLAIN LOGIN\r\n'
reply: '250 Ok\r\n'
reply: retcode (250); Msg: email-smtp.amazonaws.com
8BITMIME
STARTTLS
AUTH PLAIN LOGIN
Ok
send: 'AUTH PLAIN AEFLSU......HJiRkh3\r\n'
reply: '235 Authentication successful.\r\n'
reply: retcode (235); Msg: Authentication successful.
send: 'mail FROM:<some@email.com>\r\n'
reply: '250 Ok\r\n'
reply: retcode (250); Msg: Ok
send: 'rcpt TO:<other@email.com>\r\n'
reply: '250 Ok\r\n'
reply: retcode (250); Msg: Ok
send: 'data\r\n'
reply: '354 End data with <CR><LF>.<CR><LF>\r\n'
reply: retcode (354); Msg: End data with <CR><LF>.<CR><LF>
data: (354, 'End data with <CR><LF>.<CR><LF>')
send: 'From: some@email.com\r\n\r\nHello, this is test from python.\r\n.\r\n'
reply: '250 Ok 0100018a4b42947f-c2eed....96d8078-000000\r\n'
reply: retcode (250); Msg: Ok 0100018a4b429....52-e84d296d8078-000000
data: (250, 'Ok 0100018a4b42947f-......-e84d296d8078-000000')
send: 'quit\r\n'
reply: '221 Bye\r\n'
reply: retcode (221); Msg: Bye
(221, 'Bye')
</code></pre>
<p>But when I do the same thing with <code>boto3</code>, I'm getting an error:</p>
<pre><code>Error: An error occurred (SignatureDoesNotMatch) when calling the SendEmail operation: The request signature we calculated does not match the signature you provided. Check your AWS Secret Access Key and signing method. Consult the service documentation for details.
</code></pre>
<p>Snippet:</p>
<pre><code>import boto3
from botocore.exceptions import BotoCoreError, ClientError


def send_email():
    ses = boto3.client('ses', region_name='us-east-1', aws_access_key_id='key', aws_secret_access_key='secret')
    subject = 'Your Email Subject'
    body = """
    This is a test email from BOTO3 SES.
    """
    message = {"Subject": {"Data": subject},
               "Body": {"Text": {"Data": body}}}
    try:
        response = ses.send_email(Source='some@email.com',
                                  Destination={'ToAddresses': ['other@email.com']},
                                  Message=message)
    except (BotoCoreError, ClientError) as error:
        print(f"Error: {error}")


send_email()
</code></pre>
<p>Has anyone faced this issue before? I have created the identity in us-east-1, so I'm using that region, plus I have also tried giving the role full access to send email on all resources, but boto3 is still giving errors.</p>
<p>Any help is appreciated.</p>
| <python><amazon-web-services><boto3><amazon-ses> | 2023-08-31 11:49:24 | 1 | 667 | Krishh |
77,015,464 | 4,461,239 | Adding EXIF GPS data to .jpg files using Python and Piexif | <p>I am trying to write a script that adds EXIF GPS data to images using Python. When running the below script, I am getting an error returned from the <code>piexif.dump()</code> as follows:</p>
<pre><code>(venv) C:\projects\geo-photo>python test2.py
Traceback (most recent call last):
File "C:\projects\geo-photo\test2.py", line 31, in <module>
add_geolocation(image_path, latitude, longitude)
File "C:\projects\geo-photo\test2.py", line 21, in add_geolocation
exif_bytes = piexif.dump(exif_dict)
^^^^^^^^^^^^^^^^^^^^^^
File "C:\projects\geo-photo\venv\Lib\site-packages\piexif\_dump.py", line 74, in dump
gps_set = _dict_to_bytes(gps_ifd, "GPS", zeroth_length + exif_length)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\projects\geo-photo\venv\Lib\site-packages\piexif\_dump.py", line 335, in _dict_to_bytes
length_str, value_str, four_bytes_over = _value_to_bytes(raw_value,
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\projects\geo-photo\venv\Lib\site-packages\piexif\_dump.py", line 244, in _value_to_bytes
new_value += (struct.pack(">L", num) +
struct.error: argument out of range
</code></pre>
<p>Does anyone have any idea as to why this would be happening? Below is the full script. Any help appreciated.</p>
<pre><code>import piexif


def add_geolocation(image_path, latitude, longitude):
    exif_dict = piexif.load(image_path)

    # Convert latitude and longitude to degrees, minutes, seconds format
    def deg_to_dms(deg):
        d = int(deg)
        m = int((deg - d) * 60)
        s = int(((deg - d) * 60 - m) * 60)
        return ((d, 1), (m, 1), (s, 1))

    lat_dms = deg_to_dms(latitude)
    lon_dms = deg_to_dms(longitude)

    exif_dict["GPS"][piexif.GPSIFD.GPSLatitude] = lat_dms
    exif_dict["GPS"][piexif.GPSIFD.GPSLongitude] = lon_dms
    exif_dict["GPS"][piexif.GPSIFD.GPSLatitudeRef] = 'N' if latitude >= 0 else 'S'
    exif_dict["GPS"][piexif.GPSIFD.GPSLongitudeRef] = 'E' if longitude >= 0 else 'W'

    exif_bytes = piexif.dump(exif_dict)
    piexif.insert(exif_bytes, image_path)
    print("Geolocation data added to", image_path)


# Example usage
latitude = 34.0522     # Example latitude coordinates
longitude = -118.2437  # Example longitude coordinates
image_path = 'test.jpg'  # Path to your image

add_geolocation(image_path, latitude, longitude)
</code></pre>
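<p>The traceback bottoms out in <code>struct.pack(">L", num)</code>, which packs an unsigned 32-bit value. A quick stdlib check shows a negative number fails in exactly this way, which suggests (an assumption worth verifying) that the negative longitude's DMS tuple, which is never passed through <code>abs()</code> before being stored, is the trigger:</p>

```python
import struct

struct.pack(">L", 118)   # fine: non-negative fits an unsigned 32-bit long
failed = False
try:
    struct.pack(">L", -118)  # negative value cannot be packed as unsigned
except struct.error:
    failed = True
print(failed)  # -> True
```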
| <python><exif><piexif> | 2023-08-31 11:36:26 | 2 | 967 | Mike Resoli |
77,015,435 | 3,595,907 | What is causing this memory usage increase? | <p>I have code that reads in a sensor data file & outputs a series of spectrograms augmented with added noise. The problem I'm running into is that memory usage is increasing on every iteration of the spectrogram generation loop, until eventually I run out of memory.</p>
<p>In the picture below you can clearly see the memory usage per iteration & the small residual build up over time after each iteration. The 1st panel is the 1st 6~7 iterations, the 2nd panel was ~60th iteration, 3rd panel ~160th iteration.</p>
<p><a href="https://i.sstatic.net/RGAUg.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/RGAUg.png" alt="enter image description here" /></a></p>
<p>The code,</p>
<pre><code>import numpy as np
import matplotlib.pyplot as plt
import os

f = open("datafile_1685530800.txt", "r")

x = []
y = []
z = []

# Reads in data and stores in appropriate array
for idx, line in enumerate(f):
    if idx > 1:
        datum = [float(x) for x in line.split(",")]
        x.append(datum[2])
        y.append(datum[3])
        z.append(datum[4])

x = np.array(x)
y = np.array(y)
z = np.array(z)

x_std = np.std(x)
x_len = len(x)
y_std = np.std(y)
y_len = len(y)
z_std = np.std(z)
z_len = len(z)

# Add random noise from a Gaussian distribution centred at 0 within a std of each data series
# & generates a spectrogram
for idx in range(270):
    print(idx)
    x_r = list(x + np.random.normal(0, x_std, x_len))
    y_r = list(y + np.random.normal(0, y_std, y_len))
    z_r = list(z + np.random.normal(0, z_std, z_len))

    # For x axis
    os.chdir(r'X/x_norm')
    fig = plt.figure()
    ax = plt.subplot(111)
    _, _, _, im = ax.specgram(x_r)
    ax.axis('off')
    fig.tight_layout()
    fig_name = "x_" + str(idx) + ".png"
    fig.savefig(fig_name, bbox_inches='tight', pad_inches=0)
    plt.close()

    # For y axis
    os.chdir(r'../../Y/y_norm')
    fig = plt.figure()
    ax = plt.subplot(111)
    _, _, _, im = ax.specgram(y_r)
    ax.axis('off')
    fig.tight_layout()
    fig_name = "y_" + str(idx) + ".png"
    fig.savefig(fig_name, bbox_inches='tight', pad_inches=0)
    plt.close()

    # For z axis
    os.chdir(r'../../Z/z_norm')
    fig = plt.figure()
    ax = plt.subplot(111)
    _, _, _, im = ax.specgram(z_r)
    ax.axis('off')
    fig.tight_layout()
    fig_name = "z_" + str(idx) + ".png"
    fig.savefig(fig_name, bbox_inches='tight', pad_inches=0)
    plt.close()

    x_r = []
    y_r = []
    z_r = []
    os.chdir(r'../..')
</code></pre>
<p>I have reset the temporary lists at the end of the spectrogram generation loop, which has had an effect, but not enough to stop the memory usage creep. Could anyone explain what is happening here & how to mitigate it?</p>
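<p>One common mitigation to try (a sketch, not a drop-in replacement for the script): create a single figure once, clear and reuse it each iteration, and close it explicitly at the end, so pyplot's internal figure registry stays empty; rendering to an in-memory buffer here just keeps the example self-contained:</p>

```python
import io
import matplotlib
matplotlib.use("Agg")  # non-interactive backend; nothing kept for a GUI
import matplotlib.pyplot as plt
import numpy as np

rng = np.random.default_rng(0)
fig, ax = plt.subplots()  # one figure, reused on every iteration
buf = io.BytesIO()
for idx in range(3):
    ax.clear()            # wipe the previous spectrogram
    ax.specgram(rng.normal(size=4096))
    ax.axis("off")
    buf.seek(0)
    buf.truncate()
    fig.savefig(buf, format="png", bbox_inches="tight", pad_inches=0)
plt.close(fig)
print(len(plt.get_fignums()))  # no figures remain registered
```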
| <python><numpy><matplotlib><spectrogram> | 2023-08-31 11:32:30 | 0 | 3,687 | DrBwts |
77,015,204 | 2,223,661 | transform pandas data-frame grouping records on one col and counting frequencies of few other cols and making them as new cols | <p>Let's say we have this data-frame below:</p>
<pre><code>userid concessions reason contact date
1 0 aaa call
1 0 aaa chat
1 1 bbb call 01-01-1990
1 0 ccc mail
1 1 aaa call 31-12-1992
1 1 ccc call 15-06-1994
2 0 aaa call
2 0 aaa chat
3 1 bbb chat 01-05-1990
3 0 ccc mail
3 1 aaa mail 10-02-1991
3 1 ccc call 21-08-1995
</code></pre>
<p>I would want to transform this data-frame to something like this:</p>
<pre><code>userid concessions aaa bbb ccc call chat mail date
1 3 3 1 2 4 1 1 15-06-1994
2 0 2 0 0 1 1 0
3 3 1 1 2 1 1 2 21-08-1995
</code></pre>
<p>How can I achieve this? I have tried using <code>groupby()</code> and <code>value_counts()</code>. They give me the frequencies alright, but I am not quite sure how to transform the data-frame itself. I am quite new to pandas, and to Python in general.</p>
<p>Edit:
I don't think I explained this completely in my rush to post.</p>
<p>So, basically, I want to count the number of <code>concessions</code> for a <code>userid</code> and count the number of types of <code>reason</code> and <code>contact</code> appear for that <code>userid</code> and pick the latest <code>date</code>.</p>
| <python><pandas><dataframe> | 2023-08-31 10:59:14 | 3 | 509 | Mariners |
77,014,920 | 3,616,293 | Compute closest neighbor between 2 numpy arrays - KDTree | <p>I have 2 numpy arrays: a (smaller) array consisting of int values, and b (larger) array consisting of float values. The idea is that b contains float values which are close to some int values in a. As a toy example, I have the code below. The arrays aren't sorted like this originally; I use np.sort() on both a and b to get:</p>
<pre><code>a = np.array([35, 11, 48, 20, 13, 31, 49])
b = np.array([34.78, 34.8, 35.1, 34.99, 11.3, 10.7, 11.289, 18.78, 19.1, 20.05, 12.32, 12.87, 13.5, 31.03, 31.15, 29.87, 48.1, 48.5, 49.2])
</code></pre>
<p>For each element in a, there are multiple float values in b and <strong>the goal is to get the closest value in b for each element in a</strong>.</p>
<p>To naively achieve this, I use a for loop:</p>
<pre><code>for e in a:
    idx = np.abs(e - b).argsort()
    print(f"{e} has nearest match = {b[idx[0]]:.4f}")
'''
11 has nearest match = 11.2890
13 has nearest match = 12.8700
20 has nearest match = 20.0500
31 has nearest match = 31.0300
35 has nearest match = 34.9900
48 has nearest match = 48.1000
49 has nearest match = 49.2000
'''
</code></pre>
<p><em>There can be values in a not existing in b and vice-versa.</em></p>
<p><strong>a.size = 2040 and b.size = 1041901</strong></p>
<p>To construct a KD-Tree:</p>
<pre><code># Construct the KD-Tree and query the nearest neighbor
kd_tree = KDTree(data = np.expand_dims(a, 1))
dist_nn, idx_nn = kd_tree.query(x = np.expand_dims(b, 1), k = [1])
dist_nn.shape, idx_nn.shape
# ((19, 1), (19, 1))
</code></pre>
<p>To get nearest neighbor in 'b' with respect to 'a', I do:</p>
<pre><code>b[idx_nn]
'''
array([[10.7 ],
[10.7 ],
[10.7 ],
[11.289],
[11.289],
[11.289],
[11.3 ],
[11.3 ],
[11.3 ],
[12.32 ],
[12.32 ],
[12.32 ],
[12.87 ],
[12.87 ],
[12.87 ],
[12.87 ],
[13.5 ],
[13.5 ],
[18.78 ]])
'''
</code></pre>
<p><strong>Problems:</strong></p>
<ul>
<li>It seems that the KD-Tree doesn't go beyond value 20 in 'a'; [31, 35, 48, 49] in a are completely missed</li>
<li>And most of the nearest neighbors it finds are wrong when compared to the output of the for loop!</li>
</ul>
<p>What's going wrong?</p>
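<p>For reference, here is a sketch of the opposite query direction — building the tree on the large float array <code>b</code> and querying it with <code>a</code> — which reproduces what the for loop computes (one closest <code>b</code> value per element of <code>a</code>):</p>

```python
import numpy as np
from scipy.spatial import KDTree

a = np.array([35, 11, 48, 20, 13, 31, 49])
b = np.array([34.78, 34.8, 35.1, 34.99, 11.3, 10.7, 11.289, 18.78, 19.1,
              20.05, 12.32, 12.87, 13.5, 31.03, 31.15, 29.87, 48.1, 48.5, 49.2])

# Tree on the large array, queried with the small one:
# one nearest-neighbor result per element of a.
tree = KDTree(b[:, None])
dist, idx = tree.query(a[:, None], k=1)
nearest = b[idx]
```

<p>Note the direction: in the original snippet the tree was built on <code>a</code> and queried with <code>b</code>, which yields one result per element of <code>b</code> instead (hence the 19 rows and the "missing" values).</p>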
| <python><arrays><numpy><nearest-neighbor> | 2023-08-31 10:16:46 | 1 | 2,518 | Arun |
77,014,641 | 17,082,611 | Is there any difference between this keras script and my python class? | <p>I am wondering whether there is any difference between this Keras script:</p>
<pre><code>latent_dim = 2
latent_inputs = keras.Input(shape=(latent_dim,))
x = layers.Dense(7 * 7 * 64, activation="relu")(latent_inputs)
x = layers.Reshape((7, 7, 64))(x)
x = layers.Conv2DTranspose(64, 3, activation="relu", strides=2, padding="same")(x)
x = layers.Conv2DTranspose(32, 3, activation="relu", strides=2, padding="same")(x)
decoder_outputs = layers.Conv2DTranspose(1, 3, activation="sigmoid", padding="same")(x)
decoder = keras.Model(latent_inputs, decoder_outputs, name="decoder")
</code></pre>
<p>and my object-oriented implementation:</p>
<pre><code>class Decoder(keras.Model):
    def __init__(self, latent_dimension):
        super(Decoder, self).__init__()
        self.latent_dim = latent_dimension

        self.dense1 = layers.Dense(7 * 7 * 64, activation="relu")
        self.reshape = layers.Reshape((7, 7, 64))
        self.deconv1 = layers.Conv2DTranspose(64, 3, 2, "same", activation="relu")
        self.deconv2 = layers.Conv2DTranspose(32, 3, 2, "same", activation="relu")

    def call(self, inputs, training=None, mask=None):
        x = self.dense1(inputs)
        x = self.reshape(x)
        x = self.deconv1(x)
        decoder_outputs = self.deconv2(x)
        return decoder_outputs
</code></pre>
<p>invoked in</p>
<pre><code>if __name__ == '__main__':
    latent_dim = 2
    decoder = Decoder(latent_dim)
</code></pre>
| <python><machine-learning><keras> | 2023-08-31 09:40:55 | 1 | 481 | tail |
77,014,640 | 1,852,526 | Trying to parse xml throws FileNotFoundError | <p>I am new to Python and all I am doing is parsing a simple XML string. But when I do that, it says 'No such file or directory' at <code>ET.parse</code>. I also tried <code>ET.parse(ET.fromstring(xmlfile))</code>, but that leads to a slightly different exception.</p>
<p><a href="https://i.sstatic.net/gDTBY.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/gDTBY.png" alt="xml parse error" /></a></p>
<pre class="lang-py prettyprint-override"><code>import xml.etree.ElementTree as ET
thisDict={}
def parseXML(xmlfile):
    tree = ET.parse(xmlfile)
    root = tree.getroot()
    for package in root.findall('package'):
        if package is None:
            continue
        if 'id' in package.attrib:
            thisDict['package'] = package.get('id')
        if 'version' in package.attrib:
            thisDict['version'] = package.get('version')
    for x, y in thisDict.items():
        print(x, y)
xmlstr=f'''<?xml version="1.0" encoding="utf-8"?>
<packages>
<package id="Castle.Core" version="5.1.1" targetFramework="net481" />
<package id="Moq" version="4.18.4" targetFramework="net481" />
<package id="System.Runtime.CompilerServices.Unsafe" version="4.5.3" targetFramework="net481" />
<package id="System.Threading.Tasks.Extensions" version="4.5.4" targetFramework="net481" />
<package id="xunit" version="2.4.2" targetFramework="net481" />
<package id="xunit.abstractions" version="2.0.3" targetFramework="net481" />
<package id="xunit.analyzers" version="1.0.0" targetFramework="net481" />
<package id="xunit.assert" version="2.4.2" targetFramework="net481" />
<package id="xunit.core" version="2.4.2" targetFramework="net481" />
<package id="xunit.extensibility.core" version="2.4.2" targetFramework="net481" />
<package id="xunit.extensibility.execution" version="2.4.2" targetFramework="net481" />
<package id="xunit.runner.visualstudio" version="2.4.5" targetFramework="net481" developmentDependency="true" />
</packages>'''
parseXML(xmlstr)
</code></pre>
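<p>For what it's worth, a sketch of the string-parsing route: <code>ET.fromstring()</code> takes the XML text directly and returns the root element, so no file path (and no <code>getroot()</code>) is involved. The sample XML below is shortened:</p>

```python
import xml.etree.ElementTree as ET

xml_text = '''<?xml version="1.0" encoding="utf-8"?>
<packages>
  <package id="Castle.Core" version="5.1.1" targetFramework="net481" />
  <package id="Moq" version="4.18.4" targetFramework="net481" />
</packages>'''

# fromstring() parses text and returns the root element directly,
# whereas parse() expects a filename or file object.
root = ET.fromstring(xml_text)
packages = {p.get("id"): p.get("version") for p in root.findall("package")}
```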
| <python><xml><elementtree> | 2023-08-31 09:40:51 | 4 | 1,774 | nikhil |
77,014,538 | 1,833,326 | Databricks-Connect: Missing sparkContext | <p>I have issues using the newest version of databricks-connect (13.3.0). I would like to access the sparkContext and tried it as it worked for databricks-connect<13.0:</p>
<pre><code>from databricks.connect import DatabricksSession
spark = DatabricksSession.builder.getOrCreate()
spark.sparkContext
</code></pre>
<p>However, now I get the error:</p>
<blockquote>
<p>pyspark.errors.exceptions.base.PySparkNotImplementedError: [NOT_IMPLEMENTED] sparkContext() is not implemented.</p>
</blockquote>
<p>Can someone help?</p>
| <python><apache-spark><pyspark><databricks><databricks-connect> | 2023-08-31 09:27:56 | 1 | 1,018 | Lazloo Xp |
77,014,471 | 10,353,865 | An indexer that gets should return a copy | <p>In order to decide whether some operations return a copy or a view, I consulted the answers to the question: <a href="https://stackoverflow.com/questions/23296282/what-rules-does-pandas-use-to-generate-a-view-vs-a-copy">What rules does Pandas use to generate a view vs a copy?</a></p>
<p>It reads as follows:</p>
<p>Here's the rules, subsequent override:</p>
<pre><code>All operations generate a copy
If inplace=True is provided, it will modify in-place; only some operations support this
An indexer that sets, e.g. .loc/.iloc/.iat/.at will set inplace.
An indexer that gets on a single-dtyped object is almost always a view (depending on the memory layout it may not be that's why this is not reliable). This is mainly for efficiency. (the example from above is for .query; this will always return a copy as its evaluated by numexpr)
An indexer that gets on a multiple-dtyped object is always a copy.
</code></pre>
<pre><code>import pandas as pd
import numpy as np
df = pd.DataFrame({'A': 'foo bar foo bar foo bar foo foo'.split(),
                   'B': 'one one two three two two one three'.split(),
                   'C': np.arange(8), 'D': np.arange(8) * 2})
df2 = df.loc[0,]
df2[0] = 500
</code></pre>
<p>So this last line does not change df. Reasoning: The call "df.loc[]" is an indexer that gets on a multi-dtyped object. Therefore df2 is a copy.
Now, I thought I got the rules - however when using the following:</p>
<pre><code>df3 = df.iloc[0:2,]
df3[:] = 55
</code></pre>
<p>I get a) a warning saying "A value is trying to be set on a copy of a slice from a DataFrame." and b) the original df is changed - but only partially. The first two columns are unchanged, whereas the last two are changed to 55.</p>
<p>I don't understand this behavior wrt the rules outlined above. For instance, "df.iloc[0:2,]" is an indexer that gets on a multi-dtyped object and should therefore return a copy. So why do I get the warning that a value is set on a copy of a slice?</p>
| <python><pandas> | 2023-08-31 09:19:26 | 0 | 702 | P.Jo |
77,014,428 | 7,069,126 | What is the meaning of a `files` key in poetry.lock? | <p>I just installed a newer version of a package in a poetry project. Now, for every package listed in the <code>poetry.lock</code> file, there's an added <code>files</code> key, like this:</p>
<pre class="lang-ini prettyprint-override"><code>[[package]]
name = "..."
version = "..."
files = [
{...},
{...}
]
</code></pre>
<p>This wasn't there before, only after I installed the new version of the package, and introduces a lot of changes to the <code>poetry.lock</code>.</p>
<p>What is this <code>files</code> key and why was it created now when it wasn't there before? Can I prevent its creation?</p>
| <python><python-poetry> | 2023-08-31 09:14:09 | 1 | 2,764 | Tobias Feil |
77,014,360 | 2,123,099 | PostgreSQL Procedure using python write dataframe to table | <p>I am developing a PostgreSQL procedure using the plpython3u extension, which lets you use Python as a procedural language inside PostgreSQL.</p>
<p>With the following code using plpy, I am able to retrieve data from a table and put it into a pandas dataframe.</p>
<pre><code>CREATE OR REPLACE PROCEDURE public.plpy_proc_clas_full(
)
LANGUAGE 'plpython3u'
AS $BODY$
import pandas as pd
data_lt = plpy.execute('SELECT "key", "value" FROM public."<your-table>" ORDER BY "key"'); #PLyResult --> List or Dictionary
data_df_x = pd.DataFrame.from_records(data_lt)['key'];
data_df_y = pd.DataFrame.from_records(data_lt)['value'];
df = pd.concat([data_df_x, data_df_y], axis=1).values
return df;
$BODY$;
</code></pre>
<p>But how can I write back the pandas dataframe to a table (for example after a few data manipulations in python)?</p>
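<p>One possible direction (a sketch, not tested against a live server): <code>plpy.prepare()</code> and <code>plpy.execute()</code> can be used to insert the rows one by one. The helper below takes the <code>plpy</code> module as a parameter only so it can be exercised outside PostgreSQL; the table and column names are placeholders:</p>

```python
import pandas as pd

def write_back(plpy, df, table='public."your_table"'):
    # Prepare a parameterized INSERT once, then run it for every row.
    plan = plpy.prepare(
        f'INSERT INTO {table} ("key", "value") VALUES ($1, $2)',
        ["int4", "float8"],
    )
    for row in df.itertuples(index=False):
        plpy.execute(plan, [row.key, row.value])

df = pd.DataFrame({"key": [1, 2], "value": [10.0, 20.0]})
```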
| <python><sql><postgresql><procedure><plpython> | 2023-08-31 09:04:02 | 2 | 1,502 | Stavros Koureas |
77,014,292 | 5,437,090 | Merge multiple dictionaries using unique keys | <p>Given three
dictionaries of the form <code>{key: index}</code>:</p>
<pre><code>d1 = {'a':0, 'b':1,'c':2, 'd':3, 'e':4, 'f':5, 'g':6, 'h':7, 'i':8, 'j':9, 'k':10, 'l':11, 'm':12, 'n':13, 'o':14}
d2 = {'k':0, 'l':1,'m':2, 'n':3, 'o':4, 'p':5, 'q':6, 'r':7, 's':8, 't':9, 'u':10, 'v':11, 'w':12}
d3 = {'a':0, 'b':1,'c':2, 'd':3, 't': 4, 'u': 5, 'v': 6, 'w': 7, 'x': 8, 'y': 9, 'z': 10}
</code></pre>
<p>I would like to merge (combine?) them into one, using their unique keys and reindex their values.</p>
<p>Right now, I have the following working but inefficient solution for such small <code>dict</code>:</p>
<pre><code>%timeit -r 10 -n 10000 d_merged = {key: idx for idx, key in enumerate( sorted(list(set([k for d in [d1, d2, d3] for k in d.keys()]))) ) }
{'a': 0, 'b': 1, 'c': 2, 'd': 3, 'e': 4, 'f': 5, 'g': 6, 'h': 7, 'i': 8, 'j': 9, 'k': 10, 'l': 11, 'm': 12, 'n': 13, 'o': 14, 'p': 15, 'q': 16, 'r': 17, 's': 18, 't': 19, 'u': 20, 'v': 21, 'w': 22, 'x': 23, 'y': 24, 'z': 25}
# The slowest run took 4.36 times longer than the fastest. This could mean that an intermediate result is being cached.
# 34.6 µs ± 16.1 µs per loop (mean ± std. dev. of 10 runs, 10000 loops each)
</code></pre>
<p>Is there a better, more efficient approach for large dictionaries (more than 500k elements) that avoids explicit for loops?</p>
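<p>A sketch of one alternative that drops the inner comprehension: <code>set.union()</code> accepts the dictionaries directly (iterating a dict yields its keys), so the unique-key collection happens in C code with no explicit loops:</p>

```python
d1 = {'a': 0, 'b': 1, 'c': 2, 'd': 3, 'e': 4, 'f': 5, 'g': 6, 'h': 7,
      'i': 8, 'j': 9, 'k': 10, 'l': 11, 'm': 12, 'n': 13, 'o': 14}
d2 = {'k': 0, 'l': 1, 'm': 2, 'n': 3, 'o': 4, 'p': 5, 'q': 6, 'r': 7,
      's': 8, 't': 9, 'u': 10, 'v': 11, 'w': 12}
d3 = {'a': 0, 'b': 1, 'c': 2, 'd': 3, 't': 4, 'u': 5, 'v': 6, 'w': 7,
      'x': 8, 'y': 9, 'z': 10}

# Iterating a dict yields its keys, so union() deduplicates them directly.
keys = set().union(d1, d2, d3)
d_merged = {key: idx for idx, key in enumerate(sorted(keys))}
```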
| <python><performance> | 2023-08-31 08:55:28 | 3 | 1,621 | farid |
77,014,174 | 3,338,256 | Flask OSError: [Errno 5] Input/output error when creating zip of a folder and sending as a response | <p>I have an endpoint in a Flask application which downloads a folder by creating a zip of it and sending it in the response. Here is the code for it:</p>
<pre><code>def download_data(zip_file_name):
    current_time = datetime.datetime.now().strftime("%Y%m%d%H%M%S")
    shutil.make_archive(zip_file_name + current_time, 'zip', folder_path)

    @after_this_request
    def cleanup(response):
        try:
            os.remove(zip_file_name + current_time + ".zip")
            return response
        except Exception as ex:
            LOG.error(f"Error while removing the zipped dataset after downloading {ex}")

    return send_file(zip_file_name + current_time + ".zip")
</code></pre>
<p>Surprisingly, the API response is sent and has a 200 status code, but the logs show the error below:</p>
<pre><code>OSError: [Errno 5] Input/output error
data = self.file.read(self.buffer_size)
File "/usr/local/lib/python3.10/site-packages/werkzeug/wsgi.py", line 365, in __next__
Traceback (most recent call last):
Debugging middleware caught exception in streamed response at a point where response headers were already sent.
</code></pre>
<p>NOTE - This download is from OBS buckets and I am able to download/send individual files in the response.</p>
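<p>For context: the <code>after_this_request</code> hook can delete the archive while werkzeug is still streaming it, which matches the traceback above. One sketch that sidesteps the temp file entirely (assuming the archive fits in memory) builds the zip in a <code>BytesIO</code> buffer, which can then be handed to <code>send_file(buf, download_name="data.zip", as_attachment=True)</code>:</p>

```python
import io
import zipfile

def zip_in_memory(files):
    """files: iterable of (archive_name, bytes). Returns a seekable buffer."""
    buf = io.BytesIO()
    with zipfile.ZipFile(buf, "w", zipfile.ZIP_DEFLATED) as zf:
        for name, data in files:
            zf.writestr(name, data)
    buf.seek(0)  # rewind so the response can read it from the start
    return buf

buf = zip_in_memory([("a.txt", b"hello"), ("b.txt", b"world")])
```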
| <python><flask> | 2023-08-31 08:40:17 | 0 | 566 | Prakash |
77,014,017 | 16,577,671 | Better readable labels in pygal | <p>I would like to visualize data with Pygal. For this I have to use a radar chart; unfortunately the x-axis labels are very long, which makes the chart hard to read and causes the labels to be cut off (see picture). <a href="https://i.sstatic.net/8ddmg.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/8ddmg.png" alt="(Generated plot)" /></a></p>
<pre><code>import pygal
from random import randint
from pygal.style import Style

radar_chart = pygal.Radar(fill=True)
radar_chart.style = Style(
    dots_size=14,
    label_font_size=10,
    x_label_rotation=20,
    human_readable=True,
    background='white',
)

# example code
skill_list = ["aaaaaaaaaaaaaaabbbbbbbbbbb"] * 17
radar_chart.add(skill_list,
                [randint(1, 10) for _ in range(0, 17)])
radar_chart.x_labels = skill_list
#......
</code></pre>
<p>How can I make the labels more readable without using a legend ?</p>
<p>Thanks in advance</p>
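<p>One workaround (a sketch; whether pygal's own <code>truncate_label</code> config option is available depends on your version) is to shorten the labels yourself before handing them to the chart:</p>

```python
def shorten(label, width=12):
    # Cut long labels and mark the cut with an ellipsis.
    return label if len(label) <= width else label[: width - 1] + "…"

labels = ["aaaaaaaaaaaaaaabbbbbbbbbbb"] * 3 + ["short"]
short_labels = [shorten(s) for s in labels]
# radar_chart.x_labels = short_labels
```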
| <python><radar-chart><pygal><spider-chart> | 2023-08-31 08:18:32 | 1 | 1,102 | ChsharpNewbie |
77,013,954 | 4,402,431 | How to write a file and then delete it | <p>I am a C# programmer and need to switch to Python now. It is my 2nd day so far.
I need to write a file, read it and then delete it.
In C# that's easy peasy.</p>
<pre><code>string strPath = @"C:\temp\test.txt";
using (StreamWriter writer = new StreamWriter(strPath))
{ writer.WriteLine("XYZ");}
string readText = File.ReadAllText(strPath);
File.Delete(strPath);
</code></pre>
<p>the stream is closed by the using</p>
<p>In python I came up to this:</p>
<pre><code>with open(strPath, "xt") as f:
    f.write("XYZ")
    f.close()

f = open(strPath, "r")
strReadFile = f.read()
os.remove(strPath)
</code></pre>
<p>But try as I might, I still get the error telling me that the file is in use.
I therefore googled "Python write read and delete a file", but nothing helpful came up.</p>
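<p>For reference, a sketch of the pattern that keeps every handle closed before the delete (assuming exclusive-create mode is not required): each <code>with</code> block closes its file on exit, so nothing is "in use" when <code>os.remove</code> runs:</p>

```python
import os
import tempfile

path = os.path.join(tempfile.gettempdir(), "test.txt")

with open(path, "w") as f:   # closed automatically at the end of the block
    f.write("XYZ")

with open(path, "r") as f:   # a separate, also auto-closed handle for reading
    contents = f.read()

os.remove(path)              # no open handles remain, so this succeeds
```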
<p>Thanks
Patrick</p>
| <python><file><readfile><delete-file><writefile> | 2023-08-31 08:09:44 | 3 | 2,623 | Patrick |
77,013,746 | 13,994,829 | How to release memory correctly in streamlit app? | <p>Recently, while developing a Streamlit app, I have found that the app frequently crashes and requires manual rebooting.</p>
<p>After spending some time, I identified the issue as <strong>"exceeding RAM"</strong>. The free tier provides only 1 GB of RAM, and my app easily surpasses this limit when multiple users are using it simultaneously.</p>
<h3>Application of the App</h3>
<ol>
<li>Using langchain to build a document GPT.</li>
<li>Users upload PDFs and start asking questions.</li>
</ol>
<h3>Main Problematic Code</h3>
<p><a href="https://github.com/Lin-jun-xiang/docGPT-streamlit/tree/Issue%232" rel="nofollow noreferrer">Complete code from Github</a></p>
<p>app.py</p>
<pre class="lang-py prettyprint-override"><code>model = None

doc_container = st.container()
with doc_container:
    # when the user uploads a pdf via upload_and_process_pdf(),
    # create_doc_gpt() executes successfully
    docs = upload_and_process_pdf()
    model = create_doc_gpt(docs)
    del docs
    st.write('---')
</code></pre>
<pre class="lang-py prettyprint-override"><code>def create_doc_gpt(docs):
    if not docs:
        return
    ...  # instantiate docGPT, which will use HuggingFaceEmbedding
</code></pre>
<h3>What I've Tried</h3>
<p>I attempted to identify where the issue in the code lies and whether optimization is possible. I conducted the following experiments:</p>
<ol>
<li><p>Used Windows Task Manager's detailed view.</p>
</li>
<li><p>Executed the app (streamlit run <code>app.py</code>) and simultaneously identified its PID, observing memory usage.</p>
</li>
<li><p>When opening the app, memory usage occupied <code>150,000 KB</code>.</p>
</li>
<li><p>Based on the simplified code above, after uploading a PDF, the docGPT instance (my model) is instantiated. At this point, memory rapidly spikes to <code>1,000,000 KB</code>. I suspect this is due to <code>HuggingFaceEmbedding</code> causing this. (When I switched to a lighter embedding, memory decreased significantly)</p>
</li>
<li><p><strong>The main source of memory usage is the model instance, yet when I re-upload the same PDF, memory increases again to <code>1,750,000 KB</code>. It looks as if two models are occupying memory at once.</strong></p>
</li>
<li><p>Additionally, I have attempted to repeatedly upload the same PDF on my app. After uploading the 8000KB file approximately 4 times, the app crashes.</p>
</li>
</ol>
<h3>Question</h3>
<p>How should I correctly release the initially instantiated model?</p>
<p>If I use <code>st.cache_resource</code> to decorate <code>create_doc_gpt(docs)</code>, I have a few points of confusion as follows:</p>
<ol>
<li><p>When the same user uploads the first PDF, the embedding is performed, and the model is returned. At this point, does the app create a cache and occupy memory? If the user uploads a new PDF again, will the app go through embedding and returning the model, creating a new cache and occupying memory again?</p>
</li>
<li><p>If the assumption in #1 is correct, can I use the <code>ttl</code> and <code>max_entries</code> parameters to avoid excessive caching?</p>
</li>
<li><p>If the assumptions in #1 and #2 are correct, when there are two users simultaneously, and my <code>max_entries</code> is set to 2, will the cached models they create be counted separately?</p>
</li>
</ol>
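<p>Regarding #2, a bounded cache does cap the number of live models. The sketch below uses <code>functools.lru_cache</code> only as a stand-in to illustrate the eviction behaviour that, as I understand it, <code>st.cache_resource(ttl=..., max_entries=...)</code> provides (the Streamlit decorator itself needs a running app to observe):</p>

```python
from functools import lru_cache

@lru_cache(maxsize=2)           # keep at most two models alive at once
def create_model(doc_fingerprint):
    return object()             # stand-in for the heavy embedding model

m1 = create_model("pdf-A")
assert create_model("pdf-A") is m1   # same document -> cache hit, no re-embedding

create_model("pdf-B")
create_model("pdf-C")                # least-recently-used entry ("pdf-A") is evicted
```

<p>After the eviction, asking for "pdf-A" again rebuilds the model, so memory stays bounded by the two live entries rather than growing with every upload.</p>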
<hr />
<p>I'm unsure if this type of question is appropriate to ask here. If it's against the rules, I'm willing to delete the post and seek help elsewhere.</p>
| <python><memory><streamlit><langchain> | 2023-08-31 07:41:10 | 1 | 545 | Xiang |
77,013,491 | 16,383,578 | How to compare a three dimensional array with a two dimensional array in NumPy? | <p>I have a three-dimensional array of shape <code>(height, width, 3)</code>, it represents an BGR image, the values are floats in [0, 1].</p>
<p>After some operation on the pixels I obtain a two-dimensional array of shape <code>(height, width)</code>, the values in the array are the results of some operation performed on each individual pixel.</p>
<p>Now I want to compare the original image with the result, more specifically I want to compare each of the BGR components of each pixel with the value of the result array located at the same coordinate.</p>
<p>For instance, I want to know which of the BGR component is the greatest in each pixel:</p>
<pre class="lang-py prettyprint-override"><code>import numpy as np
img = np.random.random((360, 640, 3))
maxa = img.max(axis=-1)
</code></pre>
<p>Now I want to compare <code>img</code> with <code>maxa</code>, I know <code>img == maxa</code> doesn't work:</p>
<pre><code>In [335]: img == maxa
<ipython-input-335-acb909814b9a>:1: DeprecationWarning: elementwise comparison failed; this will raise an error in the future.
img == maxa
Out[335]: False
</code></pre>
<p>I am not good at describing things, so I will show you what I intend to do in Python:</p>
<pre><code>result = [[[c == maxa[y, x] for c in img[y, x]] for x in range(640)] for y in range(360)]
</code></pre>
<p>Obviously it is inefficient but I want to demonstrate I know the logic.</p>
<p>I have managed to do the same in NumPy, but I think it can be more efficient:</p>
<pre><code>img == np.dstack([maxa, maxa, maxa])
</code></pre>
<p>I have confirmed my comprehension's correctness:</p>
<pre><code>In [339]: result = [[[c == maxa[y, x] for c in img[y, x]] for x in range(640)] for y in range(360)]
...: np.array_equal(result, img == np.dstack([maxa, maxa, maxa]))
Out[339]: True
</code></pre>
<p>And I have benchmarked my methods:</p>
<pre><code>In [340]: %timeit [[[c == maxa[y, x] for c in img[y, x]] for x in range(640)] for y in range(360)]
509 ms ± 16.6 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
In [341]: maxals = maxa.tolist()
In [342]: imgls = img.tolist()
In [343]: %timeit [[[c == maxals[y][x] for c in imgls[y][x]] for x in range(640)] for y in range(360)]
156 ms ± 2.57 ms per loop (mean ± std. dev. of 7 runs, 10 loops each)
In [344]: %timeit img == np.dstack([maxa, maxa, maxa])
4.25 ms ± 121 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
</code></pre>
<p>What is a faster method?</p>
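<p>For comparison, broadcasting against a trailing length-1 axis avoids materialising the stacked copy at all and is typically faster still:</p>

```python
import numpy as np

rng = np.random.default_rng(0)
img = rng.random((360, 640, 3))
maxa = img.max(axis=-1)

# (360, 640, 1) broadcasts against (360, 640, 3): no dstack copy is needed.
result = img == maxa[..., None]
```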
| <python><arrays><numpy> | 2023-08-31 07:00:58 | 2 | 3,930 | Ξένη Γήινος |
77,013,454 | 3,179,698 | python cross platform download all the files(not all needed so unwanted) | <p>Hi, I want to download the Linux version of a package in a Windows environment (to install on an offline Linux server).</p>
<p>I used the command:</p>
<pre><code>pip download --only-binary=:all: --platform linux_x86_64 --python-version 36 hyperopt
</code></pre>
<p>to download the hyperopt package and its dependencies so I can install them offline on the Linux environment. I restrict the Python version to 3.6, which is the Python version on Linux.</p>
<p>However, it seems to have downloaded every hyperopt wheel it could get, most of which I suspect are not needed.</p>
<p>Is there any way to download only the specific package and dependencies that I could install and use with the current Python version (3.6.13)?</p>
<p><a href="https://i.sstatic.net/h7nDx.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/h7nDx.png" alt="see the pic for details" /></a></p>
| <python><pip><python-wheel> | 2023-08-31 06:55:51 | 1 | 1,504 | cloudscomputes |
77,013,434 | 1,354,517 | Trying to run tensorflow graphics home page demo from command line throws an error | <p>I am a bit of an ML / TF / Python noob but have made good progress, i.e. I can train basic models and use them for inference with TF 2.13 (non-GPU version).</p>
<p>While trying to run the example here - <a href="https://www.tensorflow.org/graphics" rel="nofollow noreferrer">TensorFlow Graphics</a> , I am encoutering an error -</p>
<pre><code>~/.pyenv/versions/3.9.0/lib/python3.9/site-packages/tensorflow_graphics/notebooks/threejs_visualization.py", line 74, in build_context
_publish.javascript(url=threejs_url + 'three.min.js')
NameError: name '_publish' is not defined
</code></pre>
<p>I think the sample is missing a step but I am clueless how it can be fixed.</p>
<p>Is it supposed to run only inside jupyter notebook ?</p>
<p>I am trying it on command line. How can I run it ?</p>
| <python><tensorflow><tensorflow2.0> | 2023-08-31 06:51:57 | 0 | 1,117 | Gautam |
77,013,254 | 2,038,089 | Print uid to screen/browser in Python | <p>I'm currently testing mod_wsgi, and trying to print my uid to screen in my browser.</p>
<pre><code>import os
uid = os.getuid()
print("uid being used:", uid)

def application(environ, start_response):
    status = '200 OK'
    output = b'Hello World!'

    response_headers = [('Content-type', 'text/plain'),
                        ('Content-Length', str(len(output)))]
    start_response(status, response_headers)

    return [output]
</code></pre>
<p>The expected result was that I would get my uid printed on screen, so I know what user the mod_wsgi script is running as, but currently only Hello World is printed to the screen.</p>
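<p>For reference, <code>print()</code> output from a WSGI script normally lands in the server's error log, not in the HTTP response. A sketch that puts the uid into the response body instead:</p>

```python
import os

def application(environ, start_response):
    # Build the body inside the handler so the uid reaches the browser;
    # print() would only write to the server log.
    output = f"uid being used: {os.getuid()}".encode("utf-8")

    start_response("200 OK", [("Content-Type", "text/plain"),
                              ("Content-Length", str(len(output)))])
    return [output]
```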
| <python><apache><mod-wsgi> | 2023-08-31 06:18:49 | 1 | 382 | Ahmad Bilal |
77,013,226 | 4,958,156 | Using Python multiprocessing Process and Queue to abort function | <p>I have a function called <code>some_function(a,b)</code> which takes in two parameters a and b, and does something intensive, and then stores a Spark dataframe as an output. I want to put a timeout on some_function -> if it takes longer than 5 mins (300 seconds), I want to abort, and move to the next 'i' in the main for loop. The following method doesn't seem to work, <code>some_function(a,b)</code> appears to run forever instead of stopping after 5 mins. Maybe there's something wrong with the Queue?</p>
<pre><code>from multiprocessing import Process, Queue
def some_function(a, b):
    # do some intensive process with a and b to generate a Spark dataframe
    # store the Spark dataframe output in Q
    Q.put(some_spark_df)

def main():
    for i in range(0, 10):
        Q = Queue()
        a = ...  # something
        b = ...  # something
        p = Process(target=some_function, args=(a, b))
        p.start()

        # timeout value in seconds (300s = 5min)
        p.join(300)

        # if still running after above time
        if p.is_alive():
            # terminate
            p.terminate()
            p.join()

        # get output from some_function(a, b)
        result_df = Q.get()
</code></pre>
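<p>One thing worth checking: <code>Q</code> is created inside <code>main()</code> but referenced as a global inside <code>some_function</code>, so the child may not see the intended queue (and under the spawn start method it will not exist at all). A sketch that passes the queue explicitly — the worker below just adds two numbers as a stand-in for the heavy Spark step:</p>

```python
from multiprocessing import Process, Queue

def worker(q, a, b):
    q.put(a + b)  # stand-in for the expensive computation

def run_with_timeout(a, b, timeout=300):
    q = Queue()                       # pass the queue in explicitly
    p = Process(target=worker, args=(q, a, b))
    p.start()
    p.join(timeout)
    if p.is_alive():                  # still running after the timeout
        p.terminate()
        p.join()
        return None                   # caller skips this iteration
    return q.get()
```

<p>Note that for large results a child process blocks until its queue buffer is drained, so in that case <code>q.get()</code> should happen before the final <code>join()</code>.</p>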
| <python><apache-spark><multiprocessing><python-multiprocessing> | 2023-08-31 06:13:46 | 1 | 1,294 | Programmer |
77,013,196 | 5,561,472 | Error: DocumentSnapshot.get() got an unexpected keyword argument 'transaction' | <p>I need to update the documents in collection. But I would like to run transaction for every document:</p>
<pre class="lang-py prettyprint-override"><code>async def example():
    @google.cloud.firestore.async_transactional
    async def update_in_transaction(transaction, city_ref, data):
        snapshot = await city_ref.get(
            transaction=transaction
        )  # <= Error: DocumentSnapshot.get() got an unexpected keyword argument 'transaction'
        transaction.update(city_ref, data)

    # prepare data
    db: google.cloud.firestore.AsyncClient = AsyncClient()
    await db.document("1/1").set({"a": 1})
    await db.document("1/2").set({"a": 2})

    # try to update a document outside of the loop
    city_ref = db.collection("1").document("1")
    transaction = db.transaction()
    await update_in_transaction(transaction, city_ref, {"a": 3})  # <= no error here

    # try to update within the loop
    docs = db.collection("1").stream()
    async for doc in docs:
        transaction = db.transaction()
        await update_in_transaction(transaction, doc, {"a": 3})  # <= error here

asyncio.run(example())
</code></pre>
<p>I am getting a strange error <code>DocumentSnapshot.get() got an unexpected keyword argument 'transaction'</code> when trying to do it. Why is that?</p>
| <python><asynchronous><google-cloud-firestore> | 2023-08-31 06:09:41 | 1 | 6,639 | Andrey |
77,013,101 | 6,729,591 | Going through model's each layer individually gives other result than using forward | <p>I want to tinker with an intermediate layer result from a pretrained resnet. I noticed a drop in accuracy when just iterating over the layers of a specific block. Upon further inspection I saw that the intermediate results differ depending on whether I iterate or use the built-in forward function.</p>
<pre class="lang-py prettyprint-override"><code>net = resnet18()
before = torch.nn.Sequential(*list(net.children())[:7])
middle = list(net.children())[7]
</code></pre>
<p><code>middle</code> is a basic block that looks like that:</p>
<pre><code>Sequential(
(0): BasicBlock(
(conv1): Conv2d(256, 512, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
(bn1): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(relu): ReLU(inplace=True)
(conv2): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(bn2): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(downsample): Sequential(
(0): Conv2d(256, 512, kernel_size=(1, 1), stride=(2, 2), bias=False)
(1): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
)
)
(1): BasicBlock(
(conv1): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(bn1): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(relu): ReLU(inplace=True)
(conv2): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(bn2): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
)
)
</code></pre>
<p>What I am trying to do is get the intermediate result from the second <code>BasicBlock['conv1']</code>. Here's what I wrote:</p>
<pre class="lang-py prettyprint-override"><code>x = self.before(x)
bb1 = list(self.middle.children())[0]
x = bb1(x)
bb2 = list(self.middle.children())[1]
print(bb2(x))

for i, layer in enumerate(bb2.children()):
    x = layer(x)
    if i == 0:
        z = copy.copy(x)
print(x)
</code></pre>
<p>For the first print statement I get:</p>
<pre><code>tensor([[[[1.1271e-01, 1.1205e-01],
[1.6054e-01, 1.4965e-01]],...)
</code></pre>
<p>For the second print statement I get:</p>
<pre><code>tensor([[[[ 0.0533, 0.0498],
[ 0.0607, 0.0574]],...)
</code></pre>
<p>In my understanding they should be exactly the same. What is happening here?</p>
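<p>A likely explanation (hedged, but consistent with the numbers above): iterating over <code>children()</code> replays the submodules in registration order and silently drops everything that only exists inside <code>forward()</code> — for a <code>BasicBlock</code> that is the residual addition and the final ReLU. A minimal stand-in block makes the gap visible without needing torchvision:</p>

```python
import torch
from torch import nn

class TinyBasicBlock(nn.Module):
    """Minimal residual block: children() sees conv1/relu/conv2,
    but not the skip connection applied inside forward()."""
    def __init__(self, channels):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
        self.relu = nn.ReLU()
        self.conv2 = nn.Conv2d(channels, channels, 3, padding=1, bias=False)

    def forward(self, x):
        out = self.conv2(self.relu(self.conv1(x)))
        return self.relu(out + x)          # the part children() cannot see

torch.manual_seed(0)
block = TinyBasicBlock(4).eval()
x = torch.randn(1, 4, 5, 5)

chained = x
for layer in block.children():             # applies conv1 -> relu -> conv2 only
    chained = layer(chained)

with torch.no_grad():
    different = not torch.allclose(block(x), chained)
```

<p>So the loop in the question computes <code>conv2(relu(conv1(x)))</code> for the second block, while <code>bb2(x)</code> additionally adds the identity and applies the trailing ReLU, which explains both the different values and the all-positive outputs of the first print.</p>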
| <python><machine-learning><deep-learning><pytorch> | 2023-08-31 05:44:09 | 1 | 1,404 | Dr. Prof. Patrick |
77,012,862 | 16,220,410 | why are even numbers retained in list x? | <p>I found this code and I am confused about why the even numbers are retained in <strong>list x</strong> when the remove method is at the top of the for loop. I tried printing the list for each iteration: the first iteration is 1, which gets removed from x and then appended to y, but the next value of i is 3, not 2.</p>
<pre><code>x = [1,2,3,4,5,6,7,8,9,10]
y = []
for i in x:
    x.remove(i)
    if i % 2 == 0:
        continue
    elif len(x) == 5:
        break
    y.append(i)
print(x) # output [2, 4, 6, 8, 10]
print(y) # output [1, 3, 5, 7]
</code></pre>
<p>Adding print statements before the remove method:</p>
<pre><code>...
print(i)
print(x)
x.remove(i)
...
</code></pre>
<p>As you can see, the next iteration is 3, not 2, even though 2 is still in list x.</p>
<pre><code>1
[1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
3
[2, 3, 4, 5, 6, 7, 8, 9, 10]
5
[2, 4, 5, 6, 7, 8, 9, 10]
7
[2, 4, 6, 7, 8, 9, 10]
9
[2, 4, 6, 8, 9, 10]
</code></pre>
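<p>What the output above shows is the classic effect of mutating a list while iterating over it: removing the current element shifts everything left, but the loop's internal index still advances, so the next item is skipped. A sketch that iterates over a snapshot instead (dropping the break condition for clarity):</p>

```python
x = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
y = []

for i in list(x):        # iterate over a copy, so removals can't skip items
    x.remove(i)
    if i % 2 == 0:
        continue
    y.append(i)
```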
| <python><for-loop> | 2023-08-31 04:32:52 | 1 | 1,277 | k1dr0ck |
77,012,845 | 2,954,547 | Select and/or deselect tests that use a particular fixture? | <p>I see that there are some questions about conditionally <em>skipping</em> tests with certain fixtures: <a href="https://stackoverflow.com/q/46852934/2954547">Is there a way to skip a pytest fixture?</a> and <a href="https://stackoverflow.com/q/70956586/2954547">Skip tests using a particular pytest fixture</a>.</p>
<p>However I am looking for a straightforward way to <em>select or deselect</em> tests that use a particular fixture. I'd like to avoid manually marking all such tests, because the fixture might be used indirectly, and this is generally error-prone and not scalable.</p>
<p>Ideally, I'd like to be able to run something like <code>pytest -m 'not uses_foo'</code>, and all tests that require the <code>foo</code> fixture would be deselected.</p>
<p>Again, this question is not about <em>skipping</em> the tests. I am specifically asking about a way to make them (de-)selectable as one might normally be able to do with marks.</p>
| <python><pytest> | 2023-08-31 04:25:09 | 2 | 14,083 | shadowtalker |
77,012,820 | 4,156,036 | Multithreaded i += 1 has different behavior between Windows and Linux? | <pre><code>import threading
BASE = 1000000
i = 0
def test():
    global i
    for x in range(BASE):
        i += 1

threads = [threading.Thread(target=test) for t in range(10)]

for t in threads:
    t.start()
for t in threads:
    t.join()
assert i == BASE*10, (BASE*10, i)
</code></pre>
<p>The code above can always pass on Windows but always fails on Linux. Why? Shouldn't they have the same behavior?</p>
<p>I expected both Windows and Linux to fail the test, because <code>i += 1</code> is not an atomic operation.</p>
<p>Is Windows passing the test a coincidence, or is there something different between Windows and Linux?</p>
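<p>Whichever platform behavior you observe, it depends on interpreter build and OS scheduling: <code>i += 1</code> compiles to separate load/add/store steps, so any interleaving is permitted. The portable fix is to serialize those steps. A sketch of the same test with a lock (and a smaller <code>BASE</code>, just for speed):</p>

```python
import threading

BASE = 10_000          # smaller than the original 1_000_000, for speed
i = 0
lock = threading.Lock()

def test():
    global i
    for _ in range(BASE):
        with lock:     # serialize the load / add / store of i
            i += 1

threads = [threading.Thread(target=test) for _ in range(10)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(i == BASE * 10)  # True on both platforms
```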
| <python><multithreading> | 2023-08-31 04:19:31 | 1 | 695 | Lane |
77,012,813 | 5,046,887 | Recursive tracing issue when using opentelemetry-instrumentation-psycopg2 with psycopg2 | <h3>Problem Description</h3>
<p>When I used the <code>opentelemetry-instrumentation-psycopg2</code> library with <code>psycopg2's ThreadedConnectionPool</code>, I encountered a recursion issue. The problem seems to occur intermittently when concurrency is involved, so I decided to reproduce it.
<a href="https://i.sstatic.net/RI97i.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/RI97i.png" alt="enter image description here" /></a>
<a href="https://i.sstatic.net/bHphP.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/bHphP.png" alt="enter image description here" /></a></p>
<h3>Reproduce problem</h3>
<ul>
<li>otel-collector-config.yaml</li>
</ul>
<pre class="lang-yaml prettyprint-override"><code>receivers:
otlp:
protocols:
grpc:
http:
exporters:
logging:
loglevel: debug
jaeger:
endpoint: jaeger-all-in-one:14250
tls:
insecure: true
processors:
batch:
service:
pipelines:
traces:
receivers: [otlp]
exporters: [logging, jaeger]
processors: [batch]
</code></pre>
<ul>
<li>docker-compose.yaml</li>
</ul>
<pre class="lang-yaml prettyprint-override"><code>version: "3"
services:
jaeger-all-in-one:
image: jaegertracing/all-in-one:1.42
restart: always
environment:
- COLLECTOR_OTLP_ENABLED=true
ports:
- "16686:16686" # server frontend
- "14268:14268" # HTTP collector
- "14250:14250" # gRPC collector
otel-collector:
image: otel/opentelemetry-collector:0.72.0
restart: always
command: ["--config=/etc/otel-collector-config.yaml"]
volumes:
- ./otel-collector-config.yaml:/etc/otel-collector-config.yaml
ports:
- "4317:4317" # OTLP gRPC receiver
- "4318:4318" # OTLP Http receiver
depends_on:
- jaeger-all-in-one
# database
postgres:
image: postgres:13.2-alpine
environment:
- POSTGRES_USER=root
- POSTGRES_PASSWORD=12345678
- POSTGRES_DB=example
ports:
- "5432:5432"
</code></pre>
<ul>
<li>requirements.txt</li>
</ul>
<pre><code>psycopg2==2.9.7
opentelemetry-instrumentation-psycopg2>=0.33b0
opentelemetry-exporter-otlp>=1.12.0
</code></pre>
<ul>
<li>index.py</li>
</ul>
<pre class="lang-py prettyprint-override"><code>import logging
import threading
import time
from psycopg2 import pool
import psycopg2
from opentelemetry import trace
from opentelemetry.instrumentation.psycopg2 import Psycopg2Instrumentor
from opentelemetry.sdk.resources import SERVICE_NAME, Resource
from opentelemetry.exporter.otlp.proto.grpc.trace_exporter import OTLPSpanExporter
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor
# this database use the connection poll
class PostgresDatabasePool:
def __init__(self, minconn, maxconn, host, database, user, password, port):
self.db_pool = pool.ThreadedConnectionPool(
minconn=minconn,
maxconn=maxconn,
host=host,
database=database,
user=user,
password=password,
port=port
)
def execute_query(self, query):
conn = None
cursor = None
try:
conn = self.db_pool.getconn()
cursor = conn.cursor()
cursor.execute(query)
logging.info(f"execute command: {query}")
except Exception as e:
logging.error(f"SQL error: {e}")
finally:
if cursor:
cursor.close()
if conn:
self.db_pool.putconn(conn)
# this database use the default connect
class PostgresDatabase:
def __init__(self, host, database, user, password, port):
self.conn = psycopg2.connect(
host=host,
database=database,
user=user,
password=password,
port=port
)
self.conn.autocommit = True
self.cursor = self.conn.cursor()
def execute_query(self, query):
if not self.cursor:
logging.warning("Please connect first")
return
try:
self.cursor.execute(query)
logging.info(f"execute command: {query}")
except Exception as e:
logging.error(f"SQL error: {e}")
def delete1_with_db_pool(thread_name):
while True:
with tracer.start_as_current_span("delete1_with_db_pool", kind=trace.SpanKind.INTERNAL):
dbPool.execute_query("DELETE FROM public.test1;")
dbPool.execute_query("DELETE FROM public.test1;")
dbPool.execute_query("DELETE FROM public.test1;")
dbPool.execute_query("DELETE FROM public.test1;")
dbPool.execute_query("DELETE FROM public.test1;")
time.sleep(5)
def delete2_with_db_pool(thread_name):
while True:
with tracer.start_as_current_span("delete2_with_db_pool", kind=trace.SpanKind.INTERNAL):
dbPool.execute_query("DELETE FROM public.test2;")
dbPool.execute_query("DELETE FROM public.test2;")
dbPool.execute_query("DELETE FROM public.test2;")
dbPool.execute_query("DELETE FROM public.test2;")
dbPool.execute_query("DELETE FROM public.test2;")
time.sleep(5)
def delete1(thread_name):
while True:
with tracer.start_as_current_span("delete1", kind=trace.SpanKind.INTERNAL):
db.execute_query("DELETE FROM public.test1;")
db.execute_query("DELETE FROM public.test1;")
db.execute_query("DELETE FROM public.test1;")
db.execute_query("DELETE FROM public.test1;")
db.execute_query("DELETE FROM public.test1;")
time.sleep(5)
def delete2(thread_name):
while True:
with tracer.start_as_current_span("delete2", kind=trace.SpanKind.INTERNAL):
db.execute_query("DELETE FROM public.test2;")
db.execute_query("DELETE FROM public.test2;")
db.execute_query("DELETE FROM public.test2;")
db.execute_query("DELETE FROM public.test2;")
db.execute_query("DELETE FROM public.test2;")
time.sleep(5)
# tracing
resource = Resource(attributes={SERVICE_NAME: 'Demo_Bug'})
provider = TracerProvider(resource=resource)
otlp_exporter = OTLPSpanExporter(endpoint='localhost:4317', insecure=True)
provider.add_span_processor(BatchSpanProcessor(otlp_exporter))
trace.set_tracer_provider(provider)
tracer = trace.get_tracer("test", "1.0.0")
# Psycopg2
Psycopg2Instrumentor().instrument()
# init database
dbPool = PostgresDatabasePool(1, 5, "localhost", "example", "root", "12345678", 5432)
db = PostgresDatabase("localhost", "example", "root", "12345678", 5432)
# create table
db.execute_query("CREATE TABLE IF NOT EXISTS public.test1 (id serial NOT NULL , text varchar(64) NOT NULL);")
db.execute_query("CREATE TABLE IF NOT EXISTS public.test2 (id serial NOT NULL , text varchar(64) NOT NULL);")
# init thread
threads = [threading.Thread(target=delete1_with_db_pool, args=("Thread-1",)), threading.Thread(target=delete2_with_db_pool, args=("Thread-2",)),
threading.Thread(target=delete1, args=("Thread-3",)), threading.Thread(target=delete2, args=("Thread-4",))]
# start
for thread in threads:
thread.start()
# wait
for thread in threads:
thread.join()
</code></pre>
<ul>
<li>Run command and open Jaeger Dashboard you will find the problem.</li>
</ul>
<pre class="lang-bash prettyprint-override"><code>docker-compose up -d
pip install -r requirements.txt
python index.py
</code></pre>
<h3>Suspect the Root Cause</h3>
<p>I suspect the problem lies in our use of the <code>pool.ThreadedConnectionPool()</code> function in the <code>psycopg2</code> library to reuse database connections. It seems that opentelemetry-instrumentation-psycopg2 hasn't taken this case into account.</p>
<p>I have posted this <a href="https://github.com/open-telemetry/opentelemetry-python-contrib/issues/1925" rel="nofollow noreferrer">issue</a> on the GitHub opentelemetry-python-contrib repository.</p>
| <python><concurrency><psycopg2><trace><open-telemetry> | 2023-08-31 04:15:34 | 1 | 1,417 | Changemyminds |
77,012,451 | 4,362,655 | How can I have callbacks for Plotly Dash in different directories while using flask | <p>I have integrated Flask with Dash (credit to <a href="https://github.com/jimmybow/Flask_template_auth_with_Dash" rel="nofollow noreferrer">this source</a>), with the file structure below...</p>
<pre><code>/<main_app>
|
|
- <app>
| |-----__init__.py
|
- <dashboard>
| |--------Dash_App1.py
| |--------<dash_app_code1>
| |--------<dash_app_code2>
|
- <configs>
|
----- run.py
</code></pre>
<p>run.py has the following code...</p>
<pre><code>from app import create_app
app = create_app(config_mode)
if __name__ == "__main__":
app.run(debug = True)
</code></pre>
<p><code>__init__.py</code> of the directory "app" has the following code...</p>
<pre><code>def create_app(config, selenium=False):
app = Flask(__name__, static_folder='base/static')
app.config.from_object(config)
register_extensions(app)
register_blueprints(app)
app = Dash_App1.Add_Dash(app)
return app
</code></pre>
<p>So I instantiate my Dash app in <code>Dash_App1.py</code> in the <code>dashboard</code> directory like this:</p>
<pre><code>def Add_Dash(server):
d_app = Dash(server=server, url_base_pathname=url_base)
apply_layout_with_auth(d_app, layout)
@d_app.callback(Output('tabs-content', 'children'), Input('tabs', 'value'))
def render_content(tab):
if tab == 'tab-1':
output = general_layout()
return output
@d_app.callback(Output('date-text', 'children'), Input('btn-cboe-data', 'n_clicks'))
def readDaily(cobe_btn):
if cobe_btn:
do_something()
return d_app.server
</code></pre>
<p>Here is my problem: this design seems to force me to keep all the callbacks inside the <code>Add_Dash</code> function in Dash_App1.py. I want to organize my callbacks by functionality, in separate files under the <code>dash_app_code1</code> and <code>dash_app_code2</code> directories, but I don't know how to do this. I tried using <code>from flask import current_app</code> and adding callbacks there, but it doesn't work. I also tried assigning the Dash instance (<code>d_app</code>) to the global variable <code>g</code> and importing (and using) that in <code>dash_app_code1</code>, but it throws an out-of-context error. How can I add more Dash-related callbacks in files in different directories (<code>dash_app_code1</code> and <code>dash_app_code2</code>) and avoid having them all in the <code>Add_Dash</code> function?</p>
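<p>A widely used pattern — sketched below with illustrative names, not code from the linked template — is to give each callback module a <code>register_callbacks(d_app)</code> function and call it from <code>Add_Dash</code>, so the Dash instance is passed around explicitly instead of imported as a global. The stand-in class only exists so the sketch runs without Dash installed; in a real module the decorator arguments would be <code>dash.Output(...)</code> and <code>dash.Input(...)</code>:</p>

```python
# dashboard/dash_app_code1/callbacks.py (sketch, illustrative names):
def register_callbacks(d_app):
    @d_app.callback("tabs-content.children", "tabs.value")
    def render_content(tab):
        return "content for %s" % tab

# Dash_App1.Add_Dash would then shrink to roughly:
#     d_app = Dash(server=server, url_base_pathname=url_base)
#     apply_layout_with_auth(d_app, layout)
#     register_callbacks(d_app)        # one call per callback module
#     return d_app.server

# Stand-in Dash object, only to show the wiring runs end to end:
class _FakeDash:
    def callback(self, *outputs_and_inputs):
        def decorator(fn):
            self.last_registered = fn
            return fn
        return decorator

app = _FakeDash()
register_callbacks(app)
print(app.last_registered("tab-1"))  # content for tab-1
```

<p>Because the callbacks are registered through an explicit function call, any number of modules can be split across directories without touching globals or <code>current_app</code>.</p>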
| <python><flask><plotly><plotly-dash> | 2023-08-31 02:04:02 | 1 | 1,575 | LuckyStarr |
77,012,324 | 22,474,322 | django-avatar // not enough values to unpack (expected 2, got 1) | <p><a href="https://django-avatar.readthedocs.io/en/latest/" rel="nofollow noreferrer">https://django-avatar.readthedocs.io/en/latest/</a></p>
<p>Django returns "not enough values to unpack (expected 2, got 1)" when trying to open this HTML template:</p>
<pre><code>{% extends 'base.html' %}
{% load account %} {% load avatar_tags %}
{% block content %}
<div style="border: 2px solid #ffffff; text-align:center">
<h1>Dashboard</h1>
<div>
{% avatar user %}
<a href="{% url 'avatar_change' %}">Change your avatar</a>
</div>
<div>
<p>Welcome back, {% user_display user %}</p>
</div>
</div>
{% endblock content %}
</code></pre>
<p>and highlights "<code>{% avatar user %}</code>"</p>
<p>Here's the traceback: <a href="https://dpaste.com/B7EX28LNB" rel="nofollow noreferrer">https://dpaste.com/B7EX28LNB</a></p>
<pre><code>Environment:
Request Method: GET
Request URL: http://127.0.0.1:8000/dashboard/
Django Version: 4.2.4
Python Version: 3.11.5
Installed Applications:
['yt2b2',
'home',
'dashboard',
'django.contrib.admin',
'django.contrib.auth',
'django.contrib.contenttypes',
'django.contrib.sessions',
'django.contrib.messages',
'django.contrib.staticfiles',
'rest_framework',
'embed_video',
'avatar',
'allauth',
'allauth.account',
'allauth.socialaccount']
Installed Middleware:
['django.middleware.security.SecurityMiddleware',
'django.contrib.sessions.middleware.SessionMiddleware',
'django.middleware.common.CommonMiddleware',
'django.middleware.csrf.CsrfViewMiddleware',
'django.contrib.auth.middleware.AuthenticationMiddleware',
'django.contrib.messages.middleware.MessageMiddleware',
'django.middleware.clickjacking.XFrameOptionsMiddleware']
Template error:
In template C:\Users\Pedro\Documents\GITHUB\YT2B2-new-dev\YT2B2\youtube2blog2\dashboard\templates\dashboard\dashboard.html, error at line 10
not enough values to unpack (expected 2, got 1)
1 : {% extends 'base.html' %}
2 : {% load account %} {% load avatar_tags %}
3 :
4 : {% block content %}
5 :
6 : <div style="border: 2px solid #ffffff; text-align:center">
7 : <h1>Dashboard</h1>
8 : <div>
9 :
10 : {% avatar user %}
11 :
12 : <a href="{% url 'avatar_change' %}">Change your avatar</a>
13 :
14 : </div>
15 :
16 : <div>
17 : <p>Welcome back, {% user_display user %}</p>
18 : </div>
19 : </div>
20 :
Traceback (most recent call last):
File "C:\Users\Pedro\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.11_qbz5n2kfra8p0\LocalCache\local-packages\Python311\site-packages\django\core\handlers\exception.py", line 55, in inner
response = get_response(request)
^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\Pedro\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.11_qbz5n2kfra8p0\LocalCache\local-packages\Python311\site-packages\django\core\handlers\base.py", line 197, in _get_response
response = wrapped_callback(request, *callback_args, **callback_kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\Pedro\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.11_qbz5n2kfra8p0\LocalCache\local-packages\Python311\site-packages\django\contrib\auth\decorators.py", line 23, in _wrapper_view
return view_func(request, *args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\Pedro\Documents\GITHUB\YT2B2-new-dev\YT2B2\youtube2blog2\dashboard\views.py", line 10, in dashboard
return render(request, 'dashboard/dashboard.html', {})
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\Pedro\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.11_qbz5n2kfra8p0\LocalCache\local-packages\Python311\site-packages\django\shortcuts.py", line 24, in render
content = loader.render_to_string(template_name, context, request, using=using)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\Pedro\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.11_qbz5n2kfra8p0\LocalCache\local-packages\Python311\site-packages\django\template\loader.py", line 62, in render_to_string
return template.render(context, request)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\Pedro\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.11_qbz5n2kfra8p0\LocalCache\local-packages\Python311\site-packages\django\template\backends\django.py", line 61, in render
return self.template.render(context)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\Pedro\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.11_qbz5n2kfra8p0\LocalCache\local-packages\Python311\site-packages\django\template\base.py", line 175, in render
return self._render(context)
^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\Pedro\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.11_qbz5n2kfra8p0\LocalCache\local-packages\Python311\site-packages\django\template\base.py", line 167, in _render
return self.nodelist.render(context)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\Pedro\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.11_qbz5n2kfra8p0\LocalCache\local-packages\Python311\site-packages\django\template\base.py", line 1005, in render
return SafeString("".join([node.render_annotated(context) for node in self]))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\Pedro\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.11_qbz5n2kfra8p0\LocalCache\local-packages\Python311\site-packages\django\template\base.py", line 1005, in <listcomp>
return SafeString("".join([node.render_annotated(context) for node in self]))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\Pedro\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.11_qbz5n2kfra8p0\LocalCache\local-packages\Python311\site-packages\django\template\base.py", line 966, in render_annotated
return self.render(context)
^^^^^^^^^^^^^^^^^^^^
File "C:\Users\Pedro\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.11_qbz5n2kfra8p0\LocalCache\local-packages\Python311\site-packages\django\template\loader_tags.py", line 157, in render
return compiled_parent._render(context)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\Pedro\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.11_qbz5n2kfra8p0\LocalCache\local-packages\Python311\site-packages\django\template\base.py", line 167, in _render
return self.nodelist.render(context)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\Pedro\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.11_qbz5n2kfra8p0\LocalCache\local-packages\Python311\site-packages\django\template\base.py", line 1005, in render
return SafeString("".join([node.render_annotated(context) for node in self]))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\Pedro\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.11_qbz5n2kfra8p0\LocalCache\local-packages\Python311\site-packages\django\template\base.py", line 1005, in <listcomp>
return SafeString("".join([node.render_annotated(context) for node in self]))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\Pedro\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.11_qbz5n2kfra8p0\LocalCache\local-packages\Python311\site-packages\django\template\base.py", line 966, in render_annotated
return self.render(context)
^^^^^^^^^^^^^^^^^^^^
File "C:\Users\Pedro\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.11_qbz5n2kfra8p0\LocalCache\local-packages\Python311\site-packages\django\template\loader_tags.py", line 63, in render
result = block.nodelist.render(context)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\Pedro\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.11_qbz5n2kfra8p0\LocalCache\local-packages\Python311\site-packages\django\template\base.py", line 1005, in render
return SafeString("".join([node.render_annotated(context) for node in self]))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\Pedro\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.11_qbz5n2kfra8p0\LocalCache\local-packages\Python311\site-packages\django\template\base.py", line 1005, in <listcomp>
return SafeString("".join([node.render_annotated(context) for node in self]))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\Pedro\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.11_qbz5n2kfra8p0\LocalCache\local-packages\Python311\site-packages\django\template\base.py", line 966, in render_annotated
return self.render(context)
^^^^^^^^^^^^^^^^^^^^
File "C:\Users\Pedro\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.11_qbz5n2kfra8p0\LocalCache\local-packages\Python311\site-packages\django\template\library.py", line 237, in render
output = self.func(*resolved_args, **resolved_kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\Pedro\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.11_qbz5n2kfra8p0\LocalCache\local-packages\Python311\site-packages\avatar\templatetags\avatar_tags.py", line 48, in avatar
url = avatar_url(user, width, height)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\Pedro\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.11_qbz5n2kfra8p0\LocalCache\local-packages\Python311\site-packages\avatar\utils.py", line 72, in cached_func
result = func(user, width or default_size, height, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\Pedro\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.11_qbz5n2kfra8p0\LocalCache\local-packages\Python311\site-packages\avatar\templatetags\avatar_tags.py", line 21, in avatar_url
avatar_url = provider.get_avatar_url(user, width, height)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\Pedro\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.11_qbz5n2kfra8p0\LocalCache\local-packages\Python311\site-packages\avatar\providers.py", line 77, in get_avatar_url
_, domain = email.split(b"@")
^^^^^^^^^
Exception Type: ValueError at /dashboard/
Exception Value: not enough values to unpack (expected 2, got 1)
</code></pre>
<p>Any suggestions on how to fix this? Everything is set up properly.</p>
<p>I'm also using allauth, in case there's some kind of conflict.</p>
<p>I tried redoing the setup from the documentation; it still won't work.</p>
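<p>The last frame of the traceback points at the Gravatar provider splitting the user's email on <code>"@"</code>. That line fails with exactly this error whenever the logged-in user's email is empty (or contains no <code>"@"</code>) — a hedged reading of the traceback, not a confirmed diagnosis of your user data, but easy to verify in isolation:</p>

```python
# What avatar/providers.py does internally, with an empty email field:
email = b""  # e.g. a User created without an email address
try:
    _, domain = email.split(b"@")
except ValueError as exc:
    print(exc)  # not enough values to unpack (expected 2, got 1)
```

<p>If that is the cause, giving the user an email address (or switching the avatar provider order) would sidestep the crash.</p>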
| <python><html><django><django-templates> | 2023-08-31 01:11:27 | 1 | 325 | Pedro Santos |
77,012,307 | 17,653,423 | How to convert a pd.Series of dict to pd.Series of string? | <p>How can I convert the Series of dictionaries below to a Series of strings?</p>
<pre><code>import pandas as pd
dicts = {1: [{"id1":"result1", "name1":"result1"}], 2: [{"id2":"result2", "name2":"result2"}]}
# create test dataframe
df = pd.DataFrame.from_dict(dicts, orient='index')
df=df.rename(columns={0:'column'})
print(df)
</code></pre>
<p><a href="https://i.sstatic.net/socAW.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/socAW.png" alt="enter image description here" /></a></p>
<p>Change to:</p>
<p><a href="https://i.sstatic.net/e5eFC.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/e5eFC.png" alt="enter image description here" /></a></p>
<p>The example above isn't the real scenario; in practice this would be used to convert the result of an API response. I want to load it into a table as a string, not as a dictionary (in BigQuery there is the STRUCT type).</p>
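<p>One common approach — assuming the goal is a JSON-formatted string column, which is an assumption about the BigQuery load — is to serialize each cell with <code>json.dumps</code>:</p>

```python
import json
import pandas as pd

dicts = {1: [{"id1": "result1", "name1": "result1"}],
         2: [{"id2": "result2", "name2": "result2"}]}
df = pd.DataFrame.from_dict(dicts, orient="index").rename(columns={0: "column"})

# json.dumps serializes each dict cell; plain str() would also produce a
# string, but with single quotes, which JSON parsers reject.
df["column"] = df["column"].apply(json.dumps)

print(df["column"].iloc[0])        # {"id1": "result1", "name1": "result1"}
print(type(df["column"].iloc[0]))  # <class 'str'>
```

<p>The round trip back is then just <code>df["column"].apply(json.loads)</code>.</p>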
| <python><pandas><dataframe> | 2023-08-31 01:03:40 | 2 | 391 | Luiz |
77,012,192 | 9,554,172 | How to put an enrich policy in elasticsearch when python package doesn't match documentation | <p>I'm trying to place a simple enrich policy onto elasticsearch before its corresponding ingest pipelines. The issue I have is when I execute:</p>
<pre><code> if enrich_policies:
for i, (name, body) in enumerate(enrich_policies.items()):
result = es.enrich.get_policy(name=name)
if "config" in body:
body = body["config"]
# cannot pass in "body", but the individual params
# https://www.elastic.co/guide/en/elasticsearch/reference/current/put-enrich-policy-api.html#put-enrich-policy-api-request-body
body_match = body.get("match")
body_range = body.get("range")
body_geo_match = body.get("geo_match")
log_print(f"Putting policy {name} with match: {body_match}, range: {body_range}, geo_match: {body_geo_match}", "DEBUG")
# upload enrich policy
result = es.enrich.put_policy(name=name, match=body_match, range=body_range, geo_match=body_geo_match)
# execute enrich policy
result = es.enrich.execute_policy(name=name)
</code></pre>
<p>I get this error:</p>
<pre><code> File "C:\Users\xxx\source\xxx\configure_kibana_objects.py", line 152, in <module>
main()
File "C:\Users\xxx\source\xxx\configure_kibana_objects.py", line 116, in main
result = es.enrich.put_policy(name=name, match=body_match, range=body_range, geo_match=body_geo_match)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Python311\Lib\site-packages\elasticsearch\_sync\client\utils.py", line 414, in wrapped
return api(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^
File "C:\Python311\Lib\site-packages\elasticsearch\_sync\client\enrich.py", line 188, in put_policy
return self.perform_request( # type: ignore[return-value]
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Python311\Lib\site-packages\elasticsearch\_sync\client\_base.py", line 389, in perform_request
return self._client.perform_request(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Python311\Lib\site-packages\elasticsearch\_sync\client\_base.py", line 320, in perform_request
raise HTTP_EXCEPTIONS.get(meta.status, ApiError)(
elasticsearch.BadRequestError: BadRequestError(400, 'x_content_parse_exception', '[1:11] [policy] unknown field [name]')
</code></pre>
<p>Elasticsearch version is 8.7.0</p>
<pre><code> python -m pip show elasticsearch
Version: 8.7.0
Summary: Python client for Elasticsearch
Home-page: https://github.com/elastic/elasticsearch-py
Author: Honza Král, Nick Lang
Author-email: honza.kral@gmail.com, nick@nicklang.com
License: Apache-2.0
Location: C:\Python311\Lib\site-packages
Requires: elastic-transport
Required-by:
</code></pre>
<p>Here is the <a href="https://elasticsearch-py.readthedocs.io/en/v8.7.0/api.html?highlight=enrich#elasticsearch.client.EnrichClient.put_policy" rel="nofollow noreferrer">link to the documentation</a> for the package, which includes the first parameter: <code>name</code>.</p>
<p>So why does the package keep yelling at me that there is no parameter <code>name</code>?</p>
| <python><elasticsearch> | 2023-08-31 00:15:56 | 1 | 881 | Lacrosse343 |
77,012,106 | 22,474,322 | Django Allauth - ModuleNotFoundError: No module named 'allauth.account.middleware' even when django-allauth is properly installed | <p>"ModuleNotFoundError: No module named 'allauth.account.middleware'"</p>
<p>I keep getting this error in my Django project even though django-allauth is fully installed and set up.</p>
<p>I even tried reinstalling and switching my interpreter to python3, but nothing changed; I can't figure out why all the other imports work but the MIDDLEWARE one doesn't.</p>
<p>Help pls?</p>
<p>settings.py:</p>
<pre><code>"""
Django settings for youtube2blog2 project.
Generated by 'django-admin startproject' using Django 4.2.4.
For more information on this file, see
For the full list of settings and their values, see
"""
from pathlib import Path
import django
import os
import logging
import pyfiglet
import allauth
# Build paths inside the project like this: BASE_DIR / 'subdir'.
BASE_DIR = Path(__file__).resolve().parent.parent
# Quick-start development settings - unsuitable for production
# SECURITY WARNING: keep the secret key used in production secret!
SECRET_KEY = 'omegalul'
# SECURITY WARNING: don't run with debug turned on in production!
DEBUG = True
ALLOWED_HOSTS = []
# CUSTOM CODE
# os.environ['FFMPEG_PATH'] = '/third-party/ffmpeg.exe'
# os.environ['FFPROBE_PATH'] = '/third-party/ffplay.exe'
OFFLINE_VERSION = False
def offline_version_setup(databases):
if (OFFLINE_VERSION):
# WRITE CODE TO REPLACE DATABASES DICT DATA FOR OFFLINE SETUP HERE
return True
return
banner_ascii_art = pyfiglet.figlet_format("CHRIST IS KING ENTERPRISES")
logger = logging.getLogger()
logger.setLevel(logging.DEBUG)
print("\n - CURRENT DJANGO VERSION: " + str(django.get_version()))
print("\n - settings.py: Current logger level is " + str(logger.getEffectiveLevel()))
logger.debug('settings.py: Logger is working.\n\n')
MEDIA_ROOT = os.path.join(BASE_DIR, 'media')
MEDIA_URL = '/media/'
AUTHENTICATION_BACKENDS = [
# Needed to login by username in Django admin, regardless of `allauth`
'django.contrib.auth.backends.ModelBackend',
# `allauth` specific authentication methods, such as login by email
'allauth.account.auth_backends.AuthenticationBackend',
]
'''
NEEDED SETUP FOR SOCIAL AUTH
REQUIRES DEVELOPER CREDENTIALS
ON PAUSE UNTIL MVP IS DONE
# Provider specific settings
SOCIALACCOUNT_PROVIDERS = {
'google': {
# For each OAuth based provider, either add a ``SocialApp``
# (``socialaccount`` app) containing the required client
# credentials, or list them here:
'APP': {
'client_id': '123',
'secret': '456',
'key': ''
}
}
'apple': {
}
'discord' {
}
}
'''
LOGIN_REDIRECT_URL = 'dashboard'
#
# Application definition
INSTALLED_APPS = [
# My Apps
'yt2b2',
'home',
'dashboard',
# Django Apps
'django.contrib.admin',
'django.contrib.auth',
'django.contrib.contenttypes',
'django.contrib.sessions',
'django.contrib.messages',
'django.contrib.staticfiles',
# Downloaded Apps
'rest_framework',
'embed_video',
'allauth',
'allauth.account',
'allauth.socialaccount',
#'allauth.socialaccount.providers.google',
#'allauth.socialaccount.providers.apple',
#'allauth.socialaccount.providers.discord',
]
MIDDLEWARE = [
'django.middleware.security.SecurityMiddleware',
'django.contrib.sessions.middleware.SessionMiddleware',
'django.middleware.common.CommonMiddleware',
'django.middleware.csrf.CsrfViewMiddleware',
'django.contrib.auth.middleware.AuthenticationMiddleware',
'django.contrib.messages.middleware.MessageMiddleware',
'django.middleware.clickjacking.XFrameOptionsMiddleware',
# Downloaded Middleware
'allauth.account.middleware.AccountMiddleware',
]
ROOT_URLCONF = 'youtube2blog2.urls'
TEMPLATES = [
{
'BACKEND': 'django.template.backends.django.DjangoTemplates',
'DIRS': [os.path.join(BASE_DIR, 'templates')],
'APP_DIRS': True,
'OPTIONS': {
'context_processors': [
'django.template.context_processors.debug',
'django.template.context_processors.request',
'django.contrib.auth.context_processors.auth',
'django.contrib.messages.context_processors.messages',
],
},
},
]
WSGI_APPLICATION = 'youtube2blog2.wsgi.application'
# Database
DATABASES = {
'default': {
'ENGINE': 'django.db.backends.sqlite3',
'NAME': BASE_DIR / 'db.sqlite3', # <--------- OFFLINE VERSION
# Consider masking these secret variables using a .env file to beef up your Django app's security. Besides, Vercel allows you to list your environment variables during deployment.
#'URL' : 'postgresql://postgres:oibkk5LL9sI5dzY5PAnj@containers-us-west-128.railway.app:5968/railway',
#'NAME' : 'railway',
#'USER' : 'postgres',
#'PASSWORD' : 'oibkk5LL9sI5dzY5PAnj',
#'HOST' : 'containers-us-west-128.railway.app',
#'PORT' : '5968'
}
}
# Password validation
AUTH_PASSWORD_VALIDATORS = [
{
'NAME': 'django.contrib.auth.password_validation.UserAttributeSimilarityValidator',
},
{
'NAME': 'django.contrib.auth.password_validation.MinimumLengthValidator',
},
{
'NAME': 'django.contrib.auth.password_validation.CommonPasswordValidator',
},
{
'NAME': 'django.contrib.auth.password_validation.NumericPasswordValidator',
},
]
# Internationalization
LANGUAGE_CODE = 'en-us'
TIME_ZONE = 'UTC'
USE_I18N = True
USE_TZ = True
# Static files (CSS, JavaScript, Images)
STATIC_URL = '/static/' # the path in url
STATICFILES_DIRS = [
os.path.join(BASE_DIR, "static"),
]
# Default primary key field type
DEFAULT_AUTO_FIELD = 'django.db.models.BigAutoField'
</code></pre>
<p>I tried changing to python3, reinstalling django-allauth through pip, other Stack Overflow solutions, and sifting through the allauth docs... nothing has worked so far.</p>
<p>Update: removed https links because of spam filter</p>
<p>Error location:</p>
<pre><code>MIDDLEWARE = [
'django.middleware.security.SecurityMiddleware',
'django.contrib.sessions.middleware.SessionMiddleware',
'django.middleware.common.CommonMiddleware',
'django.middleware.csrf.CsrfViewMiddleware',
'django.contrib.auth.middleware.AuthenticationMiddleware',
'django.contrib.messages.middleware.MessageMiddleware',
'django.middleware.clickjacking.XFrameOptionsMiddleware',
# Downloaded Middleware
'allauth.account.middleware.AccountMiddleware',
]
</code></pre>
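<p>Since the import path itself is what fails, one quick check is to ask Python whether the installed package actually ships that module. (A sketch; note the assumption that <code>AccountMiddleware</code> only exists in newer django-allauth releases — reportedly added around 0.56.0, which is worth verifying against the changelog.)</p>

```python
import importlib.util

def has_module(name):
    """Return True if `name` is importable from the current environment."""
    try:
        return importlib.util.find_spec(name) is not None
    except ModuleNotFoundError:
        # a parent package (e.g. allauth itself) is missing entirely
        return False

print(has_module("allauth.account.middleware"))
```

<p>If this prints False while <code>allauth</code> itself imports fine, the installed django-allauth simply predates <code>AccountMiddleware</code>, and upgrading the package (or removing that middleware line) resolves the mismatch.</p>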
| <python><python-3.x><django><django-allauth><django-settings> | 2023-08-30 23:39:49 | 3 | 325 | Pedro Santos |
77,012,002 | 8,171,079 | Unexpected behaviour when using put with multidimensional Numpy array | <p>I have an array <code>A</code> defined as:</p>
<pre><code>[[0, 0],
[0, 0]]
</code></pre>
<p>I have a list <code>I</code> containing indices of <code>A</code>, for example <code>[(0, 1), (1, 0), (1, 1)]</code>, and a list <code>v</code> with as many values as in <code>I</code>, for example <code>[1, 2, 3]</code>. I want to replace entries of <code>A</code> at indices contained in <code>I</code> with corresponding values stored in <code>v</code>. The expected result is therefore:</p>
<pre><code>[[0, 1],
[2, 3]]
</code></pre>
<p>I was expecting to be able to achieve this using <code>np.put</code>. I tried:</p>
<pre><code>np.put(A, I, v)
</code></pre>
<p>However, the value of <code>A</code> after running the line above is:</p>
<pre><code>[[1., 3.],
[0., 0.]]
</code></pre>
<p>Why did <code>put</code> behave this way and how can I achieve the result I was expecting?</p>
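<p>The cause is that <code>np.put</code> indexes the <em>flattened</em> array: the tuple list is flattened to the index sequence <code>[0, 1, 1, 0, 1, 1]</code> and <code>v</code> is cycled to match it, which produces the observed result rather than an error. A sketch of two ways to address the (row, col) pairs instead:</p>

```python
import numpy as np

A = np.zeros((2, 2))
I = [(0, 1), (1, 0), (1, 1)]
v = [1, 2, 3]

# Option 1: convert the (row, col) pairs to flat indices for np.put.
np.put(A, np.ravel_multi_index(tuple(zip(*I)), A.shape), v)
print(A)
# [[0. 1.]
#  [2. 3.]]

# Option 2 (simpler): skip put and use fancy indexing directly.
B = np.zeros((2, 2))
rows, cols = zip(*I)
B[list(rows), list(cols)] = v
print(np.array_equal(A, B))  # True
```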
| <python><numpy> | 2023-08-30 23:04:05 | 3 | 377 | user8171079 |
77,011,837 | 10,789,707 | How to get all the values of 2nd loop output into the 3rd loop end_idx variable? | <p>This question may already have been answered, but after much time searching SO and the net I did not find a suitable answer.</p>
<p>I have 3 consecutive loops as follows:</p>
<pre><code> w = my_text0.get(1.0, END).split()
data = []
# get & set the matching dict keys and values
for i in w:
if i in [*dictionary]:
word = i.title()
my_text.insert(END, word)
my_text.insert(END, ", " + dictionary[i] + "\n")
keys_len = len(word)
print("keys_len", keys_len)
data.append(keys_len)
print("data", data)
# print("keys_len", keys_len)
# get text widget lines count
txtwidg_lnescnt = int(my_text.index("end-1c").split(".")[0])
for j in range(0, len(data)):
k_l = data[j]
print("k_l", k_l)
print("k_l out", k_l)
# get index start and end for tag_add method
for k in range(1, txtwidg_lnescnt):
lne_idx = k
str_idx = f"{lne_idx}.0"
end_idx = f"{lne_idx}.{k_l}"
print("end_idx", end_idx)
my_text.tag_add("start", str_idx, end_idx)
my_text.tag_config(
"start",
background="red",
foreground="black",
font=bdfont,
)
</code></pre>
<p>I need the <code>k_l</code> values from the <code>2nd loop</code> to be processed in the <code>3rd loop</code>.
But currently only the last value <code>(9)</code> from the <code>2nd loop</code> is processed.</p>
<p>Here's the output I'm getting:</p>
<pre><code>keys_len 5
keys_len 6
keys_len 9
data [5, 6, 9]
k_l 5
k_l 6
k_l 9
k_l out 9
end_idx 1.9
end_idx 2.9
end_idx 3.9
</code></pre>
<p><strong>I need it as this:</strong></p>
<pre><code>keys_len 5
keys_len 6
keys_len 9
data [5, 6, 9]
k_l 5
k_l 6
k_l 9
k_l out 9
end_idx 1.5
end_idx 2.6
end_idx 3.9
</code></pre>
<p>I'm familiar with the <code>append to an empty list method</code> to store all the values from a loop.</p>
<p>But I can't repeat it here as it was already done in the <code>1st loop</code>.</p>
<p>I also tried using the <code>return from a separate function method</code> but it outputs as in a nested loop and that's not what I need:</p>
<pre><code>def mykeyslengths():
# get the user's entered words
w = my_text0.get(1.0, END).split()
data = []
# get & set the matching dict keys and values
for i in w:
word = i
keys_len = len(word)
print("keys_lenzz", keys_len)
data.append(keys_len)
for j in range(0, len(data)):
k_l = data[j]
print("k_l", k_l)
return k_l
def generate():
# Clear The text
my_text.delete(1.0, END)
# get the user's entered words
w = my_text0.get(1.0, END).split()
# data = []
# get & set the matching dict keys and values
for i in w:
if i in [*dictionary]:
word = i.title()
my_text.insert(END, word)
my_text.insert(END, ", " + dictionary[i] + "\n")
# keys_len = len(word)
# print("keys_len", keys_len)
# data.append(keys_len)
# print("data", data)
# print("keys_len", keys_len)
# get text widget lines count
txtwidg_lnescnt = int(my_text.index("end-1c").split(".")[0])
# for j in range(0, len(data)):
# k_l = data[j]
# print("k_l", k_l)
# print("k_l out", k_l)
# get index start and end for tag_add method
for k in range(1, txtwidg_lnescnt):
lne_idx = k
str_idx = f"{lne_idx}.0"
end_idx = f"{lne_idx}.{mykeyslengths()}"
print("end_idx", end_idx)
my_text.tag_add("start", str_idx, end_idx)
my_text.tag_config(
"start",
background="red",
foreground="black",
font=bdfont,
)
</code></pre>
<p>Which output as this:</p>
<pre><code>keys_lenzz 5
keys_lenzz 6
keys_lenzz 9
k_l 5
k_l 6
k_l 9
end_idx 1.9
keys_lenzz 5
keys_lenzz 6
keys_lenzz 9
k_l 5
k_l 6
k_l 9
end_idx 2.9
keys_lenzz 5
keys_lenzz 6
keys_lenzz 9
k_l 5
k_l 6
k_l 9
end_idx 3.9
</code></pre>
<p>If that was not clear from the examples, I need substrings of lengths <code>5, 6, 9</code> to be <code>bold</code> with modified <code>backgrounds</code> and <code>foregrounds</code>, where each substring's extent is determined by the <code>end_idx</code> values passed as the end index of <code>tkinter</code>'s <code>tag_add</code> method.</p>
<p>The problem I get currently is that all 3 substrings (the dictionary keys) are the same lengths (9) whereas they should all be distinct as per their respective lengths (5, 6, 9).</p>
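The pairing described above (one length per line) can be sketched by fusing the 2nd and 3rd loops with <code>enumerate</code>; this is a minimal stand-alone illustration using the question's sample lengths, without the tkinter widget:

```python
data = [5, 6, 9]  # the key lengths collected in the first loop

end_indices = []
# enumerate(..., start=1) pairs line 1 with data[0], line 2 with data[1], ...
for line_no, key_len in enumerate(data, start=1):
    str_idx = f"{line_no}.0"
    end_idx = f"{line_no}.{key_len}"
    end_indices.append(end_idx)
    # my_text.tag_add("start", str_idx, end_idx) would go here

print(end_indices)
```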
| <python><python-3.x><for-loop><tkinter><nested-loops> | 2023-08-30 22:16:44 | 1 | 797 | Lod |
77,011,809 | 14,509,604 | Debian 11 : pip is configured with locations that require TLS/SSL, however the ssl module in Python is not available | <p>I've installed Python 3.11 on Debian 11, but when I try to use <code>pip3.11</code> to install any package it throws the warning <code>pip is configured with locations that require TLS/SSL, however the SSL module in Python is not available</code> and the download fails.</p>
<p>I've done</p>
<pre><code>cd Python-3.11.5
./configure --prefix=/usr/local --enable-optimizations --enable-shared LDFLAGS="-Wl,-rpath /usr/local/lib"
sudo make
sudo make altinstall
</code></pre>
<p>Also tried <a href="https://stackoverflow.com/questions/45954528/pip-is-configured-with-locations-that-require-tls-ssl-however-the-ssl-module-in">this</a>, running <code>sudo apt install openssl</code>, and other similar solutions.</p>
<p>When I run <code>sudo make</code> it prompts</p>
<pre><code>*** WARNING: renaming "_crypt" since importing it failed: libcrypt.so.2: cannot open shared object file: No such file or directory
The necessary bits to build these optional modules were not found:
_hashlib _ssl _tkinter
To find the necessary bits, look in setup.py in detect_modules() for the module's name.
The following modules built successfully but were removed because they could not be imported:
_crypt
Could not build the SSL module!
Python requires an OpenSSL 1.1.1 or newer
</code></pre>
<p>But I have it installed.</p>
<p>Tried: <code>./configure --prefix=/usr/local --enable-optimizations --enable-shared LDFLAGS="-Wl,-rpath /usr/local/lib" --with-openssl="/home/linuxbrew/.linuxbrew/bin/openssl"</code> but did not work either.</p>
<p>What can I do to build those missing packages? The one I'm most interested in is in SSL handler because it doesn't let me do anything with pip.</p>
<p>EDIT:</p>
<p>Tried installing wiht <code>pyenv</code>, same result:</p>
<pre><code>File "/home/juanc/.pyenv/versions/3.10.12/lib/python3.10/ssl.py", line 99, in <module>
import _ssl # if we can't import it, let the error propagate
ModuleNotFoundError: No module named '_ssl'
ERROR: The Python ssl extension was not compiled. Missing the OpenSSL lib?
Please consult to the Wiki page to fix the problem.
https://github.com/pyenv/pyenv/wiki/Common-build-problems
</code></pre>
<p>I've reinstalled <code>openssl</code> following <a href="https://orcacore.com/install-openssl-3-debian-11/" rel="nofollow noreferrer">this</a> tutorial (before I had it installed with <code>brew</code>) and now the path is <code>/usr/local/bin/openssl</code>.</p>
<p>Followed the pyenv <a href="https://github.com/pyenv/pyenv/wiki/Common-build-problems" rel="nofollow noreferrer">wiki</a>:</p>
<pre><code>CPPFLAGS="-I/usr/local/include" \
LDFLAGS="-L/usr/local/lib64" \
pyenv install -v 3.10 # 3.11 is not available
</code></pre>
<p>But <code>openssl</code> keeps missing when python is built.</p>
<p>Pyenv ends with "success", but there is nothing in <code>~/.pyenv/versions/</code> and <code>pyenv versions</code> does not recognize any Python version other than the system's.</p>
<p>Any clue?</p>
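Note that the build errors above report missing headers, not a missing <code>openssl</code> binary. A sketch of the usual Debian fix — the package names are the standard Debian ones, but treat the exact set as an assumption for this setup:

```shell
sudo apt update
sudo apt install -y build-essential libssl-dev zlib1g-dev \
    libbz2-dev libreadline-dev libsqlite3-dev libffi-dev tk-dev libcrypt-dev

# then reconfigure against the system OpenSSL headers in /usr/include/openssl
./configure --prefix=/usr/local --enable-optimizations --enable-shared \
    --with-openssl=/usr LDFLAGS="-Wl,-rpath /usr/local/lib"
```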
| <python><ssl><installation><pip><pyenv> | 2023-08-30 22:11:25 | 0 | 329 | juanmac |
77,011,769 | 880,783 | Can I @override the following @overloaded Python function? | <p>The PySide6 function <code>QComboBox.addItem</code> is type-hinted, in <code>QtWidgets.pyi</code>, like this:</p>
<pre class="lang-py prettyprint-override"><code>@overload
def addItem(self, icon: Union[PySide6.QtGui.QIcon, PySide6.QtGui.QPixmap], text: str, userData: Any = ...) -> None: ...
@overload
def addItem(self, text: str, userData: Any = ...) -> None: ...
</code></pre>
<p>In order to cleanly (according to type checkers) <code>@override</code> it, I need to provide a Python implementation which "accepts all possible arguments of" signatures 1 and 2, yet I seem unable to do so not least due to the fact that the <code>text</code> argument appears in different positions.</p>
<p>What are strategies to deal with this?</p>
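One commonly accepted strategy is to repeat the parent's <code>@overload</code> declarations in the subclass and implement with a <code>*args/**kwargs</code> catch-all, dispatching on the first argument's type. The sketch below uses hypothetical <code>Icon</code>/<code>Pixmap</code>/<code>Combo</code> stand-ins for the Qt types so it is self-contained:

```python
from typing import Any, Optional, Union, overload

class Icon:  # hypothetical stand-in for PySide6.QtGui.QIcon
    pass

class Pixmap:  # hypothetical stand-in for PySide6.QtGui.QPixmap
    pass

class Combo:  # hypothetical stand-in for QComboBox
    def __init__(self) -> None:
        self.items: list = []

    @overload
    def addItem(self, icon: Union[Icon, Pixmap], text: str, userData: Any = ...) -> None: ...
    @overload
    def addItem(self, text: str, userData: Any = ...) -> None: ...
    def addItem(self, *args: Any, **kwargs: Any) -> None:
        self.items.append((args, kwargs))

class MyCombo(Combo):
    # re-declare the parent's overloads, then implement with a catch-all
    @overload
    def addItem(self, icon: Union[Icon, Pixmap], text: str, userData: Any = ...) -> None: ...
    @overload
    def addItem(self, text: str, userData: Any = ...) -> None: ...
    def addItem(self, *args: Any, **kwargs: Any) -> None:
        # normalize: text is args[1] when an icon/pixmap comes first
        if args and isinstance(args[0], (Icon, Pixmap)):
            text: Optional[str] = args[1] if len(args) > 1 else None
        else:
            text = args[0] if args else kwargs.get("text")
        self.last_text = text
        super().addItem(*args, **kwargs)

combo = MyCombo()
combo.addItem("hello")
combo.addItem(Icon(), "hi")
```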
| <python><overriding><overloading><mypy><typing> | 2023-08-30 22:02:36 | 1 | 6,279 | bers |
77,011,731 | 1,113,159 | python3 plotly timeline Gantt diagram customdata for every bar | <p>I am trying to use dash + plotly to create a horizontal timeline diagram.
Several actors are working on shared resources in some time slots.
I would like to visualize each actor's work process with horizontal bars.
But in the hover on each horizontal bar I would like to see the resource that this actor is working on in this timeslot.</p>
<p>For example:</p>
<p>actor1 working on resource1 from 08:00:00 to 10:30:00
actor2 working on resource1 from 10:00:00 to 10:40:00
actor1 working on resource2 from 10:31:00 to 11:00:00
actor3 working on resource3 from 09:15:00 to 10:15:00</p>
<p>I represented it in the following data sample :</p>
<pre><code>data_sample = {
'Actor': ['actor1', 'actor2', 'actor1', 'actor3'],
'Start Time': ['08:00:00', '10:00:00', '10:31:00', '09:15:00'],
'End Time': ['10:30:00', '10:40:00', '11:00:00', '10:15:00'],
'Resource': ['resource1', 'resource1', 'resource2', 'resource3']
}
df = pd.DataFrame(data_sample)
df['Start Time'] = pd.to_datetime(df['Start Time'])
df['End Time'] = pd.to_datetime(df['End Time'])
</code></pre>
<p>And trying to visualise in following way:</p>
<pre><code>fig = px.timeline(df, x_start='Start Time', x_end='End Time', y='Actor', color='Actor')
fig.update_traces(hovertemplate='Resource: %{customdata}<br>' +
'Start Time: %{base}<br>' +
'End Time: %{x}<br>',
customdata=df['Resource'])
app.layout = html.Div([dcc.Graph(figure=fig)])
</code></pre>
<p>I can see a pretty beautiful picture. The only problem, which I accidentally noticed and can't figure out how to deal with, is the wrong hovers. For example, the diagram shows that actor3 is working on resource1 instead of resource3.
Looks like customdata in update_traces starts from the very beginning for every actor.
But I need it to match the appropriate index of the 'Actor' frame.</p>
<p><a href="https://i.sstatic.net/dCUua.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/dCUua.png" alt="wrong hover for actor3" /></a></p>
<p>How can I deal with it? May be I could prepare data in another format? Or use another type of diagram?</p>
| <python><plotly><visualization> | 2023-08-30 21:53:17 | 1 | 645 | user1113159 |
77,011,713 | 217,332 | Is there any way to decorate a class such that the resulting class's supertypes are known to the typechecker? | <p>I'd like to be able to write a class decorator that returns classes of a given base type <code>B</code> regardless of the input class, such that I can then use the decorated classes as objects of type <code>B</code>.</p>
<p>I've done something like this:</p>
<pre><code>T = TypeVar("T")
class Base:
pass
def decorate_class(cls: Type[T]) -> Type[Base]:
class Ret(Base, cls):
pass
return Ret
@decorate_class
class C:
pass
def takes_base(inst: Type[Base]) -> Type[Base]:
return inst
takes_base(C)
</code></pre>
<p>but pyre complains that in the call to <code>takes_base</code>, it expected an argument of <code>Type[Base]</code> but instead got <code>Type[C]</code>, despite the return type of the decorator.</p>
| <python><python-3.8><python-typing> | 2023-08-30 21:50:24 | 0 | 83,780 | danben |
77,011,692 | 12,311,071 | Override the usage of calling an enum class in Python | <p>I have what is probably a very obscure problem.
My python script is intended to be run from the command line with some command line arguments.
Each argument needs to be parsed into a type more useful than a string. I am doing the following:</p>
<pre class="lang-py prettyprint-override"><code>typed_arguments = []
for argument, (parameter, parameter_type) in zip(arguments, self.parameters.items()):
try:
argument = parameter_type(argument)
typed_arguments.append(argument)
except ValueError:
print(self.usage_prompt)
raise TypeError(f"Invalid {parameter}! Must conform to {parameter_type}.")
</code></pre>
<p>where <code>arguments</code> is <code>sys.argv[1:]</code> and <code>self.parameters</code> is an <code>OrderedDict[str, type]</code> mapping the names of the parameters to the type they should conform to.</p>
<p>One of the arguments needs to be of an enum type. I have the following:</p>
<pre class="lang-py prettyprint-override"><code>from enum import Enum
class MessageType(Enum):
READ = 1
CREATE = 2
RESPONSE = 3
...
</code></pre>
<p>where the entry in the parameter dictionary is <code>{"message_type" : MessageType}</code></p>
<p>The problem, is that the code <code>MessageType("READ")</code> throws an exception instead of returning <code><MessageType.READ: 1></code>. Converting string to enum is instead done like so <code>MessageType["READ"]</code>, but this is not helpful for my above code.</p>
<p>Is there some way, maybe through overriding <code>__new__</code>, making a metaclass, or something entirely different, that I can make <code>MessageType("READ")</code> give me <code>MessageType.READ</code> and not throw an error?</p>
<p>Also, whilst trying to solve this problem, I discovered the argparse module. However this doesn't really solve my problem. Performing</p>
<pre class="lang-py prettyprint-override"><code>parser.add_argument("message_type", type=MessageType)
</code></pre>
<p>still gives me <code>error: argument message_type: invalid MessageType value: 'READ'</code>, as I presume it's trying to do the exact same thing.</p>
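For reference, <code>Enum</code> exposes a <code>_missing_</code> classmethod hook that is called when value lookup fails; a sketch of using it to fall back to name lookup, so both <code>MessageType("READ")</code> and <code>type=MessageType</code> in argparse work:

```python
from enum import Enum

class MessageType(Enum):
    READ = 1
    CREATE = 2
    RESPONSE = 3

    @classmethod
    def _missing_(cls, value):
        # fall back to name lookup so MessageType("READ") resolves
        if isinstance(value, str):
            try:
                return cls[value]
            except KeyError:
                return None
        return None

print(MessageType("READ"))
```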
| <python><enums><command-line-arguments> | 2023-08-30 21:46:05 | 0 | 314 | Harry |
77,011,575 | 1,028,560 | Deploying python docker images on AKS gives "exec /usr/bin/sh: exec format error" | <p>I'm trying to deploy a docker container with a python app to Azure Kubernetes Service. After deploying and looking at the logs on the new pod, I see this error:</p>
<blockquote>
<p>exec /usr/bin/sh: exec format error</p>
</blockquote>
<p>I'm building the container on a mac using the following docker buildx command:</p>
<pre><code>docker buildx build --platform linux/x86_64 -t <my username>/ingest .
</code></pre>
<p>My Docker file has the following header</p>
<pre><code>FROM --platform=linux/x86_64 python:3.11
</code></pre>
<p>My deployment yaml looks like the following and seems to pull the image just fine.(I just used something from the azure documentation as a template.)</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: ingest
namespace: default
spec:
replicas: 1
selector:
matchLabels:
bb: ingest
template:
metadata:
labels:
bb: ingest
spec:
containers:
- name: ingest
image: <my username>/ingest:0.0.1
imagePullPolicy: Always
</code></pre>
<p>When I inspect the images locally, I see</p>
<pre><code>"Architecture": "amd64",
"Os": "linux",
</code></pre>
<p>I was assuming that the default chip architecture is x86_64, but I'm not sure. I also built the image with the default chip architecture and OS and tested locally - it works fine. I'm new to Kubernetes and Azure, so maybe I'm missing something obvious. Is there a way to specify the chip architecture and the OS in my configuration? If I don't specify it, what is the default?</p>
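If the cluster mixes architectures (common when some node pools are ARM-based), pinning the deployment to amd64 nodes is one way to rule out a scheduling mismatch — a sketch, assuming the standard <code>kubernetes.io/arch</code> node label:

```yaml
spec:
  template:
    spec:
      nodeSelector:
        kubernetes.io/arch: amd64
      containers:
      - name: ingest
        image: <my username>/ingest:0.0.1
        imagePullPolicy: Always
```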
| <python><azure><docker><kubernetes> | 2023-08-30 21:19:32 | 1 | 1,595 | Sanjeev |
77,011,529 | 9,415,280 | TensorFlow Probability inference error input type | <p>I train a model using tensorflow_probability and distributions.</p>
<p>To train it I format my data like this (a 2-head model, so 2 input sets):</p>
<pre><code>input_1 = tf.data.Dataset.from_tensor_slices(Xdata)
input_2 = tf.data.Dataset.from_tensor_slices(Xphysio)
output = tf.data.Dataset.from_tensor_slices(yobs)
combined_dataset = tf.data.Dataset.zip(((input_1, input_2), output))
input_dataset = combined_dataset.batch(30)
</code></pre>
<p>Training works well... but when I try to do inference like in this <a href="https://github.com/tensorflow/probability/blob/main/tensorflow_probability/examples/jupyter_notebooks/Probabilistic_Layers_Regression.ipynb" rel="nofollow noreferrer">example</a>, at cell #15 they call the model like this:</p>
<pre><code>yhats = model(combined_dataset)
</code></pre>
<p>I got the error</p>
<pre><code>TypeError: Inputs to a layer should be tensors. Got: <ZipDataset element_spec=((TensorSpec(shape=(120, 9), dtype=tf.float32, name=None), TensorSpec(shape=(24,), dtype=tf.float32, name=None)), TensorSpec(shape=(), dtype=tf.float32, name=None))>
</code></pre>
<p>I try:</p>
<pre><code>yhats = model([input_1, input_2])
</code></pre>
<p>and got same error:</p>
<pre><code>TypeError: Inputs to a layer should be tensors. Got: <TensorSliceDataset element_spec=TensorSpec(shape=(120, 9), dtype=tf.float32, name=None)>
</code></pre>
<p>Using <code>yhats = model.predict([Xdata, Xphysio])</code> runs well but seems not to return a valid format for tfd.Distribution:</p>
<pre><code>assert isinstance(yhat, tfd.Distribution):
Traceback (most recent call last):
File "E:\Windows programs\PyCharm Community Edition 2021.3.1\plugins\python-ce\helpers\pydev\_pydevd_bundle\pydevd_exec2.py", line 3, in Exec
exec(exp, global_vars, local_vars)
File "<input>", line 1, in <module>
AssertionError
</code></pre>
| <python><tensorflow><machine-learning><tensorflow-datasets><tensorflow-probability> | 2023-08-30 21:08:53 | 1 | 451 | Jonathan Roy |
77,011,490 | 2,223,106 | Loop through module and run inspect.getdoc() on each method | <p>I can print a list of all the builtins:</p>
<pre><code>for b in __builtins__.__dict__:
    print(b)
__name__
__doc__
__package__
__loader__
__spec__
__build_class__
__import__
abs
all
any
ascii
...
</code></pre>
<p>Then <code>import inspect</code> and <code>inspect.getdoc(<some_module>)</code>.</p>
<p>First thought was:</p>
<pre><code>for b in __builtins__.__dict__:
print(b, inspect.getdoc(b))
</code></pre>
<p>But <em>you know</em> that at that point "b" is just a string of the name of the module.</p>
<p>Tried: <code>dir(__builtins__)</code>, which also seems to be a list of strings.</p>
<p>Using <code>list</code> wont work:</p>
<pre><code>list(__builtins__)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: 'module' object is not iterable
</code></pre>
<p>Obviously, I can just <a href="https://docs.python.org/3/library/functions.html" rel="nofollow noreferrer">read the docs online</a>, but am wondering if there <em>is</em> a way to do what I'm trying to do <em>within</em> Python... rather <em>how</em> it can be done.</p>
<p>Ultimately, it would be really cool to have a little generator that can output them one by one:</p>
<pre><code>mybuiltins.next()
Method: reversed
Doc: 'Return a reverse iterator over the values of the given sequence.'
</code></pre>
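The string problem described above disappears when you keep the objects rather than their names; a sketch of the little generator, using the importable <code>builtins</code> module instead of the <code>__builtins__</code> alias:

```python
import builtins
import inspect

def builtin_docs():
    # vars(builtins) maps each name to the actual object, not just a string
    for name, obj in vars(builtins).items():
        if callable(obj):
            doc = inspect.getdoc(obj)
            if doc:
                yield name, doc

docs = builtin_docs()
name, doc = next(docs)
print("Method:", name)
print("Doc:", doc)
```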
| <python><python-3.x><introspection> | 2023-08-30 21:00:51 | 1 | 6,610 | MikeiLL |
77,011,448 | 1,506,763 | Why is Jitted Numba function slower than original function? | <p>I've written a function to create uniformly spaced points on a disk, and since it's run quite often and on relatively large arrays I figured the application of <code>numba</code> would increase the speed significantly. However, upon running a quick test I've found that the <code>numba</code> function is more than twice as slow.</p>
<p>Is there a way to figure out what is slowing down the <code>numba</code> function?</p>
<p>Here's the function:</p>
<pre class="lang-py prettyprint-override"><code>@njit(cache=True)
def generate_points_turbo(centre_point, radius, num_rings, x_axis=np.array([-1, 0, 0]), y_axis=np.array([0, 1, 0])):
"""
Generate uniformly spaced points inside a circle
Based on algorithm from:
http://www.holoborodko.com/pavel/2015/07/23/generating-equidistant-points-on-unit-disk/
Parameters
----------
centre_point : np.ndarray (1, 3)
radius : float/int
num_rings : int
x_axis : np.ndarray
y_axis : np.ndarray
Returns
-------
points : np.ndarray (n, 3)
"""
if num_rings > 0:
delta_R = 1 / num_rings
ring_radii = np.linspace(delta_R, 1, int(num_rings)) * radius
k = np.arange(num_rings) + 1
points_per_ring = np.rint(np.pi / np.arcsin(1 / (2*k))).astype(np.int32)
num_points = points_per_ring.sum() + 1
ring_indices = np.zeros(int(num_rings)+1)
ring_indices[1:] = points_per_ring.cumsum()
ring_indices += 1
points = np.zeros((num_points, 3))
points[0, :] = centre_point
for indx in range(len(ring_radii)):
theta = np.linspace(0, 2 * np.pi, points_per_ring[indx]+1)
points[ring_indices[indx]:ring_indices[indx+1], :] = ((ring_radii[indx] * np.cos(theta[1:]) * x_axis[:, None]).T
+ (ring_radii[indx] * np.sin(theta[1:]) * y_axis[:, None]).T)
return points + centre_point
</code></pre>
<p>And it's called like this:</p>
<pre class="lang-py prettyprint-override"><code>centre_point = np.array([0,0,0])
radius = 1
num_rings = 15
generate_points_turbo(centre_point, radius, num_rings )
</code></pre>
<p>Would be great if someone knows why the function is slower when <code>numba</code> compiled or how to go about finding out what the bottleneck for the <code>numba</code> function is.</p>
<h3>Update: Possible computer specific size dependence</h3>
<p>It seems the <code>numba</code> function is working, but the cross-over between where it's faster and slower may be hardware-specific.</p>
<pre class="lang-py prettyprint-override"><code>%timeit generate_points(centre_point, 1, 2)
99.5 µs ± 932 ns per loop (mean ± std. dev. of 7 runs, 10,000 loops each)
%timeit generate_points_turbo(centre_point, 1, 2)
213 µs ± 8.4 µs per loop (mean ± std. dev. of 7 runs, 1,000 loops each)
%timeit generate_points(centre_point, 1, 20)
647 µs ± 11.2 µs per loop (mean ± std. dev. of 7 runs, 1,000 loops each)
%timeit generate_points_turbo(centre_point, 1, 20)
314 µs ± 8.74 µs per loop (mean ± std. dev. of 7 runs, 1,000 loops each)
%timeit generate_points(centre_point, 1, 200)
11.9 ms ± 375 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
%timeit generate_points_turbo(centre_point, 1, 200)
7.9 ms ± 243 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
</code></pre>
<p>After about 12-15 rings the <code>numba</code> function (*_turbo) starts to become a similar speed or faster on my machine, but the performance gains at larger size are smaller than expected. But seems like it is actually working, just that some part of the function is heavily size dependent.</p>
| <python><numpy><performance><numba> | 2023-08-30 20:51:23 | 1 | 676 | jpmorr |
77,011,313 | 5,431,734 | how to build a package and include dependencies conditional on os | <p>I am trying to build a package that, depending on the OS of the target machine, will include an extra package. More specifically, my application depends on <code>redis</code>; however, when the machine where the package is going to be deployed runs Linux, I would like to have <code>redis-server</code> as an extra dependency.</p>
<p>What I was thinking to do is to have in my <code>setup.py</code> the following lines:</p>
<pre><code>from setuptools import setup, find_packages
import sys
install_deps = ['redis']
if sys.platform in ['linux', 'darwin']:
install_deps.append('redis-server')
setup(
name="my_app",
version=version,
license="BSD",
author="first_name last_name",
author_email="first.last@mymail.com",
packages=find_packages(),
install_requires=install_deps,
)
</code></pre>
<p>That unfortunately doesn't work, since the value of <code>install_deps</code> is determined the moment I run 'python setup.py sdist bdist_wheel' on the machine that holds the source code, not when I install the wheel on the new (target) machine. I would like, when I deploy my package and run <code>pip install my_app</code> on a Windows PC, to install only <code>redis</code> as a requirement; but when the target computer runs Linux, pip should fetch both <code>redis</code> and <code>redis-server</code> into the environment.</p>
<p>Is there a way to do that without releasing two different builds, one for windows and another for linux?</p>
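For what it's worth, PEP 508 environment markers move the platform check from build time to install time, which is exactly the gap described above. A sketch of the <code>install_requires</code> value to pass to <code>setup(...)</code>, keeping the question's <code>redis-server</code> distribution name as-is (whether that name exists on PyPI is an assumption from the question):

```python
# drop-in replacement for the install_deps computation in setup.py:
install_requires = [
    "redis",
    # PEP 508 marker: evaluated by pip on the TARGET machine at install
    # time, so a single sdist/wheel works on every platform
    'redis-server; sys_platform == "linux" or sys_platform == "darwin"',
]
```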
| <python><redis><setuptools> | 2023-08-30 20:22:41 | 0 | 3,725 | Aenaon |
77,011,280 | 2,318,409 | python installation moved, pip stopped working | <p>I had Python 3.6 on my Windows machine under "C:\Program Files".
I wanted to upgrade it to 3.11.
The installer completed successfully, but the new version was installed not in Program Files but in "C:\Users\myname\AppData\Local\Programs\Python\Python311".</p>
<p>I moved the Python311 directory to "C:\Program Files" and Python itself works, but pip is not working anymore. When I try to run</p>
<p>pip --version</p>
<p>I am getting</p>
<p>Fatal error in launcher: Unable to create process using '"C:\Users\myname\AppData\Local\Programs\Python\Python311\python.exe" "C:\Program Files\Python311\Scripts\pip.exe" --version': The system cannot find the file specified.</p>
<p>Can someone please tell me what I need to do to fix it?</p>
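The <code>pip.exe</code> launcher hard-codes the absolute interpreter path recorded at install time, which is why the error still points at <code>AppData</code> after the move. A sketch of regenerating it from the relocated interpreter (paths taken from the question; adjust as needed):

```shell
"C:\Program Files\Python311\python.exe" -m pip --version
"C:\Program Files\Python311\python.exe" -m pip install --upgrade --force-reinstall pip
```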
| <python><pip> | 2023-08-30 20:16:55 | 1 | 596 | Gary Greenberg |
77,011,220 | 4,865,723 | Text in QLabel does not increase the QDialog | <p>This problem is an example to improve the understanding of layout and size policy.
In this example code I do set the width of a <code>QDialog</code> to a minimum and maximum. This works. I don't touch the height of the dialog.</p>
<p>The problem is that the text in a <code>QLabel</code> is too much to fit into the dialog, but the dialog does not increase its height to fit the whole label.</p>
<p>Which component needs to change its behavior: the dialog, the layout, or the label?
Do I need to change the size policy or something else?</p>
<pre class="lang-py prettyprint-override"><code>#!/usr/bin/env python3
import sys
import this
from PyQt5.QtCore import *
from PyQt5.QtWidgets import *
class MyDialog(QDialog):
def __init__(self):
super().__init__()
self.setMinimumWidth(200)
self.setMaximumWidth(400)
wdg = QLabel(self)
wdg.setWordWrap(True)
# Zen of Python in Rot13
wdg.setText(this.s.replace('\n', ' ') + " THE END")
layout = QVBoxLayout(self)
layout.addWidget(wdg)
if __name__ == "__main__":
app = QApplication(sys.argv)
dlg = MyDialog()
dlg.show()
sys.exit(app.exec())
</code></pre>
<p><a href="https://i.sstatic.net/pvo4k.gif" rel="nofollow noreferrer"><img src="https://i.sstatic.net/pvo4k.gif" alt="enter image description here" /></a></p>
| <python><qt><pyqt> | 2023-08-30 20:03:29 | 1 | 12,450 | buhtz |
77,011,196 | 5,240,684 | Random selection from list avoiding a given element (negative sampling) | <p>I have a list of item pairs, where each item is indexed by a numeric id.</p>
<p>E.g.</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>item_1</th>
<th>item_2</th>
</tr>
</thead>
<tbody>
<tr>
<td>1</td>
<td>1</td>
</tr>
<tr>
<td>1</td>
<td>1</td>
</tr>
<tr>
<td>1</td>
<td>2</td>
</tr>
<tr>
<td>1</td>
<td>3</td>
</tr>
<tr>
<td>3</td>
<td>2</td>
</tr>
<tr>
<td>4</td>
<td>5</td>
</tr>
<tr>
<td>..</td>
<td>..</td>
</tr>
</tbody>
</table>
</div>
<p>The number pairs are not necessarily arranged from 0 and might not include all values in between (e.g. item 7 could exist, even though item 6 is not in the dataset).</p>
<p>I sample random pairs from this list and want to create multiple negative samples (i.e. samples that do not exist) for each positive sample.</p>
<p>Currently, my code looks like this, but is taking way too long:</p>
<pre><code>import pandas as pd
import numpy as np
num_neg_samples = 4
# sample data
rng = np.random.default_rng()
all_pairs = pd.DataFrame({
'TARGET_ID': rng.choice(1_000_000, size=500_000, replace=True),
'CONTEXT_ID': rng.choice(1_000_000, size=500_000, replace=True)
})
pair_sample = all_pairs.sample(10240)
print(pair_sample.head())
import time
start = time.time()
targets = np.repeat(pair_sample['TARGET_ID'].values,(num_neg_samples+1),axis=0)
contexts = pair_sample['CONTEXT_ID']
negids = []
print(f'{len(pair_sample["TARGET_ID"])} target_ids to create neg samples for')
for index, target_id in enumerate(pair_sample['TARGET_ID']):
neg_samples = rng.choice(
all_pairs.loc[all_pairs['TARGET_ID'] != target_id, 'TARGET_ID'].values,
size=num_neg_samples
)
negids.append(neg_samples)
print(time.time() - start, 'seconds')
batch = (negatives, contexts, targets)
</code></pre>
<p>Result:</p>
<pre><code> TARGET_ID CONTEXT_ID
252373 5238 953345
290732 589947 869541
124135 365468 373147
129140 566125 542728
450409 688717 936377
10240 target_ids to create neg samples for
26.611750602722168 seconds
</code></pre>
<p>I am sampling batches of 10240 pairs per training round. Thus, I would like to end up with 40960 negative pairs, 4 for each "target item" in my dataset.</p>
<p>Does anyone have a good idea how to speed up this code? Any help is greatly appreciated.</p>
<p>EDIT: As the question came up: The pairs are items that appear in a search result together. I want to train a model similar to word2vec or an autoencoder that generates an embedding that is similar for items that appear together in a search result. To improve the embeddings, I would like to train with negative samples as well, i.e. pairs of items that do not appear in the search results together.</p>
<p>EDIT2: Please note, that the pairs I have available might include duplicates, i.e. multiple times the same exact pair. Each item id will appear multiple times in both <code>item_1</code> and <code>item_2</code> columns.</p>
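One vectorized sketch (with simplified stand-in arrays mirroring the question's setup, so whether it fits the real data is an assumption): draw all negatives in one call and redraw only the collisions, instead of filtering the pool once per target:

```python
import numpy as np

rng = np.random.default_rng(0)
pool = rng.choice(1_000_000, size=500_000, replace=True)  # stands in for all_pairs['TARGET_ID'].values
targets = rng.choice(pool, size=10_240)
num_neg_samples = 4

# one big draw instead of 10_240 per-target boolean filters
neg = rng.choice(pool, size=(targets.size, num_neg_samples))

# redraw only the entries that collide with their own target id
mask = neg == targets[:, None]
while mask.any():
    neg[mask] = rng.choice(pool, size=int(mask.sum()))
    mask = neg == targets[:, None]
```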
| <python><pandas><numpy> | 2023-08-30 19:58:57 | 3 | 1,057 | Lukas Hestermeyer |
77,011,176 | 3,973,175 | How to extend plot area to keep text inside of plot area without clipping | <p>I have a plot with a long text string that was inserted (string not determined by me, so I can't shorten it):</p>
<pre><code>import matplotlib.pyplot as plt
from matplotlib.collections import LineCollection
lines = [[(0, 1), (1, 1)], [(2, 3), (3, 3)], [(1, 2), (1, 3)]]
z = [1, 2, 3]
lc = LineCollection(lines, array=z, cmap="coolwarm", linewidth = 8)
fig, ax = plt.subplots()
ax.add_collection(lc)
ax.autoscale()
fig.colorbar(lc)
ax.text(2,3,'some very long text that will go outside of the plot boundaries')
plt.savefig('tmp.png')
</code></pre>
<p>which generates the following:</p>
<p><a href="https://i.sstatic.net/5Ecx2.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/5Ecx2.png" alt="enter image description here" /></a></p>
<p>as you can see, the long text goes outside of the plot area.</p>
<p><a href="https://stackoverflow.com/questions/21833348/matplotlib-text-only-in-plot-area">matplotlib text only in plot area</a> is similar, but I cannot clip the text.</p>
<p>I have also tried <code>subplots.adjust()</code> (<a href="https://stackoverflow.com/questions/30508850/increasing-the-space-for-x-axis-labels-in-matplotlib">Increasing the space for x axis labels in Matplotlib</a>) but that generates something even worse:</p>
<p><a href="https://i.sstatic.net/lK6FF.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/lK6FF.png" alt="enter image description here" /></a></p>
<p>I've also thought of using some other way than subplots, but subplots seem to be standard practice: <a href="https://matplotlib.org/stable/gallery/shapes_and_collections/line_collection.html" rel="nofollow noreferrer">https://matplotlib.org/stable/gallery/shapes_and_collections/line_collection.html</a></p>
<p>How can I extend the plot area to the maximum extent of the text?</p>
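One approach worth trying (a sketch, not necessarily the only fix): let <code>savefig</code> recompute the saved region from the drawn artists rather than pre-sizing the axes — <code>bbox_inches="tight"</code> grows the saved area, and <code>bbox_extra_artists</code> makes sure the text is included in that computation:

```python
import os
import tempfile

import matplotlib
matplotlib.use("Agg")  # headless backend so the example runs without a display
import matplotlib.pyplot as plt

fig, ax = plt.subplots()
txt = ax.text(2, 3, "some very long text that will go outside of the plot boundaries")
ax.set_xlim(0, 4)
ax.set_ylim(0, 4)

out_path = os.path.join(tempfile.mkdtemp(), "tmp.png")
fig.savefig(out_path, bbox_inches="tight", bbox_extra_artists=[txt])
```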
| <python><matplotlib><plot-annotations> | 2023-08-30 19:55:17 | 0 | 6,227 | con |
77,011,152 | 5,305,242 | Python regex ignores first occurrence | <p>I am attempting to ignore the first and third occurrences of the date (the "Captured" and "1St Cycle" lines) and strip the date from every other line. Here is my attempt, which is not working:</p>
<pre><code>Input = '''
Captured on: 08-29-2023 09:43:26
Start Time: 08-28-2023 11:11:57
1St Cycle Time: 08-28-2023 11:12:06
Channel-00 Cycles Completed: 4404
Channel-00 Triggered Time: 08-29-2023 00:45:08
Channel-01 Cycles Completed: 24
Channel-01 Triggered Time: 08-28-2023 11:15:57'''
stra = re.sub("^(?!Captured.*$).*?(\d+-\d+-2023 )", "", Input)
</code></pre>
<p>Expecting output:</p>
<pre class="lang-none prettyprint-override"><code>Captured on: 08-29-2023 09:43:26
Start Time: 11:11:57
1St Cycle Time: 08-28-2023 11:12:06
Channel-00 Cycles Completed: 4404
Channel-00 Triggered Time: 00:45:08
Channel-01 Cycles Completed: 24
Channel-01 Triggered Time: 11:15:57
</code></pre>
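For reference, one pattern that produces the expected output above (a sketch; it assumes the lines whose dates should be kept always start with <code>Captured</code> or <code>1St</code>, and uses <code>(?m)</code> so <code>^</code> anchors at every line start):

```python
import re

Input = '''
Captured on: 08-29-2023 09:43:26
Start Time: 08-28-2023 11:11:57
1St Cycle Time: 08-28-2023 11:12:06
Channel-00 Cycles Completed: 4404
Channel-00 Triggered Time: 08-29-2023 00:45:08
Channel-01 Cycles Completed: 24
Channel-01 Triggered Time: 08-28-2023 11:15:57'''

# the lookahead skips the two lines to keep; group 1 preserves the label,
# and only the "MM-DD-YYYY " part after the colon is removed
result = re.sub(
    r'(?m)^((?!Captured|1St)[^:\n]*: )\d{2}-\d{2}-\d{4} ',
    r'\1',
    Input,
)
print(result)
```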
| <python><regex> | 2023-08-30 19:50:58 | 3 | 458 | OO7 |
77,010,734 | 843,458 | python append list creates list of lists instead of one list | <p>I want to get a list of all image files.</p>
<pre><code>imageExtensionList = ['jpg', 'arw', 'tif', 'mp4']
fileList = []
for extension in imageExtensionList:
searchStr = str(importPath) + '/**/*.' + extension
files = glob.glob(searchStr, recursive=True)
fileList.append(files)
</code></pre>
<p>This, however, creates a list of lists; I want the files to be in a single list.
There seem to be many solutions in Python for reducing/flattening lists:
<a href="https://stackoverflow.com/questions/952914/how-do-i-make-a-flat-list-out-of-a-list-of-lists">How do I make a flat list out of a list of lists?</a></p>
<p>However, which should I apply to this list of strings?</p>
<p>However, which should I apply to this list of strings?</p>
<p>Also, the search string evaluates to <code>'P:\Import/**/*.arw'</code>, so the Windows and Unix path separators are mixed. How can I correct this?</p>
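<p>Both symptoms can be addressed together: <code>list.extend</code> keeps a single flat list, and <code>os.path.join</code> builds the pattern with the platform's separator. A sketch, assuming <code>importPath</code> is the import directory (the <code>"."</code> placeholder stands in for the real path):</p>

```python
import glob
import os

importPath = "."                                    # placeholder for the real import directory
imageExtensionList = ["jpg", "arw", "tif", "mp4"]   # extensions without leading dots

fileList = []
for extension in imageExtensionList:
    # os.path.join avoids mixing '\' and '/' in the pattern
    searchStr = os.path.join(str(importPath), "**", "*." + extension)
    # extend() splices the matches into the flat list;
    # append() would nest one sub-list per extension
    fileList.extend(glob.glob(searchStr, recursive=True))
```

<p>Equivalently as a flat comprehension: <code>fileList = [f for ext in imageExtensionList for f in glob.glob(..., recursive=True)]</code>.</p>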
| <python><list> | 2023-08-30 18:42:04 | 1 | 3,516 | Matthias Pospiech |
77,010,717 | 1,115,833 | How to convert a string list of unquoted string entries to a list in Python? | <p>I have a bizzare problem where I am reading a csv file with entries like so:</p>
<pre><code>4,[the mentalist, dodgeball, meet the fockers]
5,[godfather, the heat, friends]
...
</code></pre>
<p>I read this in Python using the csv module, and normally I'd do:</p>
<pre><code>import ast
x=ast.literal_eval(row[1])
</code></pre>
<p>However this obviously fails because the list entries are not quoted.</p>
<p>How do I get around this problem? :(</p>
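<p>Since the entries are not valid Python literals (and the unquoted commas also confuse the csv module), one hedged workaround is to split each raw line only on its first comma and parse the bracketed part by hand, under the assumption that titles themselves never contain commas:</p>

```python
import io

raw = "4,[the mentalist, dodgeball, meet the fockers]\n5,[godfather, the heat, friends]\n"

rows = []
for line in io.StringIO(raw):           # stands in for the open csv file
    # partition(",") splits only on the first comma, keeping the list intact
    key, _, field = line.rstrip("\n").partition(",")
    inner = field.strip().strip("[]")   # drop the surrounding brackets
    titles = [t.strip() for t in inner.split(",")] if inner else []
    rows.append((key, titles))

print(rows[0])  # ('4', ['the mentalist', 'dodgeball', 'meet the fockers'])
```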
| <python><python-3.x><csv> | 2023-08-30 18:39:00 | 1 | 7,096 | JohnJ |
77,010,712 | 820,088 | Group Range of Rows in Excel using Python Win32com | <p>I am attempting to group a range of rows in Excel using the win32com python api for Excel Objects. I can't seem to get it to work. My code is as follows:</p>
<pre class="lang-py prettyprint-override"><code> excel = win32com.client.gencache.EnsureDispatch("Excel.Application")
excel.Workbooks.Open(r"pathtomyexcel.xlsx")
excel.Visible = True
book = excel.ActiveWorkbook
sheet = book.Worksheets("Test")
sheet.Activate()
# trying to get a range of rows using various methods.
# Using cell ranges
sheet.Range(sheet.Cells(6, 1),sheet.Cells(15,1)).Group
# Using row ranges
sheet.Range(sheet.Rows(6), sheet.Rows(15)).Rows.Group
# Using the rows range method
sheet.Rows("6:15").Group
</code></pre>
<p>I found <a href="https://stackoverflow.com/questions/48020902/grouping-of-rows-in-excel-using-python">this</a> post, but the answer is not clear as noted in the comments. It seems like the group method needs a range object, however my tests above just aren't doing anything. No errors are occurring either.</p>
<p>For reference, I am using Python 2.7 and Excel 2010.</p>
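<p>It is worth noting that all three attempts reference <code>Group</code> without parentheses. With the makepy-generated wrappers from <code>gencache.EnsureDispatch</code>, <code>Group</code> is an ordinary method, and accessing a method without calling it just yields the bound method object and silently does nothing, which matches the "no errors, no effect" symptom. A plain-Python sketch of the distinction (the <code>Rows</code> class is a stand-in, not the COM object):</p>

```python
class Rows:
    """Stand-in for an Excel Rows range; records whether Group ran."""
    def __init__(self):
        self.grouped = False

    def Group(self):
        self.grouped = True

rows = Rows()
rows.Group           # like `sheet.Rows("6:15").Group` -- no call, nothing happens
assert rows.grouped is False
rows.Group()         # with parentheses the method actually executes
assert rows.grouped is True
```

<p>So the first thing to test is <code>sheet.Rows("6:15").Group()</code> with the call parentheses.</p>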
| <python><excel><win32com><group> | 2023-08-30 18:38:30 | 2 | 4,435 | Mike |
77,010,708 | 1,015,703 | SQLAlchemy 2.0 - Error when using "stringified" forward reference to models in separate files | <p>I want to put each of my SQLAlchemy models in a separate file, in a "models" package. Because of circular reference problems, I am trying to use the "stringified" forward reference trick.</p>
<p>Here's the code in the "models" package:</p>
<pre class="lang-py prettyprint-override"><code># __init__.py
from sqlalchemy.orm import DeclarativeBase
class Base(DeclarativeBase):
pass
</code></pre>
<pre class="lang-py prettyprint-override"><code># parent.py
from typing import List
from sqlalchemy.orm import Mapped, mapped_column, relationship
from app.models import Base
class Parent(Base):
__tablename__ = "parent"
id: Mapped[int] = mapped_column(primary_key=True, index=True)
name: Mapped[str]
children: Mapped[List['Child']] = relationship(
back_populates='parent',viewonly=True)
</code></pre>
<pre class="lang-py prettyprint-override"><code># child.py
from sqlalchemy import ForeignKey
from sqlalchemy.orm import Mapped, mapped_column, relationship
from app.models import Base
class Child(Base):
__tablename__ = "child"
id: Mapped[int] = mapped_column(primary_key=True, index=True)
name: Mapped[str]
parent_id: Mapped[int] = mapped_column(ForeignKey('parent.id'))
parent: Mapped['Parent'] = relationship(
back_populates='children',
viewonly=True)
</code></pre>
<p>And here's the test code:</p>
<pre class="lang-py prettyprint-override"><code># test_sa.py
from sqlalchemy import select
from app.models.child import Child
class TestParentChild:
def test_by_user(self):
sql = (select(Child, Child.parent)
.join(Child.parent)
.limit(10))
print()
print(sql)
</code></pre>
<p>But this gives me the error:</p>
<blockquote>
<p>sqlalchemy.exc.InvalidRequestError: When initializing mapper
Mapper[Child(child)], expression 'Parent' failed to locate a name
('Parent'). If this is a class name, consider adding this
relationship() to the <class 'app.models.child.Child'> class after
both dependent classes have been defined.</p>
</blockquote>
<p>This seems like it should be a textbook approach. Why is SQLAlchemy not finding the <code>Parent</code> class while mapping, and how might I fix it?</p>
| <python><sqlalchemy> | 2023-08-30 18:37:11 | 0 | 1,625 | David H |
77,010,396 | 2,153,235 | Getting pyreadline3.Readline's history working with Python 3.9 | <p>I am following <a href="https://stackoverflow.com/a/7008316/2153235">this
Q&A</a>'s procedure for
accessing Python's REPL command history -- specifically at the Conda
prompt, not in an IDE.</p>
<p>I am using Python 3.9 installed under Anaconda on Windows 10. Version
3.9 needed for backward compatibility with colleagues' past work.</p>
<p>The history is apparently handled by <code>pyreadline3.Readline</code>, which is
the implementation of GNU's <em>readline</em> for Python 3. I am relying on
Anaconda to manage the package dependencies. Unfortunately, the REPL
commands aren't being captured.</p>
<p>Annex A shows a session transcript in the Conda environment <code>py39</code>,
which contains Python 3.9. It contains three chunks:</p>
<ol>
<li><p>Installation of pyreadline3</p>
</li>
<li><p>Test-driving the history capture in Python, which fails</p>
</li>
<li><p>Using Cygwin's Bash to confirm that the written history file is
empty.</p>
</li>
</ol>
<p><em><strong>What else can I do to enable history capture?</strong></em></p>
<p><strong>Tangentially relevant:</strong> <code>%USERPROFILE%\.python_history</code> captures REPL commands, but does not update until one exits the Python REPL session.</p>
<h1>Annex A: Session transcript</h1>
<pre><code>REM Insteall pyreadline3
REM --------------------
(py39) C:\Users\User.Name> conda install pyreadline3
<...snip...>
Collecting package metadata (current_repodata.json): \ DEBUG:urllib3.connectionpool:Starting new HTTPS connection (1): repo.anaconda.com:443
<...snip...>
done
Solving environment: done
==> WARNING: A newer version of conda exists. <==
current version: 23.7.2
latest version: 23.7.3
Please update conda by running
$ conda update -n base -c defaults conda
Or to minimize the number of packages updated during conda update use
conda install conda=23.7.3
## Package Plan ##
environment location: C:\Users\User.Name\anaconda3\envs\py39
added / updated specs:
- pyreadline3
The following packages will be downloaded:
package | build
---------------------------|-----------------
pyreadline3-3.4.1 | py39haa95532_0 148 KB
------------------------------------------------------------
The following NEW packages will be INSTALLED:
pyreadline3 pkgs/main/win-64::pyreadline3-3.4.1-py39haa95532_0
Proceed ([y]/n)? y <RETURN>
Downloading and Extracting Packages
pyreadline3-3.4.1 | 148 KB | | 0% DEBUG:urllib3.connectionpool:Starting new HTTPS connection (1): repo.anaconda.com:443
DEBUG:urllib3.connectionpool:https://repo.anaconda.com:443 "GET /pkgs/main/win-64/pyreadline3-3.4.1-py39haa95532_0.conda HTTP/1.1" 200 151256
Preparing transaction: done
Verifying transaction: done
Executing transaction: done
REM Illustrate in Python that commands aren't captured
REM -------------------------------------------------
(py39) C:\Users\User.Name> python
Python 3.9.17 (main, Jul 5 2023, 21:22:06) [MSC v.1916 64 bit (AMD64)] on win32
Type "help", "copyright", "credits" or "license" for more information.
>>> from pyreadline3 import Readline
>>> readline = Readline()
>>> readline.get_current_history_length() # Commands captured
0
>>> readline.get_history_length() # Length of buffer
100
>>> import os # Write history to file
>>> os.getcwd()
'C:\\Users\\User.Name'
>>> readline.write_history_file("./pyHistory.txt")
# Use Cygwin's Bash to show that target history file is empty
#------------------------------------------------------------
User.Name@Laptop-Name /c/Users/User.Name
$ new # This is "ls -ltA", cleaned up with "sed"
0 2023-08-30 13:10 pyHistory.txt*
2359296 2023-08-29 01:39 NTUSER.DAT*
<...snip...>
</code></pre>
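<p>One thing the transcript suggests: <code>Readline()</code> constructs a fresh object with its own empty history buffer, not connected to the interactive session, which would explain both the length of 0 and the empty file. Assuming pyreadline3 exposes its usual <code>readline</code> shim module on Windows, the session's history is reached through the standard <code>readline</code> interface instead. A hedged sketch:</p>

```python
# Run inside the same interactive session (not via a separate Readline() object):
import readline  # on Windows this import is satisfied by pyreadline3's shim

readline.add_history("print('hello')")  # the REPL adds entries like this itself
assert readline.get_current_history_length() >= 1
readline.write_history_file("pyHistory.txt")
```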
| <python><read-eval-print-loop><command-history> | 2023-08-30 17:46:00 | 1 | 1,265 | user2153235 |
77,010,245 | 10,237,558 | Batch names and send HTTP request using pandas.Series and Spark UDF | <p>I have a column "names" and I want to create an HTTP POST request using these names in the payload.</p>
<pre><code># Create a Spark session
spark = SparkSession.builder.appName('SparkByExamples.com').getOrCreate()
bearerToken = 'abc'
records = []
def insert(names):
for name in names:
fields = {"fields": {
"name": name
}
}
records.append(fields)
print(records)
url = insertURL
try:
payload = json.dumps({
"records": records
})
headers = {
'Content-Type': 'application/json',
'Accept': 'application/json',
'Authorization': f'Bearer {bearerToken}'
}
response = requests.request(
"POST", url, headers=headers, data=payload)
dataRes = response.json()
return pd.Series(["Success"] * len(names))
except Exception as e:
print("Error Occurred:", e)
def main():
match args.subparser:
case 'Insert':
# Create and initialize a Faker Generator
fake = Faker()
global bearerToken
numberOfRecords = args.numberOfRecordsToBeInserted
columns = ["seqno", "name"]
data = []
for i in range(numberOfRecords):
data.append((i, fake.name()))
df = spark.createDataFrame(pd.DataFrame(data), schema=columns)
call_udf = pandas_udf(insert, returnType=StringType())
names_series = df.select("name").rdd.flatMap(lambda x: x).collect()
names_df = spark.createDataFrame([(name,) for name in names_series], ["name"])
# Apply the Pandas UDF to the names DataFrame
result_df = names_df.withColumn("response", call_udf(col("name")))
result_df.show(truncate=False)
case _:
print("Invalid operation")
if __name__ == "__main__":
main()
</code></pre>
<p>This leads to a successful insert of the names; however, each call sees <code>records</code> containing only a single name. The <code>print(records)</code> output is:</p>
<pre><code>[{'fields': {'name': 'Jessica Moore'}}]
[{'fields': {'name': 'Noah Crawford'}}]
[{'fields': {'name': 'Jessica Welch'}}]
[{'fields': {'name': 'Rita Brown DDS'}}]
[{'fields': {'name': 'Ronald Saunders'}}]
</code></pre>
<p>rather than the expected:</p>
<pre><code>[{'fields': {'name': 'Jessica Moore'}},{'fields': {'name': 'Noah Crawford'}},{'fields': {'name': 'Jessica Welch'}},{'fields': {'name': 'Rita Brown DDS'}},{'fields': {'name': 'Ronald Saunders'}}]
</code></pre>
<p>How do I change the above code to get it in this format?</p>
<p>[EDIT] Added a reproducible example:</p>
<pre><code>from pyspark.sql import SparkSession
import argparse
from pyspark.sql.functions import col, pandas_udf
from faker import Faker
from pyspark.sql.types import StringType
import pandas as pd
# Create a Spark session
spark = SparkSession.builder.appName('SparkByExamples.com').getOrCreate()
parser = argparse.ArgumentParser(
description="Program Arguments for Insert")
subparsers = parser.add_subparsers(dest='subparser')
parser_1 = subparsers.add_parser(
'Insert-Records', help='Insert Records')
parser_1.add_argument("--numberOfRecordsToBeInserted", type=int, default=500,
help="The number of records to be inserted into the newly created vault")
args = parser.parse_args()
def insert(names):
records = []
for name in names:
fields = {"fields": {"name": name}}
records.append(fields)
print(records)
return (pd.Series(["Success"] * len(names)))
def main():
match args.subparser:
case 'Insert-Records':
# Create and initialize a Faker Generator
fake = Faker()
global numberOfRecords
numberOfRecords = args.numberOfRecordsToBeInserted
columns = ["seqno", "name"]
data = []
for i in range(numberOfRecords):
data.append((i, fake.name()))
df = spark.createDataFrame(pd.DataFrame(data), schema=columns)
call_udf = pandas_udf(insert, returnType=StringType())
names_series = df.select("name").rdd.flatMap(lambda x: x).collect()
names_df = spark.createDataFrame([(name,) for name in names_series], ["name"])
# Apply the Pandas UDF to the names DataFrame
result_df = names_df.withColumn("response", call_udf(col("name")))
result_df.show(truncate=False)
case _:
print("Invalid operation")
if __name__ == "__main__":
main()
</code></pre>
<p>Usage: <code>python3 example.py Insert-Records --numberOfRecordsToBeInserted 5</code></p>
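<p>The per-call output shows that each UDF invocation only sees its own batch of names, so <code>records</code> never accumulates across invocations or workers. If the goal is one combined payload, one hedged approach is to collect the names to the driver and build a single request there; the payload construction itself is plain Python:</p>

```python
import json

def build_payload(names):
    # Build one request body covering every name, instead of one per batch.
    records = [{"fields": {"name": name}} for name in names]
    return json.dumps({"records": records})

names = ["Jessica Moore", "Noah Crawford", "Jessica Welch"]
payload = build_payload(names)
print(payload)
```

<p>In the Spark program this would replace the pandas UDF: gather <code>names = [r['name'] for r in df.select('name').collect()]</code> on the driver, then issue a single <code>requests.post(url, data=build_payload(names), headers=headers)</code>.</p>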
| <python><pandas><dataframe><apache-spark><pyspark> | 2023-08-30 17:22:59 | 1 | 632 | Navidk |
77,010,064 | 10,285,400 | How to get all content related to file attachment by hitting a GET API in Python | <p>I am trying to programmatically get all the information, such as the name and content, of the file that is downloaded when I hit a GET API.</p>
<p><strong>Context :</strong></p>
<p>I have a GET API; whenever I hit that API from the browser, it automatically downloads the file to the system, and I can see that file in the filesystem. I would like to achieve this using a Python program.</p>
<p><strong>Tested Approach :</strong></p>
<p>I have tried almost every approach to get the contents of the file; every time, the downloaded content is an HTML/JavaScript page instead of the actual file.</p>
<p>Python Code :</p>
<pre class="lang-py prettyprint-override"><code>caseDocumentFolderPath = os.path.join(SubpoenaJobpath,caseDocumentsFolderName)
os.chdir(caseDocumentFolderPath)
documentResponse = requests.post(completeZipDownloadUrl, headers=headers)
print(documentResponse.content)
open('temp.txt', 'wb').write(documentResponse.content)
</code></pre>
<p>Output file :</p>
<pre class="lang-html prettyprint-override"><code><!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN" "http://www.w3.org/TR/html4/loose.dtd">
<html>
<head>
<meta HTTP-EQUIV="PRAGMA" CONTENT="NO-CACHE">
<script>
function redirectOnLoad() {
if (this.SfdcApp && this.SfdcApp.projectOneNavigator) { SfdcApp.projectOneNavigator.handleRedirect('https://texastech.my.salesforce.com?ec=302&startURL=%2Fvisualforce%2Fsession%3Furl%3Dhttps%253A%252F%252Ftexastech.lightning.force.com%252Fcontent%252Fsession%253Furl%253Dhttps%25253A%25252F%25252Ftexastech.file.force.com%25252Fsfc%25252Fservlet.shepherd%25252Fdocument%25252Fdownload%25252F0696T00000OcugCQAR%25253FoperationContext%25253DS1'); } else
if (window.location.replace){
window.location.replace('https://texastech.my.salesforce.com?ec=302&startURL=%2Fvisualforce%2Fsession%3Furl%3Dhttps%253A%252F%252Ftexastech.lightning.force.com%252Fcontent%252Fsession%253Furl%253Dhttps%25253A%25252F%25252Ftexastech.file.force.com%25252Fsfc%25252Fservlet.shepherd%25252Fdocument%25252Fdownload%25252F0696T00000OcugCQAR%25253FoperationContext%25253DS1');
} else {
window.location.href ='https://texastech.my.salesforce.com?ec=302&startURL=%2Fvisualforce%2Fsession%3Furl%3Dhttps%253A%252F%252Ftexastech.lightning.force.com%252Fcontent%252Fsession%253Furl%253Dhttps%25253A%25252F%25252Ftexastech.file.force.com%25252Fsfc%25252Fservlet.shepherd%25252Fdocument%25252Fdownload%25252F0696T00000OcugCQAR%25253FoperationContext%25253DS1';
}
}
redirectOnLoad();
</script>
</head>
</html>
</code></pre>
<p><strong>Note:</strong> The link, which triggers the file download directly in the browser, is from Salesforce, and I have tested that it works fine. Here is the proof:</p>
<p><a href="https://i.sstatic.net/DXY8k.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/DXY8k.png" alt="enter image description here" /></a></p>
<p><a href="https://i.sstatic.net/X6Z4D.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/X6Z4D.png" alt="enter image description here" /></a></p>
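<p>The returned HTML is Salesforce's redirect page, which usually means the request was not authenticated the way the browser session is (the browser carries session cookies that <code>requests</code> does not). As a diagnostic step, the redirect target can be pulled out of that page; whether following it succeeds depends on the auth setup, so treat this as a sketch (the sample HTML below is a placeholder):</p>

```python
import re

def extract_redirect(html: str):
    """Pull the target URL out of Salesforce's window.location.replace(...) page."""
    m = re.search(r"window\.location\.replace\('([^']+)'\)", html)
    return m.group(1) if m else None

sample = ("if (window.location.replace){ "
          "window.location.replace('https://example.my.salesforce.com?ec=302&startURL=%2Fx'); }")
print(extract_redirect(sample))
```

<p>If <code>extract_redirect</code> returns a URL containing <code>ec=302</code>, the server is bouncing the request to a login flow, and the fix lies in authentication (e.g. a session ID or OAuth token in the headers) rather than in how the bytes are saved.</p>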
| <python><salesforce> | 2023-08-30 16:53:16 | 1 | 615 | Jakka rohith |
77,010,000 | 4,537,160 | Python, __init__ method of abstract class implicitly being called? | <p>I'm not too familiar with the use of abstract classes, so I'm trying to understand what is happening in some code I'm using at the moment.</p>
<p>In the code, I have a base dataset class, and some implementations of datasets inheriting from the main one, something like:</p>
<pre><code>class dataset(metaclass=abc.ABCMeta):
    def __init__(self):
        # implementation of __init__
@abc.abstractmethod
def _some_method(self, ....):
return
class dataset_01(dataset):
def _some_method(self, ....):
# implementation of _some_method for dataset_01, overriding abstract method from base class
</code></pre>
<p>What I don't understand is that I was expecting to see a call to <code>super().__init__()</code> in dataset_01:</p>
<pre><code>class dataset_01(dataset):
def __init__(self, length):
super().__init__(...)
</code></pre>
<p>There's no such call, but when debugging the code, I noticed that I still end up in the constructor of dataset when creating an instance of dataset_01, in spite of the missing super.</p>
<p>Is the use of metaclass=abc.ABCMeta in the dataset class leading to some automatic method resolution, i.e. is it ensuring that the <code>__init__</code> method of the base class is called anyway?
Or am I missing something else?</p>
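<p>For reference, this is ordinary inheritance rather than anything <code>ABCMeta</code> does: a subclass that defines no <code>__init__</code> of its own simply inherits the base class's, so <code>super().__init__()</code> is only needed when the subclass overrides <code>__init__</code>. A minimal sketch:</p>

```python
import abc

class Dataset(metaclass=abc.ABCMeta):
    def __init__(self):
        self.initialized = True   # records that the base __init__ ran

    @abc.abstractmethod
    def _some_method(self):
        ...

class Dataset01(Dataset):
    # No __init__ here: normal MRO lookup finds Dataset.__init__ at
    # construction time, with or without ABCMeta.
    def _some_method(self):
        return "ok"

d = Dataset01()
assert d.initialized is True   # base __init__ ran without any super() call
```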
| <python><oop><abstract-class><abc> | 2023-08-30 16:44:24 | 1 | 1,630 | Carlo |
77,009,906 | 3,380,902 | pass a tuple as parameter to sql query | <pre><code>at_lst = ['131','132','133']
at_tup = (*at_lst,)
print(at_tup)
('131','132','133')
</code></pre>
<p>In my SQL query, I am trying to pass this as a parameter; however, it doesn't work.</p>
<pre><code>%%sql
select * from main.schema.tbl
where code IN ( '{{ at_tup }}' )
</code></pre>
<p>This return no result.</p>
<p>However, when I pass the actual tuple, I get the desired results:</p>
<pre><code>select * from main.schema.tbl
where code IN ('131','132','133')
</code></pre>
<p>How do I pass a tuple as a parameter to the SQL query without using the <code>execute</code> cursor object method?</p>
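<p>One workaround, assuming the values are trusted (string formatting into SQL is injection-prone otherwise), is to pre-render the list into a SQL literal string and interpolate that single string:</p>

```python
at_lst = ['131', '132', '133']
# Render each value as a quoted SQL literal: '131','132','133'
at_sql = ",".join(f"'{v}'" for v in at_lst)
query = f"select * from main.schema.tbl where code IN ({at_sql})"
print(query)
```

<p>In the <code>%%sql</code> cell the variable would then appear as <code>IN ({{ at_sql }})</code>, without extra surrounding quotes, since the quoting is already baked into the string.</p>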
| <python><sql><jupyter-notebook><databricks> | 2023-08-30 16:30:09 | 1 | 2,022 | kms |
77,009,878 | 19,003,861 | Overriding LOGIN_REDIRECT_URL = "page-name" for a specific function in Django | <p>I have a base.py file that states a default redirection page when the user registers and gets logged in.</p>
<p><strong>base.py</strong></p>
<pre><code>LOGIN_REDIRECT_URL = "index"
</code></pre>
<p>I want to create the possibility for a user who connects from a specific page <code>page_1</code> to sign up/log in directly from this page.</p>
<p>When they sign up/log in, they will be redirected to the same <code>page_1</code>, which will show different content based on whether the user is authenticated or not (if authenticated they get content, if not they get the signup/login prompt).</p>
<p>The problem I am running into (I think) is that when a user registers from <code>page_1</code>, this user is redirected to the <code>index</code> page as defined in my base.py.</p>
<p>Is there a way to override the base.py setting for a specific view function?</p>
<p>I left my latest attempt, <code>request.session['LOGIN_REDIRECT_URL'] = url</code>, which didn't work.</p>
<pre><code>def show_venue(request, page_id):
#REGISTER USER FROM VENUE
url = request.META.get('HTTP_REFERER')
print(f'url ={url}')
if request.method == "POST":
form = RegisterForm(request.POST)
if form.is_valid():
user = form.save(commit=False)
user.is_active=True
user.save()
user = authenticate(username = form.cleaned_data['username'],
password = form.cleaned_data['password1'],)
login(request,user)
request.session['LOGIN_REDIRECT_URL'] = url
else:
for error in list(form.errors.values()):
messages.error(request, error)
else:
form = RegisterForm()
</code></pre>
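<p>Note that <code>LOGIN_REDIRECT_URL</code> only applies when the redirect is left to Django's auth views; a view that calls <code>login()</code> itself can simply return its own redirect, so no settings override is needed. A framework-free sketch of the decision, with the (hypothetical) Django ending shown in comments since the surrounding view code is abbreviated:</p>

```python
def resolve_redirect(referer, default="index"):
    """Send the user back where they came from, else to the site default."""
    return referer if referer else default

# In the Django view, after login(request, user), a possible ending is:
#     return redirect(resolve_redirect(request.META.get("HTTP_REFERER"), "index"))
# using django.shortcuts.redirect; LOGIN_REDIRECT_URL never comes into play
# because the view returns its own response.

assert resolve_redirect("/venues/7/") == "/venues/7/"
assert resolve_redirect(None) == "index"
```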
| <python><django><django-views> | 2023-08-30 16:26:27 | 1 | 415 | PhilM |
77,009,864 | 1,028,270 | Is there any way to specify a parametrized test id inside the param set itself instead of as a separate array? | <p>We can use <code>ids=[]</code> to name the param sets:</p>
<pre><code>@pytest.mark.parametrize(
'a, b',
[
(1, {'Two Scoops of Django': '1.8'}),
(True, 'Into the Brambles'),
('Jason likes cookies', [1, 2, 3]),
(PYTEST_PLUGIN, 'plugin_template'),
], ids=[
'int and dict',
'bool and str',
'str and list',
'CookiecutterTemplate and str',
]
)
def test_foobar(a, b):
assert True
</code></pre>
<p>But I really hate how it's a separate list like this because I have to count to figure out which param set matches up with the test id name.</p>
<p>For heavily parametrized tests I've taken to this, which is unpleasant as well:</p>
<pre><code>@pytest.mark.parametrize(
'a, b',
[
# int and dict
(1, {'Two Scoops of Django': '1.8'}),
# bool and str
(True, 'Into the Brambles'),
# str and list
('Jason likes cookies', [1, 2, 3]),
# CookiecutterTemplate and str
(PYTEST_PLUGIN, 'plugin_template'),
], ids=[
'int and dict',
'bool and str',
'str and list',
'CookiecutterTemplate and str',
]
)
def test_foobar(a, b):
assert True
</code></pre>
<p>Is there support for specifying the id name inside the param set instead? If possible something like this:</p>
<pre><code>[
id="int and dict", (1, {'Two Scoops of Django': '1.8'}),
id="bool and str", (True, 'Into the Brambles'),
etc
</code></pre>
| <python><pytest> | 2023-08-30 16:24:11 | 1 | 32,280 | red888 |
77,009,843 | 13,371,454 | How to limit resources for airflow task or prevent execution of tasks which start spark jobs locally | <p>We administer an Airflow cluster, and different teams launch their DAGs on it. These DAGs start very heavy Spark jobs on many cores with hundreds of TB of RAM. From time to time someone launches these jobs in local mode on an Airflow node, which takes Airflow out of service. Is there any way to prevent these tasks from launching, or to prevent such DAGs from being imported? Or maybe is there a way to limit resources for Airflow tasks?</p>
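<p>One hedged option is an Airflow <em>cluster policy</em>: a <code>task_policy</code> hook in <code>airflow_local_settings.py</code> runs for every task at DAG-parse time and can mutate or reject it. The sketch below keeps the check itself framework-free; in real Airflow you would raise <code>AirflowClusterPolicyViolation</code> from <code>airflow.exceptions</code> instead of <code>ValueError</code>, and the key inspected (<code>spark.master</code>) assumes teams pass a <code>SparkSubmitOperator</code>-style conf dict:</p>

```python
def reject_local_spark(spark_conf: dict) -> None:
    """Raise if a task tries to run Spark in local mode on the Airflow node."""
    master = str(spark_conf.get("spark.master", ""))
    if master.startswith("local"):
        raise ValueError(  # AirflowClusterPolicyViolation in a real deployment
            "Spark jobs must run on the cluster, not in local mode on Airflow workers"
        )

# A task_policy(task) in airflow_local_settings.py would call this with the
# task's Spark configuration before the DAG is accepted.
reject_local_spark({"spark.master": "yarn"})        # accepted
try:
    reject_local_spark({"spark.master": "local[8]"})
    blocked = False
except ValueError:
    blocked = True
assert blocked
```

<p>For resource limiting in the narrower sense, pinning teams to dedicated worker queues (or Kubernetes executor resource requests) also helps, but a policy rejects the local-mode jobs outright.</p>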
| <python><apache-spark><airflow> | 2023-08-30 16:21:31 | 2 | 376 | Vladimir Shadrin |
77,009,691 | 18,125,313 | How to maximize the cache hit rate of the 2-element combinations? | <p>My question is simple, but I find it difficult to get the point straight, so please allow me to explain step by step.</p>
<p>Suppose I have <code>N</code> items and <code>N</code> corresponding indices.
Each item can be loaded using the corresponding index.</p>
<pre class="lang-py prettyprint-override"><code>def load_item(index: int) -> ItemType:
# Mostly just reading, but very slow.
return item
</code></pre>
<p>Also I have a function that takes two (loaded) items and calculates a score.</p>
<pre class="lang-py prettyprint-override"><code>def calc_score(item_a: ItemType, item_b: ItemType) -> ScoreType:
# Much faster than load function.
return score
</code></pre>
<p>Note that <code>calc_score(a, b) == calc_score(b, a)</code>.</p>
<p>What I want to do is calculate the score for all 2-item combinations and find (at least) one combination that gives the maximum score.</p>
<p>This can be implemented as follows:</p>
<pre class="lang-py prettyprint-override"><code>def dumb_solution(n: int) -> Tuple[int, int]:
best_score = 0
best_combination = None
for index_a, index_b in itertools.combinations(range(n), 2):
item_a = load_item(index_a)
item_b = load_item(index_b)
score = calc_score(item_a, item_b)
if score > best_score:
best_score = score
best_combination = (index_a, index_b)
return best_combination
</code></pre>
<p>However, this solution calls the <code>load_item</code> function <code>2*C(N,2) = N*(N-1)</code> times, which is the bottleneck for this function.</p>
<p>This can be resolved by using a cache.
Unfortunately, however, the items are so large that it is impossible to keep all items in memory.
Therefore, we need to use a size-limited cache.</p>
<pre class="lang-py prettyprint-override"><code>from functools import lru_cache
@lru_cache(maxsize=M)
def load(index: int) -> ItemType:
# Very slow process.
return item
</code></pre>
<p>Note that <code>M</code> (cache size) is much smaller than <code>N</code> (approx. <code>N // 10</code> to <code>N // 2</code>).</p>
<p>The problem is that the typical sequence of combinations is not ideal for the LRU cache.</p>
<p>For instance, when <code>N=6, M=3</code>, <code>itertools.combinations</code> generates the following sequence, and the number of calls of the <code>load_item</code> function is 17.</p>
<pre class="lang-py prettyprint-override"><code>[
(0, 1), # 1, 2
(0, 2), # -, 3
(0, 3), # -, 4
(0, 4), # -, 5
(0, 5), # -, 6
(1, 2), # 7, 8
(1, 3), # -, 9
(1, 4), # -, 10
(1, 5), # -, 11
(2, 3), # 12, 13
(2, 4), # -, 14
(2, 5), # -, 15
(3, 4), # 16, 17
(3, 5), # -, -
(4, 5), # -, -
]
</code></pre>
<p>However, if I rearrange the above sequence as follows, the number of calls will be 10.</p>
<pre class="lang-py prettyprint-override"><code>[
(0, 1), # 1, 2
(0, 2), # -, 3
(1, 2), # -, -
(0, 3), # -, 4
(2, 3), # -, -
(0, 4), # -, 5
(3, 4), # -, -
(0, 5), # -, 6
(4, 5), # -, -
(1, 4), # 7, -
(1, 5), # -, -
(1, 3), # -, 8
(3, 5), # -, -
(2, 5), # 9, -
(2, 4), # -, 10
]
</code></pre>
<h1>Question:</h1>
<p>How can I generate a sequence of 2-item combinations that maximizes the cache hit rate?</p>
<hr />
<h1>What I tried:</h1>
<p>The solution I came up with is to prioritize items that are already in the cache.</p>
<pre class="lang-py prettyprint-override"><code>from collections import OrderedDict
def prioritizes_item_already_in_cache(n, cache_size):
items = list(itertools.combinations(range(n), 2))
cache = OrderedDict()
reordered = []
def update_cache(x, y):
cache[x] = cache[y] = None
cache.move_to_end(x)
cache.move_to_end(y)
while len(cache) > cache_size:
cache.popitem(last=False)
while items:
# Find a pair where both are cached.
for i, (a, b) in enumerate(items):
if a in cache and b in cache:
reordered.append((a, b))
update_cache(a, b)
del items[i]
break
else:
# Find a pair where one of them is cached.
for i, (a, b) in enumerate(items):
if a in cache or b in cache:
reordered.append((a, b))
update_cache(a, b)
del items[i]
break
else:
# Cannot find item in cache.
a, b = items.pop(0)
reordered.append((a, b))
update_cache(a, b)
return reordered
</code></pre>
<p>For <code>N=100, M=10</code>, this sequence resulted in 1660 calls, which is about 1/3 of the typical sequence. For <code>N=100, M=50</code> there are only 155 calls. So I think I can say that this is a promising approach.</p>
<p>Unfortunately, this function is too slow and useless for large <code>N</code>.
I was not able to finish for <code>N=1000</code>, but the actual data is in the tens of thousands.
Also, it does not take into account how to select an item when no cached item is found.
Therefore, even if it is fast, it is doubtful that it is theoretically the best solution (so please note my question is not how to make the above function faster).</p>
<p>(Edited) Here is the complete code including everyone's answers and the test and benchmark code.</p>
<pre class="lang-py prettyprint-override"><code>import functools
import itertools
import math
import time
from collections import Counter, OrderedDict
from itertools import chain, combinations, product
from pathlib import Path
from typing import Callable, Iterable, Tuple
import joblib
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
from PIL import Image, ImageDraw
ItemType = int
ScoreType = int
def load_item(index: int) -> ItemType:
return int(index)
def calc_score(item_a: ItemType, item_b: ItemType) -> ScoreType:
return abs(item_a - item_b)
class LRUCacheWithCounter:
def __init__(self, maxsize: int):
def wrapped_func(key):
self.load_count += 1
return load_item(key)
self.__cache = functools.lru_cache(maxsize=maxsize)(wrapped_func)
self.load_count = 0
def __call__(self, key: int) -> int:
return self.__cache(key)
def basic_loop(iterator: Iterable[Tuple[int, int]], cached_load: Callable[[int], int]):
best_score = 0
best_combination = None
for i, j in iterator:
a = cached_load(i)
b = cached_load(j)
score = calc_score(a, b)
if score > best_score:
best_score = score
best_combination = (i, j)
return best_score, best_combination
def baseline(n, _):
return itertools.combinations(range(n), 2)
def prioritizes(n, cache_size):
items = list(itertools.combinations(range(n), 2))
cache = OrderedDict()
reordered = []
def update_cache(x, y):
cache[x] = cache[y] = None
cache.move_to_end(x)
cache.move_to_end(y)
while len(cache) > cache_size:
cache.popitem(last=False)
while items:
# Find a pair where both are cached.
for i, (a, b) in enumerate(items):
if a in cache and b in cache:
reordered.append((a, b))
update_cache(a, b)
del items[i]
break
else:
# Find a pair where one of them is cached.
for i, (a, b) in enumerate(items):
if a in cache or b in cache:
reordered.append((a, b))
update_cache(a, b)
del items[i]
break
else:
# Cannot find item in cache.
a, b = items.pop(0)
reordered.append((a, b))
update_cache(a, b)
return reordered
def Matt_solution(n: int, cache_size: int) -> Iterable[Tuple[int, int]]:
dest = []
def findPairs(lo1: int, n1: int, lo2: int, n2: int):
if n1 < 1 or n2 < 1:
return
if n1 == 1:
for i in range(max(lo1 + 1, lo2), lo2 + n2):
dest.append((lo1, i))
elif n2 == 1:
for i in range(lo1, min(lo1 + n1, lo2)):
dest.append((i, lo2))
elif n1 >= n2:
half = n1 // 2
findPairs(lo1, half, lo2, n2)
findPairs(lo1 + half, n1 - half, lo2, n2)
else:
half = n2 // 2
findPairs(lo1, n1, lo2, half)
findPairs(lo1, n1, lo2 + half, n2 - half)
findPairs(0, n, 0, n)
return dest
def Kelly_solution(n: int, cache_size: int) -> Iterable[Tuple[int, int]]:
k = cache_size // 2
r = range(n)
return chain.from_iterable(combinations(r[i : i + k], 2) if i == j else product(r[i : i + k], r[j : j + k]) for i in r[::k] for j in r[i::k])
def Kelly_solution2(n: int, cache_size: int) -> Iterable[Tuple[int, int]]:
k = cache_size - 2
r = range(n)
return chain.from_iterable(combinations(r[i : i + k], 2) if i == j else product(r[i : i + k], r[j : j + k]) for i in r[::k] for j in r[i::k])
def diagonal_block(lower, upper):
for i in range(lower, upper + 1):
for j in range(i + 1, upper + 1):
yield i, j
def strip(i_lower, i_upper, j_lower, j_upper):
    for i in range(i_lower, i_upper + 1):
        for j in range(j_lower, j_upper + 1):
            yield i, j


def btilly_solution(n: int, cache_size: int):
    i_lower = 0
    i_upper = n - 1
    k = cache_size - 2
    is_asc = True
    while i_lower <= i_upper:
        # Handle a k*k block first. At the end that is likely loaded.
        if is_asc:
            upper = min(i_lower + k - 1, i_upper)
            yield from diagonal_block(i_lower, upper)
            j_lower = i_lower
            j_upper = upper
            i_lower = upper + 1
        else:
            lower = max(i_lower, i_upper - k + 1)
            yield from diagonal_block(lower, i_upper)
            j_lower = lower
            j_upper = i_upper
            i_upper = lower - 1
        yield from strip(i_lower, i_upper, j_lower, j_upper)
        is_asc = not is_asc


def btilly_solution2(n: int, cache_size: int):
    k = cache_size - 2
    for top in range(0, n, k):
        bottom = top + k
        # Diagonal part.
        for y in range(top, min(bottom, n)):  # Y-axis Top to Bottom
            for x in range(y + 1, min(bottom, n)):  # X-axis Left to Right
                yield y, x
        # Strip part.
        # Stripping right to left works well when cache_size is very small, but makes little difference when it is not.
        for x in range(n - 1, bottom - 1, -1):  # X-axis Right to Left
            for y in range(top, min(bottom, n)):  # Y-axis Top to Bottom
                yield y, x


def btilly_solution3(n: int, cache_size: int):
    k = cache_size - 2
    r = range(n)
    for i in r[::k]:
        yield from combinations(r[i : i + k], 2)
        yield from product(r[i + k :], r[i : i + k])


def btilly_solution4(n: int, cache_size: int):
    def parts():
        k = cache_size - 2
        r = range(n)
        for i in r[::k]:
            yield combinations(r[i : i + k], 2)
            yield product(r[i + k :], r[i : i + k])

    return chain.from_iterable(parts())


def plot(df, series, ignore, y, label, title):
    df = df[df["name"].isin(series)]
    # plt.figure(figsize=(10, 10))
    for name, group in df.groupby("name"):
        plt.plot(group["n"], group[y], label=name)
    y_max = df[~df["name"].isin(ignore)][y].max()
    plt.ylim(0, y_max * 1.1)
    plt.xlabel("n")
    plt.ylabel(label)
    plt.title(title)
    plt.legend(loc="upper left")
    plt.tight_layout()
    plt.grid()
    plt.show()


def run(func, n, cache_ratio, output_dir: Path):
    cache_size = int(n * cache_ratio / 100)
    output_path = output_dir / f"{n}_{cache_ratio}_{func.__name__}.csv"
    if output_path.exists():
        return
    started = time.perf_counter()
    for a, b in func(n, cache_size):
        pass
    elapsed_iterate = time.perf_counter() - started
    # test_combinations(func(n, cache_size), n)
    started = time.perf_counter()
    cache = LRUCacheWithCounter(cache_size)
    basic_loop(iterator=func(n, cache_size), cached_load=cache)
    elapsed_cache = time.perf_counter() - started
    output_path.write_text(f"{func.__name__},{n},{cache_ratio},{cache_size},{cache.load_count},{elapsed_iterate},{elapsed_cache}")


def add_lower_bound(df):
    def calc_lower_bound(ni, mi):
        n = ni
        m = n * mi // 100
        return m + math.ceil((math.comb(n, 2) - math.comb(m, 2)) / (m - 1))

    return pd.concat(
        [
            df,
            pd.DataFrame(
                [
                    {"name": "lower_bound", "n": ni, "m": mi, "count": calc_lower_bound(ni, mi)}
                    for ni, mi in itertools.product(df["n"].unique(), df["m"].unique())
                ]
            ),
        ]
    )


def benchmark(output_dir: Path):
    log_dir = output_dir / "log"
    log_dir.mkdir(parents=True, exist_ok=True)
    candidates = [
        baseline,
        prioritizes,
        Matt_solution,
        Kelly_solution,
        Kelly_solution2,
        btilly_solution,
        btilly_solution2,
        btilly_solution3,
        btilly_solution4,
    ]
    nc = np.linspace(100, 500, num=9).astype(int)
    # nc = np.linspace(500, 10000, num=9).astype(int)[1:]
    # nc = np.linspace(10000, 100000, num=9).astype(int).tolist()[1:]
    print(nc)
    mc = np.linspace(10, 50, num=2).astype(int)
    print(mc)
    joblib.Parallel(n_jobs=1, verbose=5, batch_size=1)([joblib.delayed(run)(func, ni, mi, log_dir) for ni in nc for mi in mc for func in candidates])


def plot_graphs(output_dir: Path):
    log_dir = output_dir / "log"
    results = []
    for path in log_dir.glob("*.csv"):
        results.append(path.read_text().strip())
    (output_dir / "stat.csv").write_text("\n".join(results))
    df = pd.read_csv(output_dir / "stat.csv", header=None, names=["name", "n", "m", "size", "count", "time", "time_full"])
    df = add_lower_bound(df)
    df = df.sort_values(["name", "n", "m"])
    for m in [10, 50]:
        plot(
            df[df["m"] == m],
            series=[
                baseline.__name__,
                prioritizes.__name__,
                Matt_solution.__name__,
                Kelly_solution.__name__,
                Kelly_solution2.__name__,
                btilly_solution.__name__,
                "lower_bound",
            ],
            ignore=[
                baseline.__name__,
                prioritizes.__name__,
            ],
            y="count",
            label="load count",
            title=f"cache_size = {m}% of N",
        )
    plot(
        df[df["m"] == 10],
        series=[
            baseline.__name__,
            prioritizes.__name__,
            Matt_solution.__name__,
            Kelly_solution.__name__,
            Kelly_solution2.__name__,
            btilly_solution.__name__,
            btilly_solution2.__name__,
            btilly_solution3.__name__,
            btilly_solution4.__name__,
        ],
        ignore=[
            prioritizes.__name__,
            Matt_solution.__name__,
        ],
        y="time",
        label="time (sec)",
        title=f"cache_size = {10}% of N",
    )


class LRUCacheForTest:
    def __init__(self, maxsize: int):
        self.cache = OrderedDict()
        self.maxsize = maxsize
        self.load_count = 0

    def __call__(self, key: int) -> int:
        if key in self.cache:
            value = self.cache[key]
            self.cache.move_to_end(key)
        else:
            if len(self.cache) == self.maxsize:
                self.cache.popitem(last=False)
            value = load_item(key)
            self.cache[key] = value
            self.load_count += 1
        return value

    def hit(self, i, j):
        count = int(i in self.cache)
        self(i)
        count += int(j in self.cache)
        self(j)
        return count


def visualize():
    # Taken from https://stackoverflow.com/a/77024514/18125313 and modified.
    n, m = 100, 30
    func = btilly_solution2
    pairs = func(n, m)
    cache = LRUCacheForTest(m)

    # Create the images, save as animated png.
    images = []
    s = 5
    img = Image.new("RGB", (s * n, s * n), (255, 255, 255))
    draw = ImageDraw.Draw(img)
    colors = [(255, 0, 0), (255, 255, 0), (0, 255, 0)]
    for step, (i, j) in enumerate(pairs):
        draw.rectangle((s * j, s * i, s * j + s - 2, s * i + s - 2), colors[cache.hit(i, j)])
        if not step % 17:
            images.append(img.copy())
    images += [img] * 40
    images[0].save(f"{func.__name__}_{m}.gif", save_all=True, append_images=images[1:], optimize=False, duration=30, loop=0)


def test_combinations(iterator: Iterable[Tuple[int, int]], n: int):
    # Note that this function is not suitable for large N.
    expected = set(frozenset(pair) for pair in itertools.combinations(range(n), 2))
    items = list(iterator)
    actual = set(frozenset(pair) for pair in items)
    assert len(actual) == len(items), f"{[item for item, count in Counter(items).items() if count > 1]}"
    assert actual == expected, f"dup={actual - expected}, missing={expected - actual}"


def test():
    n = 100  # N
    cache_size = 30  # M

    def run(func):
        func(n, cache_size)

        # Measure generation performance.
        started = time.perf_counter()
        for a, b in func(n, cache_size):
            pass
        elapsed = time.perf_counter() - started

        # Test generated combinations.
        test_combinations(func(n, cache_size), n)

        # Measure cache hit (load count) performance.
        cache = LRUCacheWithCounter(cache_size)
        _ = basic_loop(iterator=func(n, cache_size), cached_load=cache)
        print(f"{func.__name__}: {cache.load_count=}, {elapsed=}")

    candidates = [
        baseline,
        prioritizes,
        Matt_solution,
        Kelly_solution,
        Kelly_solution2,
        btilly_solution,
        btilly_solution2,
        btilly_solution3,
        btilly_solution4,
    ]
    for f in candidates:
        run(f)


def main():
    test()
    visualize()
    output_dir = Path("./temp2")
    benchmark(output_dir)
    plot_graphs(output_dir)


if __name__ == "__main__":
    main()
</code></pre>
<p>I have no problem with you not using the above test code or changing the behavior of <code>basic_loop</code> or <code>LRUCacheWithCounter</code>.</p>
<p>Additional Note:</p>
<ul>
<li>The score calculation cannot be pruned using neighbor scores.</li>
<li>The score calculation cannot be pruned using only a portion of the item.</li>
<li>It is impossible to guess where the best combination will be.</li>
<li>Using faster media is one option, but I'm already at my limit, so I'm looking for a software solution.</li>
</ul>
<p>Thank you for reading this long post to the end.</p>
<hr />
<h1>Edit:</h1>
<p>Thanks to btilly's answer and help with Kelly's visualization, I have come to the conclusion that btilly's solution is the best and (possibly) optimal one.</p>
<p>Here is a theoretical explanation (although I am not very good at math, so it could be wrong).</p>
<hr />
<p>Let <code>N</code> represent the number of indexes, <code>M</code> the cache size, and <code>C</code> the number of combinations (same as <code>math.comb</code>).</p>
<p>Consider a situation where the cache is full and <strong>no further combinations can be generated</strong> without loading.
If we add a new index at this point, the only combinations that can be generated are combinations of the newly added index and the remaining indexes in the cache.
This pattern holds for each subsequent iteration.
Hence, while the cache is full, the maximum number of combinations that can be generated per load is <code>M - 1</code>.</p>
<p>This logic also holds while the cache is not yet full.
If <code>M'</code> indexes are currently in the cache, then the next index can generate at most <code>M'</code> combinations.
The subsequent index can generate at most <code>M' + 1</code> combinations, and so forth.
In total, at most <code>C(M,2)</code> combinations can be generated before the cache is full.</p>
<p>Thus, to generate <code>C(N,2)</code> combinations, at least <code>M</code> loads are required to fill the cache, and at least <code>(C(N,2) - C(M,2)) / (M - 1)</code> further loads are required after the cache is filled.</p>
<p>From above, the load counts complexity of this problem is <code>Ω(N^2 / M)</code>.</p>
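<p>The bound above can be computed directly; a minimal sketch (needs Python 3.8+ for <code>math.comb</code>, same formula as <code>calc_lower_bound</code> in the benchmark code):</p>

```python
import math

def lower_bound_loads(n: int, m: int) -> int:
    # m loads to fill the cache, then each further load enables
    # at most m - 1 new combinations (see the argument above)
    return m + math.ceil((math.comb(n, 2) - math.comb(m, 2)) / (m - 1))

print(lower_bound_loads(100, 30))  # 186
```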
<hr />
<p>I have plotted this formula as a <code>lower_bound</code> in the graphs below.
Note that it is only a lower bound and no guarantee that it can actually be achieved.</p>
<p>As an aside, Kelly's solution needs to configure <code>k</code> to maximize its performance.
For <code>M = 50% of N</code>, it's about <code>M * 2/3</code>.
For <code>M = 30% of N</code>, it's about <code>M * 5/6</code>.
I couldn't figure out how to calculate the optimal value in general, though.
As a general configuration, I use <code>k = M - 2</code> (which is not best, but relatively good) in the <code>Kelly_solution2</code> in the graphs below.</p>
<p>For <code>M = 10% of N</code>:</p>
<p><a href="https://i.sstatic.net/xL5Rx.png" rel="noreferrer"><img src="https://i.sstatic.net/xL5Rx.png" alt="n_to_load_count_graph_10" /></a></p>
<p>For <code>M = 50% of N</code>:</p>
<p><a href="https://i.sstatic.net/LsWwa.png" rel="noreferrer"><img src="https://i.sstatic.net/LsWwa.png" alt="n_to_load_count_graph_50" /></a></p>
<p>Note that, in these graphs, it looks like <code>O(N)</code>, but this is because I determined <code>M</code> based on <code>N</code>. When <code>M</code> does not change, it is <code>O(N^2)</code> as described above.</p>
<p>Here is an animation visualizing the cache hit rate of <code>btilly_solution2</code>, composed by a modified version of Kelly's code.
Each pixel represents a combination, with red representing combinations where both indexes are loaded, yellow where one index is loaded, and green where neither index is loaded.</p>
<p><a href="https://i.sstatic.net/fYjDw.gif" rel="noreferrer"><img src="https://i.sstatic.net/fYjDw.gif" alt="visualization_of_btilly_solution2" /></a></p>
<p>In addition, since I'm looking for the optimal sequence, execution time doesn't matter much.
But just in case anyone is curious, here is a comparison of execution times (iteration only).</p>
<p><a href="https://i.sstatic.net/eiun9.png" rel="noreferrer"><img src="https://i.sstatic.net/eiun9.png" alt="n_to_time_graph" /></a></p>
<p><code>btilly_solution4</code> (btilly's solution modified by Kelly) is almost as fast as <code>itertools.combinations</code>, which should be optimal in this case.
Note, however, that even without the modification, it took only 112 nanoseconds per combination.</p>
<p>That's it. Thanks to everyone involved.</p>
| <python><algorithm><caching><combinations> | 2023-08-30 15:58:43 | 5 | 3,446 | ken |
77,009,676 | 8,434,663 | Why is a property on a subclass, that returns a type consistent with the same attribute on the superclass disallowed | <pre class="lang-py prettyprint-override"><code>class Foo:
    bar: str

class Bat(Foo):
    @property
    def bar(self) -> str:
        ...
</code></pre>
<p>Given the above code, my typechecker (mypy) raises the following complaint:</p>
<pre><code>error: Signature of "bar" incompatible with supertype "Foo" [override]
</code></pre>
<p>This surprises me given that an instance of <code>Foo</code> or <code>Bat</code> will behave the same from the perspective of a caller accessing the <code>bar</code> attribute/property. What is the issue the typechecker is preventing by rejecting this code?</p>
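<p>For illustration, a runtime sketch of the incompatibility being flagged — <code>Foo</code> declares a <em>writable</em> attribute, while a bare <code>@property</code> is read-only, so a caller holding a <code>Foo</code> can be broken by a <code>Bat</code>:</p>

```python
class Foo:
    bar: str

class Bat(Foo):
    @property
    def bar(self) -> str:  # the override mypy complains about
        return "computed"

def set_bar(f: Foo) -> None:
    f.bar = "hello"  # valid for the declared type Foo...

set_bar(Foo())       # fine: plain writable attribute
try:
    set_bar(Bat())   # ...but raises at runtime: the property has no setter
except AttributeError as e:
    print(e)
```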
| <python><inheritance><python-typing><mypy> | 2023-08-30 15:56:51 | 1 | 1,657 | Josh |
77,009,539 | 3,406,455 | best format for saving images to disk so that the loading speed is fastest | <p>What is the best format to save images to disk so that the loading speed is fastest?
I am guessing that <code>.JPG</code> format might be suboptimal, maybe some other format works better?</p>
<p><em>my case:</em> I have some 9k <code>.JPG</code> images (large 8000x8000 images) and am loading them (using PIL for the moment) for training using <code>pytorch</code> on 2x GPUs. I have a large batch size <code>256</code> and am noticing that the bottleneck is the loading of these images (which are then resized, augmented and so on)</p>
<p>Anyone knows of any benchmarks on different saving and loading techniques ?</p>
| <python><pytorch><python-imaging-library> | 2023-08-30 15:39:20 | 0 | 1,007 | AnarKi |
77,009,536 | 11,578,996 | Efficiently bin data in python | <p>I am trying to summarise a database of activity times by user id to give an array of event count per timeslot to show a summary of weekly behaviour. I have approx. 35k ids over 3M events covering 2y of time in a pandas dataframe. I have two event time columns and I want to combine them into the same output array.</p>
<p>A sample of the true data, included in case I'm missing something or asking the wrong question.</p>
<pre><code>print(data)
id startTime endTime
0 633597028911259693 2021-01-13 06:25:00+00:00 2021-01-13 06:56:00+00:00
1 633597028910043601 2021-01-13 06:58:00+00:00 2021-01-13 08:05:00+00:00
2 633597028909606343 2021-01-13 07:07:00+00:00 2021-01-13 07:46:00+00:00
3 633597028910076221 2021-01-13 07:56:00+00:00 2021-01-13 09:02:00+00:00
4 633597028911248563 2021-01-13 08:09:00+00:00 2021-01-13 08:41:00+00:00
5 633597028910169505 2021-01-13 08:36:00+00:00 2021-01-13 09:11:00+00:00
6 633597028910980570 2021-01-13 08:53:00+00:00 2021-01-13 09:06:00+00:00
7 633597028910039724 2021-01-13 11:29:00+00:00 2021-01-13 12:26:00+00:00
8 633597028910980570 2021-01-13 15:15:00+00:00 2021-01-13 15:32:00+00:00
9 633597028911259693 2021-01-13 15:31:00+00:00 2021-01-13 15:54:00+00:00
10 633597028910043601 2021-01-13 16:31:00+00:00 2021-01-13 17:32:00+00:00
11 633597028911248563 2021-01-13 17:06:00+00:00 2021-01-13 17:40:00+00:00
12 633597028909606343 2021-01-13 17:21:00+00:00 2021-01-13 17:55:00+00:00
13 633597028910039724 2021-01-13 17:29:00+00:00 2021-01-13 18:21:00+00:00
14 633597028910076221 2021-01-13 17:58:00+00:00 2021-01-13 19:06:00+00:00
15 633597028910169505 2021-01-13 18:00:00+00:00 2021-01-13 18:10:00+00:00
16 633597028910169505 2021-01-13 18:12:00+00:00 2021-01-13 18:33:00+00:00
17 633597028911259693 2021-01-14 06:18:00+00:00 2021-01-14 06:56:00+00:00
18 633597028907641698 2021-01-14 06:51:00+00:00 2021-01-14 07:49:00+00:00
19 633597028910960846 2021-01-14 07:19:00+00:00 2021-01-14 08:27:00+00:00
</code></pre>
<p>Preprocessing of my data:</p>
<pre><code>import pandas as pd
import numpy as np
from datetime import timedelta
timeslot_minutes = 60
data['startSlot'] = ((data.startTime - data.startTime.dt.floor('7D')) / timedelta(minutes=timeslot_minutes)).fillna(0).astype(int)
data['endSlot'] = ((data.endTime - data.endTime.dt.floor('7D')) / timedelta(minutes=timeslot_minutes)).fillna(0).astype(int)
slots = pd.concat([data.set_index('id').startSlot, data.set_index('id').endSlot], axis=0)
</code></pre>
<p>To generate equivalent data (this is evenly distributed, my data is not):</p>
<pre><code>import numpy as np
import pandas as pd
ids = np.random.randint(6335*1000, 6435*1000, 35000)
timeslots = np.arange(168) # 168 hour long timeslots per week
data = {'id':np.random.choice(ids, 3*10**6),
'timeslot':np.random.choice(timeslots, 3*10**6),
}
data = pd.DataFrame(data).set_index('id')
</code></pre>
<p>My attempt so far but this takes a million years:</p>
<pre><code>grps = data.groupby('id')
output = {}
for id, grp in grps:
    output[id] = [grp[grp.timeslot == slot].shape[0] for slot in timeslots]
output = pd.DataFrame.from_dict(output)
</code></pre>
<p>and this also takes long and gives me a far higher count in the 0 timeslot than expected:</p>
<pre><code>grps = data.groupby('id')
output2 = {id:[sum([slot==event for event in times.values]) for slot in timeslots] for id,times in grps}
output2 = pd.DataFrame.from_dict(output2)
</code></pre>
<p>Expected output something like:</p>
<pre><code>id 0 1 2 3 4 5 6 7 8 9 ... 158 159 160 161 162 163 164 165 166 167
63350242 0 0 0 0 0 0 0 0 0 0 ... 0 0 1 1 0 0 0 0 0 0
63379701 0 0 0 0 0 0 0 0 0 0 ... 0 0 0 0 1 1 0 0 0 0
63449716 0 0 0 0 0 0 0 0 0 0 ... 0 0 0 1 1 0 0 0 0 0
63391102 0 0 0 0 0 0 2 0 0 0 ... 0 0 0 0 2 0 0 0 0 0
63352305 0 0 0 0 0 0 0 0 0 0 ... 0 0 0 0 0 0 0 0 0 0
</code></pre>
<p>I feel like I'm missing some obvious numpy or pandas functions for this. Current speeds have been between 18m and 90m for the run. Any help appreciated!</p>
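<p>For reference, a vectorised sketch of the counting step (an assumption on my part that <code>pd.crosstab</code> is the function I was missing; column names match the generated data above):</p>

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(42)
data = pd.DataFrame({
    "id": rng.integers(0, 5, 1000),
    "timeslot": rng.integers(0, 168, 1000),
})

# One call replaces the nested per-id/per-slot Python loops
counts = pd.crosstab(data["id"], data["timeslot"])
# Ensure all 168 slots appear as columns, even if no event fell into them
counts = counts.reindex(columns=range(168), fill_value=0)
print(counts.shape)  # (number of distinct ids, 168)
```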
| <python><pandas><performance><loops><group-by> | 2023-08-30 15:38:59 | 1 | 389 | ciaran haines |
77,009,489 | 10,093,190 | How to convert a lists of integers to a string of the individual bytes? | <p>I'm trying to print a list of bytes to a file in hex (or decimal, doesn't really make a difference for the sake of the question I believe). I'm baffled by how hard this appears to be in Python?</p>
<p>Based on what I found <a href="https://stackoverflow.com/questions/19210414/byte-array-to-hex-string">here</a> (and on other SO questions), I already managed to do this:</p>
<pre><code>>>> b
[12345, 6789, 7643, 1, 2]
>>> ' '.join(format(v, '02x') for v in b)
'3039 1a85 1ddb 01 02'
</code></pre>
<p>Now... this is already a good start, but... for larger numbers the individual bytes are grouped and attached to each other. This should not happen. There should be a space between them, just like is the case with the two last numbers.</p>
<p>Of course, I could just do some string manipulation afterwards and insert some spaces, but... that sounds like a very hacky way, and I refuse to believe there isn't a cleaner way to do this.</p>
<p>So, what I want is this:</p>
<pre><code>'30 39 1a 85 1d db 00 01 00 02'
</code></pre>
<p>How can I do this?</p>
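<p>For reference, a sketch of one candidate I have seen mentioned (assumptions: every value fits in 16 bits, and Python 3.8+ for <code>bytes.hex()</code> with a separator):</p>

```python
b = [12345, 6789, 7643, 1, 2]

# Pack each value as two big-endian bytes, then hex-dump with spaces
out = ' '.join(v.to_bytes(2, 'big').hex(' ') for v in b)
print(out)  # 30 39 1a 85 1d db 00 01 00 02
```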
| <python><python-3.x><string><list><hex> | 2023-08-30 15:32:21 | 5 | 501 | Opifex |
77,009,394 | 139,692 | Python -m Loading Site Package | <p>I have two packages installed via pip which are com.ziath.acacus and com.ziath.driver which install in 'dotted' packages. If I then write a module that imports these:</p>
<pre><code>from com.ziath.acacus.reader import GiSReader
from com.ziath.driver.Zebra2707 import Zebra2707
</code></pre>
<p>When I execute these using -m : python -m com.ziath.ritrack.manu.RITrackManu -l debug; I get the following error:</p>
<pre><code>C:\Users\nbenn\GitHub\RITrackManu\src>python -m com.ziath.ritrack.manu.RITrackManu -l debug
Traceback (most recent call last):
File "<frozen runpy>", line 198, in _run_module_as_main
File "<frozen runpy>", line 88, in _run_code
File "C:\Users\nbenn\GitHub\RITrackManu\src\com\ziath\ritrack\manu\RITrackManu.py", line 8, in <module>
from com.ziath.acacus.reader import GiSReader
ModuleNotFoundError: No module named 'com.ziath.acacus'
</code></pre>
<p>However when I execute from the source file : python com\ziath\ritrack\manu\RITrackManu.py -l debug it works with the following:</p>
<pre><code>C:\Users\nbenn\GitHub\RITrackManu\src>python com\ziath\ritrack\manu\RITrackManu.py -l debug
RITRack Manu
INFO:root:rfid com port is COM3
INFO:root:linear com port is COM4
</code></pre>
<p>I'm frankly stumped - the <strong>init</strong>.py file is there on each directory, com/ com/ziath and com/acacus and the setup.py file is as follows:</p>
<pre><code>from setuptools import setup, find_packages
setup(
    name='com.ziath.acacus',
    version='1.19',
    packages=['com.ziath.acacus'],
    package_dir={'': 'src'},
    install_requires=['crcmod', 'pyserial'],
    python_requires='>=3.7'
)
</code></pre>
<p>The com.ziath.driver package is the same with the name acacus changed to driver.</p>
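<p>As a diagnostic (hypothetical helper, not part of my project), this shows where a top-level package resolves from — with <code>-m</code>, <code>sys.path[0]</code> is the current working directory, so a local regular <code>com</code> package in <code>src</code> could shadow the installed <code>com.ziath.*</code> distributions:</p>

```python
import importlib.util

def where(name: str):
    # Report where Python would load the named package/module from; None if not found
    spec = importlib.util.find_spec(name)
    if spec is None:
        return None
    return list(spec.submodule_search_locations or [spec.origin])

print(where("json"))  # stdlib, for comparison
print(where("com"))   # if this points into src/, the local package wins over site-packages
```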
<p>If anyone has any suggestions I'd be grateful!</p>
<p>Cheers,</p>
<p>Neil</p>
| <python><setuptools> | 2023-08-30 15:21:56 | 0 | 946 | Neil Benn |
77,009,377 | 7,295,599 | How to reduce size of PyInstaller RDKit executable? | <p>In order to find duplicate chemical structures, I was writing a little Python script using RDKit (Win10, Python3.11.3, without (Ana)Conda).
The script seems to work, however, since I want to pass this script to someone who has no Python installation, I'm using <code>pyinstaller --onefile CheckSMILESduplicates.py</code> to create an executable.</p>
<p>This executable will get a size of about >100 MB and starting up the .exe will take about 20 seconds. Is this really the lower file size limit?</p>
<p><strong>Question: How to reduce the size of a PyInstaller RDKit executable?</strong></p>
<p>There seem to be some highly active questions:</p>
<ul>
<li><a href="https://stackoverflow.com/q/47692213/7295599">Reducing size of pyinstaller exe</a></li>
<li><a href="https://stackoverflow.com/q/44681356/7295599">Reduce pyinstaller executable size</a></li>
<li><a href="https://stackoverflow.com/q/73035914/7295599">How to reduce the size of a python exe file?</a> (duplicate)</li>
<li><a href="https://stackoverflow.com/q/69065057/7295599">Reducing the size of executable created by pyinstaller</a> (without answers so far)</li>
</ul>
<p>The suggestions have not been helpful to me yet,
and maybe there are <strong>RDKit-specific</strong> "tricks" and "excludes"...</p>
<ul>
<li>exclude all the unnecessary packages
(How should I know what is <em>not</em> required? Trial and error?)</li>
<li>use UPX
(this is already using <code>upx=True</code> when getting >100 MB)</li>
<li>create a "clean" environment only with the packages which you are using
(Why is it not possible with an existing "dirty" environment/installation?)</li>
</ul>
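<p>Regarding the first point — a sketch (hypothetical helper) that statically lists what a script actually imports, so that heavy packages <em>not</em> in the list become candidates for PyInstaller's <code>--exclude-module</code> option:</p>

```python
from modulefinder import ModuleFinder

def traced_top_level(script_path: str) -> list:
    # Trace the imports reachable from the script without executing it
    finder = ModuleFinder()
    finder.run_script(script_path)
    return sorted({name.split(".")[0] for name in finder.modules})

# e.g. traced_top_level("CheckSMILESduplicates.py")
```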
<p>Thank you for RDKit-specific advice on how to reduce the size of the executable.</p>
<p>And for those who are interested in the script (or want to test it):</p>
<p><strong>Input:</strong> (minimized/simplified example, e.g. from ChemDraw)</p>
<p><a href="https://i.sstatic.net/3tjcv.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/3tjcv.png" alt="enter image description here" /></a></p>
<ul>
<li>in ChemDraw select all structures and copy as SMILES</li>
<li>paste and save into <code>SMILES.txt</code> which will give a long string where SMILES are separated by <code>.</code></li>
</ul>
<pre><code>C12=CC=CC=C1C=C3C(C=CC=C3)=C2.C45=CC=CC=C4C=CC=C5.C67=CC=CC=C6C=CC8=C7C=CC=C8.C9%10=CC=CC=C9C%11=C(C=CC=C%11)C=C%10.C%12(C=C(C=CC=C%13)C%13=C%14)=C%14C=CC=C%12.C%15(C=C(C=CC=C%16)C%16=C%17)=C%17C=CC=C%15.C%18%19=CC=CC=C%18C=C%20C(C=C(C=CC=C%21)C%21=C%20)=C%19.c%22%23ccccc%22=CC=CC=%23.c%24%25ccccc%24cccc%25
</code></pre>
<p><strong>Script:</strong></p>
<pre><code># Check duplicate SMILES structures
#
# in a ChemDraw file:
# 1. select all structures (Ctrl+A)
# 2. copy as SMILES (Alt+Ctrl+C)
# 3. paste text into text file, e.g. SMILES.txt
# 4. start this program with command line argument "SMILES.txt"
# 5. SMILES will be separated and written into separate lines
# 6. Output: a) total structures found, b) different sum formulas, c) duplicates

# from rdkit import Chem
from rdkit.Chem.inchi import MolToInchi
from rdkit.Chem import MolFromSmiles
import sys


def read_file(ffname):
    print('Loading "{}" ...'.format(ffname))
    with open(ffname, "r") as f:
        smiles = f.read().strip().split(".")
    with open(ffname, "w") as f:
        smiles_str = '\n'.join(smiles)
        f.write(smiles_str)
    return smiles_str.split()


def get_inchi_duplicates_from_smiles(smiless):
    inchis = [MolToInchi(MolFromSmiles(x)) for x in smiless]
    print("\nTotal structures found: {}".format(len(inchis)))
    print("\n".join(inchis))
    dict_sumform = {}
    dict_inchi = {}
    for inchi in inchis:
        sumform = inchi.split('/')[1]
        if sumform in dict_sumform.keys():
            dict_sumform[sumform] += 1
        else:
            dict_sumform[sumform] = 1
        if inchi in dict_inchi.keys():
            dict_inchi[inchi] += 1
        else:
            dict_inchi[inchi] = 1
    print("\nDifferent sum formulas: {}".format(len(dict_sumform.keys())))
    for x in sorted(dict_sumform.keys()):
        print("{} : {}".format(x, dict_sumform[x]))
    # duplicates
    print("\nDuplicates:")
    for inchi in sorted(dict_inchi.keys()):
        if dict_inchi[inchi] > 1:
            print("{}x : {}".format(dict_inchi[inchi], inchi))


if __name__ == '__main__':
    print("Checking for duplicate structures...")
    if len(sys.argv) > 1:
        ffname = sys.argv[1]
        smiless = read_file(ffname)
        get_inchi_duplicates_from_smiles(smiless)
    else:
        print("Please give an input file!")

### end of script
</code></pre>
<ul>
<li>start the script on the console via <code>py CheckSMILESduplicates.py SMILES.txt</code></li>
</ul>
<p><strong>Output:</strong></p>
<pre><code>Loading "SMILES.txt" ...
Total structures found: 9
InChI=1S/C14H10/c1-2-6-12-10-14-8-4-3-7-13(14)9-11(12)5-1/h1-10H
InChI=1S/C10H8/c1-2-6-10-8-4-3-7-9(10)5-1/h1-8H
InChI=1S/C14H10/c1-3-7-13-11(5-1)9-10-12-6-2-4-8-14(12)13/h1-10H
InChI=1S/C14H10/c1-3-7-13-11(5-1)9-10-12-6-2-4-8-14(12)13/h1-10H
InChI=1S/C14H10/c1-2-6-12-10-14-8-4-3-7-13(14)9-11(12)5-1/h1-10H
InChI=1S/C14H10/c1-2-6-12-10-14-8-4-3-7-13(14)9-11(12)5-1/h1-10H
InChI=1S/C18H12/c1-2-6-14-10-18-12-16-8-4-3-7-15(16)11-17(18)9-13(14)5-1/h1-12H
InChI=1S/C10H8/c1-2-6-10-8-4-3-7-9(10)5-1/h1-8H
InChI=1S/C10H8/c1-2-6-10-8-4-3-7-9(10)5-1/h1-8H
Different sum formulas: 3
C10H8 : 3
C14H10 : 5
C18H12 : 1
Duplicates:
3x : InChI=1S/C10H8/c1-2-6-10-8-4-3-7-9(10)5-1/h1-8H
3x : InChI=1S/C14H10/c1-2-6-12-10-14-8-4-3-7-13(14)9-11(12)5-1/h1-10H
2x : InChI=1S/C14H10/c1-3-7-13-11(5-1)9-10-12-6-2-4-8-14(12)13/h1-10H
</code></pre>
| <python><windows><pyinstaller><exe><rdkit> | 2023-08-30 15:19:47 | 0 | 27,030 | theozh |
77,009,361 | 217,332 | Type hints for classes with known supertype | <p>Using Python 3.8. I have the following stripped-down example that I haven't fully gotten to typecheck:</p>
<pre><code>from abc import ABC, abstractmethod
from functools import wraps
from typing import Callable, List


class A(ABC):
    property: str

    def implemented(self, arg: str) -> None:
        self.property = arg

    @abstractmethod
    def not_implemented(self, another_arg: int) -> int:
        pass


def method_decorator(f: Callable[[??, int], int]) -> Callable[[??, int], int]:
    @wraps(f)
    def wrapper(self: ??, arg: int) -> int:
        # do stuff
    return wrapper


class SubA(A):
    @method_decorator
    def not_implemented(self, another_arg: int) -> int:
        return another_arg


list_of_a_impls: List[??] = [SubA]


def takes_callback(f: Callable[[int], int]) -> int:
    return f(3)


def do_something_with_an_a() -> None:
    cls: ?? = list_of_a_impls[0]
    a_inst: ?? = cls()
    takes_callback(a_inst.not_implemented)  # partial application giving problems
</code></pre>
<p>I was able to resolve everything except <code>do_something_with_an_a</code> by introducing a <code>Protocol</code> that looks very similar to <code>A</code>. However, I'm worried that I shouldn't actually need a protocol given that I really only want to allow subclasses of <code>A</code>. Regardless of that, I still wasn't able to come up with a type hint that would let me use <code>cls</code> to instantiate an object. What's the right thing to do here?</p>
| <python><python-3.x><types><python-3.8> | 2023-08-30 15:17:53 | 1 | 83,780 | danben |
77,009,305 | 4,267,439 | pcolormesh quadrilateral alignment | <p>Looking at the <a href="https://matplotlib.org/stable/api/_as_gen/matplotlib.axes.Axes.pcolormesh.html" rel="nofollow noreferrer">documentation</a>, <code>quadrilateral</code> should have X, Y coordinates on the lower-left corner, but I get them with X,Y as center point. How can I fix it? Everything seems to be shifted up and right by 0.5.</p>
<pre><code>import numpy as np
import matplotlib.pyplot as plt
X = np.arange(20)
Y = np.arange(11)
Z = [[11, 41, 77, 84, 45, 20, 9, 5, 2, 1, 1, 1, np.nan, np.nan, np.nan, np.nan, np.nan, np.nan, np.nan],
[np.nan, np.nan, 1, 46, 109, 87, 49, 26, 14, 8, 5, 3, 2, 1, 1, 1, np.nan, np.nan, np.nan],
[np.nan, np.nan, np.nan, 2, 45, 97, 82, 50, 28, 16, 10, 6, 4, 3, 2, 1, 1, 1, np.nan],
[np.nan, np.nan, np.nan, np.nan, 7, 53, 76, 57, 34, 20, 11, 7, 4, 3, 2, 1, 1, 1, np.nan],
[np.nan, np.nan, np.nan, np.nan, 1, 17, 47, 46, 30, 17, 10, 6, 3, 2, 1, 1, 1, np.nan, np.nan],
[np.nan, np.nan, np.nan, np.nan, np.nan, 4, 21, 28, 20, 12, 6, 4, 2, 1, 1, 1, np.nan, np.nan, np.nan],
[np.nan, np.nan, np.nan, np.nan, np.nan, 1, 7, 13, 11, 6, 3, 2, 1, 1, np.nan, np.nan, np.nan, np.nan, np.nan],
[np.nan, np.nan, np.nan, np.nan, np.nan, np.nan, 2, 5, 5, 3, 1, 1, np.nan, np.nan, np.nan, np.nan, np.nan, np.nan, np.nan],
[np.nan, np.nan, np.nan, np.nan, np.nan, np.nan, np.nan, 2, 2, 1, np.nan, np.nan, np.nan, np.nan, np.nan, np.nan, np.nan, np.nan, np.nan],
[np.nan, np.nan, np.nan, np.nan, np.nan, np.nan, np.nan, np.nan, np.nan, np.nan, np.nan, np.nan, np.nan, np.nan, np.nan, np.nan, np.nan, np.nan, np.nan]]
plt.figure()
plt.pcolormesh(X[1:], Y[1:], Z, cmap='jet')
plt.xlim([0, 1.5 * 8.3])
plt.ylim([0, 2 * 5])
plt.grid(True)
plt.colorbar()
plt.show()
</code></pre>
<p>This is what I get:</p>
<p><a href="https://i.sstatic.net/Dr7R9.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Dr7R9.png" alt="enter image description here" /></a></p>
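<p>A sketch of the workaround I would expect to help (assumption: recent matplotlib silently falls back to <code>shading='nearest'</code>, i.e. cell-centered, when X and Y have the same shape as Z; passing the full edge arrays, one longer than Z in each direction, keeps <code>shading='flat'</code> with lower-left alignment):</p>

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")  # headless backend, just for the sketch
import matplotlib.pyplot as plt

Z = np.random.rand(10, 19)  # 10 rows x 19 columns, same shape as the data above
X = np.arange(20)           # 19 columns need 20 x-edges
Y = np.arange(11)           # 10 rows need 11 y-edges

fig, ax = plt.subplots()
mesh = ax.pcolormesh(X, Y, Z, shading="flat")  # edges given, so no half-cell shift
print(type(mesh).__name__)
```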
| <python><matplotlib> | 2023-08-30 15:11:50 | 2 | 2,825 | rok |
77,009,295 | 12,297,666 | Different DTW Distance in FastDTW and dtaidistance | <p>I just started reading about DTW, and decided to try two Python packages, <a href="https://github.com/slaypni/fastdtw" rel="nofollow noreferrer">fastdtw</a> and <a href="https://dtaidistance.readthedocs.io/en/latest/" rel="nofollow noreferrer">dtaidistance</a>.</p>
<p>Consider the case of a multiclass timeseries classification problem (Classes are 0, 1, 3 and 4). Samples for classes 1, 3 and 4 are generated based on the Classes 0 samples, like this:</p>
<pre><code>import numpy as np
from scipy.spatial.distance import euclidean  # used as the fastdtw dist function below
from fastdtw import fastdtw
from dtaidistance import dtw
np.random.seed(42)
# Original Class 0 samples (10 samples with 48 half-hourly measurements each)
class_0_samples = np.random.rand(10, 48)
# Generate Class 1 samples (multiply each sample by a random value between [0, 0.8])
class_1_samples = class_0_samples * np.random.uniform(0, 0.8, size=(10, 1))
# Generate Class 3 samples (multiply each half-hourly measurement by a different random value between [0, 0.8])
class_3_samples = class_0_samples * np.random.uniform(0, 0.8, size=(10, 48))
# Generate Class 4 samples (multiply specific columns by a random value between [0, 0.8])
class_4_samples = class_0_samples.copy()
start_cols = np.random.randint(7, 15, size=(10,))
for i in range(10):
    start_col = start_cols[i]
    class_4_samples[i, start_col:start_col+4] = class_4_samples[i, start_col:start_col+4] * np.random.uniform(0, 0.8)
</code></pre>
<p>Now, i have tried to determine the DTW distance from each sample of Class 0 to each sample of Classes 1, 3 and 4. Using <code>fastdtw</code>, i wrote this code:</p>
<pre><code># Calculate DTW distances between Class 0 samples and original Class 0 samples
fastdtw_distances_class_0 = []
fastdtw_distances_class_1 = []
fastdtw_distances_class_3 = []
fastdtw_distances_class_4 = []
for i in range(10):
    distance0, _ = fastdtw(class_0_samples[i].reshape(1, -1), class_0_samples[i].reshape(1, -1), dist=euclidean)
    fastdtw_distances_class_0.append(distance0)
    distance1, _ = fastdtw(class_0_samples[i].reshape(1, -1), class_1_samples[i].reshape(1, -1), dist=euclidean)
    fastdtw_distances_class_1.append(distance1)
    distance3, _ = fastdtw(class_0_samples[i].reshape(1, -1), class_3_samples[i].reshape(1, -1), dist=euclidean)
    fastdtw_distances_class_3.append(distance3)
    distance4, _ = fastdtw(class_0_samples[i].reshape(1, -1), class_4_samples[i].reshape(1, -1), dist=euclidean)
    fastdtw_distances_class_4.append(distance4)
# Convert distances to a numpy array
fastdtw_distances_class_0 = np.array(fastdtw_distances_class_0).reshape(-1, 1)
fastdtw_distances_class_1 = np.array(fastdtw_distances_class_1).reshape(-1, 1)
fastdtw_distances_class_3 = np.array(fastdtw_distances_class_3).reshape(-1, 1)
fastdtw_distances_class_4 = np.array(fastdtw_distances_class_4).reshape(-1, 1)
</code></pre>
<p>And for <code>dtaidistance</code>, i wrote this:</p>
<pre><code># Calculate DTW distances between Class 0 samples and original Class 0 samples
dtaidtw_distances_class_0 = []
dtaidtw_distances_class_1 = []
dtaidtw_distances_class_3 = []
dtaidtw_distances_class_4 = []
for i in range(10):
    distance0 = dtw.distance_fast(class_0_samples[i], class_0_samples[i])
    dtaidtw_distances_class_0.append(distance0)
    distance1 = dtw.distance_fast(class_0_samples[i], class_1_samples[i])
    dtaidtw_distances_class_1.append(distance1)
    distance3 = dtw.distance_fast(class_0_samples[i], class_3_samples[i])
    dtaidtw_distances_class_3.append(distance3)
    distance4 = dtw.distance_fast(class_0_samples[i], class_4_samples[i])
    dtaidtw_distances_class_4.append(distance4)
# Convert distances to a numpy array
dtaidtw_distances_class_0 = np.array(dtaidtw_distances_class_0).reshape(-1, 1)
dtaidtw_distances_class_1 = np.array(dtaidtw_distances_class_1).reshape(-1, 1)
dtaidtw_distances_class_3 = np.array(dtaidtw_distances_class_3).reshape(-1, 1)
dtaidtw_distances_class_4 = np.array(dtaidtw_distances_class_4).reshape(-1, 1)
</code></pre>
<p>If you run all that, you can see that some distances are the same, but some aren't. For example, checking the <code>fastdtw</code> and <code>dtaidistance</code> distances between sample 0 of Class 0 and Class 4, we can see they are exactly the same:</p>
<pre><code>fastdtw_distances_class_4[0]
Out[12]: array([0.35720446])
dtaidtw_distances_class_4[0]
Out[13]: array([0.35720446])
</code></pre>
<p>But for some other samples they are different, for example sample 1:</p>
<pre><code>fastdtw_distances_class_4[1]
Out[15]: array([0.57095312])
dtaidtw_distances_class_4[1]
Out[16]: array([0.48818216])
</code></pre>
<p>And for some other cases, they are significantly different, for example, any sample between Class 0 and Class 3.</p>
<pre><code>fastdtw_distances_class_3[8]
Out[17]: array([3.27973515])
dtaidtw_distances_class_3[8]
Out[18]: array([2.11034766])
</code></pre>
<p>What could be the cause of this? Is there something wrong with my code? Or are the implementations of the DTW algorithm in those packages really different from each other? How should I choose one or the other in this case?</p>
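<p>For what it's worth, a from-scratch sketch (independent of both libraries) of the effect I suspect: with the <code>(1, -1)</code> reshape each series becomes a single 48-dimensional point, so there is nothing to warp and the "DTW distance" collapses to the plain euclidean distance, whereas <code>dtaidistance</code> warps the 1-D series, which can only make the distance smaller or equal:</p>

```python
import math

def dtw_1d(a, b):
    # Textbook O(n*m) DTW on 1-D series with squared pointwise cost;
    # returning sqrt of the total matches dtaidistance's convention
    n, m = len(a), len(b)
    INF = float("inf")
    D = [[INF] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = (a[i - 1] - b[j - 1]) ** 2
            D[i][j] = cost + min(D[i][j - 1], D[i - 1][j], D[i - 1][j - 1])
    return math.sqrt(D[n][m])

x = [0.0, 1.0, 2.0, 1.0, 0.0]
y = [0.0, 0.0, 1.0, 2.0, 1.0]        # x shifted by one step
euclid = math.sqrt(sum((p - q) ** 2 for p, q in zip(x, y)))
print(dtw_1d(x, y), euclid)          # 1.0 2.0 -- warping absorbs the shift
```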
| <python><dtw> | 2023-08-30 15:10:07 | 1 | 679 | Murilo |