QuestionId int64 74.8M 79.8M | UserId int64 56 29.4M | QuestionTitle stringlengths 15 150 | QuestionBody stringlengths 40 40.3k | Tags stringlengths 8 101 | CreationDate stringdate 2022-12-10 09:42:47 2025-11-01 19:08:18 | AnswerCount int64 0 44 | UserExpertiseLevel int64 301 888k | UserDisplayName stringlengths 3 30 ⌀ |
|---|---|---|---|---|---|---|---|---|
77,521,658 | 5,344,240 | VSCode Python/Pylance does not gray out unaccessed variable | <p>In the screenshot below, when hovering over the grayed-out variables, Pylance (correctly!) says they are not accessed, e.g. <code>"_baz" is not accessed Pylance</code>. My question is about <code>waz</code>, which is clearly not accessed in either tab, yet is still not grayed out. Why isn't it grayed out?</p>
<p><a href="https://i.sstatic.net/eXwkk.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/eXwkk.png" alt="enter image description here" /></a></p>
<p>I thought maybe it was related to <code>waz</code> not being a "private" (underscore) variable, but it just doesn't make sense...</p>
| <python><visual-studio-code><pylance> | 2023-11-21 09:26:04 | 2 | 455 | Andras Vanyolos |
77,521,499 | 949,251 | How to make `os.path.isdir()` work for mapped network drives? | <p>When I boot my PC, <code>os.path.isdir("R:\\")</code> returns <code>False</code>, where <code>R:\</code> is a mapped network drive. Once I access the mapped drive at least once from Windows Explorer, the command works and keeps working until I reboot.</p>
<p>I know that I can get around this using UNC paths but since my code is part of a UI, I can't force the user to not use mapped network drives.</p>
<p>How do I make Python reliably accept mapped network drives?
I thought I might be able to simulate opening the folder from Windows by calling something like <code>os.system("dir R:\\")</code>, but it did not work. Apparently Windows performs some magic under the hood that I cannot replicate through an <code>os.system</code> call.</p>
<p>Here is my call history for reference to what I tried:</p>
<pre class="lang-py prettyprint-override"><code>os.path.isdir("R:\\") # False
os.system("dir R:\\") # 1
os.path.isdir("R:\\") # False
os.system("net use") # 0
os.path.isdir("R:\\") # False
# open R:\ in Windows Explorer
os.path.isdir("R:\\") # True
</code></pre>
<p>This question deals with the same problem, but the accepted answer proposes using UNC paths as a workaround, which, as explained above, does not fit my use case: <a href="https://stackoverflow.com/questions/39492524/os-path-isfile-returns-false-for-file-on-network-drive">os.path.isfile() returns false for file on network drive</a></p>
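<p>One workaround sometimes tried (an assumption, not a verified fix — the behavior is OS- and redirector-dependent) is to touch the drive root through a different API before re-checking, since <code>os.listdir</code> goes through a different code path than a shelled-out <code>dir</code>. A minimal sketch:</p>

```python
import os

def isdir_with_wakeup(path, drive_root=None):
    """os.path.isdir with a hypothetical 'wake up the mapped drive' fallback."""
    if os.path.isdir(path):
        return True
    if drive_root is not None:
        try:
            # Listing the drive root may prompt Windows to re-establish the
            # mapped connection (assumption; not guaranteed on all systems).
            os.listdir(drive_root)
        except OSError:
            pass
    # Second attempt after the wake-up touch.
    return os.path.isdir(path)
```

<p>If this does not help, reconnecting explicitly (e.g. via pywin32's <code>win32wnet</code>) before the check is another avenue to investigate.</p>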
| <python><windows><mapped-drive> | 2023-11-21 08:58:13 | 0 | 831 | Cerno |
77,521,378 | 2,667,387 | Confusing output with pandas rolling window with datetime64[us] dtype | <p>I get confusing results from <code>pandas.rolling()</code> when the dtype is <code>datetime64[us]</code>. Pandas version is 2.1.1. Let <code>df</code> be the dataframe</p>
<pre><code> day x
0 2021-01-01 3
1 2021-01-02 2
2 2021-01-03 1
3 2021-01-05 4
4 2021-01-08 2
5 2021-01-14 5
6 2021-01-15 6
7 2021-01-16 1
8 2021-01-19 5
9 2021-01-20 2
</code></pre>
<p>Its dtypes are:</p>
<pre><code>day datetime64[ns]
x int64
dtype: object
</code></pre>
<p>We specify a rolling window of length <strong>3 days</strong>:</p>
<pre class="lang-py prettyprint-override"><code>df.rolling("3d", on="day", center=True)["x"].sum()
</code></pre>
<p>Output is as expected:</p>
<pre><code>0 5.0
1 6.0
2 3.0
3 4.0
4 2.0
5 11.0
6 12.0
7 7.0
8 7.0
9 7.0
Name: x, dtype: float64
</code></pre>
<p>Let us repeat this after casting the dtype <code>datetime64[ns]</code> to <code>datetime64[us]</code>:</p>
<pre class="lang-py prettyprint-override"><code>df["day"] = df["day"].astype("datetime64[us]")
</code></pre>
<p>Using the exact same code as above for the rolling window now gives:</p>
<pre><code>0 31.0
1 31.0
2 31.0
3 31.0
4 31.0
5 31.0
6 31.0
7 31.0
8 31.0
9 31.0
Name: x, dtype: float64
</code></pre>
<p>Why?</p>
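<p>Until the underlying behavior is explained or fixed, one hedged workaround is to cast back to nanosecond precision before rolling, so the window arithmetic stays in the unit pandas expects. A minimal sketch with the first three rows of the data above:</p>

```python
import pandas as pd

df = pd.DataFrame({
    "day": pd.to_datetime(["2021-01-01", "2021-01-02", "2021-01-03"]),
    "x": [3, 2, 1],
})
# Reproduce the microsecond setup, then cast back to nanoseconds before rolling.
df["day"] = df["day"].astype("datetime64[us]").astype("datetime64[ns]")
result = df.rolling("3d", on="day", center=True)["x"].sum()
print(result.tolist())  # [5.0, 6.0, 3.0], matching the expected output above
```

<p>This only sidesteps the question; it does not explain why the <code>datetime64[us]</code> path produces the all-31.0 result.</p>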
| <python><pandas><datetime><rolling-computation> | 2023-11-21 08:38:38 | 2 | 863 | DustByte |
77,521,373 | 25,645 | python pandas writing parquet file I just read: Conversion failed for column | <pre><code>import pandas as pd
file = pd.read_parquet("/Users/myuser/Downloads/input.parquet")
file.to_parquet("/Users/hazout/mysuer/output.parquet")
</code></pre>
<p>I am trying to read and write a parquet file (I plan to modify it too, but first things first).</p>
<p>I get <code>pyarrow.lib.ArrowTypeError: ("Expected bytes, got a 'int' object", 'Conversion failed for column gtis_per_league with type object')</code></p>
<p>I have 46 columns, do I really have to redefine and massage each column even when I don't touch them?</p>
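<p>The error message points at a single column (<code>gtis_per_league</code>) holding mixed types, which pandas stores as dtype <code>object</code> and Arrow cannot map to one type. A hedged sketch (the mixed values here are invented for illustration) that coerces only the object-dtype columns, leaving the other columns untouched:</p>

```python
import pandas as pd

# A column holding both strings and ints gets dtype 'object' -- the likely
# cause of the ArrowTypeError when writing back to parquet.
df = pd.DataFrame({"gtis_per_league": ["abc", 5, "def"]})

# Coerce every object-dtype column to str before to_parquet(); numeric and
# datetime columns are not affected.
obj_cols = df.select_dtypes(include="object").columns
df[obj_cols] = df[obj_cols].astype(str)
```

<p>So no, all 46 columns should not need massaging; only the column(s) Arrow complains about.</p>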
| <python><pandas><parquet> | 2023-11-21 08:37:47 | 0 | 49,641 | Nathan H |
77,521,240 | 14,516,016 | Executing workflow commands for GitHub Actions using a Python script | <p>I am trying to log a <a href="https://docs.github.com/en/actions/using-workflows/workflow-commands-for-github-actions#setting-a-warning-message" rel="nofollow noreferrer">warning message</a> in my GitHub Actions workflow using a Python script.</p>
<p>The script writes the following output in the workflow:</p>
<pre class="lang-none prettyprint-override"><code>::warning file="./german.ini",title="Sub-Section 'close_button' of 'about_csprite_popup' Not Found"
</code></pre>
<p>using the <code>sys.stdout.write</code> function in Python's <code>sys</code> module, but apparently it is logged as normal text:</p>
<p><a href="https://i.sstatic.net/Rxifw.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Rxifw.png" alt="github actions un-formatted output screenshot" /></a></p>
<p>as opposed to the formatted text as expected here:</p>
<p><a href="https://i.sstatic.net/pXBEV.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/pXBEV.png" alt="github-action-utils python module test screenshot" /></a></p>
<p>My Python script:</p>
<pre class="lang-py prettyprint-override"><code>#!/usr/bin/python
# Requires Python v3 Interpreter
import os
import sys
import glob
import configparser
ConfigStructure = {
    "file_menu": [ "file", "new", "open" ],
    "help_menu": [ "help", "about", "github" ],
    "open_file_popup": [ "open_file" ],
    "new_document_popup": [
        "title", "width_input", "height_input",
        "ok_button", "cancel_button"
    ],
    "about_csprite_popup": [
        "title", "contributors_header", "contributors_paragraph",
        "contributors_link", "os_projects_header", "os_projects_text",
        "close_button"
    ],
    "unicode_range": [ "range" ]
}

Files = []
ExitCode = 0

for f in glob.glob("./*.ini"):
    if os.path.isfile(f):
        Files.append(f)

for f in Files:
    try:
        config = configparser.ConfigParser()
        config.read(f)
        for Section in ConfigStructure:
            if not Section in config:
                sys.stdout.write("::warning file=\"" + f + "\",title=\"Section '" + Section + "' Not Found\"\n")
                break
            for SubSection in ConfigStructure[Section]:
                if not SubSection in config[Section]:
                    sys.stdout.write("::warning file=\"" + f + "\",title=\"Sub-Section '" + SubSection + "' of '" + Section + "' Not Found\"\n")
    except Exception as e:
        sys.stdout.write(f"::error file='{f}',title='Unhandled Error Occurred'::'{str(e)}'\n")
        ExitCode = 1

sys.exit(ExitCode)
</code></pre>
<p>My GitHub Actions workflow:</p>
<pre class="lang-yaml prettyprint-override"><code>name: Lint
on:
  push:
    branches: [ master ]

jobs:
  lint:
    runs-on: ubuntu-22.04
    steps:
      - uses: actions/checkout@v4
        with:
          submodules: recursive
      - name: Install Python
        run: |
          sudo apt-get update -y
          sudo apt-get install python3 -y
      - name: Lint Languages Files
        run: |
          python3 ValidateLangFiles.py
</code></pre>
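<p>One thing worth checking: GitHub's workflow-command syntax is <code>::warning file={name},title={title}::{message}</code> — the sample line above has no <code>::{message}</code> segment after the parameters, and the parameter values are quoted, which may prevent the runner from parsing it as a command. A hedged helper that emits the documented shape:</p>

```python
import sys

def format_warning(message, file=None, title=None):
    # Produces: ::warning file=app.py,title=Some title::the message text
    # Parameter values are written without surrounding quotes.
    params = ",".join(
        f"{k}={v}" for k, v in (("file", file), ("title", title)) if v is not None
    )
    command = "::warning"
    if params:
        command += " " + params
    return command + "::" + message

sys.stdout.write(format_warning(
    "Sub-Section 'close_button' of 'about_csprite_popup' Not Found",
    file="./german.ini",
) + "\n")
```

<p>This is a sketch based on the documented command format, not a confirmed diagnosis of the screenshot above.</p>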
| <python><logging><github-actions> | 2023-11-21 08:14:28 | 1 | 1,359 | Aditya |
77,520,973 | 772,354 | OpenCV detect liquid level in a tube | <p>I have image like this below</p>
<p><a href="https://i.sstatic.net/TzbZc.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/TzbZc.jpg" alt="tube" /></a></p>
<p>Can anyone suggest how to detect the level of the liquid?
Is this too blurry to process? I have tried Canny and findContour and cannot detect the line.</p>
<p><a href="https://i.sstatic.net/R8Hk0.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/R8Hk0.jpg" alt="enter image description here" /></a></p>
<p>I am using python opencv 4.8</p>
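<p>When edge detectors fail on blurry images, one common fallback (a sketch under the assumption that the tube is roughly vertical in the crop and the surface is the dominant horizontal transition) is a row-intensity profile: average the brightness per row and look for the sharpest change. Shown here with NumPy only and a synthetic image:</p>

```python
import numpy as np

def liquid_level_row(gray):
    """Return the row index with the sharpest change in average brightness.

    Sketch only: assumes a grayscale crop of the tube where the liquid
    surface is the strongest horizontal edge.
    """
    profile = gray.astype(float).mean(axis=1)   # mean brightness per row
    gradient = np.abs(np.diff(profile))         # row-to-row change
    return int(np.argmax(gradient))

# Synthetic check: bright 'air' (200) above dark 'liquid' (50).
img = np.vstack([np.full((10, 5), 200), np.full((10, 5), 50)])
print(liquid_level_row(img))  # 9 -- the last bright row before the transition
```

<p>On the real photo you would first crop to the tube and possibly smooth each row (e.g. with a Gaussian blur) before taking the profile; whether this survives the blur in your image is untested.</p>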
| <python><opencv><image-processing><computer-vision> | 2023-11-21 07:22:45 | 0 | 747 | Bharata |
77,520,963 | 8,595,891 | Using locally deployed llm with langchain's openai llm wrapper | <p>I have deployed an LLM model locally which follows the OpenAI API schema. Since its endpoint follows the OpenAI schema, I don't want to write a separate inference client.</p>
<p>Is there any way to utilize the existing OpenAI wrapper from LangChain to do inference against my localhost model?</p>
<p>I checked that there is an <a href="https://python.langchain.com/docs/guides/adapters/openai#chatcompletionstream" rel="nofollow noreferrer">openai adapter</a> by LangChain, but it seems to require a provider, for which I would again have to write a separate client.</p>
<p>The overall goal is to not write any redundant code, as it is already maintained by LangChain and may change with time. We can modify our API to match OpenAI's and it works out of the box.</p>
<p>Your suggestion is appreciated.</p>
| <python><openai-api><langchain><py-langchain> | 2023-11-21 07:20:51 | 1 | 1,362 | Pranjal Doshi |
77,520,936 | 4,281,353 | tensorflow - tf.keras.Model.fit causes run out of data for validation data with validation_steps being set | <p>Trying to understand the <code>validation_steps</code> parameter of <a href="https://www.tensorflow.org/api_docs/python/tf/keras/Model#fit" rel="nofollow noreferrer">tf.keras.Model.fit</a>.</p>
<blockquote>
<p>Total number of steps (batches of samples) to draw before stopping when performing validation at the end of every epoch.</p>
</blockquote>
<p>For instance, <a href="https://www.tensorflow.org/datasets/catalog/mnist" rel="nofollow noreferrer">TFDS MNIST</a> dataset has <code>60,000</code> train and <code>10,000</code> test data records. Trying to consume all the records during <code>num_epochs=2</code> epochs with <code>batch_size=8</code> using generators as the data sources to the model.</p>
<pre><code>(train, test), info = tfds.load(
    'mnist',
    split=['train', 'test'],
    shuffle_files=True,
    as_supervised=True,
    with_info=True,
)

x_generator = train.batch(batch_size).as_numpy_iterator()
v_generator = test.batch(batch_size).as_numpy_iterator()  # using 'test' for validation here
</code></pre>
<p>The training data can afford <code>3750=(60000 / batch_size=8 / epochs=2)</code> batches, and the test data can afford <code>625=(10000 / batch_size=8 / epochs=2)</code> batches.</p>
<pre><code>def f(image, label):
    return 1

num_total_train_records = len(list(  # 60000
    train.map(f)
))
num_total_test_records = len(list(  # 10000
    test.map(f)
))
print(num_total_train_records, num_total_test_records)
# -----
# 60000 10000
</code></pre>
<pre><code>num_epochs = 2
batch_size = 8
num_x_batches_per_epoch = int(np.floor(num_total_train_records / batch_size / num_epochs))
num_v_batches_per_epoch = int(np.floor(num_total_test_records / batch_size / num_epochs))
print(num_x_batches_per_epoch, num_v_batches_per_epoch)
# ---
# show 3750 625
</code></pre>
<p>However, setting <code>tf.keras.Model.fit(validation_steps=625)</code> causes the error <code>Your input ran out of data... Make sure that your dataset or generator can generate at least `steps_per_epoch * epochs` batches (in this case, 625 batches)</code>.</p>
<pre><code>model.fit(
    x=x_generator,
    epochs=num_epochs,
    batch_size=batch_size,  # not using batch_size arg makes no difference
    steps_per_epoch=num_x_batches_per_epoch,
    validation_data=v_generator,
    validation_steps=num_v_batches_per_epoch,
    validation_batch_size=batch_size
)
</code></pre>
<pre><code>Your input ran out of data; interrupting training.
Make sure that your dataset or generator can generate at least `steps_per_epoch * epochs` batches
(in this case, 625 batches). You may need to use the repeat() function when building your dataset.
2023-11-21 17:39:33.226528: I tensorflow/core/framework/local_rendezvous.cc:421] Local rendezvous recv item cancelled. Key hash: 17391114698345974101
2023-11-21 17:39:33.226580: I tensorflow/core/framework/local_rendezvous.cc:421] Local rendezvous recv item cancelled. Key hash: 8226056677969075330
WARNING:tensorflow:Your input ran out of data; interrupting training. Make sure that your dataset or generator can generate at least `steps_per_epoch * epochs` batches (in this case, 625 batches). You may need to use the repeat() function when building your dataset.
</code></pre>
<h2>Code</h2>
<pre><code>import numpy as np
import tensorflow as tf
from tensorflow import keras
import tensorflow_datasets as tfds

(train, test), info = tfds.load(
    'mnist',
    split=['train', 'test'],
    shuffle_files=True,
    as_supervised=True,
    with_info=True,
)

def f(image, label):
    return 1

num_total_train_records = len(list(
    train.map(f)
))
num_total_test_records = len(list(
    test.map(f)
))
print(num_total_train_records, num_total_test_records)

model = tf.keras.models.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28)),
    tf.keras.layers.Dense(128, activation='relu'),
    tf.keras.layers.Dense(10)
])
model.compile(
    optimizer=tf.keras.optimizers.Adam(0.001),
    loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
    metrics=[tf.keras.metrics.SparseCategoricalAccuracy()],
)

num_epochs = 2
batch_size = 8
num_x_batches_per_epoch = int(np.floor(num_total_train_records / batch_size / num_epochs))
num_v_batches_per_epoch = int(np.floor(num_total_test_records / batch_size / num_epochs))
print(num_x_batches_per_epoch, num_v_batches_per_epoch)
# ---
# will show 3750 625

x_generator = train.batch(batch_size).as_numpy_iterator()
v_generator = test.batch(batch_size).as_numpy_iterator()

model.fit(
    x=x_generator,
    epochs=num_epochs,
    batch_size=batch_size,
    steps_per_epoch=num_x_batches_per_epoch,
    validation_data=v_generator,
    validation_steps=num_v_batches_per_epoch,
    validation_batch_size=batch_size
)
</code></pre>
<p>Subtracting 1 makes it work:</p>
<pre><code>num_v_batches_per_epoch = int(np.floor(num_total_test_records / batch_size / num_epochs)) - 1  # causes the ran-out-of-data error without the -1
</code></pre>
<p>Please help me understand this behavior. Also, the documentation says <code>Only relevant if validation_data is provided and is a tf.data dataset.</code>, but evidently it is not only for <code>tf.data.Dataset</code>.</p>
<h2>Environment</h2>
<pre><code>tensorflow 2.14.1
Python 3.10.12
Ubuntu 22.04 LTS
</code></pre>
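<p>The core mechanics here can be reproduced without TensorFlow: <code>as_numpy_iterator()</code> returns a plain Python iterator, which yields its items exactly once; the second validation pass finds it already partially or fully consumed. A minimal illustration of that exhaustion (an analogy, not the actual Keras internals):</p>

```python
import itertools

def batches(records, batch_size):
    # Yield fixed-size batches from an iterable until it is exhausted.
    it = iter(records)
    while True:
        batch = list(itertools.islice(it, batch_size))
        if not batch:
            return
        yield batch

gen = batches(range(10), 2)
epoch1 = list(gen)   # 5 batches of 2
epoch2 = list(gen)   # [] -- the generator is spent, mirroring the Keras warning
print(len(epoch1), len(epoch2))
```

<p>This is why the warning suggests <code>repeat()</code>: passing the <code>tf.data.Dataset</code> itself (e.g. <code>test.batch(batch_size).repeat()</code>) rather than a one-shot <code>as_numpy_iterator()</code> is the usual pattern, though whether that fully explains the off-by-one above is not verified here.</p>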
| <python><tensorflow><machine-learning> | 2023-11-21 07:14:37 | 2 | 22,964 | mon |
77,520,930 | 9,495,110 | torch.utils.data.random_split() is not splitting Dataset | <p>I am using <code>ImageFolder</code> to load the data from a directory:</p>
<pre><code>full_dataset = ImageFolder('some_dir', transform=transform)
</code></pre>
<p>When I print its length it gives: 32854. Now I want to split the <code>Dataset</code> returned by <code>ImageFolder</code> into train and test datasets using <code>torch.utils.data.random_split()</code>. I tried passing fractions <code>[0.8, 0.2]</code> and lengths like <code>[len(full_dataset) - 100, 100]</code>.</p>
<pre><code>train_dataset, test_dataset = torch.utils.data.random_split(full_dataset, [len(full_dataset) - 100, 100])
</code></pre>
<p>But when I print both their length using <code>len(train_dataset.dataset.imgs)</code> and <code>len(test_dataset.dataset.imgs)</code>, they show the same value as <code>full_dataset</code>.</p>
<p>Why is my split not working?</p>
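<p>One likely explanation (hedged — based on how <code>random_split</code> is generally understood to work): it returns <code>Subset</code> objects, and <code>subset.dataset</code> is a reference to the <em>full</em> original dataset, so <code>len(train_dataset.dataset.imgs)</code> will always equal the full length; <code>len(train_dataset)</code> is what reflects the split. A torch-free sketch of that structure:</p>

```python
class Subset:
    """Minimal stand-in for torch.utils.data.Subset (assumed shape)."""
    def __init__(self, dataset, indices):
        self.dataset = dataset   # reference to the FULL dataset
        self.indices = indices   # which items belong to this split
    def __len__(self):
        return len(self.indices)
    def __getitem__(self, i):
        return self.dataset[self.indices[i]]

full = list(range(32854))
train = Subset(full, list(range(32754)))
test = Subset(full, list(range(32754, 32854)))
print(len(train), len(test))   # split sizes: 32754 100
print(len(train.dataset))      # 32854 -- still the full dataset
```

<p>So the split is probably working; the measurement (<code>.dataset.imgs</code>) is just looking past the subset at the underlying dataset.</p>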
| <python><pytorch><dataset> | 2023-11-21 07:12:51 | 1 | 1,027 | let me down slowly |
77,520,904 | 22,466,650 | How to form a group based on nearest days of previous and next weeks? | <p>I was asked to rebuild my previous question, so I deleted it and created this one.</p>
<p>My input is a dataframe with two columns :</p>
<pre><code>df = pd.DataFrame({'DATE': ['Tuesday, November 7, 2023', 'Wednesday, November 8, 2023', 'Thursday, November 9, 2023', 'Friday, November 10, 2023', 'Monday, November 13, 2023', 'Friday, November 17, 2023', 'Sunday, November 19, 2023', 'Monday, November 20, 2023', 'Thursday, November 23, 2023', 'Friday, November 24, 2023'], 'WEEK': [45, 45, 45, 45, 46, 46, 46, 47, 47, 47]})
print(df)
DATE WEEK
0 Tuesday, November 7, 2023 45
1 Wednesday, November 8, 2023 45
2 Thursday, November 9, 2023 45
3 Friday, November 10, 2023 45
4 Monday, November 13, 2023 46
5 Friday, November 17, 2023 46
6 Sunday, November 19, 2023 46
7 Monday, November 20, 2023 47
8 Thursday, November 23, 2023 47
9 Friday, November 24, 2023 47
</code></pre>
<p>Logic: I need to form groups of custom weeks. For example, for the rows with week <code>46</code>, we need to look for the nearest day in weeks <code>45</code> and <code>47</code> and form the group. If a week has no previous week (like week <code>45</code>) we flag it with a missing TAIL, and if a week has no next week (like <code>47</code>) we say it has a missing HEAD. So the pair group/missing will be unique at the end.</p>
<pre><code> DATE WEEK GROUP MISSING
0 Tuesday, November 7, 2023 45 1 TAIL
1 Wednesday, November 8, 2023 45 1 TAIL
2 Thursday, November 9, 2023 45 1 TAIL
3 Friday, November 10, 2023 45 1 TAIL
4 Monday, November 13, 2023 46 1 TAIL
5 Friday, November 17, 2023 46 2 NONE
6 Sunday, November 19, 2023 46 2 NONE
7 Monday, November 20, 2023 47 2 NONE
8 Thursday, November 23, 2023 47 3 HEAD
9 Friday, November 24, 2023 47 3 HEAD
</code></pre>
<p>One detail: my real dataset has different years. This means that the groups must reset (start from 1) for each year.</p>
<p>I started making the groups with the code below, but from the beginning I got shifted groups. I'm not able to fix it, and I also won't be able to create the MISSING column.</p>
<pre><code>df['GROUP'] = ((df['WEEK'].diff() == 1).shift().cumsum() + 1).fillna(1)
</code></pre>
<p>Can you guys help me with that?</p>
| <python><pandas> | 2023-11-21 07:08:46 | 1 | 1,085 | VERBOSE |
77,520,713 | 17,519,895 | How to update weights of a base Keras PointNet model? | <p>I trained a PointNet regression model and then saved its weights by</p>
<pre><code>model.save_weights('/home/rev9ai/aleef/Dainsta/model/beta_v0.weights.ckpt')
</code></pre>
<p>The saved files were</p>
<ul>
<li>beta_v0.weights.ckpt.data-00000-of-00001</li>
<li>beta_v0.weights.ckpt.index</li>
</ul>
<p>Now I want to update a base PointNet model using these weights.
Here is how the model was built using TensorFlow:</p>
<pre><code># Define the PointNet architecture for regression
def build_pointnet_regression_model(num_points=2048):
    inputs = keras.Input(shape=(num_points, 3))

    x = tnet(inputs, 3)
    x = conv_bn(x, 32)
    x = conv_bn(x, 32)
    x = tnet(x, 32)
    x = conv_bn(x, 32)
    x = conv_bn(x, 64)
    x = conv_bn(x, 64)  # additional
    x = conv_bn(x, 64)  # additional
    x = conv_bn(x, 512)
    x = layers.GlobalMaxPooling1D()(x)
    x = dense_bn(x, 256)
    # x = layers.Dropout(0.3)(x)
    # x = dense_bn(x, 128)
    # x = layers.Dropout(0.3)(x)

    # Regression output layer
    outputs = layers.Dense(1, activation="relu")(x)

    model = keras.Model(inputs=inputs, outputs=outputs, name="pointnet_regression")
    return model

# Create the PointNet regression model
model = build_pointnet_regression_model(NUM_POINTS)

# Compile the model for regression
model.compile(
    loss="mean_absolute_error",
    optimizer=keras.optimizers.RMSprop(learning_rate=0.1),
    metrics=["mean_absolute_error"],
)
</code></pre>
| <python><tensorflow><keras><regression><mlmodel> | 2023-11-21 06:25:18 | 1 | 421 | Aleef |
77,520,491 | 21,305,238 | PyCharm cannot infer @cache-d methods' return types | <p>PyCharm's type checker works well with this:</p>
<pre class="lang-py prettyprint-override"><code>from functools import cache
class MyType:
pass
@cache
def f() -> MyType:
...
v = f() # v: MyType
</code></pre>
<p>...but not this:</p>
<pre class="lang-py prettyprint-override"><code>class C:
@cache
def m(self) -> MyType:
...
v = C().m() # v: Any
</code></pre>
<p>What should I do to get good autocompletion?</p>
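<p>One workaround sometimes used for zero-argument methods — not necessarily what PyCharm's checker intends, and with different semantics (per-instance attribute caching instead of argument-keyed caching) — is <code>functools.cached_property</code>, which IDEs typically infer correctly:</p>

```python
from functools import cached_property

class MyType:
    pass

class C:
    @cached_property
    def m(self) -> MyType:       # accessed as an attribute: C().m, no call
        return MyType()

c = C()
v = c.m                          # most type checkers infer MyType here
print(isinstance(v, MyType), c.m is v)   # cached after first access
```

<p>Whether this restores PyCharm's inference in your version is untested; for methods that take arguments, <code>@cache</code> remains the right tool and the inference gap would need a different fix (e.g. an explicit annotation on <code>v</code>).</p>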
| <python><pycharm><python-typing> | 2023-11-21 05:28:54 | 1 | 12,143 | InSync |
77,520,334 | 10,219,156 | how to change the rows and columns in pandas dataframe | <p>I have code like this which creates a dataframe from a dictionary, but the output is different from what is expected. I have given the code below with steps.</p>
<pre><code>options_pnl={'BTC': {'Pnl(since 6pm)': Decimal('7831.52228528'),
'Pnl(4 hr)': Decimal('2930.47450133'),
'Pnl(1 hr)': Decimal('1416.81306308'),
'Volume(since 6pm)': Decimal('24509290.62181862'),
'Volume(4 hr)': Decimal('4504202.83422724'),
'Volume(1 hr)': Decimal('1067850.01837278')},
'ETH': {'Pnl(since 6pm)': Decimal('387.87564823'),
'Pnl(4 hr)': Decimal('-349.14871930'),
'Pnl(1 hr)': Decimal('656.74824550'),
'Volume(since 6pm)': Decimal('10700784.53262117'),
'Volume(4 hr)': Decimal('1968872.36761706'),
'Volume(1 hr)': Decimal('778937.22275036')}}
options_pnl_df = pd.DataFrame.from_dict(options_pnl)
options_pnl_df = options_pnl_df.astype('int64')
options_pnl_df = options_pnl_df.transpose()
display(options_pnl_df)
</code></pre>
<p>and output is like below :</p>
<pre><code> Pnl(since 6pm) Pnl(4 hr) Pnl(1 hr) Volume(since 6pm) Volume(4 hr) Volume(1 hr)
BTC 7831 2930 1416 24509290 4504202 1067850
ETH 387 -349 656 10700784 1968872 778937
</code></pre>
<p>I need to change the structure of the above output to the output given below:</p>
<pre><code> PNL Volume
BTC since 6pm 7831 24509290
BTC since last 4 hr 2930 4504202
BTC last 1 hr 1416 1067850
ETH since 6pm 387 10700784
ETH since last 4 hr -349 1968872
ETH last 1 hr 656 778937
</code></pre>
<p>I tried:</p>
<pre><code>data = []
for asset, values in options_pnl.items():
    for interval, metrics in values.items():
        metric_type, metric_interval = interval.split('(')
        metric_interval = metric_interval.rstrip(')')  # remove trailing ')'
        pnl_value = metrics if metric_type == 'Pnl' else 0
        volume_value = metrics if metric_type == 'Volume' else 0
        data.append([asset, metric_interval, pnl_value, volume_value])

columns = ['Asset', 'Type', 'PNL', 'Volume']
options_pnl_df = pd.DataFrame(data, columns=columns)
print(options_pnl_df)
</code></pre>
<p>but I am not getting the desired output. Could someone produce the desired output from the given dicts?</p>
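<p>A hedged sketch of one way to get there with <code>stack</code> plus a regex split of the column labels (using small integer stand-ins for the <code>Decimal</code> values; the interval ordering within each asset will be alphabetical rather than the exact order shown above):</p>

```python
import pandas as pd

df = pd.DataFrame(
    {
        "Pnl(since 6pm)": [7831, 387], "Pnl(4 hr)": [2930, -349],
        "Pnl(1 hr)": [1416, 656],
        "Volume(since 6pm)": [24509290, 10700784],
        "Volume(4 hr)": [4504202, 1968872], "Volume(1 hr)": [1067850, 778937],
    },
    index=["BTC", "ETH"],
)

# Long format: one row per (asset, metric-label) pair.
long = df.stack().rename("value").reset_index()
long.columns = ["Asset", "metric", "value"]
# Split 'Pnl(4 hr)' into the measure ('Pnl') and the interval ('4 hr').
long[["measure", "Interval"]] = long["metric"].str.extract(r"(\w+)\((.+)\)")
out = (
    long.pivot_table(index=["Asset", "Interval"], columns="measure", values="value")
        .rename(columns={"Pnl": "PNL"})
)
print(out)
```

<p>If the exact row order matters, a categorical ordering on <code>Interval</code> can be imposed afterwards.</p>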
| <python><pandas><dataframe><numpy><dictionary> | 2023-11-21 04:38:40 | 2 | 326 | Madan Raj |
77,520,302 | 5,162,426 | Labelling contours manually via clabel adds spurious new lines to the plot | <p>I have a set of contours made with <code>plt.contourf</code>. I am now trying to label the contours with <code>plt.clabel</code> with <code>manual=True</code>. This opens the plot, and prompts me to click on the figure to add labels. When I do this, new spurious contours are added to the plot, which are not correct given the data. Some labels are drawn correctly, but most are not.</p>
<p>I came across <a href="https://stackoverflow.com/questions/22901365/matplotlib-manually-interactively-picked-contour-label-adds-extra-lines">this old question</a> which seems to be describing the same issue, though an answer there claims that this was identified as a bug and fixed years ago. Has anyone run into this?</p>
<p>Before clicking:</p>
<p><a href="https://i.sstatic.net/Njp4Pm.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Njp4Pm.png" alt="enter image description here" /></a></p>
<p>After a few clicks:</p>
<p><a href="https://i.sstatic.net/cwlsdm.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/cwlsdm.png" alt="enter image description here" /></a></p>
| <python><matplotlib><contour><contourf> | 2023-11-21 04:27:19 | 1 | 3,032 | pretzlstyle |
77,520,193 | 4,038,800 | TTS Tacotron2 Fine-tuning: Missing Layers and No Output Sound | <p>I’m attempting to use <a href="https://github.com/coqui-ai/TTS" rel="nofollow noreferrer">TTS</a> to fine tune a Tacotron2 TTS model. If it makes a difference, I'm using Python 3.9.1 and I'm fine-tuning the latest <code>tts_models--en--ljspeech--tacotron2-DDC</code>.</p>
<p>During the fine-tuning process, when I load the pretrained model the system throws errors indicating <code>Layer missing in the checkpoint.</code> Then it says</p>
<pre><code>| > 81 / 105 layers are restored.
> Model restored from step 278000
> Model has 47669492 parameters
> Number of output frames: 2
</code></pre>
<p>I've checked to see that I'm using the correct model version as per TTS docs and everything else seems to be in order.</p>
<p>Training technically does execute... but the wav file synthesized by the fine-tuned model doesn't contain any audible sound.</p>
<p>Any ideas or suggestions are welcome.</p>
<p>Here's the exact command-line code I'm using:</p>
<pre><code>CUDA_VISIBLE_DEVICES="0" python ./TTS/recipes/ljspeech/tacotron2-DDC/train_tacotron_ddc.py --restore_path ../.local/share/tts/tts_models--en--ljspeech--tacotron2-DDC/model_file.pth --config_path ../.local/share/tts/tts_models--en--ljspeech--tacotron2-DDC/config.json
</code></pre>
| <python><deep-learning><pytorch><text-to-speech> | 2023-11-21 03:46:38 | 0 | 919 | DonCarleone |
77,520,191 | 10,219,156 | How to change dataframe structure in pandas | <p>I have created a dataframe from two dicts like this</p>
<pre><code>import pandas as pd
summary={'BTC': {'DAE': -11661300, 'Vega': -661, 'Theta': 109, 'Gamma': -32391},
'ETH': {'DAE': -533985, 'Vega': -227, 'Theta': 518, 'Gamma': -289779}}
summary_df = pd.DataFrame.from_dict(summary)
summary_df = summary_df.transpose()
display(summary_df)
greeks_6_pm= {'BTC': {'DAE': -11725765, 'Vega': -567, 'Theta': 79, 'Gamma': 85512},
'ETH': {'DAE': -676607, 'Vega': -15, 'Theta': 186, 'Gamma': -78159}}
change_of_greeks_df = pd.DataFrame(summary).sub(pd.DataFrame(greeks_6_pm)).transpose().add_prefix("△")
pnl_summary = pd.merge(summary_df, change_of_greeks_df, left_index=True, right_index=True)
</code></pre>
<p>I'm getting output like:</p>
<pre><code> DAE Vega Theta Gamma △DAE △Vega △Theta △Gamma
BTC -11661300 -661 109 -32391 64465 -94 30 -117903
ETH -533985 -227 518 -289779 142622 -212 332 -211620
</code></pre>
<p>but I need output like:</p>
<pre><code>Currency Dae Vega theta gamma
BTC -11661300 -661 109 -32391
△BTC 64465 -94 30 -117903
ETH -533985 -227 518 -289779
△ETH 142622 -212 332 -211620
</code></pre>
<p>tried with</p>
<pre><code>pnl_summary = pd.concat([summary_df, change_of_greeks_df], axis=0, keys=['', 'Changed']).swaplevel(axis=0)
display(pnl_summary)
</code></pre>
<p>but getting output like</p>
<pre><code> DAE Vega Theta Gamma △DAE △Vega △Theta △Gamma
BTC -11,661,300 -661 109 -32,391 nan nan nan nan
ETH -533,985 -227 518 -289,779 nan nan nan nan
BTC Changed nan nan nan nan 64,465 -94 30 -117,903
ETH Changed nan nan nan nan 142,622 -212 332 -211,620
</code></pre>
<p>How do I get the desired output dataframe?</p>
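<p>A hedged sketch (trimmed to two columns for brevity): strip the △ prefix from the change columns so both frames share column names, move the △ marker onto the index instead, then interleave the rows with a stable sort keyed on the asset name:</p>

```python
import pandas as pd

summary_df = pd.DataFrame(
    {"DAE": [-11661300, -533985], "Vega": [-661, -227]},
    index=["BTC", "ETH"],
)
change_df = pd.DataFrame(
    {"△DAE": [64465, 142622], "△Vega": [-94, -212]},
    index=["BTC", "ETH"],
)

# Move the △ marker from the columns onto the index so the frames align.
delta = change_df.rename(columns=lambda c: c.lstrip("△"))
delta.index = "△" + delta.index

# Stable sort on the asset name (ignoring △) interleaves BTC/△BTC, ETH/△ETH.
out = pd.concat([summary_df, delta]).sort_index(
    key=lambda idx: idx.str.lstrip("△"), kind="stable"
)
print(out.index.tolist())
```

<p>The desired example labels the change rows <code>△BTC</code>/<code>△ETH</code>; if plain unprefixed change rows are wanted instead, skip the index renaming and use a MultiIndex.</p>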
| <python><pandas><dataframe><numpy><dictionary> | 2023-11-21 03:46:28 | 1 | 326 | Madan Raj |
77,520,074 | 9,443,671 | How can I effectively split text coming from a stream where the output is not necessarily new words but might be letters | <p>I have a stream of text where the next item coming in is all the previously generated text plus some additional text, which may be a new word, a letter, or part of a word. I'm trying to split/arrange the text in a way that allows me to fully reconstruct it as it's streaming. Here's an example:</p>
<p><code>original text = "this is some prepended text which keeps showing up with the stream"</code></p>
<p>and now let's say the stream comes in like this. Notice the commented string: it ends mid-word, with a letter that is continued in the next piece of text rather than being a stand-alone letter:</p>
<pre><code>import time

def dummy_generator():
    yield "I'm sorry to hear that you're feeling sad"
    time.sleep(0.2)
    yield '. Here are some suggestions that may help'
    time.sleep(0.2)
    yield 'uplift your mood:\n\n1. Take a walk outside'
    time.sleep(0.2)
    yield 'Fresh air and natural surroundings can do w'  # ends with a letter that is continued in the next piece of text
    time.sleep(0.2)
    yield 'onders for your mood.\n\n2. Listen to music'
    time.sleep(0.2)
    yield 'Music has a powerful effect on our emotions'
    time.sleep(0.2)
    yield 'and can help lift our spirits.\n\n3. Practice'
    time.sleep(0.2)
    yield 'mindfulness: Taking a few minutes to focus on'
</code></pre>
<p>Given this, is there a way to effectively split the text into the correct words as it's being generated?</p>
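<p>A standard sketch for this is to buffer the stream and only emit words once a whitespace boundary confirms they are complete, keeping the trailing fragment for the next chunk. Note the assumption: this correctly joins words split across chunks (like <code>w</code> + <code>onders</code>), but it cannot recover a space that the stream never emitted between chunks (as between <code>outside</code> and <code>Fresh</code> above) — that would need a dictionary or the upstream model to fix.</p>

```python
def words_from_stream(chunks):
    """Yield complete words from an incremental text stream."""
    buffer = ""
    for chunk in chunks:
        buffer += chunk
        parts = buffer.split(" ")
        # Everything before the last space is safely a complete word.
        for word in parts[:-1]:
            if word:
                yield word
        # The trailing fragment may still be mid-word; keep it buffered.
        buffer = parts[-1]
    if buffer:
        yield buffer

print(list(words_from_stream(["hel", "lo wor", "ld and on"])))
```

<p>For real LLM streams you would split on all whitespace (and perhaps punctuation), not just single spaces, but the buffering idea is the same.</p>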
| <python><string><text><stream> | 2023-11-21 03:03:14 | 1 | 687 | skidjoe |
77,519,992 | 5,482,999 | Query a collection by an array of document ids with python | <p>I have a list of ids I want to query from a collection.
If I use:</p>
<pre><code>query = firebase_client.collection('my_collection').where(
field_path=field_path.FieldPath.document_id(), op_string='in', value=list_of_ids
).stream()
</code></pre>
<p>This works, but gives me the warning:</p>
<blockquote>
<p>UserWarning: Detected filter using positional arguments. Prefer using
the 'filter' keyword argument instead. return
query.where(field_path, op_string, value)</p>
</blockquote>
<p>I know that if I wanted to query by document field I should use:</p>
<pre><code>query = firebase_client.collection('my_collection').where(
filter=FieldFilter('field', '==', value)
).stream()
</code></pre>
<p>What is the proper way to query a collection by a list of document ids in order to avoid the warning?</p>
<p>When using:</p>
<pre><code>filter=FieldFilter(field_path.FieldPath.document_id(), op_string='in', value=list_of_ids)
</code></pre>
<p>The script crashes when trying to read the snapshot in a for loop.</p>
<blockquote>
<p>grpc._channel._MultiThreadedRendezvous: <_MultiThreadedRendezvous of
RPC that terminated with:
status = StatusCode.INVALID_ARGUMENT
details = "<strong>key</strong> filter value must be a Key"
debug_error_string = "UNKNOWN:Error received from peer ipv4:142.250.64.138:443
{created_time:"2023-11-20T21:50:49.381633-06:00", grpc_status:3,
grpc_message:"<strong>key</strong> filter value must be a Key"}"</p>
</blockquote>
<p>with at the end of the crash message</p>
<blockquote>
<p>google.api_core.exceptions.InvalidArgument: 400 <strong>key</strong> filter value
must be a Key</p>
</blockquote>
| <python><firebase><google-cloud-firestore><google-cloud-python> | 2023-11-21 02:36:10 | 0 | 1,924 | Guanaco Devs |
77,519,970 | 12,931,358 | How to create a huggingface dataset and import from a list? | <p>I have a list and I want to convert it to a huggingface dataset for training a model. I followed some tips and here is my code:</p>
<pre><code>import torch
from datasets import Dataset

class MkqaChineseDataset(Dataset):
    def __init__(self, data):
        # super().__init__()  # adding this raises: TypeError: __init__() missing 1 required positional argument: 'arrow_table'
        self.data = data

    def __len__(self):
        return len(self.data)

    def __getitem__(self, idx):
        sample = self.data[idx]
        return {
            "input_ids": sample["input_ids"],
            "attention_mask": sample["attention_mask"],
            "labels": sample["input_ids"]
        }

buffer_test = [
    {'input_ids': torch.Tensor([9437, 29, 210]), 'attention_mask': torch.Tensor([1, 1, 1])},
    {'input_ids': torch.Tensor([37, 9, 211]), 'attention_mask': torch.Tensor([1, 1, 1])},
    {'input_ids': torch.Tensor([937, 19, 212]), 'attention_mask': torch.Tensor([1, 1, 1])}
]
print(buffer_test)

mkqa = MkqaChineseDataset(buffer_test)
res = isinstance(mkqa, Dataset)
print(res)
</code></pre>
<p>However, it shows attributes error:</p>
<pre><code> self.data = data
AttributeError: can't set attribute
</code></pre>
| <python><huggingface-datasets> | 2023-11-21 02:27:56 | 1 | 2,077 | 4daJKong |
77,519,960 | 179,234 | .deployment file not working while doing zip deployment of an azure app | <p>I have an application that I am trying to deploy to an existing app on an offline server.</p>
<p>The application has two parts: a Node.js frontend and a Python backend app. I use the zip deployment method to wrap everything and use it on the server (explained here <a href="https://learn.microsoft.com/en-us/azure/app-service/deploy-run-package" rel="nofollow noreferrer">https://learn.microsoft.com/en-us/azure/app-service/deploy-run-package</a>)</p>
<p>I want to create a ".deployment" file to run the offline installation for the python libraries I downloaded from pip. The method of creating a .deployment is explained here <a href="https://github.com/projectkudu/kudu/wiki/Custom-Deployment-Script" rel="nofollow noreferrer">https://github.com/projectkudu/kudu/wiki/Custom-Deployment-Script</a></p>
<p>I created a startup.sh file that contains the commands that I need to run, which is basically as below:</p>
<pre><code>cd backend
pip install --no-index --find-links downloaded_pip -r requirements.txt
gunicorn --bind=0.0.0.0:8081 --timeout 600 app:app
</code></pre>
<p>and then I created .deployment file that contains the below</p>
<pre><code>[config]
command = startup.sh
</code></pre>
<p>I put this in the zip file, with .deployment and startup.sh at the root, and deploy. It doesn't seem to do anything with .deployment: I actually have to go to the shell and run those commands manually after each deployment for the libraries to be installed.</p>
<p>I have no idea what I am doing wrong.</p>
| <python><azure-web-app-service><azure-deployment><offlineapps> | 2023-11-21 02:23:05 | 0 | 1,417 | Sarah |
77,519,883 | 1,348,798 | Is there a way to group by list in pandas? | <p>I have a set of data in pandas that looks like this. I want to group by membership in a list, such that:</p>
<pre><code>column1 column2 column3
1 14 P22
3 43 P65
4 43 141
3 14 146
5 43 P51
</code></pre>
<p>becomes:</p>
<pre><code>column1 column2 column3
1 14 P22
3 43 P65
column1 column2 column3
4 43 141
3 14 146
5 43 P51
</code></pre>
<p>because the 141, 146 and P51 codes are all part of the same list and should be grouped that way. How can I accomplish this in pandas?</p>
| <python><pandas> | 2023-11-21 01:57:16 | 2 | 329 | Marc |
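A hedged sketch of one way to express "group by membership in a list" in pandas (the column values come from the sample above; the code list itself is an assumption, since the question does not show where the lists are defined): map each row to a membership flag and group on that derived key.

```python
import pandas as pd

df = pd.DataFrame({
    "column1": [1, 3, 4, 3, 5],
    "column2": [14, 43, 43, 14, 43],
    "column3": ["P22", "P65", "141", "146", "P51"],
})

# Hypothetical list of codes that "are all part of the same list".
same_list = {"141", "146", "P51"}

# Derive a grouping key from list membership, then group on it.
membership = df["column3"].isin(same_list)
parts = {in_list: sub for in_list, sub in df.groupby(membership)}
```

`parts[True]` then holds the rows whose codes are in the list and `parts[False]` the rest; with several lists, `Series.map` over a code-to-list-name dict would produce a multi-group key the same way.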
77,519,881 | 10,378,232 | How to guarantee the value in multiple processes is increased or decreased correctly in python? | <p><strong>My needs</strong></p>
<p>At present there are several groups of data, each of which runs a computation-type task. Every time a group finishes, Java is notified and the generated result file is passed to it. When all tasks have finished (that is, when the last group of data has been executed), I tell Java that all the data in this round is done, and the Java side then performs the database-entry operation for the whole batch.</p>
<p><strong>My assumption</strong></p>
<p>Maintain an integer value shared between multiple processes, increment it by 1 whenever a group of data finishes, and compare this integer with the total number of tasks inside the method that notifies Java; when they are equal, everything is finished. (This does not take into account a calculation task failing.)</p>
<p><strong>The situation faced</strong></p>
<p>I declare an integer value through the <code>Manager</code> in the <code>multiprocessing</code> module and pass it to each process. When incrementing it, multiple processes read the same value (see the output below, or run the demo yourself), so the read-modify-write is not atomic. I tried adding a lock, but it did not work.</p>
<p><strong>This is my small demo</strong></p>
<pre class="lang-py prettyprint-override"><code>from concurrent.futures import ProcessPoolExecutor
import ctypes
from multiprocessing import Manager, Lock
from multiprocessing.managers import ValueProxy
import os

m = Manager().Value(ctypes.c_int, 0)


def calc_number(x: int, y: int, _m: "ValueProxy", total_tasks: int):
    """simulate the computation-type tasks"""
    # simulate the calculation
    res = x**y
    # Lock does not work
    with Lock():
        # self-increment
        _m.value += 1
        # compare this integer value with the total number of tasks, equal means all finished.
        if _m.value == total_tasks:
            print(True)
    print(f"m_value: {_m.value}, p_id: {os.getpid()}, res: {res}")


def main():
    # there are 8 groups of tasks
    t1 = (100, 200, 300, 400, 500, 600, 700, 800)
    t2 = (80, 70, 60, 50, 40, 30, 20, 10)
    len_t = len(t1)
    # tasks are executed by multiple processes
    with ProcessPoolExecutor(max_workers=len_t) as executor:
        {executor.submit(calc_number, x, y, m, len_t) for x, y in zip(t1, t2)}


if __name__ == "__main__":
    main()
</code></pre>
<p>Then the output:</p>
<pre><code>m_value: 2, p_id: 14873, res: 118059162071741130342400000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000
m_value: 2, p_id: 14877, res: 12676506002282294014967032053760000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000
m_value: 3, p_id: 14875, res: 42391158275216203514294433201000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000
m_value: 3, p_id: 14872, res: 10000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000
m_value: 4, p_id: 14883, res: 797922662976120010000000000000000000000000000000000000000
m_value: 5, p_id: 14879, res: 909494701772928237915039062500000000000000000000000000000000000000000000000000000000000000000000000000000000
m_value: 5, p_id: 14881, res: 221073919720733357899776000000000000000000000000000000000000000000000000000000000000
m_value: 6, p_id: 14885, res: 107374182400000000000000000000
</code></pre>
<p>Note that the correct output should show <code>m_value</code> printed in sequence as <code>1 2 3 4 5 6 7 8</code>, but...</p>
<p>So what did I do wrong, hoping to get help here. THX.</p>
| <python><python-3.x><parallel-processing><multiple-processes> | 2023-11-21 01:56:08 | 1 | 412 | Runstone |
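For contrast, a hedged sketch of the usual way to get an atomic cross-process counter (this swaps the Manager proxy from the question for a shared-memory `multiprocessing.Value`; it is not the asker's code): a `Value` carries its own lock, and incrementing under `get_lock()` serialises the read-increment-write sequence. The `"fork"` context is an assumption that keeps the sketch self-contained on Unix; on Windows the default context plus an `if __name__ == "__main__"` guard would be needed.

```python
import multiprocessing as mp

ctx = mp.get_context("fork")  # Unix-only assumption; see note above


def bump(counter, times):
    for _ in range(times):
        # get_lock() returns the lock guarding this shared int; holding it
        # makes the read-increment-write sequence atomic across processes.
        with counter.get_lock():
            counter.value += 1


def run_demo(n_procs=4, times=1000):
    counter = ctx.Value("i", 0)
    procs = [ctx.Process(target=bump, args=(counter, times)) for _ in range(n_procs)]
    for p in procs:
        p.start()
    for p in procs:
        p.join()
    return counter.value


print(run_demo())  # 4000: no increments are lost
```

The demo in the question loses increments because `with Lock():` constructs a brand-new lock in each worker, so the workers never contend on the same lock object; sharing one lock (or one `Value` with its built-in lock) is what makes the increment atomic.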
77,519,830 | 13,178,529 | Breaking values of multiple of 20 (20,40,60,80...) into candles of 20 (Stocks) | <p>I have the following logic which reads data from a stock exchange file and builds an Excel data table or a chart.</p>
<p>I'm building a graphic like the following:
<a href="https://i.sstatic.net/q2XPg.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/q2XPg.png" alt="enter image description here" /></a></p>
<p>As you can see in the image, I have this green candle which varied 50 points at once.
In my logic, I managed to create a <code>multipleTimes</code> value, which detects when this unusual behavior happens.</p>
<p>What I need to do is break this 50-point candle into 2 candles of 20, ignoring the 10 remaining points.<br>
So for example,</p>
<ol>
<li>If I have a variation of 60, it will break into 3 candles of 20.</li>
<li>If I have a variation of 75, it will break into 3 candles of 20 and ignore the 15 points.</li>
</ol>
<p>Making it look like this (23, 24):</p>
<p><a href="https://i.sstatic.net/WqKnF.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/WqKnF.png" alt="enter image description here" /></a></p>
<p>The file looks like this and refers to the stock trade WINZ23; the last column holds the ticks, which normally move in steps of 5 points, but in some cases can vary 50, 70, or more points at once, which causes my error (<a href="https://wetransfer.com/downloads/f70a02c57b8c93a9009b4cb5a6b53c6220231120222005/1f909e" rel="nofollow noreferrer">https://wetransfer.com/downloads/f70a02c57b8c93a9009b4cb5a6b53c6220231120222005/1f909e</a>):</p>
<pre><code>WINZ23|2023-10-25 09:00:58|115240
WINZ23|2023-10-25 09:00:58|115240
WINZ23|2023-10-25 09:00:58|115290
WINZ23|2023-10-25 09:00:58|115290
WINZ23|2023-10-25 09:00:58|115275
WINZ23|2023-10-25 09:00:58|115275
WINZ23|2023-10-25 09:00:58|115250
WINZ23|2023-10-25 09:00:58|115250
</code></pre>
<pre><code># import socket, base64
import re
import sys
import xlsxwriter
from operator import neg
from datetime import datetime

# All interactions
MAPPED = []
CANDLES_ERROR = []
CANDLES_ACERTO = []
CANDLE_PAST_MAPPED = []
FIRST = True
GET_FIRST_TICK = True
RESET = True
READ_INPUT = True

# Logic
INDEX = 1
CLOSE = 0
CLOSE_REDUZIDO = 0
OPEN = 0
MAXIMO = 0
MINIMO = sys.maxsize
SINALIZADOR = ''
RANGE = 0
CANDLE_TICK_FIX = 0
CANDLE_VALOR_FIX = 0

# Previous-candle data
MAXIMO_PASSADO = 0
MINIMO_PASSADO = 0
ABERTURA_PASSADO = 0
CLOSE_PASSADO = 0

# Excel
# file_to_open = './example.txt'
file_to_open = './TickByTick/2023-10-25.txt'
workbook = xlsxwriter.Workbook('new2.xlsx')
worksheet = workbook.add_worksheet('firstSheet')

# Analytics
ANALYTICS_SINALIZADOR = ''


class Box():
    def __init__(self, index, horario, close, abertura, close_one, maximo, minimo, high_one, low_one, open_one, close_reduzido, range, analytics):
        self.index = index
        self.horario = horario
        self.close = close
        self.abertura = abertura
        self.close_one = close_one
        self.maximo = maximo
        self.minimo = minimo
        self.high_one = high_one
        self.low_one = low_one
        self.open_one = open_one
        self.close_reduzido = close_reduzido
        self.range = range
        self.analytics = analytics


class CandleBox():
    def __init__(self, asset, index, close, abertura, maximo, minimo):
        self.asset = asset
        self.index = index
        self.close = close
        self.abertura = abertura
        self.maximo = maximo
        self.minimo = minimo


def maximo_e_minimo():
    global MAXIMO, MINIMO
    if (CLOSE > MAXIMO):
        MAXIMO = CLOSE
    if (CLOSE < MINIMO):
        MINIMO = CLOSE


# Store only finished candles
def add_to_candle_box_map(asset):
    global PAST_MAPPED_DATA
    CANDLE_PAST_MAPPED.append(CandleBox(asset, INDEX, CLOSE, OPEN, MAXIMO, MINIMO))


# Stores all candles
def add_to_map(horario):
    global MAPPED
    MAPPED.append(Box(INDEX, horario, CLOSE, OPEN, CANDLE_PAST_MAPPED[INDEX].close, MAXIMO, MINIMO, CANDLE_PAST_MAPPED[INDEX].maximo, CANDLE_PAST_MAPPED[INDEX].minimo, CANDLE_PAST_MAPPED[INDEX].abertura, CLOSE_REDUZIDO, RANGE, ANALYTICS_SINALIZADOR))


def reset_data_after_box_create():
    global RANGE, INDEX, MAXIMO, MINIMO
    RANGE = 0
    INDEX += 1
    MAXIMO = 0
    MINIMO = sys.maxsize


def count_multiples_of_20(end):
    count = 0
    for value in range(1, end + 1):
        if value % 20 == 0:
            count += 1
    return count


def create_candle(asset):
    global OPEN, CLOSE, CANDLE_TICK_FIX
    # Compute the high and low
    maximo_e_minimo()
    if OPEN != 0 and CLOSE >= (OPEN + 20):
        add_to_map(asset)
        add_to_candle_box_map(asset)
        reset_data_after_box_create()
        OPEN = CLOSE
    elif OPEN != 0 and CLOSE <= (OPEN - 20):
        add_to_map(asset)
        add_to_candle_box_map(asset)
        reset_data_after_box_create()
        OPEN = CLOSE
    else:
        add_to_map(asset)


def handle_box(asset, valor):
    global CLOSE, CLOSE_REDUZIDO, FIRST, OPEN, RESET, FIRST, ANALYTICS_SINALIZADOR, CANDLE_TICK_FIX, CANDLE_VALOR_FIX
    CLOSE_REDUZIDO = valor - OPEN
    multipleTimes = count_multiples_of_20(abs(CLOSE_REDUZIDO))
    # IF CLOSE_REDUZIDO IS multiple
    # 40 -> create 2 candles
    # 60 -> create 3 candles
    # 80 -> create 4 candles
    # some logic here
    if multipleTimes > 1 and INDEX > 1:
        for x
    if abs(CLOSE_REDUZIDO) > 20 and INDEX > 1:
        if CLOSE_REDUZIDO < -20:
            CANDLE_TICK_FIX = (abs(CLOSE_REDUZIDO) - 20)
        elif CLOSE_REDUZIDO > 20:
            CANDLE_TICK_FIX = (abs(CLOSE_REDUZIDO) - 20)
    CLOSE = valor + CANDLE_TICK_FIX
    # Mark as the first run
    if FIRST == True:
        OPEN = CLOSE
        FIRST = False
        add_to_candle_box_map(asset)
        add_to_candle_box_map(asset)
    create_candle(asset)
    CANDLE_TICK_FIX = 0
    ANALYTICS_SINALIZADOR = ''


try:
    with open(file_to_open, 'r') as arquivo:
        # Iterate over the lines of the file
        for linha in arquivo:
            # Split the line into parts using the '|' character
            partes = linha.split('|')
            # Check that the line has the correct format (it should have 6 parts)
            if len(partes) >= 3:
                # Extract the values from the parts
                codigo = partes[0]
                data = partes[1]
                valor1 = partes[2]
                handle_box(data, int(valor1))
                time = datetime.strptime(data, '%Y-%m-%d %H:%M:%S').time()
                if time == datetime.strptime('9:10:00', '%H:%M:%S').time():
                    raise KeyboardInterrupt
except KeyboardInterrupt:
    if (len(sys.argv) > 1 and sys.argv[1] == 'txt'):
        print('TXT Wrote')
        with open('candle-25.csv', 'w') as file:
            for index, line in enumerate(CANDLE_PAST_MAPPED):
                if index == 0:
                    writable = 'Asset,Index,Open,High,Low,Close\n'
                else:
                    writable = f'{line.asset},{line.index},{line.abertura},{line.maximo},{line.minimo},{line.close}\n'
                file.write(writable)
    else:
        worksheet.write(0, 0, 'Iteração')
        worksheet.write(0, 1, 'Candle')
        worksheet.write(0, 2, 'Horario')
        worksheet.write(0, 3, 'Abertura')
        worksheet.write(0, 4, 'Close')
        worksheet.write(0, 5, 'Close[1]')
        worksheet.write(0, 6, 'High')
        worksheet.write(0, 7, 'Low')
        worksheet.write(0, 8, 'High[1]')
        worksheet.write(0, 9, 'Low[1]')
        worksheet.write(0, 10, 'Open[1]')
        worksheet.write(0, 11, 'CloseReduzido')
        worksheet.write(0, 12, 'Range')
        worksheet.write(0, 13, 'Analytics')
        # correct candles
        worksheet.write(0, 17, 'Candles acerto')
        worksheet.write(1, 17, f'{CANDLES_ACERTO}')
        # error candles
        worksheet.write(2, 17, 'Candles error')
        worksheet.write(3, 17, f'{CANDLES_ERROR}')
        print(f"""
        Tamanho: {INDEX}
        """)
        for index, entry in enumerate(MAPPED):
            if entry.close_reduzido >= 20:
                cell_format = workbook.add_format({'bg_color': '#90EE90'})
            elif entry.close_reduzido <= -20:
                cell_format = workbook.add_format({'bg_color': '#FFC7CE'})
            else:
                cell_format = None
            worksheet.write(index+1, 0, str(index))
            worksheet.write(index+1, 1, entry.index)
            worksheet.write(index+1, 2, entry.horario)
            worksheet.write(index+1, 3, entry.abertura)
            worksheet.write(index+1, 4, entry.close)
            worksheet.write(index+1, 5, entry.close_one)
            worksheet.write(index+1, 6, entry.maximo)
            worksheet.write(index+1, 7, entry.minimo)
            worksheet.write(index+1, 8, entry.high_one)
            worksheet.write(index+1, 9, entry.low_one)
            worksheet.write(index+1, 10, entry.open_one)
            worksheet.write(index+1, 11, entry.close_reduzido, cell_format)
            worksheet.write(index+1, 12, entry.range)
            worksheet.write(index+1, 13, entry.analytics)
        workbook.close()
except Exception as e:
    print('Error', e)
</code></pre>
<p>For the chart:</p>
<pre><code>import matplotlib.pyplot as plt
import pandas as pd
from io import StringIO
import mplfinance as mpf
file = 'candle-25.csv'
data = pd.read_csv(file)
data.Asset = pd.to_datetime(data.Asset)
data = data.set_index('Asset')
mpf.plot(data, figratio=(20,12), type='candle', tight_layout=True, title='Candle dia 25', style='yahoo')
</code></pre>
<p>But as you can see in this Excel image, the values in <code>abertura</code> do not vary by 20 points at close.
<a href="https://i.sstatic.net/draPo.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/draPo.png" alt="enter image description here" /></a></p>
<p>This is an example of 4pt graphic in profit:
<a href="https://i.sstatic.net/JMziB.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/JMziB.png" alt="enter image description here" /></a></p>
| <python><python-3.x><charts><logic><stock> | 2023-11-21 01:33:09 | 0 | 1,200 | Nilton Schumacher F |
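The missing piece under <code># some logic here</code> can be isolated into a small pure function. A hedged sketch (function and parameter names are made up for illustration) that breaks a variation into as many full 20-point candles as fit, discarding the remainder, matching the 60 → three candles and 75 → three candles examples in the question:

```python
def split_into_candles(variation, candle_size=20):
    """Break a price variation into full candles of `candle_size`.

    The remainder that does not fill a whole candle is ignored,
    e.g. 75 -> [20, 20, 20] and -50 -> [-20, -20].
    """
    full_candles = abs(variation) // candle_size
    sign = 1 if variation >= 0 else -1
    return [sign * candle_size] * full_candles


print(split_into_candles(60))   # [20, 20, 20]
print(split_into_candles(75))   # [20, 20, 20]; the 15 leftover points are dropped
print(split_into_candles(-50))  # [-20, -20]
```

Inside `handle_box`, each element of the returned list would then open and immediately close one synthetic candle (i.e. call the existing candle-creation path once per element) instead of applying the whole move at once.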
77,519,523 | 15,476,955 | Missing convert_trajectory_transformer_original_pytorch_checkpoint_to_pytorch.py (notebook/Jupyter/transformers) | <p>I tried to install the transformers package from a notebook, and it failed:</p>
<p><code>!pip install transformers</code></p>
<p>I get this:</p>
<pre><code>FULLTRACE:
Collecting transformers
Obtaining dependency information for transformers from https://files.pythonhosted.org/packages/12/dd/f17b11a93a9ca27728e12512d167eb1281c151c4c6881d3ab59eb58f4127/transformers-4.35.2-py3-none-any.whl.metadata
Using cached transformers-4.35.2-py3-none-any.whl.metadata (123 kB)
Requirement already satisfied: filelock in c:\users\x\appdata\local\packages\pythonsoftwarefoundation.python.3.11_qbz5n2kfra8p0\localcache\local-packages\python311\site-packages (from transformers) (3.13.1)
Requirement already satisfied: huggingface-hub<1.0,>=0.16.4 in c:\users\x\appdata\local\packages\pythonsoftwarefoundation.python.3.11_qbz5n2kfra8p0\localcache\local-packages\python311\site-packages (from transformers) (0.17.3)
Requirement already satisfied: numpy>=1.17 in c:\users\x\appdata\local\packages\pythonsoftwarefoundation.python.3.11_qbz5n2kfra8p0\localcache\local-packages\python311\site-packages (from transformers) (1.25.2)
Requirement already satisfied: packaging>=20.0 in c:\users\x\appdata\local\packages\pythonsoftwarefoundation.python.3.11_qbz5n2kfra8p0\localcache\local-packages\python311\site-packages (from transformers) (23.2)
Requirement already satisfied: pyyaml>=5.1 in c:\users\x\appdata\local\packages\pythonsoftwarefoundation.python.3.11_qbz5n2kfra8p0\localcache\local-packages\python311\site-packages (from transformers) (6.0.1)
Requirement already satisfied: regex!=2019.12.17 in c:\users\x\appdata\local\packages\pythonsoftwarefoundation.python.3.11_qbz5n2kfra8p0\localcache\local-packages\python311\site-packages (from transformers) (2023.10.3)
Requirement already satisfied: requests in c:\users\x\appdata\local\packages\pythonsoftwarefoundation.python.3.11_qbz5n2kfra8p0\localcache\local-packages\python311\site-packages (from transformers) (2.31.0)
Requirement already satisfied: tokenizers<0.19,>=0.14 in c:\users\x\appdata\local\packages\pythonsoftwarefoundation.python.3.11_qbz5n2kfra8p0\localcache\local-packages\python311\site-packages (from transformers) (0.14.1)
Requirement already satisfied: safetensors>=0.3.1 in c:\users\x\appdata\local\packages\pythonsoftwarefoundation.python.3.11_qbz5n2kfra8p0\localcache\local-packages\python311\site-packages (from transformers) (0.4.0)
Requirement already satisfied: tqdm>=4.27 in c:\users\x\appdata\local\packages\pythonsoftwarefoundation.python.3.11_qbz5n2kfra8p0\localcache\local-packages\python311\site-packages (from transformers) (4.66.1)
Requirement already satisfied: fsspec in c:\users\x\appdata\local\packages\pythonsoftwarefoundation.python.3.11_qbz5n2kfra8p0\localcache\local-packages\python311\site-packages (from huggingface-hub<1.0,>=0.16.4->transformers) (2023.10.0)
Requirement already satisfied: typing-extensions>=3.7.4.3 in c:\users\x\appdata\local\packages\pythonsoftwarefoundation.python.3.11_qbz5n2kfra8p0\localcache\local-packages\python311\site-packages (from huggingface-hub<1.0,>=0.16.4->transformers) (4.5.0)
Requirement already satisfied: colorama in c:\users\x\appdata\local\packages\pythonsoftwarefoundation.python.3.11_qbz5n2kfra8p0\localcache\local-packages\python311\site-packages (from tqdm>=4.27->transformers) (0.4.6)
Requirement already satisfied: charset-normalizer<4,>=2 in c:\users\x\appdata\local\packages\pythonsoftwarefoundation.python.3.11_qbz5n2kfra8p0\localcache\local-packages\python311\site-packages (from requests->transformers) (3.1.0)
Requirement already satisfied: idna<4,>=2.5 in c:\users\x\appdata\local\packages\pythonsoftwarefoundation.python.3.11_qbz5n2kfra8p0\localcache\local-packages\python311\site-packages (from requests->transformers) (3.4)
Requirement already satisfied: urllib3<3,>=1.21.1 in c:\users\x\appdata\local\packages\pythonsoftwarefoundation.python.3.11_qbz5n2kfra8p0\localcache\local-packages\python311\site-packages (from requests->transformers) (2.0.3)
Requirement already satisfied: certifi>=2017.4.17 in c:\users\x\appdata\local\packages\pythonsoftwarefoundation.python.3.11_qbz5n2kfra8p0\localcache\local-packages\python311\site-packages (from requests->transformers) (2023.5.7)
Using cached transformers-4.35.2-py3-none-any.whl (7.9 MB)
Installing collected packages: transformers
</code></pre>
<pre><code>ERROR: Could not install packages due to an OSError: [Errno 2] No such file or directory: 'C:\\Users\\x\\AppData\\Local\\Packages\\PythonSoftwareFoundation.Python.3.11_qbz5n2kfra8p0\\LocalCache\\local-packages\\Python311\\site-packages\\transformers\\models\\deprecated\\trajectory_transformer\\convert_trajectory_transformer_original_pytorch_checkpoint_to_pytorch.py'
HINT: This error might have occurred since this system does not have Windows Long Path support enabled. You can find information on how to enable this at https://pip.pypa.io/warnings/enable-long-paths
[notice] A new release of pip is available: 23.2.1 -> 23.3.1
[notice] To update, run: C:\Users\x\AppData\Local\Microsoft\WindowsApps\PythonSoftwareFoundation.Python.3.11_qbz5n2kfra8p0\python.exe -m pip install --upgrade pip
</code></pre>
<p>If you need more details, I'm happy to provide them!</p>
| <python><jupyter-notebook><huggingface-transformers> | 2023-11-20 23:35:26 | 1 | 1,168 | Utopion |
77,519,494 | 3,842,845 | CSV file not processing in concatenation if not saved manually | <p>I am trying to run Python code that grabs data from daily csv files and processes it to create a dataframe that meets the requirement.</p>
<p>This is code that runs <strong>fine</strong> (if I open the csv file, <strong>save</strong> it manually, and run):</p>
<pre><code>import re
import pandas as pd
from io import StringIO


def read_block(names, igidx=True):
    with open("Test_11_17_2023.csv") as f:
        pat = r"(\w+),+$\n[^,]+.+?\n,+\n(.+?)(?=\n,{2,})"
        return pd.concat([
            pd.read_csv(StringIO(m.group(2)), skipinitialspace=True)
              .iloc[:, 1:].dropna(how="all") for m in re.finditer(
                  pat, f.read(), flags=re.M|re.S) if m.group(1) in names
        ], keys=names, ignore_index=igidx)


df = read_block(names=["Admissions", "Readmissions"],
                igidx=False).droplevel(1).reset_index(names="block")
print(df)
</code></pre>
<p>If I run code that just reads the data, there is no issue (even if the file has not been opened and saved manually):</p>
<pre><code>import pandas as pd
df = pd.read_csv("Test_11_17_2023.csv")
print(df)
</code></pre>
<p>First, I thought it was just the security setting on the csv file, but even after I checked "Unblock", my Python code was still not able to process it in the <code>concat</code>/regex step.</p>
<p><a href="https://i.sstatic.net/jbHFd.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/jbHFd.png" alt="enter image description here" /></a></p>
<p>But whenever I get a new csv file (i.e., one not yet saved manually), it raises this error in <code>concat</code>.</p>
<p><a href="https://i.sstatic.net/YVuqL.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/YVuqL.png" alt="enter image description here" /></a></p>
<p>My question is, is there anything making it difficult to concatenate data if csv file is not saved? I am not sure it is just general concat issue or Regex issue.</p>
<p>How do I modify the existing code to run without concat error issue?</p>
<p>This is how it looks when I originally get the csv file.
I generated the file in csv format from a txt file.</p>
<p><a href="https://i.sstatic.net/PkFiY.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/PkFiY.png" alt="enter image description here" /></a></p>
<p>Bottom is the result from comparing both files (original (doesn't work) and saved one (works)) using an app called <a href="https://meld.app/" rel="nofollow noreferrer">Meld</a>:</p>
<p><a href="https://i.sstatic.net/ItBBv.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/ItBBv.png" alt="enter image description here" /></a></p>
<p>This is image from Cygwin (updated 11/25):
<a href="https://i.sstatic.net/4Q2rC.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/4Q2rC.png" alt="enter image description here" /></a></p>
| <python><azure><csv><concatenation> | 2023-11-20 23:26:59 | 0 | 1,324 | Java |
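One possibility worth ruling out (an assumption, since the Meld diff is only shown as an image): the exported file may carry a different encoding or byte-order mark than the manually re-saved copy, which would leave `pd.read_csv` working while breaking the regex over `f.read()` — re-saving from Excel typically rewrites the file as plain UTF-8/ANSI. A small sniffing helper for the leading BOM bytes:

```python
def sniff_encoding(path):
    """Guess a text encoding from the file's leading byte-order mark."""
    with open(path, "rb") as f:
        head = f.read(4)
    if head.startswith(b"\xef\xbb\xbf"):
        return "utf-8-sig"
    if head.startswith(b"\xff\xfe") or head.startswith(b"\xfe\xff"):
        return "utf-16"
    return "utf-8"  # fallback assumption, not a detection
```

If the two files disagree here, passing `encoding=sniff_encoding(path)` to `open()` (and to `pd.read_csv`) would be the next thing to try; differing line endings (`\r\n` vs `\n`) are the other usual suspect for a regex that anchors on `\n`.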
77,519,280 | 6,534,818 | PySpark: CumSum with Salting over Window w/ Skew | <p>How can I use salting to perform a cumulative sum window operation? While this is a tiny sample, my id column is heavily skewed, and I effectively need to perform this operation on it:</p>
<pre><code>window_unsalted = Window.partitionBy("id").orderBy("timestamp")
# expected value
df = df.withColumn("Expected", F.sum('value').over(window_unsalted))
</code></pre>
<p>However, I want to try salting because at the scale of my data, I cannot compute it otherwise.</p>
<p>Consider this MWE. How can I replicate the expected value, 20, using salting techniques?</p>
<pre><code>from pyspark.sql import functions as F
from pyspark.sql.window import Window
data = [
(7329, 1636617182, 1.0),
(7329, 1636142065, 1.0),
(7329, 1636142003, 1.0),
(7329, 1680400388, 1.0),
(7329, 1636142400, 1.0),
(7329, 1636397030, 1.0),
(7329, 1636142926, 1.0),
(7329, 1635970969, 1.0),
(7329, 1636122419, 1.0),
(7329, 1636142195, 1.0),
(7329, 1636142654, 1.0),
(7329, 1636142484, 1.0),
(7329, 1636119628, 1.0),
(7329, 1636404275, 1.0),
(7329, 1680827925, 1.0),
(7329, 1636413478, 1.0),
(7329, 1636143578, 1.0),
(7329, 1636413800, 1.0),
(7329, 1636124556, 1.0),
(7329, 1636143614, 1.0),
(7329, 1636617778, -1.0),
(7329, 1636142155, -1.0),
(7329, 1636142061, -1.0),
(7329, 1680400415, -1.0),
(7329, 1636142480, -1.0),
(7329, 1636400183, -1.0),
(7329, 1636143444, -1.0),
(7329, 1635977251, -1.0),
(7329, 1636122624, -1.0),
(7329, 1636142298, -1.0),
(7329, 1636142720, -1.0),
(7329, 1636142584, -1.0),
(7329, 1636122147, -1.0),
(7329, 1636413382, -1.0),
(7329, 1680827958, -1.0),
(7329, 1636413538, -1.0),
(7329, 1636143610, -1.0),
(7329, 1636414011, -1.0),
(7329, 1636141936, -1.0),
(7329, 1636146843, -1.0)
]
df = spark.createDataFrame(data, ["id", "timestamp", "value"])
# Define the number of salt buckets
num_buckets = 100
# Add a salted_id column to the dataframe
df = df.withColumn("salted_id", (F.concat(F.col("id"),
                                          (F.rand(seed=42)*num_buckets).cast("int")).cast("string")))
# Define a window partitioned by the salted_id, and ordered by timestamp
window = Window.partitionBy("salted_id").orderBy("timestamp")
# Add a cumulative sum column
df = df.withColumn("cumulative_sum", F.sum("value").over(window))
# Define a window partitioned by the original id, and ordered by timestamp
window_unsalted = Window.partitionBy("id").orderBy("timestamp")
# Compute the final cumulative sum by adding up the cumulative sums within each original id
df = df.withColumn("final_cumulative_sum",
                   F.sum("cumulative_sum").over(window_unsalted))
# expected value
df = df.withColumn("Expected", F.sum('value').over(window_unsalted))
# incorrect trial
df.agg(F.sum('final_cumulative_sum')).show()
# expected value
df.agg(F.sum('Expected')).show()
</code></pre>
| <python><apache-spark><pyspark><apache-spark-sql> | 2023-11-20 22:32:37 | 1 | 1,859 | John Stud |
77,519,206 | 13,596,420 | Polars equivalent to Pandas min_count on groupby | <p>I'm trying to find the equivalent of a <code>min_count</code> param on polars groupby, such as in <code>pandas.groupby(key).sum(min_count=N)</code>.</p>
<p>Let's suppose the dataframe</p>
<pre class="lang-py prettyprint-override"><code>df = pl.from_repr("""
┌───────┬───────┐
│ fruit ┆ price │
│ --- ┆ --- │
│ str ┆ i64 │
╞═══════╪═══════╡
│ a ┆ 1 │
│ a ┆ 3 │
│ a ┆ 5 │
│ b ┆ 10 │
│ b ┆ 10 │
│ b ┆ 10 │
│ b ┆ 20 │
└───────┴───────┘
""")
</code></pre>
<p>How can I group by the <code>fruit</code> key with the constraint of the group having at least 4 values for the sum?</p>
<p>So instead of</p>
<pre><code>┌───────┬───────┐
│ fruit ┆ price │
│ --- ┆ --- │
│ str ┆ i64 │
╞═══════╪═══════╡
│ b ┆ 50 │
│ a ┆ 9 │
└───────┴───────┘
</code></pre>
<p>I'd have only fruit <code>b</code> in the output, since it's the only one with at least 4 elements:</p>
<pre><code>┌───────┬───────┐
│ fruit ┆ price │
│ --- ┆ --- │
│ str ┆ i64 │
╞═══════╪═══════╡
│ b ┆ 50 │
└───────┴───────┘
</code></pre>
| <python><python-polars> | 2023-11-20 22:14:13 | 3 | 306 | viniciusbaca |
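The idea behind `min_count` is "aggregate, but keep only groups whose size clears the threshold". In polars that would look something like `group_by("fruit").agg(...)` followed by a filter on the group length (hedged: the exact expression names, e.g. `pl.len()` vs `pl.count()`, differ across polars versions). The logic itself, in dependency-free Python for illustration:

```python
from collections import defaultdict


def grouped_sum_min_count(rows, min_count):
    """Sum price per fruit, dropping groups with fewer than min_count rows."""
    groups = defaultdict(list)
    for fruit, price in rows:
        groups[fruit].append(price)
    return {fruit: sum(prices)
            for fruit, prices in groups.items()
            if len(prices) >= min_count}


rows = [("a", 1), ("a", 3), ("a", 5),
        ("b", 10), ("b", 10), ("b", 10), ("b", 20)]
print(grouped_sum_min_count(rows, 4))  # {'b': 50}
```

The same "aggregate both the sum and the count, then filter on the count" shape translates directly into a single polars `agg` with two expressions.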
77,518,956 | 783,314 | debugging workflows and/or activities in VSCode or similar | <p>If we have code for Temporal workflows/activities and want to step through the code line-by-line in a debugger, for example with VSCode, is there any reasonably easy way of doing that?</p>
<p>I saw that there is a Temporal VSCode extension, but it seems to only be written for TypeScript.</p>
<p>I've tried pulling some of my workflow code out of the workflow class to run directly. However it's not so straightforward because there may be additional <code>workflow.*</code> calls which won't work at all if you aren't in the event loop.</p>
| <python><temporal-workflow> | 2023-11-20 21:20:10 | 0 | 8,953 | Stephen |
77,518,816 | 12,369,606 | Can I install a Python package in a Conda environment while a script is currently being executed in the environment? | <p>I used a SLURM manager to submit a bunch of scripts that all run in one Conda environment. I would like to install a new Python package to this environment. Do I need to wait until all of my scripts are done running? Or can I install the package now without messing anything up?</p>
| <python><conda><hpc> | 2023-11-20 20:49:10 | 1 | 504 | keenan |
77,518,740 | 13,881,506 | Why is nums[3:-1:-1] empty instead of the first 4 elements of nums in reverse? Python list slice with positive start, -1 stop, and -1 step | <p>Say I have a list called <code>nums</code> and I want to get a reversed prefix of <code>nums</code> - i.e. the first <code>start</code> elements of <code>nums</code> in reverse order. <strong>Assuming <code>0 <= start <= n</code> where <code>n = len(nums)</code></strong>, the following approaches work.</p>
<h5>Get prefix, then reverse:</h5>
<ul>
<li><code>nums[:start][::-1]</code> is a list</li>
<li><code>reversed(nums[:start])</code> is a reverse iterator, not a list</li>
<li><code>list(reversed(nums[:start]))</code> is a list</li>
</ul>
<h5>Reverse, then get suffix:</h5>
<ul>
<li><code>nums[::-1][-start:] if start else []</code> is a list</li>
<li><code>list(reversed(nums))[-start:] if start else []</code> is a list</li>
</ul>
<h5>Using comprehension:</h5>
<ul>
<li><code>[nums[i] for i in range(start-1, -1, -1)]</code></li>
</ul>
<p>Each slice or <code>list(.)</code> call creates a new list and each <code>reversed(.)</code> call adds overhead, so ideally there's a way to do this using just one slice and nothing else. There is</p>
<h5>Working one-slice approaches (still assuming 0 <= start <= n):</h5>
<ul>
<li><code>nums[-(n+1-start): -(n+1): -1]</code></li>
<li><code>nums[ start-1: -(n+1): -1] if start else []</code></li>
</ul>
<p>but they're confusing. I would have thought the following one slice approaches would have also worked, but they didn't:</p>
<h5>Non-working one-slice approaches:</h5>
<ul>
<li><code>nums[start-1: -1: -1] if start else []</code></li>
<li><code>nums[-(n+1-start): -1: -1]</code></li>
</ul>
<p>I find it confusing that <code>nums[start-1: -1: -1]</code> does not work given that it breaks the following pattern (assuming <code>start = 4 <= n = len(nums)</code>):</p>
<ul>
<li><code>nums[3: 3:-1]</code> is <code>[]</code></li>
<li><code>nums[3: 2:-1]</code> is <code>[nums[3]]</code></li>
<li><code>nums[3: 1:-1]</code> is <code>[nums[3], nums[2]]</code></li>
<li><code>nums[3: 0:-1]</code> is <code>[nums[3], nums[2], nums[1]]</code></li>
<li><code>nums[3:-1:-1]</code> is <code>[]</code> <em>(breaks the pattern)</em></li>
</ul>
<p>following the pattern above, one would expect <code>nums[3:-1:-1]</code> to be <code>[nums[3], nums[2], nums[1], nums[0]]</code>, but instead it is <code>[]</code>. Why?</p>
<p>The only thing that comes to mind is that a <code>stop</code> of <code>-1</code> is treated as a <code>stop</code> of <code>len(nums)-1</code>.</p>
| <python><python-3.x><list><slice> | 2023-11-20 20:35:56 | 0 | 1,013 | joseville |
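The closing guess is exactly right and can be checked directly: a negative `stop` is normalized by adding `len(nums)`, so `-1` becomes `len(nums) - 1`, and omitting the stop (or passing `None`) is the only slice syntax that means "walk past index 0":

```python
nums = [10, 20, 30, 40, 50]

# stop=-1 normalizes to len(nums) - 1 == 4, which sits "behind" start=3
# when stepping by -1, so the slice is empty:
assert nums[3:-1:-1] == []
assert nums[3:len(nums) - 1:-1] == []  # the identical slice after normalization

# Omitting stop (or passing None) really means "past the first element":
assert nums[3::-1] == [40, 30, 20, 10]
assert nums[3:None:-1] == [40, 30, 20, 10]
```

This also yields a shorter working one-slice form for the original goal: `nums[start-1::-1] if start else []`, with the `start == 0` case still special-cased because `start-1 == -1` would again be normalized to `len(nums) - 1`.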
77,518,556 | 1,564,070 | Confusing output from Python cProfile - queue and threading | <p>I'm trying to find performance bottlenecks in a Python application. The crux of the app is reading data from a text file and inserting it into and RDBMS. I'm using direct ODBC SQL and the program is single-threaded. I do not use the threading or queue libraries at all.</p>
<p>Output from cProfile is below. I'm trying to figure out how the majority of the execution time is taken up by queue.py, builtins.exec and threading.py. Any suggestions are most appreciated.
</p>
<pre><code> 6617261 function calls (6606203 primitive calls) in 149.445 seconds
Ordered by: cumulative time
ncalls tottime percall cumtime percall filename:lineno(function)
2445/1461 0.042 0.000 278.930 0.191 queue.py:154(get)
3/1 0.046 0.015 149.242 149.242 {built-in method builtins.exec}
2433 36.848 0.015 148.912 0.061 threading.py:302(wait)
983 0.081 0.000 73.227 0.074 threading.py:616(wait)
1007188 19.456 0.000 19.567 0.000 functions.py:53(sql_val)
6002 13.408 0.002 13.408 0.002 {method 'execute' of 'pyodbc.Cursor' objects}
6001 6.708 0.001 6.708 0.001 {method 'commit' of 'pyodbc.Cursor' objects}
19 0.001 0.000 3.833 0.202 pydevd_net_command.py:102(send)
29609 0.108 0.000 2.874 0.000 {built-in method strptime}
29609 0.417 0.000 2.766 0.000 _strptime.py:552(_strptime_datetime)
29609 1.729 0.000 2.349 0.000 _strptime.py:293(_strptime)
25676 0.287 0.000 0.740 0.000 locale.py:250(currency)
29611 0.058 0.000 0.437 0.000 _strptime.py:26(_getlang)
29611 0.086 0.000 0.379 0.000 locale.py:580(getlocale)
2015091 0.361 0.000 0.361 0.000 {method 'append' of 'list' objects}
51352 0.034 0.000 0.316 0.000 locale.py:108(localeconv)
</code></pre>
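<p>For what it's worth, the <code>pydevd_net_command.py</code> entry in the listing suggests the profile was captured under an IDE debugger, which adds its own queue/threading traffic. A hedged sketch of profiling the hot path in isolation and sorting by <code>tottime</code> (time spent inside each function itself) instead of <code>cumtime</code> — <code>etl_step</code> here is a hypothetical stand-in for the real read-and-insert loop:</p>

```python
import cProfile
import io
import pstats

def etl_step():
    # hypothetical stand-in for the real file-read + SQL-insert loop
    return sum(i * i for i in range(100_000))

pr = cProfile.Profile()
pr.enable()
etl_step()
pr.disable()

buf = io.StringIO()
# sort by tottime (own time) so background wait time in threading/queue
# does not dominate the listing the way it does under cumtime
pstats.Stats(pr, stream=buf).sort_stats("tottime").print_stats(10)
report = buf.getvalue()
print(report)
```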
| <python><performance-testing><profiler> | 2023-11-20 19:57:41 | 0 | 401 | WV_Mapper |
77,518,472 | 4,506,929 | How do I add a subplot to an existing figure? | <p>Consider that I have a pre-existing figure like:</p>
<pre class="lang-py prettyprint-override"><code>from matplotlib import pyplot as plt
fig, axes = plt.subplots(nrows=2, ncols=3)
</code></pre>
<p><a href="https://i.sstatic.net/pUkHA.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/pUkHA.png" alt="enter image description here" /></a></p>
<p>How can I add a 5th subplot that would take both rows in a third column?</p>
<p>I know it's easy to create such a figure from the start using, for example, mosaics like</p>
<pre class="lang-py prettyprint-override"><code>plt.figure().subplot_mosaic([[1, 2, 3], [4, 5, 3]])
</code></pre>
<p>which produces what I want:</p>
<p><a href="https://i.sstatic.net/PSn98.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/PSn98.png" alt="enter image description here" /></a></p>
<p>But the figure comes from an existing application, and if I were to replicate the same figure manually it'd take a huge amount of code. So really I'd need a way to do it <em>after</em> the first 4 subplots are already built and plotted.</p>
<p>Can this be done? (Preferably with <code>constrained_layout</code> active, although I'm sure this would be asking too much.)</p>
<p>PS: I tried the solutions <a href="https://stackoverflow.com/questions/38231738/matplotlib-plotting-subplots-to-existing-figure">here</a> and <a href="https://stackoverflow.com/questions/63703979/append-subplots-to-existing-figure-in-matplotlib">here</a>, but I couldn't make them work for what I want.</p>
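<p>For reference, the closest I've found is re-slotting the existing axes into a wider gridspec after the fact; a sketch assuming a reasonably recent matplotlib where <code>Axes.set_subplotspec</code> is available (I haven't verified how well this interacts with <code>constrained_layout</code>):</p>

```python
import matplotlib
matplotlib.use("Agg")  # headless backend so this runs without a display
import matplotlib.pyplot as plt

fig, axes = plt.subplots(nrows=2, ncols=3)

# Re-slot the six existing axes into the first three columns of a 2x4 grid,
# then add a tall axes spanning both rows of the new fourth column.
gs = fig.add_gridspec(2, 4)
for i, ax in enumerate(axes.flat):
    spec = gs[i // 3, i % 3]
    ax.set_subplotspec(spec)
    ax.set_position(spec.get_position(fig))

big_ax = fig.add_subplot(gs[:, 3])
fig.canvas.draw()
```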
| <python><matplotlib><subplot> | 2023-11-20 19:38:01 | 1 | 3,547 | TomCho |
77,518,388 | 19,053,778 | Creating a ColumnTransformer pipeline with user defined functions | <p>I'm trying to create a <code>ColumnTransformer</code> pipeline to then pass into a general processing pipeline; it will mainly be used to scale specific columns and to add new columns to the data (like cube, etc.)</p>
<p>I was trying this</p>
<pre><code>import numpy as np
from sklearn.compose import ColumnTransformer
from sklearn.preprocessing import FunctionTransformer
def take_cube(data,col):
data["speed_cube"] = np.power(col,3)
return data
def sin_angle(data, speed, acc):
data["angle"] = data[speed] * np.sin(data[acc])
return data
preprocessor = ColumnTransformer(
transformers=[
("cube", FunctionTransformer(take_cube, validate=False), [speed]),
("sin_angle", FunctionTransformer(sin_angle, kw_args={"speed":"speed","acc":"acceleration"},validate=False), [speed, acceleration]),
],
remainder="passthrough"
).set_output(transform="pandas")
</code></pre>
<p>But I'm not entirely sure how to format the column transformer in the format that scikit-learn expects (name, transformer, columns).</p>
<p>I'm already passing the columns I want to build the new columns from as the kw_args in FunctionTransformer, and if I pass the list like <code>["speed", "acceleration"]</code> it will create a bunch of extra columns that I don't want, giving the following output:</p>
<pre><code>
cube__speed sin_angle__speed sin_angle__acceleration \
2020-08-18 14:43:19 80.901828 4.325 0.0880
2020-08-18 14:43:20 81.520685 4.336 0.0842
2020-08-18 14:43:21 85.707790 4.409 0.0234
2020-08-18 14:43:22 87.824421 4.445 0.0016
2020-08-18 14:43:23 87.587538 4.441 0.1144
... ... ... ...
2020-09-13 14:55:57 1.170905 1.054 0.0234
2020-09-13 14:55:58 0.569723 0.829 0.0258
2020-09-13 14:55:59 0.233745 0.616 -0.1686
2020-09-13 14:56:00 0.000000 0.000 -0.4284
2020-09-13 14:56:01 0.000000 0.000 -0.3096
sin_angle__speed_ang remainder__heart-rate \
2020-08-18 14:43:19 0.380109 102.0
2020-08-18 14:43:20 0.364660 103.0
2020-08-18 14:43:21 0.103161 105.0
2020-08-18 14:43:22 0.007112 106.0
2020-08-18 14:43:23 0.506943 106.0
... ... ...
2020-09-13 14:55:57 0.024661 130.0
2020-09-13 14:55:58 0.021386 130.0
2020-09-13 14:55:59 -0.103366 129.0
2020-09-13 14:56:00 -0.000000 130.0
2020-09-13 14:56:01 -0.000000 130.0
remainder__cadence remainder__slope remainder__angle
2020-08-18 14:43:19 64.0 -0.033870 -0.033857
2020-08-18 14:43:20 64.0 -0.033571 -0.033559
2020-08-18 14:43:21 66.0 -0.033223 -0.033210
2020-08-18 14:43:22 66.0 -0.032908 -0.032896
2020-08-18 14:43:23 67.0 0.000000 0.000000
... ... ... ...
2020-09-13 14:55:57 0.0 0.000000 0.000000
2020-09-13 14:55:58 0.0 0.000000 0.000000
2020-09-13 14:55:59 0.0 0.000000 0.000000
2020-09-13 14:56:00 0.0 0.000000 0.000000
2020-09-13 14:56:01 0.0 0.000000 0.000000
</code></pre>
<p>What changes would I need to make so that I can make the <code>ColumnTransformer</code> user-defined functions take in the kw_args and create the extra columns as intended?</p>
<p>Thanks!</p>
| <python><scikit-learn> | 2023-11-20 19:22:02 | 1 | 496 | Chronicles |
77,517,975 | 9,415,280 | tf dataset "TypeError: unhashable type: 'list'" error on training | <p>I'm trying to use a TensorFlow dataset with an LSTM model.
This is my first project integrating tf.data, so any advice on doing it better is welcome.</p>
<p>Here are all my functions to read data from many CSVs and process them into a dataset:</p>
<pre><code>def csv_loader(path):
return tf.data.experimental.CsvDataset(
path,
record_defaults=[tf.float32, tf.float32, tf.float32],
header=header,
field_delim=',',
select_cols=[0,1,2,5],
na_value='nan')
def split_feature_label(x, lbl_position, nb_attrib):
output = x[self.input_sequence_length - 1, lbl_position + 1]
# remove output from input set
sub = list(range(x.shape[1]))
sub.pop(lbl_position + 1)
# remove id column used by function "filter_mixed_csv_sample"
sub.pop(0)
x = tf.transpose(tf.gather(tf.transpose(x), sub))
return {'input1': x[:self.input_sequence_length, :-nb_attrib],
'input2': x[self.input_sequence_length - 1, -nb_attrib:]}, output
def filter_mixed_csv_sample(x):
# remove samples with mixed csv values
y, idx = tf.unique(x[:, 0])
if len(y) > 1:
return False
return True
def filter_nan_missing_values(x):
# find NaN and reject the sample; not sure if it always works...
try:
ynan = tf.math.is_nan(x) # returns True if one or more NaN inside the sample
return tf.math.logical_not(tf.math.reduce_any(ynan)) # invert the answer so samples containing NaN are rejected with False
except:
return False
def to_timeseries(x):
# turn dataset into 3D lstm compatible dataset
x = x.map(lambda *items: tf.stack(items), num_parallel_calls=tf.data.AUTOTUNE)
x = x.window(self.input_sequence_length + self.output_sequence_length, shift=1,
drop_remainder=True)
x = x.flat_map(lambda i: i).batch(self.input_sequence_length + self.output_sequence_length)
return x
def is_test(x, _):
# split dataset into test and training datasets
return x % int(self.val_split * 100) == 0
def is_train(x, y):
return not is_test(x, y)
def apply_scaler(x, y):
x1_std = (x['input1'] - x1_scaler.data_min_) / (x1_scaler.data_max_ - x1_scaler.data_min_)
x1_scaled = x1_std * (x1_scaler.feature_range[1] - x1_scaler.feature_range[0]) + x1_scaler.feature_range[0]
x2_std = (x['input2'] - x2_scaler.data_min_) / (x2_scaler.data_max_ - x2_scaler.data_min_)
x2_scaled = x2_std * (x2_scaler.feature_range[1] - x2_scaler.feature_range[0]) + x2_scaler.feature_range[0]
y_std = (y - y_scaler.data_min_) / (y_scaler.data_max_ - y_scaler.data_min_)
y_scaled = y_std * (y_scaler.feature_range[1] - y_scaler.feature_range[0]) + y_scaler.feature_range[0]
return {'input1': x1_scaled, 'input2': x2_scaled}, y_scaled
</code></pre>
<p>and here is how I chain these processing steps:</p>
<pre><code>tf_list = tf.data.Dataset.list_files(list_files, shuffle=True)
dataset = tf_list.interleave(csv_loader, cycle_length=1)
with tf.device('/cpu:0'):
dataset = to_timeseries(self.dataset)
dataset = dataset.ignore_errors()
dataset = dataset.filter(filter_nan_missing_values)
dataset = dataset.filter(filter_mixed_csv_sample)
if shuffle:
dataset = dataset.shuffle(shuffle_buffer)
dataset = dataset.map(lambda x: split_feature_label(x, label_index, nb_attributs), num_parallel_calls=tf.data.AUTOTUNE)
# Split dataset to train/test set
if val_split > 0:
recover = lambda x, y: y
test_set = dataset.enumerate() \
.filter(is_test) \
.map(recover)
trainning_set = dataset.enumerate() \
.filter(is_train) \
.map(recover)
# set up multi-GPU config if available
if gpu:
strategy = tf.distribute.MirroredStrategy()
batch_size_per_replica = batch_size * strategy.num_replicas_in_sync
else:
batch_size_per_replica = batch_size
if val_split == 0:
dataset = dataset.batch(batch_size_per_replica)
dataset = dataset.cache()
dataset = dataset.prefetch(2)
else:
trainning_set = trainning_set.batch(batch_size_per_replica).cache().prefetch(2)
test_set = test_set.batch(batch_size_per_replica).cache().prefetch(2)
x1_scaler = load('/artefacts/scalers/x1_scaler.sclr')
x2_scaler = load('/artefacts/scalers/x2_scaler.sclr')
y_scaler = load('/artefacts/scalers/y_scaler.sclr')
dataset = dataset.map(apply_scaler, num_parallel_calls=tf.data.AUTOTUNE)
</code></pre>
<p>Finally, here is how I start training:</p>
<pre><code>if val_split > 0:
history = model.fit(trainning_set, validation_data=test_set,
verbose=verbose,
epochs=epochs,
callbacks=[checkpoint, early_stop],
shuffle=shuffle)
else:
history = model.fit(dataset,
verbose=verbose,
epochs=epochs,
callbacks=[checkpoint, early_stop],
shuffle=shuffle)
</code></pre>
<p>These codes was working in an early version, I did some "light" modification like adding multi-GPU optimisation, and re-organized my code to clean not usefull test lines...
now I get this error but realy don't understand what change I did to that and why this error... :</p>
<pre><code>Exception has occurred: TypeError
unhashable type: 'list'
File "/home/cy6112/CASTOR-S3/CASTOR-S3-cy6112/Code/timeseries_nn/core.py", line 797, in train
history = model.fit(dataset,
^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/cy6112/CASTOR-S3/CASTOR-S3-cy6112/Code/timeseries_nn/workbench.py", line 33, in <module>
NN.train(ds, 200, 10, verbose=2, shuffle=True)
TypeError: unhashable type: 'list'
</code></pre>
<p>my dataset is:</p>
<pre><code><_ParallelMapDataset element_spec=({'input1': TensorSpec(shape=(None, None, 3), dtype=tf.float32, name=None), 'input2': TensorSpec(shape=(None, 13), dtype=tf.float32, name=None)}, TensorSpec(shape=(None,), dtype=tf.float32, name=None))
</code></pre>
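<p>For context, a pure-Python reproduction of the exception itself (it just means a <code>list</code> ended up somewhere a hashable value is required, e.g. a dict key or set element), unrelated to TensorFlow:</p>

```python
# Lists are mutable and therefore unhashable; using one as a dict key
# (or set element) raises exactly this TypeError.
cache = {}
try:
    cache[["input1", "input2"]] = 0  # hypothetical key; a tuple works, a list doesn't
    failed = False
except TypeError as exc:
    failed = True
    message = str(exc)

cache[("input1", "input2")] = 0  # the hashable equivalent

print(message)
```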
<p>Thanks for your help, and for any advice if you see something not done the right way with the dataset.</p>
| <python><tensorflow><tf.data.dataset><tf.dataset> | 2023-11-20 17:52:45 | 2 | 451 | Jonathan Roy |
77,517,974 | 16,308,381 | Flask-session dictionary not persisting between endpoint requests from React/Axios/Chrome/CORS; but it works from Postman | <p>I'd like to 1. outline the problem, 2. show my code, 3. show what happens and what I've done to diagnose the problem, 4. show where I've already looked to solve the problem, and finally, 5. ask a few questions. I apologize in advance that it's a long post, but I've looked everywhere and it's been hard to pinpoint what the problem is, so I'm trying to share everything that's relevant but also not too much.</p>
<h1>1. Problem outline</h1>
<p>I'm writing a login system using Flask and React. On the server side, I have a <code>/register</code> endpoint to make a new user, a <code>/login</code> endpoint to log them in, and an <code>/@me</code>endpoint to display the user's information. I'm using a server-side <code>flask_session</code> for the user's session using SQLAlchemy. I have CORS enabled with credentials via <code>flask_cors</code>. On the client side, I've made up a register and login page, and a landing page to display the user's information. Requests to the server are sent using Axios with credentials.</p>
<p>My issue is when I register (or login) the user from my browser (Chrome), the landing page is unable to display the user's information because on the server side, the <code>session</code> dictionary is empty when calling the <code>/@me</code> endpoint, even though I explicitly populate the <code>session</code> dictionary when I register (or login) the user in via the <code>/register</code> (or <code>/login</code>) endpoint.</p>
<p>What is even stranger, is that if I use Postman to do the exact same sequence (i.e. <code>/register</code> or <code>/login</code>, and then <code>/@me</code>) it works just fine, and the <code>/@me</code> endpoint indeed returns the user's information.</p>
<h1>2. Code</h1>
<p>N.B.: I cannot claim originality for this code. I followed <a href="https://www.youtube.com/watch?v=sBw0O5YTT4Q&ab_channel=DevGuyAhnaf" rel="nofollow noreferrer">this excellent video</a> for a simple login system, and their code is <a href="https://github.com/ahnaf-zamil/flask-react-session-authenticaton-tutorial/" rel="nofollow noreferrer">here</a></p>
<h2>Server side</h2>
<p>I have my main Flask app as follows (<code>app.py</code>):</p>
<pre class="lang-py prettyprint-override"><code>
from flask import Flask, request, session
from flask_cors import CORS
from flask_session import Session
from flask_bcrypt import Bcrypt
from models import db, User
from config import ApplicationConfig
app = Flask(__name__)
app.config.from_object(ApplicationConfig)
bcrypt = Bcrypt(app)
cors_app = CORS(
app,
supports_credentials=True
)
server_session = Session(app)
db.init_app(app)
with app.app_context():
db.create_all()
@app.route("/@me", methods=["GET"])
def get_current_user():
user_id = session.get("user_id")
# Included print statement for debugging from Flask console
print(session)
if not user_id:
return ({"error": "Unauthorized"}, 401)
user = User.query.filter_by(id=user_id).first()
return ({
"id": user.id,
"email": user.email,
"session" : session
}, 200)
@app.route("/register", methods=["POST"])
def register_user():
email = request.json["email"]
password = request.json["password"]
user_exists = User.query.filter_by(email=email).first() is not None
if user_exists:
return ({"error" : "User already exists"}, 409)
hashed_password = bcrypt.generate_password_hash(password)
new_user = User(email=email, password=hashed_password)
db.session.add(new_user)
db.session.commit()
session["user_id"] = new_user.id
# Included print statement for debugging from Flask console
print(session)
return ({
"id": new_user.id,
"email": new_user.email
}, 200)
@app.route("/login", methods=["POST"])
def login_user():
email = request.json["email"]
password = request.json["password"]
user = User.query.filter_by(email=email).first()
if user is None:
return ({"error": "Unauthorized"}, 401)
if not bcrypt.check_password_hash(user.password, password):
return ({"error": "Unauthorized"}, 401)
session["user_id"] = user.id
# Included print statement for debugging from Flask console
print(session)
return ({
"id": user.id,
"email": user.email
}, 200)
@app.route("/logout", methods=["POST"])
def logout_user():
print(session)
session.pop("user_id")
return (
{
"message" : "logged out"
},
200)
</code></pre>
<p>My application configuration looks like (<code>config.py</code>):</p>
<pre class="lang-py prettyprint-override"><code>import dotenv
import os
from models import db
from datetime import timedelta
dotenv.load_dotenv()
environment_values = dotenv.dotenv_values()
basedir = os.path.abspath(os.path.dirname(__file__))
class ApplicationConfig:
SECRET_KEY = "a very secret key"
SQLALCHEMY_TRACK_MODIFICATIONS = False
SQLALCHEMY_ECHO = False
SQLALCHEMY_DATABASE_URI = (
'sqlite:///'
+ os.path.join(basedir, 'database.db')
)
SESSION_TYPE = "sqlalchemy"
SESSION_PERMANENT = True
SESSION_USE_SIGNER = True
SESSION_SQLALCHEMY_TABLE="sessions"
SESSION_SQLALCHEMY=db
PERMANENT_SESSION_LIFETIME = timedelta(minutes=15)
</code></pre>
<p>and my database models looks like (<code>models.py</code>):</p>
<pre class="lang-py prettyprint-override"><code>from flask_sqlalchemy import SQLAlchemy
from uuid import uuid4
db = SQLAlchemy()
def get_uuid():
return uuid4().hex
class User(db.Model):
__tablename__ = "users"
id = db.Column(
db.String(32),
primary_key=True,
unique=True,
default=get_uuid
)
email = db.Column(db.String(345), unique=True)
password = db.Column(db.Text, nullable=False)
</code></pre>
<p>Also, it might be helpful, here is my <code>requirements.txt</code> to install the packages I need:</p>
<pre><code>Flask==2.3.2
Flask-Cors==3.0.10
Flask-Session==0.5.0
Flask-Bcrypt==1.0.1
Flask-SQLAlchemy==3.0.3
python-dotenv==1.0.0
Werkzeug==2.3.4 # Need a lower version than 3. to avoid some string/bytes issue
</code></pre>
<h2>Client side</h2>
<p>First, as suggested in one of <a href="https://blog.miguelgrinberg.com/post/how-to-create-a-react--flask-project" rel="nofollow noreferrer">Miguel Grinberg's posts</a>, I set React's proxy to the address of my Flask backend. To <code>package.json</code>, I add:</p>
<pre class="lang-js prettyprint-override"><code>"proxy": "http://127.0.0.1:5000"
</code></pre>
<p>I set a router to re-direct traffic (<code>Router.js</code>)</p>
<pre class="lang-js prettyprint-override"><code>import { BrowserRouter, Routes, Route } from "react-router-dom";
import LandingPage from "./pages/LandingPage";
import LoginPage from "./pages/LoginPage";
import NotFound from "./pages/NotFound";
import RegisterPage from "./pages/RegisterPage";
const Router = () => {
return (
<BrowserRouter>
<Routes>
<Route path="/" exact element={<LandingPage/>} />
<Route path="/login" exact element={<LoginPage/>} />
<Route path="/register" exact element={<RegisterPage />} />
<Route path="*" element={<NotFound/>} />
</Routes>
</BrowserRouter>
);
};
export default Router;
</code></pre>
<p>and I tell the index to use this router (<code>index.js</code>)</p>
<pre class="lang-js prettyprint-override"><code>import React from 'react';
import ReactDOM from 'react-dom/client';
import './index.css';
import Router from "./Router.js"
const root = ReactDOM.createRoot(document.getElementById('root'));
root.render(
<React.StrictMode>
<Router/>
</React.StrictMode>
);
</code></pre>
<p>I then define an Axios http client and tell it to use credentials, which I gather is essential for CORS in <code>httpClient.js</code>:</p>
<pre class="lang-js prettyprint-override"><code>import axios from "axios";
export default axios.create({
withCredentials: true
});
</code></pre>
<p>I then define my login, register, landing, and not found components. First, <code>pages/LoginPage.js</code>:</p>
<pre class="lang-js prettyprint-override"><code>import React, {useState} from 'react'
import httpClient from '../httpClient'
const LoginPage = () => {
const [email, setEmail] = useState("")
const [password, setPassword] = useState("")
const logInUser = async (e) => {
console.log(email, password);
try {
await httpClient.post("http://127.0.0.1:5000/login",{
email,
password
});
window.location.href = "/";
}
catch(error) {
if (error.response.status === 401)
{
alert("Invalid credentials")
}
}
};
return (
<div>
<h1>Log into your account </h1>
<form>
<div>
<label>Email:</label>
<input
type="text"
value={email}
onChange={(e) => setEmail(e.target.value)}
id=""
/>
</div>
<div>
<label>Password:</label>
<input
type="password"
value={password}
onChange={(e) => setPassword(e.target.value)}
id=""
/>
</div>
<button type="button" onClick={() => logInUser()}>
Submit
</button>
</form>
</div>
);
};
export default LoginPage;
</code></pre>
<p>Next, <code>pages/RegisterPage.js</code>:</p>
<pre class="lang-js prettyprint-override"><code>import React, { useState } from "react";
import httpClient from "../httpClient";
const RegisterPage =() => {
const [email, setEmail] = useState("");
const [password, setPassword] = useState("");
const registerUser = async () => {
try {
await httpClient.post("//127.0.0.1:5000/register", {
email,
password,
});
window.location.href = "/";
} catch (error) {
if (error.response.status === 401) {
alert("Invalid credentials");
}
}
};
return (
<div>
<h1>Create an account</h1>
<form>
<div>
<label>Email: </label>
<input
type="text"
value={email}
onChange={(e) => setEmail(e.target.value)}
id=""
/>
</div>
<div>
<label>Password: </label>
<input
type="password"
value={password}
onChange={(e) => setPassword(e.target.value)}
id=""
/>
</div>
<button type="button" onClick={() => registerUser()}>
Submit
</button>
</form>
</div>
);
};
export default RegisterPage;
</code></pre>
<p>and, finally, the <code>pages/LandingPage.js</code>:</p>
<pre class="lang-js prettyprint-override"><code>import React, {useState, useEffect} from 'react'
import httpClient from '../httpClient';
const LandingPage = () => {
const [user, setUser] = useState(null)
const logoutUser = async () => {
await httpClient.post("http://127.0.0.1:5000/logout");
window.location.href = "/";
};
useEffect(() => {
(async () => {
try {
const resp = await httpClient.get("http://127.0.0.1:5000/@me");
console.log(resp)
setUser(resp.data);
} catch(error) {
console.log("Not authenticated")
}
})();
}, []);
return(
<div>
<h1>Welcome to CogniFlow!</h1>
{user != null ? (
<div>
<h2>Logged in</h2>
<h3>ID: {user.id}</h3>
<h3>Email: {user.email}</h3>
<button onClick={logoutUser}>Logout</button>
</div>
) : (
<div>
<p>You aren't logged in</p>
<div className="buttons">
<a href="/login">
<button>Login</button>
</a>
<a href="/register">
<button>Register</button>
</a>
</div>
</div>
)}
</div>
);
};
export default LandingPage;
</code></pre>
<p>For brevity, I exclude the <code>NotFound.js</code> page, but I can show it if necessary.</p>
<h1>3. What happens</h1>
<p>I run the Flask server via <code>python3 -m flask --app=app.py --debug run</code>, which runs on <code>http://127.0.0.1:5000</code>, and I run the React client using <code>npm start</code>, which runs on <code>http://localhost:3000</code>.</p>
<h2>From the browser (Chrome)</h2>
<p>I create a user with username <code>hamlet@elsinore.dk</code> and password <code>2b|!2b</code>. As shown in <code>RegisterPage.js</code>, this leads me back to the landing page. However, it says "You are not logged in", which suggests that the request to <code>/@me</code> was unsuccessful. If I look at the console, I get two of the exact same <code>Failed to load resource: server responded with a status of 401 (UNAUTHORIZED)</code> errors.</p>
<p>If I look at the Flask console, I see:</p>
<pre><code>127.0.0.1 - - [20/Nov/2023 09:21:17] "OPTIONS /register HTTP/1.1" 200 -
<SqlAlchemySession {'_permanent': True, 'user_id': '4fe9812bb8464917bd2297dc7855d929'}>
127.0.0.1 - - [20/Nov/2023 09:21:18] "POST /register HTTP/1.1" 200 -
<SqlAlchemySession {'_permanent': True}>
127.0.0.1 - - [20/Nov/2023 09:21:18] "GET /@me HTTP/1.1" 401 -
<SqlAlchemySession {'_permanent': True}>
127.0.0.1 - - [20/Nov/2023 09:21:18] "GET /@me HTTP/1.1" 401 -
</code></pre>
<p>Indeed, when I registered <code>hamlet@elsinore.dk</code>, the <code>session</code> dictionary has a <code>user_id</code>, but when I call the <code>/@me</code> endpoint, the relevant information seems to vanish.</p>
<h2>From Postman</h2>
<p>I effectively repeat the procedure from Postman. I post to the <code>/register</code> endpoint with a username of <code>lear@england.co.uk</code> and a password of <code>nothingcomesfromnothing</code> (not that it really matters). This returns:</p>
<pre class="lang-js prettyprint-override"><code>{
"email": "lear@england.co.uk",
"id": "974f99e96e864225b38d13cd50a47a5a"
}
</code></pre>
<p>I then send a GET to the <code>/@me</code> endpoint and I get back:</p>
<pre class="lang-js prettyprint-override"><code>{
"email": "lear@england.co.uk",
"id": "974f99e96e864225b38d13cd50a47a5a",
"session": {
"_permanent": true,
"user_id": "974f99e96e864225b38d13cd50a47a5a"
}
}
</code></pre>
<p>which indeed suggests that the session dictionary works. If I look at the Flask console, I see:</p>
<pre><code><SqlAlchemySession {'_permanent': True, 'user_id': '974f99e96e864225b38d13cd50a47a5a'}>
127.0.0.1 - - [20/Nov/2023 09:27:50] "POST /register HTTP/1.1" 200 -
<SqlAlchemySession {'_permanent': True, 'user_id': '974f99e96e864225b38d13cd50a47a5a'}>
127.0.0.1 - - [20/Nov/2023 09:28:46] "GET /@me HTTP/1.1" 200 -
</code></pre>
<h2>Comparison</h2>
<p>In the Postman case, there is no pre-flight OPTIONS request (not entirely unexpected), there is only one GET request, and the session dictionary persists.</p>
<h1>4. Where I've looked</h1>
<p>Everywhere. Among other things:</p>
<ul>
<li><a href="https://stackoverflow.com/questions/64114524/postman-request-works-but-not-axios-cors">This link</a> suggests that Postman circumvents CORS, so maybe this is a CORS issue.</li>
<li><a href="https://stackoverflow.com/questions/39261260/flask-session-variable-not-persisting-between-requests">This link</a> suggests adding <code>session.modified = True</code> (didn't work) and making sure that I store less than 4KB of data in the session.</li>
<li><a href="https://stackoverflow.com/questions/70627684/flask-sessions-not-persistent-between-requests-from-cross-domains">This link</a> suggests adding a <code>SESSION_COOKIE_DOMAIN = "http://localhost:3000"</code> to my <code>config.py</code>; that doesn't work.</li>
<li><a href="https://stackoverflow.com/questions/49712385/flask-session-not-persisting-postman-works-javascript-doesnt">This link</a> suggests adding a <code>credentials : "same-origin"</code> on the client side when posting requests, but I couldn't find any such option in axios; besides, I already have <code>withCredentials: true</code> set.</li>
<li><a href="https://stackoverflow.com/questions/51977485/fetch-and-sessions-and-cors">This link</a> suggests double-checking that <code>flask_cors</code> has <code>supports_credentials=True</code>, which I do.</li>
<li><a href="https://stackoverflow.com/questions/71588516/cors-error-in-react-js-axios-when-its-working-in-postman">This link</a> suggests adding <code>"http://localhost:3000"</code> to the allowed origins in <code>flask_cors</code>; that didn't work.</li>
<li><a href="https://stackoverflow.com/questions/69256429/flask-session-not-persisted">This link</a> suggests explicitly handling the pre-flight request, but that didn't work for me either.</li>
</ul>
<p>There are many other things I've tried, which didn't work.</p>
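<p>I also experimented with the cookie-side settings, since <code>localhost:3000</code> and <code>127.0.0.1:5000</code> count as different sites to Chrome, which by default withholds cookies on cross-site XHR under its SameSite rules. A sketch of the config keys I tried (these are standard Flask config names, but I'm not sure which combination, if any, is correct):</p>

```python
# Plain class mirroring Flask's cookie config keys; no Flask import needed here.
# SameSite="None" requires Secure=True, which in turn requires HTTPS, so for
# plain-HTTP local development an alternative is to serve client and server
# from the same host (e.g. both via 127.0.0.1) so that "Lax" suffices.
class ApplicationConfig:
    SECRET_KEY = "a very secret key"
    SESSION_COOKIE_SAMESITE = "Lax"   # or "None" together with Secure over HTTPS
    SESSION_COOKIE_SECURE = False     # must be True when SAMESITE == "None"
    SESSION_COOKIE_HTTPONLY = True
```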
<h1>5. Questions</h1>
<p>If you've got this far, thanks. My questions are:</p>
<ul>
<li>How can I make my session dictionary persist between endpoint requests from Chrome?</li>
<li>What am I doing wrong here? I'm sure it's something very simple, but I'm just not seeing it</li>
<li>What is the difference between the Chrome and Postman requests? Why does the former fail yet the latter work?</li>
</ul>
| <python><reactjs><flask><cors><flask-session> | 2023-11-20 17:52:11 | 0 | 392 | pvasudev16 |
77,517,944 | 13,630,719 | How to resolve error building python-Levenshtein-wheels? | <p>When I try to run a pre-commit hook locally, I get the following error log:</p>
<p>Failed to build python-Levenshtein-wheels
stderr:
error: subprocess-exited-with-error</p>
<pre><code> × Building wheel for python-Levenshtein-wheels (pyproject.toml) did not run successfully.
│ exit code: 1
╰─> [214 lines of output]
/private/var/folders/44/jk1pxby9555ckx9vm9ygbl4c0000gq/T/pip-build-env-ahj64dqh/overlay/lib/python3.12/site-packages/setuptools/config/expand.py:133: SetuptoolsWarning: File '/private/var/folders/44/jk1pxby9555ckx9vm9ygbl4c0000gq/T/pip-install-mzfl5e3n/python-levenshtein-wheels_bc06a2aa41c84ef08ec27875130a3c05/CHANGELOG.rst' cannot be found
return '\n'.join(
running bdist_wheel
running build
running build_py
creating build
creating build/lib.macosx-14.1-x86_64-cpython-312
creating build/lib.macosx-14.1-x86_64-cpython-312/Levenshtein
copying Levenshtein/StringMatcher.py -> build/lib.macosx-14.1-x86_64-cpython-312/Levenshtein
copying Levenshtein/__init__.py -> build/lib.macosx-14.1-x86_64-cpython-312/Levenshtein
running build_ext
building 'Levenshtein._levenshtein' extension
creating build/temp.macosx-14.1-x86_64-cpython-312
creating build/temp.macosx-14.1-x86_64-cpython-312/Levenshtein
clang -fno-strict-overflow -Wsign-compare -Wunreachable-code -DNDEBUG -g -O3 -Wall -I/Users/ENV/.cache/pre-commit/repofltdlao5/py_env-python3.12/include -I/Users/eliasvolonakis/.pyenv/versions/3.12.0/include/python3.12 -c Levenshtein/_levenshtein.c -o build/temp.macosx-14.1-x86_64-cpython-312/Levenshtein/_levenshtein.o
clang -fno-strict-overflow -Wsign-compare -Wunreachable-code -fno-common -dynamic -DNDEBUG -g -O3 -Wall -arch arm64 -arch x86_64 -g -I/Users/ENV/.cache/pre-commit/repofltdlao5/py_env-python3.12/include -I/Library/Frameworks/Python.framework/Versions/3.12/include/python3.12 -c Levenshtein/_levenshtein.c -o build/temp.macosx-10.9-universal2-cpython-312/Levenshtein/_levenshtein.o
Levenshtein/_levenshtein.c:711:13: warning: assigning to 'lev_byte *' (aka 'unsigned char *') from 'char *' converts between pointers to integer types where one is of the unique plain 'char' type and the other is not [-Wpointer-sign]
string1 = PyString_AS_STRING(arg1);
^ ~~~~~~~~~~~~~~~~~~~~~~~~
Levenshtein/_levenshtein.c:712:13: warning: assigning to 'lev_byte *' (aka 'unsigned char *') from 'char *' converts between pointers to integer types where one is of the unique plain 'char' type and the other is not [-Wpointer-sign]
string2 = PyString_AS_STRING(arg2);
^ ~~~~~~~~~~~~~~~~~~~~~~~~
Levenshtein/_levenshtein.c:726:12: error: call to undeclared function 'PyUnicode_GET_SIZE'; ISO C99 and later do not support implicit function declarations [-Wimplicit-function-declaration]
len1 = PyUnicode_GET_SIZE(arg1);
^
Levenshtein/_levenshtein.c:726:12: note: did you mean 'PyDict_GET_SIZE'?
/Library/Frameworks/Python.framework/Versions/3.12/include/python3.12/cpython/dictobject.h:53:26: note: 'PyDict_GET_SIZE' declared here
static inline Py_ssize_t PyDict_GET_SIZE(PyObject *op) {
^
Levenshtein/_levenshtein.c:729:15: error: call to undeclared function 'PyUnicode_AS_UNICODE'; ISO C99 and later do not support implicit function declarations [-Wimplicit-function-declaration]
string1 = PyUnicode_AS_UNICODE(arg1);
^
Levenshtein/_levenshtein.c:729:13: warning: incompatible integer to pointer conversion assigning to 'Py_UNICODE *' (aka 'int *') from 'int' [-Wint-conversion]
string1 = PyUnicode_AS_UNICODE(arg1);
^ ~~~~~~~~~~~~~~~~~~~~~~~~~~
Levenshtein/_levenshtein.c:730:13: warning: incompatible integer to pointer conversion assigning to 'Py_UNICODE *' (aka 'int *') from 'int' [-Wint-conversion]
string2 = PyUnicode_AS_UNICODE(arg2);
^ ~~~~~~~~~~~~~~~~~~~~~~~~~~
Levenshtein/_levenshtein.c:796:13: warning: assigning to 'lev_byte *' (aka 'unsigned char *') from 'char *' converts between pointers to integer types where one is of the unique plain 'char' type and the other is not [-Wpointer-sign]
string1 = PyString_AS_STRING(arg1);
^ ~~~~~~~~~~~~~~~~~~~~~~~~
Levenshtein/_levenshtein.c:797:13: warning: assigning to 'lev_byte *' (aka 'unsigned char *') from 'char *' converts between pointers to integer types where one is of the unique plain 'char' type and the other is not [-Wpointer-sign]
string2 = PyString_AS_STRING(arg2);
^ ~~~~~~~~~~~~~~~~~~~~~~~~
Levenshtein/_levenshtein.c:805:12: error: call to undeclared function 'PyUnicode_GET_SIZE'; ISO C99 and later do not support implicit function declarations [-Wimplicit-function-declaration]
len1 = PyUnicode_GET_SIZE(arg1);
^
Levenshtein/_levenshtein.c:812:15: error: call to undeclared function 'PyUnicode_AS_UNICODE'; ISO C99 and later do not support implicit function declarations [-Wimplicit-function-declaration]
string1 = PyUnicode_AS_UNICODE(arg1);
^
Levenshtein/_levenshtein.c:812:13: warning: incompatible integer to pointer conversion assigning to 'Py_UNICODE *' (aka 'int *') from 'int' [-Wint-conversion]
string1 = PyUnicode_AS_UNICODE(arg1);
^ ~~~~~~~~~~~~~~~~~~~~~~~~~~
Levenshtein/_levenshtein.c:813:13: warning: incompatible integer to pointer conversion assigning to 'Py_UNICODE *' (aka 'int *') from 'int' [-Wint-conversion]
string2 = PyUnicode_AS_UNICODE(arg2);
^ ~~~~~~~~~~~~~~~~~~~~~~~~~~
Levenshtein/_levenshtein.c:840:13: warning: assigning to 'lev_byte *' (aka 'unsigned char *') from 'char *' converts between pointers to integer types where one is of the unique plain 'char' type and the other is not [-Wpointer-sign]
string1 = PyString_AS_STRING(arg1);
^ ~~~~~~~~~~~~~~~~~~~~~~~~
Levenshtein/_levenshtein.c:841:13: warning: assigning to 'lev_byte *' (aka 'unsigned char *') from 'char *' converts between pointers to integer types where one is of the unique plain 'char' type and the other is not [-Wpointer-sign]
string2 = PyString_AS_STRING(arg2);
^ ~~~~~~~~~~~~~~~~~~~~~~~~
Levenshtein/_levenshtein.c:848:12: error: call to undeclared function 'PyUnicode_GET_SIZE'; ISO C99 and later do not support implicit function declarations [-Wimplicit-function-declaration]
len1 = PyUnicode_GET_SIZE(arg1);
^
Levenshtein/_levenshtein.c:850:15: error: call to undeclared function 'PyUnicode_AS_UNICODE'; ISO C99 and later do not support implicit function declarations [-Wimplicit-function-declaration]
string1 = PyUnicode_AS_UNICODE(arg1);
^
Levenshtein/_levenshtein.c:850:13: warning: incompatible integer to pointer conversion assigning to 'Py_UNICODE *' (aka 'int *') from 'int' [-Wint-conversion]
string1 = PyUnicode_AS_UNICODE(arg1);
^ ~~~~~~~~~~~~~~~~~~~~~~~~~~
Levenshtein/_levenshtein.c:851:13: warning: incompatible integer to pointer conversion assigning to 'Py_UNICODE *' (aka 'int *') from 'int' [-Wint-conversion]
string2 = PyUnicode_AS_UNICODE(arg2);
^ ~~~~~~~~~~~~~~~~~~~~~~~~~~
Levenshtein/_levenshtein.c:890:13: warning: assigning to 'lev_byte *' (aka 'unsigned char *') from 'char *' converts between pointers to integer types where one is of the unique plain 'char' type and the other is not [-Wpointer-sign]
string1 = PyString_AS_STRING(arg1);
^ ~~~~~~~~~~~~~~~~~~~~~~~~
Levenshtein/_levenshtein.c:891:13: warning: assigning to 'lev_byte *' (aka 'unsigned char *') from 'char *' converts between pointers to integer types where one is of the unique plain 'char' type and the other is not [-Wpointer-sign]
string2 = PyString_AS_STRING(arg2);
^ ~~~~~~~~~~~~~~~~~~~~~~~~
Levenshtein/_levenshtein.c:900:12: error: call to undeclared function 'PyUnicode_GET_SIZE'; ISO C99 and later do not support implicit function declarations [-Wimplicit-function-declaration]
len1 = PyUnicode_GET_SIZE(arg1);
^
Levenshtein/_levenshtein.c:902:15: error: call to undeclared function 'PyUnicode_AS_UNICODE'; ISO C99 and later do not support implicit function declarations [-Wimplicit-function-declaration]
string1 = PyUnicode_AS_UNICODE(arg1);
^
Levenshtein/_levenshtein.c:902:13: warning: incompatible integer to pointer conversion assigning to 'Py_UNICODE *' (aka 'int *') from 'int' [-Wint-conversion]
string1 = PyUnicode_AS_UNICODE(arg1);
^ ~~~~~~~~~~~~~~~~~~~~~~~~~~
Levenshtein/_levenshtein.c:903:13: warning: incompatible integer to pointer conversion assigning to 'Py_UNICODE *' (aka 'int *') from 'int' [-Wint-conversion]
string2 = PyUnicode_AS_UNICODE(arg2);
^ ~~~~~~~~~~~~~~~~~~~~~~~~~~
Levenshtein/_levenshtein.c:992:43: warning: passing 'lev_byte *' (aka 'unsigned char *') to parameter of type 'const char *' converts between pointers to integer types where one is of the unique plain 'char' type and the other is not [-Wpointer-sign]
result = PyString_FromStringAndSize(medstr, len);
^~~~~~
/Library/Frameworks/Python.framework/Versions/3.12/include/python3.12/bytesobject.h:34:62: note: passing argument to parameter here
PyAPI_FUNC(PyObject *) PyBytes_FromStringAndSize(const char *, Py_ssize_t);
^
Levenshtein/_levenshtein.c:1001:16: error: call to undeclared function 'PyUnicode_FromUnicode'; ISO C99 and later do not support implicit function declarations [-Wimplicit-function-declaration]
result = PyUnicode_FromUnicode(medstr, len);
^
Levenshtein/_levenshtein.c:1001:14: warning: incompatible integer to pointer conversion assigning to 'PyObject *' (aka 'struct _object *') from 'int' [-Wint-conversion]
result = PyUnicode_FromUnicode(medstr, len);
^ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Levenshtein/_levenshtein.c:1071:15: warning: initializing 'lev_byte *' (aka 'unsigned char *') with an expression of type 'char *' converts between pointers to integer types where one is of the unique plain 'char' type and the other is not [-Wpointer-sign]
lev_byte *s = PyString_AS_STRING(arg1);
^ ~~~~~~~~~~~~~~~~~~~~~~~~
Levenshtein/_levenshtein.c:1077:43: warning: passing 'lev_byte *' (aka 'unsigned char *') to parameter of type 'const char *' converts between pointers to integer types where one is of the unique plain 'char' type and the other is not [-Wpointer-sign]
result = PyString_FromStringAndSize(medstr, len);
^~~~~~
/Library/Frameworks/Python.framework/Versions/3.12/include/python3.12/bytesobject.h:34:62: note: passing argument to parameter here
PyAPI_FUNC(PyObject *) PyBytes_FromStringAndSize(const char *, Py_ssize_t);
^
Levenshtein/_levenshtein.c:1082:21: error: call to undeclared function 'PyUnicode_AS_UNICODE'; ISO C99 and later do not support implicit function declarations [-Wimplicit-function-declaration]
Py_UNICODE *s = PyUnicode_AS_UNICODE(arg1);
^
Levenshtein/_levenshtein.c:1082:17: warning: incompatible integer to pointer conversion initializing 'Py_UNICODE *' (aka 'int *') with an expression of type 'int' [-Wint-conversion]
Py_UNICODE *s = PyUnicode_AS_UNICODE(arg1);
^ ~~~~~~~~~~~~~~~~~~~~~~~~~~
Levenshtein/_levenshtein.c:1083:16: error: call to undeclared function 'PyUnicode_GET_SIZE'; ISO C99 and later do not support implicit function declarations [-Wimplicit-function-declaration]
size_t l = PyUnicode_GET_SIZE(arg1);
^
Levenshtein/_levenshtein.c:1088:16: error: call to undeclared function 'PyUnicode_FromUnicode'; ISO C99 and later do not support implicit function declarations [-Wimplicit-function-declaration]
result = PyUnicode_FromUnicode(medstr, len);
^
Levenshtein/_levenshtein.c:1088:14: warning: incompatible integer to pointer conversion assigning to 'PyObject *' (aka 'struct _object *') from 'int' [-Wint-conversion]
result = PyUnicode_FromUnicode(medstr, len);
^ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Levenshtein/_levenshtein.c:1115:41: warning: comparison of integers of different signs: 'Py_ssize_t' (aka 'long') and 'size_t' (aka 'unsigned long') [-Wsign-compare]
if (PySequence_Fast_GET_SIZE(wlist) != n) {
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ ^ ~
Levenshtein/_levenshtein.c:1201:16: warning: assigning to 'lev_byte *' (aka 'unsigned char *') from 'char *' converts between pointers to integer types where one is of the unique plain 'char' type and the other is not [-Wpointer-sign]
strings[0] = PyString_AS_STRING(first);
^ ~~~~~~~~~~~~~~~~~~~~~~~~~
Levenshtein/_levenshtein.c:1213:18: warning: assigning to 'lev_byte *' (aka 'unsigned char *') from 'char *' converts between pointers to integer types where one is of the unique plain 'char' type and the other is not [-Wpointer-sign]
strings[i] = PyString_AS_STRING(item);
^ ~~~~~~~~~~~~~~~~~~~~~~~~
Levenshtein/_levenshtein.c:1237:18: error: call to undeclared function 'PyUnicode_AS_UNICODE'; ISO C99 and later do not support implicit function declarations [-Wimplicit-function-declaration]
strings[0] = PyUnicode_AS_UNICODE(first);
^
Levenshtein/_levenshtein.c:1237:16: warning: incompatible integer to pointer conversion assigning to 'Py_UNICODE *' (aka 'int *') from 'int' [-Wint-conversion]
strings[0] = PyUnicode_AS_UNICODE(first);
^ ~~~~~~~~~~~~~~~~~~~~~~~~~~~
Levenshtein/_levenshtein.c:1238:16: error: call to undeclared function 'PyUnicode_GET_SIZE'; ISO C99 and later do not support implicit function declarations [-Wimplicit-function-declaration]
sizes[0] = PyUnicode_GET_SIZE(first);
^
Levenshtein/_levenshtein.c:1249:18: warning: incompatible integer to pointer conversion assigning to 'Py_UNICODE *' (aka 'int *') from 'int' [-Wint-conversion]
strings[i] = PyUnicode_AS_UNICODE(item);
^ ~~~~~~~~~~~~~~~~~~~~~~~~~~
Levenshtein/_levenshtein.c:1378:15: warning: unused variable 's' [-Wunused-variable]
const char *s;
^
Levenshtein/_levenshtein.c:1379:13: warning: unused variable 'len' [-Wunused-variable]
size_t i, len;
^
Levenshtein/_levenshtein.c:1650:13: warning: assigning to 'lev_byte *' (aka 'unsigned char *') from 'char *' converts between pointers to integer types where one is of the unique plain 'char' type and the other is not [-Wpointer-sign]
string1 = PyString_AS_STRING(arg1);
^ ~~~~~~~~~~~~~~~~~~~~~~~~
Levenshtein/_levenshtein.c:1651:13: warning: assigning to 'lev_byte *' (aka 'unsigned char *') from 'char *' converts between pointers to integer types where one is of the unique plain 'char' type and the other is not [-Wpointer-sign]
string2 = PyString_AS_STRING(arg2);
^ ~~~~~~~~~~~~~~~~~~~~~~~~
Levenshtein/_levenshtein.c:1658:12: error: call to undeclared function 'PyUnicode_GET_SIZE'; ISO C99 and later do not support implicit function declarations [-Wimplicit-function-declaration]
len1 = PyUnicode_GET_SIZE(arg1);
^
Levenshtein/_levenshtein.c:1660:15: error: call to undeclared function 'PyUnicode_AS_UNICODE'; ISO C99 and later do not support implicit function declarations [-Wimplicit-function-declaration]
string1 = PyUnicode_AS_UNICODE(arg1);
^
Levenshtein/_levenshtein.c:1660:13: warning: incompatible integer to pointer conversion assigning to 'Py_UNICODE *' (aka 'int *') from 'int' [-Wint-conversion]
string1 = PyUnicode_AS_UNICODE(arg1);
^ ~~~~~~~~~~~~~~~~~~~~~~~~~~
Levenshtein/_levenshtein.c:1661:13: warning: incompatible integer to pointer conversion assigning to 'Py_UNICODE *' (aka 'int *') from 'int' [-Wint-conversion]
string2 = PyUnicode_AS_UNICODE(arg2);
^ ~~~~~~~~~~~~~~~~~~~~~~~~~~
Levenshtein/_levenshtein.c:1768:13: warning: assigning to 'lev_byte *' (aka 'unsigned char *') from 'char *' converts between pointers to integer types where one is of the unique plain 'char' type and the other is not [-Wpointer-sign]
string1 = PyString_AS_STRING(arg1);
^ ~~~~~~~~~~~~~~~~~~~~~~~~
Levenshtein/_levenshtein.c:1769:13: warning: assigning to 'lev_byte *' (aka 'unsigned char *') from 'char *' converts between pointers to integer types where one is of the unique plain 'char' type and the other is not [-Wpointer-sign]
string2 = PyString_AS_STRING(arg2);
^ ~~~~~~~~~~~~~~~~~~~~~~~~
Levenshtein/_levenshtein.c:1776:12: error: call to undeclared function 'PyUnicode_GET_SIZE'; ISO C99 and later do not support implicit function declarations [-Wimplicit-function-declaration]
len1 = PyUnicode_GET_SIZE(arg1);
^
Levenshtein/_levenshtein.c:1778:15: error: call to undeclared function 'PyUnicode_AS_UNICODE'; ISO C99 and later do not support implicit function declarations [-Wimplicit-function-declaration]
string1 = PyUnicode_AS_UNICODE(arg1);
^
Levenshtein/_levenshtein.c:1778:13: warning: incompatible integer to pointer conversion assigning to 'Py_UNICODE *' (aka 'int *') from 'int' [-Wint-conversion]
string1 = PyUnicode_AS_UNICODE(arg1);
^ ~~~~~~~~~~~~~~~~~~~~~~~~~~
Levenshtein/_levenshtein.c:1779:13: warning: incompatible integer to pointer conversion assigning to 'Py_UNICODE *' (aka 'int *') from 'int' [-Wint-conversion]
string2 = PyUnicode_AS_UNICODE(arg2);
^ ~~~~~~~~~~~~~~~~~~~~~~~~~~
Levenshtein/_levenshtein.c:1863:13: warning: assigning to 'lev_byte *' (aka 'unsigned char *') from 'char *' converts between pointers to integer types where one is of the unique plain 'char' type and the other is not [-Wpointer-sign]
string1 = PyString_AS_STRING(arg1);
^ ~~~~~~~~~~~~~~~~~~~~~~~~
Levenshtein/_levenshtein.c:1864:13: warning: assigning to 'lev_byte *' (aka 'unsigned char *') from 'char *' converts between pointers to integer types where one is of the unique plain 'char' type and the other is not [-Wpointer-sign]
string2 = PyString_AS_STRING(arg2);
^ ~~~~~~~~~~~~~~~~~~~~~~~~
Levenshtein/_levenshtein.c:1878:43: warning: passing 'lev_byte *' (aka 'unsigned char *') to parameter of type 'const char *' converts between pointers to integer types where one is of the unique plain 'char' type and the other is not [-Wpointer-sign]
result = PyString_FromStringAndSize(s, len);
^
/Library/Frameworks/Python.framework/Versions/3.12/include/python3.12/bytesobject.h:34:62: note: passing argument to parameter here
PyAPI_FUNC(PyObject *) PyBytes_FromStringAndSize(const char *, Py_ssize_t);
^
Levenshtein/_levenshtein.c:1894:43: warning: passing 'lev_byte *' (aka 'unsigned char *') to parameter of type 'const char *' converts between pointers to integer types where one is of the unique plain 'char' type and the other is not [-Wpointer-sign]
result = PyString_FromStringAndSize(s, len);
^
/Library/Frameworks/Python.framework/Versions/3.12/include/python3.12/bytesobject.h:34:62: note: passing argument to parameter here
PyAPI_FUNC(PyObject *) PyBytes_FromStringAndSize(const char *, Py_ssize_t);
^
Levenshtein/_levenshtein.c:1913:12: error: call to undeclared function 'PyUnicode_GET_SIZE'; ISO C99 and later do not support implicit function declarations [-Wimplicit-function-declaration]
len1 = PyUnicode_GET_SIZE(arg1);
^
fatal error: too many errors emitted, stopping now [-ferror-limit=]
41 warnings and 20 errors generated.
error: command '/usr/bin/clang' failed with exit code 1
[end of output]
note: This error originates from a subprocess, and is likely not a problem with pip.
ERROR: Failed building wheel for python-Levenshtein-wheels
ERROR: Could not build wheels for python-Levenshtein-wheels, which is required to install pyproject.toml-based projects
Check the log at /Users/ENV/.cache/pre-commit/pre-commit.log
</code></pre>
<p>The pre-commit script is <code>poetry run pre-commit run --all-files</code>. Python version is 3.12 and I'm running the pre-commit in Visual Studio Code. Going through the log, I'm not sure why the error "call to undeclared function 'PyUnicode_GET_SIZE'" is happening; that function should be supplied by the environment python-Levenshtein builds against.</p>
<p>I suspect the error is due to this being in the pre-commit-config.yaml: <a href="https://github.com/Lucas-C/pre-commit-hooks" rel="nofollow noreferrer">https://github.com/Lucas-C/pre-commit-hooks</a>. This is the only repo that refers to Levenshtein. All other repos do not seem to contain any mention of it.</p>
<p>Any assistance on resolving this error would be much appreciated!</p>
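<p>The root cause is visible in the log itself: the extension calls <code>PyUnicode_AS_UNICODE</code>, <code>PyUnicode_GET_SIZE</code> and <code>PyUnicode_FromUnicode</code>, all removed from CPython's C API in 3.12, so the legacy <code>python-Levenshtein-wheels</code> C extension cannot compile against a 3.12 interpreter no matter what else changes. One possible workaround (the <code>rev</code> and hook id below are illustrative, not taken from your config) is to pin that hook's environment to an interpreter that still has those symbols:</p>

```yaml
# .pre-commit-config.yaml (excerpt; hypothetical rev and hook id)
repos:
  - repo: https://github.com/Lucas-C/pre-commit-hooks
    rev: v1.5.4
    hooks:
      - id: remove-tabs
        # Build and run this hook's environment on an older interpreter
        # so the legacy C extension still compiles.
        language_version: python3.11
```

<p>Alternatively, check whether a newer revision of the hook repository has dropped the <code>python-Levenshtein-wheels</code> dependency and bump <code>rev</code>; run <code>pre-commit clean</code> afterwards so the hook environments are rebuilt.</p>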
| <python> | 2023-11-20 17:47:24 | 2 | 1,342 | ENV |
77,517,937 | 1,415,826 | shouldn't this date format fail validation? | <p>I am using <code>strptime()</code> to create a datetime object to validate the string format. The format should be <code>mmddyy</code>, for example <code>112023</code>.</p>
<p>Using the format code <code>"%m%d%y"</code> for the 2-digit month, day, and year works as expected:</p>
<pre><code>file_name_date = "112023"
print(bool(datetime.strptime(file_name_date, "%m%d%y")))
print(datetime.strptime(file_name_date, "%m%d%y"))
</code></pre>
<p>output:</p>
<pre><code>True
2023-11-20 00:00:00
</code></pre>
<p>But the format code <code>"%m%d%Y"</code> also works. I would expect this one to fail, since it asks for a 4-digit year; instead it is parsed as JAN 1 2023:</p>
<pre><code>file_name_date = "112023"
print(bool(datetime.strptime(file_name_date, "%m%d%Y")))
print(datetime.strptime(file_name_date, "%m%d%Y"))
</code></pre>
<p>output:</p>
<pre><code>True
2023-01-01 00:00:00
</code></pre>
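<p>The reason no error is raised: <code>strptime</code> matches by regex with backtracking, and <code>%m</code> and <code>%d</code> happily consume a single digit each, leaving exactly four digits for <code>%Y</code>. For a strict fixed-width <code>mmddyy</code> check, one option is to validate the layout before parsing; a sketch (the helper name <code>parse_mmddyy</code> is introduced here):</p>

```python
from datetime import datetime

def parse_mmddyy(s: str) -> datetime:
    # strptime backtracks: for "112023" with "%m%d%Y" it matches
    # month="1", day="1", year="2023", which is why no error is raised.
    # Enforce the fixed six-digit layout explicitly first.
    if len(s) != 6 or not s.isdigit():
        raise ValueError(f"expected mmddyy, got {s!r}")
    return datetime.strptime(s, "%m%d%y")

print(parse_mmddyy("112023"))  # 2023-11-20 00:00:00
```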
| <python><python-datetime> | 2023-11-20 17:45:57 | 1 | 945 | iambdot |
77,517,922 | 3,861,775 | Subclassing Polygon in Shapely | <p>I'm working with Shapely in Python and trying to subclass the Polygon class. However, I'm encountering an error when trying to add a custom attribute during object creation. Could you please provide guidance on how to subclass the Polygon class in Shapely and add custom attributes without running into this error?</p>
<p>This is what I tried so far:</p>
<pre><code>from shapely.geometry import Polygon
class CustomPolygon(Polygon):
def __init__(self, shell=None, holes=None, name=None):
super().__init__(shell, holes)
self._name = name
@property
def name(self):
return self._name
@name.setter
def name(self, value):
self._name = value
polygon1 = CustomPolygon([(0, 0), (0, 1), (1, 1), (1, 0)], name="Polygon1")
</code></pre>
<p>And this is the error I get:</p>
<pre><code>polygon1 = CustomPolygon([(0, 0), (0, 1), (1, 1), (1, 0)], name="Polygon1")
TypeError: __new__() got an unexpected keyword argument 'name'
</code></pre>
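<p>The traceback points at <code>__new__</code>: Shapely 2.x constructs geometries in <code>__new__</code>, which receives the same arguments as <code>__init__</code> and rejects the unknown <code>name</code> keyword. The mechanics can be shown with a stdlib-only stand-in (the <code>Geometry</code> base below mimics that behaviour and is not Shapely): override <code>__new__</code> to strip the custom keyword before delegating. Note that real Shapely geometries are immutable and may refuse extra attributes entirely, in which case wrapping a <code>Polygon</code> in your own class (composition) is the safer route.</p>

```python
class Geometry:
    # Stand-in mimicking how Shapely 2.x builds geometries: the instance
    # is created in __new__, which rejects unknown keyword arguments.
    def __new__(cls, shell=None, holes=None):
        obj = super().__new__(cls)
        obj.shell = shell
        obj.holes = holes
        return obj

class CustomPolygon(Geometry):
    def __new__(cls, shell=None, holes=None, name=None):
        # Strip the custom keyword before delegating, so the base
        # __new__ never sees it.
        return super().__new__(cls, shell, holes)

    def __init__(self, shell=None, holes=None, name=None):
        self._name = name

    @property
    def name(self):
        return self._name

polygon1 = CustomPolygon([(0, 0), (0, 1), (1, 1), (1, 0)], name="Polygon1")
print(polygon1.name)  # Polygon1
```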
| <python><shapely> | 2023-11-20 17:42:41 | 1 | 3,656 | Gilfoyle |
77,517,864 | 7,194,375 | Running setup works in jupyter lab but not in vscode debug environment, although using same python path | <p>I am running this code in my jupyterlab and in vscode, it works in jupyterlab but does not in vscode. The error appears in the setup function of pycaret. This is my code:</p>
<pre><code>from pycaret.regression import *  # also tried pycaret.regression.oop
from pycaret.datasets import get_data
data = get_data('insurance')
s = setup(data, target = 'charges', session_id = 123)
</code></pre>
<p>Unfortunately I do not get any error message from pycaret. It just says:</p>
<pre><code>Error:
</code></pre>
<p>But if I breakpoint before and run pycarets setup code in "Debug Console", I get the following error:</p>
<pre><code>Traceback (most recent call last):
File "/home/fx/.local/lib/python3.9/site-packages/gradio/__init__.py", line 3, in <module>
import gradio._simple_templates
File "/home/fx/.local/lib/python3.9/site-packages/gradio/_simple_templates/__init__.py", line 1, in <module>
from .simpledropdown import SimpleDropdown
File "/home/fx/.local/lib/python3.9/site-packages/gradio/_simple_templates/simpledropdown.py", line 6, in <module>
from gradio.components.base import FormComponent
File "/home/fx/.local/lib/python3.9/site-packages/gradio/components/__init__.py", line 1, in <module>
from gradio.components.annotated_image import AnnotatedImage
File "/home/fx/.local/lib/python3.9/site-packages/gradio/components/annotated_image.py", line 8, in <module>
from gradio_client.documentation import document, set_documentation_group
File "/home/fx/.local/lib/python3.9/site-packages/gradio_client/__init__.py", line 1, in <module>
from gradio_client.client import Client
File "/home/fx/.local/lib/python3.9/site-packages/gradio_client/client.py", line 22, in <module>
import httpx
File "/home/fx/.local/lib/python3.9/site-packages/httpx/__init__.py", line 2, in <module>
from ._api import delete, get, head, options, patch, post, put, request, stream
File "/home/fx/.local/lib/python3.9/site-packages/httpx/_api.py", line 4, in <module>
from ._client import Client
File "/home/fx/.local/lib/python3.9/site-packages/httpx/_client.py", line 30, in <module>
from ._transports.default import AsyncHTTPTransport, HTTPTransport
File "/home/fx/.local/lib/python3.9/site-packages/httpx/_transports/default.py", line 30, in <module>
import httpcore
File "/home/fx/.local/lib/python3.9/site-packages/httpcore/__init__.py", line 1, in <module>
from ._api import request, stream
File "/home/fx/.local/lib/python3.9/site-packages/httpcore/_api.py", line 5, in <module>
from ._sync.connection_pool import ConnectionPool
File "/home/fx/.local/lib/python3.9/site-packages/httpcore/_sync/__init__.py", line 1, in <module>
from .connection import HTTPConnection
File "/home/fx/.local/lib/python3.9/site-packages/httpcore/_sync/connection.py", line 12, in <module>
from .._synchronization import Lock
File "/home/fx/.local/lib/python3.9/site-packages/httpcore/_synchronization.py", line 11, in <module>
import trio
File "/home/fx/.local/lib/python3.9/site-packages/trio/__init__.py", line 20, in <module>
from ._core import TASK_STATUS_IGNORED as TASK_STATUS_IGNORED # isort: split
File "/home/fx/.local/lib/python3.9/site-packages/trio/_core/__init__.py", line 21, in <module>
from ._local import RunVar, RunVarToken
File "/home/fx/.local/lib/python3.9/site-packages/trio/_core/_local.py", line 9, in <module>
from . import _run
File "/home/fx/.local/lib/python3.9/site-packages/trio/_core/_run.py", line 51, in <module>
from ._multierror import MultiError, concat_tb
File "/home/fx/.local/lib/python3.9/site-packages/trio/_core/_multierror.py", line 488, in <module>
assert sys.excepthook is apport_python_hook.apport_excepthook
AssertionError
</code></pre>
<p>Also tried to upgrade, but no effect on the behavior:</p>
<pre><code>pip install --upgrade gradio httpx trio
</code></pre>
<p>I use the same environment (the Python path is the same); pycaret is installed, can be loaded, and reports its version, which is the most recent one (3.2.0). I installed all dependencies.</p>
<pre><code>pip install pycaret[full]
</code></pre>
<p>Can anybody help me to solve this?</p>
| <python><machine-learning><regression><pycaret> | 2023-11-20 17:28:40 | 1 | 408 | AldegarRızvan |
77,517,808 | 7,307,125 | How to trim and match columns to obtain the common time values and corresponding values? | <p>The equipment (two scopes) outputs data with different numbers of points, but both trigger at the same moment (0 in the time column); there is no way to configure identical sampling settings.</p>
<p>What I would like to achieve is to align the time with its corresponding values (if missing it may be interpolated, data doesn't change fast here).</p>
<p>How to achieve this trim and match data along two dataframes?</p>
<p>As an example I have prepared two data sets: one comes from scope1 (more datapoints), and other data comes from scope2 (less datapoints). This is just an example, in reality I got 20k samples from scope1 and 10k samples from other.</p>
<pre><code>scope1Data = pd.DataFrame({
'TIME': [-1, -0.9, -0.8, -0.7, -0.6, -0.5, -0.4, -0.3, -0.2, -0.1, 0, 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1],
'V': [-0.841470985, -0.78332691, -0.717356091, -0.644217687, -0.564642473, -0.479425539, -0.389418342, -0.295520207, -0.198669331, -0.099833417, 0, 0.099833417, 0.198669331, 0.295520207, 0.389418342, 0.479425539, 0.564642473, 0.644217687, 0.717356091, 0.78332691, 0.841470985],
})
scope2Data = pd.DataFrame({
    'TIME': [-1.05, -0.9, -0.75, -0.6, -0.45, -0.3, -0.15, 0, 0.15, 0.3, 0.45, 0.6, 0.75, 0.9, 1.05],
'I': [-0.887362369, -0.946300088, -0.983985947, -0.999573603, -0.992712991, -0.963558185, -0.91276394, -0.841470985, -0.751280405, -0.644217687, -0.522687229, -0.389418342, -0.247403959, -0.099833417, 0.049979169]
})
</code></pre>
<p>The best would be to start from zero time (somewhere in the middle) and match scope1's time to scope2's time. The missing values may be interpolated, or I can change the number of datapoints from the faster scope (scope1). Extra values from the faster scope may be discarded. Simply put, data for times from -1.05 to 1.05 is sufficient; the rest may be trimmed off.</p>
<p>Also, the column TIME from scope2 will be not required anymore.</p>
<p>I am not expecting a full answer (of course more is better); just the name of this process would suffice.</p>
<p>The desired output format can be:</p>
<pre><code> combinedData = pd.DataFrame({
    'TIME': [-0.9, -0.75, -0.6, -0.45, -0.3, -0.15, 0, 0.15, 0.3, 0.45, 0.6, 0.75, 0.9],
    'V': [corresponding values],
    'I': [corresponding-interpolated-values-if-not-available-from-data]
})
</code></pre>
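<p>What you describe is usually called resampling (or aligning) the signals onto a common time base; with slowly varying data, linear interpolation is enough, e.g. via <code>numpy.interp</code> or <code>pandas.merge_asof</code>. A sketch with <code>numpy.interp</code>, regenerating the sample data from the sine relationship the listed numbers appear to follow (that reconstruction is only an assumption made for brevity):</p>

```python
import numpy as np
import pandas as pd

# Reconstruct the sample data: the question's numbers follow
# V = sin(t) for scope1 and I = sin(t - 1) for scope2.
t1 = np.round(np.arange(-1.0, 1.001, 0.1), 10)
t2 = np.round(np.arange(-1.05, 1.051, 0.15), 10)
scope1Data = pd.DataFrame({'TIME': t1, 'V': np.sin(t1)})
scope2Data = pd.DataFrame({'TIME': t2, 'I': np.sin(t2 - 1)})

# 1) Trim scope1 to the time window both scopes cover.
lo, hi = scope2Data['TIME'].min(), scope2Data['TIME'].max()
combinedData = scope1Data[(scope1Data['TIME'] >= lo)
                          & (scope1Data['TIME'] <= hi)].copy()

# 2) Resample scope2's signal onto scope1's time base by linear
#    interpolation; scope2's own TIME column is no longer needed.
combinedData['I'] = np.interp(combinedData['TIME'],
                              scope2Data['TIME'], scope2Data['I'])
print(combinedData.head())
```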
| <python><pandas><dataframe> | 2023-11-20 17:20:15 | 2 | 351 | smajli |
77,517,536 | 18,587,779 | keyboard library interfered with python input | <p>When I press "p", it triggers the input prompt with a "p" already typed at the start:</p>
<pre><code>import keyboard
def print_your_name():
x = input("Enter Your Name: ")
print(x)
keyboard.add_hotkey('p', print_your_name)
keyboard.wait("esc")
</code></pre>
<p>output:</p>
<pre><code>Enter Your Name:p
</code></pre>
<p>Is there any way to present a clean input prompt to the user?</p>
| <python><python-3.x><keyboard> | 2023-11-20 16:35:46 | 2 | 318 | Mark Wasfy |
77,517,488 | 8,324,480 | Exclude global patterns in setuptools.packages.find is not working | <p>I have a package <code>Foo</code> where I want to exclude both the <code>test</code> files located in folders <code>tests</code> and the file(s) <code>conftest.py</code>.</p>
<p>My <code>pyproject.toml</code> configuration is:</p>
<pre><code>[tool.setuptools.packages.find]
exclude = ['Foo*tests', '*conftest']
include = ['Foo*']
</code></pre>
<p>The test files are correctly excluded, but not the <code>conftest</code> file. Any idea on what I'm doing wrong?</p>
| <python><setuptools><packaging> | 2023-11-20 16:26:39 | 0 | 5,826 | Mathieu |
77,517,429 | 5,134,285 | AWS CDK How to handle Fn.getAtt to return integer | <p>I am utilizing a custom resource to retrieve an integer value. However, despite the CDK parameter specifically requiring an integer, I encounter an error stating <code>type of argument must be one of (int, float); got str instead</code>. Is there a method within CDK to ensure that the value is received as an integer, thereby avoiding this type mismatch error?</p>
| <python><amazon-web-services><aws-cdk> | 2023-11-20 16:17:48 | 0 | 1,404 | GeoCom |
77,517,370 | 6,875,230 | Generate list of pattern contained in dictionary values Python | <p>I have the following dictionary as an input of a Python script:</p>
<pre><code>d = {
'A':[[1,2,7]],
'B':[[1,3,7], [1,3], [1,7]],
'C':[[1,3,7], [2,6]],
'D':[[1,3,2], [2,1,3]]
}
</code></pre>
<p>and I want the following patterns as outputs:</p>
<pre><code>{
(2,7):['A'],
(3,7):['C'],
(2,6):['C'],
(2,3):['D'],
(1,2):['A','D'],
(1,7):['A','B','C'],
(1,3):['B','C','D']
}
</code></pre>
<p>where each output key is a pair of values that occur together in at least one sublist, and the associated list contains the keys of <code>d</code> whose sublists contain that pair.</p>
<p>For instance the pair (1,3) exists in the sublists of the values 'B', 'C', and 'D'</p>
<pre><code>'B':[[1,3,X]],
'C':[[1,3,X]],
'D':[[1,3,X], [X,1,3]]
</code></pre>
<p>and the pair (2,7) exists only in the sublist of the key 'A'</p>
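<p>Reading "the pair exists" as "the two values co-occur somewhere in a sublist, in any order", the mapping can be built by emitting every unordered pair of each sublist and collecting the keys per pair. Note that this interpretation also attributes <code>(3, 7)</code> to <code>'B'</code> (via <code>[1, 3, 7]</code>), which the expected output above omits, so double-check which rule you intend:</p>

```python
from itertools import combinations
from collections import defaultdict

d = {
    'A': [[1, 2, 7]],
    'B': [[1, 3, 7], [1, 3], [1, 7]],
    'C': [[1, 3, 7], [2, 6]],
    'D': [[1, 3, 2], [2, 1, 3]],
}

pair_keys = defaultdict(set)
for key, sublists in d.items():
    for sub in sublists:
        # Every unordered pair of elements co-occurring in a sublist counts.
        for a, b in combinations(sub, 2):
            pair_keys[tuple(sorted((a, b)))].add(key)

result = {pair: sorted(keys) for pair, keys in pair_keys.items()}
print(result[(1, 3)])  # ['B', 'C', 'D']
```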
| <python><algorithm><data-structures> | 2023-11-20 16:08:29 | 3 | 528 | Stefano |
77,517,303 | 25,277 | Rdflib query value of a related triple | <p>In the example Turtle below, I'm trying to run a query that gives me the <code>relationshipLabel</code> for a specific combination of <code>subject</code> and <code>object</code> stated via <code>canRelateTo</code>.</p>
<p>So for example: I want to query the value of relationshipLabel for the combination of</p>
<ul>
<li>destination-org:Type-Data_Product, and</li>
<li><a href="https://open-kos.org/ext/kos-powerbi/PowerBIReport" rel="nofollow noreferrer">https://open-kos.org/ext/kos-powerbi/PowerBIReport</a></li>
</ul>
<p>which should return "Data Product consists of"</p>
<p>I tried it using this query (where <code>subj</code>/<code>obj</code> are for example the values above):</p>
<pre><code> query = f"""
SELECT ?relationshipLabel
WHERE {{
<{subj}> :canRelateTo ?canRelateTo ;
:relationshipLabel ?relationshipLabel .
FILTER (?canRelateTo = <{obj}> )
}}
"""
if (subj.endswith("Type-Data_Product")):
print(query)
results = graph.query(query)
</code></pre>
<p>But this query would return all values of <code>relationshipLabel</code>, so: "Data Product consists of", "has data quality rule", "is governed by", "see also", "is a data product".</p>
<p>I also tried various combinations using the <code>Graph.value()</code> method, but without success.</p>
<p>Any idea how I can achieve this?</p>
<p>Example ttl:</p>
<pre><code>destination-org:Type-Data_Product
:canRelateTo <https://open-kos.org/ext/kos-powerbi/PowerBIReport> ;
:relationshipLabel "Data Product consists of" .
destination-org:Type-Data_Product
:canRelateTo destination-org:Type-Data_Quality_rule ;
:relationshipLabel "has data quality rule" .
destination-org:Type-Data_Product
:canRelateTo destination-org:Type-Policy ;
:relationshipLabel "is governed by" .
destination-org:Type-Data_Product
:canRelateTo dwec:BusinessTerm ;
:relationshipLabel "see also" .
<https://open-kos.org/ext/kos-powerbi/PowerBIReport>
:canRelateTo destination-org:Type-Data_Product ;
:relationshipLabel "is a data product" .
</code></pre>
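<p>One thing worth knowing here: an RDF graph is just a set of triples, so all the <code>:canRelateTo</code> and <code>:relationshipLabel</code> statements with the subject <code>destination-org:Type-Data_Product</code> merge onto that one subject, and nothing records which label belongs to which target. A query with both patterns therefore yields the cross product, and the FILTER only constrains <code>?canRelateTo</code>, not the labels. To keep the pairing, each relationship needs its own intermediate node, for example (using a hypothetical <code>:relationship</code> property and blank nodes):</p>

```turtle
destination-org:Type-Data_Product :relationship [
    :canRelateTo <https://open-kos.org/ext/kos-powerbi/PowerBIReport> ;
    :relationshipLabel "Data Product consists of"
] , [
    :canRelateTo destination-org:Type-Data_Quality_rule ;
    :relationshipLabel "has data quality rule"
] .
```

<p>The query then walks through the intermediate node: <code>?subj :relationship ?r . ?r :canRelateTo ?obj ; :relationshipLabel ?label .</code></p>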
| <python><sparql><rdf><rdflib><turtle-rdf> | 2023-11-20 15:55:19 | 0 | 2,967 | Gero |
77,517,230 | 6,041,915 | Proper URI for mlflow models predict CLI | <p>I have a remote mlflow server up and running. I also have a model in its artifact store. mlflow provides a snippet that can be used for inference:</p>
<pre><code>import mlflow
logged_model = 'runs:/d3d28efecca44e728a8520d9392267e4/model'
mlflow.set_tracking_uri("http://my.server.address:5001")
# Load model as a PyFuncModel.
loaded_model = mlflow.pyfunc.load_model(logged_model)
</code></pre>
<p>This works fine and the model is loaded, I can use it to predict new values.</p>
<p>Now I'd like to use the <code>mlflow models predict</code> CLI. I tried many different combinations, but nothing works. For example:</p>
<pre><code>set MLFLOW_TRACKING_URI=http://my.server.address:5001
mlflow models predict -m 'runs:/d3d28efecca44e728a8520d9392267e4/model' -i 'newdata.csv'
mlflow models predict -m 'http://my.server.address:5001/runs:/d3d28efecca44e728a8520d9392267e4/model' -i 'newdata.csv'
mlflow models predict -m 'http://my.server.address:5001/mlartifacts/d3d28efecca44e728a8520d9392267e4/artifacts/model' -i 'newdata.csv'
mlflow models predict -m 'mlflow-artifacts://my.server.address:5001/mlartifacts/2/d3d28efecca44e728a8520d9392267e4/artifacts/model' -i 'newdata.csv'
</code></pre>
<p>and many others, but nothing works; I get a "Bad URL" error. How should I provide the URI to my model? The documentation here <a href="https://mlflow.org/docs/latest/tracking.html#id14" rel="nofollow noreferrer">https://mlflow.org/docs/latest/tracking.html#id14</a> doesn't cover the topic deeply, I think.</p>
<p>An extra remark - the artifact store is proxied by the remote tracking server, my client has no access to it directly, as described here <a href="https://mlflow.org/docs/latest/tracking.html#scenario-5-mlflow-tracking-server-enabled-with-proxied-artifact-storage-access" rel="nofollow noreferrer">https://mlflow.org/docs/latest/tracking.html#scenario-5-mlflow-tracking-server-enabled-with-proxied-artifact-storage-access</a></p>
| <python><mlflow> | 2023-11-20 15:45:58 | 0 | 702 | Jakub Małecki |
77,517,225 | 6,373,357 | how to resolve IndexError: list index out of range in python | <p>I am working on the Leetcode question <a href="https://leetcode.com/problems/maximum-strong-pair-xor-i/description/" rel="nofollow noreferrer">2932. Maximum Strong Pair XOR I</a>, but I get the error <code>IndexError: list index out of range</code> on the return statement. However, I do not see the problem when stepping through line by line.</p>
<pre><code>def maximumStrongPairXor(nums: List[int]) -> int:
    from itertools import combinations
    pair = list()
    for a, b in combinations(nums, 2):
        if abs(a - b) <= min(a, b):
            pair += [(a ^ b, a, b)]
    return sorted(pair)[-1][0]
</code></pre>
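<p>The likely trigger: a strong pair may use the same element twice (<code>x == y</code> always satisfies <code>|x - y| <= min(x, y)</code> for positive <code>x</code>), but <code>combinations(nums, 2)</code> only yields distinct positions. For inputs like <code>[10, 100]</code> no distinct pair qualifies, <code>pair</code> stays empty, and <code>sorted(pair)[-1]</code> raises the <code>IndexError</code>. A sketch of a fix:</p>

```python
from itertools import combinations_with_replacement
from typing import List

def maximumStrongPairXor(nums: List[int]) -> int:
    # combinations_with_replacement also yields (x, x), so the answer
    # is never taken from an empty list; tracking the running maximum
    # avoids building and sorting the intermediate list entirely.
    best = 0
    for a, b in combinations_with_replacement(nums, 2):
        if abs(a - b) <= min(a, b):
            best = max(best, a ^ b)
    return best

print(maximumStrongPairXor([1, 2, 3, 4, 5]))  # 7
print(maximumStrongPairXor([10, 100]))        # 0
```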
| <python><python-3.x> | 2023-11-20 15:45:30 | 2 | 9,309 | jacobcan118 |
77,517,181 | 10,682,062 | Is there a common way to deploy a Django application with Windows Server IIS? | <p>I deployed a Django application on a customer's Windows Server 2016, but he wants it deployed through IIS, which, as a Linux person, I had never heard of.</p>
<p>So I searched for what it is and how to deploy a Django application using IIS.</p>
<p>The best content I could find about this was on <a href="https://www.mattwoodward.com/2016/07/23/running-a-django-application-on-windows-server-2012-with-iis/" rel="nofollow noreferrer">Matt Woodward</a> post from years ago.</p>
<p>Still it didn't work as expected, and I can't find where the problem is.</p>
<p>specs:</p>
<p>django 4.2
wfastcgi 3.0
python 3.11
windows server 2016</p>
<p>folder structure:</p>
<p><a href="https://i.sstatic.net/OA4KO.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/OA4KO.png" alt="folder structure" /></a></p>
<h1>Steps</h1>
<p>This was my steps trying to get it working.</p>
<h1>Setting up dependencies</h1>
<p>1 - Installed Python and IIS on the OS;</p>
<p>2 - Created virtual environment and installed project requirements + wfastcgi;</p>
<p>3 - Tested Django application - working fine with runserver;</p>
<h1>Configure FastCGI</h1>
<p>1 - As Full path value: C:\Users\one\Documents\app\myApplication\virtualenv\Scripts\python.exe</p>
<p>2 - As arguments value: C:\Users\one\Documents\app\myApplication\virtualenv\Lib\site-packages\wfastcgi.py</p>
<p>3 - For environment variables:</p>
<ul>
<li>DJANGO_SETTINGS_MODULE: core.settings</li>
<li>PYTHONPATH: C:\Users\one\Documents\app\myApplication\core</li>
<li>WSGI_HANDLER: django.core.wsgi.get_wsgi_application()</li>
</ul>
<h1>Configure IIS Site</h1>
<p>1 - As Content Directory/Physical path: C:\Users\one\Documents\app\myApplication\core</p>
<p>2 - For Handler Mappings:</p>
<ul>
<li>Request path: *</li>
<li>Module: FastCgiModule</li>
<li>Executable: C:\Users\one\Documents\app\myApplication\virtualenv\Scripts\python.exe|C:\Users\one\Documents\app\myApplication\virtualenv\Lib\site-packages\wfastcgi.py</li>
</ul>
<p>Now when I go to http://localhost:81</p>
<p>I receive this page <a href="https://i.sstatic.net/W8r1M.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/W8r1M.png" alt="Error" /></a></p>
<p>Now I don't know if there's a problem with the configuration or if it's an authentication problem. Any ideas?</p>
| <python><django><deployment><windows-server> | 2023-11-20 15:40:18 | 1 | 398 | Eduardo Fellipe |
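For reference, a hedged sketch of the equivalent <code>web.config</code> that would live in the site's physical path (paths copied from the question; element names follow IIS's handler schema). The <code>scriptProcessor</code> value must match the FastCGI application's registered "full path|arguments" pair character for character, which is a frequent source of exactly this kind of generic IIS error page:

```xml
<!-- Hypothetical web.config for C:\Users\one\Documents\app\myApplication\core -->
<configuration>
  <system.webServer>
    <handlers>
      <add name="DjangoHandler" path="*" verb="*" modules="FastCgiModule"
           scriptProcessor="C:\Users\one\Documents\app\myApplication\virtualenv\Scripts\python.exe|C:\Users\one\Documents\app\myApplication\virtualenv\Lib\site-packages\wfastcgi.py"
           resourceType="Unspecified" requireAccess="Script" />
    </handlers>
  </system.webServer>
</configuration>
```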
77,517,162 | 38,368 | Visual Studio Code working with Python with PYTHONPATH and separate source and tests folders | <p>Given a project structure:</p>
<pre><code>* project
* src
* my_package
* __init__.py
* code_file.py
* tests
* __init__.py
* my_package
* __init__.py
* code_file_test.py
</code></pre>
<p>How do I get Visual Studio Code to identify that <code>src</code> is the root for code files, so that all of these work:</p>
<ol>
<li><p>intellisense</p>
</li>
<li><p>running</p>
</li>
<li><p>discovering and running tests</p>
</li>
</ol>
| <python><visual-studio-code><python-unittest><pythonpath> | 2023-11-20 15:37:53 | 1 | 18,277 | Danny Varod |
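A hedged sketch of workspace settings that usually cover the three goals (the keys are real VS Code Python-extension settings; the `*_test.py` pattern is an assumption based on the `code_file_test.py` naming):

```json
{
  "python.analysis.extraPaths": ["./src"],
  "python.envFile": "${workspaceFolder}/.env",
  "python.testing.unittestEnabled": true,
  "python.testing.unittestArgs": ["-v", "-s", "./tests", "-p", "*_test.py"]
}
```

`extraPaths` handles IntelliSense; a `.env` file at the workspace root containing `PYTHONPATH=src` handles running and test discovery, since both the debugger and the test runner read `python.envFile`.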
77,517,057 | 2,414,957 | AttributeError: module 'numpy' has no attribute 'bool'. when importing glumpy | <pre><code>>>> import glumpy
/home/mona/anaconda3/envs/clean-pvnet/lib/python3.10/site-packages/glumpy/gloo/variable.py:82: FutureWarning: In the future `np.bool` will be defined as the corresponding NumPy scalar.
gl.GL_BOOL : ( 1, gl.GL_BOOL, np.bool),
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/mona/anaconda3/envs/clean-pvnet/lib/python3.10/site-packages/glumpy/__init__.py", line 7, in <module>
from . import app
File "/home/mona/anaconda3/envs/clean-pvnet/lib/python3.10/site-packages/glumpy/app/__init__.py", line 23, in <module>
from . console import Console
File "/home/mona/anaconda3/envs/clean-pvnet/lib/python3.10/site-packages/glumpy/app/console.py", line 7, in <module>
from glumpy import gl, glm, gloo
File "/home/mona/anaconda3/envs/clean-pvnet/lib/python3.10/site-packages/glumpy/gloo/__init__.py", line 7, in <module>
from . program import Program
File "/home/mona/anaconda3/envs/clean-pvnet/lib/python3.10/site-packages/glumpy/gloo/program.py", line 16, in <module>
from . variable import gl_typeinfo, Uniform, Attribute
File "/home/mona/anaconda3/envs/clean-pvnet/lib/python3.10/site-packages/glumpy/gloo/variable.py", line 82, in <module>
gl.GL_BOOL : ( 1, gl.GL_BOOL, np.bool),
File "/home/mona/.local/lib/python3.10/site-packages/numpy/__init__.py", line 324, in __getattr__
raise AttributeError(__former_attrs__[attr])
AttributeError: module 'numpy' has no attribute 'bool'.
`np.bool` was a deprecated alias for the builtin `bool`. To avoid this error in existing code, use `bool` by itself. Doing this will not modify any behavior and is safe. If you specifically wanted the numpy scalar type, use `np.bool_` here.
The aliases was originally deprecated in NumPy 1.20; for more details and guidance see the original release note at:
https://numpy.org/devdocs/release/1.20.0-notes.html#deprecations. Did you mean: 'bool_'?
</code></pre>
<p>I changed the following line in the file <code>~/anaconda3/envs/clean-pvnet/lib/python3.10/site-packages/glumpy/gloo/variable.py</code>:</p>
<p><code>gl.GL_BOOL : ( 1, gl.GL_BOOL, np.bool),</code></p>
<p>to</p>
<p><code>gl.GL_BOOL : ( 1, gl.GL_BOOL, np.bool_),</code></p>
<p>and still get the same error.</p>
<p>It seems the module is not being reloaded despite opening a new terminal tab and <code>conda activate clean-pvnet</code>.</p>
| <python><numpy><conda><glumpy> | 2023-11-20 15:24:09 | 2 | 38,867 | Mona Jalal |
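A hedged workaround that avoids editing the installed package at all: NumPy 1.24 removed the deprecated `np.bool` alias, but glumpy still looks it up at import time, so restoring the alias before the import succeeds:

```python
import numpy as np

# NumPy >= 1.24 removed the deprecated np.bool alias; glumpy's
# variable.py reads it at module import. Restoring the alias (run this
# before `import glumpy`) lets the old lookup succeed.
if not hasattr(np, "bool"):
    np.bool = bool
```

Also worth checking: the traceback loads NumPy from `~/.local/lib/python3.10/site-packages` rather than from the conda env, so two NumPy installs may be in play; `python -m site` shows which paths win, which could also explain why an edit appears to have no effect.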
77,516,898 | 6,494,707 | How to make sure generated random bounding boxes within another bounding box are not overlapping? | <p>I have a bounding box:</p>
<pre><code>a = [233.9259, 16.3902, 356.8651, 426.9131]
import shapely.geometry
bbox = (233.9259, 16.3902, 356.8651, 426.9131)
polygon = shapely.geometry.box(*bbox, ccw=True)
polygon.bounds # (233.9259, 16.3902, 356.8651, 426.9131)
</code></pre>
<p>and I randomly generated 10 random boundingboxes with the size <code>8x8</code></p>
<pre><code>import random
from shapely.geometry import Polygon, Point
min_x, min_y, max_x, max_y = polygon.bounds
width = 8  # to be defined
height = 8  # to be defined
num_polygons = 10  # to be defined
random_poly = []
while len(random_poly) < num_polygons:
rand_x = random.uniform(min_x, max_x)
rand_y = random.uniform(min_y, max_y)
left = Point([rand_x, rand_y])
bottom = Point([rand_x, rand_y - height])
right = Point([rand_x + width, rand_y - height])
top = Point([rand_x + width, rand_y])
new_poly = Polygon([left, bottom, right, top])
if polygon.contains(new_poly):
random_poly.append(new_poly)
</code></pre>
<p><a href="https://i.sstatic.net/pZ8JT.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/pZ8JT.jpg" alt="enter image description here" /></a></p>
<p>How can I make sure these generated bounding boxes are non-overlapping?</p>
| <python><python-3.x><polygon><shapely> | 2023-11-20 15:00:12 | 1 | 2,236 | S.EB |
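A pure-stdlib sketch of the rejection-sampling idea (function name is mine): keep a candidate only if it lies inside the outer box and overlaps none of the boxes accepted so far. The shapely equivalent of the overlap test would be `not any(new_poly.intersects(p) for p in random_poly)` before the append — noting that `intersects` also counts boxes that merely share an edge:

```python
import random

def sample_non_overlapping(bounds, w, h, n, max_tries=10_000):
    """Rejection-sample n axis-aligned w-by-h boxes inside `bounds`
    (min_x, min_y, max_x, max_y) so that no two boxes overlap."""
    min_x, min_y, max_x, max_y = bounds
    boxes = []  # each box stored as (x0, y0, x1, y1)
    tries = 0
    while len(boxes) < n and tries < max_tries:
        tries += 1
        x0 = random.uniform(min_x, max_x - w)   # keep box inside the outer bbox
        y0 = random.uniform(min_y, max_y - h)
        cand = (x0, y0, x0 + w, y0 + h)
        # strict interior-overlap test for axis-aligned boxes
        overlaps = any(cand[0] < b[2] and b[0] < cand[2] and
                       cand[1] < b[3] and b[1] < cand[3] for b in boxes)
        if not overlaps:
            boxes.append(cand)
    return boxes
```

The `max_tries` cap matters: when `n` boxes cannot fit, pure rejection sampling would otherwise loop forever.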
77,516,874 | 978,074 | Moto and boto3 mock only when specifying the specific Python file | <p>I'm using moto to mock my AWS calls in my unit tests. When I run <code>pytest tests/test_file.py</code>, the unit test succeeds and boto3 gets mocked.</p>
<p>When I run <code>pytest tests/</code> to detect all the test files in the directory, <code>boto3</code> doesn't get mocked by moto, causing the error below.</p>
<p><code>botocore.exceptions.ClientError: An error occurred (ExpiredToken) when calling the CreateRole operation: The security token included in the request is expired</code></p>
<p>Is there a specific flag I need to use in <code>pytest</code> to make it behave as if I'm running an individual file?</p>
| <python><unit-testing><pytest><boto3><moto> | 2023-11-20 14:57:39 | 0 | 633 | dwardu |
77,516,784 | 2,955,997 | Python Tkinter: add spacing to a label with multiple lines | <p>I'm using Tkinter with Python 3.9.10.</p>
<p>I have a label which includes multiple lines. The spacing between the lines is very narrow, and I'm trying to increase it.
I will appreciate any suggestion.</p>
<p>My code:</p>
<pre><code>from tkinter import *
window = Tk()
window.geometry("600x400")
window.title("Terpi App")
label = Label(window, text="Lorem ipsum dolor sit amet, consectetur adipiscing elit. \n"
"Nam posuere mi sit amet dolor eleifend, nec dignissim enim volutpat\n"
"Morbi ut facilisis magna, ut scelerisque dolor. \n"
"Mauris non nulla mi.", font=("ariel", 20), justify="left")
label.place(relx=0.5, rely=0.5, anchor="center")
window.mainloop()
</code></pre>
<p><a href="https://i.sstatic.net/pXkbH.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/pXkbH.png" alt="enter image description here" /></a></p>
| <python><tkinter> | 2023-11-20 14:47:08 | 0 | 5,832 | ibezito |
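`tk.Label` itself has no line-spacing option, so one common route is a read-only `tk.Text`, whose `spacing1`/`spacing3` options add pixels above/below each line (and `spacing2` between wrapped display lines). A hedged sketch, with a guard so it degrades gracefully on display-less machines:

```python
import tkinter as tk

# A Text widget standing in for a multi-line Label, with extra spacing
# around each line via spacing1/spacing3 (pixels).
try:
    window = tk.Tk()
except tk.TclError:   # e.g. no display available
    window = None

if window is not None:
    text = tk.Text(window, font=("Arial", 20), height=4, borderwidth=0,
                   spacing1=8, spacing3=8)
    text.insert("1.0", "Lorem ipsum dolor sit amet.\n"
                       "Nam posuere mi sit amet dolor.\n"
                       "Morbi ut facilisis magna.")
    text.configure(state="disabled")  # read-only, so it behaves like a label
    text.pack(expand=True)
    # window.mainloop()  # uncomment to run interactively
```

Another option that keeps plain Labels is one Label per line packed with `pady`, at the cost of losing the single-widget text.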
77,516,518 | 2,386,605 | How to enforce Pydantic error raising for wrong generic types | <p>Below you can see my code</p>
<pre><code>from pydantic import BaseModel
from typing import TypeVar, Generic
T = TypeVar('T')
class GenericItem(BaseModel, Generic[T]):
item: T
# Examples of using GenericItem
string_item = GenericItem[str](item="example")
int_item = GenericItem[int](item=42)
# This will raise a validation error because 'item' is expected to be of type 'int'
try:
invalid_item = GenericItem[int](item="invalid")
except Exception as e:
print(f"Validation error: {e}")
</code></pre>
<p>I want to use a generic value <code>T</code> and I have a field <code>item</code> of that value.</p>
<p>When I create <code>GenericItem[int](item="invalid")</code>, it should throw an error because the input should be an <code>int</code>, but it is a <code>str</code>. Yet no error shows up.</p>
<p>How can I make that possible, such that in this case it would just accept input of the specified type <code>T</code>?</p>
| <python><python-3.x><pydantic><typing> | 2023-11-20 14:07:06 | 1 | 879 | tobias |
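A hedged note: this behavior is version-dependent. Pydantic v1 only enforces type parameters on `pydantic.generics.GenericModel` subclasses and silently ignores them on plain `BaseModel` + `Generic`, which is the usual reason "no error shows up"; Pydantic v2 enforces them on exactly the class shown in the question:

```python
from typing import Generic, TypeVar

from pydantic import BaseModel, ValidationError

T = TypeVar("T")

class GenericItem(BaseModel, Generic[T]):
    item: T

# Under Pydantic v2 the parametrized model validates the field, and
# "not-an-int" cannot be coerced to int, so ValidationError is raised.
try:
    GenericItem[int](item="not-an-int")
    raised = False
except ValidationError:
    raised = True
```

Under v1 the fix would be inheriting from `GenericModel` instead of `BaseModel`.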
77,516,515 | 595,599 | can not parse a simple FIX message using simplefix | <p>I'm trying to parse a FIX message using Python <code>simplefix</code>:</p>
<pre class="lang-py prettyprint-override"><code>import simplefix
parser = simplefix.FixParser()
parser.append_buffer("8=FIX.4.2|35=A|49=SENDER_COMP_ID|56=TARGET_COMP_ID|34=1|98=0|108=30|10=112|")
message = parser.get_message()
</code></pre>
<p>Alas, <code>message</code> is <code>None</code>.
What am I doing wrong?</p>
| <python><fix-protocol> | 2023-11-20 14:06:48 | 0 | 1,298 | DuduArbel |
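Two likely culprits, hedged: raw FIX separates fields with SOH (`\x01`) bytes, not the human-readable `|`, and the sample also lacks BodyLength (tag 9), which framing-aware parsers use to locate the end of a message — either can leave `get_message()` returning `None`. A minimal translation of the pipe transcript into parser-ready bytes:

```python
# Convert a pipe-delimited FIX transcript into the raw SOH-delimited
# bytes a FIX parser expects. (Tag 9, BodyLength, is still missing from
# this sample message, so a framing-aware parser may still wait for more.)
raw = "8=FIX.4.2|35=A|49=SENDER_COMP_ID|56=TARGET_COMP_ID|34=1|98=0|108=30|10=112|"
buf = raw.replace("|", "\x01").encode("ascii")
```

For round-trip tests, building the message with `simplefix.FixMessage` and calling `encode()` produces correctly framed bytes (with tags 9 and 10 computed) that the parser is then guaranteed to accept.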
77,516,323 | 9,479,925 | How to do formatting and trimming in a polars dataframe? | <p>I have a data frame with about 100 columns of type <code>pl.Utf8</code>, and I would like to trim and proper/title-case each of them in one go.</p>
<pre><code>df_.with_columns(pl.all().str.strip_chars())
</code></pre>
| <python><python-polars> | 2023-11-20 13:37:28 | 1 | 1,518 | myamulla_ciencia |
77,516,280 | 461,499 | Installing youtokentome with Poetry requires Cython, unable to configure correctly | <p>I'm trying to convert <a href="https://github.com/MahmoudAshraf97/whisper-diarization" rel="nofollow noreferrer">whisper-diarization</a> to <code>poetry</code>.</p>
<p>It goes well until I add <code>nemo_toolkit[asr]==1.20.0</code>, which depends on <code>youtokentome</code> (that name is well thought of, btw).</p>
<pre><code> File "/tmp/tmpexmdke23/.venv/lib/python3.10/site-packages/setuptools/build_meta.py", line 507, in run_setup
super(_BuildMetaLegacyBackend, self).run_setup(setup_script=setup_script)
File "/tmp/tmpexmdke23/.venv/lib/python3.10/site-packages/setuptools/build_meta.py", line 341, in run_setup
exec(code, locals())
File "<string>", line 5, in <module>
ModuleNotFoundError: No module named 'Cython'
</code></pre>
<p>So I tried adding <code>cython</code> to the dependencies. It works fine if I run <code>poetry shell</code> and execute <code>cython</code>, so it is available.</p>
<p>My pyproject so far:</p>
<pre><code>...
[tool.poetry.dependencies]
python = "^3.10"
faster-whisper = "0.9.0"
wget = "^3.2"
transformers = ">=4.26.1"
whisperx = {git = "https://github.com/m-bain/whisperX.git", rev = "49e0130e4e0c0d99d60715d76e65a71826a97109"}
deepmultilingualpunctuation = "^1.0.1"
cython = "^3.0.5"
[build-system]
requires = ["poetry-core", "cython"]
build-backend = "poetry.core.masonry.api"
</code></pre>
<p>I added cython to the <code>requires</code> section, but that doesn't resolve the error.</p>
| <python><cython><python-poetry><openai-whisper> | 2023-11-20 13:31:01 | 2 | 20,319 | Rob Audenaerde |
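A hedged reading of why the edit doesn't help: the `[build-system]` table in a `pyproject.toml` governs building *that* project, not its dependencies — pip builds `youtokentome` in an isolated environment that never sees the env's Cython. One workaround sketch (package names taken from the question; `--no-build-isolation` makes the build use the environment's installed packages):

```shell
# Pre-install the missing build dependency, then build the legacy
# setup.py package without pip's isolated build environment:
poetry run pip install cython
poetry run pip install --no-build-isolation youtokentome
# with youtokentome already satisfied, let poetry resolve the rest
poetry add "nemo_toolkit[asr]==1.20.0"
```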
77,516,173 | 3,743,899 | Python Tkinter - how to loop over data with a next button | <p>My goal is to read a JSON file and create a GUI with labels containing the first data set. Now I want a "next" button which should update the GUI labels with the second data set, and so on.</p>
<p>So after starting I expect to see: 127#13:21:17#13:34:30,
and after clicking "next" I expect: 168#13:16:50#13:30:35,
and so on...</p>
<p>I'm new to Python, and so far the Tkinter documentation hasn't helped me. Any guidance is appreciated.</p>
<p><strong>test.py</strong></p>
<pre><code>import tkinter as tk
import json
f = open('bus_arrival.json')
myJSON = json.load(f)
window = tk.Tk()
canvas = tk.Canvas(window, width=300, height=80)
canvas.pack()
window.title("Hello World")
def next_btn():
print("test")
button = tk.Button(text='Next', command=next_btn)
canvas.create_window(230, 20, window=button)
# the result is a Python dictionary:
for ad in myJSON["bus"]:
text = tk.Label(window, text=ad["Bus Service"] + "#" + ad["1st Bus"] + "#" + ad["2nd Bus"])
canvas.create_window(150, 20, window=text)
window.mainloop()
</code></pre>
<p><strong>bus_arrival.json</strong></p>
<pre><code>{
"bus": [
{
"Bus Service": "127",
"1st Bus": "13:21:17",
"2nd Bus": "13:34:30"
},
{
"Bus Service": "168",
"1st Bus": "13:16:50",
"2nd Bus": "13:30:35"
},
{
"Bus Service": "27",
"1st Bus": "13:12:38",
"2nd Bus": "13:21:00"
},
{
"Bus Service": "72",
"1st Bus": "13:13:24",
"2nd Bus": "13:20:45"
},
{
"Bus Service": "",
"1st Bus": "",
"2nd Bus": ""
}
]
}
</code></pre>
| <python><tkinter> | 2023-11-20 13:16:22 | 1 | 678 | Yaerox |
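One way to structure this, sketched without the GUI so the cycling logic stays visible (the class and method names are mine): keep a single index into the records, and have the button's command advance it and update one persistent Label via `label.config(text=...)`, instead of creating a new Label per record inside a loop:

```python
class BusCycler:
    """Steps through a list of records; `next` wraps around at the end."""

    def __init__(self, records):
        self.records = records   # e.g. json.load(f)["bus"]
        self.index = 0

    def current_text(self):
        ad = self.records[self.index]
        return f'{ad["Bus Service"]}#{ad["1st Bus"]}#{ad["2nd Bus"]}'

    def next(self):
        self.index = (self.index + 1) % len(self.records)
        return self.current_text()
```

In the Tk code, `next_btn` would then become `def next_btn(): label.config(text=cycler.next())`, with a single `label = tk.Label(window, text=cycler.current_text())` created once at startup.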
77,516,160 | 20,176,161 | Excluding a list of strings using str.contains: error: nothing to repeat at position 181436 | <p>I have a list of strings that I am trying to exclude from a dataframe.</p>
<p>The list is built as follows:</p>
<pre><code>Filtre=list(transactions_located.titre.unique())
</code></pre>
<p>The output is:</p>
<pre><code>['1abcvd/C', '1abcvdee/C', '1abcvdeedd/C', '1abcvdfdfs/32', '1abcvdfadfd/C',....]
</code></pre>
<p>I am trying to exclude the strings above and create a new dataframe as follows:</p>
<pre><code>temp=transactions.loc[~transactions['titre'].str.contains('|'.join(Filtre), case=False, na=False)]
</code></pre>
<p>I get this error:</p>
<pre><code>error: nothing to repeat at position 181436
</code></pre>
<p>What's wrong with my approach? Thanks!</p>
| <python><pandas><string><dataframe><list> | 2023-11-20 13:13:31 | 1 | 419 | bravopapa |
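A hedged reading of the error: `str.contains` treats the joined string as a regular expression, so any title containing a regex metacharacter (`+`, `*`, `(`, `?`, …) breaks the pattern — "nothing to repeat" is the classic symptom of a stray quantifier. Escaping each string first avoids it (the list values here are illustrative):

```python
import re

# Titles like '1abcvd/C' are regex-safe, but metacharacters in others
# would be interpreted as regex syntax. re.escape makes each literal.
filtre = ['1abcvd/C', 'odd+title', 'worse(title']
pattern = '|'.join(re.escape(s) for s in filtre)
```

Then `transactions.loc[~transactions['titre'].str.contains(pattern, case=False, na=False)]` works as before; and if only exact (rather than substring) matches should be excluded, `~transactions['titre'].isin(Filtre)` sidesteps regex entirely and is much faster for thousands of values.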
77,515,931 | 3,623,537 | Python Debugger (PDB): open currently active python file in editor | <p>Is there some way, while debugging with <code>pdb</code>, to get:</p>
<ul>
<li>currently opened python file</li>
<li>currently active line of code</li>
<li>indent on the current line of code</li>
</ul>
<p>and open the file in default text editor?</p>
<p>Example: after running the code below, stepping into the <code>Path</code> call with <code>s</code>, and then using the command/code I'm looking for, I want to open the file <code>c:\python311\lib\pathlib.py</code> at line 868, at the 5th column.</p>
<pre class="lang-py prettyprint-override"><code>from pathlib import Path
import pdb; pdb.set_trace()
p = Path()
</code></pre>
<p>The problem is only about getting the current Python file in the <code>pdb</code> context. Opening the file itself is trivial: <code>os.system(r"C:\test.py")</code>. The active code line and indent are needed so I can open it in VS Code at the exact position: <code>vsc --goto "C:\test.py:1:5"</code> (from <code>vsc --goto "<filepath>:<linenumber>:<x-coordinates>"</code>).</p>
<p>It would be nice if there were some native way to do it directly from <code>pdb</code>, but since there's nothing related in the documentation, the solution is probably a custom method - which is okay too.</p>
<p>I've already tried to use the <code>inspect</code> module, but it seems that <code>pdb</code> starts a new frame in which the actual debugged Python file is not accessible:</p>
<pre class="lang-py prettyprint-override"><code>from inspect import currentframe, getframeinfo
frameinfo = getframeinfo(currentframe())
print(frameinfo.filename, frameinfo.lineno)
# <stdin> 1
</code></pre>
| <python><pdb> | 2023-11-20 12:31:39 | 1 | 469 | FamousSnake |
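A hedged sketch of one route: inside a session, the frame being debugged is held in the `Pdb` instance's `curframe` attribute, which carries the file and line (a per-frame column isn't tracked, so `--goto` gets `file:line` only; `code` is assumed to be VS Code's CLI on PATH). A subclass can expose this as a custom debugger command:

```python
import pdb
import subprocess

class VSCodePdb(pdb.Pdb):
    def do_vsc(self, arg):
        """vsc -- open the currently debugged file/line in VS Code."""
        frame = self.curframe  # the debugged frame, not pdb's own frame
        target = f"{frame.f_code.co_filename}:{frame.f_lineno}"
        subprocess.run(["code", "--goto", target])

# usage: replace `pdb.set_trace()` with `VSCodePdb().set_trace()`,
# then type `vsc` at the (Pdb) prompt after stepping with `s`.
```

This also explains the `inspect` result: code typed at the prompt runs in a fresh frame, whereas `self.curframe` points at the program's own stack.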
77,515,875 | 1,084,174 | Getting error "error: subprocess-exited-with-error" while installing with "pip install --cert" | <p>I tried to install using the Anaconda prompt:</p>
<blockquote>
<p>pip --cert "D:\DDownloads\MyCert.pem" install tflite_support</p>
</blockquote>
<p>and ended up with this error:</p>
<pre><code> Collecting tflite_support
Using cached tflite-support-0.1.0a1.tar.gz (390 kB)
Preparing metadata (setup.py) ... error
error: subprocess-exited-with-error
× python setup.py egg_info did not run successfully.
│ exit code: 1
╰─> [58 lines of output]
D:\AI\Anaconda\Lib\site-packages\setuptools\__init__.py:84: _DeprecatedInstaller: setuptools.installer and fetch_build_eggs are deprecated.
!!
********************************************************************************
Requirements should be satisfied by a PEP 517 installer.
If you are using pip, you can try `pip install --use-pep517`.
********************************************************************************
!!
dist.fetch_build_eggs(dist.setup_requires)
WARNING: Retrying (Retry(total=4, connect=None, read=None, redirect=None, status=None)) after connection broken by 'SSLError(SSLCertVerificationError(1, '[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: unable to get local issuer certificate (_ssl.c:1006)'))': /simple/pybind11/
WARNING: Retrying (Retry(total=3, connect=None, read=None, redirect=None, status=None)) after connection broken by 'SSLError(SSLCertVerificationError(1, '[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: unable to get local issuer certificate (_ssl.c:1006)'))': /simple/pybind11/
WARNING: Retrying (Retry(total=2, connect=None, read=None, redirect=None, status=None)) after connection broken by 'SSLError(SSLCertVerificationError(1, '[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: unable to get local issuer certificate (_ssl.c:1006)'))': /simple/pybind11/
WARNING: Retrying (Retry(total=1, connect=None, read=None, redirect=None, status=None)) after connection broken by 'SSLError(SSLCertVerificationError(1, '[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: unable to get local issuer certificate (_ssl.c:1006)'))': /simple/pybind11/
WARNING: Retrying (Retry(total=0, connect=None, read=None, redirect=None, status=None)) after connection broken by 'SSLError(SSLCertVerificationError(1, '[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: unable to get local issuer certificate (_ssl.c:1006)'))': /simple/pybind11/
ERROR: Could not find a version that satisfies the requirement pybind11>=2.4 (from versions: none)
ERROR: No matching distribution found for pybind11>=2.4
Traceback (most recent call last):
File "D:\AI\Anaconda\Lib\site-packages\setuptools\installer.py", line 96, in _fetch_build_egg_no_warn
subprocess.check_call(cmd)
File "D:\AI\Anaconda\Lib\subprocess.py", line 413, in check_call
raise CalledProcessError(retcode, cmd)
subprocess.CalledProcessError: Command '['D:\\AI\\Anaconda\\python.exe', '-m', 'pip', '--disable-pip-version-check', 'wheel', '--no-deps', '-w', 'C:\\Users\\user.sk\\AppData\\Local\\Temp\\tmpfjgjp7hu', '--quiet', 'pybind11>=2.4']' returned non-zero exit status 1.
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "<string>", line 2, in <module>
File "<pip-setuptools-caller>", line 34, in <module>
File "C:\Users\user.sk\AppData\Local\Temp\pip-install-ok_9cx2o\tflite-support_76f7e4f7f27f407ba020192d982c23a5\setup.py", line 143, in <module>
setup(
File "D:\AI\Anaconda\Lib\site-packages\setuptools\__init__.py", line 106, in setup
_install_setup_requires(attrs)
File "D:\AI\Anaconda\Lib\site-packages\setuptools\__init__.py", line 79, in _install_setup_requires
_fetch_build_eggs(dist)
File "D:\AI\Anaconda\Lib\site-packages\setuptools\__init__.py", line 84, in _fetch_build_eggs
dist.fetch_build_eggs(dist.setup_requires)
File "D:\AI\Anaconda\Lib\site-packages\setuptools\dist.py", line 907, in fetch_build_eggs
return _fetch_build_eggs(self, requires)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\AI\Anaconda\Lib\site-packages\setuptools\installer.py", line 38, in _fetch_build_eggs
resolved_dists = pkg_resources.working_set.resolve(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\AI\Anaconda\Lib\site-packages\pkg_resources\__init__.py", line 829, in resolve
dist = self._resolve_dist(
^^^^^^^^^^^^^^^^^^^
File "D:\AI\Anaconda\Lib\site-packages\pkg_resources\__init__.py", line 865, in _resolve_dist
dist = best[req.key] = env.best_match(
^^^^^^^^^^^^^^^
File "D:\AI\Anaconda\Lib\site-packages\pkg_resources\__init__.py", line 1135, in best_match
return self.obtain(req, installer)
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\AI\Anaconda\Lib\site-packages\pkg_resources\__init__.py", line 1147, in obtain
return installer(requirement)
^^^^^^^^^^^^^^^^^^^^^^
File "D:\AI\Anaconda\Lib\site-packages\setuptools\installer.py", line 98, in _fetch_build_egg_no_warn
raise DistutilsError(str(e)) from e
distutils.errors.DistutilsError: Command '['D:\\AI\\Anaconda\\python.exe', '-m', 'pip', '--disable-pip-version-check', 'wheel', '--no-deps', '-w', 'C:\\Users\\user.sk\\AppData\\Local\\Temp\\tmpfjgjp7hu', '--quiet', 'pybind11>=2.4']' returned non-zero exit status 1.
[end of output]
note: This error originates from a subprocess, and is likely not a problem with pip.
error: metadata-generation-failed
× Encountered error while generating package metadata.
╰─> See above for output.
note: This is an issue with the package mentioned above, not pip.
hint: See above for details.
</code></pre>
<p><strong>How can I resolve this issue?</strong></p>
<p>Note that I have a valid certificate that has been used successfully before.</p>
| <python><installation><pip><package><anaconda> | 2023-11-20 12:20:40 | 1 | 40,671 | Sazzad Hissain Khan |
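A hedged reading of the log: the SSL failures happen inside a nested pip run (`python -m pip ... pybind11>=2.4`) that setuptools spawns to satisfy `setup_requires`, and that child process does not inherit the `--cert` flag from the outer command. Settings that reach every pip invocation, including nested ones, would be an environment variable or pip's config file (path copied from the question):

```shell
:: Windows cmd / Anaconda prompt
set PIP_CERT=D:\DDownloads\MyCert.pem
pip install --use-pep517 tflite_support

:: or, persistently:
pip config set global.cert D:\DDownloads\MyCert.pem
```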
77,515,630 | 853,561 | Incorrect Method Name in Python Logging Output: Always Displays 'logging:callHandlers:1706' Instead of Actual Method Name | <p>I am encountering an issue with Python logging.</p>
<p>The expected method name in the log output is replaced with <code>logging:callHandlers:1706</code></p>
<p>Initially, my log entries displayed in the format <code>[level] [date] [method] [text]</code> but now the 'method' field consistently shows 'logging:callHandlers:1706.'</p>
<p>I have ensured that each file contains the line <code>logger = logging.getLogger(__name__)</code> and that I systematically use this logger variable for logging (e.g. <code>logger.error(f"Deleting job task for: {e}")</code>).</p>
| <python><logging> | 2023-11-20 11:44:02 | 1 | 344 | magnum87 |
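Without the project's code the cause can't be pinned down, but the usual one is logging through a helper/wrapper layer: `%(funcName)s`/`%(lineno)d` report the frame that called into `logging`, so a wrapper (or deeper indirection, which can surface internals like `callHandlers`) shows up instead of the real caller. Python 3.8+'s `stacklevel` argument skips those frames. A self-contained illustration:

```python
import io
import logging

stream = io.StringIO()
logging.basicConfig(stream=stream, level=logging.INFO, force=True,
                    format="%(module)s:%(funcName)s:%(lineno)d %(message)s")
logger = logging.getLogger(__name__)

def log_info(msg):
    # Without stacklevel=2, %(funcName)s here would report "log_info"
    # (or, with more layers of indirection, logging internals) instead
    # of the function that actually asked for the log line.
    logger.info(msg, stacklevel=2)

def business_logic():
    log_info("working")

business_logic()
```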
77,515,483 | 2,636,044 | Mocking method that calls other methods | <p>I have a class</p>
<pre><code>class X:
def __init__(self, db):
self.db = db
def get_data_from_friend(self):
return None
def get_data_from_db(self):
return self.db.get_my_db_data()
def get_data(self):
if data := self.get_data_from_friend():
return data
return self.get_data_from_db()
</code></pre>
<p>And I'm trying to test the <code>get_data</code> method, to validate that the calls inside of it are executed.</p>
<p>I have a test like this</p>
<pre><code>def test_get_data(self):
mock = create_autospec(X)
mock.get_data()
mock.get_data.assert_called_once() # <-- works
mock.get_data_from_friend.assert_called_once() # <-- assertionError, not called
</code></pre>
<p>What am I missing here?</p>
| <python><pytest><python-unittest><python-unittest.mock> | 2023-11-20 11:21:54 | 1 | 1,339 | Onilol |
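A hedged explanation: `create_autospec(X)` replaces *every* attribute — including `get_data` — with a mock, so `mock.get_data()` records a call but never executes the real body, and the inner methods are never reached. A sketch that exercises the real `get_data` while mocking only its collaborators (class copied from the question, minus the unused constructor argument wiring):

```python
from unittest.mock import MagicMock, patch

class X:
    def __init__(self, db):
        self.db = db

    def get_data_from_friend(self):
        return None

    def get_data_from_db(self):
        return self.db.get_my_db_data()

    def get_data(self):
        if data := self.get_data_from_friend():
            return data
        return self.get_data_from_db()

def test_get_data():
    x = X(db=MagicMock())
    # Patch only the collaborators; get_data itself runs for real.
    with patch.object(X, "get_data_from_friend", return_value=None) as friend, \
         patch.object(X, "get_data_from_db", return_value="db-data") as db:
        assert x.get_data() == "db-data"
        friend.assert_called_once()
        db.assert_called_once()
```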
77,515,253 | 8,030,746 | Why am I getting this sqlite3.ProgrammingError error? | <p>The error is:</p>
<pre><code>sqlite3.ProgrammingError: Incorrect number of bindings supplied. The current statement uses 0, and there are 1 supplied.
</code></pre>
<p>And this is the code causing it:</p>
<pre><code>data = c.execute('''SELECT * FROM job WHERE title LIKE "%?%"''', (user_input,)).fetchall()
</code></pre>
<p>For more context, I'm trying to create a functioning search bar with Python, Flask and SQlite3, where c.execute here is supposed to return the data from my database based on user input. But I'm having trouble configuring <code>SELECT</code> so it works with partial matches too, hence the use of <code>LIKE</code>. What am I doing wrong?</p>
| <python><sql><sqlite><flask> | 2023-11-20 10:50:21 | 1 | 851 | hemoglobin |
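A hedged sketch of the fix: inside a quoted string literal, `?` is just a literal question mark, not a placeholder — which is why SQLite reports zero bindings while one parameter was supplied. Building the `LIKE` pattern in Python and binding it as the parameter keeps the query safe and enables partial matches (the table and sample rows here are made up):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
c = conn.cursor()
c.execute("CREATE TABLE job (title TEXT)")
c.executemany("INSERT INTO job VALUES (?)", [("engineer",), ("designer",)])

user_input = "gin"
# The ? placeholder stands alone; the % wildcards travel inside the
# bound value, so user input is still safely parameterized.
rows = c.execute(
    "SELECT * FROM job WHERE title LIKE ?", (f"%{user_input}%",)
).fetchall()
```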
77,515,235 | 5,305,512 | Pose estimation web app running extremely slow in deployment | <p>I have the following script that takes two input videos of people dancing, compares them frame by frame for body movements (pose estimation), and then displays in real time the accuracy with which the user video copies the movements of the person in the benchmark video:</p>
<pre><code>import streamlit as st
import mediapipe as mp
import cv2, tempfile
import numpy as np
# Initialize MediaPipe pose detection
mp_pose = mp.solutions.pose
mp_drawing = mp.solutions.drawing_utils
pose = mp_pose.Pose()
# Streamlit layout
st.title("AI Dance Trainer")
with st.sidebar:
st.header("Video Upload")
benchmark_video_file = st.file_uploader("Upload a benchmark video", type=["mp4", "mov", "avi", "mkv"], key="benchmark")
uploaded_video = st.file_uploader("Upload your video", type=["mp4", "mov", "avi", "mkv"], key="user_video")
# Initialize Streamlit session state
if 'playing' not in st.session_state:
st.session_state.playing = False
# Start/Clear button logic
if not st.session_state.playing:
if st.button('Start'):
st.session_state.playing = True
else:
if st.button('Clear'):
st.session_state.playing = False
# Function to save uploaded file to a temporary file and return the path
def save_uploaded_file(uploaded_file):
if uploaded_file is not None:
with tempfile.NamedTemporaryFile(delete=False, suffix='.' + uploaded_file.name.split('.')[-1]) as tmp_file:
tmp_file.write(uploaded_file.getvalue())
return tmp_file.name
return None
# Function to calculate cosine distance
def cosine_distance(landmarks1, landmarks2):
if landmarks1 and landmarks2:
points1 = np.array([(lm.x, lm.y, lm.z) for lm in landmarks1.landmark])
points2 = np.array([(lm.x, lm.y, lm.z) for lm in landmarks2.landmark])
dot_product = np.dot(points1.flatten(), points2.flatten())
norm_product = np.linalg.norm(points1.flatten()) * np.linalg.norm(points2.flatten())
similarity = dot_product / norm_product
distance = 1 - similarity
return distance
else:
return 1
# Main video processing logic
if st.session_state.playing and benchmark_video_file and uploaded_video:
# Save uploaded videos to temporary files and read them
temp_file_path_benchmark = save_uploaded_file(benchmark_video_file)
temp_file_path_user = save_uploaded_file(uploaded_video)
cap_benchmark = cv2.VideoCapture(temp_file_path_benchmark)
cap_user = cv2.VideoCapture(temp_file_path_user)
# Check if videos are valid
if not cap_benchmark.isOpened() or not cap_user.isOpened():
st.error("Failed to open video streams. Please check the video files.")
st.session_state.playing = False
else:
# Layout for videos
col1, col2, col3 = st.columns([1, 1, 1])
# Create placeholders for videos and statistics
benchmark_video_placeholder = col1.empty()
user_video_placeholder = col2.empty()
stats_placeholder = col3.empty()
correct_steps = 0
total_frames = 0
# Process and display videos
while st.session_state.playing:
ret_benchmark, frame_benchmark = cap_benchmark.read()
ret_user, frame_user = cap_user.read()
if not ret_benchmark or not ret_user:
break
total_frames += 1
# Pose detection for benchmark
image_benchmark = cv2.cvtColor(frame_benchmark, cv2.COLOR_BGR2RGB)
# Pose detection for user
image_user = cv2.cvtColor(frame_user, cv2.COLOR_BGR2RGB)
image_user.flags.writeable = False
results_user = pose.process(image_user)
image_user.flags.writeable = True
image_user = cv2.cvtColor(image_user, cv2.COLOR_RGB2BGR)
if results_user.pose_landmarks:
mp_drawing.draw_landmarks(image_user, results_user.pose_landmarks, mp_pose.POSE_CONNECTIONS)
# Display videos
benchmark_video_placeholder.image(image_benchmark, channels="RGB", use_column_width=True)
user_video_placeholder.image(image_user, channels="BGR", use_column_width=True)
# Calculate error and update statistics
error = cosine_distance(results_user.pose_landmarks, pose.process(image_benchmark).pose_landmarks) * 100
correct_step = error < 30
correct_steps += correct_step
# Update statistics
stats = f"""
Frame Error: {error:.2f}%\n
Step: {'CORRECT STEP' if correct_step else 'WRONG STEP'}\n
Cumulative Accuracy: {(correct_steps / total_frames) * 100:.2f}%
"""
stats_placeholder.markdown(stats)
cap_benchmark.release()
cap_user.release()
</code></pre>
<p>It runs perfectly fine in local: <a href="https://shorturl.at/arwy9" rel="nofollow noreferrer">https://shorturl.at/arwy9</a></p>
<p>However, when I deploy it, it is not running smoothly at all: <a href="http://34.146.75.110:8501/" rel="nofollow noreferrer">http://34.146.75.110:8501/</a></p>
<p>This is a deployment in a Kubernetes cluster in Google Cloud, but I have tried Streamlit Cloud and Heroku too; it's the same performance: 1-2 fps in the cloud as opposed to ~20 fps locally.</p>
<p>So, first of all, why is that happening? Is it using the GPU when running on my local machine (Mac Air M1) to render it smoothly, even though I never explicitly coded it to use the GPU?</p>
<p>Secondly, how do I make it run smoothly in the cloud?</p>
| <python><performance><deployment><streamlit><mediapipe> | 2023-11-20 10:47:34 | 0 | 3,764 | Kristada673 |
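One mitigation, hedged and sketched independently of Streamlit/MediaPipe (the `infer` callable stands in for `pose.process` on a decoded frame): run inference only every Nth frame and reuse the last result in between, trading some responsiveness for roughly N× less CPU work on cloud nodes that lack the M1's hardware acceleration. Note the loop above also runs two inferences per frame (user plus benchmark), so any per-frame saving counts double:

```python
def process_stream(frames, infer, stride=3):
    """Run `infer` only on every `stride`-th frame; reuse the previous
    result for the frames in between."""
    last = None
    results = []
    for i, frame in enumerate(frames):
        if i % stride == 0 or last is None:
            last = infer(frame)
        results.append(last)
    return results
```

Other levers worth considering: downscaling frames before inference, and precomputing the benchmark video's landmarks once (offline) instead of re-running pose estimation on the benchmark every session.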
77,515,042 | 3,979,391 | Celery with PostgreSQL giving error "can't adapt type 'AsyncResult'" | <p>I am using <code>postgresql</code> as my backend for Celery (v5.3.5).</p>
<p>Celery returns an SQL error when I call <code>ready()</code> on the task's <code>AsyncResult</code>:</p>
<pre><code>sqlalchemy.exc.ProgrammingError: (psycopg2.ProgrammingError) can't adapt type 'AsyncResult'
[SQL: SELECT celery_taskmeta.id AS celery_taskmeta_id, celery_taskmeta.task_id AS celery_taskmeta_task_id, celery_taskmeta.status AS celery_taskmeta_status, celery_taskmeta.result AS celery_taskmeta_result, celery_taskmeta.date_done AS celery_taskmeta_date_done, celery_taskmeta.traceback AS celery_taskmeta_traceback, celery_taskmeta.name AS celery_taskmeta_name, celery_taskmeta.args AS celery_taskmeta_args, celery_taskmeta.kwargs AS celery_taskmeta_kwargs, celery_taskmeta.worker AS celery_taskmeta_worker, celery_taskmeta.retries AS celery_taskmeta_retries, celery_taskmeta.queue AS celery_taskmeta_queue
FROM celery_taskmeta
WHERE celery_taskmeta.task_id = %(task_id_1)s]
[parameters: {'task_id_1': <AsyncResult: 0ae578c2-85b2-4d13-9002-50604329a480>}]
(Background on this error at: https://sqlalche.me/e/20/f405)
</code></pre>
<p>There is quite a long traceback, mostly SQLAlchemy; the last Celery frame was:</p>
<pre><code>File "/dist-packages/celery/backends/database/__init__.py", line 152, in _get_task_meta_for
    task = list(session.query(self.task_cls).filter(self.task_cls.task_id == task_id))
</code></pre>
<p>This is the task worker script (called task_queue.py):</p>
<pre><code>from time import sleep

from celery import Celery, current_app, Task
broker = 'sqla+postgresql://user:password@server/db'
backend = 'db+postgresql://user:password@server/db'
app : Celery = Celery('tasks', broker=broker, backend=backend)
@app.task(bind=True)
def long_running_task(self, seconds : int):
""" A task that takes a number of seconds to complete. """
print('Starting count off of {0} seconds'.format(seconds))
for i in range(seconds):
print('{0} seconds left'.format(seconds - i))
sleep(1)
</code></pre>
<p>and this is how it is called:</p>
<pre><code>from time import sleep

from task_queue import long_running_task
from celery.result import AsyncResult
id = long_running_task.delay(5)
print(f"Task ID {id} queued")
task : AsyncResult = AsyncResult(id, app=long_running_task.app)
while not task.ready():
print("Waiting for task to complete")
sleep(1)
</code></pre>
<p>I cannot see what I am doing wrong here; this same code works if I use rpc as the backend.
Clearly that SQL call from Celery is expecting a task id but is getting an AsyncResult instead. Is this a bug in Celery?
Any ideas are much appreciated.</p>
<p>(Addendum: the same error occurs when trying to get any information from the task result, e.g. <code>task.name</code>, <code>task.result</code>, <code>task.args</code>.)</p>
| <python><postgresql><sqlalchemy><celery> | 2023-11-20 10:18:55 | 1 | 1,697 | Giles |
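For what it's worth, a hedged reading of the traceback: the bound parameter is `<AsyncResult: 0ae578c2-…>`, meaning an `AsyncResult` object reached the query where a task-id string belongs. In the calling script, `delay()` itself already returns an `AsyncResult`:

```python
# id = long_running_task.delay(5)    # <- already an AsyncResult, not a string id
# task = AsyncResult(id, app=...)    # <- wraps that object as the "task id"
#
# The database backend then binds the AsyncResult as the task_id SQL
# parameter, which psycopg2 cannot adapt (the rpc backend happens to
# stringify it, which is why it appears to work there). Using the .id
# attribute keeps the types straight:
# result = long_running_task.delay(5)
# task = AsyncResult(result.id, app=long_running_task.app)
```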
77,514,946 | 9,021,547 | Out of sample prediction with SARIMAX | <p>I am building a seasonal ARIMA model using the <code>SARIMAX</code> package from <code>statsmodels</code>. The following is an illustration of the model:</p>
<pre><code>import pandas as pd
import numpy as np
from statsmodels.tsa.statespace.sarimax import SARIMAX
date_range = pd.date_range(start='2000-01-01', end='2009-12-01', freq='MS')
values = np.random.rand(len(date_range))
ts_full = pd.Series(values, index=date_range)
train = ts_full[:-12]
mdl = SARIMAX(train, order=(1, 0, 0), seasonal_order=(1, 0, 0, 12)).fit()
</code></pre>
<p>Now I want to see how the model performs on the 12 months of data it was not trained on, without refitting the model every month. I tried the following:</p>
<pre><code>mdl.predict(ts_full)
</code></pre>
<p>Which results in the following error:</p>
<pre><code>TypeError: Cannot convert input [2000-01-01 0.509615
2000-02-01 0.094391
2000-03-01 0.454202
2000-04-01 0.489502
.
.
.
2009-10-01 0.167847
2009-11-01 0.625154
2009-12-01 0.621803
Freq: MS, dtype: float64] of type <class 'pandas.core.series.Series'> to Timestamp
</code></pre>
<p>I found several prediction examples online, however they all either predict only 1 period ahead or require that the model be refit every time period before making the forecast. Is there any way to make the prediction using data on which the model was not trained?</p>
| <python><statsmodels><predict><sarimax> | 2023-11-20 10:03:51 | 1 | 421 | Serge Kashlik |
77,514,893 | 13,222,679 | How to create an XML instance from an XSD in Python | <p>I have an XSD file and my task is to create an empty XML based on that XSD. I tried to use an external
package that I found on GitHub (here's the link: <a href="https://github.com/fortesp/xsd2xml" rel="nofollow noreferrer">https://github.com/fortesp/xsd2xml</a>),
and here's what I tried:</p>
<pre><code>!git clone https://github.com/fortesp/xsd2xml.git
!pip install xmlschema

cd /content/sample_data/xsd2xml/xsd2xml

from xmlgenerator import XmlGenerator
from xmldatafacet import DataFacet

xmlgenerator = XmlGenerator('/content/sample_data/file.xsd', True)
print(xmlgenerator.generate())  # Output to console
xmlgenerator.write('filename.xml')  # Output to file
</code></pre>
<p>but it gives me an error on the <code>xmlgenerator.generate()</code> call
inside the package.
My question here is: is there any other solution besides what I'm trying to do?</p>
| <python><xml><xsd> | 2023-11-20 09:56:05 | 1 | 311 | Bahy Mohamed |
77,514,329 | 4,580,542 | How to resolve ImportError: No module named bs4 | <p>I'm trying to run some Python code. I've installed Python and BeautifulSoup, but I'm getting this error.</p>
<p>Any ideas?</p>
<p>Thanks</p>
<p><code>main.py</code></p>
<pre><code>from bs4 import BeautifulSoup

with open('index.html', 'r') as html_file:
    content = html_file.read()

print(content)
</code></pre>
<p>After running <code>python main.py</code> in terminal</p>
<pre><code>Requirement already satisfied: bs4 in /usr/local/lib/python3.9/site-packages (0.0.1)
Requirement already satisfied: beautifulsoup4 in /usr/local/lib/python3.9/site-packages (from bs4) (4.12.2)
Requirement already satisfied: soupsieve>1.2 in /usr/local/lib/python3.9/site-packages (from beautifulsoup4->bs4) (2.5)
</code></pre>
<p><code>terminal error</code></p>
<pre><code>ImportError: No module named bs4
</code></pre>
| <python><beautifulsoup> | 2023-11-20 08:16:28 | 1 | 1,063 | Joe Consterdine |
77,514,316 | 1,306,892 | Confusion understanding Google Trends with pytrends: column normalization, search representation, and monthly aggregation | <p>The following code</p>
<pre><code>from pytrends.request import TrendReq
pytrends = TrendReq(hl='it', tz=0, timeout=None)
keywords = ['dichiarazione', 'redditi']
pytrends.build_payload(keywords, timeframe='2004-01-01 2017-01-01', geo='IT')
pytrends.interest_by_region(resolution='COUNTRY', inc_low_vol=True, inc_geo_code=False)
</code></pre>
<p>returns</p>
<p><a href="https://i.sstatic.net/Q9gia.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Q9gia.png" alt="enter image description here" /></a></p>
<p>I don't have a clear understanding of the results I'm obtaining. Firstly, should the column numbers not be normalized between 0 and 100, as is usually done for Google Trends? What do they represent? Why, for example, does the last column have much lower numbers than the one next to it?</p>
<p>I wanted to focus on searches containing both the words 'dichiarazione' and 'redditi' in Italian, but I'm starting to doubt whether this code actually returns results for searches containing each of the words 'dichiarazione' and 'redditi' separately (i.e., in the second column, searches containing the single word 'dichiarazione,' and in the third column, searches containing the single word 'redditi'). Is that really the case?</p>
<p>Additionally, I would like to obtain monthly results of searches within the specified time frame (so I would like to see this dataframe repeated for each month of the time frame), but I don't know where to start. Any suggestions, please?</p>
| <python><google-trends> | 2023-11-20 08:12:41 | 1 | 1,801 | Mark |
77,514,237 | 742,269 | Getting numbers from an array using mask and regex | <p>Having this array with <code>code</code> and <code>collection</code>, where <code>X</code> is a mask that can be "any number":</p>
<pre><code>input_array = [{"code": "XXXX10", "collection": "one"}, {"code": "XXX610", "collection": "two"}, {"code": "XXXX20", "collection": "three"}]
</code></pre>
<p>I want a function that given any 6 digit code, for example <code>000710</code> returns the value that matches the <em>best</em> code mask (for the example would be <code>one</code>). This is my try:</p>
<pre><code>import re

def get_collection_from_code(analysis_code):
    for collection in input_array:
        actual_code = collection["code"]
        mask_to_re = actual_code.replace("X", "[\d\D]")
        pattern = re.compile("^" + mask_to_re + "$")
        if pattern.match(analysis_code):
            print("Found collection '" + str(collection["collection"]) + "' for code: " + str(analysis_code))
            return collection["collection"]

res = get_collection_from_code("010610")
print(res)
</code></pre>
<p>The problem here is that if I input the code <code>010610</code> (and I want it to return <code>two</code>), it returns <code>one</code>, as it also matches the pattern <code>XXXX10</code> first.</p>
<p>For better understanding, if I input these values, I would like to get these outputs:</p>
<pre><code>010610 > two
010010 > one
123420 > three
</code></pre>
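<p>One idea I've been toying with is to try the most specific masks first, i.e. the ones with the fewest <code>X</code> wildcards, but I'm not sure it's robust. (The helper name and the <code>[\d]</code> character class below are just my sketch, since <code>X</code> stands for "any number".)</p>

```python
import re

input_array = [
    {"code": "XXXX10", "collection": "one"},
    {"code": "XXX610", "collection": "two"},
    {"code": "XXXX20", "collection": "three"},
]

def get_collection_most_specific(analysis_code):
    # Try masks with the fewest 'X' wildcards first, so a more
    # specific mask like XXX610 wins over a generic one like XXXX10.
    for entry in sorted(input_array, key=lambda e: e["code"].count("X")):
        pattern = re.compile("^" + entry["code"].replace("X", r"[\d]") + "$")
        if pattern.match(analysis_code):
            return entry["collection"]
    return None

print(get_collection_most_specific("010610"))  # two
print(get_collection_most_specific("010010"))  # one
print(get_collection_most_specific("123420"))  # three
```

<p>Is sorting by wildcard count a reasonable definition of the "best" match here, or is there a cleaner way?</p>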
| <python><regex><python-re> | 2023-11-20 07:56:44 | 6 | 8,475 | Avión |
77,514,185 | 2,840,697 | How do we just select lists from the imported module? | <p>Let's say I did the following:</p>
<pre><code>from X import A, B, C, D
</code></pre>
<p>What I want to do is that combine all lists in A, B, C, D to make a single big list:</p>
<pre><code>all_list = []
for each_variable in (X.A, X.B, X.C, X.D combined):
    if each_variable is a list:
        all_list.extend(each_variable)
</code></pre>
<p>Is there an easy way to do it in python?</p>
<p>For instance, X.A may contain some lists, some dictionaries, etc., and the same for X.C, X.D.</p>
<p>I was wondering if python has modules that can detect data type within each module and only select that.</p>
<p>Thanks.</p>
| <python><loops><directory><iterator> | 2023-11-20 07:43:58 | 2 | 942 | user98235 |
77,514,059 | 943,713 | Python pattern match on an operator | <p>I'm trying to build a function that would match on an operator, like <code>-</code>:</p>
<pre><code>import operator

def testM(x):
    match x:
        case (operator.sub, a, b):
            return operator.sub(a, b)
        case ('-', a, b):
            return a - b
        case ("+", a, b):
            return a + b
        case ("other strange op", a, b, c):
            return (a + b - c)
        # .......
        case _:
            return 0
</code></pre>
<p>The function will be used in Jupyter, so users will type it quite frequently. I would like to keep the keystrokes to a minimum.</p>
<pre><code>testM(('-', 5, 1))  ## 3 key strokes
# it works and returns 4

testM((operator.sub, 4, 1))  ## 12 key strokes
# it works and returns 3
</code></pre>
<p>The goal is that the user can call it like this, but it doesn't work:</p>
<pre><code>testM(-,5,1)  ## only 1 key stroke
# should return 4
</code></pre>
<p>Is there a way to escape the evaluation of <code>-</code> in the parameter so that Python won't raise an error?</p>
| <python><pattern-matching><literals> | 2023-11-20 07:10:37 | 1 | 1,883 | Shawn Zhang |
77,514,020 | 3,848,207 | What's wrong with this Python function to upload file to github? | <p>I have the following Python function to upload a file to github repository.</p>
<pre><code>def upload_to_github(github_token: str,
                     source_file: str, destination_folder: str,
                     github_repo: str, git_branch: str) -> None:
    """
    Uploads a file to a GitHub Pages repository using the PyGithub library.

    Parameters:
    github_token: GitHub token for authentication
    source_file: The path of the local file to be uploaded
    destination_folder: The path of the folder in the GitHub repository where the file will be uploaded
    github_repo: The name of the GitHub repository
    git_branch: The name of the branch in the GitHub repository
    """
    from github import Github

    # Create a Github instance using the token
    g = Github(github_token)

    # Get the repository object
    repo = g.get_user().get_repo(github_repo)

    # Get the branch object
    branch = repo.get_branch(git_branch)

    # Read the source file as binary data
    with open(source_file, "rb") as f:
        content = f.read()

    # Create the path of the file in the GitHub repository
    path = destination_folder + "/" + source_file.split("/")[-1]

    # Create or update the file in the GitHub repository
    repo.create_file(path, "Upload file", content, branch=branch.name)
</code></pre>
<p>The file can be successfully uploaded the first time. If I upload the same file again, I get the error message <code>github.GithubException.GithubException: 422 {"message": "Invalid request.\n\n\"sha\" wasn't supplied.", "documentation_url": "https://docs.github.com/rest/repos/contents#create-or-update-file-contents"}</code></p>
<p>Why does it work only the first time? It seems a new file can be uploaded but once uploaded, it cannot be updated.</p>
| <python><github><pygithub> | 2023-11-20 07:01:27 | 1 | 5,287 | user3848207 |
77,513,843 | 5,696,601 | Normalized colormap distorted | <p>I want to normalise a colormap and came across <a href="https://gis.stackexchange.com/a/330175/97137">this</a> answer on Stackexchange. It does exactly what I want. Or at least it did. However, the answer and the code are five years old and the code no longer fully reproduces the desired result.</p>
<p>What do I need to change in the code to get the result I want?</p>
<pre class="lang-py prettyprint-override"><code>import numpy as np
import matplotlib.pyplot as plt
from matplotlib.colors import TwoSlopeNorm
import geopandas as gpd
# generate data
gdf = gpd.read_file(gpd.datasets.get_path('naturalearth_lowres'))
gdf = gdf[gdf.continent == 'Africa']
gdf['random'] = np.random.gamma(2, 2, len(gdf)) - 2
# normalize color
vmin, vmax, vcenter = gdf.random.min(), gdf.random.max(), 0
norm = TwoSlopeNorm(vmin=vmin, vcenter=vcenter, vmax=vmax)
# create a normalized colorbar
cmap = 'RdBu'
cbar = plt.cm.ScalarMappable(norm=norm, cmap=cmap)
fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(15, 7))
# with no normalization
gdf.plot(column='random', cmap=cmap, legend=True, ax=ax1)
# with normalization
gdf.plot(column='random', cmap=cmap, norm=norm, legend=False, ax=ax2)
# add colorbar
fig.colorbar(cbar, ax=ax2)
</code></pre>
<p>Today:
<a href="https://i.sstatic.net/zd0r0.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/zd0r0.png" alt="enter image description here" /></a></p>
<p>Desired solution:
<a href="https://i.sstatic.net/pwU0g.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/pwU0g.png" alt="enter image description here" /></a></p>
| <python><matplotlib><geopandas> | 2023-11-20 06:19:42 | 0 | 1,023 | Stücke |
77,513,798 | 7,520,419 | Given a Pandas DataFrame With a MultiIndex of Size 2, How Can I Access Data Using Only the Second Set of Index Values? | <p>I've been trying to figure out how to use MultiIndexes in Pandas. Here's the details. Given the following dataset:</p>
<pre><code>user movie rating
196 242 3
186 302 3
22 377 1
244 51 2
166 346 1
</code></pre>
<p>Where "user" and "movie" are used as Indexers and "rating" is a column, it's simple to access a "user" by the following:</p>
<pre class="lang-py prettyprint-override"><code>df.loc[196]
</code></pre>
<p>However, I also need to be able to access data for a given "movie" without considering the "user" Indexer. The following attempts do not work:</p>
<pre class="lang-py prettyprint-override"><code>df.loc[,302] # Syntax Error
df.loc[:,302] # Key Error
df[:][302] # Index Error
df.loc[:].loc[302] # Only accesses "user" Indexer
</code></pre>
<p>Surely, Pandas must provide a simple way to access individual Indexers beyond just the first, but I haven't figured it out. In my mind, the Pandas implementation for accessing data with MultiIndexers should be similar to accessing Numpy arrays. If not, I will just have to use Numpy rather than Pandas.</p>
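<p>For reproducibility, this is how the frame above can be built (a minimal stand-in for my real data):</p>

```python
import pandas as pd

# Build the example frame with a two-level MultiIndex of (user, movie)
df = pd.DataFrame({
    "user": [196, 186, 22, 244, 166],
    "movie": [242, 302, 377, 51, 346],
    "rating": [3, 3, 1, 2, 1],
}).set_index(["user", "movie"])

print(df.loc[196])  # selecting on the first index level works fine
```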
| <python><pandas><dataframe> | 2023-11-20 06:07:18 | 0 | 606 | Sintrias |
77,513,762 | 14,108,609 | Why is it not possible to change the grey color of the axis titles of a graph in Streamlit | <p><a href="https://i.sstatic.net/7bQcf.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/7bQcf.png" alt="image of the streamlit multiline graph plotted using st.write and plotly" /></a></p>
<pre><code>fig = sp.make_subplots(specs=[[{"secondary_y": True}]])

fig.add_trace(go.Line(x=qdata.DateTime, y=qdata.a), secondary_y=False)
fig.add_trace(go.Line(x=qdata.DateTime, y=qdata.b), secondary_y=False)
fig.add_trace(go.Line(x=qdata.DateTime, y=qdata.c), secondary_y=False)
fig.add_trace(go.Line(x=qdata.DateTime, y=qdata.d), secondary_y=False)
fig.add_trace(go.Line(x=qdata.DateTime, y=qdata.e), secondary_y=True)

fig.update_yaxes(color='#000000', title_text="<b>TEMPERATURES </b> (F)", range=[option_Y1Range0, option_Y1Range1], secondary_y=False)
fig.update_yaxes(color='#000000', title_text="<b>MODE</b> (0 to 5)", range=[option_Y2Range0, option_Y2Range1], secondary_y=True)
fig.update_xaxes(type='category', color='#000000')
fig.update_layout(width=1800,
                  height=600,
                  font_color='#000000',
                  legend=dict(yanchor="top", orientation="h", y=0.9, xanchor="left", x=0.4)
                  )

st.write(fig)
</code></pre>
| <python><plotly><visualization><dashboard><streamlit> | 2023-11-20 05:54:54 | 1 | 1,351 | Janzaib M Baloch |
77,513,504 | 12,931,358 | TypeError: list indices must be integers or slices, not str when importing HF dataset from local path | <p>I want to use the <code>map</code> function after importing the huggingface dataset [mkqa], so I downloaded <a href="https://github.com/apple/ml-mkqa/raw/main/dataset/mkqa.jsonl.gz" rel="nofollow noreferrer">it</a> first,
then put everything in a local path, <code>"/data/mkqa-Chinese"</code>, and:</p>
<pre><code>from datasets import Dataset, load_dataset
raw_dataset = load_dataset(data_path)
</code></pre>
<p>The structure of raw_dataset looks like:</p>
<pre><code> DatasetDict({
train: Dataset({
features: ['query', 'answers', 'queries', 'example_id'],
num_rows: 6758
})
})
</code></pre>
<p>Then I want to map it with the tokenizer, like:</p>
<pre><code>from transformers import AutoTokenizer

model_path = "/data/bigscience/bloomz-3b"
tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True, use_fast=True)
tokenizer.pad_token = tokenizer.eos_token
tokenizer.padding_side = 'right'

def tok(sample):
    prompt_and_chosen = " Human: " + sample['queries']['zh_cn'] + " Assistant: " + sample['answers']['zh_cn'][0]['text']
    model_inps = tokenizer(prompt_and_chosen, padding=True, max_length=512, truncation=True)
    return model_inps

tokenized_training_data = raw_dataset['train'].map(tok, batched=True)
print(tokenized_training_data)
print("pause")
</code></pre>
<p>However, it raises a TypeError:</p>
<pre><code> processed_inputs = function(*fn_args, *additional_args, **fn_kwargs)
File "/home/novo_trl_sft.py", line 548, in tok
prompt_and_chosen = " Human: " + sample['queries']['zh_cn'] + " Assistant: " + sample['answers']['zh_cn'][0]['text']
TypeError: list indices must be integers or slices, not str
</code></pre>
<p>I guess the problem is in the Dataset class, but how do I correct it?</p>
| <python><huggingface-datasets> | 2023-11-20 04:25:16 | 1 | 2,077 | 4daJKong |
77,513,130 | 1,285,061 | python datetime delta inaccuracy - Sidereal period | <p>In the Python datetime module I noticed a gap. datetime does leap year adjustment automatically, using the sidereal year length.</p>
<p>How to achieve scientific accuracy in these date calculations?</p>
<p>The difference is expected to be exactly <code>200</code>, but it is <code>199.99377806650291</code></p>
<p>This is messing up the accuracy of the application.</p>
<p>Sidereal year days = <code>365.256363004</code></p>
<p>Expected difference is exactly 200 using 365.256363004.
<code>2103-1903 = 200</code></p>
<pre><code>>>> import datetime
>>> s = datetime.datetime(1903,1,1,0,0,0)
>>> s.strftime("%A, %d. %B %Y %I:%M%p")
'Thursday, 01. January 1903 12:00AM'
>>>
>>> e = datetime.datetime(2103,1,1,0,0,0)
>>> e.strftime("%A, %d. %B %Y %I:%M%p")
'Monday, 01. January 2103 12:00AM'
>>>
>>> e-s
datetime.timedelta(days=73049)
>>> s-e
datetime.timedelta(days=-73049)
>>>
</code></pre>
<p>Error -</p>
<p>datetime calculated time delta <code>73049</code> days.</p>
<p>I think time delta should return <code>200 * 365.256363004 = 73051.2726008</code> days.</p>
<p><code>73049 / 365.256363004 = 199.99377806650291</code></p>
<p><code>200 - 199.99377806650291 = 0.0062219334971</code> #expected - calculated = error</p>
<p><code>365.256363004 * 24 * 60 = 525969.16272576 minutes</code> #minutes in a Sidereal year</p>
<p><code>365.256363004 * 24 * 60 * 60 = 31558149.7635456 seconds</code> #seconds in a Sidereal year</p>
<p><code>365.256363004 * 0.0062219334971 = 2.272600800003505 days</code> #error in days</p>
<p><code>525969.16272576 * 0.0062219334971 = 3272.545152005046885 minutes</code> #error in minutes</p>
<p><code>31558149.7635456 * 0.0062219334971 = 196352.709120302813103 seconds</code> #error in seconds</p>
<p>It seems datetime is using <code>365.25</code> instead of all significant digits <code>365.256363004</code>.
<code>0.256363004 * 4 = 1.025452016 - 1 = 0.025452016</code> is ignored over leap year adjustments, not tracked by datetime, which is causing the error in scientific calculations.</p>
| <python><datetime><timedelta> | 2023-11-20 01:38:04 | 1 | 3,201 | Majoris |
77,513,010 | 9,273,406 | Skipping Authentication in FastAPI Based on POST request parameter | <p>I am currently working on a FastAPI project and facing a challenge in implementing a custom authenticator. My goal is to skip authentication based on the value of a specific parameter in the request body and return a hardcoded user ID when the condition is met.</p>
<p>Here's a simplified version of my main.py file which runs as:</p>
<pre><code>from fastapi import Depends, FastAPI, HTTPException, Request, status
from fastapi.openapi.models import OAuthFlows as OAuthFlowsModel
from fastapi.security import OAuth2AuthorizationCodeBearer, SecurityScopes
from pydantic import BaseModel, parse_obj_as
from jose import jwt
import uvicorn
import json

app = FastAPI()

# FastAPI OAuth2 configuration
fastapi_oauth2 = OAuth2AuthorizationCodeBearer(
    authorizationUrl="your_authorization_url",
    tokenUrl="your_token_url",
)

# Mockup of auth0_jwks for the sake of example
auth0_jwks = "your_auth0_jwks"

# Mockup of SearchRequest and verify_token for the sake of example
class SearchRequest(BaseModel):
    query: str
    tenant: str

# Mockup of parse_request for the sake of example
async def parse_request(api_request: Request) -> SearchRequest:
    body = await api_request.body()
    body = json.loads(body.decode("utf-8"))
    return parse_obj_as(SearchRequest, body)

def verify_token(token: str = Depends(fastapi_oauth2)) -> dict:
    """
    Verifies Auth0 access token and attached permissions (scopes).
    Returns a dictionary of claims from the verified token.
    """
    if token == "mock_token":
        return {"sub": "mock_user_id"}
    try:
        # Add your token verification logic here, using your custom settings and keys
        jwt_claims = jwt.decode(token, key=auth0_jwks, audience="your_audience")
    except Exception as exc:
        raise HTTPException(
            status_code=401,
            detail="Could not validate credentials",
            headers={"WWW-Authenticate": "Bearer"},
        ) from exc
    return jwt_claims

# Function causing the issue
def get_current_user_or_skip_auth(
    api_request: SearchRequest = Depends(parse_request),
    token: str | None = Depends(fastapi_oauth2),
) -> str:
    if api_request.tenant == "special_tenant":
        return f"{api_request.tenant}_user"

    jwt_claims = verify_token(token)  # Pass the token to verify_token
    if jwt_claims:
        return jwt_claims.get("sub")

    raise HTTPException(
        status_code=401,
        detail="Could not validate credentials",
        headers={"WWW-Authenticate": "Bearer"},
    )

# Basic POST request endpoint
@app.post("/process_request")
async def process_request(
    request: SearchRequest, user_id: str = Depends(get_current_user_or_skip_auth)
):
    """
    Process the incoming request.
    """
    return {"user_id": user_id, "request_data": request.dict()}

# Run the FastAPI application
if __name__ == "__main__":
    uvicorn.run(app, host="127.0.0.1", port=8000)
</code></pre>
<p><strong>Issue:</strong></p>
<p>When I make a POST request with the tenant as "special_tenant" without a token, I expect the endpoint to skip the authentication process and return a JSON with the user_id as "special_tenant_user" and the request data. However, I am currently getting a "Not authenticated" error.</p>
<p>When I make a POST request with the tenant as "regular_tenant" with a mock token, I expect the endpoint to return a JSON with the user_id as "mock_user_id" and the request data. This is currently working as expected.</p>
<p>Here is an example of the issue:</p>
<ol>
<li>Start the app with <code>uvicorn main:app --reload</code></li>
<li>Open another terminal and run the following curl command to make a POST request with the tenant as "special_tenant" without a token:</li>
</ol>
<pre><code>curl -X POST "http://127.0.0.1:8000/process_request" -H "accept: application/json" -H "Content-Type: application/json" -d "{\"query\":\"test_query\",\"tenant\":\"special_tenant\"}"
</code></pre>
<ol start="3">
<li>The server returns a "Not authenticated" error.</li>
<li>But I expected it to return a JSON with the user_id as "special_tenant_user" and the request data.</li>
<li>Post request with a regular tenant works because it authenticates</li>
</ol>
<pre><code>curl -X POST "http://127.0.0.1:8000/process_request" -H "accept: application/json" -H "Content-Type: application/json" -H "Authorization: Bearer mock_token" -d "{\"query\":\"test_query\",\"tenant\":\"regular_tenant\"}"
</code></pre>
<ol start="5">
<li>Returns this as expected</li>
</ol>
<pre><code>{"user_id":"mock_user_id","request_data":{"query":"test_query","tenant":"regular_tenant"}}%
</code></pre>
<p><strong>Question:</strong></p>
<p>What is the proper way to skip authentication and return a hardcoded user ID based on a specific parameter in the request body? Is this even possible to do while being safe in terms of OAuth standards?</p>
<p>Any help would be appreciated. Thanks in advance!</p>
| <python><oauth-2.0><fastapi><auth0> | 2023-11-20 00:30:33 | 1 | 4,370 | azizbro |
77,512,691 | 1,757,321 | Retrieval-augmented generation without OpenAIEmbeddings | <p>I'm playing with HuggingFace and some of the models on there. I'm trying to achieve something along the lines of <a href="https://python.langchain.com/docs/use_cases/question_answering/" rel="nofollow noreferrer">RAG</a>. Seems like a pretty clear guide with all the needed ingredients and recipe. But the cooking sequence is what I need help with.</p>
<p>What I want to do:</p>
<ul>
<li>Have a question and answer chat app</li>
<li>But the chat app will depend on custom information given the model to answer some specialized questions.</li>
</ul>
<p>Thus</p>
<ul>
<li>the model will 100% reside on HuggingFace and "inferred" (I hope I'm using the term right) remotely from a VPS using HuggingFace client</li>
<li>and the VPS will locally have the custom documents used for the "indexing" and "fine tuning"</li>
</ul>
<p><strong>What I've done</strong></p>
<p>With that in mind, here's what I've been able to do so far in the past couple of days (see python script below)</p>
<p>But the problem I keep encountering is, EVERY article/tutorial/documentation I've come across so far CANNOT write 3 sentences without mentioning something along the lines of OpenAI* <em>sigh</em></p>
<p>I do not want to use ANYTHING OpenAI (is that even possible?)</p>
<p>Below is my python script.</p>
<pre><code>import os
from langchain.chains import LLMChain
from langchain.vectorstores import FAISS
from langchain.embeddings.openai import OpenAIEmbeddings
from huggingface_hub import InferenceClient
# <rant>
# Where to import what from seems to be a whack-a-mole sport with this
# langchain project. They can't seem to keep a module at a location
# even for a few version upgrades straight. Woow!
os.environ['HUGGINGFACEHUB_API_TOKEN'] = 'hf_xx'
# os.environ["OPENAI_API_KEY"] = 'sk-xx' # I don't wanna use anything OpenAI*
import gradio as gr
# Load openchat model
openchat = InferenceClient(model="openchat/openchat_3.5", token='hf_xx')
documents = ["trainingData/pdfs.pdf"]
# Define embedding function
def extract_embedding(document):
# Extract embedding using OpenAIEmbeddings
embedding = OpenAIEmbeddings.embed_query(text=document)
return embedding
# Load document embeddings
document_embeddings = [extract_embedding(document) for document in documents]
# Load vectorstore index
index = FAISS.from_documents(documents=documents, embeddings=document_embeddings)
# Define LLM chain
llm_chain = LLMChain(llm=openchat)
# Create RagChain manually
def predict(docs, input):
# Retrieve relevant documents from index
retrieved_docs = index.query(input)
# Run LLM chain on retrieved documents
outputs = []
for doc in retrieved_docs:
output = llm_chain.predict(input=doc)
outputs.append(output)
return outputs
# Launch Gradio app
iface = gr.Interface(fn=predict, inputs="text", outputs="text").launch()
</code></pre>
<p>How far off am I from the mark? And how do I get on track?</p>
<p>Currently, when I run the above script, I get the error:</p>
<pre><code>TypeError: OpenAIEmbeddings.embed_query() missing 1 required positional argument: 'self'
</code></pre>
<p>Like mentioned earlier, I'd prefer not to use OpenAI anywhere in the script above, so any alternatives are welcomed.</p>
<p>Will appreciate any insights</p>
| <python><nlp><artificial-intelligence><langchain> | 2023-11-19 22:11:20 | 1 | 9,577 | KhoPhi |
77,512,673 | 7,077,532 | Python Dataframe: Expand Single "Date" Column Dataframe by Adding Second Column From a List of Names To Each Date | <p>Let's say I have the following sample input dataframe below. Please note my real table contains hundreds of dates.</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>Date</th>
</tr>
</thead>
<tbody>
<tr>
<td>2023-04-02</td>
</tr>
<tr>
<td>2023-07-14</td>
</tr>
</tbody>
</table>
</div>
<p>The code to create above table is:</p>
<pre><code>import pandas as pd
df = pd.DataFrame({'date':["2023-04-02","2023-07-14"]})
</code></pre>
<p>And I have a list of names:</p>
<pre><code>names_list = ['Matthew Perry', "Amy Winehouse", "Ted Nugent"]
</code></pre>
<p>I want to expand the input table by adding a second column that contains names from the names_list. The dates in "Date" column would get repeated until a new date is hit.</p>
<p>My desired output table looks like this:</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>Date</th>
<th>Names</th>
</tr>
</thead>
<tbody>
<tr>
<td>2023-04-02</td>
<td>Matthew Perry</td>
</tr>
<tr>
<td>2023-04-02</td>
<td>Amy Winehouse</td>
</tr>
<tr>
<td>2023-04-02</td>
<td>Ted Nugent</td>
</tr>
<tr>
<td>2023-07-14</td>
<td>Matthew Perry</td>
</tr>
<tr>
<td>2023-07-14</td>
<td>Amy Winehouse</td>
</tr>
<tr>
<td>2023-07-14</td>
<td>Ted Nugent</td>
</tr>
</tbody>
</table>
</div>
<p>I searched around Stack Overflow but couldn't find any code to do this. I'm familiar with left joins and concats but I don't think either apply in this situation.</p>
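<p>For what it's worth, a plain nested loop does produce the table I want, but with hundreds of dates I'm hoping there is a more idiomatic pandas way:</p>

```python
import pandas as pd

df = pd.DataFrame({'date': ["2023-04-02", "2023-07-14"]})
names_list = ['Matthew Perry', "Amy Winehouse", "Ted Nugent"]

# Brute-force baseline: repeat every date once per name
rows = [{'Date': d, 'Names': n} for d in df['date'] for n in names_list]
out = pd.DataFrame(rows)
print(out)
```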
| <python><dataframe><list><loops><merge> | 2023-11-19 22:02:02 | 3 | 5,244 | PineNuts0 |
77,512,666 | 1,116,171 | Python create time interval from human readable string | <p>Is there a Python library to convert a human-readable time interval into two datetimes?</p>
<p>Example:</p>
<pre><code>Str(TimeInterval("last week")) # return a tuple (datetime(2023,11,12,0,0,0), datetime(2023,11,19,23,59,59))
</code></pre>
<p>Same for today, yesterday, last month, 2 months ago, year-to-date, etc.</p>
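<p>To make the interface I'm imagining concrete, here is a hand-rolled illustration with a couple of cases (the function name and signature are just my sketch; obviously I'd rather not maintain this parsing myself):</p>

```python
import datetime as dt

def time_interval(phrase, now=None):
    # Hand-rolled illustration of the interface I want;
    # a real library would parse many more phrases.
    now = now or dt.datetime.now()
    today = now.replace(hour=0, minute=0, second=0, microsecond=0)
    if phrase == "today":
        return (today, today + dt.timedelta(days=1) - dt.timedelta(seconds=1))
    if phrase == "yesterday":
        start = today - dt.timedelta(days=1)
        return (start, today - dt.timedelta(seconds=1))
    raise ValueError(f"unsupported phrase: {phrase}")

print(time_interval("yesterday", now=dt.datetime(2023, 11, 19, 21, 59)))
# (datetime.datetime(2023, 11, 18, 0, 0), datetime.datetime(2023, 11, 18, 23, 59, 59))
```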
| <python><datetime><humanize> | 2023-11-19 21:59:33 | 1 | 1,932 | Charlie |
77,512,663 | 9,320,666 | How to make mypy ignore no-untyped-def error on internal (private) methods | <p>I'm using mypy with <code>disallow-untyped-defs</code> flag so it complains when I create a method that doesn't have type annotation on parameters or return (<code>error: Function is missing a type annotation [no-untyped-def]</code>).</p>
<p>This is usually what I want. I want to make sure all public methods have type annotations, but I don't want to be strict on the internal methods (methods starting with one or two leading underscore). How can I make mypy ignore internal methods?</p>
<p>Example:</p>
<pre class="lang-py prettyprint-override"><code>def foo(a, b):
return a + b
def _bar(a, b):
return a + b
def __baz(a, b):
return a + b
</code></pre>
<p><code>mypy --disallow-untyped-defs --check-untyped-defs <module_name></code></p>
<p>Current output: [no-untyped-def] missing type annotation error on all three methods.</p>
<p>Desired output: error on only <code>foo</code> method.</p>
| <python><mypy><python-typing> | 2023-11-19 21:57:59 | 1 | 1,898 | Seljuk Gulcan |
77,512,620 | 2,446,374 | palm api often returns nothing | <p>I'm using the palm api from python - and using either text_generation or chat - it does the same thing. I send in a prompt like:</p>
<pre><code>summarise this:
a bunch of text
</code></pre>
<p>and it works perfectly - but I want a summary of my <em>conversation</em>. So if I send in this:</p>
<pre><code>summarise this:
{ "role":"user", "content": "Something" },
{ "role":"assistant", "content" : "some response" },
...
</code></pre>
<p>100% of the time, it just returns nothing - and by "nothing" I mean: if I do:</p>
<pre class="lang-py prettyprint-override"><code> completion = palm.generate_text(
model="models/text-bison-001",
prompt=prompt,
temperature=temperature,
max_output_tokens=2000,
)
</code></pre>
<p>completion.result is None,
completion.safety_feeback is empty,
completion.candidates is empty
and there doesn't appear to be any error field or something I can look at.</p>
<p>Temperature is a variable here because I made a loop to try it with ever increasing values - and it always just returns nothing. I get exactly the same result if I use the palm.chat api.</p>
<p>so it's sort of a two part question:</p>
<ol>
<li>Is there any way to get palm to summarise a conversation?</li>
<li>is there any way to troubleshoot why palm just returns nothing?</li>
</ol>
| <python><palm-api> | 2023-11-19 21:41:36 | 0 | 3,724 | Darren Oakey |
77,512,614 | 1,251,549 | How to set a description for DAG tags in Apache Airflow? | <p>Tags for DAGs are created automatically. But is there a way to create a meaningful description for a tag, so that users can not only search by tags but also see a description of the tags in use?</p>
| <python><airflow><airflow-2.x> | 2023-11-19 21:39:16 | 1 | 33,944 | Cherry |
77,512,179 | 15,542,245 | Unexpected list index affects word option for difflib.get_close_matches() | <p>Question: I am required to use this indexing of tokens in my difflib call:
<code>difflib.get_close_matches(tokens[0], jobList, n=1, cutoff=0.85)</code> in order to get my required output. If I use what I expect, which is <code>tokens[j]</code>, then my output is affected: the token <code>Asst</code> still appears before the address <code>Wyndrum</code>. Why?</p>
<pre><code># Short test removes job descriptions from in front of trailing address strings
testList = ['21 Sharp Crescent _Wainuiomata Shop Asst','Shop Asst Wyndrum Avenue _Lower_Hutt Housewife','Housewife']
jobList = ['Asst','Housewife','Shop']

import difflib

newList = []
for i in range(len(testList)):
    tokens = testList[i].split()
    for j in range(len(tokens)):
        print("tokens[j]", tokens[j], "tokens[0]", tokens[0])
        result = difflib.get_close_matches(tokens[0], jobList, n=1, cutoff=0.85)
        if result:
            while tokens and tokens[0] == result[0]:
                tokens.pop(0)
        else:
            newString = ' '.join(tokens)
            newList.append(newString)
            break

for i in range(len(newList)):
    print(newList[i])
</code></pre>
<p>Expected/Correct Output</p>
<pre><code>21 Sharp Crescent _Wainuiomata Shop Asst
Wyndrum Avenue _Lower_Hutt Housewife
</code></pre>
<p>Debug print lines</p>
<pre><code>tokens[j] 21 tokens[0] 21
tokens[j] Shop tokens[0] Shop
tokens[j] Wyndrum tokens[0] Asst
tokens[j] _Lower_Hutt tokens[0] Wyndrum
tokens[j] Housewife tokens[0] Housewife
</code></pre>
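A note on why <code>tokens[0]</code> behaves differently from <code>tokens[j]</code>: <code>pop(0)</code> shifts the list underneath the loop, so after <code>Shop</code> is removed, <code>tokens[1]</code> already points at <code>Wyndrum</code> and <code>Asst</code> is never tested. Probing the head of the list sidesteps the shifting entirely. A minimal stdlib sketch of that head-probing idea (test data shortened for illustration):

```python
import difflib

job_list = ['Asst', 'Housewife', 'Shop']
tokens = 'Shop Asst Wyndrum Avenue'.split()

# Probe the head of the list: every pop(0) exposes the next leading word,
# so all leading job words are stripped before a non-job word stops the loop.
while tokens and difflib.get_close_matches(tokens[0], job_list, n=1, cutoff=0.85):
    tokens.pop(0)

cleaned = ' '.join(tokens)
print(cleaned)  # Wyndrum Avenue
```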
| <python><list><indexing> | 2023-11-19 19:25:11 | 1 | 903 | Dave |
77,512,098 | 3,740,545 | How to add space between bars using Plotly? | <pre><code>import pandas as pd
import plotly.express as px

prices = px.data.tips()["total_bill"]  # any numeric series of prices
fig = px.histogram(
    pd.DataFrame({'price': prices}),
    x="price",
)
fig.show()
</code></pre>
<p><a href="https://i.sstatic.net/77phy.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/77phy.png" alt="enter image description here" /></a></p>
<p>Is there a way to separate the different bins?</p>
| <python><plotly> | 2023-11-19 19:03:15 | 1 | 7,490 | Franco Piccolo |
77,512,072 | 6,324,055 | How to properly declare optional dependencies both as extras and in dedicated groups in Poetry? | <p>From Poetry documentation about the difference between groups and extras:</p>
<blockquote>
<p>Dependency groups, other than the implicit main group, must only contain dependencies you need in your development process. Installing them is only possible by using Poetry. To declare a set of dependencies, which add additional functionality to the project during runtime, use extras instead. Extras can be installed by the end user using pip.</p>
</blockquote>
<p>This makes perfect sense and works fine most of the time. However, during development one usually wants to install all the extras dependencies in order to test all the functionalities of the package. <strong>Extras, however, are not installed by default, contrary to groups.</strong> Moreover, the Poetry documentation states that:</p>
<blockquote>
<p>The dependencies specified for each extra must already be defined as project dependencies.</p>
<p>Dependencies listed in dependency groups cannot be specified as extras.</p>
</blockquote>
<p>Thus, because it is not possible to declare extras for dependencies defined in dependency groups, and because extras are not installed by default, this leaves two suboptimal options for getting a nice developer experience:</p>
<ul>
<li>Installing the project with <code>poetry install --all-extras</code>. This has the downside that the developer has to remember to pass this option during development, even when the <code>dev</code> dependency group is installed.</li>
<li>Mirroring the extras dependencies in a corresponding dependency group. This has the downside of introducing a lot of boilerplate and possible errors, since dependencies are listed multiple times.</li>
</ul>
<p>For instance, the second case would look like:</p>
<pre class="lang-ini prettyprint-override"><code>[tool.poetry.dependencies]
python = "^3.8,<3.12"
numpy = "^1.22"
scipy = { version = "^1.8", optional = true }

[tool.poetry.group.dev.dependencies]
scipy = "^1.8"

[tool.poetry.extras]
plot = ["scipy"]
</code></pre>
<p>In this example there is an extra that requires the SciPy dependency. This dependency is declared as optional in the main dependency group, as recommended by Poetry. However, this dependency should additionally be declared in the <code>dev</code> dependency group in order to organize the development dependencies and have it installed automatically.</p>
<p><strong>Is there a simpler approach to this problem that does not require specifying the same dependency multiple times?</strong></p>
| <python><dependency-management><python-poetry> | 2023-11-19 18:52:55 | 0 | 6,586 | Louis Lac |
77,512,056 | 116,906 | Why is this stubbed Brython page giving cascading errors? | <p>I have a stub of a webpage that is designed to make jQuery UI and Brython with Python's standard library be available:</p>
<pre><code><!DOCTYPE html>
<html>
<head>
<meta charset="utf-8" />
<title>Game</title>
<link rel="stylesheet" href="https://cdnjs.cloudflare.com/ajax/libs/jqueryui/1.13.2/themes/base/jquery-ui.css" integrity="sha512-lCk0aEL6CvAGQvaZ47hoq1v/hNsunE8wD4xmmBelkJjg51DauW6uVdaWEJlwgAE6PxcY7/SThs1T4+IMwwpN7w==" crossorigin="anonymous" referrerpolicy="no-referrer" />
</head>
<body onload="brython();">
<h1>Game</h1>
<script src="https://cdnjs.cloudflare.com/ajax/libs/jquery/3.7.1/jquery.min.js" integrity="sha512-v2CJ7UaYy4JwqLDIrZUI/4hqeoQieOmAZNXBeQyjo21dadnwR+8ZaIJVT8EE2iyI61OV8e6M8PP2/4hpQINQ/g==" crossorigin="anonymous" referrerpolicy="no-referrer"></script>
<script src="https://cdnjs.cloudflare.com/ajax/libs/jqueryui/1.13.2/jquery-ui.min.js" integrity="sha512-57oZ/vW8ANMjR/KQ6Be9v/+/h6bq9/l3f0Oc7vn6qMqyhvPd1cvKBRWWpzu0QoneImqr2SkmO4MSqU+RpHom3Q==" crossorigin="anonymous" referrerpolicy="no-referrer"></script>
<script src="https://cdnjs.cloudflare.com/ajax/libs/brython/3.12.0/brython.min.js" integrity="sha512-c9KPreQJkIztVdIopz/3ywOh1dexbsWtuH/Xd3SYhp9Qkp3VXttFYcGAyBmQyvc7ppTgtJHWqMMeb/nhZJ2kHg==" crossorigin="anonymous" referrerpolicy="no-referrer"></script>
<script src="https://cdnjs.cloudflare.com/ajax/libs/brython/3.12.0/brython_stdlib.js" integrity="sha512-zk2q2GXtqXBlbcxK+BFPg6pZBVO9EvOiOTreyk5n3SIRPlv4JmG8zCPNL32mSuPX4ZRoYXD+HoMiV6UqoocYeg==" crossorigin="anonymous" referrerpolicy="no-referrer"></script>
<script type="text/python" src="static/py/page.bpy"></script>
</body>
</html>
</code></pre>
<p>The <code>static/py/page.bpy</code> is a stub that seems to fail on attempting to import Python's <code>pickle</code> module:</p>
<pre><code>#!/usr/bin/python3
from browser import document, window
import pickled
jQuery = window.jQuery
</code></pre>
<p>This is getting a cascade of errors:</p>
<pre><code>brython.min.js:1
GET https://cjshayward.com/wp-content/project/game/static/py/pickled.py?v=1700418980119 404 (Not Found)
$download_module @ brython.min.js:1
PathEntryFinder.find_spec @ brython.min.js:1
method @ brython.min.js:1
PathFinder.find_spec @ brython.min.js:1
f @ brython.min.js:1
import_engine @ brython.min.js:1
$B.$__import__ @ brython.min.js:1
$B.$import @ brython.min.js:1
eval @ VM908:22
$B.loop @ brython.min.js:1
$B.run_script @ brython.min.js:1
$B.loop @ brython.min.js:1
req.onreadystatechange @ brython.min.js:1
XMLHttpRequest.send (async)
$B.ajax_load_script @ brython.min.js:1
$B.loop @ brython.min.js:1
run_scripts @ brython.min.js:1
$B.parser.brython @ brython.min.js:1
onload @ game/:8
ev.target.body.onload @ brython.min.js:1
load (async)
(anonymous) @ brython.min.js:1
brython.min.js:1 Error 404 means that Python module pickled was not found at url https://cjshayward.com/wp-content/project/game/static/py/pickled.py
brython.min.js:1
GET https://cjshayward.com/wp-content/project/game/static/py/pickled/__init__.py?v=1700418980316 404 (Not Found)
$download_module @ brython.min.js:1
PathEntryFinder.find_spec @ brython.min.js:1
method @ brython.min.js:1
PathFinder.find_spec @ brython.min.js:1
f @ brython.min.js:1
import_engine @ brython.min.js:1
$B.$__import__ @ brython.min.js:1
$B.$import @ brython.min.js:1
eval @ VM908:22
$B.loop @ brython.min.js:1
$B.run_script @ brython.min.js:1
$B.loop @ brython.min.js:1
req.onreadystatechange @ brython.min.js:1
XMLHttpRequest.send (async)
$B.ajax_load_script @ brython.min.js:1
$B.loop @ brython.min.js:1
run_scripts @ brython.min.js:1
$B.parser.brython @ brython.min.js:1
onload @ game/:8
ev.target.body.onload @ brython.min.js:1
load (async)
(anonymous) @ brython.min.js:1
brython.min.js:1 Error 404 means that Python module pickled was not found at url https://cjshayward.com/wp-content/project/game/static/py/pickled/__init__.py
brython.min.js:1 Access to XMLHttpRequest at 'https://cdnjs.cloudflare.com/ajax/libs/brython/3.12.0/Lib/site-packages/pickled.py?v=1700418980450' from origin 'https://cjshayward.com' has been blocked by CORS policy: No 'Access-Control-Allow-Origin' header is present on the requested resource.
$download_module @ brython.min.js:1
PathEntryFinder.find_spec @ brython.min.js:1
method @ brython.min.js:1
PathFinder.find_spec @ brython.min.js:1
f @ brython.min.js:1
import_engine @ brython.min.js:1
$B.$__import__ @ brython.min.js:1
$B.$import @ brython.min.js:1
eval @ VM908:22
$B.loop @ brython.min.js:1
$B.run_script @ brython.min.js:1
$B.loop @ brython.min.js:1
req.onreadystatechange @ brython.min.js:1
XMLHttpRequest.send (async)
$B.ajax_load_script @ brython.min.js:1
$B.loop @ brython.min.js:1
run_scripts @ brython.min.js:1
$B.parser.brython @ brython.min.js:1
onload @ game/:8
ev.target.body.onload @ brython.min.js:1
load (async)
(anonymous) @ brython.min.js:1
brython.min.js:1
GET https://cdnjs.cloudflare.com/ajax/libs/brython/3.12.0/Lib/site-packages/pickled.py?v=1700418980450 net::ERR_FAILED 403 (Forbidden)
$download_module @ brython.min.js:1
PathEntryFinder.find_spec @ brython.min.js:1
method @ brython.min.js:1
PathFinder.find_spec @ brython.min.js:1
f @ brython.min.js:1
import_engine @ brython.min.js:1
$B.$__import__ @ brython.min.js:1
$B.$import @ brython.min.js:1
eval @ VM908:22
$B.loop @ brython.min.js:1
$B.run_script @ brython.min.js:1
$B.loop @ brython.min.js:1
req.onreadystatechange @ brython.min.js:1
XMLHttpRequest.send (async)
$B.ajax_load_script @ brython.min.js:1
$B.loop @ brython.min.js:1
run_scripts @ brython.min.js:1
$B.parser.brython @ brython.min.js:1
onload @ game/:8
ev.target.body.onload @ brython.min.js:1
load (async)
(anonymous) @ brython.min.js:1
brython.min.js:1 Traceback (most recent call last):
File "https://cjshayward.com/wp-content/project/game/static/py/page.bpy", line 5, in <module>
import pickled
JavascriptError: Failed to execute 'send' on 'XMLHttpRequest': Failed to load 'https://cdnjs.cloudflare.com/ajax/libs/brython/3.12.0/Lib/site-packages/pickled.py?v=1700418980450'.
Javascript error
NetworkError: Failed to execute 'send' on 'XMLHttpRequest': Failed to load 'https://cdnjs.cloudflare.com/ajax/libs/brython/3.12.0/Lib/site-packages/pickled.py?v=1700418980450'.
Error: Failed to execute 'send' on 'XMLHttpRequest': Failed to load 'https://cdnjs.cloudflare.com/ajax/libs/brython/3.12.0/Lib/site-packages/pickled.py?v=1700418980450'.
at $download_module (https://cdnjs.cloudflare.com/ajax/libs/brython/3.12.0/brython.min.js:1:509236)
at PathEntryFinder.find_spec (https://cdnjs.cloudflare.com/ajax/libs/brython/3.12.0/brython.min.js:1:522347)
at method (https://cdnjs.cloudflare.com/ajax/libs/brython/3.12.0/brython.min.js:1:301957)
at PathFinder.find_spec (https://cdnjs.cloudflare.com/ajax/libs/brython/3.12.0/brython.min.js:1:521465)
at f (https://cdnjs.cloudflare.com/ajax/libs/brython/3.12.0/brython.min.js:1:332091)
at import_engine (https://cdnjs.cloudflare.com/ajax/libs/brython/3.12.0/brython.min.js:1:524573)
at $B.$__import__ (https://cdnjs.cloudflare.com/ajax/libs/brython/3.12.0/brython.min.js:1:527503)
at $B.$import (https://cdnjs.cloudflare.com/ajax/libs/brython/3.12.0/brython.min.js:1:531266)
at eval (eval at $B.loop (https://cdnjs.cloudflare.com/ajax/libs/brython/3.12.0/brython.min.js:1:265180), <anonymous>:22:4)
at $B.loop (https://cdnjs.cloudflare.com/ajax/libs/brython/3.12.0/brython.min.js:1:265221)
</code></pre>
<p>What can I do so that it loads Brython with both the Python standard library and jQuery UI available and I can genuinely start running and editing <code>static/page/page.bpy</code> after successful import of Python's <code>pickle</code> module?</p>
| <python><jquery><brython> | 2023-11-19 18:46:20 | 1 | 6,021 | Christos Hayward |
77,511,910 | 7,530,099 | Flask app in apache environment creating duplicate log entries and escalating db conns | <p>I have a Python Flask app which uses SQLAlchemy and flask-sqlalchemy. In the dev environment I use the built-in Flask server connecting to a local Postgres 14 db. The staging environment uses Apache with mod_wsgi connecting to a Postgres 12 db. The Apache site config looks like this (anonymized):</p>
<pre><code><VirtualHost 192.168.1.100:80>
ServerName staging.mysite.com
ServerAdmin admin@my-application.org
DocumentRoot /var/www/mysite-staging
WSGIDaemonProcess mysite-staging user=auser group=auser threads=2 display-name=mysite-staging python-path=/var/www/mysite-staging/ python-home=/usr/local/venvs/mysite
WSGIScriptAlias / /var/www/mysite-staging/mysite.wsgi
<Directory /var/www/mysite-staging>
WSGIProcessGroup mysite-staging
WSGIApplicationGroup %{GLOBAL}
AllowOverride none
Require all granted
</Directory>
</VirtualHost>
</code></pre>
<p>Locally the startup logging looks like this, and there is no further repeat of these log entries on requests:</p>
<pre><code>2023-11-19 09:27:26,499 {\config\logger.py:37} INFO: Logger active: app.db.models
2023-11-19 09:27:26,551 {\config\logger.py:37} INFO: Logger active: app
2023-11-19 09:27:26,551 {\app\__init__.py:44} INFO: Loading app config: DEVELOPMENT
2023-11-19 09:27:26,551 {\app\__init__.py:47} INFO: Loading testing config
2023-11-19 09:27:26,552 {\app\__init__.py:73} INFO: ENVIRONMENT = DEVELOPMENT
</code></pre>
<p>Then in the apache environment on a linux server, the logging does this on each request:</p>
<pre><code>2023-11-19 09:22:03,483 {/config/logger.py:37} INFO: Logger active: app.db.models
2023-11-19 09:22:03,566 {/config/logger.py:37} INFO: Logger active: app
2023-11-19 09:22:03,566 {/app/__init__.py:44} INFO: Loading app config: STAGING
2023-11-19 09:22:03,567 {/app/__init__.py:55} INFO: Loading staging config
2023-11-19 09:22:03,567 {/app/__init__.py:73} INFO: ENVIRONMENT = STAGING
2023-11-19 09:22:03,789 {/config/logger.py:37} INFO: Logger active: app
2023-11-19 09:22:03,789 {/config/logger.py:37} INFO: Logger active: app
2023-11-19 09:22:03,790 {/app/__init__.py:44} INFO: Loading app config: STAGING
2023-11-19 09:22:03,790 {/app/__init__.py:44} INFO: Loading app config: STAGING
2023-11-19 09:22:03,790 {/app/__init__.py:55} INFO: Loading staging config
2023-11-19 09:22:03,790 {/app/__init__.py:55} INFO: Loading staging config
2023-11-19 09:22:03,790 {/app/__init__.py:73} INFO: ENVIRONMENT = STAGING
2023-11-19 09:22:03,790 {/app/__init__.py:73} INFO: ENVIRONMENT = STAGING
2023-11-19 09:22:03,805 {/config/logger.py:37} INFO: Logger active: app
2023-11-19 09:22:03,805 {/config/logger.py:37} INFO: Logger active: app
2023-11-19 09:22:03,805 {/config/logger.py:37} INFO: Logger active: app
2023-11-19 09:22:03,805 {/app/__init__.py:44} INFO: Loading app config: STAGING
2023-11-19 09:22:03,805 {/app/__init__.py:44} INFO: Loading app config: STAGING
2023-11-19 09:22:03,805 {/app/__init__.py:44} INFO: Loading app config: STAGING
2023-11-19 09:22:03,805 {/app/__init__.py:55} INFO: Loading staging config
2023-11-19 09:22:03,805 {/app/__init__.py:55} INFO: Loading staging config
2023-11-19 09:22:03,805 {/app/__init__.py:55} INFO: Loading staging config
2023-11-19 09:22:03,806 {/app/__init__.py:73} INFO: ENVIRONMENT = STAGING
2023-11-19 09:22:03,806 {/app/__init__.py:73} INFO: ENVIRONMENT = STAGING
2023-11-19 09:22:03,806 {/app/__init__.py:73} INFO: ENVIRONMENT = STAGING
2023-11-19 09:22:03,857 {/config/logger.py:37} INFO: Logger active: app
2023-11-19 09:22:03,857 {/config/logger.py:37} INFO: Logger active: app
2023-11-19 09:22:03,857 {/config/logger.py:37} INFO: Logger active: app
2023-11-19 09:22:03,857 {/config/logger.py:37} INFO: Logger active: app
2023-11-19 09:22:03,857 {/app/__init__.py:44} INFO: Loading app config: STAGING
2023-11-19 09:22:03,857 {/app/__init__.py:44} INFO: Loading app config: STAGING
2023-11-19 09:22:03,857 {/app/__init__.py:44} INFO: Loading app config: STAGING
2023-11-19 09:22:03,857 {/app/__init__.py:44} INFO: Loading app config: STAGING
2023-11-19 09:22:03,857 {/app/__init__.py:55} INFO: Loading staging config
2023-11-19 09:22:03,857 {/app/__init__.py:55} INFO: Loading staging config
2023-11-19 09:22:03,857 {/app/__init__.py:55} INFO: Loading staging config
2023-11-19 09:22:03,857 {/app/__init__.py:55} INFO: Loading staging config
2023-11-19 09:22:03,857 {/app/__init__.py:73} INFO: ENVIRONMENT = STAGING
2023-11-19 09:22:03,857 {/app/__init__.py:73} INFO: ENVIRONMENT = STAGING
2023-11-19 09:22:03,857 {/app/__init__.py:73} INFO: ENVIRONMENT = STAGING
2023-11-19 09:22:03,857 {/app/__init__.py:73} INFO: ENVIRONMENT = STAGING
</code></pre>
<p>Each time another request is made in the apache staging environment, the logging duplications increase by 4 - so if I make another call to the app the above will start at 5 duplicate entries, and end with 8 copies of each of the above log messages. In the apache site config I have tried changing the wsgi daemon threads to various numbers but it does not affect the number of duplications in the logs - it always increases by 4.</p>
<p>Possibly related, the dev database reacts like this to being hit with a few dozen requests:
<a href="https://i.sstatic.net/3BUtq.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/3BUtq.png" alt="Dev environment database" /></a></p>
<p>But in the apache staging environment, the database reacts like this to the same several dozen requests:</p>
<p><a href="https://i.sstatic.net/D6N1f.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/D6N1f.png" alt="Apache staging database" /></a></p>
<p>Basically all of the connections in the staging environment are in "idle" state, and will continue to pile up until it hits the maximum number of allowable connections, and then error out saying there are no connections available.</p>
<p>I'm not sure if the db connection problem is related to the log duplications or not.</p>
<p>Basically I'm hoping someone can offer some ideas of things to look at, things to test, or even just ideas of what might be going on here.</p>
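Duplicates that grow with each request are the classic signature of handlers being re-attached to the same named logger every time the logging setup code runs; under mod_wsgi that setup can execute in several interpreter/thread contexts, which would multiply the effect. A minimal stdlib sketch of the mechanism and the usual guard (all names here are illustrative, not taken from the app):

```python
import io
import logging

stream = io.StringIO()

def wire_logger(name):
    """Mimics an app factory that attaches a handler on every call."""
    logger = logging.getLogger(name)            # same logger object every call
    handler = logging.StreamHandler(stream)
    handler.setFormatter(logging.Formatter('%(message)s'))
    logger.addHandler(handler)                  # stacks one more handler per call
    logger.setLevel(logging.INFO)
    return logger

# Three "requests", each re-running the setup:
for _ in range(3):
    wire_logger('app_demo')

logging.getLogger('app_demo').info('hello')
duplicated = stream.getvalue().splitlines()     # one 'hello' per stacked handler

# The usual guard: only attach a handler if none is registered yet.
guarded_stream = io.StringIO()
guarded = logging.getLogger('app_demo.guarded')
guarded.propagate = False                       # keep parent handlers out of the picture
for _ in range(3):
    if not guarded.handlers:                    # idempotent wiring
        h = logging.StreamHandler(guarded_stream)
        h.setFormatter(logging.Formatter('%(message)s'))
        guarded.addHandler(h)
guarded.setLevel(logging.INFO)
guarded.info('once')
```

The idle-connection pile-up may be a separate issue (pooled SQLAlchemy connections held per worker process), but checking for repeated handler/engine wiring is a reasonable first step for the logging half.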
| <python><postgresql><flask><flask-sqlalchemy><mod-wsgi> | 2023-11-19 18:08:32 | 0 | 426 | Momus |
77,511,750 | 6,410,450 | In plotly, how can I create a subplot figure with different numbers of subplots per row? | <p>I'd like to create a plotly figure that has 1 large subplot in the first row and 2 side-by-side subplots in the 2nd row. plotly.subplots.make_subplots forces me to create a rectangular grid, so I'd have to do a 2x2 grid or a 2x1 grid, which isn't what I'm looking for. Is there a way to do this in plotly?</p>
| <python><plotly><subplot> | 2023-11-19 17:25:40 | 1 | 2,245 | Troy D |
77,511,660 | 3,312,274 | How do I make JS fetch function receive json/data from flask API in the same order as sent by flask? | <p>How do I make JS fetch receive json/data from flask API in the same order as sent by flask?</p>
<p><strong>flask API:</strong></p>
<pre><code>@api_bp.route("/users")
def users():
all_users = User.query.all()
data = list([u.to_dict() for u in all_users])
return data
</code></pre>
<p>print(data):</p>
<p><code>[{'id': 1, 'username': 'godo', 'email': 'godo@email.com', 'first_name': 'Godo', 'last_name': 'xxx'}]</code></p>
<p><strong>JS script:</strong></p>
<pre><code>fetch( apiEndpoint + 'users' )
.then(response => response.json())
.then(data => {
this.tableData = data;
})
.catch(error => {
console.error('Error fetching data:', error);
});
</code></pre>
<p>data:</p>
<p><code>[{'email': 'godo@email.com', 'first_name': 'Godo', 'id': 1, 'last_name': 'xxx', 'username': 'godo'}]</code></p>
<p>The order of the JSON keys, as received by the JS fetch, is altered to what appears to be alphabetical order. I have read that there are technical explanations for this, but I really need to receive the data in the original order. Reason: I will display this data using Bootstrap-vue in the browser, and the column order is important.</p>
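One thing worth noting: JavaScript's `JSON.parse` preserves the order of string keys as they appear in the payload, so the reordering usually happens server-side at serialization time. Flask's `jsonify` historically sorted keys by default (the `JSON_SORT_KEYS` setting; `app.json.sort_keys` in newer Flask). A stdlib sketch of the two serialization behaviours:

```python
import json

row = {'id': 1, 'username': 'godo', 'email': 'godo@email.com'}

# Python dicts and json.dumps both preserve insertion order by default...
ordered = json.dumps(row)

# ...but sort_keys=True alphabetizes the keys, matching the reordering
# observed in the browser:
alphabetized = json.dumps(row, sort_keys=True)

print(ordered)       # {"id": 1, "username": "godo", "email": "godo@email.com"}
print(alphabetized)  # {"email": "godo@email.com", "id": 1, "username": "godo"}
```

If key order must survive any serializer, sending a list of `[key, value]` pairs instead of an object is the robust option, since JSON arrays are always ordered.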
| <javascript><python><flask><bootstrap-vue> | 2023-11-19 16:57:11 | 1 | 565 | JeffP |
77,511,634 | 386,861 | Choropleth doesn't display in Altair | <p>I'm trying to create a choropleth from some data but something isn't working.</p>
<pre><code>import altair as alt
import pandas as pd
from vega_datasets import data
dat = {
'country': [
'France', 'Belgium', 'United Kingdom', 'Iraq', 'Turkey (including Gallipoli)',
'Iran', 'Germany', 'Italy', 'India', 'Egypt', 'Greece', 'Malta',
'Ireland, Republic of', 'Pakistan', 'Azerbaijan', 'Switzerland', 'Poland',
'Israel and Palestine (including Gaza)', 'Tanzania', 'Gibraltar', 'Kenya', 'Mozambique', 'Netherlands'
],
'count': [
7507, 2609, 686, 577, 414, 95, 93, 74, 72, 35, 24, 21,
20, 19, 14, 3, 2, 2, 2, 2, 1, 1, 1
]
}
df = pd.DataFrame(dat)
# Use altair's built-in world map data
world = alt.topo_feature(data.world_110m.url, 'countries')
# Create the choropleth map
choropleth = alt.Chart(world).mark_geoshape(
fill='lightgray',
stroke='white'
).encode(
color=alt.Color('count:Q', scale=alt.Scale(scheme='viridis')),
tooltip=['country:N', 'count:Q']
).transform_lookup(
lookup='id',
from_=alt.LookupData(df, 'country', ['count'])
).properties(
title='Number of Deaths by Country'
).project('equirectangular')
choropleth
</code></pre>
<p>The result is:</p>
<p><a href="https://i.sstatic.net/p392G.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/p392G.png" alt="enter image description here" /></a></p>
<p>Where's the map?</p>
| <python><pandas><altair> | 2023-11-19 16:49:13 | 1 | 7,882 | elksie5000 |
77,511,555 | 13,955,154 | RAG model not reading json files | <p>I'm trying to implement a simple RAG that reads a list of input files and answers questions based on their content:</p>
<pre><code>documents = SimpleDirectoryReader("/content/Data/").load_data()
llm = LlamaCPP(
model_url='https://huggingface.co/TheBloke/Mistral-7B-Instruct-v0.1-GGUF/resolve/main/mistral-7b-instruct-v0.1.Q4_K_M.gguf',
model_path=None,
temperature=0.1,
max_new_tokens=256,
context_window=3900,
generate_kwargs={},
model_kwargs={"n_gpu_layers": -1},
messages_to_prompt=messages_to_prompt,
completion_to_prompt=completion_to_prompt,
verbose=True,
)
embed_model = HuggingFaceEmbeddings(
model_name="thenlper/gte-large"
)
service_context = ServiceContext.from_defaults(
chunk_size=256,
llm=llm,
embed_model=embed_model
)
index = VectorStoreIndex.from_documents(documents, service_context=service_context)
query_engine = index.as_query_engine()
response = query_engine.query("What is the quantity of Nokia 3310 available?")
</code></pre>
<p>But I noticed that the model is not able to answer questions about the JSON files within the Data folder, while it works great for PDFs. Why does this happen, and how can I fix it? I notice that <code>documents</code> contains the JSON files too, so I think the problem is not in the first line of code but probably in the one that builds the index.
Thank you in advance; if you need more information, ask me.</p>
| <python><embedding><langchain><large-language-model> | 2023-11-19 16:20:27 | 1 | 720 | Lorenzo Cutrupi |
77,511,403 | 13,803,549 | Order Django queryset by value closest to 0 | <p>I have a queryset that is returning a list of entries that are tied for the lead in a contest...</p>
<pre class="lang-py prettyprint-override"><code>
leaders = ContestEntry.objects.filter(name=game, wins=wins)
</code></pre>
<p>After I get this data I need to put the entries in order by a second field ('difference') as a tie-breaker. What I can't figure out is how to order by the value closest to 0, given that 'difference' holds a mix of negative and positive numbers.</p>
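The ordering rule itself is just "smallest absolute value first". A plain-Python illustration of that tie-break (the field and entry names below are illustrative):

```python
# Order contest entries by the 'difference' value closest to zero.
entries = [{'name': 'a', 'difference': 7},
           {'name': 'b', 'difference': -2},
           {'name': 'c', 'difference': 3},
           {'name': 'd', 'difference': -5}]

ordered = sorted(entries, key=lambda e: abs(e['difference']))
print([e['name'] for e in ordered])   # ['b', 'c', 'd', 'a']
```

In the ORM itself the usual equivalent is annotating with an absolute value and ordering by it, e.g. `from django.db.models.functions import Abs` and `ContestEntry.objects.filter(...).annotate(abs_diff=Abs('difference')).order_by('abs_diff')` (available since Django 2.2), so the sort happens in the database.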
<p>Thanks for your help</p>
| <python><django><sorting><django-queryset> | 2023-11-19 15:41:09 | 2 | 526 | Ryan Thomas |
77,511,351 | 12,285,101 | split rows into columns in pandas dataframe | <p>I have table similar to this table:</p>
<pre><code> id val
0 abc_1 5
1 abc_1 3
2 abc_1 7
3 abc_2 12
4 abc_2 6
5 abc_2 9
...
</code></pre>
<p>I want to "split" the rows into columns based on the id, so the result should be:</p>
<pre><code> id val1 val2 val3
0 abc_1 5 3 7
1 abc_2 12 6 9
</code></pre>
<p>I was trying to do it by creating groups and then use pivot :</p>
<pre><code>df['group'] = (df.index // 3) + 1
>>>
id val group
0 abc_1 5 1
1 abc_1 3 1
2 abc_1 7 1
3 abc_2 12 2
4 abc_2 6 2
5 abc_2 9 2
</code></pre>
<p>but then when I used pivot I got many new columns, all with null values.</p>
<p><strong>My goal is to get this desired output , open to other solutions as well.</strong></p>
<p>...</p>
<pre><code> id val1 val2 val3
0 abc_1 5 3 7
1 abc_2 12 6 9
</code></pre>
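A null-filled pivot usually means the value passed to <code>columns=</code> is not a per-group position. Numbering rows within each id via <code>groupby(...).cumcount()</code> gives <code>pivot</code> a proper column key; a sketch of one common approach:

```python
import pandas as pd

df = pd.DataFrame({'id': ['abc_1'] * 3 + ['abc_2'] * 3,
                   'val': [5, 3, 7, 12, 6, 9]})

# Number the rows within each id (1, 2, 3, ...) so pivot has a column key
df['pos'] = df.groupby('id').cumcount() + 1

out = (df.pivot(index='id', columns='pos', values='val')
         .add_prefix('val')
         .reset_index())
out.columns.name = None
# out now has columns id, val1, val2, val3 with rows
# (abc_1, 5, 3, 7) and (abc_2, 12, 6, 9)
```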
| <python><pandas><pivot-table> | 2023-11-19 15:25:52 | 2 | 1,592 | Reut |
77,511,233 | 15,239,717 | How can I send an email in a Django function-based view using the Brevo API? | <p>I have developed a Django web application and integrated sending customized emails using <strong>Brevo</strong>, formerly known as <strong>SendinBlue</strong>.</p>
<p>My <code>settings.py</code> file for sending emails is configured fine, because I am able to receive a password reset email, but I am unable to send email from a function-based view. I want to send an email to the user upon application approval, but I am getting:</p>
<blockquote>
<p>'User' object has no attribute 'swagger_types'</p>
</blockquote>
<p>See my view code below where I integrate the API for sending email:</p>
<pre><code>from __future__ import print_function
import sib_api_v3_sdk
from sib_api_v3_sdk.rest import ApiException
@login_required(login_url='user-login')
def approve_applicant(request, pk):
# Get Applicant id
app_id = Profile.objects.get(id=pk)
# Get Applicant's names
app_user = app_id.applicant
applicant_detail = app_id.surname
app_othername = app_id.othernames
app_phone = app_id.phone
app_email = app_id.applicant.email
# app_edu = app_id.applicant.education.qualification
# Try Check Application Submission
try:
app_id = Submitted.objects.get(applicant=app_user)
# When Applicant is Not Found
except Submitted.DoesNotExist:
# Send Message
messages.error(request, f"{applicant_detail} {app_othername} has No Submited Application")
# Redirect Back
return redirect('search-applicant')
else:
approved = Fee.objects.count()
if request.method == "POST":
# applicant = Submitted.objects.get(applicant_id=pk)
# Filter and Update scholarship Approval
Submitted.objects.filter(applicant_id=pk).update(approved='APROVED')
record_fee=Fee.objects.create(applicant=app_user, email=app_email, phone=app_phone)
record_fee.save()
# Instantiate the client with the API KEY
configuration = sib_api_v3_sdk.Configuration()
configuration.api_key['api-key']=config('API_KEY')
api_instance = sib_api_v3_sdk.TransactionalEmailsApi(sib_api_v3_sdk.ApiClient(configuration))
# Define the campaign settings\
subject = "SUCCESSGANDE SCHOLARSHIP PORTAL!"
sender = {"name": "SUCCESSGANDE", "email": "scholarship@successsolutions.com.ng"}
replyTo = {"name": "SUCCESSGANDE", "email": "info@successsolutions.com.ng"}
html_content = "<html><body><h1>Congratulations! Your Scholarship Application has been approved. </h1></body></html>"
to = [{"email": app_email, "name": app_user}]
params = {"parameter": "My param value", "subject": "New Subject"}
send_smtp_email = sib_api_v3_sdk.SendSmtpEmail(to=to, bcc='nabem.jude@gmail.com', cc='info@successsolutions.com.ng', reply_to=replyTo, headers='Testing', html_content=html_content, sender=sender, subject=subject)
try:
api_response = api_instance.send_transac_email(send_smtp_email)
print(api_response)
except ApiException as e:
print("Exception when calling SMTPApi->send_transac_email: %s\n" % e)
print("Exception when calling EmailCampaignsApi->create_email_campaign: %s\n" % e)
messages.success(request, f'{applicant_detail} {app_othername} Scholarship Approved successfully')
return redirect('search-applicant')
context = {
'applicant': applicant_detail,
'app_othername': app_othername,
'app_user': app_user,
'approved': approved,
}
return render(request, 'user/approval_form.html', context)
</code></pre>
<p>How can I best achieve sending email in my view using the BREVO (SendinBlue) API in a Django function-based view?</p>
| <python><django><sendinblue> | 2023-11-19 14:54:20 | 0 | 323 | apollos |
77,511,162 | 5,852,692 | SQLalchemy custom __repr__ different behaviour | <p>I have some SQLalchemy tables which are relational:</p>
<pre><code>DBNetwork (parent) -> DBNode (child)
</code></pre>
<p>When I print a specific <code>DBNetwork</code> instance, the <code>__repr__</code> function works as expected, only displaying the <code>DBNetwork</code> instance attributes. However, after I access its child objects via <code>DBNode</code>, the <code>__repr__</code> function starts to show the child instance attributes as well:</p>
<pre><code>net_b = session.scalar(sa.select(DBNetwork).where(DBNetwork.name == 'NET_B'))
print(net_b)
print(net_b.nodes[0])
print(net_b)
</code></pre>
<p>output looks like this:</p>
<pre><code>DBNetwork: {'id': 2, 'name': 'NET_B', 'year': 2023, 'description': 'TBD', 'type': 'methane', 'date_time': datetime.datetime(2023, 11, 19, 13, 13, 22, 743904)}
DBNode: {'network_id': 2, 'name': 'CS1I', 'subsys': 'DEFAULT', 'x': -261.30000000000246, 'height': 0.0, 'id': 3, 'alias': '', 'type': 256, 'y': -288.80000000000007}
DBNetwork: {'id': 2, 'name': 'NET_B', 'year': 2023, 'description': 'TBD', 'type': 'methane', 'date_time': datetime.datetime(2023, 11, 19, 13, 13, 22, 743904), 'nodes': [DBNode: {'network_id': 2, 'name': 'CS1I', 'subsys': 'DEFAULT', 'x': -261.30000000000246, 'height': 0.0, 'id': 3, 'alias': '', 'type': 256, 'y': -288.80000000000007}, DBNode: {'network_id': 2, 'name': 'CS1O', 'subsys': 'DEFAULT', 'x': 184.29999999999438, 'height': 0.0, 'id': 4, 'alias': '', 'type': 256, 'y': -288.80000000000007}, DBNode: {'network_id': 2, 'name': 'N3', 'subsys': 'DEFAULT', 'x': 518.499999999992, 'height': 0.0, 'id': 5, 'alias': '', 'type': 256, 'y': -288.80000000000007}, DBNode: {'network_id': 2, 'name': 'END', 'subsys': 'DEFAULT', 'x': 518.5, 'height': 0.0, 'id': 6, 'alias': '', 'type': 256, 'y': -66.0}, DBNode: {'network_id': 2, 'name': 'START', 'subsys': 'DEFAULT', 'x': -595.5, 'height': 0.0, 'id': 7, 'alias': '', 'type': 128, 'y': -66.0}, DBNode: {'network_id': 2, 'name': 'VA1O', 'subsys': 'DEFAULT', 'x': -595.5, 'height': 0.0, 'id': 8, 'alias': '', 'type': 256, 'y': -288.80000000000007}]}
</code></pre>
<hr />
<p>The original code defined as:</p>
<pre><code>import sqlalchemy as sa
from sqlalchemy.orm import DeclarativeBase
from sqlalchemy.orm import Mapped
from sqlalchemy.orm import mapped_column
from sqlalchemy.orm import relationship
from sqlalchemy.orm import Session
class Base(DeclarativeBase):
def __repr__(self):
name = self.__class__.__name__
dict_ = {k: v for k, v in self.__dict__.items()
if k != '_sa_instance_state'}
return f'{name}: {dict_}'
class DBNetwork(Base):
__tablename__ = 'networks'
id: Mapped[int] = mapped_column(sa.Integer, primary_key=True)
name: Mapped[str] = mapped_column(sa.String(60))
...
nodes: Mapped[List['DBNode']] = relationship(back_populates='network')
class DBNode(Base):
__tablename__ = 'nodes'
id: Mapped[int] = mapped_column(sa.Integer, primary_key=True)
network_id: Mapped[int] = mapped_column(sa.Integer, sa.ForeignKey(
'simulation.networks.id'))
name: Mapped[int] = mapped_column(sa.String(25))
...
network: Mapped['DBNetwork'] = relationship(back_populates='nodes')
</code></pre>
<hr />
<p>Somehow, since I defined a new <code>__repr__</code> on <code>Base</code>, the return value changes after the relationship is accessed. How can I make sure that the return value stays the same?</p>
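This is consistent with how lazy loading works: relationship attributes are only written into the instance's `__dict__` once they are first accessed, so a `__repr__` built from `__dict__` grows afterwards. A plain-Python sketch of the caching behaviour (no SQLAlchemy required; names are illustrative):

```python
class Network:
    """Mimics SQLAlchemy lazy loading: 'nodes' enters __dict__ on first access."""

    def __init__(self):
        self.name = 'NET_B'

    def __getattr__(self, item):            # only called when item is NOT in __dict__
        if item == 'nodes':
            value = ['CS1I', 'CS1O']        # pretend this came from the database
            self.__dict__['nodes'] = value  # cached on the instance, like the ORM does
            return value
        raise AttributeError(item)

    def __repr__(self):
        return f'Network: {self.__dict__}'

net = Network()
before = repr(net)        # 'nodes' not loaded yet, so not in __dict__
net.nodes                 # first access triggers the "load" and caches it
after = repr(net)
print(before)   # Network: {'name': 'NET_B'}
print(after)    # Network: {'name': 'NET_B', 'nodes': ['CS1I', 'CS1O']}
```

If the goal is a stable repr, one option is iterating over the mapped columns rather than `__dict__`, for instance via `self.__table__.columns`, so lazily loaded relationships never show up.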
| <python><dictionary><sqlalchemy><repr> | 2023-11-19 14:32:50 | 1 | 1,588 | oakca |
77,511,122 | 17,758,716 | How to parse an HTML table, into a portable data-set; DSV, JSON, etc | <p>I'd like to capture all the data from an <em><a href="https://en.wikipedia.org/wiki/HTML" rel="nofollow noreferrer">HTML</a> table</em>, and place it into a data-set. Preferably, a <em><a href="https://en.wikipedia.org/wiki/Delimiter-separated_values" rel="nofollow noreferrer">DSV</a></em>, or a <em><a href="https://en.wikipedia.org/wiki/JSON" rel="nofollow noreferrer">JSON</a></em>. Both of which, can be conveniently ported to other data-sets, and data-set containers, e.g., <em><a href="https://en.wikipedia.org/wiki/XML" rel="nofollow noreferrer">XML</a></em>, or a <em><a href="https://en.wikipedia.org/wiki/Database" rel="nofollow noreferrer">database</a></em>.</p>
<p>The following <em><a href="https://en.wikipedia.org/wiki/Main_Page" rel="nofollow noreferrer">Wikipedia</a></em> article, contains a table of values I'd like to capture. <em><a href="https://en.wikipedia.org/wiki/List_of_Solar_System_objects_by_size#Objects_with_radius_over_400_km" rel="nofollow noreferrer">Wikipedia – List of Solar System objects by size – Objects with radius over 400 km</a></em>.</p>
<p>I tried using <em><a href="https://en.wikipedia.org/wiki/XPath" rel="nofollow noreferrer">XPath expressions</a></em>, but not only is the syntax complex, the task of spelling out each <em>element</em>'s <em>class</em> or <em>id</em> is simply too daunting. Not to mention that <em>XPath</em> is designed for <em>XML</em>, not <em>HTML</em>.</p>
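For a table this regular, the standard library alone can get from HTML to JSON (or a DSV) without XPath. A minimal sketch using `html.parser`; note that real Wikipedia tables add rowspans, footnotes, and nested markup that need extra handling, and `pandas.read_html` or BeautifulSoup are the usual heavier tools:

```python
import json
from html.parser import HTMLParser

class TableParser(HTMLParser):
    """Collects cell text from table rows into a list of lists."""

    def __init__(self):
        super().__init__()
        self.rows, self._row, self._cell = [], None, None

    def handle_starttag(self, tag, attrs):
        if tag == 'tr':
            self._row = []
        elif tag in ('td', 'th'):
            self._cell = []

    def handle_data(self, data):
        if self._cell is not None:          # ignore text outside cells
            self._cell.append(data)

    def handle_endtag(self, tag):
        if tag in ('td', 'th') and self._row is not None:
            self._row.append(''.join(self._cell).strip())
            self._cell = None
        elif tag == 'tr' and self._row:
            self.rows.append(self._row)
            self._row = None

# Illustrative fragment standing in for the Wikipedia table
page = """<table>
<tr><th>Body</th><th>Radius (km)</th></tr>
<tr><td>Ganymede</td><td>2634</td></tr>
<tr><td>Titan</td><td>2575</td></tr>
</table>"""

parser = TableParser()
parser.feed(page)
header, *body = parser.rows
records = [dict(zip(header, row)) for row in body]
print(json.dumps(records, indent=2))
```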
| <python><csv><parsing><html-table><dataset> | 2023-11-19 14:22:23 | 2 | 6,266 | Reilas |