| QuestionId | UserId | QuestionTitle | QuestionBody | Tags | CreationDate | AnswerCount | UserExpertiseLevel | UserDisplayName |
|---|---|---|---|---|---|---|---|---|
78,489,770
| 188,331
|
Difference in special-token handling between BertTokenizer's batch_decode() and decode() methods?
|
<p>For <code>BertTokenizer</code>, I am trying to decode sentences produced after tokenization. Here is my code:</p>
<pre><code>from transformers import BertTokenizer
ref = '這件衣服皺巴巴的,幫我燙一下吧。'
our = '衣服皺了,幫我燙一燙'
tokenizer = BertTokenizer.from_pretrained('fnlp/bart-base-chinese')
tokenized_our = tokenizer(our, text_target=ref, max_length=300, truncation=True)
print(tokenized_our.labels)
</code></pre>
<p>which prints: <code>[101, 21561, 5139, 19633, 11230, 15183, 8924, 8924, 15134, 25818, 9042, 9970, 13953, 4896, 4907, 6458, 3566, 102]</code> (101 = CLS Token, 102 = SEP Token)</p>
<p>Then, I try to decode the same sentence with the <code>batch_decode()</code> and <code>decode()</code> methods, and they produce different output:</p>
<pre><code>print(tokenizer.batch_decode(tokenized_our.input_ids, skip_special_tokens=True))
</code></pre>
<p>which produces: <code>['[ C L S ]', '衣', '服', '皺', '了', ',', '幫', '我', '燙', '一', '燙', '[ S E P ]']</code>. Note that spaces are inserted inside special tokens like CLS & SEP, and <code>skip_special_tokens=True</code> has no effect at all.</p>
<p>For <code>decode()</code> method:</p>
<pre><code>print(tokenizer.decode(tokenized_our.input_ids, skip_special_tokens=True).replace(' ', ''))
</code></pre>
<p>which prints: <code>衣 服 皺 了 , 幫 我 燙 一 燙</code>. Spaces are added between the Chinese characters, but the special tokens are gone, respecting the <code>skip_special_tokens=True</code> setting. I can then remove the spaces with <code>replace()</code>.</p>
<p>My question is: does the <code>batch_decode()</code> function contain a bug that breaks the special-token handling? And what is the difference between <code>batch_decode()</code> and <code>decode()</code>?</p>
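<p>A sketch of the relationship (the stand-in functions below only model the shapes involved, not the real tokenizer): <code>batch_decode()</code> is essentially one <code>decode()</code> call per <em>sequence</em>, so passing it a flat list of token ids makes every single id its own one-token "sequence" — which would explain the per-token output above.</p>

```python
# Stand-ins modelling the shape contract only; not the real Hugging Face code.
def fake_decode(seq):
    # decode() expects ONE sequence of ids; tolerate a bare int
    if isinstance(seq, int):
        seq = [seq]
    return " ".join(str(i) for i in seq)

def fake_batch_decode(sequences):
    # batch_decode() is essentially one decode() call per element
    return [fake_decode(s) for s in sequences]

flat_ids = [101, 1920, 102]   # a single sequence (what decode() wants)
batch = [flat_ids]            # a batch of one sequence (what batch_decode() wants)

print(fake_batch_decode(flat_ids))  # one string per id
print(fake_batch_decode(batch))     # one string per sequence
```

<p>With the real tokenizer the same distinction applies: <code>decode()</code> takes one list of ids, <code>batch_decode()</code> takes a list of such lists.</p>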
|
<python><huggingface-transformers><tokenize><cjk><huggingface-tokenizers>
|
2024-05-16 12:09:57
| 0
| 54,395
|
Raptor
|
78,489,644
| 903,651
|
How to parse case-insensitive with lrparsing?
|
<p>The following example attempts to parse a simplified <code>Delphi</code> language where keywords and variables are case-insensitive.</p>
<pre><code>import sys

import lrparsing
from lrparsing import Keyword, List, Prio, Ref, Token, Tokens, Grammar, TokenRegistry, Some, Choice, Opt, Left, Right, THIS


class DelphiPasParser(lrparsing.Grammar):
    Identifier = Token(re="[a-z][a-z0-9_]*", case=False)
    FinalEnd = Token('end.', case=False)
    Unit = Keyword('unit', case=False) + Identifier + Token(';') + FinalEnd
    START = Unit


delphi_code = "Unit MyUnit; end."
parser = DelphiPasParser()
try:
    parse_tree = parser.parse(delphi_code)
except lrparsing.TokenError as e:
    print(e)
    sys.exit()

print(DelphiPasParser.repr_parse_tree(parse_tree))
</code></pre>
<p>The problem is that <code>Token</code> seems to detect similarity incorrectly:</p>
<p><code>End.</code> is reported as a partial match between <code>Token(re="[a-z][a-z0-9_]*", case=False)</code> and the keyword <code>end.</code>, but the dot is not in that regex.</p>
<blockquote>
<p>Token.re (/[a-z][a-z0-9_]*/, <re.Match object; span=(0, 3), match='end'>) partially matches None of Keyword 'end.'</p>
</blockquote>
<p>And even without that, <code>Unit</code> is not recognized:</p>
<blockquote>
<p>("Unrecognised token 'U' at line 1 column 1", 'U', 0, 1, 1)</p>
</blockquote>
<p>How can I parse this properly?</p>
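<p>Independent of lrparsing (whose internals are not shown here), the underlying issue in any tokenizer that mixes a literal like <code>end.</code> with an identifier regex is match <em>ordering</em>: the literal must be tried before the identifier pattern, with case-insensitive matching on both. A minimal hand-rolled sketch of that principle:</p>

```python
import re

# Literal keywords are tried BEFORE the identifier regex, both case-insensitive;
# otherwise "End." would be consumed as the identifier "End" plus a stray dot.
TOKENS = [
    ("FINAL_END", re.compile(r"end\.", re.IGNORECASE)),   # literal, tried first
    ("IDENT",     re.compile(r"[a-z][a-z0-9_]*", re.IGNORECASE)),
    ("SEMI",      re.compile(r";")),
    ("WS",        re.compile(r"\s+")),
]

def tokenize(src):
    pos, out = 0, []
    while pos < len(src):
        for name, rx in TOKENS:
            m = rx.match(src, pos)
            if m:
                if name != "WS":
                    out.append((name, m.group()))
                pos = m.end()
                break
        else:
            raise ValueError(f"unrecognised char {src[pos]!r} at {pos}")
    return out

print(tokenize("Unit MyUnit; End."))
```

<p>Keywords such as <code>unit</code> can then be distinguished from plain identifiers in a post-pass, which is the usual way case-insensitive keyword/identifier conflicts are resolved.</p>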
|
<python><lrparsing>
|
2024-05-16 11:49:53
| 0
| 14,979
|
Adrian Maire
|
78,489,368
| 13,285,583
|
How to calculate the w and b of linear regression with python using the loss function?
|
<p>My goal is to create an <code>ipynb</code> file to learn how to create a Linear Regression model.</p>
<p>The problem:</p>
<ol>
<li>How should I set the initial w and b? Should I set them randomly?</li>
<li>When should I change w or b on each iteration?</li>
<li>By how much should I update w or b on each iteration?</li>
</ol>
<p>Expected result:</p>
<ol>
<li>There is a Model class with a <code>__init__</code> and <code>fit</code> function.</li>
<li>The <code>fit</code> function will increment or decrement the w or b on each iteration.</li>
</ol>
<p>What I've tried:</p>
<ol>
<li>Creating the graph</li>
</ol>
<pre><code>plt.xlim(-2, 11)
plt.ylim(-2, 11)
x = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
y = [-1, 2, 1, 4, 3, 6, 5, 8, 7, 10, 9]
plt.plot(x, y, 'ro')
# y = x
x = [0,10]
y = [0,10]
plt.plot(x, y)
# fill_between
x = [0, 10]
y1 = [-1, 9]
y2 = [1, 11]
plt.fill_between(x, y1, y2, alpha=0.2)
del x, y
</code></pre>
<ol start="2">
<li>Creating a loss function.</li>
</ol>
<pre><code>import numpy as np

# Dots
x = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
y = [-1, 2, 1, 4, 3, 6, 5, 8, 7, 10, 9]
# calculated from y = 1x + 0
ŷ = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10]

# The loss function of linear regression is (ŷ - y)²
# Other name is: the residual
def calculate_linear_regression_loss_function(ŷ: list[int], y: list[int]) -> int:
    # Map to numpy array (for syntactic sugar)
    np_ŷ = np.array(ŷ)
    np_y = np.array(y)
    # Measure distance between ŷ and y
    distances = np_ŷ - np_y
    # Square each distance
    squares = distances ** 2
    # Sum the squares
    return squares.sum()

print(calculate_linear_regression_loss_function(ŷ, y))
del x, ŷ, y
</code></pre>
<ol start="3">
<li>Create the generate training data function</li>
</ol>
<pre><code>from random import randrange

def generate_training_data(n: int) -> list[list[int]]:
    training_data: list[list[int]] = []
    for _ in range(n):
        x = randrange(0, 1000) / 100
        y = randrange(0, 1000) / 100
        training_data.append([x, y])
    return training_data

generate_training_data(n=10)
</code></pre>
<pre><code>def separate_training_data_into_x_y(training_data: list[list[int]]) -> tuple[list[int], list[int]]:
    array = np.array(training_data)
    return array[:, 0], array[:, 1]

training_data = generate_training_data(n=10)
print("training_data", training_data)
x_training, y_training = separate_training_data_into_x_y(training_data=training_data)
print("x_training", x_training)
</code></pre>
<pre><code>training_data = generate_training_data(10)
x_training, y_training = separate_training_data_into_x_y(training_data=training_data)
plt.plot(x_training, y_training, 'ro')
</code></pre>
<p>For example, how to estimate the initial w and b of</p>
<p><a href="https://i.sstatic.net/jtk0hjaF.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/jtk0hjaF.png" alt="enter image description here" /></a></p>
<p><strong>TLDR, unfinished line of code</strong></p>
<p>The Model class. I was going to do this after I understand how Linear Regression works.</p>
<pre><code># Reference:
# 1. https://medium.com/deep-learning-construction/neural-network-build-from-scratch-without-frameworks-1-302dcfb46127
# What is the learning objective of deep learning?
# To minimize the value of the loss function
# For example, for a linear regression y = w*x + b,
# the weights (w) and bias (b) are adjusted during the training process;
# the loss function is (ŷ - y)² where ŷ is the prediction and y is the actual value.
from typing import Callable

class Model:
    x_train: list[list[int]]
    loss_function: Callable[[int, int], int]

    def __init__(self,
                 loss_function: Callable[[int, int], int]):
        self.loss_function = loss_function

    def fit(self, x_training: list[list[int]], y_training: list[int]):
        self.x_training = x_training
        self.y_training = y_training
        print(self.loss_function(3, 1))

loss_function: Callable[[int, int], int] = lambda ŷ, y: (ŷ - y)**2
model = Model(loss_function)
model.fit([[1], [1]], [1])
</code></pre>
|
<python><numpy>
|
2024-05-16 10:58:10
| 0
| 2,173
|
Jason Rich Darmawan
|
78,489,233
| 17,487,457
|
Function to Oversample - Undersample instances in dataset
|
<p>I want to create a function that takes data instances, labels, and a target proportion.
The function should determine the proportion of classes in the given dataset/labels and resample the data to the given target proportion, using <code>imblearn.over_sampling.SMOTE</code> for classes that require over-sampling to reach their target proportion, or <code>imblearn.under_sampling.RandomUnderSampler</code> for classes that need to be under-sampled to match it.</p>
<p>For example, given:</p>
<pre class="lang-py prettyprint-override"><code>from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=10000, n_features=10, n_classes=5,
                           n_informative=4, weights=[0.3, 0.125, 0.239, 0.153, 0.188])
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)
</code></pre>
<p>Initially, the proportion is:</p>
<pre><code>class 0: 0.3,
class 1: 0.125,
class 2: 0.239,
class 3: 0.153,
class 4: 0.188
</code></pre>
<p>And we want to get the following target proportion:</p>
<pre><code>class 0: 0.519
class 1: 0.373
class 2: 0.226
class 3: 0.053
class 4: 0.164
</code></pre>
<p>The function should determine when to use <code>SMOTE</code> or <code>RandomUndersampler</code>.</p>
<p>My code:</p>
<pre class="lang-py prettyprint-override"><code>from collections import Counter

import numpy as np
from imblearn.over_sampling import SMOTE
from imblearn.under_sampling import RandomUnderSampler

def resample_to_proportion(X, y, target_proportion):
    # Calculate the current proportion of each class
    class_counts = Counter(y)
    total_samples = len(y)
    current_proportion = {label: count / total_samples for label, count in class_counts.items()}

    # Initialize resampling strategies
    resampling_strategies = {}
    for label, target_prop in target_proportion.items():
        if target_prop > current_proportion[label]:
            resampling_strategies[label] = SMOTE(sampling_strategy=target_prop)
        elif target_prop < current_proportion[label]:
            resampling_strategies[label] = RandomUnderSampler(sampling_strategy=target_prop)

    # Resample each class based on the difference between current and target proportion
    X_resampled = []
    y_resampled = []
    for label, strategy in resampling_strategies.items():
        mask = y == label
        X_class = X[mask]
        y_class = y[mask]
        X_resampled_class, y_resampled_class = strategy.fit_resample(X_class, y_class)
        X_resampled.append(X_resampled_class)
        y_resampled.append(y_resampled_class)

    # Concatenate resampled data
    X_resampled = np.concatenate(X_resampled)
    y_resampled = np.concatenate(y_resampled)
    return X_resampled, y_resampled
</code></pre>
<p>But then:</p>
<pre class="lang-py prettyprint-override"><code>target_proportion = {0: 0.519, 1: 0.373, 2: 0.226, 3: 0.053, 4: 0.164}
X_resampled, y_resampled = resample_to_proportion(X_train, y_train, target_proportion)
</code></pre>
<p>which raises:</p>
<pre><code>ValueError: The target 'y' needs to have more than 1 class. Got 1 class instead
</code></pre>
<p>And with the following function:</p>
<pre class="lang-py prettyprint-override"><code>def resample_to_target_proportion(X, y, target_proportion):
    # Calculate the current class proportions
    class_counts = Counter(y)
    total_samples = len(y)
    current_proportion = {cls: count / total_samples for cls, count in class_counts.items()}

    # Apply over-sampling or under-sampling to achieve the target proportion
    resampled_X = X
    resampled_y = y
    for cls, target_prop in target_proportion.items():
        if target_prop > current_proportion[cls]:  # Need to over-sample
            if current_proportion[cls] * total_samples < target_prop * total_samples:
                oversample_amount = int(class_counts[cls] * (target_prop / current_proportion[cls] - 1))
                smote = SMOTE(sampling_strategy={cls: oversample_amount})
                X_resampled, y_resampled = smote.fit_resample(resampled_X, resampled_y)
                resampled_X = X_resampled
                resampled_y = y_resampled
        else:  # Need to under-sample
            undersample_amount = int(class_counts[cls] * (1 - target_prop / current_proportion[cls]))
            undersampler = RandomUnderSampler(sampling_strategy={cls: undersample_amount})
            X_resampled, y_resampled = undersampler.fit_resample(resampled_X, resampled_y)
            resampled_X = X_resampled
            resampled_y = y_resampled
    return resampled_X, resampled_y
</code></pre>
<p>This also raises an error:</p>
<pre><code>X_resampled, y_resampled = resample_to_target_proportion(X_train,
                                                         y_train, target_proportion)
</code></pre>
<pre><code>ValueError: With over-sampling methods, the number of samples in
a class should be greater or equal to the original number of samples.
Originally, there is 1248 samples and 447 samples are asked.
</code></pre>
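<p>A likely root cause of both errors: the first attempt fits each sampler on a single-class subset, and the second passes per-class <em>deltas</em> where imblearn's <code>sampling_strategy</code> dicts expect <em>absolute target counts</em>. A sketch of the count arithmetic alone (pure Python, assuming that dict convention): derive one absolute count per class from a reference total, then split classes into an over-sampling dict and an under-sampling dict.</p>

```python
from collections import Counter

# Split classes into an "oversample" dict (absolute target counts for SMOTE)
# and an "undersample" dict (for RandomUnderSampler). Each sampler would then
# get its whole dict in a single fit_resample call on the full dataset, so it
# always sees every class and the "more than 1 class" ValueError cannot occur.
def split_strategies(y, target_proportion):
    counts = Counter(y)
    total = len(y)
    over, under = {}, {}
    for cls, prop in target_proportion.items():
        target = round(prop * total)
        if target > counts[cls]:
            over[cls] = target
        elif target < counts[cls]:
            under[cls] = target
    return over, under

# Mirror of the class balance in the question, at total = 1000
y = [0] * 300 + [1] * 125 + [2] * 239 + [3] * 153 + [4] * 183
over, under = split_strategies(y, {0: 0.519, 1: 0.373, 2: 0.226, 3: 0.053, 4: 0.164})
print(over, under)
```

<p>The untested assumption is then to run <code>SMOTE(sampling_strategy=over)</code> first and <code>RandomUnderSampler(sampling_strategy=under)</code> on its output; since the over dict only lists counts above the current ones and the under dict only counts below, neither sampler should hit the "greater or equal to the original number of samples" check.</p>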
|
<python><valueerror><resampling><smote>
|
2024-05-16 10:30:32
| 0
| 305
|
Amina Umar
|
78,489,093
| 3,918,996
|
Python Invoke and root for imports when loading modules
|
<p>We use a Python module named <a href="https://github.com/pyinvoke/invoke" rel="nofollow noreferrer">invoke</a> to execute a large set of Python scripts in a nice neat way: <a href="https://github.com/pyinvoke/invoke" rel="nofollow noreferrer">invoke</a> loads Python scripts from a <code>tasks</code> sub-directory and allows the Python functions of those tasks to be called in a well-structured fashion.</p>
<p>Inside such a Python script, we have lines like:</p>
<pre><code>from scripts import useful_script_1
</code></pre>
<p>...where <code>scripts</code> is a directory that contains generic, "non-task", Python scripts, located beside (i.e. at the same level as) the <code>tasks</code> directory like so:</p>
<pre><code>-- scripts
   -- __init__.py
   -- useful_script_1.py
   -- useful_script_2.py
   -- ...
-- tasks
   -- __init__.py
   -- task_1.py
   -- task_2.py
</code></pre>
<p>Some time in the last few years (version 2.0.0 works, version 2.2.0 does not), <a href="https://github.com/pyinvoke/invoke" rel="nofollow noreferrer">invoke</a> has, intentionally or not (we don't know), stopped supporting this approach; the <code>scripts</code> directory is no longer found:</p>
<pre><code>Traceback (most recent call last):
  File "c:\program files\python39\lib\runpy.py", line 197, in _run_module_as_main
    return _run_code(code, main_globals, None,
  File "c:\program files\python39\lib\runpy.py", line 87, in _run_code
    exec(code, run_globals)
  File "C:\Program Files\Python39\Scripts\inv.exe\__main__.py", line 7, in <module>
  File "c:\program files\python39\lib\site-packages\invoke\program.py", line 387, in run
    self.parse_collection()
  File "c:\program files\python39\lib\site-packages\invoke\program.py", line 479, in parse_collection
    self.load_collection()
  File "c:\program files\python39\lib\site-packages\invoke\program.py", line 716, in load_collection
    module, parent = loader.load(coll_name)
  File "c:\program files\python39\lib\site-packages\invoke\loader.py", line 91, in load
    spec.loader.exec_module(module)
  File "<frozen importlib._bootstrap_external>", line 855, in exec_module
  File "<frozen importlib._bootstrap>", line 228, in _call_with_frames_removed
  File "c:\projects\my_project\tasks\__init__.py", line 2, in <module>
    from scripts import useful_script_1
ImportError: cannot import name 'useful_script_1' from 'scripts' (unknown location)
</code></pre>
<p>We could stick with the earlier version of <code>invoke</code>, but that would likely catch us out at some point (and certainly catches out people with new installations, who don't expect to have to downgrade a component), so we would prefer to use the latest version. We have <a href="https://github.com/pyinvoke/invoke/issues/994" rel="nofollow noreferrer">raised an issue</a> on the GitHub site of <code>invoke</code>, but it may be that the owner/maintainer has moved on, as there have been no replies to issues raised in the last four months. We can't easily move away from <code>invoke</code>, as it is woven into our test automation system, and we would prefer not to make major changes to the locations of files/directories if it can be avoided, since that would adversely affect users of the system.</p>
<p>Is there someone here better versed in Python than us who could suggest a workaround?</p>
<p>EXTRA INFORMATION: we call <code>invoke</code> in one of two ways (and the error occurs both ways). We call it from anywhere as follows:</p>
<pre><code>inv -r <path-to-parent-directory-of-tasks> function parameters
</code></pre>
<p>...or from the parent directory of the <code>tasks</code> directory as follows:</p>
<pre><code>inv function parameters
</code></pre>
<p>...where <code>function</code> is one of the functions in a "task" script that <code>invoke</code> knows about and <code>parameters</code> are the parameters appropriate to that function.</p>
<p>If you're interested, the whole shebang can be seen <a href="https://github.com/u-blox/ubxlib/tree/master/port/platform/common/automation#pyinvoke-tasks" rel="nofollow noreferrer">here</a>.</p>
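<p>One possible workaround, sketched below (an assumption, not verified against every <code>invoke</code> version): have <code>tasks/__init__.py</code> put the project root — the directory containing both <code>scripts</code> and <code>tasks</code> — on <code>sys.path</code> itself, before any <code>from scripts import ...</code> line runs, so the import no longer depends on what the loader happens to add.</p>

```python
import os
import sys

def ensure_project_root_on_path(init_file):
    """Given the path of tasks/__init__.py, prepend the project root to sys.path."""
    root = os.path.dirname(os.path.dirname(os.path.abspath(init_file)))
    if root not in sys.path:
        sys.path.insert(0, root)
    return root

# In tasks/__init__.py this would be called as:
#   ensure_project_root_on_path(__file__)
#   from scripts import useful_script_1
```

<p>Because the path is computed from <code>__file__</code>, it works the same whether <code>inv</code> is run from the parent directory or with <code>-r</code> from elsewhere.</p>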
|
<python><pyinvoke>
|
2024-05-16 10:05:48
| 0
| 947
|
Rob
|
78,488,981
| 3,662,734
|
Set a part of weight tensor to requires_grad = True and keep rest of values to requires_grad = False
|
<p>I am doing a kind of transfer learning where I load a dense model, expand its weight tensor, and then train only the new values added by the expansion while keeping the old, trained values frozen. In this case I need to set the new weights to <code>requires_grad = True</code> and the old weights to <code>requires_grad = False</code> within the same weight tensor. I tried this, but it doesn't work:</p>
<pre><code>old_values = weight_mat[0, :, :length[0]]
old_values.requires_grad = False # 1. I tried this and they got optimized
old_values = old_values.unsqueeze(0).detach() # 2. I tried this in addition to 1 and they get optimized
new_values = weight_mat[:, :, length[0]:]
new_values.requires_grad = True
weight_mat = torch.cat((old_values, new_values), dim=-1)
</code></pre>
<p>When I print the number of non-trainable parameters of the model I get 0. I also checked the weight tensor values over epochs and found that all values are updated, even though I set <code>old_values</code> to <code>requires_grad = False</code>.</p>
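<p>The behaviour described above is consistent with <code>requires_grad</code> being a flag on a whole tensor, not on a slice: indexing returns a new tensor, so toggling the flag on the slice never affects the original. One common workaround is to keep the whole tensor trainable and zero the gradient of the frozen region with a hook — a sketch with made-up shapes standing in for the real model (and assuming a plain optimizer, where zero gradient means no update):</p>

```python
import torch

torch.manual_seed(0)
weight = torch.randn(1, 4, 6, requires_grad=True)
frozen = 3  # stand-in for length[0]: columns [:frozen] are the old, frozen values

def zero_old_grad(grad):
    # zero the gradient over the frozen slice so the optimizer never moves it
    grad = grad.clone()
    grad[..., :frozen] = 0
    return grad

weight.register_hook(zero_old_grad)

loss = (weight ** 2).sum()
loss.backward()
print(weight.grad[0, 0])  # first `frozen` entries are exactly zero
```

<p>The counting issue follows from the same fact: since the one tensor has <code>requires_grad=True</code>, any "number of non-trainable parameters" check will report 0 even with the hook in place.</p>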
|
<python><deep-learning><pytorch>
|
2024-05-16 09:46:29
| 1
| 579
|
Emily
|
78,488,965
| 10,200,497
|
How can I get the first row that meets conditions of a mask if another condition is not present before it?
|
<p>This is my DataFrame:</p>
<pre><code>import pandas as pd

df = pd.DataFrame(
    {
        'close': [109, 109, 105, 110, 105, 120, 120, 11, 90, 100],
        'high': [110, 110, 108, 108, 115, 122, 123, 1120, 1000, 300],
        'target': [107, 107, 107, 107, 107, 124, 124, 500, 500, 500]
    }
)
</code></pre>
<p>Masks are:</p>
<pre><code>m1 = (
    (df.high > df.target) &
    (df.close > df.target)
)
m2 = (
    (df.high > df.target) &
    (df.close < df.target)
)
</code></pre>
<p>Expected output is getting row <code>7</code> as output:</p>
<pre><code> close high target
7 11 1120 500
</code></pre>
<p>The process is:</p>
<p><strong>a)</strong> The grouping is by the <code>target</code> column.</p>
<p><strong>b)</strong> For each group I want to find the first row that meets conditions of <code>m2</code> IF ONLY <code>m1</code> does not have any match BEFORE it.</p>
<p>For example:</p>
<p>For group 107, there is a match for <code>m2</code> but since <code>m1</code> has a match BEFORE that, this group should be skipped.</p>
<p>For the next group which is 124 there are no rows that has a match for <code>m2</code>.</p>
<p>For group 500 there is a row and there are no rows before it that <code>m1</code> is true.</p>
<p>For each group I want at most one row meeting this condition, and from those I want the first match overall. So, for example, if rows are found for multiple groups, the one that appears first in <code>df</code> should be selected, regardless of group.</p>
<p>These are my attempts:</p>
<pre><code># attempt 1
df['a'] = m1.cummax()
df['b'] = m2.cummax()
# attempt 2
out = df[m2.cumsum().eq(1) & m2]
</code></pre>
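<p>For reference, rule (b) can be expressed with a per-group cumulative maximum of <code>m1</code>, close to the first attempt above but grouped by <code>target</code> — a sketch against the sample frame:</p>

```python
import pandas as pd

df = pd.DataFrame({
    'close':  [109, 109, 105, 110, 105, 120, 120, 11, 90, 100],
    'high':   [110, 110, 108, 108, 115, 122, 123, 1120, 1000, 300],
    'target': [107, 107, 107, 107, 107, 124, 124, 500, 500, 500],
})
m1 = (df.high > df.target) & (df.close > df.target)
m2 = (df.high > df.target) & (df.close < df.target)

# True from the first m1 hit onwards, restarting for each target group
m1_seen = m1.astype(int).groupby(df['target']).cummax().astype(bool)

# m2 rows in groups where m1 has not fired yet; first one in df order wins
out = df[m2 & ~m1_seen].head(1)
print(out)  # row 7: close=11, high=1120, target=500
```

<p>Group 107 is skipped because <code>m1</code> fires at row 0, group 124 has no <code>m2</code> match, and group 500 yields row 7, which is then the first match overall.</p>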
|
<python><pandas><dataframe>
|
2024-05-16 09:43:39
| 1
| 2,679
|
AmirX
|
78,488,678
| 2,496,293
|
Cannot reproduce numpy.typing.NBitBase example with `numpy.typing.mypy_plugin`
|
<p>I am having trouble getting the <code>numpy.typing.mypy_plugin</code> to work.
Following the <a href="https://numpy.org/devdocs/reference/typing.html#numpy.typing.NBitBase" rel="nofollow noreferrer">example</a> of how to use <code>NBitBase</code>, I get different output from mypy.</p>
<h1>Problem</h1>
<h3>expected behavior</h3>
<p>According to the docs, the output should be:</p>
<pre><code> # note: Revealed local types are:
# note: a: numpy.floating[numpy.typing._16Bit*]
# note: b: numpy.signedinteger[numpy.typing._64Bit*]
# note: out: numpy.floating[numpy.typing._64Bit*]
</code></pre>
<h3>actual behavior</h3>
<p>My output, from running <code>mypy</code> on the example code is:</p>
<pre><code>╭sam@sam-XPS-15-7590:2 ...mpy_mypy_plugin_not_working ❖ do 16 10:19
╰[poetry: numpy_mypy_plugin_not_working/.venv]$ mypy
src/example/__init__.py:22: note: Revealed local types are:
src/example/__init__.py:22: note: a: numpy.floating[numpy._typing._16Bit]
src/example/__init__.py:22: note: b: numpy.signedinteger[numpy._typing._64Bit]
src/example/__init__.py:22: note: out: numpy.floating[Union[numpy._typing._16Bit, numpy._typing._64Bit]]
</code></pre>
<h1>MWE</h1>
<p>Below is a MWE to reproduce the error.
I attached the poetry lock file in a gist <a href="https://gist.github.com/SamDM/7bc272f88933141290b0b6db269fd539" rel="nofollow noreferrer">here</a>.</p>
<p><code>src/example/__init__.py</code></p>
<pre class="lang-py prettyprint-override"><code>from typing import TYPE_CHECKING, TypeVar

import numpy as np
import numpy.typing as npt

T1 = TypeVar("T1", bound=npt.NBitBase)
T2 = TypeVar("T2", bound=npt.NBitBase)

def add(a: np.floating[T1], b: np.integer[T2]) -> np.floating[T1 | T2]:
    return a + b

a = np.float16()
b = np.int64()
out = add(a, b)

print(a)
print(b)
print(out)

if TYPE_CHECKING:
    reveal_locals()
</code></pre>
<p><code>pyproject.toml</code></p>
<pre class="lang-ini prettyprint-override"><code>[tool.poetry]
name = "example"
version = "0.1.0"
description = "Foo"
authors = ["Sam De Meyer <sam.demeyer@foo.bar>"]
packages = [{ include = "example", from = "src" }]
[tool.poetry.dependencies]
python = ">=3.11,<=3.12"
numpy = "^1.26.4"
[tool.poetry.group.dev.dependencies]
mypy = "^1.8.0"
[build-system]
requires = ["poetry-core>=1.7"]
build-backend = "poetry.core.masonry.api"
[tool.mypy]
files = ["src/example"]
python_version = "3.11"
pretty = true
plugins = ["numpy.typing.mypy_plugin"]
</code></pre>
<p>To recreate the python <code>venv</code>, put the above files under the same directory as:</p>
<pre><code>.
├── pyproject.toml
└── src
    └── example
        └── __init__.py
</code></pre>
<p>Then run <code>poetry install</code> to create a <code>venv</code> with the same package versions.</p>
<h1>What I tried</h1>
<ul>
<li><p>I verified that I was using the correct python and mypy executables:</p>
<pre><code>╭sam@sam-XPS-15-7590:2 ...mpy_mypy_plugin_not_working ❖ do 16 10:50 ✘1
╰[poetry: numpy_mypy_plugin_not_working/.venv]$ poetry show
mypy 1.10.0 Optional static typing for Python
mypy-extensions 1.0.0 Type system extensions for programs checked with the mypy type checker.
numpy 1.26.4 Fundamental package for array computing in Python
typing-extensions 4.11.0 Backported and Experimental Type Hints for Python 3.8+
╭sam@sam-XPS-15-7590:2 ...mpy_mypy_plugin_not_working ❖ do 16 10:51
╰[poetry: numpy_mypy_plugin_not_working/.venv]$ which python
/home/sam/Safe/Proj/Scratch/numpy_mypy_plugin_not_working/.venv/bin/python
╭sam@sam-XPS-15-7590:2 ...mpy_mypy_plugin_not_working ❖ do 16 10:52
╰[poetry: numpy_mypy_plugin_not_working/.venv]$ which mypy
/home/sam/Safe/Proj/Scratch/numpy_mypy_plugin_not_working/.venv/bin/mypy
</code></pre>
</li>
<li><p>I verified that mypy is picking up the plugin settings. If I make a typo in the plugin name I get:</p>
<pre><code>╭sam@sam-XPS-15-7590:2 ...mpy_mypy_plugin_not_working ❖ do 16 10:22
╰[poetry: numpy_mypy_plugin_not_working/.venv]$ mypy
pyproject.toml:1: error: Error importing plugin "numpy.typing.mypy_plugi": No module named 'numpy.typing.mypy_plugi' [misc]
[tool.poetry]
^
Found 1 error in 1 file (errors prevented further checking)
</code></pre>
<p>Which I see as sufficient proof that mypy is loading the plugin if typed correctly.</p>
</li>
<li><p>I tried disabling (commenting out) the line:</p>
<pre><code>plugins = ["numpy.typing.mypy_plugin"]
</code></pre>
<p>This resulted in the exact same (wrong) output from mypy.</p>
</li>
</ul>
|
<python><numpy><mypy>
|
2024-05-16 08:56:07
| 0
| 2,441
|
Sam De Meyer
|
78,488,599
| 18,618,577
|
unexpected reversed secondary y axis on dataframe plot
|
<p>I'm trying to plot electrical consumption, first in mA against a date, with a secondary axis in W against Julian day.
I referred to <a href="https://matplotlib.org/stable/gallery/subplots_axes_and_figures/secondary_axis.html" rel="nofollow noreferrer">this matplotlib article</a>, and even though the example works perfectly, I can't figure out where mine differs from it,
because my secondary y-axis comes out inverted, which it isn't supposed to be.</p>
<p>Here is my program:</p>
<pre><code>import datetime
import glob
import os

import matplotlib.dates as mdates
import matplotlib.pyplot as plt
import pandas as pd

path = '[...]TEMPORARY/CR1000_test_intergration/'
all_files = glob.glob(os.path.join(path, "*.dat"))

li = []
for filename in all_files:
    df = pd.read_csv(filename,
                     skiprows=[0, 2, 3],
                     header=0,
                     index_col=0)
    li.append(df)

frame = pd.concat(li, axis=0)
frame = frame.sort_values('TIMESTAMP')
frame.fillna(0)
frame.index = pd.to_datetime(frame.index, format="%Y-%m-%d %H:%M:%S")

st_date = pd.to_datetime("2024-05-12 23:30:00", format='%Y-%m-%d %H:%M:%S')
en_date = frame.index[-1]
mask = frame.loc[st_date:en_date].index
window1 = frame.loc[(frame.index >= st_date) & (frame.index <= en_date)]

# PLOT
fig, ax = plt.subplots(1, 1, figsize=(20, 6), dpi=150, sharex=True)
fig.suptitle('CUBE CONSO', fontsize=14, fontweight='bold')
fig.subplots_adjust(hspace=0)
plt.xticks(rotation=30)
ax.grid(True)
ax.xaxis.set_major_locator(mdates.HourLocator(interval=6))
ax.xaxis.set_major_formatter(mdates.DateFormatter('%m-%d | %H:%M'))
ax.set_ylabel('A')
ax.plot(window1.index, window1['R2_voltage_Avg'], color='r', linewidth=2)

def date2yday(x):
    y = x - mdates.date2num(datetime.datetime(2024, 1, 1))
    return y

def yday2date(x):
    y = x + mdates.date2num(datetime.datetime(2024, 1, 1))
    return y

secax_x = ax.secondary_xaxis('top', functions=(date2yday, yday2date))
secax_x.set_xlabel('julian day [2024]')

def ma_to_w(x):
    return (x * 12.5)

def w_to_ma(x):
    return (12.5 / (x + 0.0001))  # avoid divide by 0

secax_y = ax.secondary_yaxis('right', functions=(ma_to_w, w_to_ma))
secax_y.set_ylabel('W')
</code></pre>
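<p>One observation worth checking here (a sketch, independent of matplotlib): <code>secondary_yaxis</code> expects the two functions to be mutual inverses, and <code>12.5 / x</code> is a decreasing function of <code>x</code>, whereas the actual inverse of <code>x * 12.5</code> is <code>x / 12.5</code>:</p>

```python
# Sanity check of the transform pair passed to secondary_yaxis: the two
# functions must round-trip, i.e. w_to_ma(ma_to_w(x)) == x.
def ma_to_w(x):
    return x * 12.5

def w_to_ma_original(x):
    return 12.5 / (x + 0.0001)   # as in the program above: decreasing in x

def w_to_ma_inverse(x):
    return x / 12.5              # the true inverse, no divide-by-zero guard needed

x = 0.099
print(w_to_ma_inverse(ma_to_w(x)))   # round-trips back to 0.099
print(w_to_ma_original(ma_to_w(x)))  # does not round-trip
```
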
<p>And here is a sample of the data (the concatenated dataframe):</p>
<pre><code>TIMESTAMP RECORD R1_voltage_Avg R2_voltage_Avg out1_voltage_Avg out2_voltage_Avg
2024-05-13 00:00:00 34155 0.286 0.099 78.56 3.949
2024-05-13 00:01:00 34156 0.797 0.104 20.91 0.057
2024-05-13 00:02:00 34157 0.599 0.091 41.6 0.966
2024-05-13 00:03:00 34158 0.519 0.097 27.76 0.824
2024-05-13 00:04:00 34159 0.814 0.096 27.39 0.455
2024-05-13 00:05:00 34160 0.828 0.101 19.75 0.398
2024-05-13 00:06:00 34161 0.664 0.098 58.36 1.193
2024-05-13 00:07:00 34162 0.081 0.1 49.98 1.023
2024-05-13 00:08:00 34163 0.414 0.098 50.26 0.739
2024-05-13 00:09:00 34164 0.708 0.101 45.97 0.568
2024-05-13 00:10:00 34165 0.698 0.099 82.2 3.552
2024-05-13 00:11:00 34166 0.524 0.101 40.6 -0.54
2024-05-13 00:12:00 34167 0.793 0.093 63.76 3.864
2024-05-13 00:13:00 34168 0.72 0.086 12.76 -0.256
2024-05-13 00:14:00 34169 0.564 0.096 23.44 0.881
2024-05-13 00:15:00 34170 0.67 0.094 30.17 2.33
</code></pre>
<p>And finally the plot:</p>
<p><a href="https://i.sstatic.net/WTIrM0wX.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/WTIrM0wX.png" alt="enter image description here" /></a></p>
|
<python><pandas><matplotlib><axis>
|
2024-05-16 08:41:54
| 1
| 305
|
BenjiBoy
|
78,488,561
| 3,453,776
|
Configuring Sentry GRPC Integration: grpcio is not installed
|
<p>I'm trying to configure the Sentry GRPCIntegration in a new project, just as it's <a href="https://docs.sentry.io/platforms/python/integrations/grpc/" rel="nofollow noreferrer">described in the docs</a>. When running the application, I get this error:</p>
<pre><code>    from sentry_sdk.integrations.grpc import GRPCIntegration
  File "/usr/local/lib/python3.12/site-packages/sentry_sdk/integrations/grpc/__init__.py", line 11, in <module>
    from .client import ClientInterceptor
  File "/usr/local/lib/python3.12/site-packages/sentry_sdk/integrations/grpc/client.py", line 17, in <module>
    raise DidNotEnable("grpcio is not installed")
sentry_sdk.integrations.DidNotEnable: grpcio is not installed
</code></pre>
<p>I'm running my app in a Docker container, on an Apple Silicon machine. I have also tried running the configuration in a Python virtual environment, via the interpreter, using the same configuration as in the docs I linked:</p>
<pre><code>import sentry_sdk
from sentry_sdk.integrations.grpc import GRPCIntegration

sentry_sdk.init(
    dsn="https://7b762fa781d94283a5c76bfa4177495d@o30916.ingest.us.sentry.io/80959",
    enable_tracing=True,
    integrations=[
        GRPCIntegration(),
    ],
)
</code></pre>
</code></pre>
<p>with a proper sentry dsn, of course.</p>
<p>And I get the same error.</p>
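<p>For what it's worth, the traceback shows the integration raising <code>DidNotEnable</code> because the <code>grpcio</code> package itself cannot be imported, so a likely fix (an assumption: a pip-based install, with the standard PyPI package names) is simply to install it alongside <code>sentry-sdk</code>:</p>

```shell
# Assumed fix: add grpcio to the image / virtualenv, e.g. in the Dockerfile
# or requirements.txt.
pip install grpcio
# or pull it in via the sentry-sdk extra:
pip install "sentry-sdk[grpcio]"
```
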
|
<python><grpc><sentry>
|
2024-05-16 08:36:49
| 1
| 571
|
nnov
|
78,488,320
| 8,884,239
|
Convert each key value pair to columns of dataframe in pyspark
|
<p>I have the following array of maps, and I want to convert it into an array of structs so that all key-value pairs become columns of a dataframe:</p>
<pre><code>-- DurationPeriod: array (nullable = true)
| |-- element: map (containsNull = true)
| | |-- key: string
| | |-- value: string (valueContainsNull = true)
</code></pre>
<p><strong>Expected structure</strong></p>
<pre><code>|-- transform_col: array (nullable = true)
| |-- element: struct (containsNull = false)
| | |-- key: string (nullable = true)
| | |-- value: string (nullable = true)
</code></pre>
<p>Here is the sample data I get as an array of maps. Each map may be empty, or may have more than 2 key-value pairs; it varies for each occurrence.</p>
<pre><code>[{eod -> 2023-06-14, Id -> 123456789}, {eod -> 2028-11-17, Id -> 123456788}]
</code></pre>
<p>I am trying to convert the map to a struct, since a map does not support converting key-value pairs to columns. Please suggest if there is any solution to directly convert map key-value pairs into dataframe columns.</p>
<p>I tried the following code, but it does not pick up the key values and shows null:</p>
<pre><code>df1 = table.select("*", expr("transform(DurationPeriod, x -> struct(x.key as key, x.value as value))").alias("transform_col"))
</code></pre>
<p>which yields:</p>
<pre><code>[{null, null}, {null, null}]
</code></pre>
<p>Your help in resolving this issue is highly appreciated. Please let me know what I am missing here.</p>
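<p>A possible direction (an assumption, sketched in plain Python rather than run against Spark): Spark SQL's <code>map_entries()</code> function turns a map into an array of <code>key</code>/<code>value</code> structs, so an expression along the lines of <code>expr("flatten(transform(DurationPeriod, m -> map_entries(m)))")</code> should produce the expected schema. The snippet below only models that reshaping:</p>

```python
# Pure-Python model of map_entries() + flatten(); not Spark code.
rows = [{"eod": "2023-06-14", "Id": "123456789"},
        {"eod": "2028-11-17", "Id": "123456788"}]

# map_entries: each map -> an array of {key, value} structs
entries = [[{"key": k, "value": v} for k, v in m.items()] for m in rows]

# flatten: merge the per-map arrays into one array of structs
flat = [e for arr in entries for e in arr]
print(flat[0])  # {'key': 'eod', 'value': '2023-06-14'}
```

<p>The struct fields produced by <code>map_entries</code> could then be pivoted into columns, which the direct <code>x.key</code>/<code>x.value</code> access on a map (as in the attempt above) cannot do, since a map element has no such struct fields.</p>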
<p><strong>Update</strong></p>
<p>Here is the actual conversion, I am looking for:</p>
<p><a href="https://i.sstatic.net/MtFnoOpB.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/MtFnoOpB.png" alt="enter image description here" /></a></p>
|
<python><apache-spark><pyspark><apache-spark-sql><pyspark-schema>
|
2024-05-16 07:50:32
| 1
| 301
|
Bab
|
78,488,102
| 19,506,623
|
How to get classnames with a site in categories/subcategories?
|
<p>I'm trying to get only the classnames at each level of <a href="https://locations.bojangles.com/" rel="nofollow noreferrer">this site</a>. What do I mean by levels? Well, the site's structure has 3 or 4 levels: it shows the information in categories and subcategories, and you need to drill down to go deeper.</p>
<p>The structure is:</p>
<ol>
<li>In initial URL, the site shows links of each state (let say level 1)</li>
<li>When click on any state, it opens another url and shows one or more cities in each state (level 2)</li>
<li>When click on any city, it opens another url and shows one or more locations in each city (level 3)</li>
<li>Finally, when open details of each location, it shows the address of each location (level 4)</li>
</ol>
<p>Below is my current code, with which I've been able to get the classname of the first 3 levels, but I'm having issues with the 4th level, getting the error <code>'NoneType' object is not subscriptable</code>. My logic is to find the link (anchor HTML element) for the given state (in this case Alabama). When the element containing Alabama is found, I get its classname. The same applies for city and location.
It's important for me not to use XPath in this process.</p>
<p>Could someone help me enhance/fix my code to get the <code>address</code> classname, and to handle the case where there are only 3 levels? There are 4 levels when the city has more than one location; if the city has only one location, there are only 3 levels. Below is an image showing the classnames for a city with more than one location (Huntsville, 4 levels) and for a city with only one location (Birmingham, 3 levels).</p>
<p><a href="https://i.sstatic.net/DmFbQc4E.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/DmFbQc4E.png" alt="enter image description here" /></a></p>
<pre><code>from selenium import webdriver
from selenium.webdriver.common.by import By
import time
# function to get url and class in different levels
def get_url_class(url,level):
driver.get(url)
time.sleep(3)
links = driver.find_elements(By.TAG_NAME, "a")
for lnk in links:
if level in lnk.text:
if lnk.get_attribute("class"):
new_class = lnk.get_attribute("class")
new_url = lnk.get_attribute("href")
return [new_class,new_url]
# Open chromedriver
driver = webdriver.Chrome()
# Defining texts to search for each element
state="Alabama" # --> Level #1
city="Huntsville" # --> Level #2
location="South Memorial Pkwy, Huntsville" # --> Level #3
address="11375 South Memorial Pkwy" # --> Level #4
initial_url = "https://locations.bojangles.com/"
# Level #1 (Get state´s url and class)
level1 = get_url_class(initial_url,state)
print("className:" + level1[0] + ", url: "+ level1[1])
# Level #2 (Get cities´ url and class)
level2 = get_url_class(level1[1],city)
print("className:" + level2[0] + ", url: "+ level2[1])
# Level #3 (Get location's url and class)
level3 = get_url_class(level2[1],location)
print("className:" + level3[0] + ", url: "+ level3[1])
# Level #4 (Get address' url and class)
level4 = get_url_class(level3[1],address)
print("className:" + level4[0] + ", url: "+ level4[1])
driver.quit()
</code></pre>
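The <code>'NoneType' object is not subscriptable</code> error itself can be reproduced without Selenium: <code>get_url_class</code> returns <code>None</code> when no link matches (e.g. the 3-level Birmingham case), and the caller then subscripts <code>None</code>. A minimal stand-alone sketch with hypothetical link data and a guard:

```python
def get_url_class(links, level):
    # Same shape as the Selenium helper: return [class, url] on a match, else None.
    for text, cls, url in links:
        if level in text and cls:
            return [cls, url]
    return None  # implicit in the original -- this is what breaks level 4

# Hypothetical (classname and URL are made up for illustration).
links = [("Huntsville", "Directory-listLink", "https://example.com/huntsville")]

hit = get_url_class(links, "Huntsville")
miss = get_url_class(links, "11375 South Memorial Pkwy")

# Guard before subscripting, instead of `miss[0]` which raises TypeError.
label = "className:" + hit[0] if hit else "not found"
missing = "not found" if miss is None else "className:" + miss[0]
print(label, "/", missing)
```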
|
<python><web-scraping><selenium-chromedriver>
|
2024-05-16 07:06:00
| 0
| 737
|
Rasec Malkic
|
78,487,990
| 7,712,908
|
Finding pid for python script B execution on a terminal using popen from python script A
|
<p>I have a python script "A" that uses <code>subprocess.Popen</code> to launch a <code>gnome-terminal</code> and then executes a different python script "B" in there. I am stuck with retrieving the pid for that newly spawned <code>gnome-terminal</code> so that I can terminate it later on when I see fit. Launching the script in the separate terminal is necessary for the intention of the application.</p>
<p>This is how the Python script "B" is being executed from Python script "A":</p>
<pre><code>process = subprocess.Popen(["gnome-terminal", "--", "bash", "-c", "python scriptB.py"])
</code></pre>
<p>This is my attempt to find the pid of the process, its grandparent, its parent, and its children:</p>
<pre><code>import psutil

print(process.pid)  # prints 5152
print(psutil.Process(process.pid).children(recursive=True))  # len is 0
parent = psutil.Process(process.pid).parent()
print(parent.pid)  # prints 5151
grandparent = parent.parent()
print(grandparent.pid)  # prints 4372
</code></pre>
<p>None of these pids actually point to the pid of my interest which is in this case 5158. Terminating pid 5152 does not kill the new gnome-terminal.</p>
<p>When I run <code>ps ax</code>, I get this:</p>
<pre><code>5151 pts/0 S+ 0:00 python scriptA.py
5152 pts/0 Z+ 0:00 [gnome-terminal] <defunct>
5158 pts/1 Rs+ 0:10 /anaconda3/envs/env1/bin/python3 scriptB.py
5164 pts/2 R+ 0:00 ps ax
</code></pre>
<p>How can python script "A" find this pid (5158) the most proper way?</p>
<p><strong>Update</strong></p>
<p>On the Windows version, I am using:</p>
<pre><code>process = subprocess.Popen(["cmd", "/c", f"start cmd /k python scriptB.py"])
</code></pre>
<p>and finding the pid on Windows works the same way as in the Linux version, but I have no luck there either.</p>
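For contrast, here is a minimal sketch that spawns the interpreter directly (no terminal emulator in between) — there, <code>Popen.pid</code> is the pid of the Python child itself, which suggests it is the extra <code>gnome-terminal</code>/<code>cmd</code> layer that breaks the parent/child chain:

```python
import subprocess
import sys

# Spawn a child Python directly; no intermediary process is involved.
process = subprocess.Popen(
    [sys.executable, "-c", "import os; print(os.getpid())"],
    stdout=subprocess.PIPE, text=True,
)
reported_pid = int(process.stdout.read().strip())
process.wait()

# The pid the child reports for itself matches what Popen handed back.
print(process.pid, reported_pid)
```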
|
<python><linux><subprocess><popen><pid>
|
2024-05-16 06:40:38
| 0
| 405
|
blackbeard
|
78,487,838
| 5,409,315
|
How to use multiprocessing locks in joblib?
|
<p>I want to use a lock in joblib using backend multiprocessing or loky. It seems to be simple enough with using standard lib's multiprocessing, but with joblib it's not: It complains that the lock is not picklable:</p>
<pre><code>#!Python
from multiprocessing import Process, Lock
from joblib import Parallel, delayed
def f(l, i):
l.acquire()
try:
print('hello world', i)
finally:
l.release()
if __name__ == '__main__':
lock = Lock()
for num in range(10):
Process(target=f, args=(lock, num)).start()
Parallel(n_jobs=2)(delayed(f)(lock, i) for i in range(10, 20))
</code></pre>
<p>So far I've googled, read the documentation, searched the joblib GitHub project, and tried variants like implementing the lock as a global variable, but to no avail.</p>
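The root cause can be reproduced with the standard library alone: a plain <code>multiprocessing.Lock</code> cannot be pickled, and pickling arguments is exactly what joblib's loky/multiprocessing backends do when dispatching work to workers. A minimal sketch (the usual workaround, a <code>multiprocessing.Manager().Lock()</code> proxy, is picklable — but I state that as an assumption here, not something verified against joblib):

```python
import pickle
from multiprocessing import Lock

lock = Lock()
try:
    pickle.dumps(lock)  # joblib/loky must pickle arguments for its workers
    picklable = True
except (RuntimeError, TypeError):
    # CPython refuses: locks may only be shared through inheritance
    picklable = False
print(picklable)
```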
|
<python><multiprocessing><locking><joblib>
|
2024-05-16 06:06:23
| 1
| 604
|
Jann Poppinga
|
78,487,476
| 22,963,183
|
How does the LangChain AgentExecutor serve an LLM model?
|
<p>I followed <a href="https://github.com/zenml-io/zenml-projects/tree/main/llm-agents" rel="nofollow noreferrer">this tutorial</a></p>
<p>I made some changes:</p>
<ul>
<li><p>I built model gemma:2b from Ollama</p>
<ul>
<li><a href="https://hub.docker.com/r/ollama/ollama" rel="nofollow noreferrer">built Ollama by docker</a></li>
</ul>
</li>
<li><p>I used this model replace with ChatOpenAI</p>
</li>
</ul>
<p>Summary</p>
<ol>
<li>steps/index_generator.py</li>
</ol>
<p><a href="https://i.sstatic.net/yrDW3MM0.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/yrDW3MM0.png" alt="enter image description here" /></a></p>
<p>2. steps/agent_creator.py</p>
<p><a href="https://i.sstatic.net/51G1xyqH.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/51G1xyqH.png" alt="enter image description here" /></a></p>
<p><a href="https://i.sstatic.net/TwT7rFJj.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/TwT7rFJj.png" alt="enter image description here" /></a></p>
<p>After the pipeline ran successfully, it created an agent:</p>
<p><a href="https://i.sstatic.net/4aibpBdL.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/4aibpBdL.png" alt="enter image description here" /></a></p>
<p><a href="https://i.sstatic.net/XWYCDtFc.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/XWYCDtFc.png" alt="enter image description here" /></a></p>
<p>I want to use this agent to serving question/answer service</p>
<p>Here's what I tried in another Python script:</p>
<pre><code>from zenml import step, pipeline
from zenml.client import Client
client = Client()
agent = Client().get_artifact_version('86cb0da2-ca22-48ec-9548-410ccb073bc2') # type(agent) is langchain.agents.agent.AgentExecutor
question = "Hi!"
agent.run({"input": question,"chat_history": []})
</code></pre>
<p>It raised the error below. How can I overcome this?</p>
<pre><code>OllamaEndpointNotFoundError: Ollama call failed with status code 404. Maybe your model is not found and you should
pull the model with `ollama pull llama2`.
</code></pre>
<p><strong>Update</strong></p>
<p>I can interact with gemma model via cli</p>
<p><a href="https://i.sstatic.net/pBZ9r8Uf.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/pBZ9r8Uf.png" alt="enter image description here" /></a></p>
|
<python><langchain><large-language-model><langchain-agents>
|
2024-05-16 03:37:34
| 0
| 515
|
happy
|
78,487,385
| 3,727,079
|
Is there a way to get the average from the standard deviation in Numpy? (Or a way to feed np.std the average)
|
<p>I've got an application where I need both the average and the standard deviation of a list of data.</p>
<p>Numpy can calculate both with <code>np.mean</code> and <code>np.std</code>. However, <code>np.std</code> does not give the average, even though you need the average to <a href="https://en.wikipedia.org/wiki/Standard_deviation#Discrete_random_variable" rel="nofollow noreferrer">calculate the standard deviation from first principles</a>. This means I need to use both commands, which is inefficient.</p>
<p>Is there a way to 1) extract the average from <code>np.std</code> (thereby making it unnecessary to run <code>np.mean</code>), or 2) to give the output from <code>np.mean</code> to <code>np.std</code> (thereby making <code>np.std</code> evaluate faster)?</p>
<p>The <a href="https://numpy.org/devdocs/reference/generated/numpy.std.html" rel="nofollow noreferrer">documentation</a> suggests there's no way, which is unfortunate =/</p>
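What I mean by reusing the average can be sketched from first principles — computing the mean once and deriving the (population) standard deviation from it by hand. This mirrors the textbook formula, not any documented <code>np.std</code> shortcut:

```python
import numpy as np

data = np.array([1.0, 2.0, 3.0, 4.0, 5.0])

mean = data.mean()                           # one pass for the average
std = np.sqrt(np.mean((data - mean) ** 2))   # reuse it for the population std

# Matches the library results.
print(mean, std)
```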
|
<python><numpy>
|
2024-05-16 02:59:54
| 0
| 399
|
Allure
|
78,487,312
| 13,737,893
|
Hide input components streamlit
|
<pre><code>import os
from PIL import Image
import streamlit as st
from openai import OpenAI
from utils import (
delete_files,
delete_thread,
EventHandler,
moderation_endpoint,
is_nsfw,
render_custom_css,
render_download_files,
retrieve_messages_from_thread,
retrieve_assistant_created_files,
)
# Initialise a state variable to control the visibility of the password text input
show_password_input = True  # Initially show password input

# Initialise a state variable to control the visibility of the client dropdown
show_client_dropdown = False
st.set_page_config(page_title="BOOMIT AI",
page_icon="")
img_path = r"C:\Users\User\Desktop\Boomit\desarrollos\BOOMITAI\company_logo.png"
img = Image.open(img_path)
st.image(
img,
width=200,
channels="RGB"
)
st.subheader(" BOOMIT AI")
st.markdown("Analítica de marketing inteligente", help="[Source]()")
# Apply custom CSS
render_custom_css()
# Initialise session state variables
if "file_uploaded" not in st.session_state:
st.session_state.file_uploaded = False
if "assistant_text" not in st.session_state:
st.session_state.assistant_text = [""]
if "code_input" not in st.session_state:
st.session_state.code_input = []
if "code_output" not in st.session_state:
st.session_state.code_output = []
if "disabled" not in st.session_state:
st.session_state.disabled = False
clientes_por_equipo = {
"equipo_verde": ["BONOXS", "LAFISE PN", "LAFISE RD", "LAFISE HN", "ALIGE"],
"equipo_amarillo": ["KASH", "DLOCALGO", "BANPAIS"],
"equipo_azul": ["ZAPIA", "HANDY", "BOOMIT"]
}
# Define a placeholder option
placeholder_option = "Seleccione un equipo"
# Update the list of team options to include the placeholder
team_options = list(clientes_por_equipo.keys())
team_options.insert(0, placeholder_option)
# Selection of team
equipo_seleccionado = st.selectbox("Seleccione un equipo:", team_options, index=0, key="equipo_seleccionado")
# Check if the selected team is the placeholder
if equipo_seleccionado == placeholder_option:
# Set the selected team to None to indicate no selection
equipo_seleccionado = None
# Define team passwords in a dictionary
team_passwords = {
"equipo_verde": "verde",
"equipo_amarillo": "amarillo",
"equipo_azul": "azul"
}
# Password input and validation
if equipo_seleccionado:
if show_password_input:
password_input = st.text_input("Ingrese la contraseña del equipo:", type="password")
if st.button("Validar"):
if password_input == team_passwords.get(equipo_seleccionado):
st.success("Contraseña correcta!")
show_password_input = False # Hide password input after successful validation
show_client_dropdown = True # Show client dropdown
else:
st.error("Contraseña incorrecta. Intente nuevamente.")
# Display client dropdown only if password is correct
if show_client_dropdown:
# Password validated, display client dropdown
clientes = clientes_por_equipo.get(equipo_seleccionado, [])
if clientes:
cliente_seleccionado = st.selectbox("Selecciona un cliente:", clientes, key="cliente_seleccionado")
# Your code to proceed with selected client (optional)
# ...
# ... (rest of code)
</code></pre>
<p>I need to hide the password input box and the success message once the password has been validated, and show only the client selection input box.</p>
|
<python><streamlit>
|
2024-05-16 02:27:16
| 1
| 334
|
Maximiliano Vazquez
|
78,487,264
| 251,589
|
Pretty print why two objects are not equal
|
<p>When using <code>pytest</code>, I get nice pretty printing when two objects are not equivalent:</p>
<pre><code>Expected :Foo(id='red', other_thing='green')
Actual :Foo(id='red', other_thing='blue')
<Click to see difference>
def test_baz():
oneFoo = Foo(id="red", other_thing="blue")
twoFoo = Foo(id="red", other_thing="green")
> assert oneFoo == twoFoo
E AssertionError: assert Foo(id='red', other_thing='blue') == Foo(id='red', other_thing='green')
E
E Full diff:
E - Foo(id='red', other_thing='green')
E ? ^^ --
E + Foo(id='red', other_thing='blue')
E ? ^^^
baz.py:22: AssertionError
</code></pre>
<p>If I use an assert directly in my code, I just get an <code>AssertionError</code> and a stacktrace.</p>
<p>I am writing some integration tests right now that are NOT driven by pytest but would like to pretty print when two items (specifically Pydantic dataclasses) are not equal.</p>
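Outside pytest, something close to that output can be produced with the standard library's <code>difflib</code> on the objects' reprs. A minimal sketch of the kind of helper I have in mind (the <code>Foo</code> dataclass here is just a stand-in):

```python
import difflib
from dataclasses import dataclass

@dataclass
class Foo:
    id: str
    other_thing: str

def assert_equal_pretty(actual, expected):
    # Diff the reprs, in the spirit of pytest's assertion rewriting, then raise.
    if actual != expected:
        diff = "\n".join(difflib.ndiff([repr(expected)], [repr(actual)]))
        raise AssertionError("objects differ:\n" + diff)

one = Foo(id="red", other_thing="blue")
two = Foo(id="red", other_thing="green")
try:
    assert_equal_pretty(one, two)
except AssertionError as err:
    report = str(err)
    print(report)
```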
|
<python><testing><pydantic>
|
2024-05-16 02:11:48
| 2
| 27,385
|
sixtyfootersdude
|
78,486,997
| 678,572
|
How to reproduce `kneighbors_graph(include_self=True)` using `KNeighborsTransformer` in sklearn?
|
<p>My ultimate goal is replace some methods that use <code>kneighbors_graph</code> with transformers from the <a href="https://sklearn-ann.readthedocs.io/en/latest/" rel="nofollow noreferrer">sklearn-ann package</a>. All the methods in <code>sklearn-ann</code> are implemented as sklearn-compatible transformer objects. However, the function I'm trying to replace uses <code>kneighbors_graph(mode="connectivity", include_self=True)</code> and I'm having a hard time converting the distance output with <code>include_self=False</code> to this type of connectivity matrix. Not all the transformer objects allow for connectivity mode while including self but all provide access to distance calculations without self.</p>
<p>I'm able to reproduce the <code>kneighbors_graph(mode="connectivity", include_self=True)</code> from <code>kneighbors_graph(mode="distance", include_self=True)</code> (referring to as <code>nn_with_self</code>). However, I'm unable to reproduce it from <code>kneighbors_graph(mode="distance", include_self=False)</code> (referring to as <code>nn_without_self</code>) which is the same output as <code>KNeighborsTransformer(mode="distance").fit_transform</code>.</p>
<p>I see that the <code>nn_without_self</code> is a super set of <code>nn_with_self</code> but I don't know how the backend algorithm selects which fields are kept.</p>
<p><strong>How can I recreate <code>nn_with_self</code> from the <code>nn_without_self</code> matrix below?</strong></p>
<p><strong>Further, how can I operate on sparse matrices the whole time without converting to dense matrices?</strong></p>
<p>I tried looking at the backend code, but it's like an inception of class inheritance, and I find myself poring through several files at the same time, losing track on GitHub.</p>
<pre class="lang-py prettyprint-override"><code>import numpy as np

from sklearn.datasets import make_classification
from sklearn.neighbors import kneighbors_graph, KNeighborsTransformer
X, _ = make_classification(n_samples=10, n_features=4, n_classes=2, n_clusters_per_class=1, random_state=0)
n_neighbors=3
# Nearest neighbors
nn_with_self = kneighbors_graph(X, n_neighbors=n_neighbors, mode="distance", metric="euclidean", include_self=True,n_jobs=-1).todense()
nn_without_self = kneighbors_graph(X, n_neighbors=n_neighbors, mode="distance", metric="euclidean", include_self=False,n_jobs=-1).todense()
nn_from_transformer = KNeighborsTransformer(mode="distance", n_neighbors=n_neighbors, metric="euclidean", n_jobs=-1).fit_transform(X)
np.all(nn_from_transformer == nn_without_self)
# True
np.all(nn_with_self == nn_without_self)
# False
# Is `nn_with_self` symmetric?
np.allclose(nn_with_self,nn_with_self.T)
# False
# Is `nn_without_self` symmetric?
np.allclose(nn_without_self,nn_without_self.T)
# False
</code></pre>
<p>Here are the actual arrays:</p>
<pre class="lang-py prettyprint-override"><code>nn_with_self
# matrix([[0. , 0.70550439, 0. , 0.20463097, 0. ,
# 0. , 0. , 0. , 0. , 0. ],
# [0. , 0. , 0. , 0.51947869, 0. ,
# 0. , 0. , 0. , 0. , 0.44145655],
# [0. , 0. , 0. , 0. , 0.50025504,
# 0. , 0. , 0. , 0.49481662, 0. ],
# [0.20463097, 0.51947869, 0. , 0. , 0. ,
# 0. , 0. , 0. , 0. , 0. ],
# [0. , 0. , 0.50025504, 0. , 0. ,
# 0. , 0. , 0. , 0.34132965, 0. ],
# [0. , 0.88867318, 0. , 0. , 0. ,
# 0. , 0. , 0. , 0. , 0.44956691],
# [0. , 0. , 1.10390699, 0. , 1.52953542,
# 0. , 0. , 0. , 0. , 0. ],
# [0. , 0. , 0. , 0. , 0. ,
# 3.62670755, 0. , 0. , 0. , 3.83571739],
# [0. , 0. , 0.49481662, 0. , 0.34132965,
# 0. , 0. , 0. , 0. , 0. ],
# [0. , 0.44145655, 0. , 0. , 0. ,
# 0.44956691, 0. , 0. , 0. , 0. ]])
nn_without_self
# matrix([[0. , 0.70550439, 0. , 0.20463097, 1.02852831,
# 0. , 0. , 0. , 0. , 0. ],
# [0.70550439, 0. , 0. , 0.51947869, 0. ,
# 0. , 0. , 0. , 0. , 0.44145655],
# [0. , 0. , 0. , 0. , 0.50025504,
# 0. , 1.10390699, 0. , 0.49481662, 0. ],
# [0.20463097, 0.51947869, 0. , 0. , 0. ,
# 0. , 0. , 0. , 0. , 0.95611187],
# [1.02852831, 0. , 0.50025504, 0. , 0. ,
# 0. , 0. , 0. , 0.34132965, 0. ],
# [0. , 0.88867318, 0. , 1.40547465, 0. ,
# 0. , 0. , 0. , 0. , 0.44956691],
# [0. , 0. , 1.10390699, 0. , 1.52953542,
# 0. , 0. , 0. , 1.59848513, 0. ],
# [0. , 4.1280709 , 0. , 0. , 0. ,
# 3.62670755, 0. , 0. , 0. , 3.83571739],
# [1.36553076, 0. , 0.49481662, 0. , 0.34132965,
# 0. , 0. , 0. , 0. , 0. ],
# [0. , 0.44145655, 0. , 0.95611187, 0. ,
# 0.44956691, 0. , 0. , 0. , 0. ]])
</code></pre>
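As far as I can tell from the dumps above, <code>nn_with_self</code> is <code>nn_without_self</code> with the single largest distance in each row dropped — the self-edge displaces the farthest of the k neighbours (rows 0 and 7 above both behave this way). A hedged numpy sketch of that reconstruction, assuming no ties on the farthest neighbour, on a tiny hypothetical 3-NN matrix:

```python
import numpy as np

def with_self_from_without_self(dist):
    # Drop the largest stored distance per row: the self edge (distance 0)
    # takes that slot when include_self=True. Assumes no ties.
    out = np.asarray(dist, dtype=float).copy()
    for i, row in enumerate(out):
        nz = np.flatnonzero(row)
        if nz.size:
            out[i, nz[np.argmax(row[nz])]] = 0.0
    return out

# Hypothetical 3-NN distance matrix in mode="distance", include_self=False.
without_self = np.array([[0.0, 0.2, 0.7, 1.1],
                         [0.2, 0.0, 0.5, 0.9],
                         [0.7, 0.5, 0.0, 0.4],
                         [1.1, 0.9, 0.4, 0.0]])
with_self = with_self_from_without_self(without_self)

# From there, mode="connectivity" with include_self=True is the nonzero
# pattern plus the diagonal.
connectivity = ((with_self > 0) | np.eye(4, dtype=bool)).astype(int)
print(with_self)
print(connectivity)
```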
|
<python><arrays><numpy><matrix><nearest-neighbor>
|
2024-05-16 00:01:02
| 2
| 30,977
|
O.rka
|
78,486,966
| 15,587,184
|
LangChain Text Splitter & Docs Saving Issue
|
<p>I'm trying to use the langchain text splitters library to "chunk" or divide a massive str file that contains Sci-Fi books. I want to split it into chunks of a given size with a given length of overlap.</p>
<p>This is my code:</p>
<pre><code>from langchain_text_splitters import CharacterTextSplitter
text_splitter = CharacterTextSplitter(
chunk_size=30,
chunk_overlap=5
)
text_raw = """
Water is life's matter and matrix, mother, and medium. There is no life without water.
Save water, secure the future.
Conservation is the key to a sustainable water supply.
Every drop saved today is a resource for tomorrow.
Let's work together to keep our rivers flowing and our oceans blue.
"""
chunks=text_splitter.split_text(text_raw)
print(chunks)
print(f'\n\n {len(chunks)}')
</code></pre>
<p>But this is my output:</p>
<pre><code>["Water is life's matter and matrix, mother, and medium. There is no life without water.\nSave water, secure the future.\nConservation is the key to a sustainable water supply.\nEvery drop saved today is a resource for tomorrow.\nLet's work together to keep our rivers flowing and our oceans blue."]
1
</code></pre>
<p>My intention is to split at every 30 characters and overlap the last/leading 5</p>
<p>for instance if this is one chunk:</p>
<pre><code>'This is one Chunk after text splitting ABC'
</code></pre>
<p>Then I want my following chunk to be something like :</p>
<pre><code>'splitting ABC This is my Second Chunk ---''
</code></pre>
<p>Notice how the beginning of the next chunk overlaps the last characters of the previous chunk?</p>
<p>That's what I'm looking for, but clearly that is not how this function works. I am very new to langchain. I have checked the official documentation but haven't found an example or tutorial like the one I'm looking for.</p>
<p>I also want to write a function to save the chunks from LangChain locally. Or do we have to stick to base Python to do that?</p>
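The overlap behaviour I'm after, written out in base Python as a sliding-window chunker — this is what I expected <code>CharacterTextSplitter</code> to do, not what it actually does (it splits on a separator first):

```python
def chunk_text(text, chunk_size=30, chunk_overlap=5):
    # Step forward by chunk_size - chunk_overlap, so each chunk re-reads
    # the last `chunk_overlap` characters of the previous one.
    step = chunk_size - chunk_overlap
    return [text[i:i + chunk_size]
            for i in range(0, max(len(text) - chunk_overlap, 1), step)]

chunks = chunk_text("abcdefghij", chunk_size=4, chunk_overlap=2)
print(chunks)
```

Saving the resulting chunks locally is then plain base-Python file I/O, e.g. writing each chunk to its own numbered text file.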
|
<python><split><langchain><py-langchain>
|
2024-05-15 23:43:21
| 2
| 809
|
R_Student
|
78,486,957
| 10,906,068
|
how to properly loop and print json record in a single line
|
<p>The code below loops through the records and displays the results vertically, one after another.</p>
<p><strong>Here is what I want</strong></p>
<p>I need to print this text on a single line, e.g. <strong>Stackoverflow.com is a Programmers Question and Answering Site</strong></p>
<pre><code>import os
import json
my_json = '["Stack", "over" , "flow", ".com", "is a ", "Programmers Question and Answering Site"]'
data = json.loads(my_json)
# for result in data.values():
for result in data:
print(result)
# Display the result in a single line
</code></pre>
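For reference, joining the pieces instead of looping prints everything on one line — though with this particular list the exact spacing depends on the strings themselves (note that <code>"is a "</code> carries its own trailing space, while <code>".com"</code> does not):

```python
import json

my_json = '["Stack", "over" , "flow", ".com", "is a ", "Programmers Question and Answering Site"]'
data = json.loads(my_json)

# Concatenate the pieces and print once, instead of one print() per item.
single_line = "".join(data)
print(single_line)
```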
|
<python><json>
|
2024-05-15 23:40:26
| 1
| 2,498
|
Nancy Moore
|
78,486,683
| 6,300,467
|
PyQt6 designer won't install. Error: /usr/lib/x86_64-linux-gnu/libQt6Core.so.6: version `Qt_6.4' not found
|
<p>I have attempted to install <code>PyQt6</code> and the PyQt6 Designer, and I can't resolve the error that appears when using the following command:</p>
<pre><code>pyqt6-tools designer
</code></pre>
<p>It results in this error message:</p>
<pre><code>~/projects/.venv/lib/python3.10/site-packages/qt6_applications/Qt/bin/designer: /usr/lib/x86_64-linux-gnu/libQt6Core.so.6: version `Qt_6.4' not found (required by ~/projects/.venv/lib/python3.10/site-packages/qt6_applications/Qt/bin/designer)
</code></pre>
<p>To reproduce the error, I created a clean pip environment and installed the packages via,</p>
<pre><code>python3 -m venv .venv
source .venv/bin/activate
pip install PyQt6
pip install pyqt6-tools
pyqt6-tools designer
</code></pre>
<p>and the <code>pyqt6-tools designer</code> command fails with the error message shown above, I've looked at trying to solve the issue but with no success. I did see other people have had this issue (see <a href="https://stackoverflow.com/questions/77707139/libqt6core-so-6-version-qt-6-6-not-found">this thread</a>) but their solution didn't work for me.</p>
<p>Any help would be greatly appreciated, thank you.</p>
|
<python><linux><pyqt6>
|
2024-05-15 21:50:47
| 0
| 785
|
AlphaBetaGamma96
|
78,486,645
| 976,299
|
Django Tests Throw ORA-00942: table or view does not exist
|
<p>I haven't found any questions concerning Django tests together with Oracle DB. I am trying to run a test on an existing application, but am running into the "table or view does not exist" error.</p>
<p>I am confused, as the test runner says it is deleting and creating those tables/views.</p>
<pre><code>Creating test database for alias 'default'...
Failed (ORA-01543: tablespace 'TEST_EMS12250' already exists)
Got an error creating the test database: ORA-01543: tablespace 'TEST_EMS12250' already exists
Destroying old test database 'default'...
Creating test user...
Failed (ORA-01920: user name 'TEST_PRCQA' conflicts with another user or role name)
Got an error creating the test user: ORA-01920: user name 'TEST_PRCQA' conflicts with another user or role name
Destroying old test user...
Creating test user...
Traceback (most recent call last):
File "manage.py", line 14, in <module>
execute_manager(settings)
File "/scratch/prcbuild/releasejango_project/venv/lib/python2.7/site-packages/django/core/management/__init__.py", line 459, in execute_manager
utility.execute()
File "/scratch/prcbuild/releasejango_project/venv/lib/python2.7/site-packages/django/core/management/__init__.py", line 382, in execute
self.fetch_command(subcommand).run_from_argv(self.argv)
File "/scratch/prcbuild/releasejango_project/venv/lib/python2.7/site-packages/django/core/management/commands/test.py", line 49, in run_from_argv
super(Command, self).run_from_argv(argv)
File "/scratch/prcbuild/releasejango_project/venv/lib/python2.7/site-packages/django/core/management/base.py", line 196, in run_from_argv
self.execute(*args, **options.__dict__)
File "/scratch/prcbuild/releasejango_project/venv/lib/python2.7/site-packages/django/core/management/base.py", line 232, in execute
output = self.handle(*args, **options)
File "/scratch/prcbuild/releasejango_project/venv/lib/python2.7/site-packages/django/core/management/commands/test.py", line 72, in handle
failures = test_runner.run_tests(test_labels)
File "/scratch/prcbuild/releasejango_project/venv/lib/python2.7/site-packages/django/test/simple.py", line 381, in run_tests
old_config = self.setup_databases()
File "/scratch/prcbuild/releasejango_project/venv/lib/python2.7/site-packages/django/test/simple.py", line 317, in setup_databases
self.verbosity, autoclobber=not self.interactive)
File "/scratch/prcbuild/releasejango_project/venv/lib/python2.7/site-packages/django/db/backends/creation.py", line 271, in create_test_db
load_initial_data=False)
File "/scratch/prcbuild/releasejango_project/venv/lib/python2.7/site-packages/django/core/management/__init__.py", line 150, in call_command
return klass.execute(*args, **defaults)
File "/scratch/prcbuild/releasejango_project/venv/lib/python2.7/site-packages/django/core/management/base.py", line 232, in execute
output = self.handle(*args, **options)
File "/scratch/prcbuild/releasejango_project/venv/lib/python2.7/site-packages/django/core/management/base.py", line 371, in handle
return self.handle_noargs(**options)
File "/scratch/prcbuild/releasejango_project/venv/lib/python2.7/site-packages/django/core/management/commands/syncdb.py", line 102, in handle_noargs
cursor.execute(statement)
File "/scratch/prcbuild/releasejango_project/venv/lib/python2.7/site-packages/django/db/backends/oracle/base.py", line 691, in execute
return self.cursor.execute(query, self._param_generator(params))
django.db.utils.DatabaseError: ORA-00942: table or view does not exist
</code></pre>
<p>This is Python 2.7 and Django 1.4.22.</p>
|
<python><django><oracle-database><python-2.7>
|
2024-05-15 21:39:49
| 1
| 572
|
Casey
|
78,486,622
| 12,415,855
|
How to read the font color of an excel-cell?
|
<p>I have the following Excel sheet and am trying to read the font color of cell B3:</p>
<p><a href="https://i.sstatic.net/26Kf3DFM.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/26Kf3DFM.png" alt="enter image description here" /></a></p>
<p>using the following code:</p>
<pre><code>import openpyxl as ox
wb = ox.load_workbook("test.xlsx")
ws = wb.worksheets[0]
print(ws["B3"].font.color)
</code></pre>
<p>But the output is as follows, and the rgb value is None:</p>
<pre><code>$ python test2.py
<openpyxl.styles.colors.Color object>
Parameters:
rgb=None, indexed=10, auto=None, theme=None, tint=0.0, type='indexed'
(xlwings)
</code></pre>
<p>How can I get the rgb value of the font color?
(Other solutions with e.g. xlwings or xlrd would of course be fine too.)</p>
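If it helps: <code>type='indexed'</code> with <code>indexed=10</code> refers to Excel's legacy indexed palette rather than a stored rgb value, so the rgb has to be looked up. A sketch with a hand-written fragment of that palette — the mapping below is my assumption about the legacy palette, not something read from openpyxl (openpyxl also ships a full table as <code>openpyxl.styles.colors.COLOR_INDEX</code>):

```python
# Fragment of Excel's legacy indexed colour palette (assumed ARGB values).
LEGACY_INDEXED_RGB = {
    8: "FF000000",   # black
    9: "FFFFFFFF",   # white
    10: "FFFF0000",  # red
    11: "FF00FF00",  # green
    12: "FF0000FF",  # blue
}

def font_rgb(color_rgb, color_indexed):
    # Prefer an explicit rgb; fall back to the indexed palette lookup.
    if color_rgb is not None:
        return color_rgb
    return LEGACY_INDEXED_RGB.get(color_indexed)

print(font_rgb(None, 10))
```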
|
<python><openpyxl><xlrd><xlwings>
|
2024-05-15 21:32:17
| 0
| 1,515
|
Rapid1898
|
78,486,471
| 678,572
|
How to add a .transform Nystroem method to project new observations into an existing space? (Diffusion Maps in Python)
|
<p>I am copying over some code from <a href="https://github.com/satra/mapalign/blob/3e8c7af51355896666e24d49544b1afa47e78364/mapalign/embed.py#L204" rel="nofollow noreferrer">mapalign</a> for <a href="https://github.com/scikit-learn/scikit-learn/issues/5818" rel="nofollow noreferrer">calculating diffusion maps using the sklearn api</a>. Currently, there is no <code>.transform</code> method so I've forked the repo and I'm trying to add it myself but I'm having trouble.</p>
<p>Essentially, I'm looking for the following usage, where <code>X</code> and <code>Y</code> are tabular arrays of shape (N, m) with the same number of columns (m):</p>
<pre class="lang-py prettyprint-override"><code>model = DiffusionMapEmbedding()
#model.fit(X)
#X_embedding = model.transform(X)
X_embedding = model.fit_transform(X)
Y_embedding = model.transform(Y)
</code></pre>
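For context, the Nyström extension I want <code>.transform</code> to perform amounts to: given the fitted eigenvectors V and eigenvalues λ of the (normalized) affinity matrix, a new kernel row k(y, X) is projected as k(y, X) @ V / λ. A minimal numpy sketch of just that linear-algebra step, on hypothetical data and ignoring the α-normalization details handled in the code below:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(8, 3))

# Symmetric affinity matrix on the training data (Gaussian kernel).
D2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
K = np.exp(-D2)

# "Fit": eigendecomposition of the training kernel.
lambdas, V = np.linalg.eigh(K)
lambdas, V = lambdas[::-1], V[:, ::-1]   # descending order

def nystroem_transform(K_new_vs_train):
    # Project new kernel rows onto the fitted eigenbasis.
    return K_new_vs_train @ V / lambdas

# Sanity check: transforming the training kernel itself must reproduce V,
# because K V = V diag(lambdas)  =>  K V diag(1/lambdas) = V.
V_again = nystroem_transform(K)
```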
<p>Here's the code from <code>mapalign</code> for the <code>DiffusionMapEmbedding</code> class:</p>
<pre class="lang-py prettyprint-override"><code>from __future__ import annotations
import os,sys,gzip,pickle,warnings
from typing import TypeVar
import numpy as np
from numpy.typing import NDArray
import scipy.sparse as sps
from scipy.spatial.distance import pdist, squareform
from sklearn.base import BaseEstimator
from sklearn.neighbors import kneighbors_graph
from sklearn.utils import check_array, check_random_state
from sklearn.manifold._spectral_embedding import _graph_is_connected
from sklearn.metrics.pairwise import rbf_kernel
def compute_affinity(X, method='markov', eps=None, metric='euclidean'):
"""Compute the similarity or affinity matrix between the samples in X
:param X: A set of samples with number of rows > 1
:param method: 'markov' or 'cauchy' kernel (default: markov)
:param eps: scaling factor for kernel
:param metric: metric to compute pairwise distances
:return: a similarity matrix
>>> X = np.array([[1,2,3,4,5], [1,2,9,4,4]])
>>> np.allclose(compute_affinity(X, eps=1e3), [[1., 0.96367614], [ 0.96367614, 1.]])
True
>>> X = np.array([[1,2,3,4,5], [1,2,9,4,4]])
>>> np.allclose(compute_affinity(X, 'cauchy', eps=1e3), [[0.001, 0.00096432], [ 0.00096432, 0.001 ]])
True
"""
D = squareform(pdist(X, metric=metric))
if eps is None:
k = int(max(2, np.round(D.shape[0] * 0.01)))
eps = 2 * np.median(np.sort(D, axis=0)[k+1, :])**2
if method == 'markov':
affinity_matrix = np.exp(-(D * D) / eps)
elif method == 'cauchy':
affinity_matrix = 1./(D * D + eps)
else:
raise ValueError("Unknown method: {}".format(method))
return affinity_matrix
def compute_diffusion_map(L, alpha=0.5, n_components=None, diffusion_time=0,
skip_checks=False, overwrite=False,
eigen_solver="eigs", return_result=False):
"""
Original Source:
https://github.com/satra/mapalign/blob/master/mapalign/embed.py
Compute the diffusion maps of a symmetric similarity matrix
L : matrix N x N
L is symmetric and L(x, y) >= 0
alpha: float [0, 1]
Setting alpha=1 and the diffusion operator approximates the
Laplace-Beltrami operator. We then recover the Riemannian geometry
of the data set regardless of the distribution of the points. To
describe the long-term behavior of the point distribution of a
system of stochastic differential equations, we can use alpha=0.5
and the resulting Markov chain approximates the Fokker-Planck
diffusion. With alpha=0, it reduces to the classical graph Laplacian
normalization.
n_components: int
The number of diffusion map components to return. Due to the
spectrum decay of the eigenvalues, only a few terms are necessary to
achieve a given relative accuracy in the sum M^t.
diffusion_time: float >= 0
use the diffusion_time (t) step transition matrix M^t
t not only serves as a time parameter, but also has the dual role of
scale parameter. One of the main ideas of diffusion framework is
that running the chain forward in time (taking larger and larger
powers of M) reveals the geometric structure of X at larger and
larger scales (the diffusion process).
t = 0 empirically provides a reasonable balance from a clustering
perspective. Specifically, the notion of a cluster in the data set
is quantified as a region in which the probability of escaping this
region is low (within a certain time t).
skip_checks: bool
Avoid expensive pre-checks on input data. The caller has to make
sure that input data is valid or results will be undefined.
overwrite: bool
Optimize memory usage by re-using input matrix L as scratch space.
References
----------
[1] https://en.wikipedia.org/wiki/Diffusion_map
[2] Coifman, R.R.; S. Lafon. (2006). "Diffusion maps". Applied and
Computational Harmonic Analysis 21: 5-30. doi:10.1016/j.acha.2006.04.006
"""
if isinstance(eigen_solver, str):
assert eigen_solver in {"eigs", "eigsh"}, "eigen_solver must either be a string: [eigs, eigsh] or a callable: {} [type:{}]".format(eigen_solver, type(eigen_solver))
if eigen_solver == "eigs":
eigen_solver = sps.linalg.eigs
if eigen_solver == "eigsh":
eigen_solver = sps.linalg.eigsh
assert hasattr(eigen_solver, "__call__"), "eigen_solver must either be a string: [eigs, eigsh] or a callable: {} [type:{}]".format(eigen_solver, type(eigen_solver))
use_sparse = False
if sps.issparse(L):
use_sparse = True
if not skip_checks:
if not _graph_is_connected(L):
raise ValueError('Graph is disconnected')
ndim = L.shape[0]
if overwrite:
L_alpha = L
else:
L_alpha = L.copy()
if alpha > 0:
# Step 2
d = np.array(L_alpha.sum(axis=1)).flatten()
d_alpha = np.power(d, -alpha)
if use_sparse:
L_alpha.data *= d_alpha[L_alpha.indices]
L_alpha = sps.csr_matrix(L_alpha.transpose().toarray())
L_alpha.data *= d_alpha[L_alpha.indices]
L_alpha = sps.csr_matrix(L_alpha.transpose().toarray())
else:
L_alpha = d_alpha[:, np.newaxis] * L_alpha
L_alpha = L_alpha * d_alpha[np.newaxis, :]
# Step 3
d_alpha = np.power(np.array(L_alpha.sum(axis=1)).flatten(), -1)
if use_sparse:
L_alpha.data *= d_alpha[L_alpha.indices]
else:
L_alpha = d_alpha[:, np.newaxis] * L_alpha
M = L_alpha
# Step 4
if n_components is not None:
lambdas, vectors = eigen_solver(M, k=n_components + 1)
else:
lambdas, vectors = eigen_solver(M, k=max(2, int(np.sqrt(ndim))))
del M
if eigen_solver == sps.linalg.eigsh:
lambdas = lambdas[::-1]
vectors = vectors[:, ::-1]
else:
lambdas = np.real(lambdas)
vectors = np.real(vectors)
lambda_idx = np.argsort(lambdas)[::-1]
lambdas = lambdas[lambda_idx]
vectors = vectors[:, lambda_idx]
return _step_5(lambdas, vectors, ndim, n_components, diffusion_time)
def _step_5(lambdas, vectors, ndim, n_components, diffusion_time):
"""
Original Source:
https://github.com/satra/mapalign/blob/master/mapalign/embed.py
This is a helper function for diffusion map computation.
The lambdas have been sorted in decreasing order.
The vectors are ordered according to lambdas.
"""
psi = vectors/vectors[:, [0]]
diffusion_times = diffusion_time
if diffusion_time == 0:
diffusion_times = np.exp(1. - np.log(1 - lambdas[1:])/np.log(lambdas[1:]))
lambdas = lambdas[1:] / (1 - lambdas[1:])
else:
lambdas = lambdas[1:] ** float(diffusion_time)
lambda_ratio = lambdas/lambdas[0]
threshold = max(0.05, lambda_ratio[-1])
n_components_auto = np.amax(np.nonzero(lambda_ratio > threshold)[0])
n_components_auto = min(n_components_auto, ndim)
if n_components is None:
n_components = n_components_auto
embedding = psi[:, 1:(n_components + 1)] * lambdas[:n_components][None, :]
result = dict(lambdas=lambdas, vectors=vectors,
n_components=n_components, diffusion_times=diffusion_times,
n_components_auto=n_components_auto, embedding=embedding)
return result
def compute_diffusion_map_psd(
X, alpha=0.5, n_components=None, diffusion_time=0):
"""
Original Source:
https://github.com/satra/mapalign/blob/master/mapalign/embed.py
This variant requires L to be dense, positive semidefinite and entrywise
positive with decomposition L = dot(X, X.T).
"""
# Redefine X such that L is normalized in a way that is analogous
# to a generalization of the normalized Laplacian.
d = X.dot(X.sum(axis=0)) ** (-alpha)
X = X * d[:, np.newaxis]
# Decompose M = D^-1 X X^T
# This is like
# M = D^-1/2 D^-1/2 X (D^-1/2 X).T D^1/2
# Substituting U = D^-1/2 X we have
# M = D^-1/2 U U.T D^1/2
# which is a diagonal change of basis of U U.T
# which itself can be decomposed using svd.
d = np.sqrt(X.dot(X.sum(axis=0)))
U = X / d[:, np.newaxis]
if n_components is not None:
u, s, vh = sps.linalg.svds(U, k=n_components+1, return_singular_vectors=True)
else:
        k = max(2, int(np.sqrt(X.shape[0])))  # note: `ndim` is undefined in this function; use the sample count
u, s, vh = sps.linalg.svds(U, k=k, return_singular_vectors=True)
# restore the basis and the arbitrary norm of 1
u = u / d[:, np.newaxis]
u = u / np.linalg.norm(u, axis=0, keepdims=True)
lambdas = s*s
vectors = u
# sort the lambdas in decreasing order and reorder vectors accordingly
lambda_idx = np.argsort(lambdas)[::-1]
lambdas = lambdas[lambda_idx]
vectors = vectors[:, lambda_idx]
return _step_5(lambdas, vectors, X.shape[0], n_components, diffusion_time)
class DiffusionMapEmbedding(BaseEstimator):
"""
Original Source:
https://github.com/satra/mapalign/blob/master/mapalign/embed.py
Diffusion map embedding for non-linear dimensionality reduction.
Forms an affinity matrix given by the specified function and
applies spectral decomposition to the corresponding graph laplacian.
The resulting transformation is given by the value of the
eigenvectors for each data point.
Note : Laplacian Eigenmaps is the actual algorithm implemented here.
Read more in the :ref:`User Guide <spectral_embedding>`.
Parameters
----------
diffusion_time : float
Determines the scaling of the eigenvalues of the Laplacian
alpha : float, optional, default: 0.5
        Setting alpha=1, the diffusion operator approximates the
Laplace-Beltrami operator. We then recover the Riemannian geometry
of the data set regardless of the distribution of the points. To
describe the long-term behavior of the point distribution of a
system of stochastic differential equations, we can use alpha=0.5
and the resulting Markov chain approximates the Fokker-Planck
diffusion. With alpha=0, it reduces to the classical graph Laplacian
normalization.
n_components : integer, default: 2
The dimension of the projected subspace.
eigen_solver : {None, 'eigs' or 'eigsh'}
The eigenvalue decomposition strategy to use.
random_state : int, RandomState instance or None, optional, default: None
A pseudo random number generator used for the initialization of the
lobpcg eigenvectors. If int, random_state is the seed used by the
random number generator; If RandomState instance, random_state is the
random number generator; If None, the random number generator is the
RandomState instance used by `np.random`. Used when ``solver`` ==
'amg'.
affinity : string or callable, default : "nearest_neighbors"
How to construct the affinity matrix.
- 'nearest_neighbors' : construct affinity matrix by knn graph
- 'rbf' : construct affinity matrix by rbf kernel
- 'markov': construct affinity matrix by Markov kernel
- 'cauchy': construct affinity matrix by Cauchy kernel
- 'precomputed' : interpret X as precomputed affinity matrix
- callable : use passed in function as affinity
the function takes in data matrix (n_samples, n_features)
and return affinity matrix (n_samples, n_samples).
gamma : float, optional
Kernel coefficient for pairwise distance (rbf, markov, cauchy)
metric : string, optional
Metric for scipy pdist function used to compute pairwise distances
for markov and cauchy kernels
n_neighbors : int, default : max(n_samples/10 , 1)
Number of nearest neighbors for nearest_neighbors graph building.
use_variant : boolean, default : False
        Use a variant that requires L to be dense, positive semidefinite and
entrywise positive with decomposition L = dot(X, X.T).
n_jobs : int, optional (default = 1)
The number of parallel jobs to run.
If ``-1``, then the number of jobs is set to the number of CPU cores.
Attributes
----------
embedding_ : array, shape = (n_samples, n_components)
Spectral embedding of the training matrix.
affinity_matrix_ : array, shape = (n_samples, n_samples)
Affinity_matrix constructed from samples or precomputed.
References
----------
- Lafon, Stephane, and Ann B. Lee. "Diffusion maps and coarse-graining: A
unified framework for dimensionality reduction, graph partitioning, and
data set parameterization." Pattern Analysis and Machine Intelligence,
IEEE Transactions on 28.9 (2006): 1393-1403.
https://doi.org/10.1109/TPAMI.2006.184
- Coifman, Ronald R., and Stephane Lafon. Diffusion maps. Applied and
Computational Harmonic Analysis 21.1 (2006): 5-30.
https://doi.org/10.1016/j.acha.2006.04.006
"""
def __init__(self, diffusion_time=0, alpha=0.5, n_components=2,
affinity="nearest_neighbors", gamma=None,
metric='euclidean', random_state=None, eigen_solver="eigs",
n_neighbors=None, use_variant=False, n_jobs=1):
self.diffusion_time = diffusion_time
self.alpha = alpha
self.n_components = n_components
self.affinity = affinity
self.gamma = gamma
self.metric = metric
self.random_state = random_state
self.eigen_solver = eigen_solver
self.n_neighbors = n_neighbors
self.use_variant = use_variant
self.n_jobs = n_jobs
@property
def _pairwise(self):
return self.affinity == "precomputed"
def _get_affinity_matrix(self, X, Y=None):
"""Calculate the affinity matrix from data
Parameters
----------
X : array-like, shape (n_samples, n_features)
Training vector, where n_samples is the number of samples
and n_features is the number of features.
If affinity is "precomputed"
X : array-like, shape (n_samples, n_samples),
Interpret X as precomputed adjacency graph computed from
samples.
Returns
-------
affinity_matrix, shape (n_samples, n_samples)
"""
if self.affinity == 'precomputed':
self.affinity_matrix_ = X
return self.affinity_matrix_
if self.affinity == 'nearest_neighbors':
if sps.issparse(X):
warnings.warn("Nearest neighbors affinity currently does "
"not support sparse input, falling back to "
"rbf affinity")
self.affinity = "rbf"
else:
self.n_neighbors_ = (self.n_neighbors
if self.n_neighbors is not None
else max(int(X.shape[0] / 10), 1))
self.affinity_matrix_ = kneighbors_graph(X, self.n_neighbors_,
include_self=True,
n_jobs=self.n_jobs)
# currently only symmetric affinity_matrix supported
self.affinity_matrix_ = 0.5 * (self.affinity_matrix_ +
self.affinity_matrix_.T)
return self.affinity_matrix_
if self.affinity == 'rbf':
self.gamma_ = (self.gamma
if self.gamma is not None else 1.0 / X.shape[1])
self.affinity_matrix_ = rbf_kernel(X, gamma=self.gamma_)
return self.affinity_matrix_
if self.affinity in ['markov', 'cauchy']:
self.affinity_matrix_ = compute_affinity(X,
method=self.affinity,
eps=self.gamma,
metric=self.metric)
return self.affinity_matrix_
self.affinity_matrix_ = self.affinity(X)
return self.affinity_matrix_
def fit(self, X, y=None):
"""Fit the model from data in X.
Parameters
----------
X : array-like, shape (n_samples, n_features)
Training vector, where n_samples is the number of samples
and n_features is the number of features.
If affinity is "precomputed"
X : array-like, shape (n_samples, n_samples),
Interpret X as precomputed adjacency graph computed from
samples.
Returns
-------
self : object
Returns the instance itself.
"""
X = check_array(X, ensure_min_samples=2, estimator=self)
random_state = check_random_state(self.random_state)
if isinstance(self.affinity, (str,)):
if self.affinity not in set(("nearest_neighbors", "rbf",
"markov", "cauchy",
"precomputed")):
raise ValueError(("%s is not a valid affinity. Expected "
"'precomputed', 'rbf', 'nearest_neighbors' "
"or a callable.") % self.affinity)
elif not callable(self.affinity):
raise ValueError(("'affinity' is expected to be an affinity "
"name or a callable. Got: %s") % self.affinity)
affinity_matrix = self._get_affinity_matrix(X)
if self.use_variant:
result = compute_diffusion_map_psd(affinity_matrix,
alpha=self.alpha,
n_components=self.n_components,
diffusion_time=self.diffusion_time)
else:
result = compute_diffusion_map(affinity_matrix,
alpha=self.alpha,
n_components=self.n_components,
diffusion_time=self.diffusion_time,
eigen_solver=self.eigen_solver)
for k in ["lambdas", "vectors", "diffusion_times", "embedding"]:
v = result[k]
setattr(self, "{}_".format(k), v)
return self
def fit_transform(self, X, y=None):
"""Fit the model from data in X and transform X.
Parameters
----------
X : array-like, shape (n_samples, n_features)
Training vector, where n_samples is the number of samples
and n_features is the number of features.
If affinity is "precomputed"
X : array-like, shape (n_samples, n_samples),
Interpret X as precomputed adjacency graph computed from
samples.
Returns
-------
X_new : array-like, shape (n_samples, n_components)
"""
self.fit(X)
return self.embedding_
</code></pre>
<p>Here's a test dataset:</p>
<pre class="lang-py prettyprint-override"><code>import matplotlib.pyplot as plt
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
X, targets = make_classification(n_samples=1000, n_features=10, n_classes=2, n_clusters_per_class=1, random_state=0)
X, Y, x_targets, y_targets = train_test_split(X, targets, test_size=0.1, random_state=0)
model = DiffusionMapEmbedding(random_state=0)
model.fit(X)
plt.scatter(model.embedding_[:,0], model.embedding_[:,1], c=x_targets, edgecolor="black", linewidths=0.5)
</code></pre>
<p><a href="https://i.sstatic.net/kPMLa7b8m.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/kPMLa7b8m.png" alt="enter image description here" /></a></p>
<p>I've been trying to use the eigenvectors to transform new data but it's not as straightforward as I had hoped. Step 5 seems to be the most relevant section here but it looks like I'll need to run the eigensolver for the new data as well which would basically just be rerunning the algorithm with the new dataset (and refitting the model):</p>
<pre class="lang-py prettyprint-override"><code> psi = vectors/vectors[:, [0]]
diffusion_times = diffusion_time
if diffusion_time == 0:
diffusion_times = np.exp(1. - np.log(1 - lambdas[1:])/np.log(lambdas[1:]))
lambdas = lambdas[1:] / (1 - lambdas[1:])
else:
lambdas = lambdas[1:] ** float(diffusion_time)
...
embedding = psi[:, 1:(n_components + 1)] * lambdas[:n_components][None, :]
</code></pre>
<p>pyDiffMap has an implementation of <a href="https://github.com/DiffusionMapsAcademics/pyDiffMap/blob/22adc99faa83708e9ac05224015fa02c3a7f3c91/src/pydiffmap/diffusion_map.py#L294" rel="nofollow noreferrer">Nystroem out-of-sample extensions used to calculate the values of the diffusion coordinates at each given point</a>. The backend implementations of the algorithms are different, so I'm not sure if I can just port this method over. pyDiffMap also implements a <a href="https://github.com/DiffusionMapsAcademics/pyDiffMap/blob/22adc99faa83708e9ac05224015fa02c3a7f3c91/src/pydiffmap/diffusion_map.py#L321" rel="nofollow noreferrer">power-like method</a>, but this is not the default, so the Nystroem approach is preferred unless it can be shown that the power-like approach works better for the <code>mapalign</code> implementation.</p>
<p><strong>Can someone help me figure out how to calculate diffusion coordinates for out-of-sample data either using the Nystroem method another one that is appropriate?</strong> That is, implementing a .transform method for this class.</p>
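<p>A sketch of the Nystroem idea, written against this code's own normalization (Steps 2-4 of <code>compute_diffusion_map</code>). Everything here is my own construction, not mapalign's API: an rbf affinity is assumed, <code>nystroem_transform</code> is a hypothetical name, and the new points' degrees are approximated from their kernel rows against the training set:</p>

```python
import numpy as np
from sklearn.metrics.pairwise import rbf_kernel

def nystroem_transform(X_new, X_train, vectors, lambdas, alpha=0.5, gamma=None):
    """Extend the raw eigenvectors of the Markov matrix M to unseen points."""
    gamma = gamma if gamma is not None else 1.0 / X_train.shape[1]
    K_tt = rbf_kernel(X_train, X_train, gamma=gamma)
    K_nt = rbf_kernel(X_new, X_train, gamma=gamma)
    # alpha-normalize (Steps 2-3): training degrees for the columns, and the
    # new points' kernel-row sums as an approximation of their degrees
    d_t = K_tt.sum(axis=1) ** (-alpha)
    d_n = K_nt.sum(axis=1) ** (-alpha)
    L_nt = (d_n[:, None] * K_nt) * d_t[None, :]
    M_nt = L_nt / L_nt.sum(axis=1, keepdims=True)  # row-stochastic, like Step 3
    # Nystroem extension: v_j(y) = (1/lambda_j) * sum_i M(y, x_i) * v_j(x_i)
    return (M_nt @ vectors[:, 1:]) / lambdas[1:][None, :]
```

<p>Note this consumes the <em>raw</em> Step-4 eigenvalues; the class only stores the <code>_step_5</code>-transformed <code>lambdas_</code>, so a real <code>.transform</code> would also need to retain the untransformed spectrum and reapply the <code>_step_5</code> scaling to the extended coordinates.</p>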
|
<python><numpy><scikit-learn><transform><dimensionality-reduction>
|
2024-05-15 20:54:37
| 0
| 30,977
|
O.rka
|
78,486,451
| 16,547,860
|
Connecting to MSSQL Server database using pyspark
|
<p>I am new to pyspark and trying to connect to mssql server database. Here are the details:
This gets printed when I run the script I have.</p>
<pre><code>('Processing table:', u'POL_ACTION_AMEND')
('Table schema:', u'dbo')
('Source_database:', u'PRD01_IPS')
('SQL Query:', '(SELECT TOP 100 * FROM PRD01_IPS.dbo.POL_EVENT_HISTORY)')
('jdbc_uri:', 'jdbc:jtds:sqlserver://REPLCLPRD01\REPL/PRD01_IPS')
</code></pre>
<p>But spark.read.format() is throwing an error when trying to load the data from the table.</p>
<pre><code>select_statement_sql_server = "(SELECT TOP 100 * FROM {source_database}.{schema}.{table_name})"
# Read data for a table
data_df = (
spark.read.format("jdbc")
.option("url",source_jdbc_uri)
.option("user",source_jdbc_user)
.option("password",source_jdbc_pass)
.option("driver", source_jdbc_driver)
.option("dbtable", select_statement_sql_server.format(
source_database=source_database, schema=schema, table_name='POL_EVENT_HISTORY'
))
.load()
)
data_df.show()
</code></pre>
<p>The <strong>error</strong> I am getting is:</p>
<pre><code>('SQL Query:', '(SELECT TOP 100 * FROM PRD01_IPS.dbo.POL_EVENT_HISTORY)')
Traceback (most recent call last):
File "/pkg/lxd0bigd/Talend_To_Pyspark/PSTAR_IPS/200job_IPS.py", line 136, in <module>
.option("dbtable", select_statement_sql_server.format(source_database=source_database, schema=schema, table_name='POL_EVENT_HISTORY')) \
File "/opt/cloudera/parcels/CDH-7.1.8-1.cdh7.1.8.p0.30990532/lib/spark/python/lib/pyspark.zip/pyspark/sql/readwriter.py", line 172, in load
File "/opt/cloudera/parcels/CDH-7.1.8-1.cdh7.1.8.p0.30990532/lib/spark/python/lib/py4j-0.10.7-src.zip/py4j/java_gateway.py", line 1257, in __call__
File "/opt/cloudera/parcels/CDH-7.1.8-1.cdh7.1.8.p0.30990532/lib/spark/python/lib/pyspark.zip/pyspark/sql/utils.py", line 63, in deco
File "/opt/cloudera/parcels/CDH-7.1.8-1.cdh7.1.8.p0.30990532/lib/spark/python/lib/py4j-0.10.7-src.zip/py4j/protocol.py", line 328, in get_return_value
py4j.protocol.Py4JJavaError: An error occurred while calling o236.load.
: java.lang.NullPointerException
at org.apache.spark.sql.execution.datasources.jdbc.JDBCRDD$.resolveTable(JDBCRDD.scala:71)
at org.apache.spark.sql.execution.datasources.jdbc.JDBCRelation$.getSchema(JDBCRelation.scala:211)
at org.apache.spark.sql.execution.datasources.jdbc.JdbcRelationProvider.createRelation(JdbcRelationProvider.scala:35)
at org.apache.spark.sql.execution.datasources.DataSource.resolveRelation(DataSource.scala:332)
at org.apache.spark.sql.DataFrameReader.loadV1Source(DataFrameReader.scala:243)
at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:231)
at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:187)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244)
at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357)
at py4j.Gateway.invoke(Gateway.java:282)
at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132)
at py4j.commands.CallCommand.execute(CallCommand.java:79)
at py4j.GatewayConnection.run(GatewayConnection.java:238)
at java.lang.Thread.run(Thread.java:750)
</code></pre>
<p>I have a <strong>shell script</strong> to run the script which has spark-submit command.</p>
<pre class="lang-bash prettyprint-override"><code>spark-submit --master yarn --deploy-mode client \
--driver-class-path ${DRIVER_AND_JAR_FILE_PATH} \
--jars ${DRIVER_AND_JAR_FILE_PATH},${JAR_FILE_XML} \
--conf "spark.dynamicAllocation.enabled=true" \
--conf "spark.yarn.dist.files=/etc/hive/conf.cloudera.hive/hive-site.xml" \
${ROOT_DIR}/200job_IPS.py \
--hdfspath ${XML_FILE_HDFS} >> ${LOGFILE_DIR}/200job_log.txt 2>&1
sh /pkg/lxd0bigd/Talend_To_Pyspark/PSTAR_IPS/Ingestion_PSTAR.sh dev \
/pkg/lxd0bigd/Talend_To_Pyspark/spark-jobs/lib/mssql-jdbc-6.2.1.jre7.jar
</code></pre>
<p>The DRIVER_AND_JAR_FILE_PATH is what I am not sure about. I have 3 jar files and am not sure which one should be used. I tried all 3, and the same issue exists.</p>
<p><strong>mssql-jdbc-6.2.1.jre7.jar, sqljdbc41.jar, jtds-1.3.1-patch.jar</strong></p>
<p>Any suggestions and solutions will be greatly helpful.</p>
|
<python><apache-spark><pyspark><jdbc><mssql-jdbc>
|
2024-05-15 20:50:17
| 2
| 312
|
Shiva
|
78,486,446
| 11,751,799
|
Border around plotly figure
|
<p>I have some <code>plotly</code> plots that I like and just need one final piece before I can present them: a border around the entire figure. I have gotten this to work in <code>matplotlib.pyplot</code> via <code>fig.patch.set_linewidth</code> and <code>fig.patch.set_edgecolor</code>, but I have not been successful in <code>plotly</code>.</p>
<p>Is there not just a simple function to add a border around my whole <code>plotly</code> figure like there is in <code>matplotlib</code>?</p>
<p>From the code below, I have this.</p>
<pre class="lang-py prettyprint-override"><code>import plotly.graph_objects as go
x = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
y = [1, 3, 2, 4, 3, 5, 4, 6, 5, 6]
fig = go.Figure(data=go.Scatter(x = x, y = y))
fig.show()
</code></pre>
<p><a href="https://i.sstatic.net/iVpi3Adj.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/iVpi3Adj.png" alt="what I have" /></a></p>
<p>What I want it to create something more like this, with a border (not necessarily green) around the very edge of the entire plot.</p>
<p><a href="https://i.sstatic.net/gYeYiH1I.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/gYeYiH1I.png" alt="enter image description here" /></a></p>
<p>(I made that border in Paint but cannot do that for every graph I have produced.)</p>
<p>These plots might have to display in Jupyter Notebooks, not just in saved image files.</p>
<p>A discussion on the <code>plotly</code> <a href="https://community.plotly.com/t/change-the-border-color/80758/9" rel="nofollow noreferrer">forum</a> concerns how to get rid of the border, but I do not see how the border got there in the first place, let alone how I could customize the border.</p>
|
<python><jupyter-notebook><graphics><plotly>
|
2024-05-15 20:48:25
| 1
| 500
|
Dave
|
78,486,428
| 2,072,241
|
Is it possible in Python to type a function that uses the first elements of an arbitrary sized arguments list
|
<p>I have a Python function that retrieves the first element of an arbitrary number of *args:</p>
<pre class="lang-py prettyprint-override"><code>def get_first(*args):
return tuple(a[0] for a in args)
</code></pre>
<p>Lets say that I call this function as follows:</p>
<pre class="lang-py prettyprint-override"><code>b = (1, 2, 3)
c = ("a", "b", "c", "d")
x = get_first(b, c)
</code></pre>
<p>I expect the type <code>Tuple[int, str]</code>. To me it seems impossible to achieve the correct typing to accurately reveal this type.</p>
<p>I have had no luck with <code>TypeVarTuple</code> from <a href="https://peps.python.org/pep-0646/" rel="nofollow noreferrer">PEP 646</a> or with <code>ParamSpec</code> from <a href="https://peps.python.org/pep-0612/" rel="nofollow noreferrer">PEP 612</a>.</p>
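<p>For completeness, the workaround I keep falling back to (it sidesteps variadic typing rather than solving it): a fixed ladder of <code>@overload</code>s, one per arity actually used, over an untyped generic implementation:</p>

```python
from typing import Sequence, TypeVar, overload

T1 = TypeVar("T1")
T2 = TypeVar("T2")
T3 = TypeVar("T3")

@overload
def get_first(a: Sequence[T1], /) -> tuple[T1]: ...
@overload
def get_first(a: Sequence[T1], b: Sequence[T2], /) -> tuple[T1, T2]: ...
@overload
def get_first(a: Sequence[T1], b: Sequence[T2], c: Sequence[T3], /) -> tuple[T1, T2, T3]: ...
def get_first(*args):
    return tuple(a[0] for a in args)

x = get_first((1, 2, 3), ("a", "b", "c", "d"))
print(x)  # (1, 'a'); checkers reveal tuple[int, str] for this call
```

<p>This only covers the arities spelled out, which is exactly the limitation PEP 646 was meant to remove.</p>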
|
<python><mypy><python-typing>
|
2024-05-15 20:44:54
| 1
| 351
|
Huub Hoofs
|
78,486,265
| 3,649,441
|
How to access DateTime index when running apply() on a pandas DataFrame?
|
<p>I have two pandas DataFrames, daily_data contains daily close price data for a stock and weekly_data contains weekly close price data.</p>
<p>daily_data:</p>
<pre><code> close
Date
2022-05-02 00:00:00-04:00 30.389999
2022-05-03 00:00:00-04:00 29.469999
2022-05-04 00:00:00-04:00 28.100000
2022-05-05 00:00:00-04:00 26.830000
2022-05-06 00:00:00-04:00 26.070000
2022-05-09 00:00:00-04:00 23.049999
2022-05-10 00:00:00-04:00 23.670000
2022-05-11 00:00:00-04:00 22.570000
2022-05-12 00:00:00-04:00 23.290001
2022-05-13 00:00:00-04:00 24.389999
2022-05-16 00:00:00-04:00 23.590000
...
</code></pre>
<p>weekly_data:</p>
<pre><code> close
Date
2022-05-02 00:00:00-04:00 26.070000
2022-05-09 00:00:00-04:00 24.389999
2022-05-16 00:00:00-04:00 23.350000
2022-05-23 00:00:00-04:00 23.670000
2022-05-30 00:00:00-04:00 24.150000
2022-06-06 00:00:00-04:00 23.719999
...
</code></pre>
<p>I'm running apply(func1) on the daily DataFrame.<br />
In func1() I need to get the iloc index of the week that the current day belongs to:</p>
<pre class="lang-py prettyprint-override"><code>import pandas as pd
def func1(x):
    weekly_index = weekly_data.index.get_indexer([pd.to_datetime(x.index)], method='pad')
    return weekly_index[0]

daily_data['weekly_index'] = daily_data.apply(func1)
</code></pre>
<p>When running this code I get an "unhashable type: 'DatetimeIndex'" error.<br />
However if I just run the following code outside of func1() it works:</p>
<pre class="lang-py prettyprint-override"><code>index = weekly_data.index.get_indexer([pd.to_datetime('2022-05-23 00:00:00-04:00')], method='pad')
</code></pre>
<p>Any insights on how to make this work in apply() would be much appreciated.</p>
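<p>For reference, what I would expect the row-wise pattern to look like, reduced to a self-contained sketch with made-up numbers: with <code>apply(..., axis=1)</code> each row arrives as a Series whose <code>.name</code> attribute is that row's index label, so <code>x.index</code> (which holds the column labels) never needs to be touched:</p>

```python
import pandas as pd

weekly_data = pd.DataFrame(
    {"close": [26.07, 24.39, 23.35]},
    index=pd.to_datetime(["2022-05-02", "2022-05-09", "2022-05-16"]),
)
daily_data = pd.DataFrame(
    {"close": [29.47, 23.67, 23.59]},
    index=pd.to_datetime(["2022-05-03", "2022-05-10", "2022-05-17"]),
)

# row.name is the row's DatetimeIndex label when applying with axis=1
def func1(row):
    return weekly_data.index.get_indexer([row.name], method="pad")[0]

daily_data["weekly_index"] = daily_data.apply(func1, axis=1)
print(daily_data["weekly_index"].tolist())  # [0, 1, 2]
```
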
|
<python><pandas>
|
2024-05-15 20:04:20
| 1
| 1,009
|
Chris
|
78,486,218
| 16,674,436
|
Networx and ipysigma, how to remove isolated nodes of degree <= 2 on graph?
|
<p>Context: I’m processing Reddit data. There is too much data to handle, therefore I created a random sample of the data. That leaves my network with a lot of <em>isolated</em> nodes (emphasized, because isolated nodes are usually of degree 0, but here I am referring to degree <=2). An image is better than anything else:
<a href="https://i.sstatic.net/XKSaSOcg.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/XKSaSOcg.png" alt="enter image description here" /></a></p>
<p>The whole big gray ring is composed of nodes that are of degree 1 or 2.</p>
<p><a href="https://i.sstatic.net/DIFpnq4E.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/DIFpnq4E.png" alt="enter image description here" /></a></p>
<p>Hence, I’d like to get rid of those nodes in order to have a more meaningful graph based on the sample I have.</p>
<p>Is this the correct approach? Is it feasible?</p>
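<p>It is feasible, with one subtlety worth noting: removing degree-<=2 nodes can expose new degree-<=2 nodes, and repeating the removal to a fixed point is exactly networkx's 3-core. A toy sketch (graph is illustrative):</p>

```python
import networkx as nx

G = nx.complete_graph(4)             # dense core: every node has degree 3
G.add_edges_from([(3, 4), (4, 5)])   # pendant chain of degree-2/1 nodes

# Single pass: drop the nodes that currently have degree <= 2
pruned = G.copy()
pruned.remove_nodes_from([n for n, d in G.degree() if d <= 2])

# Iterated removal in one call: the maximal subgraph with min degree >= 3
core = nx.k_core(G, k=3)
print(sorted(pruned.nodes()), sorted(core.nodes()))  # [0, 1, 2, 3] [0, 1, 2, 3]
```

<p>The pruned (or 3-core) graph can then be handed to ipysigma for display in place of the full sample.</p>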
|
<python><networkx><network-analysis>
|
2024-05-15 19:54:58
| 1
| 341
|
Louis
|
78,486,167
| 2,487,835
|
Set import path for python with jupyter notebooks / vscode
|
<p>I connect to my own jupyter server via vpn/ssh with vscode.
Jupyter server runs as a daemon.</p>
<p>Having asked gpt, read articles here and tried many config versions, I still end up doing this at the beginning of each notebook:</p>
<pre><code>BASE_PATH = '/home/bartsimpson/dev/'
import sys
sys.path.append(BASE_PATH)
</code></pre>
<p>Is there a way to set it somewhere in my jupyter server or in vscode so that I could easily do this without setting path manually?</p>
<pre><code>from library.myclass import MyClass
</code></pre>
<p>if it resides in <code>/home/bartsimpson/dev/library/myclass.py</code> ?</p>
<p>P.S. Setting neither <code>PYTHONPATH</code>, nor <code>JUPYTER_PATH</code> prior to launch helps.</p>
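<p>One mechanism that does survive a daemonized server (the path below is the one from the question; the mechanism is IPython's, which Jupyter Python kernels use): every <code>*.py</code> file in the profile's <code>startup/</code> directory runs when a kernel starts, so <code>sys.path</code> gets extended for all notebooks without any per-notebook boilerplate:</p>

```shell
# Create a startup script that every new kernel executes automatically.
mkdir -p ~/.ipython/profile_default/startup
cat > ~/.ipython/profile_default/startup/00-syspath.py <<'EOF'
import sys
sys.path.append('/home/bartsimpson/dev/')
EOF
```

<p>After restarting the kernel, <code>from library.myclass import MyClass</code> should resolve; whether this is preferable to packaging the code and <code>pip install -e</code>-ing it is a separate judgment call.</p>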
|
<python><visual-studio-code><jupyter-notebook><jupyter-server>
|
2024-05-15 19:42:07
| 1
| 3,020
|
Lex Podgorny
|
78,486,065
| 13,395,230
|
What does <type>.<attr> do during a match/case?
|
<p>In general, during a <code>case</code> in a <a href="https://docs.python.org/3/reference/compound_stmts.html#match" rel="nofollow noreferrer"><code>match</code> statement</a>, the call-like syntax <code><type>(arg1=value1)</code> is interpreted as the attribute pattern <code><type>(attr1=value1)</code>. This allows for some very interesting and complex capturing.</p>
<p>For example, we can ask <em>"does my instance have an attr called 'sort', and if so capture it"</em>:</p>
<pre><code>match [1,2,3]:
case list(sort=f): # this isn't a call, it is an attr-capture statement
print(f)
</code></pre>
<p>Since the first (and only) positional argument of the list instance is the list values, you can capture those items as well:</p>
<pre><code>match [1,2,3]:
case list((x,*y),sort=f): # here we are capturing the 'attr' called 'sort' as 'f'
print(x,y,f)
</code></pre>
<p>Likewise, you can specify the matching values instead of capturing them into variables:</p>
<pre><code>g = list.sort
match [1,2,3]:
case list((1,*y),sort=g): # here we ask if the attr 'sort' is the same as 'g'
print(y)
</code></pre>
<p>However, I am now confused by the following:</p>
<pre><code>match [1,2,3]:
case list(x,sort=list.sort): # fails to match, but why???
print(x)
</code></pre>
<p>I would have thought that this fails because <code>list.sort</code> means something different than to specify <em>"the function 'sort' of 'list'"</em>. Maybe <code>sort=list.sort</code> means <em>"do I have an attr called 'sort' which itself is of 'type' 'list' that also has an attr called 'sort'"</em>. So I tried:</p>
<pre><code>match [1,2,3]:
case list.sort: # also fails, why?
print('ya')
</code></pre>
<p>So, <code>list.sort</code> does not mean that I want type list with an attr of sort. So, then I tried:</p>
<pre><code>match list.sort:
case list.sort: # does match, wait, what?
print('ya')
</code></pre>
<p>Which works. So, <code>list.sort</code> during a <code>case</code> <em>does</em> mean to use the function of <code>sort</code> from the type of <code>list</code>, but then why did <code>list(sort=list.sort)</code> fail?</p>
<hr />
<p>Example of capturing an attr 'y' and matching an attr 'x':</p>
<pre><code>class P:
def __init__(self,a,b):
self.x,self.y = a+b,a-b
match P(5,6):
case P(x=11,y=a): # a is a new variable, 11 is a match
print(a)
</code></pre>
|
<python><structural-pattern-matching>
|
2024-05-15 19:15:12
| 1
| 3,328
|
Bobby Ocean
|
78,486,064
| 1,028,270
|
How do I get the model type inside a PydanticBaseSettingsSource?
|
<p>I'm writing a custom PydanticBaseSettingsSource and processing fields of a specific type <code>MyModel</code></p>
<pre><code>class CustomSettingsSource(PydanticBaseSettingsSource):
# Must implement
def get_field_value(
self, field: FieldInfo, field_name: str
) -> Tuple[Any, str, bool]:
return ({}, "", False)
def __call__(self) -> Dict[str, Any]:
d: Dict[str, Any] = {}
for field_name, field in self.settings_cls.model_fields.items():
# If a field is of type MyModel I do custom processing/fetching
# This returns <class 'mymodule.MyModel'>
print(field.annotation)
# This does not work
if isinstance(field.annotation, MyModel):
# do stuff...
return d
</code></pre>
<p>I create a config assigning a field the MyModel type:</p>
<pre><code>class Test(MyConfig):
test: MyModel = MyModel(blah="sdfsdf")
</code></pre>
<p>What is the correct way to do this? I really don't want to have to do something like <code>if "MyModel" in str(field.annotation): ...</code> because that feels like a hack.</p>
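<p>What I would expect to work, sketched with a stand-in class (<code>annotation_is</code> is a name I made up): <code>field.annotation</code> holds the class object itself, which is why <code>isinstance(field.annotation, MyModel)</code> is always False (the class is not an <em>instance</em> of itself); the check is <code>issubclass</code>, guarded because annotations can also be typing constructs:</p>

```python
import inspect

class MyModel:  # stand-in for the question's pydantic model
    pass

def annotation_is(annotation, model_cls):
    # issubclass raises on non-classes like Optional[...], so guard with isclass
    return inspect.isclass(annotation) and issubclass(annotation, model_cls)

print(annotation_is(MyModel, MyModel))    # True
print(annotation_is(int, MyModel))        # False
print(annotation_is(list[int], MyModel))  # False: a GenericAlias, not a class
```

<p>For <code>Optional[MyModel]</code> or unions, <code>typing.get_args(annotation)</code> would need to be inspected as well.</p>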
|
<python><pydantic><pydantic-v2>
|
2024-05-15 19:15:04
| 1
| 32,280
|
red888
|
78,485,759
| 34,372
|
Intellij wizard to create new Project from an existing Poetry project is incorrect
|
<p>When I create a new Python project using poetry and then try to open it in Intellij the wizard is wildly wrong. Here are the steps and the problems with the wizard. In the bash shell:</p>
<pre><code>$ poetry new 2_0
$ cd 2_0
$ poetry env use /usr/bin/python3.12
Creating virtualenv 2-0-8FFCH961-py3.12 in /home/dean/.cache/pypoetry/virtualenvs
Using virtualenv: /home/dean/.cache/pypoetry/virtualenvs/2-0-8FFCH961-py3.12
</code></pre>
<p>In Intellij:</p>
<pre><code>New -> Project From Existing Sources
Select the 2_0 directory that Poetry created
Create project from existing sources
Project name: accept default. Next
Source files for your project have been found. ... Accept default. Next
Please select project sdk:
</code></pre>
<p><a href="https://i.sstatic.net/LbYhXzdr.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/LbYhXzdr.png" alt="SDKs" /></a></p>
<p>The list of project sdks does not include the virtual env just created. There is no way to refresh this list.</p>
<pre><code>Click + to Add sdk
Add Python SDK -> Poetry Environment -> Existing Environment.
</code></pre>
<p><a href="https://i.sstatic.net/YjmoqTlx.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/YjmoqTlx.png" alt="Add Python SDK opens the project directory" /></a></p>
<p>It takes me to the project directory, which is incorrect. I have to navigate to where my new virtual env was created (and all existing virtual envs are)</p>
<p><a href="https://i.sstatic.net/Y11OJGx7.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Y11OJGx7.png" alt="enter image description here" /></a></p>
<p>I select the python binary and click OK</p>
<p><a href="https://i.sstatic.net/iVWM2B9j.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/iVWM2B9j.png" alt="enter image description here" /></a></p>
<p>It adds the SDK, but with the wrong name.
I correct the name and click Next.</p>
<p>Is there a better way to open a Poetry project in Intellij? The wizard that Intellij provides is very badly done.</p>
<p>I've also noticed that the <a href="https://intellij-support.jetbrains.com/hc/en-us/community/topics/200379535-PyCharm" rel="nofollow noreferrer">PyCharm forum</a> is pretty dead. None of the tickets are getting comments.</p>
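<p>One workaround that avoids the SDK picker's bad default directory entirely (assuming an in-project virtualenv is acceptable): tell Poetry to create the venv inside the project, where IDE interpreter discovery looks by default:</p>

```shell
# Applied per project (writes poetry.toml); run before creating the env.
poetry config virtualenvs.in-project true --local
poetry env use /usr/bin/python3.12   # now creates ./.venv inside the project
```

<p>With a <code>.venv/</code> in the project root, the "Existing Environment" browser opens in the right place.</p>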
|
<python><intellij-idea><pycharm><python-poetry>
|
2024-05-15 18:05:37
| 1
| 10,409
|
Dean Schulze
|
78,485,621
| 8,378,817
|
Passing multiple arguments during langchain chain.invoke()
|
<p>I am experimenting with a langchain chain by passing multiple arguments.
Here is a scenario:</p>
<pre><code>TEMPLATE = """Task: Generate Cypher statement to query a graph database.
Instructions:
Use only the provided relationship types and properties in the schema.
Do not use any other relationship types or properties that are not provided.
You are also provided with contexts to generate cypher queries. These contexts are the node ids from the Schema.
{schema}
Some examples of contexts:
{context}
The question is:
{question}"""
prompt = PromptTemplate.from_template(template=TEMPLATE)
chain = (
{
"schema": schema,
"context": vector_retriever_chain | extract_relevant_docs,
"question": RunnablePassthrough()
}
| prompt
)
chain.invoke("my question?")
</code></pre>
<p>In this chain, I am getting some context from a vector retriever which I am passing to a function called extract_relevant_docs() that will parse the result and get format I want.</p>
<p>The tricky part here is the variable 'schema', which I also want to supply to my prompt. How can I pass these variables during chain.invoke()?
Thank you</p>
|
<python><langchain><retrievalqa>
|
2024-05-15 17:37:39
| 1
| 365
|
stackword_0
|
78,485,530
| 2,056,201
|
Does python not automatically convert integer types when doing math?
|
<p>I had an issue with a collision function returning very large values for distances</p>
<p>I would think python would not overflow a 16 bit integer when doing this math, since it's dynamically typed, but apparently it does.</p>
<p>If I were to do these operations directly on an array, I understand why it would overflow, but here it is copying the value to another variable, <code>x1, x2, ... , y3</code>, and then doing math on those to create <code>distance</code> variables.</p>
<p>I would assume python would use 64 bit integers on those. But it doesn't. They are still 16 bit.</p>
<p>I suspect that, under the covers, it's not really creating any new variables, just binding the same objects to new names.</p>
<p>Is this always the case, or does it only happen when using numpy arrays? This is one of the really frustrating things about Python, coming from a C++ background.</p>
<pre><code>def create_square(self, img):
color = self.random_color()
x = np.random.randint(0, img.shape[1] - square_size, dtype=np.uint16)
y = np.random.randint(0, img.shape[0] - square_size, dtype=np.uint16)
cv2.rectangle(img, (x, y), (x + square_size, y + square_size), color, -1)
return (x, y, square_size, color)
def collide(self, square1, square2, square3):
x1, y1, size1, _ = square1
x2, y2, size2, _ = square2
x3, y3, size3, _ = square3
# Calculate distances between squares
distance1 = (x1 - x2) ** 2 + (y1 - y2) ** 2
distance2 = (x1 - x3) ** 2 + (y1 - y3) ** 2
distance3 = (x2 - x3) ** 2 + (y2 - y3) ** 2
print("distance1",distance1)
print("distance2",distance2)
print("distance3",distance3)
# Check if all distances are below the threshold
if distance1 < (size1 + size2)**2 and distance2 < (size1 + size3)**2 and distance3 < (size2 + size3)**2:
return True
return False
def reset(self):
# Reset squares
self.squares = []
for _ in range(num_squares):
self.squares.append(self.create_square(np.zeros((600, 800, 3), dtype=np.uint16)))
</code></pre>
<p>Here is the output of distances</p>
<pre><code>distance1 -18581859
distance2 -52348750
distance3 -46958403
</code></pre>
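<p>For reference, here is a minimal standalone snippet (my own toy values, using <code>int16</code> rather than the <code>uint16</code> above) showing what I suspect is happening: the NumPy scalar dtype sticks to the values even after tuple unpacking, so the math wraps around instead of promoting to an arbitrary-precision Python int:</p>

```python
import numpy as np

a = np.int16(200)
b = np.int16(200)

# assignment / tuple unpacking does not convert these to Python ints:
print(type(a - b))        # still a NumPy scalar type, not <class 'int'>

# 200 * 200 = 40000 does not fit in a signed 16-bit integer, so it wraps
print(a * b)              # -25536

# explicit conversion to Python ints gives arbitrary-precision math
print(int(a) * int(b))    # 40000
```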
|
<python><arrays><numpy>
|
2024-05-15 17:16:54
| 1
| 3,706
|
Mich
|
78,485,348
| 363,796
|
odd behavior changing dir from .xonshrc
|
<p>I am using the xonsh shell and my .xonshrc file contains a call to <code>os.chdir()</code>, which works, but oddly. When I start xonsh, my prompt, which shows my current path, seems to indicate I am still in my home dir until after I execute a command or even simply press enter. e.g.</p>
<pre><code>[etlsmart@msd-cld-etl-21p ~]$ export PROCESS=ibmc_reminder; /etlsmart/env/xonsh/bin/xonsh
etlsmart@msd-cld-etl-21p:~
$
etlsmart@msd-cld-etl-21p:/etlsmart/feeds/ibmc_reminder
$
</code></pre>
<p>If I execute a command such as <code>ls</code>, it lists the contents of the correct directory (the one I've changed to, not my home directory).</p>
<pre><code>[etlsmart@msd-cld-etl-21p ~]$ export PROCESS=ibmc_reminder; /etlsmart/env/xonsh/bin/xonsh
etlsmart@msd-cld-etl-21p:~
$ ls
bin block data docs etc lock log post_deployment stage
etlsmart@msd-cld-etl-21p:/etlsmart/feeds/ibmc_reminder
$ ls
bin block data docs etc lock log post_deployment stage
</code></pre>
<p>So this appears to be a benign cosmetic problem. That is, unless my first command is <code>cd</code>, then it does not behave correctly. The cd neither fails nor works, which makes no sense to me.</p>
<pre><code>etlsmart@msd-cld-etl-21p:~
$ cd log
etlsmart@msd-cld-etl-21p:/etlsmart/feeds/ibmc_reminder
$ pwd
/etlsmart/feeds/ibmc_reminder
etlsmart@msd-cld-etl-21p:/etlsmart/feeds/ibmc_reminder
$ cd log
etlsmart@msd-cld-etl-21p:/etlsmart/feeds/ibmc_reminder/log
</code></pre>
<p>I have tried adding various commands to my .xonshrc file to get past this, but so far without success. Can anyone offer an explanation and/or solution to this problem?</p>
|
<python><xonsh>
|
2024-05-15 16:40:17
| 1
| 1,653
|
zenzic
|
78,485,327
| 2,079,306
|
Python Flask Ubuntu Apache2 website, serving images outside of DocumentRoot using apache2's alias_mod. Am I missing something?
|
<p>Context:</p>
<p>I created a webtool using Python Flask to retrieve filenames from a database with a backend sql post method to my database code. This works fine. Filenames are always returned appropriately and presented to the client with file sizes. The client then selects which files they wish to download and hits go. They will receive a link via email to download the files in a zip (downloads are usually very large, the client request prompts the files to be prepared in a zip, which is then made into a link and emailed to the client).</p>
<p>Next functionality and explored solutions:</p>
<p>For one query, the returned filenames reference .png files, and I would like them to be clickable links so the user can view the actual png file (rather than having to go through the email route). The data exists on the server outside of the webtool's DocumentRoot. Moving the data inside this project's DocumentRoot would be inappropriate, as many other webtools use the data. Duplicating the data is infeasible, as we're talking >100TB. One solution I found to access data outside of DocumentRoot is apache2's mod_alias: <a href="https://httpd.apache.org/docs/2.4/mod/mod_alias.html" rel="nofollow noreferrer">https://httpd.apache.org/docs/2.4/mod/mod_alias.html</a></p>
<p>I set up the mod alias in apache2/mods-available/alias.conf</p>
<pre><code>Alias "/pngs/" "/home/myuser/data/pngs/"
<Directory "/home/myuser/data/pngs/">
Require all granted
</Directory>
</code></pre>
<p>I then reference the png file in my html code with an <code><a href="/pngs/file1.png"> file 1 </a></code></p>
<p>Clicking the link takes me to <code>https://myurl.com/pngs/file1.png</code> but I get a 404. Am I missing something?</p>
|
<python><apache><flask>
|
2024-05-15 16:35:50
| 0
| 1,123
|
john stamos
|
78,485,326
| 2,893,712
|
Pandas Extract Phone Number if it is in Correct Format
|
<p>I have a column that has phone numbers. They are usually formatted in <code>(555) 123-4567</code> but sometimes they are in a different format or they are not proper numbers. I am trying to convert this field to have just the numbers, removing any non-numeric characters (if there are 10 numbers).</p>
<p>How can I apply a function that says if there are 10 numbers in this field, extract just the numbers?</p>
<p>I tried to use:</p>
<pre><code>df['PHONE'] = df['PHONE'].str.extract('(\d+)', expand=False)
</code></pre>
<p>But this just extracts the first chunk of numbers (the area code). How do I pull all the numbers and only run this extraction if there are exactly 10 numbers in the field?</p>
<p>My expected output would be <code>5551234567</code></p>
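<p>To be concrete, this is the behavior I'm after, shown as a plain-Python sketch on single strings (I just don't know the idiomatic pandas way to apply it to the whole column):</p>

```python
import re

def clean_phone(raw):
    # strip every non-digit, then keep the result only if exactly 10 digits remain
    digits = re.sub(r'\D', '', raw)
    return digits if len(digits) == 10 else raw

print(clean_phone('(555) 123-4567'))  # 5551234567
print(clean_phone('555-1234'))        # 555-1234 (only 7 digits, left unchanged)
```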
|
<python><pandas><regex><dataframe><text-extraction>
|
2024-05-15 16:35:27
| 4
| 8,806
|
Bijan
|
78,485,311
| 16,674,436
|
Networkx remove_nodes_from with degree <= 2 gives unexpected behavior: removing randomly?
|
<p>I’m trying to graph a network based on some interactions. I want to remove nodes that have a degree smaller than or equal to 2. So here is my code.</p>
<pre class="lang-py prettyprint-override"><code>G = nx.Graph()
G.add_nodes_from(['player1', 'player2', 'player3', 'player4', 'player5', 'player6', 'player7' ])
G.add_edges_from([('player1', 'player2'), ('player1', 'player3'), ('player1', 'player4'), ('player1', 'player6'), ('player3', 'player2'), ('player3', 'player7'), ('player3', 'player4')])
G.degree
</code></pre>
<p>I get that:</p>
<pre><code>DegreeView({'player1': 4, 'player2': 2, 'player3': 4, 'player4': 2, 'player5': 0, 'player6': 1, 'player7': 1})
</code></pre>
<p>Expected.
Then I perform the following:</p>
<pre><code>nodes_to_remove = [node for node in G.nodes if G.degree(node) <= 2]
G.remove_nodes_from(nodes_to_remove)
G.degree
</code></pre>
<p>And it gives me that:</p>
<pre><code>DegreeView({'player1': 1, 'player3': 1})
</code></pre>
<p>The output of <code>nodes_to_remove</code> is <code>['player2', 'player4', 'player5', 'player6', 'player7']</code> when it should actually be <code>['player1', 'player3']</code>.</p>
<p>Anyone has an explanation?</p>
<p>I say "randomly" because when I switch around <code>>= 2</code> or <code>== 2</code> it never gives me the expected output… So I am quite confused.</p>
|
<python><nodes><networkx><network-analysis>
|
2024-05-15 16:32:22
| 0
| 341
|
Louis
|
78,485,236
| 6,382,969
|
TensorFlow Keras compilation successful; still model summary is undefined
|
<p>I am using the following code to define and compile a Keras model. Then when I print model.summary() I still get an unbuilt, erroneous summary. I am using Python 3.10.12 and tensorflow 2.16.1. Please do not hesitate to reach out for any additional information you need.</p>
<p><strong>Code to build the model</strong></p>
<pre><code> model = Sequential()
#model.add(embedding_layer) # Add embedding layer if applicable
model.add(Dense(num_dense_units, activation='relu')) # First dense layer with ReLU activation
model.add(Dense(len(set(cluster_labels)), activation='softmax')) # Output layer with softmax for multi-class classification
# Model compilation
#model.compile(loss=CategoricalCrossentropy(), optimizer=Adam(), metrics=['accuracy'])
model.compile(loss=CategoricalCrossentropy(from_logits=True), optimizer=Adam(learning_rate=0.001), metrics=['accuracy'])
print(model.summary())
</code></pre>
<p><strong>Outcome of model.summary()</strong></p>
<pre><code>━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━┓
┃ Layer (type) ┃ Output Shape ┃ Param # ┃
┡━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━┩
│ embedding (Embedding) │ ? │ 0 (unbuilt) │
├──────────────────────────────────────┼─────────────────────────────┼─────────────────┤
│ dense_2 (Dense) │ ? │ 0 (unbuilt) │
├──────────────────────────────────────┼─────────────────────────────┼─────────────────┤
│ dense_3 (Dense) │ ? │ 0 (unbuilt) │
└──────────────────────────────────────┴─────────────────────────────┴─────────────────┘
Total params: 0 (0.00 B)
Trainable params: 0 (0.00 B)
Non-trainable params: 0 (0.00 B)
None
</code></pre>
<p>When I run my script I also get the following runtime error, which I think is because the model is not being built correctly, as I verified other dimension-related things and they are correct.</p>
<pre><code>File "/usr/local/lib/python3.10/dist-packages/keras/src/utils/traceback_utils.py", line 122, in error_handler
raise e.with_traceback(filtered_tb) from None
File "/usr/local/lib/python3.10/dist-packages/keras/src/trainers/data_adapters/__init__.py", line 120, in get_data_adapter
raise ValueError(f"Unrecognized data type: x={x} (of type {type(x)})")
ValueError: Unrecognized data type: x=[[ 0 0 0 ... 10 4 7]
[ 0 0 0 ... 10 4 7]
[ 0 0 0 ... 10 4 7]
...
[ 0 0 0 ... 1 6 30]
[ 0 0 0 ... 1 6 30]
[ 0 0 0 ... 1 6 30]] (of type <class 'numpy.ndarray'>)
</code></pre>
|
<python><tensorflow><keras>
|
2024-05-15 16:13:59
| 1
| 1,246
|
PHcoDer
|
78,485,109
| 2,153,235
|
Is a pandas MultiIndex a counterpart to a SQL composite index?
|
<p>I posted this on <a href="https://www.reddit.com/r/dfpandas/comments/1cratw2" rel="nofollow noreferrer">reddit</a> some days ago, but haven't received any response.</p>
<p>Everything I've read online about the pandas MultiIndex makes it seem like a counterpart to a SQL composite index. Is this the correct understanding?</p>
<p>Additionally, MultiIndex is often described as hierarchical. This disrupts the analogy with a composite index. To me, that means a tree structure, with parent keys and child keys, possibly with a depth greater than 2. A composite index doesn't fit this picture. In the case of MultiIndexes, what are the parent/child keys?</p>
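<p>To make the question concrete, here is the toy example I have in mind (my own construction): the same index supports both composite-key lookups and parent-key lookups, and I'm unsure which mental model is the right one:</p>

```python
import pandas as pd

idx = pd.MultiIndex.from_tuples(
    [("A", 1), ("A", 2), ("B", 1)], names=["outer", "inner"]
)
s = pd.Series([10, 20, 30], index=idx)

# composite-key access: both levels together, like a SQL composite index
print(s.loc[("A", 2)])   # 20

# "hierarchical" access: selecting a parent key returns all of its children
print(s.loc["A"])        # the rows for (A, 1) and (A, 2)
```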
|
<python><pandas><multi-index>
|
2024-05-15 15:48:55
| 1
| 1,265
|
user2153235
|
78,485,053
| 8,543,025
|
Aligning Sparse Boolean Arrays
|
<p>I have two (very) long and sparse boolean arrays <code>n1</code> and <code>n2</code>, representing spikes of two neurons that are responding to the same stimulus. Because they are responding to the same stimulus, they have many "overlapping" spikes (up to some <code>t</code> time-difference); but also because of biological noise they could have uncoupled spikes (tonic properties of each neuron).<br />
I'm looking for a way to map between spikes from <code>n1</code> and <code>n2</code>, such that:</p>
<ol>
<li>Each spike from <code>n1</code> is matched to at most 1 spike from <code>n2</code>.</li>
<li>Each spike from <code>n2</code> is matched to at most 1 spike from <code>n1</code>.</li>
<li>If two spikes from <code>n2</code> could match a spike from <code>n1</code>, we take the one occurring closer (in time/array index) to the <code>n1</code> spike.</li>
<li>We match as many <code>n1</code> spikes as possible.</li>
</ol>
<p>I was able to achieve conditions (1), (2) and (4) from above, but I can't find a way to make sure there are no collisions (i.e. 2 <code>n1</code> spikes mapped to the same <code>n2</code> spike). Here's what I have:</p>
<pre><code>n1_idxs = np.where(n1)[0]
n2_idxs = np.where(n2)[0]
spike_time_diffs = np.abs(n1_idxs - n2_idxs.reshape((-1, 1)))
n2_matches_to_n1 = spike_time_diffs.argmin(axis=0)  # for each `n1` spike, the index into `n2_idxs` of the best-matching `n2` spike
# assume some predefined value MAX_DIFF
min_spike_time_diffs = spike_time_diffs.min(axis=0)
n2_matches_to_n1 = np.where(min_spike_time_diffs > MAX_DIFF, -1, n2_matches_to_n1)  # disregard matches above the threshold (-1, since an int array cannot hold np.nan)
</code></pre>
<p>Here's some example input:</p>
<pre><code>MAX_DIFF = 5
n1 = list("000010010100010000000100001")
n2 = list("010010001000101000010000010")
</code></pre>
<p>The output should have the same length as number of <code>1</code>s in <code>n1</code>, and it contains the index of matched spike in <code>n2</code>. Note that <code>my_out</code> contains a collision:</p>
<pre><code>out = [4,8,13,15,19,25]
my_out = [4,8,8,15,19,25]
</code></pre>
<p>How can I make sure my mapping doesn't have collisions?</p>
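<p>For concreteness, here is a greedy sketch using my own toy spike times (not the example strings above) that visits candidate pairs cheapest-first and only accepts a pair if neither spike has already been claimed. It satisfies (1)-(3) and avoids collisions by construction, but I'm not sure it always maximizes the number of matches as required by (4):</p>

```python
import numpy as np

MAX_DIFF = 5
n1_idxs = np.array([4, 8, 15, 16, 23, 42, 45])   # example spike times of n1
n2_idxs = np.array([4, 5, 7, 16, 19, 25])        # example spike times of n2

diffs = np.abs(n1_idxs[:, None] - n2_idxs[None, :])

# visit all (n1, n2) candidate pairs from cheapest to most expensive
used1, used2 = set(), set()
matches = {}
for flat in np.argsort(diffs, axis=None):
    i, j = np.unravel_index(flat, diffs.shape)
    if diffs[i, j] > MAX_DIFF:
        break  # all remaining pairs are even further apart
    if i not in used1 and j not in used2:
        used1.add(i)
        used2.add(j)
        matches[int(n1_idxs[i])] = int(n2_idxs[j])

print(matches)  # each n2 spike is used at most once -> no collisions
```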
|
<python><arrays><python-3.x><boolean>
|
2024-05-15 15:39:11
| 0
| 593
|
Jon Nir
|
78,485,031
| 1,505,832
|
Argparse - Check if positional argument is entered
|
<p>With argparse, is it possible to check whether the first positional argument has been entered:</p>
<pre><code>import argparse
if __name__ == "__main__":
parser = argparse.ArgumentParser()
parser.add_argument("object_type", type=str)
parser.add_argument("--pass1", type=str)
parser.add_argument("--pass2", type=str)
args = parser.parse_args()
print(args)
</code></pre>
<p>when calling: myscript.py --pass1=foo, I get the error:
error: the following arguments are required: object_type</p>
<p>which makes sense, but there doesn't seem to be a way to catch the error beforehand.</p>
<p>Something like:</p>
<pre><code>if args.object_type is None:
raise argparse.ArgumentError(message="object_type must be defined!")
</code></pre>
<p>I also tried try/except, with no luck.</p>
<p>Is this possible or not?</p>
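<p>The closest I've gotten is making the positional optional myself with <code>nargs="?"</code> and checking it manually afterwards, which feels like a workaround rather than the intended way:</p>

```python
import argparse

parser = argparse.ArgumentParser()
# nargs="?" makes the positional optional, so parsing no longer hard-fails
parser.add_argument("object_type", type=str, nargs="?", default=None)
parser.add_argument("--pass1", type=str)
parser.add_argument("--pass2", type=str)

args = parser.parse_args(["--pass1=foo"])  # simulates: myscript.py --pass1=foo
if args.object_type is None:
    print("object_type must be defined!")  # now reachable; could call parser.error(...) here
```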
|
<python><argparse>
|
2024-05-15 15:36:46
| 0
| 693
|
laloune
|
78,485,024
| 17,487,457
|
AttributeError: can't set attribute: How do I fix this this class to work well?
|
<p>Given the following SMOTEBoost class implementeation in <code>smoteboost.py</code> file:</p>
<pre class="lang-py prettyprint-override"><code>import numbers
import numpy as np
from collections import Counter
from sklearn.base import (clone,
is_regressor)
from sklearn.ensemble import AdaBoostClassifier
from sklearn.ensemble._forest import BaseForest
from sklearn.preprocessing import normalize
from sklearn.tree import BaseDecisionTree
from sklearn.utils import (check_random_state,
check_X_y,
check_array,
_safe_indexing)
from imblearn.utils import check_neighbors_object
from imblearn.over_sampling import SMOTE
__all__ = ['SMOTEBoost']
MAX_INT = np.iinfo(np.int32).max
class SMOTEBoost(AdaBoostClassifier):
def __init__(self,
k_neighbors=5,
base_estimator=None,
n_estimators=50,
learning_rate=1.,
sampling_strategy="auto",
algorithm='SAMME.R',
random_state=None,
n_jobs=1):
super(AdaBoostClassifier, self).__init__(
base_estimator=base_estimator,
n_estimators=n_estimators,
learning_rate=learning_rate,
random_state=random_state)
self.algorithm = algorithm
self.k_neighbors = k_neighbors
self.sampling_strategy = sampling_strategy
self.n_jobs=n_jobs
def _validate_estimator(self, default=AdaBoostClassifier()):
if not isinstance(self.n_estimators, (numbers.Integral, np.integer)):
raise ValueError("n_estimators must be an integer, "
"got {0}.".format(type(self.n_estimators)))
if self.n_estimators <= 0:
raise ValueError("n_estimators must be greater than zero, "
"got {0}.".format(self.n_estimators))
if self.base_estimator is not None:
base_estimator = clone(self.base_estimator)
else:
base_estimator = clone(default)
if isinstance(self.sampling_strategy, dict) and self.sampling_strategy != {}:
raise ValueError("'dict' type cannot be accepted for ratio in this class; "
"use alternative options")
self.nn_k_ = check_neighbors_object('k_neighbors',
self.k_neighbors,
additional_neighbor=1)
self.nn_k_.set_params(**{'n_jobs': self.n_jobs})
self.smote = SMOTE(sampling_strategy=self.sampling_strategy, k_neighbors=self.k_neighbors,
random_state=self.random_state)
self.base_estimator_ = base_estimator
def fit(self, X, y, sample_weight=None):
if self.algorithm not in ('SAMME', 'SAMME.R'):
raise ValueError("algorithm %s is not supported" % self.algorithm)
# Check parameters
if self.learning_rate <= 0:
raise ValueError("learning_rate must be greater than zero")
if (self.base_estimator is None or
isinstance(self.base_estimator, (BaseDecisionTree,
BaseForest))):
DTYPE = np.float64
dtype = DTYPE
accept_sparse = 'csc'
else:
dtype = None
accept_sparse = ['csr', 'csc']
X, y = check_X_y(X, y, accept_sparse=accept_sparse, dtype=dtype,
y_numeric=is_regressor(self))
if sample_weight is None:
# Initialize weights to 1 / n_samples
sample_weight = np.empty(X.shape[0], dtype=np.float64)
sample_weight[:] = 1. / X.shape[0]
else:
sample_weight = check_array(sample_weight, ensure_2d=False)
# Normalize existing weights
sample_weight = sample_weight / sample_weight.sum(dtype=np.float64)
# Check that the sample weights sum is positive
if sample_weight.sum() <= 0:
raise ValueError(
"Attempting to fit with a non-positive "
"weighted number of samples.")
# Check parameters
self._validate_estimator()
# Clear any previous fit results
self.estimators_ = []
self.estimator_weights_ = np.zeros(self.n_estimators, dtype=np.float64)
self.estimator_errors_ = np.ones(self.n_estimators, dtype=np.float64)
random_state = check_random_state(self.random_state)
for iboost in range(self.n_estimators):
# SMOTE step
target_stats = Counter(y)
min_class = min(target_stats, key=target_stats.get)
n_sample_majority = max(target_stats.values())
n_samples = n_sample_majority - target_stats[min_class]
target_class_indices = np.flatnonzero(y == min_class)
X_class = _safe_indexing(X, target_class_indices)
self.nn_k_.fit(X_class)
nns = self.nn_k_.kneighbors(X_class, return_distance=False)[:, 1:]
#smote._make_samples(X_class, y.dtype,
X_new, y_new = self.smote._make_samples(X_class, y.dtype, min_class, X_class,
nns, n_samples, 1.0)
# Normalize synthetic sample weights based on current training set.
sample_weight_syn = np.empty(X_new.shape[0], dtype=np.float64)
sample_weight_syn[:] = 1. / X.shape[0]
# Combine the original and synthetic samples.
X = np.vstack((X, X_new))
y = np.append(y, y_new)
# Combine the weights.
sample_weight = \
np.append(sample_weight, sample_weight_syn).reshape(-1, 1)
sample_weight = \
np.squeeze(normalize(sample_weight, axis=0, norm='l1'))
# Boosting step
sample_weight, estimator_weight, estimator_error = self._boost(
iboost,
X, y,
sample_weight,
random_state)
# Early termination
if sample_weight is None:
break
self.estimator_weights_[iboost] = estimator_weight
self.estimator_errors_[iboost] = estimator_error
# Stop if error is zero
if estimator_error == 0:
break
sample_weight_sum = np.sum(sample_weight)
# Stop if the sum of sample weights has become non-positive
if sample_weight_sum <= 0:
break
if iboost < self.n_estimators - 1:
# Normalize
sample_weight /= sample_weight_sum
return self
</code></pre>
<p>I am trying to get it to work, but cannot figure out how to fix it.
To reproduce:</p>
<pre class="lang-py prettyprint-override"><code>from sklearn.datasets import make_classification
from smoteboost import SMOTEBoost
from sklearn.model_selection import train_test_split
X, y = make_classification(n_samples=1000, n_features=10, n_classes=5,
n_informative=4, weights=[0.22,0.03,0.16,0.51,0.05])
X_train,X_test,y_train,y_test=train_test_split(X,y)
smt = SMOTEBoost()
smt.fit(X_train, y_train)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "~/smoteboost.py", line 176, in fit
self._validate_estimator()
File "~/smoteboost.py", line 129, in _validate_estimator
self.base_estimator_ = base_estimator
AttributeError: can't set attribute
</code></pre>
<p>I understand the error message indicates that <code>base_estimator_</code> cannot be assigned on the <code>SMOTEBoost</code> object. So I tried to set it this way:</p>
<pre><code>self.set_params(base_estimator_=self.base_estimator)
Error:
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "~/smoteboost.py", line 177, in fit
self._validate_estimator()
File "~/smoteboost.py", line 130, in _validate_estimator
self.set_params(base_estimator_=self.base_estimator) #self.set_params(base_estimator=self.base_estimator_)
File "~/venv/lib/python3.9/site-packages/sklearn/base.py", line 205, in set_params
raise ValueError(
ValueError: Invalid parameter 'base_estimator_' for estimator SMOTEBoost(). Valid parameters are: ['algorithm', 'base_estimator', 'k_neighbors', 'learning_rate', 'n_estimators', 'n_jobs', 'random_state', 'sampling_strategy'].
</code></pre>
<p><strong>EDIT</strong></p>
<p>scikit-learn version:</p>
<pre><code>import sklearn
sklearn.__version__
'1.2.2'
</code></pre>
|
<python><numpy><scikit-learn>
|
2024-05-15 15:35:00
| 1
| 305
|
Amina Umar
|
78,484,978
| 166,229
|
Make (a quite simple) pyparsing based parser fault tolerant
|
<p>I wrote a little parser using <code>pyparsing</code> to parse Google-like search strings, like <code>foo AND (bar OR baz)</code> (full code below). Like Google, I would like to make the parser fully fault-tolerant. It should ignore errors and parse as much as it can.</p>
<p>I wonder if I should adapt my grammar somehow (which looks very hard to me) or add some preprocessing to make the search string always valid when it gets parsed (but there are quite some corner cases; see invalid expressions in tests below).</p>
<p>I also thought about using pyparsing's <code>search_string</code> instead of <code>parse_string</code> which never seems to raise an exception, but the output is often not really useful for my use case (e.g. <code>foo AND OR bar</code> => <code>[[TermNode(WORD, foo)], [BinaryNode(OR, TermNode(WORD, ND), TermNode(WORD, bar))]]</code>)</p>
<pre class="lang-py prettyprint-override"><code>import pyparsing as pp
from typing import Literal
class TermNode:
def __init__(self, term_type: Literal["WORD", "PHRASE"], value: "Node"):
self.term_type = term_type
self.value = value
def __repr__(self):
return f"TermNode({self.term_type}, {self.value})"
class UnaryNode:
def __init__(self, operator: Literal["NOT"], operand: "Node"):
self.operator = operator
self.operand = operand
def __repr__(self):
return f"UnaryNode({self.operator}, {self.operand})"
class BinaryNode:
def __init__(self, operator: Literal["AND", "OR"], left: "Node", right: "Node"):
self.operator = operator
self.left = left
self.right = right
def __repr__(self):
return f"BinaryNode({self.operator}, {self.left}, {self.right})"
Node = TermNode | UnaryNode | BinaryNode
not_ = pp.Keyword("NOT")
and_ = pp.Keyword("AND")
or_ = pp.Keyword("OR")
lparen = pp.Literal("(")
rparen = pp.Literal(")")
extra_chars = "_-'"
word = ~(not_ | and_ | or_) + pp.Word(pp.alphanums + pp.alphas8bit + extra_chars).set_parse_action(lambda t: TermNode("WORD", t[0]))
phrase = pp.QuotedString(quoteChar='"').set_parse_action(lambda t: TermNode("PHRASE", t[0]))
term = (phrase | word)
or_expression = pp.Forward()
parens_expression = pp.Forward()
parens_expression <<= (pp.Suppress(lparen) + or_expression + pp.Suppress(rparen)) | term
not_expression = pp.Forward()
not_expression <<= (not_ + not_expression).set_parse_action(lambda t: UnaryNode("NOT", t[1])) | parens_expression
and_expression = pp.Forward()
and_expression <<= (not_expression + and_ + and_expression).set_parse_action(lambda t: BinaryNode("AND", t[0], t[2])) | (not_expression + and_expression).set_parse_action(lambda t: BinaryNode("AND", t[0], t[1])) | not_expression
or_expression <<= (and_expression + or_ + or_expression).set_parse_action(lambda t: BinaryNode("OR", t[0], t[2])) | and_expression
#or_expression.parse_string('', parse_all=True)
or_expression.run_tests("""\
###
# Valid expressions
###
# Word term
foobar
# Umlaute in word term
Gürtel
# Phrase term
"foo bar"
# Special characters in phrase
"foo!~ bar %"
# Implicit AND
foo bar
# Explicit AND
foo AND bar
# Explicit OR
foo OR bar
# NOT
NOT foo
# Parenthesis
foo AND (bar OR baz)
# Complex expression 1
NOT foo AND ("bar baz" OR qux)
# Complex expression 2
foo AND (NOT "bar baz" (moo OR zoo) AND yoo)
# Complex expression 3
foo (bar NOT "baz moo") zoo
###
# Invalid expressions
###
# Unary before binary operator
foo NOT AND bar
# Invalid redundant operators
foo AND OR bar
# Unknown char outside quoted terms
foo ~ bar
# Binary operator at start of line
AND foo
# Binary operator at start of parens expression
(AND bar)
# Binary operator at end of line
foo AND
# Binary operator at end of parens expression
(foo AND)
# Unary operator at end of line
foo NOT
# Unary operator at end of parens expression
(foo NOT)
# Unbalanced parens
((foo)
# Unbalanced quotes
""foo"
""");
</code></pre>
|
<python><parsing><pyparsing>
|
2024-05-15 15:26:20
| 1
| 16,667
|
medihack
|
78,484,934
| 10,240,072
|
Dataframe - Rolling product - timedelta window
|
<p>I am trying to do a simple rolling multiplication over a dataframe (each output value should be the product of all input values in the window). Realizing that rolling does not support products, I looked for alternative solutions, but they do not seem to work with variable-length windows:</p>
<pre><code>start_date = '2024-05-01'
date_index = pd.date_range(end=start_date, periods=15, freq='B')
df = pd.DataFrame(list(range(1,16)), index = date_index, columns=['A'])
df.rolling('1W').agg({"A": "prod"})
</code></pre>
<p>This returns the error: ValueError: <Week: weekday=6> is a non-fixed frequency</p>
<p>Is there any clean workaround for this type of rolling?</p>
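<p>For reference, the closest I have found is a log/exp workaround with a fixed <code>'7D'</code> frequency instead of <code>'1W'</code> (I'm not sure it counts as clean):</p>

```python
import numpy as np
import pandas as pd

start_date = '2024-05-01'
date_index = pd.date_range(end=start_date, periods=15, freq='B')
df = pd.DataFrame(list(range(1, 16)), index=date_index, columns=['A'])

# '7D' is a fixed frequency, unlike '1W', so time-based rolling accepts it;
# the rolling product is recovered as exp(sum(log(values)))
result = np.exp(np.log(df['A']).rolling('7D').sum())
print(result.head())
```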
|
<python><pandas><rolling-computation>
|
2024-05-15 15:20:13
| 1
| 313
|
Fred Dujardin
|
78,484,891
| 7,456,317
|
pydantic: default values for None fields
|
<p><em><strong>Update:</strong></em>
Inspired by @chepner's answer, I change the validator as follows:</p>
<pre><code> @field_validator("age", mode="before")
@classmethod
def lower_age(cls, value: int) -> Union[int, None]:
if value is None or pd.isna(value):
return cls.model_fields["age"].default
else:
return value - 1
</code></pre>
<p>According to the <a href="https://docs.pydantic.dev/2.0/migration/#required-optional-and-nullable-fields" rel="nofollow noreferrer">official Pydantic guide</a>, a value of <code>None</code> is a valid value for an optional field with a default value.
How can I change this so that a <code>None</code> value is replaced by the default value?
My use case is a record in a Pandas dataframe, such that some of the fields can be <code>None</code>:</p>
<pre><code>from pydantic import BaseModel, field_validator
from typing import Optional, Union
import pandas as pd
import json
class Person(BaseModel):
name: str
age: Optional[int] = 10
@field_validator("age", mode="before")
@classmethod
def lower_age(cls, value: int) -> Union[int, None]:
return None if value is None or pd.isna(value) else value - 1
if __name__ == "__main__":
df_person = pd.DataFrame({
"name": ["Alice", "Bob"],
"age": [10, None]
})
person_list = df_person.to_dict(orient="records")
for p in person_list:
m = Person.model_validate(p)
print(json.dumps(m.model_dump(), indent=4))
</code></pre>
<p>In this MWE, I want Bob to get a value of 10, not null.
I know I can change the field_validator to return the default value, but that means entering the default value twice.</p>
|
<python><pydantic>
|
2024-05-15 15:12:05
| 1
| 913
|
Gino
|
78,484,847
| 8,543,025
|
Best way to match between arrays of different length
|
<p>I have two sorted numerical arrays of unequal length and I'm looking for a way to match between elements in both arrays such that there's a one-to-one match between (most) <code>gt</code> and <code>pred</code> elements. A match is only considered valid if it is below some threshold <code>thrsh</code> (so there could be unmatched <code>gt</code> elements), and we define a <code>cost</code> of a match as the difference between matched <code>gt</code> and <code>pred</code> elements.<br />
From here we can define three types of matches:</p>
<ul>
<li><em>First Match</em> takes the <em>first</em> valid match for each <code>gt</code> element, as shown in <a href="https://stackoverflow.com/a/7426528/8543025">this answer</a>.</li>
<li><em>Greedy Match</em> tries to match as many <code>gt</code> elements as possible, regardless of the "cost" of matching.</li>
<li><em>Min Cost Match</em> is the match with minimal "cost".</li>
</ul>
<p>For example:</p>
<pre><code>thrsh=3
gt = [4,8,15,16,23,42,45]
pred = [4,5,7,16,19,44]
# some matching voodoo...
# first_match = [(4,4),(8,5),(15,16),(16,19),(42,44)] # gt23, gt45 and pred7 are unmatched
# greedy_match is the same as first_match
# min_cost_match = [(4,4),(8,7),(16,16),(45,44)] # gt15 and pred19 now unmatched as well
</code></pre>
<p>I believe <em>First Match</em> and <em>Greedy Match</em> are always the same (up to the last matched pair), and as mentioned, there's an implementation of <em>First Match</em> <a href="https://stackoverflow.com/a/7426528/8543025">here</a>. However, I can't find a way to implement the <em>Min Cost Match</em>, as any iteration may also change the matches of previous iterations (depending on how large <code>thrsh</code> is, this could be very computationally heavy).</p>
|
<python><arrays><numpy>
|
2024-05-15 15:05:25
| 1
| 593
|
Jon Nir
|
78,484,828
| 1,028,270
|
Does BaseSettings support adding arbitrary input params without having to override the constructor?
|
<p>In my Config class:</p>
<pre><code>class MyConfig(BaseSettings):
err_if_not_found: bool = False
model_config = SettingsConfigDict(
env_file=MyConfigs(err_if_not_found).configs,
env_prefix="MY_APP",
)
</code></pre>
<p>I have my own class for loading configs and it supports optionally throwing an error if <code>err_if_not_found</code> is set.</p>
<p>I really don't want to override the BaseSettings constructor; I want to leave that alone and not worry about duplicating it correctly without breaking pydantic features.</p>
<p>Is there a feature to be able to do something like this where I'm passing in a value to then pass along to a function or something being called inside of <code>SettingsConfigDict</code>:</p>
<pre><code>MyConfig(err_if_not_found=True)
</code></pre>
|
<python><pydantic>
|
2024-05-15 15:01:05
| 1
| 32,280
|
red888
|
78,484,794
| 6,714,667
|
How can i remove fractions from the first string in this list of lists?
|
<p>E.g. I have the following:</p>
<pre><code>test = [['6 / 24 hello','4 / 5 askdskjf'],['2 / 3 dentist']]
</code></pre>
<p>I'd like to remove "6 / 24" and "2 / 3", i.e. the fraction at the start of the first string, to get:</p>
<pre><code>[[' hello', '4 / 5 askdskjf'], [' dentist']]
</code></pre>
<p>I've tried this: <code>[x for x in test if re.sub(r'^\d{1} / \d{2}','', x[0])]</code> but it does not work.</p>
<p>What can I do to get the result I need?</p>
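<p>For reference, a minimal sketch of the intended substitution: apply <code>re.sub</code> only to the first element of each inner list, and allow any digit counts with <code>\d+</code> (the original pattern's <code>\d{2}</code> requires exactly two digits, so "2 / 3" never matches):</p>

```python
import re

test = [['6 / 24 hello', '4 / 5 askdskjf'], ['2 / 3 dentist']]

# Strip a leading "<digits> / <digits>" from the first string of each inner list.
result = [[re.sub(r'^\d+ / \d+', '', s) if i == 0 else s
           for i, s in enumerate(inner)]
          for inner in test]
# result: [[' hello', '4 / 5 askdskjf'], [' dentist']]
```

<p>Note that a list comprehension's <code>if</code> clause only filters; it does not apply the substitution, which is why the original attempt returned the list unchanged.</p>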
|
<python><regex>
|
2024-05-15 14:54:53
| 1
| 999
|
Maths12
|
78,484,792
| 15,394,019
|
Python (Polars): Vectorized operation of determining current solution with the use of previous variables
|
<p>Let's say we have 3 variables <strong>a</strong>, <strong>b</strong> & <strong>c</strong>.</p>
<p>There are <strong>n</strong> instances of each, and <strong>all but the first instance of c are null</strong>.</p>
<p>We are to calculate each <em>next</em> <strong>c</strong> based on a given formula <strong>comprising of only present variables on the right hand side</strong>:</p>
<p><code>c = [(1 + a) * (current_c) * (b)] + [(1 + b) * (current_c) * (a)]</code></p>
<p>How do we go about this calculation without using native python looping?
I've tried:</p>
<ul>
<li><code>pl.int_range(my_index_column_value, pl.len() + 1)</code> (<em>my index starts form 1</em>)</li>
<li><code>pl.rolling(...)</code> (<em>this seems to be quite an expensive operation</em>)</li>
<li><code>pl.when(...).then(...).otherwise(...)</code> with the above two along with <code>.over(...)</code> & <code>pl.select(...).item()</code></li>
</ul>
<p>to no avail. It's always the case that the shift has already been fully applied at once. I thought perhaps the most plausible way to do this would be either <em>rolling by 1</em> with <em>grouping by 2</em>, or via <code>pl.int_range(...)</code> and <em>using the current index column number as the shift value</em>. However, these keep failing, as I am unable to come up with the correct syntax: I'm unable to pass the index column value and have polars accept it as a number. Even casting throws the same errors. Right now I am thinking we could manage another row for shifting and passing values back to row <strong>c</strong>, but then again, I'm not sure if this would even be an efficient way to go about it...</p>
<p>What would be the most optimal way to go about this without offloading to Rust?</p>
<p>Code for reference:</p>
<pre><code>import polars as pl

if __name__ == "__main__":
    initial_c_value = 3
    df = pl.DataFrame(((2, 3, 4, 5, 8), (3, 7, 4, 9, 2)), schema=('a', 'b'))
    df = df.with_row_index('i', 1).with_columns(pl.lit(None).alias('c'))
    df = df.with_columns(pl.when(pl.col('i') == 1)
                         .then(
                             (((1 + pl.col('a')) * (initial_c_value) * (pl.col('b'))) +
                              ((1 + pl.col('b')) * (initial_c_value) * (pl.col('a')))).alias('c'))
                         .otherwise(
                             ((1 + pl.col('a')) * (pl.col('c').shift(1)) * (pl.col('b'))) +
                             ((1 + pl.col('b')) * (pl.col('c').shift(1)) * (pl.col('a')))).shift(1).alias('c'))
    print(df)
</code></pre>
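<p>For reference: the recurrence here is multiplicative, since <code>c_i = c_{i-1} * ((1 + a_i) * b_i + (1 + b_i) * a_i)</code>, so a cumulative product removes the row-by-row dependency entirely. A minimal numpy sketch of the identity, checked against a direct loop:</p>

```python
import numpy as np

initial_c = 3
a = np.array([2, 3, 4, 5, 8])
b = np.array([3, 7, 4, 9, 2])

# Per-row multiplicative factor extracted from the recurrence.
factor = (1 + a) * b + (1 + b) * a
c_vectorized = initial_c * np.cumprod(factor)

# Reference loop implementing the original formula directly.
c_loop, current = [], initial_c
for ai, bi in zip(a, b):
    current = (1 + ai) * current * bi + (1 + bi) * current * ai
    c_loop.append(current)
# c_vectorized and c_loop agree element-wise.
```

<p>In polars itself, the analogous expression would presumably be the same factor followed by a cumulative product (the method name may be <code>cum_prod</code> or <code>cumprod</code> depending on the polars version), so no <code>shift</code> or per-row <code>when/then</code> should be needed.</p>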
|
<python><dataframe><vectorization><python-polars><rolling-computation>
|
2024-05-15 14:54:39
| 2
| 934
|
mindoverflow
|
78,484,732
| 8,995,379
|
Multiple substrings in string python within variable check
|
<p>I have the following code:</p>
<pre><code>check = "red" in string_to_explore
</code></pre>
<p>How can I replicate it for multiple substrings?</p>
<p>I tried with:</p>
<pre><code>check = "red|blue" in string_to_explore
</code></pre>
<p>but it doesn't seem to work.</p>
<p>Thanks in advance.</p>
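<p>For reference, a minimal sketch of checking several substrings at once with the built-in <code>any()</code> (the sample string is hypothetical):</p>

```python
string_to_explore = "the car is red"

# True if at least one of the substrings occurs in the string.
check = any(sub in string_to_explore for sub in ("red", "blue"))
# check: True
```

<p>The <code>"red|blue"</code> syntax only works as a regular expression, e.g. <code>re.search(r"red|blue", string_to_explore)</code>; as a plain string, <code>in</code> looks for the literal text <code>red|blue</code>.</p>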
|
<python><python-3.x><string><find><substring>
|
2024-05-15 14:44:08
| 3
| 840
|
iraciv94
|
78,484,689
| 511,302
|
Django test if all objects that refer to an object have a certain "flag"?
|
<p>When I need to test whether an object isn't referred to anymore, I could use</p>
<pre><code>class ModelA(models.Model):
    pass

class ModelB(models.Model):
    retired = models.BooleanField(default=False)
    modelA = models.ForeignKey(
        ModelA,
    )

ModelA.objects.filter(~Q(modelb__isnull=False))
</code></pre>
<p>The "weird" format of using negation and testing for False makes sure an Exists statement is created instead of a subquery or join.</p>
<p>However, how would I test whether "ModelA is only used in instances of ModelB that have retired=True", or is not referred to at all?</p>
|
<python><django><orm>
|
2024-05-15 14:36:58
| 1
| 9,627
|
paul23
|
78,484,486
| 1,071,405
|
How to group 3d points closer than a distance threshold in python?
|
<p>I have a set of 3d points and I want find any groups of them that are "close together" based on some threshold distance and make a group (removing them from my set). So I end up with a set of groups of points and the remainder.</p>
<p>The definition of "close together" would be that they fit in a sphere of less than the threshold distance, so I don't get a line of points in one group.</p>
<p>I need to do this in Python, which I'm not very familiar with (C++ coder). It's a precomputation step, so it doesn't have to be that fast, and there are not likely to be more than a few thousand points.</p>
<p>Ideally I want it to be fairly simple to implement, because it uses a library or is otherwise simple.</p>
<p>Any pointers to an algorithm or sample code would be appreciated.</p>
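<p>For reference, one possible sketch using SciPy's hierarchical clustering: with <em>complete</em> linkage, every pair of points inside a cluster is within the threshold distance, which approximates the "fits in a small sphere" criterion (the random points below are just placeholder data):</p>

```python
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage

rng = np.random.default_rng(0)
points = rng.random((200, 3))      # placeholder 3D points
threshold = 0.15

# Complete linkage merges clusters by their *maximum* pairwise distance,
# so cutting the dendrogram at `threshold` guarantees all points in a
# cluster are mutually within `threshold` of each other.
labels = fcluster(linkage(points, method='complete'),
                  t=threshold, criterion='distance')

# Clusters of size > 1 become groups; singletons form the remainder.
groups = [points[labels == k] for k in np.unique(labels) if (labels == k).sum() > 1]
singles = [k for k in np.unique(labels) if (labels == k).sum() == 1]
remainder = points[np.isin(labels, singles)]
```

<p>For a few thousand points, the O(n²) linkage step should be easily fast enough.</p>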
|
<python><algorithm><3d>
|
2024-05-15 14:01:35
| 2
| 683
|
Jules
|
78,484,063
| 6,912,069
|
reverse order by group in polars
|
<p>I'd like to <strong>reverse</strong> (not sort!) the order of a column in a polars dataframe, but only within the scope of a group. I know this is not so common, but I stumbled over a use case where I need a symmetric group-by operation for both orderings (normal and reversed).</p>
<p>Here is some example code:</p>
<pre class="lang-py prettyprint-override"><code>import polars as pl
df = pl.DataFrame(
data={
'group': ['a', 'a', 'a', 'b', 'b'],
'value': [1, 2, 3, 4, 5],
}
)
print(df)
</code></pre>
<pre><code>shape: (5, 2)
┌───────┬───────┐
│ group ┆ value │
│ --- ┆ --- │
│ str ┆ i64 │
╞═══════╪═══════╡
│ a ┆ 1 │
│ a ┆ 2 │
│ a ┆ 3 │
│ b ┆ 4 │
│ b ┆ 5 │
└───────┴───────┘
</code></pre>
<p>And my expected result would be:</p>
<pre><code>shape: (5, 2)
┌───────┬───────┐
│ group ┆ value │
│ --- ┆ --- │
│ str ┆ i64 │
╞═══════╪═══════╡
│ a ┆ 3 │
│ a ┆ 2 │
│ a ┆ 1 │
│ b ┆ 5 │
│ b ┆ 4 │
└───────┴───────┘
</code></pre>
<p>Is there also a way to do such an operation within a <code>.with_columns()</code> call?</p>
|
<python><python-polars>
|
2024-05-15 12:46:18
| 3
| 686
|
N. Maks
|
78,483,941
| 822,896
|
Pandas DataFrame With Counted Values
|
<p>I have the following data:</p>
<pre class="lang-py prettyprint-override"><code>data = [{'Shape': 'Circle', 'Color': 'Green'}, {'Shape': 'Circle', 'Color': 'Green'}, {'Shape': 'Circle', 'Color': 'Green'}]
</code></pre>
<p>Which I create a DataFrame from:</p>
<pre class="lang-py prettyprint-override"><code>df = pd.DataFrame(data)
</code></pre>
<p>Giving:</p>
<pre><code>>>> df
Shape Color
0 Circle Green
1 Circle Green
2 Circle Green
</code></pre>
<p>The data is always received in this form, and I cannot change it.</p>
<p>Now I need to count the <code>Color</code> column for each <code>Shape</code>, with the <code>Color</code> as the index, like this:</p>
<pre><code> Circle Square
Green 3 0
Red 0 0
</code></pre>
<p>However, while I know that <code>Shape</code> can be either <code>Circle</code> or <code>Square</code>, they may or may not be present in the data. Likewise, <code>Color</code> can be either <code>Green</code> or <code>Red</code>, but also may or may not be present in the data.</p>
<p>So at the moment my solution is to use:</p>
<pre class="lang-py prettyprint-override"><code>df2 = pd.DataFrame(
    [
        {
            "Color": "Green",
            "Circle": len(np.where((df["Shape"] == "Circle") & (df["Color"] == "Green"))[0]),
            "Square": len(np.where((df["Shape"] == "Square") & (df["Color"] == "Green"))[0]),
        },
        {
            "Color": "Red",
            "Circle": len(np.where((df["Shape"] == "Circle") & (df["Color"] == "Red"))[0]),
            "Square": len(np.where((df["Shape"] == "Square") & (df["Color"] == "Red"))[0]),
        },
    ]
)
df2 = df2.set_index("Color")
df2.index.name = None
</code></pre>
<p>Which gives the desired result:</p>
<pre><code>>>> df2
Circle Square
Green 3 0
Red 0 0
</code></pre>
<p>But I suspect this is inefficient. Is there a better way of doing this in Pandas directly? I tried a pivot_table, but couldn't get it to account for possible missing values in the data.</p>
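<p>For reference, one sketch of doing this directly in pandas: <code>pd.crosstab</code> does the counting, and <code>reindex</code> forces the full set of known categories in, filling absentees with 0:</p>

```python
import pandas as pd

data = [{'Shape': 'Circle', 'Color': 'Green'}] * 3
df = pd.DataFrame(data)

# Count Color x Shape, then force all known categories to appear.
counts = (pd.crosstab(df['Color'], df['Shape'])
            .reindex(index=['Green', 'Red'],
                     columns=['Circle', 'Square'],
                     fill_value=0))
counts.index.name = None
counts.columns.name = None
#        Circle  Square
# Green       3       0
# Red         0       0
```

<p>An alternative is to cast both columns to <code>pd.Categorical</code> with the full category lists first, so the counting step itself is aware of the missing categories.</p>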
|
<python><pandas><dataframe>
|
2024-05-15 12:28:13
| 2
| 1,229
|
Jak
|
78,483,843
| 525,865
|
How to scrape links from summary section / link list of wikipedia?
|
<p>Update: many thanks for the replies, the help and all the efforts! I have added some additional notes below (at the end).</p>
<p>Howdy, I am trying to scrape all the links of a large Wikipedia page, the "List of <strong>Towns and Gemeinden in Bayern</strong>", using Python. The trouble is that I cannot figure out how to export all of the links containing "/wiki/" to my CSV file. I am somewhat used to Python, but some things are still kind of foreign to me. Any ideas? Here is what I have so far...</p>
<p><strong>the page:</strong> <a href="https://de.wikipedia.org/wiki/Liste_der_St%C3%A4dte_und_Gemeinden_in_Bayern#A" rel="nofollow noreferrer">https://de.wikipedia.org/wiki/Liste_der_St%C3%A4dte_und_Gemeinden_in_Bayern#A</a></p>
<pre><code>from bs4 import BeautifulSoup as bs
import requests

res = requests.get("https://en.wikipedia.org/wiki/Liste_der_St%C3%A4dte_und_Gemeinden_in_Bayern#A")
soup = bs(res.text, "html.parser")

gemeinden_in_bayern = {}
for link in soup.find_all("a"):
    url = link.get("href", "")
    if "/wiki/" in url:
        gemeinden_in_bayern[link.text.strip()] = url

print(gemeinden_in_bayern)
</code></pre>
<p>The results do not look very specific:</p>
<pre><code> nt': 'https://foundation.wikimedia.org/wiki/Special:MyLanguage/Policy:Cookie_statement'}
Kostenpflichtige Colab-Produkte - Hier können Sie Verträge kündigen
</code></pre>
<p>What I am really aiming for is to get a list like so:</p>
<pre><code>https://de.wikipedia.org/wiki/Abenberg
https://de.wikipedia.org/wiki/Abensberg
https://de.wikipedia.org/wiki/Absberg
https://de.wikipedia.org/wiki/Abtswind
</code></pre>
<p><strong>Btw</strong>, on a side note: on the above-mentioned subpages I have information in the infobox, which I am able to gather. See an example:</p>
<pre><code>import pandas

urlpage = 'https://de.wikipedia.org/wiki/Abenberg'
data = pandas.read_html(urlpage)[0]
null = data.isnull()
for x in range(len(data)):
    first = data.iloc[x][0]
    second = data.iloc[x][1] if not null.iloc[x][1] else ""
    print(first, second, "\n")
</code></pre>
<p>which runs perfectly see the output:</p>
<pre><code>Basisdaten Basisdaten
Koordinaten: 49° 15′ N, 10° 58′ OKoordinaten: 49° 15′ N, 10° 58′ O
Bundesland: Bayern
Regierungsbezirk: Mittelfranken
Landkreis: Roth
Höhe: 414 m ü. NHN
Fläche: 48,41 km2
Einwohner: 5607 (31. Dez. 2022)[1]
Bevölkerungsdichte: 116 Einwohner je km2
Postleitzahl: 91183
Vorwahl: 09178
Kfz-Kennzeichen: RH, HIP
Gemeindeschlüssel: 09 5 76 111
LOCODE: ABR
Stadtgliederung: 14 Gemeindeteile
Adresse der Stadtverwaltung: Stillaplatz 1 91183 Abenberg
Website: www.abenberg.de
Erste Bürgermeisterin: Susanne König (parteilos)
Lage der Stadt Abenberg im Landkreis Roth Lage der Stadt Abenberg im Landkreis Roth
</code></pre>
<p>That said, I found out that the infobox is a typical wiki part. So if I get familiar with this part, then I will have learned a lot for future tasks, not only for me but for the many others who are diving into the topic of scraping wiki pages. So this might be a general task, helpful and packed with lots of information for many others too.</p>
<p>So far so good: I have a list of pages that lead to quite a few infoboxes:
<a href="https://de.wikipedia.org/wiki/Liste_der_St%C3%A4dte_und_Gemeinden_in_Bayern#A" rel="nofollow noreferrer">https://de.wikipedia.org/wiki/Liste_der_St%C3%A4dte_und_Gemeinden_in_Bayern#A</a></p>
<p>I think it is worth traversing over them and fetching the infobox. The information I am looking for could be gathered with Python code that traverses over all the findings:</p>
<pre><code>https://de.wikipedia.org/wiki/Abenberg
https://de.wikipedia.org/wiki/Abensberg
https://de.wikipedia.org/wiki/Absberg
https://de.wikipedia.org/wiki/Abtswind
</code></pre>
<p>...and so on and so forth. Note: with that I would be able to apply my above-mentioned scraper, which is able to fetch the data of one infobox.</p>
<p><strong>update</strong></p>
<p>Again, hello dear HedgeHog and dear Salman Khan,</p>
<p>First of all, many many thanks for the quick help and your awesome support. Glad that you set me straight.
Btw, now we have all the links of the large Wikipedia page from the "List of Towns and Gemeinden in Bayern".</p>
<p>I would love to go ahead and work on the extraction of the infobox, which, btw, would be a general task that might be interesting for many users on Stack Overflow. <strong>Conclusion:</strong> see the main page: <a href="https://de.wikipedia.org/wiki/Liste_der_St%C3%A4dte_und_Gemeinden_in_Bayern" rel="nofollow noreferrer">https://de.wikipedia.org/wiki/Liste_der_St%C3%A4dte_und_Gemeinden_in_Bayern</a> and the subpage with the <strong>infobox</strong>: <a href="https://de.wikipedia.org/wiki/Abenberg" rel="nofollow noreferrer">https://de.wikipedia.org/wiki/Abenberg</a></p>
<p>And here is how I gather the data:</p>
<pre><code>import pandas

urlpage = 'https://de.wikipedia.org/wiki/Abenberg'
data = pandas.read_html(urlpage)[0]
null = data.isnull()
for x in range(len(data)):
    first = data.iloc[x][0]
    second = data.iloc[x][1] if not null.iloc[x][1] else ""
    print(first, second, "\n")
</code></pre>
<p>which runs perfectly see the output:</p>
<pre><code>Basisdaten Basisdaten
Koordinaten: 49° 15′ N, 10° 58′ OKoordinaten: 49° 15′ N, 10° 58′ O
Bundesland: Bayern
Regierungsbezirk: Mittelfranken
Landkreis: Roth
Höhe: 414 m ü. NHN
Fläche: 48,41 km2
Einwohner: 5607 (31. Dez. 2022)[1]
Bevölkerungsdichte: 116 Einwohner je km2
Postleitzahl: 91183
Vorwahl: 09178
Kfz-Kennzeichen: RH, HIP
Gemeindeschlüssel: 09 5 76 111
LOCODE: ABR
Stadtgliederung: 14 Gemeindeteile
Adresse der Stadtverwaltung: Stillaplatz 1 91183 Abenberg
Website: www.abenberg.de
Erste Bürgermeisterin: Susanne König (parteilos)
Lage der Stadt Abenberg im Landkreis Roth Lage der Stadt Abenberg im Landkreis Roth
</code></pre>
<p>What I am aiming for is to gather all the data of the infobox(es) from all the pages.</p>
<pre><code>import requests
from bs4 import BeautifulSoup
import pandas as pd

def fetch_city_links(list_url):
    response = requests.get(list_url)
    if response.status_code != 200:
        print(f"Failed to retrieve the page: {list_url}")
        return []
    soup = BeautifulSoup(response.content, 'html.parser')
    divs = soup.find_all('div', class_='column-multiple')
    href_list = []
    for div in divs:
        li_items = div.find_all('li')
        for li in li_items:
            a_tags = li.find_all('a', href=True)
            href_list.extend(['https://de.wikipedia.org' + a['href'] for a in a_tags])
    return href_list

def scrape_infobox(url):
    response = requests.get(url)
    soup = BeautifulSoup(response.content, 'html.parser')
    infobox = soup.find('table', {'class': 'infobox'})
    if not infobox:
        print(f"No infobox found on this page: {url}")
        return None
    data = {}
    for row in infobox.find_all('tr'):
        header = row.find('th')
        value = row.find('td')
        if header and value:
            data[header.get_text(" ", strip=True)] = value.get_text(" ", strip=True)
    return data

def main():
    list_url = 'https://de.wikipedia.org/wiki/Liste_der_St%C3%A4dte_und_Gemeinden_in_Bayern'
    city_links = fetch_city_links(list_url)
    all_data = []
    for link in city_links:
        print(f"Scraping {link}")
        infobox_data = scrape_infobox(link)
        if infobox_data:
            infobox_data['URL'] = link
            all_data.append(infobox_data)
    df = pd.DataFrame(all_data)
    df.to_csv('wikipedia_infoboxes.csv', index=False)

if __name__ == "__main__":
    main()
</code></pre>
<p>The main function:</p>
<pre><code>def main():
    list_url = 'https://de.wikipedia.org/wiki/Liste_der_St%C3%A4dte_und_Gemeinden_in_Bayern'
    city_links = fetch_city_links(list_url)
    all_data = []
    for link in city_links:
        print(f"Scraping {link}")
        infobox_data = scrape_infobox(link)
        if infobox_data:
            infobox_data['URL'] = link
            all_data.append(infobox_data)
    df = pd.DataFrame(all_data)
    df.to_csv('wikipedia_infoboxes.csv', index=False)
</code></pre>
<p>Well, I thought that this function orchestrates the process: it fetches the city links, scrapes the infobox data for each city, and stores the collected data in a pandas DataFrame. Finally, it saves the DataFrame to a CSV file.</p>
<p><strong>BTW:</strong> I hope that this will not nuke the thread. I hope that this extended question is okay here; if not, I can open a new thread! Thanks for everything.</p>
|
<python><pandas><web-scraping><beautifulsoup><python-requests>
|
2024-05-15 12:08:20
| 2
| 1,223
|
zero
|
78,483,799
| 2,178,942
|
Plotting an array with size n*512 to the PC components of another array with size n*256
|
<p>I have an array <code>a</code> with size <code>n*512</code>, I first want to plot it using PCA.</p>
<p>Next, I have another array <code>b</code> with size <code>n*256</code>, I want to plot it on the PCA components obtained above...</p>
<p>How can I do it?</p>
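<p>For reference, a minimal sketch (with random placeholder data) of fitting PCA on <code>a</code>. One caveat worth noting: components fitted on 512-dimensional data cannot transform the 256-dimensional <code>b</code> directly; <code>b</code> would either need its own PCA fit, or a mapping into the same 512-dimensional space first.</p>

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
n = 100
a = rng.standard_normal((n, 512))   # placeholder for the real array
b = rng.standard_normal((n, 256))

pca = PCA(n_components=2)
a_2d = pca.fit_transform(a)         # rows of `a` in the 2D PC space

# pca.transform(b) would raise: the fitted components expect 512 features.
# a_2d could then be plotted, e.g. with matplotlib's scatter(a_2d[:, 0], a_2d[:, 1]).
```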
|
<python><algorithm><plot><pca><dimensionality-reduction>
|
2024-05-15 12:02:04
| 1
| 1,581
|
Kadaj13
|
78,483,761
| 770,513
|
Wagtail Django migration doesn't get applied when running `migrate`
|
<p>When I run Django <code>makemigrations</code> on Wagtail I get a migration (pasted at the bottom) which doesn’t seem to migrate properly. Here you see me making the migration successfully, applying it without errors and then, after doing <code>runserver</code>, being told that I have migrations to run.</p>
<pre><code>(.venv) development ➜ app 🐟 manpy makemigrations
/root/.cache/pypoetry/virtualenvs/.venv/lib/python3.10/site-packages/wagtail/utils/widgets.py:10: RemovedInWagtail70Warning: The usage of `WidgetWithScript` hook is deprecated. Use external scripts instead.
warn(
System check identified some issues:
WARNINGS:
?: (urls.W005) URL namespace 'freetag_chooser' isn't unique. You may not be able to reverse all URLs in this namespace
?: (urls.W005) URL namespace 'pagetag_chooser' isn't unique. You may not be able to reverse all URLs in this namespace
?: (urls.W005) URL namespace 'sectiontag_chooser' isn't unique. You may not be able to reverse all URLs in this namespace
Migrations for 'core':
modules/core/migrations/0019_whofundsyoupage_stream.py
- Add field stream to whofundsyoupage
(.venv) development ➜ app 🐟 manpy migrate
/root/.cache/pypoetry/virtualenvs/.venv/lib/python3.10/site-packages/wagtail/utils/widgets.py:10: RemovedInWagtail70Warning: The usage of `WidgetWithScript` hook is deprecated. Use external scripts instead.
warn(
System check identified some issues:
WARNINGS:
?: (urls.W005) URL namespace 'freetag_chooser' isn't unique. You may not be able to reverse all URLs in this namespace
?: (urls.W005) URL namespace 'pagetag_chooser' isn't unique. You may not be able to reverse all URLs in this namespace
?: (urls.W005) URL namespace 'sectiontag_chooser' isn't unique. You may not be able to reverse all URLs in this namespace
Operations to perform:
Apply all migrations: admin, auth, contenttypes, core, csp, django_cron, donations, importers, sessions, submissions, taggit, taxonomy, users, wagtail_localize, wagtailadmin, wagtailcore, wagtaildocs, wagtailembeds, wagtailforms, wagtailimages, wagtailredirects, wagtailsearch, wagtailsearchpromotions, wagtailusers
Running migrations:
Applying core.0019_whofundsyoupage_stream...⏎
(.venv) development ➜ app 🐟 runserver
/root/.cache/pypoetry/virtualenvs/.venv/lib/python3.10/site-packages/wagtail/utils/widgets.py:10: RemovedInWagtail70Warning: The usage of `WidgetWithScript` hook is deprecated. Use external scripts instead.
warn(
/root/.cache/pypoetry/virtualenvs/.venv/lib/python3.10/site-packages/wagtail/utils/widgets.py:10: RemovedInWagtail70Warning: The usage of `WidgetWithScript` hook is deprecated. Use external scripts instead.
warn(
Performing system checks...
System check identified some issues:
WARNINGS:
?: (urls.W005) URL namespace 'freetag_chooser' isn't unique. You may not be able to reverse all URLs in this namespace
?: (urls.W005) URL namespace 'pagetag_chooser' isn't unique. You may not be able to reverse all URLs in this namespace
?: (urls.W005) URL namespace 'sectiontag_chooser' isn't unique. You may not be able to reverse all URLs in this namespace
System check identified 3 issues (0 silenced).
You have 1 unapplied migration(s). Your project may not work properly until you apply the migrations for app(s): core.
Run 'python manage.py migrate' to apply them.
May 14, 2024 - 16:33:16
Django version 5.0.4, using settings 'config.settings.development'
Starting development server at http://0.0.0.0:5000/
Quit the server with CONTROL-C.
</code></pre>
<p>And here is the code of the migration that's been made and doesn't seem to be being applied:</p>
<p><code>modules/core/migrations/0019_whofundsyoupage_stream.py</code></p>
<pre><code># Generated by Django 5.0.4 on 2024-05-14 16:31
import modules.core.blocks.override
import wagtail.blocks
import wagtail.fields
import wagtail.images.blocks
from django.db import migrations
class Migration(migrations.Migration):
dependencies = [
('core', '0018_auto_20240226_1608'),
]
operations = [
migrations.AddField(
model_name='whofundsyoupage',
name='stream',
field=wagtail.fields.StreamField([('rich_text', wagtail.blocks.StructBlock([('text', modules.core.blocks.override.RichTextBlock(label='Body text', required=True))], group=' Content')), ('image', wagtail.blocks.StructBlock([('image', wagtail.images.blocks.ImageChooserBlock(required=True)), ('alt_text', wagtail.blocks.CharBlock(label='Override image alt-text', required=False)), ('caption', wagtail.blocks.RichTextBlock(features=['link', 'document-link'], label='Override caption', required=False)), ('credit', wagtail.blocks.RichTextBlock(features=['link', 'document-link'], label='Override credit', required=False)), ('image_display', wagtail.blocks.ChoiceBlock(choices=[('full', 'Full'), ('long', 'Long'), ('medium', 'Medium'), ('small-image', 'Small')]))], group=' Content')), ('html_advanced', wagtail.blocks.StructBlock([('html', wagtail.blocks.RawHTMLBlock(label='HTML code', required=True)), ('styling', wagtail.blocks.ChoiceBlock(choices=[('default', 'Default'), ('remove-styles', 'Remove style')]))], group=' Content'))], blank=True, verbose_name='Additional content'),
),
]
</code></pre>
|
<python><django><migration><wagtail>
|
2024-05-15 11:55:05
| 1
| 3,251
|
KindOfGuy
|
78,483,689
| 6,468,053
|
pyinstaller and kaleido not working together
|
<p>I'm building an exe using pyinstaller. A line of code involves writing a static image in the usual way like this:</p>
<pre><code>fig.write_image(file=thumb_file, format='jpeg', scale=0.4)
</code></pre>
<p>I've installed <code>conda install -c conda-forge python-kaleido=0.1.0</code>
as per <a href="https://stackoverflow.com/questions/76305333/plotly-write-image-runs-forever-and-doesnt-produce-any-static-image/77679636#77679636">plotly write_image() runs forever and doesn't produce any static image</a></p>
<p>However, when building the exe I get the error: "The kaleido executable is required by the kaleido Python library, but it was not included in the Python package and it could not be found on the system PATH. Searched for included kaleido executable at: etc"</p>
<p>ChatGPT was its usual unhelpful self and suggested the spec file as below:</p>
<pre><code># -*- mode: python ; coding: utf-8 -*-
import os
from PyInstaller.utils.hooks import collect_submodules, collect_data_files, copy_metadata

block_cipher = None

# Path to python-kaleido directory within your environment
kaleido_dir = r"C:\Users\....\anaconda3\envs\***\Lib\site-packages\kaleido"

# Include all files from kaleido_dir and its subdirectories
kaleido_files = [(kaleido_dir, 'kaleido')]

a = Analysis(['***.py'],
             pathex=[],
             binaries=[],
             datas=kaleido_files + [some stuff here]
                   + collect_data_files('scipy', include_py_files=True),
             hiddenimports=['pkg_resources.py2_warn', 'openpyxl.cell._writer']
                           + collect_submodules('scipy'),
             hookspath=[],
             runtime_hooks=[],
             excludes=['scipy._lib.array_api_compat.torch'],
             debug=True,
             win_no_prefer_redirects=False,
             win_private_assemblies=False,
             cipher=block_cipher,
             noarchive=False)

pyz = PYZ(a.pure, a.zipped_data,
          cipher=block_cipher)

exe = EXE(pyz,
          a.scripts,
          a.binaries,
          a.zipfiles,
          a.datas,
          name='***',
          debug=True,
          bootloader_ignore_signals=False,
          strip=False,
          upx=True,
          upx_exclude=['*kaleido*'],
          runtime_tmpdir=None,
          onefile=True,
          console=True)
</code></pre>
<p>I also included the following before importing plotly</p>
<pre><code>if getattr(sys, 'frozen', False):
    # the application is bundled
    kaleido_bundled_dir = os.path.join(sys._MEIPASS, 'kaleido')
    sys.path.append(kaleido_bundled_dir)
</code></pre>
<p>How can I get this to run correctly?</p>
|
<python><pyinstaller><kaleido>
|
2024-05-15 11:43:18
| 1
| 1,528
|
A Rob4
|
78,483,553
| 619,774
|
Why does Pygame Mixer not work when using sudo?
|
<p>I have a super simple python script:</p>
<pre><code>import pygame
pygame.mixer.init()
</code></pre>
<p>When executing this on a RaspberryPi, it works like a charm:</p>
<pre><code>python mytest.py -> OK
</code></pre>
<p>But when running as sudo, I get a weird error:</p>
<pre><code>sudo python mytest.py -> Not OK
</code></pre>
<blockquote>
<p>pygame.error: ALSA: Couldn't open audio device: No such file or
directory</p>
</blockquote>
|
<python><linux><pygame><raspberry-pi><sudo>
|
2024-05-15 11:19:42
| 0
| 9,041
|
Boris
|
78,483,476
| 6,808,376
|
How and where to download headless chrome (offline) latest for RHEL7.9
|
<p>I am trying web scraping using the pyppeteer module. I am using the executablePath for headless Chrome. A few sites fail to open the page due to the older version of Chrome.</p>
<p>My team used <strong>HeadlessChrome/119.0.6045.105</strong>, so I am trying to use the latest version, but I have no idea what the latest version of headless Chrome is. Can someone shed some light on how to download headless Chrome (offline)?</p>
|
<python><google-chrome><puppeteer><google-chrome-headless><rhel7>
|
2024-05-15 11:03:53
| 0
| 601
|
thulasi39
|
78,483,296
| 3,572,950
|
strange behavior of aiohttp with pytest
|
<p>So, I want to return an <code>HTTPNoContent</code> response in a specific controller (I'm using <code>aiohttp</code>), and then I want to test it with <code>pytest</code>:</p>
<pre><code>from aiohttp import web
from aiohttp.web import HTTPNoContent

async def hello(request):
    raise HTTPNoContent(text="some test!")
    return web.Response(text='Hello, world')

async def test_hello(aiohttp_client):
    app = web.Application()
    app.router.add_get('/', hello)
    client = await aiohttp_client(app)
    resp = await client.get('/')
    assert resp.status == 204
    text = await resp.text()
    assert 'some test!' in text
</code></pre>
<p>But when I'm running it like <code>pytest lol_test.py</code>, I'm getting an error in this test:</p>
<pre><code>AssertionError: assert 'some test!' in ''
</code></pre>
<p>So the <code>text</code> of my response is empty; why? I can't figure it out. Thanks. I'm using <code>aiohttp</code> 3.8.5, <code>pytest==7.4.2</code>, and <code>pytest-aiohttp==0.3.0</code>.</p>
|
<python><python-3.x><pytest><aiohttp>
|
2024-05-15 10:27:43
| 1
| 1,438
|
Alexey
|
78,483,238
| 4,784,914
|
How to return a default object instead of `None` for an optional relationship in SQLAlchemy?
|
<p>I have a <code>users</code> table and a <code>roles</code> table, such that a <code>User</code> optionally has a single <code>Role</code>:</p>
<pre><code>class User(Base):
    id: Mapped[int] = mapped_column(primary_key=True)
    role: Mapped[Optional["Role"]] = relationship(back_populates="user")
    # ...

class Role(Base):
    id: Mapped[int] = mapped_column(primary_key=True)
    user_id: Mapped[int] = mapped_column(ForeignKey("user.id"))
    user: Mapped[Optional["User"]] = relationship(back_populates="role")
    # ...
</code></pre>
<p>Now in code I would like that <code>User.role</code> <em>always</em> return a <code>Role</code>. If a specific <code>User</code> doesn't have a <code>Role</code>, it should just return a new default one: <code>Role()</code>.</p>
<p>How can I replace <code>User.role</code> with some kind of getter function that does this?</p>
<hr />
<p>A somewhat obvious solution would be:</p>
<pre><code>class User(Base):
    # ...

    @property
    def role_or_new(self):
        return self.role or Role()
</code></pre>
<p>However, I kind of want to override the default attribute name, <code>role</code>.</p>
<hr />
<p>There are some similarly named existing questions, but I think they're all subtly different:</p>
<ul>
<li><a href="https://stackoverflow.com/questions/54404689/flask-sqlalchemy-set-relationship-default-value">Flask-SQLAlchemy set relationship default value</a>: solved through events</li>
<li><a href="https://stackoverflow.com/questions/38258389/how-to-set-a-default-value-for-a-relationship-in-sqlalchemy">How to set a default value for a relationship in SQLAlchemy?</a>: Adds a new method, but I want to modify how the existing attribute <code>role</code> behaves</li>
<li><a href="https://stackoverflow.com/questions/20477950/how-to-set-a-default-value-from-a-sqlalchemy-relationship">How to set a default value from a SQLAlchemy relationship?</a>: Asks how to get default values <em>based of</em> an existing relationship</li>
</ul>
|
<python><sqlalchemy><orm>
|
2024-05-15 10:18:16
| 1
| 1,123
|
Roberto
|
78,483,223
| 11,803,687
|
flask-caching config ignores the redis password
|
<p>Hey, I do not understand why flask-caching seems to be ignoring the Redis password.</p>
<p>I have a Redis setup with a password set to 1234, but when I configure flask-caching with a config using a CACHE_REDIS_PASSWORD also set to 1234, I am still getting the "Authentication required" error.</p>
<p>I put this in my redis-cli:</p>
<pre><code>CONFIG SET requirepass 1234
</code></pre>
<p>When I unset my password it works just fine (actually it shouldn't: when providing a password for a Redis instance that doesn't have a password, it should show a warning, or fail):</p>
<pre><code>CONFIG SET requirepass ""
</code></pre>
<p>I am using flask_caching version 2.3.0 and redis 4.5.4 with flask 2.2.5</p>
<p>A minimal setup would be:</p>
<pre><code>from flask import Flask
from flask_caching import Cache

app = Flask(__name__)
config = {
    'CACHE_TYPE': 'RedisCache', 'CACHE_KEY_PREFIX': 'resources_cache ',
    'CACHE_REDIS_URL': f'redis://localhost:6379/0', 'CACHE_DEFAULT_TIMEOUT': 5,
    'CACHE_REDIS_PASSWORD': 1234
}
cache = Cache(app=app, config=config)
cache.clear()
</code></pre>
<p>The authentication error happens at <code>cache.clear()</code>:</p>
<pre><code>redis.exceptions.AuthenticationError: Authentication required.
</code></pre>
<p>Are there any obvious problems with this, or a possible version conflict?</p>
<p>Needless to say, I would not actually use the password 1234; this is just a test to see whether any password works.</p>
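<p>For reference, a hedged guess worth testing: when <code>CACHE_REDIS_URL</code> is set, the backend may build its Redis client from the URL alone and ignore a separately supplied password, so embedding the password into the URL itself could behave differently:</p>

```python
# Hypothetical config sketch: password embedded in the Redis URL
# (format: redis://:<password>@<host>:<port>/<db>).
config = {
    'CACHE_TYPE': 'RedisCache',
    'CACHE_KEY_PREFIX': 'resources_cache',
    'CACHE_REDIS_URL': 'redis://:1234@localhost:6379/0',
    'CACHE_DEFAULT_TIMEOUT': 5,
}
```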
|
<python><flask><redis>
|
2024-05-15 10:15:07
| 2
| 1,649
|
c8999c 3f964f64
|
78,483,141
| 17,176,829
|
printing decision tree as an image in python
|
<p>I have written a decision tree class in Python which uses the Node class as tree nodes, as shown below:</p>
<pre><code>class Node:
    '''
    Helper class which implements a single tree node.
    '''
    def __init__(self, feature=None, threshold=None, data_left=None, data_right=None, gain=None, value=None):
        self.feature = feature
        self.threshold = threshold
        self.data_left = data_left
        self.data_right = data_right
        self.gain = gain
        self.value = value
</code></pre>
<p>Now, I want to write a print method for this tree to print the gain, feature name, and threshold for each node of this tree, and print the value for leaf nodes, which represents the final label of the input sample.</p>
<p>Which libraries can I use for printing this output trained tree in an image format? How do I use them?</p>
<p>If I don’t want to print this tree in image format, which algorithm can I use to print it in a readable way with the information that I mentioned earlier?</p>
<p>I have written my print method as below, but it is not readable at all:</p>
<pre><code>def print_tree(self, node, depth=0):
    if node is None:
        return
    prefix = " " * depth
    # If the node is a leaf node, print its value
    if node.value is not None:
        print(f"{prefix}Value: {node.value}")
    else:
        # Print the feature and threshold for the split at this node
        print(f"{prefix}Feature: {node.feature}, Threshold: {node.threshold}")
        # Recursively print the left and right subtrees
        print(f"{prefix}--> Left:")
        self.print_tree(node.data_left, depth + 1)
        print(f"{prefix}--> Right:")
        self.print_tree(node.data_right, depth + 1)
</code></pre>
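<p>For image output, the <code>graphviz</code> Python package (building a <code>Digraph</code> node-by-node) is a common choice; note that sklearn's <code>plot_tree</code> only works for fitted sklearn estimators, not a custom <code>Node</code> class. For a more readable plain-text form, one option is the <code>tree</code>-command layout with box-drawing characters. A sketch against the question's <code>Node</code> attributes (assuming <code>data_left</code>/<code>data_right</code> hold the child nodes):</p>

```python
class Node:
    # Minimal copy of the question's helper class, so the example is self-contained
    def __init__(self, feature=None, threshold=None, data_left=None,
                 data_right=None, gain=None, value=None):
        self.feature = feature
        self.threshold = threshold
        self.data_left = data_left
        self.data_right = data_right
        self.gain = gain
        self.value = value

def render_tree(node, prefix="", is_last=True):
    """Return the tree as a list of lines in `tree`-command style."""
    connector = "└── " if is_last else "├── "
    if node.value is not None:                      # leaf: show the label
        return [prefix + connector + f"leaf: {node.value}"]
    lines = [prefix + connector
             + f"{node.feature} <= {node.threshold} (gain={node.gain})"]
    child_prefix = prefix + ("    " if is_last else "│   ")
    lines += render_tree(node.data_left, child_prefix, is_last=False)
    lines += render_tree(node.data_right, child_prefix, is_last=True)
    return lines
```

<p>Then <code>print("\n".join(render_tree(root)))</code> gives an indented view where sibling branches line up vertically, which is much easier to scan than plain depth-based spaces.</p>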
|
<python><classification><visualization><decision-tree>
|
2024-05-15 10:04:08
| 1
| 433
|
Narges Ghanbari
|
78,482,883
| 10,595,871
|
Strange behaviour on Random Forest Classifier
|
<p>I've built two identical random forest classifiers, like so:</p>
<pre><code>rf_PB = RandomForestClassifier(n_estimators=800,
min_samples_split = 10,
min_samples_leaf = 4,
max_features = 'sqrt',
max_depth = 50,
bootstrap = True, random_state=59)
</code></pre>
<p>I trained them on two identical datasets but with two different target variables (whether or not a specific product was sold).</p>
<p>To me, they are both behaving well, the first one with 78% accuracy and the second one with 84%.</p>
<p>Both are trained with like 13.5k obs, 7k class 1, 6.5k class 0.</p>
<p>The problem: for the first model, the predicted probabilities of class 1 are spread out quite well (I'm using <code>predict_proba</code> to create 3 different classes at the end), while the second one tends to predict pretty much everything as 1, even though the training set is well balanced and the accuracy is 84%. When I use the fitted model to predict new clients, almost all of them are predicted as 1, with a probability > 0.8.</p>
<p>I don't understand why my algorithm behaves like this. The business was very happy with the results of the first model, which let us focus on roughly 1,000 clients predicted as 1 for the first product; now roughly 120,000 clients are predicted as 1 for the second product, and they assume it's my fault for not training the algorithm well.</p>
<p>What puzzles me most: the feature most correlated with the target has a correlation of 0.5, yet in the 120,000 new observations almost all of them had that column = 0, so I don't see why they are predicted as 1 with probability 0.99.</p>
<p>The (simple) code:</p>
<pre><code>X = df.drop(columns=['ValoreTrainingAI'])
y = df['ValoreTrainingAI']
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=4)
rf_PB = RandomForestClassifier(n_estimators=800,
min_samples_split = 10,
min_samples_leaf = 4,
max_features = 'sqrt',
max_depth = 50,
bootstrap = True, random_state=59)
rf_PB.fit(X_train, y_train)
y_pred_proba = rf_PB.predict_proba(X_test)
</code></pre>
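<p>A first sanity check is to look at the full distribution of <code>predict_proba</code> on the new clients, not just the hard predictions. A hedged sketch (the bucketing thresholds below are placeholders, not the questioner's actual cut-offs, and it assumes class 1 is the second column of <code>predict_proba</code>):</p>

```python
import numpy as np

def proba_summary(proba, thresholds=(0.33, 0.66)):
    """Count predictions per probability bucket (placeholder cut-offs)."""
    proba = np.asarray(proba)
    low, high = thresholds
    return {
        "low":  int((proba < low).sum()),
        "mid":  int(((proba >= low) & (proba < high)).sum()),
        "high": int((proba >= high).sum()),
    }

# e.g. proba_summary(rf_PB.predict_proba(X_new)[:, 1]) on the new clients;
# a pile-up in "high" that the held-out test set doesn't show usually points
# to a feature-distribution shift between training and scoring data rather
# than a training bug.
```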
|
<python><random-forest>
|
2024-05-15 09:19:30
| 0
| 691
|
Federicofkt
|
78,482,683
| 5,746,996
|
How to add type inference to Python Flask and Marshmallow
|
<p>I'm new to Python from a Typescript background. Putting together a Flask API server and using Marshmallow to validate input DTOs.</p>
<p>How can I get VSCode to infer types from the Marshmallow Schema load?</p>
<pre><code>from flask import Flask, request, jsonify
from marshmallow import Schema, fields, ValidationError

app = Flask(__name__)

class PostInput(Schema):  # needs to subclass marshmallow.Schema for .load() to exist
    myString = fields.Str(
        required=True
    )

@app.route('/foo', methods=['POST'])
def post_foo():
    try:
        data = PostInput().load(request.json)
    except ValidationError as err:
        return jsonify(err.messages), 400
    myString = data.get("myString")
    myString  # <== I want this to be known as a str
</code></pre>
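<p>One common pattern is to pair the schema with a <code>TypedDict</code> and <code>cast</code> the result of <code>load()</code>. The cast is purely static — it changes nothing at runtime, and Marshmallow's <code>load()</code> still performs the validation. A sketch (with the Marshmallow call replaced by a plain <code>raw</code> dict so the snippet is self-contained; the helper name is hypothetical):</p>

```python
from typing import TypedDict, cast

class PostInputData(TypedDict):
    myString: str

def load_post_input(raw: dict) -> PostInputData:
    # In the real route, `raw` would be PostInput().load(request.json).
    # cast() only tells the type checker the shape of the validated dict.
    return cast(PostInputData, raw)

data = load_post_input({"myString": "hello"})
value = data["myString"]  # inferred as `str` by Pyright/mypy
```

<p>With this, VSCode/Pyright infers <code>str</code> for <code>data["myString"]</code>. The trade-off is that the <code>TypedDict</code> must be kept in sync with the schema by hand; tools that generate schemas from dataclasses exist, but that is a bigger design change.</p>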
|
<python><python-typing><marshmallow><pyright>
|
2024-05-15 08:45:55
| 1
| 2,057
|
Tobin
|
78,482,407
| 7,800,726
|
How not to perform rounding from float to torch.float64
|
<p>I want to convert a list of floats into a tensor. How can I do it without rounding being performed?</p>
<pre><code>tensor = torch.tensor(res, dtype=torch.float64)
# res:[-0.5479744136460554, -0.5555555555555556, -1.0]
# currently tensor = [tensor([-0.5480, -0.5556, -1.0000], dtype=torch.float64)
# goal tensor = [tensor([-0.5479744136460554, -0.5555555555555556, -1.0], dtype=torch.float64)
</code></pre>
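<p>No rounding is actually happening here — a Python <code>float</code> is already a 64-bit double, so <code>torch.float64</code> stores the values exactly; only the default tensor repr truncates to four decimal places. A sketch:</p>

```python
import torch

res = [-0.5479744136460554, -0.5555555555555556, -1.0]
tensor = torch.tensor(res, dtype=torch.float64)

# Only the display is truncated; raise the print precision to see the stored digits
torch.set_printoptions(precision=16)
print(tensor)

# The underlying values are bit-for-bit the original Python floats
assert tensor[0].item() == res[0]
```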
|
<python><tensor><torch>
|
2024-05-15 07:51:36
| 1
| 558
|
Ian Gallegos
|
78,482,316
| 12,427,876
|
Decrypt & Re-Encrypt Chrome cookies
|
<p>I'm trying to decrypt Chrome's cookie SQLite DB, move the decrypted cookies to another computer (browser), re-encrypt the DB there, and thereby replicate the sessions.</p>
<p>Here is what I plan:</p>
<ol>
<li>Decrypt AES key from <code>Local State</code> in <code>C:\Users\[username]\AppData\Local\Google\Chrome\User Data\Local State</code> using DPAPI</li>
<li>Use decrypted key to decrypt Cookie DB in <code>C:\Users\[username]\AppData\Local\Google\Chrome\User Data\Default\Network\Cookies</code></li>
<li>Copy the decrypted Cookie DB to another computer</li>
<li>Generate random AES key/nonce and encrypt the plaintext Cookie DB transferred on the other computer. Substitute original Cookies DB on the other computer.</li>
<li>Encrypt AES key using DPAPI and substitute associated entry in <code>Local State</code> on the other computer.</li>
</ol>
<p>And I have the following 2 Python files to do things described above:</p>
<p><code>encrypt.py</code>:</p>
<pre><code>from win32.win32crypt import CryptProtectData
import base64
import sqlite3
import os
from Cryptodome.Cipher.AES import new, MODE_GCM # pip install pycryptodomex
import decrypt
import json
def encrypt_dpapi_blob(decrypted_blob):
encrypted_blob = CryptProtectData(decrypted_blob, DataDescr="Google Chrome", OptionalEntropy=None, Reserved=None, PromptStruct=None, Flags=0)
encrypted_blob = b'DPAPI' + encrypted_blob
encrypted_blob_base64 = base64.b64encode(encrypted_blob)
return encrypted_blob_base64
def encrypt_cookies(cookies_db, key):
sqlite3.enable_callback_tracebacks(True)
conn = sqlite3.connect(cookies_db)
query = "SELECT name, encrypted_value FROM cookies"
cursor = conn.execute(query)
query_res = cursor.fetchall()
for row in query_res:
cookie_name, decrypted_value = row
# print(f"Encrypting cookie: {cookie_name}")
if decrypted_value is None or len(decrypted_value) == 0:
# print("No decrypted value found.")
continue
aes_cipher = new(key=key, mode=MODE_GCM, nonce=decrypted_value[3:15])
encrypted_value = aes_cipher.encrypt(decrypted_value[15: -16])
# print(f"Encrypted cookie:\n {decrypt.bytes_to_hex(encrypted_value)}\n {encrypted_value}")
verification_tag = decrypted_value[-16:]
# print(f"Verification tag:\n {decrypt.bytes_to_hex(verification_tag)}\n {verification_tag}")
nonce = decrypted_value[3:15]
# print(f"Nonce:\n {decrypt.bytes_to_hex(nonce)}\n {nonce}")
encrypted_cookie = b'\x76\x31\x30' +\
nonce +\
encrypted_value +\
verification_tag
query = f"UPDATE cookies SET encrypted_value = ? WHERE name = \"{cookie_name}\""
params = [encrypted_cookie]
cursor.execute(query, params)
# print("")
conn.commit()
conn.close()
if __name__ == "__main__":
cookies_db = os.path.join(os.getcwd(), "Cookies")
# print(f"Decrypted key:\n {decrypt.bytes_to_hex(key)}\n {key}")
key = os.urandom(32)
encrypt_cookies(cookies_db, key)
encrypted_key = encrypt_dpapi_blob(key)
print(f"Encrypted key:\n {str(encrypted_key, 'utf-8')}")
local_state = json.load(open('Local State'))
local_state['os_crypt']['encrypted_key'] = encrypted_key.decode()
json.dump(local_state, open('Local State', 'w'))
</code></pre>
<p><code>decrypt.py</code>:</p>
<pre><code>from win32.win32crypt import CryptUnprotectData
import base64
import sqlite3
import os
from Cryptodome.Cipher.AES import new, MODE_GCM # pip install pycryptodomex
import sys
import json
def decrypt_dpapi_blob(encrypted_blob):
encrypted_blob = base64.b64decode(encrypted_blob)[5:] # Leading bytes "DPAPI" need to be removed
decrypt_res = CryptUnprotectData(encrypted_blob, None, None, None, 0)
return decrypt_res
def decrypt_cookies(cookies_db, key):
sqlite3.enable_callback_tracebacks(True)
conn = sqlite3.connect(cookies_db)
query = "SELECT name, encrypted_value FROM cookies"
cursor = conn.execute(query)
query_res = cursor.fetchall()
for row in query_res:
cookie_name, encrypted_value = row
# print(f"Decrypting cookie: {cookie_name}")
if encrypted_value is None or len(encrypted_value) == 0:
# print("No encrypted value found.")
continue
aes_cipher = new(key=key, mode=MODE_GCM, nonce=encrypted_value[3:15])
decrypted_value = aes_cipher.decrypt(encrypted_value[15: -16])
# print(f"Decrypted cookie:\n {bytes_to_hex(decrypted_value)}\n {decrypted_value}")
if cookie_name == "BITBUCKETSESSIONID":
print(f"Decrypted cookie (bitbucket): {decrypted_value.decode()}")
verification_tag = encrypted_value[-16:]
# print(f"Verification tag:\n {bytes_to_hex(verification_tag)}\n {verification_tag}")
nonce = encrypted_value[3:15]
# print(f"Nonce:\n {bytes_to_hex(nonce)}\n {nonce}")
decrypted_cookie = b'\x76\x31\x30' +\
nonce +\
decrypted_value +\
verification_tag
query = f"UPDATE cookies SET encrypted_value = ? WHERE name = \"{cookie_name}\""
params = [decrypted_cookie]
cursor.execute(query, params)
# print("")
conn.commit()
conn.close()
def bytes_to_hex(byte_data):
    # NB: a backslash inside an f-string expression is a SyntaxError before
    # Python 3.12, so the escape sequence is built with str.format instead
    hex_str = ''.join('\\x{:02x}'.format(byte) for byte in byte_data)
    return "b'" + hex_str + "'"
if __name__ == "__main__":
encrypted_key_base64 = json.load(open('Local State'))['os_crypt']['encrypted_key']
# print(f"Encrypted key:\n {encrypted_key_base64}")
try:
decrypted_key = decrypt_dpapi_blob(encrypted_key_base64)[1]
print(f"Decrypted key:\n {bytes_to_hex(decrypted_key)}")
except Exception as e:
print("Decryption failed:", str(e))
sys.exit(1)
# get current working directory path
cookies_db = os.path.join(os.getcwd(), "Cookies")
decrypt_cookies(cookies_db, decrypted_key)
# print(f"Decrypted key:\n {bytes_to_hex(decrypted_key)}")
</code></pre>
<p><strong>With these functions, I can get the plaintext cookie and verified that, if I manually copy the cookie text in Chrome, I can get the target session.</strong></p>
<pre><code>if cookie_name == "BITBUCKETSESSIONID":
print(f"Decrypted cookie (bitbucket): {decrypted_value.decode()}")
</code></pre>
<p>Decrypting & Encrypting back and forth work without problem as well.</p>
<p>However, if I substitute the modified <code>Cookies</code> file and <code>Local State</code> file, Chrome will not read the migrated cookie.</p>
<p>May I know what is wrong here?</p>
<hr />
<p>As suggested by Topaco in the comments, I modified my functions in the following ways:</p>
<ol>
<li>Using existing local AES key on the other computer</li>
<li>Generate new random nonce (<code>nonce = os.urandom(12)</code>)</li>
<li>Change <code>encrypt</code> to <code>encrypt_and_digest</code>, and <code>decrypt</code> to <code>decrypt_and_verify</code></li>
<li>Store new verification tag returned by <code>encrypt_and_digest</code> & nonce in <code>encrypted_cookie</code></li>
</ol>
<p>... and here are the new functions:
<code>encrypt.py</code>:</p>
<pre><code>from win32.win32crypt import CryptProtectData
import base64
import sqlite3
import os
from Cryptodome.Cipher.AES import new, MODE_GCM # pip install pycryptodomex
import decrypt
import json
from os.path import expandvars
def encrypt_dpapi_blob(decrypted_blob):
encrypted_blob = CryptProtectData(decrypted_blob, DataDescr="Google Chrome", OptionalEntropy=None, Reserved=None, PromptStruct=None, Flags=0)
encrypted_blob = b'DPAPI' + encrypted_blob
encrypted_blob_base64 = base64.b64encode(encrypted_blob)
return encrypted_blob_base64
def encrypt_cookies(cookies_db, key):
sqlite3.enable_callback_tracebacks(True)
conn = sqlite3.connect(cookies_db)
query = "SELECT name, encrypted_value FROM cookies"
cursor = conn.execute(query)
query_res = cursor.fetchall()
for row in query_res:
cookie_name, decrypted_value = row
# print(f"Encrypting cookie: {cookie_name}")
if decrypted_value is None or len(decrypted_value) == 0:
# print("No decrypted value found.")
continue
nonce = os.urandom(12)
aes_cipher = new(key=key, mode=MODE_GCM, nonce=nonce)
# encrypted_value = aes_cipher.encrypt(decrypted_value[15: -16]) # wrong
encrypted_value, verification_tag = aes_cipher.encrypt_and_digest(decrypted_value[15: -16])
# print(f"Encrypted cookie:\n {decrypt.bytes_to_hex(encrypted_value)}\n {encrypted_value}")
# verification_tag = decrypted_value[-16:] # wrong
# print(f"Verification tag:\n {decrypt.bytes_to_hex(verification_tag)}\n {verification_tag}")
# nonce = decrypted_value[3:15] # wrong
# print(f"Nonce:\n {decrypt.bytes_to_hex(nonce)}\n {nonce}")
encrypted_cookie = b'\x76\x31\x30' +\
nonce +\
encrypted_value +\
verification_tag
query = f"UPDATE cookies SET encrypted_value = ? WHERE name = \"{cookie_name}\""
params = [encrypted_cookie]
cursor.execute(query, params)
# print("")
conn.commit()
conn.close()
def get_local_state_key():
local_state = json.load(open(expandvars('%LOCALAPPDATA%/Google/Chrome/User Data/Local State')))
encrypted_key = local_state['os_crypt']['encrypted_key']
decrypted_key = decrypt.decrypt_dpapi_blob(encrypted_key)[1]
return decrypted_key
# Example usage
if __name__ == "__main__":
cookies_db = os.path.join(os.getcwd(), "Cookies")
# print(f"Decrypted key:\n {decrypt.bytes_to_hex(key)}\n {key}")
# key = os.urandom(32)
# Using existing key
key = get_local_state_key()
encrypt_cookies(cookies_db, key)
# encrypted_key = encrypt_dpapi_blob(key)
# print(f"Encrypted key:\n {str(encrypted_key, 'utf-8')}")
# wrong
# local_state = json.load(open('Local State'))
# local_state['os_crypt']['encrypted_key'] = encrypted_key.decode()
# json.dump(local_state, open('Local State', 'w'))
</code></pre>
<p><code>decrypt.py</code>:</p>
<pre><code>from win32.win32crypt import CryptUnprotectData
import base64
import sqlite3
import os
from Cryptodome.Cipher.AES import new, MODE_GCM # pip install pycryptodomex
import sys
import json
import encrypt
def decrypt_dpapi_blob(encrypted_blob):
encrypted_blob = base64.b64decode(encrypted_blob)[5:] # Leading bytes "DPAPI" need to be removed
decrypt_res = CryptUnprotectData(encrypted_blob, None, None, None, 0)
return decrypt_res
def decrypt_cookies(cookies_db, key):
sqlite3.enable_callback_tracebacks(True)
conn = sqlite3.connect(cookies_db)
query = "SELECT name, encrypted_value FROM cookies"
cursor = conn.execute(query)
query_res = cursor.fetchall()
for row in query_res:
cookie_name, encrypted_value = row
# print(f"Decrypting cookie: {cookie_name}")
if encrypted_value is None or len(encrypted_value) == 0:
# print("No encrypted value found.")
continue
aes_cipher = new(key=key, mode=MODE_GCM, nonce=encrypted_value[3:15])
# decrypted_value = aes_cipher.decrypt(encrypted_value[15: -16]) # wrong
decrypted_value = aes_cipher.decrypt_and_verify(encrypted_value[15: -16], encrypted_value[-16:])
# print(f"Decrypted cookie:\n {bytes_to_hex(decrypted_value)}\n {decrypted_value}")
if cookie_name == "BITBUCKETSESSIONID":
print(f"Decrypted cookie (bitbucket): {decrypted_value.decode()}")
verification_tag = encrypted_value[-16:]
# print(f"Verification tag:\n {bytes_to_hex(verification_tag)}\n {verification_tag}")
nonce = encrypted_value[3:15]
# print(f"Nonce:\n {bytes_to_hex(nonce)}\n {nonce}")
decrypted_cookie = b'\x76\x31\x30' +\
nonce +\
decrypted_value +\
verification_tag
query = f"UPDATE cookies SET encrypted_value = ? WHERE name = \"{cookie_name}\""
params = [decrypted_cookie]
cursor.execute(query, params)
# print("")
conn.commit()
conn.close()
# Custom function to display all bytes in the \x[something] format
def bytes_to_hex(byte_data):
    # NB: a backslash inside an f-string expression is a SyntaxError before
    # Python 3.12, so the escape sequence is built with str.format instead
    hex_str = ''.join('\\x{:02x}'.format(byte) for byte in byte_data)
    return "b'" + hex_str + "'"
# Example usage
if __name__ == "__main__":
# encrypted_key_base64 = json.load(open('Local State'))['os_crypt']['encrypted_key']
# print(f"Encrypted key:\n {encrypted_key_base64}")
try:
decrypted_key = encrypt.get_local_state_key()
print(f"Decrypted key:\n {bytes_to_hex(decrypted_key)}")
except Exception as e:
print("Decryption failed:", str(e))
sys.exit(1)
# get current working directory path
cookies_db = os.path.join(os.getcwd(), "Cookies")
decrypt_cookies(cookies_db, decrypted_key)
# print(f"Decrypted key:\n {bytes_to_hex(decrypted_key)}")
</code></pre>
|
<python><google-chrome><encryption><cookies><dpapi>
|
2024-05-15 07:34:03
| 1
| 411
|
TaihouKai
|
78,482,279
| 19,146,511
|
past key values from hidden states
|
<p>I'm trying to extract the past key/value pairs for a particular layer from its attention-layer weights and hidden state:</p>
<pre><code>import torch
import torch.nn.functional as F
from transformers import LlamaConfig
from transformers import LlamaModel, LlamaTokenizer, LlamaForCausalLM
tokenizer = LlamaTokenizer.from_pretrained(path_to_llama2)
# Load the configuration and enable required outputs
config = LlamaConfig.from_pretrained(path_to_llama2)
config.output_hidden_states = True
config.output_attentions = True # To get self_attn_weights and biases if needed
config.use_cache = True # To get past_key_values
model = LlamaForCausalLM.from_pretrained(path_to_llama2, config=config)
model.eval()
input_text = "Once upon a time"
inputs = tokenizer(input_text, return_tensors='pt')
outputs = model(**inputs)
hidden_states = outputs.hidden_states # List of hidden states from each layer
state_dict = model.state_dict()
# Function to compute past_key_values for a single layer
def compute_past_key_values_for_layer(layer_idx, hidden_state):
attention_layers = [layer.self_attn for layer in model.model.layers]
W_q = state_dict[f'model.layers.{layer_idx}.self_attn.q_proj.weight']
W_k = state_dict[f'model.layers.{layer_idx}.self_attn.k_proj.weight']
W_v = state_dict[f'model.layers.{layer_idx}.self_attn.v_proj.weight']
queries = torch.matmul(hidden_state, W_q.T)
keys = torch.matmul(hidden_state, W_k.T)
values = torch.matmul(hidden_state, W_v.T)
batch_size, seq_length, hidden_dim = hidden_state.size()
num_attention_heads = attention_layers[layer_idx].num_heads
head_dim = hidden_dim // num_attention_heads
keys = keys.view(batch_size, seq_length, num_attention_heads, head_dim)
keys = keys.permute(0, 2, 1, 3)
values = values.view(batch_size, seq_length, num_attention_heads, head_dim)
values = values.permute(0, 2, 1, 3)
return keys, values
past_key_values = []
for i, hidden_state in enumerate(hidden_states[1:]): # Skip the embedding layer
keys, values = compute_past_key_values_for_layer(i, hidden_state)
past_key_values.append((keys, values))
past_key_values = tuple(past_key_values)
</code></pre>
<p>But these <code>past_key_values</code> don't match the values I get from <code>outputs.past_key_values</code> for the same layer.
Why is this happening? Are there any suggestions?</p>
|
<python><nlp><huggingface-transformers><large-language-model>
|
2024-05-15 07:26:02
| 0
| 307
|
lazytux
|
78,482,270
| 11,022,199
|
Saving and dropping a dataframe in a for loop
|
<p>I have a number of dataframes from which I have to take samples. The samples taken from one dataframe have to be excluded from the next dataframe, so that there are no 'double' samples where the dataframes overlap.</p>
<p>my code is as follows</p>
<pre><code>df_list = [df1, df2, df3, df4, df5]
samplesizes = [8, 2, 4, 4, 2]
sample = []
for df, samplesize in zip(df_list, samplesizes):
if sample: #can't drop in the first loop
df = df.drop(sample) #I want to drop the taken samples from the current df
if max_pop_size < len(df):
samplesize = max_pop_size #can't take a sample larger than population
sample.append(df.sample(samplesize, random_state=1000))
</code></pre>
<p>I get stuck on the dropping after the first loop. I've tried several things and none seem to work.</p>
<p>EDIT: the df's are a subset of 1 big population, so the columns are identical. the subsets (in <code>df_list</code>) are sliced based on earlier criteria which may contain duplicates. If a sample is taken, we don't want that specific row to be in the population of the next sample. To make things easier, they all have the same index!</p>
<p>Any help would be much appreciated!</p>
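<p>Two pitfalls in the posted loop: <code>sample</code> holds dataframes, so <code>df.drop(sample)</code> receives frames rather than index labels, and the <code>max_pop_size</code> size guard looks inverted (and the name is never defined). A sketch that tracks the already-sampled index labels instead (hypothetical helper name; it relies on the edit's statement that all subsets share the parent population's index):</p>

```python
import pandas as pd

def sample_without_overlap(df_list, samplesizes, random_state=1000):
    taken = []          # index labels already sampled from earlier subsets
    samples = []
    for df, samplesize in zip(df_list, samplesizes):
        pool = df[~df.index.isin(taken)]      # exclude rows sampled earlier
        n = min(samplesize, len(pool))        # can't sample more than the pool holds
        s = pool.sample(n, random_state=random_state)
        samples.append(s)
        taken.extend(s.index)
    return samples
```

<p>Dropping by index labels rather than by dataframe objects is what makes the exclusion work across overlapping subsets.</p>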
|
<python><pandas><dataframe>
|
2024-05-15 07:24:35
| 1
| 794
|
borisvanax
|
78,482,107
| 6,573,770
|
How to Loop through each element of a loop and filter out conditions in a python dataframe
|
<p>I have a list of subcategories and a dataframe. I want to filter out the dataframe on the basis of each subcategory of the list.</p>
<pre><code>lst = [7774, 29409, 36611, 77553]
import pandas as pd
data = {'aucctlg_id': [143424, 143424, 143424, 143388, 143388, 143430],
'catalogversion_id': [1, 1, 1, 1, 1, 1.2],
'Key': [1434241, 1434241, 1434241, 1433881, 1433881, 14343012],
'item_id': [4501118, 4501130, 4501129, 4501128, 4501127, 4501126],
'catlog_description': ['M&BP PIG IRON FA', 'M&BP PIG IRON FA', 'M&BP PIG IRON FA', 'PIG IRON MIXED OG','PIG IRON MIXED OG', 'P.S JAM & PIG IRON FINES'],
'catlog_start_date': ['17-05-2024 11:00:00', '17-05-2024 11:00:00', '17-05-2024 11:00:00', '17-05-2024 11:00:00','17-05-2024 11:00:00', '17-05-2024 11:00:00'],
'subcategoryid': [29409, 29409, 29409, 7774, 7774, 36611],
'quantity': [200, 200, 200, 180, 180, 100],
'auctionable': ['Y', 'Y', 'Y', 'Y' ,'Y' ,'Y']
}
df = pd.DataFrame(data)
print(df)
</code></pre>
<p>I have tried the following code, but it gives me a list of dataframes rather than the separately named dataframes I want:</p>
<pre><code>new=[]
for i in range(0, len(lst)):
mask1 = df['subcategoryid']==(lst[i])
df2 = df.loc[mask1]
new.append(df2)
</code></pre>
<p>Required Output files, with the filtered data:</p>
<pre><code>df_7774, df_29409, df_36611
</code></pre>
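<p>Rather than creating variables named <code>df_7774</code> etc. dynamically (which is usually best avoided), a dictionary keyed by subcategory id gives one dataframe per subcategory. A sketch (hypothetical helper name):</p>

```python
import pandas as pd

def split_by_subcategory(df, subcategories):
    # One dataframe per requested subcategory; ids absent from df yield empty frames
    return {sub: df[df['subcategoryid'] == sub] for sub in subcategories}
```

<p>Each piece can then be used or saved individually, e.g. <code>parts[7774]</code> or <code>parts[sub].to_csv(f"df_{sub}.csv")</code> in a loop over <code>lst</code>.</p>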
|
<python><pandas><loops>
|
2024-05-15 06:54:27
| 2
| 329
|
Ami
|
78,482,092
| 22,400,527
|
Accessing the files in a folder of google drive using google oauth
|
<p>I am trying to create a web application using FastAPI for backend and ReactJS for frontend. I need the users to give me a link of a folder in their google drive in the frontend. I then need to access the files in that folder from my backend.</p>
<p>I created a new project in console.cloud.google.com and created client id and secret for oauth2.0 with drive apis consent. I set up oauth authentication in fastapi. Now, I need to create an endpoint from where I can access the files inside the folder whose link is given by the user.</p>
<pre class="lang-py prettyprint-override"><code>
from typing import Optional
from fastapi import FastAPI
from fastapi.responses import JSONResponse
from fastapi.middleware.cors import CORSMiddleware
from starlette.requests import Request
from starlette.middleware.sessions import SessionMiddleware
import uvicorn, requests
from google.oauth2 import id_token
from google.oauth2.credentials import Credentials
from google.auth.transport import requests as google_requests
from googleapiclient.discovery import build
from google import auth
app = FastAPI()
origins = [
"http://localhost:5173",
]
app.add_middleware(SessionMiddleware, secret_key="maihoonjiyan")
app.add_middleware(
CORSMiddleware,
allow_origins=["*"],
allow_credentials=True,
allow_methods=["*"],
allow_headers=["*"],
)
SCOPES = [
"https://www.googleapis.com/auth/drive.file",
"https://www.googleapis.com/auth/docs",
"https://www.googleapis.com/auth/drive",
"https://www.googleapis.com/auth/drive.metadata.readonly",
]
@app.get("/auth")
def authentication(request: Request, token: str):
try:
user = id_token.verify_oauth2_token(
token,
google_requests.Request(),
"client_id",
)
content = {"message": "Logged In Successfully"}
response = JSONResponse(content=content)
response.set_cookie(
key="email",
value=user["email"],
max_age=600,
httponly=True,
samesite="lax",
secure=True,
)
response.set_cookie(
key="name",
value=user["name"],
max_age=600,
httponly=True,
samesite="lax",
secure=True,
)
response.set_cookie(
key="access_token",
value=token,
max_age=600,
httponly=True,
samesite="lax",
secure=True,
)
return response
except ValueError:
return "unauthorized"
@app.get("/drive")
def drive():
return "SUCCESS"
@app.get("/")
def check(request: Request):
return "hi"
if __name__ == "__main__":
uvicorn.run("main:app", host="localhost", port=8000, reload=True)
</code></pre>
<p>This is my setup. <code>/drive</code> is the endpoint I am trying to create. I have tried things like sending a request directly to the drive API, building a service, but these methods give me a 401 unauthorized response. I found plenty of solutions online to access my own google drive but not on accessing other's google drive. Is this possible?</p>
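<p>It is possible, but only with an OAuth <em>access</em> token obtained through the folder owner's consent flow — the token the <code>/auth</code> endpoint above verifies with <code>verify_oauth2_token</code> is an ID token, which the Drive API will reject with 401 (an assumption about the setup worth double-checking). Independent of the token question, the folder id has to be parsed out of the link the user pastes. A sketch (hypothetical helper; assumes the usual <code>.../folders/&lt;id&gt;</code> link format):</p>

```python
import re

def extract_folder_id(url: str):
    """Pull the folder id from a Drive link like .../drive/folders/<id>?usp=sharing."""
    m = re.search(r"/folders/([A-Za-z0-9_-]+)", url)
    return m.group(1) if m else None

# With valid user credentials, the backend could then list the children, e.g.:
#   service = build("drive", "v3", credentials=creds)
#   service.files().list(q=f"'{folder_id}' in parents and trashed = false").execute()
```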
|
<python><google-drive-api><google-oauth><fastapi>
|
2024-05-15 06:51:31
| 0
| 329
|
Ashutosh Chapagain
|
78,482,068
| 1,869,935
|
Docker build . hangs indefinitely without feedback on MacOS Big Sur
|
<p>I'm new to Docker, and following along a course. I went step by step creating some basic files to start a Django project on Docker, which are the following:</p>
<p><a href="https://i.sstatic.net/OJ8x8A18.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/OJ8x8A18.png" alt="enter image description here" /></a></p>
<p>My Dockerfile is this one:</p>
<pre><code>FROM python:3.9-alpine3.13
LABEL maintainer="user"
ENV PYTHONUNBUFFERED 1
COPY ./requirements.txt /tmp/requirements.txt
COPY ./app /app
WORKDIR /app
EXPOSE 8000
RUN python -m venv /py && \
/py/bin/pip install --upgrade pip && \
/py/bin/pip install -r /tmp/requirements.txt && \
rm -rf /tmp && \
adduser \
--disabled-password \
--no-create-home \
django-user
ENV PATH="/py/bin:$PATH"
USER django-user
</code></pre>
<p>Then, my .dockerignore file is this:</p>
<pre><code># Git
.git
.gitignore
# Docker
.docker
#Python
app/__pycache__/
app/*/__pycache__/
app/*/*/__pycache__/
app/*/*/*/__pycache__/
.env/
.venv/
venv/
</code></pre>
<p>Finally, the requirements reads:</p>
<pre><code>Django>=3.2.4,<3.3
djangorestframework>=3.12.4,<3.13
</code></pre>
<p>After that I'm supposed to run <code>docker build .</code>, but when I do it hangs and does nothing.</p>
<p><a href="https://i.sstatic.net/iVAQhlGj.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/iVAQhlGj.png" alt="enter image description here" /></a></p>
<p>I had to shut it down a couple of times. Any idea what I am doing wrong, or at least some way to get feedback from the build?</p>
|
<python><django><docker><build>
|
2024-05-15 06:45:22
| 0
| 757
|
user1869935
|
78,482,038
| 1,028,133
|
Shift + Tab tooltips not showing in JupyterLab 4.2.0 / Python 3.12.3 (pyenv) on Ubuntu 24
|
<p>SHIFT + TAB tooltips are not showing for me in my fresh Jupyterlab install.</p>
<p>Platform: Python 3.12.3 (via pyenv) on Ubuntu 24.04.</p>
<p>Here's the output of <code>$ jupyter --version</code>:</p>
<pre><code>Selected Jupyter core packages...
IPython : 8.24.0
ipykernel : 6.29.4
ipywidgets : not installed
jupyter_client : 8.6.1
jupyter_core : 5.7.2
jupyter_server : 2.14.0
jupyterlab : 4.2.0
nbclient : 0.10.0
nbconvert : 7.16.4
nbformat : 5.10.4
notebook : not installed
qtconsole : not installed
traitlets : 5.14.3
</code></pre>
<ul>
<li><p>I have already <a href="https://stackoverflow.com/questions/59008693/shift-tab-for-showing-the-documentation-of-a-command-in-jupyter-notebook-is-not">imported</a> the module I want to get tooltips for, so that's not the issue here.</p>
</li>
<li><p>Also, I don't want to get tooltips from within some convoluted expression (e.g. asking for a tooltip from <code>requests.get(url)</code> doesn't work).</p>
</li>
<li><p>I tried two different browsers, no luck.</p>
</li>
<li><p>I also tried uninstalling <code>jedi</code> as suggested <a href="https://stackoverflow.com/questions/77141665/shift-tab-not-showing-the-documentation-of-a-command-in-jupyter-notebook">here</a>, no luck.</p>
</li>
<li><p>I also have Jupyterlab in my Python 3.8 (also pyenv) and SHIFT + TAB works fine there.</p>
</li>
</ul>
<p>Any ideas what else to try?</p>
<p>NOTE: There are a bunch of questions (<a href="https://stackoverflow.com/questions/76418467/shifttab-on-jupyter-notebook-not-working">one</a>, <a href="https://stackoverflow.com/questions/77779940/shifttab-in-jupyter-notebook-does-not-work-properly">two</a>, <a href="https://stackoverflow.com/questions/77077279/documentation-does-not-open-on-jupyter-notebook">three</a>) asking more or less the same, but none of them have answers that work for me & all are missing version / platform info, which is probably relevant.</p>
|
<python><ubuntu><jupyter-lab>
|
2024-05-15 06:38:29
| 2
| 744
|
the.real.gruycho
|
78,481,849
| 1,084,174
|
Kernel Restarting: The kernel for .ipynb appears to have died. It will restart automatically
|
<p>I have trained a model using keras.
Model summary:</p>
<pre><code>Model: "sequential_1"
┏━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━┓
┃ Layer (type) ┃ Output Shape ┃ Param # ┃
┡━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━┩
│ lstm_1 (LSTM) │ (None, 200, 6) │ 312 │
├─────────────────────────────────┼────────────────────────┼───────────────┤
│ flatten_1 (Flatten) │ (None, 1200) │ 0 │
├─────────────────────────────────┼────────────────────────┼───────────────┤
│ dense_2 (Dense) │ (None, 128) │ 153,728 │
├─────────────────────────────────┼────────────────────────┼───────────────┤
│ dense_3 (Dense) │ (None, 2) │ 258 │
└─────────────────────────────────┴────────────────────────┴───────────────┘
Total params: 462,896 (1.77 MB)
Trainable params: 154,298 (602.73 KB)
Non-trainable params: 0 (0.00 B)
Optimizer params: 308,598 (1.18 MB)
</code></pre>
<p>I have tried to convert my model for Android using the TFLite converter. I used the following code:</p>
<pre><code>import tensorflow as tf
import os
os.environ['KMP_DUPLICATE_LIB_OK']='True'
# Convert the model.
from tensorflow.keras.models import Model, load_model
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.target_spec.supported_ops = [
tf.lite.OpsSet.TFLITE_BUILTINS, # enable TensorFlow Lite ops.
tf.lite.OpsSet.SELECT_TF_OPS # enable TensorFlow ops.
]
tflite_model = converter.convert()
with open('My_Model.tflite', 'wb') as f:
f.write(tflite_model)
</code></pre>
<p>When I run this conversion, it restarting my kernel and printing following error in jupyter notebook terminal:</p>
<pre><code>[I 2024-05-15 11:34:06.899 ServerApp] Saving file at /ml_codes/poc/MyModel.ipynb
[I 2024-05-15 11:34:35.986 ServerApp] AsyncIOLoopKernelRestarter: restarting kernel (1/5), keep random ports
[W 2024-05-15 11:34:35.986 ServerApp] kernel 171cb42f-92a2-405c-a1c9-6489095f343e restarted
[I 2024-05-15 11:34:36.002 ServerApp] Starting buffering for 171cb42f-92a2-405c-a1c9-6489095f343e:b45e968e-d91c-4761-bd02-5bf4a93dc15f
[I 2024-05-15 11:34:36.021 ServerApp] Connecting to kernel 171cb42f-92a2-405c-a1c9-6489095f343e.
[I 2024-05-15 11:34:36.021 ServerApp] Restoring connection for 171cb42f-92a2-405c-a1c9-6489095f343e:b45e968e-d91c-4761-bd02-5bf4a93dc15f
[I 2024-05-15 11:35:07.565 ServerApp] Saving file at /ml_codes/poc/MyModel.ipynb
2024-05-15 11:35:12.957443: I tensorflow/core/platform/cpu_feature_guard.cc:210] This TensorFlow binary is optimized to use available CPU instructions in performance-critical operations.
To enable the following instructions: AVX2 AVX512F FMA, in other operations, rebuild TensorFlow with the appropriate compiler flags.
2024-05-15 11:35:13.726656: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Could not find TensorRT
2024-05-15 11:35:15.100616: W tensorflow/core/common_runtime/gpu/gpu_device.cc:2251] Cannot dlopen some GPU libraries. Please make sure the missing libraries mentioned above are installed properly if you would like to use GPU. Follow the guide at https://www.tensorflow.org/install/gpu for how to download and setup the required libraries for your platform.
Skipping registering GPU devices...
</code></pre>
<p>My Python version: 3.10.12
TensorFlow version: 2.16.1</p>
<p>Is there any solution to this problem?</p>
|
<python><tensorflow><tflite>
|
2024-05-15 05:50:05
| 0
| 40,671
|
Sazzad Hissain Khan
|
78,481,813
| 1,930,462
|
Type-correct implementation of a python decorator that drops the first argument to a function
|
<p>I'm trying to figure out how to write a decorator that passes strict type checks as well as preserve type information about the original arguments of a function.<br />
What makes <em><strong>this case</strong></em> tricky is that the decorator must drop the first argument of the function it is wrapping.</p>
<p>I've been able to get past most of the typing/mypy errors using <code>Concatenate</code>, <code>ParamSpec</code> etc..., but now I'm stuck.</p>
<p>Here is what I have so far:</p>
<pre class="lang-py prettyprint-override"><code>from functools import wraps
from typing import Any, Callable, Concatenate, ParamSpec, TypeVar

P = ParamSpec('P')
R = TypeVar('R')

def drop_first_arg(f: Callable[Concatenate[Any, P], R]) -> Callable[P, R]:
    @wraps(f)
    def wrapper(*args: P.args, **kwargs: P.kwargs) -> R:
        # Error on "return" line:
        # MyPy(4.10.0): error: Argument 1 has incompatible type "*tuple[object, ...]";
        #               expected "P.args"  [arg-type]
        # Pylance: Arguments for ParamSpec "P@drop_first_arg" are missing
        #          (reportCallIssue)
        return f(*args[1:], **kwargs)  # `[1:]` effectively drops the first argument
    return wrapper
</code></pre>
<p>When you pass the above through <code>mypy --strict</code>, then you'll see the following error message:</p>
<pre><code>error: Argument 1 has incompatible type "*tuple[object, ...]"; expected "P.args" [arg-type]
</code></pre>
<p>So essentially this all boils down to the question: How can I write a <code>drop_first_argument</code> decorator that will also pass all of <code>mypy --strict</code>'s checks?</p>
|
<python><mypy><python-typing>
|
2024-05-15 05:40:31
| 0
| 956
|
Ru Hasha
|
78,481,798
| 7,700,802
|
How to adjust pictures alignment on fpdf
|
<p>I have this code that generates a investment summary and some analytic images</p>
<pre><code>def investment_summary(pdf, text, bullet_indent=15):
    pdf.set_font("Arial", size=8)
    for point in text.splitlines():
        if point.startswith("-"):
            pdf.set_x(bullet_indent)
        pdf.multi_cell(0, 5, point)

def add_analytics(pdf, image_paths):
    for image in image_paths:
        pdf.image(image, w=180, h=100)  # Adjust width (w) and height (h) as needed
        pdf.add_page()  # Start a new page for each graph

def create_report(county_and_state, llm_text, analytics_location_path=None):
    pdf = FPDF()
    pdf.add_page()
    pdf.set_text_color(r=50, g=108, b=175)
    pdf.set_font('Arial', 'B', 18)
    pdf.cell(w=0, h=10, txt="Verus-AI: " + county_and_state, ln=1, align='C')
    pdf.set_font('Arial', 'B', 16)
    pdf.cell(w=0, h=10, txt="Investment Summary", ln=1, align='L')
    investment_summary(pdf, llm_text)
    pdf.set_text_color(r=50, g=108, b=175)
    pdf.set_font('Arial', 'B', 16)
    pdf.cell(w=0, h=10, txt="Analytics", ln=1, align='L')
    add_analytics(pdf, analytics_location_path)
    pdf.output('./example1.pdf', 'F')
</code></pre>
<p>The issue is that the <code>add_analytics</code> function starts a new page for each image. I would like these images to use as much of the space as possible while still looking good from the user's perspective.</p>
<p>This is what the output looks like
<a href="https://i.sstatic.net/QsraeHvn.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/QsraeHvn.png" alt="enter image description here" /></a></p>
<p>I changed my code to add this function</p>
<pre><code>def add_analytics1(pdf, image_paths):
    size_of_image = 150
    for image in image_paths:
        pdf.image(image, w=size_of_image, x=((pdf.w - size_of_image) / 2))
        pdf.ln(h=2.5)
</code></pre>
<p>but now I am getting this</p>
<p><a href="https://i.sstatic.net/mcuR91Ds.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/mcuR91Ds.png" alt="enter image description here" /></a></p>
<p>How do I make sure the first picture does not slightly shift to the left?</p>
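<p>For what it's worth, the geometry of "centre every image, pack as many per page as fit" can be planned independently of fpdf with a small helper; the page dimensions below are illustrative A4-style values, and <code>page_w</code>/<code>page_h</code> would in practice come from <code>pdf.w</code>/<code>pdf.h</code>. Passing an explicit <code>x</code> and <code>y</code> from such a plan to every <code>pdf.image()</code> call means no image inherits a stray cursor position:</p>

```python
def layout_images(n, page_w, page_h, top_y, img_w, img_h, gap=2.5):
    """Plan (page_index, x, y) positions for n images stacked vertically,
    centred horizontally, starting a fresh page when one won't fit."""
    positions = []
    page, y = 0, top_y
    x = (page_w - img_w) / 2.0  # same x for every image, so none can drift
    for _ in range(n):
        if y + img_h > page_h:  # no vertical room left: move to a new page
            page += 1
            y = top_y
        positions.append((page, x, y))
        y += img_h + gap
    return positions

# A4-ish page: 210 x 297 mm, content starting at y=40
print(layout_images(3, 210, 297, 40, 150, 100))
# [(0, 30.0, 40), (0, 30.0, 142.5), (1, 30.0, 40)]
```

<p>The caller would then issue <code>pdf.add_page()</code> whenever the page index increases and draw each image with <code>pdf.image(img, x=x, y=y, w=img_w)</code> — a sketch of the approach, not drop-in fpdf code.</p>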
|
<python><pyfpdf>
|
2024-05-15 05:38:02
| 1
| 480
|
Wolfy
|
78,481,761
| 15,460,398
|
Can not open Video file using opencv on JETSON NANO
|
<p>I tried to use OpenCV to open a <code>.mp4</code> file on an NVIDIA Jetson Nano with this code:</p>
<pre><code>import cv2

video_path = 'bird.mp4'
cap = cv2.VideoCapture(video_path)

if not cap.isOpened():
    print("Error: Could not open video.")
    exit()

while True:
    ret, frame = cap.read()
    if not ret:
        print("Error: Cannot read frame.")
        break
    cv2.imshow('Video', frame)
    if cv2.waitKey(25) & 0xFF == ord('q'):
        break

cap.release()
cv2.destroyAllWindows()
</code></pre>
<p>But it does not work to preview the video file and gives error:</p>
<pre><code>Opening in BLOCKING MODE
NvMMLiteOpen : Block : BlockType = 261
NVMEDIA: Reading vendor.tegra.display-size : status: 6
NvMMLiteBlockCreate : Block : BlockType = 261
[ WARN:0@0.407] global cap_gstreamer.cpp:2839 handleMessage OpenCV | GStreamer warning: Embedded video playback halted; module qtdemux0 reported: Internal data stream error.
[ WARN:0@0.416] global cap_gstreamer.cpp:1698 open OpenCV | GStreamer warning: unable to start pipeline
[ WARN:0@0.416] global cap_gstreamer.cpp:1173 isPipelinePlaying OpenCV | GStreamer warning: GStreamer: pipeline have not been created
Error: Could not open video.
</code></pre>
<p>The code works properly on my Windows PC and previews the video, but not on the Jetson Nano. Any suggestions for previewing a video file with OpenCV on a Jetson Nano?</p>
<p>opencv version: 4.9.0</p>
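<p>One workaround commonly suggested for Jetson boards is to hand OpenCV an explicit GStreamer pipeline instead of a bare filename, so decoding goes through the board's hardware decoder. The element names below (notably <code>nvv4l2decoder</code>) are assumptions based on a standard JetPack install and an H.264-encoded MP4; other codecs would need a different parser/decoder pair:</p>

```python
def gst_pipeline(path: str) -> str:
    # Demux the MP4, decode H.264 on the Jetson's hardware decoder, then
    # convert to the BGR frames that cv2.VideoCapture.read() should return
    return (
        f"filesrc location={path} ! qtdemux ! h264parse ! "
        "nvv4l2decoder ! nvvidconv ! video/x-raw,format=BGRx ! "
        "videoconvert ! video/x-raw,format=BGR ! appsink"
    )

# On the Jetson this string would be passed to OpenCV like so:
# cap = cv2.VideoCapture(gst_pipeline("bird.mp4"), cv2.CAP_GSTREAMER)
print(gst_pipeline("bird.mp4"))
```
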
|
<python><opencv><nvidia-jetson-nano>
|
2024-05-15 05:28:28
| 0
| 361
|
BeamString
|
78,481,723
| 6,401,403
|
Pandas read_html: keep cell content format as html
|
<p>I have a Word document with multiple tables. I'm converting it to HTML format and reading it with pandas:</p>
<pre><code>tables = pd.read_html('report.htm')
</code></pre>
<p>But the text in cells has subscripts, superscripts and special characters, so, for example, 5×10<sup>-5</sup> becomes 5?10-5, while in HTML format it is <code>5&times;10<sup>-5</sup></code>. Is there a way to read cell contents as raw HTML with pandas, or by using a different approach?</p>
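<p>For context, pandas decodes entities and drops markup while parsing, so the inner HTML is gone by the time <code>read_html</code> returns. One stdlib-only alternative — a sketch of the idea, not a full table parser — is to collect each <code>&lt;td&gt;</code> cell's inner HTML verbatim with <code>html.parser</code>:</p>

```python
from html.parser import HTMLParser

class RawCellParser(HTMLParser):
    """Collects the raw inner HTML of each <td> cell, entities included."""
    def __init__(self):
        super().__init__(convert_charrefs=False)  # keep &times; etc. as-is
        self.cells = []
        self._depth = 0   # nesting level of <td> elements
        self._buf = []

    def handle_starttag(self, tag, attrs):
        if tag == "td":
            self._depth += 1
            if self._depth == 1:
                self._buf = []
                return
        if self._depth:
            attrs_s = "".join(f' {k}="{v}"' for k, v in attrs)
            self._buf.append(f"<{tag}{attrs_s}>")

    def handle_endtag(self, tag):
        if self._depth:
            if tag == "td" and self._depth == 1:
                self._depth = 0
                self.cells.append("".join(self._buf))
                return
            self._buf.append(f"</{tag}>")
            if tag == "td":
                self._depth -= 1

    def handle_data(self, data):
        if self._depth:
            self._buf.append(data)

    def handle_entityref(self, name):
        if self._depth:
            self._buf.append(f"&{name};")

    def handle_charref(self, name):
        if self._depth:
            self._buf.append(f"&#{name};")

parser = RawCellParser()
parser.feed('<table><tr><td>5&times;10<sup>-5</sup></td></tr></table>')
print(parser.cells)  # ['5&times;10<sup>-5</sup>']
```

<p>The cell strings could then be fed into a DataFrame manually, keeping superscripts and entities intact.</p>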
|
<python><html><pandas><richtext>
|
2024-05-15 05:14:27
| 0
| 5,345
|
Michael
|
78,481,661
| 22,213,065
|
Get SubPath info with python
|
<p>I have a large number of GIF files that are very similar to the following sample:</p>
<p><a href="https://i.sstatic.net/H3MIbj4O.gif" rel="nofollow noreferrer"><img src="https://i.sstatic.net/H3MIbj4O.gif" alt="enter image description here" /></a></p>
<p>Now, I want to create a <code>color range selection</code> with the color <code>#464843</code> in each GIF file, convert the selection into a <code>work path</code>, and finally obtain information about the <code>subpaths</code> of the created work path.</p>
<p><strong>Note that I have a very large number of files (about 30,000 GIF files) and I cannot use Photoshop for this task!</strong></p>
<p>I want to use Python for this task and have written the following script for it:</p>
<pre><code>import cv2
import numpy as np
import os
import xml.etree.ElementTree as ET

def find_extremes(contour):
    min_x = min(contour[:, 0, 0])
    max_x = max(contour[:, 0, 0])
    min_y = min(contour[:, 0, 1])
    max_y = max(contour[:, 0, 1])
    return (min_x, max_x), (min_y, max_y)

def process_frame(frame):
    # Convert frame to grayscale
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # Define color range and apply mask
    lower = np.array([68, 72, 67])
    upper = np.array([60, 72, 67])
    mask = cv2.inRange(frame, lower, upper)
    # Expand the selection
    kernel = np.ones((10, 10), np.uint8)
    expanded_mask = cv2.dilate(mask, kernel, iterations=1)
    # Find contours
    contours, _ = cv2.findContours(expanded_mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    # Create XML structure
    root = ET.Element("Paths")
    for i, contour in enumerate(contours):
        # Convert contour to work path
        epsilon = 1.0
        approx = cv2.approxPolyDP(contour, epsilon, True)
        path = ET.SubElement(root, "Path")
        path.set("name", f"Path_{i+1}")
        # Find lowest and highest points
        extremes_x, extremes_y = find_extremes(approx)
        min_point = ET.SubElement(path, "Min")
        min_point.set("x", str(extremes_x[0]))
        min_point.set("y", str(extremes_y[0]))
        max_point = ET.SubElement(path, "Max")
        max_point.set("x", str(extremes_x[1]))
        max_point.set("y", str(extremes_y[1]))
    tree = ET.ElementTree(root)
    return tree

def save_to_txt(tree, output_dir, filename):
    output_file = os.path.join(output_dir, filename)
    tree.write(output_file)

def main():
    # Path to the GIF file
    gif_path = r"E:\Desktop\infon\tst\viopl15.gif"
    # Output directory for the text file
    output_dir = os.path.dirname(gif_path)
    # Output filename
    output_filename = "result.txt"
    # Read the GIF file
    gif = cv2.VideoCapture(gif_path)
    success, frame = gif.read()
    frames = []
    while success:
        frames.append(frame)
        success, frame = gif.read()
    gif.release()
    # Process each frame
    for i, frame in enumerate(frames):
        tree = process_frame(frame)
        save_to_txt(tree, output_dir, f"frame_{i+1}_result.txt")
    print("Processing completed.")

if __name__ == "__main__":
    main()
</code></pre>
<p>Note: The color range selection must expand by 10 pixels using the script.</p>
<p>The script is working, but I don't know why it doesn't get and save any subpath information. It just writes the following in the saved text result file:</p>
<pre><code><Paths />
</code></pre>
<p>Where is the problem in my script?</p>
<p><strong>Note that if there are any other better tools or ideas available, please share them with me.</strong></p>
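<p>One observation worth checking: <code>cv2.inRange</code> requires <code>lower &lt;= upper</code> per channel, and in the script above the first channel of <code>upper</code> (60) is below <code>lower</code> (68), which would leave the mask empty. OpenCV frames are also in BGR order, so <code>#464843</code> needs channel reordering. A small sketch for deriving the bounds from the hex colour (the ±10 tolerance is an arbitrary illustrative choice):</p>

```python
def hex_to_bgr(hex_color: str) -> tuple:
    """Convert '#RRGGBB' to the (B, G, R) channel order OpenCV frames use."""
    r, g, b = (int(hex_color.lstrip("#")[i:i + 2], 16) for i in (0, 2, 4))
    return (b, g, r)

def bgr_range(bgr, tol=10):
    """Per-channel lower/upper bounds with lower <= upper, as cv2.inRange expects."""
    lower = [max(0, c - tol) for c in bgr]
    upper = [min(255, c + tol) for c in bgr]
    return lower, upper

print(hex_to_bgr("#464843"))             # (67, 72, 70)
print(bgr_range(hex_to_bgr("#464843")))  # ([57, 62, 60], [77, 82, 80])
```

<p>The resulting lists would be wrapped in <code>np.array(...)</code> before passing them to <code>cv2.inRange</code>.</p>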
|
<python><opencv><imagemagick><photoshop>
|
2024-05-15 04:49:40
| 0
| 781
|
Pubg Mobile
|
78,481,611
| 748,493
|
Get value from ElementTree using the full path
|
<p>Let's say we have the following xml string</p>
<pre><code>xml_string = '''
<Wikimedia>
<projects>
<project name="Wikipedia" launch="2001-01-05">
<editions>
<edition language="English">en.wikipedia.org</edition>
<edition language="German">de.wikipedia.org</edition>
<edition language="French">fr.wikipedia.org</edition>
<edition language="Polish">pl.wikipedia.org</edition>
<edition language="Spanish">es.wikipedia.org</edition>
</editions>
</project>
<project name="Wiktionary" launch="2002-12-12">
<editions>
<edition language="English">en.wiktionary.org</edition>
<edition language="French">fr.wiktionary.org</edition>
<edition language="Vietnamese">vi.wiktionary.org</edition>
<edition language="Turkish">tr.wiktionary.org</edition>
<edition language="Spanish">es.wiktionary.org</edition>
</editions>
</project>
</projects>
</Wikimedia>
'''
</code></pre>
<p>in an ElementTree</p>
<pre><code>tree = ET.ElementTree(ET.fromstring(xml_string))
root = tree.getroot()
</code></pre>
<p>and I am trying to get its elements using the full path search like</p>
<p><code>root.findall('Wikimedia/projects/project/editions/edition')</code></p>
<p>but this returns an empty list <code>[]</code>.</p>
<p>How do I use the full path, including the starting node <code>Wikimedia</code>, to do this?</p>
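<p>For context, ElementTree paths are evaluated relative to the element <code>findall()</code> is called on, and that element itself is not part of the path — so a search starting at the root must begin below <code>Wikimedia</code>. A minimal sketch, including a way to keep the root name in the path by wrapping it in an artificial parent:</p>

```python
import xml.etree.ElementTree as ET

xml_string = (
    "<Wikimedia><projects><project name='Wikipedia'>"
    "<editions><edition language='English'>en.wikipedia.org</edition></editions>"
    "</project></projects></Wikimedia>"
)
root = ET.fromstring(xml_string)  # root *is* the <Wikimedia> element

# Path relative to root, without naming root itself:
editions = root.findall('projects/project/editions/edition')
print([e.text for e in editions])  # ['en.wikipedia.org']

# To keep 'Wikimedia' in the path, search from an artificial parent:
wrapper = ET.Element('wrapper')
wrapper.append(root)
assert wrapper.findall('Wikimedia/projects/project/editions/edition') == editions
```
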
|
<python><xml><elementtree>
|
2024-05-15 04:29:24
| 1
| 522
|
Confounded
|
78,481,546
| 107,294
|
What are the differences between pdoc and pdoc3?
|
<p>I've found what appear to be two forks of the pdoc documentation generator
<a href="https://pypi.org/project/pdoc/" rel="nofollow noreferrer">pdoc</a> (<a href="https://github.com/mitmproxy/pdoc" rel="nofollow noreferrer">GitHub</a>) and <a href="https://pypi.org/project/pdoc3/" rel="nofollow noreferrer">pdoc3</a> (<a href="https://github.com/pdoc3/pdoc" rel="nofollow noreferrer">GitHub</a>). From
comparing the commit graphs, these appear to have split in mid-2018, but
perhaps later, or perhaps there are some commits crossing between the two.
The differences I've seen are:</p>
<ul>
<li><p>pdoc:</p>
<ul>
<li>PyPI: 1st release 2013-08-09; v14.4.0 released 2024-01-18</li>
<li>Github: 48 contributors, used by 2.6k, 189 forks, starred by 1.8k</li>
<li>Moderate commit activity over the last year</li>
<li>Hosted under a GitHub personal account</li>
</ul>
</li>
<li><p>pdoc3:</p>
<ul>
<li>PyPI: 1st release 2019-01-25; v0.10.0 released 2021-08-03</li>
<li>Github: 64 contributors, used by 3.8k, 143 forks, starred by 1.1k</li>
<li>Little commit activity in the last year and a half</li>
<li>Hosted under a GitHub organisation, but it has only one member</li>
</ul>
</li>
</ul>
<p><s>From the releases it looks as if pdoc3 is a fork of pdoc,</s> <s>Per my answer below, pdoc appears to be a fork of pdoc3</s> It seems clear that pdoc3 is a fork of pdoc (though both seem to have gone for years unmaintained after the split point) but, I can't see
any explanation of why it forked or what the differences are. Given that,
and the slower pace of commits, it seems to me surprising that it has more
"users" (which I believe are projects with a <code>requirements.txt</code> referencing
it) than pdoc.</p>
<p>At any rate, I don't necessarily need the details of <em>why</em> the split
happened, but I would like to know what features pdoc3 provides that pdoc
doesn't, and vice versa, as well as anything else relevant to making a
choice between the two.</p>
<p><strong>NOTE:</strong> I am <em><strong>not</strong></em> asking anybody to recommend one over the other, nor am I interested in such recommendations. I am interested only in <em>facts</em> about each that are useful in making that decision myself. (That decision will depend not only on the information provided in answers, but many other factors of the given project or purpose for which it's being considered.)</p>
|
<python><documentation-generation>
|
2024-05-15 04:01:03
| 2
| 27,842
|
cjs
|
78,481,462
| 1,603,480
|
HTTPS GET with API Bearer Token: working with cURL but not with Python requests.get
|
<p>When I launch the following cURL command on a https URL with an API token, I get the expected response (a list of files in my storage): <code>curl -H "Authorization: Bearer <TOKEN>" "https://my_website.com:443/api/storage"</code>.</p>
<p>However, when I try with <code>requests.get</code> in Python:</p>
<pre class="lang-py prettyprint-override"><code>url = "https://my_website.com:443/api/storage"
token = "..."
headers = {'Authorization': f"Bearer {token}"}
resp = requests.get(url, headers=headers)
resp.json()
</code></pre>
<p>I have an authentication error as follows:</p>
<pre class="lang-py prettyprint-override"><code>---------------------------------------------------------------------------
SSLCertVerificationError Traceback (most recent call last)
File c:\AnsysDev\Python311\Lib\site-packages\urllib3\connectionpool.py:467, in HTTPConnectionPool._make_request(self, conn, method, url, body, headers, retries, timeout, chunked, response_conn, preload_content, decode_content, enforce_content_length)
466 try:
--> 467 self._validate_conn(conn)
468 except (SocketTimeout, BaseSSLError) as e:
...
File c:\AnsysDev\Python311\Lib\site-packages\requests\adapters.py:517, in HTTPAdapter.send(self, request, stream, timeout, verify, cert, proxies)
513 raise ProxyError(e, request=request)
515 if isinstance(e.reason, _SSLError):
516 # This branch is for urllib3 v1.22 and later.
--> 517 raise SSLError(e, request=request)
519 raise ConnectionError(e, request=request)
521 except ClosedPoolError as e:
SSLError: HTTPSConnectionPool(host='my_website.com', port=443): Max retries exceeded with url: /api/storage (Caused by SSLError(SSLCertVerificationError(1, '[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: self-signed certificate in certificate chain (_ssl.c:1006)')))
</code></pre>
<p>Why is the behavior different?</p>
<p>I tried to add the parameter <code>allow_redirects=False</code> and <code>allow_redirects=True</code> (which is the default), and I tried setting the token with a <code>requests.Session</code> (see below), but they all lead to the same error:</p>
<pre class="lang-py prettyprint-override"><code>session = requests.Session()
session.headers.update(headers)
response = session.get(url)
response.json()
</code></pre>
|
<python><curl><https><python-requests><bearer-token>
|
2024-05-15 03:24:28
| 1
| 13,204
|
Jean-Francois T.
|
78,481,278
| 15,587,184
|
Splitting HTML file and saving chunks using LangChain
|
<p>I'm very new to LangChain, and I'm working with around 100-150 HTML files on my local disk that I need to upload to a server for NLP model training. However, I have to divide my information into chunks because each file is only permitted to have a maximum of 20K characters. I'm trying to use the LangChain library to do so, but I'm not being successful in splitting my files into my desired chunks.</p>
<p>For reference, I'm using this URL: <a href="http://www.hadoopadmin.co.in/faq/" rel="nofollow noreferrer">http://www.hadoopadmin.co.in/faq/</a> Saved locally as HTML only.</p>
<p>It's a Hadoop FAQ page that I've downloaded as an HTML file onto my PC. There are many questions and answers there. I've noticed that sometimes, for some files, it gets split by a mere title, and another split is the paragraph following that title. But my desired output would be to have the title and the specific paragraph or following text from the body of the page, and as metadata, the title of the page.</p>
<p>I'm using this code:</p>
<pre><code>from langchain_community.document_loaders import UnstructuredHTMLLoader
from langchain_text_splitters import HTMLHeaderTextSplitter

# Same example with the URL http://www.hadoopadmin.co.in/faq/ saved locally as HTML only
dir_html_file = 'FAQ – BigData.html'
data_html = UnstructuredHTMLLoader(dir_html_file).load()

headers_to_split_on = [
    ("h1", "Header 1"),
]

html_splitter = HTMLHeaderTextSplitter(headers_to_split_on=headers_to_split_on)
html_header_splits = html_splitter.split_text(str(data_html))
</code></pre>
<p>But it returns a bunch of weird characters and does not split the document at all.</p>
<p>This is an output:</p>
<pre><code>[Document(page_content='[Document(page_content=\'BigData\\n\\n"You can have data without information, but you cannot have information without Big data."\\n\\nsaurabhmcakiet@gmail.com\\n\\n+91-8147644946\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\nToggle navigation\\n\\nHome\\n\\nBigData\\n\\n\\tOverview of BigData\\n\\tSources of BigData\\n\\tPros & Cons of BigData\\n\\tSolutions of BigData\\n\\nHadoop Admin\\n\\n\\tHadoop\\n\\t\\n\\t\\tOverview of HDFS\\n\\t\\tOverview of MapReduce\\n\\t\\tApache YARN\\n\\t\\tHadoop Architecture\\n\\t\\n\\n\\tPlanning of Hadoop Cluster\\n\\tAdministration and Maintenance\\n\\tHadoop Ecosystem\\n\\tSetup HDP cluster from scratch\\n\\tInstallation and Configuration\\n\\tAdvanced Cluster Configuration\\n\\tOverview of Ranger\\n\\tKerberos\\n\\t\\n\\t\\tInstalling kerberos/Configuring the KDC and Enabling Kerberos Security\\n\\t\\tConfigure SPNEGO Authentication for Hadoop\\n\\t\\tDisabled kerberos via ambari\\n\\t\\tCommon issues after Disabling kerberos via Ambari\\n\\t\\tEnable https for ambari Server\\n\\t\\tEnable SSL or HTTPS for Oozie Web UI\\n\\nHadoop Dev\\n\\n\\tSolr\\n\\t\\n\\t\\tSolr Installation\\n\\t\\tCommits and Optimizing in Solr and its use for NRT\\n\\t\\tSolr FAQ\\n\\t\\n\\n\\tApache Kafka\\n\\t\\n\\t\\tKafka QuickStart\\n\\t\\n\\n\\tGet last access time of hdfs files\\n\\tProcess hdfs data with Java\\n\\tProcess hdfs data with Pig\\n\\tProcess hdfs data with Hive\\n\\tProcess hdfs data with Sqoop/Flume\\n\\nBigData Architect\\n\\n\\tSolution Vs Enterprise Vs Technical Architect’s Role and Responsibilities\\n\\tSolution architect certification\\n\\nAbout me\\n\\nFAQ\\n\\nAsk Questions\\n\\nFAQ\\n\\nHome\\n\\nFAQ\\n\\nFrequently\\xa0Asked Questions about Big Data\\n\\nMany questions about big data have yet to be answered in a vendor-neutral way. With so many definitions, opinions run the gamut. 
Here I will attempt to cut to the heart of the matter by addressing some key questions I often get from readers, clients and industry analysts.\\n\\n1) What is Big Data?\\n\\n1) What is Big Data?\\n\\nBig data” is an all-inclusive term used to describe vast amounts of information. In contrast to traditional structured data which is typically stored in a relational database, big data varies in terms of volume, velocity, and variety.\\n\\nBig data\\xa0is characteristically generated in large volumes – on the order of terabytes or exabytes of data (starts with 1 and has 18 zeros after it, or 1 million terabytes) per individual data set.\\n\\nBig data\\xa0is also generated with high velocity – it is collected at frequent intervals – which makes it difficult to analyze (though analyzing it rapidly makes it more valuable).\\n\\nOr in simple words we can say “Big Data includes data sets whose size is beyond the ability of traditional software tools to capture, manage, and process the data in a reasonable time.”\\n\\n2) How much data does it take to be called Big Data?\\n\\nThis question cannot be easily answered absolutely. Based on the infrastructure on the market the lower threshold is at about 1 to 3 terabytes.\\n\\nBut using Big Data technologies can be sensible for smaller databases as well, for example if complex mathematiccal or statistical analyses are run against a database. Netezza offers about 200 built in functions and computer languages like Revolution R or Phyton which can be used in such cases.\\n\\
</code></pre>
<p>My Expected output will look something like this:</p>
<pre><code>One chunk:
Frequently Asked Questions about Big Data
Many questions about big data have yet to be answered in a vendor-neutral way. With so many definitions, opinions run the gamut. Here I will attempt to cut to the heart of the matter by addressing some key questions I often get from readers, clients and industry analysts.
1) What is Big Data?
“Big data” is an all-inclusive term used to describe vast amounts of information. In contrast to traditional structured data which is typically stored in a relational database, big data varies in terms of volume, velocity, and variety. Big data is characteristically generated in large volumes – on the order of terabytes or exabytes of data (starts with 1 and has 18 zeros after it, or 1 million terabytes) per individual data set. Big data is also generated with high velocity – it is collected at frequent intervals – which makes it difficult to analyze (though analyzing it rapidly makes it more valuable).
Or in simple words we can say “Big Data includes data sets whose size is beyond the ability of traditional software tools to capture, manage, and process the data in a reasonable time.”
2) How much data does it take to be called Big Data?
This question cannot be easily answered absolutely. Based on the infrastructure on the market the lower threshold is at about 1 to 3 terabytes.
But using Big Data technologies can be sensible for smaller databases as well, for example if complex mathematical or statistical analyses are run against a database. Netezza offers about 200 built in functions and computer languages like Revolution R or Phyton which can be used in such cases.
Metadata: FAQ
Another chunk:
7) Where is the big data trend going?
Eventually the big data hype will wear off, but studies show that big data adoption will continue to grow. With a projected $16.9B market by 2015 (Wikibon goes even further to say $50B by 2017), it is clear that big data is here to stay. However, the big data talent pool is lagging behind and will need to catch up to the pace of the market. McKinsey & Company estimated in May 2011 that by 2018, the US alone could face a shortage of 140,000 to 190,000 people with deep analytical skills as well as 1.5 million managers and analysts with the know-how to use the analysis of big data to make effective decisions.
The emergence of big data analytics has permanently altered many businesses’ way of looking at data. Big data can take companies down a long road of staff, technology, and data storage augmentation, but the payoff – rapid insight into never-before-examined data – can be huge. As more use cases come to light over the coming years and technologies mature, big data will undoubtedly reach critical mass and will no longer be labeled a trend. Soon it will simply be another mechanism in the BI ecosystem.
8) Who are some of the BIG DATA users?
From cloud companies like Amazon to healthcare companies to financial firms, it seems as if everyone is developing a strategy to use big data. For example, every mobile phone user has a monthly bill which catalogs every call and every text; processing the sheer volume of that data can be challenging. Software logs, remote sensing technologies, information-sensing mobile devices all pose a challenge in terms of the volumes of data created. The size of Big Data can be relative to the size of the enterprise. For some, it may be hundreds of gigabytes, for others, tens or hundreds of terabytes to cause consideration.
9) Data visualization is becoming more popular than ever.
In my opinion, it is absolutely essential for organizations to embrace interactive data visualization tools. Blame or thank big data for that and these tools are amazing. They are helping employees make sense of the never-ending stream of data hitting them faster than ever. Our brains respond much better to visuals than rows on a spreadsheet.
Companies like Amazon, Apple, Facebook, Google, Twitter, Netflix and many others understand the cardinal need to visualize data. And this goes way beyond Excel charts, graphs or even pivot tables. Companies like Tableau Software have allowed non-technical users to create very interactive and imaginative ways to visually represent information.
Metadata: FAQ
</code></pre>
<p>My goal is to gather all the information and split it into chunks; I don't want titles separated from their following paragraphs, and I want each chunk to hold as much text as possible (max 20K characters) before a new chunk is started.</p>
<p>I would also like to save these chunks and their metadata. Is there a function in LangChain to do this?</p>
<p>I am open to hearing not to do this in LangChain for efficiency reasons.</p>
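<p>Independent of LangChain, the packing constraint itself — never split a title from its body, fill each chunk up to the character limit — can be sketched as a plain function over <code>(title, body)</code> sections; a header-aware splitter such as <code>HTMLHeaderTextSplitter</code> would supply those sections:</p>

```python
def chunk_sections(sections, max_chars=20_000):
    """Greedily pack (title, body) pairs into chunks of at most max_chars,
    keeping each title together with its body. A single section longer
    than max_chars still becomes one oversized chunk."""
    chunks, current = [], ""
    for title, body in sections:
        piece = f"{title}\n{body}\n"
        if current and len(current) + len(piece) > max_chars:
            chunks.append(current)
            current = ""
        current += piece
    if current:
        chunks.append(current)
    return chunks

sections = [("1) What is Big Data?", "Big data is ..."),
            ("2) How much data?", "This cannot be ...")]
print(len(chunk_sections(sections, max_chars=40)))  # 2: each section is ~37 chars
```

<p>Each chunk could then be stored alongside its page-title metadata in whatever format the server expects.</p>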
|
<python><html><split><langchain><py-langchain>
|
2024-05-15 01:44:46
| 2
| 809
|
R_Student
|
78,481,247
| 1,330,974
|
Python Pool.apply_async() is returning None type objects
|
<p>I have a 4GB+ file in which each line represents a very nested JSON string. An example of what the file looks like would be below:</p>
<pre><code>{"core":{"id":"1","field1":{"subfield1":1, "subfield2":{"subsubfield1": 1}},....,"field700":700}}
{"core":{"id":"1","field1":{"subfield1":1, "subfield2":{"subsubfield1": 1}},....,"field700":700}}
100,000+ lines like above
</code></pre>
<p>I need to do the followings:</p>
<ul>
<li>convert each line in the file to JSON object</li>
<li>flatten each JSON object so that all the key-value pairs are on the same level while filtering only a few keys that I need (I need ~100 keys out of a total of 700+ in each JSON object)</li>
</ul>
<p>My plan is to divide this 100K plus lines into multiple chunks, and use <code>multiprocessing</code> to flatten the JSON objects in each chunk and combine them back into a dataframe. Because I am new to <code>multiprocessing</code>, I read a few posts including <a href="https://willcecil.co.uk/multiprocessing-in-python-to-speed-up-log-file-processing/" rel="nofollow noreferrer">this</a> and <a href="https://gist.github.com/ngcrawford/2237170" rel="nofollow noreferrer">this</a> (and a few more). I tried to write my code based on the <a href="https://willcecil.co.uk/multiprocessing-in-python-to-speed-up-log-file-processing/" rel="nofollow noreferrer">first post</a> like this:</p>
<pre><code>import json
import multiprocessing
import time
from multiprocessing import Pool, Process, Queue

import pandas as pd

# list of ~100 columns I want to keep
COLS_TO_KEEP = {'id', 'field1', 'subfield1', 'subsubfield2', 'field8', ..., 'field680'}

def chunk_the_list(list_of_items, chunk_size):
    for i in range(0, len(list_of_items), chunk_size):
        yield list_of_items[i : i + chunk_size]

def do_work(list_of_dicts):
    results = []
    for d in list_of_dicts:
        results.append(flatten_dict(d))

def flatten_dict(d, parent_key="", sep="_"):
    # This function recursively dives into each JSON object and flattens them
    items = []
    for k, v in d.items():
        new_key = parent_key + sep + k if parent_key else k
        if isinstance(v, dict):
            items.extend(flatten_dict(v, new_key, sep=sep).items())
        else:
            if new_key in COLS_TO_KEEP:
                items.append((new_key, v))
    return dict(items)

def main():
    raw_data_file_path_name = 'path/to/file.txt'  # file is 4GB+
    file = open(raw_data_file_path_name, encoding="utf-8").readlines()
    listify = [json.loads(line) for line in file]
    size_of_each_chunk = int(len(listify) / (multiprocessing.cpu_count() - 2))
    chunked_data = chunk_the_list(listify, size_of_each_chunk)
    p = Pool(processes=multiprocessing.cpu_count() - 2)
    results = [p.apply_async(do_work, args=(chunk,)) for chunk in chunked_data]
    results = [item.get() for item in results]  # This list has None type objects when I check using debugger
    results = sum(results, [])  # because the above line returns None values, this step errors out
    # I want to create a Pandas dataframe using the accumulated JSON/dictionary objects from results
    # and write it out as a parquet or csv file at the end like this https://stackoverflow.com/a/20638258/1330974
    df = pd.DataFrame(results)
    df.to_parquet('df.parquet.gzip', compression='gzip')

if __name__ == "__main__":
    start_time = time.time()
    main()
</code></pre>
<p>Am I doing something wrong to get <code>None</code> type objects back in the results set? Thank you in advance for suggestions and answers!</p>
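<p>One thing worth checking first, independent of multiprocessing: <code>apply_async(...).get()</code> returns whatever the worker function returns, and a Python function that falls off the end without a <code>return</code> statement implicitly returns <code>None</code>. A minimal illustration of the two shapes of worker:</p>

```python
def do_work_broken(chunk):
    results = []
    for d in chunk:
        results.append(d * 2)
    # falls off the end: implicitly returns None

def do_work_fixed(chunk):
    results = []
    for d in chunk:
        results.append(d * 2)
    return results  # the value apply_async(...).get() will hand back

print(do_work_broken([1, 2]))  # None
print(do_work_fixed([1, 2]))   # [2, 4]
```
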
|
<python><multiprocessing><python-multiprocessing><python-multithreading>
|
2024-05-15 01:22:19
| 2
| 2,626
|
user1330974
|
78,481,195
| 3,398,324
|
Reinforcement Learning Shape Error: ValueError: Error
|
<p>I am using the below approach to create simple DL-RL Model, but I am getting this error:</p>
<pre><code>ValueError: Error when checking input: expected flatten_1_input to have 2 dimensions, but got array with shape (1, 1, 2)
</code></pre>
<p>I am using the following versions of libraries and Python:</p>
<p>Python 3.11.7<br />
TensorFlow 2.13.0<br />
Keras 2.13.1</p>
<pre><code>import numpy as np
import pandas as pd
import gym
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Flatten
from rl.agents import DQNAgent
from rl.policy import BoltzmannQPolicy
from rl.memory import SequentialMemory
from tensorflow.keras.optimizers.legacy import Adam

class CustomEnv(gym.Env):
    def __init__(self, df):
        super(CustomEnv, self).__init__()
        self.df = df
        self.action_space = gym.spaces.Discrete(1)  # Action space (predict F_1_d_returns)
        self.observation_space = gym.spaces.Box(
            low=-np.inf, high=np.inf, shape=(2,), dtype=np.float32
        )  # State space (1_d_returns, 2_d_returns)
        self.current_step = 0

    def reset(self):
        # Reset the environment to initial state
        self.current_step = 0
        self.state = self.df.iloc[self.current_step, 1:3].values  # Start with first row's 1_d_returns and 2_d_returns
        return self.state

    def step(self, action):
        # Take an action (not relevant here as we are predicting)
        self.current_step += 1
        done = self.current_step >= len(self.df) - 1
        if done:
            next_state = self.state
        else:
            next_state = self.df.iloc[self.current_step, 1:3].values
        # Flatten the next_state
        next_state = np.reshape(next_state, (-1,))
        reward = 0  # No reward for predicting
        info = {}  # Additional information (if needed)
        return next_state, reward, done, info

df = pd.DataFrame({
    'F_1_d_returns': [-0.038076, 0.083333, 0.060577, -0.013599, -0.020221],
    '1_d_returns': [-0.062030, -0.038076, 0.083333, 0.060577, -0.013599],
    '2_d_returns': [-0.133681, -0.097744, 0.042084, 0.148958, 0.046154]
})

env = CustomEnv(df)
states = env.observation_space.shape
actions = env.action_space.n

def build_model(input_shape, nb_actions):
    model = Sequential()
    model.add(Flatten(input_shape=input_shape))  # Adjust input shape here
    model.add(Dense(32, activation='relu'))
    model.add(Dense(32, activation='relu'))
    model.add(Dense(nb_actions, activation='linear'))
    return model

model = build_model(states, actions)

def build_agent(model, actions):
    policy = BoltzmannQPolicy()
    memory = SequentialMemory(limit=50000, window_length=1)
    dqn = DQNAgent(model=model, memory=memory, policy=policy,
                   nb_actions=actions, nb_steps_warmup=10, target_model_update=1e-2)
    return dqn

dqn = build_agent(model, actions)
dqn.compile(Adam(lr=1e-3), metrics=['mae'])
dqn.fit(env, nb_steps=50000, visualize=False, verbose=1)
</code></pre>
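<p>For what it's worth, a plain NumPy illustration of where the <code>(1, 1, 2)</code> in the error can come from — assuming keras-rl's <code>SequentialMemory(window_length=1)</code> stacks each observation into a window axis before batching, so the network sees <code>(batch, window, obs)</code> rather than <code>(batch, obs)</code>:</p>

```python
import numpy as np

obs = np.zeros(2, dtype=np.float32)   # what env.reset() returns: shape (2,)
windowed = obs[np.newaxis, :]         # window_length=1 adds an axis: (1, 2)
batch = windowed[np.newaxis, ...]     # a batch of one sample: (1, 1, 2)
print(batch.shape)                    # (1, 1, 2)

# So the Flatten layer's per-sample input_shape would need to include the
# window axis, e.g. input_shape=(1,) + env.observation_space.shape == (1, 2)
assert batch.shape == (1,) + (1, 2)
```
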
|
<python><tensorflow><tf.keras>
|
2024-05-15 00:52:22
| 0
| 1,051
|
Tartaglia
|
78,481,178
| 2,803,777
|
How to do matrix multiplication under numpy in higher dimensions?
|
<p>Under numpy I want to perform a "usual" matrix multiplication like this:</p>
<p>C=A*B</p>
<p>where</p>
<p>A is a "2D-kind" Matrix, but each matrix element has shape (1,5)</p>
<p>and</p>
<p>B is a "1D-kind" Vector, but each vector element has shape (20,5)</p>
<p>The result</p>
<p>C shall be a "1D-kind" Vector, but each vector element again has shape (20,5)</p>
<p><a href="https://i.sstatic.net/tXXb2Iyf.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/tXXb2Iyf.png" alt="enter image description here" /></a></p>
<p>I tried to produce the elements C1 and C2 of C manually:</p>
<pre><code>>>> A.shape
(2, 2, 1, 5)
>>> B.shape
(2, 20, 5)
>>> C0 = A[0,0]*B[0]+A[0,1]*B[1]
>>> C0.shape
(20, 5)
>>> C1 = A[1,0]*B[0]+A[1,1]*B[1]
>>> C1.shape
(20, 5)
>>>
</code></pre>
<p>Broadcasting (1,5) from A with (20,5) of B works as expected.</p>
<p>However, I was not able to find out, how this can be written like a matrix multiplication:</p>
<pre><code>C = np.matmul(A, B)
</code></pre>
<p>Of course this doesn't work because numpy can't know what indices I want to sum over. But I guess that some simple "numpythonic" solution must exist...</p>
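<p>For reference, the manual per-element sums above can be collapsed into a single contraction with <code>np.einsum</code>, which names exactly which axis gets summed. A sketch with random data standing in for the real A and B:</p>
<pre><code>import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((2, 2, 1, 5))   # "2D-kind" matrix of (1,5) blocks
B = rng.standard_normal((2, 20, 5))     # "1D-kind" vector of (20,5) blocks

# Sum over the block index j (and the size-1 axis k); broadcast over m and l.
C = np.einsum('ijkl,jml->iml', A, B)

# Matches the manual construction from the question:
C0 = A[0, 0] * B[0] + A[0, 1] * B[1]
C1 = A[1, 0] * B[0] + A[1, 1] * B[1]
assert C.shape == (2, 20, 5)
assert np.allclose(C[0], C0) and np.allclose(C[1], C1)
</code></pre>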
|
<python><numpy>
|
2024-05-15 00:42:45
| 2
| 1,502
|
MichaelW
|
78,481,175
| 1,429,450
|
Easy way to store SciPy sparse symmetric matrix in a local file, load it into shared memory (shm), and reconstruct it from shm?
|
<p>Easy way to store the upper triangular part (including the diagonal) of a <a href="https://docs.scipy.org/doc/scipy/reference/sparse.html" rel="nofollow noreferrer">SciPy sparse matrix</a> in a local file, load the file into shared memory (shm), and reconstruct the sparse matrix from the shm?</p>
<p><a href="https://chat.openai.com/share/a8586f23-8d31-4bdd-88d8-0c2f6e0f8dfc" rel="nofollow noreferrer">ChatGPT 4o</a> gave what seems like an overly complex solution, with three parts: <code>data</code>, <code>indices</code>, and <code>indptr</code>; is that necessary?</p>
<pre class="lang-py prettyprint-override"><code>import numpy as np
import scipy.sparse as sp
from multiprocessing import shared_memory
# Example matrix creation
matrix = sp.random(100, 100, density=0.1, format='csr')
# Step 1: Extract and save the upper triangular part (including the diagonal)
upper_triangular_matrix = sp.triu(matrix)
sp.save_npz('upper_triangular_matrix.npz', upper_triangular_matrix)
# Step 2: Load the matrix from the file
loaded_matrix = sp.load_npz('upper_triangular_matrix.npz')
# Convert the matrix to shared memory
# Flatten the data, indices, and indptr arrays for shared memory storage
data = loaded_matrix.data
indices = loaded_matrix.indices
indptr = loaded_matrix.indptr
# Create shared memory blocks for each array
shm_data = shared_memory.SharedMemory(create=True, size=data.nbytes)
shm_indices = shared_memory.SharedMemory(create=True, size=indices.nbytes)
shm_indptr = shared_memory.SharedMemory(create=True, size=indptr.nbytes)
# Copy the arrays into the shared memory blocks
np.copyto(np.ndarray(data.shape, dtype=data.dtype, buffer=shm_data.buf), data)
np.copyto(np.ndarray(indices.shape, dtype=indices.dtype, buffer=shm_indices.buf), indices)
np.copyto(np.ndarray(indptr.shape, dtype=indptr.dtype, buffer=shm_indptr.buf), indptr)
# Access the shared memory blocks
shared_data = np.ndarray(data.shape, dtype=data.dtype, buffer=shm_data.buf)
shared_indices = np.ndarray(indices.shape, dtype=indices.dtype, buffer=shm_indices.buf)
shared_indptr = np.ndarray(indptr.shape, dtype=indptr.dtype, buffer=shm_indptr.buf)
# Create a shared memory CSR matrix
shared_csr_matrix = sp.csr_matrix((shared_data, shared_indices, shared_indptr), shape=loaded_matrix.shape)
# Accessing the shared memory matrix
print(shared_csr_matrix)
# Clean up shared memory
shm_data.close()
shm_data.unlink()
shm_indices.close()
shm_indices.unlink()
shm_indptr.close()
shm_indptr.unlink()
</code></pre>
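<p>A single shared-memory block can also work if you are willing to serialize the whole matrix rather than expose its three component arrays. A minimal sketch, with the tradeoff stated up front: unpickling copies the data on attach, whereas the three-array approach gives readers zero-copy views:</p>
<pre><code>import pickle
import scipy.sparse as sp
from multiprocessing import shared_memory

matrix = sp.random(100, 100, density=0.1, format='csr', random_state=0)
upper = sp.triu(matrix, format='csr')

# One buffer instead of three: serialize the whole CSR object into shm.
payload = pickle.dumps(upper, protocol=pickle.HIGHEST_PROTOCOL)
shm = shared_memory.SharedMemory(create=True, size=len(payload))
shm.buf[:len(payload)] = payload

# Another process would attach with SharedMemory(name=shm.name) and unpickle.
restored = pickle.loads(bytes(shm.buf[:len(payload)]))
assert (restored != upper).nnz == 0   # round-trip preserves every entry

shm.close()
shm.unlink()
</code></pre>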
|
<python><scipy><sparse-matrix><shared-memory><symmetric>
|
2024-05-15 00:41:14
| 1
| 5,826
|
Geremia
|
78,480,373
| 1,658,080
|
Evenly Distribute Stars on a Sphere According to Categories
|
<p>I'm working on a project where I need to map stars onto a virtual sphere. The stars must be evenly spaced from each other and from the center of the sphere (0,0,0), with a specific distribution of brightness categories across the sphere's surface. Each segment of the sphere should have a proportional mix of stars based on their brightness, ensuring that the entire sphere is covered uniformly.</p>
<p>Approach and Problem:</p>
<p><strong>Normalization:</strong> I started by normalizing the position of each star to form a sphere by scaling their position vectors to a fixed radius.</p>
<p><strong>Even Point Distribution:</strong> I generated 1000 evenly spaced points on the sphere using Fibonacci sphere sampling.</p>
<p><strong>Initial Matching:</strong> Matching each point to the nearest star worked well without considering star brightness.</p>
<p><strong>Category Distribution:</strong> The challenge arises when I try to include a desired distribution of star brightness. The categories need specific ratios across the sphere's surface, but I struggle to maintain an even distribution when categorizing by brightness.</p>
<p><strong>Current Method:</strong> My latest attempt involved randomly shuffling the generated points and then assigning them to stars based on their brightness category. For example, every 1.9th point goes to a star of brightness category '8'. However, this method failed to achieve the even distribution required, especially since category '8' stars are rare yet numerous enough to meet the distribution requirements.</p>
<pre><code>import pandas as pd
import numpy as np
from sklearn.neighbors import NearestNeighbors
def classify_stars(vmag):
if vmag >= 10:
return '1'
elif 7 <= vmag < 10:
return '3'
elif 6 <= vmag < 7:
return '5'
elif 3 <= vmag < 6:
return '8'
elif 1 <= vmag < 3:
return '9'
else:
return '10'
def generate_sphere_points(samples=1000, radius=50):
points = []
dphi = np.pi * (3. - np.sqrt(5.)) # Approximation of the golden angle in radians.
for i in range(samples):
y = 1 - (i / float(samples - 1)) * 2 # y goes from 1 to -1
radius = np.sqrt(1 - y * y) # radius at y
theta = dphi * i # golden angle increment
x = np.cos(theta) * radius
z = np.sin(theta) * radius
points.append((x * 50, y * 50, z * 50))
return np.array(points)
df = pd.read_csv('hygdata_v3.csv', usecols=['hip', 'x', 'y', 'z', 'mag'])
df.dropna(subset=['hip', 'x', 'y', 'z', 'mag'], inplace=True)
df['hip'] = df['hip'].astype(int)
df['norm'] = np.sqrt(df['x']**2 + df['y']**2 + df['z']**2)
df['x'] = 50 * df['x'] / df['norm']
df['y'] = 50 * df['y'] / df['norm']
df['z'] = 50 * df['z'] / df['norm']
df.drop(columns='norm', inplace=True)
df['class'] = df['mag'].apply(classify_stars)
points = generate_sphere_points(samples=1000)
desired_distribution = {'1': 0.27, '3': 0.27, '5': 0.27, '8': 0.19, '9': 0, '10': 0}
total_points = len(points)
# Calculate points per category based on desired distribution
category_points = {k: int(v * total_points) for k, v in desired_distribution.items()}
# Randomly shuffle points to avoid spatial clustering in assignment
np.random.shuffle(points)
sampled_df = pd.DataFrame()
offset = 0
for category, count in category_points.items():
if count > 0:
category_stars = df[df['class'] == category]
nbrs = NearestNeighbors(n_neighbors=1).fit(category_stars[['x', 'y', 'z']])
if offset + count > len(points):
count = len(points) - offset # Adjust count if it exceeds the number of points
_, indices = nbrs.kneighbors(points[offset:offset + count])
unique_indices = np.unique(indices.flatten())
assigned_stars = category_stars.iloc[unique_indices[:count]]
sampled_df = pd.concat([sampled_df, assigned_stars], ignore_index=True)
offset += count
# Output the hip values of stars with category 8 to see if they are evenly distributed
sampled_df = sampled_df.drop_duplicates(subset='hip')
category_8_stars = sampled_df[sampled_df['class'] == '8']
hip_values_category_8 = category_8_stars['hip'].astype(str).tolist()
print(hip_values_category_8)
</code></pre>
<p>The dataset can be downloaded here: <a href="https://raw.githubusercontent.com/EnguerranVidal/HYG-STAR-MAP/main/hygdatav3.csv" rel="nofollow noreferrer">https://raw.githubusercontent.com/EnguerranVidal/HYG-STAR-MAP/main/hygdatav3.csv</a></p>
<p>Like I said, it is not really working the way I imagine.
After two complete days of trying to solve this riddle, I came here for expert advice.</p>
<p>Any idea how I can approach this problem?</p>
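<p>As a sanity check on the even-spacing requirement, the Fibonacci sampling step can be vectorized and verified numerically. A sketch of step 2 only, not the category assignment; the radius of 50 mirrors the question's scaling:</p>
<pre><code>import numpy as np

def fibonacci_sphere(samples=1000, radius=50.0):
    i = np.arange(samples)
    golden = np.pi * (3.0 - np.sqrt(5.0))  # golden angle in radians
    y = 1.0 - 2.0 * i / (samples - 1)      # y goes from 1 to -1
    r = np.sqrt(1.0 - y * y)               # ring radius at height y
    theta = golden * i
    pts = np.stack([np.cos(theta) * r, y, np.sin(theta) * r], axis=1)
    return pts * radius

pts = fibonacci_sphere()
norms = np.linalg.norm(pts, axis=1)
assert pts.shape == (1000, 3)
assert np.allclose(norms, 50.0)            # every point sits on the sphere
</code></pre>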
|
<python><algorithm><math><geometry><visualization>
|
2024-05-14 19:51:00
| 1
| 725
|
Clms
|
78,480,305
| 616,460
|
Unknown target CPU 'x86-64-v3' when building via pip/clang
|
<p>When installing the <code>netifaces</code> package with <code>pip</code>, I get the following error while building its wheel (Debian 10, and Python 3.10):</p>
<pre class="lang-none prettyprint-override"><code>building 'netifaces' extension
clang -Wno-unused-result -Wsign-compare -Wunreachable-code -DNDEBUG -g -fwrapv -O3 -Wall -march=x86-64-v3 -fPIC -fPIC -DNETIFACES_VERSION=0.11.0 -I/path/to/venv/include -I/install/include/python3.10 -c netifaces.c -o build/temp.linux-x86_64-cpython-310/netifaces.o
error: unknown target CPU 'x86-64-v3'
note: valid target CPU values are: nocona, core2, penryn, bonnell, atom, silvermont, slm, goldmont, goldmont-plus, tremont, nehalem, corei7, westmere, sandybridge, corei7-avx, ivybridge, core-avx-i, haswell, core-avx2, broadwell, skylake, skylake-avx512, skx, cannonlake, icelake-client, icelake-server, knl, knm, k8, athlon64, athlon-fx, opteron, k8-sse3, athlon64-sse3, opteron-sse3, amdfam10, barcelona, btver1, btver2, bdver1, bdver2, bdver3, bdver4, znver1, x86-64
error: command '/usr/bin/clang' failed with exit code 1
</code></pre>
<p>I see that <code>-march=x86-64-v3</code> is on the <code>clang</code> command line, and that the list of supported architectures includes <code>x86-64</code> but not <code>x86-64-v3</code>.</p>
<p>How can I either a) set <code>-march=x86-64</code> during the <code>pip</code> install, or b) add <code>x86-64-v3</code> to the supported architecture list?</p>
<p>I am a little confused because I installed <code>clang</code> with <code>apt-get install clang</code> so my assumption is that it would come with support for the current architecture.</p>
<hr />
<p>If it matters:</p>
<pre class="lang-none prettyprint-override"><code>$ uname -a
Linux magic 4.19.0-26-amd64 #1 SMP Debian 4.19.304-1 (2024-01-09) x86_64 GNU/Linux
</code></pre>
<p>Also:</p>
<pre><code>>>> import platform
>>> print(platform.machine())
x86_64
</code></pre>
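<p>One approach worth trying for option (a): distutils/setuptools append the <code>CFLAGS</code> environment variable after the flags Python was built with, and with clang the last <code>-march</code> wins, so exporting a supported value should override <code>x86-64-v3</code>. A sketch; the pip line is commented because it must run inside the target venv:</p>
<pre class="lang-none prettyprint-override"><code># Assumption: the build honors CFLAGS/CXXFLAGS from the environment and
# appends them last, so this -march replaces the unsupported x86-64-v3.
export CFLAGS="-march=x86-64"
export CXXFLAGS="-march=x86-64"
# pip install --no-cache-dir netifaces
echo "CFLAGS=$CFLAGS"
</code></pre>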
|
<python><pip><clang>
|
2024-05-14 19:34:48
| 1
| 40,602
|
Jason C
|
78,480,228
| 14,250,641
|
how to find regional overlap between two dfs
|
<p>I have one df that has chr and position, and the other has chr, start, and end. I want to find all of the regions from df1 that match the chr and whose [start, end] range contains the position from df0. Please see the example below for more clarity. FYI, I have large dfs.</p>
<pre><code>df0:
chr position
1 33
1 100
1 400
df1:
chr start end label
1 30 40 dog
1 90 110 dog
1 85 200 cat
</code></pre>
<p>this is what I want:</p>
<pre><code>final_df:
chr position matched_start matched_end label
1 33 30 40 dog
1 100 90 110 dog
1 100 85 200 cat
</code></pre>
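<p>One way to express this on the sample data (a sketch; for genuinely large frames the intermediate per-chr merge can blow up memory, so interval-tree tools are worth considering instead):</p>
<pre><code>import pandas as pd

df0 = pd.DataFrame({'chr': [1, 1, 1], 'position': [33, 100, 400]})
df1 = pd.DataFrame({'chr': [1, 1, 1],
                    'start': [30, 90, 85],
                    'end': [40, 110, 200],
                    'label': ['dog', 'dog', 'cat']})

# Pair every position with every region on the same chr, then keep overlaps.
merged = df0.merge(df1, on='chr')
final_df = (merged[(merged['position'] >= merged['start'])
                   & (merged['position'] <= merged['end'])]
            .rename(columns={'start': 'matched_start', 'end': 'matched_end'})
            .reset_index(drop=True))
print(final_df)
</code></pre>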
|
<python><pandas><dataframe><bioinformatics>
|
2024-05-14 19:15:01
| 1
| 514
|
youtube
|
78,480,187
| 2,276,022
|
How do I delete all documents from a Marqo index?
|
<p>I have a ton of documents that have been vectorized poorly / are using an old multimodal_field_combination.</p>
<pre><code>mappings={
'combo_text_image': {
"type": "multimodal_combination",
"weights": {
"name": 0.05,
"description": 0.15,
"image_url": 0.8
}
}
},
</code></pre>
<p>But I need to update to</p>
<pre><code>mappings={
'combo_text_image': {
"type": "multimodal_combination",
"weights": {
"name": 0.2,
"image_url": 0.8
}
}
},
</code></pre>
<p>I've realized that to do this in the same index, I should delete the documents and then reindex, but I haven't been able to find a <code>mq.delete_all_documents()</code> call. What's the best way to do this?</p>
|
<python><vector><vespa><marqo>
|
2024-05-14 19:06:24
| 1
| 351
|
nostrebor
|
78,479,791
| 16,136,190
|
Getting requests.exceptions.InvalidHeader: Header part (1) from ('Sec-Gpc', 1) must be of type str or bytes, not <class 'int'> though Chrome sends it
|
<p>I get <code>requests.exceptions.InvalidHeader: Header part (1) from ('Sec-Gpc', 1) must be of type str or bytes, not <class 'int'></code> when trying to use the <code>Sec-Gpc</code> attribute in a header:</p>
<pre class="lang-json prettyprint-override"><code>headers = {
...,
"Sec-Gpc": 1,
...
}
</code></pre>
<p>I tried checking <a href="https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Sec-GPC" rel="nofollow noreferrer">MDN</a>; its browser compatibility table says Chrome doesn't support it, yet when I check the request headers, Chrome does send it.</p>
<p>In this <a href="https://github.com/mdn/browser-compat-data/issues/18741" rel="nofollow noreferrer">issue</a>, the header is attributed to an extension that modifies requests, and Chrome itself is said not to support it; but I don't have that extension in my requests/driver. So why do I get the error, and how do I pass the attribute, preferably without any third-party libraries?</p>
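<p>For what it's worth, the error message itself points at the fix: <code>requests</code> requires header values to be <code>str</code> or <code>bytes</code>, so the browser's numeric-looking <code>1</code> must be passed as the string <code>"1"</code>. A minimal sketch with no network call; <code>example.com</code> is just a placeholder:</p>
<pre><code>import requests

headers = {"Sec-Gpc": "1"}  # value as a string, not the int 1
req = requests.Request("GET", "https://example.com", headers=headers)
prepared = req.prepare()    # validates the headers without sending anything
assert prepared.headers["Sec-Gpc"] == "1"
</code></pre>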
|
<python><python-requests><cross-browser><request-headers><http-request-attribute>
|
2024-05-14 17:33:58
| 1
| 859
|
The Amateur Coder
|
78,479,625
| 5,317,274
|
How do you mix 1d and 2d variables in a pandas dataframe?
|
<p>I am working with a bucketload of data that has the form:</p>
<pre><code>import pandas as pd
import numpy as np
lat = np.array([80.589, 80.592, 80.595])
lon = np.array([50.268, 50.264, 50.260])
wav = np.array([[486, 605, 666, 821, 777, 719],
[ 65, 60, 68, 67, 72, 64],
[866, 946, 882, 855, 999, 1195]])
print("lat shape:",lat.shape)
print("lon shape:",lon.shape)
print("wav shape:",wav.shape)
# lat shape: (3,)
# lon shape: (3,)
# wav shape: (3, 6)
df = pd.DataFrame({
'Lon': lon,
'Lat': lat,
'Wav': wav})
</code></pre>
<p>which gives the error "ValueError: Per-column arrays must each be 1-dimensional"</p>
<p>I can work around this by converting the guts of wav to a string and back again when I need it, but that is Ugly Gross, and I would like to find the proper way to handle mixed-dimension arrays in a pandas dataframe.</p>
<p>Desired result:</p>
<pre><code>print(df.head(1))
Lon Lat Wav
0 50.268 80.589 [486, 605, 666, 821, 777, 719]
</code></pre>
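<p>A common workaround is to hand pandas a list of row arrays instead of the 2-D array itself, so each cell holds one length-6 array. A sketch on the question's data; the tradeoff is that object-dtype cells lose vectorized speed:</p>
<pre><code>import numpy as np
import pandas as pd

lat = np.array([80.589, 80.592, 80.595])
lon = np.array([50.268, 50.264, 50.260])
wav = np.array([[486, 605, 666, 821, 777, 719],
                [ 65,  60,  68,  67,  72,  64],
                [866, 946, 882, 855, 999, 1195]])

# list(wav) yields the three (6,) rows, which pandas stores as object cells.
df = pd.DataFrame({'Lon': lon, 'Lat': lat, 'Wav': list(wav)})
assert df.shape == (3, 3)
assert df.loc[0, 'Wav'].shape == (6,)
print(df.head(1))
</code></pre>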
|
<python><pandas>
|
2024-05-14 17:00:40
| 2
| 377
|
EBo
|
78,479,519
| 6,652,048
|
Django Update on UniqueConstraint
|
<p>I'm trying to work with UniqueConstraint, but I've been facing some issues updating it the way I want.</p>
<p>I have the following model</p>
<pre><code>class QueryModel(models.Model):
id = models.AutoField(_("query id"), primary_key=True)
user = models.ForeignKey(UserModel, on_delete=models.CASCADE)
name = models.CharField(_("query name"), max_length=150)
status = models.CharField(
_("query status"),
choices=QueryStatus.choices,
default=QueryStatus.ACTIVE,
)
is_favorite = models.BooleanField(_("favorite"), default=False)
date_created = models.DateTimeField(_("date created"), auto_now_add=True)
class Meta(object):
app_label = "app"
db_table = "query"
constraints = [
models.UniqueConstraint(
fields=("user",),
condition=Q(is_favorite=True),
name="unique_favorite_per_user",
)
]
</code></pre>
<pre><code>class QueryView(viewsets.ModelViewSet):
"""
View to list all users in the system.
* Requires token authentication.
* Only admin users are able to access this view.
"""
authentication_classes = [TokenAuthentication]
permission_classes = [IsAuthenticated]
serializer_class = QuerySerializer
aws_client = AWSClient()
def get_queryset(self):
"""
This view should return a list of all the purchases
for the currently authenticated user.
"""
self._paginator = None
return QueryModel.objects.filter(user=self.request.user).order_by("id")
def create(self, request):
serializer = QuerySerializer(
data={
"user": request.user.id,
"name": request.data["name"],
"expiration_date": request.data["expiration_date"],
"json": request.data["json"],
}
)
serializer.is_valid(raise_exception=True)
serializer.save()
return Response(serializer.data, status=status.HTTP_201_CREATED)
def partial_update(self, request, pk=None):
serializer = QuerySerializer(data=request.data, partial=True)
serializer.is_valid(raise_exception=True)
if "is_favorite" in serializer.validated_data.keys():
self.get_queryset().update(
is_favorite=ExpressionWrapper(Q(pk=pk), BooleanField())
)
return super().partial_update(request, pk)
</code></pre>
<pre><code>router = DefaultRouter(trailing_slash=False)
router.register("auth", AuthView, basename="auth")
router.register("query", QueryView, basename="query")
version_info = json.load(open("./version.json"))
@api_view(["GET"])
def health_check(request):
"""Check if the endpoint is on. Used by container orchestration system."""
return Response({"status": "pass", **version_info})
urlpatterns = [
path("", health_check),
*router.urls,
]
</code></pre>
<pre><code>Traceback (most recent call last):
File "/usr/local/lib/python3.11/site-packages/django/db/backends/utils.py", line 105, in _execute
return self.cursor.execute(sql, params)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
psycopg2.errors.UniqueViolation: duplicate key value violates unique constraint "unique_favorite_per_user"
DETAIL: Key (user_workspace_id)=(2) already exists.
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/usr/local/lib/python3.11/site-packages/django/core/handlers/exception.py", line 55, in inner
response = get_response(request)
^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/django/core/handlers/base.py", line 197, in _get_response
response = wrapped_callback(request, *callback_args, **callback_kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/django/views/decorators/csrf.py", line 65, in _view_wrapper
return view_func(request, *args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/rest_framework/viewsets.py", line 124, in view
return self.dispatch(request, *args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/rest_framework/views.py", line 509, in dispatch
response = self.handle_exception(exc)
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/rest_framework/views.py", line 469, in handle_exception
self.raise_uncaught_exception(exc)
File "/usr/local/lib/python3.11/site-packages/rest_framework/views.py", line 480, in raise_uncaught_exception
raise exc
File "/usr/local/lib/python3.11/site-packages/rest_framework/views.py", line 506, in dispatch
response = handler(request, *args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/var/lib/trends_backend/trends/views/query.py", line 61, in partial_update
self.get_queryset().update(
File "/usr/local/lib/python3.11/site-packages/django/db/models/query.py", line 1253, in update
rows = query.get_compiler(self.db).execute_sql(CURSOR)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/django/db/models/sql/compiler.py", line 1990, in execute_sql
cursor = super().execute_sql(result_type)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/django/db/models/sql/compiler.py", line 1562, in execute_sql
cursor.execute(sql, params)
File "/usr/local/lib/python3.11/site-packages/django/db/backends/utils.py", line 122, in execute
return super().execute(sql, params)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/django/db/backends/utils.py", line 79, in execute
return self._execute_with_wrappers(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/django/db/backends/utils.py", line 92, in _execute_with_wrappers
return executor(sql, params, many, context)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/django/db/backends/utils.py", line 100, in _execute
with self.db.wrap_database_errors:
File "/usr/local/lib/python3.11/site-packages/django/db/utils.py", line 91, in __exit__
raise dj_exc_value.with_traceback(traceback) from exc_value
File "/usr/local/lib/python3.11/site-packages/django/db/backends/utils.py", line 105, in _execute
return self.cursor.execute(sql, params)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
django.db.utils.IntegrityError: duplicate key value violates unique constraint "unique_favorite_per_user"
DETAIL: Key (user_id)=(2) already exists.
</code></pre>
<p>So in my system, users can create queries and set them as favorites, but each user can only have 1 favorite query, so I created the UniqueConstraint to enforce that business rule.</p>
<p>Then I would have an endpoint to set a query as favorite, but is there a way for me to update that field without having to check all is_favorite fields of each user and manually setting them to False?</p>
<p>I found <a href="https://stackoverflow.com/questions/71394544/django-change-all-other-boolean-fields-except-one">this</a> StackOverflow post from 2 years ago that proposes a solution, but this solution is not working for me, I would really appreciate some help</p>
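<p>One pattern that sidesteps the single-statement constraint check is two <code>UPDATE</code>s inside one transaction: clear the user's current favorite first, then set the new one. A self-contained sketch using a simplified stand-in model and an in-memory SQLite database; the <code>settings.configure</code> harness and the <code>Query</code> model here are illustrative scaffolding, not your real app:</p>
<pre><code>import django
from django.conf import settings

settings.configure(
    INSTALLED_APPS=[],
    DATABASES={"default": {"ENGINE": "django.db.backends.sqlite3",
                           "NAME": ":memory:"}},
)
django.setup()

from django.db import connection, models, transaction

class Query(models.Model):  # simplified stand-in for QueryModel
    user_id = models.IntegerField()
    is_favorite = models.BooleanField(default=False)

    class Meta:
        app_label = "demo"
        constraints = [
            models.UniqueConstraint(
                fields=("user_id",),
                condition=models.Q(is_favorite=True),
                name="unique_favorite_per_user",
            )
        ]

with connection.schema_editor() as editor:
    editor.create_model(Query)

old = Query.objects.create(user_id=2, is_favorite=True)
new = Query.objects.create(user_id=2, is_favorite=False)

# Clear first, then set: each UPDATE on its own satisfies the partial
# unique index, and the transaction keeps the pair atomic.
with transaction.atomic():
    Query.objects.filter(user_id=2, is_favorite=True).update(is_favorite=False)
    Query.objects.filter(pk=new.pk).update(is_favorite=True)

assert Query.objects.filter(user_id=2, is_favorite=True).count() == 1
</code></pre>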
|
<python><django><django-rest-framework>
|
2024-05-14 16:38:49
| 1
| 303
|
Pedro Daumas
|
78,479,439
| 7,746,472
|
Run Python script from shell script, with Virtualenv, and as module with -m
|
<p>How can I run a Python script from a shell script, with the conditions:</p>
<ol>
<li>it uses a helper script from a parallel folder</li>
<li>it uses the virtual environment for its subfolder</li>
</ol>
<p>My file/folder structure looks like this:</p>
<pre><code>project_folder
-subfolder_prog_1
-prog1.py
-subfolder_prog_2
-prog2.py
-helpers
-helper.py
</code></pre>
<p>Each of the subfolder_prog_x uses their own virtual environment (pipenv).</p>
<p>prog1.py and prog2.py use helper.py like this:</p>
<pre><code>from helpers.helper import helpful_function
</code></pre>
<p>Now - I know I can activate the environment from the shell like this:</p>
<pre><code>cd ~/git/project_folder/subfolder_prog_1
pipenv run python prog1.py
</code></pre>
<p>I also know that if I want to use an import from a parallel folder I have to run the script as a module, from the parent folder like this:</p>
<pre><code>cd ~/git/project_folder/
python -m subfolder_prog_1.prog1
</code></pre>
<p>But how can I combine the two?</p>
<p>I tried</p>
<pre><code>cd ~/git/project_folder/
pipenv run python -m subfolder_prog_1.prog1
</code></pre>
<p>but that gets me a ModuleNotFoundError: No module named 'subfolder_prog_1'.</p>
<p>All hints are appreciated!</p>
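<p>One avenue to test: keep the subfolder's environment active but put the parent on the import path via <code>PYTHONPATH</code>, e.g. <code>cd subfolder_prog_1 && PYTHONPATH=.. pipenv run python -m subfolder_prog_1.prog1</code>. The demo below reproduces the mechanism with plain <code>python3</code> on a throwaway copy of the layout (no pipenv, so it stays self-contained):</p>
<pre><code>demo=$(mktemp -d)
mkdir -p "$demo/project/subfolder_prog_1" "$demo/project/helpers"
echo 'def helpful_function(): return "helped"' > "$demo/project/helpers/helper.py"
printf 'from helpers.helper import helpful_function\nprint(helpful_function())\n' \
    > "$demo/project/subfolder_prog_1/prog1.py"

cd "$demo/project/subfolder_prog_1"
# PYTHONPATH=.. makes both subfolder_prog_1 and helpers importable as
# top-level (namespace) packages, so -m resolves the dotted module path.
PYTHONPATH=.. python3 -m subfolder_prog_1.prog1
</code></pre>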
|
<python><python-3.x><pipenv>
|
2024-05-14 16:20:56
| 2
| 1,191
|
Sebastian
|
78,479,399
| 810,815
|
Running Flask in Debug Mode
|
<p>I am working on my first Flask app. All I am trying to do is to make sure that my Flask server restarts after I edit the hello.py file.</p>
<p>Here is my app structure:
<a href="https://i.sstatic.net/KKh64CGy.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/KKh64CGy.png" alt="enter image description here" /></a></p>
<p>Here is hello.py:</p>
<pre><code>from flask import Flask
app = Flask(__name__)
@app.route("/")
def hello():
return "Hello World"
</code></pre>
<p>and here is my .flask_env file:</p>
<pre><code>FLASK_APP=hello.py
FLASK_DEBUG=True
</code></pre>
<p>When I run the app in debug mode using</p>
<pre><code>flask run --debug
</code></pre>
<p>I get the following error:</p>
<pre><code>Error: Could not locate a Flask application. Use the 'flask --app' option, 'FLASK_APP' environment variable, or a 'wsgi.py' or 'app.py' file in the current directory.
</code></pre>
<p>What am I missing?</p>
|
<python><flask>
|
2024-05-14 16:15:18
| 3
| 9,764
|
john doe
|