| QuestionId (int64: 74.8M to 79.8M) | UserId (int64: 56 to 29.4M) | QuestionTitle (string: 15 to 150 chars) | QuestionBody (string: 40 to 40.3k chars) | Tags (string: 8 to 101 chars) | CreationDate (stringdate: 2022-12-10 09:42:47 to 2025-11-01 19:08:18) | AnswerCount (int64: 0 to 44) | UserExpertiseLevel (int64: 301 to 888k) | UserDisplayName (string: 3 to 30 chars, nullable) |
|---|---|---|---|---|---|---|---|---|
78,553,793
| 4,159,193
|
Use a Pybind11 library created with MSYS2, CMake and Make in Python
|
<p>I have just created a library with Pybind11 on the C++ side, using MSYS2, CMake and Make. (I installed GCC, Make, CMake and Pybind11 with the following commands.)</p>
<pre><code>pacman -S mingw-w64-x86_64-gcc
pacman -S mingw-w64-x86_64-cmake
pacman -S mingw-w64-x86_64-make
pacman -S mingw-w64-x86_64-pybind11
</code></pre>
<p>Here example.cpp</p>
<pre><code>#include <pybind11/pybind11.h>
int add(int i, int j) {
    return i + j;
}

PYBIND11_MODULE(example, m) {
    m.doc() = "pybind11 example plugin"; // optional module docstring
    m.def("add", &add, "A function that adds two numbers");
}
</code></pre>
<p>Here CMakeLists.txt (in the same directory as example.cpp)</p>
<pre><code>cmake_minimum_required(VERSION 3.3)
# Our project is labelled hello
project(example)
find_package(Python COMPONENTS Interpreter Development)
find_package(pybind11 CONFIG)
# pybind11 method:
pybind11_add_module(example example.cpp)
</code></pre>
<p>Then I ran the commands</p>
<pre><code>mkdir buildmingw
cd buildmingw
cmake .. -G "MinGW Makefiles"
mingw32-make
</code></pre>
<p>and a file example.cp311-win_amd64.pyd is produced.</p>
<p>How can I use this file now in Python?</p>
<p>I already tried putting test.py in the same directory as example.cp311-win_amd64.pyd and then I launched:</p>
<p>python test.py</p>
<p>where test.py looks like this:</p>
<pre><code>import example
print( example.add(7,13 ) )
</code></pre>
<p>I get the error message:</p>
<pre><code>D:\Informatik\NachhilfeInfoUni\KadalaSchmittC++\PythonBindC++\buildmingw4>python tes
t.py
Traceback (most recent call last):
File "D:\Informatik\NachhilfeInfoUni\KadalaSchmittC++\PythonBindC++\buildmingw4\test.py", line 1, in <module>
import example
ImportError: DLL load failed while importing example: Das angegebene Modul wurde nicht gefunden.
</code></pre>
<p>What am I doing wrong? How can I correctly import my library?</p>
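A hedged sketch of one common fix for this error: on Python 3.8+ for Windows, directories containing a module's dependent DLLs (here the MinGW runtime, e.g. `libgcc_s_seh-1.dll` and `libstdc++-6.dll`) are no longer found via `PATH` and must be registered with `os.add_dll_directory`. The `C:\msys64\mingw64\bin` path below is an assumption; adjust it to your MSYS2 installation:

```python
import os

def load_mingw_module(module_name, dll_dir=r"C:\msys64\mingw64\bin"):
    """Register the MinGW runtime DLL directory, then import the .pyd module.

    dll_dir is an assumption: point it at wherever your MSYS2/MinGW
    installation keeps libgcc_s_seh-1.dll, libstdc++-6.dll, etc.
    The guards make this a no-op on non-Windows systems.
    """
    if hasattr(os, "add_dll_directory") and os.path.isdir(dll_dir):
        # Python 3.8+ on Windows no longer searches PATH for a module's
        # dependent DLLs; directories must be registered explicitly.
        os.add_dll_directory(dll_dir)
    return __import__(module_name)

# example = load_mingw_module("example")
# print(example.add(7, 13))
```

If this is the cause, running the import from the MSYS2 MinGW shell (where `PATH` is set up and inherited differently) would also behave differently than from `cmd`.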
|
<python><cmake><pip><python-module><pybind11>
|
2024-05-30 08:29:05
| 2
| 546
|
flori10
|
78,553,768
| 2,071,807
|
Running multiple "after" model validators in Pydantic raises only one ValidationError
|
<p>If a model has more than one failing <code>@model_validator(mode="after")</code>, then only one error (from the first failing validator) is raised.</p>
<p>Is it possible to tell Pydantic to run all model validators? Are multiple model validators even supported? The docs seem to be silent on this question.</p>
<p>In the below example, I would expect two validation errors, but Pydantic only raises one:</p>
<pre class="lang-py prettyprint-override"><code>from pydantic import BaseModel, model_validator
class Foo(BaseModel):
    x: int
    y: str
    z: float

    @model_validator(mode="after")
    def x_and_y(self):
        if self.x == 1 and self.y == "2":
            raise ValueError("x and y")
        return self

    @model_validator(mode="after")
    def y_and_z(self):
        if self.y == "2" and int(self.z) == 3:
            raise ValueError("y and z")
        return self

# This should fail both validators, but only one is raised:
Foo(x=1, y="2", z=3.0)
# pydantic_core._pydantic_core.ValidationError: 1 validation error for Foo
# Value error, x and y [type=value_error, input_value={'x': 1, 'y': '2', 'z': 3.0},
# input_type=dict]
# For further information visit https://errors.pydantic.dev/2.7/v/value_error
# If the first validator passes, only then the second one is reported
Foo(x=100, y="2", z=3.0)
# pydantic_core._pydantic_core.ValidationError: 1 validation error for Foo
# Value error, y and z [type=value_error, input_value={'x': 100, 'y': '2', 'z': 3.0}, # input_type=dict]
# For further information visit https://errors.pydantic.dev/2.7/v/value_erro
</code></pre>
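For context, one workaround sketch (not an official Pydantic feature, just manual aggregation): run all the checks inside a single after-validator and raise once with every failure collected, so that no error is swallowed:

```python
from pydantic import BaseModel, model_validator

class Foo(BaseModel):
    x: int
    y: str
    z: float

    @model_validator(mode="after")
    def run_all_checks(self):
        # Collect every failure instead of raising on the first one.
        errors = []
        if self.x == 1 and self.y == "2":
            errors.append("x and y")
        if self.y == "2" and int(self.z) == 3:
            errors.append("y and z")
        if errors:
            raise ValueError("; ".join(errors))
        return self
```

The trade-off is that all the logic lives in one validator; both messages then appear in the single ValidationError.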
|
<python><pydantic>
|
2024-05-30 08:25:27
| 0
| 79,775
|
LondonRob
|
78,553,761
| 10,750,541
|
Unexpected behaviour in seaborn subplots when the x axis is being shared
|
<p>I am having trouble with <code>sharex=True</code> when I create two graphs with <code>seaborn</code> that should share the same horizontal axis.</p>
<p>My example is below, and even with <code>sharex=False</code> I get a warning that I cannot interpret.</p>
<p>I also print the xticklabels below, and they are not as expected. Any help solving the mystery would be greatly appreciated.</p>
<pre><code>import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import matplotlib.patches as patches
import seaborn as sns

# Generate the years and random integers for 'total' and 'new', where 'total' > 'new' and calculate the cumulative sum for 'trend'
years = np.arange(2000, 2024)
np.random.seed(0) # Set seed for reproducibility
new_values = np.random.randint(1, 11, size=len(years))
total_values = np.random.randint(2, 21, size=len(years))
total_values = np.maximum(total_values, new_values + 1)
trend_values = np.cumsum(new_values)
# DataFrame
example = pd.DataFrame({
'year': years,
'new': new_values,
'total': total_values,
'trend': trend_values
})
# Work on the graph
total_color = 'steelblue'
new_drugs_color = 'skyblue'
fig, axes = plt.subplots(2, 1, sharex = False, figsize = (10,7), gridspec_kw={'height_ratios': [3, 1.5]})
bottom, top = 2000, 2023
sns.barplot(example, x = 'year', y = 'total', facecolor = total_color, edgecolor = total_color, lw=1.8, width=0.7, ax=axes[0])
bottomplot = sns.barplot(example, x = 'year', y = 'new', color = new_drugs_color, edgecolor = total_color, lw=1.8, width=0.7, hatch="//", ax = axes[0])
topbar = plt.Rectangle((0,0),1,1,fc = total_color, edgecolor = total_color, lw=1.8)
bottombar = patches.Rectangle((0,0),1,1, facecolor = new_drugs_color, edgecolor = total_color, lw=1.8, hatch="////")
axes[0].set_ylim(top = example['total'].max() + 8)
axes[0].margins(y=0.01)
l = axes[0].legend([topbar, bottombar], ['total', 'newly appeared'], ncol = 1, prop={'size':7}, loc = 'upper left')
l.draw_frame(False)
percentages = []
for total, new in zip(axes[0].containers[0].datavalues, axes[0].containers[1].datavalues):
    percentages.append(f'''{int(new *100 / total)}%''')
# print(percentages)
axes[0].bar_label(axes[0].containers[0], fontsize=8, color = total_color, rotation = 0, padding = 15, fontweight='bold')
axes[0].bar_label(axes[0].containers[0], labels = axes[0].containers[1].datavalues.astype(int), fontsize=8, color = new_drugs_color, rotation = 0, padding = 5, fontweight='bold')
axes[0].bar_label(axes[0].containers[1], labels = percentages, fontsize=7, color = new_drugs_color, padding = 1.5, rotation = 0, fontstyle='italic')
axes[0].set_xticklabels(axes[0].get_xticklabels(), rotation=30, ha='right', rotation_mode='anchor', fontsize = 8)
axes[0].set_xlabel('', labelpad = 5, loc = 'center', fontsize = 10)
axes[0].set_ylabel("Count of total", labelpad = 15, loc = 'center', fontsize = 10)
axes[1] = sns.lineplot(example, x = 'year', y = 'trend', markers=True, marker="o", legend = 'full', markersize = 4, color=new_drugs_color)
axes[1].set_ylabel('''Trend of new appearances''', labelpad = 15, loc = 'center')
axes[1].set_ylim(top = example['trend'].max() + 10)
axes[1].margins(y=0.01)
axes[1].set_xticklabels(axes[1].get_xticklabels(), rotation=0, ha='right', rotation_mode='anchor', fontsize = 10)
axes[1].set_xlabel("year", labelpad = 5, loc = 'center', fontsize = 10)
plt.xticks(rotation=30, ha='right', rotation_mode='anchor', fontsize = 8)
plt.tight_layout()
plt.title ('''TITLE''', y = 3.4, fontsize = 11, loc = 'left')
print(axes[0].get_xticklabels())
print(axes[1].get_xticklabels())
plt.show()
</code></pre>
<p>The graph I get is:
<a href="https://i.sstatic.net/HlumnoVO.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/HlumnoVO.png" alt="graph" /></a>
and the warning along with the <code>x_ticklabels</code> are these:</p>
<pre><code>/var/folders/9c/w4pm5s1x1v7bgs372yrqwjfm0000gn/T/ipykernel_97104/101281620.py:44: UserWarning: set_ticklabels() should only be used with a fixed number of ticks, i.e. after set_ticks() or using a FixedLocator.
axes[0].set_xticklabels(axes[0].get_xticklabels(), rotation=30, ha='right', rotation_mode='anchor', fontsize = 8)
/var/folders/9c/w4pm5s1x1v7bgs372yrqwjfm0000gn/T/ipykernel_97104/101281620.py:54: UserWarning: set_ticklabels() should only be used with a fixed number of ticks, i.e. after set_ticks() or using a FixedLocator.
axes[1].set_xticklabels(axes[1].get_xticklabels(), rotation=0, ha='right', rotation_mode='anchor', fontsize = 10)
[Text(0, 0, '2000'), Text(1, 0, '2001'), Text(2, 0, '2002'), Text(3, 0, '2003'), Text(4, 0, '2004'), Text(5, 0, '2005'), Text(6, 0, '2006'), Text(7, 0, '2007'), Text(8, 0, '2008'), Text(9, 0, '2009'), Text(10, 0, '2010'), Text(11, 0, '2011'), Text(12, 0, '2012'), Text(13, 0, '2013'), Text(14, 0, '2014'), Text(15, 0, '2015'), Text(16, 0, '2016'), Text(17, 0, '2017'), Text(18, 0, '2018'), Text(19, 0, '2019'), Text(20, 0, '2020'), Text(21, 0, '2021'), Text(22, 0, '2022'), Text(23, 0, '2023')]
[Text(1995.0, 0, '1995'), Text(2000.0, 0, '2000'), Text(2005.0, 0, '2005'), Text(2010.0, 0, '2010'), Text(2015.0, 0, '2015'), Text(2020.0, 0, '2020'), Text(2025.0, 0, '2025')]
</code></pre>
<p>Also if I try to use <code>sharex=True</code>, the graph becomes totally messed up.
<a href="https://i.sstatic.net/Kn20SdgG.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Kn20SdgG.png" alt="graph 2" /></a></p>
|
<python><seaborn><subplot>
|
2024-05-30 08:24:44
| 0
| 532
|
Newbielp
|
78,553,628
| 8,480,460
|
Displaying sites every 10 seconds with Python and Selenium from a CSV
|
<p>I am building an automation with Python. It takes a list of websites from a .csv file and
performs click activities on them. Those activities work correctly, so I will omit parts of the code.</p>
<p>I adjusted a line of code which is this one:</p>
<pre><code>if time.time() - start_time > 10:
    print("Time limit exceeded, moving to the next site")
    break
</code></pre>
<p>This line is responsible for moving to the next site in the list after 10 seconds, regardless of whether all activities have been completed or not. The robot should behave this way, performing the activities within a time limit, without needing to successfully complete all activities.</p>
<p>All sites are working correctly, except for the w3schools site, which freezes and does not move to the next one. All sites should remain open for a maximum of 10 seconds.</p>
<p>How can I force certain sites to move to the next one? This automation needs to be dynamic, as it will receive a .csv spreadsheet with 600 sites.</p>
<p>Below is the code for the automation:</p>
<pre><code>import time
import csv
from selenium import webdriver
from selenium.webdriver.chrome.service import Service
from selenium.webdriver.chrome.options import Options
from selenium.common.exceptions import NoSuchElementException, ElementNotInteractableException
from webdriver_manager.chrome import ChromeDriverManager
# Install a specific version of ChromeDriver
driver_path = ChromeDriverManager(driver_version="125.0.6422.77").install()

if __name__ == "__main__":
    # Selenium WebDriver configuration
    chrome_options = Options()
    chrome_options.add_argument("--start-fullscreen")  # Open the browser in fullscreen mode
    driver = webdriver.Chrome(service=Service(driver_path), options=chrome_options)

    # Reading the CSV file
    with open('websites2.csv', newline='') as csvfile:
        reader = csv.DictReader(csvfile)
        for row in reader:
            website = row['website']
            try:
                driver.get(website)
                start_time = time.time()  # Capture the start time
                if time.time() - start_time > 10:
                    print("Time limit exceeded, moving to the next site")
                    break
            except Exception as e:
                print(f"Error processing {website}: {e}")

    # Close the browser at the end of the process
    driver.quit()
</code></pre>
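A hedged observation on the snippet above: the deadline check sits immediately after `start_time` is captured, so `time.time() - start_time` is always roughly zero and the branch never fires; and `break` leaves the whole CSV loop instead of moving to the next site. A minimal stdlib sketch of the intended behaviour checks the deadline inside the per-site activity loop (for a hang inside `driver.get` itself, Selenium's `driver.set_page_load_timeout(10)` combined with catching the timeout and `continue`-ing would be the complementary fix):

```python
import time

def run_activities(activities, budget_s=10.0):
    """Run as many activities as fit into budget_s seconds, then stop.

    `activities` is a list of zero-argument callables standing in for
    the click actions on one site; the deadline is re-checked before
    each one, so a slow site cannot block the rest of the CSV.
    """
    start = time.monotonic()
    results = []
    for act in activities:
        if time.monotonic() - start > budget_s:
            print("Time limit exceeded, moving to the next site")
            break
        results.append(act())
    return results
```

In the real script, `break` would become `continue` at the level of the site loop so the run proceeds to the next row of the CSV.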
|
<python><csv><selenium-chromedriver><rpa>
|
2024-05-30 07:59:15
| 1
| 1,094
|
claudiopb
|
78,553,590
| 241,552
|
Can I add abbreviations for imports to PyCharm?
|
<p>There are a number of industry-accepted import abbreviations, such as <code>import pandas as pd</code>, and there are a number of abbreviation conventions on our project. Is there a way to add them to PyCharm, so that I could type, say <code>mm.Schema</code>, command-click on the <code>mm</code> and have PyCharm add <code>import marshmallow as mm</code> or suggest that import when I type <code>import marshmallow</code>?</p>
|
<python><pycharm>
|
2024-05-30 07:50:45
| 0
| 9,790
|
Ibolit
|
78,553,427
| 19,356,117
|
How to select data in xarray Dataset with multiple slices once?
|
<p>I have an xarray dataset that looks like this:</p>
<pre><code>Dimensions: (time: 24, longitude: 701, latitude: 701)
Coordinates:
* time (time) datetime64[ns] 192B 2023-06-01 ... 2023-06-01T23:00:00
* longitude (longitude) float32 3kB 70.0 70.1 70.2 70.3 ... 139.8 139.9 140.0
* latitude (latitude) float32 3kB 65.0 64.9 64.8 64.7 ... -4.8 -4.9 -5.0
</code></pre>
<p>And I have some lists of longitudes and latitudes such as:</p>
<pre><code>bboxes = [[122.3, 122.9, 40.3, 39.8], [-124.1, -123.7, 42.4, 42.1]]
</code></pre>
<p>For a single bounding box, I can select data from the dataset like this:</p>
<pre><code>bbox = [122.3, 122.9, 40.3, 39.8]
res = dataset.sel(longitude=slice(bbox[0], bbox[1]), latitude=slice(bbox[2], bbox[3]))
</code></pre>
<p>However, there may be hundreds of lists in <code>bboxes</code>, so selecting data for all of these slices becomes a difficult task: if I loop over them and use <code>xarray.merge</code>, performance is poor.
So how can I read data from the dataset quickly and elegantly?</p>
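One hedged sketch in plain NumPy (assuming latitude is stored descending, as in the coordinates above, so each bbox lists lon_min, lon_max, lat_max, lat_min): build a single boolean mask covering all boxes, then apply it once, e.g. via `ds.where(mask)`, instead of hundreds of `sel`/`merge` round-trips:

```python
import numpy as np

def bbox_mask(lon, lat, bboxes):
    """Boolean mask over the (lat, lon) grid, True inside any bbox.

    Each bbox is (lon_min, lon_max, lat_max, lat_min), matching the
    question's slice order for a descending latitude coordinate.
    """
    lon2d, lat2d = np.meshgrid(lon, lat)  # shapes: (len(lat), len(lon))
    mask = np.zeros(lon2d.shape, dtype=bool)
    for lon_min, lon_max, lat_max, lat_min in bboxes:
        # OR each box into one combined mask; a few hundred boxes is a
        # few hundred cheap vectorized passes over the grid.
        mask |= ((lon2d >= lon_min) & (lon2d <= lon_max)
                 & (lat2d <= lat_max) & (lat2d >= lat_min))
    return mask
```

With xarray, the mask can be wrapped as a DataArray on the latitude/longitude dims and passed to `dataset.where(...)`; cells outside every box become NaN, so one operation replaces the per-bbox merge loop.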
|
<python><gis><python-xarray>
|
2024-05-30 07:16:31
| 1
| 1,115
|
forestbat
|
78,553,412
| 8,934,639
|
How to create a datadog gauge metric in python Lambda?
|
<p>We have a Lambda written in Python, and we use the datadog-lambda library to report metrics to Datadog.
Is there a way to create a gauge metric using this library?</p>
|
<python><aws-lambda><datadog>
|
2024-05-30 07:13:25
| 1
| 301
|
Chedva
|
78,553,269
| 451,878
|
Use "with" in all CRUD functions
|
<p>We have some issues with FastAPI (problems reading data and "lost" data from the database), so we use <code>with</code> instead of <code>Depends</code>.</p>
<p>Is it a good practice?</p>
<pre><code>with SessionManager() as db:
    @router.get("/{id}", response_model=...)
    async def get_by_id(id: UUID) -> JSONResponse:
        result = crud_xxx.get_by_id(db, id)

    @router.get("/{text}", response_model=...)
    async def get_by_text(text: str) -> JSONResponse:
        result = crud_xxx.get_by_text(db, text)
</code></pre>
<p>and before:</p>
<pre><code>@router.get("/{id}", response_model=...)
async def get_by_id(id: UUID, db: SessionLocal = Depends(deps.get_db)) -> JSONResponse:
    result = crud_xxx.get_by_id(db, id)
</code></pre>
<p>Is there another way to do this?</p>
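For comparison, a self-contained sketch of why the `Depends` pattern is usually preferred (`DummySession` is a stand-in for a real SQLAlchemy session; FastAPI drives the generator via `Depends(get_db)`): a module-level `with SessionManager() as db` opens one session at import time and shares it across all concurrent requests, which is a plausible source of "lost" reads, while a generator dependency gives each request its own session and guarantees it is closed:

```python
class DummySession:
    """Stand-in for a SQLAlchemy session so the sketch runs anywhere."""
    def __init__(self):
        self.closed = False

    def close(self):
        self.closed = True

def get_db():
    """Yield one fresh session per request and always close it."""
    db = DummySession()
    try:
        yield db
    finally:
        db.close()

# FastAPI would call this generator through Depends(get_db); driving it
# by hand shows the per-request lifecycle:
gen = get_db()
session = next(gen)   # request begins: a new session is created
gen.close()           # request ends: the finally block closes it
print(session.closed)
```

Each request gets an isolated session object, so concurrent requests never share transaction state, and the `finally` block returns the connection even when the handler raises.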
|
<python><fastapi><crud>
|
2024-05-30 06:39:23
| 2
| 1,481
|
James
|
78,553,180
| 7,717,176
|
List of values of a key in Qdrant
|
<p>I have a collection named <code>Docs</code> in the Qdrant database. There are multiple points in this collection. Each point has metadata including the title of the document. I want to extract a list of the titles. I tried the following, but it returned nothing. How can I do it?</p>
<pre><code>from qdrant_client import QdrantClient, models
client = QdrantClient(url="http://localhost:6333")
result = client.scroll(
    collection_name="Docs",
    scroll_filter=models.Filter(
        must=[
            models.FieldCondition(
                key="meta.title",
                match=models.MatchText(text='*')
            )
        ]
    )
)
result
</code></pre>
|
<python><vector-database><qdrant><qdrantclient>
|
2024-05-30 06:16:43
| 2
| 391
|
HMadadi
|
78,553,026
| 395,857
|
How can I save a tokenizer from Huggingface transformers to ONNX?
|
<p>I load a tokenizer and BERT model from Huggingface transformers, and export the BERT model to ONNX:</p>
<pre><code>from transformers import AutoTokenizer, AutoModelForTokenClassification
import torch
# Load the tokenizer
tokenizer = AutoTokenizer.from_pretrained("huawei-noah/TinyBERT_General_4L_312D")
# Load the model
model = AutoModelForTokenClassification.from_pretrained("huawei-noah/TinyBERT_General_4L_312D")
# Example usage
text = "Hugging Face is creating a tool that democratizes AI."
inputs = tokenizer(text, return_tensors="pt")
# We need to use the inputs to trace the model
input_names = ["input_ids", "attention_mask"]
output_names = ["output"]
# Export the model to ONNX
torch.onnx.export(
    model,                           # model being run
    (inputs["input_ids"], inputs["attention_mask"]),  # model input (or a tuple for multiple inputs)
    "TinyBERT_General_4L_312D.onnx", # where to save the model
    export_params=True,              # store the trained parameter weights inside the model file
    opset_version=11,                # the ONNX version to export the model to
    do_constant_folding=True,        # whether to execute constant folding for optimization
    input_names=input_names,         # the model's input names
    output_names=output_names,       # the model's output names
    dynamic_axes={                   # variable length axes
        "input_ids": {0: "batch_size"},
        "attention_mask": {0: "batch_size"},
        "output": {0: "batch_size"}
    }
)
print("Model has been successfully exported to ONNX")
</code></pre>
<p>Requirements:</p>
<pre><code>pip install transformers torch onnx
</code></pre>
<p>How should I save the tokenizer to ONNX?</p>
|
<python><huggingface-transformers><onnx><huggingface-tokenizers>
|
2024-05-30 05:28:12
| 1
| 84,585
|
Franck Dernoncourt
|
78,552,892
| 2,155,026
|
KeyError in nx.algorithms.approximation.traveling_salesman_problem()
|
<p>I have written code that generates 10 random graphs and solves the traveling salesman problem using NetworkX. I am getting <code>KeyError: 1</code>.</p>
<p>Following is my code.</p>
<pre><code>import networkx as nx
import matplotlib.pyplot as plt
import numpy as np
import time
import csv
import pandas as pd
# Function to generate a random graph
def generate_random_graph(num_nodes, edge_prob):
    return nx.erdos_renyi_graph(num_nodes, edge_prob)

# Function to solve TSP and track time
def solve_tsp(graph):
    start_time = time.time()
    cycle = nx.algorithms.approximation.traveling_salesman_problem(graph, cycle=True)
    end_time = time.time()
    elapsed_time = end_time - start_time
    return cycle, elapsed_time

# Number of nodes in the graph
num_nodes = 10

data_points = []
csv_data = []

# Generate 10 random graphs, solve TSP, and record data
for i in range(10):
    edge_prob = np.random.rand()  # Random edge probability
    graph = generate_random_graph(num_nodes, edge_prob)
    density = nx.density(graph)

    # Solve TSP
    cycle, elapsed_time = solve_tsp(graph)

    # Record data
    data_points.append((density, elapsed_time))

    # Prepare adjacency matrix for CSV
    adj_matrix = nx.adjacency_matrix(graph).todense().tolist()
    csv_data.append([adj_matrix, density, elapsed_time, cycle])
</code></pre>
<p>I am getting the following error:</p>
<pre><code>KeyError Traceback (most recent call last)
Cell In[7], line 8
5 density = nx.density(graph)
7 # Solve TSP
----> 8 cycle, elapsed_time = solve_tsp(graph)
10 # Record data
11 data_points.append((density, elapsed_time))
Cell In[3], line 4, in solve_tsp(graph)
2 def solve_tsp(graph):
3 start_time = time.time()
----> 4 cycle = nx.algorithms.approximation.traveling_salesman_problem(graph, cycle=True)
5 end_time = time.time()
6 elapsed_time = end_time - start_time
File ~/anaconda3/envs/py39/lib/python3.9/site-packages/networkx/utils/backends.py:412, in _dispatch.__call__(self, backend, *args, **kwargs)
409 def __call__(self, /, *args, backend=None, **kwargs):
410 if not backends:
411 # Fast path if no backends are installed
--> 412 return self.orig_func(*args, **kwargs)
414 # Use `backend_name` in this function instead of `backend`
415 backend_name = backend
File ~/anaconda3/envs/py39/lib/python3.9/site-packages/networkx/algorithms/approximation/traveling_salesman.py:320, in traveling_salesman_problem(G, weight, nodes, cycle, method)
318 if u == v:
319 continue
--> 320 GG.add_edge(u, v, weight=dist[u][v])
321 best_GG = method(GG, weight)
323 if not cycle:
324 # find and remove the biggest edge
KeyError: 1
</code></pre>
<p>How can I resolve it?</p>
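A hedged guess at the cause, with a sketch: `erdos_renyi_graph` with a small random `edge_prob` can produce a disconnected graph, in which case the TSP helper has no shortest-path distance between some pair of nodes, and `dist[u][v]` raises the `KeyError`. Resampling until the graph is connected (one workaround among several) avoids it:

```python
import networkx as nx
import numpy as np

def generate_connected_graph(num_nodes, rng):
    """Sample Erdos-Renyi graphs until one is connected.

    The TSP approximation needs a path between every pair of nodes, so
    a disconnected sample cannot be used as-is.
    """
    while True:
        edge_prob = rng.random()
        graph = nx.erdos_renyi_graph(num_nodes, edge_prob,
                                     seed=int(rng.integers(1 << 30)))
        if nx.is_connected(graph):
            return graph

g = generate_connected_graph(10, np.random.default_rng(0))
```

Alternatively, keeping only the largest connected component, or drawing `edge_prob` above the connectivity threshold (roughly `ln(n)/n`), would serve the same purpose; the right choice depends on what the density-vs-time experiment is meant to measure.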
|
<python><graph><networkx><traveling-salesman><jsnetworkx>
|
2024-05-30 04:41:55
| 1
| 1,144
|
Omar Shehab
|
78,552,821
| 1,107,474
|
Simple example of C++ calling Python using Pybind11 and CMake
|
<p>I have a simple C++ main.cpp file with two vectors I'd like to pass to a Python function:</p>
<pre><code>#include <vector>

int main()
{
    std::vector<double> vec1;
    std::vector<double> vec2;
}
</code></pre>
<p>python.py:</p>
<pre><code>def python_function(list_1, list_2):
    # Code
</code></pre>
<p>I've git-cloned Pybind11 and make-installed it. Now I'd like to write a simple CMake script to compile the above.</p>
<p>I found this page:</p>
<p><a href="https://pybind11.readthedocs.io/en/stable/compiling.html" rel="nofollow noreferrer">https://pybind11.readthedocs.io/en/stable/compiling.html</a></p>
<p>which contains some details on CMake with Pybind but I don't understand their example:</p>
<pre><code>cmake_minimum_required(VERSION 3.4...3.18)
project(example LANGUAGES CXX)
find_package(pybind11 REQUIRED)
pybind11_add_module(example example.cpp)
</code></pre>
<p>However, I don't want the C++ executable to be a Pybind module. I'd like my Python file/functions to be callable from C++.</p>
<p>Would someone please help me finish the CMake to compile this small project? I'm also unsure how I inform Pybind where my python file is located.</p>
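For what it's worth, a hedged sketch (the project name and minimum CMake version are assumptions, and it presumes pybind11 was installed where `find_package` can see it): calling Python from a C++ executable uses pybind11's embedding target, `pybind11::embed`, rather than `pybind11_add_module`:

```cmake
cmake_minimum_required(VERSION 3.15)
project(call_python LANGUAGES CXX)

find_package(Python COMPONENTS Interpreter Development REQUIRED)
find_package(pybind11 CONFIG REQUIRED)

add_executable(call_python main.cpp)
# pybind11::embed links the interpreter into the executable; main.cpp can
# then create a py::scoped_interpreter, append the directory containing
# python.py to sys.path, and call py::module_::import("python").
target_link_libraries(call_python PRIVATE pybind11::embed)
```

On the C++ side, the location of the Python file is not something pybind11 discovers on its own: the usual approach is to put its directory on `sys.path` at runtime before importing it.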
|
<python><c++><cmake><pybind11>
|
2024-05-30 04:09:18
| 1
| 17,534
|
intrigued_66
|
78,552,745
| 424,957
|
Why are the results different between Google Maps and Matplotlib?
|
<p>I plotted a KML file using matplotlib and also displayed it on Google Maps, but the lines are different. Can anyone tell me why? How can I plot it exactly as Google Maps does?</p>
<pre class="lang-xml prettyprint-override"><code><?xml version="1.0" encoding="UTF-8"?>
<kml xmlns="http://www.opengis.net/kml/2.2">
<Document>
<name>2019年06月05日黄海海阳泰瑞号长征十一号海</name>
<Style id="line-000000-1200-normal">
<LineStyle>
<color>ff000000</color>
<width>1.2</width>
</LineStyle>
</Style>
<Style id="line-000000-1200-highlight">
<LineStyle>
<color>ff000000</color>
<width>1.8</width>
</LineStyle>
</Style>
<StyleMap id="line-000000-1200">
<Pair>
<key>normal</key>
<styleUrl>#line-000000-1200-normal</styleUrl>
</Pair>
<Pair>
<key>highlight</key>
<styleUrl>#line-000000-1200-highlight</styleUrl>
</Pair>
</StyleMap>
<Style id="poly-3949AB-1000-255-normal">
<LineStyle>
<color>ffab4939</color>
<width>1</width>
</LineStyle>
<PolyStyle>
<color>ffab4939</color>
<fill>1</fill>
<outline>1</outline>
</PolyStyle>
</Style>
<Style id="poly-3949AB-1000-255-highlight">
<LineStyle>
<color>ffab4939</color>
<width>1.5</width>
</LineStyle>
<PolyStyle>
<color>ffab4939</color>
<fill>1</fill>
<outline>1</outline>
</PolyStyle>
</Style>
<StyleMap id="poly-3949AB-1000-255">
<Pair>
<key>normal</key>
<styleUrl>#poly-3949AB-1000-255-normal</styleUrl>
</Pair>
<Pair>
<key>highlight</key>
<styleUrl>#poly-3949AB-1000-255-highlight</styleUrl>
</Pair>
</StyleMap>
<Placemark>
<name>ZSHA/A2585/19</name>
<description><![CDATA[description: <br>名前: ]]></description>
<styleUrl>#line-000000-1200</styleUrl>
<ExtendedData>
<Data name="description">
<value/>
</Data>
<Data name="名前">
<value/>
</Data>
</ExtendedData>
<LineString>
<tessellate>1</tessellate>
<coordinates>
121.409097,34.899247,0
121.407996,34.881579,0
121.404801,34.864084,0
121.399542,34.846932,0
121.392271,34.830286,0
121.38306,34.814308,0
121.371999,34.79915,0
121.359193,34.784958,0
121.344768,34.771868,0
121.328861,34.760006,0
121.311627,34.749485,0
121.29323,34.740406,0
121.273848,34.732857,0
121.253665,34.726909,0
121.232876,34.722619,0
121.211679,34.720028,0
121.190278,34.719162,0
121.168876,34.720028,0
121.147679,34.722619,0
121.12689,34.726909,0
121.106708,34.732857,0
121.087325,34.740406,0
121.068929,34.749485,0
121.051694,34.760006,0
121.035788,34.771868,0
121.021362,34.784958,0
121.008557,34.79915,0
120.997495,34.814308,0
120.988284,34.830286,0
120.981014,34.846932,0
120.975755,34.864084,0
120.972559,34.881579,0
120.971459,34.899247,0
120.972466,34.91692,0
120.975572,34.934426,0
120.980748,34.951596,0
120.987946,34.968266,0
120.997098,34.984274,0
121.008115,34.999465,0
121.020893,35.013694,0
121.03531,35.026821,0
121.051225,35.038721,0
121.068487,35.049279,0
121.086928,35.058391,0
121.10637,35.06597,0
121.126625,35.071942,0
121.147496,35.07625,0
121.168783,35.078851,0
121.190278,35.079721,0
121.211773,35.078851,0
121.233059,35.07625,0
121.253931,35.071942,0
121.274186,35.06597,0
121.293628,35.058391,0
121.312069,35.049279,0
121.32933,35.038721,0
121.345246,35.026821,0
121.359662,35.013694,0
121.37244,34.999465,0
121.383458,34.984274,0
121.392609,34.968266,0
121.399807,34.951596,0
121.404984,34.934426,0
121.40809,34.91692,0
121.409097,34.899247,0
</coordinates>
</LineString>
</Placemark>
<Placemark>
<name>ZSHA/A2637/19</name>
<description><![CDATA[description: <br>名前: ]]></description>
<styleUrl>#poly-3949AB-1000-255</styleUrl>
<ExtendedData>
<Data name="description">
<value/>
</Data>
<Data name="名前">
<value/>
</Data>
</ExtendedData>
<Polygon>
<outerBoundaryIs>
<LinearRing>
<tessellate>1</tessellate>
<coordinates>
124,33.680556,0
123.628889,33.235556,0
124.598652,32.687742,0
124.966694,33.118961,0
124,33.680556,0
</coordinates>
</LinearRing>
</outerBoundaryIs>
</Polygon>
</Placemark>
<Placemark>
<name>RKRR/A0864/19</name>
<description><![CDATA[description: <br>名前: ]]></description>
<styleUrl>#poly-3949AB-1000-255</styleUrl>
<ExtendedData>
<Data name="description">
<value/>
</Data>
<Data name="名前">
<value/>
</Data>
</ExtendedData>
<Polygon>
<outerBoundaryIs>
<LinearRing>
<tessellate>1</tessellate>
<coordinates>
124,33.680556,0
124,33.023056,0
124.598652,32.687742,0
124.966694,33.118961,0
124,33.680556,0
</coordinates>
</LinearRing>
</outerBoundaryIs>
</Polygon>
</Placemark>
<Placemark>
<name>RJJJ/J2462/19</name>
<description><![CDATA[description: <br>名前: ]]></description>
<styleUrl>#poly-3949AB-1000-255</styleUrl>
<ExtendedData>
<Data name="description">
<value/>
</Data>
<Data name="名前">
<value/>
</Data>
</ExtendedData>
<Polygon>
<outerBoundaryIs>
<LinearRing>
<tessellate>1</tessellate>
<coordinates>
135.728611,25.744722,0
135.224722,25.186111,0
136.487549,24.20233,0
137.016389,24.755556,0
135.728611,25.744722,0
</coordinates>
</LinearRing>
</outerBoundaryIs>
</Polygon>
</Placemark>
<Placemark>
<name>海上45°轨道</name>
<description><![CDATA[description: <br>名前: Route]]></description>
<styleUrl>#line-000000-1200</styleUrl>
<ExtendedData>
<Data name="description">
<value/>
</Data>
<Data name="名前">
<value>Route</value>
</Data>
</ExtendedData>
<LineString>
<tessellate>1</tessellate>
<coordinates>
121.1829579,34.9082773,0
138.2007802,23.3168334,0
</coordinates>
</LineString>
</Placemark>
</Document>
</kml>
</code></pre>
<p><a href="https://i.sstatic.net/ykkxVEY0.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/ykkxVEY0.jpg" alt="KML file on Google Map" /></a>
<a href="https://i.sstatic.net/xzXq6fiI.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/xzXq6fiI.jpg" alt="KML file plotted by Matplotlib" /></a></p>
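A hedged explanation with a sketch: with the KML's tessellate flag set, Google Maps draws each segment along the great circle (a geodesic on the sphere), while a naive matplotlib plot joins the endpoints with a straight line in longitude/latitude space, so a long segment like the route placemark looks visibly different. Interpolating points along the great circle before plotting should reproduce the curved look (pure-stdlib spherical linear interpolation; assumes a spherical Earth, which is close enough for display):

```python
import math

def great_circle_points(lon1, lat1, lon2, lat2, n=51):
    """Return n (lon, lat) points along the great circle between two points."""
    def to_xyz(lon, lat):
        lon, lat = math.radians(lon), math.radians(lat)
        return (math.cos(lat) * math.cos(lon),
                math.cos(lat) * math.sin(lon),
                math.sin(lat))

    a, b = to_xyz(lon1, lat1), to_xyz(lon2, lat2)
    dot = max(-1.0, min(1.0, sum(p * q for p, q in zip(a, b))))
    omega = math.acos(dot)  # angle between the two position vectors
    points = []
    for i in range(n):
        t = i / (n - 1)
        if omega < 1e-12:        # coincident endpoints: nothing to interpolate
            x, y, z = a
        else:                    # spherical linear interpolation (slerp)
            s1 = math.sin((1 - t) * omega) / math.sin(omega)
            s2 = math.sin(t * omega) / math.sin(omega)
            x, y, z = (s1 * p + s2 * q for p, q in zip(a, b))
        points.append((math.degrees(math.atan2(y, x)),
                       math.degrees(math.asin(max(-1.0, min(1.0, z))))))
    return points

# The route placemark's endpoints from the KML above:
path = great_circle_points(121.1829579, 34.9082773, 138.2007802, 23.3168334)
```

Plotting `path` instead of the two raw endpoints gives the northward-bulging curve that Google Maps shows; libraries such as cartopy or pyproj's `Geod` do the same interpolation on a proper ellipsoid.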
|
<python><matplotlib><geopandas><kml>
|
2024-05-30 03:27:31
| 1
| 2,509
|
mikezang
|
78,552,650
| 270,043
|
PySpark dataframes not matching the headers
|
<p>I have a bunch of parquet files written over a period of 6 months, partitioned by the date and hour when they were created. Over these 6 months, the headers have changed, so the data schema for the parquet files created on Jan 1 is different from the files created on May 1.</p>
<p>I'm trying to read the parquet files created on Jan 1 using PySpark into dataframes (hour by hour), then write them back into parquet files in another folder with a larger block size. The problem is when I compare the headers of the newly created parquet files with that of the original parquet files, they are different.</p>
<p>Here's what I have:</p>
<pre><code># Code to read into dataframes and write to parquet files
df = spark.read.parquet("original_folder/")
df.createOrReplaceTempView("all_records")
df1 = spark.sql("select * from all_records where datestr='20240101' and hourstr = '0'")
df1.coalesce(80).write.mode("append").partitionBy("datestr","hourstr").option("parquet.block.size", 134217728).parquet("new_folder/")
# Code to read from original parquet file
df_orig = spark.read.parquet("original_folder/datestr=20240101/hourstr=0/")
</code></pre>
<p>The headers in <code>df1</code> and <code>df_orig</code> are different, even for the exact same records. Why is that, and how can I extract the actual data with the correct schema from the parquet files?</p>
|
<python><dataframe><pyspark><parquet>
|
2024-05-30 02:36:31
| 1
| 15,187
|
Rayne
|
78,552,507
| 12,834,785
|
CORS violation only on Apple Silicon Macs in Flask app
|
<p>I have reproduced this error with the following Python backend API using Flask.</p>
<pre class="lang-py prettyprint-override"><code>from flask import Flask, jsonify, request
from flask_cors import CORS
app = Flask(__name__)
CORS(app)  # Enable CORS for all routes and origins

@app.route('/api/hello', methods=['GET'])
def hello():
    name = request.args.get('name', 'World')
    message = f'Hello, {name}!'
    return jsonify({'message': message})

@app.route('/api/data', methods=['POST'])
def receive_data():
    data = request.get_json()
    # Process the received data
    processed_data = process_data(data)
    return jsonify({'processed_data': processed_data})

def process_data(data):
    # Apply upper() to all string values in the dictionary
    processed_data = {}
    for key, value in data.items():
        if isinstance(value, str):
            processed_data[key] = value.upper()
        else:
            processed_data[key] = value
    return processed_data

if __name__ == '__main__':
    app.run()
</code></pre>
<p>And I have a front end as follows:</p>
<pre class="lang-html prettyprint-override"><code><!DOCTYPE html>
<html>
<head>
<title>Flask API Test</title>
</head>
<body>
<h1>Flask API Test</h1>
<h2>GET Request</h2>
<label for="name">Name:</label>
<input type="text" id="name" placeholder="Enter a name">
<button onclick="sendGetRequest()">Send GET Request</button>
<p id="getResult"></p>
<h2>POST Request</h2>
<label for="data">Data:</label>
<input type="text" id="data" placeholder="Enter some data">
<button onclick="sendPostRequest()">Send POST Request</button>
<p id="postResult"></p>
<script>
function sendGetRequest() {
const name = document.getElementById('name').value;
const url = `http://localhost:5000/api/hello?name=${encodeURIComponent(name)}`;
fetch(url)
.then(response => response.json())
.then(data => {
document.getElementById('getResult').textContent = data.message;
})
.catch(error => {
console.error('Error:', error);
});
}
function sendPostRequest() {
const data = document.getElementById('data').value;
const url = 'http://localhost:5000/api/data';
fetch(url, {
method: 'POST',
headers: {
'Content-Type': 'application/json'
},
body: JSON.stringify({ data: data })
})
.then(response => response.json())
.then(data => {
console.log(data.processed_data);
document.getElementById('postResult').textContent = data.processed_data.data;
})
.catch(error => {
console.error('Error:', error);
});
}
</script>
</body>
</html>
</code></pre>
<p>When I go to localhost:8080 and try it out on an M3 mac, I get these errors:</p>
<pre><code>localhost/:1 Access to fetch at 'http://localhost:5000/api/hello?name=George' from origin 'http://localhost:8080' has been blocked by CORS policy: No 'Access-Control-Allow-Origin' header is present on the requested resource. If an opaque response serves your needs, set the request's mode to 'no-cors' to fetch the resource with CORS disabled.
(index):26
GET http://localhost:5000/api/hello?name=George net::ERR_FAILED 403 (Forbidden)
sendGetRequest @ (index):26
onclick @ (index):12
(index):32 Error: TypeError: Failed to fetch
at sendGetRequest ((index):26:10)
at HTMLButtonElement.onclick ((index):12:40)
</code></pre>
<p>However, when I run it on my 2015 MacBook Pro, it works flawlessly. A colleague with an M2 Mac gets the error, but another colleague running Windows has no problem. I have tried Chrome and Safari on my machines.</p>
<p><strong>Any ideas as to what's going on here or what can be done about it??</strong></p>
<p>P.S. I put all the code here: <a href="https://github.com/Duncan-Britt/cors-test" rel="nofollow noreferrer">https://github.com/Duncan-Britt/cors-test</a> if you want to try it out for yourself.</p>
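For reference, the browser error means the response from port 5000 arrived without an <code>Access-Control-Allow-Origin</code> header. As an illustration of the mechanism only (a stdlib sketch, not the Flask app from the repo), a server that does send the header looks like this:

```python
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, ThreadingHTTPServer

class CORSHandler(BaseHTTPRequestHandler):
    """Answers every GET with JSON plus the CORS header whose absence
    the browser is complaining about."""

    def do_GET(self):
        body = json.dumps({"message": "Hello!"}).encode()
        self.send_response(200)
        # Without this header, the browser blocks the cross-origin fetch
        self.send_header("Access-Control-Allow-Origin", "http://localhost:8080")
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass  # silence per-request logging

server = ThreadingHTTPServer(("127.0.0.1", 0), CORSHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()
url = f"http://127.0.0.1:{server.server_port}/api/hello"
with urllib.request.urlopen(url) as resp:
    print(resp.headers["Access-Control-Allow-Origin"])  # → http://localhost:8080
server.shutdown()
```

If the header is present when you `curl` the endpoint directly but the browser still fails, the request is probably not reaching your server at all (consistent with the 403 above).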
|
<python><cors><apple-silicon>
|
2024-05-30 01:26:26
| 1
| 340
|
Duncan Britt
|
78,552,467
| 968,132
|
Why does VS Code show inline comments as part of environment variables from .env file?
|
<p>VS Code is showing the inline comments and quotes from my .env file when importing environment variables, and I can't figure out why. It does not happen if I run the script from the terminal, only within VS Code -> Run.</p>
<p>Also, dotenv is quite forgiving per <a href="https://pypi.org/project/python-dotenv/" rel="nofollow noreferrer">the docs</a>:</p>
<blockquote>
<p>Values can be unquoted, single- or double-quoted. Spaces before and after keys, equal signs, and values are ignored. Values can be followed by a comment.</p>
</blockquote>
<p>Here's my .env:</p>
<pre><code>TEST='abc'
TEST2="def"
TEST3='ghi' # comment 1
TEST4="hig" # comment 2
</code></pre>
<p>Here's the file test_dotenv.py:</p>
<pre><code>import os
from dotenv import load_dotenv
load_dotenv()
print("-=-START")
print(os.getenv('TEST'))
print(os.getenv('TEST2'))
print(os.getenv('TEST3'))
print(os.getenv('TEST4'))
</code></pre>
<p>In terminal, it runs as expected:
<code>python ./tests/component/test_dotenv.py</code></p>
<pre><code>-=-START
abc
def
ghi
hig
</code></pre>
<p>In VS Code, it doesn't:</p>
<pre><code>-=-START
abc
def
'ghi' # comment 1
"hig" # comment 2
</code></pre>
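The difference is consistent with the values being loaded verbatim by one loader and quote/comment-stripped by the other. A rough pure-Python illustration of the dotenv-style behaviour (a simplified sketch and hypothetical helper, not dotenv's real parser):

```python
def parse_env_line(line):
    """Mimic how a forgiving .env parser treats quotes and inline
    comments: quoted values end at the closing quote; unquoted values
    end at ' #'."""
    key, _, raw = line.partition("=")
    raw = raw.strip()
    if raw[:1] in ("'", '"'):
        quote = raw[0]
        end = raw.find(quote, 1)      # closing quote delimits the value
        value = raw[1:end]
    else:
        value = raw.split(" #", 1)[0].strip()  # drop the inline comment
    return key.strip(), value

print(parse_env_line("TEST3='ghi' # comment 1"))  # → ('TEST3', 'ghi')
```

A loader that skips this step returns the raw right-hand side, quotes and comment included, which matches the VS Code output above.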
|
<python><visual-studio-code><dotenv>
|
2024-05-30 01:03:23
| 2
| 1,148
|
Peter
|
78,552,387
| 8,452,243
|
Optimizing node placement in a 2D grid to match specific geodesic distances
|
<p>I'm working on a problem where I need to arrange a set of nodes within a 2D grid such that the distance between pairs of nodes either <em>approximate</em> specific values as closely as possible or <em>meet a minimum threshold</em>.</p>
<p>In other words, for some node pairs, the distance should be approximately equal to a predefined value (≈). For other node pairs, the distance should be greater than or equal to a predefined threshold (≥).</p>
<p>Additional challenge: the grid is inscribed inside a concave polygon, so distances must be geodesic.</p>
<p><strong>Question</strong>: Using OR-Tools, how can I efficiently approximate the location of the nodes given the constraints above-mentioned?</p>
<p><a href="https://i.sstatic.net/9QssF2bK.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/9QssF2bK.png" alt="enter image description here" /></a></p>
<p><strong>[EDIT]</strong> I've revised the example script, trying as best I can to apply Laurent's wise suggestions, but my poor understanding of OR-Tools (and its subtleties) still makes this a very difficult task.</p>
<p>This version:</p>
<ul>
<li>creates a simple conforming grid</li>
<li>pre-computes geodesic distances between all pairs of cells within that grid</li>
<li>indicates the number of nodes to place as well as their associated target pairwise distances
<ul>
<li>each pairwise distance comes with an objective to match (either ≈ or ≥)</li>
</ul>
</li>
<li>declares the CP-SAT model and creates the main variables for the problem</li>
<li><em>(new)</em> ensures each node is assigned exactly to one position and each position can have at most one node</li>
<li><em>(new)</em> creates a Boolean variable checking if the distance constraint is met for each pair of nodes</li>
<li><em>(new)</em> use <code>AddImplication</code> to connect the Boolean variables with the node positions</li>
<li><em>(new)</em> applies conditional penalties based on whether the distance condition is met and tries to minimize their sum</li>
</ul>
<p>Unfortunately, I must be missing a few nuances, because this implementation doesn't return any results even though the solution space is not empty.</p>
<pre><code>from ortools.sat.python import cp_model
from itertools import combinations
import networkx as nx
# Dimensions of the original grid (width & height)
w, h = 7, 5
# Selection of grid-cell indices (conforming/concave grid)
cell_indices = list(sorted(set(range(w * h)) - set([0, 1, 2, 3, 7, 8, 9, 10, 28, 29])))
# Topology of the conforming/concave grid
T = nx.Graph()
for i in cell_indices:
if i >= w and (i - w) in cell_indices:
T.add_edge(i, i - w, weight=1)
if i < w * (h - 1) and (i + w) in cell_indices:
T.add_edge(i, i + w, weight=1)
if i % w != 0 and (i - 1) in cell_indices:
T.add_edge(i, i - 1, weight=1)
if (i + 1) % w != 0 and (i + 1) in cell_indices:
T.add_edge(i, i + 1, weight=1)
# Precompute geodesic distances using Dijkstra's algorithm
geodesic_distances = dict(nx.all_pairs_dijkstra_path_length(T))
# Get the largest geodesic distance
max_distance = float('-inf')
for i1 in geodesic_distances:
for i2 in geodesic_distances[i1]:
if i1 != i2 and i1 > i2:
distance = geodesic_distances[i1][i2]
if distance > max_distance: max_distance = distance
# Number of nodes to place
num_nodes = 5
# Target distances to match between each pair of nodes + type of objective (≈ or ≥)
objective_distances = {(0, 1): (3, '≈'),
(0, 2): (2, '≥'),
(0, 3): (2, '≥'),
(0, 4): (3, '≈'),
(1, 2): (3, '≥'),
(1, 3): (3, '≥'),
(1, 4): (4, '≥'),
(2, 3): (2, '≈'),
(2, 4): (4, '≥'),
(3, 4): (3, '≈')}
# Instantiate model
model = cp_model.CpModel()
# Ensure each position can have at most one node
node_at_position = {}
for index in cell_indices:
at_most_one = []
for node in range(num_nodes):
var = model.NewBoolVar(f'node_{node}_at_position_{index}')
node_at_position[node, index] = var
at_most_one.append(var)
# Apply at most one node per position constraint
model.AddAtMostOne(at_most_one)
# Ensure each node is assigned exactly to one position
for node in range(num_nodes):
model.AddExactlyOne(node_at_position[node, idx] for idx in cell_indices)
penalties = []
# For each pair of nodes:
for (node1, node2), (target_distance, constraint_type) in objective_distances.items():
# For each compatible pair of cells
for i1, i2 in combinations(cell_indices, 2):
# Get the corresponding geodesic distance
distance = geodesic_distances[i1][i2]
# Create a Boolean variable
is_compatible = model.NewBoolVar(f'compat_{node1}_{node2}_{i1}_{i2}')
# Create a penalty variable
penalty = model.NewIntVar(0, max_distance, f'penalty_{node1}_{node2}_{i1}_{i2}')
if constraint_type == '≈':
# Condition that `is_compatible` will be True if the distance approximates (deviation: -1/+1) the target distance
model.Add(is_compatible == (target_distance - 1 <= distance <= target_distance + 1))
elif constraint_type == '≥':
# Condition that `is_compatible` will be True if the distance is at least the target distance
model.Add(is_compatible == (distance >= target_distance))
# If 'is_compatible' is true -> implications to enforce node positions
model.AddImplication(is_compatible, node_at_position[node1, i1])
model.AddImplication(is_compatible, node_at_position[node2, i2])
# If it is not -> add a penalty
model.Add(penalty == abs(distance - target_distance)).OnlyEnforceIf(is_compatible.Not())
# Accumulate penalties
penalties.append(penalty)
# Objective to minimize total penalty
model.Minimize(sum(penalties))
# Solving the model
solver = cp_model.CpSolver()
status = solver.Solve(model)
print("Solver status:", solver.StatusName(status))
if status == cp_model.FEASIBLE or status == cp_model.OPTIMAL:
print("Solution found:")
for node in range(num_nodes):
for index in cell_indices:
if solver.Value(node_at_position[node, index]):
print(f'Node {node} is at position {index}')
else:
print("No solution found.")
</code></pre>
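Independent of the CP-SAT formulation, the feasibility of a target set like <code>objective_distances</code> can be sanity-checked by brute force on a toy instance (hypothetical targets, Manhattan distance on a full 3x3 grid instead of geodesic distances on the concave grid):

```python
from itertools import permutations

# Full 3x3 grid of candidate cells
cells = [(x, y) for x in range(3) for y in range(3)]

def dist(a, b):
    # Manhattan distance stands in for the geodesic distance here
    return abs(a[0] - b[0]) + abs(a[1] - b[1])

# (node1, node2) -> (target, kind), same shape as objective_distances
targets = {(0, 1): (2, "≈"), (0, 2): (3, "≥"), (1, 2): (2, "≥")}

def satisfied(placement):
    for (i, j), (t, kind) in targets.items():
        d = dist(placement[i], placement[j])
        if kind == "≈" and abs(d - t) > 1:   # ≈ allows a -1/+1 deviation
            return False
        if kind == "≥" and d < t:
            return False
    return True

# Try every assignment of 3 nodes to distinct cells
solution = next(p for p in permutations(cells, 3) if satisfied(p))
print(solution)
```

If no such placement exists for the real grid and targets, the CP-SAT model is infeasible no matter how the constraints are encoded, which is worth ruling out first.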
|
<python><optimization><or-tools><cp-sat>
|
2024-05-30 00:18:13
| 2
| 1,363
|
solub
|
78,552,380
| 8,458,083
|
How to solve problem of circular reference when defining a tree in python 3.12?
|
<p>I am trying to follow this <a href="https://wickstrom.tech/2024-05-23-statically-typed-functional-programming-python-312.html" rel="nofollow noreferrer">tutorial</a>. I am using Python 3.12 as required.</p>
<p>I try to run this code:</p>
<pre><code>from dataclasses import dataclass
from typing import Callable
type RoseTree[T] = Branch[T] | Leaf[T]
@dataclass
class Branch[A]:
branches: list[RoseTree[A]]
def map[B](self, f: Callable[[A], B]) -> Branch[B]:
return Branch([b.map(f) for b in self.branches])
@dataclass
class Leaf[A]:
value: A
def map[B](self, f: Callable[[A], B]) -> Leaf[B]:
return Leaf(f(self.value))
</code></pre>
<p>And I get the following error:</p>
<pre><code>Traceback (most recent call last):
File "/mnt/c/Users/Pierre-Olivier/Documents/python/3.12/a.py", line 8, in <module>
class Branch[A]:
File "/mnt/c/Users/Pierre-Olivier/Documents/python/3.12/a.py", line 8, in <generic parameters of Branch>
class Branch[A]:
File "/mnt/c/Users/Pierre-Olivier/Documents/python/3.12/a.py", line 11, in Branch
def map[B](self, f: Callable[[A], B]) -> Branch[B]:
File "/mnt/c/Users/Pierre-Olivier/Documents/python/3.12/a.py", line 11, in <generic parameters of map>
def map[B](self, f: Callable[[A], B]) -> Branch[B]:
^^^^^^
NameError: name 'Branch' is not defined
</code></pre>
<p>If I try to invert the definition of RoseTree and Branch, Leaf</p>
<blockquote>
<pre><code>Traceback (most recent call last):
File "/mnt/c/Users/Pierre-Olivier/Documents/python/3.12/a.py", line 6, in <module>
class Branch[A]:
File "/mnt/c/Users/Pierre-Olivier/Documents/python/3.12/a.py", line 6, in <generic parameters of Branch>
class Branch[A]:
File "/mnt/c/Users/Pierre-Olivier/Documents/python/3.12/a.py", line 7, in Branch
branches: list[RoseTree[A]]
^^^^^^^^
NameError: name 'RoseTree' is not defined
</code></pre>
</blockquote>
<p>I suppose that it is a problem of circular references: RoseTree can't be built because Branch isn't defined, or Branch can't be built because RoseTree isn't defined.</p>
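For comparison, the pre-PEP-695 spelling sidesteps the eager evaluation entirely, because <code>from __future__ import annotations</code> turns every annotation into a string that is never evaluated at class-definition time. This is an older-style sketch of the same tree, not the tutorial's 3.12 syntax:

```python
from __future__ import annotations  # all annotations become lazy strings

from dataclasses import dataclass
from typing import Callable, Generic, TypeVar, Union

A = TypeVar("A")
B = TypeVar("B")

@dataclass
class Branch(Generic[A]):
    branches: list[RoseTree[A]]  # fine: never evaluated eagerly

    def map(self, f: Callable[[A], B]) -> Branch[B]:
        return Branch([b.map(f) for b in self.branches])

@dataclass
class Leaf(Generic[A]):
    value: A

    def map(self, f: Callable[[A], B]) -> Leaf[B]:
        return Leaf(f(self.value))

# Defined last, after both constructors exist
RoseTree = Union[Branch[A], Leaf[A]]

tree = Branch([Leaf(1), Branch([Leaf(2)])])
print(tree.map(lambda x: x * 10))
```

With the 3.12 syntax, quoting the forward references (e.g. <code>-> "Branch[B]"</code> and <code>"list[RoseTree[A]]"</code>) should have the same effect of deferring evaluation.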
|
<python><python-3.12>
|
2024-05-30 00:13:44
| 1
| 2,017
|
Pierre-olivier Gendraud
|
78,552,374
| 19,429,024
|
Issues with threading and hotkeys using pynput in Python (Infinite Loop and Responsiveness)
|
<p>I'm trying to use the <code>pynput</code> library to listen for hotkeys and start a thread when the <code>F12</code> key is pressed. The thread prints a message in an infinite loop to the console. Pressing <code>ESC</code> should stop the application. My <code>Example</code> class extends <code>Thread</code> and overrides the <code>run</code> method.</p>
<p><code>src/index.py</code></p>
<pre class="lang-py prettyprint-override"><code>import pynput
from lib.listener import HotkeyListener
if __name__ == "__main__":
on_press = HotkeyListener()
listener = pynput.keyboard.Listener(on_press=on_press.on_press, suppress=True)
listener.run()
</code></pre>
<p><code>listener.py</code></p>
<pre class="lang-py prettyprint-override"><code>from .Example import Example
import pynput
class HotkeyListener:
def __init__(self):
self.example = Example()
def on_press(self, key):
if key == pynput.keyboard.Key.f12:
print("Starting Thread...")
self.example.start()
if key == pynput.keyboard.Key.esc:
self.example.join()
print("Exiting application...")
return False # Stop listener
</code></pre>
<p><code>example.py</code></p>
<pre class="lang-py prettyprint-override"><code>from threading import Thread
from .delay import rand_sleep
class Example(Thread):
def run(self):
while True:
print("Hello, I am an Example :) ?") # This message appears in the console
rand_sleep(1, 2)
</code></pre>
<p>Problem:
When the F12 key is pressed, the thread starts and enters an infinite loop, printing messages to the console. This causes the application to not respond to other hotkeys, like ESC, to stop the execution.</p>
<p>Questions:</p>
<ul>
<li>How can I make my application respond to other hotkeys while the thread is running?</li>
<li>What is the correct way to manage threads in this scenario to avoid the infinite loop that freezes the computer?</li>
<li>Is there a more efficient or correct way to implement this hotkey listening functionality with threading?</li>
</ul>
<p>Any help would be greatly appreciated!</p>
<p>@EDIT
After <code>@jupiterbjy's</code> response, I arrived at the following solution (now it works as expected!), but there's a runtime error when I try to create it again by pressing the hotkey. The thread no longer exists and can only be started once:</p>
<pre class="lang-py prettyprint-override"><code>class Example(Thread):
def __init__(self, event: Event) -> None:
super().__init__()
self.event = event
def run(self):
counter: Number = 0
while not self.event.is_set():
print("Hello, I am an Example :)? ")
rand_sleep(1, 2)
counter += 1
if counter > 5:
self.event.set()
# RuntimeError: threads can only be started once
class HotkeyListener:
def __init__(self):
self.thread_event = threading.Event()
self.example = Example(self.thread_event)
def on_press(self, key):
if key == pynput.keyboard.Key.f12:
print("---------------------", self.example)
# --------------------- <Example(Thread-1, stopped 9676)>
self.example.start()
if key == pynput.keyboard.Key.esc:
self.thread_event.set()
self.example.join()
print("Exiting application...")
return False # Stop listener
</code></pre>
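On the remaining <code>RuntimeError</code>: a <code>threading.Thread</code> object can never be restarted, so a restartable worker has to create a fresh <code>Thread</code> on every start. A minimal stdlib sketch of that pattern (hypothetical <code>Worker</code> wrapper, independent of pynput):

```python
import threading
import time

class Worker:
    """Builds a fresh Thread per start(), since a Thread object can
    only be started once."""

    def __init__(self, target):
        self._target = target
        self._stop = threading.Event()
        self._thread = None

    def start(self):
        if self._thread is not None and self._thread.is_alive():
            return  # already running; ignore repeated hotkey presses
        self._stop.clear()
        self._thread = threading.Thread(
            target=self._target, args=(self._stop,), daemon=True)
        self._thread.start()

    def stop(self):
        self._stop.set()
        if self._thread is not None:
            self._thread.join()

def loop(stop_event):
    while not stop_event.is_set():
        time.sleep(0.01)  # stand-in for the print/rand_sleep body

w = Worker(loop)
w.start(); w.stop()
w.start(); w.stop()  # restarting works: no "threads can only be started once"
```

In the listener, <code>F12</code> would then call <code>w.start()</code> and <code>ESC</code> <code>w.stop()</code>, and the event is also cleared on each start so the loop runs again.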
|
<python><multithreading><pynput>
|
2024-05-30 00:11:49
| 2
| 587
|
Collaxd
|
78,552,358
| 424,957
|
How to plot LineString in gpd.GeoDataFrame?
|
<p>I have a KML file that has LineString and Polygon geometries. I can plot the Polygon, but I am not sure how to plot the LineString. I tried <code> polys.boundary.plot(ax=axs, facecolor=color, color=color, label=legend, zorder=3)</code>, but that doesn't seem to work. How can I plot a LineString in a gpd.GeoDataFrame?</p>
<pre><code>myPolys = gpd.read_file('myKmkFile.kml', driver='KML')
myPolys.exterior.plot(ax=axs, facecolor=color, color=color, label=legend, zorder=3)
</code></pre>
<p><a href="https://i.sstatic.net/gwSvUG6I.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/gwSvUG6I.jpg" alt="enter image description here" /></a>
The part of KML file</p>
<pre><code><Placemark id="3">
<name>ZSHA/A2585/19</name>
<styleUrl>#4</styleUrl>
<LineString id="2">
<coordinates>121.40909651396505,34.89924748390615,0.0 121.40799622296342,34.88157892127923,0.0 121.40480056606769,34.864084191851276,0.0 121.39954160563234,34.846931616598226,0.0 121.39227117062804,34.83028612270202,0.0 121.38306031244876,34.81430766540144,0.0 121.37199858143134,34.799149701081674,0.0 121.35919313252089,34.784957725778504,0.0 121.34476766987393,34.77186789253947,0.0 121.32886124141642,34.76000572024479,0.0 121.31162689546245,34.74948490555849,0.0 121.29323021245763,34.740406248662744,0.0 121.27384772574081,34.732856702339355,0.0 121.25366524592614,34.72690855280921,0.0 121.23287610409945,34.72261873953367,0.0 121.21167932950523,34.72002831993127,0.0 121.19027777777801,34.71916208367624,0.0 121.16887622605077,34.72002831993127,0.0 121.14767945145657,34.72261873953367,0.0 121.12689030962986,34.72690855280921,0.0 121.10670782981519,34.732856702339355,0.0 121.08732534309837,34.740406248662744,0.0 121.06892866009355,34.74948490555849,0.0 121.05169431413958,34.76000572024479,0.0 121.03578788568207,34.77186789253947,0.0 121.02136242303511,34.784957725778504,0.0 121.00855697412466,34.799149701081674,0.0 120.99749524310725,34.81430766540144,0.0 120.98828438492797,34.83028612270202,0.0 120.98101394992366,34.846931616598226,0.0 120.97575498948831,34.864084191851276,0.0 120.97255933259258,34.88157892127923,0.0 120.97145904159095,34.89924748390615,0.0 120.97246605425434,34.91691977955479,0.0 120.97557201734512,34.93442556459562,0.0 120.98074831530587,34.95159609320779,0.0 120.98794629578126,34.96826574829586,0.0 120.99709769174416,34.9842736461484,0.0 121.0081152379774,34.99946519902468,0.0 121.02089347758957,35.01369362012356,0.0 121.03530975214778,35.02682135582214,0.0 121.05122536692012,35.03872143067552,0.0 121.06848692066855,35.04927869143998,0.0 121.08692778745268,35.05839093731496,0.0 121.10636973603305,35.06596992468892,0.0 121.1266246707295,35.071942235909546,0.0 121.14749647603564,35.0762500029669,0.0 121.16878294593866,35.078851478464635,0.0 
121.19027777777801,35.079721447840576,0.0 121.21177260961736,35.078851478464635,0.0 121.23305907952036,35.0762500029669,0.0 121.2539308848265,35.071942235909546,0.0 121.27418581952296,35.06596992468892,0.0 121.29362776810332,35.058390937314954,0.0 121.31206863488747,35.04927869143999,0.0 121.32933018863588,35.038721430675515,0.0 121.34524580340822,35.02682135582214,0.0 121.35966207796643,35.01369362012357,0.0 121.37244031757862,34.99946519902468,0.0 121.38345786381184,34.9842736461484,0.0 121.39260925977474,34.96826574829587,0.0 121.39980724025013,34.95159609320778,0.0 121.40498353821089,34.93442556459563,0.0 121.40808950130166,34.91691977955479,0.0 121.40909651396505,34.89924748390615,0.0</coordinates>
</LineString>
</Placemark>
<Placemark>
<name>ZSHA/A2637/19</name>
<styleUrl>#poly-3949AB-1-255-nodesc</styleUrl>
<Polygon>
<outerBoundaryIs>
<LinearRing>
<tessellate>1</tessellate>
<coordinates>
124,33.6805556,0
123.6288889,33.2355556,0
124.5986522,32.6877416,0
124.9666942,33.1189613,0
124,33.6805556,0
</coordinates>
</LinearRing>
</outerBoundaryIs>
</Polygon>
</Placemark>
</code></pre>
|
<python><matplotlib><geopandas>
|
2024-05-29 23:55:10
| 1
| 2,509
|
mikezang
|
78,552,276
| 8,121,824
|
How to use duplicate rows of input forms for dash app?
|
<p>I am new to Dash and working on a personal finance app, which will include spending tracking. I'd ideally like to have multiple rows of input forms so the user can add multiple transactions, or split out one receipt if they'd like. Does each input need to have a distinct id, or is there an easier way to go about this? An image and code is below.</p>
<p><a href="https://i.sstatic.net/MFkSTYpB.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/MFkSTYpB.png" alt="Initial Row Sample" /></a></p>
<pre><code>from dash import Dash, dcc, html, dash_table, Input, Output, State, callback, callback_context
import dash_bootstrap_components as dbc
import plotly.graph_objects as go
import pandas as pd
app = Dash(
__name__,
external_stylesheets=[dbc.themes.SPACELAB, dbc.icons.FONT_AWESOME],
)
dropdown_values = ["Groceries", "Car Insurance", "Eating Out"]
amount_spent = dbc.Input(id="amount_spent",
type="number", value=0)
tax_rate = dbc.InputGroup([dbc.InputGroupText("Tax Rate"),dbc.Input(id="tax_rate",
type="number")], className="mb-3")
total = dbc.InputGroup([dbc.InputGroupText("Total"),dbc.Input(id="total",
type="number", value=0)], className="mb-3")
input_boxes = [amount_spent, tax_rate, total]
dummy_tab = html.H2("Header for dummy tab")
expense_tab = dbc.Container([dbc.Row([dbc.Col(html.H4("Category"), width=12, lg=2), dbc.Col(html.H4("Amount Spent"), width=12, lg=3), dbc.Col(html.H4("Tax Rate"), width=12, lg=3), dbc.Col(html.H4("Total"), width=12, lg=3)]),
dbc.Row(
[dbc.Col(
dcc.Dropdown(id="category-dropdown", multi=False, options=[{"label": x, "value":x} for x in ["Eating Out", "Groceries", "Entertainment"]]), lg=2),
dbc.Col(amount_spent, lg=3), dbc.Col(tax_rate, lg=3), dbc.Col(total, lg=3),
dbc.Col(html.Button("+", id="add-row", n_clicks=0), lg=1)
]
),
dbc.Row([html.Button("Submit", id="submit-button", n_clicks=0, disabled=True), html.Button("+", id="add-row-button", n_clicks=0),
html.Div(id="spending-output")])])
@app.callback(
Output('total', 'value'),
[Input('amount_spent', 'value'), Input('tax_rate', 'value')], prevent_initial_call=True)
def update_total(spent_value, tax_value):
total_amount = round(spent_value*(1+(tax_value/100)),2)
return total_amount
@app.callback(
Output('submit-button', 'disabled'),
Input('total', 'value'), prevent_initial_call=True)
def enable_submit(total):
if total>0:
return False
elif total<=0:
return True
@callback(
Output('spending-output', 'children'),
Input('submit-button', 'n_clicks'),
State('category-dropdown', 'value'), State('total', 'value'),
prevent_initial_call=True
)
def update_spending(n_clicks, category, total):
print('I am here')
data = {'Category':[category], 'Amount':[total]}
df = pd.DataFrame.from_dict(data)
df.to_excel("dummy_spending_data.xlsx")
tabs = dbc.Tabs([dbc.Tab(dummy_tab, label="Dummy"), dbc.Tab(expense_tab, label="Expenses")])
app.layout = dbc.Row(dbc.Col(tabs))
if __name__ == "__main__":
app.run_server(debug=True)
</code></pre>
|
<python><input><plotly-dash>
|
2024-05-29 23:08:29
| 1
| 904
|
Shawn Schreier
|
78,552,242
| 2,687,317
|
Polar contour plot with irregular 'bins'
|
<p>So I have sky data in azimuth and elevation (imported from matlab) that is 'binned' in 3 deg x 3 deg samples. So the matrix is 30 x 120 (3x30=90 deg of el, 3x120=360 of az). Each elevation, 0, 3, 6, ..., 87, has 120 azimuth values associated with it...</p>
<p>However, as you get closer to the zenith you really don't want 120 cells (az cols): As you move up in el, 3 deg of az represents a larger and larger portion of the sky. The final row of the data only has 3 non-zero values that essentially map to az 0-120, 120-240, and 240-360 @ an elevation of 87-90 deg.</p>
<p><a href="https://i.sstatic.net/7DGJCLeK.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/7DGJCLeK.png" alt="So, this rectangular matrix looks like this:" /></a></p>
<p>So the matrix contains more and more 0's as you go from row 0 (which is el 0-3 deg) to row 29 (el 87-90 deg). <a href="https://www.dropbox.com/scl/fi/yn3o9kfyj0r07lnud9lnn/data.txt?rlkey=8qhlg82wzenjktsf0c4ka2aqb&dl=0" rel="nofollow noreferrer">Data is here</a>.</p>
<p>I want to plot this on a polar plot, but I don't know how to use contour or pcolor to plot these irregular grids (where the color is set by the value in the matrix).</p>
<p>I've tried:</p>
<pre><code>epfd = np.loadtxt("data.txt")
azimuths = np.radians(np.arange(0, 360, 3))
els = np.arange(0,90,3)
r, theta = np.meshgrid(els, azimuths)
fig, ax = plt.subplots(subplot_kw=dict(projection='polar'))
im = ax.contourf(theta, r,epfd.T,norm=mpl.colors.LogNorm())
ax.set_theta_direction(-1)
ax.set_theta_offset(np.pi / 2.0)
ax.invert_yaxis()
</code></pre>
<p>But that results in</p>
<p><a href="https://i.sstatic.net/wiU9qQBY.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/wiU9qQBY.png" alt="enter image description here" /></a></p>
<p>Thanks for your ideas.</p>
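One way to handle the irregular rows, sketched here with random stand-in data rather than the data.txt values: draw each elevation ring as its own <code>pcolormesh</code>, giving the ring only as many azimuth bins as it has non-zero cells, so that every ring's bins span the full circle.

```python
import matplotlib
matplotlib.use("Agg")  # headless backend for this sketch
import matplotlib.pyplot as plt
import numpy as np

# Stand-in for the 30x120 matrix: row i covers elevations [3i, 3i+3),
# and higher rows have fewer non-zero cells, as described.
rng = np.random.default_rng(0)
epfd = rng.random((30, 120))
for i in range(30):
    ncells = max(120 // (i + 1), 3)
    epfd[i, ncells:] = 0.0

fig, ax = plt.subplots(subplot_kw=dict(projection="polar"))
vmin, vmax = epfd[epfd > 0].min(), epfd.max()
for i in range(30):
    row = epfd[i]
    ncells = np.count_nonzero(row)
    # ncells cells need ncells+1 azimuth edges covering 0..2*pi
    theta_edges = np.linspace(0.0, 2.0 * np.pi, ncells + 1)
    ax.pcolormesh(theta_edges, [3 * i, 3 * i + 3],
                  row[:ncells][None, :], vmin=vmin, vmax=vmax)
ax.set_theta_direction(-1)
ax.set_theta_offset(np.pi / 2.0)
fig.savefig("polar_rings.png")
```

This assumes the non-zero values in each row really do represent equal-width bins around the full circle; a shared <code>vmin</code>/<code>vmax</code> (or a <code>LogNorm</code> passed to every call) keeps the rings on one colour scale.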
|
<python><matplotlib><contour>
|
2024-05-29 22:44:43
| 1
| 533
|
earnric
|
78,551,987
| 11,039,749
|
Python Selenium execute from cronjob linux and shell script
|
<p>I have a shell script that executes a python script through a cronjob.</p>
<p>If I run the shell script or python script it executes without any issues.</p>
<p>It only has issues when I execute it from a cronjob.</p>
<p>Shell Script</p>
<pre><code>!/bin/bash
# Set environment variables
export XDG_RUNTIME_DIR=/run/user/$(id -u)
export PATH=$PATH:/snap/bin
export DISPLAY=:0
#export PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
#echo "Current Date and time: $(date)"
#echo "current directory: $(pwd)"
#Xvfb :99 -screen 0 1024x768x24 &
#export PATH=$PATH:/usr/local/bin
#export DISPLAY=:0 #:99
LOGFILE=/opt/runscript.log
echo "Script Started at $(date)" >> $LOGFILE
cd /home/carson/Documents/code/python/speech2txt
source /home/carson/Documents/code/python/speech2txt/venv/bin/activate
#source ./venv/bin/activate
#/usr/bin/python3 /home/carson/Documents/code/python/speech2txt/createAudio.py >> /opt>
#/usr/bin/python3 /home/carson/Documents/code/python/speech2txt/cronBrowser.py >> $LOG>
su - carson -c "/usr/bin/python3 /home/carson/Documents/code/python/speech2txt/cronBro>
#echo "exit status $?"
deactivate
if [ $? -eq 0 ]; then
echo "Script Success - $(date)" >> $LOGFILE
else
echo "Script Failure - $(date)" >> $LOGFILE
fi
</code></pre>
<p>Cronjob</p>
<pre><code>12 16 * * * carson /opt/run.sh
#08 22 * * * PATH=$PATH:/usr/local/bin/:/usr/bin/:/usr/sbin:/bin && DISPLAY=:0 && /opt/run.sh
##47 21 * * * export XDG_RUNTIME_DIR=/run/user/$(id -u) && export PATH=$PATH:/snap/bin && /opt/run.sh >> /opt/runscript.log 2>&1
</code></pre>
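Separately from the cron environment itself, note that in the script above <code>if [ $? -eq 0 ]</code> runs after <code>deactivate</code>, so <code>$?</code> reflects <code>deactivate</code>, not the python command. A minimal sketch of capturing the status at the right moment (hypothetical paths, stand-in python command):

```shell
#!/bin/bash
# Save the exit status immediately after the command of interest;
# any later command (deactivate, echo, ...) overwrites $?.
LOGFILE=./runscript_demo.log
python3 -c 'print("hello from python")' >> "$LOGFILE" 2>&1
status=$?                      # captured before anything else runs
if [ "$status" -eq 0 ]; then
    echo "Script Success - $(date)" >> "$LOGFILE"
else
    echo "Script Failure - $(date)" >> "$LOGFILE"
fi
```

The same pattern applies regardless of whether the script is launched interactively or from cron.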
|
<python><linux><bash><selenium-webdriver><cron>
|
2024-05-29 21:19:17
| 1
| 529
|
Bigbear
|
78,551,953
| 1,922,531
|
Why does python not find regex that regex101 does?
|
<p>I have this <a href="https://regex101.com/r/rwTrEf/1" rel="nofollow noreferrer">regex</a>, which works perfectly in regex101, but will not work in actual python. The find string exists in the test string, but python will not find it.</p>
<pre><code>find_str = r'^([\s\S]*)(?<=\n\n)([ \n-]+Once in a Lifetime Values[ \n-]+?)(?= if)([\s\S]+)$'
logging.info(f'Checking for existence....\n -- {re.search(find_str, text)}\n')
</code></pre>
<p>This produces a fail:</p>
<pre><code>2024-05-29 16:05:23: INFO - Checking for existence....
-- None
</code></pre>
<p>Any help?</p>
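The pattern itself is valid, so a common culprit is the real <code>text</code> differing from what was pasted into regex101 — for example Windows CRLF line endings, which defeat the <code>(?<=\n\n)</code> lookbehind. A quick sanity check with hypothetical sample text:

```python
import re

pattern = r'(?<=\n\n)START'
unix_text = "intro\n\nSTART here"
windows_text = "intro\r\n\r\nSTART here"

print(re.search(pattern, unix_text))     # matches
print(re.search(pattern, windows_text))  # None: \r breaks the lookbehind
```

Printing <code>repr(text)</code> around the expected match position shows exactly which whitespace characters are really there.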
|
<python><regex>
|
2024-05-29 21:06:52
| 1
| 1,493
|
lukehawk
|
78,551,949
| 2,641,576
|
Python TypeError: cannot concatenate object of type '<class 'str'>'; only Series and DataFrame objs are valid
|
<p>I am getting the following error:</p>
<pre class="lang-none prettyprint-override"><code>TypeError: cannot concatenate object of type '<class 'str'>'; only Series and DataFrame objs are valid
Traceback (most recent call last):
File "/var/task/etd.py", line 77, in lambda_handler
data = pd.concat(concated_data, ignore_index=True)
</code></pre>
<p>In the following block:</p>
<pre><code>for idx, day_row in curr_df.iterrows():
av_last_month, rate_end_last_mon = last_months_avs(df_to_date, day_row['From_Currency'], day_row['To_Currency'], day_row['Obs_Date'])
concated_data = ({'Obs_Date': day_row['Obs_Date'], 'From_Currency': day_row['From_Currency'],
'To_Currency': day_row['To_Currency'], 'Rate': day_row['Rate'],
'PE_Rate': av_last_month, 'Mon_End_Rate': rate_end_last_mon})
data = pd.concat(concated_data, ignore_index=True)
new_df = pd.DataFrame(data)
</code></pre>
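The traceback arises because <code>pd.concat</code> only accepts Series/DataFrame objects, and <code>concated_data</code> here is a plain dict of scalars. A sketch of the usual pattern — accumulate dicts in a list inside the loop and build the frame once afterwards (stand-in data, not the currency columns):

```python
import pandas as pd

rows = []
for obs_date, rate in [("2024-01-01", 1.10), ("2024-01-02", 1.12)]:
    # One plain dict per row, appended inside the loop
    rows.append({"Obs_Date": obs_date, "Rate": rate})

# Build the DataFrame once, after the loop
new_df = pd.DataFrame(rows)
print(new_df)
```

Building one DataFrame from the list is also much faster than calling <code>pd.concat</code> once per iteration.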
|
<python><pandas><dataframe>
|
2024-05-29 21:06:27
| 1
| 15,081
|
Matt
|
78,551,946
| 1,303,577
|
Python use delimited string to access element in nested dictionary
|
<p>I'm trying to load a YAML file and access elements from it by a <code>/</code>-delimited string.</p>
<p>Sample yaml file:</p>
<pre><code>foo:
bar:
baz: qux
corge: waldo
</code></pre>
<p>I'm loading that file with:</p>
<pre><code>import yaml
data = yaml.load(open(yamlfile), Loader=yaml.FullLoader)
</code></pre>
<p>I can then access <code>qux</code> with</p>
<pre><code>data['foo']['bar']['baz']
</code></pre>
<p>But is there a way to access it by a delimited string, something like this?</p>
<pre><code>data.get_by_full_path('/foo/bar/baz')
</code></pre>
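One stdlib way to sketch such a helper (hypothetical function name, operating on the plain nested dicts that <code>yaml.load</code> returns):

```python
from functools import reduce

# Same shape as the loaded sample YAML
data = {"foo": {"bar": {"baz": "qux"}, "corge": "waldo"}}

def get_by_full_path(d, path, sep="/"):
    """Walk nested dicts along a delimited key path."""
    keys = [k for k in path.split(sep) if k]  # ignore leading/empty parts
    return reduce(lambda acc, k: acc[k], keys, d)

print(get_by_full_path(data, "/foo/bar/baz"))  # → qux
```

A missing key raises <code>KeyError</code> as with direct indexing; wrapping the <code>reduce</code> in try/except would give <code>get</code>-style defaults.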
<p>Edit:</p>
<p>Thanks for the linked post that provides some ways to do it. I may wind up using one of those. Here is another solution I remembered from working with jsonpath previously.</p>
<pre><code>from jsonpath_ng import parse
parse('foo.bar.baz').find(data)[0].value
# 'qux'
</code></pre>
|
<python><dictionary>
|
2024-05-29 21:04:32
| 0
| 1,945
|
Rusty Lemur
|
78,551,846
| 2,153,235
|
pandas: Access column-like results from "groupby" and "agg"?
|
<p>I am using <code>groupby</code> and <code>agg</code> to summarize groups of dataframe rows.
I summarize each group in terms of its <code>count</code> and <code>size</code>:</p>
<pre><code>>>> import pandas as pd
>>> df = pd.DataFrame([
[ 1, 2, 3 ],
[ 2, 3, 1 ],
[ 3, 2, 1 ],
[ 2, 1, 3 ],
[ 1, 3, 2 ],
[ 3, 3, 3 ] ],
columns=['A','B','C'] )
>>> gbB = df.groupby('B',as_index=False)
>>> Cagg = gbB.C.agg(['count','size'])
>>> Cagg
B count size
0 1 1 1
1 2 2 2
2 3 3 3
</code></pre>
<p>The result looks like a dataframe with columns for the
grouping variable <code>B</code> and for the summaries <code>count</code> and <code>size</code>:</p>
<pre><code>>>> Cagg.columns
Index(['B', 'count', 'size'], dtype='object')
</code></pre>
<p>However, I can't access each of the <code>count</code> and <code>size</code> columns
for further manipulation as series or by conversion <code>to_list</code>:</p>
<pre><code>>>> Cagg.count
<bound method DataFrame.count of B count size
0 1 1 1
1 2 2 2
2 3 3 3>
>>> Cagg.size
9
</code></pre>
<p>Can I access the individual column-like data with headings <code>count</code>
and <code>size</code>?</p>
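For what it's worth, the attribute spellings collide with <code>DataFrame.count</code> (a method) and <code>DataFrame.size</code> (the element count), while bracket indexing reaches the columns themselves — a sketch using the question's frame:

```python
import pandas as pd

df = pd.DataFrame(
    [[1, 2, 3], [2, 3, 1], [3, 2, 1], [2, 1, 3], [1, 3, 2], [3, 3, 3]],
    columns=["A", "B", "C"])
Cagg = df.groupby("B", as_index=False).C.agg(["count", "size"])

# Cagg.count / Cagg.size hit DataFrame attributes; Cagg["count"] does not
print(Cagg["count"].tolist())  # → [1, 2, 3]
print(Cagg["size"].tolist())   # → [1, 2, 3]
```

The same bracket form works for any column whose name shadows a DataFrame attribute.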
|
<python><pandas><group-by>
|
2024-05-29 20:35:07
| 1
| 1,265
|
user2153235
|
78,551,793
| 3,817,456
|
swig - calling char const * [] from python
|
<p>I have C code with a standard <code>main</code> of the form</p>
<pre><code>int mymain(int argc, const char *argv[]);
</code></pre>
<p>I managed to get through the swig 'pythonizing' process to be able to call this from python. When I import my module and try to call mymain I get</p>
<pre><code>>>>mymodule.mymain(1,[])
TypeError:in method 'mymain', argument 2 of type 'char const *[]'
>>>mymodule.mymain(1,["test"])
TypeError:in method 'mymain', argument 2 of type 'char const *[]'
>>>mymodule.mymain(1,"test")
TypeError:in method 'mymain', argument 2 of type 'char const *[]'
>>>mymodule.mymain(1,)
TypeError:mymain() missing 1 required positional argument: 'argv'
</code></pre>
<p>and similarly changing argc to 0 or anything else didn't help. Since the C is autogenerated I'd rather not modify that - can someone fill me in on how to pass a char *[] argument? I don't actually care about the values being passed as main ignores them completely.</p>
|
<python><c><swig><argv><argc>
|
2024-05-29 20:18:53
| 1
| 6,150
|
jeremy_rutman
|
78,551,766
| 23,260,297
|
Use groupby and aggregate functions to reshape dataframe
|
<p>I have a dataframe that looks like this:</p>
<pre><code>A B C D E F
foo s HO 02/01/24 100 20.0
foo s HO 02/01/24 200 20.0
foo b HO 02/01/24 100 20.0
foo b HO 02/01/24 200 20.0
bar s HO 02/01/24 100 20.0
bar s HO 02/01/24 200 20.0
bar b HO 02/01/24 100 20.0
bar b HO 02/01/24 200 20.0
fizz s BRT 02/01/24 100 20.0
fizz s BRT 02/01/24 200 20.0
fizz b BRT 02/01/24 100 20.0
fizz b BRT 02/01/24 200 20.0
buzz s PW 02/01/24 100 20.0
buzz s PW 02/01/24 200 20.0
buzz b PW 02/01/24 100 20.0
buzz b PW 02/01/24 200 20.0
</code></pre>
<p>I need to group these columns by ABCD and get the sum of E for each group and the first F in each group. Then I separate the dfs into groups. Something like this:</p>
<pre><code>df_groups = df.groupby(['A','C','B','D'], sort=False, as_index=False).agg({'E': 'sum', 'F': 'first'})
df_list = [x for _, x in df_groups.groupby(['A', 'B'])]
</code></pre>
<p>however I get stuck when I need to reshape it. I need the resultant df to look like this:</p>
<pre><code> D A E F
B C
b HO 02/01/24 foo 300 20
02/01/24 bar 300 20
BRT 02/01/24 fizz 300 20
PW 02/01/24 buzz 300 20
s HO 02/01/24 foo 300 20
02/01/24 bar 300 20
BRT 02/01/24 fizz 300 20
PW 02/01/24 buzz 300 20
</code></pre>
<p>I'm assuming I can do this with a groupby and an agg, but I'm unsure.</p>
<hr />
<p>Code for input data:</p>
<pre><code>data = {
'A': ['foo', 'foo', 'foo', 'foo', 'bar', 'bar', 'bar', 'bar', 'fizz', 'fizz', 'fizz', 'fizz', 'buzz', 'buzz', 'buzz', 'buzz'],
'B': ['s', 's', 'b', 'b', 's', 's', 'b', 'b', 's', 's', 'b', 'b', 's', 's', 'b', 'b'],
'C': ['HO', 'HO', 'HO', 'HO', 'HO', 'HO', 'HO', 'HO', 'BRT', 'BRT', 'BRT', 'BRT', 'PW', 'PW', 'PW', 'PW'],
'D': ['02/01/24', '02/01/24', '02/01/24', '02/01/24', '02/01/24', '02/01/24', '02/01/24', '02/01/24', '02/01/24', '02/01/24', '02/01/24', '02/01/24', '02/01/24', '02/01/24', '02/01/24', '02/01/24'],
'E': [100, 200, 100, 200, 100, 200, 100, 200, 100, 200, 100, 200, 100, 200, 100, 200],
'F': [20.0, 20.0, 20.0, 20.0, 20.0, 20.0, 20.0, 20.0, 20.0, 20.0, 20.0, 20.0, 20.0, 20.0, 20.0, 20.0]
}
df = pd.DataFrame(data)
</code></pre>
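Assuming the target really is a (B, C) MultiIndex over the aggregated frame, no second aggregation is needed — a sketch that moves B and C into the index and reorders the columns (rebuilding the sample data compactly):

```python
import pandas as pd

# Rebuild the question's sample frame
rows = [(a, b, c, "02/01/24", e, 20.0)
        for a, c in [("foo", "HO"), ("bar", "HO"), ("fizz", "BRT"), ("buzz", "PW")]
        for b in ("s", "b")
        for e in (100, 200)]
df = pd.DataFrame(rows, columns=list("ABCDEF"))

# Same groupby/agg as in the question
grouped = df.groupby(["A", "B", "C", "D"], sort=False, as_index=False).agg(
    {"E": "sum", "F": "first"})

# Reshape: B and C become a sorted MultiIndex, remaining columns reordered
reshaped = (grouped
            .set_index(["B", "C"])
            .sort_index(kind="stable")[["D", "A", "E", "F"]])
print(reshaped)
```

<code>kind="stable"</code> keeps the original within-group row order while sorting the index levels.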
|
<python><pandas>
|
2024-05-29 20:12:36
| 1
| 2,185
|
iBeMeltin
|
78,551,694
| 6,606,057
|
Transfering NaN's to Dummy Variables While Using One Hot Encoder
|
<p>I am using OneHotEncoder to create a series of dummy variables based on a categorical variable. The problem I encounter is that any missing values are not transferred to the resulting dummy variables.</p>
<pre><code>oh = OneHotEncoder(min_frequency = 0.0001, sparse_output = False) ### df_experience_level
data = oh.fit_transform(df[['experience_level']])
cols=oh.get_feature_names_out()
df_experience_level = pd.DataFrame(data,columns=cols)
</code></pre>
<pre><code>missing_values_count = df_experience_level.isnull().sum()
missing_values_count
</code></pre>
<p>output:</p>
<pre><code>experience_level_EN 0
experience_level_EX 0
experience_level_MI 0
experience_level_SE 0
experience_level_nan 0
dtype: int64
</code></pre>
<p>The current code that I use is:</p>
<pre><code>df.loc[df['experience_level'].isna(), 'experience_level_EN'] = np.nan
df.loc[df['experience_level'].isna(), 'experience_level_EX'] = np.nan
df.loc[df['experience_level'].isna(), 'experience_level_MI'] = np.nan
df.loc[df['experience_level'].isna(), 'experience_level_SE'] = np.nan
</code></pre>
<p>However this is tedious.</p>
<p>Running the obvious:</p>
<pre><code>df.loc[df['experience_level'].isna(), df_experience_level] = np.nan
</code></pre>
<p>Results in:</p>
<pre><code>ValueError: Index data must be 1-dimensional
</code></pre>
<p>Is there any way to transfer the NaNs from the parent variable to each dummy variable in a single statement?</p>
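<p>One possibility, sketched here with pandas only (the frames below are hypothetical stand-ins; in the real code the column list would come from <code>oh.get_feature_names_out()</code>): index all dummy columns at once with a list of labels rather than passing the DataFrame itself as the indexer, which is what raises the <code>ValueError</code>.</p>

```python
import numpy as np
import pandas as pd

# Hypothetical stand-ins for the question's frames, sharing a row index.
df = pd.DataFrame({'experience_level': ['EN', None, 'SE']})
df_experience_level = pd.DataFrame({
    'experience_level_EN': [1.0, 0.0, 0.0],
    'experience_level_SE': [0.0, 0.0, 1.0],
})
cols = df_experience_level.columns  # oh.get_feature_names_out() in the real code

# One statement: rows where the parent is NaN become NaN in every dummy.
df_experience_level.loc[df['experience_level'].isna(), cols] = np.nan
print(df_experience_level)
```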
|
<python><pandas><nan><one-hot-encoding>
|
2024-05-29 19:55:55
| 1
| 485
|
Englishman Bob
|
78,551,304
| 5,994,782
|
Getting scrapy and pytest to work with AsyncioSelectorReactor
|
<h2>To reproduce my issue</h2>
<ul>
<li>python 3.12.1</li>
<li>scrapy 2.11.2</li>
<li>pytest 8.2.1</li>
</ul>
<p>In <em>bookspider.py</em> I have:</p>
<pre class="lang-py prettyprint-override"><code>from typing import Iterable
import scrapy
from scrapy.http import Request
class BookSpider(scrapy.Spider):
name = None
def start_requests(self) -> Iterable[Request]:
yield scrapy.Request("https://books.toscrape.com/")
def parse(self, response):
books = response.css("article.product_pod")
for book in books:
yield {
"name": self.name,
"title": book.css("h3 a::text").get().strip(),
}
</code></pre>
<p>In <em>test_bookspider.py</em> I have:</p>
<pre class="lang-py prettyprint-override"><code>import json
import os
from pytest_twisted import inlineCallbacks
from scrapy.crawler import CrawlerRunner
from twisted.internet import defer
from bookspider import BookSpider
@inlineCallbacks
def test_bookspider():
runner = CrawlerRunner(
settings={
"REQUEST_FINGERPRINTER_IMPLEMENTATION": "2.7",
"FEEDS": {"books.json": {"format": "json"}},
"TWISTED_REACTOR": "twisted.internet.asyncioreactor.AsyncioSelectorReactor",
# "TWISTED_REACTOR": "twisted.internet.selectreactor.SelectReactor",
}
)
yield runner.crawl(BookSpider, name="books")
with open("books.json", "r") as f:
books = json.load(f)
assert len(books) >= 1
assert books[0]["name"] == "books"
assert books[0]["title"] == "A Light in the ..."
os.remove("books.json")
defer.returnValue(None)
</code></pre>
<p>With <code>"TWISTED_REACTOR": "twisted.internet.asyncioreactor.AsyncioSelectorReactor"</code> uncommented I get the following error:</p>
<p><code>Exception: The installed reactor (twisted.internet.selectreactor.SelectReactor) does not match the requested one (twisted.internet.asyncioreactor.AsyncioSelectorReactor)</code></p>
<p>With <code>"TWISTED_REACTOR": "twisted.internet.selectreactor.SelectReactor"</code> uncommented my test passes.</p>
<p>Can anyone explain this behaviour and more broadly how to test CrawlerRunner or CrawlerProcess with pytest?</p>
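<p>For what it's worth, the mismatch usually means pytest-twisted has already installed the default <code>SelectReactor</code> at startup, before Scrapy ever reads the <code>TWISTED_REACTOR</code> setting. A hedged sketch of one workaround is to install the asyncio reactor first, e.g. from a <code>conftest.py</code> (pytest-twisted also offers a <code>--reactor=asyncio</code> option that serves the same purpose):</p>

```python
# conftest.py (a sketch): install the asyncio reactor before pytest-twisted
# or Scrapy get the chance to install the default SelectReactor.
from scrapy.utils.reactor import install_reactor

install_reactor("twisted.internet.asyncioreactor.AsyncioSelectorReactor")
```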
|
<python><scrapy><pytest><python-asyncio><twisted>
|
2024-05-29 18:11:40
| 1
| 305
|
Henry Dashwood
|
78,551,302
| 6,703,592
|
pandas cannot set a value for MultiIndex
|
<p>It is not allowed to set a value with a MultiIndex label, although it works with a single index. Maybe it is because of the version of pandas.</p>
<pre><code>import pandas as pd
df = pd.DataFrame()
df.loc[('a', 'b'), 'col1'] = 1
print(df)
</code></pre>
<blockquote>
<p>KeyError: "None of [Index(['a', 'b'], dtype='object')] are in the [index]"</p>
</blockquote>
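<p>For what it's worth, a sketch of a workaround: on a frame with a flat default index, <code>df.loc[('a', 'b'), 'col1']</code> is interpreted as the two separate row labels <code>'a'</code> and <code>'b'</code>, hence the <code>KeyError</code>. Giving the frame a MultiIndex first makes the tuple key work:</p>

```python
import pandas as pd

# Create the MultiIndex up front instead of enlarging an empty,
# flat-indexed frame with a tuple key.
idx = pd.MultiIndex.from_tuples([('a', 'b')], names=['level0', 'level1'])
df = pd.DataFrame(index=idx, columns=['col1'])
df.loc[('a', 'b'), 'col1'] = 1
print(df)
```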
|
<python><pandas>
|
2024-05-29 18:11:04
| 1
| 1,136
|
user6703592
|
78,551,082
| 1,185,242
|
How do you parallelize access to a shared array in python using concurrent.futures?
|
<p>I have the following piece of code to illustrate my problem:</p>
<p>Each thread calculates a value <code>locs</code> and then updates the <code>result</code> array. Assume that that update (<code>result[locs] += mask[locs]</code>) is a very slow operation; how can I parallelize it so it can be threaded too?</p>
<pre><code>import numpy as np
import time
import concurrent.futures
MAX = 100
SIZE = 500
mask = np.random.randint(0, MAX, (SIZE, SIZE))
def process_image(i):
start = time.time()
locs = np.where(mask > i)
print(f" process_image({i}) took {round(time.time() - start, 2)} secs.")
return locs
if __name__ == '__main__':
result = np.zeros((SIZE, SIZE))
with concurrent.futures.ThreadPoolExecutor(max_workers=32) as executor:
results = [executor.submit(process_image, i) for i in range(MAX) ]
for f in concurrent.futures.as_completed(results):
locs = f.result()
# How do I parallelize this operation? Where the result of each thread updates a shared result array
result[locs] += mask[locs]
print(result)
</code></pre>
|
<python><parallel-processing><concurrent.futures>
|
2024-05-29 17:21:58
| 1
| 26,004
|
nickponline
|
78,550,940
| 2,336,081
|
How to get all HTML tags from a page content using BeautifulSoup?
|
<p>I'm trying to retrieve a tag's text from an HTML page, but I notice that BeautifulSoup is not "reading" all HTML elements, e.g.:</p>
<p>This is the part that I'm working on, the <a href="https://www.tibia.com/community/?subtopic=characters&name=Aliance%20Bombao" rel="nofollow noreferrer">full page</a> have a lot of elements:</p>
<pre class="lang-html prettyprint-override"><code><div class="TableContainer">
<div class="CaptionContainer">
<div class="CaptionInnerContainer">
<span class="CaptionEdgeLeftTop"></span>
<span class="CaptionEdgeRightTop"></span>
<span class="CaptionBorderTop"></span>
<span class="CaptionVerticalLeft"></span>
<div class="Text">Character Information</div> <span class="CaptionVerticalRight"></span>
<span class="CaptionBorderBottom"></span>
<span class="CaptionEdgeLeftBottom"></span>
<span class="CaptionEdgeRightBottom"></span>
</div>
</div>
</div>
</code></pre>
<p>When I try to <code>find</code> the "Text" class from <code>CaptionInnerContainer</code>, the <code>CaptionInnerContainer</code> is empty.</p>
<pre class="lang-py prettyprint-override"><code>import httpx
from bs4 import BeautifulSoup
CHARACTER_URL = "https://www.tibia.com/community/?subtopic=characters&name={name}"
character_name = "Aliance Bombao"
async def _get(url: str):
"""
Retrieve the page HTML content the for given URL
"""
async with httpx.AsyncClient() as client:
return await client.get(url)
async def get_character(name: str) -> BeautifulSoup:
response = await _get(CHARACTER_URL.format(name=name))
response.raise_for_status()
return BeautifulSoup(response.content, "html.parser")
page = await get_character(character_name)
tables = page.find_all("div", class_="TableContainer")
caption = tables[0].find("div", class_="CaptionContainer")
inner = caption.find("div", class_="CaptionInnerContainer")
text = inner.find("div", class_="Text").text
print(text) ## None
</code></pre>
<p>How can I ensure that BeautifulSoup is reading all HTML elements? When I try to debug it using <code>ipython</code> I can see that the <code>CaptionInnerContainer</code> has only one child element (CaptionEdgeLeftTop)</p>
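<p>For what it's worth, the static snippet above parses fine with <code>html.parser</code>, which suggests the live page's markup (not BeautifulSoup itself) is what breaks the tree. A small sketch to verify; a common fix is then to try a more forgiving parser such as <code>"html5lib"</code> or <code>"lxml"</code> in the <code>BeautifulSoup(response.content, ...)</code> call:</p>

```python
from bs4 import BeautifulSoup

# A trimmed copy of the snippet from the question: html.parser handles
# valid markup fine, so missing children point at invalid HTML on the
# live page; "html5lib" or "lxml" recover from that more gracefully.
html = """
<div class="TableContainer">
  <div class="CaptionContainer">
    <div class="CaptionInnerContainer">
      <span class="CaptionEdgeLeftTop"></span>
      <div class="Text">Character Information</div>
    </div>
  </div>
</div>
"""
soup = BeautifulSoup(html, "html.parser")
print(soup.find("div", class_="Text").text)
```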
|
<python><beautifulsoup>
|
2024-05-29 16:53:35
| 1
| 656
|
Rafa Acioly
|
78,550,725
| 5,171,169
|
how to resolve InputText key error in this simple PySimpleGUI script
|
<p>I do not see why I am getting this error:</p>
<pre><code>Traceback (most recent call last):
  File "....\play_download.py", line 55, in
    url = values['-url-']
          ~~~~~~^^^^^^^^^
KeyError: '-url-'
</code></pre>
<p>This error appears when I click on radio button <strong>download_video_with_yt_dlp</strong>
<a href="https://i.sstatic.net/tCEeE9dy.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/tCEeE9dy.png" alt="enter image description here" /></a></p>
<pre><code> import PySimpleGUI as sg
from my_scripts import *
from my_constants import *
import sys
import glob
import yt_dlp
sg.theme_background_color('#3d5c5c')
sg.set_options(font=('Fira Code', 16))
l1 = sg.Text('Put url here',font=('Fira Code', 16), expand_x=True, justification='center')
l2 = sg.Text('Put 1 for Chef_Max_Mariola, 2 for JuliaCrossBow, 3 for French_Vibes, 4 for Studio_Italiano_InClasse, 5 for D:',
font=('Fira Code', 16), expand_x=True, justification='center')
t1 = sg.Text('', enable_events=True, font=('Fira Code', 16), expand_x=True, justification='left')
t2 = sg.Text('', enable_events=True, font=('Fira Code', 16), expand_x=False, justification='left')
def delete_video_in_D():
list_of_files = [os.remove(p) for p in glob.glob(r'D:\\*.*') if (os.path.isfile(p) and is_video(p))]
layout1 = [
[sg.Radio(key, 'Radio', enable_events=True, key=key) for key in ['Chef_Max_Mariola', 'JuliaCrossBow',
'French_Vibes', 'Studio_Italiano_InClasse', 'download_video_with_yt_dlp']]]
layout2 = [
[sg.Radio(key, 'Radio', enable_events=True, key=key) for key in ['Chef_Max_Mariola', 'JuliaCrossBow',
'French_Vibes', 'Studio_Italiano_InClasse', 'download_video_with_yt_dlp']], [l1],[t1, sg.InputText(key='-url-')], [l2], [t2, sg.InputText(key='-folder-')]]
FOLDERS = {'1:Chef_Max_Mariola_PATH',
'2:JuliaCrossBow_PATH',
'3:French_Vibes_PATH',
'4:Studio_Italiano_InClasse_PATH',
'5:r"D:\\"'}
window = sg.Window('Main Window', layout1)
while True:
event, values = window.read()
if event == sg.WIN_CLOSED:
break
elif event == 'Chef_Max_Mariola':
play_videos_from_folder(Chef_Max_Mariola_PATH)
elif event == 'JuliaCrossBow':
play_videos_from_folder(JuliaCrossBow_PATH)
elif event == 'French_Vibes':
play_videos_from_folder(French_Vibes_PATH)
elif event == 'Studio_Italiano_InClasse':
play_videos_from_folder(Studio_Italiano_InClasse_PATH)
elif event == 'download_video_with_yt_dlp':
delete_video_in_D()
window = sg.Window('Main Window', layout2)
url = values['-url-']
folder = values['-folder-']
os.chdir(FOLDERS[folder])
try:
with yt_dlp.YoutubeDL() as ydl:
ydl.download(link)
list_of_files = glob.glob('*.*')
latest_file = max(list_of_files, key=os.path.getctime)
# print("Download is completed successfully")
# print(latest_file)
new_name = re.sub(r'\s*\[.*?\]\s*', '', latest_file )
os.rename(latest_file, new_name)
proc = subprocess.Popen([r'C:\\Program Files\\VideoLAN\\VLC\\vlc.exe', new_name], close_fds=True)
time.sleep(2)
except Exception as e:
print('Error on line {}'.format(sys.exc_info()[-1].tb_lineno), type(e).__name__, e)
window.close()
</code></pre>
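<p>For what it's worth, the <code>KeyError</code> happens because <code>values</code> still comes from the first <code>window.read()</code> on <code>layout1</code>, which contains no <code>'-url-'</code> or <code>'-folder-'</code> inputs. A hedged sketch of one fix: after swapping to <code>layout2</code>, read from the new window before indexing <code>values</code>:</p>

```python
elif event == 'download_video_with_yt_dlp':
    delete_video_in_D()
    window.close()                              # drop the layout1 window
    window = sg.Window('Main Window', layout2)
    event, values = window.read()               # values now has the InputText keys
    url = values['-url-']
    folder = values['-folder-']
```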
|
<python><pysimplegui>
|
2024-05-29 16:02:17
| 1
| 5,696
|
LetzerWille
|
78,550,600
| 845,210
|
How to fix Pydantic "Default value is not JSON serializable" warning when using third-party annotated type? [non-serializable-default]
|
<p>Pydantic supports <a href="https://jrdnh.github.io/posts/pydantic-with-third-party-types/" rel="nofollow noreferrer">annotating</a> <a href="https://docs.pydantic.dev/latest/concepts/types/#handling-third-party-types" rel="nofollow noreferrer">third-party types</a> so they can be used directly in Pydantic models and de/serialized to & from JSON.</p>
<p>For example:</p>
<pre class="lang-py prettyprint-override"><code>from typing import Annotated, Any
from pydantic import BaseModel, model_validator
from pydantic.functional_validators import ModelWrapValidatorHandler
from typing_extensions import Self
# Pretend this is some third-party class
# we can't modify directly...
class Quantity:
def __init__(self, value: float, unit: str):
self.value = value
self.unit = unit
class QuantityAnnotations(BaseModel):
value: float
unit: str
@model_validator(mode="wrap")
def _validate(value: Any, handler: ModelWrapValidatorHandler[Self]) -> Quantity:
if isinstance(value, Quantity):
return value
validated = handler(value)
if isinstance(validated, Quantity):
return validated
return Quantity(**dict(validated))
QuantityType = Annotated[Quantity, QuantityAnnotations]
class OurModel(BaseModel):
quantity: QuantityType = Quantity(value=0.0, unit='m')
</code></pre>
<p>This works fine, because we just annotated the <code>Quantity</code> type so Pydantic knows how to serialize it to JSON with no issues:</p>
<pre class="lang-py prettyprint-override"><code>model_instance = OurModel()
print(model_instance.model_dump_json())
# {"quantity":{"value":0.0,"unit":"m"}}
</code></pre>
<p>But if we instead try to get the <a href="https://docs.pydantic.dev/latest/concepts/json_schema/" rel="nofollow noreferrer">JSON Schema</a> that describes <code>OurModel</code>, we get a warning that it doesn't know how to serialize the default value (the one it just successfully serialized)...</p>
<pre class="lang-py prettyprint-override"><code>OurModel.model_json_schema()
# ...lib/python3.10/site-packages/pydantic/json_schema.py:2158: PydanticJsonSchemaWarning:
# Default value <__main__.Quantity object at 0x75fcccab1960> is not JSON serializable;
# excluding default from JSON schema [non-serializable-default]
</code></pre>
<p>What am I missing here? Is this a Pydantic bug, or do I need to add more to the annotations to tell Pydantic how to serialize the default value in the context of a JSON Schema?</p>
<p>Does anyone have a good workaround to easily include annotated third-party types as default values in JSON Schema generated by Pydantic?</p>
|
<python><json><jsonschema><pydantic><pydantic-v2>
|
2024-05-29 15:34:26
| 1
| 3,331
|
bjmc
|
78,550,495
| 2,641,576
|
Error AttributeError: 'DataFrame' object has no attribute 'append', tried pd.concat but causing error also
|
<p>Getting the error <code>AttributeError: 'DataFrame' object has no attribute 'append'</code></p>
<p>When trying to run the following:</p>
<pre><code>for idx, day_row in ecurr.iterrows():
avlm, relm= lma(dtd, dr['fc'], dr['tc'], dr['od'])
ndf.append({'od': dr['od'], 'fc': dr['fc'],
'tc': dr['tc'], 'r': dr['r'],
'per': alm, 'mer': ralm}, ignore_index = True)
</code></pre>
<p>Tried using pd.concat but that is causing a separate error:</p>
<pre><code>NameError: name 'p' is not defined
</code></pre>
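<p>For what it's worth, <code>DataFrame.append</code> was removed in pandas 2.0, and the <code>NameError</code> for <code>p</code> suggests a typo such as <code>p.concat</code> instead of <code>pd.concat</code> (or a missing <code>import pandas as pd</code>). A hedged sketch of the usual replacement, with a hypothetical stand-in for the loop's source frame: collect the rows first and build the frame once.</p>

```python
import pandas as pd

# Hypothetical stand-in for the question's source frame.
ecurr = pd.DataFrame({'od': ['x', 'y'], 'fc': [1, 2], 'tc': [3, 4], 'r': [5, 6]})

rows = []
for _, dr in ecurr.iterrows():
    # ... compute the per-row values (lma(...) in the original) here ...
    rows.append({'od': dr['od'], 'fc': dr['fc'], 'tc': dr['tc'], 'r': dr['r']})

# Build once after the loop; appending row by row is quadratic.
ndf = pd.DataFrame(rows)
print(ndf)
```

<p>To extend an existing frame instead, <code>ndf = pd.concat([ndf, pd.DataFrame(rows)], ignore_index=True)</code> replaces the removed <code>append</code>.</p>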
|
<python><python-3.x><dataframe>
|
2024-05-29 15:12:50
| 1
| 15,081
|
Matt
|
78,550,418
| 5,138,332
|
SQLModel in FastAPI : NoForeignKeysError
|
<p>I separately created a Postgresql database with a schema named "geog" and two tables named projections (id, epsg_code, unit_id) and units (id, name).</p>
<p>So in my code I have:</p>
<ul>
<li>units.py</li>
</ul>
<pre><code>...
class Unit(SQLModel, table=True):
__tablename__ = "units"
metadata = {schema="geog"}
id: Optional[int] = Field(primary_key=True)
name: str
projections: List["Projection"] = Relationship(back_populates="units")
</code></pre>
<ul>
<li>projections.py</li>
</ul>
<pre><code>...
class Projections(SQLModel, table=True):
__tablename__ = "projections"
metadata = {schema="geog"}
id: Optional[int] = Field(primary_key=True)
epsg_code: str
unit_id: int = Field(foreign_key="units.id")
unit: Optional["Unit"] = Relationship(back_populates="projections")
</code></pre>
<p>And in the DB I manually declared:</p>
<ul>
<li>in the units table:
<ul>
<li>pk_units as a pk related to the units.id col</li>
</ul>
</li>
<li>in the projections table:
<ul>
<li>pk_projections as a pk related to the projections.id col</li>
<li>fk_projections_units_unit_id as a fk related to the projections.unit_id and units.id</li>
</ul>
</li>
</ul>
<p>But each time I use my CRUD (whatever the function), I've got the following error:
<code>sqlalchemy.exc.NoForeignKeysError: Could not determine join condition between parent/child tables on relationship Projection.units - there are no foreign keys linking these tables. Ensure that referencing columns are associated with a ForeignKey or ForeignKeyConstraint, or specify a 'primaryjoin' expression.</code></p>
<hr />
<h2>UPDATE</h2>
<p>The fact is that everything works fine when I group the different models in a single file. But when I have units.py and projections.py, the join isn't recognized. I'm following <a href="https://sqlmodel.tiangolo.com/tutorial/code-structure/" rel="nofollow noreferrer">the doc about code structure</a> using TYPE_CHECKING but it still doesn't work.</p>
|
<python><foreign-keys><fastapi><sqlmodel>
|
2024-05-29 15:00:18
| 1
| 314
|
FloCAD
|
78,550,382
| 3,973,269
|
binary(16) PK from mysql query cannot convert to integer
|
<p>In my mysql query, executed from a python script, I get a column that is stored as field type binary(16).
Unfortunately, when I try to get the integer value from the column, I get
<code>invalid literal for int() with base 10: '171099\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00'</code>
(The actual value is 171099.)</p>
<p>The code:</p>
<pre><code>query = "SELECT `fieldA` FROM `table` LIMIT 1"
connection = sql.connect(user=user,password=pw,host=host,port=port,database=db)
cursor = connection.cursor()
cursor.execute(query)
df = pd.DataFrame(cursor.fetchall())
print(int(df['fieldA'][0]))
</code></pre>
<p>Where in this code, fieldA is of type binary(16)</p>
<p>How can I make this work?</p>
<p>Python version: 3.11.9</p>
<p>Database charset/collation: UTF8mb4/utf8mb4_0900_</p>
<p>I have tried:
<code>print(int.from_bytes(df['fieldA'][0]))</code>
But unfortunately, an incorrect number is printed.</p>
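<p>For what it's worth, the error text shows the column holds the ASCII digits <code>171099</code> padded to 16 bytes with NUL bytes, which also explains why <code>int.from_bytes</code> gives a wrong number: it treats the raw bytes, padding included, as one base-256 integer. A sketch that strips the padding first, handling both <code>bytes</code> and <code>str</code> since drivers differ in what they return:</p>

```python
def binary16_to_int(raw):
    """Convert a NUL-padded binary(16) value holding ASCII digits to int."""
    if isinstance(raw, bytes):
        raw = raw.rstrip(b"\x00").decode("ascii")
    return int(raw.rstrip("\x00"))

print(binary16_to_int(b"171099" + b"\x00" * 10))  # 171099
print(binary16_to_int("171099" + "\x00" * 10))    # 171099
```

<p>In the question's code that would be <code>binary16_to_int(df['fieldA'][0])</code>.</p>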
|
<python><mysql><pandas>
|
2024-05-29 14:55:25
| 1
| 569
|
Mart
|
78,550,332
| 12,821,675
|
Django - Aggregate of an Aggregation
|
<p>I am working with DRF and am having issues defining my queryset for use in my view class. Suppose I have three models like so:</p>
<pre><code>class ExchangeRate(...):
date = models.DateField(...)
rate = models.DecimalField(...)
from_currency = models.CharField(...)
to_currency = models.CharField(...)
class Transaction(...):
amount = models.DecimalField(...)
currency = models.CharField(...)
group = models.ForeignKey("TransactionGroup", ...)
class TransactionGroup(...):
...
</code></pre>
<p>I want to create a queryset on the <code>TransactionGroup</code> level with the following:</p>
<ul>
<li>
<ol>
<li>for each <code>Transaction</code> in the transaction group, add an annotated field <code>converted_amount</code> that multiplies the <code>amount</code> by the <code>rate</code> on the <code>ExchangeRate</code> instance where the <code>currency</code> matches the <code>to_currency</code> respectively</li>
</ol>
</li>
<li>
<ol start="2">
<li>then sum up the <code>converted_amount</code> for each <code>Transaction</code> and set that on the <code>TransactionGroup</code> level as the annotated field <code>converted_amount_sum</code></li>
</ol>
</li>
</ul>
<p>An example json response for <code>TransactionGroup</code> using this desired queryset:</p>
<pre class="lang-json prettyprint-override"><code>[
{
"id": 1,
"converted_amount_sum": 5000,
"transactions": [
{
"id": 1,
"amount": 1000,
"converted_amount": 500,
"currency": "USD",
},
{
"id": 2,
"amount": 5000,
"converted_amount": 4500,
"currency": "EUR",
},
},
...
]
</code></pre>
<p>My attempt at building a queryset (is there a way to construct this on the <code>TransactionGroup</code> level?):</p>
<pre><code>from django.db.models import F
annotated_transactions = Transaction.objects.annotate(
converted_amount = F("amount") * exchange_rate.rate # <-- simplifying here
).values(
"transaction_group"
).annotate(
amount=Sum("converted_amount"),
)
</code></pre>
<p>I can get the annotations to work properly on the <code>Transaction</code> model - but trying to then sum them up again on the <code>TransactionGroup</code> level throws the error:</p>
<pre><code>FieldError: Cannot compute Sum('converted_amount'), `converted_amount` is an aggregate
</code></pre>
<p>For added context - I want to be able to sort and filter the <code>TransactionGroups</code> by the <code>converted_amount_sum</code> without having to do additional db lookups / operations.</p>
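<p>For what it's worth, a hedged sketch of the usual way around this error: start the queryset from <code>TransactionGroup</code> and put the product inside a single <code>Sum</code>, so there is only one level of aggregation. The rate lookup is left as a placeholder exactly as in the question, and <code>transaction</code> assumes the default reverse name for the <code>group</code> foreign key:</p>

```python
from django.db.models import DecimalField, ExpressionWrapper, F, Sum

# exchange_rate.rate stands in for the real per-row rate (e.g. a Subquery
# on ExchangeRate), simplifying exactly as the question does.
converted = ExpressionWrapper(
    F("transaction__amount") * exchange_rate.rate,
    output_field=DecimalField(),
)

groups = (
    TransactionGroup.objects
    .annotate(converted_amount_sum=Sum(converted))
    .order_by("-converted_amount_sum")       # sortable in the DB
    .filter(converted_amount_sum__gt=0)      # and filterable, no extra queries
)
```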
|
<python><django><django-rest-framework>
|
2024-05-29 14:46:46
| 1
| 3,537
|
Daniel
|
78,550,325
| 269,867
|
type error while creating custom dataset using huggingface dataset
|
<p>To generate custom dataset</p>
<pre><code>from datasets import Dataset,ClassLabel,Value
features = ({
"sentence1": Value("string"), # String type for sentence1
"sentence2": Value("string"), # String type for sentence2
"label": ClassLabel(names=["not_equivalent", "equivalent"]), # ClassLabel definition
"idx": Value("int32"),
})
custom_dataset = Dataset.from_dict(train_pairs)
custom_dataset = custom_dataset.cast(features)
custom_dataset
</code></pre>
<p>My <code>train_pairs</code> entries look like the sample below:</p>
<pre><code>{'sentence1': "that 's far too tragic to merit such superficial treatment ",
 'sentence2': "that 's far too tragic to merit such superficial treatment ",
 'label': <ClassLabel.not_equivalent: 0>,
 'idx': 5}
</code></pre>
<p>Casting this fails with:</p>
<pre><code>/usr/local/lib/python3.10/dist-packages/pyarrow/lib.cpython-310-x86_64-linux-gnu.so in string.from_py.__pyx_convert_string_from_py_std__in_string()
TypeError: expected bytes, int found
</code></pre>
<p>So I changed <code>label</code> to an integer:</p>
<pre><code>{'sentence1': "that 's far too tragic to merit such superficial treatment ",
 'sentence2': "that 's far too tragic to merit such superficial treatment ",
 'label': 0,
 'idx': 5}
</code></pre>
<p>but I still get the same error:</p>
<pre><code>/usr/local/lib/python3.10/dist-packages/pyarrow/lib.cpython-310-x86_64-linux-gnu.so in string.from_py.__pyx_convert_string_from_py_std__in_string()
TypeError: expected bytes, int found
</code></pre>
<p>I am trying to model my dataset as below. (data sample + feature info)</p>
<p><a href="https://i.sstatic.net/v8IfbBSo.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/v8IfbBSo.png" alt="enter image description here" /></a></p>
|
<python><huggingface-datasets>
|
2024-05-29 14:46:07
| 1
| 4,092
|
user269867
|
78,550,308
| 501,557
|
Is there a Python f-string equivalent of C++’s std::quoted?
|
<p>In C++, the <code>std::quoted</code> stream manipulator produces a quoted and escaped version of a string:</p>
<pre class="lang-cpp prettyprint-override"><code>std::string dirPath = "C:\\Documents\\Oval Office Recordings\\Launch Codes.txt";
std::cout << dirPath << endl; // C:\Documents\Oval Office Recordings\Launch Codes.txt
std::cout << std::quoted(dirPath) << endl; // "C:\\Documents\\Oval Office Recordings\\Launch Codes.txt"
</code></pre>
<p>Is there a direct Python f-string equivalent to <code>std::quoted</code>? By “direct,” I mean “built into Python” rather than “you can roll your own version.”</p>
<p>(Context: I have a C++ Quine I use when teaching a theory course that uses <code>std::quoted</code> to keep things simple. I’d like to port it to Python with as little invasive surgery as possible.)</p>
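<p>The closest built-in is the <code>!r</code> conversion, which formats the value with <code>repr()</code>: for a string that produces a quoted, backslash-escaped literal, with the caveat that Python prefers single quotes where <code>std::quoted</code> always emits double quotes:</p>

```python
dir_path = "C:\\Documents\\Oval Office Recordings\\Launch Codes.txt"
print(dir_path)         # C:\Documents\Oval Office Recordings\Launch Codes.txt
print(f"{dir_path!r}")  # 'C:\\Documents\\Oval Office Recordings\\Launch Codes.txt'
```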
|
<python><quotes><f-string>
|
2024-05-29 14:43:46
| 2
| 375,379
|
templatetypedef
|
78,550,197
| 5,306,861
|
How to see the value of a variable in WinDbg when debugging pyx?
|
<p>I compiled <code>pyx</code> according to the instructions <a href="https://stevedower.id.au/blog/debugging-cython-with-windbg" rel="nofollow noreferrer">here</a>,
So that it can be debugged in <code>WinDbg</code>, I am now able to put a breakpoint inside the code, but the values in the Locals panel do not help much.</p>
<p>Attached is a picture, I would like to know the value of <code>min_bound</code>, how can I see it?</p>
<p><a href="https://i.sstatic.net/zAmyVV5n.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/zAmyVV5n.png" alt="enter image description here" /></a></p>
<p>There are some objects that are actually <code>PyObject</code>, such as <code>__pyx_v_self</code>, is there a way to see their value? Something like I do in Python: <code>print(__pyx_v_self)</code> ?</p>
<p><a href="https://i.sstatic.net/VCegKH4t.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/VCegKH4t.png" alt="enter image description here" /></a></p>
<p><strong>Minimal example:</strong></p>
<ol>
<li>Create three files below.</li>
<li>Compile with the following command: <code>python setup.py build_ext --inplace</code>.</li>
<li>Run the <code>main.py</code> file.</li>
<li>Run python.exe under WinDbg</li>
<li>Paste in Python <code>import os; os.chdir("full path to the code folder")</code>
And also the code from the main.py file and press Enter</li>
<li>Open helloworld.pyx in WinDbg</li>
<li>Put a breakpoint in the helloworld.pyx file</li>
<li>run the code from the main.py file again.</li>
</ol>
<p><strong>helloworld.pyx</strong></p>
<pre class="lang-py prettyprint-override"><code>from __future__ import print_function
def fib(n):
"""Print the Fibonacci series up to n."""
a, b = 0, 1
while b < n:
print(b, end=' ')
a, b = b, a + b
print()
def primes(int nb_primes):
cdef int n, i, len_p
cdef int[1000] p
if nb_primes > 1000:
nb_primes = 1000
len_p = 0 # The current number of elements in p.
n = 2
while len_p < nb_primes:
# Is n prime?
for i in p[:len_p]:
if n % i == 0:
break
# If no break occurred in the loop, we have a prime.
else:
p[len_p] = n
len_p += 1
n += 1
# Let's copy the result into a Python list:
result_as_list = [prime for prime in p[:len_p]]
return result_as_list
class MyClass:
def __init__(self, n) -> None:
self.fib = fib(n)
self.primes = primes(n)
# try see the value of self.fib and self.primes in Windbg
</code></pre>
<p><strong>setup.py</strong></p>
<pre class="lang-py prettyprint-override"><code>from setuptools import Extension, setup
from Cython.Build import cythonize
extensions = [Extension('helloworld', ["helloworld.pyx"], extra_compile_args=["-Ox", "-Zi"], extra_link_args=["-debug:full"])]
setup(
ext_modules = cythonize(extensions, gdb_debug=True, emit_linenums=True)
)
</code></pre>
<p><strong>main.py</strong></p>
<pre class="lang-py prettyprint-override"><code>import helloworld
my_class = helloworld.MyClass(2000)
print(my_class.fib)
</code></pre>
|
<python><debugging><cython><windbg>
|
2024-05-29 14:22:43
| 0
| 1,839
|
codeDom
|
78,550,171
| 788,349
|
Using github app to install private repos across organizations?
|
<p>Apologies if this is a duplicate. I have found different variations of this all over the internet but am still having issues. See the below scenario.</p>
<p>I have two github organizations.</p>
<p><strong>Org1</strong> - has private <strong>Repo1</strong></p>
<p><strong>Org2</strong> - has a repo that has a dependency on private Repo1 from <strong>Org1</strong> which will be installed via pip</p>
<p>My question is, what is the prescribed pattern to follow in order to achieve this? On previous projects we would create a service account under <strong>Org2</strong> and generate a SSH key to assign to the user, add them as a readonly collaborator for <strong>Repo1</strong> and then associate any machines or github actions with the private key for the service account to pip install or clone the repo.</p>
<p>I am reading about github apps and I am curious if there a way to achieve this using github apps that is a more desirable/secure pattern? Any code for doing this is greatly appreciated.</p>
|
<python><github><pip><github-actions><github-app>
|
2024-05-29 14:19:11
| 0
| 513
|
acmisiti
|
78,550,109
| 6,379,197
|
Compute SHAP Value for a systems of model using shap.DeepExplainer
|
<p>I have trained a two-stage ML model for binary classification. In the first stage, I applied an autoencoder model consisting of an encoder and a decoder. The encoder reduces the dimension of the input features from 1000 to 32, and the decoder reconstructs the input by expanding from 32 back to 1000. On the encoder output, I applied a DNN model to classify whether the object is a flower or a fruit.</p>
<p>Now I want to calculate the feature importance score of this two-stage model. For this, I have computed the SHAP value on the encoder + DNN model to retrieve the feature importance of the raw variable.</p>
<p>I have checked <a href="https://stats.stackexchange.com/questions/418953/how-to-perform-shap-explainer-on-a-system-of-models">this question</a>. I have used <code>shap.KernelExplainer</code> to compute the feature importance score which is extremely slow. To get the feature importance score of 300 samples, I had to wait approximately 3 hours.</p>
<p><strong>My code:</strong></p>
<pre><code>class ModelWrapper:
def __init__(self, first_dnn_model, second_dnn_model):
self.first_dnn_model = first_dnn_model
self.second_dnn_model = second_dnn_model
def predict_proba(self, X):
self.first_dnn_model.eval()
self.second_dnn_model.eval()
with torch.no_grad():
X_tensor = torch.from_numpy(X)
X_dnn = self.first_dnn_model(X_tensor)
return self.second_dnn_model(X_dnn).numpy()
explainer = shap.KernelExplainer(modelwrapper.predict_proba, X_train[:100])
shap_value_for_test = explainer.shap_values(X_test)
</code></pre>
<p>The reason is quite clear. <code>shap.KernelExplainer</code> runs on CPU. I can compute shap value using <code>shap.DeepExplainer</code> but can not compute shap value on series of two models.</p>
<p><strong>My Question:</strong></p>
<ol>
<li><p>How can I speed up computation time of <code>shap.KernelExplainer</code> ?</p>
</li>
<li><p>How can I compute the feature importance score of a series of two models?</p>
<p>raw variable ==> autoencoder ==> DNN model ==> output</p>
</li>
</ol>
<p>How can I compute the feature importance score of the encoder + DNN model using shap.DeepExplainer?</p>
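<p>For what it's worth, since the encoder output feeds the DNN directly, the two stages compose into a single differentiable module, and <code>shap.DeepExplainer</code> can be pointed at the composition. A sketch, assuming both stages are <code>torch.nn.Module</code>s and that <code>encoder</code>, <code>dnn</code>, and the tensors are the trained objects from the question:</p>

```python
import shap
import torch

# Compose encoder + classifier into one end-to-end differentiable model,
# so attributions land on the raw 1000-dim input features.
combined = torch.nn.Sequential(encoder, dnn)
combined.eval()

background = X_train_tensor[:100]   # reference sample, as with KernelExplainer
explainer = shap.DeepExplainer(combined, background)
shap_values = explainer.shap_values(X_test_tensor)
```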
<p><strong>What else I tried:</strong></p>
<p><strong>1.</strong> I have read <a href="https://www.mdpi.com/2227-9091/10/1/3" rel="nofollow noreferrer">Matthews & Hartman (2021)</a> and found their <a href="https://github.com/srmatth/mshap" rel="nofollow noreferrer">GitHub repository</a> and <a href="https://pypi.org/project/mshap/0.2.1/" rel="nofollow noreferrer">Python port of the package <code>mshap</code></a>.</p>
<p><strong>Code:</strong></p>
<p><strong>Model Training:</strong></p>
<pre><code>X = r.X
y1 = r.cost_per_month
y2 = r.num_months
cpm_mod = sk.RandomForestRegressor(n_estimators = 100, max_depth = 10, max_features = 2)
cpm_mod.fit(X, y1)
nm_mod = sk.RandomForestRegressor(n_estimators = 100, max_depth = 10, max_features = 2)
nm_mod.fit(X, y2)
cpm_preds = cpm_mod.predict(X)
nm_preds = nm_mod.predict(X)
tot_rev = cpm_preds * nm_preds
</code></pre>
<p><strong>Model Explanation:</strong></p>
<pre><code>cpm_ex = shap.Explainer(cpm_mod)
cpm_shap = cpm_ex.shap_values(X)
cpm_expected_value = cpm_ex.expected_value
nm_ex = shap.Explainer(nm_mod)
nm_shap = nm_ex.shap_values(X)
nm_expected_value = nm_ex.expected_value
</code></pre>
<p><strong>Shap Value Calculation:</strong></p>
<pre><code>## R
final_shap <- mshap(
shap_1 = py$cpm_shap,
shap_2 = py$nm_shap,
ex_1 = py$cpm_expected_value,
ex_2 = py$nm_expected_value
)
head(final_shap$shap_vals)
</code></pre>
<p><strong>Why my case is different from this paper:</strong></p>
<p>I am not doing binary classification by multiplying two models' output. I have used the output of the first model as input to 2nd model.</p>
<p><strong>2. I have</strong> also read <a href="https://www.nature.com/articles/s41467-022-31384-3" rel="nofollow noreferrer">Chen, Lundberg, Lee (2022)</a> and gone through <a href="https://github.com/suinleelab/DeepSHAP" rel="nofollow noreferrer">their repository, DeepShap</a>. But this also does not answer my question.</p>
<p><strong>Code of this paper:</strong></p>
<pre><code>explicand = X_test; reference = X_train
explainer = shap.TreeExplainer(model, data=reference)
tree_attr = explainer.shap_values(X_test, per_reference=True)
shap.summary_plot(tree_attr.mean(2), features=explicand, show=False,
feature_names=explicand.columns, max_display=6)
</code></pre>
<p><strong>My Purpose:</strong><br />
My task is to compute the shapely value for features of input data that impact mostly binary classification. Any hint or paper link will be a great help to me.</p>
|
<python><deep-learning><pytorch><shap>
|
2024-05-29 14:08:51
| 0
| 2,230
|
Sultan Ahmed
|
78,550,028
| 2,707,590
|
How to use XComArg in the BigQueryInsertJobOperator `params` when creating dynamic task mappings?
|
<p>So I have been dealing with this issue for a while now without any luck...
I have a DAG that queries data from BigQuery, and depending on the results some Dynamic Task Mappings are created to insert an entry into another BigQuery table using <code>BigQueryInsertJobOperator</code>...</p>
<p>For Example:</p>
<pre class="lang-py prettyprint-override"><code>import logging

from airflow import DAG
from airflow import XComArg
from airflow.decorators import task, task_group
from airflow.providers.google.cloud.hooks.bigquery import BigQueryHook
from airflow.providers.google.cloud.operators.bigquery import BigQueryGetDataOperator, BigQueryInsertJobOperator
from airflow.utils.dates import days_ago

default_args = {
    'owner': 'airflow',
    'start_date': days_ago(1),
    'retries': 1,
}

dag = DAG(
    dag_id='bigquery_data_transfer_mapped_correct',
    default_args=default_args,
    schedule_interval="@daily",
    catchup=False,
    tags=['example'],
)

@task
def get_data(sql):
    log = logging.getLogger(__name__)  # plain @task functions have no `self`
    bq_hook = BigQueryHook(...)
    log.info('Fetching Data from:')
    log.info('Query: %s', sql)
    bq_client = bq_hook.get_client()
    query_job = bq_client.query(sql)
    client_results = query_job.result()  # Waits for the query to finish
    results = list(dict(result) for result in client_results)
    log.info(f"Retrieved {len(results)} rows from BigQuery")
    log.info('Response: %s', results)
    return results

query_data = get_data("SELECT * FROM some_table WHERE some_conditions;")

@task_group
def tasks(params):
    insert_job = BigQueryInsertJobOperator(
        task_id="insert_data",
        configuration={
            'query': {
                'query': "INSERT INTO `project.dataset.table` (field1, field2) VALUES ('{{ params.field1 }}', '{{ params.field2 }}')",
                'useLegacySql': False,
            }
        },
        params=params
    )
    insert_job

bq_tasks = tasks.expand(params=XComArg(query_data))
query_data >> bq_tasks
</code></pre>
<p>Please note that this code is just a basic example written for this question; in my actual use case I have a task group that expands and takes a parameter which is passed to <code>params</code> in one of its <code>BigQueryInsertJobOperator</code> tasks.</p>
<p>When I use it without a task group (i.e. call <code>BigQueryInsertJobOperator</code> directly with <code>expand</code>), it works.</p>
<p>After running my DAG i get an error saying:</p>
<pre class="lang-py prettyprint-override"><code>Broken DAG: [/opt/airflow/dags/src/dag.py] Traceback (most recent call last):
  File "/home/airflow/.local/lib/python3.11/site-packages/airflow/models/baseoperator.py", line 407, in apply_defaults
    default_args, merged_params = get_merged_defaults(
                                  ^^^^^^^^^^^^^^^^^^^^
  File "/home/airflow/.local/lib/python3.11/site-packages/airflow/models/baseoperator.py", line 167, in get_merged_defaults
    raise TypeError("params must be a mapping")
TypeError: params must be a mapping
</code></pre>
<p>The airflow version is:</p>
<pre><code>Version: [v2.8.1](https://pypi.python.org/pypi/apache-airflow/2.8.1)
Git Version: .release:c0ffa9c5d96625c68ded9562632674ed366b5eb3
</code></pre>
|
<python><google-bigquery><airflow>
|
2024-05-29 13:54:30
| 0
| 1,146
|
user2707590
|
78,549,980
| 3,861,330
|
Create a blank page and add text content using PyPDF2: module 'PyPDF2' has no attribute 'pdf'
|
<p>I am using this method to create a blank page, add text to it, and then append the page to a PDF.</p>
<pre class="lang-py prettyprint-override"><code>def add_text_to_blank_page(pdf_writer, text):
    # Create a new blank page
    page = PyPDF2._pdf.PageObject.create_blank_page(width=612, height=792)  # Standard US Letter size
    # Create a PDF text object
    pdf_text = PyPDF2.pdf.TextStringObject(text)
    # Create a PDF text element
    text_element = PyPDF2.pdf.TextObject()
    text_element.setFont("Helvetica", 12)  # Set font and font size
    text_element.textLines.append(pdf_text)
    # Add the text element to the page
    page.addText(text_element)
    # Add the page to the PDF writer
    pdf_writer.add_page(page)
</code></pre>
<p>The first line raises:
<code>AttributeError: module 'PyPDF2' has no attribute 'pdf'</code></p>
<p>I checked the documentation for <a href="https://pypdf2.readthedocs.io/en/3.0.0/modules/PageObject.html#PyPDF2._page.PageObject.create_blank_page" rel="nofollow noreferrer">PyPDF2</a>; the module should be there, so I am clearly doing something wrong.</p>
<p>What is wrong here ?</p>
|
<python><python-3.x><pdf><pypdf>
|
2024-05-29 13:48:29
| 1
| 645
|
Dhruv
|
78,549,952
| 10,126,955
|
Injecting additional code in the `console_scripts` executable stub
|
<p>Let's say I have a pip package with an executable (<code>console_scripts</code> in <code>entry_points</code>).</p>
<p>I can decide to install it in a random directory by using <code>pip install --target <DIR> <PKG></code>.</p>
<p>If I am also using a Python interpreter in a non-standard location, like for example:</p>
<pre><code><MY_PYTHON>/python -m pip install --target <DIR> <PKG>
</code></pre>
<p><code>pip</code> will be smart enough to generate a shebang that points to this location, <code>DIR/bin/PKG</code> will contain:</p>
<pre class="lang-bash prettyprint-override"><code>#!<MY_PYTHON>/python
# -*- coding: utf-8 -*-
</code></pre>
<p>Now, the question is, is there any way, through command-line options, to make <code>pip</code> also add something that goes like this:</p>
<pre class="lang-py prettyprint-override"><code>import os
import sys

file_path = os.path.realpath(__file__)
sys.path.append(os.path.dirname(os.path.dirname(file_path)))
</code></pre>
<p>so that the console script automatically finds its main Python code?</p>
|
<python><pip>
|
2024-05-29 13:43:59
| 0
| 3,610
|
Criminal_Affair_At_SO
|
78,549,723
| 5,170,442
|
why is python's zip strictness only enforced when the object is used, not created
|
<p>In the following example, why is the ValueError only raised on the last line rather than when <code>Z</code> is created?</p>
<pre class="lang-py prettyprint-override"><code>l1 = [3]
l2 = [1,2]
Z = zip(l1, l2, strict=True)
Zlist = list(Z)
</code></pre>
|
<python>
|
2024-05-29 13:02:33
| 0
| 653
|
db_
|
78,549,656
| 9,494,140
|
docker-python error : ModuleNotFoundError: No module named '_distutils_hack'
|
<p>I'm running a Python/Django app using Docker and Apache2 and it was working great, but suddenly when I try to run it again I am getting this error:</p>
<pre><code> Traceback (most recent call last):
  File "<frozen site>", line 201, in addpackage
  File "<string>", line 1, in <module>
ModuleNotFoundError: No module named '_distutils_hack'
Remainder of file ignored
Traceback (most recent call last):
  File "<frozen site>", line 201, in addpackage
  File "<string>", line 1, in <module>
ModuleNotFoundError: No module named '_distutils_hack'
Remainder of file ignored
Error processing line 1 of /usr/lib/python3/dist-packages/distutils-precedence.pth:
Traceback (most recent call last):
  File "<frozen site>", line 201, in addpackage
  File "<string>", line 1, in <module>
ModuleNotFoundError: No module named '_distutils_hack'
Remainder of file ignored
[Wed May 29 15:43:45.395312 2024] [wsgi:error] [pid 26:tid 132520276391616] [client 172.27.0.1:40422] mod_wsgi (pid=26): Failed to exec Python script file '/var/www/html/demo_app/water_maps/wsgi.py'.
[Wed May 29 15:43:45.395343 2024] [wsgi:error] [pid 26:tid 132520276391616] [client 172.27.0.1:40422] mod_wsgi (pid=26): Exception occurred processing WSGI script '/var/www/html/demo_app/water_maps/wsgi.py'.
[Wed May 29 15:43:45.395501 2024] [wsgi:error] [pid 26:tid 132520276391616] [client 172.27.0.1:40422] Traceback (most recent call last):
[Wed May 29 15:43:45.401671 2024] [wsgi:error] [pid 26:tid 132520276391616] [client 172.27.0.1:40422] File "/var/www/html/demo_app/water_maps/wsgi.py", line 12, in <module>
[Wed May 29 15:43:45.401687 2024] [wsgi:error] [pid 26:tid 132520276391616] [client 172.27.0.1:40422] from django.core.wsgi import get_wsgi_application
[Wed May 29 15:43:45.401704 2024] [wsgi:error] [pid 26:tid 132520276391616] [client 172.27.0.1:40422] ModuleNotFoundError: No module named 'django'
</code></pre>
<p>Here is an example of used files in the project :</p>
<p><strong>Dockerfile</strong></p>
<pre><code>FROM ubuntu
RUN apt-get update
# Avoid tzdata infinite waiting bug
ARG DEBIAN_FRONTEND=noninteractive
ENV TZ=Africa/Cairo
RUN apt clean
RUN apt-get update
RUN apt-get install -y apt-utils vim curl apache2 apache2-utils git
RUN apt -y install software-properties-common
RUN apt update
RUN add-apt-repository ppa:deadsnakes/ppa
RUN apt -y install python3.10-full
# Optional: Set Python 3.10 as the default Python version
RUN update-alternatives --install /usr/bin/python3 python3 /usr/bin/python3.10 1
# RUN apt-get -y install python3 libapache2-mod-wsgi-py3
RUN apt-get -y install libapache2-mod-wsgi-py3
RUN apt -y install certbot python3-certbot-apache
RUN apt-get -y install python3-pip
RUN apt update
# Check Python and pip versions
RUN python3 --version && \
    pip3 --version
#Add sf to avoid ln: failed to create hard link '/usr/bin/pip': File exists
RUN ln -sf /usr/bin/pip3 /usr/bin/pip
RUN pip install --upgrade pip --break-system-packages
RUN pip install django ptvsd --break-system-packages
RUN apt install wait-for-it
RUN pip install cffi --upgrade --break-system-packages
RUN pip install -U pip setuptools --break-system-packages
RUN apt-get -y install gettext
RUN apt-get -y install poppler-utils
RUN apt-get -y install redis-server
RUN apt-get install python3-pymysql
RUN a2enmod headers
RUN service apache2 restart
COPY www/demo_app/water_maps/requirements.txt requirements.txt
RUN python3 -m pip install --upgrade setuptools
RUN pip install -r requirements.txt --break-system-packages
ADD ./demo_site.conf /etc/apache2/sites-available/000-default.conf
EXPOSE 80 5432
WORKDIR /var/www/html/demo_app
#CMD ["apache2ctl", "-D", "FOREGROUND"]
#CMD ["python", "manage.py", "migrate", "--no-input"]
</code></pre>
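<p>For what it's worth, this traceback pattern often comes from a setuptools/distutils mismatch: <code>distutils-precedence.pth</code> is installed by the apt-packaged setuptools and expects to import <code>_distutils_hack</code>, which a pip-upgraded setuptools (installed with <code>--break-system-packages</code>) or a second interpreter made the default via <code>update-alternatives</code> can break. One way to avoid the mix is to stick to one interpreter and isolate the app in a virtualenv; a sketch of that alternative (package names and paths illustrative, untested):</p>

```dockerfile
FROM ubuntu

ARG DEBIAN_FRONTEND=noninteractive
RUN apt-get update && apt-get install -y \
    apache2 libapache2-mod-wsgi-py3 python3 python3-venv

# One interpreter only: the system python3 that mod_wsgi was built
# against, with all app packages isolated in a virtualenv instead of
# being mixed into /usr/lib/python3/dist-packages.
RUN python3 -m venv /opt/venv
ENV PATH="/opt/venv/bin:$PATH"

COPY www/demo_app/water_maps/requirements.txt requirements.txt
RUN pip install --upgrade pip && pip install -r requirements.txt
```

<p>With a venv you would also point mod_wsgi at it, e.g. <code>WSGIPythonHome /opt/venv</code> in the Apache config.</p>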
<p><strong>docker-compose.yaml</strong></p>
<pre><code>version: "2"
services:
  db:
    image: postgres:14
    restart: always
    volumes:
      - ./data/db:/var/lib/postgresql/data
      - ./www/:/var/www/html
      - ./www/demo_app/kml_files:/var/www/html/demo_app/kml_files
      - ./www/demo_app/temp_kml_file:/var/www/html/demo_app/temp_kml_file
      - ./www/demo_app/upload:/var/www/html/demo_app/upload
      - ./data/log:/var/log/apache2
    ports:
      - '5432:5432'
    environment:
      - POSTGRES_DB=database_innvoentiq
      - POSTGRES_USER=database_user_innvoentiq
      - POSTGRES_PASSWORD=Yahoo000@
  django-apache2:
    build: .
    container_name: water_maps
    restart: always
    environment:
      - POSTGRES_DB=database_innvoentiq
      - POSTGRES_USER=database_user_innvoentiq
      - POSTGRES_PASSWORD=Yahoo000@
    ports:
      - 5000:80
      - 5001:443
      # - 80:80
      # - 443:443
    volumes:
      - ./www/:/var/www/html
      - ./www/demo_app/kml_files:/var/www/html/demo_app/kml_files
      - ./www/demo_app/temp_kml_file:/var/www/html/demo_app/temp_kml_file
      - ./www/demo_app/upload:/var/www/html/demo_app/upload
      - ./data/log:/var/log/apache2
      # - ./data/config/etc/apache2:/etc/apache2
    # command: sh -c 'python3 manage.py migrate && python3 manage.py loaddata the_db.json '
    command: sh -c 'wait-for-it db:5432 -- python3 manage.py makemigrations && python3 manage.py migrate && python3 manage.py collectstatic --noinput && python3 manage.py compilemessages && apache2ctl -D FOREGROUND'
    # command: sh -c 'wait-for-it db:5432 -- python manage.py migrate && python manage.py loaddata last.json && apache2ctl -D FOREGROUND'
    depends_on:
      - db
</code></pre>
<p><strong>/demo_site.conf</strong></p>
<pre><code>WSGIPythonPath /var/www/html/demo_app

<VirtualHost *:80>
    # The ServerName directive sets the request scheme, hostname and port that
    # the server uses to identify itself. This is used when creating
    # redirection URLs. In the context of virtual hosts, the ServerName
    # specifies what hostname must appear in the request's Host: header to
    # match this virtual host. For the default virtual host (this file) this
    # value is not decisive as it is used as a last resort host regardless.
    # However, you must set it for any further virtual host explicitly.
    #ServerName www.example.com
    # ServerName test3.watermaps-eg.com
    # ServerAlias test3.watermaps-eg.com

    ServerAdmin webmaster@localhost
    DocumentRoot /var/www/html/

    Alias /static /var/www/html/demo_app/static
    Alias /en/upload /var/www/html/demo_app/upload
    Alias /ar/upload /var/www/html/demo_app/upload
    Alias /upload /var/www/html/demo_app/upload

    WSGIScriptAlias / /var/www/html/demo_app/water_maps/wsgi.py
    WSGIPassAuthorization On

    # Available loglevels: trace8, ..., trace1, debug, info, notice, warn,
    # error, crit, alert, emerg.
    # It is also possible to configure the loglevel for particular
    # modules, e.g.
    #LogLevel info ssl:warn

    <Directory /var/www/html/demo_app>
        Require all granted
    </Directory>
    <Directory /var/www/html/demo_app/static>
        Require all granted
    </Directory>
    <Directory /var/www/html/demo_app/upload>
        Require all granted
    </Directory>
    <Directory /var/www/html/demo_app/water_maps>
        <Files wsgi.py>
            Require all granted
        </Files>
    </Directory>

    Header add Access-Control-Allow-Origin "*"
    Header set Access-Control-Allow-Origin "*"
    Header set Access-Control-Allow-Headers "*"

    ErrorLog ${APACHE_LOG_DIR}/error.log
    CustomLog ${APACHE_LOG_DIR}/access.log combined

    # For most configuration files from conf-available/, which are
    # enabled or disabled at a global level, it is possible to
    # include a line for only one particular virtual host. For example the
    # following line enables the CGI configuration for this host only
    # after it has been globally disabled with "a2disconf".
    #Include conf-available/serve-cgi-bin.conf

    TimeOut 7200
</VirtualHost>
</code></pre>
|
<python><python-3.x><linux><docker><ubuntu>
|
2024-05-29 12:50:50
| 0
| 4,483
|
Ahmed Wagdi
|
78,549,567
| 9,506,773
|
How to extract document page associated to each chunk extracted from PDF in custom WebApiSkill
|
<p>I have the following <a href="https://learn.microsoft.com/en-us/azure/search/cognitive-search-custom-skill-web-api" rel="nofollow noreferrer">custom WebApiSkill</a>:</p>
<pre class="lang-py prettyprint-override"><code>@app.route(route="CustomSplitSkill", auth_level=func.AuthLevel.FUNCTION)
def CustomSplitSkill(req: func.HttpRequest) -> func.HttpResponse:
    logging.info('Python HTTP trigger function processed a request.')
    try:
        req_body = req.get_json()
    except ValueError:
        return func.HttpResponse("Invalid input", status_code=400)
    try:
        # 'values' expected top-level key in the request body
        response_body = {"values": []}
        for value in req_body.get('values', []):
            recordId = value.get('recordId')
            text = value.get('data', {}).get('text', '')
            # Remove sequences of dots, numbers following them, and
            # any additional punctuation or newline characters, replacing them with a single space
            cleaned_text = re.sub(r"[',.\n]+|\d+", ' ', text)
            # Replace multiple spaces with a single space and trim leading/trailing spaces
            cleaned_text = re.sub(r'\s{2,}', ' ', cleaned_text).strip()
            # Pattern to match sequences of ". " occurring more than twice
            cleaned_text = re.sub(r"(\. ){3,}", "", cleaned_text)
            chunks = split_text_into_chunks_with_overlap(cleaned_text, chunk_size=256, overlap_size=20)
            # response object for specific pdf
            response_record = {
                "recordId": recordId,
                "data": {
                    "textItems": chunks
                }
            }
            response_body['values'].append(response_record)
        return func.HttpResponse(json.dumps(response_body), mimetype="application/json")
    except ValueError:
        return func.HttpResponse("Function app crashed", status_code=400)
</code></pre>
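<p>For reference, the <code>split_text_into_chunks_with_overlap</code> helper is not shown above; a minimal word-based sketch of what it might look like (assuming <code>chunk_size</code> and <code>overlap_size</code> are counted in words) is:</p>

```python
def split_text_into_chunks_with_overlap(text, chunk_size=256, overlap_size=20):
    """Split text into word-based chunks where consecutive chunks
    share the last `overlap_size` words of the previous chunk."""
    words = text.split()
    if not words:
        return []
    step = chunk_size - overlap_size  # how far the window advances each time
    chunks = []
    for start in range(0, len(words), step):
        chunks.append(" ".join(words[start:start + chunk_size]))
        if start + chunk_size >= len(words):
            break  # this chunk already reached the end of the text
    return chunks
```

<p>Consecutive chunks then overlap by <code>overlap_size</code> words, which is one common way to keep context across chunk boundaries.</p>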
<p>The inputs and outputs of this skill in the skillset are defined like this:</p>
<pre class="lang-py prettyprint-override"><code>inputs=[
    InputFieldMappingEntry(name="text", source="/document/content")
],
outputs=[
    OutputFieldMappingEntry(name="textItems", target_name="pages")
],
</code></pre>
<p>How should I extract page information for each chunk?</p>
|
<python><azure-functions><azure-cognitive-services><azure-cognitive-search>
|
2024-05-29 12:33:28
| 1
| 3,629
|
Mike B
|
78,549,448
| 832,490
|
Langchain with Redis responding only to the previous question
|
<p>I am using Langchain with Redis as the persistence layer. It works, but only sort of: I see the following strange behavior.</p>
<p>I send a message, and the model always responds to the previous prompt's message.</p>
<p>I don't know what's wrong; I followed the official documentation and also looked at other code, and it seems correct.</p>
<p>See below:</p>
<pre><code>$ curl -XPOST -H "session-id: 123" -d '{"message": "what is the capital of united states?"}' http://localhost:8000
{"message":"Of course! How can I assist you today?"}
</code></pre>
<p>Again, a different prompt; now I get the previous question's (correct) answer:</p>
<pre><code>$ curl -XPOST -H "session-id: 123" -d '{"message": "hello?"}' http://localhost:8000
{"message":"The capital of the United States is Washington, D.C."}%
</code></pre>
<p>Code</p>
<pre><code>import os
from typing import Any

import orjson
from langchain.globals import set_debug
from langchain_community.chat_message_histories import RedisChatMessageHistory
from langchain_core.messages import SystemMessage
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.prompts import MessagesPlaceholder
from langchain_core.runnables.history import RunnableWithMessageHistory
from langchain_openai import ChatOpenAI
from starlette.applications import Starlette
from starlette.middleware import Middleware
from starlette.middleware.cors import CORSMiddleware
from starlette.requests import Request
from starlette.responses import JSONResponse
from starlette.routing import Route
from tenacity import retry
from tenacity import stop_after_attempt

set_debug(True)

llm = ChatOpenAI(
    model="gpt-4o",
    temperature=0,
    openai_api_key=os.environ["OPENAI_APIKEY"],
)

prompt = ChatPromptTemplate.from_messages(
    [
        SystemMessage(content="You are a helpful assistant."),
        MessagesPlaceholder(variable_name="history"),
        # HumanMessage(content="{question}"),
    ]
)

chain = prompt | llm

chain_with_history = RunnableWithMessageHistory(
    chain,
    lambda session_id: RedisChatMessageHistory(session_id, url=os.environ["REDIS_DSN"]),
    input_messages_key="question",
    history_messages_key="history",
)

class OrjsonResponse(JSONResponse):
    def render(self, content: Any) -> bytes:
        return orjson.dumps(content)

@retry(stop=stop_after_attempt(3))
async def echo(request: Request):
    data = await request.json()
    output = chain_with_history.invoke(
        {"question": data["message"]},
        config={"configurable": {"session_id": request.headers["session-id"]}},
    )
    return OrjsonResponse({"message": output.content})

app = Starlette(
    routes=[
        Route("/", echo, methods=["POST"]),
    ],
    middleware=[
        Middleware(CORSMiddleware, allow_origins=["*"], allow_methods=["POST"])
    ],
)
</code></pre>
|
<python><langchain><large-language-model>
|
2024-05-29 12:09:56
| 1
| 1,009
|
Rodrigo
|
78,549,178
| 12,890,458
|
TimestampedGeoJson with marker showing heading
|
<p>I want to animate a boat travelling on a route. At the moment I can show the route as a line and the boat as a circle using TimestampedGeoJson:</p>
<pre><code># circle with following line
features = [
    {
        'type': 'Feature',
        'geometry': {
            'type': 'LineString',
            'coordinates': coordinates_list,
        },
        'properties': {
            'popup': 'ship position',
            'times': times_list,
            'icon': 'circle',
            'iconstyle': {
                'fillColor': 'green',
                'fillOpacity': 0.6,
                'stroke': 'false',
                'radius': 13,
            },
            'style': {
                'color': 'green',
                'weight': 2,
                'opacity': 0.6
            },
            'id': 'man',
        },
    }
]

plugins.TimestampedGeoJson(
    {'type': 'FeatureCollection', 'features': features},
    period='PT30M',
    add_last_point=True,
    auto_play=False,
    loop=False,
    max_speed=1,
    loop_button=True,
    date_options='YYYY/MM/DD',
    time_slider_drag_update=True,
    duration='P1M',
).add_to(m)
</code></pre>
<p>This yields the following image:
<a href="https://i.sstatic.net/KcCKTnGy.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/KcCKTnGy.png" alt="enter image description here" /></a></p>
<p>I also want to show the heading of the ship, so a plain circle is no longer sufficient. How can I keep the same functionality (a marker followed by a line, and a time slider) while showing a marker that indicates the heading?</p>
|
<javascript><python><leaflet><folium>
|
2024-05-29 11:20:30
| 1
| 460
|
Frank Tap
|
78,549,152
| 12,846,524
|
Single log file from multiple modules with the same timestamp
|
<p>I know that there are many other similar questions to mine, but I have not found one that deals specifically with a log file having a timestamp as a part of its name.</p>
<p>I have 4 modules (currently) that I want to implement logging within. I also have a <code>base_logger.py</code> script that I am using to create my logging object. The code is below:</p>
<pre><code>import configparser
import logging
import sys
from datetime import datetime
from pathlib import Path

def write_default_config(path: Path):
    # This function creates a 'config.ini' if it does not already exist
    conf = configparser.ConfigParser()
    conf['logging'] = {'level': 'INFO'}
    with open(path, 'w') as configfile:
        conf.write(configfile)

def setup_config_and_logging():
    # Create a location for the log file to be saved to, and find the config file
    logs_dir_name = "logs"
    config_file_name = "config.ini"
    if getattr(sys, 'frozen', False):  # if running as .exe, save logs to folder in cwd
        script_dir = Path(sys.executable).parent.absolute()
        logs_dir = script_dir / logs_dir_name
        config_file_path = script_dir / config_file_name
    else:  # if running in IDE, save logs to folder at src level
        script_dir = Path(__file__).parent.absolute()
        logs_dir = script_dir.parent.parent / logs_dir_name
        config_file_path = script_dir.parent.parent / config_file_name
    logs_dir.mkdir(parents=True, exist_ok=True)
    if not config_file_path.exists():
        write_default_config(config_file_path)

    # Read the logging level
    config = configparser.ConfigParser()
    config.read(config_file_path)
    logging_level = config.get("logging", "level", fallback="DEBUG")

    # Path for the log file (name dependent on logging level)
    log_file_path = logs_dir / f"app-{logging_level}-{datetime.now().strftime('%Y%m%d-%H%M%S')}.log"

    # Setup logging with the specified level
    level = getattr(logging, logging_level.upper(), logging.DEBUG)  # DEBUG is the default value
    logging.basicConfig(level=level,
                        format='%(asctime)s - %(name)s - %(levelname)s - %(message)s',
                        handlers=[
                            logging.FileHandler(log_file_path),
                            logging.StreamHandler()
                        ])

def get_logger() -> logging.Logger:
    # Provide a logger instance for other modules to use
    return logging.getLogger()
</code></pre>
<p>As you can see, the filename for the log files (<code>log_file_path</code>) contains a timestamp for when the log file was created. However, this ends up creating 4 log files as the modules that import it all occur at different times (e.g. <code>app-INFO-20240529-115900.log</code>, <code>app-INFO-20240529-115901.log</code>, ...).</p>
<p>If I were to remove the timestamp from the filename, only one file is created and it correctly contains the logged information from all of my modules.</p>
<p>Just to be clear, when importing the logging object into other modules for use, I use this code:</p>
<pre><code>from base_logger import get_logger
log = get_logger()
</code></pre>
<p>I have previously tried to define a global variable <code>initialised</code> that would be checked within each import of my <code>base_logger.py</code> module, but this global value was reset to <code>False</code> each time; hence my check for whether it was <code>True</code> (i.e. whether the logger had already been created) always came up false, and the logger was recreated.</p>
<p>Any advice or solutions would be greatly appreciated!</p>
|
<python><logging><config><python-logging>
|
2024-05-29 11:16:44
| 0
| 374
|
AlexP
|
78,549,141
| 6,145,729
|
Python XLSWINGS - Cannot access read-only document
|
<p>I'm opening an Excel file for data entry using Python, pandas & xlwings. Everything works fine as long as the Excel file is in an editable state (i.e. not opened by another user).</p>
<p>Firstly, has anyone overcome the 'Cannot access read-only document' error when the file is in use or locked by another user? Or got any tips for dealing with this?</p>
<p>Are there any alternative methods to see who has it open so I can notify them to close the file? I'm accessing the file via SharePoint and always assumed dual working was enabled, but clearly not.</p>
|
<python><xlwings>
|
2024-05-29 11:14:40
| 0
| 575
|
Lee Murray
|
78,549,100
| 16,525,263
|
How to select items inside a python list and add it to a dataframe
|
<p>I have a pyspark dataframe with below columns</p>
<pre><code>Dataframe: httpClient
[capacity: string, version: string]
</code></pre>
<p>and I have a list of columns declared as
<code>httpClient_fields = ["capacity", "`httpClient.install`", "date"]</code></p>
<p>I need to check whether the dataframe has each of the list items. If an item does not exist in the dataframe, I need to add it as a column with empty values.
So, in the result, I need:</p>
<pre><code>Dataframe: httpClient
[capacity: string, version: string, `httpClient.install`: string, date: string]
</code></pre>
<p>This is my code now:</p>
<pre><code>df_cols = httpClient.columns
for f in httpClient_fields:
    if f not in df_cols:
        httpClient = httpClient.withColumn(f, F.lit(''))

httpClient = httpClient.select(*httpClient_fields).dropDuplicates().repartition(1)
httpClient = httpClient.withColumnRenamed("httpClient.install", "httpClient_install")
</code></pre>
<p>When I execute this, I get
<code>cannot resolve '`httpClient.install`'</code></p>
<p>Please let me know how to solve this</p>
|
<python><pyspark>
|
2024-05-29 11:05:56
| 2
| 434
|
user175025
|
78,549,040
| 4,465,708
|
How to access the actual tab widget of ttk.Notebook?
|
<p>I need to access the actual tab widget that <code>ttk.Notebook</code> creates when some other widget (usually a frame) is added to it (and then called a "tab" in Tkinter parlance).</p>
<p>I pictured the exact thing needed below to avoid any confusion:
<a href="https://i.sstatic.net/KnorFhuG.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/KnorFhuG.png" alt="enter image description here" /></a></p>
<p>The <code>children</code> attribute on the Notebook object returns an empty dictionary for me (but calling <code>tabs()</code> on it returns a tuple of string <em>tab_id</em>'s of the frames added to it).</p>
<p>My use case is that I'd like to attach a custom tooltip widget to an actual tab (but not to the tab's contents). My custom <code>Tooltip</code> object needs a <code>tk.Widget</code> object as a parent for the <code>tk.Toplevel</code> it spawns. When I pass tab contents to it (a frame widget added to the notebook that I have easy access to), a tooltip spawns inside the notebook, obscuring the view, which is not very helpful. Hence, I'd like to attach it to the actual tab widget only.</p>
<p>The code for <code>Tooltip</code> class is below.</p>
<pre class="lang-py prettyprint-override"><code>class ToolTip:
    X_OFFSET = 20
    Y_OFFSET = 20
    JUSTIFY = tk.LEFT
    BG_COLOR = "#ffffe0"  # light-yellowish
    RELIEF = tk.SOLID
    BORDER_WIDTH = 1

    @property
    def text(self) -> str:
        return self.__text

    @text.setter
    def text(self, value: str) -> None:
        self.__text = value

    @property
    def x(self) -> int:
        return self._widget.winfo_rootx() + self._x

    @property
    def y(self) -> int:
        return self._widget.winfo_rooty() + self._y

    def __init__(self, widget: tk.Widget, text: str) -> None:
        self._widget = widget
        self.text = text
        self._tip_window = None
        self._tip_showing_id = None
        self._x = self._y = 0
        self._bind_callbacks()

    def _bind_callbacks(self) -> None:
        self._entering_id = self._widget.bind("<Enter>", self._on_entered)
        self._leaving_id = self._widget.bind("<Leave>", self._on_left)
        self._button_pressing_id = self._widget.bind("<ButtonPress>", self._on_left)

    def _unbind_callbacks(self) -> None:
        self._widget.unbind("<Enter>", self._entering_id)
        self._widget.unbind("<Leave>", self._leaving_id)
        self._widget.unbind("<ButtonPress>", self._button_pressing_id)

    def _on_entered(self, event: tk.Event) -> None:
        self._x, self._y = event.x, event.y
        self._schedule()

    def _on_left(self, _) -> None:
        self._unschedule()
        self._hide_tip()

    def _schedule(self) -> None:
        self._unschedule()
        self._tip_showing_id = self._widget.after(1500, self._show_tip)

    def _unschedule(self) -> None:
        widget_id = self._tip_showing_id
        self._tip_showing_id = None
        if widget_id:
            self._widget.after_cancel(widget_id)

    def _show_tip(self) -> None:
        if self._tip_window or not self.text:
            return
        self._tip_window = tw = tk.Toplevel(self._widget)
        tw.wm_overrideredirect(True)
        tw.wm_geometry('+%d+%d' % (self.x, self.y))
        label = tk.Label(
            self._tip_window,
            text=self.text,
            justify=self.JUSTIFY,
            background=self.BG_COLOR,
            relief=self.RELIEF,
            borderwidth=self.BORDER_WIDTH
        )
        label.pack(ipadx=5, ipady=2)

    def _hide_tip(self) -> None:
        tw = self._tip_window
        self._tip_window = None
        if tw:
            tw.destroy()
</code></pre>
<h3>UPDATE</h3>
<p>So, the whole question turned out to be quite moot considering the tooltip use case I described. My blunder was failing to see that passing the <code>ttk.Notebook</code> itself to my <code>Tooltip</code> object (instead of passing a tab's content frame like I tried) results in tooltips spawning over tabs and not in the middle of content - just like I wanted.</p>
<p>Still, <a href="https://stackoverflow.com/a/78571059/4465708">the answer I got from 'patthoyts'</a> was essential because it enabled me to change a tooltip's text depending on the tab being hovered over, and that way achieve exactly the effect I wanted.</p>
<p>In summary:</p>
<ul>
<li>I passed <code>ttk.Notebook</code> instead of content frame to <code>Tooltip</code></li>
<li>I bound the <code><<Motion>></code> event on the notebook to a callback that changed the <code>text</code> property on the tooltip based on the tab being hovered over</li>
</ul>
|
<python><tkinter><tabs><widget><ttk>
|
2024-05-29 10:57:56
| 1
| 3,746
|
z33k
|
78,548,882
| 1,651,598
|
Kivy - Images appear pixelated
|
<p>I am using a <code>Rectangle()</code> <code>Instruction</code> added to a <code>Canvas()</code> to display a rather large .png which results in the image looking quite pixelated.</p>
<p>Here is the original and the rendered version in kivy.</p>
<p>Am I missing something obvious that would make the scaling "this bad"?</p>
<p><a href="https://i.sstatic.net/2fkzt63M.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/2fkzt63M.png" alt="original" /></a></p>
<p><a href="https://i.sstatic.net/rUgWeP2k.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/rUgWeP2k.png" alt="kivy rendering" /></a></p>
|
<python><kivy>
|
2024-05-29 10:25:45
| 0
| 13,598
|
Gung Foo
|
78,548,534
| 12,201,164
|
Bluetooth BTLE: how to connect reliably to device
|
<p>Problem: Connecting to a BTLE device succeeds about 1 out of 10 times. How can I improve this and reliably connect to the device?</p>
<p>I have a <a href="https://sensirion.com/products/catalog/SHT4x-Smart-Gadget/" rel="nofollow noreferrer">Sensirion SHT41x</a> that I want to access via Bluetooth from my Windows laptop. This does work (see script below), but only about 1 out of 10 times. When it does not work, it returns:</p>
<pre><code>raise BleakDeviceNotFoundError(
bleak.exc.BleakDeviceNotFoundError: Device with address ... was not found.
</code></pre>
<p>The device is about half a meter from my laptop. What can I do to fix this?</p>
<p>So far tried:</p>
<ul>
<li>re-run the script</li>
<li>turn off and on Bluetooth of the device</li>
<li>restarted the device by removing the battery</li>
</ul>
<p>Here the simple script:</p>
<pre><code>import asyncio
from bleak import BleakClient

address = "..."

async def connect_device():
    async with BleakClient(address) as client:
        connected = await client.is_connected()
        print(f"Connected: {connected}")
        disconnected = await client.disconnect()
        print(f"Disconnected: {disconnected}")

asyncio.run(connect_device())
</code></pre>
<p>EDIT:</p>
<p>I added re-tries and exception handling according to Klaus D.'s suggestion:</p>
<pre><code>import asyncio
from bleak import BleakClient
from loguru import logger

async def just_connect(mac):
    async with BleakClient(mac) as client:
        pass
    await asyncio.sleep(2)

async def main():
    mac = "C5:5A:42:A4:3C:80"
    tries = 10
    timeout = 12
    for try_ in range(tries):
        try:
            await asyncio.wait_for(just_connect(mac), timeout)
            logger.info(f"try {try_} succeeded")
        except Exception as e:
            logger.info(f"try {try_} failed due to {type(e).__name__}: {e}")

asyncio.run(main())
</code></pre>
<p>As the logs below show, the attempts do not always fail, but they fail often enough to defeat the whole purpose of BLE for IoT.</p>
<pre><code>2024-06-02 15:25:25.346 | INFO | __main__:<module>:24 - try 0 failed due to BleakDeviceNotFoundError: Device with address C5:5A:42:A4:3C:80 was not found.
2024-06-02 15:25:35.366 | INFO | __main__:<module>:24 - try 1 failed due to BleakDeviceNotFoundError: Device with address C5:5A:42:A4:3C:80 was not found.
2024-06-02 15:25:45.377 | INFO | __main__:<module>:24 - try 2 failed due to BleakDeviceNotFoundError: Device with address C5:5A:42:A4:3C:80 was not found.
2024-06-02 15:25:55.907 | INFO | __main__:<module>:24 - try 3 failed due to TimeoutError:
2024-06-02 15:26:05.941 | INFO | __main__:<module>:24 - try 4 failed due to BleakDeviceNotFoundError: Device with address C5:5A:42:A4:3C:80 was not found.
2024-06-02 15:26:17.953 | INFO | __main__:<module>:24 - try 5 failed due to TimeoutError:
2024-06-02 15:26:27.951 | INFO | __main__:<module>:24 - try 6 failed due to BleakDeviceNotFoundError: Device with address C5:5A:42:A4:3C:80 was not found.
2024-06-02 15:26:39.963 | INFO | __main__:<module>:24 - try 7 failed due to TimeoutError:
2024-06-02 15:26:51.971 | INFO | __main__:<module>:24 - try 8 failed due to TimeoutError:
2024-06-02 15:27:01.978 | INFO | __main__:<module>:24 - try 9 failed due to BleakDeviceNotFoundError: Device with address C5:5A:42:A4:3C:80 was not found.
</code></pre>
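<p>For reference, the retry loop above can be factored into a generic helper that is independent of bleak and adds exponential backoff between attempts (a sketch; <code>retry_async</code> and its parameters are my own names, not a bleak API):</p>

```python
import asyncio
import random

async def retry_async(coro_factory, tries=10, timeout=12, base_delay=1.0):
    """Await coro_factory() up to `tries` times, with a per-attempt timeout
    and exponential backoff (plus a little jitter) between attempts.

    coro_factory must be a zero-argument callable returning a fresh
    coroutine, because a coroutine object cannot be awaited twice.
    """
    last_exc = None
    for attempt in range(tries):
        try:
            return await asyncio.wait_for(coro_factory(), timeout)
        except Exception as exc:
            last_exc = exc
            if attempt < tries - 1:
                # back off: base, 2*base, 4*base, ... capped at 30 seconds
                delay = min(base_delay * 2 ** attempt, 30)
                await asyncio.sleep(delay + random.random() * base_delay)
    raise last_exc

# hypothetical usage with the question's coroutine:
# asyncio.run(retry_async(lambda: just_connect(mac), tries=10, timeout=12))
```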
|
<python><bluetooth><iot><btle><python-bleak>
|
2024-05-29 09:23:22
| 0
| 398
|
Dr-Nuke
|
78,548,388
| 232,831
|
force_authenticate and middlewares
|
<p>I'm trying to write tests using APITestCase and the provided <code>self.client</code>.</p>
<p>I'm using <code>self.client.force_authenticate</code> to authenticate a user. It looks like it's handled in <code>rest_framework.request.Request</code> constructor, itself created from <code>rest_framework/views.py(391)initialize_request()</code>, so <strong>after</strong> the middleware has been executed.</p>
<p>If I set a breakpoint in a middleware, here is what I see:</p>
<pre><code>(Pdb) p request
<WSGIRequest: GET '/api/admin/products/'>
(Pdb) p request.user
<SimpleLazyObject: <django.contrib.auth.models.AnonymousUser object at 0x7f537e0aa9a0>>
(Pdb) p request._force_auth_user
<User: admin>
</code></pre>
<p>The <code>._force_auth_user</code> attribute placed by DRF is there, but <code>.user</code> is not set yet.</p>
<p>Is there a clean way to <code>force_authenticate</code> and have the <code>request.user</code> available during the middleware execution (at least after the <code>AuthenticationMiddleware</code>)?</p>
|
<python><django-rest-framework>
|
2024-05-29 08:54:26
| 0
| 12,036
|
Julien Palard
|
78,548,374
| 1,169,096
|
extract individual files from concatenated gzipped files
|
<p>I know that we can easily concatenate multiple gzipped files to create one big (valid) gzip file that contains all the data:</p>
<pre class="lang-bash prettyprint-override"><code>gzip -c A > A.gz
gzip -c B > B.gz
cat A.gz B.gz > AB.gz
</code></pre>
<p>If I then unzip the resulting <code>AB.gz</code>, I get the content of both A and B:</p>
<pre class="lang-bash prettyprint-override"><code>cat A B > ab
gzip -dc AB.gz > AB
diff ab AB
</code></pre>
<p>However, I would like to restore the input files individually.</p>
<p>Now <code>gzip(1)</code> is rather explicit:</p>
<blockquote>
<p>If you wish to create a single archive file with multiple members so that members can later be extracted independently, use an archiver such as tar or zip. [...] gzip is designed as a complement to tar, not as a replacement.</p>
</blockquote>
<p>Unfortunately, I cannot really use <code>tar</code> as an intermediate layer (for $reasons).</p>
<p>Since gzip stores meta-information within the gz file (the original filename) and I can concatenate individual gz files, I figure that there must be some chunk marker, so gzip knows where it needs to reset.</p>
<p>But: how?</p>
<p>There's a <code>--name</code> flag to restore a (single) member by its original name, but with multi-member gzip files it will just use the name of the first member to restore the concatenated content:</p>
<pre class="lang-bash prettyprint-override"><code>gzip -kdN AB.gz
</code></pre>
<p>If this is not possible with the <code>gzip</code> cmdline tool, a Python solution would be fine as well...</p>
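<p>To show the direction I'm hoping for, here is a minimal Python sketch of splitting a concatenated stream, relying on the assumption that <code>zlib</code>'s decompressor stops at each member boundary and exposes the remainder via <code>unused_data</code>:</p>

```python
import gzip
import zlib

def split_gzip_members(blob):
    """Decompress each member of a concatenated gzip stream separately."""
    members = []
    while blob:
        # wbits=16+MAX_WBITS selects the gzip container format
        d = zlib.decompressobj(wbits=16 + zlib.MAX_WBITS)
        members.append(d.decompress(blob))
        if not d.eof:
            raise ValueError("truncated gzip member")
        blob = d.unused_data  # bytes after this member, i.e. the next one
    return members

combined = gzip.compress(b"content of A") + gzip.compress(b"content of B")
print(split_gzip_members(combined))  # → [b'content of A', b'content of B']
```

<p>This recovers the member contents but not the original filenames; reading the <code>FNAME</code> field of each member header would be a separate step.</p>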
|
<python><sh><extract><gzip>
|
2024-05-29 08:53:03
| 2
| 32,070
|
umläute
|
78,548,098
| 13,987,643
|
Chunking text with bounding box values
|
<p>I have used the Azure OCR service to extract text from PDFs. For each page in a PDF, the OCR output contains a list of text lines along with the bounding box values for each line. My original approach to chunking the text on each page was to first join the strings in the list of texts with "\n" and use RecursiveCharacterTextSplitter() from Langchain. However, with this approach I am not able to store the bounding box values for each chunk.</p>
<p>For each chunk, I want to be able to store the boundingbox value of the first and last line in the chunk. This is to ensure that I can later highlight the particular chunk in the PDF using the chunk's bounding box values.</p>
<p>I am now using a different chunking strategy: I iterate through each line on a page and keep a running character count; once it reaches a character limit, I add the text from that set of lines to a new chunk and start the process over with the next set of lines. This is my code:</p>
<pre><code>character_count, cumulative_lines, chunk_docs = 0, [], []
for line in item['lines']:
character_count += len(line['text'])
cumulative_lines.append(line)
if character_count >= character_limit:
chunk_start_bbox = cumulative_lines[0]['boundingBox']
chunk_end_bbox = cumulative_lines[-1]['boundingBox']
chunk_text = "\n".join(line['text'] for line in cumulative_lines)
chunk_metadata = metadata_dict.copy()
chunk_metadata.update({'chunk_start_bbox': chunk_start_bbox, 'chunk_end_bbox': chunk_end_bbox})
        chunk_docs.append(Document(page_content=chunk_text, metadata=chunk_metadata))
character_count = 0
cumulative_lines = []
</code></pre>
<p>However, I feel this is a very inefficient solution, going through each line on a page. Moreover, it also doesn't account for overlapping characters between chunks, the way Langchain does.</p>
<p>Can someone suggest a better way to achieve this? Also, is there a way I can make my code more performant?</p>
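<p>For reference, a sketch of a single-pass chunker that also carries a configurable line overlap (the <code>overlap_lines</code> parameter and the dict layout are assumptions of mine, not the Azure OCR schema):</p>

```python
def chunk_lines(lines, char_limit=5000, overlap_lines=0):
    """Group OCR lines into chunks of ~char_limit characters, keeping the
    bounding box of the first and last line of each chunk, and optionally
    re-using the last `overlap_lines` lines as the start of the next chunk."""
    def flush(buf):
        return {
            'text': '\n'.join(l['text'] for l in buf),
            'start_bbox': buf[0]['boundingBox'],
            'end_bbox': buf[-1]['boundingBox'],
        }

    chunks, buf, count, fresh = [], [], 0, 0
    for line in lines:
        buf.append(line)
        count += len(line['text'])
        fresh += 1  # lines not yet emitted in any chunk
        if count >= char_limit:
            chunks.append(flush(buf))
            buf = buf[-overlap_lines:] if overlap_lines else []
            count = sum(len(l['text']) for l in buf)
            fresh = 0
    if fresh:  # emit the tail only if it contains unemitted lines
        chunks.append(flush(buf))
    return chunks
```

<p>It is still a single pass over the lines, so the cost is linear in the page size; the per-line iteration itself is hard to avoid if per-chunk bounding boxes are needed.</p>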
|
<python><text><nlp><langchain>
|
2024-05-29 07:54:48
| 0
| 569
|
AnonymousMe
|
78,547,989
| 11,971,720
|
Best way to propagate cache-control policy in a FastAPI application?
|
<p>Using <code>FastAPI</code> and <code>FastAPICache</code>, we can use the <code>Cache-Control</code> header to control whether the API response is force-computed or allowed to be returned from a cache.</p>
<p>For the endpoint as a whole, when the Cache-Control policy allows it, if the same request object is seen again, it can be returned from a cache.</p>
<p>I want a similar behaviour at the function-level. For some function <code>bar</code> within my endpoint <code>foo</code> (potentially nested and in another file/module), which only takes one argument <code>a</code>, how can it tell if it's allowed to return from cache?</p>
<p>Obviously we could do something like <code>bar(a=request.a, cache_control=request.headers.cache_control)</code>, but I'm thinking something like:</p>
<p>In <code>main.py</code></p>
<pre><code>@router.post("/foo")
def foo(request: Request):
return bar(a=request.body.a)
</code></pre>
<p>In <code>utils.py</code></p>
<pre><code>def bar(a: int):
# something like:
if fastapi.current_context().cache_control().no_cache():
val = some_expensive_computation(a)
else:
val = cache.get(a, some_expensive_computation(a))
if not fastapi.current_context().cache_control().no_store():
cache.set(a, val)
return val
</code></pre>
|
<python><fastapi><cache-control>
|
2024-05-29 07:34:15
| 0
| 376
|
angryweasel
|
78,547,930
| 6,867,099
|
Show count in each histplot bin
|
<p>As explained in <a href="https://stackoverflow.com/questions/72206187/seaborn-histplot-print-y-values-above-each-bar">seaborn histplot - print y-values above each bar</a>, counts can be displayed on each bar in a 1-d histogram by using <code>.bar_label(ax.containers[0])</code>.</p>
<p>I'm struggling to figure out how to do the equivalent for a 2-d histogram (created with <code>sns.histplot(data, x='var1', y='var2')</code>).</p>
<p>I know I can make an annotation for the <code>(a,b)</code> bin with <code>.annotate('foo', xy=(a, b))</code>, but I'm not sure how to retrieve the count for that bin (to pass it to <code>.annotate()</code>).</p>
<p>I'd like the result to look similar to the one shown at <a href="https://seaborn.pydata.org/examples/spreadsheet_heatmap.html" rel="nofollow noreferrer">https://seaborn.pydata.org/examples/spreadsheet_heatmap.html</a>, except that it's a histplot, not a heatmap.</p>
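<p>For context, the per-bin counts can be recomputed independently with <code>np.histogram2d</code>, and the bin centers give annotation coordinates (a sketch; it only matches seaborn's display if the same <code>bins</code> are passed to <code>histplot</code>):</p>

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=500)
y = rng.normal(size=500)

# counts[i, j] is the number of points in x-bin i and y-bin j
counts, xedges, yedges = np.histogram2d(x, y, bins=5)
xcenters = (xedges[:-1] + xedges[1:]) / 2
ycenters = (yedges[:-1] + yedges[1:]) / 2

# with a matplotlib Axes `ax` holding the histplot, one could then do:
# for i, cx in enumerate(xcenters):
#     for j, cy in enumerate(ycenters):
#         if counts[i, j]:
#             ax.annotate(int(counts[i, j]), xy=(cx, cy),
#                         ha='center', va='center')
```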
|
<python><matplotlib><seaborn><histplot>
|
2024-05-29 07:23:37
| 2
| 1,950
|
Peter Thomassen
|
78,547,325
| 17,889,492
|
VS Code not selecting correct conda environment for py but works for ipynb
|
<p>I have a workspace with a <code>.py</code> and an <code>.ipynb</code> file. After I've selected the correct conda environment (<code>py39llm</code>), VS Code keeps using the base conda environment for the <code>.py</code> file but switches to the selected environment for the <code>.ipynb</code>:</p>
<p>Executing the following:</p>
<pre><code>import os
bashCommand = "conda info --env"
os.system(bashCommand)
</code></pre>
<p>on the <code>.ipynb</code>:</p>
<pre><code># conda environments:
#
base /home/fes33/anaconda3
py39llm * /home/fes33/anaconda3/envs/py39llm
</code></pre>
<p>on the <code>.py</code>:</p>
<pre><code># conda environments:
#
base * /home/fes33/anaconda3
py39llm /home/fes33/anaconda3/envs/py39llm
</code></pre>
<p>This happens even though, as the bar at the bottom shows, I've selected the correct environment:</p>
<p><a href="https://i.sstatic.net/bmOM0DYU.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/bmOM0DYU.png" alt="enter image description here" /></a></p>
|
<python><visual-studio-code><conda>
|
2024-05-29 04:32:37
| 1
| 526
|
R Walser
|
78,547,237
| 461,887
|
Fixing Future Warning Concat on excel files
|
<p>I am using glob to retrieve a list of CSV files and combine them before appending to Excel.</p>
<p>When I run the process I receive a FutureWarning:</p>
<blockquote>
<p>FutureWarning: The behavior of DataFrame concatenation with empty or
all-NA entries is deprecated. In a future version, this will no longer
exclude empty or all-NA columns when determining the result dtypes. To
retain the old behavior, exclude the relevant entries before the
concat operation.</p>
</blockquote>
<p>What do I need to do to this section of my code (below) to future proof it?</p>
<pre><code>import glob

import pandas as pd

QInboundCSV = glob.glob(r"Queue Inbound\*.csv")
QOutboundCSV = glob.glob(r"Queue Outbound\*.csv")

df_queue_inbound = pd.concat(map(pd.read_csv, QInboundCSV), ignore_index=True)
df_queue_outbound = pd.concat(map(pd.read_csv, QOutboundCSV), ignore_index=True)
</code></pre>
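<p>A sketch of how I read the warning's advice: exclude empty or all-NA frames before the concat (the helper name is hypothetical):</p>

```python
import pandas as pd

def concat_non_empty(frames, **kwargs):
    """Concatenate DataFrames, excluding empty or all-NA ones as the
    FutureWarning asks, so result dtypes stay stable across versions."""
    kept = [f for f in frames if not f.empty and not f.isna().all().all()]
    if not kept:
        return pd.DataFrame()
    return pd.concat(kept, ignore_index=True, **kwargs)

# usage with the CSV lists from the question (paths are placeholders):
# df_queue_inbound = concat_non_empty(map(pd.read_csv, QInboundCSV))
```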
|
<python><pandas>
|
2024-05-29 03:53:20
| 1
| 7,188
|
sayth
|
78,546,847
| 21,709,774
|
How do you get the StringVar (not the text value, the StringVar itself) of an Entry?
|
<p>I'm trying to get the StringVar object associated with an Entry widget, but I can't figure out how.</p>
<p>I found this question: <a href="https://stackoverflow.com/questions/56271721/get-stringvar-bound-to-entry-widget/56272010#56272010">get StringVar bound to Entry widget</a>
but none of the answers work for me. Both <code>entry["textvariable"]</code> and <code>entry.cget("textvariable")</code> return the <em>name</em> of the StringVar, not the StringVar itself (the second answer explains this properly, while the first is not clear that it only returns the name; I've submitted an edit to fix that). You're supposed to be able to get the StringVar from its name using <code>entry.getvar(name)</code>, but this returns a <code>str</code> with the contents of the StringVar instead of the StringVar itself. I don't understand why this is happening, because the answer that explains this is marked as correct, and the person who asked the question seems to have wanted the StringVar itself. Did something change? If so, how would I get the StringVar now? I'm using Python 3.11.9. I would also prefer a method that doesn't need the name of the StringVar, as an Entry without an explicitly set StringVar seems to have a StringVar without a name.</p>
<p>Here is some example code:</p>
<pre><code>from tkinter import *
from tkinter.ttk import *
root = Tk()
stringVar = StringVar(root, "test") # obviously in the real program I wouldn't be able to access this without using the Entry
entry = Entry(root, textvariable=stringVar)
entry.pack()
name1 = entry["textvariable"]
name2 = entry.cget("textvariable")
print(name1 == name2) # True
shouldBeStringVar = entry.getvar(name1)
print(name1, type(name1)) # PY_VAR0 <class 'str'>
print(shouldBeStringVar, type(shouldBeStringVar)) # test <class 'str'>
</code></pre>
|
<python><tkinter><tkinter-entry><ttk>
|
2024-05-29 00:06:42
| 2
| 308
|
Choosechee
|
78,546,844
| 25,091,707
|
Why is it vscode python could not find debugpy path when running without debugging?
|
<p><a href="https://i.sstatic.net/1qo8tX3L.png" rel="noreferrer"><img src="https://i.sstatic.net/1qo8tX3L.png" alt="screenshot of error message" /></a></p>
<p>I'm relatively new to editors like VS Code, and I seem to run into problems whenever I try to run a Python program without debugging (Ctrl + F5). A window pops up saying "Could not find debugpy path" and gives me the option of either opening launch.json or cancelling. Running from the "run file" button on the right works, so I'm curious what is causing this problem.</p>
<p>I opened launch.json and it's mostly empty:</p>
<pre class="lang-json prettyprint-override"><code>{
// Use IntelliSense to learn about possible attributes.
// Hover to view descriptions of existing attributes.
// For more information, visit: https://go.microsoft.com/fwlink/?linkid=830387
"version": "0.2.0",
"configurations": []
}
</code></pre>
<p><a href="https://i.sstatic.net/xVeVDBzi.png" rel="noreferrer"><img src="https://i.sstatic.net/xVeVDBzi.png" alt="screenshot of launch.json" /></a></p>
|
<python><visual-studio-code>
|
2024-05-29 00:05:58
| 2
| 343
|
Matt
|
78,546,774
| 7,846,884
|
How to add the filename into a new column using Polars scan_csv
|
<p>I'm reading multiple files with Polars, but I want to add the filename as an identifier in a new column.</p>
<pre><code>#how to add filenames to polars
lazy_dfs = (pl.scan_csv("data/file_*.tsv", separator="\t", has_header=False).fetch(n_rows= 500))
</code></pre>
|
<python><dataframe><csv><python-polars>
|
2024-05-28 23:19:04
| 2
| 473
|
sahuno
|
78,546,594
| 395,857
|
If a PyTorch model can be converted to onnx, can it always be converted to CoreML?
|
<p>If a PyTorch model can be converted to ONNX, can it always be converted to CoreML?</p>
|
<python><pytorch><coreml><onnx>
|
2024-05-28 22:03:10
| 0
| 84,585
|
Franck Dernoncourt
|
78,545,969
| 3,570,187
|
R workspace not loading correctly across two laptops
|
<p>I have recently gotten a new Mac and I am using Dropbox to sync files. With the online version of Dropbox I could not get the same workspace I see on my old Mac; I tried downloading the files, and it still shows the same. When I open my old Mac, I get a pop-up that Python quit unexpectedly, but I can see:</p>
<pre><code> sh: line 1: 6539 Abort trap: 6 '/Library/Frameworks/Python.framework/Versions/3.4/bin/python3.4' -E -c 'import platform; print(platform.python_version())' 2>&1
sh: line 1: 6541 Abort trap: 6 '/Library/Frameworks/Python.framework/Versions/3.4/bin/python3.4' -E -c 'import sys; print(sys.version)' 2>&1
[Workspace loaded from ~/Dropbox/..../.RData]
</code></pre>
<p>I do not get the same Python error on the new Mac, but the workspace is not loaded correctly there.</p>
<pre><code> [Workspace loaded from ~/Dropbox/..../.RData]
</code></pre>
<p>Any advice?</p>
<p><a href="https://pastebin.com/LJNEz3fR" rel="nofollow noreferrer">see more details in this link about the error</a></p>
|
<python><r><rstudio>
|
2024-05-28 19:03:39
| 0
| 1,773
|
user3570187
|
78,545,940
| 9,135,031
|
Error while launching Optuna Dashboard in python
|
<p>I am trying to launch the Optuna dashboard (<code>optuna-dashboard sqlite:///db.sqlite3</code>), but I am receiving this error:</p>
<pre><code>optuna-dashboard : The term 'optuna-dashboard' is not recognized as the name of a cmdlet, function, script file, or operable program. Check the spelling of the name, or if a path was included, verify that the path is correct and try again.
At line:1 char:1
+ optuna-dashboard sqlite:///db.sqlite3
+ ~~~~~~~~~~~~~~~~
+ CategoryInfo : ObjectNotFound: (optuna-dashboard:String) [], CommandNotFoundException
+ FullyQualifiedErrorId : CommandNotFoundException
</code></pre>
<p>I have also installed the Optuna extension in VS Code, but it is experimental, with less info than the localhost dashboard.</p>
|
<python><optuna>
|
2024-05-28 18:53:40
| 1
| 1,007
|
DrGenius
|
78,545,895
| 3,570,187
|
R studio unable to load properly
|
<p>My RStudio does not load properly. When I open it, I get the error below.</p>
<p>Any suggestions on how to handle this?</p>
<pre><code> sh: line 1: 5719 Abort trap: 6 '/Library/Frameworks/Python.framework/Versions/3.4/bin/python3.4' -E -c 'import platform; print(platform.python_version())' 2>&1
sh: line 1: 5721 Abort trap: 6 '/Library/Frameworks/Python.framework/Versions/3.4/bin/python3.4' -E -c 'import sys; print(sys.version)' 2>&1
</code></pre>
|
<python><r><rstudio>
|
2024-05-28 18:41:06
| 0
| 1,773
|
user3570187
|
78,545,846
| 2,463,655
|
Find streak of hot days based on values in another column
|
<p>I have a dataframe as below and I want to find streaks of hot days.</p>
<pre><code>dates = pd.date_range(start ='1-1-2018', end ='1-10-2018', freq ='1D')
np.random.seed(42)
temp = np.random.randint(60, 80, size=10)
df = pd.DataFrame({'dates': dates, 'temp':temp})
df["is_hot"] = np.where(df["temp"] > 70, 1, 0)
</code></pre>
<p>This creates a dataframe as below:</p>
<div class="s-table-container"><table class="s-table">
<thead>
<tr>
<th style="text-align: right;"></th>
<th style="text-align: left;">dates</th>
<th style="text-align: right;">temp</th>
<th style="text-align: right;">is_hot</th>
</tr>
</thead>
<tbody>
<tr>
<td style="text-align: right;">0</td>
<td style="text-align: left;">2018-01-01 00:00:00</td>
<td style="text-align: right;">66</td>
<td style="text-align: right;">0</td>
</tr>
<tr>
<td style="text-align: right;">1</td>
<td style="text-align: left;">2018-01-02 00:00:00</td>
<td style="text-align: right;">79</td>
<td style="text-align: right;">1</td>
</tr>
<tr>
<td style="text-align: right;">2</td>
<td style="text-align: left;">2018-01-03 00:00:00</td>
<td style="text-align: right;">74</td>
<td style="text-align: right;">1</td>
</tr>
<tr>
<td style="text-align: right;">3</td>
<td style="text-align: left;">2018-01-04 00:00:00</td>
<td style="text-align: right;">70</td>
<td style="text-align: right;">0</td>
</tr>
<tr>
<td style="text-align: right;">4</td>
<td style="text-align: left;">2018-01-05 00:00:00</td>
<td style="text-align: right;">67</td>
<td style="text-align: right;">0</td>
</tr>
<tr>
<td style="text-align: right;">5</td>
<td style="text-align: left;">2018-01-06 00:00:00</td>
<td style="text-align: right;">66</td>
<td style="text-align: right;">0</td>
</tr>
<tr>
<td style="text-align: right;">6</td>
<td style="text-align: left;">2018-01-07 00:00:00</td>
<td style="text-align: right;">78</td>
<td style="text-align: right;">1</td>
</tr>
<tr>
<td style="text-align: right;">7</td>
<td style="text-align: left;">2018-01-08 00:00:00</td>
<td style="text-align: right;">70</td>
<td style="text-align: right;">0</td>
</tr>
<tr>
<td style="text-align: right;">8</td>
<td style="text-align: left;">2018-01-09 00:00:00</td>
<td style="text-align: right;">70</td>
<td style="text-align: right;">0</td>
</tr>
<tr>
<td style="text-align: right;">9</td>
<td style="text-align: left;">2018-01-10 00:00:00</td>
<td style="text-align: right;">63</td>
<td style="text-align: right;">0</td>
</tr>
</tbody>
</table></div>
<p>I am able to find streaks, but I want them to reset whenever an <code>is_hot</code> value of 0 occurs in between.
I use this:</p>
<pre><code>df['streak'] = df.loc[df['is_hot'].eq(1)].groupby(df['is_hot'])['is_hot'].cumsum()
</code></pre>
<div class="s-table-container"><table class="s-table">
<thead>
<tr>
<th style="text-align: right;"></th>
<th style="text-align: left;">dates</th>
<th style="text-align: right;">temp</th>
<th style="text-align: right;">is_hot</th>
<th style="text-align: right;">streak</th>
</tr>
</thead>
<tbody>
<tr>
<td style="text-align: right;">0</td>
<td style="text-align: left;">2018-01-01 00:00:00</td>
<td style="text-align: right;">66</td>
<td style="text-align: right;">0</td>
<td style="text-align: right;">nan</td>
</tr>
<tr>
<td style="text-align: right;">1</td>
<td style="text-align: left;">2018-01-02 00:00:00</td>
<td style="text-align: right;">79</td>
<td style="text-align: right;">1</td>
<td style="text-align: right;">1</td>
</tr>
<tr>
<td style="text-align: right;">2</td>
<td style="text-align: left;">2018-01-03 00:00:00</td>
<td style="text-align: right;">74</td>
<td style="text-align: right;">1</td>
<td style="text-align: right;">2</td>
</tr>
<tr>
<td style="text-align: right;">3</td>
<td style="text-align: left;">2018-01-04 00:00:00</td>
<td style="text-align: right;">70</td>
<td style="text-align: right;">0</td>
<td style="text-align: right;">nan</td>
</tr>
<tr>
<td style="text-align: right;">4</td>
<td style="text-align: left;">2018-01-05 00:00:00</td>
<td style="text-align: right;">67</td>
<td style="text-align: right;">0</td>
<td style="text-align: right;">nan</td>
</tr>
<tr>
<td style="text-align: right;">5</td>
<td style="text-align: left;">2018-01-06 00:00:00</td>
<td style="text-align: right;">66</td>
<td style="text-align: right;">0</td>
<td style="text-align: right;">nan</td>
</tr>
<tr>
<td style="text-align: right;">6</td>
<td style="text-align: left;">2018-01-07 00:00:00</td>
<td style="text-align: right;">78</td>
<td style="text-align: right;">1</td>
<td style="text-align: right;">3</td>
</tr>
<tr>
<td style="text-align: right;">7</td>
<td style="text-align: left;">2018-01-08 00:00:00</td>
<td style="text-align: right;">70</td>
<td style="text-align: right;">0</td>
<td style="text-align: right;">nan</td>
</tr>
<tr>
<td style="text-align: right;">8</td>
<td style="text-align: left;">2018-01-09 00:00:00</td>
<td style="text-align: right;">70</td>
<td style="text-align: right;">0</td>
<td style="text-align: right;">nan</td>
</tr>
<tr>
<td style="text-align: right;">9</td>
<td style="text-align: left;">2018-01-10 00:00:00</td>
<td style="text-align: right;">63</td>
<td style="text-align: right;">0</td>
<td style="text-align: right;">nan</td>
</tr>
</tbody>
</table></div>
<p>In my example, I want the streak value for 2018-01-07 to be 1. I can fill the NaNs with 0, so that should not be a problem.</p>
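<p>One pattern I've come across (a sketch I haven't verified against my real data) groups consecutive equal values with a shift/cumsum key, so the count resets at every 0:</p>

```python
import pandas as pd

def hot_streak(is_hot):
    # a new group starts whenever the value changes; cumsum inside each
    # group then counts consecutive hot days and stays 0 on cold days
    group_id = (is_hot != is_hot.shift()).cumsum()
    return is_hot.groupby(group_id).cumsum()

s = pd.Series([0, 1, 1, 0, 0, 0, 1, 0, 0, 0])
print(hot_streak(s).tolist())  # → [0, 1, 2, 0, 0, 0, 1, 0, 0, 0]
```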
|
<python><pandas><dataframe>
|
2024-05-28 18:29:14
| 2
| 2,051
|
rAmAnA
|
78,545,812
| 2,778,405
|
Verify jwt token using only hashlib
|
<p>I find PyJWT and other auth libraries overly complex and, as a result, often broken. Providers also tend to add their own special steps. So I want to start dealing with JWTs without third-party libraries.</p>
<p>Here is a JWT I want to verify, using only crypto libraries. It comes from an AWS sandbox I set up and tore down, so there is no risk.</p>
<pre class="lang-py prettyprint-override"><code>jwt = 'eyJraWQiOiJIVWliRnBkUjM2MW92QUxRVFdVeGx3V0pOUmc1SEVRQkxsUjEzTWIyejI0PSIsImFsZyI6IlJTMjU2In0.eyJzdWIiOiIyODYxZTM4MC1mMDMxLTcwMzMtNDc3Ni1jNmRkMzY2MzA5ZmIiLCJjb2duaXRvOmdyb3VwcyI6WyJ0ZW5hbnQiLCJhbGx1c2VycyJdLCJpc3MiOiJodHRwczpcL1wvY29nbml0by1pZHAudXMtd2VzdC0yLmFtYXpvbmF3cy5jb21cL3VzLXdlc3QtMl9EQndXVEY1SDgiLCJjbGllbnRfaWQiOiI0ZHI2YzUxOTVkcTJxbTdiYTJ2bmJ1b2E1NSIsIm9yaWdpbl9qdGkiOiI3NmJkNmQ3Yy03NDkxLTRlMWUtOWFkNS0wYzVhNzRmZmI1ZTgiLCJldmVudF9pZCI6ImY0NGIzZmU1LWVhNTctNDdiNC04M2ZhLTc2MGQyYmJiYzZlNSIsInRva2VuX3VzZSI6ImFjY2VzcyIsInNjb3BlIjoiYXdzLmNvZ25pdG8uc2lnbmluLnVzZXIuYWRtaW4iLCJhdXRoX3RpbWUiOjE3MTY3MzI1NzUsImV4cCI6MTcxNjgxOTYxOCwiaWF0IjoxNzE2ODE2MDE4LCJqdGkiOiIzYmQ3NThhZS0zNTc5LTQxNjQtODkxMi02ZmY5MjA5MWVmNDQiLCJ1c2VybmFtZSI6IjI4NjFlMzgwLWYwMzEtNzAzMy00Nzc2LWM2ZGQzNjYzMDlmYiJ9.J_iS209k6Nsqmwf2XlK1kOeRCKjY-y6U28MicQTD8LFb3v-sC6sttYVya5kBb_qj3hnIDuXFvH3POlduBJhxiiXE7A3eAA9_09eYqmyna3tuNl_1W5pz_wlR9uhtOdhk0hAQWiaTaDViDjiEO6TPenEat0dz-yQWMp2Fda64yUOHFFiRZj5UsfO6_fUbOFlVzsmgwLhRPb5smIHkB4yFtcs4A1QI_fGyS9cEFTusKyt-JBmmdkfN83i8tiLfZV_IYUj0J5z-_vMSVOTg5yDMjcVCEswX1ZFUkm_FB2aLkAxPJxgzdDVAFSdg1UcITJjcfjFjPwi79quJEoEuYUahIQ'
jwk = {'keys': [{'alg': 'RS256',
'e': 'AQAB',
'kid': 'uNMjrozHoL/XMU+MGETT4ib+zErEVOpmI5TXcgP5TSQ=',
'kty': 'RSA',
'n': '_CkIlBfFJAtILPaVlQTNQySqSBWV63NXSgX0zcIIPf8o2HaweQZGG3kdETK8Vmcf_k_74sYpTuDOJ5x7UNNC24bJJhkoRcAkhSGNEUFQvEur0XCDzMjgFhWBsSS36Qs_wxGFZoeDHuEFyNmVGMlVg1drCxqCCrf-QV7rPKF-h7UIsrJxw_yf60y_Wh7iSiOgD7nc3lUBFh_2QJRAYv0AcaXpWwFVrT4IO4GrgU3s74kQn7I4R5q1L95CX1J3WA4MKaPObmBzWIo0Lg-uULAf3f-gEAMeZuwl4JLwhDHhoqqeiuz9g_ox5HaHwq5678xGtiz1K48WmvSzF_zY_P541w',
'use': 'sig'},
{'alg': 'RS256',
'e': 'AQAB',
'kid': 'HUibFpdR361ovALQTWUxlwWJNRg5HEQBLlR13Mb2z24=',
'kty': 'RSA',
'n': 'zC80KpQ--HfK2DHSydIXLWNcw0zGJjvzXygsuxG8tpaY2jgBjKN3iQjVYiBVwiOkYGza49hakU3C8WG8PPCCJrREOfw-IS9Zc0ZQmnuRtZTyvQhTYHy37IQpjQP9wWif74bWzTraPNy18PhtyEJcfW_LGa7s5p3vgYk3-ZXHk1AEyebZWIWFy1kjexMDiFwJJ69Ff6o8lx4LC_H17AGW-0S_IQrUxXExWBXc0zGV_QCBQbuv5frEg7tbHSYhkKLgVb0TZhACnGpfB7owiH2_4k5AHIgfvZxRXsBKGZEqBKJxV3dyszqGFeRQda65-J8dhyQhJSr7zC2a3L7gOUDaAQ',
'use': 'sig'}]}
import hashlib
## .. go to work here
</code></pre>
<p><strong>Here's my attempt so far.</strong>
The process, as I understand it, is to use <code>n</code> and <code>e</code> to recover the padded hash from the signature, which should then match the hash of the header and payload.</p>
<pre class="lang-py prettyprint-override"><code># split the token into its three base64url parts
import base64
import json

s = jwt.split('.')
header = base64.urlsafe_b64decode(s[0] + '=' * (-len(s[0]) % 4))
payload = base64.urlsafe_b64decode(s[1] + '=' * (-len(s[1]) % 4))
sig = s[2]
# filter for correct jwk.
jwk = next((k for k in jwk['keys'] if k['kid']==json.loads(header)['kid']))
# Now I need to hash. The RSA spec says SHA-256 is used for RS256.
# The signing input is the first two dot-separated segments, ASCII-encoded.
hashed_ph = hashlib.sha256((s[0] + '.' + s[1]).encode()).hexdigest()
# now I need to pad the key, not sure to what length though.
# its 32 char by default. The sig is 342 char length
# now the equation for varification, as far as i've found
# S^e = pad(hash(header+payload)) (mod n)
# S = signature
# e = e from the jwk
# hash is SHA256 according to rs256 spec
# n is n from the jwk
# pad is unknown.
# I do not think (mod n) here means literally computing
# pad(hash(header+payload)) % n
# I think it means:
pad_hex = lambda x, ct : (ct - len(x)) * '0' + x
padded_hashed_ph = pad_hex(hashed_ph, len(jwk['n']))
# All that should be left is:
# S^e = padded_hashed_ph
</code></pre>
<p>I'm stalled here, as above has two issues.</p>
<ol>
<li>The output of padded_hashed_ph seems nonsensical, and in no way will equal the output of <code>some_large_number**large_exponent</code></li>
<li><code>sig ** e</code> takes forever to calculate. I restarted the interpreter after 10 min.</li>
</ol>
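<p>For reference, the two sticking points seem to have standard fixes: JWT segments and JWK fields are base64url-encoded (not plain base64), and <code>S^e (mod n)</code> should be computed with three-argument <code>pow</code>, which is fast where <code>sig ** e</code> is hopeless. A sketch of the verification equation (EMSA-PKCS1-v1_5 padding with the SHA-256 DigestInfo prefix from RFC 8017; helper names are my own):</p>

```python
import base64
import hashlib

def b64url_to_int(s):
    """Decode a base64url JWK field (no padding) into an integer."""
    raw = base64.urlsafe_b64decode(s + '=' * (-len(s) % 4))
    return int.from_bytes(raw, 'big')

# ASN.1 DigestInfo prefix for SHA-256, fixed by RFC 8017
SHA256_PREFIX = bytes.fromhex('3031300d060960864801650304020105000420')

def rs256_verify(signing_input, signature, n, e):
    """signing_input: ASCII bytes of s[0] + '.' + s[1];
    signature: base64url-decoded s[2]; n, e: ints from the JWK."""
    k = (n.bit_length() + 7) // 8  # modulus size in bytes
    # three-argument pow is modular exponentiation: fast, unlike sig ** e
    em = pow(int.from_bytes(signature, 'big'), e, n).to_bytes(k, 'big')
    pad = b'\xff' * (k - 3 - len(SHA256_PREFIX) - 32)
    expected = (b'\x00\x01' + pad + b'\x00'
                + SHA256_PREFIX + hashlib.sha256(signing_input).digest())
    return em == expected
```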
|
<python><jwt><rsa>
|
2024-05-28 18:20:55
| 0
| 2,386
|
Jamie Marshall
|
78,545,774
| 10,765,629
|
Why scipy checks abs(q1) < abs(q0) before for loop in Secant method?
|
<p><code>scipy.optimize.newton</code> for the Secant method in <a href="https://github.com/scipy/scipy/blob/44e4ebaac992fde33f04638b99629d23973cb9b2/scipy/optimize/_zeros_py.py#L360C1-L361C1" rel="nofollow noreferrer">this line</a> checks the condition:</p>
<pre><code>if abs(q1) < abs(q0):
p0, p1, q0, q1 = p1, p0, q1, q0
</code></pre>
<p>Why do they reorder <code>p0</code> and <code>p1</code> after that?</p>
<p>As I understand it, they opt for the estimate that is farthest from the horizontal axis. Why?</p>
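<p>To make the question concrete, here is a stripped-down secant iteration with the same initial swap (my own reconstruction, not scipy's exact code):</p>

```python
def secant(f, p0, p1, tol=1e-12, maxiter=100):
    q0, q1 = f(p0), f(p1)
    # scipy's initial swap: afterwards |q0| <= |q1|, i.e. p0 holds the
    # point whose function value is closer to zero
    if abs(q1) < abs(q0):
        p0, p1, q0, q1 = p1, p0, q1, q0
    for _ in range(maxiter):
        if q1 == q0:  # flat secant line; cannot continue
            break
        p2 = p1 - q1 * (p1 - p0) / (q1 - q0)
        if abs(p2 - p1) < tol:
            return p2
        p0, p1, q0, q1 = p1, p2, q1, f(p2)
    return p1
```

<p>Either ordering of the starting points converges here; the swap only decides which point plays which role in the first step.</p>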
|
<python><scipy><scipy-optimize><newtons-method>
|
2024-05-28 18:08:54
| 0
| 710
|
z_tjona
|
78,545,763
| 223,201
|
Using the platform_release environment marker
|
<p>I'm developing a Python package that has an optional dependency that is incompatible with certain versions of macOS.</p>
<p>Since the dependency is optional, I want my (wheel-based) package installation to succeed even on systems that have the incompatible OS.</p>
<p>To achieve this, I have the following in my requirements list:</p>
<pre><code>'scipy >=1.7, <2; platform_release >= "21.0.0"'
</code></pre>
<p>This works fine for most circumstances. However, some people are using tools like Bazel for installation. Bazel is using a method in <code>pkg_resources/_vendor/packaging/markers.py</code> for evaluating the markers. This raises an <code>InvalidVersion</code> exception on some systems because the <code>platform_release</code> marker evaluates to a version string that is not PEP 440 compliant.</p>
<p>For example, on a Linux box the value of <code>platform_release</code> may be something like <code>6.5.0-172-generic</code>.</p>
<p>Because the markers don't always evaluate properly, and because the values come from Python's own <code>platform</code> module, this seems like an oversight in how Bazel is designed, or perhaps even in how <code>pkg_resources</code> is designed.</p>
<p>Searches online reveal several threads where Pypa maintainers say this is the correct behavior.</p>
<p>How do I work around this problem? As a package maintainer, can I "canonicalize" the <code>platform_release</code> marker or override the markers at runtime to permit installation, for example? Ostensibly this is not possible because there is no way to run pre-install scripts when installing a wheel!</p>
|
<python><python-3.x><bazel><python-packaging>
|
2024-05-28 18:05:17
| 0
| 19,362
|
Tom
|
78,545,754
| 1,444,609
|
Storing and retrieving data with Milvus and Langchain
|
<p>I'm trying to create a vector DB which will be populated with embeddings of articles from my employer's blog.</p>
<p>I've got a Milvus instance up and running and am able to follow <a href="https://python.langchain.com/v0.1/docs/integrations/vectorstores/milvus/" rel="nofollow noreferrer">the walkthrough on the Langchain website</a>.</p>
<p>Based on the walkthrough, my implementation so far looks something like this:</p>
<pre class="lang-py prettyprint-override"><code>def parseWPDataFile(filename):
# redacted for brevity
return {
'meta': parsed_headers,
'body': doc_body.strip()
}
parsed_doc = parseWPDataFile('sample_data.txt')
text_splitter = RecursiveCharacterTextSplitter(is_separator_regex=True, separators=['\n+'], chunk_size=5000, length_function=len)
docs = text_splitter.create_documents([parsed_doc['body']], [parsed_doc['meta']])
embeddings = OpenAIEmbeddings()
vector_db = Milvus.from_documents(docs, embeddings, connection_args={"host": "127.0.0.1", "port": "19530"})
</code></pre>
<p>This being my first time using a vector database, I'm a little confused by that last line. <a href="https://api.python.langchain.com/en/v0.1/vectorstores/langchain_community.vectorstores.milvus.Milvus.html#langchain_community.vectorstores.milvus.Milvus.from_documents" rel="nofollow noreferrer">The documentation for <code>Milvus.from_documents</code></a> indicates that it creates a vectorstore from documents, I guess, in memory. What I want is a persistent vectorstore that I can load stuff into and then later, in a separate script, pull from. I can't find any Langchain examples of this.</p>
<p>How do I create a persistent VectorStore, add to it, and get a reference to it later, in another script?</p>
|
<python><python-3.x><langchain><py-langchain><milvus>
|
2024-05-28 18:02:46
| 2
| 23,519
|
I wrestled a bear once.
|
78,545,622
| 2,016,632
|
Upgraded numpy and now I am overwhelmed by RuntimeWarning
|
<p>I just upgraded numpy to the latest Intel conda build. I am at pandas 2.2.1, numpy 1.24.3 and numexpr 2.10.0.</p>
<p>If I try something simple like</p>
<pre><code>df = pd.DataFrame(index=[0, 1, 2, 3])
df["A"] = [1, 2, np.NaN, 3]
print(df)
</code></pre>
<p>then I get five run time warnings.</p>
<pre><code>RuntimeWarning: invalid value encountered in less
has_small_values = ((abs_vals < 10 ** (-self.digits)) & (abs_vals > 0)).any()
RuntimeWarning: invalid value encountered in greater
has_small_values = ((abs_vals < 10 ** (-self.digits)) & (abs_vals > 0)).any()
RuntimeWarning: invalid value encountered in greater
has_large_values = (abs_vals > 1e6).any()
RuntimeWarning: invalid value encountered in less
has_small_values = ((abs_vals < 10 ** (-self.digits)) & (abs_vals > 0)).any()
RuntimeWarning: invalid value encountered in greater
has_small_values = ((abs_vals < 10 ** (-self.digits)) & (abs_vals > 0)).any()
</code></pre>
<p>For a more complicated code the result is just overwhelming, hundreds and hundreds of such messages.</p>
<p>Did numpy drop support for NaN?</p>
<h2>EDIT</h2>
<p>Building on some helpful comments, I can make a minimal reproducible example:</p>
<pre><code>conda update conda
conda create -n "test_numpy" python=3.10
conda activate test_numpy
conda install -c intel numpy==1.24.3 pandas=2.2.1
</code></pre>
<p>At that point <code>df["A"] = [0., 1., np.NaN, 2.]</code> will trigger run-time warnings because of some internal comparisons being made with the "<" and ">" operators and np.NaN deep in the heart of the core routines, like "Expressions.py"</p>
<p>[I should also note that the bug was fixed by 1.24.6, so the fix was to simply upgrade....]</p>
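<p>Until the upgrade, a local workaround (suppression only, it does not change any results) is to silence the invalid-value warnings around the offending calls with NumPy's <code>errstate</code> context manager:</p>

```python
import numpy as np

vals = np.array([1.0, np.nan, 3.0])

# Comparisons against NaN are what set the "invalid value" FP flag in the
# affected build; np.errstate silences those warnings for this block only.
with np.errstate(invalid="ignore"):
    small = vals < 2.0
    large = vals > 2.0

print(small, large)
```

<p>NaN compares as False against everything, so the boolean results are the same with or without the suppression.</p>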
|
<python><pandas><numpy>
|
2024-05-28 17:28:38
| 1
| 619
|
Tunneller
|
78,545,619
| 14,386,187
|
Redis lock is stuck on acquiring
|
<p>I'm trying to perform a sanity check on Redis Locks (async version):</p>
<pre class="lang-py prettyprint-override"><code>import redis.asyncio as redis
import tqdm
import asyncio
r = redis.Redis(decode_responses=True)
async def update():
lock = r.lock("lock")
await lock.acquire(blocking=True)
k = await r.get("key")
if int(k) < 0:
raise ValueError(f"Value {k} is negative")
await r.setex("key", 3600, -1)
await asyncio.sleep(0.5)
await r.setex("key", 3600, 1)
await lock.release()
async def main():
total = 100
chunksize = 10
await r.setex("key", 3600, 1)
aws = [update() for _ in range(total)]
with tqdm.tqdm(total=total) as pbar:
for idx in range(0, total, chunksize):
end = min(idx + chunksize, total)
chunk = aws[idx:end]
await asyncio.gather(*chunk)
pbar.update(end - idx)
if __name__ == "__main__":
asyncio.run(main())
</code></pre>
<p>For some reason, my code doesn't make it past the <code>.acquire</code> step. Am I making any syntax errors?</p>
|
<python><redis><locks>
|
2024-05-28 17:28:15
| 0
| 676
|
monopoly
|
78,545,378
| 1,272,072
|
How to synchronise async runtimes
|
<p>We have a Python application that is implemented with the async runtime. Now we want to integrate a C library (with Cython) that runs its own event loop (e.g. <code>epoll</code>).</p>
<p>How can we integrate these two different runtime mechanisms. Is there a general pattern to synchronise these two codebases?</p>
<p>Eventually we might replace the Python code with Rust. So using a language-independent pattern would be nice.</p>
|
<python><c><asynchronous><rust><epoll>
|
2024-05-28 16:26:47
| 0
| 963
|
woodtluk
|
78,545,368
| 1,422,096
|
How can I exclude the base directory from a recursive glob in Python?
|
<p>When using <code>glob.glob("D:/TEST/*")</code>, we don't get <code>D:/TEST/</code> in the result.</p>
<p>With <code>glob.glob("D:/TEST/**")</code>, the result is the same.</p>
<p>But when doing <code>glob.glob("D:/TEST/**", recursive=True)</code> we get all the files and subdirectories recursively, as desired, but we also get the base directory <code>D:/TEST/</code>. It seems unnatural to list the base dir, since it is not the case for non-recursive <code>glob</code>.</p>
<p><strong>Is there an option built-in in <code>glob</code> to avoid the base dir in the result?</strong> (other option than manually removing the base dir in the result of <code>glob</code>?)</p>
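<p>As far as I can tell there is no built-in flag for this, but a common workaround is appending <code>/*</code> after the <code>**</code>: the pattern <code>**/*</code> matches every entry below the base directory without matching the base itself. A small self-contained check:</p>

```python
import glob
import os
import tempfile

# Build a tiny tree: base/sub/f.txt
base = tempfile.mkdtemp()
os.makedirs(os.path.join(base, "sub"))
open(os.path.join(base, "sub", "f.txt"), "w").close()

# "**" alone matches the base directory too...
with_base = glob.glob(os.path.join(base, "**"), recursive=True)
# ...while "**/*" matches everything below it, but not the base itself
without_base = glob.glob(os.path.join(base, "**", "*"), recursive=True)

print(sorted(without_base))
```
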
|
<python><glob>
|
2024-05-28 16:24:00
| 1
| 47,388
|
Basj
|
78,545,208
| 7,307,824
|
Lambda layer Pandas (numpy) causing dependency error
|
<p>I have the following Lambda function:</p>
<pre class="lang-py prettyprint-override"><code>import pandas as pd
def lambda_handler(events, context):
d = {'col1': [1, 2], 'col2': [3, 4]}
df = pd.DataFrame(data=d)
print(df)
</code></pre>
<p>I have pandas (and dependencies) included as a layer. I can see numpy and other dependencies are in the files (with other files and folders):</p>
<pre><code>python/lib/python3.11/site-packages/numpy
python/lib/python3.11/site-packages/numpy-1.26.4.dist-info
</code></pre>
<p>When I run the function I get the following error:</p>
<pre><code>{
"errorMessage": "Unable to import module 'lambda_function': Unable to import required dependencies:\nnumpy: Error importing numpy: you should not try to import numpy from\n its source directory; please exit the numpy source tree, and relaunch\n your python interpreter from there.",
"errorType": "Runtime.ImportModuleError",
"requestId": "ebf17462-e3c2-40c0-9a44-a751882ab7d8",
"stackTrace": []
}
</code></pre>
<p>Any idea why I get this error?</p>
|
<python><pandas><numpy><aws-lambda><aws-lambda-layers>
|
2024-05-28 15:47:54
| 1
| 568
|
Ewan
|
78,545,168
| 3,348,261
|
Finding unused variables after minimizing
|
<p>After minimization (Python/scipy), I would like to know how to find unused variables in the result. Here is a simple example where the third variable is left untouched. Apart from comparing the initial value vs the result, is there a better way to identify such a variable?</p>
<pre><code>from scipy.optimize import minimize
def objective(x):
return -x[0] - x[1]
x0 = 0, 0, 1.234
res = minimize(objective, x0,
bounds = ([-10,+10], [-10,+10], [-10,+10]))
print(res.x)
# output: [10. 10. 1.234]
# res.x[2] has been left untouched compared to x0[2]
</code></pre>
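<p>One heuristic (an assumption on my part, not a guarantee): variables the objective never uses end up with a zero entry in the approximated gradient that the solver reports in <code>res.jac</code>, so they can be flagged without comparing against <code>x0</code>. A variable that is genuinely used can of course also have a zero gradient at the optimum, so treat this only as a hint:</p>

```python
import numpy as np
from scipy.optimize import minimize

def objective(x):
    return -x[0] - x[1]

x0 = [0, 0, 1.234]
# With bounds and no method given, minimize defaults to L-BFGS-B,
# which exposes the finite-difference gradient as res.jac.
res = minimize(objective, x0,
               bounds=[(-10, 10), (-10, 10), (-10, 10)])

# (Near-)zero gradient entries point at variables the objective
# numerically ignores.
suspect_unused = np.isclose(res.jac, 0.0, atol=1e-8)
print(suspect_unused)
```
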
|
<python><scipy>
|
2024-05-28 15:39:53
| 1
| 712
|
Nicolas Rougier
|
78,544,956
| 3,628,240
|
Clicking a button with Selenium and Python until no longer exists
|
<p>I'm trying to scrape this site:<a href="https://www.vertexconnects.com/find-atc" rel="nofollow noreferrer">https://www.vertexconnects.com/find-atc</a></p>
<p>I can't seem to get the while loop to keep clicking the "Load More" button after you put in a zipcode. The code seems to be failing on the <code>locations</code> line, where it gets each of the location results, with this error:</p>
<pre><code> raise TimeoutException(message, screen, stacktrace)
selenium.common.exceptions.TimeoutException: Message:
</code></pre>
<p>Code below:</p>
<pre><code>from selenium import webdriver
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver import ActionChains
from selenium.webdriver.common.by import By
from selenium.webdriver.support import expected_conditions as EC
from selenium.webdriver.support.ui import Select
options = webdriver.ChromeOptions()
driver = webdriver.Chrome(options = options)
action = ActionChains(driver)
driver.get("https://www.vertexconnects.com/find-atc")
driver.maximize_window()
wait = WebDriverWait(driver,5)
# Use below line only if you are getting the Accept/Reject cookies pop-up
wait.until(EC.element_to_be_clickable((By.XPATH, "//button[contains(.,'Accept All')]"))).click()
location_textbox = wait.until(EC.presence_of_element_located((By.ID,"location-search-input")))
action.move_to_element(location_textbox).click().send_keys("10001").perform()
wait.until(EC.element_to_be_clickable((By.CLASS_NAME, "atc-finder-button"))).click()
while True:
try:
        wait.until(EC.element_to_be_clickable((By.ID, "loadMore"))).click()
except:
break
print("done")
locations = wait.until(EC.visibility_of_all_elements_located((By.XPATH, "//div[@class='location-result']")))
for location in locations:
    name = location.find_element(By.TAG_NAME, "h4").text
    address = location.find_element(By.CSS_SELECTOR, ".address.atc-finder-hospital-address").text
    phone_num = location.find_element(By.TAG_NAME, "a").text
    print(name, address, phone_num)
</code></pre>
|
<python><selenium-webdriver><web-scraping>
|
2024-05-28 14:58:21
| 1
| 927
|
user3628240
|
78,544,791
| 9,022,641
|
AttributeError: Can't get attribute 'CustomActionMaskedEnvironment.observation_space' in RLlib with PettingZoo environment
|
<p>I have this very basic custom parallel multi agent environment written in <code>PettingZoo</code>.</p>
<pre class="lang-py prettyprint-override"><code>import functools
import random
from copy import copy
import numpy as np
from gymnasium.spaces import Discrete, MultiDiscrete
from pettingzoo import ParallelEnv
class CustomActionMaskedEnvironment(ParallelEnv):
"""The metadata holds environment constants.
The "name" metadata allows the environment to be pretty printed.
"""
metadata = {
"name": "masked_parallel_env",
}
def __init__(self):
"""The init method takes in environment arguments.
Should define the following attributes:
- escape x and y coordinates
- guard x and y coordinates
- prisoner x and y coordinates
- timestamp
- possible_agents
Note: as of v1.18.1, the action_spaces and observation_spaces attributes are deprecated.
Spaces should be defined in the action_space() and observation_space() methods.
If these methods are not overridden, spaces will be inferred from self.observation_spaces/action_spaces, raising a warning.
These attributes should not be changed after initialization.
"""
self.escape_y = None
self.escape_x = None
self.guard_y = None
self.guard_x = None
self.prisoner_y = None
self.prisoner_x = None
self.timestep = None
self.possible_agents = ["prisoner", "guard"]
def reset(self, seed=None, options=None):
"""Reset set the environment to a starting point.
It needs to initialize the following attributes:
- agents
- timestamp
- prisoner x and y coordinates
- guard x and y coordinates
- escape x and y coordinates
- observation
- infos
And must set up the environment so that render(), step(), and observe() can be called without issues.
"""
self.agents = copy(self.possible_agents)
self.timestep = 0
self.prisoner_x = 0
self.prisoner_y = 0
self.guard_x = 6
self.guard_y = 6
self.escape_x = random.randint(2, 5)
self.escape_y = random.randint(2, 5)
observation = (
self.prisoner_x + 7 * self.prisoner_y,
self.guard_x + 7 * self.guard_y,
self.escape_x + 7 * self.escape_y,
)
observations = {
"prisoner": {
"observation": observation,
"action_mask": [0, 1, 1, 0]
},
"guard": {
"observation": observation,
"action_mask": [1, 0, 0, 1]
},
}
# Get dummy infos. Necessary for proper parallel_to_aec conversion
infos = {
a: {
} for a in self.agents
}
return observations, infos
def step(self, actions):
"""Takes in an action for the current agent (specified by agent_selection).
Needs to update:
- prisoner x and y coordinates
- guard x and y coordinates
- terminations
- truncations
- rewards
- timestamp
- infos
And any internal state used by observe() or render()
"""
# Execute actions
prisoner_action = actions["prisoner"]
guard_action = actions["guard"]
if prisoner_action == 0 and self.prisoner_x > 0:
self.prisoner_x -= 1
elif prisoner_action == 1 and self.prisoner_x < 6:
self.prisoner_x += 1
elif prisoner_action == 2 and self.prisoner_y > 0:
self.prisoner_y -= 1
elif prisoner_action == 3 and self.prisoner_y < 6:
self.prisoner_y += 1
if guard_action == 0 and self.guard_x > 0:
self.guard_x -= 1
elif guard_action == 1 and self.guard_x < 6:
self.guard_x += 1
elif guard_action == 2 and self.guard_y > 0:
self.guard_y -= 1
elif guard_action == 3 and self.guard_y < 6:
self.guard_y += 1
# Generate action masks
prisoner_action_mask = np.ones(4, dtype=np.int8)
if self.prisoner_x == 0:
prisoner_action_mask[0] = 0 # Block left movement
elif self.prisoner_x == 6:
prisoner_action_mask[1] = 0 # Block right movement
if self.prisoner_y == 0:
prisoner_action_mask[2] = 0 # Block down movement
elif self.prisoner_y == 6:
prisoner_action_mask[3] = 0 # Block up movement
guard_action_mask = np.ones(4, dtype=np.int8)
if self.guard_x == 0:
guard_action_mask[0] = 0
elif self.guard_x == 6:
guard_action_mask[1] = 0
if self.guard_y == 0:
guard_action_mask[2] = 0
elif self.guard_y == 6:
guard_action_mask[3] = 0
# Action mask to prevent guard from going over escape cell
if self.guard_x - 1 == self.escape_x:
guard_action_mask[0] = 0
elif self.guard_x + 1 == self.escape_x:
guard_action_mask[1] = 0
if self.guard_y - 1 == self.escape_y:
guard_action_mask[2] = 0
elif self.guard_y + 1 == self.escape_y:
guard_action_mask[3] = 0
# Check termination conditions
terminations = {
a: False for a in self.agents
}
rewards = {
a: 0 for a in self.agents
}
if self.prisoner_x == self.guard_x and self.prisoner_y == self.guard_y:
rewards = {
"prisoner": -1,
"guard": 1
}
terminations = {
a: True for a in self.agents
}
self.agents = []
elif self.prisoner_x == self.escape_x and self.prisoner_y == self.escape_y:
rewards = {
"prisoner": 1,
"guard": -1
}
terminations = {
a: True for a in self.agents
}
self.agents = []
# Check truncation conditions (overwrites termination conditions)
truncations = {
"prisoner": False,
"guard": False
}
if self.timestep > 100:
rewards = {
"prisoner": 0,
"guard": 0
}
truncations = {
"prisoner": True,
"guard": True
}
self.agents = []
self.timestep += 1
# Get observations
observation = (
self.prisoner_x + 7 * self.prisoner_y,
self.guard_x + 7 * self.guard_y,
self.escape_x + 7 * self.escape_y,
)
observations = {
"prisoner": {
"observation": observation,
"action_mask": prisoner_action_mask,
},
"guard": {
"observation": observation,
"action_mask": guard_action_mask
},
}
# Get dummy infos (not used in this example)
infos = {"prisoner": {}, "guard": {}}
return observations, rewards, terminations, truncations, infos
def render(self, mode='human'):
"""Renders the environment."""
grid = np.zeros((7, 7), dtype = object)
grid[self.prisoner_y, self.prisoner_x] = "P"
grid[self.guard_y, self.guard_x] = "G"
grid[self.escape_y, self.escape_x] = "E"
print(f"{grid} \n")
# Observation space should be defined here.
# lru_cache allows observation and action spaces to be memoized, reducing clock cycles required to get each agent's space.
# If your spaces change over time, remove this line (disable caching).
@functools.lru_cache(maxsize=None)
def observation_space(self, agent):
# gymnasium spaces are defined and documented here: https://gymnasium.farama.org/api/spaces/
return MultiDiscrete([7 * 7 - 1] * 3)
# Action space should be defined here.
# If your spaces change over time, remove this line (disable caching).
@functools.lru_cache(maxsize=None)
def action_space(self, agent):
return Discrete(4)
</code></pre>
<p>The test function <code>pettingzoo.test.parallel_api_test</code> passes. Now, I would like to train with <code>RLlib</code>.</p>
<pre><code>from ray import tune
from ray.rllib.env import ParallelPettingZooEnv
from ray.tune.registry import register_env
env_name = "mpe"
# Register the environment
def env_creator(args):
return ParallelPettingZooEnv(CustomActionMaskedEnvironment())
register_env(env_name, env_creator)
</code></pre>
<p>This code, however, fails with the following traceback during some pickling event:</p>
<pre class="lang-py prettyprint-override"><code>(raylet) Traceback (most recent call last):
File "python/ray/_raylet.pyx", line 1873, in ray._raylet.execute_task
File "python/ray/_raylet.pyx", line 1981, in ray._raylet.execute_task
File "python/ray/_raylet.pyx", line 1880, in ray._raylet.execute_task
File "python/ray/_raylet.pyx", line 1820, in ray._raylet.execute_task.function_executor
File "..PATH../python3.10/site-packages/ray/_private/function_manager.py", line 689, in actor_method_executor
return method(__ray_actor, *args, **kwargs)
File "..PATH../python3.10/site-packages/ray/util/tracing/tracing_helper.py", line 463, in _resume_span
return method(self, *_args, **_kwargs)
File "..PATH../python3.10/site-packages/ray/rllib/algorithms/algorithm.py", line 471, in __init__
self._env_id, self.env_creator = self._get_env_id_and_creator(
File "..PATH../python3.10/site-packages/ray/rllib/algorithms/algorithm.py", line 3099, in _get_env_id_and_creator
return env_specifier, _global_registry.get(ENV_CREATOR, env_specifier)
File "..PATH../ray/tune/registry.py", line 275, in get
return pickle.loads(value)
AttributeError: Can't get attribute 'CustomActionMaskedEnvironment.observation_space' on <module '__main__' from '/usr/local/lib/python3.10/dist-packages/ray/_private/workers/default_worker.py'>
</code></pre>
<pre class="lang-py prettyprint-override"><code>from ray.rllib.algorithms.ppo import PPO
config = {
"env": env_name,
"env_config": {}, # config to pass to env class
"num_workers": 1, # Adjust as necessary
"framework": "torch", # or "tf" if using TensorFlow
"multiagent": {
"policies": {
"prisoner": (None, MultiDiscrete([7 * 7 - 1] * 3), Discrete(4), {}),
"guard": (None, MultiDiscrete([7 * 7 - 1] * 3), Discrete(4), {}),
},
"policy_mapping_fn": lambda agent_id: agent_id,
},
}
# Run the training
results = tune.run(PPO, config=config, stop={
"training_iteration": 5
})
</code></pre>
<p>Why?</p>
|
<python><reinforcement-learning><rllib><multi-agent-reinforcement-learning><pettingzoo>
|
2024-05-28 14:30:45
| 0
| 734
|
Lukas
|
78,544,774
| 1,612,986
|
Update value of a nested dictionary of varying depth and same key appearing at different levels
|
<p>If I have a situation like:</p>
<pre><code>dictionary1 = {
    "level11": {
        "level12": {"levelA": 0, "levelB": 1}
    },
    "level21": {
        "level22": {
            "level23": {"levelA": 0, "levelB": 1},
        }
    }
}
</code></pre>
<p>how do I update all occurrences of:</p>
<pre><code>{"levelA": 0, "levelB": 1}
</code></pre>
<p>to:</p>
<pre><code>{"levelA": 2, "levelB": 3}.
</code></pre>
<p><a href="https://stackoverflow.com/questions/3232943/update-value-of-a-nested-dictionary-of-varying-depth">Update value of a nested dictionary of varying depth</a> solves part of the issue but I have a situation more general than that.</p>
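<p>For what it's worth, here is a minimal sketch of the kind of recursive walk I'm after (<code>replace_subdict</code> is a name I made up): it swaps every sub-dictionary equal to the target, no matter how deep or under which key it appears.</p>

```python
def replace_subdict(d, target, replacement):
    """Recursively replace every nested dict equal to `target`
    with a (shallow) copy of `replacement`, in place."""
    for key, value in d.items():
        if value == target:
            d[key] = dict(replacement)
        elif isinstance(value, dict):
            replace_subdict(value, target, replacement)

dictionary1 = {
    "level11": {"level12": {"levelA": 0, "levelB": 1}},
    "level21": {"level22": {"level23": {"levelA": 0, "levelB": 1}}},
}
replace_subdict(dictionary1, {"levelA": 0, "levelB": 1},
                {"levelA": 2, "levelB": 3})
print(dictionary1)
```
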
|
<python>
|
2024-05-28 14:27:06
| 2
| 1,415
|
user1612986
|
78,544,729
| 11,635,654
|
How to digitize large set of 3D points in 2D and compute the mean in 3rd axis to get a 2D image
|
<p>Here is a working Python snippet that takes some 3D points (see the <code>pts</code> array) and projects them onto a 2D Npix x Npix image, keeping in each (j, i) pixel the mean of the third coordinate of all points that fall into that pixel. The code should be clear.</p>
<p>I pieced the approach together from googling some time ago.</p>
<p>Now, my problem is that I have a huge number of 3D points spread across several files (numpy arrays), and I should collect them to produce a 50,000 x 50,000 pixel image.</p>
<p>In essence, I should loop over the files, accumulate into each (j, i) bin/pixel, and then divide by the number of points collected in each bin/pixel.</p>
<p>Maybe someone has a good idea :)</p>
<p>Thanks</p>
<pre class="lang-py prettyprint-override"><code>%pylab inline
import numpy as np
import random
params= {
'd': 5.35,
'halfWidth': 5.0,
'resol': 0.1,
'max_prof': 1000
}
resol = params['resol']; # image resolution
d = params['d'];
halfWidth = params['halfWidth'];
centre = -d/2;
Ximgmin = centre - halfWidth;
Ximgmax = centre + halfWidth;
Yimgmin = -halfWidth;
Yimgmax = halfWidth;
Zimgmin = 0;
Zimgmax = params['max_prof'];
(Ximgmin, Ximgmax, Yimgmin, Yimgmax, Zimgmin,Zimgmax)
xrange = (Ximgmin-0.1, Ximgmax+0.1)
yrange = (Yimgmin-0.1, Yimgmax+0.1)
zrange = (0, Zimgmax)
Npts= 10_000
points = []
[ points.append(np.array([random.uniform(*xrange),
random.uniform(*yrange),
random.uniform(*zrange)])) for i in range(Npts) ]
pts = np.array(points).T
print(pts.shape)
Npix = 100
def info_arr(x):
print(f"shape: {x.shape}, first: {x[0]}, last: {x[-1]}, min: {x.min()}, max: {x.max()}")
binsx = np.linspace(Ximgmin,Ximgmax,Npix-1,dtype=np.float32)
binsy = np.linspace(Yimgmin,Yimgmax,Npix-1,dtype=np.float32)
info_arr(binsx)
info_arr(binsy)
xmin = pts[0,:].min()
xmax = pts[0,:].max()
print(xmin,xmax)
ymin = pts[1,:].min()
ymax = pts[1,:].max()
print(ymin,ymax)
print(pts[2,:].min(),pts[2,:].max())
imin,imax=np.digitize(xmin,binsx),np.digitize(xmax,binsx)
print(imin,imax)
jmin,jmax=np.digitize(ymin,binsy),np.digitize(ymax,binsy)
print(jmin,jmax)
pts_x_id = np.digitize(pts[0,:],binsx)
pts_x_id = np.int32(pts_x_id)
pts_y_id = np.digitize(pts[1,:],binsy)
pts_y_id = np.int32(pts_y_id)
pts_id_2d = np.zeros(shape=(pts.shape[1],2),dtype=np.int32)
pts_id_2d[:,0] = pts_x_id
pts_id_2d[:,1] = pts_y_id
pts_z = pts[2,:]
sorted_idx = np.lexsort(pts_id_2d.T)
sorted_pts = pts_id_2d[sorted_idx,:]
df1 = np.diff(sorted_pts,axis=0)
df2 = np.append([True],np.any(df1!=0,1),0)
# Get unique sorted labels
sorted_labels = df2.cumsum(0)-1
# Get labels
labels = np.zeros_like(sorted_idx)
labels[sorted_idx] = sorted_labels
# Get unique indices
unq_idx = sorted_idx[df2]
pts_id_2d = pts_id_2d[unq_idx,:]
pts_z = np.bincount(labels, weights=pts_z)/np.bincount(labels)
pts_id_2d.shape,pts_z.shape
img = np.zeros((Npix,Npix),dtype=np.float32)
for pt,z in zip(pts_id_2d,pts_z):
i = int(pt[0])
j = int(pt[1])
img[j,i]=z
imshow(img);colorbar();
</code></pre>
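<p>To make the intent concrete, here is a hedged sketch of the accumulation scheme described above (function names are mine): keep a global sum image and count image, scatter each file's points into them with <code>np.add.at</code>, and divide once at the end, so no single file's points need to be held together with the others in memory.</p>

```python
import numpy as np

Npix = 100
binsx = np.linspace(0.0, 1.0, Npix - 1)
binsy = np.linspace(0.0, 1.0, Npix - 1)

# Running accumulators shared across all files
sum_img = np.zeros((Npix, Npix), dtype=np.float64)
cnt_img = np.zeros((Npix, Npix), dtype=np.int64)

def accumulate(pts):
    """pts has shape (3, N): rows are x, y, z."""
    i = np.digitize(pts[0], binsx)
    j = np.digitize(pts[1], binsy)
    np.add.at(sum_img, (j, i), pts[2])  # unbuffered scatter-add of z
    np.add.at(cnt_img, (j, i), 1)       # ...and of the hit counts

# ... call accumulate(pts) once per file, then divide once:
def mean_image():
    return np.where(cnt_img > 0, sum_img / np.maximum(cnt_img, 1), 0.0)
```
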
|
<python><arrays><numpy>
|
2024-05-28 14:19:20
| 0
| 402
|
Jean-Eric
|
78,544,678
| 1,618,893
|
SQLAlchemy - Delete model classes
|
<p>I'm dynamically creating model classes in my unit tests using pytest, like so:</p>
<pre class="lang-py prettyprint-override"><code>from sqlalchemy import Column
...
obj_args = {
"__tablename__": f"t_{name}",
"__table_args__": {"extend_existing": True},
"id": Column(Integer, primary_key=True),
}
obj_args.update({col: Column(String) for col in cols})
obj = type(name, (BaseModelDerivedFromDeclarativeBase,), obj_args)
</code></pre>
<p>Those objects keep existing after the unit test, leaving me with side effects for other unit tests (e.g. when I recreate a model class with the same name).</p>
<p>Is it possible to remove those model classes from SQLAlchemy completely after each test?</p>
|
<python><sqlalchemy><pytest>
|
2024-05-28 14:10:51
| 1
| 962
|
Roman Purgstaller
|
78,544,673
| 10,992,342
|
Is there a way of creating multiple stratified samples at once?
|
<p>Let's say I have an input dataset with the Ids: a, b, c.</p>
<p>I need to split it into packages of roughly 100 rows each, with each sample having the same distribution of Ids as the entire input population.</p>
<p>What would be the best way to do that?</p>
<p><a href="https://i.sstatic.net/pi8I5pfg.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/pi8I5pfg.png" alt="enter image description here" /></a></p>
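<p>A sketch of one approach in plain Python (the function name is mine): group row indices by Id, shuffle each group, and deal its members round-robin across the chunks, which keeps each chunk's Id distribution proportional to the population's.</p>

```python
import random
from collections import defaultdict

def stratified_chunks(ids, chunk_size=100):
    """Split row indices into ~len(ids)/chunk_size chunks, each with
    roughly the population's Id distribution."""
    n_chunks = max(1, round(len(ids) / chunk_size))
    groups = defaultdict(list)
    for idx, value in enumerate(ids):
        groups[value].append(idx)
    chunks = [[] for _ in range(n_chunks)]
    for members in groups.values():
        random.shuffle(members)              # random order within each stratum
        for pos, idx in enumerate(members):
            chunks[pos % n_chunks].append(idx)  # deal round-robin
    return chunks
```
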
|
<python><statistics><sample>
|
2024-05-28 14:09:54
| 1
| 511
|
luisvenezian
|
78,544,510
| 8,040,928
|
Gemini API Interaction/Tutorial Setup Errors
|
<p>I'm trying to go through the Jupyter notebook tutorial:
<a href="https://ai.google.dev/gemini-api/docs/get-started/tutorial?l&lang=python" rel="nofollow noreferrer">https://ai.google.dev/gemini-api/docs/get-started/tutorial?l&lang=python</a></p>
<p>When I run the Google Colab notebook everything is OK, as it should be.
But when I try to run it on localhost something is wrong (probably with the environment, but I am not able to specify what exactly).</p>
<p>Maybe you can help me somehow.</p>
<p><em><strong>The first error appears after the block that lists the models:</strong></em></p>
<pre><code>---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
Cell In[28], line 1
----> 1 for m in genai.list_models():
2 if 'generateContent' in m.supported_generation_methods:
3 print(m.name)
File /opt/anaconda3/envs/base_ML_AI/lib/python3.11/site-packages/google/generativeai/models.py:191, in list_models(page_size, client, request_options)
188 client = get_default_model_client()
190 for model in client.list_models(page_size=page_size, **request_options):
--> 191 model = type(model).to_dict(model)
192 yield model_types.Model(**model)
AttributeError: to_dict
</code></pre>
<p><em><strong>The second error appears when I try to get the text from the response:</strong></em></p>
<pre><code>---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
Cell In[42], line 1
----> 1 to_markdown(response.text)
File /opt/anaconda3/envs/base_ML_AI/lib/python3.11/site-packages/google/generativeai/types/generation_types.py:407, in BaseGenerateContentResponse.text(self)
400 if not parts:
401 raise ValueError(
402 "The `response.text` quick accessor only works when the response contains a valid "
403 "`Part`, but none was returned. Check the `candidate.safety_ratings` to see if the "
404 "response was blocked."
405 )
--> 407 if len(parts) != 1 or "text" not in parts[0]:
408 raise ValueError(
409 "The `response.text` quick accessor only works for "
410 "simple (single-`Part`) text responses. This response is not simple text. "
(...)
413 "instead."
414 )
415 return parts[0].text
TypeError: argument of type 'Part' is not iterable
</code></pre>
<p>I've checked the environment prerequisites and everything is OK (Python 3.11, Jupyter).
When I print the response object, it is the same in Colab and in the local Jupyter notebook. Do you have any suggestions?</p>
|
<python><jupyter-notebook><anaconda><google-colaboratory><google-gemini>
|
2024-05-28 13:42:52
| 1
| 603
|
Janek Podwysocki
|
78,544,319
| 2,123,706
|
search for elements of a list as substring in another list python
|
<p>I have 2 lists. I want to find the elements in <code>ls2</code> where any element of <code>ls1</code> is a substring. I would like to return a list of <code>ls2</code> elements along with the substring that was searched and found from <code>ls1</code></p>
<pre><code>ls1 = ['apple','banana','pear']
ls2 = ['strawberry is not here',
'blueberry is over there',
'the pear tree is not ready yet',
'we have lots of pear trees',
'apples are yummy']
</code></pre>
<p>Both of these return a partial answer</p>
<pre><code>[ls1[j] for j in range(len(ls1)) if any(ls1[j] in x for x in ls2)] # returns elements from ls1 alone
[i for i in ls2 if any(w in i for w in ls1)] # returns elements from ls2 alone
</code></pre>
<p>What I would like is to see:</p>
<pre><code>[('the pear tree is not ready yet','pear'),
('we have lots of pear trees','pear'),
('apples are yummy','apple')]
</code></pre>
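<p>The two partial comprehensions can be combined into one pass by pairing each sentence with the word that matched it (note this emits one tuple per matching word, so a sentence containing two keywords would appear twice):</p>

```python
ls1 = ['apple', 'banana', 'pear']
ls2 = ['strawberry is not here',
       'blueberry is over there',
       'the pear tree is not ready yet',
       'we have lots of pear trees',
       'apples are yummy']

# Pair every sentence with every keyword it contains as a substring
matches = [(s, w) for s in ls2 for w in ls1 if w in s]
print(matches)
```
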
|
<python><list-comprehension>
|
2024-05-28 13:08:29
| 1
| 3,810
|
frank
|
78,544,277
| 8,329,213
|
Failed to display Jupyter Widgets
|
<p>I have installed Anaconda on my new laptop and I can't use the library <code>ipywidgets</code> any more. Just an example snippet of the code which I ran for test purposes -</p>
<pre><code>import ipywidgets as widgets
from IPython.display import display
w = widgets.IntSlider()
display(w)
</code></pre>
<p>I got the following Javascript error :</p>
<pre><code>[Open Browser Console for more detailed log - Double click to close this message]
Failed to load model class 'IntSliderModel' from module '@jupyter-widgets/controls'
Error: Module @jupyter-widgets/controls, version ^1.5.0 is not registered, however, 2.0.0 is
at f.loadClass (http://localhost:8889/lab/extensions/@jupyter-widgets/jupyterlab-manager/static/446.fdf8b1b233cb8c1783f6.js?v=fdf8b1b233cb8c1783f6:1:75041)
at f.loadModelClass (http://localhost:8889/lab/extensions/@jupyter-widgets/jupyterlab-manager/static/327.68dbf8491690b3aff1e7.js?v=68dbf8491690b3aff1e7:1:10729)
at f._make_model (http://localhost:8889/lab/extensions/@jupyter-widgets/jupyterlab-manager/static/327.68dbf8491690b3aff1e7.js?v=68dbf8491690b3aff1e7:1:7517)
at f.new_model (http://localhost:8889/lab/extensions/@jupyter-widgets/jupyterlab-manager/static/327.68dbf8491690b3aff1e7.js?v=68dbf8491690b3aff1e7:1:5137)
at f.handle_comm_open (http://localhost:8889/lab/extensions/@jupyter-widgets/jupyterlab-manager/static/327.68dbf8491690b3aff1e7.js?v=68dbf8491690b3aff1e7:1:3894)
at _handleCommOpen (http://localhost:8889/lab/extensions/@jupyter-widgets/jupyterlab-manager/static/446.fdf8b1b233cb8c1783f6.js?v=fdf8b1b233cb8c1783f6:1:73457)
at v._handleCommOpen (http://localhost:8889/static/notebook/3676.bundle.js:1:30808)
at async v._handleMessage (http://localhost:8889/static/notebook/3676.bundle.js:1:32702)
</code></pre>
<p>The solution <a href="https://stackoverflow.com/questions/73715821/jupyter-lab-issue-displaying-widgets-javascript-error">suggested</a> at various places has been to change the version of <code>ipywidgets</code>, as there are some compatibility issues. My <code>ipywidgets</code> is 7.6.5 and I tried changing it to 7.7.5, 7.7.2 and a couple more, but I keep getting the Javascript error.</p>
<p>The following are my Jupyter core packages' versions:</p>
<pre><code>IPython : 8.20.0
ipykernel : 6.28.0
ipywidgets : 7.6.5
jupyter_client : 8.6.0
jupyter_core : 5.5.0
jupyter_server : 2.10.0
jupyterlab : 4.0.11
nbclient : 0.8.0
nbconvert : 7.10.0
nbformat : 5.9.2
notebook : 7.0.8
qtconsole : 5.4.2
traitlets : 5.7.1
</code></pre>
<p>Where am I making a mistake? Thanks.</p>
|
<python><jupyter-notebook><anaconda><ipywidgets>
|
2024-05-28 13:01:11
| 0
| 7,707
|
cph_sto
|
78,544,143
| 1,349,673
|
What is the proper way to handle mypy [attr-defined] errors, due to transitions dynamically adding is_* attributes?
|
<h1>MRE</h1>
<pre><code>from transitions import Machine
class TradingSystem:
def __init__(self):
self.machine = Machine(model=self, states=['RUNNING'], initial='RUNNING')
def check_running(self) -> None:
if self.is_RUNNING():
print("System is running")
</code></pre>
<h1>Example usage</h1>
<pre><code>system = TradingSystem()
system.check_running()
</code></pre>
<h1>Issue</h1>
<pre><code>mypy transitions_mypy.py
</code></pre>
<p>gives the error:</p>
<pre><code>transitions_mypy.py:9: error: "TradingSystem" has no attribute "is_RUNNING" [attr-defined]
</code></pre>
<p>This can be avoided by bypassing mypy, for example adding <code># type: ignore[attr-defined]</code> at the end of line 9.</p>
<p>But what is the proper way? Is it better to avoid bypassing mypy? Perhaps by manually defining the attribute?</p>
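<p>One conventional pattern is to declare the dynamic attribute in a <code>typing.TYPE_CHECKING</code> block: mypy sees the declaration, while at runtime the block is skipped and the library injects the real method. A sketch (with a <code>setattr</code> stand-in for what transitions' <code>Machine</code> does at runtime, so it runs without the library):</p>

```python
from typing import TYPE_CHECKING, Callable

class TradingSystem:
    if TYPE_CHECKING:
        # Static-only declaration so mypy knows the attribute exists;
        # this block is never executed at runtime.
        is_RUNNING: Callable[[], bool]

    def __init__(self) -> None:
        # Stand-in for Machine(model=self, ...), which adds is_RUNNING
        # dynamically in the real code.
        setattr(self, "is_RUNNING", lambda: True)

    def check_running(self) -> None:
        if self.is_RUNNING():
            print("System is running")

TradingSystem().check_running()
```

<p>This keeps the ignore-free mypy run without scattering <code>type: ignore</code> comments, at the cost of maintaining the declarations by hand when states change.</p>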
|
<python><mypy><python-typing><pytransitions>
|
2024-05-28 12:36:03
| 2
| 8,126
|
James Hirschorn
|
78,543,909
| 2,180,332
|
How can I constrain a type to be a union of subclasses in Python?
|
<p>I have a class, <code>Foobar</code>, with several subclasses, say <code>FoobarAlpha</code> and <code>FoobarBeta</code>.</p>
<p>I know how to define a type <code>AnyFoobar = TypeVar("AnyFoobar", bound=Foobar)</code> that will match any subclass of <code>Foobar</code>.</p>
<p>However, how can I define a type <code>AnyFoobars</code> that also matches union of <code>Foobar</code> subclasses? For instance, a type that would match <code>FoobarAlpha</code>, <code>FoobarBeta</code> and <code>typing.Union[FoobarAlpha, FoobarBeta]</code>?</p>
|
<python><mypy><python-typing>
|
2024-05-28 11:46:15
| 1
| 4,656
|
azmeuk
|
78,543,883
| 3,502,079
|
PyQt: transparent background but keep the frame
|
<p>I want to have the following: a PyQt application with a transparent background, but with the "frame" still visible. With frame I mean a border which you can use to resize, the titlebar and also the close and minimize button. With transparent I mean that you can see "through" the application, i.e. you can see the program that's behind the PyQt application.</p>
<p>Here's what I tried:</p>
<pre><code>import sys

from PyQt5.QtWidgets import (
QWidget, QApplication, QPushButton, QVBoxLayout,
QLabel
)
from PyQt5.QtGui import (
QPainter, QBrush, QColor, QPen
)
from PyQt5.QtCore import QRect, QPoint, Qt, QSize
#%%
class MyWidget(QWidget):
    def __init__(self):
        super().__init__()
        self.setGeometry(30, 30, 600, 400)
        # These lines try to make the background transparent
        self.setWindowOpacity(.8)
        self.setStyleSheet("background:red;")
        self.setAttribute(Qt.WA_NoSystemBackground)
        self.setAttribute(Qt.WA_TranslucentBackground)
        self.setAttribute(Qt.WA_PaintOnScreen)
        # self.setWindowFlag(Qt.FramelessWindowHint)
        self.layout = QVBoxLayout()
        self.layout.setAlignment(Qt.AlignRight)
        self.button = QPushButton("X")
        self.button.setFixedSize(QSize(40, 40))
        self.layout.addWidget(self.button)
        self.setLayout(self.layout)
        self.button.clicked.connect(self.close)
        self.show()
        self.setFocus()

if __name__ == '__main__':
    app = QApplication(sys.argv)
    ex = MyWidget()
    sys.exit(app.exec_())
</code></pre>
<p>Some comments on the lines I used.</p>
<pre><code>self.setWindowOpacity(.8)
</code></pre>
<p>This line works, but it makes the titlebar transparent by the same amount. If I want full transparency, the titlebar would be invisible as well.</p>
<pre><code> self.setStyleSheet("background:red;")
</code></pre>
<p>Some sources stated that this line would work, but it doesn't seem to. It just makes the background black. I set it to red here so that the button is at least visible.</p>
<pre><code>self.setAttribute(Qt.WA_NoSystemBackground)
self.setAttribute(Qt.WA_PaintOnScreen)
</code></pre>
<p>These lines don't seem to do anything.</p>
<pre><code>self.setAttribute(Qt.WA_TranslucentBackground)
self.setWindowFlag(Qt.FramelessWindowHint)
</code></pre>
<p>These lines work if used together, but they make everything disappear. Including the titlebar and the frame.</p>
<p>Any thoughts?</p>
|
<python><python-3.x><qt><pyqt><transparency>
|
2024-05-28 11:40:56
| 0
| 392
|
AccidentalTaylorExpansion
|
78,543,852
| 13,944,524
|
Description for APIRouter in FastAPI?
|
<p>Suppose I have the following sample code:</p>
<pre class="lang-py prettyprint-override"><code>from fastapi import APIRouter, FastAPI
router_one = APIRouter(tags=["users"])
router_two = APIRouter(tags=["products"])
app = FastAPI(description="## Description for whole application")
@router_one.get("/users")
async def fn1():
    pass

@router_one.post("/users")
async def fn2():
    pass

@router_two.get("/products")
async def fn3():
    pass

@router_two.post("/products")
async def fn4():
    pass
app.include_router(router_one)
app.include_router(router_two)
</code></pre>
<p>It is rendered as below in swagger:</p>
<p><a href="https://i.sstatic.net/JptekGk2.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/JptekGk2.jpg" alt="enter image description here" /></a></p>
<p>I know I can pass a <code>description</code> argument for individual path operation functions, but what I really need is to pass a <code>description</code> argument to the <code>APIRouter</code> itself (as shown in the picture). I have some common information that is shared among the path operations below a certain tag like <code>users</code>.</p>
<p>I noticed that there is no api available for that in FastAPI like this:</p>
<pre class="lang-py prettyprint-override"><code>router_one = APIRouter(tags=["users"], description=...)
# or
app.include_router(router_one, description=...)
</code></pre>
<p>Is there any other way to achieve that?</p>
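For reference, FastAPI does document per-tag descriptions via the <code>openapi_tags</code> parameter on the application itself (tag metadata); whether that mechanism covers the router-level need here is the open part of the question. The metadata is plain data, sketched below with illustrative placeholder descriptions:

```python
# Tag metadata as documented by FastAPI: a list of dicts keyed by tag name.
# The description strings are illustrative placeholders.
tags_metadata = [
    {"name": "users", "description": "Common information shared by the /users operations."},
    {"name": "products", "description": "Operations on products."},
]

# It is passed to the application, not the router:
# app = FastAPI(openapi_tags=tags_metadata)
```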
|
<python><swagger><fastapi><openapi>
|
2024-05-28 11:33:51
| 1
| 17,004
|
S.B
|
78,543,502
| 9,472,819
|
Pyinstaller unable to load a DLL dynamically at runtime
|
<p>I'm building an application that requires loading a software specific DLL at runtime. The application is built using <code>Pyinstaller</code>, but since the DLL must be loaded at runtime (depending on the user's machine, licensing), I'm unable to bundle the dll when building the application. To load the DLL, I'm doing the following:</p>
<ol>
<li>Adding the dirpath of the DLL to the "PATH" using <code>os.environ["PATH"] = <dll_dir_path></code></li>
<li>Calling the <code>ctypes</code> <code>CDLL</code> method with the absolute filepath to the DLL with <code>CDLL(dll_filepath, winmode=0)</code>.</li>
</ol>
<p>Now, everything runs smoothly when I build the application in my local machine (be it with debug or not debug mode). However, when I build the application using a <code>GH</code> actions deployment process, I get the following error:</p>
<p><code>There was an error loading the Sofistik DLL: Failed to load dynlib/dll '<my_dll.dll>. Most likely this dynlib/dll was not found when the application was frozen.'</code></p>
<p>My local machine is running <code>Windows 11</code>, and <code>GH</code>'s VM is also running <code>Windows</code>. I suspect some environment variables are set differently on the two machines, which is messing up the path to the DLL somehow...</p>
<p>I've tried the suggestion in <a href="https://stackoverflow.com/a/63544460/9472819">this post</a> by using <code>ctypes.windll.kernel32.SetDllDirectoryW(None)</code> but still get the same error.</p>
<p>The DLL is from the Sofistik structural analysis <a href="https://docs.sofistik.com/2023/en/cdb_interfaces/introduction/about_cdb/functions/sof_cdb_init.html" rel="nofollow noreferrer">software </a>.</p>
<p>Is there anything else I can try to debug or fix this issue?</p>
<p><strong>EDIT</strong>: I've printed the full traceback of the exception: <code>FileNotFoundError: Could not find module '<dll_module>.dll' (or one of its dependencies). Try using the full path with constructor syntax.</code></p>
|
<python><dll><pyinstaller><ctypes>
|
2024-05-28 10:29:56
| 0
| 749
|
tomas-silveira
|
78,543,163
| 1,860,805
|
Python3 stripping last binary chars
|
<p>There is a byte stream from a socket and it comes like this:</p>
<blockquote>
<p>b"8=FIXT.1.1\x019=000076\x0135=A\x0149=ABCD\x0156=0109\x01"</p>
</blockquote>
<p>I need to strip the last \x01 (or \\x01)</p>
<p>So here is the sample python script to demo my requirement</p>
<pre><code>#!/usr/bin/python
test=b"8=FIXT.1.1\\x019=000076\\x0135=A\\x0149=ABCD\\x0156=0109\\x01"
print(test)
data=str(test)
data=data.rstrip("\\x01")
print(data)
</code></pre>
<p>It needs to print like this after stripping the last bytes:</p>
<pre><code>b'8=FIXT.1.1\\x019=000076\\x0135=A\\x0149=ABCD\\x0156=0109\\x01'
b'8=FIXT.1.1\\x019=000076\\x0135=A\\x0149=ABCD\\x0156=0109
</code></pre>
<p>but it prints like this without stripping</p>
<pre><code>b'8=FIXT.1.1\\x019=000076\\x0135=A\\x0149=ABCD\\x0156=0109\\x01'
b'8=FIXT.1.1\\x019=000076\\x0135=A\\x0149=ABCD\\x0156=0109\\x01'
</code></pre>
<p>How can I do that in Python 3?</p>
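To make the question self-contained: <code>str()</code> of a <code>bytes</code> object produces its repr (including the <code>b'</code> prefix), so a string-level <code>rstrip("\\x01")</code> never sees a real 0x01 byte. A short sketch of the distinction (the FIX fields are shortened for brevity):

```python
data = b"8=FIXT.1.1\x019=000076\x0135=A\x01"

# str() on bytes gives the repr, a text string beginning with "b'":
as_text = str(data)
print(as_text)  # b'8=FIXT.1.1\x019=000076\x0135=A\x01'

# Stripping on the bytes object itself takes a bytes argument:
print(data.rstrip(b"\x01"))  # b'8=FIXT.1.1\x019=000076\x0135=A'
```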
|
<python><python-3.x>
|
2024-05-28 09:27:30
| 2
| 523
|
Ramanan T
|
78,543,129
| 3,932,615
|
pyodbc connect to sql server with TrustServerCertificate
|
<p>I'm trying to connect to a SQL server instance using python and pyodbc.</p>
<pre><code>cnxn = pyodbc.connect("Driver={ODBC Driver 18 for SQL Server};Server=192.168.0.1;Database=Db;User Id=Too;Password=Easy")
</code></pre>
<p>When I run this, I get the below error:</p>
<blockquote>
<p>('08001', '[08001] [Microsoft][ODBC Driver 18 for SQL Server]SSL Provider: The certificate chain was issued by an authority that is not trusted.\r\n (-2146893019) (SQLDriverConnect); [08001] [Microsoft][ODBC Driver 18 for SQL Server]Invalid connection string attribute (0); [08001] [Microsoft][ODBC Driver 18 for SQL Server]Client unable to establish connection. For solutions related to encryption errors, see https://go.microsoft.com/fwlink/?linkid=2226722 (-2146893019)')</p>
</blockquote>
<p>So we have a self-signed certificate. This should be easy to fix, as we just add <code>TrustServerCertificate</code>:</p>
<pre><code>cnxn = pyodbc.connect("Driver={ODBC Driver 18 for SQL Server};Server=192.168.0.1;Database=Db;User Id=Too;Password=Easy;Encrypt=yes;TrustServerCertificate=yes")
</code></pre>
<p>Which now results in a new error:</p>
<blockquote>
<p>('28000', "[28000] [Microsoft][ODBC Driver 18 for SQL Server][SQL Server]Login failed for user ''. (18456) (SQLDriverConnect); [28000] [Microsoft][ODBC Driver 18 for SQL Server]Invalid connection string attribute (0); [28000] [Microsoft][ODBC Driver 18 for SQL Server][SQL Server]Login failed for user ''. (18456); [28000] [Microsoft][ODBC Driver 18 for SQL Server]Invalid connection string attribute (0)")</p>
</blockquote>
<p>Removing <code>TrustServerCertificate</code>, but leaving <code>Encrypt</code> results in the below error:</p>
<blockquote>
<p>('08001', '[08001] [Microsoft][ODBC Driver 18 for SQL Server]SSL Provider: The certificate chain was issued by an authority that is not trusted.\r\n (-2146893019) (SQLDriverConnect); [08001] [Microsoft][ODBC Driver 18 for SQL Server]Invalid connection string attribute (0); [08001] [Microsoft][ODBC Driver 18 for SQL Server]Client unable to establish connection. For solutions related to encryption errors, see https://go.microsoft.com/fwlink/?linkid=2226722 (-2146893019)')</p>
</blockquote>
<p>Which suggests the problem is something related to recognising <code>TrustServerCertificate</code> in the connection string.</p>
<p>So I try to set the value through <code>attrs_before</code> instead</p>
<pre><code>SQL_COPT_SS_TRUST_SERVER_CERTIFICATE = 1228
SQL_COPT_SS_ENCRYPT = 1223
cnxn = pyodbc.connect("Driver={ODBC Driver 18 for SQL Server};Server=192.168.0.1;Database=Db;User Id=Too;Password=Easy;", attrs_before={SQL_COPT_SS_ENCRYPT : 1, SQL_COPT_SS_TRUST_SERVER_CERTIFICATE : 1})
</code></pre>
<p>Which again results in the error</p>
<blockquote>
<p>Invalid connection string attribute (0)</p>
</blockquote>
<p>What am I missing in order to connect to a SQL Server instance that's using a self-signed certificate?</p>
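One way to rule out quoting or separator problems is to build the connection string from explicit key/value pairs (the values are the question's placeholders; <code>UID</code>/<code>PWD</code> are the ODBC-documented spellings of the credential keywords, which some drivers are stricter about than the <code>User Id</code>/<code>Password</code> aliases):

```python
# Assemble the ODBC connection string from explicit parts.
parts = {
    "Driver": "{ODBC Driver 18 for SQL Server}",
    "Server": "192.168.0.1",
    "Database": "Db",
    "UID": "Too",
    "PWD": "Easy",
    "Encrypt": "yes",
    "TrustServerCertificate": "yes",
}
conn_str = ";".join(f"{k}={v}" for k, v in parts.items())
print(conn_str)

# cnxn = pyodbc.connect(conn_str)  # needs pyodbc, the driver, and a reachable server
```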
|
<python><sql-server><pyodbc>
|
2024-05-28 09:22:31
| 1
| 3,240
|
Neil P
|