| QuestionId | UserId | QuestionTitle | QuestionBody | Tags | CreationDate | AnswerCount | UserExpertiseLevel | UserDisplayName |
|---|---|---|---|---|---|---|---|---|
79,199,153
| 31,317
|
Using IUnknown-derived interface in python / pywin32
|
<p>In short: I'm trying to call a method on an IUnknown-based interface. I have some parts, but they don't connect.</p>
<p>Situation:</p>
<pre><code>o1 = win32com.client.Dispatch("SomeProgId")
o2 = o1.SubObject
</code></pre>
<p>The sub-object is not co-creatable; <code>o2._oleobj_</code> is a <code>&lt;PyIDispatch at ...&gt;</code>, as expected.</p>
<p><code>o2</code> also supports another interface, <code>IMyInterface</code>, derived from <code>IUnknown</code>, and I want to call a method on that interface.</p>
<p>The following:</p>
<pre><code>iid = pywintypes.IID("{xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx}")  # IID of IMyInterface
o2._oleobj_.QueryInterface(iid)
</code></pre>
<p>fails with</p>
<pre><code>TypeError: There is no interface object registered that supports this IID
</code></pre>
<p>The <code>QueryInterface</code> call itself succeeds (I've verified that in the source for the COM object), and if I specify an unsupported IID, the error message is different. The call fails even without trying to assign the result.</p>
<hr />
<p>Okay, so I set out to find out about early binding support in python. Documentation seems spotty and outdated (or am I missing something?)</p>
<p>The following succeeds:</p>
<pre><code>tlb = comtypes.client.GetModule(r"path-to-dll")
x = tlb.IMyInterface()
</code></pre>
<p>Of course <code>x</code> is not usable, but IntelliSense shows the correct methods etc. for <code>x</code>.
Generally, IntelliSense shows all elements from the type library, so that part seems to work.</p>
<p>I've also tried:</p>
<pre><code>myitf = win32com.client.CastTo(o2._oleobj_, 'IMyInterface')
</code></pre>
<p>which fails with</p>
<blockquote>
<p>No module named 'win32com.gen_py.x0x2x4.IMyInterface'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "…\Python312-32\Lib\site-packages\win32com\client\__init__.py", line 213, in CastTo
mod = gencache.GetModuleForCLSID(target_clsid)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
…
No module named 'win32com.gen_py.x0x2x4.IMyInterface'</p>
</blockquote>
<p>2.4 is the typelib version, so everything looks good.</p>
<p>Any help?</p>
|
<python><com><pywin32><win32com>
|
2024-11-18 08:10:24
| 0
| 41,346
|
peterchen
|
79,199,034
| 2,955,827
|
How to read a part of parquet dataset into pandas?
|
<p>I have a huge dataframe and want to split it into small files for better performance. Here is the example code to write it. But I cannot read back just a small piece of it without loading the whole dataframe into memory.</p>
<pre class="lang-py prettyprint-override"><code>import pandas as pd
import os
# Create a sample DataFrame with daily frequency
data = {
    "timestamp": pd.date_range(start="2023-01-01", periods=1000, freq="D"),
    "value": range(1000),  # must match periods=1000, or DataFrame() raises
}
df = pd.DataFrame(data)
# Add a column for year (to use as a partition key)
df["year"] = df["timestamp"].dt.year
df["month"] = df["timestamp"].dt.month
# Use the join method to expand the DataFrame (Cartesian product with a multiplier)
multiplier = pd.DataFrame({"replica": range(100)}) # Create a multiplier DataFrame
expanded_df = df.join(multiplier, how="cross") # Cartesian product using cross join
# Define the output directory
output_dir = "output_parquet"
# Save the expanded DataFrame to Parquet with year-based partitioning
expanded_df.to_parquet(
    output_dir,
    partition_cols=["year", "month"],  # Specify the partition columns
)
</code></pre>
<p>Which is the best way to read from the dataset if I only need data from <code>2023-12-01</code> to <code>2024-01-31</code>?</p>
|
<python><pandas><parquet><pyarrow>
|
2024-11-18 07:24:44
| 1
| 3,295
|
PaleNeutron
|
79,198,966
| 4,190,657
|
ThreadPoolExecutor for Parallelism
|
<p>I have PySpark code which makes a few POST API calls to an external system. For each row in the input dataframe, I need to trigger a POST API request (using Python code) to create an entry in an external system. Given that the dataset is large, this process was taking considerable time.</p>
<p>To improve performance, I plan to use Python's ThreadPoolExecutor to process the rows (i.e. to fire the POST APIs) in parallel (multi-threading) based on the available cores.</p>
<pre><code>from concurrent.futures import ThreadPoolExecutor, as_completed

num_cores = spark.sparkContext.defaultParallelism

def process_all_rows(input_df):
    results = []
    with ThreadPoolExecutor(max_workers=num_cores) as executor:  # Adjust max_workers based on needs
        futures = {executor.submit(process_row, row): row for row in input_df.collect()}
        for future in as_completed(futures):
            try:
                result = future.result()
                results.append(result)
            except Exception as e:
                logger.error(f"Error in thread execution: {e}")
    return results
</code></pre>
<p>While reviewing this, I was told that ThreadPoolExecutor mainly performs context switching. So if the input DataFrame has 100 rows and num_cores is set to 8 (i.e. the cluster has 8 cores), the code will use only one core (not all 8 available cores), firing the POST requests sequentially with context switching, i.e. firing one POST request, then the next, and so on. Is this understanding correct? Or would ThreadPoolExecutor use all 8 cores in parallel?</p>
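<p><em>Editor's note:</em> for I/O-bound work like POST requests, threads are not limited to one in-flight request at a time: each thread releases the GIL while it waits on the network, so requests overlap regardless of core count. A minimal sketch, with <code>time.sleep</code> standing in for the POST call:</p>

```python
import time
from concurrent.futures import ThreadPoolExecutor, as_completed

def fake_post(row):
    # Stand-in for an I/O-bound POST request; sleeping releases the GIL,
    # so the other threads run while this one waits.
    time.sleep(0.2)
    return row

rows = list(range(8))
start = time.perf_counter()
with ThreadPoolExecutor(max_workers=8) as executor:
    futures = [executor.submit(fake_post, r) for r in rows]
    results = [f.result() for f in as_completed(futures)]
elapsed = time.perf_counter() - start
# Eight 0.2 s "requests" overlap: wall time stays close to 0.2 s, not 1.6 s.
```

<p>CPU-bound Python code, by contrast, really would serialize on the GIL; that is the caveat the reviewer was likely pointing at.</p>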
|
<python><python-3.x><apache-spark><pyspark>
|
2024-11-18 07:04:16
| 0
| 305
|
steve
|
79,198,926
| 9,632,470
|
How to Read a Text File and Make Pandas Data Frame
|
<p>Given a text file (data.txt) with the following contents:</p>
<pre><code>John: 1
Jane: 5
Mark: 7
Dan: 2
</code></pre>
<p>How can I use Python to turn the text file into a DataFrame that is logically equivalent to the one given by:</p>
<pre><code>import pandas as pd

# Initialize data
data = {'Name': ['John', 'Jane', 'Mark', 'Dan'],
        'Count': [1, 5, 7, 2]}

# Create DataFrame
df = pd.DataFrame(data)
</code></pre>
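<p><em>Editor's sketch</em> of one straightforward approach: treat <code>:</code> as the separator and let <code>read_csv</code> do the parsing (the column names below are taken from the question):</p>

```python
import io

import pandas as pd

text = """John: 1
Jane: 5
Mark: 7
Dan: 2
"""

# header=None: the file has no header row; skipinitialspace drops the
# blank after each ':' so Count parses as an integer column.
df = pd.read_csv(io.StringIO(text), sep=":", header=None,
                 names=["Name", "Count"], skipinitialspace=True)
```

<p>Reading from <code>data.txt</code> instead of an inline string works the same way: <code>pd.read_csv("data.txt", sep=":", header=None, names=["Name", "Count"], skipinitialspace=True)</code>.</p>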
|
<python><pandas><dataframe>
|
2024-11-18 06:48:00
| 2
| 441
|
Prince M
|
79,198,686
| 6,468,467
|
DiffPool implementation in PyTorch for unsupervised clustering of a homogeneous graph
|
<p>I am trying to implement unsupervised multi-layered clustering based on the <strong>DiffPool</strong> approach.</p>
<pre><code>import torch
import torch.nn.functional as F
from torch_geometric.data import Data
from torch_geometric.nn import GCNConv, DenseGCNConv, dense_diff_pool
from torch_geometric.utils import to_dense_adj
import networkx as nx
import matplotlib.pyplot as plt
# --- Step 1: Generate a Random Graph Using NetworkX ---
num_nodes = 100 # Number of nodes in the graph
num_edges = 300 # Number of edges to ensure sufficient connectivity
# Create a random graph using NetworkX
G = nx.gnm_random_graph(num_nodes, num_edges)
# Extract edge index from NetworkX graph
edge_index = torch.tensor(list(G.edges)).t().contiguous()
# Generate random node features
node_features = torch.rand((num_nodes, 16)) # 16-dimensional node features
# Create a torch_geometric Data object
data_homogeneous = Data(x=node_features, edge_index=edge_index)
# --- Visualize the Graph Structure ---
plt.figure(figsize=(8, 6))
nx.draw(G, with_labels=True, node_color='lightblue', node_size=500)
plt.title("Random Graph Structure")
plt.show()
# --- Step 2: Implement Multi-layer DiffPool ---
class MultiLayerDiffPool(torch.nn.Module):
    def __init__(self, input_dim, hidden_dim, num_pool_layers, initial_num_nodes):
        super(MultiLayerDiffPool, self).__init__()
        self.num_pool_layers = num_pool_layers
        self.projection = torch.nn.Linear(input_dim, hidden_dim)
        self.gcn_layers = torch.nn.ModuleList()
        self.dense_gcn_layers = torch.nn.ModuleList()
        self.pool_assignments = torch.nn.ModuleList()
        # Initialize the number of clusters conservatively
        current_num_nodes = initial_num_nodes
        for i in range(num_pool_layers):
            # Reduce nodes gradually: use a conservative reduction rate
            num_clusters = max(8, int(current_num_nodes * 0.8))
            # num_clusters = min(current_num_nodes, num_clusters)  # Ensure clusters do not exceed current nodes
            current_num_nodes = num_clusters
            self.gcn_layers.append(GCNConv(hidden_dim, hidden_dim))
            self.dense_gcn_layers.append(DenseGCNConv(hidden_dim, hidden_dim))
            self.pool_assignments.append(torch.nn.Linear(hidden_dim, num_clusters))

    def forward(self, x, edge_index):
        x = F.relu(self.projection(x))
        batch = torch.zeros(x.size(0), dtype=torch.long, device=x.device)
        adj_dense = to_dense_adj(edge_index, max_num_nodes=x.size(0))[0]
        layer_assignments = []
        for i, (gcn, dense_gcn, pool_assign) in enumerate(
            zip(self.gcn_layers, self.dense_gcn_layers, self.pool_assignments)
        ):
            x = F.relu(gcn(x, edge_index))
            S = F.softmax(pool_assign(x), dim=-1)
            layer_assignments.append(S.detach().cpu().numpy())
            # Print shapes for debugging
            print(f"Layer {i + 1} - x shape: {x.shape}, S shape: {S.shape}")
            if S.size(1) > x.size(0):
                raise ValueError("Number of clusters in S cannot exceed the number of nodes.")
            # Perform pooling and update x and adj_dense
            x, adj_dense, _, _ = dense_diff_pool(x, adj_dense, S, batch)
            # Ensure that x is reduced appropriately
            x = x.mean(dim=1)
        return x, layer_assignments
# --- Revised Training Loop ---
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
model = MultiLayerDiffPool(
input_dim=node_features.size(1), hidden_dim=32, num_pool_layers=3, initial_num_nodes=num_nodes
).to(device)
optimizer = torch.optim.Adam(model.parameters(), lr=0.001)
data_homogeneous = data_homogeneous.to(device)
# Use the initial node features for unsupervised reconstruction
original_features = data_homogeneous.x.clone().to(device)
model.train()
for epoch in range(100):
    optimizer.zero_grad()
    x, layer_assignments = model(data_homogeneous.x, data_homogeneous.edge_index)
    # Use the reconstructed x as the features for loss calculation
    reconstructed_features = x
    loss = F.mse_loss(reconstructed_features, original_features)
    loss.backward()
    optimizer.step()
    print(f"Epoch {epoch + 1}, Loss: {loss.item()}")
print("Training complete.")
# --- Step 4: Plot Layer Assignments ---
for layer_idx, S in enumerate(layer_assignments):
    print(f"Shape of S for layer {layer_idx + 1}: {S.shape}")
    if len(S.shape) == 3 and S.shape[0] == 1:
        S = S.squeeze(0)
    plt.figure(figsize=(8, 6))
    plt.imshow(S, cmap='viridis', aspect='auto')
    plt.colorbar(label='Assignment Probability')
    plt.title(f"Layer {layer_idx + 1} Node Assignments")
    plt.xlabel("Clusters")
    plt.ylabel("Nodes")
    plt.show()
# --- Step 5: Plot Graph Per Layer ---
for layer_idx, S in enumerate(layer_assignments):
    print(f"Shape of S for layer {layer_idx + 1}: {S.shape}")
    if len(S.shape) == 3 and S.shape[0] == 1:
        S = S.squeeze(0)
    cluster_assignments = S.argmax(axis=1)
    G_layer = nx.Graph()
    G_layer.add_edges_from(data_homogeneous.edge_index.t().tolist())
    colors = [cluster_assignments[node] for node in range(data_homogeneous.num_nodes)]
    plt.figure(figsize=(10, 8))
    nx.draw(
        G_layer,
        node_color=colors,
        with_labels=True,
        cmap='viridis',
        node_size=300,
        edge_color='gray'
    )
    plt.title(f"Graph Visualization at Layer {layer_idx + 1}")
    plt.show()
</code></pre>
<p>Here is printed file from the above script:</p>
<blockquote>
<p>Layer 1 - x shape: torch.Size([100, 32]), S shape: torch.Size([100, 70])</p>
</blockquote>
<p>But it returns the following error.</p>
<blockquote>
<p>{ "name": "RuntimeError", "message": "index 87 is out of bounds for
dimension 0 with size 70", "stack":
"--------------------------------------------------------------------------- RuntimeError Traceback (most recent call
last) Cell In[88], line 94
92 for epoch in range(100):
93 optimizer.zero_grad()
---> 94 x, layer_assignments = model(data_homogeneous.x, data_homogeneous.edge_index)
96 # Use the reconstructed x as the features for loss calculation
97 reconstructed_features = x</p>
<p>...</p>
<p>File
d:\test\.venv\lib\site-packages\torch_geometric\utils\_scatter.py:75,
in scatter(src, index, dim, dim_size, reduce)
     73 if reduce == 'sum' or reduce == 'add':
     74     index = broadcast(index, src, dim)
---> 75 return src.new_zeros(size).scatter_add_(dim, index, src)
     77 if reduce == 'mean':
     78     count = src.new_zeros(dim_size)</p>
<p>RuntimeError: index 87 is out of bounds for dimension 0 with size 70"
}</p>
</blockquote>
<p>It seems that the number of clusters (pooled nodes) in the assignment matrix <em>S</em> exceeds the number of available nodes at the next layer. <strong>Layer 1 - x shape: torch.Size([100, 32]), S shape: torch.Size([100, 20])</strong> means that we have 100 nodes with 32 features in x, whereas the assignment matrix <strong>S</strong> reduces these 100 nodes into 20 clusters.
I tried different settings, e.g., changing the adjustment factor <em>num_clusters = max(8, int(current_num_nodes * 0.8))</em> to different values, but it won't work. It seems that I am missing some basics, but I'm not sure how to find them.</p>
<p>Your insights are highly appreciated.</p>
|
<python><tensorflow><graph-neural-network>
|
2024-11-18 04:26:24
| 0
| 841
|
HSJ
|
79,198,575
| 1,297,248
|
SQLAlchemy database session is not reset after each test
|
<p>I have this TestRunner base class mostly following their <a href="https://docs.sqlalchemy.org/en/14/orm/session_transaction.html#joining-a-session-into-an-external-transaction-such-as-for-test-suites" rel="nofollow noreferrer">example</a>:</p>
<pre class="lang-py prettyprint-override"><code>import unittest

from sqlalchemy import create_engine
from sqlalchemy.event import listens_for
from sqlalchemy.orm import scoped_session
from sqlalchemy.orm import sessionmaker

from compliance.takedowns.data.models import Base

engine = create_engine('sqlite:///:memory:')
SessionFactory = scoped_session(sessionmaker(bind=engine))
Base.metadata.create_all(engine)  # Initial schema setup

class TakedownsDBInternalTestRunner(unittest.TestCase):
    @classmethod
    def setUpClass(cls):
        cls.engine = engine

    def setUp(self):
        self.connection = self.engine.connect()
        self.trans = self.connection.begin()
        self.session = SessionFactory
        self.nested = self.connection.begin_nested()

        @listens_for(self.session, "after_transaction_end")
        def end_savepoint(session, transaction):
            assert self.nested is not None
            if not self.nested.is_active:
                self.nested = self.connection.begin_nested()

    def tearDown(self):
        self.session.close()
        self.trans.rollback()
        self.connection.close()
</code></pre>
<p>But I'm noticing I'm getting referential integrity errors between my tests:</p>
<pre class="lang-py prettyprint-override"><code>class RequestTests(TakedownsDBInternalTestRunner):
    def test_one(self):
        request = create_base_request()
        self.session.add(request)
        self.session.commit()
        result = self.session.query(Request).all()
        self.assertEqual(len(result), 1)

    def test_two(self):
        request = create_base_request()
        self.session.add(request)
        self.session.commit()
        result = self.session.query(Request).all()
        self.assertEqual(len(result), 2)
</code></pre>
<p>This is the error I get:</p>
<pre><code>========================================================================================================= short test summary info =========================================================================================================
FAILED test_request.py::RequestTests::test_two - AssertionError: 1 != 2
FAILED test_request.py::RequestTests::test_one - AssertionError: 2 != 1
</code></pre>
|
<python><sqlalchemy>
|
2024-11-18 02:55:30
| 2
| 6,409
|
Batman
|
79,198,485
| 9,873,381
|
Can we use Optuna to optimize YOLOv7's hyperparameters on a custom dataset?
|
<p>I would like to use Optuna to optimize YOLOv7's hyperparameters like the learning rate, momentum, weight_decay, iou_t, etc. Is this possible?</p>
<p>I tried writing an objective function that would call the training script <a href="https://github.com/WongKinYiu/yolov7/blob/main/train_aux.py" rel="nofollow noreferrer">https://github.com/WongKinYiu/yolov7/blob/main/train_aux.py</a> with a hyperparameter file created in the previous step. This function would then parse its output to extract the mAP for the class of interest. This value would be reported to Optuna.</p>
<p>I was expecting this script to run for the number of trials and epochs I specified and then give me the optimal set of hyperparameters based on these runs.</p>
|
<python><deep-learning><yolo><hyperparameters><optuna>
|
2024-11-18 01:26:56
| 0
| 672
|
Skywalker
|
79,198,413
| 492,015
|
Adding JSON key to field generated by usaddress python library
|
<p>Currently I'm using the usaddress Python library to parse US addresses: <a href="https://github.com/datamade/usaddress" rel="nofollow noreferrer">https://github.com/datamade/usaddress</a></p>
<p>Example code:</p>
<pre><code>import usaddress

address = '456 Elm St, Someville, NY 54321'
print(usaddress.tag(address))
</code></pre>
<p>and the generated result is:</p>
<pre><code>({'AddressNumber': '456', 'StreetName': 'Elm', 'StreetNamePostType': 'St',
  'PlaceName': 'Someville', 'StateName': 'NY', 'ZipCode': '54321'},
 'Street Address')
</code></pre>
<p>All the JSON values have keys besides "Street Address"; is there any way to assign a key to the "Street Address" value as well? It would make parsing the JSON with GSON or Jackson much simpler.</p>
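<p><em>Editor's sketch:</em> <code>usaddress.tag()</code> returns a <code>(dict, label)</code> tuple rather than flat JSON, so the trailing label can be folded into the dict under a key of your choosing before serializing. The key name <code>AddressType</code> below is an invented example, not part of the library's output:</p>

```python
import json

# Mirrors the (tagged, label) tuple shown in the question
tagged = {'AddressNumber': '456', 'StreetName': 'Elm',
          'StreetNamePostType': 'St', 'PlaceName': 'Someville',
          'StateName': 'NY', 'ZipCode': '54321'}
label = 'Street Address'

# Fold the label into the dict so everything is one flat JSON object
record = {**tagged, 'AddressType': label}  # 'AddressType' is a made-up key
payload = json.dumps(record)
```

<p>GSON or Jackson can then deserialize <code>payload</code> as a single flat object.</p>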
|
<python>
|
2024-11-18 00:20:41
| 2
| 9,055
|
Arya
|
79,198,409
| 1,837,976
|
Azure Key vault creation and configuration using script
|
<p>I'm new to Azure Key Vault automation. I know it's possible to create/automate the AKV creation using an ARM template, Terraform, or some other script. I have a requirement to create and configure a Key Vault and store secrets in it.</p>
<p>Step 1: Create a Key Vault in a particular subscription and resource group.</p>
<p>Step 2: Automatically read the passwords from an Excel file and store them as secrets in the Key Vault.</p>
<p><a href="https://i.sstatic.net/GsklVrqQ.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/GsklVrqQ.png" alt="enter image description here" /></a></p>
<p>Step 3: Configure the Key Vault such that apps can fetch the secrets from it.</p>
|
<python><azure><terraform><azure-resource-manager><azure-keyvault>
|
2024-11-18 00:14:25
| 1
| 2,625
|
AskMe
|
79,198,397
| 5,547,553
|
How to redraw figure on event in matplotlib?
|
<p>I'm trying to pre-generate and store matplotlib figures in Python, and then display them on a keyboard event (left/right cursor keys).<br>
It partially seems to work, but fails after the first keypress.<br>
Any idea what I am doing wrong?</p>
<pre class="lang-py prettyprint-override"><code>import matplotlib.pyplot as plt
import numpy as np

def new_figure(title, data):
    fig, ax = plt.subplots()
    plt.plot(data, label=title)
    ax.set_xlabel('x-axis')
    ax.set_ylabel('value')
    plt.legend()
    plt.title(title)
    plt.close(fig)
    return fig

def show_figure(fig):
    dummy = plt.figure()
    new_manager = dummy.canvas.manager
    new_manager.canvas.figure = fig
    fig.set_canvas(new_manager.canvas)

def redraw(event, cnt):
    event.canvas.figure.clear()
    dummy = event.canvas.figure
    new_manager = dummy.canvas.manager
    new_manager.canvas.figure = figs[cnt]
    figs[cnt].set_canvas(new_manager.canvas)
    event.canvas.draw()

def keypress(event):
    global cnt
    if event.key == 'right':
        cnt += 1
        cnt %= mx
    elif event.key == 'left':
        cnt -= 1
        if cnt < 0:
            cnt = mx - 1
    redraw(event, cnt)

d = range(0, 360)
data = []
data.append(np.sin(np.radians(d)))
data.append(np.cos(np.radians(d)))
data.append(np.tan(np.radians(d)))
titles = ['sin', 'cos', 'tan']
mx = len(data)

figs = []
for i in range(mx):
    fig = new_figure(titles[i], data[i])
    figs.append(fig)

cnt = 0
show_figure(figs[0])
figs[0].canvas.mpl_connect('key_press_event', keypress)
plt.show()
</code></pre>
<p>The error I get eventually is:<br></p>
<pre><code> File "C:\Program Files\Python39\lib\tkinter\__init__.py", line 1636, in _configure
self.tk.call(_flatten((self._w, cmd)) + self._options(cnf))
_tkinter.TclError: invalid command name ".!navigationtoolbar2tk.!button2"
</code></pre>
|
<python><matplotlib>
|
2024-11-18 00:05:40
| 1
| 1,174
|
lmocsi
|
79,198,377
| 398,348
|
How to tell if a matrix is 1-D or 2-D or... how to know if len(lst) is the number of rows of a 2-D matrix or the number of cols of a 1-D matrix?
|
<p>I am learning Python and puzzled by this error while trying to tell whether a matrix is 1-D or 2-D. (They are all integers and rectangular, since I am multiplying them; that is why I call them matrices.)</p>
<p>I am trying to find the dimensions of the matrices so that I can create a new matrix representing their product. (m,n) x (n,p) = (m,p)</p>
<p><em>How to know if len(lst) is the number of rows of a 2-D matrix or the number of cols of a 1-D matrix?</em></p>
<p><strong>Approach 1</strong></p>
<pre><code>def matrix_multiplication(H, lst):
    # your code here
    print(f'H {H}')
    print(f'lst {lst}')
    rA = len(H)
    cA = len(H[0]) if rA > 0 else 0
    rB = len(lst)
    cB = len(lst[0]) if rB > 0 else 0
    print(f'H: {rA}x{cA}')
    print(f'lst: {rB}x{cB}')
    return (H)

A1 = [[0,1,0,1],[1,0,0,0],[1,0,1,1]]
b1 = [1,1,1,0]
c1 = matrix_multiplication(A1, b1)
print('c1=', c1)
assert c1 == [1,1,0], 'Test 1 failed'
</code></pre>
<p>--------OUTPUT----------</p>
<pre><code>H [[0, 1, 0, 1], [1, 0, 0, 0], [1, 0, 1, 1]]
lst [1, 1, 1, 0]
Traceback (most recent call last):
  File "c:\Users\Me\Documents\Learn\MS-CS\Foundations of Data Structures and Algorithms\assignment_week3.py", line 42, in <module>
    c1 = matrix_multiplication(A1, b1)
         ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "c:\Users\Me\Documents\Learn\MS-CS\Foundations of Data Structures and Algorithms\assignment_week3.py", line 20, in matrix_multiplication
    cB = len(lst[0]) if rB > 0 else 0
         ^^^^^^^^^^^
TypeError: object of type 'int' has no len()
------------------------
</code></pre>
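<p><em>Editor's sketch</em> of the underlying issue: <code>b1</code> is a flat list, so <code>lst[0]</code> is an <code>int</code> and has no <code>len()</code>. Checking whether the first element is itself a list distinguishes the two cases:</p>

```python
def shape(m):
    """(rows, cols) for a 2-D matrix, (length,) for a 1-D vector."""
    if m and isinstance(m[0], list):  # first element is a row -> 2-D
        return (len(m), len(m[0]))
    return (len(m),)                  # flat list of numbers -> 1-D

A1 = [[0, 1, 0, 1], [1, 0, 0, 0], [1, 0, 1, 1]]
b1 = [1, 1, 1, 0]
# A (4,) vector can then be promoted to a 4x1 column matrix before
# multiplying, so (3, 4) x (4, 1) -> (3, 1).
```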
|
<python>
|
2024-11-17 23:43:43
| 2
| 3,795
|
likejudo
|
79,198,298
| 14,122
|
Improving safety when a SQLAlchemy relationship adds conditions that refer to tables that don't exist yet
|
<p>I have a situation where I want to set up relationships between tables, mapped with the SQLAlchemy ORM layer, where these relationships have an extra join key. As far as I know, setting this up by hand requires embedding strings that are <code>eval</code>'d; I'm trying to figure out to what extent that can be avoided, or <em>at least</em> validated early (ideally by pyright or mypy, before runtime).</p>
<p>Take the following schema, which doesn't yet have the extra join key added:</p>
<pre><code>from typing import List
from uuid import UUID

from sqlalchemy.orm import DeclarativeBase, Mapped, mapped_column, relationship
from sqlalchemy.schema import ForeignKey
from sqlalchemy.types import Uuid
import sqlalchemy as sa

class Base(DeclarativeBase): pass

class User(Base):
    __tablename__ = 'user'
    id: Mapped[UUID] = mapped_column(Uuid(), primary_key=True)
    tenant_id: Mapped[UUID] = mapped_column(Uuid())
    actions: Mapped[List["Action"]] = relationship("Action", back_populates="user", foreign_keys="Action.user_id")

class Action(Base):
    __tablename__ = 'action'
    id: Mapped[UUID] = mapped_column(Uuid(), primary_key=True)
    tenant_id: Mapped[UUID] = mapped_column(Uuid())
    user_id: Mapped[UUID] = mapped_column(Uuid(), ForeignKey("user.id"))
    user: Mapped["User"] = relationship("User", back_populates="actions", foreign_keys=[user_id])

Base.metadata.create_all(sa.create_engine('sqlite://', echo=True))
</code></pre>
<p>That's simple enough, as long as we aren't trying to add belt-and-suspenders protection against relationship evaluations linking across tenants. As soon as we do, the <code>relationship</code> declarations need to use strings any time there's a need to refer to an as-yet-undeclared class:</p>
<pre><code>class User(Base):
    __tablename__ = "user"
    id: Mapped[UUID] = mapped_column(Uuid(), primary_key=True)
    tenant_id: Mapped[UUID] = mapped_column(Uuid())
    actions: Mapped["Action"] = relationship("Action",
        back_populates="user",
        foreign_keys="Action.user_id",
        primaryjoin="and_(tenant_id == Action.tenant_id, id == Action.user_id)",
    )

class Action(Base):
    __tablename__ = "action"
    id: Mapped[UUID] = mapped_column(Uuid(), primary_key=True)
    tenant_id: Mapped[UUID] = mapped_column(Uuid())
    user_id: Mapped[UUID] = mapped_column(Uuid(), ForeignKey("user.id"))
    user: Mapped["User"] = relationship("User",
        back_populates="actions",
        foreign_keys=[user_id],
        primaryjoin=sa.and_(tenant_id == User.tenant_id, user_id == User.id),
    )
</code></pre>
<hr />
<p>The above works, but having that <code>primaryjoin="and_(tenant_id == Action.tenant_id, id == Action.user_id)"</code> line where the heavy lifting is done in a context opaque to static analysis is unfortunate.</p>
<p>If we could provide code that's evaluated <em>after</em> all types are defined, but before SQLAlchemy begins its introspection, that would allow a helper function to be used to generate the <code>relationship</code>s. This still isn't static-checking friendly, but it's considerably better than nothing. <strong>However,</strong> I don't know when the relevant introspection happens (if <code>relationship</code>s need to exist when <code>__init_subclass__</code> is called, anything trying to add them later would be too late).</p>
<p>I'd also be happy with any kind of situation where I'm using strings that static analysis can validate to be legitimate forward references -- if instantiating <code>typing.ForwardRef("Action.tenant_id")</code> were treated by pyright as an indication that a warning should be thrown if <code>Action.tenant_id</code> doesn't eventually exist, that would be perfect.</p>
<p>SQLAlchemy has quite a bit by way of facilities I'm immediately unfamiliar with, so I'm hoping there's an option I'm not thinking of here.</p>
|
<python><sqlalchemy>
|
2024-11-17 22:37:56
| 1
| 299,045
|
Charles Duffy
|
79,198,264
| 2,329,592
|
Connecting a Web Scraper to an Asset in Dagster without the Pipeline Module
|
<p>I want to scrape the content of a website in Dagster with Scrapy.
Unfortunately, all the examples I have found use the pipeline module of Dagster.
The current version does not have this pipeline plugin.</p>
<p>I have this scraper and its parse function, which returns all headings of the document.
These headings are to be used in an asset. How do I connect the asset and the crawler?</p>
<pre><code>import scrapy
from dagster import asset, AssetExecutionContext

class MySpider(scrapy.Spider):
    name = 'headless'

    def start_requests(self):
        urls = ['http://google.com']  # Enter the URL of the HTML page here
        for url in urls:
            yield scrapy.Request(url=url, callback=self.parse)

    def parse(self, response):
        headlines = response.css('h1::text').getall()
        yield {'headlines': headlines}

spider = MySpider()

@asset()
def headlines(context: AssetExecutionContext):
    headlines = spider.parse()
</code></pre>
<p>This is just a non-working example that I need some advice on.</p>
|
<python><scrapy><dagster>
|
2024-11-17 22:12:25
| 0
| 3,262
|
marcel
|
79,198,258
| 1,169,091
|
Why does pip say cmake is not installed?
|
<p>I have cmake:</p>
<pre><code>PS C:\Users\nicholdw\AppData\Local\Programs\Python\Python312> cmake --version
cmake version 3.31.0
</code></pre>
<p>I try to install dlib and it says I don't have cmake:</p>
<pre><code>PS C:\Users\nicholdw\AppData\Local\Programs\Python\Python312> pip install dlib
Collecting dlib
Using cached dlib-19.24.6.tar.gz (3.4 MB)
Installing build dependencies ... done
Getting requirements to build wheel ... done
Preparing metadata (pyproject.toml) ... done
Building wheels for collected packages: dlib
Building wheel for dlib (pyproject.toml) ... error
error: subprocess-exited-with-error
× Building wheel for dlib (pyproject.toml) did not run successfully.
│ exit code: 1
╰─> [49 lines of output]
<string>:234: SyntaxWarning: invalid escape sequence '\('
<string>:235: SyntaxWarning: invalid escape sequence '\('
<string>:236: SyntaxWarning: invalid escape sequence '\('
running bdist_wheel
running build
running build_ext
Traceback (most recent call last):
File "<frozen runpy>", line 198, in _run_module_as_main
File "<frozen runpy>", line 88, in _run_code
File "C:\Users\nicholdw\AppData\Local\Programs\Python\Python312\Scripts\cmake.exe\__main__.py", line 4, in <module>
ModuleNotFoundError: No module named 'cmake'
================================================================================
================================================================================
================================================================================
CMake is not installed on your system!
Or it is possible some broken copy of cmake is installed on your system.
It is unfortunately very common for python package managers to include
broken copies of cmake. So if the error above this refers to some file
path to a cmake file inside a python or anaconda or miniconda path then you
should delete that broken copy of cmake from your computer.
Instead, please get an official copy of cmake from one of these known good
sources of an official cmake:
- cmake.org (this is how windows users should get cmake)
- apt install cmake (for Ubuntu or Debian based systems)
- yum install cmake (for Redhat or CenOS based systems)
On a linux machine you can run `which cmake` to see what cmake you are
actually using. If it tells you it's some cmake from any kind of python
packager delete it and install an official cmake.
More generally, cmake is not installed if when you open a terminal window
and type
cmake --version
you get an error. So you can use that as a very basic test to see if you
have cmake installed. That is, if cmake --version doesn't run from the
same terminal window from which you are reading this error message, then
you have not installed cmake. Windows users should take note that they
need to tell the cmake installer to add cmake to their PATH. Since you
can't run commands that are not in your PATH. This is how the PATH works
on Linux as well, but failing to add cmake to the PATH is a particularly
common problem on windows and rarely a problem on Linux.
================================================================================
================================================================================
================================================================================
[end of output]
note: This error originates from a subprocess, and is likely not a problem with pip.
ERROR: Failed building wheel for dlib
Failed to build dlib
ERROR: ERROR: Failed to build installable wheels for some pyproject.toml based projects (dlib)
PS C:\Users\nicholdw\AppData\Local\Programs\Python\Python312>
</code></pre>
|
<python><cmake><pip>
|
2024-11-17 22:10:21
| 0
| 4,741
|
nicomp
|
79,198,230
| 320,399
|
Django + Dask integration: usage and progress?
|
<h2>About performance & best practice</h2>
<blockquote>
<p><em>Note, the entire code for the question below is public on Github.
Feel free to check out the project! <a href="https://github.com/b-long/moose-dj-uv/pull/3" rel="nofollow noreferrer">https://github.com/b-long/moose-dj-uv/pull/3</a></em></p>
</blockquote>
<p>I'm trying to work out a simple Django + Dask integration, where one view starts a long-running process and another view is able to check the status of that work. Later on, I might enhance this in a way that <code>get_task_status</code> (or some other Django view function) is able to return the output of the work.</p>
<p>I'm using <code>time.sleep(2)</code> to intentionally mimic a long-running bit of work. Also, it's important to see the overall work status as <code>"running"</code>. To that end, I'm also using a <code>time.sleep()</code> in my test, which feels very silly.</p>
<p>Here's the view code:</p>
<pre class="lang-py prettyprint-override"><code>from uuid import uuid4

from django.http import JsonResponse
from dask.distributed import Client
import time

# Initialize Dask client
client = Client(n_workers=8, threads_per_worker=2)

NUM_FAKE_TASKS = 25

# Dictionary to store futures with task_id as key
task_futures = {}

def long_running_process(work_list):
    def task_function(task):
        time.sleep(2)
        return task

    futures = [client.submit(task_function, task) for task in work_list]
    return futures

def start_task(request):
    work_list = []
    for t in range(NUM_FAKE_TASKS):
        task_id = str(uuid4())  # Generate a unique ID for the task
        work_list.append(
            {"address": f"foo--{t}@example.com", "message": f"Mail task: {task_id}"}
        )
    futures = long_running_process(work_list)
    dask_task_id = futures[0].key  # Use the key of the first future as the task ID
    # Store the futures in the dictionary with task_id as key
    task_futures[dask_task_id] = futures
    return JsonResponse({"task_id": dask_task_id})

def get_task_status(request, task_id):
    futures = task_futures.get(task_id)
    if futures:
        if not all(future.done() for future in futures):
            progress = 0
            return JsonResponse({"status": "running", "progress": progress})
        else:
            results = client.gather(futures, asynchronous=False)
            # Calculate progress, based on futures that are 'done'
            progress = int((sum(future.done() for future in futures) / len(futures)) * 100)
            return JsonResponse(
                {
                    "task_id": task_id,
                    "status": "completed",
                    "progress": progress,
                    "results": results,
                }
            )
    else:
        return JsonResponse({"status": "error", "message": "Task not found"})
</code></pre>
<p>I've written a test, which completes in about 5.5 seconds:</p>
<pre class="lang-py prettyprint-override"><code>from django.test import Client
from django.urls import reverse
import time
def test_immediate_response_with_dask():
client = Client()
response = client.post(reverse("start_task_dask"), data={"data": "foo"})
assert response.status_code == 200
assert "task_id" in response.json()
task_id = response.json()["task_id"]
response2 = client.get(reverse("get_task_status_dask", kwargs={"task_id": task_id}))
assert response2.status_code == 200
r2_status = response2.json()["status"]
assert r2_status == "running"
attempts = 0
max_attempts = 8
while attempts < max_attempts:
time.sleep(1)
try:
response3 = client.get(
reverse("get_task_status_dask", kwargs={"task_id": task_id})
)
assert response3.status_code == 200
r3_status = response3.json()["status"]
r3_progress = response3.json()["progress"]
assert r3_progress >= 99
assert r3_status == "completed"
break # Exit the loop if successful
except Exception:
attempts += 1
if attempts == max_attempts:
raise # Raise the last exception if all attempts failed
</code></pre>
<p>My question is, is there a more performant way to implement this same API? What if <code>NUM_FAKE_TASKS = 10000</code>?</p>
<p>Am I wasting cycles?</p>
<h2>Edit: How to view progress percentage?</h2>
<p>Thanks to <a href="https://stackoverflow.com/questions/79198230/django-dask-integration-how-to-do-more-with-less?noredirect=1#comment139686650_79198230">@GuillaumeEB for the tip</a>.</p>
<p>So, we know that the following is blocking:</p>
<pre class="lang-py prettyprint-override"><code>client.gather(futures, asynchronous=False)
</code></pre>
<p>But it seems like this also doesn't behave the way I expect:</p>
<pre class="lang-py prettyprint-override"><code>client.gather(futures, asynchronous=True)
</code></pre>
<p>Is there some way that I could use <code>client.persist()</code> or <code>client.compute()</code>, to see incremental progress?</p>
<p>I know that I can't persist a <code>list</code> of <code><class 'distributed.client.Future'></code>, and using <code>client.compute(futures)</code> also seems to behave incorrectly (jumping the progress from <code>0</code> to <code>100</code>).</p>
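<p>For what it's worth, counting finished futures works without gathering at all: Dask futures expose the same <code>done()</code> flag as the standard library's <code>concurrent.futures</code>. A stdlib-only sketch of that counting pattern (a <code>ThreadPoolExecutor</code> stands in for the Dask client, purely for illustration):</p>

```python
import time
from concurrent.futures import ThreadPoolExecutor

def task_function(task):
    time.sleep(0.2)  # stand-in for the real long-running work
    return task

with ThreadPoolExecutor(max_workers=4) as pool:
    futures = [pool.submit(task_function, n) for n in range(8)]
    progress = 0
    # Poll without blocking on results: count the finished futures.
    while not all(f.done() for f in futures):
        progress = int(sum(f.done() for f in futures) / len(futures) * 100)
        time.sleep(0.05)
    results = [f.result() for f in futures]
```

<p>The same <code>sum(f.done() for f in futures)</code> count could replace the hard-coded <code>progress = 0</code> in the <code>"running"</code> branch of <code>get_task_status</code>.</p>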
|
<python><django><concurrency><dask>
|
2024-11-17 21:54:06
| 1
| 2,713
|
blong
|
79,198,203
| 11,594,202
|
How to properly call asynchronous request in Python without aiohttp
|
<p>This may be a silly question, but I designed a small Python script to synchronize two applications, built to run in a HubSpot custom coded action. This environment runs for a maximum of 20 seconds with a maximum 128MB of memory.</p>
<p>To optimize for speed, my code fetches records asynchronously by wrapping GET requests from the <code>requests</code> package in <code>asyncio.to_thread</code>. I think this is not the correct way to fetch requests asynchronously, as it relies on multiple threads (right?).</p>
<p>Normally I would use aiohttp for fetching asynchronously, but the environment only has the regular <code>requests</code> package installed. Can I reproduce the aiohttp functionality without installing the package?</p>
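<p>For reference, the pattern described (wrapping blocking <code>requests.get</code> calls in <code>asyncio.to_thread</code>) looks roughly like this; a sleeping stub stands in for the real HTTP call, since the point is the wrapping, not the request itself:</p>

```python
import asyncio
import time

def blocking_get(url):
    # Stand-in for requests.get(url); any blocking call works the same way.
    time.sleep(0.1)
    return f"response for {url}"

async def fetch_all(urls):
    # Each blocking call runs in the default thread pool,
    # so the calls overlap while the event loop stays free.
    return await asyncio.gather(*(asyncio.to_thread(blocking_get, u) for u in urls))

results = asyncio.run(fetch_all(["a", "b", "c"]))
```

<p>So yes, this is thread-based rather than event-loop-native; a true aiohttp replacement on the stdlib alone would mean speaking HTTP by hand over <code>asyncio.open_connection</code>.</p>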
|
<python><asynchronous><python-requests><get>
|
2024-11-17 21:39:06
| 0
| 920
|
Jeroen Vermunt
|
79,198,199
| 3,486,684
|
How do I stop legends from being merged when vertically concatenating two plots?
|
<p>Consider the following small example (based on this gallery example):</p>
<pre class="lang-py prettyprint-override"><code>import altair as alt
from vega_datasets import data
import polars as pl
# add a column indicating the year associated with each date
source = pl.from_pandas(data.stocks()).with_columns(year=pl.col.date.dt.year())
# an MSFT specific plot
msft_plot = (
alt.Chart(source.filter(pl.col.symbol.eq("MSFT")))
.mark_line()
.encode(x="date:T", y="price:Q", color="year:O")
)
# the original plot: https://altair-viz.github.io/gallery/line_chart_with_points.html
all_plot = (
alt.Chart(source)
.mark_line()
.encode(x="date:T", y="price:Q", color="symbol:N")
)
msft_plot & all_plot
</code></pre>
<p>This produces the following output:</p>
<p><a href="https://i.sstatic.net/xFqgDsbi.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/xFqgDsbi.png" alt="enter image description here" /></a></p>
<p>On the other hand, if I only plot <code>all_plot</code>:</p>
<p><a href="https://i.sstatic.net/LRR7kA6d.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/LRR7kA6d.png" alt="enter image description here" /></a></p>
<p>How do I stop the legends from being merged when I concatenate <code>msft_plot & all_plot</code>?</p>
|
<python><vega-lite><altair>
|
2024-11-17 21:37:11
| 1
| 4,654
|
bzm3r
|
79,198,175
| 893,254
|
Fernet key must be 32 url-safe base64-encoded bytes - How to create a key for use with Fernet?
|
<h4>An example Fernet key</h4>
<p>The following code produces an example Fernet key, and reports that the length is 44.</p>
<pre><code>from cryptography.fernet import Fernet
generated_key = Fernet.generate_key()
print(f'Example generated key: {generated_key}')
print(f'Length of example key: {len(generated_key)}')
</code></pre>
<p>The output is (for example)</p>
<pre><code>Example generated key: b'U4f1fCfXWlz7pQ-7WdZKmCY-VtSAln7R-hhvF6qgYa4='
Length of example key: 44
</code></pre>
<p>A quick look at <a href="https://en.wikipedia.org/wiki/Base64" rel="nofollow noreferrer">this reference</a> tells us that <code>=</code> is a padding character. There are actually 43 characters, if this <code>=</code> symbol is not included.</p>
<pre><code>U4f1fCfXWlz7pQ-7WdZKmCY-VtSAln7R-hhvF6qgYa4
1 2 3 4 5 6 7 8 9 10 11
U4f1 fCfX Wlz7 pQ-7 WdZK mCY- VtSA ln7R -hhv F6qg Ya4
</code></pre>
<p>That is 10 blocks of 4 characters plus 3 characters = 43.</p>
<h4>Generating an example Fernet key</h4>
<p>I am attempting to generate an example key using the following code:</p>
<pre><code>import base64

key = "1234567890123456789012345678901234567890"
print(f'Encryption key is: {key}')
base64_key = base64.b64encode(key.encode('utf-8'))
print(f'Base64 encoded key is: {base64_key}')
print(f'Length of base64 encoded key: {len(base64_key)}')
if len(base64_key) > 43:
base64_key = base64_key[:43]
print(f'Truncating key to length {43}: {base64_key}')
print(f'Length of base64 encoded key is now {len(base64_key)}')
elif len(base64_key) < 43:
print(f'error: key is too short')
raise RuntimeError(f'key too short')
fernet_instance = Fernet(base64_key)
</code></pre>
<p>However this produces the following output and exception:</p>
<pre><code>Encryption key is: 1234567890123456789012345678901234567890
Base64 encoded key is: b'MTIzNDU2Nzg5MDEyMzQ1Njc4OTAxMjM0NTY3ODkwMTIzNDU2Nzg5MA=='
Length of base64 encoded key: 56
Truncating key to length 43: b'MTIzNDU2Nzg5MDEyMzQ1Njc4OTAxMjM0NTY3ODkwMTI'
Length of base64 encoded key is now 43
binascii.Error: Incorrect padding
</code></pre>
<p>Indeed, manually hacking the padding</p>
<pre><code>base64_key = base64_key + b'='
</code></pre>
<p>fixes the problem.</p>
<h4>How long should a Fernet key be?</h4>
<p>The exception produced by my code says</p>
<pre><code>ValueError: Fernet key must be 32 url-safe base64-encoded bytes
</code></pre>
<p>32 bytes = 256 bits. Each base64 encoded character is 6 bits.</p>
<p>256 / 6 = 42.6666... or 43 if you round up. That matches the length of the key produced by <code>Fernet.generate_key()</code>.</p>
<h4>Why does it not work if the key is too short?</h4>
<p>A 43 character key contains enough bits to have 256 bits of encryption. So why does this not work?</p>
<p>It actually has slightly more than 256 bits of information. 43 * 6 = 258</p>
<h4>Why does it not work if the key is exactly the right length?</h4>
<p>A 44 character key contains enough bits to have 256 bits of encryption, and it aligns with a 4 byte boundary. But, it still fails and produces the following exception:</p>
<pre><code>ValueError: Fernet key must be 32 url-safe base64-encoded bytes
</code></pre>
<p>So even if the 4 byte boundary is some odd arbitrary requirement (is it a bug, perhaps?) then this should work, especially given that a key of length 43 does not have exactly 256 bits of information. It has 258.</p>
<h4>Does not work if key too long</h4>
<p>Given the above, this is not surprising.</p>
<p>If you don't truncate the key at all, the same exception is produced.</p>
<h4>So what is the right way to generate this key?</h4>
<p>The logic of truncating to 43 characters and then adding a padding character on the end to increase the length to 44 seems arbitrary.</p>
<p>Is there a better way to generate this kind of key using a password style input?</p>
<hr />
<p>To clarify a couple of things:</p>
<ul>
<li>I am trying to deterministically generate a key across multiple machines from the same passphrase</li>
</ul>
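<p>For the deterministic, passphrase-based case, the usual route is a key-derivation function rather than truncating base64 text: derive exactly 32 raw bytes, then url-safe base64 encode them, which yields the 44-character form Fernet expects. A stdlib-only sketch (the fixed salt here is an assumption needed for cross-machine determinism, and it weakens the derivation accordingly):</p>

```python
import base64
import hashlib

def fernet_key_from_passphrase(passphrase: str, salt: bytes) -> bytes:
    # Derive exactly 32 raw bytes; url-safe base64 of 32 bytes is
    # 43 characters plus one '=' of padding, i.e. the 44-char key.
    raw = hashlib.pbkdf2_hmac("sha256", passphrase.encode(), salt, 480_000, dklen=32)
    return base64.urlsafe_b64encode(raw)

key = fernet_key_from_passphrase("1234567890123456789012345678901234567890", b"shared-salt")
```

<p>The resulting <code>key</code> can be passed straight to <code>Fernet(key)</code>; the same passphrase and salt produce the same key on every machine.</p>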
|
<python><cryptography><base64><fernet>
|
2024-11-17 21:17:38
| 1
| 18,579
|
user2138149
|
79,198,131
| 554,305
|
How can I enable Auto fitting of column widths in the python package openpyxl?
|
<p><em>This is a similar question to <a href="https://stackoverflow.com/questions/71139718/openpyxl-autosize-column-size">Openpyxl autosize column size</a> -and- <a href="https://stackoverflow.com/questions/65115775/python-openpyxl-worksheet-autosizing-column-dimensions-fail">python openpyxl worksheet autosizing column_dimensions - fail</a> -and- <a href="https://stackoverflow.com/questions/60248319/how-to-set-column-width-to-bestfit-in-openpyxl">How to set column width to bestFit in openpyxl</a></em></p>
<p>Has anyone found out how to fit column widths of Excel worksheets using the openpyxl package as of 2024? The <a href="https://openpyxl.readthedocs.io/en/stable/api/openpyxl.worksheet.dimensions.html#openpyxl.worksheet.dimensions.ColumnDimension" rel="nofollow noreferrer">ColumnDimension</a> object, which defines the auto_size and best_fit property, does nothing for me.</p>
<p>Here is what I got so far - I spare you the text extent calculations from 3rd-party GUI frameworks:</p>
<pre><code>ws = Workbook.active # loaded workbook, first sheet
# some column width change, most do not, none are fit correctly
ws.column_dimensions["A"].auto_size = True
# the width can be set manually.
# note: width is measured in approximate character units, not millimeters
ws.column_dimensions["A"].width = 123
</code></pre>
<p>My only strategies at this point are:</p>
<ol>
<li>Calculate the text extent and set the width manually. Unfortunately, I have not found a GetTextExtent() method in openpyxl. I patched in wx.ScreenDC.GetTextExtent from wxPython with the font set (Calibri, size 11), but the calculation does not match either. It did not help that I had to convert pixels to millimeters. How else could I calculate the required width from a text string?</li>
<li>Use the VBA method <a href="https://learn.microsoft.com/en-us/office/vba/api/Excel.Range.AutoFit" rel="nofollow noreferrer">Range.AutoFit</a>. Does openpyxl have a "Range" object comparable to, or directly invoking, the Range interface? I have searched the docs but found only iterable cell-"range" generators.</li>
<li>Ask the developers / volunteers of openpyxl to explain possible approaches to auto fit columns widths, or hope that one of the reads my question here.</li>
</ol>
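<p>For strategy 1, a common shortcut avoids text-extent APIs entirely: openpyxl's width unit is roughly one character of the default font, so the longest cell string per column plus some padding is usually close enough. A sketch (the padding value is a tuning guess, not an openpyxl constant):</p>

```python
def estimate_widths(rows, padding=2):
    """Map 0-based column index to an estimated width in
    openpyxl's approximate character units."""
    widths = {}
    for row in rows:
        for i, cell in enumerate(row):
            length = len(str(cell)) if cell is not None else 0
            widths[i] = max(widths.get(i, 0), length)
    return {i: w + padding for i, w in widths.items()}

widths = estimate_widths([("name", "qty"), ("a very long product name", 3)])
```

<p>Applied to a sheet, this becomes <code>ws.column_dimensions[get_column_letter(i + 1)].width = w</code>, using <code>openpyxl.utils.get_column_letter</code>.</p>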
|
<python><excel><openpyxl>
|
2024-11-17 20:45:00
| 0
| 395
|
BeschBesch
|
79,197,994
| 893,254
|
What are the certchain.pem and private.key files used by Pythons ssl library, are they required, and how can I create them if they are?
|
<h1>Background</h1>
<p>I am writing a Python utility which will allow synchronization of files and folders across a network between a client machine and a server. Synchronization will be unidirectional and initiated by a client push action.</p>
<p>The initial "version 1" for this is quite simple. The server will be in an "always on" (listening) state, and will listen for connections from a client.</p>
<p>The client will be run from the command line. (Hence "push action".) When run, the client will connect to the server, perform some form of authentication (most likely based on private/public key pairs or alternatively username and password) and then send each file in a target directory to the server.</p>
<p>The server will write the received files to a (different) target directory.</p>
<h1>Security</h1>
<p>In order for this to be secure, either the entire session has to be protected by encryption, or the secure data has to be sent encrypted. In this case, the secure data is a private key or username and password, and then the contents of each file.</p>
<h1>The best way to achieve security</h1>
<p>I do not know what the best way to achieve security is. One simple "roll your own" solution would be to pre-agree an encryption key between the client and server, and use this encryption key to encrypt and decrypt data. This could be done in a manual way by simply writing the code to do it.</p>
<p>The most straightforward way would probably be to use a standard encryption/decryption library. A quick search suggests the Python <code>cryptography</code> library is probably the standard go-to solution.</p>
<p>More manual solutions would also be possible, such as performing an xor operation between a 256 bit key and the data. I'm not sure how secure this would be; certainly it would be less secure than using a standard implementation.</p>
<p>However, it seems that a better approach might be to use Python's ssl library, which provides TLS 1.3 on top of the standard sockets library.</p>
<p>I do not know if this will be a suitable solution. The library (like all Python libraries) is quite high level. It is difficult to know what it is doing and how it works from the examples provided in the documentation.</p>
<h1>Python <code>ssl</code></h1>
<p>I started to write an example client-server application in Python which uses the <code>ssl</code> library.</p>
<p>There is one line on the server side code which I do not understand.</p>
<pre><code>context.load_cert_chain('/path/to/certchain.pem', '/path/to/private.key')
</code></pre>
<p>I don't know that much about how certificates work. I know that these two files are used to encrypt the ssl session and to give the client guarantees that the server is who it claims to be. (In other words, it provides a guarantee, somehow, that your traffic has not been intercepted and forwarded to a malicious server which is pretending to be the legitimate one.)</p>
<p>I don't really need the second of these features. Due to the network infrastructure on top of which this client-server application will run, there is no need to perform a check to ensure the server is the server it claims to be. This will all effectively be running over a virtual private network. I probably don't really need to encrypt the data for this reason, but this is one security feature we do want to have, even though debatably it isn't needed.</p>
<ul>
<li>Is there a way to use Python's <code>ssl</code> library to provide encryption but without having to load the certificate and private key files?</li>
</ul>
<p>I decided to provide a lot of detail in this question, because it could be that I am going in completely the wrong direction here. It might be that the <code>ssl</code> library isn't the right thing to use. It just seemed like it might be a convenient solution.</p>
|
<python><ssl><encryption><tls1.3>
|
2024-11-17 19:30:46
| 0
| 18,579
|
user2138149
|
79,197,810
| 12,357,035
|
auto formatting using yapf to put parameters on multiple lines in condensed format
|
<p>Related to <a href="https://stackoverflow.com/questions/65955455/auto-formatting-python-code-to-put-parameters-on-same-line">this</a>. But instead of putting all arguments on a single line, I want to put them onto multiple lines in condensed form.</p>
<p>Basically I want to transform:</p>
<pre><code>def f(arg1: typ1, arg2: typ2, ..., ...) -> typr:
</code></pre>
<p>instead of:</p>
<pre><code>def f(arg1: typ1,
arg2: typ2,
...,
...) -> typr:
</code></pre>
<p>into:</p>
<pre><code>def f(arg1: typ1, arg2: typ2, ...,
...) -> typr:
</code></pre>
<p>I tried setting <code>SPLIT_ALL_COMMA_SEPARATED_VALUES</code> and <code>SPLIT_ALL_TOP_LEVEL_COMMA_SEPARATED_VALUES</code> to false. But it didn't work. Please help.</p>
|
<python><vscode-extensions><code-formatting><yapf>
|
2024-11-17 17:52:58
| 0
| 3,414
|
Sourav Kannantha B
|
79,197,656
| 1,492,229
|
How to reduce the size of Numpy data type
|
<p>I am using Python to do cosine similarity.</p>
<pre><code>similarity_matrix = cosine_similarity(tfidf_matrix)
</code></pre>
<p>The problem is that I am getting this error</p>
<pre><code>MemoryError: Unable to allocate 44.8 GiB for an array with shape (6011226750,) and data type float64
</code></pre>
<p>I don't think I need <strong>float64</strong> for this operation as 2 digits after the decimal point should be enough.</p>
<p>Is there a way I can change the <code>cosine_similarity</code> data type to a smaller one?</p>
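<p>One option is to downcast the input before the call: <code>cosine_similarity</code> generally returns the dtype it is fed, so float32 halves the footprint (to roughly 22.4 GiB here, so chunking may still be needed). The arithmetic is just normalized dot products, sketched in plain NumPy:</p>

```python
import numpy as np

def cosine_similarity_f32(X):
    # Downcast first so every intermediate stays float32,
    # halving memory versus the default float64.
    X = np.asarray(X, dtype=np.float32)
    norms = np.linalg.norm(X, axis=1, keepdims=True)
    Xn = X / np.clip(norms, 1e-12, None)  # guard against zero rows
    return Xn @ Xn.T

sim = cosine_similarity_f32([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
```

<p>With scikit-learn itself, the equivalent move would be <code>cosine_similarity(tfidf_matrix.astype(np.float32))</code>.</p>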
|
<python><numpy><scikit-learn><cosine-similarity>
|
2024-11-17 16:41:49
| 0
| 8,150
|
asmgx
|
79,197,644
| 336,827
|
AKS Python Azure Function - console log without timestamp
|
<p>This is the output of an Azure Function running in a kubernetes pod.</p>
<p>How can I enable the timestamp in my logs?<br />
Can I have one line of log for each message instead of two?</p>
<pre><code>info: Function.name_of_the_function.User[0]
this is the info message from the function...
info: Function.name_of_the_function.User[0]
this is the info message from the function...
info: Function.name_of_the_function.User[0]
this is the info message from the function...
info: Function.name_of_the_function.User[0]
this is the info message from the function...
</code></pre>
<p>Here is my <code>hosts.json</code></p>
<pre><code> "logging": {
"applicationInsights": {
"samplingSettings": {
"isEnabled": false,
"excludedTypes": "Request"
}
},
"console": {
"isEnabled": "false",
"DisableColors": true
},
"fileLoggingMode": "always",
"logLevel": {
"Host": "Warning",
"Function": "Warning",
"Function.solr_catalog": "Warning",
"Function.solr_catalog.User": "Information",
"Azure.Core": "Warning",
"Azure.Messaging": "Warning",
"default": "none"
}
},
</code></pre>
|
<python><azure><azure-functions>
|
2024-11-17 16:34:33
| 1
| 30,887
|
freedev
|
79,197,575
| 759,880
|
Python GIL, multi-threading and atomicity
|
<p>I have read that lists in Python provide atomic operations. I have 2 threads that use a list: one iterates over the list to retrieve regex strings to apply to some objects, and one updates the list at regular intervals by adding and removing elements. If I want the list update to be atomic for the first thread, should I use <code>threading.Lock</code> in both threads?</p>
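<p>The setup in question can be sketched as follows; the conservative pattern is to acquire the same <code>Lock</code> in both threads and to copy the list before iterating, so an update can never happen mid-iteration (names are illustrative):</p>

```python
import threading

patterns = ["foo.*", "bar[0-9]+"]   # regex strings shared by both threads
lock = threading.Lock()

def reader(out):
    for _ in range(100):
        with lock:                   # snapshot under the lock...
            snapshot = list(patterns)
        for p in snapshot:           # ...then iterate outside it
            out.append(p)

def updater():
    for i in range(100):
        with lock:                   # add and remove atomically w.r.t. the reader
            patterns.append(f"p{i}.*")
            patterns.pop(0)

seen = []
t1 = threading.Thread(target=reader, args=(seen,))
t2 = threading.Thread(target=updater)
t1.start(); t2.start(); t1.join(); t2.join()
```

<p>Whether the lock is strictly required given the GIL's per-operation atomicity is exactly the question; the sketch just shows the arrangement being asked about.</p>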
|
<python><multithreading>
|
2024-11-17 15:59:05
| 3
| 4,483
|
ToBeOrNotToBe
|
79,197,289
| 4,061,339
|
sub chapters not shown in ebooklib in python
|
<h1>Objective</h1>
<p>To programmatically create an epub file from text files</p>
<h1>Problem</h1>
<p>Some of the sub chapters are not shown</p>
<h1>Minimal Reproducible Example</h1>
<pre class="lang-py prettyprint-override"><code>from ebooklib import epub
# Create a new EPUB book
book = epub.EpubBook()
# Set metadata
book.set_identifier('id123456')
book.set_title('book1')
book.set_language('en')
book.add_author('John Doe')
# Create a single chapter that includes all content
combined_chapter1 = epub.EpubHtml(title='Chaptor1', file_name='chapters1.xhtml', lang='en')
# Add content with main chapters, sub-chapters, and sub-sub-chapters
combined_chapter1.content = '''
<h1 id="chapter1">Chapter 1: Main Topic</h1>
<p>Introduction to Chapter 1.</p>
<h2 id="chapter1.1">1.1 Sub-Chapter</h2>
<p>Content of sub-chapter 1.1.</p>
<h3 id="chapter1.1.1">1.1.1 Sub-Sub-Chapter</h3>
<p>Detailed content of sub-sub-chapter 1.1.1.</p>
<h2 id="chapter1.2">1.2 Sub-Chapter</h2>
<p>Content of sub-chapter 1.2.</p>
<h3 id="chapter1.2.1">1.2.1 Sub-Sub-Chapter</h3>
<p>Detailed content of sub-sub-chapter 1.2.1.</p>
'''
# Add the combined chapter to the book
book.add_item(combined_chapter1)
# Define Table of Contents with links to all sections
book.toc = (
epub.Link('chapters1.xhtml#chapter1', 'Chapter 1: Main Topic', 'chapter1'),
(
epub.Link('chapters1.xhtml#chapter1.1', '1.1 Sub-Chapter', 'chapter1.1'),
(
(epub.Link('chapters1.xhtml#chapter1.1.1', '1.1.1 Sub-Sub-Chapter', 'chapter1.1.1'),)
),
epub.Link('chapters1.xhtml#chapter1.2', '1.2 Sub-Chapter', 'chapter1.2'),
(
(epub.Link('chapters1.xhtml#chapter1.2.1', '1.2.1 Sub-Sub-Chapter', 'chapter1.2.1'),)
),
),
)
# Add navigation files
book.add_item(epub.EpubNcx())
nav = epub.EpubNav()
book.add_item(nav)
# Set the spine
book.spine = ['nav', combined_chapter1]
# Write the EPUB file
epub.write_epub('my_book_ch1_ch2.epub', book, {})
</code></pre>
<h2>What the Result Looks Like</h2>
<p><a href="https://i.sstatic.net/bZG9gAAU.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/bZG9gAAU.png" alt="enter image description here" /></a><br />
Calibre was used. 1.2 Sub-Chapter and 1.2.1 Sub-Sub-Chapter are missing. They should be on the table of contents.</p>
<h1>What I tried so far</h1>
<p>I googled "ebooklib sub chapter not shown" and checked the first 10 pages in vain.</p>
<h1>Environment</h1>
<ul>
<li>Windows 10</li>
<li>VSCode 1.95.3</li>
<li>python 3.12.4</li>
</ul>
<p>Any assistance would be appreciated.</p>
|
<python><windows><visual-studio-code><epub><ebooklib>
|
2024-11-17 13:26:41
| 1
| 3,094
|
dixhom
|
79,197,024
| 19,270,168
|
Google Forms API raises google.auth.exceptions.RefreshError: 'No access token in response.'
|
<p>I want to create a grading bot for my community's applications through Google Forms and when I try to retrieve a form, I get a <code>RefreshError</code>.</p>
<p>Minimal Reproducible Example:</p>
<pre class="lang-py prettyprint-override"><code># Google Credentials
from google.oauth2 import service_account
SCOPES = ['https://www.googleapis.com/auth/drive', 'https://www.googleapis.com/auth/forms.body'][0]
SERVICE_ACCOUNT_FILE = 'key.json'
credentials = service_account.Credentials.from_service_account_file(SERVICE_ACCOUNT_FILE, scopes=SCOPES)
import googleapiclient.discovery
forms = googleapiclient.discovery.build('forms', 'v1', credentials=credentials)
form_id = '__omitted__'
result = forms.forms().get(formId=form_id).execute()
print(result)
</code></pre>
<p>Log:</p>
<pre class="lang-none prettyprint-override"><code>Traceback (most recent call last):
File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/google/oauth2/_client.py", line 323, in jwt_grant
access_token = response_data["access_token"]
~~~~~~~~~~~~~^^^^^^^^^^^^^^^^
KeyError: 'access_token'
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "__omitted__", line 33, in <module>
result = forms.forms().get(formId=form_id).execute()
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/googleapiclient/_helpers.py", line 130, in positional_wrapper
return wrapped(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^
File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/googleapiclient/http.py", line 923, in execute
resp, content = _retry_request(
^^^^^^^^^^^^^^^
File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/googleapiclient/http.py", line 191, in _retry_request
resp, content = http.request(uri, method, *args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/google_auth_httplib2.py", line 209, in request
self.credentials.before_request(self._request, method, uri, request_headers)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/google/auth/credentials.py", line 156, in before_request
self.refresh(request)
^^^^^^^^^^^^^^^^^^^^^
File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/google/oauth2/service_account.py", line 438, in refresh
access_token, expiry, _ = _client.jwt_grant(
^^^^^^^^^^^^^^^^^^
File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/google/oauth2/_client.py", line 328, in jwt_grant
six.raise_from(new_exc, caught_exc)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "<string>", line 3, in raise_from
google.auth.exceptions.RefreshError: ('No access token in response.', {'id_token': '__omitted__'})
</code></pre>
<p>The service account has a group of <code>Owner</code> with Forms API enabled.</p>
|
<python><google-cloud-platform><google-forms><google-forms-api>
|
2024-11-17 10:53:43
| 1
| 1,196
|
openwld
|
79,196,656
| 19,499,853
|
networkx graph get groups of linked/connected values with multiple values
|
<p>If I use such data</p>
<pre><code>import networkx as nx
G = nx.Graph()
G.add_nodes_from([1, 2, 3, 4, 5, 6, 7])
G.add_edges_from([(1, 2), (1, 3), (2, 4), (5, 6)])
print(list(nx.connected_components(G)))
</code></pre>
<p>Everything works fine.</p>
<p>But what if I need to get connected values from tuples with more than two values, such as the following</p>
<pre><code>import networkx as nx
G = nx.Graph()
G.add_nodes_from([1, 2, 3, 4, 5, 6, 7])
G.add_edges_from([(1, 2), (1, 3, 7), (2, 4, 1, 6), (5, 6)])
print(list(nx.connected_components(G)))
</code></pre>
<p>As you can see, these are not classic two-element edges, and this does not work. What methods can I implement in order to pass such data, so that I get the values grouped by connectivity?</p>
<p>I expect getting arrays with connected values between each other</p>
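<p>One likely reason the second snippet fails: <code>add_edges_from</code> reads a 3-tuple as <code>(u, v, attribute_dict)</code>, and longer tuples are rejected. If each tuple means "all of these are mutually connected", one route is to expand every group into pairwise edges first, pure stdlib:</p>

```python
from itertools import combinations

groups = [(1, 2), (1, 3, 7), (2, 4, 1, 6), (5, 6)]

# Every member of a group is connected to every other member,
# so each group expands to its pairwise combinations.
edges = [pair for group in groups for pair in combinations(group, 2)]
```

<p>Feeding <code>edges</code> to <code>G.add_edges_from(edges)</code> puts the graph back in the classic form, and <code>nx.connected_components(G)</code> works as in the first snippet.</p>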
|
<python><graph><logic><networkx>
|
2024-11-17 07:07:50
| 1
| 309
|
Gerzzog
|
79,196,654
| 2,962,555
|
Replacing the placeholder in a hierarchical config.yaml file with the value in the .env file
|
<p>Right now, I have config.yaml like this</p>
<pre><code>kafka:
bootstrap_servers: "${BOOTSTRAP_SERVERS}"
group: "${GROUP_NAME_1}"
topics:
- name: "${TOPIC_1}"
consumers: "${CONSUMER_NUMBER_FOR_TOPIC_1}"
</code></pre>
<p>And I have Dynaconf working as below:</p>
<pre><code>from dynaconf import Dynaconf
from dynaconf.validator import Validator
settings = Dynaconf(
envvar_prefix="service-a",
settings_files=['config.yaml'],
load_dotenv=True,
dotenv_path='.env',
validators=[
Validator(
"server.port", must_exist=True
)
]
)
</code></pre>
<p>Then, the settings will be used like</p>
<pre><code>def start_kafka_consumers():
topics = settings.kafka.topics
threads = []
for topic in topics:
topic_name = topic['name']
consumer_count = topic['consumers']
logger.info(f"{consumer_count} consumers will be started for topic {topic_name}")
for _ in range(consumer_count):
thread = threading.Thread(target=start_consumer_for_topic, args=(topic_name,))
thread.start()
threads.append(thread)
</code></pre>
<p>The beauty of using the config.yaml file is that I can group some properties with hierarchy. For example, I can have topic "abc" with 1 consumer grouped under topics. And later, I can simply add topic "def" with 2 consumers. And the code will dynamically load and use them.</p>
<p>The reason I want to use the placeholder and an .env file to define the actual value is that I want to run it on my local machine with the corresponding value. Then, later, when it is uploaded to GCP Cloud Run, I can use environment variables (e.g. Secret Manager) to overwrite the value.</p>
<p>However, the ${} placeholder doesn't work as expected: Dynaconf doesn't use the .env file value to replace the placeholder text. Any suggestions? Thanks.</p>
|
<python><configuration><environment-variables><.env><dynaconf>
|
2024-11-17 07:06:35
| 1
| 1,729
|
Laodao
|
79,196,626
| 2,929,914
|
Python Polars recursion
|
<p>I've used Polars for some time now but this is something that often makes me go from Polars DataFrames to native Python calculations. I've spent reasonable time looking for solutions that try to use <a href="https://docs.pola.rs/api/python/stable/reference/expressions/api/polars.Expr.shift.html#polars.Expr.shift" rel="noreferrer">shift()</a>, <a href="https://docs.pola.rs/api/python/stable/reference/expressions/api/polars.Expr.rolling.html#polars.Expr.rolling" rel="noreferrer">rolling()</a>, <a href="https://docs.pola.rs/api/python/stable/reference/dataframe/api/polars.DataFrame.group_by_dynamic.html#polars.DataFrame.group_by_dynamic" rel="noreferrer">group_by_dynamic()</a> and so on, but none was successful.</p>
<h2><strong>Task</strong></h2>
<p>Do a calculation that depends on the previous calculation's result in the same column.</p>
<h2>Example in Excel</h2>
<p>In Excel this is like the most straightforward formula ever... if the "index" is zero I want to return "A", otherwise I want to return the result from the cell above.</p>
<div class="s-table-container"><table class="s-table">
<thead>
<tr>
<th></th>
<th>A</th>
<th>B</th>
<th>C</th>
</tr>
</thead>
<tbody>
<tr>
<td>1</td>
<td>Index</td>
<td>Result</td>
<td>Formula for the "Result" column</td>
</tr>
<tr>
<td>2</td>
<td>0</td>
<td>A</td>
<td>=IF(A2=0;"A";B1)</td>
</tr>
<tr>
<td>3</td>
<td>1</td>
<td>A</td>
<td>=IF(A3=0;"A";B2)</td>
</tr>
</tbody>
</table></div>
<h2>Where is the recursion</h2>
<p>In column "B" the formula refers to the previously calculated values on the same column "B".</p>
<p><a href="https://i.sstatic.net/XIMWbNbc.png" rel="noreferrer"><img src="https://i.sstatic.net/XIMWbNbc.png" alt="enter image description here" /></a></p>
<h2>Copy & Paste Excel's solution to Polars</h2>
<pre><code># Import Polars module.
import polars as pl
# Create the data.
data = {'Index': [0, 1]}
# Create the DataFrame.
df = pl.from_dict(data)
# Add a column to the DataFrame.
df = df.with_columns(
# Tries to reproduce the Excel formula.
Result = pl.when(
pl.col('Index') == 0
).then(
pl.lit('A')
).otherwise(
pl.col('Result')
)
)
</code></pre>
<h2>The issue</h2>
<p>Within the "with_columns()" method the "Result" column cannot be referred to because it doesn't exist in the DataFrame yet. If we try to do so, we get a ColumnNotFoundError:</p>
<p><a href="https://i.sstatic.net/HpgGuTOy.png" rel="noreferrer"><img src="https://i.sstatic.net/HpgGuTOy.png" alt="enter image description here" /></a></p>
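<p>In plain Python the dependency is trivial to express: each row reads the value just computed for the previous row, which is exactly what the expression above cannot reference while the column is still being built:</p>

```python
index = [0, 1, 1, 0, 1]

result = []
for i in index:
    # same rule as the Excel formula: "A" at index 0, else the previous result
    result.append("A" if i == 0 else result[-1])
```

<p>(With this particular rule the answer is always "A", but the point is the reference back to <code>result[-1]</code>.)</p>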
<h2>Question</h2>
<p>Any idea on how can I accomplish such a simple task on Polars?</p>
<p>Thank you,</p>
|
<python><dataframe><python-polars>
|
2024-11-17 06:45:13
| 2
| 705
|
Danilo Setton
|
79,196,539
| 6,463,525
|
Exception has occurred: ValueError in langchain
|
<pre><code>def standardize_column_names(df, target_columns):
"""
Uses a language model to standardize column names dynamically based on semantic similarity.
Args:
df (pd.DataFrame): The DataFrame whose columns need standardization.
target_columns (list): A list of target column names to map the DataFrame's columns to.
Returns:
pd.DataFrame: DataFrame with standardized column names.
"""
raw_columns = list(df.columns) # Extract the raw column names
raw_columns_str = ", ".join(raw_columns) # Convert to a comma-separated string
target_columns_str = ", ".join(target_columns) # Convert target columns to a string
# Define the LLM prompt
prompt = PromptTemplate(
input_variables=["raw_columns", "target_columns"], # Match keys exactly with dictionary passed to `invoke`
template=(
"You are tasked with standardizing column names. Here are the raw column names:\n"
"{raw_columns}\n"
"And here is the list of target column names to map to:\n"
"{target_columns}\n"
"Provide a mapping of raw column names to target column names as a dictionary in this format:\n"
"{'raw_column': 'target_column', ...}"
),
)
# Initialize LLMChain
chain = LLMChain(llm=llm, prompt=prompt)
try:
# Use `invoke` with correctly matched keys
response = chain.invoke({"raw_columns": raw_columns_str, "target_columns": target_columns_str})
mapping_result = response["text"] # Extract the LLM's generated text
column_mapping = eval(mapping_result) # Convert the string response into a Python dictionary
except Exception as e:
raise ValueError(f"Error in LLM-based column mapping: {e}")
# Apply the generated mapping to rename columns
df.rename(columns=column_mapping, inplace=True)
return df
</code></pre>
<p>The above code produces this error:</p>
<pre><code>Exception has occurred: ValueError
Error in LLM-based column mapping: Missing some input keys: {"'raw_column'"}
File "/Users/pro/Desktop/Technology/Bicycle AI/Data_analysis_AI.py", line 57, in standardize_column_names
response = chain.invoke({"raw_columns": raw_columns_str, "target_columns": target_columns_str})
</code></pre>
<p>I don't know why the mapping is failing like that.</p>
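<p>My current suspicion (an assumption on my part): the literal braces in the dictionary example inside the template are parsed as a format-style placeholder named 'raw_column'. A minimal plain-format-string reproduction:</p>

```python
# The dictionary example contains literal braces; format-style templates
# treat "{'raw_column': ...}" as a placeholder whose name is 'raw_column'.
template = "Provide a mapping in this format:\n{'raw_column': 'target_column', ...}"
try:
    template.format(raw_columns="a, b")
except KeyError as e:
    print("KeyError:", e)  # KeyError: "'raw_column'"

# Doubling the braces escapes them, so the text passes through literally:
escaped = "Provide a mapping in this format:\n{{'raw_column': 'target_column', ...}}"
print(escaped.format())
```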
|
<python><langchain>
|
2024-11-17 05:16:26
| 0
| 1,203
|
kvk30
|
79,196,217
| 9,632,470
|
Using BeautifulSoup to extract Titles from Text Box
|
<p>I am attempting to write code using Beautiful Soup that prints the text of the links in the left-hand gray box on <a href="https://www.mountainproject.com/area/110928184/stuart-enchantments" rel="nofollow noreferrer">this</a> webpage. In this case the code should return:</p>
<pre><code>** Enchantments Bouldering
Aasgard Sentinel
Argonaut Peak
Cashmere Mountain
Colchuck Balanced Rock
Colchuck Peak
Crystal Lake Tower
Dragontail Peak
Flagpole, The
Headlight Basin
Ingalls Peak
Jabberwocky Tower
Mt Stuart
Nightmare Needles
Prusik Peak
Rat Creek Spires
Sherpa Peak
Stuart Lake Basin
Viviane Campsite
Witches Tower
</code></pre>
<p>I am trying to generalize the wonderful answers to <a href="https://stackoverflow.com/questions/79129809/use-beautiful-soup-to-count-title-links/79130175#79130175">this</a> very similar question, but when inspecting the source for my new web page, I cannot find a table being used, and can't decipher which container to reference in the following lines of code:</p>
<pre><code>table = soup.find(lambda tag: tag.name=='???' and tag.has_attr('??') and tag['id']=="???")
rows = table.findAll(lambda tag: tag.name=='a')
</code></pre>
|
<python><html><beautifulsoup>
|
2024-11-16 23:05:57
| 2
| 441
|
Prince M
|
79,196,138
| 1,700,890
|
How to import dbutils module in Python on Databricks
|
<p>In a Databricks Python notebook I can easily use the <code>dbutils</code> module.
Now I would also like to use it within a plain Python file which I import into a Databricks notebook.</p>
<p>Here is an example.</p>
<p>Here is content of some_python_module.py</p>
<pre><code>secret_value = dbutils.secrets.get("some_location", "some_secret")
</code></pre>
<p>Later on I am importing it in Databricks notebook</p>
<pre><code>import some_python_module
</code></pre>
<p>But I get error message: <code>NameError: name 'dbutils' is not defined</code></p>
<p>I tried to add import statement into my some_python_module.py</p>
<pre><code>import dbutils
</code></pre>
<p>but it returns: <code>ModuleNotFoundError: No module named 'dbutils'</code></p>
<p>Also, <code>dbutils.secrets.get("some_location", "some_secret")</code> works fine in a Databricks notebook.</p>
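<p>A pattern I've seen suggested elsewhere (I haven't verified it, so treat <code>pyspark.dbutils.DBUtils</code> and the IPython fallback as assumptions) is to construct the handle inside the module rather than relying on the notebook global:</p>

```python
def get_dbutils(spark):
    """Return a dbutils handle usable from a plain module (sketch)."""
    try:
        # Assumption: this import is available on Databricks clusters.
        from pyspark.dbutils import DBUtils
        return DBUtils(spark)
    except ImportError:
        # Fallback: pull the notebook's global `dbutils` via IPython.
        import IPython
        return IPython.get_ipython().user_ns["dbutils"]
```

The module would then call <code>get_dbutils(spark).secrets.get(...)</code> instead of referencing a bare <code>dbutils</code> name.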
|
<python><azure><databricks><dbutils>
|
2024-11-16 22:07:33
| 2
| 7,802
|
user1700890
|
79,195,973
| 15,086,628
|
How to access unknown fields in python protobuf version 5.28.3 with upb backend
|
<p>I'm using Python protobuf package version <code>5.28.3</code> for deserializing some packets, and I need to check whether the messages I deserialize conform to a specific protobuf message structure. For some checks I want to obtain the list of unknown fields.</p>
<p><a href="https://github.com/protocolbuffers/protobuf/issues/4281#issuecomment-419253577" rel="nofollow noreferrer">This post</a> points to an API <code>UnknownFields()</code> supported by messages, but when I call it in a deserialized message it raises <code>NotImplementedError</code>.</p>
<p>How can I get access to the list of unknown fields from a deserialized message in <a href="https://pypi.org/project/protobuf/" rel="nofollow noreferrer"><code>protobuf 5.28.3</code></a>?</p>
|
<python><protocol-buffers>
|
2024-11-16 20:09:22
| 1
| 395
|
V.Lorz
|
79,195,896
| 405,017
|
Correlate columns in two pandas dataframes with varying data types
|
<p>I have two Excel worksheets, one of which ("edit") is a slightly modified version of the other ("base"). I want to figure out if any columns have been added, deleted, or moved. I have loaded the worksheets into dataframes and tried to correlate the two frames, but I get an unhelpful error, which I assume is due to being lax about checking cell value types.</p>
<pre class="lang-py prettyprint-override"><code>base = pd.read_excel(base_path, engine="openpyxl", sheet_name=name, header=None)
edit = pd.read_excel(edit_path, engine="openpyxl", sheet_name=name, header=None)
print(base.to_string())
#=> 0 1 2 3 4 5 6 7
#=> 0 NaN snip blip twig zorp plum glim frap
#=> 1 qux 10 10 9 11 9 10 10
#=> 2 baz 20 18 19 20 20 20 18
#=> 3 bat 12 11 12 11 11 12 12
#=> 4 zot 15 15 16 14 16 14 14
#=> 5 wib 11 11 9 9 10 10 11
#=> 6 fiz 16 16 18 17 18 18 16
#=> 7 woz 19 18 17 19 17 18 17
#=> 8 lug 13 12 12 12 11 12 13
#=> 9 vim 13 14 12 14 12 13 13
#=> 10 nub 18 17 18 16 16 17 18
#=> 11 sums 147 142 142 143 140 144 142
print(edit.to_string())
#=> 0 1 2 3 4 5 6 7 8 9 10 11
#=> 0 0.7 snip blip twig zorp plum glim2 glim frap NaN NaN NaN
#=> 1 qux 10 10 9 11 9 10 10 10 NaN NaN NaN
#=> 2 baz 20 18 19 20 20 21 20 18 NaN NaN 1.2
#=> 3 bat 12 11 12 11 11 12 12 12 NaN NaN NaN
#=> 4 zot 15 15 16 14 16 17 14 14 NaN NaN NaN
#=> 5 wib 11 11 9 9 61.6 10 10 11 NaN NaN NaN
#=> 6 fiz 16 16 18 17 18 18 19 16 NaN NaN NaN
#=> 7 woz 19 18 17 19 17 18 18 17 NaN NaN NaN
#=> 8 lug 13 12 12 12 11 12 12 13 NaN NaN NaN
#=> 9 vim 13 14 12 14 12 13 13 13 NaN NaN NaN
#=> 10 nub 18 17 18 16 16 17 17 18 NaN NaN NaN
#=> 11 sums 147 131 142 150 191.6 148 145 142 NaN NaN NaN
corr = base.corrwith(edit, axis=0)
</code></pre>
<p>Gives this error:</p>
<pre class="lang-py prettyprint-override"><code>Traceback (most recent call last):
File "/Users/phrogz/xlsxdiff/tmp.py", line 18, in <module>
corr = base.corrwith(edit, axis=0)
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/phrogz/.venv/lib/python3.11/site-packages/pandas/core/frame.py", line 11311, in corrwith
ldem = left - left.mean(numeric_only=numeric_only)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/phrogz/.venv/lib/python3.11/site-packages/pandas/core/frame.py", line 11693, in mean
result = super().mean(axis, skipna, numeric_only, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/phrogz/.venv/lib/python3.11/site-packages/pandas/core/generic.py", line 12420, in mean
return self._stat_function(
^^^^^^^^^^^^^^^^^^^^
File "/Users/phrogz/.venv/lib/python3.11/site-packages/pandas/core/generic.py", line 12377, in _stat_function
return self._reduce(
^^^^^^^^^^^^^
File "/Users/phrogz/.venv/lib/python3.11/site-packages/pandas/core/frame.py", line 11562, in _reduce
res = df._mgr.reduce(blk_func)
^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/phrogz/.venv/lib/python3.11/site-packages/pandas/core/internals/managers.py", line 1500, in reduce
nbs = blk.reduce(func)
^^^^^^^^^^^^^^^^
File "/Users/phrogz/.venv/lib/python3.11/site-packages/pandas/core/internals/blocks.py", line 404, in reduce
result = func(self.values)
^^^^^^^^^^^^^^^^^
File "/Users/phrogz/.venv/lib/python3.11/site-packages/pandas/core/frame.py", line 11481, in blk_func
return op(values, axis=axis, skipna=skipna, **kwds)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/phrogz/.venv/lib/python3.11/site-packages/pandas/core/nanops.py", line 147, in f
result = alt(values, axis=axis, skipna=skipna, **kwds)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/phrogz/.venv/lib/python3.11/site-packages/pandas/core/nanops.py", line 404, in new_func
result = func(values, axis=axis, skipna=skipna, mask=mask, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/phrogz/.venv/lib/python3.11/site-packages/pandas/core/nanops.py", line 719, in nanmean
the_sum = values.sum(axis, dtype=dtype_sum)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/phrogz/.venv/lib/python3.11/site-packages/numpy/_core/_methods.py", line 53, in _sum
return umr_sum(a, axis, dtype, out, keepdims, initial, where)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
TypeError: unsupported operand type(s) for +: 'int' and 'str'
</code></pre>
<p>Is there a way to use dataframe's correlation calculation, or am I going to need to roll my own?</p>
<p><em>FWIW, this test data is simplified. There is not always a single header row of unique strings, there may be many rows representing the header. I cannot use a single row to determine a unique identifier. Moreover, I next want to do the same for rows, where there are (again) possibly many columns used as row headers. And, as shown, the cell values may have changed slightly.</em></p>
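<p>One direction I've been considering (unsure whether it's sound): coerce every cell to numeric first, so the strings become NaN, then correlate. A toy sketch with made-up data:</p>

```python
import pandas as pd

# Toy frames mimicking the structure: a header-ish row of strings on top
# of numeric data. pd.to_numeric turns the strings into NaN, so corrwith
# only ever sees numbers and NaN rows are skipped pairwise.
base = pd.DataFrame({0: ["snip", 10, 20, 12], 1: ["blip", 10, 18, 11]})
edit = pd.DataFrame({0: ["snip", 10, 21, 12], 1: ["blip", 11, 18, 12]})

base_num = base.apply(pd.to_numeric, errors="coerce")
edit_num = edit.apply(pd.to_numeric, errors="coerce")
corr = base_num.corrwith(edit_num)
print(corr)
```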
|
<python><pandas><dataframe><spreadsheet><correlation>
|
2024-11-16 19:22:23
| 2
| 304,256
|
Phrogz
|
79,195,851
| 662,967
|
How can I find the last occurrence of a dot not preceded by a backslash?
|
<p>I'm trying to create a regex to find the last dot in a string not preceded by a backslash.</p>
<pre><code>r = MyLine.Text.Swap\ Numbered\ and\ Unnumbered\ List.From\ -\ -\ -\ to\ Numbered\ list\ 1\.\ 2\.\ 3\.\
</code></pre>
<p>What I want to find as a match is "<code>From\ -\ -\ -\ to\ Numbered\ list\ 1\.\ 2\.\ 3\.\</code>"</p>
<p>I tried to reverse the string, but that didn't work either:
<code>re.findall(".*\\.(?!\\\)", r[::-1])</code></p>
<p>What did I do wrong?</p>
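<p>For reference, one direction I've been experimenting with: a greedy prefix plus a negative lookbehind, so the dot that gets matched is the last one not preceded by a backslash:</p>

```python
import re

r = ("MyLine.Text.Swap\\ Numbered\\ and\\ Unnumbered\\ List"
     ".From\\ -\\ -\\ -\\ to\\ Numbered\\ list\\ 1\\.\\ 2\\.\\ 3\\.\\")
# Greedy .* backtracks from the end of the string, so the
# lookbehind-guarded dot is the LAST "." not preceded by a backslash
# (the one right after "List").
m = re.match(r".*(?<!\\)\.(.*)", r)
print(m.group(1))
```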
|
<python><python-3.x><regex>
|
2024-11-16 18:50:24
| 1
| 8,199
|
Reman
|
79,195,840
| 6,843,153
|
Pylance failing to resolve import of libraries in a devcontainer in Linux
|
<p>I have a Python project in <strong>Ubuntu 24.04.1 LTS</strong> and I have a DevContainer in VSC with <strong>Debian GNU/Linux 11</strong>, the problem is that <strong>Pylance</strong> is flagging <code>import streamlit as st</code> with <code>Import "streamlit" could not be resolved</code>. If I run the application from the terminal with <code>streamlit run myfile.py</code>, it runs perfectly, but launching the debugger raises this exception:</p>
<p><strong>EDIT</strong> I reset the Docker image, and the exception changed to a simpler one:</p>
<pre><code>/usr/bin/python3: No module named streamlit
</code></pre>
<p>This is my <code>devcontainer.json</code>:</p>
<pre><code>{
"build": {"dockerfile": "Dockerfile"},
"customizations": {
"vscode": {
"settings": {},
"extensions": [
"ms-python.python",
"ms-python.vscode-pylance"
]
},
"forwardPorts": [8501],
"runArgs": ["--env-file",".devcontainer/devcontainer.env"]
}
}
</code></pre>
<p>This is my <code>Dockerfile</code>:</p>
<pre><code>FROM python:3.10-bullseye
COPY requirements.txt ./requirements.txt
RUN pip install oscrypto@git+https://github.com/wbond/oscrypto.git@d5f3437ed24257895ae1edd9e503cfb352e635a8
# COPY src ./src
# WORKDIR /src
RUN pip install --no-cache-dir -r requirements.txt
ENV PYTHONPATH=/workspaces/my_project/src
EXPOSE 8001
CMD ["streamlit", "run", "view/frontend/main.py"]
</code></pre>
<p>And this is my <code>launch.json</code>:</p>
<pre><code>{
"version": "0.2.0",
"configurations": [
{
"name": "Python: Current File",
"type": "debugpy",
"request": "launch",
"program": "${file}",
"console": "integratedTerminal",
"justMyCode": true,
"env": {"PYTHONPATH": "${workspaceFolder}/src"}
},
{
"name": "Python:Streamlit",
"type": "debugpy",
"request": "launch",
"module": "streamlit",
"args": [
"run",
"${file}",
"--server.port",
"8501",
"--server.fileWatcherType",
"poll",
"--server.address",
"0.0.0.0"
],
"cwd": "${workspaceFolder}/src",
"env": {
"PYTHONPATH": "${workspaceFolder}/src",
"PYTHONHOME": "/usr/local/bin"
}
}
]
}
</code></pre>
<p>I have tried a lot of configurations, but I failed to find one that works.</p>
<p><strong>EDIT</strong>
I have Pylance and Streamlit working together on another computer with a VSC Dev Container on Windows, and they work perfectly.</p>
<p>This is what I get with <code>python -m site</code>:</p>
<pre><code>sys.path = [
'/workspaces/sacbeh',
'/workspaces/sacbeh/src',
'/usr/local/lib/python310.zip',
'/usr/local/lib/python3.10',
'/usr/local/lib/python3.10/lib-dynload',
'/usr/local/lib/python3.10/site-packages',
]
USER_BASE: '/root/.local' (doesn't exist)
USER_SITE: '/root/.local/lib/python3.10/site-packages' (doesn't exist)
ENABLE_USER_SITE: True
</code></pre>
|
<python><docker><ubuntu><containers><pylance>
|
2024-11-16 18:41:03
| 1
| 5,505
|
HuLu ViCa
|
79,195,787
| 8,190,068
|
How do I make an Accordion layout which allows more than one item to be expanded and also scrolls when the items are many?
|
<p>I currently have an Accordion layout which looks like this:</p>
<p><a href="https://i.sstatic.net/WxTTp3Aw.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/WxTTp3Aw.png" alt="current accordion layout" /></a></p>
<p><em>Note: the code for this can be seen here: <a href="https://stackoverflow.com/questions/79179586">How do I position buttons in a vertical box layout?</a></em></p>
<p>In this Accordion, items are able to be expanded only one at a time. And the item which is expanded is given all of the available space, no matter how much it needs. If I add more items, the expanded one may not get enough space to display all of its content.</p>
<p>However, what I would like is an Accordion which allows for more than one item to be expanded at a time, like these samples:</p>
<p><a href="https://i.sstatic.net/oT7yMqUA.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/oT7yMqUA.png" alt="accordion sample one" /></a></p>
<p><a href="https://i.sstatic.net/2fT3zcrM.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/2fT3zcrM.png" alt="accordion sample two" /></a></p>
<p>I would even like the user to be able to expand all of them at once or none of them.</p>
<p>And since the user will be adding these accordion items at run time, I would like to have the possibility of the accordion to extend beyond the bottom of the window, with a scroll bar on the right to allow all of the items to be seen, like this:</p>
<p><a href="https://i.sstatic.net/MtoLTGpB.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/MtoLTGpB.png" alt="accordion sample three" /></a></p>
<p>Here, the scroll bar is part of the browser window, so it is provided outside the accordion.</p>
<p><strong>Is there a way to customize the Kivy Accordion to...</strong></p>
<ul>
<li><strong>allow for multiple items to be expanded or none at all?</strong></li>
<li><strong>allow for items to use exactly as much space as they need, no more, no less?</strong></li>
<li><strong>allow for many items, with a scroll bar to see them all?</strong></li>
</ul>
<p>Or do I need to create my own custom accordion widget?</p>
|
<python><accordion><kivy-language>
|
2024-11-16 18:10:08
| 0
| 424
|
Todd Hoatson
|
79,195,579
| 16,037,313
|
How to generate a matrix in Python with alternating 0s and 1s
|
<p>I want to create an 8×8 matrix of 0s and 1s, using the NumPy package, with the values alternating.</p>
<p>This means:</p>
<ul>
<li>Along each row, the values alternate between 0 and 1.</li>
<li>Along each column, the values also alternate between 0 and 1.</li>
</ul>
<p>I want something like this:</p>
<pre><code>np.random.binomial(n=1, p=0.5, size=[64])
# This is the expected result, but the code doesn't arrange the matrix this way
[[0 1 0 1 0 1 0 1]
[1 0 1 0 1 0 1 0]
[0 1 0 1 0 1 0 1]
[1 0 1 0 1 0 1 0]
[0 1 0 1 0 1 0 1]
[1 0 1 0 1 0 1 0]
[0 1 0 1 0 1 0 1]
[1 0 1 0 1 0 1 0]]
#WHAT I DON'T WANT BUT IT IS QUITE SIMILAR
[[1 1 0 1 1 1 0 1]
[0 0 1 0 1 0 1 0]
[0 0 0 1 0 1 0 1]
[0 0 1 0 1 0 1 0]
[0 1 0 1 0 1 0 1]
[0 0 1 0 1 0 1 0]
[0 1 0 1 0 1 0 1]
[1 0 1 0 1 0 1 0]]
</code></pre>
<p>The result is balanced, but the 0s and 1s do not alternate.</p>
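<p>For reference, a parity-based construction seems to describe exactly what I want (each cell is (row + column) mod 2), sketched here:</p>

```python
import numpy as np

# Checkerboard sketch: a cell is 1 exactly when row + column is odd,
# so values alternate along every row and every column.
i, j = np.indices((8, 8))
board = (i + j) % 2
print(board)
```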
|
<python><numpy>
|
2024-11-16 16:31:18
| 3
| 410
|
Javier Hernando
|
79,195,542
| 7,090,501
|
Selenium cannot retrieve url when running in Google Colab
|
<p>I built a small web scraper that has run successfully in Google Colab over the last few months. It downloads a set of billing codes from the CMS website. Recently the driver started throwing timeout exceptions when retrieving some, but not all, URLs. The reprex below downloads a file from two URLs. It executes successfully when I run it locally, but it fails when trying to retrieve the second URL in Google Colab.</p>
<p>The timeout happens in <code>driver.get(url)</code>. Strangely, the code works as long as the driver has not previously visited another URL. For example, in the code below, <code>not_working_url</code> will successfully retrieve the webpage and download the file if it does not come after <code>working_url</code>.</p>
<pre class="lang-py prettyprint-override"><code>from selenium import webdriver
from selenium.common.exceptions import TimeoutException
from selenium.webdriver.chrome.options import Options
from selenium.webdriver.common.by import By
from selenium.webdriver.support import expected_conditions as EC
from selenium.webdriver.support.wait import WebDriverWait
def download_documents() -> None:
"""Download billing code documents from CMS"""
chrome_options = Options()
chrome_options.add_argument("--headless")
chrome_options.add_argument("--no-sandbox")
chrome_options.add_argument("--disable-dev-shm-usage")
driver = webdriver.Chrome(options=chrome_options)
working_url = "https://www.cms.gov/medicare-coverage-database/view/article.aspx?articleid=59626&ver=6"
not_working_url = "https://www.cms.gov/medicare-coverage-database/view/lcd.aspx?lcdid=36377&ver=19"
for row in [working_url, not_working_url]:
print(f"Retrieving from {row}...")
driver.get(row) # Fails on second url
print("Wait for webdriver...")
wait = WebDriverWait(driver, 2)
print("Attempting license accept...")
# Accept license
try:
wait.until(EC.element_to_be_clickable((By.ID, "btnAcceptLicense"))).click()
except TimeoutException:
pass
wait = WebDriverWait(driver, 4)
print("Attempting pop up close...")
# Click on Close button of the second pop-up
try:
wait.until(
EC.element_to_be_clickable(
(
By.XPATH,
"//button[@data-page-action='Clicked the Tracking Sheet Close button.']",
)
)
).click()
except TimeoutException:
pass
print("Attempting download...")
driver.find_element(By.ID, "btnDownload").click()
download_documents()
</code></pre>
<p>Expected behavior: The code above runs successfully in Google Colab, just like it does locally.</p>
<p>A potentially related issue: <a href="https://stackoverflow.com/questions/73507343/selenium-timeoutexception-in-google-colab">Selenium TimeoutException in Google Colab</a></p>
|
<python><selenium-webdriver><selenium-chromedriver><google-colaboratory><google-notebook>
|
2024-11-16 16:13:20
| 2
| 333
|
Marshall K
|
79,195,406
| 2,386,113
|
Scatter plot in python with x/y-ticks on a haircross?
|
<p>I want to create a scatter plot with a hair-cross (vertical and horizontal line) at the centre. The x-ticks and y-ticks also need to be shown on this hair-cross. How can I do that?</p>
<p><strong>Sample Required Plot:</strong>
<a href="https://i.sstatic.net/MkNkI2pBm.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/MkNkI2pBm.png" alt="enter image description here" /></a></p>
<p>The scatter plot that I created looks like below though:</p>
<p><a href="https://i.sstatic.net/kZaOS5Qb.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/kZaOS5Qb.png" alt="enter image description here" /></a></p>
<p><strong>Sample Code:</strong></p>
<pre><code>import matplotlib.pyplot as plt
import numpy as np
# Generate random data points
np.random.seed(42) # For reproducibility
x = np.random.uniform(-5, 5, 100)
y = np.random.uniform(-5, 5, 100)
# Create a figure and axis
fig, ax = plt.subplots()
fig.patch.set_facecolor('white') # Set the figure background to white
ax.set_facecolor('white') # Set the plot area background to white
# Scatter plot
ax.scatter(x, y, color='blue', label='Scatter Points')
# Set axis limits and labels
ax.set_xlim(-5.5, 5.5) # Extend slightly to fit the circle
ax.set_ylim(-5.5, 5.5)
ax.set_xlabel('X-axis', color='black')
ax.set_ylabel('Y-axis', color='black')
# Add lines to split the plot into quadrants
ax.axhline(0, color='black', linewidth=1) # Horizontal line
ax.axvline(0, color='black', linewidth=1) # Vertical line
# Set tick colors
ax.tick_params(axis='x', colors='black')
ax.tick_params(axis='y', colors='black')
# Add legend
legend = ax.legend()
plt.setp(legend.get_texts(), color='black') # Set legend text color to black
# Show the plot
plt.show()
</code></pre>
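<p>One idea I want to try (based on my reading of the spines API; treat this as an untested assumption on my part) is moving the left/bottom spines to zero so the ticks sit directly on the hair-cross:</p>

```python
import matplotlib
matplotlib.use("Agg")  # headless backend so the sketch runs anywhere
import matplotlib.pyplot as plt

x = [-4.0, -2.5, -1.0, 1.5, 3.0, 4.5]
y = [2.0, -3.5, 1.0, -1.5, 4.0, -2.0]

fig, ax = plt.subplots()
ax.scatter(x, y, color="blue")
ax.set_xlim(-5.5, 5.5)
ax.set_ylim(-5.5, 5.5)

# Move the left/bottom spines to data coordinate 0, so the ticks and
# tick labels sit on the hair-cross, and hide the outer box.
ax.spines["left"].set_position("zero")
ax.spines["bottom"].set_position("zero")
ax.spines["top"].set_visible(False)
ax.spines["right"].set_visible(False)
```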
|
<python><matplotlib><plot><figure>
|
2024-11-16 15:00:20
| 0
| 5,777
|
skm
|
79,195,340
| 3,885,446
|
Jupyter notebook not displaying properly
|
<p>This question is very similar to <a href="https://stackoverflow.com/questions/78173935/jupyter-notebook-not-displaying-correctly-after-reinstall">Jupyter Notebook not displaying correctly after reinstall</a> which does not have a satisfactory answer.</p>
<p>A few months ago I was fiddling around with the preferences of Jupyter Notebook using guidance from Copilot! All I managed to do was change the look of the pages. The following is an example.</p>
<p><a href="https://i.sstatic.net/o5dxNOA4.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/o5dxNOA4.png" alt="Part of the file menu" /></a></p>
<p>Further changes under the guidance of Copilot broke the whole thing, so I had to uninstall Anaconda and reinstall. Everything now works, but the look of the pages remains as above.
I have finally got around to trying to sort it out, without success. I assume there is a CSS file causing this, but I cannot find it.
The things I have tried are:</p>
<ul>
<li>changed browser</li>
<li>looked at the page using developer tools. The CSS is supposed to reside in ~\custom\custom.css. I have found 2 files matching that description in the Anaconda3 tree, but both are empty.</li>
<li>before reinstalling I deleted the whole anaconda3 directory, so I think it must be coming from somewhere else.</li>
</ul>
<p>Does anybody have any ideas?</p>
|
<python><css><jupyter-notebook>
|
2024-11-16 14:23:24
| 0
| 575
|
Alan Johnstone
|
79,195,338
| 2,685,402
|
How do I attach a python debugger to a Gradio UI running on IntelliJ Jupyter?
|
<p>I am running Gradio in an IntelliJ Jupyter plugin editor window. I have set a breakpoint in the <code>chat</code> function. I run the following code to start a Gradio interface in debug mode. The breakpoint I created is "Suspend: All", not "Suspend: Thread".
The breakpoints are not being hit.</p>
<p>How do I run Gradio so that my breakpoints are being hit?</p>
<pre><code>gr.ChatInterface(fn=chat).launch(debug=True)
</code></pre>
<p>I am guessing the Gradio interface starts a separate process that is not captured by my debugging session?</p>
|
<python><intellij-idea><jupyter><gradio>
|
2024-11-16 14:22:33
| 0
| 1,550
|
Wojtek
|
79,195,042
| 5,866,580
|
Handling complex parentheses structures to get the expected data
|
<p>We have data from a REST API call stored in an output file that looks as follows:</p>
<p><strong>Sample Input File:</strong></p>
<pre><code>test test123 - test (bla bla1 (On chutti))
test test123 bla12 teeee (Rinku Singh)
balle balle (testagain) (Rohit Sharma)
test test123 test1111 test45345 (Surya) (Virat kohli (Lagaan))
testagain blae kaun hai ye banda (Ranbir kapoor (Lagaan), Milkha Singh (On chutti) (Lagaan))
</code></pre>
<p><strong>Expected Output:</strong></p>
<pre><code>bla bla1
Rinku Singh
Rohit Sharma
Virat kohli
Ranbir kapoor, Milkha Singh
</code></pre>
<p><strong>Conditions to Derive the Expected Output:</strong></p>
<ul>
<li>Always consider the last occurrence of parentheses () in each line. We need to extract the values within this last, outermost pair of parentheses.</li>
<li>Inside the last occurrence of (), extract all values that appear before each occurrence of nested parentheses ().</li>
<li>E.g., in <code>test test123 - test (bla bla1 (On chutti))</code> the last parenthesis group runs from <code>(bla</code> to <code>chutti))</code>, so I need <code>bla bla1</code>, since it comes before the inner <code>(On chutti)</code>. In other words: find the last parenthesis group, and for every nested pair of parentheses inside it, take the text that comes before it. E.g., in the line <code>testagain blae kaun hai ye banda (Ranbir kapoor (Lagaan), Milkha Singh (On chutti) (Lagaan))</code> what is needed is <code>Ranbir kapoor</code> and <code>Milkha Singh</code>.</li>
</ul>
<p><strong>Attempted Regex:</strong>
I tried using the following regular expression on <a href="https://regex101.com/r/3GPjf0/1/" rel="nofollow noreferrer">Working Demo of regex</a>:</p>
<p>Regex:</p>
<pre><code>^(?:^[^(]+\([^)]+\) \(([^(]+)\([^)]+\)\))|[^(]+\(([^(]+)\([^)]+\),\s([^\(]+)\([^)]+\)\s\([^\)]+\)\)|(?:(?:.*?)\((.*?)\(.*?\)\))|(?:[^(]+\(([^)]+)\))$
</code></pre>
<p><strong>The regex I have tried works, but I would like to improve it with the advice of the experts here.</strong></p>
<p><strong>Preferred Languages:</strong> Looking to improve this regex OR a Python, or an <code>awk</code> answer is also ok. I myself will also try to add an <code>awk</code> answer.</p>
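<p>Since a Python answer is also fine, here is the kind of non-regex sketch I'd accept as a baseline (function and variable names are mine): scan parenthesis depth to isolate the last top-level group, then keep the text that precedes each nested group.</p>

```python
import re

def extract(line):
    # Pass 1: find every top-level "(...)" group by tracking depth.
    depth, start, groups = 0, None, []
    for i, ch in enumerate(line):
        if ch == "(":
            if depth == 0:
                start = i
            depth += 1
        elif ch == ")":
            depth -= 1
            if depth == 0:
                groups.append(line[start + 1:i])
    last = groups[-1]  # only the last top-level group matters
    if "(" not in last:
        return last.strip()  # no nested parentheses: keep the whole group
    # Pass 2: keep the text preceding each nested "(...)".
    names = re.findall(r"([^(),]+?)\s*\([^)]*\)", last)
    return ", ".join(n.strip() for n in names if n.strip())
```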
|
<python><regex><awk>
|
2024-11-16 11:20:41
| 9
| 134,567
|
RavinderSingh13
|
79,194,909
| 5,563,616
|
How to share the standard input with a child process in Python?
|
<p>I need to do in Python what this C++ program does:</p>
<pre class="lang-none prettyprint-override"><code>#include <iostream>
#include <unistd.h>
#include <string.h>
#include <sys/wait.h>
#include <stdio.h>
int main() {
char buffer[256];
read(0, buffer, sizeof(buffer));
std::cout << "(1) Parent received: " << buffer << std::endl;
pid_t pid = fork();
if (pid < 0) {
perror("fork");
exit(1);
} else if (pid == 0) { // Child process
// Read from the pipe (parent's input)
read(0, buffer, sizeof(buffer));
std::cout << "(2) Child received: " << buffer << std::endl;
return 0; // Child process finished
} else { // Parent process
wait(NULL); // Wait for the child to finish
}
printf("done with child\n");
read(0, buffer, sizeof(buffer));
std::cout << "(3) Parent received: " << buffer << std::endl;
return 0;
}
</code></pre>
<p>This program first reads from its standard input, then forks a child process which reuses the parent's stdin and reads from it, and then the parent process waits for the child to finish and keeps reading data from the same stdin handle.</p>
<p>How can the same be done in Python?</p>
<p>I've read the documentation of the <code>subprocess.Popen()</code> function but I couldn't find how to share stdin between parent and child processes.</p>
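<p>To clarify what I mean by sharing: the sketch below reads from fd 0 in the parent, lets a child (spawned with stdin left un-redirected, so it inherits the parent's fd 0) read the next chunk, then resumes reading in the parent. The fixed 6-byte reads are only for illustration:</p>

```python
import os
import subprocess
import sys

def demo():
    # Parent reads from fd 0 first.
    first = os.read(0, 6)
    # stdin is NOT redirected (stdin=None), so the child inherits the
    # parent's fd 0 and continues reading where the parent stopped.
    child = subprocess.run(
        [sys.executable, "-c",
         "import os, sys; sys.stdout.write(os.read(0, 6).decode())"],
        stdout=subprocess.PIPE,
    )
    # After the child exits, the parent resumes reading the same stream.
    third = os.read(0, 6)
    return first, child.stdout, third
```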
|
<python>
|
2024-11-16 10:07:37
| 0
| 1,682
|
Jennifer M.
|
79,194,425
| 10,679,609
|
Sampling from joint probability mass function in python
|
<p>I have a non-negative normalized vector <code>p</code>. I would like to sample an index from the index set of the vector. The probability of getting sample <code>k</code> is <code>p[k]</code>. Using the <code>np.random.choice</code> function, I can write the following code.</p>
<pre class="lang-py prettyprint-override"><code>p = [0.2, 0.3, 0.1, 0.3, 0.1]
indices = np.arange(len(p))
k = np.random.choice(indices, p=p)
</code></pre>
<p>My question is, how can I generalize this code to multi-dimensional arrays? For example, given a three-dimensional non-negative normalized I×J×K tensor <code>p = np.random.rand(I,J,K)</code>,
how can I sample the index (i,j,k) with probability <code>p[i,j,k]</code>?</p>
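<p>One idea I've considered (not sure it is the canonical approach): sample a flat index with the flattened probabilities, then map it back to a multi-index with np.unravel_index:</p>

```python
import numpy as np

rng = np.random.default_rng(0)

# Build a random joint pmf over an I x J x K index set.
I, J, K = 2, 3, 4
p = rng.random((I, J, K))
p /= p.sum()  # normalize so the entries sum to 1

# Sample a flat index with the flattened probabilities, then map it
# back to a multi-index (i, j, k).
flat = rng.choice(p.size, p=p.ravel())
i, j, k = np.unravel_index(flat, p.shape)
print(i, j, k)
```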
|
<python><probability><sampling>
|
2024-11-16 02:52:12
| 1
| 694
|
Sakurai.JJ
|
79,194,050
| 569,229
|
How can I stop resource files that are Python source from being compiled by pip?
|
<p>I have a Python project with a flat layout that has some resource files. My <code>pyproject.toml</code> has the following entries:</p>
<pre class="lang-ini prettyprint-override"><code>[tool.setuptools]
packages = [
"linton",
"linton.subcommand",
]
[tool.setuptools.package-data]
linton = ["init-pages/**"]
</code></pre>
<p>and the following layout:</p>
<pre><code>linton
├── argparse_util.py
├── init-pages
│ ├── Author.in.txt
│ ├── body.in.md
│ ├── breadcrumb.in.py
│ ├── email.in.py
│ ├── Email.in.txt
│ ├── index.nancy.html
│ ├── lastmodified.in.py
│ ├── markdown-to-html.in.sh
│ ├── menudirectory.in.py
│ ├── pageinsite.in.py
│ ├── path-to-root.in.py
│ ├── Sample page
│ │ ├── body.in.md
│ │ └── index.nancy.html
│ ├── style.css
│ ├── template.in.html
│ └── Title.in.txt
├── __init__.py
├── __main__.py
├── subcommand
│ ├── init.py
│ ├── publish.py
│ └── serve.py
└── warnings_util.py
</code></pre>
<p>Note the <code>init-pages</code> directory: it contains resource files for a <code>linton init</code> command, which, like many similar tools, creates a template project (in this case, a web site), and copies files into it (the contents of <code>init-pages</code>).</p>
<p>Also note that <code>init-pages</code> contains some <code>.py</code> files.</p>
<p>Packaging this project works fine: the source files and resource files are included as expected by pip.</p>
<p>The surprise to me is that when I <code>pip install</code> this project, pip compiles the <code>.py</code> files in the <code>init-pages</code> directory, although this directory is not included in <code>packages</code> and is only marked as package data.</p>
<p>I am unable to find any documentation about how pip decides which files to byte-compile on installation. I can turn it off globally, but of course I don't want to do that, and it doesn't help my users.</p>
<p>I could also work around the problem by making <code>linton init</code> skip the generated <code>__pycache__</code> directory when copying the resource files into the user's project. But it seems wrong that they exist in the first place, and I would like to prevent this.</p>
|
<python><pip>
|
2024-11-15 21:49:13
| 0
| 756
|
Reuben Thomas
|
79,193,781
| 89,233
|
grpc_tools.protoc compiling grpc with edition = 2023 fails
|
<p>I'm using protoc version 28.3 on macos:</p>
<pre><code>> protoc --version
libprotoc 28.3
</code></pre>
<p>This version can generate gRPC code for TypeScript, Go, and C++, with the appropriate plugins.
It can also generate plain protobuf:</p>
<pre><code>> protoc --experimental_editions -I src/proto3 --python_out=./src/generated/python/revepb src/proto3/*.proto
>
</code></pre>
<p>However, when I try to generate grpc, it fails:</p>
<pre><code>> uv run python3 -m grpc_tools.protoc --experimental_editions -I src/proto3 --python_out=./src/generated/python/revepb src/proto3/*.proto
basetypes.generated.proto: is an editions file, but code generator --python_out hasn't been updated to support editions yet. Please ask the owner of this code generator to add support or switch back to proto2/proto3.
</code></pre>
<p>This makes no sense because --python_out works well in isolation.</p>
<p>The closest I can think of would be that grpc_tools might include its own flavor of protoc, AND that tool hasn't been updated for two years or something.</p>
<p>These are the versions:</p>
<pre><code>+ grpcio==1.67.1
+ grpcio-tools==1.67.1
+ protobuf==5.28.3
+ setuptools==75.5.0
</code></pre>
<p>1.67.1 is the latest released version, from October 28 2024.</p>
<p>What else can I check, version-wise?</p>
<p>I <em>assume</em>, perhaps wrongly, that "use grpc with editions in python" is a mainstream thing that I should expect to work, given that python is a core language inside google -- is this assumption wrong?</p>
<p>What else can I do to make this work?</p>
|
<python><protocol-buffers><grpc>
|
2024-11-15 19:52:44
| 1
| 7,386
|
Jon Watte
|
79,193,735
| 11,594,202
|
Python - Generator not working with next method
|
<p>I created a generator to perform pagination on an api:</p>
<pre><code>def page_helper(req, timeout=5, page=1, **kwargs):
print(f"Page {page}", end="\r")
try:
response = req(params={**kwargs, "page": page})
response = response.json()
except Exception as e:
status = response.status_code
if status == "429":
print(f"Rate limited. Waiting {timeout} seconds.")
time.sleep(timeout)
yield from page_helper(req, page=page, **kwargs)
else:
raise e
else:
if len(response) == kwargs["limit"]:
yield from page_helper(req, page=page + 1, **kwargs)
yield response
</code></pre>
<p>Later I use this generator somewhere like this</p>
<pre><code>batches = page_helper(<some_request>, limit=100)
# get insert and updates per batch
for i, batch in enumerate(batches):
print(f"Batch {i + 1}", end="\r")
insert_batch = []
update_batch = []
# ... process batch
</code></pre>
<p>I want it to fetch each page as a batch and process it before it fetches the next batch. Fetching the batches works perfectly, but it keeps on fetching pages without processing in between.</p>
<p>I tried to check the generator by calling next, and I expect it to only return one batch. However it starts the full iterations immediately:</p>
<pre><code>next(batches) # --> Performs full iteration
next(batches)
next(batches)
next(batches)
</code></pre>
<p>Is there something wrong with my generator function?</p>
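<p>For reference, here is how I would expect a lazy version to behave — an iterative rewrite sketch (the <code>fake_req</code> below is a stand-in I made up to count requests, not my real client):</p>

```python
# An iterative generator yields each page as soon as it is fetched,
# so a single next() triggers exactly one request instead of
# recursing through every page up front.
def page_helper(req, limit, page=1):
    while True:
        response = req(params={"limit": limit, "page": page})
        yield response
        if len(response) < limit:  # a short page means we reached the end
            break
        page += 1

# Fake request that records which pages were actually fetched.
calls = []
def fake_req(params):
    calls.append(params["page"])
    # pages 1 and 2 are full (limit items), page 3 is short
    return list(range(params["limit"] if params["page"] < 3 else 1))

batches = page_helper(fake_req, limit=100)
first = next(batches)   # fetches only page 1
assert calls == [1]
assert len(first) == 100
```

<p>(The recursion in my version sits <em>before</em> the first <code>yield response</code>, which is presumably why all pages end up being fetched up front.)</p>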
|
<python><iterator><generator><yield>
|
2024-11-15 19:35:18
| 2
| 920
|
Jeroen Vermunt
|
79,193,687
| 2,662,743
|
python locust config file for multiple files/classes
|
<p>I'm using locust to do some load testing on our backend service and I have three classes defined in three different .py files</p>
<p>then I'm trying to run locust using this command line</p>
<p>==> <code>locust -f locustfiles/locustfile.py --headless --config ../loadTestConfig.yml</code></p>
<p><strong>locustfile.py</strong></p>
<pre><code>from myClass1 import MyClass1
from myClass2 import MyClass2
from myClass3 import MyClass3
</code></pre>
<p><strong>loadTestConfig.yml</strong></p>
<pre><code> user_classes:
- class: MyClass1
users: 8
spawn_rate: 8
- class: MyClass2
users: 2
spawn_rate: 2
- class: MyClass3
users: 36
spawn_rate: 36
run_time: 2m
host: "https://myhost"
</code></pre>
<p>This gives me the following error when launched, complaining that it doesn't recognize the option <code>class</code> from the yml file.</p>
<pre><code>TestConfig.yml
locust: error: ambiguous option: --=class: MyClass3 could match --locustfile, --config, --help, --version, --worker, --master, --master-host, --master-port
</code></pre>
<p>How can I specify the number of users and spawn rate per class (file)? I'm trying to build my load scenario.</p>
|
<python><locust>
|
2024-11-15 19:12:29
| 0
| 1,767
|
eetawil
|
79,193,647
| 5,563,616
|
Can I prevent "^Z" from being printed when I handle SIGTSTP (Ctrl-Z) in my program?
|
<p>In Linux I need to print the terminal program status when the user presses some keyboard hotkey.</p>
<p>Since <kbd>Ctrl</kbd>+<kbd>T</kbd> isn't available on Linux (it is only available on BSDs), I decided to use <kbd>Ctrl</kbd>+<kbd>Z</kbd> instead.</p>
<p>I handle the SIGTSTP signal in my program, and it works fine.
One side-effect is that the characters <code>^Z</code> appear when the user presses <kbd>Ctrl</kbd>+<kbd>Z</kbd>.</p>
<p>Can the <code>^Z</code> printout be eliminated?</p>
<hr />
<p>As a concrete example, consider the following program:</p>
<pre class="lang-py prettyprint-override"><code>#!/usr/bin/env python
import signal, sys, time
def deliberately_no_stop(*args):
print('Status printout', file=sys.stderr)
signal.signal(signal.SIGTSTP, deliberately_no_stop)
time.sleep(5)
</code></pre>
<p>When run from an interactive console, and pressing <kbd>Ctrl</kbd>+<kbd>Z</kbd> twice, the actual content printed to the terminal is:</p>
<pre class="lang-none prettyprint-override"><code>^ZStatus printout
^ZStatus printout
</code></pre>
<p>...whereas I <em>want</em> it to be:</p>
<pre class="lang-none prettyprint-override"><code>Status printout
Status printout
</code></pre>
<p>How can I get this effect?</p>
|
<python><terminal>
|
2024-11-15 18:57:24
| 2
| 1,682
|
Jennifer M.
|
79,193,384
| 726,730
|
Open cv.VideoCapture(index) - ffmpeg list camera names - How to match?
|
<pre class="lang-py prettyprint-override"><code> def fetch_camera_input_settings(self):
try:
self.database_functions = database_functions
self.camera_input_device_name = database_functions.read_setting("camera_input_device_name")["value"]
self.camera_input_device_number = int(self.database_functions.read_setting("camera_input_device_number")["value"])
self.camera_input_devices = [[0,-1,"Καμία συσκευή κάμερας"]]
self.available_cameras = [{"device_index":-1,"device_name":"Καμία συσκευή κάμερας"}]
# FFmpeg command to list video capture devices on Windows
cmd = ["ffmpeg", "-list_devices", "true", "-f", "dshow", "-i", "dummy"]
result = subprocess.run(cmd, stderr=subprocess.PIPE, stdout=subprocess.PIPE, text=True)
output = result.stderr # FFmpeg sends device listing to stderr
# Updated regular expression to capture both video and audio devices
device_pattern = re.compile(r'\[dshow @ .+?\] "(.*?)" \(video\)')
cameras = device_pattern.findall(output)
counter = 0
for camera in cameras:
counter += 1
self.camera_input_devices.append([counter,counter-1,camera])
self.available_cameras.append({"device_index": counter-1, "device_name": camera})
self.to_emitter.send({"type":"available_devices","devices":self.camera_input_devices,"device_index":self.camera_input_device_number})
except:
error_message = traceback.format_exc()
self.to_emitter.send({"type":"error","error_message":error_message})
</code></pre>
<p>How can I match the camera device names from the ffmpeg output with <code>cv2.VideoCapture</code>, which wants a camera index as input?</p>
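<p>My working assumption (which I'd like confirmed) is that on Windows, <code>cv2.VideoCapture(i, cv2.CAP_DSHOW)</code> enumerates devices in the same order as ffmpeg's dshow listing, so the position of a name in the parsed list would be its index. A stdlib-only sketch of that mapping, run on a made-up stderr sample:</p>

```python
import re

# Sample of "ffmpeg -list_devices true -f dshow -i dummy" stderr
# (the device names here are hypothetical).
sample_stderr = '''
[dshow @ 0000021] "Integrated Webcam" (video)
[dshow @ 0000021] "USB Capture HDMI" (video)
[dshow @ 0000021] "Microphone (Realtek)" (audio)
'''

# Assumption to verify: cv2.VideoCapture(i, cv2.CAP_DSHOW) follows the
# same DirectShow enumeration order, so list position == camera index.
device_pattern = re.compile(r'\[dshow @ .+?\] "(.*?)" \(video\)')
name_to_index = {name: i
                 for i, name in enumerate(device_pattern.findall(sample_stderr))}
```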
|
<python><opencv><ffmpeg><camera>
|
2024-11-15 17:22:12
| 0
| 2,427
|
Chris P
|
79,193,167
| 474,772
|
Subclassing typing.TypedDict
|
<p>I skimmed over <a href="https://peps.python.org/pep-0589/" rel="nofollow noreferrer">PEP-0589</a>, and I am wondering why <code>typing.TypedDict</code> works when subclassing it like this:</p>
<pre><code>class A(TypedDict, total=False):
x: int
y: int
</code></pre>
<p>Specifically, <code>TypedDict</code> is a function that exposes <code>__mro_entries__</code>, which is itself a function that returns a <a href="https://sourcegraph.com/github.com/python/cpython/-/blob/Lib/typing.py?L3338" rel="nofollow noreferrer">metaclass constructor</a>.
There are a few things that I don't understand:</p>
<ol>
<li>You can subclass a function? How does that work?</li>
<li>How is the <code>total</code> kwarg provided to the <code>TypedDict</code> constructor?</li>
</ol>
<p>I would really appreciate someone explaining this magic in detail and providing docs as I couldn't find it in the official docs.</p>
|
<python><metaclass><typeddict>
|
2024-11-15 16:15:41
| 1
| 5,954
|
Mariy
|
79,193,082
| 10,425,150
|
"Invalid_grant" error after using "change_current_realm()" in keycloak-python
|
<p>I have <code>401: b'{"error":"invalid_grant","error_description":"Invalid user credentials"}'</code> error after I switch realm with <code>change_current_realm()</code> function from "master" to "new-sso" realm.</p>
<p>Here is the full code:</p>
<pre><code>from keycloak import KeycloakAdmin
from keycloak import KeycloakOpenIDConnection
server_url = "http://localhost:8080/"
new_sso_relam = "new-sso"
admin_username = 'admin'
admin_password = 'admin'
admin_client = 'admin-cli'
master_realm = "master"
keycloak_connection = KeycloakOpenIDConnection(server_url=server_url,
username=admin_username,
password=admin_password,
client_id=admin_client,
realm_name=master_realm)
keycloak_admin = KeycloakAdmin(connection=keycloak_connection)
keycloak_admin.change_current_realm(new_sso_relam)
user_payload = {"username": "new_user",
"enabled": True}
keycloak_admin.create_user(user_payload, exist_ok=True)
</code></pre>
<p>However, if I call <code>keycloak_admin.get_realm(master_realm)</code> before switching to "new-sso", the code works fine and I can create the user.</p>
<pre><code>keycloak_admin.get_realm(master_realm)
keycloak_admin.change_current_realm(new_sso_relam)
user_payload = {"username": "new_user",
"enabled": True}
keycloak_admin.create_user(user_payload, exist_ok=True)
</code></pre>
<p>I believe the <code>invalid_grant</code> error indicates that after switching realms with <code>change_current_realm()</code>, the credentials I'm using are no longer valid for the new realm. This issue arises because the Keycloak Admin client (in this case, <code>admin-cli</code>) is authenticated against the "master" realm and does not automatically carry over those credentials to the "new-sso" realm.</p>
<p>When I call <code>keycloak_admin.get_realm(master_realm)</code>, it seems to refresh or validate the session, allowing me to switch to the new realm successfully. However, I want to eliminate that extra step.</p>
|
<python><realm><keycloak>
|
2024-11-15 15:47:55
| 2
| 1,051
|
Gооd_Mаn
|
79,192,832
| 10,003,652
|
Is there a difference in rendering Matplotlib image (heatmaps) in Jupyter notebook vs running in Python script (terminal)?
|
<p>I have an NxN symmetric matrix from a <code>csv</code> file that I want to visualize using a heatmap with colorbar. The values consist of 0 to 1, and possibly <code>NaNs</code> as well. I used the following line of code to create a heatmap & save it as a <code>png</code> file with <code>dpi = 300</code>:</p>
<pre><code>def draw_heatmap(...):
vmin = np.nanmin(matrix) # Smallest value in the matrix (ignoring NaNs)
vmax = np.nanmax(matrix) # Largest value in the matrix
# Mask NaN values for coloring
masked_matrix = np.ma.masked_invalid(matrix)
# Define a colormap
cmap = plt.cm.RdBu_r # Red to Blue colormap
cmap.set_bad(color='gray') # Set NaN values to appear in gray
# Plot heatmap
fig, ax = plt.subplots(figsize=(10, 8))
heatmap = ax.imshow(masked_matrix, cmap=cmap, origin='upper', vmin=vmin, vmax=vmax)
....
plt.savefig(output_file, format='png', dpi=300, bbox_inches='tight')
</code></pre>
<p>When I ran <code>draw_heatmap</code> in <code>Jupyter (VSCode)</code> and in the terminal as a <code>.py</code> script, I obtained two completely different heatmaps, not identical ones as I would have expected. These are the differences:</p>
<ol>
<li>The diagonals in the <code>Jupyter</code> version were correctly set to 1 (red), while in the <code>py</code> script, the diagonals were all white.</li>
<li>The <code>Jupyter</code> notebook produced a 24MB png file while the <code>py</code> script produced only a 4.2MB png file.</li>
</ol>
<p>What could be the reason for this, and how can I make my <code>py</code> script return files/images consistent with the output in my <code>Jupyter</code> notebook?</p>
<p>Edited with the <code>png</code> outputs:</p>
<p>Can't post the data here since it's huge (~5GB). Not sure if it's even worth looking. But just some notes, I checked the output in both runs (<code>Jupyter VSCode</code> vs terminal <code>py</code> script) and the matrices are identical. The <code>png</code> output for both are linked here as follows:</p>
<p><code>Jupyter</code> png output (~25MB) using <code>matplotlib=3.1.3</code> with <code>Python 3.6.13</code>: <a href="https://ibb.co/WxjLRFz" rel="nofollow noreferrer">https://ibb.co/WxjLRFz</a></p>
<p>Terminal output (~4MB) using <code>matplotlib=3.9.2</code> with <code>Python 3.9.20</code>: <a href="https://ibb.co/qDWQdf0" rel="nofollow noreferrer">https://ibb.co/qDWQdf0</a></p>
|
<python><matplotlib><visual-studio-code><jupyter-notebook>
|
2024-11-15 14:42:54
| 0
| 416
|
n3lmnTrxx
|
79,192,792
| 10,722,752
|
How to assign scores to each value in pandas columns based on percentile range, getting `Truth value of a Series is ambiguous.` error
|
<p>I need to assign scores to each of the values in many columns of a pandas dataframe, depending on the percentile score range each value falls between.</p>
<p>I have created a function:</p>
<pre><code>import pandas as pd
import numpy as np
def get_percentiles(x, percentile_array):
percentile_array = np.sort(np.array(percentile_array))
    if x < x.quantile(percentile_array[0]):
        return 1
    elif (x >= x.quantile(percentile_array[0])) & (x < x.quantile(percentile_array[1])):
        return 2
    elif (x >= x.quantile(percentile_array[1])) & (x < x.quantile(percentile_array[2])):
        return 3
    elif (x >= x.quantile(percentile_array[2])) & (x < x.quantile(percentile_array[3])):
        return 4
    else:
        return 5
</code></pre>
<p>Sample data:</p>
<pre><code>df = pd.DataFrame({'col1' : [1,10,5,9,15,4],
'col2' : [4,10,15,19,3,2],
'col3' : [10,5,6,9,1,24]})
</code></pre>
<p>When I try to run the function using apply:</p>
<pre><code>percentile_array = [0.05, 0.25, 0.5, 0.75]
df.apply(lambda x : get_percentiles(x, percentile_array), result_type = 'expand')
</code></pre>
<p>I get below error:</p>
<pre><code>Truth value of a Series is ambiguous. Use a.empty, a.bool(), a.item(), a.any() or a.all()
</code></pre>
<p>The expected output is a new dataframe with 3 columns that has scores between 1 and 5, depending on which percentile range each value in each column falls under.</p>
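<p>For reference, the scoring I'm after can be written as a vectorized threshold lookup on one plain array. This is only a sketch of the intended semantics (scores 1–5 split at the four percentiles), not my pandas function:</p>

```python
import numpy as np

# np.searchsorted with side="right" counts how many percentile
# thresholds each value meets or exceeds, which maps directly
# onto scores 1..5.
def score_column(values, percentiles):
    thresholds = np.quantile(values, np.sort(percentiles))
    return np.searchsorted(thresholds, values, side="right") + 1

col = np.array([1, 10, 5, 9, 15, 4])
scores = score_column(col, [0.05, 0.25, 0.5, 0.75])
```

<p>Here <code>side="right"</code> makes a value equal to a threshold fall into the upper bucket, matching the <code>>=</code> in my conditions.</p>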
|
<python><pandas><apply>
|
2024-11-15 14:26:58
| 1
| 11,560
|
Karthik S
|
79,192,572
| 11,062,613
|
How can I abbreviate phrases using Polars built-in methods?
|
<p>I need to abbreviate a series or expression of phrases by extracting the capitalized words and then creating an abbreviation based on their proportional lengths.
Here's what I'm trying to achieve:</p>
<ul>
<li>Extract capitalized words from each phrase.</li>
<li>Calculate proportional lengths based on the total length of the capitalized words in each phrase.</li>
<li>Adjust the lengths to ensure the abbreviation meets a target length (e.g., 4 characters).</li>
</ul>
<p>TODO: If the abbreviation results in duplicates, I need to either:</p>
<ul>
<li>Automatically resolve them (e.g., by adding numbers or modifying characters)</li>
<li>Flag them with a warning.</li>
</ul>
<p>Currently, I’m using a Python function and mapping it across a Polars Series.
Is there a more efficient way to do this using Polars' built-in methods?</p>
<p>Here's my current approach:</p>
<pre><code>import polars as pl
def _abbreviate_phrase(phrase: str, length: int) -> str:
"""Abbreviate a single phrase by a constant length.
The function aims to abbreviate phrases into a constant length by focusing
on capitalized words and adjusting them according to their proportional lengths.
Example:
phrase = 'Commercial & Professional'
length = 4
res = _abbreviate_phrase(phrase, length)
print(res)
# CoPr
"""
# determine size of slices
capitalized_words = [word for word in phrase.split(' ') if word[0].isupper()]
word_lengths = [len(word) for word in capitalized_words]
total_word_length = sum(word_lengths)
if total_word_length == 0:
return '' # Return empty if no capitalized words
proportional_lengths = [round(wl / total_word_length * length) for wl in word_lengths]
total_proportional_length = sum(proportional_lengths)
# Adjust slices if their total length doesn't match target length
if total_proportional_length < length:
for i in range(length - total_proportional_length):
proportional_lengths[i] += 1
elif total_proportional_length > length:
for i in range(total_proportional_length - length):
proportional_lengths[i] -= 1
# Combine the abbreviated words and return the result
abbreviated_phrase = ''.join([word[:plength] for word, plength in zip(capitalized_words, proportional_lengths)])
return abbreviated_phrase
def abbreviate_phrases(phrases: pl.Series, length: int) -> pl.Series:
"""Abbreviate phrases by a constant length.
Example:
phrases = pl.Series([
'Sunshine',
'Sunset',
'Climate Change and Environmental Impact',
'Health and Wellness',
'Quantum Computing and Physics',
'Global Warming and Renewable Resources'
])
length = 4
res = abbreviate_phrases(phrases, length)
print(res)
# Series: '' [str]
# [
# "Suns"
# "Suns"
# "CEnI"
# "HeWe"
# "QCoP"
# "GWRR"
# ]
"""
abbreviates = phrases.map_elements(lambda x: _abbreviate_phrase(x, length), return_dtype=pl.String)
# if not abbreviates.is_unique().all():
# print('WARNING: There are duplicated abbreviations.')
return abbreviates
</code></pre>
<p>Edit: performance comparison setup for proposed solutions</p>
<pre><code>import sys
import timeit
def generate_phrases(n: int) -> pl.DataFrame:
# Repeat the original phrases 'n' times
phrases = pl.DataFrame({
"p": ['Climate Change and Environmental Impact',
'Health and Wellness',
'Quantum Computing and Physics',
'Global Warming and Renewable Resources',
'no capital letters']
})
return pl.concat([phrases.with_columns(pl.col('p')+'_'+str(i)) for i in range(n)])
def compare_performance(n: int, length: int = 4):
phrases = generate_phrases(n)
abbreviate_phrases_time = timeit.timeit(lambda: abbreviate_phrases(phrases['p'], length), number=10)
abbreviate_phrases_harbeck_time = timeit.timeit(lambda: abbreviate_phrases_harbeck(phrases, phrase_column="p", length=length), number=10)
abbreviate_phrases_jqurious_time = timeit.timeit(lambda: abbreviate_phrases_jqurious(phrases['p'], length=length), number=10)
abbreviate_phrases_rle_time = timeit.timeit(lambda: abbreviate_phrases_rle(phrases['p'], length=length), number=10)
ratio_harbeck = abbreviate_phrases_time / abbreviate_phrases_harbeck_time
ratio_jqurious = abbreviate_phrases_time / abbreviate_phrases_jqurious_time
ratio_rle = abbreviate_phrases_time / abbreviate_phrases_rle_time
return ratio_harbeck, ratio_jqurious, ratio_rle
</code></pre>
<p>Edit: performance comparison results for proposed solutions</p>
<pre><code>n = 200_000
ratio_harbeck, ratio_jqurious, ratio_rle = compare_performance(n=n, length=4)
print(f"Performance ratio ({n*5} rows):")
print(f" original/harbeck {ratio_harbeck:.2f}x")
print(f" original/jqurious {ratio_jqurious:.2f}x")
print(f" original/rle {ratio_rle:.2f}x")
print()
print(f'python {sys.version.split(' ')[0]}')
print(f'polars {pl.__version__}')
# Performance ratio (1000000 rows):
# original/harbeck 0.70x
# original/jqurious 1.30x
# original/rle 1.79x
# python 3.12.0
# polars 1.12.0
</code></pre>
|
<python><string><dataframe><python-polars>
|
2024-11-15 13:26:07
| 2
| 423
|
Olibarer
|
79,192,562
| 6,197,439
|
Open file from QFileDialog in native file explorer via right-click in PyQt5?
|
<p>In Firefox, if I download a file, there is a folder icon "Show in Folder":</p>
<p><a href="https://i.sstatic.net/x6qr3viI.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/x6qr3viI.png" alt="Firefox Show in Folder" /></a></p>
<p>... which when clicked, opens the native OS file explorer in the Downloads directory, with the target download file selected:</p>
<p><a href="https://i.sstatic.net/pBQ872Zf.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/pBQ872Zf.png" alt="Show in Folder opened in native File Explorer" /></a></p>
<p>I would like the same kind of functionality - except I want it in a PyQt5 application, when QFileDialog is opened, upon choosing an action in the right-click context menu activated when the target file is selected; e.g. with the PyQt5 example (below), I can get this Qt5 dialog:</p>
<p><a href="https://i.sstatic.net/v843xc0o.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/v843xc0o.png" alt="Qt5 QFileDialog" /></a></p>
<p>... so, when I right-click on a target file (like <code>test.txt</code> in the image), I'd like a "Show in Folder" action added to the context menu, and when it is chosen, I'd like the native file explorer opened in the directory that contains the target file, and the target file selected - like what Firefox does.</p>
<p>How can I do that in PyQt5?</p>
<p>Example code:</p>
<pre class="lang-py prettyprint-override"><code># started from https://pythonspot.com/pyqt5-file-dialog/
import sys
from PyQt5.QtWidgets import QApplication, QWidget, QInputDialog, QLineEdit, QFileDialog
from PyQt5.QtGui import QIcon
class App(QWidget):
def __init__(self):
super().__init__()
self.title = 'PyQt5 file dialogs - pythonspot.com'
self.left = 10
self.top = 10
self.width = 640
self.height = 480
self.initUI()
def initUI(self):
self.setWindowTitle(self.title)
self.setGeometry(self.left, self.top, self.width, self.height)
self.openFileNameDialog()
self.show()
def openFileNameDialog(self):
options = QFileDialog.Options()
options |= QFileDialog.DontUseNativeDialog
fileName, _ = QFileDialog.getOpenFileName(self,"QFileDialog.getOpenFileName()", "","Text Files (*.txt)", options=options)
if fileName:
print(fileName)
if __name__ == '__main__':
app = QApplication(sys.argv)
ex = App()
sys.exit(app.exec_())
</code></pre>
|
<python><pyqt5><qt5><file-manager><qfiledialog>
|
2024-11-15 13:22:40
| 2
| 5,938
|
sdbbs
|
79,192,545
| 9,984,846
|
Python looks for modules in current working directory on self-hosted azure agent
|
<p>I have a self-hosted azure windows agent with which I want to run python scripts in a few steps for which I need a couple of libraries. Therefore I added the following tasks to my pipeline.yml</p>
<pre><code> - task: UsePythonVersion@0
inputs:
versionSpec: '3.12'
architecture: 'x64'
</code></pre>
<p>This installed Python at the location <code><<agent_home>>\_work\_tool\Python\3.12.7\x64\</code>.</p>
<p>However, it seemed that this didn't include a functioning pip, so I extended it with the following script:</p>
<pre><code> - script: |
python -m ensurepip --default-pip
python -m pip install --upgrade pip
</code></pre>
<p>This seemingly worked, but resulted in the following warnings in the log files:</p>
<pre><code>Installing collected packages: pip
WARNING: The scripts pip.exe, pip3.12.exe and pip3.exe are installed in '<<agent_home>>\_work\3\s\Scripts' which is not on PATH.
Consider adding this directory to PATH or, if you prefer to suppress this warning, use --no-warn-script-location.
</code></pre>
<p>Apparently here python is installing and searching for pip and libraries in the current working directory, which is strange to me.</p>
<p>For example, I manually tried to execute <code>python.exe -m pip freeze</code> in the location where Python is installed for this agent, which 'worked', though I got the message "Could not find platform independent libraries <prefix>".
If I go one level up and run <code>x64\python.exe -m pip freeze</code>, it says:</p>
<pre><code>Could not find platform independent libraries <prefix>
<<agent_home>>\_work\_tool\Python\3.12.7\x64\python.exe: No module named pip
</code></pre>
<p>I was looking around a bit and found, for example, these two posts:</p>
<ul>
<li><a href="https://stackoverflow.com/questions/19292957/how-can-i-troubleshoot-python-could-not-find-platform-independent-libraries-pr">How can I troubleshoot Python "Could not find platform independent libraries <prefix>"</a></li>
<li><a href="https://stackoverflow.com/questions/3701646/how-to-add-to-the-pythonpath-in-windows-so-it-finds-my-modules-packages">How to add to the PYTHONPATH in Windows, so it finds my modules/packages?</a></li>
</ul>
<p>and tried to set</p>
<ul>
<li><code>PYTHONPATH=<<path_to_python>>/DLLs;<<path_to_python>>/Lib</code> and</li>
<li><code>PYTHONHOME=<<path_to_python>></code></li>
</ul>
<p>but it didn't help. Changing the PYTHONPATH didn't change anything and after setting PYTHONHOME I get</p>
<pre><code>Fatal Python error: init_fs_encoding: failed to get the Python codec of the filesystem encoding
Python runtime state: core initialized
ModuleNotFoundError: No module named 'encodings'
Current thread 0x0000410c (most recent call first):
<no Python frame>
</code></pre>
<p>Can somebody maybe give me a hint, what I have to change in order to get Python properly running here? Thank you.</p>
|
<python><azure-devops>
|
2024-11-15 13:17:52
| 1
| 1,571
|
Christian
|
79,192,393
| 12,775,432
|
Torch randn vector differs
|
<p>I am trying to generate a torch vector with a specific length.
I want the vector to have the same beginning elements when increasing its length using the same seed.</p>
<p>This works when the vector's length ranges from 1 to 15 for example:</p>
<p>For length 14</p>
<pre><code>torch.manual_seed(1)
torch.randn(14)
tensor([ 0.6614, 0.2669, 0.0617, 0.6213, -0.4519, -0.1661, -1.5228, 0.3817,
-1.0276, -0.5631, -0.8923, -0.0583, -0.1955, -0.9656])
</code></pre>
<p>For length 15</p>
<pre><code>torch.manual_seed(1)
torch.randn(15)
tensor([ 0.6614, 0.2669, 0.0617, 0.6213, -0.4519, -0.1661, -1.5228, 0.3817,
-1.0276, -0.5631, -0.8923, -0.0583, -0.1955, -0.9656])
</code></pre>
<p>But for length 16 I get a totally different vector:</p>
<pre><code>torch.manual_seed(1)
torch.randn(16)
tensor([-1.5256, -0.7502, -0.6540, -1.6095, -0.1002, -0.6092, -0.9798, -1.6091,
-0.7121, 0.3037, -0.7773, -0.2515, -0.2223, 1.6871, 0.2284, 0.4676])
</code></pre>
<p>Can someone explain what's happening here, and is there a solution where the beginning of the vector does not change?</p>
|
<python><torch><random-seed>
|
2024-11-15 12:32:22
| 1
| 640
|
pyaj
|
79,192,391
| 19,499,853
|
networkx graph get groups of linked/connected values
|
<p>I've got such data</p>
<pre><code>import networkx as nx
G = nx.Graph()
G.add_nodes_from([1, 2, 3, 4, 5, 6, 7])
G.add_edges_from([(1, 2), (1, 3), (2, 4), (5, 6)])  # node 7 stays isolated
</code></pre>
<p>As you can see, 1 is connected with 2 (edge 1, 2) and 1 is connected with 3.
This means that 2 is connected with 3 via 1.</p>
<p>So I'd like to get 3 arrays:</p>
<p>First - <code>[1,2,3,4]</code>; second - <code>[5,6]</code>, because 5 and 6 do not connect with the rest of the values; and third - <code>[7]</code>.</p>
<p>I expect to get arrays of values that are connected to each other.</p>
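<p>To make the expected grouping concrete: this is what graph libraries call connected components. A pure-Python union-find sketch (no networkx) reproduces the three arrays I listed:</p>

```python
# Pure-Python illustration of the expected grouping (connected
# components), using a simple union-find with path halving.
def connected_groups(nodes, edges):
    parent = {n: n for n in nodes}

    def find(n):
        while parent[n] != n:
            parent[n] = parent[parent[n]]  # path halving
            n = parent[n]
        return n

    for a, b in edges:
        parent[find(a)] = find(b)  # union the two components

    groups = {}
    for n in nodes:
        groups.setdefault(find(n), []).append(n)
    return sorted(groups.values())

groups = connected_groups([1, 2, 3, 4, 5, 6, 7],
                          [(1, 2), (1, 3), (2, 4), (5, 6)])
```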
|
<python><graph><logic><networkx>
|
2024-11-15 12:32:06
| 1
| 309
|
Gerzzog
|
79,192,313
| 1,461,380
|
CrewAI - KeyError: 'key_name' When Running the Crew
|
<p>I’m following the CrewAI Getting Started Guide and running into a <code>KeyError: 'key_name'</code> when executing the <code>crewai run</code> command in the root of my project directory.</p>
<pre><code>(ai-crew) userk@mycelium:~/development/git/vanto_ai_agents_cornerstone/crewai/latest_ai_development$ crewai run
Running the Crew
warning: `VIRTUAL_ENV=/home/userk/development/venv/ai-crew` does not match the project environment path `.venv` and will be ignored
Traceback (most recent call last):
File "/home/userk/development/git/vanto_ai_agents_cornerstone/crewai/latest_ai_development/.venv/bin/run_crew", line 8, in <module>
sys.exit(run())
^^^^^
File "/home/userk/development/git/vanto_ai_agents_cornerstone/crewai/latest_ai_development/src/latest_ai_development/main.py", line 13, in run
LatestAiDevelopmentCrew().crew().kickoff(inputs=inputs)
^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/userk/development/git/vanto_ai_agents_cornerstone/crewai/latest_ai_development/.venv/lib/python3.12/site-packages/crewai/project/crew_base.py", line 35, in __init__
self.map_all_task_variables()
File "/home/userk/development/git/vanto_ai_agents_cornerstone/crewai/latest_ai_development/.venv/lib/python3.12/site-packages/crewai/project/crew_base.py", line 145, in map_all_task_variables
self._map_task_variables(
File "/home/userk/development/git/vanto_ai_agents_cornerstone/crewai/latest_ai_development/.venv/lib/python3.12/site-packages/crewai/project/crew_base.py", line 178, in _map_task_variables
self.tasks_config[task_name]["agent"] = agents[agent_name]()
^^^^^^^^^^^^^^^^^^^^
File "/home/userk/development/git/vanto_ai_agents_cornerstone/crewai/latest_ai_development/.venv/lib/python3.12/site-packages/crewai/project/utils.py", line 7, in memoized_func
cache[key] = func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "/home/userk/development/git/vanto_ai_agents_cornerstone/crewai/latest_ai_development/src/latest_ai_development/crew.py", line 12, in researcher
return Agent(
^^^^^^
File "/home/userk/development/git/vanto_ai_agents_cornerstone/crewai/latest_ai_development/.venv/lib/python3.12/site-packages/pydantic/main.py", line 212, in __init__
validated_self = self.__pydantic_validator__.validate_python(data, self_instance=self)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/userk/development/git/vanto_ai_agents_cornerstone/crewai/latest_ai_development/.venv/lib/python3.12/site-packages/crewai/agent.py", line 160, in post_init_setup
if env_var["key_name"] in unnacepted_attributes:
~~~~~~~^^^^^^^^^^^^
KeyError: 'key_name'
An error occurred while running the crew: Command '['uv', 'run', 'run_crew']' returned non-zero exit status 1.
</code></pre>
<p>Here is my setup:</p>
<pre><code>(ai-crew) userk@mycelium:~/development/git/vanto_ai_agents_cornerstone/crewai/latest_ai_development$ python --version
Python 3.12.3
(ai-crew) userk@mycelium:~/development/git/vanto_ai_agents_cornerstone/crewai/latest_ai_development$ pip freeze --local | grep crewai
crewai==0.80.0
crewai-tools==0.14.0
(ai-crew) userk@mycelium:~/development/git/vanto_ai_agents_cornerstone/crewai/latest_ai_development$ pip -V
pip 24.0 from /home/userk/development/venv/ai-crew/lib/python3.12/site-packages/pip (python 3.12)
</code></pre>
<p>I also tried using the virtual environment created in the project directory, with no luck.</p>
<p>How can I resolve this error?</p>
|
<python><ubuntu><artificial-intelligence><crewai>
|
2024-11-15 12:07:49
| 2
| 918
|
UserK
|
79,192,261
| 5,335,649
|
aiohttp behaviour when not reading body
|
<p>In my current development, I am sending a request to 50-100 different servers at the same time with <code>aiohttp</code>, and I am in no need to read the body (only need status_code and headers). I also cannot use <code>HEAD</code> request for the purpose of my application.</p>
<p>My goals are:</p>
<ul>
<li>Be able to utilize keep-alive's/Connection pooling with the 50-100 servers I mentioned.</li>
<li>Not loading the body into RAM, but discarding it.</li>
</ul>
<p>I know that I need to read the body to be able to reuse the same socket for a new request (I know how TCP/HTTP works).</p>
<p>I searched the web for keywords like "aiohttp ignore body" and "what <code>resp.release()</code> does", and delved into the source code, but got lost in the details. I also ran a local server (nginx) to check what aiohttp is doing. Although it looked like it was pooling the connections, at some point it did try to close the connection for every request and create a new one, which got me confused.</p>
<p>My questions are:</p>
<ul>
<li><p>When I do <code>async with session.request(url) as resp:</code> and later don't call anything that reads the body, like <code>resp.text()</code>, how does aiohttp (the context manager) handle this? Does it indeed read the body and then discard it? Are there exceptions?</p>
</li>
<li><p>Does body size come into the equation at any point here? Does the response body being 1KB or 1GB affect the keep-alive I am trying to achieve, other than the download time itself?</p>
</li>
<li><p>Is there a documented way in aiohttp by which I can discard the socket along with the body?</p>
</li>
</ul>
|
<python><aiohttp>
|
2024-11-15 11:52:34
| 1
| 4,540
|
Rockybilly
|
79,191,816
| 11,152,224
|
Automatic token refresh using Admin SDK Python
|
<p>I've made a simple REST service using FastAPI + Admin SDK (firebase_admin).</p>
<pre><code>import os
import firebase_admin
from firebase_admin import credentials, messaging
from fastapi import FastAPI, HTTPException
from pydantic import BaseModel
from dotenv import load_dotenv
load_dotenv()
cred = credentials.Certificate(os.getenv("GOOGLE_APPLICATION_CREDENTIALS"))
firebase_admin.initialize_app(cred)
app = FastAPI()
class PushNotificationModel(BaseModel):
title: str
body: str
token: str
def send_push_notification(title: str, body: str, token: str) -> str:
try:
message = messaging.Message(
notification=messaging.Notification(title=title, body=body), token=token
)
response = messaging.send(message)
return response
except Exception as e:
raise HTTPException(
status_code=500, detail=f"Error sending push notification: {str(e)}"
)
@app.post("/send-push/")
async def send_push(notification: PushNotificationModel):
response = send_push_notification(
title=notification.title, body=notification.body, token=notification.token
)
return {"message": "Push notification sent", "response": response}
</code></pre>
<p>And also created script that sends POST-request to my REST to send push to device using its token.</p>
<pre><code>import requests
def send_test_push():
url = "http://localhost:8000/send-push/"
data = {
"title": "Test title",
"body": "Test body",
"token": "bla-bla-bla",
}
response = requests.post(url, json=data)
print(response.json())
if __name__ == "__main__":
send_test_push()
</code></pre>
<p>I use the newest version of Google Firebase Messaging, and I have also read <a href="https://firebase.google.com/docs/cloud-messaging/migrate-v1" rel="nofollow noreferrer">this</a>. Should I refresh the access token myself when it expires, using the refresh token? From what I read, the Admin SDK can do this for me automatically.<a href="https://i.sstatic.net/ZLDnRbnm.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/ZLDnRbnm.png" alt="what am I saying about" /></a></p>
|
<python><oauth-2.0><jwt><firebase-cloud-messaging>
|
2024-11-15 09:44:12
| 0
| 569
|
WideWood
|
79,191,769
| 956,424
|
how to resolve latency issue with django M2M and filter_horizontal in ModelAdmin panel?
|
<p>I am using a Django <code>ModelAdmin</code> with an M2M relationship and the <code>formfield_for_manytomany</code> filtering code below. For the superuser (or any other login where the number of mailboxes exceeds 100k), loading the M2M field takes so long that the request times out. I previously sliced the available options after filtering, but that slicing has been removed:</p>
<pre><code>def formfield_for_manytomany(self, db_field, request, **kwargs):
if db_field.name == "mailboxes":
if request.user.is_superuser:
queryset = Mailbox.objects.prefetch_related('domain').only('id','email')
kwargs["queryset"] = queryset
field = super().formfield_for_manytomany(db_field, request, **kwargs)
field.widget.choices.queryset = queryset # Limit visible options
return field
if request.user.groups.filter(name__in=['customers']).exists():
queryset = Mailbox.objects.filter(
domain__customer__email=request.user.email).prefetch_related('domain').only('id','email')
kwargs["queryset"] = queryset
field = super().formfield_for_manytomany(db_field, request, **kwargs)
field.widget.choices.queryset = queryset
return field
return super().formfield_for_manytomany(db_field, request, **kwargs)
</code></pre>
<p>I want to use <code>filter_horizontal</code> only, not django-autocomplete-light or any JavaScript. How can the latency be resolved? As you can see, the queryset filtering is already done to restrict the widget to valid options, and the slicing has been removed.</p>
<p>the mailbox model is simple:</p>
<pre><code>class Mailbox(AbstractPerson):
username = models.EmailField(verbose_name='email', blank=True)
email = models.EmailField(verbose_name='email', null=True,blank=True, unique=True)
local_part = models.CharField(max_length=100,verbose_name='user part',help_text=hlocal_part)
domain = models.ForeignKey(Domain, on_delete=models.CASCADE)
</code></pre>
<p>which has M2M relation with GroupMailIds model:</p>
<pre><code>class GroupMailIds(models.Model):
local_part = models.CharField(max_length=100,verbose_name='local part',help_text=hlocal_part)
address = models.EmailField(unique=True,verbose_name='Email id of the distribution list')
domain = models.ForeignKey(Domain, on_delete=models.CASCADE,related_name='domains')
mailboxes = models.ManyToManyField(Mailbox,related_name='my_mailboxes')
</code></pre>
|
<python><django>
|
2024-11-15 09:33:05
| 2
| 1,619
|
user956424
|
79,191,760
| 3,296,786
|
Python test passing in CMake but not with pytest
|
<pre><code> def test_cover_fail(self):
self.url = self._api("cover")
fo.dis.cover().AndRaise(http_server.RpcError("error"))
self.mock.ReplayAll()
try:
json.load(self._make_request())
except (urllib.error.HTTPError, urllib.error.URLError) as e:
self.assertTrue("error" in e.read())
except Exception as e:
self.fail("cover shouldn't fail with exception: %s" % e)
else:
self.fail("cover should fail: %s" % e.read())
self.mock.VerifyAll()
</code></pre>
<p>This method was passing under CMake's test runner, but after integrating with pytest it fails with the error <code>{'error': {'message': "TypeError('Object of type RpcError is not JSON serializable')", 'details': {}, 'session_id': None}}</code>.</p>
<p>How do I alter the code so that the error string shows up in <code>e.read()</code>?</p>
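<p>For what it's worth, the <code>TypeError</code> suggests the server side is putting the exception object itself into the JSON payload; <code>json</code> can only serialize built-in types, so the exception has to be converted with <code>str()</code> first. A minimal sketch of the difference (the <code>RpcError</code> class here is a stand-in, not the real <code>http_server.RpcError</code>):</p>

```python
import json

class RpcError(Exception):
    """Stand-in for http_server.RpcError (assumed to be a plain Exception subclass)."""

# Putting the exception object into the payload directly fails:
try:
    json.dumps({"error": RpcError("error")})
    direct_fails = False
except TypeError:
    direct_fails = True

# Converting it to a string first makes it serializable:
payload = json.dumps({"error": {"message": str(RpcError("error"))}})
```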
|
<python><pytest>
|
2024-11-15 09:29:48
| 0
| 1,156
|
aΨVaN
|
79,191,712
| 1,719,931
|
Is dunder an official designation for `__method__` in Python?
|
<p>In a Python class you can use methods whose name is preceded and followed by a double underscore, which are called by a "special" syntax which is not the usual <code>object.method()</code> syntax.</p>
<p>For example:</p>
<pre><code>class Item:
def __init__(self, name):
self.name = name
i = Item("car")
</code></pre>
<p>Here the <code>Item("car")</code> syntax will call the <code>__init__</code> method.</p>
<p>Such methods are called "methods with special names" or "special method" by the <a href="https://docs.python.org/3/reference/datamodel.html#special-method-names" rel="nofollow noreferrer">Python documentation</a>.</p>
<p>However, some sources on the web call these kind of methods "dunder methods" (<a href="https://www.pythonmorsels.com/every-dunder-method/" rel="nofollow noreferrer">source 1</a>, <a href="https://www.geeksforgeeks.org/dunder-magic-methods-python/" rel="nofollow noreferrer">source 2</a>).</p>
<p>Is this designation of "dunder methods" an official Python name for these kind of methods?</p>
|
<python><oop><methods>
|
2024-11-15 09:15:14
| 1
| 5,202
|
robertspierre
|
79,191,599
| 11,159,734
|
FastAPI unit testing: AttributeError: 'NoneType' object has no attribute 'send'
|
<p>I am trying to test my FastAPI application. I use a real Postgres test database, so I actually want to read/write data from/to the database instead of mocking.</p>
<p>The first test works fine. However the second test will always give me this error:</p>
<pre><code>FAILED tests/users/test_signup.py::test_signup_successful2 - AttributeError: 'NoneType' object has no attribute 'send'
</code></pre>
<p>The first and second tests are basically identical, to rule out a problem in the test logic itself. I assume it has something to do with the <code>client</code> that I'm using. I had difficulties earlier getting the FastAPI <code>TestClient</code> to work, so I switched to the <code>AsyncClient</code>, which for some reason only works for a single test, and I don't know why.</p>
<p>This is what my code currently looks like:</p>
<pre class="lang-py prettyprint-override"><code># tests/test_signup.py
import pytest
import pytest_asyncio
from httpx import AsyncClient, get
from httpx._transports.asgi import ASGITransport
from sqlmodel.ext.asyncio.session import AsyncSession
from sqlalchemy.sql import text
from app import app
from database.session import get_session
@pytest_asyncio.fixture
async def client():
"""Fixture for creating a new test client."""
transport = ASGITransport(app=app)
async with AsyncClient(transport=transport, base_url="http://test") as client:
yield client
@pytest_asyncio.fixture
async def session():
"""Fixture for creating a new test session."""
session: AsyncSession = await get_session().__anext__() # Get the session
try:
yield session
finally:
await session.close() # Ensure the session is properly closed
@pytest.mark.asyncio
async def test_signup_successful(client, session):
"""Test user signup with valid data"""
# Use ASGITransport explicitly
# transport = ASGITransport(app=app)
# async with AsyncClient(transport=transport, base_url="http://test") as client:
# Define the request payload
payload = {
"first_name": "John",
"last_name": "Doe",
"username": "johndoe",
"email": "testuser@example.com",
"password": "strongpassword123"
}
# Perform POST request
response = await client.post("/user/signup", json=payload)
# Assertions
assert response.status_code == 201
data = response.json()
assert data["email"] == payload["email"]
assert "uuid" in data
assert data["role"] == "user"
# Verify the user exists in the database
statement = text(f"SELECT email FROM users WHERE email = '{payload['email']}'")
result = await session.exec(statement)
user = result.scalar()
assert user is not None
await session.close()
@pytest.mark.asyncio
async def test_signup_successful2(client, session):
"""Test user signup with valid data"""
# Use ASGITransport explicitly
# transport = ASGITransport(app=app)
# async with AsyncClient(transport=transport, base_url="http://test") as client:
# Define the request payload
payload = {
"first_name": "John",
"last_name": "Doe2",
"username": "johndoe2",
"email": "testuser2@example.com",
"password": "strongpassword123"
}
# Perform POST request
response = await client.post("/user/signup", json=payload)
# Assertions
assert response.status_code == 201
data = response.json()
assert data["email"] == payload["email"]
assert "uuid" in data
assert data["role"] == "user"
# Verify the user exists in the database
statement = text(f"SELECT email FROM users WHERE email = '{payload['email']}'")
result = await session.exec(statement)
user = result.scalar()
assert user is not None
await session.close()
</code></pre>
<p>Here is the entire traceback for reference:</p>
<pre><code>================================================================================================== test session starts ===================================================================================================
platform win32 -- Python 3.11.10, pytest-8.3.3, pluggy-1.5.0
rootdir: C:\Users\myuser\Documents\visual-studio-code\my-project\backend-v2
configfile: pytest.ini
plugins: anyio-4.6.2.post1, asyncio-0.24.0
asyncio: mode=Mode.STRICT, default_loop_scope=function
collected 3 items
tests\test_app.py .
tests\users\test_signup.py .F
======================================================================================================== FAILURES ========================================================================================================
________________________________________________________________________________________________ test_signup_successful2 _________________________________________________________________________________________________
client = <httpx.AsyncClient object at 0x0000016B872ABB10>, session = <sqlalchemy.orm.session.AsyncSession object at 0x0000016B87096B90>
@pytest.mark.asyncio
async def test_signup_successful2(client, session):
"""Test user signup with valid data"""
# Use ASGITransport explicitly
transport = ASGITransport(app=app)
async with AsyncClient(transport=transport, base_url="http://test") as client:
# Define the request payload
payload = {
"first_name": "John",
"last_name": "Doe2",
"username": "johndoe2",
"email": "testuser2@example.com",
"password": "strongpassword123"
}
# Perform POST request
> response = await client.post("/user/signup", json=payload)
tests\users\test_signup.py:77:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
..\..\..\..\AppData\Local\miniconda3\envs\aidav2\Lib\site-packages\httpx\_client.py:1905: in post
return await self.request(
..\..\..\..\AppData\Local\miniconda3\envs\aidav2\Lib\site-packages\httpx\_client.py:1585: in request
return await self.send(request, auth=auth, follow_redirects=follow_redirects)
..\..\..\..\AppData\Local\miniconda3\envs\aidav2\Lib\site-packages\httpx\_client.py:1674: in send
response = await self._send_handling_auth(
..\..\..\..\AppData\Local\miniconda3\envs\aidav2\Lib\site-packages\httpx\_client.py:1702: in _send_handling_auth
response = await self._send_handling_redirects(
..\..\..\..\AppData\Local\miniconda3\envs\aidav2\Lib\site-packages\httpx\_client.py:1739: in _send_handling_redirects
response = await self._send_single_request(request)
..\..\..\..\AppData\Local\miniconda3\envs\aidav2\Lib\site-packages\httpx\_client.py:1776: in _send_single_request
response = await transport.handle_async_request(request)
..\..\..\..\AppData\Local\miniconda3\envs\aidav2\Lib\site-packages\httpx\_transports\asgi.py:157: in handle_async_request
await self.app(scope, receive, send)
..\..\..\..\AppData\Local\miniconda3\envs\aidav2\Lib\site-packages\fastapi\applications.py:1054: in __call__
await super().__call__(scope, receive, send)
..\..\..\..\AppData\Local\miniconda3\envs\aidav2\Lib\site-packages\starlette\applications.py:113: in __call__
await self.middleware_stack(scope, receive, send)
..\..\..\..\AppData\Local\miniconda3\envs\aidav2\Lib\site-packages\starlette\middleware\errors.py:187: in __call__
raise exc
..\..\..\..\AppData\Local\miniconda3\envs\aidav2\Lib\site-packages\starlette\middleware\errors.py:165: in __call__
await self.app(scope, receive, _send)
..\..\..\..\AppData\Local\miniconda3\envs\aidav2\Lib\site-packages\starlette\middleware\base.py:185: in __call__
with collapse_excgroups():
..\..\..\..\AppData\Local\miniconda3\envs\aidav2\Lib\contextlib.py:158: in __exit__
self.gen.throw(typ, value, traceback)
..\..\..\..\AppData\Local\miniconda3\envs\aidav2\Lib\site-packages\starlette\_utils.py:83: in collapse_excgroups
raise exc
..\..\..\..\AppData\Local\miniconda3\envs\aidav2\Lib\site-packages\starlette\middleware\base.py:187: in __call__
response = await self.dispatch_func(request, call_next)
middleware.py:27: in execution_timer
response = await call_next(request)
..\..\..\..\AppData\Local\miniconda3\envs\aidav2\Lib\site-packages\starlette\middleware\base.py:163: in call_next
raise app_exc
..\..\..\..\AppData\Local\miniconda3\envs\aidav2\Lib\site-packages\starlette\middleware\base.py:149: in coro
await self.app(scope, receive_or_disconnect, send_no_error)
..\..\..\..\AppData\Local\miniconda3\envs\aidav2\Lib\site-packages\starlette\middleware\trustedhost.py:36: in __call__
await self.app(scope, receive, send)
..\..\..\..\AppData\Local\miniconda3\envs\aidav2\Lib\site-packages\starlette\middleware\cors.py:85: in __call__
await self.app(scope, receive, send)
..\..\..\..\AppData\Local\miniconda3\envs\aidav2\Lib\site-packages\starlette\middleware\exceptions.py:62: in __call__
await wrap_app_handling_exceptions(self.app, conn)(scope, receive, send)
..\..\..\..\AppData\Local\miniconda3\envs\aidav2\Lib\site-packages\starlette\_exception_handler.py:62: in wrapped_app
raise exc
..\..\..\..\AppData\Local\miniconda3\envs\aidav2\Lib\site-packages\starlette\_exception_handler.py:51: in wrapped_app
await app(scope, receive, sender)
..\..\..\..\AppData\Local\miniconda3\envs\aidav2\Lib\site-packages\starlette\routing.py:715: in __call__
await self.middleware_stack(scope, receive, send)
..\..\..\..\AppData\Local\miniconda3\envs\aidav2\Lib\site-packages\starlette\routing.py:735: in app
await route.handle(scope, receive, send)
..\..\..\..\AppData\Local\miniconda3\envs\aidav2\Lib\site-packages\starlette\routing.py:288: in handle
await self.app(scope, receive, send)
..\..\..\..\AppData\Local\miniconda3\envs\aidav2\Lib\site-packages\starlette\routing.py:76: in app
await wrap_app_handling_exceptions(app, request)(scope, receive, send)
..\..\..\..\AppData\Local\miniconda3\envs\aidav2\Lib\site-packages\starlette\_exception_handler.py:62: in wrapped_app
raise exc
..\..\..\..\AppData\Local\miniconda3\envs\aidav2\Lib\site-packages\starlette\_exception_handler.py:51: in wrapped_app
await app(scope, receive, sender)
..\..\..\..\AppData\Local\miniconda3\envs\aidav2\Lib\site-packages\starlette\routing.py:73: in app
response = await f(request)
..\..\..\..\AppData\Local\miniconda3\envs\aidav2\Lib\site-packages\fastapi\routing.py:301: in app
raw_response = await run_endpoint_function(
..\..\..\..\AppData\Local\miniconda3\envs\aidav2\Lib\site-packages\fastapi\routing.py:212: in run_endpoint_function
return await dependant.call(**values)
api\users\routes.py:36: in create_user_Account
user_exists = await user_service.user_exists(email, session)
api\users\service.py:17: in user_exists
user = await self.get_user_by_email(email, session)
api\users\service.py:12: in get_user_by_email
result = await session.exec(statement)
..\..\..\..\AppData\Local\miniconda3\envs\aidav2\Lib\site-packages\sqlmodel\ext\asyncio\session.py:81: in exec
result = await greenlet_spawn(
..\..\..\..\AppData\Local\miniconda3\envs\aidav2\Lib\site-packages\sqlalchemy\util\_concurrency_py3k.py:201: in greenlet_spawn
result = context.throw(*sys.exc_info())
..\..\..\..\AppData\Local\miniconda3\envs\aidav2\Lib\site-packages\sqlmodel\orm\session.py:66: in exec
results = super().execute(
..\..\..\..\AppData\Local\miniconda3\envs\aidav2\Lib\site-packages\sqlalchemy\orm\session.py:2362: in execute
return self._execute_internal(
..\..\..\..\AppData\Local\miniconda3\envs\aidav2\Lib\site-packages\sqlalchemy\orm\session.py:2247: in _execute_internal
result: Result[Any] = compile_state_cls.orm_execute_statement(
..\..\..\..\AppData\Local\miniconda3\envs\aidav2\Lib\site-packages\sqlalchemy\orm\context.py:305: in orm_execute_statement
result = conn.execute(
..\..\..\..\AppData\Local\miniconda3\envs\aidav2\Lib\site-packages\sqlalchemy\engine\base.py:1418: in execute
return meth(
..\..\..\..\AppData\Local\miniconda3\envs\aidav2\Lib\site-packages\sqlalchemy\sql\elements.py:515: in _execute_on_connection
return connection._execute_clauseelement(
..\..\..\..\AppData\Local\miniconda3\envs\aidav2\Lib\site-packages\sqlalchemy\engine\base.py:1640: in _execute_clauseelement
ret = self._execute_context(
..\..\..\..\AppData\Local\miniconda3\envs\aidav2\Lib\site-packages\sqlalchemy\engine\base.py:1846: in _execute_context
return self._exec_single_context(
..\..\..\..\AppData\Local\miniconda3\envs\aidav2\Lib\site-packages\sqlalchemy\engine\base.py:1986: in _exec_single_context
self._handle_dbapi_exception(
..\..\..\..\AppData\Local\miniconda3\envs\aidav2\Lib\site-packages\sqlalchemy\engine\base.py:2358: in _handle_dbapi_exception
raise exc_info[1].with_traceback(exc_info[2])
..\..\..\..\AppData\Local\miniconda3\envs\aidav2\Lib\site-packages\sqlalchemy\engine\base.py:1967: in _exec_single_context
self.dialect.do_execute(
..\..\..\..\AppData\Local\miniconda3\envs\aidav2\Lib\site-packages\sqlalchemy\engine\default.py:941: in do_execute
cursor.execute(statement, parameters)
..\..\..\..\AppData\Local\miniconda3\envs\aidav2\Lib\site-packages\sqlalchemy\dialects\postgresql\asyncpg.py:568: in execute
self._adapt_connection.await_(
..\..\..\..\AppData\Local\miniconda3\envs\aidav2\Lib\site-packages\sqlalchemy\util\_concurrency_py3k.py:132: in await_only
return current.parent.switch(awaitable) # type: ignore[no-any-return,attr-defined] # noqa: E501
..\..\..\..\AppData\Local\miniconda3\envs\aidav2\Lib\site-packages\sqlalchemy\util\_concurrency_py3k.py:196: in greenlet_spawn
value = await result
..\..\..\..\AppData\Local\miniconda3\envs\aidav2\Lib\site-packages\sqlalchemy\dialects\postgresql\asyncpg.py:504: in _prepare_and_execute
await adapt_connection._start_transaction()
..\..\..\..\AppData\Local\miniconda3\envs\aidav2\Lib\site-packages\sqlalchemy\dialects\postgresql\asyncpg.py:833: in _start_transaction
self._handle_exception(error)
..\..\..\..\AppData\Local\miniconda3\envs\aidav2\Lib\site-packages\sqlalchemy\dialects\postgresql\asyncpg.py:782: in _handle_exception
raise error
..\..\..\..\AppData\Local\miniconda3\envs\aidav2\Lib\site-packages\sqlalchemy\dialects\postgresql\asyncpg.py:831: in _start_transaction
await self._transaction.start()
..\..\..\..\AppData\Local\miniconda3\envs\aidav2\Lib\site-packages\asyncpg\transaction.py:146: in start
await self._connection.execute(query)
..\..\..\..\AppData\Local\miniconda3\envs\aidav2\Lib\site-packages\asyncpg\connection.py:350: in execute
result = await self._protocol.query(query, timeout)
asyncpg\protocol\protocol.pyx:374: in query
???
asyncpg\protocol\protocol.pyx:367: in asyncpg.protocol.protocol.BaseProtocol.query
???
asyncpg\protocol\coreproto.pyx:1094: in asyncpg.protocol.protocol.CoreProtocol._simple_query
???
asyncpg\protocol\protocol.pyx:966: in asyncpg.protocol.protocol.BaseProtocol._write
???
..\..\..\..\AppData\Local\miniconda3\envs\aidav2\Lib\asyncio\proactor_events.py:365: in write
self._loop_writing(data=bytes(data))
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <_ProactorSocketTransport fd=400 read=<_OverlappedFuture cancelled>>, f = None, data = b'Q\x00\x00\x00*BEGIN ISOLATION LEVEL READ COMMITTED;\x00'
def _loop_writing(self, f=None, data=None):
try:
if f is not None and self._write_fut is None and self._closing:
# XXX most likely self._force_close() has been called, and
# it has set self._write_fut to None.
return
assert f is self._write_fut
self._write_fut = None
self._pending_write = 0
if f:
f.result()
if data is None:
data = self._buffer
self._buffer = None
if not data:
if self._closing:
self._loop.call_soon(self._call_connection_lost, None)
if self._eof_written:
self._sock.shutdown(socket.SHUT_WR)
# Now that we've reduced the buffer size, tell the
# protocol to resume writing if it was paused. Note that
# we do this last since the callback is called immediately
# and it may add more data to the buffer (even causing the
# protocol to be paused again).
self._maybe_resume_protocol()
else:
> self._write_fut = self._loop._proactor.send(self._sock, data)
E AttributeError: 'NoneType' object has no attribute 'send'
..\..\..\..\AppData\Local\miniconda3\envs\aidav2\Lib\asyncio\proactor_events.py:401: AttributeError
================================================================================================ short test summary info =================================================================================================
FAILED tests/users/test_signup.py::test_signup_successful2 - AttributeError: 'NoneType' object has no attribute 'send'
============================================================================================== 1 failed, 2 passed in 1.85s ===============================================================================================
</code></pre>
<p>Edit:
For reference here is also my database/session.py code:</p>
<pre class="lang-py prettyprint-override"><code># database/session.py
from sqlmodel import SQLModel, create_engine
from sqlmodel.ext.asyncio.session import AsyncSession
from sqlalchemy.ext.asyncio import AsyncEngine
from sqlalchemy.orm import sessionmaker
from config import config
engine = AsyncEngine(create_engine(url=config.SQLALCHEMY_DATABASE_URI, echo=config.DB_ECHO))
async def init_db():
async with engine.begin() as conn:
await conn.run_sync(SQLModel.metadata.create_all)
async def get_session() -> AsyncSession: # type: ignore
Session = sessionmaker(
bind=engine,
class_=AsyncSession,
expire_on_commit=False
)
async with Session() as session:
yield session
</code></pre>
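<p>For context: a common cause of this exact symptom is that pytest-asyncio creates a fresh event loop per test, while the async engine/connection pool stays bound to the loop of the first test; once that loop is closed, the transport's proactor is <code>None</code>. This is a hedged diagnosis, but the underlying failure mode can be reproduced without any database at all:</p>

```python
import asyncio

async def ping():
    return "pong"

# "First test": a fresh loop works fine.
loop = asyncio.new_event_loop()
first = loop.run_until_complete(ping())
loop.close()

# "Second test": anything still bound to the now-closed loop fails.
try:
    loop.run_until_complete(ping())
    error = None
except RuntimeError as exc:
    error = str(exc)  # "Event loop is closed"
```

<p>Typical fixes are a session-scoped event loop fixture, or creating the engine/session inside each test's own loop rather than at module import time.</p>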
|
<python><asynchronous><pytest><fastapi>
|
2024-11-15 08:38:33
| 1
| 1,025
|
Daniel
|
79,191,561
| 7,479,675
|
How to Parse Complex SQL Values String into Columns Safely
|
<p>I'm working with a string that represents SQL parameter values and need to parse it into individual columns. Here's an example of such a string:</p>
<pre><code>values = "( 14587, '290\'960', 'This is, a, difficult,,, string that uses '' \" and even '' \" , '' '', ''. So it definitely needs to be checked for escape characters.', null )"
</code></pre>
<p>The goal is to extract these values into a list of columns, handling various challenges:</p>
<ul>
<li><p>Different Data Types: The values could be integers, strings, or null.</p>
</li>
<li><p>Escape Characters: Strings may contain escaped single quotes (''),
backslashes, or other special characters.</p>
</li>
<li><p>Embedded Delimiters: Commas
may appear inside strings, making naive splitting by commas
impossible.</p>
</li>
<li><p>Quotes Matching: Properly matching single quotes around
strings is essential.</p>
</li>
</ul>
<p>I attempted to use a regular expression to handle splitting by commas outside of quotes:</p>
<pre><code>import re
values = "( 14587, '290\'960', 'This is, a, difficult,,, string that uses '' \" and even '' \" , '' '', ''. So it definitely needs to be checked for escape characters.', null )"
# Remove outer parentheses and leading/trailing spaces
cleaned_values = values.strip().strip('()')
# Use regular expression to split by commas outside quotes, accounting for escaped quotes
values_list = re.split(r",(?=(?:[^']*'[^']*')*[^']*$)", cleaned_values)
# Strip whitespace from each part
values_list = [v.strip() for v in values_list]
print(cleaned_values)
for value in values_list:
print(value)
</code></pre>
<p>This approach works to some extent but feels fragile and may not handle all edge cases, especially more complex SQL strings.</p>
<p>Question:
What is the best and most reliable way to parse such SQL VALUES strings into individual columns, ensuring the following:</p>
<ul>
<li>Properly handling different data types.</li>
<li>Escaping and parsing special characters correctly.</li>
<li>Preserving the integrity of strings with embedded commas or quotes.</li>
</ul>
<p>Would a dedicated SQL parser or another method be more suitable than regular expressions?</p>
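<p>As one data point, a small character-by-character scanner tends to be more robust than a regex for this: it tracks the quote state explicitly and can treat <code>''</code> as an escaped quote. A minimal sketch (it only handles single-quoted strings and is deliberately not a full SQL parser; a dedicated parser such as <code>sqlparse</code> or <code>sqlglot</code> would cover more dialect edge cases):</p>

```python
def split_sql_values(values: str) -> list[str]:
    """Split a SQL VALUES tuple on top-level commas.

    Tracks single-quote state so commas inside string literals are kept,
    and treats '' inside a string as an escaped quote.
    Minimal sketch -- not a full SQL parser.
    """
    inner = values.strip().strip("()")
    parts, buf, in_str = [], [], False
    i = 0
    while i < len(inner):
        ch = inner[i]
        if ch == "'":
            if in_str and i + 1 < len(inner) and inner[i + 1] == "'":
                buf.append("''")  # escaped quote inside a string literal
                i += 2
                continue
            in_str = not in_str
            buf.append(ch)
        elif ch == "," and not in_str:
            parts.append("".join(buf).strip())
            buf = []
        else:
            buf.append(ch)
        i += 1
    parts.append("".join(buf).strip())
    return parts

cols = split_sql_values("( 14587, 'a, b', 'it''s', null )")
```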
|
<python><string>
|
2024-11-15 08:24:06
| 1
| 392
|
Oleksandr Myronchuk
|
79,191,501
| 8,721,930
|
Polars rolling window on time series with custom filter based on the current row
|
<p>How do I use polars' native API to do a rolling window on a datetime column, but filter out rows in the window based on the value of a column of the "current" row?</p>
<p>My polars dataframe of financial transactions has the following schema:</p>
<p><a href="https://i.sstatic.net/IYN2CLUW.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/IYN2CLUW.png" alt="dataframe schema" /></a></p>
<p>For each transaction and a duration <code>d</code>, I want to:</p>
<ol>
<li>grab the <code>source_acct</code> and its <code>timestamp</code></li>
<li>look back <code>timestamp - d</code> hours and get only rows whose <code>source_acct</code> or <code>dest_acct</code> matches the current <code>source_acct</code></li>
<li>sum up all txn as <code>amount_in</code> when the current <code>source_acct</code> is equal to a row's <code>dest_acct</code></li>
<li>do the same for <code>amount_out</code>, but where the current source acct is the row's <code>source_acct</code>, including the current row itself.</li>
</ol>
<p>I tried this using <code>map_rows</code> but it's way too slow for a dataframe with 20M rows. I sort my df on the timestamp column, then run:</p>
<pre class="lang-py prettyprint-override"><code>def windowing(df: pl.DataFrame, window_in_hours: int):
d = timedelta(hours=window_in_hours)
def calculate_amt(row):
acc_no, window_end = row[0], row[1]
window_start = window_end - d
acct_window_mask = (
(pl.col('timestamp') >= window_start) &
(pl.col('timestamp') <= window_end) &
(pl.col('dest_acct').eq(acc_no) | pl.col('source_acct').eq(acc_no))
)
window_txns = df.filter(acct_window_mask)
amount_in = window_txns.filter(pl.col('dest_acct').eq(acc_no))['amount'].sum()
amount_out = window_txns.filter(pl.col('source_acct').eq(acc_no))['amount'].sum()
return (amount_in, amount_out)
calculated_amounts = df.select(["source_acct", "timestamp", 'dest_acct', 'amount']).map_rows(calculate_amt)
return df.with_columns(
calculated_amounts['column_0'].alias('amount_in'),
calculated_amounts['column_1'].alias('amount_out'),
)
</code></pre>
<p>I've been trying to implement this using polars native API like <code>.rolling()</code> but I don't get how to do the filter step of comparing the current row's source account against the windowed transactions.</p>
<p>Here's a sample:</p>
<pre class="lang-py prettyprint-override"><code>import polars as pl
from datetime import timedelta
data = {
"timestamp": [
"2024-01-01 10:00:00",
"2024-01-01 10:30:00",
"2024-01-01 11:00:00",
"2024-01-01 11:30:00",
"2024-01-01 12:00:00"
],
"source_acct": ["A", "B", "A", "C", "A"],
"dest_acct": ["B", "A", "C", "A", "B"],
"amount": [100, 150, 200, 300, 250]
}
df = pl.DataFrame(data).with_columns(pl.col("timestamp").str.to_datetime())
print(windowing(df, 1))
</code></pre>
<p>Expected output:</p>
<pre><code>┌─────────────────────┬─────────────┬───────────┬────────┬───────────┬────────────┐
│ timestamp ┆ source_acct ┆ dest_acct ┆ amount ┆ amount_in ┆ amount_out │
│ --- ┆ --- ┆ --- ┆ --- ┆ --- ┆ --- │
│ datetime[μs] ┆ str ┆ str ┆ i64 ┆ i64 ┆ i64 │
╞═════════════════════╪═════════════╪═══════════╪════════╪═══════════╪════════════╡
│ 2024-01-01 10:00:00 ┆ A ┆ B ┆ 100 ┆ 0 ┆ 100 │
│ 2024-01-01 10:30:00 ┆ B ┆ A ┆ 150 ┆ 100 ┆ 150 │
│ 2024-01-01 11:00:00 ┆ A ┆ C ┆ 200 ┆ 150 ┆ 300 │
│ 2024-01-01 11:30:00 ┆ C ┆ A ┆ 300 ┆ 200 ┆ 300 │
│ 2024-01-01 12:00:00 ┆ A ┆ B ┆ 250 ┆ 300 ┆ 450 │
└─────────────────────┴─────────────┴───────────┴────────┴───────────┴────────────┘
</code></pre>
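<p>As a reference for any proposed <code>.rolling()</code> solution, the expected logic can be pinned down as a plain Python loop over the sample rows (O(n²), far too slow for 20M rows, but it reproduces the expected-output table above):</p>

```python
from datetime import datetime, timedelta

rows = [  # (timestamp, source_acct, dest_acct, amount), sorted by time
    (datetime(2024, 1, 1, 10, 0), "A", "B", 100),
    (datetime(2024, 1, 1, 10, 30), "B", "A", 150),
    (datetime(2024, 1, 1, 11, 0), "A", "C", 200),
    (datetime(2024, 1, 1, 11, 30), "C", "A", 300),
    (datetime(2024, 1, 1, 12, 0), "A", "B", 250),
]

def windowed_amounts(rows, hours):
    """For each row, sum amounts in [ts - hours, ts] where the current
    source_acct matches the window row's dest_acct (in) or source_acct (out)."""
    d = timedelta(hours=hours)
    out = []
    for ts, src, _, _ in rows:
        amount_in = sum(a for t, s, dst, a in rows
                        if ts - d <= t <= ts and dst == src)
        amount_out = sum(a for t, s, dst, a in rows
                         if ts - d <= t <= ts and s == src)
        out.append((amount_in, amount_out))
    return out

amounts = windowed_amounts(rows, 1)
```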
|
<python><dataframe><window-functions><python-polars>
|
2024-11-15 07:59:34
| 2
| 1,001
|
lionbigcat
|
79,191,441
| 6,930,340
|
How to identify differences in polars dataframe when assert_series_equal / assert_frame_equal fails?
|
<p>I am using <code>pl.testing.assert_frame_equal</code> to compare two <code>pl.DataFrame</code>s. The assertion fails. The traceback indicates that there are <code>exact value mismatches</code> in a certain column.</p>
<p>The column in question is of type <code>bool</code>. It also contains <code>null</code> values. This column has more than 20,000 rows and I need to figure out, where exactly the difference is.</p>
<p>What I did is to create a <code>mask</code> that shows a <code>true</code> value whenever there is a difference between the <code>actual</code> dataframe and the <code>expectation</code> dataframe.</p>
<pre><code>mask = actual != expectation
</code></pre>
<p>What I then noticed is that the mask only contains <code>false</code> and <code>null</code> values in every column.</p>
<p><code>mask.sum().sum_horizontal()</code> gives <code>0</code>.</p>
<p>That means this is apparently not a good way to identify the rows with differences.</p>
<p>In my large dataframe I expect a situation like the following:</p>
<pre><code>import polars as pl
from polars.testing import assert_frame_equal
df1 = pl.DataFrame(
{
"group": ["A", "A", "A", "B", "B"],
"value": [True, False, None, False, None]
}
)
df2 = pl.DataFrame(
{
"group": ["A", "A", "A", "B", "B"],
"value": [True, False, False, False, None]
}
)
</code></pre>
<p>Performing <code>assert_frame_equal(df1, df2)</code> will correctly result in an <code>AssertionError</code>.</p>
<pre><code>AssertionError: DataFrames are different (value mismatch for column 'value')
[left]: [True, False, None, False, None]
[right]: [True, False, False, False, None]
</code></pre>
<p>The inequality test doesn't help to identify where the differences are, as there are no <code>true</code> values.</p>
<pre><code>df1 != df2
shape: (5, 2)
┌───────┬───────┐
│ group ┆ value │
│ --- ┆ --- │
│ bool ┆ bool │
╞═══════╪═══════╡
│ false ┆ false │
│ false ┆ false │
│ false ┆ null │
│ false ┆ false │
│ false ┆ null │
└───────┴───────┘
</code></pre>
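<p>The all-<code>false</code>/<code>null</code> mask is expected: as in SQL, <code>null != x</code> evaluates to <code>null</code> rather than <code>true</code>, so the comparison has to be made null-aware (polars exposes this as <code>eq_missing</code>/<code>ne_missing</code> on expressions, in versions that have it). The same logic written in plain Python, on the sample column from above:</p>

```python
def ne_missing(a, b):
    """Null-aware inequality: None vs None -> equal, None vs value -> different."""
    if a is None or b is None:
        return a is not b  # differs unless both sides are None
    return a != b

left = [True, False, None, False, None]
right = [True, False, False, False, None]

mask = [ne_missing(x, y) for x, y in zip(left, right)]
diff_rows = [i for i, m in enumerate(mask) if m]  # row indices that differ
```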
|
<python><pytest><python-polars>
|
2024-11-15 07:36:17
| 2
| 5,167
|
Andi
|
79,191,312
| 4,080,615
|
How to sort python pandas dataframe in repetitive order after groupby?
|
<p>I have a dataset which is sorted in this order:</p>
<div class="s-table-container"><table class="s-table">
<thead>
<tr>
<th>col1</th>
<th>col2</th>
<th>col3</th>
</tr>
</thead>
<tbody>
<tr>
<td>a</td>
<td>1</td>
<td>r</td>
</tr>
<tr>
<td>a</td>
<td>1</td>
<td>s</td>
</tr>
<tr>
<td>a</td>
<td>2</td>
<td>t</td>
</tr>
<tr>
<td>a</td>
<td>2</td>
<td>u</td>
</tr>
<tr>
<td>a</td>
<td>3</td>
<td>v</td>
</tr>
<tr>
<td>a</td>
<td>3</td>
<td>w</td>
</tr>
<tr>
<td>b</td>
<td>4</td>
<td>x</td>
</tr>
<tr>
<td>b</td>
<td>4</td>
<td>y</td>
</tr>
<tr>
<td>b</td>
<td>5</td>
<td>z</td>
</tr>
<tr>
<td>b</td>
<td>5</td>
<td>q</td>
</tr>
<tr>
<td>b</td>
<td>6</td>
<td>w</td>
</tr>
<tr>
<td>b</td>
<td>6</td>
<td>e</td>
</tr>
</tbody>
</table></div>
<p>I want it to be sorted in the following order:</p>
<div class="s-table-container"><table class="s-table">
<thead>
<tr>
<th>col1</th>
<th>col2</th>
<th>col3</th>
</tr>
</thead>
<tbody>
<tr>
<td>a</td>
<td>1</td>
<td>r</td>
</tr>
<tr>
<td>a</td>
<td>2</td>
<td>t</td>
</tr>
<tr>
<td>a</td>
<td>3</td>
<td>v</td>
</tr>
<tr>
<td>a</td>
<td>1</td>
<td>s</td>
</tr>
<tr>
<td>a</td>
<td>2</td>
<td>u</td>
</tr>
<tr>
<td>a</td>
<td>3</td>
<td>w</td>
</tr>
<tr>
<td>b</td>
<td>4</td>
<td>x</td>
</tr>
<tr>
<td>b</td>
<td>5</td>
<td>z</td>
</tr>
<tr>
<td>b</td>
<td>6</td>
<td>w</td>
</tr>
<tr>
<td>b</td>
<td>4</td>
<td>y</td>
</tr>
<tr>
<td>b</td>
<td>5</td>
<td>q</td>
</tr>
<tr>
<td>b</td>
<td>6</td>
<td>e</td>
</tr>
</tbody>
</table></div>
<p>I want col2 to repeat in a cycle: for the col1 'a' values, it should be 1,2,3 and then 1,2,3 again instead of 1,1,2,2,3,3.
I have used the following code, but it is not working:</p>
<pre><code>import pandas as pd
# Creating the DataFrame
data = {
'col1': ['a', 'a', 'a', 'a', 'a', 'a', 'b', 'b', 'b', 'b', 'b', 'b'],
'col2': [1, 1, 2, 2, 3, 3, 4, 4, 5, 5, 6, 6],
'col3': ['r', 's', 't', 'u', 'v', 'w', 'x', 'y', 'z', 'q', 'w', 'e']
}
df = pd.DataFrame(data)
# Sort by col1, then reorder col2 within each group
df_sorted = df.sort_values(by=['col1', 'col2']).reset_index(drop=True)
df_sorted = df_sorted.groupby('col1', group_keys=False).apply(lambda x: x.sort_values('col2'))
# Display the sorted dataframe
print(df_sorted)
</code></pre>
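<p><em>One possible approach (a sketch, not necessarily the only ordering the asker wants): number each row within its <code>(col1, col2)</code> group using <code>cumcount</code>, then sort by that repetition index before <code>col2</code>, which interleaves the duplicated values:</em></p>

```python
import pandas as pd

data = {
    'col1': ['a', 'a', 'a', 'a', 'a', 'a', 'b', 'b', 'b', 'b', 'b', 'b'],
    'col2': [1, 1, 2, 2, 3, 3, 4, 4, 5, 5, 6, 6],
    'col3': ['r', 's', 't', 'u', 'v', 'w', 'x', 'y', 'z', 'q', 'w', 'e'],
}
df = pd.DataFrame(data)

# 'rep' is 0 for the first occurrence of each (col1, col2) pair, 1 for the second, ...
df['rep'] = df.groupby(['col1', 'col2']).cumcount()
df_sorted = (df.sort_values(['col1', 'rep', 'col2'])
               .drop(columns='rep')
               .reset_index(drop=True))
print(df_sorted['col3'].tolist())
# ['r', 't', 'v', 's', 'u', 'w', 'x', 'z', 'w', 'y', 'q', 'e']
```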
|
<python><pandas><dataframe>
|
2024-11-15 06:37:06
| 1
| 1,017
|
DarthSpeedious
|
79,191,259
| 1,070,833
|
how to generate model constraint in cpmpy from a list?
|
<p>I have a list of filters that I need to apply to my model as an OR, so they cannot be added to the model separately. I'm looking for the syntax to do this. Is there a way to extend a constraint added to the model and add the OR options in a loop, or some other cpmpy magic that can do that?
I'm trying to write a generator of the constraints to simplify the process. I have a pile of numpy arrays of boolean values with "masks" to apply to the problem; I might end up with hundreds of them.</p>
<pre><code>p = cp.intvar(-1, 1, shape=(6, 6), name="test")
m = cp.Model()
m += (p != 0)
#applying one mask is trivial:
m += (abs(sum(p[mask_array])) == 3)
# but I need to do it for many
# this is what I want in the end:
foo = some_list_of_numpy_arays
m += (abs(sum(p[foo[0]])) == 3) | (abs(sum(p[foo[1]])) == 3) | (abs(sum(p[foo[2]])) == 3)
# but in a loop as I do not know the number of elements in the list
for foo in some_iterator:
# and I'm stuck with syntax here?
    m += (abs(sum(p[foo[0]])) == 3)
</code></pre>
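<p><em>A generic pattern that avoids <code>eval</code> (a sketch; cpmpy also ships a <code>cp.any(...)</code> builtin that does the same folding directly): collect the disjuncts in a list and fold them with the <code>|</code> operator, which cpmpy expressions overload as logical OR. Demonstrated below with plain booleans standing in for the <code>(abs(sum(p[mask])) == 3)</code> expressions, since the pattern relies only on <code>|</code>:</em></p>

```python
from functools import reduce
import operator

def any_of(clauses):
    """Fold a non-empty list of clauses into one OR expression."""
    return reduce(operator.or_, clauses)

# with cpmpy this would be:
#   clauses = [abs(cp.sum(p[mask])) == 3 for mask in some_list_of_numpy_arrays]
#   m += any_of(clauses)        # or: m += cp.any(clauses)
print(any_of([False, False, True]))   # True
```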
<p>I could make strings and run eval but this seems way too hacky for this high level problem.</p>
|
<python><cpmpy>
|
2024-11-15 06:14:27
| 1
| 1,109
|
pawel
|
79,191,157
| 508,236
|
What is the correct way to measure the performance of a Databricks notebook?
|
<p>Here is my code for converting one column field of a data frame to time data type:</p>
<pre class="lang-none prettyprint-override"><code>col_value = df.select(df.columns[0]).first()[0]
start_time = time.time()
col_value = datetime.strftime(col_value, "%Y-%m-%d %H:%M:%S") \
if isinstance(col_value, datetime) \
else col_value
end_time = time.time()
elapsed_time = end_time - start_time
print(elapsed_time)
</code></pre>
<p>The elapsed_time value is 0.00011396408081054688, which makes sense, since it should cost little effort.</p>
<p>However, after I put this code inside a Python loop, things turn strange. Here is the code</p>
<pre class="lang-none prettyprint-override"><code>for col in df.columns:
col_value = df.select(col).first()[0]
start_time = time.time()
col_value = datetime.strftime(col_value, "%Y-%m-%d %H:%M:%S") \
if isinstance(col_value, datetime) \
else col_value
end_time = time.time()
elapsed_time = end_time - start_time
print(elapsed_time)
</code></pre>
<p>After running this code, I found that the <code>elapsed_time</code> increased to 5 seconds!</p>
<p>Then I remove the time convert logic and re-do the statistics again including the whole loop:</p>
<pre class="lang-none prettyprint-override"><code>loop_start_time = time.time()
for col in col_list:
start_time = time.time()
# Nothing was done here
end_time = time.time()
elapsed_time = end_time - start_time
print(col, elapsed_time)
loop_end_time = time.time()
loop_elapsed_time = loop_end_time - loop_start_time
print(f"Loop time cost {loop_elapsed_time} seconds")
</code></pre>
<p>It looks like each round without any logic costs more than 5 seconds as well, yet the whole loop only costs 0.0026 seconds.</p>
<p>Why did this happen? Did I miss something? What is the correct way to measure the cost of each Python statement and function?</p>
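<p><em>A side note on the measurement itself (a generic Python sketch, not Databricks-specific): <code>time.perf_counter()</code> is the recommended clock for short intervals, and wrapping the measurement in a helper keeps the bookkeeping consistent. Also note that with Spark's lazy evaluation the expensive work happens in <code>df.select(col).first()</code>, not in the <code>strftime</code> line being timed:</em></p>

```python
import time

def timed(fn, *args, **kwargs):
    """Run fn(*args, **kwargs) and return (result, elapsed_seconds)."""
    start = time.perf_counter()   # monotonic, high-resolution clock
    result = fn(*args, **kwargs)
    return result, time.perf_counter() - start

result, elapsed = timed(sum, range(1000))
print(result)   # 499500
```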
|
<python><jupyter-notebook><databricks>
|
2024-11-15 05:06:54
| 0
| 15,698
|
hh54188
|
79,191,134
| 2,457,962
|
Need a smooth curve from lmfit at more datapoints
|
<p>I am fitting a Lorentzian to the following data. If I plot the best fit, it only plots the results at the particular values of x where I had data. I tried to get a smooth curve that is a better representation, but something seems off.</p>
<p>data:</p>
<pre><code>y_means = [2.32070822e-06, 1.90175015e-06, 2.09473380e-06, 2.80934411e-06,
2.38255275e-06, 3.02204121e-06, 3.84290466e-06, 3.84136311e-06,
7.53941486e-06, 8.68364774e-06, 1.20078494e-05, 2.20557048e-05,
3.73314724e-05, 6.03141332e-05, 9.84530711e-05, 1.58565010e-04,
3.61269554e-04, 7.53586472e-04, 3.56518897e-04, 1.60734633e-04,
1.06442283e-04, 5.41622644e-05, 2.73085592e-05, 2.54361900e-05,
9.10802093e-06, 4.81356192e-06, 6.49884117e-06, 4.94871197e-06,
3.27389990e-06, 2.65197533e-06, 2.52672943e-06, 2.56496345e-06,
2.11445845e-06, 1.96091323e-06, 2.60823301e-06]
all_xslices = [ 0,  1,  2,  3,  4,  5,  6,  7,  8,  9, 10, 11, 12, 13, 14, 15, 16,
17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33,
34]
</code></pre>
<p>code:</p>
<pre><code># lorentzian fit
from lmfit.models import LorentzianModel
model = LorentzianModel()
params = model.guess(y_means, x=all_xslices)
all_xslices_fit = np.linspace(min(all_xslices), max(all_xslices), 100)
result = model.fit(y_means, params, x=all_xslices)
result_smooth = model.eval(x=all_xslices_fit)
# plotting the decay along y-axis: log axis
plt.figure(figsize=(8, 5), dpi=300)
plt.scatter(all_xslices, y_means, marker = '.', s = 200, c = 'g', label = "")
# plt.plot(all_xslices,lorentzian(all_xslices,*popt), 'g', label='Lorentz 1')
plt.plot(all_xslices, result.best_fit, 'g', label='Lorentz')
plt.plot(all_xslices_fit, result_smooth, 'r', label='Lorentz')
plt.xlabel("y length", fontsize = 15)
plt.ylabel("FFT amplitude", fontsize = 15)
plt.xticks(fontsize = 15)
plt.yticks(fontsize = 15)
plt.yscale('log')
plt.subplots_adjust(right=0.96,left=0.15,top=0.96,bottom=0.12)
plt.legend(loc = 'best')
plt.show()
</code></pre>
<p>Here is the current result:</p>
<p><a href="https://i.sstatic.net/BPSsryzu.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/BPSsryzu.png" alt="plot of data with fit" /></a></p>
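<p><em>A likely cause (an assumption, not confirmed from the plot alone): <code>model.eval(x=all_xslices_fit)</code> is called without the fitted parameters, so it evaluates the model at its default parameter values; <code>result.eval(x=all_xslices_fit)</code> or <code>model.eval(result.params, x=all_xslices_fit)</code> would use the best-fit values. As a sanity check, the Lorentzian in lmfit's parameterization can be reproduced with plain numpy:</em></p>

```python
import numpy as np

def lorentzian(x, amplitude, center, sigma):
    # lmfit's LorentzianModel form: A/pi * sigma / ((x - center)^2 + sigma^2)
    return amplitude / np.pi * sigma / ((x - center) ** 2 + sigma ** 2)

x_smooth = np.linspace(0, 34, 100)   # dense grid gives the smooth curve
y_smooth = lorentzian(x_smooth, amplitude=1.0, center=17.0, sigma=1.0)
```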
|
<python><curve-fitting><model-fitting><lmfit>
|
2024-11-15 04:48:36
| 1
| 1,702
|
Abhinav Kumar
|
79,191,086
| 5,145,090
|
statsmodels glm and generalized linear model formula have inverted coefficient results
|
<p>I am new to using statsmodels in Python (and to more generalized statistics in general), but I have a question regarding the difference between how sm.GLM and smf.glm calculate their results. From my understanding, as long as you add the constant term to sm.GLM, they should yield the same results. However, when I calculate it, each coefficient produced is the negative of the other's.</p>
<p>For example, using the sample datasets from the "Introduction to Statistical Learning for Python" book:</p>
<pre><code>import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import statsmodels.formula.api as smf
import statsmodels.api as sm
np.random.seed(0)
Default = pd.read_csv('../data/Default.csv')
X = Default[["balance", "income"]]
Y_=Default.default=='Yes'
X=sm.add_constant(X)
glmtest=sm.GLM(Y_,X,family=sm.families.Binomial()).fit()
glmtest.summary()
</code></pre>
<p>yields a balance coefficient of 0.0056. However, if I use smf</p>
<pre><code>mod1 = smf.glm(
formula="default~income+balance", data=Default, family=sm.families.Binomial()
).fit()
mod1.summary()
</code></pre>
<p>I instead get a coefficient of -0.0056. Other coefficients similarly have flipped signs. Since the results are the same ignoring the sign, I figure there was something going on under the hood and wanted to understand why.</p>
|
<python><pandas><statistics><statsmodels>
|
2024-11-15 04:22:35
| 1
| 465
|
tinfangwarble
|
79,191,052
| 2,525,479
|
Does TensorFlow or XLA provide a python API to read and parse the dumped MHLO mlir module?
|
<p>I turned on XLA when running TensorFlow, and in order to further optimize the fused kernels, I added <code>export XLA_FLAGS="--xla_dump_to=/tmp/xla_dump"</code> and got the dumped IRs, including lmhlo.xxx.mlir and other LLVM IRs.
Now that I'm trying to further analyze those dumped IRs, I want to read them as structured MLIR modules, so I need to read and parse them correctly. But I can't find any resources documenting how to do this in pure Python. I tried the "pymlir" module, but it does not work well with this TF XLA HLO module; maybe the dumped module has a different format.
So does anybody know how to read and parse this dumped MLIR?</p>
|
<python><tensorflow><xla>
|
2024-11-15 03:57:07
| 1
| 541
|
StayFoolish
|
79,190,945
| 15,966,103
|
Django & Cloudinary - Admin Panel Image Upload Returns Error "This res.cloudinary.com page can’t be found"
|
<p>I have the following <code>Teacher</code> model:</p>
<pre><code>class Teacher(models.Model):
...
# Image Field
image = models.ImageField(upload_to='teacher_images/', blank=False, null=False)
# Metadata
...
class Meta:
verbose_name = "Teacher"
verbose_name_plural = "Teachers"
ordering = ['name']
indexes = [
models.Index(fields=['email']),
models.Index(fields=['department']),
]
</code></pre>
<p>And in my <em><strong>settings.py</strong></em>:</p>
<pre><code>cloudinary.config(
cloud_name = "XXXXXXX",
api_key = "XXXXXXX",
api_secret = "XXXXXXX",
secure=True
)
# Set default storage backend to Cloudinary
DEFAULT_FILE_STORAGE = 'cloudinary_storage.storage.MediaCloudinaryStorage'
MEDIA_URL = 'https://res.cloudinary.com/dce5bvfok/image/upload/'
</code></pre>
<p>Yet when I navigate to the admin panel to upload an image, the URL formats correctly. For example: <code>https://res.cloudinary.com/dce5bvfok/image/upload/teacher_images/hsu2-min.JPG</code>. However, I can't actually view the image as I receive this error when navigating to the generated URL:</p>
<pre><code>This res.cloudinary.com page can’t be found
No webpage was found for the web address: https://res.cloudinary.com/dce5bvfok/image/upload/teacher_images/hsu2-min.JPG
HTTP ERROR 404
</code></pre>
<p>Does anyone know why this is occurring and how to rectify it?</p>
|
<python><django><django-models><django-admin><cloudinary>
|
2024-11-15 02:22:32
| 0
| 918
|
jahantaila
|
79,190,941
| 4,547,189
|
Regex in Python - Only capture exact match
|
<pre><code>import re
fruit_list = ['apple banana', 'apple', 'pineapple', 'banana', 'banana apple', 'kiwi']
fruit = re.compile('|'.join(fruit_list))
fruit_re = [ re.compile(r'\b('+re.escape(fruit)+r')\b') for fruit in fruit_list]
fruit_re.append(re.compile( r'([#@])(\w+)'))
string = "this is pooapple is banana apple #apple"
for ft in fruit_re:
match = re.finditer(ft, string)
print(type(match))
for mat in match:
print(mat.span())
print(mat.group())
print("****************")
</code></pre>
<p>Above is the code that I am working with. The issue is that this snippet captures both the #apple and the apple inside #apple. How do I ensure that only the #apple is captured and not the apple in #apple?</p>
<pre><code>(27, 32)
apple
****************
(34, 39)
apple
****************
<class 'callable_iterator'>
<class 'callable_iterator'>
(20, 26)
banana
****************
<class 'callable_iterator'>
(20, 32)
banana apple
****************
<class 'callable_iterator'>
<class 'callable_iterator'>
(33, 39)
#apple
****************
</code></pre>
<p>In the above output I am only interested in the #apple (33, 39) and not the apple (34, 39).</p>
<p>Ty</p>
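<p><em>One possible fix (a sketch, simplified to single-word fruits): combine everything into one pattern with the hashtag/mention alternative first, and guard the bare word with a lookbehind so it cannot start inside <code>#apple</code>:</em></p>

```python
import re

# the [#@]\w+ alternative is tried first; the bare-word alternative uses a
# lookbehind so it cannot match the "apple" inside "#apple" or "pooapple"
pattern = re.compile(r'([#@]\w+)|(?<![#@\w])(apple|banana)\b')

string = "this is pooapple is banana apple #apple"
matches = [(m.span(), m.group()) for m in pattern.finditer(string)]
print(matches)
# [((20, 26), 'banana'), ((27, 32), 'apple'), ((33, 39), '#apple')]
```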
|
<python><regex>
|
2024-11-15 02:20:01
| 1
| 648
|
tkansara
|
79,190,576
| 15,072,863
|
Canvas API - upload_to_submission succeeds, but there is no submission
|
<p>I'm an instructor, and I created an assignment which requires file submission. I want to upload a submission file for a student using Canvas API.</p>
<pre><code>from canvasapi import Canvas
canvas = Canvas('https://canvas.[my university].edu', canvas_api_key)
course = canvas.get_course(args.course_id)
assignment = course.get_assignment(args.assignment_id)
user = course.get_user(netid, "sis_user_id")
res = assignment.upload_to_submission(file_path, user)
print(res)
</code></pre>
<p>The first element in <code>res</code> is <code>True</code>, which, according to the <a href="https://canvasapi.readthedocs.io/en/stable/assignment-ref.html#canvasapi.assignment.Assignment.upload_to_submission" rel="nofollow noreferrer">docs</a>, means that the upload succeeded. I also checked the <code>url</code> field in the JSON output, and following the url indeed shows the uploaded file.</p>
<p>So the call succeeds, yet I don't see the uploaded file on Canvas, and the student still doesn't have a submission.</p>
<p>How to fix that? Publishing the assignment doesn't help.</p>
|
<python>
|
2024-11-14 22:08:10
| 0
| 340
|
Dmitry
|
79,190,430
| 1,686,236
|
Python scipy.stats create new distribution function by adjusting a single argument
|
<p>I'm using the <code>distfit</code> package (<a href="https://erdogant.github.io/distfit/pages/html/Functions.html#module-distfit.distfit.distfit.fit_transform" rel="nofollow noreferrer">https://erdogant.github.io/distfit/pages/html/Functions.html#module-distfit.distfit.distfit.fit_transform</a>) to find the best-fitting statistical distribution for some empirical data. The results include the fitted distribution as a <code>model=&lt;scipy.stats._distn_infrastructure.rv_continuous_frozen at 0x7fa4a42c8a90&gt;</code> object, a tuple of parameters for it <code>params=(1013.8436378790848, -556.0268261452745, 81.19476091801334)</code>, and also separate <code>loc</code> and <code>scale</code> parameters. The <code>model</code> object has an <code>args</code> attribute which is the same as <code>params</code>.</p>
<pre><code>results = {'name': 'loggamma',
'score': 0.004380365964219514,
'loc': -556.0268261452745,
'scale': 81.19476091801334,
'arg': (1013.8436378790848,),
'params': (1013.8436378790848, -556.0268261452745, 81.19476091801334),
'model': <scipy.stats._distn_infrastructure.rv_continuous_frozen at 0x7fa4a42c8a90>,
'bootstrap_score': 0,
'bootstrap_pass': None,
'color': '#e41a1c',
'CII_min_alpha': 1.7049279936189805,
'CII_max_alpha': 10.095497944750718}
model.args = (1013.8436378790848, -556.0268261452745, 81.19476091801334)
</code></pre>
<p>Now that I have these, I want to adjust the scale parameter from some other data and obtain a new distribution function with only the scale changed, so I can use the new distribution function's inverse CDF method.</p>
<p>With the new scale parameter, I now have <code>params=(1013.8436378790848, -556.0268261452745, 52)</code>. I then set <code>dist.args</code> equal to the same. When I use this modified distribution's inverse CDF (<code>.ppf</code>) method, I get unexpected results. That may be ok, but I'm unsure if I'm creating this new distribution correctly. Thanks.</p>
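<p><em>A safer pattern than mutating <code>model.args</code> (frozen distributions capture their parameters at creation time, so reassigning <code>.args</code> afterwards is, at best, fragile): freeze a fresh distribution with the new scale and use its <code>ppf</code>:</em></p>

```python
import scipy.stats as st

shape, loc = 1013.8436378790848, -556.0268261452745   # from the distfit result
new_scale = 52.0

# freeze a new distribution instead of editing the old frozen object
new_model = st.loggamma(shape, loc=loc, scale=new_scale)
median = new_model.ppf(0.5)   # inverse CDF with the updated scale
```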
|
<python><scipy><scipy.stats>
|
2024-11-14 21:07:50
| 1
| 2,631
|
Dr. Andrew
|
79,190,354
| 482,819
|
TypeVar defined within if/else
|
<p>While this code works:</p>
<pre class="lang-py prettyprint-override"><code>from typing import TypeAlias, TypeVar
T = TypeVar("T", complex, float, str)
z: TypeAlias = tuple[T, ...] | list[T]
</code></pre>
<p>defining <code>T</code> conditionally does not.</p>
<pre class="lang-py prettyprint-override"><code>from typing import Generic, TypeAlias, TypeVar
try:
import somepackage
use_complex = True
except:
use_complex = False
if use_complex:
T = TypeVar("T", complex, float, str)
else:
T = TypeVar("T", float, str)
z: TypeAlias = tuple[T, ...] | list[T]
</code></pre>
<p>I am getting: <code>Variable not allowed in type expression</code></p>
<p>Is there a way to tell the typechecker that use_complex is a constant, and therefore only one branch or the other runs, so that <code>T</code> is defined only once and does not change?</p>
|
<python><python-typing><mypy>
|
2024-11-14 20:37:07
| 1
| 6,143
|
Hernan
|
79,190,189
| 2,761,174
|
Extract uncaptured raw text from regex
|
<p>I am given a regex expression that consists of raw text and capture groups. How can I extract all raw text snippets from it?</p>
<p>For example:</p>
<pre class="lang-py prettyprint-override"><code>pattern = r"Date: (\d{4})-(\d{2})-(\d{2})"
assert extract(pattern) == ["Date: ", "-", "-", ""]
</code></pre>
<p>Here, the last entry in the result is an empty string, indicating that there is no raw text after the last capture group.</p>
<p>The solution should not extract raw text within capture groups:</p>
<pre class="lang-py prettyprint-override"><code>pattern = r"hello (world)"
assert extract(pattern) == ["hello ", ""]
</code></pre>
<p>The solution should work correctly with escaped characters too, for example:</p>
<pre class="lang-py prettyprint-override"><code>pattern = r"\(born in (.*)\)"
assert extract(pattern) == ["(born in ", ")"]
</code></pre>
<p>Ideally, the solution should be efficient, avoiding looping over the string in Python.</p>
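<p><em>A sketch for the flat case (an assumed simplification: no nested, non-capturing, or lookaround groups): split the pattern on top-level capture groups with a regex that skips escaped parentheses, then undo the escapes in the literal fragments. Note the unescaping step is naive, e.g. it would also turn a literal <code>\d</code> outside a group into <code>d</code>:</em></p>

```python
import re

# one unescaped "( ... )" group whose body contains no bare parens;
# escaped characters inside the group are consumed as \x pairs
_GROUP = r'(?<!\\)\((?:[^()\\]|\\.)*\)'

def extract(pattern: str) -> list[str]:
    """Return the literal fragments between top-level capture groups."""
    parts = re.split(_GROUP, pattern)
    # undo escaping (e.g. \( -> "(") in the literal fragments
    return [re.sub(r'\\(.)', r'\1', p) for p in parts]

print(extract(r"Date: (\d{4})-(\d{2})-(\d{2})"))   # ['Date: ', '-', '-', '']
```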
|
<python><regex>
|
2024-11-14 19:32:37
| 2
| 409
|
Peter
|
79,190,162
| 420,996
|
PySpark issue with user defined function
|
<p>Why does my pyspark application fail with user defined function?</p>
<pre><code> multiplier = udf(lambda x: float(x) * 100.0, FloatType())
df = df.select(multiplier(df['value']).alias('value_percent'))
</code></pre>
<p>The error thrown is</p>
<blockquote>
<p>Lost task 0.0 in stage 1.0 (TID 1) (127.0.0.1 executor driver):
org.apache.spark.SparkException: Python worker exited unexpectedly<br />
....<br />
java.io.EOFException</p>
</blockquote>
<p>But even stranger, the same code (with the same UDF and the same dataset) actually works in a Jupyter notebook.</p>
<hr />
<p>spark version: 3.5.3<br />
python version: 3.11.9<br />
os: Windows.</p>
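<p><em>A common cause of "Python worker exited unexpectedly" on Windows (an assumption here, since the full trace isn't shown) is the driver and the Python workers resolving different interpreters. Pinning both to the same interpreter before the SparkSession is created often fixes it:</em></p>

```python
import os
import sys

# make the Spark driver and its Python workers use the same interpreter;
# this must run before the SparkSession is created
os.environ["PYSPARK_PYTHON"] = sys.executable
os.environ["PYSPARK_DRIVER_PYTHON"] = sys.executable
```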
|
<python><apache-spark><pyspark><apache-spark-sql>
|
2024-11-14 19:18:01
| 1
| 3,036
|
Kiran Mohan
|
79,190,025
| 1,420,553
|
Tensorflow Not Creating Model Correctly
|
<p>I found a problem following some samples from the book <em>"AI and Machine Learning for Coders"</em> by Laurence Moroney (with a foreword by Andrew Ng).</p>
<p>Summarizing the findings, the following code:</p>
<pre><code>import tensorflow as tf
from tensorflow import keras
model = tf.keras.models.Sequential([
tf.keras.layers.Embedding(10000, 16),
tf.keras.layers.GlobalAveragePooling1D(),
tf.keras.layers.Dense(24, activation='relu'),
tf.keras.layers.Dense(1, activation='sigmoid')
])
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
model.summary()
</code></pre>
<p>creates a model whose summary shows the following structure:
<a href="https://i.sstatic.net/rEUKzgIk.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/rEUKzgIk.png" alt="enter image description here" /></a></p>
<p>What's the problem with the code? Why are there no parameters to train? I have created other models and they work without this issue.</p>
<p>Thanks,</p>
<p>Gus</p>
|
<python><tensorflow>
|
2024-11-14 18:29:16
| 1
| 369
|
gus
|
79,189,939
| 11,001,493
|
How to make bars more visible while using plotly?
|
<p>I am trying to plot some data using plotly graph_objects. The X axis has dates as categories and the Y axis has depth values. I realized my graph becomes less visible when there is a relevant quantity of data; it almost seems to become transparent.</p>
<p>I already tried to change the <code>opacity</code> to 1 or <code>bargroupgap</code> to 0, but it is still weird. See image below:</p>
<p><a href="https://i.sstatic.net/20aohqM6.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/20aohqM6.png" alt="enter image description here" /></a></p>
<pre><code># Create a figure
fig = go.Figure()
# Plot bars using go.Bar
fig.add_trace(go.Bar(
x=well_df['DATE'], # Intervals as x-axis
y=well_df['y_end'] - well_df['y_start'], # Height of the bars (difference between end and start)
base=well_df['y_start'], # Bottom of the bars (starting at y_start)
marker=dict(color=well_df['FILTRO'].map(color_map), opacity=1), # Map colors for categories
opacity=1))
# Customize layout
fig.update_layout(
xaxis_title="DATE",
yaxis_title="DEPTH",
yaxis=dict(autorange="reversed"),
barmode='group', # Group bars by category
    bargap=0.05,  # Change this value to modify gaps between different categories
bargroupgap=0,
xaxis=dict(tickangle=45), # Rotate values from x axis to 45°
showlegend=False)
# Show the plot
fig.show(renderer="browser")
</code></pre>
<p>How can I make it to look more visible (with stronger colors)?</p>
|
<python><plotly><plotly.graph-objects>
|
2024-11-14 17:59:10
| 1
| 702
|
user026
|
79,189,876
| 1,802,693
|
Error with Observer while exiting async function only when using uvloop as event loop
|
<p>I can't figure out why I'm getting an error while shutting down when using uvloop, but not when running without it.</p>
<p>The error:</p>
<pre><code>ImportError: sys.meta_path is None, Python is likely shutting down
</code></pre>
<p>I need to use an Observer to watch config files and reconfigure the behaviour. I'm getting the error only when I'm calling <code>observer.stop()</code> and <code>observer.join()</code> AND using uvloop. When I'm not calling them, there is no exception in the output. It doesn't matter if I'm calling the <code>stop()</code> and <code>join()</code> functions from async code (see <code>Option 1</code>) or from the synchronous code (see <code>Option 2 (preferred)</code>).</p>
<pre><code>if sys.platform != 'win32':
import uvloop
uvloop.install()
async def main(loop_container, connector, processor, observer):
loop_container.set_current_event_loop()
try:
connector.connect_exchange()
asyncio.create_task(processor.process_queue_events())
is_connected = await connector.is_connected()
await connector.start_symbol_watchings()
retry_delay = 1
while True:
await asyncio.sleep(retry_delay) # keepalive
# retry logic (...)
except asyncio.CancelledError:
logging.error(f'{Utils.get_current_datetime()} Execution has been cancelled.')
except Exception as exc:
logging.error(f'Unknown Error: {type(exc).__name__} {str(exc)}')
finally:
await connector.disconnect_exchange()
# Option 1
#observer.stop()
#observer.join()
if __name__ == "__main__":
try:
tick_data_queue = asyncio.Queue()
observer = Observer()
loop_container = EventLoopContainer()
config_abs_path = os.path.abspath(CONFIG_FILE_NAME)
output_abs_path = os.path.abspath(OUTPUT_FILE_NAME)
config_reader = ConfigReader(config_abs_path)
state_persister = StatePersister(output_abs_path, config_reader)
connector = ExchangeConnector(loop_container, tick_data_queue, config_reader, state_persister)
processor = TickProcessor(loop_container, tick_data_queue, config_reader, state_persister, connector)
event_handler = FileChangeHandler(loop_container, config_reader, state_persister, connector, processor)
observer.schedule(
event_handler,
path = os.path.dirname(config_abs_path),
recursive = False,
)
observer.start()
main_coro = main(loop_container, connector, processor, observer)
asyncio.run(main_coro)
# Option 2 (preferred)
#observer.stop()
#observer.join()
    except Exception as exc:
logging.critical('Terminated.')
</code></pre>
<p>I'm not getting this error when:</p>
<ul>
<li>I'm not using uvloop</li>
<li>I don't call any of these functions</li>
</ul>
<p>From what I can understand, the thread used by the Observer class somehow conflicts with the async loop.</p>
<p>Does anyone have any idea how to resolve this issue?</p>
<pre><code>^C
2024-11-14|17:25:17.996 Execution has been cancelled.
2024-11-14|17:25:18.001 [symbol='ETH/USDT:USDT'] Connection error: ExchangeClosedByUser : Connection closed by the user.
2024-11-14|17:25:18.002 [symbol='BTC/USDT:USDT'] Connection error: ExchangeClosedByUser : Connection closed by the user.
2024-11-14|17:25:18.253 Disconnected from the exchange.
--- Logging error ---
--- Logging error ---
--- Logging error ---
--- Logging error ---
Exception ignored in: <bound method Loop.call_exception_handler of <uvloop.Loop running=False closed=True debug=False>>
Traceback (most recent call last):
File "uvloop/loop.pyx", line 2429, in uvloop.loop.Loop.call_exception_handler
File "/home/ubuntu/.pyenv/versions/3.13.0/lib/python3.13/logging/__init__.py", line 1548, in error
File "/home/ubuntu/.pyenv/versions/3.13.0/lib/python3.13/logging/__init__.py", line 1664, in _log
File "/home/ubuntu/.pyenv/versions/3.13.0/lib/python3.13/logging/__init__.py", line 1680, in handle
File "/home/ubuntu/.pyenv/versions/3.13.0/lib/python3.13/logging/__init__.py", line 1736, in callHandlers
File "/home/ubuntu/.pyenv/versions/3.13.0/lib/python3.13/logging/__init__.py", line 1026, in handle
File "/home/ubuntu/.pyenv/versions/3.13.0/lib/python3.13/logging/handlers.py", line 83, in emit
File "/home/ubuntu/.pyenv/versions/3.13.0/lib/python3.13/logging/__init__.py", line 1075, in handleError
File "/home/ubuntu/.pyenv/versions/3.13.0/lib/python3.13/traceback.py", line 129, in print_exception
File "/home/ubuntu/.pyenv/versions/3.13.0/lib/python3.13/traceback.py", line 1044, in __init__
File "/home/ubuntu/.pyenv/versions/3.13.0/lib/python3.13/traceback.py", line 492, in _extract_from_extended_frame_gen
File "/home/ubuntu/.pyenv/versions/3.13.0/lib/python3.13/traceback.py", line 369, in line
File "/home/ubuntu/.pyenv/versions/3.13.0/lib/python3.13/traceback.py", line 350, in _set_lines
File "/home/ubuntu/.pyenv/versions/3.13.0/lib/python3.13/linecache.py", line 25, in getline
File "/home/ubuntu/.pyenv/versions/3.13.0/lib/python3.13/linecache.py", line 41, in getlines
File "/home/ubuntu/.pyenv/versions/3.13.0/lib/python3.13/linecache.py", line 88, in updatecache
ImportError: sys.meta_path is None, Python is likely shutting down
--- Logging error ---
--- Logging error ---
--- Logging error ---
--- Logging error ---
Exception ignored in: <bound method Loop.call_exception_handler of <uvloop.Loop running=False closed=True debug=False>>
Traceback (most recent call last):
File "uvloop/loop.pyx", line 2429, in uvloop.loop.Loop.call_exception_handler
File "/home/ubuntu/.pyenv/versions/3.13.0/lib/python3.13/logging/__init__.py", line 1548, in error
File "/home/ubuntu/.pyenv/versions/3.13.0/lib/python3.13/logging/__init__.py", line 1664, in _log
File "/home/ubuntu/.pyenv/versions/3.13.0/lib/python3.13/logging/__init__.py", line 1680, in handle
File "/home/ubuntu/.pyenv/versions/3.13.0/lib/python3.13/logging/__init__.py", line 1736, in callHandlers
File "/home/ubuntu/.pyenv/versions/3.13.0/lib/python3.13/logging/__init__.py", line 1026, in handle
File "/home/ubuntu/.pyenv/versions/3.13.0/lib/python3.13/logging/handlers.py", line 83, in emit
File "/home/ubuntu/.pyenv/versions/3.13.0/lib/python3.13/logging/__init__.py", line 1075, in handleError
File "/home/ubuntu/.pyenv/versions/3.13.0/lib/python3.13/traceback.py", line 129, in print_exception
File "/home/ubuntu/.pyenv/versions/3.13.0/lib/python3.13/traceback.py", line 1044, in __init__
File "/home/ubuntu/.pyenv/versions/3.13.0/lib/python3.13/traceback.py", line 492, in _extract_from_extended_frame_gen
File "/home/ubuntu/.pyenv/versions/3.13.0/lib/python3.13/traceback.py", line 369, in line
File "/home/ubuntu/.pyenv/versions/3.13.0/lib/python3.13/traceback.py", line 350, in _set_lines
File "/home/ubuntu/.pyenv/versions/3.13.0/lib/python3.13/linecache.py", line 25, in getline
File "/home/ubuntu/.pyenv/versions/3.13.0/lib/python3.13/linecache.py", line 41, in getlines
File "/home/ubuntu/.pyenv/versions/3.13.0/lib/python3.13/linecache.py", line 88, in updatecache
ImportError: sys.meta_path is None, Python is likely shutting down
</code></pre>
|
<python><python-3.x><python-asyncio><uvloop>
|
2024-11-14 17:39:23
| 0
| 1,729
|
elaspog
|
79,189,825
| 1,826,066
|
Use brush for transform_calculate in interactive altair chart
|
<p>I have an interactive plot in <code>altair</code>/<code>vega</code> where I can select points and I see a pie chart with the ratio of the colors of the selected points.</p>
<pre class="lang-py prettyprint-override"><code>import altair as alt
import numpy as np
import polars as pl
selection = alt.selection_interval(encodings=["x"])
base = (
alt.Chart(
pl.DataFrame(
{
"x": list(np.random.rand(100)),
"y": list(np.random.rand(100)),
"class": list(np.random.choice(["A", "B"], 100)),
}
)
)
.mark_point(filled=True)
.encode(
color=alt.condition(
selection, alt.Color("class:N"), alt.value("lightgray")
),
)
.add_params(selection)
)
alt.hconcat(
base.encode(x="x:Q", y="y:Q"),
(
base.transform_filter(selection)
.mark_arc()
.encode(theta="count()", color="class:N")
),
)
</code></pre>
<p>The outcome looks like this:
<a href="https://i.sstatic.net/pziP3zrf.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/pziP3zrf.png" alt="enter image description here" /></a></p>
<p>Now I'd like to add two more charts that show the ratio of selected / unselected points for each color. I.e. one pie chart that is orange / gray and one pie chart that is blue / gray with ratios depending on the number of selected points.</p>
<p>I tried to use the selection like this</p>
<pre class="lang-py prettyprint-override"><code> (
base.mark_arc().encode(
theta="count()",
color=alt.condition(
selection, alt.Color("class:N"), alt.value("gray")
),
row="class:N",
)
),
</code></pre>
<p>But it's not what I want:</p>
<p><a href="https://i.sstatic.net/rUYM2fOk.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/rUYM2fOk.png" alt="enter image description here" /></a></p>
<p>What's the best way to add the pie charts I want?</p>
|
<python><vega-lite><altair>
|
2024-11-14 17:21:51
| 1
| 1,351
|
Thomas
|
79,189,727
| 3,056,882
|
How to disable or ignore the SSL for the Yfinance package
|
<p>Because I'm behind a firewall at the office, I get an SSL error when running the yfinance package, and I would like to disable SSL verification when it pulls data from Yahoo.</p>
<p>Code example:</p>
<pre><code># Load packages
import yfinance as yf
# Get data
df = yf.download('SPY', start='2000-01-01', end='2024-10-01')
# print
print(df)
</code></pre>
<p>The error:</p>
<pre><code>Failed to get ticker 'SPY' reason: HTTPSConnectionPool(host='fc.yahoo.com', port=443): Max retries exceeded with url: / (Caused by SSLError(SSLCertVerificationError(1, '[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: self-signed certificate in certificate chain (_ssl.c:1000)')))
[*********************100%***********************] 1 of 1 completed
1 Failed download:
['SPY']: YFTzMissingError('$%ticker%: possibly delisted; no timezone found')
Empty DataFrame
Columns: [(Adj Close, SPY), (Close, SPY), (High, SPY), (Low, SPY), (Open, SPY), (Volume, SPY)]
Index: []
</code></pre>
<p>Does anybody know a solution for this problem?</p>
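<p><em>A common workaround, heavily hedged: disabling certificate verification is insecure and only defensible behind a trusted corporate proxy, and whether yfinance accepts a <code>session=</code> argument varies by version. The sketch below only builds the session; the yfinance call is shown commented out as an assumption:</em></p>

```python
import requests
import urllib3

# silence the warning that requests emits for unverified HTTPS
urllib3.disable_warnings(urllib3.exceptions.InsecureRequestWarning)

session = requests.Session()
session.verify = False   # skip certificate verification for this session only

# in yfinance versions that accept a session, it can reportedly be passed:
# import yfinance as yf
# df = yf.download('SPY', start='2000-01-01', end='2024-10-01', session=session)
```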
|
<python><ssl><yfinance>
|
2024-11-14 16:50:49
| 2
| 741
|
H.L.
|
79,189,688
| 3,611,164
|
Plotly Python: How to properly add shapes to subplots
|
<p>How does plotly add shapes to figures with multiple subplots, and what are the best practices around that?</p>
<p>Let's take the following example:</p>
<pre class="lang-py prettyprint-override"><code>from plotly.subplots import make_subplots
fig = make_subplots(rows=2, cols=1, shared_xaxes=True)
fig.add_vrect(x0=1, x1=2, row=1, col=1, opacity=0.5, fillcolor="grey")
fig.add_scatter(x=[1,3], y=[3,4], row=1, col=1)
fig.add_scatter(x=[2,2], y=[3,4], row=2, col=1)
</code></pre>
<p><a href="https://i.sstatic.net/yr2xngH0.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/yr2xngH0.png" alt="plotly figure without vrect" /></a></p>
<p>If we call <code>add_vrect</code> at the end, the rectangle is visualized as I would expect.</p>
<pre class="lang-py prettyprint-override"><code>fig = make_subplots(rows=2, cols=1, shared_xaxes=True)
fig.add_scatter(x=[1,3], y=[3,4], row=1, col=1)
fig.add_scatter(x=[2,2], y=[3,4], row=2, col=1)
fig.add_vrect(x0=1, x1=2, row=1, col=1, opacity=0.5, fillcolor="grey")
</code></pre>
<p><a href="https://i.sstatic.net/4ha15OJL.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/4ha15OJL.png" alt="plotly figure with vrect" /></a></p>
<p>Now, when I move away from the dummy example to a more complex plot (3 subplots, multiple y axes, logarithmic scaling, datetime x axis), adding the rectangles last does not help either. I don't manage to visualize them for two of the three subplots.</p>
<p>Thus, I'm trying to better understand how plotly handles this under the hood. From what I have gathered so far, the rectangles are shapes and thus not part of <code>figure.data</code> but of <code>figure.layout</code>. In the above dummy examples, the shapes are only added to the layout in the second take. Why?<br />
Is it more advisable to use <code>fig.add_shape(type="rect")</code> when working with more complex plots?<br />
Or should I give up and just manually wrangle with <code>fig.layout.shapes</code> instead of using the function calls?</p>
<p>Examples are made with plotly 5.15.0.</p>
|
<python><plotly>
|
2024-11-14 16:39:03
| 1
| 366
|
Fabitosh
|
79,189,667
| 192,801
|
How to get counts on all queries running in BigQuery
|
<p>I am trying to get information on the number of queries running in BigQuery, and their states.</p>
<p>I've tried this:</p>
<pre class="lang-sql prettyprint-override"><code>select count(job_id) as job_count, state
from `MY_PROJECT`.`region-MY_REGION`.INFORMATION_SCHEMA.JOBS
where creation_time between
TIMESTAMP_SUB(CURRENT_TIMESTAMP(), INTERVAL 1 HOUR)
AND CURRENT_TIMESTAMP()
and query not like 'select count(job_id) as job_count, state%'
and project_id = 'MY_PROJECT'
group by state;
</code></pre>
<p>but I only seem to get counts for queries submitted by <em>my</em> user. I know there is more activity than that because when I go to the monitoring page, the "Jobs Explorer", I can see jobs submitted by different service accounts as well, and how many are in each state. The fact that I can see these jobs in the UI makes me doubt that it is a permissions-related issue (unless queries require different permissions than the UI).</p>
<p>I need to be able to get that information programmatically instead of looking at the UI, and I am not sure why the query above doesn't do it.</p>
<hr />
<p>I thought I would try using the bigquery client, from Python:</p>
<pre class="lang-py prettyprint-override"><code>from google.cloud import bigquery
from datetime import datetime, timedelta
project_id = 'MY_PROJECT'
client = bigquery.Client()
jobs = client.list_jobs(project=project_id,
all_users=True,
min_creation_time=datetime.now() - timedelta(hours=1),
max_creation_time=datetime.now())
states = []
job_counts = {}
# Iterate through the jobs and count the states
for job in jobs:
if job.state not in states:
states.append(job.state)
job_counts[job.state] = 0
job_counts[job.state] += 1
for state, count in job_counts.items():
print(f'Jobs in state {state}: {count}')
</code></pre>
<p>And this does <em>not</em> give the same results as the SQL query above.</p>
<p>What is the best way to get the number of jobs and their states in BigQuery, programmatically?</p>
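<p>As an aside, the manual state-counting loop in the Python snippet can be condensed with <code>collections.Counter</code>. This stand-alone sketch uses fake job objects, since <code>client.list_jobs(...)</code> needs credentials; <code>FakeJob</code> is purely illustrative:</p>

```python
from collections import Counter

# stand-in for the iterator returned by client.list_jobs(...); a real run
# needs credentials, so FakeJob is purely illustrative
class FakeJob:
    def __init__(self, state):
        self.state = state

jobs = [FakeJob(s) for s in ["DONE", "RUNNING", "DONE", "PENDING", "DONE"]]

# one pass replaces the manual states/job_counts bookkeeping
job_counts = Counter(job.state for job in jobs)
print(dict(job_counts))  # {'DONE': 3, 'RUNNING': 1, 'PENDING': 1}
```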
|
<python><sql><google-bigquery><information-schema>
|
2024-11-14 16:32:03
| 2
| 27,696
|
FrustratedWithFormsDesigner
|
79,189,622
| 558,639
|
Lazy creation of an asyncio event loop creates duplicates?
|
<p>I have a number of modules that need to run under an <code>asyncio</code> event loop, though I don't know until runtime which modules will be loaded. So at initialization, I want to make sure that there is one (and only one) event loop in effect.</p>
<p>I tried the following code. I expected that the 'A' code block would create and use an event loop, and the 'B' code block would use the same event loop:</p>
<pre><code> try:
loop = asyncio.get_running_loop() # use existing event loop
logger.info(f'*** A using existing loop {loop}')
except RuntimeError:
loop = asyncio.new_event_loop() # ... or create and use a new one
asyncio.set_event_loop(loop)
logger.info(f'*** A creating loop {loop}')
try:
loop = asyncio.get_running_loop() # use existing event loop
logger.info(f'*** B using existing loop {loop}')
except RuntimeError:
loop = asyncio.new_event_loop() # ... or create and use a new one
asyncio.set_event_loop(loop)
logger.info(f'*** B creating loop {loop}')
</code></pre>
<p>What I expected:</p>
<pre><code>INFO:builtins:*** A creating loop <ProactorEventLoop running=False closed=False debug=False>
INFO:builtins:*** B using existing loop <ProactorEventLoop running=False closed=False debug=False>
</code></pre>
<p>Instead, it creates two event loops:</p>
<pre><code>INFO:builtins:*** A creating loop <ProactorEventLoop running=False closed=False debug=False>
INFO:builtins:*** B creating loop <ProactorEventLoop running=False closed=False debug=False>
</code></pre>
<p>(Ultimately, this results in a RuntimeError <code>got Future <Future pending> attached to a different loop</code>.)</p>
<p>How do I dynamically create an event loop, if and only if it doesn't already exist?</p>
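<p>For reference, a minimal stdlib sketch that reproduces the behaviour: <code>get_running_loop()</code> only succeeds while a loop is actually <em>running</em>, and <code>set_event_loop()</code> merely registers a loop for the thread without running it, so both top-level blocks take the <code>except</code> branch:</p>

```python
import asyncio

def ensure_loop():
    # get_running_loop() succeeds only while a loop is actually *running*;
    # set_event_loop() registers a loop for the thread but does not run it,
    # so it does not change what get_running_loop() returns
    try:
        return asyncio.get_running_loop()
    except RuntimeError:
        loop = asyncio.new_event_loop()
        asyncio.set_event_loop(loop)
        return loop

loop_a = ensure_loop()
loop_b = ensure_loop()  # takes the except branch again
print(loop_a is loop_b)  # False
loop_a.close()
loop_b.close()
```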
|
<python><python-asyncio>
|
2024-11-14 16:19:55
| 0
| 35,607
|
fearless_fool
|
79,189,502
| 5,477,531
|
Logging subclass is not retrieved correctly
|
<p>I am using python's <code>logging</code> library and I need to create a custom logger that logs two attributes of my process: <code>instance_name</code> and <code>instance_id</code>. I have come to this solution that works successfully, though I'm not sure if it overlooks something/does something unnecessary:</p>
<pre class="lang-py prettyprint-override"><code>import logging
from logging import Formatter
class CustomLogger(logging.Logger):
def __init__(self, name, extra={}):
super().__init__(name)
self.extra = extra
def _log(self, level, msg, args, exc_info=None, extra=None, stack_info=False, **kwargs):
if extra is None:
extra = self.extra
else:
extra.update(self.extra)
super()._log(level, msg, args, exc_info, extra, stack_info, **kwargs)
logger = CustomLogger("test-logger", {"instance_name": "name", "instance_id": 12})
console_handler = logging.StreamHandler()
formatter = Formatter(
"%(asctime)s - %(levelname)s - instance name: %(instance_name)s - instance id: %(instance_id)i - %(message)s"
)
console_handler.setFormatter(formatter)
logger.addHandler(console_handler)
</code></pre>
<p>If I use this logger to log a message, I get what I expect:</p>
<pre class="lang-py prettyprint-override"><code>logger.info("Hello world")
# 2024-11-14 16:22:51,320 - INFO - instance name: name - instance id: 12 - Hello world
</code></pre>
<p>But if I want to retrieve that same logger later in the code, it does not contain any handler and therefore won't log anything</p>
<pre class="lang-py prettyprint-override"><code>logging.setLoggerClass(CustomLogger)
retrieved_logger = logging.getLogger("test-logger")
# doesn't print anything
retrieved_logger.info("Hello world")
logger.handlers, retrieved_logger.handlers
# ([<StreamHandler stderr (NOTSET)>], [])
</code></pre>
<p>I also expected the logger to be a singleton, so the retrieved logger should be the same instance as <code>logger</code>, but it isn't:</p>
<pre class="lang-py prettyprint-override"><code>logger is retrieved_logger
# False
</code></pre>
<p>My question is: where is the problem here? Why can't I get the same logger I configured?</p>
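<p>A stripped-down stdlib repro of the mismatch (<code>DemoLogger</code> is an illustrative stand-in): loggers created by calling the class directly are never entered into <code>logging</code>'s internal registry, so <code>getLogger()</code> builds and caches a separate instance:</p>

```python
import logging

class DemoLogger(logging.Logger):  # hypothetical stand-in for CustomLogger
    pass

# instantiating a Logger subclass directly bypasses the manager's registry
direct = DemoLogger("demo-logger")

# getLogger() consults the registry, finds nothing under this name, and
# creates (and caches) a new instance of the class set via setLoggerClass()
logging.setLoggerClass(DemoLogger)
fetched = logging.getLogger("demo-logger")

print(direct is fetched)                            # False
print(fetched is logging.getLogger("demo-logger"))  # True: now registered
```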
|
<python><logging>
|
2024-11-14 15:48:51
| 1
| 627
|
mrbolichi
|
79,189,438
| 1,936,046
|
How do I add a Python script to the enterprise scheduler ActiveBatch?
|
<p>According to the ActiveBatch website, they do support Python scripts:</p>
<p><a href="https://www.advsyscon.com/en-us/activebatch/script-management" rel="nofollow noreferrer">https://www.advsyscon.com/en-us/activebatch/script-management</a></p>
<p>But I do not see an option for Python from the script editor in ActiveBatch:</p>
<p><a href="https://i.sstatic.net/Cb1CDgIr.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Cb1CDgIr.png" alt="enter image description here" /></a></p>
|
<python><activebatch>
|
2024-11-14 15:30:00
| 0
| 764
|
Duanne
|
79,189,436
| 10,004,903
|
How to display a GIF in Dearpygui
|
<p>Since importing and displaying GIFs is not natively supported in dearpygui, how would one go about rendering and animating a GIF in dpg using the existing tools?</p>
|
<python><animated-gif><dearpygui>
|
2024-11-14 15:29:43
| 1
| 548
|
grybouilli
|
79,189,423
| 16,389,095
|
How to display a pdf page into a Flet container
|
<p>I'm trying to develop a simple app for displaying each page of a PDF file. I start by adding a container and a button. The PDF file's full path (absolute path + file name) is given to the variable <code>fullname</code>. The file is converted into a list of PIL images using the <a href="https://pypi.org/project/pdf2image/" rel="nofollow noreferrer">pdf2image</a> library.</p>
<p>I'm trying to set the first image (the first element of the list) as the content of the container after the button is clicked.</p>
<p>Here is the code:</p>
<pre><code>import flet as ft
import pdf2image
def main(page: ft.Page):
fullname = r'Your Full Path To the Doc.pdf'
viewer = pdf2image.convert_from_path(fullname)
def btn_Click(e):
cont.content = ft.Image(src = viewer[0],
fit=ft.ImageFit.FILL,
)
page.update()
cont = ft.Container(height = 0.4*page.height,
width = 0.4 * page.width,
border=ft.border.all(3, ft.colors.RED),)
btn = ft.IconButton(
icon=ft.icons.UPLOAD_FILE,
on_click=btn_Click,
icon_size=35,)
page.add(ft.Column([cont, btn],
horizontal_alignment="center"))
page.window_maximized = True
page.horizontal_alignment = "center"
page.scroll = ft.ScrollMode.AUTO
page.update()
ft.app(target=main, assets_dir="assets")
</code></pre>
<p>Why is nothing displayed in the container, and why is no error shown?</p>
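<p>For context on the container staying empty: <code>ft.Image</code>'s <code>src</code> expects a file path or URL, while <code>viewer[0]</code> is a PIL image object. A commonly used workaround is to encode the page as base64 and pass it via <code>src_base64</code>. Below is a PIL-only sketch; <code>page_image</code> is a stand-in for <code>viewer[0]</code>, and the hand-off to flet is shown as a comment:</p>

```python
import base64
import io

from PIL import Image

# stand-in for viewer[0]; pdf2image's convert_from_path returns PIL images
page_image = Image.new("RGB", (60, 80), "white")

# encode the page as PNG bytes, then base64, for flet's src_base64
buf = io.BytesIO()
page_image.save(buf, format="PNG")
page_b64 = base64.b64encode(buf.getvalue()).decode("ascii")

# inside btn_Click (sketch):
#     cont.content = ft.Image(src_base64=page_b64, fit=ft.ImageFit.FILL)
```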
|
<python><flutter><pdf><flet>
|
2024-11-14 15:23:31
| 1
| 421
|
eljamba
|
79,189,217
| 2,127,543
|
Retrieve cell of a DataFrame Enum column as an Enum
|
<p>I have a DataFrame in which several columns are defined as Enums. When I retrieve a cell in an Enum column, the value is returned as a string. How do I retrieve the value as an Enum? Shown below is a small repro:</p>
<pre><code>import polars as pl
flags: pl.Enum = pl.Enum(["foo", "bar", "baz"])
df: pl.DataFrame = pl.DataFrame(data={"x": ["foo", "bar", "baz"]}, schema_overrides={"x": flags})
df.dtypes
# Returns [Enum(categories=['foo', 'bar', 'baz'])]
type(df[0, "x"])
# Returns str instead of Enum
# Series.to_list() returns list of str instead of list of Enum
[type(t) for t in df[:, "x"].to_list()]
# Returns [str, str, str]
</code></pre>
|
<python><python-polars>
|
2024-11-14 14:38:37
| 1
| 439
|
scorpio
|
79,189,097
| 11,022,199
|
Removing a combination from stacked parametrization in pytest
|
<p>For a lot of tests we use stacked parametrization for testing our functions, like this:</p>
<pre><code>@pytest.mark.parametrize("x", x.values()) #possible values 1,2,3,4
@pytest.mark.parametrize("y", y.values()) #possible values True, False
def test_get_somehting(x):
get_something(x, y)
</code></pre>
<p>Now I want to run all combinations except for <code>x=3</code> with <code>y=False</code>, but I have not found an intuitive way to do that. Almost all tests run with these parameters, so I'd rather not specify them separately for every test. Any ideas how to go about this? Thanks!</p>
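<p>One stdlib-based way to express this is to build the combination list once with <code>itertools.product</code>, filter out the unwanted pair, and parametrize on pairs. The value lists below are illustrative stand-ins for <code>x.values()</code> / <code>y.values()</code>:</p>

```python
import itertools

x_values = [1, 2, 3, 4]   # stand-in for x.values()
y_values = [True, False]  # stand-in for y.values()

# enumerate every combination, then drop the single unwanted pair
combos = [
    (x, y)
    for x, y in itertools.product(x_values, y_values)
    if not (x == 3 and y is False)
]
print(len(combos))  # 7 of the 8 combinations remain
# usage sketch: @pytest.mark.parametrize("x,y", combos)
```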
|
<python><pytest>
|
2024-11-14 14:07:40
| 1
| 794
|
borisvanax
|
79,189,080
| 3,584,765
|
install specific version of pytorch
|
<p>I am trying to install a specific version of <code>torch</code> (along with <code>torchvision</code> and <code>torchaudio</code>) for a project.</p>
<p>The <a href="https://github.com/Traffic-X/ViT-CoMer/tree/main/segmentation" rel="nofollow noreferrer">instructions</a> from the project mentioned the command:</p>
<pre><code>pip install torch==1.9.0+cu111 torchvision==0.10.0+cu111 torchaudio==0.9.0 -f https://download.pytorch.org/whl/torch_stable.html
</code></pre>
<p>and for compatibility reasons I installed on one machine (after some trial and error) the following versions:</p>
<pre><code>pip install torch==1.11.0+cu113 torchvision==0.12.0+cu113 torchaudio==0.11.0 -f https://download.pytorch.org/whl/torch_stable.htm
</code></pre>
<p>Everything seems fine in this machine. But when I tried to apply the same command on another machine there was a surprise.</p>
<pre><code>pip install torch==1.11.0+cu113 torchvision==0.12.0+cu113 torchaudio==0.11.0 -f https://download.pytorch.org/whl/torch_stable.htm
</code></pre>
<blockquote>
<p>Looking in links: <a href="https://download.pytorch.org/whl/torch_stable.htm" rel="nofollow noreferrer">https://download.pytorch.org/whl/torch_stable.htm</a>
ERROR: Could not find a version that satisfies the requirement
torch==1.11.0+cu113 (from versions: 1.11.0, 1.12.0, 1.12.1, 1.13.0,
1.13.1, 2.0.0, 2.0.1, 2.1.0, 2.1.1, 2.1.2, 2.2.0, 2.2.1, 2.2.2, 2.3.0, 2.3.1, 2.4.0, 2.4.1, 2.5.0, 2.5.1) ERROR: No matching distribution found for torch==1.11.0+cu113</p>
</blockquote>
<p>So there isn't any version that specifically mentions the CUDA build — just the torch version. When I pick <code>1.11.0</code>, it installs successfully, but which CUDA it comes with is not obvious.</p>
<p>From <code>pip</code> on the CLI it isn't shown at all:</p>
<pre><code>pip list | grep torch
</code></pre>
<blockquote>
<p>torch 1.11.0<br />
torchaudio 0.11.0<br />
torchvision 0.12.0</p>
</blockquote>
<p>but from inside python it can be seen (!):</p>
<pre><code>>>> import torch
>>> torch.__version__
'1.11.0+cu102'
</code></pre>
<p>So, I guess cuda 10.2 was chosen by default for some reason. When inspecting the <a href="https://pytorch.org/get-started/locally/" rel="nofollow noreferrer">official torch docs</a> the command provided is:</p>
<pre><code>pip3 install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu118
</code></pre>
<p>and for 11.7 (the one I have in this machine):</p>
<pre><code>pip3 install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu117
Looking in indexes: https://download.pytorch.org/whl/cu117
Requirement already satisfied: torch in /home/orfanidis/miniconda3/envs/torch310/lib/python3.10/site-packages (1.11.0)
Requirement already satisfied: torchvision in /home/orfanidis/miniconda3/envs/torch310/lib/python3.10/site-packages (0.12.0)
Requirement already satisfied: torchaudio in /home/orfanidis/miniconda3/envs/torch310/lib/python3.10/site-packages (0.11.0)
Requirement already satisfied: typing-extensions in /home/orfanidis/miniconda3/envs/torch310/lib/python3.10/site-packages (from torch) (4.12.2)
Requirement already satisfied: numpy in /home/orfanidis/miniconda3/envs/torch310/lib/python3.10/site-packages (from torchvision) (1.26.4)
Requirement already satisfied: requests in /home/orfanidis/miniconda3/envs/torch310/lib/python3.10/site-packages (from torchvision) (2.32.3)
Requirement already satisfied: pillow!=8.3.*,>=5.3.0 in /home/orfanidis/miniconda3/envs/torch310/lib/python3.10/site-packages (from torchvision) (11.0.0)
Requirement already satisfied: charset-normalizer<4,>=2 in /home/orfanidis/miniconda3/envs/torch310/lib/python3.10/site-packages (from requests->torchvision) (3.4.0)
Requirement already satisfied: idna<4,>=2.5 in /home/orfanidis/miniconda3/envs/torch310/lib/python3.10/site-packages (from requests->torchvision) (3.10)
Requirement already satisfied: urllib3<3,>=1.21.1 in /home/orfanidis/miniconda3/envs/torch310/lib/python3.10/site-packages (from requests->torchvision) (2.2.3)
Requirement already satisfied: certifi>=2017.4.17 in /home/orfanidis/miniconda3/envs/torch310/lib/python3.10/site-packages (from requests->torchvision) (2024.8.30)
</code></pre>
<p>How can the requirement be reported as already satisfied when the CUDA 10.2 build is the one installed?</p>
<p>Even for specific torch version 1.11.0 the output is the same:</p>
<pre><code>pip3 install torch==1.11.0 torchvision torchaudio --index-url https://download.pytorch.org/whl/cu117
Looking in indexes: https://download.pytorch.org/whl/cu117
Requirement already satisfied: torch==1.11.0 in /home/orfanidis/miniconda3/envs/torch310/lib/python3.10/site-packages (1.11.0)
Requirement already satisfied: torchvision in /home/orfanidis/miniconda3/envs/torch310/lib/python3.10/site-packages (0.12.0)
Requirement already satisfied: torchaudio in /home/orfanidis/miniconda3/envs/torch310/lib/python3.10/site-packages (0.11.0)
Requirement already satisfied: typing-extensions in /home/orfanidis/miniconda3/envs/torch310/lib/python3.10/site-packages (from torch==1.11.0) (4.12.2)
Requirement already satisfied: numpy in /home/orfanidis/miniconda3/envs/torch310/lib/python3.10/site-packages (from torchvision) (1.26.4)
Requirement already satisfied: requests in /home/orfanidis/miniconda3/envs/torch310/lib/python3.10/site-packages (from torchvision) (2.32.3)
Requirement already satisfied: pillow!=8.3.*,>=5.3.0 in /home/orfanidis/miniconda3/envs/torch310/lib/python3.10/site-packages (from torchvision) (11.0.0)
Requirement already satisfied: charset-normalizer<4,>=2 in /home/orfanidis/miniconda3/envs/torch310/lib/python3.10/site-packages (from requests->torchvision) (3.4.0)
Requirement already satisfied: idna<4,>=2.5 in /home/orfanidis/miniconda3/envs/torch310/lib/python3.10/site-packages (from requests->torchvision) (3.10)
Requirement already satisfied: urllib3<3,>=1.21.1 in /home/orfanidis/miniconda3/envs/torch310/lib/python3.10/site-packages (from requests->torchvision) (2.2.3)
Requirement already satisfied: certifi>=2017.4.17 in /home/orfanidis/miniconda3/envs/torch310/lib/python3.10/site-packages (from requests->torchvision) (2024.8.30)
</code></pre>
<p>For reference in the first machine the output was:</p>
<pre><code>pip install torch==1.11.0+cu111 torchvision==0.10.0+cu111 torchaudio==0.9.0 -f https://download.pytorch.org/whl/torch_stable.html
Looking in indexes: https://pypi.org/simple, https://pypi.ngc.nvidia.com
Looking in links: https://download.pytorch.org/whl/torch_stable.html
ERROR: Could not find a version that satisfies the requirement torch==1.11.0+cu111 (from versions: 1.11.0, 1.11.0+cpu, 1.11.0+cu102, 1.11.0+cu113, 1.11.0+cu115, 1.11.0+rocm4.3.1, 1.11.0+rocm4.5.2, 1.12.0, 1.12.0+cpu, 1.12.0+cu102, 1.12.0+cu113, 1.12.0+cu116, 1.12.0+rocm5.0, 1.12.0+rocm5.1.1, 1.12.1, 1.12.1+cpu, 1.12.1+cu102, 1.12.1+cu113, 1.12.1+cu116, 1.12.1+rocm5.0, 1.12.1+rocm5.1.1, 1.13.0, 1.13.0+cpu, 1.13.0+cu116, 1.13.0+cu117, 1.13.0+cu117.with.pypi.cudnn, 1.13.0+rocm5.1.1, 1.13.0+rocm5.2, 1.13.1, 1.13.1+cpu, 1.13.1+cu116, 1.13.1+cu117, 1.13.1+cu117.with.pypi.cudnn, 1.13.1+rocm5.1.1, 1.13.1+rocm5.2, 2.0.0, 2.0.0+cpu, 2.0.0+cpu.cxx11.abi, 2.0.0+cu117, 2.0.0+cu117.with.pypi.cudnn, 2.0.0+cu118, 2.0.0+rocm5.3, 2.0.0+rocm5.4.2, 2.0.1, 2.0.1+cpu, 2.0.1+cpu.cxx11.abi, 2.0.1+cu117, 2.0.1+cu117.with.pypi.cudnn, 2.0.1+cu118, 2.0.1+rocm5.3, 2.0.1+rocm5.4.2, 2.1.0, 2.1.0+cpu, 2.1.0+cpu.cxx11.abi, 2.1.0+cu118, 2.1.0+cu121, 2.1.0+cu121.with.pypi.cudnn, 2.1.0+rocm5.5, 2.1.0+rocm5.6, 2.1.1, 2.1.1+cpu, 2.1.1+cpu.cxx11.abi, 2.1.1+cu118, 2.1.1+cu121, 2.1.1+cu121.with.pypi.cudnn, 2.1.1+rocm5.5, 2.1.1+rocm5.6, 2.1.2, 2.1.2+cpu, 2.1.2+cpu.cxx11.abi, 2.1.2+cu118, 2.1.2+cu121, 2.1.2+cu121.with.pypi.cudnn, 2.1.2+rocm5.5, 2.1.2+rocm5.6, 2.2.0, 2.2.0+cpu, 2.2.0+cpu.cxx11.abi, 2.2.0+cu118, 2.2.0+cu121, 2.2.0+rocm5.6, 2.2.0+rocm5.7, 2.2.1, 2.2.1+cpu, 2.2.1+cpu.cxx11.abi, 2.2.1+cu118, 2.2.1+cu121, 2.2.1+rocm5.6, 2.2.1+rocm5.7, 2.2.2, 2.2.2+cpu, 2.2.2+cpu.cxx11.abi, 2.2.2+cu118, 2.2.2+cu121, 2.2.2+rocm5.6, 2.2.2+rocm5.7, 2.3.0, 2.3.0+cpu, 2.3.0+cpu.cxx11.abi, 2.3.0+cu118, 2.3.0+cu121, 2.3.0+rocm5.7, 2.3.0+rocm6.0, 2.3.1, 2.3.1+cpu, 2.3.1+cpu.cxx11.abi, 2.3.1+cu118, 2.3.1+cu121, 2.3.1+rocm5.7, 2.3.1+rocm6.0, 2.4.0, 2.4.1, 2.5.0, 2.5.1)
ERROR: No matching distribution found for torch==1.11.0+cu111
</code></pre>
<p>where the error message states the specific versions that can be chosen.</p>
<p>So, my questions are:
a) Why can torch be installed with a specific CUDA version on some machines but not on others?</p>
<p>b) Is there any difference between the official torch command and the project's one? I know <code>-f</code> means <a href="https://pip.pypa.io/en/stable/cli/pip_install/#cmdoption-f" rel="nofollow noreferrer">find links</a>.</p>
<p>c) Most importantly, how can I install the specific version I need if half of the tools (<code>pip</code> itself, for example) don't even report the CUDA variant I have now?</p>
<p>The Python version I am using in my venv is 3.10.9 on both machines, but the drivers differ.</p>
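<p>For (c), pinning both the package version and the CUDA build against the matching per-CUDA wheel index is the form the PyTorch "previous versions" page documents. A sketch, assuming Linux x86_64 and an interpreter the 1.11 wheels were built for (Python 3.10 or older):</p>

```shell
# pin both the package version and the CUDA build, resolving against the
# matching per-CUDA wheel index (here cu113; Linux x86_64, Python <= 3.10
# assumed, since 1.11 wheels were not built for newer interpreters)
pip install torch==1.11.0+cu113 torchvision==0.12.0+cu113 torchaudio==0.11.0 \
    --extra-index-url https://download.pytorch.org/whl/cu113
```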
|
<python><pytorch><pip>
|
2024-11-14 14:00:49
| 0
| 5,743
|
Eypros
|
79,189,028
| 5,696,601
|
Matplotlib Stackplot Gradient
|
<p>Is it possible to fill the "Odds" area with a gradient from left (green) to right (transparent)? I would like to do this in a plot to indicate uncertainty.</p>
<pre><code>import numpy as np
import matplotlib.pyplot as plt
x = [1, 2, 3, 4, 5]
y1 = [1, 1, 2, 3, 5]
y2 = [0, 4, 2, 6, 8]
y3 = [1, 3, 5, 7, 9]
y = np.vstack([y1, y2, y3])
labels = ["Fibonacci ", "Evens", "Odds"]
fig, ax = plt.subplots()
ax.stackplot(x, y1, y2, y3, labels=labels)
ax.legend(loc='upper left')
plt.show()
fig, ax = plt.subplots()
ax.stackplot(x, y)
plt.show()
</code></pre>
<p><a href="https://i.sstatic.net/GPZdydpQ.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/GPZdydpQ.png" alt="enter image description here" /></a></p>
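<p>Stackplot has no native gradient fill, but one known technique is to draw an <code>imshow</code> strip whose alpha fades from left to right and clip it to the "Odds" polygon. A sketch using the data above (22 is the stack's maximum height, reached at x=5):</p>

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")  # headless backend for this sketch
import matplotlib.pyplot as plt

x = [1, 2, 3, 4, 5]
y1 = [1, 1, 2, 3, 5]
y2 = [0, 4, 2, 6, 8]
y3 = [1, 3, 5, 7, 9]

fig, ax = plt.subplots()
bands = ax.stackplot(x, y1, y2, y3, labels=["Fibonacci ", "Evens", "Odds"])

# 1-pixel-high RGBA strip: green on the left fading to fully transparent
rgba = np.zeros((1, 256, 4))
rgba[..., 1] = 0.5                     # constant green channel
rgba[..., 3] = np.linspace(1, 0, 256)  # alpha: opaque left -> clear right

top = bands[-1]            # the "Odds" band (a PolyCollection)
top.set_facecolor("none")  # drop the solid fill so the gradient shows
img = ax.imshow(rgba, extent=[min(x), max(x), 0, 22],  # 22 = stack maximum
                aspect="auto", zorder=2)
# restrict the strip to the band's polygon
img.set_clip_path(top.get_paths()[0], transform=ax.transData)
fig.savefig("gradient_stackplot.png")
```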
|
<python><matplotlib>
|
2024-11-14 13:48:05
| 1
| 1,023
|
Stücke
|
79,188,746
| 17,487,457
|
Presenting complex table data in chart for a single slide
|
<p>Tables allow one to summarise complex information. I have a table similar to the following one (this one was produced for this question) in my LaTeX document:</p>
<pre class="lang-latex prettyprint-override"><code>\documentclass{article}
\usepackage{graphicx} % Required for inserting images
\usepackage{tabularx}
\usepackage{booktabs}
\usepackage{makecell}
\begin{document}
\begin{table}[bt]
\caption{Classification results.}
\label{tab:baseline-clsf-reprt}
\setlength{\tabcolsep}{1pt} % Adjust column spacing
\renewcommand{\arraystretch}{1.2} % Adjust row height
\begin{tabular}{lcccccccccccc}
\toprule
& \multicolumn{3}{c}{Data1} &
\multicolumn{3}{c}{\makecell{Data2 \\ (original)}} &
\multicolumn{3}{c}{\makecell{Data2 \\ (experiment 3)}} &
\multicolumn{3}{c}{\makecell{Data2 \\ (experiment 4)}} \\
\cmidrule(r{1ex}){2-4}
\cmidrule(r{1ex}){5-7}
\cmidrule(r{1ex}){8-10}
\cmidrule(r{1ex}){11-13}
& Precision & Recall & F1 & Precision & Recall & F1 & Precision & Recall & F1 & Precision & Recall & F1 \\
\midrule
Apple & 0.61 & 0.91 & 0.71 & 0.61 & 0.72 & 0.91 & 0.83 & 0.62 & 0.71 & 0.62 & 0.54 & 0.87 \\
Banana & 0.90 & 0.32 & 0.36 & 0.86 & 0.81 & 0.53 & 0.61 & 0.69 & 0.68 & 0.72 & 0.56 & 0.57 \\
Orange & 0.23 & 0.35 & 0.18 & 0.56 & 0.56 & 0.56 & 0.54 & 0.55 & 0.55 & 0.55 & 0.57 & 0.63 \\
Grapes & 0.81 & 0.70 & 0.76 & 0.67 & 0.47 & 0.54 & 0.85 & 0.28 & 0.42 & 0.38 & 0.66 & 0.48 \\
Mango & 0.31 & 0.23 & 0.45 & 0.87 & 0.54 & 0.73 & 0.63 & 0.57 & 0.63 & 0.75 & 0.29 & 0.34 \\
\bottomrule
\end{tabular}
\end{table}
\end{document}
</code></pre>
<p>Which gives:
<a href="https://i.sstatic.net/OlAcBFw1.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/OlAcBFw1.png" alt="enter image description here" /></a></p>
<p>Now I am preparing a slide deck, and I need to present the classification results in just one slide: the results of each dataset for each fruit and metric.</p>
<p>My attempts didn't result in a meaningful chart (one showing all the information in the table).</p>
<p>First attempt:</p>
<pre class="lang-py prettyprint-override"><code>import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import seaborn as sns
datasets = ['Data1', 'Data2-Orig', 'Data2-Exp3', 'Data2-Exp4']
fruits = ['Apple', 'Banana', 'Orange', 'Grapes', 'Mango']
metrics = ['Precision', 'Recall', 'F1']
colors = ['#1f77b4', '#ff7f0e', '#2ca02c'] # Colors for Precision, Recall, F1
data = {
'Fruit': ['Apple', 'Banana', 'Orange', 'Grapes', 'Mango'],
'Data1_Precision': [0.61, 0.90, 0.23, 0.81, 0.31],
'Data1_Recall': [0.91, 0.32, 0.35, 0.70, 0.23],
'Data1_F1': [0.71, 0.36, 0.18, 0.76, 0.45],
'Data2-Orig_Precision': [0.61, 0.86, 0.56, 0.67, 0.87],
'Data2-Orig_Recall': [0.72, 0.81, 0.56, 0.47, 0.54],
'Data2-Orig_F1': [0.91, 0.53, 0.56, 0.54, 0.73],
'Data2-Exp3_Precision': [0.83, 0.61, 0.54, 0.85, 0.63],
'Data2-Exp3_Recall': [0.62, 0.69, 0.55, 0.28, 0.57],
'Data2-Exp3_F1': [0.71, 0.68, 0.55, 0.42, 0.63],
'Data2-Exp4_Precision': [0.62, 0.72, 0.55, 0.38, 0.75],
'Data2-Exp4_Recall': [0.54, 0.56, 0.57, 0.66, 0.29],
'Data2-Exp4_F1': [0.87, 0.57, 0.63, 0.48, 0.34]
}
df = pd.DataFrame(data)
# Reshape data for Seaborn
df_melted = df.melt(id_vars='Fruit',
var_name='Metric',
value_name='Score')
# Split the 'Metric' column into separate columns for easier grouping
df_melted[['Dataset', 'Measure']] = df_melted['Metric'].str.split('_', expand=True)
df_melted.drop(columns='Metric', inplace=True)
plt.figure(figsize=(12, 8))
sns.set_style("whitegrid")
# Create grouped bar plot
sns.barplot(
data=df_melted,
x='Fruit',
y='Score',
hue='Dataset',
ci=None
)
# Customize plot
plt.title('Classification Results by Fruit and Dataset')
plt.xlabel('Fruit type')
plt.ylabel('Score')
plt.legend(title='Dataset', bbox_to_anchor=(1.05, 1), loc='upper left')
# Show plot
plt.tight_layout()
</code></pre>
<p>Gives:
<a href="https://i.sstatic.net/MB9uutHp.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/MB9uutHp.png" alt="enter image description here" /></a></p>
<p>Second attempt:</p>
<pre class="lang-py prettyprint-override"><code>fig, ax = plt.subplots(figsize=(14, 8))
# Set the width of each bar and spacing between groups
bar_width = 0.2
group_spacing = 0.25
x = np.arange(len(fruits))
# Plot bars for each dataset and metric combination
for i, dataset in enumerate(datasets):
for j, metric in enumerate(metrics):
# Calculate the position for each bar within each group
positions = x + i * (len(metrics) * bar_width + group_spacing) + j * bar_width
# Plot each metric bar
ax.bar(positions,
df[f'{dataset}_{metric}'],
width=bar_width,
label=f'{metric}' if i == 0 else "",
color=colors[j])
# Customize x-axis and labels
ax.set_xticks(x + (len(datasets) * len(metrics) * bar_width + (len(datasets) - 1) * group_spacing) / 2 - bar_width / 2)
ax.set_xticklabels(fruits)
ax.set_xlabel('Fruit type')
ax.set_ylabel('Score ')
ax.set_title('Classification Results by Dataset, Fruit, and Metric')
# Create custom legend for metrics
metric_legend = [plt.Line2D([0], [0], color=colors[i], lw=4) for i in range(len(metrics))]
ax.legend(metric_legend, metrics, title="Metrics", loc="upper left", bbox_to_anchor=(1.05, 1))
plt.tight_layout()
plt.show()
</code></pre>
<p>This gives:
<a href="https://i.sstatic.net/Ix8zoFbW.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Ix8zoFbW.png" alt="enter image description here" /></a></p>
<p>None of these plots presents the results in a way an audience can easily follow during a presentation. Simply adding the original table doesn't work either: people cannot follow a table that dense while I talk.</p>
<p>How would you recommend plotting the results in this table for adding to a slide?</p>
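<p>One arrangement that keeps every cell of the table visible is a faceted bar chart: one panel per dataset, fruits on the x-axis, one bar per metric. A sketch with <code>sns.catplot</code> on the melted frame; only the Data1 columns are reproduced here for brevity, and the reshaping mirrors the first attempt above:</p>

```python
import matplotlib
matplotlib.use("Agg")  # headless backend for this sketch
import pandas as pd
import seaborn as sns

# only the Data1 columns are reproduced here; the full frame reshapes the same way
data = {
    "Fruit": ["Apple", "Banana", "Orange", "Grapes", "Mango"],
    "Data1_Precision": [0.61, 0.90, 0.23, 0.81, 0.31],
    "Data1_Recall": [0.91, 0.32, 0.35, 0.70, 0.23],
    "Data1_F1": [0.71, 0.36, 0.18, 0.76, 0.45],
}
df = pd.DataFrame(data)

# long format: one row per (fruit, dataset, measure)
melted = df.melt(id_vars="Fruit", var_name="Metric", value_name="Score")
melted[["Dataset", "Measure"]] = melted["Metric"].str.split("_", expand=True)

# one facet per dataset, fruits on x, one bar per metric:
# every table cell stays visible
g = sns.catplot(data=melted, kind="bar", x="Fruit", y="Score",
                hue="Measure", col="Dataset", height=3, aspect=1.1)
```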
|
<python><pandas><matplotlib><latex><visualization>
|
2024-11-14 12:30:26
| 2
| 305
|
Amina Umar
|