| QuestionId (int64, 74.8M to 79.8M) | UserId (int64, 56 to 29.4M) | QuestionTitle (string, 15 to 150 chars) | QuestionBody (string, 40 to 40.3k chars) | Tags (string, 8 to 101 chars) | CreationDate (2022-12-10 09:42:47 to 2025-11-01 19:08:18) | AnswerCount (int64, 0 to 44) | UserExpertiseLevel (int64, 301 to 888k) | UserDisplayName (string, 3 to 30 chars) |
|---|---|---|---|---|---|---|---|---|
78,893,691
| 3,498,864
|
Capture integer in string and use it as part of regular expression
|
<p>I've got a string:</p>
<pre><code>s = ".,-2gg,,,-2gg,-2gg,,,-2gg,,,,,,,,t,-2gg,,,,,,-2gg,t,,-1gtt,,,,,,,,,-1gt,-3ggg"
</code></pre>
<p>and a regular expression I'm using</p>
<pre><code>import re
delre = re.compile('-[0-9]+[ACGTNacgtn]+') #this is almost correct
print (delre.findall(s))
</code></pre>
<p>This returns:</p>
<pre><code>['-2gg', '-2gg', '-2gg', '-2gg', '-2gg', '-2gg', '-1gtt', '-1gt', '-3ggg']
</code></pre>
<p>But <code>-1gtt</code> and <code>-1gt</code> are not desired matches. The integer in this case defines how many subsequent characters to match, so the desired output for those two matches would be <code>-1g</code> and <code>-1g</code>, respectively.</p>
<p>Is there a way to grab the integer after the dash and dynamically define the regex so that it matches that many and only that many subsequent characters?</p>
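<p>One possible approach (a sketch, not from the question): use the regex only to anchor the dash, the count, and the run of bases, then slice the run down to the captured count in Python:</p>

```python
import re

def find_deletions(s):
    # Capture the count, then keep only that many of the following bases.
    results = []
    for m in re.finditer(r'-(\d+)([ACGTNacgtn]+)', s):
        n = int(m.group(1))
        results.append('-' + m.group(1) + m.group(2)[:n])
    return results

s = ".,-2gg,,,-2gg,-2gg,,,-2gg,,,,,,,,t,-2gg,,,,,,-2gg,t,,-1gtt,,,,,,,,,-1gt,-3ggg"
print(find_deletions(s))
# ['-2gg', '-2gg', '-2gg', '-2gg', '-2gg', '-2gg', '-1g', '-1g', '-3ggg']
```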
|
<python><regex><dna-sequence>
|
2024-08-20 17:18:15
| 2
| 3,719
|
Ryan
|
78,893,523
| 8,079,870
|
Why does mypy not infer types with torch.nn.functional.linear?
|
<p>I am trying to use mypy for some PyTorch code, and am perplexed why mypy gives an error here:</p>
<pre><code>from torch import FloatTensor
from torch.nn.functional import linear
def foo(x: FloatTensor, y: FloatTensor) -> FloatTensor:
return linear(x, y)
</code></pre>
<p>mypy outputs:</p>
<pre><code>error: Incompatible return value type (got "Tensor", expected "FloatTensor") [return-value]
</code></pre>
<ol>
<li><p>Why does <code>torch.nn.functional.linear</code> give a <code>Tensor</code> here? It seems like the type-checker should be able to infer that the output is a <code>FloatTensor</code>.</p>
</li>
<li><p>What is the best way to address the mypy error?</p>
</li>
</ol>
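<p>A minimal sketch of one common workaround: <code>typing.cast</code>, which narrows the type for the checker and is a no-op at runtime. The stub classes below stand in for the torch types so the example is self-contained (they are not the real PyTorch classes):</p>

```python
from typing import cast

class Tensor: ...               # stand-in for torch.Tensor
class FloatTensor(Tensor): ...  # stand-in for torch.FloatTensor

def linear_stub(x: FloatTensor, y: FloatTensor) -> Tensor:
    # Annotated to return the base class, like torch.nn.functional.linear.
    return FloatTensor()

def foo(x: FloatTensor, y: FloatTensor) -> FloatTensor:
    # cast() narrows the static type for mypy; it does nothing at runtime.
    return cast(FloatTensor, linear_stub(x, y))
```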
|
<python><pytorch><python-typing><mypy>
|
2024-08-20 16:27:17
| 0
| 1,612
|
Grayscale
|
78,893,444
| 567,059
|
Translate a python string such as 'a.b.c=v' into a dictionary
|
<p>How can I read 'key=value' strings such as <code>['foo=bar', 'a.b.c=ddd']</code> and translate them into a dictionary? The dictionary should either be updated if the key already exists, or have the new key added if it doesn't.</p>
<p>The example given in the paragraph above should translate to the following...</p>
<pre class="lang-py prettyprint-override"><code>{'foo': 'bar', 'a': {'b': {'c': 'ddd'}}}
</code></pre>
<p>But it instead translates to this incorrect dictionary...</p>
<pre class="lang-py prettyprint-override"><code>{'foo': 'bar', 'a': {}, 'b': {}, 'c': 'ddd'}
</code></pre>
<p>How can I fix the code in the example under the comment 'Set the master config value for the current setting'?</p>
<h4>Example Code</h4>
<pre class="lang-py prettyprint-override"><code>### For context - in reality I use an argument parser with two args.
### --config-file - Path to a YAML configuration file. If present,
### this file will be read into Python as a dictionary called
### `master_config`. If not, an empty dictionary of the same name
### will be created.
### --configs - Single 'key=value' settings that will be read into
### `master_config` and either override settings defined in the
### YAML configuration file, or set a value.
master_config = {'foo': 'baz'}
configs = ['foo=bar', 'a.b.c=ddd']
for config in configs:
# Split the setting into key and value.
k, v = config.split('=', 1)
# Split the key, if it's nested.
ks = k.split('.')
# Set the master config value for the current setting.
l = len(ks)
for i, x in enumerate(ks):
if i != l-1:
if not master_config.get(x):
master_config[x] = {}
else:
master_config[x] = v
print(master_config)
</code></pre>
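<p>A sketch of one way to fix the nested assignment (the helper name is made up): keep a cursor that descends into the nested dicts, instead of assigning every level onto the top-level dict:</p>

```python
def set_nested(d, dotted_key, value):
    # Descend with a cursor, creating intermediate dicts as needed,
    # instead of assigning every level onto the top-level dict.
    *parents, leaf = dotted_key.split('.')
    cursor = d
    for k in parents:
        cursor = cursor.setdefault(k, {})
    cursor[leaf] = value

master_config = {'foo': 'baz'}
for config in ['foo=bar', 'a.b.c=ddd']:
    k, v = config.split('=', 1)
    set_nested(master_config, k, v)
print(master_config)  # {'foo': 'bar', 'a': {'b': {'c': 'ddd'}}}
```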
|
<python><dictionary>
|
2024-08-20 16:08:38
| 2
| 12,277
|
David Gard
|
78,893,412
| 10,128,864
|
Python .txt to .xlsx Conversion Chunking and Indexing Issue
|
<p>I have a chunking and/or indexing issue with my Python code, where I am trying to convert a text file into an xlsx file. The problem is that xlsx files have a hard limit on the number of rows you can have:</p>
<pre><code>Traceback (most recent call last):
File "/Users/rbarrett/Git/Cleanup/yourPeople3/convert_txt_to_xls.py", line 46, in <module>
df_chunk.to_excel(writer, sheet_name=f'Sheet{sheet_number}', index=False)
File "/Users/rbarrett/Library/Python/3.11/lib/python/site-packages/pandas/util/_decorators.py", line 333, in wrapper
return func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "/Users/rbarrett/Library/Python/3.11/lib/python/site-packages/pandas/core/generic.py", line 2417, in to_excel
formatter.write(
File "/Users/rbarrett/Library/Python/3.11/lib/python/site-packages/pandas/io/formats/excel.py", line 952, in write
writer._write_cells(
File "/Users/rbarrett/Library/Python/3.11/lib/python/site-packages/pandas/io/excel/_openpyxl.py", line 487, in _write_cells
xcell = wks.cell(
^^^^^^^^^
File "/Users/rbarrett/Library/Python/3.11/lib/python/site-packages/openpyxl/worksheet/worksheet.py", line 244, in cell
cell = self._get_cell(row, column)
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/rbarrett/Library/Python/3.11/lib/python/site-packages/openpyxl/worksheet/worksheet.py", line 257, in _get_cell
raise ValueError(f"Row numbers must be between 1 and 1048576. Row number supplied was {row}")
ValueError: Row numbers must be between 1 and 1048576. Row number supplied was 1048577
</code></pre>
<p>As we can see the <code>ValueError: Row numbers must be between 1 and 1048576. Row number supplied was 1048577</code> is my error and it looks like something is wrong with my slicing in the following script:</p>
<ul>
<li><code>convert_txt_to_xls.py</code></li>
</ul>
<pre class="lang-py prettyprint-override"><code>#!/usr/bin/env python3
import pandas as pd
import argparse
# Set up argument parsing
parser = argparse.ArgumentParser(description="Convert a TXT file to CSV or XLSX format.")
parser.add_argument("input_txt_file", help="Path to the input TXT file")
parser.add_argument("output_file", help="Path to the output file (either .csv or .xlsx)")
parser.add_argument("--type", choices=['csv', 'xlsx'], required=True, help="Output file type: 'csv' or 'xlsx'")
parser.add_argument("--multiple-sheets", action='store_true', help="Split data across multiple sheets if type is 'xlsx'")
parser.add_argument("--delimiter", default=' ', help="Delimiter used in the input TXT file")
# Parse the arguments
args = parser.parse_args()
# Read the .txt file into a pandas DataFrame
df = pd.read_csv(args.input_txt_file, delimiter=args.delimiter, engine='python')
# Print DataFrame shape for inspection
print(f"DataFrame shape: {df.shape}")
print(df.head())
# Handle output based on the specified type
if args.type == 'csv':
# Write the DataFrame to a CSV file
df.to_csv(args.output_file, index=False)
print(f"Conversion complete: {args.output_file}")
elif args.type == 'xlsx':
if args.multiple_sheets:
# Define Excel's maximum row limit
max_rows = 1048576
# Create a Pandas Excel writer using openpyxl
with pd.ExcelWriter(args.output_file, engine='openpyxl') as writer:
sheet_number = 1
for i in range(0, len(df), max_rows):
# Extract the chunk of data to be written
df_chunk = df.iloc[i:i + max_rows].copy()
# Reset the index to ensure each sheet starts at row 1
df_chunk.reset_index(drop=True, inplace=True)
# Write the chunk to the corresponding sheet
df_chunk.to_excel(writer, sheet_name=f'Sheet{sheet_number}', index=False)
sheet_number += 1
print(f"Conversion complete with multiple sheets: {args.output_file}")
else:
# Write the entire DataFrame to a single Excel sheet
if len(df) > max_rows:
raise ValueError("Data exceeds Excel's row limit. Use --multiple-sheets to split data across sheets.")
df.to_excel(args.output_file, index=False, engine='openpyxl')
print(f"Conversion complete: {args.output_file}")
</code></pre>
<p>Say, for example, I have a text file with a lot of rows that looks like this:</p>
<pre><code>46eb61ab1c0i90e909090w.................2 blob 88924339 logs/swf.log.1
5fb..........................c53da3f0cf1 blob 79474600 logs/swf.log.1
0f373270ad....................e3441da6bd blob 75058654 logs/swf.log.1
7f2..................5e510548fe2f35f9358 blob 74196729 hub/growth/growth/files/NewHireOnboarding.pptx
d7........................1e7e1cb8c0631f blob 70885244 logs/sqllog
</code></pre>
<p>But this file has a lot of rows, say <code>4730559</code> lines. I am trying to chunk the data and create a new sheet each time I reach the limit, so that the output paginates across sheets. What's wrong with that section of the Python script?</p>
<p>If you want to run the script:</p>
<pre><code>python3 convert_txt_to_xls.py blobs_with_sizes.txt blobs_with_sizes.xlsx --type=xlsx --multiple-sheets --delimiter=' '
</code></pre>
<p>I am using <code>' '</code> (a space) as the delimiter between the columns.</p>
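<p>A likely cause (an assumption, not confirmed in the question): <code>to_excel</code> writes a header row on every sheet, so a chunk of exactly 1,048,576 data rows plus the header overflows the limit by one. Sizing each chunk to the limit minus the header avoids this; a small sketch of the chunk arithmetic:</p>

```python
EXCEL_MAX_ROWS = 1_048_576  # hard per-sheet limit, header row included

def chunk_bounds(n_rows, header_rows=1):
    # Each sheet holds the limit minus the header row(s) worth of data.
    per_sheet = EXCEL_MAX_ROWS - header_rows
    return [(i, min(i + per_sheet, n_rows)) for i in range(0, n_rows, per_sheet)]

bounds = chunk_bounds(4_730_559)
print(len(bounds))  # 5 sheets, none overflowing once the header is added
```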
|
<python><pandas><xlsx><txt><file-conversion>
|
2024-08-20 15:59:58
| 1
| 869
|
R. Barrett
|
78,893,397
| 20,591,261
|
Use PCA with Python Polars
|
<p>I'm trying to use PCA (from the scikit-learn library) with Polars. I'm following this <a href="https://www.kaggle.com/code/farzadnekouei/customer-segmentation-recommendation-system#Step-9-%7C-K-Means-Clustering" rel="nofollow noreferrer">Kaggle notebook</a>, which uses Pandas, where they set an index for clustering. However, Polars doesn't have the concept of an index or multi-index.</p>
<p>Here's my current approach:</p>
<pre><code>pca = PCA().fit(
customer_data_scaled.select(pl.all().exclude("CustomerID"))
)
</code></pre>
<p>But with this approach I'm losing the CustomerID.</p>
<p>Also tried</p>
<pre><code>pca = PCA().fit(
X = customer_data_scaled,
y = "CustomerID"
)
</code></pre>
<p>But that way, all the info gets stuck in the first cluster.</p>
<p>Example Data:</p>
<pre><code>import polars as pl
data = {'CustomerID__CustomerID': ['12346', '12347', '12348', '12349', '12350'],
'Days_Since_Last_Purchase__Days_Since_Last_Purchase': [2.293181358195162,
-0.9093550023713796,
-0.18498889715372174,
-0.7483457675933288,
2.1421969281293722],
'Total_Transactions__Total_Transactions': [-0.4764575258322774,
0.7148300598787217,
5.750845212223408e-05,
-0.7147150429744773,
-0.7147150429744773],
'Total_Spend__Total_Spend': [-0.8293123590792867,
2.451307855177288,
0.26466411406493745,
0.2801233688550584,
-0.605225449007581],
'unique_products_purchased__unique_products_purchased': [-0.8802018492520798,
0.7976258516484461,
-0.5512160255460943,
0.2876978249041687,
-0.6334624814725907],
'Average_Days_Between_Purchases__Average_Days_Between_Purchases': [-0.3221805403412395,
-0.1222333928444573,
0.7621482210836178,
-0.3221805403412395,
-0.3221805403412395],
'Day_Of_Week__Day_Of_Week': [2, 2, 4, 1, 3],
'Hour__Hour': [-1.1081405784850002,
0.6220370799522927,
2.7847591529989093,
-1.5406849930943236,
1.4871259091709395],
'Average_Transaction_Value__Average_Transaction_Value': [-1.282393028529395,
1.5081594527658793,
0.34607887126665193,
5.323543943326011,
0.05189247073759856],
'Is_UK__Is_UK': [1, 0, 0, 0, 0],
'Cancellation_Frequency__Cancellation_Frequency': [0.3814774558208268,
-0.5289264033183929,
-0.5289264033183929,
-0.5289264033183929,
-0.5289264033183929],
'Monthly_Spending_Mean__Monthly_Spending_Mean': [-1.3190523375854926,
1.0052503065380078,
0.03733195117007402,
4.183154438138661,
-0.20770132922803144],
'Monthly_Spending_Std__Monthly_Spending_Std': [-0.7096520287471256,
1.2975650939605592,
0.49016569022164963,
-0.7096520287471256,
-0.7096520287471256],
'Spending_Trend__Spending_Trend': [0.08062714678599057,
0.1080562113377501,
-0.5362051086492967,
0.08062714678599057,
0.08062714678599057]}
customer_data_scaled = pl.DataFrame(data)
</code></pre>
<p>Full code with desired output:</p>
<pre><code>import seaborn as sns
import matplotlib.pyplot as plt
import numpy as np
from sklearn.decomposition import PCA
# Apply PCA
pca = PCA().fit(X = customer_data_scaled.select(pl.all().exclude("CustomerID__CustomerID")),
y = customer_data_scaled.select(pl.col("CustomerID__CustomerID"))
)
# Calculate the Cumulative Sum of the Explained Variance
explained_variance_ratio = pca.explained_variance_ratio_
cumulative_explained_variance = np.cumsum(explained_variance_ratio)
# Set the optimal k value (based on our analysis, we can choose 6)
optimal_k = 6
# Set seaborn plot style
sns.set(rc={'axes.facecolor': '#fcf0dc'}, style='darkgrid')
# Plot the cumulative explained variance against the number of components
plt.figure(figsize=(20, 10))
# Bar chart for the explained variance of each component
barplot = sns.barplot(x=list(range(1, len(cumulative_explained_variance) + 1)),
y=explained_variance_ratio,
color='#fcc36d',
alpha=0.8)
# Line plot for the cumulative explained variance
lineplot, = plt.plot(range(0, len(cumulative_explained_variance)), cumulative_explained_variance,
marker='o', linestyle='--', color='#ff6200', linewidth=2)
# Plot optimal k value line
optimal_k_line = plt.axvline(optimal_k - 1, color='red', linestyle='--', label=f'Optimal k value = {optimal_k}')
# Set labels and title
plt.xlabel('Number of Components', fontsize=14)
plt.ylabel('Explained Variance', fontsize=14)
plt.title('Cumulative Variance vs. Number of Components', fontsize=18)
# Customize ticks and legend
plt.xticks(range(0, len(cumulative_explained_variance)))
plt.legend(handles=[barplot.patches[0], lineplot, optimal_k_line],
labels=['Explained Variance of Each Component', 'Cumulative Explained Variance', f'Optimal k value = {optimal_k}'],
loc=(0.62, 0.1),
frameon=True,
framealpha=1.0,
edgecolor='#ff6200')
# Display the variance values for both graphs on the plots
x_offset = -0.3
y_offset = 0.01
for i, (ev_ratio, cum_ev_ratio) in enumerate(zip(explained_variance_ratio, cumulative_explained_variance)):
plt.text(i, ev_ratio, f"{ev_ratio:.2f}", ha="center", va="bottom", fontsize=10)
if i > 0:
plt.text(i + x_offset, cum_ev_ratio + y_offset, f"{cum_ev_ratio:.2f}", ha="center", va="bottom", fontsize=10)
plt.grid(axis='both')
plt.show()
</code></pre>
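<p>Note that <code>PCA.fit</code> ignores its <code>y</code> argument entirely, so passing the ID column there has no effect. One common pattern (a sketch with made-up toy numbers, using a minimal SVD-based PCA so it runs without scikit-learn): keep the IDs aside and pair them back to the transformed rows by position:</p>

```python
import numpy as np

def pca_scores(X, n_components):
    # Minimal PCA via SVD: center, then project onto the top principal axes.
    Xc = X - X.mean(axis=0)
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:n_components].T

# Toy stand-ins for the scaled features; CustomerID is kept aside, not fitted.
ids = ['12346', '12347', '12348', '12349', '12350']
X = np.array([[ 2.29, -0.48, -0.83],
              [-0.91,  0.71,  2.45],
              [-0.18,  0.00,  0.26],
              [-0.75, -0.71,  0.28],
              [ 2.14, -0.71, -0.61]])
scores = pca_scores(X, 2)
# Row i of `scores` still belongs to ids[i]; no DataFrame index is needed.
```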
|
<python><scikit-learn><python-polars>
|
2024-08-20 15:56:30
| 0
| 1,195
|
Simon
|
78,893,299
| 2,287,458
|
Polars Dataframe full-join (outer) on multiple columns without suffix
|
<p>I have this code:</p>
<pre class="lang-py prettyprint-override"><code>import polars as pl
df1 = pl.DataFrame({
'type': ['A', 'O', 'B', 'O'],
'origin': ['EU', 'US', 'US', 'EU'],
'qty1': [343,11,22,-5]
})
df2 = pl.DataFrame({
'type': ['A', 'O', 'B', 'S'],
'origin': ['EU', 'US', 'US', 'AS'],
'qty2': [-200,-12,-25,8]
})
df1.join(df2, on=['type', 'origin'], how='full')
</code></pre>
<p>which gives</p>
<pre><code>ββββββββ¬βββββββββ¬βββββββ¬βββββββββββββ¬βββββββββββββββ¬βββββββ
β type β origin β qty1 β type_right β origin_right β qty2 β
β --- β --- β --- β --- β --- β --- β
β str β str β i64 β str β str β i64 β
ββββββββͺβββββββββͺβββββββͺβββββββββββββͺβββββββββββββββͺβββββββ‘
β A β EU β 343 β A β EU β -200 β
β O β US β 11 β O β US β -12 β
β B β US β 22 β B β US β -25 β
β null β null β null β S β AS β 8 β
β O β EU β -5 β null β null β null β
ββββββββ΄βββββββββ΄βββββββ΄βββββββββββββ΄βββββββββββββββ΄βββββββ
</code></pre>
<p>But the output I am after is this:</p>
<pre><code>ββββββββ¬βββββββββ¬βββββββ¬βββββββ
β type β origin β qty1 β qty2 β
β --- β --- β --- β --- β
β str β str β i64 β i64 β
ββββββββͺβββββββββͺβββββββͺβββββββ‘
β A β EU β 343 β -200 β
β O β US β 11 β -12 β
β B β US β 22 β -25 β
β S β AS β null β 8 β
β O β EU β -5 β null β
ββββββββ΄βββββββββ΄βββββββ΄βββββββ
</code></pre>
<p>I tried <code>suffix=''</code> via <code>df1.join(df2, on=['type', 'origin'], how='full', suffix='')</code>, but this raises an error:</p>
<pre class="lang-py prettyprint-override"><code>DuplicateError: unable to hstack, column with name "type" already exists
</code></pre>
<p>How can I achieve this?</p>
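<p>Recent Polars versions expose a <code>coalesce=True</code> option on <code>join</code> that merges each key column with its <code>_right</code> twin (check your version's docs). The sketch below emulates that coalescing full join in plain Python, only to show the intended semantics:</p>

```python
def coalesce_full_join(left, right, keys):
    # Emulates df1.join(df2, on=keys, how='full') followed by coalescing
    # each key column with its `_right` twin.
    right_ix = {tuple(r[k] for k in keys): r for r in right}
    out, matched = [], set()
    for row in left:
        key = tuple(row[k] for k in keys)
        matched.add(key)
        extra = {k: v for k, v in right_ix.get(key, {}).items() if k not in keys}
        out.append({**row, **extra})
    for key, row in right_ix.items():
        if key not in matched:
            out.append(dict(row))  # right-only rows keep their own key values
    return out

df1 = [{'type': 'A', 'origin': 'EU', 'qty1': 343},
       {'type': 'O', 'origin': 'US', 'qty1': 11},
       {'type': 'B', 'origin': 'US', 'qty1': 22},
       {'type': 'O', 'origin': 'EU', 'qty1': -5}]
df2 = [{'type': 'A', 'origin': 'EU', 'qty2': -200},
       {'type': 'O', 'origin': 'US', 'qty2': -12},
       {'type': 'B', 'origin': 'US', 'qty2': -25},
       {'type': 'S', 'origin': 'AS', 'qty2': 8}]
joined = coalesce_full_join(df1, df2, ['type', 'origin'])
```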
|
<python><dataframe><join><merge><python-polars>
|
2024-08-20 15:33:20
| 1
| 3,591
|
Phil-ZXX
|
78,892,982
| 2,664,134
|
plotly.py Box Select extremely slow on first call on second and higher traces
|
<p>Running plotly 5.23.0 from Jupyter lab 4.2.4 on Mac.</p>
<p>Drawing 3 traces of 111_000 points each, with "lines" only.</p>
<p>When drawing a Box Select rectangle over the first trace (trace 0), the kernel is busy for less than 1 second and then is available again. When drawing Box Select again over the first trace a few more times, it is always "fast" (less than 1 second).</p>
<p>But, when drawing a Box Select rectangle <strong>over a part of the second or third trace (trace 1, trace 2), the kernel is busy for 15 seconds</strong>. This extremely slow reaction is <strong>only present the first time that Box Select is drawn</strong>. From the second and consecutive times a Box Select rectangle is drawn over the same trace (second trace or third trace), it is also "fast".</p>
<p>This delay of 15 seconds on first usage of a Box Select is a major slow down for our users.</p>
<p>This is "lines" mode only (no "markers"), so no points are selected, to avoid any consequences of actual callbacks to on_select. The behaviour is slower when "lines+markers" is used as the mode, but I wanted to exclude any impact of the code that is referenced <a href="https://github.com/plotly/plotly.js/blob/master/src/traces/scattergl/select.js#L25" rel="nofollow noreferrer">here</a> (plotly.js src/traces/scattergl/select.js line 25). Even when 0 points are selected with the Box Select, the effect is there the first time. The effect was also found when using lines only, and even for hidden plots.</p>
<p>Also, when I invert the order of the data in the 3 traces, it is still the first trace (trace 0) that is fast on the first Box Select drawn over it (even though it is then the "top" trace in the drawing), while trace 1 and trace 2 are slow.</p>
<p>Also, when I <em>click</em> a point, this triggers the on_click callback, that is fast and it does not remove the problem reported here. The first time a Box Select is used on a trace other than trace 0, even after a few clicks on the different traces, that first time is still slow.</p>
<p>I installed a fresh project with poetry with this set of dependencies:</p>
<pre><code>[tool.poetry.dependencies]
python = "^3.12"
jupyterlab = "^4.2.4"
pandas = "^2.2.2"
matplotlib = "^3.9.2"
ipympl = "^0.9.4"
plotly = "^5.23.0"
[tool.poetry.group.dev.dependencies]
pytest = "^7.4.4"
pytest-mock = "^3.14.0"
</code></pre>
<p>On the plotly release page, version v5.23.0 is now (20 Aug 2024), marked as the latest <a href="https://github.com/plotly/plotly.py/releases/tag/v5.23.0" rel="nofollow noreferrer">https://github.com/plotly/plotly.py/releases/tag/v5.23.0</a>.</p>
<pre><code>import plotly.graph_objects as go
from ipywidgets import Layout
import plotly.express as px
import numpy as np
from datetime import datetime
colors = px.colors.qualitative.Vivid
color = colors[1]
LEN = 111_000
WIDTH = 2000
FACTOR = 1.2
N_PLOTS = 3
fig = go.FigureWidget()
x = np.array(range(LEN))
y_data={}
for trace_idx in range(N_PLOTS):
y_data[trace_idx] = np.array([e % WIDTH for e in x]) + (WIDTH * FACTOR * trace_idx)
# alternative order can be obtained with
# y_data[(N_PLOTS - 1) - trace_idx] = np.array([e % WIDTH for e in x]) + (WIDTH * FACTOR * trace_idx)
# In[ ]:
fig.data = []
for trace_idx in range(N_PLOTS):
fig.add_scattergl(
x=x,
y=y_data[trace_idx],
mode="lines+markers" if trace_idx<0 else "lines",
marker=dict(color=colors[trace_idx], size=4),
line=dict(color=color, width=1),
xaxis="x",
yaxis="y1",
name=f"trace {trace_idx}",
hovertemplate="",
visible=True,
)
def update_on_select(trace, points, selector):
print(points.point_inds[0:3], datetime.now(), " on_select")
fig.layout.title = str(points.point_inds[0:3]) + " " + str(len(points.point_inds)) + " select"
def update_on_click(trace, points, selector):
print(points.point_inds[0:3], datetime.now(), "on_click")
fig.layout.title = str(points.point_inds[0:3]) + " " + str(len(points.point_inds)) + " click"
fig.data[0].on_selection(update_on_select)
fig.data[0].on_click(update_on_click)
# In[ ]:
fig.update_layout(
dragmode="select",
yaxis_range=[-200, 100 + WIDTH * N_PLOTS * FACTOR],
)
display(fig)
</code></pre>
<p>Then I:</p>
<ul>
<li>clicked a few times on a point => FAST on_click callback</li>
<li>drew a rectangular Box Select area on "trace 2" => SLOW (datetime.now() is 15:51:55.77, but it is printed to the log at 15:52:11, i.e. 15 seconds later)</li>
<li>drew a rectangular Box Select area on "trace 2" AGAIN => FAST (datetime.now() is 15:53:05.19, printed to the log at 15:53:05)</li>
</ul>
<p>The logs printed at the bottom of the Jupyter Lab screen were:</p>
<pre><code>15:51:30 [] 2024-08-20 15:51:28.363443 on_click # click on top trace 2
15:51:35 [9062] 2024-08-20 15:51:35.256576 on_click # click on bottom trace 0
15:51:40 [] 2024-08-20 15:51:40.554232 on_click # click on top trace 2
15:51:42 [9173] 2024-08-20 15:51:42.023582 on_click # click on bottom trace 0
15:52:11 [] 2024-08-20 15:51:55.771423 on_select # drawing rectangular Box Select on top trace 2
15:53:05 [] 2024-08-20 15:53:05.192348 on_select # drawing again rectangular Box Select on top trace 2
</code></pre>
<p>For comparison (not shown here), all rectangular Box Select drawing on trace 0 are fast, also from the first one.</p>
<p>Also, on_click, zoom, etc. on this graph all occur within 1 second. Only that <em>first</em> Box Select on a trace other than trace 0 is very slow. Also in our production notebook, with many more traces, everything is fast, except the first time a Box Select is used.</p>
|
<python><plotly><jupyter-lab>
|
2024-08-20 14:26:03
| 0
| 392
|
peter_v
|
78,892,697
| 10,053,485
|
Preserving an immutable variable required at a higher scope while an error is raised
|
<p>I have set up a worker application in Python. As part of this worker, I always require a <code>new_process_time</code> to be defined in the return, such that the external queue can handle it properly.</p>
<p>Sometimes a severe error type may occur, which should abort the handling of this iteration entirely. This is undesired, but expected. In this case a default time is picked as the return.</p>
<p>My issue occurs when the abort-worthy error occurs, after the next <code>new_process_time</code> has already been found. Ideally, I'd want to both raise and log this error, while simultaneously retaining <code>new_process_time</code> to be handed to the queue, instead of a default.</p>
<p>I can think of 2 implementations:</p>
<ul>
<li><p>Catch the error, return it alongside the var, and reraise it, aborting the wider worker context:</p>
<pre class="lang-py prettyprint-override"><code>def foo(new_process_time):
condition = True
e = None
try:
new_process_time = 10 # Calculated within foo() through context only available in foo()
if condition:
raise ValueError("Stuff broke.")
... # logic I wish to abort if an exception is raised
except ValueError as e:
print('Error triggered')
return new_process_time, e
return new_process_time, e
def worker():
new_process_time = None
try:
new_process_time, e = foo(new_process_time)
if e is not None:
raise e
... # logic I wish to abort if an exception is raised
except Exception as e:
print(f"Logging the exception: {e}")
finally:
if new_process_time is not None:
print(f"Next schedule: {new_process_time}")
else:
print(f"Next schedule: DEFAULT")
worker()
</code></pre>
</li>
<li><p>Use a generator:</p>
<pre class="lang-py prettyprint-override"><code>def foo_2(new_process_time):
condition = True
try:
new_process_time = 10 # Calculated within foo() through context only available in foo()
yield new_process_time
if condition:
raise ValueError("Stuff broke.")
... # logic I wish to abort if an exception is raised
except ValueError as e:
print('Error triggered')
raise e
def worker_2():
new_process_time = None
try:
try:
foo_gen = foo_2(new_process_time)
new_process_time = next(foo_gen) # handle the logic up to new_process_time
next(foo_gen) # handle the rest of the logic
except StopIteration:
print('this is fine, continue as normal')
... # logic I wish to abort if an exception is raised
except Exception as e:
print(f"Logging the exception: {e}")
finally:
if new_process_time is not None:
print(f"Next schedule: {new_process_time}")
else:
print(f"Next schedule: DEFAULT")
worker_2()
</code></pre>
</li>
</ul>
<p>I am <em>not a fan</em> of either of these implementations. While they both work, I feel they (the generator option more than the return) are unpythonic and contain anti-patterns. Simply put, asking for trouble.</p>
<p>Alternatively, I reckon <code>global</code> might be an option here, but there are many reasons to avoid it, and I imagine that when the worker function is run in parallel, it would cause problems.</p>
<p>How can I ensure that a variable required at a higher scope, but prevented from returning normally by an error that must be raised, remains accessible?</p>
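<p>A third option (a sketch, with a hypothetical exception class): attach the value to the exception itself, so raising the error and preserving the variable are the same act:</p>

```python
class AbortError(Exception):
    # Hypothetical exception type that carries the value the caller still needs.
    def __init__(self, message, new_process_time):
        super().__init__(message)
        self.new_process_time = new_process_time

def foo():
    new_process_time = 10  # computed before things break
    condition = True
    if condition:
        # Abort, but hand the already-computed value along with the error.
        raise AbortError("Stuff broke.", new_process_time)
    return new_process_time

def worker():
    new_process_time = None
    try:
        new_process_time = foo()
    except AbortError as e:
        print(f"Logging the exception: {e}")
        new_process_time = e.new_process_time
    return new_process_time if new_process_time is not None else "DEFAULT"
```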
|
<python>
|
2024-08-20 13:20:45
| 1
| 408
|
Floriancitt
|
78,892,568
| 2,287,458
|
Polars split column and get n-th (or last) element
|
<p>I have the following code and output.</p>
<p><strong>Code.</strong></p>
<pre class="lang-py prettyprint-override"><code>import polars as pl
df = pl.DataFrame({
'type': ['A', 'O', 'B', 'O'],
'id': ['CASH', 'ORB.A123', 'CHECK', 'OTC.BV32']
})
df.with_columns(sub_id=pl.when(pl.col('type') == 'O').then(pl.col('id').str.split('.')).otherwise(None))
</code></pre>
<p><strong>Output.</strong></p>
<pre><code>shape: (4, 3)
┌──────┬──────────┬─────────────────┐
│ type ┆ id       ┆ sub_id          │
│ ---  ┆ ---      ┆ ---             │
│ str  ┆ str      ┆ list[str]       │
╞══════╪══════════╪═════════════════╡
│ A    ┆ CASH     ┆ null            │
│ O    ┆ ORB.A123 ┆ ["ORB", "A123"] │
│ B    ┆ CHECK    ┆ null            │
│ O    ┆ OTC.BV32 ┆ ["OTC", "BV32"] │
└──────┴──────────┴─────────────────┘
</code></pre>
<p>Now, how would I extract the n-th element (or in this case, the last element) of each list?</p>
<p>Especially, the expected output is the following.</p>
<pre><code>shape: (4, 3)
┌──────┬──────────┬────────┐
│ type ┆ id       ┆ sub_id │
│ ---  ┆ ---      ┆ ---    │
│ str  ┆ str      ┆ str    │
╞══════╪══════════╪════════╡
│ A    ┆ CASH     ┆ null   │
│ O    ┆ ORB.A123 ┆ "A123" │
│ B    ┆ CHECK    ┆ null   │
│ O    ┆ OTC.BV32 ┆ "BV32" │
└──────┴──────────┴────────┘
</code></pre>
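<p>Polars' list namespace offers <code>.list.last()</code> (or <code>.list.get(-1)</code>) for exactly this; verify against your version's docs. A plain-Python mirror of the intended behaviour:</p>

```python
def last_or_null(type_, id_):
    # Mirrors chaining `.list.last()` after the split: nulls stay null,
    # lists collapse to their final element.
    if type_ != 'O':
        return None
    return id_.split('.')[-1]

rows = [('A', 'CASH'), ('O', 'ORB.A123'), ('B', 'CHECK'), ('O', 'OTC.BV32')]
sub_id = [last_or_null(t, i) for t, i in rows]
print(sub_id)  # [None, 'A123', None, 'BV32']
```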
|
<python><dataframe><split><python-polars>
|
2024-08-20 12:54:52
| 1
| 3,591
|
Phil-ZXX
|
78,892,301
| 962,190
|
Condition in pytest fixture depending on some test-suite results
|
<p>Imagine the following test-suite:</p>
<pre class="lang-py prettyprint-override"><code>import pytest
@pytest.fixture(scope="session")
def global_resource():
yield
if ...:
print("output generated by this resource.")
def test_foo():
assert False
def test_bar(global_resource):
assert True
def test_baz(global_resource):
assert True
</code></pre>
<p>In my actual setup, <code>global_resource</code> starts a docker container that produces a ton of output. If a test that uses that fixture fails, I want to print that output. If none of the tests that use this fixture fail, I do not want to print it.</p>
<p>In other words:</p>
<div class="s-table-container"><table class="s-table">
<thead>
<tr>
<th>foo</th>
<th>bar</th>
<th>baz</th>
<th>print?</th>
</tr>
</thead>
<tbody>
<tr>
<td>True</td>
<td>True</td>
<td>True</td>
<td>no</td>
</tr>
<tr>
<td>True</td>
<td>True</td>
<td>False</td>
<td>yes</td>
</tr>
<tr>
<td>True</td>
<td>False</td>
<td>True</td>
<td>yes</td>
</tr>
<tr>
<td>True</td>
<td>False</td>
<td>False</td>
<td>yes</td>
</tr>
<tr>
<td>False</td>
<td>True</td>
<td>True</td>
<td>no</td>
</tr>
<tr>
<td>False</td>
<td>True</td>
<td>False</td>
<td>yes</td>
</tr>
<tr>
<td>False</td>
<td>False</td>
<td>True</td>
<td>yes</td>
</tr>
<tr>
<td>False</td>
<td>False</td>
<td>False</td>
<td>yes</td>
</tr>
</tbody>
</table></div>
<p>Is there something that I can put into the condition after <code>global_resource</code>'s yield in order to learn whether I want to print or not? I am aware that at this point tests may still fail due to issues in their teardown; that is ok.</p>
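<p>The condition itself reduces to "did any test that requested the fixture fail?"; the usual wiring (an assumption about the setup, not shown here) records outcomes in a <code>pytest_runtest_makereport</code> hook and checks <code>item.fixturenames</code>. The decision logic, sketched as plain Python and checked against the table above:</p>

```python
def should_print(results, fixture_users):
    # results: test name -> passed?; print iff any fixture-using test failed.
    return any(not results[name] for name in fixture_users)

fixture_users = ['test_bar', 'test_baz']  # test_foo does not use the fixture
table = [
    ((True,  True,  True),  False),
    ((True,  True,  False), True),
    ((True,  False, True),  True),
    ((True,  False, False), True),
    ((False, True,  True),  False),
    ((False, True,  False), True),
    ((False, False, True),  True),
    ((False, False, False), True),
]
for (foo, bar, baz), expected in table:
    results = {'test_foo': foo, 'test_bar': bar, 'test_baz': baz}
    assert should_print(results, fixture_users) == expected
```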
|
<python><pytest><fixtures>
|
2024-08-20 11:56:35
| 1
| 20,675
|
Arne
|
78,892,162
| 5,413,581
|
Different solution using scikit-learn and cvxpy
|
<p>I am trying to code a logistic regression model using the CVXPY library. The code I have written so far "works" in the sense that it runs without error and produces a solution. However, this solution does not match the one provided by the scikit-learn implementation of logistic regression.</p>
<p>I know scikit-learn's implementation includes an L2 penalty by default, and in the code example below you will see that I changed this to <code>None</code>. I also removed the intercept from the sklearn model. But the solutions still do not match:</p>
<pre><code>import cvxpy as cp
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
X, y = make_classification(n_features=10, random_state=42)
sk = LogisticRegression(penalty=None, fit_intercept=False)
sk.fit(X, y)
print(sk.coef_)
</code></pre>
<p>This produces the output:</p>
<pre><code>[[-13.46939518 -7.09935934 19.41989522 -10.36990818 3.76335965
2.84616038 3.32474461 -2.84162961 3.13246888 1.08887971]]
</code></pre>
<p>Now, the cvxpy implementation:</p>
<pre><code>beta = cp.Variable(X.shape[1])
log_likelihood = cp.sum(cp.multiply(y, X @ beta) - cp.logistic(X @ beta))
problem = cp.Problem(cp.Maximize(log_likelihood/X.shape[0]))
problem.solve()
beta = beta.value
print(beta)
</code></pre>
<p>produces the solution:</p>
<pre><code>[-31.38130594 -10.72178524 44.07489985 -34.06127916 8.01950276
5.96941765 9.6143194 -7.88785049 12.96349703 -0.13264449]
</code></pre>
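<p>A likely explanation (an assumption, not verified on this exact dataset): the generated data is linearly separable, so the unpenalised likelihood has no finite maximiser, and different solvers stop at different points along the same diverging direction. A tiny sketch of why no finite optimum exists on separable data:</p>

```python
import numpy as np

def log_likelihood(beta, X, y):
    # Bernoulli log-likelihood of a logistic model, written stably.
    z = X @ beta
    return float(np.sum(y * z - np.logaddexp(0, z)))

# A tiny linearly separable dataset: positives at x > 0, negatives at x < 0.
X = np.array([[1.0], [2.0], [-1.0], [-2.0]])
y = np.array([1.0, 1.0, 0.0, 0.0])
beta = np.array([1.0])  # any separating direction works
lls = [log_likelihood(c * beta, X, y) for c in (1.0, 10.0, 100.0)]
# The likelihood keeps improving as the weights grow, so the MLE diverges.
```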
|
<python><scikit-learn><logistic-regression><cvxpy>
|
2024-08-20 11:21:51
| 0
| 769
|
Γlvaro MΓ©ndez Civieta
|
78,892,136
| 4,050,510
|
Advice on how to speed up jax compilation times?
|
<p>I want to implement the Legendre approximation of <a href="https://papers.nips.cc/paper_files/paper/2023/hash/ee860a9fa65a55a335754c557a5211de-Abstract-Conference.html" rel="nofollow noreferrer">Dahlke&Pacheco 2020</a> to compute the entropy of a Gaussian Mixture Model.</p>
<p>I want to incorporate this into some deep learning I have in Jax, so I want the implementation to be JIT'able with Jax.</p>
<p>There is a working version attached below. The compilation times are horrendous though, and I think this has to do with the static variable for the number of partitions to iterate. If <code>k</code> and <code>K</code> are large, the number of partitions increases superexponentially. And it seems JAX decides to do some unrolling or similar, so the compilation time also increases superexponentially.</p>
<p>What can I do to make jax compile this faster (even if at the expense of slower runtime)?</p>
<pre class="lang-py prettyprint-override"><code>import jax
import jax.numpy as jnp
import functools


def gmm_max(logpi, mu, cov):
    """Return an upper bound on the GMM density

    logpi: (K,)-array of the log-weights
    mu: (K, D)-array of the component means
    cov: (K, D, D)-array of the component covariances

    if p(x) = sum_i pi_i N(x|mu_i, cov_i) then
    max(p(x)) <= sum_i pi_i max(N(x|mu_i, cov_i))
              = sum_i pi_i N(0|0, cov_i)
    """
    logpdfmaxs = jax.vmap(
        lambda m, c: jax.scipy.stats.multivariate_normal.logpdf(m, m, cov=c)
    )(mu, cov)
    res = jnp.sum(jnp.exp(logpdfmaxs + logpi))
    return res


@functools.partial(jax.jit, static_argnames=["K", "c"])
def partitions_(k, K, c):
    """
    PRECONDITION: c = combinations(k+K-1, k, exact=True)
    TODO come up with a way to NOT have to pass c, while still allowing JIT
    """
    def _inner(k, buffer, pos, carry):
        obuffer, opos = carry
        if pos == K - 1:
            updated_buffer = buffer.at[pos].set(k)
            return (obuffer.at[opos, :].set(updated_buffer), opos + 1)
        else:
            return jax.lax.fori_loop(
                0,
                k + 1,
                lambda i, carry: _inner(
                    k - i, buffer.at[pos].set(i), pos + 1, carry=carry
                ),
                carry,
            )

    buffer = jnp.zeros(K, dtype=jnp.int32)
    carry = (jnp.zeros((c, K), dtype=jnp.int32), 0)
    obuffer, opos = _inner(k=k, pos=0, buffer=buffer, carry=carry)
    return obuffer


def comb(n, k):
    """Compute n choose k via gamma function"""
    return jnp.exp(
        jax.scipy.special.gammaln(n + 1)
        - jax.scipy.special.gammaln(k + 1)
        - jax.scipy.special.gammaln(n - k + 1)
    )


def logfactorial(n):
    return jax.scipy.special.gammaln(n + 1)


def log_multinomial_coeff(ks):
    """Multinomial coefficient approximated with log-gamma-function"""
    return jax.scipy.special.gammaln(jnp.sum(ks) + 1) - jnp.sum(
        jax.scipy.special.gammaln(ks + 1)
    )


def expectation_of_powers(logpi, mu, cov, k, K, n_partitions):
    """Compute the closed form expectation of powers of a GMM.

    Equation 11 in the cited paper.
    I would *really* like to not pass `K` and `n_partitions` as arguments, but I don't know how to do that with JIT.

    PRECONDITIONS:
    sum(exp(logpi)) == 1
    n_partitions == int(round(comb(k+K-1, k)))
    """
    assert logpi.ndim == 1
    assert len(mu) == len(cov) == K
    inv_covs = jax.vmap(jnp.linalg.inv)(cov)
    # Iterate over j1, ..., jK >= 0 where j1 + ... + jK = k
    result = 0.0
    partitions = partitions_(k, K, n_partitions)

    def _inner(j):
        def _inner2(ith_inv_cov, ith_mu, ith_cov, ith_logpi):
            cov_combined = jnp.linalg.inv(
                ith_inv_cov
                + jnp.sum(jax.vmap(lambda j_, ic: j_ * ic)(j, inv_covs), axis=0)
            )
            mu_combined = cov_combined @ (
                ith_inv_cov @ ith_mu
                + jnp.sum(
                    jax.vmap(lambda j_, ic, m_: j_ * ic @ m_)(j, inv_covs, mu), axis=0
                )
            )
            # Gaussian ratio
            log_N_ratio = jax.scipy.stats.multivariate_normal.logpdf(
                0, mean=ith_mu, cov=ith_cov
            ) - jax.scipy.stats.multivariate_normal.logpdf(
                0, mean=mu_combined, cov=cov_combined
            )
            # Product term
            log_product_term = jnp.sum(
                jax.vmap(
                    lambda j_, logpi_, mu_, cov_: (
                        j_
                        * (
                            logpi_
                            + jax.scipy.stats.multivariate_normal.logpdf(
                                0, mean=mu_, cov=cov_
                            )
                        )
                    )
                )(j, logpi, mu, cov),
                axis=0,
            )
            return ith_logpi + log_N_ratio + log_product_term

        log_inner_summands = jax.vmap(_inner2)(inv_covs, mu, cov, logpi)
        return jnp.exp(log_multinomial_coeff(j) + jax.nn.logsumexp(log_inner_summands))

    result = jnp.sum(jax.vmap(_inner)(partitions))
    return result


@functools.partial(jax.jit, static_argnames=["order", "K", "partition_sizes"])
def gmm_entropy_legendre_jax_(logpi, mu, cov, order, K, partition_sizes):
    """Entropy of a GMM using legendre polynomial approximation

    Formula due to Dahlke and Pacheco.
    See equation 16 in the cited paper.

    PRECONDITIONS:
    len(logpi) == K
    partition_sizes = [combinations(k+K-1, k, exact=True) for k in range(order+1)]
    """
    assert logpi.ndim == 1
    assert mu.ndim == 2
    d = mu.shape[1]
    assert mu.shape == (K, d), f"mu.shape={mu.shape}, K={K}, d={d}"
    assert cov.ndim == 3
    assert cov.shape == (K, d, d)
    N = order  # alias
    a = gmm_max(logpi, mu, cov) + 1e-3
    loga = jnp.log(a)
    # I cannot understand how to do this with vmap without making `combs` a non-concrete value
    Ep_px = [
        expectation_of_powers(logpi, mu, cov, k, K, combs)
        for k, combs in zip(range(0, N + 1), partition_sizes)
    ]

    def _summand_by_n(n):
        coeffs_sum = jnp.sum(
            jax.vmap(
                lambda j: (
                    +((-1) ** (n + j))
                    * ((j + 1) * loga - 1)
                    * jnp.exp(
                        logfactorial(n + j)
                        - logfactorial(n - j)
                        - 2 * logfactorial(j + 1)
                    )
                )
            )(jnp.arange(n + 1))
        )
        coeffs2_sum = jnp.sum(
            jax.vmap(
                lambda k, Ep_px_k: (
                    (-1) ** (n + k)
                    * jnp.exp(
                        logfactorial(n + k) - logfactorial(n - k) - 2 * logfactorial(k)
                    )
                    * Ep_px_k
                    / a**k
                )
            )(jnp.arange(n + 1), jnp.stack(Ep_px, axis=0)[: n + 1]),
        )
        return (2 * n + 1) * coeffs_sum * coeffs2_sum

    # I cannot understand how to do this with vmap without making `n` a non-concrete value
    result = 0.0
    for n in range(N + 1):
        result += _summand_by_n(n)
    return -result


def gmm_entropy_legendre_jax(logpi, mu, cov, order):
    """Entropy of a GMM using legendre polynomial approximation

    Formula due to Dahlke and Pacheco.
    See equation 16 in the cited paper (or the corresponding formula in the proof of theorem 4.5).
    """
    assert mu.ndim == 2
    K = mu.shape[0]
    D = mu.shape[1]
    assert logpi.shape == (K,), f"logpi.shape={logpi.shape}, K={K}"
    assert cov.shape == (K, D, D), f"cov.shape={cov.shape}, K={K}, D={D}"
    partition_sizes = tuple(int(round(comb(k + K - 1, k))) for k in range(order + 1))
    return gmm_entropy_legendre_jax_(
        logpi, mu, cov, order=order, K=K, partition_sizes=partition_sizes
    )


if __name__ == "__main__":
    order = 8

    def mvn_entropy_exact(cov):
        """The entropy of a single Gaussian with covariance matrix `cov`."""
        assert cov.ndim == 2
        k = cov.shape[0]
        assert cov.shape == (k, k)
        return (k / 2) * (1 + jnp.log(2 * jnp.pi)) + 0.5 * jnp.log(jnp.linalg.det(cov))

    logpi = jnp.array([0.0])
    mu = jnp.array([[10.0, -20]])
    cov = jnp.array([[[2, 0], [0, 7]]])
    a = mvn_entropy_exact(cov[0])
    print(a)
    a = gmm_entropy_legendre_jax(logpi, mu, cov, order=order)
    print(a)

    a = mvn_entropy_exact(jnp.array([[1.0]]))
    print(a)
    a = gmm_entropy_legendre_jax(
        jnp.array([0.0]), jnp.array([[0.0]]), jnp.array([[[1.0]]]), order=order
    )
    print(a)

    a = gmm_entropy_legendre_jax(
        jnp.log(jnp.ones(2) / 2),
        jnp.array([[0.0], [0.0]]),
        jnp.array([[[1.0]], [[1.0]]]),
        order=order,
    )
    print(a)
</code></pre>
|
<python><jit><jax>
|
2024-08-20 11:14:53
| 0
| 4,934
|
LudvigH
|
78,892,056
| 18,769,241
|
How to disable the warning message for g4f version deprecation?
|
<p>I am using this code to get my response out of the model:</p>
<pre><code>from g4f.client import Client
client = Client()
response = client.chat.completions.create(
model="gpt-3.5-turbo",
messages=[{"role": "user", "content": "Hello"}],
)
</code></pre>
<p>This code is run through a <code>subprocess.Popen()</code> call like so:</p>
<pre><code>p = subprocess.Popen(['C:\\Python38\\python.exe','-Wignore', 'C:\\Users\\user\\proj\\projName\\chatgpt.py'],
stdin=subprocess.PIPE, stdout=subprocess.PIPE, shell=True, env=env)
</code></pre>
<p>But the call to <code>client.chat.completions.create()</code> generates this warning message before actually returning the model response:</p>
<pre><code>New g4f version: 0.3.2.4 (current: 0.3.2.2) | pip install -U g4f
</code></pre>
<p>My question is how to suppress that warning message from being generated by the mentioned call?</p>
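As a fallback, I am considering filtering the banner out of the child process's output in the parent before displaying it. A sketch of what I mean — the exact banner pattern below is an assumption based on the message I quoted:

```python
import re

# Hypothetical filter: drop g4f's "New g4f version ..." banner line
# from whatever the child process prints, and keep everything else.
BANNER = re.compile(r"^New g4f version:.*pip install -U g4f\s*$")

def strip_banner(output: str) -> str:
    kept = [line for line in output.splitlines() if not BANNER.match(line)]
    return "\n".join(kept)

sample = "New g4f version: 0.3.2.4 (current: 0.3.2.2) | pip install -U g4f\nHello"
print(strip_banner(sample))  # -> Hello
```

This only hides the message, though; I would prefer to stop it from being emitted in the first place.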
|
<python><warnings><openai-api>
|
2024-08-20 11:01:49
| 1
| 571
|
Sam
|
78,892,021
| 26,843,912
|
Websocket connects only after asyncio finishes the background task
|
<p>I have a <code>/process</code> route in my FastAPI server:</p>
<pre class="lang-py prettyprint-override"><code>@router.post("/process")
async def handle_options(data: Dict[str, Any]):
    # ... other code
    asyncio.create_task(editing(videoId, values, videoWidth, videoHeight, fps, userData))
    return {'The editing started'}
</code></pre>
<pre class="lang-py prettyprint-override"><code>async def editing(videoId, values, videoWidth, videoHeight, fps, userData):
    recordingFile = os.path.join(temp_dir, f"recording_{videoId}.mp4")
    webFile = os.path.join(temp_dir, f"final_edited_{videoId}.mp4")
    ffmpeg_change_bitrate = ['ffmpeg', '-i', recordingFile, '-b:v', '5M', f'{webFile}']
    subprocess.run(ffmpeg_change_bitrate)
    print('Completed Editing, it took 4 minutes')
</code></pre>
<p>The route creates a task that runs in the background and, without waiting for the editing to complete, returns the response to the user. This works as intended: it creates the task and immediately returns the response (note: the editing takes a few minutes to complete).
Now I also have a websocket route:</p>
<pre class="lang-py prettyprint-override"><code>@router.websocket("/logs")
async def websocket_endpoint(websocket: WebSocket, videoId: str = Query(...)):
    await manager.connect(websocket)
    # Print connection details
    print(f"Client connected: {websocket.client.host} at {datetime.now()}")
</code></pre>
<p>After the user receives the response from the first request, they are redirected to a loading page where I connect them to the <code>/logs</code> websocket, which is meant to keep them updated on the video-editing progress. But instead of connecting immediately when the user requests it, the websocket only connects once the <code>asyncio.create_task(editing(videoId, values, videoWidth, videoHeight, fps, userData))</code> task completes.</p>
<p>The <code>editing</code> task created with <code>asyncio.create_task</code> blocks every other route and function until it finishes.</p>
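Here is a minimal reproduction of the blocking I am seeing, with the ffmpeg call replaced by a short sleeping child process (a stand-in command, not my real pipeline):

```python
import asyncio
import subprocess
import sys
import time

async def ticker(stamps):
    # Coroutine that should keep running while "editing" happens.
    for _ in range(3):
        stamps.append(time.monotonic())
        await asyncio.sleep(0.05)

async def editing():
    # subprocess.run is synchronous: it blocks the whole event loop,
    # exactly like the ffmpeg call in my route.
    subprocess.run([sys.executable, "-c", "import time; time.sleep(0.3)"])

async def main():
    stamps = []
    task = asyncio.create_task(editing())
    await ticker(stamps)
    await task
    return stamps

if __name__ == "__main__":
    stamps = asyncio.run(main())
    # The gap between the first two ticks is ~0.3s instead of ~0.05s,
    # because the loop is stuck inside subprocess.run.
    print(stamps[1] - stamps[0])
```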
|
<python><multithreading><websocket><python-asyncio><fastapi>
|
2024-08-20 10:52:55
| 1
| 323
|
Zaid
|
78,891,908
| 5,346,843
|
Plot annotations clipped by plt.savefig
|
<p>I am plotting sunrise and sunset times using code adapted from the<br />
matplotlib <a href="https://matplotlib.org/stable/gallery/color/colormap_reference.html#sphx-glr-gallery-color-colormap-reference-py" rel="nofollow noreferrer">colormap examples</a></p>
<p>The code is below:</p>
<pre><code>import datetime
import numpy as np
import matplotlib.pyplot as plt

# Define month/year to process
d0 = datetime.datetime(2024, 9, 1)
d1 = datetime.datetime(2024, 9, 30)
xdates = [d0, d1]

# Initialise matplotlib plot
nrows = len(xdates)
figh = 0.50 + 0.15 + (nrows + (nrows - 1) * 0.1) * 0.25
fig, axs = plt.subplots(nrows=nrows, figsize=(10.0, figh))
fig.subplots_adjust(top=1 - 0.15/figh, bottom=0.15/figh, left=0.2, right=0.99)

# Process start and end of month
for i, ax in enumerate(fig.axes):
    dt = xdates[i]
    curr_day = dt.strftime("%d %b")
    # Get times
    if i == 0:
        twr, utr, uts, tws = (4.13, 6.30, 19.84, 22.01)
    else:
        twr, utr, uts, tws = (5.13, 7.08, 18.73, 20.69)
    # Create arrays for interpolation
    x = [0.0, twr, utr, uts, tws, 24.0, 24.1]
    y = [1.0, 1.0, 0.0, 0.0, 1.0, 1.0, 1.0]
    # Interpolate for daylight
    xval = np.array(x)
    yval = np.array(y)
    xtim = np.arange(0.0, 24.05, 0.10)
    yngt = np.interp(xtim, xval, yval)
    yngt = np.vstack((yngt, yngt))
    # Plot data
    ax.imshow(yngt, aspect='auto', cmap='cividis_r')
    ax.text(-.01, .5, curr_day, va='center', ha='right', fontsize=14, transform=ax.transAxes)
    # Add times
    if i == 0:
        ypos = 1.25
    else:
        ypos = -0.75
    for xt in (twr, utr, uts, tws):
        if not xt is None:
            xpos = xt / 24.0 - 0.03
            tnew = dt + datetime.timedelta(hours=xt)
            tstr = tnew.strftime("%H:%M")
            ax.annotate(tstr, xy=(xpos, ypos), xycoords='axes fraction', fontsize=12)
    # Turn off *all* ticks & spines, not just the ones with colormaps
    ax.set_axis_off()

plt.savefig("test_sun_rise_set.png")
</code></pre>
<p>The plot that appears in the Spyder window looks fine:</p>
<p><a href="https://i.sstatic.net/xVtHnkpi.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/xVtHnkpi.png" alt="Spyder plot window" /></a></p>
<p>However, the resulting PNG file has the annotations clipped</p>
<p><a href="https://i.sstatic.net/W5MHOUwX.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/W5MHOUwX.png" alt="PNG file" /></a></p>
<p>I have tried adjusting <code>figh</code> and the margins in <code>fig.subplots_adjust</code>, but this just seems to make things worse, so I would appreciate any suggestions. Thanks in advance.</p>
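For reference, here is a stripped-down experiment I plan to run: an annotation deliberately placed above the axes via <code>axes fraction</code> coordinates, saved once plainly and once with <code>bbox_inches="tight"</code> (which recomputes the saved bounding box from the drawn artists; whether it rescues my labels is untested against the full script):

```python
import matplotlib
matplotlib.use("Agg")  # headless backend for the sketch
import matplotlib.pyplot as plt

fig, ax = plt.subplots(figsize=(10.0, 1.0))
# Annotation deliberately placed above the axes, like the time labels.
ax.annotate("04:07", xy=(0.1, 1.25), xycoords="axes fraction", fontsize=12)
ax.set_axis_off()

# Plain savefig clips to the figure box; bbox_inches="tight" derives
# the bounding box from all artists instead.
fig.savefig("plain.png")
fig.savefig("tight.png", bbox_inches="tight")
```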
|
<python><numpy><matplotlib>
|
2024-08-20 10:28:15
| 1
| 545
|
PetGriffin
|
78,891,820
| 1,348,691
|
How do I understand grouped[l].size()?
|
<p>I am trying to understand the following code, <code>grouped['l'].size()</code>, as in:</p>
<pre><code>f = 10
n = 250
np.random.seed(100)
x = np.random.randint(0,2,(n,f))
y = np.random.randint(0,2,n)
fcols = [f'f{_}' for _ in range(f)]
data = pd.DataFrame(x, columns = fcols)
data['l'] = y
grouped = data.groupby(list(data.columns))
print(grouped['l'].size())
</code></pre>
<p>Why does it print like this:</p>
<pre><code>f0 f1 f2 f3 f4 f5 f6 f7 f8 f9 l
0 0 0 0 0 0 0 1 1 1 1 1
1 0 1 0 0 1
1 1
1 1 1
1 0 0 0 0 0 1
..
1 1 1 1 1 0 0 0 0 0 1 1
1 0 0 0 1
1 1 0 0 1 1
1 1 0 0 0 0 1
1 0 1 1 2
Name: l, Length: 239, dtype: int64
</code></pre>
<p>I read the official documentation: <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.groupby.html" rel="nofollow noreferrer">https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.groupby.html</a></p>
<p>This is what confuses me: <code>l</code> only takes the values 0 and 1, so when it is used as a grouping condition I expected the output <code>l</code> to have only 0 and 1 too, i.e. 2 lines of output with <code>Length: 2</code> (instead it is <code>Length: 239</code>). In my mind the output should look like this:</p>
<pre><code>f0 f1 f2 f3 f4 f5 f6 f7 f8 f9 l
0 0 0 0 0 0 0 1 1 1 1 1
1 0 1 0 0 2
</code></pre>
<ol>
<li>Why is the length 239, not 2?</li>
<li>Why are many of the <code>f</code> columns empty, with no value shown?</li>
</ol>
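To narrow things down I tried a tiny example, and even here the resulting index carries both columns rather than just <code>l</code>:

```python
import pandas as pd

# Tiny version of my setup: one feature column plus the label, grouped by both.
tiny = pd.DataFrame({"f0": [0, 0, 1], "l": [1, 1, 0]})
print(tiny.groupby(["f0", "l"]).size())
# f0  l
# 0   1    2
# 1   0    1
# dtype: int64
```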
|
<python><pandas><numpy>
|
2024-08-20 10:07:56
| 1
| 4,869
|
Tiina
|
78,891,775
| 2,292,490
|
Search one CSV and add a value when a matching value is found in another CSV
|
<p>I have 2 CSV that look like this:</p>
<p><code>input.csv</code>:</p>
<div class="s-table-container"><table class="s-table">
<thead>
<tr>
<th>id</th>
<th>sku</th>
<th>costs_production</th>
</tr>
</thead>
<tbody>
<tr>
<td>12</td>
<td>h01</td>
<td></td>
</tr>
<tr>
<td>13</td>
<td>h02</td>
<td></td>
</tr>
</tbody>
</table></div>
<p><code>search.csv</code>:</p>
<div class="s-table-container"><table class="s-table">
<thead>
<tr>
<th>sku</th>
<th>costs_production</th>
</tr>
</thead>
<tbody>
<tr>
<td>h01</td>
<td>14.95</td>
</tr>
<tr>
<td>h02</td>
<td>51.99</td>
</tr>
</tbody>
</table></div>
<p>I want to use the value (<code>sku</code>) from <code>search.csv</code> to search for a match in <code>input.csv</code> and, if it finds a match, add the corresponding <code>costs_production</code> value to the <code>costs_production</code> column in <code>input.csv</code> and write the result to a new file.</p>
<p>Here's what I've got so far:</p>
<pre><code>import csv

ID, SKU, COSTS_PRODUCTION = 'id', 'sku', 'costs_production'  # Field names referenced.

# Read entire input file into a list.
with open('input.csv', 'r', newline='') as inp:
    reader = csv.DictReader(inp)
    inputs = list(reader)

# Update input rows that match data in search.csv file.
with open('search.csv', 'r', newline='') as sea:
    sea_reader = csv.DictReader(sea)
    for row in sea_reader:
        SKU, COSTS_PRODUCTION = row[SKU], row[COSTS_PRODUCTION]
        for input_ in inputs:
            if input_[SKU] == SKU:  # Match?
                input_[COSTS_PRODUCTION] = row[COSTS_PRODUCTION]
                break

# Write updated input.csv data out into a file.
with open('input_updated.csv', 'w', newline='') as outp:
    fieldnames = inputs[0].keys()
    writer = csv.DictWriter(outp, fieldnames)
    writer.writeheader()
    writer.writerows(inputs)

print('done')
</code></pre>
<p>I keep getting <code>KeyError: 'h01'</code>.</p>
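To sanity-check my assumptions about <code>DictReader</code>, I verified how it exposes the columns on a tiny in-memory sample mirroring <code>search.csv</code>:

```python
import csv
import io

# In-memory stand-in for search.csv.
sample = io.StringIO("sku,costs_production\nh01,14.95\n")
rows = list(csv.DictReader(sample))
print(list(rows[0].keys()))  # ['sku', 'costs_production']
print(rows[0]["sku"])        # h01
```

So each row is keyed by the header names, which makes me suspect something in my loop is looking up <code>'h01'</code> as a key instead.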
|
<python><csv>
|
2024-08-20 09:58:46
| 3
| 571
|
Bill Bronson
|
78,891,590
| 6,546,694
|
python package installation via pip gives: ERROR: Could not install packages due to an OSError: [Errno 28] No space left on device
|
<p>I am trying to install <code>FlagEmbeddings</code> (<a href="https://github.com/FlagOpen/FlagEmbedding" rel="nofollow noreferrer">https://github.com/FlagOpen/FlagEmbedding</a>)
like so:</p>
<pre><code>git clone https://github.com/FlagOpen/FlagEmbedding.git
cd FlagEmbedding
pip install -e .
</code></pre>
<p>I am running into:</p>
<pre><code>ERROR: Could not install packages due to an OSError: [Errno 28] No space left on device
</code></pre>
<p>I am running a virtual machine on Azure.</p>
<p>output of <code>df -h</code> is as follows:</p>
<pre><code>Filesystem Size Used Avail Use% Mounted on
/dev/root 29G 28G 1.6G 95% /
devtmpfs 63G 0 63G 0% /dev
tmpfs 63G 4.0K 63G 1% /dev/shm
tmpfs 13G 1.1M 13G 1% /run
tmpfs 5.0M 0 5.0M 0% /run/lock
tmpfs 63G 0 63G 0% /sys/fs/cgroup
/dev/loop0 64M 64M 0 100% /snap/core20/2318
/dev/loop3 39M 39M 0 100% /snap/snapd/21759
/dev/loop1 92M 92M 0 100% /snap/lxd/24061
/dev/sda15 105M 6.1M 99M 6% /boot/efi
/dev/loop2 92M 92M 0 100% /snap/lxd/29619
/dev/sdb1 590G 32K 560G 1% /mnt
tmpfs 13G 0 13G 0% /run/user/1000
</code></pre>
<p>What can I do to resolve the error? I am open to provisioning more resources from Azure if that is what it takes.</p>
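For monitoring, this is how I am checking free space from Python while the install runs — the root filesystem (which `df` shows at 95%) plus the temp directory pip builds wheels in:

```python
import shutil
import tempfile

# Check the root filesystem and the temp dir pip builds wheels in.
for mount in ("/", tempfile.gettempdir()):
    usage = shutil.disk_usage(mount)
    print(f"{mount}: {usage.free / 1e9:.1f} GB free of {usage.total / 1e9:.1f} GB")
```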
|
<python><azure><pip><azure-storage><disk>
|
2024-08-20 09:20:44
| 1
| 5,871
|
figs_and_nuts
|
78,891,481
| 5,269,892
|
Pandas string replace with regex argument for non-regex replacements
|
<p>Suppose I have a dataframe in which I want to replace a non-regex substring consisting only of characters (i.e. a-z, A-Z) and/or digits (i.e. 0-9) via <a href="https://pandas.pydata.org/docs/reference/api/pandas.Series.str.replace.html" rel="nofollow noreferrer">pd.Series.str.replace</a>. The docs state that this function is equivalent to <code>str.replace</code> or <code>re.sub()</code>, depending on the <code>regex</code> argument (default <em>False</em>).</p>
<p>Apart from most likely being overkill, are there any downsides to consider if the function was called with <code>regex=True</code> for non-regex replacements (e.g. performance)? If so, which ones? Of course, I am not suggesting using the function in this way.</p>
<p>Example: Replace 'Elephant' in the below dataframe.</p>
<pre><code>import pandas as pd
data = {'Animal_Name': ['Elephant African', 'Elephant Asian', 'Elephant Indian', 'Elephant Borneo', 'Elephant Sumatran']}
df = pd.DataFrame(data)
df = df['Animal_Name'].str.replace('Elephant', 'Tiger', regex=True)
</code></pre>
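For what it's worth, this is the quick benchmark I would run myself to compare the two paths; the replacement results should be identical for a literal pattern like this, so only the timing differs:

```python
import timeit
import pandas as pd

s = pd.Series(["Elephant African"] * 10_000)

# Same literal replacement, routed through the plain-string vs regex engine.
for flag in (False, True):
    t = timeit.timeit(
        lambda: s.str.replace("Elephant", "Tiger", regex=flag), number=20
    )
    print(f"regex={flag}: {t:.3f}s")
```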
|
<python><pandas><replace>
|
2024-08-20 08:56:09
| 3
| 1,314
|
silence_of_the_lambdas
|
78,891,473
| 243,755
|
How to include the project root folder into PYTHONPATH when running python script in sub folder
|
<p>Here's my project structure</p>
<pre><code>- folder_1
  - folder_2
    - file1.py
    - file2.py
  - folder_3
    - file3.py
</code></pre>
<p>Now I use this command to run file1.py</p>
<pre><code>python folder_1/folder_2/file1.py
</code></pre>
<p>But what I found is that Python puts <code>folder_1/folder_2</code> on the PYTHONPATH. What I want is for <code>folder_1</code> to be on the PYTHONPATH as well, so that <code>file1.py</code> can import functions from <code>file3.py</code>. Although I can populate PYTHONPATH myself, I wonder whether there is a convenient or standard way to do this, because I believe it is a very common scenario.</p>
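This is the manual workaround I currently put at the top of <code>file1.py</code>; I am hoping there is something more standard:

```python
import sys
from pathlib import Path

# file1.py lives at folder_1/folder_2/file1.py, so two parents up is folder_1.
project_root = Path(__file__).resolve().parents[1]
sys.path.insert(0, str(project_root))

# After this, `import folder_3.file3` resolves, since folder_3 lives in folder_1.
```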
|
<python><pythonpath>
|
2024-08-20 08:54:39
| 1
| 29,674
|
zjffdu
|
78,891,025
| 19,356,117
|
Why does invoking a Python program from another Python program make the original program slower?
|
<p>In my tool <a href="https://github.com/iHeadWater/hydrotopo" rel="nofollow noreferrer"><code>hydrotopo</code></a>, when I run <code>find_edge_nodes</code> to find topological relations for about 100,000 linestrings, it takes only about 10 seconds; but in the code below, <code>find_edge_nodes</code> takes about 30 minutes:</p>
<pre><code># extract from https://github.com/iHeadWater/torchhydro/blob/dev-gnn/torchhydro/datasets/data_sets.py#L872
import hydrotopo.ig_path as htip
# the same method costs about 30min
graph_lists = htip.find_edge_nodes(node_features, network_features, node_idx, 'up', cutoff)
return graph_lists
</code></pre>
<p>In short, when I invoke my tool from another program it becomes very slow, and when I pause the program in debug mode, execution is stopped inside <code>predicates.has_z</code>. There are not many loops in my code.</p>
<p>So why does <code>has_z</code> take so much time, or should I look for another solution?</p>
<p>If you want to know more details, please see this: <a href="https://github.com/shapely/shapely/issues/2108" rel="nofollow noreferrer">https://github.com/shapely/shapely/issues/2108</a></p>
<hr />
<p>Update: hydrotopo and torchhydro both use Python 3.11, and they are both tested in IntelliJ IDEA with pytest.</p>
<p>I'm sure <code>hydrotopo</code> is compiled (<code>/home/username/.conda/envs/torchhydro1/lib/python3.11/site-packages/hydrotopo/__pycache__/ig_path.cpython-311.pyc</code>)</p>
<hr />
<p>I tried to debug <a href="https://github.com/iHeadWater/torchhydro/blob/dev-gnn/experiments/train_with_era5land_gnn.py#L31" rel="nofollow noreferrer"><code>test_run_model</code></a> with pytest, but it crashed with this:</p>
<pre><code>(torchhydro1) username@vm-jupyterhub-server:~/torchhydro$ pytest experiments/train_with_era5land_gnn.py::run_test_model
platform linux -- Python 3.11.9, pytest-8.3.2, pluggy-1.5.0
rootdir: /home/username/torchhydro
configfile: setup.cfg
collected 0 items / 1 error
ImportError while importing test module '/home/username/torchhydro/experiments/train_with_era5land_gnn.py'.
Hint: make sure your test modules/packages have valid Python names.
Traceback:
../.conda/envs/torchhydro1/lib/python3.11/importlib/__init__.py:126: in import_module
return _bootstrap._gcd_import(name[level:], package, level)
experiments/train_with_era5land_gnn.py:14: in <module>
from torchhydro.configs.config import cmd, default_config_file, update_cfg
E ModuleNotFoundError: No module named 'torchhydro'
ERROR: found no collectors for /home/username/torchhydro/experiments/train_with_era5land_gnn.py::run_test_model
</code></pre>
|
<python><pytest><shapely>
|
2024-08-20 07:04:02
| 1
| 1,115
|
forestbat
|
78,890,898
| 19,616,641
|
Displaying a Gazebo RealSense image using ROS: depth image looks weird and has a different size from the raw image
|
<p>I am trying to display images captured by a RealSense camera in my Gazebo simulation in an OpenCV window. But the depth image shows up in color, even though rviz shows it in black and white. And the raw and depth images from the same camera have different sizes, despite no resizing. I want the simulation to produce the same output as a real RealSense camera does. How can I fix this? Below are my image-display Python scripts, the launch file, and a picture of the output images. Just in case, here's the git repo:</p>
<blockquote>
<p><a href="https://github.com/brian2lee/forklift_test/tree/main" rel="nofollow noreferrer">https://github.com/brian2lee/forklift_test/tree/main</a></p>
</blockquote>
<p>The realsense d435 add-on used in gazebo:</p>
<blockquote>
<p><a href="https://github.com/issaiass/realsense2_description" rel="nofollow noreferrer">https://github.com/issaiass/realsense2_description</a>
<a href="https://github.com/issaiass/realsense_gazebo_plugin" rel="nofollow noreferrer">https://github.com/issaiass/realsense_gazebo_plugin</a></p>
</blockquote>
<p>Edit: the colored depth map issue has been solved by @Christoph Rackwitz; I updated the code, and it now shows a normal depth map, but the size problem remains.</p>
<p>Images (from top left: 1. OpenCV raw, 2. OpenCV depth, 3. rviz depth):</p>
<p><a href="https://i.sstatic.net/nSM8E8jP.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/nSM8E8jP.png" alt="enter image description here" /></a></p>
<p><code>im_show.py</code>:</p>
<pre><code>#!/usr/bin/env python3
import rospy
import cv2
from sensor_msgs.msg import Image
from cv_bridge import CvBridge, CvBridgeError

class ImageConverter:
    def __init__(self):
        self.bridge = CvBridge()
        self.image_sub = rospy.Subscriber("/camera/color/image_raw", Image, self.callback)

    def callback(self, data):
        try:
            # Convert the ROS Image message to a CV2 image
            cv_image = self.bridge.imgmsg_to_cv2(data, "bgr8")
        except CvBridgeError as e:
            print(e)
            return
        # Display the image in an OpenCV window
        cv2.imshow("Camera Image", cv_image)
        cv2.waitKey(3)

def main():
    rospy.init_node('image_converter', anonymous=True)
    ic = ImageConverter()
    try:
        rospy.spin()
    except KeyboardInterrupt:
        print("Shutting down")
    cv2.destroyAllWindows()

if __name__ == '__main__':
    main()
</code></pre>
<p><code>img_show_depth.py</code>:</p>
<pre><code>#!/usr/bin/env python3
import rospy
import cv2
import numpy as np
from sensor_msgs.msg import Image
from cv_bridge import CvBridge, CvBridgeError

class DepthImageConverter:
    def __init__(self):
        self.bridge = CvBridge()
        self.image_sub = rospy.Subscriber("/camera/depth/image_raw", Image, self.callback)

    def callback(self, data):
        try:
            # Convert the ROS Image message to a CV2 depth image
            cv_image = self.bridge.imgmsg_to_cv2(data, desired_encoding="passthrough")
        except CvBridgeError as e:
            print(e)
            return
        # Normalize the depth image to fall within 0-255 and convert it to uint8
        cv_image_norm = cv2.normalize(cv_image, None, 0, 255, cv2.NORM_MINMAX)
        depth_map = cv_image_norm.astype(np.uint8)
        # Display the depth image in an OpenCV window
        cv2.imshow("Depth Image", depth_map)
        cv2.waitKey(3)

def main():
    rospy.init_node('depth_image_converter', anonymous=True)
    dic = DepthImageConverter()
    try:
        rospy.spin()
    except KeyboardInterrupt:
        print("Shutting down")
    cv2.destroyAllWindows()

if __name__ == '__main__':
    main()
</code></pre>
<p><code>gazebo.launch</code>:</p>
<pre><code><?xml version="1.0"?>
<launch>
<param name="robot_description" command="xacro '$(find forklift)/urdf/forklift.urdf.xacro'"/>
<param name="pallet_obj" command="xacro '$(find pallet)/urdf/pallet.urdf.xacro'"/>
<node name="robot_state_publisher" pkg="robot_state_publisher" type="robot_state_publisher"/>
<node name="joint_state_publisher_gui" pkg="joint_state_publisher_gui" type="joint_state_publisher_gui"/>
<include file="$(find gazebo_ros)/launch/empty_world.launch">
<arg name="world_name" value="$(find env_world)/world/test.world"/>
<arg name="paused" value="false"/>
<arg name="use_sim_time" value="true"/>
<arg name="gui" value="true"/>
<arg name="headless" value="false"/>
<arg name="debug" value="false"/>
</include>
<node name="spawning_forklift" pkg="gazebo_ros" type="spawn_model" args="-urdf -model forklift -param robot_description -z 0.160253"/>
<node name="spawning_pallet" pkg="gazebo_ros" type="spawn_model" args="-urdf -model pallet -param pallet_obj -x 5 -z 0.001500 "/>
<node name="rviz" pkg="rviz" type="rviz" args="-d $(find forklift)/rviz/cam.rviz" required="true" />
<!--
<? default rviz ?>
<node name="rviz" pkg="rviz" type="rviz" args="-d $(find realsense2_description)/rviz/urdf.rviz" required="true" />
-->
<node name="img" pkg="img" type="img_show.py" output="screen" args="$(find img)/src/img_show.py"/>
<node name="img_depth" pkg="img" type="img_show_depth.py" output="screen" args="$(find img)/src/img_show_depth.py"/>
<!--
<node name="img_both" pkg="img" type="img_show_both.py" output="screen" args="$(find img)/src/img_show_both.py"/>
-->
</launch>
</code></pre>
|
<python><ros><realsense><gazebo-simu><rviz>
|
2024-08-20 06:28:43
| 1
| 421
|
brian2lee
|
78,890,711
| 1,176,573
|
Bar chart not updated when button is selected for a plotly python graph
|
<p>The Plotly code below renders a bar chart with buttons to select either <code>stack</code> or <code>group</code> mode, but the chart does not update its style when a button is selected. How do I fix this?</p>
<pre><code>fig1 = go.Figure()

x = temp_df.index
y1 = temp_df['Sales'].astype(float)             # Sales column
y2 = temp_df['Expenses'].astype(float)          # Expenses column
y3 = temp_df['Operating Profit'].astype(float)  # Operating Profit column
y4 = temp_df['Net Profit'].astype(float)        # Net Profit column

fig1.add_trace(go.Bar(x=x, y=y1, name='Sales', marker_color="#4287f5"))
fig1.add_trace(go.Bar(x=x, y=y2, name='Expenses', marker_color="#f06969"))
fig1.add_trace(go.Bar(x=x, y=y3, name='Operating Profit', marker_color="#469984"))
fig1.add_trace(go.Bar(x=x, y=y4, name='Net Profit', marker_color="#53c24f"))

fig1.update_layout(
    title='<b>QoQ Performance</b>',
    xaxis_title='Quarter',
    yaxis_title='Rupees in Cr.')

updatemenus = [
    dict(
        type="buttons",
        buttons=list([
            dict(
                args=['barmode', 'Group'],
                label="Group",
                method="update",
            ),
            dict(
                args=["barmode", "stack"],
                label="Stack",
                method="update"
            )
        ])
    ),
]

fig1.update_layout(updatemenus=updatemenus)
fig1.show()
</code></pre>
<p><a href="https://i.sstatic.net/HiCZFgOy.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/HiCZFgOy.png" alt="enter image description here" /></a></p>
<p>Reference Dataframe. Index is <code>Quarter</code>.</p>
<div class="s-table-container"><table class="s-table">
<thead>
<tr>
<th>Quarter</th>
<th>Sales</th>
<th>Expenses</th>
<th>Operating Profit</th>
<th>Net Profit</th>
</tr>
</thead>
<tbody>
<tr>
<td>q1</td>
<td>674.3</td>
<td>529.47</td>
<td>144.83</td>
<td>48.81</td>
</tr>
<tr>
<td>q2</td>
<td>634.32</td>
<td>498.08</td>
<td>136.24</td>
<td>45.91</td>
</tr>
<tr>
<td>q3</td>
<td>338.17</td>
<td>265.54</td>
<td>72.63</td>
<td>24.48</td>
</tr>
<tr>
<td>q4</td>
<td>1209.9</td>
<td>949.99</td>
<td>259.86</td>
<td>87.57</td>
</tr>
</tbody>
</table></div>
|
<python><plotly>
|
2024-08-20 05:16:01
| 1
| 1,536
|
RSW
|
78,890,700
| 1,447,953
|
Bokeh simple data selection slider without python or javascript callback?
|
<p>Is it possible to do simple dynamic selection of data with a slider in Bokeh without a custom Python callback? Here is what I can do using a callback, but it would need a Bokeh server to work in exported html:</p>
<pre><code>import numpy as np
import pandas as pd
import ipywidgets as widgets
from bokeh.io import show, push_notebook
from bokeh.plotting import figure
from bokeh.models import ColumnDataSource

x = np.arange(0, 10, 0.1)
dfs = []
ts = range(100)
for t in ts:
    y = x**(t/50.)
    dfs.append(pd.DataFrame({"x": x, "y": y, "t": t}))
df = pd.concat(dfs)

cds = ColumnDataSource(df)
p = figure(x_range=(0, 10), y_range=(0, 100))
p.scatter(x='x', y='y', source=cds)
handle = show(p, notebook_handle=True)

def update(t):
    cds.data = df[df.t == t]
    push_notebook(handle=handle)

slider = widgets.SelectionSlider(value=0, options=ts)
io = widgets.interactive_output(update, {"t": slider})
display(slider, io)
</code></pre>
<p><a href="https://i.sstatic.net/2fgq3AhM.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/2fgq3AhM.png" alt="enter image description here" /></a></p>
<p>However it really feels like a custom callback should not be necessary for this, and it is quite a pain in my larger application. Is there not some clever way it can be done purely with Bokeh-native objects, like a CDSView or filter or something? I'm sure some custom javascript can do it too, but again I'd rather not if possible.</p>
|
<python><jupyter-notebook><bokeh>
|
2024-08-20 05:11:42
| 1
| 2,974
|
Ben Farmer
|
78,890,489
| 2,604,247
|
Do Dataframes with a Common Lazy Ancestor Mean Repeated Computation?
|
<h4>Dependency DAG</h4>
<p><a href="https://i.sstatic.net/wiB44IpY.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/wiB44IpY.png" alt="enter image description here" /></a></p>
<h6>Description</h6>
<p>Pretty straightforward: I am reading some Parquet files from disk using Polars, which are the source of the data, doing some moderately heavy processing (a few million rows) to generate an intermediate dataframe, and then generating two results that need to be written back to a database.</p>
<h6>Technology Stack</h6>
<ul>
<li>Ubuntu 22.04</li>
<li>Python 3.10</li>
<li>Polars 1.2.1</li>
</ul>
<h6>Question</h6>
<p>Polars recommends using lazy evaluation as far as possible to optimise execution. Now, the final results (<code>result_1</code> and <code>result_2</code>) obviously have to be materialised.</p>
<p>But if I call these two in sequence</p>
<pre class="lang-py prettyprint-override"><code>#! /usr/bin/env python3
# encoding: utf-8
import polars as pl
...
result_1.collect() # Materialise result 1
result_2.collect() # Materialise result 2
</code></pre>
<p>Is the transformation from the source to the intermediate frame (the common ancestor) repeated? If so, that is clearly undesirable. In that case, I would have to materialise the intermediate frame and then do the rest of the processing in eager mode.</p>
<p>Any documentation from polars on the expected behaviour and recommended practices around this scenario?</p>
|
<python><dataframe><lazy-evaluation><python-polars><directed-acyclic-graphs>
|
2024-08-20 03:17:11
| 2
| 1,720
|
Della
|
78,890,441
| 1,431,690
|
wordcloud with 2 background colors
|
<p>I generated this on wordcloud.com using one of the "themes".</p>
<p><img src="https://i.sstatic.net/51lNzsZH.png" alt="enter image description here" /></p>
<p>I'd like to be able to do this with the Python wordcloud library, but so far all I can achieve is a single background color (so all black, not grey and black). Can anyone give me a hint on how to add the additional background color, using matplotlib or imshow? Here is my code:</p>
<pre><code>import numpy as np
import matplotlib.pyplot as plt
from pathlib import Path
from PIL import Image
from wordcloud import WordCloud
tmp = "some text about Dynasty TV show"
alexis_mask = np.array(Image.open('resources/alexis-poster-3.png'))
print(repr(alexis_mask))
alexis_mask[alexis_mask == 0] = 255

def color_func(word, font_size, position, orientation, random_state=None,
               **kwargs):
    return "hsl(0, 100%, 27%)"

wc = WordCloud(background_color="black", mask=alexis_mask, max_words=500, contour_width=2, contour_color="black")
wc.generate(tmp)

plt.figure(figsize=(28, 20))
plt.imshow(wc.recolor(color_func=color_func, random_state=3), interpolation="bilinear")
plt.imshow(wc)
</code></pre>
<p>I've tried starting with a black/white image, and with a black/white/grey image. So far neither works. I don't think it's offered in the <a href="https://github.com/amueller/word_cloud" rel="nofollow noreferrer">Wordcloud library</a> but is it something I could do using imshow(), after I apply wordcloud? Thanks.</p>
|
<python><numpy><matplotlib><imshow><word-cloud>
|
2024-08-20 02:49:45
| 1
| 688
|
nettie
|
78,890,438
| 2,966,723
|
Creating a mutable substitute for `int` and other immutable types
|
<p>I have a parameter, <code>N</code>, that shows up in some Python-based simulations. There are several things that will be set to be equal to <code>N</code>. The code relies on packages that will run things expecting <code>N</code> to be an integer. (I have other parameters that are floats, but let's focus on the integer case.)</p>
<p>I would like to be able to change the value of <code>N</code> later and then rerun some calculations such that everything that I've set to be <code>N</code> has its value updated.</p>
<p>So I want to do something along the lines of</p>
<pre><code>class intparameter(int):
    def __new__(self, x):
        instance = int.__new__(self, x)
        self.update = lambda self, y: instance = intparameter(y)  # yes, this doesn't work but hopefully it's clear what I want to happen
        return instance
N = intparameter(500)
<run some commands here which involves making some things equal N>
<run commands>
N.update(5000)
<run new commands here, with the new value of N automatically used>
</code></pre>
<p>The key thing is that I need <code>N</code> to be treated as if it's an integer in mathematical calculations, except when I want to use <code>N.update</code> to update the value of that integer.</p>
<p>I doubt this is possible, but it would make my code much cleaner.</p>
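A true `int` subclass cannot do this, because `int` instances are immutable once created. One workable sketch is a plain wrapper class that forwards the coercion and arithmetic hooks Python actually consults (`IntParameter`, `update`, and the set of dunders below are all hypothetical names and choices, not from any library):

```python
class IntParameter:
    """Mutable stand-in for an int (illustrative sketch, not a full int replacement).

    update() changes the value seen by every reference to this object.
    It only behaves like an int where Python coerces via __index__/__int__
    or where the arithmetic dunders below are sufficient.
    """

    def __init__(self, value):
        self._value = int(value)

    def update(self, value):
        self._value = int(value)

    def __index__(self):  # used by range(), slicing, numpy shapes, ...
        return self._value

    def __int__(self):
        return self._value

    def __add__(self, other):
        return self._value + other

    __radd__ = __add__

    def __mul__(self, other):
        return self._value * other

    __rmul__ = __mul__

    def __repr__(self):
        return repr(self._value)


N = IntParameter(500)
```

Any library call that type-checks with `isinstance(x, int)` or stores the coerced value will still see a snapshot, so whether this is enough depends on how the packages consume `N`.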
|
<python><class>
|
2024-08-20 02:47:11
| 2
| 24,012
|
Joel
|
78,890,167
| 1,732,418
|
How could the content of pip install and PyPI download be different? is pip broken?
|
<p>Today I tried to install the latest pymavlink package:</p>
<p><a href="https://pypi.org/project/pymavlink/" rel="nofollow noreferrer">https://pypi.org/project/pymavlink/</a></p>
<pre><code>pip install pymavlink
Collecting pymavlink
Using cached pymavlink-2.4.41-py3-none-any.whl.metadata (6.2 kB)
Requirement already satisfied: future in /home/shared/conda/envs/swarm/lib/python3.10/site-packages (from pymavlink) (1.0.0)
Requirement already satisfied: lxml in /home/shared/conda/envs/swarm/lib/python3.10/site-packages (from pymavlink) (5.3.0)
Using cached pymavlink-2.4.41-py3-none-any.whl (11.6 MB)
Installing collected packages: pymavlink
Successfully installed pymavlink-2.4.41
</code></pre>
<p>after installation I found that the source directory is surprisingly small, it only contains the following files:</p>
<pre><code>/dialects /generator /message_definitions /__pycache__ /CSVReader.py /DFReader.py /fgFDM.py /__init__.py /mavexpression.py /mavextra.py /mavparm.py /mavtestgen.py /mavutil.py /mavwp.py /quaternion.py /rotmat.py /setup.py
</code></pre>
<p>But when I download it directly from PyPI, it contains the following files:</p>
<pre><code>/dialects /generator /message_definitions /pymavlink.egg-info /tests /tools /COPYING /CSVReader.py /DFReader.py /fgFDM.py /__init__.py /mavexpression.py /mavextra.py /mavparm.py /mavtestgen.py /mavutil.py /mavwp.py /PKG-INFO /pyproject.toml /quaternion.py /README.md /rotmat.py /setup.cfg /setup.py
</code></pre>
<p>The pip version has several directories missing. How could this happen? How can I prevent it?</p>
<p><strong>UPDATE 1</strong>: fearing that it may be caused by defective publishing, it has also been posted on pymavlink issue tracker:</p>
<p><a href="https://github.com/ArduPilot/pymavlink/issues/969" rel="nofollow noreferrer">https://github.com/ArduPilot/pymavlink/issues/969</a></p>
|
<python><pip><pypi>
|
2024-08-20 00:05:38
| 1
| 3,992
|
tribbloid
|
78,890,076
| 2,362,671
|
how to configure basedpyright to work with numpy.typing
|
<p><code>basedpyright</code> can't resolve a simple <code>import numpy.typing</code>. What should I add to my configuration to get it to work?</p>
<p>For instance, if I have a file <code>a.py</code> with a single line:</p>
<pre><code>import numpy.typing as npt
</code></pre>
<p>I get this error when running <code>basedpyright</code>:</p>
<pre><code>> poetry run basedpyright a.py
/home/come/programming/python/basedpyright/a.py
/home/come/programming/python/basedpyright/a.py:1:8 - error: Import "numpy.typing" could not be resolved (reportMissingImports)
2 errors, 0 warnings, 0 notes
</code></pre>
<p>While <code>mypy</code> runs successfully on the same file:</p>
<pre><code>> poetry run mypy a.py
Success: no issues found in 1 source file
</code></pre>
<p>If I create my own virtual environment and run <code>mypy</code> and <code>basedpyright</code> directly, without <code>poetry</code>, I get the same result.</p>
<p>I checked that I do have a <code>typing</code> subdirectory in numpy:</p>
<pre><code>> ls ~/venv/lib/python3.12/site-packages/numpy/typing/
__init__.py mypy_plugin.py __pycache__ tests
</code></pre>
<p><code>basedpyright</code> seems to work as expected for everything else. It's only with <code>numpy.typing</code> that I have a problem.</p>
<p>This is the <code>pyproject.toml</code> that I am using:</p>
<pre><code>[tool.poetry]
name = "demo"
version = "0.0.1"
description = "This is the python version of the demo package"
authors = ["Come Raczy <come@omtx.ai>"]
[tool.poetry.dependencies]
numpy = ">=2.1.0"
python = ">=3.12,<4.0"
[tool.poetry.group.dev.dependencies]
mypy = ">=1.11.1"
data-science-types = "^0.2.23"
typing-extensions = "^4.12.2"
[tool.mypy]
plugins = "numpy.typing.mypy_plugin"
files = ["."]
[tool.pyright]
typeCheckingMode = "basic"
</code></pre>
|
<python><numpy><python-typing><pyright>
|
2024-08-19 23:16:40
| 1
| 1,700
|
Come Raczy
|
78,889,983
| 23,626,926
|
argparse option to create a mapping
|
<p>C compilers have the <code>-D</code> switch to define a preprocessor constant at compilation time. It is called like <code>-D name=value</code>, or <code>-D name</code> and <code>value</code> defaults to 1.</p>
<p>Can a similar thing be done with python argparse? That is, can you create an option that meets these criteria?</p>
<ul>
<li>The argument can be accepted multiple times</li>
<li>The argument takes a key-value pair</li>
<li>If the key is present but the value is not it should use the default</li>
<li>The result is a mapping (Python dict or similar)</li>
</ul>
<p>Desired usage:</p>
<pre class="lang-py prettyprint-override"><code>ap = argparse.ArgumentParser()
ap.add_argument("-D",
    #### what goes here ? ####
    default=1
)
ap.parse_args(["-D", "foo=bar", "-D", "baz"])
# ==> Namespace(D={'foo': 'bar', 'baz': 1})
</code></pre>
<p>If I just use <code>action="append"</code>, I can not specify an automatic default in the <code>add_argument</code> call, and I additionally have to do the string processing myself on the resultant list, which is kind of hairy, and I would like to avoid that.</p>
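A custom `argparse.Action` subclass can do the key-value splitting and the defaulting in one place (the class name `KeyValue` is made up; the mechanism — overriding `__call__` and reusing the `const` slot for the default value — is standard argparse):

```python
import argparse

class KeyValue(argparse.Action):
    """Collect repeated -D name[=value] options into a dict; value defaults to const."""

    def __call__(self, parser, namespace, values, option_string=None):
        mapping = getattr(namespace, self.dest, None) or {}
        key, sep, value = values.partition("=")
        mapping[key] = value if sep else self.const  # bare key -> default
        setattr(namespace, self.dest, mapping)

ap = argparse.ArgumentParser()
ap.add_argument("-D", action=KeyValue, const=1, default={})
ns = ap.parse_args(["-D", "foo=bar", "-D", "baz"])
```

The `or {}` guard builds a fresh dict on first use, so the shared `default={}` object is never mutated across parses.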
|
<python><dictionary><argparse>
|
2024-08-19 22:36:34
| 2
| 360
|
dragoncoder047
|
78,889,834
| 1,739,681
|
Use VS Code breakpoints with pantsbuild tests
|
<p>I have the following setup: Python project + pantsbuild + vscode.</p>
<p>I would like to add <a href="https://code.visualstudio.com/docs/editor/debugging#_breakpoints" rel="nofollow noreferrer">VS Code breakpoints</a> in my unit tests (unittest) while using the <code>pants test</code> command. I can already set breakpoints using the <code>debugpy</code> module, but I would prefer to use VS Code's built-in feature.</p>
<p>I created a <code>launch.json</code> file.</p>
<pre class="lang-json prettyprint-override"><code>{
"version": "0.2.0",
"inputs": [],
"configurations": [
{
"name": "Attach Pantsbuild",
"type": "python",
"request": "attach",
"justMyCode": false,
"connect": {
"host": "localhost",
"port": 5680
},
},
],
}
</code></pre>
<p>I am able to run the tests like this:</p>
<pre class="lang-none prettyprint-override"><code>./pants test src/test/python/checkin/test_utils.py --test-debug-adapter --debug-adapter-port=5680 -- -k test_parse_info
</code></pre>
<p>However, my breakpoints are not working because pants runs this in a sandbox and does not preserve breakpoints when it copies the file I'm testing to the temp folder.</p>
|
<python><visual-studio-code><python-unittest><pants><debugpy>
|
2024-08-19 21:23:18
| 1
| 305
|
julianofischer
|
78,889,767
| 20,591,261
|
Polars chain multiple operations on select() with value_counts()
|
<p>I'm working with a Polars dataframe and I want to perform a series of operations using the <code>.select()</code> method. However, I'm facing problems when I try to apply <code>value_counts()</code> followed by <code>unnest()</code> to get separate columns instead of a <code>struct</code> column.</p>
<p>If I just use the method alone, then I don't have any issues:</p>
<pre class="lang-py prettyprint-override"><code>(
    df
    .select(
        pl.col("CustomerID"),
        pl.col("Country").value_counts(sort=True).struct.rename_fields(["Country", "State"]).first().over("CustomerID")).unnest("Country")
    .unique(maintain_order=True)
)
</code></pre>
<p>But, since I'm doing a series of operations like this:</p>
<pre class="lang-py prettyprint-override"><code>(
    df
    .select(
        pl.col("CustomerID"),
        pl.col("Country").value_counts(sort=True).struct.rename_fields(["Country", "Count"]).first().over("CustomerID").unnest("Country"),
        Days_Since_Last_Purchase = pl.col("InvoiceDate").max() - pl.col("InvoiceDate").max().over("CustomerID"),
    )
    .unique(maintain_order=True)
)
</code></pre>
<p>I'm facing the following error:</p>
<p><code>AttributeError: 'Expr' object has no attribute 'unnest'</code></p>
<p>Example Data :</p>
<pre class="lang-py prettyprint-override"><code>import polars as pl
df = pl.read_csv(b"""
InvoiceNo,StockCode,Description,Quantity,InvoiceDate,UnitPrice,CustomerID,Country,Transaction_Status
541431,23166,MEDIUM CERAMIC TOP STORAGE JAR,74215,2011-01-18T10:01:00.000000,1.0399999618530273,12346,United Kingdom,Completed
C541433,23166,MEDIUM CERAMIC TOP STORAGE JAR,-74215,2011-01-18T10:17:00.000000,1.0399999618530273,12346,United Kingdom,Cancelled
537626,84997D,PINK 3 PIECE POLKADOT CUTLERY SET,6,2010-12-07T14:57:00.000000,3.75,12347,Iceland,Completed
537626,22729,ALARM CLOCK BAKELIKE ORANGE,4,2010-12-07T14:57:00.000000,3.75,12347,Iceland,Completed
537626,22492,MINI PAINT SET VINTAGE ,36,2010-12-07T14:57:00.000000,0.6499999761581421,12347,Iceland,Completed
537626,22727,ALARM CLOCK BAKELIKE RED ,4,2010-12-07T14:57:00.000000,3.75,12347,Iceland,Completed
537626,22774,RED DRAWER KNOB ACRYLIC EDWARDIAN,12,2010-12-07T14:57:00.000000,1.25,12347,Iceland,Completed
537626,22195,LARGE HEART MEASURING SPOONS,12,2010-12-07T14:57:00.000000,1.649999976158142,12347,Iceland,Completed
537626,22805,BLUE DRAWER KNOB ACRYLIC EDWARDIAN,12,2010-12-07T14:57:00.000000,1.25,12347,Iceland,Completed
537626,22771,CLEAR DRAWER KNOB ACRYLIC EDWARDIAN,12,2010-12-07T14:57:00.000000,1.25,12347,Iceland,Completed
""", try_parse_dates=True, schema_overrides={"CustomerID": pl.String})
</code></pre>
|
<python><dataframe><python-polars>
|
2024-08-19 21:02:48
| 3
| 1,195
|
Simon
|
78,889,666
| 810,815
|
Python package sumy throwing errors in Jupyter notebook
|
<p>I am trying to use sumy to summarize text but keep getting weird issues. I am using the following code in my Jupyter Notebook.</p>
<pre><code>import nltk
nltk.download('punkt')
nltk.download('punkt_tab')
import sumy
from sumy.parsers.plaintext import PlaintextParser
from sumy.nlp.tokenizers import Tokenizer
from sumy.summarizers.lsa import LsaSummarizer
# Your text to summarize
text = """
Your long text here...
"""
# Create a PlaintextParser object
parser = PlaintextParser.from_string(text, Tokenizer("english"))
# Create an LsaSummarizer object
summarizer = LsaSummarizer()
# Summarize the text
summary = summarizer(parser.document, 3) # Summarize to 3 sentences
# Print the summary
for sentence in summary:
    print(sentence)
</code></pre>
<p>But I get the error:</p>
<pre><code> UnpicklingError Traceback (most recent call last)
Cell In[6], line 6
3 from sumy.summarizers.lsa import LsaSummarizer
5 text = "Your long text here..."
----> 6 parser = PlaintextParser.from_string(text, Tokenizer("english"))
7 summarizer = LsaSummarizer()
8 summary = summarizer(parser.document, 3) # Summarize to 3 sentences
File ~/Desktop/sample_project/env/lib/python3.10/site-packages/sumy/nlp/tokenizers.py:160, in Tokenizer.__init__(self, language)
157 self._language = language
159 tokenizer_language = self.LANGUAGE_ALIASES.get(language, language)
--> 160 self._sentence_tokenizer = self._get_sentence_tokenizer(tokenizer_language)
161 self._word_tokenizer = self._get_word_tokenizer(tokenizer_language)
File ~/Desktop/sample_project/env/lib/python3.10/site-packages/sumy/nlp/tokenizers.py:172, in Tokenizer._get_sentence_tokenizer(self, language)
170 try:
171 path = to_string("tokenizers/punkt/%s.pickle") % to_string(language)
--> 172 return nltk.data.load(path)
173 except (LookupError, zipfile.BadZipfile) as e:
174 raise LookupError(
175 "NLTK tokenizers are missing or the language is not supported.\n"
176 """Download them by following command: python -c "import nltk; nltk.download('punkt')"\n"""
177 "Original error was:\n" + str(e)
178 )
File ~/Desktop/sample_project/env/lib/python3.10/site-packages/nltk/data.py:763, in load(resource_url, format, cache, verbose, logic_parser, fstruct_reader, encoding)
761 resource_val = opened_resource.read()
762 elif format == "pickle":
--> 763 resource_val = restricted_pickle_load(opened_resource.read())
764 elif format == "json":
765 import json
File ~/Desktop/sample_project/env/lib/python3.10/site-packages/nltk/data.py:667, in restricted_pickle_load(string)
662 """
663 Prevents any class or function from loading.
664 """
665 from nltk.app.wordnet_app import RestrictedUnpickler
--> 667 return RestrictedUnpickler(BytesIO(string)).load()
662 def find_class(self, module, name):
663 # Forbid every function
--> 664 raise pickle.UnpicklingError(f"global '{module}.{name}' is forbidden")
UnpicklingError: global 'copy_reg._reconstructor' is forbidden
</code></pre>
|
<python>
|
2024-08-19 20:26:55
| 2
| 9,764
|
john doe
|
78,889,556
| 6,930,340
|
Create date_range with predefined number of periods in polars
|
<p>When I create a date range in pandas, I often use the <code>periods</code> argument. Something like this:</p>
<pre class="lang-py prettyprint-override"><code>pd.date_range(start='1/1/2018', periods=8)
</code></pre>
<pre><code>DatetimeIndex(['2018-01-01', '2018-01-02', '2018-01-03', '2018-01-04',
'2018-01-05', '2018-01-06', '2018-01-07', '2018-01-08'],
dtype='datetime64[ns]', freq='D')
</code></pre>
<p>What would be the equivalent way in polars? I am missing the <code>periods</code> input parameter in <code>pl.date_range</code>.</p>
<p>Having said that, there's probably an easy and clever solution ;-)</p>
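One simple route is deriving the end date from `periods` with the standard library and passing start/end to polars (the trailing `pl.date_range` call is shown as a comment because it depends on your polars version's signature):

```python
from datetime import date, timedelta

# pl.date_range has no `periods` argument, but the end date is easy to derive:
start = date(2018, 1, 1)
periods = 8
end = start + timedelta(days=periods - 1)  # inclusive range of `periods` days

# then, e.g. (assumed polars call matching pl.date_range's start/end form):
# pl.date_range(start, end, interval="1d", eager=True)
```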
|
<python><python-polars>
|
2024-08-19 19:55:22
| 1
| 5,167
|
Andi
|
78,889,546
| 7,685,367
|
PyCharm Pro 2024.1.1 - Not stopping on all breakpoints
|
<p>I am using Pycharm Professional Ver 2024.1.1, in Windows 11, and it is not stopping on breakpoints defined in called/sub functions when 'stepping over' with F8. To recreate this simply use the sample code created with a new PyCharm project:</p>
<pre><code>1 def print_hi(name):
2 print(f'Hi, {name}') # Press Ctrl+F8 to toggle the breakpoint.
3
4 if __name__ == '__main__':
5 print_hi('PyCharm')
</code></pre>
<p>Place a break point on the print (Line 2) statement and the print_hi (Line 5) command.</p>
<p>If I hit F9 a few times it will stop on both break points.</p>
<p>If I hit F9, wait for PyCharm debugger to stop on the line 5 and then hit F8, it will not stop on the break point that is set on line 2. This kinda makes sense as I am hitting F8 to 'step over' the function but I seem to recall that this was not the behavior in previous versions. Also F9 is also a step over function so why doesn't that key bypass the breakpoints?</p>
<p>Is there anyway to tell PyCharm to stop on all break points regardless of the step over -vs- step into key that I press?</p>
|
<python><pycharm>
|
2024-08-19 19:50:40
| 0
| 1,057
|
vscoder
|
78,889,486
| 22,062,869
|
Preserving DataFrame subclass type during pandas groupby().aggregate()
|
<p>I'm subclassing <code>pandas DataFrame</code> in a project of mine. Most <code>pandas</code> operations preserve the subclass type, but <code>df.groupby().agg()</code> does not. Is this a bug? Is there a known workaround?</p>
<pre><code>import pandas as pd
class MySeries(pd.Series):
    pass

class MyDataFrame(pd.DataFrame):
    @property
    def _constructor(self):
        return MyDataFrame

    _constructor_sliced = MySeries

MySeries._constructor_expanddim = MyDataFrame
df = MyDataFrame({"a": reversed(range(10)), "b": list('aaaabbbccc')})
print(type(df.groupby("b").sum()))
# <class '__main__.MyDataFrame'>
print(type(df.groupby("b").agg({"a": "sum"})))
# <class 'pandas.core.frame.DataFrame'>
</code></pre>
<p>It looks like there was an <a href="https://github.com/pandas-dev/pandas/issues/28330" rel="nofollow noreferrer">issue</a> (described <a href="https://stackoverflow.com/questions/57796464/pandas-groupby-resample-etc-for-subclassed-dataframe">here</a>) that fixed subclassing for df.groupby, but as far as I can tell df.groupby().agg() was missed. I'm using pandas version <code>2.0.3</code>.</p>
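Until the upstream behaviour changes, one pragmatic workaround is to re-wrap the `agg()` result in the subclass explicitly — a sketch with a toy frame (the data here is made up, and only the `_constructor` override is shown):

```python
import pandas as pd

class MyDataFrame(pd.DataFrame):
    @property
    def _constructor(self):
        return MyDataFrame

df = MyDataFrame({"a": range(4), "b": list("aabb")})

# groupby().agg() returns a plain DataFrame, so re-wrap the result explicitly
agg = MyDataFrame(df.groupby("b").agg({"a": "sum"}))
```

Wrapping copies only the frame's metadata/reference, so it is cheap, but it does have to be repeated at every `agg()` call site.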
|
<python><pandas><dataframe><group-by><subclass>
|
2024-08-19 19:30:00
| 2
| 395
|
rasputin
|
78,889,374
| 1,631,414
|
How do I write a regex to capture a block of text up to a blank line?
|
<p>I'm trying to capture a block of text from a linux command that's delimited by a blank line. However, I'm having trouble with the regex matching past the first line.</p>
<p>So the output might look like this where the blocks are of variable length.
What should the regex be to match past the first line and to match until the blank line?</p>
<p>This is my regex that I thought would logically match but it doesn't:</p>
<blockquote>
<p>[0-9a-f]{2}:[0-9a-f]{2}.[0-9a-f].*^$</p>
</blockquote>
<p>If I remove the ^$, it matches up to the end of the line but no more. If I add the ^$, it doesn't match to the blank line like I thought it would. All other regexes I tried also don't work, although I don't want to clog up this question with everything else I tried.</p>
<p>I'm writing a script in python but I'm testing the regex manually in the unix less command since it's easier to debug and perform trial and error.</p>
<pre><code>00:00.4 Host bridge: Intel Corporation Ice Lake IEH
Subsystem: Intel Corporation Device 0000
Flags: fast devsel, NUMA node 0, IOMMU group 3
Capabilities: [40] Express Root Complex Integrated Endpoint, MSI 00
00:01.0 System peripheral: Intel Corporation Ice Lake CBDMA [QuickData Technology]
Subsystem: Intel Corporation Device 0000
Flags: bus master, fast devsel, latency 0, IRQ 255, NUMA node 0, IOMMU group 4
Memory at 38fffff50000 (64-bit, non-prefetchable) [disabled] [size=16K]
Capabilities: [40] Express Root Complex Integrated Endpoint, MSI 00
Capabilities: [80] Power Management version 3
Capabilities: [ac] MSI-X: Enable- Count=1 Masked-
Capabilities: [100] Advanced Error Reporting
Capabilities: [1d0] Latency Tolerance Reporting
Kernel modules: ioatdma
</code></pre>
<p>With this sample output above, say I want to capture the line from 00:00.4 to the blank line, which is 4 lines of text, which is,</p>
<pre><code>00:00.4 Host bridge: Intel Corporation Ice Lake IEH
Subsystem: Intel Corporation Device 0000
Flags: fast devsel, NUMA node 0, IOMMU group 3
Capabilities: [40] Express Root Complex Integrated Endpoint, MSI 00
</code></pre>
<p>what would the regex need to be to do that?</p>
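In Python's `re`, the usual recipe is `re.MULTILINE` (so `^` anchors at every line start) plus `re.DOTALL` (so `.` crosses newlines) plus a lazy `.*?` stopped by a lookahead for a blank line or end of input. A sketch against a shortened copy of the output above:

```python
import re

text = """00:00.4 Host bridge: Intel Corporation Ice Lake IEH
\tSubsystem: Intel Corporation Device 0000
\tFlags: fast devsel, NUMA node 0, IOMMU group 3
\tCapabilities: [40] Express Root Complex Integrated Endpoint, MSI 00

00:01.0 System peripheral: Intel Corporation Ice Lake CBDMA [QuickData Technology]
\tSubsystem: Intel Corporation Device 0000
"""

# MULTILINE: ^ matches at each line start.  DOTALL: .*? can cross newlines.
# The lazy .*? stops at the first blank line (or end of input) via the lookahead.
block_re = re.compile(
    r"^[0-9a-f]{2}:[0-9a-f]{2}\.[0-9a-f].*?(?=\n\s*\n|\Z)",
    re.MULTILINE | re.DOTALL,
)
blocks = block_re.findall(text)
```

This also explains why `.*^$` alone failed: without `DOTALL`, `.` stops at the first newline, and `^$` only works as a blank-line marker under `MULTILINE`.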
|
<python><regex><blank-line><less-unix>
|
2024-08-19 18:52:57
| 1
| 6,100
|
Classified
|
78,889,365
| 11,065,874
|
how to make sub-shell env variable available in the main shell python console?
|
<p>I open a python console and run</p>
<pre><code>>>> import os
>>> os.system("export aaa=123")
>>> os.environ["aaa"]
</code></pre>
<p>I see a key error. Why is that?
How can I preserve the env variables across the entire process?</p>
<p>note that this is not the right answer:</p>
<pre><code>>>> import os
>>> os.environ['abc'] = '123'
>>> os.system('echo $abc')
123
</code></pre>
<p>The purpose of this question is to demonstrate a situation in which a sub-shell generates an env variable that one might want to share with the parent shell.</p>
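A child process always receives a *copy* of the environment, so its `export` dies with it. One common workaround (a sketch, assuming a POSIX `sh` is available) is to have the child print its final environment and parse it back in the parent:

```python
import os
import subprocess

# The child shell's `export` can never reach the parent process directly,
# so capture the child's environment on stdout and parse it back instead.
out = subprocess.run(
    ["sh", "-c", "export aaa=123; env"],
    capture_output=True, text=True, check=True,
).stdout

child_env = dict(
    line.split("=", 1) for line in out.splitlines() if "=" in line
)
os.environ["aaa"] = child_env["aaa"]  # now visible in this Python process
```

Note this simple parse breaks on multi-line environment values; for anything robust, have the child print only the specific variables you need.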
|
<python><linux><bash><shell><sh>
|
2024-08-19 18:50:43
| 4
| 2,555
|
Amin Ba
|
78,889,355
| 2,036,035
|
git filter repo, can't find python on windows, despite it being in the path and being able to run python from cmd
|
<p>Similar to this, but not the same issue (all the steps prior were followed, python is provably on path) <a href="https://stackoverflow.com/q/65626875/">git filter repo - Python was not found - but it's installed</a></p>
<p>Basically I try to run git filter-repo, and while the <em>script</em> is recognized, python cannot be found, saying:</p>
<pre><code>Python was not found; run without arguments to install from the Microsoft Store, or disable this shortcut from Settings > Manage App Execution Aliases.
</code></pre>
<p>despite being able to run python:</p>
<pre><code>C:\Users\user\Documents>python
Python 3.11.9 (tags/v3.11.9:de54cf5, Apr 2 2024, 10:12:12) [MSC v.1938 64 bit (AMD64)] on win32
Type "help", "copyright", "credits" or "license" for more information.
>>>
</code></pre>
<p>Why can't it find python?</p>
|
<python><windows><git><path><git-filter-repo>
|
2024-08-19 18:47:22
| 1
| 5,356
|
Krupip
|
78,889,292
| 2,397,542
|
Azure Python SDK - how can I know the latest API version for a resource ID?
|
<p>I'm using the python Azure SDK, and for things like subnets, ipConfiguration objects will have a generic resource ID for the attached device - sometimes it's a NIC, or a PEP or something else. I know I can use the azure-mgmt-resources.ResourceManagementClient to get a generic resource object for one of these, BUT it requires an api_version which must be a valid version for the specific type of object!</p>
<p>Is there an API somewhere in the SDK to find these? e.g. API version for "Microsoft.Network/privateEndpoints" is "2024-01-01"</p>
<p>I have a lookup table for ones I use commonly, but it feels dirty.</p>
|
<python><azure><azure-sdk-python>
|
2024-08-19 18:23:44
| 1
| 832
|
AnotherHowie
|
78,889,184
| 2,516,862
|
numpy polynomial roots incorrect compared to Matlab?
|
<p>I would like to figure out the roots of the polynomial <code>x^2 - 800x + 160000</code>, which has a multiple root of 400.</p>
<p>In Matlab, if I type <code>roots([1 -800 160000])</code>, I correctly get <code>ans = 400.0000, 400.0000</code>, whereas numpy <code>np.roots([1, -800, 160000])</code> gives me <code>array([400.+2.69940682e-06j, 400.-2.69940682e-06j])</code>.</p>
<p>Why does numpy show some sort of floating point precision issue when matlab does not? Do they not both use an algorithm based on the companion matrix? How can I reliably determine, in Python, that the "actual" roots are 400, as Matlab seems to be doing?</p>
<p>Edit: This is a "toy" example that I am facing in a larger, more complex algorithm that is not necessarily faced with only real roots and multiple roots i.e. I do not actually know in advance, in my more complicated case, that the root is multiple and real.</p>
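Multiple roots are inherently ill-conditioned for companion-matrix eigenvalue solvers, so tiny spurious imaginary parts are expected (Matlab simply happens to land on the real answer here). One pragmatic cleanup, sketched below with an assumed tolerance, is to discard imaginary parts that are negligible relative to each root's magnitude:

```python
import numpy as np

coeffs = [1, -800, 160000]
roots = np.roots(coeffs)  # companion-matrix eigenvalues, like Matlab's roots()

# Drop imaginary parts that are tiny relative to the root magnitude.
# `tol` is a judgment call, not a universal constant.
tol = 1e-7
negligible = np.abs(roots.imag) <= tol * np.maximum(np.abs(roots), 1.0)
cleaned = np.where(negligible, roots.real, roots)
```

This does not require knowing in advance that the roots are real or multiple; genuinely complex roots keep their imaginary parts because they fail the `negligible` test.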
|
<python><numpy><polynomials>
|
2024-08-19 17:53:14
| 2
| 1,420
|
Gary Allen
|
78,889,157
| 7,166,834
|
Functioning of rate limiter for API requests
|
<p>I am trying to limit my API calls based on the limit rates below. To achieve this I am trying to use <code>ratelimit</code> from Python.</p>
<ul>
<li>Per second: 25 requests</li>
<li>Per minute: 250 requests</li>
<li>Per 30 minutes: 1,000 requests</li>
</ul>
<p>I am able to use the per second limit based on rate limit as shown below:</p>
<pre><code>from ratelimit import limits, sleep_and_retry
SEC_CALLS = 10
SEC_RATE_LIMIT = 1
@sleep_and_retry
@limits(calls=SEC_CALLS, period=SEC_RATE_LIMIT)
def check_limit_secs():
    print(f'sec time: {time.time()}')
    return

##Function Code:
for i in ID:
    check_limit_secs()
    url = 'https://xxxxxx{}'.format(ID)
    headers = {
        'Accept': 'application/json'
    }
    response = requests.get(url, headers=headers)
    # Check the response status
    if response.status_code == 200:
        # Do something with the response data (e.g., print it)
        print(i)
        # print(response.json())
    else:
        # Print an error message if the request was not successful
        print(f"Error: {response.status_code} - {response.text}")
</code></pre>
<p>This code stops me from hitting the API after 1 second, but I also want to limit it based on one minute and 30 minutes as well.</p>
<p>FYI: my code makes 25 requests per second, which means that after 10 seconds the per-minute limit would be hit. My code should then stop for about 50 seconds to satisfy the per-minute limit.</p>
<p>Do I have to make multiple calls using <code>ratelimit</code>?</p>
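Instead of stacking three separate `ratelimit` decorators (which is possible but makes the sleeps interact opaquely), one alternative is a single limiter that tracks all windows at once. The sketch below is a hand-rolled illustration, not part of the `ratelimit` library; the class name and the injectable `clock`/`sleep` hooks are made up for testability:

```python
import time
from collections import deque

class MultiWindowLimiter:
    """Blocking limiter enforcing several (max_calls, window_seconds) rules at once."""

    def __init__(self, rules, clock=time.monotonic, sleep=time.sleep):
        # one timestamp deque per rule
        self.rules = [(max_calls, window, deque()) for max_calls, window in rules]
        self.clock = clock
        self.sleep = sleep

    def acquire(self):
        while True:
            now = self.clock()
            wait = 0.0
            for max_calls, window, stamps in self.rules:
                while stamps and now - stamps[0] >= window:
                    stamps.popleft()          # drop calls that left the window
                if len(stamps) >= max_calls:  # rule saturated: compute the wait
                    wait = max(wait, window - (now - stamps[0]))
            if wait <= 0.0:
                for _, _, stamps in self.rules:
                    stamps.append(now)
                return
            self.sleep(wait)

# the question's limits: 25/s, 250/min, 1000 per 30 min
limiter = MultiWindowLimiter([(25, 1.0), (250, 60.0), (1000, 1800.0)])
```

Calling `limiter.acquire()` before each `requests.get` then automatically produces the ~50-second pause once the per-minute window fills.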
|
<python><rate-limiting>
|
2024-08-19 17:43:33
| 1
| 1,460
|
pylearner
|
78,889,140
| 4,265,321
|
transform DeviceArray into Array for jax
|
<p>I have a <code>.pkl</code> file I downloaded from a public GitHub repository and when I read it using <code>pickle.load</code>, i.e. using</p>
<pre class="lang-py prettyprint-override"><code>with open('filename.pkl', 'rb') as f:
    file_content = pickle.load(f)
</code></pre>
<p>I get the following error</p>
<pre class="lang-py prettyprint-override"><code>ModuleNotFoundError: No module named 'jax._src.device_array'
</code></pre>
<p>From e.g. <a href="https://stackoverflow.com/questions/76548069/pickle-load-gives-jax-related-error-no-matter-what">this StackOverflow question</a> I understand this is an issue with <code>jax</code> versions. Specifically, the <code>.pkl</code> file must have been created with a <code>jax</code> version <code><0.4</code>, while I am currently using <code>v0.4.31</code>.</p>
<p>I then proceed to create a separate <code>conda</code> environment installing <code>jax=0.3.25</code>, following @jakevdp 's answer in <a href="https://stackoverflow.com/questions/76548069/pickle-load-gives-jax-related-error-no-matter-what">that StackOverflow question</a>, and indeed I am able to load the <code>.pkl</code> file. Following again @jakevdp 's advice, I proceed to save the content of the file using <code>jax.numpy.save</code>, as follows:</p>
<pre class="lang-py prettyprint-override"><code>jnp.save('filename.npy', file_content)
</code></pre>
<p>which I can then read with</p>
<pre class="lang-py prettyprint-override"><code>file_content = jnp.load('filename.npy', allow_pickle=True)
</code></pre>
<p>However, the problem now is that I need to use the content of this file with a <code>jax</code> <code>v0.4.31</code>, given the constraints of my specific use case. This is a problem because I am then back to the same issue as above, namely that the DeviceArray is not recognised.</p>
<p>How can I convert the (many) DeviceArrays in the original file into Arrays for the newer version of <code>jax</code>?</p>
|
<python><jax>
|
2024-08-19 17:37:43
| 1
| 1,343
|
johnhenry
|
78,889,116
| 9,112,151
|
How to select only related model?
|
<p>I need to select only <code>Phone.user</code> when selecting the <code>Phone</code> model:</p>
<pre><code>from sqlalchemy import ForeignKey, select
from sqlalchemy import create_engine
from sqlalchemy.orm import DeclarativeBase, Session, relationship, joinedload
from sqlalchemy.orm import Mapped
from sqlalchemy.orm import mapped_column
engine = create_engine("sqlite:///:memory:", echo=True)
class Base(DeclarativeBase):
    pass


class User(Base):
    __tablename__ = "users"

    id: Mapped[int] = mapped_column(primary_key=True, autoincrement=True)
    first_name: Mapped[str]
    last_name: Mapped[str]
    phones: Mapped[list["Phone"]] = relationship(back_populates="user")


class Phone(Base):
    __tablename__ = "phones"

    id: Mapped[int] = mapped_column(autoincrement=True, primary_key=True)
    phone: Mapped[str]
    user_id: Mapped[int] = mapped_column(ForeignKey("users.id", ondelete="cascade"))
    user: Mapped["User"] = relationship(back_populates="phones")


Base.metadata.create_all(engine)

with Session(engine) as session:
    user = User(first_name="Ivan", last_name="Ivanov")
    phone = Phone(phone="79995437264", user=user)
    session.add(phone)
    session.commit()

    stmt = select(Phone.user).options(joinedload(Phone.user))
    res = session.scalars(stmt).all()
</code></pre>
<p>The code above produces following SQL query:</p>
<pre><code>SELECT users.id = phones.user_id AS user
FROM users, phones
</code></pre>
<p>How to select <code>Phone.user</code>?</p>
|
<python><sql><sqlite><sqlalchemy>
|
2024-08-19 17:28:09
| 1
| 1,019
|
Альберт Александров
|
78,889,055
| 6,141,238
|
In Python, why does preallocation of a numpy array fail to limit its printed precision?
|
<p>Here is a minimal example:</p>
<pre><code>import numpy as np
np.set_printoptions(linewidth=1000, precision=3)
# First attempt fails to limit the printed precision of x
x = np.array([None])
x[0] = 1/3
print(x)
# Second attempt succeeds
x = [None]
x[0] = 1/3
x = np.array(x)
print(x)
</code></pre>
<p>Running this script yields</p>
<pre><code>[0.3333333333333333]
[0.333]
</code></pre>
<p>Why does the "First attempt" above fail to limit the printed precision of <code>x</code> while the second attempt succeeds?</p>
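The difference comes from the dtype: `np.array([None])` infers `dtype=object`, and object arrays print each element with its own `repr`, ignoring `precision`. In the second attempt, `np.array(x)` sees a list of floats and builds a `float64` array, which honours the print options. A sketch contrasting the two, preallocating a real float array instead:

```python
import numpy as np

np.set_printoptions(precision=3)

x = np.array([None])   # dtype=object: printing defers to each element's repr,
x[0] = 1 / 3           # so precision=3 is ignored

y = np.empty(1, dtype=float)  # preallocate a true float array instead
y[0] = 1 / 3
```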
|
<python><numpy><printing><precision><numpy-ndarray>
|
2024-08-19 17:03:55
| 1
| 427
|
SapereAude
|
78,889,013
| 14,271,847
|
pywintypes.com_error when using getCellValue in SAP GUI Scripting with Python
|
<p>I'm working on a Python script that interacts with SAP GUI using the win32com.client module. My goal is to retrieve a specific cell value from a table within SAP. However, I'm encountering an error when I attempt to use the getCellValue method. Below is the traceback:</p>
<pre><code>Traceback (most recent call last):
File "C:\Users\aaaa\Desktop\test.py", line 69, in <module>
value = shell.getCellValue(1, "Status do sistema")
File "<COMObject <unknown>>", line 2, in getCellValue
pywintypes.com_error: (-2147352567, 'Exception occurred.', (0, 'SAP Frontend Server', '', None, 0, -2147024809), None)
</code></pre>
<p>Here is the relevant part of my code:</p>
<h1>My code</h1>
<pre><code>id = 'wnd[0]/usr/cntlCUSTOM/shellcont/shell/shellcont/shell'
shell = sap.session.findById(id)
value = shell.getCellValue(1, "Status do sistema")
print(value)
</code></pre>
<p>Based on the SAP GUI Scripting API documentation, the getCellValue method has the following signature in VBScript:</p>
<pre><code>Public Function GetCellValue( _
ByVal Row As Long, _
ByVal Column As String _
) As String
</code></pre>
<h1>What I've Tried</h1>
<ul>
<li><p>Verification: Ensured that SAP GUI scripting is enabled on my system.</p>
</li>
<li><p>ID Path: Double-checked the ID path
(wnd[0]/usr/cntlCUSTOM/shellcont/shell/shellcont/shell) to ensure it
correctly points to the table I want to interact with.</p>
</li>
<li><p>Column Reference: Tried using both the column name ("Status do
sistema") and the column index (e.g., 0, 1) to retrieve the value.</p>
</li>
<li><p>Testing Different Parameters: I also experimented with different row
indices and column names to see if the issue was specific to certain
cells.</p>
</li>
</ul>
<p>I expected the getCellValue method to return the value from the specified cell in the table, which I would then print to the console.</p>
<ul>
<li><p>Python Version: 3.11.8</p>
</li>
<li><p>SAP GUI Version: (SAP GUI version 770)</p>
</li>
<li><p>win32com.client Version: 306</p>
</li>
<li><p>Operating System: Windows 10</p>
</li>
</ul>
<h1>Question</h1>
<p>What could be causing this pywintypes.com_error when I try to use getCellValue? Is there something wrong with how I am referencing the column, or might there be an issue with the SAP GUI setup? I would appreciate any guidance on how to correctly retrieve the cell value.</p>
|
<python><com><win32com><sap-gui>
|
2024-08-19 16:50:28
| 1
| 429
|
sysOut
|
78,888,948
| 251,589
|
How to get `mypy` to raise errors/warnings about using the `typing` package instead of built in types
|
<p>Currently I have a bunch of code that does this:</p>
<pre class="lang-py prettyprint-override"><code>from typing import Dict
foo: Dict[str, str] = {}
</code></pre>
<p>In Python 3.9+, it is preferable to use the built-in types (<a href="https://mypy.readthedocs.io/en/stable/builtin_types.html#generic-types" rel="nofollow noreferrer">source</a>):</p>
<pre class="lang-py prettyprint-override"><code>foo: dict[str, str] = {}
</code></pre>
<p>Is there a way to configure <code>mypy</code> to raise an error/warning when my code uses <code>Dict</code> instead of <code>dict</code>?</p>
|
<python><python-typing><mypy>
|
2024-08-19 16:35:55
| 1
| 27,385
|
sixtyfootersdude
|
78,888,863
| 11,598,948
|
Compute the number of unique combinations while excluding those containing missing values
|
<p>I'd like to count the number of unique values when combining several columns at once. My idea so far was to use <code>pl.struct(...).n_unique()</code>, which works fine when I consider missing values as a unique value:</p>
<pre class="lang-py prettyprint-override"><code>import polars as pl
df = pl.DataFrame({
"x": ["a", "a", "b", "b"],
"y": [1, 1, 2, None],
})
df.with_columns(foo=pl.struct("x", "y").n_unique())
</code></pre>
<pre><code>shape: (4, 3)
┌─────┬──────┬─────┐
│ x   ┆ y    ┆ foo │
│ --- ┆ ---  ┆ --- │
│ str ┆ i64  ┆ u32 │
╞═════╪══════╪═════╡
│ a   ┆ 1    ┆ 3   │
│ a   ┆ 1    ┆ 3   │
│ b   ┆ 2    ┆ 3   │
│ b   ┆ null ┆ 3   │
└─────┴──────┴─────┘
</code></pre>
<p>However, sometimes I want to exclude a combination from the count if it contains any number of missing values. In the example above, I'd like <code>foo</code> to be 2. However, using <code>.drop_nulls()</code> before counting doesn't work and produces the same output as above.</p>
<pre class="lang-py prettyprint-override"><code>df.with_columns(foo=pl.struct("x", "y").drop_nulls().n_unique())
</code></pre>
<p>Is there a way to do this using only Polars expressions?</p>
|
<python><python-polars>
|
2024-08-19 16:11:30
| 1
| 8,865
|
bretauv
|
78,888,851
| 2,110,874
|
Parsing string to datetime using pandas
|
<p>I am trying to parse a string to datetime using pandas but I am getting the following error:</p>
<pre><code>ValueError: Cannot parse both %Z and %z
</code></pre>
<p>My date string is:</p>
<pre><code>ds = '03.01.2021 23:00:00.000 GMT+0100'
</code></pre>
<p>and the conversion I'm trying is as follow:</p>
<pre><code>format = '%d.%m.%Y %H:%M:%S.%f %Z%z'
dt = pd.to_datetime(ds, format=format)
</code></pre>
<p>which is funny because pandas seems to use an almost identical version of strptime. When I use:</p>
<pre><code>dt = datetime.datetime.strptime(ds, format)
</code></pre>
<p>it works as expected. In the pandas documentation there is a small section titled "Differences to strptime", but no mention of using %Z together with %z.</p>
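<p>For reference, a minimal stdlib check of the parse described above (just a sketch of the behaviour, assuming Python 3):</p>

```python
import datetime

ds = '03.01.2021 23:00:00.000 GMT+0100'
fmt = '%d.%m.%Y %H:%M:%S.%f %Z%z'

# %Z consumes the "GMT" name, %z consumes the "+0100" offset
dt = datetime.datetime.strptime(ds, fmt)
print(dt.utcoffset())  # 1:00:00
```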
<p>Other explanations about timezones and offsets only cover UTC offsets. In this case I assume GMT and UTC are effectively the same, so if I parse using:</p>
<pre><code>format = '%d.%m.%Y %H:%M:%S.%f GMT%z'
dt = pd.to_datetime(ds, format=format)
</code></pre>
<p>this also works. But I was wondering: what if the offset is relative to a different timezone, for example EST instead of UTC? For example:</p>
<pre><code>ds = '03.01.2021 23:00:00.000 EST+0100'
</code></pre>
<p>is there any way to convert this directly using pd.to_datetime()?</p>
|
<python><pandas><datetime><timezone>
|
2024-08-19 16:08:16
| 1
| 419
|
FELIPE_RIBAS
|
78,888,776
| 17,721,722
|
How to Dynamically Determine the Repartition Count for Loading Large CSV Files into PostgreSQL Using PySpark?
|
<p>I need to load 5 million records from a CSV file into a PostgreSQL table as quickly as possible using PySpark. The performance and speed of the operation are critical for me. I often run my code from either my PC or a server, and I suspect the optimal repartition count might differ depending on the environment.</p>
<p><strong>Here are my configurations:</strong></p>
<p><strong>PC Configuration:</strong></p>
<ul>
<li>RAM: 16 GB</li>
<li>Storage: 256 GB</li>
<li>CPU: 8 cores</li>
</ul>
<p><strong>Server Configuration:</strong></p>
<ul>
<li>RAM: 64 GB</li>
<li>Storage: 2 TB</li>
<li>CPU: 32 cores</li>
</ul>
<p><strong>Tech Stack:</strong></p>
<ul>
<li>Backend: Django 5.0.7</li>
<li>Python: 3.11.9</li>
<li>Database: PostgreSQL 15</li>
<li>PySpark: 3.5.1</li>
</ul>
<pre class="lang-py prettyprint-override"><code>df.repartition(400)
.write
.mode("overwrite")
.format("csv")
.save(filepath, header='false')
</code></pre>
<p><strong>What I Need:</strong></p>
<p>I want to dynamically determine the number of partitions based on the size of the data, the number of file rows, and the available CPU resources. The goal is to optimize the loading speed. Can someone suggest a dynamic function or strategy to calculate the appropriate repartition count for different environments?</p>
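<p>There is no single correct formula, but a common heuristic is to target roughly 128 MB of data per partition, with a floor of a couple of tasks per core so all CPUs stay busy. A plain-Python sketch (the function name and defaults here are assumptions, not a Spark API):</p>

```python
import os

def suggest_partitions(file_size_bytes, cores=None, target_mb=128):
    """Heuristic partition count: roughly target_mb of data per partition,
    but never fewer than 2 partitions per available CPU core."""
    cores = cores or os.cpu_count() or 1
    by_size = max(1, file_size_bytes // (target_mb * 1024 * 1024))
    return max(by_size, cores * 2)

print(suggest_partitions(5 * 1024**3, cores=8))   # 5 GB on the 8-core PC -> 40
print(suggest_partitions(5 * 1024**3, cores=32))  # same file on the 32-core server -> 64
```

<p>On the 8-core PC the size term dominates (40 partitions); on the 32-core server the per-core floor raises it to 64. The result can then be passed to <code>df.repartition(n)</code>.</p>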
|
<python><postgresql><dataframe><apache-spark><pyspark>
|
2024-08-19 15:49:46
| 0
| 501
|
Purushottam Nawale
|
78,888,734
| 1,052,628
|
Get more than 1000 records from Matillion Task History API
|
<p>I am using the <a href="https://docs.matillion.com/metl/docs/2972278/#get-history" rel="nofollow noreferrer">Matillion Task History API</a> to fetch the history of tasks that have been running in a Matillion ETL instance. Task history results are subject to a limit of 1,000 records for a single task run. I looked for pagination support on this particular API endpoint, but it doesn't seem to support pagination.</p>
<p>Is there any way I can get more than 1000 records?</p>
|
<python><etl><monitoring><matillion>
|
2024-08-19 15:39:48
| 1
| 636
|
Rafiul Sabbir
|
78,888,711
| 5,058,384
|
cleanup_tokenization_spaces issue in Flux running in ComfyUI
|
<p>I'm getting the issue below with Flux in ComfyUI, and it points to this bug (<a href="https://discuss.huggingface.co/t/cleaup-tokenization-spaces-error/102749" rel="nofollow noreferrer">https://discuss.huggingface.co/t/cleaup-tokenization-spaces-error/102749</a>). What is the solution? Do I need to set the <code>clean_up_tokenization_spaces</code> parameter to <code>False</code> somewhere?</p>
<p>Full terminal output below:</p>
<pre><code>got prompt
model weight dtype torch.bfloat16, manual cast: None
model_type FLUX
/home/garrett/AI/ComfyUI/venv/lib/python3.10/site-packages/transformers/tokenization_utils_base.py:1601: FutureWarning: `clean_up_tokenization_spaces` was not set. It will be set to `True` by default. This behavior will be depracted in transformers v4.45, and will be then set to `False` by default. For more details check this issue: https://github.com/huggingface/transformers/issues/31884
warnings.warn(
./launch.sh: line 7: 47378 Killed python3 main.py
</code></pre>
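<p>Note that the <code>FutureWarning</code> itself is only a deprecation notice; the process being <code>Killed</code> on the last line is a separate problem (often the OS out-of-memory killer). If you just want to silence the warning, one sketch is a stdlib warnings filter:</p>

```python
import warnings

# Suppress the transformers deprecation notice about
# clean_up_tokenization_spaces (it is a FutureWarning, not an error)
warnings.filterwarnings(
    "ignore",
    message=".*clean_up_tokenization_spaces.*",
    category=FutureWarning,
)
```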
|
<python><huggingface-transformers><huggingface-tokenizers>
|
2024-08-19 15:33:47
| 2
| 966
|
garrettlynchirl
|
78,888,680
| 11,028,689
|
ValueError: Number of classes does not match size of target_names for a confusion matrix
|
<p>I have this code which produces a Value error.</p>
<pre><code>y = df['weather type']
# y is an array with the 11 unique values 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10
# and of an 'object' type:
array([10, 10, 10, ..., 10, 10, 9])
# Encode the target variable (weather type) to numeric values -
# not sure if I should have done this step because it seems to have messed up my target labels?
y_le = LabelEncoder()
y = y_le.fit_transform(y)
# the unique values of y_le.classes_ are '0', '1', '10', '11', '12', '2', '3', '5', '6', '7', '8'
# the unique values of y_val are 0, 1, 3, 4, 5, 6, 7, 8, 9, 10
# Initialize the XGBoost classifier
xgb_model = xgb.XGBClassifier(objective='multi:softmax', num_class=len(y_le.classes_))
# Train the model
xgb_model.fit(X_train, y_train)
# Make predictions on the validation set
y_pred_val = xgb_model.predict(X_val)
# Evaluate the model
# Print classification report and confusion matrix
print("\nClassification Report:\n", classification_report(y_val, y_pred_val, target_names=y_le.classes_))
</code></pre>
<p>The value error is as follows:</p>
<pre><code>--------------------------------------------------------------------------
ValueError Traceback (most recent call last)
Cell In[292], line 6
2 y_pred_val = xgb_model.predict(X_val)
4 # Evaluate the model
5 # Print classification report and confusion matrix
----> 6 print("\nClassification Report:\n", classification_report(y_val, y_pred_val, target_names=y_le.classes_))
7 #print("\nClassification Report:\n", classification_report(y_val, y_pred_val, labels=range(len(y_le.classes_)), target_names=y_le.classes_))
File ~\anaconda3\lib\site-packages\sklearn\metrics\_classification.py:2332, in classification_report(y_true, y_pred, labels, target_names, sample_weight, digits, output_dict, zero_division)
2326 warnings.warn(
2327 "labels size, {0}, does not match size of target_names, {1}".format(
2328 len(labels), len(target_names)
2329 )
2330 )
2331 else:
-> 2332 raise ValueError(
2333 "Number of classes, {0}, does not match size of "
2334 "target_names, {1}. Try specifying the labels "
2335 "parameter".format(len(labels), len(target_names))
2336 )
2337 if target_names is None:
2338 target_names = ["%s" % l for l in labels]
ValueError: Number of classes, 10, does not match size of target_names, 11. Try specifying the labels parameter
</code></pre>
<p>As far as I can see, I have already set target_names=y_le.classes_.
How can I fix this?</p>
<p>Additionally, my target variable weather_type has an 'object' data type, and I am not sure whether I should convert it to numeric for an XGBoost multi-class classification model.</p>
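<p>The count mismatch can be reproduced without sklearn. Label-encoding string labels sorts them lexicographically (hence the odd '0', '1', '10', '11', ... ordering of y_le.classes_), and classification_report compares the classes actually present in (y_val, y_pred_val) — 10 of them, per the question — against the 11 target_names:</p>

```python
# The 11 string labels seen by the LabelEncoder
labels = ['0', '1', '2', '3', '5', '6', '7', '8', '10', '11', '12']

# Strings sort lexicographically, not numerically:
print(sorted(labels))
# ['0', '1', '10', '11', '12', '2', '3', '5', '6', '7', '8']

# Only 10 classes appear in the validation split, but 11 names are given:
observed_in_val = {0, 1, 3, 4, 5, 6, 7, 8, 9, 10}
print(len(observed_in_val), len(labels))  # 10 11
```

<p>As the error message suggests, passing an explicit <code>labels=</code> argument spanning all encoder classes (as in the commented-out line in the traceback cell) is the usual way to reconcile the two lengths.</p>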
|
<python><xgboost><confusion-matrix>
|
2024-08-19 15:26:01
| 1
| 1,299
|
Bluetail
|
78,888,658
| 1,654,229
|
Pydantic validation issue on discriminated union field from JSON in DB
|
<p>I have a JSON field (form_config) in Postgres table which contains form field structure. This is the value in that table.</p>
<pre><code>[
{
'label': 'Age',
'type': 'number',
'required': True,
'min': 0,
'max': None
},
{
'label': 'Name',
'type': 'text',
'required': True,
'minLength': 0,
'maxLength': None
},
{
'label': 'City',
'type': 'autocomplete',
'required': True,
'api': {
'url': 'api/countries',
'method': 'GET',
'labelKey': 'name'
}
}
]
</code></pre>
<p>There is an API which fetches the above data and returns it based on the response schema below:</p>
<pre><code>class FormConfigResponse(BaseModel):
config : List[FormFieldModel]
</code></pre>
<p>where the form field schemas are as follows, using a discriminator:</p>
<pre><code>class FormFieldType(str, Enum):
    text = 'text'
    number = 'number'
    autocomplete = 'autocomplete'

class FormFieldBase(BaseModel):
    label: str
    type: FormFieldType
    required: bool = True

class TextField(FormFieldBase):
    type: Literal[FormFieldType.text] = FormFieldType.text
    minLength: int | None
    maxLength: int | None

class NumberField(FormFieldBase):
    type: Literal[FormFieldType.number] = FormFieldType.number
    min: int | None
    max: int | None

class APIAutoCompleteConfig(BaseModel):
    url: str
    method: str
    labelKey: str

class AutoCompleteField(FormFieldBase):
    type: Literal[FormFieldType.autocomplete] = FormFieldType.autocomplete
    api: APIAutoCompleteConfig

# Create a new model to represent the discriminated union
FormFieldModel = Annotated[Union[NumberField, TextField, AutoCompleteField], Field(discriminator='type')]
</code></pre>
<p>My API handler code returns the data as follows:</p>
<pre><code>return FormConfigResponse(
config = form_config
)
</code></pre>
<p>I get an error like so:</p>
<pre><code>
fastapi_app | content = await serialize_response(
fastapi_app | File "/usr/local/lib/python3.10/site-packages/fastapi/routing.py", line 155, in serialize_response
fastapi_app | raise ResponseValidationError(
fastapi_app | fastapi.exceptions.ResponseValidationError: 1 validation errors:
fastapi_app | {'type': 'model_attributes_type', 'loc': ('response', 'config'), 'msg': 'Input should be a valid dictionary or object to extract fields from', 'input': [NumberField(label='Age', type=<FormFieldType.number: 'number'>, required=True, min=0, max=None), TextField(label='Name', type=<FormFieldType.text: 'text'>, required=True, minLength=0, maxLength=None), AutoCompleteField(label='City', type=<FormFieldType.autocomplete: 'autocomplete'>, required=True, api=APIAutoCompleteConfig(url='api/countries', method='GET', labelKey='name'))], 'url': 'https://errors.pydantic.dev/2.8/v/model_attributes_type'}
</code></pre>
<p>It's saying <em>"Input should be a valid dictionary or object to extract fields from"</em>. Where am I going wrong?</p>
|
<python><fastapi><pydantic>
|
2024-08-19 15:23:02
| 1
| 1,534
|
Ouroboros
|
78,888,584
| 6,930,340
|
Compute difference between dates and convert into weeks/months/years in polars dataframe
|
<p>I have a <code>pl.DataFrame</code> with a <code>start_date</code> and <code>end_date</code> column. I need to compute the difference between those two columns and add new columns representing the result in <code>days</code>, <code>weeks</code>, <code>months</code> and <code>years</code>.</p>
<p>I would be fine with an approximate result, i.e. dividing the number of days by 7 / 30 / 365.
My problem is converting the <code>duration[ms]</code> type into an integer type.</p>
<pre><code>import datetime
import polars as pl
df = pl.DataFrame(
{"start_date": datetime.date(2024, 1, 1), "end_date": datetime.date(2024, 7, 31)}
)
df = df.with_columns((pl.col("end_date") - pl.col("start_date")).alias("days"))
print(df)
shape: (1, 3)
┌────────────┬────────────┬──────────────┐
│ start_date ┆ end_date   ┆ days         │
│ ---        ┆ ---        ┆ ---          │
│ date       ┆ date       ┆ duration[ms] │
╞════════════╪════════════╪══════════════╡
│ 2024-01-01 ┆ 2024-07-31 ┆ 212d         │
└────────────┴────────────┴──────────────┘
</code></pre>
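<p>For comparison, the plain-Python arithmetic being approximated (integer division of the day count) looks like this:</p>

```python
import datetime

start = datetime.date(2024, 1, 1)
end = datetime.date(2024, 7, 31)

days = (end - start).days  # plain int, no duration dtype involved
print(days, days // 7, days // 30, days // 365)  # 212 30 7 0
```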
|
<python><python-polars>
|
2024-08-19 15:10:24
| 1
| 5,167
|
Andi
|
78,888,562
| 11,037,602
|
Page loads with Playwright, but fails with scrapy-playwright
|
<h2>TL;DR</h2>
<p>When the page is loaded by headless <code>playwright</code> + proxy, it works perfectly, every time.
When the page is loaded by <code>scrapy-playwright</code>, also headless and with the same proxy, it raises Timeout errors and the HTML content of the page object looks as if JS were disabled. Other domains work with <code>scrapy-playwright</code> without a problem.</p>
<hr />
<h2>Intro</h2>
<p>I'm trying to load the page below with <code>scrapy-playwright</code>, but it keeps raising Timeout exceptions. So I wrote an MRE in plain Playwright, and it successfully loads the page.</p>
<p>If I check the content of the page when the timeout is raised, I get the HTML of a page that is not getting rendered at all, asking to enable JS:</p>
<pre><code><!DOCTYPE html><html style="" class=" adownload no-applicationcache blobconstructor blob-constructor borderimage borderradius boxshadow boxsizing canvas canvastext checked classlist contenteditable no-contentsecuritypolicy no-contextmenu cors cssanimations csscalc csscolumns cssfilters cssgradients cssmask csspointerevents no-cssreflections cssremunit cssresize csstransforms3d csstransforms csstransitions cssvhunit cssvmaxunit cssvminunit cssvwunit dataset details deviceorientation displaytable display-table draganddrop fileinput filereader no-filesystem flexbox fullscreen geolocation getusermedia hashchange history hsla indexeddb inlinesvg json lastchild localstorage mathml mediaqueries meter multiplebgs notification objectfit object-fit opacity pagevisibility performance postmessage progressbar no-regions requestanimationframe raf rgba ruby scriptasync scriptdefer sharedworkers siblinggeneral smil no-strictmode no-stylescoped supports svg svgfilters textshadow no-time no-touchevents typedarrays userselect webaudio webgl websockets no-websqldatabase webworkers datalistelem video datauri svgasimg no-csshyphens"><head>\n<meta http-equiv="Pragma" content="no-cache">\n<meta http-equiv="Expires" content="-1">\n<meta http-equiv="CacheControl" content="no-cache">\n<meta http-equiv="Content-Type" content="text/html; charset=utf-8">\n<link rel="shortcut icon" href="data:;base64,iVBORw0KGgo=">\n\n<script type="text/javascript">\n(function(){\nwindow["bobcmn"] = "10111110101010200000005200000005200000006200000001254047669200000096200000000200000002300000000300000000300000006/TSPD/300000008TSPD_10130000000cTSPD_101_DID300000005https3000000b00821df8b06ab2000403b56c35c2a566711ae86a0540ec06085987bafb13c4f6ff50c290b8817631708a53aa37a0a28005ad630686158d96f8a68420d501cd849bbe9225e4b3c2a59471abb33f0767803e9e24ce82c1584f2300000002TS200000000200000000";\n\nwindow["failureConfig"] = 
"524f6f70732e2e2e2e736f6d657468696e672077656e742077726f6e672e2e2e2e20796f757220737570706f72742069642069733a2025444f534c372e6368616c6c656e67652e737570706f72745f6964252e143134303331333636303235333339313535303337062f545350442f171800";window.xEg=!!window.xEg;try{(function(){(function OL(){var z=!1;function s(z){for(var s=0;z--;)s+=I(document.documentElement,null);return s}function I(z,s){var l="vi";s=s||new J;return ZL(z,function(z){z.setAttribute("data-"+l,s.Z$());return I(z,s)},null)}function J(){this.lZ=1;this.zz=0;this.ol=this.lZ;this.Jo=null;this.Z$=function(){this.Jo=this.zz+this.ol;if(!isFinite(this.Jo))return this.reset(),this.Z$();this.zz=this.ol;this.ol=this.Jo;this.Jo=null;return this.ol};this.reset=function(){this.lZ++;this.zz=0;this.ol=this.lZ}}var l=!1;\nfunction LL(z,s){var I=document.createElement(z);s=s||document.body;s.appendChild(I);I&&I.style&&(I.style.display="none")}function oL(s,I){I=I||s;var J="|";function LL(z){z=z.split(J);var s=[];for(var I=0;I<z.length;++I){var l="",oL=z[I].split(",");for(var sL=0;sL<oL.length;++sL)l+=oL[sL][sL];s.push(l)}return s}var oL=0,ZL="datalist,details,embed,figure,hrimg,strong,article,formaddress|audio,blockquote,area,source,input|canvas,form,link,tbase,option,details,article";ZL.split(J);ZL=LL(ZL);ZL=new RegExp(ZL.join(J),\n"g");while(ZL.exec(s))ZL=new RegExp((""+new Date)[8],"g"),z&&(l=!0),++oL;return I(oL&&1)}function ZL(z,s,I){(I=I||l)&&LL("div",z);z=z.children;var J=0;for(var oL in z){I=z[oL];try{I instanceof HTMLElement&&(s(I),++J)}catch(ZL){}}return J}oL(OL,s)})();var zL=77;try{var SL,iL,jL=O(179)?1:0;for(var Lo=(O(293),0);Lo<iL;++Lo)jL+=O(831)?3:1;SL=jL;window.Ll===SL&&(window.Ll=++SL)}catch(zo){window.Ll=SL}var Zo=!0;function Z(L,z){L+=z;return L.toString(36)}\nfunction io(L){var z=46;!L||document[S(z,164,151,161,151,144,151,154,151,162,167,129,162,143,162,147)]&&document[S(z,164,151,161,151,144,151,154,151,162,167,129,162,143,162,147)]!==Z(68616527620,z)||(Zo=!1);return Zo}function _(L){var 
z=arguments.length,s=[],I=1;while(I<z)s[I-1]=arguments[I++]-L;return String.fromCharCode.apply(String,s)}function jo(){}io(window[jo[S(zL,187,174,186,178)]]===jo);io(typeof ie9rgb4!==_(zL,179,194,187,176,193,182,188,187));\nio(RegExp("\\x3c")[Z(1372128,zL)](function(){return"\\x3c"})&!RegExp(Z(42812,zL))[_(zL,193,178,192,193)](function(){return"\'x3\'+\'d\';"}));\nvar Jo=window[S(zL,174,193,193,174,176,181,146,195,178,187,193)]||RegExp(S(zL,186,188,175,182,201,174,187,177,191,188,182,177),Z(-59,zL))[Z(1372128,zL)](window["\\x6e\\x61vi\\x67a\\x74\\x6f\\x72"]["\\x75\\x73e\\x72A\\x67\\x65\\x6et"]),LO=+new Date+(O(823)?6E5:796558),oO,ZO,sO,SO=window[_(zL,192,178,193,161,182,186,178,188,194,193)],_O=Jo?O(710)?3E4:17796:O(382)?6E3:8380;\ndocument[S(zL,174,177,177,146,195,178,187,193,153,182,192,193,178,187,178,191)]&&document[S(zL,174,177,177,146,195,178,187,193,153,182,192,193,178,187,178,191)](_(zL,195,182,192,182,175,182,185,182,193,198,176,181,174,187,180,178),function(L){var z=62;document[S(z,180,167,177,167,160,167,170,167,178,183,145,178,159,178,163)]&&(document[S(z,180,167,177,167,160,167,170,167,178,183,145,178,159,178,163)]===_(z,166,167,162,162,163,172)&&L[_(z,167,177,146,176,179,177,178,163,162)]?sO=!0:document[_(z,180,\n167,177,167,160,167,170,167,178,183,145,178,159,178,163)]===Z(68616527604,z)&&(oO=+new Date,sO=!1,iO()))});function iO(){if(!document[S(33,146,150,134,147,154,116,134,141,134,132,149,144,147)])return!0;var L=+new Date;if(L>LO&&(O(239)?6E5:861172)>L-oO)return io(!1);var z=io(ZO&&!sO&&oO+_O<L);oO=L;ZO||(ZO=!0,SO(function(){ZO=!1},O(67)?1:0));return z}iO();var lO=[O(609)?17795081:23657822,O(959)?2147483647:27611931586,O(656)?1558153217:1536529909];\nfunction Lz(L){var z=52;L=typeof L===Z(1743045624,z)?L:L[S(z,168,163,135,168,166,157,162,155)](O(942)?34:36);var s=window[L];if(!s||!s[_(z,168,163,135,168,166,157,162,155)])return;var I=""+s;window[L]=function(L,z){ZO=!1;return s(L,z)};window[L][S(z,168,163,135,168,166,157,162,155)]=function(){return 
I}}for(var Oz=(O(206),0);Oz<lO[_(zL,185,178,187,180,193,181)];++Oz)Lz(lO[Oz]);io(!1!==window[S(zL,197,146,180)]);window.zJ=window.zJ||{};window.zJ.iZ="083d4956fd018000820ff31a1e95228c25ab1215a5be8e34271fe201bb7934e1197a4432d0146870b4459dacfbf23ddb5bd42a71c8651e07aaf8e6f722cedf63108d8f81e28b4fbdfc370b52b62c462e5ae24d1b58a68be5492d751bd200ff878afe2e5be04961ff09b99c17ef7723a06485b449117b5da3d233e0ceea7e8f11f0e14e41d50e41d1";\nfunction S(L){var z=arguments.length,s=[];for(var I=1;I<z;++I)s.push(arguments[I]-L);return String.fromCharCode.apply(String,s)}function Zz(L){var z=+new Date,s;!document[S(90,203,207,191,204,211,173,191,198,191,189,206,201,204,155,198,198)]||z>LO&&(O(404)?6E5:710795)>z-oO?s=io(!1):(s=io(ZO&&!sO&&oO+_O<z),oO=z,ZO||(ZO=!0,SO(function(){ZO=!1},O(819)?1:0)));return!(arguments[L]^s)}function O(L){return 924>L}(function sz(z){return z?0:sz(z)*sz(z)})(!0);})();}catch(x){}finally{ie9rgb4=void(0);};function ie9rgb4(a,b){return a>>b>>0};\n\n})();\n\n</script>\n\n<script type="text/javascript" src="/TSPD/0821df8b06ab20002f964fdcf5d1316fd5a886ae74ff674af5c2590daeda4a06311eeab330d3be34?type=10"></script>\n<noscript>Please enable JavaScript to view the page content.<br/>Your support ID is: 14031366025339155037.</noscript>\n</head><body>\n<form method="post" action="" enctype="multipart/form-data"><input type="hidden" name="_pd" value=""></form></body></html>'
</code></pre>
<p><strong>Just to highlight</strong> from the HTML above:
<code><noscript>Please enable JavaScript to view the page content.<br/>Your support ID is: 14031366025339155037.</noscript>\n</code></p>
<ul>
<li>Page (example): <a href="https://wbmason.com/ProductDetail.aspx?ItemDesc=Green-Mountain-Coffee-Breakfast-Blend-Coffee-K-Cup-Pods-24-BX&ItemID=GMT6520&uom=BX&COID=&SearchID=907879066&ii=1" rel="nofollow noreferrer">https://wbmason.com/ProductDetail.aspx?ItemDesc=Green-Mountain-Coffee-Breakfast-Blend-Coffee-K-Cup-Pods-24-BX&ItemID=GMT6520&uom=BX&COID=&SearchID=907879066&ii=1</a></li>
<li>I'm using proxy.</li>
<li>No change if using headful browser.</li>
<li>The same Playwright installation is used, as it's the only one installed.</li>
<li><code>scrapy-playwright</code> is properly installed, with its download handlers etc. Pages from other netlocs work with no problem.</li>
<li>I have reduced <code>PLAYWRIGHT_MAX_PAGES_PER_CONTEXT</code> and <code>PLAYWRIGHT_MAX_CONTEXTS</code> to 1 in order to debug, no improvement.</li>
</ul>
<h3>Exceptions:</h3>
<pre><code>playwright._impl._errors.TimeoutError: Page.wait_for_selector: Timeout 30000ms exceeded.
Call log:
waiting for locator("#ctl00_ContentPlaceholder1_ucProductDetail_fvProductDetail_lblSellPrice") to be visible
</code></pre>
<p>and</p>
<pre><code>playwright._impl._errors.TimeoutError: Page.goto: Timeout 90000ms exceeded.
Call log:
navigating to "https://wbmason.com/ProductDetail.aspx?ItemDesc=Green-Mountain-Coffee-Breakfast-Blend-Coffee-K-Cup-Pods-24-BX&ItemID=GMT6520&uom=BX&COID=&SearchID=907879066&ii=1", waiting until "networkidle"
</code></pre>
<h2>Scrapy-Playwright spider (Doesn't work)</h2>
<pre><code>class ExampleSpider(Spider):
custom_settings = {
"DOWNLOAD_HANDLERS": {
"http": "scrapy_playwright.handler.ScrapyPlaywrightDownloadHandler",
"https": "scrapy_playwright.handler.ScrapyPlaywrightDownloadHandler",
},
"HTTPPROXY_ENABLED": False,
"PLAYWRIGHT_LAUNCH_OPTIONS": {
"headless": True,
"proxy":{
"server": SERVER,
"username": USERNAME,
"password": PASSWORD
},
},
"PLAYWRIGHT_BROWSER_TYPE": "firefox",
"PLAYWRIGHT_DEFAULT_NAVIGATION_TIMEOUT": 90_000, # Seconds
"PLAYWRIGHT_MAX_PAGES_PER_CONTEXT": 1,
"PLAYWRIGHT_MAX_CONTEXTS": 1,
}
start_urls = ["https://wbmason.com/ProductDetail.aspx?ItemDesc=Green-Mountain-Coffee-Breakfast-Blend-Coffee-K-Cup-Pods-24-BX&ItemID=GMT6520&uom=BX&COID=&SearchID=907879066&ii=1"]
def start_requests(self):
for url in self.start_urls:
yield Request(
url,
dont_filter=True,
meta={
"playwright": True,
"playwright_page_methods": [
# PageMethod("wait_for_timeout", 5000), # This also didn't help
PageMethod("wait_for_selector", "#ctl00_ContentPlaceholder1_ucProductDetail_fvProductDetail_lblSellPrice"),
],
"playwright_page_goto_kwargs": {
"wait_until": "networkidle",
},
# "playwright_include_page": True, # This also didn't help
}
)
</code></pre>
<h2>Playwright MRE (WORKS)</h2>
<pre><code>async with async_playwright() as p:
    url = "https://wbmason.com/ProductDetail.aspx?ItemDesc=Green-Mountain-Coffee-Breakfast-Blend-Coffee-K-Cup-Pods-24-BX&ItemID=GMT6520&uom=BX&COID=&SearchID=907879066&ii=1"
browser = await p.firefox.launch(
headless=True,
proxy={
"server": SERVER,
"username": USERNAME,
"password": PASSWORD
},
)
page = await browser.new_page()
await page.goto(url, wait_until="networkidle")
await page.wait_for_selector("#ctl00_ContentPlaceholder1_ucProductDetail_fvProductDetail_lblSellPrice")
content = await page.content()
with open("content.html", "w") as f:
f.write(content)
</code></pre>
<p>The problem I'm trying to solve is, obviously, how to load the page with <code>scrapy-playwright</code>. However, I'd also love to know how this differs from the standalone Playwright approach, which works on the first try.</p>
|
<python><scrapy><playwright><playwright-python><scrapy-playwright>
|
2024-08-19 15:04:58
| 0
| 2,081
|
Justcurious
|
78,888,513
| 769,922
|
Converting a subset of the dataframe into another dataframe
|
<p>I have a pandas dataframe that looks like so</p>
<pre><code>Index Key 2010-01 2010-02 2010-03 ... 2020-12
A/B/C foo 0.23 0.44 0 2.1
A/B/C bar 0.43 0.12 0.23 1.2
A/B/C baz 0.25 0.23 0.2 2.5
P/Q/R foo 0.31 0.41 0 2.4
P/Q/R foo 0.33 0.54 0.5 4.2
P/Q/R foo 0.93 0.64 0.99 6.5
</code></pre>
<p>The <code>index</code> is a multi-column index. "foo", "bar", "baz" are present for every index.</p>
<p>How do I convert this group of data into individual dataframes that looks like</p>
<pre><code># dataframe for A/B/C
index foo bar baz
2010-01 0.23 0.43 0.25
2010-02 0.44 0.12 0.23
...
2020-12 2.1 1.2 2.5
</code></pre>
<p>I'm fairly new to pandas, so I tried converting the data into dictionaries and working with those. I have a solution that collects the necessary values and then, in a second pass, converts them into individual dataframes. The pseudocode was like this:</p>
<pre><code># loop over the converted dictionary (as per keys)
For each key, create 'foo', 'bar', 'baz' with empty dicts;
when encountering a row for 'foo', collect all values from col 2010-01 to 2020-12 as a list
do the same for 'bar' and 'baz'. Add to the nested dict that is held by the given key
For the second pass, loop through each key
take the nested dict and create a dataframe using the entire dict and the dates 2010-01 to 2020-12 as the index.
</code></pre>
<p>Is there a more idiomatic pandas way to do this?
Is it possible to take the group obtained for index A/B/C and transpose it without incurring the performance hit that comes with transpose?</p>
<p>The actual data in question may have upwards of 10000 such indices (>30k rows).</p>
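<p>A pandas-native sketch of the reshape — group on the index, move Key into the index, then transpose. The single-level index and column names here are simplified assumptions about the real data:</p>

```python
import pandas as pd

df = pd.DataFrame(
    {
        "Key": ["foo", "bar", "baz", "foo", "bar", "baz"],
        "2010-01": [0.23, 0.43, 0.25, 0.31, 0.33, 0.93],
        "2010-02": [0.44, 0.12, 0.23, 0.41, 0.54, 0.64],
    },
    index=pd.Index(["A/B/C"] * 3 + ["P/Q/R"] * 3, name="Index"),
)

# One dataframe per index value: set Key as the index, then transpose
# so the date columns become the row index and foo/bar/baz the columns.
frames = {key: grp.set_index("Key").T for key, grp in df.groupby(level=0)}
print(frames["A/B/C"])
```

<p>With a real multi-column index, <code>groupby(level=[0, 1, 2])</code> (or <code>groupby(df.index)</code>) keys the dict by the full index tuple instead.</p>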
|
<python><pandas><dataframe>
|
2024-08-19 14:56:11
| 2
| 1,037
|
Serendipity
|
78,888,416
| 4,764,604
|
Problem installing PyTorch 1.9.0 on Windows 10 x64 with Python 3.8 using pip
|
<p>I'm trying to install PyTorch 1.9.0 on my Windows 10 64-bit system with Python 3.8, but pip can't find the proper distribution. When I try to install it using the command:</p>
<pre><code>pip3 install torch torchvision torchaudio
</code></pre>
<p>I get the following error:</p>
<pre><code>ERROR: Could not find a version that satisfies the requirement torch (from versions: none)
ERROR: No matching distribution found for torch
</code></pre>
<p>To solve the problem, I tried to download the corresponding .whl file from the page <a href="https://download.pytorch.org/whl/torch/" rel="nofollow noreferrer">https://download.pytorch.org/whl/torch/</a> and chose torch-1.9.0+cpu-cp38-cp38-win_amd64.whl. However, when I try to install it with the command:</p>
<pre><code>pip install torch-1.9.0+cpu-cp38-cp38-win_amd64.whl
</code></pre>
<p>I get the following error:</p>
<pre><code>ERROR: torch-1.9.0+cpu-cp38-cp38-win_amd64.whl is not a supported wheel on this platform.
</code></pre>
<p>Additional details:</p>
<ul>
<li>Operating System: Windows 10 x64</li>
<li>Python version: 3.8</li>
</ul>
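<p>The "not a supported wheel on this platform" error almost always means the wheel tags don't match the interpreter — most commonly a 32-bit Python trying to install an amd64 wheel. A quick check:</p>

```python
import platform
import struct
import sys

print(sys.version)               # exact interpreter version (must be 3.8 for cp38 wheels)
print(struct.calcsize("P") * 8)  # 64 for a 64-bit Python build, 32 for 32-bit
print(platform.machine())        # e.g. AMD64 on 64-bit Windows
```

<p>If the second line prints 32, the win_amd64 wheel can never install; a 64-bit Python 3.8 build is needed.</p>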
|
<python><python-3.x><pytorch><pip><windows-10>
|
2024-08-19 14:33:08
| 1
| 3,396
|
Revolucion for Monica
|
78,888,407
| 3,247,006
|
Is `CPU times` of `%time` the total of User CPU time and System CPU time in IPython?
|
<p>Running the code with the <a href="https://ipython.readthedocs.io/en/stable/interactive/magics.html#built-in-magic-commands" rel="nofollow noreferrer">magic command</a> <a href="https://ipython.readthedocs.io/en/stable/interactive/magics.html" rel="nofollow noreferrer">%time</a> in IPython got <code>CPU times</code> and <code>Wall time</code> as shown below:</p>
<pre class="lang-py prettyprint-override"><code>In [1]: %time sum(range(1000000))
CPU times: total: 31.2 ms
Wall time: 26.7 ms
Out[1]: 499999500000
</code></pre>
<p>I know <code>Wall time</code> is <em><strong>wall-clock time</strong></em> or <em><strong>real time</strong></em>, meaning the time from when the program starts to when it finishes.</p>
<p>But, I don't know what <code>CPU times</code> is because <a href="https://ipython.readthedocs.io/en/stable/interactive/magics.html" rel="nofollow noreferrer">the doc</a> doesn't explain it in detail, just saying:</p>
<blockquote>
<p>The CPU and wall clock times are printed.</p>
</blockquote>
<p>I know there are <em><strong>User CPU time</strong></em> and <em><strong>System CPU time</strong></em> as shown below:</p>
<ul>
<li><p><em><strong>User CPU time</strong></em> is the amount of time the program uses the CPU in <em><strong>user space</strong></em>. *<em><strong>User space</strong></em> is the memory space for applications.</p>
</li>
<li><p><em><strong>System CPU time</strong></em> is the amount of time the program uses the CPU in <em><strong>kernel space</strong></em>. *<em><strong>Kernel space</strong></em> is the memory space for the <em><strong>kernel</strong></em>, which is the core of the operating system.</p>
</li>
</ul>
<p>Now, is <code>CPU times</code> the total of both <em><strong>User CPU time</strong></em> and <em><strong>System CPU time</strong></em>? or either of them? or neither of them?</p>
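<p>The two components can be measured directly in plain Python (a sketch of the concept, not what IPython itself does internally):</p>

```python
import os
import time

t0, w0 = os.times(), time.perf_counter()
sum(range(1_000_000))
t1, w1 = os.times(), time.perf_counter()

user = t1.user - t0.user        # user CPU time
system = t1.system - t0.system  # system CPU time
wall = w1 - w0                  # wall-clock time ("Wall time")

print(user + system, wall)
```

<p>On Windows, IPython prints a single line — <code>CPU times: total:</code> — which appears to correspond to this user + system sum.</p>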
|
<python><time><ipython><cpu-time><magic-command>
|
2024-08-19 14:30:10
| 1
| 42,516
|
Super Kai - Kazuya Ito
|
78,888,400
| 3,556,110
|
How do I override django.db.backends logging to work when DEBUG=False?
|
<p>In django's LOGGING configuration for the builtin <a href="https://docs.djangoproject.com/en/5.1/ref/logging/#django-db-backends" rel="nofollow noreferrer">django.db.backends</a> it states that:
"For performance reasons, SQL logging is only enabled when settings.DEBUG is set to True, regardless of the logging level or handlers that are installed."</p>
<p>As a result the following <code>LOGGING</code> configuration, which is correctly set up to issue debug level logs showing DB queries, will NOT output the messages I need:</p>
<pre><code>
DEBUG = False
LOGGING = {
"version": 1,
"disable_existing_loggers": True,
"root": {"handlers": [ "gcp_structured_logging"]},
"handlers": {
"gcp_structured_logging": {
"level": "DEBUG",
"class": "django_gcp.logging.GoogleStructuredLogsHandler",
}
},
"loggers": {
'django.db.backends': {
'handlers': ["gcp_structured_logging"],
'level': 'DEBUG',
'propagate': True,
},
},
}
</code></pre>
<p>This is preventing me from activating this logging in production, where of course I'm not going to turn on <code>DEBUG=True</code> in my settings, but where I need to log exactly this information.</p>
<p>Ironically, I need this in order to debug a performance issue (I plan to run this for a short time in production and cat my logs so I can set up a realistic scenario for a load test and some benchmarking on the database).</p>
<p>How can I override django's override so that sql queries get logged as I intend?</p>
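<p>For what it's worth, one route around the DEBUG gate is Django's <code>connection.execute_wrapper()</code> hook (Django 2.0+), which runs for every query regardless of <code>settings.DEBUG</code>. Below is a hedged sketch of a wrapper with the signature that hook expects (the logger name is made up); it is written standalone so the function itself can be exercised without Django:</p>

```python
import logging

def log_sql(execute, sql, params, many, context):
    """execute_wrapper-style hook: log the statement, then run it.

    Inside Django you would install it with, e.g.:
        with connection.execute_wrapper(log_sql):
            ...  # every query in this block gets logged
    """
    logging.getLogger("sql").debug("query: %s params: %s", sql, params)
    return execute(sql, params, many, context)
```

The wrapper just forwards to <code>execute</code> after logging, so it can be unit-tested with a fake callable in place of the real cursor.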
<p><strong>NOTE</strong>
willeM_ Van Onsem's answer as accepted is correct (because it totally is) but it's worth noting that in the end, I came across a <a href="https://github.com/jazzband/django-silk" rel="nofollow noreferrer">library called <code>django-silk</code></a>. Whilst not an answer to this question directly, <code>silk</code> actually covers the capability I was trying to build for myself when I found this peculiarity of how the db logging works. Perhaps someone else trying to achieve the same thing will make good use of it.</p>
|
<python><django><database><logging>
|
2024-08-19 14:28:22
| 2
| 5,582
|
thclark
|
78,888,382
| 12,257,924
|
How to use ChatOpenAI with LM Studio for LLMs? (Langchain)
|
<p>I want to use <code>ChatOpenAI</code> from the <code>langchain-openai</code> package to communicate with my local LM Studio server on Ubuntu 22.</p>
<p>I followed the docs and came up with this, which is the standard way:</p>
<pre class="lang-py prettyprint-override"><code>import os

from langchain_openai import ChatOpenAI
from dotenv import load_dotenv

load_dotenv()
llm = ChatOpenAI(
api_key=os.getenv("OPENAI_API_KEY"),
base_url=os.getenv("OPENAI_API_BASE_URL"),
temperature=0.5,
)
</code></pre>
<p>where envs are:</p>
<pre><code>OPENAI_API_BASE_URL="http://localhost:1234/v1"
OPENAI_API_KEY="lm-studio"
</code></pre>
<p>But when I actually use that llm I get an error:</p>
<pre><code>Traceback (most recent call last):
File "<PROJECT_PATH>/web_researcher/main.py", line 155, in <module>
result = crew.run()
^^^^^^^^^^
File "<PROJECT_PATH>/web_researcher/main.py", line 114, in run
result = crew.kickoff()
^^^^^^^^^^^^^^
File "<COMPUTER_PATH>/.cache/pypoetry/virtualenvs/project-name-977PB1Et-py3.11/lib/python3.11/site-packages/langtrace_python_sdk/instrumentation/crewai/patch.py", line 153, in traced_method
result = wrapped(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^
File "<COMPUTER_PATH>/.cache/pypoetry/virtualenvs/project-name-977PB1Et-py3.11/lib/python3.11/site-packages/crewai/crew.py", line 469, in kickoff
result = self._run_sequential_process()
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "<COMPUTER_PATH>/.cache/pypoetry/virtualenvs/project-name-977PB1Et-py3.11/lib/python3.11/site-packages/crewai/crew.py", line 577, in _run_sequential_process
return self._execute_tasks(self.tasks)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "<COMPUTER_PATH>/.cache/pypoetry/virtualenvs/project-name-977PB1Et-py3.11/lib/python3.11/site-packages/crewai/crew.py", line 665, in _execute_tasks
task_output = task.execute_sync(
^^^^^^^^^^^^^^^^^^
File "<COMPUTER_PATH>/.cache/pypoetry/virtualenvs/project-name-977PB1Et-py3.11/lib/python3.11/site-packages/crewai/task.py", line 180, in execute_sync
return self._execute_core(agent, context, tools)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "<COMPUTER_PATH>/.cache/pypoetry/virtualenvs/project-name-977PB1Et-py3.11/lib/python3.11/site-packages/crewai/task.py", line 234, in _execute_core
result = agent.execute_task(
^^^^^^^^^^^^^^^^^^^
File "<COMPUTER_PATH>/.cache/pypoetry/virtualenvs/project-name-977PB1Et-py3.11/lib/python3.11/site-packages/langtrace_python_sdk/instrumentation/crewai/patch.py", line 153, in traced_method
result = wrapped(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^
File "<COMPUTER_PATH>/.cache/pypoetry/virtualenvs/project-name-977PB1Et-py3.11/lib/python3.11/site-packages/crewai/agent.py", line 182, in execute_task
memory = contextual_memory.build_context_for_task(task, context)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "<COMPUTER_PATH>/.cache/pypoetry/virtualenvs/project-name-977PB1Et-py3.11/lib/python3.11/site-packages/crewai/memory/contextual/contextual_memory.py", line 24, in build_context_for_task
context.append(self._fetch_stm_context(query))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "<COMPUTER_PATH>/.cache/pypoetry/virtualenvs/project-name-977PB1Et-py3.11/lib/python3.11/site-packages/crewai/memory/contextual/contextual_memory.py", line 33, in _fetch_stm_context
stm_results = self.stm.search(query)
^^^^^^^^^^^^^^^^^^^^^^
File "<COMPUTER_PATH>/.cache/pypoetry/virtualenvs/project-name-977PB1Et-py3.11/lib/python3.11/site-packages/crewai/memory/short_term/short_term_memory.py", line 33, in search
return self.storage.search(query=query, score_threshold=score_threshold)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "<COMPUTER_PATH>/.cache/pypoetry/virtualenvs/project-name-977PB1Et-py3.11/lib/python3.11/site-packages/crewai/memory/storage/rag_storage.py", line 102, in search
else self.app.search(query, limit)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "<COMPUTER_PATH>/.cache/pypoetry/virtualenvs/project-name-977PB1Et-py3.11/lib/python3.11/site-packages/embedchain/embedchain.py", line 699, in search
return [{"context": c[0], "metadata": c[1]} for c in self.db.query(**params)]
^^^^^^^^^^^^^^^^^^^^^^^
File "<COMPUTER_PATH>/.cache/pypoetry/virtualenvs/project-name-977PB1Et-py3.11/lib/python3.11/site-packages/embedchain/vectordb/chroma.py", line 220, in query
result = self.collection.query(
^^^^^^^^^^^^^^^^^^^^^^
File "<COMPUTER_PATH>/.cache/pypoetry/virtualenvs/project-name-977PB1Et-py3.11/lib/python3.11/site-packages/langtrace_python_sdk/instrumentation/chroma/patch.py", line 91, in traced_method
result = wrapped(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^
File "<COMPUTER_PATH>/.cache/pypoetry/virtualenvs/project-name-977PB1Et-py3.11/lib/python3.11/site-packages/chromadb/api/models/Collection.py", line 327, in query
valid_query_embeddings = self._embed(input=valid_query_texts)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "<COMPUTER_PATH>/.cache/pypoetry/virtualenvs/project-name-977PB1Et-py3.11/lib/python3.11/site-packages/chromadb/api/models/Collection.py", line 633, in _embed
return self._embedding_function(input=input)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "<COMPUTER_PATH>/.cache/pypoetry/virtualenvs/project-name-977PB1Et-py3.11/lib/python3.11/site-packages/chromadb/api/types.py", line 193, in __call__
result = call(self, input)
^^^^^^^^^^^^^^^^^
File "<COMPUTER_PATH>/.cache/pypoetry/virtualenvs/project-name-977PB1Et-py3.11/lib/python3.11/site-packages/chromadb/utils/embedding_functions.py", line 188, in __call__
embeddings = self._client.create(
^^^^^^^^^^^^^^^^^^^^
File "<COMPUTER_PATH>/.cache/pypoetry/virtualenvs/project-name-977PB1Et-py3.11/lib/python3.11/site-packages/langtrace_python_sdk/instrumentation/openai/patch.py", line 467, in traced_method
result = wrapped(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^
File "<COMPUTER_PATH>/.cache/pypoetry/virtualenvs/project-name-977PB1Et-py3.11/lib/python3.11/site-packages/openai/resources/embeddings.py", line 114, in create
return self._post(
^^^^^^^^^^^
File "<COMPUTER_PATH>/.cache/pypoetry/virtualenvs/project-name-977PB1Et-py3.11/lib/python3.11/site-packages/openai/_base_client.py", line 1271, in post
return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "<COMPUTER_PATH>/.cache/pypoetry/virtualenvs/project-name-977PB1Et-py3.11/lib/python3.11/site-packages/openai/_base_client.py", line 942, in request
return self._request(
^^^^^^^^^^^^^^
File "<COMPUTER_PATH>/.cache/pypoetry/virtualenvs/project-name-977PB1Et-py3.11/lib/python3.11/site-packages/openai/_base_client.py", line 1046, in _request
raise self._make_status_error_from_response(err.response) from None
openai.AuthenticationError: Error code: 401 - {'error': {'message': 'Incorrect API key provided: lm-studio. You can find your API key at https://platform.openai.com/account/api-keys.', 'type': 'invalid_request_error', 'param': None, 'code': 'invalid_api_key'}}
</code></pre>
<h3>System Info</h3>
<p>Ubuntu 22</p>
<p>Versions:</p>
<ol>
<li><p>Python: 3.11.5</p>
</li>
<li><p>packages:</p>
</li>
</ol>
<ul>
<li>openai - 1.39.0</li>
<li>langchain - 0.2.12</li>
<li>langchain-cohere - 0.1.9</li>
<li>langchain-community - 0.2.11</li>
<li>langchain-core - 0.2.28</li>
<li>langchain-exa - 0.1.0</li>
<li>langchain-experimental - 0.0.64</li>
<li>langchain-openai - 0.1.20</li>
<li>langchain-text-splitters - 0.2.2</li>
</ul>
<p>Perhaps I'm doing something wrong, but I don't think so; I've read many docs and web posts, and all of them show this as the correct way.</p>
|
<python><openai-api><langchain><agent>
|
2024-08-19 14:25:17
| 2
| 639
|
Karol
|
78,888,343
| 26,843,912
|
Watch for file changes and send the data through websocket
|
<p>I have an endpoint in my Python FastAPI app, and whenever the frontend connects to it, I want to listen for changes to a specific file stored in the system's temporary directory. I am using watchdog, but I am not able to make it work; it just doesn't send any message on changes:</p>
<pre class="lang-py prettyprint-override"><code>class LogFileEventHandler(FileSystemEventHandler):
    def __init__(self, websocket: WebSocket, videoId: str):
        self.websocket = websocket
        self.videoId = videoId

    async def on_modified(self, event):
        if event.src_path.endswith(f"log_{self.videoId}.txt"):
            logFile = os.path.join(temp_dir, f"log_{self.videoId}.txt")
            with open(logFile, "r") as file:
                content = file.read()
            # Send file content to WebSocket
            await self.websocket.send_text(content)


@router.websocket("/logs")
async def websocket_endpoint(websocket: WebSocket, videoId: str = Query(...)):
    await manager.connect(websocket)
    print(f"Client connected: {websocket.client.host}")

    # Create event handler and observer
    event_handler = LogFileEventHandler(websocket, videoId)
    observer = Observer()
    observer.schedule(event_handler, temp_dir, recursive=False)
    observer.start()

    try:
        while True:
            try:
                data = await websocket.receive_json()
            except RuntimeError:
                break
    except WebSocketDisconnect:
        print(f"Client disconnected: {websocket.client.host}")
        manager.disconnect(websocket)
        observer.stop()
        observer.join()
</code></pre>
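<p>One thing worth noting (an observation, not a verified fix): watchdog's <code>Observer</code> invokes handler callbacks synchronously on its own thread, so an <code>async def on_modified</code> merely creates a coroutine object that is never awaited. A stdlib-only sketch of the usual bridge — scheduling work onto the running event loop from a foreign thread with <code>asyncio.run_coroutine_threadsafe</code> (names hypothetical, a plain thread stands in for the observer):</p>

```python
import asyncio
import threading

async def main():
    loop = asyncio.get_running_loop()
    queue: asyncio.Queue = asyncio.Queue()

    def on_modified(path: str):
        # Runs on the watcher thread: do NOT call coroutines directly here.
        # Hand the work to the event loop thread instead.
        asyncio.run_coroutine_threadsafe(queue.put(path), loop)

    # Simulate the observer thread firing an event
    t = threading.Thread(target=on_modified, args=("log_1.txt",))
    t.start()
    t.join()

    # Back on the loop: consume what the thread scheduled
    return await queue.get()

result = asyncio.run(main())
print(result)  # → log_1.txt
```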
|
<python><fastapi><python-watchdog>
|
2024-08-19 14:15:17
| 1
| 323
|
Zaid
|
78,888,041
| 4,435,175
|
Cast multiple columns with Unix epoch to Datetime
|
<p>I have a dataframe with multiple columns containing Unix epochs.
In this example I only use 2 of the 13 columns I have. I'd like to cast all those columns to datetimes with UTC timezone in a single call to <code>with_columns()</code>.</p>
<pre class="lang-py prettyprint-override"><code>df = pl.from_repr("""
┌─────┬────────────┬────────────┬────────────┐
│ id  ┆ start_date ┆ end_date   ┆ cancelable │
│ --- ┆ ---        ┆ ---        ┆ ---        │
│ i64 ┆ i64        ┆ i64        ┆ bool       │
╞═════╪════════════╪════════════╪════════════╡
│ 1   ┆ 1566637530 ┆ 1566628686 ┆ true       │
│ 2   ┆ 1561372720 ┆ 1561358079 ┆ true       │
│ 3   ┆ 1561374780 ┆ 1561358135 ┆ false      │
│ 4   ┆ 1558714718 ┆ 1556188225 ┆ false      │
│ 5   ┆ 1558715044 ┆ 1558427697 ┆ true       │
└─────┴────────────┴────────────┴────────────┘
""")
</code></pre>
<p>Polars provides the user with <a href="https://docs.pola.rs/api/python/stable/reference/expressions/api/polars.from_epoch.html" rel="nofollow noreferrer"><code>pl.from_epoch</code></a>. However, I didn't find a way to apply it to multiple columns at once.</p>
<p>Expected result:</p>
<pre><code>shape: (5, 4)
┌─────┬─────────────────────┬─────────────────────┬────────────┐
│ id  ┆ start_date          ┆ end_date            ┆ cancelable │
│ --- ┆ ---                 ┆ ---                 ┆ ---        │
│ i64 ┆ datetime[μs]        ┆ datetime[μs]        ┆ bool       │
╞═════╪═════════════════════╪═════════════════════╪════════════╡
│ 1   ┆ 2019-08-24 09:05:30 ┆ 2019-08-24 06:38:06 ┆ true       │
│ 2   ┆ 2019-06-24 10:38:40 ┆ 2019-06-24 06:34:39 ┆ true       │
│ 3   ┆ 2019-06-24 11:13:00 ┆ 2019-06-24 06:35:35 ┆ false      │
│ 4   ┆ 2019-05-24 16:18:38 ┆ 2019-04-25 10:30:25 ┆ false      │
│ 5   ┆ 2019-05-24 16:24:04 ┆ 2019-05-21 08:34:57 ┆ true       │
└─────┴─────────────────────┴─────────────────────┴────────────┘
</code></pre>
<p>So far, my code looks as follows.</p>
<pre class="lang-py prettyprint-override"><code>columns_epoch_to_timestamp: list[str] = [
"start_date",
"end_date",
]
df = df.with_columns(pl.col(*columns_epoch_to_timestamp))
</code></pre>
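<p>As a side note, the values above are plain Unix seconds; a quick stdlib check (independent of Polars) confirms the expected conversion for the first row:</p>

```python
from datetime import datetime, timezone

def epoch_to_utc(seconds: int) -> str:
    # Interpret the integer as seconds since 1970-01-01 UTC
    return datetime.fromtimestamp(seconds, tz=timezone.utc).strftime("%Y-%m-%d %H:%M:%S")

print(epoch_to_utc(1566637530))  # → 2019-08-24 09:05:30
```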
|
<python><datetime><casting><python-polars><epoch>
|
2024-08-19 13:05:41
| 1
| 2,980
|
Vega
|
78,887,911
| 3,906,713
|
Is there an efficient way to sum over numpy splits
|
<p>I would like to split an array into chunks, sum the values in each chunk, and return the result as another array. The chunks can have different sizes. This can be done naively using the <a href="https://numpy.org/doc/stable/reference/generated/numpy.split.html" rel="nofollow noreferrer">numpy split</a> function like this:</p>
<pre><code>def split_sum(a: np.ndarray, breakpoints: np.ndarray) -> np.ndarray:
    return np.array([np.sum(subarr) for subarr in np.split(a, breakpoints)])
</code></pre>
<p>However, this still uses a Python for-loop and is thus inefficient for large arrays. Is there a faster way?</p>
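<p>For comparison, one vectorized alternative (a sketch, assuming every breakpoint satisfies <code>0 &lt; b &lt; len(a)</code> and the breakpoints are sorted) is <code>np.add.reduceat</code>, which sums each segment in C without a Python loop:</p>

```python
import numpy as np

def split_sum_fast(a: np.ndarray, breakpoints: np.ndarray) -> np.ndarray:
    # Segment i spans [starts[i], starts[i+1]); prepending 0 covers the first chunk
    starts = np.concatenate(([0], np.asarray(breakpoints)))
    return np.add.reduceat(a, starts)

print(split_sum_fast(np.arange(10), np.array([3, 7])))  # → [ 3 18 24]
```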
|
<python><numpy><split>
|
2024-08-19 12:28:54
| 2
| 908
|
Aleksejs Fomins
|
78,887,797
| 5,013,037
|
How to log a request in FastAPI when an exception occurs which is handled inside submodule?
|
<p>Imagine I have a FastAPI backend application exposing the following test endpoint:</p>
<pre><code>@router.post("/my_test")
async def post_my_test(request: Request):
    a = do_stuff(request.var1)
    b = do_different_stuff(request.var2)
    return a + b
</code></pre>
<p>Inside my <code>do_stuff()</code> and <code>do_different_stuff()</code> a lot of (nested) business logic is executed.
E.g. inside <code>do_stuff</code> we call another method which calls another method etc. When reaching the lowest level the resulting method does some calculation</p>
<pre><code>async def calc_it(m, n):
    try:
        return m / n
    except ZeroDivisionError:
        logging.exception("Error occurred")
        return -1
</code></pre>
<p>Inside this method we explicitly handle the resulting error (let's assume it might be somewhat expected that in rare cases this happens). If this happens, I would want to log the complete request object from the top-level endpoint. I see the following possibilities for achieving this.</p>
<ol>
<li>Propagate the information that the error occured to the top-level method e.g. <code>return -1, "err_occured"</code> all the way up and log it inside <code>post_my_test</code></li>
<li>Propagate the whole request object all the way down to all low-level methods which execute stuff and log the request object inside <code>calc_it</code></li>
<li>Use some type of FastAPI <code>Middleware</code> object to intercept request and response or a separate Exception handler <code>app.add_exception_handler(Exception, my_exception_handler)</code> --> However I still want to handle the error (e.g. in this case return -1) inside the low-level method because I want to explicitly configure different fallback values in a lot of instances.</li>
</ol>
<p>The first and second options would obviously work; however, I have the feeling there should be a better solution than propagating this all the way down/up. The third option, using a basic middleware, won't work, as the middleware never sees the exception when we catch it inside the low-level method's <code>try except</code> statement.</p>
<p>I have the feeling there surely exists a smarter way of handling this. E.g. is there some global variable on a request-level I could modify which could pass the information that this complete request should be logged or not or are there any other ways of achieving this functionality without cluttering my code with up and down propagating information?</p>
<p>Thanks in advance for your help!</p>
<p><strong>EDIT:</strong>
I stumbled upon <a href="https://stackoverflow.com/questions/57204499/is-there-a-fastapi-way-to-access-current-request-data-globally">Is there a FastAPI way to access current Request data globally?</a>
and <a href="https://stackoverflow.com/questions/66747059/request-context-in-fastapi">Request context in FastAPI?</a>,
which employ ContextVars to pass the data using imports. As clarified in my comment, I could thus set the request object as a ContextVar; I would basically pass the complete request object to all my methods down the line. Is this some kind of anti-pattern? (It feels kind of strange to pass around a potentially huge object like this.) IMO it could be better to pass the flag with this kind of information ("log_this_request") up to the endpoint logic using ContextVars; however, as I'm using asyncio tasks, I'm only able to pass it down to tasks but not "back up" to their parent.</p>
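<p>Regarding the "back up to the parent" limitation, one pattern worth knowing (a sketch with hypothetical names, not FastAPI-specific) is to store a <em>mutable</em> container in the <code>ContextVar</code>: <code>asyncio.create_task</code> copies the context, but the copy is shallow, so a flag set inside the task is still visible to the parent through the shared dict:</p>

```python
import asyncio
from contextvars import ContextVar

request_state: ContextVar[dict] = ContextVar("request_state")

async def calc_it(m, n):
    try:
        return m / n
    except ZeroDivisionError:
        # flag the shared per-request dict instead of changing return values
        request_state.get()["log_this_request"] = True
        return -1

async def handle_request():
    request_state.set({"log_this_request": False})  # one dict per request
    result = await asyncio.create_task(calc_it(1, 0))
    # the task ran in a *copy* of the context, but the dict inside is shared
    return result, request_state.get()["log_this_request"]

result, should_log = asyncio.run(handle_request())
print(result, should_log)  # → -1 True
```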
|
<python><exception><logging><fastapi><uvicorn>
|
2024-08-19 12:02:39
| 1
| 364
|
fragant
|
78,887,694
| 16,527,170
|
Compare Columns in DF & drop rows if Dates are matching between Columns
|
<p>I have a df as below. If the date part (YYYY/MM/DD) of <code>Buy Exit Date</code> matches the date part of <code>Buy Date</code> in the next row, I want to drop that next row (the one whose <code>Buy Date</code> matches).</p>
<pre class="lang-py prettyprint-override"><code>import pandas as pd
data = {
'Buy Date': ['2022/11/07 15:00:00', '2022/11/11 12:00:00', '2022/11/24 15:00:00', '2022/12/01 12:00:00', '2022/12/14 09:00:00'],
'Buy Exit Date': ['2022/11/11 09:00:00', '2022/11/24 12:00:00', '2022/12/01 09:00:00', '2022/12/06 09:00:00', '2022/12/16 09:00:00'],
'ln_entry': [3232.899902, 3315.000000, 3381.699951, 3476.149902, 3364.699951]
}
df = pd.DataFrame(data)
</code></pre>
<p>df:</p>
<pre class="lang-py prettyprint-override"><code>              Buy Date        Buy Exit Date     ln_entry
0  2022/11/07 15:00:00  2022/11/11 09:00:00  3232.899902
1  2022/11/11 12:00:00  2022/11/24 12:00:00  3315.000000
2  2022/11/24 15:00:00  2022/12/01 09:00:00  3381.699951
3  2022/12/01 12:00:00  2022/12/06 09:00:00  3476.149902
4  2022/12/14 09:00:00  2022/12/16 09:00:00  3364.699951
</code></pre>
<p>Expected Output:</p>
<pre class="lang-py prettyprint-override"><code>              Buy Date        Buy Exit Date     ln_entry
0  2022/11/07 15:00:00  2022/11/11 09:00:00  3232.899902
1  2022/12/14 09:00:00  2022/12/16 09:00:00  3364.699951
</code></pre>
<p>My Code:</p>
<pre class="lang-py prettyprint-override"><code>df['Buy Date'] = pd.to_datetime(df['Buy Date'])
df['Buy Exit Date'] = pd.to_datetime(df['Buy Exit Date'])
df['Buy Date Only'] = df['Buy Date'].dt.date
df['Buy Exit Date Only'] = df['Buy Exit Date'].dt.date
df['Next_Buy_Date_Only'] = df['Buy Date Only'].shift(-1)
to_remove = df['Buy Exit Date Only'] == df['Next_Buy_Date_Only']
df_cleaned = df[~to_remove]
df_cleaned = df_cleaned.drop(columns=['Buy Date Only', 'Buy Exit Date Only', 'Next_Buy_Date_Only'])
print(df_cleaned)
</code></pre>
<p>My Output:</p>
<pre class="lang-py prettyprint-override"><code>             Buy Date       Buy Exit Date     ln_entry
3 2022-12-01 12:00:00 2022-12-06 09:00:00  3476.149902
4 2022-12-14 09:00:00 2022-12-16 09:00:00  3364.699951
</code></pre>
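<p>For what it's worth, the expected output corresponds to dropping a row when its <em>Buy Date</em> falls on the same day as the <em>previous</em> row's exit — i.e. comparing against <code>shift(1)</code> rather than <code>shift(-1)</code>. A sketch reusing the data above (my reading of the intent, not a confirmed fix):</p>

```python
import pandas as pd

data = {
    "Buy Date": ["2022/11/07 15:00:00", "2022/11/11 12:00:00", "2022/11/24 15:00:00",
                 "2022/12/01 12:00:00", "2022/12/14 09:00:00"],
    "Buy Exit Date": ["2022/11/11 09:00:00", "2022/11/24 12:00:00", "2022/12/01 09:00:00",
                      "2022/12/06 09:00:00", "2022/12/16 09:00:00"],
    "ln_entry": [3232.899902, 3315.0, 3381.699951, 3476.149902, 3364.699951],
}
df = pd.DataFrame(data)

buy = pd.to_datetime(df["Buy Date"]).dt.normalize()
exit_ = pd.to_datetime(df["Buy Exit Date"]).dt.normalize()

# Drop a row when its Buy Date falls on the same day as the previous row's exit
chained = buy.eq(exit_.shift(1))
out = df[~chained].reset_index(drop=True)
print(out["ln_entry"].tolist())  # → [3232.899902, 3364.699951]
```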
|
<python><pandas><dataframe><date>
|
2024-08-19 11:37:41
| 1
| 1,077
|
Divyank
|
78,887,684
| 11,790,979
|
Pytest can't find imports from tested modules
|
<p>I am trying to test a package with the following structure:</p>
<pre><code>.
├── src\
│   └── myproject\
│       ├── __init__.py
│       ├── module1.py
│       ├── module2.py
│       └── main.py
└── tests\
    ├── test_simple.py
    └── test_module1.py
</code></pre>
<p>I am running with a virtualenv.</p>
<pre><code>#test_simple.py
def test_tests_working() -> None:
assert 1 == 1
</code></pre>
<pre><code>#test_module1.py
import sys
sys.path.append(r"path_to_project\src")  # I got this from another SO answer, but it didn't seem to resolve anything
from myproject.module1 import func1
def test_func1() -> None:
    blah = func1(args)
    assert blah == "expected func1 results"
</code></pre>
<p><code>test_simple</code> works fine and returns True, so I know pytest itself is working and looking in the right places; the sanity checks check out. However, when I get to <code>test_module1</code>, I get an error on line 1, because <code>module1</code> tries to import another module, which fails:</p>
<pre><code>E ModuleNotFoundError: No module named 'module2'
</code></pre>
<pre class="lang-py prettyprint-override"><code>#module1.py
import module2
def func1():
    # do stuff
</code></pre>
<p>Why is it not working? If I run the scripts directly, or even from another script, I have <em>no</em> issues with the import statements as written. I tried changing to <code>from . import module2</code>, but that didn't solve it. I also deleted the <code>tests/__init__.py</code> file on another SO recommendation, but to no avail.</p>
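<p>One common arrangement (paths hypothetical) is a <code>conftest.py</code> at the repository root that puts <code>src/</code> on <code>sys.path</code> for the whole test session; with that in place, <code>module1</code> can use an absolute import such as <code>from myproject import module2</code>, which resolves both under pytest and when the package is run normally:</p>

```python
# conftest.py at the repository root (next to src/ and tests/)
import sys
from pathlib import Path

SRC = Path(__file__).resolve().parent / "src"
if str(SRC) not in sys.path:
    sys.path.insert(0, str(SRC))  # pytest imports this file before collecting tests
```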
|
<python><pytest>
|
2024-08-19 11:34:29
| 0
| 713
|
nos codemos
|
78,887,664
| 16,723,655
|
Jupyter notebook Nbextensions shows only spinning reload circle
|
<p>I installed WSL Ubuntu 22.04.3 LTS.</p>
<p>I didn't have any problems with Nbextensions before I reinstalled Jupyter Notebook.</p>
<p>I completely reinstalled Ubuntu (WSL Ubuntu) and Jupyter Notebook (see the code below).</p>
<pre><code>pip install notebook==6.1.5
</code></pre>
<p>Then, I entered below code as before.</p>
<pre><code>pip install jupyter_contrib_nbextensions
</code></pre>
<p><a href="https://i.sstatic.net/XtnDlbcg.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/XtnDlbcg.png" alt="enter image description here" /></a></p>
<pre><code>jupyter contrib nbextension install --user
</code></pre>
<p><a href="https://i.sstatic.net/z1ycyMf5.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/z1ycyMf5.png" alt="enter image description here" /></a></p>
<pre><code>jupyter nbextension enable varInspector/main
</code></pre>
<p><a href="https://i.sstatic.net/OBs4wI18.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/OBs4wI18.png" alt="enter image description here" /></a></p>
<p>As can be seen above, everything looks fine for installing Nbextensions.</p>
<p>However, when I go Nbextensions tab, I see only spinning reload circle as below.</p>
<p><a href="https://i.sstatic.net/ozZQ5BA4.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/ozZQ5BA4.png" alt="enter image description here" /></a></p>
<p>I searched about this problem and many people are talking about JSON.</p>
<p>However, I actually don't know much about it, so I entered what was written.</p>
<p>In the ".jupyter" folder, there is an "nbconfig" folder and a "jupyter_nbconvert_config" file.</p>
<p>In that folder, there are 'notebook' and 'tree' files.</p>
<p>The notebook file has code as below.</p>
<pre><code>{
"load_extensions": {
"nbextensions_configurator/config_menu/main": true,
"contrib_nbextensions_help_item/main": true,
"varInspector/main": true
}
}
</code></pre>
<p>The tree file has code as below.</p>
<pre><code>{
"load_extensions": {
"nbextensions_configurator/tree_tab/main": true
}
}
</code></pre>
<p>'jupyter_nbconvert_config' file has code as below.</p>
<pre><code>{
"version": 1,
"Exporter": {
"extra_template_paths": [
".",
"/home/park/anaconda3/envs/surging/lib/python3.9/site-packages/jupyter_contrib_nbextensions/templates"
],
"preprocessors": [
"jupyter_contrib_nbextensions.nbconvert_support.CodeFoldingPreprocessor",
"jupyter_contrib_nbextensions.nbconvert_support.PyMarkdownPreprocessor"
]
}
}
</code></pre>
<p>What should I do with it?</p>
|
<python><jupyter-notebook><jupyter-contrib-nbextensions>
|
2024-08-19 11:29:38
| 1
| 403
|
MCPMH
|
78,887,406
| 5,786,649
|
Is it possible to use the override syntax with structured configs in Hydra?
|
<p>The Hydra docs on <a href="https://hydra.cc/docs/patterns/configuring_experiments/" rel="nofollow noreferrer">configuring experiments</a> explain how to use overrides in standard, file-based configs. Is it possible to use overrides in Structured Configs? A minimal example:</p>
<pre class="lang-py prettyprint-override"><code>from dataclasses import dataclass, field
from typing import Any

from hydra.core.config_store import ConfigStore
from omegaconf import DictConfig, MISSING
@dataclass
class MyConfig(DictConfig):
    data: DataConfig = MISSING
    model: ModelConfig = MISSING
    parameters: dict[str, Any] = MISSING

cs = ConfigStore.instance()
cs.store(name="my_conf", node=MyConfig)
cs.store(group="model", name="v1", node=ModelConfig_V1)
cs.store(group="data", name="synthetic", node=DataConfig_Synthetic)
</code></pre>
<p>I would like to add in experiments somewhat like this:</p>
<pre class="lang-py prettyprint-override"><code>@dataclass
class ExperimentConfig_Default:
    description: str = "Some experiment."
    parameters: dict[str, Any] = field(default_factory=lambda: {"lr": 0.01})

cs.store(group="experiment", name="default", node=ExperimentConfig_Default)
</code></pre>
<p>I want to be able to use the CLI like this:</p>
<pre class="lang-bash prettyprint-override"><code>python main.py model=v1 data=synthetic +experiment=default
</code></pre>
<p>to get a config like this:</p>
<pre><code>data:
...
model:
...
parameters:
lr: 0.01
</code></pre>
|
<python><fb-hydra><omegaconf>
|
2024-08-19 10:20:13
| 1
| 543
|
Lukas
|
78,887,327
| 1,251,570
|
Pandas without sqlite3
|
<p>There is a transitive dependency of <code>Pandas</code> on <code>sqlite3</code> via <code>SqlAlchemy</code>. Hence, if you don't have <code>sqlite3</code> installed on Linux or Windows, <code>Pandas</code> won't run. It will show this error.</p>
<blockquote>
<p>ModuleNotFoundError: No module named β_sqlite3β</p>
</blockquote>
<p>Since I don't need SQLite3 in my project, how can I run/use Pandas without SQLite3?</p>
|
<python><pandas><sqlite><sqlite3-python>
|
2024-08-19 10:00:07
| 1
| 1,622
|
Barun
|
78,887,320
| 13,314,132
|
Streamlit crashing when using ydata_profiling
|
<p>I am using Streamlit to visualize my <code>ydata_profiling</code> report.
However, when I select a work order to generate a profile report, it keeps crashing without any error message.
Attached screenshot:
<a href="https://i.sstatic.net/As3RzJ8J.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/As3RzJ8J.png" alt="enter image description here" /></a></p>
<p>I have used the same code in a Jupyter notebook and it works fine. Please see the reference:
<a href="https://i.sstatic.net/p9h6ztfg.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/p9h6ztfg.png" alt="profiling report working in a Jupyter notebook" /></a></p>
<p>The code is as follows:</p>
<pre><code># Analytics Section
if choice == 'π Analytics':
    st.subheader('Analytics')

    # Fetch all unique work orders from MongoDB
    work_orders = collection.distinct('Work_Order')

    if work_orders:
        # Create a multi-select dropdown for work orders
        selected_work_orders = st.multiselect('Select Work Orders:', work_orders)

        if selected_work_orders:
            # Fetch data for the selected work orders
            records = list(collection.find({"Work_Order": {"$in": selected_work_orders}}))

            if records:
                # Convert the list of MongoDB records to a DataFrame
                df = pd.DataFrame(records)

                # Drop the MongoDB internal fields if it's not needed
                if '_id' in df.columns:
                    df = df.drop(columns=['_id'])
                df = df.drop(columns=['Object_Detection_Visual'])

                # Generate a profiling report using ydata-profiling
                profile = ProfileReport(df, title="Work Orders Data Profile", minimal=True)

                # Display the profiling report in Streamlit
                st_profile_report(profile)
            else:
                st.write("No data found for the selected work orders.")
        else:
            st.write("Please select one or more work orders to analyze.")
    else:
        st.write("No work orders available.")
</code></pre>
<p>Also, I am fetching the data from MongoDB, and I have checked that MongoDB is connected.</p>
<p>The dataframe is as follows:</p>
<p><a href="https://i.sstatic.net/8Mzgk76T.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/8Mzgk76T.png" alt="enter image description here" /></a></p>
<p>Versions:</p>
<pre><code>os: Windows
python: 3.11
streamlit: 1.35.0
streamlit-pandas-profiling: 0.1.3
ydata-profiling: 4.9.0
</code></pre>
|
<python><mongodb><streamlit><pandas-profiling>
|
2024-08-19 09:59:05
| 1
| 655
|
Daremitsu
|
78,887,297
| 6,525,347
|
Unable to install PYPI Package from Sonatype-Nexus in Docker container: "No matching distribution found"
|
<p>I have a hosted PyPI Sonatype Nexus Repository named pypi, where I store several packages, including <code>pillow</code>.</p>
<p>On my local machine, I can install <code>pillow</code> without any issues from the self-hosted pypi using the following command:</p>
<pre><code>pip install pillow --trusted-host {HOST_NAME} --index-url https://{HOST_NAME}/repository/pypi/simple
</code></pre>
<p>Additionally, I can search for the package with:</p>
<pre><code>pip search pillow --trusted-host {HOST_NAME} --index-url https://{HOST_NAME}/repository/pypi/pypi
</code></pre>
<p>Both commands work flawlessly on my local setup.</p>
<p>However, I'm facing an issue when trying to replicate this setup in a Docker environment using the <code>python:3.9.19</code> image. While the pip search command works as expected, the <code>pip install</code> command fails with the following error:</p>
<pre><code>ERROR: Could not find a version that satisfies the requirement pillow (from versions: none)
ERROR: No matching distribution found for pillow
</code></pre>
<p>It's worth noting that the <code>pillow</code> package is present in the PyPI repository, and I'm able to install it on my local machine without any problems. This issue also occurs with other packages, though some, like <code>python-slugify</code>, install without any issues.</p>
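<p>In case it helps with reproducing the setup: one way to make <em>every</em> pip invocation inside the image use the Nexus index — rather than relying on per-command flags — is to bake the two settings into a <code>pip.conf</code> in the Dockerfile (the hostname below is the same placeholder as above):</p>

```ini
# /etc/pip.conf inside the image ({HOST_NAME} is a placeholder)
[global]
index-url = https://{HOST_NAME}/repository/pypi/simple
trusted-host = {HOST_NAME}
```

This rules out the case where a build step (or a sub-process spawned by pip) silently falls back to the default index.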
<p>Any help or suggestion on how to fix this?</p>
|
<python><docker><nexus><pypi><nexus3>
|
2024-08-19 09:52:38
| 1
| 1,146
|
thelearner
|
78,887,265
| 2,123,706
|
import filename as fn fails in python 3.11
|
<p>I have a scheduled task that runs a python script.</p>
<p>The python script imports a bunch of functions from another python script held in the same directory.</p>
<pre><code>- dir
- script.py
- functions.py
</code></pre>
<p>I call <code>functions.py</code> with:</p>
<pre><code>import functions as paxf
</code></pre>
<p>This has worked for the past 4 months.</p>
<p>Now it returns the following error:</p>
<pre><code>ModuleNotFoundError: No module named 'functions'
</code></pre>
<p>I have not made any changes to the scripts, and am on python 3.11.</p>
<p>I tried updating the import path of <code>functions</code> with:</p>
<ul>
<li><code>import .functions</code>,</li>
<li><code>import parent.functions</code>,</li>
<li><code>import home.parent.functions</code>,</li>
</ul>
<p>but receive the same error.</p>
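<p>One thing worth checking (a guess, since nothing in the scripts changed): whether the scheduled task now launches Python in a way that leaves the script's own folder off <code>sys.path</code>, e.g. via a changed working directory or launcher. Pinning the folder explicitly at the top of <code>script.py</code> rules that out:</p>

```python
import os
import sys

# Ensure the directory containing this script is importable,
# regardless of the scheduled task's working directory
script_dir = os.path.dirname(os.path.abspath(__file__))
if script_dir not in sys.path:
    sys.path.insert(0, script_dir)

# after this, `import functions as paxf` should resolve again
```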
<p>Any suggestions as to what can be going wrong?</p>
|
<python><import>
|
2024-08-19 09:40:56
| 1
| 3,810
|
frank
|
78,886,970
| 893,254
|
How to add Anaconda Prompt option to Windows Terminal?
|
<p>How can I add an Anaconda Prompt option to <a href="https://apps.microsoft.com/detail/9n0dx20hk701?hl=en-US&gl=US" rel="nofollow noreferrer">Windows Terminal</a>?</p>
<p>I have installed Anaconda Navigator. I do not have any other Python distributions installed. This is a Windows 11 system, but the same information should apply to any other Windows system with Windows Terminal installed.</p>
<p>Here is a screenshot of my current Windows Terminal environment options:</p>
<p><a href="https://i.sstatic.net/51Lwui9H.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/51Lwui9H.png" alt="Windows Terminal" /></a></p>
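<p>For context, Windows Terminal profiles are plain JSON entries under <code>profiles → list</code> in its settings file (Settings → Open JSON file). A sketch of a profile that mirrors what the Anaconda Prompt shortcut runs — the install path is an assumption and must be adjusted to wherever Anaconda actually lives on the machine:</p>

```json
{
    "name": "Anaconda Prompt",
    "commandline": "%windir%\\System32\\cmd.exe /K %USERPROFILE%\\anaconda3\\Scripts\\activate.bat %USERPROFILE%\\anaconda3",
    "startingDirectory": "%USERPROFILE%"
}
```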
|
<python><windows><anaconda><windows-terminal>
|
2024-08-19 08:23:00
| 1
| 18,579
|
user2138149
|
78,886,889
| 4,513,726
|
use an iterface to replace characterization factors of an lca object with Brightway
|
<p>I am trying to change the characterization factors by specifying new values using a datapackage, but I am facing some unexpected problems. For example, if I do</p>
<pre class="lang-py prettyprint-override"><code>import bw2data as bd
import bw_processing as bwp
import numpy as np
import bw2calc as bc
mobility_db = bd.Database('Mobility example')
combustion_car_driving = mobility_db.get(name='Driving an combustion car')
co2 = mobility_db.get(name='CO2')
# create datapackage with alternative values for the GWP of CO2
indices_array = np.array([(co2.id,co2.id)],dtype=bwp.INDICES_DTYPE)
values_array = np.array([[10,12]])
dp_c = bwp.create_datapackage()
dp_c.add_dynamic_array(
matrix='characterization_matrix',
indices_array=indices_array,
interface=values_array,
)
fu, data_objs , _ = bd.prepare_lca_inputs({combustion_car_driving:1},
method=('IPCC','simple'))
lca = bc.LCA(demand=fu,
method=('IPCC','simple'),
data_objs=data_objs+[dp_c],
use_arrays=True)
lca.lci()
lca.lcia()
</code></pre>
<p>I get an <code>InconsistentGlobalIndex</code> error: <em>Multiple global index values found: [1, None]. If multiple LCIA datapackages are present, they must use the same value for <code>GLO</code>, the global location, in order for filtering for site-generic LCIA to work correctly</em>.</p>
<p>If I filter the resources in the dp_c</p>
<pre class="lang-py prettyprint-override"><code>dp_c.filter_by_attribute('matrix','characterization_matrix').filter_by_attribute("kind", "indices").resources
</code></pre>
<p>it does not have a field of global_index, but the data_objs do.</p>
<p>I've tried adding manually to the resource a global_index of:</p>
<pre class="lang-py prettyprint-override"><code>bd.method.geomapping[bd.method.config.global_location]
</code></pre>
<p>and the calculation can continue (although I always get the same results after next(lca), which is probably a different problem).</p>
<p>Am I doing something wrong? I am used to changing values in the A or B matrices, but not C.</p>
|
<python><brightway>
|
2024-08-19 07:58:57
| 2
| 1,646
|
mfastudillo
|
78,886,886
| 17,795,398
|
How to align two x-axes (twiny) in matplotlib?
|
<p>I have written this code to plot two curves. I want to show the same <code>y</code> value for two <code>x</code> values (the <code>x</code> values are related by some non-linear relation). The function that describes <code>y</code> can be written as a function of either of them, which is why I want to show both dependencies. The problem is that in the plot, the x-axes are not aligned.</p>
<pre><code>import numpy as np
import matplotlib.pyplot as plt
# Energies in keV
# EPKA, Tdam, eff
FeValues = np.array([
[80, 69, 42],
[90, 78, 42],
[100, 86, 42]
])
WValues = np.array([
[150, 133, 30],
[200, 177, 29],
[250, 220, 29]
])
fig = plt.figure()
axPKA = fig.add_subplot()
axPKA.set_xlabel(r"$E_{PKA}$ (keV)")
axPKA.set_ylabel(r"$\overline{\xi}$ (%)")
axTdam = axPKA.twiny()
axTdam.set_xlabel(r"$T_{dam}$ (keV)")
axPKA.plot(FeValues[:, 0], FeValues[:, 2], label="Fe", marker="o")
axPKA.plot(WValues[:, 0], WValues[:, 2], label="W", marker="o")
axTdam.scatter(FeValues[:, 1], FeValues[:, 2], marker=".", color="red")
axTdam.scatter(WValues[:, 1], WValues[:, 2], marker=".", color="red")
axPKA.legend()
fig.tight_layout()
plt.show()
</code></pre>
<p>As you can see below, only the first and last points are aligned:
<a href="https://i.sstatic.net/65CcX2mB.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/65CcX2mB.png" alt="enter image description here" /></a></p>
<p>This question is similar to <a href="https://stackoverflow.com/questions/65971022/matplotlib-how-can-i-align-my-two-x-axes">this one</a>, but none of the solutions proposed there worked.</p>
<p>Edit: Just to be clear, I want the red dots to be placed above the corresponding blue/orange ones.</p>
<p>Edit 2: as @mozway suggested, since the relation between <code>EPKA</code> and <code>Tdam</code> is non-linear, it's impossible to align data properly. In addition (not mentioned before), the non-linear relation is different for <code>FeValues</code> and <code>WValues</code>, so at the end I decided not to use this kind of plots. It might be better to add the <code>EPKA</code>-<code>Tdam</code> relation in another plot.</p>
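For reference, when the relation between the two scales <em>is</em> a single invertible function, Matplotlib can keep the axes aligned by construction via <code>secondary_xaxis</code>. A minimal sketch under that assumption (the linear factor 0.88 is made up for illustration; it is not the real EPKA–Tdam relation):

```python
import matplotlib
matplotlib.use("Agg")  # render off-screen, no display needed
import matplotlib.pyplot as plt


def fwd(e):
    # hypothetical invertible relation Tdam = 0.88 * EPKA
    return 0.88 * e


def inv(t):
    return t / 0.88


fig, ax = plt.subplots()
ax.plot([80, 90, 100], [42, 42, 42], marker="o")
ax.set_xlabel(r"$E_{PKA}$ (keV)")

# the secondary axis shares ax's data coordinates transformed by (fwd, inv),
# so every point is aligned exactly, not just the endpoints
sec = ax.secondary_xaxis("top", functions=(fwd, inv))
sec.set_xlabel(r"$T_{dam}$ (keV)")
fig.tight_layout()
```

Because the Fe and W datasets follow different, non-linear EPKA–Tdam relations, no single (forward, inverse) pair exists for both, which is why a separate plot is the cleaner option here.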
|
<python><matplotlib>
|
2024-08-19 07:58:39
| 1
| 472
|
Abel Gutiérrez
|
78,886,753
| 1,176,573
|
Bar chart plot shows linearly increasing data
|
<p>Using Python <code>plotly</code>, the chart shows the data as linearly increasing, which is not correct.
Also, the first data point (i.e. <code>5585</code>) is rendered as an empty bar.
How do I fix these?</p>
<pre class="lang-py prettyprint-override"><code>import plotly.graph_objects as go
fig = go.Figure()
x = df_quaterly_results.index
y = df_quaterly_results[df_quaterly_results.columns[0]] # Sales Columns
trace_name = 'QoQ Sales'
fig.add_trace(go.Bar(x=x, y=y, name=trace_name))
plot_title = 'QoQ Sales'
fig.update_layout(
title=plot_title,
xaxis_title='Quarter',
yaxis_title='Rupees in Cr.')
# pyo.plot(fig, filename="temp-plot.html")
fig.show()
</code></pre>
<p><a href="https://i.sstatic.net/wj47q2gY.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/wj47q2gY.png" alt="enter image description here" /></a></p>
<p>Sample Dataframe:</p>
<div class="s-table-container"><table class="s-table">
<thead>
<tr>
<th>Quarter</th>
<th>Sales</th>
</tr>
</thead>
<tbody>
<tr>
<td>Jun 2021</td>
<td>5585</td>
</tr>
<tr>
<td>Sep 2021</td>
<td>7096</td>
</tr>
<tr>
<td>Dec 2021</td>
<td>8527</td>
</tr>
<tr>
<td>Mar 2022</td>
<td>7893</td>
</tr>
<tr>
<td>Jun 2022</td>
<td>8607</td>
</tr>
<tr>
<td>Sep 2022</td>
<td>8458</td>
</tr>
</tbody>
</table></div>
<p>Note: The column <code>Quarter</code> is dataframe index here.</p>
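One cause consistent with this picture (an assumption, since the dataframe's dtypes aren't shown) is that the <code>Sales</code> column was parsed as strings, which makes Plotly treat the y-axis as categorical, so the bars appear to grow in row order rather than by value. Coercing the column to numeric before plotting would rule that out:

```python
import pandas as pd

# stand-in frame with Sales read as strings, as can happen when scraping
df = pd.DataFrame(
    {"Sales": ["5585", "7096", "8527"]},
    index=["Jun 2021", "Sep 2021", "Dec 2021"],
)

# coerce to numbers; unparseable cells become NaN instead of raising
df["Sales"] = pd.to_numeric(df["Sales"], errors="coerce")
print(df["Sales"].dtype)  # int64
```

After the coercion, <code>go.Bar</code> receives numeric y values and scales the bars by magnitude.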
|
<python><plotly>
|
2024-08-19 07:26:33
| 3
| 1,536
|
RSW
|
78,886,736
| 2,386,605
|
PyCharm can work with setup.py but not pyproject.toml
|
<p>I have a package <code>my_lib</code> and want to use it in another repo in PyCharm.</p>
<p>When I have my <code>setup.py</code> file</p>
<pre><code>from setuptools import setup, find_packages
with open("README.md", "r", encoding="utf-8") as fh:
long_description = fh.read()
setup(
name='my_lib',
version='1.0.0',
author='Some author',
author_email='some@email.com',
description='',
long_description=long_description,
long_description_content_type="text/markdown",
url='https://github.com/my-proj/my-lib',
project_urls={
"Bug Tracker": "https://github.com/my-proj/my-lib/issues"
},
license='Apache-2.0',
packages=find_packages(),
install_requires=[
'fastapi'
],
)
</code></pre>
<p>I can install it via</p>
<p><code>pip3.12 install -e ../my-lib</code></p>
<p>and PyCharm recognizes everything</p>
<p>However, when I add <code>pyproject.toml</code></p>
<pre><code>[build-system]
requires = ["setuptools>=64", "wheel"]
build-backend = "setuptools.build_meta"
[project]
name = "my_lib"
version = "1.0.0"
description = ""
readme = "README.md"
requires-python = ">=3.10"
license = {text = "Apache-2.0"}
authors = [{name = "Some author", email = "some@email.com"}]
dependencies = [
"fastapi"
]
[project.urls]
Homepage = "https://github.com/my-proj/my-lib"
'Bug Tracker' = "https://github.com/my-proj/my-lib/issues"
</code></pre>
<p>the other project does not recognize something like</p>
<pre><code>from my_lib.a import b
</code></pre>
<p>anymore.</p>
<p>Do you know how to resolve that?</p>
|
<python><pycharm>
|
2024-08-19 07:22:02
| 1
| 879
|
tobias
|
78,886,486
| 1,668,622
|
In Micropython, how can I guarantee asynchronous tasks being shut down?
|
<p><strong>Note</strong>: although I'm beginning my question with a <code>microdot</code> example, this is just because I first observed the behavior described below when attempting to restart a <code>microdot</code> server after aborting a running async script with CTRL-C. Chances are that the solution to my problem doesn't have anything to do with <code>microdot</code>, but I'm not sure, so I'm keeping the example for now.</p>
<p><strong>Update</strong>: this is not an ESP32 issue either. Running the snippets using plain MicroPython gives the same results. (See the quick installation guide at the end of this post.)</p>
<p><strong>Update</strong>: to me it feels like a bug, so I made a <a href="https://github.com/micropython/micropython/issues/15761" rel="nofollow noreferrer">report</a></p>
<hr />
<p>I'm struggling with the following short <code>async</code> snippet running on MicroPython v.1.23 on an ESP32 device:</p>
<pre class="lang-py prettyprint-override"><code>import asyncio
from microdot import Microdot
async def amain():
app = Microdot()
@app.route('/')
async def index(request):
return 'Hello, world!!'
await app.start_server(host="0.0.0.0", port=80)
asyncio.run(amain())
</code></pre>
<p>The code does what you'd expect it to do (i.e. nothing, because the device is not reachable due to WiFi not being set up, but that's not important for now). Unfortunately it works only the first time after each reboot.</p>
<p>Trying to run it the second time results in</p>
<pre><code>File "main.py", line 26, in amain
File "microdot.py", line 1234, in start_server
File "asyncio/stream.py", line 1, in start_server
OSError: [Errno 112] EADDRINUSE
</code></pre>
<p>due to the fact that the <code>asyncio</code> server doesn't terminate, which seems to be a <a href="https://forum.micropython.org/viewtopic.php?t=8275&start=10" rel="nofollow noreferrer">known issue</a>.</p>
<p>However, that doesn't seem to be a problem with the server itself, since you can still shut it down manually, e.g. this way:</p>
<pre class="lang-py prettyprint-override"><code> task = asyncio.create_task(app.start_server(host="0.0.0.0", port=80))
await asyncio.sleep(5)
task.cancel()
</code></pre>
<p>now the server shuts down after 5 seconds, unconditionally.</p>
<p>Easy thing, I thought, just catch <code>KeyboardInterrupt</code> or <code>CancelledError</code> and shut down the server using <code>cancel()</code> and you're done with only a little workaround.</p>
<hr />
<p>Here the <code>asyncio</code>-only (<code>microdot</code>-agnostic) part begins</p>
<hr />
<p>Some steps further - and without the web server anymore - I came up with a script that tries to handle a shutdown of the main application in order to <code>cancel()</code> my problematic task:</p>
<pre class="lang-py prettyprint-override"><code>import asyncio
async def async_foo():
try:
while True:
print("bar")
await asyncio.sleep(1)
except asyncio.CancelledError:
print("async_foo:CancelledError")
except KeyboardInterrupt:
print("async_foo:KeyboardInterrupt")
finally:
print("async_foo:finally")
async def amain():
task = asyncio.create_task(async_foo())
try:
await task
except asyncio.CancelledError:
print("amain:CancelledError")
except KeyboardInterrupt:
print("amain:KeyboardInterrupt")
finally:
print("amain:finally")
def run():
try:
asyncio.run(amain())
except asyncio.CancelledError:
print("run:CancelledError")
except KeyboardInterrupt:
print("run:KeyboardInterrupt")
finally:
print("run:finally")
run()
</code></pre>
<p>Here I'm catching all sorts of exceptions in order to find the one I need to handle, only to see those exceptions not being propagated to the async tasks as I'd expect.</p>
<p>Running this snippet on CPython (3.12 for me) I get</p>
<pre><code>$ python3 -c "import main; main.run()"
bar
bar
bar
^Casync_foo:CancelledError
async_foo:finally
amain:finally
run:finally
</code></pre>
<p>while on MicroPython only <code>run()</code> will catch exceptions:</p>
<pre><code>import main; main.run()
bar
bar
run:KeyboardInterrupt
run:finally
</code></pre>
<p>So what's going on here? It looks like MicroPython <code>asyncio</code> immediately shuts down all tasks without raising exceptions, but unfortunately <code>asyncio.start_server</code>, which gets invoked by <code>microdot</code>, keeps running for some reason.</p>
<p>Is this a bug? How can I let a coroutine handle <code>CancelledError</code> in general (resp. how would I shut down <code>microdot.start_server()</code>/<code>asyncio.start_server()</code> specifically)?</p>
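For completeness, the explicit-cancellation workaround mentioned above can be reduced to a minimal self-contained sketch (plain CPython <code>asyncio</code>, no <code>microdot</code> involved); this is only the pattern, not an answer to whether exceptions should propagate into tasks on CTRL-C:

```python
import asyncio

results = []


async def worker():
    try:
        while True:
            await asyncio.sleep(0.05)
    except asyncio.CancelledError:
        results.append("worker cancelled")
        raise  # re-raise so the task ends up in the cancelled state


async def amain():
    task = asyncio.create_task(worker())
    await asyncio.sleep(0.2)
    task.cancel()  # explicit cancellation, the part that also works on MicroPython
    try:
        await task
    except asyncio.CancelledError:
        pass
    results.append("shutdown complete")


asyncio.run(amain())
print(results)  # ['worker cancelled', 'shutdown complete']
```

The key point is that <code>task.cancel()</code> followed by awaiting the task guarantees the cleanup code runs before the loop exits, which is exactly what a KeyboardInterrupt on MicroPython apparently does not do.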
<hr />
<p>Install Micropython (on Ubuntu/Debian):</p>
<pre><code># maybe install some requirements
sudo apt-get install build-essential libffi-dev git pkg-config
git clone https://github.com/micropython/micropython.git
cd micropython/ports/unix
make submodules
make -j8
build-standard/micropython [<PATH-TO-FILE>]
</code></pre>
|
<python><python-asyncio><micropython>
|
2024-08-19 06:07:41
| 1
| 9,958
|
frans
|
78,886,414
| 6,843,153
|
How to switch to another page with a button click in streamlit
|
<p>I have the following <strong>streamlit</strong> button:</p>
<pre><code> def _render_settings_button(self, column):
with column.container():
st.button(
":material/settings: Settings",
key="settings_button",
use_container_width=True,
on_click=lambda: st.switch_page(st.session_state["pages"]["settings"])
)
</code></pre>
<p>The problem is that when I click the button, I get this error:</p>
<p><code>Calling st.rerun() within a callback is a no-op.</code></p>
<p>How can I switch to another page using a <code>st.button</code> instead of a <code>st.page_link</code> (I want to use <code>st.button</code> because of the styling features).</p>
|
<python><streamlit>
|
2024-08-19 05:34:33
| 2
| 5,505
|
HuLu ViCa
|
78,886,330
| 5,799,033
|
In Python, I want a list (or something) to pop its last member when I return from a function
|
<p>This works, but is it the best way? Performance is not a concern. It just seems clunky and error-prone to do the pop manually.</p>
<p>I want to access all parts of the list in the called functions, but I don't want to know about them from the caller.</p>
<pre><code>def outer():
myList = ['outer']
middle(myList)
print("back to outer:", myList)
def middle(theList):
theList.append('middle')
inner(theList)
print("back to middle:", theList)
theList.pop()
def inner(expandedList):
expandedList.append('inner')
print("inner:", expandedList)
expandedList.pop()
outer()
>>>
inner: ['outer', 'middle', 'inner']
back to middle: ['outer', 'middle']
back to outer: ['outer']
>>>
</code></pre>
<p>If it helps to give context, I'm writing an XML to STL application. As I walk into nested elements, x/y/z positional shifts are cumulative as they are written out as absolute triangulated facet vertices, but when I return from a nested situation (the <code><container></code> element in my DTD), I should no longer incorporate the shift/offset of the inner element(s).</p>
<p>I've been programming since the 80s, but am relatively new to Python. Variable scope has been one of the more difficult aspects for me to come to terms with. I initially thought the meaning of the list would just naturally pop.</p>
<p>Editing my question to say @Guy has given a good answer.</p>
<p>Beyond that, if it's not going too far afield, I'd like to consider this alternate code sample:</p>
<pre><code>def outer():
myList = ['outer']
myVal = 7
print("outer myVal", myVal)
middle(myList, myVal)
print("back to outer:", myList)
print("back to outer:", myVal)
def middle(theList, theVal):
theList.append('middle')
theVal += 4
print("middle theVal", theVal)
inner(theList, theVal)
print("back to middle:", theList)
print("back to middle:", theVal)
#theList.pop()
def inner(expandedList, fiddledVal):
expandedList.append('inner')
fiddledVal = 47
print("inner expandedList:", expandedList)
print("inner fiddledVal:", fiddledVal)
#expandedList.pop()
outer()
>>>
outer myVal 7
middle theVal 11
inner expandedList: ['outer', 'middle', 'inner']
inner fiddledVal: 47
back to middle: ['outer', 'middle', 'inner']
back to middle: 11
back to outer: ['outer', 'middle', 'inner']
back to outer: 7
>>>
</code></pre>
<p>Simple integers, after returning from a called function, have the same values they had before their visit to the called function. Lists are modified by the called function in ways that persist after returning from it.</p>
<p>Short of reading all the Python documentation, is there something I could read to better understand the underlying distinction between the integer and the list?</p>
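The distinction the two samples show is usually described as "call by object reference": the list and the int are passed the same way, but <code>+=</code> on an int rebinds a local name, while <code>append</code> mutates the shared object. A minimal sketch:

```python
def mutate(seq, num):
    seq.append(99)  # mutates the caller's list object in place
    num += 1        # ints are immutable: this rebinds the local name only


items = [1, 2]
n = 7
mutate(items, n)
print(items)  # [1, 2, 99]
print(n)      # 7
```

The usual pointers for reading up on this are the mutable-vs-immutable and argument-passing sections of the official Python tutorial and the "Defining Functions" notes on how arguments are passed.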
|
<python><list>
|
2024-08-19 04:54:36
| 2
| 321
|
Tamias
|
78,886,325
| 14,458,028
|
How can I use a field of Foreign key (User Model) as the upload destination of FileField?
|
<p>I have a Files model which has the User model as a foreign key. I want to use the user's username as the name of the folder to which that particular user's files will be uploaded. How can I achieve this?</p>
<p>I tried creating a getter function but that is not working.</p>
<p>Files Model:</p>
<pre><code>class Files(models.Model):
name = models.CharField(max_length=50)
user = models.ForeignKey(to='User')
file = models.FileField(upload_to=get_media_path())
created = models.DateTimeField(auto_now_add=True)
def get_media_path(self)->str:
return self.user.username
</code></pre>
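Django documents <code>upload_to</code> as also accepting a module-level callable that receives <code>(instance, filename)</code>. A minimal sketch of that callable alone, exercised here with a stand-in object since the surrounding model is elided:

```python
from types import SimpleNamespace


def user_directory_path(instance, filename):
    # Django invokes upload_to callables with the model instance and
    # the original filename; instance.user is the related User here
    return f"{instance.user.username}/{filename}"


# stand-in for a Files instance with a related user, for illustration only
fake = SimpleNamespace(user=SimpleNamespace(username="alice"))
print(user_directory_path(fake, "report.pdf"))  # alice/report.pdf
```

In the model this would be wired up as <code>file = models.FileField(upload_to=user_directory_path)</code> — note there are no parentheses: the callable itself is passed, not its result.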
|
<python><django-models><foreign-keys><django-file-upload>
|
2024-08-19 04:52:10
| 1
| 799
|
Khushal
|
78,886,217
| 11,716,727
|
Facing a problem with running a Federated Learning code
|
<p>I am trying to run the code in the following link <a href="https://github.com/shaoxiongji/federated-learning" rel="nofollow noreferrer">shaoxiongji/federated-learning</a> using the following steps:</p>
<p><strong>1- Clone the Repository:</strong></p>
<pre><code>git clone https://github.com/shaoxiongji/federated-learning.git
</code></pre>
<p><strong>2- Navigate to the Repository Directory:</strong></p>
<pre><code>cd federated-learning
</code></pre>
<p><strong>3- Install Dependencies:</strong></p>
<pre><code>pip install -r requirements.txt
</code></pre>
<p><strong>4- Run the Code:</strong></p>
<pre><code>python main_fed.py
</code></pre>
<p>but I am facing a problem:</p>
<pre><code>Downloading http://yann.lecun.com/exdb/mnist/train-images-idx3-ubyte.gz
Traceback (most recent call last):
File "C:\Users\raineen\federated-learning\main_fed.py", line 29, in <module>
dataset_train = datasets.MNIST('../data/mnist/', train=True, download=True, transform=trans_mnist)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\raineen\AppData\Local\Programs\Python\Python311\Lib\site-packages\torchvision\datasets\mnist.py", line 46, in __init__
self.download()
File "C:\Users\raineen\AppData\Local\Programs\Python\Python311\Lib\site-packages\torchvision\datasets\mnist.py", line 114, in download
data = urllib.request.urlopen(url)
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\raineen\AppData\Local\Programs\Python\Python311\Lib\urllib\request.py", line 216, in urlopen
return opener.open(url, data, timeout)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\raineen\AppData\Local\Programs\Python\Python311\Lib\urllib\request.py", line 525, in open
response = meth(req, response)
^^^^^^^^^^^^^^^^^^^
File "C:\Users\raineen\AppData\Local\Programs\Python\Python311\Lib\urllib\request.py", line 634, in http_response
response = self.parent.error(
^^^^^^^^^^^^^^^^^^
File "C:\Users\raineen\AppData\Local\Programs\Python\Python311\Lib\urllib\request.py", line 557, in error
result = self._call_chain(*args)
^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\raineen\AppData\Local\Programs\Python\Python311\Lib\urllib\request.py", line 496, in _call_chain
result = func(*args)
^^^^^^^^^^^
File "C:\Users\raineen\AppData\Local\Programs\Python\Python311\Lib\urllib\request.py", line 749, in http_error_302
return self.parent.open(new, timeout=req.timeout)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\raineen\AppData\Local\Programs\Python\Python311\Lib\urllib\request.py", line 525, in open
response = meth(req, response)
^^^^^^^^^^^^^^^^^^^
File "C:\Users\raineen\AppData\Local\Programs\Python\Python311\Lib\urllib\request.py", line 634, in http_response
response = self.parent.error(
^^^^^^^^^^^^^^^^^^
File "C:\Users\raineen\AppData\Local\Programs\Python\Python311\Lib\urllib\request.py", line 563, in error
return self._call_chain(*args)
^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\raineen\AppData\Local\Programs\Python\Python311\Lib\urllib\request.py", line 496, in _call_chain
result = func(*args)
^^^^^^^^^^^
File "C:\Users\raineen\AppData\Local\Programs\Python\Python311\Lib\urllib\request.py", line 643, in http_error_default
raise HTTPError(req.full_url, code, msg, hdrs, fp)
urllib.error.HTTPError: HTTP Error 403: Forbidden
</code></pre>
<p>Any assistance, please? I would like to reproduce the two figures of results, such as the training loss.</p>
|
<python><federated-learning>
|
2024-08-19 03:45:06
| 0
| 709
|
SH_IQ
|
78,886,125
| 7,483,211
|
VSCode Python extension loading forever, saying "Reactivating terminals"
|
<p>After updating VS Code to v1.92, the Python extension consistently fails to launch, indefinitely showing a spinner next to "Reactivating terminals…" on the status bar.</p>
<p>Selecting <code>OUTPUT > Python</code> reveals the error <code>Failed to resolve env "/mnt/data-linux/miniconda3"</code>.</p>
<p>Here's the error trace:</p>
<pre><code>2024-08-07 18:35:35.873 [error] sendStartupTelemetry() failed. s [Error]: Failed to resolve env "/mnt/data-linux/miniconda3"
at ae (/home/user/.vscode-insiders/extensions/ms-python.python-2024.12.2-linux-x64/out/client/extension.js:2:1968174)
at oe (/home/user/.vscode-insiders/extensions/ms-python.python-2024.12.2-linux-x64/out/client/extension.js:2:1966134)
at Immediate.<anonymous> (/home/user/.vscode-insiders/extensions/ms-python.python-2024.12.2-linux-x64/out/client/extension.js:2:1962428)
at processImmediate (node:internal/timers:478:21) {
code: -4,
data: undefined
}
</code></pre>
<p>How do I fix this? Restarting worked, but that's not sustainable.</p>
|
<python><visual-studio-code>
|
2024-08-19 02:31:16
| 11
| 10,272
|
Cornelius Roemer
|
78,886,121
| 10,054,520
|
Using MIME to send myself a text and last two characters of body are missing
|
<p>I have a job set up to run daily that will send me a text reminder. The reminder is sent to me successfully, but no matter what I use for my message, the last two characters are cut off.</p>
<pre class="lang-py prettyprint-override"><code>import smtplib
from email.mime.text import MIMEText
from email.mime.multipart import MIMEMultipart
from dotenv import dotenv_values
import os
secrets = dotenv_values()
email = secrets['EMAIL']
password = secrets['EMAIL_PASSWORD']
recipient = secrets['RECIPIENT']
auth = (email, password)
msg = MIMEMultipart()
msg["Subject"] = "Chicken Alarm"
msg["From"] = email
msg["To"] = recipient
message = "It's chicken time"
msg.attach(MIMEText(message, 'plain'))
server = smtplib.SMTP("smtp.gmail.com", 587)
server.starttls()
server.login(auth[0], auth[1])
text = msg.as_string()
server.sendmail(auth[0], recipient, text)
server.quit()
</code></pre>
<p>What's going on that would be causing the last two characters to get cut off?</p>
|
<python><python-3.x><email><text>
|
2024-08-19 02:29:56
| 0
| 337
|
MyNameHere
|
78,886,008
| 20,591,261
|
Handling Multiple Operations on DataFrame Columns with Polars
|
<p>I'm trying to select all columns of a DataFrame and perform multiple operations on each column using Polars. For example, I discovered that I can use the following code to count non-null values in each column:</p>
<pre><code>df.select(pl.col("*").is_not_null().sum()
</code></pre>
<p>However, when I attempt to concatenate multiple operations like this:</p>
<pre><code>(
df
.select(
pl.col("*").is_not_null().sum().alias("foo"),
pl.col("*").is_null().sum().alias("bar")
)
)
</code></pre>
<p>I encounter a <code>DuplicateError</code>. This seems to happen because Polars tries to perform the operations but ends up using the same output column names, which causes the duplication issue.</p>
<p>To work around this, I'm currently using the following approach:</p>
<pre><code>a = (
df
.select(
pl.col("*").is_null().sum(),
)
.transpose(include_header=True)
.rename(
{"column_0" : "null_count"}
)
)
b = (
df
.select(
pl.col("*").is_not_null().sum(),
)
.transpose(include_header=True)
.rename(
{"column_0" : "not_null_count"}
)
)
a.join(b, how="left", on="column")
</code></pre>
<p>My goal is to generate an output that looks like this:</p>
<pre><code>shape: (8, 3)
┌─────────────┬────────────┬────────────────┐
│ column      ┆ null_count ┆ not_null_count │
│ ---         ┆ ---        ┆ ---            │
│ str         ┆ u32        ┆ u32            │
╞═════════════╪════════════╪════════════════╡
│ InvoiceNo   ┆ 0          ┆ 541909         │
│ StockCode   ┆ 0          ┆ 541909         │
│ Description ┆ 1454       ┆ 540455         │
│ Quantity    ┆ 0          ┆ 541909         │
│ InvoiceDate ┆ 0          ┆ 541909         │
│ UnitPrice   ┆ 0          ┆ 541909         │
│ CustomerID  ┆ 135080     ┆ 406829         │
│ Country     ┆ 0          ┆ 541909         │
└─────────────┴────────────┴────────────────┘
</code></pre>
|
<python><dataframe><python-polars><unpivot>
|
2024-08-19 01:02:05
| 2
| 1,195
|
Simon
|
78,885,709
| 342,095
|
Connect to Google Photos API from Google Colab
|
<p>I made a Google Cloud project and enabled the Google Photos API and OAuth 2.0.</p>
<p>In Google Colab, the <code>authenticate()</code> call brings up a popup asking for Google Drive permissions. If I provide client_secret.json instead, I get other kinds of errors, including an inability to start a server, plus scope and other permission issues.</p>
<p>I will be using this to download large videos from Google Photos to Colab environment, compress them using ffmpeg and upload them back.</p>
<p>So far I am failing at the initial steps of connecting to Photos API.</p>
<p>What is the correct way to connect to Photos via API using Google colab?</p>
|
<python><google-colaboratory><google-photos>
|
2024-08-18 21:28:06
| 2
| 7,920
|
SMUsamaShah
|
78,885,619
| 10,416,012
|
How to transform or unpack *args types for typing variadics
|
<p>I want to annotate a function that combines several iterators or generators into one; the case for two iterators is trivial:</p>
<pre><code>def merge_two_iterators[T, H](
gen1: Iterator[T],
gen2: Iterator[H]
) -> Iterator[tuple[T, H]]: ...
</code></pre>
<p>But I can't make the arbitrary number of iterators work.</p>
<pre><code>def merge_iterators[*Ts](
*its: *Ts
) -> Iterator[tuple[*Ts]]: ... # wrong
</code></pre>
<p>How can I unpack the list of iterators into a tuple? Unpack is not doing the trick.</p>
<p>The result should be something like this:</p>
<pre><code>a: list[int] = [1, 2]
b: list[str] = ['a', 'b']
c = merge_iterators(a, b)
# annotated as -> Iterator[tuple[int, str]] or list[tuple[int, str]]
</code></pre>
<p>Solutions for older Python versions (<3.11) are accepted. Solutions that don't work in a specific typing tool such as mypy, but comply with the specs and preferably work in at least one tool, are accepted as well.</p>
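There is currently no way to write a "map <code>Iterator</code> over <code>Ts</code>" transform with <code>TypeVarTuple</code> (that would require a type-level map operator, which the typing spec does not have), so one commonly used workaround is a set of fixed-arity <code>@overload</code>s over a single runtime implementation — a sketch, not the variadic solution asked for:

```python
from collections.abc import Iterable, Iterator
from typing import TypeVar, overload

T = TypeVar("T")
H = TypeVar("H")
U = TypeVar("U")


@overload
def merge_iterators(i1: Iterable[T], i2: Iterable[H]) -> Iterator[tuple[T, H]]: ...
@overload
def merge_iterators(
    i1: Iterable[T], i2: Iterable[H], i3: Iterable[U]
) -> Iterator[tuple[T, H, U]]: ...
def merge_iterators(*its):
    # single runtime implementation behind the typed overloads
    return zip(*its)


print(list(merge_iterators([1, 2], ["a", "b"])))  # [(1, 'a'), (2, 'b')]
```

The obvious cost is that each arity must be spelled out by hand; this is the same approach typeshed uses to annotate <code>zip</code> itself.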
|
<python><generics><python-typing><variadic-functions>
|
2024-08-18 20:38:07
| 1
| 2,235
|
Ziur Olpa
|
78,885,549
| 7,846,884
|
snakemake InputFunctionException in rule KeyError
|
<p>I'm running a snakemake pipeline but I get a KeyError when I added the longphase_phase rule, and I can't figure out how to debug it. The longphase_phase rule uses as input: a) each bam file from the samples in the yaml file, b) the modcallfile vcf file from <code>rule longphase_modcall</code>, and c) the snpFile from <code>rule call_snps_indels</code>.</p>
<pre><code>Building DAG of jobs...
InputFunctionException in rule call_snps_indels in file /data1/greenbab/users/ahunos/apps/workflows/phased_modifications/phase_5mC.smk, line 34:
Error:
KeyError: 'D-0-2_5000/D-0-2_5000'
Wildcards:
samples=D-0-2_5000/D-0-2_5000
Traceback:
File "/data1/greenbab/users/ahunos/apps/workflows/phased_modifications/phase_5mC.smk", line 36, in <lambda> (rule call_snps_indels, line 45, /data1/greenbab/users/ahunos/apps/workflows/phased_modifications/phase_5mC.smk)
</code></pre>
<p>Purpose of the script: phasing of SNPs and 5mC.</p>
<p>Importing the config files:</p>
<pre><code>parent_dir = "/data1/greenbab/users/ahunos/apps/workflows/phased_modifications/"
set_species = "mouse"
configfile: parent_dir + "config/config.yaml"
configfile: parent_dir + "config/samples_5mC_5000sampleRate.yaml"
</code></pre>
<p>Please see the rules:</p>
<pre><code>rule all:
input:
expand('results/call_snps_indels/{samples}/snv.vcf.gz', samples=config["samples"]),
expand('results/call_snps_indels/{samples}/indel.vcf.gz', samples=config["samples"]),
expand('results/call_snps_indels/{samples}/done.{samples}.txt', samples=config["samples"]),
expand('results/longphase_modcall/{samples}/modcall_{samples}.vcf', samples=config["samples"]),
expand('results/longphase_modcall/{samples}/done.{samples}.txt', samples=config["samples"]),
expand('results/longphase_phase/{samples}/{samples}.vcf', samples=config["samples"]),
expand('results/longphase_phase/{samples}/{samples}_mod.vcf', samples=config["samples"]),
expand('results/longphase_phase/{samples}/done.{samples}.txt', samples=config["samples"])
rule call_snps_indels:
input:
lambda wildcards: config["samples"][wildcards.samples]
params:
reference_genome=lambda wildcards: config["mm10"] if set_species == "mouse" else config["hg38"],
software_dir=config["software_dir"],
out_dir='results/call_snps_indels/{samples}/',
threads=12,
PLATFORM='ont_r10_dorado_sup_5khz'
singularity: "/data1/greenbab/users/ahunos/apps/containers/clairs-to_latest.sif"
output:
done='results/call_snps_indels/{samples}/done.{samples}.txt',
snps='results/call_snps_indels/{samples}/snv.vcf.gz',
indels='results/call_snps_indels/{samples}/indel.vcf.gz'
log:
"logs/call_snps_indels/{samples}/{samples}.log"
shell:
"""
/opt/bin/run_clairs_to \
--tumor_bam_fn {input} \
--ref_fn {params.reference_genome} \
--threads {params.threads} \
--platform {params.PLATFORM} \
--output_dir {params.out_dir} \
--ctg_name="chr19" \
--conda_prefix /opt/micromamba/envs/clairs-to && touch {output.done} 2> {log}
"""
rule longphase_modcall:
input:
bams=lambda wildcards: config["samples"][wildcards.samples]
params:
reference_genome=lambda wildcards: config["mm10"] if set_species == "mouse" else config["hg38"],
software_dir=config["software_dir"],
lphase='/data1/greenbab/users/ahunos/apps/longphase_linux-x64',
threads=12,
out_modcall_prefix='results/longphase_modcall/{samples}/modcall_{samples}'
output:
done_modcall='results/longphase_modcall/{samples}/done.{samples}.txt',
out_modcall='results/longphase_modcall/{samples}/modcall_{samples}.vcf'
log:
"logs/longphase_modcall/{samples}/{samples}.log"
shell:
"""
/data1/greenbab/users/ahunos/apps/longphase_linux-x64 modcall -b {input.bams} -t 8 -o {output.out_modcall} -r {params.reference_genome} && touch {output.done_modcall} 2> {log}
mv results/longphase_modcall/{wildcards.samples}/modcall_{wildcards.samples}.vcf.vcf results/longphase_modcall/{wildcards.samples}/modcall_{wildcards.samples}.vcf
"""
rule longphase_phase:
input:
bamfile=lambda wildcards: config["samples"][wildcards.samples],
modcallfile='results/longphase_modcall/{samples}/modcall_{samples}.vcf',
snpFile='results/call_snps_indels/{samples}/{samples}/snv.vcf.gz'
params:
reference_genome=lambda wildcards: config["mm10"] if set_species == "mouse" else config["hg38"],
lphase='/data1/greenbab/users/ahunos/apps/longphase_linux-x64',
threads=12,
out_lphase_prefix='results/longphase_phase/{samples}/{samples}'
output:
co_phased_mod_vcf='results/longphase_phase/{samples}/{samples}_mod.vcf',
co_phased_vcf='results/longphase_phase/{samples}/{samples}.vcf',
done_longphase_phase='results/longphase_phase/{samples}/done.{samples}.txt'
log:
"logs/longphase_phase/{samples}/{samples}.log"
shell:
"""
/data1/greenbab/users/ahunos/apps/longphase_linux-x64 phase \
-s {input.snpFile} \
--mod-file {input.modcallfile} \
-b {input.bamfile} \
-r {params.reference_genome} \
-t {params.threads} \
-o {params.out_lphase_prefix} \
--ont && touch {output.done_longphase_phase}
"""
</code></pre>
<p>Please see the sample yaml:</p>
<pre><code>$ cat /config/samples_5mC_5000sampleRate.yaml
samples:
D-0-2_5000: /data1/greenbab/projects/triplicates_epigenetics_diyva/DNA/preprocessed/results/mark_duplicates/D-0-2_5000/D-0-2_5000_modBaseCalls_sorted_dup.bam
</code></pre>
|
<python><snakemake>
|
2024-08-18 20:01:12
| 1
| 473
|
sahuno
|
78,885,044
| 810,815
|
Unable to import nltk in Jupyter Notebook
|
<p>I have a Jupyter Notebook. I installed nltk using the following line of code:</p>
<pre><code>!pip install nltk
</code></pre>
<p>I get the following output:</p>
<pre><code>Requirement already satisfied: nltk in ./env/lib/python3.12/site-packages (3.9)
Requirement already satisfied: click in ./env/lib/python3.12/site-packages (from nltk) (8.1.7)
Requirement already satisfied: joblib in ./env/lib/python3.12/site-packages (from nltk) (1.4.0)
Requirement already satisfied: regex>=2021.8.3 in ./env/lib/python3.12/site-packages (from nltk) (2024.7.24)
Requirement already satisfied: tqdm in ./env/lib/python3.12/site-packages (from nltk)
</code></pre>
<p>Now when I import nltk using</p>
<pre><code>import nltk
</code></pre>
<p>I get the following error:</p>
<pre><code>LookupError Traceback (most recent call last)
File ~/Desktop/machine-learning/env/lib/python3.12/site-packages/nltk/corpus/util.py:84, in LazyCorpusLoader.__load(self)
83 try:
---> 84 root = nltk.data.find(f"{self.subdir}/{zip_name}")
85 except LookupError:
File ~/Desktop/machine-learning/env/lib/python3.12/site-packages/nltk/data.py:579, in find(resource_name, paths)
578 resource_not_found = f"\n{sep}\n{msg}\n{sep}\n"
--> 579 raise LookupError(resource_not_found)
LookupError:
**********************************************************************
Resource wordnet not found.
Please use the NLTK Downloader to obtain the resource:
</code></pre>
<p>What am I missing?</p>
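The traceback says the <code>wordnet</code> corpus data is missing, not the nltk package itself — the import succeeds, but the corpus is loaded lazily. A likely fix (assuming a normal NLTK setup with network access; not verified against this environment) is to download the missing resource once:

```python
import nltk

# the package is installed, but the corpus data it lazily loads is not;
# this fetches the "wordnet" resource into nltk's data directory
nltk.download("wordnet")
```

After the one-time download, subsequent imports and wordnet lookups should work without re-downloading.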
|
<python><machine-learning><jupyter-notebook><nltk>
|
2024-08-18 15:53:05
| 1
| 9,764
|
john doe
|
78,884,897
| 16,869,946
|
Python function with variable number of arguments *arg
|
<p>I have a function in Python that takes the form <code>f(t1, t2, t3, *theta)</code>, and I would like to define a new function <code>g(t1, t3, *theta)</code> by</p>
<p><a href="https://i.sstatic.net/Lqz3xLdr.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Lqz3xLdr.png" alt="enter image description here" /></a></p>
<p>So for example, g(1,2,3,4,5) = f(1,2,3,4,5) + f(1,4,3,2,5) + f(1,5,3,4,2)</p>
<p>Here is what I have tried:</p>
<pre><code>def g(t1, t3, *theta):
S = 0
list_i = list(theta)
for ti in theta:
list_i.remove(ti)
tuple_i = tuple(list_i)
S += f(t1, ti, t3, tuple_i)
return S
</code></pre>
<p>but it gives the error <code>TypeError: unsupported operand type(s) for -: 'float' and 'tuple'</code></p>
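For reference, a sketch of one way the summation could be written — the key fix is unpacking the remaining parameters with <code>*</code> instead of passing the tuple as a single argument. The placeholder <code>f</code> below, which just sums its arguments, is only an assumption for illustration:

```python
def f(t1, t2, t3, *theta):
    # placeholder f for illustration: sums all of its arguments
    return t1 + t2 + t3 + sum(theta)

def g(t1, t3, *theta):
    total = 0
    for i, ti in enumerate(theta):
        # the remaining parameters with the i-th one removed
        rest = theta[:i] + theta[i + 1:]
        total += f(t1, ti, t3, *rest)  # note the *: unpack, don't pass a tuple
    return total

print(g(1, 2, 3, 4, 5))  # f(1,3,2,4,5) + f(1,4,2,3,5) + f(1,5,2,3,4) -> 45
```

Slicing a fresh <code>rest</code> each iteration also avoids the pitfall in the original loop, where <code>list_i.remove(ti)</code> mutates the same list across iterations (and removes by value, which misbehaves with duplicates).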
|
<python><list><function><tuples>
|
2024-08-18 14:53:47
| 2
| 592
|
Ishigami
|
78,884,856
| 2,276,054
|
CP-SAT | OR-Tools: at most one of two different arrays (OR-ed). How to declare constraint?
|
<p>I have two arrays of model variables - <code>arr1</code> and <code>arr2</code>. I would like to declare a constraint that would <strong>forbid</strong> a situation when there is at least 1 <code>true</code> in <code>arr1</code> <strong>AND</strong> at least 1 <code>true</code> in <code>arr2</code>. (There can be multiple <code>true</code>'s in <code>arr1</code>, as long as <code>arr2</code> has only <code>false</code>'s - and vice versa. It is also acceptable if there is no <code>true</code> in neither array).</p>
<p>So, something like that:</p>
<pre><code>arr1 = [model.new_bool_var("a"), model.new_bool_var("b"), ...]
arr2 = [model.new_bool_var("q"), model.new_bool_var("w"), ...]
model.add_at_most_one(bool_or(arr1), bool_or(arr2))
</code></pre>
<p>However, such syntax doesn't exist - there is only an <code>add_bool_or()</code> method, not a generic <code>bool_or()</code> that could serve to build inner clauses for more complex constraints.</p>
<p>Therefore, I thought about taking the following approach:</p>
<pre><code>sum_arr1 = model.new_int_var(0, len(arr1), "sum_arr1")
sum_arr2 = model.new_int_var(0, len(arr2), "sum_arr2")
model.add(sum_arr1 == sum(arr1))
model.add(sum_arr2 == sum(arr2))
arr1_at_least_one = model.new_bool_var("arr1_at_least_one")
arr2_at_least_one = model.new_bool_var("arr2_at_least_one")
model.add(sum_arr1 > 0).only_enforce_if(arr1_at_least_one) # channeling constraints
model.add(sum_arr1 == 0).only_enforce_if(~arr1_at_least_one)
model.add(sum_arr2 > 0).only_enforce_if(arr2_at_least_one)
model.add(sum_arr2 == 0).only_enforce_if(~arr2_at_least_one)
model.add_at_most_one(arr1_at_least_one, arr2_at_least_one)
</code></pre>
<p>This seems to be working (I initially had a silly typo in my code). Still, I am wondering if it is possible to declare the same constraint in a simpler way, in particular - without using integer (non-boolean) variables...?</p>
<p>Could you help me declare such a constraint?
In other words, it is <code>NAND(OR(arr1[0], arr1[1], arr1[2], ...), OR(arr2[0], arr2[1], arr2[2], ...))</code></p>
<p>Perhaps something like this will do?</p>
<pre><code>arr1_at_least_one = model.new_bool_var("arr1_at_least_one")
arr2_at_least_one = model.new_bool_var("arr2_at_least_one")
for i in arr1:
model.add_implication(i, arr1_at_least_one)
model.add_at_least_one(arr1).only_enforce_if(arr1_at_least_one)
for j in arr2:
model.add_implication(j, arr2_at_least_one)
model.add_at_least_one(arr2).only_enforce_if(arr2_at_least_one)
model.add_at_most_one(arr1_at_least_one, arr2_at_least_one)
</code></pre>
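One alternative worth considering (an assumption on my part, not benchmarked at scale): <code>NAND(OR(arr1), OR(arr2))</code> is logically equivalent to forbidding every cross pair, i.e. one clause <code>model.add_bool_or([a.Not(), b.Not()])</code> for each <code>a</code> in <code>arr1</code> and <code>b</code> in <code>arr2</code>, with no auxiliary variables at all. The Boolean equivalence can be brute-force checked in plain Python:

```python
from itertools import product

def nand_or_or(arr1, arr2):
    # the target constraint: NOT(OR(arr1) AND OR(arr2))
    return not (any(arr1) and any(arr2))

def pairwise_encoding(arr1, arr2):
    # one clause per cross pair: (~a OR ~b) for every a in arr1, b in arr2
    return all((not a) or (not b) for a in arr1 for b in arr2)

# exhaustively verify the two formulations agree on small arrays
for n1, n2 in [(1, 1), (2, 2), (3, 2)]:
    for bits in product([False, True], repeat=n1 + n2):
        a1, a2 = list(bits[:n1]), list(bits[n1:])
        assert nand_or_or(a1, a2) == pairwise_encoding(a1, a2)
print("equivalent on all tested assignments")
```

The trade-off is clause count: the pairwise form adds <code>len(arr1) * len(arr2)</code> clauses, so for very large arrays the two-indicator encoding in the question may still be preferable.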
|
<python><or-tools><cp-sat>
|
2024-08-18 14:32:55
| 2
| 681
|
Leszek Pachura
|
78,884,763
| 2,955,541
|
Repeated Permutation of Powers
|
<p>Let's say I have two radix/base values <code>x</code> and <code>y</code> (both are floats, not ints) and I need to quickly generate all combinations of:</p>
<p>[x<sup>n</sup>y<sup>0</sup>, x<sup>n-1</sup>y, x<sup>n-2</sup>y<sup>2</sup>, ..., x<sup>2</sup>y<sup>n-2</sup>, x<sup>1</sup>y<sup>n-1</sup>, x<sup>0</sup>y<sup>n</sup>]</p>
<p>where <code>n</code> is some integer between [50, 100]. Naively, I can do:</p>
<pre><code>import math
values = []
for i in range(n+1):
j = n - i
values.append(math.pow(x, j) * math.pow(y, i))
</code></pre>
<p>However, I need to repeat this for over 400 million different combinations of <code>(x, y, n)</code>. What is the fastest/most efficient way to go about this? Currently, I believe that this is <code>O(m x n)</code> where <code>m</code> is the total number of <code>(x, y, n)</code> tuples.</p>
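One direction worth trying (a sketch, not benchmarked against your data): build the powers incrementally, so each of the <code>n + 1</code> values costs a single multiplication instead of two <code>math.pow</code> calls. The same idea vectorizes in NumPy with <code>np.cumprod</code> if you want to batch across tuples:

```python
def power_mix(x, y, n):
    # xs[k] = x**k and ys[k] = y**k, built with one multiply per step
    xs = [1.0]
    ys = [1.0]
    for _ in range(n):
        xs.append(xs[-1] * x)
        ys.append(ys[-1] * y)
    # pair them in opposite order: x**(n-i) * y**i for i = 0..n
    return [xs[n - i] * ys[i] for i in range(n + 1)]
```

Repeated multiplication can drift slightly from <code>math.pow</code> for large exponents, so it is worth confirming the accumulated floating-point error is acceptable for your 400 million combinations.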
|
<python><algorithm><numpy>
|
2024-08-18 13:47:18
| 3
| 6,989
|
slaw
|
78,884,741
| 17,860,557
|
cannot create python virtual environment with Ubuntu on windows partitions
|
<p>I have Pop!_OS dual-booted with Windows. I can create Python virtual environments without any problems anywhere in my Linux file system. But when I try to create one (with <code>python -m venv venv</code>) on a Windows partition (not drive C:, any other) which is mounted and then used on Linux, I get this error:</p>
<pre><code>Error: Command '['/media/parsa/D2942F56942F3BFB/.../venv/bin/python3', '-m', 'ensurepip', '--upgrade', '--default-pip']' returned non-zero exit status 1.
</code></pre>
<p>Using <code>virtualenv --python=/usr/bin/python3 venv</code> will also cause this error:</p>
<pre><code>OSError: [Errno 5] Input/output error: '/usr/bin/python3' -> '/media/parsa/D2942F56942F3BFB/.../venv/bin/python'
</code></pre>
<p>I didn't have this problem before and I don't know why it's happening right now.</p>
|
<python><virtualenv><virtual-environment>
|
2024-08-18 13:35:41
| 0
| 349
|
ParsaAi
|
78,884,667
| 11,565,514
|
Send variable to Django crispy forms
|
<p>I have 2 HTML pages <code>choose_type.html</code> and <code>post_form.html</code>. On <code>choose_type.html</code>:</p>
<pre><code>{% extends "blog/base.html" %}
{% load pagination_extras %}
{% block content %}
<body>
<div class="row">
<div class="col-sm-6 mb-3 mb-sm-0">
<div class="card">
<img src="/media/1.png" class="card-img-top m-auto" alt="..." style="width: 10rem;">
<div class="card-body">
<h5 class="card-title">Type 1</h5>
<button class="btn btn-primary" name="b1" value="1" id="btn" onclick="location.href='{% url 'post-create' %}'" >Report</button>
</div>
</div>
</div>
<div class="col-sm-6">
<div class="card">
<img src="/media/2.png" class="card-img-top m-auto" alt="..." style="width: 10rem;">
<div class="card-body">
<h5 class="card-title">Type 2</h5>
<button class="btn btn-primary" name="b1" value="2" id="btn" onclick="location.href='{% url 'post-create' %}'" >Report</button>
</div>
</div>
</div>
</div>
</code></pre>
<p>I have 4 buttons (with descriptions, images, etc.) which have fixed values, and all buttons have the same name. Clicking any of them navigates to the same form, handled by <code>views.py</code>:</p>
<pre><code>@login_required
def PostCreate(request):
if request.method == 'POST':
form = PostFileForm(request.POST or None, request.FILES or None)
files = request.FILES.getlist('file')
if form.is_valid():
post = form.save(commit=False)
post.author = request.user
form.instance.author = request.user
etc.
else:
form = PostFileForm()
return render(request, 'post_form.html', {'form': form})
</code></pre>
<p>and <code>post_form.html</code>:</p>
<pre><code>{% extends "blog/base.html" %}
{% load crispy_forms_tags %}
{% block content %}
<div class="content-section">
<button class="btn btn-outline-info" id="audit" onclick="audit()">Basic</button>
<form method="POST" id="PostForm" data-sektor-url="{% url 'ajax_load_sektors' %}" data-department-url="{% url 'ajax_load_departments' %}" data-person-url="{% url 'ajax_load_persons' %}" novalidate enctype="multipart/form-data">
{% csrf_token %}
<fieldset class="form-group">
<legend class="border-bottom mb-4" style="color:red;">Order</legend>
{{ form|crispy }}
</fieldset>
<div class="form-group">
<button class="btn btn-outline-info" id="submit" type="submit">Submit</button>
</div>
</form>
</code></pre>
<p>Depending on the 4 buttons which user can click to call the form, that form should have some fields hidden/shown or autofilled. I hide them using Javascript, for example:</p>
<pre><code><script type="text/javascript">
function on_load(){
document.getElementById('hidden_on_open').style.display = 'none';
</code></pre>
<p>My problem is, I don't know which button was pressed. Is there a way to 'pass a static variable' or something to <code>post_form.html</code> from <code>choose_type.html</code>, so that I can use JavaScript to hide/show and autofill certain fields depending on that variable? A simple 0/1/2/3 would do the trick. I'm trying to use the same form for multiple purposes. I don't need models; I just need this one value for form styling.</p>
<p><code>url.py</code>:</p>
<pre><code>urlpatterns = [
path('choose_type/', views.choose_type, name='blog-choose_type'),
path('info/nova/', views.PostCreate, name='post-create'),
]
</code></pre>
<p>So far I have tried:</p>
<p>To get data from <code>choose_type.html</code> (as suggested below), I have edited <code>views.py</code> to read the 'b1' value, but the value is always None:</p>
<pre><code>@login_required
def PostCreate(request):
x = request.POST.get('b1')
if request.method == 'POST':
form = PostFileForm(request.POST or None, request.FILES or None)
files = request.FILES.getlist('file')
if form.is_valid():
post = form.save(commit=False)
post.author = request.user
etc.
else:
form = PostFileForm()
return render(request, 'post_form.html', {'form': form})
</code></pre>
<p>Can I add a fixed variable to the link, e.g. <code>def PostCreate(request, x):</code>, and have each button use a different URL, such as <code>onclick="location.href='{% url 'post-create' 2 %}'"</code>? But I don't know how to implement this.</p>
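One way to carry the 0/1/2/3 value is as a URL parameter — a sketch only, untested against this project, and the parameter name <code>form_type</code> is hypothetical:

```python
# urls.py (sketch): capture the button value as part of the URL
from django.urls import path
from . import views

urlpatterns = [
    path('choose_type/', views.choose_type, name='blog-choose_type'),
    path('info/nova/<int:form_type>/', views.PostCreate, name='post-create'),
]

# views.py (sketch): receive it and pass it into the template context
def PostCreate(request, form_type):
    ...
    return render(request, 'post_form.html', {'form': form, 'form_type': form_type})
```

Each button would then link with <code>{% url 'post-create' 2 %}</code> (and so on), and <code>post_form.html</code> can read <code>{{ form_type }}</code> into a JavaScript variable to drive the hide/show logic.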
|
<python><html><django><django-crispy-forms>
|
2024-08-18 13:07:47
| 2
| 373
|
MarkoZg
|
78,884,650
| 2,444,661
|
Receiving streaming server response chunk by chunk
|
<p>I have a rest api which streams data in chunks. Each chunk is a list of 100 records.</p>
<p>In JavaScript I use the Fetch API and it receives one chunk at a time.</p>
<p>But I am not able to get the same behaviour in Python. I have the following code:</p>
<pre><code>response = requests.get(url, stream=True)
response.raise_for_status()
response_text = response.text
print(response_text)
</code></pre>
<p>I see that response_text contains multiple JSON lists, while I expect only one list at a time.</p>
<p>What mistake am I making here?</p>
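<code>response.text</code> waits for and buffers the entire body, which is why all the lists arrive at once despite <code>stream=True</code>. To consume incrementally you would iterate instead, e.g. <code>for line in response.iter_lines(): ...</code> — assuming the server emits one JSON list per line, which the JavaScript behaviour suggests. The reassembly that requests does internally can be sketched without a network:

```python
import json

def iter_json_lines(chunks):
    # reassemble newline-delimited JSON records from arbitrary byte chunks,
    # mimicking what requests' iter_lines() does over a streamed response
    buf = b""
    for chunk in chunks:
        buf += chunk
        while b"\n" in buf:
            line, buf = buf.split(b"\n", 1)
            if line.strip():
                yield json.loads(line)
    if buf.strip():
        yield json.loads(buf)

# simulated stream where record boundaries don't line up with chunk boundaries
chunks = [b'[{"id": 1}]\n[{"i', b'd": 2}]\n[{"id": 3}]']
print(list(iter_json_lines(chunks)))  # -> [[{'id': 1}], [{'id': 2}], [{'id': 3}]]
```

The same generator applied to <code>response.iter_content()</code> would yield one complete 100-record list at a time, matching the Fetch API behaviour.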
|
<python>
|
2024-08-18 12:59:05
| 2
| 7,726
|
Mandroid
|
78,884,633
| 8,935,725
|
Django ORM efficient way for annotating True on first occurrences of field
|
<p>I have a situation where we have some logic that sorts a patient_journey queryset (table) by some heuristics. A patient_journey has a FK to a patient.</p>
<p>I now need an efficient way of setting True for the first occurrence of a <code>patient_journey</code> for a given <code>patient</code>, false otherwise.</p>
<p>The first heuristic is <code>patient_id</code>, so the queryset will already be grouped by patient.</p>
<p>The algorithm is very straightforward and should be fast, yet I'm stuck trying to get something under 1s.</p>
<p>I've tried using <code>distinct</code> and checking for existence, however that's adding 1-2s.</p>
<p>I've tried using a subquery with [:1] & test on id, but that's even worse around 3-5s extra.</p>
<p><strong>Currently most efficient approach:</strong></p>
<ul>
<li>cost of <code>get_sorted_for_primary_journey_qs</code> is minimal, the distinct & exists is expensive, all I want is the first occurrence of <code>patient_id</code> on <code>patient_journey</code> to be marked as <code>primary=True</code></li>
</ul>
<pre><code>def annotate_primary(
*, qs: QuerySet['PatientJourney'] # noqa
) -> QuerySet['PatientJourney']: # noqa
"""
Constraints:
--------------
Exactly One non-global primary journey per patient
Annotations:
------------
is_primary_:Bool, is the primary_journey for a patient
"""
from patient_journey.models import PatientJourney
sorted_pjs = get_sorted_for_primary_journey_qs(qs=qs)
sorted_pjs = sorted_pjs.distinct('patient_id').values('id')
pjs = PatientJourney.objects.all().filter(id__in=sorted_pjs)
qs = qs.annotate(
primary=dm.Exists(
pjs.filter(
id=dm.OuterRef('id'),
)
)
)
return qs
</code></pre>
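One approach worth measuring (a sketch, untested against this schema): a single window-function pass instead of the distinct-plus-<code>Exists</code> round trip. The <code>order_by</code> below is a placeholder assumption for the real heuristic ordering:

```python
from django.db.models import F, Window
from django.db.models.functions import RowNumber

qs = qs.annotate(
    rn=Window(
        expression=RowNumber(),
        partition_by=[F('patient_id')],
        order_by=F('id').asc(),  # placeholder: substitute the real heuristic ordering
    )
)
# rn == 1 identifies the first journey per patient in one scan,
# avoiding a correlated subquery per row
```

Whether you can filter or build a boolean annotation directly on a window expression depends on the Django and PostgreSQL versions in use, so check that against your stack before committing to it.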
|
<python><django><postgresql><django-orm>
|
2024-08-18 12:50:03
| 1
| 755
|
Octavio del Ser
|
78,884,572
| 1,339,626
|
Installing Python package from local directory error
|
<p>When installing a Python package a colleague sent as a directory (with <code>setup.py</code>), I'm getting:</p>
<pre class="lang-bash prettyprint-override"><code>LookupError: setuptools-scm was unable to detect version for '/home/michael/Downloads/inversion/s2shores'.
Make sure you're either building from a fully intact git repository or PyPI tarballs. Most other sources (such as GitHub's tarballs, a git checkout without the .git folder) don't contain the necessary metadata and will not work.
</code></pre>
<p>Here is the full printout:</p>
<pre class="lang-bash prettyprint-override"><code>michael@michael-desktop:~/Downloads/inversion$ pip install -e s2shores
Defaulting to user installation because normal site-packages is not writeable
Obtaining file:///home/michael/Downloads/inversion/s2shores
Preparing metadata (setup.py) ... error
error: subprocess-exited-with-error
Γ python setup.py egg_info did not run successfully.
β exit code: 1
β°β> [40 lines of output]
/usr/lib/python3/dist-packages/setuptools/dist.py:723: UserWarning: Usage of dash-separated 'author-email' will not be supported in future versions. Please use the underscore name 'author_email' instead
warnings.warn(
/usr/lib/python3/dist-packages/setuptools/dist.py:723: UserWarning: Usage of dash-separated 'long-description' will not be supported in future versions. Please use the underscore name 'long_description' instead
warnings.warn(
/usr/lib/python3/dist-packages/setuptools/dist.py:723: UserWarning: Usage of dash-separated 'long-description-content-type' will not be supported in future versions. Please use the underscore name 'long_description_content_type' instead
warnings.warn(
/usr/lib/python3/dist-packages/setuptools/dist.py:723: UserWarning: Usage of dash-separated 'project-urls' will not be supported in future versions. Please use the underscore name 'project_urls' instead
warnings.warn(
/usr/lib/python3/dist-packages/setuptools/installer.py:27: SetuptoolsDeprecationWarning: setuptools.installer is deprecated. Requirements should be satisfied by a PEP 517 installer.
warnings.warn(
Traceback (most recent call last):
File "<string>", line 2, in <module>
File "<pip-setuptools-caller>", line 34, in <module>
File "/home/michael/Downloads/inversion/s2shores/setup.py", line 22, in <module>
setup(use_pyscaffold=True)
File "/usr/lib/python3/dist-packages/setuptools/__init__.py", line 153, in setup
return distutils.core.setup(**attrs)
File "/usr/lib/python3.10/distutils/core.py", line 108, in setup
_setup_distribution = dist = klass(attrs)
File "/usr/lib/python3/dist-packages/setuptools/dist.py", line 459, in __init__
_Distribution.__init__(
File "/usr/lib/python3.10/distutils/dist.py", line 292, in __init__
self.finalize_options()
File "/usr/lib/python3/dist-packages/setuptools/dist.py", line 837, in finalize_options
ep(self)
File "/usr/lib/python3/dist-packages/setuptools/dist.py", line 858, in _finalize_setup_keywords
ep.load()(self, ep.name, value)
File "/home/michael/Downloads/inversion/s2shores/.eggs/PyScaffold-3.3.1-py3.10.egg/pyscaffold/integration.py", line 140, in pyscaffold_keyword
version_keyword(dist, keyword, setuptools_scm_config(value))
File "/home/michael/Downloads/inversion/s2shores/.eggs/PyScaffold-3.3.1-py3.10.egg/pyscaffold/contrib/setuptools_scm/integration.py", line 17, in version_keyword
dist.metadata.version = _get_version(config)
File "/home/michael/Downloads/inversion/s2shores/.eggs/PyScaffold-3.3.1-py3.10.egg/pyscaffold/contrib/setuptools_scm/__init__.py", line 148, in _get_version
parsed_version = _do_parse(config)
File "/home/michael/Downloads/inversion/s2shores/.eggs/PyScaffold-3.3.1-py3.10.egg/pyscaffold/contrib/setuptools_scm/__init__.py", line 110, in _do_parse
raise LookupError(
LookupError: setuptools-scm was unable to detect version for '/home/michael/Downloads/inversion/s2shores'.
Make sure you're either building from a fully intact git repository or PyPI tarballs. Most other sources (such as GitHub's tarballs, a git checkout without the .git folder) don't contain the necessary metadata and will not work.
For example, if you're using pip, instead of https://github.com/user/proj/archive/master.zip use git+https://github.com/user/proj.git#egg=proj
[end of output]
note: This error originates from a subprocess, and is likely not a problem with pip.
error: metadata-generation-failed
Γ Encountered error while generating package metadata.
β°β> See above for output.
note: This is an issue with the package mentioned above, not pip.
hint: See above for details.
</code></pre>
<p>Will appreciate any idea about what I'm missing here, thanks!</p>
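Since the error comes from setuptools-scm failing to derive a version without git metadata, two workarounds to try — both assumptions on my part, and the version number <code>0.0.1</code> is a placeholder you should align with your colleague:

```shell
# option 1: tell setuptools-scm which version to pretend
SETUPTOOLS_SCM_PRETEND_VERSION=0.0.1 pip install -e s2shores

# option 2: recreate minimal git metadata so a version can be derived
git -C s2shores init
git -C s2shores add -A
git -C s2shores commit -m "import source drop"
git -C s2shores tag v0.0.1
pip install -e s2shores
```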
|
<python><package><setup.py>
|
2024-08-18 12:21:21
| 0
| 1,020
|
Michael Dorman
|
78,884,560
| 7,483,211
|
Install arbitrary extra dependencies into environment used by a pipx installed tool
|
<p>I've installed <code>codespell</code> using pipx:</p>
<pre><code>pipx install codespell
</code></pre>
<p>and I would like to use the <code>chardet</code> option. However, <code>chardet</code> does not come with <code>codespell</code> by default, so <code>codespell</code> errors out.</p>
<p>How do I add <code>chardet</code> into the <code>codespell</code> environment that <code>pipx</code> created?</p>
<p>I tried</p>
<pre class="lang-bash prettyprint-override"><code>pipx install codespell chardet
</code></pre>
<p>but that didn't work.</p>
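pipx keeps each tool in its own isolated virtualenv, so a second <code>pipx install</code> won't place <code>chardet</code> next to <code>codespell</code>. The command pipx provides for adding a package into an existing managed environment is <code>pipx inject</code>:

```shell
# add chardet into the venv pipx created for codespell
pipx inject codespell chardet
```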
|
<python><pip><pipx>
|
2024-08-18 12:15:50
| 1
| 10,272
|
Cornelius Roemer
|
78,884,485
| 2,846,038
|
Plotly line series animation: draw line continuously instead of in steps?
|
<p>I'm trying to produce an animated chart that will draw a couple of time-based line series smoothly over time.</p>
<p>I've managed to animate the line series using <code>graph_objects</code>. However, instead of having the lines being smoothly drawn from point to point by interpolating the gaps between frames, they're just drawn abruptly frame by frame (providing a link as <a href="https://meta.superuser.com/questions/15237/error-uploading-images-limited-to-50-megapixels-for-gif-image">this bug</a> prevents me from uploading the gif here):</p>
<p><a href="https://i.postimg.cc/hP6bW8y4/animation.gif" rel="nofollow noreferrer">https://i.postimg.cc/hP6bW8y4/animation.gif</a></p>
<p>I've played around with the transition settings, but that only seems to affect how the line shape is smoothed out AFTER the new point is drawn.</p>
<p>Is there any way to have the line be continuously drawn from point to point instead of jumping from one step to the next?</p>
<p>Here's my code:</p>
<pre class="lang-py prettyprint-override"><code>import plotly.graph_objects as go
import pandas as pd
# Sample data
dates = [
"2020-01-01",
"2020-06-01",
"2020-10-01",
"2021-03-01",
"2021-07-01",
"2021-10-01",
"2021-11-01",
]
series1 = [1477.60, 1474.45, 1448.40, 1447.40, 1444.40, 1449.40, 1441.40]
series2 = [1577.60, 1564.45, 1568.40, 1537.40, 1584.40, 1529.40, 1571.40]
df = pd.DataFrame(
list(zip(dates, series1, series2)),
columns=["date", "series_1", "series_2"],
)
# Base plot
fig = go.Figure(
layout=go.Layout(
updatemenus=[
dict(type="buttons", direction="right", x=0.9, y=1.16),
],
xaxis=dict(
range=["2020-01-01", "2021-12-01"],
autorange=False,
tickwidth=2,
title_text="Time",
),
yaxis=dict(range=[1400, 1600], autorange=False, title_text="Values"),
title="Test chart",
)
)
# Add traces
init = 1
fig.add_trace(
go.Scatter(
x=df.date[:init],
y=df.series_1[:init],
name="Series 1",
visible=True,
mode="lines",
line_shape="spline",
line=dict(color="#33CFA5"),
)
)
fig.add_trace(
go.Scatter(
x=df.date[:init],
y=df.series_2[:init],
name="Series 2",
visible=True,
mode="lines",
line_shape="spline",
line=dict(color="#bf00ff"),
)
)
# Animation
fig.update(
frames=[
go.Frame(
data=[
go.Scatter(x=df.date[:k], y=df.series_1[:k]),
go.Scatter(x=df.date[:k], y=df.series_2[:k]),
]
)
for k in range(init, len(df) + 1)
]
)
# Extra Formatting
fig.update_xaxes(ticks="outside", tickwidth=2, tickcolor="white", ticklen=10)
fig.update_yaxes(ticks="outside", tickwidth=2, tickcolor="white", ticklen=1)
fig.update_layout(yaxis_tickformat=",")
fig.update_layout(legend=dict(x=0, y=1.1), legend_orientation="h")
# Buttons
fig.update_layout(
updatemenus=[
dict(
buttons=list(
[
dict(
label="Play",
method="animate",
args=[
None,
{
"frame": {"duration": 1000},
"transition": {"duration": 1000},
"fromcurrent": True,
},
],
),
]
)
)
]
)
fig.show()
</code></pre>
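Plotly's <code>transition</code> only tweens between the frames you supply, so each new point still appears as a jump. A common workaround (an assumption here, not an official plotly feature) is to pre-compute interpolated sub-points yourself and emit one <code>go.Frame</code> per sub-point; with date axes you would convert to numeric timestamps first. The interpolation itself:

```python
def interpolate_series(xs, ys, steps=10):
    # insert `steps` linearly interpolated sub-points between each pair of
    # consecutive (x, y) points so the animated line grows smoothly
    out_x, out_y = [xs[0]], [ys[0]]
    for (x0, y0), (x1, y1) in zip(zip(xs, ys), zip(xs[1:], ys[1:])):
        for s in range(1, steps + 1):
            t = s / steps
            out_x.append(x0 + (x1 - x0) * t)
            out_y.append(y0 + (y1 - y0) * t)
    return out_x, out_y

print(interpolate_series([0.0, 1.0], [0.0, 10.0], steps=2))
# -> ([0.0, 0.5, 1.0], [0.0, 5.0, 10.0])
```

Building frames over the densified series (with a correspondingly shorter per-frame duration) makes the line appear to be drawn continuously rather than point by point.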
|
<python><animation><plotly><line><smoothing>
|
2024-08-18 11:38:08
| 0
| 325
|
VMX
|