| QuestionId (int64, 74.8M–79.8M) | UserId (int64, 56–29.4M) | QuestionTitle (string, 15–150 chars) | QuestionBody (string, 40–40.3k chars) | Tags (string, 8–101 chars) | CreationDate (2022-12-10 09:42:47 to 2025-11-01 19:08:18) | AnswerCount (int64, 0–44) | UserExpertiseLevel (int64, 301–888k) | UserDisplayName (string, 3–30 chars) |
|---|---|---|---|---|---|---|---|---|
77,573,944
| 4,980,705
|
df.head(n).itertuples() loop is quite slow
|
<p>The following loop is quite slow (it takes minutes), even though the function it calls, <code>getstopscoordinates</code>, is quite simple:</p>
<pre><code>for row in df_RoadSegments.head(2).itertuples():
    df_RoadSegments["OriginCoordinates"] = df_RoadSegments.apply(lambda x: getstopscoordinates(df_Stops, x["RoadSegmentOrigin"]), axis=1)
</code></pre>
<p>I've tried calling <code>getstopscoordinates</code> with manual inputs and it runs fast. In the loop I used <code>head(2)</code> just to test on the first two rows, but it still takes minutes.</p>
<pre><code>def getstopscoordinates(df_stops, stop_id):
    df_stop_id = df_stops.loc[df_stops["stop_id"] == stop_id]
    stop_id_lat = str(df_stop_id["stop_lat"].values[0])
    stop_id_lon = str(df_stop_id["stop_lon"].values[0])
    stop_id_coord = stop_id_lat + "," + stop_id_lon
    return stop_id_coord
</code></pre>
<p>RoadSegments.csv:</p>
<pre><code>RoadSegmentOrigin,RoadSegmentDest,trip_id,planned_duration
AREI2,JD4,107_1_D_1,32
JD4,PNG4,107_1_D_1,55
</code></pre>
<p>Stops.csv:</p>
<pre><code>stop_id,stop_code,stop_name,stop_lat,stop_lon,zone_id,stop_url
AREI2,AREI2,AREIAS,41.1591084955401,-8.55577748652738,PRT3,http://www.stcp.pt/pt/viajar/paragens/?t=detalhe&paragem=AREI2
JD4,JD4,JOÃO DE DEUS,41.1578666104126,-8.55802717966919,PRT3,http://www.stcp.pt/pt/viajar/paragens/?t=detalhe&paragem=JD4
</code></pre>
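<p>A common way to speed this up (a sketch, not the original code; it assumes the column names shown above and rebuilds the two frames from the CSV samples) is to drop both the loop and the row-wise <code>apply</code>: build the <code>stop_id</code>-to-coordinates mapping once and map it over the whole column:</p>

```python
import pandas as pd

df_Stops = pd.DataFrame({
    "stop_id": ["AREI2", "JD4"],
    "stop_lat": [41.1591084955401, 41.1578666104126],
    "stop_lon": [-8.55577748652738, -8.55802717966919],
})
df_RoadSegments = pd.DataFrame({"RoadSegmentOrigin": ["AREI2", "JD4"]})

# Build a stop_id -> "lat,lon" lookup once, then map it over the column
coords = df_Stops["stop_lat"].astype(str) + "," + df_Stops["stop_lon"].astype(str)
lookup = pd.Series(coords.values, index=df_Stops["stop_id"])
df_RoadSegments["OriginCoordinates"] = df_RoadSegments["RoadSegmentOrigin"].map(lookup)
print(df_RoadSegments["OriginCoordinates"].tolist())
```

<p>This does one hash lookup per row instead of scanning <code>df_Stops</code> once per row, which is usually orders of magnitude faster.</p>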
|
<python><pandas>
|
2023-11-29 19:12:58
| 1
| 717
|
peetman
|
77,573,909
| 726,730
|
Subclassing PyQt5 QHeaderView in Python
|
<p><strong>file: table.py</strong></p>
<pre class="lang-py prettyprint-override"><code>from PyQt5 import QtCore, QtGui, QtWidgets

class Ui_Dialog(object):
    def setupUi(self, Dialog):
        Dialog.resize(400, 300)
        self.verticalLayout = QtWidgets.QVBoxLayout(Dialog)
        self.tableWidget = QtWidgets.QTableWidget(Dialog)
        self.tableWidget.setColumnCount(6)
        self.tableWidget.setRowCount(0)
        item = QtWidgets.QTableWidgetItem()
        item.setText("1")
        self.tableWidget.setHorizontalHeaderItem(0, item)
        item = QtWidgets.QTableWidgetItem()
        item.setText("2")
        self.tableWidget.setHorizontalHeaderItem(1, item)
        item = QtWidgets.QTableWidgetItem()
        item.setText("3")
        self.tableWidget.setHorizontalHeaderItem(2, item)
        item = QtWidgets.QTableWidgetItem()
        item.setText("4")
        self.tableWidget.setHorizontalHeaderItem(3, item)
        item = QtWidgets.QTableWidgetItem()
        item.setText("5")
        self.tableWidget.setHorizontalHeaderItem(4, item)
        item = QtWidgets.QTableWidgetItem()
        item.setText("6")
        self.tableWidget.setHorizontalHeaderItem(5, item)
        self.verticalLayout.addWidget(self.tableWidget)
        QtCore.QMetaObject.connectSlotsByName(Dialog)
</code></pre>
<p><strong>File: run_me.py</strong></p>
<pre class="lang-py prettyprint-override"><code>from PyQt5 import QtWidgets, QtCore, QtGui
from table import Ui_Dialog
import sys

class Run_me:
    def __init__(self):
        self.app = QtWidgets.QApplication(sys.argv)
        self.Dialog = QtWidgets.QDialog()
        self.ui = Ui_Dialog()
        self.ui.setupUi(self.Dialog)
        self.Dialog.show()
        self.columns_min_width = []
        self.ui.tableWidget.resizeColumnsToContents()
        self.find_table_column_min_width()
        horizontal_header = self.ui.tableWidget.horizontalHeader()
        self.ui.tableWidget.setHorizontalHeader(CustomHeader(self.ui.tableWidget, self.columns_min_width))
        self.ui.tableWidget.updateGeometries()
        for column_index in range(0, len(self.columns_min_width)):
            header_item_text = self.ui.tableWidget.horizontalHeaderItem(column_index).text()
            print(header_item_text)
        sys.exit(self.app.exec_())

    def find_table_column_min_width(self):
        total_columns = self.ui.tableWidget.columnCount()
        self.columns_min_width = []
        for column_index in range(0, total_columns):
            column_width = self.ui.tableWidget.columnWidth(column_index)
            print(column_width)
            self.columns_min_width.append(column_width)

class CustomHeader(QtWidgets.QHeaderView):
    def __init__(self, table, columns_min_width):
        self.columns_min_width = columns_min_width
        self.total_columns = len(self.columns_min_width)
        self.header_labels = []
        for column_index in range(0, self.total_columns):
            column_text = table.horizontalHeaderItem(column_index).text()
            self.header_labels.append(column_text)
        super().__init__(QtCore.Qt.Horizontal, table)
        self.table = self.parentWidget()
        for column_index in range(0, self.total_columns):
            header_item = self.parentWidget().horizontalHeaderItem(column_index)
            header_item.setText(self.header_labels[column_index])
        self.track_move = False
        self.updateGeometries()

    def mousePressEvent(self, event):
        self.track_move = True
        QtWidgets.QHeaderView.mousePressEvent(self, event)
        print("Pressed")

    def mouseMoveEvent(self, event):
        if self.track_move:
            print("Moved while pressed")
            for column_index in range(0, self.total_columns):
                column_width = self.parentWidget().columnWidth(column_index)
                if column_width < self.columns_min_width[column_index]:
                    event.ignore()
                    return None
            QtWidgets.QHeaderView.mouseMoveEvent(self, event)
        else:
            QtWidgets.QHeaderView.mouseMoveEvent(self, event)

    def mouseReleaseEvent(self, event):
        self.track_move = False
        QtWidgets.QHeaderView.mouseReleaseEvent(self, event)
        print("Released")

if __name__ == "__main__":
    program = Run_me()
</code></pre>
<p>With this example I am trying to set a constraint on header section resizing (column width), but as you can see, no header is visible when running <strong>run_me.py</strong>.</p>
<p>Is there something I am missing?</p>
|
<python><pyqt5><qtablewidget><qheaderview>
|
2023-11-29 19:07:44
| 1
| 2,427
|
Chris P
|
77,573,873
| 5,437,090
|
Execute a Python script from another Python script with argparse input argument and extract an output
|
<p>Given:</p>
<p><code>child.py</code>:</p>
<pre><code>import argparse
import json

parser = argparse.ArgumentParser(description='A sample script with argparse')
parser.add_argument('--query', type=str, help='Query Phrase', default="I love coding!")
args = parser.parse_args()

def customized_fcn(inp="This is my sample text!"):
    result = inp.split()
    return result

def main():
    my_splited_text = customized_fcn(inp=args.query)
    # Serialize the list into a string and print it
    serialized_result = json.dumps(my_splited_text)
    print('Serialized Result:', serialized_result)

if __name__ == '__main__':
    main()
</code></pre>
<p>I would like to capture/extract <code>my_splited_text</code> with type <code><class 'list'></code> from <code>child.py</code> in another script called <code>parent.py</code> as follows:</p>
<pre><code>import subprocess
import json
import re

def run_my_script():
    command = ['python', 'child.py', '--query', 'I want to eat an ice-cream right now!']
    process = subprocess.Popen(command, stdout=subprocess.PIPE, stderr=subprocess.PIPE, universal_newlines=True)
    return_code = process.wait()
    stdout, stderr = process.communicate()
    # Extract and deserialize the result
    serialized_result = re.search(r'Serialized Result: (.+)', stdout).group(1)
    my_splited_text = json.loads(serialized_result)
    print('Captured Result:', type(my_splited_text), my_splited_text)

run_my_script()
</code></pre>
<p>Right now I use <code>json</code> to serialize <code>my_splited_text</code> into a string representation in <code>child.py</code>, then extract and deserialize the result in <code>parent.py</code> so I can use it in the rest of my code.</p>
<p>Is there an easier and better approach than this serialization/deserialization?</p>
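<p>One possible simplification (a sketch; the inline <code>-c</code> child below is my stand-in for <code>child.py</code>): if the child prints only the JSON payload on stdout, the parent can parse the whole stream with <code>subprocess.run</code>, and the regex extraction step disappears:</p>

```python
import json
import subprocess
import sys

# Hypothetical child: prints nothing but the JSON payload on stdout.
child_code = (
    "import json, sys\n"
    "print(json.dumps(sys.argv[1].split()))\n"
)

# capture_output=True collects stdout/stderr; check=True raises on failure
result = subprocess.run(
    [sys.executable, "-c", child_code, "I want to eat an ice-cream right now!"],
    capture_output=True, text=True, check=True,
)
my_splited_text = json.loads(result.stdout)
print(type(my_splited_text), my_splited_text)
```

<p>Keeping stdout machine-readable (and sending human-readable logging to stderr) is the usual way to avoid scraping output with regular expressions.</p>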
|
<python><subprocess><argparse>
|
2023-11-29 19:03:03
| 0
| 1,621
|
farid
|
77,573,764
| 8,102,500
|
Why TorchVision's GoogLeNet has this strange "normalization"?
|
<p>I'm reading the source code of <a href="https://github.com/pytorch/vision/blob/main/torchvision/models/googlenet.py" rel="nofollow noreferrer">TorchVision's GoogLeNet</a> and I found these lines strange and can't figure it out.</p>
<pre class="lang-py prettyprint-override"><code>def _transform_input(self, x: Tensor) -> Tensor:
    if self.transform_input:
        x_ch0 = torch.unsqueeze(x[:, 0], 1) * (0.229 / 0.5) + (0.485 - 0.5) / 0.5
        x_ch1 = torch.unsqueeze(x[:, 1], 1) * (0.224 / 0.5) + (0.456 - 0.5) / 0.5
        x_ch2 = torch.unsqueeze(x[:, 2], 1) * (0.225 / 0.5) + (0.406 - 0.5) / 0.5
        x = torch.cat((x_ch0, x_ch1, x_ch2), 1)
    return x
</code></pre>
<p>I know that ImageNet models use <code>mean = [0.485, 0.456, 0.406]</code> and <code>std = [0.229, 0.224, 0.225]</code>, and this looks like some kind of "normalization", but it is clearly not <code>(x - mean) / std</code>; it is more like <code>x * std + mean</code>. I also don't understand the <code>0.5</code> factors.</p>
<p>Can anyone explain this code?</p>
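<p>One way to check the algebra (my own sketch, not from the source): if <code>x</code> is an ImageNet-normalized value, these lines re-normalize it to <code>mean = 0.5</code>, <code>std = 0.5</code>, since <code>x * (std / 0.5) + (mean - 0.5) / 0.5 == ((x * std + mean) - 0.5) / 0.5</code>:</p>

```python
# Verify for one channel that the GoogLeNet expression maps an
# ImageNet-normalized value to a mean=0.5/std=0.5 normalized one.
mean, std = 0.485, 0.229
orig = 0.3                       # hypothetical raw pixel value in [0, 1]
x = (orig - mean) / std          # ImageNet-normalized input
y = x * (std / 0.5) + (mean - 0.5) / 0.5
assert abs(y - (orig - 0.5) / 0.5) < 1e-12
print(y)
```

<p>So the expression undoes the ImageNet normalization and re-applies a symmetric one, which matches the channel constants in the snippet; whether that was the original authors' intent is for an answer to confirm.</p>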
|
<python><pytorch><torchvision>
|
2023-11-29 18:42:31
| 1
| 1,203
|
Shuai
|
77,573,742
| 2,868,899
|
substring functionality when ignoring the stop
|
<p>Is there a way to have <code>SubString</code> simply stop at the end of the string, instead of throwing an error, when the <code>stop</code> index lies beyond the string's length?</p>
<p>In Python it works like this:</p>
<pre><code>string = "Hello, World"
string2 = string[5:20]
print(string2)
</code></pre>
<p>But in Julia, this throws an error:</p>
<pre><code>string = "Hello, World"
string2 = SubString(string,5,20)
print(string2)
</code></pre>
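<p>For comparison, Python silently clamps out-of-range slice bounds, which is why the first snippet works. In Julia, one workaround (an assumption on my part, and only safe byte-wise for ASCII strings) is to clamp explicitly, e.g. <code>SubString(string, 5, min(lastindex(string), 20))</code>. The Python behavior:</p>

```python
s = "Hello, World"
# Python clamps the stop index to len(s); no error is raised
s2 = s[5:20]
print(s2)
```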
|
<python><julia>
|
2023-11-29 18:40:05
| 0
| 2,790
|
OldManSeph
|
77,573,709
| 13,135,901
|
How is this chained indexing?
|
<p>I get <code>SettingWithCopyWarning: A value is trying to be set on a copy of a slice from a DataFrame</code> warning when trying to run the following code:</p>
<pre><code>df = pd.DataFrame(np.random.randn(5, 3),
                  columns=['A', 'B', 'C'])
df2 = df[(df.A > 0) | (df.B > 0)]
df2.at[-1, 'C'] = df['C'].iloc[-1]
</code></pre>
<p>How is this chained indexing?</p>
<p><em><strong>UPDATE</strong></em></p>
<p>I think I misused the <code>-1</code> index - seems in this case it doesn't return the last row like I expected. Replaced it with <code>df2.at[df2.index[-1], 'C'] = df['C'].iloc[-1]</code> and the warning went away.</p>
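<p>A sketch of the usual belt-and-braces fix (example data is my own, via a fixed seed): take an explicit <code>.copy()</code> of the filtered frame, so the later write can never alias <code>df</code> and the warning goes away regardless of the label used:</p>

```python
import numpy as np
import pandas as pd

np.random.seed(0)
df = pd.DataFrame(np.random.randn(5, 3), columns=["A", "B", "C"])

# .copy() makes df2 an independent frame: writing to it can never
# propagate back to df, so pandas has nothing to warn about
df2 = df[(df.A > 0) | (df.B > 0)].copy()
df2.at[df2.index[-1], "C"] = df["C"].iloc[-1]
print(df2.tail())
```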
|
<python><pandas>
|
2023-11-29 18:33:55
| 0
| 491
|
Viktor
|
77,573,600
| 583,187
|
Format a number to 8 or 16 chars without "e" with highest precision
|
<p>I have the following Python procedure. The input is any number (from a calculation) and a format, where the format can be 'long' or 'short'.</p>
<p>What I want to do: if the format is short, the output should be the input number using a maximum of 8 chars (including sign); if the format is long, the output may use up to 16 chars.
Only negative numbers may have a '-' in front. Scientific notation must only be used in the result if it increases the numeric precision of the "shortened" number.
If scientific notation is output, the "e" must not be used, as it takes up unnecessary space.</p>
<p>My proc has been extended several times and still does not meet the requirements in all cases.</p>
<p>Can anyone help me achieve the required output in a good way?</p>
<pre><code>def format_nastran(number, format):
    if format == "free":
        return number
    if format == "short":
        fieldsize = 8
    if format == "long":
        fieldsize = 16
    charsfor_comma = 1

    # Help functions ##########################################################
    def remove_trailing_zeros(number):
        # Convert to string, strip trailing zeros, and convert back to number
        stripped_number = str(number).rstrip('0').rstrip('.') if '.' in str(number) else str(number)
        return type(number)(stripped_number)

    def count_decimals(number):
        # Convert to string
        number_str = str(number)
        # Check if the string contains a decimal point
        if '.' in number_str:
            # Get the portion after the decimal point and count its length
            decimal_part = number_str.split('.')[1]
            return len(decimal_part)
        else:
            return 0  # No decimals
    # #########################################################################

    scientific = str(number).find("e")

    # Case 1: integer which fits into the field without any changes
    # short format: 12345678
    # short format: -1234567
    # long format : 1234567812345678
    # long format : -123456781234567
    if scientific == -1:
        number = remove_trailing_zeros(number)
        num_chars = len(str(number))
        if num_chars <= fieldsize:
            return number

        # Case 2: integer which is too large to fit into the field and has to
        # be converted to scientific format
        # short format: 1234567891
        # short format: -123456789
        # long format : 123456781234567812345678
        # long format : -12345678123456781234567
        if num_chars > fieldsize:
            e_number = "{:.12e}".format(float(number))
            # Split the number into mantissa and exponent
            mantissa, exponent = e_number.split("e")
            # Strip leading zeros from the exponent
            exponent = int(exponent)
            if int(exponent) > 0:
                exponent = "+" + str(exponent)
            charsinexponent = len(str(exponent))
            # Determine the length of the mantissa
            mantissa = remove_trailing_zeros(mantissa)
            charsmantissa = len(str(mantissa))
            # Determine the number of decimals
            charsdezimals = count_decimals(mantissa)
            # Determine the number of chars before the decimals
            chars_intpart = charsmantissa - charsdezimals - charsfor_comma
            # To how many digits do we have to round the mantissa so that
            # mantissa plus exponent fits into the field?
            round_to = fieldsize - chars_intpart - charsfor_comma - charsinexponent
            if round_to > 0:
                rounded_mantissa = round(float(mantissa), round_to)
                # Assemble the whole number
                formatted_number = str(rounded_mantissa) + str(exponent)
            else:
                formatted_number = str(mantissa) + str(exponent)
            return formatted_number

    if scientific != -1:
        # Case 3: scientific number which fits into the field without any
        # changes after the 'e' and the unnecessary leading 0 of the exponent
        # have been removed
        # short format: 1.2345e-005 -> 1.2345-5  (3 chars gained)
        # short format: -1.234e-005 -> -1.234-5
        # long format :
        # long format :
        mantissa, exponent = str(number).split("e")
        # Strip leading zeros from the exponent
        exponent = int(exponent)
        if int(exponent) > 0:
            exponent = "+" + str(exponent)
        charsinexponent = len(str(exponent))
        # Determine the length of the mantissa
        mantissa = remove_trailing_zeros(mantissa)
        charsmantissa = len(str(mantissa))
        # Determine the number of decimals
        charsdezimals = count_decimals(mantissa)
        # Determine the number of chars before the decimals
        chars_intpart = charsmantissa - charsdezimals - charsfor_comma
        # To how many digits do we have to round the mantissa so that
        # mantissa plus exponent fits into the field?
        round_to = fieldsize - chars_intpart - charsfor_comma - charsinexponent
        if round_to > 0:
            rounded_mantissa = round(float(mantissa), round_to)
            # Assemble the whole number
            formatted_number = str(rounded_mantissa) + str(exponent)
        else:
            formatted_number = str(mantissa) + str(exponent)
        return formatted_number
</code></pre>
<p>Testdata:</p>
<pre><code>#number = 30000000000000.0
#number = 123456789123456789
#number = -123456789123456789
#number = 12345678
number = -12345678 # The question here is if it not better to simply round instead of switching to scientific format...
#number = 6.5678e-06
#number = 6.5678999e-06
#number = 6.5678123456789123e-000006
#number = 6.5678123456789123e-000006
#number = 6.5678123456789123e+000006
#number = -6.5678123456789123e-06
#format = 'long'
format = 'short'
result = format_nastran(number, format)
print(str(result))
</code></pre>
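<p>Not a fix of the proc above, but a much shorter alternative sketch (the helper name <code>fit_number</code> and its width default are my own): let Python's <code>g</code> formatting pick the precision greedily, then strip the <code>e</code> from the exponent the way the question's examples do (<code>1.2345e-05 -> 1.2345-5</code>):</p>

```python
def fit_number(x, width=8):
    """Greedily pick the most precise '%g' representation that fits
    into `width` chars, then drop the 'e' and the exponent's leading
    zero (e.g. 1.2345e-05 -> 1.2345-5)."""
    for sig in range(width, 0, -1):
        s = f"{x:.{sig}g}"
        # remove the 'e' and a single leading zero of the exponent
        s = s.replace("e+0", "+").replace("e-0", "-")
        s = s.replace("e+", "+").replace("e-", "-")
        if len(s) <= width:
            return s
    return f"{x:.1g}"

print(fit_number(-12345678))   # -1.235+7
print(fit_number(6.5678e-06))  # 6.5678-6
print(fit_number(123.45))      # 123.45
```

<p>The <code>g</code> type already chooses between fixed and scientific notation and strips trailing zeros, so the loop only has to find the largest number of significant digits whose rendering fits the field.</p>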
|
<python><formatting><numbers><rounding><precision>
|
2023-11-29 18:15:02
| 1
| 2,841
|
Lumpi
|
77,573,562
| 1,290,485
|
How do I specify feature columns in Databricks AutoML?
|
<p>I am running Databricks AutoML in a Python notebook with the look-ups from the feature tables. However, the additional columns are always included, and all runs fail.</p>
<pre><code>import databricks.automl

automl_feature_lookups = [
    {
        "table_name": "lakehouse_in_action.favorita_forecasting.oil_10d_lag_ft",
        "lookup_key": "date",
        "feature_names": "lag10_oil_price"
    },
    {
        "table_name": "lakehouse_in_action.favorita_forecasting.store_holidays_ft",
        "lookup_key": ["date", "store_nbr"]
    },
    {
        "table_name": "lakehouse_in_action.favorita_forecasting.stores_ft",
        "lookup_key": "store_nbr",
        "feature_names": ["cluster", "store_type"]
    }
]

automl_data = raw_data.filter("date > '2016-12-31'")
summary = databricks.automl.regress(automl_data,
                                    target_col=label_name,
                                    time_col="date",
                                    timeout_minutes=60,
                                    feature_store_lookups=automl_feature_lookups)
</code></pre>
<p>It turns out that when creating a <a href="https://docs.databricks.com/en/machine-learning/feature-store/train-models-with-feature-store.html#create-a-training-dataset" rel="nofollow noreferrer">training set</a> you have the option to specify features using <code>feature_names</code>. When creating the <a href="https://docs.databricks.com/en/machine-learning/automl/train-ml-model-automl-api.html#classification-and-regression-parameters" rel="nofollow noreferrer">dictionary for AutoML</a>, <code>feature_names</code> is not a valid option.</p>
<p>I tried removing <code>feature_names</code>, but it did not fix my issue.</p>
<p>I added <code>exclude_cols=['id','city','state','price_date']</code>, but according to the error I received, columns from feature lookup tables cannot be excluded:</p>
<pre><code>InvalidArgumentError: Dataset schema does not contain column with name 'city'. Please pass a valid column name for param: exclude_cols
</code></pre>
|
<python><databricks><automl>
|
2023-11-29 18:07:05
| 1
| 6,832
|
Climbs_lika_Spyder
|
77,573,501
| 1,230,945
|
Assign a new column in a multi-indexed dataframe
|
<p>I have the following multi-indexed dataframe:</p>
<p><a href="https://i.sstatic.net/V4T9K.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/V4T9K.png" alt="Multi-indexed dataframe" /></a></p>
<p>I want to add a new column labeled <code>returns</code> and assign the percentage change applied to the <code>close</code> column i.e</p>
<pre><code>bars.loc[symbol, 'returns'] = bars.loc[symbol]['close'].pct_change()
</code></pre>
<p>The challenge is that the <code>returns</code> column does not get populated as expected. So with the snippet below, here is what I got</p>
<pre><code>for symbol in symbols:
    bars.loc[symbol, 'returns'] = bars.loc[symbol]['close'].pct_change()
print(df.loc[symbols[0]].tail())
</code></pre>
</code></pre>
<p><a href="https://i.sstatic.net/2PohS.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/2PohS.png" alt="enter image description here" /></a></p>
<p>The <code>returns</code> column has <code>NaN</code> for all its values. What am I doing wrong here?</p>
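<p>A sketch of an alternative (the example frame below is my own, standing in for <code>bars</code>): let <code>groupby</code> compute <code>pct_change</code> per symbol in one shot, which avoids the per-symbol <code>.loc</code> assignment and the index-alignment issues it can run into:</p>

```python
import pandas as pd

# Hypothetical two-level (symbol, timestamp) frame standing in for `bars`
idx = pd.MultiIndex.from_product(
    [["AAPL", "MSFT"], pd.date_range("2023-01-01", periods=3)],
    names=["symbol", "timestamp"],
)
bars = pd.DataFrame({"close": [10.0, 11.0, 12.1, 20.0, 19.0, 20.9]}, index=idx)

# Group on the outer level so pct_change restarts at each symbol,
# and assign the aligned result in a single vectorized step
bars["returns"] = bars.groupby(level="symbol")["close"].pct_change()
print(bars)
```

<p>Because the result keeps the full MultiIndex, it aligns with <code>bars</code> directly; only the first row of each symbol is <code>NaN</code>, as expected.</p>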
|
<python><python-3.x><pandas><dataframe><multi-index>
|
2023-11-29 17:57:41
| 1
| 723
|
Sunday Okpokor
|
77,573,416
| 5,399,268
|
How to get the correct pivoting for a pandas dataframe?
|
<p>I have following <code>pandas</code> DataFrame:</p>
<pre><code>import pandas as pd

df2 = pd.DataFrame({
    'nombreNumeroUnico': ['UP2_G1_B', 'UP2_G2_B'],
    'pMax': [110.0, 110.0]
})
df2

Out[1]:
  nombreNumeroUnico   pMax
0          UP2_G1_B  110.0
1          UP2_G2_B  110.0
</code></pre>
<h2>Expected result</h2>
<p>I want to transform this into:</p>
<pre><code> UP2_G1_B UP2_G2_B
0 110 110
</code></pre>
<h2>What I have tried so far</h2>
<p>So far I was able to convert it using pivot function but I am not getting the exact result I want.</p>
<pre><code>df2.pivot(index=None, columns="nombreNumeroUnico", values="pMax")

Out[57]:
nombreNumeroUnico  UP2_G1_B  UP2_G2_B
0                     110.0       NaN
1                       NaN     110.0
</code></pre>
<h2>Question</h2>
<p>But it's not the exact result I want. How can I get the expected result?</p>
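<p>One way to get exactly that shape (a sketch, assuming as above that each <code>nombreNumeroUnico</code> occurs exactly once): set it as the index and transpose, instead of pivoting:</p>

```python
import pandas as pd

df2 = pd.DataFrame({
    "nombreNumeroUnico": ["UP2_G1_B", "UP2_G2_B"],
    "pMax": [110.0, 110.0],
})

# With unique keys, a transpose of the single-column frame gives the
# one-row wide shape directly; rename_axis drops the columns label
wide = (
    df2.set_index("nombreNumeroUnico")["pMax"]
       .to_frame().T
       .reset_index(drop=True)
       .rename_axis(columns=None)
)
print(wide)
```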
|
<python><pandas><dataframe><pivot>
|
2023-11-29 17:42:22
| 2
| 4,793
|
Cedric Zoppolo
|
77,572,887
| 1,315,621
|
Random non deterministic results from pretrained retinanet
|
<p>I wrote the following class to perform instance segmentation and return the masks for a given class.
The code behaves non-deterministically: the printed labels (and even the number of labels) change on every execution, even though I run it on the same input image containing a single person.
Is there a problem in how I load the weights? The code prints no warning or exception.
Note that I am running the code on the CPU.</p>
<pre><code>import numpy as np
import torch
from torch import Tensor
from torchvision.models.detection import retinanet_resnet50_fpn_v2, RetinaNet_ResNet50_FPN_V2_Weights
import torchvision.transforms as T
import PIL
from PIL import Image

class RetinaNet:
    def __init__(self, weights: RetinaNet_ResNet50_FPN_V2_Weights = RetinaNet_ResNet50_FPN_V2_Weights.COCO_V1):
        # Load the pre-trained DeepLabV3 model
        self.weights = weights
        self.model = retinanet_resnet50_fpn_v2(
            pretrained=RetinaNet_ResNet50_FPN_V2_Weights
        )
        self.model.eval()
        # Check if a GPU is available and if not, use a CPU
        self.device = 'cuda' if torch.cuda.is_available() else 'cpu'
        self.model.to(self.device)
        # Define the transformation
        self.transform = T.Compose([
            T.ToTensor(),
        ])

    def infer_on_image(self, image: PIL.Image.Image, label: str) -> Tensor:
        # Transform image
        input_tensor = self.transform(image)
        input_tensor = input_tensor.unsqueeze(0)
        input_tensor.to(self.device)
        # Run model
        with torch.no_grad():
            predictions = self.model(input_tensor)
        # Post-processing to create masks for requested label
        label_index = self.get_label_index(label)
        boxes = predictions[0]['boxes'][predictions[0]['labels'] == label_index]
        print('labels', predictions[0]['labels'])  # random output
        masks = torch.zeros((len(boxes), input_tensor.shape[1], input_tensor.shape[2]), dtype=torch.uint8)
        for i, box in enumerate(boxes.cpu().numpy()):
            x1, y1, x2, y2 = map(int, box)
            masks[i, y1:y2, x1:x2] = 1
        return masks

    def get_label_index(self, label: str) -> int:
        return self.weights.value.meta['categories'].index(label)

    def get_label(self, label_index: int) -> str:
        return self.weights.value.meta['categories'][label_index]

    @staticmethod
    def load_image(file_path: str) -> PIL.Image.Image:
        return Image.open(file_path).convert("RGB")

if __name__ == '__main__':
    from matplotlib import pyplot as plt

    image_path = 'person.jpg'
    # Run inference
    retinanet = RetinaNet()
    masks = retinanet.infer_on_image(
        image=retinanet.load_image(image_path),
        label='person'
    )
    # Plot image
    plt.imshow(retinanet.load_image(image_path))
    plt.show()
    # Plot mask
    for i, mask in enumerate(masks):
        mask = mask.unsqueeze(2)
        plt.title(f'mask {i}')
        plt.imshow(mask)
        plt.show()
</code></pre>
|
<python><deep-learning><pytorch><image-segmentation><retinanet>
|
2023-11-29 16:22:27
| 1
| 3,412
|
user1315621
|
77,572,766
| 11,611,246
|
Unable to upload package using twine
|
<p>When I build my Python package, I get no warnings or errors:</p>
<pre><code>H:\git\graphab4py>py -m build
* Creating venv isolated environment...
* Installing packages in isolated environment... (setuptools>=61.0, wheel)
* Getting build dependencies for sdist...
running egg_info
writing src\graphab4py.egg-info\PKG-INFO
writing dependency_links to src\graphab4py.egg-info\dependency_links.txt
writing requirements to src\graphab4py.egg-info\requires.txt
writing top-level names to src\graphab4py.egg-info\top_level.txt
reading manifest file 'src\graphab4py.egg-info\SOURCES.txt'
adding license file 'LICENSE'
writing manifest file 'src\graphab4py.egg-info\SOURCES.txt'
* Building sdist...
running sdist
running egg_info
writing src\graphab4py.egg-info\PKG-INFO
writing dependency_links to src\graphab4py.egg-info\dependency_links.txt
writing requirements to src\graphab4py.egg-info\requires.txt
writing top-level names to src\graphab4py.egg-info\top_level.txt
reading manifest file 'src\graphab4py.egg-info\SOURCES.txt'
adding license file 'LICENSE'
writing manifest file 'src\graphab4py.egg-info\SOURCES.txt'
running check
creating graphab4py-1.0.4
creating graphab4py-1.0.4\src
creating graphab4py-1.0.4\src\graphab4py
creating graphab4py-1.0.4\src\graphab4py.egg-info
copying files to graphab4py-1.0.4...
copying LICENSE -> graphab4py-1.0.4
copying README.rst -> graphab4py-1.0.4
copying pyproject.toml -> graphab4py-1.0.4
copying setup.py -> graphab4py-1.0.4
copying src\graphab4py\__init__.py -> graphab4py-1.0.4\src\graphab4py
copying src\graphab4py\functions.py -> graphab4py-1.0.4\src\graphab4py
copying src\graphab4py\project.py -> graphab4py-1.0.4\src\graphab4py
copying src\graphab4py.egg-info\PKG-INFO -> graphab4py-1.0.4\src\graphab4py.egg-info
copying src\graphab4py.egg-info\SOURCES.txt -> graphab4py-1.0.4\src\graphab4py.egg-info
copying src\graphab4py.egg-info\dependency_links.txt -> graphab4py-1.0.4\src\graphab4py.egg-info
copying src\graphab4py.egg-info\requires.txt -> graphab4py-1.0.4\src\graphab4py.egg-info
copying src\graphab4py.egg-info\top_level.txt -> graphab4py-1.0.4\src\graphab4py.egg-info
copying src\graphab4py.egg-info\SOURCES.txt -> graphab4py-1.0.4\src\graphab4py.egg-info
Writing graphab4py-1.0.4\setup.cfg
Creating tar archive
removing 'graphab4py-1.0.4' (and everything under it)
* Building wheel from sdist
* Creating venv isolated environment...
* Installing packages in isolated environment... (setuptools>=61.0, wheel)
* Getting build dependencies for wheel...
running egg_info
writing src\graphab4py.egg-info\PKG-INFO
writing dependency_links to src\graphab4py.egg-info\dependency_links.txt
writing requirements to src\graphab4py.egg-info\requires.txt
writing top-level names to src\graphab4py.egg-info\top_level.txt
reading manifest file 'src\graphab4py.egg-info\SOURCES.txt'
adding license file 'LICENSE'
writing manifest file 'src\graphab4py.egg-info\SOURCES.txt'
* Installing packages in isolated environment... (wheel)
* Building wheel...
running bdist_wheel
running build
running build_py
creating build
creating build\lib
creating build\lib\graphab4py
copying src\graphab4py\functions.py -> build\lib\graphab4py
copying src\graphab4py\project.py -> build\lib\graphab4py
copying src\graphab4py\__init__.py -> build\lib\graphab4py
running egg_info
writing src\graphab4py.egg-info\PKG-INFO
writing dependency_links to src\graphab4py.egg-info\dependency_links.txt
writing requirements to src\graphab4py.egg-info\requires.txt
writing top-level names to src\graphab4py.egg-info\top_level.txt
reading manifest file 'src\graphab4py.egg-info\SOURCES.txt'
adding license file 'LICENSE'
writing manifest file 'src\graphab4py.egg-info\SOURCES.txt'
installing to build\bdist.win-amd64\wheel
running install
running install_lib
creating build\bdist.win-amd64
creating build\bdist.win-amd64\wheel
creating build\bdist.win-amd64\wheel\graphab4py
copying build\lib\graphab4py\functions.py -> build\bdist.win-amd64\wheel\.\graphab4py
copying build\lib\graphab4py\project.py -> build\bdist.win-amd64\wheel\.\graphab4py
copying build\lib\graphab4py\__init__.py -> build\bdist.win-amd64\wheel\.\graphab4py
running install_egg_info
Copying src\graphab4py.egg-info to build\bdist.win-amd64\wheel\.\graphab4py-1.0.4-py3.11.egg-info
running install_scripts
creating build\bdist.win-amd64\wheel\graphab4py-1.0.4.dist-info\WHEEL
creating 'H:\git\graphab4py\dist\.tmp-wmoibnbf\graphab4py-1.0.4-py3-none-any.whl' and adding 'build\bdist.win-amd64\wheel' to it
adding 'graphab4py/__init__.py'
adding 'graphab4py/functions.py'
adding 'graphab4py/project.py'
adding 'graphab4py-1.0.4.dist-info/LICENSE'
adding 'graphab4py-1.0.4.dist-info/METADATA'
adding 'graphab4py-1.0.4.dist-info/WHEEL'
adding 'graphab4py-1.0.4.dist-info/top_level.txt'
adding 'graphab4py-1.0.4.dist-info/RECORD'
removing build\bdist.win-amd64\wheel
Successfully built graphab4py-1.0.4.tar.gz and graphab4py-1.0.4-py3-none-any.whl
</code></pre>
<p>However, when I try to upload it to PyPI, it fails with the following output:</p>
<pre><code>H:\git\graphab4py>twine upload dist/*
Uploading distributions to https://upload.pypi.org/legacy/
Uploading graphab4py-1.0.4-py3-none-any.whl
100% ---------------------------------------- 29.3/29.3 kB β’ 00:00 β’ 271.1 kB/s
WARNING Error during upload. Retry with the --verbose option for more details.
ERROR HTTPError: 400 Bad Request from https://upload.pypi.org/legacy/
The description failed to render for 'text/x-rst'. See https://pypi.org/help/#description-content-type for
more information.
</code></pre>
<p>With the <code>--verbose</code> option, I get:</p>
<pre><code>H:\git\graphab4py>twine upload dist/* --verbose
INFO Using configuration from C:\Users\poppman\.pypirc
Uploading distributions to https://upload.pypi.org/legacy/
INFO dist\graphab4py-1.0.4-py3-none-any.whl (18.3 KB)
INFO dist\graphab4py-1.0.4.tar.gz (19.2 KB)
INFO username set from config file
INFO password set from config file
INFO username: __token__
INFO password: <hidden>
Uploading graphab4py-1.0.4-py3-none-any.whl
100% ---------------------------------------- 29.3/29.3 kB β’ 00:00 β’ 1.3 MB/s
INFO Response from https://upload.pypi.org/legacy/:
400 The description failed to render for 'text/x-rst'. See https://pypi.org/help/#description-content-type for
more information.
INFO <html>
<head>
<title>400 The description failed to render for 'text/x-rst'. See
https://pypi.org/help/#description-content-type for more information.</title>
</head>
<body>
<h1>400 The description failed to render for 'text/x-rst'. See
https://pypi.org/help/#description-content-type for more information.</h1>
The server could not comply with the request since it is either malformed or otherwise incorrect.<br/><br/>
The description failed to render for &#x27;text/x-rst&#x27;. See
https://pypi.org/help/#description-content-type for more information.
</body>
</html>
ERROR HTTPError: 400 Bad Request from https://upload.pypi.org/legacy/
The description failed to render for 'text/x-rst'. See https://pypi.org/help/#description-content-type for
more information.
</code></pre>
<p>I was able to upload earlier versions. Since then, I only changed some docstrings and built a documentation using Sphinx. This version of the package is already on <a href="https://github.com/ManuelPopp/graphab4py" rel="nofollow noreferrer">GitHub</a> where the README.rst is displayed correctly.</p>
<p>I assumed the issue was related to the Sphinx files and excluded ./docs in setup.py, but I still get the error.</p>
<p>The current README.rst looks like this:</p>
<pre><code>.. role:: bash(code)
:language: bash
.. role:: python(code)
:language: python
.. raw:: html
<p align="center">
<img src="/docs/img/Ga4Py.png" alt="Graphab4py Logo" width="400">
</p>
----
.. image:: https://img.shields.io/pypi/v/graphab4py.svg
:target: https://pypi.org/project/graphab4py/
.. image:: https://img.shields.io/pypi/pyversions/graphab4py.svg
:target: https://pypi.org/project/graphab4py
.. _Supported Python Versions: https://pypi.org/project/graphab4py
.. image:: https://travis-ci.org/username/graphab4py.svg?branch=master
:target: https://travis-ci.org/username/graphab4py
.. _Build Status: https://travis-ci.org/username/graphab4py
.. image:: https://img.shields.io/pypi/dm/graphab4py.svg?label=PyPI%20downloads
:target: https://pypi.org/project/graphab4py
.. _PyPI Downloads: https://pypi.org/project/graphab4py
.. image:: https://img.shields.io/badge/license-UNLICENSE-green.svg
:target: https://unlicense.org/
=====
About
=====
This package provides a Python interface to the program `Graphab <https://sourcesup.renater.fr/www/graphab/en/home.html/>`_.
The author(s) of this Python package are not developing Graphab.
Rather, Graphab is an independent software which provides a graphical user interface, as well as a command line interface.
Further information on Graphab can be found `here <https://sourcesup.renater.fr/www/graphab/en/home.html>`_.
Also view the `documentation <https://htmlpreview.github.io/?https://github.com/ManuelPopp/graphab4py/blob/main/docs/build/html/index.html>`_ of this Python package.
=============
Prerequisites
=============
In order to install and use Graphab4py, `Python <https://www.python.org>`_ >= 3.8 and `Java <https://www.java.com>`_ >= 8 are both required.
It is also recommended to have `pip <https://pip.pypa.io/en/stable/installation/>`_ available to install the `latest version <https://pypi.org/project/graphab4py/#history>`_ of Graphab4py.
Graphab is not required for installation. It can be installed through Graphab4py if missing. Alternatively, Graphab4py can be set up to use an existing Graphab Java executable.
============
Installation
============
Graphab4Py is available on `PyPI <https://pypi.org/project/graphab4py>`_. To install Graphab4Py, simply run the following line:
.. code-block:: console
pip install graphab4py
========
Examples
========
With Graphab4py installed, we will now look at a few examples.
Creating a project
++++++++++++++++++
In the following, we will create a new Graphab project from scratch.
.. code-block:: python
import graphab4py
graphab4py.set_graphab("/home/rca/opt/")
prj = graphab4py.Project()
prj.create_project(
name = "MyProject", patches = "/home/rca/dat/pat/Patches.tif",
habitat = 1, directory = "/home/rca/prj"
)
prj.create_linkset(
disttype = "cost",
linkname = "L1",
threshold = 100000,
cost_raster = "/home/rca/dat/res/resistance_surface.tif"
)
prj.create_graph(graphname = "G1")
prj.save()
In this example, Graphab has already been downloaded and saved to a folder named :bash:`/home/rca/opt/`.
In a first step, Graphab4py is pointed to this folder. Alternatively, the :python:`get_graphab()` function can be used to download Graphab to a specific location.
Subsequently, the project is initialized. Here, the project is given a name and a project folder is created. Moreover, a file containing habitat patches must be provided.
This file is a raster (e.g., a GeoTIFF \*.tif file) with values encoded as INT2S. (Graphab does not accept another format.) The value or values for habitat patches must also be provided.
Now, we create a linkset. The values allowed for :python:`disttype` are :python:`"euclid"` and :python:`"cost"`, which refer to euclidean distance and cumulated cost.
For a linkset based on euclidean distances, the :python:`cost_raster` argument is not used. When, instead, a resistance surface is used, it needs to be provided as a raster file, as indicated in the example.
Moreover, a threshold can be set, to limit the distance for which links are calculated. This may be necessary when dealing with large sets of habitat patches in order to limit computing time.
Finally, we create a graph and save the project.
Loading an existing project
+++++++++++++++++++++++++++
Graphab4py can load existing Graphab projects (\*.xml). However, it also has its own format (\*.g4p) to save and load projects.
.. code-block:: python
import graphab4py
prj = graphab4py.Project()
prj.load_project_xml("/home/rca/prj/MyProject/MyProject.g4p")
prj.enable_distance_conversion(
save_plot = "/home/rca/out/Distance_conversion.png", max_euc = 2200
)
prj.convert_distance(500, regression = "log")
out = prj.calculate_metric(metric = "EC", d = 1500, p = 0.05)
ec = out["metric_value"]
In this example, we load a project from a Graphab4py project file. Subsequently, we use the linkset that we have created in the previous step to establish a relationship between euclidean and cost distance.
We can set limits to the euclidean distance considered for fitting the model, in order to fit the model to a relevant interval of our data.
When :python:`save_plot` is set to a valid path, a figure is created, so we can inspect the relationship and decide whether we want to use the respective regression mode.
By default, a linear regression is forced through zero. We decided that in our case, a log-log regression might give better results.
We can use the :python:`convert_distance` function directly to establish a relationship and return an estimation for a distance translation.
If no relationship for the given distance interval and regression model has been established so far, the method will internally call :python:`enable_distance_conversion` and pass the required arguments.
Note that changing the distance interval will overwrite any previously fit model for the same linkset and model type.
In the last line, we calculate the metric "equivalent connectivity" (EC) for the entire graph. This metric requires additional parameters :python:`d` and :python:`p`.
Other metrics might not require additional parameters. A list of all the available metrics and their parameters and properties can be viewed in the original `Graphab manual <https://sourcesup.renater.fr/www/graphab/en/documentation.html>`_.
=======
License
=======
This is free and unencumbered software released into the public domain, as declared in the `LICENSE <https://github.com/ManuelPopp/graphab4py/blob/main/LICENSE>`_ file.
</code></pre>
<p>and it was used in <code>setup.py</code> for the argument <code>long_description</code> with <code>open()</code>. I replaced this with an empty string, but then <code>twine check dist/*</code> returned:</p>
<pre><code>
Checking dist\graphab4py-1.0.4-py3-none-any.whl: FAILED
ERROR `long_description` has syntax errors in markup and would not be rendered on PyPI.
line 7: Warning: "raw" directive disabled.
Checking dist\graphab4py-1.0.4.tar.gz: FAILED
ERROR `long_description` has syntax errors in markup and would not be rendered on PyPI.
line 7: Warning: "raw" directive disabled.
</code></pre>
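One possible direction (a sketch, assuming the `.. raw:: html` block is the only part PyPI's renderer rejects — which is what the "raw" directive disabled warning suggests): filter that directive out of the README text before passing it to `long_description` in `setup.py`, instead of replacing the whole description with an empty string.

```python
def strip_raw_html(rst: str) -> str:
    """Drop `.. raw::` directives and their indented bodies from an RST
    string; PyPI's renderer refuses them ("raw" directive disabled)."""
    out, skipping = [], False
    for line in rst.splitlines():
        if line.strip().startswith(".. raw::"):
            skipping = True
            continue
        if skipping:
            # the directive body is any following indented or blank line
            if line.strip() == "" or line.startswith((" ", "\t")):
                continue
            skipping = False
        out.append(line)
    return "\n".join(out)

readme = """Intro
.. raw:: html

   <p align="center">
   <img src="logo.png">
   </p>

After"""
print(strip_raw_html(readme))
```

The rest of the README stays intact, so `twine check` would only be validating the parts PyPI can actually render.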
|
<python><python-sphinx><restructuredtext><twine>
|
2023-11-29 16:05:28
| 1
| 1,215
|
Manuel Popp
|
77,572,731
| 11,578,996
|
Why is the linewidth inconsistent for this animated LineCollection?
|
<p>This animation is supposed to show particles in motion with the leading component having alpha=1 and lw=4, with a tail of reducing lw and alpha. The alpha works, the lw is <strong>inconsistent</strong> even for the same particle - sometimes the leading edge is fat and the tail thin, sometimes the obverse is true. Why?!</p>
<p>Note - I'm actually modelling traffic around a network, but I thought this multiple-particle random walk would be a good example. For my use case the tail will be fairly crucial for the clarity and visibility of the final graphic.</p>
<h4>Fake data summary values</h4>
<pre><code>import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import matplotlib.animation as animation
from matplotlib.collections import LineCollection
from matplotlib import cm
from functools import partial
np.random.seed(42) # to match my example
n_particles = 50 # number of particles to simulate
n_frames = 500 # number of frames to simulate
fade_frames = 5 # number of frames to fade out the animated plots for
# set up particle stating points & ids
particle_ids = [f"p{i}" for i in range(1, n_particles+1)]
starting_x = np.random.uniform(40, 60, n_particles)
starting_y = np.random.uniform(40, 60, n_particles)
# randomly assign start & end lives of points
start_frames = np.random.randint(0, n_frames/2, n_particles)
end_frames = start_frames + np.random.randint(n_frames/4, n_frames/2, n_particles)
colours = np.random.choice(range(10), n_particles) # Range 10 to match colours in the "tab10" colourmap
particles = pd.DataFrame({"id": particle_ids, "starting_x": starting_x, "starting_y": starting_y, "start_frames": start_frames, "end_frames": end_frames, "colour":colours})#.sort_values("start_frames").reset_index(drop=True)
</code></pre>
<h4>Inflate data to get positions of all particles in each frame</h4>
<p>My true data looks like this:</p>
<pre><code>particle_ids = []
x_loc = []
y_loc = []
frames = []
colours = []
for id in particles.id:
start_frame = particles.loc[particles.id == id, "start_frames"].values[0]
end_frame = particles.loc[particles.id == id, "end_frames"].values[0]
colour = particles.loc[particles.id == id, "colour"].values[0]
particle_ids.append(id)
colours.append(colour)
x_loc.append(particles.loc[particles.id == id, "starting_x"].values[0])
y_loc.append(particles.loc[particles.id == id, "starting_y"].values[0])
frames.append(start_frame)
for frame in range(start_frame+1, end_frame+1):
particle_ids.append(id)
colours.append(colour)
x_loc.append(x_loc[-1] + np.random.uniform(-2, 2))
y_loc.append(y_loc[-1] + np.random.uniform(-2, 2))
frames.append(frame)
movements = (pd.DataFrame({
"id": particle_ids,
"x_loc": x_loc,
"y_loc": y_loc,
"frame": frames,
"colour":colours})
.sort_values("frame")
.reset_index(drop=True))
# replace colour code with rgb values. Set alpha during animation
cmap = dict(zip(range(10), cm.tab10.colors))
movements['colour'] = movements.colour.map(cmap)
</code></pre>
<h4>Run the animation</h4>
<pre><code># Set up plot
xmin = movements.x_loc.min()
xmax = movements.x_loc.max()
ymin = movements.y_loc.min()
ymax = movements.y_loc.max()
fig, ax = plt.subplots(figsize=(10, 10))
ax.set_xlim(0,100)
ax.set_ylim(0, 100)
lines = LineCollection([], ) # empty container to update later
ax.add_collection(lines)
def animate(frame, lines):
# Get data for the current frame
start_frame = frame - fade_frames
if frame >= movements.frame.max():
end_frame = movements.frame.max()
else:
end_frame = frame
if start_frame <= 0:
start_frame = 0
end_frame = 1
all_segments = []
all_colours = []
all_linewidths = []
frame_slice = movements[(start_frame<=movements.frame) & (movements.frame<=end_frame)]
# plot LineCollection for each particle separately
for id, grp in frame_slice.groupby('id'):
coords = list(zip(grp.x_loc.tolist(), grp.y_loc.tolist()))
all_segments.extend([(coords[i-1], coords[i]) for i in range(1, len(coords))])
colours = grp.colour.tolist()[:-1]
fade_values = [(frame-start_frame)/fade_frames for frame in grp.frame]
rgba = [(r,g,b,a) for (r,g,b), a in zip(colours, fade_values)]
all_colours.extend(rgba)
all_linewidths.extend([4*val for val in fade_values])
ax.draw_artist(lines)
lines.set_segments(all_segments)
lines.set_color(all_colours)
lines.set_linewidth(all_linewidths)
return lines,
# Run animation
ani = animation.FuncAnimation(
fig,
partial(animate, lines=lines),
frames=n_frames,
interval=100,
repeat_delay=200,
blit=True)
ani.save('test.gif')
plt.show()
</code></pre>
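One discrepancy worth checking in the `animate` loop above (sketched here with toy data, as a hypothesis rather than a certain diagnosis): per particle, the segment list gains `len(coords) - 1` entries and the colours are trimmed with `[:-1]` to match, but `fade_values` — and therefore the linewidths — keep all `len(coords)` entries. Since matplotlib cycles a property sequence whose length doesn't match the number of segments, widths could end up attached to the wrong segments.

```python
# Toy data mirroring the per-group logic in animate() above.
coords = [(0, 0), (1, 1), (2, 0), (3, 1)]
frames = [10, 11, 12, 13]
fade_frames, start_frame = 5, 10

segments = [(coords[i - 1], coords[i]) for i in range(1, len(coords))]
colours = [(0.1, 0.2, 0.3)] * len(coords)
fade_values = [(f - start_frame) / fade_frames for f in frames]

# colours are sliced to match the segments, linewidths are not
rgba = [(r, g, b, a) for (r, g, b), a in zip(colours[:-1], fade_values)]
linewidths = [4 * v for v in fade_values]  # note: NOT sliced with [:-1]

print(len(segments), len(rgba), len(linewidths))  # 3 3 4
```

Each particle contributes one extra linewidth entry, so the width list drifts one step further out of alignment per particle.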
<p><a href="https://i.sstatic.net/rUTvz.gif" rel="nofollow noreferrer"><img src="https://i.sstatic.net/rUTvz.gif" alt="particles on a random walk" /></a></p>
|
<python><matplotlib><matplotlib-animation>
|
2023-11-29 16:00:12
| 1
| 389
|
ciaran haines
|
77,572,706
| 2,977,092
|
Untoggle *all* legend items in an Altair chart in a Jupyter notebook
|
<p>I'm trying to create a chart with Altair in a Jupyter notebook. It's basically a line-chart of various currency values over time. I included my code below.</p>
<p>I can toggle currencies using the legend, and all is cool. I have one annoyance though: once I untoggle the last currency, all <strong>lines</strong> will be <strong>disabled</strong> (greyed out) while all <strong>legend items</strong> become <strong>enabled</strong>. I'd rather have all legend items greyed out as well once I untoggle the last currency, but I cannot figure out how to do it.</p>
<pre><code>import altair as alt
import pandas as pd
import numpy as np
from datetime import datetime, timedelta
# Generate sample data with timestamps and numeric values for currencies
np.random.seed(42)
num_points = 50
date_today = datetime.now()
date_list = [date_today - timedelta(days=x) for x in range(num_points)]
df = pd.DataFrame({
'Timestamp': date_list * 3, # Three currencies
'Currency': ['USD'] * num_points + ['EUR'] * num_points + ['GBP'] * num_points,
'Value': np.random.randint(30, 70, size=num_points * 3) # Random values
})
# Enable Altair to render charts in the notebook
alt.renderers.enable('default')
# Define a selection for the legend with toggle enabled
selection = alt.selection_point(fields=['Currency'], bind='legend', toggle='true', empty=False)
# Define the chart with the selection
chart = alt.Chart(df).mark_line().encode(
x='Timestamp:T',
y='Value:Q',
color=alt.condition(selection, 'Currency:N', alt.value('#F0F0F0')),
tooltip=['Timestamp:T', 'Currency:N', 'Value:Q']
).properties(
width=800,
height=400,
title='Numeric Values Over Time per Currency'
).add_params(
selection
)
# Show the interactive chart in the notebook
chart
</code></pre>
|
<python><jupyter-notebook><toggle><legend><altair>
|
2023-11-29 15:55:49
| 0
| 739
|
luukburger
|
77,572,395
| 18,744,117
|
How to specify f-bounded polymorphism in python typing ( i.e. refering to the type of a subclass )
|
<p>A simple example of what I want to achieve:</p>
<pre class="lang-py prettyprint-override"><code>from typing import TypeVar, Generic
from abc import abstractmethod
T = TypeVar( "T", bound="Copyable['T']" )
class Copyable( Generic[T] ):
@abstractmethod
def Copy( self ) -> T:
pass
class Example( Copyable[ 'Example' ] ):
def Copy( self ) -> 'Example':
return Example()
</code></pre>
<p>However,</p>
<p>Pylance complains with: <code>TypeVar bound type cannot be generic</code></p>
<p>MyPy complains with: <code>error: Type variable "script.example.T" is unbound</code></p>
<p>Any ideas of how I could achieve this.</p>
<p><strong>Extra details</strong></p>
<p>The following is not a solution as it removes information. After copying an object of type <code>Example</code> its type would be reduced to <code>Copyable</code>.</p>
<pre class="lang-py prettyprint-override"><code>class Copyable:
@abstractmethod
def Copy( self ) -> Copyable:
pass
class Example( Copyable ):
def Copy( self ) -> Copyable:
return Example()
</code></pre>
<p><strong>Context</strong></p>
<p>I would like to use this for a class which takes callbacks whose first argument is the class it was given to.
This class can be subclassed, meaning if the callback is of the type <code>Callback[ [ Baseclass ], None ]</code> then it will not have access ( without casting ) to methods of the derived class.</p>
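For illustration (not necessarily the only route): `typing.Self` (PEP 673, Python 3.11+, backported in `typing_extensions`) expresses exactly this "type of the current subclass" idea without a generic-bounded TypeVar. A sketch, with a plain-TypeVar fallback for interpreters where neither import is available:

```python
from abc import ABC, abstractmethod

try:  # Python 3.11+
    from typing import Self
except ImportError:
    try:  # backport for older interpreters
        from typing_extensions import Self
    except ImportError:  # last-resort runtime stand-in
        from typing import TypeVar
        Self = TypeVar("Self", bound="Copyable")

class Copyable(ABC):
    @abstractmethod
    def Copy(self) -> Self:
        """Subclasses return their own type, not just Copyable."""

class Example(Copyable):
    def Copy(self) -> "Example":
        return Example()

copied = Example().Copy()
print(type(copied).__name__)  # Example
```

With `Self`, `Example().Copy()` is inferred as `Example` by mypy/Pylance, so no type information is lost on copying.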
|
<python><mypy><typing><f-bounded-polymorphism>
|
2023-11-29 15:16:32
| 1
| 683
|
Sam Coutteau
|
77,572,223
| 8,037,521
|
Forbid usage of defaults channel in Conda
|
<p>This one is getting a bit annoying for me: <code>defaults</code> channel is apparently paid and cannot be used commercially. Still, it is very difficult to get rid of it, as it keeps randomly re-appearing.</p>
<p>I made an environment, installing everything from <code>conda-forge</code>, and exported that environment to <code>environment.yml</code>. Now, I want this <code>environment.yml</code> to be shareable, even to colleagues who have no idea about differences between <code>conda-forge</code> and <code>defaults</code> channel. The top of my <code>environment.yml</code>:</p>
<pre><code>name: py38
channels:
- conda-forge
dependencies:
- _libgcc_mutex=0.1=conda_forge
</code></pre>
<p>However, if I now copy this <code>environment.yml</code> somewhere else and run <code>conda env create -f environment.yml</code>, I see this:</p>
<pre><code>Channels:
- conda-forge
- defaults
</code></pre>
<p>Is there a way to specify NOT using <code>defaults</code> channel already from <code>environment.yml</code>? I know you can do it in <code>.condarc</code> when you already have environment but that is not really the point... I want to avoid accidentally usage of <code>defaults</code> directly from the <code>environment.yml</code>.</p>
|
<python><conda>
|
2023-11-29 14:53:49
| 1
| 1,277
|
Valeria
|
77,572,220
| 9,318,323
|
Execute statement with DB-API style bind params
|
<p>I am learning sqlalchemy and want to delete rows that are older than X number of days counting from today.
When I try this:</p>
<pre><code>from datetime import datetime, timedelta
import sqlalchemy
db_con_string = 'Driver={ODBC Driver 17 for SQL Server};Server=tcp...'
connection_url = sqlalchemy.engine.URL.create("mssql+pyodbc",
query={"odbc_connect": db_con_string})
engine = sqlalchemy.create_engine(connection_url)
with engine.begin() as sql_conn:
command = 'delete myschema.Logs where [DateTimeSent] < ?'
params = ((datetime.utcnow() + timedelta(days=-90)), )
sql_conn.execute(sqlalchemy.sql.text(command), params)
</code></pre>
<p>I get: <code>sqlalchemy.exc.ArgumentError: List argument must consist only of tuples or dictionaries</code></p>
<p>My <code>params</code> is clearly a tuple so I am thinking it's something to do with datetime. How can I fix this error?</p>
<p>I use SqlAlchemy 2.0.23</p>
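For what it's worth, SQLAlchemy's `text()` construct expects named placeholders (`:name`) bound from a dict — e.g. `text("delete myschema.Logs where [DateTimeSent] < :cutoff")` executed with `{"cutoff": ...}` — rather than DB-API `?` placeholders with a tuple. A minimal sketch of that named-parameter style, shown with stdlib sqlite3 so it runs without a SQL Server (table and column names here are illustrative only):

```python
import sqlite3
from datetime import datetime, timedelta, timezone

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE logs (sent TEXT)")

# Four rows: 1, 50, 100 and 200 days old (stored as ISO strings).
now = datetime(2023, 11, 29, tzinfo=timezone.utc)
rows = [(now - timedelta(days=d)).isoformat() for d in (1, 50, 100, 200)]
conn.executemany("INSERT INTO logs VALUES (?)", [(r,) for r in rows])

# Named-parameter binding: ":cutoff" placeholder, dict of values.
cutoff = (now - timedelta(days=90)).isoformat()
conn.execute("DELETE FROM logs WHERE sent < :cutoff", {"cutoff": cutoff})

remaining = [r[0] for r in conn.execute("SELECT sent FROM logs")]
print(len(remaining))  # 2  (the 100- and 200-day-old rows were deleted)
```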
|
<python><sql-server><sqlalchemy>
|
2023-11-29 14:53:41
| 1
| 354
|
Vitamin C
|
77,572,077
| 6,619,692
|
Using SETUPTOOLS_SCM_PRETEND_VERSION for package version inside Docker with .git directory in dockerignore
|
<p>I'm using <a href="https://setuptools-scm.readthedocs.io/en/latest/usage/" rel="noreferrer">setuptools scm</a> to dynamically provide version numbers for a Python package, and have these lines in the pyproject.toml:</p>
<pre><code>...
dynamic = ["dependencies", "version", "readme"]
[tool.setuptools]
packages = ["my_package"]
[tool.setuptools_scm]
...
</code></pre>
<p>When I try to install the package inside Docker, the process fails because I have included the <em>.git</em> directory in the .dockerignore.</p>
<p>Here is my Dockerfile:</p>
<pre><code>FROM pytorch/pytorch:2.0.1-cuda11.7-cudnn8-runtime
RUN apt-get update && \
DEBIAN_FRONTEND=noninteractive \
apt-get install --no-install-recommends -y \
build-essential curl wget git sox ffmpeg libsndfile1 zip unzip mandoc groff
RUN curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip" && \
unzip awscliv2.zip && \
./aws/install && \
rm awscliv2.zip
WORKDIR /workspace
COPY requirements.txt .
RUN pip install --root-user-action=ignore --no-cache-dir --no-deps -r requirements.txt
COPY . .
RUN ./scripts/install_custom_requirements.sh
ARG VERSION
ENV VERSION=$VERSION
RUN SETUPTOOLS_SCM_PRETEND_VERSION=$(python -m setuptools_scm) pip install --root-user-action=ignore --no-cache-dir --no-deps .
</code></pre>
<p>For clarity, this is the line I have in my .dockerignore:</p>
<pre><code>**/.git
</code></pre>
<p>When I comment out the line, my Docker build works fine, but I don't want to copy <em>everything</em> across when I do <code>COPY . .</code> in the Dockerfile.</p>
<p>How can I keep the <code>**/.git</code> in my .dockerignore whilst getting setuptools scm to provide the version dynamically?</p>
<p>The <a href="https://setuptools-scm.readthedocs.io/en/latest/usage/" rel="noreferrer">documentation (see section "with Docker / Podman")</a> says:</p>
<blockquote>
<p>To avoid BuildKit and mounting of the .git folder altogether, one can also pass the desired version as a build argument. Note that SETUPTOOLS_SCM_PRETEND_VERSION_FOR_${NORMALIZED_DIST_NAME} is preferred over SETUPTOOLS_SCM_PRETEND_VERSION.</p>
</blockquote>
<p>But when I tried to pass this environment variable, the build failed. Here is what I tried:</p>
<pre class="lang-bash prettyprint-override"><code>#!/usr/bin/env bash
set -e
source vars.env
VERSION=$(git describe --tags --dirty --always)
git submodule update --init --recursive --progress
docker build \
--progress=plain \
--build-arg "VERSION=${VERSION}" \
-t "${DOCKER_IMAGE_NAME}:${VERSION}" \
.
</code></pre>
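If the version is handed in as the `VERSION` build-arg, as in the script above, one hedged simplification is to feed it straight into setuptools-scm's pretend variable instead of invoking `python -m setuptools_scm` inside the image (which is what needs the excluded `.git` directory):

```dockerfile
ARG VERSION
# Assumes VERSION is already a PEP 440-compatible string; raw
# `git describe` output (e.g. "v1.2.3-4-gabcdef-dirty") may need
# normalising before setuptools-scm accepts it.
RUN SETUPTOOLS_SCM_PRETEND_VERSION=${VERSION} \
    pip install --root-user-action=ignore --no-cache-dir --no-deps .
```

That keeps `**/.git` in `.dockerignore`, since the version is computed on the host and never derived inside the container.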
|
<python><docker><version-control><setuptools>
|
2023-11-29 14:33:00
| 1
| 1,459
|
Anil
|
77,572,009
| 16,674,436
|
Plotting the frequency of number of comments per day over time
|
<p>In Python, I have a data frame (<code>gme_interaction</code>) with three columns. One column is the time (<code>created</code>) when a post was commented. The second column is the number of comments the posts received (<code>num_comments</code>), and the third column is the actual comment.</p>
<p>I want a graph that plots the frequency of comments per day over time. How would you go about that?</p>
<p>Sample Data frame that like the actual one:</p>
<pre><code>df = pd.DataFrame([["2021-01-01 00:02:06", "text", 4], ["2021-01-01 00:03:20", "[removed]", 6], ["2021-01-01 00:04:11", "text", 10]],
columns=['created', 'comment', 'num_comments'])
</code></pre>
<p>The <code>created</code> column is of <code>datetime64[ns]</code> type, however. (I'm just not sure how to give it as such in the data frame.)</p>
<p>So far I have tried this, but I'm not sure that it's displaying what I really want.</p>
<pre><code>gme_comments_per_day = gme_interaction.groupby(gme_interaction['created'])['num_comments'].sum()
### Plotting
plt.figure(figsize=(20, 12))
sns.barplot(x='created', y='num_comments', data=gme_comments_per_day)
plt.title('Title')
plt.xlabel('Time')
plt.ylabel('Total Number of Comments')
# Format x-axis labels
plt.gca().xaxis.set_major_locator(mdates.DayLocator()) # Set major ticks at days
plt.gca().xaxis.set_major_formatter(mdates.DateFormatter('%d')) # Display day number
# Set minor ticks at 6-hour intervals (adjust for your specific intervals)
plt.gca().xaxis.set_minor_formatter(mdates.DateFormatter('%I%p')) # Display time intervals
#plt.xticks(rotation=50) # Rotate x-axis labels for better readability
plt.tight_layout()
plt.savefig('Evolution of Interaction in WSB during GME.pdf')
plt.show()
</code></pre>
<p>My plot looks like that, but I'm suspicious of the drop between the 26th and 27th-ish:
<a href="https://i.sstatic.net/LJPY1.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/LJPY1.png" alt="enter image description here" /></a></p>
<p>What I'm also suspicious about is the way the days are plotted; they feel a bit off.</p>
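One possible cause of the odd spacing (a sketch with toy data, not a diagnosis of the full dataset): grouping by the raw <code>created</code> timestamp produces one bar per second-level timestamp, not per day. Normalising to the date first gives true daily totals:

```python
import pandas as pd

df = pd.DataFrame(
    [["2021-01-01 00:02:06", "text", 4],
     ["2021-01-01 00:03:20", "[removed]", 6],
     ["2021-01-02 00:04:11", "text", 10]],
    columns=["created", "comment", "num_comments"],
)
df["created"] = pd.to_datetime(df["created"])

# One group per calendar day, not per raw timestamp.
per_day = df.groupby(df["created"].dt.date)["num_comments"].sum()
print(per_day.tolist())  # [10, 10]
```

A daily series like this also makes the x-axis tick setup simpler, since each bar really is one day.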
|
<python><matplotlib><plot><graph>
|
2023-11-29 14:24:52
| 0
| 341
|
Louis
|
77,571,915
| 5,786,649
|
How are dimensions handled in pytorch's tensors?
|
<p>How does the <code>size</code> parameter behave in creation of tensors?
How does the <code>axis</code> parameter behave in methods like <code>torch.Tensor.sum()</code> and <code>torch.Tensor.softmax()</code>?</p>
<p>I would like to give a good overview of how <code>pytorch.Tensor</code> instances handle dimensions, both in creation and aggregation.</p>
<p><em>Note: I have seen posts explaining some aspects of this, like <a href="https://stackoverflow.com/questions/43328632/pytorch-reshape-tensor-dimension">reshaping</a>. I would like to give a more fundamental explanation, such as I would have expected to find on the <a href="https://pytorch.org/docs/stable/index.html" rel="nofollow noreferrer">official pytorch-docs</a>, but couldn't find. All other explanations I have found were hosted on Medium and similar sites.</em></p>
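As far as I know, torch follows NumPy's axis convention: `size` lists one extent per dimension, and reducing along dimension `k` (called `dim` in torch, `axis` in NumPy) removes that dimension from the shape. A NumPy sketch of the rule — the shapes carry over one-to-one to `torch.Tensor`:

```python
import numpy as np

t = np.arange(24).reshape(2, 3, 4)   # "size" (2, 3, 4)
print(t.shape)                        # (2, 3, 4)
print(t.sum(axis=0).shape)            # (3, 4)  -> axis 0 collapsed
print(t.sum(axis=1).shape)            # (2, 4)  -> axis 1 collapsed
print(t.sum(axis=-1).shape)           # (2, 3)  -> negative axes count from the end
```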
|
<python><pytorch><tensor>
|
2023-11-29 14:15:08
| 1
| 543
|
Lukas
|
77,571,877
| 1,648,712
|
How can I open a specific database on Google Firestore in Python?
|
<p>I am using Firebase and setting/retrieving documents from Firestore with this code:</p>
<pre><code>import firebase_admin
from firebase_admin import credentials, firestore
cred = credentials.ApplicationDefault()
firebase_admin.initialize_app(cred, options={"projectId": "huq-jimbo"})
firestore_client = firestore.client()
doc = firestore_client.collection(f"example_collection").document("data")
print(doc)
</code></pre>
<p>I can only access the <code>(default)</code> database, and I can't see any way to open a different database.</p>
<p>Looking through the docs (e.g. <a href="https://firebase.google.com/docs/firestore/quickstart" rel="nofollow noreferrer">https://firebase.google.com/docs/firestore/quickstart</a>) it seems there's no <code>database</code> parameter for <code>firestore.client()</code>.</p>
<p>How can I achieve this?</p>
|
<python><firebase><google-cloud-platform><google-cloud-firestore>
|
2023-11-29 14:11:13
| 2
| 414
|
jimbofreedman
|
77,571,796
| 595,870
|
How to create singleton object, which could be used both as type and value (similar to None)?
|
<p>I'd like to have a custom NotSet value which could be used in type hints similarly to None.</p>
<pre class="lang-py prettyprint-override"><code>NotSet = ?
class Client:
def partial_update(
self,
obj_id: int,
obj_field: int | None | NotSet = NotSet,
...
):
# Do not update fields which were not specified explicitly.
if obj_field is NotSet:
# do not update obj_field
else:
# update obj_field
...
</code></pre>
<p><strong>Not good enough solutions:</strong></p>
<ol>
<li>Use <code>None</code>.</li>
</ol>
<ul>
<li>None itself is reserved for business logic. Fields can be nullable.</li>
</ul>
<ol start="2">
<li>Use built-in <code>ellipsis</code>:</li>
</ol>
<pre><code>from types import EllipsisType
def partial_update(
obj_field: int | None | EllipsisType = ...,
):
pass
</code></pre>
<ul>
<li>Using built-in Ellipsis is not explicit enough</li>
<li>You cannot use ellipsis as type either: <code>obj_field: int | None | ... = ...,</code></li>
</ul>
<ol start="3">
<li>Use custom singleton.</li>
</ol>
<pre><code>class NotSetType:
_instance = None
def __new__(cls, *args, **kwargs):
if cls._instance is None:
cls._instance = super().__new__(cls, *args, **kwargs)
return cls._instance
NotSet = NotSetType()
def partial_update(
obj_field: int | None | NotSetType = NotSet,
):
pass
</code></pre>
<ul>
<li>The best one, but still NotSetType is used instead of desired NotSet for type hinting.</li>
</ul>
|
<python><python-typing>
|
2023-11-29 13:59:56
| 2
| 6,078
|
Mikhail M.
|
77,571,576
| 4,436,572
|
performance improvement on numpy ndarray columnar calculation (row reduction)
|
<p>I'm doing row reduction on a 3-dimensional ndarray (KxMxN), i.e., taking all values of a column and using a reduce function to produce a scalar value; eventually a KxMxN matrix would become a 2-dimensional ndarray of order KxN.</p>
<p>What I am trying to solve:</p>
<ol>
<li>Take in a 3D-matrix and split it into 16 sub-matrices (or any even number of splits larger than 2, 16 is the optimal one);</li>
<li>Generate all combinations by selecting half the sub-matrices (8) from the 16 pieces and join these 8 sub-matrices into one matrix; therefore there's one matrix for every combination;</li>
<li>Calculate a scalar value from reduce function for each column of the new matrix, for all combinations, ideally at once.</li>
</ol>
<p>There's more implementation details, I'll explain along the way.</p>
<p>The 3-D ndarray is of float numbers.</p>
<p>In following example, <code>njit</code> with <code>numpy</code> is the best I could get for the moment. I am wondering if there's any room for further improvement, from any angle.</p>
<p><code>cupy</code> (GPU parallelization), <code>dask</code> (CPU parallelization) and <code>numba</code> parallelization all failed to beat the following (my use case apparently is way too insignificant to leverage the power of the GPU, and I've only got an 8 GB GPU). There's a good chance that these tools can be used in a much more advanced way that I don't know about.</p>
<pre class="lang-py prettyprint-override"><code>from numba import njit, guvectorize, float64, int64
from math import sqrt
import numba as nb
import numpy as np
import itertools
# Create a 2D ndarray
m = np.random.rand(800,100)
# Reshape it into a list of sub-matrices
mr = m.reshape(16,50,100)
# Create an indices matrix from combinatorics
# a typical one for me "select 8 from 16", 12870 combinations
# I do have a custom combination generator, but this is not what I wanted to optimise and itertools really has done a decent job already.
x = np.array( list(itertools.combinations(np.arange(16),8)) )
# Now we are going to select 8 sub-matrices from `mr` and reshape them to become one bigger sub-matrix; we do this in list comprehension.
# This is the matrix we are going to reduce.
# Bottleneck 1: This line takes the longest and I'd hope to improve on this line, but I am not sure there's much we could do here.
m3d = np.array([mr[idx_arr].reshape(400,100) for idx_arr in x])
# We create different versions of the same reduce function.
# Bottleneck 2: The reduce function is another place I'd want to improve on.
# col - column values
# days - trading days in a year
# rf - risk free rate
# njit version with instance function `mean`, `std`, and python `sqrt`
@njit
def nb_sr(col, days, rf):
mean = (col.mean() * days) - rf
std = col.std() * sqrt(days)
return mean / std
# njit version with numpy
@njit
def nb_sr_np(col, days, rf):
mean = (np.mean(col) * days) -rf
std = np.std(col) * np.sqrt(days)
return mean / std
# guvectorize with numpy
@guvectorize([(float64[:],int64,float64,float64[:])], '(n),(),()->()', nopython=True)
def gu_sr_np(col,days,rf,res):
mean = (np.mean(col) * days) - rf
std = np.std(col) * np.sqrt(days)
res[0] = mean / std
# We wrap them such that they can be applied on 2-D matrix with list comprehension.
# Bottleneck 3: I was thinking to probably vectorize this wrapper, but the closest I can get is list comprehension, which isn't really vectorization.
def nb_sr_wrapper(m2d):
return [nb_sr(r, 252, .25) for r in m2d.T]
def nb_sr_np_wrapper(m2d):
return [nb_sr_np(r, 252, .25) for r in m2d.T]
def gu_sr_np_wrapper(m2d):
return [gu_sr_np(r, 252, .25) for r in m2d.T]
# Finally! here's our performance benchmarking step.
%timeit np.array( [nb_sr_wrapper(m) for m in m3d.T] )
# output: 4.26 s ± 3.67 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
%timeit np.array( [nb_sr_np_wrapper(m) for m in m3d.T] )
# output: 4.33 s ± 26.1 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
%timeit np.array( [gu_sr_np_wrapper(m) for m in m3d.T] )
# output: 6.06 s ± 11.7 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
</code></pre>
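For what it's worth, the per-column loop itself can be collapsed: `np.mean` and `np.std` take an `axis` argument, so the whole KxMxN reduction is a single expression. A sketch assuming the same Sharpe-style formula as above (smaller array sizes here, purely for illustration):

```python
import numpy as np

def sharpe_all(m3d, days=252, rf=0.25):
    # Reduce over the row axis (axis=1) of a (K, M, N) stack in one shot.
    mean = m3d.mean(axis=1) * days - rf
    std = m3d.std(axis=1) * np.sqrt(days)
    return mean / std  # shape (K, N): one scalar per column per combination

rng = np.random.default_rng(0)
m3d = rng.random((5, 40, 7))
res = sharpe_all(m3d)
print(res.shape)  # (5, 7)
```

This removes both wrapper layers of list comprehension, leaving numba/guvectorize with nothing to do for the reduction step.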
|
<python><performance><vectorization><numpy-ndarray><numba>
|
2023-11-29 13:30:51
| 2
| 1,288
|
stucash
|
77,571,555
| 6,367,851
|
Handle GO in pyodbc
|
<p>I wrote a Python script that uses <code>pyodbc</code> (via SQLAlchemy) to execute T-SQL files. Some files are littered with <code>GO</code> statements:</p>
<pre class="lang-sql prettyprint-override"><code>-- query.sql
USE db_name
GO
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
CREATE OR ALTER VIEW ...
</code></pre>
<p>How I execute the file:</p>
<pre class="lang-py prettyprint-override"><code>from sqlalchemy import create_engine, text
with open("query.sql") as fp:
query = text(fp.read())
engine = create_engine("mssql+pyodbc://...")
with engine.begin() as conn:
conn.execute(query)
</code></pre>
<p>I cannot automatically remove the <code>GO</code> lines from my script since <code>CREATE OR ALTER VIEW</code> must be the first statement in a batch. How should I execute these files?</p>
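A common workaround (a sketch; <code>GO</code> is a batch separator understood by client tools such as sqlcmd/SSMS, not by the server, so it can never be sent through the driver) is to split the script on lines consisting only of <code>GO</code> and execute each resulting batch separately — which also keeps <code>CREATE OR ALTER VIEW</code> first in its own batch:

```python
import re

def split_batches(script: str):
    """Split a T-SQL script on lines that contain only GO
    (case-insensitive, optional whitespace) and drop empty batches."""
    parts = re.split(r"(?im)^\s*GO\s*;?\s*$", script)
    return [p.strip() for p in parts if p.strip()]

script = """USE db_name
GO
SET ANSI_NULLS ON
GO
CREATE OR ALTER VIEW v AS SELECT 1 AS x
"""
for batch in split_batches(script):
    print(repr(batch))
# each batch would then be passed to conn.execute(text(batch)) in turn
```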
|
<python><sql-server><sqlalchemy>
|
2023-11-29 13:27:44
| 0
| 2,152
|
Mike Henderson
|
77,571,552
| 5,897,305
|
pyCharm warning in django code: Unresolved attribute reference 'user' for 'WSGIRequest'
|
<p>In my brand-new PyCharm Professional Edition I get the warning "<em>Unresolved attribute reference 'user' for 'WSGIRequest'</em>". The warning occurs in the first line of <code>MyClass.post()</code> (see code below).</p>
<pre><code>from django.views import generic
from django.db import models
from django.http.request import HttpRequest
from django.shortcuts import render
from django.contrib.auth.models import AbstractUser
class CustomUser( AbstractUser ):
emailname = models.CharField()
class Invoice( models.Model ):
posted = models.DateField()
class BaseView ( generic.DetailView ):
model = Invoice
def __int__( self, *args, **kwargs ):
super().__init__( *args, **kwargs )
# some more code
class MyClass( BaseView ):
def __int__( self, *args, **kwargs ):
super().__init__( *args, **kwargs )
# some more code
def post( self, request: HttpRequest, *args, **kwargs ):
context = { 'curr_user': request.user, } # <-- Unresolved attribute reference 'user' for class 'WSGIRequest'
# some more code
html_page = 'mypage.html'
return render( request, html_page, context )
</code></pre>
<p><code>user</code> is a <code>CustomUser</code> object. Debugging shows that <code>user</code> is a known attribute of <code>CustomUser</code> and the code works well; 'curr_user' in <code>context</code> is the expected one. Other attributes of <code>request</code>, like <code>request.path</code> or <code>request.REQUEST</code>, do not trigger the warning. Django support is enabled.</p>
<p>I work with PyCharm 2023.2.5 (Professional Edition), Windows 11</p>
<p>Can anybody help? What is my mistake?</p>
<p>Regards</p>
|
<python><django><pycharm>
|
2023-11-29 13:27:26
| 1
| 713
|
Humbalan
|
77,571,482
| 8,110,650
|
Why am I receiving ResourceWarning when testing Flask app
|
<p>In my Flask application, which I am testing using <code>unittest</code>, when I use the <code>send_from_directory</code> function in my route I get a <code>ResourceWarning</code>.</p>
<p>This is the route:</p>
<pre class="lang-py prettyprint-override"><code>@app.route("/<filename>")
def file_content(filename):
    root = os.path.abspath(os.path.dirname(__file__))
    data_dir = os.path.join(root, "data")
    return send_from_directory(data_dir, filename)
</code></pre>
<p>And this is the test file:</p>
<pre class="lang-py prettyprint-override"><code>def test_viewing_text_document(self):
    # with self.client.get('/history.txt') as response:
    response = self.client.get('/history.txt')
    self.assertEqual(response.status_code, 200)
    self.assertEqual(response.content_type, "text/plain; charset=utf-8")
    self.assertIn("Python 0.9.0 (initial release) is released.", response.get_data(as_text=True))
</code></pre>
<p>The exact warning is:</p>
<pre class="lang-bash prettyprint-override"><code>./Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/unittest/case.py:579: ResourceWarning: unclosed file <_io.BufferedReader name='path/history.txt'>
if method() is not None:
ResourceWarning: Enable tracemalloc to get the object allocation traceback
</code></pre>
<p>If I use the <code>with</code> statement the warning disappears, but I thought <code>send_from_directory</code> automatically handles closing the file, so my question is: why do I need the <code>with</code> statement? If I just <code>render_template</code> in my route, the <code>with</code> statement in the test is not needed.</p>
|
<python><unit-testing><flask>
|
2023-11-29 13:16:38
| 1
| 919
|
SrdjaNo1
|
77,571,395
| 15,913,281
|
Filter Numpy Array for Values Greater than Preceding Value
|
<p>Given the following array, how can I filter it to produce a new array that contains the values that are less than the next value by at least 3?</p>
<p>In other words, I need to compare each value with its neighbour on the right and add it to a new array if that neighbour is higher by 3 or more.</p>
<pre><code>ex_arr = [1, 2, 3, 8, 9, 10, 12, 16, 17, 23]
desired_arr = [3, 12, 17]
</code></pre>
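<p>For reference, a minimal NumPy sketch of the comparison described above (keep each element whose right-hand neighbour is larger by at least 3); this one-liner is one possible approach, not the only one:</p>

```python
import numpy as np

ex_arr = np.array([1, 2, 3, 8, 9, 10, 12, 16, 17, 23])

# np.diff(ex_arr) gives ex_arr[i + 1] - ex_arr[i]; the boolean mask marks
# positions whose right-hand neighbour is at least 3 larger. The last
# element has no right-hand neighbour, hence the [:-1] slice.
desired_arr = ex_arr[:-1][np.diff(ex_arr) >= 3]
print(desired_arr.tolist())  # [3, 12, 17]
```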
|
<python><numpy>
|
2023-11-29 13:03:40
| 1
| 471
|
Robsmith
|
77,571,319
| 1,153,506
|
Annotate Django Foreign Key model instance
|
<p>Is it possible in Django to annotate a Foreign Key instance?</p>
<p>Suppose I have the following models:</p>
<pre><code>class BaseModel(models.Model):
    pass

class Foo(models.Model):
    base_model = models.ForeignKey('BaseModel', related_name='foos', on_delete=models.CASCADE)

class Bar(models.Model):
    base_model = models.ForeignKey('BaseModel', related_name='bars', on_delete=models.CASCADE)
</code></pre>
<p>I want to count the <code>Bar</code>s belonging to a <code>BaseModel</code> attached to a <code>Foo</code>, that is:</p>
<pre><code>foos = Foo.objects.all()
for foo in foos:
    foo.base_model.bars_count = foo.base_model.bars.count()
</code></pre>
<p>Is it possible in a single query? The following code is syntactically wrong:</p>
<pre><code>foos = Foo.objects.annotate(
    base_model.bars_count=Count('base_model__bars')
)
</code></pre>
<p>This one would perform that job in a single query:</p>
<pre><code>foos = Foo.objects.annotate(
    base_model_bars_count=Count('base_model__bars')
)
for foo in foos:
    foo.base_model.bars_count = foo.base_model_bars_count
</code></pre>
<p>Is there a way with a single query without the loop?</p>
|
<python><django><django-queryset><django-annotate><django-aggregation>
|
2023-11-29 12:53:47
| 1
| 1,987
|
andrea.ge
|
77,571,252
| 8,541,953
|
Change area calculation to km2 in ipyleaflet DrawControl
|
<p>When creating a DrawControl in ipyleaflet, I am able to draw a polygon in the map. An example below:</p>
<pre><code>from ipyleaflet import Map, basemaps, basemap_to_tiles, DrawControl

watercolor = basemap_to_tiles(basemaps.Stadia.StamenTerrain)

m = Map(layers=(watercolor, ), center=(50, 354), zoom=5)

draw_control = DrawControl()
draw_control.polyline = {
    "shapeOptions": {
        "color": "#6bc2e5",
        "weight": 8,
        "opacity": 1.0
    }
}
draw_control.polygon = {
    "shapeOptions": {
        "fillColor": "#6be5c3",
        "color": "#6be5c3",
        "fillOpacity": 1.0
    },
    "drawError": {
        "color": "#dd253b",
        "message": "Oups!"
    },
    "allowIntersection": False
}
draw_control.circle = {
    "shapeOptions": {
        "fillColor": "#efed69",
        "color": "#efed69",
        "fillOpacity": 1.0
    }
}
draw_control.rectangle = {
    "shapeOptions": {
        "fillColor": "#fca45d",
        "color": "#fca45d",
        "fillOpacity": 1.0
    }
}

m.add(draw_control)
m
</code></pre>
<p>When drawing it, the area of the drawn polygon appears in ha (hectares). Would it be possible to change it to km2? So far, I have not found any documentation on that possibility.</p>
<p><a href="https://i.sstatic.net/hh5aY.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/hh5aY.png" alt="enter image description here" /></a></p>
|
<python><leaflet><ipywidgets><ipyleaflet>
|
2023-11-29 12:41:35
| 1
| 1,103
|
GCGM
|
77,571,246
| 1,833,326
|
How to solve a Weighting Problem with pyspark
|
<p>I have a matrix A of dimension [nXm] like</p>
<pre><code>w1 w2 0
w3 0 w4
0 w5 0
</code></pre>
<p>a vector <code>b</code> with known value of size [mX1]:</p>
<pre><code>10
5
3
</code></pre>
<p>and a vector <code>c</code> of size [nX1]</p>
<pre><code>0
0
0
</code></pre>
<p>I now want to find a solution (i.e. values of w1, ..., w5) that fulfills the equation:</p>
<pre><code>A*b = c
</code></pre>
<p>using pyspark.</p>
|
<python><pyspark><model-fitting>
|
2023-11-29 12:40:39
| 1
| 1,018
|
Lazloo Xp
|
77,571,177
| 10,750,541
|
How to customize an upset graph with stacked bars (size, legend, combo order)
|
<p>I am trying to figure out in this example (<a href="https://upsetplot.readthedocs.io/en/stable/auto_examples/plot_discrete.html" rel="nofollow noreferrer">https://upsetplot.readthedocs.io/en/stable/auto_examples/plot_discrete.html</a>) how to:</p>
<ol>
<li>change the size of the figure</li>
<li>move the legend at the upper left corner</li>
<li>define the order of the data in the combination matrix</li>
</ol>
<p>Has anyone any idea how to perform the above tasks when the UpSet graph has stacked bars?</p>
<p><em>Additionally, has the plotly version of upset any "hue" feature?</em></p>
|
<python><upsetplot>
|
2023-11-29 12:31:29
| 0
| 532
|
Newbielp
|
77,571,120
| 607,846
|
Test function with default argument
|
<p>Lets say I wish to test this function:</p>
<pre><code>def get_user(user_id, adults_only=False):
    pass
</code></pre>
<p>where the implementation is not shown.</p>
<p>I would like to test this using parameterised expand, something like this:</p>
<pre><code>@parameterized.expand([(True, "none"), (False, "child")])
def test_get_child(self, adults_only, expected):
    child = self.child
    actual = get_user(child.id, adults_only)
    assert expected == actual
</code></pre>
<p>However, this does not test the default argument, and I have to add a second test for it:</p>
<pre><code>def test_get_child_default(self):
    child = self.child
    actual = get_user(child.id)
    assert "child" == actual
</code></pre>
<p>It would be nice to be able to combine the two tests into one. Is it possible to do this using python and the unittest framework?</p>
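<p>One stdlib-only way to fold the default-argument case into the same data-driven test is to parameterise over argument <em>tuples</em> and unpack them, so that an empty tuple exercises the default. A sketch using <code>unittest.subTest</code>, with a stub <code>get_user</code> standing in for the real implementation (the same tuple trick should also work with <code>parameterized.expand</code>):</p>

```python
import unittest

def get_user(user_id, adults_only=False):
    # stub standing in for the real implementation
    return "none" if adults_only else "child"

class GetUserTest(unittest.TestCase):
    def test_get_user(self):
        # () exercises the default value; the other tuples pass it explicitly
        cases = [((), "child"), ((False,), "child"), ((True,), "none")]
        for args, expected in cases:
            with self.subTest(args=args):
                self.assertEqual(get_user(1, *args), expected)
```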
|
<python><python-unittest>
|
2023-11-29 12:22:23
| 1
| 13,283
|
Baz
|
77,570,976
| 9,302,146
|
'cygpath: not found' & 'exec: cmd: not found' for Pyenv on Windows WSL 2
|
<p>I have been trying to get TensorFlow to recognize my GPU within WSL 2. However, I believe that is largely irrelevant to the problem I am having right now.</p>
<p>Whenever I try to run the <code>pyenv</code> command within WSL I get the following error:</p>
<pre class="lang-bash prettyprint-override"><code>/mnt/c/Users/USER/.pyenv/pyenv-win/bin/pyenv: 3: cygpath: not found
/mnt/c/Users/USER/.pyenv/pyenv-win/bin/pyenv: 3: exec: cmd: not found
</code></pre>
<p>Pyenv does work outside of the WSL environment.</p>
<p>Here are the details of my environment:</p>
<pre class="lang-bash prettyprint-override"><code>pyenv - 3.1.1
WSL Kernel - 5.15.133.1-1
WSL version - 2.0.9.0
Windows 10
Windows Version - 10.0.19045.3693
</code></pre>
<p>What can I do to make pyenv work inside WSL?</p>
|
<python><windows-subsystem-for-linux><pyenv>
|
2023-11-29 12:01:37
| 3
| 429
|
Abe Brandsma
|
77,570,942
| 10,795,473
|
How can I write/show a Python Snowpark DataFrame with more than 64 rows?
|
<p>I'm using Python Snowpark to create a dataframe and show it or write it to my Snowflake database, but I'm running into a strange error while doing so.</p>
<p>The error is raised when I <code>.show()</code> or <code>.write()</code> the DataFrame, and only when it contains more than 64 rows. The error says "Cannot perform DROP. This session does not have a current database. Call 'USE DATABASE', or use a qualified name.", which I don't understand: the session actually is connected to a database and can write to it, yet the problem seems to come from the length of the data.</p>
<p>Here I show an example in which, with the same data but different slice lengths, the output is either correct or an error.</p>
<pre><code>writing_df_50 = db.create_dataframe(data[:50], schema=self.model.schema)
writing_df_50_2 = db.create_dataframe(data[50:100], schema=self.model.schema)
writing_df_100 = db.create_dataframe(data[:100], schema=self.model.schema)
# OK
writing_df_50.show()
# OK
writing_df_50_2.show()
# ERROR
writing_df_100.show()
</code></pre>
<p>Here is the error traceback:</p>
<pre class="lang-py prettyprint-override"><code>Traceback (most recent call last):
File "/home/alex/.cache/pypoetry/virtualenvs/marilyn-vOmyxDUH-py3.10/lib/python3.10/site-packages/IPython/core/interactiveshell.py", line 3548, in run_code
exec(code_obj, self.user_global_ns, self.user_ns)
File "<ipython-input-1-86bf889c01ec>", line 1, in <module>
db.create_dataframe(writing_data, schema=self.model.schema).show()
File "/home/alex/.cache/pypoetry/virtualenvs/marilyn-vOmyxDUH-py3.10/lib/python3.10/site-packages/snowflake/snowpark/_internal/telemetry.py", line 139, in wrap
result = func(*args, **kwargs)
File "/home/alex/.cache/pypoetry/virtualenvs/marilyn-vOmyxDUH-py3.10/lib/python3.10/site-packages/snowflake/snowpark/dataframe.py", line 2861, in show
self._show_string(
File "/home/alex/.cache/pypoetry/virtualenvs/marilyn-vOmyxDUH-py3.10/lib/python3.10/site-packages/snowflake/snowpark/dataframe.py", line 2979, in _show_string
result, meta = self._session._conn.get_result_and_metadata(
File "/home/alex/.cache/pypoetry/virtualenvs/marilyn-vOmyxDUH-py3.10/lib/python3.10/site-packages/snowflake/snowpark/_internal/server_connection.py", line 593, in get_result_and_metadata
result_set, result_meta = self.get_result_set(plan, **kwargs)
File "/home/alex/.cache/pypoetry/virtualenvs/marilyn-vOmyxDUH-py3.10/lib/python3.10/site-packages/snowflake/snowpark/_internal/analyzer/snowflake_plan.py", line 187, in wrap
raise ne.with_traceback(tb) from None
File "/home/alex/.cache/pypoetry/virtualenvs/marilyn-vOmyxDUH-py3.10/lib/python3.10/site-packages/snowflake/snowpark/_internal/analyzer/snowflake_plan.py", line 116, in wrap
return func(*args, **kwargs)
File "/home/alex/.cache/pypoetry/virtualenvs/marilyn-vOmyxDUH-py3.10/lib/python3.10/site-packages/snowflake/snowpark/_internal/server_connection.py", line 576, in get_result_set
self.run_query(
File "/home/alex/.cache/pypoetry/virtualenvs/marilyn-vOmyxDUH-py3.10/lib/python3.10/site-packages/snowflake/snowpark/_internal/server_connection.py", line 103, in wrap
raise ex
File "/home/alex/.cache/pypoetry/virtualenvs/marilyn-vOmyxDUH-py3.10/lib/python3.10/site-packages/snowflake/snowpark/_internal/server_connection.py", line 97, in wrap
return func(*args, **kwargs)
File "/home/alex/.cache/pypoetry/virtualenvs/marilyn-vOmyxDUH-py3.10/lib/python3.10/site-packages/snowflake/snowpark/_internal/server_connection.py", line 367, in run_query
raise ex
File "/home/alex/.cache/pypoetry/virtualenvs/marilyn-vOmyxDUH-py3.10/lib/python3.10/site-packages/snowflake/snowpark/_internal/server_connection.py", line 348, in run_query
results_cursor = self._cursor.execute(query, params=params, **kwargs)
File "/home/alex/.cache/pypoetry/virtualenvs/marilyn-vOmyxDUH-py3.10/lib/python3.10/site-packages/snowflake/connector/cursor.py", line 920, in execute
Error.errorhandler_wrapper(self.connection, self, error_class, errvalue)
File "/home/alex/.cache/pypoetry/virtualenvs/marilyn-vOmyxDUH-py3.10/lib/python3.10/site-packages/snowflake/connector/errors.py", line 290, in errorhandler_wrapper
handed_over = Error.hand_to_other_handler(
File "/home/alex/.cache/pypoetry/virtualenvs/marilyn-vOmyxDUH-py3.10/lib/python3.10/site-packages/snowflake/connector/errors.py", line 345, in hand_to_other_handler
cursor.errorhandler(connection, cursor, error_class, error_value)
File "/home/alex/.cache/pypoetry/virtualenvs/marilyn-vOmyxDUH-py3.10/lib/python3.10/site-packages/snowflake/connector/errors.py", line 221, in default_errorhandler
raise error_class(
snowflake.snowpark.exceptions.SnowparkSQLException: (1304): 01b0a765-0303-21c9-0001-474606530ad2: 090105 (22000): Cannot perform DROP. This session does not have a current database. Call 'USE DATABASE', or use a qualified name.
</code></pre>
<p>Does anybody understand what's happening here and how can I solve it?</p>
|
<python><snowflake-cloud-data-platform>
|
2023-11-29 11:55:47
| 1
| 309
|
aarcas
|
77,570,598
| 8,228,558
|
"The network path was not found" error with multiple backslashes in the error-printed path
|
<p>I am trying to deploy a Flask application on a Windows Server. It works fine in local tests, but it fails to access the server folders after deployment.
The main folder is a network share which looks like this:</p>
<pre><code>mainpath = '//part1//part2/part3/folder'
</code></pre>
<p>I have a testing route in Flask that just tries to read/print the contents of this folder, but it has not succeeded so far.</p>
<pre><code>os.listdir(mainpath)
</code></pre>
<p>The error I get is like below:</p>
<pre><code>FileNotFoundError: [WinError 53] The network path was not found: '\\\\\\\\part1\\\\part2\\\\part3\\\\folder'
</code></pre>
<p>Why does Windows add so many backslashes to the path? What is the best practice to avoid this? I have tried more than 30 variants of this, but none worked so far. I have tried <code>os.path.normpath</code>, <code>os.path.abspath</code>, <code>Path</code>, <code>PureWindowsPath</code>, different versions of slashes, backslashes, <code>os.path.join</code>, raw strings, etc., but nothing worked yet.</p>
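<p>A side note that may explain part of the confusion: the doubled backslashes in the traceback appear to be Python's <code>repr</code> escaping, not extra characters in the actual path, as this small stdlib-only sketch suggests:</p>

```python
# "\\\\" in source code is two literal backslashes, so p is a UNC-style path
p = "\\\\part1\\part2\\part3\\folder"

print(p)        # what the OS actually sees: \\part1\part2\part3\folder
print(repr(p))  # what a traceback shows: every backslash doubled again

# 2 leading backslashes + 3 separators, no hidden extras
assert p.count("\\") == 5
```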
<p>Any suggestion?</p>
|
<python><windows><path>
|
2023-11-29 11:05:55
| 0
| 10,647
|
IoaTzimas
|
77,570,553
| 1,627,106
|
Conditionally required value in Pydantic v2 model
|
<p>I'm working with an API that accepts a query parameter, which selects the values the API will return. Therefore, when parsing the API response, all attributes of the Pydantic model used for validation must be optional:</p>
<pre class="lang-py prettyprint-override"><code>class InvoiceItem(BaseModel):
    """
    Pydantic model representing an Invoice
    """

    id: PositiveInt | None = None
    org: AnyHttpUrl | None = None
    relatedInvoice: AnyHttpUrl | None = None
    quantity: PositiveInt | None = None
</code></pre>
<p>However, when creating an object using the API, some of the attributes are required. How can I make attributes required under certain conditions (in Pydantic v1 it was possible to use metaclasses for this)?</p>
<p>Examples could be to somehow parameterise the model (as it wouldn't know without external input how it's being used) or to create another model <code>InvoiceItemCreate</code> inheriting from <code>InvoiceItem</code> that makes the attributes required without re-defining them.</p>
|
<python><pydantic>
|
2023-11-29 11:00:48
| 2
| 1,712
|
Daniel
|
77,570,541
| 895,727
|
Why do Python's match statements not raise an Exception when no pattern matches?
|
<p><a href="https://peps.python.org/pep-0634/" rel="nofollow noreferrer">PEP 634</a> and <a href="https://peps.python.org/pep-0635/" rel="nofollow noreferrer">PEP 635</a> introduce <em>structural pattern matching</em>. But no rationale is given for why nothing happens if none of the patterns matches:</p>
<pre class="lang-py prettyprint-override"><code>var = 3
match var:
    case 1:
        pass
    case 2:
        pass
print("End is reached without Exception.")
</code></pre>
<p>From Erlang I am used to this raising an exception. In Rust, this would not compile. In Python you always have to add an <em>irrefutable pattern</em>:</p>
<pre class="lang-py prettyprint-override"><code>    case _:
        raise Exception("…")
</code></pre>
<p>Why is that?</p>
|
<python><python-3.10>
|
2023-11-29 10:59:01
| 2
| 1,279
|
clonejo
|
77,570,456
| 497,219
|
Cannot list tapkey owners despite having right scopes
|
<p>I'm trying to get a list of owners/locks using the Tapkey REST API in Python. I verified that the OAuth credentials are correct, as I am getting an actual token.</p>
<pre><code>import requests

tapkey_api_url = "https://my.tapkey.com"
tapkey_api_version = "/api/v1"
tapkey_auth_server = "https://login.tapkey.com"
tapkey_client_id = "xxx"  # redacted
tapkey_client_secret = "yyy"  # redacted

def get_access_token(url, client_id, client_secret):
    response = requests.post(
        url,
        data={"grant_type": "client_credentials", "scope": "read:owneraccounts read:owneraccount:permissions"},
        auth=(client_id, client_secret),
    )
    token_json = response.json()
    return token_json["access_token"]

token = get_access_token(f"{tapkey_auth_server}/connect/token", tapkey_client_id, tapkey_client_secret)
print(f"Received token: {token}")

owners_url = f"{tapkey_api_url}{tapkey_api_version}/Owners"
print(owners_url)

response = requests.get(owners_url, headers={"Authorization": f"access_token {token}"})
print(response)
</code></pre>
<p>Output:</p>
<pre><code>Received token: <redacted>
https://my.tapkey.com/api/v1/Owners
<Response [401]>
</code></pre>
<p>I'm passing the correct scopes, those scopes are enabled in the oauth settings in the Tapkey admin portal, and I am given a token. I cannot think of a single reason why I am getting an unauthorized error.</p>
<p>Edit: to be clear, the service-account e-mail address was added as an administrator to my account.</p>
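<p>One detail worth double-checking (an assumption on my part, not something the post confirms about Tapkey specifically): OAuth2 bearer tokens are normally sent with the scheme name <code>Bearer</code> (RFC 6750), whereas the code above uses <code>access_token</code> as the scheme:</p>

```python
token = "example-token"  # placeholder standing in for the real access token

# RFC 6750 bearer scheme: "Authorization: Bearer <token>"
headers = {"Authorization": f"Bearer {token}"}
print(headers["Authorization"])  # Bearer example-token

# the request above would then read:
# requests.get(owners_url, headers={"Authorization": f"Bearer {token}"})
```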
|
<python><tapkey>
|
2023-11-29 10:45:08
| 1
| 1,583
|
Jeroen Jacobs
|
77,570,408
| 3,628,232
|
AsyncResult vs callback for getting many results from apply_async()
|
<p>When using the <code>apply_async()</code> method from Python's <code>multiprocessing.Pool</code>, there are two options for storing the return value - either save the <code>AsyncResult</code> object and call <code>.get()</code>, or use a callback, i.e.:</p>
<pre class="lang-py prettyprint-override"><code># Using AsyncResult
def process_data():
    results = []
    for i in range(n):
        result = pool.apply_async(func, args)
        results.append(result)
    pool.close()
    pool.join()  # Not strictly necessary, since .get() will block anyway
    data = [r.get() for r in results]
    return data
</code></pre>
<p>or</p>
<pre class="lang-py prettyprint-override"><code># Using callback
data = []

def process_data():
    for i in range(n):
        pool.apply_async(func, args, callback=save_result)
    pool.close()
    pool.join()

def save_result(result):
    data.append(result)
</code></pre>
<p>Is one way more "canonical" than the other? Assuming we are submitting many (thousands) jobs to the pool, what are the advantages/disadvantages of these two approaches? The <code>AsyncResult</code> approach removes the need for a global variable, but it requires a second list (of <code>AsyncResult</code> objects) - does that effectively double the RAM required?</p>
|
<python><callback><multiprocessing><python-multiprocessing>
|
2023-11-29 10:38:56
| 1
| 780
|
Peet Whittaker
|
77,570,404
| 13,163,943
|
Is there a way to always run black before python, but still have full command line functionality for python (be able to feed in any arguments etc)
|
<p>I often still do a lot of local testing on slightly long-running scripts. I have some linting set up, but I'm often just using Vim on a fairly raw VM, and I've not quite cracked getting my perfect editing setup sorted quickly.</p>
<p>What would often help me is if, every time I tried to run a Python script, black ran over it first to check that everything was valid Python.</p>
<p>Simple, but it would just save a few errors at the end of long-running scripts.</p>
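<p>If the goal is mainly a validity check before a long run (rather than <code>black</code>'s formatting as such), a stdlib-only wrapper is one option. A sketch, assuming you are happy to launch scripts through it instead of <code>python</code> directly; swapping the <code>py_compile</code> call for a <code>subprocess</code> run of <code>black --check</code> would give the same gate with black installed:</p>

```python
import py_compile
import runpy
import sys

def check_and_run(path):
    # compile first: a SyntaxError surfaces immediately as PyCompileError,
    # before the (possibly long-running) script does any real work
    py_compile.compile(path, doraise=True)
    runpy.run_path(path, run_name="__main__")

if __name__ == "__main__" and len(sys.argv) > 1:
    check_and_run(sys.argv[1])
```

<p>Saved as, say, <code>run.py</code> (a hypothetical name), it would be invoked as <code>python run.py myscript.py</code>.</p>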
|
<python><pre-commit>
|
2023-11-29 10:38:49
| 1
| 366
|
George Pearse
|
77,570,302
| 1,249,683
|
How can I pass a keyword argument to a function when the name contains a dot `.`?
|
<p>Given a function that accepts "**kwargs", e.g.,</p>
<pre class="lang-py prettyprint-override"><code>def f(**kwargs):
    print(kwargs)
</code></pre>
<p>how can I pass a key-value pair if the key contains a dot/period (<code>.</code>)?</p>
<p>The straightforward way results in a syntax error:</p>
<pre class="lang-py prettyprint-override"><code>In [46]: f(a.b=1)
Cell In[46], line 1
f(a.b=1)
^
SyntaxError: expression cannot contain assignment, perhaps you meant "=="?
</code></pre>
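<p>For reference, keyword syntax itself only accepts identifiers, but dictionary unpacking bypasses that restriction, so one way to pass such a key is:</p>

```python
def f(**kwargs):
    print(kwargs)

# the key is an arbitrary string inside the dict, so "a.b" is allowed
f(**{"a.b": 1})  # prints {'a.b': 1}
```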
|
<python><syntax><keyword-argument>
|
2023-11-29 10:25:01
| 1
| 500
|
xebtl
|
77,570,291
| 1,133,730
|
How to check the version of Spark Core, not pyspark
|
<p>I am running a different version of pyspark than the Spark core version installed in the cluster. Is there a way to print the Spark core version? Everything so far gives me the pyspark version. I have tried the following from the user machines:</p>
<pre><code>pyspark.__version__
ss.version # same as spark.version
sc.version
./bin/spark-submit --version
</code></pre>
<p>EDIT:</p>
<p>In case this helps, I am running Spark on a YARN cluster which has access to multiple machines (not on K8S), and I access it via pyspark from user machines with:</p>
<pre><code>ss = pyspark.sql.SparkSession.builder.config(conf=conf).getOrCreate()
sc = ss.sparkContext
</code></pre>
|
<python><apache-spark><pyspark>
|
2023-11-29 10:23:03
| 1
| 3,541
|
fersarr
|
77,570,215
| 6,151,828
|
How does sklearn calculate AUC for random forest and why it is different when using different functions?
|
<p>I start with the example given for <a href="https://scikit-learn.org/stable/auto_examples/miscellaneous/plot_roc_curve_visualization_api.html" rel="nofollow noreferrer"><em>ROC Curve with Visualization API</em></a>:</p>
<pre><code>import matplotlib.pyplot as plt
from sklearn.datasets import load_wine
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import RocCurveDisplay
from sklearn.model_selection import train_test_split
X, y = load_wine(return_X_y=True)
y = y == 2
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)
rfc = RandomForestClassifier(n_estimators=10, random_state=42)
rfc.fit(X_train, y_train)
ax = plt.gca()
rfc_disp = RocCurveDisplay.from_estimator(rfc, X_test, y_test, ax=ax, alpha=0.8)
print(rfc_disp.roc_auc)
</code></pre>
<p>with the answer <code>0.9823232323232323</code>.</p>
<p>Following this immediately by</p>
<pre><code>from sklearn.metrics import roc_auc_score
y_pred = rfc.predict(X_test)
auc = roc_auc_score(y_test, y_pred)
print(auc)
</code></pre>
<p>I obtain <code>0.928030303030303</code>, which is manifestly different.</p>
<p>Interestingly, I obtain the same result with the ROC Curve Visualization API, if I use the predicted values:</p>
<pre><code>rfc_disp1 = RocCurveDisplay.from_predictions(y_test, y_pred)
print(rfc_disp1.roc_auc)
</code></pre>
<p>However, the area under the displayed curve, obtained by trapezoidal integration, does match the former result:</p>
<pre><code>import numpy as np
I = np.sum(np.diff(rfc_disp.fpr) * (rfc_disp.tpr[1:] + rfc_disp.tpr[:-1])/2.)
print(I)
</code></pre>
<p>What is the reason for this discrepancy? I assume it is related to how the two functions calculate AUC (perhaps a different way of smoothing the curve?). This brings me to a more general question: how is the ROC curve obtained for a random forest in sklearn? What parameter/threshold is changed to obtain different predictions? Are these just scores for separate trees of the forest?</p>
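<p>For what it's worth, the same discrepancy pattern is reproducible on toy data by feeding <code>roc_auc_score</code> probability scores versus hard labels; <code>RocCurveDisplay.from_estimator</code> uses the classifier's probability output, while <code>roc_auc_score(y_test, y_pred)</code> above only sees 0/1 predictions. A sketch with made-up numbers, not the wine data:</p>

```python
import numpy as np
from sklearn.metrics import roc_auc_score

y_true = np.array([0, 0, 1, 1])
y_score = np.array([0.1, 0.6, 0.35, 0.8])  # like predict_proba(X)[:, 1]
y_pred = (y_score >= 0.5).astype(int)      # like predict(X): [0, 1, 0, 1]

print(roc_auc_score(y_true, y_score))  # 0.75, ranking quality of the scores
print(roc_auc_score(y_true, y_pred))   # 0.5, only a single 0/1 threshold left
```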
|
<python><scikit-learn><random-forest><roc><auc>
|
2023-11-29 10:11:42
| 1
| 803
|
Roger V.
|
77,570,187
| 16,171,413
|
Gracefully Handle Exception from raw user input wrapped in an int function
|
<p>Consider the function below:</p>
<pre><code>def user_input():
    try:
        a = int(input("Enter a number: "))
    except ValueError:
        print(f"\n{a} is invalid.\n")
    else:
        return a
</code></pre>
<p>This doesn't handle exceptions gracefully but instead results in the following error if the user enters a non-number:</p>
<pre><code>ValueError: invalid literal for int() with base 10: 'v'

During handling of the above exception, another exception occurred:

UnboundLocalError: local variable 'a' referenced before assignment
</code></pre>
<p>But if I do this:</p>
<pre><code>a = input("Enter an integer: ")
try:
    a = int(a)
except ValueError:
    print(f"\n{a} is invalid.\n")
else:
    print(a)
</code></pre>
<p>It works as can be seen in <a href="https://stackoverflow.com/questions/60240892/how-to-use-f-strings-or-format-in-except-block">this question</a>. But I really want to wrap the input function in the int function so as to do the conversion directly and still gracefully handle exceptions from user input. Is there any alternative to this? Or must I use the input and int functions separately to get my desired result?</p>
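<p>For reference, a common pattern that keeps the conversion next to the prompt while still letting the handler see the raw text is to bind the raw string first inside a retry loop (a sketch, not the only way):</p>

```python
def user_input():
    while True:
        raw = input("Enter a number: ")
        try:
            return int(raw)
        except ValueError:
            # raw is always bound here, unlike `a` in the int(input(...)) version
            print(f"\n{raw} is invalid.\n")
```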
|
<python>
|
2023-11-29 10:07:16
| 2
| 5,413
|
Uchenna Adubasim
|
77,570,186
| 129,805
|
django-helpdesk 0.4.1 "extendMarkdown() missing 1 required positional argument: 'md_globals'"
|
<p>When I try to view /tickets/1/ I get the error:</p>
<pre><code>extendMarkdown() missing 1 required positional argument: 'md_globals'
/usr/local/lib/python3.9/dist-packages/markdown/core.py, line 115, in registerExtensions
</code></pre>
<p>How can I fix this?</p>
|
<python><python-3.x><django><markdown>
|
2023-11-29 10:07:11
| 1
| 45,159
|
fadedbee
|
77,570,066
| 7,347,925
|
How to accelerate convolving function?
|
<p>I have written a convolving function like this:</p>
<pre><code>import numpy as np
import numba as nb

# Generate sample input data
num_chans = 111
num_bins = 47998
num_rad = 8
num_col = 1000

rng = np.random.default_rng()
wvl_sensor = rng.uniform(low=1000, high=11000, size=(num_chans, num_col))
fwhm_sensor = rng.uniform(low=0.01, high=2.0, size=num_chans)
wvl_lut = rng.uniform(low=1000, high=11000, size=num_bins)
rad_lut = rng.uniform(low=0, high=1, size=(num_rad, num_bins))

# Original convolution implementation
def original_convolve(wvl_sensor, fwhm_sensor, wvl_lut, rad_lut):
    sigma = fwhm_sensor / (2.0 * np.sqrt(2.0 * np.log(2.0)))
    var = sigma ** 2
    denom = (2 * np.pi * var) ** 0.5
    numer = np.exp(-(wvl_lut[:, None] - wvl_sensor[None, :])**2 / (2*var))
    response = numer / denom
    response /= response.sum(axis=0)
    resampled = np.dot(rad_lut, response)
    return resampled
</code></pre>
<p>The NumPy version runs in about 45 s:</p>
<pre><code># numpy version
num_chans, num_col = wvl_sensor.shape
num_bins = wvl_lut.shape[0]
num_rad = rad_lut.shape[0]

original_res = np.empty((num_col, num_rad, num_chans), dtype=np.float64)
for x in range(wvl_sensor.shape[1]):
    original_res[x, :, :] = original_convolve(wvl_sensor[:, x], fwhm_sensor, wvl_lut, rad_lut)
</code></pre>
<p>I have tried to accelerate it using numba:</p>
<pre><code>@nb.jit(nopython=True)
def numba_convolve(wvl_sensor, fwhm_sensor, wvl_lut, rad_lut):
    num_chans, num_col = wvl_sensor.shape
    num_bins = wvl_lut.shape[0]
    num_rad = rad_lut.shape[0]
    output = np.empty((num_col, num_rad, num_chans), dtype=np.float64)
    sigma = fwhm_sensor / (2.0 * np.sqrt(2.0 * np.log(2.0)))
    var = sigma ** 2
    denom = (2 * np.pi * var) ** 0.5
    for x in nb.prange(num_col):
        numer = np.exp(-(wvl_lut[:, None] - wvl_sensor[None, :, x])**2 / (2*var))
        response = numer / denom
        response /= response.sum(axis=0)
        resampled = np.dot(rad_lut, response)
        output[x, :, :] = resampled
    return output
</code></pre>
<p>It still takes about 32 s. Note that if I use <code>@nb.jit(nopython=True, parallel=True)</code>, the output is all zero values.</p>
<p>Any idea to apply numba correctly? or improve the convolving function?</p>
|
<python><arrays><numpy><matrix><numba>
|
2023-11-29 09:49:54
| 1
| 1,039
|
zxdawn
|
77,569,960
| 11,742,006
|
Failed building wheel for bcrypt when running pip install paramiko
|
<p>I'm trying to run <code>pip install paramiko</code>, but I'm running into the following error.
Please help me: I have already tried upgrading pip and installing Rust, but it keeps returning this kind of error. I also tried <code>pip install pysftp</code>, but the error is the same.</p>
<pre><code> =============================DEBUG ASSISTANCE=============================
If you are seeing a compilation error please try the following steps to
successfully install bcrypt:
1) Upgrade to the latest pip and try again. This will fix errors for most
users. See: https://pip.pypa.io/en/stable/installing/#upgrading-pip
2) Ensure you have a recent Rust toolchain installed. bcrypt requires
rustc >= 1.64.0. (1.63 may be used by setting the BCRYPT_ALLOW_RUST_163
environment variable)
Python: 3.9.7
platform: macOS-10.16-x86_64-i386-64bit
pip: n/a
setuptools: 69.0.2
setuptools_rust: 1.8.1
rustc: 1.74.0 (79e9716c9 2023-11-13)
=============================DEBUG ASSISTANCE=============================
error: `cargo rustc --lib --message-format=json-render-diagnostics --manifest-path src/_bcrypt/Cargo.toml --release -v --features pyo3/extension-module --crate-type cdylib -- -C 'link-args=-undefined dynamic_lookup -Wl,-install_name,@rpath/_bcrypt.abi3.so'` failed with code 101
[end of output]
note: This error originates from a subprocess, and is likely not a problem with pip.
ERROR: Failed building wheel for bcrypt
Failed to build bcrypt
ERROR: Could not build wheels for bcrypt, which is required to install pyproject.toml-based projects
</code></pre>
|
<python><python-3.x><pip><paramiko><pysftp>
|
2023-11-29 09:33:57
| 1
| 524
|
victorxu2
|
77,569,715
| 10,748,166
|
reportlab, Long word or URI doesn't wrap inside table cell
|
<p>Creating a table with cells that contain long words doesn't seem to work as expected, i.e. the words are not wrapped automatically inside the paragraphs, and the table cell breaks the page layout by being too wide.
Additionally, the rendered text of the URI contains strange characters (<code>;</code>) that are not in the original text used to create the paragraph.</p>
<p>How can long-word wrapping be controlled for a paragraph inside a table cell?</p>
<p>Here is a minimal code ( using macOS Sonoma, arm64, python==3.10, reportlab==3.6.11 ):</p>
<pre class="lang-py prettyprint-override"><code>import re
from pathlib import Path

from reportlab.lib import colors
from reportlab.lib.pagesizes import portrait, A4
from reportlab.lib.styles import getSampleStyleSheet
from reportlab.lib.units import cm
from reportlab.platypus import Paragraph, TableStyle, Table, Image

PAGE_SIZE = portrait(A4)
PAGE_MARGIN_TOP = 2 * cm
PAGE_MARGIN_LEFT = 0.7 * cm
PAGE_MARGIN_RIGHT = 0.7 * cm
PAGE_MARGIN_BOTTOM = 1 * cm
PAGE_CONTENT_WIDTH = PAGE_SIZE[0] - PAGE_MARGIN_LEFT - PAGE_MARGIN_RIGHT
PAGE_CONTENT_HEIGHT = PAGE_SIZE[1] - PAGE_MARGIN_TOP - PAGE_MARGIN_BOTTOM

LEADING_FACTOR = 1.25
BODY_TEXT_STYLE_NAME = "BodyText"
BODY_TEXT_FONT_NAME = "Roboto"  # you can use "Courier" too, same problem
BODY_TEXT_FONT_SIZE = 12

STYLES = getSampleStyleSheet()
BODY_TEXT_STYLE = STYLES[BODY_TEXT_STYLE_NAME]
BODY_TEXT_STYLE.fontName = BODY_TEXT_FONT_NAME
BODY_TEXT_STYLE.fontSize = BODY_TEXT_FONT_SIZE
BODY_TEXT_STYLE.leading = BODY_TEXT_FONT_SIZE * LEADING_FACTOR
BODY_TEXT_STYLE.wordWrap = True  # this has no effect whatsoever, commenting it out does not improve anything
BODY_TEXT_STYLE.splitLongWords = True  # this has no effect whatsoever, commenting it out does not improve anything
BODY_TEXT_STYLE.uriWasteReduce = 0.0001  # using a greater value makes things sometimes even worse

def create_pdf():
    output_path = Path.home() / "tmp" / "test.pdf"
    doc = DocTemplate(
        output_path.as_posix(),
        pagesize=PAGE_SIZE,
        topMargin=PAGE_MARGIN_TOP,
        leftMargin=PAGE_MARGIN_LEFT,
        rightMargin=PAGE_MARGIN_RIGHT,
        bottomMargin=PAGE_MARGIN_BOTTOM,
        showBoundary=True,
    )
    paragraphs = [
        Paragraph("Test Text", style=BODY_TEXT_STYLE),
        Paragraph(
            "Test Text averyveryveryveryveryveryveryveryveryveryveryveryveryveryveryveryveryveryveryveryveryveryveryveryveryveryvveryveryverylongword",
            style=BODY_TEXT_STYLE),
        Paragraph(
            "Label: This is text with a long URI https://www.example.com/hello/world/this/is/a/very/long/website/path/11111/222222/?abc=123&def=756&ghj=888",
            style=BODY_TEXT_STYLE),
    ]
    cells = [
        [
            paragraphs,
            [Image(create_placeholder_image(200, 200), width=3.5 * cm, height=3.5 * cm)],
        ]
    ]
    table = Table(
        cells,
        colWidths=[
            "*",
            4 * cm,
        ],
    )
    table.setStyle(
        TableStyle(
            [
                # align image and qr
                ("ALIGN", (0, 0), (-1, -1), "CENTER"),
                ("VALIGN", (0, 0), (-2, -1), "CENTER"),
                ("VALIGN", (-1, 0), (-1, -1), "TOP"),
                # force padding
                ("TOPPADDING", (0, 0), (-1, -1), 0),
                ("BOTTOMPADDING", (0, 0), (-1, -1), 0),
                ("LEFTPADDING", (0, 0), (-1, -1), 0),
                ("RIGHTPADDING", (0, 0), (-1, -1), 0),
                # table borders, for debugging
                ("INNERGRID", (0, 0), (-1, -1), 0.25, colors.red),
                ("BOX", (0, 0), (-1, -1), 0.25, colors.red),
            ]
        )
    )
    flowables = [table]
    doc.build(flowables)
    print(f"Saved {output_path}")

if __name__ == "__main__":
    create_pdf()
</code></pre>
<p><a href="https://i.sstatic.net/A52Xf.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/A52Xf.png" alt="pdf screenshot" /></a></p>
|
<python><reportlab>
|
2023-11-29 08:50:34
| 0
| 305
|
mko
|
77,569,552
| 2,890,129
|
How to display a lets-plot plot in Spyder IDE
|
<p>I am using the Spyder IDE and Lets-Plot to generate a plot.</p>
<pre><code>import numpy as np
import polars as pl
from lets_plot import *
LetsPlot.setup_html()
np.random.seed(12)
data = pl.DataFrame(
{
"cond":np.random.lognormal(0, 1, 400),
"rating":np.concatenate((np.random.normal(0, 1, 200), np.random.normal(1, 1.5, 200)))
}
)
ggplot(data, aes(x='rating', y='cond')) + \
geom_point(color='dark_green', alpha=.7)
</code></pre>
<p>Instead of showing a graph, it is simply displaying this object in the IPython console.</p>
<pre><code><lets_plot.plot.core.PlotSpec at 0x2771422c100>
</code></pre>
<p>Any help?</p>
|
<python><spyder><python-polars><lets-plot>
|
2023-11-29 08:27:11
| 1
| 439
|
Ramakrishna S
|
77,569,524
| 2,955,827
|
How to enable IDE IntelliSense with dynamic imported python classes?
|
<p>I have thousands of models in my package, each of which contains one class. Users can choose the model they want to use by:</p>
<pre class="lang-py prettyprint-override"><code>from myproject.type1.models.m1 import m1
</code></pre>
<p>Very inconvenient.</p>
<p>The subfolder looks like:</p>
<pre><code>type1/
βββ __init__.py
βββ constant.py
βββ decorator.py
βββ models
β βββ m1.py
β βββ m2.py
β βββ m3.py
...
β βββ m999.py
</code></pre>
<p>So I write a module loader to load module dynamic:</p>
<pre class="lang-py prettyprint-override"><code>class ModelLoader:
def __init__(self, model_path):
self.model_path = model_path
self.models = {}
def __getattribute__(self, __name: str) -> Any:
try:
return super().__getattribute__(__name)
except AttributeError:
if __name not in self.models:
try:
model_file = import_module(self.model_path + "." + __name)
model = getattr(model_file, __name)
self.models[__name] = model
return model
except ModuleNotFoundError:
raise AttributeError(f"Model {__name} not found")
else:
return self.models[__name]
</code></pre>
<p>With this, users can load a module with just:</p>
<pre class="lang-py prettyprint-override"><code>models = ModelLoader(__name__.rsplit(".", 1)[0] + ".models")
m1 = models.m1
m2 = models.m2
</code></pre>
<p>But this causes a problem.</p>
<p>VS Code and PyCharm cannot provide IntelliSense for m1 if it is imported by <code>ModelLoader</code>.</p>
<p>Intellisense works:</p>
<p><a href="https://i.sstatic.net/cH34l.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/cH34l.png" alt="enter image description here" /></a></p>
<p>not work:</p>
<p><a href="https://i.sstatic.net/IG0rq.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/IG0rq.png" alt="enter image description here" /></a></p>
<p>How can I let IDE know class <code>ModelLoader</code> have a member <code>m1</code> and is defined in <code>myproject.type1.models.m1</code>?</p>
<hr />
<p>I do not import all modules in my package's <code>__init__</code> since that would hurt the user experience: it takes about 30s to import all models.</p>
|
<python><visual-studio-code><pycharm><intellisense>
|
2023-11-29 08:21:34
| 2
| 3,295
|
PaleNeutron
|
77,569,518
| 5,084,560
|
cx_Oracle.DatabaseError: ORA-12847: retry parsing due to concurrent DDL operation
|
<p>I'm reading data from an Oracle DB with Python cx_Oracle, but sometimes it throws</p>
<blockquote>
<p>ORA-12847: retry parsing due to concurrent DDL operation</p>
</blockquote>
<p>I couldn't find anything about it on the internet. It's just a select statement with some joins. The query result has almost 10M rows, and there is nothing wrong in the explain plan.</p>
<p>here is my code snippet:</p>
<pre><code>with connection.cursor() as cursor:
#cursor.prefetchrows = 2000000
cursor.arraysize = 250000
cursor.execute(query)
data_list = cursor.fetchall()
data_columns = [x[0] for x in cursor.description]
print("I got %d lines " % len(data_list))
</code></pre>
<p>The error occurs at the <strong>cursor.fetchall()</strong> line.</p>
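<p>Independent of the ORA-12847 retry itself, fetching ~10M rows with <code>fetchall()</code> holds the entire result set in memory at once. A batched <code>fetchmany()</code> loop is one alternative; it is sketched below with stdlib <code>sqlite3</code> purely to illustrate the DB-API pattern, since the same loop shape applies to a cx_Oracle cursor.</p>

```python
# Illustrative only: sqlite3 stands in for the Oracle connection here.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (x INTEGER)")
conn.executemany("INSERT INTO t VALUES (?)", [(i,) for i in range(10_000)])

cur = conn.cursor()
cur.execute("SELECT x FROM t")
total = 0
while True:
    batch = cur.fetchmany(2500)   # arraysize-sized chunks
    if not batch:
        break
    total += len(batch)           # ...process one batch at a time here...
print(total)  # 10000
```

<p>With cx_Oracle, the batch size is usually matched to <code>cursor.arraysize</code> so each <code>fetchmany()</code> call maps to one round trip.</p>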
|
<python><oracle-database><cx-oracle>
|
2023-11-29 08:20:39
| 0
| 305
|
Atacan
|
77,569,483
| 17,136,258
|
Add empty row between every new market
|
<p>I want to change <code>.append</code> to <code>.concat</code>.
I got the following error <code>ValueError: Must pass 2-d input. shape=(1, 2, 2)</code>.
I already looked at <a href="https://stackoverflow.com/questions/75956209/error-dataframe-object-has-no-attribute-append">Error "'DataFrame' object has no attribute 'append'"</a> and tried its approach, but unfortunately I got the error above. How do I fix this issue?</p>
<p>I want to add an empty row between every new <code>Market</code>.</p>
<p>Dataframe</p>
<pre class="lang-py prettyprint-override"><code>Market Values
0 A 1
1 B 2
2 A 3
3 C 4
4 B 5
</code></pre>
<pre class="lang-py prettyprint-override"><code>import pandas as pd
data = {
'Market': ['A', 'B', 'A', 'C', 'B'],
'Values': [1, 2, 3, 4, 5]
}
df_sorted = pd.DataFrame(data)
print(df_sorted)
markets = ['A', 'B', 'C']
appended_df = pd.DataFrame()
# Loop through markets and append rows
for market in markets:
market_rows = df_sorted[df_sorted['Market'] == market]
#appended_df = appended_df.append(market_rows, ignore_index=True) # <--- old
df_sorted = pd.concat([df_sorted, pd.DataFrame([market_rows])], ignore_index=True)
#appended_df = appended_df.append(pd.Series(), ignore_index=True) # <--- old
df_sorted = pd.concat([df_sorted, pd.DataFrame([pd.Series()])], ignore_index=True)
print(appended_df)
[OUT] ValueError: Must pass 2-d input. shape=(1, 2, 2)
</code></pre>
<p>What I want</p>
<pre class="lang-py prettyprint-override"><code> Market Values
0 A 1.0
1 A 3.0
2 NaN NaN
3 B 2.0
4 B 5.0
5 NaN NaN
6 C 4.0
</code></pre>
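<p>One possible sketch, not necessarily the only approach: group by <code>Market</code> (groups come back sorted by key), then interleave a single all-NaN row between the groups instead of appending row by row.</p>

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({'Market': ['A', 'B', 'A', 'C', 'B'],
                   'Values': [1, 2, 3, 4, 5]})

blank = pd.DataFrame({'Market': [np.nan], 'Values': [np.nan]})
pieces = []
for market, group in df.groupby('Market'):  # groups are sorted by key
    pieces.extend([group, blank])
out = pd.concat(pieces[:-1], ignore_index=True)  # drop the trailing blank
print(out)
```

<p>This produces exactly the desired layout (A rows, blank, B rows, blank, C rows); <code>Values</code> becomes float because of the NaN rows, as in the expected output.</p>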
|
<python><pandas><dataframe>
|
2023-11-29 08:14:32
| 2
| 560
|
Test
|
77,569,468
| 17,267,064
|
Unable to retrieve video URL in a Twitter post using Selenium
|
<p>I am trying to scrape Twitter using Selenium in Python. However, I am unable to retrieve the video clip's download/streaming URL.</p>
<p>Refer below tweet for instance.</p>
<p><a href="https://twitter.com/Tesla/status/1711184330792579093" rel="nofollow noreferrer">https://twitter.com/Tesla/status/1711184330792579093</a></p>
<p>Is the video source url directly available in the page for me to retrieve?</p>
<p>Thank you in advance.</p>
|
<python><html><selenium-webdriver><twitter>
|
2023-11-29 08:11:53
| 1
| 346
|
Mohit Aswani
|
77,569,466
| 1,568,919
|
Get main file name from inside a library function
|
<p>Suppose I have 2 files <code>A.py</code> and <code>B.py</code>. They both do</p>
<pre><code>import C
print(C.get_my_name())
</code></pre>
<p>and in the lib file C.py</p>
<pre><code>def get_my_name():
????
</code></pre>
<p>Is there a way to define the <code>get_my_name</code> function such that when <code>A.py</code> is run, the result is 'A.py' and when <code>B.py</code> is run, the result is 'B.py'?</p>
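<p>One minimal sketch: the interpreter puts the path of the script being run in <code>sys.argv[0]</code>, so the library function can report it no matter which file imported <code>C</code>. (An alternative is <code>import __main__</code> and reading <code>__main__.__file__</code>; note that interactive sessions and <code>python -c</code> have different <code>argv[0]</code> values.)</p>

```python
# Hypothetical contents of C.py
import os
import sys

def get_my_name():
    # sys.argv[0] is the path of the top-level script, not of this module
    return os.path.basename(sys.argv[0])
```

<p>Running <code>python A.py</code> then prints <code>A.py</code>, and <code>python B.py</code> prints <code>B.py</code>.</p>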
|
<python>
|
2023-11-29 08:11:38
| 2
| 7,411
|
jf328
|
77,569,348
| 519,422
|
How to skip a value when replacing values in an external file with values from a dataframe?
|
<p>I have a file that's made up of entries like:</p>
<pre><code>A first = 4 | 1_3_5_4 Name1
labelToSkip
i = 1000000 j = -3 k = -15
end
B first = 4 | 9_2_2_4 Name2
labelToSkip
i = 150000 j = -3 k = -20
end
...
</code></pre>
<p>I asked this question about how to replace certain values in the file with corresponding values from a Pandas dataframe like:</p>
<pre><code> i j k
0 unit1 unit2 unit3
1 1000 100 84
2 -3000 200 60
3 -2000 90 195
4 900 40 209
</code></pre>
<p><a href="https://stackoverflow.com/questions/77566914/how-to-write-specific-values-from-a-pandas-python-dataframe-to-a-specific-plac/77567039">How to write specific values from a Pandas (Python) dataframe to a specific place in a file (i.e., after an identifier)?</a></p>
<p>I got a great solution. However, I have some lines in my file where only the i and k values need to be replaced. I.e., the dataframe looks like:</p>
<pre><code> i k
0 unit1 unit3
1 1000 84
2 -3000 60
3 -2000 195
4 900 209
</code></pre>
<p>So I would want this result (for example). Here, I use the third row of values from the dataframe to replace only the values in the "i" and "k" fields of "B" in the file:</p>
<pre><code>A first = 4 | 1_3_5_4 Name1
labelToSkip
i = 1000000 j = -3 k = -15
end
B first = 4 | 9_2_2_4 Name2
labelToSkip
i = -2000 j = -3 k = 195
end
...
</code></pre>
<p>However, in this situation, nothing happens when I run the solution. I have tried changing "idx" to 2. I even tried changing idx to "1" and running it for only i (having removed anything related to j and k) and then k. That doesn't work either. I haven't been able to find anything online about how to ignore/skip a field. If anyone has a hint, I would be grateful.</p>
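<p>For the skipping itself, one regex-based sketch: rewrite only the fields whose names appear in the replacement mapping and leave the rest of the line untouched. The field names and values below are just the ones from the example file.</p>

```python
import re

line = "i = 1000000 j = -3 k = -15"
new_values = {"i": "-2000", "k": "195"}  # e.g. one row of the dataframe

def replace_fields(line, new_values):
    def repl(match):
        key = match.group(1)
        if key in new_values:          # only rewrite fields we have values for
            return f"{key} = {new_values[key]}"
        return match.group(0)          # leave e.g. "j = -3" exactly as is
    return re.sub(r"\b([ijk])\s*=\s*-?\d+", repl, line)

print(replace_fields(line, new_values))  # i = -2000 j = -3 k = 195
```

<p>Fields missing from the dataframe (here <code>j</code>) fall through the <code>if</code> and are returned unchanged, which is the skip behaviour the question asks about.</p>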
|
<python><python-3.x><pandas><dataframe><file-io>
|
2023-11-29 07:45:01
| 1
| 897
|
Ant
|
77,569,069
| 2,986,583
|
Working with Twitter OAuth 2.0 Authorization Code Flow with PKCE (User Context)
|
<p>I'm trying to play with the Twitter API using Tweepy.</p>
<p>This is my code:</p>
<pre><code>oauth2_user_handler = tweepy.OAuth2UserHandler(
client_id=client_id,
redirect_uri=redirect_uri,
scope=scopes,
client_secret=client_secret,
)
auth_url = oauth2_user_handler.get_authorization_url()
print('Please authorize this application: {}'.format(auth_url))
verifier = input('Enter the authorization response URL:')
token = oauth2_user_handler.fetch_token(verifier)
client = tweepy.Client(access_token=token['access_token'])
user = client.get_me(user_auth=False)
</code></pre>
<p>On the last line I'm getting this error:</p>
<blockquote>
<p>Authenticating with OAuth 2.0 Application-Only is forbidden for this
endpoint. Supported authentication types are [OAuth 1.0a User
Context, OAuth 2.0 User Context]</p>
</blockquote>
<p>So, it's telling me that I'm using OAuth 2.0 <strong>Application-Only</strong>, but everything I read says that this is how you get a <strong>user</strong> access token using OAuth 2.0 <strong>User Context</strong>.</p>
<p><a href="https://stackoverflow.com/questions/74331503/tweepy-get-me-error-when-using-oauth-2-0-user-auth-flow-with-pkce">This</a> answer helped me and got me to this point, but I can't pass that error.</p>
<p>What am I missing?</p>
|
<python><twitter><tweepy>
|
2023-11-29 06:43:11
| 1
| 360
|
asafd
|
77,569,044
| 15,948,240
|
Compute a distance matrix in a pandas DataFrame
|
<p>I would like to compute a distance matrix between all elements of two series:</p>
<pre><code>import pandas as pd
a = pd.Series([1,2,3], ['a', 'b', 'c'] )
b = pd.Series([4, 5, 6, 7], ['k', 'l', 'm', 'n'])
def dist(x, y):
return x - y #(or some arbitrary function)
</code></pre>
<p>I did achieve the expected result using numpy and converting to a dataframe to add the index and columns.</p>
<pre><code>import numpy as np
pd.DataFrame(a.values[np.newaxis, :] - b.values[:, np.newaxis],
columns=a.index,
index=b.index)
>>> a b c
k -3 -2 -1
l -4 -3 -2
m -5 -4 -3
n -6 -5 -4
</code></pre>
<p>This does not feel as robust as direct operations on the DataFrame; is there a way to achieve this in pandas?</p>
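<p>One pandas-native sketch: <code>Series.apply</code> with a function that returns a Series expands into a DataFrame, so each element of <code>b</code> is combined with all of <code>a</code> at once and the index/columns come along for free.</p>

```python
import pandas as pd

a = pd.Series([1, 2, 3], ['a', 'b', 'c'])
b = pd.Series([4, 5, 6, 7], ['k', 'l', 'm', 'n'])

def dist(x, y):
    return x - y  # any function vectorised over a Series works here

# Each row y of b yields the Series dist(a, y); apply stacks them.
result = b.apply(lambda y: dist(a, y))  # rows: b.index, columns: a.index
print(result)
```

<p>For the plain subtraction case, <code>np.subtract.outer</code> wrapped in a DataFrame (as in the question) is typically faster; the <code>apply</code> form trades some speed for keeping everything in pandas with an arbitrary <code>dist</code>.</p>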
|
<python><pandas><dataframe><distance>
|
2023-11-29 06:39:36
| 1
| 1,075
|
endive1783
|
77,568,727
| 15,956,657
|
What could be causing "collect2.exe: error: ld returned 1 exit status" when manually compiling C++ with pybind11?
|
<p>I have a C++ source file which compiles with pybind11 to create a module that Python can use. I got it to work on my Ubuntu machine with Python 3.8, but when I try to do it on my Windows machine it fails with <code>collect2.exe: error: ld returned 1 exit status</code>.</p>
<p>From the research that I've done, the issue is that the g++ compiler can't find the include and library paths. So I manually found the <code>Python.h</code> file in the <code>Python311\include</code> folder and set it to a variable in my <code>cmd</code> prompt. I also found the <code>python3.lib</code> file and set that var in my windows <code>cmd</code> prompt. I did the same for my pybind11 include path and all other include paths that I need.</p>
<p>Here is what I did:</p>
<pre><code>> set HASHMAP_INCLUDE=C:\path\to\other\headers\include
> set PYBIND11_INCLUDE=C:\path\to\pybind11_x64-windows\include
> set PYTHON_INCLUDE=C:\path\to\Python311\include
> set PYTHON_LIBS=C:\path\to\Python\Python311\libs
> g++ -O3 -Wall -shared -std=c++11 -I %HASHMAP_INCLUDE% -I %PYBIND11_INCLUDE% -I %PYTHON_INCLUDE% -L %PYTHON_LIBS% test.cpp -o test.cp311-win_amd64.pyd
</code></pre>
<p>This ensures all needed paths are included for compiling.</p>
<p>But it fails and outputs many lines that all say pretty much the same thing:
<code>...undefined reference to PyObject_GenericSetDict</code></p>
<p>And the last line is:</p>
<p><code>collect2.exe: error: ld returned 1 exit status</code></p>
<p>What does it take to use pybind11 on windows that is so different from linux?</p>
<p>Edit:</p>
<p>Here are some lines of the error output.</p>
<pre><code>c:/mingw/bin/../lib/gcc/x86_64-w64-mingw32/11.2.0/../../../../x86_64-w64-mingw32/bin/ld.exe: C:\Users\<username>\AppData\Local\Temp\ccQgaUv6.o:test.cpp:(.text$_ZN8pybind116detail16add_class_methodERNS_6objectEPKcRKNS_12cpp_functionE[_ZN8pybind116detail16add_class_methodERNS_6objectEPKcRKNS_12cpp_functionE]+0x8b): undefined reference to `__imp_PyGILState_Check'
c:/mingw/bin/../lib/gcc/x86_64-w64-mingw32/11.2.0/../../../../x86_64-w64-mingw32/bin/ld.exe: C:\Users\<username>\AppData\Local\Temp\ccQgaUv6.o:test.cpp:(.text$_ZN8pybind116detail16add_class_methodERNS_6objectEPKcRKNS_12cpp_functionE[_ZN8pybind116detail16add_class_methodERNS_6objectEPKcRKNS_12cpp_functionE]+0xd4): undefined reference to `__imp_PyObject_SetAttr'
c:/mingw/bin/../lib/gcc/x86_64-w64-mingw32/11.2.0/../../../../x86_64-w64-mingw32/bin/ld.exe: C:\Users\<username>\AppData\Local\Temp\ccQgaUv6.o:test.cpp:(.text$_ZN8pybind116detail16add_class_methodERNS_6objectEPKcRKNS_12cpp_functionE[_ZN8pybind116detail16add_class_methodERNS_6objectEPKcRKNS_12cpp_functionE]+0x164): undefined reference to `__imp__Py_NoneStruct'
c:/mingw/bin/../lib/gcc/x86_64-w64-mingw32/11.2.0/../../../../x86_64-w64-mingw32/bin/ld.exe: C:\Users\<username>\AppData\Local\Temp\ccQgaUv6.o:test.cpp:(.text$_ZN8pybind116detail16add_class_methodERNS_6objectEPKcRKNS_12cpp_functionE[_ZN8pybind116detail16add_class_methodERNS_6objectEPKcRKNS_12cpp_functionE]+0x195): undefined reference to `__imp_PyObject_SetAttrString'
c:/mingw/bin/../lib/gcc/x86_64-w64-mingw32/11.2.0/../../../../x86_64-w64-mingw32/bin/ld.exe: C:\Users\<username>\AppData\Local\Temp\ccQgaUv6.o:test.cpp:(.text$_ZN8pybind116detail16add_class_methodERNS_6objectEPKcRKNS_12cpp_functionE[_ZN8pybind116detail16add_class_methodERNS_6objectEPKcRKNS_12cpp_functionE]+0x1c5): undefined reference to `__imp__Py_Dealloc'
c:/mingw/bin/../lib/gcc/x86_64-w64-mingw32/11.2.0/../../../../x86_64-w64-mingw32/bin/ld.exe: C:\Users\<username>\AppData\Local\Temp\ccQgaUv6.o:test.cpp:(.text$_ZN8pybind116class_I4TestJEE3defIMS1_FvvEJEEERS2_PKcOT_DpRKT0_[_ZN8pybind116class_I4TestJEE3defIMS1_FvvEJEEERS2_PKcOT_DpRKT0_]+0x1b): undefined reference to `__imp__Py_NoneStruct'
c:/mingw/bin/../lib/gcc/x86_64-w64-mingw32/11.2.0/../../../../x86_64-w64-mingw32/bin/ld.exe: C:\Users\<username>\AppData\Local\Temp\ccQgaUv6.o:test.cpp:(.text$_ZN8pybind116class_I4TestJEE3defIMS1_FvvEJEEERS2_PKcOT_DpRKT0_[_ZN8pybind116class_I4TestJEE3defIMS1_FvvEJEEERS2_PKcOT_DpRKT0_]+0x3c): undefined reference to `__imp_PyGILState_Check'
c:/mingw/bin/../lib/gcc/x86_64-w64-mingw32/11.2.0/../../../../x86_64-w64-mingw32/bin/ld.exe: C:\Users\<username>\AppData\Local\Temp\ccQgaUv6.o:test.cpp:(.text$_ZN8pybind116class_I4TestJEE3defIMS1_FvvEJEEERS2_PKcOT_DpRKT0_[_ZN8pybind116class_I4TestJEE3defIMS1_FvvEJEEERS2_PKcOT_DpRKT0_]+0x5b): undefined reference to `__imp_PyObject_GetAttrString'
c:/mingw/bin/../lib/gcc/x86_64-w64-mingw32/11.2.0/../../../../x86_64-w64-mingw32/bin/ld.exe: C:\Users\<username>\AppData\Local\Temp\ccQgaUv6.o:test.cpp:(.text$_ZN8pybind116class_I4TestJEE3defIMS1_FvvEJEEERS2_PKcOT_DpRKT0_[_ZN8pybind116class_I4TestJEE3defIMS1_FvvEJEEERS2_PKcOT_DpRKT0_]+0x17d): undefined reference to `__imp__Py_Dealloc'
c:/mingw/bin/../lib/gcc/x86_64-w64-mingw32/11.2.0/../../../../x86_64-w64-mingw32/bin/ld.exe: C:\Users\<username>\AppData\Local\Temp\ccQgaUv6.o:test.cpp:(.text$_ZN8pybind116class_I4TestJEE3defIMS1_FvvEJEEERS2_PKcOT_DpRKT0_[_ZN8pybind116class_I4TestJEE3defIMS1_FvvEJEEERS2_PKcOT_DpRKT0_]+0x18d): undefined reference to `__imp__Py_Dealloc'
c:/mingw/bin/../lib/gcc/x86_64-w64-mingw32/11.2.0/../../../../x86_64-w64-mingw32/bin/ld.exe: C:\Users\<username>\AppData\Local\Temp\ccQgaUv6.o:test.cpp:(.text$_ZN8pybind116class_I4TestJEE3defIMS1_FvvEJEEERS2_PKcOT_DpRKT0_[_ZN8pybind116class_I4TestJEE3defIMS1_FvvEJEEERS2_PKcOT_DpRKT0_]+0x19d): undefined reference to `__imp__Py_Dealloc'
c:/mingw/bin/../lib/gcc/x86_64-w64-mingw32/11.2.0/../../../../x86_64-w64-mingw32/bin/ld.exe: C:\Users\<username>\AppData\Local\Temp\ccQgaUv6.o:test.cpp:(.text$_ZN8pybind116class_I4TestJEE3defIMS1_FvvEJEEERS2_PKcOT_DpRKT0_[_ZN8pybind116class_I4TestJEE3defIMS1_FvvEJEEERS2_PKcOT_DpRKT0_]+0x1aa): undefined reference to `__imp_PyErr_Clear'
c:/mingw/bin/../lib/gcc/x86_64-w64-mingw32/11.2.0/../../../../x86_64-w64-mingw32/bin/ld.exe: C:\Users\<username>\AppData\Local\Temp\ccQgaUv6.o:test.cpp:(.data$_ZZN8pybind116detail25enable_dynamic_attributesEP15_heaptypeobjectE6getset[_ZZN8pybind116detail25enable_dynamic_attributesEP15_heaptypeobjectE6getset]+0x8): undefined reference to `PyObject_GenericGetDict'
c:/mingw/bin/../lib/gcc/x86_64-w64-mingw32/11.2.0/../../../../x86_64-w64-mingw32/bin/ld.exe: C:\Users\<username>\AppData\Local\Temp\ccQgaUv6.o:test.cpp:(.data$_ZZN8pybind116detail25enable_dynamic_attributesEP15_heaptypeobjectE6getset[_ZZN8pybind116detail25enable_dynamic_attributesEP15_heaptypeobjectE6getset]+0x10): undefined reference to `PyObject_GenericSetDict'
</code></pre>
<p>It seems like pybind11 is trying to use Python.h methods and failing to find them. For example, <code>PyObject_GenericSetDict</code> is part of the <code>object.h</code> header file in the \Python311\include folder, which is the same folder where the <code>Python.h</code> file sits.</p>
<p>Any help appreciated.</p>
|
<python><c++><g++><pybind11>
|
2023-11-29 05:13:39
| 0
| 363
|
alvrm
|
77,568,701
| 678,572
|
How to wrap a Python executable to call another Python executable and use all of the argparse options?
|
<p>I have the following Python executable called <code>fastq_preprocessor.py</code> in my path. I'd like to create a wrapper called <code>fastq_preprocessor</code> that calls <code>fastq_preprocessor.py</code> which is in the same directory as <code>fastq_preprocessor</code>. I want to be able to use all of the command line arguments natively with minimal code. How can I configure <code>fastq_preprocessor</code> so it essentially calls <code>fastq_preprocessor.py</code> and utilizes all of the arguments in argparse?</p>
<p>My executable <code>fastq_preprocessor.py</code> has submodules. Here is the entire script:</p>
<pre class="lang-py prettyprint-override"><code>#!/usr/bin/env python
# Built-ins
import os, sys, argparse, importlib
# Version
__program__ = os.path.split(sys.argv[0])[-1]
__version__ = "2023.11.28"
# Accepted modules
accepted_programs = ["short", "long"]
script_directory = os.path.dirname(os.path.abspath( __file__ ))
# Controller
def main(argv=None):
parser = argparse.ArgumentParser(prog="fastq_preprocessor",description="A fastq preprocessor for short and long read sequencing. Optional contamination removal.", add_help=True)
parser.add_argument("program", choices=accepted_programs, help="`fastq_preprocessor` program for preprocessing. `short` for Illumina and `long` for ONT/PacBio.")
parser.add_argument("-c", "--citation", action='version', help="Show full citation (doi: 10.1186/s12859-022-04973-8)", version="Espinoza JL, Dupont CL.\nVEBA: a modular end-to-end suite for in silico recovery, clustering, and analysis of prokaryotic, microeukaryotic, and viral genomes from metagenomes.\nBMC Bioinformatics. 2022 Oct 12;23(1):419. doi: 10.1186/s12859-022-04973-8. PMID: 36224545.")
parser.add_argument("-v", "--version", action='version', version="{} v{}".format(__program__, __version__))
opts = parser.parse_args(argv)
return opts.program
# Initialize
if __name__ == "__main__":
# Check version
python_version = sys.version.split(" ")[0]
condition_1 = int(python_version.split(".")[0]) == 3
condition_2 = int(python_version.split(".")[1]) >= 6
assert all([condition_1, condition_2]), "Python version must be >= 3.6. You are running: {}\n{}".format(python_version, sys.executable)
# Get the algorithm
program = main([sys.argv[1]])
module = importlib.import_module("fastq_preprocessor_{}".format(program))
module.main(sys.argv[2:])
</code></pre>
<p>What is the minimal code to wrap <code>fastq_preprocessor.py</code> in <code>fastq_preprocessor</code>?</p>
<p>I tried this which I got to kind of work:</p>
<pre><code>#!/usr/bin/env python
# Built-ins
import os, sys, importlib
# Version
__program__ = os.path.split(sys.argv[0])[-1]
__version__ = "2023.11.28"
# Accepted modules
script_directory = os.path.dirname(os.path.abspath( __file__ ))
if __name__ == "__main__":
module = importlib.import_module("fastq_preprocessor")
# Call the main function with the command line arguments
module.main(sys.argv[1:])
</code></pre>
<p>I was able to do <code>./fastq_preprocessor -h</code> but when I do <code>./fastq_preprocessor short -h</code> or <code>./fastq_preprocessor long -h</code> it returns the same thing as <code>./fastq_preprocessor -h</code>.</p>
<p>Here's what I'm referring to when I say "it returns the same thing":</p>
<pre><code>(base) jespinozlt2-osx:fastq_preprocessor jespinoz$ ./fastq_preprocessor -h
usage: fastq_preprocessor [-h] [-c] [-v] {short,long}
A fastq preprocessor for short and long read sequencing. Optional contamination removal.
positional arguments:
{short,long} `fastq_preprocessor` program for preprocessing. `short` for Illumina and `long` for ONT/PacBio.
optional arguments:
-h, --help show this help message and exit
-c, --citation Show full citation (doi: 10.1186/s12859-022-04973-8)
-v, --version show program's version number and exit
(base) jespinozlt2-osx:fastq_preprocessor jespinoz$ ./fastq_preprocessor short -h
usage: fastq_preprocessor [-h] [-c] [-v] {short,long}
A fastq preprocessor for short and long read sequencing. Optional contamination removal.
positional arguments:
{short,long} `fastq_preprocessor` program for preprocessing. `short` for Illumina and `long` for ONT/PacBio.
optional arguments:
-h, --help show this help message and exit
-c, --citation Show full citation (doi: 10.1186/s12859-022-04973-8)
-v, --version show program's version number and exit
(base) jespinozlt2-osx:fastq_preprocessor jespinoz$ ./fastq_preprocessor long -h
usage: fastq_preprocessor [-h] [-c] [-v] {short,long}
A fastq preprocessor for short and long read sequencing. Optional contamination removal.
positional arguments:
{short,long} `fastq_preprocessor` program for preprocessing. `short` for Illumina and `long` for ONT/PacBio.
optional arguments:
-h, --help show this help message and exit
-c, --citation Show full citation (doi: 10.1186/s12859-022-04973-8)
-v, --version show program's version number and exit
</code></pre>
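<p>One minimal wrapper sketch: instead of re-importing the module and re-parsing arguments, execute the sibling script directly with <code>runpy</code> and forward every CLI argument, so the target's <code>argparse</code> (including the <code>short</code>/<code>long</code> subcommand help) sees exactly what was typed. The helper name and paths here are hypothetical.</p>

```python
import os
import runpy
import sys

def run_sibling(script_path, argv):
    """Run script_path as if it were invoked from the command line with argv."""
    saved_argv = sys.argv
    sys.argv = [script_path] + list(argv)
    try:
        # run_name="__main__" makes the target's `if __name__ == "__main__"` fire
        runpy.run_path(script_path, run_name="__main__")
    finally:
        sys.argv = saved_argv
```

<p>The wrapper file then reduces to calling <code>run_sibling(os.path.join(os.path.dirname(os.path.abspath(__file__)), "fastq_preprocessor.py"), sys.argv[1:])</code>. A <code>-h</code> in the target raises <code>SystemExit</code>, which propagates out of the wrapper as it would for a direct invocation.</p>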
|
<python><command-line-interface><executable><argparse>
|
2023-11-29 05:05:48
| 0
| 30,977
|
O.rka
|
77,568,379
| 14,488,413
|
How to run t-test on multiple pandas columns
|
<p>I want to write code (in a few lines) that runs a t-test on <code>Product</code> against each of <code>Purchase_cost</code>, <code>Warranty_years</code>, and <code>service_cost</code> at the same time.</p>
<pre><code># dataset
import pandas as pd
from scipy.stats import ttest_ind
data = {'Product': ['laptop', 'printer','printer','printer','laptop','printer','laptop','laptop','printer','printer'],
'Purchase_cost': [120.09, 150.45, 300.12, 450.11, 200.55,175.89,124.12,113.12,143.33,375.65],
'Warranty_years':[3,2,2,1,4,1,2,3,1,2],
'service_cost': [5,5,10,4,7,10,4,6,12,3]
}
df = pd.DataFrame(data)
print(df)
</code></pre>
<p>My code attempt for <code>Product</code> & <code>Purchase_cost</code> is below. I want to run the same t-test for <code>Product</code> & <code>Warranty_years</code> and <code>Product</code> & <code>service_cost</code>.</p>
<pre><code>
#define samples
group1 = df[df['Product']=='laptop']
group2 = df[df['Product']=='printer']
#perform independent two sample t-test
ttest_ind(group1['Purchase_cost'], group2['Purchase_cost'])
</code></pre>
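<p>A short sketch of one way to do it: keep the two group masks fixed and loop the column name through a dict comprehension, collecting one <code>ttest_ind</code> result per numeric column.</p>

```python
import pandas as pd
from scipy.stats import ttest_ind

df = pd.DataFrame({
    'Product': ['laptop', 'printer', 'printer', 'printer', 'laptop',
                'printer', 'laptop', 'laptop', 'printer', 'printer'],
    'Purchase_cost': [120.09, 150.45, 300.12, 450.11, 200.55,
                      175.89, 124.12, 113.12, 143.33, 375.65],
    'Warranty_years': [3, 2, 2, 1, 4, 1, 2, 3, 1, 2],
    'service_cost': [5, 5, 10, 4, 7, 10, 4, 6, 12, 3],
})

group1 = df[df['Product'] == 'laptop']
group2 = df[df['Product'] == 'printer']

# One independent two-sample t-test per column of interest
results = {
    col: ttest_ind(group1[col], group2[col])
    for col in ['Purchase_cost', 'Warranty_years', 'service_cost']
}
for col, res in results.items():
    print(col, res.statistic, res.pvalue)
```

<p>Selecting the columns with <code>df.select_dtypes('number').columns</code> instead of a hard-coded list generalises this to any numeric column.</p>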
|
<python><pandas><for-loop><scipy>
|
2023-11-29 03:16:04
| 1
| 322
|
nasa313
|
77,568,373
| 13,891,321
|
Control duplication of legend items in Python Plotly.express
|
<p>I am using plotly.express to produce a figure with 3 subplots. The legend repeats for each subplot, so I have 3 legend listings (one for each subplot) for each item. All operate together when I click them to turn off/turn on (which is fine, no need for independent operation).
Is there a way to reduce this to 1 legend listing per item? The legend is variable as it is a file name, so I can't use a fixed legend title.
The 'Ambient' part can be ignored for the purpose of this question.
<a href="https://i.sstatic.net/1a37B.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/1a37B.png" alt="enter image description here" /></a></p>
<pre><code> fig = make_subplots(rows=1, cols=3,
column_titles=['Velocity m/s', 'Temperature C',
'Salinity '],
row_titles=['Depth m', 'Depth m'],
shared_yaxes=True)
fig1 = px.line(dfs, x='Sound Velocity', y='Depth',
color='File')
fig2 = px.line(dfs, x='Temperature', y='Depth',
color='File')
fig3 = px.line(dfs, x='Salinity', y='Depth',
color='File')
dips = dfs['File'].nunique() # Count number of different files
for dip in range(dips):
fig.add_trace(fig1['data'][dip], row=1, col=1)
for dip in range(dips):
fig.add_trace(fig2['data'][dip], row=1, col=2)
for dip in range(dips):
fig.add_trace(fig3['data'][dip], row=1, col=3)
fig.update_layout(template="plotly_dark")
fig.add_trace(go.Scatter(y=[z_min, z_max], x=[rv_min, v_max],
line=dict(color='#D7CECE', width=2,
dash='dash'), name="Ambient"))
</code></pre>
<p>Excerpt from the DataFrame spanning two files (SVP-3 & SVP-4):</p>
<pre><code> Date/Time Depth Sound Velocity Pressure Temperature Conductivity Salinity Density File
1377 22/10/2023 05:17 -2.719 1545.445 2.719 29.25 59.854 36.58 1023.179 SVP-3@YOTI 2-DEL.txt
1378 22/10/2023 05:17 -2.092 1545.432 2.092 29.248 59.854 36.582 1023.178 SVP-3@YOTI 2-DEL.txt
1379 22/10/2023 05:17 -1.592 1545.418 1.592 29.248 59.852 36.581 1023.175 SVP-3@YOTI 2-DEL.txt
1380 22/10/2023 05:17 -1.178 1545.41 1.178 29.247 59.848 36.579 1023.172 SVP-3@YOTI 2-DEL.txt
1381 22/10/2023 05:17 -0.691 1545.408 0.691 29.246 59.854 36.584 1023.174 SVP-3@YOTI 2-DEL.txt
1382 22/10/2023 05:17 -0.171 1545.415 0.171 29.247 59.844 36.576 1023.166 SVP-3@YOTI 2-DEL.txt
1383 24/10/2023 15:59 -1.39 1543.341 1.39 29.397 56.71 34.315 1021.423 SVP-4@YOTI 2-DEL.txt
1384 24/10/2023 15:59 -1.585 1543.34 1.585 29.397 56.708 34.314 1021.423 SVP-4@YOTI 2-DEL.txt
1385 24/10/2023 15:59 -2.261 1543.356 2.261 29.395 56.704 34.312 1021.425 SVP-4@YOTI 2-DEL.txt
1386 24/10/2023 15:59 -2.788 1543.38 2.788 29.396 56.71 34.315 1021.429 SVP-4@YOTI 2-DEL.txt
1387 24/10/2023 15:59 -3.173 1543.383 3.173 29.396 56.712 34.317 1021.432 SVP-4@YOTI 2-DEL.txt
1388 24/10/2023 15:59 -3.591 1543.373 3.591 29.395 56.71 34.316 1021.434 SVP-4@YOTI 2-DEL.txt
1389 24/10/2023 15:59 -4.095 1543.358 4.095 29.39 56.706 34.316 1021.438 SVP-4@YOTI 2-DEL.txt
1390 24/10/2023 16:00 -4.654 1543.339 4.654 29.382 56.696 34.315 1021.442 SVP-4@YOTI 2-DEL.txt
1391 24/10/2023 16:00 -5.362 1543.197 5.362 29.35 56.64 34.3 1021.444 SVP-4@YOTI 2-DEL.txt
</code></pre>
<p>Problem solved, thanks to the answer from #EricLavault. Updated graphic exactly as required. Much neater code, used exactly as given, including reduced trace_groupgap suggestion. <a href="https://i.sstatic.net/VeHuw.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/VeHuw.png" alt="enter image description here" /></a></p>
|
<python><python-3.x><plotly>
|
2023-11-29 03:14:10
| 1
| 303
|
WillH
|
77,568,371
| 1,397,922
|
How to display the value of other fields of a related field in Odoo form views
|
<p>Is it possible to display the value of other fields of a related field? For example, by default, in Sale Order, the displayed value of <code>partner_id</code> is the value of <code>partner_id.name</code>. How can I display the value of <code>partner_id.mobile</code> instead of the default?</p>
<p>I've tried explicitly declaring <code>"partner_id.{FIELD}"</code> like below, but the SO model always reports that those fields are not available in the model:</p>
<pre><code><?xml version="1.0" encoding="utf-8"?>
<odoo>
<record id="sale_order_form_inherit" model="ir.ui.view">
<field name="name">sale.order.form.inherit</field>
<field name="model">sale.order</field>
<field name="inherit_id" ref="sale.sale_order_form"/>
<field name="arch" type="xml">
<notebook position="inside">
<page string="BSP Prinsipal" name="bsp_prinsipal_page">
<group>
<field name="partner_id.cp_logistik"/>
<field name="partner_id.cp_finance"/>
<field name="partner_id.cp_marketing"/>
</group>
</page>
</notebook>
</field>
</record>
</odoo>
</code></pre>
<p>Thanks in advance, by the way!</p>
|
<python><xml><odoo><odoo-8><odoo-15>
|
2023-11-29 03:13:46
| 2
| 550
|
Andromeda
|
77,568,251
| 4,451,521
|
Why does a print modify a pairwise?
|
<p>I have the following code</p>
<pre><code>import pandas as pd
import itertools as it
data={
'X': [1, 2, 3, 4, 5,6,7,8,9,10],
'Y': [5, 4, 3, 2, 1,2,3,4,5,5],
'Cat':["cat","dog",None,None,"dog","dog",None,None,"cat","dog" ]
}
df = pd.DataFrame(data)
x_pairs = it.pairwise(df['X'])
y_pairs = it.pairwise(df['Y'])
# print(list(x_pairs)) #<--Uncomment this
# print(list(y_pairs))
for x, y in zip(x_pairs, y_pairs):
print("x",x,"y",y)
</code></pre>
<p>This code works. It shows me the x and y pairs.</p>
<p>Why when I uncomment the print statement the loop does not work anymore?</p>
|
<python>
|
2023-11-29 02:31:19
| 0
| 10,576
|
KansaiRobot
|
77,568,216
| 20,122,390
|
How can I link lists of different sizes with numpy or pandas?
|
<p>I have a list of 7 Python elements (it will always be 7 elements because they correspond to the 7 days of the week). For example:</p>
<pre><code>values = [0.10870379696407702, 0.10279934722899532, 0.07997459688639645, 0.09342230637708339, 0.13035629501918441, 0.1704615636877088, 0.12471332159008049]
</code></pre>
<p>On the other hand, I have the 7 corresponding days of the week, stored in a tuple (it can be any structure):</p>
<pre><code>DAYS = ("2024-01-15", "2024-01-16", "2024-01-17", "2024-01-18", "2024-01-19", "2024-01-20", "2024-01-21")
</code></pre>
<p>Each day in <code>DAYS</code> corresponds to the value in <code>values</code> at the same position; that is, for 2024-01-15 the corresponding value is 0.10870379696407702.</p>
<p>It could be as simple as iterating over both structures and building the final dictionary, but I need each day's value repeated for every hour, keyed in YEAR-MONTH-DAY HOUR:MINUTE:SECOND format. That is, my final dictionary should look like this:</p>
<pre><code>{
"2024-01-15 00:00:00": 0.10870379696407702
"2024-01-15 01:00:00": 0.10870379696407702
"2024-01-15 02:00:00": 0.10870379696407702
...
"2024-01-21 23:00:00": 0.12471332159008049
}
</code></pre>
<p>How can I do this efficiently with numpy or pandas? I would rather not loop over every element by hand when these libraries are already available in the microservice. I tried with pandas but ended up getting tangled. Thank you.</p>
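<p>One hedged sketch with pandas (the shortened values below are made up): build a daily Series, reindex it onto an hourly index, and forward-fill, so each day's value is repeated for its 24 hours.</p>

```python
import pandas as pd

values = [0.108, 0.102, 0.079, 0.093, 0.130, 0.170, 0.124]  # shortened for readability
DAYS = ("2024-01-15", "2024-01-16", "2024-01-17", "2024-01-18",
        "2024-01-19", "2024-01-20", "2024-01-21")

daily = pd.Series(values, index=pd.to_datetime(DAYS))
# Hourly index covering 00:00 of the first day through 23:00 of the last:
hours = pd.date_range(DAYS[0], pd.Timestamp(DAYS[-1]) + pd.Timedelta(hours=23), freq="h")
hourly = daily.reindex(hours, method="ffill")
result = {ts.strftime("%Y-%m-%d %H:%M:%S"): v for ts, v in hourly.items()}
```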
|
<python><pandas><numpy>
|
2023-11-29 02:19:11
| 1
| 988
|
Diego L
|
77,567,998
| 4,896,087
|
How to override a parent class property attribute in Pydantic
|
<p>I want to override a parent class property decorated attribute like this:</p>
<pre><code>from pydantic import BaseModel
class Parent(BaseModel):
name: str = 'foo bar'
@property
def name_new(self):
return f"{'_'.join(self.name.split(' '))}"
class Child(Parent):
name_new = 'foo_bar_foo'
</code></pre>
<p>but I always got this error:
<code>NameError: Field name "name_new" shadows a BaseModel attribute; use a different field name with "alias='name_new'".</code></p>
<p>I tried this:</p>
<pre><code>from pydantic import BaseModel, Field
class Parent(BaseModel):
name: str = 'foo bar'
@property
def name_new(self):
return f"{'_'.join(self.name.split(' '))}"
class Child(Parent):
name_new_new = Field('foo_bar_foo', alias='name_new')
c = Child()
</code></pre>
<p>However, <code>c.name_new</code> still has a value of <code>foo_bar</code> instead of <code>foo_bar_foo</code>. How can I override in the Child class so that the <code>name_new</code> attribute has a value of <code>foo_bar_foo</code>? Thanks!</p>
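<p>One hedged possibility (a sketch, not necessarily the canonical Pydantic answer): override the property itself in the child class instead of declaring a shadowing field, since a plain property override is ordinary Python and sidesteps the field-name check entirely:</p>

```python
from pydantic import BaseModel

class Parent(BaseModel):
    name: str = 'foo bar'

    @property
    def name_new(self) -> str:
        return '_'.join(self.name.split(' '))

class Child(Parent):
    @property
    def name_new(self) -> str:  # plain property override, no new field declared
        return 'foo_bar_foo'

print(Parent().name_new)  # foo_bar
print(Child().name_new)   # foo_bar_foo
```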
|
<python><pydantic>
|
2023-11-29 01:00:30
| 1
| 3,613
|
George Liu
|
77,567,995
| 12,725,674
|
IndexError When Processing Downloaded PDF Files
|
<p>I've encountered an issue while automatically downloading .pdf files and subsequently moving them to a specific folder using Python. I've implemented a function to check if the download is completed before proceeding with the file processing. However, sometimes I'm facing an <code>IndexError: list index out of range</code> at the last line of my code.</p>
<pre><code>def download_wait():
seconds = 0
dl_wait = True
while dl_wait and seconds < 100:
time.sleep(1)
dl_wait = False
for fname in os.listdir(r"C:\Users\Testuser\Downloads"):
if fname.endswith('.crdownload') or fname.endswith('.tmp'):
dl_wait = True
seconds += 1
return seconds
Years = ["2010", "2011"]
for year in Years:
try:
report = driver.find_elements(By.XPATH, f"//span[@class='btn_archived download'][.//a[contains(@href,{year})]]")
if len(report) != 0:
report[0].click()
download_wait()
files = os.listdir(r"C:\Users\Testuser\Downloads")
filtered_files = [file for file in files if file.lower().endswith(('.pdf', '.htm'))]
print(files, year, filtered_files)
filename = filtered_files[0] # IndexError occurs here
</code></pre>
<p>The problem is that the <code>filtered_files</code> list is sometimes empty, leading to the IndexError. Even though there is a .pdf file in the Downloads folder, it seems that the list remains empty.</p>
<p>For example, in one attempt the output of <code>print(files, year, filtered_files)</code> is before the error occurs:</p>
<pre><code>['NYSE_XOM_2009.pdf'] 2009 ['NYSE_XOM_2009.pdf']
['NYSE_XOM_2010.pdf.crdownload'] 2010 []
</code></pre>
<p>I am wondering why <code>files</code> can take on <code>['NYSE_XOM_2010.pdf.crdownload']</code>, given that <code>download_wait()</code> should have prevented this from happening.</p>
<p>Any tips what might cause this issue would be much appreciated.
Thank you!</p>
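<p>A hedged sketch of a stricter wait (the helper name, directory path, and timeout are assumptions): besides waiting for partial files to disappear, also wait until a completed .pdf/.htm actually exists, since a download that has not started yet leaves no .crdownload for the original loop to detect:</p>

```python
import os
import time

def wait_for_download(download_dir, timeout=100):
    """Poll until a finished .pdf/.htm exists and no partial files remain.

    Returns the list of completed file names, or [] if the timeout expires.
    """
    for _ in range(timeout):
        names = os.listdir(download_dir)
        partial = [n for n in names if n.endswith(('.crdownload', '.tmp'))]
        done = [n for n in names if n.lower().endswith(('.pdf', '.htm'))]
        if done and not partial:
            return done
        time.sleep(1)
    return []
```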
|
<python><list>
|
2023-11-29 01:00:09
| 1
| 367
|
xxgaryxx
|
77,567,918
| 1,564,195
|
Need to convert a datetime.tzinfo *region* to a ZoneInfo region
|
<p>I'm writing a calendar application, and I have been trying to track down why canceled / rescheduled calendar entries keep appearing on my calendar. I traced the problem to a time zone issue with rruleset.</p>
<p>Let's say that I have a recurring weekly event and I'm in the US Eastern time zone. They're all scheduled at noon, but the recurrence overlaps the shift on November 5th from EDT / GMT-4 to EST / GMT-5.</p>
<p>If I do this:</p>
<pre><code> dtstart = datetime.datetime(2023, 10, 22, 12, 0, 0).astimezone()
dtend = datetime.datetime(2023, 11, 12, 12, 0, 0).astimezone()
instances = rrule.rruleset()
instances.rrule(rrule.rrule(freq=2, dtstart=dtstart, interval=1, wkst=1, until=dtend))
print(dtstart.tzinfo)
for i in instances:
print(i)
</code></pre>
<p>...I get four dates, but they're all pegged to EDT, because that's when dtstart begins:</p>
<pre><code> EDT
2023-10-22 12:00:00-04:00
2023-10-29 12:00:00-04:00
2023-11-05 12:00:00-04:00
2023-11-12 12:00:00-04:00
</code></pre>
<p>But if I do this:</p>
<pre><code> dtstart = datetime.datetime(2023, 10, 22, 12, 0, 0, tzinfo=zoneinfo.ZoneInfo('US/Eastern'))
dtend = datetime.datetime(2023, 11, 12, 12, 0, 0, tzinfo=zoneinfo.ZoneInfo('US/Eastern'))
instances = rrule.rruleset()
instances.rrule(rrule.rrule(freq=2, dtstart=dtstart, interval=1, wkst=1, until=dtend))
for i in instances:
print(i)
</code></pre>
<p>...then I get the correctly shifted times:</p>
<pre><code> 2023-10-22 12:00:00-04:00
2023-10-29 12:00:00-04:00
2023-11-05 12:00:00-05:00
2023-11-12 12:00:00-05:00
</code></pre>
<p>So it appears that all I need to do is to take a datetime and extract the datetime.tzinfo <em>zone</em> (i.e., US/Eastern) and use it to create a zoneinfo.ZoneInfo object. I'm finding it impossible to do that: ZoneInfo can't figure out the region from the tzname of 'EDT'. Any suggestions?</p>
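<p>For illustration, a sketch of the working approach with an explicit IANA key (the key itself is an assumption here; if only the local zone is known, the third-party <code>tzlocal</code> package's <code>get_localzone_name()</code> can supply it, since the tzinfo returned by <code>astimezone()</code> carries only an abbreviation like 'EDT', not a region):</p>

```python
import datetime
import zoneinfo

# from tzlocal import get_localzone_name   # pip install tzlocal (assumption)
# key = get_localzone_name()               # e.g. 'America/New_York'
key = 'America/New_York'
tz = zoneinfo.ZoneInfo(key)

before = datetime.datetime(2023, 10, 22, 12, tzinfo=tz)
after = datetime.datetime(2023, 11, 12, 12, tzinfo=tz)
print(before.utcoffset())  # -1 day, 20:00:00 (i.e. UTC-4, EDT)
print(after.utcoffset())   # -1 day, 19:00:00 (i.e. UTC-5, EST)
```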
|
<python><datetime><python-dateutil><zoneinfo>
|
2023-11-29 00:31:13
| 0
| 865
|
David Stein
|
77,567,641
| 1,230,724
|
Clear module import cache after executing Python code from zip file
|
<p>I'm looking for a way to reliably clear the module import cache after loading and executing code from zip files (the zip files with Python code serve as a "plugin").</p>
<p>The current way I'm using causes stale code to be executed, meaning code from previously imported zip files is executed. I'm looking for advice because the <a href="https://docs.python.org/3/library/importlib.html#importlib.invalidate_caches" rel="nofollow noreferrer">Python docs</a> (3.11) don't seem to sufficiently handle this case, at least not for me with the case of package imports from within the zip file (the package is also part of the zip file).</p>
<p>This is the code that loads the zip file:</p>
<pre><code>import importlib
from zipimport import zipimporter
# Load main.py:start_func()
# allow package imports in `main.py`
sys.path.insert(0, zippath)
importer = zipimporter(zippath)
main = importer.load_module('main')
# execute `start_func` function in main.py
main.start_func()
# Cleanup
for name, mod in list(sys.modules.items()):
if not hasattr(mod, '__file__') or not mod.__file__:
continue
if mod.__file__.startswith(zippath):
del sys.modules[name]
if zippath in sys.path:
sys.path.remove(zippath)
importlib.invalidate_caches()
</code></pre>
<p>The zip file has this structure and content</p>
<pre><code>/
+-- main.py
| from util import bar
|
| def start_func():
| bar()
| ...
|
+-- /foo
+-- __init__.py
|
+-- util.py
def bar():
print('hello')
</code></pre>
<p>The problem is that when I have multiple zip files with different functions in <code>util.py</code>, subsequent imports <em>can</em> mix up <code>util.py</code> and use one that was previously imported from another zip file (presumably because it was cached as part of the import during the execution/import of <code>start_func</code> from the zip file).</p>
<p>Are there any avenues that I can explore? For instance, is there a way to check the module cache to inspect where code would be executed from (which module - where it's located)? Should this way of importing and executing the zip files work in principle or is there anything else that needs to be done to cleanup after the execution of the zip function?</p>
<p>Any help is appreciated.</p>
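<p>As a starting point for the inspection part of the question, a hedged sketch of a helper that reports which cached modules originate from a given path (the attribute names checked are the usual ones; a zipimported module records the archive path inside <code>__file__</code> or <code>__spec__.origin</code>):</p>

```python
import sys

def cached_modules_from(path_prefix):
    """Return the names of sys.modules entries whose code was loaded from under path_prefix."""
    hits = []
    for name, mod in list(sys.modules.items()):
        origin = getattr(mod, '__file__', None) or ''
        spec = getattr(mod, '__spec__', None)
        if not origin and spec is not None and spec.origin:
            origin = spec.origin
        if origin.startswith(path_prefix):
            hits.append(name)
    return hits
```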
|
<python><module><zip><python-import>
|
2023-11-28 23:01:46
| 0
| 8,252
|
orange
|
77,567,521
| 10,623,444
|
Optimize computation of similarity scores by executing native polars command instead of UDF functions
|
<p><strong>Disclaimer (1):</strong> This question is supportive to <a href="https://stackoverflow.com/q/77541498/10623444">this SO</a>. After a request from two users to elaborate on my case.</p>
<p><strong>Disclaimer (2) - added 29/11:</strong> I have seen two solutions so far (proposed in this SO and the supportive one), that utilize the <code>explode()</code> functionality. Based on some benchmarks I did on the whole (~3m rows data) the RAM literally <em>explodes</em>, thus I will test the function on a sample of the dataset and if it works I will accept the solutions of <code>explode()</code> method for those who might experiment on smaller tables.</p>
<p>The input dataset (~3m rows) is the <code>ratings.csv</code> from the ml-latest dataset of 80_000 IMDb movies and respective ratings from 330_000 users (you may download the CSV file from <a href="https://drive.google.com/file/d/1C5AkLrXjUu-BlhgzZdIIqZ-paavDuYkt/view?usp=sharing" rel="nofollow noreferrer">here</a> - 891mb).</p>
<p>I load the dataset using <code>polars</code> like <code>movie_ratings = pl.read_csv(os.path.join(application_path + data_directory, "ratings.csv"))</code>, <code>application_path</code> and <code>data_directory</code> is a parent path on my local server.</p>
<p>Having read the dataset my goal is to generate the cosine similarity of a user between all the other users. To do so, first I have to transform the ratings table (~3m rows) to a table with 1 row per user. Thus, I run the following query</p>
<pre class="lang-py prettyprint-override"><code>## 1st computation bottleneck using UDF functions (2.5minutes for 250_000 rows)
users_metadata = movie_ratings.filter(
(pl.col("userId") != input_id) #input_id is a random userId. I prefer to make my tests using userId '1' so input_id=1 in this case.
).group_by("userId")\
.agg(
pl.col("movieId").unique().alias("user_movies"),
pl.col("rating").alias("user_ratings")
)\
.with_columns(
pl.col("user_movies").map_elements(
lambda row: sorted( list(set(row).intersection(set(user_rated_movies))) ), return_dtype=pl.List(pl.Int64)
).alias("common_movies")
)\
.with_columns(
pl.col("common_movies").map_elements(
lambda row: len(row), return_dtype=pl.Int64
).alias("common_movies_frequency")
)
similar_users = (
users_metadata.filter(
(pl.col("common_movies_frequency").le(len(user_rated_movies))) &
(pl.col("common_movies_frequency").gt(0)) # we don't want the users that don't have seen any movies from the ones seen/rated by the target user.
)
.sort("common_movies_frequency", descending=True)
)
## 2nd computation bottleneck using UDF functions
similar_users = (
similar_users.with_columns(
pl.struct(pl.all()).map_elements(
get_common_movie_ratings, #asked on StackOverflow
return_dtype=pl.List(pl.Float64),
strategy="threading"
).alias("common_movie_ratings")
).with_columns(
pl.struct(["common_movies"]).map_elements(
lambda row: get_target_movie_ratings(row, user_rated_movies, user_ratings),
return_dtype=pl.List(pl.Float64),
strategy="threading"
).alias("target_user_common_movie_ratings")
).with_columns(
pl.struct(["common_movie_ratings","target_user_common_movie_ratings"]).map_elements(
lambda row: compute_cosine(row),
return_dtype=pl.Float64,
strategy="threading"
).alias("similarity_score")
)
)
</code></pre>
<p>The code snippet above groups the table by userId and computes some important metadata about them. Specifically,</p>
<ul>
<li><p>user_movies, user_ratings per user</p>
</li>
<li><p>common_movies = intersection of the movies seen by the user that are the same as seen by the input_id user (thus user 1). Movies seen by the user 1 are basically <code>user_rated_movies = movie_ratings.filter(pl.col("userId") == input_id).select("movieId").to_numpy().ravel()</code></p>
</li>
<li><p>common_movies_frequency = The length of the column<code>common_movies</code> per user. NOT a fixed length per user.</p>
</li>
<li><p>common_movie_ratings = The result of the function I asked <a href="https://stackoverflow.com/q/77541498/10623444">here</a></p>
</li>
<li><p>target_user_common_movie_ratings = The ratings of the target user (user1) that match the indexes of the common movies with each user.</p>
</li>
<li><p>similarity_score = The cosine similarity score.</p>
</li>
</ul>
<p>Screenshot of the table (don't give attention to column <code>potential recommendations</code>)
<a href="https://i.sstatic.net/VIDTC.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/VIDTC.png" alt="enter image description here" /></a></p>
<p>Finally, I filter the table <code>users_metadata</code> down to all users whose common_movies_frequency is less than or equal to the 62 (<code>len(user_rated_movies)</code>) movies seen by user1. Those are a total of 250_000 users.</p>
<p>This table is the input dataframe for the UDF function I asked about in this question. Using this dataframe (~250_000 users) I want to calculate the cosine similarity of each user with user 1. To do so, I want to compare their rating similarity: on the movies commonly rated by each pair of users, compute the cosine similarity between the two arrays of ratings.</p>
<p>Below are the three UDF functions I use to support my functionality.</p>
<pre class="lang-py prettyprint-override"><code>def get_common_movie_ratings(row) -> pl.List(pl.Float64):
common_movies = row['common_movies']
user_ratings = row['user_ratings']
ratings_for_common_movies = [user_ratings[list(row['user_movies']).index(movie)] for movie in common_movies]
return ratings_for_common_movies
def get_target_movie_ratings(row, target_user_movies:np.ndarray, target_user_ratings:np.ndarray) -> pl.List(pl.Float64):
common_movies = row['common_movies']
target_user_common_ratings = [target_user_ratings[list(target_user_movies).index(movie)] for movie in common_movies]
return target_user_common_ratings
def compute_cosine(row)->pl.Float64:
array1 = row["common_movie_ratings"]
array2 = row["target_user_common_movie_ratings"]
magnitude1 = norm(array1)
magnitude2 = norm(array2)
if magnitude1 != 0 or magnitude2 != 0: #avoid division with 0 norms/magnitudes
score: float = np.dot(array1, array2) / (norm(array1) * norm(array2))
else:
score: float = 0.0
return score
</code></pre>
<blockquote>
<p>Benchmarks</p>
</blockquote>
<ul>
<li>Total execution time for 1 user is ~4 minutes. If I have to compute this in an iteration per user (1 dataframe per user), that will be approximately 4 minutes * 330_000 users.</li>
<li>3-5Gb of RAM while computing the polars df for 1 user.</li>
</ul>
<p>The main question is how can I transform those 3 UDF functions into native polars commands.</p>
<h3>logs from a custom logger I made</h3>
<blockquote>
<p>2023-11-29 13:40:24 - INFO - Computed potential similar user metadata for 254188 users in: 0:02:15.586497</p>
</blockquote>
<blockquote>
<p>2023-11-29 13:40:51 - INFO - Computed similarity scores for 194943 users in: 0:00:27.472388</p>
</blockquote>
<p>We can conclude that the main bottleneck of the code is when creating the <code>user_metadata</code> table.</p>
|
<python><user-defined-functions><python-polars>
|
2023-11-28 22:29:45
| 3
| 1,589
|
NikSp
|
77,567,407
| 18,587,779
|
How to add an input to a group in the obs web socket
|
<p>Obs WebSocket docs define that groups in OBS are actually scenes, but renamed and modified. In obs-web sockets, we treat them as scenes where we can.</p>
<pre><code>import obsws_python as obs
cl = obs.ReqClient(host='localhost', port=4444, password='password', timeout=3)
cl.create_input("Group","media","ffmpeg_source", {}, True)
</code></pre>
<p>but when I try to add an input to a specified group it gives me this error:</p>
<pre><code>obsws_python.error.OBSSDKRequestError: Request CreateInput returned code 602. With the message: The specified source is not a scene. (Is group)
</code></pre>
<p>How to add an input to a specific group?</p>
|
<python><websocket><obs>
|
2023-11-28 21:59:03
| 0
| 318
|
Mark Wasfy
|
77,567,365
| 9,788,900
|
Plot a graph from a dictionary in python
|
<p>I have a dictionary with following key-value pairs.</p>
<pre><code>21 {'05': {'210527_NS5001_01_AHT': 244.64}}
22 {'11': {'221101_NS5002_02_AHV': 173.79, '221104_NS5002_03_AHC': 192.17}}
23 {'03': {'230331_NS5002_04_AH3': 222.73}, '04': {'230405_NS5002_05_AHG': 223.97}}
</code></pre>
<p>Where, 21, 22, 23 are years, 05, 11, 03, and 04 are months and strings containing NS are run ids and the float values are densities.</p>
<p>I wish to plot the trends in changing density value for each year and month without the run ids. I have tried following code however it does not seem to generate plots I want</p>
<pre><code>plt.figure(figsize=(12,6))
for year, months in dateDensity.items():
x = [int(month) for month in months.keys()]
for month, runids in months.items():
densities = runids.values()
plt.plot(x, densities, label=year, marker='o')
#plt.plot(x, y, label=year, marker='o')
plt.title('Density Plot over Time')
plt.xlabel('Month')
plt.ylabel('Density')
plt.legend()
plt.savefig('density_plot.pdf')
</code></pre>
<p>How can I generate this plot so that there is a label for every year, and so that multiple density values for the same month are all included in the plot?</p>
<p><a href="https://i.sstatic.net/4F5xo.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/4F5xo.png" alt="enter image description here" /></a></p>
|
<python><matplotlib>
|
2023-11-28 21:52:09
| 1
| 343
|
Callie
|
77,567,341
| 499,362
|
Problem validating a certificate: [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: unable to get local issuer certificate
|
<p>A site my application submits requests to recently updated their SSL certificate and since that change, our requests have been failing with the well know certificate exception:</p>
<p><code>requests.exceptions.SSLError: HTTPSConnectionPool(host='XXX.YYY', port=443): Max retries exceeded with url: / (Caused by SSLError(SSLCertVerificationError(1, '[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: unable to get local issuer certificate (_ssl.c:1129)')))</code></p>
<p>Normally, I understand this would be an issue of the root cert in the chain not being present in the root store; however, unless I'm misunderstanding something that doesn't appear to be the case here as I'm able to successfully make requests to other sites with the exact same root certificate (as identified by its fingerprints) and I've verified that that certificate exists in the <code>certifi</code> <code>cacerts.pem</code>. Are there other causes for this exception?</p>
<p>Comparing the two certificates (working and non working) there are differences in the signature algorithm and public key size in the intermediate CA and server cert portions; could a lack of support for the algorithm lead to this error as well?</p>
<p>I am working with:</p>
<ul>
<li>Amazon Linux 2</li>
<li>Python 3.8</li>
<li>OpenSSL 1.0.2-fips</li>
</ul>
<p>Working cert has:</p>
<ul>
<li>Server
<ul>
<li>Public Key: RSA 2048</li>
<li>Signature Algorithm: SHA-256 with RSA Encryption</li>
</ul>
</li>
<li>Intermediate
<ul>
<li>Public Key: RSA 2048</li>
<li>Signature Algorithm: SHA-384 with RSA Encryption</li>
</ul>
</li>
</ul>
<p>Failing cert has:</p>
<ul>
<li>Server
<ul>
<li>Public Key: RSA 2048</li>
<li>Signature Algorithm: SHA-384 with RSA Encryption</li>
</ul>
</li>
<li>Intermediate
<ul>
<li>Public Key: RSA 3072</li>
<li>Signature Algorithm: SHA-384 with RSA Encryption</li>
</ul>
</li>
</ul>
|
<python><ssl><openssl><ssl-certificate>
|
2023-11-28 21:47:39
| 0
| 9,900
|
Michael C. O'Connor
|
77,567,326
| 4,726,737
|
Why does my altair mark_text() layer come out blurry/duplicated?
|
<p>I am trying to make heatmaps with text on top. I found a great <a href="https://altair-viz.github.io/user_guide/marks/text.html" rel="nofollow noreferrer">tutorial</a> on the Altair website and tried to adapt my dataset to it; the tutorial itself works fine for me. But when I run it with my own data:</p>
<pre class="lang-py prettyprint-override"><code>import altair as alt, pandas as pd, numpy as np
quarters = ['2022-Q1', '2022-Q2', '2022-Q3', '2022-Q4', '2023-Q1', '2023-Q2', '2023-Q3', '2023-Q4']
risk_bands = ['A++', 'A+', 'A', 'B', 'C', 'R']
vintages = [2020, 2021, 2022, 2023]
cpr = pd.DataFrame({
'quarter': np.random.choice(quarters, 1000),
'GEN5_RISK_BAND_CALC': np.random.choice(risk_bands, 1000),
'VINTAGE_YEAR': np.random.choice(vintages, 1000),
'cpr': np.random.randint(0, 40, 1000)
})
base = alt.Chart(cpr).mark_rect().encode(
x=alt.X('quarter:O', axis=(alt.Axis(labelAngle=45))),
y='VINTAGE_YEAR:O',
)
heatmap = base.mark_rect().encode(
color=alt.Color(
'cpr:Q',
scale=alt.Scale(
# domain=[0, 20, 40],
range=['white', 'red'],
# interpolate=method
),
legend=alt.Legend(direction='vertical', orient='left', title='cpr')
),
# color='cpr:Q',
# tooltip=tooltip
)
text = base.mark_text(
baseline='middle',
dx=0,
).encode(
text='cpr',
tooltip=[]
)
chart = alt.layer(base + heatmap + text).facet(
'GEN5_RISK_BAND_CALC:N',
columns=2
)
chart
# %%
</code></pre>
<p>I get this:</p>
<p><a href="https://i.sstatic.net/Jox3q.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Jox3q.png" alt="What I get" /></a></p>
|
<python><streamlit><altair>
|
2023-11-28 21:44:42
| 1
| 338
|
Derek Fulton
|
77,567,276
| 16,707,518
|
Performing a pandas operation on only a subset of columns when a specific criterion in one column is met
|
<p>I've got a data table with a fairly large number of columns - around 60. For a subset of around 10 of the columns, I'd like to perform a simple addition of a single value when the value in the date column is a specific date.</p>
<p>I've simplified it here - but I'd like to be able to specify the columns in the dataframe I want to adjust in a line of code - i.e.</p>
<pre><code>cols=['A', 'B', 'D', 'F']
</code></pre>
<p>If we simplify this to a shortened version, here's my table:</p>
<pre><code>Date A B C D E F
1/1/23 4 7 2 0 0 2
2/1/23 4 1 2 4 0 5
3/1/23 3 7 3 3 0 2
4/1/23 4 4 2 5 2 1
5/1/23 8 9 3 1 2 3
6/1/23 3 1 3 4 0 3
</code></pre>
<p>I want to add a fixed value to my chosen array of columns when the date = 5/1/23 (in the expected result below, the adjustment is −1).</p>
<p>Result I'm looking for:</p>
<pre><code>Date A B C D E F
1/1/23 4 7 2 0 0 2
2/1/23 4 1 2 4 0 5
3/1/23 3 7 3 3 0 2
4/1/23 4 4 2 5 2 1
5/1/23 7 8 3 0 2 2
6/1/23 3 1 3 4 0 3
</code></pre>
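<p>A hedged sketch (frame copied from the question; the −1 adjustment matches the expected result shown): boolean-index the matching rows with <code>.loc</code> and add the value to just the chosen columns in one statement:</p>

```python
import pandas as pd

df = pd.DataFrame({
    'Date': ['1/1/23', '2/1/23', '3/1/23', '4/1/23', '5/1/23', '6/1/23'],
    'A': [4, 4, 3, 4, 8, 3],
    'B': [7, 1, 7, 4, 9, 1],
    'C': [2, 2, 3, 2, 3, 3],
    'D': [0, 4, 3, 5, 1, 4],
    'E': [0, 0, 0, 2, 2, 0],
    'F': [2, 5, 2, 1, 3, 3],
})

cols = ['A', 'B', 'D', 'F']
# Select the rows where the date matches, restrict to `cols`, and adjust in place:
df.loc[df['Date'] == '5/1/23', cols] += -1
```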
|
<python><pandas>
|
2023-11-28 21:35:53
| 1
| 341
|
Richard Dixon
|
77,567,068
| 1,456,253
|
Is there a way of turning off progress indication for plink?
|
<p>I have a python plink wrapper that essentially calls down to the plink cli via calls that look like:</p>
<pre><code>import subprocess

def plink_wrapper(cmd, shell=False, check=True):
    result = subprocess.run(
        cmd, capture_output=True, shell=shell, check=check, encoding="utf-8"
    )
    return result.stdout
</code></pre>
<p>And my specific call looks something like:</p>
<pre><code>result=plink_wrapper("plink --file input_fileset --make-bed --out output_fileset")
</code></pre>
<p>Unfortunately, as part of stdout, plink includes a progress indication, which, when streamed to a file results in a staggering amount of cruft that looks like:</p>
<blockquote>
<p>\nScanning .ped file...
0%\b\b1%\b\b2%\b\b3%\b\b4%\b\b5%\b\b6%\b\b7%\b\b8%\b\b9%\b\b10%\b\b\b11%\b\b\b12%\b\b\b13%\b\b\b14%\b\b\b15%\b\b\b16%\b\b\b17%\b\b\b18%\b\b\b19%\b\b\b20%\b\b\b21%\b\b\b22%\b\b\b23%\b\b\b24%\b\b\b25%\b\b\b26%\b\b\b27%\b\b\b28%\b\b\b29%\b\b\b30%\b\b\b31%\b\b\b32%\b\b\b33%\b\b\b34%\b\b\b35%\b\b\b36%\b\b\b37%\b\b\b38%\b\b\b39%\b\b\b40%\b\b\b41%\b\b\b42%\b\b\b43%\b\b\b44%\b\b\b45%\b\b\b46%\b\b\b47%\b\b\</p>
</blockquote>
<p>(And this can be arbitrarily long depending on how long the job takes)</p>
<p>Is there an option that plink has to remove this progress indication? I still want the rest of stdout recorded, and regexing out things that look like percentages may run afoul of later output that I want to keep.</p>
<p>I've looked through the plink documentation, especially on the Command Options, Basic Usage and Reference Options sections, but didn't see anything related to progress indication.</p>
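<p>If plink itself offers no such switch, one hedged fallback is to post-process the captured stdout, replaying the terminal's backspace semantics so that only the final rendered text survives (a sketch; the helper name is made up, and it avoids the fragile percentage-regex approach):</p>

```python
import re

def strip_backspace_progress(text):
    """Apply terminal backspace semantics: each backspace erases the character before it."""
    while '\b' in text:
        text = re.sub('[^\b]\b', '', text)  # drop char+backspace pairs
        text = text.lstrip('\b')            # backspaces with nothing left to erase
    return text
```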
|
<python><plink-genomics>
|
2023-11-28 20:49:36
| 0
| 2,397
|
code11
|
77,567,056
| 823,633
|
Implementing a parallel evaluation queue for Jupyter notebook
|
<p>I want to execute long running functions in parallel within a Jupyter notebook, without blocking the notebook itself. However, I also want to add completely unrelated jobs to the queue at any time, even while the queue is still processing previous jobs. Basically, this would be analogous to an extremely simple version of the kind of job schedulers used for compute clusters.</p>
<p>My current implementation is something like this. First create a pool of workers and a job queue.</p>
<pre><code>import multiprocessing
def worker(job_queue, result_queue):
while True:
job = job_queue.get()
function, args, kwargs = job
result = function(*args, **kwargs)
result_queue.put(result)
job_queue = multiprocessing.Queue()
result_queue = multiprocessing.Queue()
pool = multiprocessing.Pool(processes=2, initializer=worker, initargs=[job_queue, result_queue])
</code></pre>
<p>Later, in a separate cell, we define a function that takes a long time to evaluate, so we want to run it without blocking the notebook</p>
<pre><code>def slow_function(x):
import time
time.sleep(3)
return x+1
</code></pre>
<p>then we send several jobs to the queue to execute in parallel in the background</p>
<pre><code>job_queue.put([slow_function, [1], {}])
job_queue.put([slow_function, [2], {}])
</code></pre>
<p>Then, while the previous jobs are still running, we define another function, and add that to the queue as well</p>
<pre><code>def complex_function(x):
return slow_function(x)*2
job_queue.put([complex_function, [1], {}])
job_queue.put([complex_function, [2], {}])
</code></pre>
<p>We continue running other things in the Jupyter notebook for a while, then after all the jobs are finished, we get the results</p>
<pre><code>result1 = result_queue.get(False)
result2 = result_queue.get(False)
result3 = result_queue.get(False)
result4 = result_queue.get(False)
</code></pre>
<p>That is the general workflow, I want to be able to do.</p>
<h1>Problems</h1>
<p>As implemented, after pool creation, the worker processes immediately crash and relaunch in an infinite loop, with pickle errors</p>
<pre><code>AttributeError: Can't get attribute 'worker' on <module '__main__' (built-in)>
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "C:\Users\yy51\AppData\Local\Programs\Miniconda3\Lib\multiprocessing\spawn.py", line 122, in spawn_main
exitcode = _main(fd, parent_sentinel)
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\yy51\AppData\Local\Programs\Miniconda3\Lib\multiprocessing\spawn.py", line 132, in _main
self = reduction.pickle.load(from_parent)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
</code></pre>
<p>If I switch to using <code>pathos</code> using <code>pool = pathos.multiprocessing.Pool(processes=2, initializer=worker, initargs=[job_queue, result_queue])</code>, then I get <code>RuntimeError: Queue objects should only be shared between processes through inheritance</code></p>
<p>I can kind of get it working if I replace <code>multiprocessing.Queue</code> with <code>pathos</code> queues <code>pathos.helpers.mp.Queue</code></p>
<pre><code>job_queue = pathos.helpers.mp.Queue()
result_queue = pathos.helpers.mp.Queue()
</code></pre>
<p>But it still doesn't work if I try to call another defined function within <code>complex_function</code></p>
<pre><code>job_queue.put([complex_function, [1], {}])
NameError: name 'slow_function' is not defined
</code></pre>
<h1>Restrictions</h1>
<p>I also really, really, do not want to create a separate file to host <code>slow_function</code> or <code>complex_function</code> in a module, as it would be massively inconvenient for interactive notebook usage.</p>
<p>I'd also want to define new functions <em>after</em> creating the process pool. (I'm aware sometimes it's easier to define functions before the pool)</p>
<p>Note: I'm running this on Windows.</p>
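<p>For comparison, a hedged thread-based sketch of the same queue (no pickling is involved, so functions defined later in the notebook are visible to the workers even on Windows; the trade-off is that the GIL limits CPU-bound parallelism):</p>

```python
import queue
import threading
import time

job_queue = queue.Queue()
result_queue = queue.Queue()

def worker():
    # Each worker pulls (function, args, kwargs) tuples forever.
    while True:
        function, args, kwargs = job_queue.get()
        result_queue.put(function(*args, **kwargs))
        job_queue.task_done()

for _ in range(2):
    threading.Thread(target=worker, daemon=True).start()

def slow_function(x):
    time.sleep(0.1)
    return x + 1

job_queue.put((slow_function, (1,), {}))
job_queue.put((slow_function, (2,), {}))
job_queue.join()  # in a notebook you would skip this and collect results later
```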
|
<python><jupyter-notebook><pickle><python-multiprocessing>
|
2023-11-28 20:46:58
| 1
| 1,410
|
goweon
|
77,566,914
| 519,422
|
How to write specific values from a Pandas (Python) dataframe to a specific place in a file (i.e., after an identifier)?
|
<p>I have a file that's made up of entries like:</p>
<pre><code>A first = 4 | 1_3_5_4 Name1
labelToSkip
i = 1000000 j = -3 k = -15
end
B first = 4 | 9_2_2_4 Name2
labelToSkip
i = 150000 j = -3 k = -20
end
...
</code></pre>
<p>I have managed to construct a Pandas dataframe (df) that contains data I have read in and modified from another file. The dataframe looks like this:</p>
<pre><code> i j k
0 unit1 unit2 unit3
1 1000 100 84
2 -3000 200 60
3 -2000 90 195
4 900 40 209
</code></pre>
<p>Now I want to choose a row from the dataframe (like row 3) and put the i, j, k values in the first file.</p>
<p>For example, I would want to put the i, j, k values from row 3 of the dataframe:</p>
<pre><code>3 -2000 90 195
</code></pre>
<p>in place of the i, j, k values in an entry of my choosing (like "B") to get:</p>
<pre><code>B first = 4 | 9_2_2_4 Name2
labelToSkip
i = -2000 j = 90 k = 195
end
</code></pre>
<p>In reality, the entries are quite complicated and the values I need to replace are not always on the third line of the entry. The main thing I need help with is how to (1) find "B" or "Name2" in the file and then (2) replace the value after a specific identifier under "B" or "Name2."</p>
<p>I apologize for not providing an attempt. I know how to write a dataframe to a .txt file (for example, from this post: <a href="https://stackoverflow.com/questions/51829923/write-a-pandas-dataframe-to-a-txt-file">write a Pandas dataframe to a .txt file</a>). I have also found out how to replace specific values within a dataframe. However, I can't find any information on how to put specific values from a dataframe after an identifier in an external file. If anyone could please provide a hint, I would be very grateful.</p>
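<p>Here is one hedged sketch (the regex assumes the exact layout shown in the question, and <code>patch_entry</code> is a made-up helper): locate the entry by its leading identifier, then substitute the i/j/k line inside it, leaving all other entries untouched:</p>

```python
import re

def patch_entry(text, entry_id, i, j, k):
    """Replace the i/j/k values inside the entry whose header line starts with `entry_id `."""
    pattern = re.compile(
        rf'(^{entry_id} .*?)(i = \S+ j = \S+ k = \S+)(.*?^end)',
        re.DOTALL | re.MULTILINE,
    )
    return pattern.sub(rf'\g<1>i = {i} j = {j} k = {k}\g<3>', text, count=1)

file_text = """A first = 4 | 1_3_5_4 Name1
labelToSkip
i = 1000000 j = -3 k = -15
end
B first = 4 | 9_2_2_4 Name2
labelToSkip
i = 150000 j = -3 k = -20
end
"""

i, j, k = -2000, 90, 195  # e.g. the values from df.loc[3, ['i', 'j', 'k']]
patched = patch_entry(file_text, 'B', i, j, k)
```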
|
<python><python-3.x><pandas><dataframe><file-io>
|
2023-11-28 20:14:20
| 1
| 897
|
Ant
|
77,566,724
| 4,126,269
|
Handle circular import of subclasses types in abstract class
|
<p>I am facing a circular import error when I define type aliases for the subclasses of an abstract class.</p>
<p>This is an example of what I'm trying to achieve:</p>
<pre class="lang-py prettyprint-override"><code>#abstract_file_builder.py
from abc import ABC, abstractmethod
from typing import Generic, MutableSequence, TypeVar
from mymodule.type_a_file_builder import TypeARow
from mymodule.type_b_file_builder import TypeBRow
GenericRow = TypeVar("GenericRow", TypeARow, TypeBRow)
class AbstractFileBuilder(ABC, Generic[GenericRow]):
...
@abstractmethod
def generate_rows(
self,
) -> MutableSequence[GenericRow]:
pass
</code></pre>
<pre class="lang-py prettyprint-override"><code>#type_a_file_builder.py
from typing import Any, MutableSequence
from mymodule.abstract_file_builder import AbstractFileBuilder
TypeARow = MutableSequence[Any]
class TypeAFileBuilder(AbstractFileBuilder[TypeARow]):
...
def generate_rows(
self,
) -> MutableSequence[TypeARow]:
... # Code logic for TypeA
return rows
</code></pre>
<pre class="lang-py prettyprint-override"><code>#type_b_file_builder.py
from typing import MutableSequence, Union
from mymodule.abstract_file_builder import AbstractFileBuilder
TypeBRow = MutableSequence[Union[int, float]]
class TypeBFileBuilder(AbstractFileBuilder[TypeBRow]):
...
def generate_rows(
self,
) -> MutableSequence[TypeBRow]:
... # Code logic for TypeB
return rows
</code></pre>
<p>What is the most pythonic way to solve this?</p>
<p>I know I can use the <code>TYPE_CHECKING</code> variable to avoid runtime imports, but that feels like a patch instead of a good solution.</p>
<p>Another thing that can solve the problem is to define the type aliases in the abstract class, but that would ruin the whole purpose of having an abstract class and not having to know what's implemented below.</p>
<p>I'm not sure however if I can do some form of an "abstract" type alias inside the <code>abstract_file_builder.py</code> file, and then declare the TypeARow and TypeBRow types as children of that abstract type.</p>
<p>I must note that the solution must work with at least <code>Python 3.9</code>. Support for versions back to <code>3.7</code> would be even better, but is not strictly necessary.</p>
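<p>For reference, one direction that sidesteps the cycle (a sketch, not a drop-in fix) is to replace the constrained <code>TypeVar</code> with a bound one, so the abstract module never has to import <code>TypeARow</code>/<code>TypeBRow</code>. Unlike the rejected option, the abstract module only states the broadest row shape, not the per-subclass aliases. Names below mirror the question:</p>

```python
# abstract_file_builder.py (sketch): a bound TypeVar instead of an explicit
# constraint list removes the need to import the subclass aliases here.
from abc import ABC, abstractmethod
from typing import Any, Generic, MutableSequence, TypeVar

Row = MutableSequence[Any]                      # broadest row shape
GenericRow = TypeVar("GenericRow", bound=Row)   # no subclass imports needed

class AbstractFileBuilder(ABC, Generic[GenericRow]):
    @abstractmethod
    def generate_rows(self) -> MutableSequence[GenericRow]:
        ...

# A subclass module would then import only from the abstract module:
class TypeAFileBuilder(AbstractFileBuilder[MutableSequence[Any]]):
    def generate_rows(self) -> MutableSequence[MutableSequence[Any]]:
        return [[1, "a"]]
```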
|
<python><python-typing>
|
2023-11-28 19:38:36
| 1
| 936
|
CarlosMorente
|
77,566,660
| 8,223,979
|
How to produce barplot in bins?
|
<p>I produce a barplot like this:</p>
<pre><code>sns.barplot(data=number_counts, x="number", y="counts")
</code></pre>
<p>number_counts is a dataframe that looks like this:</p>
<pre><code>number counts
1 7
2 8
3 10
4 12
5 14
6 9
7 6
8 3
9 2
10 1
...etc
</code></pre>
<p>However, I'd like the x-axis to be in bins: instead of showing 1, 2, 3, 4, 5, ...,
it should show bars for 0-5, 5-10, 10-15, etc.</p>
<p>I tried using a histogram without success:</p>
<pre><code>sns.histplot(data=number_counts, x="number", bins=list(np.arange(0, 100, 5)), stat='count')
</code></pre>
<p>What am I doing wrong?</p>
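<p>One direction that might work (a sketch, assuming the <code>number_counts</code> frame from the question): since the data is already aggregated, bin the <code>number</code> column with <code>pd.cut</code>, sum the counts per bin, and bar-plot the result.</p>

```python
import pandas as pd

# Reconstruct a small number_counts like the one in the question.
number_counts = pd.DataFrame({
    "number": range(1, 11),
    "counts": [7, 8, 10, 12, 14, 9, 6, 3, 2, 1],
})

# Assign every number to a 5-wide interval, then aggregate counts per bin.
bins = pd.cut(number_counts["number"], bins=range(0, 105, 5))
binned = number_counts.groupby(bins, observed=True)["counts"].sum().reset_index()

# binned["number"] now holds intervals like (0, 5]; plot with e.g.:
# sns.barplot(data=binned, x="number", y="counts")
```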
|
<python><seaborn><bar-chart><histogram>
|
2023-11-28 19:28:44
| 1
| 1,097
|
Caterina
|
77,566,656
| 2,192,824
|
How to use Bazel to build ELF file for linux?
|
<p>I'm new to Bazel, but I want to use it to build my Python code into an ELF file (currently I'm building it into a .par file). The goal is an ELF file that can be launched by double-clicking, just like double-clicking an .exe file on Windows. Is there any way to do that?</p>
<p>Thanks!</p>
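<p>For reference, the usual Bazel route to a runnable launcher on Linux is a <code>py_binary</code> target (a sketch; the target and file names below are made up, and whether a file manager launches the result on double-click depends on the desktop environment):</p>

```
# BUILD file (Starlark) -- minimal sketch, assumes rules_python is set up.
py_binary(
    name = "app",
    srcs = ["app.py"],
)
# `bazel build //:app` places an executable launcher under bazel-bin/.
```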
|
<python><linux><bazel><elf>
|
2023-11-28 19:28:38
| 0
| 417
|
Ames ISU
|
77,566,461
| 6,562,240
|
Pandas For Loop with Two Variables
|
<p>I am trying to create a new column in a dataframe based on the values in two other columns:</p>
<pre><code>names_df['Surname'] = [
'MISSING' if i != '' and j == '' else j
for i, j in names_df['Name Entry 1'], names_df['Name Entry 2']
]
</code></pre>
<p>However, I am getting an invalid syntax error and I don't see where this is going wrong.</p>
<p>Is this the best way to build up multiple elif-style conditions (I want to add more)? I found <code>.apply()</code> quite clumsy when there are many else branches.</p>
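<p>For reference, a comprehension over two columns in parallel needs an explicit <code>zip(...)</code>; a minimal runnable sketch of that pattern with made-up data (column names mirror the question):</p>

```python
import pandas as pd

# Hypothetical sample data in the shape described by the question.
names_df = pd.DataFrame({
    "Name Entry 1": ["Ann", "Bob", ""],
    "Name Entry 2": ["Lee", "", ""],
})

# zip() pairs the two columns row by row inside the comprehension.
names_df["Surname"] = [
    "MISSING" if i != "" and j == "" else j
    for i, j in zip(names_df["Name Entry 1"], names_df["Name Entry 2"])
]
```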
|
<python><pandas>
|
2023-11-28 18:54:14
| 3
| 705
|
Curious Student
|
77,566,421
| 4,542,117
|
Python IDW with latitude / longitude points
|
<p>Utilizing one of the answers from:
<a href="https://stackoverflow.com/questions/3104781/inverse-distance-weighted-idw-interpolation-with-python?noredirect=1&lq=1">Inverse Distance Weighted (IDW) Interpolation with Python</a></p>
<p>I am looking to convert this IDW method into a latitude/longitude problem. For example, if I have latitude/longitude points within a latitude/longitude grid, how much do they influence one another? The idea would be, in the end, to have a 'threshold' distance beyond which points no longer influence one another.</p>
<p>Below is a simple attempt to initialize a lat/lon grid with points for the analyses.</p>
<pre><code>import numpy as np
import matplotlib.pyplot as plt
def simple_idw(x, y, z, xi, yi):
dist = distance_matrix(x,y, xi,yi)
# In IDW, weights are 1 / distance
weights = 1.0 / dist
# Make weights sum to one
weights /= weights.sum(axis=0)
# Multiply the weights for each interpolated point by all observed Z-values
#zi = np.dot(weights.T, z)
return weights
def distance_matrix(x0, y0, x1, y1):
obs = np.vstack((x0, y0)).T
interp = np.vstack((x1, y1)).T
# Make a distance matrix between pairwise observations
# Note: from <http://stackoverflow.com/questions/1871536>
# (Yay for ufuncs!)
d0 = np.subtract.outer(obs[:,0], interp[:,0])
d1 = np.subtract.outer(obs[:,1], interp[:,1])
return np.hypot(d0, d1)
def plot(x,y,z,grid):
plt.figure()
plt.imshow(grid, extent=(-130,-60, 20,55))
plt.xlim([-100,-90])
plt.ylim([30,40])
plt.scatter(x,y,c=z)
plt.colorbar()
# Setup: Generate data...
x = np.arange(-100,-90) # Lon points
y = np.arange(30,40) # Lat points
z = np.arange(0,10)/10 # Z values
xii = np.arange(-130,-60,0.01) # Lon grid
yii = np.arange(20,55,0.01) # Lat grid
xi, yi = np.meshgrid(xii, yii)
xi, yi = xi.flatten(), yi.flatten()
# Calculate IDW
grid2 = simple_idw(x,y,z,xi,yi)
grid1 = grid2[0,:]
grid0 = grid1.reshape((len(yii), len(xii)))
# Because of the lat/lon stuff, flip our grid upside-down (along x-axis)
grid0 = np.flipud(grid0)
# Comparisons...
plot(x,y,z,grid0)
plt.title('Homemade IDW')
plt.show()
</code></pre>
<p><a href="https://i.sstatic.net/FIvPb.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/FIvPb.png" alt="img1" /></a></p>
<p>There are several questions here, but the most perplexing is: why isn't the 'heatmap' behaving properly? For example, the scatter points behave as one would expect based on the z values, but shouldn't the background 'grid0' reflect this too? Furthermore, it looks as if the 'max' values in the background grid are in the bottom-left corner. What am I overlooking?</p>
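<p>A small numeric check of the weighting logic above may help narrow this down: each column of <code>weights</code> should sum to one, and row <code>k</code> is the influence field of observation <code>k</code> alone, so <code>grid2[0,:]</code> peaks near the first observation point. A sketch with tiny made-up arrays:</p>

```python
import numpy as np

x0 = np.array([-100.0, -95.0])   # two observation points (lon)
y0 = np.array([30.0, 35.0])      # two observation points (lat)
xi = np.array([-99.0, -90.0])    # two interpolation points (lon)
yi = np.array([31.0, 40.0])      # two interpolation points (lat)

# Same pairwise-distance construction as distance_matrix() above.
d0 = np.subtract.outer(x0, xi)
d1 = np.subtract.outer(y0, yi)
dist = np.hypot(d0, d1)

weights = 1.0 / dist
weights /= weights.sum(axis=0)   # each column now sums to 1
```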
|
<python><numpy>
|
2023-11-28 18:45:24
| 0
| 374
|
Miss_Orchid
|
77,566,289
| 2,558,241
|
In Django ORM, wildcard \b does not work (Postgres database)
|
<p>All other wildcards work correctly, but when I use the <code>"\bsome\b"</code> pattern, for example, it finds no results, even though many rows in the database contain the word "some".</p>
<p>Other wildcards such as <code>.</code>, <code>+</code>, <code>*</code> and <code>\w</code> work properly.</p>
<p>Any idea on what is the problem?</p>
<p>Code:</p>
<pre><code>regex_pattern = r"\bsome\b"
result = tweets.filter(text__regex=regex_pattern)
</code></pre>
|
<python><django><postgresql><django-orm>
|
2023-11-28 18:24:52
| 1
| 1,436
|
Mehrdad Salimi
|
77,566,198
| 6,141,238
|
In Python, why is y**2 sometimes incorrect when y is a numpy array?
|
<p>First I compute a square in the IDLE shell of Python 3.12.0:</p>
<pre><code>>>> x = 123456
>>> x**2
15241383936
</code></pre>
<p>This agrees with my TI-89. Now I compute what I would think would be the same square using numpy arrays:</p>
<pre><code>>>> import numpy as np
>>> y = np.array([x])
>>> ysqr = y**2
>>> ysqr
array([-1938485248])
>>> ysqr[0]
-1938485248
</code></pre>
<p>What is going on here? Why aren't <code>x**2</code> and <code>ysqr[0]</code> equal?</p>
<hr />
<p><strong>A solution:</strong> Just to note for those using Python for scientific computing, we can make <code>ysqr[0]</code> equal to <code>x**2</code> in the above example by changing x to a float:</p>
<pre><code>>>> import numpy as np
>>> y = np.array([float(x)])
>>> ysqr = y**2
>>> ysqr
array([1.52413839e+10])
>>> ysqr[0]
15241383936.0
</code></pre>
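<p>Another sketch that keeps exact integer arithmetic: request a 64-bit integer dtype explicitly, so the array doesn't fall back to a 32-bit int on platforms where that is the default.</p>

```python
import numpy as np

x = 123456
y = np.array([x], dtype=np.int64)  # force 64-bit integers; no overflow here
ysqr = y**2                        # ysqr[0] matches the exact Python result
```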
|
<python><arrays><numpy><exponent>
|
2023-11-28 18:08:56
| 1
| 427
|
SapereAude
|
77,566,113
| 9,008,162
|
I'm getting WinError 10060; might it be caused by a python thread visiting the server too frequently?
|
<p>I use a Python thread to download data from APIs. I'm not sure how the requests arrive at the server, but I set it to make no more than 1000 calls per minute. It works well most of the time, but occasionally I get <strong>WinError 10060</strong> (without modifying my code). Whenever this happens, I simply re-download and it typically works without issue.</p>
<p>When I contacted the API provider about the 10060 issue, they indicated it had nothing to do with their server and that it could be due to a Windows socket. They told me to test on Linux to see if the same problem occurs, but that is too much work for me.</p>
<p>So, what is the problem, and is there a solution?</p>
|
<python><multithreading><sockets>
|
2023-11-28 17:54:27
| 0
| 775
|
saga
|
77,566,062
| 7,077,532
|
Python Dataframe: Transpose 4/5 Columns in a Dataframe And Hold Date Column Without Transposing
|
<p>I have a sample input dataframe:</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>Name</th>
<th style="text-align: right;">Date</th>
<th style="text-align: right;">Score</th>
<th style="text-align: right;">Target</th>
<th style="text-align: right;">Difference</th>
</tr>
</thead>
<tbody>
<tr>
<td>Jim</td>
<td style="text-align: right;">2023-10-09</td>
<td style="text-align: right;">9</td>
<td style="text-align: right;">12</td>
<td style="text-align: right;">3</td>
</tr>
<tr>
<td>Jim</td>
<td style="text-align: right;">2023-10-16</td>
<td style="text-align: right;">13</td>
<td style="text-align: right;">16</td>
<td style="text-align: right;">3</td>
</tr>
<tr>
<td>Andy</td>
<td style="text-align: right;">2023-10-09</td>
<td style="text-align: right;">7</td>
<td style="text-align: right;">7</td>
<td style="text-align: right;">0</td>
</tr>
<tr>
<td>Andy</td>
<td style="text-align: right;">2023-10-16</td>
<td style="text-align: right;">5</td>
<td style="text-align: right;">20</td>
<td style="text-align: right;">15</td>
</tr>
</tbody>
</table>
</div>
<p>Python code to create table:</p>
<pre><code>import pandas as pd
df = pd.DataFrame({'Name':["Jim","Jim","Andy", "Andy"], 'Date':['2023-10-09', '2023-10-16', '2023-10-09', "2023-10-16"], 'Score':["9","13","7", "5"], 'Target':["12","16","7", "20"], 'Difference':["3","3","0", "15"]})
</code></pre>
<p>I want to transpose the above table by Name and have the rows be Date, Score, Target, and Difference. The desired output table is below:</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>Date</th>
<th style="text-align: right;">Category</th>
<th style="text-align: right;">Jim</th>
<th style="text-align: right;">Andy</th>
</tr>
</thead>
<tbody>
<tr>
<td>2023-10-09</td>
<td style="text-align: right;">Score</td>
<td style="text-align: right;">9</td>
<td style="text-align: right;">7</td>
</tr>
<tr>
<td></td>
<td style="text-align: right;">Target</td>
<td style="text-align: right;">12</td>
<td style="text-align: right;">7</td>
</tr>
<tr>
<td></td>
<td style="text-align: right;">Difference</td>
<td style="text-align: right;">3</td>
<td style="text-align: right;">0</td>
</tr>
<tr>
<td>2023-10-16</td>
<td style="text-align: right;">Score</td>
<td style="text-align: right;">13</td>
<td style="text-align: right;">5</td>
</tr>
<tr>
<td></td>
<td style="text-align: right;">Target</td>
<td style="text-align: right;">16</td>
<td style="text-align: right;">20</td>
</tr>
<tr>
<td></td>
<td style="text-align: right;">Difference</td>
<td style="text-align: right;">3</td>
<td style="text-align: right;">15</td>
</tr>
</tbody>
</table>
</div>
<p>I tried doing this with the code below but it doesn't produce the desired transposed table grouping by Date and Category columns.</p>
<pre><code>df_2 =df.T
</code></pre>
<p>df_2 produces the following output which is transposing "Date" column which I don't want.</p>
<p><a href="https://i.sstatic.net/FzmiS.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/FzmiS.png" alt="enter image description here" /></a></p>
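<p>For what it's worth, one reshape that produces roughly the desired layout (a sketch; the row ordering of the categories may differ from the table above) is melt-then-pivot:</p>

```python
import pandas as pd

df = pd.DataFrame({
    "Name": ["Jim", "Jim", "Andy", "Andy"],
    "Date": ["2023-10-09", "2023-10-16", "2023-10-09", "2023-10-16"],
    "Score": [9, 13, 7, 5],
    "Target": [12, 16, 7, 20],
    "Difference": [3, 3, 0, 15],
})

# Long form: one row per (Date, Category) pair, people become columns.
long = df.melt(id_vars=["Name", "Date"], var_name="Category", value_name="value")
out = long.pivot(index=["Date", "Category"], columns="Name", values="value")
```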
|
<python><group-by><multiple-columns><transpose>
|
2023-11-28 17:45:06
| 1
| 5,244
|
PineNuts0
|
77,566,046
| 7,267,480
|
pyearth package installation issue (need to use MARS on Python 3.10)
|
<p>I have a question regarding an old package for multivariate adaptive regression splines (MARS) that I need for my research. For me it's crucial to be able to define the maximum number of knots used to fit the data, because the function is quite complex.</p>
<p>Alternatively, maybe some of you can suggest a similar Python package for multivariate adaptive regression splines? I have already lost a lot of time on failed attempts.
Any help is appreciated.</p>
<p>I have tried many ways to install it, without success.</p>
<p>I have Python 3.10.6 (main, Mar 10 2023, 10:55:28) [GCC 11.3.0] on linux</p>
<p>Here are the errors I got during several attempts.</p>
<ol>
<li>first attempt to install:</li>
</ol>
<p>Direct installation from the dev branch:</p>
<pre><code>(.venv) fire@note-4:~/py_projects/proj$ pip install git+https://github.com/scikit-learn-contrib/py-earth@v0.2dev
Collecting git+https://github.com/scikit-learn-contrib/py-earth@v0.2dev
Cloning https://github.com/scikit-learn-contrib/py-earth (to revision v0.2dev) to /tmp/pip-req-build-igm20quj
Running command git clone --filter=blob:none --quiet https://github.com/scikit-learn-contrib/py-earth /tmp/pip-req-build-igm20quj
Running command git checkout -b v0.2dev --track origin/v0.2dev
Switched to a new branch 'v0.2dev'
Branch 'v0.2dev' set up to track remote branch 'v0.2dev' from 'origin'.
Resolved https://github.com/scikit-learn-contrib/py-earth to commit 400f84d435b7277124535c09ca32132c1d0eaa74
Preparing metadata (setup.py) ... done
Requirement already satisfied: scipy>=0.16 in ./.venv/lib/python3.10/site-packages (from sklearn-contrib-py-earth==0.1.0+16.g400f84d) (1.11.3)
Requirement already satisfied: scikit-learn>=0.16 in ./.venv/lib/python3.10/site-packages (from sklearn-contrib-py-earth==0.1.0+16.g400f84d) (1.3.2)
Requirement already satisfied: six in ./.venv/lib/python3.10/site-packages (from sklearn-contrib-py-earth==0.1.0+16.g400f84d) (1.16.0)
Requirement already satisfied: numpy<2.0,>=1.17.3 in ./.venv/lib/python3.10/site-packages (from scikit-learn>=0.16->sklearn-contrib-py-earth==0.1.0+16.g400f84d) (1.26.1)
Requirement already satisfied: joblib>=1.1.1 in ./.venv/lib/python3.10/site-packages (from scikit-learn>=0.16->sklearn-contrib-py-earth==0.1.0+16.g400f84d) (1.3.2)
Requirement already satisfied: threadpoolctl>=2.0.0 in ./.venv/lib/python3.10/site-packages (from scikit-learn>=0.16->sklearn-contrib-py-earth==0.1.0+16.g400f84d) (3.2.0)
Building wheels for collected packages: sklearn-contrib-py-earth
Building wheel for sklearn-contrib-py-earth (setup.py) ... error
error: subprocess-exited-with-error
Γ python setup.py bdist_wheel did not run successfully.
β exit code: 1
β°β> [405 lines of output]
...proj/.venv/lib/python3.10/site-packages/setuptools/dist.py:472: SetuptoolsDeprecationWarning: Invalid dash-separated options
!!
********************************************************************************
Usage of dash-separated 'description-file' will not be supported in future
versions. Please use the underscore name 'description_file' instead.
By 2024-Sep-26, you need to update your project and remove deprecated calls
or your builds will no longer be supported.
See https://setuptools.pypa.io/en/latest/userguide/declarative_config.html for details.
********************************************************************************
!!
opt = self.warn_dash_deprecation(opt, section)
proj../.venv/lib/python3.10/site-packages/setuptools/__init__.py:80: _DeprecatedInstaller: setuptools.installer and fetch_build_eggs are deprecated.
!!
********************************************************************************
Requirements should be satisfied by a PEP 517 installer.
If you are using pip, you can try `pip install --use-pep517`.
********************************************************************************
!!
dist.fetch_build_eggs(dist.setup_requires)
proj../.venv/lib/python3.10/site-packages/setuptools/dist.py:472: SetuptoolsDeprecationWarning: Invalid dash-separated options
!!
********************************************************************************
Usage of dash-separated 'description-file' will not be supported in future
versions. Please use the underscore name 'description_file' instead.
By 2024-Sep-26, you need to update your project and remove deprecated calls
or your builds will no longer be supported.
See https://setuptools.pypa.io/en/latest/userguide/declarative_config.html for details.
********************************************************************************
!!
opt = self.warn_dash_deprecation(opt, section)
running bdist_wheel
running build
running build_py
creating build
creating build/lib.linux-x86_64-cpython-310
creating build/lib.linux-x86_64-cpython-310/pyearth
copying pyearth/export.py -> build/lib.linux-x86_64-cpython-310/pyearth
copying pyearth/__init__.py -> build/lib.linux-x86_64-cpython-310/pyearth
copying pyearth/earth.py -> build/lib.linux-x86_64-cpython-310/pyearth
copying pyearth/_version.py -> build/lib.linux-x86_64-cpython-310/pyearth
creating build/lib.linux-x86_64-cpython-310/pyearth/test
copying pyearth/test/test_qr.py -> build/lib.linux-x86_64-cpython-310/pyearth/test
copying pyearth/test/test_util.py -> build/lib.linux-x86_64-cpython-310/pyearth/test
copying pyearth/test/test_export.py -> build/lib.linux-x86_64-cpython-310/pyearth/test
copying pyearth/test/test_earth.py -> build/lib.linux-x86_64-cpython-310/pyearth/test
copying pyearth/test/test_knot_search.py -> build/lib.linux-x86_64-cpython-310/pyearth/test
copying pyearth/test/__init__.py -> build/lib.linux-x86_64-cpython-310/pyearth/test
copying pyearth/test/testing_utils.py -> build/lib.linux-x86_64-cpython-310/pyearth/test
copying pyearth/test/test_pruning.py -> build/lib.linux-x86_64-cpython-310/pyearth/test
copying pyearth/test/test_forward.py -> build/lib.linux-x86_64-cpython-310/pyearth/test
creating build/lib.linux-x86_64-cpython-310/pyearth/test/basis
copying pyearth/test/basis/test_linear.py -> build/lib.linux-x86_64-cpython-310/pyearth/test/basis
copying pyearth/test/basis/base.py -> build/lib.linux-x86_64-cpython-310/pyearth/test/basis
copying pyearth/test/basis/test_missingness.py -> build/lib.linux-x86_64-cpython-310/pyearth/test/basis
copying pyearth/test/basis/__init__.py -> build/lib.linux-x86_64-cpython-310/pyearth/test/basis
copying pyearth/test/basis/test_hinge.py -> build/lib.linux-x86_64-cpython-310/pyearth/test/basis
copying pyearth/test/basis/test_constant.py -> build/lib.linux-x86_64-cpython-310/pyearth/test/basis
copying pyearth/test/basis/test_basis.py -> build/lib.linux-x86_64-cpython-310/pyearth/test/basis
copying pyearth/test/basis/test_smoothed_hinge.py -> build/lib.linux-x86_64-cpython-310/pyearth/test/basis
creating build/lib.linux-x86_64-cpython-310/pyearth/test/record
copying pyearth/test/record/test_pruning_pass.py -> build/lib.linux-x86_64-cpython-310/pyearth/test/record
copying pyearth/test/record/__init__.py -> build/lib.linux-x86_64-cpython-310/pyearth/test/record
copying pyearth/test/record/test_forward_pass.py -> build/lib.linux-x86_64-cpython-310/pyearth/test/record
running egg_info
creating sklearn_contrib_py_earth.egg-info
writing sklearn_contrib_py_earth.egg-info/PKG-INFO
writing dependency_links to sklearn_contrib_py_earth.egg-info/dependency_links.txt
writing requirements to sklearn_contrib_py_earth.egg-info/requires.txt
writing top-level names to sklearn_contrib_py_earth.egg-info/top_level.txt
writing manifest file 'sklearn_contrib_py_earth.egg-info/SOURCES.txt'
reading manifest file 'sklearn_contrib_py_earth.egg-info/SOURCES.txt'
reading manifest template 'MANIFEST.in'
warning: no files found matching 'pyearth/test/pathological_data'
adding license file 'LICENSE.txt'
writing manifest file 'sklearn_contrib_py_earth.egg-info/SOURCES.txt'
copying pyearth/_basis.c -> build/lib.linux-x86_64-cpython-310/pyearth
copying pyearth/_basis.pxd -> build/lib.linux-x86_64-cpython-310/pyearth
copying pyearth/_forward.c -> build/lib.linux-x86_64-cpython-310/pyearth
copying pyearth/_forward.pxd -> build/lib.linux-x86_64-cpython-310/pyearth
copying pyearth/_knot_search.c -> build/lib.linux-x86_64-cpython-310/pyearth
copying pyearth/_knot_search.pxd -> build/lib.linux-x86_64-cpython-310/pyearth
copying pyearth/_pruning.c -> build/lib.linux-x86_64-cpython-310/pyearth
copying pyearth/_pruning.pxd -> build/lib.linux-x86_64-cpython-310/pyearth
copying pyearth/_qr.c -> build/lib.linux-x86_64-cpython-310/pyearth
copying pyearth/_qr.pxd -> build/lib.linux-x86_64-cpython-310/pyearth
copying pyearth/_record.c -> build/lib.linux-x86_64-cpython-310/pyearth
copying pyearth/_record.pxd -> build/lib.linux-x86_64-cpython-310/pyearth
copying pyearth/_types.c -> build/lib.linux-x86_64-cpython-310/pyearth
copying pyearth/_types.pxd -> build/lib.linux-x86_64-cpython-310/pyearth
copying pyearth/_util.c -> build/lib.linux-x86_64-cpython-310/pyearth
copying pyearth/_util.pxd -> build/lib.linux-x86_64-cpython-310/pyearth
copying pyearth/test/earth_linvars_regress.txt -> build/lib.linux-x86_64-cpython-310/pyearth/test
copying pyearth/test/earth_regress.txt -> build/lib.linux-x86_64-cpython-310/pyearth/test
copying pyearth/test/earth_regress_missing_data.txt -> build/lib.linux-x86_64-cpython-310/pyearth/test
copying pyearth/test/earth_regress_smooth.txt -> build/lib.linux-x86_64-cpython-310/pyearth/test
copying pyearth/test/forward_regress.txt -> build/lib.linux-x86_64-cpython-310/pyearth/test
copying pyearth/test/test_data.csv -> build/lib.linux-x86_64-cpython-310/pyearth/test
UPDATING build/lib.linux-x86_64-cpython-310/pyearth/_version.py
set build/lib.linux-x86_64-cpython-310/pyearth/_version.py to '0.1.0+16.g400f84d'
running build_ext
building 'pyearth._util' extension
creating build/temp.linux-x86_64-cpython-310
creating build/temp.linux-x86_64-cpython-310/pyearth
x86_64-linux-gnu-gcc -Wno-unused-result -Wsign-compare -DNDEBUG -g -fwrapv -O2 -Wall -g -fstack-protector-strong -Wformat -Werror=format-security -g -fwrapv -O2 -fPIC -Iproj../.venv/lib/python3.10/site-packages/numpy/core/include -Iproj../.venv/include -I/usr/include/python3.10 -c pyearth/_util.c -o build/temp.linux-x86_64-cpython-310/pyearth/_util.o
In file included from proj../.venv/lib/python3.10/site-packages/numpy/core/include/numpy/ndarraytypes.h:1929,
from proj../.venv/lib/python3.10/site-packages/numpy/core/include/numpy/ndarrayobject.h:12,
from proj../.venv/lib/python3.10/site-packages/numpy/core/include/numpy/arrayobject.h:5,
from pyearth/_util.c:625:
proj../.venv/lib/python3.10/site-packages/numpy/core/include/numpy/npy_1_7_deprecated_api.h:17:2: warning: #warning "Using deprecated NumPy API, disable it with " "#define NPY_NO_DEPRECATED_API NPY_1_7_API_VERSION" [-Wcpp]
17 | #warning "Using deprecated NumPy API, disable it with " \
| ^~~~~~~
pyearth/_util.c: In function β__Pyx_ParseOptionalKeywordsβ:
pyearth/_util.c:7935:21: warning: β_PyUnicode_get_wstr_lengthβ is deprecated [-Wdeprecated-declarations]
7935 | (PyUnicode_GET_SIZE(**name) != PyUnicode_GET_SIZE(key)) ? 1 :
| ^
In file included from /usr/include/python3.10/unicodeobject.h:1046,
from /usr/include/python3.10/Python.h:83,
from pyearth/_util.c:24:
/usr/include/python3.10/cpython/unicodeobject.h:446:26: note: declared here
446 | static inline Py_ssize_t _PyUnicode_get_wstr_length(PyObject *op) {
| ^~~~~~~~~~~~~~~~~~~~~~~~~~
pyearth/_util.c:7935:21: warning: βPyUnicode_AsUnicodeβ is deprecated [-Wdeprecated-declarations]
7935 | (PyUnicode_GET_SIZE(**name) != PyUnicode_GET_SIZE(key)) ? 1 :
| ^
In file included from /usr/include/python3.10/unicodeobject.h:1046,
from /usr/include/python3.10/Python.h:83,
from pyearth/_util.c:24:
/usr/include/python3.10/cpython/unicodeobject.h:580:45: note: declared here
580 | Py_DEPRECATED(3.3) PyAPI_FUNC(Py_UNICODE *) PyUnicode_AsUnicode(
| ^~~~~~~~~~~~~~~~~~~
pyearth/_util.c:7935:21: warning: β_PyUnicode_get_wstr_lengthβ is deprecated [-Wdeprecated-declarations]
7935 | (PyUnicode_GET_SIZE(**name) != PyUnicode_GET_SIZE(key)) ? 1 :
| ^
In file included from /usr/include/python3.10/unicodeobject.h:1046,
from /usr/include/python3.10/Python.h:83,
from pyearth/_util.c:24:
/usr/include/python3.10/cpython/unicodeobject.h:446:26: note: declared here
446 | static inline Py_ssize_t _PyUnicode_get_wstr_length(PyObject *op) {
| ^~~~~~~~~~~~~~~~~~~~~~~~~~
pyearth/_util.c:7935:21: warning: β_PyUnicode_get_wstr_lengthβ is deprecated [-Wdeprecated-declarations]
7935 | (PyUnicode_GET_SIZE(**name) != PyUnicode_GET_SIZE(key)) ? 1 :
| ^
In file included from /usr/include/python3.10/unicodeobject.h:1046,
from /usr/include/python3.10/Python.h:83,
from pyearth/_util.c:24:
/usr/include/python3.10/cpython/unicodeobject.h:446:26: note: declared here
446 | static inline Py_ssize_t _PyUnicode_get_wstr_length(PyObject *op) {
| ^~~~~~~~~~~~~~~~~~~~~~~~~~
pyearth/_util.c:7935:21: warning: βPyUnicode_AsUnicodeβ is deprecated [-Wdeprecated-declarations]
7935 | (PyUnicode_GET_SIZE(**name) != PyUnicode_GET_SIZE(key)) ? 1 :
| ^
In file included from /usr/include/python3.10/unicodeobject.h:1046,
from /usr/include/python3.10/Python.h:83,
from pyearth/_util.c:24:
/usr/include/python3.10/cpython/unicodeobject.h:580:45: note: declared here
580 | Py_DEPRECATED(3.3) PyAPI_FUNC(Py_UNICODE *) PyUnicode_AsUnicode(
| ^~~~~~~~~~~~~~~~~~~
pyearth/_util.c:7935:21: warning: β_PyUnicode_get_wstr_lengthβ is deprecated [-Wdeprecated-declarations]
7935 | (PyUnicode_GET_SIZE(**name) != PyUnicode_GET_SIZE(key)) ? 1 :
| ^
In file included from /usr/include/python3.10/unicodeobject.h:1046,
from /usr/include/python3.10/Python.h:83,
from pyearth/_util.c:24:
/usr/include/python3.10/cpython/unicodeobject.h:446:26: note: declared here
446 | static inline Py_ssize_t _PyUnicode_get_wstr_length(PyObject *op) {
| ^~~~~~~~~~~~~~~~~~~~~~~~~~
pyearth/_util.c:7951:25: warning: β_PyUnicode_get_wstr_lengthβ is deprecated [-Wdeprecated-declarations]
7951 | (PyUnicode_GET_SIZE(**argname) != PyUnicode_GET_SIZE(key)) ? 1 :
| ^
In file included from /usr/include/python3.10/unicodeobject.h:1046,
from /usr/include/python3.10/Python.h:83,
from pyearth/_util.c:24:
/usr/include/python3.10/cpython/unicodeobject.h:446:26: note: declared here
446 | static inline Py_ssize_t _PyUnicode_get_wstr_length(PyObject *op) {
| ^~~~~~~~~~~~~~~~~~~~~~~~~~
pyearth/_util.c:7951:25: warning: βPyUnicode_AsUnicodeβ is deprecated [-Wdeprecated-declarations]
7951 | (PyUnicode_GET_SIZE(**argname) != PyUnicode_GET_SIZE(key)) ? 1 :
| ^
In file included from /usr/include/python3.10/unicodeobject.h:1046,
from /usr/include/python3.10/Python.h:83,
from pyearth/_util.c:24:
/usr/include/python3.10/cpython/unicodeobject.h:580:45: note: declared here
580 | Py_DEPRECATED(3.3) PyAPI_FUNC(Py_UNICODE *) PyUnicode_AsUnicode(
| ^~~~~~~~~~~~~~~~~~~
pyearth/_util.c:7951:25: warning: β_PyUnicode_get_wstr_lengthβ is deprecated [-Wdeprecated-declarations]
7951 | (PyUnicode_GET_SIZE(**argname) != PyUnicode_GET_SIZE(key)) ? 1 :
| ^
In file included from /usr/include/python3.10/unicodeobject.h:1046,
from /usr/include/python3.10/Python.h:83,
from pyearth/_util.c:24:
/usr/include/python3.10/cpython/unicodeobject.h:446:26: note: declared here
446 | static inline Py_ssize_t _PyUnicode_get_wstr_length(PyObject *op) {
| ^~~~~~~~~~~~~~~~~~~~~~~~~~
pyearth/_util.c:7951:25: warning: β_PyUnicode_get_wstr_lengthβ is deprecated [-Wdeprecated-declarations]
7951 | (PyUnicode_GET_SIZE(**argname) != PyUnicode_GET_SIZE(key)) ? 1 :
| ^
In file included from /usr/include/python3.10/unicodeobject.h:1046,
from /usr/include/python3.10/Python.h:83,
from pyearth/_util.c:24:
/usr/include/python3.10/cpython/unicodeobject.h:446:26: note: declared here
446 | static inline Py_ssize_t _PyUnicode_get_wstr_length(PyObject *op) {
| ^~~~~~~~~~~~~~~~~~~~~~~~~~
pyearth/_util.c:7951:25: warning: βPyUnicode_AsUnicodeβ is deprecated [-Wdeprecated-declarations]
7951 | (PyUnicode_GET_SIZE(**argname) != PyUnicode_GET_SIZE(key)) ? 1 :
...
note: This error originates from a subprocess, and is likely not a problem with pip.
ERROR: Failed building wheel for sklearn-contrib-py-earth
Running setup.py clean for sklearn-contrib-py-earth
Failed to build sklearn-contrib-py-earth
ERROR: Could not build wheels for sklearn-contrib-py-earth, which is required to install pyproject.toml-based projects
</code></pre>
<ol start="2">
<li>I found a workaround <a href="https://github.com/scikit-learn-contrib/py-earth/issues/221" rel="nofollow noreferrer">https://github.com/scikit-learn-contrib/py-earth/issues/221</a> for using this package on Python 3.10 and did everything as pgr_123 described.
Here is what I got in this case:</li>
</ol>
<pre><code> python setup.py build_ext --inplace --cythonize
Traceback (most recent call last):
File "/home/fire/py_projects/MARS_pyearth/py-earth/setup.py", line 165, in <module>
setup_package()
File "/home/fire/py_projects/MARS_pyearth/py-earth/setup.py", line 145, in setup_package
from Cython.Distutils import build_ext
ModuleNotFoundError: No module named 'Cython'
(.venv) fire@note-4:~/py_projects/MARS_pyearth/py-earth$ pip install cython
Collecting cython
Downloading Cython-3.0.6-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl.metadata (3.2 kB)
Downloading Cython-3.0.6-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (3.6 MB)
ββββββββββββββββββββββββββββββββββββββββ 3.6/3.6 MB 11.8 MB/s eta 0:00:00
Installing collected packages: cython
Successfully installed cython-3.0.6
(.venv) fire@note-4:~/py_projects/MARS_pyearth/py-earth$ python setup.py build_ext --inplace --cythonize
Compiling pyearth/_util.pyx because it changed.
Compiling pyearth/_basis.pyx because it changed.
Compiling pyearth/_record.pyx because it changed.
Compiling pyearth/_pruning.pyx because it changed.
Compiling pyearth/_forward.pyx because it changed.
Compiling pyearth/_knot_search.pyx because it changed.
Compiling pyearth/_qr.pyx because it changed.
Compiling pyearth/_types.pyx because it changed.
[1/8] Cythonizing pyearth/_basis.pyx
/home/fire/py_projects/MARS_pyearth/.venv/lib/python3.9/site-packages/Cython/Compiler/Main.py:381: FutureWarning: Cython directive 'language_level' not set, using '3str' for now (Py3). This has changed from earlier releases! File: /home/fire/py_projects/MARS_pyearth/py-earth/pyearth/_basis.pxd
tree = Parsing.p_module(s, pxd, full_module_name)
Error compiling Cython file:
------------------------------------------------------------
...
from cpython cimport bool
cimport numpy as cnp
from _types cimport FLOAT_t, INT_t, INDEX_t, BOOL_t
^
------------------------------------------------------------
pyearth/_basis.pxd:3:0: '_types.pxd' not found
...
pyearth/_basis.pyx:184:48: Invalid type.
Traceback (most recent call last):
File "/home/fire/py_projects/MARS_pyearth/py-earth/setup.py", line 165, in <module>
setup_package()
File "/home/fire/py_projects/MARS_pyearth/py-earth/setup.py", line 161, in setup_package
setup_args['ext_modules'] = get_ext_modules()
File "/home/fire/py_projects/MARS_pyearth/py-earth/setup.py", line 22, in get_ext_modules
ext_modules = cythonize(
File "/home/fire/py_projects/MARS_pyearth/.venv/lib/python3.9/site-packages/Cython/Build/Dependencies.py", line 1154, in cythonize
cythonize_one(*args)
File "/home/fire/py_projects/MARS_pyearth/.venv/lib/python3.9/site-packages/Cython/Build/Dependencies.py", line 1321, in cythonize_one
raise CompileError(None, pyx_file)
Cython.Compiler.Errors.CompileError: pyearth/_basis.pyx
</code></pre>
<p>it seems in this case something is wrong with Cython - so I can't build required files fro the installation...
What to do?</p>
<ol start="3">
<li>I tried Colab, thinking something might be wrong with my local environment; same issue:</li>
</ol>
<p><a href="https://stackoverflow.com/questions/66503039/how-to-install-pyearth-in-google-colab/77565730#77565730">how to install pyearth in google colab?</a></p>
<pre><code>Collecting git+https://github.com/scikit-learn-contrib/py-earth@v0.2dev
Cloning https://github.com/scikit-learn-contrib/py-earth (to revision v0.2dev) to /tmp/pip-req-build-1a159lyh
Running command git clone --filter=blob:none --quiet https://github.com/scikit-learn-contrib/py-earth /tmp/pip-req-build-1a159lyh
Running command git checkout -b v0.2dev --track origin/v0.2dev
Switched to a new branch 'v0.2dev'
Branch 'v0.2dev' set up to track remote branch 'v0.2dev' from 'origin'.
Resolved https://github.com/scikit-learn-contrib/py-earth to commit 400f84d435b7277124535c09ca32132c1d0eaa74
Preparing metadata (setup.py) ... done
Requirement already satisfied: scipy>=0.16 in /usr/local/lib/python3.10/dist-packages (from sklearn-contrib-py-earth==0.1.0+16.g400f84d) (1.11.3)
Requirement already satisfied: scikit-learn>=0.16 in /usr/local/lib/python3.10/dist-packages (from sklearn-contrib-py-earth==0.1.0+16.g400f84d) (1.2.2)
Requirement already satisfied: six in /usr/local/lib/python3.10/dist-packages (from sklearn-contrib-py-earth==0.1.0+16.g400f84d) (1.16.0)
Requirement already satisfied: numpy>=1.17.3 in /usr/local/lib/python3.10/dist-packages (from scikit-learn>=0.16->sklearn-contrib-py-earth==0.1.0+16.g400f84d) (1.23.5)
Requirement already satisfied: joblib>=1.1.1 in /usr/local/lib/python3.10/dist-packages (from scikit-learn>=0.16->sklearn-contrib-py-earth==0.1.0+16.g400f84d) (1.3.2)
Requirement already satisfied: threadpoolctl>=2.0.0 in /usr/local/lib/python3.10/dist-packages (from scikit-learn>=0.16->sklearn-contrib-py-earth==0.1.0+16.g400f84d) (3.2.0)
Building wheels for collected packages: sklearn-contrib-py-earth
error: subprocess-exited-with-error
Γ python setup.py bdist_wheel did not run successfully.
β exit code: 1
β°β> See above for output.
note: This error originates from a subprocess, and is likely not a problem with pip.
Building wheel for sklearn-contrib-py-earth (setup.py) ... error
ERROR: Failed building wheel for sklearn-contrib-py-earth
Running setup.py clean for sklearn-contrib-py-earth
Failed to build sklearn-contrib-py-earth
ERROR: Could not build wheels for sklearn-contrib-py-earth, which is required to install pyproject.toml-based projects
</code></pre>
|
<python><sql-server-mars>
|
2023-11-28 17:42:54
| 1
| 496
|
twistfire
|
77,565,921
| 14,912,118
|
How to Generate SAF-T XML (Standard Audit File)
|
<p>I need to generate a SAF-T XML file from pandas dataframes, but I am very new to generating SAF-T XML with Python.</p>
<p>Below is the output which I am expecting</p>
<pre><code><root xmlns:nl="urn:StandardAuditFile-Taxation-Financial:NO" xsi:schemaLocation="urn:StandardAuditFile-Taxation-Financial:NO XML Final.xsd" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
<AuditFile>
<Header>
<AuditFileVersion>val1</AuditFileVersion>
<Company>
<Name>val3</Name>
</Company>
</Header>
</AuditFile>
</root>
</code></pre>
<p><em>Below is the code I am using.</em>
Library imports:</p>
<pre><code>import pandas as pd
import numpy as np
import openpyxl
import xml.etree.ElementTree as ET
from bs4 import BeautifulSoup
import re
import os
from lxml import etree
</code></pre>
<p>Creating dataframe</p>
<pre><code>
data_section1 = {'col1': ['val1', 'val2']}
df_section1 = pd.DataFrame(data_section1)
data_section2 = {'col2': ['val3', 'val4']}
df_section2 = pd.DataFrame(data_section2)
# Combine results into a dictionary
dct_master = {
'SECTION1': df_section1,
'SECTION2': df_section2
}
data_xml = {
'Row_seq': [1, 2, 3, 4, 5],
'XML_File': ['XML_Final'] * 5,
'Type': ['root', 'SECTION1', 'SECTION1', 'SECTION1', 'SECTION2'],
'Path': ['', 'AuditFile', 'AuditFile/Header', 'Header/AuditFileVersion', 'Header/Company/Name'],
'Col_tag': ['', 'AuditFile', 'Header', 'AuditFileVersion', 'Name'],
'Col_Unbound': ['', '', '', 'col1', ''],
'Col_Data': ['', '', 'col1', '', '']
}
</code></pre>
<p>Defining XML function</p>
<pre><code>df_xml = pd.DataFrame(data_xml)
print(dct_master)
lst_pth = []
var_pth = ''
var_create = 0
df_xml_part = pd.DataFrame(None)
def func_xml_create(df_xml_part):
col_path_lst = re.findall('\(\s?(.+?)\s?\\)', df_xml_part.Path.unique()[0])
col_unbnd_lst = np.unique(np.array(df_xml_part['Col_Unbound'].dropna().to_list())).tolist()
col_data_lst = np.unique(np.array(df_xml_part['Col_Data'].dropna().to_list())).tolist()
col_lstl = col_path_lst + col_unbnd_lst + col_data_lst
col_lst = np.unique(col_lstl)
print(col_path_lst, col_unbnd_lst, col_data_lst, col_lstl, col_lst)
if col_lst.size > 0:
dct_master_data = dct_master[df_xml_part.Type.unique()[0]][col_lst].drop_duplicates().reset_index()
var_iter = len(dct_master_data.index)
else:
var_iter = 1
while var_iter > 0:
for indexl, rowl in df_xml_part.iterrows():
path = df_xml_part['Path'][indexl]
fold = df_xml_part['Col_tag'][indexl]
fold_data = df_xml_part['Col_Data'][indexl]
if str(df_xml_part.Col_Unbound[indexl]) != 'nan':
unbnd_flg = 'Y'
var_unbnd = len(dct_master_data[df_xml_part.Col_Unbound[indexl].split(',')].drop_duplicates())
ar_unbnd_vall = dct_master_data[df_xml_part.Col_Unbound[indexl].split(',')].apply(lambda x: ''.join(x), axis=1).to_numpy()
ar_unbnd_val = ar_unbnd_vall[var_iter - 1]
else:
var_unbnd = 1
unbnd_flg = 'N'
regexp = re.compile(r'/')
regexpl = re.compile(r'\(')
if (var_unbnd != 1) or (unbnd_flg == 'Y'):
fold_nm = fold + '-' + re.sub(r' [^\w]', '', str(ar_unbnd_val))
else:
fold_nm = fold
path_nm = path
for i in re.findall('\(\s?(.+?)\s?\\)', path_nm):
lst = i.split(',')
path_nm = re.sub(
r'[\s\+\.]',
'',
re.sub(
f'\({i}\)',
'-' + re.sub(r'[^\w]', '', dct_master_data[lst].apply(lambda x: ''.join(x), axis=1)[var_iter - 1]),
path_nm
)
)
if regexp.search(path_nm):
for elemt in root.findall(path_nm):
child = ET.SubElement(elemt, fold_nm)
else:
for elemt in root.iter(path_nm):
child = ET.SubElement(elemt, fold_nm)
var_unbnd = var_unbnd - 1
if str(df_xml_part['Col_Data'][indexl]) != 'nan':
child.text = str(eval(f'dct_master_data.{fold_data}')[var_iter - 1]).strip()
var_iter = var_iter - 1
for index, rows in df_xml.iterrows():
if index == 0:
name_space = {
"xmlns:nl" : "urn:StandardAuditFile-Taxation-Financial:NO",
"xsi:schemaLocation":"urn:StandardAuditFile-Taxation-Financial:NO XML Final.xsd",
"xmlns:xsi":"http://www.w3.org/2001/XMLSchema-instance"
}
root = ET.Element(df_xml[df_xml['Type' ]=='root']['Col_tag'][0],name_space)
else:
if (index!=1):
if ((var_pth!=df_xml['Path'][index]) or (str(df_xml.Col_Unbound[index])!='nan')):
print(df_xml_part)
func_xml_create(df_xml_part)
df_xml_part = pd.DataFrame(None)
var_pth = df_xml['Path'][index]
df_xml_part = df_xml_part._append(df_xml[['Type', 'Path', 'Col_tag','Col_Unbound', 'Col_Data']][index:index+1], ignore_index=True)
##### Handling end on XML file
print(df_xml_part)
func_xml_create(df_xml_part)
df_xml_part = pd.DataFrame()
</code></pre>
<p>Formatting the XML</p>
<pre><code>
regexp_no_data = re.compile(r'/>')
regexp_no_datal = re.compile(r'> <')
with open("XML_Finall.xml", "w") as text_file:
print(BeautifulSoup(ET.tostring(root), "xml").prettify(), file=text_file)
#input file
fin = open("XML_Finall.xml", "rt")
#output file to write the result to
fout = open("XML_Final2.xml", "wt")
for l, line in enumerate(fin):
if (l==0):
fout.write(line)
else:
if (l==1):
linel = re.sub(r'<','<nl:',line)
line2 = re.sub(r'nl:/','/nl:',linel)
fout.write(line2)
else:
linel = re.sub(r'<', '<nl:',line)
line2 = re.sub(r'nl:/', '/nl:',linel)
fout.write(re.sub(r'->', '>',re.sub(r'-[0-9a-zA-Z]+?>', '>', line2)))
#close input and output files
fin.close()
fout.close()
myparser = etree.XMLParser(remove_blank_text=True)
abc = etree.parse('XML_Final2.xml', myparser)
for elem in abc.iter():
if elem.text is not None:
elem.text = elem.text.strip()
if elem.tail is not None:
elem.tail = elem.tail.strip()
#write the xml file with pretty print and xml encoding
abc.write('XML_Final3.xml', pretty_print=True, encoding="utf-8", xml_declaration=True)
with open('XML_Final3.xml') as oldfile, open('XML_Final.xml', 'w') as newfile:
for line in oldfile:
if not regexp_no_data.search(line):
if not regexp_no_datal.search(line):
newfile.write(line)
try:
os.remove("XML_Finall.xml")
os.remove("XML_Final2.xml")
os.remove("XML_Final3.xml")
except OSError:
pass
</code></pre>
<p>By running above code I am getting below error</p>
<pre><code>dct_master_data = dct_master[df_xml_part.Type.unique()[0]][col_lst].drop_duplicates().reset_index()
TypeError: only integer scalar arrays can be converted to a scalar index
</code></pre>
<p>Can anyone help me achieve the XML output shown above? I am not sure why I am getting these issues; please correct me if I am wrong anywhere in the code.</p>
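For the small target document shown above, a minimal sketch (an assumption: one value per section, 'val1' from SECTION1 and 'val3' from SECTION2, as in the expected output) that builds the same shape directly with ElementTree:

```python
import xml.etree.ElementTree as ET

# Build the target structure directly; namespace attributes are written
# as plain attributes, as in the question's own name_space dict.
root = ET.Element('root', {
    'xmlns:nl': 'urn:StandardAuditFile-Taxation-Financial:NO',
    'xsi:schemaLocation': 'urn:StandardAuditFile-Taxation-Financial:NO XML Final.xsd',
    'xmlns:xsi': 'http://www.w3.org/2001/XMLSchema-instance',
})
audit_file = ET.SubElement(root, 'AuditFile')
header = ET.SubElement(audit_file, 'Header')
ET.SubElement(header, 'AuditFileVersion').text = 'val1'   # from SECTION1 col1
company = ET.SubElement(header, 'Company')
ET.SubElement(company, 'Name').text = 'val3'              # from SECTION2 col2

xml_str = ET.tostring(root, encoding='unicode')
print(xml_str)
```

Once this simple case produces the right shape, the loop over `df_xml` rows can populate the element texts from the dataframes instead of the literals used here.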
|
<python><pandas><xml><dataframe>
|
2023-11-28 17:24:00
| 0
| 427
|
Sharma
|
77,565,865
| 520,556
|
Python nltk RegexpTokenizer with all words and one specific phrase/chunk
|
<p>Is it possible to redefine a standard <code>nltk.tokenize.RegexpTokenizer</code> to select all words as tokens and allow only one special phrase/chunk? For example, a standard setup would be:</p>
<pre><code>from nltk.tokenize import RegexpTokenizer
tokenizer = RegexpTokenizer(r'\w+')
</code></pre>
<p>Is it possible to allow, e.g., "big data" as a single token?</p>
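Yes. One way (a sketch using plain `re`; `RegexpTokenizer` compiles its pattern with the same `re.findall` semantics when `gaps=False`) is to put the phrase first in an alternation, so it is matched before the single-word branch:

```python
import re

# The phrase branch must come before \w+, because alternation tries
# branches left to right at each position.
pattern = r'big data|\w+'
text = 'Working with big data requires new tools'
print(re.findall(pattern, text))
# ['Working', 'with', 'big data', 'requires', 'new', 'tools']
```

With nltk this would be `RegexpTokenizer(r'big data|\w+')`; if the phrase contains regex metacharacters, escape it with `re.escape` first.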
|
<python><regex><nltk>
|
2023-11-28 17:15:22
| 1
| 1,598
|
striatum
|
77,565,852
| 5,125,925
|
Python jwt decode returning Could not deserialize key data error
|
<p>I have a jwt token generated from aws cognito which gets authenticated in aws api gateway and successfully calls my aws lambda behind the gateway. Everything works and this is a valid jwt received in my aws lambda python, but I'm unable to parse this valid JWT in my python code. Below is the snippet of python method where I pass in the auth header:</p>
<pre class="lang-py prettyprint-override"><code>def decodeJWT(authHeader):
jwt_token=authHeader.replace('Bearer ','')
print(jwt_token)
decoded_jwt=jwt.decode(jwt=jwt_token, algorithms="RS256")
return decoded_jwt
</code></pre>
<p>The value of authHeader passed is below. I've validated that the "Bearer " string is removed and that the remaining JWT is valid: it decodes correctly on the site "jwt.io", and authentication also works on the gateway.</p>
<p><code>Bearer eyJraWQiOiJmUjFiZ21VdzhoNW9RUnZxeVUwZW9jbkhsSU1CbTNZMHRGS3oyVXFLNjVzPSIsImFsZyI6IlJTMjU2In0.eyJzdWIiOiI0NTYzMWVlMy04MWEzLTQ3ZTMtYTEzMS0yNGRhNWY1MmYyMzIiLCJpc3MiOiJodHRwczpcL1wvY29nbml0by1pZHAudXMtZWFzdC0yLmFtYXpvbmF3cy5jb21cL3VzLWVhc3QtMl92YjB4dFBrVlciLCJ2ZXJzaW9uIjoyLCJjbGllbnRfaWQiOiI0MzRubm5nbTFrZnZvMGZtNHVtZzN2OGJ1ZCIsImV2ZW50X2lkIjoiMTVhNWNiMTktNjI0ZS00ZTk0LThiMjktMjI3MmFmYjE4Mjc5IiwidG9rZW5fdXNlIjoiYWNjZXNzIiwic2NvcGUiOiJhd3MuY29nbml0by5zaWduaW4udXNlci5hZG1pbiIsImF1dGhfdGltZSI6MTcwMTE4NjkwMCwiZXhwIjoxNzAxMTkwNTAwLCJpYXQiOjE3MDExODY5MDAsImp0aSI6IjY3M2Y2MTVlLWNmMWMtNDZjMC1iOWY0LTBkMjFiNWI1MThkOSIsInVzZXJuYW1lIjoidGVzdF9hIn0.BKdUHR_al5IAprfZHK9fEC7kthrv6kpyDXdsM6nrUQufVEMliIyz1dk3ruGhDDP4HjW8czlwegoYDX5aqrou3AkXWivsN8JplJELwb3li9XsHOrBhajLgHYwTUdEm5EFQcyxTD22IvkeByHokvzeOW1IakHIR_qdiH4zv9MNfjpy2svuS7imnnik1XLGx0A7eZmq6IDQlfu3v1Bk_ePBpgaOlZBEypmcIaZe5BySbZo6QrLfzASgwfMF0XrJYwtydgq3jz7L3EWEO1IaHT3x-qwZuOC8S4n3m0hr-PZwt7r3akXb75OlFbJMoWRuuWViC-Jpqzm3e72ueNc5BeSIBQ</code></p>
<p>Below is the error I receive:</p>
<hr />
<pre><code>Traceback (most recent call last):
File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/jwt/algorithms.py", line 350, in prepare_key
RSAPrivateKey, load_pem_private_key(key_bytes, password=None)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/cryptography/hazmat/primitives/serialization/base.py", line 25, in load_pem_private_key
return ossl.load_pem_private_key(
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/cryptography/hazmat/backends/openssl/backend.py", line 747, in load_pem_private_key
return self._load_key(
^^^^^^^^^^^^^^^
File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/cryptography/hazmat/backends/openssl/backend.py", line 929, in _load_key
self._handle_key_loading_error()
File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/cryptography/hazmat/backends/openssl/backend.py", line 984, in _handle_key_loading_error
raise ValueError(
ValueError: ('Could not deserialize key data. The data may be in an incorrect format, it may be encrypted with an unsupported algorithm, or it may be an unsupported key type (e.g. EC curves with explicit parameters).', [<OpenSSLError(code=503841036, lib=60, reason=524556, reason_text=unsupported)>])
</code></pre>
<hr />
<p>Python 3.11.2; pip 23.3.1</p>
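Note that `jwt.decode` is called without a key, so PyJWT ends up trying to deserialize an empty/invalid key, which is what the traceback shows. If the gateway has already verified the signature and only the claims are needed, one option is `jwt.decode(token, options={"verify_signature": False})`; alternatively the payload can be decoded with the standard library alone. A sketch:

```python
import base64
import json

def decode_jwt_payload(auth_header):
    """Return the (unverified) claims of a JWT from an Authorization header."""
    token = auth_header.replace('Bearer ', '')
    payload_b64 = token.split('.')[1]             # header.payload.signature
    payload_b64 += '=' * (-len(payload_b64) % 4)  # restore base64url padding
    return json.loads(base64.urlsafe_b64decode(payload_b64))
```

To actually verify the signature, fetch Cognito's JWKS document for the user pool and pass the matching public key to `jwt.decode` along with `algorithms=["RS256"]`.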
|
<python><jwt>
|
2023-11-28 17:13:15
| 0
| 720
|
ZCode
|
77,565,668
| 3,650,477
|
How to setup the logging of my application with built with Flask and run with gunicorn
|
<p>I have created a simple application that returns a json object on demand based on a GET request. I have used the <a href="https://flask.palletsprojects.com/en/2.3.x/" rel="nofollow noreferrer">Flask</a> framework, and then I run in production with <a href="https://gunicorn.org/" rel="nofollow noreferrer">gunicorn</a>.</p>
<p>It is not clear to me how to setup the logging so that my application integrates better with the other two libraries. To be precise, when I run my application I get this type of logging messages:</p>
<pre><code>> gunicorn -b 0.0.0.0:5000 mymodule.api:app
[2023-11-28 17:29:27 +0100] [124978] [INFO] Starting gunicorn 21.2.0
[2023-11-28 17:29:27 +0100] [124978] [INFO] Listening at: http://0.0.0.0:5000 (124978)
[2023-11-28 17:29:27 +0100] [124978] [INFO] Using worker: sync
[2023-11-28 17:29:27 +0100] [124979] [INFO] Booting worker with pid: 124979
INFO:mymodule.api:Test
</code></pre>
<p>I'm using just the <code>app.logger</code>, as recommended in the <a href="https://flask.palletsprojects.com/en/2.3.x/logging/" rel="nofollow noreferrer">Flask documentation</a>.</p>
<pre class="lang-py prettyprint-override"><code>logging.basicConfig(level=logging.DEBUG)
app = Flask(__name__)
logger = app.logger
logger.info('Test')
</code></pre>
<p>The <code>mymodule.api</code> logger itself has no handler attached, and its parent is the <code>root</code> logger. How can I make the logging from my application to integrate better (at least to have the same formatter) with the ones from gunicorn?</p>
<p>More generally, it's not clear to me how to set it up in a seamless way. I can control the logging in gunicorn from command-line arguments, but I'm not sure how to <em>propagate</em> this information to my own application. Currently I'm setting the level of <code>app.logger</code> by hard-coding it in the program, which is crude and not very flexible.</p>
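Since `app.logger` is a standard `logging.Logger`, one common pattern (a sketch; `'gunicorn.error'` is gunicorn's actual error logger name, but the wiring details are an assumption, not taken from your setup) is to adopt gunicorn's handlers and level at startup, so both loggers emit through the same formatter:

```python
import logging

# Stand-in for gunicorn's 'gunicorn.error' logger, which gunicorn itself
# configures from its command-line options (--log-level and friends).
gunicorn_logger = logging.getLogger('gunicorn.error')
handler = logging.StreamHandler()
handler.setFormatter(logging.Formatter(
    '[%(asctime)s] [%(process)d] [%(levelname)s] %(message)s'))
gunicorn_logger.addHandler(handler)
gunicorn_logger.setLevel(logging.INFO)

# In the Flask app this would be app.logger instead of a plain logger.
app_logger = logging.getLogger('mymodule.api')
app_logger.handlers = gunicorn_logger.handlers  # share formatter/destination
app_logger.setLevel(gunicorn_logger.level)      # inherit gunicorn's level

app_logger.info('Test')  # now formatted like gunicorn's own lines
```

In the real application, replace `app_logger` with `app.logger` and do the handler/level copy once at import time; when running outside gunicorn, `logging.getLogger('gunicorn.error')` simply has no handlers, so you can fall back to `logging.basicConfig`.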
|
<python><flask><gunicorn><python-logging>
|
2023-11-28 16:42:49
| 0
| 2,729
|
Pythonist
|
77,565,397
| 5,459,343
|
Sphinx documentation tree without full path to module
|
<p>I'm trying to create an automated documentation sphinx with autodoc and autosummary extension. I'm also using the <code>pydata_sphinx_theme</code>.</p>
<p>My goal is to automatically create a tree of objects (let's stick with python functions for now) that will be shown in the primary sidebar of the documentation.</p>
<p>The template (see below) does the job; nevertheless, it always shows the full path to each function, e.g. <code>my_package.my_python_module1.function_A</code>.
The goal is to get rid of the whole path and see just the ending object.</p>
<pre class="lang-none prettyprint-override"><code>Code structure:
ββββmy_package
β ββββmy_python_module1 (contains function_A)
β ββββmy_directory
β ββββmy_python_module2 (contains function_B)
Generated documentation tree:
ββββmy_package
β ββββmy_package.my_python_module1
β ββββmy_package.my_python_module1.function_A
β ββββmy_package.my_directory
β ββββmy_package.my_directory.my_python_module2
β ββββmy_package.my_directory.my_python_module2.function_B
Desired documentation tree
ββββmy_package
β ββββmy_python_module1
β ββββfunction_A
β ββββmy_directory
β ββββmy_python_module2
β ββββfunction_B
</code></pre>
<p>One of the recommended options is to use <code>add_module_names = False</code> in conf.py - this does not work for the pydata_sphinx_theme.</p>
<p>Also, playing around with "fullname" in the code below (e.g. replacing it with "name") only partially solves the problem.</p>
<p>Full template code of custom-module-template.rst:</p>
<pre class="lang-none prettyprint-override"><code>{{ fullname | escape | underline}}
.. automodule:: {{ fullname }}
{% block attributes %}
{% if attributes %}
.. rubric:: Module attributes
.. autosummary::
:toctree:
{% for item in attributes %}
{{ item }}
{%- endfor %}
{% endif %}
{% endblock %}
{% block functions %}
{% if functions %}
.. rubric:: {{ _('Functions') }}
.. autosummary::
:toctree:
:nosignatures:
{% for item in functions %}
{{ item }}
{%- endfor %}
{% endif %}
{% endblock %}
{% block classes %}
{% if classes %}
.. rubric:: {{ _('Classes') }}
.. autosummary::
:toctree:
:template: custom-class-template.rst
:nosignatures:
{% for item in classes %}
{{ item }}
{%- endfor %}
{% endif %}
{% endblock %}
{% block exceptions %}
{% if exceptions %}
.. rubric:: {{ _('Exceptions') }}
.. autosummary::
:toctree:
{% for item in exceptions %}
{{ item }}
{%- endfor %}
{% endif %}
{% endblock %}
{% block modules %}
{% if modules %}
.. autosummary::
:toctree:
:template: custom-module-template.rst
:recursive:
{% for item in modules %}
{{ item }}
{%- endfor %}
{% endif %}
{% endblock %}
</code></pre>
<p>So the question is: how to adjust the template in order to get rid of the full path?</p>
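One approach (an untested sketch) is to derive the short name inside the template itself: Jinja lets you call Python string methods, so the page title can use only the last dotted component while `automodule` keeps the full name it needs:

```rst
{{ fullname.split('.')[-1] | escape | underline }}

.. automodule:: {{ fullname }}
```

Note that `add_module_names = False` only affects object signatures rendered by autodoc, not the page titles that autosummary generates from the template, which is why it appeared to have no effect here.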
|
<python><python-sphinx><autodoc><autosummary>
|
2023-11-28 16:06:40
| 1
| 573
|
J.K.
|
77,565,339
| 1,714,692
|
How to specify the value for a multiindex in a pandas dataframe?
|
<p>Suppose I want to build a pandas dataframe using multiple indexing.</p>
<p>I start defining the expected columns of the dataframe:</p>
<pre><code>df = pd.DataFrame(columns=["val",])
</code></pre>
<p>Then I build some entries and their indexing:</p>
<pre><code>for j in range(1,5):
tuples = [(str(j), i) for i in range(10)]
vals = [0,1,2,3,j,j,4,4,1,1]
</code></pre>
<p>At each iteration of the for loop I would like to update the dataframe with the new values. The method <code>_append</code> does not seem to support specifying indexes, and I've read that the <code>.loc</code> method is much more efficient.</p>
<p>So I was trying something like:</p>
<pre><code>for i2, el in enumerate(tuples):
df.loc[el] = vals[i2] #el is a tuple
</code></pre>
<p>But this is not working as I expected:
If I try to execute the command with a single multi index and a single value, similar to:</p>
<pre><code>df.loc[('1', 3)] = 4
</code></pre>
<p>I get a dataframe that looks like:</p>
<pre><code> val 3
1 NaN 4.0
</code></pre>
<p>whereas I was expecting something like:</p>
<pre><code> val
1 3 4.0
</code></pre>
<p>How to specify the value for a multiindex in a pandas dataframe?</p>
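With a plain `RangeIndex`, `df.loc[('1', 3)] = 4` is interpreted as row label `'1'`, column label `3`, which explains the extra column. To get hierarchical row labels, the frame needs an actual `MultiIndex`. A sketch (the index names are placeholders; collecting everything first and building the frame in one go is also much faster than row-by-row `.loc` writes):

```python
import pandas as pd

index_tuples, values = [], []
for j in range(1, 5):
    vals = [0, 1, 2, 3, j, j, 4, 4, 1, 1]
    for i in range(10):
        index_tuples.append((str(j), i))
        values.append(vals[i])

# Build the hierarchical index explicitly, then the frame in one step.
idx = pd.MultiIndex.from_tuples(index_tuples, names=['group', 'pos'])
df = pd.DataFrame({'val': values}, index=idx)

print(df.loc[('1', 3), 'val'])  # 3
```

Once the frame has a `MultiIndex`, tuple keys in `.loc` address rows as intended.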
|
<python><pandas><dataframe><multi-index>
|
2023-11-28 15:59:42
| 1
| 9,606
|
roschach
|
77,565,329
| 3,070,181
|
Changing tkinter default font in a conda environment
|
<p>I am running a small test program</p>
<p><strong>font.py</strong></p>
<pre><code>import tkinter as tk
import tkinter.font as tkFont
root = tk.Tk()
font_size = 24
fonts = [
'Arial',
'Droid sans Mono',
'Fira Code',
'TSCu_comic',
'Inconsolata',
]
for font in fonts:
my_font = tkFont.Font(family=font, size=font_size)
root.option_add('*Font', my_font)
label = tk.Label(root, text='Hello, world!')
label.pack()
root.mainloop()
</code></pre>
<p>If I run it outside a conda environment it works</p>
<p><a href="https://i.sstatic.net/nijjq.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/nijjq.png" alt="enter image description here" /></a></p>
<p>In the environment some fonts do not change and others are rendered anomalously</p>
<p><a href="https://i.sstatic.net/iyFS2.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/iyFS2.png" alt="enter image description here" /></a></p>
<p>What should I do?</p>
|
<python><tkinter><miniconda>
|
2023-11-28 15:58:38
| 1
| 3,841
|
Psionman
|
77,565,283
| 19,130,803
|
pandas: get scalar value by column
|
<p>I have a dataframe and I want get a single scalar value from column <code>store_id</code> which contains same values for all rows.</p>
<pre><code>df = pd.DataFrame(
{
"id": [0, 1, 2, 3, 4, 5, 6, 7, 8, 9],
"contents": [0, 1, 2, 3, 4, 5, 6, 7, 8, 9],
"store_id": [2, 2, 2, 2, 2, 2, 2, 2, 2, 2]
}
)
store_id = int(df["store_id"].max())
</code></pre>
<p>The above gives me the result; I'm just wondering whether it will be slow if the dataframe is very big. I retrieve the dataframe from another function, so it may contain 10, 300, or many more rows (i.e. the size is dynamic), and all rows will have the <code>store_id</code> column.</p>
<p>I also tried</p>
<pre><code>store_id = df["store_id"].squeeze() # but it did not work
</code></pre>
<p>Is there a more efficient way to achieve the same?</p>
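Positional access via `.iat` is O(1) and does not scan the column, so it stays fast regardless of row count. A sketch (the `nunique` check is optional and encodes an assumption about your data):

```python
import pandas as pd

df = pd.DataFrame({
    'id': range(10),
    'contents': range(10),
    'store_id': [2] * 10,
})

store_id = df['store_id'].iat[0]  # first value, O(1) positional access
print(store_id)  # 2

# Optional sanity check that the column really is constant (this scans).
assert df['store_id'].nunique() == 1
```

For reference, `squeeze()` only returns a scalar when the Series has exactly one element, which is why it did not work on a multi-row column.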
|
<python><pandas>
|
2023-11-28 15:53:09
| 2
| 962
|
winter
|
77,565,230
| 9,003,184
|
Web Scraping returns T&C link
|
<p><a href="https://cycling.data.tfl.gov.uk/" rel="nofollow noreferrer">This URL</a> has some CSV data files I'd like to use for analysis under the folder 'usage-stats'. However, I'm unable to scrape data from this page. When I try using the below code, I can only see the terms and conditions page instead of the links to the CSV data files:</p>
<pre><code>import requests
from bs4 import BeautifulSoup
def fetch_file_list(url):
response = requests.get(url)
if response.status_code == 200:
soup = BeautifulSoup(response.text, "html.parser")
links = [link.get("href") for link in soup.find_all("a")]
return links
fetch_file_list(<urlname>)
</code></pre>
<p>This code gives the below output:</p>
<pre class="lang-none prettyprint-override"><code>['https://tfl.gov.uk/corporate/terms-and-conditions/transport-data-service']
</code></pre>
<p>One additional thing I tried was using the URL including the folder name, still with no success. I'm new to scraping.</p>
<p>When we're returned the T&C link, are we supposed to assume scraping data from this site is impossible? If not, how else should I do it?</p>
<p>I need data from the usage-stats folder for the period from 2021 to 2023, which is a lot of files. I'd like to find a way to read them automatically and simply, if that is possible.</p>
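The file links are rendered by JavaScript, so `requests` only sees the T&C link baked into the static HTML. The page appears to be a front end over an S3-style bucket (an assumption to verify, as is the listing URL `https://cycling.data.tfl.gov.uk/?prefix=usage-stats/`), whose XML listing can be fetched directly instead of scraping the HTML. A sketch of parsing such a listing:

```python
import re
import xml.etree.ElementTree as ET

# Standard namespace used in S3 ListBucketResult documents.
S3_NS = {'s3': 'http://s3.amazonaws.com/doc/2006-03-01/'}

def keys_from_listing(xml_text):
    """Extract object keys from an S3 ListBucketResult XML document."""
    root = ET.fromstring(xml_text)
    return [k.text for k in root.findall('.//s3:Contents/s3:Key', S3_NS)]

def keys_for_years(keys, years=('2021', '2022', '2023')):
    """Keep only keys whose name mentions one of the target years."""
    pattern = re.compile('|'.join(years))
    return [k for k in keys if pattern.search(k)]
```

The live fetch would be `xml_text = requests.get('https://cycling.data.tfl.gov.uk/?prefix=usage-stats/').text`, and each key would then be downloaded from `https://cycling.data.tfl.gov.uk/<key>`; check the site's terms and conditions before automating downloads.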
|
<python><web-scraping><beautifulsoup><python-requests>
|
2023-11-28 15:44:14
| 1
| 488
|
a_jelly_fish
|
77,565,163
| 2,215,094
|
OpenEmbedded (Yocto) recipe for Python validators library
|
<p>I want to include the Python library <em><a href="https://pypi.org/project/validators/" rel="nofollow noreferrer">validators</a></em> in my OpenEmbedded build. I am working with the Kirkstone release and the latest version of <em>validators</em> is 0.22.0. There is no recipe, so I added it myself:</p>
<pre><code>inherit pypi
SUMMARY = "Python Data Validation for Humansβ’"
HOMEPAGE = "https://python-validators.github.io/validators"
LICENSE = "MIT"
LIC_FILES_CHKSUM = "file://LICENSE;md5=fcf28bd09a60e145c3171c531b9e677d"
SRC_URI[sha256sum] = "77b2689b172eeeb600d9605ab86194641670cdb73b60afd577142a9397873370"
BBCLASSEXTEND = "native nativesdk"
</code></pre>
<p>The build runs through, but the package is not correctly installed. In <code>/usr/lib/python3.10/site-packages</code> I get the folder <code>UNKNOWN-0.0.0.dist-info</code>, which contains some meta information including the correct license file, but no Python files.</p>
<p>I played around with inheriting various Python build system classes, but with no success. I noticed that <em>validators</em>' <code>pyproject.toml</code> file mentions <em>setuptools</em> version 61:</p>
<pre><code>[build-system]
requires = ["setuptools>=61"]
</code></pre>
<p>Meanwhile, <code>meta-oe-core</code> only contains version 59.5.0 (<code>python3-setuptools_59.5.0.bb</code>).</p>
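An `UNKNOWN-0.0.0.dist-info` result is typical when a `pyproject.toml`-only package is built through the legacy `setup.py` path. A sketch of the recipe using the PEP 517 build class instead (assuming Kirkstone's core layer provides `python_setuptools_build_meta`, and keeping the rest of the recipe unchanged):

```bitbake
SUMMARY = "Python Data Validation for Humans"
HOMEPAGE = "https://python-validators.github.io/validators"
LICENSE = "MIT"
LIC_FILES_CHKSUM = "file://LICENSE;md5=fcf28bd09a60e145c3171c531b9e677d"

SRC_URI[sha256sum] = "77b2689b172eeeb600d9605ab86194641670cdb73b60afd577142a9397873370"

# Use the PEP 517 build backend declared in pyproject.toml instead of
# the legacy setup.py path.
inherit pypi python_setuptools_build_meta

BBCLASSEXTEND = "native nativesdk"
```

If the build still fails because the native setuptools is older than the `setuptools>=61` requirement, the setuptools recipe itself may need to be backported from a newer release.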
|
<python><setuptools><openembedded>
|
2023-11-28 15:34:57
| 2
| 385
|
Jan Schatz
|
77,565,028
| 5,132,101
|
how do i mock requests.get().url with side_effect?
|
<p>I have the following code:</p>
<pre><code>def consuming_api_swapi_index_page(initial_page: int = 1):
"""Swapi index page."""
check = HTTPStatus.OK
results = []
while check == HTTPStatus.OK:
response = requests.get(
f'https://swapi.dev/api/people/?page={initial_page}'
)
results.append(url := response.url)
print(url)
check = response.status_code
initial_page += 1
return results
def test_consuming_api_swapi_index_page() -> None:
"""Test it."""
values = [
'https://swapi.dev/api/people/?page=1',
'https://swapi.dev/api/people/?page=2',
'https://swapi.dev/api/people/?page=3',
'https://swapi.dev/api/people/?page=4',
'https://swapi.dev/api/people/?page=5',
'https://swapi.dev/api/people/?page=6',
'https://swapi.dev/api/people/?page=7',
'https://swapi.dev/api/people/?page=8',
'https://swapi.dev/api/people/?page=9',
'https://swapi.dev/api/people/?page=10',
]
assert consuming_api_swapi_index_page() == values
</code></pre>
<p>I need to mock it. How do I do it?</p>
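A self-contained sketch: in real code you would `mock.patch('yourmodule.requests.get')`, but here a stub `requests` namespace keeps the example runnable without the network. Setting `side_effect` to a callable makes the mock compute a fresh fake response per call (a list of pre-built responses also works):

```python
from http import HTTPStatus
from types import SimpleNamespace
from unittest import mock

requests = SimpleNamespace()  # stand-in for the real requests module

def consuming_api_swapi_index_page(initial_page=1):
    check = HTTPStatus.OK
    results = []
    while check == HTTPStatus.OK:
        response = requests.get(f'https://swapi.dev/api/people/?page={initial_page}')
        results.append(response.url)
        check = response.status_code
        initial_page += 1
    return results

def fake_response(url, last_page=9):
    """Build a fake response; pages after last_page return 404, ending the loop."""
    page = int(url.rsplit('=', 1)[1])
    resp = mock.Mock()
    resp.url = url
    resp.status_code = HTTPStatus.OK if page <= last_page else HTTPStatus.NOT_FOUND
    return resp

# side_effect as a callable: each requests.get(url) call is routed through it.
requests.get = mock.Mock(side_effect=fake_response)

result = consuming_api_swapi_index_page()
print(result[-1])  # 'https://swapi.dev/api/people/?page=10'
```

With the function defined in, say, `swapi.py`, the same idea becomes `with mock.patch('swapi.requests.get', side_effect=fake_response): ...`, and the loop appends page 10's URL because the URL is recorded before the status check, matching your expected `values` list.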
|
<python><mocking><pytest>
|
2023-11-28 15:18:35
| 1
| 2,047
|
britodfbr
|
77,564,964
| 10,847,096
|
Checking if a tensors values are contained in another tensor
|
<p>I have a torch tensor like so:</p>
<pre class="lang-py prettyprint-override"><code>a=[1, 234, 54, 6543, 55, 776]
</code></pre>
<p>and other tensors like so:</p>
<pre class="lang-py prettyprint-override"><code>b=[234, 54]
c=[55, 776]
</code></pre>
<p>I want to create a new mask tensor that is true at each position where the value of <code>a</code> appears in one of the other tensors (<code>b</code> or <code>c</code>).<br><br>
For example, in the tensors we have above I would like to create the following masking tensor:<br></p>
<pre class="lang-py prettyprint-override"><code>a_masked =[False, True, True, False, True, True]
# The first two True values correspond to tensor `b` while the last two True values
correspond to tensor `c`.
</code></pre>
<p>I have seen other methods to check whether a full tensor is contained in another but this isn't the case here.
<br><br>
Is there a torch way to do this efficiently?
Thanks!</p>
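`torch.isin` performs exactly this elementwise membership test (available since PyTorch 1.10; on older versions, `(a.unsqueeze(1) == values).any(dim=1)` via broadcasting gives the same result). A sketch:

```python
import torch

a = torch.tensor([1, 234, 54, 6543, 55, 776])
b = torch.tensor([234, 54])
c = torch.tensor([55, 776])

# True wherever an element of `a` appears in the concatenated test values.
mask = torch.isin(a, torch.cat([b, c]))
print(mask)  # tensor([False,  True,  True, False,  True,  True])
```

Both `torch.isin` and the broadcasting fallback run fully vectorized on CPU or GPU, with no Python-level loop.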
|
<python><pytorch>
|
2023-11-28 15:09:15
| 1
| 993
|
Ofek Glick
|
77,564,963
| 7,295,599
|
Communication via PyUSB to Agilent E4980A
|
<p>Communication to USB devices drives me crazy. It's now at least the third USB device creating problems:</p>
<ul>
<li>the first device (<a href="https://stackoverflow.com/q/59105167/7295599">OWON oscilloscope</a>) had erroneous software and wrong documentation and I <a href="https://stackoverflow.com/q/75410653/7295599">couldn't get it to run on another PC</a>.</li>
<li>the second device (<a href="https://stackoverflow.com/q/73151914/7295599">Trinamic stepper motor controller</a>) seemed to be a different type of USB device</li>
<li>now, Agilent E4980A. The adapted script which worked for the OWON oscilloscope doesn't work for Agilent E4980A.</li>
</ul>
<p><code>pyusb</code> and <code>libusb</code> are installed and apparently found. My configuration:
Windows 10, Python 3.11.3, libusb-1.0 (v1.0.26.11724), pyUSB (v1.2.1)</p>
<p><strong>Script:</strong></p>
<pre><code>import usb.core
import usb.util
from usb.backend import libusb1
backend = libusb1.get_backend(find_library=lambda x: r'C:\Users\Lab\AppData\Local\Programs\Python\Python311\Lib\site-packages\libusb\_platform\_windows\x64\libusb-1.0.dll')
dev = usb.core.find(idVendor=2391, idProduct=2313, backend=backend) # Agilent E4980A
print(dev)
</code></pre>
<p><strong>Output:</strong> (well, device is apparently found)</p>
<pre><code>DEVICE ID 0957:0909 on Bus 001 Address 012 =================
bLength : 0x12 (18 bytes)
bDescriptorType : 0x1 Device
bcdUSB : 0x200 USB 2.0
bDeviceClass : 0x0 Specified at interface
bDeviceSubClass : 0x0
bDeviceProtocol : 0x0
bMaxPacketSize0 : 0x40 (64 bytes)
idVendor : 0x0957
idProduct : 0x0909
bcdDevice : 0x100 Device 1.0
iManufacturer : 0x1 Error Accessing String
iProduct : 0x2 Error Accessing String
iSerialNumber : 0x3 Error Accessing String
bNumConfigurations : 0x1
CONFIGURATION 1: 0 mA ====================================
bLength : 0x9 (9 bytes)
bDescriptorType : 0x2 Configuration
wTotalLength : 0x27 (39 bytes)
bNumInterfaces : 0x1
bConfigurationValue : 0x1
iConfiguration : 0x0
bmAttributes : 0xc0 Self Powered
bMaxPower : 0x0 (0 mA)
INTERFACE 0: Application Specific ======================
bLength : 0x9 (9 bytes)
bDescriptorType : 0x4 Interface
bInterfaceNumber : 0x0
bAlternateSetting : 0x0
bNumEndpoints : 0x3
bInterfaceClass : 0xfe Application Specific
bInterfaceSubClass : 0x3
bInterfaceProtocol : 0x1
iInterface : 0x4 Error Accessing String
ENDPOINT 0x2: Bulk OUT ===============================
bLength : 0x7 (7 bytes)
bDescriptorType : 0x5 Endpoint
bEndpointAddress : 0x2 OUT
bmAttributes : 0x2 Bulk
wMaxPacketSize : 0x200 (512 bytes)
bInterval : 0x0
ENDPOINT 0x86: Bulk IN ===============================
bLength : 0x7 (7 bytes)
bDescriptorType : 0x5 Endpoint
bEndpointAddress : 0x86 IN
bmAttributes : 0x2 Bulk
wMaxPacketSize : 0x200 (512 bytes)
bInterval : 0x0
ENDPOINT 0x88: Interrupt IN ==========================
bLength : 0x7 (7 bytes)
bDescriptorType : 0x5 Endpoint
bEndpointAddress : 0x88 IN
bmAttributes : 0x3 Interrupt
wMaxPacketSize : 0x2 (2 bytes)
bInterval : 0x1
</code></pre>
<p>However, as soon as I try to send a command (it doesn't matter whether with or without the newline characters <code>\r</code> or <code>\n</code>),</p>
<pre><code>cmd = '*IDN?'+'\r'
dev.write(2,cmd)
</code></pre>
<p>I will get an error:</p>
<pre><code>Traceback (most recent call last):
File "C:\Users\Lab\AppData\Local\Programs\Python\Python311\Lib\site-packages\usb\core.py", line 236, in get_interface_and_endpoint
return self._ep_info[endpoint_address]
~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^
KeyError: 2
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "C:\Users\Lab\Scripts\tbUSB.py", line 12, in <module>
dev.write(2,'*IDN?'+'\r')
File "C:\Users\Lab\AppData\Local\Programs\Python\Python311\Lib\site-packages\usb\core.py", line 986, in write
intf, ep = self._ctx.setup_request(self, endpoint)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\Lab\AppData\Local\Programs\Python\Python311\Lib\site-packages\usb\core.py", line 113, in wrapper
return f(self, *args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\Lab\AppData\Local\Programs\Python\Python311\Lib\site-packages\usb\core.py", line 228, in setup_request
intf, ep = self.get_interface_and_endpoint(device, endpoint_address)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\Lab\AppData\Local\Programs\Python\Python311\Lib\site-packages\usb\core.py", line 113, in wrapper
return f(self, *args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\Lab\AppData\Local\Programs\Python\Python311\Lib\site-packages\usb\core.py", line 238, in get_interface_and_endpoint
for intf in self.get_active_configuration(device):
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\Lab\AppData\Local\Programs\Python\Python311\Lib\site-packages\usb\core.py", line 113, in wrapper
return f(self, *args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\Lab\AppData\Local\Programs\Python\Python311\Lib\site-packages\usb\core.py", line 249, in get_active_configuration
self.managed_open()
File "C:\Users\Lab\AppData\Local\Programs\Python\Python311\Lib\site-packages\usb\core.py", line 113, in wrapper
return f(self, *args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\Lab\AppData\Local\Programs\Python\Python311\Lib\site-packages\usb\core.py", line 131, in managed_open
self.handle = self.backend.open_device(self.dev)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\Lab\AppData\Local\Programs\Python\Python311\Lib\site-packages\usb\backend\libusb1.py", line 804, in open_device
return _DeviceHandle(dev)
^^^^^^^^^^^^^^^^^^
File "C:\Users\Lab\AppData\Local\Programs\Python\Python311\Lib\site-packages\usb\backend\libusb1.py", line 652, in __init__
_check(_lib.libusb_open(self.devid, byref(self.handle)))
File "C:\Users\Lab\AppData\Local\Programs\Python\Python311\Lib\site-packages\usb\backend\libusb1.py", line 600, in _check
raise NotImplementedError(_strerror(ret))
NotImplementedError: Operation not supported or unimplemented on this platform
</code></pre>
<p>If I do first a <code>set configuration()</code> as recommended in the PyUSB tutorial, I get the following error:</p>
<pre><code>Traceback (most recent call last):
File "C:\Users\Lab\Scripts\tbUSB.py", line 10, in <module>
dev.set_configuration()
File "C:\Users\Lab\AppData\Local\Programs\Python\Python311\Lib\site-packages\usb\core.py", line 915, in set_configuration
self._ctx.managed_set_configuration(self, configuration)
File "C:\Users\Lab\AppData\Local\Programs\Python\Python311\Lib\site-packages\usb\core.py", line 113, in wrapper
return f(self, *args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\Lab\AppData\Local\Programs\Python\Python311\Lib\site-packages\usb\core.py", line 158, in managed_set_configuration
self.managed_open()
File "C:\Users\Lab\AppData\Local\Programs\Python\Python311\Lib\site-packages\usb\core.py", line 113, in wrapper
return f(self, *args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\Lab\AppData\Local\Programs\Python\Python311\Lib\site-packages\usb\core.py", line 131, in managed_open
self.handle = self.backend.open_device(self.dev)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\Lab\AppData\Local\Programs\Python\Python311\Lib\site-packages\usb\backend\libusb1.py", line 804, in open_device
return _DeviceHandle(dev)
^^^^^^^^^^^^^^^^^^
File "C:\Users\Lab\AppData\Local\Programs\Python\Python311\Lib\site-packages\usb\backend\libusb1.py", line 652, in __init__
_check(_lib.libusb_open(self.devid, byref(self.handle)))
File "C:\Users\Lab\AppData\Local\Programs\Python\Python311\Lib\site-packages\usb\backend\libusb1.py", line 600, in _check
raise NotImplementedError(_strerror(ret))
NotImplementedError: Operation not supported or unimplemented on this platform
</code></pre>
<p>Similar posts on StackOverflow with the same error message were not helpful:</p>
<ul>
<li><a href="https://stackoverflow.com/q/76130699/7295599">Pyusb Read error: NotImplementedError: Operation not supported or unimplemented on this platform</a></li>
<li><a href="https://stackoverflow.com/q/77318943/7295599">pyusb: NotImplementedError: Operation not supported or unimplemented on this platform</a></li>
<li><a href="https://stackoverflow.com/q/31960314/7295599">PyUSB 1.0: NotImplementedError: Operation not supported or unimplemented on this platform</a></li>
</ul>
<p>There is an I/O suite from Keysight/Agilent which seems to work; however, I wanted to avoid installing about 266 MB or 1.33 GB on every PC if I could do it with a simple driver of a few hundred kilobytes.</p>
<p>I'm also aware that there are other Python libraries (e.g. pyMeasure, even with some code for the Agilent E4980A), which I also tried without success because of (at least for me) insufficient documentation and a lack of minimal working examples. However, I don't want to use these libraries; I simply want to send some commands and receive some data via PyUSB.</p>
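<p>The round trip I'm after is tiny: encode a SCPI command to bytes, write it to the bulk-OUT endpoint, read raw bytes back, and decode them. A minimal sketch of just the encode/decode part (no device I/O; endpoint handling omitted):</p>

```python
def encode_scpi(cmd: str) -> bytes:
    # Most SCPI instruments expect a newline-terminated ASCII command.
    return cmd.strip().encode("ascii") + b"\n"

def decode_reply(raw) -> str:
    # dev.read() returns an array('B'); bytes() accepts it directly.
    return bytes(raw).decode("utf-8", errors="replace").strip()

print(encode_scpi("*IDN?"))
```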
<p>Apparently, the PC can get some information about the device via Python, but I can't write or read anything. Am I missing another driver or something?</p>
<p><strong>Update:</strong> (progress?)</p>
<p>In an <a href="https://sourceforge.net/p/pyusb/mailman/message/30229282/" rel="nofollow noreferrer">old post from 2012 on the PyUSB mailing list</a> I read that PyUSB requires special drivers for the device, which can be installed via <a href="https://github.com/pbatard/libwdi/wiki/Zadig" rel="nofollow noreferrer">Zadig</a>.
The original driver of the Keysight (Agilent) IO Library Suite (<code>Usbtmc v16.3.17614.0</code>) was replaced by libusbK (v3.1.0.0).</p>
<p><a href="https://i.sstatic.net/Ci5vZ.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Ci5vZ.png" alt="enter image description here" /></a></p>
<p>Now, the Windows Device Manager indicates that the Oscilloscope P1337 and the AgilentE4980A use <code>libusbK</code> as driver.</p>
<p><a href="https://i.sstatic.net/wOZ0M.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/wOZ0M.png" alt="enter image description here" /></a></p>
<p>As a negative side effect, the Keysight IO Library Suite doesn't work anymore, not even after a re-installation, and I don't know how to get it back. But well, for me it is more important that it runs with PyUSB.</p>
<p>If I run the following script, the P1337 works fine, but the E4980A always stops with a timeout error.</p>
<p><strong>Script:</strong></p>
<pre><code>import usb.core
import usb.util
from usb.backend import libusb1

backend = libusb1.get_backend(find_library=lambda x: r'C:\Users\User\AppData\Local\Programs\Python\Python311\Lib\site-packages\libusb\_platform\_windows\x64\libusb-1.0.dll')
devs = usb.core.find(find_all=True, backend=backend)

id_e4980a = (0x0957, 0x0909)  # Agilent E4980A
id_p1337 = (0x5345, 0x1234)   # OWON/Peaktech P1337

def get_device(devs, id_dev):
    dev = None
    for x in devs:
        # print("idVendor: 0x{:04x}, idProduct: 0x{:04x}, Manufacturer: {}".format(x.idVendor, x.idProduct, x.iManufacturer))
        if id_dev == (x.idVendor, x.idProduct): dev = x
    return dev

def check_dev(id_dev, ep1, ep2):
    dev = get_device(devs, id_dev)
    print(dev)
    dev.set_configuration()
    cfg = dev.get_active_configuration()
    intf = cfg[(0, 0)]
    ep = usb.util.find_descriptor(intf, custom_match=lambda e: usb.util.endpoint_direction(e.bEndpointAddress) == usb.util.ENDPOINT_OUT)
    print(ep)
    print(dev.write(ep1, '*IDN?'))
    print(dev.read(ep2, 100).tobytes().decode('utf-8'))

check_dev(id_p1337, 0x3, 0x81)
check_dev(id_e4980a, 0x2, 0x86)
</code></pre>
<p><strong>Result:</strong> (for P1337, all ok)</p>
<pre><code>DEVICE ID 5345:1234 on Bus 002 Address 003 =================
bLength : 0x12 (18 bytes)
bDescriptorType : 0x1 Device
bcdUSB : 0x200 USB 2.0
bDeviceClass : 0x0 Specified at interface
bDeviceSubClass : 0x0
bDeviceProtocol : 0x0
bMaxPacketSize0 : 0x40 (64 bytes)
idVendor : 0x5345
idProduct : 0x1234
bcdDevice : 0x294 Device 2.94
iManufacturer : 0x1 System CPU
iProduct : 0x2 Oscilloscope
iSerialNumber : 0x3 SERIAL
bNumConfigurations : 0x1
CONFIGURATION 1: 500 mA ==================================
bLength : 0x9 (9 bytes)
bDescriptorType : 0x2 Configuration
wTotalLength : 0x20 (32 bytes)
bNumInterfaces : 0x1
bConfigurationValue : 0x1
iConfiguration : 0x5 Bulk Data Configuration
bmAttributes : 0xc0 Self Powered
bMaxPower : 0xfa (500 mA)
INTERFACE 0: Physical ==================================
bLength : 0x9 (9 bytes)
bDescriptorType : 0x4 Interface
bInterfaceNumber : 0x0
bAlternateSetting : 0x0
bNumEndpoints : 0x2
bInterfaceClass : 0x5 Physical
bInterfaceSubClass : 0x6
bInterfaceProtocol : 0x50
iInterface : 0x4 Bulk Data Interface
ENDPOINT 0x81: Bulk IN ===============================
bLength : 0x7 (7 bytes)
bDescriptorType : 0x5 Endpoint
bEndpointAddress : 0x81 IN
bmAttributes : 0x2 Bulk
wMaxPacketSize : 0x200 (512 bytes)
bInterval : 0x0
ENDPOINT 0x3: Bulk OUT ===============================
bLength : 0x7 (7 bytes)
bDescriptorType : 0x5 Endpoint
bEndpointAddress : 0x3 OUT
bmAttributes : 0x2 Bulk
wMaxPacketSize : 0x200 (512 bytes)
bInterval : 0x0
ENDPOINT 0x3: Bulk OUT ===============================
bLength : 0x7 (7 bytes)
bDescriptorType : 0x5 Endpoint
bEndpointAddress : 0x3 OUT
bmAttributes : 0x2 Bulk
wMaxPacketSize : 0x200 (512 bytes)
bInterval : 0x0
5
,P1337,1842237,V2.4.0->
</code></pre>
<p><strong>Result:</strong> (for E4980A, timeout error)</p>
<pre><code>DEVICE ID 0957:0909 on Bus 002 Address 007 =================
bLength : 0x12 (18 bytes)
bDescriptorType : 0x1 Device
bcdUSB : 0x200 USB 2.0
bDeviceClass : 0x0 Specified at interface
bDeviceSubClass : 0x0
bDeviceProtocol : 0x0
bMaxPacketSize0 : 0x40 (64 bytes)
idVendor : 0x0957
idProduct : 0x0909
bcdDevice : 0x100 Device 1.0
iManufacturer : 0x1 Agilent Technologies
iProduct : 0x2 E4980A
iSerialNumber : 0x3 MY46203491
bNumConfigurations : 0x1
CONFIGURATION 1: 0 mA ====================================
bLength : 0x9 (9 bytes)
bDescriptorType : 0x2 Configuration
wTotalLength : 0x27 (39 bytes)
bNumInterfaces : 0x1
bConfigurationValue : 0x1
iConfiguration : 0x0
bmAttributes : 0xc0 Self Powered
bMaxPower : 0x0 (0 mA)
INTERFACE 0: Application Specific ======================
bLength : 0x9 (9 bytes)
bDescriptorType : 0x4 Interface
bInterfaceNumber : 0x0
bAlternateSetting : 0x0
bNumEndpoints : 0x3
bInterfaceClass : 0xfe Application Specific
bInterfaceSubClass : 0x3
bInterfaceProtocol : 0x1
iInterface : 0x4 tmc48ζΈ
ENDPOINT 0x2: Bulk OUT ===============================
bLength : 0x7 (7 bytes)
bDescriptorType : 0x5 Endpoint
bEndpointAddress : 0x2 OUT
bmAttributes : 0x2 Bulk
wMaxPacketSize : 0x200 (512 bytes)
bInterval : 0x0
ENDPOINT 0x86: Bulk IN ===============================
bLength : 0x7 (7 bytes)
bDescriptorType : 0x5 Endpoint
bEndpointAddress : 0x86 IN
bmAttributes : 0x2 Bulk
wMaxPacketSize : 0x200 (512 bytes)
bInterval : 0x0
ENDPOINT 0x88: Interrupt IN ==========================
bLength : 0x7 (7 bytes)
bDescriptorType : 0x5 Endpoint
bEndpointAddress : 0x88 IN
bmAttributes : 0x3 Interrupt
wMaxPacketSize : 0x2 (2 bytes)
bInterval : 0x1
ENDPOINT 0x2: Bulk OUT ===============================
bLength : 0x7 (7 bytes)
bDescriptorType : 0x5 Endpoint
bEndpointAddress : 0x2 OUT
bmAttributes : 0x2 Bulk
wMaxPacketSize : 0x200 (512 bytes)
bInterval : 0x0
</code></pre>
<p><strong>Error:</strong></p>
<pre><code>Traceback (most recent call last):
File "C:\Users\Lab\Scripts\tbUSB.py", line 31, in <module>
check_dev(id_e4980a,0x2,0x86)
File "C:\Users\Lab\Scripts\tbUSB.py", line 27, in check_dev
print(dev.write(ep1,'*IDN?'))
^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\Lab\AppData\Local\Programs\Python\Python311\Lib\site-packages\usb\core.py", line 989, in write
return fn(
^^^
File "C:\Users\Lab\AppData\Local\Programs\Python\Python311\Lib\site-packages\usb\backend\libusb1.py", line 837, in bulk_write
return self.__write(self.lib.libusb_bulk_transfer,
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\Lab\AppData\Local\Programs\Python\Python311\Lib\site-packages\usb\backend\libusb1.py", line 938, in __write
_check(retval)
File "C:\Users\Lab\AppData\Local\Programs\Python\Python311\Lib\site-packages\usb\backend\libusb1.py", line 602, in _check
raise USBTimeoutError(_strerror(ret), ret, _libusb_errno[ret])
usb.core.USBTimeoutError: [Errno 10060] Operation timed out
</code></pre>
<p>So, at least with the <code>libusbK</code> driver the <code>dev.set_configuration()</code> doesn't lead to an error, but now there is a timeout error.</p>
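<p>One detail from the descriptor above may matter: <code>bInterfaceClass 0xfe</code> / <code>bInterfaceSubClass 0x3</code> marks the E4980A as a USBTMC-class device, and USBTMC devices expect each bulk-OUT transfer to begin with a 12-byte <code>DEV_DEP_MSG_OUT</code> header rather than a raw SCPI string — which might explain why a bare <code>dev.write(0x2, '*IDN?')</code> times out while the P1337 (a plain bulk device) accepts it. A sketch of that framing, untested on the instrument itself:</p>

```python
import struct

def dev_dep_msg_out(btag: int, payload: bytes, eom: bool = True) -> bytes:
    """Build a USBTMC DEV_DEP_MSG_OUT bulk transfer (MsgID 1):
    a 12-byte header followed by the payload padded to a multiple
    of 4 bytes, per the USBTMC 1.0 specification."""
    header = struct.pack(
        "<BBBxIB3x",
        1,                       # MsgID: DEV_DEP_MSG_OUT
        btag,                    # bTag (1..255, must change per transfer)
        (~btag) & 0xFF,          # bTagInverse
        len(payload),            # TransferSize, little-endian
        0x01 if eom else 0x00,   # bmTransferAttributes: bit 0 = EOM
    )
    pad = (-len(payload)) % 4
    return header + payload + b"\x00" * pad

# e.g. frame '*IDN?' before writing it to endpoint 0x2
msg = dev_dep_msg_out(1, b"*IDN?\n")
```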
<p>Why this? How to solve this? What else is missing?</p>
|
<python><usb><serial-communication><libusb><pyusb>
|
2023-11-28 15:09:10
| 1
| 27,030
|
theozh
|