QuestionId
int64 74.8M
79.8M
| UserId
int64 56
29.4M
| QuestionTitle
stringlengths 15
150
| QuestionBody
stringlengths 40
40.3k
| Tags
stringlengths 8
101
| CreationDate
stringdate 2022-12-10 09:42:47
2025-11-01 19:08:18
| AnswerCount
int64 0
44
| UserExpertiseLevel
int64 301
888k
| UserDisplayName
stringlengths 3
30
β |
|---|---|---|---|---|---|---|---|---|
76,095,081
| 2,219,819
|
How do I use Kerberos tickets to execute commands via SSH on a remote server?
|
<p>I would like to host a web service (Jupyterhub) which executes the following steps for a user:</p>
<ol>
<li>Acquire Kerberos ticket from user</li>
<li>Use Kerberos ticket to spawn batch job on remote server</li>
</ol>
<p>Therefore, I would need some python snippet to handle the authentication part (python-gssapi) and pass the ticket to Paramiko.
However, I do not understand how to get a Kerberos ticket with username/password and then pass it explicitly to <a href="https://docs.paramiko.org/en/latest/api/client.html#paramiko.client.SSHClient.connect" rel="nofollow noreferrer">SSHClient.connect</a></p>
<p>Any help is highly appreciated :)</p>
|
<python><ssh><kerberos><paramiko><gssapi>
|
2023-04-24 18:39:39
| 0
| 716
|
Hoeze
|
76,094,980
| 13,717,851
|
Why my tensorflow model is not learning in logistic regression - binary classification problem?
|
<p>I am using the following code in tensorflow to fit breast cancer dataset for a binary classification problem. The dataset has 30 features to predict cancer or not. The model is as below:</p>
<pre><code>def loss(y, y_pred):
return tf.reduce_mean(-y*tf.math.log(y_pred) - (1-y)*tf.math.log(1-y_pred))
class Model:
def __init__(self, n_feature, lr= 0.001):
self.lr = lr
self.W = tf.Variable(tf.random.uniform((n_feature+1, 1), -1, 1))
tf.print(self.W)
def __call__(self, X):
return tf.sigmoid(tf.matmul(X, self.W))
def predict(self, X):
return tf.sigmoid(tf.matmul(X, self.W))
def predict_classes(self, X):
a = tf.round(self.predict(X))
return tf.where(a > 0.5, tf.ones_like(a), tf.zeros_like(a))
def fit(self, X, y, epochs=1000):
for epoch in tqdm(range(epochs)):
with tf.GradientTape() as t:
y_pred = self.predict(X)
loss_i = loss(y, y_pred)
grad_W = t.gradient(loss_i, self.W)
self.W.assign_sub(grad_W*self.lr)
print(f"Loss is {loss_i}")
</code></pre>
<p>and I am preprocessing the datset as below (adding ones for bias as well)</p>
<pre><code>from sklearn.datasets import load_breast_cancer
data = load_breast_cancer()
n_feature = len(data.feature_names)
print(f"No of features: {n_feature}")
x = (data.data-data.data.min(axis=0))/(data.data.max(axis=0)-data.data.min(axis=0))
x = np.concatenate((x, np.ones((len(data.data), 1))), axis=1)
X = tf.convert_to_tensor(x, dtype=tf.float32)
y = tf.convert_to_tensor(data.target, dtype=tf.float32)
</code></pre>
<p>I have tried tuning the lr 0.1 - 0.001 and increasing epochs, still of no use. The model gives a max accuracy of 60%, min loss of 0.66 and it predicts all True. Is there some problem with scaling the input data or something else is wrong?</p>
<p>Similar model with keras Sequential API is performing good (95%% accuracy). Following is the caller code:</p>
<pre><code>model = Model(n_feature=n_feature)
model.fit(X, y)
y_hat = model.predict_classes(X)
from sklearn.metrics import accuracy_score
print(f"Accuracy: {accuracy_score(y.numpy(), y_hat.numpy())}")
</code></pre>
|
<python><tensorflow><logistic-regression><gradient-descent><sigmoid>
|
2023-04-24 18:24:45
| 1
| 876
|
Sayan Dey
|
76,094,836
| 604,388
|
How to properly use await with pyscript?
|
<p>Here is my simplified <a href="https://github.com/custom-components/pyscript" rel="nofollow noreferrer">pyscript</a> -</p>
<pre><code>async def auth(brand):
async with aiohttp.ClientSession() as session:
async with session.post(url_auth) as resp:
...
return auth_token_b64
@service
def get_counters(address_id):
auth_token_b64 = await auth(brand)
...
</code></pre>
<p>It worked well when I used it locally with <code>asyncio.run(get_counters(address_id=1))</code>.</p>
<p>But now I've uploaded the file to Home Assistant and there I get the following error -</p>
<blockquote>
<p>return auth_token_b64</p>
<p>TypeError: object str can't be used in 'await' expression</p>
</blockquote>
<p>What is wrong here?</p>
|
<python><python-3.x><pyscript><home-assistant>
|
2023-04-24 18:05:37
| 2
| 20,489
|
LA_
|
76,094,806
| 2,236,231
|
Translate ISO 639-1 to python locale
|
<p>My main aim is to correctly setting <code>locale</code> from <code>update.effective_user.language_code</code> in a python telegram bot</p>
<pre class="lang-py prettyprint-override"><code>locale.setlocale(locale.LC_ALL, update.effective_user.language_code)
</code></pre>
<p><code>update.effective_user.language_code</code> returns ISO 639-1 codes, which for example, sets <code>es</code> for Spanish. However, <code>locale</code> is expecting something like <code>es_ES</code> in Debian Buster, for example.</p>
<p>Surprisingly, in Windows, it is working, somehow, as it internally converts it to <code>es_ES</code>.</p>
<p>I tried installing in Debian Buster <code>locales-all</code> but the <code>es</code> code doesn't appear when I issue an <code>locale -a</code></p>
<p>The error I get is:</p>
<pre><code>locale.Error: unsupported locale setting
</code></pre>
<p>I am using Python 3.11</p>
|
<python><locale><python-telegram-bot>
|
2023-04-24 18:01:50
| 0
| 1,099
|
Geiser
|
76,094,637
| 1,914,781
|
plot line segments with plotly
|
<p>I would like to plot line segments with plotly. below demo works but it use different traces per segments, which should not be the right way to do it.</p>
<p>How can I implement line segments plot?</p>
<pre><code>import pandas as pd
import numpy as np
import plotly.graph_objects as go
from plotly.subplots import make_subplots
def plot_segments(df):
xname = "ts"
yname = "duration"
dfg = df.groupby('name')
fig = go.Figure()
colors=['#4f81bd','#c0504d','#9bbb59','#8064a2','#4bacc6','#f79646','#0000ff']
traces = []
dy = 1.1
for i,[gname, df] in enumerate(dfg):
for index,row in df.iterrows():
x1 = row['ts']
x2 = x1 + pd.to_timedelta(row['duration'],unit = 's')
x = [x1,x2]
y = [dy,dy]
trace1 = go.Scatter(
x=x,
y=y,
mode='lines+markers',
marker=dict(
size=4,
line=dict(width=0,color=colors[i])),
line=dict(width=1,color=colors[i]),
)
traces.append(trace1)
dy += .03
fig.add_traces(traces)
fontsize = 10
fig.add_annotation(dict(
font=dict(color="black",size=fontsize),
x=0.5,
xshift=0,
y=0,
yshift=-30,
showarrow=False,
text='Timestamp',
textangle=0,
xref="paper",
yref="paper",
xanchor='center',
yanchor='top',
))
fig.add_annotation(dict(
font=dict(color="black",size=fontsize),
x=-0,
xshift=-20,
y=0.5,
showarrow=False,
text='Category',
textangle=-90,
xref="paper",
yref="paper",
xanchor='right',
yanchor='middle',
))
xpading=.05
fig.update_layout(
margin=dict(l=50,t=40,r=10,b=40),
plot_bgcolor='#ffffff',#'rgb(12,163,135)',
paper_bgcolor='#ffffff',
title="Segments",
#xaxis2_title="Timestamp",
#yaxis_title="Interval(secs)",
title_x=0.5,
showlegend=False,
legend=dict(x=.02,y=1.05),
barmode='group',
bargap=0.05,
bargroupgap=0.0,
font=dict(
family="Courier New, monospace",
size=fontsize,
color="black"
),
xaxis=dict(
visible=True,
tickangle=-15,
tickformat = '%m-%d %H:%M:%S',#datetime format
showline=True,
linecolor='black',
color='black',
linewidth=.5,
ticks='outside',
showgrid=False,
gridcolor='grey',
gridwidth=.5,
griddash='solid',#'dot',
),
yaxis=dict(
range=[0,1.2],
showline=True,
linecolor='black',
color='black',
linewidth=.5,
showgrid=True,
gridcolor='grey',
gridwidth=.5,
griddash='solid',#'dot',
zeroline=True,
zerolinecolor='grey',
zerolinewidth=.5,
showticklabels=True,
),
)
fig.show()
return
data = [
['04-21 20:54:10.247','A',2],
['04-21 20:54:15.247','A',1],
['04-21 20:54:20.247','A',2],
['04-21 20:54:25.247','A',1],
['04-21 20:54:11.247','B',2],
['04-21 20:54:26.247','B',1],
['04-21 20:54:31.247','B',2],
['04-21 20:54:36.247','B',1]
]
df = pd.DataFrame(data,columns=['ts','name','duration'])
df['ts'] = pd.to_datetime(df['ts'],format="%m-%d %H:%M:%S.%f")
plot_segments(df)
</code></pre>
<p>Output:
<a href="https://i.sstatic.net/czLpp.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/czLpp.png" alt="enter image description here" /></a></p>
|
<python><plotly>
|
2023-04-24 17:38:16
| 1
| 9,011
|
lucky1928
|
76,094,586
| 10,981,411
|
How do I move my label frame to top right side of my window?
|
<p>below are my codes. I want to move my entire LabelFrame to top right. Its currently shown up right in the middle. How do I do that?</p>
<p>I tried using sticky = 'NW' but that didnt work</p>
<pre><code>import tkinter
from tkinter import ttk
from tkinter import messagebox
import os
import openpyxl
from tkinter import *
#root = Tk()
#root.geometry("1200x800")
window = tkinter.Tk()
window.geometry("1200x800")
window.title("Data Entry Form")
frame = tkinter.Frame(window)
frame.pack()
# Saving User Info
user_info_frame = tkinter.LabelFrame(frame, text="User Information")
user_info_frame.grid(row=0, column=0, padx=20, pady=10)
first_name_label = tkinter.Label(user_info_frame, text="First Name")
first_name_label.grid(row=0, column=0)
last_name_label = tkinter.Label(user_info_frame, text="Last Name")
last_name_label.grid(row=1, column=0)
last_name_label = tkinter.Label(user_info_frame, text="Last Name2")
last_name_label.grid(row=2, column=0)
first_name_entry = tkinter.Entry(user_info_frame)
last_name_entry = tkinter.Entry(user_info_frame)
last_name2_entry = tkinter.Entry(user_info_frame)
first_name_entry.grid(row=0, column=1,padx=5, pady=5)
last_name_entry.grid(row=1, column=1,padx=5, pady=5)
last_name2_entry.grid(row=2, column=1,padx=5, pady=5)
# Set default values
first_name_entry.insert(0,"XX")
last_name_entry.insert(0,"YY")
window.mainloop()
</code></pre>
|
<python><tkinter>
|
2023-04-24 17:33:18
| 1
| 495
|
TRex
|
76,094,577
| 9,391,359
|
psycopg2 set proper encoding
|
<p>I am using <code>psycopg2</code> as database adapter.
The connection looks like</p>
<pre><code> conn = psycopg2.connect(host=hostname,⨠user=username,⨠password=password,⨠dbname=database⨠)
</code></pre>
<p>In the result of my query i have rows containing text like <code>"Π ΡΠ β’ Π ΡΠ \xa0Π β’Π βΠ ΠΠ β’Π ΡΠ ΠΠ Π"</code>.
It's russian and it seems it's not properly encoded.
I tried different encodings like <code>latin1, windows-1251 </code> and have errors like</p>
<pre><code> codec can't encode characters in position 0-3: ordinal not in range(256)
</code></pre>
<p>How to properly encode / decode such text ?</p>
|
<python><character-encoding><psycopg2>
|
2023-04-24 17:31:59
| 1
| 941
|
Alex Nikitin
|
76,094,568
| 7,447,976
|
pad_sequences changes the whole array in TensorFlow - Python
|
<p>I am practicing the LSTM networks in <code>tensorflow</code>. I am currently learning about masking and padding that are used in different lengths of inputs. However, when I use <code>pad_sequences</code> method, I am observing a strange behavior.</p>
<pre><code>import numpy as np
import tensorflow as tf
max_length = 8
X = []
for i in range(5):
length = np.random.randint(1, max_length+1)
data = np.random.randn(length, 4)
X.append(data)
print(X)
[array([[ 0.23830355, 0.32776379, 0.19888588, 0.27975603],
[-0.84285787, -0.76969476, 0.01841278, -0.88942005],
[-1.51102046, -0.18195023, -1.32969908, 0.19397443]]),
array([[-0.10567699, -0.79576066, 0.55816155, -0.70074442],
[-0.0386933 , 0.54722971, -1.71065981, 1.00276863],
[ 1.82485917, -1.19912133, -1.91067831, 0.37120413]]),
array([[ 0.03045082, 0.41638681, -1.49605253, -0.41086347],
[ 0.65929396, -0.09148023, -0.22942781, -0.76795439],
[ 0.56964325, 0.7318355 , 1.41732107, 0.38632864],
[ 0.78369032, 1.41461136, -1.32514831, 1.27382442],
[-1.4822751 , 0.44608809, -0.01882849, 0.78095785]]),
array([[ 1.59961346, -0.74595856, -0.91752237, -1.81289865],
[ 0.13899283, -0.93514456, -0.68329374, -0.91662576],
[ 1.09513416, 0.83803103, 0.63074595, -1.88594795]]),
array([[ 1.64358502, -2.28208926, -0.26371596, -0.59044336],
[ 1.52187054, 1.42308418, 0.0275608 , -0.09422734]])]
</code></pre>
<p>First, I create a random dataset with different lengths. Now, I go ahead and use <code>pad_sequences</code> method to make each input vector the same length.</p>
<pre><code>mask_val = -1
X_padded = tf.keras.preprocessing.sequence.pad_sequences(X, maxlen=max_length, padding='post', truncating='post', value=mask_val)
print(X_padded)
array([[[ 0, 0, 0, 0],
[ 0, 0, 0, 0],
[-1, 0, -1, 0],
[-1, -1, -1, -1],
[-1, -1, -1, -1],
[-1, -1, -1, -1],
[-1, -1, -1, -1],
[-1, -1, -1, -1]],
[[ 0, 0, 0, 0],
[ 0, 0, -1, 1],
[ 1, -1, -1, 0],
[-1, -1, -1, -1],
[-1, -1, -1, -1],
[-1, -1, -1, -1],
[-1, -1, -1, -1],
[-1, -1, -1, -1]],
[[ 0, 0, -1, 0],
[ 0, 0, 0, 0],
[ 0, 0, 1, 0],
[ 0, 1, -1, 1],
[-1, 0, 0, 0],
[-1, -1, -1, -1],
[-1, -1, -1, -1],
[-1, -1, -1, -1]],
[[ 1, 0, 0, -1],
[ 0, 0, 0, 0],
[ 1, 0, 0, -1],
[-1, -1, -1, -1],
[-1, -1, -1, -1],
[-1, -1, -1, -1],
[-1, -1, -1, -1],
[-1, -1, -1, -1]],
[[ 1, -2, 0, 0],
[ 1, 1, 0, 0],
[-1, -1, -1, -1],
[-1, -1, -1, -1],
[-1, -1, -1, -1],
[-1, -1, -1, -1],
[-1, -1, -1, -1],
[-1, -1, -1, -1]]])
</code></pre>
<p>As you can see, for every missing data, <code>tensorflow</code> uses <code>-1</code> as desired. However, all other entries are messed up. Can someone let me know what I am doing wrong here and how to fix it.</p>
|
<python><tensorflow><keras><lstm><padding>
|
2023-04-24 17:30:20
| 1
| 662
|
sergey_208
|
76,094,461
| 4,271,392
|
Fail to setup octave magic commands on Google Colab: module 'oct2py' has no attribute 'octave'
|
<p>I am trying to draft a MATLAB/Octave tutorial on Google Colab, and was hoping to use magic commands <code>%octave</code> and <code>%%octave</code> to write most of the cells. However, when I tried to setup the environment using the below commands</p>
<pre><code>!apt-get install octave
!pip install oct2py
%load_ext oct2py.ipython
</code></pre>
<p>I got an error <code>AttributeError: module 'oct2py' has no attribute 'octave'</code> for the <code>%load_ext</code> command above, here is the full error log:</p>
<pre><code>AttributeError Traceback (most recent call last)
<ipython-input-3-0c05e6d2529c> in <cell line: 3>()
1 get_ipython().system('apt-get install octave')
2 get_ipython().system('pip install oct2py')
----> 3 get_ipython().run_line_magic('load_ext', 'oct2py.ipython')
7 frames
<decorator-gen-57> in load_ext(self, module_str)
/usr/local/lib/python3.9/dist-packages/oct2py/ipython/octavemagic.py in __init__(self, shell)
70 """
71 super().__init__(shell)
---> 72 self._oct = oct2py.octave
73
74 # Allow display to be overridden for
AttributeError: module 'oct2py' has no attribute 'octave'
</code></pre>
<p>I am wondering if this is a known issue, and if there is any workaround.</p>
|
<python><google-colaboratory><ipython><octave><oct2py>
|
2023-04-24 17:17:21
| 1
| 1,564
|
FangQ
|
76,094,405
| 11,628,437
|
How do I count the frequency of a specific word within each cell?
|
<p>Here is my Pandas dataframe -</p>
<pre><code># Import pandas library
import pandas as pd
# initialize list elements
data = {'Company': ['Nike', 'Levi', 'Dell'],
'Items': ['Running Shoes, Walking Shoes, Socks', 'Jeans, Jackets, Designer Shoes', 'Laptops'],
'Specified_Word':['Shoes', 'Shoes', 'Laptops']}
# Create the pandas DataFrame with column name is provided explicitly
df = pd.DataFrame(data)
df['count'] = [2,1,1]
print(df.head())
</code></pre>
<p>I am basically trying to count the frequency of the word <code>Shoes</code> in the first 2 rows of the column <code>Items</code>. However, for the last row of that column I'd like to count the frequency of the word <code>Laptops</code>. These number need to be placed in the column <code>count</code>. Basically, I need to count the frequency of the word in the column <code>Specified_Word</code> within the corresponding cell in <code>Items</code>. I would like to automate the generation of the column <code>count</code>. As of now, it is apparent that it's manually generated.</p>
<p>I cannot use <code>str.count()</code> because it counts across the entire column. I need to perform the counting per cell.</p>
<p>The big picture here is that I am trying to compute the term frequency of certain words using Pandas. Term frequency is typically computed for documents, but over here they are cells of the column <code>Items</code>.</p>
|
<python><pandas>
|
2023-04-24 17:10:28
| 4
| 1,851
|
desert_ranger
|
76,094,384
| 5,942,100
|
Create multiple identical columns with different names within a dataframe using Pandas
|
<p>I would like to create multiple identical columns with different names within a dataframe using Pandas.</p>
<p><strong>Data</strong></p>
<pre><code>ID type name series date
AA all sue 111 1/1/2023
AA ok devon 222 1/1/2023
BB yes carey 333 1/1/2023
CC maybe carey 444 1/1/2023
</code></pre>
<p><strong>Desired</strong></p>
<pre><code>ID type type1 type2 type3 name name1 name2 name3 series series1 date
AA all all all all sue sue sue sue 111 111 1/1/2023
AA ok ok ok ok devon devon devon devon 222 222 1/1/2023
BB yes yes yes yes carey carey carey carey 333 333 1/1/2023
CC maybe maybe maybe maybe carey carey carey carey 444 444 1/1/2023
</code></pre>
<p><strong>Doing</strong></p>
<pre><code># Define the number of times to duplicate the column
n_duplicates = 3
# Duplicate a specific column n times using assign and concat
duplicates = [df['column_name']] * n_duplicates
new_columns = pd.concat(duplicates, axis=1)
# Add the duplicated columns to the DataFrame
df = pd.concat([df, new_columns], axis=1)
</code></pre>
<p>However this is not producing my desired outcome.
Any suggestion is appreciated.</p>
|
<python><pandas><numpy>
|
2023-04-24 17:07:29
| 2
| 4,428
|
Lynn
|
76,094,275
| 19,003,861
|
Changing Folium Geocoder icon
|
<p>I am trying to change the default <code>geocoder</code> icon with <code>folium</code>.</p>
<p>I thought using something like <code>i = folium.Icon(color='black')</code> would work, as I use a similar code to change my the icons of other markers.</p>
<p>But the code does not seem responsive.</p>
<p>What am I missing?</p>
<pre><code>from folium.plugins import Geocoder
def list_venues(request):
venue_markers = Venue.objects.filter(venue_active=True)
center_location = [average_latitude,average_longitude]
#center_zoom_start=
tiles_style = 'Stamen Terrain'
m = folium.Map(location=center_location,tiles=tiles_style)
m.fit_bounds([[min_latitude, min_longitude], [max_latitude, max_longitude]])
icon_location= folium.Icon(color='black')
icon_location_json = json.loads(icon_location.to_json())
Geocoder(icon=icon_location_json).add_to(m)
m_access = folium.Map(location=center_location,tiles=tiles_style)
for venue in venue_markers:
iframe = branca.element.IFrame(html=html, width=150, height=60)
popup=folium.Popup(iframe, max_width=300)
if venue.venue_type == 1:
coordinates =(venue.latitude, venue.longitude)
folium.Marker(coordinates,popup=popup,icon=folium.Icon(color='black',icon='utensils',prefix='fa',fill_opacity=1)).add_to(m)
elif venue.venue_type == 2:
coordinates =(venue.latitude, venue.longitude)
folium.Marker(coordinates,popup=popup,icon=folium.Icon(color='red',icon='glass-cheers',prefix='fa')).add_to(m)
elif venue.venue_type == 3:
coordinates =(venue.latitude, venue.longitude)
folium.Marker(coordinates,popup=popup,icon=folium.Icon(color='blue',icon='coffee',prefix='fa')).add_to(m)
elif venue.venue_type == 4:
coordinates =(venue.latitude, venue.longitude)
folium.Marker(coordinates,popup=popup,icon=folium.Icon(color='orange',icon='bread-slice',prefix='fa')).add_to(m)
elif venue.venue_type == 5:
coordinates =(venue.latitude, venue.longitude)
folium.Marker(coordinates,popup=popup,icon=folium.Icon(color='green',icon='shop',prefix='fa')).add_to(m)
context = {'icon_location_json':icon_location_json,'venue_markers':venue_markers,'map_access':m_access._repr_html_,'map':m._repr_html_}
return render(request,'template.html',context)
</code></pre>
<p><strong>logs</strong></p>
<p>the /None is (I think) due to a toggle switch. Dont expect it to have anything to do with the marker.</p>
<pre><code>[24/Apr/2023 22:21:00] "GET /list_venues/ HTTP/1.1" 200 55142
Not Found: /list_venues/None
[24/Apr/2023 22:21:00] "GET /list_venues/None HTTP/1.1" 404 16305
</code></pre>
|
<python><leaflet><folium>
|
2023-04-24 16:53:14
| 0
| 415
|
PhilM
|
76,094,147
| 178,757
|
What's the encoding used in the output of Linux commands like find, accessed from Python?
|
<p>Python provides a <code>subprocess</code> import that allows fine-grained control of processes, but when I'm creating a process in Unix such as <code>find</code>, what's the encoding of the output of these standard Gnu commands?</p>
<pre><code>import subprocess
myProcess = subprocess.Popen(shlex.split('find ./dir -mindepth 1 -maxdepth 1 -type f -mtime +14'), stdout=subprocess.PIPE, stderr=subprocess.PIPE)
outputs = myProcess.communicate()
myStdout = outputs[0].read()
myStderr = outputs[1].read()
print(myStdout.decode('utf-8/ascii')) # ???
</code></pre>
<p>I'm guessing you can get away with either decoding but officially, how am I supposed to interpret the output of all the standard Unix commands that "output text to the console"?</p>
|
<python><unix><encoding><subprocess>
|
2023-04-24 16:38:37
| 2
| 30,459
|
Jez
|
76,094,113
| 2,924,546
|
package with pip install does install it but same package with setup.py does not get installed
|
<p>I am trying to install some packages using <em>pip</em>, which is working fine but if I try to install it using <code>setup.py</code> then it does not install it.</p>
<p>For example:</p>
<pre class="lang-none prettyprint-override"><code>pip install ruamel-yaml-clib==0.2.7
</code></pre>
<p>does install the package.</p>
<p>But having the same package in <code>setup.py</code>, for example:</p>
<pre class="lang-py prettyprint-override"><code>install_requires=['ruamel-yaml-clib==0.2.7']
</code></pre>
<p>and running:</p>
<pre class="lang-none prettyprint-override"><code>python setup.py install
</code></pre>
<p>throws below error:</p>
<pre class="lang-none prettyprint-override"><code>Could not find suitable distribution for Requirement.parse('ruamel-yaml-clib==0.2.7')
</code></pre>
<p>I tried to search the difference but did not found anything.</p>
<p>I am using <em>Python 3.8</em>.</p>
|
<python><pip><setuptools><setup.py>
|
2023-04-24 16:34:35
| 1
| 2,048
|
Sanjay
|
76,094,097
| 347,298
|
Can I mock a constant string value that is referenced in the class under test?
|
<p>I have class <em>A</em> in module <em>a.py</em>, which has a method <em>do_thing</em>. The method <em>do_thing</em> uses a constant definition <em>CONST_VAL</em>, defined in <em>a_definitions.py</em> as 'Some String'. <em>a_definitions</em> is imported into <em>a.py</em>.</p>
<p>I have a unit test that instantiates an <em>A</em>, then calls <em>do_thing</em>. I want <em>CONST_VAL</em> to contain a different string. I've tried many different approaches to '@patch' and '@patch.object', and I always receive some version of "CONST_VAL not defined".</p>
<p>I appear to have a scoping issue, but I've begun to wonder whether this is not even possible, since the value of <em>CONST_VAL</em> is imported before the mocking is called.</p>
<p>Can I mock an externally defined string value, and if so, how?</p>
|
<python><python-unittest>
|
2023-04-24 16:31:39
| 1
| 364
|
Dana
|
76,093,956
| 4,612,370
|
Calling a python argparse interface from python without Subprocess
|
<p>Consider <strong>the following toy python application</strong>, which only have argparse CLI argparse interface.</p>
<pre><code>import argparse
def main():
parser = argparse.ArgumentParser(description="Printer")
parser.add_argument("message")
args = parser.parse_args()
print(args.message)
if __name__ == "__main__":
main()
</code></pre>
<p>I want to use it <strong>from another python script</strong>,</p>
<ul>
<li>without having to refactor the toy example (because it might be laborious, or maybe I do not have permission to modify it)</li>
<li>without calling a python <code>Subprocess</code> (because it breaks the call stack, sometimes break pycharm debugger, and make debugging harder in general).</li>
</ul>
<p>Any way to achieve this ?</p>
|
<python><command-line-interface><argparse>
|
2023-04-24 16:14:56
| 1
| 838
|
n0tis
|
76,093,885
| 6,722,075
|
'mpremote' is not recognized as an internal or external command
|
<p>I installed python and mpr by using it as well. But now after installing mpr <code>pip install --user mpr</code> and using mpr command I got following error in windows.</p>
<pre class="lang-none prettyprint-override"><code>c:\>mpr version
'mpremote' is not recognized as an internal or external command
</code></pre>
<p><a href="https://i.sstatic.net/rgLzn.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/rgLzn.png" alt="enter image description here" /></a></p>
|
<python><python-3.x><micropython>
|
2023-04-24 16:06:10
| 1
| 2,544
|
Tohid Makari
|
76,093,883
| 172,277
|
Configuring an asynciohttp_retry client and using later
|
<p>I have a simple sample code aroung <code>asynchttp_retry</code>.</p>
<pre class="lang-py prettyprint-override"><code>import aiohttp
import aiohttp_retry
from aiohttp_retry import RetryClient
import asyncio
retry_statuses = [500, 404]
async def run_http_client():
retry_options = \
aiohttp_retry.ExponentialRetry(attempts=3, statuses=retry_statuses)
async with aiohttp.ClientSession() as session:
async with RetryClient(session, retry_options=retry_options) as client:
response = await client.get('https://dummyjson.com/products/20')
print(await response.text())
asyncio.run(run_http_client())
</code></pre>
<p>It works. However I would like to be able to split the client instantiation from the request itself.</p>
<pre class="lang-py prettyprint-override"><code>import aiohttp
import aiohttp_retry
from aiohttp_retry import RetryClient
import asyncio
retry_statuses = [500, 404]
async def get_retrying_http_client():
retry_options = \
aiohttp_retry.ExponentialRetry(attempts=3, statuses=retry_statuses)
async with aiohttp.ClientSession() as session:
async with RetryClient(session, retry_options=retry_options) as client:
return client
async def main():
client = get_retrying_http_client()
response = client.get('https://dummyjson.com/products/20')
text = (await response).text()
print(text)
asyncio.run(main())
</code></pre>
<p>... I get a <code>coroutine 'get_retrying_http_client' was never awaited</code>. If I try to await it, I get <code>Session is closed</code>.</p>
<p>I welcome any help as well as some explanations.</p>
<p>I am familiar with promises in JavaScript.</p>
<p>I have used the <code>threading</code> library in the past, and I have to admit that I feel that asyncio is just a more painful way to do the same thing.</p>
|
<python><python-asyncio>
|
2023-04-24 16:05:43
| 1
| 7,591
|
AsTeR
|
76,093,781
| 14,729,041
|
Import problems in FastAPI-based backend with Docker
|
<p>I am running a FastAPI-based backend and I am running into the following error:</p>
<pre><code>File "/app/main.py", line 6, in <module>
backend-backend-1 | from app.api.api_v1.api import api_router
backend-backend-1 | ModuleNotFoundError: No module named 'app'
</code></pre>
<p>This is an import error. The file structure is like:</p>
<pre><code>--backend
|---app
|------models
|------schemas
|------etc
|------main.py
</code></pre>
<p>My dockerfile for the backend is like:</p>
<pre><code>FROM python:3.9
ENV PYTHONUNBUFFERED=1
COPY requirements.txt /tmp/requirements.txt
RUN pip install --no-cache-dir -r /tmp/requirements.txt
COPY ./app /app
WORKDIR /app/
ENV PYTHONPATH=/app
EXPOSE 8000
CMD ["uvicorn", "main:app", "--host", "0.0.0.0", "--port", "8000"]
</code></pre>
<p>When I print the <code>sys.path</code> I get the following output:</p>
<p>['', '/usr/local/bin', '/app', '/usr/local/lib/python39.zip', '/usr/local/lib/python3.9', '/usr/local/lib/python3.9/lib-dynload', '/usr/local/lib/python3.9/site-packages']</p>
<p><code>/app</code>is there. Do you have any idea what may be causing the problem? Thanks in advance for any help you can provide.</p>
|
<python><docker><docker-compose><python-import><fastapi>
|
2023-04-24 15:52:15
| 0
| 443
|
AfonsoSalgadoSousa
|
76,093,727
| 10,459,366
|
Search strings in dataframe column for specific pattern and update with another column
|
<p>Let's say I have the following sample dataframe:</p>
<pre><code># Create DataFrame from Dict
technologies = {
'Val':[5, 9],
'Stuff':["[demography_form][1]<div></table<text-align>[demography_form_date][1]", "<text-ali>[geography_form][1]<div></table<text-align>[geography_form_date][1]"]
}
df = pd.DataFrame(technologies)
</code></pre>
<p>How can I quickly replace the integers within the square brackets of the "Stuff" column with "Val"?</p>
<p>For example, my desired output will be:</p>
<pre><code>technologies = {
'Val':[5, 9],
'Stuff':["[demography_form][5]<div></table<text-align>[demography_form_date][5]", "<text-ali>[geography_form][9]<div></table<text-align>[geography_form_date][9]"]
}
df = pd.DataFrame(technologies)
</code></pre>
<p>The number of each "[int]" may change in subsequent rows and the position is not consistent. Thanks for your help.</p>
|
<python><pandas><string><dataframe>
|
2023-04-24 15:47:07
| 1
| 878
|
Andrea
|
76,093,721
| 7,598,774
|
EC.element_to_be_clickable condition executes successfully, however in the following line same element's click fails
|
<p><strong>Query:</strong> As mentioned in the title, if below line has successfully executed:</p>
<pre class="lang-py prettyprint-override"><code>element = wait.until(EC.element_to_be_clickable((By.XPATH, "//a[text()='See all teams']")))
</code></pre>
<p>Why the below line throws <code>Element is not clickable at point</code> exception?</p>
<pre class="lang-py prettyprint-override"><code>element.click()
</code></pre>
<p><strong>Code:</strong></p>
<pre class="lang-py prettyprint-override"><code>driver = webdriver.Chrome()
driver.get("https://useinsider.com/careers/")
driver.maximize_window()
wait = WebDriverWait(driver, 20)
# Accept cookies
wait.until(EC.element_to_be_clickable((By.CSS_SELECTOR, '#wt-cli-accept-all-btn'))).click()
# Desired element is captured and printed successfully
element = wait.until(EC.element_to_be_clickable((By.XPATH, "//a[text()='See all teams']")))
print("The desired element's text:" + element.text)
# below line throws Element not clickable exception
element.click()
</code></pre>
<p><strong>Console output/Error trace:</strong></p>
<pre><code>The desired element's text:See all teams
Traceback (most recent call last):
File "C:\Users\username\PycharmProjects\pythonProject3\test1.py", line 19, in <module>
element.click()
File "C:\Users\username\PycharmProjects\pythonProject3\venv\Lib\site-packages\selenium\webdriver\remote\webelement.py", line 93, in click
self._execute(Command.CLICK_ELEMENT)
File "C:\Users\username\PycharmProjects\pythonProject3\venv\Lib\site-packages\selenium\webdriver\remote\webelement.py", line 403, in _execute
return self._parent.execute(command, params)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\username\PycharmProjects\pythonProject3\venv\Lib\site-packages\selenium\webdriver\remote\webdriver.py", line 440, in execute
self.error_handler.check_response(response)
File "C:\Users\username\PycharmProjects\pythonProject3\venv\Lib\site-packages\selenium\webdriver\remote\errorhandler.py", line 245, in check_response
raise exception_class(message, screen, stacktrace)
selenium.common.exceptions.ElementClickInterceptedException: Message: element click intercepted: Element is not clickable at point (759, 3145)
(Session info: chrome=112.0.5615.138)
</code></pre>
<p><strong>Note:</strong> Clicking desired element using JavaScript or Keyboard events both work.</p>
<pre><code># Below both ways manage to click successfully
driver.execute_script("arguments[0].click();", element)
element.send_keys(Keys.ENTER)
</code></pre>
<p>Why <code>element.click()</code> doesn't work? Hasn't <code>element_to_be_clickable</code> condition already checked that the desired element is clickable or not, and hence <code>element.click()</code> should work?</p>
|
<python><selenium-webdriver><seleniumwaits><element-to-be-clickable>
|
2023-04-24 15:46:20
| 1
| 9,177
|
Shawn
|
76,093,692
| 3,417,379
|
postgresql15-contrib installation on Amazon Linux 2 fails on Python shared lib dependency
|
<p>I'm using an EC2 on AWS running Amazon Linux 2. I'm trying to install Postgresql version 15 server along with the contrib libraries for various extension.</p>
<p>This is how I installed Posgres15</p>
<pre><code>sudo rpm --import https://yum.postgresql.org/RPM-GPG-KEY-PGDG-15
sudo yum update -y
sudo yum install -y postgresql15-server postgresql15-contrib
</code></pre>
<p>PostgreSQL server installs correctly but fails during the contrib package install due to a dependency on Python. I am able to install just the server and right it fine via <code>sudo yum install -y postgresql15-server</code>.</p>
<p>Here is the error I get when installing postgresql15-contrib:</p>
<pre><code>[ec2-user@ip-172-31-51-199 ~]$ sudo yum install -y postgresql15-contrib
Loaded plugins: dkms-build-requires, extras_suggestions, langpacks, priorities, update-motd
225 packages excluded due to repository priority protections
Resolving Dependencies
--> Running transaction check
---> Package postgresql15-contrib.x86_64 0:15.2-1PGDG.rhel7 will be installed
--> Processing Dependency: libxslt.so.1(LIBXML2_1.0.22)(64bit) for package: postgresql15-contrib-15.2-1PGDG.rhel7.x86_64
--> Processing Dependency: libxslt.so.1(LIBXML2_1.0.18)(64bit) for package: postgresql15-contrib-15.2-1PGDG.rhel7.x86_64
--> Processing Dependency: libxslt.so.1(LIBXML2_1.0.11)(64bit) for package: postgresql15-contrib-15.2-1PGDG.rhel7.x86_64
--> Processing Dependency: libxslt.so.1()(64bit) for package: postgresql15-contrib-15.2-1PGDG.rhel7.x86_64
--> Processing Dependency: libpython3.6m.so.1.0()(64bit) for package: postgresql15-contrib-15.2-1PGDG.rhel7.x86_64
--> Running transaction check
---> Package libxslt.x86_64 0:1.1.28-6.amzn2 will be installed
---> Package postgresql15-contrib.x86_64 0:15.2-1PGDG.rhel7 will be installed
--> Processing Dependency: libpython3.6m.so.1.0()(64bit) for package: postgresql15-contrib-15.2-1PGDG.rhel7.x86_64
--> Finished Dependency Resolution
Error: Package: postgresql15-contrib-15.2-1PGDG.rhel7.x86_64 (pgdg15)
Requires: libpython3.6m.so.1.0()(64bit)
Available: python3-libs-3.6.2-3.amzn2.0.2.x86_64 (amzn2extra-python3)
libpython3.6m.so.1.0()(64bit)
Available: python3-libs-3.6.2-3.amzn2.0.3.x86_64 (amzn2extra-python3)
libpython3.6m.so.1.0()(64bit)
Installed: python3-libs-3.7.16-1.amzn2.0.2.x86_64 (@amzn2-core)
~libpython3.7m.so.1.0()(64bit)
Available: python3-libs-3.7.0-0.12.b2.amzn2.0.2.x86_64 (amzn2-core)
~libpython3.7m.so.1.0()(64bit)
Available: python3-libs-3.7.0-0.16.b3.amzn2.0.1.x86_64 (amzn2-core)
~libpython3.7m.so.1.0()(64bit)
Available: python3-libs-3.7.0-0.20.rc1.amzn2.0.1.i686 (amzn2-core)
Not found
Available: python3-libs-3.7.0-0.20.rc1.amzn2.0.2.i686 (amzn2-core)
Not found
Available: python3-libs-3.7.1-9.amzn2.0.1.i686 (amzn2-core)
Not found
Available: python3-libs-3.7.2-4.amzn2.0.1.i686 (amzn2-core)
Not found
Available: python3-libs-3.7.3-1.amzn2.0.1.i686 (amzn2-core)
Not found
Available: python3-libs-3.7.3-1.amzn2.0.2.i686 (amzn2-core)
Not found
Available: python3-libs-3.7.4-1.amzn2.0.1.i686 (amzn2-core)
Not found
Available: python3-libs-3.7.4-1.amzn2.0.2.i686 (amzn2-core)
Not found
Available: python3-libs-3.7.4-1.amzn2.0.3.i686 (amzn2-core)
Not found
Available: python3-libs-3.7.4-1.amzn2.0.4.i686 (amzn2-core)
Not found
Available: python3-libs-3.7.6-1.amzn2.0.1.i686 (amzn2-core)
Not found
Available: python3-libs-3.7.8-1.amzn2.0.1.i686 (amzn2-core)
Not found
Available: python3-libs-3.7.9-1.amzn2.0.1.i686 (amzn2-core)
Not found
Available: python3-libs-3.7.9-1.amzn2.0.2.i686 (amzn2-core)
Not found
Available: python3-libs-3.7.9-1.amzn2.0.3.i686 (amzn2-core)
Not found
Available: python3-libs-3.7.10-1.amzn2.0.1.i686 (amzn2-core)
Not found
Available: python3-libs-3.7.15-1.amzn2.0.1.i686 (amzn2-core)
Not found
Available: python3-libs-3.7.15-1.amzn2.0.2.i686 (amzn2-core)
Not found
Available: python3-libs-3.7.16-1.amzn2.0.1.i686 (amzn2-core)
Not found
You could try using --skip-broken to work around the problem
You could try running: rpm -Va --nofiles --nodigest
[ec2-user@ip-172-31-51-199 ~]$
</code></pre>
<p>Python is definitely installed on the server and I can verify the version like this:</p>
<pre><code>[ec2-user@ip-172-31-51-199 ~]$ python3 -V
Python 3.7.16
[ec2-user@ip-172-31-51-199 ~]$
</code></pre>
<p>And here is what I see when looking for the Python shared lib:</p>
<pre><code>[ec2-user@ip-172-31-51-199 ~]$ sudo locate libpython
/home/ec2-user/timp/Python-3.6.12/libpython3.6m.a
/home/ec2-user/timp/Python-3.6.12/Tools/gdb/libpython.py
/usr/lib64/libpython2.7.so
/usr/lib64/libpython2.7.so.1.0
/usr/lib64/libpython3.7m.so.1.0
/usr/lib64/libpython3.8.so.1.0
/usr/lib64/libpython3.so
/usr/lib64/python2.7/config/libpython2.7.so
/usr/local/lib/libpython3.6m.a
/usr/local/lib/python3.6/config-3.6m-x86_64-linux-gnu/libpython3.6m.a
/usr/share/systemtap/tapset/libpython2.7-64.stp
[ec2-user@ip-172-31-51-199 ~]$
</code></pre>
<p>I've tried multiple ways of installing python, even version 3.6 specifically. I also compile python 3.6 from source and did a <code>make install</code>.</p>
<p>What am I missing? Why wont the postgresql15-contrib install find my python shared lib?</p>
|
<python><linux><postgresql><postgresql-15>
|
2023-04-24 15:43:46
| 0
| 562
|
maxTrialfire
|
76,093,567
| 8,512,262
|
Error with win32event.OpenEvent when trying to get a handle to a named Windows event
|
<p>I've created a Windows service in Python that will launch my main application (an executable built in Python) after some inactivity timeout. That service uses <code>win32event</code> to set up a synchronization event for communication between itself and my main application. The event named <code>'EXIT_EVENT'</code> is created when the service is started, and is ultimately signaled by my main application when it exits so as to tell the service "hey the main app isn't open any more, restart the inactivity timer."</p>
<p>When I run my service in debug mode via the <code>ServiceFramework</code> CLI using <code>py helperservice.py debug</code>, everything works as expected.</p>
<p>But when I use Services.msc to "Start" the service normally, my main app throws an exception while trying to access the synchronization event.</p>
<pre><code># helperservice.py
import win32event
exit_event = win32event.CreateEvent(
None, # no security flags
True, # make this a resettable event
False, # initialize to False
'EXIT_EVENT' # name
)
</code></pre>
<pre><code># app.py
import win32event
try:
exit_event_hdl = win32event.OpenEvent(
win32event.EVENT_ALL_ACCESS, # permissions
True, # bInheritHandle flag (not sure I need this)
'EXIT_EVENT',
)
except win32event.error as err:
print(err)
</code></pre>
<p>When my application tries to access the named event as shown, it fails with the following error:</p>
<pre><code>pywintypes.error: (2, 'OpenEvent', 'The system cannot find the file specified.')
</code></pre>
<p>I expect this error to occur if the service isn't running when the main app is started, but the service is <em>definitely</em> started before the main app in this case.</p>
<p>It's as if the named event isn't created when running the service "normally", but it <em>is</em> created when running in <code>debug</code> mode. Why is this event interaction broken by the normal use case? As far as I can tell, the issue lies in the call to <code>OpenEvent</code>, because I've verified that the call to <code>CreateEvent</code> is returning a <code>PyHANDLE</code> object as expected.</p>
<p>Any insight / information is much appreciated, as ever.</p>
<hr>
<p><em><strong>Additional Info That Might Be Relevant:</strong></em></p>
<p>My main application is built as a single-file executable via pyinstaller, and the service opens it after an inactivity timeout by calling <code>subprocess.Popen('<path>/<to>/app.exe')</code>. This part is working - the application is launched, but fails with the exception above shortly thereafter.</p>
<hr>
<p><em><strong>EDIT:</strong></em></p>
<p><strike>This appears to be a permissions error masquerading as a file error! When running in debug mode via the CLI, I'm running an admin prompt. But my application itself is running with general privileges! I just tried running <code>py helperservice.py debug</code> from a non-elevated prompt and got the same error, plus a much more useful <code>pywintypes.error: (5, 'RegOpenKeyEx', 'Access is denied.')</code> error.</p>
<p>I suppose now the question is: <strong>how, if at all possible, can I signal from my app to my service without the app running with elevated privileges</strong> (which I can't do - it's a requirement for this app to be run under general user accounts).</strike></p>
<p>Further testing has revealed that...this ain't it, chief. I just tried running my application as Administrator and I'm <em>still</em> getting the same exception when it starts up.</p>
<p>Any ideas are welcome!</p>
|
<python><windows><events><service><pywin32>
|
2023-04-24 15:29:21
| 0
| 7,190
|
JRiggles
|
76,093,477
| 3,398,324
|
Rearrange Dataframe
|
<p>I would like to rearrange my df from this:</p>
<pre><code>data = {'date': ['1/1/2022', '1/2/2022','1/3/2022'], 'ticker1': [11, 21, 31], 'ticker2': [12, 22, 32], 'ticker3': [13, 23, 33]}
df = pd.DataFrame(data)
</code></pre>
<p>to this (where the dates still correspond to the correct rows):</p>
<pre><code>data = {'date': ['1/1/2022', '1/1/2022', '1/1/2022', '1/2/2022', '1/2/2022', '1/2/2022','1/3/2022', '1/3/2022', '1/3/2022'], 'ticker': ['ticker1', 'ticker1', 'ticker1', 'ticker2', 'ticker2', 'ticker2', 'ticker3', 'ticker3', 'ticker3'], 'price': [11, 21, 31, 12, 22, 32, 13, 23, 33]}
df = pd.DataFrame(data)
</code></pre>
<p>Not sure what the best way is, transpose? Pivot? Stack?</p>
|
<python><pandas><stack>
|
2023-04-24 15:20:14
| 1
| 1,051
|
Tartaglia
|
76,093,365
| 353,337
|
Read Python XML with tag and text in one element
|
<p>I have the XML file</p>
<pre class="lang-xml prettyprint-override"><code><?xml version="1.0" encoding="UTF-8"?>
<document>
<title>
<tag>A</tag>
X
</title>
</document>
</code></pre>
<p>and I'd like to read it with Python such that I could reconstruct the file exactly. For example, I need to retain <code>A</code> as a <code><tag></code>, and <code>X</code> as text.</p>
<p>The default XML implementation seems to have problems with the combination of <code><tag></code> and <code>text</code> in the <code><title></code> element. <code>itertext()</code> doesn't retain <code>A</code> as a <code><tag></code>, and the regular iteration doesn't capture <code>X</code> at all:</p>
<pre class="lang-py prettyprint-override"><code>import xml.etree.ElementTree as ET
tree = ET.parse("a.xml")
r = tree.getroot()
title = r[0]
print(list(title.itertext()))
print([c for c in title])
print(repr(title.text))
</code></pre>
<pre class="lang-py prettyprint-override"><code>['\n ', 'A', '\n X\n ']
[<Element 'tag' at 0x7fc11c130c20>]
'\n '
</code></pre>
<p>Any hints?</p>
|
<python><xml>
|
2023-04-24 15:08:53
| 4
| 59,565
|
Nico SchlΓΆmer
|
76,093,277
| 127,320
|
How to create a ParquetFile object from a Table (or Dataset) in Pyarrrow
|
<p>I have access to a <code>Dataset</code> or <code>Table</code></p>
<pre><code>ds = pyarrow.dataset.dataset(PARQUET_FILE_PATH, filesystem=fs)
table = ds.to_table()
</code></pre>
<p>I need to get the parquet metadata, which is only available through <code>ParquetFile</code>. Tried the following but failed with an error: <code>AttributeError: type object 'ParquetFile' has no attribute 'from_table'</code></p>
<pre><code>pq_file_from_table = pyarrow.parquet.ParquetFile.from_table(table)
file_meta: FileMetaData = pq_file_from_table.metadata
</code></pre>
|
<python><parquet><pyarrow>
|
2023-04-24 14:59:53
| 0
| 80,467
|
Aravind Yarram
|
76,093,259
| 1,063,647
|
Qt - break long running GUI-update / event handling, e.g. by ESC key
|
<p>My Qt-Application (currently on PySide2 (Qt 5.15.6)) uses QTreeViews and appropriate Models to show big hierarchical structures, some of them really deep and/or containing reference loops. For displaying only nodes that match a given QRegExp I derived <code>QSortFilterProxyModel</code> and reimplemented <code>filterAcceptsRow</code> to decide if a node should be displayed. In case a node doesn't match the RegExp, I walk down recursively to check if any of its' descendants match... more or less that's shown in the code below.</p>
<pre class="lang-py prettyprint-override"><code>def filterAcceptsRow( self, row, parent ):
def _match( index ):
return regExp.exactMatch( pathOf(index) )
def _check( *indices ):
return any( _match(idx) for idx in indices ) or \
any( _walk(idx) for idx in indices )
def _walk( index ):# . o O (how to determine that break was requested?)
if _breakRequested(): return False
else : return any( _check(idx) for idx in childrenOf(index) )
return _check( indexOf( row, col, parent )
</code></pre>
<p>This works usually quite well and (with some optimization) fast. However, in case of a deep tree structure and a careless user input the recursive descent might take a while what is leading to a blocked GUI.</p>
<p><strong>So long story short:</strong> Is there any way of aborting an intensive GUI update (like here filtering nodes) by e.g. pressing ESC?<br />
Here some thoughts and/or what I tried so far...</p>
<ul>
<li>Since <code>filterAcceptsRow</code> is called from Qt for updating the GUI I guess I can't really shift it to some worker thread, right? Well, I tried and it ended fatally ... :-S</li>
<li>In the dummy code above I put a call to <code>_breakRequested()</code> where I would ask for a break request.
But how would such function look like?</li>
<li>Since we're right in processing some Event that led to our filtering the event loop is blocked and can't register any key strokes, right?</li>
<li>Calling <code>processEvents()</code> here (in order to detect key strokes) would mix the processing order of events. According to the try I gave it, bad idea :-(.</li>
</ul>
<p>I searched and tried a lot of different approaches, without luck.
Am I really the only one who wants to escape from a blocked GUI while the blocking part can't be shifted to another thread?</p>
<ul>
<li>Basically I don't need to (and must not) process the events at that point (<code>_breakRequested()</code>). I would be fine with an event look ahead by sneaking along the event queue and looking for an according key event. However this doesn't seem possible in Qt before version 6.</li>
<li>Even if it was possible, is Qt registering Events (adding them to the event queue) between subsequent calls to <code>filterAcceptsRow</code>? Is there something like <code>queueEvents()</code> without processing them?</li>
<li>While the GUI is busy, is it possible to open another (modal) Widget with its' own thread and event loop listening for key strokes?</li>
</ul>
<p>Sorry if this all is just bullshit but as you see, I got a bit stuck ... can somebody show me the light, please?</p>
|
<python><qt><event-handling><pyside2>
|
2023-04-24 14:58:36
| 1
| 394
|
Zappotek
|
76,093,153
| 5,547,553
|
How to read partitioned parquet file into polars?
|
<p>I'd like to read a partitioned parquet file into a polars dataframe.</p>
<p>In spark, it is simple:</p>
<pre><code>df = spark.read.parquet("/my/path")
</code></pre>
<p>The polars documentation says that it should work the same way:</p>
<pre><code>df = pl.read_parquet("/my/path")
</code></pre>
<p>But it gives me the error:</p>
<pre><code>raise IsADirectoryError(f"Expected a file path; {path!r} is a directory")
</code></pre>
<p>How to read this file?</p>
|
<python><dataframe><parquet><python-polars>
|
2023-04-24 14:46:08
| 4
| 1,174
|
lmocsi
|
76,093,096
| 5,114,342
|
How to assign a list without changing the address?
|
<p>I have a subroutine that is given a list which is used at another part in the program, and the subroutine is to modify that list.<br />
In particular, it is supposed to trim the list.</p>
<p>Let's say the code looks like this so far:</p>
<pre><code>class Foo:
array = [0,1,2,3,4,5]
def modify(target, length):
array = target.array
array = array[0:length]
foo = Foo()
print(foo.array)
modify(foo, 3)
print(foo.array)
</code></pre>
<p>Executing this code will produce the output</p>
<pre><code>[0, 1, 2, 3, 4, 5]
[0, 1, 2, 3, 4, 5]
</code></pre>
<p>because the line <code>array = array[0:length]</code> creates a new list and the temporary variable <code>array</code> is then simply set to point to a new address.</p>
<p>However, I'd like to find some way in which <code>modify</code> actually changes the object.<br />
Now for this code example, it would be a simple solution to add a line <code>target.array = array</code>. However, in my actual case, the actual class I am dealing with has a side effect for the setter that I want to avoid (I do something like that several times in a row, and each time would render some data which would show states in-between, and also complain about jagged data).</p>
<p>Now the thing is, I could rewrite this particular function into</p>
<pre><code>def modify(target, length):
array = target.array
while(len(array)>length):
array.pop(-1)
</code></pre>
<p>since <code>pop</code> doesn't change the address.</p>
<p>I could also write some helper method like</p>
<pre><code>def copy_into(source: list, target: list)
</code></pre>
<p>that is then called like</p>
<pre><code>copy_into(array[0:length], array)
</code></pre>
<p>This would be my workaround, but it doesn't strike me exactly as being elegant, or pythonic. And I'd like to know whether there is a standard way to assign without changing address in general.</p>
<p>Is this possible, and if so, how?</p>
|
<python>
|
2023-04-24 14:41:13
| 1
| 3,912
|
Aziuth
|
76,092,964
| 5,510,713
|
Remove pincushion lens distortion in Python
|
<p>I have the following image which is computer generated</p>
<p><a href="https://i.sstatic.net/YXMYF.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/YXMYF.png" alt="enter image description here" /></a></p>
<p>It is fed as an input to an optical experiment results in the following image:</p>
<p><a href="https://i.sstatic.net/7VYwe.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/7VYwe.jpg" alt="enter image description here" /></a></p>
<p>As you can tell the image has a double concave effect due to the lens system being used.</p>
<p>I need to be able to restore the image without distortion and compare it with the original image. I'm new to image processing and I came across two useful python packages:</p>
<p><a href="https://pypi.org/project/defisheye/" rel="nofollow noreferrer">https://pypi.org/project/defisheye/</a></p>
<p>The <code>defisheye</code> was quite straight forward for me to use (script below) but i'm not able to achieve the optimal result so far.</p>
<pre><code>from defisheye import Defisheye
dtype = 'linear'
format = 'fullframe'
fov = 11
pfov = 10
img = "input_distorted.jpg"
img_out = "input_distorted_corrected.jpg"
obj = Defisheye(img, dtype=dtype, format=format, fov=fov, pfov=pfov)
# To save image locally
obj.convert(outfile=img_out)
</code></pre>
<p>Seconly from opencv: <a href="https://docs.opencv.org/4.x/dc/dbb/tutorial_py_calibration.html" rel="nofollow noreferrer">https://docs.opencv.org/4.x/dc/dbb/tutorial_py_calibration.html</a>
The camera calibration tutorial is way out of my knowledge. If someone could assure me if that's the way to go i can start digging in deeper. Really appreciate any suggestions.</p>
|
<python><opencv><camera-calibration><fisheye>
|
2023-04-24 14:27:20
| 3
| 776
|
DhiwaTdG
|
76,092,929
| 6,387,095
|
pandas using any(1) has suddenly started giving errors?
|
<p>My code was working perfectly, I updated <code>openpyxl</code> now when I try:</p>
<pre><code>data = {'Col1': ['Charges', 'Realized P&L', 'Other Credit & Debit', 'Some Other Value'],
'Col2': [100, 200, 300, 400],
'Col3': ['True', False, 'True', 'False']}
df = pd.DataFrame(data)
# keep rows where certain charges etc are present
filtered_df = df[df.isin(["Charges", "Realized P&L", "Other Credit & Debit"]).any(1)]
</code></pre>
<p>I get the error:</p>
<pre><code>Traceback (most recent call last):
File "/Library/Developer/CommandLineTools/Library/Frameworks/Python3.framework/Versions/3.8/lib/python3.8/code.py", line 90, in runcode
exec(code, self.locals)
File "<input>", line 1, in <module>
TypeError: any() takes 1 positional argument but 2 were given
</code></pre>
<p>I tried:</p>
<pre><code>filtered_df = df[df.isin(["Charges", "Realized P&L", "Other Credit & Debit"]).any()]
# removed 1 from any
</code></pre>
<pre><code>UserWarning: Boolean Series key will be reindexed to match DataFrame index.
Traceback (most recent call last):
File "/Library/Developer/CommandLineTools/Library/Frameworks/Python3.framework/Versions/3.8/lib/python3.8/code.py", line 90, in runcode
exec(code, self.locals)
File "<input>", line 1, in <module>
File "/Volumes/coding/venv/lib/python3.8/site-packages/pandas/core/frame.py", line 3751, in __getitem__
return self._getitem_bool_array(key)
File "/Volumes/coding/venv/lib/python3.8/site-packages/pandas/core/frame.py", line 3804, in _getitem_bool_array
key = check_bool_indexer(self.index, key)
File "/Volumes/coding/venv/lib/python3.8/site-packages/pandas/core/indexing.py", line 2499, in check_bool_indexer
raise IndexingError(
pandas.errors.IndexingError: Unalignable boolean Series provided as indexer (index of the boolean Series and of the indexed object do not match).
</code></pre>
<p>Not sure what happened or why it suddenly stopped working.</p>
|
<python><python-3.x><pandas>
|
2023-04-24 14:23:29
| 2
| 4,075
|
Sid
|
76,092,904
| 14,986,784
|
Use deep learning models from other containers
|
<h2>What I want to do</h2>
<p>Let's say I want to use ModelA and ModelB. Each model's environment is inside a container Container1 and Container2. Those environment are conflicting, I cannot merge them.</p>
<p>I would like to use both models in the same Python script, something like:</p>
<pre class="lang-py prettyprint-override"><code>from envA import modelA
from envB import modelB
# ... boilerplate code
outputA = modelA(dataA)
outputB = modelB(dataB)
# perform processing with outputA AND outputB
</code></pre>
<p>I would like to know if there is a <em>simple</em> way to perform this with Docker or any container techno, please?</p>
<p><strong>DISCLAIMER</strong>: I am not doing production business DL. I can see that a solution could have some problems with scaling (or other stuff). In my context, I want to run this on a cluster to do some experiments, that's it.</p>
<h3>More about conflicting environments</h3>
<p>Namely, in one of the environment I need a specific version of CUDA to compile some components. If I take a version lower then my GPU is not supported. If I take a version above, then it does not compile (or there is a linking problem, I do not remember well).</p>
<p>While the other environment is running Python 3.10.9 and its base code (and libs) depend on it. So I cannot use a compatible Python version to run with the first environment.</p>
<p>Well, I guess I can work this out if I edit the base codes of modelA and modelB. Or if I tweak their environments. But I would like to don't spend much time on it. Plus, I find the idea of using containerA and containerB much more elegant.</p>
<h2>What I could think of</h2>
<p>I did not manage to find something useful online. There are a bunch of beginner Docker tutorial online, some of great BTW. But, I did not find something like that. Also, my knowledge in Docker is fairly limited.</p>
<p>For more context, I am doing research in Deep Learning and I conducted experiments with 2 models. Those models have very different requirements and one of them has ridiculous painful requirements. Now, I would like to perform experiments with both models, I will need both outputs.</p>
<p>So I am asking myself that would be super handy to just "use" my former containers from another one container (or on my host machine). In this manner, it means I can just set up each environment and have my containers working properly.</p>
<p>I guess the way is doing something with volume or having a volume on Docker socket?
Or, I guess in industry this situation should be common, maybe there is a soft for interfacing DL models so that I can import them?</p>
<p>SO suggested <a href="https://stackoverflow.com/questions/53745482/serving-multiple-deep-learning-models-from-cluster">this post</a> but I do not use TensorFlow and, in my case, there is conflicting version of environments such as shared library (cudnn or cuda) conflict.</p>
<h3>One working solution</h3>
<p>On <code>EnvA</code> (and <code>EnvB</code>) I can store the outputs of my model in a dictionary with ID of the inputs as key and a torch tensor as value. Then I can output the dictionary in a <code>.pkl</code> file. So, in a new environment, I can open both <code>.pkl</code> files to have <code>outputA</code> and <code>outputB</code>. It's not exactly what I ask but it would work fine.</p>
|
<python><docker><deep-learning><development-environment>
|
2023-04-24 14:21:23
| 0
| 474
|
MufasaChan
|
76,092,877
| 7,230,328
|
python 3 regex to contain the create table statement from sql for azure databricks sql
|
<p>I have this oracle SQL Statement. I need a regex that captures only the create table statement with the constraints of primary and foreign keys.</p>
<pre><code>CREATE TABLE "OWB_RUN"."TOKEN_CARD_STATUS_HISTORY"
( "TOKEN_CARD_WH" NUMBER NOT NULL ENABLE,
"TOKEN_STATUS" VARCHAR2(1) NOT NULL ENABLE,
"TOKEN_STATUS_DATE" DATE NOT NULL ENABLE,
"LOAD_DATE" DATE,
CONSTRAINT "TOKEN_CARD_STATUS_PK" PRIMARY KEY ("TOKEN_CARD_WH", "TOKEN_STATUS", "TOKEN_STATUS_DATE")
USING INDEX PCTFREE 10 INITRANS 2 MAXTRANS 255 COMPUTE STATISTICS
STORAGE(INITIAL 1048576 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645
PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1
BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT)
TABLESPACE "OWB_RUN" ENABLE,
CONSTRAINT "TOKEN_CARD_STATUS_FK" FOREIGN KEY ("TOKEN_CARD_WH")
REFERENCES "OWB_RUN"."TOKEN_CARD" ("TOKEN_CARD_WH") ENABLE
) PCTFREE 10 PCTUSED 40 INITRANS 1 MAXTRANS 255 NOLOGGING
STORAGE(
BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT)
TABLESPACE "CARDS_CRD_FACT_TBL"
PARTITION BY RANGE ("LOAD_DATE")
(PARTITION "Y2021_Q1_M03" VALUES LESS THAN (TO_DATE(' 2021-04-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN')) SEGMENT CREATION IMMEDIATE
PCTFREE 10 PCTUSED 40 INITRANS 1 MAXTRANS 255
NOCOMPRESS NOLOGGING
</code></pre>
<p>The following regex i tried:</p>
<pre><code>pattern = re.compile(r'CREATE\s+TABLE\s+.+?\n\)', re.DOTALL)
create_table_statement = pattern.findall(ddl)[0]
print(create_table_statement)
</code></pre>
<p>received the following output:</p>
<pre><code>CREATE TABLE "OWB_RUN"."TOKEN_CARD_STATUS_HISTORY"
( "TOKEN_CARD_WH" NUMBER NOT NULL ENABLE,
"TOKEN_STATUS" VARCHAR2(1)
</code></pre>
<p>with a regex that caputers all teh create table () i would be OK</p>
|
<python><sql><python-3.x><regex>
|
2023-04-24 14:18:41
| 2
| 413
|
KRStam
|
76,092,767
| 5,048,010
|
Install black[d] from conda
|
<p>I want to run black as a code formatter in PyCharm. The black website gives specific instructions on how to do so, which is very helpful: <a href="https://black.readthedocs.io/en/stable/integrations/editors.html" rel="nofollow noreferrer">https://black.readthedocs.io/en/stable/integrations/editors.html</a> . However, it uses pip to install <code>black[d]</code> while I would like to use conda instead to take advantage of the dependency solver and to have a leaner process.</p>
<p>If I try to simply add <code>black</code>, <code>black[d]</code>, or <code>"black[d]"</code> to the env.yaml file for conda, it installs the simple <code>black</code> package without the additional dependencies added by the optional <code>[d]</code>. Why is this happening? Is this syntax not accepted by conda or is something missing from the conda-forge repos?</p>
|
<python><pycharm><conda><python-black>
|
2023-04-24 14:05:47
| 1
| 1,653
|
Gianluca Micchi
|
76,092,703
| 3,734,059
|
Keep only rows until elements in DateTimeIndex are consecutively continued in pandas dataframe
|
<p>I have a pandas <code>DataFrame</code> with a <code>DateTimeIndex</code> that looks as follows:</p>
<pre><code>import pandas as pd
df = pd.DataFrame(
index=pd.date_range(
start=pd.Timestamp("2023-03-20 12:00:00+0000", tz="UTC"),
end=pd.Timestamp("2023-03-20 15:00:00+0000", tz="UTC"),
freq="15Min",
),
data={
"my_column": [
2.0,
1.0,
1.0,
3.0,
3.0,
3.0,
4.0,
4.0,
4.0,
2.0,
3.0,
3.0,
1.0,
],
},
)
df = df.drop(index=["2023-03-20 13:45:00+00:00", "2023-03-20 14:45:00+00:00"])
print(df)
</code></pre>
<p>yields:</p>
<pre><code> my_column
2023-03-20 12:00:00+00:00 2.0
2023-03-20 12:15:00+00:00 1.0
2023-03-20 12:30:00+00:00 1.0
2023-03-20 12:45:00+00:00 3.0
2023-03-20 13:00:00+00:00 3.0
2023-03-20 13:15:00+00:00 3.0
2023-03-20 13:30:00+00:00 4.0
2023-03-20 14:00:00+00:00 4.0
2023-03-20 14:15:00+00:00 2.0
2023-03-20 14:30:00+00:00 3.0
2023-03-20 15:00:00+00:00 1.0
</code></pre>
<p>Now, I only want to keep the rows until the first "missing" consecutive element in the <code>DateTimeIndex</code> e. g. filter the <code>DataFrame</code> from <code>2023-03-20 12:00:00+00:00</code> to <code>2023-03-20 13:30:00+00:00</code>.</p>
<p>Is there any quick generic solution to this?</p>
<p>Thanks in advance!</p>
|
<python><pandas>
|
2023-04-24 13:58:15
| 1
| 6,977
|
Cord Kaldemeyer
|
76,092,559
| 7,256,443
|
mypy {{cookiecutter.project_slug}} is not a valid Python package name
|
<p>I am building a cookiecutter template for a python package, and I want to run a bunch of checks for the template repo itself with pre-commit.</p>
<p>A skeleton of the repo looks like this:</p>
<pre><code>my_cookiecutter_template
| .pre-commit-config.yaml
| cookiecutter.json
|
|___{{cookiecutter.project_slug}}
| pyproject.toml
|
|____{{cookiecutter.project_slug}}
| __init__.py
</code></pre>
<p>One of the pre-commit hooks I am using is mypy, but it is failing with this error:</p>
<pre><code>{{cookiecutter.project_slug}} is not a valid Python package name
</code></pre>
<p>because there is an <code>__init__.py</code> file in the directory called <code>{{cookiecutter.project_slug}}</code> (which will obviously be renamed with a valid name when the template is instantiated).</p>
<p>My question is, how can I suppress this mypy error? The mypy docs have details of lots of exceptions with a specific error code which gives you a way to suppress it. But, unless I am mistaken, there is nothing there that pertains to this specific error</p>
|
<python><mypy><cookiecutter>
|
2023-04-24 13:43:41
| 1
| 1,033
|
Ben Jeffrey
|
76,092,533
| 14,875,027
|
Class instance mutability issue
|
<p>I'm defining classes that pass arguments through index notation. To do this, I use the <code>__class_getitem__</code> method. Here is my implementation.</p>
<pre><code> def __class_getitem__(cls, parameters):
if type(parameters) != tuple:
parameters = (parameters,)
if len(parameters) > 2:
raise TypeError("Expected 2 arguments: List[sub_type, max_length]")
if len(parameters):
cls.sub_type = parameters[0]
if len(parameters) > 1:
cls.max_len = parameters[1]
if cls.max_len <= 0:
raise TypeError(f"Max Length of {cls.max_len} is less than or equal to 0")
return cls
</code></pre>
<p>This class implements <code>List</code>, which can be used as <code>List[Int, 10]</code>, denoting a list of length 10 with all ints. The issue I'm seeing is that the <code>cls</code> returned is mutable.</p>
<p>If I instantiated two instances:</p>
<pre><code>x = List[Int,1]
y = List[Int,10]
</code></pre>
<p>Both classes' attribute <code>max_len</code> will be set to 10. I tried to return an instantiated class, i.e., <code>cls ()</code>, but with the same result.</p>
<p>It makes sense that the non-instantiated instance is the same object, but if they return an instantiated class, I don't understand why they are the same. Am I missing something?</p>
|
<python><class-method>
|
2023-04-24 13:40:05
| 2
| 370
|
dvr
|
76,092,486
| 19,155,645
|
mask edges are not continuous - how to solve
|
<p>my ML model is producing a binary mask (separate file) for each object.</p>
<p>The objects are not "filled out" (that is, there are very often "holes" inside the objects).</p>
<p>The issue is that when the "holes" are too close to the edge, the object edges are not continuous, and therefore filling up the holes becomes harder.</p>
<p>Here are two examples:</p>
<p><a href="https://i.sstatic.net/bEST3.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/bEST3.png" alt="example1" /></a></p>
<p><a href="https://i.sstatic.net/cc3QN.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/cc3QN.png" alt="example2" /></a></p>
<p>My code for filling the holes is not working in these cases:</p>
<pre><code>import cv2
import numpy as np
from PIL import Image
mask = np.array(Image.open('example1.png'))
_, mask_binary = cv2.threshold(mask, 0, 255, cv2.THRESH_BINARY)
contours, _ = cv2.findContours(mask_binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
for contour in contours:
cv2.drawContours(mask_binary, [contour], 0, 255, -1)
plt.imshow(mask_binary, cmap='gray')
</code></pre>
<p>I imagine there are "morphological" ways to "close" such open spaces, but I did not figure out how. <br>
<b>Any suggestions?</b> <br>
If there is a different way to resolve my issue (that will not necessitate redoing the ML model...) I'll be happy to hear.</p>
|
<python><opencv><image-processing><computer-vision><mask>
|
2023-04-24 13:35:54
| 0
| 512
|
ArieAI
|
76,092,461
| 9,965,155
|
Why is doc.spans empty after spans are assigned to the doc object in Spacy?
|
<p>I am trying to convert a two-dimensional list with entity labels <code>(0, 1, -1)</code> into a <code>spaCy doc</code> object with <code>Spans</code> as entity labels. The 2-D list corresponds to token labels for the <code>tokens_</code> variable, which is basically a list of words. So basically, spans will have the start and end offsets for the tokens corresponding to the non-zero labels <code>(-1, 1)</code>.</p>
<p>I am trying to do it using the function <code>labelmatrix2doc</code>. However, even after assigning the spans to the doc object, it prints null lists without any spans.</p>
<pre><code>import spacy
from spacy.tokens import Doc, Span, SpanGroup
def labelmatrix2doc(tokens_, token_offsets_, label_matrix):
tokens_ = [token if token else ' ' for token in tokens_] # re-do empty tokens by adding a space
doc = Doc( nlp.vocab, words=tokens_ )
# Create a list of spaCy spans from the label matrix
for i in range(len(label_matrix)):
spans_i = []
for j in range(len(label_matrix[i])):
if label_matrix[i][j] != 0:
start = token_offsets_[j]
end = token_offsets_[j+1] if j < len(label_matrix[i])-1 else token_offsets_[j] + len(tokens_[j])
span = doc.char_span(start, end, label=f'label_{label_matrix[i][j]}')
spans_i.append(span)
if spans_i:
print( len(spans_i) )
doc.spans[f"lf_{i}"] = spans_i
print( doc.spans )
return doc
</code></pre>
<p>Use the code below to simulate the inputs for the above function <code>def labelmatrix2doc(tokens_, token_offsets_, label_matrix)</code>.</p>
<pre><code>import random
import string
# Set the size of the list
list_size = 10
# Generate a list of random English words
words_list = []
end_offsets_list = []
offset = 0
for i in range(list_size):
word = ''.join(random.choices(string.ascii_lowercase, k=random.randint(3, 7)))
words_list.append(word)
offset += len(word)
end_offsets_list.append(offset)
print(words_list)
print(end_offsets_list)
# Set the size of the array
array_size = 10
# Generate a 10x10 two-dimensional array filled with 0s, 1s, and -1s
two_d_array = []
for i in range(array_size):
row = []
for j in range(array_size):
value = random.choice([-1, 0, 1])
row.append(value)
two_d_array.append(row)
print(two_d_array)
</code></pre>
<p>Using the function on the simulated data prints null even though the spans of length 8, 8, 4, 5, 6, 7, 7, 6, 6, 9 are generated.</p>
<pre><code>labelmatrix2doc(words_list, end_offsets_list, two_d_array)
8
{'lf_0': []}
8
{'lf_0': [], 'lf_1': []}
4
{'lf_0': [], 'lf_1': [], 'lf_2': []}
5
{'lf_0': [], 'lf_1': [], 'lf_2': [], 'lf_3': []}
6
{'lf_0': [], 'lf_1': [], 'lf_2': [], 'lf_3': [], 'lf_4': []}
7
{'lf_0': [], 'lf_1': [], 'lf_2': [], 'lf_3': [], 'lf_4': [], 'lf_5': []}
7
{'lf_0': [], 'lf_1': [], 'lf_2': [], 'lf_3': [], 'lf_4': [], 'lf_5': [], 'lf_6': []}
6
{'lf_0': [], 'lf_1': [], 'lf_2': [], 'lf_3': [], 'lf_4': [], 'lf_5': [], 'lf_6': [], 'lf_7': []}
6
{'lf_0': [], 'lf_1': [], 'lf_2': [], 'lf_3': [], 'lf_4': [], 'lf_5': [], 'lf_6': [], 'lf_7': [], 'lf_8': []}
9
{'lf_0': [], 'lf_1': [], 'lf_2': [], 'lf_3': [], 'lf_4': [], 'lf_5': [], 'lf_6': [], 'lf_7': [], 'lf_8': [], 'lf_9': []}
</code></pre>
|
<python><spacy>
|
2023-04-24 13:33:16
| 1
| 2,006
|
PinkBanter
|
76,092,382
| 10,499,034
|
Is Neighbor-Joining Clustering Availalble in SciPy
|
<p>I would like to use scipy.cluster.hierarchy to perform neighbor joining on a distance matrix. However, I have been unable to locate in the documentation that this is an available option. The reason I would like to use scipy.cluster.hierarchy specifically is because I am already using it for UPGMA clustering of the same distance matrix and I would like to keep my analysis consistent and within the same code. Is this an available option of scipy.cluster.hierarchy? And if so what would I put in place of 'Average' in the code:</p>
<pre><code>#Perform UPGMA hierarchical cluster with scipy.cluster.hierarchy
outDND=average(distanceMatrix)
</code></pre>
|
<python><scipy><hierarchical-clustering>
|
2023-04-24 13:21:55
| 0
| 792
|
Jamie
|
76,092,263
| 14,923,024
|
Column- and row-wise logical operations on Polars DataFrame
|
<p>In Pandas, one can perform boolean operations on boolean DataFrames with the <code>all</code> and <code>any</code> methods, providing an <code>axis</code> argument. For example:</p>
<pre class="lang-py prettyprint-override"><code>import pandas as pd
data = dict(A=["a","b","?"], B=["d","?","f"])
pd_df = pd.DataFrame(data)
</code></pre>
<p>For example, to get a boolean mask on columns containing the element "?":</p>
<pre class="lang-py prettyprint-override"><code>(pd_df == "?").any(axis=0)
</code></pre>
<p>and to get a mask on rows:</p>
<pre class="lang-py prettyprint-override"><code>(pd_df == "?").any(axis=1)
</code></pre>
<p>Also, to get a single boolean:</p>
<pre class="lang-py prettyprint-override"><code>(pd_df == "?").any().any()
</code></pre>
<p>In comparison, in <code>polars</code>, the best I could come up with are the following:</p>
<pre class="lang-py prettyprint-override"><code>import polars as pl
pl_df = pl.DataFrame(data)
</code></pre>
<p>To get a mask on columns:</p>
<pre class="lang-py prettyprint-override"><code>(pl_df == "?").select(pl.all().any())
</code></pre>
<p>To get a mask on rows:</p>
<pre class="lang-py prettyprint-override"><code>pl_df.select(
pl.concat_list(pl.all() == "?").alias("mask")
).select(
pl.col("mask").list.eval(pl.element().any()).list.first()
)
</code></pre>
<p>And to get a single boolean value:</p>
<pre class="lang-py prettyprint-override"><code>pl_df.select(
pl.concat_list(pl.all() == "?").alias("mask")
).select(
pl.col("mask").list.eval(pl.element().any()).list.first()
)["mask"].any()
</code></pre>
<p>The last two cases seem particularly verbose and convoluted for such a straightforward task, so I'm wondering whether there are shorter/faster equivalents?</p>
|
<python><dataframe><python-polars>
|
2023-04-24 13:09:11
| 2
| 457
|
AAriam
|
76,092,151
| 1,788,771
|
Django model based on queryset rather than table
|
<p>I have a model called <code>Event</code> and a viewset called <code>UserEventViewSet</code> that defines the following method:</p>
<pre><code>def get_queryset(self):
return (
Event.objects.select_related(
'user',
).values(
'user_id',
'type'
).annotate(
count=Count("*"),
user=JSONObject(
id=F("event__user__id"),
first_name=F("event__user__first_name"),
last_name=F("event__user__last_name"),
),
)
)
</code></pre>
<p>This works but it seems less than ideal. As you can see here we are doing the work of the serializer in the viewset.</p>
<p>I think it might be better to move part of this queryset into a manager and attach it to a <code>UserEvent</code> model. Then I would hope I could define relations in the model and use a nested serializer for the user data.</p>
<p>I'm not sure how to do it, or if it is even possible.</p>
|
<python><django><django-rest-framework>
|
2023-04-24 12:56:44
| 0
| 4,107
|
kaan_atakan
|
76,091,986
| 11,616,106
|
QtWebEngine Issue when I run executable format
|
<p>I've a python Pyside6 gui, it is working when I run the code on Pycharm. But after that I want to convert executable format in my MacOS.
So I installed pyinstaller in file path file_path/var/site-packeges/
Project using 3.8 version of Python with below commands:</p>
<pre><code> pyinstaller --onefile test1.py
65 INFO: PyInstaller: 5.10.1
65 INFO: Python: 3.8.9
68 INFO: Platform: macOS-12.3.1-arm64-arm-64bit
69 INFO: wrote /Users/duygu/PycharmProjects/fiver_converting/test1.spec
70 INFO: UPX is not available.
71 INFO: Extending PYTHONPATH with paths
['/Users/duygu/PycharmProjects/fiver_converting']
204 INFO: checking Analysis
204 INFO: Building Analysis because Analysis-00.toc is non existent
204 INFO: Initializing module dependency graph...
205 INFO: Caching module graph hooks...
210 INFO: Analyzing base_library.zip ...
474 INFO: Loading module hook 'hook-heapq.py' from '/Users/duygu/PycharmProjects/fiver_converting/venv/lib/python3.8/site-packages/PyInstaller/hooks'...
607 INFO: Loading module hook 'hook-encodings.py' from '/Users/duygu/PycharmProjects/fiver_converting/venv/lib/python3.8/site-packages/PyInstaller/hooks'...
1113 INFO: Loading module hook 'hook-pickle.py' from '/Users/duygu/PycharmProjects/fiver_converting/venv/lib/python3.8/site-packages/PyInstaller/hooks'...
1660 INFO: Caching module dependency graph...
1708 INFO: running Analysis Analysis-00.toc
1712 INFO: Analyzing /Users/duygu/PycharmProjects/fiver_converting/test1.py
1717 INFO: Loading module hook 'hook-PySide6.py' from '/Users/duygu/PycharmProjects/fiver_converting/venv/lib/python3.8/site-packages/PyInstaller/hooks'...
1972 INFO: Processing module hooks...
1976 INFO: Loading module hook 'hook-PySide6.QtWebEngineWidgets.py' from '/Users/duygu/PycharmProjects/fiver_converting/venv/lib/python3.8/site-packages/PyInstaller/hooks'...
2092 INFO: Loading module hook 'hook-PySide6.QtWidgets.py' from '/Users/duygu/PycharmProjects/fiver_converting/venv/lib/python3.8/site-packages/PyInstaller/hooks'...
2199 INFO: Loading module hook 'hook-PySide6.QtQuickWidgets.py' from '/Users/duygu/PycharmProjects/fiver_converting/venv/lib/python3.8/site-packages/PyInstaller/hooks'...
2336 INFO: Loading module hook 'hook-PySide6.QtCore.py' from '/Users/duygu/PycharmProjects/fiver_converting/venv/lib/python3.8/site-packages/PyInstaller/hooks'...
2548 INFO: Loading module hook 'hook-PySide6.QtOpenGL.py' from '/Users/duygu/PycharmProjects/fiver_converting/venv/lib/python3.8/site-packages/PyInstaller/hooks'...
2656 INFO: Loading module hook 'hook-PySide6.QtGui.py' from '/Users/duygu/PycharmProjects/fiver_converting/venv/lib/python3.8/site-packages/PyInstaller/hooks'...
2785 INFO: Loading module hook 'hook-PySide6.QtDBus.py' from '/Users/duygu/PycharmProjects/fiver_converting/venv/lib/python3.8/site-packages/PyInstaller/hooks'...
2914 INFO: Loading module hook 'hook-PySide6.QtNetwork.py' from '/Users/duygu/PycharmProjects/fiver_converting/venv/lib/python3.8/site-packages/PyInstaller/hooks'...
3061 INFO: Loading module hook 'hook-PySide6.QtPrintSupport.py' from '/Users/duygu/PycharmProjects/fiver_converting/venv/lib/python3.8/site-packages/PyInstaller/hooks'...
3219 INFO: Loading module hook 'hook-PySide6.QtWebEngineCore.py' from '/Users/duygu/PycharmProjects/fiver_converting/venv/lib/python3.8/site-packages/PyInstaller/hooks'...
3513 INFO: Loading module hook 'hook-PySide6.QtQuick.py' from '/Users/duygu/PycharmProjects/fiver_converting/venv/lib/python3.8/site-packages/PyInstaller/hooks'...
3649 INFO: Loading module hook 'hook-PySide6.QtWebChannel.py' from '/Users/duygu/PycharmProjects/fiver_converting/venv/lib/python3.8/site-packages/PyInstaller/hooks'...
3823 INFO: Loading module hook 'hook-PySide6.QtPositioning.py' from '/Users/duygu/PycharmProjects/fiver_converting/venv/lib/python3.8/site-packages/PyInstaller/hooks'...
3982 WARNING: QtLibraryInfo(PySide6): could not find translations with base name 'qtlocation'! These translations will not be collected.
3982 INFO: Loading module hook 'hook-PySide6.QtQml.py' from '/Users/duygu/PycharmProjects/fiver_converting/venv/lib/python3.8/site-packages/PyInstaller/hooks'...
4160 INFO: Looking for ctypes DLLs
4162 INFO: Analyzing run-time hooks ...
4163 INFO: Including run-time hook '/Users/duygu/PycharmProjects/fiver_converting/venv/lib/python3.8/site-packages/PyInstaller/hooks/rthooks/pyi_rth_pyside6webengine.py'
4163 INFO: Including run-time hook '/Users/duygu/PycharmProjects/fiver_converting/venv/lib/python3.8/site-packages/PyInstaller/hooks/rthooks/pyi_rth_pyside6.py'
4164 INFO: Including run-time hook '/Users/duygu/PycharmProjects/fiver_converting/venv/lib/python3.8/site-packages/PyInstaller/hooks/rthooks/pyi_rth_inspect.py'
4179 INFO: Looking for dynamic libraries
129 WARNING: Cannot find path ./QtPdfQuick.framework/Versions/A/QtPdfQuick (needed by /Users/duygu/PycharmProjects/fiver_converting/venv/lib/python3.8/site-packages/PySide6/Qt/qml/QtQuick/Pdf/libpdfquickplugin.dylib)
309 WARNING: Cannot find path ./QtQuick3DSpatialAudio.framework/Versions/A/QtQuick3DSpatialAudio (needed by /Users/duygu/PycharmProjects/fiver_converting/venv/lib/python3.8/site-packages/PySide6/Qt/qml/QtQuick3D/SpatialAudio/libquick3dspatialaudioplugin.dylib)
457 WARNING: Cannot find path ./QtQuick3DHelpersImpl.framework/Versions/A/QtQuick3DHelpersImpl (needed by /Users/duygu/PycharmProjects/fiver_converting/venv/lib/python3.8/site-packages/PySide6/Qt/qml/QtQuick3D/Helpers/impl/libqtquick3dhelpersimplplugin.dylib)
459 WARNING: Cannot find path ./QtQuickEffects.framework/Versions/A/QtQuickEffects (needed by /Users/duygu/PycharmProjects/fiver_converting/venv/lib/python3.8/site-packages/PySide6/Qt/qml/QtQuick/Effects/libeffectsplugin.dylib)
5267 INFO: Looking for eggs
5267 INFO: Using Python library /Applications/Xcode.app/Contents/Developer/Library/Frameworks/Python3.framework/Versions/3.8/Resources/Python.app/Contents/MacOS/../../../../Python3
5269 INFO: Warnings written to /Users/duygu/PycharmProjects/fiver_converting/build/test1/warn-test1.txt
5275 INFO: Graph cross-reference written to /Users/duygu/PycharmProjects/fiver_converting/build/test1/xref-test1.html
5306 INFO: checking PYZ
5306 INFO: Building PYZ because PYZ-00.toc is non existent
5306 INFO: Building PYZ (ZlibArchive) /Users/duygu/PycharmProjects/fiver_converting/build/test1/PYZ-00.pyz
5398 INFO: Building PYZ (ZlibArchive) /Users/duygu/PycharmProjects/fiver_converting/build/test1/PYZ-00.pyz completed successfully.
5400 INFO: EXE target arch: arm64
5400 INFO: Code signing identity: None
5401 INFO: checking PKG
5401 INFO: Building PKG because PKG-00.toc is non existent
5401 INFO: Building PKG (CArchive) test1.pkg
8217 WARNING: Cannot find path @executable_path/../../../../Python3 (needed by /Users/duygu/PycharmProjects/fiver_converting/venv/bin/python)
58605 INFO: Building PKG (CArchive) test1.pkg completed successfully.
58629 INFO: Bootloader /Users/duygu/PycharmProjects/fiver_converting/venv/lib/python3.8/site-packages/PyInstaller/bootloader/Darwin-64bit/run
58629 INFO: checking EXE
58629 INFO: Building EXE because EXE-00.toc is non existent
58629 INFO: Building EXE from EXE-00.toc
58629 INFO: Copying bootloader EXE to /Users/duygu/PycharmProjects/fiver_converting/dist/test1
58630 INFO: Converting EXE to target arch (arm64)
58641 INFO: Removing signature(s) from EXE
58653 INFO: Appending PKG archive to EXE
59076 INFO: Fixing EXE headers for code signing
59079 WARNING: Cannot find path @executable_path/../../../../Python3 (needed by /Users/duygu/PycharmProjects/fiver_converting/venv/bin/python)
59082 INFO: Re-signing the EXE
59866 INFO: Building EXE from EXE-00.toc completed successfully.
.
</code></pre>
<p>After all these things executable file created but when I open the file I get this error:</p>
<pre><code>Sandboxing disabled by user.
The following paths were searched for Qt WebEngine resources:
/var/folders/l9/985bkq8n0kq97vpvhp4clxvw0000gn/T/_MEIhWjxll/PySide6/Qt/resources
/var/folders/l9/985bkq8n0kq97vpvhp4clxvw0000gn/T/_MEIhWjxll/PySide6/Qt
/Users/duygu/PycharmProjects/fiver_converting
/Users/duygu/.timezone_app
but could not find any.
You may override the default search paths by using QTWEBENGINE_RESOURCES_PATH environment variable.
zsh: abort /Users/duygu/PycharmProjects/fiver_converting/timezone_app
Saving session...
...copying shared history...
...saving history...truncating history files...
...completed.
</code></pre>
<p>How can I solve this issue ?</p>
<p>My MacOS system using Python 3.9 but I use Python 3.8 for this project.
<strong>EDIT:</strong>
small test code for the analysing the issue, I have same error when I convert this file to executable format</p>
<pre><code>import sys
from PySide6 import QtGui,QtCore,QtWidgets,QtWebEngineWidgets
app = QtWidgets.QApplication()
q = QtWebEngineWidgets.QWebEngineView()
q.load(QtCore.QUrl('https://www.google.com/'))
q.show()
sys.exit(app.exec())
</code></pre>
|
<python><pyinstaller>
|
2023-04-24 12:39:21
| 0
| 521
|
hobik
|
76,091,717
| 9,220,442
|
Pivot pandas dataframe by filled values
|
<p>I want to transform/pivot the following dataframe to indicate the immediate data flow from (source) to (target).:</p>
<pre class="lang-py prettyprint-override"><code> l0 l1 l2 l3 sum
0 IN TOTAL <NA> <NA> 1
1 <NA> TOTAL OUT_A OUT_B 2
2 <NA> TOTAL <NA> OUT_C 3
</code></pre>
<p>In the above example, a data flow is represented by e.g. l0 to l1 in row 0. Equivalently, l1 to l2 and l2 to l3 represent (direct) data flows in row 1, as well as l1 to l3 in row 2.</p>
<p>Expectation:</p>
<pre class="lang-py prettyprint-override"><code> source target sum
0 IN TOTAL 1
1 TOTAL OUT_A 2
2 TOTAL OUT_C 3
3 OUT_A OUT_B 2
</code></pre>
<p>For reproducibility:</p>
<pre class="lang-py prettyprint-override"><code>import pandas as pd
df = pd.DataFrame({
"l0": ['IN', pd.NA, pd.NA],
"l1": ['TOTAL','TOTAL','TOTAL'],
"l2": [pd.NA,'OUT_A', pd.NA],
"l3": [pd.NA,'OUT_B',"OUT_C"],
"sum": [1,2,3]})
pd.DataFrame({
"source": ["IN","TOTAL","TOTAL","OUT_A"],
"target": ["TOTAL","OUT_A","OUT_C","OUT_B"],
"sum": [1,2,3,2]
})
</code></pre>
|
<python><pandas>
|
2023-04-24 12:09:25
| 4
| 1,302
|
Thomas
|
76,091,636
| 2,828,006
|
Regular expression to match string starting with and numbers in it
|
<p>I have strings like :</p>
<pre><code>merge_req_title1 = "JARVIS-17442: Enable Fees report"
merge_req_title2= "Resolve JARVIS-15887 'integration new'"
</code></pre>
<p>i am using python to extract the substring out of them which is as <code>JARVIS-<number></code></p>
<p>example output for below strings will be as ;</p>
<pre><code>merge_req_title1 = "JARVIS-17442: Enable Fees report" -o/p-> "JARVIS-17442"
merge_req_title2= "Resolve JARVIS-15887 'integration new'" -o/p-> "JARVIS-15887"
</code></pre>
<p>Any idea for the regex and doing it in python script</p>
<p>below is code what i tried in</p>
<pre><code>import re
merge_req_title1 = "JARVIS-17442: Enable Fees report"
merge_req_title2= "Resolve JARVIS-15887 'integration new'"
if __name__ == '__main__':
reg_ex= "\b(JARVIS)-\d{5}\b"
searched_element = re.search(reg_ex ,merge_req_title1)
print(searched_element)
</code></pre>
<p>But i am getting output as none in</p>
|
<python><python-3.x><regex>
|
2023-04-24 12:00:39
| 2
| 1,474
|
Scientist
|
76,091,424
| 17,174,267
|
selenium: Handle "Allow this site to open the XXX link with YYY"
|
<p>How do I handle this popup with python/selenium? (Chrome and Firefox answers appreciated.)</p>
<p>A. How do I cancel it?</p>
<p>B. How would I do the open link?</p>
<p>C. Are there any Chrome Options/Firefox Profiles to ignore this popup?</p>
<p><a href="https://i.sstatic.net/mvPsZ.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/mvPsZ.png" alt="enter image description here" /></a></p>
|
<python><selenium-webdriver>
|
2023-04-24 11:32:41
| 0
| 431
|
pqzpkaot
|
76,091,366
| 14,253,961
|
Converting From .h5 model to .pt model :Convert model from keras h5 to pytorch
|
<p>How can I use a .h5 model in Pytorch file;
I worked with Keras with TensorFlow backend so here is my saved model:</p>
<pre><code>model = tf.keras.applications.ResNet50(include_top=False, weights=None, input_tensor=tf.keras.Input(shape=(224, 224, 3)), pooling=None)
....#training
model.save("mymodel.h5")
model.save_weights("saved_weights.h5")
</code></pre>
<p>Now I migrate to pytorch and I would like to use the saved model. so how can I convert it .
Thanks</p>
|
<python><pytorch><conv-neural-network><tf.keras>
|
2023-04-24 11:23:51
| 1
| 741
|
seni
|
76,091,228
| 11,579,184
|
Cannot find AwaitableTriggerDagRunOperator anymore for Airflow [Python]
|
<p>I'm working on a python project using Airflow. In the project there is no <code>requirements.txt</code> file so I simply installed the latest version of the libraries by putting their name inside a <code>requirement.txt</code> file and I've been trying to make it work.</p>
<p>The import which is causing trouble is this one:</p>
<pre class="lang-py prettyprint-override"><code>from airflow_common.operators.awaitable_trigger_dag_run_operator import (
AwaitableTriggerDagRunOperator,
)
</code></pre>
<p>Looking online for <code>AwaitableTriggerDagRunOperator</code> I cannot find any documentation about this operator, the only result that comes up is <a href="https://github.com/cherkavi/cheat-sheet/blob/master/airflow.md" rel="nofollow noreferrer">this</a> page where another person is using it and this person is importing it in the same way I am.</p>
<p>I guess that the project was developed with a very old version of Airflow and things have changed quite a bit. Here is the version that I have installed.</p>
<pre class="lang-bash prettyprint-override"><code>$ pip freeze | awk '/airflow/'
airflow-commons==0.0.67
apache-airflow==2.5.3
apache-airflow-providers-cncf-kubernetes==6.0.0
apache-airflow-providers-common-sql==1.4.0
apache-airflow-providers-ftp==3.3.1
apache-airflow-providers-http==4.3.0
apache-airflow-providers-imap==3.1.1
apache-airflow-providers-sqlite==3.3.1
</code></pre>
|
<python><airflow><airflow-2.x>
|
2023-04-24 11:03:51
| 1
| 1,802
|
Gerardo Zinno
|
76,091,214
| 15,178,267
|
Django: python manage.py runserver exits with βPerforming system checks, what is the possible issue?
|
<p>I am working on a django project and have been using <code>python manage.py runserver</code> to spin up my development local server, but it just started exiting at <code>Performing system checks</code>. This is the first time i am encountering such issue, have any one experienced this issue before, please how do i go about it?</p>
<pre><code>C:\Users\Destiny Franks\Desktop\ecommerce_prj>python manage.py runserver
Watching for file changes with StatReloader
Performing system checks...
C:\Users\pc-name\Desktop\ecommerce_prj>python manage.py runserver
Watching for file changes with StatReloader
Performing system checks...
C:\Users\pc-name\Desktop\ecommerce_prj>
</code></pre>
<p>This is how everything looks, it does not show any error message or issue, just breaks off and return to the cmd</p>
<h2>update</h2>
<p>My Settings.py</p>
<pre><code>"""
Django settings for ecommerce_prj project.
Generated by 'django-admin startproject' using Django 3.2.7.
For more information on this file, see
https://docs.djangoproject.com/en/3.2/topics/settings/
For the full list of settings and their values, see
https://docs.djangoproject.com/en/3.2/ref/settings/
"""
from django.contrib.messages import constants as messages
from pathlib import Path
from datetime import timedelta
import os
# Build paths inside the project like this: BASE_DIR / 'subdir'.
BASE_DIR = Path(__file__).resolve().parent.parent
# Quick-start development settings - unsuitable for production
# See https://docs.djangoproject.com/en/3.2/howto/deployment/checklist/
# SECURITY WARNING: keep the secret key used in production secret!
SECRET_KEY = 'django-insecure-@uebu%zn!ghp@b05*c+jgrxv!d)p(iw3=tl*&24m5)&^u^gq@)'
# SECURITY WARNING: don't run with debug turned on in production!
DEBUG = True
ALLOWED_HOSTS = ["*"]
# Application definition
INSTALLED_APPS = [
'jazzmin',
'django.contrib.admin',
'django.contrib.auth',
'django.contrib.contenttypes',
'django.contrib.sessions',
'django.contrib.messages',
'django.contrib.staticfiles',
'django.contrib.humanize',
# Custom
'store',
'core',
'addons',
'userauths',
'vendor',
'reports',
# Third Party
'import_export',
'crispy_forms',
'mathfilters',
'ckeditor',
'ckeditor_uploader',
'django_ckeditor_5',
'taggit',
'corsheaders',
"anymail",
'paypal.standard.ipn',
'geoip2',
'django_user_agents',
]
MIDDLEWARE = [
'django.middleware.security.SecurityMiddleware',
'whitenoise.middleware.WhiteNoiseMiddleware',
'django.contrib.sessions.middleware.SessionMiddleware',
'django.middleware.common.CommonMiddleware',
'django.middleware.csrf.CsrfViewMiddleware',
'django.contrib.auth.middleware.AuthenticationMiddleware',
'django.contrib.messages.middleware.MessageMiddleware',
'django.middleware.clickjacking.XFrameOptionsMiddleware',
'django_user_agents.middleware.UserAgentMiddleware',
]
ROOT_URLCONF = 'ecommerce_prj.urls'
TEMPLATES = [
{
'BACKEND': 'django.template.backends.django.DjangoTemplates',
'DIRS': [os.path.join(BASE_DIR, 'templates')],
'APP_DIRS': True,
'OPTIONS': {
'context_processors': [
'core.context_processor.default',
'django.template.context_processors.debug',
'django.template.context_processors.request',
'django.contrib.auth.context_processors.auth',
'django.contrib.messages.context_processors.messages',
],
},
},
]
WSGI_APPLICATION = 'ecommerce_prj.wsgi.application'
# Database
# https://docs.djangoproject.com/en/3.2/ref/settings/#databases
DATABASES = {
'default': {
'ENGINE': 'django.db.backends.sqlite3',
'NAME': BASE_DIR / 'db.sqlite3',
}
}
# Password validation
# https://docs.djangoproject.com/en/3.2/ref/settings/#auth-password-validators
AUTH_PASSWORD_VALIDATORS = [
{
'NAME': 'django.contrib.auth.password_validation.UserAttributeSimilarityValidator',
},
{
'NAME': 'django.contrib.auth.password_validation.MinimumLengthValidator',
},
{
'NAME': 'django.contrib.auth.password_validation.CommonPasswordValidator',
},
{
'NAME': 'django.contrib.auth.password_validation.NumericPasswordValidator',
},
]
# Internationalization
# https://docs.djangoproject.com/en/3.2/topics/i18n/
LANGUAGE_CODE = 'en-us'
TIME_ZONE = 'UTC'
USE_I18N = True
USE_L10N = True
USE_TZ = True
# Static files (CSS, JavaScript, Images)
# https://docs.djangoproject.com/en/3.2/howto/static-files/
STATIC_URL = '/static/'
STATIC_ROOT = os.path.join(BASE_DIR, 'staticfiles')
STATICFILES_DIRS = [os.path.join(BASE_DIR, 'static')]
MEDIA_URL = '/media/'
MEDIA_ROOT = os.path.join(BASE_DIR, "media")
# Default primary key field type
# https://docs.djangoproject.com/en/3.2/ref/settings/#default-auto-field
DEFAULT_AUTO_FIELD = 'django.db.models.BigAutoField'
JAZZMIN_SETTINGS = {
'site_header': "Desphixs Shop",
'site_brand': "You order, we deliver",
'site_logo': "assets/imgs/theme/loading.gif",
'copyright': "desphixs-shop.com",
"order_with_respect_to": ["store", 'vendor', "addons" , 'core' ,'userauths']
}
LOGIN_URL = "userauths:sign-in"
# LOGIN_REDIRECT_URL = "core:index"
LOGOUT_REDIRECT_URL = "userauths:sign-in"
AUTH_USER_MODEL = 'userauths.User'
CKEDITOR_UPLOAD_PATH = 'uploads/'
STRIPE_PUBLISHABLE_KEY = 'test_keys'
STRIPE_SECRET_KEY = 'test_keys'
STRIPE_CONNECT_CLIENT_ID = 'test_keys'
PAYPAL_CLIENT_ID = 'test_keys-test_keys'
PAYPAL_SECRET_ID = 'test_keys-test_keys-Vy'
PAYPAL_RECEIVER_EMAIL = 'test_keys@gmail.com'
PAYPAL_TEST = True
GEOIP_PATH =os.path.join('geoip')
MESSAGE_TAGS = {
messages.ERROR: 'danger',
}
CRISPY_TEMPLATE_PACK = 'bootstrap4'
customColorPalette = [
{"color": "hsl(4, 90%, 58%)", "label": "Red"},
{"color": "hsl(340, 82%, 52%)", "label": "Pink"},
{"color": "hsl(291, 64%, 42%)", "label": "Purple"},
{"color": "hsl(262, 52%, 47%)", "label": "Deep Purple"},
{"color": "hsl(231, 48%, 48%)", "label": "Indigo"},
{"color": "hsl(207, 90%, 54%)", "label": "Blue"},
]
ANYMAIL = {
# (exact settings here depend on your ESP...)
"MAILGUN_API_KEY": "key-test_keys",
"MAILGUN_SENDER_DOMAIN": 'test_keys.mailgun.org', # your Mailgun domain, if needed
}
FROM_EMAIL = "test_keys@gmail.com"
EMAIL_BACKEND = "anymail.backends.mailgun.EmailBackend"
DEFAULT_FROM_EMAIL = "test_keys@gmail.com"
SERVER_EMAIL = "test_keys@gmail.com"
CKEDITOR_5_CONFIGS = {
"default": {
"toolbar": [
"heading",
"|",
"bold",
"italic",
"link",
"bulletedList",
"numberedList",
"blockQuote",
"imageUpload"
],
},
"comment": {
"language": {"ui": "en", "content": "en"},
"toolbar": [
"heading",
"|",
"bold",
"italic",
"link",
"bulletedList",
"numberedList",
"blockQuote",
],
},
"extends": {
"language": "en",
"blockToolbar": [
"paragraph",
"heading1",
"heading2",
"heading3",
"|",
"bulletedList",
"numberedList",
"|",
"blockQuote",
],
"toolbar": [
# "heading",
# "codeBlock",
# "|",
# "|",
"bold",
"italic",
# "link",
"underline",
"strikethrough",
# "code",
# "subscript",
# "superscript",
# "highlight",
"|",
"bulletedList",
# "numberedList",
# "todoList",
# "|",
# "outdent",
# "indent",
# "|",
# "blockQuote",
# "insertImage",
# "|",
# "fontSize",
# "fontFamily",
# "fontColor",
# "fontBackgroundColor",
# "mediaEmbed",
"removeFormat",
# "insertTable",
# "sourceEditing",
],
"image": {
"toolbar": [
"imageTextAlternative",
"|",
"imageStyle:alignLeft",
"imageStyle:alignRight",
"imageStyle:alignCenter",
"imageStyle:side",
"|",
"toggleImageCaption",
"|"
],
"styles": [
"full",
"side",
"alignLeft",
"alignRight",
"alignCenter",
],
},
"table": {
"contentToolbar": [
"tableColumn",
"tableRow",
"mergeTableCells",
"tableProperties",
"tableCellProperties",
],
"tableProperties": {
"borderColors": customColorPalette,
"backgroundColors": customColorPalette,
},
"tableCellProperties": {
"borderColors": customColorPalette,
"backgroundColors": customColorPalette,
},
},
"heading": {
"options": [
{
"model": "paragraph",
"title": "Paragraph",
"class": "ck-heading_paragraph",
},
{
"model": "heading1",
"view": "h1",
"title": "Heading 1",
"class": "ck-heading_heading1",
},
{
"model": "heading2",
"view": "h2",
"title": "Heading 2",
"class": "ck-heading_heading2",
},
{
"model": "heading3",
"view": "h3",
"title": "Heading 3",
"class": "ck-heading_heading3",
},
]
},
"list": {
"properties": {
"styles": True,
"startIndex": True,
"reversed": True,
}
},
"htmlSupport": {
"allow": [
{"name": "/.*/", "attributes": True, "classes": True, "styles": True}
]
},
},
}
</code></pre>
|
<python><django><django-rest-framework>
|
2023-04-24 11:02:06
| 0
| 851
|
Destiny Franks
|
76,091,160
| 11,552,661
|
TypeError: isinstance() arg 2 must be a type or tuple of types" while using WriteToBigQuery in Apache Beam
|
<p>I am trying to use Apache Beam with Python to fetch JSON data from an API and write it to a BigQuery table. Here is the code I am using:</p>
<pre><code>import argparse
import json
import requests
import apache_beam as beam
from apache_beam.io import WriteToBigQuery
from apache_beam.options.pipeline_options import PipelineOptions
def run(argv=None):
parser = argparse.ArgumentParser()
parser.add_argument('--project', dest='project', required=True, help='GCP project')
parser.add_argument('--region', dest='region', required=True, help='GCP region')
parser.add_argument('--output', dest='output', required=True, help='Output BigQuery table')
known_args, pipeline_args = parser.parse_known_args(argv)
options = PipelineOptions(pipeline_args)
p = beam.Pipeline(options=options)
schema = 'postId:INTEGER, id:INTEGER, name:STRING, email:STRING, body:STRING'
# Fetch comments from the API
(p | 'Fetch comments' >> beam.Create([requests.get('https://jsonplaceholder.typicode.com/comments').text])
| 'Load JSON' >> beam.Map(json.loads)
| 'Flatten' >> beam.FlatMap(lambda x: x)
| 'Map to BQ row' >> beam.Map(lambda x: {
'postId': x['postId'],
'id': x['id'],
'name': x['name'],
'email': x['email'],
'body': x['body']
})
| 'Write to BigQuery' >> beam.io.WriteToBigQuery(
known_args.output,
schema=schema,
write_disposition=beam.io.BigQueryDisposition.WRITE_TRUNCATE,
create_disposition=beam.io.BigQueryDisposition.CREATE_IF_NEEDED)
)
result = p.run()
result.wait_until_finish()
if __name__ == '__main__':
run()
</code></pre>
<p>However, I am encountering the following error:</p>
<pre><code>Traceback (most recent call last):
File "fetch_comments_beam.py", line 43, in <module>
run()
File "fetch_comments_beam.py", line 31, in run
| 'Write to BigQuery' >> beam.io.WriteToBigQuery(
File "/usr/local/lib/python3.8/dist-packages/apache_beam/io/gcp/bigquery.py", line 1934, in __init__
self.table_reference = bigquery_tools.parse_table_reference(
File "/usr/local/lib/python3.8/dist-packages/apache_beam/io/gcp/bigquery_tools.py", line 244, in parse_table_reference
if isinstance(table, TableReference):
TypeError: isinstance() arg 2 must be a type or tuple of types
</code></pre>
<p>I've tried different approaches, but the error persists. How can I resolve this issue?</p>
<p>I run the code with the following arguments:</p>
<pre><code>python3 fetch_comments_beam.py --project onyx-osprey-251417 --region us-central1 --output onyx-osprey-251417:comments_dataset.comments
</code></pre>
<h2>CLOSED: I just had to install apache-beam[gcp] instead of plain apache-beam.</h2>
<pre><code>pip install apache-beam[gcp]
</code></pre>
|
<python><apache-beam>
|
2023-04-24 10:54:22
| 0
| 1,354
|
tbone
|
76,091,077
| 1,200,914
|
Django's bulk_create doubts
|
<p>I'm adding this function to my Django app, but I have a couple of questions I need to know and which I couldn't find in Google.</p>
<ol>
<li>Does it insert all records in the order the list is given? Or can they be suffle? Can I know which id corresponds to the index in the list of the items to give?</li>
<li>In case an error happened will it prompt which one failed, and will continue inserting the others, or will it stop inserting the rest of elements?</li>
</ol>
|
<python><django><django-models>
|
2023-04-24 10:44:05
| 1
| 3,052
|
Learning from masters
|
76,091,043
| 7,318,120
|
pd.to_datetime() does not work in windows terminal
|
<p>I have been running python scripts in <code>windows power shell</code> for quite some time.</p>
<p>All of a sudden this morning I get this error message:</p>
<pre><code> line 3802, in get_loc
return self._engine.get_loc(casted_key)
File "pandas\_libs\index.pyx", line 138, in pandas._libs.index.IndexEngine.get_loc
File "pandas\_libs\index.pyx", line 165, in pandas._libs.index.IndexEngine.get_loc
File "pandas\_libs\hashtable_class_helper.pxi", line 5745, in pandas._libs.hashtable.PyObjectHashTable.get_item
File "pandas\_libs\hashtable_class_helper.pxi", line 5753, in pandas._libs.hashtable.PyObjectHashTable.get_item
KeyError: 'last trade time'
</code></pre>
<p>what is strange is that I can run the code 3 different ways:</p>
<ol>
<li>Integrated Terminal - <code>python test_blah.py</code> (works fine).</li>
<li>Windows power shell - navigate to folder, then <code>python test_blah.py</code> (works fine).</li>
<li>Windows power shell - <code>python "the full path/test_blah.py"</code> (throws error).</li>
</ol>
<p>the full path looks like this:</p>
<pre><code>python "G:\My Drive\darren\python\test_blah.py"
</code></pre>
<p>So I have created a min reproducible example here to show the above:</p>
<pre class="lang-py prettyprint-override"><code>import pandas as pd
print('pandas version:' , pd.__version__)
def some_function():
df = pd.read_csv('basic_GBP_stats.csv')
print(df)
print('got here 01')
try:
df['time'] = pd.to_datetime(df['last trade time'])
print('got here 02')
except Exception as e:
print('exception found:', e)
some_function()
</code></pre>
<p>The contents of the (example) CSV file are this:</p>
<pre><code> Unnamed: 0 symbol num trades ... wavg sell px wavg ratio % last trade time
0 0 ADAGBP 309 ... 0.261606 0.235289 2022-12-13 21:02:32
1 1 BTCGBP 3949 ... 14127.293571 -10.086920 2022-12-13 20:25:34
2 2 ETHGBP 1349 ... 1025.139765 0.183433 2022-12-13 20:18:14
3 3 SOLGBP 1467 ... 11.241261 0.193498 2022-12-13 17:42:31
4 4 XRPGBP 1005 ... 0.314160 0.333005 2022-12-13 22:13:14
... ... ... ... ... ... ... ...
15295 1 BTCGBP 6215 ... 16612.732328 -3.116873 2023-04-24 08:50:25
15296 2 ETHGBP 2075 ... 1139.706164 0.152275 2023-04-24 07:05:07
15297 3 SOLGBP 2366 ... 12.127936 2.285372 2023-04-23 15:48:48
15298 4 XRPGBP 1451 ... 0.315087 0.836744 2023-04-24 07:26:02
15299 5 BNBGBP 405 ... 219.697751 0.476855 2023-04-22 12:35:53
</code></pre>
<p>The version of pandas that is used: 1.5.3.</p>
<p>What I notice is that the <code>df['time'] = pd.to_datetime(df['last trade time'])</code> for some reason produces a malformed dataframe (from method 3 but works fine with methods 1 and 2), which is the reason for the error when using method 3 above.</p>
<p>But I don't know why or how to fix this...</p>
|
<python><pandas><string-to-datetime>
|
2023-04-24 10:40:29
| 0
| 6,075
|
darren
|
76,090,979
| 21,722,065
|
'XlsxWriter' object has no attribute 'save'. Did you mean: '_save'?
|
<p>I'm trying to save data from a DataFrame to an Excel file using pandas. I tried the following code:</p>
<pre class="lang-py prettyprint-override"><code>import pandas as pd
import xlsxwriter
data = {'Name': ['John', 'Jane', 'Adam'], 'Age': [25, 30, 35], 'Gender': ['M', 'F', 'M']}
df = pd.DataFrame(data)
writer = pd.ExcelWriter('output.xlsx', engine='xlsxwriter')
df.to_excel(writer, sheet_name='Sheet1')
workbook = writer.book
worksheet = writer.sheets['Sheet1']
# Example: Adding a chart
chart = workbook.add_chart({'type': 'line'})
chart.add_series({'values': '=Sheet1.$B$2:$B$4'})
worksheet.insert_chart('D2', chart)
writer.save()
</code></pre>
<p>But I get the following error:</p>
<pre class="lang-py prettyprint-override"><code>writer.save()
^^^^^^^^^^^
AttributeError: 'XlsxWriter' object has no attribute 'save'. Did you mean: '_save'?
</code></pre>
<p>Does anyone know how to solve it?</p>
|
<python><pandas><excel><attributeerror><xlsxwriter>
|
2023-04-24 10:34:33
| 2
| 641
|
Straniero95
|
76,090,869
| 1,484,522
|
Python 3.10 - iterating through directory of files says "this file does not exist"
|
<h2>summary</h2>
<p>Creating a Python 3.10 program to read a directory of mp3 files and produce a playlist. I would like this program to be portable across operating systems so it would just use the relative path to mp3files instead of a full OS path.</p>
<p>No matter what filespec I use getting error with the file name - so it's reading the filename from the os spec - was not found.</p>
<h2>structure</h2>
<ul>
<li>mp3reader (directory)
<ul>
<li>reader.py</li>
<li>mp3files (directory)
<ul>
<li>a.mp3</li>
<li>b.mp3</li>
</ul>
</li>
</ul>
</li>
</ul>
<h2>reader.py</h2>
<pre><code>import os
mp3dir="mp3files"
# Iterate over the MP3 files in the directory
for mp3_file in os.listdir(mp3dir):
# Check if the file exists
if os.path.exists(mp3_file):
print(str(os.path.getsize(mp3_file)))
else:
print("The file {} does not exist.".format(mp3_file))
</code></pre>
<h2>result</h2>
<p>The file a.mp3 does not exist.</p>
<p>The file b.mp3 does not exist.</p>
<h2>have tried</h2>
<ul>
<li>mp3dir="mp3files"</li>
<li>mp3dir="mp3reader/mp3files"</li>
<li>mp3dir="mp3reader\mp3files"</li>
<li>mp3dir="(full-path-to-directory)/mp3reader/mp3files"</li>
<li>mp3dir="r(full-path-to-directory)/mp3reader/mp3files"</li>
<li>mp3dir="r(full-path-to-directory)\mp3reader\mp3files"</li>
</ul>
|
<python><filereader>
|
2023-04-24 10:20:53
| 1
| 355
|
lonstar
|
76,090,845
| 21,787,377
|
how to add Comment form inside a post detailed
|
<p>Is there any way I can add a comment form inside a post's details? I have a view that shows a model object, and I want to allow users to comment on that view. I have tried to use <a href="https://djangocentral.com/creating-comments-system-with-django/" rel="nofollow noreferrer">this method</a>, but using that method, a user should leave a post's details and add their comment somewhere else rather than doing it inside a post's details? I have try to do that using the method below, but it gives me an error every time I clicked on submit button: <code>TypeError at /video-play/so-da-shakuwa/ Field 'id' expected a number but got <Video: videos/DJ_AB_-_Masoyiya_Official_Video_9UfStsn.mp4>.</code></p>
<p>views:</p>
<pre><code>def play_video(request, slug):
play = get_object_or_404(Video, slug=slug)
if request.method == 'POST':
comments = request.POST['comments']
new_comment = Comment.objects.create(
comments=comments, post_id=play
)
new_comment.user = request.user
new_comment.save()
return redirect('Videos')
context = {
'play':play
}
return render(request, 'video/play_video.html', context)
</code></pre>
<p>template:</p>
<p><div class="snippet" data-lang="js" data-hide="false" data-console="true" data-babel="false">
<div class="snippet-code">
<pre class="snippet-code-html lang-html prettyprint-override"><code><link href="https://cdn.jsdelivr.net/npm/bootstrap@5.0.2/dist/css/bootstrap.min.css"
rel="stylesheet" integrity="sha384-EVSTQN3/azprG1Anm3QDgpJLIm9Nao0Yz1ztcQTwFspd3yD65VohhpuuCOmLASjC"
crossorigin="anonymous">
<div class="container">
<div class="row">
<div class="col-md-9">
<form action="" method="post">
{% csrf_token %}
<div class="input-group">
<textarea name="comments" id="comment" cols="10"
class="form-control" placeholder="write your comment"></textarea>
<button class="btn btn-primary" type="submit">submit</button>
</div>
</form>
</div>
</div>
</div></code></pre>
</div>
</div>
</p>
<p>models:</p>
<pre><code>class Video(models.Model):
user = models.ForeignKey(settings.AUTH_USER_MODEL, on_delete=models.CASCADE)
title = models.CharField(max_length=70)
video = models.FileField(upload_to='videos')
created_on = models.DateTimeField(auto_now_add=True)
banner = models.ImageField(upload_to='banner')
slug = models.SlugField(max_length=100, unique=True)
class Comment(models.Model):
user = models.ForeignKey(settings.AUTH_USER_MODEL, on_delete=models.CASCADE)
comments = models.TextField(max_length=200)
post = models.ForeignKey(Video, on_delete=models.CASCADE)
def __str__(self):
return self.user
</code></pre>
|
<python><django>
|
2023-04-24 10:17:58
| 1
| 305
|
Adamu Abdulkarim Dee
|
76,090,820
| 12,125,395
|
Create frequency matrix using pandas
|
<p>Suppose I have the following data:</p>
<pre><code>import pandas as pd
df = pd.DataFrame([
['01', 'A'],
['01', 'B'],
['01', 'C'],
['02', 'A'],
['02', 'B'],
['03', 'B'],
['03', 'C']
], columns=['id', 'category'])
</code></pre>
<p>How do I create a frequency matrix like this?</p>
<pre><code> A B C
A 2 2 1
B 2 3 2
C 1 2 2
</code></pre>
<p>One way to do it is through self join:</p>
<pre><code>result = df.merge(df, on='id')
pd.pivot_table(
result,
index='category_x',
columns='category_y',
values='id',
aggfunc='count'
)
</code></pre>
<p>But this will make the data size very large, is there any efficient way to do it without using self join?</p>
<p><strong>Edit</strong>
My original post was closed for duplication of <code>pivot_table</code>. But <code>pivot_table</code> only accept different <code>columns</code> and <code>index</code>. In my case, I have only one <code>category</code> column. So</p>
<pre><code># Does not work
pivot_table(df, column='category', index='category', ...)
</code></pre>
<p>does not work.</p>
|
<python><pandas>
|
2023-04-24 10:15:09
| 1
| 889
|
wong.lok.yin
|
76,090,694
| 5,306,861
|
How to find all matching objects in an image with SIFT
|
<p>I have a picture of a diamond card, and a small picture of one diamond, I'm trying to find all the diamonds in the big picture</p>
<p>Below are the pictures:</p>
<p><a href="https://i.sstatic.net/zzBUd.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/zzBUd.png" alt="enter image description here" /></a></p>
<p><a href="https://i.sstatic.net/rIQyp.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/rIQyp.png" alt="enter image description here" /></a></p>
<p>Below is the experimental code:</p>
<pre><code>using System.Collections.Generic;
using System;
using OpenCvSharp;
using OpenCvSharp.Features2D;
using System.Linq;
namespace ConsoleApp1
{
class Program
{
static void Main(string[] args)
{
var src = new Mat("8_diamonds.png");
var templ = new Mat("diamonds_template.png");
var dst = new Mat();
SiftDetector(src, templ, dst, 0, 3, 0.04, 10, 0.6, 1.75);
Cv2.ImShow("dst", dst);
Cv2.ImShow("src", src);
Cv2.ImShow("templ", templ);
Cv2.WaitKey();
}
private static Scalar[] _Scalars = new[]
{
new Scalar(255, 0, 0),
new Scalar(0, 255, 0),
new Scalar(0, 0, 255),
new Scalar(255, 255, 0),
new Scalar(0, 255, 255),
new Scalar(255, 0, 255),
};
public static void SiftDetector(Mat src, Mat templ, Mat dst,
int nFeatures = 0,
int nOctaveLayers = 3,
double contrastThreshold = 0.04,
double edgeThreshold = 10,
double sigma = 1.6,
double ratio_thresh = 0.75)
{
var detector = SIFT.Create(nFeatures, nOctaveLayers, contrastThreshold, edgeThreshold, sigma);
var descriptors_templ = new Mat();
var descriptors_src = new Mat();
detector.DetectAndCompute(templ, null, out var keypoints_templ, descriptors_templ);
detector.DetectAndCompute(src, null, out var keypoints_src, descriptors_src);
var matcher = new FlannBasedMatcher();
src.CopyTo(dst);
if (dst.Channels() == 1)
{
Cv2.CvtColor(dst, dst, ColorConversionCodes.GRAY2BGR);
}
for (int j = 0; j < 5; j++)
{
var knn_matches = matcher.KnnMatch(descriptors_templ, descriptors_src, 2);
//-- Filter matches using the Lowe's ratio test
var good_matches = new List<DMatch>();
for (var i = 0; i < knn_matches.Length; i++)
{
if (knn_matches[i][0].Distance < ratio_thresh * knn_matches[i][1].Distance)
{
good_matches.Add(knn_matches[i][0]);
}
}
if (good_matches.Count < 4)
{
break;
}
var dstPts = new List<Point2d>();
var srcPts = new List<Point2d>();
for (var i = 0; i < good_matches.Count; i++)
{
dstPts.Add(keypoints_templ[good_matches[i].QueryIdx].Pt.ToPoint2d());
srcPts.Add(keypoints_src[good_matches[i].TrainIdx].Pt.ToPoint2d());
}
Mat H = Cv2.FindHomography(dstPts, srcPts, HomographyMethods.Ransac);
if (H.Cols != 0)
{
var obj_corners = new Point2d[4];
obj_corners[0] = new Point2d(0, 0);
obj_corners[1] = new Point2d(templ.Cols, 0);
obj_corners[2] = new Point2d(templ.Cols, templ.Rows);
obj_corners[3] = new Point2d(0, templ.Rows);
var scene_corners = Cv2.PerspectiveTransform(obj_corners, H);
var drawingPoints = scene_corners.Select(p => (Point)p).ToArray();
Cv2.Polylines(dst, new[] { drawingPoints }, true, Scalar.Lime, 4);
}
foreach (var match in srcPts)
{
dst.Circle(match.ToPoint(), 5, _Scalars[j % _Scalars.Length], 2);
}
var toRemove = good_matches.Select(p => p.TrainIdx).Distinct().ToList();
foreach (var row in toRemove)
{
descriptors_src.Row(row).SetTo(new Scalar(float.MaxValue, float.MaxValue, float.MaxValue));
}
}
}
}
static class Extentions
{
public static Point2d ToPoint2d(this Point2f point2F)
{
return new Point2d(point2F.X, point2F.Y);
}
}
}
</code></pre>
<p>In this code I search for matches, paint around the found object, then delete the matches and search again.</p>
<p>The result is:</p>
<p><a href="https://i.sstatic.net/jQYR1.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/jQYR1.png" alt="enter image description here" /></a></p>
<p>The question is how do I find all the diamonds and not just the first one?</p>
<p><code>SIFT</code> is not required, <code>SURF</code> or any similar algorithm is also possible.</p>
<p>An answer in <strong>Python</strong> or <strong>C++</strong> would also be highly appreciated.</p>
|
<python><c#><c++><opencv><computer-vision>
|
2023-04-24 09:59:08
| 1
| 1,839
|
codeDom
|
76,090,515
| 726,730
|
Datetime objects set to dict list
|
<pre class="lang-py prettyprint-override"><code>#recalculate start and end datetime
current_datetime = self.start_datetime
counter = -1
for schedule_item_final in schedule_items_final:
counter += 1
schedule_items_final[counter]["start_datetime"] = copy.deepcopy(current_datetime)
schedule_items_final[counter]["end_datetime"] = copy.deepcopy(current_datetime) + timedelta(seconds=schedule_item_final["duration_milliseconds"]/1000)
current_datetime += timedelta(seconds=schedule_item_final["duration_milliseconds"]/1000)
print(current_datetime)
for schedule_item_final in schedule_items_final:
print(schedule_item_final["start_datetime"])
</code></pre>
<p>I have this code my when i run it in console i saw something like this:</p>
<pre><code>2023-04-24 13:07:14.931000
2023-04-24 13:08:40.953000
2023-04-24 13:10:06.975000
2023-04-24 13:11:32.997000
2023-04-24 13:12:59.019000
2023-04-24 13:14:25.041000
2023-04-24 13:15:51.063000
2023-04-24 13:17:17.085000
2023-04-24 13:18:43.107000
...
2023-04-24 15:03:42.095000
----------------------
2023-04-24 12:41:45.917000
2023-04-24 15:03:42.095000
2023-04-24 15:03:42.095000
2023-04-24 15:03:42.095000
2023-04-24 15:03:42.095000
2023-04-24 15:03:42.095000
2023-04-24 15:03:42.095000
2023-04-24 15:03:42.095000
2023-04-24 15:03:42.095000
2023-04-24 15:03:42.095000
2023-04-24 15:03:42.095000
...
2023-04-24 15:03:42.095000
</code></pre>
<p>where 2023-04-24 12:41:45.917000 is the self.start_datetime and 2023-04-24 15:03:42.095000 is the expected start_datetime of last list item.</p>
<p>What wrong with the above code?</p>
|
<python><datetime>
|
2023-04-24 09:33:19
| 0
| 2,427
|
Chris P
|
76,090,442
| 5,775,358
|
Xarray apply function
|
<p>I have a large dataset and I want to do some computing on some groups of values.</p>
<p>That works fine, but I am left with the following information:</p>
<pre><code> Array Chunk
Bytes 17.81 kiB 480 B
Shape (38, 1, 60) (1, 1, 60)
Count 57288 Tasks 38 Chunks
Type float64 numpy.ndarray
</code></pre>
<p>So I am guessing that when I for exmaple add some variables togehter, it does not perform the operation but it only saves that there is an operation that has to be done, at least that is how I interpret the "tasks".</p>
<p>When asking for the <code>.value</code> it takes a long time (multiple minutes). I would like to know how one can tell to perform the tasks in parallel? When checking the task manager the CPU and memory usage are extremely low during the operations. I would like to use all cores and more memory for faster computing time.</p>
|
<python><parallel-processing><dask><python-xarray>
|
2023-04-24 09:25:06
| 1
| 2,406
|
3dSpatialUser
|
76,090,332
| 6,117,017
|
Azure (Python) Function Code Deployment --- Zip Deployments' trigger does not work
|
<p>I have an Azure Linux Function App that I am deploying using TerraForm.</p>
<p>I have Linux Function + Consumption Plan.</p>
<p>The .zip function contains <code>3 .py scripts, one __init__.py and function.json</code>.</p>
<p>The code deployment goes well, but the triggering does not work (the function is an Azure Storage Blob Trigger that fires when a specific file is uploaded on the blob container).</p>
<p>If I use the Azure Function Core Tools, the deployment goes well and the triggering works (I open the log streams/monitor and I see the function is constantly polling for objects inside that container).</p>
<p>If I use the CLI or TerraForm to upload the code, the triggering does not work.</p>
<p>Here is my code for the <code>function_app</code>:</p>
<pre><code>resource "azurerm_linux_function_app" "blurring_fn_app" {
name = "blurring-app-new4"
location = var.location
resource_group_name = var.resource_group
storage_account_name = var.storage_account
storage_account_access_key = data.azurerm_key_vault_secret.sensestgaccountkey.value
service_plan_id = azurerm_service_plan.blurring_app_service_plan.id
functions_extension_version = "~4"
app_settings = {
"APPINSIGHTS_INSTRUMENTATIONKEY" = "${data.azurerm_key_vault_secret.appinsightskey.value}"
"AzureWebJobsStorage" = "${data.azurerm_key_vault_secret.azure_web_jobs_storage.value}"
"ENABLE_ORYX_BUILD" = true
"SCM_DO_BUILD_DURING_DEPLOYMENT" = true
}
site_config {
application_insights_key = data.azurerm_key_vault_secret.appinsightskey.value
application_insights_connection_string = data.azurerm_key_vault_secret.appinsightsconnstr.value
application_stack {
python_version = "3.9"
}
}
}
</code></pre>
<p>What I already tried:</p>
<ol>
<li><p>I tried using the func CLI deployment, which works for the uploading, <strong>but the function is not triggered</strong>.</p>
</li>
<li><p>I tried using the <code>"WEBSITE_RUN_FROM_PACKAGE"= azurerm_storage_blob.storage_blob_function.url</code> (.zip of scripts uploaded to an Azure Storage Blob, this must be an URL in case of Linux apps + Consumption Plan), which works as well for the uploading, <strong>but the function is not triggered.</strong></p>
</li>
<li><p>I also tried using <code>zip_deploy_file = path_to_local_zip</code> as a parameter inside the <code>azurerm_linux_function_app</code> and it still did not work.</p>
</li>
<li><p>For all 3 options above, I tried to manually sync the triggers : <a href="https://learn.microsoft.com/en-us/rest/api/appservice/web-apps/sync-function-triggers?tryIt=true&source=docs#code-try-0" rel="nofollow noreferrer">https://learn.microsoft.com/en-us/rest/api/appservice/web-apps/sync-function-triggers?tryIt=true&source=docs#code-try-0</a> but that did not work either.</p>
</li>
</ol>
<p>The <code>function.json</code> is the following:</p>
<pre><code> {
"scriptFile": "__init__.py",
"bindings": [
{
"name": "myblob",
"type": "blobTrigger",
"direction": "in",
"path": "blobcontainername/{name}.mp4",
"connection": "AzureWebJobsStorage"
}
]
}
</code></pre>
<p>How can I make sure the function is triggered?</p>
|
<python><azure><terraform><azure-functions><devops>
|
2023-04-24 09:11:54
| 3
| 15,173
|
Timbus Calin
|
76,090,279
| 855,472
|
Add usage examples into swagger
|
<p>We're using swagger to document our API. We need to add some API usage examples. Not just how to call some endpoint, but what endpoint in which order the user need to call to achieve some goals. Something like usage scenarios.</p>
<p>Is it possible to add in Swagger? Or maybe there is some alternative that does that? We do need the API endpoints descriptions and a way to run the endpoints as in Swagger, so switching to mkdocs or something similar is not really feasible.</p>
|
<python><flask><swagger><documentation><openapi>
|
2023-04-24 09:05:27
| 1
| 3,499
|
Djent
|
76,090,247
| 10,967,961
|
Unable to perform a dask merge
|
<p>I have a huge dataframe called Network consisting of two columns (integers): "NiuSup", "NiuCust" and 5 Million observations.
I am trying to perform a merge using dask as follows:</p>
<pre><code>import dask.dataframe as dd
NetworkDD = dd.from_pandas(Network, npartitions=Network['NiuSup'].nunique())
NodesSharingSupplier = dd.merge(NetworkDD, NetworkDD, on='NiuSup').query('NiuCust_x != NiuCust_y').compute()
</code></pre>
<p>however I incur in a "Not enough space left on device" Error. I have 80 GB in SSD left and I have noticed is that, at a certain point of the merge, the SSD drops until the merge is forced to stop and the error occurs. The the SSD goes back almost to its original value.
My suspect is that dask creates some huge temporary files (maybe log files?) that drop memory down and unable the merge when .compute() is typed. So two solutions I thought are:</p>
<ol>
<li>Perform the merge and force dask to store everything online in dropbox repository (where I have 2 TB)</li>
<li>Force dask not to write anything on the local machine.</li>
</ol>
<p>Now, since I am quite new to dask, I cannot perform either 1 nor 2 and I don't know even if they are possible solutions or desirable ones. Could you please help me out figuring a way to perform such a merge please?</p>
|
<python><merge><dask><space>
|
2023-04-24 09:02:33
| 0
| 653
|
Lusian
|
76,090,182
| 16,589,565
|
why pyinstaller available in virtual environment even I did not install it
|
<p>I created a python virtual environment by virtualenv and activate it, then I found I can use pyinstaller in this vir-env, but I had not "pip install pyinstaller" in it, why? And as comparison, I wrote 'import <not_installed_module>' in code, then it threw up 'module unfound' and it is as expected.</p>
<p>---- updated 1st ----
thanks to reminder, added my operation
<a href="https://i.sstatic.net/t3z86.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/t3z86.png" alt="enter image description here" /></a></p>
<p>---- updated 2nd -----
thanks to comments, it shows the pyinstaller is still in system path
<a href="https://i.sstatic.net/GdJf1.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/GdJf1.png" alt="enter image description here" /></a></p>
|
<python><pip><virtualenv><pyinstaller><virtual-environment>
|
2023-04-24 08:54:46
| 1
| 317
|
leotsing
|
76,090,165
| 5,386,595
|
Call custom colormap by name
|
<p>I've seen many posts about creating custom colormaps in <code>matplotlib</code>, however I couldn't find whether it is possible to call such custom colormap by name (which I guess requires first some way to add the custom colormap to the list of findable/built-in colormaps).</p>
<p>As an example, I'd like to do something like:
<code>plt.scatter(..., cmap='my_cmap')</code></p>
<p>I know that I could just pass directly the colormap there, but what if I often wants to use this colormap and would like to avoid having to retrieve/define it before passing it as <code>cmap</code> argument?</p>
|
<python><matplotlib><colormap>
|
2023-04-24 08:52:30
| 1
| 762
|
duff18
|
76,090,143
| 13,506,329
|
Use numpy masked array on an array of arrays without getting a flattened output
|
<p>Consider the following code</p>
<pre><code>x = np.array([[1, 2, 3], ['NaN', 4, 'NaN'], [7, 8, 9]])
# Convert 'NaN' strings to masked values
mask = np.ma.masked_where(x == 'NaN', x)
# Get a boolean array indicating where the original array is not masked
bool_arr = ~mask.Mask
# Filter the original array using the boolean array
filtered_arr = x[bool_arr]
print(filtered_arr)
</code></pre>
<p>The code above results in the following output</p>
<pre><code>['1' '2' '3' '4' '7' '8' '9']
</code></pre>
<p>However I want my output to look as follows</p>
<pre><code>[['1' '2' '3'],
['4'],
['7' '8' '9']]
</code></pre>
<p>Where am I going wrong?</p>
|
<python><arrays><python-3.x><numpy><vectorization>
|
2023-04-24 08:49:32
| 2
| 388
|
Lihka_nonem
|
76,090,058
| 18,949,720
|
Under-sampling leads to poor results for no apparent reason
|
<p>I am using Random Forest for a semantic segmentation task, with 3 classes, which are imbalanced. First, I just trained the algorithms on random subsets containing 20% of all the pixels (else my memory cannot handle training the algorithms), and got IoU and Balanced accuracy scores of 0.83 and 0.91 on my test dataset (comprising 32 images representative of the dataset).
Then, to handle imbalance, I did this on the full training dataset (not the previous subset):</p>
<pre><code>rus = RandomUnderSampler(random_state = 42)
X_resampled, Y_resampled = rus.fit_resample(X, Y)
</code></pre>
<p>leading to a training dataset with similar size to the previous one, but this time with balanced classes. But surprisingly, I notice that IoU and balanced accuracy are now equal to 0.78 and 0.84 on my test dataset.</p>
<p>Of course under-sampling can lead to bad performances, but generally it is because the under-sampled dataset is smaller than the original one. Here, the original one is already a subset.</p>
<p>Can someone tell me what could be the cause of these poor performances ?</p>
|
<python><random-forest><image-segmentation><imbalanced-data>
|
2023-04-24 08:37:00
| 2
| 358
|
Droidux
|
76,089,813
| 11,075,360
|
Pinecone Error when connecting with OpenAi: MaxRetryError
|
<p>I have a simple app that lets you upload a pdf, splits it in chunks, make the embeddings and then uploads it to pinecone. But when I run
<code>docsearch = Pinecone.from_texts([t.page_content for t in texts], embeddings, index_name=index_name)</code>
I get the following error:</p>
<pre><code>SSLEOFError Traceback (most recent call last)
File /usr/lib/python3/dist-packages/urllib3/connectionpool.py:699, in HTTPConnectionPool.urlopen(self, method, url, body, headers, retries, redirect, assert_same_host, timeout, pool_timeout, release_conn, chunked, body_pos, **response_kw)
698 # Make the request on the httplib connection object.
--> 699 httplib_response = self._make_request(
700 conn,
701 method,
702 url,
703 timeout=timeout_obj,
704 body=body,
705 headers=headers,
706 chunked=chunked,
707 )
709 # If we're going to release the connection in ``finally:``, then
710 # the response doesn't need to know about the connection. Otherwise
711 # it will also try to release it and we'll have a double-release
712 # mess.
File /usr/lib/python3/dist-packages/urllib3/connectionpool.py:394, in HTTPConnectionPool._make_request(self, conn, method, url, timeout, chunked, **httplib_request_kw)
393 else:
--> 394 conn.request(method, url, **httplib_request_kw)
396 # We are swallowing BrokenPipeError (errno.EPIPE) since the server is
397 # legitimately able to close the connection after sending a valid response.
398 # With this behaviour, the received response is still readable.
...
--> 574 raise MaxRetryError(_pool, url, error or ResponseError(cause))
576 log.debug("Incremented Retry for (url='%s'): %r", url, new_retry)
578 return new_retry
MaxRetryError: HTTPSConnectionPool(host='langchain2-e630e5d.svc.asia-northeast1-gcp.pinecone.io', port=443): Max retries exceeded with url: /vectors/upsert (Caused by SSLError(SSLEOFError(8, 'EOF occurred in violation of protocol (_ssl.c:2396)')))
</code></pre>
<p>I dont know what the error is.</p>
<p>here is the rest of the code:</p>
<pre><code>from langchain.text_splitter import RecursiveCharacterTextSplitter
Load your data
loader = UnstructuredPDFLoader("../data/field-guide-to-data-science.pdf")
# loader = OnlinePDFLoader("https://wolfpaulus.com/wp-content/uploads/2017/05/field-guide-to-data-science.pdf")
data = loader.load()
print (f'You have {len(data)} document(s) in your data')
print (f'There are {len(data[0].page_content)} characters in your document')
You have 1 document(s) in your data
There are 176584 characters in your document
Chunk your data up into smaller documents
text_splitter = RecursiveCharacterTextSplitter(chunk_size=1000, chunk_overlap=0)
texts = text_splitter.split_documents(data)
print (f'Now you have {len(texts)} documents')
Now you have 228 documents
Create embeddings of your documents to get ready for semantic search
from langchain.vectorstores import Chroma, Pinecone
from langchain.embeddings.openai import OpenAIEmbeddings
import pinecone
OPENAI_API_KEY = '...'
PINECONE_API_KEY = '...'
PINECONE_API_ENV = 'us-east1-gcp'
embeddings = OpenAIEmbeddings(openai_api_key=OPENAI_API_KEY)
# initialize pinecone
pinecone.init(
api_key=PINECONE_API_KEY, # find at app.pinecone.io
environment=PINECONE_API_ENV # next to api key in console
)
index_name = "langchain2"
docsearch = Pinecone.from_texts([t.page_content for t in texts], embeddings, index_name=index_name)```
</code></pre>
|
<python><openai-api><langchain>
|
2023-04-24 08:04:44
| 3
| 301
|
Nordic Guy
|
76,089,733
| 18,206,100
|
Using attrs is it ok to set init=False to an attribute with no default value
|
<p>I use <code>attrs</code> library.</p>
<p>I use some attributes that are set by the <code>__attrs_post_init__</code> method.</p>
<p>For them, I want to prevent them from being part of the constructor.</p>
<p>Is it ok to not put a default value, or is it an implicit requirement of putting <code>init=False</code>?</p>
<p>The doc is not really clear about it.</p>
<pre class="lang-py prettyprint-override"><code>import attrs
@attrs.define
class MyClass:
val1: str = attrs.field(init=False)
def __attrs_post_init__(self):
self.val1="something"
a = MyClass()
print(a)
a.val1
</code></pre>
<p>This code is working, however I wonder if itβs relevant regarding the best practices.</p>
|
<python><python-attrs>
|
2023-04-24 07:57:05
| 1
| 919
|
Floh
|
76,089,622
| 4,404,709
|
Polars map_batches on list type raises InvalidOperationError
|
<p>There is a conundrum I cannot solve in Polars:</p>
<p>This behaves as expected:</p>
<pre class="lang-py prettyprint-override"><code>df = pl.DataFrame(
{
"int1": [1, 2, 3],
"int2": [3, 2, 1]
}
)
df.with_columns(
pl.struct('int1', 'int2')
.map_batches(lambda x: x.struct.field('int1') + x.struct.field('int2')).alias('int3')
)
</code></pre>
<p>output:</p>
<pre><code>shape: (3, 3)
ββββββββ¬βββββββ¬βββββββ
β int1 β int2 β int3 β
β --- β --- β --- β
β i64 β i64 β i64 β
ββββββββͺβββββββͺβββββββ‘
β 1 β 3 β 4 β
β 2 β 2 β 4 β
β 3 β 1 β 4 β
ββββββββ΄βββββββ΄βββββββ
</code></pre>
<p>Yet this does not:</p>
<pre class="lang-py prettyprint-override"><code>df = pl.DataFrame(
{
"int1": [[1], [2], [3]],
"int2": [[3], [2], [1]]
}
)
df.with_columns(
pl.struct('int1', 'int2')
.map_batches(lambda x: x.struct.field('int1').to_list() + x.struct.field('int2').to_list()).alias('int3')
)
</code></pre>
<p>output:</p>
<pre><code># InvalidOperationError: Series int3, length 1 doesn't match the DataFrame height of 3
</code></pre>
<p>This is the output I was expecting:</p>
<pre><code>βββββββββββββ¬ββββββββββββ¬ββββββββββββ
β int1 β int2 β int3 β
β --- β --- β --- β
β list[i64] β list[i64] β list[i64] β
βββββββββββββͺββββββββββββͺββββββββββββ‘
β [1] β [3] β [1, 3] β
β [2] β [2] β [2, 2] β
β [3] β [1] β [3, 1] β
βββββββββββββ΄ββββββββββββ΄ββββββββββββ
</code></pre>
|
<python><dataframe><python-polars>
|
2023-04-24 07:42:49
| 3
| 960
|
erap129
|
76,089,602
| 19,325,656
|
Validate choice field DRF
|
<p>Hi all I have my model and its serializer, what I discovered when testing is that serializer is saved no matter what I have in my choice field but in Django admin choice field is empty when the data is wrong and when the data is right I can see the correct option</p>
<p>so,</p>
<pre><code>-> wrong choice data
-> saved regardless
-> nothing in django admin
</code></pre>
<pre><code>-> correct choice data
-> saved
-> correct value in django admin
</code></pre>
<p>How can I return validation error if I detect that user posted incorrect data?</p>
<p>what I was trying out is</p>
<pre><code>topic = serializers.ChoiceField(choices=School.topic)
</code></pre>
<p>and I don't think that iterating over each tuple and searching for a string is correct or efficient</p>
<p>code</p>
<pre><code>class SchoolViewSet(viewsets.ModelViewSet):
queryset = School.objects.all()
serializer_class = SchoolSerializer
def create(self, request):
serializer = SchoolSerializer(data=request.data)
if serializer.is_valid():
serializer.save()
return Response(serializer.data, status=status.HTTP_201_CREATED)
else:
return Response(serializer.errors, status=status.HTTP_400_BAD_REQUEST)
</code></pre>
<pre><code>class School(models.Model):
SUBJECT = (
('Math', 'Mathematic'),
('Eng', 'English'),
('Cesolved', 'Chemistry'),
)
name = models.CharField(blank=False, max_length=50)
subject = models.CharField(choices=SUBJECT, max_length=15, blank=False)
created = models.DateTimeField(auto_now_add=True)
updated = models.DateTimeField(auto_now=True)
def __str__(self):
return str(f'{self.name} is {self.subject}')
</code></pre>
<pre><code>class SchoolSerializer(serializers.ModelSerializer):
subject = serializers.ChoiceField(choices=School.SUBJECT)
def validate(self, data):
name = data["name"]
if (5 > len(name)):
raise serializers.ValidationError({"IError": "Invalid name"})
return data
class Meta:
model = School
fields = "__all__"
</code></pre>
|
<python><django><serialization><django-rest-framework>
|
2023-04-24 07:39:30
| 1
| 471
|
rafaelHTML
|
76,089,581
| 1,159,488
|
How to get native points on a elliptic curve from an existing addition of points
|
<p>I'm working on elliptic curves with SageMath.<br />
I know :</p>
<ol>
<li><p>The equation of the curve E defined by y^2 = x^3 + ... : <code>E = EllipticCurve( GF(K), [m, n] )</code></p>
</li>
<li><p>the addition of points A and B (A+B) : <code>(x3;y3)</code></p>
</li>
<li><p>the substraction of points A and B (A-B) : <code>(x4;y4)</code></p>
</li>
</ol>
<p>Here my goal is to find <strong>both A <code>(x1;y1)</code> and B <code>(x2;y2)</code></strong>.<br />
I obviously understand how to find (A+B) but don't find how to reverse it and get A and B from the addition of A and B and the substraction too.</p>
<p>I know too I can use these kind of function to generate (A+B) coordinates from A and B coordinates. Here I use the slope of the native elliptic curve given by : <code>-x1 -x2 + (y1-y2)^2/(x1-x2)^2</code>.</p>
<pre><code> E = EllipticCurve( GF(K), [m, n] )
# We know A and B coords
def x3(A, B):
x1, y1 = A.xy()
x2, y2 = B.xy()
return -x1 -x2 + (y1-y2)^2/(x1-x2)^2
x3(A, B)
</code></pre>
<p>I guess there should be something to do with it but I really don't see how to begin.</p>
<p>Any help would be very appreciated, thanks !</p>
<p><strong>EDIT :</strong></p>
<p>Having C=A+B and D=A-B, I made a try with C+D :</p>
<pre><code>C = E(x3, y3)
D = E(x4, y4)
Z = C + D
# Z gives : Z(x5, y5)
</code></pre>
<p>I get new coordinates <code>Z(x5, y5)</code> but I could not find both A and B coordinates, which does not fit to what I'd like to do.</p>
|
<python><cryptography><sage><elliptic-curve>
|
2023-04-24 07:36:12
| 0
| 629
|
Julien
|
76,089,515
| 6,691,564
|
TensorFlow on Apple M1 without Metal - is it possible?
|
<p>I have a 2017 Intel iMac on which I develop TensorFlow apps. I am on the latest version of everything (as at April 2023) - Python 3.11.3, Tensorflow 2.12.0, numpy 1.24.2.</p>
<p>I have bought a cheap secondhand Mac Mini (2020 M1) to offload some of the training. The experiments I have done with Metal give me worse performance training on the GPU than on my iMac. This seems to be a general problem with certain network configurations. Also, to go back to an earlier version of TF means I have to forego some newer features that I am using in 2.12.</p>
<p>However, good news, the M1 Mac Mini is about twice the speed of the iMac for Python programs that don't use TF.</p>
<p>Can I install my latest-version-of-everything environment on the M1 Mini and just not use the GPU? When I pip install tensorflow, I get the message:</p>
<p><code>ERROR: Could not find a version that satisfies the requirement tensorflow-macos (from versions: none) ERROR: No matching distribution found for tensorflow-macos</code></p>
<p>Python 3.11 seems a lot faster than 3.10, so it would be good to keep that. How can I find out what versions of everything tensorflow-macos needs? Is there a version of tensorflow-macos that is compatible with Python 3.11?</p>
<p>Just to reiterate, I do not want to use the GPU for training, just have the most up to date versions running on the CPU.</p>
<p>Thanks,</p>
<p>Julian</p>
|
<python><tensorflow><apple-m1>
|
2023-04-24 07:26:58
| 1
| 321
|
Julian7
|
76,089,499
| 2,085,438
|
workaround for k-means to use levenshtein distance in scikit?
|
<p>I have tried to apply a custom distance to my kNN model for the reasons I detail below.</p>
<p>Here is my metric:</p>
<pre class="lang-py prettyprint-override"><code>def distance_fun(df, text_feat, num_feat):
# len(text_feat) levenshtein
# len(num_feat) euclidian
# rest dice
num_indices = list(range(len(text_feat), len(text_feat) + len(num_feat)))
cat_indices = list(range(len(text_feat) + len(num_feat), len(df.columns)))
def the_func(x, y):
text_dist = np.sum([lev.distance(x[i],y[i]) for i in np.arange(start=0, stop=len(text_feat))]) / len(text_feat)
num_dist = euclidean(x[num_indices],y[num_indices])
cat_dist = dice(x[cat_indices],y[cat_indices])
return text_dist + num_dist + cat_dist
return the_func
</code></pre>
<p>and here is my call to the NearestNeighbors model:</p>
<pre class="lang-py prettyprint-override"><code>knn = NearestNeighbors(n_neighbors=10,
algorithm='auto',
metric=metric,
).fit(tranches_transformed)
</code></pre>
<p>where <code>tranches_transformed</code> contains text in the first column and floating point values everywhere else (combination of numerical features and OHE features)</p>
<p>My text value are names so there is no point trying to find meaning or sentiment in them. I would still like to group similar names together (very importantly I would like identical names to be very "close" together).</p>
<p>I realize all values in the scikit implementation of k-means needs to be floating values so how would one work around that limitation in my specific case ?</p>
|
<python><scikit-learn><knn><levenshtein-distance>
|
2023-04-24 07:25:09
| 0
| 2,663
|
Chapo
|
76,089,304
| 8,040,369
|
ModbusTcpClient: How to read long integer values from Input registers in python
|
<p>I am trying to get data from a sensor using ModbusTcpClient as below</p>
<pre><code>client = ModbusTcpClient('xx.xx.xx.xx', port=502)
connection = client.connect()
request = client.read_input_registers(220,2, unit=51, debug=False)
result = request.registers
print(result)
</code></pre>
<p>With this result i am getting a list of <strong>unsigned decimal</strong>. How can i convert this to <strong>long int</strong> or <strong>swapped long</strong></p>
<p>The result i am getting from the above code</p>
<pre><code>2023-04-24 12:18:59,856 MainThread DEBUG transaction :297 RECV: 0x0 0x2 0x0 0x0 0x0 0x7 0x33 0x4 0x4 0x0 0x28 0x8a 0x23
2023-04-24 12:18:59,857 MainThread DEBUG socket_framer :147 Processing: 0x0 0x2 0x0 0x0 0x0 0x7 0x33 0x4 0x4 0x0 0x28 0x8a 0x23
2023-04-24 12:18:59,857 MainThread DEBUG factory :266 Factory Response[ReadInputRegistersResponse: 4]
2023-04-24 12:18:59,858 MainThread DEBUG transaction :454 Adding transaction 2
2023-04-24 12:18:59,859 MainThread DEBUG transaction :465 Getting transaction 2
2023-04-24 12:18:59,859 MainThread DEBUG transaction :224 Changing transaction state from 'PROCESSING REPLY' to 'TRANSACTION_COMPLETE'
[40, 35363]
</code></pre>
<p>But i need the <strong>[40, 35363]</strong> to be shown as <strong>swapped long</strong> as <strong>2656849</strong></p>
<p>Thanks,</p>
<p><strong>Edit:</strong></p>
<p>I have answered with my code below</p>
|
<python><python-2.7><modbus><modbus-tcp>
|
2023-04-24 07:01:43
| 2
| 787
|
SM079
|
76,089,130
| 11,479,825
|
Group a list column
|
<p>I have a data frame, containing the following data:</p>
<pre><code>| img | list_col1 | list_col2 |
|------|--------------|-----------------------------|
| img1 | [str1] | [[list1], [list2]] |
| img1 | [str2, str3] | [[list3], [list4]] |
| img2 | [str3] | [[list5], [list6], [list7]] |
</code></pre>
<p>I want to group by col <em>img</em> and receive the following result:</p>
<pre><code>| img | list_col1 | list_col2 |
|------|--------------------|--------------------------------------|
| img1 | [str1, str2, str3] | [[list1], [list2], [list3], [list4]] |
| img2 | [str3] | [[list5], [list6], [list7]] |
</code></pre>
<p>I used this code and it is not working:</p>
<pre><code>grouped_df = temp_df.groupby(['img'])[['list_col1', 'list_col2']].apply(list)
</code></pre>
|
<python><dataframe><group-by>
|
2023-04-24 06:33:30
| 4
| 985
|
Yana
|
76,089,076
| 4,586,761
|
DFS recursively searching the values in a list of dictionaries for each key
|
<p>Given the list of dictionaries:</p>
<pre><code>[{"a": ["b", "c", "d"]},
{"b": ["e", "z", "g"]},
{"g": ["c", "f", "z"]},
{"z": ["w", "y", "x"]}]
</code></pre>
<p>it is implied <code>"a"</code> <strong>directly</strong> depends upon <code>"b", "c", and "d"</code>
it is implied <code>"a"</code> <strong>indirectly</strong> depends upon <code>"e", "z", "g", "f", "w", "y", "z"</code> because its direct dependencies have these dependencies (both direct and indirect).</p>
<p>Therefore I am looking for a depth search that can give a list of dictionaries of each key with its respective indirect dependencies (<em>excluding the direct ones</em>) such that the resulting output for the data above would look like:</p>
<pre><code>[{"a": ["e", "z", "g", "f", "w", "y", "z"]},
{"b": ["c", "f", "w", "y", "x"]},
{"g": ["w", "y", "x"]},
{"z": []}]
</code></pre>
|
<python><depth-first-search>
|
2023-04-24 06:22:54
| 2
| 642
|
Maxwell Chandler
|
76,089,043
| 966,365
|
How can I add a progress indicator to numpy.loadtxt?
|
<p>I need to load very large text CSV files into RAM using numpy <a href="https://numpy.org/doc/stable/reference/generated/numpy.loadtxt.html" rel="nofollow noreferrer">loadtxt</a>. Is there a way to add a progress indicator to show that the file is being read? My code looks like this:</p>
<pre><code>import numpy as np
data = np.loadtxt(filename, dtype=np.int64)
</code></pre>
|
<python><numpy>
|
2023-04-24 06:18:16
| 2
| 322
|
tobi delbruck
|
76,089,003
| 12,883,179
|
Geopandas checking whether point is inside polygon
|
<p>I have ocean geopandas which contains 1 multipolygon (source: <a href="https://www.naturalearthdata.com/download/downloads/10m-physical-vectors/" rel="nofollow noreferrer">naturalearthdata.com</a>)</p>
<p>I also have another dataframe that contains at lot of longitude and latitude information</p>
<p>I want to add a new column that will be True if the Point is in the ocean (inside the multipolygon)</p>
<pre><code>zipfile = "ne_10m_ocean/ne_10m_ocean.shp"
ocean_gpd = geopandas.read_file(zipfile)
df = pd.DataFrame({
'lon': [120.0,120.1,120.2,120.3,120.4],
'lat': [10.0,10.1,10.2,10.3,10.4]
})
for index, row in df.iterrows():
df.loc[index,'is_ocean'] = ocean_gpd.contains(Point(x['lon'],x['lat'])
</code></pre>
<p>but it is too slow, I tried to used lambda function like this</p>
<pre><code>df = df.assign(is_ocean = lambda x: ocean_gpd.contains(Point(x['lon'],x['lat']))
</code></pre>
<p>but failed, the error is <code>cannot convert the series to <class 'float'></code></p>
<p>Is anyone know how to do better individual point checking like this in geopandas?</p>
<p>Note: I just realized that for polygon data I used 10m one (more detailed polygon), if I uses 110m it a lot better, but in the future maybe I need to use 10m</p>
|
<python><pandas><geopandas>
|
2023-04-24 06:12:20
| 2
| 492
|
d_frEak
|
76,088,919
| 5,940,776
|
Downgrade setuptools inside tox dependencies
|
<p>I have a gdal dependency in my tests.</p>
<p>I use rocky-linux 8. epel 8 provides gdal 3.0.4, so I must install the same version in python, but this version is incompatible with the latest version of setuptools. (See: <a href="https://stackoverflow.com/questions/69123406/error-building-pygdal-unknown-distribution-option-use-2to3-fixers-and-use-2">Error building pygdal: Unknown distribution option: 'use_2to3_fixers' and 'use_2to3_exclude_fixers'</a>)</p>
<p>Installing gdal 3.0.4 works fine with setuptools 57.5.0. In a virtual environment, from the command line:</p>
<pre class="lang-bash prettyprint-override"><code>python3.9 -m venv ~/venv
source ~/venv/bin/bin/activate
pip install setuptools==57.5.0
pip install gdal==3.0.4
</code></pre>
<p>But it doesn't work when I try to build the environment with tox. I have tried in my <code>tox.ini</code>:</p>
<pre class="lang-ini prettyprint-override"><code>[tox]
env_list = py39
[testenv]
deps =
setuptools==57.5.0
gdal==3.0.4
pytest
commands =
python -m pytest
</code></pre>
<p>How can I build an environment containing gdal with tox on a distribution like redhat 8 ?</p>
<p>My package is a library dealing numpy matrices with pybind11. gdal is used to load data, it's not a dependency of my package.</p>
<p>Running <code>tox -rvv</code>:</p>
<pre><code>py39: 173 I find interpreter for spec PythonSpec(major=3, minor=9) [virtualenv/discovery/builtin.py:56]
py39: 173 I proposed PythonInfo(spec=CPython3.9.16.final.0-64, exe=/opt/python/bin/python3.9, platform=linux, version='3.9.16 (main, Mar 28 2023, 07:59:14) \n[GCC 8.5.0 20210514 (Red Hat 8.5.0-16)]', encoding_fs_io=utf-8-utf-8) [virtualenv/discovery/builtin.py:63]
py39: 173 D accepted PythonInfo(spec=CPython3.9.16.final.0-64, exe=/opt/python/bin/python3.9, platform=linux, version='3.9.16 (main, Mar 28 2023, 07:59:14) \n[GCC 8.5.0 20210514 (Red Hat 8.5.0-16)]', encoding_fs_io=utf-8-utf-8) [virtualenv/discovery/builtin.py:65]
py39: 174 D filesystem is case-sensitive [virtualenv/info.py:24]
py39: 194 I create virtual environment via CPython3Posix(dest=/home/usertest/test/.tox/py39, clear=False, no_vcs_ignore=False, global=False) [virtualenv/run/session.py:48]
py39: 194 D create folder /home/usertest/test/.tox/py39/bin [virtualenv/util/path/_sync.py:9]
py39: 194 D create folder /home/usertest/test/.tox/py39/lib/python3.9/site-packages [virtualenv/util/path/_sync.py:9]
py39: 194 D write /home/usertest/test/.tox/py39/pyvenv.cfg [virtualenv/create/pyenv_cfg.py:30]
py39: 194 D home = /opt/python/bin [virtualenv/create/pyenv_cfg.py:34]
py39: 194 D implementation = CPython [virtualenv/create/pyenv_cfg.py:34]
py39: 194 D version_info = 3.9.16.final.0 [virtualenv/create/pyenv_cfg.py:34]
py39: 194 D virtualenv = 20.21.0 [virtualenv/create/pyenv_cfg.py:34]
py39: 194 D include-system-site-packages = false [virtualenv/create/pyenv_cfg.py:34]
py39: 195 D base-prefix = /opt/python [virtualenv/create/pyenv_cfg.py:34]
py39: 195 D base-exec-prefix = /opt/python [virtualenv/create/pyenv_cfg.py:34]
py39: 195 D base-executable = /opt/python/bin/python3.9 [virtualenv/create/pyenv_cfg.py:34]
py39: 195 D symlink /opt/python/bin/python3.9 to /home/usertest/test/.tox/py39/bin/python [virtualenv/util/path/_sync.py:28]
py39: 195 D create virtualenv import hook file /home/usertest/test/.tox/py39/lib/python3.9/site-packages/_virtualenv.pth [virtualenv/create/via_global_ref/api.py:89]
py39: 195 D create /home/usertest/test/.tox/py39/lib/python3.9/site-packages/_virtualenv.py [virtualenv/create/via_global_ref/api.py:92]
py39: 195 D ============================== target debug ============================== [virtualenv/run/session.py:50]
py39: 196 D debug via /home/usertest/test/.tox/py39/bin/python /opt/python/lib/python3.9/site-packages/virtualenv/create/debug.py [virtualenv/create/creator.py:193]
py39: 195 D {
"sys": {
"executable": "/home/usertest/test/.tox/py39/bin/python",
"_base_executable": "/home/usertest/test/.tox/py39/bin/python",
"prefix": "/home/usertest/test/.tox/py39",
"base_prefix": "/opt/python",
"real_prefix": null,
"exec_prefix": "/home/usertest/test/.tox/py39",
"base_exec_prefix": "/opt/python",
"path": [
"/opt/python/lib/python39.zip",
"/opt/python/lib/python3.9",
"/opt/python/lib/python3.9/lib-dynload",
"/home/usertest/test/.tox/py39/lib/python3.9/site-packages"
],
"meta_path": [
"<class '_virtualenv._Finder'>",
"<class '_frozen_importlib.BuiltinImporter'>",
"<class '_frozen_importlib.FrozenImporter'>",
"<class '_frozen_importlib_external.PathFinder'>"
],
"fs_encoding": "utf-8",
"io_encoding": "utf-8"
},
"version": "3.9.16 (main, Mar 28 2023, 07:59:14) \n[GCC 8.5.0 20210514 (Red Hat 8.5.0-16)]",
"makefile_filename": "/opt/python/lib/python3.9/config-3.9-x86_64-linux-gnu/Makefile",
"os": "<module 'os' from '/opt/python/lib/python3.9/os.py'>",
"site": "<module 'site' from '/opt/python/lib/python3.9/site.py'>",
"datetime": "<module 'datetime' from '/opt/python/lib/python3.9/datetime.py'>",
"math": "<module 'math' from '/opt/python/lib/python3.9/lib-dynload/math.cpython-39-x86_64-linux-gnu.so'>",
"json": "<module 'json' from '/opt/python/lib/python3.9/json/__init__.py'>"
} [virtualenv/run/session.py:51]
py39: 217 I add seed packages via FromAppData(download=False, pip=bundle, setuptools=bundle, wheel=bundle, via=copy, app_data_dir=/home/usertest/.local/share/virtualenv) [virtualenv/run/session.py:55]
py39: 219 D install pip from wheel /opt/python/lib/python3.9/site-packages/virtualenv/seed/wheels/embed/pip-23.0.1-py3-none-any.whl via CopyPipInstall [virtualenv/seed/embed/via_app_data/via_app_data.py:47]
py39: 219 D install wheel from wheel /opt/python/lib/python3.9/site-packages/virtualenv/seed/wheels/embed/wheel-0.38.4-py3-none-any.whl via CopyPipInstall [virtualenv/seed/embed/via_app_data/via_app_data.py:47]
py39: 219 D install setuptools from wheel /opt/python/lib/python3.9/site-packages/virtualenv/seed/wheels/embed/setuptools-67.4.0-py3-none-any.whl via CopyPipInstall [virtualenv/seed/embed/via_app_data/via_app_data.py:47]
py39: 220 D copy directory /home/usertest/.local/share/virtualenv/wheel/3.9/image/1/CopyPipInstall/pip-23.0.1-py3-none-any/pip to /home/usertest/test/.tox/py39/lib/python3.9/site-packages/pip [virtualenv/util/path/_sync.py:36]
py39: 220 D copy directory /home/usertest/.local/share/virtualenv/wheel/3.9/image/1/CopyPipInstall/wheel-0.38.4-py3-none-any/wheel to /home/usertest/test/.tox/py39/lib/python3.9/site-packages/wheel [virtualenv/util/path/_sync.py:36]
py39: 220 D copy /home/usertest/.local/share/virtualenv/wheel/3.9/image/1/CopyPipInstall/setuptools-67.4.0-py3-none-any/distutils-precedence.pth to /home/usertest/test/.tox/py39/lib/python3.9/site-packages/distutils-precedence.pth [virtualenv/util/path/_sync.py:36]
py39: 221 D copy directory /home/usertest/.local/share/virtualenv/wheel/3.9/image/1/CopyPipInstall/setuptools-67.4.0-py3-none-any/_distutils_hack to /home/usertest/test/.tox/py39/lib/python3.9/site-packages/_distutils_hack [virtualenv/util/path/_sync.py:36]
py39: 221 D copy directory /home/usertest/.local/share/virtualenv/wheel/3.9/image/1/CopyPipInstall/setuptools-67.4.0-py3-none-any/pkg_resources to /home/usertest/test/.tox/py39/lib/python3.9/site-packages/pkg_resources [virtualenv/util/path/_sync.py:36]
py39: 225 D copy directory /home/usertest/.local/share/virtualenv/wheel/3.9/image/1/CopyPipInstall/wheel-0.38.4-py3-none-any/wheel-0.38.4.dist-info to /home/usertest/test/.tox/py39/lib/python3.9/site-packages/wheel-0.38.4.dist-info [virtualenv/util/path/_sync.py:36]
py39: 226 D copy /home/usertest/.local/share/virtualenv/wheel/3.9/image/1/CopyPipInstall/wheel-0.38.4-py3-none-any/wheel-0.38.4.virtualenv to /home/usertest/test/.tox/py39/lib/python3.9/site-packages/wheel-0.38.4.virtualenv [virtualenv/util/path/_sync.py:36]
py39: 228 D generated console scripts wheel3.9 wheel3 wheel-3.9 wheel [virtualenv/seed/embed/via_app_data/pip_install/base.py:41]
py39: 231 D copy directory /home/usertest/.local/share/virtualenv/wheel/3.9/image/1/CopyPipInstall/setuptools-67.4.0-py3-none-any/setuptools to /home/usertest/test/.tox/py39/lib/python3.9/site-packages/setuptools [virtualenv/util/path/_sync.py:36]
py39: 257 D copy directory /home/usertest/.local/share/virtualenv/wheel/3.9/image/1/CopyPipInstall/setuptools-67.4.0-py3-none-any/setuptools-67.4.0.dist-info to /home/usertest/test/.tox/py39/lib/python3.9/site-packages/setuptools-67.4.0.dist-info [virtualenv/util/path/_sync.py:36]
py39: 258 D copy /home/usertest/.local/share/virtualenv/wheel/3.9/image/1/CopyPipInstall/setuptools-67.4.0-py3-none-any/setuptools-67.4.0.virtualenv to /home/usertest/test/.tox/py39/lib/python3.9/site-packages/setuptools-67.4.0.virtualenv [virtualenv/util/path/_sync.py:36]
py39: 259 D generated console scripts [virtualenv/seed/embed/via_app_data/pip_install/base.py:41]
py39: 284 D copy directory /home/usertest/.local/share/virtualenv/wheel/3.9/image/1/CopyPipInstall/pip-23.0.1-py3-none-any/pip-23.0.1.dist-info to /home/usertest/test/.tox/py39/lib/python3.9/site-packages/pip-23.0.1.dist-info [virtualenv/util/path/_sync.py:36]
py39: 284 D copy /home/usertest/.local/share/virtualenv/wheel/3.9/image/1/CopyPipInstall/pip-23.0.1-py3-none-any/pip-23.0.1.virtualenv to /home/usertest/test/.tox/py39/lib/python3.9/site-packages/pip-23.0.1.virtualenv [virtualenv/util/path/_sync.py:36]
py39: 285 D generated console scripts pip-3.9 pip pip3.9 pip3 [virtualenv/seed/embed/via_app_data/pip_install/base.py:41]
py39: 285 I add activators for Bash, CShell, Fish, Nushell, PowerShell, Python [virtualenv/run/session.py:61]
py39: 286 D write /home/usertest/test/.tox/py39/pyvenv.cfg [virtualenv/create/pyenv_cfg.py:30]
py39: 286 D home = /opt/python/bin [virtualenv/create/pyenv_cfg.py:34]
py39: 286 D implementation = CPython [virtualenv/create/pyenv_cfg.py:34]
py39: 286 D version_info = 3.9.16.final.0 [virtualenv/create/pyenv_cfg.py:34]
py39: 286 D virtualenv = 20.21.0 [virtualenv/create/pyenv_cfg.py:34]
py39: 286 D include-system-site-packages = false [virtualenv/create/pyenv_cfg.py:34]
py39: 286 D base-prefix = /opt/python [virtualenv/create/pyenv_cfg.py:34]
py39: 286 D base-exec-prefix = /opt/python [virtualenv/create/pyenv_cfg.py:34]
py39: 287 D base-executable = /opt/python/bin/python3.9 [virtualenv/create/pyenv_cfg.py:34]
py39: 289 W install_deps> python -I -m pip install gdal==3.0.4 pytest setuptools==57.5.0 [tox/tox_env/api.py:428]
Looking in indexes: https://pypi.org/simple, http://my-nexus.docker:8081/repository/pypi-synpkgs/simple
Collecting gdal==3.0.4
Using cached GDAL-3.0.4.tar.gz (577 kB)
Preparing metadata (setup.py) ... error
error: subprocess-exited-with-error
Γ python setup.py egg_info did not run successfully.
β exit code: 1
β°β> [6 lines of output]
/home/usertest/test/.tox/py39/lib/python3.9/site-packages/setuptools/_distutils/dist.py:265: UserWarning: Unknown distribution option: 'use_2to3_fixers'
warnings.warn(msg)
/home/usertest/test/.tox/py39/lib/python3.9/site-packages/setuptools/_distutils/dist.py:265: UserWarning: Unknown distribution option: 'use_2to3_exclude_fixers'
warnings.warn(msg)
error in GDAL setup command: use_2to3 is invalid.
WARNING: numpy not available! Array support will not be enabled
[end of output]
note: This error originates from a subprocess, and is likely not a problem with pip.
error: metadata-generation-failed
Γ Encountered error while generating package metadata.
β°β> See above for output.
note: This is an issue with the package mentioned above, not pip.
hint: See above for details.
py39: 1491 C exit 1 (1.20 seconds) /home/usertest/test> python -I -m pip install gdal==3.0.4 pytest setuptools==57.5.0 pid=26032 [tox/execute/api.py:275]
py39: FAIL code 1 (1.32 seconds)
evaluation failed :( (1.41 seconds)
</code></pre>
<p>gdal <code>setup.py</code> can be found <a href="https://github.com/OSGeo/gdal/blob/v3.0.4/gdal/swig/python/setup.py" rel="nofollow noreferrer">here</a>.</p>
|
<python><setuptools><gdal><tox><gdal-python-bindings>
|
2023-04-24 05:56:03
| 1
| 896
|
BalaΓ―tous
|
76,088,868
| 11,258,263
|
Cast from Python object back to struct
|
<p>I have declared a struct for embedded Python via:</p>
<pre class="lang-cpp prettyprint-override"><code>struct Report {
int Count;
};
PYBIND11_EMBEDDED_MODULE(embedded, m) {
pybind11::class_<Report>(m, "Report")
.def(pybind11::init<>())
.def_readwrite("Count", &Report::Count);
}
</code></pre>
<p>I then call Python code which sets values on this struct and returns it:</p>
<pre class="lang-cpp prettyprint-override"><code>py::module_ m = py::module_::import("PythonTest");
// get a handle to function
py::object fn = m.attr("get_count");
py::object result = fn().cast<py::object>();
Report e = result.cast<Report>();
</code></pre>
<p><code>PythonTest.py</code>:</p>
<pre class="lang-py prettyprint-override"><code>import embedded
def get_count():
result = embedded.Result
result.Count = 10
return result
</code></pre>
<p>But the final cast in C++ fails with <code>Unable to cast Python instance to C++ type</code>.</p>
<p>This seems to be successful in cast:</p>
<pre><code>Report r1;
r1.Count = 10;
py::object result = py::cast(r1);
Report r2 = result.cast<Report>();
</code></pre>
<p>How do I cast the <code>Report</code> class I declared for Python back to the original <code>Report</code> C++ struct?</p>
|
<python><c++><pybind11>
|
2023-04-24 05:46:56
| 0
| 470
|
DLT
|
76,088,766
| 3,735,871
|
Spark submit error - cannot load main class from jar - PySpark
|
<p>I'm running the below spark submit command, and got an error that says <code>cannot load main class from jar file:/path/to/dependency.zip</code> I'm struggling to understand why it looks for main class in the zip file, since I supplied the <code>application.py</code>, which has the main class?</p>
<p>What did I miss here? It looks like it's looking for a jar, as if it's considering the application a scala or java app instead of pyspark, in spite of the <code>--py-files</code> I specified. Thanks for your help</p>
<pre><code>spark-submit \
--master yarn \
--deploy-mode cluster \
--pyfiles "/path/to/something.zip","/path/to/dependency.zip" \
application.py "arg1" "arg2"
</code></pre>
|
<python><apache-spark><pyspark><spark-submit>
|
2023-04-24 05:23:42
| 0
| 367
|
user3735871
|
76,088,611
| 9,542,989
|
Fuzzy Matching Optimization in PySpark
|
<p>I am trying to perform some fuzzy matching on some data through PySpark. To accomplish this I am using the <code>fuzzywuzzy</code> package and running it on Databricks.</p>
<p>My dataset is very simple. It is stored in a CSV file and contains two columns: Name1 and Name2. However, I don't just want to compare the two values in the same row, but I want to compare each Name1 to all available Name2 values.</p>
<p>This is what my code looks like,</p>
<pre><code>from pyspark.sql import functions as f
from fuzzywuzzy import fuzz
from pyspark.sql.types import StringType
# create a simple function that performs fuzzy matching on two strings
def match_string(s1, s2):
return fuzz.token_sort_ratio(s1, s2)
# convert the function into a UDF
MatchUDF = f.udf(match_string, StringType())
# separate the two Name columns into individual DataFrames
df1 = raw_df.select('Name1')
df2 = raw_df.select('Name2')
# perform a CROSS JOIN on the two DataFrames
# CAN THIS BE AVOIDED?
df = df1.crossJoin(df2)
# use the UDF from before to calculate a similarity score for each combination
df = df.withColumn("similarity_score", MatchUDF(f.col("Name1"), f.col("Name2")))
</code></pre>
<p>Once I have the similarity scores, I can calculate a rank for each name thereby, get the best match.</p>
<p>What I am worried about is the CROSS JOIN. This exponentially increases the number of data points that I have. Is there anyway that this can be avoided?</p>
<p>I am also open to completely different approaches that will accomplish what I need to do in more optimized manner.</p>
|
<python><pyspark><databricks><fuzzywuzzy><fuzzy-comparison>
|
2023-04-24 04:37:02
| 2
| 2,115
|
Minura Punchihewa
|
76,088,262
| 754,136
|
Reproducibility with multithreading and multiprocessing in Python (how to fix random seed)
|
<p>My code does the following:</p>
<ul>
<li>Starts processes to collect data</li>
<li>Starts processes to test model</li>
<li>One thread takes care of training (read data from collect processes)</li>
<li>One thread takes care of testing (read data from test processes)</li>
<li>Every time the training thread does a step, it waits for the testing to also do one step</li>
<li>Before doing a step, the testing thread waits for a training step</li>
</ul>
<p>I need to have reproducible results, but there is randomness in both the processes and the threads. I naively fix the seeds in each process and thread, but results are always different.</p>
<p>Is it possible to have reproducible results? I know threads are non-deterministic, but I don't launch multiple threads from the same pool: I have 2 pools, each launching only 1 thread.</p>
<p>Below is a simple MWE. I need the output to be always the same every time I run this program.</p>
<p><strong>EDIT</strong></p>
<p>Using the <code>initializer</code> argument in all pools I can have deterministic behavior <em>within</em> threads and processes. However, the order in which processes write the data is random due to multiprocesses non-deterministic behavior. Sometimes one process reads the queue first and writes, sometimes it's another process.</p>
<p>How can I fix it?</p>
<pre class="lang-py prettyprint-override"><code>import logging
import traceback
import torch
from concurrent.futures import ThreadPoolExecutor
from concurrent.futures import ProcessPoolExecutor
from torch import multiprocessing as mp
shandle = logging.StreamHandler()
log = logging.getLogger('rl')
log.propagate = False
log.addHandler(shandle)
log.setLevel(logging.INFO)
def fix_seed(seed):
torch.manual_seed(seed)
torch.cuda.manual_seed(seed)
torch.cuda.manual_seed_all(seed)
torch.manual_seed(seed)
torch.backends.cudnn.benchmark = False
torch.backends.cudnn.deterministic = True
def collect(id, queue, data):
#log.info('Collect %i started ...', id)
while True:
idx = queue.get()
if idx is None:
break
data[idx] = torch.rand(1)
log.info(f'Collector {id} got idx {idx} and sampled {data[idx]}')
queue.task_done()
#log.info('Collect %i completed', id)
def test(id, queue, data):
#log.info('Test %i started ...', id)
while True:
idx = queue.get()
if idx is None:
break
data[idx] = torch.rand(1)
log.info(f'Tester {id} got idx {idx} and sampled {data[idx]}')
queue.task_done()
#log.info('Test %i completed', id)
def run():
steps = 0
num_collect_procs = 3
num_test_procs = 2
max_steps = 10
data_collect = torch.zeros(num_collect_procs).share_memory_()
data_test = torch.zeros(num_test_procs).share_memory_()
ctx = mp.get_context('spawn')
manager = mp.Manager()
collect_queue = manager.JoinableQueue()
test_queue = manager.JoinableQueue()
train_test_queue = manager.JoinableQueue()
collect_pool = ProcessPoolExecutor(
num_collect_procs,
mp_context=ctx,
initializer=fix_seed,
initargs=(1,)
)
test_pool = ProcessPoolExecutor(
num_test_procs,
mp_context=ctx,
initializer=fix_seed,
initargs=(1,)
)
for i in range(num_collect_procs):
future = collect_pool.submit(collect, i, collect_queue, data_collect)
for i in range(num_test_procs):
future = test_pool.submit(test, i, test_queue, data_test)
def run_train():
nonlocal steps
#log.info('Training thread started ...')
while steps < max_steps:
train_test_queue.put(True)
train_test_queue.join()
for idx in range(num_collect_procs):
collect_queue.put(idx)
log.info('Training, %i %f', steps, data_collect.sum() + torch.rand(1))
collect_queue.join()
steps += 1
#log.info('Training ended')
for i in range(num_collect_procs):
collect_queue.put(None)
train_test_queue.put(None)
def run_test():
nonlocal steps
#log.info('Testing thread started ...')
while steps < max_steps:
status = train_test_queue.get()
if status is None:
break
for idx in range(num_test_procs):
test_queue.put(idx)
log.info('Testing, %i %f', steps, data_test.sum() + torch.rand(1))
test_queue.join()
train_test_queue.task_done()
#log.info('Testing ended')
for i in range(num_test_procs):
test_queue.put(None)
training_thread = ThreadPoolExecutor(1, initializer=fix_seed, initargs=(1,))
testing_thread = ThreadPoolExecutor(1, initializer=fix_seed, initargs=(1,))
training_thread.submit(run_train)
testing_thread.submit(run_test)
if __name__ == '__main__':
run()
</code></pre>
|
<python><multithreading><multiprocessing><random-seed><reproducible-research>
|
2023-04-24 02:36:50
| 2
| 5,474
|
Simon
|
76,088,230
| 9,049,108
|
Which version of SageMath was I using? In 2018 that allowed this code to run?
|
<p>At some point the code bellow was working but now I need to replace the map line with something else. I'm wondering what version of SageMath I was using that my code was working. It may have been SageMath for python2 because It also had xrange() in the code. Additionally the new version throws errors on ^ ^ which was fine in the old version but now throws errors when it is compiled into ** **. I'm going to stick with the python 3 version and try to get it working I just need to fix the indexing because it was working fine with the Ubuntu python environment I was working with in 2018 but now on my mac in 2023 the updates have caused a few errors. python 3.10.10 is the version im using now and SageMath version 9.8, Release Date: 2023-02-11. Any help figuring out the old version though to get the original working would be appreciated.</p>
<pre><code>def aes_encrypt(round_key, plain):
R.< x >= GF(2)[]
F = GF(2).extension(x ^ 8 + x ^ 4 + x ^ 3 + x + 1, 'a')
a = F.gen()
plain # '0189fe7623abdc5445cdba3267ef9810'
ar = map(ord, plain)) # .decode('hex')
#ar = list(map(ord, plain)) # .decode('hex')) NEW VERSION FIX
print('printing AR')
print_ar(ar)
k = 0
while k < 10: # each round
round = k*16
print("round_key")
print_key(round_key, k)
print('END of KEY ',type(round_key),round_key[0])
print(type(ar),' TYPE ar ')
print(list(ar))
print("XOR")
for x in range(0, 4):
#print(ar[x]," "," ",round_key[4*x+16*k])
ar[x] = ar[x] ^^round_key[4*x+16*k]
ar[x+4] = ar[x+4] ^^round_key[4*x+1+16*k]
ar[x+8] = ar[x+8] ^^round_key[4*x+2+16*k]
ar[x+12] = ar[x+12] ^^round_key[4*x+3+16*k]
</code></pre>
|
<python><version-control><sage>
|
2023-04-24 02:26:30
| 1
| 576
|
Michael Hearn
|
76,088,055
| 9,580,869
|
Groupby a dataframe on column A and column B and sum column C based on few values of column A
|
<p>For a dataframe which looks like the below</p>
<pre><code>A B C
foo 1 2
foo 3 3
bar 3 4
bar 3 4
else 4 5
else 2 1
</code></pre>
<p>We need to groupby the dataframe based on column A and sum values in column B only if the values in column A = 'foo' or 'bar', but sum all the values of column c.</p>
<p>So the result should look like:</p>
<pre><code>A B C
foo 4 5
bar 6 8
else 6
</code></pre>
<p>The code I am using the moment is:</p>
<pre class="lang-py prettyprint-override"><code>def sum_hours(df, values):
return df.loc[df['A'].isin(values), 'B'].sum()
grouped = df.groupby(['A'], as_index= False).agg({'A': 'first', 'B': lambda x: sum_hours(x,['foo', 'bar'], 'C':'sum')
</code></pre>
<p>This is not giving the correct output and also taking a very long time.</p>
<p>Please suggest</p>
|
<python><pandas><dataframe><group-by><sum>
|
2023-04-24 01:26:27
| 2
| 1,212
|
zsh_18
|
76,087,984
| 16,895,246
|
why is re.findall regex matching only one group?
|
<p>I am attempting to build a regex to parse some HTTP requests e.g.</p>
<pre><code>146.204.224.152 - feest6811 [21/Jun/2019:15:45:24 -0700] "POST /incentivize HTTP/1.1" 302 4622
197.109.77.178 - kertzmann3129 [21/Jun/2019:15:45:25 -0700] "DELETE /virtual/solutions/target/web+services HTTP/2.0" 203 26554
156.127.178.177 - okuneva5222 [21/Jun/2019:15:45:27 -0700] "DELETE /interactive/transparent/niches/revolutionize HTTP/1.1" 416 14701
100.32.205.59 - ortiz8891 [21/Jun/2019:15:45:28 -0700] "PATCH /architectures HTTP/1.0" 204 6048
168.95.156.240 - stark2413 [21/Jun/2019:15:45:31 -0700] "GET /engage HTTP/2.0" 201 9645
71.172.239.195 - dooley1853 [21/Jun/2019:15:45:32 -0700] "PUT /cutting-edge HTTP/2.0" 406 24498
180.95.121.94 - mohr6893 [21/Jun/2019:15:45:34 -0700] "PATCH /extensible/reinvent HTTP/1.1" 201 27330
144.23.247.108 - auer7552 [21/Jun/2019:15:45:35 -0700] "POST /extensible/infrastructures/one-to-one/enterprise HTTP/1.1" 100 22921
2.179.103.97 - lind8584 [21/Jun/2019:15:45:36 -0700] "POST /grow/front-end/e-commerce/robust HTTP/2.0" 304 14641
241.114.184.133 - tromp8355 [21/Jun/2019:15:45:37 -0700] "GET /redefine/orchestrate HTTP/1.0" 204 29059
224.188.38.4 - keebler1423 [21/Jun/2019:15:45:40 -0700] "PUT /orchestrate/out-of-the-box/unleash/syndicate HTTP/1.1" 404 28211
94.11.36.112 - klein8508 [21/Jun/2019:15:45:41 -0700] "POST /enhance/solutions/bricks-and-clicks HTTP/1.1" 404 24768
126.196.238.197 - gusikowski9864 [21/Jun/2019:15:45:45 -0700] "DELETE /rich/reinvent HTTP/2.0" 405 7894
103.247.168.212 - medhurst2732 [21/Jun/2019:15:45:49 -0700] "HEAD /scale/global/leverage HTTP/1.0" 203 15844
57.86.153.68 - dubuque8645 [21/Jun/2019:15:45:50 -0700] "POST /innovative/roi/robust/systems HTTP/1.1" 406 29046
231.220.8.214 - luettgen1860 [21/Jun/2019:15:45:52 -0700] "HEAD /systems/sexy HTTP/1.1" 201 2578
219.133.7.154 - price5585 [21/Jun/2019:15:45:53 -0700] "GET /incubate/incubate HTTP/1.1" 201 12126
159.252.184.44 - fay7852 [21/Jun/2019:15:45:54 -0700] "GET /convergence HTTP/2.0" 404 23856
40.167.172.66 - kshlerin3090 [21/Jun/2019:15:45:57 -0700] "HEAD /convergence HTTP/2.0" 501 16287
167.153.239.72 - schaden8853 [21/Jun/2019:15:45:58 -0700] "DELETE /bandwidth/reintermediate/engage HTTP/2.0" 302 17774
</code></pre>
<p>my current regex is:</p>
<pre class="lang-py prettyprint-override"><code> return re.findall(
r"\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3}\s*\-" # match host name and following '-'
r"\s*[\-\w]+\s" # match username or '-' (missing username)
r"\[\d{1,2}/[A-Z][a-z]{2}/\d{4}:\d{1,2}:\d{1,2}:\d{1,2}\s-\d{4}\]\s" # match request time
r"\"(CONNECT|DELETE|GET|HEAD|OPTIONS|PATCH|POST|PUT|TRACE)\s*" # match HTTP request type
r"/[A-Za-z0-9\-\:/\?#\[\]\@\!\$\'\(\)\*\+\,\;\%\=]*\s*" # match url
r"HTTP/\d\.\d" # match http version
, log_data
)
</code></pre>
<p>if you would like to stick it into something like regex 101 here is the same regex in one string:</p>
<pre><code>\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3}\s*\-\s*[\-\w]+\s\[\d{1,2}/[A-Z][a-z]{2}/\d{4}:\d{1,2}:\d{1,2}:\d{1,2}\s-\d{4}\]\s\"(CONNECT|DELETE|GET|HEAD|OPTIONS|PATCH|POST|PUT|TRACE)\s*/[A-Za-z0-9\-\:/\?#\[\]\@\!\$\'\(\)\*\+\,\;\%\=]*\s*HTTP/\d\.\d
</code></pre>
<p>Based on the the fact I have a 979 line log and am getting 979 matches, as well as tests of smaller sections of the regex I'm confident I'm matching correctly, however, the regex only returns the HTTP request type e.g. 'HEAD' 'GET' or 'DELETE' whereas I'm wanting the host, username, time and full request e.g.</p>
<pre><code>146.204.224.152 - feest6811 [21/Jun/2019:15:45:24 -0700] "POST /incentivize HTTP/1.1
</code></pre>
<p>could you please let me know how to modify the regex to do this,
Thanks,</p>
|
<python><regex>
|
2023-04-24 01:01:36
| 0
| 1,441
|
Pioneer_11
|
76,087,822
| 111,808
|
How can I use Tkinter with Python 3.11.3 that I built from source?
|
<p>How can I use <code>Tkinter</code> with Python 3.11.3 that I built from source?</p>
<p>I have another version of Python (3.10.6) that came with my distribution (Pop!_OS) and on that version I can successfully import tkinter.</p>
<p>But when I run Python 3.11.3 that I build from source I get the following messages:</p>
<pre><code>>>> import tkinter
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/opt/python/3.11.3/lib/python3.11/tkinter/__init__.py", line 38, in <module>
import _tkinter # If this fails your Python may not be configured for Tk
</code></pre>
<p>I used the following script builder to build Python 3.11.3 from source.</p>
<p><a href="https://www.build-python-from-source.com/" rel="nofollow noreferrer">https://www.build-python-from-source.com/</a></p>
<p>If you have found a way to install <code>Tkinter</code> so that it can work with the version of Python 3.11.3 that you built from source, please reply with the details.</p>
<p>I have done a lot of searching to find a way to do this, but so far have not been successful.</p>
<p>Obviously <code>Tkinter</code> must be on my system, but, for some reason Python 3.11.3 can not "see" it.</p>
|
<python><python-3.x><tkinter>
|
2023-04-23 23:59:42
| 1
| 3,595
|
Richard Fuhr
|
76,087,725
| 2,658,898
|
Calling Super when Dynamically Create Types using Multiclass Inheritance in Python
|
<p>How does one dynamically create a <code>type</code> that inherits from multiple base classes via cooperative inheritance? I have tried a couple options, each of which have had their own issues. My current code works like so:</p>
<pre><code> cls_map = {"SomeClass" : SomeClass, "OtherClass" : OtherClass}
def get_type(classes: list[str]):
def constructor(self, *args, **kwargs):
pass
types: list[type] = [cls_map[c] for c in classes]
return type("MutableEntity", tuple(types), {
"__init__": constructor
})
# Get type, instantiate object
dynamic_cls: type = get_type(['SomeClass', 'OtherClass'])
dynamic_instance = dynamic_cls(...)
</code></pre>
<p>When I execute</p>
<pre><code>assert isinstance(dynamic_instance, SomeClass)
</code></pre>
<p>It returns <code>True</code> (which is great!).</p>
<p>The problem I'm running in to is that I don't know how to make <code>constructor</code> correctly call <code>super().__init__(*args, **kwargs)</code>.</p>
<p>I tried:</p>
<pre><code> def constructor(self, *args, **kwargs):
pass
</code></pre>
<p>But got the following error:</p>
<p><code>E TypeError: super(type, obj): obj must be an instance or subtype of type</code></p>
<p>My theory is that this is because the argument <code>self</code> is <code>None</code> for some reason. Any suggestions how to resolve this? My subclasses are designing to function using cooperative inheritance, so being able to call <code>super()</code> is important.</p>
<p>Full Code:</p>
<pre><code>class SomeClass:
def __init__(self, *args, **kwargs):
super(*args, **kwargs)
self.some_class = True
class OtherClass:
def __init__(self, *args, **kwargs):
super(*args, **kwargs)
self.other_class = True
cls_map = {"SomeClass": SomeClass, "OtherClass": OtherClass}
def get_type(classes: list[str]):
def constructor(self, *args, **kwargs):
super().__init__(*args, **kwargs)
types: list[type] = [cls_map[c] for c in classes]
return type("MutableEntity", tuple(types), {
"__init__": constructor
})
# Get type, instantiate object
dynamic_cls: type = get_type(['SomeClass', 'OtherClass'])
dynamic_instance = dynamic_cls()
assert hasattr(dynamic_instance, 'some_class')
assert hasattr(dynamic_instance, 'other_class')
</code></pre>
|
<python><python-3.x><dynamic><multiple-inheritance>
|
2023-04-23 23:22:53
| 1
| 492
|
CCD
|
76,087,576
| 2,169,327
|
Polars - invalid column type integer when reading sqlite database
|
<p>I tried to use Polars to read data from my SQLite database:</p>
<pre><code>conn = 'sqlite://'+pathToDB
querystring = "SELECT * FROM table1"
msgt1SAT = pl.read_database(querystring, conn)
</code></pre>
<p>However I got an error:</p>
<pre><code>RuntimeError: Invalid column type Integer at index: 1, name: unixtime
</code></pre>
<p>Yes - Polars are absolutely correct that I have an column named Unixtime that is of integer type. (<strong>Actually this is wrong, it is saved as timestamp in the sqlite database. Is this whats making the query fail?</strong>)</p>
<p>Edit:
I've recreated a sample of the database setting the datatype as integer instead of timestamp - now I get <code>Invalid column type TEXT at index:0</code> (different column)). If Polars/connectorx can't handle any of sqlite's datatypes, how is this even released?</p>
|
<python><sqlite><python-polars>
|
2023-04-23 22:27:49
| 0
| 2,348
|
bjornasm
|
76,087,259
| 19,467,973
|
How to do data checks in the fastapi endpoint correctly
|
<p>I am writing a small api for creating different objects. And I wanted to carry out a routine check that can be found on every website with registration. But I came across such an unpleasant use of the if operator.</p>
<pre><code>class AuthenticateController(RegisterCRUD, Controller):
prefix = "/api/auth"
tags = ["auth"]
refresh_token_ttl = 30*60
access_token_ttl = 15*60
@post("/register")
async def register(user: UserIn):
if not (user.name and user.secondname and user.lastname and user.phone and user.login):
raise HTTPException(status_code=400, detail="Please fill in all required fields.")
if not RegisterCRUD.phone_validator(user.phone):
raise HTTPException(status_code=400, detail="Please enter the correct phone number.")
if RegisterCRUD.check_phone(user.phone):
raise HTTPException(status_code=400, detail="This phone number is already registered.")
if RegisterCRUD.check_login(user.login):
raise HTTPException(status_code=400, detail="This login is already occupied.")
if RegisterCRUD.register(user.name, user.secondname, user.lastname, user.phone,
user.login, HashedData.hash_password(user.password)):
return JSONResponse(status_code=201, content={"message": "The user has been successfully created."})
raise HTTPException(status_code=501, detail="Failed to create a user.")
</code></pre>
<p>But in general, my login and phone db models have the value unique=True. So trying to create the same values causes an error. But I haven't figured out how to link the return of the desired message to it.</p>
<p>I would like to clarify whether it is correct to carry out verification in this way? Or there is a more different way.</p>
<p>In general, I am waiting for criticism and advice! Thank you all in advance!</p>
|
<python><http><fastapi>
|
2023-04-23 21:01:53
| 0
| 301
|
Genry
|
76,087,171
| 6,830,361
|
How to extract line with specific word from text file
|
<p>I need to extract all lines from a text file that contain the word true.</p>
<p>Example line in text file looks like this:</p>
<pre class="lang-none prettyprint-override"><code>fficialprobl3mzombie@gmail.com | Full Name = Richard Shafer | Points Saved = false | Points = 0 | Member = True
fficialprobl3mzombie@gmail.com | Full Name = Richard Shafer | Points Saved = false | Points = 0 | Member = False
</code></pre>
<p>I need to extract lines where it says "member true" and remove lines where it says "member = false", how can I achieve this? The code below doesn't work for some reason, nothing changes in the text file.</p>
<pre><code>filepath = "cart.txt"
with open(filepath) as outfile:
lines = outfile.read().splitlines()
with open(filepath, "w") as f:
for line in lines:
if "True" in line:
print(line, file=outfile)
break
</code></pre>
|
<python>
|
2023-04-23 20:40:40
| 1
| 419
|
Leo Bogod
|
76,087,059
| 10,452,700
|
What is the best practice to chain DL model into sklearn Pipeline() stages and still access hyperparameters e.g, batch_size \ epochs in pipeline?
|
<p>I want to experiment DL regression model over time-series data by implementing the model using <a href="/questions/tagged/sklearn" class="post-tag" title="show questions tagged 'sklearn'" aria-label="show questions tagged 'sklearn'" rel="tag" aria-labelledby="tag-sklearn-tooltip-container">sklearn</a> <a href="https://scikit-learn.org/stable/modules/generated/sklearn.pipeline.Pipeline.html#sklearn.pipeline.Pipeline" rel="nofollow noreferrer"><code>pipeline()</code></a> properly. I formed the following DL model in the form of the <code>class WaveNet</code> and would like to call it within the pipeline accordingly.</p>
<pre class="lang-py prettyprint-override"><code>#Model definition
import keras.backend as K
from keras.models import Model
from keras.layers import Input, Conv1D, Activation, Add, Multiply,Lambda, Convolution1D, Dense, Dropout
from keras.initializers import TruncatedNormal
class WaveNet:
def __init__(self, timesteps, dilation_depth=9, n_filters=32):
self.timesteps = timesteps
self.dilation_depth = dilation_depth
self.n_filters = n_filters
self.model = self._build_model()
def _build_model(self):
# Define the input layer
input_layer = Input(shape=(self.timesteps, 1))
# Define the residual blocks
skip_connections = []
x = input_layer
for i in range(self.dilation_depth):
# Define the dilation rate
dilation_rate = 2 ** i
# Define the residual block
tanh_out = Convolution1D(self.n_filters, 3, activation='tanh', padding='causal', dilation_rate=dilation_rate)(x)
sigm_out = Convolution1D(self.n_filters, 3, activation='sigmoid', padding='causal', dilation_rate=dilation_rate)(x)
x = Multiply()([tanh_out, sigm_out])
skip_connections.append(x)
# Define the skip connection layer
summed = Add()(skip_connections)
out = Activation('relu')(summed)
# Define the output layers
out = Convolution1D(1, 1, activation='linear', padding='same')(out)
out = Lambda(lambda x: x[:, -1, :])(out)
out = Dense(1, kernel_initializer=TruncatedNormal(stddev=0.01))(out)
# Define the model and compile it
model = Model(input_layer, out)
model.compile(optimizer='adam', loss='mse')
return model
def summary(self):
self.model.summary()
#build an end-to-end pipeline, and supply the data into a regression model and train within the pipeline.
#Train and fit the WaveNet model into the pipeline chain
from sklearn.preprocessing import MinMaxScaler
from sklearn.pipeline import Pipeline, make_pipeline
WaveNetRegressor = WaveNet(timesteps=5, dilation_depth=9, n_filters=32)
WNet_pipeline = Pipeline(steps=[('scaler', MinMaxScaler()),('WNet', WaveNetRegressor())]).fit(X_train, y_train) #X, y
</code></pre>
<p>And get the following error:</p>
<pre><code>---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-26-9c90503b7a90> in <cell line: 94>()
92
93 WaveNetRegressor = WaveNet(timesteps=5, dilation_depth=9, n_filters=32)
---> 94 WNet_pipeline = Pipeline([('scaler', MinMaxScaler()),('WNet', WaveNetRegressor())]).fit(X_train, Y_train) #X, y
95
96 #Displaying a Pipeline with a Preprocessing Step and Regression
TypeError: 'WaveNet' object is not callable
</code></pre>
<p>I checked related posts, and they offer <strong>wrapper</strong> for non-<a href="/questions/tagged/sklearn" class="post-tag" title="show questions tagged 'sklearn'" aria-label="show questions tagged 'sklearn'" rel="tag" aria-labelledby="tag-sklearn-tooltip-container">sklearn</a> models to integrate into <a href="/questions/tagged/sklearn" class="post-tag" title="show questions tagged 'sklearn'" aria-label="show questions tagged 'sklearn'" rel="tag" aria-labelledby="tag-sklearn-tooltip-container">sklearn</a> pipeline, but I couldn't figure out to implement it so far. I also couldn't figure out how to access important hyperparameters e.g, <code>batch_size</code> or <code>epochs</code> within <strong>pipeline</strong> during training (the model itself has some hyper-parameters as well):</p>
<p>Wrapper for <a href="/questions/tagged/sklearn" class="post-tag" title="show questions tagged 'sklearn'" aria-label="show questions tagged 'sklearn'" rel="tag" aria-labelledby="tag-sklearn-tooltip-container">sklearn</a> pipeline:</p>
<ul>
<li><a href="https://stackoverflow.com/a/47520976/10452700">https://stackoverflow.com/a/47520976/10452700</a></li>
</ul>
<p>here they used <em>model function</em> <code>def model</code>, not <code>class model</code> by using <code>keras.wrappers.scikit_learn</code> and no info if we need to manipulate and set hyper-parameters. Maybe there is an elegant way to use the class and access hyper-parameters too:</p>
<pre class="lang-py prettyprint-override"><code># wrap the model using the function you created
from keras.wrappers.scikit_learn import KerasRegressor, KerasClassifier
reg = KerasRegressor(build_fn=reg_model, verbose=0)
# just create the pipeline
pipeline = Pipeline(steps=[('reg',reg)]).fit(X_train, y_train)
</code></pre>
<ul>
<li><a href="https://stackoverflow.com/a/69171762/10452700">https://stackoverflow.com/a/69171762/10452700</a></li>
</ul>
<p>Access the learning hyperparameters within the pipeline:</p>
<ul>
<li><a href="https://stackoverflow.com/a/75263781/10452700">https://stackoverflow.com/a/75263781/10452700</a></li>
</ul>
<p>Here they used <a href="https://scikit-learn.org/stable/modules/generated/sklearn.pipeline.make_pipeline.html#sklearn.pipeline.make_pipeline" rel="nofollow noreferrer"><code>make_pipeline</code></a> not <a href="https://scikit-learn.org/stable/modules/generated/sklearn.pipeline.Pipeline.html#sklearn.pipeline.Pipeline" rel="nofollow noreferrer"><code>Pipeline()</code></a></p>
<ul>
<li><a href="https://stackoverflow.com/a/70335732/10452700">https://stackoverflow.com/a/70335732/10452700</a></li>
</ul>
<hr />
<p>Edit: <code>WaveNetRegressor</code> instead of <code>WaveNetRegressor()</code></p>
<pre class="lang-py prettyprint-override"><code>....
WNet_pipeline = Pipeline([('scaler', MinMaxScaler()),('WNet', WaveNetRegressor)]).fit(X_train, Y_train) #X, y
...
</code></pre>
<p>get as follow:</p>
<pre><code>---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-2-077fd9956e62> in <cell line: 94>()
92
93 WaveNetRegressor = WaveNet(timesteps=5, dilation_depth=9, n_filters=32)
---> 94 WNet_pipeline = Pipeline([('scaler', MinMaxScaler()),('WNet', WaveNetRegressor)]).fit(X_train, Y_train) #X, y
95
96 #Displaying a Pipeline with a Preprocessing Step and Regression
2 frames
/usr/local/lib/python3.9/dist-packages/sklearn/pipeline.py in fit(self, X, y, **fit_params)
399 """
400 fit_params_steps = self._check_fit_params(**fit_params)
--> 401 Xt = self._fit(X, y, **fit_params_steps)
402 with _print_elapsed_time("Pipeline", self._log_message(len(self.steps) - 1)):
403 if self._final_estimator != "passthrough":
/usr/local/lib/python3.9/dist-packages/sklearn/pipeline.py in _fit(self, X, y, **fit_params_steps)
337 # shallow copy of steps - this should really be steps_
338 self.steps = list(self.steps)
--> 339 self._validate_steps()
340 # Setup the memory
341 memory = check_memory(self.memory)
/usr/local/lib/python3.9/dist-packages/sklearn/pipeline.py in _validate_steps(self)
241 and not hasattr(estimator, "fit")
242 ):
--> 243 raise TypeError(
244 "Last step of Pipeline should implement fit "
245 "or be the string 'passthrough'. "
TypeError: Last step of Pipeline should implement fit or be the string 'passthrough'. '<__main__.WaveNet object at 0x7f458634ea90>' (type <class '__main__.WaveNet'>) doesn't
</code></pre>
|
<python><machine-learning><scikit-learn><pipeline><hyperparameters>
|
2023-04-23 20:14:18
| 1
| 2,056
|
Mario
|
76,086,914
| 2,236,231
|
Receiving multiline string from user input in a python telegram bot
|
<p>I am using python-telegram-bot. This is my Python handler for the CommandHandler:</p>
<pre><code>async def bulk_q(update: Update, context: ContextTypes.DEFAULT_TYPE) -> None:
questions_user = ' '.join(context.args[:])
</code></pre>
<p>It is intended to allow the user to send questions in bulk, one question at a time, and I'm especially interested on catching the <code>\n</code> so that I can do a for-loop over each line of input.</p>
<p>However, if I log the content of <code>context.args</code>, there is no <code>\n</code> anywhere.</p>
<p>Would it be possible to extract each line of text I receive from the user independently?</p>
|
<python><python-telegram-bot>
|
2023-04-23 19:44:24
| 2
| 1,099
|
Geiser
|
76,086,748
| 11,546,773
|
How to combine columns in dask horizontally?
|
<p>I'm trying to combine multiple columns into 1 column with python Dask. However I don't seem to find an (elegant) way to combine columns into a list.</p>
<p>I only need to combine column "b - e" into 1 column. Column "a" needs to stay exactly as it is now. I've tried using apply to achieve this. Despite the fact that this isn't working it's also slow and must likely not the best way to do this.</p>
<p>Polars has this command <code>concat_list</code> which works beautifully using: <code>df.select('a', value=pl.concat_list(pl.exclude('a')))</code>. What is the Dask alternative for this?</p>
<p>Anyone who can help me out with how I can achieve this result?</p>
<p><strong>The dataframe I have:</strong></p>
<pre><code>βββββββ¬ββββββ¬ββββββ¬ββββββ¬ββββββ
β a β b β c β d β e β
βββββββͺββββββͺββββββͺββββββͺββββββ‘
β 0.1 β 1.1 β 2.1 β 3.1 β 4.1 β
β 0.2 β 1.2 β 2.2 β 3.2 β 4.2 β
β 0.3 β 1.3 β 2.3 β 3.3 β 4.3 β
βββββββ΄ββββββ΄ββββββ΄ββββββ΄ββββββ
</code></pre>
<p><strong>The result I need:</strong></p>
<pre><code>βββββββ¬ββββββββββββββββββββ
β a β value β
βββββββͺββββββββββββββββββββ‘
β 0.1 β [1.1, β¦ 4.1] β
β 0.2 β [1.2, β¦ 4.2] β
β 0.3 β [1.3, β¦ 4.3] β
β 0.4 β [1.4, β¦ 4.4] β
βββββββ΄ββββββββββββββββββββ
</code></pre>
<p><strong>Example code of the dataframe:</strong></p>
<pre><code>import dask.dataframe as dd
df = dd.DataFrame.from_dict({
"a": [0.1, 0.2, 0.3],
"b": [1.1, 1.2, 1.3],
"c": [2.1, 2.2, 2.3],
"d": [3.1, 3.2, 3.3],
"e": [4.1, 4.2, 4.3],
}, npartitions=1)
df = df.assign(test=[df["b"], df["c"], df["d"], df["e"]])
print('\nDask\n',df.compute())
</code></pre>
|
<python><pandas><dataframe><dask>
|
2023-04-23 19:08:33
| 1
| 388
|
Sam
|
76,086,714
| 16,510,888
|
Can I set a variable with the result of "match"?
|
<p>Setting a variable with a <code>match</code> can be done simply like this:</p>
<pre class="lang-py prettyprint-override"><code>Mode = 'fast'
puts = ''
match Mode:
case "slow":
puts = 'v2 k5'
case "balanced":
puts = 'v3 k5'
case "fast":
puts = 'v3 k7'
</code></pre>
<p>But can you do something like this?</p>
<pre class="lang-py prettyprint-override"><code>Mode = 'fast'
puts = match Mode:
case "slow": 'v2 k5'
case "balanced": 'v3 k5'
case "fast": 'v3 k7'
</code></pre>
<p>Currently it results in a syntax error.</p>
|
<python>
|
2023-04-23 19:01:55
| 3
| 364
|
baronsec
|
76,086,623
| 1,595,350
|
Query a JSON object with Python based on two or more filters on different levels
|
<p>I am a bit lost when trying to query a json object which looks like the following. I want to query by the sub key <code>"type": "header1"</code> and the sub sub key <code>"type": "simpletext"</code>. And i want to receive all results where i can loop through.</p>
<p>This is the initial raw json:</p>
<pre><code>{
"obj": "typed",
"results": [
{
"obj": "blocktext",
"type": "header1",
"header1": {
"rich_text": [
{
"type": "simpletxt",
"txt": "About me"
}
]
}
},
{
"obj": "blocktext",
"type": "header1",
"header1": {
"rich_text": [
{
"type": "simpletxt",
"txt": "Imprint"
}
]
}
},
{
"obj": "blocktext",
"type": "header1",
"header1": {
"rich_text": [
{
"type": "simpletxt",
"txt": "About me"
}
]
}
},
{
"obj": "blocktext",
"type": "code",
"code": {
"rich_text": [
{
"type": "javascript",
"txt": "ABCDEFG"
}
]
}
}
]
}
</code></pre>
<p>I am trying to do this with Python in Google Colabs but i have absolutely no clue how two query a json object by two or more parameters and on different levels.</p>
|
<python><json>
|
2023-04-23 18:41:20
| 1
| 4,326
|
STORM
|
76,086,317
| 6,431,715
|
Num.to_bytes - OverflowError: int too big to convert
|
<p>In order to convert <strong>-10947726235</strong> into byte array I ran:</p>
<pre><code>Num = -10947726235
ByteArray = Num.to_bytes(4, byteorder='little', signed=True)
</code></pre>
<p>I got:</p>
<pre class="lang-none prettyprint-override"><code>OverflowError: int too big to convert
</code></pre>
<p>Can you please advise?</p>
|
<python><byte>
|
2023-04-23 17:39:00
| 1
| 635
|
Zvi Vered
|
76,085,977
| 1,421,907
|
Why the efficiency of numpy is decreasing fastly?
|
<p>I have a question about efficiency in numpy when increasing the number of elements in a matrix/vector operation.</p>
<p>If you look at the example below, for a number of elements in the array of <code>10_000</code> the time length of each step is approximatively the same. Operations, 31, 32 and 33 took about 450ms. Moreover, on the last line, 36, the time length is approximatively two times the length of each individual operations which seems reasonable.</p>
<pre class="lang-py prettyprint-override"><code>In [1]: import numpy as np
In [29]: m = np.random.randn(10_000)
In [30]: shift = 1.0098765
In [31]: %timeit m - m.reshape(len(m), 1)
407 ms Β± 5.75 ms per loop (mean Β± std. dev. of 7 runs, 1 loop each)
In [32]: matrix = m - m.reshape(len(m), 1)
In [33]: %timeit matrix - shift
443 ms Β± 6.45 ms per loop (mean Β± std. dev. of 7 runs, 1 loop each)
In [34]: matrix_s = matrix - shift
In [35]: %timeit np.abs(matrix_s)
455 ms Β± 3.77 ms per loop (mean Β± std. dev. of 7 runs, 1 loop each)
In [36]: %timeit np.abs(matrix - shift)
936 ms Β± 89.3 ms per loop (mean Β± std. dev. of 7 runs, 1 loop each)
</code></pre>
<p>In the following I increased the number of elements to <code>20_000</code>.</p>
<p>I took again a look at the time length but I do not understand the results because contrary at the case with <code>10_000</code> elements, they do not look like consistent (for me). Operation 39 is very fast compare to the following ones (10 times faster) while the length was the same previously. Moreover, operation 44 is two times longer than the sum of operation 41 and 43 where I decomposed the same calculation.</p>
<pre class="lang-py prettyprint-override"><code>In [37]: m = np.random.randn(20_000)
In [39]: %timeit m - m.reshape(len(m), 1)
1.53 s Β± 47 ms per loop (mean Β± std. dev. of 7 runs, 1 loop each)
In [40]: matrix = m - m.reshape(len(m), 1)
In [41]: %timeit matrix - shift
18.5 s Β± 5.59 s per loop (mean Β± std. dev. of 7 runs, 1 loop each)
In [42]: matrix_s = matrix - shift
In [43]: %timeit np.abs(matrix_s)
27.3 s Β± 6.65 s per loop (mean Β± std. dev. of 7 runs, 1 loop each)
In [44]: %timeit np.abs(matrix - shift)
1min 15s Β± 4.29 s per loop (mean Β± std. dev. of 7 runs, 1 loop each)
</code></pre>
<p>The size of the matrix is of 10^8 elements, it starts to be large but not so much I think. If I well compute it, for a size of <code>10_000</code> I have a matrix array of 763Mb, while for <code>20_000</code> it is 3Gb. Is it a swapping issue ?</p>
|
<python><numpy>
|
2023-04-23 16:22:45
| 1
| 9,870
|
Ger
|
76,085,850
| 3,593,301
|
Uploading a Text File to Azure Blob with Python - Local Variable in Byte Format Error
|
<p>I'm currently working on a project where I need to upload a local text file to an Azure Blob using Python. I wrote the following code to accomplish this:</p>
<pre><code>def upload_blob(self, file_name, local_file_path):
file_name = 'upload_test.txt'
blob_service_client = BlobServiceClient.from_connection_string(self.storage_connection_str,timeout=120)
container_client = blob_service_client.get_container_client(self.storage_container_name)
# Define the directory path and filename
directory_name = self.blob_prefix
# Create a BlobClient object for the file
blob_client = container_client.get_blob_client(directory_name + "/" + file_name)
# Upload the file to the specified directory
with open(local_file_path, "rb") as data:
blob_client.upload_blob(data)
</code></pre>
<p>However, I've been running into an error where the code works fine for local variables in byte format, but encounters problems when trying to upload a local text file.</p>
<p>I'm fairly new to Python and Azure Blob, so I'm not quite sure how to fix this issue. Has anyone else encountered a similar problem and found a solution? Any help or advice would be greatly appreciated!</p>
<p>Thank you in advance.</p>
|
<python><azure><file-upload><azure-storage>
|
2023-04-23 15:55:48
| 1
| 492
|
Shadiqur
|
76,085,734
| 12,226,377
|
Regex not retuning the expected results
|
<p>I have a problem where I am using some regular expression patterns to identify key themes in my "Feedback" column.</p>
<p>I am using the following code:</p>
<pre><code># creating regex patterns
pattern1 = re.compile(r'respond query|email|fast response|email|response time|responses email slow|long time|longer response|respond quickly email')
pattern2 = re.compile(r'visibility honesty|honesty|transparency
timescale|transparency|transparent')
# Assigning what should be the theme
patterns = {
re.compile(pattern1) : 'Communication',
re.compile(pattern2) : 'Transparency'
}
#create a function to use the patterns as input in the next step
def match_pattern(text):
for pattern, result in patterns.items():
if pattern.search(text):
return result
return ''
#Applying the function to my column
df['Feedback_Theme'] = df['Feedback'].apply(match_pattern)
</code></pre>
<p>Now the problem is that for my test feedback that says:</p>
<p>"Transparency is required in data but emails are not received on time",</p>
<p>the Feedback_Theme returned is "Transparency" whereas I was anticipating that the theme would be "Communication".
Now I understand that the "Communication" is returned due to "email" getting encountered first from the pattern1 regex but I want "Transparency" to be returned as the theme.</p>
<p>Is there a method that takes into account the priority order of a given word/phrase that when matches the regular expression should only return that and not get overridden by another regex?</p>
<p>How can I resolve this?</p>
|
<python><pandas>
|
2023-04-23 15:31:34
| 2
| 807
|
Django0602
|
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.