QuestionId int64 74.8M 79.8M | UserId int64 56 29.4M | QuestionTitle stringlengths 15 150 | QuestionBody stringlengths 40 40.3k | Tags stringlengths 8 101 | CreationDate stringdate 2022-12-10 09:42:47 2025-11-01 19:08:18 | AnswerCount int64 0 44 | UserExpertiseLevel int64 301 888k | UserDisplayName stringlengths 3 30 β |
|---|---|---|---|---|---|---|---|---|
76,875,171 | 726,730 | python print current line number of running python file | <p>Is there an easy way to print the current line number of a running Python file?</p>
<pre class="lang-py prettyprint-override"><code>a = 1
print(__line__)
b = 2
c = a+b
print(__line__)
</code></pre>
<p>Desired output:</p>
<pre class="lang-py prettyprint-override"><code>2
5
</code></pre>
| <python><line-numbers> | 2023-08-10 11:11:16 | 0 | 2,427 | Chris P |
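A possible answer sketch for the record above (my addition, not from the post): Python has no <code>__line__</code> constant, but the caller's line number can be read from the interpreter frame via the standard-library <code>inspect</code> module.

```python
import inspect

def line_number() -> int:
    """Return the line number of the line that called this function."""
    return inspect.currentframe().f_back.f_lineno

a = 1
print(line_number())
b = 2
c = a + b
print(line_number())
```

<code>sys._getframe().f_lineno</code> is a faster but CPython-specific alternative.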
76,875,010 | 2,644,315 | python - Send a mail (text/html) via smtplib with SSL | <p>I am trying to send a report by email using smtplib, this is my code:</p>
<pre><code>import smtplib, ssl
from email import encoders
from email.mime.text import MIMEText
from email.mime.base import MIMEBase
from email.mime.multipart import MIMEMultipart
mm = MIMEMultipart("alternative")
mm["Subject"] = "Subject"
mm["From"] = ff
mm["To"] = to
mm.attach(MIMEText(msg, "html"))
srv = smtplib.SMTP_SSL('localhost', 465)
srv.sendmail(ff, to, mm.as_string())
srv.quit()
</code></pre>
<p>The content must go over SSL, as the report may contain sensitive data.
However, I get no error, nothing crashes, no mail arrives at the target, and no "postmaster" response appears in the sender (ff) mailbox.</p>
<p>I am unable to figure out what I am doing wrong... The code above should work with both Python 2.7 and Python 3.6; in both environments the result is the same...</p>
<p>Thanks a lot</p>
| <python><ssl><smtplib> | 2023-08-10 10:48:06 | 0 | 441 | Xerix |
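A hedged sketch relating to the record above (not the asker's code; the host, port, and addresses below are placeholders). The message assembly can be checked without any network access, and <code>SMTP_SSL</code> on port 465 gives implicit TLS. Note that the snippet in the question never calls <code>login()</code>, which many servers require; <code>srv.set_debuglevel(1)</code> prints the SMTP dialogue and usually shows where delivery stops.

```python
import smtplib
from email.mime.multipart import MIMEMultipart
from email.mime.text import MIMEText

def build_message(sender: str, recipient: str, subject: str, html_body: str) -> MIMEMultipart:
    """Assemble a multipart/alternative message carrying an HTML part."""
    mm = MIMEMultipart("alternative")
    mm["Subject"] = subject
    mm["From"] = sender
    mm["To"] = recipient
    mm.attach(MIMEText(html_body, "html"))
    return mm

def send_over_ssl(sender: str, recipient: str, message: MIMEMultipart,
                  host: str = "localhost", port: int = 465) -> None:
    # SMTP_SSL wraps the whole connection in TLS from the first byte.
    with smtplib.SMTP_SSL(host, port) as srv:
        srv.set_debuglevel(1)           # show the SMTP dialogue while debugging
        # srv.login(user, password)     # most relays require authentication
        srv.sendmail(sender, recipient, message.as_string())

# Building and inspecting the message needs no server at all:
msg = build_message("from@example.com", "to@example.com", "Report", "<b>hello</b>")
print("text/html" in msg.as_string())
```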
76,874,787 | 16,852,041 | matplotlib | QObject::moveToThread: Current thread (0x831f000) is not the object's thread (0x95b40e0). Cannot move to target thread (0x831f000) | <p>There were existing posts regarding this error, but not in conjunction with using <code>matplotlib</code>.</p>
<pre><code>QObject::moveToThread: Current thread (0x831f000) is not the object's thread (0x95b40e0).
Cannot move to target thread (0x831f000)
qt.qpa.plugin: Could not load the Qt platform plugin "xcb" in "/home/me/miniconda3/envs/venv/lib/python3.9/site-packages/cv2/qt/plugins" even though it was found.
This application failed to start because no Qt platform plugin could be initialized. Reinstalling the application may fix this problem.
Available platform plugins are: xcb, eglfs, minimal, minimalegl, offscreen, vnc, webgl.
Aborted (core dumped)
</code></pre>
| <python><qt><matplotlib><qobject><xcb> | 2023-08-10 10:16:59 | 2 | 2,045 | DanielBell99 |
76,874,719 | 4,391,249 | How do I indicate that a Generic container can be heterogeneous? | <p>Say I want to create a custom class and make use of contained type hinting like in <code>list[int]</code>. As I understand it, I would do that like this:</p>
<pre class="lang-py prettyprint-override"><code>from typing import Generic, TypeVar
T = TypeVar("T")
class MyContainer(Generic[T]):
def __init__(self, ls: list[T]):
pass
def __getitem__(self, ix: int) -> T:
pass
</code></pre>
<p>But does this now imply it's a homogeneous container? Do I have to do anything special to use it like (<code>c = MyContainer(['a', 1])</code>)?</p>
| <python><types> | 2023-08-10 10:08:32 | 1 | 3,347 | Alexander Soare |
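A sketch of one possible answer (my addition): <code>Generic[T]</code> does not enforce homogeneity at runtime; a type checker simply infers one type for <code>T</code>, and for a mixed container that inferred type is a union, which can also be spelled out explicitly.

```python
from typing import Generic, TypeVar, Union

T = TypeVar("T")

class MyContainer(Generic[T]):
    def __init__(self, ls: list[T]):
        self._ls = list(ls)

    def __getitem__(self, ix: int) -> T:
        return self._ls[ix]

# Nothing special is needed at runtime; for the checker, T becomes
# the union of the element types. It can also be written explicitly:
c: MyContainer[Union[str, int]] = MyContainer(['a', 1])
print(c[0], c[1])  # a 1
```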
76,874,647 | 1,283,853 | Big O complexity for counting occurrences in a list using dictionary comprehension | <p>I have a list of numbers <code>nums = [1,1,2,3,3]</code> and want to create a dict containing the number of occurrences of those numbers: <code>{1:2,2:1,3:2}</code></p>
<p>If I do it using a dict comprehension like this: <code>cntDict = {n:nums.count(n) for n in set(nums)}</code>, is it slower than looping through the list and increasing the counts one by one?</p>
| <python><big-o> | 2023-08-10 09:59:12 | 1 | 862 | TomΓ‘Ε‘ Ε Γma |
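A sketch comparing the approaches in the record above (my addition): with n elements and k unique values, the comprehension is O(n*k) because <code>list.count</code> rescans the whole list for every key, while a single-pass loop or <code>collections.Counter</code> is O(n).

```python
from collections import Counter

nums = [1, 1, 2, 3, 3]

# O(n * k): count() rescans the whole list for each of the k unique values.
via_comprehension = {n: nums.count(n) for n in set(nums)}

# O(n): a single pass, incrementing as we go.
via_loop = {}
for n in nums:
    via_loop[n] = via_loop.get(n, 0) + 1

# Also O(n), and idiomatic:
via_counter = dict(Counter(nums))

print(via_comprehension == via_loop == via_counter)  # True
```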
76,874,407 | 10,353,865 | Why is neq-comparison inconsistent in pandas? | <p>A comparison of two pandas arrays does not give the same result as performing the comparison element-wise. Here is an example:</p>
<pre><code>import numpy as np
import pandas as pd
floats_with_None = np.array([None,2.3,1.4])
s_None = pd.Series(floats_with_None)
(s_None != s_None)[0] # gives True
s_None[0] != s_None[0] # gives False
</code></pre>
<p>Can someone explain the cause of this?</p>
| <python><pandas> | 2023-08-10 09:29:39 | 1 | 702 | P.Jo |
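A plain-Python illustration of the likely cause (my reading, not from the post): in vectorized comparisons, pandas treats <code>None</code> as a missing value with IEEE 754 NaN semantics, under which NaN compares unequal to itself; the scalar expression <code>s_None[0] != s_None[0]</code> instead compares the raw <code>None</code> objects.

```python
# NaN compares unequal to everything, including itself (IEEE 754):
nan = float("nan")
print(nan != nan)    # True

# Plain Python None, by contrast, is equal to itself:
print(None != None)  # False
```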
76,874,260 | 1,954,677 | Pytest teardown log hides test summary | <p>E.g. when I have a test folder consisting of this <code>conftest.py</code></p>
<pre><code>def pytest_unconfigure(config):
for _ in range(100):
print('some teardown log msg')
</code></pre>
<p>and this <code>test_x.py</code></p>
<pre><code>def test_x():
pass
</code></pre>
<p>the output of <code>pytest -v</code> will be</p>
<pre><code>================================ test session starts ================================
platform linux -- Python 3.8.10, pytest-6.1.2, py-1.10.0, pluggy-0.13.1 --
(...)
collected 1 item
../../../../../tmp/test_project/test_x.py::test_x PASSED [100%]
================================= 1 passed in 0.02s =================================
some teardown log msg
some teardown log msg
some teardown log msg
...
...
... (100 Lines)
</code></pre>
<p>Due to the log messages I cannot see the summary without scrolling up.</p>
<p>Using <code>-rsxpf</code> does not improve that, as the "Report" produced by <code>-r</code> will also appear <em>before</em> the teardown log.</p>
<p>Is there a way to call <code>pytest</code> such that the relevant summary can be seen without scrolling up, whilst keeping the teardown log? E.g. by changing the position of the summary, repeating it at the end, or something similar?</p>
| <python><pytest> | 2023-08-10 09:10:45 | 1 | 3,916 | flonk |
76,874,167 | 9,550,867 | How to remove Freq: MS, Name: des, dtype: int64 from pandas series? | <p>I have some pandas series like <code>result_ses</code>. I want to accumulate all the data into a dictionary and save it in a csv. I am working in Google Colab, but I am having trouble removing some unnecessary information from the data. My code is the following:</p>
<pre><code>asd = {}
for prod in unique_products[:4]:
asd[prod] = {} # empty dictionary for each product
asd[prod]['ses'] = result_ses
asd[prod]['des'] = result_des
print(asd)
</code></pre>
<p>The output is following:</p>
<pre><code>{'2-28-437': {'ses': 2021-05-01 16
2021-06-01 16
2021-07-01 16
Freq: MS, Name: ses, dtype: int64,
'des': 2021-05-01 14
2021-06-01 14
2021-07-01 13
Freq: MS, Name: des, dtype: int64},
'2-2-329': {'ses': 2021-05-01 16
2021-06-01 16
2021-07-01 16
Freq: MS, Name: ses, dtype: int64,
'des': 2021-05-01 14
2021-06-01 14
2021-07-01 13
Freq: MS, Name: des, dtype: int64},
'24-30-42-7400': {'ses': 2021-05-01 16
2021-06-01 16
2021-07-01 16
Freq: MS, Name: ses, dtype: int64,
'des': 2021-05-01 14
2021-06-01 14
2021-07-01 13
Freq: MS, Name: des, dtype: int64},
'2-53-1151': {'ses': 2021-05-01 16
2021-06-01 16
2021-07-01 16
Freq: MS, Name: ses, dtype: int64,
'des': 2021-05-01 14
2021-06-01 14
2021-07-01 13
Freq: MS, Name: des, dtype: int64}}
</code></pre>
<p>Where both <code>result_ses</code> and <code>result_des</code> are pandas series and <code>unique_products</code> is a list of strings.</p>
<pre><code># if I type
result_ses.info()
# I get
<class 'pandas.core.series.Series'>
DatetimeIndex: 3 entries, 2021-05-01 to 2021-07-01
Freq: MS
Series name: ses
Non-Null Count Dtype
-------------- -----
3 non-null int64
dtypes: int64(1)
memory usage: 48.0 bytes
</code></pre>
<p>To view the contents of the <code>result_ses</code> I type <code>print(result_ses)</code> and get:</p>
<pre><code>2021-05-01 16
2021-06-01 16
2021-07-01 16
Freq: MS, Name: ses, dtype: int64 # I do not want this included in the csv
</code></pre>
<p>I do not want the dictionary <code>asd</code> to include these two extra pieces of information, specifically <strong>Freq: MS, Name: des, dtype: int64</strong>; I want only the rest, as it is, so that I can get the desired output in the csv. Using the following code, I tried to save the data in the csv, but it is not in the format I want.</p>
<pre><code>op_path = '/content/output/'
output_file_path = op_path + f'desired_output.csv'
ddf = pd.DataFrame.from_dict(asd, orient='index')
ddf.to_csv(output_file_path, index_label='Date')
</code></pre>
<p>I want the final output to be a csv like the following. How can I fix this problem?
<a href="https://i.sstatic.net/0ZgUq.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/0ZgUq.png" alt="desired output image in csv" /></a></p>
| <python><pandas><csv><dictionary><series> | 2023-08-10 09:00:25 | 1 | 1,195 | raiyan22 |
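A hedged sketch of one way to get a tidy CSV (my addition; the product keys and series values below are stand-ins for the post's <code>result_ses</code>/<code>result_des</code>): the "Freq/Name/dtype" text is only the Series <em>repr</em>, not data, and it disappears once the series are assembled into a DataFrame whose rows are the dates.

```python
import pandas as pd

# Hypothetical per-product series standing in for result_ses / result_des.
idx = pd.date_range("2021-05-01", periods=3, freq="MS")
result_ses = pd.Series([16, 16, 16], index=idx, name="ses")
result_des = pd.Series([14, 14, 13], index=idx, name="des")

frames = {}
for prod in ["2-28-437", "2-2-329"]:
    # One column per product/metric pair; the shared DatetimeIndex
    # aligns the rows, so no repr text ever reaches the file.
    frames[f"{prod} ses"] = result_ses
    frames[f"{prod} des"] = result_des

ddf = pd.DataFrame(frames)              # dates become the row index
csv_text = ddf.to_csv(index_label="Date")
print(csv_text.splitlines()[0])
```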
76,874,030 | 9,542,989 | Querying All Emails in Gmail Inbox Using Python | <p>I am using the following code to send and search emails in Gmail using Python,</p>
<pre><code>import smtplib
import imaplib
import email
from email.mime.multipart import MIMEMultipart
from email.mime.text import MIMEText
import pandas as pd
class EmailClient:
def __init__(self, email, password, smtp_server='smtp.gmail.com', smtp_port=587, imap_server="imap.gmail.com"):
self.email = email
self.password = password
self.smtp_server = smtplib.SMTP(smtp_server, smtp_port)
self.imap_server = imaplib.IMAP4_SSL(imap_server)
def send_email(self, to_addr, subject, body):
msg = MIMEMultipart()
msg['From'] = self.email
msg['To'] = to_addr
msg['Subject'] = subject
msg.attach(MIMEText(body, 'plain'))
self.smtp_server.starttls()
self.smtp_server.login(self.email, self.password)
self.smtp_server.send_message(msg)
self.smtp_server.quit()
def search_email(self, mailbox="INBOX", subject=None, to=None, from_=None, since_date=None, until_date=None,
since_emailid=None):
self.imap_server.login(self.email, self.password)
self.imap_server.select(mailbox)
query_parts = []
if subject is not None:
query_parts.append(f'(SUBJECT "{subject}")')
if to is not None:
query_parts.append(f'(TO "{to}")')
if from_ is not None:
query_parts.append(f'(FROM "{from_}")')
if since_date is not None:
since_date_str = since_date.strftime("%d-%b-%Y")
query_parts.append(f'(SINCE "{since_date_str}")')
if until_date is not None:
until_date_str = until_date.strftime("%d-%b-%Y")
query_parts.append(f'(BEFORE "{until_date_str}")')
if since_emailid is not None:
query_parts.append(f'(UID {since_emailid}:*)')
query = ' '.join(query_parts)
ret = []
resp, items = self.imap_server.uid('search', None, query)
items = items[0].split()
for emailid in items[::-1]:
resp, data = self.imap_server.uid('fetch', emailid, "(BODY[HEADER.FIELDS (SUBJECT TO FROM DATE)])")
try:
raw_email = data[0][1].decode("utf-8")
except UnicodeDecodeError:
ValueError(f"Could not decode email with id {emailid}")
email_message = email.message_from_string(raw_email)
email_line = {}
email_line['id'] = emailid
email_line["to"] = email_message['To']
email_line["from"] = email_message['From']
email_line["subject"] = str(email_message['Subject'])
email_line["created_at"] = email_message['Date']
resp, email_data = self.imap_server.uid('fetch', emailid, '(BODY[TEXT])')
email_line["body"] = email_data[0][1].decode('utf-8')
ret.append(email_line)
self.imap_server.logout()
return pd.DataFrame(ret)
</code></pre>
<p>When I execute,</p>
<pre><code>email_client = EmailClient('<email>', '<app_password>')
email_client.search_email()
</code></pre>
<p>I expected all of the emails in my INBOX to be returned, instead I get the following error,</p>
<pre><code>imaplib.IMAP4.error: UID command error: BAD [b'Could not parse command']
</code></pre>
<p>What is the problem here?</p>
<p>I know that the same error has popped up here,
<br>
<a href="https://stackoverflow.com/questions/25186394/unable-to-retrieve-gmail-messages-from-any-folder-other-than-inbox-python3-issu">Unable to retrieve gmail messages from any folder other than inbox (Python3 issue)</a></p>
<p>But, my issue seems to be a little different.</p>
<p>Another question along the same lines: is there a way for me to limit the number of results that are returned when running this function? Something similar to a <code>LIMIT</code> clause? How do I incorporate that into the query?</p>
| <python><email><smtp><gmail><imap> | 2023-08-10 08:43:32 | 0 | 2,115 | Minura Punchihewa |
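A sketch of the likely fix (my reading): calling <code>search_email()</code> with no filters builds an empty criteria string, and <code>UID SEARCH</code> with an empty argument is exactly what the server rejects as "Could not parse command"; falling back to <code>ALL</code> avoids it. IMAP also has no <code>LIMIT</code> clause, so the number of results is capped client-side by slicing the returned UID list.

```python
def build_search_query(parts: list[str]) -> str:
    # An empty criteria string makes the server answer
    # "BAD Could not parse command"; default to ALL instead.
    return " ".join(parts) if parts else "ALL"

def newest_n(uids: list[bytes], n: int) -> list[bytes]:
    # IMAP SEARCH has no LIMIT; take the last n UIDs (the newest)
    # and reverse them so the newest comes first.
    return uids[-n:][::-1]

print(build_search_query([]))                 # ALL
print(newest_n([b"1", b"2", b"3", b"4"], 2))  # [b'4', b'3']
```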
76,874,013 | 1,050,187 | Not able to read files in Kaggle when running in background notebook | <p>I have saved some files in the /kaggle/working/ directory and I can use them when I run the notebook in a client session. For instance, running a cell with this code</p>
<pre><code>print('READ DIRECTORIES AND PRINT FILES LIST\n')
import os
for dirname, _, filenames in os.walk('/kaggle/'):
for filename in filenames:
print(os.path.join(dirname, filename))
</code></pre>
<p>gives this output</p>
<pre><code>READ DIRECTORIES AND PRINT FILES LIST
/kaggle/lib/kaggle/gcp.py
/kaggle/working/state.db
/kaggle/working/ANN SPY_30min - P_ITM - L 128-32-1 - DO 0995 - BS 512 - OPTIONS MODEL.h5
/kaggle/working/SPY_30min_final_df_fildered_40_DTE_51.csv
/kaggle/working/Test 2.csv
/kaggle/working/test.csv
</code></pre>
<p>But when I run the notebook in the background that files are not found and the notebook crashes when it tries to load one of them. The same above cell code gives as output the following</p>
<pre><code>25.1s 24 READ DIRECTORIES AND PRINT FILES LIST
25.1s 25
25.1s 26 /kaggle/src/script.ipynb
25.1s 27 /kaggle/lib/kaggle/gcp.py
25.1s 28 /kaggle/working/__notebook__.ipynb
</code></pre>
<p>How can I find the files I need when running in the background?</p>
<p>The opposite is also true: files saved during a background run are not present in the client session and are lost.</p>
| <python><jupyter-notebook><kaggle> | 2023-08-10 08:41:35 | 0 | 623 | fede72bari |
76,873,942 | 4,908,844 | Why does my Keras LSTM model not learn patterns on the training set and even gets worse over time? | <p>I am trying to fit a sequential model (LSTM) to my data. For this I am using the Keras library.</p>
<p>Simplified, my target is a counter that counts people every day in a location. The features are weather data, public holidays and the current date. In total there are 21 features.</p>
<p>My goal is to take the last 3 days, and the data of the next day (4 days in total) including weather prediction, to forecast the number of people for that day.</p>
<p>For comparison, I have a baseline error of ~10000, and I have already fitted two models: xgboost with a training error of ~5000 and a dense model with a training error of ~7000. The absolute values are not important, but I mention them so you get a feeling, and to show that other models are indeed capable of picking up the patterns.</p>
<p>My input shape is:</p>
<pre><code>X_train_3d.shape
>>> (2918, 4, 21)
</code></pre>
<p>The first element looks like this:</p>
<pre><code>X_train_3d[0]
>>> array([[2.0130e+03, 1.0000e+00, 1.0000e+00, 1.0000e+00, 9.1000e+00,
6.9000e+00, 0.0000e+00, 1.9400e+01, 1.0018e+03, 0.0000e+00,
1.0000e+00, 0.0000e+00, 0.0000e+00, 0.0000e+00, 0.0000e+00,
1.0000e+00, 0.0000e+00, 0.0000e+00, 0.0000e+00, 0.0000e+00,
0.0000e+00],
[2.0130e+03, 1.0000e+00, 2.0000e+00, 2.0000e+00, 7.1000e+00,
1.8000e+00, 0.0000e+00, 2.0200e+01, 1.0175e+03, 3.0000e+01,
1.0000e+00, 0.0000e+00, 0.0000e+00, 0.0000e+00, 0.0000e+00,
0.0000e+00, 0.0000e+00, 0.0000e+00, 0.0000e+00, 0.0000e+00,
0.0000e+00],
[2.0130e+03, 1.0000e+00, 3.0000e+00, 3.0000e+00, 1.0600e+01,
9.0000e-01, 0.0000e+00, 2.3800e+01, 1.0245e+03, 0.0000e+00,
1.0000e+00, 0.0000e+00, 0.0000e+00, 0.0000e+00, 0.0000e+00,
0.0000e+00, 0.0000e+00, 0.0000e+00, 0.0000e+00, 0.0000e+00,
0.0000e+00],
[2.0130e+03, 1.0000e+00, 4.0000e+00, 4.0000e+00, 9.7000e+00,
0.0000e+00, 0.0000e+00, 2.5200e+01, 1.0295e+03, 0.0000e+00,
1.0000e+00, 0.0000e+00, 0.0000e+00, 0.0000e+00, 0.0000e+00,
0.0000e+00, 0.0000e+00, 0.0000e+00, 0.0000e+00, 0.0000e+00,
0.0000e+00]])
</code></pre>
<p>My target shape is:</p>
<pre><code>y_train_3d.shape
>>> (2918, 4, 1)
</code></pre>
<p>The first element looks like this:</p>
<pre><code>y_train_3d[0]
>>> array([[ 5795.],
[19494.],
[24851.],
[13475.]])
</code></pre>
<p>So both the shape and the content look the way they are supposed to be.</p>
<p>The third model is my LSTM model. My model is as follows:</p>
<pre><code>num_sequence = 4
inputs = Input(shape=(num_sequence, 21))
x = LSTM(32, activation='relu', dropout=0.0, return_sequences=True)(inputs)
outputs = Dense(1, activation='linear')(x)
model = tf.keras.Model(inputs=inputs, outputs=outputs)
optimizer = Adam(learning_rate=0.1)
model.compile(optimizer=optimizer, loss='mse', metrics=[RootMeanSquaredError()])
print(model.summary())
</code></pre>
<p>The summary of the model is:</p>
<pre><code>_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
input_1 (InputLayer) [(None, 4, 21)] 0
lstm_2 (LSTM) (None, 4, 32) 6912
dense_3 (Dense) (None, 4, 1) 33
=================================================================
Total params: 6,945
Trainable params: 6,945
Non-trainable params: 0
_________________________________________________________________
</code></pre>
<p><strong>My Problem:</strong>
If I use this LSTM and let it run for 500 epochs, at first it seems to actually learn and the training error goes down, but at some point after 180 epochs the training error gets worse again.</p>
<pre><code>Epoch 178/500
92/92 - 0s - loss: 67061724.0000 - root_mean_squared_error: 8189.1221 - val_loss: 73245328.0000 - val_root_mean_squared_error: 8558.3486 - 488ms/epoch - 5ms/step
Epoch 179/500
92/92 - 1s - loss: 63150060.0000 - root_mean_squared_error: 7946.7012 - val_loss: 73750616.0000 - val_root_mean_squared_error: 8587.8184 - 536ms/epoch - 6ms/step
Epoch 180/500
92/92 - 1s - loss: 114693592.0000 - root_mean_squared_error: 10709.5098 - val_loss: 149320688.0000 - val_root_mean_squared_error: 12219.6846 - 661ms/epoch - 7ms/step
Epoch 181/500
92/92 - 1s - loss: 121518424.0000 - root_mean_squared_error: 11023.5391 - val_loss: 125545216.0000 - val_root_mean_squared_error: 11204.6963 - 646ms/epoch - 7ms/step
Epoch 182/500
92/92 - 1s - loss: 115984808.0000 - root_mean_squared_error: 10769.6240 - val_loss: 122884320.0000 - val_root_mean_squared_error: 11085.3203 - 690ms/epoch - 8ms/step
</code></pre>
<p>After 500 epochs I predicted on the training set and got those results:</p>
<pre><code>model.predict(X_train_3d)
>>> array([[[22839.49 ],
[22839.49 ],
[22839.49 ],
[22839.49 ]],
[[26642.746],
[26642.746],
[26642.746],
[26642.746]],
[[27665.633],
[27665.633],
[27665.633],
[27665.633]],
...,
[[13904.507],
[13904.507],
[13904.507],
[13904.507]],
[[12234.53 ],
[12234.53 ],
[12234.53 ],
[12234.53 ]],
[[15874.946],
[15874.946],
[15874.946],
[15874.946]]], dtype=float32)
</code></pre>
<p>I am quite new to sequential models and therefore I might be missing some concepts needed to see the error here.
To me everything looks fine, in the sense that the training data has the right shape and the LSTM model gives the correct output shape. Also, until ~epoch 180 the model seems to improve and learn, until suddenly it gets worse and just predicts the same value for each time step.</p>
<p>As you can see I am also not using any regularization here on purpose, since my model does not even seem to be able to fit the training data well and therefore I am guessing this is more of a <code>high bias</code> problem and regularization is not helping.</p>
<p>Can someone provide me with some intuition on what is happening here?
Apart from a general intuition, my expectation would be if I use no regularization at all and train for a long enough time, I should be able to at least overfit my training data which should result in</p>
<pre><code>model.predict(X_train_3d)
</code></pre>
<p>being close to my <code>y_train_3d</code> e.g., first element</p>
<pre><code>>>> array([[ 5795.],
[19494.],
[24851.],
[13475.]])
</code></pre>
<p>But as we can see, all values are the same and far off, e.g. for the first element. I also used the same architecture with a bigger model (2 LSTM layers) and received similar results. Therefore, it does not seem to be a problem that my model is too small and cannot pick up the pattern.</p>
<p>For additional comparison:
My Dense model has <code>Trainable params: 3,329</code>
The LSTM has <code>Trainable params: 6,945</code>
So, in theory it should have the capability of picking up complex patterns and do well on the training data.</p>
| <python><machine-learning><keras><deep-learning><lstm> | 2023-08-10 08:31:29 | 1 | 361 | Mrob |
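A hedged note on the record above (my interpretation, not a confirmed diagnosis): a learning rate of 0.1 is very high for Adam, and the features are on wildly different scales (years around 2013, pressure around 1000, one-hot flags of 0/1) while the targets are in the tens of thousands, a combination that commonly makes training diverge and collapse to a constant prediction. Standardizing inputs (and targets) before fitting, then lowering the learning rate, is the usual first step; a minimal numpy sketch:

```python
import numpy as np

def standardize(train: np.ndarray):
    """Fit a per-feature mean/std on the training set (over samples and
    timesteps) and return the z-scored data plus the fitted statistics,
    so the same scaler can be reused on validation/test data."""
    mean = train.mean(axis=(0, 1), keepdims=True)        # shape (1, 1, n_features)
    std = train.std(axis=(0, 1), keepdims=True) + 1e-8   # avoid division by zero
    return (train - mean) / std, mean, std

# Stand-in data with the post's shape (samples, timesteps, features):
X = np.random.default_rng(0).normal(loc=1000.0, scale=5.0, size=(8, 4, 21))
X_scaled, mu, sigma = standardize(X)
print(abs(X_scaled.mean()) < 1e-6, abs(X_scaled.std() - 1.0) < 1e-2)
```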
76,873,549 | 8,547,986 | python custom error message for required parameters of a function | <p>In python, for a function, when a required parameter is not passed, python raises a <code>TypeError</code>. How can I customize the error message of this <code>TypeError</code>. For example:</p>
<pre class="lang-py prettyprint-override"><code>>>> def f(a):
... print('a')
...
>>> f()
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: f() missing 1 required positional argument: 'a'
</code></pre>
<p>I wish to customize the error message to include more information, like the definition of the parameter. Is this doable, or is there a better way to achieve the same?</p>
| <python><python-3.x> | 2023-08-10 07:37:59 | 1 | 1,923 | monte |
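One possible approach (a sketch of my own, not an established recipe): the built-in <code>TypeError</code> text cannot be changed directly, but a decorator can pre-check the call against the function's signature with <code>inspect.signature(...).bind(...)</code> and re-raise with extra context. The parameter description below is a made-up example.

```python
import functools
import inspect

def describe_params(**descriptions):
    """Re-raise missing-argument TypeErrors with extra per-parameter context."""
    def decorator(func):
        sig = inspect.signature(func)
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            try:
                sig.bind(*args, **kwargs)   # raises TypeError on a bad call
            except TypeError as exc:
                hints = "; ".join(f"{k}: {v}" for k, v in descriptions.items())
                raise TypeError(f"{exc} ({hints})") from None
            return func(*args, **kwargs)
        return wrapper
    return decorator

@describe_params(a="the value to print")
def f(a):
    print(a)

f(1)   # works
# f() now raises:
# TypeError: missing a required argument: 'a' (a: the value to print)
```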
76,873,411 | 5,119,871 | How does floating point division work in Python? It seems different than other languages I'm familiar with | <p>In many computer language I'm familiar with an expression such as</p>
<pre class="lang-py prettyprint-override"><code>>>> 1.0 - ((1.0/3.0)*3.0)
0.0
</code></pre>
<p>will evaluate to a number close to 0.0 but not exactly to 0.0.
In Python it seems to evaluate exactly to 0.0.
How does this work?</p>
<pre class="lang-py prettyprint-override"><code>>>> 0.0 == (1.0 - ((1.0/3.0)*3.0))
True
>>> 0.0 == (1.0 - ((1.0/10.0)*10.0))
True
>>> 1.0 - (0.1 * 10)
0.0
>>> 0.0 == (1.0 - (0.1 * 10))
True
</code></pre>
<p>When I look into the <a href="https://docs.python.org/3/tutorial/floatingpoint.html" rel="nofollow noreferrer">Python documentation</a>, I don't see this example explicitly, but it seems to imply that, for example 0.1 * 10 would not equal exactly 1. In fact it says that 0.1 is approximately 0.1000000000000000055511151231257827021181583404541015625</p>
<p>It would be great if someone could explain to me what's happening here.</p>
<p>By the way, the <a href="https://stackoverflow.com/questions/21895756/why-are-floating-point-numbers-inaccurate">post</a> is sort of the opposite of what I'm asking. That article asks why floating point computations are INACCURATE. I'm asking, rather, why are floating point computations surprisingly/magically ACCURATE?</p>
| <python><floating-point> | 2023-08-10 07:17:05 | 1 | 662 | Jim Newton |
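A short demonstration of what is going on (my addition): the Python results are not magically exact. Each operation rounds to the nearest representable double, and for 1/3 and 1/10 the rounding errors of the division and the multiplication happen to cancel, so the product rounds back to exactly 1.0; <code>decimal.Decimal</code> exposes the true stored value of 0.1.

```python
from decimal import Decimal

# 0.1 is not exactly 1/10; its true binary value is slightly larger:
print(Decimal(0.1))     # 0.1000000000000000055511151231257827021181583404541015625

# Each arithmetic result is rounded to the nearest representable double.
# For 0.1 * 10 the rounding happens to land on exactly 1.0:
print(0.1 * 10 == 1.0)  # True

# ...but that cancellation is luck, not a guarantee:
print(0.1 + 0.2 == 0.3) # False
```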
76,873,386 | 9,992,341 | Kafka ensuring consumer group stays alive | <p>I have a process which spawns a producer and consumer in separate pods (kubernetes). I want to use <code>auto.offset.reset "latest"</code>, and thus need to guarantee that the consumer pod spins up prior to the producer pod, as I do not want the producer to begin producing messages before the consumer pod comes online.</p>
<p>My simple approach is to have the process which spawns these pods create the consumer group before either pod spawns. In testing, I noticed that the consumer group state goes from stable to empty after about ~45 seconds, and then the consumer group is variably removed after anywhere from another 30 seconds to a few minutes.</p>
<p>How can I guarantee that the consumer group created stays around for longer?</p>
<p>My <code>offsets.retention.minutes</code> is the default of 7 days, as per <a href="https://stackoverflow.com/a/65189562/9992341">https://stackoverflow.com/a/65189562/9992341</a>.</p>
<p>I am using python's confluent_kafka package to create the group (there appears to be no direct api to create a group id), and I have tried messing around with the subscribe callable params.</p>
<pre><code>from confluent_kafka import Consumer
consumer = Consumer(
{
"sasl.username": "***",
"sasl.password": "***",
"bootstrap.servers": "***",
"group.id": "test-group-1",
"security.protocol": "SASL_SSL",
"sasl.mechanisms": "PLAIN",
"auto.offset.reset": "latest",
},
)
consumer.subscribe([topic]) #, on_lost=lambda *args: None)
</code></pre>
<p>I run the above code and check in a separate script with the admin client the group:</p>
<pre><code>import confluent_kafka.admin
admin_client = confluent_kafka.admin.AdminClient(
{
'sasl.username': "***",
'sasl.password': "***",
'bootstrap.servers': "***",
'security.protocol': 'SASL_SSL',
'sasl.mechanisms': 'PLAIN',
}
)
def list_groups(admin_client):
future = admin_client.list_consumer_groups()
res = future.result()
lst = [(i.group_id, i.state) for i in res.valid]
for i in sorted(lst):
print(i) # noqa: T201
list_groups(admin_client)
# ('test-group-1', <ConsumerGroupState.STABLE: 3>)
</code></pre>
<p>However as stated this group's state pretty quickly becomes "EMPTY" and disappears, even though the retention should be 7 days (which is overkill for my use case where pods come up pretty close together).</p>
<p>Note: I have tested this while messages were being produced to the topic and not produced, but no change is observed.</p>
| <python><apache-kafka><confluent-kafka-python> | 2023-08-10 07:12:59 | 1 | 998 | bbd108 |
76,873,358 | 18,221,164 | accessing keyvault secrets from azure functions fails | <p>Due to some limitations, I am not able to test the azure function first locally and then deploy it.
So we have followed a workaround approach, using pipelines.
I have a repository in Azure git, where I have a folder (azure-functions) containing <strong>init</strong>.py, functions.json and requirement.txt.</p>
<p>I create a new function from the pipeline using the following script inside the pipeline yaml.</p>
<pre><code>func init LocalFunctionProj --python
cd LocalFunctionProj
func new --name TimeTrigger --template "Timer trigger" --schedule "0 */5 * * * *"
</code></pre>
<p>Now the above command creates a new <strong>init</strong>.py, functions.json and requirement.txt.
I copy the contents of the files from azure-functions (in my repo) into the files created above.</p>
<p>Now, I have the init.py where I try to access the dummy keyvault value, but end up facing with the module not found error.</p>
<p>init.py</p>
<pre><code>import datetime
import logging
import azure.functions as func
import azure.keyvault.secrets as kv_secrets
import azure.identity as az_identity
#from azure.keyvault.secrets import SecretClient
#from azure.identity import DefaultAzureCredential
def main(mytimer: func.TimerRequest) -> None:
utc_timestamp = datetime.datetime.utcnow().replace(
tzinfo=datetime.timezone.utc).isoformat()
if mytimer.past_due:
logging.info('The timer is past due!')
KVUri = f"XXXXXX"
credential = az_identity.DefaultAzureCredential()
logging.info('credential info is %s',credential)
#client = SecretClient(vault_url=KVUri, credential=credential)
client = az_identity.SecretClient(vault_url=KVUri, credential=credential)
password = client.get_secret('TestSecret').value
logging.info('Secret value is %s', password)
logging.info('Hello!! Good day!')
logging.info('Python timer trigger function ran at %s', utc_timestamp)
</code></pre>
<p>The requirement.txt has the following entries :</p>
<pre><code>azure-functions
azure-common==1.1.28
azure-core==1.28.0
azure-identity==1.13.0
azure-keyvault-secrets==4.7.0
</code></pre>
<p>I deploy the function using the pipeline itself, and it is successful.
The time triggered response shows the following:</p>
<pre><code>Result: Failure Exception: ModuleNotFoundError: No module named 'azure.keyvault'. Please check the requirements.txt file for the missing module. For more info, please refer the troubleshooting guide: https://aka.ms/functions-modulenotfound Stack: File "/azure-functions-host/workers/python/3.9/LINUX/X64/azure_functions_worker/dispatcher.py", line 380, in _handle__function_load_request func = loader.load_function( File "/azure-functions-host/workers/python/3.9/LINUX/X64/azure_functions_worker/utils/wrappers.py", line 48, in call raise extend_exception_message(e, message) File "/azure-functions-host/workers/python/3.9/LINUX/X64/azure_functions_worker/utils/wrappers.py", line 44, in call return func(*args, **kwargs) File "/azure-functions-host/workers/python/3.9/LINUX/X64/azure_functions_worker/loader.py", line 132, in load_function mod = importlib.import_module(fullmodname) File "/usr/local/lib/python3.9/importlib/__init__.py", line 127, in import_module return _bootstrap._gcd_import(name[level:], package, level) File "/home/site/wwwroot/TimeTrigger/__init__.py", line 5, in <module> import azure.keyvault.secrets as kv_secrets
</code></pre>
| <python><azure><azure-functions> | 2023-08-10 07:08:44 | 1 | 511 | RCB |
76,873,239 | 7,737,689 | Python shutil.unpack_archive exits with Errno 22: how can the remaining files be unpacked? | <p>When unpacking with shutil, Errno 22 pops up. It is OK that some files cannot be unpacked, but how can the remaining files still be unpacked? Or how can the unpacking be done with errors ignored?</p>
<pre><code>try :
shutil.unpack_archive(file, thirdFolder)
except OSError as error :
print(error)
</code></pre>
<p>[Errno 22] Invalid argument: 'D:\099Logs\HOUJI-10077--UnzipUn7z\bugreport-2023-08-04-141558\bugreport-houji-2023-08-04-141739\lshal-debug\android.frameworks.cameraservice.service@2.0::ICameraService_default.txt'</p>
| <python><shutil><unpack> | 2023-08-10 06:50:30 | 0 | 481 | Fukai |
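A sketch of a workaround (my addition): the failing name contains '@' and '::', and ':' is not a legal character in Windows file names, which is what raises Errno 22. <code>shutil.unpack_archive</code> aborts on the first error, but extracting member by member lets everything else through (shown here for .zip archives; other formats would need the matching reader).

```python
import zipfile

def unpack_zip_best_effort(archive_path: str, dest: str) -> list[str]:
    """Extract members one by one, skipping the ones the filesystem
    rejects (e.g. names containing ':' on Windows). Returns the
    list of member names that could not be extracted."""
    skipped = []
    with zipfile.ZipFile(archive_path) as zf:
        for member in zf.infolist():
            try:
                zf.extract(member, dest)
            except OSError as error:
                skipped.append(member.filename)
                print(f"skipped {member.filename}: {error}")
    return skipped
```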
76,873,078 | 4,575,197 | TypeError: Index(...) must be called with a collection of some kind, 'Mcap' was passed while constructing a Dataframe | <p>I'm trying to <a href="https://docs.scipy.org/doc/scipy-0.14.0/reference/generated/scipy.stats.mstats.winsorize.html" rel="nofollow noreferrer">winsorize</a> a dataset. I do it in multiple levels.</p>
<p>First, I need the winsorization based on a ratio, which is based on TotalAssets (a column in my dataset).</p>
<pre><code>FirmMonthlyAccountingData[item].div(FirmMonthlyAccountingData['TotalAssets'])
</code></pre>
<p>Then I use the same code (in order to use less memory), extract the values as numpy arrays, and apply the winsorization (I need to remove the top/bottom 5%).</p>
<pre><code>winsorize(FirmMonthlyAccountingData[item].div(FirmMonthlyAccountingData['TotalAssets'], axis=0).values,limits=[0.05,0.05])
</code></pre>
<p>Then I need to change this back into a DataFrame. This is actually where the error happens.</p>
<pre><code>pd.DataFrame(winsorize(FirmMonthlyAccountingData[item].div(FirmMonthlyAccountingData['TotalAssets'],axis=0).values,limits[0.05,0.05]),columns=item)
</code></pre>
<p>Then I multiply it with <code>*FirmMonthlyAccountingData['totalAssets']</code> so I get the original values back.</p>
<pre><code>Copy_of_firmmonthlydata[item]=pd.DataFrame(winsorize(FirmMonthlyAccountingData[item].div(FirmMonthlyAccountingData['TotalAssets'],axis=0).values,limits[0.05,0.05]),columns=item)*FirmMonthlyAccountingData['totalAssets']
</code></pre>
<p>Finally, I need to do it for all the columns with a for loop, in order to save as much memory as possible.</p>
<pre><code>columns_to_winsorize= ['Mcap', 'first', 'second', 'third']
for item in columns_to_winsorize:
Copy_of_firmmonthlydata[item]=pd.DataFrame(winsorize(FirmMonthlyAccountingData[item].div(FirmMonthlyAccountingData['TotalAssets'], axis=0).values,limits=[0.05,0.05]),columns=item)*FirmMonthlyAccountingData['totalAssets']
</code></pre>
<p>But I get this error:</p>
<pre><code> TypeError Traceback (most recent call last)
Cell In[27], line 10
3 columns_to_winsorize= ['Mcap', 'first', 'second']
9 for item in columns_to_winsorize:
---> 10 Copy_of_firmmonthlydata=pd.DataFrame(winsorize(FirmMonthlyAccountingData[f'{item}'].div(FirmMonthlyAccountingData['TotalAssets'], axis=0).values,limits=[0.05,0.05]),columns=item)*FirmMonthlyAccountingData['totalAssets']
File c:\Users\anaconda3\envs\PythonCourse2023\Lib\site-packages\pandas\core\frame.py:722, in DataFrame.__init__(self, data, index, columns, dtype, copy)
720 # a masked array
721 data = sanitize_masked_array(data)
--> 722 mgr = ndarray_to_mgr(
723 data,
724 index,
725 columns,
726 dtype=dtype,
727 copy=copy,
728 typ=manager,
729 )
731 elif isinstance(data, (np.ndarray, Series, Index, ExtensionArray)):
732 if data.dtype.names:
733 # i.e. numpy structured array
File c:\Users\anaconda3\envs\PythonCourse2023\Lib\site-packages\pandas\core\internals\construction.py:333, in ndarray_to_mgr(values, index, columns, dtype, copy, typ)
324 values = sanitize_array(
325 values,
326 None,
(...)
329 allow_2d=True,
330 )
332 # _prep_ndarraylike ensures that values.ndim == 2 at this point
--> 333 index, columns = _get_axes(
334 values.shape[0], values.shape[1], index=index, columns=columns
335 )
337 _check_values_indices_shape_match(values, index, columns)
339 if typ == "array":
File c:\Users\anaconda3\envs\PythonCourse2023\Lib\site-packages\pandas\core\internals\construction.py:738, in _get_axes(N, K, index, columns)
736 columns = default_index(K)
737 else:
--> 738 columns = ensure_index(columns)
739 return index, columns
...
5066 f"{cls.__name__}(...) must be called with a collection of some "
5067 f"kind, {repr(data)} was passed"
5068 )
TypeError: Index(...) must be called with a collection of some kind, 'Mcap' was passed
</code></pre>
<p>Any help would be appreciated.</p>
| <python><pandas><dataframe><numpy><typeerror> | 2023-08-10 06:22:12 | 1 | 10,490 | Mostafa Bouzari |
76,873,011 | 12,906,920 | Review Summarization Using Langchain and AzureOpenAI | <p>Using Langchain and Azure OpenAI, I would like to summarize reviews based on the product's attributes. For example:</p>
<pre><code>products = {'foo4':["cilt","kargo","orjinallik"],
'foo3':["kurulum", "rüzgar geçirgenliği", "taşınabilirlik", "güneş geçirgenliği"],
'foo2':["Şarj süresi", "batarya kullanım süresi", "emiş gücü"],
'foo':["katlanabilirliği", "ağırlığı", "taşıma kapasitesi"]}
</code></pre>
<p>For each value element, I would like to get a summary. My code is as follows:</p>
<pre><code>llm = AzureChatOpenAI(openai_api_key = openai.api_key,
deployment_name = 'openai.api_deployment_name',
openai_api_version = openai.api_version,
openai_api_base = openai.api_base,
openai_api_type = openai.api_type,
temperature=0,
)
PROMPT_TEMPLATE = ("Review and summarize user comments based on {content}. Summarize only comments on {content}.")
filename = "temp/foo.txt"
text_splitter = RecursiveCharacterTextSplitter(
chunk_size=10000,
chunk_overlap=10,
)
loader = TextLoader(filename)
documents = loader.load()
docs = text_splitter.split_documents(documents)
embeddings = OpenAIEmbeddings(model="text-embedding-ada-002",deployment="foo",openai_api_key=openai.api_key,openai_api_version=openai.api_version,openai_api_base=openai.api_base,openai_api_type=openai.api_type)
attributes = products['foo']
# db2 = Chroma.from_documents(docs, embeddings, persist_directory="./chroma_db_test")
db_disk = Chroma(persist_directory="./chroma_db_test", embedding_function=embeddings)
sonuc = {}
for attr in attributes:
    print("chunk is processing")
    chain = ConversationalRetrievalChain.from_llm(llm=llm, verbose=True,retriever=db_disk.as_retriever(search_kwargs={"k": 2}),)
    result = chain({"question": PROMPT_TEMPLATE.format(content=attr), "chat_history": []})
    sonuc[attr] = result['answer']
print(sonuc)
</code></pre>
<p>I have 7 documents loaded, and regarding the "k" parameter in the retriever, I may hit the chunk limit (for example if k is 4). If I create a separate document loader for each page's content and loop over the documents, I will probably incur a high LLM cost.</p>
<p>The question is: if the k parameter is set to 1 or 2, fewer reviews are retrieved; if it is too large, I hit the chunk limit. I do not want to miss any review in any document. How can I cover all the docs in an LLM-cost-friendly way based on the attributes?</p>
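<p>For reference, the cost-friendly pattern I am considering is batching the reviews myself instead of raising "k": group them into fixed-size batches so each call stays under the chunk limit, then summarize the per-batch summaries once more (map-reduce style). A sketch with plain lists, no LangChain (names are made up):</p>

```python
def batch_reviews(reviews, batch_size):
    # fixed-size, non-overlapping batches; the last one may be smaller
    return [reviews[i:i + batch_size] for i in range(0, len(reviews), batch_size)]

reviews = [f"review {i}" for i in range(7)]  # stand-in for my 7 documents
batches = batch_reviews(reviews, 2)

print(len(batches))  # 4 batches, and no review is dropped
```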
| <python><langchain><large-language-model> | 2023-08-10 06:07:55 | 0 | 1,005 | Utku Can |
76,872,995 | 7,200,859 | Replace x-axis index of imshow with frequency (kHz) | <p>I have the following plot:</p>
<pre><code>import numpy as np
import matplotlib.pyplot as plt
fs = 1000
t = np.arange(0, 0.1, 1/fs)
N = len(t)
f_bin = fs / N
f = np.arange(0, fs, f_bin)
X = [np.fft.fft(np.sin(2 * np.pi * 100 * t)), np.fft.fft(np.sin(2 * np.pi * 200 * t)), np.fft.fft(np.sin(2 * np.pi * 300 * t))]
M = np.absolute(X)
fig, ax = plt.subplots()
im = ax.imshow(M[:,:len(M[0,:]) // 2], cmap='viridis', aspect=1)
clb = fig.colorbar(im, ax=ax)
clb.ax.set_title("Magnitude")
ax.set_xlabel(f"Index (f[{round(100/f_bin)}] = {f[round(100/f_bin)]} Hz)")
ax.set_ylabel("# FFT")
ax.set_title(f"Built-in FFT")
plt.show()
</code></pre>
<p>which results in</p>
<p><a href="https://i.sstatic.net/ZKsSi.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/ZKsSi.png" alt="enter image description here" /></a></p>
<p>Is there a (nice) solution to transform the "Index" into the actual "frequency bin" by using <code>f_bin</code> and <code>f</code>?</p>
<p><a href="https://i.sstatic.net/cM1ck.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/cM1ck.png" alt="enter image description here" /></a></p>
<p>I hope I haven't overlooked an already existing solution on <code>Stack Overflow</code>.</p>
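<p>For clarity, the index-to-frequency mapping I am after is just <code>f = index * f_bin</code>, which <code>np.fft.fftfreq</code> also produces for the positive half of the spectrum:</p>

```python
import numpy as np

fs = 1000
N = 100              # len(t) for t = np.arange(0, 0.1, 1/fs)
f_bin = fs / N       # 10 Hz per FFT bin

idx = np.arange(N // 2)          # indices of the positive half
freq = idx * f_bin               # frequency of each index

# np.fft.fftfreq yields the same positive bins
assert np.allclose(freq, np.fft.fftfreq(N, d=1 / fs)[: N // 2])
print(freq[10])  # index 10 -> 100.0 Hz
```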
| <python><numpy><matplotlib> | 2023-08-10 06:04:01 | 2 | 715 | ge45mue |
76,872,946 | 221,270 | create two sets of a dataframe | <p>I want to create two sets from a dataframe by removing two rows (ignoring the NA values). The two rows should be stored in a new dataframe and removed from the original dataframe; then the next two rows (without overlap), and so on:</p>
<pre><code># Dataframe
x y ID
39.54 116.39 ID1
38.27 117.26 ID2
28.27 119.55 ID3
27.34 119.43 ID4
NA NA ID5
30.17 109.28 ID6
9.083333333 39.08333333 ID7
NA NA ID8
NA NA ID9
NA NA ID10
### First set
# Training
x y ID
28.27 119.55 ID3
27.34 119.43 ID4
NA NA ID5
30.17 109.28 ID6
9.083333333 39.08333333 ID7
NA NA ID8
NA NA ID9
NA NA ID10
#Validation
x y ID
39.54 116.39 ID1
38.27 117.26 ID2
### Second set
# Training
x y ID
39.54 116.39 ID1
38.27 117.26 ID2
NA NA ID5
30.17 109.28 ID6
9.083333333 39.08333333 ID7
NA NA ID8
NA NA ID9
NA NA ID10
#Validation
x y ID
28.27 119.55 ID3
27.34 119.43 ID4
### Third set
# Training
x y ID
39.54 116.39 ID1
38.27 117.26 ID2
28.27 119.55 ID3
27.34 119.43 ID4
NA NA ID5
NA NA ID8
NA NA ID9
NA NA ID10
#Validation
x y ID
30.17 109.28 ID6
9.083333333 39.08333333 ID7
</code></pre>
<p>How can I split the dataframe so that I always extract two rows?</p>
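<p>To make the intent concrete, here is a sketch of the kind of loop I could write on a shortened version of the frame (I am asking whether there is a cleaner pandas way): walk the non-NA row positions two at a time, take the pair as validation, and <code>drop</code> it from the original for training.</p>

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({
    "x": [39.54, 38.27, 28.27, 27.34, np.nan],
    "y": [116.39, 117.26, 119.55, 119.43, np.nan],
    "ID": ["ID1", "ID2", "ID3", "ID4", "ID5"],
})

valid_idx = df.dropna(subset=["x", "y"]).index   # rows that have coordinates
sets = []
for i in range(0, len(valid_idx), 2):
    pair = valid_idx[i:i + 2]
    validation = df.loc[pair]          # the two extracted rows
    training = df.drop(index=pair)     # everything else, NA rows included
    sets.append((training, validation))

print(len(sets))  # 2 sets: (ID1, ID2) then (ID3, ID4)
```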
| <python><pandas> | 2023-08-10 05:51:04 | 1 | 2,520 | honeymoon |
76,872,744 | 9,542,989 | Connect to Gmail Using Email Address and Password with Python | <p>I am trying to connect to my Gmail account using Python. I want to connect to it using both SMTP and IMAP. I am aware that it is possible to use an app password to make this connection, but is there a way to use the actual email password instead?</p>
<p>I have been reading this particular article,
<br>
<a href="https://support.google.com/accounts/answer/6010255#zippy=%2Cif-less-secure-app-access-is-on-for-your-account%2Cif-less-secure-app-access-is-off-for-your-account" rel="nofollow noreferrer">https://support.google.com/accounts/answer/6010255#zippy=%2Cif-less-secure-app-access-is-on-for-your-account%2Cif-less-secure-app-access-is-off-for-your-account</a></p>
<p>The warning given at the top tells me that it is not possible to do so, even if 'Less secure app access' is allowed,</p>
<p><a href="https://i.sstatic.net/LjCND.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/LjCND.png" alt="enter image description here" /></a></p>
<p>Do I have that right?</p>
<p>Given below is the code I am using to send and search emails. Like I said though, this only works with an app password:</p>
<pre><code>import smtplib
import imaplib
import email
from email.mime.multipart import MIMEMultipart
from email.mime.text import MIMEText
import pandas as pd
class EmailClient:
    def __init__(self, email, password, smtp_server='smtp.gmail.com', smtp_port=587, imap_server="imap.gmail.com"):
        self.email = email
        self.password = password
        self.smtp_server = smtplib.SMTP(smtp_server, smtp_port)
        self.imap_server = imaplib.IMAP4_SSL(imap_server)

    def send_email(self, to_addr, subject, body):
        msg = MIMEMultipart()
        msg['From'] = self.email
        msg['To'] = to_addr
        msg['Subject'] = subject
        msg.attach(MIMEText(body, 'plain'))

        self.smtp_server.starttls()
        self.smtp_server.login(self.email, self.password)
        self.smtp_server.send_message(msg)
        self.smtp_server.quit()

    def search_email(self, mailbox="INBOX", subject=None, to=None, from_=None, since_date=None, until_date=None,
                     since_emailid=None):
        self.imap_server.login(self.email, self.password)
        self.imap_server.select(mailbox)

        query_parts = []
        if subject is not None:
            query_parts.append(f'(SUBJECT "{subject}")')
        if to is not None:
            query_parts.append(f'(TO "{to}")')
        if from_ is not None:
            query_parts.append(f'(FROM "{from_}")')
        if since_date is not None:
            since_date_str = since_date.strftime("%d-%b-%Y")
            query_parts.append(f'(SINCE "{since_date_str}")')
        if until_date is not None:
            until_date_str = until_date.strftime("%d-%b-%Y")
            query_parts.append(f'(BEFORE "{until_date_str}")')
        if since_emailid is not None:
            query_parts.append(f'(UID {since_emailid}:*)')
        query = ' '.join(query_parts)

        ret = []
        resp, items = self.imap_server.uid('search', None, query)
        items = items[0].split()
        for emailid in items[::-1]:
            resp, data = self.imap_server.uid('fetch', emailid, "(BODY[HEADER.FIELDS (SUBJECT TO FROM DATE)])")
            try:
                raw_email = data[0][1].decode("utf-8")
            except UnicodeDecodeError:
                raise ValueError(f"Could not decode email with id {emailid}")
            email_message = email.message_from_string(raw_email)

            email_line = {}
            email_line['id'] = emailid
            email_line["to"] = email_message['To']
            email_line["from"] = email_message['From']
            email_line["subject"] = str(email_message['Subject'])
            email_line["created_at"] = email_message['Date']

            resp, email_data = self.imap_server.uid('fetch', emailid, '(BODY[TEXT])')
            email_line["body"] = email_data[0][1].decode('utf-8')
            ret.append(email_line)

        self.imap_server.logout()
        return pd.DataFrame(ret)
</code></pre>
| <python><email><smtp><gmail><imap> | 2023-08-10 04:56:33 | 1 | 2,115 | Minura Punchihewa |
76,872,580 | 14,954,327 | Can Pandera convert my pa.DataFrameModel into a pa.SeriesSchema? | <p>Given this <code>DataFrame</code></p>
<pre class="lang-py prettyprint-override"><code>import pandera as pa
class MyDataframeSchema(pa.DataFrameModel):
    state: pa.Series[str] = pa.Field()
    city: pa.Series[str] = pa.Field()
    price: pa.Series[int] = pa.Field()


df = pa.DataFrame[MyDataframeSchema](
    {
        'state': ['NY', 'FL', 'GA', 'CA'],
        'city': ['New York', 'Miami', 'Atlanta', 'San Francisco'],
        'price': [8, 12, 10, 16],
    }
)
</code></pre>
<p>I'd like to be able to validate a row extracted from it:</p>
<pre class="lang-py prettyprint-override"><code>MySeriesSchema = magical_and_dreamed_method(MyDataframeSchema)
validated_series = MySeriesSchema.validate(df.iloc[0])
</code></pre>
<p>What would be the best way to do this?</p>
<p>It seems wasteful to have to define the row's <code>pa.SeriesSchema</code> by hand.</p>
| <python><pandas><python-typing><mypy><pandera> | 2023-08-10 04:01:23 | 0 | 960 | codekoriko |
76,872,440 | 517,272 | Detect if image is a perceptual exact duplicate | <p><strong>[Problem]</strong></p>
<p>For an application, I am looking to compare images with an original. With the comparison, I want to determine whether they are an exact perceptual match. They could have a different resolution (e.g. a thumbnail) or different compression levels. As long as the images look similar, it would be a match. But if the image is altered (e.g. cropping, an aspect-ratio change, or a perceptual edit such as a highlight or mark), it would not be considered a match.</p>
<p><strong>[Already tried]</strong></p>
<ol>
<li>phash - from imagehash <a href="https://pypi.org/project/ImageHash/" rel="nofollow noreferrer">https://pypi.org/project/ImageHash/</a>
This seems to work well for detecting the different versions but has too many false positives and is too lenient.</li>
<li>Phash from imagededup <a href="https://github.com/idealo/imagededup" rel="nofollow noreferrer">https://github.com/idealo/imagededup</a>
This detects the copies similarly to the above, but it also has a lot of false positives; in particular, it ignores minor edits.</li>
<li>cv2 template matching <a href="https://www.geeksforgeeks.org/template-matching-using-opencv-in-python/" rel="nofollow noreferrer">https://www.geeksforgeeks.org/template-matching-using-opencv-in-python/</a>
This is the best so far at detecting edits, but it fails on compressed images and hence has too many false negatives.</li>
</ol>
<p><strong>[Need Advice]</strong></p>
<ol>
<li>Any other options you would suggest? Any other options in open cv, or any other models?</li>
<li>Is there a way to increase the sensitivity of phash?</li>
<li>Any ideas on how to accomplish this with a combination of methods?</li>
</ol>
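<p>To make option 2 concrete, the knob I mean is the hash size: more bits makes the hash stricter. Here is a toy difference-hash on a NumPy "image" (crude nearest-neighbour downscale; as far as I know, imagehash exposes a similar <code>hash_size</code> parameter on its real implementations):</p>

```python
import numpy as np

def dhash(img, hash_size=8):
    # compare each downsampled pixel to its right neighbour -> bit vector
    h, w = img.shape
    ys = (np.arange(hash_size) * (h - 1) / (hash_size - 1)).astype(int)
    xs = (np.arange(hash_size + 1) * (w - 1) / hash_size).astype(int)
    small = img[np.ix_(ys, xs)]
    return (small[:, 1:] > small[:, :-1]).ravel()

rng = np.random.default_rng(0)
img = rng.random((64, 64))

print(dhash(img).size, dhash(img, hash_size=16).size)  # 64 vs 256 bits
```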
<p>It would be okay to trade a bit of performance for accuracy in this use case.</p>
<p>Thank you so much for your help!</p>
| <python><opencv><image-processing><hash><computer-vision> | 2023-08-10 03:15:07 | 1 | 921 | gaurav |
76,872,288 | 9,394,364 | Converting JSON list to dictionary | <p>I am using <code>python</code> and have the following JSON block, <code>json_player</code>, which is stored as a list:</p>
<pre><code>[
{
"id": "name",
"value": "john"
},
{
"id": "sport",
"value": "baseball"
},
{
"id": "age",
"value": "20"
}
]
</code></pre>
<p>Note that this is the resulting block from:</p>
<pre><code>json_object = json.loads(df['customFields'])
json_player = json_object['player']
</code></pre>
<p>I know that I can access individual strings within <code>json_player</code> using the following code:</p>
<pre><code>json_player[0]['value']
</code></pre>
<p>Which would return the string <code>john</code>.</p>
<p>However, I would like to be able to look up individual key-value pairs by name rather than by index, to protect myself if the underlying JSON order changes. For example:</p>
<pre><code>json_player['name']
</code></pre>
<p>would return <code>john</code>. Can anyone tell me how I would do this?</p>
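<p>For what it's worth, the shape I think I want is a plain dict keyed on <code>id</code>. Assuming the ids are unique, a comprehension like this would build it, but I would like to confirm this is the right approach:</p>

```python
json_player = [
    {"id": "name", "value": "john"},
    {"id": "sport", "value": "baseball"},
    {"id": "age", "value": "20"},
]

# re-key the list of {"id": ..., "value": ...} records by their "id"
player = {item["id"]: item["value"] for item in json_player}

print(player["name"])  # john
```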
| <python> | 2023-08-10 02:26:51 | 1 | 1,651 | DJC |
76,872,115 | 1,601,580 | How does one create a pytorch data loader with a custom hugging face data set without having errors? | <p>Currently my custom data set yields None values in the data loader, but NOT when iterating the plain data set. When I wrap it in a PyTorch data loader, it fails.</p>
<p>The code is in <a href="https://colab.research.google.com/drive/1sbs95as_66mtK9VK_vbaE9gLE-Tjof1-?usp=sharing" rel="nofollow noreferrer">Colab</a>, but I will put it here in case Colab dies someday:</p>
<pre><code>pip install datasets
pip install torch
pip install transformers
</code></pre>
<p>Then run:</p>
<pre><code>token = None
batch_size = 10
from datasets import load_dataset
import torch
from transformers import GPT2Tokenizer, GPT2LMHeadModel
tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
if tokenizer.pad_token_id is None:
    tokenizer.pad_token = tokenizer.eos_token
probe_network = GPT2LMHeadModel.from_pretrained("gpt2")
device = torch.device(f"cuda:{0}" if torch.cuda.is_available() else "cpu")
probe_network = probe_network.to(device)
# -- Get batch from dataset
from datasets import load_dataset
# path, name = 'brando/debug1_af', 'debug1_af'
path, name = 'brando/debug0_af', 'debug0_af'
remove_columns = []
dataset = load_dataset(path, name, streaming=True, split="train", token=token).with_format("torch")
print(f'{dataset=}')
batch = dataset.take(batch_size)
# print(f'{next(iter(batch))=}')
# - Prepare functions to tokenize batch
def preprocess(examples): # gets the raw text batch according to the specific names in table in data set & tokenize
    return tokenizer(examples["link"], padding="max_length", max_length=128, truncation=True, return_tensors="pt")
def map(batch): # apply preprocess to batch to all examples in batch represented as a dataset
    return batch.map(preprocess, batched=True, remove_columns=remove_columns)
tokenized_batch = batch.map(preprocess, batched=True, remove_columns=remove_columns)
tokenized_batch = map(batch)
# print(f'{next(iter(tokenized_batch))=}')
from torch.utils.data import Dataset, DataLoader, SequentialSampler
dataset = tokenized_batch
print(f'{type(dataset)=}')
print(f'{dataset.__class__=}')
print(f'{isinstance(dataset, Dataset)=}')
# for i, d in enumerate(dataset):
# assert isinstance(d, dict)
# # dd = dataset[i]
# # assert isinstance(dd, dict)
loader_opts = {}
classifier_opts = {}
# data_loader = DataLoader(dataset, shuffle=False, batch_size=loader_opts.get('batch_size', 1),
# num_workers=loader_opts.get('num_workers', 0), drop_last=False, sampler=SequentialSampler(range(512)) )
data_loader = DataLoader(dataset, shuffle=False, batch_size=loader_opts.get('batch_size', 1),
num_workers=loader_opts.get('num_workers', 0), drop_last=False, sampler=None)
print(f'{iter(data_loader)=}')
print(f'{next(iter(data_loader))=}')
print('Done\a')
</code></pre>
<p>error:</p>
<pre><code>dataset=<datasets.iterable_dataset.IterableDataset object at 0x7e42c2f21d20>
type(dataset)=<class 'datasets.iterable_dataset.IterableDataset'>
dataset.__class__=<class 'datasets.iterable_dataset.IterableDataset'>
isinstance(dataset, Dataset)=True
iter(data_loader)=<torch.utils.data.dataloader._SingleProcessDataLoaderIter object at 0x7e42c2f21660>
/usr/local/lib/python3.10/dist-packages/datasets/formatting/torch_formatter.py:68: UserWarning: To copy construct from a tensor, it is recommended to use sourceTensor.clone().detach() or sourceTensor.clone().detach().requires_grad_(True), rather than torch.tensor(sourceTensor).
return torch.tensor(value, **{**default_dtype, **self.torch_tensor_kwargs})
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
/usr/local/lib/python3.10/dist-packages/torch/utils/data/_utils/collate.py in collate(batch, collate_fn_map)
126 try:
--> 127 return elem_type({key: collate([d[key] for d in batch], collate_fn_map=collate_fn_map) for key in elem})
128 except TypeError:
9 frames
/usr/local/lib/python3.10/dist-packages/torch/utils/data/_utils/collate.py in <dictcomp>(.0)
126 try:
--> 127 return elem_type({key: collate([d[key] for d in batch], collate_fn_map=collate_fn_map) for key in elem})
128 except TypeError:
/usr/local/lib/python3.10/dist-packages/torch/utils/data/_utils/collate.py in collate(batch, collate_fn_map)
149
--> 150 raise TypeError(default_collate_err_msg_format.format(elem_type))
151
TypeError: default_collate: batch must contain tensors, numpy arrays, numbers, dicts or lists; found <class 'NoneType'>
During handling of the above exception, another exception occurred:
TypeError Traceback (most recent call last)
<ipython-input-6-1153c5915bd8> in <cell line: 49>()
47 num_workers=loader_opts.get('num_workers', 0), drop_last=False, sampler=None)
48 print(f'{iter(data_loader)=}')
---> 49 print(f'{next(iter(data_loader))=}')
50 print('Done\a')
/usr/local/lib/python3.10/dist-packages/torch/utils/data/dataloader.py in __next__(self)
631 # TODO(https://github.com/pytorch/pytorch/issues/76750)
632 self._reset() # type: ignore[call-arg]
--> 633 data = self._next_data()
634 self._num_yielded += 1
635 if self._dataset_kind == _DatasetKind.Iterable and \
/usr/local/lib/python3.10/dist-packages/torch/utils/data/dataloader.py in _next_data(self)
675 def _next_data(self):
676 index = self._next_index() # may raise StopIteration
--> 677 data = self._dataset_fetcher.fetch(index) # may raise StopIteration
678 if self._pin_memory:
679 data = _utils.pin_memory.pin_memory(data, self._pin_memory_device)
/usr/local/lib/python3.10/dist-packages/torch/utils/data/_utils/fetch.py in fetch(self, possibly_batched_index)
40 else:
41 data = next(self.dataset_iter)
---> 42 return self.collate_fn(data)
43
44
/usr/local/lib/python3.10/dist-packages/torch/utils/data/_utils/collate.py in default_collate(batch)
263 >>> default_collate(batch) # Handle `CustomType` automatically
264 """
--> 265 return collate(batch, collate_fn_map=default_collate_fn_map)
/usr/local/lib/python3.10/dist-packages/torch/utils/data/_utils/collate.py in collate(batch, collate_fn_map)
128 except TypeError:
129 # The mapping type may not support `__init__(iterable)`.
--> 130 return {key: collate([d[key] for d in batch], collate_fn_map=collate_fn_map) for key in elem}
131 elif isinstance(elem, tuple) and hasattr(elem, '_fields'): # namedtuple
132 return elem_type(*(collate(samples, collate_fn_map=collate_fn_map) for samples in zip(*batch)))
/usr/local/lib/python3.10/dist-packages/torch/utils/data/_utils/collate.py in <dictcomp>(.0)
128 except TypeError:
129 # The mapping type may not support `__init__(iterable)`.
--> 130 return {key: collate([d[key] for d in batch], collate_fn_map=collate_fn_map) for key in elem}
131 elif isinstance(elem, tuple) and hasattr(elem, '_fields'): # namedtuple
132 return elem_type(*(collate(samples, collate_fn_map=collate_fn_map) for samples in zip(*batch)))
/usr/local/lib/python3.10/dist-packages/torch/utils/data/_utils/collate.py in collate(batch, collate_fn_map)
148 return [collate(samples, collate_fn_map=collate_fn_map) for samples in transposed]
149
--> 150 raise TypeError(default_collate_err_msg_format.format(elem_type))
151
152
TypeError: default_collate: batch must contain tensors, numpy arrays, numbers, dicts or lists; found <class 'NoneType'>
</code></pre>
<p>Why is this error happening?</p>
<p>I've done all the checks, e.g. made sure the returned items are dicts, and even went into detailed debugging with pdb inside PyTorch's code.</p>
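<p>One way I tried to reason about it: <code>default_collate</code> has no branch for <code>None</code>, so any key whose value comes out of <code>map()</code> as <code>None</code> fails exactly like this. A pure-Python mock of the per-key collation (no torch needed; the field names are made up):</p>

```python
def collate_key(values):
    # mimic default_collate's per-key behavior: numbers/lists/dicts are
    # fine, but there is no branch for None, so it raises TypeError
    if any(v is None for v in values):
        raise TypeError("batch must contain tensors, ...; found <class 'NoneType'>")
    return values

batch = [{"input_ids": [1, 2, 3], "extra_field": None}]  # hypothetical row

try:
    collated = {k: collate_key([d[k] for d in batch]) for k in batch[0]}
except TypeError as exc:
    print("collate failed:", exc)
```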
<ul>
<li><p>hf discuss: <a href="https://discuss.huggingface.co/t/how-does-one-create-a-pytorch-data-loader-with-a-custom-hugging-face-data-set-without-having-errors/50204" rel="nofollow noreferrer">https://discuss.huggingface.co/t/how-does-one-create-a-pytorch-data-loader-with-a-custom-hugging-face-data-set-without-having-errors/50204</a></p>
</li>
<li><p>hf discord: <a href="https://discord.com/channels/879548962464493619/1139007085363875922/1139007085363875922" rel="nofollow noreferrer">https://discord.com/channels/879548962464493619/1139007085363875922/1139007085363875922</a></p>
</li>
</ul>
| <python><huggingface-transformers><huggingface><huggingface-datasets><huggingface-hub> | 2023-08-10 01:24:33 | 1 | 6,126 | Charlie Parker |
76,871,958 | 13,494,917 | Azure function Blob Trigger being thrown "Language Worker Process exited. Pid=1652. python exited with code -1073741819 (0xC0000005)." | <p>I have a Blob Trigger function running that takes XML files and ends up storing the data in an Azure SQL db, mainly through the use of pandas and SQLAlchemy. I have had no real problem with any other XML file, but this specific one sends me an error when run through:</p>
<blockquote>
<p>Language Worker Process exited. Pid=1652.
python exited with code -1073741819 (0xC0000005).</p>
</blockquote>
<p>This error seems to trigger when data is being inserted into a specific table. The message doesn't mean much to me; what are the possible issues that could be happening here?</p>
| <python><pandas><azure><sqlalchemy><azure-functions> | 2023-08-10 00:22:03 | 1 | 687 | BlakeB9 |
76,871,703 | 1,322,962 | Python Protobuf `MessageToDict` And `MessageToJson` with `include_default_value_fields` Does Not Add Defaults To `google.protobuf.BoolValue` Fields | <p>I am attempting to use <code>MessageToJson</code> and <code>MessageToDict</code> with the <code>including_default_value_fields</code> argument to generate a Pandas DataFrame that then gets written to Parquet. I have a deeply nested data structure that I am pivoting into a "flattened" FACT table that then gets imported into Databricks, S3 for AWS Athena, and potentially other query engines. Parquet is much nicer to work with if you can define all of the columns you want up front, and it's a real pain to add columns on the fly.</p>
<p>So I am looking to generate default values for every field in my message that isn't assigned for each message.</p>
<p>To this effect, <code>including_default_value_fields</code> seems like it should be the right tool for the job, except that it is not currently assigning defaults for any field in my schema that uses <code>google.protobuf.BoolValue</code>.</p>
<p>Let's use my Address schema as an example.</p>
<h1>address.proto</h1>
<pre><code>message Address {
  string city = 1;
  string company = 2;
  string countryIso2 = 3;
  string email = 4;
  string name = 5;
  string phone = 6;
  string state = 7;
  string street1 = 8;
  string street2 = 9;
  string zip = 10;
  int64 countryId = 11;
  string street3 = 12;
  string streetNumber = 13;
  bool isResidential = 14 [deprecated = true];
  string objectPurpose = 15;
  string objectState = 16;
  string objectSource = 17;
  bool validate = 18;
  string metadata = 19;
  string objectId = 20;
  google.protobuf.BoolValue residential = 21;
  google.protobuf.BoolValue isTest = 22;
  repeated ValidationMessage messages = 23;
}
</code></pre>
<h1>PDB Session Showing Missing Fields</h1>
<p>There are 23 fields in the Address message type. If I use <code>including_default_value_fields=False</code> I get 12 keys in the nested address field for the test data that I'm using. If it's set to True, I get 21 keys.</p>
<pre><code>(Pdb) len(MessageToDict(shipment, including_default_value_fields=False)["addressTo"])
12
(Pdb) len(MessageToDict(shipment, including_default_value_fields=True)["addressTo"])
21
(Pdb) MessageToDict(shipment, including_default_value_fields=False)["addressTo"].keys()
dict_keys(['city', 'countryIso2', 'email', 'name', 'state', 'street1', 'zip', 'countryId', 'objectPurpose', 'objectState', 'objectSource', 'objectId'])
(Pdb) MessageToDict(shipment, including_default_value_fields=True)["addressTo"].keys()
dict_keys(['city', 'countryIso2', 'email', 'name', 'state', 'street1', 'zip', 'countryId', 'objectPurpose', 'objectState', 'objectSource', 'objectId', 'company', 'phone', 'street2', 'street3', 'streetNumber', 'isResidential', 'validate', 'metadata', 'messages'])
</code></pre>
<p>Of these, the missing keys are <code>isTest</code> and <code>residential</code>, which is consistent with all of the other nested message types I have in my larger structure.</p>
<p>Why aren't my <code>google.protobuf.BoolValue</code> fields getting assigned a default, and is there any way for me to fix this?</p>
<h1>Further Issues Trying To Manually Set Missing Fields</h1>
<pre><code>(Pdb) hasattr(shipment, "customsDeclaration")
True
(Pdb) MessageToDict(shipment)["customsDeclaration"]
*** KeyError: 'customsDeclaration'
(Pdb) MessageToDict(shipment, including_default_value_fields=True)["customsDeclaration"]
*** KeyError: 'customsDeclaration'
(Pdb) type(shipment.customsDeclaration)
<class 'rating.customs_pb2.CustomsDeclaration'>
(Pdb) shipment.customsDeclaration.IsInitialized()
True
(Pdb) shipment.HasField("customsDeclaration")
False
</code></pre>
<h1>Software Versions</h1>
<p>Python == 3.9.17</p>
<p>python-protobuf == 4.24.0</p>
| <python><protocol-buffers><protobuf-python> | 2023-08-09 22:45:19 | 0 | 8,598 | AlexLordThorsen |
76,871,675 | 2,662,901 | Matplotlib 3d plot_surface make edgecolors a function of z-value | <p>How can we make <code>edgecolors</code> of a 3D surface plot a function of the z value?</p>
<p>The following will make the edgecolors uniformly black:</p>
<pre><code>import numpy as np
import matplotlib.pyplot as plt
from matplotlib import cm, colors
from mpl_toolkits.mplot3d import Axes3D
x = np.linspace(-1, 1, 20)
y = np.linspace(-1, 1, 20)
x, y = np.meshgrid(x, y)
z = np.sin(x**2 + y**2)
fig = plt.figure()
ax = fig.add_subplot(projection='3d')
ax.plot_surface(x, y, z, cmap=cm.coolwarm, edgecolors='k')
plt.show()
</code></pre>
<p><a href="https://i.sstatic.net/8WZnC.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/8WZnC.png" alt="Sample 3D surface plot" /></a></p>
<p>How can we modify this example to:</p>
<ol>
<li>Make the edgecolors white for z < 0.5 and black for z >= 0.5?</li>
<li>Make the edgecolors range smoothly from white when z=0 to black when z=1?</li>
</ol>
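<p>For reference, the per-point edge colors I have in mind, computed with NumPy only; whether <code>plot_surface</code> accepts an array of colors here (and in what shape) is exactly the part I am unsure about:</p>

```python
import numpy as np

x = np.linspace(-1, 1, 20)
y = np.linspace(-1, 1, 20)
x, y = np.meshgrid(x, y)
z = np.sin(x**2 + y**2)

# 1) hard threshold: white where z < 0.5, black where z >= 0.5
edge_bw = np.where(z < 0.5, "w", "k")

# 2) smooth ramp: grayscale from white at z=0 to black at z=1
gray = 1.0 - np.clip(z, 0.0, 1.0)            # 1.0 -> white, 0.0 -> black
edge_smooth = np.dstack([gray, gray, gray])  # one RGB triple per grid point

print(edge_bw.shape, edge_smooth.shape)  # (20, 20) (20, 20, 3)
```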
| <python><matplotlib><matplotlib-3d> | 2023-08-09 22:37:45 | 1 | 3,497 | feetwet |
76,871,621 | 9,226,093 | rq worker unable to connect to EC2 through Heroku app (SSL: CERTIFICATE_VERIFY_FAILED) | <p>My Heroku app uses Redis and an rq worker. I recently enabled SSL on my Heroku app via the dashboard, and was met with SSL errors. Initially I saw the following error when hitting an endpoint that used Redis:</p>
<pre><code>app[redis-cylindrical-86839]: Error accepting a client connection: error:14094418:SSL routines:ssl3_read_bytes:tlsv1 alert unknown ca
</code></pre>
<p>but I resolved that by following advice from a similar Stack Overflow thread: initializing Redis in code with <code>ssl_cert_reqs=None</code>. However, I'm still seeing the following error in my rq worker:</p>
<pre><code>app[worker.1]: ERROR:root:Error 1 connecting to ec2-XX-XXX-XX-XXX.compute-1.amazonaws.com:11459. [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: self-signed certificate in certificate chain (_ssl.c:1002).
</code></pre>
<p>and the worker then crashes. Nothing I've tried thus far has worked (mainly different methods of initializing Redis found on Stack Overflow or suggested by Heroku support).</p>
<p>My current initialization code:</p>
<pre><code>from redis import Redis, from_url as redis_from_url
from rq import Queue
from dotenv import load_dotenv
# Setup
load_dotenv()
redis_url = os.getenv('REDIS_URL')
if redis_url:
    parsed_redis_url = urlparse(redis_url)
    redis = Redis(host=parsed_redis_url.hostname, port=parsed_redis_url.port, password=parsed_redis_url.password, ssl=True, ssl_cert_reqs=None)
else:
    # for local development
    redis = Redis()
</code></pre>
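<p>One thing I am considering, since the rq worker builds its own connection from <code>REDIS_URL</code>: encode the relaxed cert check in the URL itself, so every consumer of the config var picks it up. I believe redis-py's <code>from_url</code> honors an <code>ssl_cert_reqs=none</code> query parameter on <code>rediss://</code> URLs, but I am not certain:</p>

```python
# hypothetical Heroku-style rediss:// URL
redis_url = "rediss://:password@ec2-12-345-67-89.compute-1.amazonaws.com:11459"

# append ssl_cert_reqs=none so the web dyno *and* the rq worker both
# skip verification of Heroku's self-signed certificate
if redis_url.startswith("rediss://") and "ssl_cert_reqs" not in redis_url:
    sep = "&" if "?" in redis_url else "?"
    redis_url = f"{redis_url}{sep}ssl_cert_reqs=none"

print(redis_url)
```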
<p>Any insight into what's going on, or better yet how to fix my RQ worker, would be greatly appreciated. I understand there are similar threads on Stack Overflow, but none of them address the issue as it pertains to RQ workers, and none of their solutions have worked for me.</p>
| <python><flask><heroku><redis><rq> | 2023-08-09 22:22:07 | 1 | 776 | jacob_g |
76,871,548 | 21,575,627 | How does the value of an untouched integer change here? | <p>This really, really messed with me today. This is the most minimal working example I could get:</p>
<pre><code>array = [ 1, 2, 1, 3, 4 ]
n = 3
strips = [[0] * n] * n
mSum = 7
j = 0
strips[0][j] = mSum
print(f'before strips[0][0]: {strips[0][0]}')
for i in range(1, len(array) - n + 1):
    mSum += array[i + n - 1] - array[i - 1]
    strips[i][j] = mSum
    print(f'strips[{i}][{j}]: {strips[i][j]}')
print(f'after strips[0][0]: {strips[0][0]}')
</code></pre>
<p>If <code>strips</code> is not a nested-list, the error doesn't occur. Here is the output:</p>
<pre><code>before strips[0][0]: 7
strips[1][0]: 9
strips[2][0]: 11
after strips[0][0]: 11
</code></pre>
<p>Why is <code>strips[0][0]</code> different before and after the loop (where it does not get modified)? The only thing I can think of is that it's somehow connected to <code>mSum</code>.</p>
<p>The same code in a language like C++ does not have this issue.</p>
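<p>For reference, a quick identity check I could add right after building <code>strips</code>; my hunch is that the inner lists might share storage, and this would confirm it:</p>

```python
n = 3
strips = [[0] * n] * n

# are the three "rows" three distinct lists, or one list seen three times?
print(strips[0] is strips[1], strips[1] is strips[2])  # True True

# contrast with building a fresh inner list per row
independent = [[0] * n for _ in range(n)]
print(independent[0] is independent[1])  # False
```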
| <python><python-3.x><list><reference><programming-languages> | 2023-08-09 22:01:51 | 1 | 1,279 | user129393192 |
76,871,545 | 2,805,482 | python dict flatten nested dict values to create new key value pair in numpy/pandas | <p>Hi, I have data in the structure below: a map where the key is a label and the value is an array of arrays. I want to flatten the values and dynamically append an index to each key to create new rows, as shown below. I can iterate over each key-value pair, create a new dict, add these values to it, and get the expected result, but it's slow. I have around 50M values in the arrays; is there a faster approach in numpy/pandas?</p>
<p>This is what I have</p>
<pre><code>{'user_feature':
array([
[ 1.33677050e-02, -1.45685431e-02],
[-2.30765194e-02, 0.00000000e+00],
[0.00000000e+00, 0.00000000e+00],
[1.16669689e-04, 1.33677050e-02]]),
'sequence_service_id_list':
    array([
[215., 215., 215., ..., 554., 215., 215.],
[215., 215., 215., ..., 215., 215., 215.],
       [215., 215., 554., ..., 215., 215., 215.]]),
'target_label':
array([
1.,
1.,
1., ..., 1., 1., 1.])}
</code></pre>
<p>Expected:</p>
<pre><code>{'user_feature_1': [ 1.33677050e-02, -1.45685431e-02],
'user_feature_2': [-2.30765194e-02, 0.00000000e+00],
'user_feature_3': [0.00000000e+00, 0.00000000e+00],
'sequence_service_id_list_1': [215., 215., 215., ..., 554., 215., 215.],
'sequence_service_id_list_2': [215., 215., 215., ..., 215., 215., 215.],
'sequence_service_id_list_3': [215., 215., 554., ..., 215., 215., 215.],
'target_label_1': 1.,
'target_label_2': 1.,
'target_label_3': 1.,
}
</code></pre>
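<p>For reference, the straightforward per-row iteration I described (a sketch with toy data; plain lists stand in for the numpy arrays, and the values are illustrative) looks like this:</p>

```python
data = {
    "user_feature": [[0.0134, -0.0146], [-0.0231, 0.0], [0.0, 0.0]],
    "target_label": [1.0, 1.0, 1.0],
}

# One new key per row, suffixed with a 1-based index.
flat = {
    f"{key}_{i + 1}": row
    for key, arr in data.items()
    for i, row in enumerate(arr)
}

print(sorted(flat))
# ['target_label_1', 'target_label_2', 'target_label_3',
#  'user_feature_1', 'user_feature_2', 'user_feature_3']
```

<p>This is the slow path I want to avoid at 50M values, which is why I am looking for a vectorized numpy/pandas equivalent.</p>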
| <python><pandas><numpy><numpy-ndarray> | 2023-08-09 22:01:49 | 1 | 1,677 | Explorer |
76,871,488 | 12,176,460 | How to manage Lambda Layers in Terraform? | <p>I keep my Terraform code in Git, and my Terraform contains an <code>aws_lambda_layer_version</code> resource. Now, I don't want to keep it as a <code>.zip</code> file. Rather, I'm trying the following approach: keep a <code>requirements.txt</code> file with the Python dependencies that I want to include in the layer, and whenever this file changes, create a new version of the layer:</p>
<pre><code>resource "null_resource" "layer_file" {
triggers = {
value = filebase64sha256("${path.root}/layer/requirements.txt")
}
provisioner "local-exec" {
command = <<EOT
mkdir python
pip install -r layer/requirements.txt --target ./python
zip -r layer/layer.zip python/
EOT
}
}
resource "aws_lambda_layer_version" "python_requests" {
filename = "${path.root}/layer/layer.zip"
layer_name = "BackupDeps"
compatible_runtimes = ["python3.10"]
source_code_hash = filebase64sha256("${path.root}/layer/layer.zip")
depends_on = [null_resource.layer_file]
}
</code></pre>
<p>The problem with this is that in order for this to work, I have to run Terraform twice:</p>
<p>On the first run, Terraform triggers the <code>null_resource</code> since the <code>requirements.txt</code> is changed. However, the <code>layer.zip</code> is not yet updated, so the <code>aws_lambda_layer_version</code> will not be changed/updated.</p>
<p>On the second run, Terraform will notice that the <code>layer.zip</code> file is changed, so that it will trigger a change in the <code>aws_lambda_layer_version.python_requests</code> resource.</p>
<p>Is there a better way to manage this?</p>
| <python><amazon-web-services><terraform><terraform-provider-aws> | 2023-08-09 21:50:19 | 0 | 2,875 | YoavKlein |
76,871,381 | 1,456,253 | How to have conditional workflow execution in flyte? | <p>I am currently using flyte for a project. In it, I have a number of workflows: A, B, C, etc.
However, I recently identified a use case that requires me to change which workflow runs first: A1 or A2.</p>
<p>After learning you can't use if statements in a workflow due to</p>
<blockquote>
<p>Flytekit does not support Unary expressions or performing truth value
testing</p>
</blockquote>
<p>I determined that I probably needed to use dynamics since the execution is being determined at runtime.</p>
<p>So I tried something like:</p>
<pre><code>#...Reference tasks are described up here...
@dynamic
def start_node_generator(should_use_a1):
if should_use_a1:
node_to_use = create_node(node_a1_reference_task)
conditional_val=1
return conditional_val, node_to_use
else:
node_to_use = create_node(node_a2_reference_task).with_overrides(name="spam")
conditional_val=node_to_use.o1
return conditional_val, node_to_use
@workflow
def main_workflow(should_use_a1):
conditional_val, nodeA = start_node_generator(should_use_a1=should_use_a1)
nodeB = create_node(nodeB_reference_task, conditional_val=conditional_val)
nodeC = create_node(nodeC_reference_task)
nodeA >> nodeB
nodeB >> nodeC
</code></pre>
<p>However, I'm clearly thinking about this slightly wrong. You can't return Nodes out of either @tasks or @dynamics since they're not serializable.</p>
<p>How do you conditionally run A1 or A2 first, then the rest?</p>
| <python><flyte> | 2023-08-09 21:28:03 | 1 | 2,397 | code11 |
76,871,262 | 11,152,224 | Postman: Bad Request - Expected request content | <p>I'm making a REST API with the BlackSheep framework, Piccolo ORM, and its Pydantic models.</p>
<p>Table in Piccolo ORM:</p>
<pre><code>class Expense(Table):
amount = Integer()
description = Varchar()
</code></pre>
<p>Working with Pydantic models in application:</p>
<pre><code>from piccolo.utils.pydantic import create_pydantic_model
from blacksheep import Application, FromJSON, json, Response, Content
ExpenseModelIn: typing.Any = create_pydantic_model(table=Expense, model_name="ExpenseModelIn")
ExpenseModelOut: typing.Any = create_pydantic_model(table=Expense, include_default_columns=True, model_name="ExpenseModelOut")
@app.router.post("/expense")
async def create_expense(expense_model: FromJSON[ExpenseModelIn]):
print(expense_model.value)
try:
expense = Expense(**expense_model.value.dict())
await expense.save()
return ExpenseModelOut(**expense.to_dict())
except:
return Response(400, content=Content(b"text/plain", b"Bad Request"))
</code></pre>
<p>When I try to make a POST request in Postman, I get a 400 Bad Request error.
<a href="https://i.sstatic.net/EU2Ud.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/EU2Ud.png" alt="Postman POST request" /></a></p>
<p>If I <code>print()</code> the content of <code>ExpenseModelIn</code>, I get <code>None</code> for every parameter sent from Postman. Do I need to send the parameters in the Headers section in Postman, or somewhere else?</p>
<p>P.S. I tried sending them in the Headers section; that didn't help.</p>
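<p>For what it's worth, outside Postman I would expect a request shaped like this to work (a stdlib sketch; the URL is illustrative). The key point is a JSON body with a <code>Content-Type: application/json</code> header, which in Postman corresponds to Body, raw, JSON rather than form-data or headers:</p>

```python
import json
import urllib.request

payload = {"amount": 100, "description": "groceries"}

req = urllib.request.Request(
    "http://localhost:8000/expense",  # illustrative URL
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
    method="POST",
)
# urllib.request.urlopen(req) would actually send it; omitted here
# since no server is running.
```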
| <python><postman><blacksheep> | 2023-08-09 21:04:52 | 1 | 569 | WideWood |
76,871,063 | 2,987,780 | Unable to find button element in my Selenium Python code | <p>I want to locate the following "New Request" button from the HTML page (the button tag is shown below):</p>
<pre><code><button class="as-main-button definitely-not-a-aff-button" onclick="window.location.href='/FireFlow/SelfService/ChooseTemplate.html?Queue=1';">+ &nbsp;New Request</button>
</code></pre>
<p>Here is the code I tried. After many attempts it still cannot find the desired button, and I am not sure why the button is not visible to the code:</p>
<pre><code>try:
button_class_name = "button.as-main-button.definitely-not-a-aff-button"
element = wait.until(EC.presence_of_element_located((By.CSS_SELECTOR, button_class_name)))
logging.info("New Request element is visible")
time.sleep(5)
except Exception as e:
logging.error(f"Failed to create new request : {e}")
</code></pre>
<p>And I got following error all the time:</p>
<pre><code>2023-08-10 01:34:45,850 - ERROR - Failed to create new request : Message:
Stacktrace:
Backtrace:
GetHandleVerifier [0x003EA813+48355]
(No symbol) [0x0037C4B1]
(No symbol) [0x00285358]
(No symbol) [0x002B09A5]
(No symbol) [0x002B0B3B]
(No symbol) [0x002DE232]
(No symbol) [0x002CA784]
(No symbol) [0x002DC922]
(No symbol) [0x002CA536]
(No symbol) [0x002A82DC]
(No symbol) [0x002A93DD]
GetHandleVerifier [0x0064AABD+2539405]
GetHandleVerifier [0x0068A78F+2800735]
GetHandleVerifier [0x0068456C+2775612]
GetHandleVerifier [0x004751E0+616112]
(No symbol) [0x00385F8C]
(No symbol) [0x00382328]
(No symbol) [0x0038240B]
(No symbol) [0x00374FF7]
BaseThreadInitThunk [0x75AA7D59+25]
RtlInitializeExceptionChain [0x7761B79B+107]
RtlClearBits [0x7761B71F+191]
</code></pre>
<p>This is the web page I am looking at (not the entire page):</p>
<p><a href="https://gist.github.com/indrajeetgour/a54b8f2f89aaafe6160cb8bfe189ff66" rel="nofollow noreferrer">https://gist.github.com/indrajeetgour/a54b8f2f89aaafe6160cb8bfe189ff66</a></p>
<p>Do let me know if anything more is required to solve this problem.
Just to mention, I am new to Selenium. Please help me out.</p>
<p>Update 1: I forgot to mention that I got to this page from another page. Should I switch the driver first before searching for any element in the new page (as mentioned above)?</p>
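<p>For completeness, here are the locator variants I am considering (strings only, so nothing below needs a live browser; the XPath is an assumption on my part), plus the iframe question from Update 1 as comments:</p>

```python
# Candidate locators for the button; which one works depends on the page.
LOCATORS = [
    ("css selector", "button.as-main-button.definitely-not-a-aff-button"),
    ("xpath", "//button[contains(normalize-space(.), 'New Request')]"),
]

# If the button lives inside an <iframe>, the driver has to switch first,
# e.g.: driver.switch_to.frame(driver.find_element(By.TAG_NAME, "iframe"))
# Also, element_to_be_clickable is usually a better wait than
# presence_of_element_located when the next step is a click.
for how, what in LOCATORS:
    print(how, "->", what)
```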
| <python><selenium-webdriver><selenium-chromedriver> | 2023-08-09 20:28:57 | 2 | 4,570 | Indrajeet Gour |
76,870,917 | 1,344,855 | FastAPI type hint error in Dependency during unit testing | <p>I'm encountering a type hint error (Invalid args for response field) in FastAPI when using a dependency. This issue only arises during unit testing.</p>
<p>Endpoint:</p>
<pre class="lang-py prettyprint-override"><code>@router.get(
'/me/account/user',
response_model=schemas.User,
)
async def account_user(user: Annotated[schemas.User, Depends(prepare_user)]):
return user.dict()
</code></pre>
<p>Dependency:</p>
<pre class="lang-py prettyprint-override"><code>async def prepare_user(
request: Request,
db: Annotated[AsyncSession, Depends(async_get_db)]
) -> schemas.User:
"""Create a user if it does not exist, otherwise return the user"""
external_id = request.state.user.claims.get("sub")
user = await crud.get_user_by_external_id(db=db, external_id=external_id)
if not user:
user_in = schemas.UserCreate(external_id=external_id, is_active=True)
user = await crud.create_user(db=db, user=user_in)
user = schemas.User.from_orm(user)
return user
</code></pre>
<p>My conftest.py for Pytest:</p>
<pre class="lang-py prettyprint-override"><code>@pytest.fixture(scope='function')
async def db() -> AsyncSession:
async with async_session() as session:
yield session
@pytest.fixture(scope='function')
async def normal_client():
"""Fixture for a normal/non-admin user client."""
async def override_async_get_db(db: AsyncSession) -> AsyncSession:
try:
yield db
finally:
await db.close()
app.dependency_overrides[async_get_db] = override_async_get_db
async def mock_normal_user(request: Request):
user = B2CUser(
iss='iss',
iat=1,
nbf=2,
exp=3,
sub='sub',
ver='2.0',
claims={
'sub': 'sub'
},
aud='aud',
tid='tid',
access_token='123',
)
request.state.user = user
return user
app.dependency_overrides[azure_scheme] = mock_normal_user
async with AsyncClient(app=app, base_url='http://test') as c:
yield c
app.dependency_overrides = {}
</code></pre>
<p>My test case:</p>
<pre class="lang-py prettyprint-override"><code>async def test_endpoint(normal_client: AsyncClient):
response = await normal_client.get('/me/account/user')
assert response.status_code == 200
</code></pre>
<p>Error:</p>
<pre><code>fastapi.exceptions.FastAPIError: Invalid args for response field!
Hint: check that <class 'sqlalchemy.ext.asyncio.session.AsyncSession'> is a valid Pydantic field type.
If you are using a return type annotation that is not a valid Pydantic field (e.g. Union[Response, dict, None]) you can disable generating the response model from the type annotation with the path operation decorator parameter response_model=None.
Read more: https://fastapi.tiangolo.com/tutorial/response-model/
</code></pre>
<p>While many of my tests work (those not using the prepare_user dependency), this specific test fails. However, when I test the app using the openAPI client, it functions correctly and returns the expected user details.</p>
<p>Versions:</p>
<pre><code>fastapi 0.97.0
pydantic 1.10.12
pytest 7.4.0
pytest-asyncio 0.21.1
Python 3.11.4
</code></pre>
<p>I'm seeking guidance on how to resolve this type hint error during unit testing.</p>
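<p>One detail I am now suspicious of (an assumption, sketched below without FastAPI): the override itself declares a <code>db: AsyncSession</code> parameter, so FastAPI may be trying to resolve that parameter as a field when it analyses the override. Building the override as a closure over the fixture's session leaves it with no parameters to inspect:</p>

```python
# Sketch: the override closes over the already-created session instead of
# declaring it as a parameter (parameter-less signature for FastAPI).
def make_override(db):
    async def override_async_get_db():  # note: no `db: AsyncSession` arg
        try:
            yield db
        finally:
            await db.close()
    return override_async_get_db
```

<p>In the fixture this would be wired up as <code>app.dependency_overrides[async_get_db] = make_override(db)</code>.</p>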
| <python><pytest><fastapi><pydantic> | 2023-08-09 20:00:53 | 1 | 2,436 | dh762 |
76,870,823 | 59,890 | Python Lambda Functions: return statusCode or status_code? | <p>When using an AWS Lambda function through API Gateway in Python, are you supposed to call the HTTP response field "statusCode" or "status_code"?</p>
<p>AWS recommends "statusCode":
<a href="https://docs.aws.amazon.com/apigateway/latest/developerguide/handle-errors-in-lambda-integration.html" rel="nofollow noreferrer">https://docs.aws.amazon.com/apigateway/latest/developerguide/handle-errors-in-lambda-integration.html</a></p>
<p>but in python, using the 'requests' library, the requests.Response object expects a different casing:
<a href="https://www.w3schools.com/python/ref_requests_response.asp" rel="nofollow noreferrer">https://www.w3schools.com/python/ref_requests_response.asp</a></p>
<p>and doesn't manage to pick up the HTTP status code that I am returning from the lambda, like so:</p>
<pre><code>return {
"statusCode": 500,
'message': "3rd party API call failed. Check server logs for details."
}
</code></pre>
<p>When I run my tests, this yields:</p>
<blockquote>
<p>post_update_response.status_code = <strong>200</strong></p>
</blockquote>
<blockquote>
<p>post_update_response.text:
{"statusCode": 500, "message": "3rd party API call failed. Check
server logs for details."}</p>
</blockquote>
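<p>For context, my understanding of the Lambda <em>proxy</em> integration contract is that API Gateway expects camelCase <code>statusCode</code> plus a string <code>body</code> field; a return value that doesn't match that shape is passed through as the payload of a 200 response, which would match what my tests show. A sketch:</p>

```python
import json

def lambda_handler(event, context):
    # Proxy-integration response shape: camelCase "statusCode",
    # and the payload serialized into a string "body" field.
    return {
        "statusCode": 500,
        "body": json.dumps(
            {"message": "3rd party API call failed. Check server logs for details."}
        ),
    }

resp = lambda_handler({}, None)
print(resp["statusCode"])  # 500
```

<p>(<code>status_code</code> on <code>requests.Response</code> is just the client-side attribute name in the 'requests' library; it doesn't affect what the Lambda should return.)</p>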
| <python><amazon-web-services><aws-lambda> | 2023-08-09 19:44:28 | 1 | 1,304 | rajat banerjee |
76,870,811 | 19,950,360 | Only Xcom or data flow python operator? | <p>Hi, I'm using the Airflow S3ListOperator:</p>
<pre><code>response_table_parquet_list = S3ListOperator(
task_id = 'get_bucket_object_list_operator',
aws_conn_id = 'cloudFlare_conn',
bucket = 'paprika-table-storage',
prefix = 'response_table/',
dag = dag,
)
</code></pre>
<p>This lists my bucket objects (the file blob list).
But how do I get that list back after the task runs?</p>
<p>Is it true that no operator except PythonOperator can return data via XCom?</p>
<p>I already tried <code>xcom_pull(task_ids='response_table_parquet_list')</code>, but it returns <code>None</code>.</p>
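<p>One thing I noticed while debugging (sketched below with a plain dict standing in for the XCom store): XCom entries are keyed by the operator's <code>task_id</code> string, here <code>'get_bucket_object_list_operator'</code>, not by the Python variable name <code>response_table_parquet_list</code>:</p>

```python
# Toy stand-in for the XCom store, keyed the way Airflow keys it
# (the parquet file name is illustrative):
xcom = {"get_bucket_object_list_operator": ["response_table/part-0.parquet"]}

print(xcom.get("response_table_parquet_list"))      # None -- wrong key
print(xcom.get("get_bucket_object_list_operator"))  # the file list

# So the pull would need:
#   ti.xcom_pull(task_ids='get_bucket_object_list_operator')
```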
| <python><amazon-s3><airflow> | 2023-08-09 19:43:25 | 1 | 315 | lima |
76,870,729 | 6,867,048 | convert interactive python commands to py file commands | <p>Below are the commands & the corresponding output from a pyspark-shell.
I'm trying to separate out just the commands and copy them to a new file.</p>
<pre><code>>>> m=[10,20,30,40,50]
>>> print(m)
[10, 20, 30, 40, 50]
>>> print(type(m))
<class 'list'>
>>> data=[[m]]
>>> header=["num"]
>>> df=spark.createDataFrame(data,header)
>>> df.show(truncate=False)
+--------------------+
|num |
+--------------------+
|[10, 20, 30, 40, 50]|
+--------------------+
>>> df.printSchema()
root
|-- num: array (nullable = true)
| |-- element: long (containsNull = true)
>>> (spark.sql("select num, explode(num) x from numbers")
... .show(truncate=False))
+--------------------+---+
|num |x |
+--------------------+---+
|[10, 20, 30, 40, 50]|10 |
|[10, 20, 30, 40, 50]|20 |
|[10, 20, 30, 40, 50]|30 |
|[10, 20, 30, 40, 50]|40 |
|[10, 20, 30, 40, 50]|50 |
+--------------------+---+
>>> spark.sql(""" select num,
... explode(num) x from numbers
... """)
DataFrame[num: array<bigint>, x: bigint]
>>>
>>> def run(file):
... exec(open(file).read())
...
>>>
</code></pre>
<p>I had partial success with the below perl command,</p>
<pre><code>perl -ne ' { /^>>>(.*)/ && print "$1\n" } ' py_commands.txt
</code></pre>
<p>but there are corner cases where the Python commands extend over more than one line, when parentheses "()" or triple quotes """ are used.</p>
<p>Required output:</p>
<pre><code>m=[10,20,30,40,50]
print(m)
print(type(m))
data=[[m]]
header=["num"]
df=spark.createDataFrame(data,header)
df.show(truncate=False)
df.printSchema()
(spark.sql("select num, explode(num) x from numbers")
.show(truncate=False))
spark.sql(""" select num,
explode(num) x from numbers
""")
def run(file):
exec(open(file).read())
</code></pre>
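<p>Since the corner cases are all about continuation lines, here is an alternative sketch in Python itself that keeps both the <code>&gt;&gt;&gt;</code>-prefixed and <code>...</code>-prefixed input lines and drops everything else:</p>

```python
def extract_commands(lines):
    """Keep interactive input lines, stripping the prompt prefixes."""
    out = []
    for line in lines:
        if line.startswith(">>> ") or line.startswith("... "):
            out.append(line[4:])          # drop ">>> " / "... "
        elif line.rstrip() in (">>>", "..."):
            out.append("")                # bare prompt -> blank line
        # anything else is program output: skip it
    return out

session = [
    '>>> (spark.sql("select 1")',
    "...  .show())",
    "+---+",
]
print("\n".join(extract_commands(session)))
```

<p>Multi-line statements come through intact because the <code>...</code> continuation prompt is stripped the same way as the primary prompt.</p>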
| <python><regex> | 2023-08-09 19:30:11 | 2 | 8,893 | stack0114106 |
76,870,568 | 16,425,029 | Custom permission class not returning custom permission denied message - DRF | <p>I have a custom permission class where I want a custom permission message to be returned. But it is returning the default permission denied message. Below is the custom permission class that I have written:</p>
<pre><code>
class IsAuthenticatedAndIsLr(BasePermission):
def has_permission(self, request, view):
message = "user is not LR"
return request.user.is_authenticated and request.user.regd_as == "LR"
</code></pre>
<p>and here is how I am using it on the api:</p>
<pre><code>@api_view(['POST'])
@permission_classes([IsAuthenticatedAndIsLr])
def create_profile(request):
'''
this api creates profile for users based
on their choosen category.
'''
data = request.data
model_dict = {
'IND'
}
try:
...
return Response({'message': f'{request.user} isLR'}, status= status.HTTP_201_CREATED)
except Exception as e:
return Response({"message": str(e)}, status= status.HTTP_500_INTERNAL_SERVER_ERROR)
</code></pre>
<p>I have followed the method from the docs to raise a custom message, but for some reason it is not working. Please suggest what the cause of this problem might be.</p>
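<p>If I read the DRF docs right (sketched below, with a plain class standing in for <code>rest_framework.permissions.BasePermission</code>), <code>message</code> has to be a <em>class attribute</em>; assigning it to a local variable inside <code>has_permission</code> just creates a throwaway name:</p>

```python
class BasePermission:                      # stand-in for DRF's BasePermission
    message = "default permission denied"

class IsAuthenticatedAndIsLr(BasePermission):
    message = "user is not LR"             # class attribute, not a local

    def has_permission(self, request, view):
        return request.user.is_authenticated and request.user.regd_as == "LR"

print(IsAuthenticatedAndIsLr.message)      # 'user is not LR'
```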
| <python><django><django-rest-framework> | 2023-08-09 19:01:07 | 1 | 776 | Ritankar Bhattacharjee |
76,870,545 | 8,378,817 | Creating a graph from research papers | <p>I am working on a simple passion project involving graph analytics for research papers.
Basically, I want to create a graph structure connecting research papers and eventually analyze how information flows through the nodes, something similar to "Connected Papers". But as I am a novice in graph analytics, I would like your advice on how to proceed.</p>
<p>When working with research papers, let's say we have a few pieces of information, such as:</p>
<pre><code>{
'paperId':
'title':
'year published':
'abstract':
'authors':
'keywords':
...
}
</code></pre>
<p>I have a few nodes created from 'paperId', with the attributes listed above.
An example of unconnected nodes:
<a href="https://i.sstatic.net/2tcWS.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/2tcWS.png" alt="enter image description here" /></a></p>
<p>So, my questions are:</p>
<ul>
<li>How can I connect these nodes based on the attributes?</li>
<li>If an article has an author who is also present in another node, the
two nodes should be connected. How should we represent that?</li>
<li>Also, should I be creating individual nodes from the attributes as well? E.g., should authors have their own nodes, and similarly for dates, institutions, keywords, and so on?</li>
</ul>
<p>It would be really helpful to develop a strategy for building the network data structure before doing any analytics, and I would appreciate help from experts on this side project.
I am currently using Python NetworkX.</p>
<ul>
<li>What else should I be thinking about in terms of tools?
Currently I am only testing on ~30 articles, but would like to scale it up to 1000 articles, and even more if I can gather more articles.</li>
</ul>
<p>Thank you so much!</p>
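<p>To make the shared-author rule concrete, here is a minimal library-free sketch (plain dicts stand in for NetworkX, and the paper data is made up):</p>

```python
from itertools import combinations

papers = {
    "p1": {"title": "Paper A", "authors": {"alice", "bob"}},
    "p2": {"title": "Paper B", "authors": {"bob"}},
    "p3": {"title": "Paper C", "authors": {"carol"}},
}

# Connect two papers iff they share at least one author; keep the
# shared authors as an edge attribute (in NetworkX this would become
# G.add_edge(a, b, shared_authors=shared)).
edges = {}
for a, b in combinations(papers, 2):
    shared = papers[a]["authors"] & papers[b]["authors"]
    if shared:
        edges[(a, b)] = {"shared_authors": shared}

print(edges)  # {('p1', 'p2'): {'shared_authors': {'bob'}}}
```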
| <python><graph><networkx><graph-theory><graph-neural-network> | 2023-08-09 18:56:49 | 1 | 365 | stackword_0 |
76,870,472 | 4,391,249 | Why can't Python infer that my class follows the Sequence protocol? | <p>I would expect that if I write a class that implements all methods needed for a <code>Sequence</code> as per <a href="https://docs.python.org/3.10/library/collections.abc.html" rel="nofollow noreferrer">https://docs.python.org/3.10/library/collections.abc.html</a>, that Python would use duck-typing to know that an instance of it is a Sequence, but I find that is not the case:</p>
<pre class="lang-py prettyprint-override"><code>from collections.abc import Sequence
class MySequence:
def __getitem__(self, ix):
pass
def __iter__(self):
pass
def __len__(self):
pass
def __contains__(self, val):
pass
def __reversed__(self):
pass
def count(self, val):
pass
def index(self, val):
pass
seq = MySequence()
print(isinstance(seq, Sequence))
</code></pre>
<p>This prints False instead of True. Why?</p>
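<p>What I have found so far (sketch below): explicitly registering the class makes the check pass, which suggests <code>Sequence</code> does no structural checking at all, unlike, say, <code>collections.abc.Iterable</code>, whose <code>__subclasshook__</code> does duck-type on <code>__iter__</code>:</p>

```python
from collections.abc import Iterable, Sequence

class Duck:
    def __getitem__(self, ix):
        return ix
    def __len__(self):
        return 0
    def __iter__(self):
        return iter(())

d = Duck()
print(isinstance(d, Iterable))   # True  -- Iterable duck-types __iter__
print(isinstance(d, Sequence))   # False -- Sequence does not duck-type

Sequence.register(Duck)          # explicit registration
print(isinstance(d, Sequence))   # True
```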
| <python><duck-typing> | 2023-08-09 18:42:56 | 1 | 3,347 | Alexander Soare |
76,870,387 | 2,726,900 | FastKafka library: how to commit messages in consumer manually when enable_autocommit=False? | <p>I want to read messages from a Kafka topic (wrapping my message processor with the <code>consumes</code> decorator of the <code>fastkafka</code> library).</p>
<p>But I don't want to just auto-commit messages; I want to commit them manually -- and only if the message processing succeeded (i.e. my extra function that processes their internal data did not throw an exception).</p>
<p>How can it be done? How can I commit messages manually -- and only when I really want to commit them?</p>
| <python><apache-kafka><fastapi> | 2023-08-09 18:26:24 | 1 | 3,669 | Felix |
76,870,366 | 3,247,006 | No build() and create() vs build() vs create() in Factory Boy | <p>I created <code>UserFactory</code> class in <code>factories.py</code> as shown below. I use <a href="https://github.com/pytest-dev/pytest-django" rel="nofollow noreferrer">pytest-django</a> and <a href="https://github.com/pytest-dev/pytest-factoryboy" rel="nofollow noreferrer">pytest-factoryboy</a> in Django:</p>
<pre class="lang-py prettyprint-override"><code># "factories.py"
import factory
from django.contrib.auth.models import User
class UserFactory(factory.django.DjangoModelFactory):
class Meta:
model = User
username = "John"
</code></pre>
<p>Then, I registered <code>UserFactory</code> class in <code>conftest.py</code> as shown below:</p>
<pre class="lang-py prettyprint-override"><code># "conftest.py"
import pytest
from pytest_factoryboy import register
from tests.factories import UserFactory
register(UserFactory)
</code></pre>
<p>Then, I created <code>test_user()</code> which prints <code>username</code> with <code>user_factory</code> and <a href="https://factoryboy.readthedocs.io/en/stable/reference.html#factory.Factory.build" rel="nofollow noreferrer">user_factory.build()</a> and <a href="https://factoryboy.readthedocs.io/en/stable/reference.html#factory.Factory.create" rel="nofollow noreferrer">user_factory.create()</a> in <code>test_ex1.py</code> as shown below:</p>
<pre class="lang-py prettyprint-override"><code># "test_ex1.py"
import pytest
from django.contrib.auth.models import User
def test_user(db, user_factory):
print(user_factory.username) # Here
print(user_factory.build().username) # Here
print(user_factory.create().username) # Here
assert True
</code></pre>
<p>Then, I got 3 <code>John</code>'s as shown below:</p>
<pre class="lang-none prettyprint-override"><code>$ pytest -q -rP
. [100%]
=============== PASSES ================
____________ test_new_user ____________
-------- Captured stdout call ---------
John
John
John
1 passed in 0.55s
</code></pre>
<p>My questions:</p>
<ol>
<li>What is the difference between <code>user_factory</code>, <code>user_factory.build()</code> and <code>user_factory.create()</code> in Factory Boy?</li>
<li>When should I use <code>user_factory</code>, <code>user_factory.build()</code> and <code>user_factory.create()</code> in Factory Boy?</li>
</ol>
| <python><django><pytest-django><factory-boy><pytest-factoryboy> | 2023-08-09 18:22:31 | 1 | 42,516 | Super Kai - Kazuya Ito |
76,870,342 | 12,297,666 | Predictions of a multiclass problem with Sklearn | <p>I have a multiclass problem that contains twelve classes (Class <code>0</code>, <code>1</code>, <code>2</code>, ..., <code>11</code>). My <code>x_train</code> has shape <code>(31187, 36)</code> and <code>y_train</code> has shape <code>(31187,)</code>.</p>
<p>Here is an example of my <code>y_train</code> array which contain the classes of each sample of my train dataset:</p>
<pre><code>y_train
Out[25]: array([ 6, 8, 1, ..., 11, 0, 4], dtype=int64)
</code></pre>
<p>I have used <code>LabelBinarizer()</code> as follows to encode my output into a one-hot-encode format:</p>
<pre><code>label_bin = LabelBinarizer()
unique_classes = np.unique(y_train)
label_bin.fit_transform(unique_classes)
</code></pre>
<p>And got this output from it:</p>
<pre><code>array([[1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
[0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
[0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0],
[0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0],
[0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0],
[0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0],
[0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0],
[0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0],
[0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1]])
</code></pre>
<p>I then proceed to train my classifier:</p>
<pre><code>y_train_one_hot = label_bin.transform(y_train)
classifier_MLP = MLPClassifier(hidden_layer_sizes=(2*x_train.shape[1],
4*y_train_one_hot.shape[1],
2*y_train_one_hot.shape[1]),
max_iter=500,
random_state=42)
classifier_MLP.fit(x_train, y_train_one_hot)
</code></pre>
<p>Here is my question: when using the <code>predict</code> method, why do some samples not come out in the <code>LabelBinarizer</code> format (one-hot encoded)?</p>
<p>For example, to following sample (sample <code>8</code>) of my <code>x_test</code> dataset, yields this:</p>
<pre><code>classifier_MLP.predict(x_test[8].reshape(1, -1))
array([0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], dtype=int64)
</code></pre>
<p>But, according to the encoding I have done, I was expecting to get this array instead:</p>
<pre><code>array([1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], dtype=int64)
</code></pre>
<p>If I check which class that is, using the <code>inverse_transform</code> method, I get class <code>0</code> as output:</p>
<pre><code>label_bin.inverse_transform(classifier_MLP.predict(x_test[8].reshape(1, -1)))
array([0], dtype=int64)
</code></pre>
<p>This does not happen all the time; some samples (for example, sample <code>42</code>) actually return what I expect:</p>
<pre><code>classifier_MLP.predict(x_test[42].reshape(1, -1))
array([[1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]])
</code></pre>
<p>Which is the class <code>0</code>:</p>
<pre><code>label_bin.inverse_transform(classifier_MLP.predict(x_test[42].reshape(1, -1)))
array([0], dtype=int64)
</code></pre>
<p>What could be the cause of this? What am I missing?</p>
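<p>A guess at what is happening, illustrated below without sklearn (an assumption on my part): fitting on the one-hot matrix puts <code>MLPClassifier</code> into <em>multilabel</em> mode, where <code>predict</code> thresholds each of the 12 outputs at 0.5 independently, so a row where nothing clears 0.5 comes out all zeros; <code>inverse_transform</code> then effectively takes an argmax, and the argmax of an all-zero row is index 0, i.e. class <code>0</code>:</p>

```python
# A "probability" row where no single class clears the 0.5 threshold:
probs = [0.31, 0.22, 0.12, 0.09, 0.08, 0.05,
         0.04, 0.03, 0.02, 0.02, 0.01, 0.01]

multilabel_row = [1 if p >= 0.5 else 0 for p in probs]
print(multilabel_row)  # all zeros

# inverse_transform effectively argmaxes the row; the argmax of an
# all-zero row is position 0 -> class 0.
recovered = max(range(len(multilabel_row)), key=multilabel_row.__getitem__)
print(recovered)       # 0

# Fitting on the integer labels directly (y_train, not the one-hot
# matrix) would keep sklearn in ordinary multiclass mode.
```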
| <python><scikit-learn> | 2023-08-09 18:18:29 | 1 | 679 | Murilo |
76,870,341 | 6,038,082 | Not able to draw a line between QFrames added in QGridLayout - how to join them by lines? | <p>I am trying to build a GUI in PyQt where I am adding frames in a grid layout.
There can be any number of frames present in the layout in each row with a gap of one grid position.
For example: (0,0), (0,2), (1,0), (1,2), (2,0), (2,2).</p>
<p>I want to join each frame with one another by lines and want to show some number as I mouse hover on each line.</p>
<p>However, the lines are not showing up.
I made a simpler application (not using QGridLayout) where the lines do appear, but it's very difficult to find the positions of the widgets so that the lines run from the center of each widget and look prominent. Using pos(), it draws from the edge, so I used the x() and y() coordinates to get points nearer the center. I could not find any easy way to do this.</p>
<p>Below is the simpler application, which somewhat works; I am adding it for demonstration purposes since I mentioned it:</p>
<pre><code>import sys
from PyQt5.QtCore import QEvent, Qt
from PyQt5.QtGui import QPainter, QPen
from PyQt5.QtWidgets import QApplication, QPushButton, QVBoxLayout, QWidget, QLabel
class Drawer:
def paintEvent(self, event):
painter = QPainter(self)
painter.setPen(QPen(QPen(Qt.green, 2, Qt.SolidLine)));
#painter.drawLine(self.p1, self.p2)
painter.drawLine((self.p1x+80), (self.p1y +60), (self.p2x + 80), (self.p2y+ 90) )
class Example(QWidget, Drawer):
def __init__(self, parent=None):
super().__init__(parent)
self.initUI()
def initUI(self):
self.okButton = QPushButton("OK")
self.cancelButton = QPushButton("Cancel")
#self.cancelButton.resize(50, 50)
vbox = QVBoxLayout(self)
vbox.addWidget(self.okButton)
vbox.addWidget(self.cancelButton)
self.setGeometry(300, 300, 300, 150)
self.setWindowTitle("Buttons")
self.cancelButton.setGeometry(10, 10, 60, 40)
self.p1, self.p2 = self.okButton.pos(), self.cancelButton.pos()
print ("p1 ", self.p1 , "| p2 ", self.p2)
#print (dir(self.okButton))
self.p1x, self.p1y , self.p2x, self.p2y = int(self.okButton.x()) ,int(self.okButton.y()) , int(self.cancelButton.x()), int(self.cancelButton.y())
print ("p1x ", self.p1x ,"|p1y ", self.p1y ,"|p2x ", self.p2x, "| p2y ", self.cancelButton.y())
self.okButton.installEventFilter(self)
self.cancelButton.installEventFilter(self)
def eventFilter(self, o, e):
if e.type() == QEvent.Move:
if o is self.okButton:
self.p1 = self.okButton.pos()
elif o is self.cancelButton:
self.p2 = self.cancelButton.pos()
self.update()
return super().eventFilter(o, e)
if __name__ == "__main__":
app = QApplication(sys.argv)
ex = Example()
ex.show()
sys.exit(app.exec_())
</code></pre>
<p>And this is the application where I want to connect the QFrames by lines; the lines should run from center to center of the frames.<br></p>
<ul>
<li><p>How can I make the lines work without the pain of finding the
co-ordinates?<br></p>
</li>
<li><p>Hovering the mouse over a line should show some text.<br>
I have added only four frames, but there can be more.</p>
<pre><code> import sys
from PyQt5.QtCore import QRect, Qt
from PyQt5.QtWidgets import QApplication, QMainWindow, QPushButton, \
QAction, QVBoxLayout, QLabel, QFrame, QWidget, QStackedLayout, QGridLayout,
QScrollArea, QStyle
from PyQt5.QtGui import QColor, QPainter, QPen
class Drawer:
def paintEvent(self, event):
painter = QPainter(self)
painter.setPen(QPen(QPen(Qt.red, 2, Qt.SolidLine)));
#painter.drawLine(self.p1, self.p2)
print ("In paintEvent function = ", self.p1x , " |", self.p1y)
painter.drawLine((self.p1x), (self.p1y ), (self.p2x + 80), (self.p2y+ 70) )
class MainWindow(QMainWindow, Drawer):
def __init__(self):
super().__init__()
self.setWindowTitle("My App window title")
self.setStyleSheet("background-color: rgb(176,196,222);")
self.centralwidget = QScrollArea()
self.setCentralWidget(self.centralwidget)
menuBar = self.menuBar()
fileMenu = menuBar.addMenu('&File')
openFile = QAction(self.style().standardIcon(QStyle.SP_MessageBoxCritical),
'Open', self)
fileMenu.addAction(openFile)
self.width = 900
self.height = 500
self.resize(self.width, self.height)
self.main_view = QWidget()
self.main_view.setFixedSize(self.width, self.height)
self.gridLayout = QGridLayout()
self.square1 = QFrame()
self.square1.setFixedSize(150, 150)
self.square1.setStyleSheet("background-color:orange")
self.square2 = QFrame()
self.square2.setFixedSize(150, 150)
self.square2.setStyleSheet("background-color:green")
self.square3 = QFrame()
self.square3.setFixedSize(150, 150)
self.square3.setStyleSheet("background-color:blue")
self.square4 = QFrame()
self.square4.setFixedSize(150, 150)
self.square4.setStyleSheet("background-color:cyan")
btn1 = QPushButton('Orange', self.square1)
btn2 = QPushButton('green', self.square2)
btn3 = QPushButton('blue', self.square3)
btn4 = QPushButton('cyan', self.square4)
btn1.clicked.connect(self.func1)
btn2.clicked.connect(self.func2)
btn3.clicked.connect(self.func3)
btn4.clicked.connect(self.func4)
self.gridLayout.addWidget(self.square1,0,0)
self.gridLayout.addWidget(self.square2,0,2)
self.gridLayout.addWidget(self.square3,1,0)
self.gridLayout.addWidget(self.square4,1,2)
#pos
self.p1, self.p2 , self.p3 , self.p4 = self.square1.pos(), self.square2.pos(), self.square3.pos, self.square4.pos
#co-ordinates
self.p1x, self.p1y = int(self.square1.x()) ,int(self.square1.y())
self.p2x, self.p2y = int(self.square2.x()) ,int(self.square2.y())
self.p3x, self.p3y = int(self.square3.x()) ,int(self.square3.y())
self.p4x, self.p4y = int(self.square4.x()) ,int(self.square4.y())
self.main_view.setLayout(self.gridLayout)
self.centralwidget.setWidget(self.main_view)
def func1(self):
print ("This is Orange button")
def func2(self):
print ("This is Green button")
def func3(self):
print ("This is Blue button")
def func4(self):
print ("This is Cyan button")
if __name__ == '__main__':
app = QApplication(sys.argv)
window = MainWindow()
window.show()
sys.exit(app.exec())
</code></pre>
</li>
</ul>
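<p>As a side note on the coordinate pain, here is a library-free sketch of the center-to-center math I am after (assuming each frame exposes x, y, width and height the way QWidget does; in Qt this should be what <code>geometry().center()</code> gives directly):</p>

```python
def center(x, y, w, h):
    """Center point of a rect, analogous to QRect.center()."""
    return (x + w // 2, y + h // 2)

# Two 150x150 frames at example positions:
p1 = center(10, 10, 150, 150)
p2 = center(400, 10, 150, 150)
print(p1, p2)  # draw the line between these points,
               # not between the pos() corners
```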
<p>Thanks</p>
| <python><pyqt5> | 2023-08-09 18:18:28 | 0 | 1,014 | A.G.Progm.Enthusiast |
76,870,031 | 8,707,331 | How to use decapoda-research / llama-7b-hf with fine tuning LoRA in LLaMA.cpp? | <p>I have fine-tuned the model decapoda-research / llama-7b-hf with the tool <a href="https://github.com/zetavg/LLaMA-LoRA-Tuner" rel="nofollow noreferrer">https://github.com/zetavg/LLaMA-LoRA-Tuner</a>.
Now I am trying to use it in llama.cpp, following this tutorial: <a href="https://github.com/ggerganov/llama.cpp/discussions/1166" rel="nofollow noreferrer">https://github.com/ggerganov/llama.cpp/discussions/1166</a></p>
<p>As far as I know, I need to convert the LoRA model to GGML to use it. But decapoda-research / llama-7b-hf has 33 files.</p>
<p>So how can I merge the multiple bin files into one and load the fine-tuning data?</p>
| <python><pytorch><llama><llamacpp> | 2023-08-09 17:25:03 | 1 | 661 | Khoi V |
76,869,831 | 289,426 | How to run multiple Windows CMD commands with Python? | <p>I was trying to run multiple CMD commands from a Python program using os.system, but only the first command runs; the second command is not executed. Can you please suggest an alternative?</p>
<p>My commands:</p>
<pre><code>cd ..
cd Users\xyz
\abc\pqr.exe
</code></pre>
<p>Script I tried:</p>
<pre><code>os.system(r'cmd /c "cd .. &amp; cd Users\xyz"')
</code></pre>
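<p>For what it's worth, my understanding is that each <code>os.system</code> call runs in its own shell, so the working directory does not carry over between calls; chaining everything into one invocation seems to be needed. A portable sketch of the chaining idea (with <code>echo</code> standing in for my real commands):</p>

```python
import subprocess

# run the commands in ONE shell invocation so the "cd" stays in effect
result = subprocess.run("cd .. && echo done", shell=True,
                        capture_output=True, text=True)
print(result.stdout.strip())  # → done
```

<p>I assume the same pattern applies on Windows, e.g. <code>subprocess.run(r'cd .. &amp;&amp; cd Users\xyz &amp;&amp; abc\pqr.exe', shell=True)</code>, but I haven't verified it.</p>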
| <python><windows><cmd> | 2023-08-09 16:52:55 | 4 | 1,195 | NikRED |
76,869,683 | 3,821,009 | Column ordering when mapping function | <p>With python 3.10.4, this:</p>
<pre><code>df = polars.DataFrame(dict(
    j=numpy.tile([1, 0, 0], 3),
    k=numpy.random.randint(10, 99, 9)
))

def f(ks):
    return dict(
        u=ks[0] + 100,
        i=ks[1] + 200,
        o=ks[2] + 300,
    )

dfj = (df
    .groupby(polars.col('j').cumsum(), maintain_order=True)
    .agg(
        polars.col('k').apply(f).alias('k')
    )
    .unnest('k')
)

print(polars.__version__)
print(dfj)
</code></pre>
<p>produces different column ordering in polars 0.18.9:</p>
<pre><code>0.18.9
j (i64) i (i64) o (i64) u (i64)
1 222 382 147
2 285 315 119
3 274 326 189
shape: (3, 4)
</code></pre>
<p>and polars 0.18.10 (and apparently later - 0.18.13 works as 0.18.10):</p>
<pre><code>0.18.10
j (i64) u (i64) i (i64) o (i64)
1 147 222 382
2 119 285 315
3 189 274 326
shape: (3, 4)
</code></pre>
<p>I couldn't find anything in the docs that says that the order is guaranteed, so I have three related questions:</p>
<ul>
<li>Can others reproduce this?</li>
<li>Is the order guaranteed or is this just an implementation detail?</li>
<li>If not guaranteed, is there a way to do what I'm after that guarantees the order or do I need to reorder columns manually afterwards?</li>
</ul>
| <python><python-polars> | 2023-08-09 16:34:18 | 0 | 4,641 | levant pied |
76,869,517 | 11,141,816 | Sympy could not subs simple recognizable expressions | <p>Consider the following code to reproduce the issue:</p>
<pre class="lang-py prettyprint-override"><code>from sympy import Function
eta = Function('eta')
from sympy import *
s,c,h=symbols('s c h',real=True)
sbar,cbar,hbar=symbols('sbar cbar hbar',real=True)
Number_of_derivatives=5
eta_dsymbol=[]
eta_ds=[]
eta_dsbar=[]
for ix in range(0,Number_of_derivatives+1):
    eta_ds.append(eta(s).diff(s,ix).subs(s,0))
    eta_dsbar.append(eta(sbar).diff(sbar,ix).subs(sbar,0))
    eta_dsymbol.append( Symbol("eta_d"+str(ix) , real=True) )
</code></pre>
<p>Now try replacing the Subs instances with a symbol:</p>
<pre class="lang-py prettyprint-override"><code>ix=1
Subs1 = eta_dsbar[ix]
Subs2 = eta_ds[ix]
sym = eta_dsymbol[ix]
eq = ( (eta(s).diff(s,1).subs(s,0)+1)**2*\
(eta(sbar).diff(sbar,1).subs(sbar,0)+2)**3*\
(eta(s).diff(s,1).subs(s,0)+eta(sbar).diff(sbar,1).subs(sbar,0)+5)**7 )\
.expand()
>>> eq.subs(Subs1, sym).subs(Subs2, sym)
128*eta_d1**12 + 3264*eta_d1**11 + 37920*eta_d1**10 + 265264*eta_d1**9 +
1243704*eta_d1**8 + 4114644*eta_d1**7 + 9842070*eta_d1**6 + 17135025*eta_d1**5 +
21528750*eta_d1**4 + 19015625*eta_d1**3 + 11193750*eta_d1**2 +
3937500*Subs(Derivative(eta(s), s), s, 0) + 625000
>>> _.has(Subs)
True
</code></pre>
<p>Somehow the sympy package's subs function malfunctioned: the substitution of <code>Subs(Derivative(eta(s), s), s, 0)</code> was not applied, even after the expand.
On the other hand, if I use a different symbol, the leftover Subs does not appear:</p>
<pre class="lang-py prettyprint-override"><code>>>> A = Symbol("A")
>>> eq.subs(Subs1, sym).subs(Subs2, A).has(Subs)
False
>>> eq.subs(Subs1, A).subs(Subs2, sym).has(Subs)
False
</code></pre>
<p>What went wrong with the subs function?</p>
| <python><sympy> | 2023-08-09 16:07:13 | 0 | 593 | ShoutOutAndCalculate |
76,869,474 | 1,181,452 | Python - solving basic noisy captcha | <p>I am trying to solve basic captchas that have a little bit of noise but it is proving to be difficult.</p>
<p>This is a sample image of one of the captchas:</p>
<p><a href="https://i.sstatic.net/B66iD.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/B66iD.png" alt="sample.png" /></a></p>
<p>This is the code I am using:</p>
<pre><code>import cv2
from pytesseract import image_to_string
img = cv2.imread("sample.png")
gry = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
(h, w) = gry.shape[:2]
gry = cv2.resize(gry, (w*2, h*2))
cls = cv2.morphologyEx(gry, cv2.MORPH_CLOSE, None)
thr = cv2.threshold(cls, 0, 255, cv2.THRESH_BINARY | cv2.THRESH_OTSU)[1]
txt = image_to_string(thr)
print(txt)
</code></pre>
<p>The output that I get using this code is: <code>"_JHB9TPR</code></p>
<p>This is obviously not correct. I think more work needs to be done to make the image clearer so that the letters stand out, but it doesn't help that the letters are the same colours as some of the background noise, which leads it to incorrectly recognise some letters.</p>
<p>Is there any other technique (with sample code) that I should be trying?</p>
| <python><ocr><tesseract><captcha><python-tesseract> | 2023-08-09 15:59:45 | 2 | 3,296 | M9A |
76,869,428 | 13,100,938 | Dataflow pipeline logs show connection requests from users outside of the organisation | <p>I've created a Dataflow Pipeline in the Apache Beam Python SDK and it's working fine.</p>
<p>I'm watching the logs, and every now and then I see connection requests from random usernames at real-world IPs attempting to connect, in the form:</p>
<pre><code>Invalid user <username> from <ip address> port <port>
</code></pre>
<p>The auth is working fine and these users are obviously being blocked so I'm not worried about this at all, I'm just curious as to what it actually is.</p>
<p>My guess is that it's bots trying to gain access to people who have set up their pipeline permissions incorrectly, but I'm unsure - does anyone know what this is being caused by?</p>
| <python><google-cloud-platform><google-cloud-dataflow><apache-beam> | 2023-08-09 15:52:43 | 1 | 2,023 | Joe Moore |
76,869,420 | 1,397,946 | Combine in-place operator and assignment in one line | <p>Is there any way to combine in-place and assignment operators in a single line?</p>
<p>Here's an example:</p>
<pre class="lang-py prettyprint-override"><code>result = ['a', 'b'].append('c')
</code></pre>
<p>The <code>result</code> is <code>None</code>. An obvious solution is to write it in two lines like this:</p>
<pre class="lang-py prettyprint-override"><code>result = ['a', 'b']
result.append('c')
</code></pre>
<p>Can we do that in a single line in Python?</p>
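<p>For reference, the closest one-liner I know of avoids <code>append</code> entirely by unpacking the list into a new one, though I'm not sure it's what people would consider idiomatic (sketch):</p>

```python
# build the extended list in one expression instead of mutating in place
result = [*['a', 'b'], 'c']
print(result)  # → ['a', 'b', 'c']
```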
| <python> | 2023-08-09 15:52:06 | 1 | 11,517 | Lukasz Tracewski |
76,869,412 | 298,209 | How to add a handler to the root logger that only handles events from a particular sub-logger | <p>I'm trying to wrap my head around the Python logging module's hierarchical design. I have a particular logger <code>A.B.C</code> that needs a very specific handler. But it seems to me that attaching a handler directly to <code>A.B.C</code> is somewhat of an anti-pattern, and even if I do that, events are propagated all the way to the root logger anyway (unless I set <code>logger.propagate = False</code>). The official documentation says:</p>
<blockquote>
<p>A common scenario is to attach handlers only to the root logger, and to let propagation take care of the rest.</p>
</blockquote>
<p>What is the best practice to deal with this? If I'm using my package directly, I'll define and configure the root logger myself, and I can attach a handler for <code>A.B.C</code> to the root logger, but how do I make it handle only events originating from <code>A.B.C</code>? And if I'm using my Python package as a library in some other program, I don't want to have a handler for <code>A.B.C</code>.</p>
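<p>To make the scenario concrete, this is the shape of what I'm attempting: attach the handler to the root logger but gate it with a <code>logging.Filter</code> so it only fires for the <code>A.B.C</code> subtree. I'm unsure whether this is the intended pattern (runnable sketch with a list-capturing handler standing in for my real one):</p>

```python
import logging

records = []

class ListHandler(logging.Handler):
    def emit(self, record):
        records.append(record.name)

root = logging.getLogger()
handler = ListHandler()
# logging.Filter("A.B.C") passes only records from logger "A.B.C" and its children
handler.addFilter(logging.Filter("A.B.C"))
root.addHandler(handler)
root.setLevel(logging.INFO)

logging.getLogger("A.B.C").info("handled")
logging.getLogger("A.B").info("ignored")
print(records)  # → ['A.B.C']
```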
| <python><python-logging> | 2023-08-09 15:50:54 | 1 | 5,580 | Milad |
76,869,120 | 4,247,599 | Knowing which variants (or extras) of a python library was installed | <p>There are python libraries that can be installed with a range of <a href="https://packaging.python.org/en/latest/tutorials/installing-packages/#installing-extras" rel="nofollow noreferrer">extras, or variants</a>.</p>
<p>As an example(*), <code>dask</code> can be installed with:</p>
<pre class="lang-bash prettyprint-override"><code>pip install dask
</code></pre>
<p>or with some extras dependencies with:</p>
<pre><code>pip install dask["complete"]
</code></pre>
<p>Is there a way to check if these extras were installed, and which ones?</p>
<p>(*) Please note that I would like to know the answer for the general case, not for the specific case of <code>dask</code>.
A solution requiring knowledge of the specific dependencies that are installed when the extra is called for a specific library would not help.</p>
<p>What I tried was: <code>pip freeze | grep dask</code> and <code>pip show dask</code>, though both installations provide the same results.</p>
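<p>For completeness, the closest I have gotten is listing the extras a distribution <em>declares</em> with <code>importlib.metadata</code>, which (as far as I can tell) still doesn't reveal whether an extra was actually requested at install time (sketch):</p>

```python
from importlib import metadata

# each installed distribution can declare extras via "Provides-Extra";
# this lists which extras *exist*, not which ones were requested
declared = {}
for dist in metadata.distributions():
    extras = dist.metadata.get_all("Provides-Extra") or []
    if extras:
        declared[dist.metadata["Name"]] = extras
print(sorted(declared)[:5])
```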
| <python><pip> | 2023-08-09 15:12:28 | 1 | 4,299 | SeF |
76,868,976 | 14,923,149 | Retrieving Gene Ontology Hierarchy Using Python | <p>I'm working on parsing and hierarchically displaying Gene Ontology (GO) terms from an OBO file using Python. While I have made progress, I'm encountering an issue with properly handling multiple is_a relationships within the same term. My goal is a hierarchical structure that considers all is_a relationships.</p>
<p>I'm working with a subset of the Gene Ontology data from the go-basic.obo file. Here's an example of the data format:</p>
<pre><code> format-version: 1.2
data-version: releases/2023-06-11
subsetdef: chebi_ph7_3 "Rhea list of ChEBI terms representing the major species at pH 7.3."
subsetdef: gocheck_do_not_annotate "Term not to be used for direct annotation"
subsetdef: gocheck_do_not_manually_annotate "Term not to be used for direct manual annotation"
subsetdef: goslim_agr "AGR slim"
subsetdef: goslim_aspergillus "Aspergillus GO slim"
subsetdef: goslim_candida "Candida GO slim"
subsetdef: goslim_chembl "ChEMBL protein targets summary"
subsetdef: goslim_drosophila "Drosophila GO slim"
subsetdef: goslim_flybase_ribbon "FlyBase Drosophila GO ribbon slim"
subsetdef: goslim_generic "Generic GO slim"
subsetdef: goslim_metagenomics "Metagenomics GO slim"
subsetdef: goslim_mouse "Mouse GO slim"
subsetdef: goslim_pir "PIR GO slim"
subsetdef: goslim_plant "Plant GO slim"
subsetdef: goslim_pombe "Fission yeast GO slim"
subsetdef: goslim_synapse "synapse GO slim"
subsetdef: goslim_yeast "Yeast GO slim"
subsetdef: prokaryote_subset "GO subset for prokaryotes"
synonymtypedef: syngo_official_label "label approved by the SynGO project"
synonymtypedef: systematic_synonym "Systematic synonym" EXACT
default-namespace: gene_ontology
ontology: go
[Term]
id: GO:0000001
name: mitochondrion inheritance
namespace: biological_process
def: "The distribution of mitochondria, including the mitochondrial genome, into daughter cells after mitosis or meiosis, mediated by interactions between mitochondria and the cytoskeleton." [GOC:mcc, PMID:10873824, PMID:11389764]
synonym: "mitochondrial inheritance" EXACT []
is_a: GO:0048308 ! organelle inheritance
is_a: GO:0048311 ! mitochondrion distribution
[Term]
id: GO:0048308
name: organelle inheritance
namespace: biological_process
def: "The partitioning of organelles between daughter cells at cell division." [GOC:jid]
subset: goslim_pir
subset: goslim_yeast
is_a: GO:0006996 ! organelle organization
[Term]
id: GO:0007029
name: endoplasmic reticulum organization
namespace: biological_process
def: "A process that is carried out at the cellular level which results in the assembly, arrangement of constituent parts, or disassembly of the endoplasmic reticulum." [GOC:dph, GOC:jl, GOC:mah]
subset: goslim_pir
synonym: "endoplasmic reticulum morphology" RELATED []
synonym: "endoplasmic reticulum organisation" EXACT []
synonym: "endoplasmic reticulum organization and biogenesis" RELATED [GOC:mah]
synonym: "ER organisation" EXACT []
synonym: "ER organization and biogenesis" RELATED [GOC:mah]
is_a: GO:0006996 ! organelle organization
relationship: part_of GO:0010256 ! endomembrane system organization
[Term]
id: GO:0048309
name: endoplasmic reticulum inheritance
namespace: biological_process
def: "The partitioning of endoplasmic reticulum between daughter cells at cell division." [GOC:jid]
synonym: "ER inheritance" EXACT []
is_a: GO:0007029 ! endoplasmic reticulum organization
is_a: GO:0048308 ! organelle inheritance
[Term]
id: GO:0048313
name: Golgi inheritance
namespace: biological_process
def: "The partitioning of Golgi apparatus between daughter cells at cell division." [GOC:jid, PMID:12851069]
synonym: "Golgi apparatus inheritance" EXACT []
synonym: "Golgi division" EXACT [GOC:ascb_2009, GOC:dph, GOC:tb]
synonym: "Golgi partitioning" EXACT []
is_a: GO:0007030 ! Golgi organization
is_a: GO:0048308 ! organelle inheritance
[Term]
id: GO:0007030
name: Golgi organization
namespace: biological_process
def: "A process that is carried out at the cellular level which results in the assembly, arrangement of constituent parts, or disassembly of the Golgi apparatus." [GOC:dph, GOC:jl, GOC:mah]
subset: goslim_pir
synonym: "Golgi apparatus organization" EXACT []
synonym: "Golgi organisation" EXACT []
synonym: "Golgi organization and biogenesis" RELATED [GOC:mah]
is_a: GO:0006996 ! organelle organization
relationship: part_of GO:0010256 ! endomembrane system organization
[Term]
id: GO:0090166
name: Golgi disassembly
namespace: biological_process
def: "A cellular process that results in the breakdown of a Golgi apparatus that contributes to Golgi inheritance." [GOC:ascb_2009, GOC:dph, GOC:tb]
synonym: "Golgi apparatus disassembly" EXACT []
is_a: GO:0007030 ! Golgi organization
is_a: GO:1903008 ! organelle disassembly
relationship: part_of GO:0048313 ! Golgi inheritance
[Term]
id: GO:1903008
name: organelle disassembly
namespace: biological_process
def: "The disaggregation of an organelle into its constituent components." [GO_REF:0000079, GOC:TermGenie]
synonym: "organelle degradation" EXACT []
is_a: GO:0006996 ! organelle organization
is_a: GO:0022411 ! cellular component disassembly
[Term]
id: GO:0006996
name: organelle organization
namespace: biological_process
alt_id: GO:1902589
def: "A process that is carried out at the cellular level which results in the assembly, arrangement of constituent parts, or disassembly of an organelle within a cell. An organelle is an organized structure of distinctive morphology and function. Includes the nucleus, mitochondria, plastids, vacuoles, vesicles, ribosomes and the cytoskeleton. Excludes the plasma membrane." [GOC:mah]
subset: goslim_candida
subset: goslim_pir
synonym: "organelle organisation" EXACT []
synonym: "organelle organization and biogenesis" RELATED [GOC:dph, GOC:jl, GOC:mah]
synonym: "single organism organelle organization" EXACT [GOC:TermGenie]
synonym: "single-organism organelle organization" RELATED []
is_a: GO:0016043 ! cellular component organization
</code></pre>
<p>I used this code</p>
<pre><code>def parse_obo(file_path):
    terms = {}
    current_term = None
    with open(file_path, 'r') as f:
        for line in f:
            line = line.strip()
            if not line:
                if current_term:
                    terms[current_term['id']] = current_term
                    current_term = None
            elif line.startswith('[Term]'):
                if current_term:
                    terms[current_term['id']] = current_term
                current_term = {'id': ''}
            elif current_term:
                parts = line.split(': ', 1)
                if len(parts) == 2:
                    current_term[parts[0]] = parts[1]
    return terms

def display_hierarchy(terms, term_id, indent=0):
    if term_id in terms:
        term = terms[term_id]
        print(' ' * indent + term_id)
        if 'is_a' in term:
            parent_ids = [parent.split()[1] for parent in term['is_a'] if len(parent.split()) > 1]
            for parent_id in parent_ids:
                display_hierarchy(terms, parent_id, indent + 4)
        if 'id' in term:
            child_ids = [child_id for child_id in terms if term_id in terms[child_id].get('is_a', [])]
            for child_id in child_ids:
                display_hierarchy(terms, child_id, indent + 4)

if __name__ == "__main__":
    file_path = 'go-basic_1.obo'
    terms = parse_obo(file_path)
    for term_id in terms:
        display_hierarchy(terms, term_id, indent=0)
</code></pre>
<p>I got this:</p>
<pre><code>GO:0000001
GO:0048308
GO:0048309
GO:0048313
GO:0007029
GO:0048309
GO:0048313
GO:0007030
GO:0090166
GO:1903008
GO:0090166
GO:0006996
GO:0048308
GO:0048309
GO:0048313
GO:0007029
GO:0007030
</code></pre>
<p>but I want a result like this:</p>
<pre><code>GO:0016043
GO:0006996
GO:1903008
GO:0090166
GO:0048308
GO:0000001
GO:0048309
GO:0048313
GO:0007029
GO:0048309
GO:0007030
GO:0048313
GO:0090166
GO:0048311
GO:0000001
GO:0022411
GO:1903008
GO:0090166
</code></pre>
<p>I want to plot results from my genomic data for gene ontology, so I started from here. Kindly help.</p>
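<p>To show, on toy data, the kind of traversal I'm after (note my real terms store <code>is_a</code> as raw strings, so this simplifies them to lists of parent IDs): start from root terms, i.e. terms with no <code>is_a</code>, and recurse downward through a precomputed child index (sketch):</p>

```python
# toy sketch: index children from the is_a links, then print top-down from the roots
terms = {
    "GO:1": {"is_a": []},
    "GO:2": {"is_a": ["GO:1"]},
    "GO:3": {"is_a": ["GO:1", "GO:2"]},
}

children = {term_id: [] for term_id in terms}
for term_id, term in terms.items():
    for parent_id in term["is_a"]:
        children[parent_id].append(term_id)

def render(term_id, indent=0):
    lines = [" " * indent + term_id]
    for child_id in children[term_id]:
        lines.extend(render(child_id, indent + 4))
    return lines

roots = [term_id for term_id, term in terms.items() if not term["is_a"]]
for root in roots:
    print("\n".join(render(root)))
```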
| <python><python-3.x><recursion><bioinformatics><ontology-mapping> | 2023-08-09 14:54:33 | 1 | 504 | Umar |
76,868,602 | 3,231,250 | pandas correlation lower than -1 | <p>I wanted to filter the 1 and -1 correlations from my dataframe, but I have realised that some of the correlations are slightly bigger than 1 or lower than -1.</p>
<p>I couldn't find the real reason behind it. I have put a subset of the dataframe here.</p>
<pre><code>ModelID RPL23A ST13
ACH-001196 -4.384573196 0.025759764
ACH-000054 -4.384573196 0.025759764
ACH-001050 -4.384573196 0.025759764
ACH-000505 -4.81558301 0.44097594
ACH-001794 -4.384573196 0.025759764
</code></pre>
<p>example code:</p>
<pre><code>df = pd.read_csv('test.csv',index_col=0)
corr = df.corr()
</code></pre>
<p>And when I try to filter the -1 correlations, I cannot:</p>
<pre><code>corr[corr == -1.0]
</code></pre>
<p><a href="https://i.sstatic.net/FLrMP.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/FLrMP.png" alt="enter image description here" /></a></p>
<p>I checked the reason behind that; it seems they are not exactly -1:</p>
<pre><code>corr.stack().reset_index().astype(str)
</code></pre>
<p><a href="https://i.sstatic.net/I1Kcf.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/I1Kcf.png" alt="enter image description here" /></a></p>
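<p>My working assumption is that this is ordinary floating-point rounding in the correlation computation, so an exact comparison with -1.0 fails while an approximate one succeeds (pure-Python sketch of the effect):</p>

```python
import math

corr_value = -1.0000000000000002          # the kind of value df.corr() can produce
print(corr_value == -1.0)                 # → False
print(math.isclose(corr_value, -1.0))     # → True
```

<p>If that's right, I assume something like <code>corr[np.isclose(corr, -1.0)]</code> would be the filter I actually need, rather than an exact equality check.</p>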
| <python><pandas><correlation> | 2023-08-09 14:10:06 | 1 | 1,120 | Yasir |
76,868,583 | 10,805,665 | How to authorise to MS Project ODATA API by using ClientCredentialFlow? | <p>I am using the Python msal package and the <code>ClientCredentialFlow</code> when trying to authenticate to the MS-project odata API using following script:</p>
<pre class="lang-py prettyprint-override"><code>import requests
import msal
def get_access_token():
    # Set your Azure AD tenant ID, client ID (Application ID), and client secret
    tenant_id = "your_tenant_id"
    client_id = "your_client_id"
    client_secret = "your_client_secret"

    # Create a confidential client application
    confidential_client = msal.ConfidentialClientApplication(
        client_id,
        authority=f"https://login.microsoftonline.com/{tenant_id}",
        client_credential=client_secret
    )

    # Define the scope (permissions) you want to request
    scopes = ["https://your_tenant_name.sharepoint.com/.default"]  # Replace "your_tenant_name" with your actual tenant name

    # Get an access token using the "Client Credentials" flow
    result = confidential_client.acquire_token_for_client(scopes=scopes)

    if "access_token" in result:
        access_token = result["access_token"]
        return access_token
    else:
        print("Failed to obtain access token.")
        return None

def get_project_data():
    # Set the API endpoint URL
    ms_project_api_url = 'https://your_tenant_name.sharepoint.com/sites/your_project_instance/_api/ProjectData/'

    # Get the access token
    access_token = get_access_token()

    if access_token:
        # Create the headers with the access token
        headers = {
            'Authorization': f'Bearer {access_token}',
            'Content-Type': 'application/json'
        }

        # Make the API request
        response = requests.get(ms_project_api_url, headers=headers)

        if response.status_code == 200:
            # The request was successful, you can process the response here
            print("Data successfully retrieved:")
            print(response.json())  # If the API returns JSON data
        else:
            # The request was not successful, display the error message
            print("Error while retrieving data. Status code:", response.status_code)
            print(response.text)
    else:
        print("Failed to obtain access token.")

if __name__ == "__main__":
    get_project_data()
</code></pre>
<p><strong>Which scope do I need and what permissions are needed in the Appregistration via Azure?</strong></p>
<p>As far as I understand, I need application permissions, but I can't find any that belong to MS Project.
I only found the following delegated permissions in the SharePoint group: <code>Project.Read</code>, <code>Project.Write</code>, <code>ProjectWebApp.FullControl</code>, <code>ProjectWebAppReporting.Read</code>, and tried them out:</p>
<p><a href="https://i.sstatic.net/3A7rb.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/3A7rb.png" alt="screenshot of delegated permissions in Appregistry" /></a></p>
<p>I tried different scopes, e.g.:
<em>["https://mytenant.sharepoint.com/.default"]</em>
Here I got an access token, but no access to the API; <code>get_project_data()</code> fails with:</p>
<blockquote>
<p><em>The request was not valid or is malformed.</em></p>
</blockquote>
| <python><azure><odata><ms-project><azure-app-registration> | 2023-08-09 14:08:35 | 1 | 308 | ediordna |
76,868,550 | 14,222,845 | What AWS services allow for sharing/hosting of Python pandas code? | <p>I have some Python Pandas code (this isn't really relevant to the question at hand, so, I am not including it). I have several functions that call one another and perform some statistical operations on data stored in excel/csv files.</p>
<p>My code is a jupyter notebook file in an Anaconda environment that has several libraries installed (Pandas, numpy, openpyxl,...). Now that the code is done, I would like to make this available to other users. I read up that AWS can be used as a service for hosting (and compiling) code on. I looked at this post <a href="https://stackoverflow.com/questions/62489657/how-to-run-python-code-on-aws-ec2-lambda">How to run python code on AWS (EC2/Lambda)</a> to get a better idea on AWS EC2 and AWS Lambda. Both seem fine, though, I was wondering if there are any other AWS services that could host python code. Here are some specifications that need to be fulfilled:</p>
<ol>
<li>The code <strong>must</strong> be shareable (i.e., I can give a user access to the code, and they can run it, even if they are not knowledgeable about programming). That's essentially the end goal--that anyone can run/use it.</li>
<li>It is not an event driven code (so, I guess AWS Lambda is off). Essentially, a user uploads a data file from their pc to the code hosted on AWS. This triggers the code to run and it outputs a csv file with the statistical details to the user's desktop.</li>
<li>The aim is that users can run this during the night, so, the runtime of the code will be at least 9+ hours.</li>
<li>Since I have the code in an Anaconda environment, I will have to convert the environment (with the libraries) to a zip file and then upload it. Does any AWS service allow for using additional python libraries/dependencies.</li>
</ol>
<p>Sorry if these questions seem kind of general. I have never used AWS before and want to make sure that it can actually accomplish what I want before I ask my organization for access to AWS.</p>
| <python><pandas><amazon-web-services><anaconda> | 2023-08-09 14:05:02 | 0 | 330 | Diamoniner12345 |
76,868,454 | 2,324,827 | Python mock class variable which is an object | <p>I have a two classes, like this:</p>
<pre class="lang-py prettyprint-override"><code>class A:
    def someFunction(self):
        print("does work")

class B:
    def __init__(self):
        self.instance_a = A()
</code></pre>
<p>I want to mock <code>A.someFunction()</code> in a unit test. How do I mock the class variable, <code>self.instance_a</code>?</p>
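<p>For reference, the closest I've gotten is patching <code>someFunction</code> on class <code>A</code> itself with <code>mock.patch.object</code>, though I'm not sure it's the recommended way to target <code>self.instance_a</code> specifically (sketch, with <code>print</code> swapped for <code>return</code> so the call is observable):</p>

```python
from unittest import mock

class A:
    def someFunction(self):
        return "does work"

class B:
    def __init__(self):
        self.instance_a = A()

# patching the method on class A affects any instance that B creates afterwards
with mock.patch.object(A, "someFunction", return_value="mocked"):
    b = B()
    patched = b.instance_a.someFunction()
print(patched)  # → mocked
```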
| <python><unit-testing><mocking> | 2023-08-09 13:53:52 | 1 | 323 | Brennan Macaig |
76,868,407 | 7,182,833 | nats server not distributing tasks to multiple processes in parallel | <p>I am trying to use nats for job queuing. I want it as an alternative to celery+rabbitmq for my use-case.</p>
<p>I have two scripts.</p>
<ol>
<li>Producer Script that populates the queue.</li>
</ol>
<p><code>producer.py</code></p>
<pre><code>import json

import nats
from nats.js.api import RetentionPolicy

NATS_HOST = '0.0.0.0'
NATS_PORT = '4222'
DOCUMENT_EXT_SUBJECT = "file_process"

nc = await nats.connect(servers=f"nats://{NATS_HOST}:{NATS_PORT}")
js = nc.jetstream()

await js.add_stream(
    name="sample-stream",
    subjects=[DOCUMENT_EXT_SUBJECT],
    retention=RetentionPolicy.WORK_QUEUE,
)

for i in range(20):
    ack = await js.publish(
        subject=DOCUMENT_EXT_SUBJECT,
        payload=json.dumps(
            {
                "some_load": "lsome_load",
            }).encode()
    )
</code></pre>
<ol start="2">
<li>Consumer Script : <code>consumer.py</code></li>
</ol>
<pre><code>import asyncio
import signal
import sys
import logging

import nats
from nats.js.api import RetentionPolicy, ConsumerConfig, AckPolicy

consumer_config = ConsumerConfig(
    ack_wait=900,
    max_deliver=1,
    max_ack_pending=1,
    ack_policy=AckPolicy.EXPLICIT
)

# Nats
NATS_HOST = '0.0.0.0'
NATS_PORT = '4222'
DOCUMENT_EXT_SUBJECT = "file_process"
MAX_RECONNECT_ATTEMPTS = 10

## A cpu bound task
def time_consuming_task():
    import time; time.sleep(50)
    return

async def task_cb(msg):
    time_consuming_task()
    await msg.ack()
    print("acknowledged document-extraction !")

async def run():
    logging.info("Started Document processing Consumer ...")

    async def error_cb(e):
        sys.exit()

    async def disconnected_cb():
        logging.info(f"Got disconnected from NATS server .. Retrying .")

    async def reconnected_cb():
        logging.info("Got reconnected...")

    nc = await nats.connect(
        servers=f"nats://{NATS_HOST}:{NATS_PORT}",
        error_cb=error_cb,
        reconnected_cb=reconnected_cb,
        disconnected_cb=disconnected_cb,
        max_reconnect_attempts=MAX_RECONNECT_ATTEMPTS,
    )

    # Create JetStream context.
    js = nc.jetstream()

    ## PERSIST ON THIS SUBJECT
    await js.add_stream(
        subjects=[DOCUMENT_EXT_SUBJECT],
        name="sample-stream",
        ## Extra
        retention=RetentionPolicy.WORK_QUEUE,
    )

    await js.subscribe(
        DOCUMENT_EXT_SUBJECT,
        stream="sample-stream",
        queue="worker_queue",  # also known as "deliver group". In core NATS, it's called "queue group".
        cb=task_cb,
        manual_ack=True,
        config=consumer_config,
    )

    def signal_handler():
        sys.exit()

    for sig in ('SIGINT', 'SIGTERM'):
        asyncio.get_running_loop().add_signal_handler(getattr(signal, sig), signal_handler)

    await nc.flush()
    logging.info("Done ... ?")

if __name__ == '__main__':
    loop = asyncio.get_event_loop()
    try:
        loop.run_until_complete(run())
        loop.run_forever()
    except Exception as e:
        print("Got error : ", e)
    finally:
        loop.close()
</code></pre>
<h3>MY PROBLEM :</h3>
<p>I populate the queue using <code>producer.py</code>.
Then I run the <code>consumer.py</code> script in multiple terminals as separate processes.</p>
<p>What's happening is that things don't run in parallel, the way they did with Celery.</p>
<p>Although there are 3 processes to consume messages, while a task is being processed inside one consumer process, the NATS server doesn't push any task to the other 2 consumers. Everything happens one after another.</p>
<p>I cannot figure out what I could be doing wrong. Could it be because my use-case is a CPU-bound task? I also tried acknowledging the message early with <code>await msg.ack()</code> before running <code>time_consuming_task()</code>.
To be honest, I am new to this domain of async programming.</p>
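<p>One suspicion I have, but haven't verified: because <code>time.sleep</code> inside my async callback blocks the event loop itself, the consumer cannot do anything else while a task runs. If that's the cause, off-loading the blocking work would look roughly like this (stdlib-only sketch, no NATS):</p>

```python
import asyncio
import time

def cpu_task():
    time.sleep(0.2)   # stand-in for my real CPU-bound work
    return "done"

async def callback():
    loop = asyncio.get_running_loop()
    # off-load the blocking call so the event loop keeps servicing messages
    return await loop.run_in_executor(None, cpu_task)

result = asyncio.run(callback())
print(result)  # → done
```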
| <python><async-await><nats.io><nats-streaming-server> | 2023-08-09 13:48:24 | 1 | 934 | bad programmer |
76,868,317 | 1,686,628 | Args to be optional when an option is used, else required | <pre><code>@cli.command()
@click.option("--list", 'list_steps', default=False, is_flag=True)
@click.argument("elem")
@click.pass_obj
def step(cmd, elem, list_steps):
    print(elem)
    print(list_steps)
    click.secho(elem, fg="blue", bold=True)
</code></pre>
<p>I am getting <code>Error: Missing argument 'ELEM'.</code> if I do <code>mycli step --list</code>.</p>
<p>Is it possible to not require an arg only if the option <code>--list</code> is used, otherwise require it?</p>
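<p>The closest workaround I can think of is declaring the argument with <code>required=False</code> and enforcing the requirement by hand, though I'd prefer something declarative (sketch; the manual <code>UsageError</code> check is my own invention):</p>

```python
import click

@click.command()
@click.option("--list", "list_steps", default=False, is_flag=True)
@click.argument("elem", required=False)
def step(elem, list_steps):
    # hypothetical manual check standing in for a declarative solution
    if elem is None and not list_steps:
        raise click.UsageError("Missing argument 'ELEM'.")
    click.echo(f"elem={elem} list={list_steps}")
```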
| <python><python-click> | 2023-08-09 13:37:34 | 0 | 12,532 | ealeon |
76,868,272 | 7,119,501 | How to append rows to an empty pyspark dataframe when each row is generated by a separate thread? | <p>I have an empty dataframe created as below.
The schema:</p>
<pre><code>from pyspark.sql import SparkSession, DataFrame
from pyspark.sql.types import StructType, StructField, StringType, TimestampType
from pyspark.sql.functions import lit, current_timestamp
import threading
import requests

raw_table_schema = StructType([
    StructField('bookingNumber', StringType(), True),
    StructField('bookingProfile', StringType(), True),
    StructField('json_string', StringType(), True),
    StructField('json_ingestion_time', TimestampType(), True)
])

def prepare_empty_df(schema: StructType):
    empty_rdd = spark.sparkContext.emptyRDD()
    empty_df = spark.createDataFrame(empty_rdd, schema)
    return empty_df
</code></pre>
<p>My data is coming from an API call. Each GET call returns one JSON, and I am converting the API response, which is a JSON, into text. I am parsing this JSON for some attributes and then inserting into a table.
Because I have 200k JSONs, I don't want to run 200k insert queries on my table, so I wanted to append all the results of the API JSON calls to an empty dataframe and simply ingest the dataframe. The API calls I make are not sequential but rather parallel threads, i.e., I am running 4 parallel API calls at a time and trying to append the 4 outputs to an empty dataframe.
Below is how I am converting the API JSON and appending it to the empty dataframe.</p>
<p>Main method:</p>
<pre><code>if __name__ == '__main__':
    spark = SparkSession.builder.appName('Raw_Check').getOrCreate()
    batch_size = 4
    booking_ids = []
    initial_df = prepare_empty_df(schema=raw_table_schema)
    initial_load = True
    cquery = f'select booking_id from db.table limit 20'
    booking_ids = get_booking_ids(spark=spark, query=cquery)  # returns a list of bookings

    for i in range(0, len(booking_ids), batch_size):
        sub_list = booking_ids[i:i + batch_size]
        threads = []
        for index in range(batch_size):
            t = threading.Thread(target=get_json, name=str(index), args=(spark, sub_list[index], initial_df))
            threads.append(t)
            t.start()
        for index, thread in enumerate(threads):
            thread.join()

    print('Final Dataframe count')
    print(initial_df.count())
    print('-------------------------------------------------------------------------------------------------------------------------------------------------')
    print('Final Dataframe contents')
    initial_df.show()
    print('-------------------------------------------------------------------------------------------------------------------------------------------------')
</code></pre>
<p>get_json method:</p>
<pre><code>def get_json(spark: SparkSession, booking_number: str, init_df: DataFrame):
    headers = {"Content-type": "some_content_type"}
    token = doing_something_to_get_token
    token_headers = {'Authorization': f"Bearer {token}"}
    api_response = requests.get(f'https://api_url?booking_number={booking_number}', headers=token_headers)
    json_data = spark.sparkContext.parallelize([api_response.text])
    df = spark.read.json(json_data)
    api_df = (df.select('id').withColumnRenamed('id', 'bookingProfile')
              .withColumn('bookingNumber', lit(booking_number))
              .withColumn('json_string', lit(api_response.text))
              .withColumn('json_ingestion_time', lit(current_timestamp()))
              )
    api_df.show()
    init_df.unionAll(api_df)
</code></pre>
<p>I am unioning every row from the API output onto the <code>initial_df</code> I created in the main method. I can also see data from api_df, thanks to <code>api_df.show()</code>, when the script runs. Four parallel threads are launched and I can see 4 API calls running at a time.
But at the end, the empty dataframe <code>initial_df</code> I created is still empty. The count is zero, and it basically prints nothing when I display its contents.</p>
<pre><code>-------------------------------------------------------------------------------------------------------------------------------------------------
Final Dataframe count
0
-------------------------------------------------------------------------------------------------------------------------------------------------
Final Dataframe contents
+--------------+-------------------+-----------+------------------------+
|bookingNumber |bookingProfile |json_string| json_ingestion_time|
+--------------+-------------------+-----------+------------------------+
+--------------+-------------------+-----------+------------------------+
</code></pre>
<p>Could anyone let me know what is the mistake I am doing here and how can I correct it? Any help is massively appreciated.</p>
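A plausible explanation, hedged since the asker's cluster setup is not shown: <code>unionAll</code> returns a <em>new</em> DataFrame and never mutates <code>init_df</code>, so the result of each call inside <code>get_json</code> is simply discarded. The usual fix is to collect each thread's partial result in a shared list and combine once at the end. A Spark-free sketch of that pattern (swap the list concatenation for <code>DataFrame.unionAll</code> in real code):

```python
import threading
from functools import reduce

results = []                     # each worker appends its partial result here
results_lock = threading.Lock()

def worker(rows):
    # Stand-in for building api_df from one API response.
    with results_lock:
        results.append(rows)

threads = [threading.Thread(target=worker, args=([i],)) for i in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

# Stand-in for: final_df = reduce(lambda a, b: a.unionAll(b), dfs, initial_df)
final = reduce(lambda a, b: a + b, results, [])
print(sorted(final))
```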
| <python><apache-spark><pyspark> | 2023-08-09 13:30:28 | 8 | 2,153 | Metadata |
76,868,251 | 10,181,236 | how to load deberta-v3 properly | <p>I am trying to fine-tune a DeBERTa model for a regression task.
The problem is that when I load the model using this code</p>
<pre><code>from transformers import AutoConfig, AutoTokenizer, AutoModel
## Model Configurations
MODEL_NAME = 'microsoft/deberta-v3-base'
config = AutoConfig.from_pretrained(MODEL_NAME) ## Configuration loaded from AutoConfig
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME) ## Tokenizer loaded from AutoTokenizer
</code></pre>
<p>I get a KeyError</p>
<pre><code>Traceback (most recent call last)
<ipython-input-23-62561c3f4e7b> in <module>
3 MODEL_NAME = 'microsoft/deberta-v3-base'
4
----> 5 config = AutoConfig.from_pretrained(MODEL_NAME) ## Configuration loaded from AutoConfig
6 tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME) ## Tokenizer loaded from AutoTokenizer
/usr/lib/python3.8/dist-packages/transformers/models/auto/configuration_auto.py in from_pretrained(cls, pretrained_model_name_or_path, **kwargs)
350
351 if "model_type" in config_dict:
--> 352 config_class = CONFIG_MAPPING[config_dict["model_type"]]
353 return config_class.from_dict(config_dict, **kwargs)
354 else:
KeyError: 'deberta-v2'
</code></pre>
<p>what can be the problem? I am using transformers version 4.31.0</p>
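One hedged observation: the traceback path (<code>/usr/lib/python3.8/dist-packages/transformers/...</code>) suggests the interpreter is picking up an old system-wide transformers install rather than 4.31.0, and old releases lacked the <code>deberta-v2</code> entry in <code>CONFIG_MAPPING</code>. A minimal sketch for checking which version actually resolves:

```python
import importlib.metadata as md  # Python 3.8+

def installed_version(pkg):
    """Return the installed version of pkg, or None if it is absent."""
    try:
        return md.version(pkg)
    except md.PackageNotFoundError:
        return None

print('transformers ==', installed_version('transformers'))
# Also worth checking which file actually gets imported:
# import transformers; print(transformers.__version__, transformers.__file__)
```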
| <python><nlp><huggingface-transformers> | 2023-08-09 13:27:29 | 1 | 512 | JayJona |
76,868,206 | 1,638,348 | Redisearch Python simple example | <p>The documentation for Redisearch module is not the best at the moment. Can someone please give short examples that will answer these questions? Thank you</p>
<ol>
<li>How to use Redisearch module with redis-py?</li>
<li>How to sort by multiple fields?</li>
<li>How to paginate results?</li>
</ol>
| <python><redis><redis-py><redisearch> | 2023-08-09 13:21:58 | 1 | 908 | Teodor Scorpan |
76,868,183 | 7,087,604 | Split text on markup in Python | <p>I have the following line of text :</p>
<pre><code><code>stuff</code> and stuff and $\LaTeX$ and <pre class="mermaid">stuff</pre>
</code></pre>
<p>Using Python, I want to break the markup entities to get the following list:</p>
<pre><code>['<code>', 'stuff', '</code>', ' and stuff and $\\LaTeX$ ', '<pre class="mermaid">', 'stuff', '</pre>']
</code></pre>
<p>So far, I used :</p>
<pre class="lang-py prettyprint-override"><code>markup = re.compile(r"(<(?P<tag>[a-z]+).*>)(.*?)(<\/(?P=tag)>)")
text = '<code>stuff</code> and stuff and $\LaTeX$ and <pre class="mermaid">stuff</pre>'
words = re.split(markup, text)
</code></pre>
<p>but it yields :</p>
<pre><code>['<code>', 'code', 'stuff', '</code>', ' and stuff and $\\LaTeX$ ', '<pre class="mermaid">', 'pre', 'stuff', '</pre>']
</code></pre>
<p>The problem is that the capture of the named group <code>tag</code> is added to the list, because <code>re.split</code> keeps every capturing group. I capture the tag name only so that <code>(?P=tag)</code> can backreference the matching closing tag.</p>
<p>How could I get rid of it in the resulting list, assuming the code processes only one single line at a time ?</p>
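One way, keeping the original pattern: <code>re.split</code> with four capturing groups emits its pieces with a fixed period of five (text, open tag, tag name, content, close tag), so the tag-name captures can be dropped by index:

```python
import re

markup = re.compile(r"(<(?P<tag>[a-z]+).*>)(.*?)(<\/(?P=tag)>)")
text = r'<code>stuff</code> and stuff and $\LaTeX$ and <pre class="mermaid">stuff</pre>'

parts = markup.split(text)
# Pieces repeat with period 5: [text, group1, group2 (tag name), group3, group4, ...]
# Drop the tag-name captures (index 2 within each period) and empty fragments.
words = [p for i, p in enumerate(parts) if i % 5 != 2 and p != '']
print(words)
```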
| <python><regex><split> | 2023-08-09 13:18:22 | 2 | 713 | AurΓ©lien Pierre |
76,867,990 | 7,745,011 | BufferError in Pycharm Debugger when using ProcessPoolExecutor | <p>I have the following bit of code in a FastAPI application (derived from a <a href="https://stackoverflow.com/questions/76538125/is-it-possible-to-await-multiprocessing-process-join-in-python">previous post</a>):</p>
<pre><code>with self.lock:
from concurrent.futures import ProcessPoolExecutor
loop = asyncio.get_running_loop()
with ProcessPoolExecutor() as pool:
return await loop.run_in_executor(
pool, self._generate_model, input_data
)
</code></pre>
<p>Basically <code>input_data</code> is a <code>pydantic.BaseModel</code> and <code>_generate_model()</code> is a class method doing some heavy calculation.
This bit of code runs perfectly fine in VSCode and using "run" in PyCharm. However, debugging in Pycharm crashes the application as soon as this section is executed a <strong>second</strong> time with:</p>
<pre><code>Exception ignored in tp_clear of: <class 'memoryview'>
Traceback (most recent call last):
File "some_file.py", line xxx # this is different everytime it happens
BufferError: memoryview has 1 exported buffer
</code></pre>
<p>Google has let me down so far, what is happening here?</p>
| <python><debugging><pycharm><multiprocessing> | 2023-08-09 12:53:21 | 2 | 2,980 | Roland Deschain |
76,867,911 | 2,409,793 | jupyter-server keeps asking for token login | <p>I have set up a jupyter server instance on kubernetes. The login prompt screen is the following</p>
<p><a href="https://i.sstatic.net/MYWmb.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/MYWmb.png" alt="enter image description here" /></a></p>
<p>I exec into the pod, run <code>jupyter notebook list</code>, and in the two boxes at the bottom of the screenshot I enter the token returned by the above command and a password.</p>
<p>I then log in, log out and I am prompted <strong>again</strong> with the same screen</p>
<p>When I insert the password / token I created in the previous step in the <code>Password or token</code> field at the top of the screen, I get an invalid credentials response.</p>
<p>How can I set up a password-only authentication?</p>
<p>The links in the screenshot point to 404 pages.</p>
<pre><code>jupyter --version
jupyter core : 4.6.3
jupyter-notebook : 6.1.4
qtconsole : not installed
ipython : 7.18.1
ipykernel : 5.3.4
jupyter client : 6.1.7
jupyter lab : 2.2.8
nbconvert : 6.0.7
ipywidgets : 7.5.1
nbformat : 5.0.7
traitlets : 5.0.4
</code></pre>
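For reference, a commonly used route for password-only login on Notebook 5.x/6.x (hedged, since exact config keys vary across versions): generate a hashed password and restart the server.

```shell
# Prompt for a password and store its hash:
jupyter notebook password
# The hash lands in ~/.jupyter/jupyter_notebook_config.json; restart the
# server afterwards. Alternatively, set it explicitly in
# ~/.jupyter/jupyter_notebook_config.py:
#   c.NotebookApp.password = '<hashed password>'  # from notebook.auth.passwd()
#   c.NotebookApp.token = ''                      # disable token auth entirely
```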
| <python><jupyter-notebook><jupyter><jupyter-server> | 2023-08-09 12:42:54 | 1 | 19,856 | pkaramol |
76,867,664 | 9,975,452 | Open netcdfs data in gz file | <p>I have netcdfs saved in a <a href="https://github.com/kevinkuranyi/archive/blob/main/Maize_1970_Yield_ver12b_BRA.nc.gz" rel="nofollow noreferrer">gz file</a> and I am trying to import as a geodataframe on python. I don't know the name of the variables in the netcdfs.</p>
<p>My code:</p>
<pre><code>
gzipped_file_path = 'Maize_1970_Yield_ver12b_BRA.nc.gz'
with gzip.open(gzipped_file_path, 'rb') as f:
# Read the content of the gzipped file
content = f.read()
</code></pre>
<p>This part works fine, but then, when trying to create a dataset, I'm trying:</p>
<pre><code>df=nc.Dataset(content)
</code></pre>
<p>And it starts to run forever (it's been running for over 3 hours as of now).
What is wrong with this code?</p>
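A hedged guess at the cause: <code>nc.Dataset()</code> expects a filename (or an in-memory buffer via its <code>memory=</code> argument), not raw decompressed bytes, so passing <code>content</code> makes it interpret the bytes as a path-like value. A minimal sketch of the decompress-to-file route (only the gzip part is exercised here; the netCDF open is shown commented):

```python
import gzip
import shutil
import tempfile

def decompress_to_tempfile(gz_path, suffix='.nc'):
    """Stream-decompress a .gz archive to a temp file and return its path."""
    with gzip.open(gz_path, 'rb') as f_in, \
         tempfile.NamedTemporaryFile(suffix=suffix, delete=False) as f_out:
        shutil.copyfileobj(f_in, f_out)
        return f_out.name

# With the question's file:
# nc_path = decompress_to_tempfile('Maize_1970_Yield_ver12b_BRA.nc.gz')
# ds = nc.Dataset(nc_path)      # open by path, not by raw bytes
# print(list(ds.variables))     # discover the variable names
```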
| <python><netcdf><netcdf4> | 2023-08-09 12:11:17 | 1 | 470 | Oalvinegro |
76,867,520 | 10,852,485 | Sphinx adds span with class colon to the header | <p>We have a project which produces html from rst files, using Sphinx. Earlier we used Sphinx version 2.1 and after upgrade to version 6.1.3 a sideeffect occurs where our headers defined like this:</p>
<pre><code>:header:
</code></pre>
<p>are rendered in our html structure as small headers, just like when we used Sphinx 2.1, but <strong>span with class "colon"</strong> is injected by Sphinx right after the header.
Is there a way to stop Sphinx doing that without resorting to hiding it with CSS?</p>
| <python><documentation><python-sphinx><restructuredtext><docutils> | 2023-08-09 11:52:18 | 1 | 306 | ms3300 |
76,867,460 | 3,649,629 | Cannot get shape of the PyTorch model for PyTorch -> TensorFlow Lite model conversion | <p>I cannot get the shape of the PyTorch model for PyTorch -> TensorFlow Lite model conversion. I use the BABERT Chinese model and would like to use <a href="https://github.com/omerferhatt/torch2tflite" rel="nofollow noreferrer">torch2tflite</a> for TensorFlow Lite model format conversion.</p>
<p>The docs say the model shape is required to run the command:</p>
<pre><code>python3 -m torch2tflite.converter
--torch-path tests/mobilenetv2_model.pt
--tflite-path mobilenetv2.tflite
--target-shape 224 224 3
</code></pre>
<p>So I need to get this shape first. I wrote a simple Python script to do this:</p>
<pre><code>import torch;
def loadModel():
model = torch.load("/home/gelassen/Downloads/chinese_babert-base/pytorch_model.bin")
model_shape = list(model.parameters())[0].shape
print(model_shape)
print("Model shape" + model_shape)
loadModel()
</code></pre>
<p>But it fails with exception:</p>
<pre><code>/start.py", line 10, in loadModel
model_shape = list(model.parameters())[0].shape
AttributeError: 'collections.OrderedDict' object has no attribute 'parameters'
</code></pre>
<p>I use the model from <a href="https://github.com/modelscope/AdaSeq/tree/master/examples/babert" rel="nofollow noreferrer">this public repo</a> I would appreciate any help with this issue.</p>
<p>p.s.
The context of this work is that I want to split random Chinese text into separate words on an Android phone. Android supports only <strong>TensorFlow Lite</strong> models. It is possible to provide a custom model, but it must be converted to the TensorFlow Lite format first.</p>
<p>I cannot train a model on my machine as I faced a CUDA OOM error - it seems to require a GPU with at least 12 GB of RAM. Training in the cloud might be an option, but I am first investigating existing pre-trained models.</p>
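A hedged note that may explain the error: <code>torch.load</code> on a bare <code>pytorch_model.bin</code> typically returns only a state_dict, an <code>OrderedDict</code> mapping parameter names to tensors, not a model object, hence no <code>.parameters()</code>. A torch-free stand-in showing how such a mapping is inspected (with a real checkpoint each value is a tensor exposing <code>.shape</code> directly; the names and shapes below are illustrative):

```python
from collections import OrderedDict

# Stand-in for: state_dict = torch.load('pytorch_model.bin')
state_dict = OrderedDict([
    ('embeddings.word_embeddings.weight', (21128, 768)),
    ('encoder.layer.0.attention.self.query.weight', (768, 768)),
])

# With torch this would be: {name: t.shape for name, t in state_dict.items()}
for name, shape in state_dict.items():
    print(name, '->', shape)
```

Note also that for a BERT-style encoder the input "shape" is not stored in the weights at all (sequence length is dynamic), so a fixed <code>--target-shape</code> has to come from the tokenizer setup rather than the checkpoint.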
| <python><tensorflow><machine-learning><pytorch><nlp> | 2023-08-09 11:44:54 | 0 | 7,089 | Gleichmut |
76,867,348 | 653,966 | VS 2022 Python - How to get going with FastAPI | <p>I am using Visual Studio 2022 on Win11, and trying to get to grips with Python. In particular, I am trying out FastAPI.</p>
<p>I have installed FastAPI and Uvicorn. I have a virtual environment set up for Python 3.9.</p>
<p>How do I get VS to run the Uvicorn server when testing? I am trying to follow the start up guide (<a href="https://fastapi.tiangolo.com/tutorial/first-steps/" rel="nofollow noreferrer">https://fastapi.tiangolo.com/tutorial/first-steps/</a>). I have created a main.py file, with the required code. When I try to run it from an interactive console, I type <code>uvicorn -m main:app --reload</code>.</p>
<p>The console comes back with</p>
<pre class="lang-none prettyprint-override"><code>File "stdin", line 1
uvicorn -m main:app --reload
^
SyntaxError: invalid syntax
</code></pre>
<p>I have tried moving the example code to another file and trying to get uvicorn to run that file, but no joy. I think either uvicorn is not running, or the environment cannot find either uvicorn or the target main.py file.</p>
<p>I have tried running a DOS prompt, executing the python.exe in the venv folder, and trying to point to uvicorn directly...</p>
<pre class="lang-none prettyprint-override"><code>"C:\Program Files (x86)\Microsoft Visual Studio\Shared\Python39_64\python" -m "C:\Users\User\Documents\Visual Studio 2022\Repos\PythonApplicationFastAPITest1\env\Scripts\uvicorn" main:app --reload
</code></pre>
<p>...and I get "No module named ...uvicorn".</p>
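For the record, the <code>SyntaxError</code> comes from typing a shell command at the Python <code>&gt;&gt;&gt;</code> prompt; <code>uvicorn ...</code> is an OS-shell command, not Python. A hedged sketch of the usual invocation from a Windows command prompt (paths taken from the question; <code>python -m uvicorn</code> runs the uvicorn module of whichever interpreter is active, sidestepping PATH issues):

```shell
# From cmd.exe / PowerShell, inside the project folder (NOT the >>> prompt):
cd "C:\Users\User\Documents\Visual Studio 2022\Repos\PythonApplicationFastAPITest1"
env\Scripts\activate
python -m uvicorn main:app --reload
```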
| <python><fastapi><visual-studio-2022><uvicorn> | 2023-08-09 11:30:42 | 2 | 2,195 | Steve Hibbert |
76,867,250 | 5,110,870 | Python: how to go from a regex pattern to a list that includes all individual characters in the pattern? | <p>I am new to <code>regex</code>. If I have the following pattern:</p>
<pre><code>white_list_pattern = '^[0-9a-zA-Z ._-]+$'
</code></pre>
<p>how can I go from that to a list that includes all characters matching the pattern above? This of course doesn't work because it doesn't include all the characters between 0 and 9, for example:</p>
<pre><code>white_list = [str(i) for i in white_list_pattern]
</code></pre>
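One approach, as a sketch: rather than parsing the pattern text, test each candidate character against the compiled class, for example every printable ASCII character:

```python
import re
import string

white_list_pattern = re.compile(r'^[0-9a-zA-Z ._-]+$')

# Match each printable character individually against the class.
white_list = [c for c in string.printable if white_list_pattern.match(c)]
print(white_list)
```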
| <python><regex><string><list> | 2023-08-09 11:16:54 | 2 | 7,979 | FaCoffee |
76,867,230 | 3,091,559 | How to set a Content-Type when uploading a data on an Azure blob storage | <p>I have a simple web application that received the name of a pdf file and load this pdf on a Azure blob storage.</p>
<pre><code>class AzureUploader(web.RequestHandler):
@gen.coroutine
def _write_remotely(self):
json_body = tornado.escape.json_decode(self.request.body)
filename = json_body.get("filename")
container_client = ContainerClient(
f'https://{REMOTE_ACCOUNT}.blob.core.windows.net/',
REMOTE_CONTAINER,
credential=REMOTE_ACCESS_KEY
)
blob_client = container_client.get_blob_client(filename)
blob_client.set_http_headers(content_settings=ContentSettings(content_type='application/pdf'))
logging.info('PROPERTIES {}'.format(blob_client.get_blob_properties()))
with open(os.path.join(json_body.get("path"), filename), "rb") as data:
blob_client.upload_blob(data)
@gen.coroutine
def post(self):
logging.info('UPLOAD ALL REQUEST RECEIVED')
# Data received
self._write_remotely()
return
</code></pre>
<p>I need to set the Content-Type to application/pdf (the default is application/octet-stream).
Apparently I do set it: when I log the blob properties, the <em>content_settings</em> field shows <em>'content_type': 'application/pdf'</em>:</p>
<pre><code>[I 230809 12:56:01 pdf_receipt_sender_to_azure:70] PROPERTIES {'name': '100099_20230807_1411_010_00339_0487970002881.PDF', 'container': 'invoices/documents', 'snapshot': None, 'versio
n_id': None, 'is_current_version': None, 'blob_type': <BlobType.BLOCKBLOB: 'BlockBlob'>, 'metadata': {}, 'encrypted_metadata': None, 'last_modified': datetime.datetime(2023, 8, 9, 10,
56, 1, tzinfo=datetime.timezone.utc), 'etag': '"0x8DB98C7389BE957"', 'size': 37463, 'content_range': None, 'append_blob_committed_block_count': None, 'is_append_blob_sealed': None, 'pa
ge_blob_sequence_number': None, 'server_encrypted': True, 'copy': {'id': None, 'source': None, 'status': None, 'progress': None, 'completion_time': None, 'status_description': None, 'i
ncremental_copy': None, 'destination_snapshot': None}, 'content_settings': {'content_type': 'application/pdf', 'content_encoding': None, 'content_language': None, 'content_md5': None,
'content_disposition': None, 'cache_control': None}, 'lease': {'status': 'unlocked', 'state': 'available', 'duration': None}, 'blob_tier': 'Hot', 'rehydrate_priority': None, 'blob_tier
_change_time': None, 'blob_tier_inferred': True, 'deleted': False, 'deleted_time': None, 'remaining_retention_days': None, 'creation_time': datetime.datetime(2023, 8, 7, 12, 11, 16, tz
info=datetime.timezone.utc), 'archive_status': None, 'encryption_key_sha256': None, 'encryption_scope': None, 'request_server_encrypted': True, 'object_replication_source_properties':
[], 'object_replication_destination_policy': None, 'last_accessed_on': None, 'tag_count': None, 'tags': None, 'immutability_policy': {'expiry_time': None, 'policy_mode': None}, 'has_le
gal_hold': None, 'has_versions_only': None}
</code></pre>
<p>But then the upload is done with the following parameters:</p>
<pre><code>[I 230809 12:56:01 _universal:523] Request URL: 'https://app.blob.core.windows.net/invoices/documents/100099_20230807_1411_010_00339_0487970002881.PDF'
Request method: 'PUT'
Request headers:
'Content-Length': '37463'
'x-ms-blob-type': 'REDACTED'
'If-None-Match': '*'
'x-ms-version': 'REDACTED'
'Content-Type': 'application/octet-stream'
'Accept': 'application/xml'
'User-Agent': 'azsdk-python-storage-blob/12.14.1 Python/3.7.9 (Linux-4.15.0-74-generic-x86_64-with-Ubuntu-16.04-xenial)'
'x-ms-date': 'REDACTED'
'x-ms-client-request-id': '5453fcf2-36a3-11ee-ae74-00505680a149'
'Authorization': 'REDACTED'
A body is sent with the request
</code></pre>
<p>So the content-type is changed back to application/octet-stream.</p>
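A hedged hypothesis: <code>set_http_headers</code> targets an <em>existing</em> blob (and here it runs before anything is uploaded), while <code>upload_blob</code> then sends its own default <code>Content-Type</code>. Passing the settings to <code>upload_blob</code> itself, via its <code>content_settings</code> parameter, is the usual fix. A sketch (the fallback class exists only so the snippet runs without the Azure SDK installed):

```python
try:
    from azure.storage.blob import ContentSettings
except ImportError:
    class ContentSettings:  # illustration-only stand-in for the SDK class
        def __init__(self, content_type=None):
            self.content_type = content_type

def upload_pdf(blob_client, data):
    # Set the Content-Type at upload time instead of calling
    # set_http_headers() on a blob that does not exist yet.
    blob_client.upload_blob(
        data,
        overwrite=True,
        content_settings=ContentSettings(content_type='application/pdf'),
    )
```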
| <python><azure><azure-blob-storage> | 2023-08-09 11:14:35 | 1 | 4,727 | k4ppa |
76,867,126 | 4,198,298 | Get x number of rows in a dataframe at equally spaced index | <p>I have a dataframe that looks something like</p>
<pre><code> time value1 value2
1 1000000 1000009842 1009809435
2 1000032 2348974923 2343242342
3 1000342 2342345320 2342342234
...
1000 4324342 2131242353 4234234234
</code></pre>
<p>I want to get 20 random rows whose indexes are spaced uniformly, e.g. indexes</p>
<pre><code>10, 20, 30, 40, 50... 200
</code></pre>
<p>or</p>
<pre><code>400, 420, 440, 460... 800
</code></pre>
<p>Where the index starts can be random; the only thing that needs to be constant is the index spacing between each returned row.</p>
<p>I've used</p>
<pre><code>df.sample(1000)
</code></pre>
<p>to get a sample of 1000 rows, but I don't see a way of distributing the indexes equally.</p>
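One sketch of the idea: pick a random start position, then slice positionally with a fixed step via <code>iloc</code> (the step spacing, 10 here, stays constant while the start is random; the frame below is a stand-in for the real data):

```python
import random

import pandas as pd

df = pd.DataFrame({'time': range(1000), 'value1': range(1000)})

n, step = 20, 10
# Any start for which start + (n - 1) * step is still inside the frame:
start = random.randrange(len(df) - (n - 1) * step)
sample = df.iloc[start : start + n * step : step]
print(len(sample))
```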
| <python><pandas><dataframe> | 2023-08-09 11:00:49 | 2 | 945 | CEamonn |
76,866,876 | 5,188,872 | More efficent way of plotting image on 3D plane in matplotlib than using a meshgrid | <p>Is there a better way to plot a plane with image in 3D in matplotlib than this:</p>
<pre><code>xx, yy = np.meshgrid(np.linspace(0, 1, img_size[1]), np.linspace(0, 1, img_size[0]))
zz = np.ones((img_size[0],img_size[1]))
ax.plot_surface(xx, yy, zz, rstride=1, cstride=1, facecolors=img / 255, shade=False)
</code></pre>
<p>I don't want to create a surface with as many faces as I have pixels, since that's quite inefficient.
Is there a better way to do so?</p>
<p>This is what my plot looks like:</p>
<p><a href="https://i.sstatic.net/0P2dX.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/0P2dX.png" alt="enter image description here" /></a></p>
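One workaround, hedged: as far as I know mplot3d has no true texture-mapped quad, so keeping <code>plot_surface</code> but downsampling both the grid and the texture is the practical lever, since rendering cost scales with the number of faces rather than the source image size. A self-contained sketch with a random stand-in image:

```python
import numpy as np
import matplotlib
matplotlib.use('Agg')  # headless backend for this sketch
import matplotlib.pyplot as plt

img = (np.random.rand(512, 512, 3) * 255).astype(np.uint8)  # stand-in image

k = 8                          # downsampling factor: ~64x fewer faces
small = img[::k, ::k]
h, w = small.shape[:2]
xx, yy = np.meshgrid(np.linspace(0, 1, w), np.linspace(0, 1, h))
zz = np.ones((h, w))

fig = plt.figure()
ax = fig.add_subplot(projection='3d')
ax.plot_surface(xx, yy, zz, rstride=1, cstride=1,
                facecolors=small / 255, shade=False)
fig.savefig('plane_preview.png')
```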
| <python><numpy><matplotlib> | 2023-08-09 10:29:24 | 1 | 390 | mojado |
76,866,869 | 8,003,790 | Properly type a callable returning a Generator expression | <p>I have the following snippet of code</p>
<pre class="lang-py prettyprint-override"><code>from contextlib import _GeneratorContextManager, contextmanager
GoodWrapperType = Callable[[int, str], _GeneratorContextManager[None]]
BadWrapperType = Callable[[int, str], Generator[None, None, None]]
def wrapper() -> GoodWrapperType:
@contextmanager
def inner(some_int: int, some_str: str) -> Generator[None, None, None]:
# Do things with some_int, some_str
yield
return inner
</code></pre>
<p>I want to do this in the context of a <code>pytest</code> testing suite, where the wrapper is getting some fixture injected into the <code>inner</code>.</p>
<p>I have above the <code>GoodWrapperType</code>, and the <code>BadWrapperType</code>. Pylance is telling me that I can't assign the <code>BadWrapperType</code> as the return type of my <code>wrapper</code>. I have found a solution with the <code>GoodWrapperType</code>, using <code>_GeneratorContextManager</code>, but since it's prefixed with an <code>_</code>, I am not supposed to be importing it.</p>
<p>Is there a better way ? What's the proper way of doing this ?</p>
<p>I was wondering if the fact that there's no straight-forward solution (that I've found anyway), might be a hint that I shouldn't do this in Python.</p>
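One public alternative worth considering: <code>contextlib.AbstractContextManager</code> is generic (subscriptable on Python >= 3.9), and <code>@contextmanager</code>-produced callables return an instance of it, so it can stand in for the private <code>_GeneratorContextManager</code>:

```python
from contextlib import AbstractContextManager, contextmanager
from typing import Callable, Generator

WrapperType = Callable[[int, str], AbstractContextManager[None]]

def wrapper() -> WrapperType:
    @contextmanager
    def inner(some_int: int, some_str: str) -> Generator[None, None, None]:
        yield
    return inner

# The returned callable still works as a context manager at runtime:
with wrapper()(1, 'x'):
    pass
```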
| <python><python-typing><pyright> | 2023-08-09 10:28:44 | 1 | 3,316 | IMCoins |
76,866,854 | 8,855,527 | PyJWT encode raises "Could not deserialize key data." | <p>I used this code before and it was OK, but after months it does not work. This code was taken from the PyJWT documentation without any changes.
I am using Xubuntu and have dockerized my project.</p>
<pre><code>import jwt
SEC = "SECRET"
token = jwt.encode(
{'name': ' ', 'admin': True}, SEC, algorithm='RS256'
)
print(token)
</code></pre>
<p>raise error</p>
<pre><code>Traceback (most recent call last):
File "/home/cc/Documents/test/venv/lib/python3.10/site-packages/jwt/algorithms.py", line 350, in prepare_key
RSAPrivateKey, load_pem_private_key(key_bytes, password=None)
File "/home/cc/Documents/test/venv/lib/python3.10/site-packages/cryptography/hazmat/primitives/serialization/base.py", line 25, in load_pem_private_key
return ossl.load_pem_private_key(
File "/home/cc/Documents/test/venv/lib/python3.10/site-packages/cryptography/hazmat/backends/openssl/backend.py", line 747, in load_pem_private_key
return self._load_key(
File "/home/cc/Documents/test/venv/lib/python3.10/site-packages/cryptography/hazmat/backends/openssl/backend.py", line 929, in _load_key
self._handle_key_loading_error()
File "/home/cc/Documents/test/venv/lib/python3.10/site-packages/cryptography/hazmat/backends/openssl/backend.py", line 984, in _handle_key_loading_error
raise ValueError(
ValueError: ('Could not deserialize key data. The data may be in an incorrect format, it may be encrypted with an unsupported algorithm, or it may be an unsupported key type (e.g. EC curves with explicit parameters).', [<OpenSSLError(code=503841036, lib=60, reason=524556, reason_text=unsupported)>])
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/cc/Documents/test/j.py", line 6, in <module>
token = jwt.encode(
File "/home/cc/Documents/test/venv/lib/python3.10/site-packages/jwt/api_jwt.py", line 73, in encode
return api_jws.encode(
File "/home/cc/Documents/test/venv/lib/python3.10/site-packages/jwt/api_jws.py", line 160, in encode
key = alg_obj.prepare_key(key)
File "/home/cc/Documents/test/venv/lib/python3.10/site-packages/jwt/algorithms.py", line 353, in prepare_key
return cast(RSAPublicKey, load_pem_public_key(key_bytes))
File "/home/cc/Documents/test/venv/lib/python3.10/site-packages/cryptography/hazmat/primitives/serialization/base.py", line 35, in load_pem_public_key
return ossl.load_pem_public_key(data)
File "/home/cc/Documents/test/venv/lib/python3.10/site-packages/cryptography/hazmat/backends/openssl/backend.py", line 794, in load_pem_public_key
self._handle_key_loading_error()
File "/home/cc/Documents/test/venv/lib/python3.10/site-packages/cryptography/hazmat/backends/openssl/backend.py", line 984, in _handle_key_loading_error
raise ValueError(
ValueError: ('Could not deserialize key data. The data may be in an incorrect format, it may be encrypted with an unsupported algorithm, or it may be an unsupported key type (e.g. EC curves with explicit parameters).', [<OpenSSLError(code=75497580, lib=9, reason=108, reason_text=no start line)>])
</code></pre>
<p>I use these library versions:</p>
<pre><code>cffi==1.15.1
cryptography==39.0.0
pycparser==2.21
PyJWT==2.6.0
</code></pre>
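A hedged reading of the traceback: <code>RS256</code> is an asymmetric algorithm, so <code>prepare_key</code> tries to parse the key as a PEM-encoded RSA key, and a plain string like <code>"SECRET"</code> cannot be deserialized. With a shared-secret string, the symmetric <code>HS256</code> is the matching choice:

```python
import jwt

SEC = 'SECRET'

# HS256 (HMAC) takes a shared-secret string; RS256 would instead need an
# RSA private key in PEM format.
token = jwt.encode({'name': ' ', 'admin': True}, SEC, algorithm='HS256')
claims = jwt.decode(token, SEC, algorithms=['HS256'])
print(claims)
```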
| <python><jwt><pyjwt> | 2023-08-09 10:27:30 | 1 | 577 | CC7052 |
76,866,634 | 4,626,254 | How does reduceByKey() in pyspark knows which column is key and which one is value? | <p>I'm a newbie to Pyspark and going through <a href="https://sparkbyexamples.com/pyspark/pyspark-reducebykey-usage-with-examples/" rel="nofollow noreferrer">this</a>.</p>
<p>How does <code>reduceByKey()</code> know whether it should consider the first column as key and the second as value, or vice versa?</p>
<p>I don't see any column name or index mentioned in the code in <code>reduceByKey()</code>. Does <code>reduceByKey()</code> by default consider the first column as key and the second as value?</p>
<p>How to perform <code>reduceByKey()</code> if there are multiple columns in the dataframe?</p>
<p>I'm aware of <code>df.select(col1, col2).reduceByKey()</code>. I'm just looking if there is any other way.</p>
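To summarize the semantics with a local, Spark-free sketch: <code>reduceByKey</code> is defined on an RDD of 2-tuples, so no column names are involved; element <code>[0]</code> is always the key and element <code>[1]</code> the value. With several "columns", the usual move is to map each record to <code>((k1, k2), value)</code> first. The stand-in below mimics <code>rdd.reduceByKey(add)</code>:

```python
from functools import reduce
from itertools import groupby
from operator import add, itemgetter

pairs = [('a', 1), ('b', 2), ('a', 3), ('b', 4)]   # (key, value) 2-tuples

grouped = groupby(sorted(pairs, key=itemgetter(0)), key=itemgetter(0))
result = {k: reduce(add, (v for _, v in g)) for k, g in grouped}
print(result)
```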
| <python><apache-spark><pyspark><reduce> | 2023-08-09 10:01:35 | 1 | 5,266 | Underoos |
76,866,623 | 1,262,382 | what is output of -mzeep --profile? | <p>I am writing a SOAP client in Python with zeep and I am playing with the <code>python -m zeep</code> tool. It has an option --profile [filename]. Does anybody know the output file's format or what it is for? The documentation says only this: <em>--profile PROFILE Enable profiling and save output to given file.</em>
I was not able to find any more info. The file seems to be a binary.</p>
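An educated, hedged guess: such <code>--profile</code> options usually dump binary <code>cProfile</code> statistics, which the stdlib <code>pstats</code> module can read. A self-contained check of that reading pattern (generating a sample dump first):

```python
import cProfile
import pstats

cProfile.run('sum(range(100_000))', 'profile.out')   # sample binary dump

stats = pstats.Stats('profile.out')
stats.sort_stats('cumulative').print_stats(5)
```

If <code>pstats</code> refuses the real file, it is some other serialization, but the cProfile/pstats pair is the first thing to try.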
| <python><soap><zeep> | 2023-08-09 10:00:37 | 1 | 818 | Belovoj |
76,866,552 | 10,232,932 | Groupby with object and datetime64[ns] leads to empty dataframe | <p>I have a dataframe with the following columns:</p>
<pre><code>print(df.dtypes)
Daten float64
timepoint datetime64[ns]
Level object
Sublevel object
dtype: object
</code></pre>
<p>and the following entries:</p>
<pre><code>Daten timepoint Level Sublevel
1 2019-01-01T00:00:00.000 A AA
1 2019-01-01T00:00:00.000 A AA
2 2019-01-01T00:00:00.000 A AB
2 2019-01-01T00:00:00.000 B BA
1 2019-02-01T00:00:00.000 A AA
</code></pre>
<p>I want to <code>groupby</code>and <code>sum</code> with the following command:</p>
<pre><code>df= df.groupby(
['Level', 'timepoint']).agg({'Daten': 'sum'}).reset_index()
</code></pre>
<p>Why does this lead to an empty dataframe?</p>
<p>My expected output would be:</p>
<pre><code>timepoint Level Daten
2019-01-01T00:00:00.000 A 4
2019-01-01T00:00:00.000 B 2
2019-02-01T00:00:00.000 A 1
</code></pre>
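For comparison, a runnable reconstruction of the sample (hedged: it suggests the groupby itself is fine on this data, so an empty result usually means the frame was already empty, or the grouping columns were all-NaN, before the call):

```python
import pandas as pd

df = pd.DataFrame({
    'Daten': [1.0, 1.0, 2.0, 2.0, 1.0],
    'timepoint': pd.to_datetime(['2019-01-01'] * 4 + ['2019-02-01']),
    'Level': ['A', 'A', 'A', 'B', 'A'],
    'Sublevel': ['AA', 'AA', 'AB', 'BA', 'AA'],
})

out = df.groupby(['Level', 'timepoint']).agg({'Daten': 'sum'}).reset_index()
print(out)
```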
| <python><pandas> | 2023-08-09 09:52:00 | 1 | 6,338 | PV8 |
76,866,435 | 5,199,660 | Azure Data Factory: Use Lookup result in ForEach loop with Python | <p>My pipeline looks like the following:</p>
<p><a href="https://i.sstatic.net/I7NUo.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/I7NUo.png" alt="enter image description here" /></a></p>
<blockquote>
<p>Edit: I think I know why those [ ] are in there: the Python script
returns the filenames as a single list. The Lookup
activity then results in the following output:</p>
</blockquote>
<pre><code> {
"count": 1,
"value": [
{
"Prop_0": "['BAT.WISCDD1C.SC.DK002001.D2023191.CSV'",
"Prop_1": " 'BAT.WISCDD1C.SC.DK002001.TOT.D2023191.CSV'",
"Prop_2": " 'BAT.WISCDD4C.SC.DK002004.D2023191.CSV'",
"Prop_3": " 'BAT.WISCDE1C.SC.DK003001.D2023191.CSV'",
"Prop_4": " 'BAT.WISCDE3C.SC.DK003003.D2023191.CSV'",
"Prop_5": " 'BAT.WISCDE4C.SC.DK003004.D2023191.CSV'",
"Prop_6": " 'BAT.WISCDE5C.SC.DK003005.D2023191.CSV'",
"Prop_7": " 'BAT.WISCDP1C.SC.DK001001.D2023191.CSV'",
"Prop_8": " 'BAT.WISCDU1C.SC.DK004011.D2023191.CSV'",
"Prop_9": " 'BAT.WISCDU4C.SC.DK004014.D2023191.CSV'",
"Prop_10": " 'BAT.WISCDU7C.SC.DK004017.D2023191.CSV']"
}
],
"effectiveIntegrationRuntime": "integrationRuntime1 (West Europe)",
"billingReference": {
"activityType": "PipelineActivity",
"billableDuration": [
{
"meterType": "ManagedVNetIR",
"duration": 0.016666666666666666,
"unit": "Hours"
}
]
},
"durationInQueue": {
"integrationRuntimeQueue": 0
}
}
</code></pre>
<p>The ForEach loop should walk through the Prop_ elements and pass them one by one to my Custom Batch Python script.</p>
<p><a href="https://i.sstatic.net/KEhAU.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/KEhAU.png" alt="enter image description here" /></a></p>
<p>In the Batch Custom activity I refer to the items as @{item()} as I guess I should. However, this is how the result looks when the Python script receives the parameter!</p>
<pre><code> {Prop_0:[
'BAT.WISCDD1C.SC.DK002001.D2023191.CSV',
Prop_1: 'BAT.WISCDD1C.SC.DK002001.TOT.D2023191.CSV',
Prop_2: 'BAT.WISCDD4C.SC.DK002004.D2023191.CSV',
Prop_3: 'BAT.WISCDE1C.SC.DK003001.D2023191.CSV',
Prop_4: 'BAT.WISCDE3C.SC.DK003003.D2023191.CSV',
Prop_5: 'BAT.WISCDE4C.SC.DK003004.D2023191.CSV',
Prop_6: 'BAT.WISCDE5C.SC.DK003005.D2023191.CSV',
Prop_7: 'BAT.WISCDP1C.SC.DK001001.D2023191.CSV',
Prop_8: 'BAT.WISCDU1C.SC.DK004011.D2023191.CSV',
Prop_9: 'BAT.WISCDU4C.SC.DK004014.D2023191.CSV',
Prop_10: 'BAT.WISCDU7C.SC.DK004017.D2023191.CSV']}
</code></pre>
<p>I don't understand how to use the Prop_ in the value! I would like to get only one CSV filename per ForEach iteration!</p>
<p>What am I doing wrong? Thank you for your support!</p>
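For what it's worth, a common fix on the script side (hedged, since the rest of the pipeline isn't shown): write one filename per <em>row</em> instead of dumping <code>str(list)</code> into a single delimited row. The Lookup then yields an array of one-property objects, and <code>@item().Prop_0</code> inside the ForEach carries exactly one filename (with the ForEach items set to something like <code>@activity('Lookup1').output.value</code>, where 'Lookup1' is an illustrative activity name and "First row only" is unchecked). A sketch of the writer:

```python
import csv

filenames = ['a.CSV', 'b.CSV', 'c.CSV']   # stand-in for the real list

with open('filenames.csv', 'w', newline='') as fh:
    writer = csv.writer(fh)
    for name in filenames:
        writer.writerow([name])           # one filename per row
```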
| <python><azure-data-factory><azure-batch> | 2023-08-09 09:38:11 | 1 | 656 | Eve |
76,866,139 | 10,430,394 | RDKit image MolToImage scales Bond widths and Element Labels inconsistently for different image sizes? | <p>I've noticed that when I create an image from a molecule in RDKit, the <code>size</code> argument leads to inconsistent scaling of the bond width and element labels. The bigger the size, the thinner the lines and the smaller the element labels.</p>
<p>I've run a test by generating an image for the same molecule using <code>MolToImage</code> at progressively bigger sizes. I rescaled those images to <code>size=(600,600)</code> and then concatenated them into a GIF. This is the result.</p>
<p><a href="https://i.sstatic.net/BwNVK.gif" rel="nofollow noreferrer"><img src="https://i.sstatic.net/BwNVK.gif" alt="Resulting GIF" /></a></p>
<p>Here's my code</p>
<pre class="lang-py prettyprint-override"><code>from glob import glob
from rdkit import Chem
from rdkit.Chem import Draw
from PIL import Image,ImageDraw,ImageFont
def make_frames_from_smi(smi):
for i in range(10):
s = (i+3)*100
mol = Chem.MolFromSmiles(smi)
img = Draw.MolToImage(mol,size=(s,s))
img = img.resize((600,600))
draw = ImageDraw.Draw(img)
text = '%d: Initial Size: (%d,%d)'%(i+1,s,s)
font_size = 40
font = ImageFont.truetype("arial.ttf", font_size) # Use your desired font
# Calculate text position
bbox = draw.textbbox((0, 0), text, font=font)  # bbox was never defined in the original
image_width, image_height = img.size
text_x = (image_width - (bbox[2] - bbox[0])) // 2
text_y = 20 # Adjust the vertical position as needed
draw.text((text_x, text_y), text, font=font, fill='black')
img.save('%03dtest.png'%i)
def make_gif_from_frames(paths):
frames_paths = glob(paths)
frames = [Image.open(imgp) for imgp in frames_paths]
frames[0].save("mols.gif", format="GIF", append_images=frames, save_all=True, duration=500, loop=False)
# make RDKit mol obj.
smi = 'CN(C)CC1CCCCC1(C2=CC(=CC=C2)OC)O'
make_frames_from_smi(smi)
make_gif_from_frames('*.png')
</code></pre>
<p>Is this expected behaviour? Is the bond width held constant for a certain absolute value of pixels? How can I generate these images with consistent proportions regardless of width/height of pixels?</p>
| <python><image><python-imaging-library><rdkit> | 2023-08-09 09:02:02 | 1 | 534 | J.Doe |
76,866,092 | 3,010,217 | How to reorder columns on a subclassed pandas Dataframe | <p>I want to reorder dataframe columns from a subclassed pandas dataframe.</p>
<p>I understood from this <a href="https://stackoverflow.com/a/35619846/3010217">question</a> that there might be a better way than subclassing a dataframe, but I'm still wondering how to approach this.</p>
<p>Without subclassing, I would do it in a classic way:</p>
<pre><code>import pandas as pd
data = {'Description':['mydesc'], 'Name':['myname'], 'Symbol':['mysymbol']}
df = pd.DataFrame(data)
df = df[['Symbol', 'Name', 'Description']]
</code></pre>
<p>But with subclassing, keeping the same behavior as the classic one doesn't reorder the columns:</p>
<pre><code>import pandas as pd
class SubDataFrame(pd.DataFrame):
def __init__(self, *args, **kwargs):
super().__init__(*args, **kwargs)
self = self._reorder_columns()
def _reorder_columns(self):
first_columns = ['Symbol', 'Name', 'Description']
return self[first_columns + [c for c in self.columns if c not in first_columns]]
data = {'Description':['mydesc'], 'Name':['myname'], 'Symbol':['mysymbol']}
df = SubDataFrame(data)
</code></pre>
<p>I believe my mistake is in reassigning <code>self</code> which doesn't have any effect.</p>
<p>How can I achieve column reordering on the subclassed dataframe?</p>
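Indeed, assigning to <code>self</code> only rebinds the local name. One sketch that mutates the frame in place through public API: pop each priority column and re-insert it at the front inside <code>__init__</code>:

```python
import pandas as pd

class SubDataFrame(pd.DataFrame):
    _FIRST = ['Symbol', 'Name', 'Description']

    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)
        # Mutate this object in place; rebinding `self` would have no effect.
        for i, col in enumerate(c for c in self._FIRST if c in self.columns):
            self.insert(i, col, self.pop(col))

data = {'Description': ['mydesc'], 'Name': ['myname'], 'Symbol': ['mysymbol']}
df = SubDataFrame(data)
print(list(df.columns))
```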
| <python><pandas><dataframe><subclass> | 2023-08-09 08:55:09 | 1 | 1,094 | Begoodpy |
76,866,071 | 7,695,845 | How to override predefined unit in pint? | <p>I use the <code>pint</code> library in my lab assignments to efficiently deal with unit specifications and conversions. We recently had an experiment when we took some pictures and in the code, we need to convert from pixels to meters with a conversion factor. I wanted to make a new <code>pint</code> unit for pixels so it handles this conversion automatically:</p>
<pre class="lang-py prettyprint-override"><code>from pint import UnitRegistry
ureg = UnitRegistry(system="SI")
ureg.define("pixel = 5.2e-6 * meter = px")
size = 42 * ureg.pixel
print(size.to(ureg.meter))
</code></pre>
<p>Unfortunately, this code snippet doesn't work because <code>pixel</code> is already defined with the dimensionality of <code>[printing_unit]</code>, so the conversion fails, and the alias <code>px</code> I tried to give it is already an alias for <code>css_pixel</code>, which has a dimensionality of <code>[length]</code> but a different conversion factor. So if I write <code>size = 42 * ureg.px</code>, I don't get an error, but the conversion is wrong.</p>
<p>How can I override these default units with my definitions? I don't believe I'll ever use the predefined <code>pixel</code> or <code>css_pixel</code> units so I have no issue overriding them with units that are more specific to my code. I know I can use a definition file, but it seems to me like an overkill for such a simple task. This is a script for a physics lab assignment, so I don't want to make it very complicated.</p>
| <python><pint> | 2023-08-09 08:52:22 | 1 | 1,420 | Shai Avr |
76,865,930 | 7,713,770 | css files are not loaded when I am using production mode with Django | <p>I have a django app. And I try to simulate production envrionment.</p>
<p>So I have split the settings into three separate files: local, prod and base.</p>
<p>Part of the base.py looks like:</p>
<pre><code>import os
from pathlib import Path
from os import environ
from dotenv import load_dotenv
load_dotenv()
# Build paths inside the project like this: BASE_DIR / 'subdir'.
BASE_DIR = Path(__file__).resolve().parent.parent
# Quick-start development settings - unsuitable for production
# See https://docs.djangoproject.com/en/4.1/howto/deployment/checklist/
# SECURITY WARNING: keep the secret key used in production secret!
SECRET_KEY = os.environ.get('SECRET_KEY')
# SECURITY WARNING: don't run with debug turned on in production!
DEBUG = os.environ.get('DEBUG') == "False"
STATIC_URL = '/static/'
STATIC_DIRS = [
(BASE_DIR, 'static')
]
STATIC_ROOT = BASE_DIR /'staticfiles'
STATICFILES_DIRS =['zijn']
MEDIA_URL = '/media/'
MEDIA_ROOT = BASE_DIR /'media'
# Default primary key field type
# https://docs.djangoproject.com/en/4.1/ref/settings/#default-auto-field
DEFAULT_AUTO_FIELD = 'django.db.models.BigAutoField'
MEDIA_ROOT = BASE_DIR/'media'
MEDIA_URL = '/media/'
</code></pre>
<p>And the manage.py file looks like:</p>
<pre><code>#!/usr/bin/env python
"""Django's command-line utility for administrative tasks."""
import os
import sys
from zijn.settings import base
def main():
"""Run administrative tasks."""
if base.DEBUG:
os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'zijn.settings.local')
else:
os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'zijn.settings.production')
try:
from django.core.management import execute_from_command_line
except ImportError as exc:
raise ImportError(
"Couldn't import Django. Are you sure it's installed and "
"available on your PYTHONPATH environment variable? Did you "
"forget to activate a virtual environment?"
) from exc
execute_from_command_line(sys.argv)
if __name__ == '__main__':
main()
</code></pre>
<p>And this works if I set DEBUG to False. I get this output:</p>
<pre><code>Django version 4.2.3, using settings 'zijn.settings.production'
Starting development server at http://127.0.0.1:8080/
</code></pre>
<p>But in production mode the CSS is not loaded. In dev tools I see the following errors:</p>
<pre><code>GET
http://127.0.0.1:8080/static/admin/js/theme.js
GET
http://127.0.0.1:8080/static/admin/js/nav_sidebar.js
The resource from "http://127.0.0.1:8080/static/admin/js/theme.js" was blocked due to MIME type ("text/html") mismatch (X-Content-Type-Options: nosniff).
admin
The resource from "http://127.0.0.1:8080/static/admin/js/nav_sidebar.js" was blocked due to MIME type ("text/html") mismatch (X-Content-Type-Options: nosniff).
</code></pre>
<p>url.py:</p>
<pre><code>from django.conf import settings
from django.conf.urls.static import static
from django.contrib import admin
from django.urls import include, path
from drf_spectacular.views import (SpectacularAPIView, SpectacularSwaggerView)
from rest_framework.schemas import get_schema_view
urlpatterns = [
path('admin/', admin.site.urls),
path('api/', include('zijnAdmin.urls')),
path('api-auth/', include('rest_framework.urls')),
path('schema/', get_schema_view()),
path('api/schema/', SpectacularAPIView.as_view(), name='api-schema'),
path('api/docs', SpectacularSwaggerView.as_view(url_name='api-schema'), name='api-docs'),
path('api/user/',include('accounts.urls') )
] + static(settings.MEDIA_URL, document_root=settings.MEDIA_ROOT) \
+ static(settings.STATIC_URL, document_root=settings.STATIC_ROOT)
</code></pre>
<p>In local mode everything works fine.</p>
<p>Question: how do I run production mode with the CSS loaded?</p>
| <python><django><django-rest-framework> | 2023-08-09 08:32:19 | 2 | 3,991 | mightycode Newton |
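The development server only serves static files when `DEBUG=True`; with `DEBUG=False`, every `/static/...` request falls through to Django's URL resolver and comes back as an HTML 404, which is exactly the MIME-type mismatch the browser reports. One common approach, sketched here under the assumption that WhiteNoise is installed (`pip install whitenoise`), is to let WhiteNoise serve the collected files:

```python
# zijn/settings/production.py (sketch; assumes WhiteNoise is installed)
from .base import *  # noqa: F401,F403

DEBUG = False

MIDDLEWARE = [
    "django.middleware.security.SecurityMiddleware",
    # WhiteNoise must sit directly after SecurityMiddleware:
    "whitenoise.middleware.WhiteNoiseMiddleware",
    # ... the rest of your middleware ...
]
```

Run `python manage.py collectstatic` first so the files exist under `STATIC_ROOT`. For a quick local test without WhiteNoise, `python manage.py runserver --insecure` also serves static files with `DEBUG=False`. Note also that the setting Django reads is `STATICFILES_DIRS`; the `STATIC_DIRS` name in `base.py` is silently ignored.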
76,865,639 | 8,068,825 | Pandas - Get all rows that have same selected column values as specific row | <p>I have this code below, where <code>features</code> is a list of column names and <code>filtered</code> is a Pandas dataframe. What I'm trying to do is, given a specific row (index=0 in this case), find all the rows with the same <code>features</code> values as the row at index 0. The code below throws an error, but essentially the idea was to check each row and, if it had the same <code>features</code> as the row with index 0, return those rows. The error it throws is <code>pandas.core.indexing.IndexingError: Unalignable boolean Series provided as indexer (index of the boolean Series and of the indexed object do not match).</code></p>
<pre><code>print(filtered[features].drop_duplicates().iloc[0][features])
print(filtered[filtered[features].eq(filtered[features].drop_duplicates().iloc[0][features]).all()])
</code></pre>
<p>Also tried:</p>
<pre><code>for idx, row in filtered.iterrows():
sub_filt = filtered[filtered[sub_features].eq(row[sub_features]).all(axis=1)]
print(len(sub_filt))
</code></pre>
<p>and it prints all 0s, but it should be printing at least a 1.</p>
| <python><pandas> | 2023-08-09 07:48:45 | 2 | 733 | Gooby |
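The first attempt fails because `.all()` without an `axis` argument reduces down the columns, producing a boolean Series indexed by column names rather than by rows. A minimal sketch on toy data (hypothetical column names):

```python
import pandas as pd

# Toy stand-ins for the real `filtered` frame and `features` list.
df = pd.DataFrame({'f1': [1, 1, 2], 'f2': ['x', 'x', 'y'], 'other': [9, 8, 7]})
features = ['f1', 'f2']

ref = df.loc[0, features]                # feature values of the reference row
mask = df[features].eq(ref).all(axis=1)  # axis=1: one boolean per *row*
matches = df[mask]
print(list(matches.index))  # [0, 1]
```

With `axis=1` the mask is aligned with the frame's row index, so it can be used as an indexer without the "Unalignable boolean Series" error.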
76,865,565 | 15,863,624 | " No such file or directory" error when calling `os.mkdir` on WSL | <p>My laptop's main system is Windows, but it has also Windows Subsystem for Linux (WSL) installed. When I started to write code in python installed on WSL (Connect to WSL option in VSCode), it turned out that <code>os.mkdir</code> doesn't work as expected:</p>
<pre><code>import os
print(os.uname()[0]) # 'Linux'
path = os.path.join("New", "2023") # path = "New/2023"
os.mkdir(path)
</code></pre>
<p>raises <code>FileNotFoundError: [Errno 2] No such file or directory: 'New/2023'</code>.</p>
<p>How can I fix it? For more context: this is part of a project where I need to automate the creation of some new directories and save generated files to them, and it must be OS-independent because some users use Windows and some Linux.</p>
| <python><path><operating-system><windows-subsystem-for-linux> | 2023-08-09 07:37:42 | 1 | 453 | stats_b |
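The error is most likely not WSL-specific: `os.mkdir` only creates the final directory, so it fails whenever the parent (`New` here) does not exist yet. Creating the intermediate directories too, sketched here inside a temporary directory, behaves the same on Windows and Linux:

```python
import tempfile
from pathlib import Path

base = Path(tempfile.mkdtemp())   # stand-in for the project directory
path = base / "New" / "2023"      # OS-independent path building

# os.mkdir(path) would raise FileNotFoundError because "New" is missing;
# parents=True creates it first, exist_ok=True makes reruns idempotent.
path.mkdir(parents=True, exist_ok=True)
print(path.is_dir())  # True
```

The `os` equivalent is `os.makedirs(path, exist_ok=True)`, which likewise creates missing parents.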
76,865,563 | 5,232,122 | Why in Django many-to-many relation shows wrong data in the admin panel | <p>I'm trying to implement the many-to-many relation to support in the user model these features:</p>
<ol>
<li>user followers (persons who follows the user)</li>
<li>user following (persons the user follows)</li>
</ol>
<p>To achieve that I'm using a double relation on the same model, because followers/following are the same User model.
Basically it works; however, I'm facing weird behavior in the Django admin panel. When I add a user to another user's <strong>following</strong> list (in the shell everything is OK), the Django admin panel shows that user in the <strong>followers</strong> list, but he should be in the <strong>following</strong> list. I can't understand why this happens.
<br><br>Many thanks for any help!
<br /><br>
My models.py</p>
<pre><code>from django.contrib.auth.models import AbstractUser
from django.db import models
class User(AbstractUser):
followers = models.ManyToManyField('self',blank=True,related_name='user_followers',symmetrical=False)
following = models.ManyToManyField('self',blank=True,related_name='user_following',symmetrical=False)
pass
</code></pre>
<p>Suppose Admin wants to follow Neo. Neo should then have Admin as a follower, and Admin should have Neo in his <em>following</em> list.
Shell commands and result:</p>
<pre><code>admin = User.objects.filter(username='admin')[0]
neo = User.objects.filter(username='neo_anderson')[0]
// check that everything is empty
neo.user_following.all()
<QuerySet []>
neo.user_followers.all()
<QuerySet []>
// --- admin wants to follow neo ---
// add the admin to the neo followers
neo.user_followers.add(admin)
neo.save()
// add the neo to the admin following
admin.user_following.add(neo)
admin.save()
// check if admin follows Neo
admin.user_following.contains(neo)
TRUE
// check if neo has the follower admin
neo.user_followers.contains(admin)
TRUE
</code></pre>
<p>But in the admin panel I see that for the user <strong>Admin</strong> the user <strong>Neo</strong> is in the <strong>followers</strong> section but it should be <strong>following</strong> because Admin wants to follow Neo. Not vice versa</p>
<p><a href="https://i.sstatic.net/CSP82.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/CSP82.png" alt="admin_screen" /></a></p>
| <python><django> | 2023-08-09 07:37:17 | 1 | 6,154 | Velidan |
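With an asymmetric many-to-many, a single field already models both directions: the field itself is the forward relation and `related_name` exposes the reverse one. With two independent fields, the reverse accessors write to the *other* side of each field, so `neo.user_followers.add(admin)` actually puts Neo into Admin's `followers` field, which is exactly what the admin page then displays. A sketch with one field instead:

```python
from django.contrib.auth.models import AbstractUser
from django.db import models

class User(AbstractUser):
    # Forward:  user.following.all() -> users this user follows.
    # Reverse:  user.followers.all() -> users who follow this user.
    following = models.ManyToManyField(
        'self',
        symmetrical=False,
        related_name='followers',
        blank=True,
    )
```

Then a single `admin.following.add(neo)` makes both `admin.following.contains(neo)` and `neo.followers.contains(admin)` true, with no second field to keep in sync.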
76,865,523 | 7,306,999 | xlwings: terminate VBA execution upon Python error | <p>I have an Excel VBA subroutine that calls a Python function with the xlwings <code>RunPython</code> command. If the called Python function terminates due to an error, then a pop-up with the Python error traceback appears in Excel. However afterwards Excel happily continues with execution of the remainder of the VBA subroutine.</p>
<p>How can I force the VBA subroutine to terminate immediately after the Python function from <code>RunPython</code> raises an error?</p>
| <python><excel><vba><xlwings> | 2023-08-09 07:31:19 | 1 | 8,674 | Xukrao |
76,865,429 | 6,162,092 | Python - Using Spreadsheet Dictionary Once per sheet | <p>I'm not much of a Python dev and am in the process of creating a script which will pull all tables from a PDF into Excel. The idea is to then translate the keys of those tables to something meaningful so that I can import it into my DB. The tables are separated by sheets which is perfect.</p>
<p>So currently I have around 1k PDFs (all similar type) and I have a 'master' spreadsheet file which basically says 'If you see this keyword in the converted file, change the value which is in the dictionary'.</p>
<p>However, I want it so that once a key has been used in the dictionary, it cannot be used again in a different sheet.</p>
<p>I have been using ChatGPT to help me write this, but now it is getting confused because some of the keys in the converted Excel files repeat across multiple sheets.</p>
<p>For example:</p>
<p><strong>Sheet 1:</strong></p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>Key</th>
<th>Value</th>
</tr>
</thead>
<tbody>
<tr>
<td>ID</td>
<td>1</td>
</tr>
<tr>
<td>Name</td>
<td>Bob</td>
</tr>
</tbody>
</table>
</div>
<p><em>Should be:</em></p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>Key</th>
<th>Value</th>
</tr>
</thead>
<tbody>
<tr>
<td>A1.ID</td>
<td>1</td>
</tr>
<tr>
<td>A1.Name</td>
<td>Bob</td>
</tr>
</tbody>
</table>
</div>
<p><strong>Sheet 2</strong></p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>Key</th>
<th>Value</th>
</tr>
</thead>
<tbody>
<tr>
<td>ID</td>
<td>4</td>
</tr>
<tr>
<td>First Name</td>
<td>John</td>
</tr>
</tbody>
</table>
</div>
<p><em>Should be:</em></p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>Key</th>
<th>Value</th>
</tr>
</thead>
<tbody>
<tr>
<td>A2.ID</td>
<td>1</td>
</tr>
<tr>
<td>A2.Name</td>
<td>Bob</td>
</tr>
</tbody>
</table>
</div>
<p><em>But in fact it is:</em></p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>Key</th>
<th>Value</th>
</tr>
</thead>
<tbody>
<tr>
<td>A1.ID</td>
<td>1</td>
</tr>
<tr>
<td>A2.Name</td>
<td>Bob</td>
</tr>
</tbody>
</table>
</div>
<p>The code:</p>
<pre class="lang-py prettyprint-override"><code>def convert_to_dictionary(file_path):
dictionary_file = pd.read_excel(dictionary_file_path, sheet_name="All")
master_dictionary = {}
num_columns = len(dictionary_file.columns)
# Create a dictionary for each column starting from 2 and store them in the master dictionary
for i in range(2, num_columns):
current_column_dictionary = {
str(row[i]).strip().lower(): str(row[1])
for _, row in dictionary_file.iterrows()
}
master_dictionary[i] = current_column_dictionary
xls = pd.ExcelFile(file_path)
modified_sheets = []
used_words_per_sheet = {} # Dictionary to store used words for each sheet
for sheet_name in xls.sheet_names:
input_file = pd.read_excel(xls, sheet_name=sheet_name, header=None)
used_words = set() # Set to store used words for the current sheet
for column_number, column_dictionary in master_dictionary.items():
input_file.iloc[:, 0] = input_file.iloc[:, 0].apply(
lambda x, used_words=used_words, column_dictionary=column_dictionary: column_dictionary.get(
str(x).strip().lower(), x
)
if str(x).strip().lower() not in used_words
else x
)
used_words.update(
input_file.iloc[:, 0].apply(str).str.strip().str.lower()
) # Update used_words set
used_words_per_sheet[
sheet_name
] = used_words # Store used words for the current sheet
modified_sheets.append((sheet_name, input_file))
# Save the modified sheets with sheet-specific names
with pd.ExcelWriter(file_path) as writer:
for sheet_name, modified_sheet in modified_sheets:
modified_sheet.to_excel(writer, sheet_name=sheet_name, index=False)
</code></pre>
<p>I did use ChatGPT for some of this...</p>
<p>Hope this helps!</p>
<p>Thanks,
Sam</p>
| <python><excel> | 2023-08-09 07:14:37 | 1 | 467 | SCramphorn |
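The "use each key at most once" rule is easier to reason about when isolated from the pandas plumbing. A small sketch (hypothetical names) where the caller decides the scope of "once" by choosing how long the `used` set lives:

```python
def translate_keys(keys, mapping, used=None):
    """Translate `keys` via `mapping`, using each mapping key at most once.

    Pass a shared `used` set to enforce "once across all sheets",
    or omit it for a fresh set (once per sheet).
    """
    if used is None:
        used = set()
    out = []
    for key in keys:
        norm = str(key).strip().lower()
        if norm in mapping and norm not in used:
            out.append(mapping[norm])
            used.add(norm)          # this key is now spent
        else:
            out.append(key)         # already used (or unknown): keep as-is
    return out

spent = set()
sheet1 = translate_keys(['ID', 'Name'], {'id': 'A1.ID', 'name': 'A1.Name'}, spent)
sheet2 = translate_keys(['ID', 'First Name'], {'id': 'A2.ID'}, spent)
print(sheet1)  # ['A1.ID', 'A1.Name']
print(sheet2)  # ['ID', 'First Name'] -- 'id' was already spent on sheet 1
```

In the larger script, each sheet would call this helper on its first column, passing the same `spent` set across sheets (or a fresh one per sheet, depending on which behavior is wanted).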
76,865,344 | 3,613,158 | Google Ads API & Python - How to get all accounts (name + ID) that a user has access | <p>I'm trying to get the account name and ID of all the Google Ads accounts that the logged user has access to, so the user can later check the stats of each of those accounts.</p>
<p><strong>I've managed to get the IDs</strong> using the code on <a href="https://developers.google.com/google-ads/api/docs/account-management/listing-accounts?hl=es-419" rel="nofollow noreferrer">https://developers.google.com/google-ads/api/docs/account-management/listing-accounts?hl=es-419</a>.</p>
<p>But I need the names too and apparently you have to make <strong>one API call for each ID to get their account names</strong> or any other info.</p>
<p>I've tried with the following Python (Django) code, but it's not working (it probably could be improved a lot and maybe has mistakes, I'm a Python beginner):</p>
<pre><code>def one(request):
client = credenciales(request)
ga_service = client.get_service("GoogleAdsService")
# Get customer resource names from the original code
customer_service = client.get_service("CustomerService")
accessible_customers = customer_service.list_accessible_customers()
customer_resource_names = accessible_customers.resource_names
# Prepare a list to store customer data
list_clients = []
# Iterate through each customer resource name
for resource_name in customer_resource_names:
# Extract the customer ID from the resource name
custom_id = resource_name.split('/')[-1]
# Create a query using the customer_id
query = f'''
SELECT
customer_client.descriptive_name,
customer_client.id,
customer_client.status
FROM customer_client
'''
stream = ga_service.search_stream(customer_id=custom_id, query=query)
for batch in stream:
for row in batch.results:
data_clients = {}
data_clients["descriptive_name"] = row.customer_client.descriptive_name
data_clients["id"] = row.customer_client.id
data_clients["status"] = row.customer_client.status
list_clients.append(data_clients)
# Pass the list of customer data to the template
context = {
'list_clients': list_clients,
}
return render(request, 'one.html', context)
</code></pre>
<p><strong>It gives me the error "authorization_error: USER_PERMISSION_DENIED"</strong> (which I don't understand, because it should automatically show only the accounts that the user has permission to access; I'm not supplying those accounts/IDs manually).</p>
| <python><django><google-ads-api> | 2023-08-09 07:03:36 | 0 | 691 | migueltic |
76,864,971 | 1,581,090 | How to fix "Invalid argument" of pexpect send/sendline on windows with python? | <p>On windows 10 I am using the following python code to send a command to a device</p>
<pre><code>import pexpect
from pexpect import popen_spawn
session = popen_spawn.PopenSpawn(f"plink.exe -ssh 192.168.200.10 9000")
session.expect(pexpect.EOF, timeout=2)
session.send("$SYS,INFO\r\n")
time.sleep(5)
print(session.read_nonblocking(4096, 1))
</code></pre>
<p>But all I get is an error</p>
<pre><code>OSError: [Errno 22] Invalid argument
</code></pre>
<p>I tried to encode the string, use <code>sendline</code> instead, with and without the <code>\r\n</code>, but always the same error. Anything I am missing here?</p>
<ul>
<li>windows 10</li>
<li>python 3.10.11</li>
<li>pexpect 4.8.0</li>
</ul>
| <python><windows><pexpect> | 2023-08-09 06:05:03 | 0 | 45,023 | Alex |
76,864,747 | 19,157,137 | Python pytest cannot find modules inside src directory | <p>I'm facing a challenge with my Python project setup that involves module imports, testing using <code>pytest</code>, and working with Jupyter Notebook. My project directory structure looks like this:</p>
<pre><code>project/
βββ Dockerfile
βββ requirements.txt
βββ pyproject.toml
βββ jupyter_notebook_config.py
βββ notebooks/
β βββ src/
β β βββ my_module.py
β β βββ module_name.c
β βββ tests/
β β βββ test_my_module.py
β β βββ test_sub_module.py
β βββ my_notebook.ipynb
βββ ...
</code></pre>
<p>Inside the <code>pyproject.toml</code> file, I've configured the Python paths for <code>pytest</code> like this using the <code>pytest-pythonpath</code> plugin:</p>
<pre class="lang-ini prettyprint-override"><code># pyproject.toml
[tool.pytest.ini_options]
python_paths = ["src"]
</code></pre>
<p>My Docker container is based on the <code>python:3.11</code> image and uses a Dockerfile similar to the following:</p>
<pre><code>FROM python:3.11
# Copy the requirements.txt file to the container
COPY requirements.txt /app/requirements.txt
# Install the dependencies from the requirements.txt file
RUN pip install -r /app/requirements.txt
# Copy the pyproject.toml file to the container
COPY pyproject.toml /app/pyproject.toml
# Set working directory
WORKDIR /app
# Run pip install -e . to install pyproject.toml
RUN pip install -e .
# Expose Jupyter Notebook port
EXPOSE 8888
# Copy the custom Jupyter configuration file
COPY jupyter_notebook_config.py /root/.jupyter/
# Run Jupyter Notebook on container startup
CMD ["jupyter", "notebook", "--ip=0.0.0.0", "--port=8888", "--no-browser", "--allow-root"]
</code></pre>
<p>I've attempted to address module import issues by adding the following code at the beginning of my test files and notebooks:</p>
<pre class="lang-py prettyprint-override"><code>import os
import sys
sys.path.insert(0, os.path.abspath(os.path.join(os.path.dirname(__file__), 'notebooks/src')))
</code></pre>
<p>However, despite these efforts, I'm encountering problems where both <code>pytest</code> and Jupyter Notebook cannot locate the modules inside the <code>src</code> directory. How can I properly configure my project so that both <code>pytest</code> and Jupyter Notebook can access the modules within the <code>src</code> directory for importing, testing, and working with notebooks?</p>
<p>To provide additional context, I've encountered similar issues with module imports and working with compiled C extensions and Python files. I've referred to the following discussions for guidance:</p>
<ol>
<li><a href="https://stackoverflow.com/questions/76759473/unable-to-access-pytest-test-features-without-modifying-sys-path">Unable to Access pytest Test Features Without Modifying sys.path</a></li>
<li><a href="https://discourse.jupyter.org/t/importing-compiled-c-extensions-python-files-from-folder-dynamically-in-jupyter-notebook/20811" rel="nofollow noreferrer">Importing Compiled C Extensions & Python Files from Folder Dynamically in Jupyter Notebook</a></li>
</ol>
<p>Additionally, when attempting to run <code>pytest</code>, I'm encountering an error message like the one below:</p>
<pre><code>============================= test session starts ==============================
platform linux -- Python 3.11.4, pytest-6.2.5, py-1.11.0, pluggy-1.2.0
rootdir: /app
plugins: pythonpath-0.7.4, anyio-3.7.1
collected 0 items / 1 error
==================================== ERRORS ====================================
_______________ ERROR collecting tests/test_my_functions_demo.py _______________
ImportError while importing test module '/app/tests/test_my_functions_demo.py'.
Hint: make sure your test modules/packages have valid Python names.
Traceback:
/usr/local/lib/python3.11/importlib/__init__.py:126: in import_module
return _bootstrap._gcd_import(name[level:], package, level)
tests/test_my_functions_demo.py:2: in <module>
from my_functions_demo import add, subtract
E ModuleNotFoundError: No module named 'my_functions_demo'
=========================== short test summary info ============================
ERROR tests/test_my_functions_demo.py
!!!!!!!!!!!!!!!!!!!! Interrupted: 1 error during collection !!!!!!!!!!!!!!!!!!!!
=============================== 1 error in 0.15s ===============================
</code></pre>
<p>To illustrate the issue further:</p>
<p><strong>Working Solution:</strong></p>
<pre class="lang-py prettyprint-override"><code>import src.module_name
# Calculate lengths and elements for the range from 1 to 15 using Cython
sample_range, lengths, elements = src.module_name.calculate_lengths_cython(1, 15)
</code></pre>
<p><strong>Not Working Solution:</strong></p>
<pre class="lang-py prettyprint-override"><code>import module_name
# Calculate lengths and elements for the range from 1 to 15 using Cython
sample_range, lengths, elements = module_name.calculate_lengths_cython(1, 15)
</code></pre>
| <python><docker><jupyter-notebook><containers><jupyter> | 2023-08-09 05:10:07 | 0 | 363 | Bosser445 |
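Two details are worth checking here. First, the plugin option is `python_paths`, but pytest 7+ has a built-in `pythonpath` ini option (note the different key) that makes the `pytest-pythonpath` plugin unnecessary. Second, `sys.path` hacks based on `os.path.dirname(__file__)` break as soon as the file sits at a different depth than expected. A root-level `conftest.py` sketch that resolves the path relative to itself, assuming the layout above:

```python
# conftest.py at the project root; pytest imports this before collecting
# tests, and the same lines work as the first cell of a notebook.
import sys
from pathlib import Path

if "__file__" in globals():
    HERE = Path(__file__).resolve().parent
else:                      # e.g. running inside a notebook cell
    HERE = Path.cwd()

SRC = HERE / "notebooks" / "src"
if str(SRC) not in sys.path:
    sys.path.insert(0, str(SRC))
```

With pytest 7+, the conftest can instead be replaced by `pythonpath = ["notebooks/src"]` under `[tool.pytest.ini_options]` in `pyproject.toml` (paths are relative to the rootdir shown in the pytest header, `/app` in the output above).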
76,864,569 | 1,866,775 | Why does tensorflow.function (without jit_compile) speed up forward passes of a Keras model? | <p><a href="https://www.tensorflow.org/xla" rel="nofollow noreferrer">XLA</a> can be enabled using <code>model = tf.function(model, jit_compile=True)</code>. Some model types are faster that way, some are slower. So far, so good.</p>
<p>But why can <code>model = tf.function(model, jit_compile=None)</code> speed things up significantly (without TPU) in some cases?</p>
<p>The <code>jit_compile</code> <a href="https://github.com/tensorflow/tensorflow/blob/v2.13.0/tensorflow/python/eager/polymorphic_function/polymorphic_function.py#L1519-L1532" rel="nofollow noreferrer">docs</a> state:</p>
<blockquote>
<p>If <code>None</code> (default), compiles the function with XLA when running on TPU
and goes through the regular function execution path when running on
other devices.</p>
</blockquote>
<p>I'm running my tests on two non-TPU (and even non-GPU) machines (with the latest TensorFlow (<code>2.13.0</code>) installed).</p>
<pre class="lang-py prettyprint-override"><code>import timeit
import numpy as np
import tensorflow as tf
model_plain = tf.keras.applications.efficientnet_v2.EfficientNetV2S()
model_jit_compile_true = tf.function(tf.keras.applications.efficientnet_v2.EfficientNetV2S(), jit_compile=True)
model_jit_compile_false = tf.function(tf.keras.applications.efficientnet_v2.EfficientNetV2S(), jit_compile=False)
model_jit_compile_none = tf.function(tf.keras.applications.efficientnet_v2.EfficientNetV2S(), jit_compile=None)
def run(model):
model(np.random.random(size=(1, 384, 384, 3)))
# warmup
run(model_plain)
run(model_jit_compile_true)
run(model_jit_compile_false)
run(model_jit_compile_none)
runs = 10
duration_plain = timeit.timeit(lambda: run(model_plain), number=runs) / runs
duration_jit_compile_true = timeit.timeit(lambda: run(model_jit_compile_true), number=runs) / runs
duration_jit_compile_false = timeit.timeit(lambda: run(model_jit_compile_false), number=runs) / runs
duration_jit_compile_none = timeit.timeit(lambda: run(model_jit_compile_none), number=runs) / runs
print(f"{duration_plain=}")
print(f"{duration_jit_compile_true=}")
print(f"{duration_jit_compile_false=}")
print(f"{duration_jit_compile_none=}")
</code></pre>
<pre><code>duration_plain=0.53095479644835
duration_jit_compile_true=1.5860380740836262
duration_jit_compile_false=0.09831228516995907
duration_jit_compile_none=0.09407951850444078
</code></pre>
| <python><tensorflow><keras><tensorflow-xla><xla> | 2023-08-09 04:22:30 | 1 | 11,227 | Tobias Hermann |
76,864,568 | 2,966,197 | Issue in using Langchain and Opensearch to query multiple indices at once - Python | <p>I have this <code>Opensearch</code> <code>Vector DB</code> and I maintain multiple <code>indices</code> that start with "index-" (for example <code>index-pdf</code>, <code>index-html</code>). I have indexed sets of documents to each of the indices using <code>Langchain's</code> <code>OpenSearchVectorSearch.from_documents()</code> function.</p>
<p>Now, I want to run some queries which I want them to be run across multiple <code>indices</code>. An example would be "What is the title of each document?". When I execute the below code, it either just outputs answer from the first or last matching <code>index</code>, or says it cannot find the answer. Here is my current code:</p>
<pre><code>from langchain.vectorstores import OpenSearchVectorSearch
from langchain.chains import RetrievalQA, ConversationalRetrievalChain
import os
from langchain.embeddings import OpenAIEmbeddings
from langchain.chat_models import ChatOpenAI
embeddings = OpenAIEmbeddings()
def get_llm(model):
llm = ChatOpenAI(model_name=model.lower(), verbose=False, temperature=0)
return llm
docsearch = OpenSearchVectorSearch(opensearch_url="http://localhost:9200",
index_name="index-*",
embedding_function=embeddings)
chain = ConversationalRetrievalChain.from_llm(
llm=get_llm("gpt-3.5-turbo"),
retriever=docsearch.as_retriever(),
)
result = chain({'question': 'What is the title of each document?', "chat_history": []})
response = result['answer']
print(response)
</code></pre>
<p>How can I effectively run a <code>Langchain</code> based query on multiple indices (or all indices) and get a response back? Sample questions could be listing all document names/titles.</p>
| <python><openai-api><opensearch><langchain><amazon-opensearch> | 2023-08-09 04:22:27 | 1 | 3,003 | user2966197 |
76,864,514 | 5,454 | Template hook is not being triggered in custom Nikola theme | <p>I am trying to write a custom theme for Nikola, a static site generator. I have been mostly following <a href="https://getnikola.com/creating-a-theme.html" rel="nofollow noreferrer">their tutorial</a> but have run into a problem. I want to show certain HTML specifically on pages (not on posts) in the spot where the <code>page_header</code> template hook appears. So I first ran this command:</p>
<pre><code>nikola theme -c page.tmpl
</code></pre>
<p>That copies the base theme's page.tmpl file into my custom theme's directory. That file is actually only 1 line long, simply this:</p>
<pre><code><%inherit file="story.tmpl"/>
</code></pre>
<p>So I added a definition for <code><%block name="page_header"> ... </%block></code>. However, the code I defined inside that block does not show up on my pages. Just to make sure I wasn't going crazy, I also defined <code><%block name="extra_head"> ... </%block></code> within that same page.tmpl file, and in that case the code contained therein <em>did</em> show up.</p>
<p>Why would the overridden <code>extra_head</code> block work while the overridden <code>page_header</code> block doesn't work within the same template file?</p>
| <python><mako><nikola> | 2023-08-09 04:09:28 | 0 | 10,170 | soapergem |
76,864,328 | 4,701,426 | Loop stopping in cmd until Enter is pressed | <p>I apologize in advance as this is going to be a vague question. I just hope someone has the same experience and can help me. Imagine a simple script that has this simple code in it:</p>
<pre><code>for i in range(100):
print(i)
</code></pre>
<p>When I run the script from the terminal (CMD), occasionally the loop gets stuck, but as soon as I press the Enter key in the terminal it resumes and the loop continues. I want to emphasize that there is nothing like a user-input line in the code that would pause the loop until a key is pressed. Also, this never happens if I run the code inside an IDE. It only happens in CMD, and possibly in any script that has a loop like this.</p>
| <python> | 2023-08-09 03:12:38 | 1 | 2,151 | Saeed |
76,864,107 | 4,629,624 | Moving from Django signals to save override: How to translate the "created" parameter of a Django post_save signal for a save method override | <p>I have made some bad decisions earlier regarding how I handle post_save events on a Django model and I am currently looking into changing my approach.</p>
<p>Let's begin by making my example. Here is my model.</p>
<pre><code>class MyModel(models.Model):
#... all my model creation and logic are here
def do_something(self):
# Actually do something, for the sake of simplifying the example, just write pass
pass
</code></pre>
<p>Now, what I am using is a receiver function like this one. It works, but for many reasons that are mine, I want to stop using signal in this case.</p>
<pre><code>@receiver(post_save, sender=MyModel)
def foo(sender, instance, created, **kwargs):
if created:
instance.do_something()
</code></pre>
<p>I imagine I could override the <code>MyModel.save</code> method, something like this:</p>
<pre><code>class MyModel(models.Model):
#...
def save(self):
super().save()
if created: # It is this line that I need to figure how to do.
self.do_something()
</code></pre>
<p>By what should I replace the <code>if created:</code> of my receiver function if I want to override the <code>save()</code> method? Or do you have something else to recommend?</p>
<p>I would also be curious if it is the same thing for <code>pre_save</code> signals.</p>
| <python><django><django-models><django-signals> | 2023-08-09 01:54:03 | 1 | 1,068 | V. Brunelle |
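Django tracks this itself: `Model._state.adding` is `True` until the instance has been saved to the database once, which matches the `created` flag that `post_save` receives. A sketch of the override (capture the flag *before* calling `super().save()`, since saving flips it):

```python
class MyModel(models.Model):
    # ... fields ...

    def save(self, *args, **kwargs):
        created = self._state.adding      # True only before the first INSERT
        super().save(*args, **kwargs)     # after this, _state.adding is False
        if created:
            self.do_something()           # post_save's created=True equivalent

    def do_something(self):
        pass
```

For the `pre_save` case, run the extra code before `super().save()` instead; `created` reads the same way there. (`self.pk is None` is a common alternative test, but it misreports for models whose primary key is set before saving, e.g. UUID defaults.)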
76,864,083 | 1,056,563 | Pattern to work around the static value of a python default argument | <p>Python has this <em>stunning</em> characteristic that default parameters are evaluated once - at the time the function/method is defined:</p>
<pre><code>from datetime import datetime
import time
def some_func(ts: datetime = datetime.now()):
print("What time is it 'now'?",ts)
some_func()
time.sleep(5) # sleeps 5 seconds
some_func()
</code></pre>
<p>Prints:</p>
<pre><code>What time is it 'now'? 2023-08-08 18:43:43.697853
What time is it 'now'? 2023-08-08 18:43:43.697853
</code></pre>
<p>Yes, this behavior is documented. It is also beyond my prior experience, having worked in dozens of languages since the early eighties. More recent languages such as Scala (and Java before that) certainly do not take that angle on it. Is there any pattern that can achieve the same result as evaluating default values <em>at invocation time</em>?</p>
| <python> | 2023-08-09 01:48:25 | 2 | 63,891 | WestCoastProjects |
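The standard workaround is a sentinel default, usually `None`, with the real default computed in the function body so that it is evaluated at each invocation. A sketch:

```python
from datetime import datetime
import time

def some_func(ts=None):
    if ts is None:              # runs at *call* time, not definition time
        ts = datetime.now()
    return ts

t1 = some_func()
time.sleep(0.01)
t2 = some_func()
print(t2 > t1)  # True: each call gets a fresh "now"
```

When `None` is itself a meaningful argument value, a private module-level sentinel object (`_MISSING = object()`) plays the same role.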