QuestionId int64 74.8M 79.8M | UserId int64 56 29.4M | QuestionTitle stringlengths 15 150 | QuestionBody stringlengths 40 40.3k | Tags stringlengths 8 101 | CreationDate stringdate 2022-12-10 09:42:47 2025-11-01 19:08:18 | AnswerCount int64 0 44 | UserExpertiseLevel int64 301 888k | UserDisplayName stringlengths 3 30 ⌀ |
|---|---|---|---|---|---|---|---|---|
76,914,522 | 10,296,584 | Pandas create buckets depending on the values of a certain column | <pre><code> Sales
0 10
1 150
2 1800
3 25000
4 3000000
</code></pre>
<p>Consider the pandas DataFrame above. I need a function that adds a new column called <code>bubble_size</code> whose values fall between a minimum and a maximum with a step size of 4.
For example, if I pass a minimum bubble size of 12, a maximum bubble size of 32, and a step size of 4, I should get the output below:</p>
<pre><code> Sales bubble_size
0 10 12
1 150 20
2 1800 24
3 25000 28
4 3000000 32
</code></pre>
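<p>A minimal sketch of one possible implementation, assuming log-scale binning (the function name and the interpolation choice are mine, and the result does not exactly reproduce the sample output above):</p>

```python
import numpy as np
import pandas as pd

def add_bubble_size(df, col="Sales", min_size=12, max_size=32, step=4):
    # Map widely spread values onto a log scale so each order of
    # magnitude lands in a different bucket.
    logs = np.log10(df[col].astype(float))
    # Linearly rescale the log values into [min_size, max_size].
    scaled = np.interp(logs, (logs.min(), logs.max()), (min_size, max_size))
    # Snap each value to the nearest allowed step.
    snapped = np.round((scaled - min_size) / step) * step + min_size
    out = df.copy()
    out["bubble_size"] = snapped.astype(int)
    return out

df = pd.DataFrame({"Sales": [10, 150, 1800, 25000, 3000000]})
print(add_bubble_size(df))
```

<p>Snapping to a quantile rank instead of a linear rescale would give a different bucket assignment; which one is right depends on the intended binning rule.</p>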
| <python><pandas><dataframe><math> | 2023-08-16 14:23:31 | 1 | 597 | Atharva Katre |
76,913,936 | 1,394,228 | Multiple Datastore writes efficiently to ensure consistency and keep a good user experience | <p>We have an application on GAE & Datastore, written in Python, which allows <em>admin</em> users to divide a large sum (<em>interest</em>) among multiple <em>accounts</em>, each of which has its share allocated and saved in the <em>share</em> entity.</p>
<p>It has been noticed that write operations for admins with accounts in the order of thousands are quite slow. We are talking 40+ seconds to write and return to the user, which is both inconvenient and alarmingly close to the GAE 60-second limit for responding to a user request.</p>
<p>The entity models are structured in ancestor based structure, <em>admin account</em> is parent to <em>account</em>, and <em>share</em> has a key property referring to the <em>account</em> they belong to and another key property for the <em>interest</em> they belong to as well.</p>
<p><em><strong>Entities:</strong></em></p>
<pre><code>admin
|__account
interest
share
</code></pre>
<p>All writes happen inside a cross-group transaction (a function decorated with <code>@ndb.transactional(xg=True)</code>) to ensure data consistency.</p>
<p>We have tried:</p>
<p><strong>1. Consecutive <code>put_multi()</code> calls</strong></p>
<p>We used consecutive <code>put_multi()</code> calls to save each list of entities:</p>
<pre><code>interest.put()
put_multi(accounts_list)
put_multi(shares_list)
...
</code></pre>
<p>At first glance this seemed like a good place to optimize, since each call must finish before the next <code>put_multi()</code> call starts.</p>
<p><strong>2. Single <code>put_multi()</code> call</strong></p>
<p>We then tried combining all the lists and single data entities into one list and calling <code>put_multi()</code> once:</p>
<pre><code>interest.put()
all_data_list = list()
all_data_list.extend(accounts_list)
all_data_list.extend(shares_list)
...
put_multi(all_data_list)
</code></pre>
<p>Combining all the lists into a single <code>put_multi()</code> call had no noticeable impact on write time.</p>
<p><strong>3. Used <code>put_multi_async()</code> for asynchronous write calls</strong></p>
<p>We then moved to <code>put_multi_async()</code> and called each entity list on its own since each call is now asynchronous:</p>
<pre><code>interest.put()
put_multi_async(accounts_list)
put_multi_async(shares_list)
...
</code></pre>
<p>Still, performance was not impacted and writes remained slow.</p>
<p><strong>4. Call <code>ndb.put_multi()</code> and <code>ndb.put_multi_async()</code> in batches</strong></p>
<p>We tried dividing the data lists into variable batch sizes (10, 15, 100, 300, etc.) before calling <code>ndb.put_multi/_async()</code>, although the docs already mention that <code>put_multi_async()</code> handles batching automatically:</p>
<pre><code>batchsize = 100
for i in range(0, len(accounts_list), batchsize):
    ndb.put_multi(accounts_list[i:i+batchsize])
    ndb.put_multi(shares_list[i:i+batchsize])
</code></pre>
<p>Still no time seems to have been saved, at least not enough to be noticeable.</p>
<p><strong>5. <code>tasklet</code> and batches of <code>tasklet</code> calls</strong></p>
<p>We created a tasklet that just saves the data and tried calling it in batches as well, sending the data to it in chunks to maximize concurrent write calls. We also tried different batch sizes. Still no progress in terms of write time and returning to the user.</p>
<pre><code>@ndb.tasklet
def save_entities_tasklet(self, entities):
    yield ndb.put_multi_async(entities, use_memcache=False)
</code></pre>
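<p>For reference, the "issue all the RPCs first, then wait once" idea behind <code>put_multi_async()</code> can be sketched generically. This uses a <code>ThreadPoolExecutor</code> as a stand-in for ndb futures and is not actual Datastore code:</p>

```python
import time
from concurrent.futures import ThreadPoolExecutor, wait

def put_batch(batch):
    # Stand-in for one put_multi() RPC taking ~50 ms.
    time.sleep(0.05)
    return len(batch)

batches = [list(range(10)) for _ in range(8)]

# Sequential: total is roughly 8 x 50 ms.
t0 = time.perf_counter()
for b in batches:
    put_batch(b)
seq_time = time.perf_counter() - t0

# Concurrent: submit everything, then block once on all futures.
t0 = time.perf_counter()
with ThreadPoolExecutor(max_workers=8) as ex:
    futures = [ex.submit(put_batch, b) for b in batches]
    wait(futures)
par_time = time.perf_counter() - t0
```

<p>With ndb the analogous pattern would be collecting the futures returned by <code>put_multi_async()</code> and waiting on them once at the end; note that inside a single transaction the commit itself can still dominate the latency.</p>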
<p>We thought of passing all of this to a deferred function to run in the background, but that is problematic on two counts. It is cumbersome, since it threw a size error <code>Exception: The value of property "data" is longer than 1048487 bytes.</code> (<a href="https://cloud.google.com/tasks/docs/quotas" rel="nofollow noreferrer">probably the 1 MB limit on the payload a task queue task can carry</a>). It is also inconvenient for the user, who would not see the results right away and would need to refresh to eventually see them after the background process finishes.</p>
<p>Increasing the instance class to F4 had little impact as well.</p>
<p>All tests were performed locally and on App Engine.</p>
<p>We unfortunately cannot change the data model structure at this point. Our ultimate goal is to minimize the latency to an acceptable user-experience level that does not stretch into tens of seconds of waiting.</p>
<p>We checked the logs and trace and can confirm it takes that long to save/load pages with a large number of <em>accounts</em> (thousands). Below is an example stack trace of a save request that took ~38 seconds to complete.</p>
<p><a href="https://i.sstatic.net/KWV3I.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/KWV3I.png" alt="Trace snapshot" /></a></p>
<p>Full trace example:
<a href="https://i.sstatic.net/JDnbp.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/JDnbp.jpg" alt="Full trace snapshot" /></a></p>
<p>Appreciate your help</p>
| <python><google-app-engine><google-cloud-datastore><latency> | 2023-08-16 13:13:21 | 1 | 927 | Khaled |
76,913,935 | 22,399,487 | selenium.common.exceptions.SessionNotCreatedException: This version of ChromeDriver only supports Chrome version 114. LATEST_RELEASE_115 doesn't exist | <pre><code>#Once the zip has finished downloading, extract the folder and copy the path of the chromedriver exe file (should be the #first one), add it to your code like this,
from selenium import webdriver
from selenium.webdriver.chrome.service import Service
url = "somewebsite.com"
service_obj = Service(r"D:\Users\eggman\Downloads\chromedriver-win64\chromedriver-win64\chromedriver.exe")  # raw string avoids fragile backslash escapes
driver = webdriver.Chrome(service=service_obj)
driver.get(url)
</code></pre>
<p>Returns the error:</p>
<blockquote>
<p>selenium.common.exceptions.SessionNotCreatedException: This version of ChromeDriver only supports Chrome version 114. LATEST_RELEASE_115 doesn't exist</p>
</blockquote>
<p>I'm guessing to avoid this in the future I can just turn off automatic updates?</p>
<p>I originally used the following code which worked fine</p>
<pre><code>driver = webdriver.Chrome(service=Service(ChromeDriverManager().install()), options=chrome_options)
</code></pre>
| <python><google-chrome><selenium-webdriver><selenium-chromedriver><chromium> | 2023-08-16 13:13:14 | 14 | 463 | eggman |
76,913,868 | 20,830,264 | Python library to add editable text box in a PDF file | <p>I need to add the following text: "Hello World!" in a PDF file using Python and then be able to edit it from the PDF reader.
Basically I need a Python script that opens a PDF file and then adds the editable text. Then I should also be able to edit the sentence in Adobe Reader (or a generic pdf reader) from "Hello World!" to "Hello Jim!"</p>
<p>I have tried many libraries, like <code>pikepdf</code>, <code>reportlab</code>, <code>pypdf</code>, <code>PyPDF2</code>, <code>pdfrw</code>, etc., but I have not found what I need.</p>
<p>For example with this code I'm able to add text in the PDF file:</p>
<pre><code>from reportlab.pdfgen import canvas
pdf = canvas.Canvas("s_pii_pdf_fac_simile_Copy.pdf")
pdf.drawString(x = 300, y = 400, text = "Hello World!", mode = None, charSpace = 2, direction = None, wordSpace = None)
</code></pre>
<p>But, once I open the PDF file from Adobe Reader, I can't edit the text.</p>
<p>Another example is this:</p>
<pre><code>from reportlab.pdfgen import canvas
from reportlab.lib.units import cm
from reportlab.lib import colors
from reportlab.lib.pagesizes import A4
from PyPDF2 import PdfFileReader, PdfFileWriter
text = "input.pdf"
pdf = canvas.Canvas(text)
pdf.drawCentredString(100, 0, "blablabla")
x = pdf.acroForm
x.textfield(fillColor = colors.yellow, borderColor = colors.black, textColor = colors.red, borderWidth = 2, borderStyle = 'solid', width = 500, height = 50, x = 50, y = 40, tooltip = None, name = None, fontSize = 20)
</code></pre>
<p>Here I'm able to add an editable box when I open the PDF file with Adobe Reader, but the box is empty and I need to manually type text into it. Instead, I would need to prepopulate the box with the "Hello World!" string and still be able to edit the text inside the box from Adobe Reader:</p>
<p><a href="https://i.sstatic.net/Y2wUH.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Y2wUH.png" alt="enter image description here" /></a></p>
<p>Do you have any ideas on how I can solve this?</p>
| <python><pdf> | 2023-08-16 13:07:01 | 3 | 315 | Gregory |
76,913,830 | 2,776,885 | Python GUI - Passing variables between classes on button press | <p>I have a GUI containing:</p>
<ul>
<li>A <code>MainWindow</code> with a <code>QLineEdit</code>.</li>
<li>A <code>CustomWindow</code> that is initiated by a button press from <code>MainWindow</code>.</li>
</ul>
<p>In my application I want to pass a dataframe generated in the <code>CustomWindow</code> class back to <code>MainWindow</code>. I do this by re-initiating the <code>MainWindow</code> class with the dataframe as argument within <code>CustomWindow</code>.</p>
<p>By default, <code>MainWindow</code> is initiated with a dataframe equal to <code>None</code>; in this case the <code>QLineEdit</code> should be disabled. When <code>MainWindow</code> is re-initialized with a dataframe, the commands to enable the <code>QLineEdit</code> seem to have no effect: it remains disabled even though nothing disables it again. As the minimal working example below shows, the <code>QLineEdit</code> is correctly identified, since the text passed to it prints successfully.</p>
<p>How do I get full control back over my <code>QLineEdit</code> or how should I pass the dataframe to <code>MainWindow</code>?</p>
<pre><code>import sys
import pandas as pd
from PyQt5 import QtCore, QtGui, QtWidgets
class MainWindow(QtWidgets.QMainWindow):
    def __init__(self, dataframe=None):
        super().__init__()
        ## Set global variable
        self.dataframe = dataframe
        ## Initialize different class
        self.custom = CustomWindow()
        ## Initialise the window
        self.window()
        ## Based on variable call different init functions
        if self.dataframe is None:
            self.init_main()
        else:
            self.init_custom()
        ## Connect buttons
        self.connectButtons()
        return None

    def window(self):
        ## Set window title and size
        self.setWindowTitle("Minimal Example")
        self.setFixedSize(QtCore.QSize(400, 300))
        ## Create a window with a QLineEdit and a QPushButton
        self.line = QtWidgets.QLineEdit()
        self.button = QtWidgets.QPushButton("Custom")
        layout = QtWidgets.QVBoxLayout()
        layout.addWidget(self.line)
        layout.addWidget(self.button)
        w = QtWidgets.QWidget()
        w.setLayout(layout)
        self.setCentralWidget(w)
        return None

    def init_main(self):
        ## On start-up when dataframe is None, disable the QLineEdit
        self.line.setEnabled(False)
        return None

    def init_custom(self):
        ## On start-up when dataframe is not None, enable the QLineEdit and set text
        self.line.setEnabled(True)
        self.line.setText(self.dataframe["Column_One"][0])
        print(self.line.text())
        return None

    def connectButtons(self):
        ## Connect custom window to button press
        self.button.clicked.connect(self.custom.show)
        return None

class CustomWindow(QtWidgets.QMainWindow):
    def __init__(self):
        super().__init__()
        ## Create a window with a QLineEdit and a QPushButton
        self.line = QtWidgets.QLineEdit()
        self.button = QtWidgets.QPushButton("Button")
        layout = QtWidgets.QVBoxLayout()
        layout.addWidget(self.line)
        layout.addWidget(self.button)
        w = QtWidgets.QWidget()
        w.setLayout(layout)
        self.setCentralWidget(w)
        self.connectButtons()
        return None

    def create_df(self):
        ## Create dataframe based on QLineEdit input
        value = str(self.line.text()).strip()
        if len(value) > 0:
            df = pd.DataFrame([value], columns=["Column_One"])
            ## Call MainWindow and pass dataframe
            MainWindow(df)
            ## Close the custom window
            self.close()
        return None

    def connectButtons(self):
        ## Connect custom window to button press
        self.button.clicked.connect(self.create_df)
        return None
app = QtWidgets.QApplication(sys.argv)
ui = MainWindow()
ui.show()
sys.exit(app.exec())
</code></pre>
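<p>A GUI-free sketch of the usual alternative: instead of constructing a second <code>MainWindow</code> (which, with no reference kept, is also garbage-collected almost immediately), hand the data back to the existing window through a callback (in PyQt this would typically be a custom signal). The class and method names below are illustrative, not from the code above:</p>

```python
class CustomWindow:
    def __init__(self, on_dataframe):
        # The parent passes in a callback that receives the result.
        self.on_dataframe = on_dataframe

    def create_df(self, value):
        # Instead of building a new MainWindow, report back to the parent.
        self.on_dataframe({"Column_One": [value]})

class MainWindow:
    def __init__(self):
        self.dataframe = None
        self.custom = CustomWindow(on_dataframe=self.receive_dataframe)

    def receive_dataframe(self, dataframe):
        # Update the existing window in place; this is where the
        # QLineEdit would be enabled and filled.
        self.dataframe = dataframe

main = MainWindow()
main.custom.create_df("hello")
print(main.dataframe)
```

<p>In PyQt the callback would be a <code>pyqtSignal</code> on <code>CustomWindow</code> connected to a slot on the one and only <code>MainWindow</code>, so no second window is ever created.</p>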
| <python><pyqt5> | 2023-08-16 13:02:46 | 0 | 4,040 | The Dude |
76,913,746 | 2,492,332 | How can I run pjsua in a worker thread? | <p>I cross-compiled the simple_pjsua program from the pjsip samples to run on the Lichee Zero board, and it works without any problems:</p>
<pre><code>#include <stdio.h>
#include <pjsua-lib/pjsua.h>
#include <pjlib.h>
#include <pjmedia.h>
#include <pjmedia-codec.h>
#define THIS_FILE "APP"
#define SIP_DOMAIN "192.168.111.52"
#define SIP_USER "1018"
#define SIP_PASSWD "pass1018"
/* Callback called by the library upon receiving incoming call */
static void on_incoming_call(pjsua_acc_id acc_id, pjsua_call_id call_id,
pjsip_rx_data *rdata)
{
pjsua_call_info ci;
PJ_UNUSED_ARG(acc_id);
PJ_UNUSED_ARG(rdata);
pjsua_call_get_info(call_id, &ci);
PJ_LOG(3,(THIS_FILE, "Incoming call from %.*s!!",
(int)ci.remote_info.slen,
ci.remote_info.ptr));
/* Automatically answer incoming calls with 200/OK */
pjsua_call_answer(call_id, 200, NULL, NULL);
}
/* Callback called by the library when call's state has changed */
static void on_call_state(pjsua_call_id call_id, pjsip_event *e)
{
pjsua_call_info ci;
pj_status_t status;
PJ_UNUSED_ARG(e);
pjsua_call_get_info(call_id, &ci);
PJ_LOG(3,(THIS_FILE, "Call %d state=%.*s", call_id,
(int)ci.state_text.slen,
ci.state_text.ptr));
}
/* Callback called by the library when call's media state has changed */
static void on_call_media_state(pjsua_call_id call_id)
{
pjsua_call_info ci;
pjsua_call_get_info(call_id, &ci);
if (ci.media_status == PJSUA_CALL_MEDIA_ACTIVE) {
// When media is active, connect call to sound device.
pjsua_conf_connect(ci.conf_slot, 0);
pjsua_conf_connect(0, ci.conf_slot);
}
}
/* Display error and exit application */
void error_exit(const char *title, pj_status_t status)
{
pjsua_perror(THIS_FILE, title, status);
pjsua_destroy();
exit(1);
}
/*
* main()
*
* argv[1] may contain URL to call.
*/
int main()
{
pjsua_acc_id acc_id;
pj_status_t status;
/* Create pjsua first! */
status = pjsua_create();
if (status != PJ_SUCCESS) error_exit("Error in pjsua_create()", status);
/* Init pjsua */
{
pjsua_config cfg;
pjsua_logging_config log_cfg;
pjsua_config_default(&cfg);
cfg.cb.on_incoming_call = &on_incoming_call;
cfg.cb.on_call_media_state = &on_call_media_state;
cfg.cb.on_call_state = &on_call_state;
pjsua_logging_config_default(&log_cfg);
status = pjsua_init(&cfg, &log_cfg, NULL);
if (status != PJ_SUCCESS) error_exit("Error in pjsua_init()", status);
}
/* Add UDP transport. */
{
pjsua_transport_config cfg;
pjsua_transport_config_default(&cfg);
cfg.port = 5060;
status = pjsua_transport_create(PJSIP_TRANSPORT_UDP, &cfg, NULL);
if (status != PJ_SUCCESS) error_exit("Error creating transport", status);
}
/* Initialization is done, now start pjsua */
status = pjsua_start();
if (status != PJ_SUCCESS) error_exit("Error starting pjsua", status);
/* Register to SIP server by creating SIP account. */
{
pjsua_acc_config cfg;
pjsua_acc_config_default(&cfg);
cfg.id = pj_str("sip:" SIP_USER "@" SIP_DOMAIN);
cfg.reg_uri = pj_str("sip:" SIP_DOMAIN);
cfg.cred_count = 1;
cfg.cred_info[0].realm = pj_str("*");
cfg.cred_info[0].scheme = pj_str("digest");
cfg.cred_info[0].username = pj_str(SIP_USER);
cfg.cred_info[0].data_type = PJSIP_CRED_DATA_PLAIN_PASSWD;
cfg.cred_info[0].data = pj_str(SIP_PASSWD);
status = pjsua_acc_add(&cfg, PJ_TRUE, &acc_id);
if (status != PJ_SUCCESS) error_exit("Error adding account", status);
}
/* Wait until user press "q" to quit. */
for (;;) {
char option[10];
puts("Press 'h' to hangup all calls, 'q' to quit");
if (fgets(option, sizeof(option), stdin) == NULL) {
puts("EOF while reading stdin, will quit now..");
break;
}
if (option[0] == 'q')
break;
if (option[0] == 'h')
pjsua_call_hangup_all();
}
/* Destroy pjsua */
pjsua_destroy();
return 0;
}
</code></pre>
<p>I converted it to a library using the Makefile below:</p>
<pre><code>PJDIR = ~/pjsip/pjproject-2.13
include $(PJDIR)/build.mak
simple_pjsua: simple_pjsua.o
	$(PJ_CC) -shared -Wl,-soname,libsimple_pjsua.so -o libsimple_pjsua.so $< $(PJ_LDFLAGS) $(PJ_LDLIBS)
simple_pjsua.o: simple_pjsua.c
$(PJ_CC) -c -fPIC -o $@ $< $(PJ_CFLAGS)
clean:
rm -f libsimple_pjsua.so simple_pjsua.o
</code></pre>
<p>Then I converted the simple_pjsua program to Python with ctypes:</p>
<p>simple_pjsua.py:</p>
<pre><code>import ctypes
import threading
import asyncio

# Load the shared library
lib = ctypes.CDLL('./libsimple_pjsua.so')

# Callback function types
CALLBACK_FUNC_TYPE = ctypes.CFUNCTYPE(None, ctypes.c_int, ctypes.c_void_p)
MEDIA_CALLBACK_FUNC_TYPE = ctypes.CFUNCTYPE(None, ctypes.c_int)

# Callback functions
def on_incoming_call(acc_id, call_id, rdata):
    print(f"Incoming call: Account ID={acc_id}, Call ID={call_id}")

def on_call_state(call_id, e):
    print(f"Call {call_id} state changed")

def on_call_media_state(call_id):
    print(f"Call {call_id} media state changed")

# Convert Python callbacks to C function pointers
incoming_call_cb = CALLBACK_FUNC_TYPE(on_incoming_call)
call_state_cb = CALLBACK_FUNC_TYPE(on_call_state)
call_media_state_cb = MEDIA_CALLBACK_FUNC_TYPE(on_call_media_state)

# Define C structures
class pjsip_rx_data(ctypes.Structure):
    pass

class pjsua_call_info(ctypes.Structure):
    pass

# Set the callback functions in C
lib.on_incoming_call = incoming_call_cb
lib.on_call_state = call_state_cb
lib.on_call_media_state = call_media_state_cb

# Initialize pjsua
lib.pjsua_create.restype = ctypes.c_int
status = lib.pjsua_create()
if status != 0:
    print("Error in pjsua_create()")
    exit(1)

# I use this command when I want to run simple_pjsua.py alone:
#lib.main()

# I use this command when I want to run the program in a separate thread
# in the main project; I will explain further below.
def run_pjsua():
    lib.main()
</code></pre>
<p>I run the above program with Python and it works without any problems.
My problem starts when I have a PyQt5 program and want to run simple_pjsua.py in a thread inside it:</p>
<pre><code>import sys
import os
import json
import threading
import asyncio
from time import sleep
from buttons import timer_callback
from udpserverasync import start_udp_listener
from PyQt5 import QtWidgets, uic
from PyQt5.QtCore import QTimer
from PyQt5.QtGui import QFontDatabase, QFont
# from simple_pjsua import pjsua_thread
import simple_pjsua
class Ui(QtWidgets.QMainWindow):
    def __init__(self):
        super(Ui, self).__init__()
        uic.loadUi('/root/maindesign.ui', self)
        self.timer = QTimer(self)
        self.timer.timeout.connect(self.animate)
        self.timer.start(3000)
        self.show()

    def start(self):
        # start the UDP listener in a separate thread
        udp_listener_thread = threading.Thread(target=start_udp_listener, daemon=True)
        udp_listener_thread.start()
        # PJSUA thread
        pjsua_listener_thread = threading.Thread(target=simple_pjsua.run_pjsua, daemon=True)
        pjsua_listener_thread.start()
        .....

    def animate(self):
        .....
        ....
        .....

if __name__ == '__main__':
    app = QtWidgets.QApplication(sys.argv)
    fontdir = os.environ.get('QT_QPA_FONTDIR')
    QFontDatabase.addApplicationFont(f'{fontdir}/TAHOMA_0.TTF')
    window = Ui()
    window.start()
    window.show()
    sys.exit(app.exec_())
</code></pre>
<p>Running the program gives the following error:</p>
<blockquote>
<p>python3: ../src/pj/os_core_unix.c:692: pj_thread_this: Assertion
`!"Calling pjlib from unknown/external thread. You must " "register
external threads with pj_thread_register() " "before calling any pjlib
functions."' failed.</p>
</blockquote>
<p>My question is: how can I run <code>simple_pjsua.py</code> in a worker thread?</p>
| <python><multithreading><pjsip> | 2023-08-16 12:52:09 | 1 | 412 | amirhossein |
76,913,677 | 6,068,294 | Python rounding error when sampling variable Y as a function of X with histogram? | <p>I'm trying to sample a variable (SST) as a function of another variable (TCWV) using <code>np.histogram</code>, with the weights set to the sampled variable, like this:</p>
<pre><code># average sst over bins
num, _ = np.histogram(tcwv, bins=bins)
sstsum, _ = np.histogram(tcwv, bins=bins,weights=sst)
out=np.zeros_like(sstsum)
out[:]=np.nan
sstav = np.divide(sstsum,num,out=out, where=num>100)
</code></pre>
<p>The whole code, for reproducibility, is given below. My problem is that when I make a scatter plot of the raw data and then plot my calculated averages, the averages lie way outside the data "cloud", like this (see the points on the right):</p>
<p><a href="https://i.sstatic.net/GcsbM.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/GcsbM.png" alt="enter image description here" /></a></p>
<p>I can't think why this is happening, unless it is a rounding error perhaps?</p>
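<p>The binned-average technique itself is numerically sound. A self-contained check on synthetic data (not the netCDF file) recovers the underlying relation, which suggests looking at the input data (for example, masked fill values from <code>netCDF4</code>) rather than at rounding:</p>

```python
import numpy as np

# Synthetic data with a known linear relation plus noise
rng = np.random.default_rng(0)
x = rng.uniform(40, 80, 10000)
y = 300.0 + 0.05 * x + rng.normal(0.0, 0.1, 10000)

# Same binned-average recipe as in the question
bins = np.linspace(40, 80, 21)
num, _ = np.histogram(x, bins=bins)
ysum, _ = np.histogram(x, bins=bins, weights=y)

out = np.full_like(ysum, np.nan)
yav = np.divide(ysum, num, out=out, where=num > 100)

centroids = (bins[1:] + bins[:-1]) / 2
# The bin means track 300 + 0.05 * centroid closely.
```

<p>If the flattened arrays still carry masked points, their fill values (often around 1e36) enter the weighted sums and can push the bin averages far outside the cloud; filling or dropping masked points before binning would rule that out.</p>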
<p>This is my whole code:</p>
<pre><code>import numpy as np
import matplotlib.pyplot as plt
from netCDF4 import Dataset
# if you have a recent netcdf libraries you can access it directly here
url = ('http://clima-dods.ictp.it/Users/tompkins/CRM/data/WRF_1min_mem3_grid4.nc#mode=bytes')
ds=Dataset(url)
### otherwise need to download, and use this:
###ifile="WRF_1min_mem3_grid4.nc"
###ds=Dataset(idir+ifile)
# axis bins
bins=np.linspace(40,80,21)
iran1,iran2=40,60
# can put in dict and loop
sst=ds.variables["sst"][iran1:iran2+1,:,:]
tcwv=ds.variables["tcwv"][iran1:iran2+1,:,:]
# don't need to flatten, just tried it to see if helps (it doesn't)
sst=sst.flatten()
tcwv=tcwv.flatten()
# average sst over bins
num, _ = np.histogram(tcwv, bins=bins)
sstsum, _ = np.histogram(tcwv, bins=bins,weights=sst)
out=np.zeros_like(sstsum)
out[:]=np.nan
sstav = np.divide(sstsum,num,out=out,where=num>100)
# bins centroids
avbins=(np.array(bins[1:])+np.array(bins[:-1]))/2
#plot
subsam=2
fig,(ax)=plt.subplots()
plt.scatter(tcwv.flatten()[::subsam],sst.flatten()[::subsam],s=0.05,marker=".")
plt.scatter(avbins,sstav,s=3,color="red")
plt.ylim(299,303)
plt.savefig("scatter.png")
</code></pre>
| <python><numpy><histogram><floating-accuracy> | 2023-08-16 12:45:44 | 1 | 8,176 | ClimateUnboxed |
76,913,631 | 1,379,826 | Dash input from callback and string | <p>I would like to get a table returned based on both a dash <code>Input</code> and some other 'string' value. The idea is to re-use the same callback at multiple places, based on the same <code>Input</code> but differing the <code>string</code> value. Right now, my callback is something like this:</p>
<pre><code>@app.callback(
    [
        Output('table', 'data'),
        Output('table', 'columns')],
    [Input('dropdown', 'value')]
)
def update_output(selected_type):
    # (code continues)

## referred in app as:
app = dash.Dash(__name__)

app.layout = html.Div([
    html.H1("Some dynamic table"),
    dcc.Dropdown(
        id='dropdown',
        options=['X', 'Y'],
        value='X'
    ),
    dash_table.DataTable(id='table')  # arguments to update_output() based only on Input dropdown
])
</code></pre>
<p>I would like to have it adapted such that the <code>dropdown</code> input remains, but I can specify a string on the arguments of the function. Something like this:</p>
<pre><code>@app.callback(
    [
        Output('table', 'data'),
        Output('table', 'columns')],
    [Input('dropdown', 'value')]
)
def update_output(selected_type, some_string='dog'):
    # (code continues)

## referred in app as:
app = dash.Dash(__name__)

app.layout = html.Div([
    html.H1("Some dynamic table"),
    dcc.Dropdown(
        id='dropdown',
        options=['X', 'Y'],
        value='X'
    ),
    '''This is table with dog:'''
    dash_table.DataTable(id='table', 'dog')  # arguments to update_output() based on Input dropdown, and I specify the string "dog" here myself
    '''This is table with cat:'''
    dash_table.DataTable(id='table', 'cat')  # arguments to update_output() based on Input dropdown, and I specify the string "cat" here myself
])
</code></pre>
<p>Perhaps I'm making this more complex than it needs to be? The end goal is that a table should be returned based on both the user's <code>Input</code> and my specification. Of course, an easy workaround is to have a separate callback for each of my own arguments (e.g. <code>update_output_cat()</code> and <code>update_output_dog()</code>), but this is not really a good solution.</p>
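<p>The usual Python pattern for this is a callback factory: a function that takes the extra string and returns a callback closed over it, registered once per table (each table also needs its own component id, since Dash ids must be unique). Sketched here without Dash, with illustrative names:</p>

```python
def make_update_output(some_string):
    # Returns a callback bound to one fixed extra argument.
    def update_output(selected_type):
        # Stand-in for building the table data/columns from both values.
        return f"{some_string} table for {selected_type}"
    return update_output

# One callback per table: same logic, different fixed string.
update_dog = make_update_output("dog")
update_cat = make_update_output("cat")

print(update_dog("X"))
print(update_cat("Y"))
```

<p>In Dash each factory product would be registered separately, e.g. <code>app.callback(Output('table-dog', 'data'), Input('dropdown', 'value'))(make_update_output('dog'))</code>, where the ids here are hypothetical.</p>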
| <python><python-3.x><plotly-dash> | 2023-08-16 12:39:22 | 1 | 1,959 | Sos |
76,913,572 | 1,915,854 | How to replace this naive code with scaled_dot_product_attention() in Pytorch? | <p>Consider a <a href="https://github.com/Thinklab-SJTU/Crossformer/blob/fd0f601319e21d8ad6e058403927a11e75ed13ef/cross_models/attn.py#L23" rel="nofollow noreferrer">code fragment from Crossformer:</a></p>
<pre><code>def forward(self, queries, keys, values):
    B, L, H, E = queries.shape
    _, S, _, D = values.shape
    scale = self.scale or 1. / sqrt(E)

    scores = torch.einsum("blhe,bshe->bhls", queries, keys)
    A = self.dropout(torch.softmax(scale * scores, dim=-1))
    V = torch.einsum("bhls,bshd->blhd", A, values)
    return V.contiguous()
</code></pre>
<p>I'm trying to accelerate it by replacing the naive calls with Flash Attention. For that, I did the following:</p>
<pre><code>def forward(self, queries, keys, values):
    # I'm not sure about the below - it's just a ChatGPT-assisted guess
    # B represents the batch size.
    # L is the sequence length for queries (or target sequence length).
    # H is the number of attention heads.
    # E is the depth (dimension) of each attention head for queries/keys.
    # S is the sequence length for keys/values (or source sequence length).
    # D is the depth (dimension) of each attention head for values.
    B, L, H, E = queries.shape
    _, S, _, D = values.shape
    y = torch.nn.functional.scaled_dot_product_attention(
        queries, keys, values, dropout_p=self.dropout_p if self.training else 0.0)
    y = y.contiguous()
    return y
</code></pre>
<p>However, with the above code, I'm getting the following error:</p>
<pre><code>RuntimeError: The size of tensor a (10) must match the size of tensor b (4) at
non-singleton dimension 1
</code></pre>
<p>The debugger shows me the following tensor sizes:</p>
<ul>
<li><code>keys</code>: (2048, 4, 16, 32)</li>
<li><code>queries</code>: (2048, 10, 16, 32)</li>
<li><code>values</code>: (2048, 4, 16, 32)</li>
</ul>
<p>What am I missing in this change?</p>
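<p>A likely suspect is tensor layout rather than the mismatched sequence lengths: <code>scaled_dot_product_attention</code> expects <code>(B, H, L, E)</code>, while these tensors are laid out as <code>(B, L, H, E)</code>, so queries, keys and values would need a <code>transpose(1, 2)</code> going in (and the output one coming back out). A NumPy sketch showing that the einsum above equals the head-first matmul:</p>

```python
import numpy as np

# Shapes mirroring the debugger output, scaled down in batch size
B, L, S, H, E = 2, 10, 4, 16, 32
rng = np.random.default_rng(0)
q = rng.normal(size=(B, L, H, E))
k = rng.normal(size=(B, S, H, E))

# Original layout: scores[b, h, l, s] = sum_e q[b, l, h, e] * k[b, s, h, e]
scores = np.einsum("blhe,bshe->bhls", q, k)

# Move heads in front of the sequence axis, as SDPA expects, then matmul.
qt = q.transpose(0, 2, 1, 3)             # (B, H, L, E)
kt = k.transpose(0, 2, 1, 3)             # (B, H, S, E)
scores2 = qt @ kt.transpose(0, 1, 3, 2)  # (B, H, L, S)
```

<p>Passing <code>(B, L, H, E)</code> straight into SDPA makes it treat dimension 1 as heads, which is consistent with the reported mismatch of 10 vs 4 at dimension 1.</p>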
| <python><deep-learning><pytorch><tensor><attention-model> | 2023-08-16 12:31:26 | 1 | 15,266 | Serge Rogatch |
76,913,496 | 1,668,622 | await Docker().containers.get() and .delete() are stuck for some containers, what might be the cause? | <p>On one of three identical machines running Ubuntu:22.04 and Python 3.10.12 I'm encountering a strange problem with <code>aiodocker</code>: <code>await client.containers.get(ident)</code> and <code>await container.delete()</code> are stuck and never return for some (not even running) containers but not for others.</p>
<p><code>await container.delete()</code> executed on a container in <code>exited</code> state will hang until it's been deleted manually using <code>docker rm <ident></code>. Deletion runs without problems or messages and the <code>await</code> terminates.</p>
<p>Running <code>await client.containers.get(ident)</code> with the same ident (now deleted) won't give me an exception as with any other random identifier but will hang also (forever now, since there is nothing to delete).</p>
<p>It looks to me as if <em>certain</em> containers can't be deleted via <code>aiodocker</code>, and some internal state set in the process then keeps even <code>containers.get(ident)</code> from terminating.</p>
<p>This behavior can only be observed on one of the three 'identical' machines. Docker version is <code>24.0.5</code> on all of them, also <code>aiodocker</code> (<code>0.21.0</code>).</p>
<p>What can I do to investigate the problem? Are there known issues of this kind for <code>aiodocker</code>?</p>
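<p>One way to make the investigation less painful is to bound the stuck awaits with a timeout, so the coroutine fails fast instead of hanging. A generic sketch, not aiodocker-specific:</p>

```python
import asyncio

async def stuck_call():
    # Stand-in for an aiodocker call that never returns.
    await asyncio.Event().wait()

async def main():
    try:
        # Cancel the call if it has not finished within the timeout.
        await asyncio.wait_for(stuck_call(), timeout=0.1)
        return "finished"
    except asyncio.TimeoutError:
        return "timed out"

print(asyncio.run(main()))
```

<p>With the hang bounded, comparing <code>docker inspect</code> output for the affected containers and the dockerd debug logs across the three machines would be a reasonable next step.</p>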
| <python><docker><async-await><python-asyncio><python-docker> | 2023-08-16 12:20:49 | 0 | 9,958 | frans |
76,913,448 | 15,915,737 | Get the EC2 id created by Terraform and use it in a Python script for the Lambda | <p>With Terraform I want to create an EC2 instance and a Lambda. The Lambda will be used to start and stop this EC2 instance. But to start the instance from the Lambda I need a script where I specify the id of this instance.
How can I extract the id of the instance I create and add it to my Python script?</p>
<p>Create the instance:</p>
<pre><code>resource "aws_instance" "app_server" {
  ami           = "ami-0ed752ea0f62749af"
  instance_type = "t2.micro"

  tags = {
    Name = "Instance_test"
  }
}
</code></pre>
<p>The lambda:</p>
<pre><code># GET AND CONVERT PYTHON FILE TO ZIP
data "archive_file" "zipit_lambda_start" {
  type        = "zip"
  source_file = "index_lambda_start.py"
  output_path = "lambda_function_start.zip"
}

# GET THE ROLE
module "role" {
  source = "../IAM_POLICY_IAM_ROLE"
}

# CREATE LAMBDA
resource "aws_lambda_function" "lambda_start_ec2" {
  filename         = "lambda_function_start.zip"
  function_name    = "StartEC2Instance"
  role             = module.role.role_lambda_test_arn
  handler          = "index.lambda_handler"
  source_code_hash = data.archive_file.zipit_lambda_start.output_base64sha256
  runtime          = "python3.9"
  timeout          = 10
}
</code></pre>
<p>The Python script deployed to the Lambda to start the EC2 instance:</p>
<pre><code>import boto3

region = 'eu-west-1'
instances = ['i-0e6**********c3']
ec2 = boto3.client('ec2', region_name=region)

def lambda_handler(event, context):
    ec2.start_instances(InstanceIds=instances)
    print('started your instances: ' + str(instances))
</code></pre>
<p><a href="https://i.sstatic.net/rklAo.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/rklAo.png" alt="enter image description here" /></a></p>
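<p>One common approach is to inject the id through the Lambda environment instead of hard-coding it: in Terraform, add an <code>environment { variables = { INSTANCE_ID = aws_instance.app_server.id } }</code> block to the <code>aws_lambda_function</code> resource, then read the variable in Python. The variable name <code>INSTANCE_ID</code> is my choice, and the fallback value below exists only for local testing:</p>

```python
import os

def get_instance_ids():
    # Terraform injects this via the Lambda environment block (assumed name).
    return [os.environ["INSTANCE_ID"]]

def lambda_handler(event, context):
    import boto3  # imported lazily; available in the Lambda runtime
    ec2 = boto3.client("ec2", region_name=os.environ.get("AWS_REGION", "eu-west-1"))
    ids = get_instance_ids()
    ec2.start_instances(InstanceIds=ids)
    return {"started": ids}

# Local smoke test with a fake environment value:
os.environ.setdefault("INSTANCE_ID", "i-0123456789abcdef0")
print(get_instance_ids())
```

<p>This way the Python file never needs to be rewritten: Terraform wires the real id into the function at apply time.</p>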
| <python><aws-lambda><terraform> | 2023-08-16 12:14:02 | 2 | 418 | user15915737 |
76,913,406 | 12,493,545 | How to allow FastAPI to handle multiple requests at the same time? | <p>For some reason FastAPI doesn't respond to any other requests while one request is being handled. I would expect FastAPI to handle multiple requests at the same time, since multiple users might be accessing the REST API concurrently.</p>
<h2>Minimal Example: Asynchronous Processes</h2>
<p>After starting the server with <code>uvicorn minimal:app --reload</code>, running the <code>request_test</code> script and executing <code>test()</code>, I get <code>{'message': 'Done'}</code> as expected. However, when I execute it again within the 20-second window of the first request, the second request is not processed until the <code>sleep_async</code> from the first call has finished.</p>
<h2>Without asynchronous Processes</h2>
<p>The same problem exists even if I don't use asynchronous calls and instead wait directly within <code>async def info</code>. That doesn't make sense to me.</p>
<h3>FastAPI: minimal</h3>
<pre class="lang-py prettyprint-override"><code>#!/usr/bin/env python3
from fastapi import FastAPI
from fastapi.responses import JSONResponse
import time
import asyncio

app = FastAPI()

@app.get("/test/info/")
async def info():
    async def sleep_async():
        time.sleep(20)
        print("Task completed!")

    asyncio.create_task(sleep_async())
    return JSONResponse(content={"message": "Done"})
</code></pre>
<h3>Test: request_test</h3>
<pre class="lang-py prettyprint-override"><code>#!/usr/bin/env python3
import requests
def test():
print("Before")
response = requests.get(f"http://localhost:8000/test/info")
print("After")
response_data = response.json()
print(response_data)
</code></pre>
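<p>For context, here is a minimal sketch (plain asyncio, no FastAPI or web server involved) of the difference between <code>time.sleep</code> and <code>await asyncio.sleep</code> inside coroutines — the blocking call monopolizes the event loop, the awaitable one yields to it:</p>

```python
import asyncio
import time


async def blocking():
    time.sleep(0.2)  # blocks the whole event loop; no other task can run


async def non_blocking():
    await asyncio.sleep(0.2)  # yields to the event loop; other tasks keep running


async def main(worker):
    # Run two "requests" concurrently and measure total wall time.
    start = time.perf_counter()
    await asyncio.gather(worker(), worker())
    return time.perf_counter() - start


serial = asyncio.run(main(blocking))         # ~0.4 s: second call waits for the first
concurrent = asyncio.run(main(non_blocking)) # ~0.2 s: both sleep at the same time
print(serial, concurrent)
```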
| <python><asynchronous><fastapi> | 2023-08-16 12:08:20 | 2 | 1,133 | Natan |
76,913,084 | 1,499,019 | Calling another member decorator from another member decorator in Python 3 | <p>I am trying to reuse a member-function decorator inside another member-function decorator, but I am getting the following error:</p>
<pre><code>'function' object has no attribute '_MyClass__check_for_valid_token'
</code></pre>
<p>Basically I have a working decorator that checks if a user is logged in (<code>@LOGIN_REQUIRED</code>) and I would like to call it first in the <code>@ADMIN_REQUIRED</code> decorator: the idea is to check that the user is logged in with the existing <code>@LOGIN_REQUIRED</code> decorator and then add some specific validation in <code>@ADMIN_REQUIRED</code> to check that the logged-in user is an administrator.</p>
<p>My current code is like this:</p>
<pre><code>class MyClass:
def LOGIN_REQUIRED(func):
@wraps(func)
def decorated_function(self, *args, **kwargs):
# username and token should be the first parameters
# throws if not logged in
self.__check_for_valid_token(args[0], args[1])
return func(self, *args, **kwargs)
return decorated_function
@LOGIN_REQUIRED
def ADMIN_REQUIRED(func):
@wraps(func)
def decorated_function(self, *args, **kwargs):
is_admin = self.check_if_admin()
if not is_admin:
raise Exception()
return func(self, *args, **kwargs)
return decorated_function
@ADMIN_REQUIRED
def get_administration_data(self, username, token):
# return important_data
# currently throws 'function' object has no attribute '_MyClass__check_for_valid_token'
</code></pre>
<p>Do you have any idea how could I get this to work?</p>
<hr />
<p>Some <strong>notes</strong> based on the comments and answers for clarification:</p>
<ol>
<li>The method <code>__check_for_valid_token</code> name can be changed to not run into name mangling issues. I was just using double underscore because it was a method supposedly only accessible by the class itself (private).</li>
<li>There is no inheritance in "MyClass".</li>
<li>The <code>@LOGIN_REQUIRED</code> code must run before the <code>@ADMIN_REQUIRED</code> code (as that is what someone expects, at least in my case).</li>
</ol>
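<p>To make the intended behavior concrete, here is a minimal sketch (simplified names, no token logic — <code>login_required</code> and <code>admin_required</code> stand in for the real decorators) of composing the two checks by calling one decorator from inside the other, instead of decorating the decorator itself:</p>

```python
from functools import wraps

calls = []  # records check order, for illustration only


def login_required(func):
    @wraps(func)
    def wrapper(self, *args, **kwargs):
        calls.append("login")  # stands in for the token check
        return func(self, *args, **kwargs)
    return wrapper


def admin_required(func):
    @wraps(func)
    def wrapper(self, *args, **kwargs):
        calls.append("admin")  # stands in for check_if_admin
        return func(self, *args, **kwargs)
    # Compose: apply the login check on top of the admin check,
    # so the login check runs first when the final method is called.
    return login_required(wrapper)


class MyClass:
    @admin_required
    def get_administration_data(self):
        return "data"


result = MyClass().get_administration_data()
print(calls)  # ['login', 'admin']
```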
| <python><python-3.x><decorator><python-decorators> | 2023-08-16 11:21:02 | 3 | 667 | RandomGuy |
76,913,082 | 4,865,723 | Disable GNU gettext function "_()" temporarily in Python | <p>I use GNU gettext with its class-based API, which installs <code>_()</code> in Python's <code>builtins</code> namespace. I want to disable the translation temporarily in a global scope.</p>
<p>I tried to "mask" <code>_()</code> by shadowing it with a local function definition, but I got this error:</p>
<pre><code>No problem.
Traceback (most recent call last):
File "/home/user/ownCloud/_transfer/./z.py", line 27, in <module>
.format(foobar(True), foobar(False)))
File "/home/user/ownCloud/_transfer/./z.py", line 19, in foobar
return _('Hello')
UnboundLocalError: local variable '_' referenced before assignment
</code></pre>
<p>This is the MWE</p>
<pre><code>#!/usr/bin/env python3
import gettext
translation = gettext.translation(
domain='iso_639',
languages=['de'],
localedir='/usr/share/locale'
)
translation.install() # installs _() in "builtins" namespace
# Keep in mind: The object "translation" is not available in the original
# productive code because it is instantiated elsewhere.
def foobar(translate):
if not translate:
# I try to mask the global _() builtins-function
def _(txt):
return txt
return _('Hello')
if __name__ == '__main__':
# To ilustrate that _() is part of "builtins" namespace
print(_('No problem.'))
print('The translated string "{}" is originally "{}".'
.format(foobar(True), foobar(False)))
</code></pre>
<p>In my original productive code I try to display a big multi line string in two flavours: Untranslated and translated. But I don't want to duplicate that string in code like this.</p>
<pre><code>original = 'Hello'
translated = _('Hello')
</code></pre>
<p>To my knowledge the GNU gettext utilities need the second line to figure out which strings are translatable and should go into the <code>po</code>/<code>pot</code> files. Because of that I can not do it like the following, since the gettext utils don't know what <code>original</code> is.</p>
<pre><code>original = 'Hello'
translated = _(original)
</code></pre>
<p>Maybe I am trying it the wrong way. Alternatively I could formulate my question as: <em>How can I display one string in its original (untranslated) form and its translated form side by side, without duplicating the string in the Python source code?</em></p>
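<p>For reference, one common gettext idiom for this (a sketch — <code>N_</code> is the conventional no-op marker name, and <code>NullTranslations</code> stands in here for the real installed translation) is to mark the literal once with a no-op function that <code>xgettext</code> can be told to extract via <code>--keyword=N_</code>, and translate only at the point of use:</p>

```python
import gettext


def N_(message):
    # Conventional no-op marker: xgettext can be configured (--keyword=N_)
    # to extract strings wrapped in it, without translating at runtime.
    return message


MSG = N_('Hello')

translation = gettext.NullTranslations()  # stands in for the real translation
_ = translation.gettext

original = MSG       # untranslated source string
translated = _(MSG)  # translated at the point of use
print(original, translated)
```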
| <python><gettext> | 2023-08-16 11:20:41 | 1 | 12,450 | buhtz |
76,912,913 | 10,771,559 | How to add a continuous color scale to a go.scatter plot in dash app | <p>I have a Dash app with a scatter plot. I am using <code>go.Scatter</code> and have set <code>marker_color</code> to a continuous variable, but I want to set the colorscale to cividis and to have a colorbar legend. How can I do this?
I tried using <code>color_continuous_scale</code> and got the error: <code>ValueError: Invalid property specified for object of type plotly.graph_objs.Scatter: 'color'</code>. In my app I am uploading data, but for the sake of reproducibility I have used the Iris dataset in this example.</p>
<pre><code>external_stylesheets = ["https://codepen.io/chriddyp/pen/bWLwgP.css"]
app = dash.Dash(__name__, external_stylesheets=external_stylesheets)
server = app.server
app.layout = html.Div(
[
dcc.Upload(
id="upload-data",
children=html.Div(["Drag and Drop or ", html.A("Select Files")]),
multiple=True,
), dcc.Graph(id="Mygraph"),
html.Div(id="output-data-upload"),
]
)
@app.callback(Output('Mygraph', 'figure'), [
Input('upload-data', 'contents'),
Input('upload-data', 'filename'),
])
def update_graph(contents, filename):
x = []
y = []
color=[]
df_two=px.data.iris()
x=df_two['sepal_length']
y=df_two['sepal_width']
color=df_two['sepal_length']
fig = go.Figure(
data=[
go.Scatter(
mode="markers",
x=x,
y=y,
marker_color=color,
# This line doesn't work: color_continuous_scale='cividis',
showlegend=True
)
],
layout=go.Layout(
width=500,
height=500,
))
fig.update_layout(
xaxis = dict(
tickmode = 'linear',
dtick = 10
),
yaxis=dict(
tickmode= 'linear',
dtick=10
)
)
return fig
</code></pre>
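<p>Worth noting: <code>color_continuous_scale</code> is a Plotly Express argument; on a <code>graph_objects</code> Scatter trace, the continuous scale and colorbar toggle live inside the <code>marker</code> itself. A sketch of the marker specification (written as a plain dict so it runs without plotly installed; the values are placeholders):</p>

```python
# Marker specification as go.Scatter would accept it: the continuous scale and
# the colorbar toggle are properties of the marker, not of the trace.
color_values = [5.1, 4.9, 6.3]  # e.g. df_two['sepal_length']

marker_spec = dict(
    color=color_values,    # one value per point -> continuous coloring
    colorscale="Cividis",  # named scale; replaces color_continuous_scale
    showscale=True,        # draws the colorbar legend
)

# Would be passed as: go.Scatter(mode="markers", x=x, y=y, marker=marker_spec)
print(sorted(marker_spec))
```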
| <python><plotly-dash> | 2023-08-16 10:59:24 | 1 | 578 | Niam45 |
76,912,822 | 2,819,689 | How to convert three columns into pandas datetime? | <p>I am reading my logs like this</p>
<pre><code>d = [line.rstrip() for line in open('ufw.txt').readlines()]
</code></pre>
<p>Once I have a list, I read it into a pandas DataFrame.</p>
<pre><code>df = pd.DataFrame([i.split(" ") for i in d])
0 1 2 3 4 5 ... 22 23 24 25 26 27
0 Aug 14 10:57:53 mjbc-IdeaPad kernel: [ ... SPT=57846 DPT=6443 WINDOW=64860 RES=0x00 SYN URGP=0
1 Aug 14 10:58:12 mjbc-IdeaPad kernel: [ ... SPT=60154 DPT=10250 WINDOW=64860 RES=0x00 SYN URGP=0
2 Aug 14 10:58:34 mjbc-IdeaPad kernel: [ ... SPT=53386 DPT=10250 WINDOW=64860 RES=0x00 SYN URGP=0
3 Aug 14 10:58:57 mjbc-IdeaPad kernel: [ ... SPT=34210 DPT=10250 WINDOW=64860 RES=0x00 SYN URGP=0
4 Aug 14 10:59:12 mjbc-IdeaPad kernel: [ ... SPT=49134 DPT=10250 WINDOW=64860 RES=0x00 SYN URGP=0
5 Aug 14 10:59:34 mjbc-IdeaPad kernel: [ ... SPT=52082 DPT=10250 WINDOW=64860 RES=0x00 SYN URGP=0
6 Aug 14 10:59:52 mjbc-IdeaPad kernel: [ ... SPT=54094 DPT=6443 WINDOW=64860 RES=0x00 SYN URGP=0
7 Aug 14 11:00:12 mjbc-IdeaPad kernel: [ ... SPT=35966 DPT=10250 WINDOW=64860 RES=0x00 SYN URGP=0
8 Aug 14 11:00:34 mjbc-IdeaPad kernel: [ ... SPT=40132 DPT=10250 WINDOW=64860 RES=0x00 SYN URGP=0
</code></pre>
<p>What I really want is to combine the first three columns into a single datetime column and leave the rest as they are.
How can I achieve that?</p>
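<p>For illustration, a stdlib-only sketch of parsing the three syslog-style fields (note that such lines carry no year, so one has to be supplied — 2023 is assumed below). With pandas, the same idea would be something like <code>pd.to_datetime(df[0] + ' ' + df[1] + ' ' + df[2], format='%b %d %H:%M:%S')</code>:</p>

```python
from datetime import datetime

# The three whitespace-separated fields of one log line.
month, day, clock = "Aug", "14", "10:57:53"

# %b/%d/%H:%M:%S match the three columns; syslog lines record no year,
# so strptime defaults to 1900 and the real year must be supplied.
ts = datetime.strptime(f"{month} {day} {clock}", "%b %d %H:%M:%S")
ts = ts.replace(year=2023)  # assumed year, since the log does not store it

print(ts.isoformat())  # 2023-08-14T10:57:53
```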
| <python><pandas> | 2023-08-16 10:46:10 | 3 | 2,874 | MikiBelavista |
76,912,588 | 7,318,120 | type hints (warnings) for Python in VS Code | <p>I enjoy using type hinting (annotation) and have used it for some time to help write clean code.</p>
<p>I appreciate that they are just hints and as such do not affect the code.</p>
<p>But today I saw a video <strong>where the linter picked up the hint</strong> with a warning (a squiggly yellow underline), which looks really helpful. My VS Code does not pick this up in the linter.</p>
<p>Here is an image of what I expect (with annotations):</p>
<p><a href="https://i.sstatic.net/mTnWQ.png" rel="noreferrer"><img src="https://i.sstatic.net/mTnWQ.png" alt="enter image description here" /></a></p>
<p>So my question is, how can I achieve this?</p>
<p>For example, is there a specific linter or setting that would do this?</p>
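<p>One likely candidate (an assumption, to be verified): these squiggles usually come from Pylance's type checking mode, which is off by default. A <code>settings.json</code> fragment that enables it:</p>

```json
{
    "python.analysis.typeCheckingMode": "basic"
}
```

<p>Setting it to <code>strict</code> instead of <code>basic</code> reportedly surfaces even more hint violations.</p>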
| <python><visual-studio-code><python-typing> | 2023-08-16 10:11:02 | 1 | 6,075 | darren |
76,912,464 | 10,181,236 | Correctly run a jupyter notebook in a local machine using interpreter on a remote machine | <p>I am using PyCharm to develop a program on a remote server.
I am able to connect to the remote machine without errors and create a virtual environment; in fact, if I run a script <code>helloworld.py</code>, the correct environment is selected.
The problem is that when I use a Jupyter notebook and run the command <code>!which python</code> to get the current Python interpreter, I get <code>/usr/bin/python</code>, which is the default one and not the virtual environment I created.</p>
<p>I checked the environment I am using from pycharm setting and as you can see from the screenshot it is correct since the path is the one for the virtual environment.
<a href="https://i.sstatic.net/q6ifg.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/q6ifg.png" alt="enter image description here" /></a></p>
<p>So what can I do to run the jupyter notebook with the right environment?</p>
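<p>Independent of the PyCharm configuration, one caveat worth checking first: <code>!which python</code> spawns a subshell whose <code>PATH</code> may differ from the kernel's, so it can report the wrong interpreter even when the kernel is correct. <code>sys.executable</code> reports the interpreter actually running the notebook kernel:</p>

```python
import sys

# The interpreter running this process (in a notebook: the kernel's Python).
# `!which python` asks a subshell instead, whose PATH may point elsewhere.
print(sys.executable)
```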
| <python><jupyter-notebook> | 2023-08-16 09:54:07 | 2 | 512 | JayJona |
76,912,452 | 2,195,440 | How can I programmatically extract type information from a Python project using Pylance? | <p>I'm working on a project where I need to analyze a Python codebase and extract type information for specific variables programmatically. I'm interested in using Pylance (or the underlying Pyright) since I've found its type inference to be quite accurate when using it with VS Code.</p>
<p>Is there a way to programmatically use Pylance or Pyright to analyze Python code and retrieve type information?</p>
<p>What I've considered so far:</p>
<p>Language Server Protocol (LSP):
Pylance operates as a language server using the Language Server Protocol (LSP). I thought about setting up an LSP client, starting the Pylance language server, and then sending LSP requests to get type information. However, I'm unsure about the specifics of this approach.</p>
<p>Using Pyright Directly:
Since Pyright is the core type checker behind Pylance, I considered using it directly as a library to analyze the code. But I'm not sure about the internal APIs and how to leverage them programmatically.</p>
<p>Alternative Tools:
I know there are other tools like mypy that might offer programmatic APIs for analyzing Python code. But I'm particularly interested in Pylance/Pyright due to their accuracy in type inference.</p>
<p>Manual Parsing:
As a last resort, I thought about manually parsing the Python code using the ast library to extract type hints. This wouldn't provide inferred types, but it might be a starting point.</p>
<p>A simple example:</p>
<pre><code>class Person:
def __init__(self, name: str, age: int):
self.name = name
self.age = age
def get_age(self) -> int:
return self.age
def add_numbers(a: int, b: int) -> int:
return a + b
def main():
# Create a Person object
person = Person("Alice", 30)
# Invoke get_age method and print the result
age = person.get_age()
print(f"{person.name}'s age is {age} years old.")
if __name__ == "__main__":
main()
</code></pre>
<p>For the line <code>age = person.get_age()</code>, I want to know that the <code>person</code> variable is of type <code>Person</code>. I have to do this programmatically.</p>
<p>Has anyone attempted something similar or have insights on how to achieve this? Any guidance or pointers would be greatly appreciated!</p>
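<p>To make the "manual parsing" fallback concrete, here is a sketch of extracting <em>declared</em> annotations with <code>ast</code> (over a shortened version of the example above). This only reads explicit hints — it cannot infer that <code>person</code> is a <code>Person</code> the way Pyright would:</p>

```python
import ast

source = '''
class Person:
    def get_age(self) -> int:
        return self.age

def add_numbers(a: int, b: int) -> int:
    return a + b
'''

tree = ast.parse(source)

# Collect declared return annotations per function; no type inference here,
# only what the author wrote down explicitly.
returns = {
    node.name: ast.unparse(node.returns)
    for node in ast.walk(tree)
    if isinstance(node, ast.FunctionDef) and node.returns is not None
}
print(returns)  # {'get_age': 'int', 'add_numbers': 'int'}
```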
| <python><pylance><pyright> | 2023-08-16 09:52:24 | 0 | 3,657 | Exploring |
76,912,416 | 12,909,598 | Can't install or create environment in conda | <p>Every time I try to install a package or create an environment with <code>conda</code>, a message like this appears:</p>
<pre><code>Collecting package metadata (repodata.json): / DEBUG:urllib3.connectionpool:Starting new HTTPS connection (1): repo.anaconda.com:443
DEBUG:urllib3.connectionpool:Starting new HTTPS connection (1): conda.anaconda.org:443
DEBUG:urllib3.connectionpool:Starting new HTTPS connection (1): repo.anaconda.com:443
DEBUG:urllib3.connectionpool:Starting new HTTPS connection (1): conda.anaconda.org:443
DEBUG:urllib3.connectionpool:Starting new HTTPS connection (1): repo.anaconda.com:443
DEBUG:urllib3.connectionpool:Starting new HTTPS connection (1): repo.anaconda.com:443
DEBUG:urllib3.connectionpool:https://repo.anaconda.com:443 "GET /pkgs/r/noarch/repodata.json HTTP/1.1" 304 0
- DEBUG:urllib3.connectionpool:https://repo.anaconda.com:443 "GET /pkgs/main/noarch/repodata.json HTTP/1.1" 304 0
DEBUG:urllib3.connectionpool:https://conda.anaconda.org:443 "GET /conda-forge/osx-64/repodata.json HTTP/1.1" 200 None
\ DEBUG:urllib3.connectionpool:https://repo.anaconda.com:443 "GET /pkgs/r/osx-64/repodata.json HTTP/1.1" 304 0
DEBUG:urllib3.connectionpool:https://repo.anaconda.com:443 "GET /pkgs/main/osx-64/repodata.json HTTP/1.1" 304 0
DEBUG:urllib3.connectionpool:https://conda.anaconda.org:443 "GET /conda-forge/noarch/repodata.json HTTP/1.1" 200 None
</code></pre>
<p>For example:</p>
<p>To create an environment called <code>env3</code>:</p>
<pre><code>conda create -n env3 -vv
</code></pre>
<p>Appears:</p>
<pre><code>DEBUG conda.gateways.logging:set_verbosity(218): verbosity set to 2
DEBUG conda.core.solve:solve_final_state(297): solving prefix /home/nayelilv/anaconda3/envs/env3
specs_to_remove: frozenset()
specs_to_add: frozenset()
prune: Null
Collecting package metadata (repodata.json): ...working... DEBUG conda.resolve:__init__(132): restricting to unmanageable packages: __archspec
DEBUG conda.resolve:__init__(132): restricting to unmanageable packages: __osx
DEBUG conda.resolve:__init__(132): restricting to unmanageable packages: __unix
done
Solving environment: ...working... DEBUG conda.resolve:get_reduced_index(673): Retrieving packages for:
DEBUG conda.core.solve:_add_specs(823): specs_map with targets: {'__archspec': MatchSpec("__archspec"), '__osx': MatchSpec("__osx"), '__unix': MatchSpec("__unix")}
DEBUG conda.resolve:get_reduced_index(673): Retrieving packages for:
- __archspec
- __osx
- __unix
DEBUG conda.resolve:__init__(132): restricting to unmanageable packages: __unix
DEBUG conda.resolve:__init__(132): restricting to unmanageable packages: __osx
DEBUG conda.resolve:__init__(132): restricting to unmanageable packages: __archspec
DEBUG conda.common._logic:_run_sat(613): Invoking SAT with clause count: 2
DEBUG conda.resolve:_get_sat_solver_cls(79): Using SAT solver interface 'pycosat'.
DEBUG conda.resolve:gen_clauses(1061): gen_clauses returning with clause count: 9
DEBUG conda.resolve:generate_spec_constraints(1069): generate_spec_constraints returning with clause count: 9
DEBUG conda.common._logic:_run_sat(613): Invoking SAT with clause count: 12
DEBUG conda.resolve:bad_installed(1282): Checking if the current environment is consistent
DEBUG conda.core.solve:_find_inconsistent_packages(646): inconsistent precs: None
DEBUG conda.resolve:__init__(132): restricting to unmanageable packages: __unix
DEBUG conda.resolve:__init__(132): restricting to unmanageable packages: __osx
DEBUG conda.resolve:__init__(132): restricting to unmanageable packages: __archspec
DEBUG conda.resolve:gen_clauses(1061): gen_clauses returning with clause count: 9
DEBUG conda.resolve:generate_spec_constraints(1069): generate_spec_constraints returning with clause count: 9
DEBUG conda.common._logic:_run_sat(613): Invoking SAT with clause count: 12
DEBUG conda.core.solve:_run_sat(1042): final specs to add:
- __archspec
- __osx
- __unix
DEBUG conda.resolve:solve(1439): Solving for:
- 0: __osx target=None optional=False
- 1: __archspec target=None optional=False
- 2: __unix target=None optional=False
DEBUG conda.resolve:solve(1445): Solve: Getting reduced index of compliant packages
DEBUG conda.resolve:solve(1477): Solve: determining satisfiability
DEBUG conda.resolve:__init__(132): restricting to unmanageable packages: __unix
DEBUG conda.resolve:__init__(132): restricting to unmanageable packages: __osx
DEBUG conda.resolve:__init__(132): restricting to unmanageable packages: __archspec
DEBUG conda.resolve:gen_clauses(1061): gen_clauses returning with clause count: 9
DEBUG conda.resolve:generate_spec_constraints(1069): generate_spec_constraints returning with clause count: 9
DEBUG conda.common._logic:_run_sat(613): Invoking SAT with clause count: 12
DEBUG conda.resolve:solve(1531): Requested specs:
- __archspec
- __osx
- __unix
DEBUG conda.resolve:solve(1532): Optional specs:
DEBUG conda.resolve:solve(1533): All other specs:
DEBUG conda.resolve:solve(1534): missing specs:
DEBUG conda.resolve:solve(1537): Solve: minimize removed packages
DEBUG conda.resolve:solve(1544): Solve: maximize versions of requested packages
DEBUG conda.common._logic:minimize(673): Empty objective, trivial solution
DEBUG conda.common._logic:minimize(673): Empty objective, trivial solution
DEBUG conda.resolve:solve(1550): Initial package channel/version metric: 0/0
DEBUG conda.resolve:solve(1553): Solve: minimize track_feature count
DEBUG conda.resolve:generate_feature_count(1081): generate_feature_count returning with clause count: 12
DEBUG conda.common._logic:minimize(673): Empty objective, trivial solution
DEBUG conda.resolve:solve(1556): Track feature count: 0
DEBUG conda.common._logic:minimize(673): Empty objective, trivial solution
DEBUG conda.resolve:solve(1567): Package misfeature count: 0
DEBUG conda.resolve:solve(1570): Solve: maximize build numbers of requested packages
DEBUG conda.common._logic:minimize(673): Empty objective, trivial solution
DEBUG conda.resolve:solve(1572): Initial package build metric: 0
DEBUG conda.resolve:solve(1575): Solve: prefer arch over noarch for requested packages
DEBUG conda.common._logic:minimize(673): Empty objective, trivial solution
DEBUG conda.resolve:solve(1577): Noarch metric: 0
DEBUG conda.resolve:solve(1581): Solve: minimize number of optional installations
DEBUG conda.common._logic:minimize(673): Empty objective, trivial solution
DEBUG conda.resolve:solve(1584): Optional package install metric: 0
DEBUG conda.resolve:solve(1587): Solve: minimize number of necessary upgrades
DEBUG conda.common._logic:minimize(673): Empty objective, trivial solution
DEBUG conda.resolve:solve(1590): Dependency update count: 0
DEBUG conda.resolve:solve(1593): Solve: maximize versions and builds of indirect dependencies. Prefer arch over noarch where equivalent.
DEBUG conda.common._logic:minimize(673): Empty objective, trivial solution
DEBUG conda.common._logic:minimize(673): Empty objective, trivial solution
DEBUG conda.common._logic:minimize(673): Empty objective, trivial solution
DEBUG conda.common._logic:minimize(673): Empty objective, trivial solution
DEBUG conda.resolve:solve(1602): Additional package channel/version/build/noarch metrics: 0/0/0/0
DEBUG conda.resolve:solve(1611): Solve: prune unnecessary packages
DEBUG conda.common._logic:minimize(673): Empty objective, trivial solution
DEBUG conda.resolve:solve(1614): Weak dependency count: 0
DEBUG conda.resolve:solve(1622): Looking for alternate solutions
done
DEBUG conda.core.solve:solve_final_state(423): solved prefix /home/nayelilv/anaconda3/envs/env3
solved_linked_dists:
DEBUG conda.core.prefix_data:_load_single_record(190): loading prefix record /home/nayelilv/anaconda3/conda-meta/jsonschema-4.17.3-py311h06a4308_0.json
DEBUG conda.core.prefix_data:_load_single_record(190): loading prefix record /home/nayelilv/anaconda3/conda-meta/libssh2-1.10.0-h37d81fd_2.json
DEBUG conda.core.prefix_data:_load_single_record(190): loading prefix record /home/nayelilv/anaconda3/conda-meta/param-1.13.0-py311h06a4308_0.json
DEBUG conda.core.prefix_data:_load_single_record(190): loading prefix record /home/nayelilv/anaconda3/conda-meta/bokeh-3.2.1-py311h92b7b1e_0.json
DEBUG conda.core.prefix_data:_load_single_record(190): loading prefix record /home/nayelilv/anaconda3/conda-meta/jupyter_console-6.6.3-py311h06a4308_0.json
DEBUG conda.core.prefix_data:_load_single_record(190): loading prefix record /home/nayelilv/anaconda3/conda-meta/pycparser-2.21-pyhd3eb1b0_0.json
...
INFO conda.core.link:__init__(210): initializing UnlinkLinkTransaction with
target_prefix: /home/nayelilv/anaconda3/envs/env3
unlink_precs:
link_precs:
DEBUG conda.core.package_cache_data:__init__(717): instantiating ProgressiveFetchExtract with
</code></pre>
<p>When installing a package with <code>conda</code>, the message appears again and the process freezes:</p>
<pre><code>conda install -c conda-forge mamba
</code></pre>
<p>Again:</p>
<pre><code>Collecting package metadata (repodata.json): ...working... DEBUG:urllib3.connectionpool:Starting new HTTPS connection (1): repo.anaconda.com:443
DEBUG:urllib3.connectionpool:Starting new HTTPS connection (1): repo.anaconda.com:443
DEBUG:urllib3.connectionpool:Starting new HTTPS connection (1): repo.anaconda.com:443
DEBUG:urllib3.connectionpool:Starting new HTTPS connection (1): repo.anaconda.com:443
DEBUG:urllib3.connectionpool:https://repo.anaconda.com:443 "GET /pkgs/r/osx-64/repodata.json HTTP/1.1" 304 0
DEBUG:urllib3.connectionpool:https://repo.anaconda.com:443 "GET /pkgs/main/osx-64/repodata.json HTTP/1.1" 304 0
DEBUG:urllib3.connectionpool:https://repo.anaconda.com:443 "GET /pkgs/r/noarch/repodata.json HTTP/1.1" 304 0
DEBUG:urllib3.connectionpool:https://repo.anaconda.com:443 "GET /pkgs/main/noarch/repodata.json HTTP/1.1" 304 0
done
Solving environment: unsuccessful initial attempt using frozen solve. Retrying with flexible solve.
Solving environment: /
Found conflicts! Looking for incompatible packages.
This can take several minutes. Press CTRL-C to abort.
failed
</code></pre>
<p>I already tried:</p>
<ul>
<li>Uninstall and reinstall Anaconda, the version of conda is <code>23.7.2</code>.</li>
<li>Several posts with similar issue like: <a href="https://stackoverflow.com/questions/63317348/why-is-my-connection-to-repo-anacoda-com-failing">this</a> and <a href="https://stackoverflow.com/questions/76285584/unable-to-install-a-package-in-conda-not-found">this</a> and <a href="https://community.anaconda.cloud/t/after-each-new-environment-i-need-to-run-conda-clean-why/57229/2" rel="nofollow noreferrer">this</a> and <a href="https://github.com/conda/conda-build/issues/4539" rel="nofollow noreferrer">this</a> no one worked for me.</li>
</ul>
<p>I'm using Ubuntu 23.04.
Thanks in advance.</p>
| <python><json><anaconda><conda> | 2023-08-16 09:47:58 | 1 | 348 | geomicrobio |
76,912,378 | 9,986,657 | Save/load Huggingface model and image processor from local disk | <p>I'm trying to save the <code>microsoft/table-transformer-structure-recognition</code> Huggingface model (and potentially its image processor) to my local disk in Python 3.10. The goal is to load the model inside a Docker container later on without having to pull the model weights and configs from HuggingFace each time the container and Python server boots up.</p>
<h3>Goal</h3>
<pre class="lang-py prettyprint-override"><code>from transformers import pipeline
# instead of this
pipe = pipeline(
task="object-detection",
model="microsoft/table-transformer-structure-recognition",
)
# i'd like this
pipe = pipeline(
task="object-detection",
model="./local_model_directory",
)
</code></pre>
<h3>Saving/Loading Model with Pipe ❌ (Failed)</h3>
<p>I tried to save the model with <code>pipe.save_pretrained("./local_model_directory")</code> and then load it in the second run with <code>pipeline("object-detection", model="./local_model_directory")</code>. This throws an error and doesn't work at all.</p>
<h4>1. Run: Saving to Local Disk ✅</h4>
<pre class="lang-py prettyprint-override"><code>pipe = pipeline(
task="object-detection",
model="microsoft/table-transformer-structure-recognition",
)
pipe.save_pretrained("./local_model_directory")
</code></pre>
<p>The following files are saved to <code>./local_model_directory</code>:</p>
<ul>
<li><code>config.json</code></li>
<li><code>pytorch_model.bin</code></li>
</ul>
<h4>2. Run: Loading from Local Disk ❌ (Fails)</h4>
<pre class="lang-py prettyprint-override"><code>pipe = pipeline(
task="object-detection",
model="./local_model_directory",
)
</code></pre>
<p>Error:
<code>OSError: ./local_model_directory does not appear to have a file named preprocessor_config.json. Checkout 'https://huggingface.co/./local_model_directory/None' for available files.</code></p>
<h3>Manually Downloading Weights and Config ❌ (Fails)</h3>
<p>I manually downloaded the missing <a href="https://huggingface.co/microsoft/table-transformer-structure-recognition/resolve/main/preprocessor_config.json" rel="nofollow noreferrer"><code>preprocessor_config.json</code></a> and <code>model.safetensors</code> from Huggingface and added them to <code>./local_model_directory</code>.</p>
<p>Running the pipe as before generates a new error.</p>
<pre class="lang-py prettyprint-override"><code>pipe = pipeline(
task="object-detection",
model="./local_model_directory",
)
</code></pre>
<p>Error: <code>AttributeError: 'NoneType' object has no attribute 'get'</code></p>
<h3>Saving/Loading Model with <code>model.save...</code> 🥴 (Works with Warnings)</h3>
<p>Then I tried loading and saving the model and its image processor manually without <code>pipeline</code> like so:</p>
<h4>1. Attempt - Works with Warnings 🥴</h4>
<pre class="lang-py prettyprint-override"><code>from transformers import AutoFeatureExtractor, AutoModelForObjectDetection
extractor = AutoFeatureExtractor.from_pretrained(
"microsoft/table-transformer-structure-recognition"
)
model = AutoModelForObjectDetection.from_pretrained(
"microsoft/table-transformer-structure-recognition"
)
extractor.save_pretrained("./local_model_directory")
model.save_pretrained("./local_model_directory")
</code></pre>
<p>Three files are saved to <code>./local_model_directory</code>:</p>
<ul>
<li><code>config.json</code></li>
<li><code>preprocessor_config.json</code></li>
<li><code>pytorch_model.bin</code></li>
</ul>
<p>Loading the model with <code>pipeline("object-detection", model="./local_model_directory")</code> works this time, but I get a warning:</p>
<p><code>python3.10/site-packages/transformers/models/detr/feature_extraction_detr.py:28: FutureWarning: The class DetrFeatureExtractor is deprecated and will be removed in version 5 of Transformers. Please use DetrImageProcessor instead.</code></p>
| <python><huggingface-transformers> | 2023-08-16 09:42:19 | 0 | 2,079 | Jay |
76,912,368 | 6,930,340 | Drop all column levels without a name in a pandas multi-column-index | <p>I have a dataframe with a multi-level column index. I need to drop all the column levels that don't have a name (i.e. <code>None</code>). I don't know in advance which levels are affected (if any at all).</p>
<p>What's a pythonic way to do this?</p>
<pre><code>import pandas as pd
import numpy as np
# Create a MultiIndex for the columns with None values in names
index = pd.MultiIndex.from_tuples([
('A', 'X', 'I'),
('A', 'X', 'II'),
('A', 'Y', 'I'),
('A', 'Y', 'II'),
('B', None, 'I'),
('B', None, 'II'),
('B', 'Z', 'I'),
('B', 'Z', 'II'),
], names=[None, 'level_2', None])
# Create a DataFrame with random values
data = np.random.randint(0, 10, (8, 8)) # Corrected shape to match the MultiIndex
df = pd.DataFrame(data, columns=index)
print(df)
A B
level_2 X Y NaN Z
I II I II I II I II
0 5 0 3 5 0 9 6 6
1 6 2 6 3 7 1 8 0
2 7 5 1 0 3 2 9 5
3 8 6 1 5 9 0 8 2
4 4 0 0 2 4 7 0 6
5 3 8 0 5 4 8 6 8
6 2 4 2 7 4 6 5 3
7 6 5 0 3 1 6 0 4
</code></pre>
<p>I am looking for this result:</p>
<pre><code>level_2 X Y NaN Z
0 5 0 3 5 0 9 6 6
1 6 2 6 3 7 1 8 0
2 7 5 1 0 3 2 9 5
3 8 6 1 5 9 0 8 2
4 4 0 0 2 4 7 0 6
5 3 8 0 5 4 8 6 8
6 2 4 2 7 4 6 5 3
7 6 5 0 3 1 6 0 4
</code></pre>
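<p>One possible approach (a sketch on a reduced version of the data above): collect the positions of levels whose name is <code>None</code> from <code>df.columns.names</code>, then pass that list to <code>droplevel</code>. Note this assumes at least one level remains named — <code>droplevel</code> cannot remove every level:</p>

```python
import pandas as pd

cols = pd.MultiIndex.from_tuples(
    [("A", "X", "I"), ("B", "Z", "II")], names=[None, "level_2", None]
)
df = pd.DataFrame([[1, 2], [3, 4]], columns=cols)

# Positions of all column levels whose name is None.
unnamed = [i for i, name in enumerate(df.columns.names) if name is None]

# droplevel accepts a list of level positions; keep only the named levels.
df.columns = df.columns.droplevel(unnamed)

print(df.columns.tolist())  # ['X', 'Z']
```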
| <python><pandas><multi-index> | 2023-08-16 09:41:21 | 2 | 5,167 | Andi |
76,912,246 | 12,176,803 | Python synchronous pyaudio data in asynchronous code | <p>Here is my use-case:</p>
<ul>
<li>I am streaming some audio from a microphone using Pyaudio. I receive chunks of audio data every x milliseconds.</li>
<li>I am opening a websockets connection if some condition is met.</li>
<li>Once the websockets is opened, I am streaming some messages in and out in an asyncronous manner. There is a consumer / producer way of doing things (I must be able to receive or send messages independently). The messages sent corresponds to the new chunks of data that I receive from the microphone.</li>
</ul>
<p>The issue I have:</p>
<ul>
<li>the streaming from the microphone is synchronous and blocks the thread. I do not have a deep understanding of the event loop, but my basic understanding is that this synchronous waiting blocks everything else. The consequence is that I cannot even open the websockets connection, unless I add some <code>asyncio.sleep</code> everywhere, which will not be robust at all.</li>
</ul>
<p>The way I tried to solve it:</p>
<ul>
<li>First idea: run the websockets connector in a second thread, and send the chunks from the main thread. The problem is that asynchronous queues are not thread safe, so in order to work I must have a synchronous queue... and the problem is still there. I also tried <a href="https://pypi.org/project/janus/" rel="nofollow noreferrer">this package</a> but that didn't help.</li>
<li>Second idea: running everything in an asynchronous fashion. But the synchronous incoming data is blocking everything.</li>
<li>Third idea: implementing <a href="https://stackoverflow.com/questions/53993334/converting-a-python-function-with-a-callback-to-an-asyncio-awaitable">this solution</a> in the following manner (code below). The idea is to use Pyaudio callbacks with an asynchronous stream.</li>
</ul>
<pre><code>import asyncio
import msgpack
import os
import websockets
import asyncio
import pyaudio
from src.utils.constants import CHANNELS, CHUNK, FORMAT, RATE
from dotenv import load_dotenv
from .utils import websocket_data_packet # just a utils function

load_dotenv()

QUEUE_MAX_SIZE = 10
MY_URL = os.environ.get("WEBSOCKETS_URL")


def make_iter():
    loop = asyncio.get_event_loop()
    queue = asyncio.Queue()

    def put(in_data, frame_count, time_info, status):
        loop.call_soon_threadsafe(queue.put_nowait, in_data)
        return None, pyaudio.paContinue

    async def get():
        while True:
            yield await queue.get()

    return get(), put


class MicrophoneStreamer(object):
    chunk: int = CHUNK
    channels: int = CHANNELS
    format: int = FORMAT
    rate: int = RATE

    def __init__(self):
        self._pyaudio = pyaudio.PyAudio()
        self.is_stream_open: bool = True
        self.stream_get, stream_put = make_iter()
        self.stream = self._pyaudio.open(
            format=self.format,
            channels=self.channels,
            rate=self.rate,
            input=True,
            frames_per_buffer=self.chunk,
            stream_callback=stream_put,
        )
        self.stream.start_stream()

    def close(self):
        self.is_stream_open = False
        self.stream.close()
        self._pyaudio.terminate()


async def consumer(websocket):
    async for message in websocket:
        print(f"Received message: {msgpack.unpackb(message)}")
        await asyncio.sleep(0.2)


async def producer(websocket, audio_queue):
    while True:
        chunck = await audio_queue.get()
        print(f"Sending message with audio data of size: {len(chunck)}")
        await websocket.send(msgpack.packb(websocket_data_packet(chunck)))
        await asyncio.sleep(0.2)


async def handler(audio_queue):
    print("This is before creating a new websocket")
    async with websockets.connect(MY_URL) as websocket:
        print("Just created a new websocket")
        producer_task = asyncio.create_task(producer(websocket, audio_queue))
        consumer_task = asyncio.create_task(consumer(websocket))
        done, pending = await asyncio.wait(
            [consumer_task, producer_task],
            return_when=asyncio.FIRST_COMPLETED,
            timeout=60,
        )
        for task in pending:
            task.cancel()
        websocket.close()


async def main():
    audio_queue = asyncio.Queue(maxsize=5)
    i = 0
    async for in_data in MicrophoneStreamer().stream_get:
        print(f"Processing item {len(in_data)}")
        # on trigger, create websockets connection
        if i == 2:
            asyncio.create_task(handler(audio_queue))
            await asyncio.sleep(0) # starts the task
        # on each iteration, add element to the queue
        if audio_queue.full():
            _ = await audio_queue.get()
        await audio_queue.put(in_data)
        i += 1


asyncio.run(main())
</code></pre>
<p>When I run this code, here is what happens:</p>
<ul>
<li>before i==2, elements get added to the queue; everything works fine.</li>
<li>then, at i==2, the websocket gets opened; messages go out and come in without issue</li>
<li>but then, the loop blocks. No new incoming bytes from the microphone streamer. Nothing happens anymore.</li>
</ul>
<p>Would somebody know how to solve this issue?</p>
<p>Thank you very much</p>
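<p>For reference, the callback→asyncio bridge in <code>make_iter</code> can be exercised without any audio hardware. Below is a minimal, device-free sketch of just that bridge (a plain worker thread stands in for the PyAudio callback; the websockets part is not reproduced here):</p>

```python
import asyncio
import threading

def make_iter(loop):
    queue = asyncio.Queue()

    def put(item):
        # safe to call from any thread (this is what a PyAudio callback would do)
        loop.call_soon_threadsafe(queue.put_nowait, item)

    async def get():
        while True:
            yield await queue.get()

    return get(), put

async def main():
    loop = asyncio.get_running_loop()
    aiter, put = make_iter(loop)
    # a worker thread stands in for the audio-device callback
    threading.Thread(target=lambda: [put(i) for i in range(3)]).start()
    out = []
    async for item in aiter:
        out.append(item)
        if len(out) == 3:
            break
    return out

print(asyncio.run(main()))  # → [0, 1, 2]
```

Because `call_soon_threadsafe` schedules callbacks in call order, items arrive in FIFO order even though they are produced outside the event loop.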
| <python><asynchronous><python-asyncio><python-multithreading><pyaudio> | 2023-08-16 09:27:43 | 0 | 366 | HGLR |
76,912,075 | 12,892,937 | Python pandas read_csv optional columns (handling files with different number of columns) | <p>I need to read CSV from files (1 at a time) that can have different number of columns, where newer files have extra columns that old files don't have.</p>
<pre><code>date|time|name|math
20101230|1345|mickey|0.5|
</code></pre>
<pre><code>date|time|name|math|literature|physics
20101230|1345|mickey|0.5|3.5|9
</code></pre>
<pre><code>date|time|name|math|literature|physics|chemistry|art
20101230|1345|mickey|0.5|3.5|9|6|7.4
</code></pre>
<p>I need to write code that can handle both the old and new formats. The output dataframe will always use the latest format. When the code reads a file in an old format, each unavailable column will be initialized with a default value. So in the above example, the output will always contain 8 columns, even if the file only contains 4.</p>
<p>The simplest solution is:</p>
<pre><code>df = pandas.read_csv('input.txt',
                     dtype = {'date': int, 'time': int, 'name': str, 'math': float,
                              'literature': float, 'physics': float, 'chemistry': float, 'art': float})
n_cols = len(df.columns)
if n_cols == 4:
    df['literature'] = 0.0
    df['physics'] = 0.0
    df['chemistry'] = 0.0
    df['art'] = 0.0
elif n_cols == 6:
    df['chemistry'] = 0.0
    df['art'] = 0.0
elif ...
return df
</code></pre>
<p>However, this solution doesn't look good, since you have to change a lot of old code every time there's a new format.</p>
<p>How should I handle this problem?</p>
<p>Edit: the question was closed because it's "similar" to <a href="https://stackoverflow.com/questions/20906474/import-multiple-csv-files-into-pandas-and-concatenate-into-one-dataframe">this</a>. But it's very clearly different question. I need to load 1 file that might have missing columns (compared to latest format), not loading multiple files then concatenate them.</p>
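<p>A sketch of one way to avoid touching old code for every new format (the column names and defaults below are assumptions taken from the example formats): keep the latest schema with per-column defaults in a single dict, then <code>reindex</code>/<code>fillna</code> against it so only that dict changes when a new format appears:</p>

```python
import io
import pandas as pd

# full latest-format schema with per-column defaults (assumed from the example)
SCHEMA_DEFAULTS = {
    'date': 0, 'time': 0, 'name': '', 'math': 0.0,
    'literature': 0.0, 'physics': 0.0, 'chemistry': 0.0, 'art': 0.0,
}

def read_any_format(path_or_buf):
    df = pd.read_csv(path_or_buf, sep='|')
    # add every column missing from this (possibly old) file as NaN...
    df = df.reindex(columns=list(SCHEMA_DEFAULTS))
    # ...then fill each freshly added column with its default
    return df.fillna(SCHEMA_DEFAULTS)

old_file = io.StringIO("date|time|name|math\n20101230|1345|mickey|0.5")
df = read_any_format(old_file)
print(df['chemistry'].tolist())  # → [0.0]
```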
| <python><pandas><dataframe><csv> | 2023-08-16 09:05:55 | 0 | 1,831 | Huy Le |
76,912,018 | 2,251,058 | Fast api add all parameters to Json Payload | <p>I need to add all parameters to the API as part of the JSON payload.
Working on the API, I figured that creating classes would put those parameters into the JSON payload.</p>
<p>How can I add the rest of them to the payload (request body)? Do I need to create a class for each <code>str</code> parameter?</p>
<pre><code>@router.post(path='/content-relevancy-interface/submit')
def content_relevancy_get_records(site_name: str, locale: str, lob: str, page_type: str,
                                  origins: OriginList = None,
                                  destinations: DestinationList = None,
                                  optimisation_opportunity: Union[str, None] = None,
                                  filter: FilterList = None):
</code></pre>
<p><a href="https://i.sstatic.net/qMXIS.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/qMXIS.png" alt="enter image description here" /></a></p>
| <python><python-3.x><fastapi> | 2023-08-16 08:57:54 | 2 | 3,287 | Akshay Hazari |
76,911,944 | 3,616,293 | Comparing floating vs int numpy arrays | <p>I have 2 numpy arrays: one of the them is obtained using sinusoidal functions (and so, it has float values), while the other array consists of integers. An example of such an array is:</p>
<pre><code>x = np.array([0.43, 2.11, 3.67, 4.12, 7.05, 8.87, 9.1, 11, 11.67])
y = np.array([1, 3, 5, 9, 10])
</code></pre>
<p>The two arrays are much larger than this toy example. But x is larger than y and contains elements of y within it. The problem is that naively you have to check for a given element within y in x two times: once by rounding up and another by rounding down. So a simple np.round() will not suffice.</p>
<pre><code>np.floor(x)
# array([0., 3., 4., 8., 9.])
np.ceil(x)
# array([ 1., 4., 5., 9., 10.])
</code></pre>
<p>For example, for floor(x), 0 doesn't exist in y, but ceil(x) gives 1 and that is present in y. So, for each element in x, it would need to be rounded up and down and then compared to each element in y to find a match.</p>
<p>Is there an efficient and/or better way to achieve this?</p>
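<p>As a sketch, both roundings can be checked in a single vectorized pass with <code>np.isin</code>, so no Python-level loop over either array is needed:</p>

```python
import numpy as np

x = np.array([0.43, 2.11, 3.67, 4.12, 7.05, 8.87, 9.1, 11, 11.67])
y = np.array([1, 3, 5, 9, 10])

# an element of x "matches" if either its floor or its ceil is present in y
mask = np.isin(np.floor(x), y) | np.isin(np.ceil(x), y)
matched = x[mask]
print(matched)
```

For very large sorted `y`, `np.searchsorted` on the floored/ceiled values would be another option.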
| <python><arrays><numpy> | 2023-08-16 08:48:32 | 1 | 2,518 | Arun |
76,911,812 | 11,872,550 | Unable to properly post file to php from python | <p>I am posting a text file from a Python script to PHP, where I want to store it in a directory.
Here is my Python code that posts the file:</p>
<pre><code>import requests

url = 'http://localhost:80/project/php/files.php'
file_path = 'C:\\wamp64\\www\\project\\python\\New folder\\directory_listing.txt'

with open(file_path, 'rb') as file:
    files = {'file': file}
    headers = {'Content-Type': 'multipart/form-data'}
    response = requests.post(url, files=files, headers=headers)

print(response.text)
</code></pre>
<p>Below is my PHP code, which receives the file and then moves it to the specified folder:</p>
<pre><code><?php
$target_dir = "files_list/";

if (isset($_FILES["file"])) {
    $target_file = $target_dir . basename($_FILES["file"]["name"]);
    if (move_uploaded_file($_FILES["file"]["tmp_name"], $target_file)) {
        echo "The file " . basename($_FILES["file"]["name"]) . " has been uploaded and moved.";
    } else {
        echo "Error uploading file.";
    }
} else {
    echo "No file uploaded.";
}
?>
</code></pre>
<p>My issue is that when I run my code I get <code>No file uploaded.</code> as output. I have already verified all paths and filenames; they are all OK. I don't know why the file is not being received on the PHP side. If anyone has any idea, kindly guide me on what is wrong.</p>
| <python><php> | 2023-08-16 08:30:27 | 0 | 309 | sam |
76,911,811 | 9,381,985 | numpy: create a 2-d coordinate matrix array from two 1-d array as row and column | <p>I have two numpy arrays as the coordinates of rows and columns of a matrix:</p>
<pre><code>r = np.array([1, 2, 3])
c = np.array([7, 8, 9])
</code></pre>
<p>Is there a simple way to create a new array of the coordinates of all cells of the matrix, like:</p>
<pre><code>m = np.SOME_FUNCTION(r, c)
# m = array([[1, 7], [1, 8], [1, 9], [2, 7], [2, 8], [2, 9], [3, 7], [3, 8], [3, 9]])
</code></pre>
<p>Thanks.</p>
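<p>A sketch using <code>np.meshgrid</code> with <code>indexing='ij'</code> (so the row coordinate varies slowest, matching the desired order), then stacking and flattening into pairs:</p>

```python
import numpy as np

r = np.array([1, 2, 3])
c = np.array([7, 8, 9])

# build the two coordinate grids, then stack and flatten them into (row, col) pairs
m = np.stack(np.meshgrid(r, c, indexing='ij'), axis=-1).reshape(-1, 2)
print(m.tolist())  # → [[1, 7], [1, 8], [1, 9], [2, 7], [2, 8], [2, 9], [3, 7], [3, 8], [3, 9]]
```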
| <python><numpy> | 2023-08-16 08:30:24 | 1 | 575 | Cuteufo |
76,911,803 | 5,300,978 | elasticsearch.BadRequestError: BadRequestError(400, 'search_phase_execution_exception', 'runtime error') | <p>My code crashes when I execute the following. I should mention that I already have documents indexed in ES.</p>
<pre><code>db = ElasticVectorSearch(
    elasticsearch_url=url,  # it is fine, I didn't change it
    index_name=os.environ["ELASTICSEARCH_INDEX_NAME"],
    embedding=OpenAIEmbeddings(openai_api_key=os.environ["OPENAI_API_KEY"],
                               deployment=os.environ["AZURE_OPENAI_EMBEDDINGS_DEPLOYMENT"]),
    ssl_verify={"ca_certs": os.environ["APP_LANGCHAIN_CERT_PATH"]})

docs = db.similarity_search(query="my query",
                            k=10)
</code></pre>
<p>Thank you</p>
<p>It was working before; then I removed the indexes and recreated them with:</p>
<pre><code>client.options(ignore_status=400).indices.create(index=os.environ["ELASTICSEARCH_INDEX"])
</code></pre>
| <python><elasticsearch> | 2023-08-16 08:29:50 | 1 | 1,324 | M. Mariscal |
76,911,767 | 3,055,351 | How to define custom objects as query parameters in FastAPI (using PydanticV2 annotations)? | <p>I am trying to figure out how to update my FastAPI app from Pydantic v1 to Pydantic v2.</p>
<p>Before I had code that worked perfectly fine:</p>
<pre class="lang-py prettyprint-override"><code>class ObjectId(bson.ObjectId):
    @classmethod
    def __get_validators__(cls):
        yield cls.validate

    @classmethod
    def validate(cls, value):
        if isinstance(value, str):
            try:
                return bson.ObjectId(value)
            except bson.errors.InvalidId:
                raise ValueError("Invalid ObjectId")
        elif isinstance(value, bson.ObjectId):
            return value
        else:
            raise TypeError("ObjectId required")


app = FastAPI()


@app.get("/test")
def test(id: ObjectId) -> bool:
    return True
</code></pre>
<p>However, when I try to migrate to Pydantic v2:</p>
<pre class="lang-py prettyprint-override"><code>import bson
from typing import Annotated
from pydantic import (
    field_validator,
    PlainSerializer,
    WithJsonSchema,
    BeforeValidator,
    TypeAdapter,
)
from fastapi import FastAPI

ObjectId = Annotated[
    bson.ObjectId,
    BeforeValidator(lambda x: bson.ObjectId(x) if isinstance(x, str) else x),
    PlainSerializer(lambda x: f"{x}", return_type=str),
    WithJsonSchema({"type": "string"}, mode="validation"),
    WithJsonSchema({"type": "string"}, mode="serialization"),
]

# Casting from str to ObjectId - WORKS
ta = TypeAdapter(
    ObjectId,
    config=dict(arbitrary_types_allowed=True),
)
ta.validate_python("5f7f9f0c2a0e0b0001d5b3d0")
ta.json_schema()

# Use ObjectId as a query parameter in FastAPI - FAILS:
app = FastAPI()


@app.get("/test")
def test(id: ObjectId) -> bool:
    return True
</code></pre>
<p>I get the following error message:</p>
<blockquote>
<p>fastapi.exceptions.FastAPIError: Invalid args for response field! Hint: check that <class 'bson.objectid.ObjectId'> is a valid Pydantic field type. If you are using a return type annotation that is not a valid Pydantic field (e.g. Union[Response, dict, None]) you can disable generating the response model from the type annotation with the path operation decorator parameter response_model=None. Read more: <a href="https://fastapi.tiangolo.com/tutorial/response-model/" rel="nofollow noreferrer">https://fastapi.tiangolo.com/tutorial/response-model/</a></p>
</blockquote>
<p>I am a bit confused, because it seems that the problem is not with the response model (which is <code>bool</code> in the example), but with a custom type in a query parameter.</p>
<p>I also checked other suggested approaches, for example:</p>
<ul>
<li><a href="https://stackoverflow.com/questions/76686267/what-is-the-new-way-to-declare-mongo-objectid-with-pydantic-v2-0">What is the new way to declare Mongo ObjectId with PyDantic v2.0^?</a></li>
<li><a href="https://stackoverflow.com/questions/76686888/using-bson-objectid-in-pydantic-v2/76722139#76722139">Using bson.ObjectId in Pydantic v2</a> <- btw, duplicate</li>
</ul>
<p>There, it is suggested an alternative approach to introduce a custom annotation type, however, it still does not work for me.</p>
<p>I also did not find anything relevant in the FastAPI documentation about query parameters, nor in its list of Extra Data Types.</p>
<p>I would highly appreciate it if you could help find a possible solution.</p>
| <python><mongodb><fastapi><pydantic> | 2023-08-16 08:25:03 | 0 | 1,390 | desa |
76,911,555 | 5,775,358 | Pytest for package for data processing | <p>I am trying to include pytests for the project I am working on. I struggle with two parts of it.</p>
<ol>
<li><p>Some algorithms are quite complex, including FFT etc., so easy examples like <code>assert add(1, 1) == 2</code> do not apply here, because the function itself is needed to calculate the expected output. In what way can we tackle this so that the assert is independent of the function it tests?</p>
</li>
<li><p>Data processing is also involved; processing one file can take up to 20 minutes. Is it necessary to include tests for the reading of a file? Or is it sufficient to create a small file and use that for the tests? The reason I doubt this is that the file will never be an "edge case". I understood that edge cases are the ones that we want to test the most.</p>
</li>
</ol>
<p>For the first problem one can think of a simple function which in my opinion is hard to test:</p>
<pre class="lang-py prettyprint-override"><code>import numpy as np
from typing import Union

def get_grid(x: Union[list, np.ndarray], y: Union[list, np.ndarray]):
    X, Y = np.meshgrid(x, y)
    return np.sin(X) + np.cos(Y)
</code></pre>
<p>So one way to test this is to call the function, save the output as plain text and assert this output with the function call:</p>
<pre><code>def test_get_grid():
    assert np.allclose(
        get_grid([1, 2, 3], [1, 2, 3]),
        np.array([
            [ 1.38177329,  1.44959973,  0.68142231],
            [ 0.42532415,  0.49315059, -0.27502683],
            [-0.14852151, -0.08069507, -0.84887249]])
    )
</code></pre>
<p>But this tests the function with its own output. This may be useful when the code changes, to check that the output matches the previous version, but it is not a real test in my opinion, since it does not test the expected outcome.</p>
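<p>One sketch of an implementation-independent test: assert at points where the expected value is known analytically (sin and cos at 0, π/2, π), plus cheap property checks like the output shape. The function below mirrors <code>get_grid</code> from the question:</p>

```python
import numpy as np

def get_grid(x, y):
    X, Y = np.meshgrid(x, y)
    return np.sin(X) + np.cos(Y)

def test_get_grid_known_points():
    # values chosen so sin/cos are known exactly:
    # sin(0)=0, sin(pi/2)=1, cos(0)=1, cos(pi)=-1
    grid = get_grid([0, np.pi / 2], [0, np.pi])
    assert grid.shape == (2, 2)
    assert np.isclose(grid[0, 0], 1.0)   # sin(0) + cos(0)
    assert np.isclose(grid[0, 1], 2.0)   # sin(pi/2) + cos(0)
    assert np.isclose(grid[1, 0], -1.0)  # sin(0) + cos(pi)

test_get_grid_known_points()
```

The regression-style snapshot test and this analytic test complement each other: the first catches unintended changes, the second pins down the intended behavior.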
| <python><unit-testing><pytest> | 2023-08-16 07:53:44 | 1 | 2,406 | 3dSpatialUser |
76,911,509 | 4,470,419 | How to use retriever in Langchain? | <p>I looked through a lot of documentation but got confused on the retriever part.</p>
<p>So I am building a chatbot using user's custom data.</p>
<ol>
<li>User will feed the data</li>
<li>Data should be upserted to Pinecone</li>
<li>Then later user can chat with their data</li>
<li>there can be multiple users and each user will be able to chat with their own data.</li>
</ol>
<p>Now I am following below approach</p>
<ol>
<li>Storing user data into Pinecone</li>
</ol>
<pre class="lang-py prettyprint-override"><code>def doc_preprocessing(content):
    doc = Document(page_content=content)
    text_splitter = CharacterTextSplitter(
        chunk_size=1000,
        chunk_overlap=0
    )
    docs_split = text_splitter.split_documents([doc])
    return docs_split

def embedding_db(user_id, content):
    docs_split = doc_preprocessing(content)
    # Extract text from the split documents
    texts = [doc.page_content for doc in docs_split]
    vectors = embeddings.embed_documents(texts)
    # Store vectors with user_id as metadata
    for i, vector in enumerate(vectors):
        upsert_response = index.upsert(
            vectors=[
                {
                    'id': f"{user_id}",
                    'values': vector,
                    'metadata': {"user_id": str(user_id)}
                }
            ]
        )
</code></pre>
<p>This should create embeddings for the given data in Pinecone.</p>
<p>Now the second part is to chat with this data. For QA, I have the below:</p>
<pre class="lang-py prettyprint-override"><code>def retrieval_answer(user_id, query):
    text_field = "text"
    vectorstore = Pinecone(
        index, embeddings.embed_query, text_field
    )
    vectorstore.similarity_search(
        query,
        k=10,
        filter={
            "user_id": str(user_id)
        },
    )
    qa = RetrievalQA.from_chain_type(
        llm=llm,
        chain_type='stuff',
        retriever=vectorstore.as_retriever(),
    )
    result = qa.run(query)
    print("Result:", result)
    return result
</code></pre>
<p>but I keep getting</p>
<pre><code>Found document with no `text` key. Skipping.
</code></pre>
<p>When I am doing QA, it's not referring to the data stored in Pinecone; it's just using the base ChatGPT model. I am not sure what I am missing here.</p>
| <python><openai-api><langchain><pinecone> | 2023-08-16 07:45:41 | 1 | 1,125 | Manoj ahirwar |
76,911,496 | 3,357,352 | Inline constants in format strings | <p>Very close to <a href="https://stackoverflow.com/questions/55805574/extracting-all-constants-from-code-object-in-python-3">this question</a> (but not a duplicate IMHO)</p>
<pre class="lang-py prettyprint-override"><code>In [1]: def foo():
   ...:     A = 'something' + ' ' + 'good'
   ...:     B = 'is'
   ...:     C = f'{B} going on'
   ...:     return A + C
   ...:

In [2]: foo.__code__.co_consts
Out[2]: (None, 'something good', 'is', ' going on')
</code></pre>
<p>Can I somehow help python evaluate the formatted string as a constant?</p>
<p>Since the runtime is able to evaluate string concatenation (as evident by evaluating <code>'something good'</code>), I'm curious why it isn't able to evaluate the formatted string as <code>'is going on'</code></p>
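<p>For reference, a small sketch of what CPython actually does here (exact opcode names vary by version): the peephole optimizer folds concatenation of literals into one constant at compile time, but an f-string that references a name compiles to bytecode that builds the string at run time, so it can never land in <code>co_consts</code> — the compiler does not track that <code>B</code> is effectively constant:</p>

```python
import dis

def folded():
    return 'something' + ' ' + 'good'   # literals only: folded at compile time

def formatted():
    B = 'is'
    return f'{B} going on'              # references a local: built at run time

print('something good' in folded.__code__.co_consts)    # → True
print('is going on' in formatted.__code__.co_consts)    # → False
dis.dis(formatted)  # shows the string being assembled at run time
```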
| <python> | 2023-08-16 07:44:17 | 1 | 7,270 | OrenIshShalom |
76,911,433 | 4,277,485 | KeyError: None of [Int64Index([0], dtype='int64')] are in the [columns] | <p>Data frame</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th style="text-align: left;">name</th>
<th style="text-align: center;">date</th>
<th style="text-align: right;">flag</th>
</tr>
</thead>
<tbody>
<tr>
<td style="text-align: left;">abc 012</td>
<td style="text-align: center;">2023-08-14</td>
<td style="text-align: right;">1</td>
</tr>
</tbody>
</table>
</div>
<p>Creating the df:</p>
<pre><code>df = pd.read_sql_query("select * from table", engine)
df['date'] = pd.to_datetime(df['date'], format='%Y-%m-%d')
</code></pre>
<p>Creating a new column using split(), which raises the following error:</p>
<pre><code>df[['tag']] = df['name'].str.split(' ', expand=True)[[0]]
</code></pre>
<p>expected output:</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th style="text-align: left;">name</th>
<th style="text-align: center;">date</th>
<th style="text-align: right;">flag</th>
<th style="text-align: right;">tag</th>
</tr>
</thead>
<tbody>
<tr>
<td style="text-align: left;">abc 012</td>
<td style="text-align: center;">2023-08-14</td>
<td style="text-align: right;">1</td>
<td style="text-align: right;">abc</td>
</tr>
</tbody>
</table>
</div>
<p>But I am getting Error:</p>
<pre><code>df[['tag']] = df['name'].str.split(' ', expand=True)[[0]]
</code></pre>
<pre><code>  File "/opt/py37/lib/python3.7/site-packages/pandas/core/frame.py", line 3464, in __getitem__
    indexer = self.loc._get_listlike_indexer(key, axis=1)[1]
  File "/opt/py37/lib/python3.7/site-packages/pandas/core/indexing.py", line 1314, in _get_listlike_indexer
    self._validate_read_indexer(keyarr, indexer, axis)
  File "/opt/py37/lib/python3.7/site-packages/pandas/core/indexing.py", line 1374, in _validate_read_indexer
    raise KeyError(f"None of [{key}] are in the [{axis_name}]")
KeyError: "None of [Int64Index([0], dtype='int64')] are in the [columns]"
</code></pre>
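<p>As a sketch, one alternative that sidesteps indexing the expanded frame by integer column labels (which raises this KeyError when the split result has no column <code>0</code>, e.g. on an empty frame) is the <code>.str</code> accessor:</p>

```python
import pandas as pd

# hypothetical frame mirroring the table in the question
df = pd.DataFrame({'name': ['abc 012'], 'date': ['2023-08-14'], 'flag': [1]})

# take everything before the first space, row-wise, without expand=True
df['tag'] = df['name'].str.split(' ').str[0]
print(df['tag'].tolist())  # → ['abc']
```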
| <python><string><split> | 2023-08-16 07:34:21 | 0 | 438 | Kavya shree |
76,911,374 | 1,319,998 | Deadlock/hanging while reading from a subprocess's stderr in thread - with consideration of KeyboardInterrupt and SystemExit | <p>I'm using Popen from Python's subprocess module to start a subprocess, and also running another thread to capture its standard error.</p>
<p>I suspect that the Popen context hangs on exit of the context if something is reading from the standard error of the process. This shows the issue:</p>
<pre class="lang-py prettyprint-override"><code>import time
from subprocess import Popen, PIPE
from threading import Thread

def read_from_stderr(proc):
    try:
        c = proc.stderr.read(1)
    except:
        # Swallow for easier to understand trace from main thread
        pass

with Popen(['cat'], stderr=PIPE) as proc:
    print("Starting thread to read stderr")
    t = Thread(target=read_from_stderr, args=(proc,))
    t.start()
    time.sleep(1)
    print("Exiting the context. Will now hang.")
</code></pre>
<p>And if I hit CTRL+C, I can see in the trace where it stopped:</p>
<pre><code> File "[removed]/subprocess.py", line 1092, in __exit__
self.stderr.close()
</code></pre>
<p>To avoid this, I can put a</p>
<pre class="lang-py prettyprint-override"><code>proc.terminate()
</code></pre>
<p>in the context, which will allow exit of the context.</p>
<p>But there are complications: <code>KeyboardInterrupt</code> and <code>SystemExit</code>. They can be raised at more or less <em>any</em> point(?). How can I avoid hanging in the case that one is raised before any of my own code in the context, like so:</p>
<pre><code>raise KeyboardInterrupt()
proc.terminate()
</code></pre>
<p>Note - I am considering the case where KeyboardInterrupt say is raised before the interpreter gets to any try/except block I can write where it can be caught.</p>
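<p>A sketch of one mitigation (not a complete fix — as noted above, the interrupt can still land before the <code>try</code> is entered): move <code>terminate()</code> into a <code>finally</code>, so it also runs when <code>KeyboardInterrupt</code>/<code>SystemExit</code> is raised anywhere inside the block. A Python child process stands in for <code>cat</code> here for portability; that substitution is an assumption of the sketch:</p>

```python
import sys
from subprocess import Popen, PIPE
from threading import Thread

def read_from_stderr(proc):
    try:
        proc.stderr.read(1)
    except Exception:
        pass

# blocks on stdin like `cat`, but portable across platforms
child = [sys.executable, '-c', 'import sys; sys.stdin.read()']

proc = Popen(child, stdin=PIPE, stderr=PIPE)
try:
    t = Thread(target=read_from_stderr, args=(proc,))
    t.start()
    # ... normal work; an interrupt raised anywhere in here still reaches finally
finally:
    proc.terminate()      # ends the child, which unblocks the stderr reader
    proc.wait()
    t.join()
    proc.stderr.close()
    proc.stdin.close()

print(proc.returncode)
```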
| <python><subprocess><pipe><python-multithreading><stderr> | 2023-08-16 07:26:36 | 1 | 27,302 | Michal Charemza |
76,911,306 | 17,160,160 | Pyomo Constraint Declaration Displays Unexpected Calculation? | <p>In the following example model, I am attempting to create a constraint function that takes the sum of <code>model.auction</code> * <code>model.volume</code> for each item in a set, deducts it from parameter <code>model.icCap</code> and assigns that value to the variable <code>model.daVol</code>:</p>
<pre><code>model = ConcreteModel()
model.WEEKS = Set(initialize = [1,2,3])
model.MONTHS = Set(initialize = ['J24','F24'])
model.PRODS = Set(initialize = ['Q24','J24','F24'])
model.icCap = Param(initialize = 900)
model.auction = Var(model.WEEKS,model.PRODS, within = Binary)
model.volume = Var(model.WEEKS,model.PRODS, within = NonNegativeIntegers)
model.daVol = Var(model.MONTHS, within = NonNegativeIntegers)
def daVol_rule(model,j):
    return model.icCap - sum(model.auction[i,j] * model.volume[i,j] for i in model.WEEKS) == model.daVol[j]
model.daVol_const = Constraint(model.MONTHS, rule = daVol_rule)
</code></pre>
<p>However, upon viewing the created statement for <code>daVol_const</code> in <code>pprint()</code>, I notice that it appears to be deducting <code>daVol[j]</code> at the end of each statement. This, I believe, is causing my objective function to calculate incorrectly.</p>
<p>See below (scroll along to end):</p>
<pre><code>1 Constraint Declarations
daVol_const : Size=2, Index=MONTHS, Active=True
Key : Lower : Body : Upper : Active
F24 : 0.0 : 900 - (auction[1,F24]*volume[1,F24] + auction[2,F24]*volume[2,F24] + auction[3,F24]*volume[3,F24]) - daVol[F24] : 0.0 : True
J24 : 0.0 : 900 - (auction[1,J24]*volume[1,J24] + auction[2,J24]*volume[2,J24] + auction[3,J24]*volume[3,J24]) - daVol[J24] : 0.0 : True
</code></pre>
<p>Am I misunderstanding the <code>pprint()</code> output here or have I incorrectly formatted my constraint function perhaps?</p>
<p>Guidance appreciated!</p>
| <python><pyomo> | 2023-08-16 07:15:52 | 0 | 609 | r0bt |
76,911,079 | 2,071,612 | Django model aggregate function returns different result in shell and in request endpoint | <p>Django version: 4.2.4
Python version: 3.10.8</p>
<pre class="lang-py prettyprint-override"><code>from django.shortcuts import render
from django.db.models.aggregates import Max
from .models.product import Product

def product_page(request):
    agg = Product.objects.aaggregate(max__price=Max('price'))
    return render(request, 'product.html', {'max_price': agg['max__price']})
</code></pre>
<p>I get the following error when reading the result from <code>agg</code>:</p>
<pre><code>'coroutine' object is not subscriptable
</code></pre>
<p>But, when I run the same code in django shell I get the expected result in <code>dict</code></p>
<p>What am I missing here?</p>
| <python><django> | 2023-08-16 06:39:44 | 1 | 4,645 | Lekhnath |
76,910,803 | 1,354,517 | layout-parser basic api test code throws weird error | <p>I am trying out layout-parser api from</p>
<p><a href="https://layout-parser.readthedocs.io/en/latest/notes/modelzoo.html#example-usage" rel="nofollow noreferrer">https://layout-parser.readthedocs.io/en/latest/notes/modelzoo.html#example-usage</a></p>
<p>My detectron2 installation is quite successful as evident from running <code>python -m detectron2.utils.collect_env </code></p>
<p>So is my torch installation by running <code>pip3 show torch</code></p>
<p>I have created a first sample as below</p>
<pre><code>import layoutparser as lp
import cv2

image = cv2.imread("data/test-image.jpeg")
image = image[..., ::-1]

model = lp.models.Detectron2LayoutModel(
    config_path ='lp://PubLayNet/faster_rcnn_R_50_FPN_3x/config', # In model catalog
    label_map ={0: "Text", 1: "Title", 2: "List", 3:"Table", 4:"Figure"}, # In model`label_map`
    extra_config=["MODEL.ROI_HEADS.SCORE_THRESH_TEST", 0.8] # Optional
)

model.detect(image)
</code></pre>
<p>Running the above with <code>python3 sample-first.py</code> throws an error</p>
<pre><code>Traceback (most recent call last):
File "/Users/_dga/ml-git/layout-parser-trial/sample-first.py", line 6, in <module>
model = lp.models.Detectron2LayoutModel(
File "/Users/_dga/Library/Python/3.9/lib/python/site-packages/layoutparser/models/layoutmodel.py", line 124, in __init__
self._create_model()
File "/Users/_dga/Library/Python/3.9/lib/python/site-packages/layoutparser/models/layoutmodel.py", line 149, in _create_model
self.model = self._engine.DefaultPredictor(self.cfg)
File "/Users/_dga/ml-git/detectron2/detectron2/engine/defaults.py", line 288, in __init__
checkpointer.load(cfg.MODEL.WEIGHTS)
File "/Users/_dga/ml-git/detectron2/detectron2/checkpoint/detection_checkpoint.py", line 62, in load
ret = super().load(path, *args, **kwargs)
File "/Users/_dga/Library/Python/3.9/lib/python/site-packages/fvcore/common/checkpoint.py", line 155, in load
checkpoint = self._load_file(path)
File "/Users/_dga/ml-git/detectron2/detectron2/checkpoint/detection_checkpoint.py", line 99, in _load_file
loaded = self._torch_load(filename)
File "/Users/_dga/ml-git/detectron2/detectron2/checkpoint/detection_checkpoint.py", line 114, in _torch_load
return super()._load_file(f)
File "/Users/_dga/Library/Python/3.9/lib/python/site-packages/fvcore/common/checkpoint.py", line 252, in _load_file
return torch.load(f, map_location=torch.device("cpu"))
File "/Users/_dga/Library/Python/3.9/lib/python/site-packages/torch/serialization.py", line 815, in load
return _legacy_load(opened_file, map_location, pickle_module, **pickle_load_args)
File "/Users/_dga/Library/Python/3.9/lib/python/site-packages/torch/serialization.py", line 1033, in _legacy_load
magic_number = pickle_module.load(f, **pickle_load_args)
_pickle.UnpicklingError: invalid load key, '<'.
</code></pre>
<p>It seems to be coming from torch and I am guessing its unable to load the model in question.</p>
<p>I am using MAC M2 with python3.9 for the whole exercise.</p>
<p>After some research on ChatGPT and Google, I downgraded my detectron2 from 0.6 to 0.5, and now have a new issue:</p>
<pre><code>Traceback (most recent call last):
File "/Users/_dga/ml-git/layout-parser-trial/sample-first.py", line 6, in <module>
model = lp.models.Detectron2LayoutModel(
File "/Users/_dga/Library/Python/3.9/lib/python/site-packages/layoutparser/models/layoutmodel.py", line 52, in __new__
cls._import_module()
File "/Users/_dga/Library/Python/3.9/lib/python/site-packages/layoutparser/models/layoutmodel.py", line 40, in _import_module
cls, m["import_name"], importlib.import_module(m["module_path"])
File "/Library/Developer/CommandLineTools/Library/Frameworks/Python3.framework/Versions/3.9/lib/python3.9/importlib/__init__.py", line 127, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
File "<frozen importlib._bootstrap>", line 1030, in _gcd_import
File "<frozen importlib._bootstrap>", line 1007, in _find_and_load
File "<frozen importlib._bootstrap>", line 986, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 680, in _load_unlocked
File "<frozen importlib._bootstrap_external>", line 850, in exec_module
File "<frozen importlib._bootstrap>", line 228, in _call_with_frames_removed
File "/Users/_dga/Library/Python/3.9/lib/python/site-packages/detectron2/engine/__init__.py", line 11, in <module>
from .hooks import *
File "/Users/_dga/Library/Python/3.9/lib/python/site-packages/detectron2/engine/hooks.py", line 19, in <module>
from detectron2.evaluation.testing import flatten_results_dict
File "/Users/_dga/Library/Python/3.9/lib/python/site-packages/detectron2/evaluation/__init__.py", line 2, in <module>
from .cityscapes_evaluation import CityscapesInstanceEvaluator, CityscapesSemSegEvaluator
File "/Users/_dga/Library/Python/3.9/lib/python/site-packages/detectron2/evaluation/cityscapes_evaluation.py", line 11, in <module>
from detectron2.data import MetadataCatalog
File "/Users/_dga/Library/Python/3.9/lib/python/site-packages/detectron2/data/__init__.py", line 2, in <module>
from . import transforms # isort:skip
File "/Users/_dga/Library/Python/3.9/lib/python/site-packages/detectron2/data/transforms/__init__.py", line 4, in <module>
from .transform import *
File "/Users/_dga/Library/Python/3.9/lib/python/site-packages/detectron2/data/transforms/transform.py", line 36, in <module>
class ExtentTransform(Transform):
File "/Users/_dga/Library/Python/3.9/lib/python/site-packages/detectron2/data/transforms/transform.py", line 46, in ExtentTransform
def __init__(self, src_rect, output_size, interp=Image.LINEAR, fill=0):
AttributeError: module 'PIL.Image' has no attribute 'LINEAR'
</code></pre>
| <python><python-3.x><tensorflow><layout-parser> | 2023-08-16 05:45:38 | 1 | 1,117 | Gautam |
76,910,777 | 5,278,205 | How can I filter a pandas series to only elements that are numeric characters 0-9 and grab the indices of those elements? | <p>How can I filter a pandas series to only elements that are numbers (0-9) and grab the indices of those elements?</p>
<pre><code>import pandas as pd, numpy as np
data = np.array(['g', "1", 'e', "5", 'h'])
df = pd.Series(data, index=['ee1', 'ee2', 'ee3', 'ee4', 'ee5'])
</code></pre>
<p>For example, I want to access <code>ee2</code> and <code>ee4</code> as <code>['ee2', 'ee4']</code>.</p>
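<p>A sketch using the vectorized string methods: <code>str.fullmatch(r'[0-9]')</code> keeps exactly the single ASCII-digit elements (plain <code>str.isdigit()</code> would also accept multi-character and non-ASCII digit strings):</p>

```python
import pandas as pd
import numpy as np

data = np.array(['g', '1', 'e', '5', 'h'])
s = pd.Series(data, index=['ee1', 'ee2', 'ee3', 'ee4', 'ee5'])

mask = s.str.fullmatch(r'[0-9]')   # exactly one ASCII digit
digits = s[mask]
print(digits.index.tolist())  # → ['ee2', 'ee4']
```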
| <python><pandas> | 2023-08-16 05:39:55 | 2 | 5,213 | Cyrus Mohammadian |
76,910,767 | 6,649,591 | How can I use features in statsforecast | <p>How can I use features in statsforecast (e.g. moving average, lags, user defined function)?</p>
<pre><code>fcst = StatsForecast(
    m4_daily_train,
    models = [(auto_arima, 7)],
    freq = 'D',
    n_jobs = min(len(m4_daily_train.index.unique()), cpu_count())
)
</code></pre>
<p>Or is it possible to create the features on my own in a previous pandas step, and then use the full feature table in the fitting, like...</p>
<pre><code>df['lag1'] = df['y'].shift(1)
df['day'] = df['timestamp'].dt.day

fcst = StatsForecast(
    df,
    models = [(auto_arima, 7)],
    freq = 'D',
    n_jobs = min(len(m4_daily_train.index.unique()), cpu_count())
)
</code></pre>
| <python><time-series><statsforecast> | 2023-08-16 05:37:40 | 1 | 487 | Christian |
76,910,548 | 2,195,440 | Determining When a Package First Appeared in the PyPI Index using BigQuery? | <p>I'm working on a project where I need to find out the initial release date of a package on the PyPI index. I came across the <code>bigquery-public-data.pypi.distribution_metadata</code> public dataset on BigQuery. From my understanding, there's a field called <code>upload_time</code> in this dataset that might indicate the upload timestamp of a package.</p>
<p>Can anyone confirm if the <code>upload_time</code> field refers to the time when a package was first uploaded to the PyPI index?</p>
<p>For reference, I found details about the PyPI BigQuery dataset here: PyPI bigquery dataset.
<a href="https://warehouse.pypa.io/api-reference/bigquery-datasets.html" rel="nofollow noreferrer">https://warehouse.pypa.io/api-reference/bigquery-datasets.html</a></p>
<p>But PyPI website does not mention anything about the table metadata description.</p>
| <python><google-bigquery><package><pypi> | 2023-08-16 04:35:43 | 1 | 3,657 | Exploring |
76,910,344 | 8,444,568 | Is multiprocessing.Queue.qsize() reliable if protected by multiprocessing.Lock? | <p>I want to get elements from a multiprocessing.Queue only if it has at least N items:</p>
<pre><code>def popn(n, q, lock):
    lock.acquire()
    ret = []
    if q.qsize() >= n:
        for i in range(n):
            ret.append(q.get())
    lock.release()
    return ret

def push(element, q, lock):
    lock.acquire()
    q.put(element)
    lock.release()
</code></pre>
<p>However, according to the <a href="https://docs.python.org/3/library/multiprocessing.html#multiprocessing.Queue.qsize" rel="nofollow noreferrer">docs</a>, multiprocessing.Queue.qsize() is not reliable. I want to know whether it will report an accurate size of the queue if I protect the queue with a multiprocessing.Lock (as shown in the code above)?</p>
| <python><multiprocessing> | 2023-08-16 03:27:29 | 2 | 893 | konchy |
76,910,244 | 541,657 | Python: PDF to text is consuming too much CPU | <p>I'm trying to convert a large pdf to text. The file is about 20 MB. No matter what python library I use, the CPU spikes to 99% or more. I have an app deployed on AWS with 1 vcpu / 8GB RAM and the server hangs when I try to extract text from more than one file in parallel. Backend is a flask app.</p>
<p>I've tried unstructuredio, pdfminer, pypdf2, pymupdf and a few more.</p>
<p>How do I solve this?</p>
| <python><flask><pypdf> | 2023-08-16 03:03:28 | 1 | 600 | Aravind |
76,910,053 | 6,869,232 | Gunicorn & Django: Handle timeout | <p>We run a Django REST API behind Gunicorn (on a Kubernetes Cluster in EKS exposed via AWS ALB).</p>
<p>The Gunicorn command:</p>
<pre><code>gunicorn config.wsgi --timeout 20 (...)
</code></pre>
<p>When a request takes over 20 seconds to be processed by Django (due to various reasons), gunicorn will time out by sending a SIGABRT signal and restarting the gunicorn worker.</p>
<p>However, this causes an issue in tracking the error in various tools such as Datadog or Sentry which are not able to track the error correctly.</p>
<p>Instead, we would like to emit an explicit error (a custom error) called TimeoutError.</p>
<p>After a thorough investigation, we want to find the best way to raise this TimeoutError at Django Level when the request takes 20 seconds to complete.</p>
<p>What would be a recommended solution?</p>
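<p>One pattern worth considering (a sketch of the mechanism only: it is not Django-specific, and the 0.05 s budget, names, and placement here are illustrative assumptions) is to arm a SIGALRM-based deadline slightly below gunicorn's <code>--timeout</code>, so a catchable Python exception fires before the worker is killed:</p>

```python
import signal


class RequestTimeout(Exception):
    """Raised when a request exceeds its time budget."""


def _on_alarm(signum, frame):
    raise RequestTimeout("request exceeded its time budget")


def run_with_deadline(func, seconds):
    """Run func(); raise RequestTimeout if it runs longer than `seconds` (Unix, main thread only)."""
    old_handler = signal.signal(signal.SIGALRM, _on_alarm)
    signal.setitimer(signal.ITIMER_REAL, seconds)
    try:
        return func()
    finally:
        signal.setitimer(signal.ITIMER_REAL, 0)      # always disarm the timer
        signal.signal(signal.SIGALRM, old_handler)   # restore the previous handler


def slow_view():
    while True:  # stand-in for a request that never finishes
        pass


try:
    run_with_deadline(slow_view, 0.05)
except RequestTimeout as exc:
    print(f"caught: {exc}")  # here you would report to Sentry/Datadog and return a 504
```

<p>In a real deployment this logic would live in a Django middleware wrapping <code>get_response()</code>, with the deadline set a second or two below gunicorn's timeout; note it only works in the main thread of a Unix sync worker.</p>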
| <python><django><timeout><gunicorn><timeoutexception> | 2023-08-16 01:59:56 | 1 | 420 | Yassine Belmamoun |
76,909,998 | 9,933,130 | How to handle this complex file structure in Python | <p>Currently my file structure looks like this:</p>
<pre><code>\HelloWorld
-helloWorld.py
- __init__.py
\api
- __init__.py
- userAPI.py
- util.py
</code></pre>
<p>The code looks like the following:</p>
<p><strong>HelloWorld.py</strong>:</p>
<pre class="lang-py prettyprint-override"><code>import api.userAPI
def handler(event, context):
api.userAPI.create_user()
</code></pre>
<p><strong>userAPI.py</strong></p>
<pre class="lang-py prettyprint-override"><code>import util
def create_user():
helper()
print("creation")
</code></pre>
<p><strong>util.py</strong></p>
<pre class="lang-py prettyprint-override"><code>def helper():
print("HI!")
</code></pre>
<p>However, I'm getting an "Unable to import module 'HelloWorld': No module named 'util'" error. I'm running it with Python 3.8 and writing Lambda code; HelloWorld.py is my Lambda handler.</p>
| <python><amazon-web-services><lambda><import> | 2023-08-16 01:44:16 | 0 | 1,761 | Qwerty Qwerts |
76,909,948 | 16,115,413 | Creating a Year-wise Bar Chart Visualization from CSV Data | <h1>Problem:</h1>
<p>I'm working on a data visualization project where I want to create a bar chart similar to the one shown in this <a href="https://i.sstatic.net/EmiHs.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/EmiHs.png" alt="reference image" /></a>. The image is from a story available <a href="https://www.chartr.co/stories/2023-07-24-1-barbenheimer-was-just-the-boost" rel="nofollow noreferrer">here</a>.</p>
<h1>My Effort:</h1>
<p>I've written Python code using pandas, seaborn, and matplotlib to visualize the data. Here's my code snippet:</p>
<pre><code>import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
box = pd.read_csv("box_office_18_23.csv")
# Data Cleaning
box["overall_gross"] = box["overall_gross"].str.replace("$", "").str.replace(",", "").astype(int)
# Data Analysis
sns.barplot(x='year', y='overall_gross', data=box)
plt.show()
</code></pre>
<h1>Output:</h1>
<p>Here's the output of my code: <a href="https://i.sstatic.net/DuYiW.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/DuYiW.png" alt="Output Image" /></a></p>
<h1>Link to Code and Dataset:</h1>
<p>I have uploaded my Jupyter Notebook and the relevant dataset (CSV file) to this <a href="https://drive.google.com/drive/folders/17jnK-sLac4VbTaPkZSwsZuPCJqYae87K?usp=drive_link" rel="nofollow noreferrer">Google Drive link</a>.</p>
<h1>Issue:</h1>
<p>While my code runs without errors, the resulting bar chart doesn't match the desired visualization. I'm looking for guidance on how to modify my code to achieve a similar year-wise bar chart as shown in the reference image.</p>
<p>Also, if other libraries or tools can do the job, let me know that too.</p>
<p>Thank you for your help!</p>
| <python><pandas><visualization><data-analysis><graph-visualization> | 2023-08-16 01:27:00 | 1 | 549 | Mubashir Ahmed Siddiqui |
76,909,730 | 9,933,130 | Grab a response mid--transact_write_items in boto3 | <p>In boto3, I know transact_write_items is an atomic method. Let's say inside of my transact_write_items, I'm trying to delete from a table and update a table. For example:</p>
<ol>
<li>Delete item from ITEM_TABLE</li>
<li>UPDATE entry in CHARACTER_TABLE, specifically removing item from a set</li>
</ol>
<p>Is it possible to do this in a single transaction where, when I delete the item, it gives me the user_id Primary Key to find which entry in CHARACTER_TABLE to modify?</p>
<p>I know I can do this in separate calls, but in that case, it wouldn't be atomic.</p>
| <python><amazon-dynamodb><boto3> | 2023-08-16 00:05:37 | 1 | 1,761 | Qwerty Qwerts |
76,909,659 | 11,416,654 | RSA Oracle - Getting the flag by using chosen ciphertext attack | <p>I am trying to solve a simple RSA CTF challenge, but I am facing problems that go beyond the theory behind the attack (or at least I guess so). Basically, I have an oracle at my disposal that will first print the encrypted flag and then encrypt and decrypt whatever I want (except for the decryption of the flag). The idea for the attack is to encrypt 2*encrypted_flag and then decrypt the given cipher. By dividing the obtained number by 2 I should then get the flag. Am I missing something? Below you can find both the oracle's code and my exploit.</p>
<p>The attack idea is a chosen ciphertext attack and I "stole" the equations from this video: <a href="https://www.youtube.com/watch?v=ZjYzrn8M3w4&ab_channel=BillBuchananOBE" rel="nofollow noreferrer">https://www.youtube.com/watch?v=ZjYzrn8M3w4&ab_channel=BillBuchananOBE</a>.</p>
<p>Oracle's code:</p>
<pre><code>#!/usr/bin/env python3
import signal
from binascii import hexlify
from Crypto.PublicKey import RSA
from Crypto.Util.number import *
from random import randint
from secret import FLAG
import string
TIMEOUT = 300 # 5 minutes time-out
def menu():
print()
print('Choice:')
print(' [0] Exit')
print(' [1] Encrypt')
print(' [2] Decrypt')
print('')
return input('> ')
def encrypt(m):
return pow(m, rsa.e, rsa.n)
def decrypt(c):
return pow(c, rsa.d, rsa.n)
rsa = RSA.generate(1024)
flag_encrypted = pow(bytes_to_long(FLAG.encode()), rsa.e, rsa.n)
used = [bytes_to_long(FLAG.encode())]
def handle():
print("================================================================================")
print("= RSA Encryption & Decryption oracle =")
print("= Find the flag! =")
print("================================================================================")
print("")
print("Encrypted flag:", flag_encrypted)
while True:
choice = menu()
# Exit
if choice == '0':
print("Goodbye!")
break
# Encrypt
elif choice == '1':
m = int(input('\nPlaintext > ').strip())
print('\nEncrypted: ' + str(encrypt(m)))
# Decrypt
elif choice == '2':
c = int(input('\nCiphertext > ').strip())
if c == flag_encrypted:
print("Wait. That's illegal.")
else:
for no in used:
if m % no == 0:
print("Wait. That's illegal.")
break
else:
print('\nDecrypted: ' + str(m))
# Invalid
else:
print('bye!')
break
if __name__ == "__main__":
signal.alarm(TIMEOUT)
handle()
</code></pre>
<p>My current approach:</p>
<pre><code>from pwn import remote  # needed for remote() below; missing from the original listing
from Crypto.Util.number import *
from math import gcd
import gmpy2
import sys
#sys.set_int_max_str_digits(0)
r = remote('oracle.challs.cyberchallenge.it', 9041)
r.recvuntil(b'Encrypted flag: ')
encrypted_flag = int(r.recvline().strip().decode())
e = 65537
# Let's first gather the ciphertext of the new num
"""
Here's another hint: suppose I encrypt 2. The oracle will give me back c2= pow(2, 65537, rsa.n). Now I can also compute 2**65537 as an integer. We know that 2**65537 - c2 is divisible by N. So we can try to factor 2**65537 - c2 using, say, the elliptic curve method (ECM). If we are incredibly lucky, 2**65537 - c2 = N * (bunch of relatively small primes), and after ECM finds all the small factors we'll be left with N. But, suppose, instead, that I also encrypt 3, so I get c3 = pow(3, 65537, rsa.n). And maybe even c5 = pow(5, 65537, rsa.n) How can I combine these to find rsa.n
"""
public_exponent = 65537
numbers = [2,3,4,5,6]
numbers_bytes = [b'\x02',b'\x03',b'\x04',b'\x05',b'\x06']
ciphers = []
diffs = []
for i in range(4):
r.recvuntil(b'>')
r.sendline(b'1')
r.recvuntil(b'Plaintext > ')
r.sendline(str(bytes_to_long(numbers_bytes[i])))
r.recvuntil(b'Encrypted: ')
cipher = int(r.recvline().strip().decode())
ciphers.append(cipher)
diffs.append(gmpy2.sub(pow(numbers[i], public_exponent),cipher))
print(diffs)
common_factor = None
for diff in diffs:
if common_factor is None:
common_factor = diff
else:
common_factor = gmpy2.gcd(common_factor, diff)
print(common_factor)
#let's check whether the common factor is N
print(ciphers[0] == pow(bytes_to_long(b'\x02'), public_exponent, common_factor))
# We have found N if True
# To trick the decryption method just sum to the original ciphertext N once
print(common_factor)
encrypted_flag += int(common_factor)
r.recvuntil(b'>')
r.sendline(b'2')
r.recvuntil(b'Ciphertext > ')
r.sendline(str(encrypted_flag))
r.recvuntil('Decrypted: ')
flag = int(r.recvline().decode())
print(long_to_bytes(flag))
</code></pre>
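<p>As a sanity check, the blinding step can be verified offline with toy parameters (a sketch with textbook RSA numbers, not the challenge's): multiply the target ciphertext by <code>pow(2, e, n)</code>, decrypt, then strip the factor of 2 with a modular inverse rather than plain integer division:</p>

```python
# Toy textbook-RSA parameters for illustration only.
p, q = 61, 53
n = p * q                           # 3233
e = 17
d = pow(e, -1, (p - 1) * (q - 1))   # private exponent (Python 3.8+)

m = 65                  # stand-in for the flag
c = pow(m, e, n)        # what the oracle prints

# Blind: RSA is multiplicatively homomorphic, so c * E(2) mod n
# decrypts to 2*m mod n, and c_blinded != c, so the oracle accepts it.
c_blinded = (c * pow(2, e, n)) % n
m_blinded = pow(c_blinded, d, n)             # the oracle's decryption of c_blinded
recovered = (m_blinded * pow(2, -1, n)) % n  # "divide by 2" modulo n

print(recovered)  # 65
```

<p>The modular inverse also avoids problems when <code>2*m</code> wraps around <code>n</code>, where plain integer division by 2 would give the wrong answer.</p>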
| <python><rsa><biginteger><number-theory><ctf> | 2023-08-15 23:39:51 | 0 | 823 | Shark44 |
76,909,587 | 13,860,719 | Fastest way to calculate the sum of row-wise "v.T @ v" of a matrix | <p>I have a 1000 x 500 matrix. For each row, I want to calculate the dot product between the transpose of the row vector and the row vector itself (which should be a scalar), and sum over all rows. Right now I am using</p>
<pre><code>import numpy as np
A = np.random.random((1000, 500))
res = sum(A[i].T @ A[i] for i in range(A.shape[0]))
</code></pre>
<p>Since this is the performance bottleneck of my algorithm, I wonder if there are faster ways to do this, preferably a vectorized NumPy solution.</p>
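<p>For reference, a vectorized equivalent (a sketch, not necessarily the only option): summing row·row over all rows is just the sum of every squared element of the matrix, so the whole loop collapses to one fused reduction:</p>

```python
import numpy as np

A = np.random.random((1000, 500))

res_loop = sum(A[i].T @ A[i] for i in range(A.shape[0]))  # original approach
res_vec = np.einsum('ij,ij->', A, A)                      # single fused multiply-sum
# equivalently: (A * A).sum() or np.linalg.norm(A) ** 2

print(np.isclose(res_loop, res_vec))
```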
| <python><python-3.x><numpy><performance> | 2023-08-15 23:16:15 | 2 | 2,963 | Shaun Han |
76,909,437 | 19,299,757 | session not created: This version of ChromeDriver only supports Chrome version 114 | <p>I am running a Docker container from a Docker image in an AWS Batch environment.
It was all working nicely for a while, but since today I am getting the following error.</p>
<pre><code>E selenium.common.exceptions.SessionNotCreatedException: Message: session not created: This
version of ChromeDriver only supports Chrome version 114
E Current browser version is 116.0.5845.96 with binary path /opt/google/chrome/google-chrome
</code></pre>
<p>The Dockerfile that has the chrome installation is as below</p>
<pre><code>FROM python:3.10
WORKDIR /usr/src/app
COPY . .
RUN pip install --trusted-host pypi.org --upgrade pip
RUN pip install --no-cache-dir \
--extra-index-url https://artifactory.int.csgdev01.citcosvc.com/artifactory/api/pypi/citco-pypi/simple \
-r requirements.txt
RUN pip install awscli
RUN apt-get install -yqq unzip curl
RUN apt-get -y update
RUN apt-get install zip -y
RUN apt-get install unzip -y
RUN wget -q -O - https://dl-ssl.google.com/linux/linux_signing_key.pub | apt-key add -
RUN curl -sS -o - https://dl-ssl.google.com/linux/linux_signing_key.pub | apt-key add -
RUN echo "deb http://dl.google.com/linux/chrome/deb/ stable main" >> /etc/apt/sources.list.d/google-chrome.list
RUN apt-get -y update
RUN apt-get -y install google-chrome-stable

# Install chrome driver
RUN wget -N https://chromedriver.storage.googleapis.com/`curl -sS chromedriver.storage.googleapis.com/LATEST_RELEASE`/chromedriver_linux64.zip -P ~/
RUN unzip ~/chromedriver_linux64.zip -d ~/
RUN rm ~/chromedriver_linux64.zip
RUN mv -f ~/chromedriver /usr/local/bin/chromedriver
RUN chmod 0755 /usr/local/bin/chromedriver
RUN ls -lt
RUN ls -lt /usr/local/bin
RUN chmod +x ./run.sh
CMD ["bash", "./run.sh"]
</code></pre>
<p>My selenium python test class is below</p>
<pre><code>from selenium import webdriver
import unittest
class Test_SecTransferWorkflow(unittest.TestCase):
options = webdriver.ChromeOptions()
options.add_argument('--no-sandbox')
options.add_argument("--enable-javascript")
options.add_argument("--start-maximized")
options.add_argument("--incognito")
options.add_argument('--headless')
options.add_argument('--ignore-certificate-errors')
options.add_argument('--enable-features=NetworkService')
options.add_argument('--shm-size=1g')
options.add_argument('--disable-gpu')
options.add_experimental_option("excludeSwitches", ["enable-automation"])
options.add_argument("--window-size=1920,1080")
options.add_argument("--disable-extensions")
options.add_argument('--disable-dev-shm-usage')
options.add_experimental_option('useAutomationExtension', False)
options.add_experimental_option("detach", True)
options.add_argument('--allow-running-insecure-content')
options.add_argument('--allow-insecure-localhost')
options.add_argument('--ignore-ssl-errors=yes')
options.add_argument('--user-agent=Chrome/77')
driver = webdriver.Chrome(options=options)
@classmethod
def setUpClass(cls):
try:
cls.driver.delete_all_cookies()
cls.driver.get(TestData_common.BASE_URL)
time.sleep(2)
except WebDriverException as e:
print('Site down...> ', e)
cls.driver.delete_all_cookies()
time.sleep(3)
def test_001_login(self):
if not TestData_common.URL_FOUND:
pytest.skip('Site seems to be down...')
self.loginPage = LoginPage(self.driver)
self.loginPage.do_click_agree_button()
self.driver.maximize_window()
print('Successfully clicked on AGREE button...')
time.sleep(2)
</code></pre>
<p>I didn't have any issues running this image so far, until I encountered this error today.
Any help is much appreciated.</p>
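<p>One likely culprit and fix (an illustrative sketch; double-check the endpoint URLs against the Chrome-for-Testing availability dashboard): since Chrome 115, matching chromedrivers are published via the Chrome-for-Testing distribution instead of <code>chromedriver.storage.googleapis.com</code>, whose <code>LATEST_RELEASE</code> tops out at 114. A replacement Dockerfile fragment could look like:</p>

```dockerfile
# chromedriver.storage.googleapis.com stops at 114; for Chrome >= 115 use
# the Chrome-for-Testing distribution so browser and driver versions match.
RUN CFT_VERSION=$(curl -sS https://googlechromelabs.github.io/chrome-for-testing/LATEST_RELEASE_STABLE) && \
    curl -sS -o /tmp/chromedriver.zip \
        "https://storage.googleapis.com/chrome-for-testing-public/${CFT_VERSION}/linux64/chromedriver-linux64.zip" && \
    unzip /tmp/chromedriver.zip -d /tmp/ && \
    mv /tmp/chromedriver-linux64/chromedriver /usr/local/bin/chromedriver && \
    chmod 0755 /usr/local/bin/chromedriver && \
    rm -rf /tmp/chromedriver.zip /tmp/chromedriver-linux64
```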
| <python><amazon-web-services><docker><google-chrome><selenium-webdriver> | 2023-08-15 22:21:03 | 9 | 433 | Ram |
76,909,395 | 9,933,130 | __enter__ error when trying to use boto3's transact_write_items | <p>I have the following code:</p>
<pre class="lang-py prettyprint-override"><code>def create_report_and_update_user(db_client, report_info):
# TODO:: This will correctly perform operations, but will return a __enter__ error. Figure out why...
report_info = json.loads(json.dumps(report_info), parse_float=Decimal)
user_email = report_info["email"]
report_id = (
"R-" + str(uuid.uuid4())
if not report_info.get("report_id")
else report_info.get("report_id")
)
report_data = {
"report_id": report_id,
"creation_date": str(datetime.now()),
"email": user_email,
"loan_details": report_info["loan_details"],
"purchase_details": report_info["purchase_details"],
"property_details": report_info["property_details"],
"monthly_budget": report_info["monthly_budget"],
}
serializer = TypeSerializer()
report_data = {
key: serializer.serialize(value) for key, value in report_data.items()
}
try:
print("open is assigned to %r" % open)
with db_client.transact_write_items(
TransactItems=[
{
"Put": {
"TableName": REPORTS_TABLE_NAME,
"Item": report_data,
"ConditionExpression": "attribute_not_exists(report_id)",
}
},
]
) as response:
print("Inside transact_write_items")
return response
except Exception as e:
print("HERE: ", e)
</code></pre>
<p>However, I'm getting "HERE: <code>__enter__</code>" printed out to the console. What is strange is that my DynamoDB table does properly add a new entry.</p>
<p>I saw another Stack Overflow post that advised typing <code>print("open is assigned to %r" % open)</code>, but I get <code>open is assigned to <built-in function open></code> as expected. I don't remember ever overwriting the <code>open</code> built-in...</p>
<p>Not sure if it will help, but I'm using Windows and testing via the SAM CLI. To run the following code, I use: <code>npx cdk ls; sam local invoke ReportManager-Lambda -t ./cdk.out/GlobalStack.template.json -e .\events\create_report.json</code></p>
| <python><amazon-web-services><amazon-dynamodb><boto3> | 2023-08-15 22:08:40 | 1 | 1,761 | Qwerty Qwerts |
76,909,383 | 6,300,872 | How to set up SQLAlchemy in a Flask project properly? | <p>I have a Flask project in which I want to have a db connection to my PostgreSQL db.
I've looked around online and I've seen that for initializing the connection you simply instantiate a SQLAlchemy object inside your app.py file by passing the Flask Application object</p>
<pre><code>app = Flask(__name__)
db = SQLAlchemy(app)
</code></pre>
<p>then you can declare every model you want by making it extending the Model class defined in the db object</p>
<pre><code>class User(db.Model):
id = db.Column(db.Integer, primary_key=True)
username = db.Column(db.String(80), unique=True, nullable=False)
</code></pre>
<p>I think it's quite a mess because if I want to separate my models into different files I have to reference the 'db' instance around my code by importing directly from the app.py file (which then causes circular references).</p>
<p>All the tutorials online put everything inside the app.py file and then import directly from it.</p>
<p>Is there a way to initialize SQLAlchemy in a cleaner way, by separating all the models under different folders and maybe without a direct reference to the app.py?</p>
<p>Thanks</p>
| <python><flask><sqlalchemy> | 2023-08-15 22:06:59 | 1 | 847 | Paolo Ardissone |
76,909,328 | 13,131 | pydantic error when adding a field to a derived class: ValueError: "SubDerived" object has no field "_myVar" | <p>Ran into an issue trying to add a field to a Derived class, which seems due to pydantic 1.10.12 (as it works fine with the latest pydantic). Here is a minimal repro:</p>
<pre class="lang-py prettyprint-override"><code>from pydantic import BaseModel
class Base():
pass
class Derived(Base, BaseModel):
def __init__(self, **data):
super().__init__(**data)
print("Derived.__init__() called")
self._myVar = True
@property
def myVar(self):
return self._myVar
@myVar.setter
def myVar(self, value):
self._myVar = value
class SubDerived(Derived):
pass
if __name__ == "__main__":
v = SubDerived()
print(v.myVar)
</code></pre>
<p>If I try pydantic 1.10.12, which my real project dependencies require, I get this exception:</p>
<pre><code>Exception has occurred: ValueError (note: full exception trace is shown but execution is paused at: <module>)
"SubDerived" object has no field "_myVar"
File "/Users/rah/repos/pythonTests/inheritanceTest.py", line 13, in __init__
self._myVar = True
^^^^^^^^^^^
File "/Users/rah/repos/pythonTests/inheritanceTest.py", line 29, in <module> (Current frame)
v = SubDerived()
^^^^^^^^^^^^
ValueError: "SubDerived" object has no field "_myVar"
</code></pre>
<p>Any ideas on how to resolve this? I tried to override <code>__init__</code> and set <code>self._myVar</code> in each inheriting class, but I get the same exception, and as far as I know I shouldn't have to do that anyways.</p>
| <python><pydantic> | 2023-08-15 21:52:43 | 3 | 10,700 | Richard Anthony Hein |
76,909,076 | 857,932 | How do I export members of a nested Python enum into the containing class? | <p>I have a Python class that contains a <code>state</code> field. To define all possible states, I have declared a nested enumeration:</p>
<pre class="lang-py prettyprint-override"><code>class Foo:
class State(enum.Enum):
Unloaded = enum.auto()
Loaded = enum.auto()
Processed = enum.auto()
state: State
</code></pre>
<p>However, it is a bit cumbersome to write code like this:</p>
<pre class="lang-py prettyprint-override"><code>if foo.state == Foo.State.Unloaded:
do_something_with(foo)
foo.state = Foo.State.Loaded
</code></pre>
<hr />
<p>I want to make the enumeration members accessible directly as members of <code>Foo</code> (like <code>Foo.Unloaded</code>, <code>Foo.Loaded</code> and <code>Foo.Processed</code>). Is there a better way than just assigning them by hand?</p>
<pre class="lang-py prettyprint-override"><code>class Foo:
class State(enum.Enum):
Unloaded = enum.auto()
Loaded = enum.auto()
Processed = enum.auto()
Unloaded = State.Unloaded
Loaded = State.Loaded
Processed = State.Processed
state: State
</code></pre>
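<p>One low-magic alternative (a sketch; metaclass tricks are also possible) is to lift the members onto the outer class in a loop right after the class body, instead of assigning them by hand:</p>

```python
import enum


class Foo:
    class State(enum.Enum):
        Unloaded = enum.auto()
        Loaded = enum.auto()
        Processed = enum.auto()

    state: "Foo.State"


# Copy each member up one level so Foo.Loaded aliases Foo.State.Loaded.
for _member in Foo.State:
    setattr(Foo, _member.name, _member)

print(Foo.Loaded is Foo.State.Loaded)  # True
```

<p>The loop scales automatically as members are added to <code>State</code>, so the two definitions cannot drift out of sync.</p>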
| <python><enums><class-variables> | 2023-08-15 20:59:24 | 1 | 2,955 | intelfx |
76,909,044 | 901,426 | decode MQTT Sparkplug B messages from Ignition without Protobuf? | <p>up front, i am somewhat new to MQTT and especially new to Google's Protobuf. with that in mind, i'm working with a leftover project from the previous Python "developer" and am finding myself a little lost because he didn't document any of his code.</p>
<p>to get up and running, as it were, i wrote a quick script following <a href="https://github.com/eclipse/paho.mqtt.python#getting-started" rel="nofollow noreferrer">the tute on the Paho MQTT site</a>, which works only in that i am GETTING messages. but i am unable to decode them. i ran a loop through the many encoding types and found that <code>latin-1</code> gave me somewhat human-readable results. so this is my first roadblock. it is <em>supposed</em> to be <code>utf-8</code> but that generates a <code>can't decode byte 0xf3: invalid continuation type</code> error.</p>
<p>here's the complete code:</p>
<pre class="lang-py prettyprint-override"><code>import paho.mqtt.client as mqtt
import json
def decodeMQTT(msg):
#
# it's coming in as SparkplugB v1.0
#
print('x!x!x-- decoding payload... --x!x!x')
def on_connect(client, userdata, flags, rc):
print('connected with result code: ' + str(rc))
client.subscribe("spBv1.0/spanky/+/TCG4 Test1/#")
def on_message(client, userdata, msg):
tokens = msg.topic.split('/')
print('--- tokens ---> ',tokens)
# OUTPUTS: --- tokens ---> ['spBv1.0', 'spanky', 'NCMD', 'TCG4 Test1']
print('-+-+-+- raw payload: ', msg.payload)
# OUTPUTS: -+-+-+- raw payload: b'\x08\xa8\xbe\xbc\xad\x9f1\x12\x12\x10\x03\x18\xa8\xbe\xbc\xad\x9f1
# \t8\x00e\x00\x00\xa0A\x18\xff\xff\xff\xff\xff\xff\xff\xff\xff\x01'
try:
#
# this should be using UTF-8 to decode the message string... but it isn't
# which leads me to believe it has something to do with protobuf (dammit)
# if UTF-8 is used, i get this error:
# »» can't decode byte 0xf3: invalid continuation type ««
# which i can't seem to find any explanation for... :P
#
decoded = msg.payload.decode('latin-1')
print('♥♥♥-» decoded payload: ', str(decoded), '«-♥♥♥')
#
# this prints: ♥♥♥-» decoded payload: ÅÒÖ1↕&
# ‼/tags/inputs/input1►♥↑ÇÒÖ1 ♀z♦2.01↑^ «-♥♥♥
# NOTE the line break is part of the decoded output
#
except Exception as ex:
print('↓↓↓↓↓ decoding failed: ', str(ex), '↓↓↓↓↓')
def main():
client = mqtt.Client()
client.on_connect = on_connect
client.on_message = on_message
client.username_pw_set('fnork', 'blarp')
client.connect("nunyabiznitz", 16500, 10)
client.loop_forever()
if __name__ == '__main__':
main()
</code></pre>
<p>the messages are coming from an Ignition server in the SparkplugB v1.0 format. which i presumed <code>Paho MQTT</code> would be able to decode...</p>
<p>the original code i took over used Protobuf, but as you can see here, i'm not. both as an attempt to simply understand the MQTT exchange and to evaluate if it actually brings anything to the table, since <code>Paho MQTT</code> <em>should</em> be handling all the necessary bits; primary of which is the encoding and decoding of message strings, no?</p>
<p>i DO understand that <code>Protobuf</code> <em>ensures</em> there's no structure issues or data-type issues (and there's a brick-ton of other stuff that lacks documentation), but if i'm only reading a handful of values, i don't know that there's value in an extra 600 lines of code...</p>
<p>anyway. as you can see, the return message from the Ignition MQTT module is NOT decoding properly and i want to know why. is there something i am missing here? if i'm missing information that could help clear up this question better, please let me know and i'll supply as best i am able.</p>
<p>EDIT: forgot a line of code that actually did the decoding. fixed.</p>
<p>EDIT2: okay. this morning i came in and this is the payload coming through now:</p>
<pre><code>\x08\xd0\xca\x99\xf4\x9f1\x12&\n\x13/tags/inputs/input1\x10\x03\x18\x8e\xcd\x99\xf4\x9f1 \x0cz\x042.01\x18\xae\x01
</code></pre>
<p>as you can see, the parts i actually care about are in clear text. this makes me think that, yes, i actually DO need the Protobuf after all. it looks like the hex parts are the shorthand for the header and flags, while the payload has some other kind of decorators going on... which will require Protobuf to decode for full use. dang it. i was hoping for a sleeker build. that stuff is bloated AF.</p>
| <python><protocol-buffers><mqtt><ignition> | 2023-08-15 20:50:14 | 1 | 867 | WhiteRau |
76,908,986 | 13,381,632 | Convert Pixel Values to Coordinates - Python | <p>I have a set of pixel values in the format of lower left (x1y1) (e.g. 204.55, 196.33) and upper right (x2y2) (e.g. 314.56, 294.77), and need to convert these values to their corresponding coordinates in an image.</p>
<p>The pixel values are not 0-255 RGB values, but the locations of the pixels themselves in the image. I do not have the corresponding GeoTIFF to compare or to use GDAL/Rasterio to perform the conversion, but I would imagine there should be a somewhat straightforward way to do this that I cannot seem to find.</p>
<p>Is there a convenient Python library or recommended transform to use?</p>
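<p>Without the GeoTIFF there is nothing to derive the mapping from, but if the six affine geotransform coefficients are known from another source, the conversion is a two-line affine transform. A sketch with made-up GDAL-ordered coefficients <code>(origin_x, pixel_width, row_rotation, origin_y, col_rotation, -pixel_height)</code>:</p>

```python
def pixel_to_coord(gt, col, row):
    """Map a pixel position (column, row) to map coordinates via a GDAL-style geotransform."""
    x = gt[0] + col * gt[1] + row * gt[2]
    y = gt[3] + col * gt[4] + row * gt[5]
    return x, y


# Illustrative geotransform: 0.5-unit pixels, origin at (100, 50), no rotation.
gt = (100.0, 0.5, 0.0, 50.0, 0.0, -0.5)

print(pixel_to_coord(gt, 204.55, 196.33))
```

<p>The bounding-box corners from the question would each pass through the same function; the hard part is obtaining the real coefficients, not applying them.</p>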
| <python><image><coordinates><gis><pixel> | 2023-08-15 20:35:50 | 0 | 349 | mdl518 |
76,908,846 | 2,781,105 | Iterrows merge in Pandas results in duplicated columns with suffixes | <p>I have a dataframe of event occurrences in which the date is formatted as 'YYYY-WW'.
Various events can take place, some at the same time as other events in the same timeframe. An example dataframe is as follows:
<pre><code>df1 = pd.DataFrame({'yyyyww': ['2022-01','2022-02','2022-03', '2022-01','2022-02','2022-03','2022-01','2022-03'],
'event': ['event1','event1','event1','event2','event2','event3','event4','event4'],
'event_flag': [1,1,1,1,1,1,1,1,]})
</code></pre>
<p><a href="https://i.sstatic.net/Oo9DS.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Oo9DS.png" alt="enter image description here" /></a></p>
<p>I have a 2nd dataframe to which I want to left join the 1st dataframe. The 2nd dataframe could potentially contain many more dates than is featured in df1 but the for the purposes of this question is as follows:</p>
<pre><code>df2 = pd.DataFrame({'yyyyww': ['2022-01','2022-02','2022-03'],
'col1': ['apple','car','banana']})
</code></pre>
<p><a href="https://i.sstatic.net/bGF8I.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/bGF8I.png" alt="enter image description here" /></a></p>
<p>Ultimately, I want to perform a left join such that the values from <code>event</code> in df1 become additional column headers in df2, with the <code>event_flag</code> from df1 becoming a boolean value under the respective column header, as follows:</p>
<pre><code>desired_outcome = pd.DataFrame({'yyyyww': ['2022-01','2022-02','2022-03'],
'col1': ['apple','car','banana'],
'event1':[1,1,1],
'event2':[1,1,0],
'event3':[0,0,1],
'event4':[1,0,1],
})
</code></pre>
<p><a href="https://i.sstatic.net/zxC4s.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/zxC4s.png" alt="enter image description here" /></a></p>
<p>However, when using <code>iterrows()</code> to achieve this, what I end up with is something that bears some resemblance to the desired outcome but duplicates the columns such that I end up with multiple columns with suffixes, as follows:</p>
<pre><code>for index, row in df1.iterrows():
index_value = row['event']
#column_a_value = row['disco']
yyyyww = row['yyyyww']
event_flag = row['event_flag']
df2 = df2.merge(pd.DataFrame({'yyyyww': [yyyyww],
f'{index_value}': [event_flag]
}),
left_on='yyyyww', right_on='yyyyww', how='left')
df2.fillna(0)
</code></pre>
<p><a href="https://i.sstatic.net/uuCKc.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/uuCKc.png" alt="enter image description here" /></a></p>
<p>How can I perform the required operation without resulting in duplicated columns?</p>
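<p>For comparison, the row-by-row merge can be avoided entirely: one pivot of df1 followed by a single left join reproduces the desired frame with no suffixed duplicates (a sketch using the sample frames above):</p>

```python
import pandas as pd

df1 = pd.DataFrame({'yyyyww': ['2022-01','2022-02','2022-03','2022-01','2022-02','2022-03','2022-01','2022-03'],
                    'event': ['event1','event1','event1','event2','event2','event3','event4','event4'],
                    'event_flag': [1,1,1,1,1,1,1,1]})
df2 = pd.DataFrame({'yyyyww': ['2022-01','2022-02','2022-03'],
                    'col1': ['apple','car','banana']})

# Wide frame: one column per event, 0 where the event did not occur.
wide = (df1.pivot_table(index='yyyyww', columns='event',
                        values='event_flag', fill_value=0)
           .reset_index())

# One merge instead of one merge per row; fillna covers dates absent from df1.
out = df2.merge(wide, on='yyyyww', how='left').fillna(0)

print(out)
```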
| <python><pandas><merge><left-join> | 2023-08-15 20:08:27 | 2 | 889 | jimiclapton |
76,908,773 | 2,708,215 | _io.BytesIO file handle not recognized as bytes | <p>I've recently started using the <code>BytesIO</code> class from <code>io</code>. It's very handy and has solved my initial problem, but now I've created another for myself. In short, we'll be working with Azure Functions or DataBricks (which one TBD) and won't have local file storage but will have blob storage. I'm using <code>io</code> to get around that. So, there's a model file in blob storage that I get:</p>
<pre><code>blob_name = 'my_model_file.model'
blob_client = BlobClient( account_url = account_url,
container_name = container_name,
blob_name = blob_name,
credential = credential)
download = blob_client.download_blob().content_as_bytes()
saved_model_fh = BytesIO(download).getvalue()
print(type(saved_model_fh))
</code></pre>
<p>The output is <code><class 'bytes'></code>. So, as far as I know, that gives me what I need. But when I try to use this for gensim Doc2Vec.load:</p>
<pre><code>model = Doc2Vec.load(saved_model_fh)
</code></pre>
<p>I get this error at the end of the traceback:</p>
<pre><code>TypeError: endswith first arg must be bytes or a tuple of bytes, not str
</code></pre>
<p>Perhaps I'm naive, but it seems plain that I am giving it bytes, as it wants, not a str. When I run <code>type</code> on the exact object I feed it, it says that it's bytes. What am I missing here? I'm happy to post more of the traceback, but I suspect I'm missing something basic here.</p>
<p>Edit: Traceback added:</p>
<pre><code>---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
Cell In[76], line 18
13 print(type(saved_model_fh))
---
---> 18 cpc_model = Doc2Vec.load(saved_model_fh)
File ~\AppData\Local\anaconda3\Lib\site-packages\gensim\models\doc2vec.py:809, in Doc2Vec.load(cls, *args, **kwargs)
786 """Load a previously saved :class:`~gensim.models.doc2vec.Doc2Vec` model.
787
788 Parameters
(...)
806
807 """
808 try:
--> 809 return super(Doc2Vec, cls).load(*args, rethrow=True, **kwargs)
810 except AttributeError as ae:
811 logger.error(
812 "Model load error. Was model saved using code from an older Gensim version? "
813 "Try loading older model using gensim-3.8.3, then re-saving, to restore "
814 "compatibility with current code.")
File ~\AppData\Local\anaconda3\Lib\site-packages\gensim\models\word2vec.py:1942, in Word2Vec.load(cls, rethrow, *args, **kwargs)
1923 """Load a previously saved :class:`~gensim.models.word2vec.Word2Vec` model.
1924
1925 See Also
(...)
1939
1940 """
1941 try:
-> 1942 model = super(Word2Vec, cls).load(*args, **kwargs)
1943 if not isinstance(model, Word2Vec):
1944 rethrow = True
File ~\AppData\Local\anaconda3\Lib\site-packages\gensim\utils.py:484, in SaveLoad.load(cls, fname, mmap)
455 """Load an object previously saved using :meth:`~gensim.utils.SaveLoad.save` from a file.
456
457 Parameters
(...)
480
481 """
482 logger.info("loading %s object from %s", cls.__name__, fname)
--> 484 compress, subname = SaveLoad._adapt_by_suffix(fname)
486 obj = unpickle(fname)
487 obj._load_specials(fname, mmap, compress, subname)
File ~\AppData\Local\anaconda3\Lib\site-packages\gensim\utils.py:573, in SaveLoad._adapt_by_suffix(fname)
558 @staticmethod
559 def _adapt_by_suffix(fname):
560 """Get compress setting and filename for numpy file compression.
561
562 Parameters
(...)
571
572 """
--> 573 compress, suffix = (True, 'npz') if fname.endswith('.gz') or fname.endswith('.bz2') else (False, 'npy')
574 return compress, lambda *args: '.'.join(args + (suffix,))
TypeError: endswith first arg must be bytes or a tuple of bytes, not str
</code></pre>
| <python> | 2023-08-15 19:53:14 | 0 | 503 | Andrew |
76,908,756 | 5,304,058 | How to get only the values from a JSON column as comma-separated | <p>I am updating my question.
I have a dataframe with a JSON-formatted column, <code>rawrecord</code>.</p>
<pre><code>filedate code errorID rawrecord errortype
20230811 8003 100 {"Action":"NEW","ID":"30811-195552-952","EventType":"MMMM","Date":"20230811T000000.000000","orderID":"111111111111",flag:False} BBBB
20230811 8003 101 {"Action":"NEW","ID":"20811-195555-952","EventType":"MMMM","Date":"20230811T000000.000000","orderID":"111111111112",flag:False} BBBB
20230811 8003 102 {"Action":"NEW","ID":"50811-195552-952","EventType":"MMMM","Date":"20230811T000000.000000","orderID":"411111111111",flag:False} BBBB
20230811 8003 103 {"Action":"NEW","ID":"60811-195552-952","EventType":"MMMM","Date":"20230811T000000.000000","orderID":"511111111111",flag:False} BBBB
20230811 8003 104 {"Action":"NEW","ID":"40811-195552-952","EventType":"MMMM","Date":"20230811T000000.000000","orderID":"611111111111",flag:False} BBBB
20230811 8003 105 {"Action":"NEW","ID":"80811-195552-952","EventType":"MMMM","Date":"20230811T000000.000000","orderID":"811111111111",flag:False} BBBB
20230811 8003 106 {"Action":"NEW","ID":"70811-195552-952","EventType":"MMMM","Date":"20230811T000000.000000","orderID":"911111111111",flag:False} AAAA
</code></pre>
<p>I would like to extract all the values from the <code>rawrecord</code> column and comma-separate them.</p>
<p>result:</p>
<pre><code>filedate code errorID rawrecord errortype
20230811 8003 100 NEW,30811-195552-952,MMMM,20230811T000000.000000,111111111111,False BBBB
20230811 8003 101 NEW,20811-195555-952,MMMM,20230811T000000.000000,111111111112,False BBBB
20230811 8003 102 NEW,50811-195552-952,MMMM,20230811T000000.000000,411111111111,False BBBB
20230811 8003 103 NEW,60811-195552-952,MMMM,20230811T000000.000000,511111111111,False BBBB
20230811 8003 104 NEW,40811-195552-952,MMMM,20230811T000000.000000,611111111111,False BBBB
20230811 8003 105 NEW,80811-195552-952,MMMM,20230811T000000.000000,811111111111,False BBBB
20230811 8003 106 NEW,70811-195552-952,MMMM,20230811T000000.000000,911111111111,False AAAA
</code></pre>
<p>How can I get only the values, separated by commas, for the <code>rawrecord</code> column?</p>
<p><a href="https://i.sstatic.net/gZAib.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/gZAib.png" alt="enter image description here" /></a></p>
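<p>For reference, one possible approach (an untested sketch; note that the sample rows contain an unquoted <code>flag:False</code>, which would have to be valid JSON for parsing to work):</p>

```python
import json

def values_csv(raw):
    # Parse the JSON record and join its values, in key order, with commas.
    return ",".join(str(v) for v in json.loads(raw).values())

row = '{"Action": "NEW", "ID": "30811-195552-952", "EventType": "MMMM", "orderID": "111111111111"}'
print(values_csv(row))  # NEW,30811-195552-952,MMMM,111111111111
```

<p>With pandas this could then be applied as <code>df["rawrecord"].apply(values_csv)</code>; the <code>values_csv</code> name is mine, not an existing API.</p>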
| <python><pandas> | 2023-08-15 19:50:04 | 1 | 578 | unicorn |
76,908,714 | 13,944,524 | Why is the spawn method much slower than the fork method in Python multiprocessing? | <p>I was experimenting with different start methods in the <em>multiprocessing</em> module and found something weird. Changing the variable <code>method</code> from <code>"spawn"</code> to <code>"fork"</code> drops the execution time from <code>9.5</code> to just <code>0.5</code> seconds.</p>
<pre class="lang-py prettyprint-override"><code>import multiprocessing as mp
from multiprocessing import Process, Value
from time import time


def increment_value(shared_integer):
    with shared_integer.get_lock():
        shared_integer.value += 1


if __name__ == "__main__":
    method = "spawn"
    mp.set_start_method(method)
    start = time()
    for _ in range(200):
        integer = Value("i", 0)
        procs = [
            Process(target=increment_value, args=(integer,)),
            Process(target=increment_value, args=(integer,)),
        ]
        for p in procs:
            p.start()
        for p in procs:
            p.join()
        assert integer.value == 2
    print(f"{method} - Finished in {time() - start:.4f} seconds")
</code></pre>
<p>outputs for different runs:</p>
<pre class="lang-none prettyprint-override"><code>spawn - Finished in 9.4275 seconds
fork - Finished in 0.5316 seconds
</code></pre>
<p>I'm aware of how these two methods start a new child process (well explained <a href="https://stackoverflow.com/questions/68806714/determining-exactly-what-is-pickled-during-python-multiprocessing">here</a>), but this difference puts a big question mark in my head. I would like to know exactly which part of the code impacts the performance the most. Is it the pickling part in <code>"spawn"</code>? Does it have anything to do with the lock?</p>
<p>I'm running this code on Linux Pop!_OS and my interpreter version is 3.11.</p>
| <python><python-3.x><multiprocessing><fork><python-multiprocessing> | 2023-08-15 19:40:51 | 1 | 17,004 | S.B |
76,908,698 | 1,401,472 | Taking a Screenshot for a given URL on Google Cloud Platform | <p>I have been trying to take a screenshot of a given URL using pyppeteer & FAST in Python. After many attempts and fixing various errors, I managed to get it deployed as an App Engine app.</p>
<p>I am using python 3.10 and entry point for my app engine is</p>
<pre><code>gunicorn main:app --timeout 540
</code></pre>
<p>Looking at the logs, the code seems to break every time at browser launch. I did try providing the executable path and no-sandbox arguments, but the application keeps failing with a "BrowserError: Browser closed unexpectedly" error.</p>
<p>For taking the screenshot, a few other options were also evaluated, like html2image, but that seems to require a Chrome executable and not a chromedriver.</p>
<p>Any help here would be appreciated.</p>
| <python><google-app-engine> | 2023-08-15 19:37:49 | 1 | 2,321 | user1401472 |
76,908,678 | 10,262,805 | what is the difference between llm and llm chain in langchain? | <p>this is llm:</p>
<pre><code>import streamlit as st
from langchain.llms import OpenAI

prompt = st.text_input("your question")
llm = OpenAI(temperature=0.9)
if prompt:
    response = llm(prompt)
    st.write(response)
</code></pre>
<p>then if we need to execute a prompt we have to crate llm chain:</p>
<pre><code>import streamlit as st
from langchain.llms import OpenAI
from langchain.prompts import PromptTemplate
from langchain.chains import LLMChain

question = st.text_input("your question")
llm = OpenAI(temperature=0.9)
template = "Write me something about {topic}"
topic_template = PromptTemplate(input_variables=['topic'], template=template)
topic_chain = LLMChain(llm=llm, prompt=topic_template)
if question:
    response = topic_chain.run(question)
    st.write(response)
</code></pre>
<p>I am confused because we used <code>llm(prompt)</code> in the first example, but we created <code>LLMChain(llm=llm,prompt=topic_template)</code> in the second example. Could you please explain the difference between these two approaches and when it's appropriate to use one over the other?</p>
| <python><streamlit><openai-api><langchain><large-language-model> | 2023-08-15 19:32:32 | 4 | 50,924 | Yilmaz |
76,908,648 | 10,237,558 | Pass multiple rows to function using spark UDF | <p>I have this code that makes use of a udf to make an HTTP request:</p>
<pre><code>def insert(name, insertURL):
    url = insertURL
    try:
        payload = json.dumps({
            "records": [
                {
                    "fields": {
                        "name": name
                    }
                }
            ]
        })
        headers = {
            'Content-Type': 'application/json',
            'Accept': 'application/json'
        }
        response = requests.request(
            "POST", url, headers=headers, data=payload)
        dataRes = response.json()
        return dataRes
    except Exception as e:
        print("Error Occurred:", e)


def insertUDF(insertURL):
    return udf(lambda z: insert(z, insertURL), StringType())


def main():
    match args.subparser:
        case 'Insert':
            fake = Faker()
            columns = ["serial", "name"]
            data = []
            for i in range(10):
                data.append((i, fake.name()))
            df = spark.createDataFrame(data=data, schema=columns)
            df.show(truncate=False)
            df.withColumn("Name", insertUDF(args.insertURL)(col("name"))).show()


if __name__ == "__main__":
    main()
</code></pre>
<p>I would like to change the <code>payload</code> such that it takes multiple fields for performance such that the payload becomes:</p>
<pre><code>payload = json.dumps({
    "records": [
        {
            "fields": {
                "name": name
            }
        },
        {
            "fields": {
                "name": name
            }
        },
        {
            "fields": {
                "name": name
            }
        },
        {
            "fields": {
                "name": name
            }
        }
    ]
})
</code></pre>
<p>where the <code>name</code> in the <code>fields</code> takes the first name present in the dataframe, then second, then third and so on in the same payload, for example if the dataframe is:</p>
<pre><code>serial|name |
+-----+-------------------+
|0 |Christina Bishop |
|1 |Vincent Smith |
|2 |Deanna Brown |
|3 |James Medina MD |
|4 |Ashley Harper |
|5 |Tina Schultz |
|6 |Patrick Archer |
|7 |Mark Campbell |
|8 |Anthony Roach |
|9 |Justin Jackson |
</code></pre>
<p>then in the first payload names till <code>James</code> have been included.</p>
<p>How do I do this?</p>
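<p>A JSON object cannot repeat a key, so each batched name needs its own record object. One way to build such a payload (a sketch; the function name is mine and the exact schema the endpoint expects may differ):</p>

```python
import json

def build_payload(names):
    # One {"fields": {...}} object per name: a JSON object cannot
    # repeat the "fields" key, so each name gets its own record.
    return json.dumps({"records": [{"fields": {"name": n}} for n in names]})

print(build_payload(["Christina Bishop", "Vincent Smith"]))
```

<p>To send batches from Spark, rows could be grouped (for example with <code>foreachPartition</code> or a grouped pandas UDF) and each group of names passed to a function like this, so one HTTP request covers many rows.</p>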
| <python><python-3.x><apache-spark><pyspark><user-defined-functions> | 2023-08-15 19:28:29 | 1 | 632 | Navidk |
76,908,623 | 2,386,113 | Plotting Sine curve using Python | <p>I am facing a problem in writing a <strong>generic program</strong> that can automatically adjust the number of points required <strong>to plot a sine curve</strong> if the frequency and the total duration for the plot are given.</p>
<p>Based on the suggestions provided by <a href="https://stackoverflow.com/a/76894669/2386113">Maria</a> and <a href="https://stackoverflow.com/a/76895268/2386113">gboffi</a> on another <a href="https://stackoverflow.com/questions/76894594/how-to-plot-a-sine-curve-for-longer-time-duration/76894669?noredirect=1#comment135569994_76894669">SO question</a>, I have written the following generic code.</p>
<pre><code>import numpy as np
import matplotlib.pyplot as plt
import math

# Formula delta-t <= 1/(5*omega) taken from SO answer "https://stackoverflow.com/a/76895268/2386113"
frequency = 0.02  # 0.02 Hz
delta_t = math.floor(1 / (5 * 2 * np.pi * frequency))
total_duration = 300
num_points = math.ceil(total_duration / delta_t)

t = np.linspace(0, total_duration, num_points)
sine_t = np.sin(t)

# Only for test: to plot a line of constant amplitude
straight_line = t / t

# red for numpy.sin()
plt.plot(t, sine_t, color='red')
plt.plot(t, straight_line, color='blue')

plt.title("numpy.sin()")
plt.xlabel("X")
plt.ylabel("Y")
plt.show()
</code></pre>
<p><strong>Problem:</strong> The above code works fine for <code>frequency = 2 Hz</code> but if I set the <code>frequency = 0.02 Hz</code> then I again get a wrong plot (which can be corrected by increasing the number of points hard-coded). How can I generically compute the number of points based on the frequency and the duration?</p>
<p><strong>At frequency = 2 Hz (Correct)</strong></p>
<p><a href="https://i.sstatic.net/QjIh7.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/QjIh7.png" alt="enter image description here" /></a></p>
<p><strong>At frequency = 0.02 Hz (Wrong)</strong></p>
<p><a href="https://i.sstatic.net/LrmO5.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/LrmO5.png" alt="enter image description here" /></a></p>
<p><strong>UPDATE:</strong></p>
<p>Based on <a href="https://stackoverflow.com/users/20037042/chrslg">chrslg</a> suggestion in the comment (which makes sense), I corrected the code and used <code>np.sin(t*2*np.pi*frequency)</code>. The updated part of the code is (note that frequency is set to 2 now):</p>
<pre><code>frequency = 2 #Two HZ
delta_t = max(math.floor( 1/(5*2*np.pi*frequency)), 1)
total_duration = 300
num_points = math.ceil(total_duration/delta_t)
t = np.linspace(0, total_duration, num_points)
sine_t = np.sin(2*np.pi*frequency*t) #sin(omega * t)
</code></pre>
<p>As a result of the above code, I am getting the sine-curve below which is again <strong>wrong</strong> because <code>frequency = 2</code> means that there should be 2 sine waves per second and hence 600 sine waves for a duration of 300 seconds, which is not the case in the screenshot below:</p>
<p><a href="https://i.sstatic.net/hDKv7.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/hDKv7.png" alt="enter image description here" /></a></p>
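<p>A generic point count has to scale with the number of cycles actually drawn (<code>duration * frequency</code>), not with <code>delta_t</code> alone. A sketch; <code>samples_per_cycle = 20</code> is my assumption for a visually smooth curve, not a value from the question:</p>

```python
import math

def num_samples(frequency, duration, samples_per_cycle=20):
    # Total cycles drawn = duration * frequency; sample each cycle densely.
    cycles = duration * abs(frequency)
    return max(math.ceil(cycles * samples_per_cycle), 2)

print(num_samples(2, 300))     # 12000 points for 600 cycles
print(num_samples(0.02, 300))  # 120 points for 6 cycles
```

<p>Then <code>t = np.linspace(0, total_duration, num_samples(frequency, total_duration))</code> and <code>np.sin(2 * np.pi * frequency * t)</code> should stay smooth at both 2 Hz and 0.02 Hz.</p>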
| <python><numpy> | 2023-08-15 19:23:40 | 2 | 5,777 | skm |
76,908,549 | 11,819,955 | How to create new column in Python pandas dataframe with column labels based on the content of those columns | <p>I have a bunch of data files from a scientific instrument that come with checksums. I've read the files into a combined pandas dataframe, but the checksum for each row is for comparison to the column label of the checksum column and is unique to each file. This results in a dataframe with O10 columns with very large strings in the column labels and column values (for the rows from the file with that column label). Now I need to combine these O10 columns into 2 columns, one for the values, and one for the headers the checksums are compared against.</p>
<p>Simplified Example Starting Table, CH=Checksum Header, CV=Checksum Value</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>CH_A</th>
<th>CH_B</th>
<th>CH_C</th>
</tr>
</thead>
<tbody>
<tr>
<td>CV_A1</td>
<td>NaN</td>
<td>NaN</td>
</tr>
<tr>
<td>CV_A2</td>
<td>NaN</td>
<td>NaN</td>
</tr>
<tr>
<td>NaN</td>
<td>CV_B1</td>
<td>NaN</td>
</tr>
<tr>
<td>NaN</td>
<td>NaN</td>
<td>CV_C1</td>
</tr>
</tbody>
</table>
</div>
<p>Should combine to be 2 columns</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>ChecksumHeaders</th>
<th>ChecksumValues</th>
</tr>
</thead>
<tbody>
<tr>
<td>CH_A</td>
<td>CV_A1</td>
</tr>
<tr>
<td>CH_A</td>
<td>CV_A2</td>
</tr>
<tr>
<td>CH_B</td>
<td>CV_B1</td>
</tr>
<tr>
<td>CH_C</td>
<td>CV_C1</td>
</tr>
</tbody>
</table>
</div>
<p>The 'ChecksumValues' table was easy enough to get with df.ffill, but I'm struggling to determine a good method for creating the 'ChecksumHeaders' column. Seeing as how <code>'string'*1 == 'string'</code> and <code>'string'*0 == ''</code>, I tried using a number of variations of <code>np.sum([df[CH].notnull().astype(np.int)*CH for CH in ListOfHeaders])</code>, but found that the string*int trick only works as a single expression; trying to do it on an array results in errors like:
<code>UFuncTypeError: ufunc 'multiply' did not contain a loop with signature matching types (dtype('<U9'), dtype('int32')) -> None</code></p>
<p>So to ensure it was only 1 at a time I used nested list comprehension:
<code>df['ChecksumHeaders'] = [''.join([ListOfHeaders[idx]*df[CH].notnull().iloc[rowidx].astype(np.int) for idx,CH in enumerate(ListOfHeaders)]) for rowidx in np.arange(len(df))]</code> which works fine if you only have a handful of rows and columns to go through, but takes a long time when it needs to run through O10 columns and O100k rows. Surely there's a better way to do this, any ideas?</p>
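<p>A vectorized alternative that avoids per-row loops entirely is <code>pd.melt</code> plus <code>dropna</code> (a sketch, with column names taken from the simplified example):</p>

```python
import pandas as pd

df = pd.DataFrame({
    "CH_A": ["CV_A1", "CV_A2", None, None],
    "CH_B": [None, None, "CV_B1", None],
    "CH_C": [None, None, None, "CV_C1"],
})

# Melt to long form, then drop the NaN cells: one row per checksum value,
# with the originating column label kept alongside it.
long_df = (df.melt(var_name="ChecksumHeaders", value_name="ChecksumValues")
             .dropna(subset=["ChecksumValues"])
             .reset_index(drop=True))
print(long_df)
```

<p><code>melt</code> groups the output by column; if the original row order must be preserved, passing <code>ignore_index=False</code> and calling <code>sort_index()</code> afterwards restores it.</p>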
| <python><pandas><dataframe><numpy> | 2023-08-15 19:10:41 | 2 | 437 | Messypuddle |
76,908,546 | 5,662,005 | Start execution of script at specific points based on input | <p>This is a very close analogue to <a href="https://stackoverflow.com/questions/23388668/how-to-start-at-a-specific-step-in-a-script">How to start at a specific step in a script?</a>, arguably a duplicate.</p>
<p>Maybe this is more of a request for code review/critique. Having hardcoded integers in the code certainly works, but I'm trying to figure out a way to expose an intuitive argument to users that lets them start the process at a given point. Technically they do all depend on each other, so a simple if/else won't do.</p>
<pre><code>restart_at_point = <get argument from job with widgets>

restart_map = {
    "initial": 0,
    "loans": 1,
    "transactions": 2,
    "collections": 3,
    "customer_contacts": 4
}

restart_index = restart_map[restart_at_point]

if restart_index <= restart_map["loans"]:
    loans()
if restart_index <= restart_map["transactions"]:
    transactions()
if restart_index <= restart_map["collections"]:
    collections()
if restart_index <= restart_map["customer_contacts"]:
    customer_contacts()
</code></pre>
<p>This works but I worry that it's a bit confusing. Would it be more clear to have it be like:</p>
<pre><code>def loans():
    ...  # do loan stuff
    transactions()

def transactions():
    ...  # do transaction stuff
    collections()

def collections():
    ...  # do collections stuff
    customer_contacts()

def customer_contacts():
    ...  # do customer contact stuff

restart_at_point = <get argument from job with widgets>

if restart_at_point == "loans":
    loans()
if restart_at_point == "transactions":
    transactions()
...
...
if restart_at_point == "customer_contacts":
    customer_contacts()
</code></pre>
<p>This seems bad because the behavior of each function is not self-contained. It does its own thing, but then also has to tip the next domino.</p>
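<p>A third option that keeps each step self-contained is an ordered list of (name, function) pairs, running the suffix that starts at the chosen step. A sketch (the step bodies here are stand-ins recording their calls, purely for illustration):</p>

```python
calls = []  # stand-ins for the real steps, for illustration only
def loans():             calls.append("loans")
def transactions():      calls.append("transactions")
def collections():       calls.append("collections")
def customer_contacts(): calls.append("customer_contacts")

STEPS = [("loans", loans), ("transactions", transactions),
         ("collections", collections), ("customer_contacts", customer_contacts)]

def run_from(start_point):
    names = [name for name, _ in STEPS]
    # Run the chosen step and everything after it, in declared order.
    for _, step in STEPS[names.index(start_point):]:
        step()

run_from("transactions")
print(calls)  # ['transactions', 'collections', 'customer_contacts']
```

<p>As a side benefit, <code>names.index</code> raises <code>ValueError</code> for an unknown restart point, which doubles as input validation.</p>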
| <python><python-3.x> | 2023-08-15 19:10:15 | 1 | 3,899 | Error_2646 |
76,908,515 | 5,790,653 | python how to dynamically print index number of items and print the items | <p>This is my code so far:</p>
<pre><code>things = ['humans', 'fish', 'plants', 'planets', 'stars']

print('These are types of things we learnt so far:')
for thing in things:
    print(thing, end=', ')
print()
print(len(things))

the_input = int(input('Now please enter a number which you want to print: '))
print(f'You have chosen {the_input} which is {things[the_input]}.')

# the unfinished rest:
# the_input = int(input('Now please enter a number which you want to print: '))
# if the_input == the index number of indexes of things list,
# then print:
# You have chosen {the_input} which is {things[the_input]}
</code></pre>
<p>This is my expected output:</p>
<pre><code>These are types of things we learnt so far:
1) humans, 2) fish, 3) plants, 4) planets, 5) stars
Now please enter a number which you want to print:
2 # this is the input of terminal, not the output of print(the_input)
You have chosen 2 which is fish.
</code></pre>
<p>For this code I don't know how to do this (there may be other issues, but I'm not yet aware of them):</p>
<ol>
<li>The first print statement to show <code>index_number) each_list_item</code></li>
<li>to omit the last <code>,</code> which is shown in my code if you run it</li>
</ol>
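<p>Both points can be handled with <code>enumerate(..., start=1)</code> and <code>str.join</code>, which numbers the items and avoids the trailing comma in one pass (a sketch, using 1-based numbering to match the expected output):</p>

```python
things = ['humans', 'fish', 'plants', 'planets', 'stars']

# join() places ", " only *between* items, so no trailing comma.
line = ', '.join(f'{i}) {thing}' for i, thing in enumerate(things, start=1))
print(line)  # 1) humans, 2) fish, 3) plants, 4) planets, 5) stars

choice = 2  # stand-in for int(input(...))
# Subtract 1 because the displayed numbers are 1-based but list indices are 0-based.
print(f'You have chosen {choice} which is {things[choice - 1]}.')
```

<p>Note the <code>choice - 1</code>: the original <code>things[the_input]</code> would return <code>plants</code> for input 2, because list indexing is 0-based.</p>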
| <python> | 2023-08-15 19:02:57 | 3 | 4,175 | Saeed |
76,908,487 | 9,795,817 | Create an interaction between two categorical columns in PySpark | <p>I have two multi-level categorical columns stored in <code>df</code>:</p>
<ol>
<li><code>dow</code> represents the day of the week (seven categories mapped to integers: 1, 2, ..., 7).</li>
<li><code>type</code> represents four types of observation (four categories mapped to integers: 1, 2, 3, 4).</li>
</ol>
<p>How can I create an interaction (i.e., the multiplication) of these two columns in PySpark?</p>
<p>I know how to encode them using <code>OneHotEncoder</code>. However, I'm not sure how to go about the feature engineering process to account for all 28 combinations (7 x 4 possible cases), especially because <code>OneHotEncoder</code> returns sparse vectors.</p>
<p>For the purpose of this question, assume my pyspark dataframe <code>df</code> looks as follows:</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>dow</th>
<th>type</th>
<th>target</th>
</tr>
</thead>
<tbody>
<tr>
<td>1</td>
<td>1</td>
<td>200</td>
</tr>
<tr>
<td>1</td>
<td>2</td>
<td>222</td>
</tr>
<tr>
<td>1</td>
<td>7</td>
<td>229</td>
</tr>
</tbody>
</table>
</div>
<p>Where <code>dow</code> can take on seven different values and <code>type</code> can take on four. Is there a built-in way to create interactions between these two columns in order to account for all possible combinations?</p>
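<p>One way to sidestep multiplying sparse vectors is to derive a single combined category first and one-hot encode that instead; the combination is just mixed-radix indexing. A pure-Python sketch of the mapping (names and the 1-based encoding are assumptions based on the question):</p>

```python
def interaction_index(dow, obs_type, n_types=4):
    # Map (dow in 1..7, type in 1..4) to a single category in 0..27.
    return (dow - 1) * n_types + (obs_type - 1)

print(interaction_index(1, 1))  # 0
print(interaction_index(7, 4))  # 27
```

<p>In PySpark the same expression could be computed with <code>df.withColumn("dow_type", (F.col("dow") - 1) * 4 + (F.col("type") - 1))</code> and then passed to <code>OneHotEncoder</code>; recent Spark versions also ship <code>pyspark.ml.feature.Interaction</code> for crossing already-encoded vector columns.</p>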
| <python><apache-spark><pyspark> | 2023-08-15 18:57:31 | 1 | 6,421 | Arturo Sbr |
76,908,457 | 1,970,609 | PyQt/PySide: Handle Barcode Reader Input | <p>In my GUI (written in PySide 6), I want to allow user input through the keyboard (e.g., in a text field), and I want to handle input from a barcode reader (e.g., scanning of a barcode runs a bunch of actions in sequence).</p>
<p>The barcode reader behaves like a keyboard and triggers <code>QKeyEvent</code>s for every character in the barcode. All those events are triggered in quick succession. At the end of the barcode, <code>Key_Enter</code> is triggered.</p>
<p>My implementation currently uses an <code>eventFilter</code> to catch all the keyboard inputs independent of the widget in focus. It looks like this (reduced to the essentials):</p>
<pre><code>import sys
import time

from PySide6 import QtCore, QtGui
from PySide6.QtCore import Qt, QObject
from PySide6.QtWidgets import QMainWindow, QApplication


class MainWindow(QMainWindow):
    def __init__(self):
        QMainWindow.__init__(self)
        # SHOW APP
        self.show()


class BarcodeEventFilter(QObject):
    def __init__(self, parent: MainWindow):
        QObject.__init__(self)
        self.main_window = parent
        self.input_buffer = ''
        self.last_keypress = time.time()

    def handle_barcode_input(self, input_string):
        print(f'Barcode received: {input_string}')

    def eventFilter(self,
                    watched: QtCore.QObject,
                    event: QtCore.QEvent) -> bool:
        # verbose output for debugging:
        if event.type() == QtCore.QEvent.KeyPress:
            print(f'{event.text()} - {type(watched)}')

        if event.type() == QtCore.QEvent.KeyPress \
                and type(watched) == QtGui.QWindow:
            if time.time() - self.last_keypress < 1e-1:
                if event.key() in (Qt.Key_Enter, Qt.Key_Return):
                    if len(self.input_buffer) > 1:
                        self.handle_barcode_input(self.input_buffer)
                    self.input_buffer = ''
                else:
                    self.input_buffer += event.text()
            else:
                self.input_buffer = event.text()
            self.last_keypress = time.time()
        # Returning True here would block all keyboard input,
        # so the events are passed through instead.
        return False


if __name__ == "__main__":
    app = QApplication(sys.argv)
    window = MainWindow()
    window.show()
    barcode_event_filter = BarcodeEventFilter(parent=window)
    app.installEventFilter(barcode_event_filter)
    sys.exit(app.exec())
</code></pre>
<p>The trouble is that now all the <code>KeyPress</code> events are handled by the underlying widgets, no matter if they originate from the user keyboard or the barcode scanner. This can cause unwanted GUI interactions depending on the text in the barcode.</p>
<p>If the <code>eventFilter</code> returns True, all keyboard input is blocked. So this is also not a solution.</p>
<p>I wonder if there is a better way of handling input from these different devices. For example: Is there a way to buffer all <code>KeyPress</code> events for a few milliseconds to see if it is a barcode (closing on a <code>Key_Enter</code>)? If it is a barcode, process it, if not, release all the <code>KeyPress</code> events that have been on hold.</p>
<p>Any input is welcome - pun intended ;-)</p>
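<p>The buffering idea can at least be prototyped independently of Qt, with timestamps injected so the timing logic is testable. A sketch (class name and the 50 ms threshold are my assumptions):</p>

```python
class BarcodeAssembler:
    """Collects keystrokes arriving faster than max_gap seconds apart;
    returns the buffered text when Enter ("\\n") arrives, else None."""

    def __init__(self, max_gap=0.05):
        self.max_gap = max_gap
        self.buffer = ""
        self.last = None

    def feed(self, char, now):
        if self.last is not None and now - self.last > self.max_gap:
            self.buffer = ""  # human-speed gap: discard, it is not a scan
        self.last = now
        if char == "\n":
            code, self.buffer = self.buffer, ""
            return code if len(code) > 1 else None
        self.buffer += char
        return None
```

<p>In the <code>eventFilter</code> this could be driven with <code>feed(event.text(), time.time())</code>. Truly holding back and later replaying the buffered key events to the focused widget would need something like <code>QCoreApplication.postEvent</code> with reconstructed <code>QKeyEvent</code>s, which is considerably more involved.</p>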
| <python><pyqt><pyside><barcode-scanner><keyboard-events> | 2023-08-15 18:52:34 | 0 | 659 | erik |
76,907,952 | 4,215,840 | Prevent closing of a Python app via the Windows CMD X button (by mistake) | <p>I have a Django app running locally
via a "start.bat" file:</p>
<pre><code>CALL venv\Scripts\activate.bat
manage.py runserver 0.0.0.0:8000 --insecure --noreload
</code></pre>
<p>Someone on the PC sometimes closes the window <a href="https://i.sstatic.net/UVwql.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/UVwql.png" alt="enter image description here" /></a>
How can I prevent this?
For example: a warning popup with a prompt, hiding the X button, or running in a different app than CMD?</p>
<p>I did run it in a Tk GUI, but it was messing with my process manager inside Django.</p>
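<p>One option (my sketch, Windows only) is to remove the Close item from the console window's system menu via ctypes, which disables the X button for the console hosting the server:</p>

```python
import ctypes
import sys

SC_CLOSE = 0xF060      # system-menu command id for "Close"
MF_BYCOMMAND = 0x0000

def disable_console_close_button():
    """Remove the X of the hosting console window (no-op off Windows)."""
    if sys.platform != "win32":
        return False
    hwnd = ctypes.windll.kernel32.GetConsoleWindow()
    if not hwnd:
        return False  # not attached to a console window
    menu = ctypes.windll.user32.GetSystemMenu(hwnd, False)
    ctypes.windll.user32.DeleteMenu(menu, SC_CLOSE, MF_BYCOMMAND)
    return True
```

<p>This could be called at the top of <code>manage.py</code> or from a small wrapper script launched by start.bat. Running the server as a Windows service would be a sturdier alternative.</p>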
| <python><django><windows> | 2023-08-15 17:25:55 | 0 | 451 | Sion C |
76,907,951 | 2,200,362 | Parse DataFrame of postal addresses - remove country and unit number | <p>I have a dataframe with a column of postal addresses (generated with <code>geopy.geocoders GoogleV3</code> - I used it to parse my dataframe). The output of <code>geolocator.geocode</code>, however, has the country name - which I don't want. It also contains Unit number - which I don't want.</p>
<p>How can I do it?</p>
<p>I have tried:</p>
<pre><code>test_add['clean address'] = test_add.apply(lambda x: x['clean address'][:-5], axis = 1)
</code></pre>
<p>and</p>
<pre><code>def remove_units(X):
    X = X.split()
    X_new = [x for x in X if not x.startswith("#")]
    return ' '.join(X_new)

test_add['parsed addresses'] = test_add['clean address'].apply(remove_units)
</code></pre>
<p>It works for:</p>
<pre><code>data = ["941 Thorpe St, Rock Springs, WY 82901, USA",
"2809 Harris Dr, Antioch, CA 94509, USA",
"7 Eucalyptus, Newport Coast, CA 92657, USA",
"725 Mountain View St, Altadena, CA 91001, USA",
"1966 Clinton Ave #234, Calexico, CA 92231, USA",
"431 6th St, West Sacramento, CA 95605, USA",
"5574 Old Goodrich Rd, Clarence, NY 14031, USA",
"Valencia Way #1234, Valley Center, CA 92082, USA"]
test_df = pd.DataFrame(data, columns=['parsed addresses'])
</code></pre>
<p>but get an error: "AttributeError: 'float' object has no attribute 'split'" when I use a larger dataframe with 150k such addresses.</p>
<p>Ultimately, I require only street number, street name, city, state and zipcode.</p>
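<p>The <code>AttributeError</code> means some cells in the larger dataframe are NaN (floats), not strings; guarding on <code>isinstance</code> fixes that, and the same pass can drop the country suffix and unit tokens. A sketch (<code>removesuffix</code> needs Python 3.9+, and the function name is mine):</p>

```python
def clean_address(value):
    if not isinstance(value, str):   # NaN cells arrive as float: pass through
        return value
    value = value.removesuffix(", USA")                   # drop the country
    tokens = [t for t in value.split() if not t.startswith("#")]
    return " ".join(tokens)                               # drop unit numbers

print(clean_address("1966 Clinton Ave #234, Calexico, CA 92231, USA"))
# 1966 Clinton Ave Calexico, CA 92231
```

<p>Applied with <code>df['parsed addresses'] = df['clean address'].apply(clean_address)</code>. Note that dropping a token like <code>#234,</code> also removes the comma glued to it, so the result may need a second tidy-up pass.</p>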
| <python><pandas><string><parsing><geopy> | 2023-08-15 17:25:40 | 2 | 623 | Saania |
76,907,940 | 6,846,071 | Spark data frame: assign a unique number to each company | <p>I have a dataset with fields company and contract number. I'd like to add another column called Company number, used as a unique number for each company.</p>
<pre><code>Company | contract number | Company number
amazon | IO123456 | 1
google | IO113456 | 2
google | IO111456 | 2
yahoo | IO111156 | 3
amazon | IO111116 | 1
</code></pre>
<p>I tried using</p>
<pre><code>window_spec = Window.partitionBy('Company')
Programmatic_df = Programmatic_df.withColumn('Company number', dense_rank().over(window_spec))
</code></pre>
<p>but it doesn't yield the result I wanted. It assigns an index to each row within a company instead of assigning a unique number to each company:</p>
<pre><code>Company | contract number | Company number
amazon | IO123456 | 1
google | IO113456 | 1
google | IO111456 | 2
yahoo | IO111156 | 1
amazon | IO111116 | 2
</code></pre>
<p>What is the correct approach here?</p>
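<p>The reason is that <code>partitionBy('Company')</code> ranks rows <em>within</em> each company, so the counter restarts per company. Ordering by company instead, with no partition, gives one number per distinct company. A pure-Python model of the intended result (a sketch; this variant uses first-seen order):</p>

```python
def company_numbers(companies):
    # First-seen order: each distinct company gets the next integer id.
    seen = {}
    return [seen.setdefault(c, len(seen) + 1) for c in companies]

print(company_numbers(["amazon", "google", "google", "yahoo", "amazon"]))
# [1, 2, 2, 3, 1]
```

<p>In PySpark, <code>dense_rank().over(Window.orderBy('Company'))</code> produces the same kind of id (in alphabetical rather than first-seen order), at the cost of pulling all rows into a single partition; joining against a small distinct-companies lookup built with <code>row_number()</code> scales better for large data.</p>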
| <python><apache-spark><pyspark><apache-spark-sql> | 2023-08-15 17:23:19 | 1 | 395 | PiCubed |
76,907,845 | 3,240,790 | Python split function to extract the first element and last element | <pre><code> python code --> ",".join("one_two_three".split("_")[0:-1])
</code></pre>
<p>I want to take the 0th element and the last element of a delimited string and create a new string out of them. The code should work for a delimited string of any length.</p>
<pre><code>Expected output : "one,three"
</code></pre>
<p>The above code gives me</p>
<pre><code> 'one,two'
</code></pre>
<p>Can someone please help me?</p>
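<p>For reference, <code>[0:-1]</code> slices everything <em>except</em> the last element, which is why the result is <code>'one,two'</code>. Indexing the two ends directly does what is wanted (a sketch; the function name is mine):</p>

```python
def first_and_last(text, sep="_"):
    parts = text.split(sep)
    # parts[0] is the first piece, parts[-1] the last; note that a
    # string with no separator yields "x,x".
    return f"{parts[0]},{parts[-1]}"

print(first_and_last("one_two_three"))  # one,three
print(first_and_last("a_b_c_d_e"))      # a,e
```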
| <python> | 2023-08-15 17:06:23 | 4 | 3,629 | Surender Raja |
76,907,775 | 18,291,356 | I can't upload a file to Vercel deployment | <p>I need to save a few files to a directory ("ROOTDIR/sabiduria_tool_api/data/"). Then I need to process them. I am doing this process with FastAPI. On my local everything works well but when I deployed to Vercel. I am having the below error</p>
<pre><code>[ERROR] OSError: [Errno 30] Read-only file system: '/var/task/sabiduria_tool_api/data/'
Traceback (most recent call last):
File "/var/task/vc__handler__python.py", line 305, in vc_handler
response = asgi_cycle(__vc_module.app, body)
File "/var/task/vc__handler__python.py", line 208, in __call__
loop.run_until_complete(asgi_task)
File "/var/lang/lib/python3.9/asyncio/base_events.py", line 647, in run_until_complete
return future.result()
File "/var/task/fastapi/applications.py", line 289, in __call__
await super().__call__(scope, receive, send)
File "/var/task/starlette/applications.py", line 122, in __call__
await self.middleware_stack(scope, receive, send)
File "/var/task/starlette/middleware/errors.py", line 184, in __call__
raise exc
File "/var/task/starlette/middleware/errors.py", line 162, in __call__
await self.app(scope, receive, _send)
File "/var/task/starlette/middleware/exceptions.py", line 79, in __call__
raise exc
File "/var/task/starlette/middleware/exceptions.py", line 68, in __call__
await self.app(scope, receive, sender)
File "/var/task/fastapi/middleware/asyncexitstack.py", line 20, in __call__
raise e
File "/var/task/fastapi/middleware/asyncexitstack.py", line 17, in __call__
await self.app(scope, receive, send)
File "/var/task/starlette/routing.py", line 718, in __call__
await route.handle(scope, receive, send)
File "/var/task/starlette/routing.py", line 276, in handle
await self.app(scope, receive, send)
File "/var/task/starlette/routing.py", line 66, in app
response = await func(request)
File "/var/task/fastapi/routing.py", line 273, in app
raw_response = await run_endpoint_function(
File "/var/task/fastapi/routing.py", line 190, in run_endpoint_function
return await dependant.call(**values)
File "/var/task/sabiduria_tool_api/api/routes/router_sabiduria_tool.py", line 18, in upload_files
await processer.calculate()
File "/var/task/sabiduria_tool_api/services/service_sabiduria_tool.py", line 97, in calculate
await self.save_files()
File "/var/task/sabiduria_tool_api/services/service_sabiduria_tool.py", line 35, in save_files
mkdir(self.data_store_path)
</code></pre>
<p>I have two endpoints; the first one (health check) works well, but the second one (POST) is failing.
Here is where the code throws the error:</p>
<pre class="lang-py prettyprint-override"><code>async def save_files(self) -> None:
    if isdir(self.data_store_path):
        old_files = self.get_files()
        for o_file in old_files:
            remove(f"{self.data_store_path}{o_file}")
    else:
        mkdir(self.data_store_path)

    for file in self.files:
        with open(f"{self.data_store_path}{file.filename}", "wb") as destination:
            copyfileobj(file.file, destination)
</code></pre>
<p>How can I handle that problem?</p>
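<p>On most serverless platforms, Vercel included, the deployment bundle is mounted read-only and only the temp directory is writable, which is what the <code>Errno 30</code> indicates. A sketch pointing the data path there (the directory name is hypothetical; note that temp storage is ephemeral and per-instance, so files do not survive between invocations):</p>

```python
import os
import tempfile

# Hypothetical replacement for the hard-coded data_store_path.
DATA_DIR = os.path.join(tempfile.gettempdir(), "sabiduria_data")
os.makedirs(DATA_DIR, exist_ok=True)  # safe to call on every request

print(DATA_DIR)
```

<p>For anything that must persist across requests, external object storage (an S3-style bucket, for example) is the usual answer rather than the function's filesystem.</p>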
| <python><deployment><fastapi><vercel> | 2023-08-15 16:56:22 | 1 | 432 | Serdar |
76,907,639 | 2,200,362 | Compare two sets of data frames (strings) | <p>I have two pandas data frames: test_df and golden_df. There are postal addresses in each of these frames. Basically, test_df is user entered and may have several mistakes (missing street/pincode/incorrect formatting/etc.) - I have used the Google API to correct these issues and format the addresses of test_df.
Next, using the golden_df, I have to see which addresses are present in test_df and which are not.</p>
<p>I am not an expert, and hence the first idea I got was to use a for loop to read each address in test_df and search for it in golden_df - yes, this works - but I know there has to be a better and more efficient way.
My data set is large - it may contain up to 150,000 rows. I have shared a few for representation below:</p>
<pre><code>#test_df:
data = ["941 Thorpe St, Rock Springs, WY 82901",
"2809 Harris Dr, Antioch, CA 94509",
"7 Eucalyptus, Newport Coast, CA 92657",
"725 Mountain View St, Altadena, CA 91001",
"1966 Clinton Ave, Calexico, CA 92231",
"431 6th St, West Sacramento, CA 95605",
"5574 Old Goodrich Rd, Clarence, NY 14031",
"Valencia Way, Valley Center, CA 92082"]
test_df = pd.DataFrame(data, columns=['parsed addresses'])
</code></pre>
<pre><code>#golden_df:
data = ["941 Thorpe St, Rock Springs, WY 82901",
"2809 Harris Dr, Antioch, CA 94509",
"8838 La Jolla Scenic Dr N, La Jolla, CA 92037",
"16404 Parthenia St North Hills, CA 91343",
"1966 Clinton Ave, Calexico, CA 92231",
"431 6th St, West Sacramento, CA 95605",
"1010 Hillcroft Rd, Glendale, CA 91207",
"Valencia Way, Valley Center, CA 92082"]
golden_df = pd.DataFrame(data, columns=['golden addresses'])
</code></pre>
<p>This is what I had started to write:</p>
<pre><code>for i in range(0, len(test_df)):
    each_add = test_df.at[i, "parsed addresses"]
    golden_df['golden addresses'].str.contains(each_add)
</code></pre>
<p>Given the number of rows - what is the best approach?</p>
<p>My goal is to get the number of correct matches.</p>
<p>Edited to add latest update:</p>
<p>I am trying:</p>
<p><code>data_intersection = gold_add.loc[gold_add["golden address"].isin(test_add["parsed address"]),"golden address"]</code></p>
<p>I see that <code>len(data_intersection)</code> is > test_add.shape[0]</p>
<p>How can this be true if the function computes common data in both test and golden dataframes?</p>
<p>I also tried to convert the addresses to a set. The <code>len(set(golden_df['golden addresses']))</code> and the number of rows <code>golden_df.shape[0]</code> are not equal. Any idea why?</p>
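<p>For reference: <code>isin</code> applied to the golden side keeps <em>every</em> golden row whose address appears in the test frame, so duplicate addresses in golden_df inflate the count past the number of test rows; and <code>len(set(col)) < df.shape[0]</code> simply means the column contains duplicates. Counting exact matches needs only a set lookup, which is O(1) per address instead of a substring scan per row. A sketch with the sample data from the question:</p>

```python
test_addresses = [
    "941 Thorpe St, Rock Springs, WY 82901",
    "2809 Harris Dr, Antioch, CA 94509",
    "7 Eucalyptus, Newport Coast, CA 92657",
    "725 Mountain View St, Altadena, CA 91001",
    "1966 Clinton Ave, Calexico, CA 92231",
    "431 6th St, West Sacramento, CA 95605",
    "5574 Old Goodrich Rd, Clarence, NY 14031",
    "Valencia Way, Valley Center, CA 92082",
]
golden = {
    "941 Thorpe St, Rock Springs, WY 82901",
    "2809 Harris Dr, Antioch, CA 94509",
    "8838 La Jolla Scenic Dr N, La Jolla, CA 92037",
    "16404 Parthenia St North Hills, CA 91343",
    "1966 Clinton Ave, Calexico, CA 92231",
    "431 6th St, West Sacramento, CA 95605",
    "1010 Hillcroft Rd, Glendale, CA 91207",
    "Valencia Way, Valley Center, CA 92082",
}

# Set membership makes each lookup O(1), so 150k rows stay fast.
matches = [addr for addr in test_addresses if addr in golden]
print(len(matches))  # 5
```

<p>The pandas equivalent from the test side, <code>test_df['parsed addresses'].isin(golden_df['golden addresses']).sum()</code>, yields the same count without loops.</p>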
| <python><pandas><string><search><geopy> | 2023-08-15 16:35:51 | 1 | 623 | Saania |
76,907,587 | 4,620,387 | python setup.py egg_info failed | <p>I just downloaded PyCharm Community 2023.2 and am using the Python 2.7 interpreter.
I'm trying to run the following:</p>
<pre><code>python -m pip install --upgrade pip
</code></pre>
<p>And I'm getting the following error:</p>
<pre><code>Collecting pip
Using cached https://files.pythonhosted.org/packages/ba/19/e63fb4e0d20e48bd2167bb7e857abc0e21679e24805ba921a224df8977c0/pip-23.2.1.tar.gz
Complete output from command python setup.py egg_info:
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "local\temp\pip-build-fagwaz\pip\setup.py", line 7
def read(rel_path: str) -> str:
^
SyntaxError: invalid syntax
----------------------------------------
Command "python setup.py egg_info" failed with error code 1 in local\temp\pip-build-fagwaz\pip\
You are using pip version 20.3.4, however version 23.2.1 is available.
You should consider upgrading via the 'python -m pip install --upgrade pip' command
</code></pre>
<p>And also whenever I run the code, I get the following issue:</p>
<p>Info on the packages I need to import (but when I download them it doesn't help).</p>
<pre><code>ImportError: DLL load failed: %1 is not a valid Win32 application.
</code></pre>
<p>Each time I try something it either doesn't compile or doesn't help.</p>
| <python><python-2.7><pip><pycharm><python-typing> | 2023-08-15 16:27:43 | 1 | 1,805 | Sam12 |
76,907,524 | 1,991,502 | Does each item in a QStandardItemModel have a unique QModelIndex? | <p>I have a QStandardItem Model with QStandardItems that look like this</p>
<ul>
<li>USA
<ul>
<li>Population</li>
<li>Capital city</li>
</ul>
</li>
<li>France
<ul>
<li>Population</li>
<li>Capital city</li>
</ul>
</li>
</ul>
<p>When a <code>doubleClicked</code> signal is sent by the user, I can get a QModelIndex whose data() method returns, say, "Population" if they double-click on "Population".</p>
<p>My question: does the QModelIndex distinguish between, say, the QStandardItems belonging to USA and those belonging to France? Is there a more elegant solution than chains like model_index.parent().parent()...? An ideal solution would be a function like</p>
<pre><code>def get_standard_item(model: QStandardItemModel, index: QModelIndex) -> QStandardItem:
...
</code></pre>
<p>[edit] - I found this in the QModelIndex class documentation:</p>
<p>"QModelIndex::internalId() - Returns a quintptr used by the model to associate the index with the internal data structure."</p>
<p>Can this be used somehow to fetch the corresponding QStandardItem?</p>
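<p>To spell out the parent-chain idea outside of Qt, here is a pure-Python mock (the classes and names below are my own toys, not Qt API) showing how two "Population" items can be told apart by walking <code>parent()</code>. In PyQt itself, I suspect <code>QStandardItemModel.itemFromIndex(index)</code> may already return the item directly:</p>

```python
# Toy stand-in for QModelIndex: only name and parent() are mocked here
class Index:
    def __init__(self, name, parent=None):
        self.name = name
        self._parent = parent

    def parent(self):
        return self._parent


def path_to_root(index):
    """Walk parent() so 'Population' under USA differs from France's."""
    parts = []
    while index is not None:
        parts.append(index.name)
        index = index.parent()
    return list(reversed(parts))


usa = Index("USA")
population = Index("Population", parent=usa)
print(path_to_root(population))  # ['USA', 'Population']
```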
| <python><pyqt><qstandarditemmodel> | 2023-08-15 16:15:02 | 0 | 749 | DJames |
76,907,481 | 5,060,208 | Resolve path of .py and .ipynb file for sphinx-gallery build and pytest | <p><a href="https://sphinx-gallery.github.io/stable/index.html" rel="nofollow noreferrer">Sphinx-gallery</a> offers the option to create <code>.py</code> files that can be generated into <code>.ipynb</code> files and executed directly for the documentation.</p>
<p>I have the following file structure:</p>
<pre><code>Project/
|--doc
| |--Makefile
|
|--examples
| |--data
| |--plot_example_1.py
|
|--tests
| |--test_run_examples.py
</code></pre>
<p>In <code>plot_example_1.py</code> I now want to read the example data stored in <code>data</code>, and therefore require the path of the current directory.
When first generating the documentation I can resolve it with</p>
<pre><code>from pathlib import Path
Path().resolve()
</code></pre>
<p>I want to test the examples however additionally using <code>pytest</code>. This fails when calling <code>pytest tests</code> from the root directory.</p>
<p>Instead, I could successfully test it when adapting the path call to <code>Path(__file__).absolute()</code>, but the <code>__file__</code> option is not available in the <code>ipynb</code>, as explained <a href="https://stackoverflow.com/questions/16771894/python-nameerror-global-name-file-is-not-defined">here</a>.</p>
<p>So I need an option that can resolve the path for both the python file and ipynb. Is there any option for that?</p>
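<p>The workaround I am currently considering is a try/except around <code>__file__</code>, so the same snippet works in both contexts. A sketch (the <code>data</code> subfolder is just my example layout):</p>

```python
from pathlib import Path

# __file__ works under pytest and plain scripts; it is undefined in a
# notebook, where the working directory is the sensible fallback
try:
    here = Path(__file__).resolve().parent
except NameError:
    here = Path().resolve()

data_dir = here / "data"  # example folder from my layout
print(here.is_dir())  # True
```

<p>Is there a cleaner, established way than this?</p>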
| <python><python-sphinx><pathlib> | 2023-08-15 16:07:31 | 1 | 331 | Merk |
76,907,438 | 1,323,044 | Python subprocess call fails when string with comma is passed as argument | <p>I'm on Windows, making a subprocess call to a batch script as shown below:</p>
<pre><code> print(curValue)
subprocess.call([BA_EDITING_SCRIPT,
self.blazeInstallationFolder,
self.repoFolder,
curMetaphor,
self.romProjectPath,
curInstPath,
curMetaphorType,
curRule,
curType,
curValue])
</code></pre>
<p>In my batch script, I print curValue argument which is passed as the 9th argument by</p>
<pre><code>echo ****** BA Editing JAR Calling Batch Script: initiated ******
echo %9
</code></pre>
<p>This is the combined output I'm getting:</p>
<pre><code>{'ctr':0,'__meta_klass':'CodeBuilderViewModel'}
****** BA Editing JAR Calling Batch Script: initiated ******
{'ctr':0
</code></pre>
<p>Looks like the string is broken by the commas while it's being passed by subprocess call. Any ideas to fix this greatly appreciated.</p>
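<p>From what I can tell, <code>subprocess</code> only quotes arguments that contain whitespace, so the comma-laden value reaches <code>cmd.exe</code> unquoted and the batch file's <code>%9</code> expansion splits on the commas. A small check of that theory:</p>

```python
import subprocess

cur_value = "{'ctr':0,'__meta_klass':'CodeBuilderViewModel'}"

# list2cmdline shows the command line Windows would receive: no quotes are
# added because the argument contains no whitespace
cmdline = subprocess.list2cmdline(["script.bat", cur_value])
print(cmdline)  # script.bat {'ctr':0,'__meta_klass':'CodeBuilderViewModel'}
```

<p>If that is right, pre-quoting the value myself (e.g. <code>'"%s"' % curValue</code>) before the call might be the fix, but I'm not certain this is safe for all values.</p>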
| <python><string><subprocess> | 2023-08-15 16:02:12 | 0 | 569 | Baykal |
76,907,276 | 1,471,980 | how do you summarize a pandas data frame by groupby in special format | <p>I have a data frame as this:</p>
<pre><code>print(df)
Env location lob grid row server model make slot ports connected disabled
Prod USA Market AB3 bc2 Server123 Hitachi Stor 1 3 1 5
Prod USA Market AB3 bc2 Server123 Hitachi Stor 2 2 3 3
Prod USA Market AB3 bc2 Server123 Hitachi Stor 3 0 0 2
Prod USA Market AB3 bc2 Server123 Hitachi Stor 4 8 7 1
Dev EMEA Ins. AB6 bc4 Serverabc IBM Mfa 1 6 5 0
Dev EMEA Ins. AB6 bc4 Serverabc IBM Mfa 2 8 4 0
Dev EMEA Ins. AB6 bc4 Serverabc IBM Mfa 3 12 4 0
Dev EMEA Ins. AB6 bc4 Serverabc IBM Mfa 4 10 2 0
</code></pre>
<p>I need to summarize this table and sum the numeric values grouped by server. The summary table needs to look like the one below. I need to print the string values from the last row of each server group (the slot column can be skipped); the other numeric columns need to be summed and added as a "Total" row at the end of each group, with a blank line inserted between groups.</p>
<pre><code>print(df2)
Env location lob grid row server model make slot ports connected disabled
Prod USA Market AB3 bc2 Server123 Hitachi Stor 1 3 1 5
Prod USA Market AB3 bc2 Server123 Hitachi Stor 2 2 3 3
Prod USA Market AB3 bc2 Server123 Hitachi Stor 3 0 0 2
Prod USA Market AB3 bc2 Server123 Hitachi Stor 4 8 7 1
Total USA Market AB3 bc2 Server123 Hitachi Stor 13 11 11
Dev EMEA Ins. AB6 bc4 Serverabc IBM Mfa 1 6 5 0
Dev EMEA Ins. AB6 bc4 Serverabc IBM Mfa 2 8 4 0
Dev EMEA Ins. AB6 bc4 Serverabc IBM Mfa 3 12 4 0
Dev EMEA Ins. AB6 bc4 Serverabc IBM Mfa 4 10 2 0
Total EMEA Ins. AB6 bc4 Serverabc IBM Mfa 36 15 0
</code></pre>
<p>I have tried this:</p>
<pre><code>df2 = df.groupby('server').sum()
</code></pre>
<p>this adds the values and shows each server in the data frame. I need to add the sum values at the end of each group in data frame.</p>
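<p>What I think I need is to build each group's total separately and concatenate it back after the group. A reduced sketch of that idea (only a subset of columns; the "Total" label and the blank spacer line are left out here):</p>

```python
import pandas as pd

df = pd.DataFrame({
    "server": ["Server123"] * 2 + ["Serverabc"] * 2,
    "ports": [3, 2, 6, 8],
    "connected": [1, 3, 5, 4],
})

# For each server: copy the group's last row (to keep the string columns),
# overwrite the numeric columns with the group sums, and append it
parts = []
for server, grp in df.groupby("server", sort=False):
    total = grp.iloc[[-1]].copy()
    for col in ["ports", "connected"]:
        total[col] = grp[col].sum()
    parts.append(pd.concat([grp, total]))

out = pd.concat(parts, ignore_index=True)
print(out["ports"].tolist())  # [3, 2, 5, 6, 8, 14]
```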
| <python><pandas> | 2023-08-15 15:39:01 | 1 | 10,714 | user1471980 |
76,907,274 | 3,559,859 | PySpark, pyspark.sql.DataFrame.foreachPartition example does not work | <p>I am running an Apache Spark cluster within Azure Synapse and I'm currently checking for a way to perform the same operation on each partition. To understand the Spark function foreachPartition, I started by executing the <a href="https://spark.apache.org/docs/latest/api/python/reference/pyspark.sql/api/pyspark.sql.DataFrame.foreachPartition.html" rel="nofollow noreferrer">example code</a>.</p>
<p>However, the example:</p>
<pre><code>df = spark.createDataFrame([(14, "Tom"), (23, "Alice"), (16, "Bob")], ["age", "name"])
def func(itr):
for person in itr:
print(person.name)
df.foreachPartition(func)
</code></pre>
<p>does not output anything:
<a href="https://i.sstatic.net/MBT4F.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/MBT4F.jpg" alt="enter image description here" /></a></p>
<p>I thought it has something to do with the inner function and the print, so I tried to add it to a list and output the list at the end, but it's just empty.</p>
<p><a href="https://i.sstatic.net/AEWrB.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/AEWrB.png" alt="enter image description here" /></a></p>
<p>Can someone explain why this is not working or where the actual problem is?</p>
<p>Spark Version is: 3.3.1.5.2-92314920</p>
| <python><apache-spark><pyspark><azure-synapse> | 2023-08-15 15:38:40 | 2 | 416 | NFoerster |
76,907,255 | 2,537,394 | Pandas grouping multiple columns with MultiIndex | <p>With a simple 2D DataFrame, it is easy to group multiple columns with <code>df.groupby(["x", "y"])</code>. However, I have a DataFrame with a MultiIndex as both index and columns:</p>
<pre class="lang-py prettyprint-override"><code>cols = pd.MultiIndex.from_arrays([[*["coords"]*3, *["type"]*3],[ "x", "y", "z", "a", "b", "c"]])
idx = pd.MultiIndex.from_arrays([[*["values"]*6, "meta"], [*range(6), "foo"]])
df = pd.DataFrame([[1,1,1,6,7,3], [1,1,0,1,5,9], [2,1,0,1,8,3], [2,1,0,5,7,2], [3,1,0,6,5,9], [3,1,0,7,4,5], [None, None, None, "bar", "baz", "qux"]], index=idx, columns=cols)
coords type
x y z a b c
values 0 1.0 1.0 1.0 6 7 3
1 1.0 1.0 0.0 1 5 9
2 2.0 1.0 0.0 1 8 3
3 2.0 1.0 0.0 5 7 2
4 3.0 1.0 0.0 6 5 9
5 3.0 1.0 0.0 7 4 5
meta foo NaN NaN NaN bar baz qux
</code></pre>
<p>Now I want to group on the coordinates <code>["x", "y"]</code> (<code>"z"</code> doesn't matter in this case and could be discarded). I've tried different things such as <code>df.groupby(["x", "y"])</code>, <code>df.groupby(["x", "y"], level=1)</code>, or <code>df.groupby([pd.Grouper(level=1, axis=1), "x", "y"])</code>, however none work. What would be the correct call to achieve a result like (achieved with <code>.sum()</code>):</p>
<pre class="lang-py prettyprint-override"><code> coords type
x y a b c
0 1.0 1.0 7 12 12
1 2.0 1.0 6 15 5
2 3.0 1.0 13 9 14
</code></pre>
<p>Whether the <code>("meta", "foo")</code> row is included or not doesn't matter to me, it should be easy enough to add or remove it.</p>
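<p>One variant I have not listed above is passing the full column tuples instead of the bare level-1 names. A reduced sketch of what I mean (is this the intended way?):</p>

```python
import pandas as pd

cols = pd.MultiIndex.from_arrays([["coords", "coords", "type"], ["x", "y", "a"]])
df = pd.DataFrame([[1, 1, 6], [1, 1, 1], [2, 1, 5]], columns=cols)

# MultiIndex columns can be addressed by their full (level0, level1) tuple
out = df.groupby([("coords", "x"), ("coords", "y")], as_index=False).sum()
print(out[("type", "a")].tolist())  # [7, 5]
```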
| <python><pandas><dataframe><multi-index> | 2023-08-15 15:36:24 | 1 | 731 | YPOC |
76,907,204 | 5,213,015 | Django - How to fix multiple pks in url? | <p>I currently have a live search functionality built out mostly working how I want.</p>
<p><strong>Functionality works like this:</strong></p>
<p>A user searches a word on the search bar.</p>
<p>Then the user gets a display of suggested searches that includes text and an image.</p>
<p>The user clicks on the text/image from the suggested searches that they want and then the user goes to the template page that includes the primary key in the url.</p>
<p><strong>The Problem:</strong></p>
<p>When the user is on the template page that includes the primary key and another form submission is made, an additional pk gets added to the URL.</p>
<p><strong>Example:</strong></p>
<p><a href="http://127.0.0.1:8000/1/2/" rel="nofollow noreferrer">http://127.0.0.1:8000/1/2/</a></p>
<p>This results in Page Not Found 404.</p>
<p>How do I fix this? After a user makes another submission, I just need the URL updated with the new primary key.</p>
<p>So this:
<a href="http://127.0.0.1:8000/1/2/" rel="nofollow noreferrer">http://127.0.0.1:8000/1/2/</a> needs to be this <a href="http://127.0.0.1:8000/2/" rel="nofollow noreferrer">http://127.0.0.1:8000/2/</a></p>
<p>Any help is gladly appreciated.</p>
<p>Thanks!</p>
<p><strong>Code Below:</strong></p>
<p><strong>urls.py:</strong></p>
<pre><code>urlpatterns = [
path('', account_view_home, name='home'),
# Below are the urls where the live search is enabled #
path ('game_details/', search_results_view, name="game_details"),
path ('<pk>/', search_detail_view, name="detail")
]
</code></pre>
<p><strong>views.py</strong></p>
<pre><code>def search_detail_view(request,pk):
obj = get_object_or_404(Game_Info, id=pk)
user_prices = Shopping_Prices.objects.all()
game_leaderboard = User_Info.objects.all()
# return render(request, 'detail.html', {'obj':obj})
if request.user.is_authenticated:
user_profile = User_Info.objects.filter(user=request.user)
context = {
'user_profile': user_profile,
'user_prices': user_prices,
'game_leaderboard': game_leaderboard,
'obj': obj
}
else:
context = {
'user_prices': user_prices,
'game_leaderboard': game_leaderboard,
'obj': obj
}
return render(request, "detail.html", context)
def search_results_view(request):
if request.is_ajax():
res = None
game = request.POST.get('game')
print(game)
qs = Game_Info.objects.filter(game_title__icontains=game).order_by('game_title')[:4]
if len(qs) > 0 and len(game) > 0:
data = []
for pos in qs:
item ={
'pk': pos.pk,
'name': pos.game_title,
'game_provider': pos.game_provider,
'image': str(pos.game_image.url)
}
data.append(item)
res = data
else:
res = 'No games found...'
return JsonResponse({'data': res})
return JsonResponse({})
</code></pre>
<p><strong>JS:</strong></p>
<pre><code>$(function () {
const url = window.location.href
const searchForm = document.getElementById("search-form");
const searchInput = document.getElementById("search_input_field");
const resultsBox = document.getElementById("results-box");
const csrf = document.getElementsByName('csrfmiddlewaretoken')[0].value
console.log(csrf)
const sendSearchData = (game) =>{
$.ajax ({
type: 'POST',
url: '/game_details/',
data: {
'csrfmiddlewaretoken': csrf,
'game' : game,
},
success: (res)=> {
console.log(res.data)
const data = res.data
if (Array.isArray(data)) {
resultsBox.innerHTML = `<div class="row-tw"><h1>Recommended Games</h1></div>`
data.forEach(game=> {
resultsBox.innerHTML += `
<a href="${url}${game.pk}" class="item" >
<div class="row">
<div class="col-2">
<img src="${game.image}" class="game-img">
</div>
<div class="col-6">
<p>${game.name}</p>
<span class="game-pub-title">${game.game_provider}</span>
</div>
</div>
</a>
`
})
} else {
if (searchInput.value.length > 0) {
resultsBox.innerHTML = `<h2>${data}</h2>`
} else {
resultsBox.classList.add('not-visible')
}
}
},
error: (err)=> {
console.log(err)
}
})
}
searchInput.addEventListener('keyup', e=> {
if (resultsBox.classList.contains('not-visible')) {
resultsBox.classList.remove('not-visible')
}
sendSearchData(e.target.value)
})
});
</code></pre>
<p><strong>HTML</strong></p>
<pre><code> <form method="POST" autocomplete="off" id="search-form" action="{% url 'search_results' %}">
{% csrf_token %}
<div class="input-group">
<input id="search_input_field" type="text" name="q" autocomplete="off" class="form-control gs-search-bar" placeholder="Search Games..." value="">
<div id="results-box" class="results-card not-visible"></div>
<span class="search-clear">x</span>
<button id="search_btn" type="submit" class="btn btn-primary search-button" disabled>
<span class="input-group-addon">
<i class="zmdi zmdi-search"></i>
</span>
</button>
</div>
</form>
</code></pre>
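<p>My suspicion (illustrated in Python for brevity): the JS builds the link as <code>${url}${game.pk}</code> on top of <code>window.location.href</code>, so from <code>/1/</code> the next click produces <code>/1/2/</code>. Building a root-relative <code>href</code> such as <code>/${game.pk}/</code> should avoid the accumulation, if my reading is right:</p>

```python
# Why the pk accumulates: the JS appends the pk to the full current URL
current_page = "http://127.0.0.1:8000/1/"
appended = current_page + "2/"
print(appended)  # http://127.0.0.1:8000/1/2/

# A root-relative link built from the pk alone avoids the accumulation
fixed = "/{}/".format(2)
print(fixed)  # /2/
```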
| <javascript><python><django><django-views><django-templates> | 2023-08-15 15:29:54 | 1 | 419 | spidey677 |
76,906,879 | 13,955,154 | Skip loop iteration if it exceeds some time limit | <p>I have this loop:</p>
<pre><code>for index, row in df.iterrows():
process_row(index, row)
</code></pre>
<p>where process_row is a method that calls an API two times.</p>
<pre><code>def process_row(index, row):
print("Evaluating row index:", index)
question = row["Question"]
answer = row["Answer"]
instruct = "..."
instruct2 = "..."
try:
completion = openai.ChatCompletion.create(
model="gpt-3.5-turbo", messages=[{"role": "user", "content": instruct}]
)
response = completion["choices"][0]["message"]["content"]
completion = openai.ChatCompletion.create(
model="gpt-3.5-turbo", messages=[{"role": "user", "content": instruct2}]
)
response2 = completion["choices"][0]["message"]["content"]
.... OTHER CODE ....
except Exception as e:
print(e)
</code></pre>
<p>If the whole method takes more than 30 seconds for an iteration, I want it to perform this instead:</p>
<pre><code>min_vote = 10
row_with_vote = row.tolist() + [min_vote]
passed_writer.writerow(row_with_vote)
</code></pre>
<p>How can I do so? I tried something with concurrent.futures but I didn't see any improvement (if you want, I can add that code to the post). I have seen other posts, but they check the elapsed time after every instruction, which wouldn't work in my case since the program gets stuck on a single line. Moreover, what can make the method this slow? Most iterations take just a couple of seconds, while sometimes one takes 10 or more minutes, so something goes wrong.</p>
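<p>The concurrent.futures pattern I have been circling around looks roughly like this (assumption on my part: the 30-second budget maps to <code>future.result(timeout=30)</code>, shrunk here for the demo; note the worker thread keeps running in the background after the timeout, which only frees the loop to move on):</p>

```python
import time
from concurrent.futures import ThreadPoolExecutor, TimeoutError

def slow_call():
    time.sleep(0.5)  # stands in for the stuck API call
    return "done"

with ThreadPoolExecutor(max_workers=1) as pool:
    future = pool.submit(slow_call)
    try:
        result = future.result(timeout=0.05)
    except TimeoutError:
        result = "timed out"  # here I would write the fallback row instead

print(result)  # timed out
```

<p>If the openai client version supports it, a per-request <code>request_timeout</code> argument might be simpler still, but I have not verified that.</p>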
| <python><multithreading><time><sleep> | 2023-08-15 14:46:40 | 1 | 720 | Lorenzo Cutrupi |
76,906,804 | 534,298 | Why does the memoryview assignment from array have python interactions? | <p>When compiled with <code>cython -a my-file.pyx</code>, this simple <code>cdef</code> line is annotated yellow in the HTML file:</p>
<pre><code># my-file.pyx
from cpython.array cimport array
def f(double[:] xyz):
cdef double[:] inv2 = array('d', [xyz[0]*3, xyz[1], xyz[2]*3])
</code></pre>
<p>Is this correct? I was expecting this line to have no python interactions.</p>
<p>I actually don't know how to tell if the code still has python interactions except for the coloring of the lines in the html files. How can I tell if there is still any improvement to be done when a line is yellow?</p>
<p>The corresponding c code is</p>
<pre><code> __pyx_t_1 = 0;
__pyx_t_2 = -1;
if (__pyx_t_1 < 0) {
__pyx_t_1 += __pyx_v_xyz.shape[0];
if (unlikely(__pyx_t_1 < 0)) __pyx_t_2 = 0;
} else if (unlikely(__pyx_t_1 >= __pyx_v_xyz.shape[0])) __pyx_t_2 = 0;
if (unlikely(__pyx_t_2 != -1)) {
__Pyx_RaiseBufferIndexError(__pyx_t_2);
__PYX_ERR(0, 5, __pyx_L1_error)
}
__pyx_t_3 = PyFloat_FromDouble(((*((double *) ( /* dim=0 */ (__pyx_v_xyz.data + __pyx_t_1 * __pyx_v_xyz.strides[0]) ))) * 3.0)); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 5, __pyx_L1_error)
__Pyx_GOTREF(__pyx_t_3);
__pyx_t_1 = 1;
__pyx_t_2 = -1;
if (__pyx_t_1 < 0) {
__pyx_t_1 += __pyx_v_xyz.shape[0];
if (unlikely(__pyx_t_1 < 0)) __pyx_t_2 = 0;
} else if (unlikely(__pyx_t_1 >= __pyx_v_xyz.shape[0])) __pyx_t_2 = 0;
if (unlikely(__pyx_t_2 != -1)) {
__Pyx_RaiseBufferIndexError(__pyx_t_2);
__PYX_ERR(0, 5, __pyx_L1_error)
}
__pyx_t_4 = PyFloat_FromDouble((*((double *) ( /* dim=0 */ (__pyx_v_xyz.data + __pyx_t_1 * __pyx_v_xyz.strides[0]) )))); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 5, __pyx_L1_error)
__Pyx_GOTREF(__pyx_t_4);
__pyx_t_1 = 2;
__pyx_t_2 = -1;
if (__pyx_t_1 < 0) {
__pyx_t_1 += __pyx_v_xyz.shape[0];
if (unlikely(__pyx_t_1 < 0)) __pyx_t_2 = 0;
} else if (unlikely(__pyx_t_1 >= __pyx_v_xyz.shape[0])) __pyx_t_2 = 0;
if (unlikely(__pyx_t_2 != -1)) {
__Pyx_RaiseBufferIndexError(__pyx_t_2);
__PYX_ERR(0, 5, __pyx_L1_error)
}
__pyx_t_5 = PyFloat_FromDouble(((*((double *) ( /* dim=0 */ (__pyx_v_xyz.data + __pyx_t_1 * __pyx_v_xyz.strides[0]) ))) * 3.0)); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 5, __pyx_L1_error)
__Pyx_GOTREF(__pyx_t_5);
__pyx_t_6 = PyList_New(3); if (unlikely(!__pyx_t_6)) __PYX_ERR(0, 5, __pyx_L1_error)
__Pyx_GOTREF(__pyx_t_6);
__Pyx_GIVEREF(__pyx_t_3);
PyList_SET_ITEM(__pyx_t_6, 0, __pyx_t_3);
__Pyx_GIVEREF(__pyx_t_4);
PyList_SET_ITEM(__pyx_t_6, 1, __pyx_t_4);
__Pyx_GIVEREF(__pyx_t_5);
PyList_SET_ITEM(__pyx_t_6, 2, __pyx_t_5);
__pyx_t_3 = 0;
__pyx_t_4 = 0;
__pyx_t_5 = 0;
__pyx_t_5 = PyTuple_New(2); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 5, __pyx_L1_error)
__Pyx_GOTREF(__pyx_t_5);
__Pyx_INCREF(__pyx_n_s_d);
__Pyx_GIVEREF(__pyx_n_s_d);
PyTuple_SET_ITEM(__pyx_t_5, 0, __pyx_n_s_d);
__Pyx_GIVEREF(__pyx_t_6);
PyTuple_SET_ITEM(__pyx_t_5, 1, __pyx_t_6);
__pyx_t_6 = 0;
__pyx_t_6 = __Pyx_PyObject_Call(((PyObject *)__pyx_ptype_7cpython_5array_array), __pyx_t_5, NULL); if (unlikely(!__pyx_t_6)) __PYX_ERR(0, 5, __pyx_L1_error)
__Pyx_GOTREF(__pyx_t_6);
__Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0;
__pyx_t_7 = __Pyx_PyObject_to_MemoryviewSlice_ds_double(__pyx_t_6, PyBUF_WRITABLE); if (unlikely(!__pyx_t_7.memview)) __PYX_ERR(0, 5, __pyx_L1_error)
__Pyx_DECREF(__pyx_t_6); __pyx_t_6 = 0;
__pyx_v_inv2 = __pyx_t_7;
__pyx_t_7.memview = NULL;
__pyx_t_7.data = NULL;`
</code></pre>
| <python><arrays><cython><memoryview> | 2023-08-15 14:36:24 | 2 | 21,060 | nos |
76,906,801 | 10,789,707 | How to Split a String from substrings regex patterns with placeholders and keeping the delimiter | <p>Similar questions have been answered, yet I did not find the right one for the following case.</p>
<p>Close related questions:</p>
<p><a href="https://stackoverflow.com/questions/9141076/how-do-i-use-regular-expressions-in-python-with-placeholder-text">How do I use regular expressions in Python with placeholder text?</a></p>
<p><a href="https://stackoverflow.com/questions/43427143/how-to-replace-placeholders-in-python-strings">How to replace placeholders in python strings</a></p>
<p><a href="https://stackoverflow.com/questions/2136556/in-python-how-do-i-split-a-string-and-keep-the-separators">In Python, how do I split a string and keep the separators?</a></p>
<p>I need to split this string to get the following output:</p>
<pre><code>txt = "MCMXLIII"
</code></pre>
<p>Needed output:</p>
<pre><code>['M', 'CM', 'XL', 'III']
</code></pre>
<p>I tested this:</p>
<pre><code>import re
txt = "MCMXLIII"
print(re.sub('(CM)|(XL)', '❤️{0}❤️|❤️{1}❤️', txt).format("CM", "XL").split('❤️'))
Output:
['M', 'CM', '|', 'XL', '', 'CM', '|', 'XL', 'III']
</code></pre>
<p>I also tested this:</p>
<pre><code>import re
txt = "MCMXLIII"
print(re.sub('(CM)|(XL)', '(❤️{0}❤️|❤️{1}❤️)', txt).format("CM", "XL").split('❤️'))
Output:
['M(', 'CM', '|', 'XL', ')(', 'CM', '|', 'XL', ')III']
</code></pre>
<p>Finally I tested this:</p>
<pre><code>
import re
txt = "MCMXLIII"
print(re.sub('(XL)', '❤️{0}❤️', (re.sub('(CM)', '❤️{0}❤️', txt).format("CM"))).format("XL").split('❤️'))
Output:
['M', 'CM', '', 'XL', 'III']
</code></pre>
<p>If that's of help here's the correct output in Google Sheets formula version:</p>
<pre><code>=SPLIT(
IF(
REGEXMATCH(A1,"(CM)|(XL)"),REGEXREPLACE(A1, "(CM)|(XL)","❤️$0❤️")),"❤️")
</code></pre>
<p>Google Sheets output:</p>
<p><a href="https://i.sstatic.net/hIStP.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/hIStP.png" alt="splitplaceholders" /></a></p>
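<p>Re-reading the "keep the separators" answers, I think a capturing group in <code>re.split</code> plus dropping the empty strings between adjacent matches may be all that is needed. A sketch:</p>

```python
import re

txt = "MCMXLIII"

# A capturing group makes re.split keep the delimiters in the result; the
# empty strings between two adjacent matches are filtered out afterwards
parts = [p for p in re.split(r"(CM|XL)", txt) if p]
print(parts)  # ['M', 'CM', 'XL', 'III']
```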
| <python><regex> | 2023-08-15 14:35:50 | 1 | 797 | Lod |
76,906,733 | 1,130,231 | Python - HDF5 to numpy array - out of memory | <p>I have a HDF5 file that contains 20'000'000 rows, each row has 8 float32 columns. The total raw memory size should be approx 640MB.</p>
<p>I want to load this data in my Python app, however, during the load to numpy array I run out of memory (I have 64GB RAM)</p>
<p>I use this code:</p>
<pre><code>import h5py
hf = h5py.File(dataFileName, 'r')
data = hf['data'][:]
</code></pre>
<p>For smaller files it works OK; however, my input is not that big either. So is there any other way to load the entire dataset into memory? It should fit without any problems.
On the other hand, why does it take so much memory? Even if it internally converted float32 to float64, that would not come close to the size of my entire RAM.</p>
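<p>One thing I plan to try is <code>read_direct</code> into a preallocated array, which should avoid intermediate copies. A self-contained sketch with a tiny demo file (my real dataset is just called <code>data</code>, as above):</p>

```python
import numpy as np
import h5py

# Build a small demo file, then read it back without intermediate copies
with h5py.File("demo.h5", "w") as hf:
    hf.create_dataset("data", data=np.ones((10, 8), dtype=np.float32))

with h5py.File("demo.h5", "r") as hf:
    ds = hf["data"]
    out = np.empty(ds.shape, dtype=ds.dtype)  # preallocate the target
    ds.read_direct(out)                       # h5py fills it in place

print(out.dtype, out.shape)  # float32 (10, 8)
```

<p>Would that help with the memory blow-up, or is the problem elsewhere?</p>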
<p>Dataset info from HDFView 3.3.0</p>
<p><a href="https://i.sstatic.net/IXVSy.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/IXVSy.png" alt="enter image description here" /></a></p>
| <python><numpy><hdf5><h5py> | 2023-08-15 14:27:35 | 3 | 9,585 | Martin Perry |
76,906,551 | 1,364,158 | How to apply polars.Expr.map_batches to multiple series? | <p>I'm trying to use <a href="https://docs.pola.rs/api/python/stable/reference/expressions/api/polars.map_batches.html#polars-map-batches" rel="nofollow noreferrer"><code>.map_batches()</code></a></p>
<p>The function signature shows:</p>
<pre><code>polars.map_batches(
exprs: Sequence[str] | Sequence[Expr],
function: Callable[[Sequence[Series]], Series],
return_dtype: PolarsDataType | None = None,
) -> Expr
</code></pre>
<p>Yet, I didn't manage to call it in a way that would actually pass more than a single series at a time.</p>
<p>I am trying to implement my own version of <code>rolling_mean</code> and I would need to be able to pass values and weights simultaneously.</p>
<p>Any idea what I am missing?</p>
| <python><python-polars> | 2023-08-15 14:01:04 | 3 | 7,274 | Samuel Hapak |
76,906,491 | 4,575,197 | pandas Series to Dataframe using Series indexes as columns with multiple values in a list | <p>I have a Series that I need as a DataFrame. The Series looks like this:</p>
<pre><code>column1 [-333, -333, -3,3 -33, -33, ...
column2 [-121.0, -431.0, -41.0, -1.0, -1.0, ...
column3 [0.0, 1.0, 0.0, 1.0, 0.0, 1.0, 0.0, 0.0, 0.0, ...
column4 [-451.0, -5121.0, -41.0, -21.0, -4121.0, ...
column5 [1451.0, 19851.0, 1451.0, 1451.0, 1941.0, ...
</code></pre>
<p>I tried to implement this <a href="https://stackoverflow.com/questions/40224319/pandas-series-to-dataframe-using-series-indexes-as-columns">post (pandas-series-to-dataframe-using-series-indexes-as-columns)</a> but I got the dataframe below (I'm illustrating only one column here), in which all the values of a list end up in one row of the dataframe:</p>
<pre><code> column1
0 [-9.0,
-811.0,
-71.0,
-691.0,
-41.0, ...
</code></pre>
<p>Is there any way to convert a pandas Series into a DataFrame when each value is a list and the Series index provides the column names?</p>
<p>EDIT:</p>
<p>the data as <code>srs.head().to_dict()</code></p>
<pre><code>{'column1': masked_array(data=[-4524, -41144, -44314,...,444,44005, 44],
</code></pre>
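<p>What I am after, in toy form, is roughly this (caveat: my real values are <code>masked_array</code>s, so they may need converting with <code>list()</code> first):</p>

```python
import pandas as pd

srs = pd.Series({"column1": [-333, -333, -3], "column2": [-121.0, -431.0, -41.0]})

# Each index label becomes a column; the list elements become the rows
df = pd.DataFrame(srs.to_dict())
print(df.shape)  # (3, 2)
```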
| <python><pandas><dataframe><list> | 2023-08-15 13:52:57 | 1 | 10,490 | Mostafa Bouzari |
76,906,469 | 5,865,058 | LangChain Zero Shot React Agent uses memory or not? | <p>I'm experimenting with LangChain's <code>AgentType.CHAT_ZERO_SHOT_REACT_DESCRIPTION</code> agent. By its name, I'd assume this is an agent intended for chat use, and I've given it memory, but it doesn't seem able to access that memory. What else do I need to do so that this agent will access its memory? Or have I incorrectly assumed that this agent can handle chats?</p>
<p>Here is my code and sample output:</p>
<pre class="lang-py prettyprint-override"><code>llm = ChatOpenAI(model_name="gpt-4",
temperature=0)
tools = load_tools(["llm-math", "wolfram-alpha", "wikipedia"], llm=llm)
memory = ConversationBufferMemory(memory_key="chat_history")
agent_test = initialize_agent(
tools=tools,
llm=llm,
agent=AgentType.CHAT_ZERO_SHOT_REACT_DESCRIPTION,
handle_parsing_errors=True,
memory=memory,
verbose=True
)
</code></pre>
<pre><code>>>> agent_test.run("What is the height of the empire state building?")
'The Empire State Building stands a total of 1,454 feet tall, including its antenna.'
>>> agent_test.run("What was the last question I asked?")
"I'm sorry, but I can't provide the information you're looking for."
</code></pre>
| <python><langchain><large-language-model><py-langchain> | 2023-08-15 13:47:29 | 3 | 439 | BBrooklyn |
76,906,369 | 12,320,370 | Return Status of DAG-A in another DAG-B | <p>I am looking for a way to retrieve the last status of DAG-A in another DAG-B, and then pass that status as a string in my current DAG-B.</p>
<p>But I cannot find anything online. To be clear, I don't want the status check to happen inside DAG-A itself.</p>
<p>Say DAG-A triggers multiple times a day; whenever DAG-B triggers (at its own custom time), I want it to have the last status of DAG-A.</p>
<p>I just have code that outputs a Slack message. Having the Slack message in DAG-A is unfortunately not a good solution for my problem, and I would like to learn how to return the status of other DAGs.</p>
<p>I would want to pass the status in the message body.</p>
<pre><code>def _get_message() -> str:
return "Status of data_pipeline: "
with DAG("slack_dag", start_date=datetime(2021, 1 ,1),
schedule_interval="@daily", default_args=default_args, catchup=False
) as dag:
send_slack_notification = SlackWebhookOperator (
task_id="send_slack_notification",
http_conn_id="slack_conn",
message=_get_message(),
channel="#test-public"
)
</code></pre>
| <python><airflow><pipeline> | 2023-08-15 13:33:20 | 1 | 333 | Nairda123 |
76,905,761 | 15,893,581 | in scipy.optimize.minimize - could not be broadcast together data with shapes (2,) (2,5) | <p>I cannot resolve the shape mismatch in <code>scipy.optimize.minimize</code> in this example:</p>
<pre><code>import numpy as np
from scipy.spatial import distance_matrix
from scipy.optimize import minimize
# Create the matrices
X = np.array([[1,2,3,4,5],[2,1,0,3,4]])
y = np.array([[0,0,0,0,1],[1,1,1,1,0]])
# Display the matrices
print("matrix x:\n", X)
print("matrix y:\n", y)
# compute the distance matrix
dist_mat = distance_matrix(X, y, p=2)
# display distance matrix
print("Distance Matrix:\n", dist_mat)
loss_res = lambda z: 0.5 * z ** 2 * (np.abs(z) <= 1) + (np.abs(z) - 0.5) * (np.abs(z) > 1)
f_to_optMin = lambda w: np.sum(loss_res(X @ w.ravel() - y)) # (2,) 10, (2,5)
res= minimize(f_to_optMin, (10,10,10,10,10,))
print(res.x)
</code></pre>
<p>seems, that problem/solution is rather simple, but I don't see how to make it working. Can somebody help me, please?</p>
<pre><code>Traceback (most recent call last):
File "E:\NEW docs\dist_mat.py", line 33, in <module>
res= minimize(f_to_optMin, ([1,1,1,1,1,]))
File "C:\Users\adm\AppData\Local\Programs\Python\Python310-32\lib\site-packages\scipy\optimize\_minimize.py", line 618, in minimize
return _minimize_bfgs(fun, x0, args, jac, callback, **options)
File "C:\Users\adm\AppData\Local\Programs\Python\Python310-32\lib\site-packages\scipy\optimize\optimize.py", line 1201, in _minimize_bfgs
sf = _prepare_scalar_function(fun, x0, jac, args=args, epsilon=eps,
File "C:\Users\adm\AppData\Local\Programs\Python\Python310-32\lib\site-packages\scipy\optimize\optimize.py", line 261, in _prepare_scalar_function
sf = ScalarFunction(fun, x0, args, grad, hess,
File "C:\Users\adm\AppData\Local\Programs\Python\Python310-32\lib\site-packages\scipy\optimize\_differentiable_functions.py", line 140, in __init__
self._update_fun()
File "C:\Users\adm\AppData\Local\Programs\Python\Python310-32\lib\site-packages\scipy\optimize\_differentiable_functions.py", line 233, in _update_fun
self._update_fun_impl()
File "C:\Users\adm\AppData\Local\Programs\Python\Python310-32\lib\site-packages\scipy\optimize\_differentiable_functions.py", line 137, in update_fun
self.f = fun_wrapped(self.x)
File "C:\Users\adm\AppData\Local\Programs\Python\Python310-32\lib\site-packages\scipy\optimize\_differentiable_functions.py", line 134, in fun_wrapped
return fun(np.copy(x), *args)
File "E:\NEW docs\dist_mat.py", line 31, in <lambda>
f_to_optMin = lambda w: np.sum(loss_res(X @ w.ravel() - y)) # 2, 10, [2,5]
ValueError: operands could not be broadcast together with shapes (2,) (2,5)
</code></pre>
| <python><scipy-optimize-minimize> | 2023-08-15 11:54:32 | 1 | 645 | JeeyCi |
76,905,648 | 20,200,927 | UTF-8 Encoding is not showing special characters | <h3>Question:</h3>
<p>I'm facing an issue with my Python function that reads data from a CSV file and converts it to JSON format. The CSV file contains special Slovenian letters such as "č," "š," "ž," etc. I'm using UTF-8 encoding for both reading and writing the files, which, to my knowledge, should support these characters. However, the function doesn't handle these characters the way I intended: they appear as Unicode escape sequences in the output JSON file.</p>
<p>Here is a simplified version of the function:</p>
<pre class="lang-py prettyprint-override"><code>import csv
import json
def read_rail_nodes(_in_file: str, _out_file: str) -> None:
json_objects = []
with open(_in_file, 'r', encoding='utf-8') as file:
csv_reader = csv.reader(file)
next(csv_reader)
for row in csv_reader:
id, station_name, _, _ = row
json_objects.append({'id': id, 'station_name': station_name})
with open(_out_file, 'w', encoding='utf-8') as file:
json.dump(json_objects, indent=4, fp=file)
print("\n[rail_nodes.csv] data converted and saved to [", _out_file, "]\n")
read_rail_nodes('rail_nodes.csv', 'rail_nodes.json')
</code></pre>
<p><strong>Sample Input:</strong></p>
<pre><code>43002,Laško,46.15453611,15.23225833
</code></pre>
<p><strong>Sample Output:</strong></p>
<pre class="lang-json prettyprint-override"><code> {
"id": "43002",
"station_name": "La\u0161ko"
},
</code></pre>
<p><strong>Desired Output:</strong></p>
<pre><code> {
"id": "43002",
"station_name": "Laško"
},
</code></pre>
<p>I am working in VS Code and I've double-checked my encoding: it is set to UTF-8, and opening the <code>.json</code> file with any other encoding only made it worse.</p>
<p>One potential fix I have been considering, though I am wary of complications it may cause later, is to read the <code>.json</code> file back with UTF-8 encoding; but when it comes to visually presenting the JSON data, I have not been able to come up with a suitable approach.</p>
<p>Am I missing something in my code, is there something specific I need to do to properly support special Slovenian letters when processing files with UTF-8 encoding, or are my hands tied?</p>
<p>Any help or guidance would be greatly appreciated!</p>
<h3>Additional Information:</h3>
<p>Python version: 3.11.3 <br>
Operating System: MacOS</p>
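For reference, a minimal sketch of the usual fix: `json.dump`/`json.dumps` escapes all non-ASCII characters by default, and the documented `ensure_ascii=False` argument writes them verbatim instead:

```python
import json

rows = [{"id": "43002", "station_name": "La\u0161ko"}]

# ensure_ascii=False keeps "š" as a literal character instead of "\u0161"
text = json.dumps(rows, indent=4, ensure_ascii=False)
```

Passing the same flag in the function above — `json.dump(json_objects, indent=4, fp=file, ensure_ascii=False)` — produces the desired output, since the file is already opened with UTF-8 encoding.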
| <python><utf-8> | 2023-08-15 11:41:03 | 1 | 320 | i_hate_F_sharp |
76,905,259 | 20,107,918 | PyGWalker: cannot import name 'Field' from 'pydantic' | <p><code>pygwalker</code> was installed with: <code>(streamlit) c:\XxX\Anaconda3>conda install -c conda-forge pygwalker</code> (15 Aug '23)</p>
<p><code>import pygwalker as pyg</code> threw an <code>ImportError</code>, shown at the end of this question.</p>
<p><strong>question</strong>: how can one resolve this?</p>
<p>Looking through the file structure, the following was noted.
In "C:\XxX\Anaconda3\envs\streamlit\Lib\site-packages\pydantic\__init__.py"</p>
<pre><code>from .fields import Required
from .main import BaseConfig, BaseModel, create_model, validate_model
</code></pre>
<p>In "C:\XxX\Anaconda3\envs\streamlit\Lib\site-packages\pydantic\main.py"</p>
<pre><code>from .fields import Field
</code></pre>
<p>and in "C:\XxX\Anaconda3\envs\streamlit\Lib\site-packages\pydantic\fields.py"</p>
<pre><code>class Field:
__slots__ = (
... ... )
</code></pre>
<hr />
<pre class="lang-py prettyprint-override"><code>---------------------------------------------------------------------------
ImportError Traceback (most recent call last)
File <timed exec>:7
File c:\XxX\Anaconda3\envs\streamlit\Lib\site-packages\pygwalker\__init__.py:14
11 __version__ = "0.2.1"
12 __hash__ = __rand_str()
---> 14 from pygwalker.api.walker import walk
15 from pygwalker.api.gwalker import GWalker
16 from pygwalker.api.html import to_html
File c:\XxX\Anaconda3\envs\streamlit\Lib\site-packages\pygwalker\api\walker.py:6
2 import inspect
4 from typing_extensions import Literal
----> 6 from .pygwalker import PygWalker
7 from pygwalker.data_parsers.base import FieldSpec, BaseDataParser
8 from pygwalker._typing import DataFrame
File c:\XxX\Anaconda3\envs\streamlit\Lib\site-packages\pygwalker\api\pygwalker.py:19
13 from pygwalker.services.tip_tools import TipOnStartTool
14 from pygwalker.services.render import (
15 render_gwalker_html,
16 render_gwalker_iframe,
17 get_max_limited_datas
18 )
---> 19 from pygwalker.services.preview_image import PreviewImageTool, render_preview_html, ChartData
20 from pygwalker.services.upload_data import (
21 BatchUploadDatasToolOnWidgets,
22 BatchUploadDatasToolOnJupyter
23 )
24 from pygwalker.services.spec import get_spec_json
File c:\XxX\Anaconda3\envs\streamlit\Lib\site-packages\pygwalker\services\preview_image.py:4
1 from typing import Optional, List, Dict
3 from jinja2 import Environment, PackageLoader
----> 4 from pydantic import BaseModel, Field
6 from pygwalker.utils.display import display_html
9 jinja_env = Environment(
10 loader=PackageLoader("pygwalker"),
11 autoescape=(()), # select_autoescape()
12 )
ImportError: cannot import name 'Field' from 'pydantic' (c:\XxX\Anaconda3\envs\streamlit\Lib\site-packages\pydantic\__init__.py)
</code></pre>
<p>PS:
This is somewhat related but doesn't resolve the problem - <a href="https://stackoverflow.com/questions/66645515/why-i-cant-import-basemodel-from-pydantic">Why i can't import BaseModel from Pydantic?</a></p>
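A small framework-agnostic diagnostic sketch (the helper name is my own) for checking whether an installed package actually exports the name an import expects — here, whether pygwalker's `from pydantic import Field` can succeed against the installed pydantic. A `False` result would suggest the installed pydantic predates what pygwalker expects, so upgrading pydantic (or pinning pygwalker to a matching release) is the likely remedy:

```python
import importlib

def exports(module_name, attr):
    """Return True if `module_name` imports and exposes `attr`."""
    try:
        module = importlib.import_module(module_name)
    except ImportError:
        return False
    return hasattr(module, attr)

# e.g. exports("pydantic", "Field") in the affected environment
```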
| <python><import><importerror> | 2023-08-15 10:45:58 | 1 | 399 | semmyk-research |
76,905,100 | 18,972,785 | How to take a part of content in xml file using python? | <p>I have parsed an XML file using Python. One of the tags contains an ID followed by a sequence of amino acids, and I only want the amino acid sequence, not the ID.
This is what the tag looks like:</p>
<pre><code> <sequences>
<sequence format="FASTA">&gt;DB00001 sequence
LTYTDCTESGQNLCLCEGSNVCGQGNKCILGSDGEKNQCVTGEGTPKPQSHNDGDFEEIP
EEYLQ</sequence>
</sequences>
</code></pre>
<p>When i use this code:</p>
<pre><code>drug.find('sequences').find('sequence').text
</code></pre>
<p>it prints:</p>
<pre><code>>DB00001 sequence
LTYTDCTESGQNLCLCEGSNVCGQGNKCILGSDGEKNQCVTGEGTPKPQSHNDGDFEEIP
EEYLQ
</code></pre>
<p>But i want to print only <code>LTYTDCTESGQNLCLCEGSNVCGQGNKCILGSDGEKNQCVTGEGTPKPQSHNDGDFEEIPEEYLQ</code></p>
<p>I'd appreciate any answer that solves my problem.</p>
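A minimal sketch of the usual approach, assuming the `>DB00001 sequence` header is always the first line: drop it with `splitlines()` and join the rest to undo the FASTA line wrapping:

```python
text = """>DB00001 sequence
LTYTDCTESGQNLCLCEGSNVCGQGNKCILGSDGEKNQCVTGEGTPKPQSHNDGDFEEIP
EEYLQ"""

# Skip the header line, then concatenate the wrapped sequence lines.
sequence = "".join(text.strip().splitlines()[1:])
```

Applied to the parsed tree this becomes `"".join(drug.find('sequences').find('sequence').text.strip().splitlines()[1:])`.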
| <python><xml> | 2023-08-15 10:17:52 | 2 | 505 | Orca |
76,905,060 | 11,630,148 | dj-rest-auth TemplateResponseMixin requires either a definition of 'template_name' or an implementation of 'get_template_names()' | <p>I'm using <code>dj_rest_auth</code> for authentication and when I click on the verify email link, this error comes out <a href="https://i.sstatic.net/YFDub.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/YFDub.png" alt="enter image description here" /></a></p>
<p>I have overridden the <code>ConfirmEmailView</code> of <code>django-allauth</code> but am still getting the error.
My views are:</p>
<pre class="lang-py prettyprint-override"><code>class CustomConfirmEmailView(ConfirmEmailView):
template_name = 'accounts/register.html'
</code></pre>
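A common cause — an assumption here, since the URL configuration is not shown — is that allauth's own confirm-email URL still matches before the custom view, so the subclass carrying `template_name` is never used. A sketch of a `urls.py` entry routing the confirmation link to the custom view (the import path is hypothetical; the pattern and name must match what the verification email generates):

```python
from django.urls import re_path

from .views import CustomConfirmEmailView  # hypothetical import path

urlpatterns = [
    # Declare before including allauth/dj-rest-auth URLs so this pattern wins.
    re_path(
        r"^account-confirm-email/(?P<key>[-:\w]+)/$",
        CustomConfirmEmailView.as_view(),
        name="account_confirm_email",
    ),
]
```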
| <python><django><django-rest-framework><dj-rest-auth> | 2023-08-15 10:11:40 | 1 | 664 | Vicente Antonio G. Reyes |
76,904,926 | 8,587,796 | cannot run aidegen, missing the dataclasses module | <p>I have been using aidegen to develop on AOSP.</p>
<p>For some reason, I cannot get it to work on one of the build trees I am working on. I get this error whether I run it inside a venv or outside one, each time with the <code>dataclasses</code> module installed:</p>
<pre><code>$: aidegen
Traceback (most recent call last):
File "/tmp/Soong.python_4y5nr8sq/__soong_entrypoint_redirector__.py", line 6, in <module>
runpy._run_module_as_main("aidegen.aidegen_main", alter_argv=False)
File "/usr/lib/python3.6/runpy.py", line 193, in _run_module_as_main
"__main__", mod_spec)
File "/usr/lib/python3.6/runpy.py", line 85, in _run_code
exec(code, run_globals)
File "/tmp/Soong.python_4y5nr8sq/aidegen/aidegen_main.py", line 50, in <module>
from aidegen.lib import aidegen_metrics
File "/tmp/Soong.python_4y5nr8sq/aidegen/lib/aidegen_metrics.py", line 24, in <module>
from aidegen.lib import common_util
File "/tmp/Soong.python_4y5nr8sq/aidegen/lib/common_util.py", line 40, in <module>
from atest import atest_utils
File "/tmp/Soong.python_4y5nr8sq/atest/atest_utils.py", line 45, in <module>
from dataclasses import dataclass
ModuleNotFoundError: No module named 'dataclasses'
</code></pre>
<pre><code>$: pip install dataclasses
Requirement already satisfied: dataclasses in <path_to_venv>/venv_3/lib/python3.6/site-packages
</code></pre>
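A hint visible in the traceback: the entrypoint runs under the system interpreter (`/usr/lib/python3.6/runpy.py`), where `dataclasses` is not in the stdlib, while `pip install dataclasses` satisfied the requirement inside the venv — a different interpreter. A small sketch (the helper is my own) for checking which interpreter actually runs and whether it can see the module:

```python
import importlib.util
import sys

def report():
    """Return the running interpreter and whether dataclasses is importable."""
    has_dataclasses = (
        sys.version_info >= (3, 7)  # stdlib module since Python 3.7
        or importlib.util.find_spec("dataclasses") is not None  # 3.6 backport
    )
    return sys.executable, sys.version_info[:2], has_dataclasses

exe, version, ok = report()
```

Running this inside the failing process (or installing the backport against `/usr/lib/python3.6`'s pip rather than the venv's) should show whether the two environments diverge.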
| <python><android-source> | 2023-08-15 09:51:01 | 0 | 353 | Paul |
76,904,759 | 1,648,641 | Ignoring None values when calling dumps on a marshmallow schema | <p>I am using a marshmallow schema. One of the member variables is created like so</p>
<p><code>uc = fields.Integer(data_key='uc', description='the description')</code></p>
<p>At some point I use this schema to dump a class as JSON, something like</p>
<p><code>json=schema_instance.dumps(my_class)</code></p>
<p>This all works fine. However, sometimes <code>my_class.uc</code> is <code>None</code>. Is there a way to ignore <code>None</code> values when creating the JSON? At the moment it is included like so</p>
<p><code>...'uc': null...</code></p>
<p>In this case <code>uc</code> is optional and I would like it to be excluded.</p>
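marshmallow has no dump-time flag for this, but a widely used pattern is a `@post_dump` hook on the schema that strips `None` values from the dumped dict before serialization. The filtering step itself is plain Python, sketched here without marshmallow so it stands alone:

```python
import json

def drop_none(data):
    """Remove keys whose value is None before serializing."""
    return {key: value for key, value in data.items() if value is not None}

payload = drop_none({"uc": None, "name": "example"})
text = json.dumps(payload)  # "uc" no longer appears
```

Inside the schema, the same body goes into a method decorated with `@post_dump`, which marshmallow invokes on the dumped data before `dumps` serializes it.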
| <python><marshmallow> | 2023-08-15 09:27:09 | 1 | 1,870 | Lieuwe |
76,904,746 | 11,630,148 | django.db.utils.IntegrityError: UNIQUE constraint failed: accounts_profile.user_id | <p>I'm having issues with registering a user in my project. I have a custom serializer that inherits from the <code>RegisterView</code> of <code>dj_rest_auth</code>. My code is:</p>
<p><code>views.py</code></p>
<pre class="lang-py prettyprint-override"><code>class ProfileDetail(APIView):
def get_object(self, pk):
try:
return Profile.objects.get(pk=pk)
except Profile.DoesNotExist:
raise Http404
def get(self, request, pk, format=None):
profile = self.get_object(pk)
serializer = ProfileSerializer(profile)
return Response(serializer.data)
class CustomRegisterView(RegisterView):
serializer_class = CustomRegisterSerializer
</code></pre>
<p><code>serializer.py</code></p>
<pre class="lang-py prettyprint-override"><code>class CustomRegisterSerializer(RegisterSerializer):
user_type = serializers.ChoiceField(choices=[('seeker', 'Seeker'), ('recruiter', 'Recruiter')])
class Meta:
model = User
fields = "__all__"
def get_cleaned_data(self):
data = super().get_cleaned_data()
data['user_type'] = self.validated_data.get('user_type', '')
return data
def save(self, request):
user = super().save(request)
user_type = self.validated_data.get('user_type')
Profile.objects.create(user=user, user_type=user_type)
return user
</code></pre>
<p><code>models.py</code></p>
<pre class="lang-py prettyprint-override"><code>class Profile(models.Model):
class Type(models.TextChoices):
seeker = "Seeker"
recruiter = "Recruiter"
base_type = Type.seeker
user = models.OneToOneField(User, on_delete=models.CASCADE)
user_type = models.CharField(choices=Type.choices, default=base_type, max_length=20)
</code></pre>
<p><code>signals.py</code></p>
<pre class="lang-py prettyprint-override"><code>@receiver(post_save, sender=User)
def create_user_profile(sender, instance, created, **kwargs):
if created:
Profile.objects.create(user=instance, user_type='')
post_save.connect(create_user_profile, sender=User)
</code></pre>
<p>I expect the code to return a <code>200</code> after I perform a <code>POST</code> request.</p>
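The likely cause, inferred from the code shown: both the `post_save` signal and `CustomRegisterSerializer.save` create a `Profile` for the same user, and the `OneToOneField` makes `user_id` unique, so the second insert violates the constraint. A framework-free `sqlite3` sketch reproducing the same violation:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts_profile (user_id INTEGER UNIQUE, user_type TEXT)")

# First insert: what the post_save signal does when the user is created.
conn.execute("INSERT INTO accounts_profile VALUES (1, '')")

# Second insert: what the serializer's save() then attempts for the same user.
try:
    conn.execute("INSERT INTO accounts_profile VALUES (1, 'seeker')")
except sqlite3.IntegrityError as exc:
    error = str(exc)  # mirrors "UNIQUE constraint failed: accounts_profile.user_id"
```

Removing one of the two creation paths — or having the serializer update the profile the signal already created instead of creating a second one — avoids the duplicate insert.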
| <python><django><django-rest-framework> | 2023-08-15 09:24:33 | 1 | 664 | Vicente Antonio G. Reyes |
76,904,673 | 7,306,999 | Pandas: difference from a fixed time of the day for a series of datetimes | <p>Suppose I have a Series with datetimes. As a simple example:</p>
<pre><code>import pandas as pd
data = [
"2023-08-14 21:56",
"2023-08-15 23:04",
"2023-08-16 15:43",
"2023-08-17 12:05",
"2023-08-18 22:19",
]
s = pd.Series(data, dtype="datetime64[ns]")
</code></pre>
<p>For each datetime I would like to know the difference in hours from a particular time on that day (as an example: 21:00).</p>
<p>So my desired result would be as follows:</p>
<pre><code>0 0.93
1 2.07
2 -5.28
3 -8.92
4 1.32
dtype: float64
</code></pre>
<p>What would be the quickest way to accomplish this?</p>
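One approach, sketched with the sample data: normalize each timestamp to midnight of its own day, add the reference time as a `Timedelta`, and convert the difference to hours:

```python
import pandas as pd

data = [
    "2023-08-14 21:56",
    "2023-08-15 23:04",
    "2023-08-16 15:43",
    "2023-08-17 12:05",
    "2023-08-18 22:19",
]
s = pd.Series(data, dtype="datetime64[ns]")

# Reference moment on each timestamp's own day (21:00 here).
reference = s.dt.normalize() + pd.Timedelta(hours=21)
hours = ((s - reference).dt.total_seconds() / 3600).round(2)
```

All operations are vectorized, so this stays fast for large series.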
| <python><pandas><datetime> | 2023-08-15 09:14:06 | 1 | 8,674 | Xukrao |