| QuestionId (int64, 74.8M-79.8M) | UserId (int64, 56-29.4M) | QuestionTitle (string, 15-150 chars) | QuestionBody (string, 40-40.3k chars) | Tags (string, 8-101 chars) | CreationDate (stringdate, 2022-12-10 09:42:47 to 2025-11-01 19:08:18) | AnswerCount (int64, 0-44) | UserExpertiseLevel (int64, 301-888k) | UserDisplayName (string, 3-30 chars, nullable ⌀) |
|---|---|---|---|---|---|---|---|---|
78,276,558
| 3,246,693
|
pandas.DataFrame.fillna with booleans?
|
<p>I have two dataframes: one that contains data and one that contains exclusions that need to be merged onto the data and marked as included (True or False). For a couple of years I have done this by adding a new column to the exclusions dataframe and setting everything to True, then merging that onto the main dataframe, which leaves the additional column containing either True or NaN. Finally I run a <code>fillna</code> to replace all the NaN values with False and I'm good to go.</p>
<pre><code>import pandas as pd
MainData = {'name': ['apple', 'pear', 'orange', 'watermelon'],
            'other': ['blah', 'blah', 'blah', 'blah']}
dfMainData = pd.DataFrame(MainData)

Exclusions = {'name': ['pear', 'watermelon'],
              'reason': ['pears suck', 'too messy!']}
dfExclusions = pd.DataFrame(Exclusions)
dfExclusions['excluded'] = True
dfMainData = pd.merge(dfMainData, dfExclusions, how='left', on='name')
dfMainData['excluded'] = dfMainData['excluded'].fillna(False)
</code></pre>
<p>I was previously running pandas 1.2.4, but as part of some code updates I am migrating to 2.2.1, and I now receive the following warning:</p>
<blockquote>
<p>dfMainData['excluded'] = dfMainData['excluded'].fillna(False)
:1: FutureWarning: Downcasting object dtype arrays on .fillna, .ffill, .bfill is deprecated and will change in a future version. Call result.infer_objects(copy=False) instead. To opt-in to the future behavior, set <code>pd.set_option('future.no_silent_downcasting', True)</code></p>
</blockquote>
<p>It still technically works, but this is apparently not the idiomatic pandas way of doing things anymore, so how should I go about this now to avoid compatibility issues in the future?</p>
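One possible future-proof route, sketched here with the same sample data (this is an editorial suggestion, not part of the original post): merge with <code>indicator=True</code>, whose <code>_merge</code> column is always populated, so no <code>fillna</code> and no downcasting is involved.

```python
import pandas as pd

dfMainData = pd.DataFrame({'name': ['apple', 'pear', 'orange', 'watermelon'],
                           'other': ['blah', 'blah', 'blah', 'blah']})
dfExclusions = pd.DataFrame({'name': ['pear', 'watermelon'],
                             'reason': ['pears suck', 'too messy!']})

# indicator=True adds a '_merge' column that is always populated,
# so the boolean flag can be derived without fillna or downcasting
dfMainData = pd.merge(dfMainData, dfExclusions, how='left', on='name', indicator=True)
dfMainData['excluded'] = dfMainData.pop('_merge').eq('both')
```

The resulting column is boolean from the start, which sidesteps the `FutureWarning` entirely.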
|
<python><pandas><dataframe><fillna>
|
2024-04-04 20:52:46
| 2
| 803
|
user3246693
|
78,276,476
| 274,610
|
How can I use Python to get a stock's Earnings date and time from the TraderWorkstation Api
|
<p>I’m trying to get Earnings Date/Time info for AAPL using the Trader Workstation Api for Python.</p>
<p>I have subscribed to Wall Street Horizon data.</p>
<p>Using the following Python script:</p>
<pre><code>from ibapi.client import *
from ibapi.wrapper import *
import os

port = 7496

class TestApp(EClient, EWrapper):
    def __init__(self):
        EClient.__init__(self, self)

    def nextValidId(self, orderId: int):
        self.reqScannerParameters()

    def scannerParameters(self, xml):
        dir_path = "C:\\Temp\\Traders Academy"
        file_path = os.path.join(dir_path, "scanner.xml")
        with open(file_path, 'w') as file:
            file.write(xml)
        print("Scanner parameters have been stored.")

app = TestApp()
app.connect("127.0.0.1", port, 1001)
app.run()
</code></pre>
<p>I was able to store the XML, and in looking at it, I noticed:</p>
<pre><code><ScanType>
<vendor>WSH</vendor>
<displayName>Upcoming Earnings (WSH)</displayName>
<scanCode>WSH_NEXT_EARNINGS</scanCode>
<instruments>STK,STOCK.NA,STOCK.EU,STOCK.ME,STOCK.HK</instruments>
<absoluteColumns>false</absoluteColumns>
<Columns varName="columns">
<Column>
<colId>742</colId>
<name>Upcoming Earnings</name>
<display>true</display>
<section>m</section>
<displayType>DATA</displayType>
</Column>
<ColumnSetRef>
<colId>0</colId>
<name>BA</name>
<display>false</display>
<displayType>DATA</displayType>
</ColumnSetRef>
</Columns>
</ScanType>
</code></pre>
<p>I’m wondering if I can use this XML to create a filter to access the Earnings Date/Time info.</p>
<p>If so, how would I do this in Python?</p>
<p>The web manual pages for the Trader Workstation API are now deprecated, and the documented code to access Wall Street Horizons data no longer works. They say "go to Traders Academy", but there are no API manual pages there. Very frustrating.</p>
<p>Charles</p>
|
<python><tws>
|
2024-04-04 20:34:52
| 1
| 529
|
user274610
|
78,276,391
| 5,844,134
|
Attach a debugger to a flask process started by tasks.json in vscode
|
<p>I have the following tasks.json in my workspace:</p>
<pre><code>{
"version": "2.0.0",
"tasks": [
{
"label": "Run Locally",
"dependsOn": ["Run Backend", "Run Frontend"],
"dependsOrder": "parallel",
},
{
"label": "Run Frontend",
"type": "shell",
"options": {
"cwd": "${workspaceFolder}/frontend"
},
"runOptions": {
"runOn": "folderOpen"
},
"command": "npm",
"args": ["run", "dev"],
"presentation": {
"panel": "dedicated"
}
},
{
"label": "Run Backend",
"options": {
"cwd": "${workspaceFolder}/backend"
},
"runOptions": {
"runOn": "folderOpen"
},
"command": "python",
"args": ["-m", "flask", "--debug", "--app", "main", "run"],
"presentation": {
"panel": "dedicated"
}
},
]
}
</code></pre>
<p>When I open VSCode, it starts two terminals: one running the React frontend with <code>npm run dev</code> and another running the Flask application with <code>python -m flask --debug --app main run</code>.</p>
<p>What I'm trying to do is attach the VSCode debugger to the Flask process, so that when I set a breakpoint I can inspect the variables. Is this even possible?</p>
<p>All the solutions I found point to using launch.json, but if possible I'd like to keep this in tasks.json because it runs automatically when I open VSCode.</p>
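One approach worth sketching (an editorial assumption, not from the post; the port 5678 and the config names are illustrative): start the Flask process under debugpy inside the task, then attach with a minimal launch.json configuration. Note that the reloader spawns a child process the listener would not cover, hence <code>--no-reload</code> in place of <code>--debug</code>.

```jsonc
// tasks.json: the "Run Backend" task started under debugpy (sketch)
{
    "label": "Run Backend",
    "options": { "cwd": "${workspaceFolder}/backend" },
    "runOptions": { "runOn": "folderOpen" },
    "command": "python",
    "args": ["-m", "debugpy", "--listen", "5678",
             "-m", "flask", "--app", "main", "run", "--no-reload"],
    "presentation": { "panel": "dedicated" }
}

// launch.json: attach configuration (sketch)
{
    "name": "Attach to Flask",
    "type": "debugpy",
    "request": "attach",
    "connect": { "host": "127.0.0.1", "port": 5678 }
}
```

The task still runs automatically on folder open; attaching the debugger becomes a one-click step rather than the thing that launches the process.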
|
<python><visual-studio-code><flask><vscode-debugger><vscode-tasks>
|
2024-04-04 20:14:15
| 1
| 754
|
Yuri Waki
|
78,276,306
| 5,676,198
|
ModuleNotFoundError: No module named 'psycopg2._psycopg' in AWS GLUE - Python 3.9
|
<p>I am trying to use the library <code>psycopg2</code> in AWS Glue.</p>
<p>I followed <a href="https://stackoverflow.com/a/58305654/5676198">this question</a>:</p>
<p>"What I did was install psycopg2-binary into a directory and zip up the contents of that directory:</p>
<pre><code>mkdir psycopg2-binary
cd psycopg2-binary
pip install psycopg2-binary -t .
zip -r9 psycopg2.zip *
</code></pre>
<p>I then copied psycopg2.zip to an S3 bucket and added it as an extra Python library under "Python library path" in the Glue Spark job.</p>
<p>But I got this error when I tried to import the library with <code>import psycopg2</code>:</p>
<pre class="lang-bash prettyprint-override"><code> import psycopg2
File "/tmp/psycopg2.zip/psycopg2/__init__.py", line 51, in <module>
from psycopg2._psycopg import ( # noqa
ModuleNotFoundError: No module named 'psycopg2._psycopg'
</code></pre>
<ul>
<li>I am using <code>Python 3.9</code></li>
<li>I uploaded the zip in the <code>Python library path</code>.</li>
</ul>
<p>I also tried to zip only the psycopg2 folder (not the entire project) and obtained the same error.</p>
<p>I also tried to pass the <code>Job parameters</code> as <code>--additional-python-modules</code>:<code>psycopg2</code> without the zip file, this way Glue would try to install it. But I got the following error:</p>
<pre><code>Error Category: IMPORT_ERROR; ModuleNotFoundError: No module named 'psycopg2'
</code></pre>
<p>Also tried <code>--additional-python-modules</code>:<code>psycopg2-binary==2.9.9</code> with the same error.</p>
<p>The package is being installed, but the import is not working.</p>
|
<python><amazon-web-services><psycopg2><aws-glue>
|
2024-04-04 19:55:20
| 1
| 1,061
|
Guilherme Parreira
|
78,276,261
| 5,089,311
|
Python CSV reader: need ignore quoted comma as delimiter
|
<p>I need to parse a text file by commas, but not by quoted commas.<br />
It looks like a trivial task, but I can't make Python do it right, mainly because an unquoted string precedes the quoted string, which probably makes it not well-formed CSV; but I need it exactly this way.</p>
<p>Example input:</p>
<pre><code>cmd,print "AA"
cmd, print "AA,BB,CC"
cmd, print " AA, BB, CC ", separate-window
</code></pre>
<p>Desired result (in Python syntax):</p>
<pre><code>[['cmd', 'print "AA"'],
['cmd', 'print "AA,BB,CC"'],
['cmd', 'print " AA, BB, CC "', 'separate-window']]
</code></pre>
<p>Stripping surrounding spaces is optional, once I get a proper list I can <code>strip()</code> each item, that's not a problem.</p>
<p><code>csv.reader</code> splits on the quoted commas too, so I instead get <code>['cmd', 'print "AA', 'BB', 'CC"']</code>.</p>
<p><code>shlex</code> with <code>.whitespace=','</code> and <code>.whitespace_split=True</code> almost does the trick, but it removes the quotes: <code>['cmd', 'print AA, BB, CC']</code>. I need to retain the quotes.</p>
<p>I thought about <code>re.split</code>, but I have a very weak understanding of how the <code>(?=)</code> lookahead works...</p>
<p>I found a few <a href="https://stackoverflow.com/questions/118096/how-can-i-parse-a-comma-delimited-string-into-a-list-caveat">similar topics</a> over here, but none of the proposed answers work for me.</p>
<p>UPDATE: a screenshot for anyone questioning whether I do exactly what I describe:
<a href="https://i.sstatic.net/nMMxT.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/nMMxT.png" alt="screenshot" /></a></p>
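Since the post mentions <code>re.split</code> with a lookahead, here is a sketch of that route (assuming quote characters are balanced on every line): split only on commas followed by an even number of remaining double quotes, i.e. commas outside any quoted span.

```python
import re

lines = ['cmd,print "AA"',
         'cmd, print "AA,BB,CC"',
         'cmd, print " AA, BB, CC ", separate-window']

# A comma is a delimiter only if the rest of the line contains an even
# number of double quotes (so the comma is not inside a quoted span).
pattern = re.compile(r',(?=(?:[^"]*"[^"]*")*[^"]*$)')

result = [[field.strip() for field in pattern.split(line)] for line in lines]
```

This keeps the quotes intact, unlike `csv.reader` or `shlex` in the configurations described above.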
|
<python><csv><regex-lookarounds>
|
2024-04-04 19:45:17
| 2
| 408
|
Noob
|
78,276,231
| 3,795,219
|
Ensuring Order of Package Installation with Conda to Enable CUDA with PyTorch
|
<p>I am attempting to streamline the setup of a Python environment that leverages both CUDA and PyTorch through Conda. My goal is to craft a unified environment.yml file that concurrently installs CUDA and a CUDA-capable version of PyTorch. However, I've encountered an issue where the installation order by Conda leads to PyTorch defaulting to a CPU-only version, seemingly because it installs PyTorch prior to CUDA, missing the CUDA availability during its setup.</p>
<p><strong>Objective</strong>: Craft an environment.yml for Conda that guarantees the installation of CUDA before PyTorch to ensure the latter can leverage GPU capabilities.</p>
<p><strong>Problem</strong>: Conda's installation order occasionally prioritizes PyTorch before CUDA. As a result, PyTorch does not recognize the presence of CUDA on the system and reverts to a CPU-only installation.</p>
<p><strong>Key Question</strong>: Is there a methodology or a specific configuration within Conda that enforces the installation of CUDA before PyTorch to circumvent this issue?</p>
<p>Below is a minimal representation of my environment.yml file that illustrates the problem:</p>
<pre><code># environment.yml
name: notworking
dependencies:
- nvidia/label/cuda-12.1.0::cuda
- python=3.11
- pytorch::pytorch
- pytorch::pytorch-cuda=12.1
- pytorch::torchaudio
- pytorch::torchvision
</code></pre>
<p>While relocating the pytorch dependencies to a pip section within the YAML file addresses this immediate challenge, it does not provide a scalable solution for future scenarios where similar sequence-dependency issues may arise with packages lacking pip installers.</p>
<p><strong>Additional Context</strong>:</p>
<pre><code>Operating System: Ubuntu 22.04
Conda Version: 23.3.1
Hardware: NVIDIA RTX 3060
</code></pre>
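Since conda solves the whole environment in one pass, the order of entries in the YAML should not matter; only the constraints do. A hedged sketch (untested; the build-string pin is an assumption about how the pytorch channel names its CUDA builds) that forbids the solver from selecting a CPU-only variant outright:

```yaml
# environment.yml (sketch): pin a CUDA build of pytorch via a build string
name: working
channels:
  - pytorch
  - nvidia
dependencies:
  - python=3.11
  - pytorch::pytorch=*=*cuda*      # assumption: CUDA build strings contain "cuda"
  - pytorch::pytorch-cuda=12.1
  - pytorch::torchaudio
  - pytorch::torchvision
```

If the build-string constraint is satisfiable, the solver must pick a GPU build regardless of the order in which packages appear or are downloaded.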
|
<python><pytorch><anaconda><conda>
|
2024-04-04 19:36:00
| 0
| 8,645
|
Austin
|
78,276,161
| 13,354,617
|
Auto resize Canvas to fit tkinter window
|
<p>So I want to make the canvas fill the whole window, but currently it looks like this:
<a href="https://i.sstatic.net/SQBaW.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/SQBaW.png" alt="ss" /></a></p>
<p>I want to do this without hard-coding the width and height when creating the canvas.</p>
<p>here's the code:</p>
<pre><code>import PIL.Image
from PIL import Image, ImageTk
from tkinter import *

class ExampleApp(Frame):
    def __init__(self, master):
        Frame.__init__(self, master=None)
        self.canvas = Canvas(self)
        self.sbarv = Scrollbar(self, orient=VERTICAL)
        self.sbarh = Scrollbar(self, orient=HORIZONTAL)
        self.sbarv.config(command=self.canvas.yview)
        self.sbarh.config(command=self.canvas.xview)

        self.canvas.config(yscrollcommand=self.sbarv.set)
        self.canvas.config(xscrollcommand=self.sbarh.set)

        self.canvas.grid(row=0, column=0, sticky="nsew")
        self.sbarv.grid(row=0, column=1, sticky="ns")
        self.sbarh.grid(row=1, column=0, sticky="ew")

        self.im = PIL.Image.open("a.png")
        self.imw, self.imh = self.im.size
        self.canvas.config(scrollregion=(0, 0, self.imw, self.imh))
        self.tk_im = ImageTk.PhotoImage(self.im)
        self.canvas.create_image(0, 0, anchor="nw", image=self.tk_im)

if __name__ == "__main__":
    root = Tk()
    app = ExampleApp(root)
    app.grid(sticky="nsew")
    root.geometry(f"{1000}x{600}")
    root.mainloop()
</code></pre>
|
<python><tkinter><tkinter-canvas>
|
2024-04-04 19:16:15
| 1
| 369
|
Mhmd Admn
|
78,276,136
| 9,357,484
|
PyTorch 1.7.1 unable to detect CUDA version 11.0
|
<p>PyTorch is unable to locate CUDA on my GPU-enabled machine, even though CUDA 11.0 is installed. The graphics card is a GeForce RTX 2070 SUPER from NVIDIA. The GPU and CUDA configuration can be seen in the following picture:</p>
<p><a href="https://i.sstatic.net/Ckjs1.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Ckjs1.png" alt="enter image description here" /></a></p>
<p>I also tried the command <strong>!python -m torch.utils.collect_env</strong> on the Jupyter notebook to verify the environment setup which returned,</p>
<pre><code>Collecting environment information...
/home/user/anaconda3/envs/tf-gpu/lib/python3.6/site-packages/torch/cuda/__init__.py:52: UserWarning: CUDA initialization: CUDA unknown error - this may be due to an incorrectly set up environment, e.g. changing env variable CUDA_VISIBLE_DEVICES after program start. Setting the available devices to be zero. (Triggered internally at /opt/conda/conda-bld/pytorch_1607370120218/work/c10/cuda/CUDAFunctions.cpp:100.)
return torch._C._cuda_getDeviceCount() > 0
PyTorch version: 1.7.1
Is debug build: False
CUDA used to build PyTorch: 11.0
ROCM used to build PyTorch: N/A
OS: Ubuntu 20.04.1 LTS (x86_64)
GCC version: (Ubuntu 9.4.0-1ubuntu1~20.04.2) 9.4.0
Clang version: Could not collect
CMake version: version 3.16.3
Python version: 3.6 (64-bit runtime)
Is CUDA available: False
CUDA runtime version: 10.1.243
GPU models and configuration: GPU 0: GeForce RTX 2070 SUPER
Nvidia driver version: 450.119.03
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Versions of relevant libraries:
[pip3] numpy==1.19.2
[pip3] torch==1.7.1
[pip3] torchaudio==0.7.0a0+a853dff
[pip3] torchvision==0.8.2
[conda] blas 1.0 mkl
[conda] cudatoolkit 11.0.221 h6bb024c_0
[conda] mkl 2020.2 256
[conda] mkl-service 2.3.0 py36he8ac12f_0
[conda] mkl_fft 1.3.0 py36h54f3939_0
[conda] mkl_random 1.1.1 py36h0573a6f_0
[conda] numpy 1.19.2 py36h54aff64_0
[conda] numpy-base 1.19.2 py36hfa32c7d_0
[conda] pytorch 1.7.1 py3.6_cuda11.0.221_cudnn8.0.5_0 pytorch
[conda] torchaudio 0.7.2 py36 pytorch
[conda] torchvision 0.8.2 py36_cu110 pytorch
</code></pre>
<p><strong>torch.cuda.is_available()</strong> returned <strong>False</strong> and
<strong>print(torch.backends.cudnn.enabled)</strong> returned <strong>True</strong></p>
<p>The command I used to install the PyTorch is <strong>conda install pytorch==1.7.1 torchvision==0.8.2 torchaudio==0.7.2 cudatoolkit=11.0 -c pytorch</strong></p>
<p>How can I solve this problem?</p>
<p>Thank you in advance.</p>
|
<python><jupyter-notebook><pytorch>
|
2024-04-04 19:10:24
| 0
| 3,446
|
Encipher
|
78,275,952
| 1,445,015
|
Tkinter scrollbar on treeview table disappears when resized
|
<p>I wrote a table class using <code>Treeview</code>:</p>
<pre><code>class SimpleDataTable(tk.Frame):
    def __init__(self, parent, colHeadings, colWidths, height):
        tk.Frame.__init__(self, parent)
        self.parent = parent
        self.colHeadings = colHeadings
        self.colWidths = colWidths
        self.height = height
        self.dataDict = {}
        self.tree = None
        self.vsb = None
        self.hsb = None
        self.setupUI()

    def setupUI(self):
        self.tree = ttk.Treeview(self, columns=self.colHeadings, show="headings", height=self.height)
        self.tree.grid(column=0, row=0, sticky='news')

        self.vsb = ttk.Scrollbar(self, orient="vertical", command=self.tree.yview)
        self.tree.configure(yscrollcommand=self.vsb.set)
        self.vsb.grid(column=1, row=0, sticky='ns')

        self.hsb = ttk.Scrollbar(self, orient="horizontal", command=self.tree.xview)
        self.tree.configure(xscrollcommand=self.hsb.set)
        self.hsb.grid(column=0, row=1, sticky='ew')

        for i in range(2):
            self.grid_columnconfigure(i, weight=1)
            self.grid_rowconfigure(i, weight=1)

        self.setupColumns()
        self.populateData()

        # TODO: motion binding callbacks when the user resizes row/column
        # self.tree.bind("<Configure>", self.onTreeviewConfigure) --> this doesn't work
        # TODO: A way to delete data [and delete from the report]

    def addData(self, key, values: List[Any]):
        self.dataDict[key] = self.processValuesForDisplay(values)
        self.updateData()

    def setupColumns(self):
        for i, col in enumerate(self.colHeadings):
            self.tree.heading(col, text=col, anchor=tk.CENTER)
            self.tree.column(col, width=self.colWidths[i], anchor=tk.CENTER)

    def processValuesForDisplay(self, values: List[Any]) -> List[Any]:
        for i, val in enumerate(values):
            if isinstance(val, float):
                values[i] = round(val, 2)
        return values

    def populateData(self):
        for key, values in self.dataDict.items():
            self.tree.insert("", "end", values=tuple(values), text=str(key))

    def updateData(self):
        # When new data is added, the whole table gets re-written. Works in simple cases with a few rows and cols.
        # Clear the existing data; self.tree.get_children() returns a tuple which we need to unpack.
        self.tree.delete(*self.tree.get_children())
        self.populateData()

    def selectRowByKey(self, key):
        for item in self.tree.get_children():
            if self.tree.item(item, 'text') == str(key):
                self.tree.selection_set(item)
                self.tree.focus(item)
                self.tree.tag_configure('selected', background='lightblue')  # Change the background color to light blue
                self.tree.item(item, tags=('selected',))
                break

    def onTreeviewConfigure(self, event):
        # Resize columns when the treeview is configured (e.g., on window resize)
        for i, col in enumerate(self.colHeadings):
            width = self.tree.column(col, option="width")
            self.colWidths[i] = width
        self.resizeColumns()

    def resizeColumns(self):
        # Resize the columns based on stored widths
        for i, col in enumerate(self.colHeadings):
            width = self.colWidths[i]
            print(width)
            self.tree.column(col, width=width, anchor=tk.CENTER)
</code></pre>
<p>I put this table in a frame via below code:</p>
<pre><code>self.dataTable = SimpleDataTable(self, colHeadings = table_titles, colWidths = [350] * len(table_titles) ,height = 3)
self.dataTable.grid(row=5, column=0, columnspan=12, pady=10, padx=(20, 10))
</code></pre>
<p>Initially, the table looks as expected. The number of columns is dynamic; if the columns exceed the frame size, the horizontal scrollbar activates and I can use it to see all the columns.
However, if the user resizes any single column, the scrollbar disappears and all the columns shrink to fit within the parent frame.
I have tried different kinds of callbacks (shown in the code) to redraw the table when an event occurs, but nothing happens. What can I do?
I want the user to be able to adjust column widths while the scrollbar remains active.</p>
<p>Note that all the grids are configured in the relevant frames.</p>
|
<python><tkinter><treeview><scrollbar>
|
2024-04-04 18:36:14
| 1
| 1,741
|
Nasif Imtiaz Ohi
|
78,275,777
| 11,748,924
|
Why generator loss using BinaryCrossEntropy with from_logits enabled?
|
<p>From a simple vanilla GAN code I look from <a href="https://github.com/shinyflight/Simple-GAN-TensorFlow-2.0/blob/master/GAN.py" rel="nofollow noreferrer">GitHub</a></p>
<p>I saw this generator model with activation <code>sigmoid</code>:</p>
<pre><code># Generator
G = tf.keras.models.Sequential([
tf.keras.layers.Dense(28*28 // 2, input_shape = (z_dim,), activation='relu'),
tf.keras.layers.Dense(28*28, activation='sigmoid'),
tf.keras.layers.Reshape((28, 28))])
</code></pre>
<p>The <code>G</code>'s loss is defined like this where <code>from_logits</code> is enabled:</p>
<pre><code>cross_entropy = tf.keras.losses.BinaryCrossentropy(from_logits = True)
def G_loss(D, x_fake):
return cross_entropy(tf.ones_like(D(x_fake)), D(x_fake))
</code></pre>
<p>As far as I know, <code>from_logits=True</code> means the loss function accepts <code>y_pred</code> values ranging from <code>-infinity</code> to <code>infinity</code>, as opposed to <code>from_logits=False</code>, where the loss function assumes the values are between <code>0</code> and <code>1</code>.</p>
<p>As you can see, the output layer of the <code>G</code> model already has a <code>sigmoid</code> activation, so its output is between <code>0</code> and <code>1</code>.</p>
<p>So why is the author still using <code>from_logits=True</code>?</p>
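To make the mismatch concrete, here is a small stdlib-only sketch (not from the original code) of what happens when a value that has already passed through a sigmoid is treated as a logit: the loss applies sigmoid a second time, shrinking the effective prediction and inflating the loss.

```python
import math

def bce(y_true, y_pred, from_logits):
    """Binary cross-entropy for a single prediction (sketch)."""
    if from_logits:
        # the loss applies sigmoid itself when given logits
        y_pred = 1.0 / (1.0 + math.exp(-y_pred))
    y_pred = min(max(y_pred, 1e-7), 1.0 - 1e-7)
    return -(y_true * math.log(y_pred) + (1.0 - y_true) * math.log(1.0 - y_pred))

p = 0.9  # already a sigmoid output
loss_double_sigmoid = bce(1.0, p, from_logits=True)   # sigmoid applied twice
loss_correct = bce(1.0, p, from_logits=False)
```

So with `from_logits=True` on already-sigmoided values the computation is technically wrong but numerically tame, which may be why the code still appears to train.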
|
<python><tensorflow><machine-learning><keras><loss-function>
|
2024-04-04 18:03:32
| 2
| 1,252
|
Muhammad Ikhwan Perwira
|
78,275,627
| 13,354,617
|
Drawing a rectangle on a tkinter canvas isn't working
|
<p>I'm trying to draw a simple rectangle on an image placed inside a canvas in tkinter, but it's not drawing. There are no errors.</p>
<p>Is it because I'm not placing the image with <code>canvas.create_image</code>? I'm drawing the image inside a frame and placing that frame inside the canvas; it's the only way I found to open the full image with a scrolling feature.</p>
<p>Here's the code:</p>
<pre><code>import tkinter as tk
from PIL import ImageTk, Image
root = tk.Tk()
# Tkinter widgets needed for scrolling. The only native scrollable container that Tkinter provides is a canvas.
# A Frame is needed inside the Canvas so that widgets can be added to the Frame and the Canvas makes it scrollable.
canvas = tk.Canvas(root)
fTable = tk.Frame(canvas)
hor_scroll_bar = tk.Scrollbar(root)
ver_scroll_bar = tk.Scrollbar(root)
# Updates the scrollable region of the Canvas
def updateScrollRegion():
    canvas.update_idletasks()
    canvas.config(scrollregion=fTable.bbox())

# Sets up the Canvas, Frame, and scrollbars for scrolling
def createScrollableContainer():
    canvas.config(xscrollcommand=hor_scroll_bar.set, yscrollcommand=ver_scroll_bar.set, highlightthickness=0)
    hor_scroll_bar.config(orient=tk.HORIZONTAL, command=canvas.xview)
    ver_scroll_bar.config(orient=tk.VERTICAL, command=canvas.yview)

    hor_scroll_bar.pack(fill=tk.X, side=tk.BOTTOM, expand=tk.FALSE)
    ver_scroll_bar.pack(fill=tk.Y, side=tk.RIGHT, expand=tk.FALSE)
    canvas.pack(fill=tk.BOTH, side=tk.LEFT, expand=tk.TRUE)

    canvas.create_window(0, 0, window=fTable, anchor=tk.NW)

# Adding an image
def addNewLabel():
    img = ImageTk.PhotoImage(file="test.jpg")
    tk.Label(fTable, image=img).grid(row=0, column=0)
    canvas.img = img
    canvas.create_rectangle(0, 0, 100, 100, fill="blue")

    # Update the scroll region after new widgets are added
    updateScrollRegion()
createScrollableContainer()
addNewLabel()
root.mainloop()
</code></pre>
<p>Thank you</p>
|
<python><tkinter><python-imaging-library><tkinter-canvas>
|
2024-04-04 17:32:18
| 1
| 369
|
Mhmd Admn
|
78,275,550
| 5,980,655
|
Setting specific package version in Google Colab
|
<p>It's a long post but I'll try to make it more clear as possible.</p>
<p>For reproducibility reason, I would like to set up conda environment in Google Colab to use a specific version of some packages.</p>
<p>Based on many similar questions on this forum I followed the following steps:</p>
<p>I install <code>condacolab</code></p>
<pre><code>!pip install -q condacolab
import condacolab
condacolab.install() # expect a kernel restart
</code></pre>
<p>Everything's OK</p>
<pre><code>import condacolab
condacolab.check()
</code></pre>
<p><a href="https://i.sstatic.net/tsmyx.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/tsmyx.png" alt="enter image description here" /></a></p>
<p>I download miniconda</p>
<pre><code>!mkdir -p ~/miniconda3
!wget https://repo.anaconda.com/miniconda/Miniconda3-latest-Linux-x86_64.sh -O ~/miniconda3/miniconda.sh
!bash ~/miniconda3/miniconda.sh -b -u -p ~/miniconda3
!rm -rf ~/miniconda3/miniconda.sh
</code></pre>
<p>And I get the following warning (I don't know whether it's an issue):</p>
<pre><code>--2024-04-04 11:36:27-- https://repo.anaconda.com/miniconda/Miniconda3-latest-Linux-x86_64.sh
Resolving repo.anaconda.com (repo.anaconda.com)... 104.16.130.3, 104.16.131.3, 2606:4700::6810:8303, ...
Connecting to repo.anaconda.com (repo.anaconda.com)|104.16.130.3|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 144041912 (137M) [application/octet-stream]
Saving to: ‘/root/miniconda3/miniconda.sh’
/root/miniconda3/mi 100%[===================>] 137.37M 164MB/s in 0.8s
2024-04-04 11:36:28 (164 MB/s) - ‘/root/miniconda3/miniconda.sh’ saved [144041912/144041912]
PREFIX=/root/miniconda3
Unpacking payload ...
Installing base environment...
Downloading and Extracting Packages:
Downloading and Extracting Packages:
Preparing transaction: done
Executing transaction: done
installation finished.
WARNING:
You currently have a PYTHONPATH environment variable set. This may cause
unexpected behavior when running the Python interpreter in Miniconda3.
For best results, please verify that your PYTHONPATH only points to
directories of packages that are compatible with the Python interpreter
in Miniconda3: /root/miniconda3
</code></pre>
<p>I update the <code>base</code> environment since I got that it should be the way as answered here: <a href="https://stackoverflow.com/questions/75229250/is-there-a-method-to-run-a-conda-environment-in-google-colab/75239829#75239829">Is there a method to run a Conda environment in Google Colab?</a> and newly created conda environments won't last in other cells as commented in this answer: <a href="https://datascience.stackexchange.com/questions/75948/how-to-setup-and-run-conda-on-google-colab">https://datascience.stackexchange.com/questions/75948/how-to-setup-and-run-conda-on-google-colab</a></p>
<pre><code>!conda env update -n base -f environment.yaml #--prune ## I leave --prune for later since even this doesn't do what I actually want
</code></pre>
<p>And it didn't work because I understand I can't downgrade <code>python</code>:</p>
<pre><code>Channels:
- defaults
- conda-forge
Platform: linux-64
Collecting package metadata (repodata.json): done
Solving environment: failed
SpecsConfigurationConflictError: Requested specs conflict with configured specs.
requested specs:
- cx_oracle=8.2.1
- hvac
- ipykernel=6.2.0
- jupyter=1.0.0
- matplotlib=3.4.2
- mlflow=1.24.0
- numpy=1.20.3
- pandas=1.3.2
- pip
- plotly=5.11.0
- python=3.9.7
- python-pptx=0.6.21
- scikit-learn=0.24.2
- scipy=1.7.1
- seaborn=0.11.2
- shap=0.39.0
- sphinx=4.2.0
- sphinxcontrib-applehelp
- sphinxcontrib-devhelp=1.0.2
- sphinxcontrib-htmlhelp=2.0.0
- sphinxcontrib-jsmath=1.0.1
- sphinxcontrib-qthelp=1.0.3
- sphinxcontrib-serializinghtml=1.1.5
- sqlalchemy=1.4.22
- xgboost=1.3.3
pinned specs:
- cuda-version=12
- python=3.10
- python_abi=3.10[build=*cp310*]
Use 'conda config --show-sources' to look for 'pinned_specs' and 'track_features'
configuration parameters. Pinned specs may also be defined in the file
/usr/local/conda-meta/pinned.
</code></pre>
<p>I also tried a much simpler environment, but still, in order to install <code>numpy==1.18.4</code> I would need to downgrade Python.</p>
<p>I then tried to downgrade python to <code>python==3.7.6</code> following these instructions <a href="https://www.geeksforgeeks.org/how-to-downgrade-python-version-in-colab/" rel="nofollow noreferrer">https://www.geeksforgeeks.org/how-to-downgrade-python-version-in-colab/</a> and these <a href="https://saturncloud.io/blog/how-to-change-python-version-in-google-colab/" rel="nofollow noreferrer">https://saturncloud.io/blog/how-to-change-python-version-in-google-colab/</a></p>
<p>So I tried</p>
<pre><code>!sudo apt-get install python3.7
!sudo update-alternatives --install /usr/bin/python3 python3 /usr/bin/python3.7 1
update-alternatives: renaming python3 link from /usr/bin/python3.10 to /usr/bin/python3
</code></pre>
<p>And also</p>
<pre><code>!ln -sf /usr/bin/python3.7 /usr/bin/python
</code></pre>
<p>But I still get</p>
<pre><code>!python --version
Python 3.10.13
</code></pre>
<p>If this may be of help, I notice that I have the following</p>
<pre><code>!which python
/usr/local/bin/python
</code></pre>
<p>And</p>
<pre><code>!ls /usr/bin/pytho*
/usr/bin/python /usr/bin/python3.10-config /usr/bin/python3.7m /usr/bin/python3.real /usr/bin/python3 /usr/bin/python3.7 /usr/bin/python3-config
</code></pre>
<p>It doesn't have to be conda; I just need to work with specific versions of some Python packages.</p>
<p><strong>UPDATE:</strong>
I managed to change python following these instructions:
<a href="https://medium.com/the-owl/changing-the-python-version-on-google-colab-245fd510d3ae" rel="nofollow noreferrer">https://medium.com/the-owl/changing-the-python-version-on-google-colab-245fd510d3ae</a> but then I can't use <code>!pip</code> to install <code>condacolab</code>. Is there another way?</p>
<p>Thanks</p>
|
<python><conda><google-colaboratory><environment>
|
2024-04-04 17:21:49
| 0
| 1,035
|
Ale
|
78,275,401
| 616,460
|
Python 3.7 equivalent of `importlib.resources.files`
|
<p>I have an application that unfortunately does not support Python past 3.7. I have some code I'm trying to port (from Python 3.12) back to Python 3.7 that uses <code>importlib.resources.files</code> to get the path to some non-Python resources that are included in the package:</p>
<pre><code>def _get_file (filename: str) -> bytes:
"""given a resource filename, return its contents"""
res_path = importlib.resources.files("name.of.my.package")
with importlib.resources.as_file(res_path) as the_path:
page_path = os.path.join(the_path, filename)
with open(page_path, "rb") as f:
return f.read()
</code></pre>
<p>It seems like <code>importlib.resources</code> doesn't have a <code>files()</code> in Python 3.7.</p>
<p>What is the Python 3.7 compatible equivalent of this code?</p>
|
<python><python-3.7><python-importlib>
|
2024-04-04 16:52:34
| 1
| 40,602
|
Jason C
|
78,275,320
| 5,256,563
|
Use packages which require different python versions
|
<p>I'm developing a script for a pipeline that uses various packages. Unfortunately, these packages require different versions of Python. Here is a mock example of the situation:</p>
<pre><code>Main script runs on Python 3.11, and it needs package 1 & 2
Package 1 runs only on Python 3.8
Package 2 runs only on Python 3.9
</code></pre>
<p>What are my best options to run everything together?</p>
<p>The only nasty solution that crosses my mind would be to do something like:</p>
<pre><code># Some code in python 3.11
# Export (save on disk) data required by Package 1
os.system("/path/to/python38/python LaunchPackage1")
# Get/Load Package 1 results
# Export (save on disk) data required by Package 2
os.system("/path/to/python39/python LaunchPackage2")
# Get/Load Package 2 results
</code></pre>
<p>I do hope there is something better to do.</p>
|
<python><python-3.x>
|
2024-04-04 16:36:39
| 0
| 5,967
|
FiReTiTi
|
78,275,265
| 10,200,497
|
Is there a ONE-LINER way to give each row of a dataframe a unique id consisting of an integer and string?
|
<p>This is my DataFrame:</p>
<pre><code>import pandas as pd
df = pd.DataFrame(
{
'a': [4, 3, 2, 2, 6]
}
)
</code></pre>
<p>And this is the expected output: I want to create column <code>id</code>:</p>
<pre><code> a id
0 4 x_0
1 3 x_1
2 2 x_2
3 2 x_3
4 6 x_4
</code></pre>
<p>I can create <code>id</code> like this but I think there is a one-liner for this:</p>
<pre><code>df['id'] = np.arange(len(df))
df['id'] = 'x_' + df.id.astype(str)
</code></pre>
<p>I prefer a solution that does not use <code>index</code>.</p>
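One single-assignment possibility (a sketch; whether it counts as a one-liner is a matter of taste) that never touches the index:

```python
import pandas as pd

df = pd.DataFrame({'a': [4, 3, 2, 2, 6]})

# one assignment, no index involved: format positions 0..n-1 directly
df['id'] = [f'x_{i}' for i in range(len(df))]
```

This builds the ids from row positions, so it is unaffected by whatever the DataFrame's index happens to be.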
|
<python><pandas><dataframe>
|
2024-04-04 16:26:00
| 2
| 2,679
|
AmirX
|
78,275,253
| 1,436,222
|
best way to create serializable data model
|
<p>Somewhat inexperienced in Python here; I am coming from the C# world, and I am trying to figure out the best practice for creating a data structure in Python that:</p>
<ol>
<li>can have empty fields (None)</li>
<li>can have default values assigned to some fields</li>
<li>can have aliases assigned to fields that would be used during serialization</li>
</ol>
<p>To clarify: for example, in C# I can do something like this:</p>
<pre class="lang-cs prettyprint-override"><code>using Newtonsoft.Json;
public class MyDataClass
{
[JsonProperty(PropertyName = "Data Label")]
public string label { get; set; }
[JsonProperty(PropertyName = "Data Value")]
public string value { get; set; }
[JsonProperty(PropertyName = "Data Description")]
public MyDataDefinition definiton { get; set; }
    public MyDataClass()
{
this.label = "Default Label";
}
}
</code></pre>
<p>With this class, I can create an instance with only one field pre-populated, populate the rest of the data structure at will, and then serialize it to JSON with the aliased field names as decorated.</p>
<p>In Python, I have experimented with several packages, but every time I end up with a super complex implementation that doesn't hit all of the requirements. I MUST be missing something very fundamental, because this seems like such a simple and common use case.</p>
<p>How would you implement something like this in the most "pythonic" way?</p>
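<p>For reference, the closest stdlib-only sketch I managed with dataclasses is below; the alias names come from my C# example, and the serialization helper is my own invention rather than a standard mechanism (pydantic's <code>Field(alias=...)</code> looked like a more direct route, but I could not get the whole combination working cleanly):</p>

```python
import json
from dataclasses import asdict, dataclass
from typing import Any, ClassVar, Dict, Optional


@dataclass
class MyData:
    # default value for label; the other fields may stay empty (None)
    label: str = "Default Label"
    value: Optional[str] = None
    definition: Optional[Dict[str, Any]] = None

    # alias map applied only when serializing (ClassVar keeps it out of the fields)
    ALIASES: ClassVar[Dict[str, str]] = {
        "label": "Data Label",
        "value": "Data Value",
        "definition": "Data Description",
    }

    def to_json(self) -> str:
        return json.dumps({self.ALIASES[k]: v for k, v in asdict(self).items()})
```

<p>This covers the three requirements, but the hand-rolled alias map is exactly the kind of boilerplate I was hoping a library would remove.</p>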
|
<python><python-dataclasses><python-datamodel>
|
2024-04-04 16:23:32
| 1
| 459
|
concentriq
|
78,275,164
| 19,299,757
|
Selenium Pytest ValueError: setup did not yield a value
|
<p>I have this test suite (using Selenium, Python, and pytest) named "test_invoice.py", which has several tests inside it.
Similarly, I have other test suites in other Python files named "test_admin.py", "test_cash.py", etc., with the same setup. I run all these suites from a Docker image like this:</p>
<pre><code>python -m pytest -v -s --capture=tee-sys --html=report.html --self-contained-html test_invoice.py test_admin.py test_cash.py
@pytest.fixture(scope='class')
def setup(request):
try:
options = webdriver.ChromeOptions()
options.add_argument("--window-size=1920,1080")
options.add_argument("--start-maximized")
options.add_argument('--headless')
options.add_argument('--ignore-certificate-errors')
driver = webdriver.Chrome(options=options)
request.cls.driver = driver
driver.delete_all_cookies()
driver.get(TestData_common.BASE_URL)
yield
driver.quit()
except WebDriverException as e:
print('App seems to be down...> ', e)
@pytest.mark.usefixtures("setup")
@pytest.mark.incremental
class Test_app:
def test_001_login(self):
assert TestData_common.URL_FOUND, "UAT seems to be down.."
self.loginPage = LoginPage(self.driver)
self.loginPage.do_click_agree_button()
assert TestData_common.AGREE_BTN_FOUND, "Unable to click AGREE button.."
self.driver.maximize_window()
print('Successfully clicked AGREE button')
time.sleep(2)
</code></pre>
<p>Issue: Sometimes I am getting the below error</p>
<pre><code>request = <SubRequest 'setup' for <Function test_001_login>>
kwargs = {'request': <SubRequest 'setup' for <Function test_001_login>>}
def call_fixture_func(
fixturefunc: "_FixtureFunc[FixtureValue]", request: FixtureRequest, kwargs
) -> FixtureValue:
if is_generator(fixturefunc):
fixturefunc = cast(
Callable[..., Generator[FixtureValue, None, None]], fixturefunc
)
generator = fixturefunc(**kwargs)
try:
fixture_result = next(generator)
except StopIteration:
> raise ValueError(f"{request.fixturename} did not yield a value") from
None
E **ValueError: setup did not yield a value**
</code></pre>
<p>I have no idea why this would happen. Is something wrong with the pytest fixture?
Any help is much appreciated.</p>
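<p>To illustrate what I suspect (but cannot confirm): if the <code>WebDriverException</code> fires before the <code>yield</code>, the <code>except</code> catches it and the generator finishes without ever yielding, which plain Python reproduces without selenium or pytest:</p>

```python
def fixture_like():
    try:
        raise RuntimeError("boom")  # stands in for WebDriverException during setup
        yield  # never reached, so the generator finishes without yielding
    except RuntimeError as e:
        print('App seems to be down...> ', e)


gen = fixture_like()
try:
    next(gen)
    yielded = True
except StopIteration:
    # pytest converts this StopIteration into "setup did not yield a value"
    yielded = False
```
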
|
<python><selenium-webdriver><pytest>
|
2024-04-04 16:07:45
| 1
| 433
|
Ram
|
78,274,979
| 9,342,193
|
Put two plots into the same figure as subplots in Python
|
<p>I am trying to add two plots to the same figure as subplots.</p>
<p>So far I tried :</p>
<pre><code>import matplotlib.pyplot as plt
import pandas as pd
import seaborn as sns
penguins = sns.load_dataset("penguins")
print(penguins['species'])
fig = plt.gcf()
gs = fig.add_gridspec(ncols=2, nrows=1, width_ratios=[1, 1])
#PLOT0
ax0 = fig.add_subplot(gs[0, 0])
g = sns.JointGrid(data=penguins, x="bill_length_mm", y="bill_depth_mm", hue="species")
g.plot_joint(sns.scatterplot, ax=ax0) # Link the scatterplot to ax0
sns.boxplot(penguins, x=g.hue, y=g.y, ax=g.ax_marg_y)
sns.boxplot(penguins, y=g.hue, x=g.x, ax=g.ax_marg_x)
#PLOT1
# Add another plot to display the distribution of bill_length_mm values
ax1 = fig.add_subplot(gs[0, 1])
# Plot the box plot of bill_length_mm values
sns.boxplot(data=penguins, x='bill_length_mm', ax=ax1)
plt.show()
</code></pre>
<p>But I cannot manage to place <strong>PLOT0</strong> on the left side in <code>ax0 = fig.add_subplot(gs[0, 0])</code> and <strong>PLOT1</strong> on the right side in <code>ax1 = fig.add_subplot(gs[0, 1])</code>; for some reason it does not work.</p>
<p>The last result should look like that :</p>
<p><a href="https://i.sstatic.net/0f41K.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/0f41K.png" alt="enter image description here" /></a></p>
<p>In fact, I think that the way I add the boxplots in the margins of the scatter plot does not match the way I merge PLOT0 and PLOT1 together using <strong>fig.add_gridspec</strong>.</p>
|
<python><matplotlib><scatter-plot>
|
2024-04-04 15:40:21
| 1
| 597
|
Grendel
|
78,274,904
| 632,813
|
ML ColumnTransformer OneHotEncoder
|
<p>When converting the categorical data in the first column of my dataframe, I get strange behavior from ColumnTransformer with OneHotEncoder. The behavior occurs when I add one row to my CSV file.</p>
<p>the initial data is:</p>
<pre><code>title,dailygross,theaters,DayInYear
ACatinParis,307,5,257
ALettertoMomo,307,5,257
AnotherDayofLife,307,5,257
ApprovedforAdoption,307,5,257
AprilandtheExtraordinaryWorld,307,5,257
Belle,307,5,257
BirdboyTheForgottenChildren,307,5,257
ChicoRita,307,5,257
</code></pre>
<p>when running code</p>
<pre><code>import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
dataset = pd.read_csv('../data/GKIDS_DayNum_test_names.csv')
dataset['title'].str.strip()
X = dataset.iloc[:, :-1].values
y = dataset.iloc[:, -1].values
from sklearn.compose import ColumnTransformer
from sklearn.preprocessing import OneHotEncoder
title_column_index = dataset.columns.get_loc('title')
print('title index:', title_column_index)
ct = ColumnTransformer(transformers=[('encoder', OneHotEncoder(), [title_column_index])], remainder='passthrough')
X_Encoded = np.array(ct.fit_transform(X))
print(X_Encoded)
</code></pre>
<p>the result is correct:</p>
<pre><code>[[1.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 307 5]
[0.0 1.0 0.0 0.0 0.0 0.0 0.0 0.0 307 5]
[0.0 0.0 1.0 0.0 0.0 0.0 0.0 0.0 307 5]
[0.0 0.0 0.0 1.0 0.0 0.0 0.0 0.0 307 5]
[0.0 0.0 0.0 0.0 1.0 0.0 0.0 0.0 307 5]
[0.0 0.0 0.0 0.0 0.0 1.0 0.0 0.0 307 5]
[0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 307 5]
[0.0 0.0 0.0 0.0 0.0 0.0 0.0 1.0 307 5]]
</code></pre>
<p>However, when I add an additional row, <code>BlueGiant,307,5,257</code>,
to the file and re-run the code, I get this weird output:</p>
<pre><code> (0, 0) 1.0
(0, 9) 307.0
(0, 10) 5.0
(1, 1) 1.0
(1, 9) 307.0
(1, 10) 5.0
(2, 2) 1.0
(2, 9) 307.0
(2, 10) 5.0
(3, 3) 1.0
(3, 9) 307.0
(3, 10) 5.0
(4, 4) 1.0
(4, 9) 307.0
(4, 10) 5.0
(5, 5) 1.0
(5, 9) 307.0
(5, 10) 5.0
(6, 6) 1.0
(6, 9) 307.0
(6, 10) 5.0
(7, 8) 1.0
(7, 9) 307.0
(7, 10) 5.0
(8, 7) 1.0
(8, 9) 307.0
(8, 10) 5.0
</code></pre>
<p>I do not understand why this happens.</p>
<p>Please help.</p>
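<p>In case it matters, the output looks like the triplet printout of a SciPy sparse matrix; I suspect ColumnTransformer switched to sparse output once the extra category lowered the density, possibly related to its <code>sparse_threshold</code> parameter (default 0.3, if I read the scikit-learn docs correctly). A small sketch with scipy alone shows the same printing behavior:</p>

```python
import numpy as np
from scipy import sparse

dense = np.eye(3)
m = sparse.coo_matrix(dense)
print(m)            # prints "(row, col)  value" triplets, like my output above
back = m.toarray()  # converts back to the familiar dense 2-D array
```
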
|
<python><machine-learning><one-hot-encoding>
|
2024-04-04 15:25:57
| 1
| 435
|
Tech Wizard
|
78,274,571
| 14,954,327
|
Typing a function param to be any JSON serializable object
|
<p>What would be the best type for this function?<br />
I basically want any input type that <code>json.dumps</code> can process (with or without a <code>JSONEncoder</code>).</p>
<p>For now I'm using <code>Union[List[Any], Dict[Any, Any]]</code>, but it's not exhaustive and mypy complains: <code>Explicit "Any" is not allowed</code></p>
<pre class="lang-py prettyprint-override"><code>import json
from typing import Any, Dict, List, Union
def my_function_doing_stuff_then_serializing(input: Union[List[Any], Dict[Any, Any]], **kwargs) -> None:
json.dumps(input, **kwargs)
</code></pre>
<p>so I can do this</p>
<pre class="lang-py prettyprint-override"><code>import json
from datetime import date
from typing import Any
from somewhere import my_function_doing_stuff_then_serializing
class DateEncoder(json.JSONEncoder):
def default(self, obj: Any) -> Any:
if isinstance(obj, date):
return obj.isoformat()
return super().default(obj)
my_function_doing_stuff_then_serializing([date.today()], cls=DateEncoder)
</code></pre>
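<p>The closest sketch I have is a recursive alias for plain JSON-serializable values, though it still cannot express "anything my custom encoder can handle", and I have not verified how every checker treats the forward references:</p>

```python
import json
from typing import Dict, List, Union

# Recursive alias for plain JSON-serializable values; the quoted
# forward references make the self-reference legal at runtime.
JSONValue = Union[None, bool, int, float, str, List["JSONValue"], Dict[str, "JSONValue"]]


def serialize(data: JSONValue, **kwargs) -> str:
    return json.dumps(data, **kwargs)
```
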
|
<python><python-typing>
|
2024-04-04 14:31:20
| 1
| 960
|
codekoriko
|
78,274,561
| 3,785,719
|
Pandas display() nice table in AWS Glue Jupyter Notebook
|
<p>In <strong>AWS Glue Jupyter Notebook</strong>, I wanted to <code>display()</code> the dataframe in Pandas nice table (HTML) but it only shows the raw text.</p>
<p>Example: In AWS Glue Jupyter Notebook, it only shows the raw text:
<a href="https://i.sstatic.net/GsbKZ.png" rel="noreferrer"><img src="https://i.sstatic.net/GsbKZ.png" alt="Pandas output in text" /></a></p>
<p>Expectation: I want to display it on Pandas nice table (HTML) like this:
<a href="https://i.sstatic.net/E0GXt.png" rel="noreferrer"><img src="https://i.sstatic.net/E0GXt.png" alt="Pandas output in HTML" /></a></p>
|
<python><pandas><amazon-web-services><jupyter-notebook><aws-glue>
|
2024-04-04 14:30:02
| 0
| 696
|
Chong Onn Keat
|
78,274,136
| 13,158,157
|
Convert to float, pandas string column with mixed thousand and decimal separators
|
<p>I have a pandas DataFrame with a column containing strings representing numbers. These strings have mixed formats: sometimes a comma is used as the decimal separator and sometimes a dot. When a dot is used as the decimal separator, the number can contain commas as thousand separators.</p>
<p>For example:</p>
<pre><code>import pandas as pd
data = {
'NumberString': [
'1,234.56',
'789,012.34',
'45,678',
'9,876.54',
'3,210.98',
'1,000,000.01',
'123.45',
'42,000',
'NaN'
]
}
df = pd.DataFrame(data)
</code></pre>
<p>I want to convert this column to numeric without losing some of the data due to the inconsistent format (commas vs dots). However, using pd.to_numeric with errors='coerce' drops some of the numbers.</p>
<p>Is there a way to convert all the strings to numbers without losing them due to their format?</p>
<p>What I have tried so far:</p>
<pre><code>>>> df['Number'] = pd.to_numeric(df['NumberString'].str.replace(',','.'), errors='coerce')
</code></pre>
<pre><code>NumberString Number
1,234.56 NaN
789,012.34 NaN
45,678 45.678
9,876.54 NaN
3,210.98 NaN
1,000,000.01 NaN
123.45 123.450
42,000 42.000
NaN NaN
</code></pre>
<p>Desired output:</p>
<pre><code>NumberString Number
1,234.56 1234.56
789,012.34 789012.34
45,678 45.678
9,876.54 9876.54
3,210.98 3210.98
1,000,000.01 1000000.01
123.45 123.450
42,000 42.000
NaN NaN
</code></pre>
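<p>The best heuristic I could sketch so far is below; it assumes that when both separators appear the commas are thousand separators, and that a lone comma is always the decimal separator. That holds for this sample, but not for e.g. "42,000" meant as forty-two thousand:</p>

```python
def normalize(s: str) -> str:
    # Both separators present: commas must be thousand separators, drop them.
    if "," in s and "." in s:
        return s.replace(",", "")
    # Only commas (or neither): treat the comma as the decimal separator.
    return s.replace(",", ".")
```

<p>Applied as <code>pd.to_numeric(df['NumberString'].map(normalize), errors='coerce')</code>, it reproduces the desired output above, but I would prefer something less ad hoc.</p>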
|
<python><pandas><dataframe>
|
2024-04-04 13:18:36
| 2
| 525
|
euh
|
78,274,117
| 5,022,847
|
Pytest mock ring.redis decorator
|
<p>Below is my class implementation:</p>
<pre><code>import ring
import redis

class Person:
def __init__(self, n):
self.n = n
self.foo = self.fun()
@ring.redis(redis.RedisClient, coder="json")
def fun(self):
another_thirparty_class()
# somebusiness logic here
def bar(self):
a = self.foo["a"]
print(a+1)
</code></pre>
<p>Now I want to write a test case for <code>bar()</code>, but it is failing. I have mocked everything, but it seems like <code>@ring.redis</code> is creating problems:</p>
<pre><code>@pytest.fixture
def mock_redis():
return redis.StrictRedis(
host="fake-host",
port="fake-port",
db="fake-db",
)
@patch("path.myfile.redis")
@patch("path.myfile.another_thirparty_class")
def test_bar(mock_another_thirparty_class, mockredis):
mockredis.return_value = mock_redis()
p = Person("xyz")
p.bar()
</code></pre>
<p>Even after mocking redis, it still goes to redis and I get the error below:
<code>redis.exceptions.ConnectionError: Error -2 connecting to fake_host:1234. Name or service not known.</code></p>
|
<python><redis><mocking><pytest><ring>
|
2024-04-04 13:14:37
| 0
| 1,430
|
TechSavy
|
78,274,097
| 108,390
|
Group/Cluster Polars Dataframe by substring in string or string in substring
|
<p>Given this Polars DataFrame:</p>
<pre><code>df = pl.DataFrame(
{
"id": [1, 2, 3, 4, 5],
"values": ["A", "B", "A--B", "C--A", "D"],
}
)
</code></pre>
<p>1. How can I group/cluster it so that 1, 2 and 3 end up in the same group?<br />
2. Can I even achieve having 4 in the same group/cluster?</p>
|
<python><python-polars>
|
2024-04-04 13:11:00
| 1
| 1,393
|
Fontanka16
|
78,273,851
| 547,231
|
How can I create a new tensor with shape ( k, x.shape), where x is another tensor?
|
<p>Having tensor <code>x</code>, I want to create a tensor <code>y</code> of the shape <code>(k, x.shape)</code>. However, <code>y = torch.empty((k, x.shape))</code> does not work, since <code>(k, x.shape)</code> is a tuple of type <code>(int, torch.Size)</code>. Is there no possibility of achieving this?</p>
<p>Naturally, I could use:</p>
<pre><code>y = torch.empty((k, x.shape[0], x.shape[1], x.shape[2], x.shape[3]))
</code></pre>
<p>But this requires knowing the number of dimensions in <code>x</code>...</p>
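<p>The only workaround I have found so far is tuple unpacking, verified with numpy below; since <code>torch.Size</code> subclasses <code>tuple</code>, I assume <code>torch.empty((k, *x.shape))</code> behaves the same, but I have not checked every torch version:</p>

```python
import numpy as np

x = np.zeros((2, 3, 4))
k = 5

# Unpack x.shape into the new shape tuple, whatever its length.
y = np.empty((k, *x.shape))
print(y.shape)  # (5, 2, 3, 4)
```
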
|
<python><pytorch>
|
2024-04-04 12:28:00
| 1
| 18,343
|
0xbadf00d
|
78,273,782
| 7,692,855
|
Python 3: TypeError: Can't instantiate abstract class BaseConnection with abstract method create_connection
|
<p>I am upgrading some very old Python code from Python 2 to Python 3. There is a simple method to check rabbitmq connection using Pika.</p>
<pre><code> from contextlib import closing
from pika import URLParameters, BaseConnection
def check_rabbitmq(self):
conn_params = URLParameters(self.config.rabbitmq_params['amqp.url'])
with closing(BaseConnection(conn_params)):
return True
</code></pre>
<p>However, in Python 3 it returns</p>
<p><code>TypeError: Can't instantiate abstract class BaseConnection with abstract method create_connection</code> I feel like I'm missing something obvious.</p>
|
<python><pika>
|
2024-04-04 12:14:29
| 2
| 1,472
|
user7692855
|
78,273,399
| 357,313
|
Format float in N chars max with best precision
|
<p>For concise display in chart titles, I would like to format arbitrary floating point values as a fixed size string, in Python. Either decimal (0.0123) or in scientific notation (1.2e-2) but with the highest precision possible (choosing decimal here). The result is allowed to be <em>smaller</em> than the fixed size, so for instance trailing decimal zeros can be stripped off.</p>
<p>I searched high and low for a solution, algorithm or library, as I'm sure I'm not the first to have this need. But I did not find anything useful. And it's harder than it sounds.</p>
<p>I guess the interface can be:</p>
<pre><code>def format_short(nr: float, maxwidth: int = 4) -> str:
# ...
return "1.23"
</code></pre>
<p>Of course values exist where no representation can possibly fit at all when <code>maxwidth</code> is small enough (6 or less), but those are rare in practice and a simple placeholder label can be chosen (like <code>'inf'</code>).</p>
<p>Depending on <code>maxwidth</code>, values like 9.95E-1 and 9.95E9 are where things get really hairy...</p>
<p>Isn't there a standard solution to this common problem?</p>
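<p>To show what I mean, my naive attempt below shrinks the number of significant digits until the <code>%g</code> output fits. It handles easy cases, but gives up on values like 9.95e9 whose shortest representation still overflows the width:</p>

```python
def format_short(nr: float, maxwidth: int = 4) -> str:
    # Try fewer and fewer significant digits until the result fits.
    for sig in range(maxwidth, 0, -1):
        s = f"{nr:.{sig}g}"
        if len(s) <= maxwidth:
            return s
    return "inf"  # placeholder when nothing fits
```
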
|
<python><floating-point><formatting>
|
2024-04-04 11:03:43
| 0
| 8,135
|
Michel de Ruiter
|
78,273,388
| 11,049,863
|
How to display the uuid field of my serializer as a string in a get request with django restframework
|
<p>Here is my serializer</p>
<pre><code>class AgenceSerializer(serializers.Serializer):
uuid = serializers.UUIDField(format='hex',allow_null=True,required=False) <-------
p1 = serializers.CharField(max_length=50,required=True)
p2 = serializers.CharField(max_length=50,required=True)
</code></pre>
<p>views.py</p>
<pre><code>@api_view(["POST"])
def details_agence(request):
if request.method == "POST":
serializer = AgenceUUIDSerializer(data=request.data)
if serializer.is_valid():
agence_repr = serializer.agence_details(data=request.data)
serializer = AgenceSerializer(agence_repr)
print(serializer)
return Response(serializer.data,status=status.HTTP_200_OK) <---------
...
</code></pre>
<p>a print on the serializer gives me the following:</p>
<pre><code>AgenceSerializer({'uuid': 'A75DF036-6C16-4311-9825-22B1295D8A26', 'p1': 'Ozangue', ...}):
uuid = UUIDField(allow_null=True, format='hex', required=False)
...
</code></pre>
<p>But when I display the data in the response, the uuid field is always null.<br/>
How can I solve this problem?<a href="https://i.sstatic.net/OgfES.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/OgfES.png" alt="enter image description here" /></a></p>
|
<python><django><django-rest-framework>
|
2024-04-04 11:00:56
| 2
| 385
|
leauradmin
|
78,273,384
| 534,238
|
Type hints and overloading for type parameter
|
<p>I have a case where, based upon the <code>type</code> provided as a parameter, I want to return a matching type. I am trying to ingest environment variables from outside of the program, which will always be ingested as strings, but which I want to push into whatever type. The function works perfectly fine, but I want for the <em>type system</em> to also support it.</p>
<p>Below is a simpler example that captures the problem I am trying to address:</p>
<pre class="lang-py prettyprint-override"><code>from typing import overload, Literal, Union
from os import environ
@overload
def get_var(name: str, mytype: Literal[str]) -> str: ...
@overload
def get_var(name: str, mytype: Literal[int]) -> int: ...
@overload
def get_var(name: str, mytype: Literal[float]) -> float: ...
@overload
def get_var(name: str, mytype: Literal[bool]) -> bool: ...
def get_var(name: str, mytype: type = str) -> Union[str, int, float, bool]:
    val = environ[name]
    if mytype == str:
        return str(val)
    if mytype == int or mytype == float:
        return mytype(val)
    if mytype == bool:
        return val.lower()[0] == "t"
    raise ValueError(f"Unsupported type: {mytype}")
</code></pre>
<p>The function itself works fine. However, there is no such thing as <code>Literal[str]</code> (or a <code>Literal</code> of any of the other types listed).</p>
<p>So this fails because the overloading fails. If I remove all of the overloading, then the return type is <code>str | int | float | bool</code>, which causes all sorts of trouble within my IDE downstream when I want to use any of the variables I get back.</p>
<p>I don't want to find a way around the problem with overloading and <code>Literal</code>s by using something like:</p>
<pre class="lang-py prettyprint-override"><code>@overload
def get_var(name: str, mytype: Literal["str"]) -> str
.
.
.
def get_var(name: str, mytype: str) -> Union[str, int, float, bool]
</code></pre>
<p>if I can avoid doing that, because I would like for the type passed to be an actual type, and not just a string that represents the type. This way, I can do things like:</p>
<pre class="lang-py prettyprint-override"><code>my_type = type(old_param)
new_param = get_var("my_hyperparameter", my_type)
</code></pre>
<p>In other words, <code>get_var</code> would be much more user-friendly if <code>my_type</code> could be the actual type (<code>str</code>, <code>int</code>, etc.) instead of a string that simply says <code>"str"</code>, <code>"int"</code>, etc.</p>
|
<python><python-typing>
|
2024-04-04 11:00:12
| 0
| 3,558
|
Mike Williamson
|
78,273,251
| 688,080
|
Autoscaling a figure with heatmap + scatter introduces extra padding
|
<p>The code below</p>
<pre class="lang-py prettyprint-override"><code>import numpy as np
import plotly.graph_objects as go
def make_blob(n: int) -> np.ndarray:
xx, yy = np.meshgrid(np.arange(n), np.arange(n))
return np.exp(-((xx - n // 2) ** 2 + (yy - n // 2) ** 2) / (2 * (n // 2) ** 2))
if __name__ == "__main__":
n = 101
x = np.arange(n)
blob = make_blob(n)
fig = go.Figure(data=go.Heatmap(x=x, y=x, z=blob, type="heatmap", colorscale="Viridis"))
fig.add_scatter(x=x, y=x)
fig.update_xaxes(range=(0, n - 1))
fig.update_yaxes(range=(0, n - 1))
fig.show()
</code></pre>
<p>gives
<a href="https://i.sstatic.net/Q0hDt.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Q0hDt.png" alt="blob with diagonal line" /></a></p>
<p>But pressing “Autoscale” on the toolbar will introduce extra padding at the y-axis:
<a href="https://i.sstatic.net/hGmAx.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/hGmAx.png" alt="autoscaled blob with diagonal line" /></a></p>
<p>Although it can be removed with “Reset axes”, can I make them not appear in the first place? I found <a href="https://github.com/plotly/plotly.js/issues/1876" rel="nofollow noreferrer">this issue on GitHub</a> but not sure how relevant that is.</p>
|
<python><plotly><visualization><toolbar>
|
2024-04-04 10:37:00
| 1
| 4,600
|
Ziyuan
|
78,272,872
| 13,088,678
|
Optimze API invocations in parallel
|
<p>We have a below python code. This has 2 APIs</p>
<ul>
<li>a) to get all customers - get_all_customers()</li>
<li>b) to get a specific customer's details - get_specifc_customer_details()</li>
</ul>
<p>Currently, (b) get_specifc_customer_details runs sequentially. Is there any way to use Spark parallelism so that we can invoke it in parallel? Something like below, or any better approach to using APIs in Spark:</p>
<p>maintain all customers in a dataframe df, as separate rows</p>
<pre class="lang-py prettyprint-override"><code>df = df.withColumn(cust_status_1, <invoke api pass specific customer >) \
.withColumn(cust_status_2, <invoke api pass specific customer >)
</code></pre>
<p>Finally, df will have 3 columns: customer_id, cust_status_1, cust_status_2.</p>
<p>Original code:</p>
<pre class="lang-py prettyprint-override"><code>import requests

def get_all_customers():
url = "https://test.url.com/api/v1/customers?status=Customer"
headers = {
'Content-Type': 'application/json; charset=utf-8',
'Authorization': 'Bearer Token'
}
customer_id_list = []
response = requests.request("GET", url, headers=headers)
for cust in response.json()["customers"]:
customer_id_list.append(cust["uid"])
return customer_id_list
def get_specifc_customer_details(customer_id_list):
customer_status_overall = []
for customer_id in customer_id_list:
url = f"https://test.url.com/api/v1/customers/{customer_id}"
headers = {
'Content-Type': 'application/json; charset=utf-8',
'Authorization': 'Bearer Token'
}
response = requests.request("GET", url, headers=headers)
cust_status_1 = response.json()["customer"]["status_1"]
cust_status_2 = response.json()["customer"]["status_2"]
customer_status_overall.append((customer_id,cust_status_1,cust_status_2))
customer_status_df = spark.createDataFrame(customer_status_overall, schema)
return customer_status_df
</code></pre>
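<p>One alternative I am considering, if Spark is not required for the fan-out itself, is a plain thread pool for the I/O-bound HTTP calls and then building the DataFrame from the results; the <code>fetch_details</code> function below is only a stand-in for the real requests call:</p>

```python
from concurrent.futures import ThreadPoolExecutor


def fetch_details(customer_id):
    # Placeholder for the real GET /customers/{id} call; in the real
    # code this would use requests and return (customer_id, status_1, status_2).
    raise NotImplementedError


def get_details_parallel(customer_ids, fetch=fetch_details, max_workers=8):
    # map() preserves input order, so rows line up with customer_ids.
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        return list(pool.map(fetch, customer_ids))
```

<p>The resulting list of tuples could then be passed straight to <code>spark.createDataFrame</code> as in the original code. I am unsure whether this is preferable to a Spark-native approach.</p>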
|
<python><apache-spark><pyspark>
|
2024-04-04 09:23:35
| 3
| 407
|
Matthew
|
78,272,867
| 726,730
|
pyqtSignal emit multi type
|
<p>I have a signal to be: <code>auto_dj_start = pyqtSignal(dict,str,dict,str)</code></p>
<p>Sometimes, when the player list is empty, I want to emit <code>None</code> for all parameters instead of dicts and strings.</p>
<p>But this error happens:</p>
<pre class="lang-py prettyprint-override"><code>Traceback (most recent call last):
File "C:\Users\chris\Documents\My Projects\papinhio-player\src\python+\main-window\../..\python+\main-window\page-4\player-list.py", line 721, in run
self.auto_dj_start.emit(data["play"],data["play-sequence"],data["load"],data["load-sequence"])
TypeError: Player_List_Emitter.auto_dj_start[dict, str, dict, str].emit(): argument 1 has unexpected type 'NoneType'
</code></pre>
<p>So what I can do is emit an empty dict and an empty string.</p>
<p>Is there any other alternative?</p>
<p>Maybe I could make a new type, dict_or_None, that is either dict or None, and pass it to the pyqtSignal creation.</p>
|
<python><pyqt5><qthread>
|
2024-04-04 09:22:43
| 1
| 2,427
|
Chris P
|
78,272,860
| 9,548,319
|
Alembic - Inverted workflow upgrade/downgrade
|
<p>I have this issue with Alembic + FastAPI + PostgreSQL</p>
<p>SQLAlchemy Alembic generates migrations in reverse: downgrade creates the table and upgrade drops it, while the database is empty.</p>
<p>For example:</p>
<pre><code>alembic revision --autogenerate -m "Testing 2"
INFO [alembic.runtime.migration] Context impl PostgresqlImpl.
INFO [alembic.runtime.migration] Will assume transactional DDL.
INFO [alembic.autogenerate.compare] Detected removed index 'ix_Site_keyword' on 'Site'
INFO [alembic.autogenerate.compare] Detected removed table 'Site'
Generating /app/migrations/versions/1e3a0f40182c_testing_2.py ... done
</code></pre>
<p>I've checked the file and I got this one:</p>
<pre><code>"""Testing 2
Revision ID: 1e3a0f40182c
Revises: 928ab2a61fa7
Create Date: 2024-04-04 08:32:29.784316
"""
from typing import Sequence, Union
from alembic import op
import sqlalchemy as sa
from sqlalchemy.dialects import postgresql
# revision identifiers, used by Alembic.
revision: str = '1e3a0f40182c'
down_revision: Union[str, None] = '928ab2a61fa7'
branch_labels: Union[str, Sequence[str], None] = None
depends_on: Union[str, Sequence[str], None] = None
def upgrade() -> None:
# ### commands auto generated by Alembic - please adjust! ###
op.drop_index('ix_Site_keyword', table_name='Site')
op.drop_table('Site')
# ### end Alembic commands ###
def downgrade() -> None:
# ### commands auto generated by Alembic - please adjust! ###
op.create_table('Site',
sa.Column('id', sa.UUID(), autoincrement=False, nullable=False),
sa.Column('title', sa.VARCHAR(), autoincrement=False, nullable=True),
sa.Column('keyword', sa.VARCHAR(), autoincrement=False, nullable=True),
sa.Column('description', sa.TEXT(), autoincrement=False, nullable=True),
sa.Column('is_active', sa.BOOLEAN(), autoincrement=False, nullable=True),
sa.Column('created_at', postgresql.TIMESTAMP(), autoincrement=False, nullable=True),
sa.Column('updated_at', postgresql.TIMESTAMP(), autoincrement=False, nullable=True),
sa.Column('deleted_at', postgresql.TIMESTAMP(), autoincrement=False, nullable=True),
sa.PrimaryKeyConstraint('id', name='Site_pkey')
)
op.create_index('ix_Site_keyword', 'Site', ['keyword'], unique=False)
# ### end Alembic commands ###
</code></pre>
<p>Why does it do this? If I switch the blocks (put the upgrade block in downgrade and vice versa) and then run the next autogenerated migration, I get all the drop statements in the upgrade function again... so I always have to rewrite the code.</p>
<p>What's going on here?</p>
<p>UPDATE: This is my env.py file.</p>
<pre><code>from logging.config import fileConfig
from sqlalchemy import engine_from_config
from sqlalchemy import pool
from database.orm import Base
from alembic import context
config = context.config
if config.config_file_name is not None:
fileConfig(config.config_file_name)
target_metadata = Base.metadata
def run_migrations_offline() -> None:
url = config.get_main_option("sqlalchemy.url")
context.configure(
url=url,
target_metadata=target_metadata,
literal_binds=True,
dialect_opts={"paramstyle": "named"},
)
with context.begin_transaction():
context.run_migrations()
def run_migrations_online() -> None:
connectable = engine_from_config(
config.get_section(config.config_ini_section, {}),
prefix="sqlalchemy.",
poolclass=pool.NullPool,
)
with connectable.connect() as connection:
context.configure(
connection=connection, target_metadata=target_metadata
)
with context.begin_transaction():
context.run_migrations()
if context.is_offline_mode():
run_migrations_offline()
else:
run_migrations_online()
</code></pre>
|
<python><postgresql><fastapi><alembic>
|
2024-04-04 09:21:10
| 1
| 417
|
sincorchetes
|
78,272,705
| 2,329,968
|
Weird margins when using curvilinear axis
|
<p>This example comes from Matplotlib's <a href="https://matplotlib.org/stable/gallery/axisartist/demo_curvelinear_grid.html#sphx-glr-gallery-axisartist-demo-curvelinear-grid-py" rel="nofollow noreferrer">documentation</a>. I'm running it inside Jupyter Lab. Here I attach the output figure:</p>
<pre class="lang-py prettyprint-override"><code>import matplotlib.pyplot as plt
import numpy as np
from matplotlib.projections import PolarAxes
from matplotlib.transforms import Affine2D
from mpl_toolkits.axisartist import Axes, HostAxes, angle_helper
from mpl_toolkits.axisartist.grid_helper_curvelinear import \
GridHelperCurveLinear
def curvelinear_test1(fig):
"""
Grid for custom transform.
"""
def tr(x, y): return x, y - x
def inv_tr(x, y): return x, y + x
grid_helper = GridHelperCurveLinear((tr, inv_tr))
ax1 = fig.add_subplot(1, 2, 1, axes_class=Axes, grid_helper=grid_helper)
# ax1 will have ticks and gridlines defined by the given transform (+
# transData of the Axes). Note that the transform of the Axes itself
# (i.e., transData) is not affected by the given transform.
xx, yy = tr(np.array([3, 6]), np.array([5, 10]))
ax1.plot(xx, yy)
ax1.set_aspect(1)
ax1.set_xlim(0, 10)
ax1.set_ylim(0, 10)
ax1.axis["t"] = ax1.new_floating_axis(0, 3)
ax1.axis["t2"] = ax1.new_floating_axis(1, 7)
ax1.grid(True, zorder=0)
fig = plt.figure()
curvelinear_test1(fig)
plt.show()
</code></pre>
<p><a href="https://i.sstatic.net/ZVKgU.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/ZVKgU.png" alt="enter image description here" /></a></p>
<p>Note the weird top/bottom margins. Left/Right margins seems ok. If I execute this code inside a standard python interpreter, the picture looks like this:</p>
<p><a href="https://i.sstatic.net/SEk6u.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/SEk6u.png" alt="enter image description here" /></a></p>
<p>Here, maybe there is too much top/bottom margin, and definitely too much right margin.</p>
<p>I've tried <code>plt.tight_layout()</code> just before <code>plt.show()</code>, but it does nothing in this case. I've looked at the source code of <code>Axes/HostAxes</code> but I couldn't find anything obvious that would help fix these margins.</p>
<p>What can I try?</p>
|
<python><matplotlib><jupyter-notebook><jupyter><axis>
|
2024-04-04 08:47:20
| 1
| 13,725
|
Davide_sd
|
78,272,691
| 5,346,843
|
3D interpolation in Python revisited
|
<p>I have a piece-wise linear path in 3D space, comprising an array of (x, y, z) values, and an array of values giving the distance along the path. I want to find the distance along the path of other (x, y, z) points which all lie "on" the path (ie the distance to one of the path segments is less than some tolerance).</p>
<p>I found this algorithm
<a href="https://stackoverflow.com/questions/47900969/4d-interpolation-for-irregular-x-y-z-grids-by-python">4D interpolation for irregular (x,y,z) grids by python</a> and reproduced the answers.</p>
<p>However, when I use my values, I get an array of <code>nan</code>'s. Would appreciate any suggestions to resolve this - thanks in advance!</p>
<pre><code>import scipy.interpolate
import numpy as np
nodes = np.array([
[511.03925, 897.2107, 48.937611],
[499.58658, 889.2893, 49.988685],
[474.94204, 872.2437, 51.114033],
[461.30299, 862.8101, 51.072050],
[450.27944, 855.1856, 50.847374],
[425.61826, 838.1285, 50.344743],
[400.95708, 821.0714, 49.842111]])
vals = np.array([3496.03, 3510.00, 3540.00, 3556.59, 3570.00, 3600.00, 3630.00])
pts = np.array([
[492.09, 884.11, 50.33],
[482.34, 877.36, 50.78],
[488.52, 881.64, 50.49],
[476.24, 873.14, 51.05],
[482.34, 877.36, 50.78]])
dist = scipy.interpolate.griddata(nodes, vals, pts)
print(dist)
</code></pre>
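<p>Separately from the griddata question: since my points lie on (or very near) a piecewise-linear path, the fallback I sketched is to project each point onto every segment and interpolate the distance value along the nearest one, with no scattered-data interpolation at all. I would still like to understand why griddata returns nan:</p>

```python
import numpy as np


def distance_along_path(nodes, vals, pts):
    """Interpolate vals along the polyline for points near it."""
    nodes, vals, pts = map(np.asarray, (nodes, vals, pts))
    best = np.full(len(pts), np.nan)
    best_d2 = np.full(len(pts), np.inf)
    for i in range(len(nodes) - 1):
        a, b = nodes[i], nodes[i + 1]
        ab = b - a
        # Parameter of the closest point on segment [a, b], clipped to it.
        t = np.clip((pts - a) @ ab / (ab @ ab), 0.0, 1.0)
        proj = a + t[:, None] * ab
        d2 = np.sum((pts - proj) ** 2, axis=1)
        closer = d2 < best_d2
        best_d2[closer] = d2[closer]
        best[closer] = vals[i] + t[closer] * (vals[i + 1] - vals[i])
    return best
```
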
|
<python><scipy>
|
2024-04-04 08:44:47
| 1
| 545
|
PetGriffin
|
78,272,653
| 315,427
|
How to avoid interpolation in button_press_event of plot?
|
<p>For the horizontal axis of my plot I have an <code>int64</code>-type numpy array.
Then I've linked a button press event to the plot:</p>
<pre><code>plt.connect("button_press_event", on_click)
</code></pre>
<p>This is the handler</p>
<pre><code>def on_click(event):
cursor_x = event.xdata
...
</code></pre>
<p>What I see is a non-integer, interpolated <code>xdata</code> value. How can I avoid interpolation and get the nearest datapoint on the <code>x</code> axis in the handler function?</p>
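<p>What I have so far inside the handler is this snapping sketch (the array below stands in for my real axis data); I'd like to know whether matplotlib offers something built-in instead:</p>

```python
import numpy as np

x = np.array([0, 10, 20, 30], dtype=np.int64)  # stand-in for the real axis data


def nearest_x(xdata: float):
    # Snap the continuous click coordinate to the closest datapoint.
    return x[np.abs(x - xdata).argmin()]
```

<p>Inside <code>on_click</code> this would be <code>cursor_x = nearest_x(event.xdata)</code>.</p>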
|
<python><numpy><matplotlib>
|
2024-04-04 08:36:55
| 1
| 29,709
|
Pablo
|
78,272,633
| 10,798,503
|
Nodriver click captcha box
|
<p>I'm trying to click the captcha box within <a href="https://github.com/ultrafunkamsterdam/nodriver" rel="nofollow noreferrer">nodriver</a> in Python.
At the moment nodriver is unable to locate the checkbox anchor element, but it is able to locate the iframe, which makes me think maybe I need to switch to it?</p>
<p>I have only seen examples using Selenium, where you need to switch to the iframe before clicking the checkbox, but in nodriver I'm not sure that option exists.</p>
<p>Here is a snippet of my code</p>
<pre><code>iframe_element = await twitch_tab.select('iframe[title="reCAPTCHA"]')
await tab.wait_for('div[id="rc-anchor-container"]') # Error doesn't exists
</code></pre>
|
<python><recaptcha><captcha><undetected-chromedriver>
|
2024-04-04 08:32:17
| 0
| 1,142
|
yarin Cohen
|
78,272,574
| 10,200,497
|
What is the best way to slice a dataframe up to the first instance of a mask?
|
<p>This is my DataFrame:</p>
<pre><code>import pandas as pd
df = pd.DataFrame(
{
'a': [10, 15, 20, 25, 30, 35, 40, 45, 50, 55, 60, 65, 70],
'b': [1, 1, 1, -1, -1, -2, -1, 2, 2, -2, -2, 1, -2],
}
)
</code></pre>
<p>The mask is:</p>
<pre><code>mask = (
(df.b == -2) &
(df.b.shift(1) > 0)
)
</code></pre>
<p>Expected output: slicing <code>df</code> up to the first instance of the <code>mask</code>:</p>
<pre><code> a b
0 10 1
1 15 1
2 20 1
3 25 -1
4 30 -1
5 35 -2
6 40 -1
7 45 2
8 50 2
</code></pre>
<p>The first instance of the mask is at row <code>9</code>. So I want to slice the <code>df</code> up to this index.</p>
<p>This is what I have tried. It works but I am not sure if it is the best way:</p>
<pre><code>idx = df.loc[mask.cumsum().eq(1) & mask].index[0]
result = df.iloc[:idx]
</code></pre>
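<p>A shorter alternative worth noting (a sketch, not from the original question): the position of the first True in a boolean array is its <code>argmax</code>, so the cumsum can be skipped, with a guard for the all-False case.</p>

```python
import pandas as pd

df = pd.DataFrame({'a': [10, 15, 20, 25], 'b': [1, -2, 1, -2]})
mask = (df.b == -2) & (df.b.shift(1) > 0)

# argmax of a boolean array is the position of the first True
# (it is 0 when there is no True, hence the mask.any() guard)
result = df.iloc[:mask.to_numpy().argmax()] if mask.any() else df
print(result)
```

On this toy frame the first mask hit is at row 1, so only row 0 survives.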
|
<python><pandas><dataframe>
|
2024-04-04 08:20:48
| 1
| 2,679
|
AmirX
|
78,272,521
| 6,221,742
|
Hugging Face Prompt Injection Identifier
|
<p>packages versions</p>
<ul>
<li>langchain==0.1.14</li>
<li>langchain-community==0.0.31</li>
<li>langchain-core==0.1.38</li>
<li>langchain-experimental==0.0.56</li>
<li>pydantic==1.10.14</li>
</ul>
<p>I am attempting to utilize prompt injection identification following the guidelines outlined in the tutorial linked <a href="https://python.langchain.com/docs/guides/safety/hugging_face_prompt_injection" rel="nofollow noreferrer">here</a>.</p>
<p>Here's the code snippet I'm using to load the injection identifier:</p>
<pre><code>from optimum.onnxruntime import ORTModelForSequenceClassification
from transformers import AutoTokenizer, pipeline
tokenizer = AutoTokenizer.from_pretrained("ProtectAI/deberta-v3-base-prompt-injection",
subfolder="onnx")
tokenizer.model_input_names = ["input_ids", "attention_mask"]
model = ORTModelForSequenceClassification.from_pretrained("ProtectAI/deberta-v3-base-
prompt-injection",
export=False,
subfolder="onnx",
file_name="model_optimized.onnx"
)
classifier = pipeline(
task="text-classification",
model=model,
tokenizer=tokenizer,
truncation=True,
max_length=512,
)
from langchain_experimental.prompt_injection_identifier import (
HuggingFaceInjectionIdentifier,
)
injection_identifier = HuggingFaceInjectionIdentifier(
model=classifier,
)
</code></pre>
<p>However, I encounter the following error:</p>
<pre><code>ERROR: pydantic.errors.ConfigError: field "model" not yet prepared so type is still a
ForwardRef, you might need to call
HuggingFaceInjectionIdentifier.update_forward_refs().
CONTEXT: Traceback (most recent call last):
</code></pre>
|
<python><prompt><langchain><large-language-model>
|
2024-04-04 08:08:53
| 2
| 339
|
AndCh
|
78,272,508
| 739,809
|
where to find my Google project's path for Google DLP
|
<p>Probably a very stupid question, but in the following code
I can't find what my project path is in Google DLP. Where can I find this path in the Google console (or any other way)?</p>
<pre><code>dlp = google.cloud.dlp_v2.DlpServiceClient()
parent = dlp.project_path('WHERE CAN I FIND WHAT GOES HERE???')
</code></pre>
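<p>For what it's worth (a hedged note, not from the original question): the value is the Google Cloud project ID shown in the Cloud Console's project selector, and the resulting resource name is just <code>projects/&lt;project-id&gt;</code>; in recent <code>google-cloud-dlp</code> releases the <code>project_path</code> helper was replaced, so building the string directly is common. <code>my-project</code> below is a placeholder.</p>

```python
# The project ID comes from the Cloud Console project selector / dashboard;
# "my-project" is a placeholder.
project_id = "my-project"
parent = f"projects/{project_id}"
print(parent)  # projects/my-project
```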
|
<python><google-developers-console><google-cloud-dlp>
|
2024-04-04 08:06:13
| 1
| 2,537
|
dsb
|
78,272,340
| 6,013,016
|
--pyi_out: protoc-gen-pyi: Plugin failed with status code 1
|
<p>I'm trying to run this command in docker:</p>
<pre><code>RUN python -m grpc_tools.protoc -I ./ --python_out=. --pyi_out=. --grpc_python_out=. my.proto
</code></pre>
<p>and seeing this error:</p>
<pre><code>0.423 protoc-gen-pyi: program not found or is not executable
0.423 Please specify a program using absolute path or make sure the program is available in your PATH system variable
0.423 --pyi_out: protoc-gen-pyi: Plugin failed with status code 1.
------
failed to solve: process "/bin/bash -c python -m grpc_tools.protoc -I ./ --python_out=. --pyi_out=. --grpc_python_out=. my.proto" did not complete successfully: exit code: 1
</code></pre>
<p>why this is happening and how to solve it?</p>
|
<python><docker><grpc><protoc>
|
2024-04-04 07:23:28
| 1
| 5,926
|
Scott
|
78,272,294
| 14,930,256
|
MacOS: AttributeError: module 'socket' has no attribute 'AF_BLUETOOTH'
|
<p>I kept getting this error:</p>
<pre><code>AttributeError: module 'socket' has no attribute 'AF_BLUETOOTH'
</code></pre>
<p>I tried to follow the solutions from <a href="https://stackoverflow.com/questions/52811888/attributeerror-module-socket-has-no-attribute-af-inet">this Stack Overflow question</a>, however I am still getting the same error.</p>
<p>If I comment out everything else, and just execute:</p>
<pre><code>import socket
print(socket)
</code></pre>
<p>It returns this, which looks good to me,</p>
<pre><code><module 'socket' from '/Users/raheyo/.pyenv/versions/3.9.18/lib/python3.9/socket.py'>
</code></pre>
<p>However, when I try printing the constants:</p>
<pre><code>print(socket.AF_BLUETOOTH)
</code></pre>
<p>I get the same error again for "no attribute 'AF_BLUETOOTH'"
Note that it doesn't just happen for 'AF_BLUETOOTH', same thing happens for 'AF_ASH' and 'AF_ALG'.</p>
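<p>A hedged observation (not part of the original question): these address-family constants are only compiled into CPython when the OS headers support them, and macOS builds generally lack <code>AF_BLUETOOTH</code>/<code>AF_ALG</code>, so code can probe for them defensively:</p>

```python
import socket

# Address-family constants exist only when CPython was compiled with the
# matching OS support; on macOS builds AF_BLUETOOTH/AF_ALG are usually absent.
for name in ("AF_INET", "AF_BLUETOOTH", "AF_ALG"):
    print(name, hasattr(socket, name))
```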
<p>Please help!</p>
|
<python><bluetooth><pyenv><python-sockets><bluetooth-socket>
|
2024-04-04 07:13:45
| 0
| 480
|
Ryan Wang
|
78,272,225
| 2,875,230
|
Use f-string instead of format call
|
<p>The server I'm posting my data to doesn't seem to support the <code>format</code> call, so I get the following error:</p>
<p>Use f-string instead of 'format' call for the functions below.</p>
<pre><code>def write_xlsx_response(self, queryset):
response = HttpResponse(
content_type="application/vnd.openxmlformats-officedocument.spreadsheetml.sheet"
)
response["Content-Disposition"] = 'attachment; filename="{}.xlsx"'.format(
self.get_filename()
)
export_xlsx_assessment(queryset, response)
return response
def write_csv_response(self, queryset):
response = HttpResponse(content_type="application/CSV")
response["Content-Disposition"] = 'attachment; filename="{}.csv"'.format(
self.get_filename()
)
export_csv_assessment(queryset, response)
return response
</code></pre>
<p>How can I convert this to an f-string?</p>
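<p>The conversion is mechanical: the <code>.format(self.get_filename())</code> call becomes an expression inside the braces. A minimal self-contained sketch (the <code>Exporter</code> class and fixed filename are stand-ins for the view code above):</p>

```python
class Exporter:
    """Stand-in for the view class; get_filename mimics the original helper."""

    def get_filename(self):
        return "report"

    def content_disposition(self, ext):
        # f-string replacement for the original .format() call
        return f'attachment; filename="{self.get_filename()}.{ext}"'

print(Exporter().content_disposition("xlsx"))
# attachment; filename="report.xlsx"
```

In the original methods the lines would become <code>response["Content-Disposition"] = f'attachment; filename="{self.get_filename()}.xlsx"'</code> (and <code>.csv</code> respectively).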
<p>Thanks</p>
|
<python><f-string>
|
2024-04-04 07:00:06
| 1
| 337
|
user2875230
|
78,271,970
| 5,707,137
|
Polars read_database does not respect iter_batches = True when using sqlalchemy/oracledb
|
<p>This query using polars and sqlalchemy does not fetch chunked.</p>
<pre><code>d = pl.read_database(sql_text(sql), e, iter_batches=True, batch_size=1000000)
</code></pre>
<p>The connection is a sqlalchemy create_engine("oracle+oracledb://@xxx") on top of oracledb.</p>
<p>It tries to fetch all 2 billion rows in one go, ending in a MemoryError. Is there any way of getting the message through to the engine to do batched fetches in another way?</p>
|
<python><python-polars>
|
2024-04-04 06:03:23
| 0
| 458
|
Niels Jespersen
|
78,271,655
| 2,537,486
|
Line plot of multiple data sets with different x axis coordinates
|
<p>I have data organized like this:</p>
<pre class="lang-py prettyprint-override"><code>import pandas as pd
import matplotlib.pyplot as plt
import matplotlib as mpl
df = pd.DataFrame({(1,'x'):[0,1,2],(1,'y'):[10,11,12],
(2,'x'):[1,2,3],(2,'y'):[11.5,11.8,13.2]})
df
1 2
x y x y
0 0 10 1 11.5
1 1 11 2 11.8
2 2 12 3 13.2
</code></pre>
<p>In words: I have data sets for several items, and each data set consists of a number of x/y pairs, but each data set has its own, different set of x values.</p>
<p>Now, I want to plot all data on the same plot, like this:</p>
<p><a href="https://i.sstatic.net/dCZXa.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/dCZXa.png" alt="line plot two lines" /></a></p>
<p>I can do it easily with a loop, like this:</p>
<pre class="lang-py prettyprint-override"><code>fig1,ax = plt.subplots()
for item in range(1,3):
df.xs(item,axis=1,level=0).plot(ax=ax,kind='line',x='x',y='y',
style='o-',label=str(item))
</code></pre>
<p>But I wonder if there's a way to get the same plot without using a loop.</p>
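<p>One loop-free possibility (a sketch, not from the original question): <code>Axes.plot</code> draws one line per column when handed 2D inputs, and <code>df.xs</code> can pull out all x columns and all y columns at once.</p>

```python
import pandas as pd
import matplotlib
matplotlib.use("Agg")  # headless backend so the sketch runs anywhere
import matplotlib.pyplot as plt

df = pd.DataFrame({(1, 'x'): [0, 1, 2], (1, 'y'): [10, 11, 12],
                   (2, 'x'): [1, 2, 3], (2, 'y'): [11.5, 11.8, 13.2]})

fig, ax = plt.subplots()
# ax.plot draws one line per column when given 2D x and y
lines = ax.plot(df.xs('x', axis=1, level=1), df.xs('y', axis=1, level=1), 'o-')
ax.legend([str(item) for item in df.columns.levels[0]])
print(len(lines))  # one line per item
```

Note the lines all share one style cycle here, so per-item styling would still need some per-line handling.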
|
<python><pandas><matplotlib><plot>
|
2024-04-04 04:20:57
| 2
| 1,749
|
germ
|
78,271,534
| 8,860,759
|
First Migration with alembic not created table in fastApi
|
<p>When I run the command to create my first migration with one table, the table is not created: the generated script has empty <code>upgrade</code> and <code>downgrade</code> functions, and only the <code>alembic_version</code> table is created.</p>
<p>I understand that it should contain something like:</p>
<pre><code>op.create_table(
'task',
sa.Column('id', sa.Integer, primary_key=True),
sa.Column('name', sa.String(20), nullable=False),
sa.Column('description', sa.String(200)),
sa.Column('status', sa.String(20)),
)
</code></pre>
<p>Am I missing something?</p>
<p>env.py</p>
<pre><code>from logging.config import fileConfig
from sqlalchemy import engine_from_config
from sqlalchemy import pool
from alembic import context
from database.database import Base, engine
# this is the Alembic Config object, which provides
# access to the values within the .ini file in use.
config = context.config
# Interpret the config file for Python logging.
# This line sets up loggers basically.
if config.config_file_name is not None:
fileConfig(config.config_file_name)
# add your model's MetaData object here
# for 'autogenerate' support
# from myapp import mymodel
# target_metadata = mymodel.Base.metadata
target_metadata = Base.metadata
print("-->", Base.metadata.tables)
# other values from the config, defined by the needs of env.py,
# can be acquired:
# my_important_option = config.get_main_option("my_important_option")
# ... etc.
def run_migrations_offline() -> None:
"""Run migrations in 'offline' mode.
This configures the context with just a URL
and not an Engine, though an Engine is acceptable
here as well. By skipping the Engine creation
we don't even need a DBAPI to be available.
Calls to context.execute() here emit the given string to the
script output.
"""
url = config.get_main_option("sqlalchemy.url")
context.configure(
url=url,
target_metadata=target_metadata,
literal_binds=True,
dialect_opts={"paramstyle": "named"}
)
with context.begin_transaction():
context.run_migrations()
def run_migrations_online() -> None:
"""Run migrations in 'online' mode.
In this scenario we need to create an Engine
and associate a connection with the context.
"""
connectable = engine_from_config(
config.get_section(config.config_ini_section, {}),
prefix="sqlalchemy.",
poolclass=pool.NullPool,
)
with connectable.connect() as connection:
context.configure(
connection=connection,
target_metadata=target_metadata
)
with context.begin_transaction():
context.run_migrations()
if context.is_offline_mode():
run_migrations_offline()
else:
run_migrations_online()
</code></pre>
<p>database.py (connection to the database)</p>
<pre><code>from sqlalchemy import create_engine
from sqlalchemy.orm import sessionmaker ,declarative_base
DATABASE_URL = "postgresql://postgres:postgres@192.168.1.11:5432/lean"
engine = create_engine(DATABASE_URL,pool_pre_ping=True, pool_size=20, max_overflow=30)
SessionLocal = sessionmaker(autocommit=False, autoflush=False, bind=engine)
Base = declarative_base()
</code></pre>
<p><a href="https://i.sstatic.net/Pm65V.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Pm65V.png" alt="enter image description here" /></a></p>
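<p>A likely-relevant note (hedged, not from the original question): autogenerate emits an empty script whenever <code>Base.metadata</code> holds no tables, which happens when env.py imports <code>Base</code> but the modules that define the models are never imported. A minimal illustration of how tables get registered (the <code>Task</code> model is hypothetical); in env.py the fix would be an import of the model modules, e.g. <code>import models  # noqa</code>, before setting <code>target_metadata</code>:</p>

```python
from sqlalchemy import Column, Integer, String
from sqlalchemy.orm import declarative_base

Base = declarative_base()
print(dict(Base.metadata.tables))   # {} -- nothing registered yet

# Importing (or here, defining) the model module is what registers the table
class Task(Base):
    __tablename__ = "task"
    id = Column(Integer, primary_key=True)
    name = Column(String(20), nullable=False)

print("task" in Base.metadata.tables)
```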
|
<python><sqlalchemy><fastapi><alembic>
|
2024-04-04 03:34:38
| 0
| 435
|
Lana YC.
|
78,271,379
| 3,962,849
|
Series Variable Convert Pinescript to Python Code
|
<p>I want to transfer my code to Python and use it more effectively. I have succeeded in most parts, but I need to translate the part that starts with the <code>zoneDecider</code> variable into Python. How can I do it?</p>
<p>Python Code</p>
<pre class="lang-py prettyprint-override"><code>df=data
zone_length = 10
zone_multiplier = 3.0
df['hl2'] = (df['h'] + df['l']) / 2
df.ta.atr(high=df['h'], low=df['l'], close=df['c'], length=zone_length,append=True)
df['downZone'] = df['hl2'] + (zone_multiplier * df['ATRr_10'])
df['upZone'] = df['hl2'] - (zone_multiplier * df['ATRr_10'])
</code></pre>
<p>Pinescript Code</p>
<pre class="lang-py prettyprint-override"><code>downZone = hl2 + (3.0 * ta.atr(10))
upZone = hl2 - (3.0 * ta.atr(10) )
downZoneNew = nz(downZone[1], downZone)
downZone := close[1] < downZoneNew ? math.min(downZone, downZoneNew) : downZone
upZoneNew = nz(upZone[1], upZone)
upZone := close[1] > upZoneNew ? math.max(upZone, upZoneNew) : upZone
zoneDecider = 1
zoneDecider := nz(zoneDecider[1], zoneDecider)
zoneDecider := zoneDecider == -1 and close > downZoneNew ? 1 : zoneDecider == 1 and close < upZoneNew ? -1 : zoneDecider
redZone = zoneDecider == -1 and zoneDecider[1] == 1
greenZone = zoneDecider == 1 and zoneDecider[1] == -1
plotshape(redZone ? downZone : na, location=location.absolute, style=shape.labeldown, size=size.tiny, color=color.new(color.red, 0), textcolor=color.new(color.white, 0), title='SAT', text='SAT')
plotshape(greenZone ? upZone : na, location=location.belowbar, style=shape.labelup, size=size.tiny, color=color.new(color.green, 0), textcolor=color.new(color.white, 0), title='AL', text='AL')
</code></pre>
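<p>Because each bar's <code>zoneDecider</code>, <code>downZone</code> and <code>upZone</code> depend on their own previous (already adjusted) values, the Pine series logic has to be carried row by row in Python. An untested sketch with toy numbers standing in for the real <code>c</code>/<code>downZone</code>/<code>upZone</code> columns:</p>

```python
import pandas as pd

# toy stand-ins for close and the raw zone columns computed above
df = pd.DataFrame({
    'c':        [100, 105,  95,  90, 110],
    'downZone': [110, 112, 108, 107, 113],
    'upZone':   [ 92,  96,  88,  85,  97],
})

zone = []
prev_down = prev_up = None
prev_zone = 1                                        # Pine starts zoneDecider at 1
for i in range(len(df)):
    dz, uz = df.downZone.iloc[i], df.upZone.iloc[i]
    close = df.c.iloc[i]
    close_prev = df.c.iloc[i - 1] if i > 0 else None
    d_new = dz if prev_down is None else prev_down   # nz(downZone[1], downZone)
    u_new = uz if prev_up is None else prev_up       # nz(upZone[1], upZone)
    if close_prev is not None and close_prev < d_new:
        dz = min(dz, d_new)                          # downZone := ...
    if close_prev is not None and close_prev > u_new:
        uz = max(uz, u_new)                          # upZone := ...
    z = prev_zone                                    # zoneDecider carries over
    if z == -1 and close > d_new:
        z = 1
    elif z == 1 and close < u_new:
        z = -1
    zone.append(z)
    prev_down, prev_up, prev_zone = dz, uz, z

df['zoneDecider'] = zone
df['redZone'] = (df.zoneDecider == -1) & (df.zoneDecider.shift(1) == 1)
df['greenZone'] = (df.zoneDecider == 1) & (df.zoneDecider.shift(1) == -1)
print(df[['zoneDecider', 'redZone', 'greenZone']])
```

The two <code>plotshape</code> calls have no direct pandas equivalent; the <code>redZone</code>/<code>greenZone</code> columns are the signal flags they would draw.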
|
<python><pine-script><pine-script-v5>
|
2024-04-04 02:42:18
| 1
| 5,244
|
abaci
|
78,271,277
| 6,546,694
|
Understanding sorted in the presence of NaN - Python
|
<p>Can you help me understand the following:</p>
<pre><code>import numpy as np
sorted([3,2,np.nan, -1])
output:
[-1, 2, 3, nan]
</code></pre>
<pre><code>import numpy as np
sorted([1,2,np.nan, -1])
output:
[1, 2, nan, -1]
</code></pre>
<p>It is almost as if a swap needs to be triggered before the NaN is reached; otherwise it returns the same list.</p>
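<p>The underlying cause (a note, not from the original question): every comparison with NaN is False, so <code>sorted</code>, which assumes a consistent <code>&lt;</code> ordering, gives order-dependent results. One common workaround is a key that pushes NaNs to the end explicitly:</p>

```python
import math

print(math.nan < 1, math.nan > 1)   # False False: NaN breaks the ordering

data = [1, 2, math.nan, -1]
# sort on (is-nan, value) so NaNs compare greater than every number
result = sorted(data, key=lambda v: (math.isnan(v), v))
print(result)  # [-1, 1, 2, nan]
```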
|
<python><numpy><sorting><nan>
|
2024-04-04 01:57:20
| 1
| 5,871
|
figs_and_nuts
|
78,271,184
| 877,314
|
Python Triple Quoted comments choke on too many backslashes
|
<p>Python displays <code>SyntaxWarning: invalid escape sequence '\ '</code> when I am drawing a tree structure in a triple quoted comment string:</p>
<pre><code> A
/ \
B C
/ \ \
D E F
</code></pre>
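<p>The usual fix (a sketch, not part of the original question) is a raw triple-quoted string, which keeps every backslash literal and silences the warning:</p>

```python
# r-prefix keeps the backslashes literal, so no invalid escape sequences
tree = r"""
    A
   / \
  B   C
 / \   \
D   E   F
"""
print(tree)
```

One caveat: a raw string still cannot end with an odd number of backslashes, so keep a newline after the last <code>\</code> as above.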
|
<python>
|
2024-04-04 01:19:43
| 2
| 1,511
|
Stefan Musarra
|
78,271,162
| 21,321,616
|
Federated Averaging somehow always gives identical accuracy on test set
|
<p>I am implementing federated averaging on the <a href="https://www.stratosphereips.org/datasets-iot23" rel="nofollow noreferrer">IoT-23 dataset (lighter version)</a>.</p>
<p>Omitting the preprocessing I am saving the data into a test set and 9 train sets as below:</p>
<pre><code>X_train, X_test, Y_train, Y_test = train_test_split(X, y, random_state=10, test_size=0.2)
test_set = pd.concat([X_test, Y_test], axis=1)
test_set.to_csv('./dataset/test_set.csv')
num_sets = 9
set_size = len(X_train) // num_sets
train_set = pd.concat([X_train, Y_train], axis=1)  # combined training frame
train_sets = np.array_split(train_set, num_sets)
for i in range(num_sets):
train_sets[i].to_csv(f'./dataset/train{i}.csv')#type:ignore
</code></pre>
<p>My model looks like this:</p>
<pre><code>model = models.Sequential([
layers.Input(shape=(24,)),
layers.Dense(150,activation='relu'),
layers.Dense(80,activation='relu'),
layers.Dropout(0.2),
layers.Dense(7, activation='softmax')
])
loss_fn = losses.CategoricalFocalCrossentropy(alpha=0.2)
model.compile(loss=loss_fn, optimizer='rmsprop', metrics=['accuracy'])
</code></pre>
<p>Now with these I am training and saving weights for each of the train sets:</p>
<pre><code>for client_number in range(9):
model = models.load_model('./model/model.keras')
train_data=pd.read_csv(f"./dataset/train{client_number}.csv",dtype='float')
X_train = train_data.iloc[:, 1:-7]
y_train = train_data.iloc[:, -7:]
model.fit(X_train, y_train, epochs=5)
model.save_weights(f'./weights/weight{client_number}.weights.h5')#type:ignore
</code></pre>
<p><a href="https://i.sstatic.net/uogr0.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/uogr0.png" alt="enter image description here" /></a><br />
As you can see from the screenshot each of them does give different accuracies while training</p>
<p>Now for federated averaging I am <a href="https://towardsdatascience.com/federated-learning-a-step-by-step-implementation-in-tensorflow-aac568283399" rel="nofollow noreferrer">using this <code>sum_scaled_weights()</code> method</a> but without the scaling factors, since I've divided the dataset into equal-sized parts.</p>
<pre><code>def sum_weights(weight_list):
averaged_weights = list()
#get the average grad accross all client gradients
for grad_list_tuple in zip(*weight_list):
layer_mean = tf.math.reduce_sum(grad_list_tuple, axis=0)
averaged_weights.append(layer_mean)
return averaged_weights
</code></pre>
<p>I am aggregating the weights with something like:</p>
<pre><code>model=models.load_model('./model/model.keras')
for i in range(3):
model.load_weights(f'./weights/weight{i}.weights.h5')#type:ignore
org_weights.append(model.get_weights())#type:ignore
average_weights = sum_weights(org_weights)
model.set_weights(average_weights)
loss, accuracy = model.evaluate(X, y)#type:ignore
print(f'Org Aggregate no.{j} accuracy: {accuracy}')
</code></pre>
<p>Now the issue I'm running into is that regardless of how many weights I aggregate, the accuracy on the test set ends up exactly the same,
e.g. with 3 of them separately (i.e. weights 0-2, 3-5, 6-8):<br />
<a href="https://i.sstatic.net/r3StG.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/r3StG.png" alt="enter image description here" /></a></p>
<p>with all 9:<br />
<a href="https://i.sstatic.net/Y8Po0.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Y8Po0.png" alt="enter image description here" /></a></p>
<p>I have tried the test set with the initial weights instead and they do give me different accuracies:<br />
<a href="https://i.sstatic.net/tUv35.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/tUv35.png" alt="enter image description here" /></a></p>
<p>I'm honestly stumped and curious about how and why this is happening. Any explanation or corrections for any error in the logic would be appreciated.</p>
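<p>One hedged observation (not from the original question): <code>reduce_sum</code> without the scaling factors adds the client weights instead of averaging them, so with n clients every parameter is scaled by roughly n, which can saturate the softmax and flatten the evaluation results. For equal-sized shards the aggregate should be the element-wise mean; a numpy sketch with toy layer shapes:</p>

```python
import numpy as np

def average_weights(weight_list):
    """Element-wise mean across clients, layer by layer (equal-sized shards)."""
    return [np.mean(np.stack(layers), axis=0) for layers in zip(*weight_list)]

# two toy clients, each with a (2,) layer and a (1, 1) layer
client_a = [np.array([1.0, 2.0]), np.array([[1.0]])]
client_b = [np.array([3.0, 4.0]), np.array([[3.0]])]
avg = average_weights([client_a, client_b])
print(avg)  # [array([2., 3.]), array([[2.]])]
```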
|
<python><tensorflow><machine-learning><keras><federated-learning>
|
2024-04-04 01:04:42
| 0
| 415
|
AdityaDN
|
78,271,087
| 20,386,110
|
Using get_tree to create a timer but it's not activating
|
<p>I'm working on creating Pong and I have two <code>RayCast2Ds</code> on the left and right side of the arena. I have it so whenever one of the RayCast gets hit, the ball resets back to the middle of the arena and the opposite player gets a point. I'm trying to make it so after the ball hits the RayCast, it resets back to the middle of the arena, pauses for 3s and starts the next round. I've added the code inside the if statement for gaining a point but for some reason the timer won't start.</p>
<pre><code>func _physics_process(delta):
if $RayCast2D.is_colliding():
if (!already_collided):
already_collided = true
$Ball.position.x = 575
$Ball.position.y = 350
Globals.p2_score += 1
$Label2.text = str(Globals.p2_score)
await get_tree().create_timer(3).timeout #timer
else:
already_collided = false
if $RayCast2D2.is_colliding():
if (!already_collided):
already_collided = true
$Ball.position.x = 575
$Ball.position.y = 350
Globals.p1_score += 1
$Label.text = str(Globals.p1_score)
await get_tree().create_timer(3).timeout #timer
else:
already_collided = false
</code></pre>
|
<python><game-development><godot><gdscript>
|
2024-04-04 00:27:09
| 1
| 369
|
Dagger
|
78,271,085
| 449,347
|
python requirements.txt - requirement specifier for Linux docker on real Apple hardware?
|
<p>I am trying to create a <code>requirements.txt</code> to behave on</p>
<ol>
<li>Linux</li>
<li>MacOS</li>
<li>Linux container running on MacOS (M1 laptop) in Docker. I THINK this needs to behave like in (2)</li>
</ol>
<p>My scenario is similar to <a href="https://stackoverflow.com/questions/72720235/requirements-txt-for-pytorch-for-both-cpu-and-gpu-platforms">requirements.txt for pytorch for both CPU and GPU platforms</a> except that question didn't have docker.</p>
<p>I have this in my <code>requirements.txt</code> ...</p>
<pre><code>torch==2.2.1+cpu ; sys_platform == "linux"
torch==2.2.1 ; sys_platform != "linux"
</code></pre>
<p>and I run <code> pip install -r requirements.txt --find-links https://download.pytorch.org/whl/torch_stable.html</code></p>
<p>It works as expected on Linux (and also for a docker build on Linux base) and works as expected on MacOS because <code>sys.platform</code> there is <code>'darwin'</code>. The <code>2.2.1+cpu</code> version only is available for Linux.</p>
<p>However building in docker on MacOS, on a M1 laptop, I see ...</p>
<pre><code>docker build -t my-image:latest -f Dockerfile .
// SNIP
ERROR: Could not find a version that satisfies the requirement torch==2.2.1+cpu (from versions: 1.7.1, 1.8.0, 1.8.1, 1.9.0, 1.10.0, 1.10.1, 1.10.2, 1.11.0, 1.12.0, 1.12.1, 1.13.0, 1.13.1, 2.0.0, 2.0.1, 2.1.0, 2.1.1, 2.1.2, 2.2.0, 2.2.1, 2.2.2)
ERROR: No matching distribution found for torch==2.2.1+cpu
</code></pre>
<p>Can I have one <code>requirements.txt</code> that will work in all cases? I can't find a requirement specifier that tells me what the underlying physical hardware is (assuming that is actually the real issue?).</p>
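<p>A possible sketch (untested): PEP 508 environment markers also expose <code>platform_machine</code>, which reports the interpreter's architecture (<code>aarch64</code> inside an arm64 Linux container), so the x86_64-only <code>+cpu</code> wheel can be restricted to that architecture and everything else falls back to the plain wheel, which PyTorch also publishes for linux/aarch64 - verify wheel availability for your exact versions:</p>

```
torch==2.2.1+cpu ; sys_platform == "linux" and platform_machine == "x86_64"
torch==2.2.1 ; sys_platform != "linux" or platform_machine != "x86_64"
```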
|
<python><docker><requirements.txt>
|
2024-04-04 00:26:33
| 0
| 5,028
|
k1eran
|
78,270,801
| 14,385,099
|
Glob pattern to read in a specific string in Python
|
<p>I have a folder with a list of files I want to read in as a list of full file paths.</p>
<p>The files that I want to read in have this structure: <code>[0-9]_beta_[Y].nii.gz</code>, where:</p>
<ul>
<li>[0-9] is a 3 digit sequence of numbers (e.g. 123)</li>
<li>[Y] is a character string of any length (e.g. 'faces', 'faces_up')</li>
</ul>
<p>What is the right pattern for this?</p>
<p>Here's the code I have so far but it doesn't get all the files I need:</p>
<pre><code>file_list = glob.glob(os.path.join(data_dir, f'*_beta_*nii.gz'))
</code></pre>
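<p>Glob patterns have no <code>{3}</code>-style repetition, so the digit class has to be spelled out once per digit; a sketch using <code>fnmatch</code> (same pattern language as <code>glob</code>) so it can be checked without touching the filesystem:</p>

```python
import fnmatch

# glob has no "{3}" repetition, so the digit class appears three times;
# note the explicit dot before "nii"
pattern = "[0-9][0-9][0-9]_beta_*.nii.gz"

names = ["123_beta_faces.nii.gz", "045_beta_faces_up.nii.gz",
         "12_beta_faces.nii.gz", "123_gamma_faces.nii.gz"]
matched = [n for n in names if fnmatch.fnmatch(n, pattern)]
print(matched)
```

With glob this would be <code>glob.glob(os.path.join(data_dir, "[0-9][0-9][0-9]_beta_*.nii.gz"))</code>.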
<p>Thanks so much for your help!</p>
|
<python><glob>
|
2024-04-03 22:29:50
| 1
| 753
|
jo_
|
78,270,745
| 15,452,168
|
How to Predict New Data Topics with BERTopic Model Loaded from Hugging Face in Python? (Automated solution)
|
<p>I've developed a BERTopic model for analyzing mobile app reviews and have successfully pushed it to Hugging Face. I'm able to load the model in my Python script but am facing challenges in predicting topics for new incoming feedback.</p>
<p>My goal is to automatically assign a topic to new app feedback using the trained BERTopic model. Here's the workflow I've implemented:</p>
<p>I load the BERTopic model from Hugging Face.
I use the SentenceTransformer model to encode the new documents (app feedback).
I attempt to transform the new documents using the loaded BERTopic model to predict their topics.</p>
<p>However, I'm uncertain about the last part of my code, where I aim to merge the predicted topics with the original dataset containing app feedback. I want to end up with a dataset that includes the original feedback and the assigned topic names, ideally with automatic incrementation for new data.</p>
<p>Here's the code snippet I'm using:</p>
<pre><code>from huggingface_hub import login
from bertopic import BERTopic
from sentence_transformers import SentenceTransformer
import pandas as pd
# Login to Hugging Face
access_token_read = "hf_xynbKhivF.........."
login(token=access_token_read)
# Load the BERTopic model
model = BERTopic.load("shantanudave/BERTopic_ArXiv")
# Sample new document
new_docs = ["There is an issue with payment on the checkout."]
# Generate embeddings for the new document
embedding_model = SentenceTransformer("all-MiniLM-L6-v2")
embeddings = embedding_model.encode(new_docs, show_progress_bar=True)
# Predict the topic for the new document
new_topics, new_probs = model.transform(new_docs, embeddings)
# Code for merging predicted topics with the original dataset (uncertain part)
...
</code></pre>
<p>Questions:</p>
<p>Is the approach I'm using to predict and assign topics to new feedback using BERTopic correct?
How can I effectively merge the predicted topics back into the original dataset, ensuring that each piece of feedback gets its corresponding topic and topic name?</p>
<pre><code>#My manual approach : -
T = topic_model.get_document_info(docs)
T_df = T[['Document', 'Topic', 'Name', 'CustomName']]
# Remove duplicates from this new DataFrame
T_clean = T_df.drop_duplicates(subset=['Document'], keep='first')
T_clean
# Performing a left join where 'Document' in T_clean matches 'eng_content' in df , 'eng_content' is the review/feedback
merged_df = pd.merge(df, T_clean, how='left', left_on='eng_content', right_on='Document')
topic_doc_df_1 = merged_df
# Rename columns
topic_doc_df_1 = topic_doc_df_1.rename(columns={
'CustomName': 'Topic_name',
'score': 'Rating',
'date': 'Date',
'language': 'Language',
'sentiment': 'Sentiment',
'probability': 'Probability',
'app_version': 'App_version'
})
# Rearrange columns
topic_doc_df_1 = topic_doc_df_1[['Topic', 'Topic_name', 'Document', 'Rating', 'Date', 'Language', 'Sentiment', 'Probability', 'App_version']]
topic_doc_df_1

</code></pre>
<p>I have another function clean_doc() which basically reads new data from df['eng_content'] and provide list 'new_docs'</p>
<p>I am sure there must be a better way of doing this that I am missing,</p>
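<p>The merge step itself can be done without BERTopic in the picture (a plain-pandas sketch, not from the original question), assuming <code>new_topics</code> is the id list returned by <code>model.transform</code> and a topic-id-to-name mapping is available, e.g. derived from <code>model.get_topic_info()</code>; the ids and names below are illustrative:</p>

```python
import pandas as pd

new_docs = ["There is an issue with payment on the checkout."]
new_topics = [3]                                      # e.g. from model.transform
topic_names = {3: "payment_issues", -1: "outlier"}    # e.g. from get_topic_info()

predictions = pd.DataFrame({"eng_content": new_docs, "Topic": new_topics})
predictions["Topic_name"] = predictions["Topic"].map(topic_names)

# stand-in for the original feedback frame
df = pd.DataFrame({"eng_content": new_docs, "score": [1]})
merged = df.merge(predictions, on="eng_content", how="left")
print(merged)
```

This avoids the <code>drop_duplicates</code> dance, since the predictions frame is built directly from the batch of new documents.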
<p>Thanks in advance,
Shantanu</p>
|
<python><nlp><huggingface>
|
2024-04-03 22:11:01
| 0
| 570
|
sdave
|
78,270,713
| 10,985,257
|
Connect jpype to existing JVM
|
<p>I am trying to access an API of a Java application from Python. The application is a simulation tool. It has more sophisticated functions, but the simplest ones are <code>start</code>, <code>pause</code> and <code>reset</code>.</p>
<p>I launch the java application like the following:</p>
<pre class="lang-bash prettyprint-override"><code>java -jar lib/simulation.jar
</code></pre>
<p>Inside the simulation I have access to two buttons:</p>
<ul>
<li>start/pause</li>
<li>reset</li>
</ul>
<p>That is how the GUI of the application looks/works.</p>
<p>As far as I understand, the JVM is already running at this moment. Now I launch my python script to actually execute the commands via the API:</p>
<pre class="lang-py prettyprint-override"><code>import jpype
import jpype.imports
from jpype.types import *
jpype.startJVM(classpath="lib/simulation.jar")
from com.simulation.basic import SimModel
model = SimModel()
model.start()
model.pause()
model.reset()
</code></pre>
<p>The line <code>model.start()</code> throws the first error: the class method <code>com.simulation.basic.SimModel.getSimulation()</code> returns null, so the model is not able to execute the <code>start</code> method.</p>
<p>From my perspective it looks like I can successfully create the API class <code>SimModel</code>, but it doesn't know about the actual application, because both instances are spawned in different JVMs.</p>
<p>If my assumption is correct, how can I force JPype to attach to the JVM of my already running application?</p>
|
<python><java><jpype>
|
2024-04-03 22:04:23
| 0
| 1,066
|
MaKaNu
|
78,270,495
| 1,925,652
|
Why does breakpoint() not work in jupyter lab?
|
<p>Title says it all: I'm running JupyterLab with what should be default settings, and the Python builtin <code>breakpoint()</code> does nothing... It works fine in ipython/python though. Am I missing something?</p>
|
<python><jupyter><pdb>
|
2024-04-03 21:02:14
| 0
| 521
|
profPlum
|
78,270,468
| 23,260,297
|
Combine data from different scripts into a dataframe
|
<p>I have multiple Python scripts that read different csv/excel files that are received every day. The scripts export multiple dataframes to the same excel file, but on different sheets. I need to take it a step further: take specific data from each dataframe and combine it into a new, separate dataframe on another sheet in the same file. Keep in mind the scripts are separate from one another.</p>
<p>For instance, let's say I have the following dataframes in an excel file:</p>
<pre><code>Sheet1:
ID Counterparty Date Commodity Deal Price Total
-- ------------ ----- -------- ----- ----- ------
1 party1 04/03/2024 Oil Sell 10.00 100.00
1 party1 04/03/2024 Oil Sell 10.00 100.00
1 party1 04/03/2024 Oil Sell 10.00 100.00
1 party1 04/03/2024 Oil Buy 10.00 100.00
1 party1 04/03/2024 Oil Buy 10.00 100.00
1 party1 04/03/2024 Oil Buy 10.00 100.00
Sheet2:
ID Counterparty Date Commodity Deal Price Total
-- ------------ ----- -------- ----- ----- ------
1 party2 04/03/2024 Oil Sell 5.00 50.00
1 party2 04/03/2024 Oil Sell 5.00 50.00
1 party2 04/03/2024 Oil Sell 5.00 50.00
1 party2 04/03/2024 Oil Buy 5.00 50.00
1 party2 04/03/2024 Oil Buy 5.00 50.00
1 party2 04/03/2024 Oil Buy 5.00 50.00
</code></pre>
<p>On sheet 3 I need the following dataframe:</p>
<pre><code>Sell
Oil ... ... ...
party 1 300.00 ... ... ...
party 2 150.00 ... ... ...
Buy
Oil ... ... ...
party 1 300.00 ... ... ...
party 2 150.00 ... ... ...
</code></pre>
<p>Is this even possible to achieve? I have no idea where to even start with something like this. Any suggestions would help. If you need any other information, just let me know.</p>
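<p>One possible shape for this (a sketch, not from the original question): read the sheets back, e.g. with <code>pd.read_excel(path, sheet_name=None)</code>, concatenate them, and let <code>pivot_table</code> build the per-Deal, per-Counterparty sums; small in-memory frames stand in for the sheets here:</p>

```python
import pandas as pd

# stand-ins for the two sheets read back from the workbook
sheet1 = pd.DataFrame({'Counterparty': ['party1'] * 6,
                       'Commodity': ['Oil'] * 6,
                       'Deal': ['Sell'] * 3 + ['Buy'] * 3,
                       'Total': [100.0] * 6})
sheet2 = sheet1.assign(Counterparty='party2', Total=50.0)

combined = pd.concat([sheet1, sheet2], ignore_index=True)
# one row per (Deal, Counterparty), one column per Commodity, summed totals
summary = combined.pivot_table(index=['Deal', 'Counterparty'],
                               columns='Commodity', values='Total',
                               aggfunc='sum')
print(summary)
```

The result could then be written to a third sheet with <code>pd.ExcelWriter(path, mode="a")</code>; splitting the Sell and Buy blocks visually would just be two <code>summary.loc['Sell']</code> / <code>summary.loc['Buy']</code> writes.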
|
<python><pandas><excel>
|
2024-04-03 20:57:21
| 1
| 2,185
|
iBeMeltin
|
78,270,182
| 3,517,025
|
connecting to postgres database from azure function - fails with local build/GH actions, succeeds from cli remote build
|
<p>This question is about the right way to set up/build an azure function such that it'll connect to a postgres database leveraging github actions CI/CD.</p>
<p><strong>How things work</strong></p>
<pre class="lang-py prettyprint-override"><code>import os
import azure.functions as func
import logging
import psycopg2
app = func.FunctionApp(http_auth_level=func.AuthLevel.FUNCTION)
@app.route(route="ping_db", auth_level=func.AuthLevel.FUNCTION)
def ping_db(req: func.HttpRequest) -> func.HttpResponse:
logging.info('Python HTTP trigger function processed a request.')
sample_sql = "SELECT * FROM table limit 1;"
cnx_str = os.environ.get("POSTGRES_CONNECTION_STRING")
with psycopg2.connect(cnx_str) as cnx:
cursor = cnx.cursor()
cursor.execute(sample_sql)
result = cursor.fetchone()
cursor.close()
return func.HttpResponse(f"{result}")
@app.route(route="sanity_check", auth_level=func.AuthLevel.FUNCTION)
def sanity_check(req: func.HttpRequest) -> func.HttpResponse:
logging.info('Python HTTP trigger function processed a request.')
return func.HttpResponse("sanity_check")
</code></pre>
<p>requirements.txt:</p>
<pre><code>azure-functions
psycopg2-binary
</code></pre>
<p><strong>Remote Build From Dev Machine</strong></p>
<p>With the above, a remote build from cli <code>func azure functionapp publish <func-app-name> --build remote</code> works just fine.</p>
<p><strong>How Things Don't Work</strong></p>
<p><strong>Local Build</strong></p>
<p>deploying the above with <code>func azure functionapp publish <func-app-name> --build local</code> though gets the following error:</p>
<blockquote>
<p>There was an error restoring dependencies. ERROR: cannot install psycopg2_binary-2.9.9 dependency: binary dependencies without wheels are not supported when building locally. Use the "--build remote" option to build dependencies on the Azure Functions build server, or "--build-native-deps" option to automatically build and configure the dependencies using a Docker container. More information at <a href="https://aka.ms/func-python-publish" rel="nofollow noreferrer">https://aka.ms/func-python-publish</a></p>
</blockquote>
<p><strong>Github Actions</strong></p>
<p>The below is the default github actions created by the azure portal when I integrated the CI/CD pipeline</p>
<pre><code># Docs for the Azure Web Apps Deploy action: https://github.com/azure/functions-action
# More GitHub Actions for Azure: https://github.com/Azure/actions
# More info on Python, GitHub Actions, and Azure Functions: https://aka.ms/python-webapps-actions
name: Build and deploy Python project to Azure Function App - dig-data-functions
on:
push:
branches:
- my_branch
workflow_dispatch:
env:
AZURE_FUNCTIONAPP_PACKAGE_PATH: '.' # set this to the path to your web app project, defaults to the repository root
PYTHON_VERSION: '3.9' # set this to the python version to use (supports 3.6, 3.7, 3.8)
jobs:
build:
runs-on: ubuntu-latest
steps:
- name: Checkout repository
uses: actions/checkout@v4
- name: Setup Python version
uses: actions/setup-python@v1
with:
python-version: ${{ env.PYTHON_VERSION }}
- name: Create and start virtual environment
run: |
python -m venv venv
source venv/bin/activate
- name: Install dependencies
run: pip install -r requirements.txt
# Optional: Add step to run tests here
- name: Zip artifact for deployment
run: zip release.zip ./* -r
- name: Upload artifact for deployment job
uses: actions/upload-artifact@v3
with:
name: python-app
path: |
release.zip
!venv/
deploy:
runs-on: ubuntu-latest
needs: build
environment:
name: 'Production'
url: ${{ steps.deploy-to-function.outputs.webapp-url }}
permissions:
id-token: write #This is required for requesting the JWT
steps:
- name: Download artifact from build job
uses: actions/download-artifact@v3
with:
name: python-app
- name: Unzip artifact for deployment
run: unzip release.zip
- name: Login to Azure
uses: azure/login@v1
with:
client-id: ${{ secrets.AZUREAPPSERVICE_CLIENTID_67E3A84100BC4DF68179EAC07489636F }}
tenant-id: ${{ secrets.AZUREAPPSERVICE_TENANTID_AFCA8CC613004EC3B0C8708766E6472B }}
subscription-id: ${{ secrets.AZUREAPPSERVICE_SUBSCRIPTIONID_D8A4D5D5E65F4F2986E1BAC1BDD79A92 }}
- name: 'Deploy to Azure Functions'
uses: Azure/functions-action@v1
id: deploy-to-function
with:
app-name: 'my-app-name'
slot-name: 'Production'
package: ${{ env.AZURE_FUNCTIONAPP_PACKAGE_PATH }}
scm-do-build-during-deployment: true
enable-oryx-build: true
</code></pre>
<p>For some reason, however, when deployed with a GitHub Action there is no error (the job is green), but no functions are functional or visible through the Azure web portal, even though the right files are there.</p>
<p>Note: this happens specifically when the line <code>import psycopg2</code> (and the code that depends on it) is commented in; if it is commented out, the <code>sanity_check</code> works just fine.</p>
|
<python><postgresql><azure><azure-functions><github-actions>
|
2024-04-03 19:53:16
| 1
| 5,409
|
Joey Baruch
|
78,270,044
| 339,144
|
How to get insight in the relations between tracebacks of exceptions in an exception-chain
|
<p>This question is best introduced example-first:</p>
<p>Consider the following trivial program:</p>
<pre><code>class OriginalException(Exception):
pass
class AnotherException(Exception):
pass
def raise_another_exception():
raise AnotherException()
def show_something():
try:
raise OriginalException()
except OriginalException:
raise_another_exception()
show_something()
</code></pre>
<p>running this will dump the following on screen (minus annotations on the Right-Hand-Side):</p>
<pre><code>Traceback (most recent call last):
File "...../stackoverflow_single_complication.py", line 15, in show_something t1
raise OriginalException()
__main__.OriginalException
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "...../stackoverflow_single_complication.py", line 20, in <module> t0
show_something()
File "...../stackoverflow_single_complication.py", line 17, in show_something t2
raise_another_exception()
File "...../stackoverflow_single_complication.py", line 10, in raise_another_exception t3
raise AnotherException()
__main__.AnotherException
</code></pre>
<p>What we see here is first the <code>OriginalException</code> with the stackframes between the moment that it was raised and the moment it was handled.
Then we see <code>AnotherException</code>, with a complete traceback from its point of raising back to the start of the program (<em>all</em> frames).</p>
<p>In itself this is perfectly fine, but a consequence of this way of presenting the information is that the stackframes are <em>not</em> laid out on the screen in the order in which they were called (and not in the reverse order either), as per the annotations <em>t1</em>, <em>t0</em>, <em>t2</em>, <em>t3</em>. The path leading up to <em>t1</em> is of course the same as the path leading up to <em>t2</em>, and the creators of Python have chosen to present it only once, in the latter case, presumably because that Exception is usually the most interesting one, and because it allows one to read the bottom exception bottom-up without loss of information. However, it does leave people that want to analyze the <code>OriginalException</code> somewhat mystified: what led up to it?</p>
<p>A programmer that wants to understand what led up to <em>t1</em> would need to [mentally] copy all the frames above the point <em>t2</em> to the first stacktrace to get a complete view. However, in reality the point <em>t2</em> is, AFAIK, not automatically annotated for you as a special frame, which makes the task of mentally copying the stacktrace much harder.</p>
<p>Since the point <em>t2</em> is in general "the failing line in the <code>except</code> block", by cross-referencing the source code this exercise can usually be completed, but this seems unnecessarily hard.</p>
<p><strong>Is it possible to automatically pinpoint <em>t2</em> as the "handling frame"?</strong></p>
<p>(The example above is given without some outer exception-handling context; I'm perfectly fine with answers that introduce it and then use <code>traceback</code> or other tools to arrive at the correct answer).</p>
<p>This is the most trivial case that illustrates the problem; real cases have many more stack frames and thus less clearly illustrate the problem but more clearly illustrate the need for (potentially automated) clarification of what's happening that this SO question is about.</p>
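<p>To make the question concrete, here is a sketch (my own attempt, not a known solution) that tries to pinpoint <em>t2</em> by looking for the deepest frame that the new exception's traceback shares with the traceback of its <code>__context__</code>:</p>

```python
class OriginalException(Exception):
    pass


class AnotherException(Exception):
    pass


def raise_another_exception():
    raise AnotherException()


def show_something():
    try:
        raise OriginalException()
    except OriginalException:
        raise_another_exception()


def find_handling_frame(exc):
    """Return the deepest traceback entry of exc whose frame object also
    appears in the traceback of exc.__context__ -- i.e. the frame that was
    handling the original exception when the new one was raised (t2)."""
    ctx = exc.__context__
    if ctx is None:
        return None
    ctx_frames = set()
    tb = ctx.__traceback__
    while tb is not None:
        ctx_frames.add(tb.tb_frame)
        tb = tb.tb_next
    handling = None
    tb = exc.__traceback__
    while tb is not None:
        if tb.tb_frame in ctx_frames:  # same frame object == same call
            handling = tb
        tb = tb.tb_next
    return handling


try:
    show_something()
except AnotherException as exc:
    t2 = find_handling_frame(exc)
    print(t2.tb_frame.f_code.co_name, t2.tb_lineno)
```

<p>This relies on frame-object identity (the frame of <code>show_something</code> is the same object in both tracebacks), which seems more robust than comparing file names and line numbers, but I haven't tested it beyond trivial cases.</p>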
<p>Edit to add:</p>
<p>I wrote a full blog post on <a href="http://www.bugsink.com/blog/chained-stacktraces-puzzle/" rel="nofollow noreferrer">Chained Exceptions in Python</a></p>
|
<python>
|
2024-04-03 19:24:22
| 0
| 2,577
|
Klaas van Schelven
|
78,270,032
| 1,987,319
|
AttributeError: '_UnixSelectorEventLoop' object has no attribute '_signal_handlers' while running pytest
|
<p>This is my error:</p>
<pre><code>Connected to pydev debugger (build 233.15026.15)
Exception ignored in: <function BaseEventLoop.__del__ at 0x12809be50>
Traceback (most recent call last):
File "/usr/local/Cellar/python@3.9/3.9.19/Frameworks/Python.framework/Versions/3.9/lib/python3.9/asyncio/base_events.py", line 688, in __del__
self.close()
File "/usr/local/Cellar/python@3.9/3.9.19/Frameworks/Python.framework/Versions/3.9/lib/python3.9/asyncio/unix_events.py", line 60, in close
for sig in list(self._signal_handlers):
AttributeError: '_UnixSelectorEventLoop' object has no attribute '_signal_handlers'
Exception ignored in: <function BaseEventLoop.__del__ at 0x12809be50>
Traceback (most recent call last):
File "/usr/local/Cellar/python@3.9/3.9.19/Frameworks/Python.framework/Versions/3.9/lib/python3.9/asyncio/base_events.py", line 688, in __del__
self.close()
File "/usr/local/Cellar/python@3.9/3.9.19/Frameworks/Python.framework/Versions/3.9/lib/python3.9/asyncio/unix_events.py", line 60, in close
for sig in list(self._signal_handlers):
AttributeError: '_UnixSelectorEventLoop' object has no attribute '_signal_handlers'
Launching pytest with arguments <args here>
</code></pre>
<p>I am getting the above error as soon as I launch pytest. I'm using PyCharm on a Mac M1. I am not sure what further information to provide; if you need anything more, please let me know in a comment. This is all the stack trace that I see.</p>
<p>Any help will be appreciated.</p>
|
<python>
|
2024-04-03 19:21:44
| 0
| 3,947
|
prabodhprakash
|
78,269,841
| 6,141,238
|
When saving and loading Keras or TensorFlow models: How do I disable/hide the warnings that appear when using the .h5 extension?
|
<p>I am trying to save and load Keras and/or TensorFlow models, and approaching the problem by following the <a href="https://keras.io/api/models/model_saving_apis/model_saving_and_loading/" rel="nofollow noreferrer">Keras documentation</a> and a <a href="https://www.tensorflow.org/tutorials/keras/save_and_load#hdf5_format" rel="nofollow noreferrer">TensorFlow tutorial</a>. In VS Code on my PC, both the native <code>.keras</code> format and the SavedModel format described in the <a href="https://www.tensorflow.org/tutorials/keras/save_and_load#hdf5_format" rel="nofollow noreferrer">TensorFlow tutorial</a> return errors. However, the following near-minimal example using the legacy <code>.h5</code> format does complete without errors (this example is adapted from the <a href="https://keras.io/api/models/model_saving_apis/model_saving_and_loading/" rel="nofollow noreferrer">Keras documentation</a>).</p>
<pre><code>import keras
model = keras.Sequential(
[
keras.layers.Dense(5, input_shape=(3,)),
keras.layers.Softmax(),
],
)
model.save('model.h5')
loaded_model = keras.saving.load_model('model.h5')
</code></pre>
<p>And yet it also displays two warnings, which become annoying when running the model many times:</p>
<p><strong>WARNING:absl:You are saving your model as an HDF5 file via <code>model.save()</code> or <code>keras.saving.save_model(model)</code>. This file format is considered legacy. We recommend using instead the native Keras format, e.g. <code>model.save('my_model.keras')</code> or <code>keras.saving.save_model(model, 'my_model.keras')</code>.</strong></p>
<p><strong>WARNING:absl:No training configuration found in the save file, so the model was <em>not</em> compiled. Compile it manually.</strong></p>
<p>The first corresponds to the <code>.save</code> method and the second to the <code>.load_model</code> method.</p>
<p>How can these warnings be suppressed so that they do not appear on every run?</p>
<hr />
<p>For reference, I have tried adding the following without success.</p>
<pre><code>import os
os.environ['TF_CPP_MIN_LOG_LEVEL'] = '3'
import tensorflow as tf
tf.get_logger().setLevel('ERROR')
import warnings
warnings.filterwarnings("ignore")
import tensorflow as tf
tf.keras.config.disable_interactive_logging()
</code></pre>
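<p>One more knob that might be relevant (an assumption on my part: the <code>WARNING:absl:</code> prefix suggests the messages pass through the standard-library logger named <code>absl</code>, so raising its threshold might hide them):</p>

```python
import logging

# Assumption: the "WARNING:absl:" prefix means the messages are routed
# through the stdlib logger named "absl"; setting its level above WARNING
# should then suppress them.
logging.getLogger("absl").setLevel(logging.ERROR)
```
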
|
<python><tensorflow><keras><deep-learning><save>
|
2024-04-03 18:42:47
| 0
| 427
|
SapereAude
|
78,269,721
| 395,857
|
How can I automatically cancel slow downloads and automatically rerun them with the Internet Archive Python library?
|
<p>I use the Python library <a href="https://pypi.org/project/internetarchive/" rel="nofollow noreferrer">internetarchive</a> to download files from archive.org, e.g. to download <a href="https://archive.org/details/meetingbank-alameda" rel="nofollow noreferrer">https://archive.org/details/meetingbank-alameda</a> I run:</p>
<pre><code>pip install internetarchive
ia download meetingbank-alameda
</code></pre>
<p>I see that a few file downloads have low download speeds (see red arrows) compared to others. If I kill and rerun <code>ia download meetingbank-alameda</code>, the same files typically download quickly: it's just that once in a while a transfer is slow, and restarting it typically makes it fast. How can I automatically cancel slow downloads and automatically rerun them?</p>
<p><a href="https://i.sstatic.net/SQ2J1.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/SQ2J1.jpg" alt="enter image description here" /></a></p>
|
<python><download>
|
2024-04-03 18:16:44
| 0
| 84,585
|
Franck Dernoncourt
|
78,269,514
| 6,599,648
|
Snowflake Pandas Error: AttributeError: 'Engine' object has no attribute 'cursor'
|
<p>I'm trying to read data from a Snowflake table using Python (pandas). When I execute the following code:</p>
<pre class="lang-py prettyprint-override"><code>from snowflake.sqlalchemy import URL
from sqlalchemy import create_engine
import pandas as pd
url = URL(
user="MY_SERVICE_ACCOUNT",
password="MY_SECRET",
account="my_account",
warehouse="MY_WAREHOUSE",
database="MY_DATABASE",
schema="MY_SCHEMA",
)
engine = create_engine(url)
connection = engine.connect()
df = pd.read_sql_query('select * from my_table', connection)
</code></pre>
<p>I get an error:
<code>AttributeError: 'Engine' object has no attribute 'cursor'</code></p>
<p>I set up my virtual environment with:</p>
<pre class="lang-bash prettyprint-override"><code>$ pip install --upgrade snowflake-sqlalchemy
$ pip install pandas
</code></pre>
|
<python><pandas><snowflake-cloud-data-platform>
|
2024-04-03 17:40:06
| 1
| 613
|
Muriel
|
78,269,422
| 4,399,016
|
Python Logical Operations as conditions in Pandas
|
<p>I have a dataframe with columns:</p>
<pre><code>import pandas as pd
import numpy as np
df = pd.DataFrame({
'A': [False, True, False, False, False, False, True, True, False, True],
'B': [True, False, False, False, True, True, False, False, False, False ]
})
df
A B
0 False True
1 True False
2 False False
3 False False
4 False True
5 False True
6 True False
7 True False
8 False False
9 True False
</code></pre>
<p>How can I identify and mark the first occurrence of a <code>[True - False]</code> pair after encountering a <code>[False - False]</code> pair? Every row that satisfies this condition needs to be flagged in a new column.</p>
<p>In the example above, <code>[3 False False]</code> is followed by <code>[6 True False]</code> and also, <code>[8 False False]</code> is followed by <code>[9 True False]</code>.</p>
<p>These are the only valid solutions in this example.</p>
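<p>To state the intended semantics programmatically, here is a vectorized sketch of my current understanding (assumption: each <code>[False, False]</code> row opens a new segment, and only the first <code>[True, False]</code> row inside that segment is flagged):</p>

```python
import pandas as pd

df = pd.DataFrame({
    'A': [False, True, False, False, False, False, True, True, False, True],
    'B': [True, False, False, False, True, True, False, False, False, False],
})

ff = ~df['A'] & ~df['B']          # [False, False] rows
tf = df['A'] & ~df['B']           # [True, False] rows
g = ff.cumsum()                   # segment id: bumps at every [False, False] row
cand = tf & (g > 0)               # [True, False] rows after some [False, False]
# keep only the first candidate within each segment
df['flag'] = cand & (cand.astype(int).groupby(g).cumsum() == 1)
```

<p>On the example data this flags rows 6 and 9, matching the expected output.</p>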
|
<python><pandas><logical-operators>
|
2024-04-03 17:23:18
| 1
| 680
|
prashanth manohar
|
78,269,415
| 2,776,012
|
Jinja not displaying the values in HTML
|
<p>I am very new to Python and Jinja and am facing an issue accessing the values of the list in HTML: the page shows blank. Given below is the code snippet; any help would be highly appreciated. <strong>emplList</strong> retrieves the values in the form
<strong>[0,"John"] [1,"Sunny"] [2,"Pal"]</strong></p>
<pre><code>app = Flask(__name__)
emplList = pd.read_sql_query("SELECT DISTINCT employee_name FROM employeeTBL", conn)
@app.route('/')
def home():
return render_template('app.html', employees=emplList)
if __name__=='__main__':
app.run(debug=True)
<select id="multipleSelect" multiple name="native-select" placeholder="Native Select" data-search="true" data-silent-initial-value-set="true">
{% for row in employees%}
<option value={{row.employee_name}}>{{row.employee_name}}</option>
{% endfor %}
</select>
</code></pre>
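<p>For what it's worth, a minimal pandas-only illustration of what I suspect is happening (assumption: iterating a DataFrame yields column names, not rows, which would explain the blank output):</p>

```python
import pandas as pd

emplList = pd.DataFrame({"employee_name": ["John", "Sunny", "Pal"]})

# Iterating a DataFrame yields its column names, not its rows --
# so {{ row.employee_name }} would be undefined and render blank:
assert list(emplList) == ["employee_name"]

# Converting to a list of dicts gives the template real rows:
employees = emplList.to_dict(orient="records")
assert employees[0]["employee_name"] == "John"
```

<p>If that diagnosis is right, passing <code>employees=emplList.to_dict(orient="records")</code> to <code>render_template</code> should let <code>{{ row.employee_name }}</code> resolve, since Jinja falls back to item lookup on dicts.</p>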
|
<python><html><list><jinja2>
|
2024-04-03 17:21:09
| 2
| 837
|
Shivayan Mukherjee
|
78,269,403
| 10,516,773
|
How do I access iCloud mail programmatically using Python?
|
<p>I am working on a Python application where I need to access and retrieve emails from iCloud mail. From some searching there doesn't seem to be any official API provided by iCloud. Is there some other way to do this programmatically?</p>
<p>One possible task for the application could be to list all messages in a mailbox.</p>
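<p>To illustrate the kind of thing I'm after, here is a sketch using plain IMAP (assumptions, not verified: iCloud Mail exposes standard IMAP at <code>imap.mail.me.com:993</code> and requires an app-specific password generated at appleid.apple.com rather than the normal account password):</p>

```python
import email
import imaplib


def list_subjects(user: str, app_password: str, mailbox: str = "INBOX"):
    """Return the Subject of every message in the given iCloud mailbox.

    Assumption: imap.mail.me.com:993 is iCloud's IMAP endpoint and the
    credentials are an app-specific password, not the Apple ID password.
    """
    imap = imaplib.IMAP4_SSL("imap.mail.me.com", 993)
    try:
        imap.login(user, app_password)
        imap.select(mailbox, readonly=True)
        _, data = imap.search(None, "ALL")
        subjects = []
        for num in data[0].split():
            _, msg_data = imap.fetch(num, "(RFC822.HEADER)")
            headers = email.message_from_bytes(msg_data[0][1])
            subjects.append(headers.get("Subject", ""))
        return subjects
    finally:
        imap.logout()
```
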
|
<python><email><icloud>
|
2024-04-03 17:18:48
| 1
| 1,120
|
pl-jay
|
78,269,252
| 1,137,483
|
Concatenating pandas dataframe with multi index in different order
|
<p>I have two data frames, which should be concatenated.
Both are multi index data frames with identical indexes, but in a different order.</p>
<p>So, the index of the first data frame (df) looks like:</p>
<pre><code>MultiIndex([(11, 1, 1),
(11, 1, 2),
(11, 1, 3),
...
(11, 24, 5),
(11, 24, 6),
(11, 24, 7)],
names=['id_a', 'id_b', 'id_c'], length=168)
</code></pre>
<p>The second looks like:</p>
<pre><code>MultiIndex([(11, 1, 1),
(11, 2, 1),
(11, 3, 1),
(11, 3, 2),
...
(11, 5, 23),
(11, 6, 23),
(11, 7, 23),
(11, 7, 24)],
names=['id_a', 'id_c', 'id_b'], length=168)
</code></pre>
<p>As you can see, indexes are in a different order.
Now by running <code>pd.concat([df, df2]).index.names</code> I get the following results:</p>
<pre><code>FrozenList(['id_a', None, None])
</code></pre>
<h2>How to reproduce</h2>
<pre><code>import pandas as pd
# create first data frame
idx = pd.MultiIndex.from_product(
[['A1', 'A2', 'A3'], ['B1', 'B2', 'B3'], ['C1', 'C2', 'C3']],
names=['a', 'b', 'c'])
cols = ['2010', '2020']
df = pd.DataFrame(1, idx, cols)
# Create second data frame with varying order
idx = pd.MultiIndex.from_product(
[['A1', 'A2', 'A3'], ['C1', 'C2', 'C3'], ['B1', 'B2', 'B3']],
names=['a', 'c', 'b'])
df2 = pd.DataFrame(2, idx, cols)
result = pd.concat([df, df2])
</code></pre>
<h3>Output</h3>
<pre><code>> df
2010 2020
a b c
A1 B1 C1 1 1
C2 1 1
C3 1 1
B2 C1 1 1
C2 1 1
C3 1 1
B3 C1 1 1
C2 1 1
C3 1 1
A2 B1 C1 1 1
C2 1 1
C3 1 1
B2 C1 1 1
C2 1 1
C3 1 1
B3 C1 1 1
C2 1 1
C3 1 1
A3 B1 C1 1 1
C2 1 1
C3 1 1
B2 C1 1 1
C2 1 1
C3 1 1
B3 C1 1 1
C2 1 1
C3 1 1
> df2
2010 2020
a c b
A1 C1 B1 2 2
B2 2 2
B3 2 2
C2 B1 2 2
B2 2 2
B3 2 2
C3 B1 2 2
B2 2 2
B3 2 2
A2 C1 B1 2 2
B2 2 2
B3 2 2
C2 B1 2 2
B2 2 2
B3 2 2
C3 B1 2 2
B2 2 2
B3 2 2
A3 C1 B1 2 2
B2 2 2
B3 2 2
C2 B1 2 2
B2 2 2
B3 2 2
C3 B1 2 2
B2 2 2
B3 2 2
> result
2010 2020
a
A1 B1 C1 1 1
C2 1 1
C3 1 1
B2 C1 1 1
C2 1 1
C3 1 1
B3 C1 1 1
C2 1 1
C3 1 1
A2 B1 C1 1 1
C2 1 1
C3 1 1
B2 C1 1 1
C2 1 1
C3 1 1
B3 C1 1 1
C2 1 1
C3 1 1
A3 B1 C1 1 1
C2 1 1
C3 1 1
B2 C1 1 1
C2 1 1
C3 1 1
B3 C1 1 1
C2 1 1
C3 1 1
A1 C1 B1 2 2
B2 2 2
B3 2 2
C2 B1 2 2
B2 2 2
B3 2 2
C3 B1 2 2
B2 2 2
B3 2 2
A2 C1 B1 2 2
B2 2 2
B3 2 2
C2 B1 2 2
B2 2 2
B3 2 2
C3 B1 2 2
B2 2 2
B3 2 2
A3 C1 B1 2 2
B2 2 2
B3 2 2
C2 B1 2 2
B2 2 2
B3 2 2
C3 B1 2 2
B2 2 2
B3 2 2
> result.index.names
FrozenList(['a', None, None])
</code></pre>
<p>Indexes 'b' and 'c' are gone.</p>
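<p>For reference, reordering the levels of the second frame before concatenating seems like it should preserve the names; a sketch of what I would expect to work:</p>

```python
import pandas as pd

idx = pd.MultiIndex.from_product(
    [['A1', 'A2'], ['B1', 'B2'], ['C1', 'C2']], names=['a', 'b', 'c'])
df = pd.DataFrame(1, idx, ['2010', '2020'])

idx2 = pd.MultiIndex.from_product(
    [['A1', 'A2'], ['C1', 'C2'], ['B1', 'B2']], names=['a', 'c', 'b'])
df2 = pd.DataFrame(2, idx2, ['2010', '2020'])

# Align the level order of df2 with df before concatenating:
result = pd.concat([df, df2.reorder_levels(df.index.names)])
```
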
|
<python><pandas>
|
2024-04-03 16:51:34
| 1
| 1,458
|
Tobias Lorenz
|
78,269,202
| 818,754
|
matplotlib's pyplot.show works from Python's Docker image but doesn't work with a custom Docker image built from the same Python image
|
<h3>Context</h3>
<p>I'm looking to set up a development environment for a Python project that uses <a href="https://pypi.org/project/matplotlib/" rel="nofollow noreferrer">matplotlib</a>. Things work fine and I am able to see the plot when I run the default Docker image from Dockerhub as:</p>
<p><code>docker run -v ./:/work -w /work python:3.11 pip install -r requirements.txt && python3 main.py</code></p>
<h4>main.py</h4>
<pre class="lang-py prettyprint-override"><code># importing the required module
import matplotlib.pyplot as plt
# x axis values
x = [1,2,3]
# corresponding y axis values
y = [2,4,1]
# plotting the points
plt.plot(x, y)
# naming the x axis
plt.xlabel('x - axis')
# naming the y axis
plt.ylabel('y - axis')
# giving a title to my graph
plt.title('My first graph!')
# function to show the plot
plt.show()
</code></pre>
<p>However, this would need downloading and installing packages with every run. To address this, I created a <code>Dockerfile</code> that does a <code>pip install</code> when building a custom image so that my code can be executed without a <code>pip install</code> every time. I can rebuild the custom image anytime my <code>requirements.txt</code> changes. My Dockerfile looks like:</p>
<pre><code>FROM python:3.11
WORKDIR /work
COPY . .
RUN pip install -r requirements.txt
CMD ["python3", "main.py"]
</code></pre>
<p>I build the image as:</p>
<p><code>docker build -t py-polygon:0.01 .</code></p>
<p>and run it as:</p>
<p><code>docker run py-polygon:0.01</code></p>
<p>In this case, a plot isn't shown on my display.</p>
<p>I'm running this on an Arch Linux box, in case it's relevant.</p>
<h3>Question</h3>
<p>Why does the plot appear when using the Docker image from Docker Hub, but not when using a custom image built from that same image? Am I missing something?</p>
|
<python><python-3.x><docker><matplotlib>
|
2024-04-03 16:43:11
| 0
| 1,868
|
rhetonik
|
78,269,134
| 16,717,009
|
Logging with adapter losing __name__ information
|
<p>When I use the logger in the standard way, I am able to capture the module name and it appears correctly in my logs. But if I implement an adapter (again, in the standard way), it seems like the module name keeps getting overwritten at initialization, and my logs only ever show the same, last-initialized module.</p>
<p>To be specific, in a <code>logger.py</code> module that sets up my logging, I also have:</p>
<pre><code>def get_logger(module_name):
return logging.getLogger(APP_LOGGER_NAME).getChild(module_name)
</code></pre>
<p>and in each module I do:</p>
<pre><code> import logger
LOGGER = logger.get_logger(__name__)
</code></pre>
<p>That works fine - I get the correct module name wherever a log message is generated.</p>
<p>But if I use an adapter in my <code>logger.py</code> module like so:</p>
<pre><code>class AppLogger(logging.LoggerAdapter):
""" Adapter to add log entry header """
_instance = None
def __init__(self, logger, hdr):
super(AppLogger, self).__init__(logger, extra={'hdr': hdr})
self.hdr = hdr
def __new__(cls, logger, hdr):
""" Use singleton pattern so we don't lose the saved hdr from module to module """
if cls._instance is None:
cls._instance = super(AppLogger, cls).__new__(cls)
cls._instance.hdr = hdr
return cls._instance
def set_header(self, hdr):
self.hdr = hdr
def process(self, msg, kwargs):
if 'extra' not in kwargs:
return msg, {'extra': {'hdr': self.hdr}}
else:
return msg, kwargs
</code></pre>
<p>and change my <code>get_logger</code> function like so:</p>
<pre><code>def get_logger(module_name):
return AppLogger(logging.getLogger(APP_LOGGER_NAME).getChild(module_name),hdr='')
</code></pre>
<p>I no longer get different module names in my log entries. They're all the same, apparently the last module initialized (but I'm not sure on that).</p>
<p>I've verified with <code>print</code> statements that the <code>get_logger</code> function is receiving the various module names correctly, by doing the following:</p>
<pre><code>def get_logger(module_name):
#return logging.getLogger(APP_LOGGER_NAME).getChild(module_name)
print(f'{module_name=}')
foo= logging.getLogger(APP_LOGGER_NAME).getChild(module_name)
print(f'{foo.name=}')
bar= AppLogger(foo,hdr='')
print(f'{bar.name=}')
return bar
</code></pre>
<p>I can even see that the correct name is available in bar as each module is loaded.</p>
<p>Any ideas, anyone?</p>
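<p>In case it helps, a minimal sketch (no logging involved) of what I suspect the singleton pattern is doing: <code>__init__</code> still runs on the shared instance for every construction, so each newly initialized module overwrites the previously stored state (including the wrapped logger):</p>

```python
class Singleton:
    _instance = None

    def __new__(cls, name):
        if cls._instance is None:
            cls._instance = super().__new__(cls)
        return cls._instance

    def __init__(self, name):
        # __init__ runs on EVERY construction, even when __new__ returned
        # the existing instance -- so later calls overwrite earlier state.
        self.name = name


a = Singleton("first")
b = Singleton("second")
assert a is b
assert a.name == "second"  # "first" has been overwritten
```
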
|
<python><python-logging>
|
2024-04-03 16:30:25
| 1
| 343
|
MikeP
|
78,269,103
| 2,883,209
|
In Pydantic, how do I set env_ignore_empty to True
|
<p>I'm currently trying to use Pydantic to check whether received JSON is in the right format. It's one of those cases where we have 30 or 40 fields coming in the JSON, and they have decided that if an amount (for example) is empty, they will provide an empty string. Of course, when I check for a float, even if it is Optional, the system finds a string.</p>
<p>So if I have</p>
<pre><code>from typing import Optional
from pydantic import BaseModel
class Balance(BaseModel):
outstanding: float
paidtodate: Optional[float] = None
data1 = Balance(**{"outstanding": 1000.00})
data2 = Balance(**{"outstanding": 1000.00, "paidtodate": 500.00})
data3 = Balance(**{"outstanding": 1000.00, "paidtodate": ""})
</code></pre>
<p>The problem is that data3 fails because (rightly so) paidtodate is a string, but I want to be able to say: if it's an empty string, just accept it as None.</p>
<p>There seems to be a configuration option called env_ignore_empty, but I can't for the life of me figure out how to set it.</p>
<p>Of course, if there is some other way, would love that as well.</p>
<p>Using pydantic_core-2.16.3-cp39-cp39-macosx_11_0_arm64.whl</p>
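<p>For completeness, the closest workaround I can think of (an assumption on my part, not necessarily the <code>env_ignore_empty</code> mechanism) is a <code>field_validator</code> in <code>before</code> mode that coerces empty strings to <code>None</code>:</p>

```python
from typing import Optional

from pydantic import BaseModel, field_validator


class Balance(BaseModel):
    outstanding: float
    paidtodate: Optional[float] = None

    # Hypothetical workaround: coerce "" to None before type validation runs.
    @field_validator("paidtodate", mode="before")
    @classmethod
    def empty_str_to_none(cls, v):
        return None if v == "" else v


data3 = Balance(**{"outstanding": 1000.00, "paidtodate": ""})
```
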
|
<python><pydantic>
|
2024-04-03 16:24:31
| 2
| 1,244
|
vrghost
|
78,269,030
| 12,352,239
|
Create array of structs pyspark with named fields
|
<p>I would like to pass a list of strings (column names) into a transform function that results in a new column containing an array of structs with two fields, "key" and "value", where key is the column name and value is the corresponding column value.</p>
<pre><code>data = [
Row(name='Alice', age=30, city='New York'),
Row(name='Bob', age=25, city='Los Angeles'),
Row(name='Charlie', age=35, city='Chicago')
]
rdd = spark.sparkContext.parallelize(data)
</code></pre>
<p>I currently have:</p>
<pre><code>df_with_column_of_structs = df.withColumn('array_of_structs',
expr(f"TRANSFORM(array(name, age), col -> named_struct('key', 'col', 'val', col))")
)
df_with_column_of_structs.printSchema()
df_with_column_of_structs.show(truncate=False)
// output
|-- name: string (nullable = true)
|-- age: long (nullable = true)
|-- city: string (nullable = true)
|-- array_of_structs: array (nullable = false)
| |-- element: struct (containsNull = false)
| | |-- key: string (nullable = false)
| | |-- val: string (nullable = true)
+-------+---+-----------+---------------------------+
|name |age|city |array_of_structs |
+-------+---+-----------+---------------------------+
|Alice |30 |New York |[{col, Alice}, {col, 30}] |
|Bob |25 |Los Angeles|[{col, Bob}, {col, 25}] |
|Charlie|35 |Chicago |[{col, Charlie}, {col, 35}]|
+-------+---+-----------+---------------------------+
</code></pre>
<p>my desired output is</p>
<pre><code>|name |age|city |array_of_structs |
+-------+---+-----------+---------------------------+
|Alice |30 |New York |[{name, Alice}, {age, 30}] |
|Bob |25 |Los Angeles|[{name, Bob}, {age, 25}] |
|Charlie|35 |Chicago |[{name, Charlie}, {age, 35}]|
+-------+---+-----------+---------------------------+
</code></pre>
<p>However, I am unable to dereference the column name to accomplish this. I have tried using both <code>$</code> and <code>{}</code> to dereference it but have gotten syntax errors for both.</p>
<p>i.e.</p>
<pre><code>df_with_column_of_structs = df.withColumn('array_of_structs',
expr(f"TRANSFORM(array(name, age), col -> named_struct('key', {col}, 'val', col))")
)
//output
An error was encountered:
name 'col' is not defined
Traceback (most recent call last):
NameError: name 'col' is not defined
</code></pre>
|
<python><pyspark>
|
2024-04-03 16:12:07
| 1
| 480
|
219CID
|
78,268,849
| 447,426
|
how override FastApi app.dependency_overrides with a function/ fixture that has an argument?
|
<p>I have set up tests with a fixture for my db connection (SQLAlchemy based). The main feature here is that it rolls back transactions (the tests are not changing the db):</p>
<pre><code>@pytest.fixture(scope="session")
def connection():
SQLALCHEMY_DATABASE_URL = "postgresql://{0}:{1}@{2}:{3}/{4}".format(
settings.DB_username, settings.DB_password, settings.DB_hostname, settings.DB_port, settings.DB_name)
engine = create_engine(
SQLALCHEMY_DATABASE_URL,
connect_args={"options": "-c timezone=utc"}
)
return engine.connect()
@pytest.fixture(scope="session")
def db_session(connection):
transaction = connection.begin()
yield scoped_session(
sessionmaker(autocommit=False, autoflush=False, bind=connection)
)
transaction.rollback()
</code></pre>
<p>On the other hand i have a fixture that provides a test client. I use this to have integration test that test from api to the DB:</p>
<pre><code>@pytest.fixture(scope='module')
def client() -> Generator:
with TestClient(app) as c:
yield c
</code></pre>
<p>Everything is working fine. The problem is that the app and all endpoints inject the db, not directly, but ultimately via <code>db: Session = Depends(get_db)</code>. Thus the client is using the "real" connection that does not roll back, and the DB changes are persisted.</p>
<p>Now i tried to override <code>get_db</code> with:</p>
<pre><code>@pytest.fixture(scope='module')
def client() -> Generator:
app.dependency_overrides[get_db] = db_session
with TestClient(app) as c:
yield c
</code></pre>
<p>The problem is that this yields a 422, which is caused by the argument that <code>db_session</code> needs in my setup. As soon as I remove the override line, the test works fine again.</p>
<p>So how do I override <code>get_db</code> with my <code>db_session(connection)</code>, or in general, how do I get a TestClient that uses the same db_session that the other tests rely on directly?</p>
|
<python><pytest><fastapi>
|
2024-04-03 15:42:36
| 1
| 13,125
|
dermoritz
|
78,268,799
| 1,563,422
|
How to replace the body contents (but not the body tag) in BeautifulSoup?
|
<p>If we have the following document:</p>
<pre class="lang-html prettyprint-override"><code><html>
<body class="some classes here" id="test">
<div id="something">This text and the div it is inside are the content</div>
<div id="another">So is this</div>
</body>
</html>
</code></pre>
<p>How can we replace <em>only</em> the contents of <code><body></code> without altering the attributes it has?</p>
<p>This code errors:</p>
<pre><code>soup.body.contents.replace_with(BeautifulSoup(f'<div class="body-content">{soup.body.contents}</div>', "html.parser"))
</code></pre>
<p>With this error message:</p>
<pre class="lang-text prettyprint-override"><code>AttributeError: 'list' object has no attribute 'replace_with'
</code></pre>
<p>The following code works:</p>
<pre><code>soup.body.replace_with(BeautifulSoup(f'<div class="body-content">{soup.body.contents}</div>', "html.parser"))
</code></pre>
<p>However it removes the <code><body></code> tag itself, leaving only the content in its place!</p>
<p>I have also tried:</p>
<pre class="lang-py prettyprint-override"><code>soup.body.string = f'<div class="body-content">{soup.body.string}</div>'
</code></pre>
<p>But this results in the <code><</code> being changed to <code>&lt;</code> etc, and the inner content shows <code>None</code>.</p>
<p>I have also tried:</p>
<pre class="lang-py prettyprint-override"><code>soup.body.string.replace_with(BeautifulSoup(f'<div class="body-content">{soup.body.contents}</div>', "html.parser"))
</code></pre>
<p>But that errors with:</p>
<pre class="lang-text prettyprint-override"><code>AttributeError: 'NoneType' object has no attribute 'replace_with'
</code></pre>
<p>Finally I have tried:</p>
<pre><code>soup.body.string = BeautifulSoup(f'<div class="body-content">{soup.body.contents}</div>', "html.parser")
</code></pre>
<p>But that errors with:</p>
<pre class="lang-text prettyprint-override"><code>TypeError: 'NoneType' object is not callable
</code></pre>
<p>I have managed to get something working which keeps the <code><body></code> tag and its attributes - using the following code:</p>
<pre><code>attributes = ""
for attr, value in soup.body.attrs.items():
if type(value) is list:
value = " ".join(value)
attributes += f'{attr}="{value}" '
soup.body.replace_with(BeautifulSoup(f'<body {attributes}><div class="body-content">{soup.body.contents}</div></body>',"html.parser"))
</code></pre>
<p>But it feels a bit nasty to do it this way, and I'm sure there are probably edge-cases to code like that.</p>
<p>How can we properly set the contents of the <code><body></code> tag?</p>
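<p>For comparison, the least nasty alternative I have come up with so far (a sketch: it builds a wrapper tag and moves each existing child into it, leaving <code>&lt;body&gt;</code> and its attributes untouched):</p>

```python
from bs4 import BeautifulSoup

html = '''<html>
<body class="some classes here" id="test">
<div id="something">This text and the div it is inside are the content</div>
<div id="another">So is this</div>
</body>
</html>'''

soup = BeautifulSoup(html, "html.parser")

# Build the wrapper, then move every existing child of <body> into it.
wrapper = soup.new_tag("div", **{"class": "body-content"})
for child in list(soup.body.contents):  # list() so iteration survives the moves
    wrapper.append(child.extract())
soup.body.append(wrapper)
```
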
|
<python><beautifulsoup>
|
2024-04-03 15:33:29
| 0
| 21,027
|
Danny Beckett
|
78,268,766
| 8,506,921
|
Fastest way to extract relevant rows - pandas
|
<p>I've got a very large pandas df that has the fields <code>group</code>, <code>id_a</code>, <code>id_b</code> and <code>score</code>, ordered by <code>score</code> so the highest is at the top. There is a row for every possible combination of <code>id_a</code> and <code>id_b</code>. I want to extract rows so that there is only one row per <code>id_a</code> and <code>id_b</code>, which reflects the highest <code>score</code> possible without repeating IDs.</p>
<p>Showing an example of what this might look like - the resulting df has 3 rows, with all ids in <code>id_a</code> and <code>id_b</code> appearing once each. In the case of A2/B2 and A1/B1, the row with the best score for both IDs has been used. In the case of A3, the best score related to a row with B1, which had already been used, so the next best score combined with B3 is used.</p>
<p><strong>Input table</strong></p>
<p><a href="https://i.sstatic.net/KRPQe.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/KRPQe.png" alt="Input table" /></a></p>
<p><strong>Desired result</strong></p>
<p><a href="https://i.sstatic.net/enD8E.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/enD8E.png" alt="Desired result" /></a></p>
<p>To achieve this, I've got a loop iterating through the original <code>df</code>. This is incredibly slow with a large dataset. I've tried coming up with alternatives but I am struggling, for e.g.:</p>
<ul>
<li>Once a row is identified for a pair of IDs, I could remove those IDs from the original df, but I'm not sure how to do this without restarting the loop</li>
<li>I could split things by group (the example only has 1 group but there would be lots more, with IDs unique across groups) - however this doesn't seem to save time</li>
</ul>
<p>Can anybody offer any other approaches? Thank you!</p>
<pre><code>import pandas as pd
# Create sample df
group = [1, 1, 1, 1, 1, 1, 1, 1, 1]
id_a = ['A2', 'A1', 'A3', 'A3', 'A2', 'A1', 'A1', 'A2', 'A3']
id_b = ['B2', 'B1', 'B1', 'B3', 'B1', 'B2', 'B3', 'B3', 'B2']
score = [0.99, 0.98, 0.97, 0.96, 0.93, 0.5, 0.41, 0.4, 0.2]
df = pd.DataFrame({'group': group, 'id_a': id_a, 'id_b': id_b, 'score': score})
result = pd.DataFrame(columns=df.columns)
# Extract required rows
for i, row in df.iterrows():
if len(result) == 0:
result = row.to_frame().T
else:
if ((row['id_a'] in result['id_a'].tolist())
or (row['id_b'] in result['id_b'].tolist())):
continue
else:
result = pd.concat([result, row.to_frame().T[result.columns]])
</code></pre>
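<p>Since the frame is already sorted by <code>score</code>, a single greedy pass using plain Python sets (instead of the repeated <code>tolist()</code> membership scans above) gives the same result in roughly O(n). This is only a sketch of that idea on the sample data:</p>

```python
import pandas as pd

# same sample data as above
df = pd.DataFrame({
    'group': [1] * 9,
    'id_a': ['A2', 'A1', 'A3', 'A3', 'A2', 'A1', 'A1', 'A2', 'A3'],
    'id_b': ['B2', 'B1', 'B1', 'B3', 'B1', 'B2', 'B3', 'B3', 'B2'],
    'score': [0.99, 0.98, 0.97, 0.96, 0.93, 0.5, 0.41, 0.4, 0.2],
})

# one greedy pass over the score-sorted rows; set membership is O(1),
# unlike the per-row DataFrame scans in the loop above
used_a, used_b, keep = set(), set(), []
for row in df.itertuples():
    if row.id_a not in used_a and row.id_b not in used_b:
        used_a.add(row.id_a)
        used_b.add(row.id_b)
        keep.append(row.Index)

result = df.loc[keep]
```

<p>This keeps the original row order and avoids rebuilding the result frame with <code>pd.concat</code> on every match.</p>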
|
<python><pandas>
|
2024-04-03 15:27:11
| 2
| 1,874
|
Jaccar
|
78,268,740
| 1,126,944
|
Import ... From ... Confusion
|
<p>I read the Python3 references for the Import statement <a href="https://docs.python.org/3/reference/simple_stmts.html#import" rel="nofollow noreferrer">here</a>, in which it said:</p>
<blockquote>
<p>The from form uses a slightly more complex process:</p>
<ol>
<li><p>find the module specified in the from clause, loading and initializing it if necessary;</p></li>
<li><p>for each of the identifiers specified in the import clauses:</p>
<ol>
<li><p>check if the imported module has an attribute by that name</p></li>
<li><p><strong>if not, attempt to import a submodule with that name and then check the imported module again for that attribute</strong></p></li>
<li><p>if the attribute is not found, ImportError is raised.</p></li>
<li><p>otherwise, a reference to that value is stored in the local namespace, using the name in the as clause if it is present, otherwise using the attribute name</p></li>
</ol>
</li>
</ol>
</blockquote>
<p>For the quoted block above, I cannot figure out what it really says about the form
"from A import B". My understanding is: in that case, if B is not an attribute of A, the process will search for a submodule named "B"; if that is found, the process then searches for an attribute named "B" in the B submodule (from <em>"then check the imported module again for that attribute"</em>). Is my understanding correct or not? I need your help.</p>
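<p>The submodule fallback can be observed with a standard-library package: <code>xml/__init__.py</code> does not import its subpackages, so <code>from xml import etree</code> only succeeds via the "attempt to import a submodule" step quoted above. A small sketch:</p>

```python
import types
import xml

# 'etree' is not an attribute set by xml/__init__.py; the from-import
# falls back to importing the submodule, then re-checks the attribute
from xml import etree

print(isinstance(etree, types.ModuleType))  # True: we got the submodule
print(hasattr(xml, 'etree'))                # True after the submodule import
```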
|
<python><python-3.x>
|
2024-04-03 15:24:00
| 2
| 1,330
|
IcyBrk
|
78,268,547
| 5,330,527
|
Autocomplete dropdown for related-field filtering in Django admin
|
<p>I'd like to have a nice autocomplete dropdown in my Django admin backend for filtering out results in the <code>project</code> model according to the <code>subject</code> attached to a <code>person</code>. this is my <code>models.py</code>:</p>
<pre><code>class Subject(models.Model):
name = models.CharField(max_length=200)
class Person(models.Model):
subject = models.ForeignKey(Subject, on_delete=models.CASCADE, blank=True, null=True)
class PersonRole(models.Model):
project = models.ForeignKey('Project', on_delete=models.CASCADE)
person = models.ForeignKey(Person, on_delete=models.CASCADE)
class Project(models.Model):
person = models.ManyToManyField(Person, through=PersonRole)
</code></pre>
<p>For now I managed to get a simple list using <code>SimpleListFilter</code> with the following code in <code>admin.py</code>:</p>
<pre><code>class PISubjectFilter(admin.SimpleListFilter):
title = 'PI Subject'
parameter_name = 'subject'
def lookups(self, request, model_admin):
return Subject.objects.values_list("id", "name").order_by("id")
def queryset(self, request, queryset):
count = range(1, 100)
for x in count:
if self.value() == str(x):
return queryset.filter(
person__personrole__person__subject=x
).distinct()
break
</code></pre>
<p>How can I turn this long list into a autocomplete dropdown?</p>
<p>I've <em>almost</em> achieved what I want by specifying a custom template for my form in <code>admin.py</code> with <code>template = "custom_form.html"</code>, with something like the below:</p>
<pre><code>{% load i18n %}
{% load static %}
<script src="https://code.jquery.com/jquery-3.7.1.min.js" integrity="sha256-/JqT3SQfawRcv/BIHPThkBvs0OEvtFFmqPF/lYI/Cxo=" crossorigin="anonymous"></script>
<link href="https://cdn.jsdelivr.net/npm/select2@4.1.0-rc.0/dist/css/select2.min.css" rel="stylesheet" />
<script src="https://cdn.jsdelivr.net/npm/select2@4.1.0-rc.0/dist/js/select2.min.js"></script>
<details data-filter-title="{{ title }}" open>
<summary>
{% blocktranslate with filter_title=title %} By {{ filter_title }} {% endblocktranslate %}
</summary>
<select id='select_subject' onchange="location = this.value;">
{% for choice in choices %}
<option value="{{ choice.query_string|iriencode }}">{{choice.display}}</option>
{% endfor %}
</select>
</details>
<script>
$(document).ready(function() {
    $('#select_subject').select2();
});
</script>
</code></pre>
<p>This creates indeed the autocomplete dropdown, although at each page reload the <code>choice.selected</code> remains the same, and therefore the selected item is not updated. And anyway, I'm sure there is a less clunky way of doing this.</p>
<p>PS: no, something like</p>
<pre><code>class SubjectFilter(AutocompleteFilter):
title = 'Subject'
field_name = 'person__personrole__person__subject'
</code></pre>
<p>doesn't work (<code>Project has no field named 'person__personrole__person__subject'</code>)</p>
|
<python><django>
|
2024-04-03 14:54:08
| 1
| 786
|
HBMCS
|
78,268,161
| 182,737
|
Indicate to mypy that an attribute cannot be none in a subclass
|
<p>Given the following code</p>
<pre><code>@dataclass
class MaybeTextContainer:
text: list[str] | None
class CertainlyText(MaybeTextContainer):
def __init__(self) -> None:
super().__init__(text=[])
self.say_hi()
def say_hi(self) -> None:
self.text.append("Hi")
</code></pre>
<p>mypy will complain on the last line that <code>error: Item "None" of "list[str] | None" has no attribute "append"</code>. Of course I could add an <code>assert self.text</code> in the <code>say_hi</code> function, but that's ugly if you modify <code>self.text</code> in a lot of functions.</p>
<p>Is there a general way to indicate that <code>self.text</code> cannot be <code>None</code> anymore?</p>
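<p>One pattern that type checkers commonly accept is re-declaring the attribute with a narrower annotation in the subclass. This is only a sketch of that idea, not necessarily the only option:</p>

```python
from __future__ import annotations

from dataclasses import dataclass


@dataclass
class MaybeTextContainer:
    text: list[str] | None


class CertainlyText(MaybeTextContainer):
    # narrowing re-declaration: tells the checker text is never None here
    text: list[str]

    def __init__(self) -> None:
        super().__init__(text=[])
        self.say_hi()

    def say_hi(self) -> None:
        self.text.append("Hi")
```

<p>The annotation on the subclass changes nothing at runtime; it only narrows the declared type that mypy sees for <code>self.text</code> inside <code>CertainlyText</code>.</p>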
|
<python><mypy><python-dataclasses>
|
2024-04-03 13:53:00
| 1
| 1,229
|
Frank Meulenaar
|
78,268,053
| 9,381,746
|
How to obtain UTF-8 binary representation of an emoji character?
|
<p>I am trying to print the bits of a UTF-8 character, in this case an emoji.
According to the following video: <a href="https://www.youtube.com/watch?v=ut74oHojxqo" rel="nofollow noreferrer">https://www.youtube.com/watch?v=ut74oHojxqo</a>,
the following code <code>script.py</code> (run both as a script edited with vim and from the Python command line in the terminal)</p>
<pre><code>s = '👍'
print(len(s))
print(s[0])
</code></pre>
<p>should give the following output:</p>
<pre><code>$ python script.py
4
'\xf0'
</code></pre>
<p>However it gives me</p>
<pre><code>$ python script.py
1
👍
</code></pre>
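<p>The video appears to be using Python 2, where <code>str</code> is a byte string. In Python 3 a <code>str</code> is a sequence of code points, so the thumbs-up is a single character; to see its four UTF-8 bytes you have to encode it first. A quick sketch:</p>

```python
s = '👍'
b = s.encode('utf-8')

print(len(s))               # 1 code point in Python 3
print(len(b))               # 4 UTF-8 bytes
print(hex(b[0]))            # 0xf0, the leading byte
print(format(b[0], '08b'))  # 11110000, the actual bits of that byte
```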
|
<python><string><utf-8><byte><bit>
|
2024-04-03 13:33:32
| 1
| 5,557
|
ecjb
|
78,267,967
| 9,758,352
|
Ways to intercept error responses in Django and DRF
|
<p>The problem:</p>
<ul>
<li>Django errors are values to <code>details</code> keys.</li>
<li>Django Rest Framework (DRF) errors are values to <code>detail</code> keys.</li>
</ul>
<p>Therefore, having an application that raises both Django and DRF exceptions is inconsistent by default. I want a way to change the response errors so that they look like this:</p>
<pre class="lang-json prettyprint-override"><code>{
"message": ...,
"code": ...
}
</code></pre>
<p><strong>Are there ways to handle both Django and DRF exceptions at the same time and respond with a custom response body?</strong></p>
<p>I have tried:</p>
<ol>
<li>custom exception handler in DRF, but this only handles DRF exceptions</li>
<li>custom error messages when defining models
<pre class="lang-py prettyprint-override"><code> email = models.EmailField(error_messages={"required": {"message": "", "code": 123}})
</code></pre>
but Django can't handle a dictionary as a message</li>
<li>Adding a DRF validator to my serializer:
<pre class="lang-py prettyprint-override"><code>email = serializers.EmailField(
validators=[
UniqueValidator(
queryset=models.User.objects.all(),
message={
"message": "a user with this email already exists",
"code": status.EMAIL_EXISTS,
},
)
],
)
</code></pre>
but this does not override the response body, but instead it embeds the message as an error message to the <code>{"details": {"message": {<embedded dictionary>}}}</code></li>
</ol>
<p><strong>The only thing I believe would work is to <code>try</code>-<code>except</code> all exceptions inside my views, but I would like to know if there is a prettier way than placing all my views inside <code>try</code> blocks.</strong></p>
|
<python><django><django-rest-framework>
|
2024-04-03 13:18:37
| 0
| 457
|
BillTheKid
|
78,267,668
| 22,371,917
|
How to make a terminal emulator in flask
|
<p>I want to make a Flask app that allows me to run cmd commands in the path D:<br />
I tried using <code>subprocess</code>, and it worked for commands that don't need to stay open, but running something that has to stay alive (for example a file starting another Flask app) would error because the process needs to keep running. I also tried the <code>os</code> module, but it spawns a whole cmd window that hides my screen. I want the site to look and act like a regular terminal.</p>
|
<python><windows><flask><cmd>
|
2024-04-03 12:32:28
| 1
| 347
|
Caiden
|
78,267,650
| 827,927
|
How to generate a random binary invertible matrix?
|
<p>I need to generate a random n-by-n matrix A with the following properties:</p>
<ol>
<li>Every element in A is either 0 or 1;</li>
<li>A is invertible;</li>
<li>In the inverse of A, the sum of values in each row is at least 0.</li>
</ol>
<p>I can generate random binary matrices using numpy:</p>
<pre><code>A = np.random.randint(0,2, (n,n))
</code></pre>
<p>In most cases, the generated matrix A is already invertible. However, the third condition is usually not satisfied - in the inverse of A, there is almost always one or more rows with a negative value.</p>
<p>I do not need perfect randomness - I only need a way to generate sufficiently many different matrices that satisfy the conditions 1-3. How can I do this?</p>
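<p>Since most random binary matrices are already invertible, rejection sampling may be good enough: keep drawing until conditions 2 and 3 both hold. A sketch, assuming condition 3 means each row of the inverse sums to at least 0:</p>

```python
import numpy as np

rng = np.random.default_rng(0)

def random_constrained_matrix(n, max_tries=100_000):
    """Draw random 0/1 matrices until one is invertible and every
    row of its inverse sums to at least 0 (rejection sampling)."""
    for _ in range(max_tries):
        A = rng.integers(0, 2, (n, n))
        if np.linalg.matrix_rank(A) < n:
            continue  # singular: reject
        inv = np.linalg.inv(A)
        if (inv.sum(axis=1) >= -1e-12).all():
            return A
    raise RuntimeError("no matrix found within max_tries")

A = random_constrained_matrix(4)
```

<p>The acceptance rate drops as n grows, so for large n this may need many tries, but it trivially produces many different valid matrices.</p>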
|
<python><matrix><linear-algebra>
|
2024-04-03 12:29:32
| 0
| 37,410
|
Erel Segal-Halevi
|
78,267,585
| 7,331,538
|
Scrapy force close at timeout limit when TCP connection freezes
|
<p>In my scraper I have a specific URL which goes down periodically. The finish stats show</p>
<pre><code> 'downloader/exception_count': 2,
'downloader/exception_type_count/twisted.internet.error.TCPTimedOutError': 2,
'elapsed_time_seconds': 150.027039,
'finish_reason': 'closespider_timeout',
</code></pre>
<p>I added <code>CLOSESPIDER_TIMEOUT=30</code> in my settings, however the above crawl is taking <code>150s</code> before terminating with the above stats. <strong>why is that?</strong></p>
<p>I also have a custom download timeout set within the scraper:</p>
<pre><code>custom_args = {
'DOWNLOAD_TIMEOUT': 12
}
</code></pre>
<p>But this is also not being respected. According to this <a href="https://stackoverflow.com/questions/46055295/scrapy-set-tcp-connect-timeout">SO question</a> It seems like an OS connection limit configuration is taking precedence over scrapy related connection limits. <strong>Is there no way of adding a hard kill limit for scrapy?</strong> Like a middleware or a signal that forces the exit of the scraper?</p>
|
<python><tcp><scrapy><web-crawler><twisted.internet>
|
2024-04-03 12:17:41
| 2
| 2,377
|
bcsta
|
78,267,511
| 4,847,250
|
How to use curve_fit of scipy with constrain where the fitted curve is always under the observations?
|
<p>I'm trying to fit a signal with an exponential decay curve.
I would like to constrain the fitted curve to be always under the signal.
How can I add such a constraint?
I tried something with a residual function with a penalization but the fit is not good</p>
<p>Here a minimal example</p>
<pre><code>import matplotlib.pyplot as plt
import numpy as np
from scipy.optimize import curve_fit,leastsq
y = np.array([0.13598974610162404,0.14204518683071268,0.12950580786633123,0.11907324299581903,0.10128368784179803,0.09801605741178761,0.08384607033484785,0.080831165652505,0.08320697432504208,0.0796448643292049,0.08036960780924939,0.07794871929139761,0.06684868128842808,0.08473240868175465,0.12911858937102086,0.2643875667237164,0.35984364939831903,0.2193622531576059,0.11434823952113388,0.07542004424929072,0.05811782617304745,0.05244297390163204,0.046658695718735835,0.04848192538027753,0.04720951580680828,0.043285109240216044,0.04182209865781944,0.039844899409411334,0.03462168053862101,0.03378305258506322,0.03533297573624328,0.03434759644082368,0.033784129758841895,0.030419029760045915,0.028085746545496386,0.02614296782807577,0.024221565132520304,0.022189741126251487,0.02093159168492871,0.02041496822457043,0.021031182865802436,0.024510234374072886,0.023307213889378165,0.0267484745286596,0.02258945483736504,0.014891232218542747,0.01151363712852099,0.010139967470707011,0.009769727537338574,0.009323591440734363,0.008852570111374145,0.008277064263333187,0.007088585763561308,0.00607584327561278,0.005423044957885124,0.005017536008889349,0.005194048550726604,0.005066069823795679,0.004923514285732114,0.0053721924337601975,0.005156078360383089,0.004962157137571195,0.0045958264654801136,0.0043323942880189766,0.004310971039183395,0.004733498071711899,0.005238905827304569,0.005180319290046715,0.0050892994891999395,0.005323200339923676,0.005430819354625569,0.0051261318575094965,0.004608215352126279,0.0042522740751442835,0.003964475580118653,0.004281845094328685,0.003932866994198572,0.003751478035379218,0.003988758544406512,0.00366304957414055,0.0030455636180720283,0.0027753884456863088,0.0025920006620398267,0.00253411154251131,0.0024133671863316246,0.0020164600081521793,0.002294208143652257,0.0021879013667402856,0.00213873257081609,0.0019997327222615736,0.00195034020886016,0.0022503784328324725,0.003038201783164678,0.003603415824772916,0.003642976691503975,0.003263887163622944,0.
0035506429555724373,0.0047798428190157045,0.0040553738896165386,0.002473176007612183,0.0025941258844692236,0.0018292994313265358,0.00209892075806378,0.0023955564365646335,0.0020375114833779307,0.002260575557815427,0.0022985835848993693,0.002099406433733155,0.0018586368200849512,0.0016053613868235123,0.001438613175578214,0.00143049357541102,0.0013095127315154774,0.001262471540939509,0.0013514522407795408,0.001605619634800475,0.001961075896285937,0.001865266816887284,0.0023526578031602017,0.00246341280674717,0.0025884459641316543,0.0025289043233280195,0.0027480853600970576,0.003160811294269662,0.003061310957205347,0.0034708227008575852,0.0027193887970078795,0.0025019043062104967,0.001721602287020676,0.0014938287993981696,0.001379701311142287,0.001482278335951954,0.0017739654977338047,0.0016173740322614279,0.0014568993700072393,0.001561687803455451,0.0016478201019948435,0.001296045775857753,0.001237797494806695,0.0014233100660923912,0.001327643348684166,0.0012058468589450113,0.001326993796471779,0.0015302363900395407,0.0019691433239499958,0.001914607620254396,0.0017054233649494027,0.001999944948934884,0.001586257522693384,0.0017888302317418617,0.0024194552369763127,0.002602486169233071,0.0023322619326367703,0.002188641252143114,0.002160637896948486,0.0017183240941773745,0.0013791696278384316,0.0013010975606518034,0.0012917607493148195,0.0014473287423454842,0.0011277134770190562,0.0009788023156115833,0.0011624520875172602,0.0011529250281587956,0.0011286272690398862,0.0011650110432320925,0.0011670732824154513,0.0012701258601414223,0.0010863631780132393,0.001151403997327795,0.001261531100583112,0.0014433469612850924,0.0012625181480229021,0.0013366719381237742,0.0013129577294860868,0.0010799358566476144,0.0012361331567450533,0.0013155633998451644,0.0017427549165517102,0.0017117554798138019,0.0014424582600283703,0.0014934381441740442,0.001320132472902865,0.0010134949123866623,0.0009392144030905535,0.0008956207514417853,0.0009483482891766875,0.0007118586291810097,0.000657263
3034661715,0.0006246206878692327])
x = np.array([1.1,1.2000000000000002,1.3,1.4000000000000001,1.5,1.6,1.7000000000000002,1.8,1.9000000000000001,2.0,2.1,2.2,2.3000000000000003,2.4000000000000004,2.5,2.6,2.7,2.8000000000000003,2.9000000000000004,3.0,3.1,3.2,3.3000000000000003,3.4000000000000004,3.5,3.6,3.7,3.8000000000000003,3.9000000000000004,4.0,4.1000000000000005,4.2,4.3,4.4,4.5,4.6000000000000005,4.7,4.800000000000001,4.9,5.0,5.1000000000000005,5.2,5.300000000000001,5.4,5.5,5.6000000000000005,5.7,5.800000000000001,5.9,6.0,6.1000000000000005,6.2,6.300000000000001,6.4,6.5,6.6000000000000005,6.7,6.800000000000001,6.9,7.0,7.1000000000000005,7.2,7.300000000000001,7.4,7.5,7.6000000000000005,7.7,7.800000000000001,7.9,8.0,8.1,8.200000000000001,8.3,8.4,8.5,8.6,8.700000000000001,8.8,8.9,9.0,9.1,9.200000000000001,9.3,9.4,9.5,9.600000000000001,9.700000000000001,9.8,9.9,10.0,10.100000000000001,10.200000000000001,10.3,10.4,10.5,10.600000000000001,10.700000000000001,10.8,10.9,11.0,11.100000000000001,11.200000000000001,11.3,11.4,11.5,11.600000000000001,11.700000000000001,11.8,11.9,12.0,12.100000000000001,12.200000000000001,12.3,12.4,12.5,12.600000000000001,12.700000000000001,12.8,12.9,13.0,13.100000000000001,13.200000000000001,13.3,13.4,13.5,13.600000000000001,13.700000000000001,13.8,13.9,14.0,14.100000000000001,14.200000000000001,14.3,14.4,14.5,14.600000000000001,14.700000000000001,14.8,14.9,15.0,15.100000000000001,15.200000000000001,15.3,15.4,15.5,15.600000000000001,15.700000000000001,15.8,15.9,16.0,16.1,16.2,16.3,16.400000000000002,16.5,16.6,16.7,16.8,16.900000000000002,17.0,17.1,17.2,17.3,17.400000000000002,17.5,17.6,17.7,17.8,17.900000000000002,18.0,18.1,18.2,18.3,18.400000000000002,18.5,18.6,18.7,18.8,18.900000000000002,19.0,19.1,19.200000000000003,19.3,19.400000000000002,19.5,19.6,19.700000000000003,19.8,19.900000000000002,20.0])
def funcExp(x, a, b, c):
return a * np.exp(-b * x) + c
# here you include the penalization factor
def residuals(p, x, y):
est = funcExp(x, p[0], p[1], p[2])
penalization = y - funcExp(x, p[0], p[1], p[2] )
penalization[penalization<0] = 0
penaliz = np.abs(np.sum(penalization))
return y - funcExp(x, p[0], p[1], p[2] ) - penalization
popt, pcov = curve_fit(funcExp, x, y,p0=[y[0], 1, y[-1]])
popt2, pcov2 = leastsq(func=residuals, x0=(y[0], 1, y[-1]), args=(x, y))
fig, ax = plt.subplots()
ax.plot(x,y )
ax.plot(x,funcExp(x, popt[0], popt[1], popt[2]),'r' )
ax.plot(x,funcExp(x, popt2[0], popt2[1], popt2[2]),'g' )
plt.show()
</code></pre>
<p>which gives :
<a href="https://i.sstatic.net/0kFb1.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/0kFb1.png" alt="enter image description here" /></a></p>
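<p>One variant of the penalization idea is an asymmetric weight: multiply only the negative residuals (points where the curve sits above the data) by a large factor and hand that to <code>scipy.optimize.least_squares</code>. This is a sketch on toy data, not the signal above, and the weight value is an arbitrary assumption:</p>

```python
import numpy as np
from scipy.optimize import least_squares

# toy stand-in for the signal: exponential decay plus strictly positive noise
rng = np.random.default_rng(1)
x = np.linspace(1, 20, 200)
y = 0.2 * np.exp(-0.5 * x) + 0.01 + rng.uniform(0.0, 0.01, x.size)

def model(p, x):
    a, b, c = p
    return a * np.exp(-b * x) + c

def residuals(p, x, y, weight=1000.0):
    r = y - model(p, x)
    # curve-above-data residuals are penalized much harder than the rest
    return np.where(r < 0, r * weight, r)

fit = least_squares(residuals, x0=[y[0], 1.0, y[-1]], args=(x, y))
curve = model(fit.x, x)
```

<p>The large one-sided weight pushes the fitted curve towards the lower envelope of the data instead of through its middle.</p>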
|
<python><curve-fitting>
|
2024-04-03 12:03:26
| 1
| 5,207
|
ymmx
|
78,267,481
| 9,251,158
|
How to set video language when uploading to YouTube with Python API client
|
<p>I am uploading videos to YouTube programmatically following <a href="https://developers.google.com/youtube/v3/guides/uploading_a_video" rel="nofollow noreferrer">the official guide</a>:</p>
<pre class="lang-py prettyprint-override"><code> body=dict(
snippet=dict(
title=title,
description=description,
tags=tags,
),
status=dict(
privacyStatus="public"
)
)
</code></pre>
<p>I can upload the video, set the title, the description, and upload a thumbnail. The setting "Made for kids" comes from the channel's default.</p>
<p>But the video's language does not come from the channel's default, so I want to set it programmatically. On the web interface, it looks like this:</p>
<p><a href="https://i.sstatic.net/Ale6I.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Ale6I.png" alt="upload language settings" /></a></p>
<p>I would also like to set programmatically other options in the web interface, such as:</p>
<ul>
<li>altered content</li>
<li>Allow automatic chapters and key moments</li>
<li>Allow automatic concepts</li>
<li>Don't allow remixing</li>
</ul>
<p>How can I set these additional options during or after the upload with the Python API client?</p>
<h2>Update</h2>
<p>I tried <code>defaultLanguage="fr-FR"</code> in <code>snippet</code> and it sets the title and description language, not the video language:</p>
<p><a href="https://i.sstatic.net/ExQpB.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/ExQpB.png" alt="language setting from the API" /></a></p>
<p>I also tried <code>defaultLanguage="fr"</code> in <code>snippet</code> and I get the same result.</p>
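<p>For comparison: the <code>videos</code> resource also has a <code>snippet.defaultAudioLanguage</code> property, documented as the language of the video's default audio track, which may be what the web interface's "video language" maps to; <code>defaultLanguage</code> only covers the title/description metadata. A hedged sketch of the request body (not verified against the upload endpoint):</p>

```python
# assumption: defaultAudioLanguage (a BCP-47 code) is the "video language"
# field, while defaultLanguage stays the title/description language
body = dict(
    snippet=dict(
        title="My title",              # placeholder values
        description="My description",
        tags=["tag"],
        defaultLanguage="fr",
        defaultAudioLanguage="fr",
    ),
    status=dict(privacyStatus="public"),
)
```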
|
<python><python-3.x><youtube><youtube-api><youtube-data-api>
|
2024-04-03 11:59:36
| 1
| 4,642
|
ginjaemocoes
|
78,267,455
| 3,840,551
|
How can I indicate to static type checkers that specific methods of a library's baseclass MUST be re-implemented in our subclasses?
|
<p>I use a protocol to enforce that our subclasses should implement extra methods and define extra attributes. But I would like to also enforce that specific methods of the base class <em>MUST</em> be re-implemented in our subclass. Is this possible?</p>
<pre class="lang-py prettyprint-override"><code>### From some lib: ###
class BaseFoo:
def a_really_nice_func(self, intparam: int) -> str:
print("This is the base behaviour")
### End lib ###
### Start my code: ###
class OurBaseFooProtocol(Protocol):
description: str
def a_really_nice_func(self, intparam: int) -> str:
...
# We need to implement a lot of classes like this:
class Foo(BaseFoo, OurBaseFooProtocol):
description = "I like trains" # Correctly gives an error if it's not defined
# but if I don't define `a_really_nice_func`, it doesn't complain - because it's defined in `BaseFoo`.
# How do I force all subclasses of BaseFoo that implement the OurBaseFoo protocol
# to *re-implement `a_really_nice_func`?
</code></pre>
<p>EDIT - NOTE:</p>
<p>I originally implemented this as an ABC:</p>
<pre class="lang-py prettyprint-override"><code>
class OurBaseFoo(BaseFoo, ABC):
description: str
@abstractmethod
def a_really_nice_func(self, intparam: int) -> str:
raise NotImplementedError
</code></pre>
<p>But, unfortunately, this does not allow mypy/pyright to show an error when our subclass does not define the <code>description</code> attribute.</p>
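<p>As a fallback, if runtime (rather than static) enforcement is acceptable, <code>__init_subclass__</code> can reject any subclass that fails to provide its own override. This is only a sketch; it will not make mypy/pyright flag the missing method at type-check time:</p>

```python
class BaseFoo:
    def a_really_nice_func(self, intparam: int) -> str:
        return "base behaviour"


class OurBaseFoo(BaseFoo):
    description: str

    def __init_subclass__(cls, **kwargs) -> None:
        super().__init_subclass__(**kwargs)
        # every concrete subclass must supply its own override and attribute
        for required in ("a_really_nice_func", "description"):
            if required not in cls.__dict__:
                raise TypeError(f"{cls.__name__} must define {required}")


class Foo(OurBaseFoo):
    description = "I like trains"

    def a_really_nice_func(self, intparam: int) -> str:
        return f"custom {intparam}"
```

<p>Any subclass of <code>OurBaseFoo</code> that omits either name fails at class-creation time with a <code>TypeError</code>, so the mistake at least cannot reach production silently.</p>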
|
<python><python-typing>
|
2024-04-03 11:56:01
| 1
| 1,529
|
Gloomy
|
78,267,418
| 536,262
|
fastapi returns HTML on HTTPException
|
<p>I'm trying to return a HTML response on HTTPException for a display board, so it can reload on connection errors:</p>
<pre class="lang-py prettyprint-override"><code>from fastapi import HTTPException
from fastapi.responses import HTMLResponse, JSONResponse
class ReloadException(HTTPException):
""" return a HTMLResponse so browser will try reload later """
def __init__(self, status_code:int=500, detail:str=None):
self.status_code = status_code
self.detail = detail
self.resp = HTMLResponse(
status_code=self.status_code,
content=f"<html><head><meta http-equiv=\"refresh\" content=\"60\"></head><body>error:{self.detail}</body></html>",
headers={'Content-Type': 'text/html'}
)
# self.resp = JSONResponse(json.dumps({"detail": self.detail}), status_code=self.status_code)
def __str__(self):
return self.resp
:
# connection test
try:
r = await s.get(url, headers={'PRIVATE-TOKEN': config.get_config()['gitlabtoken']})
r.raise_for_status()
except httpx.HTTPError as exc:
raise ReloadException(status_code=500, detail=f"{exc.request.url} - {exc}")
if r and r.status_code!=200:
raise HTTPException(status_code=r.status_code, detail=f"{r.text}")
</code></pre>
<p>I get this error from starlette:</p>
<pre><code> File "site-packages\starlette\responses.py", line 49, in render
return content.encode(self.charset) # type: ignore
^^^^^^^^^^^^^^
AttributeError: 'dict' object has no attribute 'encode'
</code></pre>
<p><strong>UPDATED</strong><br />
I now send the content as a string (as suggested by @MatsLindh; corrected above), but it still returns JSON.</p>
<pre><code>curl.exe -v "http://localhost:8000/lastjobs"
* Trying [::1]:8000...
* Trying 127.0.0.1:8000...
* Connected to localhost (127.0.0.1) port 8000
> GET /lastjobs HTTP/1.1
> Host: localhost:8000
> User-Agent: curl/8.4.0
> Accept: */*
>
< HTTP/1.1 500 Internal Server Error
< date: Wed, 03 Apr 2024 13:20:43 GMT
< server: uvicorn
< content-length: 100
< content-type: application/json
<
{"detail":"https://xxxxxxx/api/v4/projects/2309 - [Errno 11001] getaddrinfo failed"}
</code></pre>
|
<python><fastapi>
|
2024-04-03 11:47:49
| 0
| 3,731
|
MortenB
|
78,266,811
| 2,058,355
|
Unable to generate sentence embedding via multiprocessing
|
<p>I am trying to generate sentence embeddings for, say, 300 sentences using a Hugging Face model, but my code just keeps getting stuck and execution doesn't proceed.</p>
<p>I have put debug statements in different parts of the snippet, but found nothing conclusive.
By contrast, when I try to embed the sentences sequentially, everything works smoothly.</p>
<p>Can someone suggest where the issue is?</p>
<pre><code>import queue  # needed for queue.Empty below
import torch
import multiprocessing as mp
from transformers import AutoTokenizer, AutoModel
def encode_sentence(sentence, model, tokenizer, output_list):
encoded_input = tokenizer(sentence, return_tensors="pt")
with torch.no_grad():
output = model(**encoded_input)
embedding = output.last_hidden_state[:, 0, :]
output_list.append(embedding)
def worker(input_queue, output_list, model, tokenizer):
while True:
try:
sentence = input_queue.get()
if sentence is None:
break
encode_sentence(sentence, model, tokenizer, output_list)
except queue.Empty:
pass
if __name__ == "__main__":
model_name = "distilbert-base-uncased"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModel.from_pretrained(model_name)
num_workers = mp.cpu_count()
input_queue = mp.Queue()
output_list = mp.Manager().list()
workers = [mp.Process(target=worker, args=(input_queue, output_list, model, tokenizer)) for _ in range(num_workers)]
for w in workers:
w.start()
sentences = ["This is sentence {}".format(i) for i in range(300)]
for sentence in sentences:
input_queue.put(sentence)
for _ in range(num_workers):
input_queue.put(None)
for w in workers:
w.join()
print(output_list)
</code></pre>
|
<python><multiprocessing>
|
2024-04-03 09:52:29
| 1
| 1,755
|
prashantgpt91
|
78,266,786
| 4,673,585
|
Azure function app in python not trigger when blob is uploaded
|
<p>I have a consumption based function app in python which will be triggered when a blob with extension .db is created/uploaded in a folder in a container of a storage account.</p>
<p>function.json:</p>
<pre><code>{
"scriptFile": "__init__.py",
"bindings": [
{
"name": "myblob",
"type": "blobTrigger",
"direction": "in",
"path": "abc360/sqlite_db_file/{name}.db",
"connection": "AzureWebJobsStorage"
}
]
}
</code></pre>
<p>local.settings.json:</p>
<pre><code>{
"IsEncrypted": false,
"Values": {
"AzureWebJobsStorage": "connection string",
"FUNCTIONS_WORKER_RUNTIME": "python"
}
}
</code></pre>
<p>This is my settings in visual studio code.</p>
<p>I have created a build and release pipeline for deploying function app to azure. It works fine and updates the code in cloud.</p>
<p>But for some reason, when a db file is uploaded, the function app is not triggered. It works locally,
which suggests there is some setting in the cloud that I messed up.</p>
<p>Please help me get this right.</p>
<p>Thank You!</p>
|
<python><azure-functions><azure-blob-trigger>
|
2024-04-03 09:48:53
| 1
| 337
|
Rahul Sharma
|
78,266,772
| 567,059
|
How to stream Python sub-process output when using 'stdin'
|
<p>When using <code>stdin=PIPE</code> with <code>subprocess.Popen()</code>, is there a way to read <code>stdout</code> and <code>stderr</code> as they are streamed?</p>
<p>When not using <code>stdin=PIPE</code> I have been successfully reading the output from <code>stdout</code> and <code>stderr</code>, as described <a href="https://stackoverflow.com/a/21978778/567059">here</a>. However, now I need to include <code>stdin=PIPE</code>, and it seems that the only option is to wait for the process to finish and then use the tuple returned from <code>p.communicate()</code>?</p>
<p>The code indicated in the example below worked before using <code>stdin=PIPE</code>, but now fails with <code>ValueError: I/O operation on closed file.</code>.</p>
<p>Can this method still be used somehow, or is the only way to use <code>out</code> and <code>err</code> from <code>p.communicate()</code>?</p>
<pre class="lang-py prettyprint-override"><code>from subprocess import Popen, PIPE, CalledProcessError, TimeoutExpired
def log_subprocess_output(pipe, func=logger.info) -> None:
'''Log subprocess output from a pipe.'''
for line in pipe:
func(line.decode().rstrip())
try:
p = Popen(args, stdin=PIPE, stdout=PIPE, stderr=PIPE, text=True)
out, err = p.communicate(input=input, timeout=10)
# This worked before using `stdin=PIPE`.
# Can this method still be used somehow, or is the only way to use `out` and `err` from `p.communicate()`?
with p.stdout:
log_subprocess_output(p.stdout)
with p.stderr:
log_subprocess_output(p.stderr, func=logger.error)
except TimeoutExpired as e:
logger.error(f'My process timed out: {e}')
p.kill()
except CalledProcessError as e:
logger.error(f'My process failed: {e}')
except Exception as e:
logger.error(f'An exception occurred: {e}')
</code></pre>
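<p>To frame what I am after: one approach I have seen is to write the input, close <code>stdin</code> so the child sees EOF, and drain <code>stdout</code>/<code>stderr</code> on reader threads instead of calling <code>communicate()</code>. A self-contained sketch (using a dummy child process in place of my real <code>args</code>):</p>

```python
import subprocess
import sys
import threading

def pump(pipe, lines):
    """Stream lines from a pipe as they arrive."""
    for line in pipe:
        lines.append(line.rstrip('\n'))
    pipe.close()

# dummy child that echoes its stdin upper-cased (stands in for `args`)
child = [sys.executable, '-c',
         'import sys; sys.stdout.write(sys.stdin.read().upper())']

p = subprocess.Popen(child, stdin=subprocess.PIPE, stdout=subprocess.PIPE,
                     stderr=subprocess.PIPE, text=True)

out_lines, err_lines = [], []
threads = [threading.Thread(target=pump, args=(p.stdout, out_lines)),
           threading.Thread(target=pump, args=(p.stderr, err_lines))]
for t in threads:
    t.start()

p.stdin.write('hello\n')  # send the input
p.stdin.close()           # EOF so the child's read() returns

for t in threads:
    t.join()
p.wait()
```

<p>Timeout handling would still need to be layered on top (for example via <code>p.wait(timeout=10)</code> and <code>p.kill()</code>), since this sketch deliberately leaves it out.</p>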
|
<python><subprocess>
|
2024-04-03 09:46:49
| 1
| 12,277
|
David Gard
|
78,266,757
| 833,362
|
How to iterate a dict within a template that contains a key named items?
|
<p>I have some fairly simple django template and part of it is iterating through some dict ( that originates from some parsed json files ) like:</p>
<pre><code>{% for k, v in d.items %}
</code></pre>
<p>However, my dict now contains an item called <code>items</code>. Because of the <code>a.b</code> syntax, Django first tries to resolve it as <code>a[b]</code> before trying <code>getattr(a, b)</code> (and calling the result if it is callable), so it will try to iterate through the value under that key instead of the dict's items.</p>
<p>What would be a good way to resolve this? I know I could write a filter that simply returns <code>list(d.items())</code>, but that seems wrong/wasteful to me. Are there other possibilities to solve this problem?</p>
|
<python><django><django-templates>
|
2024-04-03 09:44:08
| 2
| 16,146
|
PlasmaHH
|
78,266,625
| 4,399,016
|
Identifying Peaks (Local Maxima) and Troughs (Local Minima) in Data and Labeling them
|
<p>I have data that resembles this:</p>
<pre><code>import pandas as pd
import random
random.seed(901)
rand_list = []
for i in range(20):
x = random.randint(480, 600)
rand_list.append(x/10)
df = pd.DataFrame({'INDEX':rand_list})
df['INDEX_DIFF'] = df.INDEX.diff()
df
</code></pre>
<p>Output:</p>
<pre class="lang-none prettyprint-override"><code> INDEX INDEX_DIFF
0 53.5 NaN
1 56.4 2.9
2 51.7 -4.7
3 49.4 -2.3
4 55.4 6.0
5 49.9 -5.5
6 52.9 3.0
7 57.7 4.8
8 54.2 -3.5
9 51.4 -2.8
10 57.1 5.7
11 56.7 -0.4
12 58.5 1.8
13 52.1 -6.4
14 57.6 5.5
15 56.1 -1.5
16 54.2 -1.9
17 52.9 -1.3
18 56.6 3.7
19 53.2 -3.4
</code></pre>
<p>I want to identify instances where the <code>INDEX</code> value crosses 50 from below and keeps growing while above 50. The first negative <code>INDEX_DIFF</code> value after that indicates that <code>INDEX</code> has peaked above 50.</p>
<p>For instance, the 6th value 49.9 crosses 50 and continues to grow (52.9, 57.7) before peaking at 57.7 and dropping to 54.2 in the next instance. The <code>INDEX_DIFF</code> gives the first negative value since crossing 50 from below.</p>
<p>How can I check this condition and label this as a Peak in a new column in the pandas dataframe?</p>
<p>I have tried to use <a href="https://stackoverflow.com/questions/73939479/how-to-find-last-occurrence-of-value-meeting-condition-in-column-in-python">Last occurrence of a condition</a>, but I don't know how to check this elaborate condition.</p>
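<p>To make the condition concrete, here is a sketch of the labelling I am after, under the interpretation that a peak is the last point of a strictly rising run that starts below 50 and ends above 50 (the hard-coded values are the <code>INDEX</code> column from the output above, so the example is reproducible without the random seed):</p>

```python
import pandas as pd

vals = [53.5, 56.4, 51.7, 49.4, 55.4, 49.9, 52.9, 57.7, 54.2, 51.4,
        57.1, 56.7, 58.5, 52.1, 57.6, 56.1, 54.2, 52.9, 56.6, 53.2]
df = pd.DataFrame({'INDEX': vals})
df['INDEX_DIFF'] = df['INDEX'].diff()

def label_peaks(values, threshold=50.0):
    # a peak = last point of a strictly rising run that starts below the
    # threshold; the next diff is then the first negative one since crossing
    labels = [''] * len(values)
    i = 0
    while i < len(values) - 1:
        if values[i] < threshold and values[i + 1] > values[i]:
            j = i + 1
            while j + 1 < len(values) and values[j + 1] > values[j]:
                j += 1
            if values[j] > threshold:
                labels[j] = 'Peak'
            i = j
        else:
            i += 1
    return labels

df['LABEL'] = label_peaks(vals)
peak_rows = df.index[df['LABEL'] == 'Peak'].tolist()
```

<p>On the data above this labels rows 4 (49.4 → 55.4) and 7 (49.9 → 52.9 → 57.7) as peaks.</p>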
|
<python><pandas><data-wrangling>
|
2024-04-03 09:23:09
| 2
| 680
|
prashanth manohar
|
78,266,597
| 188,331
|
TypeError: forward() got an unexpected keyword argument 'token_type_ids'
|
<p>Using the HuggingFace Transformers framework, I encountered a problem when I used a custom <code>forward()</code> function in a custom class to fine-tune a pre-trained model to match my custom Tokenizer, but a strange error occurred:</p>
<blockquote>
<p>TypeError: forward() got an unexpected keyword argument 'token_type_ids'</p>
</blockquote>
<p>Here is the code:</p>
<pre><code>class MyModel(BartForConditionalGeneration):
def __init__(self, config):
super().__init__(config)
def forward(self, input_ids, attention_mask=None, decoder_input_ids=None, decoder_attention_mask=None, head_mask=None):
return super().forward(
input_ids=input_ids,
attention_mask=attention_mask,
decoder_input_ids=decoder_input_ids,
decoder_attention_mask=decoder_attention_mask,
head_mask=head_mask,
)
# load the model with custom tokenizer
model = MyModel(BartConfig.from_pretrained(checkpoint)).half().cuda()
model.resize_token_embeddings(len(tokenizer))
model.config.vocab_size = len(tokenizer)
model.config.use_cache = False
</code></pre>
<p>The whole piece of code does not have <code>token_type_ids</code> at all.</p>
<p>The trackback is as follows:</p>
<pre><code>TypeError Traceback (most recent call last)
File <timed exec>:2
File ~/.local/lib/python3.9/site-packages/transformers/trainer.py:1645, in Trainer.train(self, resume_from_checkpoint, trial, ignore_keys_for_eval, **kwargs)
1640 self.model_wrapped = self.model
1642 inner_training_loop = find_executable_batch_size(
1643 self._inner_training_loop, self._train_batch_size, args.auto_find_batch_size
1644 )
-> 1645 return inner_training_loop(
1646 args=args,
1647 resume_from_checkpoint=resume_from_checkpoint,
1648 trial=trial,
1649 ignore_keys_for_eval=ignore_keys_for_eval,
1650 )
File ~/.local/lib/python3.9/site-packages/transformers/trainer.py:1938, in Trainer._inner_training_loop(self, batch_size, args, resume_from_checkpoint, trial, ignore_keys_for_eval)
1935 self.control = self.callback_handler.on_step_begin(args, self.state, self.control)
1937 with self.accelerator.accumulate(model):
-> 1938 tr_loss_step = self.training_step(model, inputs)
1940 if (
1941 args.logging_nan_inf_filter
1942 and not is_torch_tpu_available()
1943 and (torch.isnan(tr_loss_step) or torch.isinf(tr_loss_step))
1944 ):
1945 # if loss is nan or inf simply add the average of previous logged losses
1946 tr_loss += tr_loss / (1 + self.state.global_step - self._globalstep_last_logged)
File ~/.local/lib/python3.9/site-packages/transformers/trainer.py:2759, in Trainer.training_step(self, model, inputs)
2756 return loss_mb.reduce_mean().detach().to(self.args.device)
2758 with self.compute_loss_context_manager():
-> 2759 loss = self.compute_loss(model, inputs)
2761 if self.args.n_gpu > 1:
2762 loss = loss.mean() # mean() to average on multi-gpu parallel training
File ~/.local/lib/python3.9/site-packages/transformers/trainer.py:2784, in Trainer.compute_loss(self, model, inputs, return_outputs)
2782 else:
2783 labels = None
-> 2784 outputs = model(**inputs)
2785 # Save past state if it exists
2786 # TODO: this needs to be fixed and made cleaner later.
2787 if self.args.past_index >= 0:
File ~/.local/lib/python3.9/site-packages/torch/nn/modules/module.py:1511, in Module._wrapped_call_impl(self, *args, **kwargs)
1509 return self._compiled_call_impl(*args, **kwargs) # type: ignore[misc]
1510 else:
-> 1511 return self._call_impl(*args, **kwargs)
File ~/.local/lib/python3.9/site-packages/torch/nn/modules/module.py:1520, in Module._call_impl(self, *args, **kwargs)
1515 # If we don't have any hooks, we want to skip the rest of the logic in
1516 # this function, and just call forward.
1517 if not (self._backward_hooks or self._backward_pre_hooks or self._forward_hooks or self._forward_pre_hooks
1518 or _global_backward_pre_hooks or _global_backward_hooks
1519 or _global_forward_hooks or _global_forward_pre_hooks):
-> 1520 return forward_call(*args, **kwargs)
1522 try:
1523 result = None
TypeError: forward() got an unexpected keyword argument 'token_type_ids'
</code></pre>
<p>My question is: How can I resolve this problem?</p>
<p><strong>UPDATE</strong></p>
<p>I researched on the Internet, and found out that I can add the following attribute to the tokenizer loading:</p>
<pre><code>tokenizer = MyTokenizerFast.from_pretrained(tokenizer_repo, return_token_type_ids="token_type_ids")
</code></pre>
<p>OR</p>
<pre><code>tokenizer = MyTokenizerFast.from_pretrained(tokenizer_repo, return_token_type_ids=True)
</code></pre>
<p>but after applying this code, the error persists.</p>
<p>Versions: (not all the latest versions due to version conflict with other libraries)</p>
<ul>
<li>accelerate 0.28.2</li>
<li>transformers 4.36.2</li>
<li>tokenizers 0.15.2</li>
<li>spacy-transformers 1.3.4</li>
</ul>
<hr />
<p><strong>UPDATE 2</strong></p>
<p>After removing <code>token_type_ids</code> in the <code>DatasetDict</code>, the training seems to be working, but in evaluating after the first 5000 steps, the process stops with another error:</p>
<blockquote>
<p>TypeError: forward() got an unexpected keyword argument 'encoder_outputs'</p>
</blockquote>
<p>even though <code>encoder_outputs</code> never appears anywhere in my code. I wonder if <code>BartForConditionalGeneration</code> is not suitable as the base class for the fine-tuning model. In the source code of <code>BartForConditionalGeneration</code>, I found the following related code: <a href="https://github.com/huggingface/transformers/blob/main/src/transformers/models/bart/modeling_bart.py#L999" rel="nofollow noreferrer">https://github.com/huggingface/transformers/blob/main/src/transformers/models/bart/modeling_bart.py#L999</a></p>
<p>I am unsure what to do next.</p>
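<p>One pattern I have seen suggested for both errors (a sketch, not verified against my exact setup) is to let the custom <code>forward()</code> accept and discard extra keyword arguments instead of enumerating them, so whatever extra keys the Trainer forwards (<code>token_type_ids</code>, <code>encoder_outputs</code>, ...) no longer raise <code>TypeError</code>. Illustrated here with a stand-in class so the snippet runs without transformers installed:</p>

```python
class MyModel:  # stand-in for the BartForConditionalGeneration subclass
    def forward(self, input_ids, attention_mask=None, **kwargs):
        # drop the keys the base forward() does not understand; in the real
        # model the remaining kwargs would be passed to super().forward()
        kwargs.pop('token_type_ids', None)
        kwargs.pop('encoder_outputs', None)
        return {'input_ids': input_ids, 'leftover': sorted(kwargs)}

m = MyModel()
out = m.forward(input_ids=[1, 2], token_type_ids=[0, 0],
                encoder_outputs=None, head_mask=None)
```

<p>The alternative of deleting <code>token_type_ids</code> from the <code>DatasetDict</code> (which I tried) only removes one offending key; swallowing unknown kwargs covers keys injected later in the pipeline, such as <code>encoder_outputs</code> during generation-based evaluation.</p>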
|
<python><huggingface-transformers><huggingface-tokenizers>
|
2024-04-03 09:18:54
| 0
| 54,395
|
Raptor
|
78,266,465
| 2,875,230
|
How to remove json elements and reformat it
|
<p>I have the follow JSON as an output of an enumerate function.</p>
<pre><code>{'question': (0, 'Who invented the light bulb?'), 'answers': [{'answer': 'Thomas Edison', 'score': '2.0'}, {'answer': 'Nikola Tesla', 'score': '2.0'}, {'answer': 'Albert Einstein', 'score': '0.0'}], 'error': "We didn't quite get that"}
</code></pre>
<p>How can I remove the 0 and the enclosing parentheses so it looks like this:</p>
<pre><code>{'question': 'Who invented the light bulb?', 'answers': [{'answer': 'Thomas Edison', 'score': '2.0'}, {'answer': 'Nikola Tesla', 'score': '2.0'}, {'answer': 'Albert Einstein', 'score': '0.0'}], 'error': "We didn't quite get that"}
</code></pre>
<p>I'd appreciate any python or Regex solution. Thank you.</p>
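<p>For what it's worth, the value under <code>'question'</code> is an <code>(index, text)</code> tuple produced by <code>enumerate</code>, not a string, so a dict-level fix without any regex might be as simple as:</p>

```python
d = {'question': (0, 'Who invented the light bulb?'),
     'answers': [{'answer': 'Thomas Edison', 'score': '2.0'},
                 {'answer': 'Nikola Tesla', 'score': '2.0'},
                 {'answer': 'Albert Einstein', 'score': '0.0'}],
     'error': "We didn't quite get that"}

# keep only the text part of the (index, text) tuple from enumerate
d['question'] = d['question'][1]
```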
|
<python><json><string>
|
2024-04-03 08:55:27
| 3
| 337
|
user2875230
|
78,266,452
| 22,326,950
|
Range slider for plotly waterfall in python
|
<p>I would like to implement a <a href="https://plotly.com/python/range-slider/" rel="nofollow noreferrer">range slider</a> for <code>plotly.graph_objects.Waterfall</code> chart. Unfortunately its layout has no property <code>rangeslider</code> so passing <code>rangeslider=dict(visible=True)</code> in <code>fig.update_layout</code> won't work. However, there seems to be a <code>sliders</code> property but <a href="https://plotly.com/python/sliders/" rel="nofollow noreferrer">plotly sliders</a> seem to not support ranges and do not display a miniature graph.</p>
<p>Currently I have a <code>waterfall_ax</code> chart in my Jupyter notebook with an <code>ipywidgets</code> <code>IntRangeSlider</code>, which is working fine except for being too short to conveniently select values, and it is placed at the top left of the chart. Knowing the range slider from plotly, I was hoping to achieve a cleaner and more intuitive interface and chart.</p>
<p>Is there a way to get something similar to the plotly range slider for waterfall charts?</p>
|
<python><jupyter-notebook><plotly>
|
2024-04-03 08:53:31
| 1
| 884
|
Jan_B
|
78,266,416
| 1,422,096
|
Reed-Solomon correction with a stream of bits
|
<p>This works with the <a href="https://github.com/tomerfiliba-org/reedsolomon" rel="nofollow noreferrer"><code>reedsolo</code></a> package:</p>
<pre><code>from reedsolo import RSCodec, ReedSolomonError
rsc = RSCodec(10) # 10 ecc symbols
rsc.encode(b'hello world') # b'hello world\xed%T\xc4\xfd\xfd\x89\xf3\xa8\xaa'
rsc.decode(b'heXlo worXd\xed%T\xc4\xfdX\x89\xf3\xa8\xaa')[0] # 3 errors, output: b'hello world'
</code></pre>
<p>but I notice that correction bytes are added at the end.</p>
<p>How do I work with a stream of bytes whose length is not known in advance; should the input be split into chunks?
Also, can <code>reedsolo</code> work with a stream of bits? Example:</p>
<pre><code>rsc.encode_bits([1, 0, 0, 1, 1])
# [1, 0, 0, 1, 1, 0, 0, 1, 1]
</code></pre>
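<p>The chunking I have in mind looks like this (a sketch; the chunk size of 255 - 10 = 245 data bytes assumes the default GF(256) symbol size with <code>RSCodec(10)</code>, and <code>reedsolo</code> may already chunk long inputs internally):</p>

```python
def chunks(data: bytes, chunk_size: int = 245):
    # 255-byte codewords minus 10 ecc symbols leave 245 data bytes per chunk;
    # each chunk is encoded/decoded independently, so a stream of
    # unknown total length can be processed incrementally
    for i in range(0, len(data), chunk_size):
        yield data[i:i + chunk_size]

pieces = list(chunks(b'a' * 500))
```

<p>Each piece would then go through <code>rsc.encode(...)</code> on its own. For the bit-level part of the question: Reed-Solomon as implemented by <code>reedsolo</code> operates on byte-sized symbols, so a bit stream would first need packing into bytes.</p>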
|
<python><bit><error-correction><reed-solomon>
|
2024-04-03 08:46:55
| 1
| 47,388
|
Basj
|
78,266,223
| 13,903,839
|
Replace the row with a norm of 0 in the tensor with the corresponding row in another tensor
|
<p>I have a PyTorch tensor A of shape (2000, 1, 360, 3). I'm trying to find all positions where the norm over the last dimension of this tensor is 0, and replace those positions with the values at the corresponding positions in another tensor B (same shape as A).</p>
<p>Example (A, B: (2, 1, 3, 3))</p>
<pre><code>A = [[[[0, 0, 0], # norm == 0
[1, 2, 1],
[0, 1, 0]]],
[[[2, 0, 0],
[0, 0, 0], # norm == 0
[1, 1, 1]]]]
B = [[[[0, 0, 1],
[1, 1, 1],
[0, 1, 0]]],
[[[1, 0, 0],
[0, 1, 1],
[2, 1, 1]]]]
</code></pre>
<p>Expected result:</p>
<pre><code>new_A = [[[[0, 0, 1], # <-- replaced
[1, 2, 1],
[0, 1, 0]]],
[[[2, 0, 0],
[0, 1, 1], # <-- replaced
[1, 1, 1]]]]
</code></pre>
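<p>The masking idea I'm after, written out in NumPy so the example above is runnable (I expect the same pattern to transfer to torch via <code>torch.linalg.norm</code> and <code>torch.where</code>, though I haven't confirmed that):</p>

```python
import numpy as np

A = np.array([[[[0, 0, 0], [1, 2, 1], [0, 1, 0]]],
              [[[2, 0, 0], [0, 0, 0], [1, 1, 1]]]], dtype=float)
B = np.array([[[[0, 0, 1], [1, 1, 1], [0, 1, 0]]],
              [[[1, 0, 0], [0, 1, 1], [2, 1, 1]]]], dtype=float)

mask = np.linalg.norm(A, axis=-1) == 0   # True where a last-dim row is all zero
new_A = np.where(mask[..., None], B, A)  # broadcast the mask over the last dim
```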
|
<python><pytorch><tensor>
|
2024-04-03 08:10:55
| 1
| 301
|
ojipadeson
|
78,266,175
| 354,051
|
scipy.signal.resample vs Ramer Douglas Peucker
|
<p><a href="https://i.sstatic.net/pqEZO.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/pqEZO.jpg" alt="enter image description here" /></a></p>
<p>The graph data is loaded from a wave file and only 1600 samples are used. For resampling, I have used</p>
<pre class="lang-py prettyprint-override"><code>resampled_audio_data = scipy.signal.resample(audio_data, len(audio_data) // 10)
</code></pre>
<p>Here is the code for calculating RDP</p>
<pre class="lang-py prettyprint-override"><code>def rdp(points, epsilon):
"""
Implementation of the Ramer-Douglas-Peucker algorithm.
Parameters:
points (array): Array of data points.
epsilon (float): Tolerance parameter to control the reduction level.
Returns:
reduced_points (array): Array of reduced data points.
"""
if len(points) < 3:
return points
dmax = 0
index = 0
end = len(points) - 1
for i in range(1, end):
d = perpendicular_distance(points[i], points[0], points[end])
if d > dmax:
index = i
dmax = d
if dmax > epsilon:
rec_results1 = rdp(points[:index+1], epsilon)
rec_results2 = rdp(points[index:], epsilon)
results = np.vstack((rec_results1[:-1], rec_results2))
else:
results = np.array([points[0], points[end]])
return results
def perpendicular_distance(point, start, end):
"""
Calculate the perpendicular distance of a point from a line.
Parameters:
point (array): The point.
start (array): The start point of the line.
end (array): The end point of the line.
Returns:
dist (float): The perpendicular distance.
"""
if np.array_equal(start, end):
return np.linalg.norm(point - start)
return np.abs(np.linalg.norm(np.cross(end - start, start - point))) / np.linalg.norm(end - start)
</code></pre>
<p>And to calculate RDP from 160 points of resampled_audio_data</p>
<pre class="lang-py prettyprint-override"><code>time_data = range(0, len(resampled_audio_data))
reduced_points = rdp(np.column_stack((time_data, resampled_audio_data)), 0.00001)
</code></pre>
<p>Although the number of points produced by RDP is either the same as or a bit less than resampling, you can see in the image that the results are almost identical. The <a href="https://github.com/scipy/scipy/blob/2ecac3e596fdb458c85000e7707a8f5f46926621/scipy/signal/_signaltools.py#L3048" rel="nofollow noreferrer">resampling</a> code is FFT-based, yet both produce the same result. Performance-wise, resampling is much faster than RDP. I was under the impression that SciPy must be using some C/C++-based optimized code, but the linked code is pure Python, as is the FFT call it uses. The result:</p>
<pre><code>Resampled in = 0.0 secs Points = 160
RDP in = 0.06255841255187988 secs Points = 160
</code></pre>
<p>Question: What is so special about RDP?</p>
<p>Question: Is it possible to improve the RDP code further for performance?</p>
<p>Please suggest any other algorithms that can produce the same results but perform better than these two.</p>
<p>Cheers</p>
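<p>Regarding the performance question: my recursive <code>rdp</code> recomputes <code>perpendicular_distance</code> point by point in a Python loop. A sketch of an iterative variant (same algorithm, explicit stack, one vectorised NumPy distance computation per segment, 2-D points assumed) that I would expect to be noticeably faster:</p>

```python
import numpy as np

def rdp_iterative(points, epsilon):
    # Ramer-Douglas-Peucker with an explicit stack instead of recursion,
    # computing all perpendicular distances of a segment in one NumPy call
    points = np.asarray(points, dtype=float)
    n = len(points)
    keep = np.zeros(n, dtype=bool)
    keep[0] = keep[-1] = True
    stack = [(0, n - 1)]
    while stack:
        start, end = stack.pop()
        if end - start < 2:
            continue
        a, b = points[start], points[end]
        seg = points[start + 1:end]
        ab = b - a
        denom = np.linalg.norm(ab)
        if denom == 0:
            d = np.linalg.norm(seg - a, axis=1)
        else:
            # |cross(ab, ap)| / |ab| for every candidate point at once
            d = np.abs(ab[0] * (seg[:, 1] - a[1])
                       - ab[1] * (seg[:, 0] - a[0])) / denom
        idx = int(np.argmax(d))
        if d[idx] > epsilon:
            split = start + 1 + idx
            keep[split] = True
            stack.append((start, split))
            stack.append((split, end))
    return points[keep]

reduced = rdp_iterative([(0, 0), (1, 0), (2, 5), (3, 0), (4, 0)], 1.0)
```

<p>On the tiny test polyline this keeps only the endpoints and the spike at (2, 5), matching what the recursive version would return.</p>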
|
<python><scipy><sampling>
|
2024-04-03 08:04:21
| 0
| 947
|
Prashant
|
78,266,102
| 23,461,455
|
OpenCV throws Error: (-215:Assertion failed)
|
<p>Hi I am trying to read in and display a picture with cv2 for further analysis with the following code:</p>
<p>Edit: I added the actual path, with my username visible, to prove it doesn't contain wacky characters. I also tried with multiple .jpg files. I am using opencv-python version 4.9.0.80.</p>
<pre><code>import cv2
path = r'C:\Users\Max\Documents\Pictures\pandas_secondary.jpg'
image = cv2.imread(path)
window_name = 'image'
cv2.imshow(window_name, image)
cv2.waitKey(0)
cv2.destroyAllWindows()
</code></pre>
<p>However, this gives me the following error:</p>
<pre><code>error: OpenCV(4.9.0) D:\a\opencv-python\opencv-python\opencv\modules\highgui\src\window.cpp:971: error: (-215:Assertion failed) size.width>0 && size.height>0 in function 'cv::imshow'
</code></pre>
<p>The path to my image is correct; in my first posted question I had only anonymized it, and I have now added the actual path. What am I doing wrong here?</p>
<p>Update: moving the picture to the same directory as my Python code leads to the code running forever, even though the picture is only 20 kB. Displaying the picture with Pillow works fine, though. I also reinstalled cv2 with:</p>
<pre><code>pip install --upgrade --force-reinstall opencv-python
</code></pre>
<p>which didn't solve the problem.</p>
<p>This is the picture:</p>
<p><a href="https://i.sstatic.net/CFywo.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/CFywo.png" alt="Sample Picture" /></a></p>
<pre><code>os.path.isfile(path)
</code></pre>
<p>returns false.</p>
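<p>Since <code>os.path.isfile(path)</code> returns false, the failure happens before OpenCV even decodes anything: <code>cv2.imread</code> returns <code>None</code> instead of raising when it cannot read a file, and <code>imshow</code> then dies with the <code>size.width&gt;0</code> assertion. A defensive loader I sketched while debugging (the <code>imread</code> callable is injected so this snippet runs without cv2 installed; in practice one would pass <code>cv2.imread</code>):</p>

```python
import os

def load_image_checked(path, imread):
    # fail loudly instead of letting imshow crash on a None image;
    # cv2.imread returns None on any read/decode failure
    if not os.path.isfile(path):
        raise FileNotFoundError(f"no such file: {path!r}")
    image = imread(path)
    if image is None:
        raise ValueError(f"could not decode: {path!r}")
    return image
```

<p>Used as <code>image = load_image_checked(path, cv2.imread)</code>, this separates "the path is wrong" from "the file exists but isn't a readable image".</p>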
|
<python><file><path><anaconda><conda>
|
2024-04-03 07:51:40
| 2
| 1,284
|
Bending Rodriguez
|
78,265,963
| 15,313,661
|
Error when connecting to Azure SQL DataBase with PYODBC and ODBC Connection using DSN
|
<p>I'm using python in a project, in which I query data from an Azure SQL Database, and then write back the results to that same database.</p>
<p>I use the following code to create the SQLAlchemy engine:</p>
<pre class="lang-py prettyprint-override"><code>connection_string = 'mssql+pyodbc://user:pwd@dsn'
engine = sqlalchemy.create_engine(connection_string)
</code></pre>
<p>When running a query, I then get this error:</p>
<p>InterfaceError: (pyodbc.InterfaceError) ('IM002', '[IM002] [Microsoft][ODBC Driver Manager] Data source name not found and no default driver specified (0) (SQLDriverConnect)')
(Background on this error at: https://sqlalche.me/e/20/rvf5)</p>
<p>What puzzles me is that:</p>
<ul>
<li>The issue only arises on our stg database, dev and prd work fine</li>
<li>I thought it might be special characters in the password, but other environments have the same type of characters and no issue</li>
<li>The DSN parameters are identical, and the stg DSN connection is used successfully by other software</li>
<li>Using the <code>connection_url = sqlalchemy.URL.create()</code> method with the host and database specified directly works on stg. So the database can be reached in Python. However, for reasons specific to our system and this project, I have to use the example method above with dsn.</li>
</ul>
<p>Any idea what the issue could be? It looks like it has something to do with the stg dsn configuration, but since it looks to be the same as the other environments, I have no idea what the problem could be.</p>
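<p>One thing I am checking on my side (a sketch; the <code>user</code>, <code>pwd</code>, and <code>dsn</code> values below are placeholders): in a hostless <code>mssql+pyodbc://user:pwd@dsn</code> URL, URL-special characters in the password can break the parse so the DSN part is never seen, which would explain why only one environment fails. Percent-encoding the password with <code>quote_plus</code> rules this out:</p>

```python
from urllib.parse import quote_plus

user = "user"
pwd = "p@ss/w:rd"   # hypothetical password containing URL-special characters
dsn = "stg_dsn"     # hypothetical DSN name

# percent-encode the password so '@', '/' and ':' don't split the URL
connection_string = f"mssql+pyodbc://{user}:{quote_plus(pwd)}@{dsn}"
```

<p>The engine is then created from <code>connection_string</code> exactly as before.</p>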
|
<python><sqlalchemy><azure-sql-database><odbc>
|
2024-04-03 07:30:06
| 2
| 536
|
James
|
78,265,889
| 11,258,263
|
sys.path not always/reliably set by sitecustomize -- how can it be reliably overridden?
|
<p>I have an application using embedded Python, which includes <code>python.exe</code>. I have added a <code>sitecustomize.py</code> to set <code>sys.path</code>, something like:</p>
<pre class="lang-py prettyprint-override"><code>import sys
sys.path = ['.','.\\python310.zip','.\\Lib','.\\Lib\\site-packages']
</code></pre>
<p>This works for me, and I can see this path when running <code>python.exe</code>. But it does not work for a colleague; their environment shows a different <code>sys.path</code>:</p>
<pre class="lang-py prettyprint-override"><code>['.','.\\python310.zip']
</code></pre>
<p>They're calling the same command from the same installation from the same folder location but on a different host.</p>
<p>I understand there are also <code>._pth</code> (or <code>.pth</code>) files and <code>pyvenv.cfg</code> that can have an effect on <code>sys.path</code> and a <code>PYTHONPATH</code> environment variable. Then there's <code>PYTHONSTARTUP</code>, <code>usercustomize</code>, <code>PYTHONUSERBASE</code>, <code>sysconfig</code> and <em>nine</em> 'schemes' 🤦.</p>
<p>What are all the bootstrapping options to set <code>sys.path</code> and what takes precedence?</p>
<p>What role does <code>site.py</code> play in this? Do user packages take precedence over site packages and is there another 'global' level above site that overrides everything?</p>
<p>I really want to clobber everything but my own paths based on the installation file layout. How do I define my embedded environment so that it is not affected by settings outside that environment?</p>
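<p>For the "clobber everything" goal, the mechanism I believe is designed for exactly this on Windows is a <code>python310._pth</code> file next to <code>python.exe</code>: when present, it replaces the computed <code>sys.path</code> entirely, ignores <code>PYTHONPATH</code>/<code>PYTHONHOME</code> and the registry, and skips <code>site</code> processing unless an explicit <code>import site</code> line is included (so <code>sitecustomize</code>/<code>usercustomize</code> never run). A sketch of its contents, one path entry per line, relative to the executable:</p>

```
python310.zip
.
Lib
Lib\site-packages
```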
|
<python><pythonpath><python-embedding>
|
2024-04-03 07:17:19
| 1
| 470
|
DLT
|