| QuestionId | UserId | QuestionTitle | QuestionBody | Tags | CreationDate | AnswerCount | UserExpertiseLevel | UserDisplayName |
|---|---|---|---|---|---|---|---|---|
76,519,683
| 19,157,137
|
Pass Arguments to pyproject.toml and conf.py files from Docker Compose File
|
<p>I have a project with a <code>pyproject.toml</code> file containing metadata such as <code>name, version, description</code>, and <code>authors</code>. Additionally, I have a <code>conf.py</code> file that includes <code>project</code>, <code>author</code>, and <code>version</code> information. I want to pass these values as arguments to my Docker Compose file to automate the setup process. How can I achieve this?</p>
<p>I would like to dynamically set the values of the following fields in my Docker Compose file using the metadata from <code>pyproject.toml</code> and <code>conf.py</code>:</p>
<ul>
<li>Project name</li>
<li>Project version</li>
<li>Project description</li>
<li>Authors</li>
</ul>
<p>Tree Structure:</p>
<pre><code>project/
├── docker-compose.yml
├── pyproject.toml
└── docs/
└── conf.py
</code></pre>
<p>Contents of <code>docker-compose.yml</code>:</p>
<pre><code>version: '3.8'
services:
myapp:
build:
context: .
args:
- PROJECT_NAME=${PROJECT_NAME}
- PROJECT_VERSION=${PROJECT_VERSION}
- PROJECT_DESCRIPTION=${PROJECT_DESCRIPTION}
- AUTHORS=${AUTHORS}
</code></pre>
<p>Contents of <code>pyproject.toml</code>:</p>
<pre><code>[project]
name = "My Project"
version = "1.0.0"
description = "A sample project"
authors = ["John Doe", "Jane Smith"]
</code></pre>
<p>Contents of <code>docs/conf.py</code>:</p>
<pre><code>project = 'My Project'
author = 'John Doe, Jane Smith'
version = '1.0.0'
</code></pre>
<p>Is there a way to extract these values from the respective files and pass them as arguments to the Docker Compose file during the setup process? Any guidance or examples on how to accomplish this would be greatly appreciated.</p>
|
<python><docker><docker-compose><configuration><python-poetry>
|
2023-06-21 03:07:53
| 1
| 363
|
Bosser445
|
76,519,682
| 20,648,944
|
Set tick locations on twin log axis
|
<p>I'm having difficulty getting custom tick locations to display on a twinned axis set to log scale. The tick locations display fine on the original log-scale axis, and the tick locations display fine if the twinned axis is not set to log scale. My desired outcome is to have the normal log axis ticks on the bottom X axis (displayed in the right-side plot), and have the custom tick locations displayed on the upper twinned axis at their appropriate log-scale location (locations shown on the bottom left plot, but needed on top of plot).</p>
<p>Other solutions I've come across don't fix this issue for me. Are there other ways to control the tick locations and labels? (currently running matplotlib version 3.7.1)</p>
<pre><code>import matplotlib.pyplot as plt
## Locations for upper ticks
tick_locations=[3.9, 63, 250, 500]
fig, ax = plt.subplots(nrows=1, ncols=2, figsize=(7,3))
##Shared x limits
xlims=(10**-1, 2000)
## Twin the axes
sizeax0 = ax[0].twiny()
sizeax1 = ax[1].twiny()
## Plot 1 --------------------
## Setting tick locations on twinned log X axis fails
sizeax0.set_xticks(ticks=tick_locations,
labels=["a", "b", "c", "d"])
sizeax0.set(xlim=(xlims[0], xlims[1]),
xscale="log",
title="Top should look like the bottom axis")
## Setting tick locations on original log X axis works
ax[0].set(xscale="log",
xlim=(xlims[0], xlims[1]))
ax[0].grid(which="major")
ax[0].set_xticks(ticks=tick_locations,
labels=["a", "b", "c", "d"])
## Removing the minor ticks between custom tick locations
ax[0].get_xaxis().set_tick_params(which='minor', size=0, width=0)
## Plot 2 --------------------
## Setting tick locations on twinned non-log X axis works
sizeax1.set_xticks(ticks=[3.9, 63, 250, 500],
labels=["a", "b", "c", "d"])
sizeax1.set(xlim=(xlims[0], xlims[1]),
title="Ticks aren't log-spaced")
ax[1].set(xscale="log",
xlim=(xlims[0], xlims[1]),
xlabel="Bottom axis should look like this")
ax[1].grid(which="major")
plt.tight_layout()
plt.show()
</code></pre>
<p><a href="https://i.sstatic.net/JvdLP.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/JvdLP.png" alt="enter image description here" /></a></p>
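<p>For what it's worth, a minimal sketch of the usual workaround: set the log scale on the twinned axis <em>first</em>, then set the fixed ticks. Changing the scale resets the axis locators, which discards ticks that were set beforehand. (A standalone example, not the exact figure above.)</p>

```python
import matplotlib
matplotlib.use("Agg")  # non-interactive backend so this runs headless
import matplotlib.pyplot as plt

tick_locations = [3.9, 63, 250, 500]

fig, ax = plt.subplots()
twin = ax.twiny()

twin.set_xscale("log")  # set the scale first...
twin.set_xlim(10**-1, 2000)
twin.set_xticks(tick_locations, labels=["a", "b", "c", "d"])  # ...then the ticks
twin.minorticks_off()   # drop the log minor ticks between the custom locations

ax.set_xscale("log")
ax.set_xlim(10**-1, 2000)
```

<p>With the ordering reversed this way, the custom ticks survive on the twinned log axis instead of being replaced by the default log locator.</p>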
|
<python><matplotlib><twiny>
|
2023-06-21 03:07:42
| 1
| 393
|
astroChance
|
76,519,605
| 2,897,115
|
install python version using ubuntu 22.04 image
|
<p>I was using the official Python images to get a specific Python version.</p>
<p>But due to some other requirements, I want to use the Ubuntu 22.04 image.</p>
<p>How do I install a specific Python version in the Ubuntu image?</p>
<pre><code>FROM ubuntu:22.04
# how to install specific python version as we get from official image
#install python 3.10.10
</code></pre>
|
<python><docker><ubuntu>
|
2023-06-21 02:46:48
| 1
| 12,066
|
Santhosh
|
76,519,575
| 8,968,910
|
Python: replace values when column name contains specific strings
|
<p>First I want to find a column whose name contains 'year/month/day', then replace its values. I was able to do the replacement, but when I assign the result back to the column whose name contains 'year/month/day', an error occurs.</p>
<pre><code># Wrong code:
df[df.columns[df.columns.str.contains('year/month/day')]].squeeze() = (
    df[df.columns[df.columns.str.contains('year/month/day')]]
    .squeeze().str.replace('year','-').str.replace('month','-').str.replace('day','')
)
print(df)

# Output:
# SyntaxError: cannot assign to function call here. Maybe you meant '==' instead of '='?

# Code without problems, using the real column name:
df['birthday_year/month/day'] = (
    df[df.columns[df.columns.str.contains('year/month/day')]]
    .squeeze().str.replace('year','-').str.replace('month','-').str.replace('day','')
)
</code></pre>
<p>I only want to assign it back to the column name that contains 'year/month/day', but not the 'birthday_year/month/day'. Any suggestion?</p>
<p>df:</p>
<pre><code> birthday_year/month/day name
2023year5month2day Jack
2019year6month29day Lauren
</code></pre>
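<p>For reference, a sketch of one way around the error: the result of <code>.squeeze()</code> is a function call and therefore not a valid assignment target, so look up the matching column labels first and assign through normal indexing instead.</p>

```python
import pandas as pd

df = pd.DataFrame({"birthday_year/month/day": ["2023year5month2day", "2019year6month29day"],
                   "name": ["Jack", "Lauren"]})

# find the matching label(s) first, then assign through normal indexing
cols = df.columns[df.columns.str.contains("year/month/day")]
for col in cols:
    df[col] = (df[col].str.replace("year", "-")
                      .str.replace("month", "-")
                      .str.replace("day", ""))
```

<p>This works for any column whose name contains the substring, without hard-coding the full column name.</p>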
|
<python><replace>
|
2023-06-21 02:37:17
| 1
| 699
|
Lara19
|
76,519,522
| 1,889,140
|
ThreadpoolController causing a Flask server (via passenger_wsgi) to fail to load while importing Numpy
|
<p>I am playing around with a Flask-based webpage. If it matters, it's hosted on A2 Hosting shared hosting server. I have a basic working flask page that accepts user input and a file upload via a form. I also have a simple Python script that simplifies an image to a user-defined number of colors. I had errors running the image manipulation script because it uses KMeans from sklearn, and thread creation was failing. I solved that problem by forcing it to use a single OpenMP thread as follows:</p>
<pre><code>from threadpoolctl import ThreadpoolController, threadpool_limits
controller = ThreadpoolController()
@controller.wrap(limits=1, user_api='openmp')
def simplify_image():
etc...
</code></pre>
<p>With that fix in place, I can now successfully run the image simplification utility from the command line on the server that hosts the website. However, when I now restart the Python web app, which imports the image manipulation utility, it fails to load. Log output is pasted in at the bottom. To be clear, it loaded fine with the utility as an import before I limited that function to a single thread, and now it fails. Looking at the logs, it gets as far as importing Numpy before it fails to create a bunch of OpenBLAS threads and ends with a keyboard interrupt. It seems obvious that this is some side effect of my fix, but I don't understand why it would affect the import of a module when I've restricted the thread limit to only one function, and the script isn't even running at import time.</p>
<p>So to summarize,</p>
<ul>
<li>The Flask portion works as expected</li>
<li>The image script works fine on its own when limited to 1 OpenMP thread</li>
<li>I put them together and it fails, evidently because it can't create OpenBLAS threads when importing Numpy</li>
</ul>
<p>Here is the Flask code...</p>
<pre><code>import os
from flask import Flask, request, render_template
import simplify_image_to_approved_colors
import logging
logging.basicConfig(filename ='./flask_app.log', level = logging.ERROR)
logging.error("Started flask server.")
UPLOAD_FOLDER = 'uploads'
ALLOWED_EXTENSIONS = set(['png', 'jpg', 'jpeg', 'gif'])
project_root = os.path.dirname(os.path.realpath('__file__'))
template_path = os.path.join(project_root, 'templates')
static_path = os.path.join(project_root, 'static')
upload_path = os.path.join(static_path, UPLOAD_FOLDER)
simplified_image_path = os.path.join(upload_path, 'simplified')
app = Flask(__name__, template_folder=template_path, static_folder=static_path)
app.config['UPLOAD_FOLDER'] = upload_path
@app.route('/', methods=['GET', 'SEND', 'POST'])
def index():
# Handle uploads
if request.method == 'POST':
# Save original image
if 'file1' not in request.files:
return 'there is no file1 in form!'
file1 = request.files['file1']
path = os.path.join(app.config['UPLOAD_FOLDER'], file1.filename)
file1.save(path)
# Run pattern generator
is_circle = None
if request.form["is_circle"] == "yes":
is_circle = True
else:
is_circle = False
image_path = path
num_colors = int(request.form["max_colors"])
target_width = int(request.form["width"])
line_spacing = int(request.form["count"])
pattern_location = simplify_image_to_approved_colors.simplify_image(template_path, simplified_image_path, image_path, num_colors, target_width=target_width, line_spacing=line_spacing, is_circle=is_circle)
return f"{pattern_location} \n {request.form}"
# Default homepage
return render_template("index.html")
@app.route('/configure_pattern', methods=['GET', 'POST'])
def configure_pattern(pitch, width, is_circle):
return render_template("configure_pattern.html", pitch=pitch, width=width, is_circle=is_circle)
@app.route('/results')
def results(pattern_filename):
return render_template("results.html", link=f"https://www.thestarvingmartian.com/chromacross/patterns/{pattern_filename}")
application = app
</code></pre>
<p>Though it's probably not necessary, here are the imports and the function that uses Numpy.</p>
<pre><code>import numpy as np
from sklearn.cluster import KMeans
from sklearn.neighbors import NearestNeighbors
from PIL import Image
from reportlab.pdfgen import canvas
from reportlab.lib.pagesizes import letter
from reportlab.lib.colors import HexColor
import dmc_colors_list
import os
import logging
from threadpoolctl import ThreadpoolController, threadpool_limits
logging.basicConfig(
filename = 'app.log',
level = logging.WARNING,
format = '%(levelname)s:%(asctime)s:%(message)s')
controller = ThreadpoolController()
@controller.wrap(limits=1, user_api='openmp')
def simplify_image(pattern_path, simplified_image_path, image_path, num_colors, valid_colors=np.array([color[2] for color in dmc_colors_list.dmc_colors]), target_width=None, line_spacing=14, is_circle=False, verbose=False):
global image_name
image_name = image_path.split("/")[-1].split(".")[0]
target_width = target_width * line_spacing
# Load the image
if verbose: print("Opening the image...")
image = Image.open(image_path)
# Scale the image if target_width is specified
if verbose: print("Preparing to simplify the image...")
if target_width is not None:
aspect_ratio = image.width / image.height
target_height = int(target_width / aspect_ratio)
image = image.resize((target_width, target_height))
# Convert the image to a numpy array
if verbose: print("Converting image to a numpy array and reshaping...")
image_array = np.array(image)
# Reshape the image array to a 2D array of pixels
pixels = image_array.reshape(-1, 3)
# Perform k-means clustering
if verbose: print("Doing k-means magic...")
kmeans = KMeans(n_clusters=num_colors, n_init="auto")
kmeans.fit(pixels)
# Get the cluster labels for each pixel
labels = kmeans.predict(pixels)
# Get the unique colors (centroids) assigned by k-means
unique_colors = kmeans.cluster_centers_
# Create a Nearest Neighbors model
if verbose: print("Doing nearest neighbors stuff to create new pixels...")
nn = NearestNeighbors(n_neighbors=1)
nn.fit(valid_colors)
# Find the nearest match for each unique color
if verbose: print("Finding nearest matches...")
_, indices = nn.kneighbors(unique_colors)
nearest_colors = valid_colors[indices.flatten()]
# Replace each pixel with the corresponding nearest matching color
if verbose: print("Replacing pixels with nearest matches...")
new_pixels = nearest_colors[labels]
# Reshape the new pixel array to match the original image shape
if verbose: print("Reshaping new pixels...")
new_image_array = new_pixels.reshape(image_array.shape)
# Create a new PIL image from the simplified pixel array
if verbose: print("Creating new image with PIL...")
new_image = Image.fromarray(np.uint8(new_image_array))
# Save the simplified image as JPEG
if verbose: print("Saving new image...")
new_path = os.path.join(simplified_image_path, f"simplified_{image_name}.jpg")
new_image.save(new_path)
</code></pre>
<p>Log output</p>
<pre><code>App 19383 output: OpenBLAS blas_thread_init: pthread_create failed for thread 9 of 32: Resource temporarily unavailable
App 19383 output: OpenBLAS blas_thread_init: RLIMIT_NPROC -1 current, -1 max
</code></pre>
<p><em>Next 21 identical errors omitted...</em></p>
<pre><code>App 19383 output: OpenBLAS blas_thread_init: pthread_create failed for thread 31 of 32: Resource temporarily unavailable
App 19383 output: OpenBLAS blas_thread_init: RLIMIT_NPROC -1 current, -1 max
App 19383 output: Traceback (most recent call last):
App 19383 output: File "/opt/cpanel/ea-ruby27/root/usr/share/passenger/helper-scripts/wsgi-loader.py", line 369, in <module>
App 19383 output: app_module = load_app()
App 19383 output: File "/opt/cpanel/ea-ruby27/root/usr/share/passenger/helper-scripts/wsgi-loader.py", line 76, in load_app
App 19383 output: return imp.load_source('passenger_wsgi', startup_file)
App 19383 output: File "/opt/alt/python310/lib64/python3.10/imp.py", line 172, in load_source
App 19383 output: module = _load(spec)
App 19383 output: File "<frozen importlib._bootstrap>", line 719, in _load
App 19383 output: File "<frozen importlib._bootstrap>", line 688, in _load_unlocked
App 19383 output: File "<frozen importlib._bootstrap_external>", line 883, in exec_module
App 19383 output: File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed
App 19383 output: File "/home/thestarv/chromacross/passenger_wsgi.py", line 3, in <module>
App 19383 output: import simplify_image_to_approved_colors
App 19383 output: File "/home/thestarv/chromacross/simplify_image_to_approved_colors.py", line 2, in <module>
App 19383 output: import numpy as np
App 19383 output: File "/home/thestarv/virtualenv/chromacross/3.10/lib64/python3.10/site-packages/numpy/__init__.py", line 139, in <module>
App 19383 output: from . import core
App 19383 output: File "/home/thestarv/virtualenv/chromacross/3.10/lib64/python3.10/site-packages/numpy/core/__init__.py", line 23, in <module>
App 19383 output: from . import multiarray
App 19383 output: File "/home/thestarv/virtualenv/chromacross/3.10/lib64/python3.10/site-packages/numpy/core/multiarray.py", line 10, in <module>
App 19383 output: from . import overrides
App 19383 output: File "/home/thestarv/virtualenv/chromacross/3.10/lib64/python3.10/site-packages/numpy/core/overrides.py", line 8, in <module>
App 19383 output: from numpy.core._multiarray_umath import (
App 19383 output: File "<frozen importlib._bootstrap>", line 216, in _lock_unlock_module
App 19383 output: KeyboardInterrupt
</code></pre>
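<p>A common workaround (a sketch, under the assumption that the thread cap has to take effect before NumPy is ever imported): OpenBLAS sizes its thread pool when NumPy is first imported, so a <code>threadpoolctl</code> wrapper around one function is too late for the WSGI loader. Setting the environment variables at the very top of <code>passenger_wsgi.py</code>, before anything that pulls in NumPy, avoids the <code>pthread_create</code> failures on process-limited shared hosting.</p>

```python
# Must run before numpy (or anything that imports numpy) is first imported
import os
os.environ["OPENBLAS_NUM_THREADS"] = "1"
os.environ["OMP_NUM_THREADS"] = "1"

import numpy as np  # OpenBLAS now initializes with a single thread

arr = np.arange(3)  # any numpy work proceeds without spawning a thread pool
```

<p>Note that if NumPy was already imported earlier in the same process, the variables have no effect, which is why they belong at the entry point.</p>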
|
<python><multithreading><numpy><flask><passenger>
|
2023-06-21 02:22:10
| 0
| 401
|
Troy D
|
76,519,481
| 132,438
|
How do I convert a Ruby hash string into a JSON string using Python?
|
<p>I need to convert a string like:</p>
<pre><code>{:old_id=>{:id=>"12345", :create_date=>Mon, 15 May 2023, :amount=>50.0}, :new_id=>{:id=>nil, :create_date=>"2023-05-15", :amount=>"50.00"}}
</code></pre>
<p>into a JSON string in Python.</p>
<p>This seems to be a Ruby Hash formatted object, and I don't see a straightforward way to parse it in Python.</p>
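<p>There is no standard-library parser for Ruby hash literals, but for simple cases a rough regex translation to JSON can work. A sketch (it handles symbol keys, <code>nil</code>, and quoted values only; the unquoted <code>Mon, 15 May 2023</code> date in the example above would still need special handling before this would parse):</p>

```python
import json
import re


def ruby_hash_to_json(text):
    # :key=>  ->  "key":
    text = re.sub(r":(\w+)\s*=>", r'"\1":', text)
    # any remaining => between already-quoted keys and values
    text = text.replace("=>", ":")
    # bare nil -> null (word boundary so words containing "nil" are untouched)
    text = re.sub(r"\bnil\b", "null", text)
    return text
```

<p>The result can then go straight into <code>json.loads</code>, provided every value is JSON-compatible after the substitutions.</p>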
|
<python><ruby><parsing>
|
2023-06-21 02:06:27
| 1
| 59,753
|
Felipe Hoffa
|
76,519,457
| 2,287,122
|
plt.boxplot(df[col]) fails because 8th column is text, need to exclude
|
<h1>Detect outliers with boxplots and histograms</h1>
<pre><code>plt.figure(figsize=(15, 30))
i = 0
for col in feature_vars:
    i += 1
    plt.subplot(9, 4, i)
    plt.boxplot(df[col])
    plt.title('{}'.format(col), fontsize=9)
    plt.hist(df[col])
plt.suptitle('Detect Outliers', fontsize=16, verticalalignment='top', horizontalalignment='center',
             fontweight='bold')
plt.savefig('charts/Detect_Outlier_Plots.png', dpi=None, facecolor='w', edgecolor='g', orientation='portrait',
            format=None, transparent=False, bbox_inches=None, pad_inches=0.0, metadata=None)
plt.show()
</code></pre>
|
<python><plotly>
|
2023-06-21 01:59:54
| 1
| 637
|
Scott
|
76,519,007
| 13,916,049
|
Match pandas index to any row values in another pandas dataframe
|
<p>I want to retrieve the rows of <code>mrna_kirp</code> where its index matches any of the values anywhere in the <code>gmt_c4</code> dataframe.</p>
<pre><code>mrna_subset = mrna_kirp.loc[mrna_kirp.index.isin(gmt_c4)]
</code></pre>
<p>As per the <a href="https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.isin.html" rel="nofollow noreferrer">API</a>, my code only returns matches where both the index and column labels match. But I want to retrieve all possible matches.</p>
<p>Input:</p>
<p><code>gmt_c4.iloc[0:5,0:5]</code></p>
<pre><code>pd.DataFrame({'MORF_ATRX': {('MORF_BCL2',
'http://www.gsea-msigdb.org/gsea/msigdb/human/geneset/MORF_BCL2',
'ADCY3',
'SYT5',
'LTBP4',
'A1BG',
'AQP5',
'AQP7'): 'TMEM11',
('MORF_BNIP1',
'http://www.gsea-msigdb.org/gsea/msigdb/human/geneset/MORF_BNIP1',
'PVR',
'ADCY3',
'BMP10',
'NRTN',
'S100A5',
'IL16'): 'SYT5',
('MORF_BCL2L11',
'http://www.gsea-msigdb.org/gsea/msigdb/human/geneset/MORF_BCL2L11',
'LORICRIN',
'PVR',
'A2BP1',
'FGF18',
'BMP10',
'F2RL3'): 'NRTN',
('MORF_CCNF',
'http://www.gsea-msigdb.org/gsea/msigdb/human/geneset/MORF_CCNF',
'A1CF',
'EIF5B',
'TMEM11',
'EEF1AKMT3',
'PEX3',
'HMGN4'): 'GTSE1',
('MORF_ERCC2',
'http://www.gsea-msigdb.org/gsea/msigdb/human/geneset/MORF_ERCC2',
'SEC31A',
'BTD',
'GRIK5',
'EIF5B',
'TMEM11',
'BPHL'): 'HNRNPL'},
'http://www.gsea-msigdb.org/gsea/msigdb/human/geneset/MORF_ATRX': {('MORF_BCL2',
'http://www.gsea-msigdb.org/gsea/msigdb/human/geneset/MORF_BCL2',
'ADCY3',
'SYT5',
'LTBP4',
'UTRN',
'AQP5',
'AQP7'): 'KIFC3',
('MORF_BNIP1',
'http://www.gsea-msigdb.org/gsea/msigdb/human/geneset/MORF_BNIP1',
'PVR',
'ADCY3',
'BMP10',
'NRTN',
'S100A5',
'IL16'): 'LTBP4',
('MORF_BCL2L11',
'http://www.gsea-msigdb.org/gsea/msigdb/human/geneset/MORF_BCL2L11',
'LORICRIN',
'PVR',
'KLRC4',
'FGF18',
'BMP10',
'F2RL3'): 'S100A5',
('MORF_CCNF',
'http://www.gsea-msigdb.org/gsea/msigdb/human/geneset/MORF_CCNF',
'BMS1',
'EIF5B',
'TMEM11',
'EEF1AKMT3',
'PEX3',
'HMGN4'): 'HNRNPL',
('MORF_ERCC2',
'http://www.gsea-msigdb.org/gsea/msigdb/human/geneset/MORF_ERCC2',
'SEC31A',
'BTD',
'GRIK5',
'EIF5B',
'TMEM11',
'BPHL'): 'MUTYH'},
'ADCY3': {('MORF_BCL2',
'http://www.gsea-msigdb.org/gsea/msigdb/human/geneset/MORF_BCL2',
'ADCY3',
'SYT5',
'LTBP4',
'UTRN',
'AQP5',
'AQP7'): 'HTR1B',
('MORF_BNIP1',
'http://www.gsea-msigdb.org/gsea/msigdb/human/geneset/MORF_BNIP1',
'PVR',
'ADCY3',
'BMP10',
'NRTN',
'S100A5',
'IL16'): 'FIG4',
('MORF_BCL2L11',
'http://www.gsea-msigdb.org/gsea/msigdb/human/geneset/MORF_BCL2L11',
'LORICRIN',
'PVR',
'KLRC4',
'FGF18',
'BMP10',
'F2RL3'): 'IL16',
('MORF_CCNF',
'http://www.gsea-msigdb.org/gsea/msigdb/human/geneset/MORF_CCNF',
'BMS1',
'EIF5B',
'TMEM11',
'EEF1AKMT3',
'PEX3',
'HMGN4'): 'PLEKHB1',
('MORF_ERCC2',
'http://www.gsea-msigdb.org/gsea/msigdb/human/geneset/MORF_ERCC2',
'SEC31A',
'BTD',
'GRIK5',
'EIF5B',
'TMEM11',
'BPHL'): 'TAF5L'},
'SEC31A': {('MORF_BCL2',
'http://www.gsea-msigdb.org/gsea/msigdb/human/geneset/MORF_BCL2',
'ADCY3',
'SYT5',
'LTBP4',
'UTRN',
'AQP5',
'AQP7'): 'DDX11',
('MORF_BNIP1',
'http://www.gsea-msigdb.org/gsea/msigdb/human/geneset/MORF_BNIP1',
'PVR',
'ADCY3',
'BMP10',
'NRTN',
'S100A5',
'IL16'): 'CYP2D6',
('MORF_BCL2L11',
'http://www.gsea-msigdb.org/gsea/msigdb/human/geneset/MORF_BCL2L11',
'LORICRIN',
'PVR',
'KLRC4',
'FGF18',
'BMP10',
'F2RL3'): 'SLC6A2',
('MORF_CCNF',
'http://www.gsea-msigdb.org/gsea/msigdb/human/geneset/MORF_CCNF',
'BMS1',
'EIF5B',
'TMEM11',
'EEF1AKMT3',
'PEX3',
'HMGN4'): 'PIGF',
('MORF_ERCC2',
'http://www.gsea-msigdb.org/gsea/msigdb/human/geneset/MORF_ERCC2',
'SEC31A',
'BTD',
'GRIK5',
'EIF5B',
'TMEM11',
'BPHL'): 'AGPS'},
'BTD': {('MORF_BCL2',
'http://www.gsea-msigdb.org/gsea/msigdb/human/geneset/MORF_BCL2',
'ADCY3',
'SYT5',
'LTBP4',
'UTRN',
'AQP5',
'AQP7'): 'AGPS',
('MORF_BNIP1',
'http://www.gsea-msigdb.org/gsea/msigdb/human/geneset/MORF_BNIP1',
'PVR',
'ADCY3',
'BMP10',
'NRTN',
'S100A5',
'IL16'): 'GRIK5',
('MORF_BCL2L11',
'http://www.gsea-msigdb.org/gsea/msigdb/human/geneset/MORF_BCL2L11',
'LORICRIN',
'PVR',
'KLRC4',
'FGF18',
'BMP10',
'F2RL3'): 'MASP2',
('MORF_CCNF',
'http://www.gsea-msigdb.org/gsea/msigdb/human/geneset/MORF_CCNF',
'BMS1',
'EIF5B',
'TMEM11',
'EEF1AKMT3',
'PEX3',
'HMGN4'): 'TPP2',
('MORF_ERCC2',
'http://www.gsea-msigdb.org/gsea/msigdb/human/geneset/MORF_ERCC2',
'SEC31A',
'BTD',
'GRIK5',
'EIF5B',
'TMEM11',
'BPHL'): 'SFSWAP'}})
</code></pre>
<p><code>mrna_kirp.iloc[0:4,0:4]</code></p>
<pre><code>pd.DataFrame({'TCGA.2K.A9WE.01': {'A1BG': 391.94,
'A1CF': 8.0,
'A2BP1': 1.0,
'A2LD1': 159.46},
'TCGA.2Z.A9J1.01': {'A1BG': 68.91,
'A1CF': 75.0,
'A2BP1': 0.0,
'A2LD1': 247.06},
'TCGA.2Z.A9J3.01': {'A1BG': 71.9,
'A1CF': 28.0,
'A2BP1': 33.0,
'A2LD1': 516.7},
'TCGA.2Z.A9J5.01': {'A1BG': 325.6,
'A1CF': 47.0,
'A2BP1': 4.0,
'A2LD1': 151.49}})
</code></pre>
<p>Desired output:</p>
<pre><code>pd.DataFrame({'TCGA.2K.A9WE.01': {'A1BG': 391.94,
'A1CF': 8.0,
'A2BP1': 1.0},
'TCGA.2Z.A9J1.01': {'A1BG': 68.91,
'A1CF': 75.0,
'A2BP1': 0.0},
'TCGA.2Z.A9J3.01': {'A1BG': 71.9,
'A1CF': 28.0,
'A2BP1': 33.0},
'TCGA.2Z.A9J5.01': {'A1BG': 325.6,
'A1CF': 47.0,
'A2BP1': 4.0}})
</code></pre>
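<p>One sketch of the "match against values anywhere" step: flatten every cell of <code>gmt_c4</code> into a plain set and test the index against that. (Small illustrative frames below; the real <code>gmt_c4</code> also carries gene names inside its tuple index and column labels, which could be added to the same set.)</p>

```python
import numpy as np
import pandas as pd

gmt = pd.DataFrame({"x": ["A1BG", "A1CF"], "y": ["A2BP1", "ZZZ"]})
mrna = pd.DataFrame({"s1": [1.0, 2.0, 3.0, 4.0]},
                    index=["A1BG", "A1CF", "A2BP1", "A2LD1"])

# every cell value anywhere in gmt, as one flat set
values = set(np.asarray(gmt).ravel())
subset = mrna.loc[mrna.index.isin(values)]
```

<p>Passing a flat collection to <code>Index.isin</code> sidesteps the label-alignment behaviour of <code>DataFrame.isin</code> described in the API link above.</p>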
|
<python><pandas>
|
2023-06-20 23:34:22
| 3
| 1,545
|
Anon
|
76,518,912
| 5,942,100
|
Merge and match using Pandas
|
<p>I would like to Merge and match using Pandas</p>
<p><strong>Data</strong></p>
<p>df1</p>
<pre><code>ID name stat
aa678 TRUE 112
aa678 FALSE 111
bb131 TRUE 122
</code></pre>
<p>df2</p>
<pre><code>ID2 Box
aa678 santa fe
cc121 delux
</code></pre>
<p><strong>Desired</strong></p>
<pre><code>ID name stat Box
aa678 TRUE 112 santa fe
aa678 FALSE 111 santa fe
bb131 TRUE 122
</code></pre>
<p><strong>Doing</strong></p>
<pre><code>df1.join(df2.set_index('ID'), on = 'ID2')
</code></pre>
<p>However, this is not giving the desired result and not showing the values that do not have matches.</p>
<p>Any suggestion is appreciated.</p>
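<p>A sketch of the usual fix: an explicit left merge keeps every <code>df1</code> row, including the ones with no match in <code>df2</code>.</p>

```python
import pandas as pd

df1 = pd.DataFrame({"ID": ["aa678", "aa678", "bb131"],
                    "name": ["TRUE", "FALSE", "TRUE"],
                    "stat": [112, 111, 122]})
df2 = pd.DataFrame({"ID2": ["aa678", "cc121"],
                    "Box": ["santa fe", "delux"]})

# how="left" keeps all df1 rows; unmatched IDs get NaN in Box
out = df1.merge(df2, left_on="ID", right_on="ID2", how="left").drop(columns="ID2")
```

<p>With <code>join</code>, the equivalent would be setting the index on the correct key column of each frame; the mixed-up <code>'ID'</code>/<code>'ID2'</code> arguments in the attempt above are what break the match.</p>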
|
<python><pandas><numpy>
|
2023-06-20 23:07:47
| 1
| 4,428
|
Lynn
|
76,518,909
| 19,157,137
|
Determining Volumes for Poetry and Sphinx files in Python from Dockerfile
|
<p>I'm working on containerizing my project using Docker, and I want to create volumes for certain files to ensure they are persistent and can be easily accessed. Specifically, I need to determine which files should be created as volumes in my Dockerfile.</p>
<p>For my project managed by Poetry, I want to create volumes for the following files: <code>poetry.lock</code>, <code>pyproject.toml</code>, the <code>/tests</code> directory (used for pytest), and the <code>/src</code> directory (containing my Python files).</p>
<p>Additionally, I'm using Sphinx for documentation, and I would like to create volumes for the following Sphinx files: <code>conf.py</code>, <code>index.rst</code>, and <code>index.html</code>.</p>
<p>My directory structure looks like this:</p>
<pre><code>project
├── poetry.lock
├── pyproject.toml
├── tests/
│ ├── test_file1.py
│ ├── test_file2.py
│ └── ...
├── src/
│ ├── main.py
│ ├── module1.py
│ ├── module2.py
│ └── ...
└── docs/
├── conf.py
├── index.rst
└── index.html
</code></pre>
<p>What would be the best approach to identify these files and include them as volumes in my Dockerfile? Any guidance or example syntax would be greatly appreciated. Thank you!</p>
<p>This question is aimed at seeking guidance on identifying the files that should be created as volumes in the Dockerfile, specifically for Poetry and Sphinx projects, and requesting examples or advice on how to include them as volumes in the Dockerfile.</p>
|
<python><docker><dockerfile><python-sphinx><python-poetry>
|
2023-06-20 23:07:21
| 1
| 363
|
Bosser445
|
76,518,869
| 2,379,009
|
tweepy.errors.Forbidden: 403 Forbidden - Issue with Twitter API authentication using Tweepy
|
<p>I'm encountering</p>
<pre><code>tweepy.errors.Forbidden: 403 Forbidden
When authenticating requests to the Twitter API v2 endpoints, you must use keys and tokens from a Twitter developer App that is attached to a Project. You can create a project via the developer portal.
</code></pre>
<p>while trying to run the following code that fetches a user's post history using the Twitter API and Tweepy:</p>
<pre><code> client = tweepy.Client(bearer_token=bearer_token)
tweets = client.search_recent_tweets(query=f'from:{user_handle}')
</code></pre>
<p>My app does seem to be connected to a project (see the image below).
I have come across some links suggesting it might be a Twitter issue related to API authentication. However, I would like to confirm whether this is indeed the case and whether there are any possible solutions or workarounds for this problem.</p>
<p>Links indicating it might be a Twitter issue:</p>
<p><a href="https://github.com/twitterdev/Twitter-API-v2-sample-code/issues/58" rel="noreferrer">https://github.com/twitterdev/Twitter-API-v2-sample-code/issues/58</a>
<a href="https://twittercommunity.com/t/when-authenticating-requests-to-the-twitter-api-v2-endpoints-you-must-use-keys-and-tokens-from-a-twitter-developer-app-that-is-attached-to-a-project-you-can-create-a-project-via-the-developer-portal/189699" rel="noreferrer">https://twittercommunity.com/t/when-authenticating-requests-to-the-twitter-api-v2-endpoints-you-must-use-keys-and-tokens-from-a-twitter-developer-app-that-is-attached-to-a-project-you-can-create-a-project-via-the-developer-portal/189699</a></p>
<p>I would greatly appreciate any insights, explanations, or potential solutions to resolve this issue.</p>
<p><a href="https://i.sstatic.net/aE0A1.png" rel="noreferrer"><img src="https://i.sstatic.net/aE0A1.png" alt="enter image description here" /></a></p>
|
<python><twitter><tweepy>
|
2023-06-20 22:59:17
| 1
| 2,173
|
DankMasterDan
|
76,518,671
| 12,011,020
|
Polars deselect / filter columns with only missing values
|
<p>I want to apply a lambda function to all <code>pl.Date</code> columns that exchanges the date <code>'0001-01-01'</code> for null.</p>
<pre class="lang-py prettyprint-override"><code>replace_func = lambda date: None if date == datetime.date(1,1,1) else date
df.select(pl.col(pl.Date).map_elements(replace_func))
</code></pre>
<p>This works fine for columns in which at least one date/value is not missing (e.g. <code>'A'</code>), but it fails on columns containing only null/None values. These all-null columns cannot simply be dropped in general, because they are needed and filled later on. I am struggling to find a way to filter them out before applying the lambda.</p>
<p><strong>Example Data</strong></p>
<pre class="lang-py prettyprint-override"><code>import polars as pl
import datetime
replace_func = lambda date: None if date == datetime.date(1,1,1) else date
df = pl.DataFrame({'A':[datetime.date(2023,3,3),datetime.date(1,1,1),None],
'B':[1,1,3],
'C':[None,None,None]
})
</code></pre>
<pre><code>shape: (3, 3)
┌────────────┬─────┬──────┐
│ A ┆ B ┆ C │
│ --- ┆ --- ┆ --- │
│ date ┆ i64 ┆ null │
╞════════════╪═════╪══════╡
│ 2023-03-03 ┆ 1 ┆ null │
│ 0001-01-01 ┆ 1 ┆ null │
│ null ┆ 3 ┆ null │
└────────────┴─────┴──────┘
</code></pre>
<p>I've tried this (below) but can't figure it out.</p>
<pre class="lang-py prettyprint-override"><code>df.select(pl.col(pl.Date).map_elements(replace_func))
</code></pre>
<p>This results in <code>AttributeError: 'DataFrame' object has no attribute '_pyexpr'</code></p>
|
<python><dataframe><python-polars>
|
2023-06-20 22:07:39
| 1
| 491
|
SysRIP
|
76,518,572
| 13,682,080
|
Python typing: passing int to subclass of int
|
<p>Here is a minimal demonstration of my problem:</p>
<pre><code>class PositiveInt(int):
def __new__(cls, number: int):
if number <= 0:
raise ValueError("number must be positive")
return super().__new__(cls, number)
class Order:
def __init__(self, price: PositiveInt) -> None:
self.price = price
Order(1) # Argument of type "Literal[1]" cannot be assigned to parameter "price" of type "PositiveInt" in function "__init__"
# "Literal[1]" is incompatible with "PositiveInt"PylancereportGeneralTypeIssues
</code></pre>
<p>Why does Pylance throw this error if <code>1</code> is an <code>int</code> and <code>PositiveInt</code> inherits from <code>int</code>? How should I fix it? Or what is wrong with my approach, and how can I make it better?</p>
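<p>For context, a sketch of why the checker complains and the minimal change that satisfies it: the subclass relationship runs the wrong way for this use. Every <code>PositiveInt</code> is an <code>int</code>, but not every <code>int</code> is a <code>PositiveInt</code>, so a bare literal must be wrapped explicitly:</p>

```python
class PositiveInt(int):
    def __new__(cls, number: int) -> "PositiveInt":
        if number <= 0:
            raise ValueError("number must be positive")
        return super().__new__(cls, number)


class Order:
    def __init__(self, price: PositiveInt) -> None:
        self.price = price


# wrapping the literal performs the runtime check and satisfies the type checker
order = Order(PositiveInt(1))
```

<p>The wrapping also runs the validation at the call site, which is arguably the point of having the subclass in the first place.</p>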
|
<python><inheritance><python-typing>
|
2023-06-20 21:44:57
| 1
| 542
|
eightlay
|
76,518,458
| 10,317,376
|
How to normalize each tensor in a batch separately without in place operations?
|
<p>I have a series of images where each pixel should be in the range [0, 1]. I am running a deep learning model on these images, and I want to know how to normalize each image in the batch so that every image is in the range [0, 1] after the transformation.</p>
<p>This is what I am trying to do</p>
<pre class="lang-py prettyprint-override"><code>for idx in range(x.shape[0]):
x[idx] = x[idx] - x[idx].min()
x[idx] = x[idx] / x[idx].max()
</code></pre>
<p>However this leaves me with this error:</p>
<pre><code>Exception has occurred: RuntimeError
one of the variables needed for gradient computation has been modified by an inplace operation: [torch.FloatTensor [2, 128, 128]], which is output 0 of AsStridedBackward0, is at version 10; expected version 9 instead. Hint: the backtrace further above shows the operation that failed to compute its gradient. The variable in question was changed in there or anywhere later. Good luck!
</code></pre>
<p>What is the solution to this?</p>
<p>Some other things to consider: I can't use batch normalization because each image in the batch must be normalized separately. I can't use layer normalization because this is a convolutional network and I don't know the image size ahead of time. I cannot use instance normalization because I have multiple channels and the relative size between channels is important.</p>
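<p>A sketch of an out-of-place alternative (assuming PyTorch's <code>amin</code>/<code>amax</code>, available in recent versions): compute per-image minima and maxima over the non-batch dimensions with <code>keepdim=True</code> and build a new tensor, instead of writing back into <code>x</code> slice by slice.</p>

```python
import torch

x = torch.rand(4, 2, 8, 8, requires_grad=True)

# per-image min/max over all non-batch dimensions, kept for broadcasting
mins = x.amin(dim=(1, 2, 3), keepdim=True)
maxs = x.amax(dim=(1, 2, 3), keepdim=True)

# a new tensor; x itself is never modified, so autograd stays happy
y = (x - mins) / (maxs - mins)

y.sum().backward()  # gradient flows without the in-place error
```

<p>Because nothing mutates <code>x</code>, the saved tensors that autograd needs for the backward pass are left intact, which is exactly what the in-place loop was breaking.</p>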
|
<python><pytorch><normalization>
|
2023-06-20 21:20:40
| 1
| 719
|
Shep Bryan
|
76,518,457
| 10,380,766
|
Replicating HdrHistogram's HistogramPlotter Output
|
<p>I am trying to recreate the <a href="https://hdrhistogram.github.io/HdrHistogram/plotFiles.html" rel="nofollow noreferrer">HistogramPlotter</a> output utilizing pandas and matplotlib.</p>
<p>The output from the provided link looks something like:
<a href="https://i.sstatic.net/OlBAs.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/OlBAs.png" alt="Output from HistogramPlotter" /></a></p>
<p>The output from my current script is considerably more "squished" at the tail:
<a href="https://i.sstatic.net/u6KAm.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/u6KAm.jpg" alt="Output from Python Script" /></a></p>
<p>I'm wondering how I could make this with similar dimensions?</p>
<pre class="lang-py prettyprint-override"><code>import pandas as pd
import matplotlib.pyplot as plt
import numpy as np
df = pd.read_csv('consumer.hgrm', skiprows=4, skipfooter=3, engine='python', delim_whitespace=True)
df.columns = ['Value', 'Percentile', 'TotalCount', 'LastColumn']
# Convert fractional percentiles to a 0-100 percentage scale
df['Percentile'] = df['Percentile'] * 100
plt.figure(figsize=[10,8])
# Create an evenly-spaced scale (0-100) for plotting
df['Plot_X'] = np.linspace(0, 100, len(df))
plt.plot(df['Plot_X'], df['Value'])
# Custom x-ticks
ticks_labels = [0, 90, 99, 99.9, 99.99, 99.999, 99.9999, 100]
ticks_positions = np.interp(ticks_labels, df['Percentile'], df['Plot_X'])
plt.xticks(ticks_positions, labels=[str(tick) for tick in ticks_labels])
plt.xlabel('Percentile')
plt.ylabel('Latency')
plt.title('Latency by Percentile')
plt.grid(True)
plt.show()
</code></pre>
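<p>For reference, the original HistogramPlotter gets its characteristic shape by plotting latency against <code>1/(1 - percentile)</code> on a logarithmic x-axis (the fourth column of the <code>.hgrm</code> file), which is what stretches the tail out instead of squishing it. A rough sketch of that transform, using toy stand-in data rather than the parsed file, might look like:</p>

```python
import io
import matplotlib
matplotlib.use("Agg")  # headless backend so the script also runs without a display
import matplotlib.pyplot as plt
import pandas as pd

# Toy stand-in for the parsed .hgrm columns (Percentile as a 0-1 fraction).
# When using the real file, drop the final percentile == 1.0 row first to
# avoid a division by zero.
df = pd.DataFrame({
    "Percentile": [0.0, 0.5, 0.9, 0.99, 0.999, 0.9999],
    "Value": [0.1, 0.2, 0.3, 0.5, 1.0, 2.0],
})

# HistogramPlotter-style x-axis: 1/(1 - percentile) on a log scale.
df["Plot_X"] = 1.0 / (1.0 - df["Percentile"])

fig, ax = plt.subplots(figsize=(10, 8))
ax.plot(df["Plot_X"], df["Value"])
ax.set_xscale("log")
ticks = [0.0, 0.9, 0.99, 0.999, 0.9999]
ax.set_xticks([1.0 / (1.0 - t) for t in ticks])
ax.set_xticklabels([f"{t * 100:g}" for t in ticks])
ax.set_xlabel("Percentile")
ax.set_ylabel("Latency")
ax.grid(True)
fig.savefig(io.BytesIO(), format="png")  # render without opening a window
```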
<p>This is the <code>consumer.hgrm</code> In case anyone wants to try it for themselves:</p>
<pre><code>#[Overall percentile distribution between 0.000 and <Infinite> seconds (relative to StartTime)]
#[StartTime: 1686845426.968 (seconds since epoch), Thu Jun 15 09:10:26 PDT 2023]
Value Percentile TotalCount 1/(1-Percentile)
0.00000 0.000000000000 709 1.00
0.00001 0.100000000000 2476441 1.11
0.00001 0.200000000000 4356006 1.25
0.00001 0.300000000000 8567334 1.43
0.00001 0.400000000000 10722387 1.67
0.00001 0.500000000000 12908630 2.00
0.00001 0.550000000000 12908630 2.22
0.00002 0.600000000000 15124819 2.50
0.00002 0.650000000000 15124819 2.86
0.00002 0.700000000000 17277174 3.33
0.00002 0.750000000000 17277174 4.00
0.00002 0.775000000000 17277174 4.44
0.00002 0.800000000000 19201856 5.00
0.00002 0.825000000000 19201856 5.71
0.00002 0.850000000000 19201856 6.67
0.00002 0.875000000000 19201856 8.00
0.00002 0.887500000000 20598743 8.89
0.00002 0.900000000000 20598743 10.00
0.00002 0.912500000000 20598743 11.43
0.00002 0.925000000000 20598743 13.33
0.00002 0.937500000000 20598743 16.00
0.00002 0.943750000000 20598743 17.78
0.00002 0.950000000000 21368551 20.00
0.00002 0.956250000000 21368551 22.86
0.00002 0.962500000000 21368551 26.67
0.00002 0.968750000000 21368551 32.00
0.00002 0.971875000000 21368551 35.56
0.00002 0.975000000000 21368551 40.00
0.00002 0.978125000000 21368551 45.71
0.00002 0.981250000000 21368551 53.33
0.00002 0.984375000000 21659778 64.00
0.00002 0.985937500000 21659778 71.11
0.00002 0.987500000000 21659778 80.00
0.00002 0.989062500000 21659778 91.43
0.00002 0.990625000000 21659778 106.67
0.00002 0.992187500000 21659778 128.00
0.00002 0.992968750000 21659778 142.22
0.00002 0.993750000000 21659778 160.00
0.00002 0.994531250000 21659778 182.86
0.00002 0.995312500000 21728114 213.33
0.00002 0.996093750000 21728114 256.00
0.00002 0.996484375000 21728114 284.44
0.00002 0.996875000000 21728114 320.00
0.00002 0.997265625000 21728114 365.71
0.00002 0.997656250000 21728114 426.67
0.00002 0.998046875000 21728114 512.00
0.00002 0.998242187500 21740997 568.89
0.00002 0.998437500000 21740997 640.00
0.00002 0.998632812500 21740997 731.43
0.00002 0.998828125000 21744511 853.33
0.00003 0.999023437500 21748087 1024.00
0.00004 0.999121093750 21750177 1137.78
2.30725 0.999218750000 21753164 1280.00
2.30726 0.999316406250 21757285 1462.86
2.30726 0.999414062500 21757285 1706.67
2.30818 0.999511718750 21764315 2048.00
2.30818 0.999560546875 21764315 2275.56
2.30818 0.999609375000 21764315 2560.00
2.30818 0.999658203125 21764315 2925.71
2.30818 0.999707031250 21764315 3413.33
2.30818 0.999755859375 21764315 4096.00
2.30819 0.999780273438 21768288 4551.11
2.30819 0.999804687500 21768288 5120.00
2.30819 0.999829101563 21768288 5851.43
2.30819 0.999853515625 21768288 6826.67
2.30819 0.999877929688 21768288 8192.00
2.30819 0.999890136719 21768288 9102.22
2.30819 0.999902343750 21768288 10240.00
2.30819 0.999914550781 21768288 11702.86
2.30819 0.999926757813 21768288 13653.33
2.30819 0.999938964844 21768288 16384.00
2.30819 0.999945068359 21768288 18204.44
2.30819 0.999951171875 21768288 20480.00
2.30901 0.999957275391 21768699 23405.71
2.30901 0.999963378906 21768699 27306.67
2.30901 0.999969482422 21768699 32768.00
2.30901 0.999972534180 21768699 36408.89
2.30902 0.999975585938 21768857 40960.00
2.30902 0.999978637695 21768857 46811.43
2.30904 0.999981689453 21769288 54613.33
2.30904 1.000000000000 21769288
#[Mean = 0.00183, StdDeviation = 0.06471]
#[Max = 2.30904, Total count = 21769288]
#[Buckets = 13, SubBuckets = 262144]
</code></pre>
|
<python><pandas><dataframe><matplotlib>
|
2023-06-20 21:19:49
| 1
| 1,020
|
Hofbr
|
76,518,441
| 856,804
|
How to type the output from df.itertuples
|
<p>I have a script like below</p>
<pre class="lang-py prettyprint-override"><code>import pandas as pd
from typing import Any
def info(i: Any) -> None:
    print(f'{type(i)=:}')
    print(f'{i=:}')
    print(f'{i.Index=:}')
    print(f'{i.x=:}')
    print(f'{i.y=:}')

if __name__ == "__main__":
    df = pd.DataFrame([[1, 'a'], [2, 'b']], columns=['x', 'y'])
    for i in df.itertuples():
        info(i)
</code></pre>
<p>Its output is</p>
<pre><code>type(i)=<class 'pandas.core.frame.Pandas'>
i=Pandas(Index=0, x=1, y='a')
i.Index=0
i.x=1
i.y=a
type(i)=<class 'pandas.core.frame.Pandas'>
i=Pandas(Index=1, x=2, y='b')
i.Index=1
i.x=2
i.y=b
</code></pre>
<p>I'd like to avoid using <code>Any</code>, but what's the proper way to type <code>i</code> in the <code>info</code> function? My goal is to make mypy aware that <code>i</code> has the fields <code>Index</code>, <code>x</code> and <code>y</code>.</p>
<p>If I follow pandas' <a href="https://github.com/pandas-dev/pandas/blob/v1.5.3/pandas/core/frame.py#L1416" rel="nofollow noreferrer">way of typing</a> (I'm still using python3.8):</p>
<pre><code>def info(i: Tuple[Any, ...]) -> None:
</code></pre>
<p>mypy complains:</p>
<pre><code>toy.py:8:11: error: "Tuple[Any, ...]" has no attribute "Index"; maybe "index"? [attr-defined]
toy.py:9:11: error: "Tuple[Any, ...]" has no attribute "x" [attr-defined]
toy.py:10:11: error: "Tuple[Any, ...]" has no attribute "y" [attr-defined]
Found 3 errors in 1 file (checked 1 source file)
</code></pre>
|
<python><pandas><mypy><python-typing>
|
2023-06-20 21:15:11
| 1
| 9,110
|
zyxue
|
76,518,391
| 4,936,905
|
Asynchronous tools break the chain in langchain implementation
|
<p>I implemented langchain as a Python API, created with FastAPI and uvicorn.</p>
<p>The Python API is composed of one main service and various microservices that the main service calls when required. These microservices are tools. I use 3 tools: web search, image generation, and image description. All are long-running tasks.</p>
<p>The microservices need to be called as a chain, i.e. the output of one microservice can be used as the input to another microservice (whose output is then returned, or used as an input to another tool, as required).</p>
<p>Now I have made each microservice asynchronous for better scalability. As in, they do the heavy lifting in a background thread, managed via Celery+Redis.</p>
<p>This setup breaks the chain. Why?</p>
<p>Because the first async microservice immediately returns a <code>task_id</code> (to track the background work) when it is run via Celery. This output (the <code>task_id</code>) is passed as input to the next microservice. But this input is essentially meaningless to the second microservice. It's like giving a chef a shopping receipt and expecting them to cook a meal with it.</p>
<p>The next microservice requires the actual output from the first one to do its job, but it's got the <code>task_id</code> instead, which doesn't hold any meaningful information for it to work with.</p>
<p>This makes the chain return garbage output ultimately. So in that sense, the chain "breaks".</p>
<p>How else could I have implemented my langchain execution to ensure concurrency and parallelism?</p>
<p>Please provide an illustrative example.</p>
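<p>A minimal, framework-free sketch of what I mean (using <code>concurrent.futures</code> as a stand-in for Celery; <code>web_search</code> and <code>summarize</code> are hypothetical tools): the chain only makes sense if each step resolves the previous step's actual result before passing it on, rather than passing the handle itself.</p>

```python
from concurrent.futures import ThreadPoolExecutor, Future

# Hypothetical stand-ins for two long-running tools in the chain.
def web_search(query: str) -> str:
    return f"results for {query}"

def summarize(text: str) -> str:
    return f"summary of ({text})"

with ThreadPoolExecutor() as pool:
    fut: Future = pool.submit(web_search, "llm agents")
    # Broken chain: handing the Future (analogue of a Celery task_id)
    # downstream as if it were the search output.
    garbage = summarize(str(fut))
    # Working chain: resolve the result before handing it on. The
    # submit/.result() split still lets independent tools run concurrently.
    good = summarize(fut.result())
```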
|
<python><microservices><langchain><py-langchain>
|
2023-06-20 21:06:21
| 0
| 15,924
|
Hassan Baig
|
76,518,338
| 5,942,100
|
Filter out complex query with conditions using Pandas
|
<p>I would like to show the rows whose ID appears multiple times with different stat values.</p>
<p><strong>Data</strong></p>
<pre><code>ID name tag stat
aaBBB1234:3716 apv eertyyuiiio FALSE
aaBBB1234:3716 mps rtuui FALSE
aaBBB1234:3716 ty fgggll1 TRUE
bbSSS2333:5000 teas dexcv FALSE
bbSSS2333:5000 llv ieeve FALSE
</code></pre>
<p><strong>Desired</strong></p>
<pre><code>ID name tag stat
aaBBB1234:3716 apv eertyyuiiio FALSE
aaBBB1234:3716 mps rtuui FALSE
aaBBB1234:3716 ty fgggll1 TRUE
</code></pre>
<p><strong>Doing</strong></p>
<pre><code># Filter the dataframe based on the desired condition
filtered_df = df[df['ID'].duplicated(keep=False)]
</code></pre>
<p>This filters the duplicated IDs; however, it needs to be expanded to show specifically the IDs that appear with different stat values.
Any suggestion is appreciated.</p>
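<p>One way to express "same ID, different stat values" (a sketch on the sample data, not necessarily the only approach) is a groupby-transform counting distinct stat values per ID:</p>

```python
import pandas as pd

df = pd.DataFrame({
    "ID": ["aaBBB1234:3716"] * 3 + ["bbSSS2333:5000"] * 2,
    "name": ["apv", "mps", "ty", "teas", "llv"],
    "tag": ["eertyyuiiio", "rtuui", "fgggll1", "dexcv", "ieeve"],
    "stat": [False, False, True, False, False],
})

# Keep only the rows of IDs that carry more than one distinct stat value.
mask = df.groupby("ID")["stat"].transform("nunique") > 1
result = df[mask]
```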
|
<python><pandas><numpy>
|
2023-06-20 20:55:57
| 0
| 4,428
|
Lynn
|
76,518,217
| 287,297
|
How to map stripplots onto boxplots in a FacetGrid
|
<p>I just encountered a rather serious bug in seaborn 0.12.2 facet plotting, in which a graph is produced -- without any warning or error -- showing the data in the wrong category, potentially leading the scientist to draw the wrong conclusion!</p>
<pre><code>#!/usr/bin/env python3
# Modules #
import seaborn, pandas
import numpy as np
# Create a list for each category #
patients = ['Patient1', 'Patient2', 'Patient3']
cohorts = ['Cohort1', 'Cohort2', 'Cohort3']
treatments = ['Treatment1', 'Treatment2', 'Treatment3']
# We will use these lists to create a DataFrame with unique combinations #
data = {
'Patient': [],
'Cohort': [],
'Treatment': [],
'Value': []
}
# Create all unique combinations and add random values for each #
for patient in patients:
    for cohort in cohorts:
        for treatment in treatments:
            for i in range(10):
                data['Patient'].append(patient)
                data['Cohort'].append(cohort)
                data['Treatment'].append(treatment)
                data['Value'].append(np.random.rand())
# Make dataframe #
df = pandas.DataFrame(data)
# Find the indexes of the rows to drop #
index_to_drop = df[(df['Patient'] == 'Patient2') &
(df['Cohort'] == 'Cohort2') &
(df['Treatment'] == 'Treatment2')].index
# Drop these rows from the DataFrame #
df = df.drop(index_to_drop)
###############################################################################
facet_params = dict(data = df,
col = 'Patient',
row = 'Cohort',
col_order = patients,
row_order = cohorts)
seaborn_params = dict(x = 'Treatment',
y = 'Value',)
# Call seaborn #
grid = seaborn.FacetGrid(**facet_params)
# Box plot #
grid.map_dataframe(seaborn.boxplot, **seaborn_params, showfliers=False)
# Strip plot #
grid.map_dataframe(seaborn.stripplot, **seaborn_params, jitter=True)
# Save #
grid.savefig('facet_bug.png')
</code></pre>
<p><a href="https://i.sstatic.net/vmbmv.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/vmbmv.png" alt="boxplot with bug" /></a></p>
<p>In this short example, we have four different levels:</p>
<ul>
<li>Patients (categorical)</li>
<li>Cohorts (categorical)</li>
<li>Treatments (categorical)</li>
<li>Value (scalar)</li>
</ul>
<p>We introduce missing data: there are no values for Patient2 of Cohort2 getting Treatment2.</p>
<p>We make a FacetGrid with the patients and cohorts levels, and plot the treatments on the x axis with the value on the y axis (of each subplot).</p>
<p>We superimpose both a boxplot and a stripplot.</p>
<p>In the case of the stripplot, the data is correctly plotted. In the case of the boxplot, the data pertaining to Treatment3 ends up under the label of Treatment2!</p>
<p>The ideal behavior would actually be to produce a graph where the subaxes for Patient2 of Cohort2 only has two categories on the x axis in order to display only two boxplots (and shouldn't contain an empty space).</p>
<p>Is there any way of producing a facet grid where the number of categories of each X-axis is variable based on the data available?</p>
<p>Here is a mockup of plot desired that I edited manually using GIMP:</p>
<p><a href="https://i.sstatic.net/BVmPe.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/BVmPe.png" alt="boxplot desired" /></a></p>
|
<python><seaborn><boxplot><facet-grid><catplot>
|
2023-06-20 20:33:17
| 1
| 6,514
|
xApple
|
76,518,144
| 12,223,536
|
Trouble deleting ChromaDB documents
|
<p>I can't seem to delete documents from my Chroma vector database. I would appreciate any insight as to why this example does not work, and what modifications can/should be made to get it functioning correctly.</p>
<pre class="lang-py prettyprint-override"><code>import dotenv
import os
import chromadb
from chromadb.config import Settings
from chromadb.utils import embedding_functions
dotenv.load_dotenv()
client = chromadb.Client(
Settings(chroma_db_impl="duckdb+parquet", persist_directory="db/chroma")
)
embedding = embedding_functions.OpenAIEmbeddingFunction(
api_key=os.getenv("OPENAI_API_KEY"),
model_name="text-embedding-ada-002",
)
collection = client.get_or_create_collection(name="test", embedding_function=embedding)
from llama_index import SimpleDirectoryReader
documents = SimpleDirectoryReader(
input_dir="./sampledir",
recursive=True,
exclude_hidden=False,
filename_as_id=True,
).load_data()
collection.add(
documents=[doc.get_text() for doc in documents],
ids=[doc.doc_id for doc in documents],
)
print(collection.count()) # PRINTS n
doc_ids = collection.get()["ids"]
collection.delete(ids=doc_ids)
print(collection.count()) # SHOULD BE ZERO, BUT PRINTS n
</code></pre>
|
<python><database><artificial-intelligence><openai-api><chromadb>
|
2023-06-20 20:19:22
| 3
| 445
|
wolfeweeks
|
76,518,102
| 14,208,556
|
subtract values where column has specific value
|
<p>I have the following table</p>
<pre><code>| account | date | client_ID | value |
|---------|------------|-----------|------|
| A | 31/01/2023 | 1 | 10 |
| B | 31/01/2023 | 1 | 2 |
| C | 31/01/2023 | 1 | 50 |
| A | 28/02/2023 | 1 | 15 |
| B | 28/02/2023 | 1 | 11 |
| C | 28/02/2023 | 1 | 50 |
| A | 31/01/2023 | 2 | 7 |
| B | 31/01/2023 | 2 | 10 |
</code></pre>
<p>And I want to subtract, for each date and client_ID, the value of account B from the value of account A. So, on 31/01/2023 client_id 1 will be 10-2=8, and client_id 2 will be 7-10=-3. For 28/02/2023 there is only data for client_id 1 and the result should be 15-11=4.</p>
<p>So, the expected output has 3 columns: date, client_id, and the difference calculated above.</p>
<p>It is possible that for a certain date or client_ID there is no A or B account so the code should be able to handle that as well.</p>
<p>How should I approach this?</p>
<p>Thank you in advance!</p>
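<p>For reference, one approach I have been looking at (a sketch on the sample data, not necessarily the best way) is pivoting so that A and B become columns, which makes missing accounts come out as NaN automatically:</p>

```python
import pandas as pd

df = pd.DataFrame({
    "account": ["A", "B", "C", "A", "B", "C", "A", "B"],
    "date": ["31/01/2023"] * 3 + ["28/02/2023"] * 3 + ["31/01/2023"] * 2,
    "client_ID": [1, 1, 1, 1, 1, 1, 2, 2],
    "value": [10, 2, 50, 15, 11, 50, 7, 10],
})

# Pivot so A and B become columns; a missing A or B simply yields NaN.
p = df.pivot_table(index=["date", "client_ID"], columns="account", values="value")
p["difference"] = p.get("A") - p.get("B")
out = p.reset_index()[["date", "client_ID", "difference"]]
```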
|
<python><pandas><dataframe>
|
2023-06-20 20:12:38
| 2
| 333
|
t.pellegrom
|
76,518,083
| 480,118
|
403 error in code, but not when using browser
|
<p>I'm trying to pull data from this URL: <a href="https://sbcharts.investing.com/events_charts/us/38.json" rel="nofollow noreferrer">https://sbcharts.investing.com/events_charts/us/38.json</a></p>
<p>I can paste that into Edge or Chrome and see the returned JSON. Looking at the request/response headers from that, I am adding the following to my Python code, but despite all this I still get a 403 permission error. Any help in debugging this would be appreciated.</p>
<pre><code>import requests
url='https://sbcharts.investing.com/events_charts/us/38.json'
headers = {'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/114.0.0.0 Safari/537.36 Edg/114.0.1823.51',
'Access-Control-Allow-Origin':'*',
'Access-Control-Allow-Methods':'*',
'Access-Control-Allow-Headers':'*',
'Cross-Origin-Opener-Policy':'cross-origin',
'Accept-Encoding': 'gzip, deflate, br',
'Content-Security-Policy':'upgrade-insecure-requests',
'Content-Security-Policy':"default-src https: data: wss: 'unsafe-inline' 'unsafe-eval'; form-action https:; report-uri",
'Content-Type':'application/json',
'Accept':'application/json,text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,image/apng,*/*;q=0.8,application/signed-exchange;v=b3;q=0.7',
'Accept-Language':'en-US,en;q=0.9',
'Referrer-Policy':'strict-origin-when-cross-origin',
'Referrer':'http://localhost:5000/'
}
data = requests.get(url, headers=headers)
</code></pre>
<p>The response returned from this is a 403. The data in the response seems to be binary, so it is not readable:</p>
<pre><code>raw:
<urllib3.response.HTTPResponse object at 0x7fd97814c7f0>
reason: 'Forbidden'
request: <PreparedRequest [GET]>
status_code: 403
text:
'\x13�\x11\x00�>��ݹ�z�U\x0eU��$l�!\x06��\x1e�o!���\x0c\x13�{\x13�5�N����)\'��\x12��d۷I�))��\x034�!w\n��S�^cB{|\x1aKD���T:\x01Z���?�o[�1��\x14��\x1686ܺ����A�K�t�����x\x00y��Ϣ�P�\x18p�e\x06�m4�G�!�\x1e��_�=�Hɩ\rQ���R�+G�����<���\x13���H�\x06��\u0605�TNE��&G�\x1c�\x1b\x15\x13�Iԟ�d%�RN\r\r��\x00J����������>55Qpa�Z:\t͘���\x07(_��ZC�\x06&\x02�\n��w�\x14N[�ݿ�LZ9:m�\x1f���@g[[s\x12�:ovȕC|��T��J\x1b/umW�O�t��\x18CLˌS�\n=^��\x05�P��$uJ�׀�WB�XfϢ\x0cf|Y}\x7fA�>4\x1a1&7#�\x14�ö́/?�f����1Ň���\x00[avS�\x7f�\x0e�b�S�JGf��\x0c\x00\x16�OV������\x1e�o�2d�\x0c\x050��\x06��\x08�#p���G�f6%��T�"O�;؛�7���i邾L�\x1ex�g�u\x18\x10��5�%3�n�h%�\x0316<�@�REEM�\x15Y\x06�� U�\x0c��T3�\x10ڄ��\x01sx\t�\x17r��$�˒x��L\x12�\x0f�T"$\x05ñ\\�%�EP�\x14А�\x1e\'&A��H1C?\x0b^\x0b��h�&�\x0c\x08W�WK��G\x03�f�\x08\x13�/:7����D�s�q\\`\x0c=�j�ч�q�l �[�\x17��\x04�f�K�,�/�p�o�(�\x0c=T$���\x03`\x10�� �����:��-��\x04\x0b�J�Ւ{�\x1a�[�СsP�+نI���+�d\x03�>\x1d�|j\x02G�tA��w\x0e�d`�\x0b�\t\x07u%}�J��X���\x12,�`�&YN�Zu�\x1bE��Ef#\x0f*?�t��\t<\x04���R2Y��٪�\'��Ќ���{3�9�...
url: 'https://sbcharts.investing.com/events_charts/us/38.json'
_content:
b'\x13\xa4\x11\x00\xc4>\x9b\xf6\xdd\xb9\x9cz\xb4U\x0eU\x96\xf3$l\xb3!\x06\x97\x90\x1e\xa0o!\xb1\xd2\xc3\x0c\x13\xee\x9b{\x13\xa65\xaeN\x94\xad\xd9\xdd)\'\xa5\xfc\x12\xd0\xecd\xdb\xb7I\x81))\xa0\x84\x034\xe2\x94!w\n\xd4\xc9S\xf2^cB{|\x1aKD\xe9\xde\xd0T:\x01Z\xb1\xe6\xfa?\xe1o[\xc11\xbe\xfc\x14\xf7\xff\x1686\xdc\xba\xb0\x86\x9f\xf2A\xbeK\xb0t\x87\xe0\x8c\xa5\xf2x\x00y\xf9\xeb\xcf\xa2\xc0P\xb3\x18p\xe2e\x06\xc0m4\xdeG\xf6!\xba\x1e\xac\x9a_\xb7=\xdeH\xc9\xa9\rQ\x84\x8c\xa1R\xfe+G\xb6\xec\xa8\xf8\x90\x99<\xdb\xe0\xf1\x13\xfd\xdf\xdbH\xe6\x06\xaf\xf8\xd8\x85\xdeTNE\x8a\xb7&G\xb7\x1c\xa8\x1b\x15\x13\xf1I\xd4\x9f\x91d%\xa2RN\r\r\xb7\xee\x00J\xf9\xc1\xb6\xf4\xf9\xab\xf7\x97\x9f\x9e>55Qpa\xafZ:\t\xcd\x98\xe0\xc3\xe5\x07(_\xc0\x87ZC\xed\x06&\x02\x93\n\xe7\xcew\x9c\x14N[\x9c\xdd\xbf\x97LZ9:m\x90\x1f\xed\xac\xbf@g[[s\x12\xba:ov\xc8\x95C|\xa8\x89T\x9d\xc4J\x1b/umW\xc9O\xadt\xb5\xa4\x18CL\xcb\x8cS\xad\n=^\xe1\xe3\xbf\x05\x9aP\xd3\xeb\xb3$uJ\xd3\xd7\x80\xa5WB\xb7Xf\xcf\xa2\x0cf|Y}\x7fA\xbe>4\x1a1&7#\xcb\x14\xbao\xcd\x84/?\xc...
_content_consumed: True
_next: None
</code></pre>
|
<python><python-requests>
|
2023-06-20 20:09:28
| 2
| 6,184
|
mike01010
|
76,518,078
| 6,087,667
|
merge 2D array of 2D arrays into 2D array
|
<p>I have a 2D array of 2D arrays <code>t</code>, which can be emulated by this code:</p>
<pre><code>import numpy as np
n=3
x = np.random.randint(0,10, (n,1))
y = np.random.randint(0,10, (n,2))
t = np.empty((2,2), object)
t[0,:] = [x,y]
t[1,:] = [x+10,y+10]
</code></pre>
<p>How can I merge <code>t</code> into a single array of size (6,3), like this:</p>
<pre><code>np.vstack([np.hstack([x,y]), np.hstack([x,y])+10])
</code></pre>
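<p>For what it's worth, one candidate I am experimenting with (a sketch on deterministic stand-in data) is <code>np.block</code>, which stitches a nested list of blocks: hstack within each row, then vstack across rows.</p>

```python
import numpy as np

# Deterministic stand-ins for x (n,1) and y (n,2)
n = 3
x = np.arange(n).reshape(n, 1)
y = np.arange(2 * n).reshape(n, 2)
t = np.empty((2, 2), object)
t[0, :] = [x, y]
t[1, :] = [x + 10, y + 10]

# np.block assembles the nested list-of-arrays into one (6,3) array.
merged = np.block(t.tolist())
```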
|
<python><numpy><numpy-ndarray>
|
2023-06-20 20:08:55
| 1
| 571
|
guyguyguy12345
|
76,517,964
| 6,936,489
|
python polars : apply a custom function efficiently on parts of dataframe
|
<p>I'm trying to optimize <code>map_elements</code> in polars the way I did in pandas (which might entirely be the wrong way to proceed...).</p>
<p>I have a function I'm not managing but that I have to apply to parts of my dataframe; to assert reproducibility, let's say this is <code>lat_lon_parse</code> from the <code>lat_lon_parser</code> package.</p>
<pre><code>from lat_lon_parser import parse as lat_lon_parse
def test_lat_lon_parse(x):
    try:
        print(f"parsing {x}")
        return lat_lon_parse(x)
    except Exception:
        return None
</code></pre>
<p>Let's also say that your dataframe, reflecting true data, contains mixed data.</p>
<pre><code>import polars as pl
df = pl.DataFrame({'A':["1", "2", "5°N", "4°S"], "B":[1, 2, 3, 4]})
</code></pre>
<p>For efficiency's sake, I don't want to run <code>test_lat_lon_parse</code> on rows 1 and 2 (as I could do a simple <code>.cast()</code> operation and get the same result). What is the state-of-the-art way to proceed?</p>
<p>In pandas, I would have computed an index and applied my function on the subset of the dataframe only.</p>
<p>In polars, I see two ways of proceeding :</p>
<pre><code>mask = pl.col('A').str.contains('°')
# way #1
def way_1(df):
return df.with_columns(
pl.when(mask)
.then(pl.col('A').map_elements(test_lat_lon_parse))
.otherwise(pl.col('A'))
.cast(pl.Float64)
)
# way #2
def way_2(df):
return pl.concat([
df.filter(mask).with_columns(pl.col('A').map_elements(test_lat_lon_parse).alias('dummy')),
df.filter(~mask).with_columns(pl.col('A').cast(pl.Float64).alias('dummy'))
])
</code></pre>
<p>You will see that way #1 will apply the function on each row (hence the <code>print</code>s); note that this is not as trivial as it seems as your applied function may also trigger exceptions when encountering strange data. For instance, you can't cast to <code>pl.Float64</code> inside the <code>otherwise</code> part of the expression because it will be applied to the whole series - and fail.</p>
<p>The way #2 will execute the function on the only subset I specified but will alter the dataframe's order.</p>
<p>I used <code>timeit</code>s to compare the two processes. I got those results:</p>
<pre><code># way #1:
%timeit way_1(df)
>> 443 µs ± 76.2 µs per loop (mean ± std. dev. of 7 runs, 1,000 loops each)
# way #2:
%timeit way_2(df)
>> 1.49 ms ± 462 µs per loop (mean ± std. dev. of 7 runs, 1,000 loops each)
</code></pre>
<p>When I increase the dataframe's size, it shows (unsurprisingly) that way #1 does not handle scalability best:</p>
<pre><code>df = pl.DataFrame({'A':["1", "2", "5°N", "4°S"]*10000, "B":[1, 2, 3, 4]*10000})
# way #1:
%timeit way_1(df)
>> 400 ms ± 48.3 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
# way #2:
>> 234 ms ± 59.2 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
</code></pre>
<p>Is my understanding of how polars proceeds correct? Are there better ways to handle this (and how do they scale)?</p>
|
<python><python-polars>
|
2023-06-20 19:48:25
| 1
| 2,562
|
tgrandje
|
76,517,897
| 2,055,938
|
Python erroneous logic
|
<p>I am new to Python and trying to understand where my logic is wrong. I am creating a project which controls a hydroponic setup. I have a fogger, a heat lamp and an oxygen machine. Power is switched via relays, and the sensors have their own controllers. I can pretty much control the appliances as I want, but now that I realize I must start using threads to get the last pieces in place, my logic seems to be failing. It does not behave as I expected.</p>
<p>The setup is a Raspbian on a Raspberry Pi 3. I have a LCD (16x2) where I wish to print useful data without having the screen on all the time and as mentioned the machinery, sensors and relays.</p>
<p>What I do wish to accomplish is this; I have three processes (one for the fogger, one for the heat lamp and one for the oxygen) and they must each run on a thread. Data from the sensors will be shown on the LCD.</p>
<p>My plan is a perpetual run unless a kill switch is engaged: two buttons must be pressed at the same time, which sets the kill switch flag to True. I have not yet implemented this, but the plan is that when the flag is set, the sensors and everything else stop their work and die gracefully (I need the kill switch for when I have to change the water or similar). While the run is going, the threads do their jobs and display their data on the LCD at intervals, i.e. first the fogger displays the temperature from its sensor for four seconds, then the next set of sensors display their data for four seconds, and so on. This repetitive cycle should go on forever.</p>
<p>What changes must I make for the logic to behave as I expect? The logic in its entirety is below. As it is right now the cycles are off, the relay turns on and off sporadically, and the LCD displays data arbitrarily and sometimes even shows strange characters (interference?).</p>
<pre><code># Control program
# Source: rplcd.readthedocs.io/en/stable/getting_started.html
# https://www.circuitbasics.com/raspberry-pi-lcd-set-up-and-programming-in-python/
# LCD R Pi
# RS GPIO 26 (pin 37)
# E GPIO 19 (35)
# DB4 GPIO 13 (33)
# DB5 GPIO 6 (31)
# DB6 GPIO 5 (29)
# DB7 GPIO 11 / SCLK (23)
# WPSE311/DHT11 GPIO 24 (18)
# Relay GPIO 21 (40)
# Imports--- [ToDo: How to import settings from a configuration file?]
import adafruit_dht, board, digitalio, threading
import adafruit_character_lcd.character_lcd as characterlcd
import RPi.GPIO as GPIO
from time import sleep, perf_counter
from datetime import datetime
# Compatible with all versions of RPI as of Jan. 2019
# v1 - v3B+
lcd_rs = digitalio.DigitalInOut(board.D26)
lcd_en = digitalio.DigitalInOut(board.D19)
lcd_d4 = digitalio.DigitalInOut(board.D13)
lcd_d5 = digitalio.DigitalInOut(board.D6)
lcd_d6 = digitalio.DigitalInOut(board.D5)
lcd_d7 = digitalio.DigitalInOut(board.D11)
# Define LCD column and row size for 16x2 LCD.
lcd_columns = 16
lcd_rows = 2
# Initialise the lcd class---
lcd = characterlcd.Character_LCD_Mono(lcd_rs, lcd_en, lcd_d4, lcd_d5, lcd_d6,
lcd_d7, lcd_columns, lcd_rows)
GPIO.setwarnings(False)
# Init sensors---
dhtDevice_nutrient_mist = adafruit_dht.DHT11(board.D24, use_pulseio=False)
#dhtDevice_xx = adafruit_dht.DHT22(board.Dxx, use_pulseio=False)
#dhtDevice = adafruit_dht.DHT11(board.D24, use_pulseio=False)
# Define relays
relay_fogger = 21 #digitalio.DigitalInOut(board.D21) #- Why does this not work?
#relay_heatlamp = xx
#relay_oxygen = xx
# Init relays---
GPIO.setwarnings(False)
GPIO.setup(relay_fogger, GPIO.OUT)
# Define liquid nutrient temperature probe
liquid_nutrients_probe = 16 #digitalio.DigitalInOut(board.D16) - Why does this not work?
# Global variables---
temp_nutrient_solution = 0
temp_nutrient_mist = 0
humidity_nutrient_mist = 0
temp_roots = 0
humidity_roots = 0
fogger_on_seconds = 2700 #45 min
fogger_off_seconds = 900 #15 min
killswitch = False
# Methods---
def get_temp_nutrient_solution():
    """Measure the temperature of the nutrient solution where the ultrasonic fogger is."""
    lcd.clear()
    lcd.home()
    lcd.message = datetime.now().strftime('%b %d %H:%M:%S\n')
def get_temp_humidity_nutrient_mist():
    """Measure the humidity of the nutrient mist where the ultrasonic fogger is."""
    lcd.clear()
    lcd.home()
    try:
        # Print the values to the lcd
        temperature_c = dhtDevice_nutrient_mist.temperature
        temperature_f = temperature_c * (9 / 5) + 32
        humidity = dhtDevice_nutrient_mist.humidity
        temp_nutrient_mist = temperature_c
        humidity_nutrient_mist = humidity
        lcd.message = (
            "T: {:.1f}C / {:.1f}F \nHumidity: {}% ".format(
                temperature_c, temperature_f, humidity
            )
        )
        # For development process
        print(
            "T: {:.1f} C / {:.1f} F Humidity: {}% ".format(
                temperature_c, temperature_f, humidity
            )
        )
    except RuntimeError as error:
        # Errors happen fairly often, DHT's are hard to read, just keep going
        print(error.args[0])
        sleep(1)  # sleep(1) for DHT11 and sleep(2) for DHT22
        pass
    except Exception as error:
        dhtDevice.exit()
        raise error
    sleep(1)
def get_temp_():
    """Measure the temperature of..."""
    lcd.message = 'Test'
def relay_fogger_control():
    """Fogger on for 45 min and off for 15. Perpetual mode unless kill_processes() is activated"""
    GPIO.output(relay_fogger, GPIO.HIGH)
    sleep(1)
    #sleep(fogger_on_seconds)
    GPIO.output(relay_fogger, GPIO.LOW)
    sleep(1)
    #sleep(fogger_off_seconds)
def relay_heatLED_control():
    """Heat LED controller. When is it too hot for the crops? Sleep interval? Perpetual mode unless kill_processes() is activated"""

def relay_oxygen_control():
    """Oxygen maker. Perpetual mode unless kill_processes() is activated"""
def kill_processes():
    """ToDo: Two push buttons (must be pressed simultaneously) which gracefully kill all processes preparing for shutdown."""
    # Do something
    # Join the threads / stop the threads after killswitch is true
    t1.join()
    t2.join()
    t3.join()
    t4.join()
    lcd.clear()
    lcd.home()
    # Call something that cleans everything up
    lcd.message = 'Full stop.\r\nSafe to remove.'
    GPIO.cleanup()
# Created the Threads
t1 = threading.Thread(target=get_temp_nutrient_solution)
t2 = threading.Thread(target=get_temp_nutrient_solution)
t3 = threading.Thread(target=get_temp_humidity_nutrient_mist)
t4 = threading.Thread(target=relay_fogger_control)
# Started the threads
t1.start()
t2.start()
t3.start()
t4.start()
# Code main process--- What to do now?
while not killswitch:
    relay_fogger_control()
    get_temp_nutrient_solution()
    get_temp_humidity_nutrient_mist()
    #sleep(20) #sleep 60 sec? sleep(60)
    #killswitch = True
# Graceful exit
kill_processes()
</code></pre>
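<p>A common pattern for the planned kill switch, sketched here independently of the GPIO code, is a shared <code>threading.Event</code>: each worker loop polls it instead of a plain boolean (a bool passed as a thread argument is copied and never sees later changes), and <code>Event.wait()</code> doubles as an interruptible sleep, so shutdown is prompt:</p>

```python
import threading
import time

stop = threading.Event()  # one shared, thread-safe kill switch
results = []

def worker(name: str, period: float) -> None:
    # Do one unit of work per iteration, then wait; wait() returns early
    # the moment stop.set() is called, so the loop exits promptly.
    while not stop.is_set():
        results.append(name)  # stand-in for sensor/relay work
        stop.wait(period)

threads = [threading.Thread(target=worker, args=(f"tool{i}", 0.01)) for i in range(3)]
for t in threads:
    t.start()
time.sleep(0.05)
stop.set()  # the "kill switch": every loop exits on its next check
for t in threads:
    t.join()
```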
<p>After getting excellent suggestions from SO users I have updated the code to look like below. My new code has a logical error somewhere, but since the question in this thread has been answered I consider this issue solved and will post a new question. I finally decided to go with one button, and I hope that the code below helps someone else.</p>
<pre><code>
# Source: rplcd.readthedocs.io/en/stable/getting_started.html
# https://www.circuitbasics.com/raspberry-pi-lcd-set-up-and-programming-in-python/
# LCD R Pi
# RS GPIO 26 (pin 37)
# E GPIO 19 (35)
# DB4 GPIO 13 (33)
# DB5 GPIO 6 (31)
# DB6 GPIO 5 (29)
# DB7 GPIO 11 / SCLK (23)
# WPSE311/DHT11 GPIO 24 (18)
# Relay Fog GPIO 21 (40)
# Relay Oxy GPIO 20 (38)
# Relay LED GPIO 16 (36)
# Button Killsw. GPIO 12 (32)
# Imports--- [ToDo: How to import settings from a configuration file?]
import adafruit_dht, board, digitalio, threading
import adafruit_character_lcd.character_lcd as characterlcd
import RPi.GPIO as GPIO
from time import sleep, perf_counter
from gpiozero import CPUTemperature
from datetime import datetime
# Compatible with all versions of RPI as of Jan. 2019
# v1 - v3B+
lcd_rs = digitalio.DigitalInOut(board.D26)
lcd_en = digitalio.DigitalInOut(board.D19)
lcd_d4 = digitalio.DigitalInOut(board.D13)
lcd_d5 = digitalio.DigitalInOut(board.D6)
lcd_d6 = digitalio.DigitalInOut(board.D5)
lcd_d7 = digitalio.DigitalInOut(board.D11)
# Define LCD column and row size for 16x2 LCD.
lcd_columns = 16
lcd_rows = 2
# Initialise the lcd class---
lcd = characterlcd.Character_LCD_Mono(lcd_rs, lcd_en, lcd_d4, lcd_d5, lcd_d6,
lcd_d7, lcd_columns, lcd_rows)
# Init sensors---
dhtDevice_nutrient_mist = adafruit_dht.DHT11(board.D24, use_pulseio=False)
#dhtDevice_xx = adafruit_dht.DHT22(board.Dxx, use_pulseio=False)
#dhtDevice = adafruit_dht.DHT11(board.D24, use_pulseio=False)
# Define relays
relay_fogger = 21 #digitalio.DigitalInOut(board.D21) #- Why does this not work?
relay_oxygen = 20 #digitalio.DigitalInOut(board.D20) #- Why does this not work?
relay_led = 16 #digitalio.DigitalInOut(board.D16) #- Why does this not work?
# Init relays---
GPIO.setwarnings(False)
GPIO.setup(relay_fogger, GPIO.OUT)
GPIO.setup(relay_oxygen, GPIO.OUT)
GPIO.setup(relay_led, GPIO.OUT)
# Define liquid nutrient temperature probe
liquid_nutrients_probe = 16 #digitalio.DigitalInOut(board.D16) - Why does this not work?
# Define the killswitch push button
GPIO.setup(12, GPIO.IN, pull_up_down=GPIO.PUD_DOWN)
# Global variables---
killswitch = False
# Fogger bucket vars
temp_nutrient_solution = 0
temp_nutrient_mist = 0
humidity_nutrient_mist = 0
fogger_on_seconds = 2700 #45 min
fogger_off_seconds = 900 #15 min
sleep_fogger = False
# Grow bucket vars
temp_roots = 0
humidity_roots = 0
# Oxygen bucket vars
sleep_oxygen = False
# Rapsberry Pi internal temperature
rpi_internal_temp = 0
# Methods---
def get_temp_nutrient_solution(killswitch): # Nutrient solution temperature. READY for TEST
    """Measure the temperature of the nutrient solution where the ultrasonic fogger is."""
    while not killswitch:
        global temp_nutrient_solution
        temp_nutrient_solution = 22
        #lcd.message = datetime.now().strftime('%b %d %H:%M:%S\n')
        #lcd.message = "Dummy temp liquid solution:/n {:.1f}C".format(temp_nutrient_solution)
        # For development process
        print(
            "T: {:.1f} C / {:.1f} F".format(
                temp_nutrient_solution, c2f(temp_nutrient_mist)
            )
        )
        sleep(1)
def get_temp_humidity_nutrient_mist(killswitch): # Mist temperature and humidity. READY for TEST
"""Measure the temperature and humidity of the nutrient mist where the ultrasonic fogger is."""
while not killswitch:
try:
# Update global temp value and humidity once per second
global temp_nutrient_mist
global humidity_nutrient_mist
temp_nutrient_mist = dhtDevice_nutrient_mist.temperature
humidity_nutrient_mist = dhtDevice_nutrient_mist.humidity
# For development process
print(
"T: {:.1f} C / {:.1f} F Humidity: {}% ".format(
temp_nutrient_mist, c2f(temp_nutrient_mist), humidity_nutrient_mist
)
)
except RuntimeError as error:
# Errors happen fairly often, DHT's are hard to read, just keep going
print(error.args[0])
sleep(1) # sleep(1) for DHT11 and sleep(2) for DHT22
pass
except Exception as error:
dhtDevice_nutrient_mist.exit()
kill_processes() # Improve this so it shows which DHT device raised the error
raise error
sleep(1)
def relay_fogger_control(killswitch, sleep_fogger): # Fogger on or off
"""Fogger on for 45 min and off for 15. Perpetual mode unless kill_processes() is activated"""
while not killswitch or sleep_fogger:
GPIO.output(relay_fogger, GPIO.HIGH)
sleep(1)
#sleep(fogger_on_seconds)
GPIO.output(relay_fogger, GPIO.LOW)
sleep(1)
#sleep(fogger_off_seconds)
def relay_heatLED_control(killswitch): # Heat lamp LED on or off
"""Heat LED controller. When is it too hot for the crops? Sleep interval? Perpetual mode unless kill_processes() is activated"""
while not killswitch:
GPIO.output(relay_led, GPIO.HIGH)
sleep(3)
#sleep(fogger_on_seconds)
GPIO.output(relay_led, GPIO.LOW)
sleep(3)
#sleep(fogger_off_seconds)
def relay_oxygen_control(killswitch, sleep_oxygen): # Oxygen machine on or off
"""Oxygen maker. Perpetual mode unless kill_processes() is activated"""
while not killswitch or sleep_oxygen:
GPIO.output(relay_oxygen, GPIO.HIGH)
sleep(5)
#sleep(fogger_on_seconds)
GPIO.output(relay_oxygen, GPIO.LOW)
#sleep(fogger_off_seconds)
sleep(5)
def kill_processes(): # Kill all processes
"""ToDo: A button must be pressed which gracefully kills all processes preparing for shutdown."""
# Power off machines
GPIO.output(relay_fogger, GPIO.LOW)
GPIO.output(relay_led, GPIO.LOW)
GPIO.output(relay_oxygen, GPIO.LOW)
# Join the threads / stop the threads after killswitch is true
t1.join()
t2.join()
t3.join()
t4.join()
t5.join()
#t6.join()
reset_clear_lcd()
# Stop message and GPIO clearing
lcd.message = 'Full stop.\r\nSafe to remove.'
GPIO.cleanup()
def reset_clear_lcd(): # Reset and clear the LCD
"""Move cursor to (0,0) and clear the screen"""
lcd.home()
lcd.clear()
def get_rpi_temp(): # Read the Raspberry Pi CPU temperature
"""Read the Raspberry Pi internal CPU temperature into rpi_internal_temp."""
global rpi_internal_temp
cpu = CPUTemperature()
rpi_internal_temp = cpu.temperature
def c2f(temperature_c):
"""Convert Celsius to Fahrenheit"""
return temperature_c * (9 / 5) + 32
def lcd_display_data_controller(killswitch): # LCD display data controller
"""Display various measurments and data on the small LCD. Switch every four seconds."""
while not killswitch:
reset_clear_lcd()
# Raspberry Pi internal temperature
lcd.message = (
"R Pi (int. temp): \n{:.1f}C/{:.1f}F ".format(
rpi_internal_temp, c2f(rpi_internal_temp)
)
)
sleep(5)
reset_clear_lcd()
# Nutrient liquid temperature
lcd.message = (
"F1: {:.1f}C/{:.1f}F ".format(
temp_nutrient_solution, c2f(temp_nutrient_solution)
)
)
sleep(5)
reset_clear_lcd()
# Nutrient mist temperature and humidity
lcd.message = (
"F2: {:.1f}C/{:.1f}F \nHumidity: {}% ".format(
temp_nutrient_mist, c2f(temp_nutrient_mist), humidity_nutrient_mist
)
)
sleep(5)
reset_clear_lcd()
# Root temperature and humidity
lcd.message = (
"R1: {:.1f}C/{:.1f}F \nHumidity: {}% ".format(
temp_roots, c2f(temp_roots), humidity_roots
)
)
sleep(5)
reset_clear_lcd()
def button_callback(channel):
global killswitch
print("Button was pushed!")
killswitch = True
# Init the button
GPIO.add_event_detect(12, GPIO.RISING, callback=button_callback)
# Create the threads
#tx = threading.Thread(target=xx, args=(killswitch,sleep_fogger,))
t1 = threading.Thread(target=get_temp_nutrient_solution, args=(killswitch,))
t2 = threading.Thread(target=get_temp_humidity_nutrient_mist, args=(killswitch,))
t3 = threading.Thread(target=relay_fogger_control, args=(killswitch,sleep_fogger,))
t4 = threading.Thread(target=lcd_display_data_controller, args=(killswitch,))
t5 = threading.Thread(target=get_rpi_temp)
#t6 = threading.Thread(target=killswitch_button)
# Start the threads
t1.start()
t2.start()
t3.start()
t4.start()
t5.start()
#t6.start()
# Code main process---
while not killswitch:
sleep(1)
# Graceful exit
kill_processes()
</code></pre>
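<p>One caveat about the script above: the boolean <code>killswitch</code> is passed to each thread by value, so the copy captured at <code>Thread(..., args=(killswitch,))</code> time never changes when the button callback sets the global. A minimal sketch of the usual alternative, a shared <code>threading.Event</code> (not wired into the script above):</p>

```python
import threading
import time

stop = threading.Event()      # one shared object that every thread can check

def worker(name):
    while not stop.is_set():  # re-reads the shared flag on every iteration
        time.sleep(0.05)

t = threading.Thread(target=worker, args=("fogger",))
t.start()
stop.set()                    # e.g. called from the button callback
t.join()
```

With this pattern the worker loops exit on their own as soon as <code>stop.set()</code> is called, so <code>join()</code> returns promptly.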
|
<python><multithreading><raspberry-pi3>
|
2023-06-20 19:35:53
| 0
| 517
|
Emperor 2052
|
76,517,839
| 17,638,206
|
Extracting Arabic numerals using OCR and Python
|
<p>I have an image from which I need to extract this number (the truth label): <code>۱٤٤۲٦۷</code>. I am using EasyOCR: <code>results = reader.readtext(image,paragraph = True,text_threshold =0.15,low_text=0.2,add_margin=0.09)</code>
When I print the text that is output from the OCR, I get : <code>رقم :٤٢٦٧ ١٤ (</code>, so to extract the number I have used: <code>arabic_num = re.search(r':([\d\s]+)', text, re.UNICODE)</code> and then the code <code>print(arabic_num.group(1))</code> produces <code>٤٢٦٧ ١٤</code>. However, when I try to remove the space between <code>١٤</code> and <code>٤٢٦٧</code> using <code>arabic_num = arabic_num.group(1).replace(' ', '')</code> I get ٤٢٦٧١٤. So I have decided to loop over the text to see how it is stored using: <code>for i in text: print(i)</code> and the result was:</p>
<pre><code>ر
ق
م
:
٤
٢
٦
٧
١
٤
(
</code></pre>
<p>It seems the text is displayed and stored in different orders. How do I fix this and output the text as <code>۱٤٤۲٦۷</code>, like the truth value in the image?</p>
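<p>Note that Python stores the string in logical order (first digit first) even though an RTL-aware console displays it right-to-left, which is why slicing and <code>replace</code> look "reordered". A sketch that makes the stored order visible by mapping the Eastern Arabic-Indic digits (U+0660–U+0669) onto ASCII (the truth label also mixes Extended Arabic-Indic forms such as <code>۱</code>, which would need the U+06F0 range as well):</p>

```python
# map Eastern Arabic-Indic digits positionally onto ASCII 0-9
to_ascii = str.maketrans("٠١٢٣٤٥٦٧٨٩", "0123456789")

s = "١٤٤٢٦٧"                   # stored in logical order: 1, 4, 4, 2, 6, 7
print(s.translate(to_ascii))   # 144267
```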
|
<python><string><replace><ocr>
|
2023-06-20 19:24:22
| 1
| 375
|
AAA
|
76,517,809
| 436,418
|
How can I make this function more numerically stable?
|
<p>The following function is supposed to work similarly to <code>pow(x, 1/k)</code> but to be symmetric around the line <code>y = 1 - x</code> as well as not having a 0 or 1 slope at either end of [0, 1]:</p>
<pre><code>def sym_gamma(x, k):
if k == 1.0:
return x
a = 1.0 / k - 1.0
b = 1.0 / a
c = k + 1.0 / k - 2.0;
return 1.0 / (a - c * x) - b
</code></pre>
<p>As can be seen, it is not defined when <code>k = 1</code> so when that is the case, I simply return <code>x</code>. However, this special case handling is not enough since the function also behaves poorly when <code>x</code> is not equal to but very close to <code>1.0</code>. For example <code>sym_gamma(0.5, 1.00000001)</code> yields <code>0.0</code> while it's supposed to return something very close to <code>0.5</code>.</p>
<p>How can I achieve the same thing without the poor stability? I know that I can introduce a tolerance with respect to <code>k</code> equaling <code>1.0</code>, but it feels like a hack, and I also want to make sure that the function is perfectly smooth with regards to <code>k</code>.</p>
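<p>For context, substituting <code>a = (1-k)/k</code>, <code>b = k/(1-k)</code> and <code>c = (1-k)**2/k</code> and simplifying suggests the whole expression collapses to <code>k*x / (1 - (1-k)*x)</code>, which removes the near-cancelling subtraction around <code>k = 1</code>. A sketch (worth re-checking the algebra independently):</p>

```python
def sym_gamma_stable(x, k):
    # algebraically equal to 1/(a - c*x) - b after simplification,
    # with the near-cancelling terms removed
    return k * x / (1.0 - (1.0 - k) * x)

print(sym_gamma_stable(0.5, 1.00000001))  # ~0.5, not 0.0
```

No special case for <code>k == 1</code> is needed, since the denominator is exactly <code>1</code> there.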
|
<python><numerical-stability>
|
2023-06-20 19:18:38
| 2
| 2,162
|
Emil Sahlén
|
76,517,780
| 2,520,640
|
Is there an equivalent to %+% for plotnine?
|
<p>In R's ggplot2, you can modify the dataset for a saved plot with <code>%+%</code>. Is there an equivalent in Python's plotnine?</p>
<p>As an example, here is what this looks like in R:</p>
<pre><code>library(ggplot2)
df1 <- data.frame(x = 1:10, y = 1:10 * 2)
p1 <- ggplot(data = df1, aes(x = x, y = y)) + geom_line()
df2 <- data.frame(x = 1:10, y = 1:10 * 3)
p1 %+% df2 # produces the same plot using the df2 data.frame
</code></pre>
|
<python><plotnine>
|
2023-06-20 19:13:20
| 1
| 3,330
|
Jake Fisher
|
76,517,739
| 1,945,881
|
Visualize text with different colors
|
<p>I am trying to visualize DNA sequences and their assembly. I have been able to produce the image <a href="https://i.sstatic.net/ywX8J.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/ywX8J.png" alt="assembly of DNA fragments" /></a></p>
<p>with the code below. This is very difficult since I need to experiment with the x coordinates to get the sequences with different colors close to each other. I was wondering if there is any proper way of doing this? Thanks in advance!</p>
<pre><code>from revcomp import revcomp
import matplotlib.pyplot as plt
sequence = "ATGCGTGGACGTG"
complement = revcomp(sequence, reverse=False, complement=True)
seqlist_1 = ["5' ", sequence, " 3'"]
hydrobonds_1 = [" ", len(sequence)*"|"]
complist_1 = ["3' ", complement, " 5'"]
# Digest sequence like a 5' exonuclease
digest_1 = ["".join(complist_1)[0:10], 7*" ", "5'"]
sequence = "GACGTGAGTGTGACGTGACCCGGTTTT"
complement = revcomp(sequence, reverse=False, complement=True)
seqlist = ["5' ", sequence, " 3'"]
hydrobonds = [" ", len(sequence)*"|"]
complist = ["3' ", complement, " 5'"]
plt.plot()
plt.text(-0.04, 0.04, "".join(seqlist_1), color = "blue")
plt.text(-0.04, 0.035, "".join(complist_1)[0:10], color = "blue")
plt.text(-0.0235, 0.035, "".join(complist_1)[10:], color = "red")
plt.text(0.025, 0.04, "".join(seqlist)[9:], color = "green")
plt.text(0.01, 0.04, "".join(seqlist)[0:9], color = "red")
plt.text(0.01, 0.035, "".join(complist), color = "green")
plt.text(-0.04, 0.02, "".join(seqlist_1), color = "blue")
plt.text(-0.04, 0.015, "".join(complist_1)[0:10], color = "blue")
plt.text(0.025, 0.02, "".join(seqlist)[9:], color = "green")
plt.text(0.01, 0.015, "".join(complist), color = "green")
plt.text(-0.04, -0.01, "".join(seqlist_1)[:-2], color = "blue")
plt.text(-0.012, -0.01, "".join(seqlist)[9:], color = "green")
plt.text(-0.04, -0.015, "".join(complist_1)[0:10], color = "blue")
plt.text(-0.025, -0.015, "".join(complist)[2:], color = "green")
plt.text(0, 0.025, "5' exonuclease")
plt.text(0, 0, "Gibson assembly")
plt.axis('off')
plt.show()
</code></pre>
<p>The code uses the function below:</p>
<pre><code>def revcomp(dna, reverse=True, complement=True):
""" Takes a sequence of DNA and converts it to its
compliment, if only compliment is intended
set reverse to False """
bases = 'ATGCatgcWSRYMKwsrymkHBVDhbvdNnTACGTACGWSYRKMWSYRKMDVBHDVBHNN'
complement_dict = {} # build a dictionary that contains each base with its complement.
for i in range(30):
complement_dict[bases[i]] = bases[i+30]
if reverse: # if reverse is True, default is true
dna = reversed(dna)
result_as_list = None # define an empty list
if complement: # if complement is true, default is true
result_as_list = [complement_dict[base] for base in dna]
else:
result_as_list = [base for base in dna]
return ''.join(result_as_list)
</code></pre>
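<p>One way to avoid hand-tuning every x coordinate is to draw the sequence as consecutive runs with a monospace font, advancing the start position by a fixed per-character width. This is only a sketch under that fixed-width assumption; the helper name and the <code>char_w</code> value are made up and need tuning to the figure and font size:</p>

```python
import matplotlib.pyplot as plt

def colored_runs(ax, x, y, runs, char_w=0.012):
    """Draw (text, color) runs left to right; each run starts where the
    previous one ended. char_w is an assumed per-character width in axes
    coordinates (monospace font keeps character widths uniform)."""
    for text, color in runs:
        ax.text(x, y, text, color=color, family="monospace",
                transform=ax.transAxes)
        x += char_w * len(text)

fig, ax = plt.subplots()
colored_runs(ax, 0.05, 0.6, [("5' ATGCGTGGA", "blue"), ("CGTG 3'", "red")])
ax.axis("off")
```

Each call then only needs a starting coordinate per line, not a coordinate per color change.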
|
<python><matplotlib>
|
2023-06-20 19:04:15
| 0
| 2,224
|
Homap
|
76,517,478
| 8,869,570
|
Reassigning a passed by reference variable to a different type
|
<p>I don't quite understand when Python passes/copies variables by value versus by reference.</p>
<p>In this example, it seems <code>arg</code> is passed by reference, and a reference is held within the <code>MyClass</code> instance, so when <code>func</code> modifies <code>self.val</code> it also modifies <code>arg</code>.</p>
<pre><code>arg = [1,2,3]
class MyClass:
def func(self, arg):
self.val = arg
self.val[0] = -22
m = MyClass()
m.func(arg)
print(arg)
</code></pre>
<p>prints</p>
<pre><code>[-22, 2, 3]
</code></pre>
<p>In this case, it seems <code>self.val</code> gets overwritten by the <code>-22</code>, and nothing seems to happen to <code>arg</code>. So in this case, why does <code>arg</code> not become modified? Why is the list not changed to an int?</p>
<pre><code>arg = [1,2,3]
class MyClass:
def func(self, arg):
self.val = arg
self.val = -22
m = MyClass()
m.func(arg)
print(arg)
</code></pre>
<p>prints</p>
<pre><code>[1, 2, 3]
</code></pre>
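<p>The distinction in the two snippets above is mutation of a shared object versus rebinding a name, which a minimal sketch without classes also shows:</p>

```python
a = [1, 2, 3]
b = a          # both names reference the same list object
b[0] = -22     # mutating the object is visible through either name
print(a)       # [-22, 2, 3]

b = -22        # rebinding: b now references a different object entirely
print(a)       # [-22, 2, 3] -- the list is untouched
print(b)       # -22
```

Assignment never copies or converts the object a name used to point at; it only repoints the name.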
|
<python><reference>
|
2023-06-20 18:21:03
| 1
| 2,328
|
24n8
|
76,517,202
| 5,416,228
|
json_normalize to transform several nested lists in pandas dataframe
|
<p>I have a dictionary where one of the elements is a list of several lists. I am trying to use <code>json_normalize</code> to transform some of the elements into a pandas dataframe.</p>
<p>Reproducible dict</p>
<pre><code>my_dict = {'policyId':['123'],
'Elements':[[{'id': '100',
'coverages': [{'id': 'ABC',
'premiums': {'var1': {},
'var2': {}},
'quoteDetails': [{'id': 'PRICE',
'tags': ['SOMETHING'],
'value': 150.0,
'modifiable': False},
{'id': 'DISCOUNT',
'tags': ['SOMETHING'],
'value': 10.0,
'modifiable': False}]}],
'mandatory': True,
'selected': True},
{'id': '101',
'coverages': [{'id': 'DEF',
'premiums': {'var1': {},
'var2': {}},
'quoteDetails': [{'id': 'PRICE',
'tags': ['SOMETHING'],
'value': 200.0,
'modifiable': False},
{'id': 'DISCOUNT',
'tags': ['SOMETHING'],
'value': 15.0,
'modifiable': False}]}],
'mandatory': True,
'selected': True}
]]
}
</code></pre>
<p>I already tried using <code>json_normalize</code> in several ways but I still cannot get the expect result.</p>
<p>This is my desired result</p>
<pre><code>policyId coverages PRICE DISCOUNT
123 ABC 150.0 10.0
123 DEF 200.0 15.0
</code></pre>
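<p>Since the <code>quoteDetails</code> ids need to become columns, a single <code>json_normalize</code> call struggles with this shape; a plain-loop sketch (using a trimmed copy of the dict above) that builds the desired frame directly:</p>

```python
import pandas as pd

# trimmed copy of my_dict, keeping only the fields the target table needs
my_dict = {
    "policyId": ["123"],
    "Elements": [[
        {"id": "100", "coverages": [{"id": "ABC", "quoteDetails": [
            {"id": "PRICE", "value": 150.0}, {"id": "DISCOUNT", "value": 10.0}]}]},
        {"id": "101", "coverages": [{"id": "DEF", "quoteDetails": [
            {"id": "PRICE", "value": 200.0}, {"id": "DISCOUNT", "value": 15.0}]}]},
    ]],
}

rows = []
for elements in my_dict["Elements"]:
    for element in elements:
        for cov in element["coverages"]:
            row = {"policyId": my_dict["policyId"][0], "coverages": cov["id"]}
            # pivot each quoteDetail id (PRICE, DISCOUNT, ...) into a column
            for qd in cov["quoteDetails"]:
                row[qd["id"]] = qd["value"]
            rows.append(row)

df = pd.DataFrame(rows)
print(df)
#   policyId coverages  PRICE  DISCOUNT
# 0      123       ABC  150.0      10.0
# 1      123       DEF  200.0      15.0
```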
|
<python><json><pandas>
|
2023-06-20 17:40:36
| 1
| 675
|
Rods2292
|
76,517,104
| 10,200,497
|
finding the first value that is out of a range that is defined by other columns
|
<p>This is my dataframe:</p>
<pre><code>df = pd.DataFrame(
{
'date': list('abcdefghi'),
'open': [6, 8, 12, 5, 6, 22, 19, 1, 3],
'high': [10, 12, 20, 5, 2, 44, 11, 12, 5],
'low': [5, 7, 3, 1, 3, 18, 12, 1, 7],
}
)
</code></pre>
<p>And this is the output that I want:</p>
<pre><code> date open high low x
0 a 6 10 5 c
1 b 8 12 7 NaN
2 c 12 20 3 f
3 d 5 5 1 NaN
4 e 6 2 3 NaN
5 f 22 44 18 h
6 g 19 11 12 NaN
7 h 1 12 1 NaN
8 i 3 5 7 NaN
</code></pre>
<p>I want to add column <code>x</code>. We start from first row. A range is defined by using columns <code>high</code> and <code>low</code>. Since we have started from first row the range is [5, 10]. Now I have to find the first open that is out of this range. There is 12 in third row. So in column <code>x</code> we put the <code>date</code> that the first out-of-range open occurred which is <code>c</code>.</p>
<p>Now our new range is defined in the row that has the date of <code>c</code>. So the new range is [3, 20]. We have to find the first open that is out of this range now. It occurs at the row that its <code>date</code> is <code>f</code>.</p>
<p>Now like the above process our range is defined by the row that its <code>date</code> is <code>f</code>. So our new range is [18, 44]. The first open that is out of this range occurs at the row that its <code>date</code> is <code>h</code>. So we put <code>h</code> in column <code>x</code>.</p>
<p>I have tried to use two masks to find the first out-of-range <code>open</code> but it didn't work. This is my try:</p>
<pre><code>mask1 = (df.open > df.high)
mask2 = (df.open < df.low)
df.loc[mask1.cumsum().eq(1) | mask2.cumsum().eq(1), 'x'] = df.date
</code></pre>
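<p>Because each new range depends on the row where the previous out-of-range open was found, the scan is inherently sequential, so vectorized masks are hard to apply; a plain-loop sketch over the frame above:</p>

```python
import pandas as pd

df = pd.DataFrame({
    "date": list("abcdefghi"),
    "open": [6, 8, 12, 5, 6, 22, 19, 1, 3],
    "high": [10, 12, 20, 5, 2, 44, 11, 12, 5],
    "low":  [5, 7, 3, 1, 3, 18, 12, 1, 7],
})

x = [None] * len(df)
i = 0
while True:
    lo, hi = df.at[i, "low"], df.at[i, "high"]
    # rows after i whose open falls outside the current [lo, hi] range
    outside = ((df["open"] > hi) | (df["open"] < lo)).to_numpy()
    later = df.index.to_numpy() > i
    candidates = df.index[outside & later]
    if len(candidates) == 0:
        break
    j = candidates[0]        # first out-of-range open
    x[i] = df.at[j, "date"]  # record its date, then restart the range there
    i = j

df["x"] = x
```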
|
<python><pandas>
|
2023-06-20 17:25:24
| 2
| 2,679
|
AmirX
|
76,517,083
| 5,568,409
|
How to name the marker size in scatter plot?
|
<p>Recently, I used the following program (which has no particular interest, other than personal...):</p>
<pre><code>fig, ax = plt.subplots(1, 1, figsize = (6,3))
N = 100
mu = [4,3]
cov = [[0.25,0.50],[0.50, 2.50]]
rng = np.random.default_rng(seed = 1949)
draw = partial(pm.draw, random_seed = rng)
x_draws = draw(pm.MvNormal.dist(mu = mu, cov = cov), draws = N)
ax.scatter(x_draws[:, 0],
x_draws[:, 1],
color = "darkblue",
markersize = 5,
alpha = 1)
plt.show()
</code></pre>
<p>If you run it, you'll get an error:</p>
<pre><code>PathCollection.set() got an unexpected keyword argument 'markersize'
</code></pre>
<p>It means that I can use <code>c =</code> <strong>or</strong> <code>color =</code> for specifying the color, but I absolutely need to use <code>s =</code> <strong>and not</strong> <code>markersize =</code> for specifying the size of the marker...?</p>
<p>Isn't that inconsistent?</p>
<p>Can I replace <code>s =</code> with another denomination if I want to? Or do I have to stay "stuck" to <code>s =</code> ?</p>
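<p>For what it's worth, the two APIs are simply different: <code>plt.plot</code> takes <code>markersize</code> (a diameter in points), while <code>plt.scatter</code> takes <code>s</code> (an area in points squared) with no alias, so <code>s=</code> is required there. A quick side-by-side sketch (<code>s=25</code> roughly matches <code>markersize=5</code>, since 5&#178; = 25):</p>

```python
import matplotlib.pyplot as plt

fig, ax = plt.subplots()
ax.scatter([1, 2, 3], [1, 4, 9], s=25, color="darkblue")  # s: area in points^2
ax.plot([1, 2, 3], [1, 4, 9], "o", markersize=5)          # diameter in points
```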
|
<python><matplotlib>
|
2023-06-20 17:21:45
| 1
| 1,216
|
Andrew
|
76,516,845
| 4,502,950
|
unpivot multilevel index pandas
|
<p>I have data frames as shown below. What I want to do is unpivot a data frame that has a multi-level index. This is what I have tried so far:</p>
<pre><code> df = pd.DataFrame([[2016, 2016, 2015, 2015],
['Dollar Sales', 'Unit Sales', 'Dollar Sales', 'Unit Sales'],
[1, 2, 3, 4], [5, 6, 7, 8]], columns=[*'ABCD'])
df['Dates'] = ['date','Dates','10/12','06/08']
new_labels = pd.MultiIndex.from_frame(df.iloc[:2].T.astype(str), names=['Year', 'Sales'])
df1 = df.set_axis(new_labels, axis=1).iloc[2:]
df1 = df1.stack()
df1 = df1.reset_index()
</code></pre>
<p>The result I get is</p>
<pre><code>Year level_0 Sales 2015 2016 date
0 2 Dates NaN NaN 10/12
1 2 Dollar Sales 3 1 NaN
2 2 Unit Sales 4 2 NaN
3 3 Dates NaN NaN 06/08
4 3 Dollar Sales 7 5 NaN
5 3 Unit Sales 8 6 NaN
</code></pre>
<p>However, what I want the end result to look like is</p>
<pre><code>Year level_0 Sales 2015 2016 Dates
1 2 Dollar Sales 3 1 10/12
2 2 Unit Sales 4 2 10/12
4 3 Dollar Sales 7 5 06/08
5 3 Unit Sales 8 6 06/08
</code></pre>
<p>How can I achieve this?</p>
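<p>Starting from the intermediate result already printed above, one sketch is to broadcast each group's date to every row of that group and then drop the helper rows (the column names are taken from the printed output):</p>

```python
import pandas as pd

# intermediate frame as printed above
res = pd.DataFrame({
    "level_0": [2, 2, 2, 3, 3, 3],
    "Sales": ["Dates", "Dollar Sales", "Unit Sales"] * 2,
    "2015": [None, 3, 4, None, 7, 8],
    "2016": [None, 1, 2, None, 5, 6],
    "date": ["10/12", None, None, "06/08", None, None],
})

# spread each group's date value to every row of the group ...
res["Dates"] = res.groupby("level_0")["date"].transform(lambda s: s.ffill().bfill())
# ... then drop the now-redundant helper rows and column
out = res[res["Sales"] != "Dates"].drop(columns="date")
```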
|
<python><pandas>
|
2023-06-20 16:47:47
| 1
| 693
|
hyeri
|
76,516,705
| 19,130,803
|
Sklearn: How to write custom transformers for handling column names
|
<p>I am trying to build a pipeline for cleaning operations. I have written custom transformers for each operations for eg:</p>
<pre><code>Class RemoveDuplicateRows(BaseEstimator, TransformerMixin):
def __init__(self):
# init
def fit(self, X, y=None):
return self
def transform(self, X):
return X.drop_duplicates()
class RemoveDuplicateColumns(BaseEstimator, TransformerMixin):
def __init__(self):
self.duplicate_column_names = list()
def fit(self, X, y=None):
self.duplicate_column_names_ = # logic_to_find_duplicate column names
return self
def transform(self, X):
return X.drop(self.duplicate_column_names_, axis=1)
class FillOutliers(BaseEstimator, TransformerMixin):
def __init__(self):
self.numeric_columns_ = list()
def fit(self, X, y=None):
self.numeric_columns_ = X.select_dtypes(include=[np.number]).columns.tolist()
# logic to find outliers using IQR
return self
def transform(self, X):
# logic
return X
</code></pre>
<p>Individually, they all running fine. But problem occurs, when I put them in pipeline, getting error as</p>
<pre><code>steps = [
("remove_duplicate_rows", RemoveDuplicateRows()),
("remove_duplicate_columns", RemoveDuplicateColumns()),
("fill_outliers", FillOutliers()),
]
pipe = Pipeline(steps=steps)
pipe.fit(X=df)
# saving the pipe
# loading the pipe
pipe.transform(X=df)
ValueError: Length mismatch: Expected axis has x elements, new values have y elements
</code></pre>
<p>My guess: say I have 4 numerical columns in the dataframe and <code>RemoveDuplicateColumns</code> removes 2 of them; the next step, <code>FillOutliers</code>, has still learned 4 numerical columns and their column names.</p>
<p>Do I need to manually handle the column names in each transformers?</p>
|
<python><scikit-learn>
|
2023-06-20 16:25:19
| 1
| 962
|
winter
|
76,516,700
| 4,629,950
|
Pandas - Subtract two dataframes with left join instead of union / outer join?
|
<p>I have two dataframes that contain the same column names, but mismatched row indices. I want to subtract them from each other, but keep the rows from the left one only. This is equivalent to a left-join operation, but instead of adding new columns to my dataframe, I want to substract the values from each other.</p>
<p>Here is an example - but I do not want to add the row <code>square</code> to my result!</p>
<pre><code>df1 = pd.DataFrame({'angles': [0, 3, 4],
'degrees': [360, 180, 360]},
index=['circle', 'triangle', 'rectangle'])
df2 = pd.DataFrame({'angles': [1, 2, 3],
'degrees': [370, 200, 20]},
index=['square', 'triangle', 'rectangle'])
df1.sub(df2)
</code></pre>
<p><a href="https://i.sstatic.net/pduAk.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/pduAk.png" alt="enter image description here" /></a></p>
<p>Pandas <a href="https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.subtract.html" rel="nofollow noreferrer">docs</a> are clear about what happens: If indices mismatch, "union" (effectively an outer join) will be done.</p>
<p>To me, it looks like there is an option <code>how</code> missing, where I can specify <code>left</code> instead of union.</p>
<p>Am I missing something? Is there another function that does what I want, or do I need to string commands together?</p>
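<p>A sketch of one workaround: reindex <code>df2</code> onto <code>df1</code>'s index before subtracting, so rows present only in <code>df2</code> (like <code>square</code>) never enter the result:</p>

```python
import pandas as pd

df1 = pd.DataFrame({"angles": [0, 3, 4], "degrees": [360, 180, 360]},
                   index=["circle", "triangle", "rectangle"])
df2 = pd.DataFrame({"angles": [1, 2, 3], "degrees": [370, 200, 20]},
                   index=["square", "triangle", "rectangle"])

# align df2 onto df1's index: square is dropped, circle becomes NaN
result = df1.sub(df2.reindex(df1.index))
```

Rows present only in <code>df1</code> (here <code>circle</code>) come out as NaN; passing <code>fill_value=0</code> to <code>sub</code> would instead treat the missing right-hand values as zero.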
|
<python><pandas><dataframe><join><subtraction>
|
2023-06-20 16:24:36
| 1
| 5,154
|
Thomas
|
76,516,673
| 21,404,794
|
How can I check in a pandas dataframe for a different values in different columns at the same time?
|
<p>Let's say we have a pandas.df with a bunch of columns filled with numbers between 0 and 1, like so:</p>
<pre class="lang-py prettyprint-override"><code>import pandas as pd
df = pd.DataFrame({'col1':[0.1,0.2,0.3,0.4],
'col2':[0.2,0.3,0.4,0.5],
'col3':[0.3,0.4,0.1,0.2]})
</code></pre>
<p>and then, a list of numbers I want to check if they exist in the dataframe, for example</p>
<pre><code>vals = [0.1,0.2,0.3]
</code></pre>
<p>Where the first value should be checked in <code>col1</code>, the second value in <code>col2</code> and so on.</p>
<p>Is there a way to check the list of values against each column easily?</p>
<p>If it's one column we want to check, the answer is easy, we can just do</p>
<pre class="lang-py prettyprint-override"><code>df.loc[df['col1'] == vals[0]]
</code></pre>
<p>and that returns the first row</p>
<p>|col1| col2| col3|
|-|-|-|
|0.1 | 0.2 |0.3|
which matches the requirements</p>
<p>The obvious next step would be doing something like:</p>
<pre class="lang-py prettyprint-override"><code>df.loc[df['col1'] == vals[0] & df['col2'] == vals[1] & df['col3'] == vals[2]]
</code></pre>
<p>which doesn't work because you need the <code>.any()</code> function to avoid it being a Series (with an ambiguous truth value). Adding <code>.any()</code></p>
<pre class="lang-py prettyprint-override"><code>df[(df['col1'] == vals[0]).any() and (df['col2'] == vals[1]).any() and (df['col3'] == vals[2]).any()]
</code></pre>
<p>doesn't solve the problem, as it will return <code>KeyError: True</code></p>
<p>So, is there a way to check different values in different columns (they should appear in the same row)?</p>
<p>PS: It would be best if it does not need to manually write the columns as in <code>df.loc[df['col1'] == vals[0]]</code>, the dataframe I'm using has more than 20 columns and it would be really annoying.</p>
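<p>Comparing the whole frame to the list broadcasts element-wise — first list element against the first column, and so on — so no column names have to be written out. A sketch (note that exact float equality is fragile in general; <code>np.isclose</code> is the safer variant):</p>

```python
import pandas as pd

df = pd.DataFrame({"col1": [0.1, 0.2, 0.3, 0.4],
                   "col2": [0.2, 0.3, 0.4, 0.5],
                   "col3": [0.3, 0.4, 0.1, 0.2]})
vals = [0.1, 0.2, 0.3]

# (df == vals) compares each row against the list positionally;
# all(axis=1) keeps only rows where every column matched
match = df[(df == vals).all(axis=1)]
```

For a tolerance-based check, <code>df[np.isclose(df, vals).all(axis=1)]</code> works the same way.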
|
<python><pandas><dataframe>
|
2023-06-20 16:20:46
| 2
| 530
|
David Siret Marqués
|
76,516,439
| 1,874,170
|
Collecting return types from vararg Callable for use elsewhere in signature?
|
<p>I have the following function signature:</p>
<pre class="lang-py prettyprint-override"><code>Y = Annotated[TypeVar("Y"), "That which might be yielded"]
R = Annotated[TypeVar("R"), "That which might be returned"]
def teeinto_constantmemory(
it: Iterable[Y],
*consumer_callables: Callable[[Iterable[Y]], R]
) -> Tuple[R, ...]: ...
</code></pre>
<p>This function signature is obviously inadequate.</p>
<ol>
<li><p>It doesn't indicate that the length of the return value is equal to the quantity of <code>consumer_callables</code>.</p>
</li>
<li><p>Further, it doesn't indicate that the <em>n</em>-th element of the return tuple has the same type as <strong>the return type of</strong> the <em>n</em>-th element of <code>consumer_callables</code> (especially when the callables return different types from each other).</p>
</li>
</ol>
<p>I already checked out <a href="https://peps.python.org/pep-0646/" rel="nofollow noreferrer">PEP 646</a>, but the closest section they had to relevant was “Type Variable Tuples with <code>Callable</code>”, which I couldn't see how to apply to this situation.</p>
<p>Have I missed the solution, or is this use-case actually not yet addressed by Python's type hinting ecosystem?</p>
<hr />
<p>For example, if you call the function with five positional arguments, of types</p>
<ol>
<li><p><code>Iterable</code></p>
</li>
<li><p><code>Callable[[Iterable], int]</code></p>
</li>
<li><p><code>Callable[[Iterable], float]</code></p>
</li>
<li><p><code>Callable[[Iterable], str]</code></p>
</li>
<li><p><code>Callable[[Iterable], FooType]</code></p>
</li>
</ol>
<p>then the return value will be of type</p>
<ul>
<li><code>Tuple[int, float, str, FooType]</code></li>
</ul>
|
<python><python-typing><variadic-functions>
|
2023-06-20 15:46:39
| 1
| 1,117
|
JamesTheAwesomeDude
|
76,516,437
| 3,734,568
|
Removal of adjacent Duplicate word/phrase from string having accented characters
|
<p>I am trying to remove duplicate word / phrases from string.</p>
<p>For example if I have below string</p>
<p>"normalement on <strong>on</strong> on va, <strong>on va</strong> diviser, générique <strong>générique</strong> générique l'explication, <strong>générique l'explication</strong> détaille, <strong>détaille</strong>"</p>
<p>I want to remove the duplicate phrase "on va" after the comma and the duplicate phrase "générique l'explication" after the comma in the string above, as well as the duplicate consecutive single words "on" and "générique".
I tried the two approaches below, but they only seem to work on single words with no punctuation in between:</p>
<pre><code>>>> import re
>>> s = "normalement on on on va, on va diviser, générique générique l'explication, générique l'explication détaille, détaille"
>>> re.sub(r'\b(.+)(\s+\1\b)+', r'\1', s)
"normalement on va, on va diviser, générique l'explication, générique l'explication détaille, détaille"
>>> sen="normalement on on on va, on va diviser, générique générique l'explication, générique l'explication détaille, détaille"
>>> re.sub(r"\b([a-zA-z àâäèéêëîïôœùûüÿçÀÂÄÈÉÊËÎÏÔŒÙÛÜŸÇ']+\s *)\1{1,}", '\\1', sen, flags=re.IGNORECASE)
"normalement on va, on va diviser, générique l'explication, générique l'explication détaille, détaille"
</code></pre>
<p>Can anyone help me with this and advise how I can remove adjacent duplicate words/phrases appearing with and without punctuation?</p>
|
<python><regex>
|
2023-06-20 15:46:01
| 2
| 1,481
|
user3734568
|
76,516,420
| 955,273
|
pybind11 - memcpy array of values to numpy array
|
<p>I have a vector of vector of doubles which I want to return as a 2D numpy array using pybind11.</p>
<p>I am looking for an efficient way to copy the data from each inner vector to the appropriate strided point in the numpy array.</p>
<p>Below is an example which works, but it iterates over every element, setting 1 double at a time:</p>
<pre class="lang-cpp prettyprint-override"><code>#include <pybind11/numpy.h>
#include <pybind11/pybind11.h>
#include <vector>
namespace py = pybind11;
using PyValues = py::array_t<double, py::array::c_style | py::array::forcecast>;
PyValues vals()
{
using Row = std::vector<double>;
using Rows = std::vector<Row>;
Rows rows = {
{ 1., 11.1, 101 },
{ 2., 12.2, 102 },
{ 3., 13.3, 103 },
{ 4., 14.4, 104 },
};
const std::size_t num_rows = rows.size();
const std::size_t num_cols = rows[0].size();
PyValues values = PyValues({num_rows,num_cols});
auto mu = values.mutable_unchecked();
for (std::size_t i = 0; i < num_rows; i++)
for (std::size_t j = 0; j < num_cols; j++)
mu(i, j) = rows[i][j];
return values;
}
</code></pre>
<p>I am looking to replace this nested loop:</p>
<pre class="lang-cpp prettyprint-override"><code>for (std::size_t i = 0; i < num_rows; i++)
for (std::size_t j = 0; j < num_cols; j++)
mu(i, j) = rows[i][j];
</code></pre>
<p>Looking at the internals of <code>pybind11::numpy</code>, I can see that <code>mutable_unchecked</code> returns an object which has the following member:</p>
<pre class="lang-cpp prettyprint-override"><code>/// Mutable pointer access to the data at the given indices.
template <typename... Ix>
T *mutable_data(Ix... ix)
{
return &operator()(ssize_t(ix)...);
}
</code></pre>
<p>As such, I tried to do the following:</p>
<pre><code>for (std::size_t i = 0; i < num_rows; ++i)
std::memcpy(mu.mutable_data(i,0), rows[i].data(), num_cols);
</code></pre>
<p>However, this is not working as I expected, instead my numpy array has garbage in it, so presumably my access indices are incorrect?</p>
<p>I've played around with swapping the indices to no avail.</p>
<p>How can I do a block copy of each of my inner vectors into my numpy array?</p>
|
<python><c++><numpy><pybind11>
|
2023-06-20 15:44:04
| 0
| 28,956
|
Steve Lorimer
|
76,516,191
| 4,145,798
|
twinx or twinx-like supylabel for matplotlib subplots?
|
<p>Given the subplots setup shown below, is there any possibility to add a super y label at the right side of the plot analogously to the 'Green label'?</p>
<pre><code>import matplotlib.pyplot as plt
fig, axs = plt.subplots(2, 2, sharex=True)
axs[0,0].tick_params(axis ='y', labelcolor = 'g')
t = axs[0,0].twinx()
t.tick_params(axis ='y', labelcolor = 'b')
axs[0,1].tick_params(axis ='y', labelcolor = 'g')
axs[0,1].twinx().tick_params(axis ='y', labelcolor = 'b')
axs[1,0].tick_params(axis ='y', labelcolor = 'g')
axs[1,0].twinx().tick_params(axis ='y', labelcolor = 'b')
axs[1,1].tick_params(axis ='y', labelcolor = 'g')
axs[1,1].twinx().tick_params(axis ='y', labelcolor = 'b')
fig.supylabel('Green label', color='g')
plt.tight_layout()
</code></pre>
<p><a href="https://i.sstatic.net/CwZZJ.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/CwZZJ.jpg" alt="enter image description here" /></a></p>
|
<python><matplotlib><subplot><twinx>
|
2023-06-20 15:17:26
| 1
| 649
|
corinna
|
76,516,185
| 8,176,763
|
redis as celery results backend and broker using redis in docker
|
<p>I am going through the celery tutorial and stepped upon a problem when trying to configure my results backend. I would like to use redis for both results backend and as a broker.</p>
<p>So I started redis with dockers as follows:</p>
<p><code>docker run -d -p 6379:6379 redis</code></p>
<p>Then I start my app as:</p>
<pre><code>from celery import Celery
app = Celery('tasks', backend='redis://localhost:6379/0', broker='redis://localhost:6379/0')
@app.task
def add(x,y):
return x + y
</code></pre>
<p>but upon trying few commands:</p>
<pre><code>>>> res = add.delay(5,5)
>>> res
<AsyncResult: a10b81dd-b27d-47e8-9030-8361a8ce18c9>
>>> res.get(timeout=1)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/Users/alex/mambaforge/envs/new_env/lib/python3.11/site-packages/celery/result.py", line 247, in get
File "/Users/alex/mambaforge/envs/new_env/lib/python3.11/site-packages/celery/backends/base.py", line 755, in wait_for_pending
File "/Users/alex/mambaforge/envs/new_env/lib/python3.11/site-packages/celery/backends/base.py", line 1104, in _is_disabled
NotImplementedError: No result backend is configured.
Please see the documentation for more information.
</code></pre>
|
<python><redis><celery>
|
2023-06-20 15:17:00
| 2
| 2,459
|
moth
|
76,515,914
| 10,853,071
|
Plotting timeseries graph with plotly
|
<p>I have some transaction data containing product sold, datetime and value.
I am organizing this information on a daily graph with 5-minute cumulative values using plotly.
I can create a 100% "OK" graph like this:</p>
<pre><code>df = teste.groupby([pd.Grouper(key='data', freq='5min', origin='start_day', convention = 'start', dropna = True, sort=True, closed = 'left')]).aggregate({'gmv' :'sum'}).reset_index()
df.sort_values(by='data', inplace=True)
dti = pd.date_range(df['data'].min().normalize(), df['data'].max(), freq='5min', name='data')
df = df.set_index('data').reindex(dti, fill_value=0).reset_index()
df["cum_sale"]=df.groupby([df['data'].dt.date])['gmv'].cumsum(axis=0)
df['time'] = df['data'].dt.time
df['date'] = df['data'].dt.date
fig = px.line(df, x="time", y="cum_sale", color="date")
fig.show()
</code></pre>
<p>This code achieve this graphics</p>
<p><a href="https://i.sstatic.net/5MX3S.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/5MX3S.png" alt="enter image description here" /></a></p>
<p>Recently there were some days with a discount on certain products, so I want to visualize those days in a different color from all the others.</p>
<p>I've just added this code to flag which days are promotional and which are not.</p>
<pre><code>df['promo'] = False
df.loc[df['date'].isin([date(2023,6,16), date(2023,6,17), date(2023,6,18)]), 'promo'] = True
fig = px.line(df, x="time", y="cum_sale", color="promo")
fig.show()
</code></pre>
<p>After that, I tried plotting using the "promo" column for color instead of "date", but plotly creates some stray lines across the graph and I can't remove them.</p>
<p><a href="https://i.sstatic.net/eVSVI.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/eVSVI.png" alt="enter image description here" /></a></p>
<p>Any tip?</p>
<h2>Update</h2>
<p>After updating my code with the answer, the stray lines disappear:</p>
<p><a href="https://i.sstatic.net/5oGmU.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/5oGmU.png" alt="enter image description here" /></a></p>
|
<python><plotly>
|
2023-06-20 14:43:47
| 1
| 457
|
FábioRB
|
76,515,904
| 10,755,032
|
Selenium - Getting the Text in span tag
|
<p>I am trying to get the followers count from Instagram using Selenium. The structure in which the followers count appears is roughly as follows (this is just to give you an idea; please check the Instagram website with the inspect tool):</p>
<pre><code>[...]
<span class="_ac2a">
<span> 216 </span>
</span>
[...]
</code></pre>
<p>The above is the rough structure. I want <code>216</code>. When I try the following code I get <code>[]</code> as the result. The code:</p>
<pre><code> username = self.username
driver.get(f"https://www.instagram.com/{username}/")
try:
#html = self.__scrape_page()
#page = self.__parse_page(html)
#followers = page.find_all("meta", attrs={"name": "description"})
followers = driver.find_elements(By.XPATH, '//span[@class="_ac2a"]/span')
return followers
# followers_count = (followers[0]["content"].split(",")[0].split(" ")[0])
# return {
# "data": followers_count,
# "message": f"Followers found for user {self.username}",
# }
except Exception as e:
message = f"{self.username} not found!"
return {"data": None, "message": message}
</code></pre>
<p>How do I get the followers?</p>
|
<python><python-3.x><selenium-webdriver><web-scraping>
|
2023-06-20 14:42:22
| 1
| 1,753
|
Karthik Bhandary
|
76,515,900
| 11,644,523
|
Pyspark / Snowpark SQL Error: Cumulative window frame unsupported for function LAG,
|
<p>I have a bunch of dataframes all containing the same columns; <code>id, current_ind, change_date</code>.</p>
<p>Sample Input 1:</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>id</th>
<th>current_ind</th>
<th>change_date</th>
</tr>
</thead>
<tbody>
<tr>
<td>1</td>
<td>1</td>
<td>2021-06-20</td>
</tr>
<tr>
<td>1</td>
<td>2</td>
<td>2022-06-20</td>
</tr>
<tr>
<td>1</td>
<td>5</td>
<td>2023-06-20</td>
</tr>
</tbody>
</table>
</div>
<p>Sample Input 2:</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>id</th>
<th>current_ind</th>
<th>change_date</th>
</tr>
</thead>
<tbody>
<tr>
<td>2</td>
<td>2</td>
<td>2021-04-20</td>
</tr>
<tr>
<td>2</td>
<td>3</td>
<td>2022-05-20</td>
</tr>
<tr>
<td>2</td>
<td>4</td>
<td>2023-06-20</td>
</tr>
</tbody>
</table>
</div>
<p>I want to combine all of them, to create a new dataframe with the following output:</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>id</th>
<th>old_ind</th>
<th>new_ind</th>
<th>change_date</th>
</tr>
</thead>
<tbody>
<tr>
<td>1</td>
<td>0</td>
<td>1</td>
<td>2021-06-20</td>
</tr>
<tr>
<td>1</td>
<td>1</td>
<td>2</td>
<td>2022-06-20</td>
</tr>
<tr>
<td>1</td>
<td>2</td>
<td>5</td>
<td>2023-06-20</td>
</tr>
<tr>
<td>2</td>
<td>0</td>
<td>2</td>
<td>2021-04-20</td>
</tr>
<tr>
<td>2</td>
<td>2</td>
<td>3</td>
<td>2022-05-20</td>
</tr>
<tr>
<td>2</td>
<td>3</td>
<td>4</td>
<td>2023-06-20</td>
</tr>
</tbody>
</table>
</div>
<p>What I tried:</p>
<pre><code>union_df = df1.union(df2)
window_spec = Window.partitionBy('id').orderBy('change_date')
df = union_df.withColumn('old_ind', lag(col("current_ind")).over(window_spec))
</code></pre>
<p>This gives the correct <code>old_ind</code>. But I do not know how to get the <code>new_ind</code>.</p>
<pre><code>df = df.withColumn('new_ind', lag(col("current_ind")).over(prev_win.rowsBetween(Window.currentRow, Window.unboundedFollowing)))
df.select("id", "old_ind", "new_ind", "change_date")
</code></pre>
<p>This returns an error: <code>Cumulative window frame unsupported for function LAG</code></p>
<p>Or is there another way without using Window functions too?</p>
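<p>One way to avoid the second window entirely (a sketch, not verified against the questioner's environment): <code>lag</code> accepts a default value, so <code>lag("current_ind", 1, 0).over(window_spec)</code> yields <code>old_ind</code> with 0 for the first row of each id, and <code>new_ind</code> is simply <code>current_ind</code> renamed. The same logic illustrated in pandas, so the example is self-contained:</p>

```python
import pandas as pd

# the union of the two sample inputs
df = pd.DataFrame({
    "id": [1, 1, 1, 2, 2, 2],
    "current_ind": [1, 2, 5, 2, 3, 4],
    "change_date": ["2021-06-20", "2022-06-20", "2023-06-20",
                    "2021-04-20", "2022-05-20", "2023-06-20"],
})

df = df.sort_values(["id", "change_date"])
# old_ind: previous current_ind within each id, defaulting to 0
# (the pandas analogue of lag(col("current_ind"), 1, 0).over(window_spec))
df["old_ind"] = df.groupby("id")["current_ind"].shift(1, fill_value=0)
# new_ind is just current_ind under a new name
out = df.rename(columns={"current_ind": "new_ind"})[
    ["id", "old_ind", "new_ind", "change_date"]
]
print(out.to_string(index=False))
```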
|
<python><dataframe><pyspark><snowflake-cloud-data-platform>
|
2023-06-20 14:42:12
| 1
| 735
|
Dametime
|
76,515,866
| 11,400,016
|
Dynamically change color of enlighten status bar
|
<p>I would like to have a status bar that dynamically changes color without having to remake the bar every time.</p>
<pre><code>import enlighten
manager = enlighten.get_manager()
status_bar = manager.status_bar("Starting",
color="white_on_blue",
justify=enlighten.Justify.CENTER,
autorefresh=True,
)
status_bar.update("Working", color= "white_on_red")
</code></pre>
<p>Is there a way of doing this using the update method?</p>
|
<python>
|
2023-06-20 14:38:39
| 1
| 450
|
DDD1
|
76,515,800
| 11,922,237
|
The difference between `poetry add` and `poetry install`
|
<p>I thought that <code>poetry add package</code> would simply add the package to <code>pyproject.toml</code>, but it seems it doesn't just add it but also installs it in a virtual environment.</p>
<p>But what does <code>poetry install</code> do? When I run it after I added the deps with <code>add</code>, I am getting the following message:</p>
<pre><code>Installing dependencies from lock file
No dependencies to install or update
</code></pre>
<p>Note that I started a project from scratch with <code>mkdir new_dir; cd new_dir; poetry init</code>.</p>
|
<python><dependencies><python-poetry>
|
2023-06-20 14:29:50
| 2
| 1,966
|
Bex T.
|
76,515,734
| 17,638,206
|
Extracting Arabic number from a text file
|
<p>I have a text file that includes ")رقم : ٤٢٢٧ ٢٢٤" . I am using this code to extract ٢٢٤٤٢٢٧ :</p>
<pre><code> arabic_num = re.search(r':([\d\s]+)', text, re.UNICODE)
arabic_num = arabic_num.group(1)
arabic_num = arabic_num.replace(' ', '')
</code></pre>
<p>But the output is wrong: <code>٤٢٢٧٢٢٤</code>. This happens when I remove the space between <code>٢٢٤</code> and <code>٤٢٢٧</code>. How can I fix it, keeping in mind that any Arabic digits can appear between <code>:</code> and <code>)</code>, and sometimes the number in the text file doesn't include a space between its digit groups?</p>
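<p>Two things may be at play here (hedged, since the exact bytes in the file aren't shown). First, <code>\d</code> already matches Arabic-Indic digits, but restricting the class to <code>[\u0660-\u0669]</code> plus spaces makes the intent explicit and avoids capturing unrelated whitespace. Second, the apparent digit reordering is most likely a right-to-left <em>display</em> effect: the logical order of characters stored in the file can differ from the visual order on screen, so it is worth verifying which number is actually stored (e.g. via <code>int()</code>, which parses Arabic-Indic digits directly). A sketch using escape sequences so the stored order is unambiguous:</p>

```python
import re

# "\u0664\u0662\u0662\u0667 \u0662\u0662\u0664" is the digit run from the question,
# written with escapes so the stored (logical) order is unambiguous
text = ")\u0631\u0642\u0645 : \u0664\u0662\u0662\u0667 \u0662\u0662\u0664"

# capture a run of Arabic-Indic digits (optionally space-separated) after ':'
m = re.search(r':\s*([\u0660-\u0669][\u0660-\u0669 ]*)', text)
digits = m.group(1).strip().replace(' ', '')
# int() understands Arabic-Indic digits directly
print(int(digits))
```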
|
<python><regex><string><ocr>
|
2023-06-20 14:21:47
| 1
| 375
|
AAA
|
76,515,723
| 1,473,517
|
How to show progress of differential_evolution without recomputing the objective function
|
<p>I want to show the progress of <a href="https://docs.scipy.org/doc/scipy/reference/generated/scipy.optimize.differential_evolution.html" rel="nofollow noreferrer">differential evolution</a> and store the objective function values as it runs. My MWE is:</p>
<pre><code>def de_optimise():
def build_show_de(MIN=None):
if MIN is None:
MIN = [0]
def fn(xk, convergence):
obj_val = opt(xk)
if obj_val < MIN[-1]:
print("DE", [round(x, 2) for x in xk], obj_val)
MIN.append(opt(xk))
return fn
bounds = [(0,1)]*3
# Define the linear constraints
A = [[-1, 1, 0], [0, -1, 1]]
lb = [0.3, 0.4]
ub = [np.inf, np.inf]
constraints = LinearConstraint(A, lb, ub)
progress_f = [0]
c = build_show_de(progress_f)
print("Optimizing using differential evolution")
res = differential_evolution(
opt,
bounds=bounds,
constraints=constraints,
callback=c,
disp=True
)
print(f"external way of keeping track of MINF: {progress_f}")
de_optimise()
</code></pre>
<p>It works but in the function <code>fn</code> I have to recompute <code>opt(xk)</code> which must have already been computed. I have to do this as the callback function of differential_evolution is documented <a href="https://docs.scipy.org/doc/scipy/reference/generated/scipy.optimize.differential_evolution.html" rel="nofollow noreferrer">as follows</a>:</p>
<blockquote>
<p><strong>callback: callable, callback(xk, convergence=val), optional</strong> A function to follow the progress of the minimization. xk is the best
solution found so far. val represents the fractional value of the
population convergence. When val is greater than one the function
halts. If callback returns True, then the minimization is halted (any
polishing is still carried out).</p>
</blockquote>
<p>Since the objective is expensive, this slows down the optimization a lot. How can I avoid having to do this?</p>
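<p>One approach (a sketch under assumptions; the real <code>opt</code> and the linear constraints are replaced by stand-ins): memoize the objective, so the extra <code>opt(xk)</code> call in the callback becomes a dictionary lookup whenever <code>xk</code> was already evaluated by the optimizer, which is always the case for the best population member.</p>

```python
import numpy as np
from scipy.optimize import differential_evolution

def opt(x):
    # stand-in for the expensive objective (assumption)
    return float(np.sum((np.asarray(x) - 0.5) ** 2))

class MemoizedObjective:
    """Caches every evaluation, keyed by the parameter vector's bytes."""
    def __init__(self, fn):
        self.fn = fn
        self.cache = {}

    def __call__(self, x):
        key = np.asarray(x, dtype=float).tobytes()
        if key not in self.cache:
            self.cache[key] = self.fn(x)
        return self.cache[key]

memo = MemoizedObjective(opt)
history = []

def callback(xk, convergence):
    # cache hit: no re-evaluation of the expensive function
    history.append(memo(xk))

res = differential_evolution(memo, bounds=[(0, 1)] * 3,
                             callback=callback, seed=1)
print(res.fun)
```

<p>The cache grows with the number of distinct evaluations, so clear it between runs of long optimizations. On newer SciPy (1.12+, if available), the callback can instead be written as <code>callback(intermediate_result)</code>, where <code>intermediate_result.fun</code> already carries the best objective value with no recomputation at all.</p>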
|
<python><scipy>
|
2023-06-20 14:20:42
| 1
| 21,513
|
Simd
|
76,515,625
| 15,520,615
|
How to return a date format with PySpark with Databricks
|
<p>The following PySpark code returns the field 'mydates' in the format yyyy-MM-dd:</p>
<pre><code>df = sql("select * from mytable where mydates = last_day(add_months(current_date(),-1))")
</code></pre>
<p>However, I would like the code to return the 'mydates' field in the format yyyyMMdd.</p>
<p>I tried the following</p>
<pre><code>df = sql("select * from mytable where mydates = last_day(add_months(current_date(),'yyyyMMdd'-1))")
</code></pre>
<p>I didn't get an error with the above; however, it didn't return any results, whereas the previous code did return results, but with the 'mydates' field formatted as yyyy-MM-dd when I would like yyyyMMdd.</p>
<p>Any thoughts?</p>
<p>I have updated this question in line with the suggested answer, however I'm still getting yyyy-MM-dd.</p>
<pre><code>%python
from pyspark.sql.functions import date_format
from pyspark.sql import SparkSession
spark = SparkSession.builder.getOrCreate()
df = spark.sql("select * from xxxxx where date_format(mydates, 'yyyyMMdd') = date_format(last_day(add_months(current_date(), -1)), 'yyyyMMdd')")
</code></pre>
<p>Very strange</p>
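<p>Two separate things may be getting conflated here (hedged, as a suggestion rather than a verified fix): the <code>WHERE</code> clause only filters rows and never changes how a column is displayed; the reformatting belongs in the <code>SELECT</code> list, e.g. <code>select date_format(mydates, 'yyyyMMdd') as mydates, ... from mytable where mydates = last_day(add_months(current_date(), -1))</code>. The same idea demonstrated with SQLite from the standard library, where <code>strftime</code> plays the role of Spark's <code>date_format</code>:</p>

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE mytable (mydates TEXT)")
con.executemany("INSERT INTO mytable VALUES (?)",
                [("2023-05-31",), ("2023-05-30",)])

# filter on the real date value, reformat only in the SELECT list
row = con.execute(
    "SELECT strftime('%Y%m%d', mydates) FROM mytable "
    "WHERE mydates = '2023-05-31'"
).fetchone()
print(row[0])  # 20230531
```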
|
<python><pyspark><azure-databricks>
|
2023-06-20 14:10:59
| 2
| 3,011
|
Patterson
|
76,515,232
| 188,331
|
Changing DataFrame.iterrows() to List Comprehension / Vectorization to improve performance
|
<p>I have the following code to calculate the average of the outputs in a DataFrame built from an XLSX file. <code>calculate_score()</code> returns a <code>float</code> score, e.g. 5.12.</p>
<pre><code>import pandas as pd
testset = pd.read_excel(xlsx_filename_here)
total_score = 0
num_records = 0
for index, row in testset.iterrows():
if row['Data1'].isna() or row['Data2'].isna() or row['Data3'].isna():
continue
else:
score = calculate_score([row['Data1'], row['Data2']], row['Data3'])
total_score += score
num_records += 1
print("Average score:", round(total_score/num_records, 2))
</code></pre>
<p>According to <a href="https://stackoverflow.com/questions/16476924/how-to-iterate-over-rows-in-a-dataframe-in-pandas/55557758#55557758">this answer</a>, <code>df.iterrows()</code> is slow and an anti-pattern. How can I change the above code to use either vectorization or a list comprehension?</p>
<hr />
<p><strong>UPDATE</strong></p>
<p>I over-simplified <code>calculate_score()</code> in the example above; it actually calculates the BLEU score of some sentences using the SacreBLEU library:</p>
<pre><code>import evaluate
sacrebleu = evaluate.load("sacrebleu")
def calculate_score(ref, translation):
return sacrebleu.compute(predictions=[translation], references=[ref])
</code></pre>
<p>Note that the original code was updated slightly as well. How can I modify the code around <code>calculate_score()</code> to use a list comprehension? Thanks.</p>
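<p>Since each SacreBLEU call works on one sentence pair, the scoring itself can't be vectorized, but the loop can still be tightened: drop incomplete rows once with <code>dropna</code>, then use <code>itertuples</code> (considerably faster than <code>iterrows</code>) inside a list comprehension. A self-contained sketch with a stand-in scorer (the real one would call <code>sacrebleu.compute</code>; batching all sentences into a single <code>compute</code> call would be the bigger win if a corpus-level score is acceptable):</p>

```python
import pandas as pd

def calculate_score(ref, translation):
    # stand-in for the SacreBLEU call (assumption)
    return float(len(translation))

# stand-in for pd.read_excel(xlsx_filename_here)
testset = pd.DataFrame({
    "Data1": ["a", None, "c"],
    "Data2": ["b", "b", "d"],
    "Data3": ["hello", "x", None],
})

# drop rows with any missing field once, instead of checking inside the loop
clean = testset.dropna(subset=["Data1", "Data2", "Data3"])
scores = [
    calculate_score([row.Data1, row.Data2], row.Data3)
    for row in clean.itertuples(index=False)
]
print("Average score:", round(sum(scores) / len(scores), 2))
```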
|
<python><pandas>
|
2023-06-20 13:26:30
| 1
| 54,395
|
Raptor
|
76,515,127
| 17,487,457
|
pandas read a crappy csv file
|
<p>I got a poorly written <code>csv</code> file that I want to load using pandas' <code>read_csv</code>. Below are the first few lines to illustrate how it looks and the error generated.
File <code>test.csv</code>:</p>
<pre class="lang-py prettyprint-override"><code>feature_idx,cv_scores,avg_score,total-features
(4,),[0.71657 0.75430665 0.77866281 0.85293036 0.76370522],0.773235007449579,80
(4, 15),[0.79150981 0.82751849 0.83777517 0.9246948 0.82462535],0.8412247254527763,80
(1, 4, 15),[0.82173419 0.85052599 0.86065046 0.93704226 0.84315839],0.862622256166522,80
(1, 4, 15, 70),[0.82448556 0.86513518 0.87640778 0.93881338 0.84777784],0.8705239466728865,80
</code></pre>
<p>When I attempt to load it:</p>
<pre class="lang-py prettyprint-override"><code>pandas.read_csv('test.csv')
pandas.errors.ParserError: Error tokenizing data. C error: Expected 5 fields in line 4, saw 6
</code></pre>
<p>I understand this is because the first field is a <code>tuple</code>. How do I let <code>pandas</code> know the first field is a <code>tuple</code> so that everything between <code>(..)</code> is regarded as one field?</p>
<p><strong>EDIT</strong></p>
<p>The current answer didn't work yet.</p>
<pre class="lang-py prettyprint-override"><code>df = pd.read_csv('test.csv', converters={'feature_idx': parse_tuple}) # parse_tuple as per the answer
pandas.errors.ParserError: Error tokenizing data. C error: Expected 5 fields in line 4, saw 6
# pandas version
>>> print(pd.__version__)
1.5.3
</code></pre>
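<p>One workaround (a sketch; the regex separator requires the slower Python engine, which is also why the converter alone couldn't help — tokenizing a line into fields happens before converters run): split only on commas that are <em>not</em> inside parentheses, then parse the first column with <code>ast.literal_eval</code>.</p>

```python
import ast
import io

import pandas as pd

raw = """feature_idx,cv_scores,avg_score,total-features
(4,),[0.71657 0.75430665],0.773235007449579,80
(4, 15),[0.79150981 0.82751849],0.8412247254527763,80
(1, 4, 15),[0.82173419 0.85052599],0.862622256166522,80
"""

# split on commas not followed by a ')' before any '(' -- i.e. commas
# outside parentheses; engine='python' is required for a regex separator
df = pd.read_csv(io.StringIO(raw), sep=r',(?![^(]*\))', engine='python')
df["feature_idx"] = df["feature_idx"].apply(ast.literal_eval)
print(df.shape)  # (3, 4)
```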
|
<python><pandas><dataframe><csv>
|
2023-06-20 13:15:03
| 1
| 305
|
Amina Umar
|
76,515,064
| 12,436,050
|
Conditional filter of rows in python3.7
|
<p>I have following dataframe</p>
<pre><code>col1 col2
B20.0 B20 | B20-B20
B20.0 B20
A16 A15-A20
</code></pre>
<p>I would like to filter the rows such that, for rows sharing the same 'col1' value, if 'col2' has a single value (without '|') I choose that row; otherwise I choose the other row. Would regex work here, or is there a better approach?</p>
<p>The expected output is:</p>
<pre><code>col1 col2
B20.0 B20
A16 A15-A20
</code></pre>
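<p>One approach that avoids regex entirely (a sketch against the sample data): flag whether 'col2' contains a literal '|', sort so the pipe-free rows come first within each 'col1', and keep the first row per 'col1'.</p>

```python
import pandas as pd

df = pd.DataFrame({
    "col1": ["B20.0", "B20.0", "A16"],
    "col2": ["B20 | B20-B20", "B20", "A15-A20"],
})

out = (
    df.assign(has_pipe=df["col2"].str.contains("|", regex=False))
      .sort_values("has_pipe", kind="stable")   # pipe-free rows first
      .drop_duplicates("col1")                  # keep one row per col1
      .drop(columns="has_pipe")
      .sort_index()
)
print(out)
```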
|
<python><pandas><dataframe>
|
2023-06-20 13:07:54
| 1
| 1,495
|
rshar
|
76,515,058
| 16,591,513
|
I can't figure out why dropna() does not work in my code
|
<p>I have the following block of code where I'm trying to drop rows that contain missing 'date_added' values; however, even after trying a variety of different parameters, it still does not work properly. I'm a bit confused, because I've tried literally everything.</p>
<pre class="lang-py prettyprint-override"><code>import pandas
netflix_dataset = pandas.read_csv("netflix_titles.csv")
def process_date_missing_values() -> None:
netflix_dataset.date_added.dropna(inplace=True)
def process_duration_missing_values() -> None:
netflix_dataset.duration.dropna(inplace=True)
def process_rating_missing_values() -> None:
netflix_dataset.rating.dropna(inplace=True)
# Removing Missing Values from the Dataset
def remove_missing_values():
process_date_missing_values()
process_rating_missing_values()
process_duration_missing_values()
remove_missing_values()
cleaned_data = netflix_dataset
print(cleaned_data.isna().sum()) # there missing values still exists
</code></pre>
<p>I can provide more information if necessary, and I apologize if it turns out to be something obvious causing this problem; I'm only starting out with pandas.</p>
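<p>The likely culprit (hedged, without seeing the CSV): <code>netflix_dataset.date_added.dropna(inplace=True)</code> calls <code>dropna</code> on a standalone <code>Series</code>, which never removes rows from the parent DataFrame. Dropping on the DataFrame itself with <code>subset</code> does what the three helper functions intend, sketched here with stand-in data:</p>

```python
import pandas as pd

# stand-in for pandas.read_csv("netflix_titles.csv")
netflix_dataset = pd.DataFrame({
    "title": ["A", "B", "C"],
    "date_added": ["2020", None, "2021"],
    "rating": ["PG", "R", None],
    "duration": ["90 min", "80 min", "70 min"],
})

# drop any row missing one of the three fields, on the DataFrame itself
cleaned_data = netflix_dataset.dropna(subset=["date_added", "rating", "duration"])
print(cleaned_data.isna().sum().sum())  # 0
```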
|
<python><pandas>
|
2023-06-20 13:06:58
| 1
| 449
|
CraZyCoDer
|
76,514,308
| 2,245,709
|
pyodbc: unixODBC Driver error SQLAllocHandle on SQL_HANDLE_HENV failed
|
<p>I am trying to query SQL Server from <code>pyodbc</code> using DSN with the following snippet:</p>
<pre><code>import pyodbc
pyodbc.autocommit = True
conn = pyodbc.connect('DSN=SQLSERVER_CONN')
cursor = conn.cursor()
cursor.execute('select count(1) from jupiter.fact_load')
result = cursor.fetchall()
for row in result:
print(row)
cursor.close()
conn.close()
</code></pre>
<p>My .odbc.ini looks like this:</p>
<pre><code>[SQLSERVER_CONN]
Description=Connection to SQLSERVER UAT
DRIVER=/home/aiman/mssql-jdbc/9.2.0/libmsodbcsql-11.0.so.2270.0
SERVER=my.sqlserver.com,10501
DATABASE=jupiter
UID=aiman
PWD=xxxxx
Trusted_Connection=yes
</code></pre>
<p>And it's giving me the following error:</p>
<pre><code>Traceback (most recent call last):
File "test_odbc.py", line 5, in <module>
conn = pyodbc.connect('DSN=SQLSERVER_CONN')
pyodbc.Error: ('IM004', "[IM004] [unixODBC][Driver Manager]Driver's SQLAllocHandle on SQL_HANDLE_HENV failed (0) (SQLDriverConnect)")
</code></pre>
<p>In one post I read that it happens when the <strong>.rll</strong> file is not present, but I have both files (driver and .rll) present at the driver's path:</p>
<pre><code>libmsodbcsql-11.0.so.2270.0
msodbcsqlr11.rll
</code></pre>
<p>A similar question was answered <a href="https://stackoverflow.com/questions/55474713/drivers-sqlallochandle-on-sql-handle-henv-failed-0-sqldriverconnect-when-co">here</a> (<code>echo "default:x:$uid:0:user for openshift:/tmp:/bin/bash" >> /etc/passwd</code>), but I can't do this, since it would overwrite the system account settings, and I am trying to run under my own ID.</p>
|
<python><sql-server><odbc><pyodbc>
|
2023-06-20 11:37:11
| 1
| 1,115
|
aiman
|
76,514,236
| 9,542,989
|
net.razorvine.pickle.PickleException: expected zero arguments for construction of ClassDict (for numpy.dtype) in Writing DataFrame
|
<p>I have defined the following function in my PySpark code running on Databricks. This is a function which uses a .shp file to retrieve the districts that relate to a set of coordinates in a dataset (outlets_geolocation).</p>
<pre><code>def get_outlet_location(outlets_geolocation: DataFrame):
gdf = gpd.read_file(
'/dbfs/mnt/path/to/file.shp'
)
def get_districts(lat, lon):
point = Point(lon, lat)
return gdf.loc[gdf.contains(point), "ADM2_EN"].squeeze()
outlet_locations = (
outlets_geolocation
.filter(
(f.col('latitude').isNotNull()) &
(f.col('longitude').isNotNull()) &
(f.col('longitude') != 0) &
(f.col('latitude') != 0) &
(f.col('is_gps_on') == True)
)
.dropDuplicates(['outlet_code'])
)
outlet_locations = outlet_locations.select(f.col('outlet_code'), f.col('latitude'), f.col('longitude'))
get_districts_udf = udf(get_districts, StringType())
master_outlets_geolocation = outlet_locations.withColumn('district', get_districts_udf('latitude', 'longitude'))
return master_outlets_geolocation
</code></pre>
<p>This seems to work fine. When I execute a <code>.head()</code> on the DataFrame that is returned I see the records I am looking for.</p>
<p>However, when I try to write this dataset as a parquet file to my data lake (ADLS), I get the following error:</p>
<pre><code>Py4JJavaError: An error occurred while calling o13792.save.
: org.apache.spark.SparkException: Job aborted.
at org.apache.spark.sql.errors.QueryExecutionErrors$.jobAbortedError(QueryExecutionErrors.scala:606)
at org.apache.spark.sql.execution.datasources.FileFormatWriter$.write(FileFormatWriter.scala:360)
at org.apache.spark.sql.execution.datasources.InsertIntoHadoopFsRelationCommand.run(InsertIntoHadoopFsRelationCommand.scala:198)
at org.apache.spark.sql.execution.command.DataWritingCommandExec.sideEffectResult$lzycompute(commands.scala:126)
at org.apache.spark.sql.execution.command.DataWritingCommandExec.sideEffectResult(commands.scala:124)
at org.apache.spark.sql.execution.command.DataWritingCommandExec.executeCollect(commands.scala:138)
at org.apache.spark.sql.execution.QueryExecution$$anonfun$$nestedInanonfun$eagerlyExecuteCommands$1$1.$anonfun$applyOrElse$1(QueryExecution.scala:160)
at org.apache.spark.sql.execution.SQLExecution$.$anonfun$withCustomExecutionEnv$8(SQLExecution.scala:239)
at org.apache.spark.sql.execution.SQLExecution$.withSQLConfPropagated(SQLExecution.scala:386)
at org.apache.spark.sql.execution.SQLExecution$.$anonfun$withCustomExecutionEnv$1(SQLExecution.scala:186)
at org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:968)
at org.apache.spark.sql.execution.SQLExecution$.withCustomExecutionEnv(SQLExecution.scala:141)
at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:336)
at org.apache.spark.sql.execution.QueryExecution$$anonfun$$nestedInanonfun$eagerlyExecuteCommands$1$1.applyOrElse(QueryExecution.scala:160)
at org.apache.spark.sql.execution.QueryExecution$$anonfun$$nestedInanonfun$eagerlyExecuteCommands$1$1.applyOrElse(QueryExecution.scala:156)
at org.apache.spark.sql.catalyst.trees.TreeNode.$anonfun$transformDownWithPruning$1(TreeNode.scala:590)
at org.apache.spark.sql.catalyst.trees.CurrentOrigin$.withOrigin(TreeNode.scala:168)
at org.apache.spark.sql.catalyst.trees.TreeNode.transformDownWithPruning(TreeNode.scala:590)
at org.apache.spark.sql.catalyst.plans.logical.LogicalPlan.org$apache$spark$sql$catalyst$plans$logical$AnalysisHelper$$super$transformDownWithPruning(LogicalPlan.scala:30)
at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper.transformDownWithPruning(AnalysisHelper.scala:268)
at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper.transformDownWithPruning$(AnalysisHelper.scala:264)
at org.apache.spark.sql.catalyst.plans.logical.LogicalPlan.transformDownWithPruning(LogicalPlan.scala:30)
at org.apache.spark.sql.catalyst.plans.logical.LogicalPlan.transformDownWithPruning(LogicalPlan.scala:30)
at org.apache.spark.sql.catalyst.trees.TreeNode.transformDown(TreeNode.scala:566)
at org.apache.spark.sql.execution.QueryExecution.$anonfun$eagerlyExecuteCommands$1(QueryExecution.scala:156)
at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper$.allowInvokingTransformsInAnalyzer(AnalysisHelper.scala:324)
at org.apache.spark.sql.execution.QueryExecution.eagerlyExecuteCommands(QueryExecution.scala:156)
at org.apache.spark.sql.execution.QueryExecution.commandExecuted$lzycompute(QueryExecution.scala:141)
at org.apache.spark.sql.execution.QueryExecution.commandExecuted(QueryExecution.scala:132)
at org.apache.spark.sql.execution.QueryExecution.assertCommandExecuted(QueryExecution.scala:186)
at org.apache.spark.sql.DataFrameWriter.runCommand(DataFrameWriter.scala:959)
at org.apache.spark.sql.DataFrameWriter.saveToV1Source(DataFrameWriter.scala:427)
at org.apache.spark.sql.DataFrameWriter.saveInternal(DataFrameWriter.scala:396)
at org.apache.spark.sql.DataFrameWriter.save(DataFrameWriter.scala:250)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244)
at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:380)
at py4j.Gateway.invoke(Gateway.java:295)
at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132)
at py4j.commands.CallCommand.execute(CallCommand.java:79)
at py4j.GatewayConnection.run(GatewayConnection.java:251)
at java.lang.Thread.run(Thread.java:750)
Caused by: org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 892.0 failed 4 times, most recent failure: Lost task 0.3 in stage 892.0 (TID 12512) (10.168.5.135 executor 1): org.apache.spark.SparkException: Task failed while writing rows.
at org.apache.spark.sql.errors.QueryExecutionErrors$.taskFailedWhileWritingRowsError(QueryExecutionErrors.scala:610)
at org.apache.spark.sql.execution.datasources.FileFormatWriter$.executeTask(FileFormatWriter.scala:457)
at org.apache.spark.sql.execution.datasources.FileFormatWriter$.$anonfun$write$20(FileFormatWriter.scala:336)
at org.apache.spark.scheduler.ResultTask.$anonfun$runTask$3(ResultTask.scala:75)
at com.databricks.spark.util.ExecutorFrameProfiler$.record(ExecutorFrameProfiler.scala:110)
at org.apache.spark.scheduler.ResultTask.$anonfun$runTask$1(ResultTask.scala:75)
at com.databricks.spark.util.ExecutorFrameProfiler$.record(ExecutorFrameProfiler.scala:110)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:55)
at org.apache.spark.scheduler.Task.doRunTask(Task.scala:156)
at org.apache.spark.scheduler.Task.$anonfun$run$1(Task.scala:125)
at com.databricks.spark.util.ExecutorFrameProfiler$.record(ExecutorFrameProfiler.scala:110)
at org.apache.spark.scheduler.Task.run(Task.scala:95)
at org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$13(Executor.scala:832)
at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1681)
at org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$4(Executor.scala:835)
at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:23)
at com.databricks.spark.util.ExecutorFrameProfiler$.record(ExecutorFrameProfiler.scala:110)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:690)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:750)
Caused by: net.razorvine.pickle.PickleException: expected zero arguments for construction of ClassDict (for numpy.dtype)
at net.razorvine.pickle.objects.ClassDictConstructor.construct(ClassDictConstructor.java:23)
at net.razorvine.pickle.Unpickler.load_reduce(Unpickler.java:773)
at net.razorvine.pickle.Unpickler.dispatch(Unpickler.java:213)
at net.razorvine.pickle.Unpickler.load(Unpickler.java:123)
at net.razorvine.pickle.Unpickler.loads(Unpickler.java:136)
at org.apache.spark.sql.execution.python.BatchEvalPythonExec.$anonfun$evaluate$6(BatchEvalPythonExec.scala:110)
at scala.collection.Iterator$$anon$11.nextCur(Iterator.scala:486)
at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:492)
at scala.collection.Iterator$$anon$10.hasNext(Iterator.scala:460)
at scala.collection.Iterator$$anon$10.hasNext(Iterator.scala:460)
at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage3.processNext(Unknown Source)
at org.apache.spark.sql.execution.BufferedRowIterator.hasNext(BufferedRowIterator.java:43)
at org.apache.spark.sql.execution.WholeStageCodegenExec$$anon$1.hasNext(WholeStageCodegenExec.scala:759)
at org.apache.spark.sql.execution.UnsafeExternalRowSorter.sort(UnsafeExternalRowSorter.java:240)
at org.apache.spark.sql.execution.SortExec$$anon$2.sortedIterator(SortExec.scala:133)
at org.apache.spark.sql.execution.SortExec$$anon$2.hasNext(SortExec.scala:147)
at org.apache.spark.sql.execution.datasources.FileFormatDataWriter.writeWithIterator(FileFormatDataWriter.scala:90)
at org.apache.spark.sql.execution.datasources.FileFormatWriter$.$anonfun$executeTask$2(FileFormatWriter.scala:437)
at org.apache.spark.util.Utils$.tryWithSafeFinallyAndFailureCallbacks(Utils.scala:1715)
at org.apache.spark.sql.execution.datasources.FileFormatWriter$.executeTask(FileFormatWriter.scala:444)
... 19 more
Driver stacktrace:
at org.apache.spark.scheduler.DAGScheduler.failJobAndIndependentStages(DAGScheduler.scala:3088)
at org.apache.spark.scheduler.DAGScheduler.$anonfun$abortStage$2(DAGScheduler.scala:3035)
at org.apache.spark.scheduler.DAGScheduler.$anonfun$abortStage$2$adapted(DAGScheduler.scala:3029)
at scala.collection.mutable.ResizableArray.foreach(ResizableArray.scala:62)
at scala.collection.mutable.ResizableArray.foreach$(ResizableArray.scala:55)
at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:49)
at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:3029)
at org.apache.spark.scheduler.DAGScheduler.$anonfun$handleTaskSetFailed$1(DAGScheduler.scala:1391)
at org.apache.spark.scheduler.DAGScheduler.$anonfun$handleTaskSetFailed$1$adapted(DAGScheduler.scala:1391)
at scala.Option.foreach(Option.scala:407)
at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:1391)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.doOnReceive(DAGScheduler.scala:3297)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:3238)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:3226)
at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:49)
at org.apache.spark.scheduler.DAGScheduler.runJob(DAGScheduler.scala:1157)
at org.apache.spark.SparkContext.runJobInternal(SparkContext.scala:2657)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:2640)
at org.apache.spark.sql.execution.datasources.FileFormatWriter$.write(FileFormatWriter.scala:325)
... 43 more
Caused by: org.apache.spark.SparkException: Task failed while writing rows.
at org.apache.spark.sql.errors.QueryExecutionErrors$.taskFailedWhileWritingRowsError(QueryExecutionErrors.scala:610)
at org.apache.spark.sql.execution.datasources.FileFormatWriter$.executeTask(FileFormatWriter.scala:457)
at org.apache.spark.sql.execution.datasources.FileFormatWriter$.$anonfun$write$20(FileFormatWriter.scala:336)
at org.apache.spark.scheduler.ResultTask.$anonfun$runTask$3(ResultTask.scala:75)
at com.databricks.spark.util.ExecutorFrameProfiler$.record(ExecutorFrameProfiler.scala:110)
at org.apache.spark.scheduler.ResultTask.$anonfun$runTask$1(ResultTask.scala:75)
at com.databricks.spark.util.ExecutorFrameProfiler$.record(ExecutorFrameProfiler.scala:110)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:55)
at org.apache.spark.scheduler.Task.doRunTask(Task.scala:156)
at org.apache.spark.scheduler.Task.$anonfun$run$1(Task.scala:125)
at com.databricks.spark.util.ExecutorFrameProfiler$.record(ExecutorFrameProfiler.scala:110)
at org.apache.spark.scheduler.Task.run(Task.scala:95)
at org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$13(Executor.scala:832)
at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1681)
at org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$4(Executor.scala:835)
at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:23)
at com.databricks.spark.util.ExecutorFrameProfiler$.record(ExecutorFrameProfiler.scala:110)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:690)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
... 1 more
Caused by: net.razorvine.pickle.PickleException: expected zero arguments for construction of ClassDict (for numpy.dtype)
at net.razorvine.pickle.objects.ClassDictConstructor.construct(ClassDictConstructor.java:23)
at net.razorvine.pickle.Unpickler.load_reduce(Unpickler.java:773)
at net.razorvine.pickle.Unpickler.dispatch(Unpickler.java:213)
at net.razorvine.pickle.Unpickler.load(Unpickler.java:123)
at net.razorvine.pickle.Unpickler.loads(Unpickler.java:136)
at org.apache.spark.sql.execution.python.BatchEvalPythonExec.$anonfun$evaluate$6(BatchEvalPythonExec.scala:110)
at scala.collection.Iterator$$anon$11.nextCur(Iterator.scala:486)
at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:492)
at scala.collection.Iterator$$anon$10.hasNext(Iterator.scala:460)
at scala.collection.Iterator$$anon$10.hasNext(Iterator.scala:460)
at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage3.processNext(Unknown Source)
at org.apache.spark.sql.execution.BufferedRowIterator.hasNext(BufferedRowIterator.java:43)
at org.apache.spark.sql.execution.WholeStageCodegenExec$$anon$1.hasNext(WholeStageCodegenExec.scala:759)
at org.apache.spark.sql.execution.UnsafeExternalRowSorter.sort(UnsafeExternalRowSorter.java:240)
at org.apache.spark.sql.execution.SortExec$$anon$2.sortedIterator(SortExec.scala:133)
at org.apache.spark.sql.execution.SortExec$$anon$2.hasNext(SortExec.scala:147)
at org.apache.spark.sql.execution.datasources.FileFormatDataWriter.writeWithIterator(FileFormatDataWriter.scala:90)
at org.apache.spark.sql.execution.datasources.FileFormatWriter$.$anonfun$executeTask$2(FileFormatWriter.scala:437)
at org.apache.spark.util.Utils$.tryWithSafeFinallyAndFailureCallbacks(Utils.scala:1715)
at org.apache.spark.sql.execution.datasources.FileFormatWriter$.executeTask(FileFormatWriter.scala:444)
... 19 more
</code></pre>
<p>What exactly is the problem here? Does it have something to do with my data types?</p>
<p>I saw a few other similar questions like this one:
<a href="https://stackoverflow.com/questions/61183355/pyspark-udf-for-converting-utm-error-expected-zero-arguments-for-construction-of">PySpark UDF for converting UTM error expected zero arguments for construction of ClassDict (for numpy.dtype)</a></p>
<p>But, I can't seem to figure out the exact issue with my code.</p>
|
<python><pyspark><databricks><parquet><geopandas>
|
2023-06-20 11:27:30
| 1
| 2,115
|
Minura Punchihewa
|
76,514,227
| 8,560,127
|
Numba doesn't work when parameter is a dictionary address
|
<p>Consider the code:</p>
<pre><code>from timeit import default_timer as timer
from datetime import timedelta
import numpy as np
from numba import prange, jit, njit
@jit(nopython=True, parallel=True)
def random(**shape):
return np.random.random((shape[0],shape[1]))
shape = {0:100, 1:100}
start = timer()
random(**shape)
end = timer()
print(timedelta(seconds=end-start))
</code></pre>
<p>It is giving the error:</p>
<pre><code>Traceback (most recent call last):
File "/home/abhigyan/.pyenv/versions/3.9.5/envs/ltesim/timer.py", line 51, in
<module>
random(**shape)
TypeError: too many arguments: expected 1, got 2
</code></pre>
<p>What's going wrong, and how can such dictionaries be passed?</p>
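<p>For reference, a hedged sketch (my own reading, not from the post): CPython's <code>**</code> unpacking itself requires string keys, and Numba's nopython mode, as far as I know, does not support <code>**kwargs</code>. Passing the shape as a plain tuple sidesteps both issues; the sketch below omits the <code>@jit</code> decorator so it runs with NumPy alone:</p>

```python
import numpy as np

def f(**kwargs):
    return kwargs

# ** unpacking with non-string keys fails in plain CPython too:
try:
    f(**{0: 100, 1: 100})
except TypeError as exc:
    print(type(exc).__name__)  # TypeError

# a tuple parameter avoids kwargs entirely:
def random(shape):
    return np.random.random(shape)

out = random((100, 100))
print(out.shape)  # (100, 100)
```

<p>With the tuple-based signature, re-adding <code>@jit(nopython=True, parallel=True)</code> should, I believe, compile cleanly, but verify against the Numba docs.</p>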
|
<python><python-3.x><numba>
|
2023-06-20 11:26:29
| 0
| 923
|
DuttaA
|
76,514,174
| 8,040,369
|
Create a list from df based on a column type
|
<p>I am getting a df from running a sql query using <strong>pymssql</strong></p>
<pre><code>data = pd.read_sql_query(query, cnxn)
data
addr value d_no
=========================
XXX 1 A
YYY 2 B
XXX 3 C
XXX 4 D
YYY 5 E
ZZZ 6 F
ZZZ 7 G
ZZZ 8 H
</code></pre>
<p>I want to split the df into a list of dfs. I was able to do that using</p>
<pre><code>batches = [d for _, d in data.groupby(['addr'])]
</code></pre>
<pre><code>[
addr value d_no
=========================
XXX 1 A
XXX 3 C
XXX 4 D ,
addr value d_no
=========================
YYY 2 B
YYY 5 E ,
addr value d_no
=========================
ZZZ 6 F
ZZZ 7 G
ZZZ 8 H
]
</code></pre>
<p>The list has been split based on <strong>addr</strong>, but I need to split it in a way where each df in the list has at most 2 rows, all with the same <strong>addr</strong>, something like below:</p>
<pre><code>[
addr value d_no
=========================
XXX 1 A
XXX 3 C ,
addr value d_no
=========================
XXX 4 D ,
addr value d_no
=========================
YYY 2 B
YYY 5 E ,
addr value d_no
=========================
ZZZ 6 F
ZZZ 7 G ,
addr value d_no
=========================
ZZZ 8 H
]
</code></pre>
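<p>A sketch of one way to get there (my own construction, not from the post): number the rows within each <strong>addr</strong> with <code>cumcount</code>, integer-divide by the chunk size, and group on both keys:</p>

```python
import pandas as pd

data = pd.DataFrame({
    "addr": ["XXX", "YYY", "XXX", "XXX", "YYY", "ZZZ", "ZZZ", "ZZZ"],
    "value": [1, 2, 3, 4, 5, 6, 7, 8],
    "d_no": list("ABCDEFGH"),
})

# 0,1,2,... within each addr; // 2 gives the same id to each pair of rows
chunk_id = data.groupby("addr").cumcount() // 2
batches = [d for _, d in data.groupby(["addr", chunk_id])]

print(len(batches))                       # 5 batches
print(all(len(b) <= 2 for b in batches))  # True
```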
<p>Any help is much appreciated.</p>
<p>Thanks,</p>
|
<python><pandas><dataframe><list>
|
2023-06-20 11:19:44
| 2
| 787
|
SM079
|
76,514,146
| 6,447,123
|
np.load from relative file in the python package
|
<p>I have created a Python package and I need to have <code>np.load('./my_file.npy')</code> in my package.
When I install the package and run the code, the path is not correct and Python cannot find the file.</p>
<p>I tried the following code as well</p>
<pre class="lang-py prettyprint-override"><code>dirname = Path(__file__).parent
path = dirname / 'my_file.npy'
np.load(str(path))
</code></pre>
<p>In <code>pyproject.toml</code> file I have as well</p>
<pre class="lang-ini prettyprint-override"><code>[tool.flit.sdist]
include = [
"my_file.npy",
]
</code></pre>
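<p>A hedged alternative sketch (the package name <code>mypkg</code> is hypothetical): resolve the data file through <code>importlib.resources</code>, which locates it relative to the installed package rather than the current working directory:</p>

```python
from importlib import resources

import numpy as np

def load_bundled(package: str, name: str):
    # resources.files() resolves data inside any installed package (3.9+)
    with resources.files(package).joinpath(name).open("rb") as f:
        return np.load(f)

# hypothetical usage: arr = load_bundled("mypkg", "my_file.npy")
```

<p>Note that for the wheel (not just the sdist) the file also has to live inside the package directory; I believe flit bundles everything under the module by default, but verify against the flit docs.</p>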
|
<python><np><pyproject.toml><flit>
|
2023-06-20 11:16:02
| 1
| 4,309
|
A.A
|
76,514,015
| 5,359,846
|
How to wait for URL to contain text using Playwright
|
<p>I'm clicking on an element on my page and being redirected to another page.</p>
<p>How can I test, using <code>Playwright</code> and <code>expect</code>, that the URL was updated?</p>
<pre><code>def is_url_contains(self, text_in_url):
try:
expect(self.page).to_have_url(text_in_url, timeout=10 * 1000)
return True
except AssertionError:
return False
</code></pre>
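<p>As far as I know, <code>to_have_url</code> compares against the full URL, so a plain substring will not match; a compiled regex gives contains-semantics. The regex construction itself is Playwright-independent and can be sketched as:</p>

```python
import re

def url_contains_pattern(text_in_url: str) -> re.Pattern:
    # escape the fragment so regex metacharacters in it are treated literally
    return re.compile(".*" + re.escape(text_in_url) + ".*")

# usage with Playwright (sketch, not verified here):
#   expect(self.page).to_have_url(url_contains_pattern("dashboard"),
#                                 timeout=10 * 1000)
pattern = url_contains_pattern("dashboard")
print(bool(pattern.fullmatch("https://example.com/dashboard?tab=1")))  # True
```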
|
<python><playwright><playwright-python>
|
2023-06-20 10:58:49
| 2
| 1,838
|
Tal Angel
|
76,513,926
| 1,185,081
|
How to define the path where to install Python modules?
|
<p>On SUSE SLES 15.4, I recently installed Python 3.10 using the zypper package manager. I plan to install Airflow and some companion packages. I am confused about the way Python packages are installed: how do I prevent them from being installed in the user's home folder?</p>
<p>I configured the following in <strong>/etc/profile.d/python.sh</strong></p>
<pre><code># add python startup script for interactive sessions
export PYTHONSTARTUP=/etc/pythonstart
export AIRFLOW_HOME=/usr/lib/airflow
export PYTHONPATH=/usr/lib/python3.10:$AIRFLOW_HOME
export LC_ALL="fr_CH.UTF-8"
export PATH=/usr/lib/airflow:/usr/lib/airflow/bin:/usr/lib:$PATH
export LD_LIBRARY_PATH=/usr/lib:/usr/lib/airflow/bin:/usr/lib/airflow:/usr/lib/python3.10:/usr/lib/python3.10/site-packages:$LD_LIBRARY_PATH
</code></pre>
<p>I now own the <strong>/usr/lib/python3.10/site-packages</strong> folder:</p>
<pre><code>ls -al /usr/lib/python3.10
total 0
drwxr-xr-x 1 root root 26 3 mai 16:30 .
drwxr-xr-x 1 root root 6414 12 jun 10:56 ..
drwxrwxr-x 1 a80838986 linuxusers 288 20 jun 10:34 site-packages
</code></pre>
<p>I try to install a Python package which is supposed to land in the site-packages folder:</p>
<pre><code>pip install psycopg2-binary
Defaulting to user installation because normal site-packages is not writeable
Collecting psycopg2-binary
Using cached psycopg2_binary-2.9.6-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (3.0 MB)
Installing collected packages: psycopg2-binary
Successfully installed psycopg2-binary-2.9.6
</code></pre>
<p>but it finally ends in my home folder:</p>
<pre><code>ls -al ~/.local/lib/python3.10/site-packages
total 0
drwxr-xr-x 1 a80838986 linuxusers 118 20 jun 10:32 .
drwxr-xr-x 1 a80838986 linuxusers 26 20 jun 10:32 ..
drwxr-xr-x 1 a80838986 linuxusers 308 20 jun 10:32 psycopg2
drwxr-xr-x 1 a80838986 linuxusers 114 20 jun 10:32 psycopg2_binary-2.9.6.dist-info
drwxr-xr-x 1 a80838986 linuxusers 722 20 jun 10:32 psycopg2_binary.libs
</code></pre>
<p>So here are my questions:</p>
<ol>
<li>How can I have <strong>pip</strong> to install - by default - the packages in the server's /usr/lib/python3.10/site-packages?</li>
<li>Is /usr/lib/python3.10/site-packages the right place?</li>
<li>How is it possible that <strong>site-packages is not writeable</strong> for pip?</li>
</ol>
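<p>One possibility worth ruling out (an assumption on my part): pip's real install target may differ from the directory whose ownership was changed, e.g. a <code>lib64</code> path on SUSE, which would explain the "not writeable" fallback. The standard library can show both candidate locations:</p>

```python
import site
import sysconfig

# the directory pip treats as the "normal" site-packages
system_target = sysconfig.get_paths()["purelib"]
# the per-user fallback pip switches to when the above isn't writable
user_target = site.getusersitepackages()
print(system_target)
print(user_target)
```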
|
<python><pip>
|
2023-06-20 10:48:40
| 0
| 2,168
|
user1185081
|
76,513,793
| 14,820,295
|
Replace commas with dots in a string column pandas without nan values
|
<p>I have a dataset with a string column.
I want to replace commas with dots, but when I use the str.replace method it returns NaN for some rows (maybe for integer values).</p>
<pre><code>df['code_input'] = df['code_input'].str.replace(',', '.')
</code></pre>
<p>How can I do it?</p>
<p>Example of my desired dataset:</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>code_input</th>
<th>desired_code_output</th>
</tr>
</thead>
<tbody>
<tr>
<td>15.5C72HT</td>
<td>15.5C72HT</td>
</tr>
<tr>
<td>46,3C8,4HTT</td>
<td>46.3C8.4HTT</td>
</tr>
<tr>
<td>40</td>
<td>40</td>
</tr>
<tr>
<td>...</td>
<td>...</td>
</tr>
</tbody>
</table>
</div>
<p>Example of what I actually get with the str.replace method:</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>code_input</th>
<th>desired_code_output</th>
</tr>
</thead>
<tbody>
<tr>
<td>15.5C72HT</td>
<td>15.5C72HT</td>
</tr>
<tr>
<td>46,3C8,4HTT</td>
<td>46.3C8.4HTT</td>
</tr>
<tr>
<td>40</td>
<td>NaN</td>
</tr>
<tr>
<td>...</td>
<td>...</td>
</tr>
</tbody>
</table>
</div>
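<p>For reference, the <code>.str</code> accessor yields <code>NaN</code> for non-string elements, which would explain the behavior above; a hedged sketch that casts to string first:</p>

```python
import pandas as pd

df = pd.DataFrame({"code_input": ["15.5C72HT", "46,3C8,4HTT", 40]})

# .str.replace produces NaN for non-string elements; cast to str first
df["code_output"] = df["code_input"].astype(str).str.replace(",", ".", regex=False)
print(df["code_output"].tolist())  # ['15.5C72HT', '46.3C8.4HTT', '40']
```

<p>One caveat: <code>astype(str)</code> also turns real missing values into the string <code>'nan'</code>, so if the column can contain NaN, a mask-based replacement may be safer.</p>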
|
<python><string><replace><nan>
|
2023-06-20 10:31:01
| 0
| 347
|
Jresearcher
|
76,513,594
| 8,754,028
|
sum elements of an array by an indices array into a smaller array in numpy
|
<p>I have an array of N elements of type float called <code>a</code>. I also have an array of N elements called <code>b</code>. <code>b</code>'s elements are all unsigned integers in the range <code>[0, M-1]</code>. I want to get a float array of size M called <code>c</code>. <code>c</code> is a "reduced" version of <code>a</code>, obtained by summing up the elements of <code>a</code> that fall into the same bin, as defined by <code>b</code>.</p>
<p>Basically this operation:</p>
<pre><code>a = np.array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9])
b = np.array([3, 3, 0, 3, 2, 1, 2, 1, 2, 2])
c = ?(a, b)
</code></pre>
<p>I want <code>c = [2, 5+7, 4+6+8+9, 0+1+3] = [2, 12, 27, 4]</code></p>
<p>So, what is the name of this operation, and how can I do it in NumPy?</p>
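<p>This operation is usually called a segment sum (also scatter-add or binned sum); in NumPy it maps directly onto <code>np.bincount</code> with <code>weights</code>. A sketch reproducing the example above:</p>

```python
import numpy as np

a = np.array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9], dtype=float)
b = np.array([3, 3, 0, 3, 2, 1, 2, 1, 2, 2])

# bincount sums the weight a[i] into bin b[i]; minlength pins the size to M
c = np.bincount(b, weights=a, minlength=b.max() + 1)
print(c)  # [ 2. 12. 27.  4.]
```

<p><code>np.add.at(c, b, a)</code> is an equivalent in-place alternative when <code>c</code> is preallocated.</p>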
|
<python><arrays><numpy>
|
2023-06-20 10:06:52
| 2
| 529
|
RRR
|
76,513,459
| 9,488,023
|
Set values in a Pandas column based on identical values in another column
|
<p>I have a very large Pandas dataframe in Python that looks something like this:</p>
<pre><code>df_test = pd.DataFrame(data=None, columns=['file', 'quarter', 'status'])
df_test.file = ['file_1', 'file_1', 'file_2', 'file_3', 'file_3', 'file_4']
df_test.quarter = ['2023q1', '2023q2', '2022q4', '2022q3', '2023q2', '2021q3']
df_test.status = ['', 'kept', '', '', 'kept', '']
</code></pre>
<p>What I want to do here is to set the 'status' column to 'kept' or 'removed' based on whether any row with the same 'file' value already has 'kept' as its status. For example, since 'file_1' has one row where 'status' equals 'kept', all entries of 'file_1' should equal 'kept'. The end result of the status column should be:</p>
<pre><code>['kept', 'kept', 'removed', 'kept', 'kept', 'removed']
</code></pre>
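<p>A sketch of one way to do this (names as in the snippet above): broadcast a per-file "any kept" flag back to every row with <code>groupby(...).transform</code>:</p>

```python
import numpy as np
import pandas as pd

df_test = pd.DataFrame({
    "file": ["file_1", "file_1", "file_2", "file_3", "file_3", "file_4"],
    "quarter": ["2023q1", "2023q2", "2022q4", "2022q3", "2023q2", "2021q3"],
    "status": ["", "kept", "", "", "kept", ""],
})

# per-file flag: does any row of this file already have status 'kept'?
has_kept = df_test.groupby("file")["status"].transform(lambda s: (s == "kept").any())
df_test["status"] = np.where(has_kept, "kept", "removed")
print(df_test["status"].tolist())
# ['kept', 'kept', 'removed', 'kept', 'kept', 'removed']
```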
<p>Any help on how to do this would be really appreciated, thanks!</p>
|
<python><pandas><dataframe><replace>
|
2023-06-20 09:51:40
| 2
| 423
|
Marcus K.
|
76,513,423
| 1,566,140
|
Selenium/Python Disable ChromeDriver updates and Revert Version
|
<p>I've been running a selenium/python script on Windows 7 x64 for a few weeks. Google Chrome is installed on the machine with version 109 (the last supported Windows 7 version) with browser updates disabled. Occasionally, I would get messages like this when instantiating ChromeDriver:</p>
<pre><code>[WDM] - Downloading: 0%| | 0.00/6.30M [00:00<?, ?B/s]
[WDM] - Downloading: 43%|####3 | 2.71M/6.30M [00:00<00:00, 26.0MB/s]
[WDM] - Downloading: 100%|##########| 6.30M/6.30M [00:00<00:00, 38.5MB/s]
</code></pre>
<p>which as I understand it is some automatic update of ChromeDriver (?)</p>
<p>The most recent update has broken my script, and now I get the error;</p>
<pre><code>selenium.common.exceptions.SessionNotCreatedException: Message: session not created: This version of ChromeDriver only supports Chrome version 114
Current browser version is 109.0.5414.120 with binary path C:\Program Files\Google\Chrome\Application\chrome.exe
</code></pre>
<p>I need help with two things;</p>
<ol>
<li>How do I get ChromeDriver back to a version that supports Chrome 109?</li>
<li>[more importantly] How do I prevent this automatic update of ChromeDriver from happening again?</li>
</ol>
<p>I can probably figure out #1. I need definitive help with #2. Most of the searches come back with discussions of how to prevent the browser from updating, not how to prevent the call to</p>
<pre><code>driver = webdriver.Chrome(
service=Service(ChromeDriverManager().install()),
options=options,
desired_capabilities=caps
)
</code></pre>
<p>from triggering a (package?) update.</p>
|
<python><google-chrome><selenium-webdriver><web-scraping><selenium-chromedriver>
|
2023-06-20 09:47:13
| 1
| 360
|
Aladdin
|
76,513,294
| 19,598,212
|
Why does a Queue in Python cause lag and data loss?
|
<p>I have two threads, one for capturing frames from a high frame rate camera (producer) and the other for writing frames to video files (consumer). I use <code>Queue</code> to transfer frames from the producer to the consumer. At first, my code was like this:</p>
<pre class="lang-py prettyprint-override"><code># consumer
class VideoWriter:
...
def record(self):
while self.isRecording or not self.frame_queue.empty():
if not self.frame_queue.empty():
writer.imwrite(self.frame_queue.get())
...
# producer
class MyVideo:
...
def play(self):
while self.isLiving:
if self.cap.IsGrabbing():
frame = self.cap.RetrieveResult(1000)
if self.video_writer.isRecording:
self.video_writer.frame_queue.put(frame)
show_frame(frame)
...
</code></pre>
<p>When I don't start recording, everything in the code is normal. However, when I start the consumer thread (start recording), the producer thread becomes very sluggish, and the consumer thread receives far fewer frames than the producer thread captures from the camera.
I then fixed this problem by changing my code to the following:</p>
<pre class="lang-py prettyprint-override"><code># consumer
class VideoWriter:
...
def record(self):
while self.isRecording or not self.frame_queue.empty():
try:
writer.imwrite(self.frame_queue.get(block=False))
except Empty:
time.sleep(0.001)
...
# producer
class MyVideo:
...
def play(self):
while self.isLiving:
if self.cap.IsGrabbing():
frame = self.cap.RetrieveResult(1000)
if self.video_writer.isRecording:
self.video_writer.frame_queue.put(frame,block=False)
show_frame(frame)
...
</code></pre>
<p>Now, everything is fine. But I don't know why this problem occurred. Can someone explain it to me?</p>
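<p>My reading of this (hedged): the first version busy-waits. The <code>empty()</code>/<code>get()</code> loop spins the consumer thread flat out, starving the producer of CPU time and the GIL, and the <code>empty()</code>-then-<code>get()</code> pair is racy besides. A blocking <code>get()</code> sleeps inside the queue until an item arrives, which is what the <code>time.sleep(0.001)</code> in the fixed version approximates. A minimal self-contained sketch of the blocking pattern with a stop sentinel:</p>

```python
import queue
import threading

frame_queue = queue.Queue()
written = []

def writer():
    while True:
        item = frame_queue.get()   # blocks; no spinning, no sleep needed
        if item is None:           # sentinel: producer is done
            break
        written.append(item)

t = threading.Thread(target=writer)
t.start()
for frame in range(5):
    frame_queue.put(frame)
frame_queue.put(None)
t.join()
print(written)  # [0, 1, 2, 3, 4]
```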
|
<python><multithreading><queue>
|
2023-06-20 09:30:58
| 1
| 331
|
肉蛋充肌
|
76,513,289
| 534,238
|
How to properly use GCP's Artifact Repository with Python?
|
<h1>Adding Private GCP Repo Breaks normal <code>pip</code> behaviour</h1>
<p>When using Google Cloud Platform's Artifact Repository, you have to alter your <code>.pypirc</code> file for any uploads (<code>twine</code>) and your <code>pip.conf</code> for any downloads (<code>pip</code>).</p>
<p>For the downloads specifically, you have to add something like:</p>
<pre><code>[global]
extra-index-url = https://<YOUR-LOCATION>-python.pkg.dev/<YOUR-PROJECT>/<YOUR-REPO-NAME>/simple/
</code></pre>
<p>However, by doing this, now <em>anything</em> that will call <code>pip</code> will also check this extra repository, and when doing so, it will ask for a user name and password. This means that anything, like calls behind the scenes that <code>poetry</code>, <code>pdm</code>, <code>pip</code>, or <code>pipx</code> do will all ask for this username and password. Often these requests are being made as part of a non-interactive action, so that everything just stalls.</p>
<h1>Non-ideal, but working, solution:</h1>
<p>I ran across <a href="https://towardsdatascience.com/if-you-are-using-python-and-google-cloud-platform-this-will-simplify-life-for-you-part-2-bef56354fd4c" rel="noreferrer">this "solution"</a>, which does indeed work, but which the author himself says is not the <em>right</em> way to do things because it compromises security, bringing us back to the "infinitely live keys stored on a laptop" days.</p>
<p>(I'm sorry, that link is now behind Medium's paywall. In short, the link said that you should use a JSON key and provide that key in your <code>pip.conf</code> and <code>.pypirc</code> files. You can create a JSON key following something like this <a href="https://cloud.google.com/artifact-registry/docs/python/authentication#keyring-user" rel="noreferrer">Google doc showing how to authenticate with a key file</a>.)</p>
<h1>More secure solution??</h1>
<p>But what is the right solution? I want the following:</p>
<ol>
<li>To be able to run things like <code>pip</code>, <code>pdm</code>, etc. on my local machine and not have them stall, waiting for a username and password that I cannot fill out.
<ul>
<li>This is both for things that are in fact in my private repository, but also things living in normal PYPI or wherever I look.</li>
</ul>
</li>
<li>To keep the security in place, so that I am being recognized as "ok to do this" because I have authorized myself and my computer via <code>gcloud auth login</code> or something similar (<code>gcloud auth login</code> does nothing to assist with this repo issue, at least not with any flags I tried).</li>
<li>And still be able to perform <code>twine</code> actions (upload to registry) without problems.</li>
<li>I use newer solutions, specifically <code>pdm</code>, for package build. I need something that uses <code>pyproject.toml</code>, not <code>setup.py</code>, etc. If I perform something like <code>pdm install</code> (or <code>poetry install</code>), I need for credentials to be evaluated without human input.</li>
</ol>
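<p>For what it's worth, the approach I believe Google documents for this (hedged; verify against the current Artifact Registry docs) is the <code>keyrings.google-artifactregistry-auth</code> keyring backend: <code>pip install keyring keyrings.google-artifactregistry-auth</code>, authenticate once with <code>gcloud auth login</code> and <code>gcloud auth application-default login</code>, and keep <code>pip.conf</code> free of embedded credentials:</p>

```ini
# pip.conf sketch: no username/password in the URL; the keyring backend
# supplies short-lived tokens from your gcloud credentials on demand
[global]
extra-index-url = https://<YOUR-LOCATION>-python.pkg.dev/<YOUR-PROJECT>/<YOUR-REPO-NAME>/simple/
```

<p>The same backend should cover <code>twine</code> uploads, and since the tokens are short-lived it avoids the static JSON-key problem.</p>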
|
<python><google-cloud-platform><pip>
|
2023-06-20 09:30:22
| 2
| 3,558
|
Mike Williamson
|
76,513,199
| 1,559,401
|
How to pass Ajax POST request with GeoJSON or simply JSON data in Django view?
|
<p>I am learning how to use jQuery Ajax and how to combine it with my Django project.</p>
<p>My Django template comes with a JS source that adds a Leaflet <code>EasyButton</code> to my map with an Ajax POST request to pass some JSON data that is then processed on the Django side:</p>
<pre><code>// Create a button to process Geoman features
var btn_process_geoman_layers = L.easyButton('<img src="' + placeholder + '" height="24px" width="24px"/>', function (btn, map) {
var csrftoken = jQuery("[name=csrfmiddlewaretoken]").val();
function csrfSafeMethod(method) {
return (/^(GET|HEAD|OPTIONS|TRACE)$/.test(method));
}
$.ajaxSetup({
beforeSend: function (xhr, settings) {
if (!csrfSafeMethod(settings.type) && !this.crossDomain) {
xhr.setRequestHeader("X-CSRFToken", csrftoken);
}
}
});
// Retrieve all Geoman layers
var geoman_layers = map.pm.getGeomanLayers(true).toGeoJSON();
// Process only layers that are feature collections
if (geoman_layers["type"] === "FeatureCollection")
{
// For every feature (e.g. marker, polyline, polygon, circle)
for (feature_idx in geoman_layers["features"])
{
var feature = geoman_layers["features"][feature_idx];
// pass the feature using Ajax to Django
switch(feature["geometry"]["type"])
{
case "Polygon":
console.log("Polygon");
$.ajax({
url: "",
data: {
"feature": feature
},
datatype: "json",
type: "POST",
success: function (res, status) {
//alert(res);
alert(status);
},
error: function (res) {
alert(res.status);
}
});
break;
default:
console.log("Other feature: " + feature["geometry"]["type"]);
break;
}
}
}
});
btn_process_geoman_layers.addTo(map).setPosition("bottomright");
</code></pre>
<p>Currently the data is all features of type <code>Polygon</code> from the Geoman plugin (a plugin used for drawing), which I want to post-process a little bit and perhaps insert into the PostGIS DB that my project is using.</p>
<p>Please note that I'm anything but a seasoned web developer. In addition, the code is quite messy since right now I am just trying to make things work. In the future the <code>switch</code> statement will probably be replaced by just the contents of <code>case "Polygon"</code>.</p>
<p>In my Django view I have the following endpoint (in my case this is <code>/</code>):</p>
<pre><code>def view_map(request):
if request.method == "POST":
print(request.POST)
request_feature = ... # Convert request.POST to a JSON or better GeoJSON object
# Process feature
# ...
</code></pre>
<p>My issue is that <code>request_feature</code> is a <code>QueryDict</code> and I cannot figure out how to convert it to a proper JSON/GeoJSON object in Python. An example would be:</p>
<pre><code><QueryDict: {'feature[type]': ['Feature'], 'feature[geometry][type]': ['Polygon'], 'feature[geometry][coordinates][0][0][]': ['8.408489', '48.934202'], 'feature[geometry][coordinates][0][1][]': ['8.408489', '48.96465'], 'feature[geometry][coordinates][0][2][]': ['8.467026', '48.96465'], 'feature[geometry][coordinates][0][3][]': ['8.467026', '48.934202'], 'feature[geometry][coordinates][0][4][]': ['8.408489', '48.934202']}>
</code></pre>
<p>Using <code>json.dumps(request.POST)</code> or <code>geojson.dumps(request.POST)</code> yields a JSON object but not in the form I am expecting it to be:</p>
<pre><code>{
"feature[type]": "Feature",
"feature[geometry][type]": "Polygon",
"feature[geometry][coordinates][0][0][]": "48.934202",
"feature[geometry][coordinates][0][1][]": "48.96465",
"feature[geometry][coordinates][0][2][]": "48.96465",
"feature[geometry][coordinates][0][3][]": "48.934202",
"feature[geometry][coordinates][0][4][]": "48.934202"
}
</code></pre>
<p>So instead of e.g. calling <code>request_feature['geometry']['coordinates'][0]</code> in Python to retrieve the array item, I have to do <code>request_feature['feature[geometry][coordinates][0]']</code>. Given how my data currently looks, I would say that I have a very flat dictionary-like object, where all initially nested objects are now at the top-level.</p>
<p>I am almost sure that I am doing something wrong on the Ajax side of things but I cannot pinpoint the issue.</p>
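<p>For reference: jQuery's default form encoding (<code>application/x-www-form-urlencoded</code>) flattens nested objects into exactly those bracketed keys. The fix I would try (hedged, my own sketch): send <code>JSON.stringify(feature)</code> with <code>contentType: "application/json"</code>, and parse <code>request.body</code> instead of <code>request.POST</code> on the Django side. The round-trip, with the jQuery/Django specifics as comments:</p>

```python
import json

feature = {
    "type": "Feature",
    "geometry": {
        "type": "Polygon",
        "coordinates": [[[8.408489, 48.934202], [8.408489, 48.96465]]],
    },
}

# client side (jQuery, sketch):
#   $.ajax({url: "", type: "POST", contentType: "application/json",
#           data: JSON.stringify(feature), ...})
payload = json.dumps(feature)

# server side (Django view, sketch): the raw body is intact JSON
request_feature = json.loads(payload)
print(request_feature["geometry"]["coordinates"][0][0])  # [8.408489, 48.934202]
```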
|
<python><json><django><ajax><post>
|
2023-06-20 09:18:44
| 1
| 9,862
|
rbaleksandar
|
76,513,096
| 8,839,068
|
Use pre-saved physics options for pyvis network
|
<p>I have a pyvis network. Consider this example</p>
<pre><code>from pyvis.network import Network
G = Network(notebook=True)
G.add_nodes([1, 2, 3], label=['Peter', 'Paul', 'Mary'])
G.add_nodes([4, 5, 6, 7],
            label=['Michael', 'Ben', 'Oliver', 'Olivia'],
            color=['#3da831', '#9a31a8', '#3155a8', '#eb4034'])
</code></pre>
<p>When I visualized my actual network, the points kept moving. As per <a href="https://stackoverflow.com/a/70905482/8839068">this</a> answer, I turn on physics controls using <code>G.show_buttons(filter_=['physics'])</code>. I then show the network in an HTML (<code>G.show('tmp.html')</code>) and open the HTML.</p>
<p>There, I modify the options and click on 'generate options' which gives me this options summary:</p>
<pre><code>const options = {
"physics": {
"enabled": false,
"forceAtlas2Based": {
"springLength": 100
},
"minVelocity": 0.75,
"solver": "forceAtlas2Based"
}
}
</code></pre>
<p>I would now like to apply these saved options to the graph directly, without displaying the physics controls anew every time.</p>
<p><strong>How do I set pre-saved physics options to the <code>pyvis</code> object?</strong></p>
<p>Thanks for any and all pointers.</p>
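<p>A hedged sketch of what I understand to be the pyvis mechanism (verify against the pyvis docs for your version): paste the generated block into <code>Network.set_options</code> instead of calling <code>show_buttons</code>. The runnable part below only builds the options string; the pyvis calls are comments:</p>

```python
# Hypothetical sketch: feed the options generated in the browser back to
# set_options() so they apply on every render, instead of show_buttons().
saved_options = """
var options = {
  "physics": {
    "enabled": false,
    "forceAtlas2Based": {
      "springLength": 100
    },
    "minVelocity": 0.75,
    "solver": "forceAtlas2Based"
  }
}
"""
# G.set_options(saved_options)  # then G.show('tmp.html') as before
print('"physics"' in saved_options)  # True
```

<p>Note that pyvis examples use <code>var options = {...}</code> rather than <code>const</code>; I believe the leading assignment is stripped either way, but check before relying on it.</p>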
|
<python><python-3.x><pyvis>
|
2023-06-20 09:06:18
| 1
| 4,240
|
Ivo
|
76,512,898
| 5,302,323
|
Cannot extract table using Selenium and BS despite seeing it in the HTML using inspect
|
<p>I have a website that has a dynamic table, which simply does not show up when I try to use combos of Selenium and BS.</p>
<p>I have looked at code in several answers, but none of it works.</p>
<p>Do you have any bulletproof code that will extract tables, no matter how dynamic the webpage is?</p>
<p>Here is my (unsuccessful) attempt :</p>
<pre><code># Configure Selenium options
options = webdriver.ChromeOptions()
options.add_argument('--headless') # Run in headless mode
options.add_argument('--disable-gpu')
options.add_argument('--no-sandbox')
options.add_argument('--disable-dev-shm-usage')
# Create a new instance of the Chrome driver
driver = webdriver.Chrome(options=options)
driver.get(url)
# Get the full page source
page_source = driver.page_source
# Parse the HTML content using BeautifulSoup
soup = BeautifulSoup(page_source, 'html.parser')
# Find all tables using class name or attribute value
tables = soup.find_all('table', attrs={'class': 'table1'})
print(tables)
# Close the browser
driver.quit()
</code></pre>
<p>Prior to that, I also tried using <code>requests.get</code> with headers equal to:</p>
<pre><code>
header = {'User-Agent':'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/71.0.3578.98 Safari/537.36'}
</code></pre>
<p>Then I also tried adding sleep or time to wait for content to load... but nada.</p>
<p>The table is nested here in the html:</p>
<pre><code><div class="rightContainer" style="width:730px">
<div id="master" class="mainContent Small" style="width:730px">
<div class="greyContainerBtm">
<div class="greyContainerBody">
<div class="freeTextArea">
<img class="printObj" src="/images/logo.gif" style="display:none">
<h2 class="printObj" style="display:none">Foreign Currency Fixed Deposit Rates</h2>
<b>(As at&nbsp;<span id="datetime">20-JUN-2023 10:30 a.m.</span>)</b>
<br><br><span>The rates shown here are indicative only and are subject to change without prior notice.</span>
<div class="mainContentArea" style="width:700px">
<br>
<h2 class="pageTitle">Board Rates</h2><br><h2 class="printObj" style="display:none">Board Rates</h2><br><table class="table1">
<tbody><tr>
<th rowspan="2" colspan="2" id="hd1_1" class="txt_acc_type valignC" style="width: 100px;">Currency </th>
<th colspan="7" class="txt_deposit_term alignC valignC">Rate (% p.a.)</th>
</tr>
<tr>
<td class="txt_overnight highlight alignR" id="hd1_2" style="width: 70px;">Minimum (Equivalent) Personal / Corporate</td>
<td class="txt_24h highlight alignR" id="hd1_3" style="width: 60px;">1 week</td>
<td class="txt_24h highlight alignR" id="hd1_4">1 month</td>
<td class="txt_24h highlight alignR" id="hd1_5">2 months</td>
<td class="txt_24h highlight alignR" id="hd1_6">3 months</td>
<td class="txt_24h highlight alignR" id="hd1_7">6 months</td>
<td class="txt_24h highlight alignR" id="hd1_8">12 months</td>
</tr>
<tr class="data">
<td class="align"><b>Australian Dollar</b></td>
<td class="align"><b>AUD</b></td>
<td class="alignR">USD10K/20K</td>
<td class="alignR">3.6400</td>
<td class="alignR">3.8000</td>
<td class="alignR">4.0600</td>
<td class="alignR">4.2800</td>
<td class="alignR">4.4900</td>
<td class="alignR">4.9300</td>
</tr>
<tr class="data">
<td class="align"><b>Canadian Dollar</b></td>
<td class="align"><b>CAD</b></td>
<td class="alignR">USD10K/20K</td>
<td class="alignR">4.2200</td>
<td class="alignR">4.2700</td>
<td class="alignR">4.4600</td>
<td class="alignR">4.6500</td>
<td class="alignR">4.8500</td>
<td class="alignR">5.2200</td>
</tr>
<tr class="data">
<td class="align"><b>Swiss Franc</b></td>
<td class="align"><b>CHF</b></td>
<td class="alignR">USD10K/20K</td>
<td class="alignR">1.0700</td>
<td class="alignR">1.2300</td>
<td class="alignR">1.2300</td>
<td class="alignR">1.3200</td>
<td class="alignR">1.4200</td>
<td class="alignR">1.6600</td>
</tr>
<tr class="data">
<td class="align"><b>Renminbi</b></td>
<td class="align"><b>CNY</b></td>
<td class="alignR">USD10K/20K</td>
<td class="alignR">0.2500</td>
<td class="alignR">1.2000</td>
<td class="alignR">1.3000</td>
<td class="alignR">1.7000</td>
<td class="alignR">1.8000</td>
<td class="alignR">2.0000</td>
</tr>
<tr class="data">
<td class="align"><b>Euro</b></td>
<td class="align"><b>EUR</b></td>
<td class="alignR">USD10K/20K</td>
<td class="alignR">2.8000</td>
<td class="alignR">2.9200</td>
<td class="alignR">3.0300</td>
<td class="alignR">3.1400</td>
<td class="alignR">3.3900</td>
<td class="alignR">3.5900</td>
</tr>
<tr class="data">
<td class="align"><b>Pound Sterling</b></td>
<td class="align"><b>GBP</b></td>
<td class="alignR">USD10K/20K</td>
<td class="alignR">4.0200</td>
<td class="alignR">4.2700</td>
<td class="alignR">4.4000</td>
<td class="alignR">4.5300</td>
<td class="alignR">4.8600</td>
<td class="alignR">5.2400</td>
</tr>
<tr class="data">
<td class="align"><b>Hong Kong Dollar</b></td>
<td class="align"><b>HKD</b></td>
<td class="alignR">USD10K/20K</td>
<td class="alignR">4.6300</td>
<td class="alignR">4.6400</td>
<td class="alignR">4.6400</td>
<td class="alignR">4.6400</td>
<td class="alignR">4.6600</td>
<td class="alignR">4.7500</td>
</tr>
<tr class="data">
<td class="align"><b>Japanese Yen</b></td>
<td class="align"><b>JPY</b></td>
<td class="alignR">USD10K/20K</td>
<td class="alignR">0.0000</td>
<td class="alignR">0.0000</td>
<td class="alignR">0.0000</td>
<td class="alignR">0.0000</td>
<td class="alignR">0.0000</td>
<td class="alignR">0.0000</td>
</tr>
<tr class="data">
<td class="align"><b>New Zealand Dollar</b></td>
<td class="align"><b>NZD</b></td>
<td class="alignR">USD10K/20K</td>
<td class="alignR">4.8500</td>
<td class="alignR">4.9500</td>
<td class="alignR">5.1900</td>
<td class="alignR">5.3900</td>
<td class="alignR">5.5500</td>
<td class="alignR">5.8500</td>
</tr>
<tr class="data">
<td class="align"><b>United States Dollar</b></td>
<td class="align"><b>USD</b></td>
<td class="alignR">USD10K/20K</td>
<td class="alignR">4.1700</td>
<td class="alignR">4.2500</td>
<td class="alignR">4.7200</td>
<td class="alignR">4.9000</td>
<td class="alignR">5.0000</td>
<td class="alignR">5.0400</td>
</tr>
</tbody></table>
<br> <br>
<h2 class="pageTitle">Tier Rates </h2><br><h2 class="printObj" style="display:none">Tier Rates </h2><br><table class="table2">
<tbody><tr>
<th rowspan="2" colspan="2" class="txt_acc_type valignC" id="hd1_1" width="100">Currency </th>
<th colspan="7" class="txt_deposit_term alignC valignC">Rate (% p.a.)</th>
</tr>
<tr>
<td class="txt_overnight highlight alignR" id="hd1_2" width="70">Minimum </td>
<td class="txt_24h highlight alignR" id="hd1_3" width="60">1 week</td>
<td class="txt_24h highlight alignR" id="hd1_3" width="60">1 month</td>
<td class="txt_24h highlight alignR" id="hd1_3" width="60">2 months</td>
<td class="txt_24h highlight alignR" id="hd1_3" width="60">3 months</td>
<td class="txt_24h highlight alignR" id="hd1_3" width="60">6 months</td>
<td class="txt_24h highlight alignR" id="hd1_3" width="60">12 months</td>
</tr>
<tr class="data">
<td class="align"><b>Australian Dollar</b></td>
<td class="align"><b>AUD</b></td>
<td class="alignR">100,000</td>
<td class="alignR">3.7700</td>
<td class="alignR">3.9300</td>
<td class="alignR">4.1900</td>
<td class="alignR">4.4100</td>
<td class="alignR">4.6200</td>
<td class="alignR">5.0600</td>
</tr>
<tr class="data">
<td class="align"><b>Canadian Dollar</b></td>
<td class="align"><b>CAD</b></td>
<td class="alignR">100,000</td>
<td class="alignR">4.3500</td>
<td class="alignR">4.4000</td>
<td class="alignR">4.5900</td>
<td class="alignR">4.7800</td>
<td class="alignR">4.9800</td>
<td class="alignR">5.3500</td>
</tr>
<tr class="data">
<td class="align"><b>Swiss Franc</b></td>
<td class="align"><b>CHF</b></td>
<td class="alignR">100,000</td>
<td class="alignR">1.2000</td>
<td class="alignR">1.3600</td>
<td class="alignR">1.3600</td>
<td class="alignR">1.4500</td>
<td class="alignR">1.5500</td>
<td class="alignR">1.7900</td>
</tr>
<tr class="data">
<td class="align"><b>Renminbi</b></td>
<td class="align"><b>CNY</b></td>
<td class="alignR">500,000</td>
<td class="alignR">0.3000</td>
<td class="alignR">1.5000</td>
<td class="alignR">1.6000</td>
<td class="alignR">2.0000</td>
<td class="alignR">2.1000</td>
<td class="alignR">2.3000</td>
</tr>
<tr class="data">
<td class="align"><b>Euro</b></td>
<td class="align"><b>EUR</b></td>
<td class="alignR">100,000</td>
<td class="alignR">2.9300</td>
<td class="alignR">3.0500</td>
<td class="alignR">3.1600</td>
<td class="alignR">3.2700</td>
<td class="alignR">3.5200</td>
<td class="alignR">3.7200</td>
</tr>
<tr class="data">
<td class="align"><b>Pound Sterling</b></td>
<td class="align"><b>GBP</b></td>
<td class="alignR">50,000</td>
<td class="alignR">4.1500</td>
<td class="alignR">4.4000</td>
<td class="alignR">4.5300</td>
<td class="alignR">4.6600</td>
<td class="alignR">4.9900</td>
<td class="alignR">5.3700</td>
</tr>
<tr class="data">
<td class="align"><b>Hong Kong Dollar</b></td>
<td class="align"><b>HKD</b></td>
<td class="alignR">500,000</td>
<td class="alignR">4.7600</td>
<td class="alignR">4.7700</td>
<td class="alignR">4.7700</td>
<td class="alignR">4.7700</td>
<td class="alignR">4.7900</td>
<td class="alignR">4.8800</td>
</tr>
<tr class="data">
<td class="align"><b>Japanese Yen</b></td>
<td class="align"><b>JPY</b></td>
<td class="alignR">10,000,000</td>
<td class="alignR">0.0000</td>
<td class="alignR">0.0000</td>
<td class="alignR">0.0000</td>
<td class="alignR">0.0000</td>
<td class="alignR">0.0000</td>
<td class="alignR">0.0000</td>
</tr>
<tr class="data">
<td class="align"><b>New Zealand Dollar</b></td>
<td class="align"><b>NZD</b></td>
<td class="alignR">100,000</td>
<td class="alignR">4.9800</td>
<td class="alignR">5.0800</td>
<td class="alignR">5.3200</td>
<td class="alignR">5.5200</td>
<td class="alignR">5.6800</td>
<td class="alignR">5.9800</td>
</tr>
<tr class="data">
<td class="align"><b>United States Dollar</b></td>
<td class="align"><b>USD</b></td>
<td class="alignR">100,000</td>
<td class="alignR">4.3000</td>
<td class="alignR">4.3800</td>
<td class="alignR">4.8500</td>
<td class="alignR">5.0300</td>
<td class="alignR">5.1300</td>
<td class="alignR">5.1700</td>
</tr>
</tbody></table>
<br><br><br>
<div style="width:700px">
<b>Notes:</b><br>For fixed deposit amount of SGD500,000 equivalent or over, please contact us for the applicable prevailing interest rate.
</div>
<br>
</div>
</div>
</div>
</div>
</div>
</div> '''
</code></pre>
|
<python><selenium-webdriver><beautifulsoup>
|
2023-06-20 08:46:15
| 1
| 365
|
Cla Rosie
|
76,512,653
| 16,383,578
|
How do I speed-up batch multi-index lookup where some keys might not be present?
|
<p>I want to know how to do a multi-index lookup for a sequence containing a large number (thousands and up) of elements, where two levels of indexing are used: one index is the element itself, and the other index is the same in every lookup.</p>
<p>Before I explain anything further, I would like to clarify that this is not homework or job-related; I am an unemployed programming enthusiast, and I only state this because some people might assume it is homework.</p>
<p>The elements can be of any hashable type, not just <code>int</code>s (they can be <code>str</code>s, for instance), and they might not be present in the lookup table; the second index is an <code>int</code> and is the same in each batch lookup. The desired behavior is: if the element is in the lookup table, output the element located at the second index in the collection associated with it in the lookup table; otherwise, output the element itself.</p>
<p>I want to know how to scale it up so it can handle gigantic inputs.</p>
<p>I have created a script that rotates basic Latin letters in a string by n (0 <= n <= 25, n an integer) to illustrate the point: if a character in the string is one of the 52 Latin letters, it is replaced by the letter located n indices further along in the alphabet, with wrap-around and case preserved; otherwise the character is left as is.</p>
<p>The code is only used as a minimal reproducible example, it is not my goal, and I have measured the execution time of the statements and provided the timing in comments:</p>
<pre class="lang-py prettyprint-override"><code>import random
from typing import Mapping, Sequence
from itertools import product
LETTERS = [
{chr(a + b): chr(a + (b + i) % 26) for a, b in product((65, 97), range(26))}
for i in range(26)
]
# 365 µs ± 24.7 µs per loop (mean ± std. dev. of 7 runs, 1,000 loops each)
UPPER = [chr(65 + i) for i in range(26)]
# 2.43 µs ± 75.5 ns per loop (mean ± std. dev. of 7 runs, 100,000 loops each)
LOWER = [chr(97 + i) for i in range(26)]
# 2.68 µs ± 222 ns per loop (mean ± std. dev. of 7 runs, 100,000 loops each)
ROT = {c: case[i:] + case[:i] for case in (LOWER, UPPER) for i, c in enumerate(case)}
# 24.9 µs ± 1.77 µs per loop (mean ± std. dev. of 7 runs, 10,000 loops each)
ROT_LIST = [{} for _ in range(26)]
# 1.58 µs ± 13.3 ns per loop (mean ± std. dev. of 7 runs, 1,000,000 loops each)
for k, v in ROT.items():
for i, c in enumerate(v):
ROT_LIST[i][k] = c
# 135 µs ± 5.05 µs per loop (mean ± std. dev. of 7 runs, 10,000 loops each)
# (2.43 + 2.68 + 24.9 + 1.58) == 31.59
# (31.59 + 135) == 166.59
# 31.59 / 365 == 0.08654794520547945
# 166.59 / 365 == 0.4564109589041096
def rot(s: str, d: int = 13) -> str:
return "".join(LETTERS[d].get(c, c) for c in s)
def rot_v1(s: str, d: int = 13) -> str:
return "".join(c if (r := ROT.get(c, c)) == c else r[d] for c in s)
def rot_v2(s: str, d: int = 13) -> str:
return "".join(ROT[c][d] if c in ROT else c for c in s)
def get_size(obj: object) -> int:
size = obj.__sizeof__()
if isinstance(obj, Mapping):
size += sum(get_size(k) + get_size(v) for k, v in obj.items())
elif isinstance(obj, Sequence) and not isinstance(obj, str):
size += sum(get_size(e) for e in obj)
return size
# get_size(LETTERS) == 194152
# get_size(ROT) == 85352
# 85352 / 194152 == 0.439614322798632
string = random.choices(LOWER, k=4096)
</code></pre>
<p><code>LETTERS</code> is the most straightforward way I can think of to generate the lookup table, but it is both memory-inefficient and time-inefficient. <code>ROT</code> is both time-efficient and memory-efficient, it reduces memory usage by 56.04% and execution time by 91.35%, though I guess querying it might be slower than <code>LETTERS</code>.</p>
<p>I have also found that generating a structure like <code>LETTERS</code> from <code>ROT</code> takes 54.36% less time than computing it directly.</p>
<p>I have performed multiple tests and the timings vary widely, but it seems that <code>rot_v2</code> is consistently faster than <code>rot</code>, which in turn is consistently faster than <code>rot_v1</code>:</p>
<pre><code>In [9]: %timeit rot(string)
742 µs ± 27.3 µs per loop (mean ± std. dev. of 7 runs, 1,000 loops each)
In [10]: %timeit rot_v1(string)
771 µs ± 8.55 µs per loop (mean ± std. dev. of 7 runs, 1,000 loops each)
In [11]: %timeit rot_v2(string)
569 µs ± 6.14 µs per loop (mean ± std. dev. of 7 runs, 1,000 loops each)
In [12]: %timeit rot_v2(string)
588 µs ± 50.7 µs per loop (mean ± std. dev. of 7 runs, 1,000 loops each)
In [13]: %timeit rot_v1(string)
821 µs ± 44.1 µs per loop (mean ± std. dev. of 7 runs, 1,000 loops each)
In [14]: %timeit rot_v2(string)
609 µs ± 53.1 µs per loop (mean ± std. dev. of 7 runs, 1,000 loops each)
In [15]: %timeit rot(string)
601 µs ± 50.9 µs per loop (mean ± std. dev. of 7 runs, 1,000 loops each)
In [16]: %timeit rot(string)
603 µs ± 48.1 µs per loop (mean ± std. dev. of 7 runs, 1,000 loops each)
In [17]: %timeit rot_v2(string)
586 µs ± 54.9 µs per loop (mean ± std. dev. of 7 runs, 1,000 loops each)
In [18]: %timeit rot_v1(string)
818 µs ± 68.5 µs per loop (mean ± std. dev. of 7 runs, 1,000 loops each)
</code></pre>
<p>I have considered using <code>NumPy</code> but so far I have only used booleans and integers as array indices and I don't know how to use string array as indices.</p>
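<p>For the special case where the keys are single characters (i.e. small integers after encoding), I did sketch a NumPy variant that uses a 256-entry byte lookup table instead of string indices; it does not generalize to arbitrary hashable keys, but I include it for comparison:</p>

```python
import numpy as np

# Sketch: a 256-entry table mapping byte values; non-letter bytes map to
# themselves, which gives the "missing key -> identity" behavior for free.
d = 13
table = np.arange(256, dtype=np.uint8)
for base in (65, 97):                      # ord('A'), ord('a')
    offsets = np.arange(26)
    table[base + offsets] = base + (offsets + d) % 26

def rot_np(s: str, lut: np.ndarray = table) -> str:
    codes = np.frombuffer(s.encode("latin-1"), dtype=np.uint8)
    return lut[codes].tobytes().decode("latin-1")

print(rot_np("Hello, World!"))  # Uryyb, Jbeyq!
```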
<p>How can I speed it up?</p>
<hr />
<h2>Edit</h2>
<p>Clarification: <code>rot</code> is not what I actually want, it is just an MRE of a more general multi-index lookup where some keys might be missing.</p>
|
<python><python-3.x><multidimensional-array><nested>
|
2023-06-20 08:15:00
| 1
| 3,930
|
Ξένη Γήινος
|
76,512,259
| 8,458,083
|
How to do pattern matching with alias type
|
<p>This simple example works</p>
<pre><code>from linesParser import get_ProcessCell
from typing import List, Tuple, Union

ti = Tuple[int, int]
ua = Union[int, ti]

a = (3, 3)
match a:
    case (x, y):
        print("A")
    case _:
        print("B")
</code></pre>
<p>But if I change the definition of ti in something more complicated like an union of different tuples</p>
<pre><code>ti = Union[Tuple[int, int], Tuple[int, int, int]]
ua = Union[int, ti]

a = (3, 3)
match a:
    case (x, y) | (x2, y2, z2):
        print("A")
    case _:
        print("B")
</code></pre>
<p>mypy myfile.py returns</p>
<blockquote>
<p>tobedeteled.py:9: error: Alternative patterns bind different names</p>
</blockquote>
<p>I've tried that too</p>
<pre><code>from linesParser import get_ProcessCell
from typing import List, Tuple, Union

ti = Union[Tuple[int, int], Tuple[int, int, int]]
ua = Union[int, ti]

a = (3, 3)
match a:
    case isinstance(a, ti):
        print("A")
    case _:
        print("B")
</code></pre>
<blockquote>
<p>error: Expected type in class pattern; found "isinstance"</p>
</blockquote>
<p>I would prefer a solution like the last one, because if I change the definition of ti again I don't want to have to change the code in the case statement.</p>
<p><strong>sequel</strong></p>
<p>After reading this <a href="https://buddy.works/tutorials/structural-pattern-matching-In-python" rel="nofollow noreferrer">website</a> I tried this</p>
<pre><code>from linesParser import get_ProcessCell
from typing import List, Tuple, Union

ti = Union[Tuple[int, int], Tuple[int, int, int]]
ua = Union[int, ti]

a = (3, 3)
match a:
    case ti() as ati:
        print("A")
    case _:
        print("B")
</code></pre>
<blockquote>
<p>error: Class pattern class must not be a type alias with type parameters [misc]</p>
</blockquote>
<pre><code>from linesParser import get_ProcessCell
from typing import List, Tuple, Union

ti = Union[Tuple[int, int], Tuple[int, int, int]]
ua = Union[int, ti]

a = (3, 3)
match a:
    case ti(ati):
        print("A")
    case _:
        print("B")
</code></pre>
<blockquote>
<p>error: Class pattern class must not be a type alias with type parameters [misc]</p>
</blockquote>
<p><strong>sequel 2</strong></p>
<p>I've tried this way. mypy raises no error, but it doesn't work.</p>
<pre><code>a: ua = (1, 3)
if type(a) is ti:
    print("A")
elif type(a) is int:
    print("B")
else:
    print("C")
</code></pre>
<p>C is printed...</p>
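<p>For reference, a runtime check I sketched with <code>typing.get_args</code> that does behave the way I want, although it hard-codes the knowledge that the alias members are tuple types, which is exactly the coupling I'd like to avoid:</p>

```python
from typing import Tuple, Union, get_args

ti = Union[Tuple[int, int], Tuple[int, int, int]]

def matches_ti(value) -> bool:
    # Walk the members of the Union and compare element counts and types.
    if not isinstance(value, tuple):
        return False
    for member in get_args(ti):
        elem_types = get_args(member)
        if len(value) == len(elem_types) and all(
            isinstance(v, t) for v, t in zip(value, elem_types)
        ):
            return True
    return False

print(matches_ti((1, 3)), matches_ti((1, 2, 3)), matches_ti(5))  # True True False
```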
|
<python><mypy>
|
2023-06-20 07:23:05
| 2
| 2,017
|
Pierre-olivier Gendraud
|
76,511,964
| 9,809,865
|
Redis connection scheduled to be closed ASAP for overcoming of output buffer limits
|
<p>I have some celery tasks that run on VMs to run some web crawling tasks.</p>
<p>Python 3.6, Celery 4.2.1 with broker as Redis (self-managed). The same Redis server is used for caching and locks.</p>
<h3>There are two relevant tasks:</h3>
<p><strong>1. job_executor:</strong> This celery worker runs on a VM and listens to the queue crawl_job_{job_id}. This worker will execute the web crawling tasks.
Only a single job_executor worker with concurrency = 1 runs on 1 VM.
Each Crawl Job can have 1-20,000 URLs, and anywhere between 1 and 100 VMs running in a GCP Managed Instance Group. The number of VMs to run is defined in a configuration for each crawl job.
Each task can take from 15 seconds to 120 minutes.</p>
<p><strong>2. crawl_job_initiator:</strong> This celery worker runs on a separate VM and listens to the queue crawl_job_initiator_queue. One task creates the required MIG and the VMs using terraform for a single Crawl Job ID and adds the job_executor tasks to the crawl_job_{job_id} queue.</p>
<p>The task takes about 70 seconds to complete.</p>
<p>The concurrency for this worker was set to 1 so only 1 Crawl Job could be started at once.</p>
<p>To reduce the time it was taking to start a large number of Crawl Jobs, I decided to increase the concurrency of crawl_job_initiator to 20 without changing any other configuration. I also added a lock at the job_id level so that other tasks do not interfere with the crawl_job_initiator task. The lock is acquired at the start of the task and released once the task is over. It is a non-blocking lock that retries the task with exponential backoff if the lock was not acquired.</p>
<p>Other tasks include a periodic task that deletes the VMs once the Crawl Job is finished.</p>
<h2>The problem:</h2>
<p>After increasing the concurrency I started getting the following 2 errors:</p>
<h3>Error #1</h3>
<p>On the crawl_job_initiator and other task logs:</p>
<pre><code>consumer: Cannot connect to redis://:**@10.16.1.3:6379/0: MISCONF Redis is configured to save RDB snapshots, but it is currently not able to persist on disk. Commands that may modify the data set are disabled, because this instance is configured to report errors during writes if RDB snapshotting fails (stop-writes-on-bgsave-error option). Please check the Redis logs for details about the RDB error..
</code></pre>
<p>On checking the redis server logs I found this:</p>
<pre><code># Can't save in background: fork: Cannot allocate memory
</code></pre>
<p>Increasing the redis server memory configuration solved the issue for now. I think this is also solvable by setting vm.overcommit_memory = 1 (which I have not done yet since everything is going fine till now).</p>
<h3>Error #2:</h3>
<pre><code>Client id=43572 addr=10.128.1.218:57232 fd=7876 name= age=393 idle=385 flags=P db=0 sub=0 psub=1 multi=-1 qbuf=0 qbuf-free=0 obl=0 oll=2958 omem=48606000 events=rw cmd=psubscribe scheduled to be closed ASAP for overcoming of output buffer limits
</code></pre>
<p>The IP address is the IP for one of the MIG VMs running the job_executor tasks. This was also happening for the clients running crawl_job_initiator task.</p>
<p>I read more about this and found that increasing the <code>client-output-buffer-limit</code> for pubsub clients will solve that.
Original setting: <code>pubsub 16mb 8mb 60</code>
Updated setting: <code>pubsub 64mb 32mb 120</code></p>
<p>Even with this setting I started getting the same error. So I increased it by a lot so that the issue would be fixed for once:</p>
<p><code>pubsub 4000mb 2000mb 60</code></p>
<p>Since then I have been trying to figure out why this error is coming. I tried adding 100,000 tasks to a single job_executor queue to see if the buffer would be filled but that was not the case.</p>
<p>What can be the reason behind the error? How can I go ahead debugging this issue? Is there a straightforward fix for the same?</p>
<h3>Celery Worker Configuration</h3>
<pre><code># job_executor supervisord config for celery worker
[crawl-job-executor]
command=/home/ubuntu/Env/bin/celery worker -A crawler.taskapp --loglevel=info --concurrency=1 --max-tasks-per-child=1 --max-memory-per-child=350000 -Ofair -Q crawl_job_{job_id} -n crawl_job_{job_id}
autostart=true
autorestart=true
startsecs=10
stopwaitsecs=10
</code></pre>
<pre><code># crawl_job_initiator supervisord config for celery worker
[crawl-job-initiator]
command=/home/ubuntu/Env/bin/celery worker -A crawler.taskapp --loglevel=info --concurrency=20 --max-tasks-per-child=1 --max-memory-per-child=350000 -Ofair -Q crawl_job_initiator -n crawl_job_initiator@%%h
autostart=true
autorestart=true
startsecs=10
stopwaitsecs=10
</code></pre>
|
<python><django><redis><celery><django-redis>
|
2023-06-20 06:40:14
| 0
| 656
|
Prashant Sengar
|
76,511,923
| 14,263,125
|
Add columns to existing dataframe with a loop
|
<p>I'm trying to build a dataframe from the first columns of several other dataframes with a loop. All of them have the same index.</p>
<pre><code>df1 = pd.DataFrame(np.random.randint(0,100,size=(3, 2)), columns=list('AB'), index=('class1', 'class2', 'class3'))
df2 = pd.DataFrame(np.random.randint(0,100,size=(3, 2)), columns=list('CD'), index=('class1', 'class2', 'class3'))
df3 = pd.DataFrame(np.random.randint(0,100,size=(3, 2)), columns=list('EF'), index=('class1', 'class2', 'class3'))
df = pd.DataFrame( index=('class1', 'class2', 'class3'))
for f in [df1, df2, df3]:
    first_col = f.iloc[:,0]
    df[f] = first_col.values
</code></pre>
<p>The expected output is a matrix with same formatting as below:</p>
<pre><code> A C E
class1 2 18 62
class2 46 46 11
class3 57 73 92
</code></pre>
<p>But this code did not work.</p>
<p>This question mirrors the one below, but the answers I tried (quoted underneath) did not work:
<a href="https://stackoverflow.com/questions/12555323/how-to-add-a-new-column-to-an-existing-dataframe">How to add a new column to an existing DataFrame?</a></p>
<blockquote>
<pre><code>df.set_index([first_col], append=True)
df.assign(f=first_col.values)
</code></pre>
</blockquote>
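<p>For completeness, a variant of my loop that uses the column <em>name</em> as the key instead of the whole dataframe does produce the layout I'm after; I note it here in case it clarifies the expected output:</p>

```python
import numpy as np
import pandas as pd

np.random.seed(0)  # fixed seed so the example is reproducible
idx = ('class1', 'class2', 'class3')
df1 = pd.DataFrame(np.random.randint(0, 100, size=(3, 2)), columns=list('AB'), index=idx)
df2 = pd.DataFrame(np.random.randint(0, 100, size=(3, 2)), columns=list('CD'), index=idx)
df3 = pd.DataFrame(np.random.randint(0, 100, size=(3, 2)), columns=list('EF'), index=idx)

df = pd.DataFrame(index=idx)
for f in [df1, df2, df3]:
    df[f.columns[0]] = f.iloc[:, 0]  # use the column *name* as the key

print(df.columns.tolist())  # ['A', 'C', 'E']
```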
|
<python><pandas><dataframe><loops>
|
2023-06-20 06:33:53
| 2
| 375
|
Drosera_capensis
|
76,511,594
| 603,657
|
Why does celery on_worker_shutdown not maintain object state
|
<p>I'm trying to accumulate a list of events in memory and then write them to an API periodically in a celery task. I want to attempt to flush the buffer if the celery worker is being shut down, so I attempted to use <code>on_worker_shutdown.connect</code> with my flush function. However, although it does call the flush function, it shows that there are no items in my buffer.</p>
<p>My code is as follows (simplified a little):</p>
<pre class="lang-py prettyprint-override"><code>from celery import Task, Celery
from celery.signal import worker_shutdown
class EventsBuffer:
def __init__(self):
self.buffer = []
worker_shutdown.connect(self._on_worker_shutdown)
def add_event(self, event):
self.buffer.append(event)
if len(self.buffer) > 50:
self.flush()
def flush():
print('Flushing the buffer with %s items' % len(self.buffer))
# do actual flush work
def _on_worker_shutdown(self, **kwargs):
print('Shutting down')
self.flush()
class TaskBase(Task):
def __init__(self):
self.events_buffer = EventsBuffer()
app = Celery(__name__)
@app.task(base=TaskBase, bind=True)
def send_event(self, event):
self.events_buffer.add_event(event)
</code></pre>
<p>Let's say I add two tasks then try to shut it down.</p>
<p>Expected result is:</p>
<pre><code>Shutting down.
Flushing the buffer with 2 items.
</code></pre>
<p>However in reality I get:</p>
<pre><code>Shutting down.
Flushing the buffer with 0 items.
</code></pre>
<p>And upon inspection, the buffer object is empty.</p>
<p>What's going on, and how can I accomplish what I want to do?</p>
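<p>To illustrate what I suspect might be happening (an assumption on my part): with a prefork pool the tasks run in child processes, so appends to an in-memory buffer never reach the parent process that receives the shutdown signal. A minimal stand-in using plain <code>multiprocessing</code>:</p>

```python
import multiprocessing as mp

buffer = []  # stands in for EventsBuffer.buffer

def add_event(event):
    buffer.append(event)  # mutates the *child process's* copy only
    return len(buffer)

if __name__ == "__main__":
    with mp.Pool(2) as pool:
        pool.map(add_event, range(4))
    print(len(buffer))  # 0 -- the parent's list was never touched
```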
|
<python><celery>
|
2023-06-20 05:17:24
| 0
| 4,530
|
Paul
|
76,511,542
| 3,845,626
|
Adding header button in Odoo 16
|
<p>I'm trying to add header button to a tree view in Odoo 16 I created for a custom module.</p>
<p>Here is the view :</p>
<pre><code> <record id="cc_order_tree_view" model="ir.ui.view">
<field name="name">cc.order.list</field>
<field name="model">cc.order</field>
<field name="arch" type="xml">
<tree>
<header>
<button name="action_mark_as_discarded" string="Discard" class="oe_highlight" type="object" data-hotkey="d"></button>
</header>
<field name="number" />
<field name="itemsTotal" />
<field name="total" />
<field name="odoo_status" />
</tree>
</field>
</record>
</code></pre>
<p>However, the button never shows. I tried to check how it's done in other views, but it seems there are different ways to do so. Also, the documentation is quite vague about adding new elements. Any idea what the correct way to add this button is?</p>
|
<python><odoo><odoo-16>
|
2023-06-20 05:05:41
| 2
| 569
|
Biologeek
|
76,511,479
| 19,157,137
|
Unable to delete Docker images associated with stopped containers using Python script
|
<p>I'm working on a Python script to manage Docker containers using Docker Compose. The script starts and stops containers based on a given Docker Compose file. However, I'm having trouble deleting the Docker images associated with the stopped containers. All the files are in the same directory. Here are the details of my setup:</p>
<p><code>Dockerfile</code>:
Here's the content of my Dockerfile:</p>
<pre><code>FROM python:3.9-slim-buster
WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY . .
EXPOSE 5000
CMD ["python", "app.py"]
</code></pre>
<p><code>docker-compose.yaml</code>:
Here's the content of my docker-compose.yaml:</p>
<pre><code>version: '3'
services:
web:
build:
context: .
ports:
- 5000:5000
volumes:
- .:/app
</code></pre>
<p><code>app.py</code>:
Here's the content of my app.py Flask application:</p>
<pre><code>from flask import Flask

app = Flask(__name__)

@app.route('/')
def hello():
    return 'Hello, world!'

if __name__ == '__main__':
    app.run(host='0.0.0.0')
</code></pre>
<p><code>docker_operations.py</code>:
I have a Python script docker_operations.py that I'm using to start and stop the Docker containers. Here's the content of that file:</p>
<pre><code>import subprocess
from pathlib import Path

def stop_and_remove_container_from_compose(compose_file):
    try:
        subprocess.run(["docker-compose", "-f", compose_file, "down"], check=True)
        print(f"Containers stopped and removed successfully using Docker Compose: {compose_file}")

        subprocess.run(["docker-compose", "-f", compose_file, "rm", "-fsv"], check=True)
        print(f"Docker Compose services removed successfully: {compose_file}")

        # Delete associated Docker images
        containers = subprocess.check_output(["docker", "ps", "-aq", "--filter", f"name={compose_file.stem}", "--format", "{{.Image}}"]).decode().splitlines()
        for container in containers:
            image_id = subprocess.check_output(["docker", "inspect", "-f", "{{.Image}}", container]).decode().strip()
            subprocess.run(["docker", "image", "rm", image_id], check=True)
        print("Docker images deleted successfully")
    except subprocess.CalledProcessError as e:
        print(f"An error occurred while stopping or removing the containers using Docker Compose: {e}")

# Example usage
compose_file = Path("docker-compose.yaml")  # Replace with the path to your Docker Compose file
stop_and_remove_container_from_compose(compose_file)
</code></pre>
<p>The issue I'm facing is that when I execute the stop_and_remove_container_from_compose function in docker_operations.py, the containers are stopped and removed successfully using Docker Compose. However, the associated Docker images are not being deleted. I have tried using the docker image rm command with the container IDs, but it doesn't seem to work.</p>
<p>Is there something wrong with the way I'm retrieving and deleting the Docker images? How can I ensure that the images associated with the stopped containers are properly deleted?</p>
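<p>For reference, the fallback I'm aware of is Compose's own image-cleanup flag on <code>down</code> (a stock flag, not something from my script), though I'd still like to understand why my manual lookup finds nothing:</p>

```shell
# docker-compose can remove the service images itself as part of "down":
# "--rmi local" removes only images without a custom tag (e.g. locally built),
# "--rmi all" removes every image the services used.
docker-compose -f docker-compose.yaml down --rmi local
```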
|
<python><docker><image><docker-compose><containers>
|
2023-06-20 04:43:57
| 1
| 363
|
Bosser445
|
76,511,083
| 4,525,932
|
opencv's findCirclesdGrid behaves differently in C++ than in python?
|
<p>I am porting some Python code that uses OpenCV to C++. I am having trouble with the camera calibration stage. The Python code is able to find circles on my grid, but when I make (seemingly) identical calls from C++, the call to <code>findCirclesGrid</code> seems to hang. I confirmed that my input images after some prepping (median blur and greyscale) are identical (printed them out and compared with Beyond Compare). I also checked that all the fields in the <code>params</code> struct are identical. Furthermore, I used the blob detector to run <code>detect</code> on the input images and the results from both languages are identical there as well. Both my Python environment and my C++ project load OpenCV 4.6.0.</p>
<p>Just as a sanity check, I also ran the python code on the exported 'grey' image rendered by opencv in C++---probably not very meaningful since I verified that these images are identical between the 2 languages. However, as expected, the python code returned a nice healthy grid of 25x25 circles on that image.</p>
<p>Here is the code (image loading excluded).</p>
<p>Python:</p>
<pre><code>import cv2 as cv

params = cv.SimpleBlobDetector_Params()
params.filterByCircularity = True
params.minCircularity = 0.1
detector = cv.SimpleBlobDetector_create(params)

blur = cv.medianBlur(img, 3)
grey = cv.cvtColor(blur, cv.COLOR_BGR2GRAY)

grid_size = [25, 25]
retval, centers = cv.findCirclesGrid(grey, grid_size, cv.CALIB_CB_SYMMETRIC_GRID, blobDetector=detector)
</code></pre>
<p>C++</p>
<pre><code>cv::SimpleBlobDetector::Params params;
params.filterByCircularity = true;
params.minCircularity = .1;
bool ret = false;
auto blobDetector = cv::SimpleBlobDetector::create(params);
cv::Mat blur;
cv::Mat grey;
cv::medianBlur(image, blur, 3);
cv::cvtColor(blur, grey, cv::COLOR_BGR2GRAY);
cv::Size gridSize(25, 25);
std::vector<cv::Point2f> imagePoints;
ret = cv::findCirclesGrid(grey, gridSize, cv::Mat(imagePoints), cv::CALIB_CB_SYMMETRIC_GRID, blobDetector);
</code></pre>
<p>I saw some activity on the opencv repo having to do with <code>findCirclesGrid</code> hanging (<a href="https://github.com/opencv/opencv/pull/19079/commits/175cd03ff2237eca57089f14f866fe17186130b4" rel="nofollow noreferrer">https://github.com/opencv/opencv/pull/19079/commits/175cd03ff2237eca57089f14f866fe17186130b4</a>), but that was a few versions ago, and besides, the underlying libraries should be identical in both languages. I am also confused as to why the <code>detect</code> code can find all the visible blobs, yet the C++ <code>findCirclesGrid</code> never returns.</p>
<p>Here is the input image (reduced by 50%):
<a href="https://i.sstatic.net/VDqNu.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/VDqNu.png" alt="enter image description here" /></a></p>
|
<python><c++><opencv>
|
2023-06-20 02:34:55
| 1
| 1,571
|
dmedine
|
76,511,081
| 2,769,240
|
How to highlight a blob of text using PyMupdf
|
<p>So, I have a PDF file that I am reading via the <code>PyMuPDF</code> package.</p>
<p>I read the text and break the text into chunks. So for the below text screenshot in one of the pages of the original pdf, I get the text read as below:</p>
<p><a href="https://i.sstatic.net/REcJJ.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/REcJJ.png" alt="enter image description here" /></a></p>
<p>The text I have in Python:</p>
<pre><code>text_variable = "cancer. Your team should include the following \nboard-certified experts:\n \n� A pulmonologist is a doctor who’s an \nexpert of lung diseases.\n \n� A thoracic radiologist is a doctor who’s \nan expert of imaging of the chest"
</code></pre>
<p>As you can see, it is having issues reading Unicode characters.</p>
<p>Now I need to find the above text on the PDF page and then highlight those lines using an annotation in PyMuPDF. I tried the following:</p>
<pre><code>import fitz  # PyMuPDF

doc = fitz.open("/Users/abc.pdf")  # open a document
page = doc.load_page(13)
#print(page.get_text())

text_variable = "cancer. Your team should include the following \nboard-certified experts:\n \n� A pulmonologist is a doctor who’s an \nexpert of lung diseases.\n \n� A thoracic radiologist is a doctor who’s \nan expert of imaging of the chest"

quads = page.search_for(text_variable, quads=True)

# Add a highlight annotation for each rectangle
page.add_highlight_annot(quads)
</code></pre>
<p>As you would expect, it won't be able to find the corresponding text on the pdf page as it's not exactly the same due to Unicode and escape sequence issues.</p>
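<p>One thing I've experimented with (a sketch; the cleanup rules are guesses for this particular PDF) is searching line by line after stripping the bad characters, since <code>search_for</code> copes much better with short spans:</p>

```python
import re

# Stand-in for the extracted text, including the U+FFFD replacement characters.
text_variable = ("cancer. Your team should include the following \n"
                 "board-certified experts:\n \n"
                 "\ufffd A pulmonologist is a doctor who\u2019s an \n"
                 "expert of lung diseases.")

# Drop the replacement characters and empty fragments; each remaining piece
# can then be passed to page.search_for(piece, quads=True) separately.
pieces = [re.sub("\ufffd", "", ln).strip() for ln in text_variable.split("\n")]
pieces = [p for p in pieces if p]
print(pieces[0])  # cancer. Your team should include the following
```

Each surviving piece would then go through <code>page.search_for(...)</code> and <code>page.add_highlight_annot(...)</code> in a loop.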
<p>Does anyone know how to make it work?</p>
<p>Thanks</p>
|
<python><pymupdf>
|
2023-06-20 02:34:48
| 0
| 7,580
|
Baktaawar
|
76,510,894
| 18,769,241
|
How to sum columns while performing an inner merge?
|
<p>I have two dataframes that I want to inner join while summing over two of their columns (COUNT1 and COUNT2).
For now I do it like so:</p>
<pre><code>df_res = df1.merge(df2,on=['G-S'], how='inner')
print(df_res)
df_res['COUNT1']=df_res['COUNT1_x']+df_res['COUNT1_y']
df_res['COUNT2']=df_res['COUNT2_x']+df_res['COUNT2_y']
df_res=df_res.drop(columns=['COUNT1_x', 'COUNT1_y','COUNT2_x','COUNT2_y'])
</code></pre>
<p>But I want a one-liner when merging the two dataframes.
Examples of the input and output are below:</p>
<p>df1:</p>
<pre><code>SOMEATT,COUNT1,COUNT2,G-S
B,53,31,SOMEVAL1
E,81,19,SOMEVAL1
L,90,20,SOMEVAL3
S,30,17,SOMEVAL2
</code></pre>
<p>df2:</p>
<pre><code>SOMENUMBER,COUNT1,COUNT2,G-S
1,50,19,SOMEVAL1
7,32,16,SOMEVAL2
</code></pre>
<p>output:</p>
<pre><code>SOMENUMBER,SOMEATT,COUNT1,COUNT2,G-S
1,B,103,50,SOMEVAL1
1,E,131,38,SOMEVAL1
7,S,62,33,SOMEVAL2
</code></pre>
<p>PS: I don't want a solution with concat but ONLY with pd.merge()</p>
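<p>The closest I've gotten myself (it relies on <code>pop</code> inside <code>assign</code> consuming the suffixed columns from the merged copy, which may or may not be considered idiomatic):</p>

```python
import io
import pandas as pd

# The sample frames from above, loaded from the CSV text.
df1 = pd.read_csv(io.StringIO("""SOMEATT,COUNT1,COUNT2,G-S
B,53,31,SOMEVAL1
E,81,19,SOMEVAL1
L,90,20,SOMEVAL3
S,30,17,SOMEVAL2"""))
df2 = pd.read_csv(io.StringIO("""SOMENUMBER,COUNT1,COUNT2,G-S
1,50,19,SOMEVAL1
7,32,16,SOMEVAL2"""))

# pop() removes COUNT1_x/_y from the merged copy while producing the sum,
# so no separate drop step is needed.
df_res = df1.merge(df2, on=['G-S'], how='inner').assign(
    COUNT1=lambda d: d.pop('COUNT1_x') + d.pop('COUNT1_y'),
    COUNT2=lambda d: d.pop('COUNT2_x') + d.pop('COUNT2_y'),
)
print(df_res[['SOMENUMBER', 'SOMEATT', 'COUNT1', 'COUNT2', 'G-S']])
```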
|
<python><pandas>
|
2023-06-20 01:35:18
| 1
| 571
|
Sam
|
76,510,863
| 2,769,240
|
How to format a string with escape sequences implemented? Not same as earlier Q
|
<p>I have an output coming out of a function. The output string has escape sequence (mainly \n).</p>
<p>I need to format this string so that escape sequence are implemented before I pass the formatted string to another function. The second function needs it in formatted way for it to do a text search with similar text.</p>
<p>Here's what I mean:</p>
<pre><code>text = r'Your team should include the following \nboard-certified experts:\n \nA pulmonologist is a doctor who's an \nexpert of lung diseases.\n'
formatted_text = """Your team should include the following
board-certified experts:
A pulmonologist is a doctor who's an
expert of lung diseases."""
</code></pre>
<p>So as u see, if I do print(text), i will get the second string as print will implement the escape sequences. But I don't want to print. I want to format as second and store it another variable.</p>
<p>EDIT:</p>
<p>Pls run it in a notebook cell and DO NOT print(formatted_text). Run it just as a variable. If it doesn't remove escape sequence, it's not what I want.</p>
<p><a href="https://i.sstatic.net/0UqJc.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/0UqJc.png" alt="enter image description here" /></a></p>
<p>Edit 2:</p>
<p>What I am looking for:</p>
<p><a href="https://i.sstatic.net/DHRdc.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/DHRdc.png" alt="enter image description here" /></a></p>
|
<python>
|
2023-06-20 01:26:23
| 2
| 7,580
|
Baktaawar
|
76,510,723
| 5,355,024
|
Is there a way in Python to extract a single value from a pandas dataframe as a primitive value?
|
<p>I have a data file with thousands of bills. When I try to extract a single cell value I notice the type is <code><class 'numpy.ndarray'></code>. Is there a way to extract an element as a primitive value like <code>float</code> or <code>int</code> directly from the dataframe or series?</p>
<pre><code> import pandasql as ps
import pandas as pd
df = pd.read_csv(strFPath, sep='|', encoding='windows-1252')
strQry = "SELECT SUM(BILLHOURS) AS 'BILLHOURS' FROM df"
dfQry = ps.sqldf(strQry, globals())
intHoursAll = float(dfQry.values[0].round(2))
</code></pre>
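<p>To clarify what I mean by a primitive value, here's a minimal stand-in for <code>dfQry</code> and the kind of access I'm hoping exists (<code>iat</code> and <code>item</code> are real pandas/NumPy accessors, but the frame itself is made up):</p>

```python
import pandas as pd

dfQry = pd.DataFrame({'BILLHOURS': [123.456]})  # stand-in for the sqldf result

v = dfQry['BILLHOURS'].iat[0]              # scalar access, but a numpy.float64
native = dfQry['BILLHOURS'].iat[0].item()  # .item() converts to a plain float
print(type(v).__name__, type(native).__name__)  # float64 float
```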
|
<python><pandas><dataframe><sqldf>
|
2023-06-20 00:24:34
| 2
| 368
|
Jorge
|
76,510,719
| 8,923,742
|
adding a vertical line to plot using seaborn object interface
|
<p>I am trying to use the seaborn objects interface. It looks intuitive, like ggplot2 in R. However, since it is still in the development stage, the underlying documentation is still WIP.</p>
<p>For example, I am trying to add a vertical line at, let's say, x=5. How can I do this using the objects interface?</p>
<pre><code> import seaborn.objects as so
r=fmri[(fmri['event']=='stim') ].reset_index()
(
so.Plot(r, x="timepoint", y="signal",color='subject')
.facet(row="region", wrap=2)
.add(so.Line())
)
</code></pre>
|
<python><seaborn><seaborn-objects>
|
2023-06-20 00:23:03
| 1
| 1,396
|
itthrill
|
76,510,709
| 2,542,856
|
PPO model learns well then predicts only negative actions
|
<p>I'm using the gymnasium Python package to create a PPO model that plays a simple grid-based game, similar to Gym's GridWorld example. Most actions result in a positive reward. Usually there is only one action that results in a negative reward.</p>
<p>During the learning phase, I can see by printing from the environment's <code>step()</code> function that the model is doing pretty well. It rarely chooses the actions that would have negative rewards.</p>
<p>When I test the trained model on a new game, it chooses a few good actions and then only the single action that gives a negative reward. Once it finds the bad action, it sticks with it until the end.</p>
<p>Is there a bug in the code for testing/using the model to predict?</p>
<pre><code>from stable_baselines3 import PPO  # PPO here comes from stable-baselines3

env = GameEnv()
obs = env.reset()

model = PPO("MultiInputPolicy", env, verbose=1)
model.learn(total_timesteps=10_000)

obs = env.reset()
for i in range(50):
    action, _states = model.predict(obs, deterministic=True)
    obs, reward, done, info = env.step(int(action))
    env.render()
    if done:
        obs = env.reset()
</code></pre>
<p>Sample output from learning:</p>
<pre><code>action, reward = 2, 1
action, reward = 3, 1
action, reward = 2, 5
action, reward = 0, 1
action, reward = 0, 9
action, reward = 1, 1
action, reward = 3, 1
action, reward = 3, -5
action, reward = 2, 1
</code></pre>
<p>Sample output from testing:</p>
<pre><code>action, reward = 0, 1
action, reward = 1, 5
action, reward = 2, 1
action, reward = 0, 1
action, reward = 0, -5
action, reward = 0, -5
action, reward = 0, -5
action, reward = 0, -5
action, reward = 0, -5
action, reward = 0, -5
action, reward = 0, -5
action, reward = 0, -5
...
</code></pre>
|
<python><machine-learning><reinforcement-learning><openai-gym>
|
2023-06-20 00:19:56
| 1
| 4,565
|
Matt C
|
76,510,708
| 13,738,079
|
Simple GAN RuntimeError: Given groups=1, weight of size [512, 3000, 5], expected input[1, 60, 3000] to have 3000 channels, but got 60 channels instead
|
<p>I'm new to GANs and I'm having a hard time matching the GAN architecture to my training data dimensions. The dimensions of my training data are <code>(60x3000)</code>. My goal is to artificially generate a sample of size <code>1x3000</code>. So I have 60 training samples, since my training data is <code>60x3000</code>. The architecture of my GAN is:</p>
<pre><code>Generator(
  (map1): Conv1d(100, 512, kernel_size=(5,), stride=(1,))
  (map2): Conv1d(512, 256, kernel_size=(5,), stride=(1,))
  (map3): Conv1d(256, 3000, kernel_size=(5,), stride=(1,))
  (leakyRelu): LeakyReLU(negative_slope=0.1)
)
Discriminator(
  (map1): Conv1d(3000, 512, kernel_size=(5,), stride=(1,))
  (map2): Conv1d(512, 256, kernel_size=(5,), stride=(1,))
  (map3): Conv1d(256, 1, kernel_size=(5,), stride=(1,))
  (leakyRelu): LeakyReLU(negative_slope=0.1)
)
</code></pre>
<p>If I print my training data it looks like this:</p>
<pre><code>array([[2.14236454, 2.10500993, 2.06635705, ..., 7.57922477, 7.56801547,
        7.55263677],
       ...,
       [1.07467659, 1.07582106, 1.07628207, ..., 1.49663065, 1.43491185,
        1.37456978]])
</code></pre>
<p>When I run my GAN code, I get this error, which is very confusing because my input data is <code>[60x3000]</code> not <code>[1, 60, 3000]</code>. Could you please guide me on how to resolve this error? I would love to get a deep theoretical understanding of why this error is arising and how to fix it. Thank you very much.</p>
<p><code>RuntimeError: Given groups=1, weight of size [512, 3000, 5], expected input[1, 60, 3000] to have 3000 channels, but got 60 channels instead</code></p>
<pre><code>import torch
import torch.nn as nn
import torch.optim as optim
from torch.autograd import Variable


class Generator(nn.Module):
    def __init__(self, input_size, output_size):
        super(Generator, self).__init__()
        self.map1 = nn.Conv1d(input_size, 512, 5)
        self.map2 = nn.Conv1d(512, 256, 5)
        self.map3 = nn.Conv1d(256, output_size, 5)
        self.leakyRelu = nn.LeakyReLU(0.1)
        self.tanh = torch.tanh

    def forward(self, x):
        x = self.leakyRelu(self.map1(x))
        x = self.leakyRelu(self.map2(x))
        x = self.leakyRelu(self.map3(x))
        return self.tanh(x)


class Discriminator(nn.Module):
    def __init__(self, input_size, output_size):
        super(Discriminator, self).__init__()
        self.map1 = nn.Conv1d(input_size, 512, 5)
        self.map2 = nn.Conv1d(512, 256, 5)
        self.map3 = nn.Conv1d(256, output_size, 5)
        self.leakyRelu = nn.LeakyReLU(0.1)
        self.sigmoid = torch.sigmoid

    def forward(self, x):
        x = x.float()
        x = self.leakyRelu(self.map1(x))
        x = self.leakyRelu(self.map2(x))
        x = self.leakyRelu(self.map3(x))
        return self.sigmoid(x)


def train():
    # Model parameters
    g_input_size = 100    # Design decision (i.e. we can choose). Latent size. Should match random noise dimension coming into generator
    g_output_size = 3000  # Size of generated output vector (should match input/desired data size)
    d_input_size = 3000   # Minibatch size - cardinality of distributions (should match size of input)
    d_output_size = 1     # Always 1. Single dimension for 'real' vs. 'fake' classification

    d_sampler = get_distribution_sampler(0, 1)  # real data placeholder
    real_data = torch.tensor(interval_data)     # real data with dimensions (60 x 3000)
    gi_sampler = get_generator_input_sampler()  # random noise with dimensions (g_input_size, g_output_size) => should match generator input size (latent size)

    G = Generator(input_size=g_input_size, output_size=g_output_size)
    D = Discriminator(input_size=d_input_size, output_size=d_output_size)

    d_learning_rate = 1e-3
    g_learning_rate = 1e-3
    sgd_momentum = 0.9
    num_epochs = 500
    print_interval = 100
    d_steps = 20
    g_steps = 20

    dfe, dre, ge = [], [], []
    d_real_data, d_fake_data, g_fake_data = None, None, None

    criterion = nn.BCELoss()
    d_optimizer = optim.SGD(D.parameters(), lr=d_learning_rate, momentum=sgd_momentum)
    g_optimizer = optim.SGD(G.parameters(), lr=g_learning_rate, momentum=sgd_momentum)

    for epoch in range(num_epochs):
        ### train the Discriminator ###
        for d_index in range(d_steps):
            D.zero_grad()
            d_real_data = real_data  # size (60x3000)
            d_real_data.requires_grad = True
            d_real_decision = D(d_real_data)
            d_real_error = criterion(d_real_decision, Variable(torch.ones([1])))
            d_real_error.backward()

            d_noise = Variable(gi_sampler(g_input_size, g_output_size))
            d_fake_data = G(d_noise).detach()
            d_fake_decision = D(preprocess(d_fake_data.t()))
            d_fake_error = criterion(d_fake_decision, Variable(torch.zeros([1, 1])))
            d_optimizer.step()

            dre.append(extract(d_real_error)[0])
            dfe.append(extract(d_fake_error)[0])

        ### train the Generator ###
        for g_index in range(g_steps):
            G.zero_grad()
            noise = Variable(gi_sampler(g_input_size, g_output_size))
            g_fake_data = G(noise)
            dg_fake_decision = D(preprocess(g_fake_data.t()))
            g_error = criterion(dg_fake_decision, Variable(torch.ones([1, 1])))
            g_error.backward()
            g_optimizer.step()
            ge.append(extract(g_error)[0])

        if epoch % print_interval == 0:
            print("Epoch %s: D (%s real_err, %s fake_err) G (%s err); Real Dist (%s), Fake Dist (%s) " %
                  (epoch, dre, dfe, ge, stats(extract(d_real_data)), stats(extract(d_fake_data))))

    return dfe, dre, ge, d_real_data, d_fake_data, g_fake_data


disc_fake_error, disc_real_error, gen_error, disc_real_data, disc_fake_data, gen_fake_data = train()
</code></pre>
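<p>For what it's worth, the shape mismatch can be reasoned about without any training code. The error message itself shows that the (60, 3000) array was auto-expanded to (1, 60, 3000), i.e. one sample with 60 channels. A minimal sketch with numpy (assuming the data really is 60 single-channel samples of length 3000):</p>

```python
import numpy as np

# PyTorch's Conv1d expects input shaped (batch, channels, length).
# A (60, 3000) array of 60 single-channel samples is normally viewed as
# (60, 1, 3000), and the first Conv1d layer then takes in_channels=1,
# not in_channels=3000.
data = np.zeros((60, 3000))
batched = data[:, np.newaxis, :]  # add the channel axis
assert batched.shape == (60, 1, 3000)
```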
|
<python><machine-learning><deep-learning><pytorch><generative-adversarial-network>
|
2023-06-20 00:18:44
| 1
| 1,170
|
Jpark9061
|
76,510,452
| 10,853,071
|
Grouping data on a daily cumulative timeseries
|
<p>I have a big table with sales data. Every row contains several sales information, but for our discussion, it contains a datetime column and a sale value.</p>
<p>I've been trying to group the sales into a 5-min frequency time series using Grouper.</p>
<pre><code>import pandas as pd
import datetime

data = pd.DataFrame({
    'date': [datetime.datetime(2022, 11, 1, 0, 10, 0),
             datetime.datetime(2022, 11, 1, 0, 25, 0),
             datetime.datetime(2022, 11, 1, 0, 35, 0)],
    'gmv': [10, 20, 40]})

df = data.groupby([pd.Grouper(key='date', freq='5Min', origin='start_day',
                              convention='start', dropna=False, sort=True,
                              closed='left')]).aggregate({'gmv': 'sum'}).reset_index()
df["cum_sale"] = df.groupby([df['date'].dt.date])['gmv'].cumsum(axis=0)
</code></pre>
<p>But although I've requested a 5-min frequency, the first bin starts at the 10-minute mark. I know the first 5 minutes don't contain any transactions, but how can I "force" those bins to exist?</p>
<p><a href="https://i.sstatic.net/hObjX.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/hObjX.png" alt="enter image description here" /></a></p>
<p>Is there a better way to organize such data? After turning it into daily cumulative sales, I am plotting it, but the "10 min" offset above breaks my graph.</p>
<hr />
<p><strong>Update for futher questions</strong></p>
<p>This is my origin table value_counts()</p>
<p><a href="https://i.sstatic.net/Jve3r.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Jve3r.png" alt="enter image description here" /></a></p>
<p>I am trying to apply the suggested answer</p>
<pre><code>tdf = teste[['data', 'marca','gmv']]
tdf = tdf.astype({'marca':'str'}) #("marca" was a category type)
dti = pd.date_range(tdf['data'].min().normalize(), tdf['data'].max(), freq='5min', name='data')
df = tdf.set_index('data').reindex(dti, fill_value=0).reset_index()
df['cum_sale'] = df.resample('D', on='data')['gmv'].cumsum()
df
</code></pre>
<p>but now my table is almost empty.</p>
<p><a href="https://i.sstatic.net/572K1.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/572K1.png" alt="enter image description here" /></a></p>
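<p>For reference, a sketch of one way to force the leading empty bins to exist: resample to the 5-minute grid, then reindex onto a full grid anchored at midnight (the data and variable names below just mirror the small example above):</p>

```python
import datetime

import pandas as pd

data = pd.DataFrame({
    'date': [datetime.datetime(2022, 11, 1, 0, 10),
             datetime.datetime(2022, 11, 1, 0, 25),
             datetime.datetime(2022, 11, 1, 0, 35)],
    'gmv': [10, 20, 40]})

# Resample to 5-minute bins, then reindex onto a grid that starts at
# midnight so the leading empty bins exist with gmv == 0.
binned = data.set_index('date').resample('5min').sum()
grid = pd.date_range(binned.index.min().normalize(),
                     binned.index.max(), freq='5min', name='date')
binned = binned.reindex(grid, fill_value=0)
binned['cum_sale'] = binned.groupby(binned.index.date)['gmv'].cumsum()
print(binned)
```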
|
<python><pandas>
|
2023-06-19 22:45:11
| 1
| 457
|
FábioRB
|
76,510,438
| 17,850,568
|
How to get Video Spout Stream in Python
|
<p>I'm using a program (VTube Studio) that has a Spout2 option. Is it possible to get that stream in python?</p>
|
<python><stream>
|
2023-06-19 22:40:50
| 2
| 508
|
KyleRifqi
|
76,510,335
| 10,565,820
|
Calculate Python Enum each time when evaluated with datetime?
|
<p>Is there a way to create an <code>Enum</code> in Python that calculates the value each time the <code>Enum</code> attribute is accessed? Specifically, I'm trying to create an <code>Enum</code> class that stores values like <code>TODAY</code>, <code>TOMORROW</code>, etc. for a web app (in Django) that runs for many days, and I want the value recalculated each time it's accessed in case the day changes. So if, say, the app is launched on March 1 and checks <code>TODAY</code> on March 3, it should return March 3 and not March 1, which is what it would have been initialized to.</p>
<pre><code>from enum import Enum
import datetime

class MyEnum(Enum):
    TODAY = datetime.datetime.now()
    TOMORROW = datetime.datetime.now() + datetime.timedelta(days=1)
</code></pre>
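<p>For context, Enum member values are computed once, at class-definition time, so values like the above are frozen at import. A sketch of one workaround: store a day offset as the member value and recompute the datetime in a property on every access (the names <code>Day</code> and <code>date</code> are illustrative, not from the original code):</p>

```python
import datetime
from enum import Enum

class Day(Enum):
    TODAY = 0      # the member value is an offset in days, not a datetime
    TOMORROW = 1

    @property
    def date(self):
        # Recomputed on every attribute access, so it tracks the clock
        # even if the process runs for many days.
        return datetime.datetime.now() + datetime.timedelta(days=self.value)
```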
|
<python><python-3.x><datetime><enums>
|
2023-06-19 22:12:43
| 1
| 644
|
geckels1
|
76,510,267
| 1,554,752
|
Solving system of non-linear equations (products of latent variables)
|
<p>I'm attempting to solve a system of equations in python, where each outcome is the sum of a series of products between two latent variables:</p>
<p><img src="https://latex.codecogs.com/png.image?%5Cbg%7Bwhite%7Dy_%7Bit%7D&space;=&space;%5Csum_%7Bj%7D%5Cgamma_%7Bij%7D&space;%5Ctimes&space;%5Ceta_%7Bjt%7D" alt="Text" /></p>
<p>where <code>i</code> and <code>t</code> take on many more values (e.g., 30 each) than <code>j</code> does (between 2 and 5). So if I and T took on 30 values and J took on 4, then there'd be 900 outcomes and 240 unknowns. I'd ideally like to solve for the values of gamma and eta in a least-squares sense. I know there needs to be some normalization.</p>
<p>Is this a standard problem with a canned function, or do I need to use general minimization techniques?</p>
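<p>One observation: in matrix form the system is Y = &Gamma;H with &Gamma; of shape I&times;J and H of shape J&times;T, i.e. a rank-J factorization, so the least-squares solution is given by the truncated SVD (the Eckart-Young theorem), up to the usual rotation/scaling indeterminacy between the two factors. A sketch with numpy (dimensions are the illustrative ones from the question):</p>

```python
import numpy as np

rng = np.random.default_rng(0)
I, T, J = 30, 30, 4

# Synthetic data generated from the model y_it = sum_j gamma_ij * eta_jt.
gamma_true = rng.normal(size=(I, J))
eta_true = rng.normal(size=(J, T))
Y = gamma_true @ eta_true

# Truncated SVD gives the rank-J least-squares factorization of Y.
U, s, Vt = np.linalg.svd(Y, full_matrices=False)
gamma_hat = U[:, :J] * s[:J]   # I x J
eta_hat = Vt[:J, :]            # J x T

# The factors differ from the true ones by an invertible JxJ transform,
# but the reconstruction matches Y (exactly here, since rank(Y) == J).
assert np.allclose(gamma_hat @ eta_hat, Y)
```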
|
<python><scipy><nonlinear-optimization>
|
2023-06-19 21:55:19
| 1
| 644
|
user1554752
|
76,510,011
| 2,417,922
|
How to fix Python problem with embedded class definitions
|
<p>Working on LeetCode #1032, I'm having a problem that appears related to having embedded classes. Here is the code:</p>
<pre><code>from collections import deque

class StreamChecker:
    def __init__(self, words):
        self.RPT = StreamChecker.ReversePrefixTree( StreamChecker.ReversePrefixTreeNode )

    def query(self, letter):
        ...

    class ReversePrefixTreeNode:
        def __init__( self ):
            self.char_dict = {}  # Key is char from word, value is successor node
            self.words = []      # All words that complete at this node

        def successor( self, char ):
            return self.char_dict.get( char, None )

        def getWords( self ):
            return self.words

        def __str__( self ): f"[ RPTNode: char_dict = { self.char_dict }, words = { self.words } ]"

    class ReversePrefixTree:
        def __init__( self, root ):
            self.root = root

        def insert( self, word ):
            node = self.root
            for char in reversed( word ):
                if char in node.char_dict:
                    node = node.char_dict[ char ]
                else:
                    next_node = ReversePrefixTreeNode()
                    node.char_dict[ char ] = next_node
                    node = next_node
            node.words.append( word )

def tester():
    obj = StreamChecker( [ "here", "are", "some", "words" ] )
    print( "obj", obj, "obj.RPT", obj.RPT )
    print( "obj.RPT.root", obj.RPT.root.getWords() )  # <<<=========== Problem statement

tester()
</code></pre>
<p>I get the following error message:</p>
<pre><code>obj <__main__.StreamChecker object at 0x00000213C5FB1710> obj.RPT <__main__.StreamChecker.ReversePrefixTree object at 0x00000213C5FB3E50>
---------------------------------------------------------------------------
TypeError                                 Traceback (most recent call last)
Cell In[22], line 41
     38 print( "obj", obj, "obj.RPT", obj.RPT )
     39 print( "obj.RPT.root", obj.RPT.root.getWords() )
---> 41 tester()

Cell In[22], line 39, in tester()
     37 obj = StreamChecker( [ "here", "are", "some", "words" ] )
     38 print( "obj", obj, "obj.RPT", obj.RPT )
---> 39 print( "obj.RPT.root", obj.RPT.root.getWords() )

TypeError: StreamChecker.ReversePrefixTreeNode.getWords() missing 1 required positional argument: 'self'
</code></pre>
<p>I am flummoxed by the meaning of the last sentence. Aren't I giving a valid "object path" to get what I want?</p>
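<p>The error can be reproduced in isolation: passing or storing a class object itself (no parentheses) where an instance is expected leaves method calls unbound, so <code>self</code> is never supplied. A minimal illustration, deliberately unrelated to the classes above:</p>

```python
class Node:
    def get_words(self):
        return []

root = Node            # the class itself -- note: no parentheses
try:
    root.get_words()   # plain function accessed via the class: 'self' is missing
except TypeError as exc:
    message = str(exc)

assert "missing 1 required positional argument" in message

root = Node()          # an instance -- the call now binds 'self' automatically
assert root.get_words() == []
```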
|
<python><python-3.x><python-class>
|
2023-06-19 20:51:26
| 0
| 1,252
|
Mark Lavin
|
76,509,992
| 10,309,712
|
Add a column with the new value from a tuple value in another column
|
<p>I have this <code>df</code>:</p>
<pre class="lang-py prettyprint-override"><code>df = pd.DataFrame(
    {'loss': [0.044, 0.044, 0.038, 0.037, 0.036],
     'code': ["('ac',)", "('ac', 'be')", "('ab', 'ac', 'be')",
              "('ab', 'ac', 'be', 'fi')", "('ab', 'ac', 'be', 'de', 'fi')"]}
)
df
    loss                            code
0  0.044                         ('ac',)
1  0.044                    ('ac', 'be')
2  0.038              ('ab', 'ac', 'be')
3  0.037        ('ab', 'ac', 'be', 'fi')
4  0.036  ('ab', 'ac', 'be', 'de', 'fi')
</code></pre>
<p>Now I want to add a new column, <code>added-code</code>, containing the new value introduced in the <code>code</code> column.</p>
<p>Expected results:</p>
<pre class="lang-py prettyprint-override"><code>    loss                            code  added-code
0  0.044                         ('ac',)          ac
1  0.044                    ('ac', 'be')          be
2  0.038              ('ab', 'ac', 'be')          ab
3  0.037        ('ab', 'ac', 'be', 'fi')          fi
4  0.036  ('ab', 'ac', 'be', 'de', 'fi')          de
</code></pre>
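<p>For reference, a sketch of one approach: parse the stringified tuples with <code>ast.literal_eval</code>, then take the set difference with the previous row. This assumes each row adds exactly one new element, as in the sample:</p>

```python
import ast

import pandas as pd

df = pd.DataFrame(
    {'loss': [0.044, 0.044, 0.038, 0.037, 0.036],
     'code': ["('ac',)", "('ac', 'be')", "('ab', 'ac', 'be')",
              "('ab', 'ac', 'be', 'fi')", "('ab', 'ac', 'be', 'de', 'fi')"]}
)

# Parse the stringified tuples, then diff each row against the previous one.
sets = df['code'].apply(lambda s: set(ast.literal_eval(s)))
prev = [set()] + list(sets[:-1])
df['added-code'] = [(cur - p).pop() for cur, p in zip(sets, prev)]
print(df)
```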
|
<python><pandas><dataframe>
|
2023-06-19 20:49:08
| 1
| 4,093
|
arilwan
|
76,509,991
| 5,224,236
|
running python selenium script from docker
|
<p>Basically I'm using the <code>selenium/standalone-chrome</code> image to run my python selenium script.</p>
<p>But I am getting <code>No module named pip</code> and <code>ModuleNotFoundError: No module named 'selenium'</code> in <code>python3</code>.</p>
<p>How come selenium is not pre-installed? Do I need to install everything via a Dockerfile, or is there something I am missing?</p>
<p>I would also like to know the default entrypoint for this image (the one that launches the grid UI), which I need to override to run my script. Thanks</p>
|
<python><docker><selenium-webdriver>
|
2023-06-19 20:48:36
| 1
| 6,028
|
gaut
|
76,509,905
| 8,068,825
|
Pandas - Skip over rows that "apply" function throws error on
|
<p>So I have this simple code</p>
<pre><code>import ast
df = df["strings"].apply(ast.literal_eval)
</code></pre>
<p>This just converts a column of lists in string form back into actual lists. I get the error <code>ValueError: malformed node or string: 0</code>, and I'd like <code>apply</code> to skip the rows it fails on and return a DataFrame of only the rows it succeeded on. How can I do this?</p>
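<p>A sketch of the usual pattern: wrap the call in a try/except that returns a sentinel, then filter on it afterwards (the sample data here is made up for illustration):</p>

```python
import ast

import pandas as pd

def safe_literal_eval(s):
    # literal_eval raises ValueError/SyntaxError on malformed input;
    # return None instead so the bad rows can be filtered out afterwards.
    try:
        return ast.literal_eval(s)
    except (ValueError, SyntaxError):
        return None

df = pd.DataFrame({'strings': ["[1, 2]", "not a list", "[3]"]})
parsed = df['strings'].apply(safe_literal_eval)
good = parsed[parsed.notna()]  # only the rows that parsed successfully
```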
|
<python><pandas>
|
2023-06-19 20:33:18
| 2
| 733
|
Gooby
|
76,509,775
| 8,545,455
|
How to recursively read many include files with lark?
|
<p>For this config file format:</p>
<pre><code># comments here
# next an empty commment line
#
include "parseconf/dir_with_many_files"
Thing1
"objname" {
InnerThing "inner_thing_name"
{
IP_Address = "10.12.14.1" Hostname = "abc.fred.com"
}
}
Thing2 # comment
"objname" {
InnerThing "inner_thing_name" #a comment
{
IP_Address = "10.12.14.1"
Hostname = "abc.fred.com" # comments here
}
# comment
}
</code></pre>
<p>The <code>include</code> statement, if pointing to a directory, needs to read all .conf files in that directory.</p>
<p>I have the following <code>lark</code> syntax:</p>
<pre><code>start: (value|COMMENT)*
value: name (COMMENT)* string (COMMENT)* object
| assignment
| include
include: "include" string
object: "{" (COMMENT)* (value)* "}" (COMMENT)*
assignment: name "=" string (COMMENT)*
| object
name: /[A-Za-z_][A-Za-z_0-9]*/
COMMENT: "#" /[^\n]*/ _NEWLINE
_NEWLINE: "\n"
string: ESCAPED_STRING
%import common.ESCAPED_STRING
%import common.SIGNED_NUMBER
%import common.WS
%ignore WS
</code></pre>
<p>The tree is built with output including</p>
<pre><code> value
include
string "parseconf/dir_with_many_files"
</code></pre>
<p>Based on <a href="https://stackoverflow.com/questions/58783994/lark-parsing-implementing-import-file">this</a> and the comments below, I'm trying to handle the includes with a Transformer, like this:</p>
<pre><code>#!/usr/bin/env python3
from lark import Lark, Transformer
from pprint import pprint

class ExpandIncludes(Transformer):
    def include(self, item):
        path = item[0]
        filedir = strip_quotes(path.children[0].value)
        # how to return back a tree from parsed files?
        return item

def strip_quotes(s):
    if s.startswith('"') and s.endswith('"'):
        return s[1:-1]

def parse_file(conf_grammar, conf_file):
    # Create the parser with Lark, using the Earley algorithm
    conf_parser = Lark(conf_grammar, parser='earley')
    with open(conf_file, 'r') as f:
        test_conf = f.read()
    tree = conf_parser.parse(test_conf)
    ExpandIncludes().transform(tree)
    return tree

if __name__ == '__main__':
    with open('parseconf/try.lark') as f:
        conf_grammar = f.read()
    tree = parse_file(conf_grammar, 'parseconf/test.conf')
    print(tree.pretty())
</code></pre>
<p>This gives <code>filedir</code>, which I can read and parse. How do I return the parsed contents back into the tree?</p>
|
<python><parsing><lark>
|
2023-06-19 20:06:49
| 0
| 1,237
|
tuck1s
|
76,509,715
| 10,676,682
|
Pipe opencv frames into ffmpeg
|
<p>I am trying to pipe opencv frames into ffmpeg, but it does not work.</p>
<p>After the research, I found this answer (<a href="https://stackoverflow.com/a/62807083/10676682">https://stackoverflow.com/a/62807083/10676682</a>) to work the best for me, so I have the following:</p>
<pre><code>def start_streaming_process(rtmp_url, width, height, fps):
    # fmt: off
    cmd = ['ffmpeg',
           '-y',
           '-f', 'rawvideo',
           '-vcodec', 'rawvideo',
           '-pix_fmt', 'bgr24',
           '-s', "{}x{}".format(width, height),
           '-r', str(fps),
           '-i', '-',
           '-c:v', 'libx264',
           '-pix_fmt', 'yuv420p',
           '-preset', 'ultrafast',
           '-f', 'flv',
           '-flvflags', 'no_duration_filesize',
           rtmp_url]
    # fmt: on
    return subprocess.Popen(cmd, stdin=subprocess.PIPE)
</code></pre>
<pre><code>def main():
    width, height, fps = get_video_size(SOURCE_VIDEO_PATH)
    streaming_process = start_streaming_process(
        TARGET_VIDEO_PATH,
        width,
        height,
        fps,
    )

    model = load_yolo(WEIGHTS_PATH)
    frame_iterator = read_frames(video_source=SOURCE_VIDEO_PATH)
    processed_frames_iterator = process_frames(
        model, frame_iterator, ball_target_area=400
    )

    for processed_frame in processed_frames_iterator:
        streaming_process.communicate(processed_frame.tobytes())

    streaming_process.kill()
</code></pre>
<p><em><code>processed_frame</code> here is an annotated OpenCV frame.</em></p>
<p>However, after my first <code>streaming_process.communicate</code> call, the ffmpeg process exits with code 0 (as if everything were fine), but it is not: I cannot feed the rest of the frames into ffmpeg, because the process has already exited.</p>
<p>Here are the logs:</p>
<pre><code>Input #0, rawvideo, from 'fd:':
Duration: N/A, start: 0.000000, bitrate: 663552 kb/s
Stream #0:0: Video: rawvideo (BGR[24] / 0x18524742), bgr24, 1280x720, 663552 kb/s, 30 tbr, 30 tbn
Stream mapping:
Stream #0:0 -> #0:0 (rawvideo (native) -> h264 (libx264))
[libx264 @ 0x132e05570] using cpu capabilities: ARMv8 NEON
[libx264 @ 0x132e05570] profile High, level 3.1, 4:2:0, 8-bit
[libx264 @ 0x132e05570] 264 - core 164 r3095 baee400 - H.264/MPEG-4 AVC codec - Copyleft 2003-2022 - h
ttp://www.videolan.org/x264.html - options: cabac=1 ref=3 deblock=1:0:0 analyse=0x3:0x113 me=hex subme
=7 psy=1 psy_rd=1.00:0.00 mixed_ref=1 me_range=16 chroma_me=1 trellis=1 8x8dct=1 cqm=0 deadzone=21,11
fast_pskip=1 chroma_qp_offset=-2 threads=15 lookahead_threads=2 sliced_threads=0 nr=0 decimate=1 inter
laced=0 bluray_compat=0 constrained_intra=0 bframes=3 b_pyramid=2 b_adapt=1 b_bias=0 direct=1 weightb=
1 open_gop=0 weightp=2 keyint=250 keyint_min=25 scenecut=40 intra_refresh=0 rc_lookahead=40 rc=crf mbt
ree=1 crf=23.0 qcomp=0.60 qpmin=0 qpmax=69 qpstep=4 ip_ratio=1.40 aq=1:1.00
Output #0, flv, to 'rtmp://global-live.mux.com:5222/app/9428e064-e5d3-0bee-dc67-974ba53ce164':
Metadata:
encoder : Lavf60.3.100
Stream #0:0: Video: h264 ([7][0][0][0] / 0x0007), yuv420p(tv, progressive), 1280x720, q=2-31, 30 fps
, 1k tbn
Metadata:
encoder : Lavc60.3.100 libx264
Side data:
cpb: bitrate max/min/avg: 0/0/0 buffer size: 0 vbv_delay: N/A
frame= 1 fps=0.0 q=29.0 Lsize= 41kB time=00:00:00.00 bitrate=N/A speed= 0x eed=N/A
video:40kB audio:0kB subtitle:0kB other streams:0kB global headers:0kB muxing overhead: 0.678311%
[libx264 @ 0x132e05570] frame I:1 Avg QP:25.22 size: 40589
[libx264 @ 0x132e05570] mb I I16..4: 37.7% 33.4% 28.9%
[libx264 @ 0x132e05570] 8x8 transform intra:33.4%
[libx264 @ 0x132e05570] coded y,uvDC,uvAC intra: 51.1% 53.2% 14.4%
[libx264 @ 0x132e05570] i16 v,h,dc,p: 32% 38% 20% 10%
[libx264 @ 0x132e05570] i8 v,h,dc,ddl,ddr,vr,hd,vl,hu: 16% 36% 28% 3% 2% 2% 3% 3% 6%
[libx264 @ 0x132e05570] i4 v,h,dc,ddl,ddr,vr,hd,vl,hu: 18% 37% 17% 4% 4% 4% 5% 4% 7%
[libx264 @ 0x132e05570] i8c dc,h,v,p: 46% 37% 12% 4%
[libx264 @ 0x132e05570] kb/s:9741.36
</code></pre>
<p>That's all. Exit code 0.</p>
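<p>The one-shot behaviour can be demonstrated without ffmpeg: <code>Popen.communicate()</code> writes its input, closes stdin, and waits for the process to exit, so it can only be used once. Streaming many frames means writing to <code>proc.stdin</code> repeatedly and calling <code>communicate()</code> (or closing stdin) a single time at the end. A sketch using <code>cat</code> as a stand-in for ffmpeg:</p>

```python
import subprocess

# 'cat' stands in for ffmpeg here: it echoes whatever arrives on stdin.
proc = subprocess.Popen(['cat'], stdin=subprocess.PIPE,
                        stdout=subprocess.PIPE)

for frame in (b'frame1', b'frame2', b'frame3'):
    proc.stdin.write(frame)    # stream each frame; stdin stays open

out, _ = proc.communicate()    # flush, close stdin, wait -- once, at the end
assert out == b'frame1frame2frame3'
```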
|
<python><opencv><video><ffmpeg><rtmp>
|
2023-06-19 19:55:23
| 1
| 450
|
Dmytro Soltusyuk
|
76,509,707
| 18,150,609
|
Plotly Sankey Diagram: How to display the value for each links and node on the link/node without hover?
|
<p>In the Plotly Sankey diagram, you are able to see the 'value' of a link/node by hovering over it. I want the image to display the values without hovering though.</p>
<p>I've looked through the documentation and see virtually no way of doing this besides replacing the labels themselves with the desired values. That is not a good option, as nothing would then be labeled. Short of making dynamic labels that include both <em>name and value</em>, I'm not sure how to approach this.</p>
<p>Examples below...</p>
<p>Sample Sankey Diagram <a href="https://plotly.com/python/sankey-diagram/#more-complex-sankey-diagram-with-colored-links" rel="nofollow noreferrer">(source)</a>:</p>
<pre><code>import plotly.graph_objects as go

fig = go.Figure(data=[go.Sankey(
    node = dict(
        pad = 15,
        thickness = 20,
        line = dict(color = "black", width = 0.5),
        label = ["A1", "A2", "B1", "B2", "C1", "C2"],
        customdata = ["Long name A1", "Long name A2", "Long name B1", "Long name B2",
                      "Long name C1", "Long name C2"],
        hovertemplate = 'Node %{customdata} has total value %{value}<extra></extra>',
        color = "blue"
    ),
    link = dict(
        source = [0, 1, 0, 2, 3, 3],
        target = [2, 3, 3, 4, 4, 5],
        value = [8, 4, 2, 8, 4, 2],
        customdata = ["q", "r", "s", "t", "u", "v"],
        hovertemplate = 'Link from node %{source.customdata}<br />' +
                        'to node%{target.customdata}<br />has value %{value}' +
                        '<br />and data %{customdata}<extra></extra>',
    ))])
fig.update_layout(title_text="Basic Sankey Diagram", font_size=10)
fig.show()
</code></pre>
<p>Actual Output:
<a href="https://i.sstatic.net/CkfBA.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/CkfBA.png" alt="Actual Output" /></a></p>
<p>Desired Output:
<a href="https://i.sstatic.net/SfnmC.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/SfnmC.png" alt="Desired Output" /></a></p>
|
<python><plotly><visualization><sankey-diagram>
|
2023-06-19 19:54:18
| 1
| 364
|
MrChadMWood
|
76,509,575
| 5,611,471
|
Apache Spark ConnectionRefusedError: [WinError 10061] No connection could be made because the target machine actively refused it
|
<p>I am getting ConnectionRefusedError while running this code:</p>
<pre><code>spark = SparkSession.builder.getOrCreate()
</code></pre>
<p>I installed <code>Apache Spark 3.4.0</code>, <code>Java 20.0.1</code>, and used winutils.exe for hadoop 3.3.</p>
<p>In the C drive I created three folders for spark, hadoop, and java.
<br>The directories look like this:</p>
<pre class="lang-none prettyprint-override"><code>C:\spark\spark-3.4.0-bin-hadoop3
C:\hadoop\bin\winutils.exe
C:\java\jdk
</code></pre>
<p>I added to the environment variables like this:</p>
<pre class="lang-none prettyprint-override"><code>HADOOP_HOME = C:\hadoop
JAVA_HOME = C:\java\jdk
SPARK_HOME = C:\spark\spark-3.4.0-bin-hadoop3
</code></pre>
<p>Here's the screenshot.</p>
<p><a href="https://i.sstatic.net/yBBtW.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/yBBtW.png" alt="enter image description here" /></a></p>
<p><a href="https://i.sstatic.net/u78fh.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/u78fh.png" alt="enter image description here" /></a></p>
<p>I ran the following snippet.</p>
<pre><code>from pyspark.sql import SparkSession
spark = SparkSession.builder.getOrCreate()
# Load Dataset A and Dataset B as Spark DataFrames
dataset_A = spark.read.csv('A.csv', header=True, inferSchema=True)
dataset_B = spark.read.csv('B.csv', header=True, inferSchema=True)
merged_data = dataset_A.join(dataset_B, on='key', how='left')
</code></pre>
<p>Up to this point there is no problem. But when I run this:</p>
<pre><code>merged_data_pandas = merged_data.toPandas()
</code></pre>
<p>then it throws the connection error message.</p>
<p>Was I supposed to change any configuration files?</p>
|
<python><apache-spark><hadoop><pyspark><apache-spark-sql>
|
2023-06-19 19:29:59
| 1
| 529
|
007mrviper
|
76,509,485
| 11,644,523
|
Snowflake / Snowpark compilation error, column alias on join - invalid identifier
|
<p>I am working on a Snowpark worksheet, here is the sample code:</p>
<pre><code>source_df = session.table("source")
stg_df = session.table("other_source")

df = (source_df.join(stg_df, source_df.id == stg_df.id, "left")
      .select("stuff here")
     )
</code></pre>
<p>I can return and view <code>df</code> via <code>show()</code>, and <code>print(df.columns)</code> shows all the column names in uppercase.</p>
<p>The problem comes when I try to join <code>df</code> with <code>stg_df</code> as the next step:</p>
<pre><code>df2 = (stg_df.join(df, stg_df.id == df.id, "left")
       .select(stg_df.id.alias("id"))
       .filter("stuff here")
      )
</code></pre>
<p>It fails when creating <code>df2</code></p>
<p>Error in Query History:</p>
<pre><code>SELECT * FROM (( SELECT NULL :: BIGINT AS "l_h6oy_ID", NULL :: DOUBLE AS "l_h6oy_TOTAL_REVENUE", NULL :: TIMESTAMP AS "MODIFIED") AS SNOWPARK_LEFT LEFT OUTER JOIN ( SELECT NULL :: BIGINT AS "r_g5n4_ID", NULL :: DOUBLE AS "r_g5n4_TOTAL_REVENUE") AS SNOWPARK_RIGHT ON ("ID" = "r_g5n4_ID"))
# SQL compilation error: error line 1 at position 821 invalid identifier 'ID'
</code></pre>
<p>It seems the problem is with this part <code>stg_df.id==df.id</code>. If I rename either dataframe's ID column name then it works. But how come I do not face this issue in the first join?</p>
|
<python><pyspark><snowflake-cloud-data-platform>
|
2023-06-19 19:11:59
| 0
| 735
|
Dametime
|
76,509,389
| 9,542,989
|
Get Pages of Top Results from Search Using pymediawiki
|
<p>I am trying to use the <code>pymediawiki</code> Python library to extract data from the MediaWiki API.</p>
<p>What I want to do is get the top 10 hits for a particular search term and then get the relevant page for each of these hits.</p>
<p>My code looks like this so far,</p>
<pre><code>from mediawiki import MediaWiki

wiki = MediaWiki()

# Perform the search
search_results = wiki.search('washington', results=10)

# Retrieve the pages for the search results
pages = []
for result in search_results:
    page = wiki.page(result)
    pages.append(page)

# Print the titles of the retrieved pages
for page in pages:
    print(page.title)
</code></pre>
<p>However, with this approach, I often run into a <code>DisambiguationError</code>.</p>
<p>Given below is an example for the stack trace of the error for the search term given above,</p>
<pre><code>DisambiguationError:
"Washington" may refer to:
All pages with titles beginning with Washington
All pages with titles containing Washington
Boeing Washington
Booker T. Washington High School (disambiguation)
Cape Washington, Greenland
Catarman, Northern Samar
Central Washington Wildcats
Eastern Washington Eagles
Escalante, Negros Occidental
Fort Washington (disambiguation)
Fort Washington, Pennsylvania
George Washington
George Washington High School (disambiguation)
George Washington University
George Washington, Cuba
Harold Washington College
Lake Washington (disambiguation)
Lake Washington High School
Mahaica-Berbice
Mount Washington (disambiguation)
New Washington, Aklan
Port Washington (disambiguation)
SS Washington
SS Washington (1941)
San Jacinto, Masbate
Surigao City
USS Washington
University of Mary Washington
University of Washington
Washington & Jefferson College
Washington (footballer, born 1 April 1975)
Washington (footballer, born 10 April 1975)
Washington (footballer, born 1953)
Washington (footballer, born 1985)
Washington (footballer, born 1989)
Washington (footballer, born August 1978)
Washington (footballer, born May 1986)
Washington (footballer, born November 1978)
Washington (footballer, born November 1986)
Washington (musician)
Washington (name)
Washington (state)
Washington (steamboat 1851)
Washington (tree)
Washington Academy (disambiguation)
Washington Avenue (disambiguation)
Washington Boulevard (disambiguation)
Washington Bridge (disambiguation)
Washington Capitals
Washington College
Washington College (California)
Washington College Academy
Washington College of Law
Washington College, Connecticut
Washington Commanders
Washington County (disambiguation)
Washington County High School (disambiguation)
Washington Court House, Ohio
Washington Escarpment
Washington F.C.
Washington Female Seminary
Washington High School (disambiguation)
Washington Huskies
Washington International School
Washington International University
Washington Island (French Polynesia)
Washington Island (Kiribati)
Washington Island (disambiguation)
Washington Land
Washington Medical College
Washington Mystics
Washington Nationals
Washington Old Hall
Washington Park (disambiguation)
Washington School (disambiguation)
Washington Square (Philadelphia)
Washington Square (disambiguation)
Washington Square West, Philadelphia
Washington State Cougars
Washington Street (disambiguation)
Washington Township (disambiguation)
Washington University Bears
Washington University in St. Louis
Washington University of Barbados
Washington Valley (disambiguation)
Washington Wizards
Washington district (disambiguation)
Washington metropolitan area
Washington station (disambiguation)
Washington, Alabama
Washington, Arkansas
Washington, California
Washington, Connecticut
Washington, D.C.
Washington, Georgia
Washington, Illinois
Washington, Indiana
Washington, Iowa
Washington, Kansas
Washington, Kentucky
Washington, Louisiana
Washington, Maine
Washington, Massachusetts
Washington, Michigan
Washington, Mississippi
Washington, Missouri
Washington, Nebraska
Washington, New Hampshire
Washington, New Jersey
Washington, New York
Washington, North Carolina
Washington, Oklahoma
Washington, Ontario
Washington, Pennsylvania
Washington, Rhode Island
Washington, Tyne and Wear
Washington, Utah
Washington, Vermont
Washington, Virginia
Washington, West Sussex
Washington, West Virginia
Washington, Wisconsin (disambiguation)
Washington, Yolo County, California
Washington-on-the-Brazos, Texas
Washingtonian (disambiguation)
Western Washington Vikings
federal government of the United States
</code></pre>
<p>Is there a way around this in order for me to achieve what I need?</p>
<p>I am open to other approaches as well.</p>
|
<python><mediawiki-api><mediawiki-extensions>
|
2023-06-19 18:54:58
| 1
| 2,115
|
Minura Punchihewa
|
76,509,370
| 2,080,960
|
Using Python.NET in a C# Azure function
|
<p>We have developed all our functions using C# and would like to continue doing so. But we have a parsing library written in Python that we'd like to use. We've therefore looked at <code>Python.NET</code> (pythonnet) to execute the Python code within our C# process.</p>
<p>Everything works if I run it locally using a vanilla C# Console app. But when I move the same code to a Azure function project it fails (even locally).</p>
<pre class="lang-cs prettyprint-override"><code>Runtime.PythonDLL = @"python310.dll";
PythonEngine.Initialize();
// This line fails with "No module named 'wmbus2json'"
var parser = Py.Import(@"wmbus2json");
</code></pre>
<p>It seems the runtime cannot find the Python files, although they are in the <em>bin</em> directory.
We are running our Azure Function using <strong>.NET 6 Isolated</strong>.</p>
<p>Thanks for any help</p>
|
<python><c#><azure-functions><python.net>
|
2023-06-19 18:51:43
| 1
| 963
|
wmmhihaa
|