QuestionId (int64) | UserId (int64) | QuestionTitle (string) | QuestionBody (string) | Tags (string) | CreationDate (string) | AnswerCount (int64) | UserExpertiseLevel (int64) | UserDisplayName (string) |
|---|---|---|---|---|---|---|---|---|
78,289,545
| 6,197,439
|
PyQt5 SetWindowIcon from SVG from qrc resources rendered blurry/pixelized/wrong?
|
<p>Consider this example:</p>
<p>File <code>logo.svg</code></p>
<pre class="lang-svg prettyprint-override"><code><?xml version="1.0" encoding="UTF-8" standalone="no"?>
<svg
xml:space="preserve"
id="svg201"
x="0"
y="0"
style="enable-background:new 0 0 773.3 769"
version="1.1"
viewBox="0 0 773.3 769"
sodipodi:docname="logo.svg"
inkscape:version="1.2.1 (9c6d41e410, 2022-07-14)"
xmlns:inkscape="http://www.inkscape.org/namespaces/inkscape"
xmlns:sodipodi="http://sodipodi.sourceforge.net/DTD/sodipodi-0.dtd"
xmlns="http://www.w3.org/2000/svg"
xmlns:svg="http://www.w3.org/2000/svg"><defs
id="defs31" /><sodipodi:namedview
id="namedview29"
pagecolor="#ffffff"
bordercolor="#666666"
borderopacity="1.0"
inkscape:showpageshadow="2"
inkscape:pageopacity="0.0"
inkscape:pagecheckerboard="1"
inkscape:deskcolor="#d1d1d1"
showgrid="false"
inkscape:zoom="0.53105125"
inkscape:cx="409.56499"
inkscape:cy="440.63544"
inkscape:window-width="1280"
inkscape:window-height="657"
inkscape:window-x="-8"
inkscape:window-y="-8"
inkscape:window-maximized="1"
inkscape:current-layer="g574" />
<style
id="style2"
type="text/css">.st2,.st4 { stroke:#231f20;stroke-miterlimit:10 }
.st2 { fill:none;stroke-width:20 }
.st4 { stroke-width:10 }
.st4,.st66 { fill:#fff }
</style>
<g
id="g574"><g
id="targetgrp"><circle
id="XMLID_431_"
cx="386.7"
cy="382.3"
r="302.6"
class="st2"
style="stroke:#000000" /><path
id="XMLID_432_"
style="stroke:#000000;stroke-width:10;stroke-miterlimit:10;fill:none"
d="M 185.55313 382.3 L 39.6 382.3 " /><path
id="path431"
class="st4"
d="M 737.3 382.3 L 587.84755 382.29999 "
style="fill:none;stroke:#000000" /></g></g>
</svg>
</code></pre>
<p>It looks like this in Inkscape:</p>
<p><a href="https://i.sstatic.net/4zfND.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/4zfND.png" alt="enter image description here" /></a></p>
<p>Then I refer this <code>logo.svg</code> in file <code>resources.qrc</code>:</p>
<pre class="lang-none prettyprint-override"><code><!DOCTYPE RCC><RCC version="1.0">
<qresource>
<file alias="logo.svg">logo.svg</file>
</qresource>
</RCC>
</code></pre>
<p>... and I "compile" the <code>resources.qrc</code> into <code>qrc_resources.py</code> with:</p>
<pre class="lang-none prettyprint-override"><code>pyrcc5 -o qrc_resources.py resources.qrc
</code></pre>
<p>Then I have this as <code>test_pyqt5.py</code> file:</p>
<pre class="lang-python prettyprint-override"><code>#!/usr/bin/env python3
import sys, os

os.environ["QT_ENABLE_HIGHDPI_SCALING"] = "1"
os.environ["QT_AUTO_SCREEN_SCALE_FACTOR"] = "1"
os.environ["QT_SCALE_FACTOR"] = "1"

from PyQt5.QtWidgets import QApplication, QMenu
from PyQt5 import QtCore, QtWidgets, QtGui, uic
from PyQt5.QtCore import Qt

# remember `pyrcc5 -o qrc_resources.py resources.qrc`
import qrc_resources

class My_MainWindow(QtWidgets.QMainWindow):
    def __init__(self):
        super(My_MainWindow, self).__init__()
        self.initUI()

    def initUI(self):
        self.setGeometry(300, 300, 300, 300)
        self.setWindowTitle('Testing PyQt5')
        self.statusBar().showMessage('Ready')
        self.setWindowIcon(QtGui.QIcon(':logo.svg'))
        self.show()

def main(args):
    app = QApplication(sys.argv)
    app.setAttribute(Qt.AA_UseHighDpiPixmaps) # https://stackoverflow.com/q/58430006
    wind = My_MainWindow()
    sys.exit(app.exec_())

if __name__ == '__main__':
    main(sys.argv[1:])
</code></pre>
<p>When I run <code>python3 test_pyqt5.py</code>, I get this:</p>
<p><a href="https://i.sstatic.net/3LkQK.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/3LkQK.png" alt="qt5 GUI" /></a></p>
<p>Note that the application window icon is barely visible - seemingly just the left line is rendered; the same goes for the app icon in the Windows taskbar (tricky to see, as it's black, but visible if the taskbar element is selected):</p>
<p><a href="https://i.sstatic.net/2x64B.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/2x64B.png" alt="app icon toolbar" /></a></p>
<p>... and same with app icon in Alt-Tab switcher:</p>
<p><a href="https://i.sstatic.net/UIPet.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/UIPet.png" alt="alt-tab switcher" /></a></p>
<p>What's going on here, and how can I get the SVG icon for the application rendered fully?</p>
<p>Versions:</p>
<pre class="lang-none prettyprint-override"><code>$ uname -s
MINGW64_NT-10.0-19045
$ python3 --version
Python 3.11.8
$ pacman -Q mingw-w64-x86_64-python-pyqt5
mingw-w64-x86_64-python-pyqt5 5.15.10-2
</code></pre>
|
<python><svg><pyqt5><icons><qt5>
|
2024-04-07 22:50:10
| 0
| 5,938
|
sdbbs
|
78,289,299
| 5,121,282
|
Using variable from __init__.py in main does not work
|
<p>I want to use a dictionary created in the <code>__init__.py</code> file:</p>
<pre><code>import requests

request = requests.get('http://cloudconfig:8001/msprueba/desarrollo')
request.raise_for_status()
all_config = request.json()
property_sources = all_config['propertySources']

config = {}
for sources in property_sources:
    for key, value in sources['source'].items():
        if key not in config:
            config[key] = value
</code></pre>
<p>In my main.py I try to access config dictionary:</p>
<pre><code>import py_eureka_client.eureka_client as eureka_client
import uvicorn
from fastapi import FastAPI
from contextlib import asynccontextmanager
from company.controladores import controladorRouter
from company import config

print(config['eureka.client.serviceUrl.defaultZone'])

@asynccontextmanager
async def lifespan(app: FastAPI):
    await eureka_client.init_async(
        eureka_server="http://eureka-primary:8011/eureka/,http://eureka-secondary:8012/eureka/,http://eureka-tertiary:8013/eureka/",
        app_name="msprueba",
        instance_port=8000,
        instance_host="localhost"
    )
    yield
    await eureka_client.stop_async()

app = FastAPI(lifespan=lifespan)
app.include_router(controladorRouter)

if __name__ == "__main__":
    config = uvicorn.Config("company.main:app", host="localhost", port=0)
    server = uvicorn.Server(config)
    server.run()
</code></pre>
<p>When I run the code I get the error <code>'module' object is not subscriptable</code>. But if I create another variable and assign the <code>config</code> value to it, like this:</p>
<pre><code>import requests

request = requests.get('http://cloudconfig:8001/msprueba/desarrollo')
request.raise_for_status()
all_config = request.json()
property_sources = all_config['propertySources']

config = {}
for sources in property_sources:
    for key, value in sources['source'].items():
        if key not in config:
            config[key] = value

global_config = config
</code></pre>
<p>And in my main.py file I use the new variable:</p>
<pre><code>import py_eureka_client.eureka_client as eureka_client
import uvicorn
from fastapi import FastAPI
from contextlib import asynccontextmanager
from company.controladores import controladorRouter
from company import global_config

print(global_config['eureka.client.serviceUrl.defaultZone'])

@asynccontextmanager
async def lifespan(app: FastAPI):
    await eureka_client.init_async(
        eureka_server="http://eureka-primary:8011/eureka/,http://eureka-secondary:8012/eureka/,http://eureka-tertiary:8013/eureka/",
        app_name="msprueba",
        instance_port=8000,
        instance_host="localhost"
    )
    yield
    await eureka_client.stop_async()

app = FastAPI(lifespan=lifespan)
app.include_router(controladorRouter)

if __name__ == "__main__":
    config = uvicorn.Config("company.main:app", host="localhost", port=0)
    server = uvicorn.Server(config)
    server.run()
</code></pre>
<p>Everything works fine, but I don't understand why. Can someone explain this to me, please?</p>
<p>Note: <code>__init__.py</code> and main.py are in the same location</p>
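<p>A likely explanation (assuming the package also contains a module literally named <code>config.py</code>, imported somewhere, e.g. by <code>company.controladores</code>): when a submodule is imported, Python rebinds the package attribute of the same name to that module, shadowing the dictionary defined in <code>__init__.py</code>. Renaming the variable to <code>global_config</code> avoids the name collision, which would be why that version works. A self-contained sketch of the mechanism, with made-up package and file names:</p>

```python
# Minimal reproduction (package and file names are made up, not from the
# original project): a variable defined in __init__.py is shadowed once a
# submodule with the same name gets imported.
import os, sys, tempfile, importlib

root = tempfile.mkdtemp()
pkg = os.path.join(root, "shadow_demo_pkg")
os.makedirs(pkg)
with open(os.path.join(pkg, "__init__.py"), "w") as f:
    f.write("config = {'key': 'value'}\n")
with open(os.path.join(pkg, "config.py"), "w") as f:   # same name as the dict!
    f.write("# an unrelated submodule\n")

sys.path.insert(0, root)
import shadow_demo_pkg

type_before = type(shadow_demo_pkg.config).__name__    # the dict from __init__.py
# Any 'import shadow_demo_pkg.config' (direct or via another module) rebinds
# the package attribute to the submodule:
importlib.import_module("shadow_demo_pkg.config")
type_after = type(shadow_demo_pkg.config).__name__     # now the module
# shadow_demo_pkg.config['key'] would now raise:
# TypeError: 'module' object is not subscriptable
```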
|
<python>
|
2024-04-07 21:05:02
| 1
| 940
|
Alan Gaytan
|
78,289,229
| 984,621
|
Scrapy - one spider, two pipelines. How to decide which one will be used?
|
<p>I have a spider that processes data from a website. On this website, there are products and manufacturers. The structure of the spider looks like this:</p>
<pre><code>import scrapy
from sreality.itemsloaders import ProductLoader, ManufacturerLoader
from sreality.items import Product, Manufacturer

class ProductSpider(scrapy.Spider):
    ...
    custom_settings = {
        'ITEM_PIPELINES': {
            'proj.pipelines.ProductPipeline': 400,
            'proj.pipelines.ManufacturerPipeline': 300
        }
    }
    ...

    def parse(self, response):
        data_product = ProductLoader(item=Product(), selector=response)
        ...
        data_manufacturer = ManufacturerLoader(item=Manufacturer(), selector=response)
        ...
        yield data_product.load_item() # I want this to be processed by ProductPipeline
        yield data_manufacturer.load_item() # and this by ManufacturerPipeline
</code></pre>
<p><code>pipelines.py</code></p>
<pre><code>class ProductPipeline:
    def process_item(self, item, spider):
        ...

class ManufacturerPipeline:
    def process_item(self, item, spider):
        ...
</code></pre>
<p>If I run this code, then the data from <code>data_manufacturer</code> will be saved through the <code>ManufacturerPipeline</code>, but then it will throw an error in the <code>ProductPipeline</code> that some database attributes do not exist (which is understandable).</p>
<p>What I am trying to achieve is the following:</p>
<ol>
<li>spider runs -> collects data into <code>data_manufacturer</code> and <code>data_product</code>.</li>
<li><code>data_manufacturer</code> gets saved through <code>ManufacturerPipeline</code></li>
<li><code>data_product</code> gets saved through <code>ProductPipeline</code></li>
</ol>
<p>I don't want <code>data_manufacturer</code> to be sent through <code>ProductPipeline</code> and <code>data_product</code> through <code>ManufacturerPipeline</code>. Is there any way to achieve it?</p>
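<p>For context, the commonly used pattern for this is an <code>isinstance</code> check at the top of each <code>process_item</code>, returning items the pipeline doesn't handle untouched. The sketch below simulates that routing without Scrapy installed - plain classes stand in for the <code>Item</code> subclasses, and the nested loop stands in for Scrapy feeding every yielded item through every enabled pipeline in priority order:</p>

```python
# Plain classes stand in for the scrapy.Item subclasses from the question.
class Product: ...
class Manufacturer: ...

class ProductPipeline:
    def process_item(self, item, spider):
        if not isinstance(item, Product):
            return item                      # not ours: pass through untouched
        item.saved_by = "ProductPipeline"    # ... save product to the database ...
        return item

class ManufacturerPipeline:
    def process_item(self, item, spider):
        if not isinstance(item, Manufacturer):
            return item
        item.saved_by = "ManufacturerPipeline"
        return item

# Simulate Scrapy sending both items through both pipelines (300 before 400):
pipelines = [ManufacturerPipeline(), ProductPipeline()]
items = [Product(), Manufacturer()]
for it in items:
    for p in pipelines:
        it = p.process_item(it, spider=None)
```

<p>Each item still travels through both pipelines (Scrapy always does that), but only the matching one acts on it.</p>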
|
<python><scrapy><scrapy-pipeline>
|
2024-04-07 20:39:48
| 0
| 48,763
|
user984621
|
78,288,966
| 5,257,430
|
Create A stacked bar plot in facetgrid
|
<p>I would like to plot using seaborn and pandas. The dataframe is below.</p>
<pre><code>import pandas as pd

d = {'cutoff': ['>6']*8 + ['>7']*8 + ['>8']*8,
     'month': ['3m','3m','3m','3m','6m','6m','6m','6m'] * 3,
     'outcome': ['D','D','I','I']*6,
     'count': ['0','>0']*12,
     'proportion': [0.25, 0.75, 0.4, 0.6, 0.3, 0.7, 0.8, 0.2, 0.1, 0.9, 0, 1.0,
                    0.15, 0.85, 0.2, 0.8, 0.3, 0.7, 0.8, 0.2, 0.35, 0.65, 1.0, 0]}
tdf = pd.DataFrame(d)
</code></pre>
<p>I would like to show that, by using different cutoffs in different months, different outcomes have different proportions of counts. I would like to create a FacetGrid and draw a stacked bar plot in each facet. The plot should look like below:</p>
<pre><code> >6 >7 >8
| | |
| xx xx | x x x x |
| xx xx | I x x x |
| Ix xI | I I x x |
| II xI | I I x X |
| II II | I I I X |
|__II__________II___________ |____I___I___I___I_________ |____________________
DI DI 3m,D 3m,I 6m,D 6m,I
3m 6m
</code></pre>
<p>I use the below code.</p>
<pre><code>import matplotlib.pyplot as plt
import seaborn as sb

def draw_stacked_bar(*args, **kwargs):
    data = kwargs.pop('data')
    ax = plt.gca()
    pivot_df = data.pivot(index=['month','outcome'], columns='count', values='proportion')
    pivot_df.plot(kind='bar', stacked=True, ax=ax)
    plt.xticks(rotation=45)

g = sb.FacetGrid(tdf, col='cutoff', col_wrap=3, height=4, sharey=True)
g.map_dataframe(draw_stacked_bar)
</code></pre>
<p>But instead, I get an evenly distributed bar in the second grid (>7). Can anyone help me fix it?</p>
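<p>The pivot each facet needs can be checked in isolation with pandas alone, before any plotting is involved - rebuilding the example frame and slicing one cutoff:</p>

```python
import pandas as pd

# The example frame from the question, rebuilt so this snippet is self-contained.
d = {'cutoff': ['>6']*8 + ['>7']*8 + ['>8']*8,
     'month': ['3m','3m','3m','3m','6m','6m','6m','6m'] * 3,
     'outcome': ['D','D','I','I']*6,
     'count': ['0','>0']*12,
     'proportion': [0.25, 0.75, 0.4, 0.6, 0.3, 0.7, 0.8, 0.2, 0.1, 0.9, 0, 1.0,
                    0.15, 0.85, 0.2, 0.8, 0.3, 0.7, 0.8, 0.2, 0.35, 0.65, 1.0, 0]}
tdf = pd.DataFrame(d)

# One facet's worth of data, pivoted the way the stacked bars need it:
facet = tdf[tdf['cutoff'] == '>7']
pivot_df = facet.pivot(index=['month', 'outcome'], columns='count', values='proportion')
# Each (month, outcome) row now holds the '0' and '>0' shares, which is
# exactly what DataFrame.plot(kind='bar', stacked=True) stacks per bar.
```

<p>If this pivot looks right per facet but the drawn grid doesn't, the problem is in the mapping step rather than the data.</p>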
|
<python><pandas><seaborn>
|
2024-04-07 19:01:28
| 0
| 621
|
pill45
|
78,288,154
| 108,390
|
How to pipe expressions from a list of expressions in Python Polars
|
<p>I cannot get my head around this.
Say that I have a dictionary of regex patterns and replacement strings, and if none of these patterns match (i.e. the <code>when</code> condition returns False) I want to fall through to the next <code>when</code> statement.</p>
<p>So, instead of the following approach, which does not check for existing data in "new_data"...</p>
<pre><code>all_items = pl.DataFrame(
    {
        "data": ["Swedish fish", "English tea", "", "", ""],
        "ISO_codes": ["fin", "nor", "eng", "eng", "swe"],
    })

replacement_rules = {
    r"^Swe.*": "Svenska",
    r"^Eng.*": "English",
}

iso_tranlation = {
    "swe": "Svenska",
    "eng": "English",
    "nor": "Norsk",
    "fin": "Finska på finska",
}

for pattern, replacement in replacement_rules.items():
    all_items = (
        all_items.lazy()
        .with_columns(
            pl.when(pl.col("data").str.contains(pattern))
            .then(pl.lit(replacement))
            .alias("new_data")
        )
        .collect()
    )

all_items = (
    all_items.lazy()
    .with_columns(
        pl.when(pl.col("ISO_codes").str.len_chars() > 0)
        .then(
            pl.col("ISO_codes")
            .replace(iso_tranlation, default="Unknown ISO Code")
        )
        .alias("new_data")
    )
    .collect()
)
</code></pre>
<p>...I would like to do something along the lines of this:</p>
<pre><code>expressions = [
    pl.when(pl.col("data").str.contains(pattern))
    .then(pl.lit(replacement))
    for pattern, replacement in replacement_rules.items()]

all_items = (
    all_items.lazy()
    .with_columns(
        expressions.explode_and_pipe()
        .when(pl.col("ISO_codes").str.len_chars() > 0)
        .then(
            pl.col("ISO_codes")
            .replace(iso_tranlation, default="Unknown ISO Code")
        )
        .alias("new_data")
    )
    .collect()
)
</code></pre>
<p>Is there a way to achieve that <code>expressions.explode_and_pipe()</code>?</p>
<p>EDIT:
This is the resulting dataframe I'm after:</p>
<pre><code>shape: (5, 3)
┌──────────────┬───────────┬──────────┐
│ data         ┆ ISO_codes ┆ new_data │
│ ---          ┆ ---       ┆ ---      │
│ str          ┆ str       ┆ str      │
╞══════════════╪═══════════╪══════════╡
│ Swedish fish ┆ fin       ┆ Svenska  │
│ English tea  ┆ nor       ┆ English  │
│              ┆ eng       ┆ English  │
│              ┆ eng       ┆ English  │
│              ┆ swe       ┆ Svenska  │
└──────────────┴───────────┴──────────┘
</code></pre>
|
<python><python-polars>
|
2024-04-07 14:31:29
| 2
| 1,393
|
Fontanka16
|
78,287,997
| 2,986,153
|
plot density ridge plot with conditional fill color in python
|
<p>I am trying to produce density ridge plots in python where the fill is conditional on the x-values as has been done in this post: <a href="https://stackoverflow.com/questions/78200593/plot-continuous-geom-density-ridges-with-conditional-fill-color">plot continuous geom_density_ridges with conditional fill color</a></p>
<p><a href="https://i.sstatic.net/gUpk4.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/gUpk4.png" alt="enter image description here" /></a></p>
<p>My current Python code produces a separate density plot per fill color, rather than the single continuous density plot with alternating fill colors that I want.</p>
<pre><code>import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns

data = pd.DataFrame({
    'A': np.random.normal(0, 0.25, 4000),
    'B': np.random.normal(0.2, 0.25, 4000),
    'C': np.random.normal(0.4, 0.25, 4000)
})
data_melted = pd.melt(data, var_name='recipe', value_name='values')
data_melted.head()
</code></pre>
<p><a href="https://i.sstatic.net/p4NQV.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/p4NQV.png" alt="enter image description here" /></a></p>
<pre><code># Create a ridge plot for each recipe
for recipe in data_melted['recipe'].unique():
    # Filter data for the current recipe
    recipe_data = data_melted[data_melted['recipe']==recipe]

    # Create a density plot with filled areas for values > 0 and <= 0
    sns.kdeplot(x=recipe_data['values'], hue=recipe_data['values']>0,
                fill=True, palette=['red', 'blue'], alpha=0.5, linewidth=0.5)

    # Set title and labels
    plt.title(f"Ridge Plot for Recipe {recipe}")
    plt.xlabel('Values')
    plt.ylabel('Density')

    # Display plot
    plt.show()
</code></pre>
<p><a href="https://i.sstatic.net/gqf2P.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/gqf2P.png" alt="enter image description here" /></a></p>
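<p>The underlying issue seems to be that <code>hue=recipe_data['values']&gt;0</code> fits two separate KDEs, one per group, instead of one curve with two fill colors. The idea in the linked post is to evaluate a single density and split only the <em>fill</em> by an x-mask. A numpy-only sketch of that idea (the matplotlib fill calls are indicated as comments, since only the splitting logic matters here):</p>

```python
import numpy as np

rng = np.random.default_rng(0)
values = rng.normal(0.2, 0.25, 4000)    # one "recipe", as in the question

# One density curve over a shared grid (a plain Gaussian KDE by hand):
grid = np.linspace(values.min() - 0.5, values.max() + 0.5, 400)
bandwidth = values.std() * len(values) ** (-1 / 5)      # roughly Scott's rule
density = np.exp(-0.5 * ((grid[None, :] - values[:, None]) / bandwidth) ** 2).mean(axis=0)
density /= bandwidth * np.sqrt(2 * np.pi)

# Split the FILL of the same curve by x-position instead of fitting two KDEs:
neg = grid <= 0
# ax.fill_between(grid[neg], density[neg], color='red', alpha=0.5)
# ax.fill_between(grid[~neg], density[~neg], color='blue', alpha=0.5)
```

<p>In seaborn terms this means drawing the KDE line once (or computing it as above) and coloring the regions with two <code>fill_between</code> calls, rather than passing a boolean <code>hue</code>.</p>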
|
<python><seaborn>
|
2024-04-07 13:38:40
| 1
| 3,836
|
Joe
|
78,287,949
| 1,234,173
|
Video file too Small while trying to upload a reel on facebook API
|
<p>I am trying to upload a reel (MP4, 16 seconds, and 9 MB) using the Facebook Graph API, but I get the following error:</p>
<p>Failed to upload reel: {'error': {'message': 'The video file you tried to upload is too small. Please try again with a larger file.', 'type': 'OAuthException', 'code': 382, 'error_subcode': 1363022, 'is_transient': False, 'error_user_title': 'Video Too Small', 'error_user_msg': 'The video you tried to upload is too small. The minimum size for a video is 1 KB. Please try again with a larger file.', 'fbtrace_id': 'A0WVVSGWnXEfffdwrjFUENsJVC'}}</p>
<p>The code:</p>
<pre><code>import requests
from datetime import datetime
import subprocess
import os

def human_readable_to_timestamp(date_string):
    try:
        # Convert the human-readable date string to a datetime object
        dt_object = datetime.strptime(date_string, '%Y-%m-%d %H:%M:%S')
        # Convert the datetime object to UNIX timestamp
        timestamp = int(dt_object.timestamp())
        return timestamp
    except ValueError:
        print("Invalid date format. Please provide the date in the format: YYYY-MM-DD HH:MM:SS")
        return None

def extract_frame(video_path, time, output_path):
    # Use ffmpeg to extract a frame from the video at the specified time
    cmd = [
        'ffmpeg', '-y',
        '-i', video_path,
        '-ss', str(time),
        '-vframes', '1',
        '-q:v', '2', # Set quality (1-31, 1 being highest)
        output_path
    ]
    subprocess.run(cmd, check=True)

def upload_reel_with_thumbnail(video_path, thumbnail_path, access_token, file_size, target_file_size, scheduled_publish_time=None):
    # Endpoint for uploading videos
    url = 'https://graph.facebook.com/v15.0/me/videos'

    if file_size < target_file_size:
        # Pad the file with zeros to reach the target file size
        with open(video_path, 'ab') as video_file:
            video_file.write(b'\0' * (target_file_size - file_size))
        file_size = target_file_size # Update the file size after padding
    print(file_size)

    # Video data
    video_data = {
        'access_token': access_token,
        #'file_url': video_path,
        'description': 'Check out this amazing reel!',
        'file_size': file_size,
        'published': 'false'
        # Additional parameters can be added here as needed
    }

    # If scheduling time is provided, add it to the request
    if scheduled_publish_time:
        video_data['scheduled_publish_time'] = human_readable_to_timestamp(scheduled_publish_time)

    try:
        # Sending POST request to upload the video
        files = {'file': video_path}
        response = requests.post(url, data=video_data, files=files)
        response_json = response.json()

        # Check if the upload was successful
        if 'id' in response_json:
            reel_id = response_json['id']
            print("Reel uploaded successfully! Reel ID:", reel_id)

            # Upload the thumbnail
            upload_thumbnail(reel_id, thumbnail_path, access_token)

            # If scheduling time is provided, schedule the reel
            if scheduled_publish_time:
                schedule_reel(reel_id, scheduled_publish_time, access_token)
        else:
            print("Failed to upload reel:", response_json)
    except Exception as e:
        print("An error occurred:", str(e))

def upload_thumbnail(reel_id, thumbnail_path, access_token):
    # Endpoint for uploading thumbnail
    url = f'https://graph.facebook.com/v15.0/{reel_id}'

    # Thumbnail data
    files = {'thumb': open(thumbnail_path, 'rb')}
    params = {'access_token': access_token}

    try:
        # Sending POST request to upload the thumbnail
        response = requests.post(url, files=files, params=params)
        response_json = response.json()

        # Check if the thumbnail upload was successful
        if 'success' in response_json and response_json['success']:
            print("Thumbnail uploaded successfully!")
        else:
            print("Failed to upload thumbnail:", response_json)
    except Exception as e:
        print("An error occurred:", str(e))

def schedule_reel(reel_id, scheduled_publish_time, access_token):
    # Endpoint for scheduling the reel
    url = f'https://graph.facebook.com/v15.0/{reel_id}'

    # Schedule data
    data = {
        'access_token': access_token,
        'published': 'false',
        'scheduled_publish_time': scheduled_publish_time
    }

    try:
        # Sending POST request to schedule the reel
        response = requests.post(url, data=data)
        response_json = response.json()

        # Check if scheduling was successful
        if 'id' in response_json:
            print("Reel scheduled successfully!")
        else:
            print("Failed to schedule reel:", response_json)
    except Exception as e:
        print("An error occurred:", str(e))

# Example usage
if __name__ == "__main__":
    # Path to the video file
    video_path = '55.mp4'
    file_size = os.path.getsize(video_path)
    target_file_size = 10 * 1024 # 10 KB
    print(file_size)

    # Path to the thumbnail image
    thumbnail_path = 'thumbnails/55.jpg'
    extract_frame(video_path, 6, thumbnail_path)

    # Facebook Access Token
    access_token = 'xxxxx'

    # Scheduled publish time (human-readable format)
    scheduled_publish_time = '2024-04-10 15:30:00' # Example: April 10, 2024, 15:30:00 UTC

    # Call the function to upload the reel with the extracted thumbnail and schedule it
    upload_reel_with_thumbnail(video_path, thumbnail_path, access_token, file_size, target_file_size, scheduled_publish_time)
    print("done")
</code></pre>
<p>Any idea? :(</p>
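<p>One thing worth checking, independent of the Graph API specifics: <code>files = {'file': video_path}</code> hands requests the <em>path string</em>, not the file's contents. Requests treats a plain string value in <code>files</code> as the literal file body, so the server receives only the handful of bytes in the path, which would explain the "minimum size for a video is 1 KB" complaint even for a 9 MB file. A small illustration (the path is hypothetical):</p>

```python
# What the multipart body contains in the two cases:
video_path = '55.mp4'

# With files={'file': video_path}, requests uploads the string itself:
wrong_body = video_path.encode()                    # just the 6 path characters
# With files={'file': open(video_path, 'rb')}, requests uploads the file bytes:
# right_body = open(video_path, 'rb').read()        # the actual ~9 MB video

# The "wrong" body is far below Facebook's stated 1 KB minimum:
too_small = len(wrong_body) < 1024
```

<p>Passing an open binary file handle instead of the path string would be the first fix to try; the zero-padding branch should then never trigger for a real video.</p>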
|
<python><facebook><facebook-graph-api>
|
2024-04-07 13:21:13
| 1
| 385
|
Augustus
|
78,287,834
| 433,261
|
Filter entries by tags
|
<p>I have two tables with a relationship</p>
<pre><code>class Patient(db.Model):
    __tablename__ = "patient"
    id = db.Column(db.UUID(as_uuid=True), primary_key=True, unique=True, server_default=sql.text("gen_random_uuid()"))
    name = db.Column(db.String)
    tags = db.relationship('PatientTag', backref="patient", lazy="dynamic")

class PatientTag(db.Model):
    __tablename__ = "patient_tag"
    id = db.Column(db.Integer, primary_key=True, autoincrement=True, unique=True)
    patient_id = db.Column(db.UUID(as_uuid=True), db.ForeignKey(Patient.id))
    mainTag = db.Column(db.String)
    subTag = db.Column(db.String)
</code></pre>
<p>I want to get all patients that have a specific tag, for example:</p>
<pre><code>--Table Patient
id|name
--------
1 |pt a
2 |pt b
3 |pt c
--Table PatientTag
patient_id|mainTag|subTag
-------------------------
1 |spine |
1 |spine |degenerative
2 |spine |trauma
3 |spine |trauma
</code></pre>
<p>I want all patients with tag (spine, trauma), which should give me patients 2 and 3,</p>
<p>or all patients with tag (spine, degenerative), which should give me patient 1 only.</p>
|
<python><flask><sqlalchemy>
|
2024-04-07 12:41:24
| 1
| 751
|
Hadi
|
78,287,827
| 1,172,907
|
How to ignore functions matching pattern in pyright?
|
<p>Despite ignoring <code>vcr</code>, pyright still reports an error. Why am I still seeing this error:</p>
<pre class="lang-none prettyprint-override"><code>[Pyright] Arguments missing for parameters "instance", "args", "kwargs" (3:5)
</code></pre>
<pre class="lang-py prettyprint-override"><code># test_foo.py
import pytest
import vcr

@pytest.mark.skip
class TestPublisher:
    @vcr.use_cassette('cassettes/test_publish_post.yaml')
</code></pre>
<pre class="lang-ini prettyprint-override"><code># pyproject.toml
[tool.pyright]
ignore = ["*vcr*"]
</code></pre>
<p>Actually, I don't think <code>pyproject.toml</code> is respected at all by pyright, after trying:</p>
<pre class="lang-ini prettyprint-override"><code>[tool.pyright]
ignore = [".","*"]
exclude = [".","*"]
</code></pre>
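<p>For what it's worth, pyright's <code>ignore</code> and <code>exclude</code> settings take file paths and glob patterns relative to the project root, not function or symbol names, so a pattern like <code>"*vcr*"</code> only has an effect if a file path happens to match it; pyright also only reads the <code>pyproject.toml</code> (or <code>pyrightconfig.json</code>) at the root of the project it opens, which can make it look ignored entirely. A path-based sketch (the test path is an assumption):</p>

```toml
[tool.pyright]
# Paths and globs, not symbol names:
ignore = ["tests/test_foo.py", "**/test_*.py"]
```

<p>For a single diagnostic, an inline suppression comment such as <code># pyright: ignore</code> on the decorator line is usually the more targeted escape hatch.</p>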
|
<python><pyright>
|
2024-04-07 12:39:50
| 2
| 605
|
jjk
|
78,287,788
| 11,520,306
|
Produce to Confluent Kafka Topic with SASL_SSL and OAUTHBEARER mechanism using python client
|
<p>I'm trying to produce one message to a topic in my RBAC-enabled and TLS-enabled Confluent Kafka cluster. My solution only works partially. I'm seeking help to successfully publish a message to the topic using the <a href="https://docs.confluent.io/kafka-clients/python/current/overview.html" rel="nofollow noreferrer">confluent-kafka library</a>.</p>
<p>Broker config:</p>
<pre><code>listener.security.protocol.map=EXTERNAL:PLAINTEXT,INTERNAL:SSL,REPLICATION:SSL,TOKEN:SASL_SSL
listeners=EXTERNAL://:9092,INTERNAL://:9071,REPLICATION://:9072,TOKEN://:9073
listener.name.token.sasl.enabled.mechanisms=OAUTHBEARER
confluent.metadata.server.authentication.method=BEARER
confluent.metadata.server.listeners=https://0.0.0.0:8090
confluent.metadata.server.token.auth.enable=true
kafka.rest.confluent.metadata.http.auth.credentials.provider=BASIC
</code></pre>
<p>Code:</p>
<pre class="lang-py prettyprint-override"><code>#!/usr/bin/env python3
import requests
from requests.auth import HTTPBasicAuth
from confluent_kafka import Producer

def oauthbearer_refresh_cb(oauthbearer_config):
    url = "https://my-kafka-broker:8090/security/1.0/authenticate"
    headers = {
        "Content-Type": "application/json",
        "Accept": "application/json",
    }
    response = requests.get(url, headers=headers, auth=HTTPBasicAuth('kafka', 'kafka-secret'), verify=False)
    if response.status_code == 200:
        token_info = response.json()
        token = token_info.get("auth_token")
        expires_in = token_info.get("expires_in", 3600) * 1000 # Convert to milliseconds
        print(f"Refreshed OAuth token: {token}, expires in {expires_in} milliseconds")
        return token, expires_in
    else:
        print(f"Failed to refresh OAuth token: {response.text}")

def acked(err, msg):
    if err is not None:
        print("Failed to deliver message: %s: %s" % (str(msg), str(err)))
    else:
        print("Message produced: %s" % (str(msg)))

def produce_message(producer, topic, message):
    producer.produce(topic, value=message, on_delivery=acked)
    producer.flush()

if __name__ == '__main__':
    config = {
        'bootstrap.servers': 'my-kafka-broker:9073',
        'security.protocol': 'SASL_SSL',
        'sasl.mechanisms': 'OAUTHBEARER',
        'ssl.ca.location': '/app/assets/certs/ca/cacerts.pem',
        'oauth_cb': oauthbearer_refresh_cb,
    }
    producer = Producer(**config)
    produce_message(producer, 'kafka-inbox', 'Hello, Kafka!')
</code></pre>
<p>The output shows that the auth token is refreshed periodically, but publishing the message seems to be unsuccessful:</p>
<pre class="lang-bash prettyprint-override"><code>kafka@python-producer:/app/docker/producer$ ./producer_native.py
/usr/local/lib/python3.12/site-packages/urllib3/connectionpool.py:1103: InsecureRequestWarning: Unverified HTTPS request is being made to host 'my-kafka-broker'. Adding certificate verification is strongly advised. See: https://urllib3.readthedocs.io/en/latest/advanced-usage.html#tls-warnings
warnings.warn(
Refreshed OAuth token: <actual-auth-token-redacted>, expires in 3600000 milliseconds
/usr/local/lib/python3.12/site-packages/urllib3/connectionpool.py:1103: InsecureRequestWarning: Unverified HTTPS request is being made to host 'my-kafka-broker'. Adding certificate verification is strongly advised. See: https://urllib3.readthedocs.io/en/latest/advanced-usage.html#tls-warnings
warnings.warn(
Refreshed OAuth token: <actual-auth-token-redacted>, expires in 3600000 milliseconds
...
^C^Z
[21]+ Stopped ./producer_native.py
</code></pre>
<p>The token header and payload retrieved looks like this:</p>
<pre><code>kafka@python-producer:/app/docker/producer$ curl --silent -k -u kafka:kafka-secret -H "Content-Type: application/json" -H "Accept: application/json" -X GET https://my-kafka-broker:8090/security/1.0/authenticate | jq -r '.auth_token | split(".") | .[0:2] | map(@base64d) | map(fromjson)'
[
{
"alg": "RS256",
"kid": null
},
{
"jti": "<redacted-actual-jti-value>",
"iss": "Confluent",
"sub": "kafka",
"exp": 1712494519,
"iat": 1712490919,
"nbf": 1712490859,
"azp": "kafka",
"auth_time": 1712490919
}
]
</code></pre>
|
<python><apache-kafka><confluent-platform><confluent-kafka-python>
|
2024-04-07 12:25:08
| 0
| 558
|
Moritz Wolff
|
78,287,484
| 268,581
|
Stacked bar chart from dataframe
|
<h1>Program</h1>
<p>Here's a small Python program that gets tax data via the treasury.gov API:</p>
<pre class="lang-py prettyprint-override"><code>import pandas as pd
import treasury_gov_pandas
# ----------------------------------------------------------------------
df = treasury_gov_pandas.update_records(
url = 'https://api.fiscaldata.treasury.gov/services/api/fiscal_service/v1/accounting/dts/deposits_withdrawals_operating_cash')
df['record_date'] = pd.to_datetime(df['record_date'])
df['transaction_today_amt'] = pd.to_numeric(df['transaction_today_amt'])
tmp = df[(df['transaction_type'] == 'Deposits') & ((df['transaction_catg'].str.contains('Tax')) | (df['transaction_catg'].str.contains('FTD'))) ]
</code></pre>
<p>The program is using the following library to download the data:</p>
<p><a href="https://github.com/dharmatech/treasury-gov-pandas.py" rel="nofollow noreferrer">https://github.com/dharmatech/treasury-gov-pandas.py</a></p>
<h1>Dataframe</h1>
<p>Here's what the resulting data looks like:</p>
<pre class="lang-none prettyprint-override"><code>>>> tmp.tail(20).drop(columns=['table_nbr', 'table_nm', 'src_line_nbr', 'record_fiscal_year', 'record_fiscal_quarter', 'record_calendar_year', 'record_calendar_quarter', 'record_calendar_month', 'record_calendar_day', 'transaction_mtd_amt', 'transaction_fytd_amt', 'transaction_catg_desc', 'account_type', 'transaction_type'])
record_date transaction_catg transaction_today_amt
371266 2024-04-03 DHS - Customs and Certain Excise Taxes 84
371288 2024-04-03 Taxes - Corporate Income 237
371289 2024-04-03 Taxes - Estate and Gift 66
371290 2024-04-03 Taxes - Federal Unemployment (FUTA) 10
371291 2024-04-03 Taxes - IRS Collected Estate, Gift, misc 23
371292 2024-04-03 Taxes - Miscellaneous Excise 41
371293 2024-04-03 Taxes - Non Withheld Ind/SECA Electronic 1786
371294 2024-04-03 Taxes - Non Withheld Ind/SECA Other 2315
371295 2024-04-03 Taxes - Railroad Retirement 3
371296 2024-04-03 Taxes - Withheld Individual/FICA 12499
371447 2024-04-04 DHS - Customs and Certain Excise Taxes 82
371469 2024-04-04 Taxes - Corporate Income 288
371470 2024-04-04 Taxes - Estate and Gift 59
371471 2024-04-04 Taxes - Federal Unemployment (FUTA) 8
371472 2024-04-04 Taxes - IRS Collected Estate, Gift, misc 127
371473 2024-04-04 Taxes - Miscellaneous Excise 17
371474 2024-04-04 Taxes - Non Withheld Ind/SECA Electronic 1905
371475 2024-04-04 Taxes - Non Withheld Ind/SECA Other 1092
371476 2024-04-04 Taxes - Railroad Retirement 1
371477 2024-04-04 Taxes - Withheld Individual/FICA 2871
</code></pre>
<p>The dataframe has data that goes back to 2005:</p>
<pre class="lang-none prettyprint-override"><code>>>> tmp.drop(columns=['table_nbr', 'table_nm', 'src_line_nbr', 'record_fiscal_year', 'record_fiscal_quarter', 'record_calendar_year', 'record_calendar_quarter', 'record_calendar_month', 'record_calendar_day', 'transaction_mtd_amt', 'transaction_fytd_amt', 'transaction_catg_desc', 'account_type', 'transaction_type'])
record_date transaction_catg transaction_today_amt
2 2005-10-03 Customs and Certain Excise Taxes 127
7 2005-10-03 Estate and Gift Taxes 74
10 2005-10-03 FTD's Received (Table IV) 2515
12 2005-10-03 Individual Income and Employment Taxes, Not Wi... 353
21 2005-10-03 FTD's Received (Table IV) 15708
... ... ... ...
371473 2024-04-04 Taxes - Miscellaneous Excise 17
371474 2024-04-04 Taxes - Non Withheld Ind/SECA Electronic 1905
371475 2024-04-04 Taxes - Non Withheld Ind/SECA Other 1092
371476 2024-04-04 Taxes - Railroad Retirement 1
371477 2024-04-04 Taxes - Withheld Individual/FICA 2871
</code></pre>
<h1>Question</h1>
<p>I'd like to plot this data as a stacked bar chart.</p>
<ul>
<li>x-axis should be 'record_date'.</li>
<li>y-axis should be the 'transaction_today_amt'.</li>
<li>The 'transaction_catg' values should be used for the stacked items.</li>
</ul>
<p>I'm open to any plotting library. I.e. matplotlib, bokeh, plotly, etc.</p>
<p>What's a good way to implement this?</p>
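<p>A pandas-only sketch of the reshaping this calls for: pivot to one row per <code>record_date</code> and one column per <code>transaction_catg</code>, which is the wide form every stacked-bar API consumes directly. The few rows below are made up, shaped like <code>tmp</code>:</p>

```python
import pandas as pd

# A few made-up rows shaped like the tmp frame above:
tmp = pd.DataFrame({
    'record_date': pd.to_datetime(['2024-04-03'] * 2 + ['2024-04-04'] * 2),
    'transaction_catg': ['Taxes - Corporate Income',
                         'Taxes - Withheld Individual/FICA'] * 2,
    'transaction_today_amt': [237, 12499, 288, 2871],
})

# Wide form: one row per date, one column per category. This is the shape
# that DataFrame.plot(kind='bar', stacked=True) stacks directly.
wide = tmp.pivot_table(index='record_date', columns='transaction_catg',
                       values='transaction_today_amt', aggfunc='sum').fillna(0)

# ax = wide.plot(kind='bar', stacked=True)   # matplotlib via pandas;
# plotly (px.bar) and bokeh accept the same wide frame.
```

<p><code>pivot_table</code> with <code>aggfunc='sum'</code> also absorbs the duplicate-category rows visible in the 2005 data (e.g. two "FTD's Received" entries on the same date).</p>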
|
<python><pandas>
|
2024-04-07 10:33:00
| 3
| 9,709
|
dharmatech
|
78,287,471
| 16,389,095
|
Python/Flet: RuntimeError: asyncio.run() cannot be called from a running event loop
|
<p>I'm trying to use an <a href="https://github.com/flet-dev/examples/blob/main/python/apps/counter/counter.py" rel="nofollow noreferrer">example</a> provided by the <strong>Flet</strong> developers, written in Python with the Flet framework.
After installing Flet and downloading the example, I tried to run it in Spyder IDE 5.5. When I ran it, I got this error:</p>
<p><em>RuntimeError: asyncio.run() cannot be called from a running event loop</em></p>
<p>How can I solve it? Thank you in advance.</p>
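<p>The error itself is generic asyncio behaviour, not Flet-specific: Spyder's IPython console already runs an event loop, and <code>asyncio.run()</code> refuses to start a second loop on the same thread. Running the script from a plain terminal (<code>python counter.py</code> or <code>flet run</code>) avoids the problem entirely. A stdlib-only sketch of the situation and the usual loop-aware workaround:</p>

```python
import asyncio

async def work():
    return 42

def run(coro):
    """Run a coroutine whether or not an event loop is already running."""
    try:
        asyncio.get_running_loop()
    except RuntimeError:
        return asyncio.run(coro)        # plain script: no loop yet, safe to run
    # Inside Spyder/IPython a loop already exists; asyncio.run() would raise
    # the RuntimeError from the question, so schedule on the current loop:
    return asyncio.ensure_future(coro)

result = run(work())
```

<p>In the IPython case the return value is a pending <code>Task</code> rather than the result, which is why running the example as a normal script is the cleaner fix.</p>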
|
<python><runtime-error><python-asyncio>
|
2024-04-07 10:28:03
| 1
| 421
|
eljamba
|
78,287,455
| 6,805,396
|
Adding 'pass' instruction breaks PyCharm iterating through a set
|
<p>I found some strange behaviour with PyCharm in debugging mode.
Suppose we have this code:</p>
<pre><code>a = {'a', 'b', 'c'}
for el in a:
print(el)
pass
</code></pre>
<p>If you put a breakpoint on the 'pass' line and run the code in debug mode, you get an error:</p>
<pre><code>Process finished with exit code -1073741819 (0xC0000005)
</code></pre>
<p>because the program tries to iterate through the loop a fourth time.
The code works fine if you just run it (normally, not in debug mode) or move the breakpoint off the 'pass' line (to the 'print' line, for example).
Is this a PyCharm issue, or do I have a mistake in my code?</p>
|
<python><debugging><pycharm>
|
2024-04-07 10:19:36
| 0
| 609
|
Vlad
|
78,287,270
| 18,205,996
|
How to send confirmation emails and scheduled emails in Django
|
<p>I'm working on a Django project involving a custom user registration form. My goal is to implement a two-step email notification process upon form submission:</p>
<ol>
<li><strong>Immediate email confirmation:</strong> automatically send a customized email to the user immediately after they submit the form.</li>
<li><strong>Scheduled email notification:</strong> send a second customized email at a later date, which is determined by the information provided when the form is created (e.g., a specific date for event reminders).</li>
</ol>
<p>The scheduling of the second email needs to be dynamic, allowing for different dates based on the form's context, such as varying event dates.</p>
<p>How can I achieve this with Django, especially the scheduling of emails to be dispatched at a future date?
Note that I expect a volume of 1000 submissions per month.</p>
<p>Thanks for your help in advance.</p>
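<p>To make the second requirement concrete, this is the scheduling arithmetic I have in mind (names hypothetical; actual delivery would presumably go through Django's <code>send_mail</code> plus a task queue such as Celery with its <code>eta</code>/<code>countdown</code> options):</p>

```python
from datetime import datetime, timedelta

def countdown_seconds(send_at: datetime, now: datetime) -> float:
    """Delay to hand to a task queue (e.g. Celery's countdown=...)."""
    return max((send_at - now).total_seconds(), 0.0)

event_date = datetime(2024, 5, 1, 9, 0)        # from the submitted form
reminder_at = event_date - timedelta(days=3)   # e.g. remind 3 days before
now = datetime(2024, 4, 7, 9, 0)

print(countdown_seconds(reminder_at, now))     # 1814400.0 (21 days)
```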
|
<python><django><forms><email>
|
2024-04-07 09:16:29
| 1
| 597
|
taha khamis
|
78,287,245
| 6,187,792
|
Encrypting columns in Databricks using Python while retaining original datatype
|
<p>I'm working on a data security project in Databricks using Python, and I need to encrypt certain columns in a DataFrame while ensuring that the encrypted columns retain their original datatype. I've followed the article below, but am struggling to find a solution that preserves the datatype integrity post-encryption.</p>
<p><a href="https://medium.com/@datamadness/column-encryption-and-decryption-in-databricks-baf9ada3a7cf" rel="nofollow noreferrer">https://medium.com/@datamadness/column-encryption-and-decryption-in-databricks-baf9ada3a7cf</a></p>
|
<python><pyspark><encryption><databricks><azure-databricks>
|
2024-04-07 09:07:41
| 1
| 1,340
|
practicalGuy
|
78,287,188
| 9,137,547
|
Pylance complaining for sklearn.datasets.load_iris()
|
<p>I am using Pylance with <code>Type Checking Mode: Basic</code> and I am loading the iris dataset with <a href="https://scikit-learn.org/stable/modules/generated/sklearn.datasets.load_iris.html" rel="nofollow noreferrer">sklearn.datasets.load_iris()</a>.</p>
<pre><code>from sklearn.datasets import load_iris
def main():
iris = load_iris()
data = iris.data
</code></pre>
<p>This code works for the Python interpreter but Pylance complains giving:</p>
<blockquote>
<p>Cannot access member "<code>data</code>" for type "<code>tuple[Bunch, tuple[Unknown, ...]]</code>"
Member "<code>data</code>" is unknown Pylance(reportAttributeAccessIssue)</p>
</blockquote>
<p>So I tried type hinting, hoping to solve the issue, by adding</p>
<pre><code>from sklearn.utils import Bunch
iris: Bunch = load_iris()
</code></pre>
<p>And now I get:</p>
<blockquote>
<p>Expression of type "<code>tuple[Bunch, tuple[Unknown, ...]]</code>" cannot be assigned to declared type "<code>Bunch</code>"
"<code>tuple[Bunch, tuple[Unknown, ...]]</code>" is incompatible with "<code>Bunch</code>" Pylance(reportAssignmentType)</p>
</blockquote>
<p>If instead I follow the suggestion of Pylance and unpack it as it was a tuple (which is not according to the docs) and do:</p>
<pre><code>iris: Bunch
iris, _ = load_iris()
</code></pre>
<p>Pylance warnings disappear but the code no longer works with error:</p>
<blockquote>
<p>Traceback (most recent call last):
File ".../main.py", line 10, in
main()
File ".../main.py", line 6, in main
iris, _ = load_iris()
^^^^^^^
ValueError: too many values to unpack (expected 2)</p>
</blockquote>
<p>Looking at the source and the docs, load_iris() should return a Bunch object, not a tuple of two elements nor a tuple with more.</p>
<p>I would like to fix this with something other than <code># type: ignore</code> and to fully understand the issue.</p>
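<p>The only workaround I've found so far besides <code># type: ignore</code> is <code>typing.cast</code>, shown here on a stand-in function (not the real sklearn API) since the underlying issue is the overloaded return type Pylance infers for <code>load_iris</code>:</p>

```python
from typing import Union, cast

def loader() -> Union[dict, "tuple[dict, tuple]"]:
    # Mimics load_iris: the runtime value is a single mapping, but the
    # declared type is a union that blocks attribute/key access for Pylance
    return {"data": [5.1, 3.5, 1.4]}

iris = cast(dict, loader())  # narrows the union for the type checker only
print(iris["data"][0])
```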
|
<python><types><scikit-learn><pylance><pyright>
|
2024-04-07 08:47:01
| 1
| 659
|
Umberto Fontanazza
|
78,287,134
| 11,824,828
|
Using OpenAI API for answering questions about csv file
|
<p>I'm starting with the OpenAI API and experimenting with langchain. I have a .csv file with approximately 1000 rows and 85 columns with string values. I found a beginner article that I followed, and I have a colab notebook with the following code:</p>
<pre><code>txt_file_path = '/content/drive/My Drive/Colab Notebooks/preprocessed_data_10.csv'
with open(txt_file_path, 'r', encoding="utf-8") as file:
data = file.read()
txt_file_path = '/content/drive/My Drive/Colab Notebooks/preprocessed_data_10.csv'
loader = TextLoader(file_path=txt_file_path, encoding="utf-8")
data = loader.load()
text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=200)
data = text_splitter.split_documents(data)
embeddings = OpenAIEmbeddings()
vectorstore = FAISS.from_documents(data, embedding=embeddings)
llm = ChatOpenAI(temperature=0.7, model_name="gpt-4")
memory = ConversationBufferMemory(memory_key='chat_history', return_messages=True)
conversation_chain = ConversationalRetrievalChain.from_llm(
llm=llm,
chain_type="stuff",
retriever=vectorstore.as_retriever(),
memory=memory
)
query = "question"
result = conversation_chain({"question": query})
answer = result["answer"]
answer
</code></pre>
<p>The errors I got were:</p>
<pre><code>Error code: 429 - {'error': {'message': 'Request too large for gpt-4 in organization org-xxx on tokens per min (TPM): Limit 10000, Requested 139816.
</code></pre>
<p>and</p>
<pre><code>BadRequestError: Error code: 400 - {'error': {'message': "This model's maximum context length is 8192 tokens. However, your messages resulted in 32045 tokens.
</code></pre>
<p>I tried to figure out how big a csv file I can feed it and reduced the file to 10 rows and 53 columns.</p>
<p>What are possible workarounds so I can search on entire csv file?</p>
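<p>One workaround I'm considering is batching the rows before embedding or querying, so no single request exceeds the token limit (a sketch with the standard library only; the batch size would need tuning against the real limits):</p>

```python
import csv
import io

def batch_rows(csv_text, rows_per_batch):
    """Yield smaller CSV chunks, each carrying the header row."""
    rows = list(csv.reader(io.StringIO(csv_text)))
    header, body = rows[0], rows[1:]
    for i in range(0, len(body), rows_per_batch):
        yield [header] + body[i:i + rows_per_batch]

sample = "col_a,col_b\n1,2\n3,4\n5,6\n"
batches = list(batch_rows(sample, 2))
print(len(batches))  # 2 batches: rows 1-2 and row 3
```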
|
<python><artificial-intelligence><openai-api><langchain><retrieval-augmented-generation>
|
2024-04-07 08:27:26
| 2
| 325
|
vloubes
|
78,287,128
| 6,225,526
|
What is python equivalent of qweibull in MathCAD?
|
<p>I want to get the x value for the given quantile while the data is weibull distributed. Mathcad has <code>qweibull</code> and scipy has <code>scipy.stats.norm.ppf(q)</code> for normal distribution.</p>
<p>However, I don't find <code>scipy.stats.weibull.ppf(q, c)</code>.</p>
<p>How can I achieve this?</p>
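<p>For reference, the Weibull quantile function has a closed form, so whatever SciPy call applies can be checked by hand; my understanding is that <code>scipy.stats.weibull_min.ppf(q, c)</code> should match this:</p>

```python
import math

def weibull_ppf(q, c, scale=1.0):
    """Inverse CDF of the two-parameter Weibull distribution:
    x = scale * (-ln(1 - q)) ** (1 / c)."""
    return scale * (-math.log(1.0 - q)) ** (1.0 / c)

# With shape c=1 the Weibull reduces to the exponential distribution,
# whose median is ln 2
print(round(weibull_ppf(0.5, 1.0), 6))  # 0.693147
```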
|
<python><numpy><scipy><mathcad>
|
2024-04-07 08:25:50
| 1
| 1,161
|
Selva
|
78,287,117
| 9,108,781
|
How to restart a Python script from within itself?
|
<p>There seem to be at least two ways to restart a Python script. What's the difference between these two ways, and when to use which? Thanks a lot!
One way:</p>
<pre><code>import sys
import os
def restart_script():
python = sys.executable
os.execl(python, python, *sys.argv)
def main():
print("Starting script...")
try:
# Some code
# For demonstration, simulate an error
raise ValueError("Simulated error occurred!")
except Exception as e:
print(f"Error occurred: {e}")
print("Restarting script...")
restart_script()
if __name__ == "__main__":
main()
</code></pre>
<p>Another way:</p>
<pre><code>import os
import sys
import time
import subprocess
def main():
while True:
print("Starting program")
try:
p = subprocess.Popen(['python', 'my_program.py'])
p.wait()
except Exception as e:
print(f"Error occurred: {e}")
print("Restarting program...")
time.sleep(5) # Delay before restarting to avoid rapid restarts
else:
print("Program exited without error")
break # Exit the loop if the program exits without error
if __name__ == "__main__":
main()
</code></pre>
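<p>For what it's worth, my current understanding of the difference, checked with a tiny supervisor: <code>os.execl</code> replaces the current process image and never returns, while the <code>subprocess</code> approach keeps a parent process alive that survives the child's exit and can relaunch it:</p>

```python
import subprocess
import sys

# A child that always fails, standing in for my_program.py
child_code = "import sys; sys.exit(1)"

exit_codes = []
for attempt in range(2):
    # The parent survives the child's exit and can retry; that is
    # impossible with os.execl, which replaces the parent process itself
    result = subprocess.run([sys.executable, "-c", child_code])
    exit_codes.append(result.returncode)

print(exit_codes)  # [1, 1]
```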
|
<python><python-3.x>
|
2024-04-07 08:20:33
| 1
| 943
|
Victor Wang
|
78,286,931
| 5,218,497
|
Create user in cognito during external signup by providing default phone_number if it doesn't exist
|
<p>How can I update the phone number to a default value if phone_number doesn't exist during signup?
Our Cognito setup cannot allow phone_number to be null, so it needs a default value.</p>
<p>The below code is written in <strong>Pre sign-up Lambda trigger</strong></p>
<p>The code below doesn't seem to work when signing up through the external Azure provider.</p>
<pre><code>def lambda_handler(event, context):
print("Pre Signup Congito")
if event['triggerSource'] == "PreSignUp_ExternalProvider":
request = event['request']
user_attributes = request['userAttributes']
# If no phone number initialize with default number
if not user_attributes['phone_number']:
user_attributes['phone_number'] = '+1888888888'
user_attributes['phone_number_verified'] = 'false'
request['userAttributes'] = user_attributes
event['request'] = request
# Return to Amazon Cognito
return event
</code></pre>
<p>Below are the required attributes</p>
<p><a href="https://i.sstatic.net/MJZ0R.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/MJZ0R.png" alt="enter image description here" /></a></p>
<p>Adding default phone number for cognito since its a required attribute.</p>
<p>During federated Microsoft login I get the error below, so to avoid it I am adding a default phone number if one doesn't exist.</p>
<p>http://localhost:3000/login#error_description=Invalid+SAML+response+received%3A+Invalid+user+attributes%3A+phoneNumbers%3A+The+attribute+phoneNumbers+is+required+&error=server_error</p>
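<p>One thing I'm unsure about is whether the attribute key is present at all when the SAML response omits it; <code>user_attributes['phone_number']</code> would raise <code>KeyError</code> in that case. A <code>dict.get</code>-based variant of the defaulting logic I'm testing (plain Python, outside Lambda):</p>

```python
def ensure_phone_number(user_attributes, default="+1888888888"):
    # .get() covers both a missing key and an empty value
    if not user_attributes.get("phone_number"):
        user_attributes["phone_number"] = default
        user_attributes["phone_number_verified"] = "false"
    return user_attributes

attrs = ensure_phone_number({"email": "user@example.com"})
print(attrs["phone_number"])  # +1888888888
```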
|
<python><amazon-web-services><amazon-cognito>
|
2024-04-07 07:01:17
| 0
| 2,428
|
Sharath
|
78,286,871
| 3,628,240
|
Webscraping JavaScript Table with Selenium and Beautiful Soup
|
<p>I'm trying to scrape this website: <a href="https://www.globusmedical.com/patient-education-musculoskeletal-system-conditions/resources/find-a-surgeon/" rel="nofollow noreferrer">https://www.globusmedical.com/patient-education-musculoskeletal-system-conditions/resources/find-a-surgeon/</a></p>
<p>It appears that the website uses JavaScript, so when I inspect the initially loaded code, I don't see the table with the doctors listed. However, when you inspect the element specifically, it's there with all of the information.</p>
<p>I'm trying to click the "Load More" button multiple times until it disappears and then parse through the page with BeautifulSoup.</p>
<p>Could someone help me with why the page_source that's being printed doesn't display any of the information? Is there a while loop you would setup to accomplish clicking "Load More" until it's gone?</p>
<pre><code>from bs4 import BeautifulSoup
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver import ActionChains
import time
import requests
doctor_dict = {}
#configure webdriver
options = webdriver.ChromeOptions()
options.add_argument("--headless=new")
driver = webdriver.Chrome(options = options)
driver.get("https://www.globusmedical.com/patient-education-musculoskeletal-system-conditions/resources/find-a-surgeon/")
time.sleep(5)
clickable = driver.find_element(By.XPATH,'//button[@class="js-eml-load-more-button eml-load-more-button eml-btn btn btn--primary"]')
driver.execute_script("arguments[0].click();", clickable)
# items = driver.find_element(By.CLASS_NAME,"eml-location grid--item")
soup = BeautifulSoup(driver.page_source, 'html.parser')
print(soup.prettify())
driver.quit()
</code></pre>
|
<python><selenium-webdriver><web-scraping><beautifulsoup>
|
2024-04-07 06:38:28
| 2
| 927
|
user3628240
|
78,286,768
| 6,115,999
|
Trying to use keras pipeline: Unrecognized keyword arguments passed to Dense:
|
<p>I installed keras-ocr from pip and am running this:</p>
<p><code>import keras_ocr</code></p>
<p>I get this warning, but I've read I can just ignore it:</p>
<pre><code>2024-04-07 01:32:55.323689: I tensorflow/core/platform/cpu_feature_guard.cc:210] This TensorFlow binary is optimized to use available CPU instructions in performance-critical operations.
To enable the following instructions: AVX2 AVX512F AVX512_VNNI FMA, in other operations, rebuild TensorFlow with the appropriate compiler flags.
2024-04-07 01:32:56.612061: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Could not find TensorRT
</code></pre>
<p>I continue with:</p>
<p><code>pipeline=keras_ocr.pipeline.Pipeline()</code></p>
<p>and get this crazy error:</p>
<pre><code>ValueError: Unrecognized keyword arguments passed to Dense: {'weights': [array([[0., 0., 0., 0., 0., 0.],
[0., 0., 0., 0., 0., 0.],
[0., 0., 0., 0., 0., 0.],
[0., 0., 0., 0., 0., 0.],
[0., 0., 0., 0., 0., 0.],
[0., 0., 0., 0., 0., 0.],
[0., 0., 0., 0., 0., 0.],
[0., 0., 0., 0., 0., 0.],
[0., 0., 0., 0., 0., 0.],
[0., 0., 0., 0., 0., 0.],
[0., 0., 0., 0., 0., 0.],
[0., 0., 0., 0., 0., 0.],
[0., 0., 0., 0., 0., 0.],
[0., 0., 0., 0., 0., 0.],
[0., 0., 0., 0., 0., 0.],
[0., 0., 0., 0., 0., 0.],
[0., 0., 0., 0., 0., 0.],
[0., 0., 0., 0., 0., 0.],
[0., 0., 0., 0., 0., 0.],
[0., 0., 0., 0., 0., 0.],
[0., 0., 0., 0., 0., 0.],
[0., 0., 0., 0., 0., 0.],
[0., 0., 0., 0., 0., 0.],
[0., 0., 0., 0., 0., 0.],
[0., 0., 0., 0., 0., 0.],
[0., 0., 0., 0., 0., 0.],
[0., 0., 0., 0., 0., 0.],
[0., 0., 0., 0., 0., 0.],
[0., 0., 0., 0., 0., 0.],
[0., 0., 0., 0., 0., 0.],
[0., 0., 0., 0., 0., 0.],
[0., 0., 0., 0., 0., 0.],
[0., 0., 0., 0., 0., 0.],
[0., 0., 0., 0., 0., 0.],
[0., 0., 0., 0., 0., 0.],
[0., 0., 0., 0., 0., 0.],
[0., 0., 0., 0., 0., 0.],
[0., 0., 0., 0., 0., 0.],
[0., 0., 0., 0., 0., 0.],
[0., 0., 0., 0., 0., 0.],
[0., 0., 0., 0., 0., 0.],
[0., 0., 0., 0., 0., 0.],
[0., 0., 0., 0., 0., 0.],
[0., 0., 0., 0., 0., 0.],
[0., 0., 0., 0., 0., 0.],
[0., 0., 0., 0., 0., 0.],
[0., 0., 0., 0., 0., 0.],
[0., 0., 0., 0., 0., 0.],
[0., 0., 0., 0., 0., 0.],
[0., 0., 0., 0., 0., 0.],
[0., 0., 0., 0., 0., 0.],
[0., 0., 0., 0., 0., 0.],
[0., 0., 0., 0., 0., 0.],
[0., 0., 0., 0., 0., 0.],
[0., 0., 0., 0., 0., 0.],
[0., 0., 0., 0., 0., 0.],
[0., 0., 0., 0., 0., 0.],
[0., 0., 0., 0., 0., 0.],
[0., 0., 0., 0., 0., 0.],
[0., 0., 0., 0., 0., 0.],
[0., 0., 0., 0., 0., 0.],
[0., 0., 0., 0., 0., 0.],
[0., 0., 0., 0., 0., 0.],
[0., 0., 0., 0., 0., 0.]], dtype=float32), array([1., 0., 0., 0., 1., 0.], dtype=float32)]}
</code></pre>
<p>Anyone know how I resolve this? I've tried installing tensorrt. I'm not deliberately passing anything into pipeline other than the command that everyone has as an example.</p>
|
<python><tensorflow><keras>
|
2024-04-07 05:40:02
| 4
| 877
|
filifunk
|
78,286,690
| 20,386,110
|
Godot: CharacterBody2D or Area2D for an object that needs collision AND to queue_free?
|
<p>I'm working on the game <em>Breakout</em> which is a game similar to <em>Pong</em> where a ball bounces off a paddle and you have to aim in order to hit a brick. When you hit the brick you score a point and the brick disappears. Each of my bricks is an <code>Area2D</code> because I need access to the <code>on_body_entered</code> signal.</p>
<pre><code>extends Area2D
func _on_body_entered(body):
score += 1
queue_free()
</code></pre>
<p>When the ball hits the brick, that script runs, which is exactly what I want to happen. The problem with this is the block simply disappears upon being hit and the ball keeps going straight through it. I'm trying to implement proper collision so when the brick disappears, the ball bounces off it, similar to the bounce when the ball hits the paddle (which is a CharacterBody2D).</p>
<p>The ball is also a CharacterBody2D</p>
<pre><code>extends CharacterBody2D
var speed: int = 6
func _ready():
var target_position = Vector2(575, 645)
var direction = global_position.direction_to(target_position)
velocity = direction.normalized() * speed
func _physics_process(delta):
var collision = move_and_collide(velocity)
if collision != null:
velocity = velocity.bounce(collision.get_normal())
</code></pre>
<p>Based on this information, how can I implement proper logic on the brick to get the ball to bounce off it as it disappears?</p>
|
<python><game-development><godot><gdscript>
|
2024-04-07 04:54:12
| 1
| 369
|
Dagger
|
78,286,674
| 726,730
|
PyQt5 - QtWebEngineWidgets error while trying to open a web page
|
<p>For some reason, when I try to open a web page using QtWebEngineWidgets I get this error: <code>[10316:10636:0407/073627.994:FATAL:tsf_text_store.cc(52)] Failed to initialize CategoryMgr.</code></p>
<p>MRE:</p>
<p>file: test.py</p>
<pre class="lang-py prettyprint-override"><code># -*- coding: utf-8 -*-
from PyQt5 import QtCore, QtGui, QtWidgets
class Ui_MainWindow(object):
def setupUi(self, MainWindow):
MainWindow.setObjectName("MainWindow")
MainWindow.resize(800, 600)
self.centralwidget = QtWidgets.QWidget(MainWindow)
self.centralwidget.setObjectName("centralwidget")
self.gridLayout = QtWidgets.QGridLayout(self.centralwidget)
self.gridLayout.setObjectName("gridLayout")
self.deck_1_web = QtWidgets.QFrame(self.centralwidget)
self.deck_1_web.setFrameShape(QtWidgets.QFrame.StyledPanel)
self.deck_1_web.setFrameShadow(QtWidgets.QFrame.Raised)
self.deck_1_web.setObjectName("deck_1_web")
self.gridLayout.addWidget(self.deck_1_web, 0, 0, 1, 1)
MainWindow.setCentralWidget(self.centralwidget)
self.menubar = QtWidgets.QMenuBar(MainWindow)
self.menubar.setGeometry(QtCore.QRect(0, 0, 800, 22))
self.menubar.setObjectName("menubar")
MainWindow.setMenuBar(self.menubar)
self.statusbar = QtWidgets.QStatusBar(MainWindow)
self.statusbar.setObjectName("statusbar")
MainWindow.setStatusBar(self.statusbar)
self.retranslateUi(MainWindow)
QtCore.QMetaObject.connectSlotsByName(MainWindow)
def retranslateUi(self, MainWindow):
_translate = QtCore.QCoreApplication.translate
MainWindow.setWindowTitle(_translate("MainWindow", "MainWindow"))
</code></pre>
<p>file: app.py</p>
<pre class="lang-py prettyprint-override"><code>from PyQt5 import QtWidgets, QtCore, QtGui
from PyQt5.QtWebEngineWidgets import *
from PyQt5.QtCore import QUrl
from test import Ui_MainWindow
import pyuac
import traceback
import sys
sys.coinit_flags = 0
# if I import the following two, the error occurs
from comtypes import CLSCTX_ALL
from pycaw.pycaw import AudioUtilities, IAudioEndpointVolume
class App:
def __init__(self):
self.app = QtWidgets.QApplication(sys.argv)
self.MainWindow = QtWidgets.QMainWindow()
self.ui = Ui_MainWindow()
self.ui.setupUi(self.MainWindow)
self.MainWindow.show()
self.web_page()
sys.exit(self.app.exec_())
def web_page(self):
try:
if "web" in dir(self):
self.web.loadFinished.disconnect()
del self.web
#self.clearLayout(self.main_self.ui.horizontalLayout_9)
except Exception as e:
error_message = str(traceback.format_exc())
print(error_message)
self.ui.deck_1_web.show()
self.web = QWebEngineView(self.ui.deck_1_web)
print("1")
self.web.setPage(WebEnginePage(self.web))
print("2")
#self.web.load(QUrl(self.tmp_item["url"]))
self.web.load(QUrl('http://www.google.gr'))
print("3")
self.page_loaded = False
self.web.show()
print("4")
self.ui.deck_1_web.hide()
print("5")
self.web.page().setAudioMuted(True)
#self.web.loadFinished.connect(lambda ok: self.main_page_loaded_finished(ok))
class WebEnginePage(QWebEnginePage):
def javaScriptConsoleMessage(self, level, message, lineNumber, sourceID):
pass
if __name__ == "__main__":
program = App()
</code></pre>
<p>Any thoughts? I am running Python 3.12.x on Windows 11 64-bit. The problem exists only with the pycaw import or only with the comtypes import.</p>
|
<python><pyqt5><qwebengineview><qwebenginepage><qwebengine>
|
2024-04-07 04:45:19
| 0
| 2,427
|
Chris P
|
78,286,494
| 908,328
|
A question about Python's pop and what does the grammar have to say?
|
<p>This is something I tried out in <code>ipython</code>, and the behavior is quite clear: when creating the dictionaries in lines 3 and 6, they are created as if invoked by <code>dict(**kwargs)</code>, with the kwargs processed left to right. But is this a CPython implementation detail, or is the behavior actually specified in Python's grammar?</p>
<pre class="lang-py prettyprint-override"><code>Python 3.12.2 (v3.12.2:6abddd9f6a, Feb 6 2024, 17:02:06) [Clang 13.0.0 (clang-1300.0.29.30)]
Type 'copyright', 'credits' or 'license' for more information
IPython 8.22.2 -- An enhanced Interactive Python. Type '?' for help.
In [1]: x={"a":1, "b":2, "c":4}
In [2]: y= dict(z=x.pop("b"), **x)
In [3]: y
Out[3]: {'z': 2, 'a': 1, 'c': 4}
In [4]: x
Out[4]: {'a': 1, 'c': 4}
In [5]: x={"a":1, "b":2, "c":4}
In [6]: y= dict(**x, z=x.pop("b") )
In [7]: y
Out[7]: {'a': 1, 'b': 2, 'c': 4, 'z': 2}
In [8]: x
Out[8]: {'a': 1, 'c': 4}
</code></pre>
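<p>A minimal check of the left-to-right evaluation I suspect, with a side effect recorded per argument:</p>

```python
order = []

def note(tag, value):
    order.append(tag)
    return value

def f(**kwargs):
    return kwargs

# Call arguments are evaluated left to right, so the recorded order
# mirrors the textual order of the keywords
f(z=note("z", 2), a=note("a", 1))
print(order)  # ['z', 'a']
```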
|
<python><dictionary><grammar>
|
2024-04-07 02:49:14
| 1
| 1,123
|
pbhowmick
|
78,286,424
| 11,170,350
|
Trouble with Django Authentication: Instead of showing templates it show admin page
|
<p>I'm currently learning Django authentication and have encountered an issue with customizing HTML templates. Instead of displaying my custom templates, Django redirects me to the admin dashboard.</p>
<p>For instance, when I modify <code>LOGOUT_REDIRECT_URL</code> to <code>logout</code> and <code>name</code> in <code>TemplateView</code> to <code>logout</code>, I expect to see my custom logout page located at <code>templates/registration/logout.html</code>. However, Django continues to redirect me to the default admin dashboard logout page. But when I set both to <code>customlogout</code>, it redirects me correctly.</p>
<p>Similarly, I face a problem with the "Forgotten your password?" button. When clicking on it, I anticipate being directed to <code>templates/registration/password_reset_form.html</code>, but instead, I'm redirected to the Django admin dashboard.</p>
<p>Below is my code setup:</p>
<p><strong>settings.py:</strong></p>
<pre class="lang-py prettyprint-override"><code>LOGIN_REDIRECT_URL = 'home'
LOGOUT_REDIRECT_URL = "customlogout"
</code></pre>
<p><strong>app urls.py:</strong></p>
<pre class="lang-py prettyprint-override"><code>from django.urls import path
from django.views.generic import TemplateView
urlpatterns = [
path('', TemplateView.as_view(template_name='home.html'), name='home'),
path('logout/', TemplateView.as_view(template_name='logout.html'), name='customlogout'),
]
</code></pre>
<p><strong>project urls.py:</strong></p>
<pre class="lang-py prettyprint-override"><code>from django.contrib import admin
from django.urls import path, include
urlpatterns = [
path('admin/', admin.site.urls),
path('', include('account.urls')),
path("accounts/", include("django.contrib.auth.urls")),
]
</code></pre>
<p><strong>Password reset template <code>templates/registration/password_reset_form.html</code>:</strong></p>
<pre class="lang-html prettyprint-override"><code><h1>Password Reset</h1>
<p>Enter your email ID to reset the password</p>
<form method="post">
{% csrf_token %}
{{ form.as_p }}
<button type="submit">Reset</button>
</form>
</code></pre>
<p><strong>Edit</strong><br />
If I move my templates out of registration folder into templates folder, then the following work</p>
<pre><code> path('logout/', auth_views.LogoutView.as_view(template_name='logout.html'), name='logout'),
path('password-reset/', auth_views.PasswordResetView.as_view(template_name='password_reset_form.html'), name='password_reset'),
</code></pre>
<p>But why was it not working before?</p>
|
<python><django><django-authentication>
|
2024-04-07 02:07:27
| 3
| 2,979
|
Talha Anwar
|
78,286,353
| 1,886,237
|
Writing a pandas dataframe with large vector column
|
<p>I have a pandas dataframe with 4 columns as shown below. The <strong>text_vector</strong> column contains vectors that are 1500 dimensions:</p>
<p><a href="https://i.sstatic.net/rvBFY.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/rvBFY.png" alt="enter image description here" /></a></p>
<p>When I write this out using <code>df.to_csv("./data/train_clean_vec.csv", index=False, encoding='utf-8')</code>, the values in the <strong>text_vector</strong> column get truncated, as seen when viewing the csv in a text editor:</p>
<p><a href="https://i.sstatic.net/WDYzo.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/WDYzo.png" alt="enter image description here" /></a></p>
<p>Is there a way to write this dataframe such that all the vector values are preserved? If not, any suggestions as to how to write this out in some other format where I can easily read the data back in? I'm "data format agnostic"; in other words, I don't care what the format is (csv, json, HDF5, whatever...) as my goal is to preserve these vectors so I can read them back in instead of having to regenerate them each time I need to work with this data.</p>
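<p>What I'm currently leaning towards is a binary format; pandas' <code>to_pickle</code> keeps list/array cells intact with no truncation (tiny stand-in vectors here, and an in-memory buffer instead of a file path):</p>

```python
import io

import pandas as pd

df = pd.DataFrame({
    "id": [1, 2],
    "text_vector": [[0.1] * 5, [0.2] * 5],  # stand-in for 1500-dim vectors
})

buf = io.BytesIO()
df.to_pickle(buf)        # binary round-trip, no string conversion
buf.seek(0)
restored = pd.read_pickle(buf)

print(restored["text_vector"].iloc[0] == df["text_vector"].iloc[0])  # True
```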
|
<python><pandas><dataframe><dataformat>
|
2024-04-07 01:19:05
| 1
| 2,170
|
Michael Szczepaniak
|
78,285,775
| 22,418,446
|
How to remove inactive devices when scanning for Bluetooth devices using PyBluez?
|
<p>I'm developing a Python application to scan for nearby Bluetooth devices using <code>PyBluez</code>. I'm using the <code>discover_devices</code> function to discover devices within a certain duration, and I've set <code>lookup_names</code> and <code>lookup_class</code> parameters to True to retrieve the device names and class information.</p>
<p>However, I've noticed that the scan results include <strong>inactive devices that I've connected to before</strong>. These inactive devices clutter the scan results and make it harder to identify currently available devices.</p>
<p>Is there a way to filter out or remove inactive devices from the scan results in PyBluez? I'd like to only display devices that are currently active and available for connection.</p>
<p>Here is my simplified code:</p>
<pre><code>import bluetooth
def discover_devices(self):
nearby_devices = bluetooth.discover_devices(duration=5, lookup_names=True, lookup_class=True)
return nearby_devices
</code></pre>
|
<python><bluetooth><pybluez>
|
2024-04-06 20:23:59
| 0
| 1,160
|
msamedozmen
|
78,285,729
| 416,035
|
Inline python script in cmake execute_process fails on windows
|
<p>I'm working on a project that uses cmake, and it is failing while trying to locate my python interpreter. cmake v3.26 is being run under cmd.exe on Windows 10.</p>
<p>The error message is</p>
<pre><code>CMake Error at <project_path>/cmake/share/cmake-3.26/Modules/FindPython/Support.cmake:2175 (list):
list GET given empty list
Call Stack (most recent call first):
<project_path>/cmake/share/cmake-3.26/Modules/FindPython.cmake:606 (include)
CMakeLists.txt:30 (find_package)
</code></pre>
<p>Digging further in, it seems that it is running the following <code>execute_process</code> inside <code>Support.cmake</code>:</p>
<pre><code> execute_process (COMMAND ${_${_PYTHON_PREFIX}_INTERPRETER_LAUNCHER} "${_${_PYTHON_PREFIX}_EXECUTABLE}" -c
"<inline python script>"
RESULT_VARIABLE _${_PYTHON_PREFIX}_RESULT
OUTPUT_VARIABLE _${_PYTHON_PREFIX}_LIBPATHS
ERROR_QUIET)
</code></pre>
<p>The inline python script is line separated with <code>\n</code>, and uses either <code>distutils</code> or <code>sysconfig</code> to print the paths of various python folders, which cmake then expects to parse back into internal variables. However, even though <code>_${_PYTHON_PREFIX}_RESULT</code> comes out 0, indicating no error, <code>_${_PYTHON_PREFIX}_LIBPATHS</code> comes back empty.</p>
<p>I've isolated this problem into a standalone cmake script:</p>
<pre><code>include(CMakePrintHelpers)
set(_PYTHON_PREFIX "Python")
execute_process (COMMAND "<abspath>" -c "import sys\nsys.stdout.write('one')"
RESULT_VARIABLE _${_PYTHON_PREFIX}_RESULT
COMMAND_ECHO STDOUT
OUTPUT_VARIABLE _${_PYTHON_PREFIX}_LIBPATHS
ERROR_VARIABLE _${_PYTHON_PREFIX}_ERR)
cmake_print_variables(_${_PYTHON_PREFIX}_LIBPATHS)
cmake_print_variables(_${_PYTHON_PREFIX}_RESULT)
cmake_print_variables(_${_PYTHON_PREFIX}_ERR)
</code></pre>
<p>With <code><abspath></code> being an absolute path to the desired python.</p>
<p>Running this with <code>cmake -P test.cmake</code> results in:</p>
<pre><code>Prompt>cmake -P test.cmake
'<abspath>' '-c' 'import sys
sys.stdout.write('one')'
-- _Python_LIBPATHS=""
-- _Python_RESULT="0"
-- _Python_ERR=""
</code></pre>
<p>If I change the <code>\n</code> to a <code>;</code> to make it a one-liner, <code>_Python_LIBPATHS</code> gets correctly set to <code>one</code>.</p>
<p>What can I change to allow cmake to successfully get through this script in FindPython/Support.cmake ?</p>
|
<python><cmake>
|
2024-04-06 20:03:29
| 0
| 786
|
RFairey
|
78,285,698
| 4,089,351
|
Python Jupyter Notebook kernel in VSCode not defaulting to prior settings after attempting to run a prior version of Python
|
<p>Background: I was comfortably running Jupyter Notebooks in VSC without a glitch. However, I tried to run an old NB, and it didn't run due to some deprecated commands in the code. So I tried using an older version of Python, all within VSC. It didn't recognize <code>import numpy as np</code>, which may have had to do with some Python environment issue. So I let it go.</p>
<p>Now, whenever I try to run a previously functioning Jupyter NB with my most updated version of Python (3.11), I have to point to it by going to <code>command palette</code> and selecting it. Further, it doesn't seem to settle (the kernel name keeps on spinning), and as soon as I run the first cell, I get this error:</p>
<p><img src="https://i.sstatic.net/LTy9U.png" alt="enter image description here" /></p>
<p>However, if I start a terminal and type <code>pip3 install ipykernel</code>, I get that the requirements are already satisfied.</p>
<p>Clearly my experiment with an old version of Python with NB's altered a few settings. It didn't affect <code>.py</code> files - just NB's.</p>
<p>How can I return to the baseline settings?</p>
|
<python><visual-studio-code><jupyter-notebook>
|
2024-04-06 19:46:06
| 1
| 4,851
|
Antoni Parellada
|
78,285,601
| 16,363,897
|
Rolling standard deviation of all columns, ignoring NaNs
|
<p>I have the following dataframe:</p>
<pre><code>data = {'a': {1: None, 2: 1, 3: 7, 4: 2, 5: 4},
'b': {1: None, 2: 2, 3: 2, 4: 9, 5: 6},
'c': {1: None, 2: 2.0, 3: None, 4: 7.0, 5: 4.0}}
df = pd.DataFrame(data).rename_axis('day')
a b c
day
1 NaN NaN NaN
2 1.0 2.0 2.0
3 7.0 2.0 NaN
4 2.0 9.0 7.0
5 4.0 6.0 4.0
</code></pre>
<p>I want to get a new column ("std") with the rolling standard deviation of all column values. NaNs should be ignored. Let's say the number of rows to be included in the rolling window is 3 and min_periods (meaning the number of rows with at least one non-null value) is 2.</p>
<p>This is the expected output:</p>
<pre><code> a b c std
day
1 NaN NaN NaN NaN
2 1.0 2.0 2.0 NaN
3 7.0 2.0 NaN 2.387467
4 2.0 9.0 7.0 3.116775
5 4.0 6.0 4.0 2.531939
</code></pre>
<p>The first std value (2.387467) is equal to np.std ([1,2,2,7,2], ddof=1).</p>
<p>I tried both solutions proposed <a href="https://stackoverflow.com/questions/77704029/pandas-calculate-rolling-standard-deviation-over-all-columns">here</a> but they don't work properly with my dataframe, probably because of NaNs.</p>
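<p>To be explicit about the computation I expect, here is a brute-force version that produces the output above (far too slow for my real data, hence the question):</p>

```python
import numpy as np
import pandas as pd

data = {'a': {1: None, 2: 1, 3: 7, 4: 2, 5: 4},
        'b': {1: None, 2: 2, 3: 2, 4: 9, 5: 6},
        'c': {1: None, 2: 2.0, 3: None, 4: 7.0, 5: 4.0}}
df = pd.DataFrame(data).rename_axis('day')

window, min_periods = 3, 2

def window_std(i):
    chunk = df.iloc[max(0, i - window + 1): i + 1]
    vals = chunk.to_numpy().ravel()
    vals = vals[~np.isnan(vals)]
    # min_periods counts rows with at least one non-null value
    if chunk.notna().any(axis=1).sum() < min_periods:
        return np.nan
    return vals.std(ddof=1)

df['std'] = [window_std(i) for i in range(len(df))]
print(df['std'].round(6).tolist())  # [nan, nan, 2.387467, 3.116775, 2.531939]
```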
|
<python><pandas><dataframe><numpy>
|
2024-04-06 19:10:45
| 2
| 842
|
younggotti
|
78,285,559
| 7,576,157
|
Airflow PYTHONPATH
|
<p>I am getting the error:</p>
<pre><code>ModuleNotFoundError: No module named 'scripts.marketing_functions'
</code></pre>
<p>Project structure example.</p>
<pre><code>scripts/
βββ__init__.py
βββ marketing_functions.py
βββ marketing/
βββ some_script.py # Here my marketing_functions.py is used
dags/
βββ my_dag.py # Dag with bashOperator. it uses some_script.py with marketing_function.py import
</code></pre>
<ol>
<li>I put scripts into /opt/bitnami/airflow/scripts.</li>
<li>I put dags into /opt/bitnami/airflow/dags.</li>
<li>I added /opt/bitnami/airflow/scripts to PYTHONPATH. I can see the correct PYTHONPATH using the command <code>airflow info</code>:</li>
</ol>
<pre><code>Paths info
airflow_home | /opt/bitnami/airflow
system_path | /opt/bitnami/common/bin:/opt/bitnami/python/bin:/opt/bitnami/postgresql/bin:/opt/bitnami/airflow/venv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
python_path | /opt/bitnami/python/bin:/opt/bitnami/airflow/scripts:/opt/bitnami/python/lib/python39.zip:/opt/bitnami/python/lib/python3.9:/opt/bitnami/python/lib/python3.9/lib-dynload:/opt/bitnami/python/lib/pyt
| hon3.9/site-packages:/opt/bitnami/python/lib/python3.9/site-packages/setuptools-63.4.3-py3.9.egg:/opt/bitnami/python/lib/python3.9/site-packages/pip-21.3.1-py3.9.egg:/opt/bitnami/airflow/dags:/opt/
| bitnami/airflow/config:/opt/bitnami/airflow/plugins
airflow_on_path | True
</code></pre>
<p>Dag code example:</p>
<pre><code>import datetime
from airflow import DAG
from airflow.operators.bash import BashOperator
with DAG(
dag_id="marketing_some_script",
start_date=datetime.datetime(2023, 4, 16),
schedule_interval="30 2 * * *",
) as dag:
some_script = "/opt/bitnami/airflow/scripts/marketing/some_script.py"
task_some_script = BashOperator(
task_id="task_some_report",
bash_command=f"python {some_script}"
)
task_some_script
</code></pre>
<p>Head of some_script.py file</p>
<pre><code>from datetime import datetime
import gspread
import pandas as pd
from airflow.providers.postgres.hooks.postgres import PostgresHook
from oauth2client.service_account import ServiceAccountCredentials
from scripts.marketing_functions import *
</code></pre>
<p>Please help me find a solution.</p>
<p>UPD: Even if I just run the script directly with Python, I have the same problem:</p>
<pre><code>I have no name!@0a8ff614c4b8:/$ python /opt/bitnami/airflow/scripts/marketing/some_script.py
Traceback (most recent call last):
File "/opt/bitnami/airflow/scripts/marketing/export_marketing_daily_report.py", line 8, in <module>
from scripts.marketing_functions import *
ModuleNotFoundError: No module named 'scripts.marketing_functions'
</code></pre>
<p>BUT</p>
<pre><code>I have no name!@0a8ff614c4b8:/$ echo $PYTHONPATH
/opt/bitnami/airflow/scripts
</code></pre>
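<p>Here is a minimal, self-contained reproduction of my layout (the temp-dir paths are made up for the demo). With the <em>parent</em> of <code>scripts/</code> on <code>sys.path</code> the import resolves, which makes me suspect my PYTHONPATH entry points one level too deep:</p>

```python
import importlib
import os
import sys
import tempfile

# Recreate the package layout from the question in a temporary directory.
root = tempfile.mkdtemp()
pkg = os.path.join(root, "scripts")
os.makedirs(pkg)
open(os.path.join(pkg, "__init__.py"), "w").close()
with open(os.path.join(pkg, "marketing_functions.py"), "w") as f:
    f.write("VALUE = 42\n")
importlib.invalidate_caches()

# `from scripts.marketing_functions import ...` needs the PARENT of
# scripts/ on sys.path, not scripts/ itself.
sys.path.insert(0, root)
from scripts.marketing_functions import VALUE
print(VALUE)
```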
|
<python><bash><airflow><airflow-2.x>
|
2024-04-06 18:58:41
| 1
| 505
|
ΠΠ»Π΅ΠΊΡΠ°Π½Π΄Ρ Π¨Π°ΠΏΠΎΠ²Π°Π»ΠΎΠ²
|
78,285,290
| 10,035,190
|
How to filter a huge CSV file with pandas
|
<p>I have a 10 GB CSV file, <code>data/history_{date_to_be_searched}.csv</code>, containing more than 27,000 zip codes. I have to filter the CSV file by zip code and upload each filtered file to Azure Blob Storage.</p>
<p>I read the data in chunks of 1,000,000 rows like this: <code>for chunk in pd.read_csv(f'data/history_{date_to_be_searched}.csv', chunksize=1000000):</code>, take the unique zip codes from each chunk with <code>uniquezipcode = df_final["zipcode"].unique()</code>, and then upload the results to blob storage.
This way the chunk loop runs 25 times and the filter loop runs 27,000 times, which takes about 70 hours. So I thought of using <code>map()</code> instead of looping, like this:</p>
<pre><code>def filtered_data():
try:
for chunk in pd.read_csv(f'data/history_{date_to_be_searched}.csv', chunksize=1000000):
df_final=chunk
uniquezipcode = df_final["zipcode"].unique()
grouped_data = df_final.groupby('zipcode')
del df_final
data_for_upload = [(zipcode, group_df) for zipcode, group_df in grouped_data]
map(upload_filtered_data,*zip(*data_for_upload))
del data_for_upload
except Exception as e:
print("filtered_data Error : ",e)
def upload_filtered_data(each_zone,df_filter):
global date_to_be_searched
try:
if df_filter.shape[0] != 0:
file_path = f"{path}/{container_name}/{date_to_be_searched.year}/{date_to_be_searched.month}/{each_zone}/{each_zone}{date_to_be_searched}.parquet"
outdir = f"{path}/{container_name}/{date_to_be_searched.year}/{date_to_be_searched.month}/{each_zone}/"
outdir_blob = f"{date_to_be_searched.year}/{date_to_be_searched.month}/{each_zone}/{each_zone}{date_to_be_searched}.parquet"
fullname = os.path.join(outdir, f"{each_zone}{date_to_be_searched}.parquet")
if not os.path.exists(file_path):
os.makedirs(file_path)
else:
df_filter = merge_df(df_filter, file_path)
df_filter.to_parquet(fullname, allow_truncated_timestamps=True)
upload_to_blob(fullname, outdir_blob)
            del each_zone
del df_filter
except Exception as e:
print("upload_filtered_data Error : ",e)
</code></pre>
<p>I think this way will be much faster, but it uses so much RAM that the script gets terminated/killed every time I run it. Please suggest a more efficient way, or should I reduce the chunk size? Is there any way to filter the whole CSV file in a single loop without loading it into RAM?</p>
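<p>One idea I had (a rough sketch, untested at scale, and the output layout is simplified to plain CSVs instead of parquet/blob uploads): stream the file once and append each zip code's rows to its own file, so nothing large stays in RAM:</p>

```python
import os
import pandas as pd

def split_by_zipcode(csv_path, out_dir, chunksize=1_000_000):
    """Stream the big CSV once, appending each zipcode's rows to its own file."""
    os.makedirs(out_dir, exist_ok=True)
    seen = set()  # zip codes whose output file already has a header
    for chunk in pd.read_csv(csv_path, chunksize=chunksize):
        for zipcode, group in chunk.groupby('zipcode'):
            path = os.path.join(out_dir, f"{zipcode}.csv")
            # write the header only the first time we see this zip code
            group.to_csv(path, mode='a', header=zipcode not in seen, index=False)
            seen.add(zipcode)
```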
|
<python><pandas><csv><large-data><large-files>
|
2024-04-06 17:27:47
| 1
| 930
|
zircon
|
78,285,251
| 9,827,719
|
Python Session in 2nd gen Google Cloud Function - RuntimeError: The session is unavailable because no secret key was set
|
<p>How can I use sessions in a Google Cloud Function 2nd gen?</p>
<p>When I try to set a session value I get an error.</p>
<pre><code>session['logged_in'] = True
</code></pre>
<p>Gives error:</p>
<blockquote>
<p>RuntimeError: The session is unavailable because no secret key was
set. Set the secret_key on the application to something unique and
secret.</p>
</blockquote>
<p><strong>main.py</strong></p>
<pre><code>import functions_framework
from flask import session
@functions_framework.http
def hello_http(request):
"""HTTP Cloud Function.
Args:
request (flask.Request): The request object.
<https://flask.palletsprojects.com/en/1.1.x/api/#incoming-request-data>
Returns:
The response text, or any set of values that can be turned into a
Response object using `make_response`
<https://flask.palletsprojects.com/en/1.1.x/api/#flask.make_response>.
"""
request_json = request.get_json(silent=True)
request_args = request.args
# Sessions
session['logged_in'] = True
# Hello
if request_json and 'name' in request_json:
name = request_json['name']
elif request_args and 'name' in request_args:
name = request_args['name']
else:
name = 'World'
return 'Hello {}!'.format(name)
</code></pre>
<p><strong>requirements.txt</strong></p>
<pre><code>flask
functions-framework==3.*
</code></pre>
|
<python><google-cloud-functions>
|
2024-04-06 17:15:51
| 1
| 1,400
|
Europa
|
78,285,241
| 15,453,560
|
Extracting dates from a sentence in spaCy
|
<p>I have a string like so:</p>
<pre><code>"The dates are from 30 June 2019 to 1 January 2022 inclusive"
</code></pre>
<p>I want to extract the dates from this string using spaCy.</p>
<p>Here is my function so far:</p>
<pre><code>def extract_dates_with_year(text):
doc = nlp(text)
dates_with_year = []
for ent in doc.ents:
if ent.label_ == "DATE":
dates_with_year.append(ent.text)
return dates_with_year
</code></pre>
<p>This returns the following output:</p>
<pre><code>['30 June 2019 to 1 January 2022']
</code></pre>
<p>However, I want output like:</p>
<pre><code>['30 June 2019', '1 January 2022']
</code></pre>
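<p>As a fallback I considered post-processing the entity text myself (a sketch; the separators and the year check are my own guesses, not spaCy behaviour):</p>

```python
import re

def split_date_range(ent_text):
    # split a DATE entity that spans a range on "to" or a dash
    parts = re.split(r'\s+(?:to|-|–)\s+', ent_text)
    # keep only the parts that contain a 4-digit year
    return [p.strip() for p in parts if re.search(r'\b\d{4}\b', p)]

print(split_date_range('30 June 2019 to 1 January 2022'))
```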
|
<python><regex><nlp><spacy><named-entity-recognition>
|
2024-04-06 17:12:39
| 1
| 657
|
Muhammad Kamil
|
78,284,969
| 1,307,851
|
Difference across rows in Pandas DataFrame
|
<p>How do I create <code>df2</code> directly using <code>df</code> as input? I can't wrap my head around it...</p>
<pre><code>>>> import pandas as pd
>>> data = {'Year': [1990, 1990, 1990, 2022, 2022, 2022], 'City': ['Paris', 'Berlin', 'Lisbon', 'Paris', 'Berlin', 'Lisbon'], 'Population': [2.15, 3.40, 2.54, 2.11, 3.57, 2.99]}
>>> df = pd.DataFrame.from_dict(data)
Year City Population
0 1990 Paris 2.15
1 1990 Berlin 3.40
2 1990 Lisbon 2.54
3 2022 Paris 2.11
4 2022 Berlin 3.57
5 2022 Lisbon 2.99
>>> data2 = {'City': ['Paris', 'Berlin', 'Lisbon'], 'Population change': [2.11 - 2.15, 3.57 - 3.40, 2.99 - 2.54]}
>>> df2 = pd.DataFrame.from_dict(data2)
City Population change
0 Paris -0.04
1 Berlin 0.17
2 Lisbon 0.45
</code></pre>
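<p>For context, the closest I got on my own is pivoting the years into columns and subtracting (a sketch; I don't know if this is the idiomatic way, and note that <code>pivot</code> sorts the cities alphabetically):</p>

```python
import pandas as pd

data = {'Year': [1990, 1990, 1990, 2022, 2022, 2022],
        'City': ['Paris', 'Berlin', 'Lisbon', 'Paris', 'Berlin', 'Lisbon'],
        'Population': [2.15, 3.40, 2.54, 2.11, 3.57, 2.99]}
df = pd.DataFrame.from_dict(data)

# Each year becomes a column; the change is then a plain subtraction.
wide = df.pivot(index='City', columns='Year', values='Population')
df2 = (wide[2022] - wide[1990]).rename('Population change').reset_index()
print(df2)
```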
|
<python><pandas>
|
2024-04-06 15:46:40
| 3
| 694
|
Fredrik P
|
78,284,936
| 8,030,794
|
Using groupby in pandas with a condition
|
<p>I have dataframe</p>
<pre><code>data = {'time': ['10:00', '10:01', '10:02', '10:02', '10:03','10:04', '10:06', '10:10', '10:15'],
'price': [100, 101, 101, 103, 101,101, 105, 106, 107],
'volume': [50, 60, 30, 80, 20,50, 10, 40, 40]}
</code></pre>
<p>I need to group this df by every 5 minutes and by price, and sum up the volume:</p>
<pre><code>df.groupby([df['time'].dt.floor('5T'), 'price']).agg({'volume' : 'sum'}).reset_index()
</code></pre>
<p>Then I need to find the time at which, after summing within a group, the new volume first exceeds 100.</p>
<p>In this df that time is 10:03: after summing, the value is 60 + 30 + 20 = 110. At 10:04 the sum would be 60 + 30 + 20 + 50 = 160.</p>
<p>How can I do this using pandas?</p>
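<p>Here is what I have sketched so far (a guess at the idiom; the lookup of the first time over 100 at the end is my own addition):</p>

```python
import pandas as pd

data = {'time': ['10:00', '10:01', '10:02', '10:02', '10:03', '10:04', '10:06', '10:10', '10:15'],
        'price': [100, 101, 101, 103, 101, 101, 105, 106, 107],
        'volume': [50, 60, 30, 80, 20, 50, 10, 40, 40]}
df = pd.DataFrame(data)
df['time'] = pd.to_datetime(df['time'], format='%H:%M')

# Running volume within each (5-minute bin, price) group ...
df['bin'] = df['time'].dt.floor('5min')
df['cumvol'] = df.groupby(['bin', 'price'])['volume'].cumsum()
# ... then the first time the running sum exceeds 100, per group.
first_over = (df[df['cumvol'] > 100]
              .groupby(['bin', 'price'])['time']
              .first())
print(first_over)
```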
|
<python><pandas><group-by>
|
2024-04-06 15:35:34
| 1
| 465
|
Fresto
|
78,284,892
| 7,959,614
|
ThreadPoolExecutor and time-outs for individual threads
|
<p>I have the following code that yields a <code>TimeoutError</code> for the third <code>result</code>.</p>
<pre><code>import time
from concurrent.futures import ThreadPoolExecutor, TimeoutError
def sleeping(i):
time.sleep(i)
return f'slept {i} seconds'
with ThreadPoolExecutor(max_workers=1) as exe:
try:
for result in exe.map(sleeping, [1, 2, 10, 3], timeout=5):
print(result)
except TimeoutError:
print('Timeout waiting for map()')
</code></pre>
<p>The problem is that once the timeout triggers for one result, the iteration aborts and the whole thread pool is shut down, so I never see the remaining results. I would like all the tasks to complete.
Desired output:</p>
<pre><code>slept 1 seconds
slept 2 seconds
Timeout waiting for map()
slept 3 seconds
</code></pre>
<p>Please advise.</p>
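<p>The closest I have gotten is submitting the tasks individually, so a slow task only times out its own <code>result()</code> call (a sketch with the sleeps scaled down; note the ordering differs from my desired output, and late results only appear after the pool drains):</p>

```python
import time
from concurrent.futures import ThreadPoolExecutor, TimeoutError

def sleeping(i):
    time.sleep(i)
    return f'slept {i} seconds'

log = []
with ThreadPoolExecutor(max_workers=1) as exe:
    # submit() gives one future per task; each result() call can time out
    # on its own without tearing down the whole map().
    futures = [exe.submit(sleeping, i) for i in [0.1, 0.2, 2.0, 0.3]]
    pending = []
    for fut in futures:
        try:
            log.append(fut.result(timeout=0.5))
        except TimeoutError:
            log.append('Timeout waiting for task')
            pending.append(fut)
# leaving the with-block waits for the remaining tasks to finish
for fut in pending:
    log.append('late: ' + fut.result())
print(log)
```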
|
<python><multithreading><concurrent.futures>
|
2024-04-06 15:22:38
| 1
| 406
|
HJA24
|
78,284,761
| 275,552
|
Altair sorting not working properly on faceted chart
|
<p>I have some faceted bar graphs displaying the amounts of different colors in different paintings.</p>
<pre><code>base = alt.Chart(df).mark_bar().encode(
x=alt.X(
'color',
title='',
),
y=alt.Y(
'amount',
title='',
),
color=alt.Color('color',scale= alt.Scale(domain=rp,range=rph))
).properties(
width=80,
height=80
).facet(
facet='TITLE',
columns=5
)
</code></pre>
<p>Here's the first row:
<a href="https://i.sstatic.net/uYJAI.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/uYJAI.png" alt="enter image description here" /></a></p>
<p>I want to sort each graph descending on the x-axis, so I add:</p>
<pre><code>x=alt.X(
'color',
title='',
sort='-y'
)
</code></pre>
<p>Here's the result:
<a href="https://i.sstatic.net/cgFPq.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/cgFPq.png" alt="enter image description here" /></a></p>
<p>The 2nd and 5th graphs <em>almost</em> sort correctly, but not quite, and the same for the 1st and 3rd graphs, except they're ascending for some reason. What would cause this strange behavior? Is it something to do with faceting?</p>
|
<python><altair>
|
2024-04-06 14:35:54
| 1
| 16,225
|
herpderp
|
78,284,173
| 1,341,024
|
Flask-Security: CSRF SESSION Token Missing in Angular Request
|
<p>I have a Flask application using the Flask-Security extension. Here's how my app is initialized:</p>
<pre class="lang-py prettyprint-override"><code>app = Flask(__name__)
app.config['DEBUG'] = True
CORS(app)
CSRFProtect(app)
app.secret_key = b'_5#y2L"F4Q8z\n\xec]/'
app.config['SECRET_KEY'] = os.environ.get("SECRET_KEY", 'pf9Wkove4IKEAXvy-cQkeDPhv9Cb3Ag-wyJILbq_dFw')
app.config['SECURITY_PASSWORD_SALT'] = os.environ.get("SECURITY_PASSWORD_SALT", '146585145368132386173505678016728509634')
app.config["SECURITY_EMAIL_VALIDATOR_ARGS"] = {"check_deliverability": False}
app.teardown_appcontext(lambda exc: db_session.close())
# Setup Flask-Security
user_datastore = SQLAlchemySessionUserDatastore(db_session, User, Role)
app.security = Security(app, user_datastore)
</code></pre>
<p>I also have another Angular-based app communicating with the Flask app. In the <code>OnInit</code> function, I ensure a CSRF token is provided:</p>
<pre class="lang-js prettyprint-override"><code>this.http.get("http://localhost:5000/login")
.subscribe({
next: (response: any) => {
let crsftoken = response.response.csrf_token;
console.log("CRFS TOKEN aus GET" + crsftoken);
localStorage.setItem("crsftoken", crsftoken);
},
// ...
});
</code></pre>
<p>This works fine, but when I try to log in a user using a login form, the CSRF token doesn't seem to be included in the request. Here's the code executed when the form is submitted:</p>
<pre class="lang-js prettyprint-override"><code>const formData = new FormData();
formData.append('email', this.loginForm.controls.username.value || "");
formData.append('password', this.loginForm.controls.password.value || "");
this.http.post("http://localhost:5000/login", formData)
.subscribe({
next: (response: any) => {
console.log(response);
},
// ...
});
</code></pre>
<p>When I execute this code, I receive a "Bad Request" response with the message "The CSRF session token is missing." It seems like the session token isn't being sent with the request, but I'm not sure why.</p>
<p>During the first call to the <code>GET</code> method for "http://localhost:5000/login", I can see the session in the response and in the <code>Set-Cookie</code> header. However, it doesn't seem to be stored in the browser. As a result, it's not sent with the subsequent request.</p>
<p>I tried the same using POSTMAN - and there it works?!</p>
<p>Added: Of course I am using an HTTP interceptor to put the CSRF token in the header with each request; see:</p>
<pre><code>@Injectable()
export class AuthInterceptor implements HttpInterceptor {
intercept(request: HttpRequest<any>, next: HttpHandler): Observable<HttpEvent<any>> {
let crsftoken = localStorage.getItem('crsftoken');
let headers = request.headers;
headers = headers.set("X-CSRF-TOKEN", crsftoken || "");
request = request.clone({
headers : headers
});
return next.handle(request);
}
}
</code></pre>
<p>What am I doing wrong? Maybe I misunderstand the difference between the CSRF token and the CSRF session token?
Any insights or suggestions would be greatly appreciated. Thank you!</p>
|
<python><angular><flask><flask-security><flask-session>
|
2024-04-06 11:22:51
| 2
| 344
|
CodeCannibal
|
78,284,003
| 14,923,149
|
Connecting dendrograms in matplotlib
|
<p>I'm currently working on a project where I need to visualize hierarchical clustering dendrograms using matplotlib in Python. I've managed to generate dendrograms for two datasets (X1 and X2) and plotted them side by side. However, I'm facing difficulties in connecting the dendrograms together.</p>
<p>Previously I have asked <a href="https://stackoverflow.com/q/78282069/3001761">Connecting tips between dendrograms in side-by-side subplots</a> but it was using plotly.</p>
<p>I've tried extracting the tip labels from both dendrograms and sorting them, but when I attempt to connect them, the connections seem to be misplaced or missing.</p>
<p>Here's the code snippet I'm using:</p>
<pre><code>import matplotlib.pyplot as plt
import scipy.cluster.hierarchy as hierarchy
import numpy as np
# Sample data for X1 , X2
np.random.seed(20240406)
X1 = np.random.rand(10, 12)
X2 = np.random.rand(10, 12)
names = ['Jack', 'Oxana', 'John', 'Chelsea', 'Mark', 'Alice', 'Charlie', 'Rob', 'Lisa', 'Lily']
# Plotting
fig, axes = plt.subplots(1, 3, figsize=(12, 6))
# Generate dendrogram structure for X1
Z1 = hierarchy.linkage(X1, method='complete')
dn1 = hierarchy.dendrogram(Z1, ax=axes[0], orientation='left', labels=names)
axes[0].set_title('Left Dendrogram for X1')
axes[0].set_xlabel('Distance')
# Generate dendrogram structure for X2
Z2 = hierarchy.linkage(X2, method='complete')
dn2 = hierarchy.dendrogram(Z2, ax=axes[2], orientation='right', labels=names)
axes[2].set_title('Right Dendrogram for X2')
axes[2].set_xlabel('Distance')
# Extract leaves and match them with names
leaves_left = dn1['leaves']
leaves_right = dn2['leaves']
# Use leaves and names to create connections
connections = []
for i in range(len(leaves_left)):
left_name = names[leaves_left[i]]
try:
right_index = names.index(left_name)
except ValueError:
continue # Skip to the next iteration if the name is not found
connections.append((0, 1, i , right_index))
# Draw connections
for left, right, y_left, y_right in connections:
axes[1].plot([left, right], [y_left, y_right], 'k-', alpha=0.5)
# Customize the third plot for connections
axes[1].set_title('Connections')
axes[1].set_xlabel('Connection')
axes[1].set_xlim(0, 1) # Set limits for connection plot
axes[1].set_ylim(-0.5, len(names) - 0.5) # Adjust y-axis limits for connections
axes[1].axis('off')
plt.tight_layout()
plt.show()
</code></pre>
<p><a href="https://i.sstatic.net/pjraS.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/pjraS.png" alt="enter image description here" /></a></p>
<p>As you can see in the image, the tips are not linked properly. I'm aiming to connect the dendrograms with lines joining each label in X1 to its corresponding label in X2. How do I properly implement this connection between the dendrograms?</p>
|
<python><matplotlib><dendrogram>
|
2024-04-06 10:23:41
| 1
| 504
|
Umar
|
78,283,850
| 11,422,610
|
AttributeError: 'NoneType' object has no attribute 'hexdigest' with sha256
|
<pre><code>#!/usr/bin/env python3
from hashlib import sha256
def produce_digest_char_by_char(word):
d=sha256();print(d.hexdigest())
for b in word:
#d=d.update(b.encode()):print(d.hexdigest())#AttributeError: 'NoneType' object has no attribute 'hexdigest'
d.update(b.encode());print(d.hexdigest())#This works fine
return d.hexdigest()
print(sha256().hexdigest())
print(sha256(b"lana").hexdigest())
print(produce_digest_char_by_char("lana"))
</code></pre>
<p>The commented-out line does not work, giving me this error: <code>AttributeError: 'NoneType' object has no attribute 'hexdigest'</code>. By trial and error I figured out the fix in the code, but I would like to know why Python behaves this way. <code>help(sha256)</code> gives this info:</p>
<pre><code> Returns a sha256 hash object; optionally initialized with a string
</code></pre>
<p>Nothing in this info says that <code>sha256</code> might return <code>None</code>.</p>
|
<python><python-3.x><hash><sha256><hashlib>
|
2024-04-06 09:31:09
| 0
| 937
|
John Smith
|
78,283,816
| 6,197,439
|
What to do when pyinstaller sys._MEIPASS does not work?
|
<p>Here is an example, which in fact works, and so it does not actually illustrate my problem, but just to set a basis for discussion:</p>
<pre><code>mkdir C:/test/testpyinst
cd C:/test/testpyinst
</code></pre>
<p>Here I have <code>main.py</code>:</p>
<pre class="lang-python prettyprint-override"><code>#!/usr/bin/env python3
import sys, os
from lib import relative_path2
def relative_path(relative_path): # https://stackoverflow.com/q/58487122
try:
base_path = sys._MEIPASS
print("_MEIPASS OK {}".format(base_path))
except Exception:
base_path = os.path.abspath(".")
print("_MEIPASS exception {}".format(base_path))
return os.path.join(base_path, relative_path)
def main():
print("Hello from main")
print("relative_path {}".format(relative_path(".")))
print("relative_path2 {}".format(relative_path2(".")))
if __name__ == '__main__':
main()
</code></pre>
<p>... and <code>lib.py</code>:</p>
<pre class="lang-python prettyprint-override"><code>import sys, os
def relative_path2(relative_path): # https://stackoverflow.com/q/58487122
try:
base_path = sys._MEIPASS
print("_MEIPASS OK {}".format(base_path))
except Exception:
base_path = os.path.abspath(".")
print("_MEIPASS exception {}".format(base_path))
return os.path.join(base_path, relative_path)
</code></pre>
<p>I use this - <code>pyinstaller</code> installed via <code>python3 -m pip install pyinstaller</code>:</p>
<pre class="lang-none prettyprint-override"><code>$ uname -s
MINGW64_NT-10.0-19045
$ python3 --version
Python 3.11.8
$ pyinstaller --version
6.5.0
</code></pre>
<p>When I run this code directly from python, <code>_MEIPASS</code> is as expected not accessible:</p>
<pre class="lang-none prettyprint-override"><code>$ python3 main.py
Hello from main
_MEIPASS exception C:/test/testpyinst
relative_path C:/test/testpyinst/.
_MEIPASS exception C:/test/testpyinst
relative_path2 C:/test/testpyinst/.
</code></pre>
<p>Then I create an <code>.exe</code> with PyInstaller:</p>
<pre class="lang-none prettyprint-override"><code>pyinstaller --onefile main.py
</code></pre>
<p>... and when I run the .exe, as expected, <code>_MEIPASS</code> works:</p>
<pre class="lang-none prettyprint-override"><code>$ ./dist/main.exe
Hello from main
_MEIPASS OK D:\msys64\tmp\_MEI199762
relative_path D:\msys64\tmp\_MEI199762/.
_MEIPASS OK D:\msys64\tmp\_MEI199762
relative_path2 D:\msys64\tmp\_MEI199762/.
</code></pre>
<hr />
<p>So, here is my problem - in my actual (pyqt5) project, <code>sys._MEIPASS</code> does not work - in the sense that I <em>always</em> get an exception in <code>relative_path()</code> when trying to read <code>sys._MEIPASS</code> <em>while running from the final <code>.exe</code> created by PyInstaller</em> !?</p>
<p>I found this <a href="https://github.com/pyinstaller/pyinstaller/issues/1785" rel="nofollow noreferrer">AttributeError: 'module' object has no attribute '_MEIPASS' (redux) Β· Issue #1785 Β· pyinstaller/pyinstaller</a>:</p>
<blockquote>
<p><code>sys._MEIPASS</code> is set by the exe produced by pyinstaller before it runs any python code. I can think of a few reasons why it would not be set:</p>
<ul>
<li>The code reload(sys) is executed</li>
<li>You are using a version of PyInstaller prior to 3.0</li>
<li>PyInstaller was improperly installed and is using the old 2.1 bootloader files</li>
</ul>
</blockquote>
<p>I have tried running <code>python3 -m trace --trace myprogram.py | grep reload</code>, and apparently reload(sys) is never called in my program ...</p>
<p>So, I am still at a loss: what can I possibly do in my project, to get <code>sys._MEIPASS</code> working as expected?</p>
|
<python><python-3.x><pyinstaller>
|
2024-04-06 09:19:30
| 1
| 5,938
|
sdbbs
|
78,283,494
| 106,019
|
OCR with varying background colors and low contrasts?
|
<p>I'm trying to convert images to text using pytesseract. It works well for images with white background and black text, but it fails for images with less contrast and varying colors.</p>
<p>I have tried to put together all situations into one image. In most situations, the text and background each has a single color. What image processing would you recommend for the image below? Is there some kind of "local high contrast" conversion that could be used?</p>
<p>Edit: The image is representative of the images I will be processing, with regards to colors. With regards to text, some texts will be longer. I try to locate data on screenshots of webpages.</p>
<p>Code:</p>
<pre><code>import PIL.Image
import numpy as np
import pytesseract
img = PIL.Image.open('ocrtest.png')
ocr_string = pytesseract.image_to_string(img, timeout=30)
print(ocr_string)
</code></pre>
<p>Result:</p>
<pre><code>Test one
Test five
Test 6 (not as important)
</code></pre>
<p><a href="https://i.sstatic.net/YW604.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/YW604.png" alt="enter image description here" /></a></p>
<p>I have played around with curves in Gimp and found the equivalent using numpy. I've had some success with subsets of the image, but it does not solve the generic problem.</p>
<pre><code># Points found using Curve tool in gimp
# Based on https://stackoverflow.com/a/74279250/106019
lut_x = [226, 229]
lut_y = [ 0, 252]
import numpy as np
lut_u8 = np.interp(np.arange(0, 256), lut_x, lut_y).astype(np.uint8)
R, G, B, A = [0, 1, 2, 3]
source = img.split()
out = []
for band in [R, G, B]:
out.append(source[band].point(lut_u8))
out.append(source[A])
merged_img = PIL.Image.merge('RGBA', out)
</code></pre>
<p><a href="https://i.sstatic.net/JZgpe.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/JZgpe.png" alt="enter image description here" /></a></p>
|
<python><ocr><tesseract><python-tesseract><image-preprocessing>
|
2024-04-06 06:51:01
| 1
| 765
|
thomasa88
|
78,283,302
| 292,233
|
Understanding pytorch performance
|
<p>I am running code to pass an input image (as pytorch tensor) through a convolutional network (<code>torchvision.models.segmentation.deeplabv3_mobilenet_v3_large()</code> with a modified head). On the returned output tensor I perform a bit of post-processing. Surprisingly, the post-processing (applying non-maximum suppression) takes longer than the inference itself:</p>
<pre><code>#run inference
pred = my_deeplap_based_network(input)
#create a max pool operation
R = 9
self.maxpool = torch.nn.MaxPool2d(2*R+1, stride=1, padding=R, ceil_mode=True)
#run non-maximum-suppression. "pred" is a kind of heatmap and I want to suppress all non-max pixels:
nms = self.detected_points_nms(pred)
def detected_points_nms(self, pred, thresh = 0.3):
predmax = self.maxpool(pred)
max_mask = torch.logical_and(pred == predmax,pred > thresh)
detected_points = torch.zeros_like(pred)
detected_points[max_mask] = pred[max_mask]
return detected_points
</code></pre>
<p>The input and output tensor shape (BxCxHxW) is 12x3x512x512. Running this on a GTX 1080Ti takes ~25ms for the inference and ~75ms for the nms.</p>
<p>In my expectation, the nms operations should be much simpler (and thus faster) than the deeplab/mobilenet network. Is this assumption wrong, or am I doing something inefficient in my nms implementation?</p>
|
<python><performance><pytorch>
|
2024-04-06 05:09:46
| 0
| 11,663
|
Philipp
|
78,283,262
| 1,521,645
|
How to mock environment variable that uses equality operator
|
<p>Given the following code, I am unable to properly patch the os.environ "USE_THING".
I believe this is because USE_THING is a constant created before the test runs, so I cannot reassign the value. If I define USE_THING within the do_thing method, I can patch it no problem, but I believe the instantiation with the equality operator at the top of the file is preventing me from doing so.</p>
<p>How should I best define USE_THING once in my codebase, to be used by multiple methods, but also allow me to patch the value in various unittests?</p>
<pre><code># kustomization.yaml
configMapGenerator:
- name: my-service
behavior: merge
literals:
- USE_THING=True
</code></pre>
<pre><code># thing.py
import os
USE_THING = os.environ.get("USE_THING", "False").upper() == "TRUE"
def do_thing():
if not USE_THING:
return
try:
doing_the_thing()
except Exception as e:
logger.error(e)
def do_another_thing():
if not USE_THING:
return
try:
doing_the_other_thing()
except Exception as e:
logger.error(e)
</code></pre>
<pre><code># test.py
class TestThing(TestCase):
@mock.patch.dict(os.environ, {"USE_THING": "True"}, clear=True)
@mock.patch("thing.do_thing")
def test_do_thing():
...
</code></pre>
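<p>The only workaround I have found so far is to patch the already-computed constant itself, since the module-level expression ran at import time (sketched with a stand-in module so it is self-contained; <code>thing</code> here is simulated, not my real module):</p>

```python
import types
from unittest import mock

# Stand-in for thing.py: the flag was computed once, at import time.
thing = types.ModuleType("thing")
thing.USE_THING = False  # as if USE_THING was unset when imported

def do_thing():
    return "did it" if thing.USE_THING else "skipped"
thing.do_thing = do_thing

# Patching os.environ now is too late; patching the constant works.
with mock.patch.object(thing, "USE_THING", True):
    patched = thing.do_thing()
restored = thing.do_thing()
print(patched, restored)
```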
|
<python><unit-testing><python-unittest.mock>
|
2024-04-06 04:38:14
| 1
| 379
|
bort
|
78,283,007
| 1,487,336
|
How to write a function with an unknown number of inputs in Python?
|
<p>I would like to write a function keeping the common index of input dataframes. I could write it as follows for 2 dataframes:</p>
<pre><code>def get_df_common_index(df1, df2):
common_index = df1.index.intersection(df2.index)
return df1.loc[common_index], df2.loc[common_index]
</code></pre>
<p>My question is how to make this a general function that can take any number of inputs and return the dataframes accordingly, so that I could use it as follows:</p>
<pre><code>df1, df2, df3, df4 = get_df_common_index(df1, df2, df3, df4)
</code></pre>
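<p>My current attempt uses <code>*args</code> (a sketch; I am not sure it is the cleanest way):</p>

```python
from functools import reduce
import pandas as pd

def get_df_common_index(*dfs):
    # intersect all indexes, then slice every frame with the result
    common = reduce(lambda a, b: a.intersection(b), (df.index for df in dfs))
    return tuple(df.loc[common] for df in dfs)

df1 = pd.DataFrame({'x': [1, 2, 3]}, index=[0, 1, 2])
df2 = pd.DataFrame({'y': [4, 5]}, index=[1, 2])
df3 = pd.DataFrame({'z': [6, 7, 8]}, index=[1, 2, 3])
a, b, c = get_df_common_index(df1, df2, df3)
```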
|
<python>
|
2024-04-06 02:08:03
| 0
| 809
|
Lei Hao
|
78,282,984
| 401,736
|
Unable to run a certain Python command from CMake
|
<p>I have the following Python code:</p>
<pre><code> cmd = [
"cmd.exe", "/c",
"C:/Program Files (x86)/Microsoft Visual Studio/2019/Professional/VC/Auxiliary/Build/vcvarsx86_amd64.bat",
"^&^&", "set", "PATH", ">", temp_file_path
]
print(cmd)
result = subprocess.run(cmd, capture_output=True)
if result.returncode != 0:
print(f"Error running vcvarsx86_amd64: {result.stderr.decode('utf-8')}")
sys.exit(1)
</code></pre>
<p>If I run this Python script from the console, everything works fine.</p>
<p>But if I run it from inside cmake using the following command:</p>
<pre><code>execute_process(
COMMAND ${PYTHON_CMD} script.py
--input_file ${INPUT_FILE}.dll
--output_file ${OUTPUT_FILE}.dll
RESULT_VARIABLE result
)
</code></pre>
<p>I got following error:</p>
<pre><code> ['cmd.exe', '/c', 'C:/Program Files (x86)/Microsoft Visual Studio/2019/Professional/VC/Auxiliary/Build/vcvarsx86_amd64.bat', '^&^&', 'set', 'PATH', '>', 'C:\\Users\\MYKOLA~1.GER\\AppData\\Local\\Temp\\tmpio00uuoh']
Error running vcvarsx86_amd64: The input line is too long.
The syntax of the command is incorrect.
</code></pre>
|
<python><windows><cmake>
|
2024-04-06 01:55:30
| 1
| 970
|
StNickolay
|
78,282,930
| 8,723,227
|
Why are there double parentheses around my Python virtual environment in Visual Studio Code?
|
<p>After updating Visual Studio Code to version 1.88.0, I opened one of my Python projects and noticed that there are double parentheses in my virtual environment: <code>((env) )</code>.</p>
<p>I'm using the Python extension v2024.4.0.</p>
<p><a href="https://i.sstatic.net/qA7Ap.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/qA7Ap.png" alt="screenshot of the problem" /></a></p>
<p>Previously, in the same and all other projects, I had only one pair of parentheses like <code>(env)</code>. I have checked, but I didn't find any information about it. I have read that <code>(venv1) (venv2)</code> indicates a double virtual environment, but I don't know if this is the case.</p>
<p>I have tried deleting the env (I have the requirements.txt) or closing/reopening VSCode, but the problem persists. Any suggestions on how to fix it?</p>
<p>I have checked the files <code>.bashrc</code>, <code>.zshrc</code>, and <code>.bash_profile</code>, but everything seems fine. The problem also persists when starting a new project from scratch.</p>
|
<python><visual-studio-code><virtualenv>
|
2024-04-06 01:25:37
| 4
| 723
|
Piero
|
78,282,813
| 480,118
|
pandas: sorting multi-level columns
|
<p>I have the following pandas dataframe:</p>
<pre><code>import pandas as pd
data1 = [['01/01/2000', 101, 201, 301],
['01/02/2000', 102, 202, 302],
['01/03/2000', 103, 203, 303],]
df1 = pd.DataFrame(data1, columns=['date', 'field1', 'field2', 'field3'])
df1 = df1.set_index('date')
data2 = [['01/01/2000', 101, 201, 301],
['01/02/2000', 102, 202, 302],
['01/03/2000', 103, 203, 303],]
df2 = pd.DataFrame(data2, columns=['date', 'field2', 'field1', 'field3'])
df2= df2.set_index('date')
df = pd.concat([df1, df2], keys={'group1':df1, 'group2': df2 }, axis=1)
df
</code></pre>
<p>This produces:</p>
<pre><code> group1 group2
field1 field2 field3 field2 field1 field3
date
01/01/2000 101 201 301 101 201 301
01/02/2000 102 202 302 102 202 302
01/03/2000 103 203 303 103 203 303
</code></pre>
<p>I would like to sort the group and fields by a custom order. For level 1 of the column, I tried the following:</p>
<pre><code>group_sort=['group2', 'group1']
m = {k:v for k, v in enumerate(group_sort)}
df = df.sort_index(axis=1, key=lambda x: x.map(m), level=1)
df
</code></pre>
<p>This produces the following, which is still the same as the original dataframe:</p>
<pre><code> group1 group2
field2 field1 field3 field1 field2 field3
date
01/01/2000 101 201 301 101 201 301
01/02/2000 102 202 302 102 202 302
01/03/2000 103 203 303 103 203 303
</code></pre>
<p>For level 0 (the fields) I tried:</p>
<pre><code>field_sort=['field3', 'field2', 'field1']
m = {k:v for k, v in enumerate(field_sort)}
df = df.sort_index(axis=1, key=lambda x: x.map(m), level=0)
df
</code></pre>
<p>But this produces:</p>
<pre><code> group1 group2 group1 group2 group1 group2
field1 field1 field2 field2 field3 field3
date
01/01/2000 201 101 101 201 301 301
01/02/2000 202 102 102 202 302 302
01/03/2000 203 103 103 203 303 303
</code></pre>
<p>So my question is: how would I sort both the groups and the fields?
Is there a cleaner, more concise or more efficient way of doing this?</p>
<p>What I am expecting as an output sorted as follows:</p>
<pre><code> group2 group1
field3 field2 field1 field3 field2 field1
date
01/01/2000 301 201 101 301 201 101
01/02/2000 302 202 102 302 202 102
01/03/2000 303 203 103 303 203 103
</code></pre>
<p>Thanks</p>
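One way to order both levels at once (a sketch, not part of the question's code) is to spell out the full target column order as a MultiIndex and reindex in a single step, avoiding per-level sort keys entirely:

```python
import pandas as pd

data = [['01/01/2000', 101, 201, 301],
        ['01/02/2000', 102, 202, 302]]
df1 = pd.DataFrame(data, columns=['date', 'field1', 'field2', 'field3']).set_index('date')
df2 = pd.DataFrame(data, columns=['date', 'field2', 'field1', 'field3']).set_index('date')
df = pd.concat([df1, df2], keys=['group1', 'group2'], axis=1)

group_sort = ['group2', 'group1']
field_sort = ['field3', 'field2', 'field1']

# Build the desired (group, field) order explicitly and reindex once.
ordered = pd.MultiIndex.from_product([group_sort, field_sort])
out = df.reindex(columns=ordered)
```

`reindex` simply relabels the columns in the given order, so both levels end up in the custom order in one pass.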
|
<python><pandas>
|
2024-04-06 00:12:21
| 1
| 6,184
|
mike01010
|
78,282,781
| 4,061,339
|
Python polars - how to aggregate dataframes
|
<h1>Objective</h1>
<p>To efficiently aggregate the dataframes returned by a function into a single Polars dataframe in Python.</p>
<h1>Environment</h1>
<ul>
<li>Windows 10</li>
<li>Python 3.9.18</li>
<li>Polars 0.20.18</li>
</ul>
<h1>What I did so far</h1>
<p>I want the equivalent of this code (this is a dummy code).</p>
<pre><code>def dummy_dataframe(val):
return pl.DataFrame(
{
"x": [val + 1, 0, 1],
"y": [val + 2, 8, 9],
"z": [val + 3, 5, 6],
}
)
df = pl.DataFrame(
{
"a": ["a", "b", "a", "b"],
}
)
groups = df.groupby('a').agg(pl.apply(exprs=['a'], function=lambda x: dummy_dataframe(x)).alias('result'))
results = []
for row in groups.iter_rows():
group_name = row[0]
group_val = row[1]
df_ret = group_val.with_columns(pl.lit(group_name).alias('group_name'))
results.append(df_ret)
df_results = pl.concat(results)
df_results
</code></pre>
<p>However, I believe <code>iter_rows()</code> is inefficient. <a href="https://docs.pola.rs/py-polars/html/reference/dataframe/api/polars.DataFrame.iter_rows.html" rel="nofollow noreferrer">This polars document</a> says "Row iteration is not optimal as the underlying data is stored in columnar form; where possible." Since row iteration is not very efficient in Pandas either, I imagine it's the same in Polars.</p>
<h1>Research</h1>
<p>Googling "polars dataframe in a cell" and "polars agg dataframe" and checking the first 10 pages of each didn't give me much information.</p>
<h1>Conclusion</h1>
<p>How do I efficiently aggregate the returned dataframes of a function?</p>
|
<python><windows><dataframe><group-by><python-polars>
|
2024-04-05 23:54:29
| 1
| 3,094
|
dixhom
|
78,282,780
| 7,318,488
|
Polars: How to prevent: polars.exceptions.InvalidOperationError: `min` operation not supported for dtype `null`
|
<p>Hi, I have the following data frame with only one row, whose list column can be empty (containing only a single null).</p>
<p>When I try to get the min over the list and the one-row dataframe contains only such an empty list, I run into the following error:</p>
<pre><code>polars.exceptions.InvalidOperationError: `min` operation not supported for dtype `null`
</code></pre>
<h2>MRE</h2>
<pre><code>import polars as pl
data = pl.DataFrame({"list_col": [[None]]}, schema={"list_col": pl.List})
print(data)
col = "list_col"
data.with_columns(pl.col(col).list.min().alias(col + "_min"))
shape: (1, 1)
ββββββββββββββ
β list_col β
β --- β
β list[null] β
ββββββββββββββ‘
β [null] β
ββββββββββββββ
</code></pre>
<h2>What I've tried</h2>
<p>With a conditional <code>pl.when()</code></p>
<pre><code>data.with_columns(
pl.when(pl.col(col).is_not_null()).then(pl.col(col).list.min()).alias(col + "_min")).collect()
</code></pre>
<p>Additionally before drop nulls from the list</p>
<pre><code>data.with_columns(
pl.when(pl.col(col).list.drop_nulls().is_not_null()).then(pl.col(col).list.min()).alias(col + "_min")).collect()
</code></pre>
<h2>Solution</h2>
<p>Thanks to the help of jqurious I was able to solve it. It is pretty simple: you just need to ensure that the dtype of the inner list is properly set, e.g., rather than <code>pl.List(pl.Null)</code>, set it to the appropriate dtype such as <code>pl.List(pl.Float64)</code>.</p>
<pre><code># Solution convert the dtype
data = data.with_columns(pl.col(col).cast(pl.List(pl.Float64)))
data
# Now works as expected
data.with_columns(pl.col(col).list.min().alias(col + "_min"))
</code></pre>
|
<python><python-polars><data-wrangling><dtype>
|
2024-04-05 23:54:18
| 0
| 1,840
|
BjΓΆrn
|
78,282,533
| 2,893,053
|
Pass 'self' into a function defined via python exec in a class method
|
<p>I have the following piece of code:</p>
<pre class="lang-py prettyprint-override"><code>class A:
def __init__(self):
self.name = "foo"
def hello(self):
exec("def func():\n print(self.name)")
import pdb; pdb.set_trace()
aa = A()
aa.hello()
</code></pre>
<p>Running this code triggers the pdb shell. Then I observe that</p>
<pre><code>-> import pdb; pdb.set_trace()
(Pdb) func
<function func at 0x750d187a6950>
(Pdb) func()
*** NameError: name 'self' is not defined
</code></pre>
<p>So, <code>func</code> is a known function. However, <code>self</code> is not defined.</p>
<p>How to fix this?</p>
<p><strong>Update:</strong> To clarify, my intent is to define a function dynamically in a class method, such that the function's execution depends on properties of an instance of that class.</p>
<p><strong>Update 2:</strong> To clarify, I wanted the debugger to act as a shell -- that is specific to my use case. (That's why I included pdb line in my code snippet);</p>
<p>The snippet here is just a simplified snippet of my use case, which involves dynamically creating a bunch of functions based on some user input. And the user can interact with those functions through a pdb session</p>
<p><strong>Update 3</strong>: Thanks @Ammar. I have upvoted your answer. What about the case where the function execution depends on executing a method (not just a property)? As an example,</p>
<pre class="lang-py prettyprint-override"><code>class A:
def foo(self):
print("running foo")
def hello(self):
exec("def func():\n self.foo()")
import pdb; pdb.set_trace()
aa = A()
aa.hello()
</code></pre>
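A sketch of one workaround (the method name below is illustrative): give <code>exec</code> an explicit namespace containing <code>self</code>, then pull the function back out of it. Because that dict becomes the new function's globals, both <code>self.name</code> and <code>self.foo()</code> resolve at call time:

```python
class A:
    def __init__(self):
        self.name = "foo"

    def make_func(self):
        # The dict passed as exec's globals also receives the new function,
        # and lookups of `self` inside func hit this same dict.
        ns = {"self": self}
        exec("def func():\n    return self.name", ns)
        return ns["func"]

f = A().make_func()
print(f())
```

The same pattern works when the body calls a method, since `self` is found in the function's globals regardless of what is done with it.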
|
<python><exec>
|
2024-04-05 22:10:09
| 2
| 1,528
|
zkytony
|
78,282,512
| 2,893,053
|
Access function defined via python exec in class method
|
<p>Consider the following code snippet:</p>
<pre class="lang-py prettyprint-override"><code>class A:
def hello(self):
exec("def func():\n print('hello world')")
func()
import pdb; pdb.set_trace()
</code></pre>
<p>In the pdb shell, I create an instance of <code>A</code> and call <code>hello</code>:</p>
<pre><code>(Pdb) aa = A()
(Pdb) aa.hello()
*** NameError: name 'func' is not defined
</code></pre>
<p>As shown, the code failed due to 'func' not defined.</p>
<p>However, when <code>exec</code> is outside of class method definition, it works.</p>
<pre><code>(Pdb) exec("def func():\n print('hello world')")
(Pdb) func()
hello world
</code></pre>
<p>Why? How to make this work for the case where <code>exec</code> runs in a class method definition?</p>
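The compiler fixes a method's local variables at compile time, so a name that <code>exec</code> creates in the local scope at runtime never becomes visible. A minimal workaround (a sketch, not the only option) is to hand <code>exec</code> a dict and read the function back out of it:

```python
class A:
    def hello(self):
        ns = {}
        # func lands in ns instead of the (already-fixed) method locals
        exec("def func():\n    return 'hello world'", ns)
        return ns["func"]()

print(A().hello())
```

At module level (and in the pdb shell) this problem doesn't arise because the global namespace is a real dict that <code>exec</code> writes into directly.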
|
<python><exec>
|
2024-04-05 22:04:13
| 0
| 1,528
|
zkytony
|
78,282,483
| 22,407,544
|
No such file or directory: 'ffmpeg' in Django/Docker
|
<p>I run my Django code in Docker. I run transcription software that requires ffmpeg to function. However I've been getting the error <code>[Errno 2] No such file or directory: 'ffmpeg'</code> whenever I try to run my code.</p>
<p>Here's my views.py:</p>
<pre><code>def initiate_transcription(request, session_id):
...
with open(file_path, 'rb') as f:
path_string = f.name
transcript = transcribe_file(path_string,audio_language, output_file_type)
...
</code></pre>
<p>Here's my Dockerfile:</p>
<pre><code># Pull base image
FROM python:3.11.4-slim-bullseye
# Set environment variables
ENV PIP_NO_CACHE_DIR off
ENV PIP_DISABLE_PIP_VERSION_CHECK 1
ENV PYTHONUNBUFFERED 1
ENV PYTHONDONTWRITEBYTECODE 1
ENV COLUMNS 80
#install Debian and other dependencies that are required to run python apps(eg. git, python-magic).
RUN apt-get update \
&& apt-get install -y --force-yes \
nano python3-pip gettext chrpath libssl-dev libxft-dev \
libfreetype6 libfreetype6-dev libfontconfig1 libfontconfig1-dev\
&& rm -rf /var/lib/apt/lists/*
RUN apt-get update && apt-get install -y git
RUN apt-get update && apt-get install -y libmagic-dev
RUN apt-get -y update && apt-get -y upgrade && apt-get install -y --no-install-recommends ffmpeg
# Set working directory for Docker image
WORKDIR /code/
RUN apt-get update \
&& apt-get -y install libpq-dev gcc
# Install dependencies
COPY requirements.txt .
RUN pip install -r requirements.txt
# Copy project
COPY . .
</code></pre>
<p>I've searched online for possible solutions but have not been able to find anything similar to my problem.</p>
|
<python><django><docker><ffmpeg>
|
2024-04-05 21:52:34
| 1
| 359
|
tthheemmaannii
|
78,282,287
| 305,466
|
Why am I getting incompatible pointer to integer conversion initializing 'Py_ssize_t' on an M1 mac?
|
<p>Recently (April 2024), I started seeing the following error in a couple of builds on my M1 mac:</p>
<pre><code>"error: incompatible pointer to integer conversion initializing 'Py_ssize_t' (aka 'long') with an expression of type 'void *' [-Wint-conversion]"
</code></pre>
<p>How do I fix it?</p>
|
<python><apple-m1>
|
2024-04-05 20:48:39
| 1
| 1,417
|
sshevlyagin
|
78,282,231
| 17,800,932
|
Directly import modules in a Poetry project from a non-Poetry managed module
|
<p>I have an existing codebase that I am working in, and for any new code, I am using <a href="https://python-poetry.org/" rel="nofollow noreferrer">Poetry</a> to manage the modules, dependencies, etc. For existing code, I am slowly migrating over code, but this is obviously a bit tedious and time consuming. There is some code that still exists outside of the Poetry project, and prior to getting to the point where I can migrate it over, I would simply like to call modules within Poetry from this external file.</p>
<p>Here's an example of the file structure:</p>
<pre><code>ββββmodules-project
β β poetry.lock
β β pyproject.toml
β β
β ββββmodules
β some_client.py
β __init__.py
β
ββββscripts
script.py
</code></pre>
<p>Inside <code>script.py</code>, I want to import <code>some_client.py</code>. What is the easiest way to do this? This code is all in the same repository, and given that it will eventually be rewritten or migrated into the Poetry project, I am wanting to avoid going through all the trouble of creating another Poetry project, exporting the <code>modules</code> Poetry project and then installing it for <code>script.py</code>, etc. I don't care how dirty the solution inside <code>script.py</code> is as long as it can see the <code>some_client.py</code> module and its dependencies.</p>
<pre class="lang-py prettyprint-override"><code># inside `script.py`
import modules-project.modules.some_client.py # <-- what should this actually be?
</code></pre>
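A quick-and-dirty approach (a hedged sketch; the module contents below are hypothetical) is to put the Poetry project's root directory on <code>sys.path</code> before importing. The snippet recreates the repository layout in a temp directory so it is self-contained:

```python
import sys
import tempfile
from pathlib import Path

# Recreate the layout: modules-project/modules/some_client.py
root = Path(tempfile.mkdtemp())
pkg = root / "modules-project" / "modules"
pkg.mkdir(parents=True)
(pkg / "__init__.py").write_text("")
(pkg / "some_client.py").write_text("VALUE = 42\n")  # hypothetical contents

# The actual trick for script.py: add the *project* directory (the parent
# of the `modules` package), not the repo root, then import normally.
sys.path.insert(0, str(root / "modules-project"))
from modules import some_client

print(some_client.VALUE)
```

Note that `some_client`'s Poetry-managed dependencies will only resolve if `script.py` runs inside the same virtual environment, e.g. via `poetry run python scripts/script.py`.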
|
<python><python-import><python-module><python-poetry>
|
2024-04-05 20:32:36
| 1
| 908
|
bmitc
|
78,282,162
| 1,568,658
|
In pycharm, what is the test_xxx.http file, for a fastapi project?
|
<p>After creating a <code>fastapi</code> project, via <code>pycharm</code>, I got such a file:</p>
<p><strong>test_main.http:</strong></p>
<pre><code># Test your FastAPI endpoints
GET http://127.0.0.1:8000/
Accept: application/json
###
GET http://127.0.0.1:8000/hello/User
Accept: application/json
</code></pre>
<p>and can run it in <code>pycharm</code> by clicking the run button that appears in the file.</p>
<p>But what is this <code>test_xxx.http</code> file?</p>
<p>I've checked fastapi doc: <a href="https://fastapi.tiangolo.com/tutorial/testing/" rel="nofollow noreferrer">https://fastapi.tiangolo.com/tutorial/testing/</a></p>
<p>I also searched on Google, and didn't find anything mentioning such a <code>.http</code> file.</p>
|
<python><pycharm><fastapi>
|
2024-04-05 20:14:28
| 1
| 25,717
|
Eric
|
78,281,985
| 4,766
|
How do I install gst-python on macOS to work with the recommended GStreamer installers?
|
<p>I'm trying to get the following to work on macOS 14.4.1 on an M1 Pro:</p>
<pre><code>$ python3
Python 3.12.2 (main, Feb 6 2024, 20:19:44) [Clang 15.0.0 (clang-1500.1.0.2.5)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> import sys
>>> import gi
>>> gi.require_version('GLib', '2.0')
>>> gi.require_version('GObject', '2.0')
>>> gi.require_version('Gst', '1.0')
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/opt/homebrew/lib/python3.12/site-packages/gi/__init__.py", line 126, in require_version
raise ValueError('Namespace %s not available' % namespace)
ValueError: Namespace Gst not available
</code></pre>
<p>Note:</p>
<pre><code>$ which python3
/opt/homebrew/bin/python3
</code></pre>
<p>I found this question: <a href="https://stackoverflow.com/q/78029050/4766">Installing gst-python in macOS</a>. It appears I need to install gst-python. I've already installed the 1.24 runtime and development installers of <a href="https://gstreamer.freedesktop.org/download/#macos" rel="nofollow noreferrer">GStreamer</a>--I opted not to use Homebrew after reading the notes there--but apparently gst-python is not installed with it.</p>
<p>In the <a href="https://stackoverflow.com/q/78029050/4766">StackOverflow question</a> <a href="https://stackoverflow.com/users/2778860/chronosynclastic">@chronosynclastic</a> describes cloning the <a href="https://gitlab.freedesktop.org/gstreamer/gstreamer" rel="nofollow noreferrer">gstreamer GitLab repository</a> and then building the <a href="https://gitlab.freedesktop.org/gstreamer/gstreamer/-/tree/main/subprojects/gst-python" rel="nofollow noreferrer">gst-python subproject</a>, so I tried the same:</p>
<pre><code>$ meson setup builddir -Dpygi-overrides-dir=/opt/homebrew/lib/python3.12/site-packages/gi/overrides && ninja -C builddir
The Meson build system
Version: 1.4.0
Source dir: /Users/dspitzer/dev/from_gitlab/gstreamer/subprojects/gst-python
Build dir: /Users/dspitzer/dev/from_gitlab/gstreamer/subprojects/gst-python/builddir
Build type: native build
Project name: gst-python
Project version: 1.24.1.1
C compiler for the host machine: cc (clang 15.0.0 "Apple clang version 15.0.0 (clang-1500.3.9.4)")
C linker for the host machine: cc ld64 1053.12
Host machine cpu family: aarch64
Host machine cpu: aarch64
Found pkg-config: YES (/Library/Frameworks/GStreamer.framework/Versions/1.0/bin/pkg-config) 0.29.2
Run-time dependency gstreamer-1.0 found: YES 1.24.1
Run-time dependency gstreamer-base-1.0 found: YES 1.24.1
Run-time dependency gmodule-no-export-2.0 found: YES 2.74.4
Library dl found: YES
Found CMake: /opt/homebrew/bin/cmake (3.29.1)
Run-time dependency pygobject-3.0 found: NO (tried pkgconfig, framework and cmake)
Looking for a fallback subproject for the dependency pygobject-3.0
meson.build:26:16: ERROR: Neither a subproject directory nor a pygobject.wrap file was found.
A full log can be found at /Users/dspitzer/dev/from_gitlab/gstreamer/subprojects/gst-python/builddir/meson-logs/meson-log.txt
</code></pre>
<p>But I can't figure out how to fix "Run-time dependency pygobject-3.0 found: NO".</p>
<p><strong>Note:</strong> <a href="https://stackoverflow.com/users/499581/lll">l'L'l</a>'s comment pointed out that the <code>pip3</code> command below is using a different version of Python. I fixed that--so you can skip to "Update" below.</p>
<p>I found <a href="https://stackoverflow.com/a/68481434/4766">this StackOverflow answer (and comment)</a> and successfully ran <code>brew install pygobject3 gtk+3</code> but:</p>
<pre><code>$ pip3 install pygobject
Collecting pygobject
Using cached pygobject-3.48.1.tar.gz (714 kB)
Installing build dependencies ... done
Getting requirements to build wheel ... done
Preparing metadata (pyproject.toml) ... error
error: subprocess-exited-with-error
Γ Preparing metadata (pyproject.toml) did not run successfully.
β exit code: 1
β°β> [23 lines of output]
+ meson setup /private/var/folders/sb/329vdstd77q73c_7j_smp9r40000gp/T/pip-install-thqb1k2v/pygobject_ad81b1b46af0427a953b312d45c6e129 /private/var/folders/sb/329vdstd77q73c_7j_smp9r40000gp/T/pip-install-thqb1k2v/pygobject_ad81b1b46af0427a953b312d45c6e129/.mesonpy-aial33lr -Dbuildtype=release -Db_ndebug=if-release -Db_vscrt=md -Dtests=false -Dwheel=true --wrap-mode=nofallback --native-file=/private/var/folders/sb/329vdstd77q73c_7j_smp9r40000gp/T/pip-install-thqb1k2v/pygobject_ad81b1b46af0427a953b312d45c6e129/.mesonpy-aial33lr/meson-python-native-file.ini
The Meson build system
Version: 1.4.0
Source dir: /private/var/folders/sb/329vdstd77q73c_7j_smp9r40000gp/T/pip-install-thqb1k2v/pygobject_ad81b1b46af0427a953b312d45c6e129
Build dir: /private/var/folders/sb/329vdstd77q73c_7j_smp9r40000gp/T/pip-install-thqb1k2v/pygobject_ad81b1b46af0427a953b312d45c6e129/.mesonpy-aial33lr
Build type: native build
Project name: pygobject
Project version: 3.48.1
C compiler for the host machine: cc (clang 15.0.0 "Apple clang version 15.0.0 (clang-1500.3.9.4)")
C linker for the host machine: cc ld64 1053.12
Host machine cpu family: aarch64
Host machine cpu: aarch64
Program python3 found: YES (/Users/dspitzer/.new_local/share/pyenv/versions/3.11.7/bin/python3.11)
Found pkg-config: YES (/Library/Frameworks/GStreamer.framework/Commands/pkg-config) 0.29.2
Run-time dependency python found: YES 3.11
Found CMake: /opt/homebrew/bin/cmake (3.29.1)
Run-time dependency gobject-introspection-1.0 found: NO (tried pkgconfig, framework and cmake)
Not looking for a fallback subproject for the dependency gobject-introspection-1.0 because:
Use of fallback dependencies is disabled.
../meson.build:29:9: ERROR: Dependency 'gobject-introspection-1.0' is required but not found.
A full log can be found at /private/var/folders/sb/329vdstd77q73c_7j_smp9r40000gp/T/pip-install-thqb1k2v/pygobject_ad81b1b46af0427a953b312d45c6e129/.mesonpy-aial33lr/meson-logs/meson-log.txt
[end of output]
note: This error originates from a subprocess, and is likely not a problem with pip.
error: metadata-generation-failed
Γ Encountered error while generating package metadata.
β°β> See above for output.
note: This is an issue with the package mentioned above, not pip.
hint: See above for details.
</code></pre>
<p><strong>Update:</strong> I had added <code>$HOME/.new_local/bin</code> (created by the GStreamer install) to my PATH, in front of <code>/opt/homebrew/bin</code> so <code>pip3</code> was running the one in the former which resulted in a different version of Python.</p>
<p>So I fixed the path and reran:</p>
<pre><code>$ pip3 install pygobject
Collecting pygobject
Downloading pygobject-3.48.2.tar.gz (715 kB)
ββββββββββββββββββββββββββββββββββββββββ 715.2/715.2 kB 13.3 MB/s eta 0:00:00
Installing build dependencies ... done
Getting requirements to build wheel ... done
Preparing metadata (pyproject.toml) ... error
error: subprocess-exited-with-error
Γ Preparing metadata (pyproject.toml) did not run successfully.
β exit code: 1
β°β> [23 lines of output]
+ meson setup /private/var/folders/sb/329vdstd77q73c_7j_smp9r40000gp/T/pip-install-i7tjs1ue/pygobject_d272551047184d069c19b9453f89f97d /private/var/folders/sb/329vdstd77q73c_7j_smp9r40000gp/T/pip-install-i7tjs1ue/pygobject_d272551047184d069c19b9453f89f97d/.mesonpy-rhycpxqs -Dbuildtype=release -Db_ndebug=if-release -Db_vscrt=md -Dtests=false -Dwheel=true --wrap-mode=nofallback --native-file=/private/var/folders/sb/329vdstd77q73c_7j_smp9r40000gp/T/pip-install-i7tjs1ue/pygobject_d272551047184d069c19b9453f89f97d/.mesonpy-rhycpxqs/meson-python-native-file.ini
The Meson build system
Version: 1.4.0
Source dir: /private/var/folders/sb/329vdstd77q73c_7j_smp9r40000gp/T/pip-install-i7tjs1ue/pygobject_d272551047184d069c19b9453f89f97d
Build dir: /private/var/folders/sb/329vdstd77q73c_7j_smp9r40000gp/T/pip-install-i7tjs1ue/pygobject_d272551047184d069c19b9453f89f97d/.mesonpy-rhycpxqs
Build type: native build
Project name: pygobject
Project version: 3.48.2
C compiler for the host machine: cc (clang 15.0.0 "Apple clang version 15.0.0 (clang-1500.3.9.4)")
C linker for the host machine: cc ld64 1053.12
Host machine cpu family: aarch64
Host machine cpu: aarch64
Program python3 found: YES (/Users/dspitzer/dev/python/dashboard-fake-frontend/env/bin/python3.12)
Found pkg-config: YES (/Library/Frameworks/GStreamer.framework/Versions/1.0/bin/pkg-config) 0.29.2
Run-time dependency python found: YES 3.12
Found CMake: /opt/homebrew/bin/cmake (3.29.1)
Run-time dependency gobject-introspection-1.0 found: NO (tried pkgconfig, framework and cmake)
Not looking for a fallback subproject for the dependency gobject-introspection-1.0 because:
Use of fallback dependencies is disabled.
../meson.build:29:9: ERROR: Dependency 'gobject-introspection-1.0' is required but not found.
A full log can be found at /private/var/folders/sb/329vdstd77q73c_7j_smp9r40000gp/T/pip-install-i7tjs1ue/pygobject_d272551047184d069c19b9453f89f97d/.mesonpy-rhycpxqs/meson-logs/meson-log.txt
[end of output]
note: This error originates from a subprocess, and is likely not a problem with pip.
error: metadata-generation-failed
Γ Encountered error while generating package metadata.
β°β> See above for output.
note: This is an issue with the package mentioned above, not pip.
hint: See above for details.
</code></pre>
<p>So the Python version wasn't the issue.</p>
<p>Now I'm stuck here.</p>
|
<python><macos><gstreamer><pygobject>
|
2024-04-05 19:24:43
| 2
| 150,682
|
Daryl Spitzer
|
78,281,846
| 6,674,599
|
How to call `time` from python with its stderr returned only
|
<p>How can I capture the output of <code>/usr/bin/time</code> only, if the command itself outputs stderr?</p>
<pre class="lang-py prettyprint-override"><code>import subprocess
print(subprocess.run('/usr/bin/time -p foo', capture_output=True, shell=True).stderr.decode())
</code></pre>
<p>prints</p>
<pre><code>foo_stderr
real 0.02
user 0.01
sys 0.00
</code></pre>
<p>I am aware (and not looking for answers) of the following options:</p>
<ul>
<li>writing output to a file (e.g. GNU <code>time --output file</code>)</li>
<li>processing the captured output to filter out the relevant data</li>
</ul>
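For completeness, one fd-juggling sketch (an assumption, not an established recipe; it assumes a POSIX <code>sh</code> and a command free of single quotes): duplicate the real stderr onto a spare descriptor, route the command's own stderr there inside the shell, and let the capture see only <code>time</code>'s report:

```python
import os
import subprocess

def timed_run(cmd: str) -> str:
    """Return only /usr/bin/time's report for cmd."""
    err_copy = os.dup(2)  # the real stderr, kept alive across the capture
    try:
        # Inside sh, point the command's fd 2 back at the duplicated stderr
        # *before* running it; time's own report still goes to the captured
        # pipe (fd 2 of the time process).
        wrapped = f"/usr/bin/time -p sh -c '2>&{err_copy} {cmd}'"
        res = subprocess.run(wrapped, shell=True,
                             stderr=subprocess.PIPE, pass_fds=(err_copy,))
        return res.stderr.decode()
    finally:
        os.close(err_copy)
```

`pass_fds` keeps the duplicated descriptor open (and at the same number) in the child, so the shell redirection can reference it.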
|
<python><time><subprocess><command><stderr>
|
2024-04-05 18:45:59
| 1
| 2,035
|
Semnodime
|
78,281,829
| 1,981,797
|
Polars: Select element of list using column value as index
|
<p>Code to generate the toy dataset:</p>
<pre><code>import itertools
import numpy as np
import polars as pl
first = 30
second = 50
third = 40
data = {
"a": np.concatenate(
(np.repeat(1, first), np.repeat(2, second), np.repeat(3, third))
),
"b": np.concatenate(
(
sorted(np.random.randint(1, first, size=first)),
sorted(np.random.randint(1, second, size=second)),
sorted(np.random.randint(1, third, size=third)),
)
),
}
d = [
np.tile(np.random.randint(1, first * 2, size=first), (first, 1)).tolist(),
np.tile(np.random.randint(1, second * 2, size=second), (second, 1)).tolist(),
np.tile(np.random.randint(1, third * 2, size=third), (third, 1)).tolist(),
]
data["d"] = list(itertools.chain.from_iterable(d))
df = pl.DataFrame(data)
pl_df = df.with_columns([pl.col("a").cum_count().over("a", "b").alias("c")])
pl_df.select(['a', 'b', 'c', "d"]).head()
</code></pre>
<p><a href="https://i.sstatic.net/ROdg2.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/ROdg2.png" alt="enter image description here" /></a></p>
<p>I'm using polars "0.20.3" and, for each row, I'd like to take the element of the list in col D at the index given by col B, meaning that:</p>
<ul>
<li>for first row, I would take number 23 from col D</li>
<li>for second row, I would take number 17 from col D</li>
<li>...</li>
</ul>
<p>How can I achieve that without iterating over the rows of the dataframe?</p>
<p>Thanks in advance</p>
|
<python><dataframe><python-polars>
|
2024-04-05 18:40:38
| 1
| 641
|
Maxwell's Daemon
|
78,281,803
| 825,227
|
Elementwise subtract one dataframe from another with different number of columns
|
<p>I have a dataframe:</p>
<p><strong>df</strong></p>
<pre class="lang-none prettyprint-override"><code> East Midwest Mountain
2015-01-02 737 849 152
2015-01-09 685 774 142
2015-01-16 631 711 136
2015-01-23 595 677 134
</code></pre>
<p>That I'd like to subtract a second dataframe from to create a resulting dataframe of differences.</p>
<p><strong>dy</strong></p>
<pre class="lang-none prettyprint-override"><code> East_5yavg Midwest_5yavg Mountain_5yavg East_max Midwest_max Mountain_max
2015-01-02 755.0 867.0 187.0 838.0 946.0 200.0
2015-01-09 706.0 807.0 178.0 807.0 909.0 192.0
2015-01-16 662.0 754.0 170.0 764.0 865.0 181.0
2015-01-23 616.0 696.0 163.0 708.0 808.0 174.0
</code></pre>
<p><strong>df - dy</strong></p>
<pre class="lang-none prettyprint-override"><code> East_x Midwest_x Mountain_x East_y Midwest_y Mountain_y
2015-01-02 -18.0 -18.0 -35.0 -101.0 -97.0 -48.0
2015-01-09 -21.0 -33.0 -36.0 -122.0 -135.0 -50.0
2015-01-16 -31.0 -43.0 -34.0 -133.0 -154.0 -45.0
2015-01-23 -21.0 -19.0 -29.0 -113.0 -131.0 -40.0
</code></pre>
<p>This obviously doesn't return values (all entries filled with NaN):</p>
<pre><code>df - dy
</code></pre>
<p>This addresses that but can only be used for dataframes with the same number of columns:</p>
<pre><code>df - dy.iloc[:,:3].values
</code></pre>
<p>And then the natural extension (<code>df - dy.values</code>) complains about differing dimensions with the error below. How can I do this?</p>
<pre><code>ValueError: Unable to coerce to DataFrame, shape must be (473, 3): given (473, 6)
</code></pre>
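One way to line the frames up (a sketch on a cut-down version of the data) is to map each <code>dy</code> column back to its <code>df</code> base name by stripping the suffix, subtract through NumPy, and rebuild the result with <code>dy</code>'s column labels:

```python
import pandas as pd

df = pd.DataFrame({"East": [737, 685], "Midwest": [849, 774]},
                  index=["2015-01-02", "2015-01-09"])
dy = pd.DataFrame({"East_5yavg": [755.0, 706.0], "Midwest_5yavg": [867.0, 807.0],
                   "East_max": [838.0, 807.0], "Midwest_max": [946.0, 909.0]},
                  index=df.index)

# Strip the suffix to recover the base region for every dy column, select
# the matching df columns (duplicates are allowed), subtract positionally.
base = dy.columns.str.replace(r"_[^_]+$", "", regex=True)
diff = pd.DataFrame(df[base].to_numpy() - dy.to_numpy(),
                    index=df.index, columns=dy.columns)
```

Converting one side with `.to_numpy()` sidesteps pandas's label alignment, which is what fills everything with NaN in the plain `df - dy`.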
|
<python><pandas>
|
2024-04-05 18:34:28
| 1
| 1,702
|
Chris
|
78,281,781
| 1,209,675
|
Python C extensions - Can't put float into array
|
<p>I'm learning C and trying to write an extension to convert COO sparse data to a dense array. (I know SciPy can do it.) But I'm stuck. I can declare the integers for the 2D pointer into the array, but I can't get the float value that I want to put into the array. I don't understand what the difference is between my PyLong_AsLong declarations that work, and my PyFloat_AsDouble that doesn't.</p>
<pre><code>motion_tools.c:20:68: error: incompatible type for argument 1 of 'PyFloat_AsDouble'
20 | double v = (double) PyFloat_AsDouble(val_float_list[i]);
| ~~~~~~~~~~~~~~^~~
| |
| PyObject {aka struct _object}
In file included from /usr/include/python3.10/Python.h:87,
from motion_tools.c:2:
/usr/include/python3.10/floatobject.h:49:37: note: expected 'PyObject *' {aka 'struct _object *'} but argument is of type 'PyObject' {aka 'struct _object'}
49 | PyAPI_FUNC(double) PyFloat_AsDouble(PyObject *);
| ^~~~~~~~~~
</code></pre>
<pre><code># tool_test.py
import motion_tools
motion_tools.sparse_to_dense([0,1],[0,1],[1.0,1.0],(2,2))
</code></pre>
<pre><code>// motion_tools.c
#define PY_SSIZE_T_CLEAN
#include <Python.h>
#include <stdio.h>
PyObject *sparse2dense(PyObject* self, PyObject* args) {
PyObject* x_int_list; // incoming python list
PyObject* y_int_list; // incoming python list
PyObject* val_float_list; // incoming python list
PyObject* shape_int_tuple; // incoming python list
if (!PyArg_ParseTuple(args, "OOOO", &x_int_list, &y_int_list, &val_float_list, &shape_int_tuple))
return NULL;
PyObject *out_array = PyArray_Zeros(2, shape_int_tuple);
Py_ssize_t element_count = PyList_Size(x_int_list);
int i;
for (i=0; i<element_count; ++i){
int x = (int) PyLong_AsLong(x_int_list[i]); // This works
int y = (int) PyLong_AsLong(y_int_list[i]); // This works
double v = (double) PyFloat_AsDouble(val_float_list[i]); // This doesn't work.
//out_array[y][x] = v;
PyArray_SETITEM(out_array, PyArray_GETPTR2(out_array, y, x), v);
}
return PyArray_Return(out_array);
}
static PyModuleDef methods[] = {
{ "sparse_to_dense", sparse2dense, METH_VARARGS, "Converts from COO to dense array"},
{ NULL, NULL, 0, NULL }
};
static struct PyModuleDef sparse_to_dense = {
PyModuleDef_HEAD_INIT,
"motion_tools",
"Sparse to Dense module",
-1,
methods
};
PyMODINIT_FUNC PyInit_motion_tools(void) {
return PyModule_Create(&sparse_to_dense);
}
</code></pre>
|
<python><c><numpy><cpython>
|
2024-04-05 18:29:54
| 0
| 335
|
user1209675
|
78,281,668
| 16,770,405
|
Nonlocal variable not updated when return value from recursive function is not bound to a variable
|
<p>I came across a pattern similar to this in a leetcode problem. Basically, both functions sum a list recursively using a nonlocal value, but the version that doesn't bind the return value to a variable only seems to update <code>res</code> once.</p>
<pre class="lang-py prettyprint-override"><code>def assigned_sum(l: list[int]):
res = 0
def recurse(i: int):
nonlocal res
if i >= len(l):
return 0
assigned = recurse(i+1)
res += assigned
return l[i]
recurse(-1)
return res
def rvalue_sum(l: list[int]):
res = 0
def recurse(i: int):
nonlocal res
if i >= len(l):
return 0
res += recurse(i+1)
return l[i]
recurse(-1)
return res
test = [1,2,3,4,5]
f"expected={sum(test)}, lvalue={assigned_sum(test)}, rvalue={rvalue_sum(test)}"
</code></pre>
<p>When thrown into colab, I get <code>'expected=15, lvalue=15, rvalue=1'</code></p>
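The difference is when <code>res</code> is read: the augmented assignment <code>res += recurse(i+1)</code> loads the current value of <code>res</code> <em>before</em> the call, so every update the recursive call makes through <code>nonlocal</code> is overwritten when the sum is stored back. A minimal reproduction:

```python
def demo():
    res = 0

    def bump():
        nonlocal res
        res += 10
        return 1

    # loads res (0) first, then runs bump (res becomes 10),
    # then stores 0 + 1 -- the inner update to res is lost
    res += bump()
    return res

print(demo())  # 1, not 11
```

In `assigned_sum`, binding the return value first means each `res += assigned` reads `res` after the recursion has finished updating it, so nothing is clobbered.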
|
<python><recursion><python-nonlocal>
|
2024-04-05 18:04:53
| 1
| 323
|
Kevin Jiang
|
78,281,530
| 22,407,544
|
Docker error: failed to solve: process "/bin/sh -c pip install -r requirements.txt" did not complete successfully: exit code: 2
|
<p>I've recently cloned my repository to a new device. My repository runs in a Docker environment. However, despite having one successful Docker image build on this device, whenever I try to build a new Docker image (by running <code>docker build .</code>) I encounter this error when installing from requirements.txt:</p>
<pre><code>Dockerfile:30
--------------------
28 | # Install dependencies
29 | COPY requirements.txt .
30 | >>> RUN pip install -r requirements.txt
31 |
32 | # Copy project
--------------------
ERROR: failed to solve: process "/bin/sh -c pip install -r requirements.txt" did not complete successfully: exit code: 2
</code></pre>
<p>I haven't been able to find anything related to error code 2 online. I've restarted Docker and ran the command at least 10 times. I've also increased pip timeout to 2 mins using <code>set PIP_TIMEOUT=120</code> but it still results in the same error.</p>
<p>Here is the full traceback (along with the last package being installed):</p>
<pre><code>Collecting nvidia-cudnn-cu11==8.5.0.96 (from torch==2.0.1->-r requirements.txt (line 45))
179.9 Downloading nvidia_cudnn_cu11-8.5.0.96-2-py3-none-manylinux1_x86_64.whl (557.1 MB)
239.0 ββββββββββββββββββΈ 250.1/557.1 MB 1.1 MB/s eta 0:04:38
239.0 ERROR: Exception:
239.0 Traceback (most recent call last):
239.0 File "/usr/local/lib/python3.11/site-packages/pip/_vendor/urllib3/response.py", line 438, in _error_catcher
239.0 yield
239.0 File "/usr/local/lib/python3.11/site-packages/pip/_vendor/urllib3/response.py", line 561, in read
239.0 data = self._fp_read(amt) if not fp_closed else b""
239.0 ^^^^^^^^^^^^^^^^^^
239.0 File "/usr/local/lib/python3.11/site-packages/pip/_vendor/urllib3/response.py", line 527, in _fp_read
239.0 return self._fp.read(amt) if amt is not None else self._fp.read()
239.0 ^^^^^^^^^^^^^^^^^^
239.0 File "/usr/local/lib/python3.11/http/client.py", line 466, in read
239.0 s = self.fp.read(amt)
239.0 ^^^^^^^^^^^^^^^^^
239.0 File "/usr/local/lib/python3.11/socket.py", line 706, in readinto
239.0 return self._sock.recv_into(b)
239.0 ^^^^^^^^^^^^^^^^^^^^^^^
239.0 File "/usr/local/lib/python3.11/ssl.py", line 1278, in recv_into
239.0 return self.read(nbytes, buffer)
239.0 ^^^^^^^^^^^^^^^^^^^^^^^^^
239.0 File "/usr/local/lib/python3.11/ssl.py", line 1134, in read
239.0 return self._sslobj.read(len, buffer)
239.0 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
239.0 TimeoutError: The read operation timed out
239.0
239.0 During handling of the above exception, another exception occurred:
239.0
239.0 Traceback (most recent call last):
239.0 File "/usr/local/lib/python3.11/site-packages/pip/_internal/cli/base_command.py", line 169, in exc_logging_wrapper
239.0 status = run_func(*args)
239.0 ^^^^^^^^^^^^^^^
239.0 File "/usr/local/lib/python3.11/site-packages/pip/_internal/cli/req_command.py", line 248, in wrapper
239.0 return func(self, options, args)
239.0 ^^^^^^^^^^^^^^^^^^^^^^^^^
239.0 File "/usr/local/lib/python3.11/site-packages/pip/_internal/commands/install.py", line 377, in run
239.0 requirement_set = resolver.resolve(
239.0 ^^^^^^^^^^^^^^^^^
239.0 File "/usr/local/lib/python3.11/site-packages/pip/_internal/resolution/resolvelib/resolver.py", line 92, in resolve
239.0 result = self._result = resolver.resolve(
239.0 ^^^^^^^^^^^^^^^^^
239.0 File "/usr/local/lib/python3.11/site-packages/pip/_vendor/resolvelib/resolvers.py", line 546, in resolve
239.0 state = resolution.resolve(requirements, max_rounds=max_rounds)
239.0 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
239.0 File "/usr/local/lib/python3.11/site-packages/pip/_vendor/resolvelib/resolvers.py", line 427, in resolve
239.0 failure_causes = self._attempt_to_pin_criterion(name)
239.0 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
239.0 File "/usr/local/lib/python3.11/site-packages/pip/_vendor/resolvelib/resolvers.py", line 239, in _attempt_to_pin_criterion
239.0 criteria = self._get_updated_criteria(candidate)
239.0 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
239.0 File "/usr/local/lib/python3.11/site-packages/pip/_vendor/resolvelib/resolvers.py", line 230, in _get_updated_criteria
239.0 self._add_to_criteria(criteria, requirement, parent=candidate)
239.0 File "/usr/local/lib/python3.11/site-packages/pip/_vendor/resolvelib/resolvers.py", line 173, in _add_to_criteria
239.0 if not criterion.candidates:
239.0 File "/usr/local/lib/python3.11/site-packages/pip/_vendor/resolvelib/structs.py", line 156, in __bool__
239.0 return bool(self._sequence)
239.0 ^^^^^^^^^^^^^^^^^^^^
239.0 File "/usr/local/lib/python3.11/site-packages/pip/_internal/resolution/resolvelib/found_candidates.py", line 155, in __bool__
239.0 return any(self)
239.0 ^^^^^^^^^
239.0 File "/usr/local/lib/python3.11/site-packages/pip/_internal/resolution/resolvelib/found_candidates.py", line 143, in <genexpr>
239.0 return (c for c in iterator if id(c) not in self._incompatible_ids)
239.0 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
239.0 File "/usr/local/lib/python3.11/site-packages/pip/_internal/resolution/resolvelib/found_candidates.py", line 47, in _iter_built
239.0 candidate = func()
239.0 ^^^^^^
239.0 File "/usr/local/lib/python3.11/site-packages/pip/_internal/resolution/resolvelib/factory.py", line 206, in _make_candidate_from_link
239.0 self._link_candidate_cache[link] = LinkCandidate(
239.0 ^^^^^^^^^^^^^^
239.0 File "/usr/local/lib/python3.11/site-packages/pip/_internal/resolution/resolvelib/candidates.py", line 293, in __init__
239.0 super().__init__(
239.0 File "/usr/local/lib/python3.11/site-packages/pip/_internal/resolution/resolvelib/candidates.py", line 156, in __init__
239.0 self.dist = self._prepare()
239.0 ^^^^^^^^^^^^^^^
239.0 File "/usr/local/lib/python3.11/site-packages/pip/_internal/resolution/resolvelib/candidates.py", line 225, in _prepare
239.0 dist = self._prepare_distribution()
239.0 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
239.0 File "/usr/local/lib/python3.11/site-packages/pip/_internal/resolution/resolvelib/candidates.py", line 304, in _prepare_distribution
239.0 return preparer.prepare_linked_requirement(self._ireq, parallel_builds=True)
239.0 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
239.0 File "/usr/local/lib/python3.11/site-packages/pip/_internal/operations/prepare.py", line 516, in prepare_linked_requirement
239.0 return self._prepare_linked_requirement(req, parallel_builds)
239.0 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
239.0 File "/usr/local/lib/python3.11/site-packages/pip/_internal/operations/prepare.py", line 587, in _prepare_linked_requirement
239.0 local_file = unpack_url(
239.0 ^^^^^^^^^^^
239.0 File "/usr/local/lib/python3.11/site-packages/pip/_internal/operations/prepare.py", line 166, in unpack_url
239.0 file = get_http_url(
239.0 ^^^^^^^^^^^^^
239.0 File "/usr/local/lib/python3.11/site-packages/pip/_internal/operations/prepare.py", line 107, in get_http_url
239.0 from_path, content_type = download(link, temp_dir.path)
239.0 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
239.0 File "/usr/local/lib/python3.11/site-packages/pip/_internal/network/download.py", line 147, in __call__
239.0 for chunk in chunks:
239.0 File "/usr/local/lib/python3.11/site-packages/pip/_internal/cli/progress_bars.py", line 53, in _rich_progress_bar
239.0 for chunk in iterable:
239.0 File "/usr/local/lib/python3.11/site-packages/pip/_internal/network/utils.py", line 63, in response_chunks
239.0 for chunk in response.raw.stream(
239.0 File "/usr/local/lib/python3.11/site-packages/pip/_vendor/urllib3/response.py", line 622, in stream
239.0 data = self.read(amt=amt, decode_content=decode_content)
239.0 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
239.0 File "/usr/local/lib/python3.11/site-packages/pip/_vendor/urllib3/response.py", line 560, in read
239.0 with self._error_catcher():
239.0 File "/usr/local/lib/python3.11/contextlib.py", line 155, in __exit__
239.0 self.gen.throw(typ, value, traceback)
239.0 File "/usr/local/lib/python3.11/site-packages/pip/_vendor/urllib3/response.py", line 443, in _error_catcher
239.0 raise ReadTimeoutError(self._pool, None, "Read timed out.")
239.0 pip._vendor.urllib3.exceptions.ReadTimeoutError: HTTPSConnectionPool(host='files.pythonhosted.org', port=443): Read timed out.
------
Dockerfile:30
--------------------
28 | # Install dependencies
29 | COPY requirements.txt .
30 | >>> RUN pip install -r requirements.txt
31 |
32 | # Copy project
--------------------
ERROR: failed to solve: process "/bin/sh -c pip install -r requirements.txt" did not complete successfully: exit code: 2
View build details: docker-desktop://dashboard/build/default/default/xkiouh9bexjvwvhk6p67lie73
</code></pre>
<p>The error seems to only ever occur when installing pytorch (and specifically the <code>nvidia_cudnn_ ...</code> line), so I'm not sure if it is a pytorch-specific error.</p>
<p>Here is my Dockerfile:</p>
<pre><code># Pull base image
FROM python:3.11.4-slim-bullseye
# Set environment variables
ENV PIP_NO_CACHE_DIR off
ENV PIP_DISABLE_PIP_VERSION_CHECK 1
ENV PYTHONUNBUFFERED 1
ENV PYTHONDONTWRITEBYTECODE 1
ENV COLUMNS 80
#install Debian and other dependencies that are required to run python apps(eg. git, python-magic).
RUN apt-get update \
&& apt-get install -y --force-yes \
nano python3-pip gettext chrpath libssl-dev libxft-dev \
libfreetype6 libfreetype6-dev libfontconfig1 libfontconfig1-dev\
&& rm -rf /var/lib/apt/lists/*
RUN apt-get update && apt-get install -y git
RUN apt-get update && apt-get install -y libmagic-dev
RUN apt-get -y update && apt-get -y upgrade && apt-get install -y --no-install-recommends ffmpeg
# Set working directory for Docker image
WORKDIR /code/
RUN apt-get update \
&& apt-get -y install libpq-dev gcc
# Install dependencies
COPY requirements.txt .
RUN pip install -r requirements.txt
# Copy project
COPY . .
</code></pre>
<p>And here is a section of my requirements.txt</p>
<pre><code>six==1.16.0
sqlparse==0.4.4
sympy==1.12
termcolor==2.3.0
tiktoken==0.3.3
torch==2.0.1
tqdm==4.66.1
</code></pre>
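<p>A possible mitigation, assuming the failure is just a slow download of the large CUDA wheels rather than a hard network outage, is to raise pip's network timeout and add retries before the install step (the values here are arbitrary assumptions):</p>

```dockerfile
# Hypothetical tweak: give pip more time and more attempts for the
# multi-hundred-megabyte nvidia/cuda wheels pulled in by torch.
ENV PIP_DEFAULT_TIMEOUT=300
RUN pip install --retries 10 -r requirements.txt
```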
|
<python><django><docker><pip><pytorch>
|
2024-04-05 17:36:10
| 0
| 359
|
tthheemmaannii
|
78,281,505
| 13,392,257
|
S3 Minio: An error occurred (400) when calling the HeadBucket operation: Bad Request"}
|
<p>I am trying to run an S3 environment locally.</p>
<p>I created a docker-compose file:</p>
<pre><code>services:
# docker-compose up is not working on MacOS because of EFK services
s3:
image: minio/minio
command: server --console-address ":9001" /data
container_name: minio-s3
ports:
- "9000:9000"
- "9001:9001"
volumes:
- minio_storage:/data
environment:
MINIO_ROOT_USER: minio
MINIO_ROOT_PASSWORD: minio_pass
MINIO_ACCESS_KEY: ***
MINIO_SECRET_KEY: ***
MINIO_DOMAIN: s3minio.ru
restart: always
networks:
default:
aliases:
- api.s3minio.ru
</code></pre>
<p>I logged into <a href="http://127.0.0.1:9001/browser" rel="nofollow noreferrer">http://127.0.0.1:9001/browser</a>, created buckets, and created access and secret keys.
I then updated the docker-compose file with those keys.</p>
<p>My code</p>
<pre><code>import boto3
import botocore
access_key = None
secret_key = None
access_key = "CNLliwavcrqJIWAbfb7i"
secret_key = "VGEAJoqb0q1vJpqVqqe2Nno5F97xQsvQmtj7cQiw"
aws_config = boto3.session.Config (
signature_version='s3v4',
connect_timeout=100,
)
s3_client = boto3.client('s3',
endpoint_url="http://127.0.0.1:9001",
aws_access_key_id=access_key,
aws_secret_access_key=secret_key,
aws_session_token=None,
config=aws_config,
verify=False
)
s3_client.head_bucket(Bucket="bucket-in")
</code></pre>
<p>Error</p>
<pre><code>Traceback (most recent call last):
...python3.8/site-packages/botocore/client.py", line 719, in _make_api_call
raise error_class(parsed_response, operation_name)
botocore.exceptions.ClientError: An error occurred (400) when calling the HeadBucket operation: Bad Request
</code></pre>
|
<python><amazon-s3><minio>
|
2024-04-05 17:27:10
| 1
| 1,708
|
mascai
|
78,281,416
| 219,159
|
Several file descriptors vs seeking all the time
|
<p>I'm pondering a design for a parser library that needs to maintain several (on the order of 10) read streams to the same file with independent positions. I could do it by opening the file several times, or opening once and seeking whenever needed. On general grounds, which will provide better performance? The resource footprint will probably be worse under the multiple handles approach, but would that translate into a performance gain?</p>
<p>The answer will probably be rather platform-dependent. Assume desktop Linux, with Windows also being considered. The project is in Python, but the underlying difference is, I suspect, Python-agnostic.</p>
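<p>The trade-off can be measured directly with a small <code>timeit</code> sketch (the file size, read size, and positions below are arbitrary assumptions, and results will be platform- and page-cache-dependent):</p>

```python
import os
import tempfile
import timeit

# Build a scratch file to read from.
fd, path = tempfile.mkstemp()
with os.fdopen(fd, "wb") as f:
    f.write(os.urandom(1 << 20))

positions = [i * 1000 for i in range(10)]

# Approach 1: ten independent handles, each keeping its own position.
handles = [open(path, "rb") for _ in range(10)]
for h, pos in zip(handles, positions):
    h.seek(pos)

def multi_handles():
    # Each handle simply advances from its own position; no seeks needed.
    return [h.read(64) for h in handles]

# Approach 2: one shared handle that seeks before every read.
shared = open(path, "rb")

def seek_one():
    out = []
    for pos in positions:
        shared.seek(pos)
        out.append(shared.read(64))
    return out

print("10 handles:     ", timeit.timeit(multi_handles, number=1000))
print("1 handle + seek:", timeit.timeit(seek_one, number=1000))
```

<p>On POSIX systems, positioned reads via <code>os.pread()</code> are a third option worth benchmarking: they avoid both the extra descriptors and the shared seek state.</p>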
|
<python><performance><optimization><file-io><posix>
|
2024-04-05 17:05:13
| 0
| 61,826
|
Seva Alekseyev
|
78,281,300
| 10,311,694
|
Redshift query with regexp_replace works in Redshift console but not Python lambda
|
<p>One of the fields in my Redshift table is structured like <code>{event_type} 2024-01-01</code>, so I am using <code>regexp_replace</code> to remove the date from the event type. I am calling Redshift via a Python Lambda like this</p>
<pre><code>client = boto3.client("cluster-name", aws_region, aws_access_key_id, aws_secret_access_key, aws_session_token)
query = """
select id, regexp_replace(event_type, '\\s+\\d{4}-\\d{2}$', '') as event
from table
"""
</code></pre>
<p>But the regex expression does not work: all results returned by the query still have the date attached.</p>
<p>However, if I try the query in the Redshift console, the field is transformed correctly.</p>
<p>I assumed it was something to do with the <code>\\s</code> and the escape characters; I tried it with three and four backslashes instead of two, but that didn't work.</p>
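<p>The difference between the console and the Lambda can be reproduced without Redshift at all: in a normal (non-raw) Python string, each <code>\\</code> collapses to a single backslash, so the server receives a different pattern than the one typed in the console. A quick sketch:</p>

```python
# In a regular triple-quoted string, "\\s" is just backslash + s at
# runtime, i.e. Redshift receives '\s+\d{4}-\d{2}$'.
query = """
select id, regexp_replace(event_type, '\\s+\\d{4}-\\d{2}$', '') as event
from table
"""

# In a raw string the doubled backslashes survive verbatim, matching
# what would be typed directly into the Redshift console.
raw_query = r"""
select id, regexp_replace(event_type, '\\s+\\d{4}-\\d{2}$', '') as event
from table
"""

print(query.count(chr(92)))      # 3 -- one backslash per escape survives
print(raw_query.count(chr(92)))  # 6 -- doubled backslashes kept verbatim
```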
|
<python><sql><regex><aws-lambda><amazon-redshift>
|
2024-04-05 16:36:16
| 3
| 451
|
Kevin2566
|
78,281,273
| 8,694,474
|
create and use python venv from cmake
|
<p>We are using CMake and invoking some Python packages (like conan) from CMake.
However, the latest Python 3.12 seems to enforce using a venv.
I did some searching and found this interesting article:
<a href="https://discourse.cmake.org/t/possible-to-create-a-python-virtual-env-from-cmake-and-then-find-it-with-findpython3/1132" rel="nofollow noreferrer">https://discourse.cmake.org/t/possible-to-create-a-python-virtual-env-from-cmake-and-then-find-it-with-findpython3/1132</a></p>
<p>I managed to install the requirements inside the venv but find_program is failing.</p>
<p>This is my cmake code</p>
<pre><code>macro(create_venv)
find_package(Python3 3.9 REQUIRED COMPONENTS Interpreter Development)
set(SYSTEM_PYTHON_EXE_PATH ${Python3_EXECUTABLE})
message(STATUS "+++++++++++++++++ Found system python located at ${SYSTEM_PYTHON_EXE_PATH}++++++++++++++++")
# Taken from https://discourse.cmake.org/t/possible-to-create-a-python-virtual-env-from-cmake-and-then-find-it-with-findpython3/1132
execute_process(COMMAND "${Python3_EXECUTABLE}" -m venv "${PROJECT_BINARY_DIR}/venv" --upgrade-deps COMMAND_ERROR_IS_FATAL ANY)
# Here is the trick update the environment with VIRTUAL_ENV variable (mimic the activate script)
set(VENV_PATH "${PROJECT_BINARY_DIR}/venv")
set(ENV{VIRTUAL_ENV} "${VENV_PATH}")
#[[
enabling this line will enforce conan to use all packages from the venv cache (which is empty when newly created)
this means every time you clean your cmake cache all 3rd parties will be recompiled or downloaded which will increase configure time
You should comment in this line when playing around with conan config to not break your local build
]]
# set (ENV{CONAN_USER_HOME} "${PROJECT_BINARY_DIR}/conan") change the context of the search
set(Python3_FIND_VIRTUALENV FIRST)
# unset Python3_EXECUTABLE because it is also an input variable (see documentation, Artifacts Specification section)
unset(Python3_EXECUTABLE)
# Launch a new search
find_package(Python3 3.9 REQUIRED COMPONENTS Interpreter Development)
message(STATUS "+++++++++++++++++ Found venv python located at ${Python3_EXECUTABLE}++++++++++++++++")
if(SYSTEM_PYTHON_EXE_PATH STREQUAL Python3_EXECUTABLE)
message(FATAL_ERROR "Python3 executable is the same as the system one, this is not expected")
endif()
# taken from https://www.scivision.dev/cmake-install-python-package/
set(REQUIREMENTS_ARG "-r ${TOOLBOX_SOURCE_DIR}/python3/requirements_generic.txt")
message(STATUS "Trying to install deps")
execute_process(
COMMAND "${Python3_EXECUTABLE}" -m pip install -r "${TOOLBOX_SOURCE_DIR}/python3/requirements_win32.txt" COMMAND_ERROR_IS_FATAL ANY
)
find_program(CONAN_CMD conan REQUIRED)
if(NOT CONAN_CMD)
message(FATAL_ERROR "Conan executable not found! Please install conan.")
endif()
endmacro()
</code></pre>
<p>This results in <code>Could not find CONAN_CMD using the following names: conan</code>
Do you know what the issue is here?</p>
|
<python><cmake><python-venv>
|
2024-04-05 16:30:19
| 1
| 1,041
|
JuliusCaesar
|
78,281,062
| 8,799,005
|
Position streamlit buttons horizontally next to each other
|
<p>How do I position two buttons on the same line, next to each other?
I had the following two failing attempts.</p>
<p><strong>Attempt 1</strong>: The buttons do not get placed side by side</p>
<pre class="lang-py prettyprint-override"><code> st.markdown('<div style="display: inline-block;">', unsafe_allow_html=True)
st.download_button(
label="Download data as CSV",
data="Your DataFrame's CSV data here".encode('utf-8'),
file_name='filtered_data.csv',
mime='text/csv',
key="download_csv"
)
if st.button("Reset Metrics", key="reset_metrics"):
st.experimental_rerun()
st.markdown('</div>', unsafe_allow_html=True)
</code></pre>
<p><strong>Attempt 2</strong>: The buttons are side by side, but still very far away from each other</p>
<pre><code> col = st.columns([0.5, 0.5],gap="small")
with col[0]:
st.download_button(
label="Download data as CSV",
data=filtered_data.to_csv().encode('utf-8'),
file_name='filtered_data.csv',
mime='text/csv',
key="download_csv"
)
with col[1]:
if st.button("Reset Metrics", key="reset_metrics"):
st.experimental_rerun()
</code></pre>
|
<python><streamlit>
|
2024-04-05 15:51:32
| 1
| 376
|
Sou
|
78,281,016
| 464,277
|
Multivariate forecast in NeuralProphet
|
<p>I'm trying to build a global model to predict two time series at the same time. The code below runs with no error, but the forecast dataframe is all NaN for one of the two IDs (i.e. there are two time series), and the yhat values are also all NaN. Am I doing something wrong or missing something?</p>
<pre><code>m = NeuralProphet(
yearly_seasonality=True,
weekly_seasonality=True,
daily_seasonality=False,
quantiles=quantiles,
n_lags=60,
epochs=100,
n_forecasts=30,
loss_func='Huber',
)
m.set_plotting_backend('plotly')
m.highlight_nth_step_ahead_of_each_forecast(step_number=10)
metrics = m.fit(train_df[['ds', 'y', 'ID']])
df_future = m.make_future_dataframe(
train_df,
n_historic_predictions=True,
)
forecast = m.predict(df_future)
</code></pre>
|
<python><machine-learning><time-series><facebook-prophet>
|
2024-04-05 15:44:18
| 0
| 10,181
|
zzzbbx
|
78,280,954
| 1,321,547
|
Add timezone based on Column Value
|
<p>I have a polars Dataframe with two columns: a string column containing datetimes and an integer column containing UTC offsets (for example -4 for EDT). Essentially the Dataframe looks like this:</p>
<pre><code>>>> data
shape: (2, 2)
βββββββββββββββββββββββ¬βββββββββββ
β Datetime β Timezone β
β --- β --- β
β str β i64 β
βββββββββββββββββββββββͺβββββββββββ‘
β 2022-01-01 12:52:23 β -4 β
β 2023-03-31 04:22:59 β -5 β
βββββββββββββββββββββββ΄βββββββββββ
</code></pre>
<p>Now I want to convert this column either to UTC or a timezone-aware datetime column. I looked into the <code>pl.Expr.str.to_datetime</code> function which accepts the <code>time_zone</code> argument. Unfortunately this argument can only be passed as a string and not as a <code>pl.Expr</code>.</p>
<p>In other words, I can convert all columns to the same specified timezone, but I cannot dynamically use timezone based on the value of another column.</p>
<p>What I would like in the end is something like the following (note that the Datetime column is now of <code>datetime</code> type and the timezone offset has been added dynamically (4 hours for the first row and 5 for the second).</p>
<pre><code>>>> data
shape: (2, 2)
βββββββββββββββββββββββ¬βββββββββββ
β Datetime β Timezone β
β --- β --- β
β datetime β i64 β
βββββββββββββββββββββββͺβββββββββββ‘
β 2022-01-01 16:52:23 β -4 β
β 2023-03-31 09:22:59 β -5 β
βββββββββββββββββββββββ΄βββββββββββ
</code></pre>
<p>Is there a way to do this without going to <code>map_elements</code> or <code>iter_rows</code> based approaches?</p>
|
<python><python-polars>
|
2024-04-05 15:32:45
| 2
| 313
|
upapilot
|
78,280,423
| 5,707,850
|
How to refresh/trigger all content within a Tableau Schedule
|
<p>In Tableau you can run a schedule, but I'm looking to trigger a schedule via python/tabcmd or another method after our daily data operations are complete. This would give us the ability to add and remove content from the schedule without having to code for that specific content. Currently, each data source or workbook ID has to be manually added to the script.</p>
|
<python><tableau-api>
|
2024-04-05 13:59:08
| 1
| 359
|
Caleb
|
78,280,227
| 419,399
|
Matplotlib issue with subplots - 'Axes' object has no attribute 'is_first_col'
|
<pre><code>import matplotlib.pyplot as plt
fig, ax = plt.subplots(1,2)
import pandas as pd
df = pd.DataFrame([1,2,3])
df.plot(ax=ax[0])
</code></pre>
<p>matplotlib: '3.8.4'
pandas: '1.2.0'
python:3.11.0</p>
<p>results in this error:</p>
<pre><code>---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
Cell In[3], line 5
3 import pandas as pd
4 df = pd.DataFrame([1,2,3])
----> 5 df.plot(ax=ax[0])
File ~/miniforge3/envs/ExMAS/lib/python3.11/site-packages/pandas/plotting/_core.py:955, in PlotAccessor.__call__(self, *args, **kwargs)
952 label_name = label_kw or data.columns
953 data.columns = label_name
--> 955 return plot_backend.plot(data, kind=kind, **kwargs)
File ~/miniforge3/envs/ExMAS/lib/python3.11/site-packages/pandas/plotting/_matplotlib/__init__.py:61, in plot(data, kind, **kwargs)
59 kwargs["ax"] = getattr(ax, "left_ax", ax)
60 plot_obj = PLOT_CLASSES[kind](data, **kwargs)
---> 61 plot_obj.generate()
62 plot_obj.draw()
63 return plot_obj.result
File ~/miniforge3/envs/ExMAS/lib/python3.11/site-packages/pandas/plotting/_matplotlib/core.py:283, in MPLPlot.generate(self)
281 self._add_table()
282 self._make_legend()
--> 283 self._adorn_subplots()
285 for ax in self.axes:
286 self._post_plot_logic_common(ax, self.data)
File ~/miniforge3/envs/ExMAS/lib/python3.11/site-packages/pandas/plotting/_matplotlib/core.py:485, in MPLPlot._adorn_subplots(self)
483 all_axes = self._get_subplots()
484 nrows, ncols = self._get_axes_layout()
--> 485 handle_shared_axes(
486 axarr=all_axes,
487 nplots=len(all_axes),
488 naxes=nrows * ncols,
489 nrows=nrows,
490 ncols=ncols,
491 sharex=self.sharex,
492 sharey=self.sharey,
493 )
495 for ax in self.axes:
496 if self.yticks is not None:
File ~/miniforge3/envs/ExMAS/lib/python3.11/site-packages/pandas/plotting/_matplotlib/tools.py:400, in handle_shared_axes(axarr, nplots, naxes, nrows, ncols, sharex, sharey)
395 if ncols > 1:
396 for ax in axarr:
397 # only the first column should get y labels -> set all other to
398 # off as we only have labels in the first column and we always
399 # have a subplot there, we can skip the layout test
--> 400 if ax.is_first_col():
401 continue
402 if sharey or _has_externally_shared_axis(ax, "y"):
AttributeError: 'Axes' object has no attribute 'is_first_col'
</code></pre>
|
<python><pandas><matplotlib>
|
2024-04-05 13:27:34
| 1
| 1,230
|
Intelligent-Infrastructure
|
78,279,924
| 1,973,451
|
pytorch: autograd with tensors generated by arange
|
<p>I want to compute the gradient of a function at several points. However, if I use tensors generated with <code>torch.arange</code>, the gradient is not computed, while with ordinary tensors it works. Why?</p>
<pre><code>import torch
from torch import tensor
def l(w_1,w_2):
return w_1*w_2
w_1 = tensor(3., requires_grad=True)
w_2 = tensor(5., requires_grad=True)
l_v = l(w_1, w_2)
l_v.backward()
print(l_v.item(), w_1.grad, w_2.grad) # HERE WORKS OK
#############
for w_1_value in torch.arange(+2,+4,0.1, requires_grad=True):
for w_2_value in torch.arange(-2,+4,0.1, requires_grad=True):
print(w_1_value, w_2_value)
l_value = l(w_1_value, w_2_value)
l_value.backward()
print(l_value.item(), w_1_value.grad, w_2_value.grad) # HERE I GET NONE ON GRAD VALUES
</code></pre>
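<p>For what it's worth, the behaviour can be reproduced in isolation: iterating over a tensor yields <em>views</em> into it, which are non-leaf tensors, and <code>.grad</code> is only populated on leaf tensors. A sketch of one workaround (an assumption about the intent: each grid point is treated as an independent scalar) is to detach each value into a fresh leaf first:</p>

```python
import torch

def l(w_1, w_2):
    return w_1 * w_2

for w_1_value in torch.arange(2, 4, 1.0):
    for w_2_value in torch.arange(-2, 4, 1.0):
        # Each iterated element is a view (non-leaf); make it a leaf tensor.
        w_1_leaf = w_1_value.clone().detach().requires_grad_(True)
        w_2_leaf = w_2_value.clone().detach().requires_grad_(True)
        l_value = l(w_1_leaf, w_2_leaf)
        l_value.backward()
        # For f = w1*w2, df/dw1 == w2 and df/dw2 == w1.
        assert w_1_leaf.grad == w_2_leaf.detach()
        assert w_2_leaf.grad == w_1_leaf.detach()
```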
|
<python><pytorch><gradient><tensor><autograd>
|
2024-04-05 12:36:30
| 1
| 1,441
|
volperossa
|
78,279,823
| 1,194,864
|
How exactly the forward and backward hooks work in PyTorch
|
<p>I am trying to understand how exactly, code-wise, hooks operate in <code>PyTorch</code>. I have a model and I would like to set a forward and a backward hook in my code. I would like to set a hook in my model after a specific layer, and I guess the easiest way is to set a hook on that specific <code>module</code>. This introductory <a href="https://youtu.be/syLFCVYua6Q?t=1207" rel="noreferrer">video</a> warns that the backward hook contains a bug, but I am not sure if that is still the case.</p>
<p>My code looks as follows:</p>
<pre><code>def __init__(self, model, attention_layer_name='desired_name_module',discard_ratio=0.9):
self.model = model
self.discard_ratio = discard_ratio
for name, module in self.model.named_modules():
if attention_layer_name in name:
module.register_forward_hook(self.get_attention)
module.register_backward_hook(self.get_attention_gradient)
self.attentions = []
self.attention_gradients = []
def get_attention(self, module, input, output):
self.attentions.append(output.cpu())
def get_attention_gradient(self, module, grad_input, grad_output):
self.attention_gradients.append(grad_input[0].cpu())
def __call__(self, input_tensor, category_index):
self.model.zero_grad()
output = self.model(input_tensor)
loss = ...
loss.backward()
</code></pre>
<p>I am puzzled to understand how code-wise the following lines work:</p>
<pre><code>module.register_forward_hook(self.get_attention)
module.register_backward_hook(self.get_attention_gradient)
</code></pre>
<p>I am registering a hook on my desired module; however, I am then passing a function in each case without any input. My question is, <code>Python</code>-wise, how does this call work exactly? How do the arguments of <code>register_forward_hook</code> and <code>register_backward_hook</code> get supplied when the function is called?</p>
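<p>Here is a minimal standalone sketch of what I am experimenting with (a toy <code>nn.Linear</code> stands in for my model): as far as I can tell, registration only stores the callable on the module, and during <code>forward()</code> the module itself invokes it as <code>hook(module, input, output)</code>:</p>

```python
import torch
import torch.nn as nn

captured = []

def save_output(module, inputs, output):
    # Called by the module itself during forward(), not by my code.
    captured.append(output.detach())

layer = nn.Linear(4, 2)
handle = layer.register_forward_hook(save_output)  # only stores the callable
_ = layer(torch.randn(1, 4))                       # hook fires here
print(len(captured))  # 1 -- the hook ran once, during the forward pass
handle.remove()       # detach the hook when done
```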
|
<python><pytorch><hook>
|
2024-04-05 12:15:14
| 2
| 5,452
|
Jose Ramon
|
78,279,748
| 5,175,802
|
snowflake-connector-python[pandas] write_pandas creates duplicate records in table
|
<p>I am attempting to copy data into snowflake on an AWS Lambda. I have a situation right now where I have a dataframe that has no duplicates in it. I verify this by checking my dataframe like so:</p>
<p><code>df.duplicated().any()</code> and verify that it returns <code>False</code></p>
<p>I then double check by filtering by what should be a unique value in the dataframe</p>
<p><code>df[df["myColumn"] == "uniqueValue"]</code> and I get 1 result.</p>
<p>I then run the following:</p>
<pre class="lang-py prettyprint-override"><code>write_pandas(
conn=con,
df=df,
table_name=table_name,
database=database,
schema=schema,
chunk_size=chunk_size,
quote_identifiers=False,
)
</code></pre>
<p>and then when the data lands in the Snowflake table and I query it, there are 5 of each row in the SF database.</p>
<p>I verified that this function only runs one time as well.</p>
<p>Why am I getting 5 duplicates?</p>
<p><strong>EDIT</strong>
OK so I realized it's not related to this package. The issue is that after 1 minute the lambda is triggered again, and then again 1 minute later, etc. until it's been triggered 5 times.</p>
<p>I have no idea why it's being triggered multiple times, though, because all of the executions eventually succeed, but there are 5 of them running before the first one actually completes.</p>
<p><strong>UPDATE</strong></p>
<p>Verified that it's not a memory issue and not a timeout issue.</p>
<p>What I have noticed is that when an API Call is made to retrieve some external data is when the next lambda seems to be triggered. Not sure why that would play a role but it seems to be affecting it.</p>
<p>Also, it's not fixed at 5 times; it will just re-trigger every minute until the first lambda execution finishes. I can see that the logs stop when the API call starts, and it's at that same log mark that I see the next lambda execution start.</p>
|
<python><pandas><aws-lambda><snowflake-cloud-data-platform>
|
2024-04-05 11:57:25
| 1
| 3,626
|
Jake Boomgaarden
|
78,279,743
| 2,417,709
|
How to identify whether the partition belongs to OS (Linux) or not using python
|
<p>I have a requirement to identify only the operating system volumes using Python. Is there any method in "psutil" to identify that? Currently I'm thinking of reading fstab: based on the 6th column (if not "0"), we can identify whether the partition belongs to the OS or just holds data. Can someone shed some light on whether reading fstab is correct for my requirement, or whether there is an easier way in Python to identify this?</p>
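<p>A sketch of the fstab idea on sample data (the sixth field, <code>fs_passno</code>, is <code>1</code> for the root filesystem and <code>2</code> for other filesystems checked at boot; whether <code>2</code> should count as "OS" is an assumption still to be decided):</p>

```python
# Hypothetical /etc/fstab contents for illustration.
sample_fstab = """\
# <device>   <mount>  <type>  <options>           <dump> <pass>
UUID=abcd    /        ext4    errors=remount-ro   0      1
UUID=ef01    /home    ext4    defaults            0      2
UUID=2345    /data    ext4    defaults            0      0
"""

os_mounts = []
for line in sample_fstab.splitlines():
    fields = line.split()
    if not fields or fields[0].startswith("#"):
        continue  # skip blanks and comments
    if len(fields) >= 6 and fields[5] != "0":
        os_mounts.append(fields[1])  # mount point of a boot-checked fs

print(os_mounts)  # ['/', '/home']
```

<p>For comparison, psutil only exposes <code>psutil.disk_partitions()</code> (device, mountpoint, fstype, opts) with no OS-vs-data flag, so a heuristic like the above, or simply checking for the <code>/</code> mountpoint, still seems necessary.</p>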
|
<python><linux><psutil><fstab>
|
2024-04-05 11:55:28
| 0
| 311
|
Naga
|
78,279,592
| 4,277,485
|
create columns dynamically depending on string structure in Python
|
<p>To give background on my project: I am comparing two documents which have a nested <code>JSON</code> structure, using <code>deepDiff</code> in Python. During comparison, if the values of a field have changed, they are written to a DataFrame for analysis.</p>
<p>Example data change:</p>
<pre><code>"values_changed": {
"root['prod1']['p_col']['c_col']": {
"new_value": -2.866711109999983,
"old_value": -2.75
},
"root['prod1']['p_col23']": {
"new_value": 1,
"old_value": 54
},
"root['prod1']['p_col']['c_col2']['c_col5']": {
"new_value": 1.678,
"old_value": 5.12
}
}
</code></pre>
<p>Current output:</p>
<div class="s-table-container"><table class="s-table">
<thead>
<tr>
<th style="text-align: left;">product</th>
<th style="text-align: center;">field</th>
<th style="text-align: right;">new_value</th>
<th style="text-align: right;">old_value</th>
</tr>
</thead>
<tbody>
<tr>
<td style="text-align: left;">prod</td>
<td style="text-align: center;">p_col</td>
<td style="text-align: right;">-2.866</td>
<td style="text-align: right;">-2.75</td>
</tr>
<tr>
<td style="text-align: left;">prod</td>
<td style="text-align: center;">p_col23</td>
<td style="text-align: right;">1</td>
<td style="text-align: right;">54</td>
</tr>
<tr>
<td style="text-align: left;">prod</td>
<td style="text-align: center;">p_col</td>
<td style="text-align: right;">1.678</td>
<td style="text-align: right;">5.12</td>
</tr>
</tbody>
</table></div>
<p>I am using a string pattern to read up to <strong>root['prod1']['p_col']</strong>, adding it to another dict that is then converted to a pandas DataFrame:<br><br></p>
<pre><code>k = """root['prod1']['p_col']['c_col']"""
field = k[k.find("['",8)+1:k.find("']",-1)].split("'")[1::2][0]
</code></pre>
<p>but I want to dynamically add a new column based on the existence of a nested field in the string.</p>
<p>Required output:<br></p>
<div class="s-table-container"><table class="s-table">
<thead>
<tr>
<th style="text-align: left;">product</th>
<th style="text-align: center;">parent-field</th>
<th style="text-align: center;">field</th>
<th style="text-align: center;">field</th>
<th style="text-align: right;">new_value</th>
<th style="text-align: right;">old_value</th>
</tr>
</thead>
<tbody>
<tr>
<td style="text-align: left;">prod</td>
<td style="text-align: center;">p_col</td>
<td style="text-align: center;">c_col</td>
<td style="text-align: center;"></td>
<td style="text-align: right;">-2.866</td>
<td style="text-align: right;">-2.75</td>
</tr>
<tr>
<td style="text-align: left;">prod</td>
<td style="text-align: center;">p_col23</td>
<td style="text-align: center;"></td>
<td style="text-align: center;"></td>
<td style="text-align: right;">1</td>
<td style="text-align: right;">54</td>
</tr>
<tr>
<td style="text-align: left;">prod</td>
<td style="text-align: center;">p_col</td>
<td style="text-align: center;">c_col2</td>
<td style="text-align: center;">c_col5</td>
<td style="text-align: right;">1.678</td>
<td style="text-align: right;">5.12</td>
</tr>
</tbody>
</table></div>
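<p>One way to sketch this (the <code>field_N</code> column names and the regex are assumptions, and duplicate column headers from the target table are disambiguated here) is to extract all bracketed keys at once and let pandas align rows of different depth, filling gaps with NaN:</p>

```python
import re
import pandas as pd

values_changed = {
    "root['prod1']['p_col']['c_col']": {"new_value": -2.866711109999983, "old_value": -2.75},
    "root['prod1']['p_col23']": {"new_value": 1, "old_value": 54},
    "root['prod1']['p_col']['c_col2']['c_col5']": {"new_value": 1.678, "old_value": 5.12},
}

rows = []
for path, change in values_changed.items():
    # Pull every ['key'] segment out of the deepDiff path in one pass.
    keys = re.findall(r"\['([^']+)'\]", path)  # e.g. ['prod1', 'p_col', 'c_col']
    row = {"product": keys[0]}
    for depth, key in enumerate(keys[1:], start=1):
        row["parent-field" if depth == 1 else f"field_{depth}"] = key
    row.update(change)
    rows.append(row)

df = pd.DataFrame(rows)  # columns appear dynamically; missing levels become NaN
print(df)
```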
|
<python><pandas><dataframe><dictionary>
|
2024-04-05 11:25:16
| 3
| 438
|
Kavya shree
|
78,279,361
| 7,695,845
|
How to parallelize a loop over a multi dimensional array with numba?
|
<p>I am trying to implement a numerical scheme over a multi-dimensional array with <code>numba</code>. For each dimension, there is a corresponding 1D vector with some values I use in my scheme. As a starting point, I had a 2D array with vectors of x values and y values. My scheme is similar to this:</p>
<pre class="lang-py prettyprint-override"><code>import numpy as np
from numba import jit, prange
@jit(
nopython=True,
nogil=True,
cache=True,
error_model="numpy",
fastmath=True,
parallel=True,
)
def run_scheme(arr, x_vector, y_vector, max_iterations, tolerance):
# arr.shape[0] = len(x_vector), arr.shape[1] = len(y_vector)
for n in range(1, max_iterations + 1): # This outer loop can't run in parallel since the scheme is an iterative method.
max_err = 0 # Find maximum error to know if we can stop
# The inner loops over the array can easily be parallelized since no element in the array depends on the others
for i in prange(len(x_vector)):
x = x_vector[i]
            for j in prange(len(y_vector)):
y = y_vector[j]
arr[i, j] = f(arr[i, j], x, y) # the numerical computation (the details of the computation are unimportant)
max_err = max(max_err, error_value(arr[i, j])) # Find the error in the computation
if max_err < tolerance:
return n # If the error in all elements is less than the tolerance, we finished after `n` steps
return -1 # The scheme failed to reach the desired tolerance in `max_iterations` steps
</code></pre>
<p>This scheme worked well and produced the results I needed. In addition, the parallelization was a huge optimization in my case. I found that the scheme runs 4 or 5 times faster than if I set <code>parallel=False</code>: A huge improvement. Now I need to generalize this scheme for an array with N dimensions and N corresponding 1D vectors (where each vector satisfies <code>arr.shape[i] = len(vectors[i])</code>). The problem is I can't have an unknown number of nested loops. Technically, in my specific case, I know I need to deal with 4D arrays, but I still don't want to write 4 nested loops. If I didn't care about parallelization, I'd use something like <code>np.ndenumerate()</code>:</p>
<pre class="lang-py prettyprint-override"><code>import numpy as np
from numba import jit
@jit(
nopython=True,
nogil=True,
cache=True,
error_model="numpy",
fastmath=True,
)
def run_scheme(arr, vectors, max_iterations, tolerance):
# arr.shape[i] = len(vectors[i])
for n in range(1, max_iterations + 1):
max_err = 0
for indices, a in np.ndenumerate(arr):
arr[indices] = f(a, [vectors[i][j] for i, j in enumerate(indices)])
max_err = max(max_err, error_value(arr[indices]))
if max_err < tolerance:
return n
return -1
</code></pre>
<p>But from my testing of the 2D case, the parallelization gave a huge improvement of the performance which I am not willing to give up on.</p>
<p>How can I parallelize a loop over an N-dimensional array with <code>numba</code>? If anybody has more advice on optimizing the scheme, I'd like to hear it. Performance is important in my case, since the better the performance, the higher the accuracy I can ask the computer to reach.</p>
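<p>As an illustration of one direction (pure NumPy here, no numba, so this only demonstrates the index arithmetic, not the <code>prange</code> parallelism): the N nested loops can collapse into a single loop over <code>arr.size</code>, with the N-dimensional index recovered by integer division and modulo, assuming a C-contiguous array. A single <code>prange</code> over <code>k</code> could then replace the nested loops:</p>

```python
import numpy as np

arr = np.zeros((3, 4, 5))
shape = arr.shape

for k in range(arr.size):  # with numba this single loop would be prange(arr.size)
    # Decode the flat index k into an N-dimensional index (C order),
    # using only integer ops that numba's nopython mode supports.
    idx = []
    rem = k
    for s in shape[::-1]:
        idx.append(rem % s)
        rem //= s
    idx = tuple(idx[::-1])
    assert np.ravel_multi_index(idx, shape) == k  # decoding round-trips
```

<p>Inside an actual <code>@jit</code> function, the per-dimension lookup <code>vectors[i][idx[i]]</code> would additionally need a numba typed list or a padded 2-D array of vectors, which is a separate assumption to resolve.</p>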
|
<python><numpy><parallel-processing><numba>
|
2024-04-05 10:44:41
| 0
| 1,420
|
Shai Avr
|
78,279,136
| 11,934,706
|
"ImportError: cannot import name 'triu' from 'scipy.linalg'" when importing Gensim
|
<p>I am trying to use Gensim, but running <code>import gensim</code> raises this error:</p>
<pre class="lang-none prettyprint-override"><code>Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/usr/local/lib/python3.10/dist-packages/gensim/__init__.py", line 11, in <module>
from gensim import parsing, corpora, matutils, interfaces, models, similarities, utils # noqa:F401
File "/usr/local/lib/python3.10/dist-packages/gensim/corpora/__init__.py", line 6, in <module>
from .indexedcorpus import IndexedCorpus # noqa:F401 must appear before the other classes
File "/usr/local/lib/python3.10/dist-packages/gensim/corpora/indexedcorpus.py", line 14, in <module>
from gensim import interfaces, utils
File "/usr/local/lib/python3.10/dist-packages/gensim/interfaces.py", line 19, in <module>
from gensim import utils, matutils
File "/usr/local/lib/python3.10/dist-packages/gensim/matutils.py", line 20, in <module>
from scipy.linalg import get_blas_funcs, triu
ImportError: cannot import name 'triu' from 'scipy.linalg' (/usr/local/lib/python3.10/dist-packages/scipy/linalg/__init__.py)
</code></pre>
<p>Why is this happening and how can I fix it?</p>
|
<python><scipy><gensim>
|
2024-04-05 10:01:46
| 3
| 1,490
|
CodeRed
|
78,278,918
| 6,768,058
|
Can't run await with async methods in llama index
|
<p>I try the code below which is from <a href="https://docs.llamaindex.ai/en/stable/examples/evaluation/batch_eval/" rel="nofollow noreferrer">https://docs.llamaindex.ai/en/stable/examples/evaluation/batch_eval/</a>.</p>
<p>I came across an error of</p>
<blockquote>
<p>SyntaxError: 'await' outside function</p>
</blockquote>
<pre><code>from llama_index.core.evaluation import BatchEvalRunner
runner = BatchEvalRunner(
{"faithfulness": faithfulness_gpt4, "relevancy": relevancy_gpt4},
workers=8,
)
eval_results = await runner.aevaluate_queries(
vector_index.as_query_engine(llm=llm), queries=qas.questions
)
</code></pre>
<p>I tried asyncio loop method as shown below but got another error:</p>
<blockquote>
<p>RuntimeError: coroutine raised StopIteration</p>
</blockquote>
<pre><code>import asyncio
loop = asyncio.get_event_loop()
eval_results = loop.run_until_complete(runner.aevaluate_queries(index.as_query_engine(), queries=eval_question_texts[:1]))
loop.close()
</code></pre>
<p>Any idea for sorting this out?</p>
<p>Thanks</p>
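<p>For context, my understanding of the general rule (independent of llama_index) is that <code>await</code> is only legal inside an <code>async def</code> function; in a plain script the usual pattern is to wrap the call and hand it to <code>asyncio.run()</code>. A minimal sketch, with <code>asyncio.sleep</code> standing in for <code>aevaluate_queries</code>:</p>

```python
import asyncio

async def evaluate():
    # stands in for: await runner.aevaluate_queries(...)
    await asyncio.sleep(0)
    return "done"

result = asyncio.run(evaluate())  # valid at the top level of a script
```

<p>(In a Jupyter notebook, top-level <code>await</code> is allowed because an event loop is already running, which may be why the documentation example uses it directly.)</p>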
|
<python><async-await><python-asyncio><large-language-model>
|
2024-04-05 09:24:16
| 1
| 423
|
LUSAQX
|
78,278,746
| 9,236,505
|
Plot for every subgroup of a groupby
|
<pre><code>import pandas as pd

 data = {0: {'VAR1': 'A', 'VAR2': 'X', 'VAL1': 3, 'VAL2': 1},
1: {'VAR1': 'A', 'VAR2': 'X', 'VAL1': 4, 'VAL2': 1},
2: {'VAR1': 'A', 'VAR2': 'X', 'VAL1': 5, 'VAL2': 1},
3: {'VAR1': 'A', 'VAR2': 'Y', 'VAL1': 3, 'VAL2': 2},
4: {'VAR1': 'A', 'VAR2': 'Y', 'VAL1': 4, 'VAL2': 2},
5: {'VAR1': 'A', 'VAR2': 'Y', 'VAL1': 5, 'VAL2': 2},
6: {'VAR1': 'A', 'VAR2': 'Z', 'VAL1': 3, 'VAL2': 3},
7: {'VAR1': 'A', 'VAR2': 'Z', 'VAL1': 4, 'VAL2': 3},
8: {'VAR1': 'A', 'VAR2': 'Z', 'VAL1': 5, 'VAL2': 3},
9: {'VAR1': 'B', 'VAR2': 'X', 'VAL1': 3, 'VAL2': 1},
10: {'VAR1': 'B', 'VAR2': 'X', 'VAL1': 4, 'VAL2': 1},
11: {'VAR1': 'B', 'VAR2': 'X', 'VAL1': 5, 'VAL2': 1},
12: {'VAR1': 'B', 'VAR2': 'Y', 'VAL1': 3, 'VAL2': 2},
13: {'VAR1': 'B', 'VAR2': 'Y', 'VAL1': 4, 'VAL2': 2},
14: {'VAR1': 'B', 'VAR2': 'Y', 'VAL1': 5, 'VAL2': 2},
15: {'VAR1': 'B', 'VAR2': 'Z', 'VAL1': 3, 'VAL2': 3},
16: {'VAR1': 'B', 'VAR2': 'Z', 'VAL1': 4, 'VAL2': 3},
17: {'VAR1': 'B', 'VAR2': 'Z', 'VAL1': 5, 'VAL2': 3},
18: {'VAR1': 'C', 'VAR2': 'X', 'VAL1': 3, 'VAL2': 1},
19: {'VAR1': 'C', 'VAR2': 'X', 'VAL1': 4, 'VAL2': 1},
20: {'VAR1': 'C', 'VAR2': 'X', 'VAL1': 5, 'VAL2': 1},
21: {'VAR1': 'C', 'VAR2': 'Y', 'VAL1': 3, 'VAL2': 2},
22: {'VAR1': 'C', 'VAR2': 'Y', 'VAL1': 4, 'VAL2': 2},
23: {'VAR1': 'C', 'VAR2': 'Y', 'VAL1': 5, 'VAL2': 2},
24: {'VAR1': 'C', 'VAR2': 'Z', 'VAL1': 3, 'VAL2': 3},
25: {'VAR1': 'C', 'VAR2': 'Z', 'VAL1': 4, 'VAL2': 3},
26: {'VAR1': 'C', 'VAR2': 'Z', 'VAL1': 5, 'VAL2': 3}}
df = pd.DataFrame.from_dict(data, orient='index')
</code></pre>
<p>I would like to achieve:</p>
<ul>
<li>new axes for every unique element in <em>VAR1</em></li>
<li>new scatter plot of <em>VAL1</em>(x-value) and <em>VAL2</em>(y-value) for elements in <em>VAR2</em> for every axes from <em>VAR1</em></li>
</ul>
<p>Example for axes of <em>VAR1=A</em></p>
<p><a href="https://i.sstatic.net/T7cs8.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/T7cs8.png" alt="enter image description here" /></a></p>
<p>I could not figure out how to do it with the groupby.</p>
<p>My approach is not very good/correct:</p>
<pre><code>group_var1 = df.groupby('VAR1')
for name_var1, grouped_var1 in group_var1:
i = 0
fig, axes = plt.subplots(nrows=3, ncols=1,figsize=(20, 8), tight_layout=True)
group_var2 = grouped_var1.groupby('VAR2')
for name_var2, grouped_var2 in group_var2:
grouped_var2.plot(kind='scatter', ax=axes[i], x='VAL1', y='VAL2')
i+=1
</code></pre>
<p>EDIT:</p>
<p>This works, but I highly dislike this approach:</p>
<pre><code>group_var1 = df.groupby('VAR1')
fig, axes = plt.subplots(nrows=3, ncols=1,figsize=(20, 8), tight_layout=True)
i = 0
for name_var1, grouped_var1 in group_var1:
group_var2 = grouped_var1.groupby('VAR2')
for name_var2, grouped_var2 in group_var2:
grouped_var2.plot(kind='scatter', ax=axes[i], x='VAL2', y='VAL1', c=['red','green','yellow'])
i+=1
</code></pre>
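<p>A sketch of the pairing idea I am circling around (one possible cleanup, shown on a smaller frame): <code>zip</code> the axes with the groups so no manual counter is needed:</p>

```python
import matplotlib
matplotlib.use("Agg")  # headless backend for the sketch
import matplotlib.pyplot as plt
import pandas as pd

df = pd.DataFrame({'VAR1': ['A', 'A', 'B', 'B'],
                   'VAR2': ['X', 'Y', 'X', 'Y'],
                   'VAL1': [3, 4, 3, 4],
                   'VAL2': [1, 2, 1, 2]})

# one subplot row per unique VAR1, one scatter per VAR2 subgroup
fig, axes = plt.subplots(nrows=df['VAR1'].nunique(), ncols=1)
for ax, (name1, grp1) in zip(axes, df.groupby('VAR1')):
    for name2, grp2 in grp1.groupby('VAR2'):
        ax.scatter(grp2['VAL1'], grp2['VAL2'], label=name2)
    ax.set_title(name1)
```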
|
<python><python-3.x><pandas><matplotlib>
|
2024-04-05 08:47:58
| 1
| 336
|
Paul
|
78,278,716
| 1,900,384
|
Thread-safe read from a fast-growing list?
|
<p>Let's say, we have one list shared across two Python processes:</p>
<ul>
<li>a <strong>data logger process</strong>, which rapidly appends (immutable) objects to the list, and</li>
<li>a <strong>collector process</strong>, which occasionally reads the currently available list for further processing.</li>
</ul>
<p>How would I make that construct run smoothly and in a thread-safe way? Ideally, I would like the list to keep growing during a collect and to just collect the elements that have been available so far. As <code>list.append</code> is atomic (see <a href="https://docs.python.org/3/faq/library.html#what-kinds-of-global-value-mutation-are-thread-safe" rel="nofollow noreferrer">Python3/FAQ</a>), I fear that doing something like the following on collect</p>
<pre class="lang-py prettyprint-override"><code>def collect(shared_list):
# see how many elements safely exist
initial_length = len(shared_list)
# and copy those for further processing
# (even if the list has grown/is growing in the meanwhile)
collected_list = shared_list[:initial_length]
</code></pre>
<p>may lock the <code>shared_list</code> against new <code>append</code>s for too long. I would love to just let it grow in the meantime, but in that case the list could be reorganized in memory when it runs out of allocated memory.</p>
<p>Of course, we do not have to stay with the native <code>list</code> if any other data structure is more appropriate for this use case.</p>
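<p>For what it's worth, one structure that seems to fit this pattern (a sketch, not a claim about my exact constraints) is a queue: the logger keeps putting while the collector drains only what is already there, with no long-held lock over a whole list:</p>

```python
import queue

q = queue.Queue()  # multiprocessing.Queue offers the same interface across processes
for i in range(5):  # the "logger" side keeps appending
    q.put(i)

def drain(q):
    # the "collector" side takes whatever is currently available and returns
    items = []
    while True:
        try:
            items.append(q.get_nowait())
        except queue.Empty:
            return items

collected = drain(q)
```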
|
<python><list><multithreading><thread-safety>
|
2024-04-05 08:41:06
| 0
| 2,201
|
matheburg
|
78,278,705
| 10,200,497
|
How to select first N number of groups based on values of a column conditionally and groupby two columns?
|
<p>This is a follow up to this <a href="https://stackoverflow.com/questions/78278506/how-to-select-first-n-number-of-groups-based-on-values-of-a-column-conditionally#78278536">post</a></p>
<p>This is my DataFrame:</p>
<pre><code>df = pd.DataFrame(
{
'a': [10, 15, 20, 25, 30, 35, 40, 45, 50, 55, 60, 65, 70, 10, 22],
'b': [1, 1, 1, -1, -1, -1, -1, 2, 2, 2, 2, -1, -1, -1, -1],
'c': [25, 25, 25, 45, 45, 45, 45, 65, 65, 65, 65, 40, 40, 30, 30],
'main': ['x', 'x', 'x', 'x', 'x', 'x', 'x', 'y', 'y', 'y', 'y', 'y', 'y', 'y', 'y']
}
)
</code></pre>
<p>Expected output: Groupby <code>main</code> AND <code>c</code>:</p>
<pre><code> a b c main
0 10 1 25 x
1 15 1 25 x
2 20 1 25 x
3 25 -1 45 x
4 30 -1 45 x
5 35 -1 45 x
6 40 -1 45 x
11 65 -1 40 y
12 70 -1 40 y
13 10 -1 30 y
14 22 -1 30 y
</code></pre>
<p>The process is as follows (note that the <code>groupby</code> is done by TWO columns):</p>
<p>So for each <code>main</code>:</p>
<p><strong>a)</strong> Selecting the group in which all of the <code>b</code> values are <code>1</code>. In my data and in this <code>df</code> there is only one group meeting this condition.</p>
<p><strong>b)</strong> Selecting the first two groups (from the top of <code>df</code>) whose <code>b</code> values are all <code>-1</code>.</p>
<p>Note that it is possible that my data contains no groups satisfying condition a) or b). If that is the case, returning whatever matches the criteria is fine: the output could be only one group, or no groups at all.</p>
<p>The groups that I want are shown below:</p>
<p><a href="https://i.sstatic.net/ocygK.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/ocygK.png" alt="enter image description here" /></a></p>
<p>This is my attempt based on this <a href="https://stackoverflow.com/a/78278536/10200497">answer</a> but it appears that something else must change:</p>
<pre><code># identify groups with all 1
m1 = df['b'].eq(1).groupby(df['c', 'main']).transform('all')
# identify groups with all -1
m2 = df['b'].eq(-1).groupby(df['c', 'main']).transform('all')
# keep rows of first 2 groups with all -1
m3 = df[['c', 'main']].isin(df.loc[m2, ['c', 'main']].unique()[:2])
# select m1 OR m3
out = df[m1 | m3]
</code></pre>
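<p>For reference, here is the direction I suspect the fix needs to take (a sketch, not verified against my full data): group by both keys at once, and rank the all-<code>-1</code> groups within each <code>main</code> by first appearance:</p>

```python
import pandas as pd

df = pd.DataFrame({
    'a': [10, 15, 20, 25, 30, 35, 40, 45, 50, 55, 60, 65, 70, 10, 22],
    'b': [1, 1, 1, -1, -1, -1, -1, 2, 2, 2, 2, -1, -1, -1, -1],
    'c': [25, 25, 25, 45, 45, 45, 45, 65, 65, 65, 65, 40, 40, 30, 30],
    'main': ['x'] * 7 + ['y'] * 8,
})

grp = df.groupby(['main', 'c'], sort=False)['b']
m1 = grp.transform(lambda s: s.eq(1).all())    # groups whose b is all 1
m2 = grp.transform(lambda s: s.eq(-1).all())   # groups whose b is all -1
# within each `main`, keep only the first two all--1 groups (order of appearance)
m3 = (df.loc[m2]
        .groupby('main', sort=False)['c']
        .transform(lambda s: s.isin(s.unique()[:2]))
        .reindex(df.index, fill_value=False))
out = df[m1 | m3]
```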
|
<python><pandas><group-by>
|
2024-04-05 08:39:06
| 2
| 2,679
|
AmirX
|
78,278,659
| 448,317
|
Is mypy contradicting itself or is it just me? Mypy is giving an error on a variable but not on the exact same literal
|
<p>The following code:</p>
<pre><code>def foo(bar: dict[ int | float , int | float]) -> None:
pass
foo({1: 1})
bas = {1: 1}
foo(bas)
</code></pre>
<p>Triggers the following mypy error:</p>
<pre><code>6: error: Argument 1 to "foo" has incompatible type "dict[int, int]"; expected "dict[int | float, int | float]" [arg-type]
</code></pre>
<ol>
<li>The error is in line 6 not in line 4: i.e. using a literal is OK,
but not the same value as a variable. Why?</li>
<li>Why is <code>dict[int, int]</code> not compatible with <code>dict[int | float, int | float]</code></li>
</ol>
<p>I ran into this error using pyarrow, in the function <code>Table.replace_schema_metadata</code>.</p>
<ol start="3">
<li>What can I do to get around this error (apart from <code># type: ignore[arg-type]</code>)?</li>
<li>Is this an error in mypy or in pyarrow?</li>
</ol>
<p>Using Python 3.11.3 and mypy 1.9.0 (and pyarrow 15.0.1)</p>
<p>Your help is appreciated - if for nothing else - just for my curiosity and sanity :-)</p>
<p>Thanks
Troels</p>
<p>Edit: For pyarrow context the following code will result in a mypy error:</p>
<pre><code>metadata = table.schema.metadata
assert metadata is not None
metadata[b'my-metadata'] = b'interesting stuff'
table = table.replace_schema_metadata(metadata)
</code></pre>
<p>This arises from pyarrow-stubs containing:</p>
<pre><code>class Schema(_Weakrefable):
metadata: dict[bytes, bytes] | None
...
class Table(_PandasConvertible):
...
def replace_schema_metadata(
self: _Self, metadata: dict[str | bytes, str | bytes] | None = ...
) -> _Self: ...
</code></pre>
<p>So the datatypes are directly compatible.</p>
|
<python><mypy><python-typing><pyarrow>
|
2024-04-05 08:30:19
| 1
| 864
|
Troels Blum
|
78,278,506
| 10,200,497
|
How to select first N number of groups based on values of a column conditionally?
|
<p>This is my DataFrame:</p>
<pre><code>import pandas as pd
df = pd.DataFrame(
{
'a': [10, 15, 20, 25, 30, 35, 40, 45, 50, 55, 60, 65, 70, 10, 22],
'b': [1, 1, 1, -1, -1, -1, -1, 2, 2, 2, 2, -1, -1, -1, -1],
'c': [25, 25, 25, 45, 45, 45, 45, 65, 65, 65, 65, 40, 40, 30, 30]
}
)
</code></pre>
<p>The expected output: Grouping <code>df</code> by <code>c</code> and a condition:</p>
<pre><code> a b c
0 10 1 25
1 15 1 25
2 20 1 25
3 25 -1 45
4 30 -1 45
5 35 -1 45
6 40 -1 45
11 65 -1 40
12 70 -1 40
</code></pre>
<p>The process is as follows:</p>
<p><strong>a)</strong> Selecting the group in which all of the <code>b</code> values are <code>1</code>. In my data and in this <code>df</code> there is only one group meeting this condition.</p>
<p><strong>b)</strong> Selecting the first two groups (from the top of <code>df</code>) whose <code>b</code> values are all <code>-1</code>.</p>
<p>For example:</p>
<p>a) Group 25 is selected.</p>
<p>b) There are three groups with this condition. First two groups are: Group 45 and 40.</p>
<p>Note that it is possible that my data contains no groups satisfying condition a) or b). If that is the case, returning whatever matches the criteria is fine: the output could be only one group, or no groups at all.</p>
<p>The groups that I want are shown below:</p>
<p><a href="https://i.sstatic.net/6jqeH.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/6jqeH.png" alt="enter image description here" /></a></p>
<p>These are my attempts that got very close:</p>
<pre><code>df1 = df.groupby('c').filter(lambda g: g.b.eq(1).all())
gb = df.groupby('c')
new_gb = pd.concat([gb.get_group(group) for i, group in enumerate(gb.groups) if i < 2])
</code></pre>
|
<python><pandas><dataframe>
|
2024-04-05 07:56:33
| 2
| 2,679
|
AmirX
|
78,278,504
| 11,328,614
|
trio and rich, live update, abandon_on_cancel, console cursor vanishes
|
<p>I would like to combine <code>async</code> code with a <code>rich</code> TUI in a small tool to observe some hardware states.<br />
The tool should run endlessly after it has been started, so that whoever wants to control the hardware does not need to start the tool again and again. However, during development I would like to be able to hard-break via <code>Ctrl-C</code>.</p>
<p>The <code>trio</code> part manages scheduling of subprocesses and parsing console output whereas the rich part manages a colourful and easily recognizable console output.</p>
<p>I can run any code asynchronously except the live update of the rich tui.</p>
<p>Therefore, I wrap the live update in a thread and start it via the <code>trio</code> <code>nursery</code>.
In order to make it interruptible via <code>Ctrl-C</code>, I specify <code>abandon_on_cancel=True</code>.</p>
<pre class="lang-py prettyprint-override"><code>async with trio.open_nursery() as nursery:
nursery.start_soon(
lambda: trio.to_thread.run_sync(screen_layout_refresher, abandon_on_cancel=True))
nursery.start_soon(async_func1, params...)
nursery.start_soon(async_func2, params...)
nursery.start_soon(async_func3, params...)
</code></pre>
<pre class="lang-py prettyprint-override"><code>def screen_layout_refresher():
with Live(the_tui_layout, refresh_per_second=1, screen=True): # the_tui_layout is of type rich.layout.Layout()
while True:
# Update some values here
</code></pre>
<p>So here is my question:</p>
<p>Everything works fine, except that after pressing <code>Ctrl-C</code> the console cursor disappears, which is sort of nasty if you have to do some console stuff until the next debug cycle.</p>
<p>Does somebody have an explanation for this behaviour?</p>
<p>I can only guess that rich does not reset the cursor colour due to the hard <code>Ctrl-C</code> break?
But then why would the cursor become black on a black background? I do not use a <code>black</code> colour in my console output.</p>
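<p>(For the record, my current workaround guess, based on standard terminal behaviour rather than anything rich-specific: re-emit the "show cursor" ANSI sequence after the abrupt exit, since <code>Live</code>'s cleanup never gets to run.)</p>

```python
import sys

SHOW_CURSOR = "\x1b[?25h"  # standard DECTCEM "show cursor" escape

def restore_cursor():
    # safe to call even when the cursor is already visible
    sys.stdout.write(SHOW_CURSOR)
    sys.stdout.flush()

restore_cursor()
```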
|
<python><multithreading><asynchronous><console-application><rich>
|
2024-04-05 07:55:28
| 0
| 1,132
|
WΓΆr Du Schnaffzig
|
78,278,082
| 6,480,751
|
Python zlib.compress not accepting 3 arguments?
|
<p>System: Windows</p>
<p>I am trying to run <code>zlib.compress(save_file_data, level=1, wbits=-zlib.MAX_WBITS)</code>, but I keep getting</p>
<pre><code>return zlib.compress(save_file_data, level=1, wbits=-zlib.MAX_WBITS)
TypeError: compress() takes at most 2 arguments (3 given)
</code></pre>
<p>even though <a href="https://docs.python.org/3/library/zlib.html" rel="nofollow noreferrer">https://docs.python.org/3/library/zlib.html</a> shows <code>compress</code> accepting those arguments. I have tried reinstalling zlib and confirmed that I'm running Python 3.</p>
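<p>In case it is relevant, my fallback sketch (assuming the missing parameter is a version issue; if I recall correctly, <code>wbits</code> was only added to <code>zlib.compress</code> in Python 3.11) uses <code>compressobj</code>, which has accepted <code>wbits</code> for much longer:</p>

```python
import zlib

save_file_data = b"example payload"
# compressobj accepts wbits on old and new Pythons alike
co = zlib.compressobj(1, zlib.DEFLATED, -zlib.MAX_WBITS)
raw = co.compress(save_file_data) + co.flush()
# round-trip with the matching negative wbits (raw deflate, no header)
assert zlib.decompress(raw, -zlib.MAX_WBITS) == save_file_data
```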
|
<python><python-zlib>
|
2024-04-05 06:22:26
| 1
| 365
|
Kilbo
|
78,277,473
| 11,402,025
|
load testing for burst requests using locust
|
<p>I am trying to simulate a burst of 1200 requests in Locust at a rate of 600 requests per second.</p>
<pre><code>@task
def send_burst_request(self):
def send_request():
self.client.get(
f"/api/v1/test",
)
# Adjust the number of requests as needed
futures = [send_request() for _ in range(10)]
# Wait for all requests to complete
for future in futures:
future.result() # Wait for the request to complete
</code></pre>
<p>The number of users selected in the Locust UI will be 60, ramping at 60 users per second.</p>
<p>Are my assumptions correct? Thank you.</p>
|
<python><performance><testing><load><locust>
|
2024-04-05 02:30:00
| 0
| 1,712
|
Tanu
|
78,277,200
| 395,857
|
How can I download a HuggingFace dataset via HuggingFace CLI while keeping the original filenames?
|
<p>I downloaded a <a href="https://huggingface.co/datasets/huuuyeah/MeetingBank_Audio" rel="nofollow noreferrer">dataset</a> hosted on HuggingFace via the <a href="https://huggingface.co/docs/huggingface_hub/main/en/guides/cli" rel="nofollow noreferrer">HuggingFace CLI</a> as follows:</p>
<pre><code>pip install huggingface_hub[hf_transfer]
huggingface-cli download huuuyeah/MeetingBank_Audio --repo-type dataset --local-dir-use-symlinks False
</code></pre>
<p>However, the downloaded files don't have their original filenames. Instead, their hashes (git-sha or sha256, depending on whether they're LFS files) are used as filenames:</p>
<pre><code>--- /home/dernonco/.cache/huggingface/hub/datasets--huuuyeah--MeetingBank_Audio/blobs ---------------------------------------------
/..
12.9 GiB [##########] b581945ddee5e673fa2059afb25274b1523f270687b5253cb8aa72865760ebc0
3.9 GiB [### ] 86ebd2861a42b27168d75f346dd72f0e2b9eaee0afb90890beff15d025af45c6
3.9 GiB [## ] f9b81739ee30450b930390e1155e2cdea1b3063379ba6fd9253513eba1ab1e05
3.7 GiB [## ] e54c7d123ad93f4144eebdca2827ef81ea1ac282ddd2243386528cd157c02f36
3.7 GiB [## ] 736e225a7dd38a7987d0745b1b2f545ab701cfdf1f639874f5743b5bfb5cb1e1
3.7 GiB [## ] 0687246c92ec87b54e1c5fe623a77b650c02e6884e17a6f0fb4052a862d928d0
3.6 GiB [## ] 2becb5f9878b95f1b12622f50868f5855221985f05910d7cc759e6be074e6b8e
3.5 GiB [## ] 2208068c69b39c46ee9fac862da3c060c58b61adcaee1b3e6aa5d6d5dd3eba86
3.5 GiB [## ] caf87e71232cbb8a31960a26ba30b9412c15893c831ef118196c581cfd3a3779
3.4 GiB [## ] dc88cbf0ef45351bdc1f53c4396466d3e79874803719e266630ed6c3ad911d6a
3.4 GiB [## ] f05f7fb3b55b6840ebc4ada5daa28742bbae6ad4dcc35781dc811024f27a1b4e
3.4 GiB [## ] 88bd831618b36330ef5cd84b7ccbc4d5f3f55955c0b223208bc2244b27fb2d78
3.4 GiB [## ] bf80943b3389ddbeb8fb8a56af2d7fa5d09c5af076aac93f54ad921ee382c77d
3.3 GiB [## ] 83b2627e644c9ad0486e3bd966b02f014722e668d26b9d52394c974fcf2fdcf8
3.2 GiB [## ] e52e7b086dabd431b25cf309e1fe513190543e058f4e7a2d8e05b22821ded4fe
3.2 GiB [## ] 4fe583348f3ac118f34c7b93b6a187ba4e21a5a7f5b6ca1a6adbce1cc6d563a9
3.2 GiB [## ] ae6b6faca3bbd75e7ca99ccf20b55b017393bf09022efb8459293afffe06dc6e
3.1 GiB [## ] 5865379a894f8dc40703bdc1093d45fda67d5e1a742a2eebddd37e1a00f067fd
3.1 GiB [## ] cd346324b29390a589926ccab7187ae818cf5f9fcbaf8ecc95313e6cdfab86bc
3.0 GiB [## ] 914eb2b1174a662e3faebac82f6b5591a54def39a9d3a7e5ab2347ecc87a982f
2.9 GiB [## ] 24789f33332e8539b2ee72a0a489c0f4d0c6103f7f9600de660d78543ade9111
2.9 GiB [## ] 35e8da5f831b36416c9569014c58f881a0a30c00db9f3caae0d7db6a8fd3c694
2.8 GiB [## ] d5127e0298661d40a343d58759ed6298f9d2ef02d5c4f6a30bd9e07bc5423317
2.8 GiB [## ] 1b4e1951da2462ca77d94d220a58c97f64caa2b2defe4df95feed9defcee6ca7
2.8 GiB [## ] 75a4725625c095d98ecef7d68d384d7b1201ace046ef02ed499776b0ac02b61e
2.8 GiB [## ] fefbbc3e87be522b7e571c78a188aba35bd5d282cf8f41257097a621af64ff60
Total disk usage: 184.8 GiB Apparent size: 184.8 GiB Items: 85
</code></pre>
<p>How can I download a HuggingFace dataset via HuggingFace CLI while keeping the original filenames?</p>
|
<python><download><dataset><huggingface-datasets>
|
2024-04-05 00:18:13
| 3
| 84,585
|
Franck Dernoncourt
|
78,277,014
| 4,697,337
|
Typing of the `__getitem__` method of numpy array subclasses
|
<p>Let's consider a subclass of <code>numpy</code>'s <code>ndarray</code> class:</p>
<pre><code>import numpy as np
class ArraySubClass(np.ndarray):
def __new__(cls, input_array: np.ndarray):
obj = np.asarray(input_array).view(cls)
return obj
</code></pre>
<p>Then, taking a slice of an <code>ArraySubClass</code> returns an object of the same type, as explained in the <a href="https://numpy.org/doc/stable/user/basics.subclassing.html#creating-new-from-template" rel="nofollow noreferrer">documentation</a>:</p>
<pre><code>>>> type(ArraySubClass(np.zeros((3, 3)))[:, 0])
<class '__main__.ArraySubClass'>
</code></pre>
<p>So far so good, but I start getting unexpected behavior when I use the static type checker <code>pyright</code>, as seen in the example below:</p>
<pre><code>def f(x: ArraySubClass):
print(x)
f(ArraySubClass(np.zeros((3, 3)))[:, 0])
</code></pre>
<p>The last line raises an error from <code>pyright</code>:</p>
<pre><code> Argument of type "ndarray[Any, Unknown]" cannot be assigned to parameter "x" of type "ArraySubClass" in function "f"
Β Β "ndarray[Any, Unknown]" is incompatible with "ArraySubClass"
</code></pre>
<p>What causes this behavior? Is this a <code>pyright</code> bug? Is the signature of the <code>__getitem__</code> method from <code>np.ndarray</code> given by the type hints incorrect? Or perhaps should I override this method in <code>ArraySubClass</code> with the correct signature?</p>
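<p>One workaround I am considering (a sketch, not sure it is the intended fix): re-<code>view</code> the slice, which is a no-op at runtime but hands the checker the subclass type back:</p>

```python
import numpy as np

class ArraySubClass(np.ndarray):
    def __new__(cls, input_array: np.ndarray):
        return np.asarray(input_array).view(cls)

# .view(ArraySubClass) does not copy; it only re-wraps the slice,
# and its declared return type is the requested class
col = ArraySubClass(np.zeros((3, 3)))[:, 0].view(ArraySubClass)
```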
|
<python><numpy><python-typing><pyright>
|
2024-04-04 22:58:06
| 1
| 396
|
Kevlar
|
78,277,012
| 20,302,906
|
Field 'id' expected a number but got dict when seeding database django
|
<p>I'm looking for a way to store a question fetched from an external API in a model called <code>Question</code>. My <code>views.py</code> module does that by requesting data based on the user's difficulty choice, then renders what's fetched in a form. It's a pretty straightforward method, but for some reason I get <code>Field 'id' expected a number but got {'type': 'multiple'...}</code>.</p>
<p>I deleted code, taken from the <a href="https://stackoverflow.com/questions/49735906/how-to-implement-singleton-in-django">last answer</a> to this question, that was intended to create another singleton model and was interfering with this implementation. Then I ran <code>./manage.py makemigrations</code> and <code>./manage.py migrate</code> to reflect the changes, but the exception was raised again. After that, I deleted migrations.py and the cached files and ran the same two commands; nothing changed.</p>
<p>Can anyone point out what I'm missing/doing wrong please?</p>
<p><em>models.py</em></p>
<pre><code>from django.db import models
class Question(models.Model):
type = models.TextField()
difficulty = models.TextField()
category = models.TextField()
question = models.TextField()
correct_answer = models.TextField()
incorrect_answers = models.TextField()
</code></pre>
<p><em>views.py</em></p>
<pre><code>from django.shortcuts import render, HttpResponse
from .forms import QuestionForm, QuestionLevelForm
from urllib.request import URLError
from .models import Question
import requests
def process_question(request):
if "level" in request.POST:
return fetch_question(request)
elif "answer" in request.POST:
return check_answer(request)
else:
form = QuestionLevelForm()
return render(request, "log/question.html", {"form": form})
def fetch_question(request):
match request.POST["difficulty"]:
case "easy":
url = "https://opentdb.com/api.php?amount=1&category=9&difficulty=easy&type=multiple"
case "medium":
url = "https://opentdb.com/api.php?amount=1&category=9&difficulty=medium&type=multiple"
case "hard":
url = "https://opentdb.com/api.php?amount=1&category=9&difficulty=hard&type=multiple"
try:
response = requests.get(url)
except URLError as e:
HttpResponse("Couldn't fetch data, try again")
else:
render_question(request, response.json())
def render_question(request, response):
content = response["results"][0]
question = {
"type": content["type"],
"difficulty": content["difficulty"],
"category": content["category"],
"question": content["question"],
"correct_answer": content["correct_answer"],
"incorrect_answers": content["incorrect_answers"],
}
form = QuestionForm(question)
model = Question(question)
model.save()
context = {"question": question, "form": form}
render(request, "./log/templates/log/question.html", context)
</code></pre>
<p><em>test_views.py</em></p>
<pre><code>from django.http import HttpRequest
from django.test import TestCase, Client
from .. import views
client = Client()
class QuestionTest(TestCase):
def test_page_load(self):
response = self.client.get("/log/question")
self.assertEqual(response["content-type"], "text/html; charset=utf-8")
self.assertTemplateUsed(response, "log/question.html")
self.assertContains(response, "Choose your question level", status_code=200)
def test_fetch_question(self):
request = HttpRequest()
request.method = "POST"
request.POST["level"] = "level"
request.POST["difficulty"] = "hard"
request.META["HTTP_HOST"] = "localhost"
response = views.process_question(request)
self.assertEqual(response.status_code, 200)
</code></pre>
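<p>To illustrate the positional-vs-keyword distinction I suspect is involved (shown with a plain function standing in for the model constructor; this is an assumption, not project code):</p>

```python
def make_question(id=None, type="", difficulty="", **rest):
    # stand-in for Question(...): the first positional slot is `id`
    return {"id": id, "type": type, "difficulty": difficulty, **rest}

payload = {"type": "multiple", "difficulty": "hard"}
good = make_question(**payload)  # keys become keyword arguments
bad = make_question(payload)     # the whole dict lands in `id`
```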
|
<python><django><model>
|
2024-04-04 22:57:50
| 1
| 367
|
wavesinaroom
|
78,276,978
| 8,838,303
|
Python: Shuffle not working in class initialiser
|
<p>I want to write a class "SomeClass" that stores a shuffled version of the list it is initialised with. However, the list is shuffled in exactly the same way for every instance of my class. Could you please tell me what I am doing wrong?</p>
<pre><code>import random
class SomeClass:
def __init__(self, list):
self.shuffled_list =list
random.shuffle(self.shuffled_list)
list = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
A = SomeClass(list)
B = SomeClass(list)
# Why are both lists the same?
print(A.shuffled_list)
print('\n\n')
print(B.shuffled_list)
</code></pre>
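<p>My current guess, sketched below as an assumption: both instances alias the very same list object, so the shuffle mutates it in place for both. Copying it in <code>__init__</code> should give each instance its own list:</p>

```python
import random

class SomeClass:
    def __init__(self, items):
        self.shuffled_list = list(items)  # copy: no state shared between instances
        random.shuffle(self.shuffled_list)

original = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
a = SomeClass(original)
```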
|
<python><class><random><shuffle>
|
2024-04-04 22:49:33
| 1
| 475
|
3nondatur
|
78,276,904
| 9,108,781
|
How to get all docs and their corresponding embeddings from a Chromadb collection
|
<p>I'd like to get all docs and their corresponding embeddings from a collection for a pairwise cosine similarity calculation to identify very similar documents. Using the following function,</p>
<pre><code>def get_docs_and_embeddings(collection):
results = collection.get(include=["documents", "embeddings"])
docs = results["documents"]
embeddings = results["embeddings"]
return docs, embeddings
</code></pre>
<p>I find that embeddings are not in the same order as the documents. I'm using an external tool to calculate pairwise similarity because I find it quicker than the built-in query. What is the right way to approach this? Thanks a lot!</p>
|
<python><chromadb>
|
2024-04-04 22:26:03
| 0
| 943
|
Victor Wang
|
78,276,864
| 16,912,844
|
Python Nesting Async and Sync Context Manager
|
<p>When using mixed context managers (async and sync) in Python, is there any standard for whether async should go inside sync, or vice versa? Maybe it depends on the scenario; is there a best practice for when async should be inside sync, or sync inside async?</p>
<p>put async within the sync cm:</p>
<pre class="lang-py prettyprint-override"><code>with sync_cm as scm:
async with async_cm as acm:
...
</code></pre>
<p>put sync within the async cm:</p>
<pre class="lang-py prettyprint-override"><code>async with async_cm as acm:
with sync_cm as scm:
...
</code></pre>
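<p>A related option I found (sketch below): <code>contextlib.AsyncExitStack</code> can hold both kinds in one scope, which sidesteps the nesting question entirely; exits still run in reverse entry order:</p>

```python
import asyncio
from contextlib import AsyncExitStack, asynccontextmanager, contextmanager

order = []

@contextmanager
def sync_cm():
    order.append("sync enter")
    yield
    order.append("sync exit")

@asynccontextmanager
async def async_cm():
    order.append("async enter")
    yield
    order.append("async exit")

async def main():
    # both managers live in one stack; teardown is LIFO
    async with AsyncExitStack() as stack:
        stack.enter_context(sync_cm())
        await stack.enter_async_context(async_cm())

asyncio.run(main())
```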
|
<python><contextmanager>
|
2024-04-04 22:11:19
| 0
| 317
|
YTKme
|
78,276,786
| 5,374,161
|
Gemini Python API Deadline Exceeded
|
<p>I have been trying to use Google's Gemini API from Python, and I keep running into <code>504 Deadline Exceeded</code> errors. This is not a one-off either: I have tried 20+ times using the Python SDK and it failed every time, while curl returned in 5-6 seconds every time.</p>
<p>I initially thought this was a network problem, but a simple <code>curl</code> call works well.</p>
<p>I have a very basic starter script:</p>
<pre><code>genai.configure(api_key=key)
model = genai.GenerativeModel('gemini-pro')
response = model.generate_content("What is the meaning of life")
print(response.text)
</code></pre>
<p>The response I get:</p>
<pre><code>Traceback (most recent call last):
File "../test.py", line 5, in <module>
response = model.generate_content("What is the meaning of life.")
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "../env/lib/python3.12/site-packages/google/generativeai/generative_models.py", line 232, in generate_content
response = self._client.generate_content(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "../env/lib/python3.12/site-packages/google/ai/generativelanguage_v1beta/services/generative_service/client.py", line 566, in generate_content
response = rpc(
^^^^
File "../env/lib/python3.12/site-packages/google/api_core/gapic_v1/method.py", line 131, in __call__
return wrapped_func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "../env/lib/python3.12/site-packages/google/api_core/retry/retry_unary.py", line 293, in retry_wrapped_func
return retry_target(
^^^^^^^^^^^^^
File "../env/lib/python3.12/site-packages/google/api_core/retry/retry_unary.py", line 153, in retry_target
_retry_error_helper(
File ..env/lib/python3.12/site-packages/google/api_core/retry/retry_base.py", line 212, in _retry_error_helper
raise final_exc from source_exc
File "../env/lib/python3.12/site-packages/google/api_core/retry/retry_unary.py", line 144, in retry_target
result = target()
^^^^^^^^
File "../env/lib/python3.12/site-packages/google/api_core/timeout.py", line 120, in func_with_timeout
return func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "../env/lib/python3.12/site-packages/google/api_core/grpc_helpers.py", line 78, in error_remapped_callable
raise exceptions.from_grpc_error(exc) from exc
google.api_core.exceptions.DeadlineExceeded: 504 Deadline Exceeded
</code></pre>
<p>The curl call though works as expected:</p>
<pre><code>curl -H 'Content-Type: application/json' \
-d '{"contents":[{"parts":[{"text":"What is the meaning of life"}]}]}' \
-X POST 'https://generativelanguage.googleapis.com/v1beta/models/gemini-pro:generateContent?key={key}'
{
"candidates": [
{
"content": {
"parts": [
{
"text": "In the life....
</code></pre>
<p>Can anyone help me figure out what is wrong here?</p>
|
<python><large-language-model><google-ai-platform><google-gemini><google-cloud-ai>
|
2024-04-04 21:49:24
| 2
| 667
|
Krishh
|
78,276,722
| 13,354,617
|
How to constrain a QRubberBand within an image/QLabel?
|
<p>I'm creating an image viewer in PyQt5, and it's all working. I need to add a cropping function where the user can draw a rectangle over the image; the rectangle will be used later.</p>
<p>Currently, it's drawing the QRubberBand perfectly, but I want it to stay inside the image and not draw outside the image like this:</p>
<p><a href="https://i.sstatic.net/GlbLD.gif" rel="nofollow noreferrer"><img src="https://i.sstatic.net/GlbLD.gif" alt="enter image description here" /></a></p>
<p>I already tried this:</p>
<pre><code>if eventQMouseEvent.pos().x() < self.imageLabel.size().width() and eventQMouseEvent.pos().y() < self.imageLabel.size().height():
self.currentQRubberBand.setGeometry(QRect(self.originQPoint, eventQMouseEvent.pos()).normalized())
</code></pre>
<p>but it's buggy/blocky and not working as intended.</p>
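<p>The geometry I am after, framework-free (Qt's <code>QRect.normalized()</code> plus <code>QRect.intersected()</code> should express the same idea, if I read the docs correctly):</p>

```python
def clamp_point(x, y, width, height):
    # keep a dragged point inside a width x height image, so the rubber band
    # can never extend past the label
    return min(max(x, 0), width), min(max(y, 0), height)

assert clamp_point(-5, 10, 100, 80) == (0, 10)    # clamped to the left edge
assert clamp_point(150, 90, 100, 80) == (100, 80) # clamped to bottom-right
```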
<p>Here's the full working code so you can recreate the current app:</p>
<pre><code>from PyQt5 import QtCore
from PyQt5.QtCore import Qt, QRect
from PyQt5.QtGui import QImage, QPixmap, QPalette, QPainter
from PyQt5.QtWidgets import QLabel, QSizePolicy, QScrollArea, QMessageBox, QMainWindow, QMenu, QAction, qApp, QFileDialog, QApplication, QRubberBand
import sys
class QImageViewer(QMainWindow):
def __init__(self):
super().__init__()
self.scaleFactor = 0.0
self.opened = False
self.imageLabel = QLabel()
self.imageLabel.setBackgroundRole(QPalette.Base)
self.imageLabel.setSizePolicy(QSizePolicy.Ignored, QSizePolicy.Ignored)
self.imageLabel.setScaledContents(True)
self.scrollArea = QScrollArea()
self.scrollArea.setBackgroundRole(QPalette.Dark)
self.scrollArea.setWidget(self.imageLabel)
self.scrollArea.setVisible(False)
self.setCentralWidget(self.scrollArea)
self.createActions()
self.createMenus()
self.setWindowTitle("Image Viewer")
self.resize(800, 600)
def open(self):
options = QFileDialog.Options()
# fileName = QFileDialog.getOpenFileName(self, "Open File", QDir.currentPath())
fileName, _ = QFileDialog.getOpenFileName(self, 'QFileDialog.getOpenFileName()', '',
'Images (*.png *.jpeg *.jpg *.bmp *.gif)', options=options)
if fileName:
image = QImage(fileName)
if image.isNull():
QMessageBox.information(self, "Image Viewer", "Cannot load %s." % fileName)
return
self.opened = True
self.imageLabel.setPixmap(QPixmap.fromImage(image))
self.scaleFactor = 1.0
self.scrollArea.setVisible(True)
self.fitToWindowAct.setEnabled(True)
self.updateActions()
if not self.fitToWindowAct.isChecked():
self.imageLabel.adjustSize()
def zoomIn(self):
self.scaleImage(1.25)
def zoomOut(self):
self.scaleImage(0.8)
def normalSize(self):
self.imageLabel.adjustSize()
self.scaleFactor = 1.0
def fitToWindow(self):
fitToWindow = self.fitToWindowAct.isChecked()
self.scrollArea.setWidgetResizable(fitToWindow)
if not fitToWindow:
self.normalSize()
self.updateActions()
def createActions(self):
self.openAct = QAction("&Open...", self, shortcut="Ctrl+O", triggered=self.open)
self.exitAct = QAction("E&xit", self, shortcut="Ctrl+Q", triggered=self.close)
self.zoomInAct = QAction("Zoom &In (25%)", self, shortcut="Ctrl++", enabled=False, triggered=self.zoomIn)
self.zoomOutAct = QAction("Zoom &Out (25%)", self, shortcut="Ctrl+-", enabled=False, triggered=self.zoomOut)
self.normalSizeAct = QAction("&Normal Size", self, shortcut="Ctrl+S", enabled=False, triggered=self.normalSize)
self.fitToWindowAct = QAction("&Fit to Window", self, enabled=False, checkable=True, shortcut="Ctrl+F",
triggered=self.fitToWindow)
def createMenus(self):
self.fileMenu = QMenu("&File", self)
self.fileMenu.addAction(self.openAct)
self.fileMenu.addSeparator()
self.fileMenu.addAction(self.exitAct)
self.viewMenu = QMenu("&View", self)
self.viewMenu.addAction(self.zoomInAct)
self.viewMenu.addAction(self.zoomOutAct)
self.viewMenu.addAction(self.normalSizeAct)
self.viewMenu.addSeparator()
self.viewMenu.addAction(self.fitToWindowAct)
self.menuBar().addMenu(self.fileMenu)
self.menuBar().addMenu(self.viewMenu)
def updateActions(self):
self.zoomInAct.setEnabled(not self.fitToWindowAct.isChecked())
self.zoomOutAct.setEnabled(not self.fitToWindowAct.isChecked())
self.normalSizeAct.setEnabled(not self.fitToWindowAct.isChecked())
def scaleImage(self, factor):
self.scaleFactor *= factor
self.imageLabel.resize(self.scaleFactor * self.imageLabel.pixmap().size())
self.adjustScrollBar(self.scrollArea.horizontalScrollBar(), factor)
self.adjustScrollBar(self.scrollArea.verticalScrollBar(), factor)
self.zoomInAct.setEnabled(self.scaleFactor < 3.0)
self.zoomOutAct.setEnabled(self.scaleFactor > 0.333)
def adjustScrollBar(self, scrollBar, factor):
scrollBar.setValue(int(factor * scrollBar.value()
+ ((factor - 1) * scrollBar.pageStep() / 2)))
def mousePressEvent (self, eventQMouseEvent):
if self.opened:
self.originQPoint = eventQMouseEvent.pos()
self.currentQRubberBand = QRubberBand(QRubberBand.Rectangle, self)
self.currentQRubberBand.setGeometry(QRect(self.originQPoint, QtCore.QSize()))
self.currentQRubberBand.show()
def mouseMoveEvent (self, eventQMouseEvent):
if self.opened:
self.currentQRubberBand.setGeometry(QRect(self.originQPoint, eventQMouseEvent.pos()).normalized())
def mouseReleaseEvent (self, eventQMouseEvent):
if self.opened:
self.currentQRubberBand.hide()
currentQRect = self.currentQRubberBand.geometry()
self.currentQRubberBand.deleteLater()
if __name__ == '__main__':
app = QApplication(sys.argv)
imageViewer = QImageViewer()
imageViewer.show()
sys.exit(app.exec_())
</code></pre>
<p>I found this <a href="https://stackoverflow.com/questions/54219597/how-to-constrain-a-qrubberband-within-a-qlabel">answer</a>, but it's in C++ and I couldn't follow it.</p>
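<p>From what I can tell, the core of the C++ answer is clamping the mouse position to the label's rectangle before building the selection rect. The clamping itself would look roughly like this in Python (an untested sketch; the function name is mine):</p>

```python
def clamp_to_rect(x, y, rect_x, rect_y, rect_w, rect_h):
    # Keep a point inside the rectangle spanning
    # (rect_x, rect_y) to (rect_x + rect_w, rect_y + rect_h).
    cx = min(max(x, rect_x), rect_x + rect_w)
    cy = min(max(y, rect_y), rect_y + rect_h)
    return cx, cy
```

<p>The idea would be to apply this to <code>eventQMouseEvent.pos()</code> in <code>mouseMoveEvent</code>, using <code>self.imageLabel</code>'s geometry mapped into the window's coordinates, but I'm not sure how to wire that up correctly with the scroll area.</p>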
|
<python><pyqt><pyqt5>
|
2024-04-04 21:32:31
| 1
| 369
|
Mhmd Admn
|
78,276,702
| 22,221,987
|
Tkinter event loop can't process some key combinations when using the <KeyPress> event
|
<p>Here is the code from another <a href="https://stackoverflow.com/questions/39606019/tkinter-using-two-keys-at-the-same-time">question</a>, which demonstrates the problem very well (I found it while looking for a solution to my problem).<br />
When you press certain key combinations, the last key (the 3rd key in the examples below) doesn't generate an event when pressed.</p>
<p>Here is the code:</p>
<pre><code>from tkinter import *
root = Tk()
var = StringVar()
Label(root, textvariable=var).pack()
history = []
def keyup(e):
print(e.keycode)
if e.keycode in history:
history.pop(history.index(e.keycode))
var.set(str(history))
def keydown(e):
if e.keycode not in history:
history.append(e.keycode)
var.set(str(history))
frame = Frame(root, width=200, height=200)
frame.bind("<KeyPress>", keydown)
frame.bind("<KeyRelease>", keyup)
frame.pack()
frame.focus_set()
root.mainloop()
</code></pre>
<p>And here are the key combinations that don't work normally:</p>
<ul>
<li>d + space + return</li>
<li>e + space + return</li>
<li>shift + e + space</li>
</ul>
<p>And so on. The first two keys generate events, but the last key, it seems, doesn't generate an event at all.</p>
<p>I've spent the whole day trying to find a solution, but this behaviour doesn't make any sense, because other combinations work fine (w + space + return, for example).<br />
Tkinter can also handle more than 3 keypresses. Just test different combinations and you will see that there are a couple of combinations that are simply impossible to press.</p>
<p>What is the reason for this behaviour, and how can I fix it?</p>
<p><strong>UPD</strong>: Several tests with Python 3.10 and 3.12 on Windows 11 and Python 3.11 on Ubuntu 23.10 showed the same behaviour for me (some sequences are not processed).</p>
|
<python><tkinter><events><keyboard><tk-toolkit>
|
2024-04-04 21:27:02
| 0
| 309
|
Mika
|
78,276,576
| 58,553
|
Return http status code 401 when login with invalid credentials
|
<p>How would I go about changing a <code>django</code> application that uses <code>allauth</code> so that it returns a 401 response when invalid login credentials are provided?</p>
<p>I have tried to put custom logic in a custom <code>ModelBackend</code> but found no way to actually modify the response status code there.</p>
<p>I have also tried to put custom logic in a <code>CustomAccountAdapter.authentication_failed</code>, but ran into the same issue there: I found no way to change the status code.</p>
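<p>For reference, the adapter attempt looked roughly like this — a fragment that assumes a configured django/allauth project, with class and module names of my own choosing:</p>

```python
# myapp/adapters.py
# wired up via ACCOUNT_ADAPTER = "myapp.adapters.CustomAccountAdapter" in settings.py
from allauth.account.adapter import DefaultAccountAdapter

class CustomAccountAdapter(DefaultAccountAdapter):
    def authentication_failed(self, request, **credentials):
        # This hook receives only the request and the credentials;
        # there is no response object here to set a status code on.
        super().authentication_failed(request, **credentials)
```

<p>So the question remains where, if anywhere, the 401 can actually be set.</p>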
|
<python><django><django-allauth>
|
2024-04-04 20:56:48
| 1
| 38,870
|
Peter
|