QuestionId
int64 74.8M
79.8M
| UserId
int64 56
29.4M
| QuestionTitle
stringlengths 15
150
| QuestionBody
stringlengths 40
40.3k
| Tags
stringlengths 8
101
| CreationDate
stringdate 2022-12-10 09:42:47
2025-11-01 19:08:18
| AnswerCount
int64 0
44
| UserExpertiseLevel
int64 301
888k
| UserDisplayName
stringlengths 3
30
β |
|---|---|---|---|---|---|---|---|---|
78,398,642
| 7,175,213
|
Maximum number of elements on list whose value sum up to at most K in O(log n)
|
<p>I have this exercise to do:</p>
<p>Let M be a positive integer, and V = β¨v1, . . . , vnβ© an ordered vector <strong>where the value of item vi is 5Γi.</strong></p>
<p>Present an O(log(n)) algorithm that returns the maximum number of items from V that can be selected given that the sum of the selected items is less than or equal to M (repeated selection of items is not allowed).</p>
<p>First I did a naive solution where:</p>
<ul>
<li>I know the sum of elements on the array will be always less than the M/5 index on the array. So a did <code>for i=0..i<=M/5</code> and found the sum. Moreover this is not <code>O(log(n))</code> because given a big M, bigger than the sum of all elements on the array, it will be <code>O(n).</code></li>
</ul>
<p>Therefore I tried divide and conquer, I thought a binary search should be the way. But actually no because if I do that I will sum the minimum elements that can be chosen to arrive in M, not the maximum. My code is below</p>
<pre><code> def max_items_selected_recursive2(M, V, left, right, max):
if len(V[left:right]) == 1:
return max
mid = math.floor((left+right)/2)
if V[mid] >= M:
return max_items_selected_recursive2(M - V[mid], V, mid + 1, right, max+1)
else:
if M - V[mid] >= 0:
return max_items_selected_recursive2(M - V[mid], V, left, mid - 1, max+1)
else:
return max_items_selected_recursive2(M, V, left, mid - 1, max)
</code></pre>
<p>example of call</p>
<pre><code>M = 20
V = [0, 5, 10, 15, 20]
max_items_selected_recursive2(M, V, 0, len(V) - 1, 0) +1 # +1 since all have the O element
</code></pre>
<p>Any ideas on how to do this on <strong>O(log n)</strong>?</p>
|
<python><algorithm>
|
2024-04-28 15:12:06
| 4
| 1,148
|
Catarina Nogueira
|
78,398,590
| 3,067,485
|
How to expose serial interface to the docker host?
|
<p>I have a python dockerized service that creates a TTY interface using PSL <code>pty</code>.</p>
<pre><code>import os, pty
server, device = pty.openpty()
tty_name = os.ttyname(device)
</code></pre>
<p>I can see this interface created within the container <code>/dev/pts/0</code>:</p>
<pre><code>ls -l /dev/pts
total 0
crw--w---- 1 aqlog tty 136, 0 Apr 28 14:50 0
</code></pre>
<p>Now I want to expose it (I guess through volumes) to the host in order to have external service to connect it using <code>serial</code>:</p>
<pre><code>import serial
ser = serial.Serial("/dev/tty_host")
</code></pre>
<p>I cannot simply map a volume:</p>
<pre><code>volumes:
- "/dev/tty_host:/dev/pts/0"
</code></pre>
<p>Because the file does not exists a image build it is created at container run.</p>
<p>I tried to create it at image build using touch, but it fails due to permission error:</p>
<pre><code>RUN su -c "touch /dev/pts/0"
</code></pre>
<p>So my question is: Is it possible to expose to docker host a serial interface created within a container? If so, how should I proceed?</p>
|
<python><docker><tty><volumes>
|
2024-04-28 14:56:50
| 0
| 11,564
|
jlandercy
|
78,398,573
| 243,031
|
Not able to access resource from another stack in AWS CDK
|
<p>I am using AWS CDK with python, and my directory structure is as below.</p>
<pre><code>βββ README.md
βββ app.py
βββ bootstrap_scripts
βΒ Β βββ install_httpd.sh
βββ cdk.context.json
βββ cdk.json
βββ resource_stacks
Β Β βββ __init__.py
Β Β βββ custom_vpc.py
Β Β βββ web_server_stack.py
</code></pre>
<p>The files as as below.</p>
<p><strong>resource_stacks/custom_vpc.py</strong></p>
<pre><code>from aws_cdk import (
# Duration,
Stack,
CfnOutput,
aws_ec2 as _ec2,
# aws_sqs as sqs,
)
from constructs import Construct
class CustomVpcStack(Stack):
def __init__(self, scope: Construct, construct_id: str, **kwargs) -> None:
super().__init__(scope, construct_id, **kwargs)
prod_configs = self.node.try_get_context("envs")["prod"]
self.custom_vpc = _ec2.Vpc(
self,
"customeVpcId",
ip_addresses=_ec2.IpAddresses.cidr(prod_configs["vpc_configs"]["vpc_cidr"]),
max_azs=2,
nat_gateways=1,
subnet_configuration=[
_ec2.SubnetConfiguration(
name="publicSubnet",
cidr_mask=prod_configs["vpc_configs"]["cidr_mask"],
subnet_type=_ec2.SubnetType.PUBLIC,
),
_ec2.SubnetConfiguration(
name="privateSubnet",
cidr_mask=prod_configs["vpc_configs"]["cidr_mask"],
subnet_type=_ec2.SubnetType.PRIVATE_WITH_EGRESS,
),
_ec2.SubnetConfiguration(
name="dbSubnet",
cidr_mask=prod_configs["vpc_configs"]["cidr_mask"],
subnet_type=_ec2.SubnetType.PRIVATE_ISOLATED,
),
])
CfnOutput(self,
"customVpcOutput",
value=custom_vpc.vpc_id,
export_name="customeVpcId")
</code></pre>
<p><strong>app.py</strong></p>
<pre><code>#!/usr/bin/env python3
import os
import aws_cdk as cdk
#from mycdk.mycdk_stack import MycdkStack
#from mycdk.my_env_based_stack import MyEnvBasedStack
#from mycdk.context_var_stack import ContextVarStack
from resource_stacks.custom_vpc_including_tags import CustomVpcStack
from resource_stacks.web_server_stack import WebServerStack
env_USA = cdk.Environment(account="00000000000", region="us-east-1")
app = cdk.App()
# MycdkStack(app, "MycdkStack",
# # If you don't specify 'env', this stack will be environment-agnostic.
# # Account/Region-dependent features and context lookups will not work,
# # but a single synthesized template can be deployed anywhere.
# # Uncomment the next line to specialize this stack for the AWS Account
# # and Region that are implied by the current CLI configuration.
# #env=cdk.Environment(account=os.getenv('CDK_DEFAULT_ACCOUNT'), region=os.getenv('CDK_DEFAULT_REGION')),
# # Uncomment the next line if you know exactly what Account and Region you
# # want to deploy the stack to. */
# #env=cdk.Environment(account='123456789012', region='us-east-1'),
# # For more information, see https://docs.aws.amazon.com/cdk/latest/guide/environments.html
# )
custom_vpc = CustomVpcStack(app, "MyCustomVpcStack")
WebServerStack(app, "MyWebServerStack", vpc=custom_vpc.custom_vpc, env=env_USA)
app.synth()
</code></pre>
<p>When I try to do <code>ls</code>, it gives error.</p>
<pre><code>(venv) veer99@veers-MacBook-Air mycdk % cdk ls
[WARNING] aws-cdk-lib.aws_ec2.VpcProps#cidr is deprecated.
Use ipAddresses instead
This API will be removed in the next major release.
Traceback (most recent call last):
File "/Users/abcd/src/mycdk/app.py", line 58, in <module>
WebServerStack(app, "MyWebServerStack", vpc=custom_vpc.custom_vpc, env=env_USA)
^^^^^^^^^^^^^^^^^^^^^
AttributeError: 'CustomVpcStack' object has no attribute 'custom_vpc'
Subprocess exited with error 1
</code></pre>
<p>How can I access <code>custom_vpc</code> variable from the <code>CustomVpcStack</code> ?</p>
|
<python><amazon-web-services><aws-cdk>
|
2024-04-28 14:52:31
| 1
| 21,411
|
NPatel
|
78,398,311
| 10,710,625
|
AttributeError: module 'collections' has no attribute 'Sized' when trying to load a pickled model
|
<p>I am trying to load a pretrained model but I'm hitting an error when I do. AttributeError: module 'collections' has no attribute 'Sized'</p>
<pre><code>from fastai import *
from fastai.vision import *
from matplotlib.pyplot import imshow
import numpy as np
import matplotlib.pyplot as plt
from skimage.transform import resize
from PIL import Image
learn = load_learner("", "model.pkl")
</code></pre>
<p>These are the version I'm using.</p>
<pre><code>torch 1.11.0
torchvision 0.12.0
python 3.10.14
fastai 1.0.60
</code></pre>
<p>Can someone help me fix this problem?</p>
<pre><code>File c:\Users\lib\site-packages\fastai\basic_train.py:620, in load_learner(path, file, test, tfm_y, **db_kwargs)
618 state = torch.load(source, map_location='cpu') if defaults.device == torch.device('cpu') else torch.load(source)
619 model = state.pop('model')
--> 620 src = LabelLists.load_state(path, state.pop('data'))
621 if test is not None: src.add_test(test, tfm_y=tfm_y)
622 data = src.databunch(**db_kwargs)
File c:\Users\lib\site-packages\fastai\data_block.py:578, in LabelLists.load_state(cls, path, state)
576 "Create a `LabelLists` with empty sets from the serialized `state`."
577 path = Path(path)
--> 578 train_ds = LabelList.load_state(path, state)
579 valid_ds = LabelList.load_state(path, state)
580 return LabelLists(path, train=train_ds, valid=valid_ds)
File c:\Users\lib\site-packages\fastai\data_block.py:690, in LabelList.load_state(cls, path, state)
687 @classmethod
688 def load_state(cls, path:PathOrStr, state:dict) -> 'LabelList':
689 "Create a `LabelList` from `state`."
--> 690 x = state['x_cls']([], path=path, processor=state['x_proc'], ignore_empty=True)
691 y = state['y_cls']([], path=path, processor=state['y_proc'], ignore_empty=True)
...
--> 298 if not isinstance(a, collections.Sized) and not getattr(a,'__array_interface__',False):
299 a = list(a)
300 if np.int_==np.int32 and dtype is None and is_listy(a) and len(a) and isinstance(a[0],int):
AttributeError: module 'collections' has no attribute 'Sized'
</code></pre>
|
<python><pytorch><fast-ai>
|
2024-04-28 13:24:35
| 1
| 739
|
the phoenix
|
78,398,307
| 547,231
|
Convert a sequence of exr files to mp4 using moviepy
|
<p>I have a sequence of <code>exr</code> files which I want to convert into a video using <code>moviepy</code>. When Since the colors in the <code>exr</code>s need be converted (otherwise the video appears almost black) I need to specify a color transfer characteristic. When I run <code>ffmpeg</code> directly using</p>
<blockquote>
<p><code>ffmpeg -y -apply_trc iec61966_2_1 -i input_%d.exr -vcodec mpeg4 output.mp4</code></p>
</blockquote>
<p>everything is working perfectly fine. However, if I read the <code>exr</code>s using <code>clip = ImageSequenceClip("folder_to_my_exrs/", fps = 24)</code> and try to write the video using <code>.write_videofile("output.mp4", codec = "mpeg4", ffmpeg_params = ["-apply_trc", "iec61966_2_1"])</code> I'm receiving the error</p>
<blockquote>
<p>b'Codec AVOption apply_trc (color transfer characteristics to apply to EXR linear input) specified for output file #0 (output.mp4) is not an encoding option.\r\n'</p>
</blockquote>
<p>I don't really understand this. What can I do?</p>
|
<python><ffmpeg><moviepy>
|
2024-04-28 13:22:28
| 0
| 18,343
|
0xbadf00d
|
78,398,227
| 2,971,574
|
How to show all columns on a table with lots of columns?
|
<p>I've got an Ibis table (duckdb backend) t and I'm in interactive mode (<code>ibis.options.interactive=True</code>). When I try to see the first couple of rows (<code>t.head()</code>) ibis only shows the first three columns:
<a href="https://i.sstatic.net/D11hEe4E.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/D11hEe4E.png" alt="enter image description here" /></a></p>
<p>As I'd like to see all columns I wonder whether there is an option to see all the columns of my table, something like pandas <code>pd.set_option('display.max_columns', None)</code>? Unfortunately I couldn't find anything like that in <code>ibis.options</code>.</p>
|
<python><ibis>
|
2024-04-28 12:55:12
| 1
| 555
|
the_economist
|
78,397,889
| 17,915,481
|
Can not create shortcut of file using google drive api
|
<p>Here is sample code that i am trying to run.</p>
<pre class="lang-py prettyprint-override"><code>import requests
api_endpoint = 'https://www.googleapis.com/upload/drive/v3/files?supportsAllDrives=true'
headers = {
'Authorization': f'Bearer {access_token}'
}
json={"name": "a.jpg", "mimeType": "application/vnd.google-apps.shortcut", "parents": [{"id": "folder_id"}], "shortcutDetails": {"targetId": "target_file_id"}}
res = requests.post(api_endpoint, headers=headers, json=json)
print(res.status_code)
print(res.text)
</code></pre>
<p>Even this code does not pickup that name and it just create a text file that contain json data.</p>
|
<python><google-drive-api>
|
2024-04-28 10:45:25
| 1
| 512
|
jak bin
|
78,397,757
| 3,142,695
|
Iterate over folder name list to copy folder if existing
|
<p>This is how I copy the folder <code>src</code> from source to workspace destination - replacing the content if already existing:</p>
<pre><code>def copy_and_replace(directory):
target = os.path.join('workspace', directory, 'src')
source = os.path.join('..', 'source', directory, 'src')
try:
if os.path.isdir(target):
shutil.rmtree(target)
print(f"Directory removed successfully: {target}")
shutil.copytree(source, target)
print(f"Directory copied successfully: {target}")
except OSError as o:
print(f"Error, {o.strerror}: {directory}")
</code></pre>
<p>Now I would like to define two more folders, which should also be copied if existing. Of course I could do this like this:</p>
<pre><code>def copy_and_replace(directory):
targetSrc = os.path.join('workspace', directory, 'src')
sourceSrc = os.path.join('..', 'source', directory, 'src')
targetPbl = os.path.join('workspace', directory, 'public')
sourcePbl = os.path.join('..', 'source', directory, 'public')
try:
if os.path.isdir(targetSrc): # src folder is always existing
shutil.rmtree(targetSrc)
print(f"Directory removed successfully: {targetSrc}")
shutil.copytree(sourceSrc, targetSrc)
print(f"Directory copied successfully: {targetSrc}")
if os.path.isdir(sourcePbl): # Check if public folder is existing in source
if os.path.isdir(targetPbl):
shutil.rmtree(targetPbl)
print(f"Directory removed successfully: {targetPbl}")
shutil.copytree(sourcePbl, targetPbl)
print(f"Directory copied successfully: {targetPbl}")
# do this last part for multiple folder like `specs`, `assets` and so on
except OSError as o:
print(f"Error, {o.strerror}: {directory}")
</code></pre>
<p>Is it possible to use a list of folder names and iterate over those items to check of folder is existing in source, and if it is exsting to copy it to target workspace?</p>
|
<python>
|
2024-04-28 09:54:14
| 2
| 17,484
|
user3142695
|
78,397,575
| 15,018,688
|
Proper way to handle data from a generator using PySpark and writing it to parquet?
|
<p>I have a data generator that returns an iterable. It fetches data from a specified date range. The total data it will fetch is close to a billion objects. My goal is to fetch all of this data, write it to a folder (local filesystem) and (already set up and working) use pyspark readstream to read these files and write it onto my database (cassandra). The scope of my question is limited to the fetching and writing the data onto the local filesystem.</p>
<p>I am trying to:</p>
<ol>
<li>Fetch data using a generator.</li>
<li>Accumulate a batch of data</li>
<li>When <code>batch == batch_size</code>, create a Spark Dataframe and,</li>
<li>Write this Dataframe as .parquet format.</li>
</ol>
<p>However, the issues I run into are segmentation faults (core dumped) and java error connection reset. I am very new to PySpark and am trying to educate myself on how to properly set it up and implement the workflow that I am going for. Specifically, I would appreciate <strong>help and feedback</strong> on the <strong>spark configuration</strong> and the primary error I keep getting consistenly:</p>
<pre><code>Failed to write to data/data/polygon/trades/batch_99 on attempt 1: An error occurred while calling o1955.parquet.
: org.apache.spark.SparkException: Job aborted due to stage failure: Task 8 in stage 99.0 failed 1 times, most recent failure: Lost task 8.0 in stage 99.0 (TID 3176) (furkan-desktop executor driver): java.net.SocketException: Connection reset
</code></pre>
<h2>Here is a screenshot of the spark UI:</h2>
<p><a href="https://i.sstatic.net/e8sPT7zv.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/e8sPT7zv.png" alt="sparkUIScreenshot" /></a></p>
<h1>Current Implementation:</h1>
<pre><code>from datetime import datetime
import logging
import time
from dotenv import load_dotenv
import pandas as pd
import os
from pyspark.sql import SparkSession
from pyspark.sql.types import (
StructType,
StructField,
IntegerType,
StringType,
LongType,
DoubleType,
ArrayType,
)
import uuid
from polygon import RESTClient
logging.basicConfig(
level=logging.INFO,
format="%(asctime)s - %(levelname)s - %(message)s",
filename="spark_logs/logfile.log",
filemode="w",
)
from_date = datetime(2021, 3, 10)
to_date = datetime(2021, 3, 31)
load_dotenv()
client = RESTClient(os.getenv("POLYGON_API_KEY"))
# Create Spark session
spark = (
SparkSession.builder.appName("TradeDataProcessing")
.master("local[*]")
.config("spark.driver.memory", "16g")
.config("spark.executor.instances", "8")
.config("spark.executor.memory", "16g")
.config("spark.executor.memoryOverhead", "4g")
.config("spark.executor.cores", "4")
.config("spark.memory.offHeap.enabled", "true")
.config("spark.memory.offHeap.size", "4g")
.config("spark.serializer", "org.apache.spark.serializer.KryoSerializer")
.config("spark.kryoserializer.buffer.max", "512m")
.config("spark.network.timeout", "800s")
.config("spark.executor.heartbeatInterval", "20000ms")
.config("spark.dynamicAllocation.enabled", "true")
.config("spark.dynamicAllocation.minExecutors", "1")
.config("spark.dynamicAllocation.maxExecutors", "8")
.config("spark.dynamicAllocation.initialExecutors", "4")
.getOrCreate()
)
# Define the schema corresponding to the JSON structure
schema = StructType(
[
StructField("exchange", IntegerType(), False),
StructField("id", StringType(), False),
StructField("participant_timestamp", LongType(), False),
StructField("price", DoubleType(), False),
StructField("size", DoubleType(), False),
StructField("conditions", ArrayType(IntegerType()), True),
]
)
def ensure_directory_exists(path):
"""Ensure directory exists, create if it doesn't"""
if not os.path.exists(path):
os.makedirs(path)
# Convert dates to timestamps or use them directly based on your API requirements
from_timestamp = from_date.timestamp() * 1e9 # Adjusting for nanoseconds
to_timestamp = to_date.timestamp() * 1e9
# Initialize the trades iterator with the specified parameters
trades_iterator = client.list_trades(
"X:BTC-USD",
timestamp_gte=int(from_timestamp),
timestamp_lte=int(to_timestamp),
limit=1_000,
sort="asc",
order="asc",
)
trades = []
file_index = 0
output_dir = "data/data/polygon/trades" # Output directory
ensure_directory_exists(output_dir) # Make sure the output directory exists
def robust_write(df, path, max_retries=3, retry_delay=5):
"""Attempts to write a DataFrame to a path with retries on failure."""
for attempt in range(max_retries):
try:
df.write.partitionBy("exchange").mode("append").parquet(path)
print(f"Successfully written to {path}")
return
except Exception as e:
logging.error(f"Failed to write to {path} on attempt {attempt+1}: {e}")
time.sleep(retry_delay) # Wait before retrying
logging.critical(f"Failed to write to {path} after {max_retries} attempts.")
for trade in trades_iterator:
trade_data = {
"exchange": int(trade.exchange),
"id": str(uuid.uuid4()),
"participant_timestamp": trade.participant_timestamp,
"price": float(trade.price),
"size": float(trade.size),
"conditions": trade.conditions if trade.conditions else [],
}
trades.append(trade_data)
if len(trades) == 10000:
df = spark.createDataFrame(trades, schema=schema)
file_name = f"{output_dir}/batch_{file_index}"
robust_write(df, file_name)
trades = []
file_index += 1
if trades:
df = spark.createDataFrame(trades, schema=schema)
file_name = f"{output_dir}/batch_{file_index}"
robust_write(df, file_name)
</code></pre>
|
<python><apache-spark><pyspark>
|
2024-04-28 08:46:28
| 1
| 556
|
Furkan ΓztΓΌrk
|
78,397,447
| 1,497,418
|
Sending PDF to gemini-1.5-pro-latest fail with 500 error
|
<p>Below is a snippet that sends a prompt and a pdf file to gemini.</p>
<pre><code>def runGemini(prompt, model, document=None):
model = genai.GenerativeModel(model)
content = prompt
if document:
content = [
document,
"What is this document?",
]
response = model.generate_content(
content,
)
return response.text
</code></pre>
<p>And here I generate the content part for PDF:</p>
<pre><code>import base64
bytes = pdf_file.getvalue()
base64_bytes = base64.b64encode(bytes).decode("utf-8")
blob = glm.Blob(mime_type="application/pdf", data=base64_bytes)
document = glm.Part(inline_data=blob)
</code></pre>
|
<python><google-api-python-client><google-gemini>
|
2024-04-28 07:50:17
| 2
| 2,578
|
Walucas
|
78,397,357
| 16,277,807
|
How to use mediaquery or responsiveness in python reflex?
|
<p>The <a href="https://reflex.dev" rel="nofollow noreferrer">official documentation</a> isnβt talking anything at all about CSS mediaquery.
particularly when I would like to place 2 divs and 1 image in single row (rx.hstack) in larger screen and want to place as single column (rx.vstack) in smaller screen, how can I conditionally set that?</p>
<p>Iβve tried wrapping media property inside style dictionary. But no luck!</p>
<p><code>display</code> property works like this:
<code>display=[βnoneβ, βnoneβ, βnoneβ, βflexβ, βflexβ]</code></p>
<p>But how to organize rows or columns differently based on reading display sizes?</p>
|
<python><python-reflex>
|
2024-04-28 07:11:26
| 0
| 514
|
Mahmudur Rahman Shovon
|
78,397,222
| 13,942,929
|
Cython: How can I import multiple class object from a single file?
|
<p>So this is how I currently structure my files.</p>
<pre class="lang-none prettyprint-override"><code>CPP Folder
Cython Folder
βββ setup.py
βββ Geometry
βββ Circle
β βββ __init__.py
β βββ Circle.pyx
β βββ Circle.pyi
β βββ Circle.pxd
βββ Point
βββ __init__.py
βββ Point.pyx
βββ Point.pyi
βββ Point.pxd
</code></pre>
<p>And this is my setup.py file</p>
<pre class="lang-py prettyprint-override"><code>from setuptools import setup, Extension, find_packages
from Cython.Build import cythonize
point_extension = Extension(
"Geometry.Point.Point",
[
"src/Geometry/Point/Point.pyx",
"../cpp/lib/src/Point.cpp"
],
include_dirs=[
"../cpp/lib/include"
],
libraries=["Some Library"],
library_dirs=[
"src/Geometry"
],
extra_compile_args=["-std=c++17", "-O3"],
language="c++"
)
circle_extension = Extension(
"Geometry.Circle.Circle",
[
"../cpp/lib/src/Circle.cpp",
"src/Geometry/Circle/Circle.pyx",
"../cpp/lib/src/Point.cpp"
],
include_dirs=[
"../cpp/lib/include"
],
libraries=["Some Library"],
library_dirs=[
"src/Geometry"
],
extra_compile_args=["-std=c++17", "-O3"],
language="c++"
)
setup(
name="Geometry",
version="0.1",
packages=find_packages(where="src"),
package_dir={"": "src"},
package_data={
"Geometry.Circle": ["*.so", "*.pyi"],
"Geometry.Point": ["*.so", "*.pyi"]
},
ext_modules=cythonize([point_extension, circle_extension],
compiler_directives={"language_level": 3},
include_path=[
"../../Expression/cython/src/Some Library",
"src/Geometry",
],
annotate=True),
zip_safe=False,
)
</code></pre>
<p>With this setup, when I want to import Circle or Point for testing, I have to do as below:</p>
<pre class="lang-py prettyprint-override"><code>from Geometry.Point import Point
from Geometry.Circle import Circle
</code></pre>
<p>And my goal is to be able to import them in this way: <code>from Geometry import Circle, Point</code></p>
<p>So I think I should structure my file as follows:</p>
<pre class="lang-none prettyprint-override"><code>CPP Folder
Cython Folder
βββ setup.py
βββ Geometry
βββ __init__.py
βββ Geometry.pyx
βββ Geometry.pyi
βββ Geometry.pxd
βββ Circle
β βββ __init__.py
β βββ Circle.pyx
β βββ Circle.pyi
β βββ Circle.pxd
βββ Point
βββ __init__.py
βββ Point.pyx
βββ Point.pyi
βββ Point.pxd
</code></pre>
<p>How should I rewrite my <code>setup.py</code> and write my <code>Geometry.pxd</code>, <code>.pyx</code> and <code>.pyi</code>?
FYI this is a sample of my <code>Point.pxd</code> and <code>Point.pyx</code></p>
<p>[Point.pxd]</p>
<pre class="lang-py prettyprint-override"><code>from libcpp.memory cimport shared_ptr, weak_ptr, make_shared
from Bryan cimport _Bryan, Bryan
cdef extern from "Point.h":
cdef cppclass _Point:
_Point(shared_ptr[_Bryan] x, shared_ptr[_Bryan] y, shared_ptr[_Bryan] z)
shared_ptr[_Bryan] get_x()
shared_ptr[_Bryan] get_y()
shared_ptr[_Bryan] get_z()
cdef class Point:
cdef shared_ptr[_Point] c_point
</code></pre>
<p>[Point.pyx]</p>
<pre class="lang-py prettyprint-override"><code>from Point cimport *
cdef class Point:
def __cinit__(self, Bryan x=Bryan("0", None), Bryan y=Bryan("0", None), Bryan z=Bryan("0", None)):
self.c_point = make_shared[_Point](x.thisptr, y.thisptr, z.thisptr)
def __dealloc(self):
self.c_point.reset()
def get_x(self) -> Bryan:
cdef shared_ptr[_Bryan] result = self.c_point.get().get_x()
cdef Bryan coord = Bryan("", None, make_with_pointer = True)
coord.thisptr = result
return coord
def get_y(self) -> Bryan:
cdef shared_ptr[_Bryan] result = self.c_point.get().get_y()
cdef Bryan coord = Bryan("", None, make_with_pointer = True)
coord.thisptr = result
return coord
def get_z(self) -> Bryan:
cdef shared_ptr[_Bryan] result = self.c_point.get().get_z()
cdef Bryan coord = Bryan("", None, make_with_pointer = True)
coord.thisptr = result
return coord
property x:
def __get__(self):
return self.get_x()
property y:
def __get__(self):
return self.get_y()
property z:
def __get__(self):
return self.get_z()
</code></pre>
<p>Thank you</p>
|
<python><c++><cython><cythonize>
|
2024-04-28 06:00:32
| 1
| 3,779
|
Punreach Rany
|
78,396,795
| 420,867
|
Python `audioop.ulaw2lin` with width 2 via PyVoip creates distorted sped up audio
|
<p>I'm trying to convert u-law audio into 16 bit 8khz raw audio in python using <code>audioop.ulaw2lin(ulaw_audio_bytes, 2)</code>.</p>
<p>The resulting bytes object has the right size, however if I write it to a wave file as 16 bit 8khz raw audio, the result is double speed with medium to low frequency noise and quite low overall volume.</p>
<p>The same audio processed by ulaw2lin with width 1 and saved to as a 8bit 8khz wav file sounds ok (modulo the limitations of the representation and the need to convert to unsigned ints needed by 8bit wav).</p>
<p>What am I missing? In principle I should achieve better quality audio with a 16bit representation.</p>
<p><strong>EDIT: this was being done with in PyVoip.. see my answer below for</strong> explanation.</p>
|
<python><audio><mu-law>
|
2024-04-28 01:02:54
| 1
| 15,330
|
drevicko
|
78,396,733
| 2,444,008
|
ModuleNotFoundError: No module named 'django.db.backends.postgresql' on Ubuntu 24.04
|
<p>Yesterday I switched to Ubuntu 24.04 LTS from windows and having an issue with my django project and tried every suggestion on the web but no luck. I'm getting below issue:</p>
<pre><code>django.core.exceptions.ImproperlyConfigured: 'django.db.backends.postgresql' isn't an available database backend or couldn't be imported. Check the above exception. To use one of the built-in backends, use 'django.db.backends.XXX', where XXX is one of:
'mysql', 'oracle', 'sqlite3'
</code></pre>
<p>Most of the suggesion on the web telling update django but I'm already using the latest version 5.0.4</p>
<p>Also postgresql is working fine. I'm able to connect it.</p>
|
<python><django><postgresql><ubuntu-24.04>
|
2024-04-28 00:18:01
| 2
| 1,093
|
ftdeveloper
|
78,396,693
| 16,852,890
|
Pyspark - How to handle error in for list
|
<p>I've written something to read the location of some lake files dynamically provided by a list called <code>partition_paths</code>:</p>
<pre><code>dfs = [spark.read.parquet(f'{l}') for l in partition_paths]
</code></pre>
<p>I will then combine all these dfs into one in the next line:</p>
<pre><code>df = reduce(DataFrame.unionAll, dfs)
</code></pre>
<p>But it maybe possible that the <code>partition_paths</code> are either built up incorrectly, or the location in the lake simply doesn't exist, so I need to error handle the first line of code. How can I do that so it won't just stop and would continue getting all the dfs?</p>
|
<python><pyspark>
|
2024-04-27 23:52:14
| 1
| 316
|
tommyhmt
|
78,396,677
| 827,371
|
Most efficient way of running correlation of a small sequence against another larger sequence to try to find matching index
|
<p>In Python, I want to take a smaller sequence of numbers, and find the area along a very large sequence of numbers that has the highest correlation to this smaller sequence of numbers.</p>
<p>Is there an efficient way to do this besides brute force ie. taking a subset of the larger sequence of the same length as the smaller sequences, calculating the coefficient, then incrementing the starting index, and doing this over and over again?</p>
<p>Is there some functionality that numpy or some other math package that can do this efficiently?</p>
|
<python><algorithm><numpy><correlation>
|
2024-04-27 23:46:10
| 1
| 1,860
|
steve8918
|
78,396,664
| 2,593,878
|
Python - using parent's method from child's super() call
|
<p>I imagine there are lots of similar questions, but I'm not sure how to search for this situation. Suppose I have the following code structure:</p>
<pre><code>class Parent():
def method(self):
self.helper()
print('Parent method')
def helper(self):
print('Parent helper')
class Child(Parent):
def method(self):
super().method()
print('Child method')
def helper(self):
print('Child helper')
</code></pre>
<p>When I call <code>Child.method()</code>, the <code>super()</code> call runs <code>Parent.method()</code> as expected. However, from <code>Parent.method()</code>, the helper that gets called is <code>Child.helper()</code>. Why does this occur? How can I get this to run <code>Parent.helper()</code> instead?</p>
|
<python><inheritance><parent-child><method-resolution-order>
|
2024-04-27 23:32:18
| 1
| 7,392
|
dkv
|
78,396,642
| 10,262,805
|
module 'openai' has no attribute 'OpenAI'
|
<p>I installed the latest <code>openai</code> version:</p>
<pre><code>pip install openai==1.23.6
</code></pre>
<p>then</p>
<pre><code>from openai import OpenAI
</code></pre>
<p>I get error</p>
<blockquote>
<p>cannot import name 'OpenAI' from 'openai'</p>
</blockquote>
<p>From <a href="https://pypi.org/project/openai/" rel="nofollow noreferrer">https://pypi.org/project/openai/</a> openai version 1.23.6 example</p>
<pre><code>import os
from openai import OpenAI
client = OpenAI(
# This is the default and can be omitted
api_key=os.environ.get("OPENAI_API_KEY"),
)
chat_completion = client.chat.completions.create(
messages=[
{
"role": "user",
"content": "Say this is a test",
}
],
model="gpt-3.5-turbo",
)
</code></pre>
<ul>
<li><p><code> print(openai.__file__)</code> refers to: <code>/home/yilmaz/.local/lib/python3.11/site-packages/openai/__init__.py</code></p>
</li>
<li><p><code>print(dir(openai))</code></p>
<p><code>['APIError', 'Audio', 'Callable', 'ChatCompletion', 'Completion', 'ContextVar', 'Customer', 'Deployment', 'Edit', 'Embedding', 'Engine', 'ErrorObject', 'File', 'FineTune', 'FineTuningJob', 'Image', 'InvalidRequestError', 'Model', 'Moderation', 'OpenAIError', 'Optional', 'TYPE_CHECKING', 'Union', 'VERSION', '__all__', '__annotations__', '__builtins__', '__cached__', '__doc__', '__file__', '__loader__', '__name__', '__package__', '__path__', '__spec__', '__version__', 'aiosession', 'api_base', 'api_key', 'api_key_path', 'api_requestor', 'api_resources', 'api_type', 'api_version', 'app_info', 'ca_bundle_path', 'datalib', 'debug', 'enable_telemetry', 'error', 'log', 'openai_object', 'openai_response', 'organization', 'os', 'proxy', 'requestssession', 'sys', 'util', 'verify_ssl_certs', 'version']</code></p>
</li>
<li><p><code>pip show openai</code> shows correct version: "Version: 1.23.6"</p>
</li>
</ul>
|
<python><openai-api>
|
2024-04-27 23:18:48
| 3
| 50,924
|
Yilmaz
|
78,396,549
| 6,727,976
|
How to run LangChains getting-started examples and getting output?
|
<p>I'm going to learn LangChain and stumble upon their Getting Started section. Because it doesn't work and I'm curious if I am the only person where LangChain examples don't work.</p>
<p>This is their tutorial I am talking about. <a href="https://python.langchain.com/docs/get_started/quickstart/" rel="nofollow noreferrer">https://python.langchain.com/docs/get_started/quickstart/</a></p>
<p>Let's use the very first example:</p>
<pre><code>llm = ChatOpenAI(openai_api_key=api_key)
llm.invoke("how can langsmith help with testing?")
</code></pre>
<p>I wrote some initializing code as well to make ChatOpenAI work:</p>
<pre><code>import os
from langchain_openai import ChatOpenAI
from dotenv import load_dotenv
load_dotenv()
api_key = os.getenv("OPENAI_API_KEY")
llm = ChatOpenAI(openai_api_key=api_key)
llm.invoke("how can langsmith help with testing?")
</code></pre>
<p>The <code>invoke</code> function seems to be executed as I can't see any error message. But I also can't see any further output. Nothing happens.</p>
<p>They even wrote <em>"We can also guide its response with a prompt template."</em>. However, there is not response.</p>
<p>Who can explain to me, what is happening here? And can you probably recommend me a better tutorial instead of that from LangChain?</p>
|
<python><langchain><py-langchain>
|
2024-04-27 22:23:10
| 2
| 1,153
|
MrsBookik
|
78,396,467
| 13,952,135
|
OpenGL pygame cannot get buffers working reliably with vertextPointer and colorPointer
|
<p>So I'm pretty sure I'm just doing something wrong here with how I'm calling the OpenGL.
I've provided an example below where about 30% of the time it shows the semi transparent vertex colored squares on top of a red background. The other 70% of the time it only shows the red background. I tried commenting out various opengl commands to see if I could get it working reliably. This is my attempt at a "minimum" reproducible example from a much larger program.</p>
<p>I tried adding z coordinates to the points and setting GL_DEPTH_TEST but it made no difference it still doesn't work 70% of the time.</p>
<p>Strangely sometimes it will get in a state where the program will work every other time for afew times but perhaps that's just a coincidence.</p>
<p>Also when it does work it works fine and cycles through all the colors for the different squares until the program terminates, so it makes me think it's something wrong with the initialization or some race condition or something.</p>
<pre><code>
import OpenGL.GL as gl
import numpy as np
import pygame as pg
from pygame import locals as pg_locals
import time
class OpenGLWindow:
def __init__(self):
self.use_color_buffer = 0
flags = pg_locals.DOUBLEBUF | pg_locals.OPENGL | pg_locals.RESIZABLE
self.pg_window = pg.display.set_mode(
(800, 600),
flags,
vsync=1,
)
# gl.glTexEnvf(gl.GL_TEXTURE_ENV, gl.GL_TEXTURE_ENV_MODE, gl.GL_MODULATE)
# gl.glEnable(gl.GL_DEPTH_TEST) # we are disabling this for now because the texture z depth and overlay elements aren't configured right yet.
gl.glEnable(gl.GL_BLEND)
gl.glEnable(gl.GL_COLOR_MATERIAL)
gl.glColorMaterial(gl.GL_FRONT_AND_BACK, gl.GL_AMBIENT_AND_DIFFUSE)
gl.glBlendFunc(gl.GL_SRC_ALPHA, gl.GL_ONE_MINUS_SRC_ALPHA)
# gl.glEnable(gl.GL_TEXTURE_2D)
# gl.glLoadIdentity()
self.setup_buffers_2()
def setup_buffers_2(self):
self._vertexBuffer = gl.glGenBuffers(1)
self._colorBuffers = gl.glGenBuffers(3)
gl.glBindBuffer(gl.GL_ARRAY_BUFFER, self._vertexBuffer)
vertices = np.array(
[0.5, 0.5, 0, 0.5, 0, 0, 0.5, 0, -0.5, -0.5, 0, -0.5, 0, 0, -0.5, 0],
dtype="float32",
)
gl.glBufferData(gl.GL_ARRAY_BUFFER, vertices, gl.GL_STATIC_DRAW) # Error
for n in range(3):
gl.glBindBuffer(gl.GL_ARRAY_BUFFER, self._colorBuffers[n])
colors = []
for p in range(8):
for c in range(3):
colors.append((p % 4) * 0.25 if c == n else 0)
colors.append((p % 4) * 0.25)
print(colors)
gl.glBufferData(
gl.GL_ARRAY_BUFFER, np.array(colors, dtype="float32"), gl.GL_STATIC_DRAW
)
def cleanup_buffers_2(self):
gl.glBindBuffer(gl.GL_ARRAY_BUFFER, self._vertexBuffer)
gl.glDeleteBuffers(1, self._vertexBuffer)
gl.glBindBuffer(gl.GL_ARRAY_BUFFER, 0)
for n in range(3):
gl.glBindBuffer(gl.GL_ARRAY_BUFFER, self._colorBuffers[n])
gl.glDeleteBuffers(3, self._colorBuffers)
gl.glBindBuffer(gl.GL_ARRAY_BUFFER, 0)
def render_opengl_2(self):
gl.glClearColor(1.0, 0.0, 0.0, 1.0)
gl.glClear(gl.GL_COLOR_BUFFER_BIT | gl.GL_DEPTH_BUFFER_BIT)
gl.glEnableClientState(gl.GL_VERTEX_ARRAY)
gl.glEnableClientState(gl.GL_COLOR_ARRAY)
gl.glBindBuffer(gl.GL_ARRAY_BUFFER, self._vertexBuffer)
gl.glVertexPointer(2, gl.GL_FLOAT, 0, 0)
gl.glBindBuffer(gl.GL_ARRAY_BUFFER, self._colorBuffers[self.use_color_buffer])
gl.glColorPointer(4, gl.GL_FLOAT, 0, 0)
gl.glDrawArrays(gl.GL_QUADS, 0, 8)
pg.display.flip()
self.use_color_buffer += 1
self.use_color_buffer %= 3
window = OpenGLWindow()
# window.render_opengl_2()
# time.sleep(1)
# window.cleanup_buffers_2()
try:
for _ in range(30):
window.render_opengl_2()
time.sleep(0.1)
finally:
window.cleanup_buffers_2()
</code></pre>
|
<python><opengl><pygame><pyopengl><vertex-buffer>
|
2024-04-27 21:33:41
| 1
| 615
|
Peter
|
78,396,068
| 13,393,940
|
How to plot SHAP summary plots for all classes in multiclass classification
|
<p>I am using XGBoost with SHAP to analyze feature importance in a multiclass classification problem and need help plotting the SHAP summary plots for all classes at once. Currently, I can only generate plots one class at a time.</p>
<pre class="lang-py prettyprint-override"><code>SHAP version: 0.45.0
Python version: 3.10.12
</code></pre>
<p>Here is my code:</p>
<pre class="lang-py prettyprint-override"><code>import xgboost as xgb
import shap
import numpy as np
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.datasets import make_classification
from sklearn.metrics import accuracy_score
# Generate synthetic data
X, y = make_classification(n_samples=500, n_features=20, n_informative=4, n_classes=6, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=42)
# Train a XGBoost model for multiclass classification
model = xgb.XGBClassifier(objective="multi:softprob", random_state=42)
model.fit(X_train, y_train)
</code></pre>
<p>I then tried to plot the shape values:</p>
<pre class="lang-py prettyprint-override"><code># Create a SHAP TreeExplainer
explainer = shap.TreeExplainer(model)
# Calculate SHAP values for the test set
shap_values = explainer.shap_values(X_test)
# Attempt to plot summary for all classes
shap.summary_plot(shap_values, X_test, plot_type="bar")
</code></pre>
<p>I got this interaction plot instead:</p>
<p><a href="https://i.sstatic.net/nS15e6YP.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/nS15e6YP.png" alt="enter image description here" /></a></p>
<p>I remedied the problem with help from <a href="https://stackoverflow.com/questions/78368073/incorrect-array-shapes-for-shap-values-in-binary-classification-when-trying-to-c/78368951#78368951">this</a> post:</p>
<pre class="lang-py prettyprint-override"><code>shap.summary_plot(shap_values[:,:,0], X_test, plot_type="bar")
</code></pre>
<p>which gives a normal bar plot for class 0:</p>
<p><a href="https://i.sstatic.net/BHbGh93z.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/BHbGh93z.png" alt="enter image description here" /></a></p>
<p>I can then do the same with classes 1, 2, 3, etc.</p>
<p>The question is, how can you make a summary plot for all the classes? I.e., a single plot showing the contribution of a feature to each class?</p>
|
<python><machine-learning><classification><shap><xgbclassifier>
|
2024-04-27 19:02:16
| 1
| 873
|
cconsta1
|
78,396,014
| 7,846,884
|
Row bind multiple data frames in dictionary using dict.keys() as column 1 of output pd.df
|
<p>i found a way to concatenate my pandas data frames in a dictionary.
Que; how can i use the dictionary keys as first column in the final dataframe?
i tried but thar didn't work</p>
<pre><code>pd.concat(test_list.values(), ignore_index=False, keys= test_list.keys())
</code></pre>
<p>data</p>
<pre><code>test_list = {'ENSMUSG00000025333.10': treatment_code subjectCode fraction_modified_entropy_in_log2 log_RPKM
59478 0 2 1.822939 3.109444
107087 1 4 0.811278 3.476707, 'ENSMUSG00000025903.14': treatment_code subjectCode fraction_modified_entropy_in_log2 log_RPKM
178498 0 1 1.000000 3.175110
107116 1 3 3.551271 3.002025, 'ENSMUSG00000063663.11': treatment_code subjectCode fraction_modified_entropy_in_log2 log_RPKM
59417 0 2 2.755851 1.07040
118918 1 3 3.088372 1.19147}
</code></pre>
|
<python><pandas><dictionary>
|
2024-04-27 18:40:39
| 1
| 473
|
sahuno
|
78,395,916
| 1,073,493
|
Regex with optional final capture group
|
<p>How do I construct a regex pattern that matches the following examples with three capture groups like so:</p>
<div class="s-table-container"><table class="s-table">
<thead>
<tr>
<th>example</th>
<th>grp1</th>
<th>grp2</th>
<th>grp3</th>
</tr>
</thead>
<tbody>
<tr>
<td>foo:bar-alpha</td>
<td>foo</td>
<td>bar</td>
<td>alpha</td>
</tr>
<tr>
<td>foo:bar-beta</td>
<td>foo</td>
<td>bar</td>
<td>beta</td>
</tr>
<tr>
<td>foo:bar</td>
<td>foo</td>
<td>bar</td>
<td></td>
</tr>
</tbody>
</table></div>
<p>grp3 is optional</p>
<p>Attempts:</p>
<p><code>(.*):(.*)(-(alpha|beta))</code> Matches only the first two cases but I want the last group to be optional..</p>
<p><code>(.*):(.*)(-(alpha|beta))?</code> adding the <code>?</code> quantifier matches all 3 but allows the second group to capture everything</p>
<p>I'm using Python regex</p>
<p>N.b. My actual input is confidential. The examples are representative.</p>
|
<python><regex>
|
2024-04-27 18:04:33
| 1
| 477
|
Lobert
|
78,395,683
| 596,922
|
Regular expresssion to extract the headers between two lines
|
<p>I'm trying to extract a headers between the two dotted line</p>
<pre class="lang-none prettyprint-override"><code> ------------------------------------------------------------------------------------------
PortNo Descr Status Speed Mode Pause FlowCtrl
------------------------------------------------------------------------------------------
1 A UP 200G FULL NO YES
</code></pre>
<p>I wrote this regular expression to get the headers alone</p>
<pre><code>header = re.search(r'--+\n(.*)\n--+', cli_output).group(1)
</code></pre>
<p>I tried to use the regex</p>
<pre><code>--+\n(.*)
</code></pre>
<p>which picks the 2 and 4th line. To restrict picking up the 4th line, I modified the regex as</p>
<pre><code>--+\n(.*)\n--+
</code></pre>
<p>Now it does not match anything. Where am I going wrong here? I tried in <a href="https://regex101.com/" rel="nofollow noreferrer">https://regex101.com/</a> as well but it does not match.</p>
|
<python><regex><regex-group>
|
2024-04-27 16:34:31
| 1
| 1,865
|
Vijay
|
78,395,272
| 9,582,542
|
Issue with Vscode and interactive window
|
<p>Currently running VScode 1.88.0 which is very new.</p>
<p>Here is the Scenario</p>
<p>Session #1:
I have Vscode open on a folder C:\user\Alastname\Datascience
when I open code in this folder and make a selection hit Shift+Enter
I get an interactive window and my code get processed successfully by the python interpreter</p>
<p>Session #2:
This session is on a file share on my network Z:\Data\github
when I open code in this folder and make a selection hit Shift+Enter
I get an error <strong>"command 'jupyter.runSelectionLine' not found"</strong>
I dont understand one session works fine yet the other session does not load python at all</p>
<p>Not even the terminal I have to manually run this at the terminal in order for the terminal to hook python
(& "G:\ProgramFiles\Miniconda3\Scripts\conda.exe" "shell.powershell" "hook") | Out-String | Invoke-Expression
This line give me access to python on the terminal but not the interactive mode
Is there an issue with security when I open a folder from a share that is mapped Z:\ rather than local C:\</p>
<p>My question is how do I get my interactive mode to work on the second session that is working on a share folder?</p>
|
<python><visual-studio-code>
|
2024-04-27 14:11:18
| 1
| 690
|
Leo Torres
|
78,395,038
| 8,020,900
|
Pandas: bin column by frequency with unique bin intervals
|
<p>Let's say we have this DataFrame:</p>
<p><code>df = pd.DataFrame(columns=["value"], data=[0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 2, 2, 2, 3, 5, 7, 10])</code></p>
<p>I want to split up the values into 5 bins based on their frequency, with the condition that <strong>no value is in more than one bin.</strong> You might say: "<code>pd.qcut</code> is the way to go!" But the issue with <code>qcut</code> is that it will either not make unique bins, or if you use <code>duplicates="drop"</code> then it will decrease the number of bins you want. I am aware that a similar question has been asked <a href="https://stackoverflow.com/questions/20158597/how-to-qcut-with-non-unique-bin-edges">here</a>, but none of the answers seemed to directly answer that question, and also the question I'm asking is a bit more complex.</p>
<p>Here is what happens if we use <code>pd.qcut</code>:</p>
<ol>
<li>Case 1: <code>df["bins"] = pd.qcut(df["value"], q=5, duplicates="drop")</code>: this gives us 3 bins instead of 5.</li>
<li>Case 2: <code>df["bins"] = pd.qcut(df["value"].rank(method='first'), q=5)</code>: this gives us 5 bins but the value of 0 is split across two bins, and so is the value of 1.</li>
</ol>
<p>Here is what is desired:</p>
<p>If we run <code>df["value"].value_counts(normalize=True)</code>, we see:</p>
<p><a href="https://i.sstatic.net/wKCaj4Y8.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/wKCaj4Y8.png" alt="enter image description here" /></a></p>
<p>So from the percentage breakdown and the condition that each value is only in one bin, we would want <strong>bin 1 to contain all 0s (and only 0s), bin 2 to contain all 1s (and only 1s), bin 3 to contain all 2s (and only 2s), bin 4 to contain 5&7, and bin 5 to contain 7&10</strong>. This is because we wanted 5 bins, so each bin should have 20% of the values. But more than 20% of the values are 0, so 0 should be its own bin (and same for 1). And then once those two bins are assigned, there are 3 remaining bins to split up the rest of the values in. So each of those bins should contain 33% of the rest of the values, and more than 33% of the remaining values are 2, so 2 should be its own bin as well. And then finally that leaves 2 bins for the remaining values and those are split 50-50.</p>
|
<python><pandas><dataframe><quantile><qcut>
|
2024-04-27 12:50:54
| 1
| 3,539
|
Free Palestine
|
78,394,806
| 1,082,349
|
Only take color cycle from a style
|
<p>I hate everything about the style ggplot, but I do like the colors.</p>
<pre><code>import matplotlib.pyplot as plt
plt.style.use('ggplot')
</code></pre>
<p>This sets the colors from that style, and everything as well. How can I import only the color cycle from that style (for all lines, dots, ...), but import nothing else from that style?</p>
|
<python><matplotlib>
|
2024-04-27 11:26:35
| 1
| 16,698
|
FooBar
|
78,394,714
| 9,877,065
|
How to force Designer to keep QScrollAreas in QGridLayout of same size during resize in PyQt5?
|
<p>Code main.py:</p>
<pre class="lang-py prettyprint-override"><code>import sys
from PyQt5.QtWidgets import QApplication, QScrollArea, QWidget, QGridLayout, QVBoxLayout
from PyQt5 import QtGui , uic
from PyQt5.QtGui import QDrag
class MainWidget(QWidget):
def __init__(self, *args, **kwargs):
super().__init__()
uic.loadUi('test006-001.ui' , self)
self.show()
if __name__ == '__main__':
app = QApplication(sys.argv)
w = MainWidget()
# w.show()
sys.exit(app.exec_())
</code></pre>
<p><code>test006-001.ui</code>:</p>
<pre class="lang-xml prettyprint-override"><code><?xml version="1.0" encoding="UTF-8"?>
<ui version="4.0">
<class>Form</class>
<widget class="QWidget" name="Form">
<property name="geometry">
<rect>
<x>0</x>
<y>0</y>
<width>698</width>
<height>635</height>
</rect>
</property>
<property name="windowTitle">
<string>Form</string>
</property>
<layout class="QGridLayout" name="gridLayout">
<item row="0" column="0">
<widget class="QScrollArea" name="scrollArea">
<property name="verticalScrollBarPolicy">
<enum>Qt::ScrollBarAlwaysOn</enum>
</property>
<property name="widgetResizable">
<bool>true</bool>
</property>
<widget class="QWidget" name="scrollAreaWidgetContents">
<property name="geometry">
<rect>
<x>0</x>
<y>-37</y>
<width>315</width>
<height>598</height>
</rect>
</property>
<property name="sizePolicy">
<sizepolicy hsizetype="Expanding" vsizetype="Preferred">
<horstretch>0</horstretch>
<verstretch>0</verstretch>
</sizepolicy>
</property>
<layout class="QVBoxLayout" name="verticalLayout">
<item>
<widget class="QLineEdit" name="lineEdit"/>
</item>
<item>
<widget class="QLineEdit" name="lineEdit_2"/>
</item>
<item>
<widget class="QLineEdit" name="lineEdit_3"/>
</item>
<item>
<widget class="QLineEdit" name="lineEdit_4"/>
</item>
<item>
<widget class="QLineEdit" name="lineEdit_13"/>
</item>
<item>
<widget class="QLineEdit" name="lineEdit_5"/>
</item>
<item>
<widget class="QLineEdit" name="lineEdit_12"/>
</item>
<item>
<widget class="QLineEdit" name="lineEdit_11"/>
</item>
<item>
<widget class="QLineEdit" name="lineEdit_6"/>
</item>
<item>
<widget class="QLineEdit" name="lineEdit_16"/>
</item>
<item>
<widget class="QLineEdit" name="lineEdit_14"/>
</item>
<item>
<widget class="QLineEdit" name="lineEdit_7"/>
</item>
<item>
<widget class="QLineEdit" name="lineEdit_15"/>
</item>
<item>
<widget class="QLineEdit" name="lineEdit_8"/>
</item>
<item>
<widget class="QLineEdit" name="lineEdit_9"/>
</item>
<item>
<widget class="QLineEdit" name="lineEdit_10"/>
</item>
</layout>
</widget>
</widget>
</item>
<item row="0" column="1">
<widget class="QScrollArea" name="scrollArea_2">
<property name="verticalScrollBarPolicy">
<enum>Qt::ScrollBarAlwaysOn</enum>
</property>
<property name="widgetResizable">
<bool>true</bool>
</property>
<widget class="QWidget" name="scrollAreaWidgetContents_2">
<property name="geometry">
<rect>
<x>0</x>
<y>0</y>
<width>315</width>
<height>305</height>
</rect>
</property>
<property name="sizePolicy">
<sizepolicy hsizetype="Expanding" vsizetype="Preferred">
<horstretch>0</horstretch>
<verstretch>0</verstretch>
</sizepolicy>
</property>
</widget>
</widget>
</item>
<item row="1" column="0">
<widget class="QScrollArea" name="scrollArea_3">
<property name="verticalScrollBarPolicy">
<enum>Qt::ScrollBarAlwaysOn</enum>
</property>
<property name="widgetResizable">
<bool>true</bool>
</property>
<widget class="QWidget" name="scrollAreaWidgetContents_3">
<property name="geometry">
<rect>
<x>0</x>
<y>0</y>
<width>315</width>
<height>304</height>
</rect>
</property>
<property name="sizePolicy">
<sizepolicy hsizetype="Expanding" vsizetype="Preferred">
<horstretch>0</horstretch>
<verstretch>0</verstretch>
</sizepolicy>
</property>
</widget>
</widget>
</item>
<item row="1" column="1">
<widget class="QScrollArea" name="scrollArea_4">
<property name="verticalScrollBarPolicy">
<enum>Qt::ScrollBarAlwaysOn</enum>
</property>
<property name="widgetResizable">
<bool>true</bool>
</property>
<widget class="QWidget" name="scrollAreaWidgetContents_4">
<property name="geometry">
<rect>
<x>0</x>
<y>0</y>
<width>315</width>
<height>304</height>
</rect>
</property>
<property name="sizePolicy">
<sizepolicy hsizetype="Expanding" vsizetype="Preferred">
<horstretch>0</horstretch>
<verstretch>0</verstretch>
</sizepolicy>
</property>
</widget>
</widget>
</item>
</layout>
</widget>
<resources/>
<connections/>
</ui>
</code></pre>
<p>I tried different size policies etc in Designer, but cannot accomplish my goal of having always the four <strong>QScrollArea</strong>s of same size while resizing my main Widget, I get it only at full screen and smallest size:</p>
<p><a href="https://i.sstatic.net/Qs4Q37kn.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Qs4Q37kn.png" alt="enter image description here" /></a></p>
<p>but not at intermediate sizes:</p>
<p><a href="https://i.sstatic.net/65KPdx5B.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/65KPdx5B.png" alt="enter image description here" /></a></p>
<p>Any idea about how to force keeping same size while resizing?</p>
|
<python><pyqt><pyqt5><qt-designer>
|
2024-04-27 10:50:00
| 0
| 3,346
|
pippo1980
|
78,394,687
| 3,103,957
|
Even loop creation in Asyncio of Python
|
<p>In asyncio library of Python, which method(s) create the event loop? Is it asyncio.run() / asyncio.get_event_loop() ? Also which method(s) start running the event loop? I tried searching; but it seems there are not very clear answers.</p>
|
<python><python-asyncio><event-loop>
|
2024-04-27 10:36:32
| 1
| 878
|
user3103957
|
78,394,663
| 3,156,085
|
Why does attempting to do a `Callable` alias (`Alias=Callable`) cause a "Bad number of arguments" when using it as a generic?
|
<p>I've encountered the following (error in 2nd snippet) while experimenting with type hinting:</p>
<h3>First, without any alias, everything passes the type checking:</h3>
<pre><code>from typing import Callable
T = TypeVar("T") # Invariant by default
P = ParamSpec("P")
def some_decorator(f: Callable[P, T]): ... # Passes type checking
</code></pre>
<h3>Then, with my first attempt to use an alias:</h3>
<pre class="lang-py prettyprint-override"><code>from typing import Callable, TypeVar, ParamSpec
T = TypeVar("T") # Invariant by default
P = ParamSpec("P")
MyCallableAlias = Callable
# error: Bad number of arguments for type alias, expected 0, given 2 [type-arg]
def my_decorator(f: MyCallableAlias[P, T]): ...
</code></pre>
<p>I don't really understand this as I would have expected <code>MyCallableAlias</code> to behave exactly like <code>Callable</code>.</p>
<h3>The solution:</h3>
<p>The thing that seems to work is to use a <a href="https://docs.python.org/3/library/typing.html#protocols" rel="nofollow noreferrer"><code>Protocol</code></a>:</p>
<pre class="lang-py prettyprint-override"><code>from typing import Protocol, TypeVar, ParamSpec, Generic
# Using the default variance (invariant) causes an error at type checking.
T = TypeVar("T", covariant=True)
P = ParamSpec("P")
class MyCallableAlias(Generic[P, T], Protocol):
def __call__(self, *args: P.args, **kwargs: P.kwargs): ...
def my_decorator(f: MyCallableAlias[P, T]): ...
</code></pre>
<p>Why didn't my alias work in the second example?</p>
<p>My solution seems over-complicated for a simple alias (actually not an alias anymore) compared to the use of what it's an alias for.</p>
<p><strong>Note :</strong></p>
<p>The tools and versions used for these examples were:</p>
<ul>
<li>Python 3.12</li>
<li>MyPy 1.10</li>
</ul>
|
<python><python-typing><type-alias>
|
2024-04-27 10:28:15
| 1
| 15,848
|
vmonteco
|
78,394,542
| 4,060,904
|
SQLAlchemy - include rows where func.count() called on many-many relationship results in 0
|
<p>I am querying a table <code>Team</code> to get a list of all the teams. This query object is fed to a pagination function that makes use of sqlalchemy's <code>paginate()</code>.</p>
<p>The function takes inputs for <code>order</code> and <code>order_by</code> which determine the column and order of the resulting query. This works fine when the sort is performed directly on one of <code>Team</code>'s attributes, but I also want to perform the sort on the count of the number of relationships each record has with another table <code>Player</code>.</p>
<p>Using <code>func.count()</code>, <code>.join()</code> and <code>.group_by()</code> this is possible - however if a team does not have any players recorded, the record is ommitted from the query. I want to include all results in this query.</p>
<p>I have thought of creating a second query that omits the results of the first one, and then combine them somehow before passing them to the paginate function, but I haven't been able to find a way to do this.</p>
<p>Is there a way to include the results where <code>count</code> should return <code>0</code>, or is there another way to achieve this effect?</p>
<p><strong>The function:</strong></p>
<pre><code>def get_teams(filters):
"""Get the collection of all teams"""
page = filters['page']
per_page = filters['per_page']
order = filters['order'] # validates as either 'asc' or 'desc'
order_by = filters['order_by']
if order_by == 'active_players': # checks if sort needs to be done manually
query = db.session.query(Team, sa.func.count(PlayerTeam.id).label('count')) \
.join(Team.player_association) \
.filter(PlayerTeam.end_date == None) \
.group_by(Team) \
.order_by(getattr(sa, order)('count'))
else: # sort is directly on attribute, this is easy
query = sa.select(Team).order_by(getattr(sa, order)(getattr(Team, order_by)))
return Team.to_collection_dict(query, page, per_page)
</code></pre>
<p><strong>Models:</strong></p>
<pre><code>class Team(db.Model):
id = db.Column(db.Integer, primary_key=True)
name = db.Column(db.String(64), index=True, unique=True, nullable=False)
# some other fields
player_association = db.relationship('PlayerTeam', back_populates='team',lazy='dynamic')
players = association_proxy('player_association', 'player')
@staticmethod
def to_collection_dict(query, page, per_page):
resources = db.paginate(query, page=page, per_page=per_page, error_out=False)
# convert resources to dict and return
class Player(db.Model):
id = db.Column(db.Integer, primary_key=True)
player_name = db.Column(db.String(64), nullable=False)
# some other fields
team_association = db.relationship('PlayerTeam', back_populates='player', lazy='dynamic')
teams = association_proxy('team_association', 'team')
class PlayerTeam(db.Model):
id = db.Column(db.Integer, primary_key=True)
player_id = db.Column(db.Integer, db.ForeignKey('player.id'))
team_id = db.Column(db.Integer, db.ForeignKey('team.id'))
end_date = db.Column(db.DateTime)
# some other fields
player = db.relationship('Player', back_populates='team_association')
team = db.relationship('Team', back_populates='player_association')
</code></pre>
|
<python><sqlalchemy><flask-sqlalchemy>
|
2024-04-27 09:43:09
| 1
| 492
|
Locke Donohoe
|
78,394,306
| 14,159,253
|
Failed to execute cell. Could not send execute message to runtime: Error: await connected: disconnected
|
<p>While executing a cell in Google Colab, my aim was to display a selection of random rows from a DataFrame containing columns labeled <code>Id</code>, <code>condensed_summary</code>, and <code>generated_image</code>. Initially, I attempted to exhibit the entire dataset comprising 200 rows, only to encounter display issues. Subsequently, I opted to showcase a subset of <code>n_results</code> random rows, but to my dismay, the display failed, prompting a session restart.</p>
<p>In my endeavor to troubleshoot, I experimented with various values for <code>n_results</code>. When I substituted <code>n_results</code> with 10, Google Colab presented a pop-up message in the following manner:</p>
<img src="https://i.sstatic.net/6HQeffuB.png" width="500" />
<p>This is the code I'm utilizing, which yields the error:</p>
<pre class="lang-py prettyprint-override"><code>def show_results(df_result, images_dir_path: str, n_results: int):
from IPython.core.display import display, HTML
from PIL import Image
from io import BytesIO
import base64
from os import listdir
import random
image_file_paths = []
img_strs = []
if n_results >= len(df_result): # all samples
image_file_paths = listdir(images_dir_path)
else: # random sampling
df_result = df_result.sample(n=n_results).reset_index()
image_file_paths = random.sample(listdir(images_dir_path), n_results)
for idx, f in enumerate(image_file_paths):
img = Image.open(images_dir_path + f)
img_buffer = BytesIO()
img.save(img_buffer, format="PNG")
img_strs.append(base64.b64encode(img_buffer.getvalue()).decode())
df_result.at[idx, 'generated_image'] = '<img src="data:image/png;base64,{'+str(idx)+':s}" width="300" height="300">'
html_all = df_result.to_html(escape=False, index=False).format(*img_strs)
display(HTML(html_all))
df_result = pd.DataFrame({"id": ids, "condensed_summary": condensed_summaries})
show_results(df_result, IMAGES_DIR_PATH, 10)
</code></pre>
<p>My code operates smoothly for small values of <code>n_results</code>, such as 10, but encounters crashes when handling larger numbers, failing to display any output. This dilemma prompts me to seek solutions for both displaying all results and deciphering the message's implications.</p>
|
<python><jupyter-notebook><google-colaboratory><ipython>
|
2024-04-27 08:17:23
| 1
| 1,725
|
Behdad Abdollahi Moghadam
|
78,394,289
| 21,395,742
|
Running ollama on kaggle
|
<p>I downloaded ollama on a kaggle notebook (linux). I want to interact with it using a python script. On following the instructions on the github repo and running: <code>ollama run llama3</code> I got the output: <code>Error: could not connect to ollama app, is it running?</code>.</p>
<p>It appears that I need to run <code>ollama serve</code> before running llama3. However the entire main thread is taken up by <code>ollama serve</code> so you cannot run anything else after it.</p>
<p>The work arounds I tried:</p>
<ul>
<li>Trying to create a background process: <code>ollama serve &</code> which returned <code>OSError: Background processes not supported.</code></li>
<li>Trying to run it via python using <code>subprocess.run('ollama', 'serve')</code> which returns <code>TypeError: bufsize must be an integer</code></li>
</ul>
<p>Full logs of second method:</p>
<pre><code>TypeError Traceback (most recent call last)
Cell In[29], line 1
----> 1 subprocess.run('ollama', 'serve')
File /opt/conda/lib/python3.10/subprocess.py:503, in run(input, capture_output, timeout, check, *popenargs, **kwargs)
500 kwargs['stdout'] = PIPE
501 kwargs['stderr'] = PIPE
--> 503 with Popen(*popenargs, **kwargs) as process:
504 try:
505 stdout, stderr = process.communicate(input, timeout=timeout)
File /opt/conda/lib/python3.10/subprocess.py:780, in Popen.__init__(self, args, bufsize, executable, stdin, stdout, stderr, preexec_fn, close_fds, shell, cwd, env, universal_newlines, startupinfo, creationflags, restore_signals, start_new_session, pass_fds, user, group, extra_groups, encoding, errors, text, umask, pipesize)
778 bufsize = -1 # Restore default
779 if not isinstance(bufsize, int):
--> 780 raise TypeError("bufsize must be an integer")
782 if pipesize is None:
783 pipesize = -1 # Restore default
TypeError: bufsize must be an integer
</code></pre>
<p>I chose ollama because the setup was simple (just running a single command). I am ok with using another method besides ollama, however I want to run it on python without much setup as it is running on kaggle</p>
|
<python><llama><ollama>
|
2024-04-27 08:11:53
| 1
| 845
|
hehe
|
78,393,875
| 4,268,976
|
ValueError: time data does not match format '%Y-%m-%dT%H:%M:%S.%fZ'
|
<p>While running the following code I'm getting following error</p>
<p><code>ValueError: time data '2024-04-27T04:50:23.3480072Z' does not match format '%Y-%m-%dT%H:%M:%S.%fZ'</code></p>
<p>Here is my python code</p>
<pre><code>from datetime import datetime
time_string = '2024-04-27T04:50:23.3480072Z'
format_string = '%Y-%m-%dT%H:%M:%S.%fZ'
parsed_time = datetime.strptime(time_string, format_string)
print(parsed_time)
</code></pre>
<p>I'm passing the correct date format string still getting error.</p>
|
<python><datetime>
|
2024-04-27 05:11:30
| 3
| 4,122
|
NIKHIL RANE
|
78,393,822
| 2,256,085
|
animate 1D data with LineConnection
|
<p>The example code below creates 2 plots that attempt to convey what the desired animation is seeking to do. Note that the plot on the right is similar to the plot on the left, but represents the position of a wave after the passage of time <code>t</code>. The goal is to animate the downward movement of waves, with the waves depicted using color. The reason for adopting an approach that uses <code>LineConnections</code> is because the spacing is not necessarily regular (notice that the first two values in <code>z</code> don't follow the same pattern as the rest of the data in <code>z</code>. Is there a way to modify the small reproducible example below to animate the downward movement of the color-filled waves? I've not found examples of the animation feature that uses <code>LineConnections</code>. [Also, I'm open to other approaches that would accomodate irregularly spaced data (i.e., <code>z</code> position may be irregularly spaced)]</p>
<pre><code>import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from matplotlib.collections import LineCollection
# x position of 1D profile
x = [0.5 for i in np.arange(20)]
# z position of 1D profile
z = [59.95, 59.70, 59.25, 58.75, 59.25,
58.75, 58.25, 57.75, 57.25, 56.75,
56.25, 55.75, 55.25, 54.75, 54.25,
53.75, 53.25, 52.75, 52.25, 51.75]
points = np.array([x, z]).T.reshape(-1, 1, 2)
segments = np.concatenate([points[:-1], points[1:]], axis=1)
norm = plt.Normalize(
0.0, 2.0
)
# data at time t1
t = 1
t1 = np.sin(2 * np.pi * (np.linspace(0,2,len(z)) - 0.01 * t)) + 1
# data as time t20
t = 20
t20 = np.sin(2 * np.pi * (np.linspace(0,2,len(z)) - 0.01 * t)) + 1
cmap = 'viridis'
lc1 = LineCollection(segments, cmap=cmap, norm=norm)
lc1.set_array(t1)
lc1.set_linewidth(50)
lc2 = LineCollection(segments, cmap=cmap, norm=norm)
lc2.set_array(t20)
lc2.set_linewidth(50)
fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(3,6), tight_layout=True)
ax1.add_collection(lc1)
line2 = ax2.add_collection(lc2)
plt.colorbar(
line2,
shrink=1.0,
ax=ax,
label="Temperature",
location='right',
pad=0.05,
fraction=0.05,
aspect=7
)
ax1.set_xlim(0.4, 0.6)
ax1.set_ylim(51, 60.5)
ax2.set_xlim(0.4, 0.6)
ax2.set_ylim(51, 60.5)
plt.show()
</code></pre>
|
<python><matplotlib><matplotlib-animation>
|
2024-04-27 04:42:37
| 1
| 469
|
user2256085
|
78,393,728
| 10,138,470
|
Merging Pandas dataframe and SQL table values
|
<p>I have dataframe and want to update it or create a new dataframe based on some input from an SQL table. The dataframe A has two columns (ID and Added_Date).</p>
<p>On the other hand, the SQL table has a few more columns including ID, Transaction_Date, Year, Month and Day. My idea is to merge contents of dataframe A to the SQL table and after the merge, pick all records transacted 30 days after the Transaction_Date in SQL table. In summary, I'm keen on having a dataframe with all transactions that happened 30 days (in SQL table) after the Added_Date in the df A. The SQL table is quite huge and is partitioned by Year, Month and Day. How can I optimize this process?</p>
<p>I understand the join can happen when the dataframe is converted to a tuple or may be dictionary but nothing past that. Sample code is below :</p>
<pre class="lang-py prettyprint-override"><code>import sqlite3
import pandas as pd
# create df
data = {'ID': [1, 2, 3], 'Added_Date': ['2023-02-01', '2023-04-15', '2023-03-17']}
df_A = pd.DataFrame(data)
</code></pre>
<p>Below is code to create sample transactions in memory table in SQL</p>
<pre class="lang-py prettyprint-override"><code># Create an in-memory SQLite database
conn = sqlite3.connect(':memory:')
c = conn.cursor()
# Create the transactions table
c.execute('''CREATE TABLE transactions
(ID INTEGER, transaction_date DATE)''')
# Insert sample data into the transactions table
c.execute('''INSERT INTO transactions VALUES
(1, '2023-01-15'), (1, '2023-02-10'), (1, '2023-03-01'),
(2, '2023-04-01'), (2, '2023-04-20'), (2, '2023-05-05'),
(3, '2023-03-10'), (3, '2023-03-25'), (3, '2023-04-02')''')
</code></pre>
<p>Expected outcome should be something like this:</p>
<pre class="lang-py prettyprint-override"><code>ID transaction_date
1 2023-02-10
1 2023-03-01
2 2023-04-20
2 2023-05-05
3 2023-03-10
3 2023-03-25
3 2023-04-02
</code></pre>
<p>I hope that's more clear.</p>
|
<python><pandas><dataframe><sqlite>
|
2024-04-27 03:33:08
| 1
| 445
|
Hummer
|
78,393,687
| 8,124,392
|
Where to set the scheduler and sampler?
|
<p>If I'm trying to build an inferencer using Python and the huggingface library, where do I set the scheduler and sampler? What function do I call? For context, this is what my code looks like:</p>
<pre><code>import torch, PIL, random
from typing import List, Optional, Union
from diffusers import StableDiffusionInpaintPipeline
device = "cuda"
# pipeline = StableDiffusionXLPipeline.from_single_file("/content/models/juggernautXL_version2.safetensors", torch_dtype=torch.float16, use_safetensors=True, safety_checker=None ).to("cuda")
model_path = "runwayml/stable-diffusion-inpainting"
pipe = StableDiffusionInpaintPipeline.from_pretrained(model_path,torch_dtype=torch.float16,).to(device)
def image_grid(imgs, rows, cols):
assert len(imgs) == rows*cols
w, h = imgs[0].size
grid = PIL.Image.new('RGB', size=(cols*w, rows*h))
grid_w, grid_h = grid.size
for i, img in enumerate(imgs):
grid.paste(img, box=(i%cols*w, i//cols*h))
return grid
def present_img(url):
return PIL.Image.open(url)
mask_url = "masks/mask_PXL_20240419_181351038.MP.jpg.png"
img_url = "originals/PXL_20240419_181351038.MP.jpg"
image = present_img(img_url).resize((512, 512))
mask_image = present_img(mask_url).resize((512, 512))
prompt = "car on a desert highway. Detailed. High resolution. Photorealistic. Soft light."
guidance_scale=7.5
num_samples = 3
generator = torch.Generator(device="cuda").manual_seed(random.randint(0,1000)) # change the seed to get different results
# Assuming 'image' and 'mask_image' are predefined Image objects
images = pipe(
prompt=prompt,
image=image,
mask_image=mask_image,
guidance_scale=guidance_scale,
generator=generator,
num_images_per_prompt=num_samples,
).images
images.insert(0, image)
for idx, img in enumerate(images):
img.save(f"output/output_image_{idx}.png")
</code></pre>
|
<python><deep-learning><huggingface><stable-diffusion>
|
2024-04-27 03:03:14
| 0
| 3,203
|
mchd
|
78,393,675
| 8,849,071
|
How to make a custom type inheriting from UUID work as a pydantic model
|
<p>in our codebase we are using a custom UUID class. This is a simplified version of it:</p>
<pre class="lang-py prettyprint-override"><code>class ID(uuid.UUID):
def __init__(self, *args: Any, **kwargs: Any):
super().__init__(*args, **kwargs)
</code></pre>
<p>We cannot inherit from <code>BaseModel</code> and <code>uuid.UUID</code> because <code>pydantic</code> throws an error (due to multiple inheritance not being allowed). Nonetheless, the way it is defined, <code>pydantic</code> throws an error if we use that <code>ID</code> class as part of a <code>pydantic</code> model:</p>
<pre class="lang-py prettyprint-override"><code>class Model(BaseModel):
id: ID
</code></pre>
<p>This is the error we are getting:</p>
<pre><code>pydantic.errors.PydanticSchemaGenerationError: Unable to generate pydantic-core schema for <class '__main__.ID'>. Set `arbitrary_types_allowed=True` in the model_config to ignore this error or implement `__get_pydantic_core_schema__` on your type to fully support it.
If you got this error by calling handler(<some type>) within `__get_pydantic_core_schema__` then you likely need to call `handler.generate_schema(<some type>)` since we do not call `__get_pydantic_core_schema__` on `<some type>` otherwise to avoid infinite recursion.
For further information visit https://errors.pydantic.dev/2.7/u/schema-for-unknown-type
</code></pre>
<p>I have read a little bit about how to fix this by trying to implement the function suggested by the error message (<code>__get_pydantic_core_schema__ </code>). I have tried to fix it by adding the following to the <code>ID</code> class:</p>
<pre class="lang-py prettyprint-override"><code> @classmethod
def __get_pydantic_json_schema__(cls, core_schema: CoreSchema, handler: GetJsonSchemaHandler):
return {"type": "UUID"}
@classmethod
def __get_pydantic_core_schema__(cls, source_type: Any, handler: GetCoreSchemaHandler):
return core_schema.no_info_plain_validator_function(uuid.UUID)
</code></pre>
<p>But then I get more errors. Do you know by any chance how to properly setup this?</p>
|
<python><uuid><pydantic>
|
2024-04-27 02:56:23
| 1
| 2,163
|
Antonio Gamiz Delgado
|
78,393,575
| 361,530
|
Why does iPython choose an earlier version of Python?
|
<p>I'm trying to upgrade from Python 3.11 to 3.12 and running into some problems, illustrated by iPython's unwillingness to go with me. I'm doing all this on an Arm Macbook Air with Python installed by Homebrew, in case that matters. Here's a copy of my Terminal activity, followed by my understanding of what's happening. Any corrections are appreciated. The *** strings are for a bit of security, replacing pathnames (newlines inserted for readability).</p>
<pre><code>(ds) *** % python --version
Python 3.12.3
(ds) *** % ipython
/opt/homebrew/lib/python3.11/site-packages/IPython/core/interactiveshell.py:937:
UserWarning: Attempting to work in a virtualenv. If you encounter problems,
please install IPython inside the virtualenv.
warn(
Python 3.11.9 (main, Apr 2 2024, 08:25:04) [Clang 15.0.0 (clang-1500.3.9.4)]
Type 'copyright', 'credits' or 'license' for more information
IPython 8.24.0 -- An enhanced Interactive Python. Type '?' for help.
In [1]: import sys
In [2]: print (sys.version)
3.11.9 (main, Apr 2 2024, 08:25:04) [Clang 15.0.0 (clang-1500.3.9.4)]
In [3]: ^D
</code></pre>
<p>After getting the UserWarning the first time, I ran "pip install ipython"
AND "pipx install ipython" but I continue to get that UserWarning. I did pipx because pip told me the requirements were already satisfied. All of this activity is within the "ds" virtual environment.</p>
<p>I first encountered this when a "from sklearn import..." failed in a jupyter
notebook and did not fail from a "python" REPL. I have the feeling that
solving the ipython problem will get me a long way toward the original objective.</p>
|
<python><ipython><version>
|
2024-04-27 01:51:55
| 0
| 389
|
RadlyEel
|
78,393,570
| 1,678,160
|
Inverted hierarchy of nested dicts/lists
|
<p>Considering the following dictionary:</p>
<pre><code>{
"a": {
"b1": {
"c": {
"value": "c_value",
"children": {
"d": "d_value"
}
}
},
"b2": [
"b2_1",
{
"b2_2_1": 2,
"b2_2_2": {
"b2_2_2_1": "b2_2_2_1_value",
"b2_2_2_2": "b2_2_2_2_value"
}
},
3
]
}
}
</code></pre>
<p>how to <strong>reverse</strong> it such as the <em>result</em> is this:</p>
<pre><code>{
"d": {
"children": {
"c": {
"b2": [
3,
{
"b2_2_2": {
"b2_2_2_2": "b2_2_2_2_value",
"b2_2_2_1": "b2_2_2_1_value"
},
"b2_2_1": 2
},
"b2_1"
],
"b1": {
"a": "d_value"
}
}
},
"value": "c_value"
}
}
</code></pre>
<p>I'm struggling with a recursive approach to this problem, without the need of creating a global variable to handle the results.</p>
|
<python><python-3.x>
|
2024-04-27 01:45:24
| 1
| 522
|
kairos
|
78,393,551
| 1,601,580
|
How does one use vllm with pytorch 2.2.2 and python 3.11?
|
<h1>Title: How does one use vllm with pytorch 2.2.2 and python 3.11?</h1>
<p>I'm trying to use the vllm library with pytorch 2.2.2 and python 3.11. Based on the GitHub issues, it seems vllm 0.4.1 supports python 3.11.</p>
<p>However, I'm running into issues with incompatible pytorch versions when installing vllm. The github issue mentions needing to build from source to use pytorch 2.2, but the pip installed version still uses an older pytorch.</p>
<p>I tried creating a fresh conda environment with python 3.11 and installing vllm:</p>
<pre class="lang-bash prettyprint-override"><code>$ conda create -n vllm_test python=3.11
$ conda activate vllm_test
(vllm_test) $ pip install vllm
...
ERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts.
vllm 0.4.1 requires torch==2.1.2, but you have torch 2.2.2 which is incompatible.
</code></pre>
<p>I also tried installing pytorch 2.2.2 first and then vllm:</p>
<pre class="lang-bash prettyprint-override"><code>(vllm_test) $ pip install torch==2.2.2
(vllm_test) $ pip install vllm
...
Building wheels for collected packages: vllm
Building wheel for vllm (pyproject.toml) ... error
error: subprocess-exited-with-error
Γ Building wheel for vllm (pyproject.toml) did not run successfully.
β exit code: 1
</code></pre>
<p>Can someone clarify what versions of vllm, pytorch and python work together currently? Is there a recommended clean setup to use vllm with the latest pytorch 2.2.2 and python 3.11?</p>
<p>I've tried creating fresh conda environments, but still run into version conflicts. Any guidance on the right installation steps would be much appreciated. Thanks!</p>
<p>ref: <a href="https://github.com/vllm-project/vllm/issues/2747" rel="nofollow noreferrer">https://github.com/vllm-project/vllm/issues/2747</a></p>
|
<python><pytorch><nlp><huggingface-transformers><vllm>
|
2024-04-27 01:34:19
| 1
| 6,126
|
Charlie Parker
|
78,393,417
| 19,626,271
|
Optimizing a function with linear constraints containing a singular matrix, in Python
|
<p>I'm working with traffic data to try the reconstruct the origin-destination matrix of a region, based on the article <a href="https://web.archive.org/web/20240426215022/https://sci-hub.se/10.1287/trsc.17.2.198" rel="nofollow noreferrer">Bell1983</a>.</p>
<p>Basically, for each pair of locations, we need to get how many vehicles (/passengers) go from one place to the other, and we have data of public road traffic for it. This would be the O-D matrix. It's stored in the <code>odm</code> vector (flattened) with length <em>M</em>, the daily traffic for each road is stored in vector <code>v</code> with length <em>N</em>, and an extra matrix <em>P</em>, describing the probability of going through a certain road when going from place A to place B, stored in array <code>P</code> of size <em>N x M</em> (this matrix is constructed by some simple heuristic). In my case, there are 31 roads and 120 pairs of locations (16 choose 2). <br>The goal is the same as in the article: <strong>given constraints</strong> of the relation of these variables, get the <strong>solution that minimizes a function</strong> <code>f</code>. In practicality, another important thing to note that <strong>constraint always contains singular matrix</strong>. (In the article they actually derive a general procedure for finding the solution, but for now I'm trying to write code to get a solution for any objective function.)</p>
<p>Models have the constraint <code>v = P * odm</code> so we have <em>M</em> equations for the <em>M</em> variables.<br> The problem is, in practical cases, <em>P</em> will be underdetermined because of things like 3 places laying on the same line (e.g. connected by 2 roads only, you have only 2 independent equations for 3 place pair values) thus <em>P</em> won't have an inverse. Same in my case, <em>P</em> has 28 independent equations only instead of 31. For this reason there are many possible solutions to the equation. Classical methods introduce some function and choose the solution minimizing this function - such a function is typically maximizing entropy (== minimizing (-1)*entropy).</p>
<p>Using Python, I've tried <code>scipy.optimize.minimize</code> with a simple function to minimize, given the constraint, but got the message: <em>Singular matrix C in LSQ subproblem</em>. I know, that <code>P</code> is singular and that caused this problem, but <code>P</code> will always be singular, underdetermined, I can only work with it. The code:</p>
<pre class="lang-py prettyprint-override"><code>from scipy.optimize import minimize, Bounds
#Objective function
def f(odm):
#Entropy maximizing (== minimizing the negative entropy)
return np.sum(odm * np.log(odm))
#Constraint(s)
def constraint_eq(odm):
return P @ odm - v
constraints = {'type': 'eq', 'fun': constraint_eq}
bounds = Bounds(1e-5, np.inf) #positive values for log
res = minimize(f, odm, constraints=constraints, bounds=bounds)
optimal_odm = res.x
</code></pre>
<p>There will always be multiple solutions (and it is guaranteed that there will be at least one solution) and if the solution space is given it should be easy to find the one minimizing <code>f</code>.</p>
<p>How can I run an optimization that allows for singular matrices in constraint? Do I need to use a different solver method?<br>
I know one possibility is to just exclude 3 equations to have a non-singular <em>P</em> matrix for optimization, then after having the optimal solution, solving the remaining equations, but this seems like quite some hussle.<br>
(Solutions for MATLAB would also be useful, I'm familiar with it to some extent.)</p>
<p><br>For reproducing the problem, the road vector would be <code>v = np.array([11479, 24663, 39783, 26064, 65054, 52134, 45172, 18807, 6386, 6544, 23218, 36905, 16558, 5385, 4562, 38232, 8263, 13132, 22395, 3759, 3910, 4100, 17482, 7736, 14255, 5154, 9436, 6554, 11623, 10747, 13110])</code> <br>
and the P-matrix stored as a CSV can be found here: <a href="https://raw.githubusercontent.com/me9hanics/origin-destination-matrix-from-traffic/main/computing/P_matrix_16_cities.csv" rel="nofollow noreferrer">GitHub</a>.</p>
|
<python><optimization><scipy><mathematical-optimization><scipy-optimize>
|
2024-04-27 00:16:06
| 0
| 395
|
me9hanics
|
78,393,346
| 5,431,734
|
comparing two lists of tuples, np.isin
|
<p>I have two lists of tuples and I want to find which elements from the first are in the second one. For example:</p>
<pre><code>elements = [
(903, 468),
(913, 468),
(926, 468),
(833, 470),
(903, 470),
(917, 470),
(833, 833),
(903, 833),
(913, 833),
(917, 833),
]
test_elements = [
(903, 468),
(913, 468),
(833, 470),
(903, 470),
(833, 833),
(903, 833),
]
</code></pre>
<p>and I want to return a boolean mask for the elements in <code>elements</code> that exist in <code>test_elements</code>. I dont understand why np.isin doesnt give the correct result:</p>
<pre><code>list(map(np.all, np.isin(elements, test_elements)))
>>> [True, True, False, True, True, False, True, True, True, False]
</code></pre>
<p>which means that <code>(913, 833)</code> should be in <code>test_elements</code> but this is not true</p>
<p>This expesssion however (which I found from <a href="https://stackoverflow.com/a/54828333/5431734">another post</a> here) returns the correct mask:</p>
<pre><code>list((np.array(elements)[:,None] == np.array(test_elements)).all(2).any(1))
>>> [True, True, False, True, True, False, True, True, False, False]
</code></pre>
<p>What am I missing with <code>np.isin</code> (or maybe <code>np.all</code> and <code>map</code>) please?</p>
|
<python><numpy>
|
2024-04-26 23:36:19
| 3
| 3,725
|
Aenaon
|
78,393,278
| 2,487,330
|
Filter by unique counts within groups
|
<p>I'm trying to filter by the count of unique items within groups.</p>
<p>For example, suppose I have the following data set:</p>
<pre class="lang-py prettyprint-override"><code>import polars as pl
df = pl.DataFrame({
'data': [1,1,1,1,1,2,2],
'group': [1,1,1,2,2,1,1],
'id': [1,2,3,4,5,1,2],
'x': [1,1,2,1,2,1,1]
})
</code></pre>
<p>Where (data, group) is a compound key for groups of items that each have an 'id' and an 'x' value. I'd like to filter the data set to only keep groups that have at least two different 'x' values, 1 and 2.</p>
<p>I tried the following but get an error message:</p>
<pre class="lang-py prettyprint-override"><code>df.filter(pl.col('x').unique_counts().over('data', 'group') >= 2)
</code></pre>
<pre class="lang-py prettyprint-override"><code># ShapeError: the length of the window expression did not match that of the group
</code></pre>
<p>Can someone please help me understand what I'm doing wrong, or how to achieve this goal?</p>
|
<python><dataframe><python-polars>
|
2024-04-26 23:00:33
| 1
| 645
|
Brian
|
78,393,211
| 10,902,944
|
Default n_workers when creating a Dask cluster?
|
<p>Simple question. If I create a Dask cluster using the following code:</p>
<pre><code>from dask.distributed import Client
client = Client()
</code></pre>
<p>How many workers will it create? I ran this code on one machine, and it created 4 workers. I ran this same code on a server, and it created 8 workers. Does it just create as much as it possibly can based on resources available? In the source code, there is no default value for <code>n_workers</code> listed in the docstrings. I'm trying see how to create a cluster automatically without having to know in advance the resources available to me.</p>
<pre><code>class LocalCluster(SpecCluster):
"""Create local Scheduler and Workers
This creates a "cluster" of a scheduler and workers running on the local
machine.
Parameters
----------
n_workers: int
Number of workers to start
memory_limit: str, float, int, or None, default "auto"
Sets the memory limit *per worker*.
Notes regarding argument data type:
* If None or 0, no limit is applied.
* If "auto", the total system memory is split evenly between the workers.
* If a float, that fraction of the system memory is used *per worker*.
* If a string giving a number of bytes (like ``"1GiB"``), that amount is used *per worker*.
* If an int, that number of bytes is used *per worker*.
</code></pre>
|
<python><dask><cpu-usage><dask-distributed>
|
2024-04-26 22:32:26
| 1
| 397
|
Adriano Matos
|
78,393,150
| 1,275,942
|
Python: Alias a module (in other libraries)
|
<p>Suppose I have two python installs. Let's say my global python3.7, and a python3.10 venv.</p>
<p>Between 3.7 and 3.10, a module changed its name from <code>foo</code> to <code>foobar</code>. So in 3.7, we would do <code>import foo</code>, and in python 3.10, we'd do <code>import foobar</code>.</p>
<p>Suppose my code is structured like:</p>
<pre><code>py3.7_project/src
py3.10_project/src
common/src
</code></pre>
<p>Code in <code>common/src</code> must be able to <code>import foo</code>, whether it is running in 3.7 or 3.10.</p>
<p>However, other dependencies in <code>site-packages</code> will import it by the name expected by that version, so it's not as simple as renaming <code>virtualenv-3.10/site-packages/foobar</code> to <code>site-packages/foo</code>.</p>
<p>Fixing any individual file is simple:</p>
<pre><code>try:
import foo
except ImportError:
import foobar as foo
</code></pre>
<p>And we can potentially put this in a shim, <code>foo_resolver.py</code> that creates the correct exports. However, that still requires updating all imports to <code>foo_resolver</code>, communicating to all stakeholders that all imports should use <code>foo_resolver</code> from now on, etc.</p>
<p>Is there a way to make <code>import foo</code> import <code>foobar/__init__.py</code> in the python3.10 context, ideally without changing all my imports to <code>import foo_resolver as foo</code> or messing with links in site-packages?</p>
|
<python><python-import><python-packaging>
|
2024-04-26 22:06:42
| 0
| 899
|
Kaia
|
78,393,072
| 480,118
|
pandas: dataframe with multilevel columns shift up the date index to to flatten out
|
<p>I have data that arrives to me that looks like this:</p>
<pre><code>import pandas as pd, numpy as np
data = [['', 'CCC', 'CCC', 'AAA', 'BBB' ],
['date', 'field1', 'file2', 'field1','field1'],
['01/01/2024', 100, 102, 103, 104 ],
['01/02/2024', 200, 202, 203, 204 ],
['01/03/2024', 300, 302, 303, 304 ]]
df = pd.DataFrame(data)
idx = pd.MultiIndex.from_arrays(df.iloc[:2,1:].values, names=['symbol', None])
df = df.iloc[2:,1:].set_axis(df.iloc[2:,0].rename('date'))
df = df.set_axis(idx, axis=1)
df
</code></pre>
<p><a href="https://i.sstatic.net/xYDkY1iI.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/xYDkY1iI.png" alt="enter image description here" /></a></p>
<p>i do some processing of the dataframe then i now want to flatten this out a bit to ensure 'date' is on the 2nd row (with the field)..so that if i were to serialize to json or an excel file, it would appear much like the array above.
I am having trouble doing so..I have the below code that has a couple of problems</p>
<ol>
<li><code>df.columns.levels[0]</code> seem to return a set like list..with unique values. so this fails when trying to do the MultiIndex because of different widths of the two arrays</li>
<li>it also returns a sorted list, which means if i were to add it back as an index it would be out of order.</li>
</ol>
<pre><code>l1_idx = list(df.columns.levels[0])
df.columns = df.columns.droplevel(0)
#reset to flatten out and shift up that 'date' column
df = df.reset_index()
#now add back the level 0 index - how do we maintain the original order?
idx = pd.MultiIndex.from_arrays([l1_idx, df.columns])
df = data.set_axis(idx, axis=1)
df
</code></pre>
<p>what im hoping for is a dataframe that looks exactly like the array above, but without the 'symbol' label</p>
|
<python><pandas>
|
2024-04-26 21:38:24
| 1
| 6,184
|
mike01010
|
78,393,066
| 9,877,065
|
Can't have Designer to resize Qwidget inside QGridLayout in pyqt5?
|
<p>I can run this code:</p>
<pre><code>import sys
from PyQt5.QtWidgets import QApplication, QScrollArea, QWidget, QGridLayout, QVBoxLayout, QLabel
from PyQt5 import QtGui, QtCore
from PyQt5.QtGui import QDrag
class MainWidget(QWidget):
def __init__(self, *args, **kwargs):
super().__init__()
self.setStyleSheet("border: 1px solid black;")
self.layout = QGridLayout(self)
self.resize(600, 400)
# for i in [(0,0) , (0,1) , (1,0) , (1,1)]:
for i in [(0,0)]:
print(str(i))
wig = QWidget()
wig.resize(150,100)
# VBox = QVBoxLayout()
# VBox.addStretch(1)
# VBox.addWidget(label)
# wig.setLayout(VBox)
scroll = QScrollArea(wig)
scroll.setVerticalScrollBarPolicy(QtCore.Qt.ScrollBarAlwaysOn)
scrollAreaWidgetContents = QWidget()
# label = QLabel()
# font = self.font()
# font.setPointSize(15)
# label.setFont(font)
# label.setText(str(i)+' ')
# VBox = QVBoxLayout()
# VBox.addWidget(label)
# scrollAreaWidgetContents.setLayout(VBox)
scroll.setWidget(scrollAreaWidgetContents)
self.layout.addWidget(scroll, i[0] , i[1])
self.show()
if __name__ == '__main__':
app = QApplication(sys.argv)
w = MainWidget()
sys.exit(app.exec_())
</code></pre>
<p>and be able to resize my widget together with the widgets contained in the QGridlayout.</p>
<p>While using <strong>PyQt-Designer</strong> to write my app (<code>test003.ui</code>), after converting via <code>pyuic5</code> I get:</p>
<p><code>test003.py</code>:</p>
<pre><code>from PyQt5 import QtCore, QtGui, QtWidgets
class Ui_Form(object):
def setupUi(self, Form):
Form.setObjectName("Form")
Form.resize(658, 538)
sizePolicy = QtWidgets.QSizePolicy(QtWidgets.QSizePolicy.Expanding, QtWidgets.QSizePolicy.Expanding)
sizePolicy.setHorizontalStretch(1)
sizePolicy.setVerticalStretch(1)
sizePolicy.setHeightForWidth(Form.sizePolicy().hasHeightForWidth())
Form.setSizePolicy(sizePolicy)
Form.setSizeIncrement(QtCore.QSize(1, 1))
Form.setStyleSheet("QWidget{background-color: yellow\n"
"}")
self.gridLayoutWidget = QtWidgets.QWidget(Form)
self.gridLayoutWidget.setGeometry(QtCore.QRect(0, 10, 641, 501))
self.gridLayoutWidget.setObjectName("gridLayoutWidget")
self.gridLayout = QtWidgets.QGridLayout(self.gridLayoutWidget)
self.gridLayout.setSizeConstraint(QtWidgets.QLayout.SetNoConstraint)
self.gridLayout.setContentsMargins(0, 0, 0, 0)
self.gridLayout.setObjectName("gridLayout")
self.scrollArea = QtWidgets.QScrollArea(self.gridLayoutWidget)
self.scrollArea.setVerticalScrollBarPolicy(QtCore.Qt.ScrollBarAlwaysOn)
self.scrollArea.setWidgetResizable(True)
self.scrollArea.setObjectName("scrollArea")
self.scrollAreaWidgetContents_2 = QtWidgets.QWidget()
self.scrollAreaWidgetContents_2.setGeometry(QtCore.QRect(0, 0, 614, 495))
sizePolicy = QtWidgets.QSizePolicy(QtWidgets.QSizePolicy.Expanding, QtWidgets.QSizePolicy.Expanding)
sizePolicy.setHorizontalStretch(0)
sizePolicy.setVerticalStretch(0)
sizePolicy.setHeightForWidth(self.scrollAreaWidgetContents_2.sizePolicy().hasHeightForWidth())
self.scrollAreaWidgetContents_2.setSizePolicy(sizePolicy)
self.scrollAreaWidgetContents_2.setStyleSheet("QWidget{background-color: blue}")
self.scrollAreaWidgetContents_2.setObjectName("scrollAreaWidgetContents_2")
self.scrollArea.setWidget(self.scrollAreaWidgetContents_2)
self.gridLayout.addWidget(self.scrollArea, 0, 0, 1, 1)
self.retranslateUi(Form)
QtCore.QMetaObject.connectSlotsByName(Form)
def retranslateUi(self, Form):
_translate = QtCore.QCoreApplication.translate
Form.setWindowTitle(_translate("Form", "Form"))
</code></pre>
<p>and with main code:</p>
<pre><code>import sys
from PyQt5.QtWidgets import QApplication, QScrollArea, QWidget, QGridLayout, QVBoxLayout
from PyQt5 import QtGui
from PyQt5.QtGui import QDrag
# from untitled001 import Ui_Form
# from untitled002 import Ui_Form
# from untitled003b import Ui_Form
# from test001 import Ui_Form # form
# from test002 import Ui_Form #form + grid
from test003 import Ui_Form #form + grid + widget
class MainWidget(QWidget, Ui_Form):
def __init__(self, *args, **kwargs):
super().__init__()
self.setupUi(self)
self.show()
if __name__ == '__main__':
app = QApplication(sys.argv)
w = MainWidget()
# w.show()
sys.exit(app.exec_())
</code></pre>
<p>but I get:</p>
<p><a href="https://i.sstatic.net/nSranHXP.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/nSranHXP.png" alt="enter image description here" /></a></p>
<p>where the blue widget doesn't resize together with main one:</p>
<p><a href="https://i.sstatic.net/65flLNcB.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/65flLNcB.png" alt="enter image description here" /></a></p>
<p>What am I doing wrong? I wasn't able to find any hint in PyQt designer and reading out the <code>test003.py</code> ui.</p>
|
<python><qt><pyqt><pyqt5><qt-designer>
|
2024-04-26 21:37:17
| 1
| 3,346
|
pippo1980
|
78,393,035
| 3,103,957
|
Unexpected behaviour in Python event loop
|
<p>I have the following piece of Async code in Python.</p>
<pre><code>import asyncio
async def get_some_values_from_io():
print("Getsome value Executing...")
await asyncio.sleep(3)
return [100,200]
vals = []
async def fetcher():
while True:
print("Fetcher Executing...")
io_vals = await get_some_values_from_io()
for val in io_vals:
vals.append(io_vals)
async def monitor():
while True:
print("Monitor Executing...")
print (len(vals))
await asyncio.sleep(3)
async def main():
t1 = asyncio.create_task(fetcher())
t2 = asyncio.create_task(monitor())
await asyncio.gather(t1, t2)
asyncio.run(main())
print("Rest of the method is executing....")
</code></pre>
<p>Both the async functions being called into are calling async.sleep() method with considerable time to sleep.
While both of them sleep, the last print stetment <code>print("Rest of the method is executing....")</code> must be run.</p>
<p>But what is is getting printed is: (it just keeps going in fact)</p>
<pre><code>Fetcher Executing...
Getsome value Executing...
Monitor Executing...
0
Fetcher Executing...
Getsome value Executing...
Monitor Executing...
2
...
</code></pre>
<p>My understanding is that the whole Python programs is just single thread (GIL) and the event loop too would share the GIL. Is it not correct?</p>
<p>Also, there are mentions of run_in_executor() method which says that CPU bound tasks can be run using this method outside of event loop. So this means, there is another thread running parallelly besides event loop? This contradicts the fact of GIL.</p>
|
<python><python-asyncio><event-loop>
|
2024-04-26 21:26:05
| 3
| 878
|
user3103957
|
78,392,900
| 4,880,003
|
How can I have multiple heatmaps share axes in holoviews?
|
<p>I'm creating two heatmaps which have different ranges of data. I'd like to plot them together with the same extents, with empty cells added as needed to make them the same size. Here's code for two sample heatmaps:</p>
<pre class="lang-py prettyprint-override"><code>import random
import holoviews as hv
hv.extension("matplotlib")
data1 = [(x, y, random.random()) for x in range(3) for y in range(6)]
data2 = [(x, y, random.random() * 3) for x in range(7) for y in range(2)]
hmap1 = hv.HeatMap(data1)
hmap2 = hv.HeatMap(data2)
combined = (hmap1 + hmap2).opts(hv.opts.HeatMap(show_values=False, colorbar=True))
</code></pre>
<p>which renders as</p>
<p><a href="https://i.sstatic.net/BHXgfBsz.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/BHXgfBsz.png" alt="enter image description here" /></a></p>
<p>with one taller heatmap and one wider. I'd like them both to be 7x6 in this example. I tried doing <code>combined.opts(shared_axes=True)</code>, but the results are the same. Doing <code>hmap1.redim.values(x=[0, 1, 2, 3, 4, 5, 6])</code> also produces the same plot.</p>
<p>How can I resize multiple (not necessarily just two) heatmaps to plot them together with the same grid?</p>
|
<python><visualization><holoviews>
|
2024-04-26 20:44:43
| 1
| 10,466
|
Nathan
|
78,392,534
| 16,770,846
|
Why doesnβt the ASGI specification allow handshake rejection with custom reason?
|
<p>A client initially opens a connection to an ASGI compliant server. The server forwards a <a href="https://asgi.readthedocs.io/en/latest/specs/www.html#connect-receive-event" rel="nofollow noreferrer"><code>Connect event</code> <sup>[asgi-spec]</sup></a> to the application. This event must be responded to with either an <a href="https://asgi.readthedocs.io/en/latest/specs/www.html#accept-send-event" rel="nofollow noreferrer"><code>Accept event</code> <sup>[asgi-spec]</sup></a> or a <a href="https://asgi.readthedocs.io/en/latest/specs/www.html#close-send-event" rel="nofollow noreferrer"><code>Close event</code>. <sup>[asgi-spec]</sup></a> The server must send this event during the <em>handshake phase</em> of the WebSocket and <em><strong>not</strong></em> complete the handshake until it gets a reply.</p>
<p>If the application responds with a <code>Close event</code>, the server <em><strong>must</strong></em> close the connection with a HTTP <code>403</code> status code and not complete the WebSocket handshake.</p>
<hr/>
<p>Why simply <code>403</code>? Not even a reason key is allowed to be part of the event responded. Just <code>403</code>. You would expect that it is during <em>handshake</em> that <em>Authentication</em> would be done hence a possible <code>401</code>.</p>
<p>The WebSocket Protocol specification allows any HTTP status code besides <code>101</code>.</p>
<blockquote>
<p>Any status code other than <code>101</code> indicates that the WebSocket handshake has not completed and that <em><strong>the semantics of HTTP still apply</strong></em>.</p>
</blockquote>
<p>What is the rationale behind ASGIβs specification?</p>
|
<python><websocket><asgi>
|
2024-04-26 18:57:28
| 2
| 4,252
|
Chukwujiobi Canon
|
78,392,238
| 10,219,156
|
python dict is giving key error even if initialized with some values
|
<pre><code>data={}
vega={}
for coin in ['BTC']:
vega[coin] = {}
data[coin] = {}
data[coin]['columns']=['27-Apr-24', '28-Apr-24', '29-Apr-24', '03-May-24', '10-May-24', '17-May-24', '31-May-24']
for expiry in data[coin]['columns']:
vega[coin][expiry] = data[coin].get('Vega', {}).get('Total', {}).get(expiry, 0)
for coin in ['BTC']:
data[coin]['columns']=['27-Apr-24', '28-Apr-24', '29-Apr-24', '03-May-24', '10-May-24', '17-May-24', '31-May-24', '28-Jun-24']
for expiry in data[coin]['columns']:
vega[coin][expiry] += data[coin].get('Vega', {}).get('Total', {}).get(expiry, 0)
</code></pre>
<p>Im initializing dict, and giving values if key is not present.</p>
<p>But however, getting</p>
<pre><code>KeyError: '28-Jun-24'
</code></pre>
<p>on last line why?</p>
<p>Could someone help me figure out mistake im doing here?</p>
|
<python><pandas><list><dictionary>
|
2024-04-26 17:53:21
| 2
| 326
|
Madan Raj
|
78,392,032
| 17,524,128
|
Cannot able to redirect even if user password and user name are correct
|
<p>Here is my <code>app.py</code> file and when I am running with flask but when i am running this with my flask and try to login then it is showing this error
<a href="https://i.sstatic.net/2f86JXrM.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/2f86JXrM.png" alt="enter image description here" /></a></p>
<p><code>app.py</code> file</p>
<pre><code>from flask import Flask, render_template, request, redirect, url_for, session
import json,admin,teacher,student,quiz_api
from models import Student, Teacher, db, app
# from flask import Flask
from flask_sqlalchemy import SQLAlchemy
app = Flask(__name__)
app.secret_key = 'secret_key'
app.config['SQLALCHEMY_DATABASE_URI'] = 'sqlite:///quiz.db'
db = SQLAlchemy(app)
with app.app_context():
db.create_all()
# db.create_all()
@app.route('/')
def index():
return render_template("index.html")
@app.route('/login', methods=['POST'])
def login():
username = request.form['username']
password = request.form['password']
user_type = request.form['type']
if user_type == 'Teacher':
user = Teacher.query.filter_by(username=username).first()
if user is None:
return "incorrect username"
else:
if user.password == password:
session['username'] = username
session['id'] = user.id
return redirect(url_for('teacher_index'))
else:
return "incorrect password"
if user_type == 'Admin':
if username == 'admin' and password == 'admin':
print("inside admin ------")
return redirect(url_for('admin'))
else:
user = Student.query.filter_by(username=username).first()
if user is None:
return "incorrect username"
else:
if user.password == password:
session['username'] = username
session['id'] = user.id
return redirect(url_for('student_index'))
else:
return "incorrect password"
@app.route("/logout", methods=['POST'])
def logout():
session.pop('username', None)
return redirect(url_for('index'))
if __name__ == "__main__":
app.secret_key = 'super secret key'
app.config['SESSION_TYPE'] = 'filesystem'
app.debug = True
app.run()
</code></pre>
<p>and here is another <code>admin.py</code> file where I want to redirect</p>
<p>admin.py</p>
<pre><code>from flask import render_template, request, redirect, url_for
from models import Teacher, Student, Score, Quiz, app, db
@app.route('/admin')
def admin():
print("hello inside admin")
if 'username' == None:
return redirect(url_for("index"))
teachers = Teacher.query.all()
students = Student.query.all()
user_data = {'students': students, 'teachers': teachers}
print(user_data)
return render_template("admin.html", user_data=user_data)
@app.route('/delete_student/<student_id>')
def delete_student(student_id):
student = Student.query.filter_by(id=student_id).first()
student_quiz_scores = Score.query.filter_by(student_id=student.id).all()
for student_score in student_quiz_scores:
db.session.delete(student_score)
db.session.commit()
db.session.delete(student)
db.session.commit()
return redirect(url_for('admin'))
@app.route('/delete_teacher/<teacher_id>')
def delete_teacher(teacher_id):
teacher = Teacher.query.filter_by(id=teacher_id).first()
# quizes_of_teacher = Quiz.query.filter_by(teacher_id=teacher.id)
for quiz in Quiz.query.filter_by(teacher_id=teacher.id):
db.session.delete(quiz)
db.session.commit()
db.session.delete(teacher)
db.session.commit()
return redirect(url_for('admin'))
@app.route('/update_teacher', methods=['POST', 'GET'])
def update_teacher():
teacher_id = request.form['t_id']
teacher_username = request.form['t_username']
teacher_password = request.form['t_password']
teacher = Teacher.query.filter_by(id=teacher_id).first()
teacher.username = teacher_username
teacher.password = teacher_password
db.session.commit()
return redirect(url_for('admin'))
@app.route('/update_student', methods=['POST', 'GET'])
def update_student():
student_id = request.form['s_id']
student_username = request.form['s_username']
student_password = request.form['s_password']
student = Student.query.filter_by(id=student_id).first()
student.username = student_username
student.password = student_password
db.session.commit()
return redirect(url_for('admin'))
@app.route("/insert_teacher", methods=['POST'])
def insert_teacher():
name = request.form["teacher_name"]
password = request.form["teacher_password"]
teacher = Teacher.query.filter_by(username=name).first()
if teacher is None:
teacher = Teacher(username=name, password=password)
db.session.add(teacher)
db.session.add(teacher)
db.session.commit()
return redirect(url_for("admin"))
else:
return "404 Duplicate Name Error!"
@app.route("/insert_student", methods=['POST'])
def insert_student():
name = request.form["student_name"]
password = request.form["student_password"]
student = Student.query.filter_by(username=name).first()
if student is None:
student = Student(username=name, password=password)
db.session.add(student)
db.session.commit()
return redirect(url_for("admin"))
else:
return "404 Duplicate Name Error!"
</code></pre>
<p>but it is showing following error</p>
<pre><code>
inside admin ------
[2024-04-26 22:22:52,567] ERROR in app: Exception on /login [POST]
Traceback (most recent call last):
File "C:\Users\vatsal\miniconda3\Lib\site-packages\flask\app.py", line 1463, in wsgi_app
response = self.full_dispatch_request()
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\vatsal\miniconda3\Lib\site-packages\flask\app.py", line 872, in full_dispatch_request
rv = self.handle_user_exception(e)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\vatsal\miniconda3\Lib\site-packages\flask\app.py", line 870, in full_dispatch_request
rv = self.dispatch_request()
^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\vatsal\miniconda3\Lib\site-packages\flask\app.py", line 855, in dispatch_request
return self.ensure_sync(self.view_functions[rule.endpoint])(**view_args) # type: ignore[no-any-return]
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\temp\Quiz\quiz_management_system_final\app.py", line 43, in login
return redirect(url_for('admin'))
^^^^^^^^^^^^^^^^
File "C:\Users\vatsal\miniconda3\Lib\site-packages\flask\helpers.py", line 220, in url_for
^^^^^^^^^^^^^^^^^^^^
File "C:\Users\vatsal\miniconda3\Lib\site-packages\flask\app.py", line 1074, in url_for
return self.handle_url_build_error(error, endpoint, values)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
rv = url_adapter.build( # type: ignore[union-attr]
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\vatsal\miniconda3\Lib\site-packages\werkzeug\routing\map.py", line 919, in build
raise BuildError(endpoint, values, method, self)
</code></pre>
<p>Can anyone able to tell me where I am making mistake</p>
|
<python><python-3.x><flask>
|
2024-04-26 17:08:53
| 1
| 301
|
vatsal mangukiya
|
78,391,980
| 132,785
|
What is this Python list containing all tokens from my code?
|
<p>If I run this code in Python 3.10:</p>
<pre><code>import gc
def main():
a = 23764723
ref = gc.get_referrers(a)[0]
print(ref)
if __name__ == "__main__":
main()
</code></pre>
<p>I get the following output:</p>
<pre><code>['gc', 'main', 'a', 23764723, 'ref', 'gc', 'get_referrers', 'a', 0, 'print', 'ref', '__main__', '__name__', 'main']
</code></pre>
<p>What <em>is</em> this list, that seems to contain all of the literals(?) from my code? Is there an explanation in the Python docs anywhere?</p>
|
<python><list>
|
2024-04-26 16:55:07
| 4
| 1,988
|
Neil
|
78,391,689
| 159,361
|
Cascaded optional dependencies in Python
|
<p>A python library has two packages "package1" and "package2". Both packages support optional extras "extras". Package1 has a setup.cfg as follows:</p>
<pre><code>[options]
install_requires = package2
[options.extras_require]
extras = package2[extras]
</code></pre>
<p>and package2 has setup.cfg has:</p>
<pre><code>[options.extras_require]
extras = extra_package
</code></pre>
<p>The idea being that if package1 is install via <code>pip install package1[extras]</code> then the dependent package2 will also pull in its extras - namely "extra_package".</p>
<p>However cascaded "extras" dependencies of package2 are NOT pulled in and extra_package is missing from the environment.
How to achieve the cascading of an extras so that both "extras" of package1 and package2 are pulled in with single pip install of the top package?</p>
|
<python><setuptools><python-packaging>
|
2024-04-26 15:59:06
| 1
| 7,755
|
Ricibob
|
78,391,600
| 1,045,755
|
multiprocessing doesn't clear cache
|
<p>I am trying to use <code>multiprocessing</code> to optimize some calculations. The code to start it looks something like this:</p>
<pre><code>if __name__ == "__main__":
params = list(itertools.product(cutoff_hours, cutoff_minutes, chunk_sizes, buffer_sizes, products, optimize_ons))
params_dilled = [dill.dumps(param) for param in params]
with multiprocessing.Pool(processes=20, initializer=worker_init, maxtasksperchild=10) as pool:
results = pool.map(compute_for_parameters, params_dilled)
</code></pre>
<p>where:</p>
<pre><code>def worker_init():
globals()["dill"] = dill
</code></pre>
<p>and:</p>
<pre><code>def compute_for_parameters(params_dilled):
try:
params = dill.loads(params_dilled)
var1, var2, var3 = params
try:
starter = StarterClass(
var1=var1,
var2=var2,
var3=var3,
)
results = starter.get_data()
return results
except Exception as e:
logging.error(f"Error processing parameters {params}: {e}")
raise
finally:
logger.info(f"Clearing cache...")
del pnl, results
gc.collect()
</code></pre>
<p>Each instance of <code>StarterClass</code> loads some data (unfortunately a lot), which probably equates to roughly 500MB in size.</p>
<p>My issue is, that in the beginning, everything is doing fine. It loads, calculates, and goes on to the next. But as the multiprocessing progresses, I can see more and more data is being cached in memory, and eventually it just ends up being all my RAM used for caching, and everything just halts or becomes crazy slow.</p>
<p>How do I circumvent, or at least clear my cache whenever a process is done ?</p>
<p>The:</p>
<pre><code>finally:
logger.info(f"Clearing cache...")
del pnl, results
gc.collect()
</code></pre>
<p>doesn't seem to do much. Also, results doesn't take up much size, so even though it's saving that, that shouldn't be the cause of concern.</p>
<p>Or do I need some kind of other approach for this ?</p>
|
<python><python-multiprocessing>
|
2024-04-26 15:42:18
| 0
| 2,615
|
Denver Dang
|
78,391,553
| 11,431,477
|
Basic save_pretrained / from_pretrained not retrieving the same model that was saved - Transformers
|
<p>I created my model with:</p>
<pre><code>#Load of the model
model_checkpoint = 'microsoft/deberta-v3-large'
# model_checkpoint = 'roberta-base' # you can alternatively use roberta-base but this model is bigger thus training will take longer
# Define label maps specific to your task
id2label = {0: "Human", 1: "AI"}
label2id = {"Human": 0, "AI": 1}
# Generate classification model from model_checkpoint with the defined labels
model = AutoModelForSequenceClassification.from_pretrained(
model_checkpoint, num_labels=2, id2label=id2label, label2id=label2id)
peft_config = LoraConfig(task_type="SEQ_CLS",
r=1,
lora_alpha=16,
lora_dropout=0.2)
model = get_peft_model(model, peft_config)
</code></pre>
<p>This works ok, and I call trainer.train() to train my model</p>
<p>When I finish, I want to save the model to export it to another machine, with</p>
<pre><code>model_path = "./deberta-v3-large-5"
model.save_pretrained(model_path)
</code></pre>
<p>And reload the model with</p>
<pre><code>reloaded_model = AutoModelForSequenceClassification.from_pretrained(
model_path, num_labels=2, id2label=id2label, label2id=label2id)
</code></pre>
<p>Must be something super simple, but I can figure it out.</p>
<p>If I run tests on my reloaded_model, I get much worse accuracy than on the original model which was trained</p>
<p>I have also tried, with no luck:</p>
<pre><code># Save the model and the tokenizer
model_path = "./deberta-v3-large-4"
trainer.save_model(model_path)
tokenizer.save_pretrained(model_path, set_lower_case=False)
from transformers import AutoModelForSequenceClassification, AutoTokenizer
# Path where the model and tokenizer were saved
model_path = "./deberta-v3-large-4"
# Define label maps specific to your task
id2label = {0: "Human", 1: "AI"}
label2id = {"Human": 0, "AI": 1}
# Generate classification model from model_checkpoint with the defined labels
model_regenerate = AutoModelForSequenceClassification.from_pretrained(
model_path, num_labels=2, id2label=id2label, label2id=label2id)
tokenizer_reloaded = AutoTokenizer.from_pretrained(model_path)
peft_config = LoraConfig(task_type="SEQ_CLS",
r=1,
lora_alpha=16,
lora_dropout=0.2)
model_full_regenerate = get_peft_model(model_regenerate, peft_config)
model_full_regenerate.print_trainable_parameters()
</code></pre>
<p>In all cases, when I load the model, i get</p>
<pre><code>Some weights of DebertaV2ForSequenceClassification were not initialized from the model checkpoint at microsoft/deberta-v3-large and are newly initialized: ['classifier.bias', 'classifier.weight', 'pooler.dense.bias', 'pooler.dense.weight']
</code></pre>
<p>Thanks</p>
|
<python><huggingface-transformers>
|
2024-04-26 15:34:56
| 1
| 535
|
miguelik
|
78,391,485
| 802,678
|
Use `pip freeze > requirements.txt` to create a file with environment markers?
|
<p>There are <a href="https://peps.python.org/pep-0508/#environment-markers" rel="nofollow noreferrer">Environment Markers</a> or <a href="https://pip.pypa.io/en/stable/reference/requirement-specifiers/#requirement-specifiers" rel="nofollow noreferrer">Requirement Specifiers</a> we can use in <code>requirements.txt</code>. For example, <code>argparse;python_version<"2.7"</code> will tell pip to install <code>argparse</code> only when the python version is less than 2.7.</p>
<p>However, is it possible to generate these environment markers when using <code>pip freeze > requirements.txt</code>? <a href="https://pip.pypa.io/en/stable/cli/pip_freeze/" rel="nofollow noreferrer">Documentation of pip freeze</a> does not mention a way. I tried the <code>-r</code> option but had no luck. Do we have to edit the <code>requirements.txt</code> file manually after <code>pip freeze</code>?</p>
|
<python><pip><requirements.txt>
|
2024-04-26 15:24:35
| 1
| 582
|
Betty
|
78,391,438
| 6,674,599
|
Python Classes: NameError: name 'foo' is not defined
|
<p>Why does accessing the class attribute <code>foo</code> fail when building a tuple with non-zero length only?</p>
<pre class="lang-py prettyprint-override"><code>class Foo:
foo = 42
bar_ok1 = (foo for _ in range(10))
bar_ok2 = tuple(foo for _ in [])
bar_fail1 = tuple(foo for _ in range(10))
bar_fail2 = tuple(foo for _ in [0, 1, 2])
</code></pre>
<pre><code>Traceback (most recent call last):
File "main.py", line 1, in <module>
class Foo:
File "main.py", line 7, in Foo
bar_fail1 = tuple(foo for _ in range(10))
File "main.py", line 7, in <genexpr>
bar_fail1 = tuple(foo for _ in range(10))
NameError: name 'foo' is not defined
</code></pre>
|
<python><class><static><iterator><nameerror>
|
2024-04-26 15:18:13
| 2
| 2,035
|
Semnodime
|
78,391,417
| 2,080,848
|
Transform value in same format as output from read(AES.block_size)
|
<p>I am working on some encrypt task using python. I am using Crypto.Cipher and Crypto from <code>python</code>. The code I used is next:</p>
<pre><code>from Crypto.Cipher import AES
from Crypto import Random
iv = Random.new().read(AES.block_size)
iv
</code></pre>
<p>The output of <code>iv</code> is next:</p>
<pre><code>'\x81zD\x80\x1a\x83\xda\x02w\xd2\xf9\x98&-^\x0e'
</code></pre>
<p>Because this is an encrypting task I need to define my own iv, so I have:</p>
<pre><code>#Define own iv
val="643f5a4957263b6b4e72544e42593275"
</code></pre>
<p>My question is: how can I transform <code>val</code> to a similar format like the first <code>iv</code> I printed.</p>
<p>Many thanks.</p>
|
<python><encryption><cryptography><aes>
|
2024-04-26 15:15:37
| 1
| 39,643
|
Duck
|
78,391,292
| 7,564,952
|
pyspark code on databricks never completes execution and hang in between
|
<p>I have two data frames: df_selected and df_filtered_mins_60</p>
<p><code>df_filtered_mins_60.columns()</code><br/></p>
<blockquote>
<p>Output:["CSku", "start_timestamp", "end_timestamp"]<br/></p>
</blockquote>
<p><code>df_selected.columns() </code><br/></p>
<blockquote>
<p>Output:["DATEUPDATED", "DATE", "HOUR", "CPSKU", "BB_Status",
"ActivePrice", "PrevPrice", "MinPrice", "AsCost",
"MinMargin", "CPT", "Comp_Price", "AP_MSG"]</p>
</blockquote>
<p><code>df_selected.count()</code><br/></p>
<blockquote>
<p>Output: 7,816,521<br/></p>
</blockquote>
<p><code>df_filtered_mins_60.count()</code><br/></p>
<blockquote>
<p>Output: 112,397 <br/></p>
</blockquote>
<p>What i want to implement is:
iterate through df_filtered_mins_60, for each row take:<br/>
start_time = start_timestamp<br/>
stop_time = end_timestamp<br/>
sku = CSku<br/>
Apply below conditions on df_selected WHEN:<br/>
DATEUPDATED is equal to or in between start_time and stop_time
<br/> AND CPSKU = sku <br/>
THEN assign all the rows satisfying this condition with a constant number i. continue doing this until the end of the rows in df_filtered_mins_60. After each update increment i=i+1<br/>
<br>
Code I wrote is given below. this code never executes instead gets stuck somewhere. It would keep running for hours until I forcefully stop it.</p>
<pre><code>i = 1
df_selected = df_selected.withColumn("counter", lit(0))
# Iterate through each row of df_filtered_mins_60
for row in df_filtered_mins_60.collect():
sku = row['CSku']
start_time = row['start_timestamp']
stop_time = row['stop_timestamp']
# Apply conditions on df_selected and update "counter" column
df_selected = df_selected.withColumn("counter",
when((df_selected.DATEUPDATED >= start_time) &
(df_selected.DATEUPDATED <= stop_time) &
(df_selected.CPSKU == sku),
lit(i)).otherwise(df_selected.counter))
i += 1
# Display the updated df_selected DataFrame with the "counter" column
display(df_selected)
</code></pre>
<p>I am assigning counters because I need a set of rows from df_selected which are in between certain time windows for each SKU and this information is present in df_filtered_mins_60. After assigning a counter I need to perform aggregates on other columns in df_selected. Basically, for each window, I need some insights into what was happening during certain time windows.<br/></p>
<p>I need to get the right code in Pyspark to run on Databricks. <br/></p>
<p>Generate Sample Data:</p>
<pre><code>from pyspark.sql import SparkSession
from pyspark.sql.functions import to_timestamp
from pyspark.sql.types import StructType, StructField, StringType, IntegerType, DoubleType, TimestampType
# Initialize SparkSession
spark_a = SparkSession.builder \
.appName("Create DataFrame") \
.getOrCreate()
schema = StructType([
StructField("DATEUPDATED", StringType(), True),
StructField("DATE", StringType(), True),
StructField("HOUR", IntegerType(), True),
StructField("CPSKU", StringType(), True),
StructField("BB_Status", IntegerType(), True),
StructField("ActivePrice", DoubleType(), True),
StructField("PrevPrice", DoubleType(), True),
StructField("MinPrice", DoubleType(), True),
StructField("AsCost", DoubleType(), True),
StructField("MinMargin", DoubleType(), True),
StructField("CPT", DoubleType(), True),
StructField("Comp_Price", DoubleType(), True)
])
data=[('2024-01-01T19:45:39.151+00:00','2024-01-01',0,'MSAN10115836',0,14.86,14.86,14.86,12.63,0.00,13.90,5.84) ,
('2024-01-01T19:55:10.904+00:00','2024-01-01',0,'MSAN10115836',0,126.04,126.04,126.04,108.96,0.00,0.00,93.54),
('2024-01-01T20:35:10.904+00:00','2024-01-01',0,'MSAN10115836',0,126.04,126.04,126.04,108.96,0.00,0.00,93.54),
('2024-01-15T12:55:18.528+00:00','2024-01-01',1,'PFXNDDF4OX',1,18.16,18.16,10.56,26.85,-199.00,18.16,34.10) ,
('2024-01-15T13:25:18.528+00:00','2024-01-01',1,'PFXNDDF4OX',1,18.16,18.16,10.56,26.85,-199.00,18.16,34.10) ,
('2024-01-15T13:35:18.528+00:00','2024-01-01',1,'PFXNDDF4OX',1,18.16,18.16,10.56,26.85,-199.00,18.16,34.10) ,
('2024-01-15T13:51:09.574+00:00','2024-01-01',1,'PFXNDDF4OX',1,20.16,18.16,10.56,26.85,-199.00,18.16,34.10) ,
('2024-01-15T07:28:48.265+00:00','2024-01-01',1,'DEWNDCB135C',0,44.93,44.93,44.93,38.09,0.25,26.9,941.26),
('2024-01-15T07:50:32.412+00:00','2024-01-01',1,'DEWNDCB135C',0,44.93,44.93,44.93,38.09,0.25,26.9,941.26),
('2024-01-15T07:52:32.412+00:00','2024-01-01',1,'DEWNDCB135C',0,44.93,44.93,44.93,38.09,0.25,26.9,941.26)]
df_selected = spark.createDataFrame(data, schema=schema)
df_selected = df_selected.withColumn("DateUpdated", to_timestamp(df_selected["DATEUPDATED"], "yyyy-MM-dd'T'HH:mm:ss.SSS'+00:00'"))
display(df_selected)
</code></pre>
<p>Second Dataframe:</p>
<pre><code>schema = StructType([
StructField("CPSKU", StringType(), True),
StructField("start_timestamp", StringType(), True),
StructField("stop_timestamp", StringType(), True)
])
data_2=[('MSAN10115836','2024-01-01T19:45:39.151+00:00','2024-01-01T20:35:10.904+00:00'),
('MSAN10115836','2024-01-08T06:04:16.484+00:00','2024-01-08T06:42:14.912+00:00'),
('DEWNDCB135C','2024-01-15T07:28:48.265+00:00','2024-01-15T07:52:32.412+00:00'),
('DEWNDCB135C','2024-01-15T11:37:56.698+00:00','2024-01-15T12:35:09.693+00:00'),
('PFXNDDF4OX','2024-01-15T12:55:18.528+00:00','2024-01-15T13:51:09.574+00:00'),
('PFXNDDF4OX','2024-01-15T19:25:10.150+00:00','2024-01-15T20:24:36.385+00:00')]
df_filtered_mins_60 = spark.createDataFrame(data_2, schema=schema)
df_filtered_mins_60 = df_filtered_mins_60.withColumn("start_timestamp", to_timestamp(df_filtered_mins_60["start_timestamp"], "yyyy-MM-dd'T'HH:mm:ss.SSS'+00:00'"))
df_filtered_mins_60 = df_filtered_mins_60.withColumn("stop_timestamp", to_timestamp(df_filtered_mins_60["stop_timestamp"], "yyyy-MM-dd'T'HH:mm:ss.SSS'+00:00'"))
display(df_filtered_mins_60)
</code></pre>
|
<python><apache-spark><join><pyspark><databricks>
|
2024-04-26 14:53:10
| 1
| 455
|
irum zahra
|
78,391,260
| 2,817,520
|
Conditionally defining a class variable
|
<p>Is the following code considered Pythonic?</p>
<pre><code>class A():
if True:
x = 10
print(A.x) # prints 10
</code></pre>
<p>This came to my mind when working on a plugin based application.</p>
|
<python>
|
2024-04-26 14:46:14
| 1
| 860
|
Dante
|
78,391,203
| 732,570
|
How to use LifespanManager to test a reverse proxy in FastAPI (async testing)
|
<p>According to <a href="https://fastapi.tiangolo.com/advanced/async-tests/" rel="nofollow noreferrer">FastAPI documentation</a> I may need to use a LifespanManager. Can someone show me an example of how to use the LifespanManager in an async test? Like, with this lifespan:</p>
<pre class="lang-py prettyprint-override"><code> @asynccontextmanager
async def lifespan(_app: FastAPI):
async with httpx.AsyncClient(base_url=env.proxy_url, transport=httpx.MockTransport(dummy_response)) as client:
yield {'client': client}
await client.aclose()
</code></pre>
<p>I'm trying to test an endpoint called <code>proxy</code>, which works fine but I need tests for regression:</p>
<pre class="lang-py prettyprint-override"><code>import pytest
import pytest_asyncio
from fastapi import FastAPI
from contextlib import asynccontextmanager
from asgi_lifespan import LifespanManager
import httpx
from httpx import Response
import importlib
import uvicorn
import proxy
import env
def dummy_response(_request):
res = Response(200, content="Mock response")
res.headers['Content-Type'] = 'text/plain; charset=utf-8'
return res
@pytest_asyncio.fixture
async def mock_proxy():
importlib.reload(env)
importlib.reload(proxy)
@asynccontextmanager
async def lifespan(_app: FastAPI):
async with httpx.AsyncClient(base_url=env.proxy_url, transport=httpx.MockTransport(dummy_response)) as client:
yield {'client': client}
await client.aclose()
app = FastAPI(lifespan=lifespan)
app.add_route("/proxy/path", proxy.proxy)
async with LifespanManager(app) as manager:
yield app
@pytest_asyncio.fixture
async def _client(mock_proxy):
async with mock_proxy as app:
async with httpx.AsyncClient(app=app, base_url=env.proxy_url) as client:
yield client
@pytest.mark.anyio
async def test_proxy_get_request(_client):
async with _client as client:
response = await client.get(f"{env.proxy_url}/proxy/path", params={"query": "param"})
assert response.status_code == 200
</code></pre>
<p>This attempt tells me</p>
<blockquote>
<p>TypeError: 'FastAPI' object does not support the asynchronous context manager protocol</p>
</blockquote>
<h3>edit:</h3>
<p>this code seems pretty close, but the lifespan change to state is not occurring:</p>
<pre class="lang-py prettyprint-override"><code>import pytest
import pytest_asyncio
from fastapi import FastAPI
from contextlib import asynccontextmanager
from asgi_lifespan import LifespanManager
import httpx
from httpx import Response
import importlib
import uvicorn
import proxy
import env
def dummy_response(_request):
res = Response(200, content="Mock response")
res.headers['Content-Type'] = 'text/plain; charset=utf-8'
return res
@pytest_asyncio.fixture
async def _client():
with pytest.MonkeyPatch.context() as monkeypatch:
monkeypatch.setenv("PROXY_URL", "http://proxy")
importlib.reload(env)
importlib.reload(proxy)
@asynccontextmanager
async def lifespan(_app: FastAPI):
async with httpx.AsyncClient(
base_url=env.proxy_url,
transport=httpx.MockTransport(dummy_response)) as client:
yield {'client': client} # startup
await client.aclose() # shutdown
app = FastAPI(lifespan=lifespan)
app.add_route("/proxy/path", proxy.proxy)
transport = httpx.ASGITransport(app=app)
async with httpx.AsyncClient(transport=transport, base_url=env.proxy_url) \
as client, LifespanManager(app):
yield client
@pytest.mark.asyncio
async def test_proxy_get_request(_client):
response = await _client.get(f"/proxy/path", params={"query": "param"})
assert response.status_code == 200
</code></pre>
<blockquote>
<p>==================================================== short test summary info ====================================================
FAILED tests/regression/test_proxy.py::test_proxy_get_request - AttributeError: 'State' object has no attribute 'client'</p>
</blockquote>
<p>... in fact it seems like the LifespanManager is not doing a lot of work. If I change the last part of the fixture to:</p>
<pre class="lang-py prettyprint-override"><code> async with LifespanManager(app) as manager:
print(manager._state, app.state._state)
app.state = State(state=manager._state)
transport = httpx.ASGITransport(app=app)
print(manager._state, app.state.client)
async with httpx.AsyncClient(transport=transport, base_url=env.proxy_url) \
as client:
yield client
</code></pre>
<p>I get:</p>
<blockquote>
<p>----------------------------------------------------- Captured stdout setup -----------------------------------------------------
{'client': <httpx.AsyncClient object at 0x1034da240>} {}
{'client': <httpx.AsyncClient object at 0x1034da240>} <httpx.AsyncClient object at 0x1034da240>
==================================================== short test summary info ====================================================
FAILED tests/regression/test_proxy.py::test_proxy_get_request - AttributeError: 'State' object has no attribute 'client'</p>
</blockquote>
<p>So, app is not getting state at startup (but the manager is, it's just not applied to app for some reason). Likewise, even manually setting state myself (so why even use LifespanManager at that point), the state is not available in the proxy function's request like it is supposed to be.</p>
<p>The reason I am doing this is the first line in the proxy is:</p>
<pre class="lang-py prettyprint-override"><code>async def proxy(request: Request):
client = request.state.client
</code></pre>
<p>And this is what is failing.</p>
<h3>edit 2:</h3>
<p>thanks to Yurii's comments, I resolved this initial issue, but not what led me down this path in the first place. I can get past this issue with:</p>
<pre class="lang-py prettyprint-override"><code> async with LifespanManager(app) as manager:
async with httpx.AsyncClient(transport=httpx.ASGITransport(app=manager.app), base_url=env.proxy_url) as client:
yield client
</code></pre>
<p>However, this all started when my initial approach with FastAPI's TestClient was failing because of a weird cancellation, triggering unhandled issues in a taskgroup, triggering the stream to be exhausted and then attempted to read (which, if there was a streaming issue, the proxy wouldn't work, and it does). It turns out FastAPI doesn't allow you to use TestClient for async tests, and recommends this approach (see the link at the start of this for more). I am now getting much the same issue here:</p>
<blockquote>
<p>FAILED tests/regression/test_proxy.py::test_proxy_get_request - ExceptionGroup: unhandled errors in a TaskGroup (1 sub-exception)</p>
</blockquote>
<p>which is caused by the same cancellation issue:</p>
<pre><code>self = <asyncio.locks.Event object at 0x105cdade0 [unset]>
async def wait(self):
"""Block until the internal flag is true.
If the internal flag is true on entry, return True
immediately. Otherwise, block until another coroutine calls
set() to set the flag to true, then return True.
"""
if self._value:
return True
fut = self._get_loop().create_future()
self._waiters.append(fut)
try:
</code></pre>
<blockquote>
<pre><code> await fut
</code></pre>
</blockquote>
<p>E asyncio.exceptions.CancelledError: Cancelled by cancel scope 105cdb8f0</p>
|
<python><pytest><fastapi>
|
2024-04-26 14:35:53
| 1
| 4,737
|
roberto tomΓ‘s
|
78,391,190
| 8,507,034
|
Plotly ScatterGeo Text color
|
<p>I'm using plotly to make a choropleth map.</p>
<p>I would like to add text labels to the map using fig.add_scattergeo, but it's borrowing the colormapping from the plot and it doesn't look good. The text doesn't contrast with the background color and it is difficult to read.</p>
<p>I would like to know if and how I can modify the text traces produced by fig.add_scattergeo to have better contrast.</p>
<p><a href="https://i.sstatic.net/nKpF9dPN.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/nKpF9dPN.png" alt="enter image description here" /></a></p>
<p>Code:</p>
<pre><code># Create a choropleth map with state names
fig = px.choropleth(
top_sectors_empl,
locations="st", # Column with state names
locationmode="USA-states", # Set location mode for U.S. states
color="ds_state_sector_headcount", # Column with data to visualize
scope="usa", # Restrict to the U.S.
title="Highest Employing Sector by State", # Title for the map
color_continuous_scale="Viridis", # Color scale
# labels={"value": "Value"}, # Label for the legend
)
fig.update_coloraxes(
dict(
colorbar=dict(
title="Annual Salary",
)
)
)
fig.add_scattergeo(
locations=top_sectors_empl["st"],
locationmode="USA-states",
text=top_sectors_empl["short_sector"],
mode="text")
# Save the figure as an HTML file
fig.write_html("choropleth_map_empl.html")
os.system("open choropleth_map_empl.html")
</code></pre>
|
<python><plotly><choropleth>
|
2024-04-26 14:34:12
| 1
| 315
|
Jred
|
78,391,084
| 19,392,385
|
Generate invite from other servers (discord.py)
|
<p>I'm doing a little experiment trying to see if I can generate invite from server my bot has been added to. I encounter a problem in the definition of something called <em>InviteTarget</em> that I am not even using (?)</p>
<p>The code is the following:</p>
<pre class="lang-py prettyprint-override"><code> @commands.hybrid_command(name='makeinvite', with_app_command=True)
@commands.has_permissions(administrator=True)
async def makeinvite(self, ctx, channel: discord.abc.GuildChannel = None):
"""
Generates an invitation link for any server the bot is in.
Parameters:
channel: The ID of channel you want to generate an invitation for. If not provided, it will use the current channel where command was used.
"""
if channel is None:
invite = await ctx.channel.create_invite(max_uses=1,unique=True)
else:
invite = await channel.create_invite(max_uses=1, unique=True)
</code></pre>
<p>I then get the error message</p>
<pre><code>Traceback (most recent call last):
File "C:\Users\[redacted]\PycharmProjects\bot-ze-fourth\venv\lib\site-packages\discord\app_commands\commands.py", line 827, in _do_call
return await self._callback(self.binding, interaction, **params) # type: ignore
File "C:\Users\[redacted]\PycharmProjects\bot-ze-fourth\cogs\admin.py", line 162, in makeinvite
invite = await ctx.channel.create_invite(max_uses=1, unique=True)
File "C:\Users\[redacted]\PycharmProjects\bot-ze-fourth\venv\lib\site-packages\discord\abc.py", line 1249, in create_invite
if target_type is InviteTarget.unknown:
NameError: name 'InviteTarget' is not defined
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "C:\Users\[redacted]\PycharmProjects\bot-ze-fourth\venv\lib\site-packages\discord\ext\commands\hybrid.py", line 438, in _invoke_with_namespace
value = await self._do_call(ctx, ctx.kwargs) # type: ignore
File "C:\Users\[redacted]\PycharmProjects\bot-ze-fourth\venv\lib\site-packages\discord\app_commands\commands.py", line 846, in _do_call
raise CommandInvokeError(self, e) from e
discord.app_commands.errors.CommandInvokeError: Command 'makeinvite' raised an exception: NameError: name 'InviteTarget' is not defined
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "C:\Users\[redacted]\PycharmProjects\bot-ze-fourth\venv\lib\site-packages\discord\ext\commands\hybrid.py", line 438, in _invoke_with_namespace
value = await self._do_call(ctx, ctx.kwargs) # type: ignore
File "C:\Users\[redacted]\PycharmProjects\bot-ze-fourth\venv\lib\site-packages\discord\app_commands\commands.py", line 846, in _do_call
raise CommandInvokeError(self, e) from e
discord.ext.commands.errors.HybridCommandError: Hybrid command raised an error: Command 'makeinvite' raised an exception: NameError: name 'InviteTarget' is not defined
</code></pre>
<p>I have found several posts that either points at invite link from the current <a href="https://www.google.com/" rel="nofollow noreferrer"><code>ctx.channel</code></a> or with <a href="https://stackoverflow.com/questions/63932887/is-it-possible-to-create-a-invite-link-of-a-server-using-the-guild-id-in-discord?rq=3">non working solutions</a>.</p>
|
<python><discord><discord.py>
|
2024-04-26 14:16:08
| 0
| 359
|
Chris Ze Third
|
78,390,934
| 736,662
|
Python and parameter in Locust request
|
<p>Given a SequantialTask in a locust script I want to replace 73808 with the list all_pp_ids.
How can I construct the data= parameter of my post request to instead take in the list as parameter? Now it is hardcoded to 73808 but I want alle values in all_pp_ids instead.</p>
<pre><code> @task(2)
def generate_bids_for_all_powerplants(self):
response = self.client.post(f'https://example.com/bids/generate',
headers={"Authorization": f'Bearer {token}', "Content-Type": 'application/json'},
data='{"powerPlantIds":[73808]}', name='/bids/generate')
print("Response: ", response.request.body)
print("All ids: ", all_pp_ids)
</code></pre>
|
<python><locust>
|
2024-04-26 13:52:12
| 1
| 1,003
|
Magnus Jensen
|
78,390,766
| 1,928,054
|
Access files within python package with python 3.12 and importlib
|
<p>I'm trying to figure out what the best practice is when it comes to reading data from within a package.</p>
<p>I understood that in python 3.12 I should use importlib.resources, see e.g. <a href="https://stackoverflow.com/questions/6028000/how-to-read-a-static-file-from-inside-a-python-package">How to read a (static) file from inside a Python package?</a></p>
<p>Initially, I organized the package as follows:</p>
<pre><code>foo
βββ setup.cfg
βββ data
β βββ __init__.py
β βββ data.csv
β βββ data2
β βββ __init__.py
β βββ data2.csv
βββ src
βββ foo
βββ __init__.py
βββ bar.py
</code></pre>
<p>Such that data could be read in bar.py, as follows:</p>
<pre><code>import importlib.resources
def get_data_file(file_name):
return importlib.resources.files("foo.data").joinpath(file_name).read_text()
</code></pre>
<p>I added the following to setup.cfg:</p>
<pre><code>include_package_data = True
[options.package_data]
mypkg = data/*.csv
mypkg.data2 = data2/*.csv
</code></pre>
<p>I intended to use foo as follows:</p>
<pre><code>import foo.bar
foo.bar.get_data_file('data.csv')
foo.bar.get_data_file('data2/data2.csv')
</code></pre>
<p>However, I got the error message <code>No module named 'foo.data'</code></p>
<p>I suspect that instead, my package should be organized as:</p>
<pre><code>foo
βββ setup.cfg
βββ src
βββ data
β βββ __init__.py
β βββ data.csv
β βββ data2
β βββ __init__.py
β βββ data2.csv
βββ foo
βββ __init__.py
βββ bar.py
</code></pre>
<p>and bar.py should be changed to:</p>
<pre><code>import importlib.resources
def get_data_file(file_name):
return importlib.resources.files("data").joinpath(file_name).read_text()
</code></pre>
<p>While I can now read the text, I wonder whether this is the best practice, in terms of organizing the package, setting up setup.cfg, and the syntax related to importlib.</p>
<p>In particular, personally, I though would've been more logical to put the data folder in the root folder, instead of the src folder. Moreover, the syntax in setup.cfg is a bit confusing to me.</p>
<p><strong>EDIT</strong></p>
<p>Following up on 9769953's comments:</p>
<ol>
<li><p>Could you elaborate on using relative import vs. importlib?</p>
</li>
<li><p>Indeed the data inherently belongs to foo. Does that mean that the package should be organized as follows?</p>
</li>
</ol>
<pre><code> foo
βββ setup.cfg
βββ src
βββ foo
βββ data
β βββ __init__.py
β βββ data.csv
β βββ data2
β βββ __init__.py
β βββ data2.csv
βββ __init__.py
βββ bar.py
</code></pre>
<ol start="3">
<li>I'm not completely sure I follow the question regarding helper functions.</li>
</ol>
<p>I'll try to clarify the use case. I foresee that users would want to be able to install this package, and use it such that they can carry out calculations given data provided in data.csv, and data2.csv. In particular, these data will be parsed to pandas DataFrames.</p>
<p>If I understand your question correctly, you wonder whether this is the correct way to provide users with both the package, and the required data, correct?</p>
<ol start="4">
<li>I believe you're referring to example5.py, right? Seeing that this example seemed to be inconsistent with the advice given in the linked question, I got a bit confused regarding the best practice.</li>
</ol>
<p>Furthermore, let's assume we would like to try to get the first filetree to work. I was wondering whether this could be achieved by adapting setup.cfg, in particular, I was wondering whether I could/should add <code>data</code> to:</p>
<pre><code>[options.packages.find]
where = src
</code></pre>
<ol start="5">
<li><p>I based the inclusion of <code>__init__.py</code> in the data folders on <a href="https://importlib-resources.readthedocs.io/en/latest/using.html" rel="nofollow noreferrer">https://importlib-resources.readthedocs.io/en/latest/using.html</a> and <a href="https://www.youtube.com/watch?v=ZsGFU2qh73E" rel="nofollow noreferrer">https://www.youtube.com/watch?v=ZsGFU2qh73E</a>. That said, I now see in wim's answer that this is deprecated. I have removed those <code>__init__.py</code>, and can succesfully read the files, confirming that these are not needed anymore.</p>
</li>
<li><p>Regarding pkgutil, it seems that importlib is favored over pkgutil as of python 3.9, right?</p>
</li>
</ol>
|
<python><python-importlib>
|
2024-04-26 13:25:14
| 0
| 503
|
BdB
|
78,390,753
| 5,224,881
|
How to correctly schedule and wait for result in asyncio code from synchronous context in Jupyter notebook?
|
<p>I have a small utility for calling synchronous code using <code>asyncio</code> in parallel.</p>
<pre class="lang-py prettyprint-override"><code>import asyncio
from concurrent.futures import ThreadPoolExecutor
from asyncio import AbstractEventLoop, BaseEventLoop
async def call_many_async(fun, many_kwargs):
return await asyncio.gather(*[asyncio.to_thread(fun, **kwargs) for kwargs in many_kwargs])
def call_many(fun, many_kwargs):
loop = asyncio.get_event_loop()
if loop.is_running():
print('running loop scheduling there')
# implement the correct run inside the loop, without the run_until_complete which is crashing, because the loop already runs
future = asyncio.run_coroutine_threadsafe(call_many_async(fun, many_kwargs),
loop)
print('got the future')
res = future.result()
print('got the result')
return res
else:
return loop.run_until_complete(call_many_async(fun, many_kwargs))
</code></pre>
<p>and it works well when used from python</p>
<pre class="lang-py prettyprint-override"><code>import time
def something_complex(param) -> int:
print(f"call started with {param=}")
time.sleep(0.1) # calling some time-costly API
print("call ended")
return 3 # returning the result
results = call_many(something_complex, ({"param": i} for i in range(1, 5)))
</code></pre>
<p>from python works without any problem, but I have problem with using it from <code>IPython</code> in Jupyter, I just get</p>
<pre><code>running loop scheduling there
got the future
</code></pre>
<p>and it hangs forever.</p>
<p>Originally I had just</p>
<pre class="lang-py prettyprint-override"><code>def call_many(fun, many_kwargs):
loop = asyncio.get_event_loop()
return loop.run_until_complete(call_many_async(fun, many_kwargs))
</code></pre>
<p>but there I was getting the error</p>
<pre><code>RuntimeError: This event loop is already running
</code></pre>
<p>How to solve it?</p>
<p>Of course the</p>
<pre class="lang-py prettyprint-override"><code>results = await call_many_async(something_complex, ({"param": i} for i in range(1, 5)))
assert len(results) == 4
</code></pre>
<p>works, but I want to use the <code>call_many</code> as part of the larger codebase which I will be calling from jupyter notebook.
I have read the <a href="https://blog.jupyter.org/ipython-7-0-async-repl-a35ce050f7f7" rel="nofollow noreferrer">https://blog.jupyter.org/ipython-7-0-async-repl-a35ce050f7f7</a> but I did not find the solution, because I do not want to call the asynchronous code directly from the jupyter notebook cell, but from a synchronous code.</p>
<p>I want to avoid solutions using <code>async def call_many(fun, many_kwargs)</code> because the whole point is to be able to use the code which is calling this function from several places without needing to have sync and async equivalent of the same thing.</p>
<p>I have seen <a href="https://stackoverflow.com/questions/47518874/how-do-i-run-python-asyncio-code-in-a-jupyter-notebook">How do I run Python asyncio code in a Jupyter notebook?</a> but that explains how to call asyncio code directly, which I'm explaining above I'm not interested in.</p>
|
<python><jupyter-notebook><python-asyncio>
|
2024-04-26 13:22:50
| 1
| 1,814
|
MatΔj RaΔinskΓ½
|
78,390,578
| 2,817,520
|
How to add a relationship to an existing mapping class from another module
|
<p>I have two modules. The first one is:</p>
<pre><code># module_1.py
from sqlalchemy.orm import DeclarativeBase, Mapped, mapped_column
class Base(DeclarativeBase):
pass
class User(Base):
id: Mapped[int] = mapped_column(primary_key=True)
name: Mapped[str]
</code></pre>
<p>and the second one is:</p>
<pre><code># module_2.py
from sqlalchemy.orm import Mapped, mapped_column
from .module_1 import Base, User
class Address(Base):
id: Mapped[int] = mapped_column(primary_key=True)
city: Mapped[str]
user_id: Mapped[int] = mapped_column(
ForeignKey('user.id'), nullable=False
)
user: Mapped['User'] = relationship(back_populates='addresses')
</code></pre>
<p>Now the problem is I don't know how to add</p>
<pre><code>addresses: Mapped[Set['Address']] = relationship(back_populates='user')
</code></pre>
<p>to the <code>User</code> class from inside <code>module_2.py</code> without modifying <code>module_1.py</code> file. By the way, I don't want to use the legacy <code>backref</code> relationship parameter.</p>
|
<python><python-3.x><sqlalchemy>
|
2024-04-26 12:49:53
| 1
| 860
|
Dante
|
78,390,366
| 4,013,571
|
Construct a new tuple type from existing tuples
|
<p>I have a <code>list</code> structure as a flattened input to a API</p>
<pre class="lang-json prettyprint-override"><code>(point_x, point_y, thing_a, thing_b, thing_c)
</code></pre>
<p><em>This real structure is a very long list of a many flattened objects. This example is a much simplified case.</em></p>
<p>I would like to make the types that create it clear in python</p>
<pre class="lang-py prettyprint-override"><code>from typing import Tuple
point = (int, int)
thing = (int, float, str)
TypeA = Tuple[point + thing]
</code></pre>
<p>This is a valid python type:</p>
<pre class="lang-py prettyprint-override"><code>typing.Tuple[int, int, int, float, str]
</code></pre>
<p>However, MyPy does not like it
.</p>
<pre class="lang-py prettyprint-override"><code>Invalid type alias: expression is not a valid type [valid-type]
</code></pre>
<p>The real structure I have is very complex and it's helpful for devs to see how the structure is created.</p>
<p>How can I do this properly without MyPy causing errors</p>
<hr />
<p>Just as an addendum, the following is not a valid solution</p>
<pre class="lang-py prettyprint-override"><code>from typing import Tuple
Tuple[
int, # point_x
int, # point_y
int, # thing_a
float, # thing_b
str # thing_c
]
</code></pre>
|
<python><python-typing><mypy>
|
2024-04-26 12:13:19
| 1
| 11,353
|
Alexander McFarlane
|
78,389,639
| 3,400,076
|
How to create a download files button in ckan?
|
<p>Hi I am currently using Ckan2.10 and would like to create a Download button in the ckan webpage. After clicked on the Download button, it will call the plugin.py to generate an excel file, but how can I pass the excel file back to the html page so that user is able to download the excel file?</p>
<p>I am trying to upload my excel file (that I have created in the plugin.py) to the api/v3/action/resource_create,</p>
<pre><code>requests.post("http://localhost:5000/api/v3/action/resource_create",
data = {"package_id": "test1234"},
headers={"X-CKAN-API-Key": "xxxxxxxxxxxxxxxxx"},
files={"upload", open("the excel path", "rb")}
}
</code></pre>
<p>When I execute the above code, ckan will prompt error
<strong>NotImplementedError: Streamed bodies and files are mutually exclusive.</strong></p>
<p>I am trying to follow the document, <a href="https://docs.ckan.org/en/2.9/maintaining/filestore.html" rel="nofollow noreferrer">https://docs.ckan.org/en/2.9/maintaining/filestore.html</a>, to create the resource and upload the excel file into the resource then user can download from there. Not sure if this works. Appreciate if someone will enlighten me, thank you very much.</p>
|
<python><ckan>
|
2024-04-26 09:55:51
| 0
| 519
|
xxestter
|
78,389,600
| 9,542,989
|
Host and Port for snowflake-connector-python
|
<p>I am attempting to establish a connection to Snowflake using the <code>snowflake-connector-python</code> package. I am able to connect to it using the following:</p>
<pre><code>connection = connector.connect(
user='<my-username>',
password='<my-password>',
account='<my-account>'
)
</code></pre>
<p>I thought that I would also be able to connect to it by specifying the host and port like this:</p>
<pre><code>connection = connector.connect(
user='<my-username>',
password='<my-password>',
host='<my-host>,
port=443,
)
</code></pre>
<p>However, this does not work. I still have to specify my account.</p>
<p>So, essentially my question is this: what is the use of the host and port parameters? In what kind of situation will users be required to enter these, especially given the fact that only way to use Snowflake is via their cloud offering?</p>
<p>Note: Their documentation is not very clear on this and code in the repo is quite difficult to read.</p>
|
<python><snowflake-cloud-data-platform>
|
2024-04-26 09:49:11
| 1
| 2,115
|
Minura Punchihewa
|
78,389,427
| 17,123,424
|
How to generate Multiple Responses for single prompt with Google Gemini API?
|
<h2>Context</h2>
<p>I am using google gemini-api. <br />
Using their <a href="https://ai.google.dev/gemini-api/docs/get-started/python" rel="nofollow noreferrer">Python SDK</a></p>
<h2>Goal</h2>
<p>I am trying to generate multiple possible responses for a single prompt according to the <a href="https://ai.google.dev/gemini-api/docs/get-started/python#generate_text_from_text_inputs" rel="nofollow noreferrer">docs</a> and <a href="https://ai.google.dev/api/python/google/ai/generativelanguage/GenerateContentResponse#candidates" rel="nofollow noreferrer">API-reference</a></p>
<p><strong>Expected result - multiple response for a single prompt</strong> <br />
<strong>Actual result - single response</strong></p>
<h2>Code I have tried</h2>
<pre class="lang-py prettyprint-override"><code># ... more code above
model = genai.GenerativeModel(model_name="gemini-1.5-pro-latest", system_instruction=system_instruction)
response = model.generate_content("What is the meaning of life?")
resps = response.candidates
</code></pre>
<ul>
<li><code>resps</code> is a <code>list</code> which should contain more than 1 response. But there is only 1 response inside it.</li>
<li>The prompt used here is a demo prompt. But the outcome is same for any input string.</li>
</ul>
<p><strong>If any more information is needed please ask in the comments.</strong></p>
|
<python><large-language-model><google-gemini>
|
2024-04-26 09:16:21
| 1
| 1,549
|
Curious Learner
|
78,389,262
| 5,790,653
|
Microsoft OneDrive download files
|
<p>I'm googling different ways regarding how to download files from my personal OneDrive accounts.</p>
<p>I have these problems:</p>
<p>The following code requires <code>code</code> parameter, I googled a lot but didn't find anything what's the <code>code</code> and how to find or get it:</p>
<pre><code>import requests
token_params = {
'client_id': 'ClientID',
'grant_type': 'authorization_code',
'scope': 'https://graph.microsoft.com/.default',
'client_secret': 'ClientSecret',
'redirect_uri': 'https://jwt.ms'
}
tenant = 'TenantID'
token = token = requests.post(url=f'https://login.microsoftonline.com/{tenant}/oauth2/v2.0/token', data=token_params)
</code></pre>
<p>response:</p>
<pre><code>"error":"invalid_request","error_description":"AADSTS900144: The request body must contain the following parameter: \'code\'.
</code></pre>
<p>Before this, I tried the following code which requires <code>refresh_token</code>, so I had to run the above code:</p>
<pre><code>import requests
params = {
'grant_type': 'refresh_token',
'client_id': 'ClientID',
'refresh_token': ''
}
response = requests.post('https://login.microsoftonline.com/common/oauth2/v2.0/token', data=params)
</code></pre>
<p>response:</p>
<pre><code>"error":"invalid_request","error_description":"AADSTS900144: The request body must contain the following parameter: \'refresh_token\'.
</code></pre>
<p>I'm finally going to download all files from OneDrive, but I think I should pass these steps, and unfortunately I couldn't find a way to reach.</p>
<p>One of the questions I saw its answers is <a href="https://stackoverflow.com/questions/74071916/how-to-get-access-token-using-refresh-token-azuread">this</a>. I closed other tabs and I don't have them to add them in the question.</p>
|
<python><onedrive>
|
2024-04-26 08:46:10
| 0
| 4,175
|
Saeed
|
78,389,254
| 9,059,634
|
Ingesting HLS into Mediapackage
|
<p>I have the following setup for a streaming app.
EMX -> EML -> S3 -> LAMBDA -> EMP.
When I try to make a <strong>put</strong> request to mediapackage hls ingest endpoint, I get a 201.</p>
<pre class="lang-py prettyprint-override"><code>def postStreamToMediaPackage(envVariables, fileName, content, contentType):
mediaPackageUrl = envVariables["url"]
username = envVariables["username"]
password = envVariables["password"]
ingestUrl = f"{mediaPackageUrl.rstrip('/channel')}/{fileName}" # not sure what mediapackage wants.
response = requests.put(
ingestUrl, data=content, headers={
"ContentType": contentType
}, auth=HTTPDigestAuth(username, password))
if response.status_code != 201:
print(
f"Error ingesting file {fileName} to {ingestUrl}. error: {response.text}")
return {"ingestUrl": ingestUrl, "fileName": fileName, "status": response.status_code}
</code></pre>
<p>But if i check mediapackage ingress access logs,</p>
<ul>
<li><p>I can see for every file I send, it logs it twice, one that says <code>401</code> and the other one as a <code>201</code>.</p>
</li>
<li><p>I've also noticed that the root manifest gets a 404 if I send it to `channel/{root manifest name}.m3u8 but any other endpoint gets the same behaviour as mentioned previously
To test this, I connected EML to EMP directly and enabled logging and can see that the request is send in the following style</p>
</li>
<li><p><code>channel_filename</code> for all files</p>
</li>
<li><p>`channel_timestamp_sequence.ts for ts files</p>
</li>
<li><p><code>.m3u8</code> for root manifest</p>
</li>
</ul>
<p>I've tried everything I can to get a positive response from mediapackage origin endpoint but it always returns a 404 for manifest.</p>
<p>Mediapackage hls ingest is a webDav server.</p>
<p>Has anyone tried doing this? I can't find any useful doc that says how they expect these.</p>
|
<python><amazon-web-services><webdav><aws-media-live><aws-mediapackage>
|
2024-04-26 08:44:35
| 1
| 536
|
sakib11
|
78,388,929
| 23,461,455
|
Configure Sweetviz to analyze object-type column without forced type conversion?
|
<p>Consider the following short dataframe example:</p>
<pre><code>df = pd.DataFrame({'column1': [2, 4, 8, 0],
'column2': [2, 0, 0, 0],
'column3': ["test", 2, 1, 8]})
</code></pre>
<p>df.dtypes shows that the datatypes of the columns are:</p>
<pre><code>column1 int64
column2 int64
column3 object
</code></pre>
<p>Obviously column3 is of type <code>Object</code> since it has values of mixed types inside of it.</p>
<p>Now I would like to run <a href="https://pypi.org/project/sweetviz/" rel="nofollow noreferrer">sweetviz</a> over this sample dataset to generate a reporting on the columns and their data:</p>
<pre><code>import sweetviz as sv
report = sv.analyze(df)
report.show_notebook()
</code></pre>
<p>The problem is, Sweetviz seems to realise that my column3 is mostly numbers even though it is of the type <code>object</code>. Now it is not generating the report but instead giving the following suggestion:</p>
<pre><code> Convert series [column3] to a numerical value (if makes sense):
One way to do this is:
df['column3'] = pd.to_numeric(df['column3'], errors='coerce')
</code></pre>
<p>Unfortunately for my usecase this isn't an option, because I want the report also to highlight wrongly used columns in my Data. E.g. I want to highlight if a column that should contain a number contains text. So I want to treat the column as <code>object</code> even though only a small fraction of the values are not numbers.</p>
<p>I have played around with the possible parameters that sweetviz allows:</p>
<pre><code>feature_config = sv.FeatureConfig(force_text=['column3'])
report = sv.analyze(df)
report.show_notebook()
</code></pre>
<p>For example I would expect sweetviz with this config to treat column3 as text and ignore the type detection implemented in sweetviz.</p>
<p>Unfortunately I get the same suggestion to convert the column to numeric and convert the string values to NaN.</p>
<p>I also tried the other possible parameters for column3 <code>skip</code>, <code>force_cat</code>, <code>force_num</code>.
<code>force_cat</code>, <code>force_num</code> don't help at all leading to the same result.
<code>skip</code> leaves column3 out in the report which is also not a solution.</p>
<p>Any way to force sweetviz to leave the object-typed column3 as it is and analyze it? Can someone confirm, that this is a Feature of Sweetviz to check for column values data types?</p>
|
<python><pandas><dataframe><visualization>
|
2024-04-26 07:39:43
| 1
| 1,284
|
Bending Rodriguez
|
78,388,928
| 2,707,342
|
How do I update status to either expired or active depending on date and time?
|
<p>I have an application built in Django. The application allows businesses to manage their day-to-day operations and has features like; HR Management, Sales, Point of Sale, Accounting, etc.</p>
<p>For businesses to be able to attach discounts on their products, I have created a <code>Discount</code> model:</p>
<pre><code>class Discount(CommonField):
name = models.CharField(max_length=255, blank=True, null=True)
discount = models.DecimalField(max_digits=15, decimal_places=2)
discount_type = models.CharField(max_length=255, choices=DISCOUNT_TYPE_CHOICES, blank=True, null=True)
discounted_products_count = models.PositiveSmallIntegerField(default=0)
start_date = models.DateTimeField(blank=True, null=True)
expiry_date = models.DateTimeField(blank=True, null=True)
status = models.CharField(max_length=255, default="inactive", choices=DISCOUNT_STATUS)
objects = DiscountModelManager()
</code></pre>
<p>Discounts have a <strong>start date</strong> and an <strong>expiry date</strong> which have been included in the model, they also have a status field that will determine if the status is <strong>expired</strong>, <strong>active</strong>, or <strong>inactive</strong>.</p>
<p>One of the challenges I was facing had to do with at what point I should update the status of discounts. To overcome this challenge, I created a <strong>Model Manager</strong> to have a central place where the logic of updating the status is placed;</p>
<pre><code>class DiscountModelManager(TenantAwareManager):
def get_queryset(self):
queryset = super().get_queryset()
self.change_promo_code_if_end_date_extended(queryset)
self.activate_discount_if_start_date_reached(queryset)
self.expire_discount_if_expiry_date_reached(queryset)
return super().get_queryset()
def change_promo_code_if_end_date_extended(self, queryset):
"""
Activates promo codes if expiry_date has been extended and the status is expired.
"""
queryset.filter(expiry_date__gte=timezone.now(), status="expired").update(status="active")
def activate_discount_if_start_date_reached(self, queryset):
"""
Activates promo codes if start_date has been reached and the status is inactive.
"""
queryset.filter(start_date__lte=timezone.now(), status="inactive").update(status="active")
def expire_discount_if_expiry_date_reached(self, queryset):
queryset.filter(expiry_date__lte=timezone.now()).update(status="expired")
</code></pre>
<p>I have four views at the moment that are dealing with the discounts:</p>
<ol>
<li><code>Dicount List View</code>: Where all the discounts are listed belonging to that particular business (<em><strong>with pagination</strong></em>).</li>
<li><code>Detail Discount View</code>: Where a detail view of the discount is shown.</li>
<li><code>Edit Discount View</code>: Where a user can edit the discount after viewing it in the detail view</li>
<li><code>Point of Sale View</code>: Where a sale is taken place and a discount may be used during checkout.</li>
</ol>
<p>The code is working perfectly well except that I have one more issue...</p>
<p>If we have a look at the Discount List View:</p>
<pre><code>class DiscountListView(LoginRequiredMixin, View):
def get(self, request):
business_id = current_business_id()
discounts = Discount.objects.filter(business__business_id=business_id)
paginator = Paginator(discounts, 25)
page_number = request.GET.get("page")
page = paginator.get_page(page_number)
context = {
"table_headers": HTMLTemplateTags().table_headers["discounts"],
"page": page,
"search_query": search_query,
"paginator": paginator,
}
return render(request, "pages/sales/discounts/discount_list.html", context)
</code></pre>
<p>You can see that we have a <code>queryset</code> where we are fetching all the discounts belonging to the current business</p>
<p><code>discounts = Discount.objects.filter(business__business_id=business_id)</code></p>
<p>Before this queryset is completed, <code>DiscountModelManager</code> is called and the statuses are updated. Right now, it's not a big problem as we are just starting out, but once we have millions of discounts in the table, then this approach is not optimal at all as all the discounts belonging to a particular business are updated.</p>
<p>Is there a better and more efficient way to approach this?</p>
|
<python><django><django-models><optimization>
|
2024-04-26 07:39:37
| 1
| 571
|
Harith
|
78,388,899
| 2,596,475
|
Encrypt Decrypt file using GNUPG
|
<p>I created PGP keys using <a href="https://pgpkeygen.com/" rel="nofollow noreferrer">this key generator website</a> (Algo - RSA, Key Size - 4096 bits). I am using Databricks to write its encrypt and decrypt function and store public and private keys generated through pgpkeygen.com. I tried multiple ways to achieve this functionality but failed every time. Below is the latest code I have for encryption and decryption:</p>
<p>Encryption:</p>
<pre><code>import gnupg
import os
gpg = gnupg.GPG(gnupghome = 'pgp_keys/')
def encrypt_file(file_path, output_path):
with open(file_path, 'rb') as f:
encrypted_data = gpg.encrypt_file(f, "a@xyz.com")
with open(output_path, 'wb') as encrypted_file:
encrypted_file.write(encrypted_data.data)
print('ok: ', encrypted_data.ok)
print('status: ', encrypted_data.status)
print('stderr: ', encrypted_data.stderr)
</code></pre>
<p>Below are logs I gathered after executing the encryption function-</p>
<blockquote>
<p>ok: False status: invalid recipient stderr: gpg: WARNING: unsafe
permissions on homedir '/Workspace/Users/a@xyz.com/pgp_keys' [GNUPG:]
KEY_CONSIDERED 337B0001AEB11E875CBFE01C99E7824740791203 0 [GNUPG:]
KEY_CONSIDERED 337B0001AEB11E875CBFE01C99E7824740791203 0 gpg:
01E18C0B5E758C10: There is no assurance this key belongs to the named
user [GNUPG:] INV_RECP 10 a@xyz.com [GNUPG:] FAILURE encrypt 53 gpg:
[stdin]: encryption failed: Unusable public key</p>
</blockquote>
<p>The keys are correct and usable. Uploaded them multiple times after seeing the <em>Unusable public key</em> message.</p>
<p>Below is the decryption code:</p>
<pre><code>def decrypt_file(file_path, output_path):
with open(file_path, 'rb') as f:
decrypted_data = gpg.decrypt_file(f,passphrase='passphrase', output=output_path)
return decrypted_data.ok
</code></pre>
<p>I tried multiple things to rectify these errors but was not able to perform correct encryption and decryption. I need help to perform correct encryption and decryption using PGP keys.</p>
|
<python><azure-databricks><public-key-encryption><gnupg><pgp>
|
2024-04-26 07:33:17
| 1
| 795
|
Ajay
|
78,388,889
| 11,267,783
|
Matplotlib issue with colorbar using subplot_mosaic and make_axes_locatable
|
<p>I want to create a figure using subplot_mosaic which is very useful to organize plots. However with my code, when I plot the figure, I don't see the colorbar value and title. This is probably due to <code>make_axes_locatable</code> and <code>constrained_layout=True</code>, but in my case I don't have the choice to use it.</p>
<p>Moreover, when I save it, I get the entire plot but the aspect ratio is not preserved (5,7).</p>
<p>This is my code:</p>
<pre class="lang-py prettyprint-override"><code>import matplotlib.pyplot as plt
import numpy as np
from mpl_toolkits.axes_grid1.axes_divider import make_axes_locatable
def plot2d(data, ax):
im = ax.imshow(data, aspect="auto")
divider = make_axes_locatable(ax)
cax = divider.append_axes("right", size="5%", pad=0.05)
plt.colorbar(im, cax=cax, label="DATA")
fig = plt.figure(constrained_layout=True)
fig.set_size_inches(5, 7)
ax = fig.subplot_mosaic([["D1"]])
plot2d(np.random.rand(10, 1000), ax["D1"])
plt.savefig("foo3.pdf", bbox_inches="tight", dpi=100)
plt.show()
</code></pre>
|
<python><matplotlib>
|
2024-04-26 07:31:40
| 1
| 322
|
Mo0nKizz
|
78,388,852
| 20,240,835
|
How to run Snakemake on an undetermined set of samples?
|
<p>I am planning to create a Snakemake script that will run on a large scale data set. The script will:</p>
<ol>
<li>preprocess the samples,</li>
<li>filter the samples based on the results of the preprocessing (note: all preprocessing samples are required for filtering)</li>
<li>proceed to the next step for samples that meet the condition.</li>
</ol>
<p>But I dont know how to archieve this.</p>
<p>Here's the basic structure of my script:</p>
<pre><code>sample = ['A', 'B', 'C']
rule all:
input:
expand('output/pre_process/{sample}.txt', sample=sample)
# I am not sure how to add the input
# just a toy run
rule pre_process:
output:
'output/pre_process/{sample}.txt'
shell:
"""
echo "" > {output}
"""
rule filter:
input:
expand('output/pre_process/{sample}.txt', sample=sample)
output:
# all passed filter sample will in folder, one sample one file
directory('output/filter')
shell:
"""
# toy run
cp output/pre_process/{{A,B}}.txt output/filter/
"""
rule process:
input:
# I need process samples in output/filter one by one
output:
'output/data/{sample}.txt'
shell:
"""
# just example, not run
echo "" > {output}
"""
</code></pre>
<p>Note: cant get what sample need be furture process unless <code>filter</code> step output the files</p>
|
<python><workflow><snakemake>
|
2024-04-26 07:25:37
| 1
| 689
|
zhang
|
78,388,761
| 724,403
|
Manim axes alignment problem: How to align ParametricFunction and Points with Axes?
|
<p>How do I align my ParametricFunction and Points with Axes?</p>
<p>I felt inspired to build parametric animations in manim. I have played around with the package, gotten a variety of 2D and parametric curves rendering, and am generally enjoying it. However, I cannot seem to get the axes aligned with my parametric equations, plotted using ParametricFunction in a ThreeDScene. Technically I am using ThreeDAxes, but I have tried this with Axes as well and I get a similar result.</p>
<p>The ParametricFunction of y = (x-2)^2-1 is plotted in BLUE. For comparison, I decided to plot y = (x-2)^2-1 using axes.plot in RED. The RED version is correctly aligned with the axes. The BLUE parametric version is not.</p>
<p>I also plotted 2 yellow points: the y-intercept at (0,3), and the global minima at (2, -1). They're tiny, but you can see they they align with the BLUE ParametricFunction if you look closely.</p>
<p><a href="https://i.sstatic.net/MKtcAgpB.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/MKtcAgpB.png" alt="y=(x-2)^2-1 using ThreeDAxes" /></a>
y=(x-2)^2-1 using ThreeDAxes</p>
<p><a href="https://i.sstatic.net/iVqJeZMj.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/iVqJeZMj.png" alt="y=(x-2)^2-1 using Axes" /></a>
y=(x-2)^2-1 using Axes</p>
<p>alignment_test.py</p>
<pre><code>from manim import (
Axes,
ThreeDAxes,
ThreeDScene,
ParametricFunction,
Point,
BLUE,
RED,
YELLOW,
)
from manim.utils.file_ops import open_file as open_media_file
import numpy as np
class ParabolaAlignmentTest(ThreeDScene):
def construct(self):
# axes = Axes()
axes = ThreeDAxes()
self.add(axes)
self.add(axes.plot(lambda t: (t - 2) ** 2 - 1, x_range=[-1, 5], color=RED))
self.add(
ParametricFunction(
lambda t: np.array([t, (t - 2) ** 2 - 1, 0]),
t_range=[-1, 5],
color=BLUE,
)
)
self.add(Point([0, 3, 0], color=YELLOW))
self.add(Point([2, -1, 0], color=YELLOW))
self.wait(1)
if __name__ == "__main__":
scene = ParabolaAlignmentTest()
scene.render()
# Now, open the .mp4 file!
open_media_file(scene.renderer.file_writer.movie_file_path)
</code></pre>
<p>You can ignore the <code>if __name__ == "__main__":</code> block at the bottom if you are running manim with <code>manim -pql alignment_test.py</code>.</p>
|
<python><math><graph-visualization><manim><algorithm-animation>
|
2024-04-26 07:06:22
| 1
| 359
|
David
|
78,388,607
| 2,604,247
|
Jupyter Notebook not Starting in Virtual Environment
|
<h5>Environment</h5>
<ul>
<li>Ubuntu 20.04</li>
<li>Python 3.8.10</li>
<li>Pip 20.0.2</li>
</ul>
<p>Using pip venv to create a virtual environment in my project directory. These are the commands I am running.</p>
<pre class="lang-bash prettyprint-override"><code>python3 -m pip venv .venv # Create the virtual environment
source .venv/bin/activate
(.venv) $ time python3 -m pip install --requirement pip_requirements.txt # Inside the venv
</code></pre>
<p>My <code>pip_requirements.txt</code> has the following dependencies listed (among others)</p>
<pre><code>jupyterlab
jupyter
notebook
ipython
ipykernel
</code></pre>
<p>So when I try to run jupyter notebook inside the venv, this is the error I get.</p>
<pre class="lang-bash prettyprint-override"><code>(.venv) della@workstation:~/Python_scripts/email-classification$ jupyter-notebook
(.venv) della@workstation:~/Python_scripts/email-classification$ jupyter notebook
Traceback (most recent call last):
File "/home/della/.local/bin/jupyter-notebook", line 5, in <module>
from notebook.app import main
File "/home/della/.local/lib/python3.8/site-packages/notebook/app.py", line 20, in <module>
from jupyterlab.commands import ( # type:ignore[import-untyped]
ModuleNotFoundError: No module named 'jupyterlab'
</code></pre>
<p>Tried with both space and hyphen in <code>jupyter notebook</code> and I get the same result.</p>
<p>When I look at the path, I see two things happening the interpreter is going <em>outside</em> the venv to look for the executable script. Should it happen at all when venv is meant to provide isolation? How can I run jupyter inside the venv with packages available only in the venv?</p>
<p>More information. When I try to see the executable for jupyter, I get one inside the venv.</p>
<pre class="lang-bash prettyprint-override"><code>(.venv) della@workstation:~/Python_scripts/email-classification$ which jupyter notebook -a
/home/della/Python_scripts/email-classification/.venv/bin/jupyter
</code></pre>
|
<python><jupyter-notebook><jupyter><virtualenv><python-venv>
|
2024-04-26 06:26:04
| 2
| 1,720
|
Della
|
78,388,444
| 8,890,613
|
Counting Instances of an element with selenium and python
|
<p>so I am working on a online class that requires me to use selenium to navigate to * Navigate to The Internet website (<a href="https://the-internet.herokuapp.com" rel="nofollow noreferrer">https://the-internet.herokuapp.com</a>) - then to a specific child page (add/remove elements)then add some elements and count them and remove some and count the result. my first approach was rather ramshackle. I was able to get it to "work" (i.e correctly navigate and click elements) but unable to fill all the requirements (i.e I could not keep an a variable count and I didn't know how to add assertions)</p>
<p>So I settled on this approach</p>
<pre><code>from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.wait import WebDriverWait
from selenium.webdriver.support import expected_conditions as ec
class Webpage:
element_count = 0
def __init__(self, element_count, driver):
self.element_count = element_count
self.driver = driver
driver = webdriver.Firefox()
try:
wait = WebDriverWait(driver, 10)
driver.get('https://the-internet.herokuapp.com')
# Parent page loads
wait.until(ec.url_to_be('https://the-internet.herokuapp.com/'))
# Variables go here (as they depend on parent loading)
add_remove_parent = wait.until(ec.presence_of_element_located(
(By.LINK_TEXT, 'Add/Remove Elements')))
driver.fullscreen_window()
# This is where the functions happen
add_remove_parent.click()
# * Assert that no elements have yet been added
wait.until(ec.url_to_be('https://the-internet.herokuapp.com/add_remove_elements/'))
# Child Page Loaded
wait.until(ec.element_to_be_clickable((By.XPATH, '//button[text()="Add Element"]')))
def add_element(self):
add_element_button = self.driver.find_element(By.XPATH, '//button[text()="Add Element"]')
add_element_button.click()
self.element_count += 1
print(f'The element count is {self.element_count}')
def delete_element(self):
delete_element_button = self.driver.find_element(By.CLASS_NAME, 'added-manually')
delete_element_button.click()
self.element_count -= 1
print(f'the element count is {self.element_count}')
webpage=Webpage(self,element_count=0 ,driver)
finally:
# Close the browser
driver.quit()
#
</code></pre>
<p>But , (and here is where I am getting it wrong) I don't seem to be (Instantiating?) if that is the right word -> the class correctly and would like some direction with this.As well as maybe some direction on correctly calling the methods on it. I've done it with simpler things , and i've done it with alot of the code abstracted away. But never on my own from scratch and I can't yet see what I'm missing.</p>
<p>Also, I know I don't have any assertions yet , I'll try to add those in after , maybe into the methods themselves or maybe separate.</p>
<p>Most times when I've tried the class does not seem to understand what the argument for self , or driver should be but I thought I provided that by saying it should be firefox.</p>
<p>Much appreciated for any help!</p>
|
<python><selenium-webdriver>
|
2024-04-26 05:43:13
| 1
| 441
|
Jason Harder
|
78,388,365
| 209,942
|
Python Error: Unable to create process using python Access is denied
|
<p><strong>Environment</strong></p>
<p>I had python 3.11.x and 3.12.x installed. But I found that 3.11.x is more compatible with things in general, and 3.12.x isn't mainstream yet. Sorry if that badly described, but I think it's roughly correct.</p>
<p><strong>Problem</strong></p>
<p>In vscode, my code runs. I use a virtual environment.</p>
<p>But, I'm getting the following error:</p>
<pre><code>vscode> py
Unable to create process using
'"C:\Program Files\WindowsApps\PythonSoftwareFoundation.Python.3.12_3.12.1008.0_x64__qbz5n2kfra8p0\python3.12.exe"':
Access is denied.
</code></pre>
<p><strong>What I've tried</strong></p>
<p>Selected 3.11.x in palette, with <code>Python: Select Interpreter</code>.</p>
<p>Uninstalled 3.12, and ran 3.11 repair.</p>
<p>Uninstalled all versions of python, then re-installed 3.11.x.</p>
<p>Examined Windows environment variable in system properties, but don't see anything about python there.
<a href="https://i.sstatic.net/KnaqtXbG.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/KnaqtXbG.png" alt="enter image description here" /></a></p>
<p>Still getting same error.</p>
|
<python><visual-studio-code>
|
2024-04-26 05:16:38
| 1
| 2,270
|
johny why
|
78,388,333
| 2,779,280
|
Nested quotes in f-string with Python 3.12 vs older versions
|
<p>With Python 3.12 I can do the following without error:</p>
<pre><code>a = "abc"
s = f"{a.replace("b", "x")}"
</code></pre>
<p>Note the nested <code>"</code> characters.</p>
<p>With Python 3.6 the same will throw a SyntaxError because closing <code>)</code> and <code>}</code> are missing in this part <code>f"{a.replace("</code>.</p>
<p>Why is that? I would have expected a SyntaxError with 3.12 as well.</p>
|
<python><python-3.x><f-string>
|
2024-04-26 05:03:52
| 2
| 660
|
pktl2k
|
78,388,171
| 4,931,657
|
client connecting to server works with python, but not with kotlin ktor
|
<p>I am connecting to a web server that returns binary data to me. Using the below python code, I am able to output the results:</p>
<pre><code>from struct import unpack_from
import socket
HOST = "127.0.0.1"
PORT = 12345
with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
s.connect((HOST, PORT))
while True:
data = s.recv(7000)
print(f"type: {type(data)}, size: {len(data)}, data: {data[0:10]}")
output result >>> type: <class 'bytes'>, size: 6985, data: b'\x00\x00\x1bE\x00\x00\x00\x00\x10\x00'
</code></pre>
<p>However, when I try to do the same with ktor <a href="https://ktor.io/docs/client-create-websocket-chat.html#create-chat-client" rel="nofollow noreferrer">using their example</a> with the below code:</p>
<pre><code>import io.ktor.client.*
import io.ktor.client.plugins.websocket.*
import io.ktor.http.*
import io.ktor.websocket.*
import io.ktor.util.*
import kotlinx.coroutines.*
fun main() {
val client = HttpClient {
install(WebSockets)
}
runBlocking {
client.webSocket(method = HttpMethod.Get, host = "127.0.0.1", port = 12345) {
while(true) {
val othersMessage = incoming.receive() as? Frame.Binary ?: continue
println(othersMessage.readBytes())
}
}
}
client.close()
println("Connection closed. Goodbye!")
}
</code></pre>
<p>I get hit with an <code>Exception: Unsupported byte code, first byte is 0x92</code>.</p>
<p>Not sure what is going wrong or how I can debug this. Any ideas please?</p>
|
<python><kotlin><binary><ktor>
|
2024-04-26 04:04:36
| 1
| 5,238
|
jake wong
|
78,388,150
| 10,964,685
|
Pattern matching callback for plotly dash - python
|
<p>I've got a dropdown that allows me to filter a categorical plotly graph. A separate callback allows the user to alter this graph from a bar chart to a pie chart. This part works as expected.</p>
<p>I've got separate slider components that adjust the settings for both the bar chart or the pie chart. The problem is the sliders are fixed in place. <strong>Is it possible to add/remove the relevant sliders based on the selection chosen in the radio items?</strong></p>
<p>If the bar chart is selected (see below), only the two bar sliders should be visible. The other two should be dropped or removed. Conversely, if the pie is chosen, to the opposite should occur. The bar sliders are removed in place of the pie sliders.</p>
<p>The layout on the filtering div should stay the same.</p>
<pre><code>import dash
from dash import dcc
from dash import html
from dash.dependencies import Input, Output
import dash_bootstrap_components as dbc
import plotly.express as px
import plotly.graph_objs as go
import pandas as pd
df = pd.DataFrame({
'Fruit': ['Apple','Banana','Orange','Kiwi','Lemon'],
'Value': [1,2,4,8,6],
})
external_stylesheets = [dbc.themes.SPACELAB, dbc.icons.BOOTSTRAP]
app = dash.Dash(__name__, external_stylesheets = external_stylesheets)
filter_box = html.Div(children=[
html.Div(children=[
html.Label('Fruit', style = {}),
dcc.Dropdown(
id = 'value_type',
options = [
{'label': x, 'value': x} for x in df['Fruit'].unique()
],
value = df['Fruit'].unique(),
multi = True,
clearable = True
),
html.Label('Cat Chart', style = {'display': 'inline','paddingTop': '0rem', "justifyContent": "center"}),
dcc.RadioItems(['Bar','Pie'],'Bar',
id = 'catmap',
),
html.Label('Bar Transp', style = {'display': 'inline-block', 'paddingTop': '0.1rem',}),
dcc.Slider(0, 1, 0.2,
value = 0.6,
id = 'bar_transp'),
html.Label('Bar Width', style = {'display': 'inline-block'}),
dcc.Slider(200, 1000, 200,
value = 600,
id = 'bar_width'),
html.Label('Pie Transp', style = {'display': 'inline-block', 'paddingTop': '0.1rem',}),
dcc.Slider(0, 1, 0.2,
value = 0.6,
id = 'pie_transp'),
html.Label('Pie Hole', style = {'display': 'inline-block'}),
dcc.Slider(0, 1, 0.2,
value = 0.4,
id = 'pie_hole'),
], className = "vstack",
)
])
app.layout = dbc.Container([
dbc.Row([
dbc.Col([
dbc.Row([
dbc.Col(html.Div(filter_box),
),
]),
]),
dbc.Col([
dbc.Row([
dcc.Graph(id = 'type-chart'),
]),
])
])
], fluid = True)
@app.callback(
Output('type-chart', 'figure'),
[Input('value_type', 'value'),
Input('catmap', 'value'),
Input('bar_transp', 'value'),
Input('bar_width', 'value'),
Input('pie_transp', 'value'),
Input('pie_hole', 'value'),
])
def chart(value_type, catmap, bar_transp, bar_width, pie_transp, pie_hole):
dff = df[df['Fruit'].isin(value_type)]
if catmap == 'Bar':
df_count = dff.groupby(['Fruit'])['Value'].count().reset_index(name = 'counts')
if df_count.empty == True:
type_fig = go.Figure()
else:
df_count = df_count
type_fig = px.bar(x = df_count['Fruit'],
y = df_count['counts'],
color = df_count['Fruit'],
opacity = bar_transp,
width = bar_width
)
elif catmap == 'Pie':
df_count = dff.groupby(['Fruit'])['Value'].count().reset_index(name = 'counts')
if df_count.empty == True:
type_fig = go.Figure()
else:
df_count = df_count
type_fig = px.pie(df_count,
values = df_count['counts'],
opacity = pie_transp,
hole = pie_hole
)
return type_fig
if __name__ == '__main__':
app.run_server(debug = True)
</code></pre>
<p><a href="https://i.sstatic.net/5nFxFkHO.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/5nFxFkHO.png" alt="enter image description here" /></a></p>
|
<javascript><python><callback><plotly><pattern-matching>
|
2024-04-26 03:57:15
| 1
| 392
|
jonboy
|
78,387,926
| 4,732,111
|
Polars read_parquet method converts the original date value to a different value if the date is invalid
|
<p>I'm reading a parquet file from S3 bucket using polars and below is the code that i use:</p>
<pre><code>df = pl.read_parquet(parquet_file_name, storage_options=storage_options, hive_partitioning=False)
</code></pre>
<p>In the S3 bucket, the value (which is invalid as the year is 0200) of the date column is stored as</p>
<pre><code> start_date = 0200-03-01 00:00:00
</code></pre>
<p>After reading this value from S3 bucket using polars.read_parquet method, it internally converts the date as</p>
<pre><code> start_date = 1953-10-28 10:43:41.128654848
</code></pre>
<p>and it sets the datatype in the polars dataframe as <em>Datetime(time_unit='ns', time_zone=None)</em> for that column.</p>
<p>Is there any way that i can retain the date as is even if it is invalid? I tried doing cast but it doesn't help because the read_parquet method internally reads it as 1953-10-28 and doing any cast on top of already converted value doesn't help.</p>
<p>Any help would be highly appreciated please.</p>
<p><em>Note: Also, for comparison purpose, i tried using pandas and it behaves same like polars i.e., the start_date is 1953-10-28 10:43:41.128654848</em></p>
|
<python><pandas><python-polars>
|
2024-04-26 02:19:46
| 2
| 363
|
Balaji Venkatachalam
|
78,387,925
| 11,124,121
|
How to swap the columns according to the data type?
|
<p>The sample data is as below (The data is fake, not real data):</p>
<div class="s-table-container"><table class="s-table">
<thead>
<tr>
<th>Key</th>
<th>Death indicator</th>
<th>Date Death</th>
<th>Exact date of death</th>
<th>Death Cause</th>
</tr>
</thead>
<tbody>
<tr>
<td>00</td>
<td>Alive</td>
<td></td>
<td></td>
<td></td>
</tr>
<tr>
<td>02</td>
<td>Death hos</td>
<td>Y</td>
<td>25/9/2011</td>
<td>N00</td>
</tr>
<tr>
<td>03</td>
<td>Alive</td>
<td></td>
<td></td>
<td></td>
</tr>
<tr>
<td>09</td>
<td>Death hos</td>
<td>Y</td>
<td>J189</td>
<td>28/8/2015</td>
</tr>
<tr>
<td>07</td>
<td>Death nonhos</td>
<td>12/6/2018</td>
<td>Y</td>
<td>C20</td>
</tr>
</tbody>
</table></div>
<p>From the table, you can see the types of data within the same columns are not consistent. <code>Date Death</code> should be in date format; <code>Exact date of death</code> should only contain <code>Y</code>, <code>N</code> or blank; <code>Death Cause</code> should be in string (i.e. ICD code).</p>
<p>I forgot to mention one important thing, the date format may not be consistent, e.g. '01-05-2010','01 May 2010' can also appear in the date columns.
I tried to perform some basic data cleaning:</p>
<p>Python:</p>
<pre><code>import pandas as pd
death_y_n = death['Date Death'][pd.to_datetime(death['Date Death'], \
format='%d/%m/%Y',
errors = 'coerce')\
.isnull()]
death_disease_case = death['Exact date of death'][~((death['Exact date of death'].isin(['Y','N']))\
|(death['Exact date of death'].isnull()))]
death['Death Cause'][~pd.to_datetime(\
death['Death Cause'], \
format='%d/%m/%Y', errors = 'coerce')\
.isnull()] = \
death_disease_case
death['Date Death'][pd.to_datetime(\
death['Date Death'], \
format='%d/%m/%Y', errors = 'coerce')\
.isnull()] = \
death_to_date[pd.to_datetime(\
death['Date Death'], \
format='%d/%m/%Y', errors = 'coerce')\
.isnull()]
death['Exact date of death'][~death['Exact date of death'].isin(['Y','N'])] = \
death_y_n[~death['Exact date of death'].isin(['Y','N'])]
death['Death Cause'][pd.to_datetime(\
death['Date Death'], \
format='%d/%m/%Y', errors = 'coerce')\
.isnull()] = \
death_y_n[pd.to_datetime(\
death['Date Death'], \
format='mixed', errors = 'coerce')\
.isnull()]
</code></pre>
<p>R:</p>
<pre><code>library(tidyverse)
library(magrittr)
library(anytime)
library(Hmisc)
death_to_date = anytime(death$`Death Cause`) %>% as.character
death_y_n = death$`Date Death`[is.na(as_date(death$`Date Death`))]
death_disease_case = death$`Exact date of death`[death$`Exact date of death` %nin% c('Y','N')]
death$`Death Cause`[!is.na(as_date(death$`Death Cause` ))] = death_disease_case[!is.na(as_date(death$`Death Cause` ))]
death$`Date of Registered Death`[is.na(as_date(death$`Date Death`))] = death_to_date[is.na(as_date(death$`Date Death`))]
death$`Exact date of death`[death$`Exact date of death` %nin% c('Y','N')] = death_y_n[death$`Exact date of death` %nin% c('Y','N')]
</code></pre>
<p>However, due to multiple formats of date, some date formats cannot be parsed successfully. Is there a method to swap the columns without using <code>to_datetime()</code>/<code>anytime()</code>?</p>
<p>I am new to Python, if there are any mistakes I made, please point them out! Thank you.</p>
<p><strong>Updated:</strong></p>
<p>My python solution:</p>
<pre><code>import pandas as pd
#for death date variable save as exact date of death:'Y'/'N'
death_to_date_index_exact = (death['Date Death'].isin(['Y','N']))
death_to_date_exact = death['Date Death'][death_to_date_index_exact]
#for death cause variable save as date of death
death_cause_index_date = (~death['Death Cause'].str.contains('^[A-Za-z].*[0-9]$',na=True))
death_cause_date = death['Death Cause'][death_cause_index_date]
#for exact date of death variable save as death cause
death_exact_index_cause = (death['Exact date of death'].str.contains('^[A-Za-z].*[0-9]$',na=False))
death_exact_cause = death['Exact date of death'][death_exact_index_cause]
death['Date Death'][death_cause_index_date] = death_cause_date
death['Exact date of death'][death_to_date_index_exact] = death_to_date_exact
death['Death Cause'][death_exact_index_cause] = death_exact_cause
#Convert the date in death cause into empty
death['Death Cause'][~death['Death Cause'].str.contains('^[A-Za-z].*[0-9]$',na=True)] = np.nan
</code></pre>
|
<python><r><pandas><dataframe><dplyr>
|
2024-04-26 02:18:49
| 2
| 853
|
doraemon
|
78,387,917
| 12,454,639
|
Django models cannot be queried due to missing id field?
|
<p>I currently am working on a django project for a discord bot. the issue that I am trying to resolve is that I cannot seem to query the data for one of my models.</p>
<p>A piece of information that has been unclear to me that I am sure caused this issue was a series of migration issues I had trying to update my Character model with new relationships to the InventoryInstance model in a seperate app.</p>
<p>when I boot up a django shell_plus session - I get this error when trying to query the Character Model:</p>
<p>located in inventory.models</p>
<pre><code>In [1]: Character.objects.all()
Out[1]: OperationalError: no such column: oblivionalchemy_character.inventory_instance_id
</code></pre>
<p>Here is my Character and InventoryInstance models:</p>
<p>located at oblivionalchemy.models</p>
<pre><code>class InventoryInstance(models.Model):
character_name = models.ForeignKey('oblivionalchemy.Character', on_delete=models.CASCADE, null=True, related_name="character")
items = models.JSONField(default=dict)
</code></pre>
<pre><code>
class Character(models.Model):
user = models.ForeignKey('discordbot.DiscordUser', on_delete=models.CASCADE, null=True, related_name="characters")
name = models.CharField(max_length=40)
strength = models.IntegerField(default=10)
endurance = models.IntegerField(default=10)
agility = models.IntegerField(default=10)
speed = models.IntegerField(default=10)
willpower = models.IntegerField(default=10)
intelligence = models.IntegerField(default=10)
personality = models.IntegerField(default=10)
luck = models.IntegerField(default=10)
alchemy = models.IntegerField(default=10)
survival = models.IntegerField(default=10)
blade = models.IntegerField(default=10)
marksman = models.IntegerField(default=10)
inventory_instance = models.OneToOneField('inventory.InventoryInstance', on_delete=models.CASCADE, related_name="character", null=True, blank=True)
</code></pre>
<p>I cannot seem to figure out a way to even interact with this data to recreate records after creating "upstream" dependencies on another model. Ideally I want the workflow to go</p>
<p>DiscordUser model record created
Character model record created
InventoryInstance model record created.</p>
<p>And I already had existing Character records before I created the DiscordUser model. But Im not sure I understand how to query this data and "catch it all up" after I've made changes to the model. Can anyone help me understand what is going on here?</p>
<p>Here is the initial migration files for each app:</p>
<p>oblivionalchemy</p>
<pre><code>class Migration(migrations.Migration):
initial = True
dependencies = [
('discordbot', '0001_initial'),
('inventory', '0001_initial'),
]
operations = [
migrations.CreateModel(
name='Character',
fields=[
('id', models.BigAutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('name', models.CharField(max_length=40)),
('strength', models.IntegerField(default=10)),
('endurance', models.IntegerField(default=10)),
('agility', models.IntegerField(default=10)),
('speed', models.IntegerField(default=10)),
('willpower', models.IntegerField(default=10)),
('intelligence', models.IntegerField(default=10)),
('personality', models.IntegerField(default=10)),
('luck', models.IntegerField(default=10)),
('alchemy', models.IntegerField(default=10)),
('survival', models.IntegerField(default=10)),
('blade', models.IntegerField(default=10)),
('marksman', models.IntegerField(default=10)),
('inventory_instance', models.OneToOneField(blank=True, null=True, on_delete=django.db.models.deletion.CASCADE, related_name='character', to='inventory.inventoryinstance')),
('user', models.ForeignKey(null=True, on_delete=django.db.models.deletion.CASCADE, related_name='characters', to='discordbot.user')),
],
),
]
</code></pre>
<p>discordbot</p>
<pre><code>class Migration(migrations.Migration):
initial = True
dependencies = [
]
operations = [
migrations.CreateModel(
name='User',
fields=[
('id', models.BigAutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('discord_id', models.CharField(blank=True, max_length=100, unique=True)),
],
),
]
</code></pre>
<p>inventory</p>
<pre><code>class Migration(migrations.Migration):
initial = True
dependencies = [
]
operations = [
migrations.CreateModel(
name='InventoryInstance',
fields=[
('id', models.BigAutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('items', models.JSONField(default=dict)),
],
),
migrations.CreateModel(
name='InventoryItem',
fields=[
('id', models.BigAutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('name', models.CharField(max_length=40)),
('weight', models.IntegerField(default=0)),
('hitpoints', models.IntegerField(default=100)),
('is_default', models.BooleanField(default=False)),
('effects', models.JSONField(default=inventory.models.default_effects_for_item)),
],
),
]
</code></pre>
|
<python><django><migration>
|
2024-04-26 02:15:23
| 1
| 314
|
Syllogism
|
78,387,721
| 1,100,248
|
How to scope autogen tool to working dir?
|
<p>I am playing with <code>AutoGen</code> and I've added tools to read and write text files (mainly because don't want to waste resource)</p>
<p>My agent has working dir</p>
<pre class="lang-py prettyprint-override"><code>executor = autogen.UserProxyAgent(
name="executor",
system_message="Executor. Execute the code written by the Engineer and report the result.",
human_input_mode="NEVER",
code_execution_config={
"last_n_messages": 3,
"work_dir": WORKING_DIR,
"use_docker": False,
}, # Please set use_docker=True if docker is available to run the generated code. Using docker is safer than running the generated code directly.
)
</code></pre>
<p>And tools</p>
<pre class="lang-py prettyprint-override"><code>def read_file(file_name: Annotated[str, "File name has to be json, txt or html"]) -> int:
if not file_name.endswith(".json") and not file_name.endswith(".txt") and not file_name.endswith(".html"):
return f"I can read only .json, .txt or .html files you asked for {file_name}. Use python to read other files."
if not os.path.exists(os.path.join(WORKING_DIR, file_name)):
return f"File {file_name} does not exist."
with open(os.path.join(WORKING_DIR, file_name), "r") as f:
return f.read()
def write_file(file_name: Annotated[str, "File name"], content: Annotated[str, "text or json content"]) -> int:
# verify that nested folders exists
if not os.path.exists(f"{WORKING_DIR}/{os.path.dirname(file_name)}"):
os.makedirs(f"{WORKING_DIR}/{os.path.dirname(file_name)}")
with open(f"{WORKING_DIR}/{file_name}", "w") as f:
return f.write(content)
</code></pre>
<p>How can I do it better? So I don't have to worry about working dir in my code.</p>
|
<python><artificial-intelligence><ms-autogen>
|
2024-04-26 00:31:41
| 1
| 19,544
|
Vova Bilyachat
|
78,387,662
| 7,499,546
|
Cannot import name 'setuptools' from setuptools
|
<p>When installing a package I get the error:</p>
<p><code>ImportError: cannot import name 'setuptools' from setuptools</code></p>
<p>I am on the latest setuptools version <code>69.5.1</code> at the time of writing.</p>
|
<python><python-3.x><setuptools>
|
2024-04-25 23:57:22
| 1
| 703
|
Joshua Patterson
|
78,387,495
| 11,338,984
|
Query Firebase for "not-in document_id" in Python
|
<p>I have an array with some document ids to avoid on my search. I am trying to run the query by using the <code>FieldPath.document_id()</code> but I am getting the <code>__key__ filter value must be a Key</code>. Is there any way to solve it?</p>
<p>Here's a snippet:</p>
<pre class="lang-py prettyprint-override"><code>from firebase_admin import firestore
from google.cloud.firestore_v1.field_path import FieldPath
blocked_ids = ['123123','232323']
db = firestore.AsyncClient()
query_ref = db.collection('items').document('some-id-here').collection('other-items')
items = await query_ref.where(FieldPath.document_id(), "not-in", blocked_ids)
</code></pre>
<p>When i am printing <code>FieldPath.document_id()</code>, i am getting <code>__name__</code>.</p>
<p>I took a look at some other questions here and it seems like it's working for some.</p>
<p><a href="https://stackoverflow.com/questions/74278645/python-firestore-collection-query-order-by-document-id/74279825#74279825">Python firestore collection query order by document_id</a></p>
<p><a href="https://stackoverflow.com/questions/47876754/query-firestore-database-for-document-id/71147997#71147997">Query firestore database for document id</a></p>
<p><a href="https://stackoverflow.com/questions/63214521/how-to-use-firestore-documentid-name-in-query/63237113#63237113">How to use Firestore DocumentID __name__ in query</a></p>
|
<python><python-3.x><firebase><google-cloud-firestore>
|
2024-04-25 22:32:46
| 0
| 1,783
|
Ertan Hasani
|
78,387,304
| 2,437,514
|
possible to create custom type hint creation functions?
|
<p>I am playing around with creating a tool that can detect the addition and subtraction of incompatible dimensions (of unit-bearing values) at type checking time. It would need to know how to combine dimensions correctly when they are multiplied and divided.</p>
<p>As a part of that effort, I'm trying to get this code type check the way I want both in mypy and pycharm:</p>
<pre class="lang-py prettyprint-override"><code>from typing import TypeVar, Generic
T = TypeVar('T')
def f(item):
return float
class C(Generic[T]):
def g(self) -> f(T): # mypy says f(T) should be f[T] (mypy is wrong :) )
...
a = C().g()
a # pycharm knows this is a float!!
</code></pre>
<p>The goal here is for the type checker to be happy, and to know that <code>a</code> is a <code>float</code>.</p>
<p>This is just a simple example showing the kind of thing I'm trying to figure out.</p>
<p>Mypy doesn't like it because normally you don't see syntax like <code>f(T)</code> as a return type. However it works fine in pycharm (pycharm knows that <code>a</code> is a <code>float</code>). I understand this is probably just because mypy is pretty strict and pycharm is much more forgiving.</p>
<p>I tried turning <code>f</code> into different things to get the <code>f[T]</code> syntax to work- such as a <code>dict</code>, or a class that allows usage of <code>__getitem__</code>, but mypy is still unhappy, AND pycharm doesn't know <code>a</code> is a <code>float</code> in those cases.</p>
<p>EDIT: From the comments I now understand that a function call as a type hint is a serious no-go because function calls are a 100% runtime kind of thing. So if I'm going to do this I need to find a way to stick to the standard type hint <code>[]</code> syntax.</p>
<p>EDIT: For those interested: I posted about getting language support for an idea like this <a href="https://discuss.python.org/t/typing-enhancements-for-type-checking-values-based-on-physical-dimensions-sorted/51991/8" rel="nofollow noreferrer">on python discuss</a>. Turns out it would probably require a lot of complex features to be added to the python type system.</p>
<hr />
<p>MORE ABOUT WHY I'M TRYING TO DO THIS</p>
<p>The idea behind all of this is I would create some kind of <code>DimMultiply</code> and also some kind of <code>DimDivide</code> type hint syntax, so that you can write code like this (using 3.12 type hint syntax):</p>
<pre class="lang-py prettyprint-override"><code>class Value[T]:
def __mul__[O](self, other: O) -> DimMultiply[T, O]:
...
def __truediv__[O](self, other: O) -> DimDivide[T, O]:
...
</code></pre>
<p>Since all dimension combinations in principle can be represented as, optionally, a series of multiplications in a numerator, and, optionally, a divide operation with a series of multiplication in the denominator, you could represent any dimension combination with only tokens for each dimension type (<code>Length</code>, <code>Mass</code>, <code>Time</code>, etc etc) and with a single <code>DIVIDE</code> operator (if required).</p>
<p>The <code>DimMultiply</code> and <code>DimDivide</code> syntax would properly combine the dimensions into new tuples of types when they are multiplied/divided. Like this (very very simple):</p>
<pre><code>a: Tuple[Length]
b: Tuple[Length]
c: Tuple[Length, Length] = a * b
</code></pre>
<p>But to generalize this behavior across all kinds of dimension combinations, I'd need to write some kind of <code>DimMultiply</code> and <code>DimDivide</code> function/type that magically creates the types, and for the type checker to recognize the result coming from that function/type:</p>
<pre><code>>>> DimMultiply[Tuple[Length, DIVIDE, Time], Tuple[DIVIDE, Time]]
Tuple[Length]
</code></pre>
<p>With that kind of tool in place, you could use the type checker to detect incompatible dimension errors, like this:</p>
<pre><code>from my_unit_library import ft, lbs
x = 1*ft
y = 3*lbs
z = x + y # type checker throws error here
</code></pre>
<p>The type checker would know that the type of <code>x</code> is something like <code>Tuple[Length]</code>, and the type of <code>y</code> is something like <code>Tuple[Length, Mass, DIVIDE, Time, Time]</code> (which is a force), and that you can't add those two unlike types together.</p>
<p>However this <code>__mult__</code> and <code>__truediv__</code> return type syntax isn't working in either pycharm or mypy at the moment. If I write it in the function call way, however, pycharm seems to work, at least sometimes.</p>
<p>Bottom line: is there some kind of method I am missing for customizing the behavior of type checkers, so that it would know to "drill down" into something like <code>DimMultiply[Tuple[Length], Tuple[Length]]</code> and return <code>Tuple[Length, Length]</code> as the actual type?</p>
|
<python><pycharm><mypy><python-typing>
|
2024-04-25 21:21:25
| 0
| 45,611
|
Rick
|
78,387,303
| 10,638,608
|
structlog with Celery
|
<p>I have a working celery app. I want to add structured logging to it.</p>
<p>A complete working example would be hard to provide, so let me demostrate:</p>
<pre class="lang-py prettyprint-override"><code>import structlog
import logging
logging.config.dictConfig(...)
structlog.configure(...)
app = Celery()
</code></pre>
<p>Then, in some task:</p>
<pre class="lang-py prettyprint-override"><code>logger = structlog.getLogger()
logger.info("Foo")
</code></pre>
<p>In this app, when there are logs <strong>outside</strong> Celery tasks, they print out in a JSON format - like I want. However, each log from a Celery task has this:</p>
<pre class="lang-bash prettyprint-override"><code>[2024-04-25 20:37:14,305: INFO/ForkPoolWorker-2] {...}
</code></pre>
<pre class="lang-bash prettyprint-override"><code>[2024-04-25 20:37:14,305: INFO/MainProcess] {...}
</code></pre>
<p>prefix. And those logs don't have keys that should be here (due to my logging configuration), so that suggests that the logging isn't "configured" inside Celery tasks.</p>
<p>My question is: how do I make Celery tasks log how I want them to? Could I maybe move those two <code>logging.config.dictConfig</code> and <code>structlog.configure</code> calls to a separate function (let's say <code>initialize_logging</code>) and add it as a parameter or something to Celery so it executes it at the beggining of every task or something like this?</p>
|
<python><celery><structlog>
|
2024-04-25 21:21:05
| 2
| 1,997
|
dabljues
|
78,387,280
| 947,012
|
How to locate CA bundle used by pip?
|
<p>Pip <a href="https://pip.pypa.io/en/stable/topics/https-certificates/" rel="nofollow noreferrer">documentation</a> says:</p>
<blockquote>
<p>By default, pip will perform SSL certificate verification for network
connections it makes over HTTPS. These serve to prevent
man-in-the-middle attacks against package downloads. <strong>This does not use
the system certificate store but, instead, uses a bundled CA
certificate store from certifi.</strong></p>
</blockquote>
<p>However, if a fresh Python installation certifi is missing:</p>
<pre><code>% docker run -it --rm python:3.10 bash
root@067205989d2e:/# python -m certifi
/usr/local/bin/python: No module named certifi
</code></pre>
<p>Still, pip is using some store, and it is likely different from the path provided by certifi module. So what that path is?</p>
|
<python><pip><ssl-certificate>
|
2024-04-25 21:13:34
| 1
| 3,234
|
greatvovan
|
78,387,204
| 1,209,675
|
Pybind11 can't figure out how to access tuple elements
|
<p>I'm an experienced Python programmer trying to learn C++ to speed up some projects. Passing a py::tuple to a function, how do I access the elements?</p>
<p>Here's the constructor for a class I'm making to hold images from a video.</p>
<pre><code>Buffer::Buffer(int size, py::tuple hw_args_) {
unsigned int frame_height = get<0>(hw_args_);
unsigned int frame_width = get<1>(hw_args_);
Buffer::time_codes = new int[size];
Buffer::time_stamps = new string[size];
Buffer::frames = new unsigned char[size][hw_args[0]][hw_args[1]];
if ((Buffer::time_codes && Buffer::time_stamps) == 0) {
cout << "Error allocating memory\n";
exit(1);
}
</code></pre>
<p>}</p>
<p>This gives me an error "No instance of the overloaded function 'get' matches the argument list, argument types are pybind11::tuple"</p>
<p>I've also tried setting the frame height and widths this way.</p>
<pre><code>unsigned int frame_height = hw_args_[0];
unsigned int frame_width = hw_args_[1];
</code></pre>
<p>This gives an error "No suitable conversion function from 'pybind11::detail::tuple_accessor' to 'unsigned int' exists"</p>
<p>I'm at a loss, I can only seem to find info on making tuples in C++, not accessing them from Pybind11 / Python.</p>
|
<python><c++><c++11><pybind11>
|
2024-04-25 20:53:24
| 1
| 335
|
user1209675
|
78,387,122
| 480,118
|
Pandas merge complaining about non unique labels when key is a composite and unique
|
<p>I am trying to merge two dataframes such that i end up with one with same number of columns but but an increased row count.</p>
<pre><code>import pandas as pd, numpy as np
data1 = [['date' , 'symbol', 'value'],
['1999-01-10', 'AAA', 101],
['1999-01-11', 'AAA', 201]]
I am trying to merge two dataframes such that i end up with one with same number of columns but row count should increase
import pandas as pd, numpy as np
data1 = [['date' , 'symbol', 'value'],
['1999-01-10', 'AAA', 101],
['1999-01-11', 'AAA', 201]]
data2 = [['date' , 'symbol', 'value'],
['1999-01-10', 'BBB', 101],
['1999-01-11', 'BBB', 201]]
df1 = pd.DataFrame(data1[1:], columns=data1[:1])
df2 = pd.DataFrame(data2[1:], columns=data2[:1])
df = df1.merge(df2, on = ['date', 'symbol'], how='outer')
</code></pre>
<p>The code above produces an error on the merge line:</p>
<pre><code>ValueError: The column label 'date' is not unique.
For a multi-index, the label must be a tuple with elements corresponding to each level.
</code></pre>
<p>I know i can achieve what i am seeking with pd.CONCAT in the above case, but i want to understand why merge is failing here given that the composite keys of date+symbol are different/unique?
Furthermore i dont understand the part about multi-index. there is no index except the 'natural' one on these dataframes.</p>
|
<python><pandas>
|
2024-04-25 20:31:13
| 2
| 6,184
|
mike01010
|
78,387,100
| 305,883
|
Setting window length and frame length in STFT for clustering audio
|
<p>I am looking to understand better the consequences of setting the window and the fft length in the short time fourier transform (STFT). My goal is increasing clustering of brief vocalisations (utterances) and I am trying to increase resolution of frequencies and squash the temporal component.</p>
<p>What are the effects of setting the: <code>window frame > fft length</code>, <code>window frame == fft</code> length and <code>window frame < fft length</code> ?</p>
<hr />
<p>For clarity, I refer to:</p>
<ul>
<li>window length : number of samples in each windowed segment of the input signal (which is <code>frame_length</code> in tensorflow and <code>win_length</code>in librosa, sometimes called <code>fft_size</code> in other packages)</li>
<li>hop : Number of samples between consecutive frames</li>
<li>fft_length : length of the Fast Fourier Transform (FFT) applied to each frame (which is <code>fft_length</code> in tensorflow and <code>n_fft</code>in librosa)</li>
</ul>
<p><strong>Context</strong></p>
<p>[1] <a href="https://stackoverflow.com/a/29866550/305883">https://stackoverflow.com/a/29866550/305883</a>
I understand STFT works best for sounds considered stable, so for speech a good choice is to select the time during which sound is considered stable.</p>
<p>[2] <a href="https://www.researchgate.net/post/Fast_Fourier_Transform_FFT_for_soundscape_ecology_studies_how_to_determine_window_size_the_trade-off_between_time_and_frequency_resolution" rel="nofollow noreferrer">https://www.researchgate.net/post/Fast_Fourier_Transform_FFT_for_soundscape_ecology_studies_how_to_determine_window_size_the_trade-off_between_time_and_frequency_resolution</a></p>
<p>In my case, I am working with animals, and I only can make an "informed" guess: I select the time span in ms, and select a window depending on the sampling rate (e.g. assuming 40 ms stability at 250000 sampling rate => 10000 frames, and 8192 window is the closest power of 2)</p>
<p>[3] <a href="https://support.ircam.fr/docs/AudioSculpt/3.0/co/FFT%20Size.html" rel="nofollow noreferrer">https://support.ircam.fr/docs/AudioSculpt/3.0/co/FFT%20Size.html</a></p>
<p>I understand that window is the main parameter, and that I can increase the frequency resolution by increasing the window, and the ftt length.</p>
<p>The main param is the window. I can also oversample and interpolate the frames, by zero-padding, i.e. setting the fft length <strong>shorter</strong> than the window: fft length < window</p>
<p>However, this source also make clear that <strong>ftt is independent from the window</strong>, and that I could also setting wider.</p>
<p>[4] Indeed <strong>in Tensorflow I can set all the options</strong> - <code>window frame > fft length</code>, <code>window frame == fft</code> length and <code>window frame < fft length</code>.</p>
<p>But *<em>not in Librosa</em>, where must be Β΄fft length >= windowΒ΄ Examples:</p>
<pre><code>librosa.pyin(y,
sr=250000,
frame_length=4096,
win_length=8192,
)
# ParameterError: win_length=8192 must be less than frame_length=4096
librosa.stft(y, n_fft=2048, hop_length=None, win_length=8192)
# ParameterError: Target size (2048) must be at least input size (8192)
</code></pre>
<p>I am confused by terminology and the errors.</p>
<p>Which would be the effect of <code>window frame > fft length</code>, <code>window frame == fft</code> length and <code>window frame < fft length</code> ? Why FTT is independent from the window, but librosa enforce at least an equal or longer size, but Tensorflow allows any choice ?</p>
<p>Thanks for helping to understand and possibly contextualise to my challenge with some practical advices.</p>
|
<python><tensorflow><signal-processing><fft><librosa>
|
2024-04-25 20:26:03
| 1
| 1,739
|
user305883
|
78,387,063
| 3,731,622
|
numpy aliases np.float_ and np.float64 treated differently by mypy
|
<p>From <a href="https://numpy.org/doc/stable/reference/arrays.scalars.html#numpy.double" rel="nofollow noreferrer">numpy documentation for np.double</a> it says <code>np.float</code> and <code>np.float64</code> are aliases for <code>np.double</code>. (Noting the docs say <code>Alias on this platform</code> for <code>np.float64</code>)</p>
<p>I was expecting mypy to treat the aliases similarly, but it didn't.</p>
<p>Here is an example (print_max.py):</p>
<pre><code>import numpy as np
from numpy.typing import NDArray
def print_max(arr: NDArray[np.float32]) -> None:
print(f"arr.max() = {arr.max()}")
a = np.ones((2,3), dtype=np.float_)
b = np.ones((2,3), dtype=np.float64)
c = np.ones((2,3), dtype=np.double)
d = np.ones((2,3), dtype=np.float32)
print_max(a)
print_max(b)
print_max(c)
print_max(d)
</code></pre>
<p>If I run <code>mypy print_max.py</code>, I get the following:</p>
<pre><code>practice.py:14: error: Argument 1 to "print_max" has incompatible type "ndarray[Any, dtype[floating[_64Bit]]]"; expected "ndarray[Any, dtype[floating[_32Bit]]]" [arg-type]
Found 1 error in 1 file (checked 1 source file)
</code></pre>
<p>I would have expected mypy to have found errors for cases when the ones array was defined with with dtypes <code>np.float_</code>, <code>np.float64</code>, & <code>np.double</code> since the type hint in <code>print_max</code> specified <code>np.float32</code>.</p>
<p>Why is numpy treating these aliases for 64-bit floating points (<code>np.float_</code>, <code>np.float64</code>, & <code>np.double</code>) differently?</p>
|
<python><numpy><mypy><python-typing>
|
2024-04-25 20:15:11
| 0
| 5,161
|
user3731622
|
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.