QuestionId int64 74.8M 79.8M | UserId int64 56 29.4M | QuestionTitle stringlengths 15 150 | QuestionBody stringlengths 40 40.3k | Tags stringlengths 8 101 | CreationDate stringdate 2022-12-10 09:42:47 2025-11-01 19:08:18 | AnswerCount int64 0 44 | UserExpertiseLevel int64 301 888k | UserDisplayName stringlengths 3 30 ⌀ |
|---|---|---|---|---|---|---|---|---|
76,732,335 | 14,222,845 | How to convert a python function (with parameters) into a standalone executable? | <p>I have a python pandas function that uses some libraries and takes in a couple of parameters. I was wondering if it is possible to convert a python function with parameters to an application (so, .exe file). Would pyinstaller do the trick in this case?</p>
<p>Here is the code for my function:</p>
<pre><code>import math
import statistics
import pandas as pd
import numpy as np
from scipy.stats import kurtosis, skew
from openpyxl import load_workbook
def MySummary(fileIn, MyMethod):
    DF = pd.read_csv(fileIn)
    temp = DF['Vals']
    if (MyMethod == "mean"):
        print("The mean is " + str(statistics.mean(temp)))
    elif (MyMethod == "sd"):
        print("The standard deviation is " + str(temp.std()))
    elif (MyMethod == "Kurtosis"):
        print("The kurtosis is " + str(kurtosis(temp)))
    else:
        print("The method is not valid")
</code></pre>
<p>What will happen if this gets converted to an .exe file? Will it automatically request arguments for the function MySummary, or something like that?</p>
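<p>A packaged executable will not prompt for arguments automatically — it simply runs the script's top-level code. The usual approach is to read the parameters from the command line before calling the function. A minimal sketch (the <code>my_summary</code> stub and file/method names here are placeholders, not part of the original code):</p>

```python
import argparse

def my_summary(file_in, my_method):
    # The pandas logic from the question would go here; stubbed for brevity
    print(f"summarising {file_in} with method {my_method}")

def build_parser():
    parser = argparse.ArgumentParser(description="Summarise a CSV column")
    parser.add_argument("file_in", help="path to the CSV file")
    parser.add_argument("method", choices=["mean", "sd", "Kurtosis"])
    return parser

# After `pyinstaller --onefile myscript.py`, the exe would be invoked as:
#   MySummary.exe data.csv mean
# Here the argv list is passed explicitly so the sketch is self-contained.
args = build_parser().parse_args(["data.csv", "mean"])
my_summary(args.file_in, args.method)
```

<p>pyinstaller only bundles the entry script and its dependencies; argument handling is entirely up to the script itself.</p>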
| <python><pandas><function><executable><python-standalone> | 2023-07-20 17:28:59 | 3 | 330 | Diamoniner12345 |
76,732,210 | 5,805,827 | Polynomial function on meshgrid | <p>I have few variables <code>x</code>, <code>y</code>, <code>z</code></p>
<pre class="lang-py prettyprint-override"><code># Variables
x = np.linspace(-1.0, 1.0, num=5, endpoint=True)
y = np.linspace(-1.0, 1.0, num=5, endpoint=True)
z = np.linspace(-1.0, 1.0, num=5, endpoint=True)
# And the corresponding meshgrid
x_v, y_v, z_v = np.meshgrid(x, y, z)
</code></pre>
<p>Let's say I want to calculate a value of a quadratic polynomial with the coefficients <code>C</code> at each point of a meshgrid. For this I have a function to return a polynomial terms for a set of arguments (NOTE: here the function is generic to make it work for any number of arguments):</p>
<pre class="lang-py prettyprint-override"><code>def pol_terms(*args):
    # -- Calculate terms of the polynomial --
    # Linear part (b0, b11*x1, b12*x2, ..., b1k*xk, where k=len(args))
    entries = [1] + [pt for pt in args]
    # Combinations part (b12*x12, b13*x13, ..., bk-1k*xk-1k)
    n_args = len(args)
    for i in range(n_args):
        for j in range(i+1, n_args):
            entries.append(args[i]*args[j])
    # Quadratic part
    entries += [pt**2 for pt in args]
    return np.array([entries])
</code></pre>
<p>So, for one set of variables' values I can get the value of my polynomial as follows:</p>
<pre class="lang-py prettyprint-override"><code>X = pol_terms(1,2,3)
X.dot(C)
</code></pre>
<p>I can't figure out how to make the same for the meshgrid <code>x_v</code>, <code>y_v</code>, <code>z_v</code>. Applying <code>np.vectorize</code> on the <code>pol_terms</code> wouldn't work because it returns an array. I don't want manually looping through all the dimensions neither because I want the solution to be generic (so it works for 2 variable as well as for 5).</p>
<p>Thanks in advance!</p>
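<p>One possible vectorized approach (my sketch, not from the original post): flatten the grids into a <code>(n_points, k)</code> array, build the design matrix with plain NumPy operations, and reshape the resulting values back to the grid shape. This works for any number of variables:</p>

```python
import numpy as np

def pol_terms_grid(*grids):
    # Flatten each grid and stack into shape (n_points, k)
    pts = np.stack([np.asarray(g).ravel() for g in grids], axis=-1)
    n_pts, k = pts.shape
    # Constant and linear terms
    cols = [np.ones(n_pts)] + [pts[:, i] for i in range(k)]
    # Pairwise interaction terms x_i * x_j for i < j
    for i in range(k):
        for j in range(i + 1, k):
            cols.append(pts[:, i] * pts[:, j])
    # Quadratic terms
    cols += [pts[:, i] ** 2 for i in range(k)]
    return np.stack(cols, axis=-1)  # (n_points, n_terms)

x = y = z = np.linspace(-1.0, 1.0, num=5)
x_v, y_v, z_v = np.meshgrid(x, y, z)
X = pol_terms_grid(x_v, y_v, z_v)    # shape (125, 10) for 3 variables
C = np.ones(X.shape[1])              # hypothetical coefficient vector
values = (X @ C).reshape(x_v.shape)  # polynomial value at every grid point
```

<p>The term ordering matches the <code>pol_terms</code> function above (constant, linear, interactions, squares), so the same coefficient vector <code>C</code> can be reused.</p>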
| <python><numpy><vectorization> | 2023-07-20 17:08:58 | 1 | 1,079 | Roman J. |
76,732,153 | 3,948,658 | AWS CDK Update ImageBuilder AMI for EC2 instance, Stack Dependency Error | <p>I am using AWS CDK to create an AMI via <strong>AWS Imagebuilder</strong> and then deploy that AMI to an <strong>AWS EC2 instance</strong>. I use one stack for AMI ImageBuilder Resources (ImageBuilderStack) and one stack for EC2 resources (EC2Stack). <strong>EC2Stack</strong> calls the AMI image generated in <strong>ImageBuilderStack</strong> to deploy the EC2 instance. However, when I try to update the AMI image in ImageBuilderStack I get the following error:</p>
<p><strong>Export ImageBuilderStack:ExportsOutputFnGetAttAmiImagebImageIdEB9587C0 cannot be updated as it is in use by EC2Stack</strong>.</p>
<p>How can I resolve this and deploy an updated AMI image? I know my code works if I destroy EC2Stack first and then redeploy it with the updated image. But is there any way to do this in one go with aws cdk deploy, or with one AWS CDK CLI command? Right now I am trying to use <strong>cdk deploy EC2-Stack</strong> or <strong>cdk deploy --all</strong> to update my AMI image and EC2 instance simultaneously.</p>
<p>To modify the AMI I am adding shell commands to <strong>component_documents.test_doc_b.py</strong> (defined below).
To pass the AMI image from ImageBuilderStack to EC2Stack I am using <strong>stack_creator.py</strong> (defined below).</p>
<p>Here is my code:</p>
<p><strong>ec2_stack.py</strong></p>
<pre><code>from constructs import Construct
from aws_cdk import (
    Stack,
    aws_ec2 as ec2,
    Duration,
    CfnOutput,
)

class Ec2Stack(Stack):
    def __init__(self, scope: Construct, construct_id: str, stacks, stage, **kwargs) -> None:
        super().__init__(scope, construct_id, **kwargs)
        image_builder_stack = stacks["ImageBuilderStack"]
        iam_stack = stacks["IamStack"]
        # Where the AMI from ImageBuilderStack is referenced
        image_b = ec2.MachineImage.generic_linux(
            {"us-west-2": image_builder_stack.ami_image_b.attr_image_id})
        CfnOutput(self, "AmiOutputB", value=image_builder_stack.ami_image_b.attr_image_id)
        test_ec2 = ec2.Instance(self, "Ec2Instance",
            instance_name="deployment_test",
            vpc=shared_vpc_stack.vpc,
            vpc_subnets=ec2.SubnetSelection(
                subnet_group_name="TestPrivateSubnet"
            ),
            instance_type=ec2.InstanceType.of(ec2.InstanceClass.C5, ec2.InstanceSize.XLARGE9),
            machine_image=image_b,
            role=iam_stack.instance_role,
            security_group=test_security_group,
        )
        test_ec2.node.add_dependency(image_builder_stack.ami_image_b)
</code></pre>
<p><strong>image_builder_stack.py</strong></p>
<pre><code>from constructs import Construct
from aws_cdk import (
    Stack,
    aws_imagebuilder as imagebuilder,
    aws_iam as iam,
)
from component_documents import test_doc_b  # shell commands to run in AMI

class ImageBuilderStack(Stack):
    def __init__(self, scope: Construct, construct_id: str, stacks, stage, **kwargs) -> None:
        super().__init__(scope, construct_id, **kwargs)
        iam_stack = stacks["IamStack"]
        instance_profile = iam.CfnInstanceProfile(self, "InstanceProfile",
            roles=[iam_stack.image_build_role.role_name],
            instance_profile_name="EC2InstanceProfileForImageBuilder",
        )
        infrastructure_configuration = imagebuilder.CfnInfrastructureConfiguration(self, "InfrastructureConfiguration",
            instance_profile_name=instance_profile.instance_profile_name,
            name="InfrastructureConfig",
            description="Infrastructure configuration for test AMI image",
            terminate_instance_on_failure=True
        )
        infrastructure_configuration.node.add_dependency(instance_profile)
        yaml_data = test_doc_b.return_test_component_doc()
        test_component = imagebuilder.CfnComponent(self, f"testComponent-{image_doc}",
            name="Test-Image-Component",
            platform="Linux",
            version=version,  # ie: "1.0.3"
            description="Downloads, installs and configures splunk on Amazon Linux",
            supported_os_versions=["Amazon Linux 2"],
            data=yaml_data  # Arbitrary shell script to generate AMI
        )
        test_image_recipe = imagebuilder.CfnImageRecipe(self, f"TestImageRecipe-{image_doc}",
            components=[{"componentArn": test_component.attr_arn}],
            name="Test-Image-Recipe",
            # Amazon Linux2 X86 64bit Kernel 5.10
            parent_image="ami-076bca9dd71a9a514",
            version="1.0.1",  # ie: "1.0.3", iterate up on new releases
            working_directory="/tmp"
        )
        self.test_ami_image = imagebuilder.CfnImage(self, f"TestAmiImage-{image_doc}",
            infrastructure_configuration_arn=infrastructure_configuration.attr_arn,
            image_recipe_arn=test_image_recipe.attr_arn,
            tags={
                "Name": "test AMI"
            }
        )
</code></pre>
<p><strong>component_documents.test_doc_b.py</strong> (Contains shell commands for AWS Image Builder to run in new AMI, gets passed to image builder stack)</p>
<pre><code>def return_test_component_doc():
    test_component_doc = '''
name: TestDeploymentDocument_B
description: This is a document for downloading, installing and deploying
schemaVersion: 1.0
phases:
  - name: build
    steps:
      - name: SetEnvironment
        action: ExecuteBash
        inputs:
          commands:
            - sudo yum update -y
            - echo "Hello World"
'''
    return test_component_doc
</code></pre>
<p><strong>stack_creator.py</strong> (How I deploy stacks and pass ImageBuilder to EC2Stack)</p>
<pre><code>from constructs import Construct
from aws_cdk import (
    Stack,
    Environment
)
from test_stacks.ec2 import Ec2Stack
from test_stacks.image_builder import ImageBuilderStack
from test_stacks.iam import IamStack

class ApplicationInfrastructure(Stack):
    def __init__(self, scope: Construct, **kwargs) -> None:
        super().__init__(scope, **kwargs)
        stage = self.node.try_get_context("stage")
        env_USA = Environment(account=self.account, region="us-west-2")
        image_builder_stack = ImageBuilderStack(self, "Image-Builder-Stack", env=env_USA, stage="beta")
        ec2_stack = Ec2Stack(self, "EC2-Stack", env=env_USA,
            stacks={"ImageBuilderStack": image_builder_stack, "IamStack": IamStack},
            stage="beta")
        ec2_stack.add_dependency(image_builder_stack)
</code></pre>
| <python><amazon-web-services><amazon-ec2><aws-cdk><amazon-ami> | 2023-07-20 17:03:19 | 1 | 1,699 | dredbound |
76,732,062 | 11,681,306 | pip 23.2 fails installation for local files | <p>I am behind a firewall, so I need to open a proxy to do this, however this is what's bugging me... the example of beautifulsoup4 is just an example, this happens with any package:</p>
<pre><code>create a new venv with python 3.10 or above
pip install --upgrade pip --proxy=....
pip install beautifulsoup4 --proxy=.... ## SUCCESS installed beautifulsoup4.12.2
pip uninstall beautifulsoup4
wget https://files.pythonhosted.org/packages/af/0b/44c39cf3b18a9280950ad63a579ce395dda4c32193ee9da7ff0aed547094/beautifulsoup4-4.12.2.tar.gz
tar -xvf beautifulsoup4-4.12.2.tar.gz
pip install ./beautifulsoup4-4.12.2 --proxy=.... ## FAIL with http / ssl connection issues
</code></pre>
<p>the error I get is always similar to this, with the name of the package changing</p>
<pre><code>WARNING: Retrying (Retry(total=4, connect=None, read=None, redirect=None, status=None)) after connection broken by 'SSLError(SSLEOFError(8, 'EOF occurred in violation of protocol (_ssl.c:997)'))': /simple/setuptools/
</code></pre>
<p>which results in</p>
<pre><code>ERROR: Could not find a version that satisfies the requirement setuptools>=40.8.0 (from versions: none)
ERROR: No matching distribution found for setuptools>=40.8.0
error: subprocess-exited-with-error
</code></pre>
<p>It seems that downloading a package from PyPI (the success above) works fine, but when installing from a local directory pip fails to fetch the build dependencies from PyPI.</p>
<p>this did not happen before pip 23.2</p>
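<p>Two workarounds that may be worth trying (my suggestions, not verified for this exact setup): pip's build isolation fetches build dependencies such as setuptools in a sub-invocation of pip, so either exposing the proxy through the standard environment variables or skipping the isolated build environment can sidestep the failing fetch. The proxy URL below is a placeholder:</p>

```shell
# Option 1: set the proxy in the environment, so every pip sub-process
# sees it, not just the invocation that was given --proxy
export HTTP_PROXY=http://proxy.example.com:8080
export HTTPS_PROXY=http://proxy.example.com:8080
pip install ./beautifulsoup4-4.12.2

# Option 2: provide the build dependencies yourself and skip the
# isolated build environment entirely
pip install setuptools wheel --proxy=...
pip install --no-build-isolation ./beautifulsoup4-4.12.2
```

<p>These are environment/configuration changes rather than runnable code, so adjust the placeholder values to your network.</p>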
| <python><python-3.x><pip><proxy> | 2023-07-20 16:50:12 | 2 | 309 | Fabri Ba |
76,732,028 | 9,855,588 | moto metadata path has not been implemented | <p>Locally I can run Moto just fine.
When I run moto unit tests from Jenkins, I get the following error:</p>
<pre><code>NotImplementedError: The /latest/dynamic/instance-identity/pkcs7 metadata path has not been implemented
</code></pre>
<p>I am trying to run tests which mock creating an s3 bucket.</p>
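<p>For context, this error commonly appears when boto3 cannot find credentials and falls back to querying the EC2 instance metadata endpoint, which moto only partially implements. The moto documentation recommends exporting fake credentials so that fallback never happens — something CI hosts like Jenkins, which have no <code>~/.aws</code> configuration, typically need (the values below are placeholders):</p>

```shell
# Fake credentials so boto3 never falls through to the instance metadata service
export AWS_ACCESS_KEY_ID=testing
export AWS_SECRET_ACCESS_KEY=testing
export AWS_SECURITY_TOKEN=testing
export AWS_SESSION_TOKEN=testing
export AWS_DEFAULT_REGION=us-east-1
```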
| <python><python-3.x><moto> | 2023-07-20 16:44:54 | 0 | 3,221 | dataviews |
76,731,994 | 21,891,079 | How to prevent log messages from being printed to cell output? | <p>Importing <a href="https://github.com/Kanaries/pygwalker" rel="nofollow noreferrer">pygwalker</a> (<a href="https://pypi.org/project/pygwalker/0.1.11/" rel="nofollow noreferrer">v0.1.11</a>) changes what logging messages are displayed in the cell output. I can temporarily remove this import to prevent the messages from being logged, but I was wondering if there is an intended way to control the log messages displayed in Jupyter.</p>
<p>This example <em>does not</em> print the log message:</p>
<pre><code>import logging
import numpy
import pandas
logger = logging.getLogger()
logger.setLevel(logging.DEBUG)
logger.debug("test")
</code></pre>
<p>This example (below) <em>does</em> print the log message:</p>
<pre><code>import pygwalker
import logging
logger = logging.getLogger()
logger.setLevel(logging.DEBUG)
logger.debug("test")
</code></pre>
<p>Is there some way to programmatically control which log messages are printed in the cell output, regardless of the package imported?</p>
<h3>What I've tried</h3>
<ul>
<li>I have tried removing the import and this resolved the issue. However, I would like to be able to import this package AND control the log messages printed to the cell output.</li>
<li>I have reported this as an issue on the GitHub repo for pygwalker.</li>
<li>This question is related to <a href="https://stackoverflow.com/questions/69574331/jupyter-lab-stop-the-loging-messages-printing-out-to-cell-output">Jupyter lab: Stop the loging messages printing out to cell output</a> but the imported package is different, and this one includes a minimally reproducible example.</li>
</ul>
| <python><jupyter-notebook><python-import><python-logging> | 2023-07-20 16:40:39 | 1 | 1,051 | Joshua Shew |
76,731,986 | 149,900 | Traverse a directory recursively but stay in the same filesystem | <p>If I want to grab everything under a directory, I normally just use <code>pathlib.Path.rglob()</code></p>
<p>However let's assume that <code>/var/tmp</code> is actually a different storage device.</p>
<p>How do I recursively traverse <code>/var</code> but <em>only</em> for contents in the same filesystem/storage device?</p>
<p>Using <code>rglob()</code> simply grabs all paths (dirs &amp; files) with no regard to the filesystem.</p>
<p><strong>Edit:</strong> The OS is Linux; the Python script will be run <em>only</em> on Linux, so cross-platform compatibility is not necessary.</p>
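<p>A sketch of one way to do this on Linux (my code, not from the post): walk the tree with <code>os.walk</code> and prune any directory whose <code>st_dev</code> differs from the root's — comparing device numbers is how tools like <code>find -xdev</code> detect mount-point crossings:</p>

```python
import os
from pathlib import Path

def rglob_same_fs(root):
    """Yield every path under `root` without crossing into other filesystems."""
    root = Path(root)
    root_dev = root.stat().st_dev
    for dirpath, dirnames, filenames in os.walk(root):
        # Prune subdirectories that are mount points of another device;
        # mutating dirnames in place tells os.walk not to descend into them
        dirnames[:] = [
            d for d in dirnames
            if os.lstat(os.path.join(dirpath, d)).st_dev == root_dev
        ]
        for name in dirnames + filenames:
            yield Path(dirpath) / name
```

<p>The in-place assignment to <code>dirnames</code> is the documented mechanism for controlling which subdirectories <code>os.walk</code> visits.</p>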
| <python><pathlib> | 2023-07-20 16:39:56 | 1 | 6,951 | pepoluan |
76,731,966 | 494,134 | Get-CsOnlineUser equivalent in MS Graph REST API? | <p>I want to use the MS Graph REST API to look up a Microsoft Teams user, similar to the <code>get-csOnlineUser</code> powershell cmdlet.</p>
<p>Specifically, I want to fetch these fields:</p>
<pre><code>lineUri
onlineVoiceRoutingPolicy
onlineDialOutPolicy
</code></pre>
<p>I can look up general user information with the <code>https://graph.microsoft.com/v1.0/users/user@company.com</code> url, but the information returned does not include the above fields.</p>
<p>I tried adding <code>?$select=lineUri,onlineVoiceRoutingPolicy,onlineDialOutPolicy</code> to the end of the url to request those fields specifically, but it did not help.</p>
| <python><microsoft-graph-api><microsoft-teams> | 2023-07-20 16:37:11 | 0 | 33,765 | John Gordon |
76,731,908 | 3,529,352 | What does "break" mean ? libpython2-stdlib : Breaks: libpython-stdlib (< 2.7.15-2) but 2.7.12-1~16.04 is to be installed | <p>Env: Python is 2.7.18, OS is Ubuntu 22.04</p>
<p>I run "apt-get install python-openssl", and the error is below.
What does "Breaks" mean here? The error looks confusing with respect to the version numbers.</p>
<pre><code>The following packages have unmet dependencies:
libpython2-stdlib : Breaks: libpython-stdlib (< 2.7.15-2) but 2.7.12-1~16.04 is to be installed
...
</code></pre>
<p>I have tried three things, but I still see this issue:</p>
<p>1) install libpython2-stdlib</p>
<p>2) install libpython-stdlib</p>
<p>3) remove libpython -- whatever.</p>
| <python><apt> | 2023-07-20 16:29:27 | 0 | 854 | nathan |
76,731,898 | 10,252,177 | Unable to install cppyy on Ubuntu — gmake error 2 | <p>I'm trying to install <a href="https://cppyy.readthedocs.io/" rel="nofollow noreferrer"><code>cppyy</code></a> on Ubuntu (arm64) using a Python installation created using <a href="https://github.com/pyenv/pyenv" rel="nofollow noreferrer"><code>pyenv</code></a>. <code>gcc</code>, <code>g++</code>, <code>build-essential</code>, and <code>make</code> are installed from the default Ubuntu distribution packages.</p>
<p>I have tried two methods to install <code>cppyy</code>, but neither seems to be working. <strong>How can I install it successfully?</strong></p>
<h2>Method 1</h2>
<pre class="lang-shellsession prettyprint-override"><code>$ pip install -v cppyy
</code></pre>
<p><strong>Output</strong>: <a href="https://pastebin.com/zRg1mHGM" rel="nofollow noreferrer">Link</a></p>
<h2>Method 2</h2>
<pre class="lang-shellsession prettyprint-override"><code>$ STDCXX=20 MAKE_NPROCS=32 pip install --verbose cppyy --no-binary=cppyy-cling
</code></pre>
<p><strong>Output:</strong> <a href="https://pastebin.com/VLTxUrsp" rel="nofollow noreferrer">Link</a></p>
| <python><c++><build><python-wheel><cppyy> | 2023-07-20 16:28:41 | 0 | 522 | Ben Zelnick |
76,731,857 | 21,404,794 | Functional approach to undoing replacement of a dictionary in pandas dataframe | <p>Some time ago I asked <a href="https://stackoverflow.com/questions/76554958/undoing-replacement-with-a-dictionary-in-pandas-dataframe">this question</a> about reverting an encoding done to feed the data to a machine learning model.</p>
<p>At the time the answer was enough, but now I need more. I have some data of molar masses with noise, and want to encode it to the element symbols of the different compositions.
Here we have an example of how it can be done:</p>
<pre class="lang-py prettyprint-override"><code>
import pandas as pd
import random as rand
#Create DataFrame
df = pd.DataFrame({'col1':['one','two','two','one', 0.151],'col2':[0.2,0.2,0.2,0.2,0.2],'col3':[0.3,0.3,0.3,0.3,0.3], 'col4':[0.4,0.4,0.4,0.4,0.4]})
print(df)
#Create Simple encoding and replace with it
encoding = {'one': 0.1, 'two': 0.2}
df.replace({'col1':encoding}, inplace=True)
print(df)
#Add some noise, like the one we could find in real life
df['col1'] = df['col1'] + [rand.randint(-100,100)/10000000 for _ in range(df.shape[0])]
print(df)
#Find out the closest encoding with the noise
ms = [abs(df['col1'] - encoding[i]) for i in encoding]
print(ms)
#Revert the encoding
enc = []
for row in range(len(ms[0])):
    m = min(ms[0][row], ms[1][row])
    if m == ms[0][row]:
        enc.append(list(encoding.keys())[0])
    elif m == ms[1][row]:
        enc.append(list(encoding.keys())[1])
    else:
        enc.append('NotEncoded')
print(enc)
#Assign the reverted encoding to the df
df['col1'] = enc
print(df)
</code></pre>
<p>The problem with the solution I've come up with is that it's hard-coded and does not translate well to the real data (30+ keys in the dictionary, hundreds of rows to check), so I'm looking for a less hard-coded, more functional approach to the problem.</p>
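<p>A possible vectorized replacement (my sketch; the <code>tol</code> threshold is a hypothetical parameter, not from the original code) that works for any number of keys and rows by broadcasting a distance matrix instead of hand-writing one branch per key:</p>

```python
import numpy as np

def decode_nearest(values, encoding, tol=1e-3):
    """Map noisy numeric values back to the nearest encoding key."""
    keys = np.array(list(encoding.keys()), dtype=object)
    codes = np.array(list(encoding.values()), dtype=float)
    vals = np.asarray(values, dtype=float)
    # (n_rows, n_keys) matrix of absolute distances via broadcasting
    dist = np.abs(vals[:, None] - codes[None, :])
    nearest = dist.argmin(axis=1)
    out = keys[nearest].copy()
    # Anything too far from every code was never encoded
    out[dist.min(axis=1) > tol] = "NotEncoded"
    return out

# Usage with the question's dataframe:
#   df['col1'] = decode_nearest(df['col1'].to_numpy(), encoding)
```

<p>Adding a key to <code>encoding</code> automatically adds a column to the distance matrix, so nothing else changes as the dictionary grows.</p>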
<p>Any help will be welcome,</p>
<p>Thanks.</p>
| <python><pandas><dictionary> | 2023-07-20 16:22:48 | 1 | 530 | David Siret Marqués |
76,731,811 | 166,442 | Is there a way to use QUILoader without a project/resource file? | <p>I used QT Creator to generate a .ui file for my Python project.
To be clear, I am not using qtcreator to manage my entire project; I just use it as a .ui editor.</p>
<p>So in qtcreator, I added icons by file name, not by referencing a resource catalog, and they show fine in qtcreator.</p>
<p>However, they do not show up when loading the file in Python as follows, using PySide 5.15.2:</p>
<pre><code>loader = QtUiTools.QUiLoader()
ui_file = QtCore.QFile(ui_path)
ui_file.open(QtCore.QFile.ReadOnly)
form = loader.load(ui_file)
ui_file.close()
</code></pre>
<p>Looking in the XML generated by qtcreator, it looks like this:</p>
<pre><code> ...
<property name="icon">
<iconset>
<normaloff>../../resources/icons/list-add.svg</normaloff>../../resources/icons/list-add.svg</iconset>
</property>
</widget>
...
</code></pre>
<p>Any way to run this with the icons showing up without using a .qrc file?
I am trying to avoid introducing a build step for my application/UI.</p>
| <python><qt><pyqt><qt-creator> | 2023-07-20 16:16:11 | 0 | 6,244 | knipknap |
76,731,779 | 5,835,423 | Update data in db only when there is new item is added in the list | <p>I have a SQL database with the columns [Record_ID] (primary key), [Source], and the nullable columns [Entity], [Last_Name], [Given_Name], [Aliases], and [Date_Of_Birth].
These values come from 3 different XML files. I parse these files in Python and add the values to the DB. The 3 XMLs come from a website. If these files are updated, I need to update my DB with the new values without creating duplicates. How can I do that?</p>
<pre><code>def readXML(sql_server, database, sql_user_name, sql_password, sql_driver):
    print("Start")
    url = 'somewebsite.com'
    resp = requests.get(url)
    soup = BeautifulSoup(resp.content, "xml")
    recordData = soup.findAll('record')
    now = datetime.datetime.now()
    sql_insert_in_table = """
        INSERT INTO myTable (Source, Given_Name, Last_Name, Date_Of_Birth, Aliases, Entity, Date_Added, Added_by)
        VALUES (?,?,?,?,?,?,?,?)
    """
    params = []
    for child in recordData:
        firstName = child.find('GivenName').text if child.find('GivenName') != None else "N/A"
        lastName = child.find('LastName').text if child.find('LastName') != None else "N/A"
        DoB = child.find('DateOfBirth').text if child.find('DateOfBirth') != None else "N/A"
        entity = child.find('Entity').text if child.find('Entity') != None else "N/A"
        aliases = child.find('Aliases').text if child.find('Aliases') != None else "N/A"
        params.append((source, firstName, lastName,
                       DoB, aliases, entity, now.date(), "Admin"))
    exec_sql_query(sql_insert_in_table, params, sql_server, database, sql_user_name, sql_password, sql_driver)

def exec_sql_query(query, params, sql_server, database, sql_user_name, sql_password, sql_driver):
    try:
        with pyodbc.connect('DRIVER='+sql_driver+';SERVER=tcp:'+sql_server+';PORT=1433;DATABASE='+database+';UID='+sql_user_name+';PWD=' + sql_password) as conn:
            with conn.cursor() as cursor:
                conn.autocommit = True
                cursor.executemany(query, params)
    except pyodbc.Error as e:
        logging.error(f"SQL query failed: {e}")
        raise
</code></pre>
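<p>One common pattern for this is to put a uniqueness constraint on the identifying columns and let the database reject re-inserted rows. The sketch below uses the stdlib's sqlite3 so it is runnable; on SQL Server the equivalent would be a UNIQUE constraint plus <code>MERGE</code> or an <code>INSERT ... WHERE NOT EXISTS</code>, and the table/column names are borrowed from the question:</p>

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE myTable (
        Source TEXT, Given_Name TEXT, Last_Name TEXT,
        Date_Of_Birth TEXT, Aliases TEXT, Entity TEXT,
        UNIQUE (Source, Given_Name, Last_Name, Date_Of_Birth, Aliases, Entity)
    )
""")

# The same record appears twice, as it would on a re-run after the
# website's XML files are re-downloaded
rows = [("web", "Ann", "Lee", "1990-01-01", "N/A", "N/A")] * 2

# INSERT OR IGNORE silently skips rows that would violate the UNIQUE constraint
conn.executemany(
    "INSERT OR IGNORE INTO myTable VALUES (?, ?, ?, ?, ?, ?)", rows
)
count = conn.execute("SELECT COUNT(*) FROM myTable").fetchone()[0]
```

<p>With this approach the Python side can stay a plain re-parse-and-insert loop; deduplication becomes the database's job.</p>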
| <python><sql><mysql> | 2023-07-20 16:11:19 | 1 | 3,003 | heman123 |
76,731,427 | 5,405,669 | Generating an async effect in a flask app; does joining a thread to the parent process block the process | <p>In one of my <code>flask</code> endpoints, I create a process to run a "long job" before sending the user-agent a "ticket". The user-agent uses that ticket to "check in"... and retrieve the results when ready.</p>
<p>The following takes place in the process:</p>
<ol>
<li><p>generate the "long job" result (can involve consuming 0 or more additional threads; joined when multiple threads are required to complete the job)</p>
</li>
<li><p>once (1) is complete, write that result to a place where the user-agent "knows" to get it</p>
</li>
<li><p>kick-off a "fire and forget" background task that copies files to a shared cache resource; using current_app resources</p>
</li>
</ol>
<p>Before including the following in the body of the function used to instantiate the process:</p>
<pre><code>thread_handle.join()
</code></pre>
<p>(<a href="https://stackoverflow.com/questions/67419800/error-cannot-schedule-new-futures-after-interpreter-shutdown-while-working-thr">per the lucid explanation here</a>)</p>
<p>I would get</p>
<pre><code>cannot schedule new futures after interpreter shutdown;
</code></pre>
<h2>The question</h2>
<p>Since writing out this question, the answer seems clearer to me. Notwithstanding, is it ever possible to generate a "computing task" in the flask request context that captures the "fire and forget" intent; i.e., one where the task will execute and shut down without the parent process or thread needing to remain in memory? ... do all resources (e.g., access to os resources) depend on the parent process?</p>
<p>If so, that seems a bit inconsistent with what I have intuited given how the life of a process does not depend on the success/failure of the child thread.</p>
<p>Furthermore, if it blocks the process, what's the point of creating a new Thread? Given that the "fire and forget" thread is the last step in the "long job" and only requires a single thread (no parallel processing), I'm thinking that spawning a new thread is a wasted effort; I should just run it in the process. Right?</p>
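<p>As a small illustration of the lifetime coupling (my sketch, independent of the Flask specifics): a non-daemon thread belongs to its parent process, and the interpreter will not exit until the thread finishes — so "fire and forget" never fully detaches the task from the process that spawned it, it only frees the spawning code path from waiting:</p>

```python
import threading

results = []

def fire_and_forget(task, *args):
    # daemon=False: the interpreter waits for this thread at shutdown,
    # which is why the task cannot outlive the parent process
    t = threading.Thread(target=task, args=args, daemon=False)
    t.start()
    return t

t = fire_and_forget(results.append, "done")
t.join()  # explicit join only so this example is deterministic
```

<p>A daemon thread inverts the trade-off: the process no longer waits for it, but it may be killed mid-task at interpreter shutdown, which is exactly the "cannot schedule new futures" failure mode described above.</p>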
| <python><multithreading><flask><process> | 2023-07-20 15:25:16 | 1 | 916 | Edmund's Echo |
76,731,426 | 19,369,393 | How to access 2D numpy array elements along axis 1 using an array of indices? | <p>Let <code>N</code> be a numpy array of shape <code>(m,n)</code> and <code>A</code> be an array of indices of shape <code>(m,)</code> containing values between 0 and <code>n-1</code> inclusive.</p>
<p>If <code>i</code> is an index in the range 0 to m-1 inclusive, then I want to access the elements <code>N[i, A[i]]</code>.</p>
<p>For example:</p>
<pre><code>import numpy as np
m = 2000
n = 10
N = np.zeros((m,n))
A = np.random.choice(n, m)
N[???, A] = 1
</code></pre>
<p>I expect the example code above to generate an array N of shape <code>(m,n)</code> initially with all zeros, and then set <code>N[i,j]=1</code> where <code>A[i]=j</code>.</p>
<p>I tried <code>N[:, A]=1</code>, but instead it changes all the elements of the array <code>N</code> to 1, which is not what I'm trying to achieve.</p>
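<p>For reference, NumPy's advanced integer indexing pairs index arrays element-wise, so supplying one row index per column index does exactly this. A small sketch with reduced sizes (my example, not from the post):</p>

```python
import numpy as np

m, n = 5, 4
N = np.zeros((m, n))
A = np.array([0, 3, 1, 1, 2])
# Pair row i with column A[i]: advanced integer indexing broadcasts the
# two index arrays together, selecting exactly one element per row
N[np.arange(m), A] = 1
```

<p><code>N[:, A] = 1</code> behaves differently because a slice combined with an index array selects the full cross product of rows and the listed columns.</p>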
| <python><arrays><numpy> | 2023-07-20 15:25:10 | 2 | 365 | g00dds |
76,731,206 | 12,878,983 | Memory consumption in PyArrow iter_batches | <p>Let's suppose I have n parquet files that represent the columns of my final parquet file. The columns have 5 milion rows and I want to read my entire file in batches and each batch is given by the concatenation of the batches of all columns as follows</p>
<pre><code>def read_arrow_col_batch(path_to_column: str, column: str, batch_size: int = 10_000):
    """
    Read a column from a parquet file and yield pyarrow record batches containing it.
    :param batch_size: size of the batch
    :param path_to_column: path to the parquet file that contains the column
    :param column: Column name.
    """
    parquet_file = ParquetFile(path_to_column)
    for batch in parquet_file.iter_batches(columns=[column], batch_size=batch_size):
        yield batch

def read_arrow_cols_batch(paths_to_columns: List[str], columns: List[str], batch_size: int = 10_000):
    """
    Read all column batches.
    :param batch_size: size of the batch
    :param paths_to_columns: paths to the parquet files that contain the columns
    :param columns: Column names.
    """
    pqs_batches = [read_arrow_col_batch(path_to_column=p, column=c) for c, p in zip(columns, paths_to_columns)]
    for idx, zipped_batches in enumerate(zip(*pqs_batches)):
        batches = [z[0] for z in zipped_batches]
        t = pa.Table.from_arrays(batches, names=columns)
        yield t
</code></pre>
<p>why does this consume more memory than if I apply the <code>iter_batches</code> function to the <code>ParquetFile</code> object of the entire parquet file? Am I missing something? There exists a method to combine all parquet files in one object without allocating memory?</p>
| <python><pyarrow><apache-arrow> | 2023-07-20 15:00:05 | 0 | 573 | marco |
76,731,205 | 417,385 | pip install pylbfgs fails in a clean virtualenv | <p>On a completely fresh virtualenv, installing <code>pylbfgs</code> fails with the error below. My goal is to install <code>dedupe</code>, but it depends on <code>pylbfgs</code>.</p>
<p>I'm assuming it has something to do with the release of Cython 3.0.0 a few days ago, but even if I do <code>pip install Cython==0.29.35</code> it still fails. Last commit to <code>pylbfgs</code> was in 2014 so something external must have changed.</p>
<p>Any idea how I can set up an environment to make it work?</p>
<pre><code>$ pip install pylbfgs
Collecting pylbfgs
Using cached PyLBFGS-0.2.0.14.tar.gz (98 kB)
Installing build dependencies ... done
Getting requirements to build wheel ... error
error: subprocess-exited-with-error
× Getting requirements to build wheel did not run successfully.
│ exit code: 1
╰─> [57 lines of output]
/private/var/folders/jg/pb4kddl17kzf4njpdp3ht8rh0000gn/T/pip-build-env-rhor0rp4/overlay/lib/python3.11/site-packages/Cython/Compiler/Main.py:381: FutureWarning: Cython directive 'language_level' not set, using '3str' for now (Py3). This has changed from earlier releases! File: /private/var/folders/jg/pb4kddl17kzf4njpdp3ht8rh0000gn/T/pip-install-hgmjirii/pylbfgs_27ac07c00d1741aaa6fb205fbf713b21/lbfgs/_lowlevel.pyx
tree = Parsing.p_module(s, pxd, full_module_name)
Error compiling Cython file:
------------------------------------------------------------
...
x_a = aligned_copy(x0.ravel())
try:
callback_data = (f, progress, x0.shape, args)
r = lbfgs(n, x_a, fx_final, call_eval,
^
------------------------------------------------------------
lbfgs/_lowlevel.pyx:395:40: Cannot assign type 'lbfgsfloatval_t (void *, lbfgsconst_p, lbfgsfloatval_t *, int, lbfgsfloatval_t) except? -1' to 'lbfgs_evaluate_t'
Error compiling Cython file:
------------------------------------------------------------
...
x_a = aligned_copy(x0.ravel())
try:
callback_data = (f, progress, x0.shape, args)
r = lbfgs(n, x_a, fx_final, call_eval,
call_progress, <void *>callback_data, &self.params)
^
------------------------------------------------------------
lbfgs/_lowlevel.pyx:396:22: Cannot assign type 'int (void *, lbfgsconst_p, lbfgsconst_p, lbfgsfloatval_t, lbfgsfloatval_t, lbfgsfloatval_t, lbfgsfloatval_t, int, int, int) except? -1' to 'lbfgs_progress_t'
Compiling lbfgs/_lowlevel.pyx because it changed.
[1/1] Cythonizing lbfgs/_lowlevel.pyx
Traceback (most recent call last):
File "/Users/_/dev/test/env/lib/python3.11/site-packages/pip/_vendor/pyproject_hooks/_in_process/_in_process.py", line 353, in <module>
main()
File "/Users/_/dev/test/env/lib/python3.11/site-packages/pip/_vendor/pyproject_hooks/_in_process/_in_process.py", line 335, in main
json_out['return_val'] = hook(**hook_input['kwargs'])
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/_/dev/test/env/lib/python3.11/site-packages/pip/_vendor/pyproject_hooks/_in_process/_in_process.py", line 118, in get_requires_for_build_wheel
return hook(config_settings)
^^^^^^^^^^^^^^^^^^^^^
File "/private/var/folders/jg/pb4kddl17kzf4njpdp3ht8rh0000gn/T/pip-build-env-rhor0rp4/overlay/lib/python3.11/site-packages/setuptools/build_meta.py", line 341, in get_requires_for_build_wheel
return self._get_build_requires(config_settings, requirements=['wheel'])
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/private/var/folders/jg/pb4kddl17kzf4njpdp3ht8rh0000gn/T/pip-build-env-rhor0rp4/overlay/lib/python3.11/site-packages/setuptools/build_meta.py", line 323, in _get_build_requires
self.run_setup()
File "/private/var/folders/jg/pb4kddl17kzf4njpdp3ht8rh0000gn/T/pip-build-env-rhor0rp4/overlay/lib/python3.11/site-packages/setuptools/build_meta.py", line 488, in run_setup
self).run_setup(setup_script=setup_script)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/private/var/folders/jg/pb4kddl17kzf4njpdp3ht8rh0000gn/T/pip-build-env-rhor0rp4/overlay/lib/python3.11/site-packages/setuptools/build_meta.py", line 338, in run_setup
exec(code, locals())
File "<string>", line 27, in <module>
File "/private/var/folders/jg/pb4kddl17kzf4njpdp3ht8rh0000gn/T/pip-build-env-rhor0rp4/overlay/lib/python3.11/site-packages/Cython/Build/Dependencies.py", line 1134, in cythonize
cythonize_one(*args)
File "/private/var/folders/jg/pb4kddl17kzf4njpdp3ht8rh0000gn/T/pip-build-env-rhor0rp4/overlay/lib/python3.11/site-packages/Cython/Build/Dependencies.py", line 1301, in cythonize_one
raise CompileError(None, pyx_file)
Cython.Compiler.Errors.CompileError: lbfgs/_lowlevel.pyx
[end of output]
note: This error originates from a subprocess, and is likely not a problem with pip.
error: subprocess-exited-with-error
× Getting requirements to build wheel did not run successfully.
│ exit code: 1
╰─> See above for output.
note: This error originates from a subprocess, and is likely not a problem with pip.
</code></pre>
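<p>One workaround that may help (my suggestion, not verified against pylbfgs specifically): because pip builds in an isolated environment, pinning Cython in the venv with <code>pip install Cython==0.29.35</code> has no effect on the build. A constraints file passed via <code>PIP_CONSTRAINT</code> does reach the build environment in recent pip versions:</p>

```shell
# Constrain the *build* environment's Cython, which installing an old
# Cython into the venv itself cannot influence under build isolation
echo "cython<3" > constraints.txt
PIP_CONSTRAINT=$(pwd)/constraints.txt pip install pylbfgs
```

<p>An alternative with the same intent is <code>pip install --no-build-isolation pylbfgs</code> after installing the pinned Cython into the venv.</p>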
| <python><pip><cython><cythonize><python-dedupe> | 2023-07-20 14:59:59 | 1 | 5,245 | Mathias Bak |
76,731,175 | 4,638,207 | Continue fitting an sklearn imputer that did not converge after max_iter | <p>I fit an <code>sklearn.impute.IterativeImputer</code> using default settings (a <code>BayesianRidge()</code> regressor), and after 200 iterations of fitting the regressor still hadn't converged to within the scaled tolerance, though the reported change was still decreasing. Since I had maxed out the iterations (which in my case took ~13 hours), I saved the "fitted" imputer using <code>joblib</code>.</p>
<p>What I'd like to do is continue training the regressor do get a better imputation of my missing values, but when I load the imputer and call <code>fit()</code> on the same data, it seems to start all over, i.e. (I assume) not using the previously optimized regularization parameters.</p>
<p>Is it possible to resume imputer estimator training using <code>IterativeImputer</code>, or is there perhaps another way to do this?</p>
<p>An example script:</p>
<pre class="lang-py prettyprint-override"><code>import numpy as np
from joblib import dump, load
from sklearn.experimental import enable_iterative_imputer
from sklearn.impute import IterativeImputer
data = np.random.normal(size=(10000, 100))
# Setting max_iter to be low so it doesn't converge in time
imputer = IterativeImputer(max_iter=5, random_state=42, verbose=2)
imputer.fit(data)
dump(imputer, "imputer.joblib")
# Assuming the imputer does not converge to within the default tolerance (1e-3)
imputer = load("imputer.joblib")
imputer.fit(data) # Wishfully thinking the imputer will simply continue improving
</code></pre>
<p>On a small side-note, is there any way to get a sense of the training error when fitting a Bayesian Ridge regressor using <code>IterativeImputer</code>? <code>verbose=2</code> simply outputs a "change" value, which doesn't give a very clear picture of how the regressor is actually performing, but maybe I don't understand the estimator well enough.</p>
| <python><scikit-learn><bayesian><imputation> | 2023-07-20 14:56:47 | 0 | 572 | lusk |
76,730,985 | 4,075,155 | Lora fine-tuning taking too long | <p>Any reason why this is giving me a month of expected processing time?</p>
<p>More importantly, how to speed this up?</p>
<p>My dataset is a collection of 20k short sentences (max 100 words each).</p>
<pre><code>import transformers
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig

model_id = "tiiuae/falcon-40b-instruct"

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_use_double_quant=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16
)

tokenizer = AutoTokenizer.from_pretrained(model_id)
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(model_id, quantization_config=bnb_config, device_map="auto", trust_remote_code=True)

from peft import prepare_model_for_kbit_training

model.gradient_checkpointing_enable()
model = prepare_model_for_kbit_training(model)

from peft import LoraConfig, get_peft_model

config = LoraConfig(
    r=16,
    lora_alpha=32,
    target_modules=["query_key_value"],
    lora_dropout=0.05,
    bias="none",
    task_type="CAUSAL_LM"
)

model = get_peft_model(model, config)

trainer = transformers.Trainer(
    model=model,
    train_dataset=tokenized_data,
    args=transformers.TrainingArguments(
        num_train_epochs=100,
        per_device_train_batch_size=4,
        gradient_accumulation_steps=4,
        warmup_ratio=0.05,
        learning_rate=2e-4,
        fp16=False,
        logging_steps=1,
        output_dir="output",
        optim="paged_adamw_8bit",
        lr_scheduler_type='cosine',
    ),
    data_collator=transformers.DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
model.config.use_cache = False  # silence the warnings. Please re-enable for inference!
trainer.train()
</code></pre>
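<p>For scale, it can help to count the optimizer steps these settings imply. A rough back-of-the-envelope sketch (assuming the ~20k sentences mentioned above and a hypothetical per-step time; only the step count follows from the settings shown):</p>

```python
import math

num_examples = 20_000      # dataset size from the question
per_device_batch = 4       # per_device_train_batch_size
grad_accum = 4             # gradient_accumulation_steps
epochs = 100               # num_train_epochs

effective_batch = per_device_batch * grad_accum        # 16 sequences per optimizer step
steps_per_epoch = math.ceil(num_examples / effective_batch)
total_steps = steps_per_epoch * epochs
print(total_steps)         # 125000 optimizer steps

# At an assumed 20 s per step for a 40B model in 4-bit,
# that is roughly a month of wall-clock time:
seconds = total_steps * 20
print(seconds / 86_400)    # roughly 29 days
```

<p>So before profiling anything, <code>num_train_epochs=100</code> alone puts the estimate in the range the progress bar reports; a single-digit epoch count shrinks it proportionally.</p>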
| <python><deep-learning><nlp><huggingface-transformers><peft> | 2023-07-20 14:36:26 | 1 | 2,380 | Lucas Azevedo |
76,730,923 | 9,261,322 | Can't install pytest | <p>Tried to run</p>
<pre><code>pip install -U pytest
</code></pre>
<p>from the command prompt, but got this:</p>
<pre><code>WARNING: Retrying (Retry(total=4, connect=None, read=None, redirect=None, status=None)) after connection broken by ‘ProxyError(‘Cannot connect to proxy.’, OSError(‘Tunnel connection failed: 407 authenticationrequired’))’: /simple/pytest/
WARNING: Retrying (Retry(total=3, connect=None, read=None, redirect=None, status=None)) after connection broken by ‘ProxyError(‘Cannot connect to proxy.’, OSError(‘Tunnel connection failed: 407 authenticationrequired’))’: /simple/pytest/
WARNING: Retrying (Retry(total=2, connect=None, read=None, redirect=None, status=None)) after connection broken by ‘ProxyError(‘Cannot connect to proxy.’, OSError(‘Tunnel connection failed: 407 authenticationrequired’))’: /simple/pytest/
WARNING: Retrying (Retry(total=1, connect=None, read=None, redirect=None, status=None)) after connection broken by ‘ProxyError(‘Cannot connect to proxy.’, OSError(‘Tunnel connection failed: 407 authenticationrequired’))’: /simple/pytest/
WARNING: Retrying (Retry(total=0, connect=None, read=None, redirect=None, status=None)) after connection broken by ‘ProxyError(‘Cannot connect to proxy.’, OSError(‘Tunnel connection failed: 407 authenticationrequired’))’: /simple/pytest/
ERROR: Could not find a version that satisfies the requirement pytest (from versions: none)
ERROR: No matching distribution found for pytest
WARNING: There was an error checking the latest version of pip.
</code></pre>
<p>what does it mean?</p>
<p>My python version is 3.11.1</p>
| <python><python-3.x><pytest> | 2023-07-20 14:28:46 | 0 | 4,405 | Alexey Starinsky |
76,730,751 | 22,221,987 | C sizeof(some_structure) returns a different value compared with Python struct.calcsize(struct_string) | <p>I have a <strong>C</strong> structure.</p>
<pre><code>typedef struct
{
    double cycle_time;
    double cycle_duty;
    double state;
    double servo_mode;
    double motion_mode;
    double jcond;
    struct
    {
        uint16_t buff_sz;
        uint16_t buff_fill;
        uint16_t cmd_cntr;
        uint16_t res;
    } wpi;
    double move_des_q[6];
    double move_des_qd[6];
    double move_des_x[6];
    double move_des_xd[6];
    double act_q[6];
    double act_qd[6];
    double act_x[6];
    double act_xd[6];
    double act_tq[6];
    double frict_tq[6];
    double ne_tq[6];
    double act_force_e[6];
    double act_force_0[6];
    double des_trq[6];
    double des_qd[6];
    double temp_m[6];
    double temp_e[6];
    double arm_current;
    double arm_voltage;
    double psu_voltage;
    struct
    {
        uint8_t dig_in_count;
        uint8_t an_in_count;
        uint8_t dig_in[8];
        uint8_t an_in_curr_mode[4];
        double an_in_value[4];
        uint8_t dig_out_count; // number of bits
        uint8_t an_out_count;
        uint8_t dig_out[8];
        uint8_t an_out_curr_mode[4];
        double an_out_value[4];
    } io;
    struct
    {
        uint32_t jointState;
        float joint_volt;
        float joint_amp;
        uint8_t joint_window;
        float joint_des_iq;
        float joint_des_vel;
    } jointInfo[6];
} some_structure;
</code></pre>
<p>After running the following code, I found the size of the <strong>C</strong> structure to be <strong>1136 bytes</strong> (at least I hope that's correct):</p>
<pre><code>int main()
{
    printf("Size of struct ABC: %lu\n", sizeof(some_structure));
}
</code></pre>
<p>But after checking the Python structure size, I found a difference (the <strong>Python</strong> size is <strong>1114 bytes</strong>). Here is the Python code:</p>
<pre><code>struct_string = "<6d4H105d14B4d14B4dL2fB2fL2fB2fL2fB2fL2fB2fL2fB2fL2fB2f"
struct_byte_size = struct.calcsize(struct_string)
print(struct_byte_size)
</code></pre>
<p>What causes this size difference?
How can I receive the data from the socket and avoid this shift?
Did I make a mistake while creating the struct_string?</p>
<p><strong>UPD</strong>: <a href="https://i.sstatic.net/R9s3a.png" rel="nofollow noreferrer">Here</a> is the struct-module rule-table and calcsize() description.</p>
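<p>The difference is alignment padding: a C compiler inserts pad bytes so each member starts at its required alignment, while a format string beginning with <code>&lt;</code> tells <code>struct</code> to pack with no padding at all. A small standard-library sketch of the effect (the byte-before-a-double pattern resembles the nested structs above; <code>ctypes</code> mirrors the compiler's layout):</p>

```python
import ctypes
import struct

class Inner(ctypes.Structure):
    # one byte followed by a double, like uint8_t fields before doubles
    _fields_ = [("tag", ctypes.c_uint8), ("value", ctypes.c_double)]

print(struct.calcsize("<Bd"))  # 9: '<' means standard sizes, no padding at all
print(struct.calcsize("@Bd"))  # '@': native alignment, 'tag' is padded out
print(ctypes.sizeof(Inner))    # matches the compiler's padded size
```

<p>On a typical x86-64 build the two native results are both 16, which is why a hand-written <code>&lt;...</code> format string undercounts the wire size of a padded C struct; either add explicit <code>x</code> pad bytes to the format string or pack the C struct so both sides agree.</p>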
| <python><c><python-3.x><sockets><types> | 2023-07-20 14:09:24 | 1 | 309 | Mika |
76,730,726 | 4,390,189 | Parse a list in yaml into a python set | <p>Is it possible to parse lists in a yaml file directly into a python <a href="https://www.w3schools.com/python/python_sets.asp" rel="nofollow noreferrer">set</a> by default?
If so, how can I achieve this?</p>
<p>E.g.:</p>
<pre><code>import yaml
from yaml.loader import SafeLoader

# Open the file and load the file
with open('Userdetails.yaml') as f:
    data = yaml.load(f, Loader=SafeLoader)
    print(data)
</code></pre>
<p>Expected Output:</p>
<pre><code>{'User': {'user1', 'user2'}}
</code></pre>
<p>Instead of:</p>
<pre><code>{'User': ['user1', 'user2']}
</code></pre>
<p>And without looping through the dictionary after load.</p>
| <python><python-3.x><yaml> | 2023-07-20 14:05:50 | 2 | 1,806 | dl.meteo |
76,730,720 | 189,878 | gdb on MacOS Ventura fails with python library not loaded | <p>I have a build of gdb for an Ada toolchain, and it appears there is a reference to a Python dynamic library that does not exist on my system (Intel Mac, Ventura 13.4.1 (c)).</p>
<pre><code>$ which gdb
/opt/gcc-13.1.0/bin/gdb
$ gdb
dyld[19305]: Library not loaded: /Library/Frameworks/Python.framework/Versions/3.9/Python
Referenced from: <3FCB836C-8BBC-39C7-894C-6F9582FEAE7F> /opt/gcc-13.1.0/bin/gdb
Reason: tried: '/Library/Frameworks/Python.framework/Versions/3.9/Python' (no such file), '/System/Volumes/Preboot/Cryptexes/OS/Library/Frameworks/Python.framework/Versions/3.9/Python' (no such file), '/Library/Frameworks/Python.framework/Versions/3.9/Python' (no such file), '/System/Library/Frameworks/Python.framework/Versions/3.9/Python' (no such file, not in dyld cache)
Abort trap: 6
$ dyld_info /opt/gcc-13.1.0/bin/gdb
/opt/gcc-13.1.0/bin/gdb [x86_64]:
-platform:
platform minOS sdk
macOS 12.0 10.17
-segments:
load-offset segment section sect-size seg-size perm
0x00000000 __TEXT 7728KB r.x
0x00001090 __text 5025114
0x004CBDF0 __text_startup 23672
0x004D1A68 __text_cold 125143
0x004F0340 __stubs 9060
0x004F26A4 __stub_helper 6276
0x004F3F28 __cstring 899918
0x005CFA80 __const 155833
0x005F5B40 __info_plist 466
0x005F5D18 __eh_frame 1663704
0x0078C000 __DATA_CONST 1088KB rw.
0x0078C000 __got 5824
0x0078D6C0 __mod_init_func 800
0x0078D9E0 __const 1099176
0x0089C000 __DATA 304KB rw.
0x0089C000 __la_symbol_ptr 12080
0x0089EF30 __gcc_except_tab 118952
0x008BBFE0 __data 76000
0x008CE8C0 __bss 91000
0x008E4C40 __common 9104
-dependents:
attributes load path
/usr/lib/libiconv.2.dylib
/usr/lib/libncurses.5.4.dylib
/Library/Frameworks/Python.framework/Versions/3.9/Python
/usr/lib/libexpat.1.dylib
/opt/gcc-13.1.0/lib/libmpfr.6.dylib
/opt/gcc-13.1.0/lib/libgmp.10.dylib
/opt/gcc-13.1.0/lib/libstdc++.6.dylib
/System/Library/Frameworks/CoreFoundation.framework/Versions/A/CoreFoundation
/usr/lib/libSystem.B.dylib
$ ls /Library/Frameworks/Python.framework
ls: /Library/Frameworks/Python.framework: No such file or directory
$ which python3
/usr/bin/python3
$ which python3.9
/usr/local/bin/python3.9
</code></pre>
<p>I have installed Python via brew. Where should I look for the required library (so I can set <code>DYLD_LIBRARY_PATH</code>), or how can I install the proper one?</p>
| <python><macos><gdb><ada> | 2023-07-20 14:05:30 | 1 | 2,346 | Patrick |
76,730,710 | 11,659,631 | Why is the Fourier transform in Python only working when taking the absolute value? | <p>I encountered a very strange thing in Python again. I wanted to plot the Fourier transform of a sinus signal, so I tried the following code:</p>
<pre><code>import numpy as np
import matplotlib.pyplot as plt
#--------------------- test -----------------------------------
#x axis
x02 = np.arange(0, 1, 0.001) # start, stop, step
y02 = np.sin(50.0 * 2.0 * np.pi * x02)
#plot sin in time domain
plt.figure()
plt.plot(x02, y02, color ='b')
plt.grid()
plt.title('Sinus in time domain')
plt.show()
# Fourier transform
yf02 = np.fft.fft(y02)
xf02 = np.fft.fftfreq(len(y02), 0.001)
# plot the Fourier transform
plt.figure()
plt.plot(xf02, yf02, color ='b')
plt.grid()
plt.title('Sinus in frequency domain')
plt.show()
</code></pre>
<p>which gives:</p>
<p><a href="https://i.sstatic.net/r3MOy.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/r3MOy.png" alt="enter image description here" /></a></p>
<p>now if I plot the absolute values of my Fourier transform with the following code:</p>
<pre><code>import numpy as np
import matplotlib.pyplot as plt
#--------------------- test -----------------------------------
#x axis
x02 = np.arange(0, 1, 0.001) # start, stop, step
y02 = np.sin(50.0 * 2.0 * np.pi * x02)
#plot sin in time domain
plt.figure()
plt.plot(x02, y02, color ='b')
plt.grid()
plt.title('Sinus in time domain')
plt.show()
# Fourier transform
yf02 = np.fft.fft(y02)
xf02 = np.fft.fftfreq(len(y02), 0.001)
# plot the Fourier transform
plt.figure()
plt.plot(xf02, abs(yf02), color ='b')
plt.grid()
plt.title('Sinus in frequency domain')
plt.show()
</code></pre>
<p>it works:</p>
<p><a href="https://i.sstatic.net/p9n8F.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/p9n8F.png" alt="enter image description here" /></a></p>
<p>Could someone explain why? I'm very confused and don't understand how np.fft.fft works despite reading every tutorial.</p>
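<p>One way to see what is going on without numpy: the DFT of a real signal is a sequence of complex numbers, and plotting them directly makes matplotlib keep only the real part (typically with a ComplexWarning), which oscillates around zero; <code>abs()</code> gives the magnitude, which peaks at the signal frequency. A minimal standard-library sketch of the same idea, using a naive DFT:</p>

```python
import cmath
import math

N = 64
k_signal = 5   # 5 full cycles over the window

samples = [math.sin(2 * math.pi * k_signal * n / N) for n in range(N)]

def dft_bin(x, k):
    """Naive DFT: the complex coefficient for frequency bin k."""
    return sum(x[n] * cmath.exp(-2j * math.pi * k * n / len(x))
               for n in range(len(x)))

coeff = dft_bin(samples, k_signal)
print(coeff)                     # complex, almost purely imaginary for a sine
print(abs(coeff))                # magnitude ~ N/2: the visible peak
print(abs(dft_bin(samples, 2)))  # an off-frequency bin: ~0
```

<p>The real part of the peak bin is essentially zero (a pure sine lands in the imaginary part), so a plot of the raw coefficients shows none of the structure that a plot of the magnitudes does.</p>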
| <python><fft> | 2023-07-20 14:03:22 | 2 | 338 | Apinorr |
76,730,532 | 15,175,378 | Adding Custom Python library in Azure Synapse | <p>I have spun up a notebook in an Azure Synapse workspace and attached an Apache Spark pool. I am trying to use my Python SDK there by running <code>"from XYZ import XYZ.client"</code> in the first block of my notebook, but I get the error:</p>
<blockquote>
<p>"Module "XYZ" not found error.</p>
</blockquote>
<p>How do I fix this? I have the Python wheel package of XYZ with me. Can someone guide me through the steps in Azure Synapse?</p>
| <python><azure><azure-synapse> | 2023-07-20 13:44:43 | 2 | 1,401 | ZZZSharePoint |
76,730,505 | 11,974,708 | Safely block multiprocessing.Queue until element is received by another process | <p>I want to pass messages of various types between two different processes in both directions using the <code>multiprocessing</code> package. My class looks like this:</p>
<pre><code>import multiprocessing
import numpy as np
class MessagePasser:
    def __init__(self):
        self.queue = multiprocessing.Queue(maxsize=1)
        self.event = multiprocessing.Event()

    def send_message(self, message: np.ndarray|int) -> None:
        self.queue.put(message)
        # block until message has been received (event set by the receiver)
        self.event.wait()
        self.event.clear()

    def receive_message(self) -> np.ndarray|int:
        # block until message has been sent
        while self.queue.empty():
            pass
        message = self.queue.get()
        # now it should be safe for the sender to exit `send_message`
        self.event.set()
        return message
</code></pre>
<p>So the idea is to allow only a single object in the <code>multiprocessing.Queue</code> and use <code>multiprocessing.Event</code> to block the sender's access to the queue by "capturing" it in the <code>send_message</code>-method until the receiver got the message. This way, or so I thought, messages should be transmitted safely if for each call to <code>send_message</code> by one process there is a call to <code>receive_message</code> by another process. However, this does not work using the following main code:</p>
<pre><code>def alices_job(message_passer: MessagePasser):
    for i in range(1000):
        array = np.random.rand(10)
        integer = 1
        message_passer.send_message(array)
        message_passer.send_message(integer)
        modified_array = message_passer.receive_message()
        modified_array.sum()

def bobs_job(message_passer: MessagePasser):
    for i in range(1000):
        array = message_passer.receive_message().astype(float)
        integer = message_passer.receive_message()
        array *= int(integer)
        message_passer.send_message(array)

if __name__ == "__main__":
    message_passer = MessagePasser()
    alices_process = multiprocessing.Process(target=alices_job, args=(message_passer,))
    bobs_process = multiprocessing.Process(target=bobs_job, args=(message_passer,))
    alices_process.start()
    bobs_process.start()
    alices_process.join()
    bobs_process.join()
</code></pre>
<p>Obviously, each call to <code>send_message</code> in <code>alices_job</code> corresponds to a call to <code>receive_message</code> in <code>bobs_job</code>. But my attempt of blocking does not seem to work. In some random iteration of the loop the whole program either hangs or it gives me the error</p>
<pre><code>AttributeError: 'int' object has no attribute 'sum'
</code></pre>
<p>in the last line of <code>alices_job</code> indicating that the sending process picked up its own message. I also tried <code>multiprocessing.connections.PipeConnections</code>, which seem to be faster, but cause the same problems (unless I use two pipes, which is not practical for my application as the assignment of connectors to processes could not take place during construction of <code>MessagePasser</code>).</p>
<p>My goal is not to realize a server-client model but to enable bidirectional message passing between two parties, very similar to the blocking send and receive functions in MPI, but with shared memory and without the need to specify the sender/receiver since only two processes are present. If somehow possible, I would like to accomplish that with only slight modifications and no additional packages like <code>threading</code> or <code>concurrent.futures</code>, using only <code>multiprocessing</code>, which is already used for other parts of my application. Many thanks in advance</p>
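<p>For what it's worth, a common way to make it impossible for a sender to consume its own message is one queue per direction, so direction is encoded in which queue each side reads; a minimal sketch of that shape (shown with threads and hypothetical names purely to keep it self-contained):</p>

```python
import queue
import threading

to_bob = queue.Queue()    # Alice writes, Bob reads
to_alice = queue.Queue()  # Bob writes, Alice reads

results = []

def alice():
    to_bob.put(21)
    results.append(to_alice.get())  # blocks until Bob has replied

def bob():
    n = to_bob.get()                # blocks until Alice has sent
    to_alice.put(n * 2)

threads = [threading.Thread(target=alice), threading.Thread(target=bob)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(results)  # [42]
```

<p>With <code>multiprocessing.Queue</code> the structure is identical; no <code>Event</code> handshake or <code>empty()</code> polling is needed because <code>get()</code> already blocks.</p>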
| <python><multiprocessing> | 2023-07-20 13:41:44 | 0 | 370 | BernieD |
76,730,270 | 14,169,836 | Write data into excel files using python and save text in exact cells | <p>I want to ask how I can write a formula string into an Excel file so that the calculation shows up in the output Excel file.</p>
<p>example:</p>
<p>I would like to have in column L in cell L6 this string: =K6-L2+L5 and iterate it through the whole table as it is in the picture below:</p>
<p><a href="https://i.sstatic.net/ywpYy.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/ywpYy.png" alt="enter image description here" /></a></p>
<p>The logic in those cells is like that:</p>
<pre><code>L6 = K6 - L2 + L5
M6 = L6 - M2 + M5
N6 = M6 - N2 + N5
O6 = N6 - O2 + O5
</code></pre>
<p>and it should be in every column that has a type E.</p>
<p>I've tried the following, but my code does not make any changes to the file.</p>
<pre><code>import openpyxl
wb = openpyxl.load_workbook(filename='newfile.xlsx')
ws = wb.worksheets[0]
sb = [i for i in range(6,len(out),5)] #E cells
total = [i for i in range(2,len(out),5)]#A cells
dvl = [i for i in range(5,len(out),5)]#D cells
columns_names = ['K','L','M','N','O','P','R','S','T','U','V','W','X','Y','Z','AA','AB']
colNr = len(columns_names)
for i in sb:
    ws[f'K{i}'] = f'=G{i}'

for i in range(0, len(sb)):
    for cellnr in range(0, colNr):
        try:
            ws[f'{columns_names[cellnr+1]}{sb[i]}'] = f'={columns_names[cellnr]}{sb[i]} - {columns_names[cellnr+1]}{total[i]} + {columns_names[cellnr+1]}{dvl[i]}'
        except:
            pass
wb.save(filename='test.xlsx')
</code></pre>
<p>Here are the data:</p>
<pre><code>Type 2023/07 2023/08 2023/09 2023/10 2023/11 2023/12 2024/01 2024/02 2024/03 2024/04 2024/05 2024/06 2024/07 2024/08 2024/09 2024/10 2024/11 2024/12
A 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3
B 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
C 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3
D 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
E 127 124 121 118 115 112 109 106 103 100 97 94 91 88 85 82 79 76
A 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
B 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
C 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
D 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
E 3633 3633 3633 3633 3633 3633 3633 3633 3633 3633 3633 3633 3633 3633 3633 3633 3633 3633
A 5 5 5 5 5 5 5 5 5 5 5 5 5 5 5 5 5 5
B 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
C 5 5 5 5 5 5 5 5 5 5 5 5 5 5 5 5 5 5
D 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
E 1637 1632 1627 1622 1617 1612 1607 1602 1597 1592 1587 1582 1577 1572 1567 1562 1557 1552
A 1 49 49 61 37 37 37 25 37 25 25 25 49 49 37 49 37 13
B 0 48 48 60 36 36 36 24 36 24 24 24 48 48 36 48 36 12
C 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1
D 10000 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
E 1023 974 925 864 827 790 753 728 691 666 641 616 567 518 481 432 395 382
</code></pre>
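<p>One subtle hazard in the code above is the hand-typed <code>columns_names</code> list: it jumps from <code>'P'</code> to <code>'R'</code>, skipping <code>'Q'</code>. Column letters can be generated instead (openpyxl ships <code>openpyxl.utils.get_column_letter</code>; a standard-library sketch of the same conversion is shown here):</p>

```python
def column_letter(index: int) -> str:
    """1-based spreadsheet column index -> letters: 1 -> 'A', 27 -> 'AA'."""
    letters = ""
    while index > 0:
        index, rem = divmod(index - 1, 26)
        letters = chr(ord("A") + rem) + letters
    return letters

# Columns K (11) through AB (28), with nothing skipped:
columns_names = [column_letter(i) for i in range(11, 29)]
print(columns_names)
```

<p>Generating the letters also makes it easy to widen or shift the date range without re-typing the list.</p>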
| <python><pandas><excel><string><openpyxl> | 2023-07-20 13:10:26 | 1 | 329 | Ulewsky |
76,729,977 | 1,909,206 | SQLAlchemy & Using shared server connection, but different databases | <p>I have a usecase where a given host (DBMS) houses multiple databases, each with an identical schema.</p>
<p>Given an input, it's possible to programmatically discern which database I need to use (same connection/server, different DB)</p>
<p>I'm looking at optimizing connections for a vertically sharded dataset. Ideally, we could just connect to A-F databases, using a shared connection (the only difference is the DB name).</p>
<p>Is this doable with SQLAlchemy?</p>
| <python><sqlalchemy> | 2023-07-20 12:38:27 | 0 | 672 | geudrik |
76,729,812 | 1,428,692 | Jupyter not recognizing packages that I have installed | <p>I am using Jupyter Lab and am having trouble figuring out why DataGrid displays are not working. When I run</p>
<pre><code>import pandas as pd
from ipydatagrid import DataGrid
df = pd.DataFrame({
    "column 1": [{"key": 11}, ["berry", "apple", "cherry"]],
    "column 2": [["berry", "berry", "cherry"], {"key": 10}]})
DataGrid(df)
</code></pre>
<p>I get the following output instead of the graphical display I would expect:</p>
<pre><code>DataGrid(auto_fit_params={'area': 'all', 'padding': 30, 'numCols': None}, corner_renderer=None, default_render…
</code></pre>
<p>When I run <code>!jupyter --version</code> in my notebook I get the following:</p>
<pre><code>jupyter core : 4.6.3
jupyter-notebook : 6.0.3
qtconsole : not installed
ipython : 7.16.1
ipykernel : 5.3.0
jupyter client : 6.1.5
jupyter lab : not installed
nbconvert : 5.6.1
ipywidgets : not installed
nbformat : 5.0.6
traitlets : 4.3.3
</code></pre>
<p>However, when I import the three supposedly uninstalled packages individually, it works:</p>
<pre><code>import ipywidgets
import qtconsole
import jupyterlab
print("ipywidgets: " + ipywidgets.__version__)
print("qtconsole: " + qtconsole.__version__)
print("jupyterlab: " + jupyterlab.__version__)
</code></pre>
<p>gives:</p>
<pre><code>ipywidgets: 8.0.7
qtconsole: 5.4.3
jupyterlab: 3.6.5
</code></pre>
<p>What exactly is going on here and how do I get Jupyter to properly recognize these packages in the <code>!jupyter --version</code> command? I have uninstalled and reinstalled these packages, restarted my Python kernel, restarted my computer, etc. several times and have double checked that these packages do exist in my site-packages folder (and that my venv is pointing to the correct site-packages folder). I am using pip and not anaconda in case that makes any difference.</p>
| <python><jupyter-notebook><datagrid><jupyter> | 2023-07-20 12:20:48 | 0 | 2,087 | weskpga |
76,729,793 | 7,762,646 | how to specify similarity threshold in langchain faiss retriever? | <p>I would like to pass to the retriever a similarity threshold. So far I could only figure out how to pass a k value but this was not what I wanted. How can I pass a threshold instead?</p>
<pre><code>from langchain.document_loaders import PyPDFLoader
from langchain.vectorstores import FAISS
from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.chat_models import ChatOpenAI
from langchain.chains import ConversationalRetrievalChain
def get_conversation_chain(vectorstore):
    llm = ChatOpenAI(temperature=0, model_name='gpt-3.5-turbo')
    qa = ConversationalRetrievalChain.from_llm(llm=llm, retriever=vectorstore.as_retriever(search_kwargs={'k': 2}), return_source_documents=True, verbose=True)
    return qa
loader = PyPDFLoader("sample.pdf")
# get pdf raw text
pages = loader.load_and_split()
faiss_index = FAISS.from_documents(pages, OpenAIEmbeddings())
# create conversation chain
chat_history = []
qa = get_conversation_chain(faiss_index)
query = "What is a sunflower?"
result = qa({"question": query, "chat_history": chat_history})
</code></pre>
| <python><nlp><openai-api><langchain><large-language-model> | 2023-07-20 12:18:22 | 2 | 1,541 | G. Macia |
76,729,716 | 2,307,570 | Console program checks if number is prime. Why does `threading.Lock()` cause it to fail for products of non-tiny primes? | <p>I am reading the chapter about parallel programming in a Python book.<br>
The following code is based on an example about the module <code>threading</code>.<br>
The point of this particular example is the use of <code>threading.Lock()</code>.<br>
The code can also be found <a href="https://github.com/entenschule/examples_py/blob/main/a002_book/b31_parallel/c2_threading/d1_prime/e2_console_lock.py" rel="nofollow noreferrer">here</a> (on GitHub).
The code without the lock is <a href="https://github.com/entenschule/examples_py/blob/main/a002_book/b31_parallel/c2_threading/d1_prime/e1_console.py" rel="nofollow noreferrer">here</a>.</p>
<pre class="lang-py prettyprint-override"><code>import threading
class PrimeThread(threading.Thread):
    lock = threading.Lock()

    def __init__(self, n):
        super().__init__()
        self.n = n

    def run(self):
        if self.n < 2:
            with PrimeThread.lock:
                print('▪ That is not an integer greater 1.')
            return
        i = 2
        while i ** 2 <= self.n:
            remainder = self.n % i
            quotient = self.n // i
            if remainder == 0:  # if n is divisible by i
                with PrimeThread.lock:
                    print(
                        f'▪ {self.n} is not prime, '
                        f'because it is {i} * {quotient}.'
                    )
                return
            i += 1
        with PrimeThread.lock:
            print(f'▪ {self.n} is prime.')


my_threads = []
user_input = input('▶ ')
while user_input != 'q':
    try:
        n = int(user_input)  # raises ValueError if string is not integer
        thread = PrimeThread(n)
        my_threads.append(thread)
        thread.start()
    except ValueError:
        with PrimeThread.lock:
            print('▪ That is not an integer.')
    with PrimeThread.lock:
        user_input = input('▶ ')

for t in my_threads:
    t.join()
</code></pre>
<p>The problem with the initial version was, that the output could interfere with the new input:</p>
<pre><code>▶ 126821609265383
▶ 874496478251477▪ 126821609265383 is prime.
▶ ▪ 874496478251477 is not prime, because it is 23760017 * 36805381.
</code></pre>
<p>The lock is supposed to achieve, that it looks like this:</p>
<pre><code>▶ 126821609265383
▶ 874496478251477
▪ 126821609265383 is prime.
▪ 874496478251477 is not prime, because it is 23760017 * 36805381.
</code></pre>
<p>Maybe it does achieve that. (Hard to test with inputs that allow a fast answer.)<br>
<strong>But now the program does not return anything for many inputs with 7 or more digits.</strong><br>
4374553 (prime) sometimes works.
But 76580839 (prime) and 67898329 (2953 · 22993) will fail.</p>
<p>How can I use the lock to prevent the mixing of input and output lines, and still get the result printed, when the calculation is finished?</p>
<p><s>More generally, I would like to ask, if the console is necessary to illustrate the point of threads and locks.
Could those also be illustrated by just looping over a list of numbers?</s><br>
(I should probably bring that up on CodeReview.)</p>
<hr />
<p><strong>Edit:</strong> I just realized, that the missing output can be triggered, by entering some easy input.<br>
In this case the program was stuck after entering 76580839.<br>
But entering 123 caused both answers to appear:</p>
<p><a href="https://i.sstatic.net/MTCoH.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/MTCoH.png" alt="trick first attempt" /></a></p>
<p>But of course the missing answer can only be released with this trick, when it already exists. In this case the calculation for 126821609265383 was not yet finished, when 123 was entered, but a bit later it was released by entering 345. Surprisingly now the output for 345 is stuck:</p>
<p><a href="https://i.sstatic.net/7oBy2.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/7oBy2.png" alt="trick second attempt" /></a></p>
<p>My question remains the same:<br>
What needs to be changed, to make the program behave normally for non-easy inputs?<br>
(The way it already works for easy inputs.)</p>
<p>As mentioned above: <strong>This is an example problem about threading.</strong><br>
The prime check is just a placeholder for a calculation that takes time and resources.</p>
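<p>The stuck output is consistent with lock starvation: whenever the main thread holds <code>PrimeThread.lock</code> around <code>input()</code>, no worker can acquire the lock to print, so a finished result sits waiting until the main thread releases the lock (for example, right after you type another number). A minimal standard-library sketch of that mechanism, with hypothetical names and a <code>sleep</code> standing in for the blocked <code>input()</code>:</p>

```python
import threading
import time

lock = threading.Lock()
printed = []

def worker():
    # the "result" is ready immediately, but reporting it needs the lock
    with lock:
        printed.append("result")

with lock:                     # main thread holds the lock, as it does around input()
    t = threading.Thread(target=worker)
    t.start()
    time.sleep(0.2)            # the worker has finished computing, yet cannot report
    blocked = (printed == [])  # True: the output is stuck behind the lock
t.join()                       # the lock was released above; only now can the worker run

print(blocked, printed)  # True ['result']
```

<p>Narrowing the locked region so that <code>input()</code> itself is never inside it is the usual way out of this shape of deadlock.</p>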
| <python><locking><python-multithreading> | 2023-07-20 12:11:14 | 1 | 1,209 | Watchduck |
76,729,670 | 7,422,232 | How to iterate over 'Row' values in pyspark? | <p>I have this data as output when i perform <code>timeStamp_df.head()</code> in pyspark:</p>
<pre><code>Row(timeStamp='ISODate(2020-06-03T11:30:16.900+0000)', timeStamp='ISODate(2020-06-03T11:30:16.900+0000)', timeStamp='ISODate(2020-06-03T11:30:16.900+0000)', timeStamp='ISODate(2020-05-03T11:30:16.900+0000)', timeStamp='ISODate(2020-04-03T11:30:16.900+0000)')
</code></pre>
<p>My expected output is:</p>
<pre><code>+-------------------------------+
|timeStamp |
+-------------------------------+
|2020-06-03T11:30:16.900+0000|
|2020-06-03T11:30:16.900+0000|
|2020-06-03T11:30:16.900+0000|
|2020-05-03T11:30:16.900+0000|
|2020-04-03T11:30:16.900+0000|
+-------------------------------+
</code></pre>
<p>I first tried to use the <code>.collect()</code> method so I could iterate:</p>
<pre><code>rows_list = timeStamp_df.collect()
print(rows_list)
</code></pre>
<p>It's output is:</p>
<pre><code>[Row(timeStamp='ISODate(2020-06-03T11:30:16.900+0000)', timeStamp='ISODate(2020-06-03T11:30:16.900+0000)', timeStamp='ISODate(2020-06-03T11:30:16.900+0000)', timeStamp='ISODate(2020-05-03T11:30:16.900+0000)', timeStamp='ISODate(2020-04-03T11:30:16.900+0000)')]
</code></pre>
<p>Just to see the values I am using the print statement:</p>
<pre><code>def print_row(row):
    print(row.timeStamp)

for row in rows_list:
    print_row(row)
</code></pre>
<p>But I am getting a single output, because the list contains only one Row to iterate over:</p>
<pre><code>ISODate(2020-06-03T11:30:16.900+0000)
</code></pre>
<p>How can I iterate over the data of Row in pyspark?</p>
| <python><pyspark> | 2023-07-20 12:05:12 | 1 | 395 | AB21 |
76,729,654 | 16,623,197 | Why is the behaviour of stumpy.stump changing so abruptly? Why is it unable to match constant intervals as the same shape? | <p>I want to find similar shapes in time series using stumpy. However there seems to be some kind of special treatment of values that I do not understand.</p>
<p>Let me give you an example:</p>
<pre><code>import numpy as np
import matplotlib.pyplot as plt
import stumpy
ssss=105*np.ones(800)
ssss[:50]=100
m = 210
mp = stumpy.stump(ssss, m=m)
plt.plot(ssss, color="blue")
plt.plot(mp[:,0], color="orange")
</code></pre>
<p>Results in</p>
<p><a href="https://i.sstatic.net/SXZS3.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/SXZS3.png" alt="wrong distance plot" /></a></p>
<p>But clearly there are many parts where there is a perfect match after the jump, so the orange line, the distance, should be 0. Why is that not the case?</p>
<p>Surprisingly, if you change the 100 to 101 you get the result you would expect:</p>
<pre><code>import numpy as np
import matplotlib.pyplot as plt
import stumpy
ssss=105*np.ones(800)
ssss[:50]=101
m = 210
mp = stumpy.stump(ssss, m=m)
plt.plot(ssss, color="blue")
plt.plot(mp[:,0], color="orange")
</code></pre>
<p><a href="https://i.sstatic.net/G1qtN.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/G1qtN.png" alt="distance correct" /></a></p>
<p>What is an explanation for that?</p>
| <python><stumpy><matrix-profile> | 2023-07-20 12:02:24 | 1 | 896 | Noskario |
76,729,648 | 6,725,213 | How can I catch an Exception raised in conftest.py? | <p>I've developed a Pytest plugin, which is used to make an HTTP request to report the status of the tests to an external system during execution.</p>
<p>However, as I have no control over my colleagues' code, they might make some errors in <code>conftest.py</code>, which could crash before this plugin is even loaded. Therefore, the external system is unable to receive the test status and stuck there.</p>
<pre class="lang-py prettyprint-override"><code>@pytest.hookimpl(hookwrapper=True, trylast=True)
def pytest_sessionfinish(session, exitstatus):
    yield
    make_test_end_report_request(xxxxx)


@pytest.hookimpl(tryfirst=True)
def pytest_sessionstart(session):
    make_test_start_report_request(xxxxx)
</code></pre>
<p>In this situation, how can I ensure that I catch all exceptions and send an <code>error occured</code> report to the external system?</p>
<p>I also tried using <code>subprocess</code>:</p>
<pre class="lang-py prettyprint-override"><code>popen = subprocess.Popen(
    ['pytest'] + params or [],
    stdout=subprocess.PIPE,
    stderr=subprocess.STDOUT,
    universal_newlines=True,
    shell=True
)
return_code = popen.wait()
if return_code > 0:
    make_http_report(reason='Error occured')
</code></pre>
<p>but I still cannot "try/except" to see which type of error occurred during the test.</p>
| <python><pytest> | 2023-07-20 12:01:59 | 1 | 1,678 | Chweng Mega |
76,729,605 | 4,451,521 | Why does an empty CUDA kernel take more time than an OpenCV operation on the CPU? | <p>I have read the argument that sometimes implementing things with CUDA on the GPU takes more time than doing it on the CPU because of:</p>
<ul>
<li>The time to allocate device memory</li>
<li>The time to transfer to and back to that memory</li>
</ul>
<p>Alright, so I have written a script (two, actually) in which I <em>do not</em> include the above considerations when I measure the time. It is not ideal, but I measure <em>only</em> the time consumed by the kernel: no transfer, no allocation.</p>
<p>Also, I use a kernel <em>that does nothing</em>. So no complex operator to delay us.</p>
<p>However, even then the kernel takes about 10 times longer than an OpenCV operation done on the CPU.</p>
<p>Here is the PyCUDA script:</p>
<pre><code>import cv2
import numpy as np
import pycuda.autoinit
import pycuda.driver as cuda
from pycuda.compiler import SourceModule
import argparse
parser = argparse.ArgumentParser()
parser.add_argument('--show', action='store_true',help="show the video while making")
parser.add_argument('--resize',type=int,default=800,help="if resize is needed")
parser.add_argument('--noconvert', action='store_true',help="avoid rgb conversion")
# Parse and print the results
args = parser.parse_args()
print(args)
# Path to the input H.264 file
input_video_path = '70secsmovie.h264' # Replace with the path to your input H.264 file
# Path to the output MP4 file
output_video_path = 'output_video_cuda.mp4' # Replace with the desired output MP4 file name
# Open the input video file
video_capture = cv2.VideoCapture(input_video_path)
# Check if the video file was opened successfully
if not video_capture.isOpened():
print("Failed to open the video file.")
exit()
# Set the desired width for the output frames
desired_width = args.resize #800
# Get the video properties
frame_width = int(video_capture.get(cv2.CAP_PROP_FRAME_WIDTH))
frame_height = int(video_capture.get(cv2.CAP_PROP_FRAME_HEIGHT))
fps = int(video_capture.get(cv2.CAP_PROP_FPS))
aspect_ratio = frame_width / frame_height
desired_height = int(desired_width / aspect_ratio)
# Create a VideoWriter object to save the output video
codec = cv2.VideoWriter_fourcc(*'mp4v')
# output_video = cv2.VideoWriter(output_video_path, codec, fps, (frame_width, frame_height))
output_video = cv2.VideoWriter(output_video_path, codec, fps, (desired_width, desired_height))
# Load the CUDA kernel for drawing the rectangle
mod = SourceModule("""
__global__ void draw_rectangle_kernel(unsigned char *image, int image_width, int x, int y, int width, int height, unsigned char *color)
{
int row = blockIdx.y * blockDim.y + threadIdx.y;
int col = blockIdx.x * blockDim.x + threadIdx.x;
if (row >= y && row < y + height && col >= x && col < x + width)
{
// Perform no operation
}
}
""")
draw_rectangle_kernel = mod.get_function("draw_rectangle_kernel")
# Set the block dimensions
block_dim_x, block_dim_y = 16, 16
# Calculate the grid dimensions
grid_dim_x = (frame_width + block_dim_x - 1) // block_dim_x
grid_dim_y = (frame_height + block_dim_y - 1) // block_dim_y
# Define the rectangle properties (you can modify these as desired)
x, y, width, height = 100, 100, 200, 150
color = np.array([0, 255, 0], dtype=np.uint8)
# Initialize the frame count
frame_count = 0
average = 0.0
start = cuda.Event()
end = cuda.Event()
# Read, process, and write each frame from the input video
while True:
# Read a frame from the video file
ret, frame = video_capture.read()
# If the frame was not read successfully, the end of the video file is reached
if not ret:
break
# Increment the frame count
frame_count += 1
if not args.noconvert:
# Convert the frame to the RGB format
frame_rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
else:
frame_rgb = frame
# start.record()
# start.synchronize()
# Upload the frame to the GPU
frame_gpu = cuda.mem_alloc(frame_rgb.nbytes)
cuda.memcpy_htod(frame_gpu, frame_rgb)
start.record()
start.synchronize()
# Invoke the CUDA kernel to draw the rectangle
# grid_dim_x = (frame_width + block_dim_x - 1) // block_dim_x
# grid_dim_y = (frame_height + block_dim_y - 1) // block_dim_y
draw_rectangle_kernel(frame_gpu, np.int32(frame_width), np.int32(x), np.int32(y),
np.int32(width), np.int32(height), cuda.In(color), block=(block_dim_x, block_dim_y, 1),
grid=(grid_dim_x, grid_dim_y))
end.record()
end.synchronize()
# Download the modified frame from the GPU
frame_with_rectangle_rgb = np.empty_like(frame_rgb)
cuda.memcpy_dtoh(frame_with_rectangle_rgb, frame_gpu)
# end.record()
# end.synchronize()
secs = start.time_till(end)*1e-3
# print("Time of Squaring on GPU with inout")
# print("%fs" % (secs))
average = average + secs
if not args.noconvert:
# Convert the modified frame back to the BGR format
frame_with_rectangle_bgr = cv2.cvtColor(frame_with_rectangle_rgb, cv2.COLOR_RGB2BGR)
else:
frame_with_rectangle_bgr = frame_with_rectangle_rgb
# Resize the frame to the desired width and height while maintaining the aspect ratio
resized_frame = cv2.resize(frame_with_rectangle_bgr, (desired_width, desired_height))
# Write the modified frame to the output video
output_video.write(resized_frame)
# Write the modified frame to the output video
# output_video.write(frame_with_rectangle_bgr)
if args.show:
# Display the modified frame (optional)
# cv2.imshow('Modified Frame', frame_with_rectangle_bgr)
cv2.imshow('Modified Frame', resized_frame)
# Wait for the 'q' key to be pressed to stop (optional)
if cv2.waitKey(1) & 0xFF == ord('q'):
break
# Release the video capture and writer objects and close any open windows
video_capture.release()
output_video.release()
if args.show:
cv2.destroyAllWindows()
# Print the total frame count
print("Total frames processed:", frame_count)
print("Operation took ", (average/frame_count))
</code></pre>
<p>and here, for comparison, is the OpenCV script:</p>
<pre><code>import cv2
import argparse
parser = argparse.ArgumentParser()
parser.add_argument('--show', action='store_true',help="show the video while making")
# Parse and print the results
args = parser.parse_args()
print(args)
# Path to the input H.264 file
input_video_path = '70secsmovie.h264' # Replace with the path to your input H.264 file
# Path to the output MP4 file
output_video_path = 'output_video.mp4' # Replace with the desired output MP4 file name
# Open the input video file
video_capture = cv2.VideoCapture(input_video_path)
# Check if the video file was opened successfully
if not video_capture.isOpened():
print("Failed to open the video file.")
exit()
# Set the desired width for the output frames
desired_width = 800
# Get the video properties
frame_width = int(video_capture.get(cv2.CAP_PROP_FRAME_WIDTH))
frame_height = int(video_capture.get(cv2.CAP_PROP_FRAME_HEIGHT))
fps = int(video_capture.get(cv2.CAP_PROP_FPS))
codec = cv2.VideoWriter_fourcc(*'mp4v')
aspect_ratio = frame_width / frame_height
desired_height = int(desired_width / aspect_ratio)
# Create a VideoWriter object to save the output video
# output_video = cv2.VideoWriter(output_video_path, codec, fps, (frame_width, frame_height))
output_video = cv2.VideoWriter(output_video_path, codec, fps, (desired_width, desired_height))
# Initialize the frame count
frame_count = 0
average = 0.0
# Read, process, and write each frame from the input video
while True:
# Read a frame from the video file
ret, frame = video_capture.read()
# If the frame was not read successfully, the end of the video file is reached
if not ret:
break
# Increment the frame count
frame_count += 1
# Draw a rectangle on the frame (you can modify the rectangle's properties here)
x, y, width, height = 100, 100, 200, 150
start = cv2.getTickCount()
cv2.rectangle(frame, (x, y), (x + width, y + height), (0, 255, 0), 2)
end = cv2.getTickCount()
time = (end - start)/ cv2.getTickFrequency()
# print("Time for Drawing Rectangle using OpenCV")
# print("%fs" % (time))
average = average + time
# Resize the frame to the desired width and height while maintaining the aspect ratio
resized_frame = cv2.resize(frame, (desired_width, desired_height))
# Write the modified frame to the output video
output_video.write(resized_frame)
if args.show:
# Display the modified frame (optional)
cv2.imshow('Modified Frame', resized_frame)
# Wait for the 'q' key to be pressed to stop (optional)
if cv2.waitKey(1) & 0xFF == ord('q'):
break
# Release the video capture and writer objects and close any open windows
video_capture.release()
output_video.release()
if args.show:
cv2.destroyAllWindows()
# Print the total frame count
print("Total frames processed:", frame_count)
print("Operation took ", (average/frame_count))
</code></pre>
<p>With that, the OpenCV script takes:</p>
<pre><code>Total frames processed: 704
Operation took 3.119159232954547e-05
</code></pre>
<p>while the PyCUDA script took:</p>
<pre><code>Total frames processed: 704
Operation took 0.0003763223639266063
</code></pre>
<p>How can PyCUDA (or CUDA, for that matter) be useful for image processing?</p>
| <python><opencv><cuda><pycuda> | 2023-07-20 11:56:41 | 1 | 10,576 | KansaiRobot |
76,729,527 | 4,865,723 | Convert language codes to their native language names in Python | <p>I know how to get the name of a language based on its language code, but I don't know how to get that name in its native form, with native letters. For example, (South) Korean would be 한국말 or 한국어. Japanese would be 日本語.</p>
<pre><code>import pycountry
ko = pycountry.languages.get(alpha_2='ko')
print(ko)
</code></pre>
<p>Result:</p>
<pre><code>Language(alpha_2='ko', alpha_3='kor', name='Korean', scope='I', type='L')
</code></pre>
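<p>pycountry itself does not ship native names, so one hedged sketch is a small hand-maintained mapping from ISO 639-1 codes to endonyms (the entries below are illustrative, not a complete or authoritative list). A fuller alternative, assuming the Babel package is available, is <code>babel.Locale('ko').get_display_name('ko')</code>, which draws on CLDR data.</p>

```python
# Hand-maintained mapping from ISO 639-1 codes to native language names.
# Illustrative entries only; a real application needs a fuller table
# (or a library that ships CLDR data, such as Babel).
NATIVE_NAMES = {
    "ko": "한국어",
    "ja": "日本語",
    "de": "Deutsch",
    "ru": "русский",
}

def native_name(alpha_2, fallback=None):
    return NATIVE_NAMES.get(alpha_2, fallback)

print(native_name("ko"))  # -> 한국어
```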
| <python> | 2023-07-20 11:46:43 | 1 | 12,450 | buhtz |
76,729,435 | 4,100,282 | What are best practices to change a Python module's name while keeping backwards compatibility? | <p>I wrote a Python module used by various other people. Suppose I decide that the module's name should change (not saying this is necessarily a good idea; let's just assume a decision was made that it needs to be renamed).</p>
<p>Is there a canonical way to ensure that users can still use the old name, so that both <code>import newname; newname.foo()</code> and <code>import oldname; oldname.foo()</code> work and invoke the same underlying code?</p>
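<p>A common pattern, sketched below with an in-memory stand-in for the real package (since <code>newname</code>/<code>oldname</code> are placeholders): keep <code>newname</code> as the real implementation and register the old name as an alias in <code>sys.modules</code>, optionally emitting a <code>DeprecationWarning</code>. In practice the aliasing code would live in a small <code>oldname.py</code> shim distributed alongside the renamed package.</p>

```python
import sys
import types
import warnings

# Stand-in for the real package; in practice `newname` is a normal
# installed module, created here in memory so the sketch is runnable.
newname = types.ModuleType("newname")
newname.foo = lambda: "hello from newname"
sys.modules["newname"] = newname

def alias_module(old, new):
    """Make `import old` resolve to the module registered as `new`."""
    warnings.warn(f"{old} is deprecated, use {new} instead",
                  DeprecationWarning, stacklevel=2)
    sys.modules[old] = sys.modules[new]

alias_module("oldname", "newname")

import oldname  # found in sys.modules, so no oldname.py is needed here

print(oldname.foo())  # -> hello from newname
```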
| <python><python-packaging> | 2023-07-20 11:35:58 | 1 | 305 | Mathieu |
76,729,393 | 534,238 | How to properly capture library / package version in my package when using pyproject.toml to build | <p>I have moved away from a <code>setup.py</code> file to build my packages / libraries to fully using <code>pyproject.toml</code>. I prefer it overall, but it seems that the version placed in the <code>pyproject.toml</code> does not propagate through the build in any way. So I cannot figure out how to inject the package version -- or any other metadata provided in the pyproject.toml -- into my package.</p>
<p>A google search led me to <a href="https://github.com/python-poetry/poetry/issues/273" rel="nofollow noreferrer">this thread</a>, which had some suggestions. They all seemed like hacks, but I tried this one because it seemed best:</p>
<pre class="lang-py prettyprint-override"><code>from pip._vendor import tomli ## I need to be backwards compatible to Python 3.8
with open("pyproject.toml", "rb") as proj_file:
_METADATA = tomli.load(proj_file)
DESCRIPTION = _METADATA["project"]["description"]
NAME = _METADATA["project"]["name"]
VERSION = _METADATA["project"]["version"]
</code></pre>
<p>It worked fine upon testing, but I did not test robustly enough: once I tried to install this in a fresh location / machine, it failed because the <code>pyproject.toml</code> file is not part of the package installation. (I should have realized this.)</p>
<hr />
<p>So, what is the right / best way to provide metadata, like the package version, to my built package? I need the following requirements:</p>
<ol>
<li>I <em>only</em> want to provide the information once, in the <code>pyproject.toml</code>. (I know that if I need to repeat a value, at some point there will be a mismatch.)</li>
<li>I want the information to be available to the end user, so that someone who installs the package can do something like <code>mypackage.VERSION</code> from her interactive Python session.</li>
<li>I want to <em>only</em> use <code>pyproject.toml</code> and Poetry / PDM. (I actually use PDM, but I know that Poetry is more popular. The point is that I don't want a <code>setup.py</code> or <code>setup.cfg</code> hack. I want to purely use the new way.)</li>
</ol>
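<p>For requirement 2 in particular, a hedged sketch: modern build backends (Poetry, PDM, hatchling, and friends) write the <code>pyproject.toml</code> version into the installed distribution's metadata, and the standard library can read it back at runtime, so the version stays defined in exactly one place. <code>mypackage</code> below is a placeholder distribution name.</p>

```python
# Could live in mypackage/__init__.py: read the version the build backend
# wrote into the installed distribution's metadata.
from importlib import metadata  # stdlib since Python 3.8

def get_version(dist_name):
    try:
        return metadata.version(dist_name)
    except metadata.PackageNotFoundError:
        # e.g. running from a source checkout that was never installed
        return "0.0.0+unknown"

VERSION = get_version("mypackage")  # "mypackage" is a placeholder name
```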
| <python><metadata><python-packaging> | 2023-07-20 11:29:34 | 2 | 3,558 | Mike Williamson |
76,729,370 | 9,110,646 | How to add two pandas data frames and keep both indexes | <p>What is the most elegant and efficient way of adding two pandas data frames while keeping both indexes?</p>
<p><strong>Table 1</strong></p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th style="text-align: left;">Index</th>
<th style="text-align: center;">Col1</th>
<th style="text-align: right;">Col2</th>
</tr>
</thead>
<tbody>
<tr>
<td style="text-align: left;">Sample1</td>
<td style="text-align: center;">0.5</td>
<td style="text-align: right;">1.0</td>
</tr>
<tr>
<td style="text-align: left;">Sample2</td>
<td style="text-align: center;">0.0</td>
<td style="text-align: right;">0.5</td>
</tr>
</tbody>
</table>
</div>
<p><strong>Table 2</strong></p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th style="text-align: left;">Index</th>
<th style="text-align: center;">Col1</th>
<th style="text-align: right;">Col2</th>
</tr>
</thead>
<tbody>
<tr>
<td style="text-align: left;">Sample3</td>
<td style="text-align: center;">0.0</td>
<td style="text-align: right;">1.0</td>
</tr>
<tr>
<td style="text-align: left;">Sample4</td>
<td style="text-align: center;">1.0</td>
<td style="text-align: right;">1.0</td>
</tr>
</tbody>
</table>
</div>
<p><strong>Result Table</strong></p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th style="text-align: left;">Index</th>
<th style="text-align: center;">Col1</th>
<th style="text-align: right;">Col2</th>
</tr>
</thead>
<tbody>
<tr>
<td style="text-align: left;">[Sample1, Sample3]</td>
<td style="text-align: center;">0.5</td>
<td style="text-align: right;">1.0</td>
</tr>
<tr>
<td style="text-align: left;">[Sample2, Sample4]</td>
<td style="text-align: center;">2.0</td>
<td style="text-align: right;">1.5</td>
</tr>
</tbody>
</table>
</div>
<pre><code>import pandas as pd
table_one = pd.DataFrame({'Col1': [0.5, 1.0],
'Col2': [0.0, 0.5]},
index=['Sample1', 'Sample2'])
table_two = pd.DataFrame({'Col1': [0.0, 1.0],
'Col2': [1.0, 1.0]},
index=['Sample3', 'Sample4'])
table_result = pd.DataFrame({'Col1': [0.5, 2.0],
'Col2': [1.0, 1.5]},
index=[['Sample1','Sample3'], ['Sample2','Sample4']])
# Please insert solution here...
</code></pre>
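<p>One hedged sketch, assuming (as the example implies) that rows should be paired positionally: drop both indexes, add the aligned values, then keep both original labels by combining them into a <code>MultiIndex</code>.</p>

```python
import pandas as pd

table_one = pd.DataFrame({'Col1': [0.5, 1.0], 'Col2': [0.0, 0.5]},
                         index=['Sample1', 'Sample2'])
table_two = pd.DataFrame({'Col1': [0.0, 1.0], 'Col2': [1.0, 1.0]},
                         index=['Sample3', 'Sample4'])

# Align the frames positionally (row i with row i) and add the values.
summed = table_one.reset_index(drop=True).add(table_two.reset_index(drop=True))
# Keep both original labels by pairing them in a MultiIndex.
summed.index = pd.MultiIndex.from_arrays([table_one.index, table_two.index])
print(summed)
```

<p>Each result row is then addressable by its pair of labels, e.g. <code>summed.loc[('Sample1', 'Sample3')]</code>.</p>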
| <python><python-3.x><pandas><dataframe><vector> | 2023-07-20 11:25:44 | 1 | 423 | Pm740 |
76,729,282 | 11,332,249 | Why is flask redirect turning an external URL into a local URL while using reverse proxy in IIS? | <p>Just for some background, I have been trying to set up Azure AD OAuth authentication using flask dance. The actual issue I'm having is when redirecting to the authorization URL, flask is not redirecting as I would expect it to.</p>
<p>I've tried using a simple redirect like this:</p>
<pre><code>@oauth_blueprint.route('/')
def route_oauth():
return flask.redirect('https://login.microsoftonline.com/tenant/oauth2/v2.0/authorize')
</code></pre>
<p>Now, if I access the page directly at <code>localhost:5000/oauth</code> then it works as expected and redirects me to the URL.</p>
<p>However, our web application is set up using a reverse proxy. The flask application is accessed at <code>www.example.com/flask/xxx</code> which returns the corresponding page at <code>localhost:5000/xxx</code>. So if I go to <code>www.example.com/flask/oauth</code> then I should get the same page as <code>localhost:5000/oauth</code>.</p>
<p>So the problem is when I go to <code>www.example.com/flask/oauth</code> the redirect does not work correctly. Instead of taking me to <code>https://login.microsoftonline.com/tenant/oauth2/v2.0/authorize</code>, I instead get redirected to <code>www.example.com/tenant/oauth2/v2.0/authorize</code>. I can't make sense of this as the URL is clearly not a relative URL.</p>
<p>In case it's relevant, we are using URL Rewrite in IIS. There is a simple rewrite rule which matches any URL starting with <code>flask/</code>. This has worked fine for all our other pages.</p>
<p>This is what the rule looks like in IIS:
<a href="https://i.sstatic.net/H30TV.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/H30TV.png" alt="IIS Inbound Rule" /></a></p>
<p>I've tried using FRT to trace the rewrite. Below is the output I got. It changes to the undesired URL at line 27 but I don't understand why.</p>
<pre><code>| No. | EventName | Details | Time |
| --- | ------------------------------ | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | ------------ |
| 1. | GENERAL_REQUEST_START | SiteId="2", AppPoolId="App", ConnId="1610613017", RawConnId="1610613017", RequestURL="http://www.example.com/flask/oauth/", RequestVerb="GET" | 12:35:02.963 |
| 2. | GENERAL_ENDPOINT_INFORMATION | RemoteAddress="::1", RemotePort="64068", LocalAddress="::1", LocalPort="81" | 12:35:02.963 |
| 3. | GENERAL_REQUEST_HEADERS | Headers="Connection: keep-alive Accept: text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,image/apng,\*/\*;q=0.8,application/signed-exchange;v=b3;q=0.7 Accept-Encoding: gzip, deflate, br Accept-Language: en-US,en;q=0.9 Host: www.example.com User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/114.0.0.0 Safari/537.36 Edg/114.0.1823.82 sec-ch-ua: "Not.A/Brand";v="8", "Chromium";v="114", "Microsoft Edge";v="114" sec-ch-ua-mobile: ?0 sec-ch-ua-platform: "Windows" Upgrade-Insecure-Requests: 1 Sec-Fetch-Site: none Sec-Fetch-Mode: navigate Sec-Fetch-User: ?1 Sec-Fetch-Dest: document " | 12:35:02.963 |
| 4. | GENERAL_GET_URL_METADATA | PhysicalPath="", AccessPerms="513" | 12:35:02.963 |
| 5. | HANDLER_CHANGED | OldHandlerName="", NewHandlerName="StaticFile", NewHandlerModules="StaticFileModule,DefaultDocumentModule,DirectoryListingModule", NewHandlerScriptProcessor="", NewHandlerType="" | 12:35:02.963 |
| 6. | URL_REWRITE_START | RequestURL="/flask/oauth/", Scope="Distributed", Type="Inbound" | 12:35:02.963 |
| 7. | RULE_EVALUATION_START | RuleName="Flask", RequestURL="flask/oauth/", QueryString="", PatternSyntax="Regex", StopProcessing="true", RelativePath="/" | 12:35:02.963 |
| 8. | PATTERN_MATCH | Pattern="^flask/(.\*)", Input="flask/oauth/", Negate="false", Matched="true" | 12:35:02.963 |
| 9. | REWRITE_ACTION | Substitution="http://localhost:5000/{R:1}", RewriteURL="http://localhost:5000/oauth/", AppendQueryString="true", LogRewrittenURL="true" | 12:35:02.963 |
| 10. | RULE_EVALUATION_END | RuleName="Flask", RequestURL="http://localhost:5000/oauth/", QueryString="", StopProcessing="true", Succeeded="true" | 12:35:02.963 |
| 11. | GENERAL_SET_REQUEST_HEADER | HeaderName="X-Original-URL", HeaderValue="/flask/oauth/", Replace="true" | 12:35:02.963 |
| 12. | URL_CHANGED | OldUrl="/flask/oauth/", NewUrl="http://localhost:5000/oauth/" | 12:35:02.963 |
| 13. | URL_REWRITE_END | RequestURL="http://localhost:5000/oauth/" | 12:35:02.963 |
| 14. | USER_SET | AuthType="", UserName="", SupportsIsInRole="true" | 12:35:02.963 |
| 15. | HANDLER_CHANGED | OldHandlerName="StaticFile", NewHandlerName="ApplicationRequestRoutingHandler", NewHandlerModules="ApplicationRequestRouting", NewHandlerScriptProcessor="", NewHandlerType="" | 12:35:02.963 |
| 16. | GENERAL_SET_REQUEST_HEADER | HeaderName="Max-Forwards", HeaderValue="10", Replace="true" | 12:35:02.963 |
| 17. | GENERAL_SET_REQUEST_HEADER | HeaderName="X-Forwarded-For", HeaderValue="[::1]", Replace="true" | 12:35:02.963 |
| 18. | GENERAL_SET_REQUEST_HEADER | HeaderName="X-ARR-SSL", HeaderValue="", Replace="true" | 12:35:02.963 |
| 19. | GENERAL_SET_REQUEST_HEADER | HeaderName="X-ARR-ClientCert", HeaderValue="", Replace="true" | 12:35:02.963 |
| 20. | GENERAL_SET_REQUEST_HEADER | HeaderName="X-ARR-LOG-ID", HeaderValue="0f2e2610-c6a8-4412-be80-431075354701", Replace="true" | 12:35:02.963 |
| 21. | GENERAL_SET_REQUEST_HEADER | HeaderName="Connection", HeaderValue="", Replace="true" | 12:35:02.963 |
| 22. | URL_CHANGED | OldUrl="http://localhost:5000/oauth/", NewUrl="/flask/oauth/" | 12:35:02.963 |
| 23. | GENERAL_SET_RESPONSE_HEADER | HeaderName="Content-Length", HeaderValue="371", Replace="true" | 12:35:02.963 |
| 24. | GENERAL_SET_RESPONSE_HEADER | HeaderName="Content-Type", HeaderValue="text/html; charset=utf-8", Replace="true" | 12:35:02.963 |
| 25. | GENERAL_SET_RESPONSE_HEADER | HeaderName="Location", HeaderValue="https://login.microsoftonline.com/tenant/oauth2/v2.0/authorize", Replace="true" | 12:35:02.963 |
| 26. | GENERAL_SET_RESPONSE_HEADER | HeaderName="Access-Control-Allow-Origin", HeaderValue="/..", Replace="false" | 12:35:02.963 |
| 27. | GENERAL_SET_RESPONSE_HEADER | HeaderName="Location", HeaderValue="http://www.example.com/tenant/oauth2/v2.0/authorize", Replace="true" | 12:35:02.963 |
| 28. | GENERAL_SET_RESPONSE_HEADER | HeaderName="X-Powered-By", HeaderValue="ARR/3.0", Replace="false" | 12:35:02.963 |
| 29. | GENERAL_NOT_SEND_CUSTOM_ERROR | Reason="SETSTATUS_SUCCESS" | 12:35:02.963 |
| 30. | GENERAL_FLUSH_RESPONSE_START | | 12:35:02.963 |
| 31. | GENERAL_RESPONSE_HEADERS | Headers="Content-Type: text/html; charset=utf-8 Location: http://www.example.com/tenant/oauth2/v2.0/authorize Server: Microsoft-IIS/10.0 Access-Control-Allow-Origin: /.. X-Powered-By: ARR/3.0 " | 12:35:02.963 |
| 32. | GENERAL_RESPONSE_ENTITY_BUFFER | Buffer="<!doctype html> <html lang=en> <title>Redirecting...</title> <h1>Redirecting...</h1> <p>You should be redirected automatically to the target URL: <a href="https://login.microsoftonline.com/tenant/oauth2/v2.0/authorize">https://login.microsoftonline.com/tenant/oauth2/v2.0/authorize</a>. If not, click the link. " | 12:35:02.963 |
| 33. | GENERAL_FLUSH_RESPONSE_END | BytesSent="666", ErrorCode="The operation completed successfully. (0x0)" | 12:35:02.963 |
| 34. | GENERAL_REQUEST_END | BytesSent="666", BytesReceived="1028", HttpStatus="302", HttpSubStatus="0" | 12:35:02.963 |
</code></pre>
<p>I'd really appreciate any help on this issue. I'm happy to provide any more details if necessary. Thank you.</p>
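<p>For what it's worth, the trace suggests ARR itself rewrites the header: the <code>Location</code> value is still correct at step 25 and has the proxied host substituted at step 27. One thing to try, hedged because it depends on the ARR version and where the proxy is configured, is ARR's "Reverse rewrite host in response headers" option, which can be disabled so proxied responses keep their original <code>Location</code> host. As a config fragment it might look like the sketch below; note that the <code>proxy</code> element normally lives at the server level (applicationHost.config), set via IIS Manager's ARR proxy settings or <code>appcmd</code>.</p>

```xml
<!-- Sketch: stop ARR from rewriting the host in response Location headers.
     Placement is illustrative; this setting is usually server-level. -->
<configuration>
  <system.webServer>
    <proxy enabled="true" reverseRewriteHostInResponseHeaders="false" />
  </system.webServer>
</configuration>
```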
| <python><flask><reverse-proxy><iis-10> | 2023-07-20 11:13:50 | 2 | 465 | dashingdove |
76,729,250 | 13,506,329 | Resize a numpy array of lists so that the lists all have the same length and the dtype of the numpy array can be inferred correctly | <p>I currently have the following <code>DataFrame</code>:</p>
<pre><code>data = {'col_a': [['a', 'b'], ['a', 'b', 'c'], ['a'], ['a', 'b', 'c', 'd'], ['a', 'b', 'c'], ['a', 'b', 'c', 'd']],
'col_b':[[1, 3], [1, 0, 0], [4], [1, 1, 2, 0], [0, 0, 5], [3, 1, 2, 5]]}
df= pd.DataFrame(data)
</code></pre>
<p>Suppose I work with <code>col_a</code>. I want to resize the lists in <code>col_a</code> in a vectorized manner so that the length of every sub-list equals the length of the largest list, and I want to fill the empty values with <code>None</code> in the case of <code>col_a</code>. I want the final output to look as follows:</p>
<pre><code> col_a col_b
0 [a, b, None, None] [1, 3, nan, nan]
1 [a, b, c, None] [1, 0, 0, nan]
2 [a, None, None, None] [4, nan, nan, nan]
3 [a, b, c, d] [1, 1, 2, 0]
4 [a, b, c, None] [0, 0, 5, nan]
5 [a, b, c, d] [3, 1, 2, 5]
</code></pre>
<p>So far I have done the following</p>
<pre><code># Convert the column to a NumPy array with object dtype
col_np = df['col_a'].to_numpy()
# Find the maximum length of the lists using NumPy operations
max_length = np.max(np.frompyfunc(len, 1, 1)(col_np))
# Create a mask for padding
mask = np.arange(max_length) < np.frompyfunc(len, 1, 1)(col_np)[:, None]
# Pad the lists with None where necessary
result = np.where(mask, col_np, 'None')
</code></pre>
<p>This results in the following error
<code>ValueError: operands could not be broadcast together with shapes (6,4) (6,) ()</code></p>
<p>I feel like I'm close but there's something that I'm missing here. Please note that only <strong>vectorized</strong> solutions will be marked as the answer.</p>
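<p>For comparison, a hedged sketch of one pandas-level route (not strictly NumPy-vectorized, but free of explicit per-element Python loops): building a <code>DataFrame</code> from the ragged lists pads the short rows with <code>NaN</code>, which can then be swapped for <code>None</code> and collapsed back into lists.</p>

```python
import pandas as pd

data = {'col_a': [['a', 'b'], ['a', 'b', 'c'], ['a'],
                  ['a', 'b', 'c', 'd'], ['a', 'b', 'c'], ['a', 'b', 'c', 'd']]}
df = pd.DataFrame(data)

# Building a frame from ragged lists pads the short rows with NaN...
tmp = pd.DataFrame(df['col_a'].tolist())
# ...which we replace with None before collapsing back into lists.
df['col_a'] = tmp.astype(object).where(tmp.notna(), None).values.tolist()

print(df['col_a'].tolist())
```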
| <python><python-3.x><pandas><numpy><numpy-ndarray> | 2023-07-20 11:09:48 | 2 | 388 | Lihka_nonem |
76,729,200 | 22,221,987 | Unsigned int vs. unsigned long in the Python struct module | <p>I was reading the Python struct module's <a href="https://docs.python.org/3/library/struct.html" rel="nofollow noreferrer">docs</a> and found an interesting thing.</p>
<p><a href="https://i.sstatic.net/Zszq6.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Zszq6.png" alt="enter image description here" /></a></p>
<p>Unsigned int and unsigned long are both 4 bytes there.</p>
<p>But, as far as I know, <strong>unsigned int</strong> can be 2 or 4 bytes, depending on the CPU.</p>
<p>So, here is the question. I have a PC with a little-endian system. This PC shares some data over the local network. I want to receive this data on another PC (whose endianness I don't know) via a socket.
I know that the host PC will send 4 variables which are <strong>uint32_t</strong>.</p>
<p>How should I unpack these values in Python, and why? Should I use <strong>unsigned long</strong> or <strong>unsigned int</strong>? In the docs they have the same size. So, should the format string look like <strong>4L</strong> or <strong>4I</strong>?</p>
<p>An additional question: how should I avoid endianness conflicts? The host PC is 100% little-endian and that won't change in the future. The client PC can be different, so should I write a struct string like <strong>&lt;4L</strong> (or <strong>&lt;4I</strong>, depending on the previous question)? Or should I use the <strong>!</strong> prefix because I receive the data over the local network (via a socket)?</p>
<p>Research into some <a href="https://stackoverflow.com/questions/6823134/python-struct-byte-order-and-alignment-for-network-application-and-difference-b">related</a> <a href="https://learn.microsoft.com/en-us/cpp/cpp/data-type-ranges?view=msvc-170" rel="nofollow noreferrer">questions</a> didn't help a lot, so here I am.</p>
<p>Please shed some light on this issue, as I've been struggling with this topic a bit.</p>
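<p>To make the sizes concrete, a quick sketch: with a byte-order prefix such as <code>&lt;</code> or <code>!</code>, the struct module switches to <em>standard</em> sizes, so <code>I</code> and <code>L</code> are both exactly 4 bytes regardless of the CPU, and either format describes four <strong>uint32_t</strong> values; only without a prefix do native sizes (where <code>L</code> may be 8 bytes) apply.</p>

```python
import struct

# With a byte-order prefix, struct uses standard sizes: I and L are both 4 bytes.
assert struct.calcsize('<4I') == 16
assert struct.calcsize('<4L') == 16
assert struct.calcsize('!4I') == 16

# Round-trip four uint32_t values as sent by a little-endian host.
payload = struct.pack('<4I', 1, 2, 3, 4)
values = struct.unpack('<4I', payload)
print(values)  # -> (1, 2, 3, 4)
```

<p>Since the host is documented as little-endian, <code>&lt;4I</code> describes the wire format directly; <code>!4I</code> (network byte order, big-endian) would only be correct if the host actually byte-swapped before sending.</p>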
| <python><c><python-3.x><sockets><decode> | 2023-07-20 11:02:31 | 1 | 309 | Mika |
76,728,876 | 7,692,855 | SQLAlchemy, Flask and cross-contamination? | <p>I have taken over a flask app, but it does not use the flask-sqlalchemy plugin. I am having a hard time wrapping my head around how it's set up.</p>
<p>It has a <code>database.py</code> file.</p>
<pre><code>from sqlalchemy import create_engine
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.orm import sessionmaker, scoped_session, Session
_session_factory = None
_scoped_session_cls = None
_db_session: Session = None
def _get_session_factory():
global _session_factory
if _session_factory is None:
_session_factory = sessionmaker(
bind=create_engine(CONNECTION_URL)
)
return _session_factory
def new_session():
session_factory = _get_session_factory()
return session_factory()
def new_scoped_session():
global _scoped_session_cls
if _scoped_session_cls is None:
session_factory = _get_session_factory()
if not session_factory:
return
_scoped_session_cls = scoped_session(session_factory)
return _scoped_session_cls()
def init_session():
global _db_session
if _db_session is not None:
log.warning("already init")
else:
_db_session = new_scoped_session()
return _db_session
def get_session():
return _db_session
</code></pre>
<p>When we start up the Flask app, it calls <code>database.init_session()</code>, and then any time we want to use the database it calls <code>database.get_session()</code>.</p>
<p>Is this a correct/safe way to interact with the database? What happens if there are two requests being processed at the same time by different threads? Will this result in cross-contamination, with both using the same session?</p>
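<p>For background, <code>scoped_session</code> is essentially a thread-local registry: each thread that calls the factory gets its own session instead of sharing one. A rough, simplified sketch of the idea (not SQLAlchemy's actual implementation):</p>

```python
import threading

class ScopedRegistry:
    """Rough sketch of what scoped_session provides: one object per thread."""
    def __init__(self, factory):
        self._factory = factory
        self._local = threading.local()

    def __call__(self):
        if not hasattr(self._local, "obj"):
            self._local.obj = self._factory()
        return self._local.obj

registry = ScopedRegistry(object)  # `object` stands in for a Session factory
seen = {}

def worker(name):
    seen[name] = registry()

threads = [threading.Thread(target=worker, args=(i,)) for i in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(seen[0] is seen[1])  # -> False: each thread got its own "session"
```

<p>Note, though, that in the code above <code>get_session()</code> hands every caller the single session object created at init time on the main thread, which bypasses the per-thread scoping that calling the <code>scoped_session</code> factory per request would otherwise provide.</p>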
| <python><flask><sqlalchemy> | 2023-07-20 10:17:49 | 2 | 1,472 | user7692855 |
76,728,740 | 7,760,910 | Two fields with same name causing problem in pydantic model classes | <p>I have two <code>Pydantic</code> model classes where two fields have the same name as can be seen below:</p>
<pre><code>class SnowflakeTable(BaseModel):
database: str
schema: str
table: str
table_schema: List[SnowflakeColumn]
class DataContract(BaseModel):
schema: List[OpenmetadataColumn]
</code></pre>
<p>These model classes have been integrated with other modules to run via <code>FastAPI</code> and <code>Mangum</code>. Now, when I try hitting the APIs it gives me the below error:</p>
<pre><code>NameError: Field name "schema" shadows an attribute in parent "BaseModel"; you might want to use a different field name with "alias='schema'".
</code></pre>
<p>So, to resolve this I tried using Pydantic's <code>Field</code> as below, but it didn't work either:</p>
<pre><code>class SnowflakeTable(BaseModel):
database: str
schema: str = Field(..., alias='schema')
table: str
table_schema: List[SnowflakeColumn]
class DataContract(BaseModel):
schema: List[OpenmetadataColumn] = Field(..., alias='schema')
</code></pre>
<p>The error remains the same. I tried multiple things w.r.t Fields. Also, tried using Route API but nothing worked. What am I missing here? TIA</p>
<p>P.S. I can't rename the attributes. That's against the API rules here.</p>
| <python><python-3.x><pydantic> | 2023-07-20 09:59:41 | 1 | 2,177 | whatsinthename |
76,728,698 | 7,848,740 | Python requests.post raises SSLError(1, '[SSL: SSLV3_ALERT_HANDSHAKE_FAILURE] sslv3 alert handshake failure (_ssl.c:997)') | <p>I've tried <a href="https://stackoverflow.com/questions/35405092/sslerror-sslv3-alert-handshake-failure">this</a> solution, but it didn't work.</p>
<p>I'm doing a POST request of the form:</p>
<pre><code>res = requests.post(
url, data=json.dumps(data), headers=self.headers, verify=False
)
</code></pre>
<p>But I'm getting <code>SSLError(1, '[SSL: SSLV3_ALERT_HANDSHAKE_FAILURE] sslv3 alert handshake failure (_ssl.c:997)')</code></p>
<p>I'm using Python 3.10 with</p>
<ul>
<li>pyOpenSSL 23.2.0</li>
<li>requests 2.31.0</li>
</ul>
<p>I've tried</p>
<pre><code>>>> import ssl
>>> print(ssl.OPENSSL_VERSION)
OpenSSL 1.1.1n 15 Mar 2022
</code></pre>
<p>Running <code>openssl s_client -connect 10.10.96.28:443</code> gives:</p>
<pre><code>SSL handshake has read 1010 bytes and written 611 bytes
Verification error: self signed certificate
---
New, TLSv1.2, Cipher is AES256-GCM-SHA384
Server public key is 2048 bit
Secure Renegotiation IS supported
Compression: NONE
Expansion: NONE
No ALPN negotiated
SSL-Session:
Protocol : TLSv1.2
Cipher : AES256-GCM-SHA384
</code></pre>
<p>Everything works with Python 3.9.</p>
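<p>One plausible cause, hedged because it depends on the server's cipher suites: Python 3.10 tightened the <code>ssl</code> module's defaults (minimum TLS 1.2 and OpenSSL security level 2), so handshakes that still succeeded under 3.9 can now be rejected. A sketch of building a more permissive context, which could then be mounted into <code>requests</code> via a custom <code>HTTPAdapter</code> that passes <code>ssl_context</code> to its pool manager (details vary by urllib3 version):</p>

```python
import ssl

# Roughly recreate the pre-3.10 behaviour: allow TLS 1.2 and lower the
# OpenSSL security level so older cipher suites are accepted again.
ctx = ssl.create_default_context()
ctx.minimum_version = ssl.TLSVersion.TLSv1_2
ctx.set_ciphers("DEFAULT:@SECLEVEL=1")

# The equivalent of verify=False in the original snippet:
ctx.check_hostname = False
ctx.verify_mode = ssl.CERT_NONE
```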
| <python><python-3.x><ssl> | 2023-07-20 09:56:19 | 1 | 1,679 | NicoCaldo |
76,728,568 | 1,877,002 | Python multiprocessing Pool over CPU nodes | <p>I am using a Windows-based PC with a CPU that has 96 cores (48 in node #1 and 48 in node #2).
I am running a Python script using the <code>multiprocessing</code> module.</p>
<pre><code>from multiprocessing import Pool

def mp_main(item):  # main function to be run in parallel
    ...

if __name__ == '__main__':
    mp_ins = [...]  # iterable of inputs to mp_main
    num_pools = 92
    with Pool(num_pools) as p:
        p.map(mp_main, mp_ins)
</code></pre>
<p>By inspecting Task Manager and Resource Monitor I see that the overall CPU utilization is about 50%: all the cores that belong to node #1 are close to 100%, while the cores of node #2 sit nearly idle.</p>
<p>How can I utilize the cores of node#2 better?</p>
| <python><python-3.x><python-multiprocessing> | 2023-07-20 09:38:45 | 1 | 2,107 | Benny K |
76,728,562 | 20,612,566 | Get mean value for price in dict (Django JSONField) | <p>I need to extract some products from orders with the same SKU number.</p>
<pre><code>orders = Order.objects.filter(products__contains=[{"sku": "002-2-1"}])
for e in orders:
    print(e.products)

>>> [{'sku': '002-2-1', 'price': '366.00'}, {'sku': '002-2-1', 'price': '300.00'}] # 2 products in 1 order
>>> [{'sku': '002-2-1', 'price': '400.00'}] # 1 product in the order
</code></pre>
<p>I need to find the mean value of "price"</p>
<p>I tried to get a one list with dicts:</p>
<pre><code>a = sum(list(orders.values_list("products", flat=True)), [])
[{'sku': '002-2-1', 'price': '366.00'}, {'sku': '002-2-1', 'price': '300.00'}, {'sku': '002-2-1', 'price': '400.00'}]
</code></pre>
<p>How can I find the mean value of price?</p>
<p><code>[mean(e.get("price")) for e in a]</code></p>
<p>Maybe there is a better way to find it via <code>F</code> expressions?</p>
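<p>Outside the ORM, once the dicts are in one flat list, plain Python can average the string prices; a minimal sketch with the data hard-coded for illustration (averaging inside the database would instead need JSONField key transforms, which is more involved):</p>

```python
from statistics import mean

# Flattened list of product dicts, as produced by the sum(...) trick above.
# Note the prices are strings, so they must be converted before averaging.
products = [
    {"sku": "002-2-1", "price": "366.00"},
    {"sku": "002-2-1", "price": "300.00"},
    {"sku": "002-2-1", "price": "400.00"},
]

avg_price = mean(float(p["price"]) for p in products)
print(round(avg_price, 2))  # 355.33
```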
| <python><django><dictionary><django-jsonfield> | 2023-07-20 09:38:13 | 1 | 391 | Iren E |
76,728,479 | 995,052 | How to correctly create an extended Pydantic model | <p>I am working on an app which uses FastAPI and ElasticSearch. I thought it would be a good idea to use Pydantic models with some abstractions to create entity classes, so that they become interoperable in the API and index layers. When I tried extending a base entity class (which is a Pydantic BaseModel), the new objects I create are not initializing correctly and are failing validation. What is the correct way to do this?</p>
<p>For example, I created the following class hierarchy</p>
<pre><code>from abc import ABCMeta, abstractmethod
from datetime import datetime
from typing import Any

from pydantic import BaseModel, field_serializer, field_validator


class BaseIndex(BaseModel, metaclass=ABCMeta):
    id: str
    name: str
    created_on: datetime
    modified_on: datetime
    created_by: str
    modified_by: str

    @field_serializer("created_on", "modified_on", mode="plain")
    @classmethod
    def serialize_datetime(cls, v: datetime):
        return int(v.timestamp())

    @field_validator("created_on", "modified_on", mode="after")
    @classmethod
    def validate_datetime(cls, v: int):
        return datetime.utcfromtimestamp(v)

    @abstractmethod
    def as_schema(self, with_defaults: bool = False):
        pass

    @classmethod
    def from_record(cls, record: dict[str, Any]):
        pass


# ----------------------------------------------------------- #


class UserIndex(BaseIndex):
    user_name: str
    email: str
    first_name: str
    last_name: str
    groups: list[str]
    roles: list[str]
    last_login: datetime

    @field_serializer("last_login", mode="plain")
    def serialize_last_login(cls, value: datetime):
        return int(value.timestamp())

    @field_validator("last_login", mode="after")
    def validate_last_login(cls, value: int):
        return datetime.utcfromtimestamp(value)

    def as_schema(self, with_defaults: bool = False):
        return {
            "properties": {
                "id": {"type": "keyword"},
                "name": {"type": "string"},
                "user_name": {"type": "keyword"},
                "email": {"type": "keyword"},
                "first_name": {"type": "string", "copy_to": "name"},
                "last_name": {"type": "string", "copy_to": "name"},
                "groups": {"type": "string"},
                "roles": {"type": "string"}
            },
        }
</code></pre>
<p>When I create a User object with the details , providing values of everything like</p>
<pre><code>user = UserIndex(id="u1", name="A1", user_name="Bob", email="bob@bb.com", first_name="Bob", last_name="B",
groups=["g1", "g2"], roles=["r1", "r2"], last_login=datetime.now())
</code></pre>
<p>ends up as validation error</p>
<pre><code>File ".....\Lib\site-packages\pydantic\main.py", line 150, in __init__
__pydantic_self__.__pydantic_validator__.validate_python(data, self_instance=__pydantic_self__)
pydantic_core._pydantic_core.ValidationError: 4 validation errors for User
created_on
Field required [type=missing, input_value={'id': 'u1..': 'A1'}, input_type=dict]
For further information visit https://errors.pydantic.dev/2.1.2/v/missing
modified_on
Field required [type=missing, input_value={'id': 'u1..': 'A1'}, input_type=dict]
For further information visit https://errors.pydantic.dev/2.1.2/v/missing
created_by
Field required [type=missing, input_value={'id': 'u1..': 'A1'}, input_type=dict]
For further information visit https://errors.pydantic.dev/2.1.2/v/missing
modified_by
Field required [type=missing, input_value={'id': 'u1..': 'A1'}, input_type=dict]
For further information visit https://errors.pydantic.dev/2.1.2/v/missing
</code></pre>
<p>Can anyone point me in the right direction? Is this possible in Pydantic? I could not see any examples in the official docs, though.</p>
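<p>For what it's worth, the traceback only complains about the four <code>BaseIndex</code> fields that were never passed, so extending the model itself seems fine. A minimal sketch of one workaround, giving the audit fields defaults in the base class (the <code>default_factory</code> values, <code>"system"</code>, and the trimmed field set are illustrative assumptions, not your real schema):</p>

```python
from datetime import datetime, timezone

from pydantic import BaseModel, Field


class BaseIndex(BaseModel):
    id: str
    name: str
    # Audit fields get defaults so subclasses need not always supply them.
    created_on: datetime = Field(default_factory=lambda: datetime.now(timezone.utc))
    modified_on: datetime = Field(default_factory=lambda: datetime.now(timezone.utc))
    created_by: str = "system"
    modified_by: str = "system"


class UserIndex(BaseIndex):
    user_name: str


u = UserIndex(id="u1", name="A1", user_name="Bob")
print(u.created_by)  # system
```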
| <python><oop><inheritance><pydantic> | 2023-07-20 09:28:27 | 0 | 8,898 | Kris |
76,728,431 | 7,397,953 | equivalent numpy.random.choice function in Golang | <p>Basically what the title says is, is there any function / library in Golang that does <code>numpy.random.choice</code>?</p>
<p>What I want to do is, I want to shuffle a slice, based on the probability for each element.</p>
<p>The closest thing I got is the <code>rand.Shuffle</code>, but AFAIK it is not considering the probabilities of each element, and I couldn't find the equivalent function in <code>gonum</code> package as well.</p>
<p>Reference:</p>
<ul>
<li><code>numpy.random.choice</code> docs --> <a href="https://numpy.org/doc/stable/reference/random/generated/numpy.random.choice.html" rel="nofollow noreferrer">https://numpy.org/doc/stable/reference/random/generated/numpy.random.choice.html</a></li>
</ul>
| <python><numpy><go><gonum> | 2023-07-20 09:23:04 | 1 | 682 | dzakyputra |
76,728,229 | 9,204,838 | How to avoid circular imports when using Python type hints | <p>I'm trying to figure out what would be the best solution to avoid circular imports in a situation where I'm using <code>Protocol</code> and type hints. The problem is the following:</p>
<p>I have some storage targets that in the push method receives and Event.</p>
<pre><code>from typing import Protocol

from event import Event


class StorageProtocol(Protocol):
    def push(self, event: Event):
        ...


class StoragePostgres(StorageProtocol):
    def push(self, event: Event):
        """pushes Event to Postgres"""
        pass


class StorageS3(StorageProtocol):
    def push(self, event: Event):
        """pushes Event to S3"""
        pass
</code></pre>
<p>And I have some event that has a function to store itself in something that implements the Protocol <code>StorageProtocol</code>.</p>
<pre><code>from dataclasses import dataclass

from storage import StorageProtocol


@dataclass
class Event:
    id: str
    start_datetime: str
    end_datetime: str

    def push(self, storage: StorageProtocol):
        storage.push(self)
</code></pre>
<p>This results in a circular import error, because <code>storage.py</code> imports <code>Event</code> and <code>event.py</code> imports <code>StorageProtocol</code>.</p>
<p>I'm aware of using some approaches like the following</p>
<ol>
<li><code>from __future__ import annotations</code></li>
<li><code>from typing import TYPE_CHECKING</code></li>
<li>Importing only within the class</li>
<li>Using some if statements to avoid import at run time</li>
<li>Remove type hint</li>
<li>Using type hints in quotes</li>
</ol>
<p>But none of them seems like a good way to keep the code clean. What am I missing that would let me keep this simple example clean?</p>
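<p>For comparison, options 1 and 2 from the list combine fairly cleanly: the import exists only for the type checker, so there is no runtime cycle, and the annotation stays readable. A sketch of what <code>event.py</code> could look like under that approach:</p>

```python
# event.py -- sketch combining `from __future__ import annotations`
# with a TYPE_CHECKING-guarded import (options 1 and 2 above).
from __future__ import annotations

from dataclasses import dataclass
from typing import TYPE_CHECKING

if TYPE_CHECKING:
    # Seen by mypy/pyright only; never executed, so no runtime cycle.
    from storage import StorageProtocol


@dataclass
class Event:
    id: str
    start_datetime: str
    end_datetime: str

    def push(self, storage: StorageProtocol) -> None:
        storage.push(self)
```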
| <python><python-3.x><design-patterns><circular-dependency><typing> | 2023-07-20 08:56:17 | 0 | 680 | Carlos Azevedo |
76,728,208 | 1,274,613 | How to type annotate a property overloading a base class attribute? | <p>I have an abstract base class</p>
<pre><code>class B(metaclass=ABCMeta):
    h: HomeMadeType
</code></pre>
<p>where in one concrete class, I want <code>h</code> to be a property. How do I type annotate this?</p>
<pre><code>class C(B):
    @property
    def h(self) -> HomeMadeType:
        return HomeMadeType()
</code></pre>
<p>is obviously not enough (mypy says ‘error: Cannot override writeable attribute with read-only property [override]’), but it shows that mypy understands properties in principle. However,</p>
<pre><code>class D(B):
    @property
    def h(self) -> HomeMadeType:
        return HomeMadeType()

    @h.setter
    def set_h(self, value: HomeMadeType) -> None:
        pass
</code></pre>
<p>also fails with</p>
<pre><code>error: Cannot override writeable attribute with read-only property [override]
error: "Callable[[D], HomeMadeType]" has no attribute "setter" [attr-defined]
</code></pre>
<p>so that support is not so clear, and</p>
<pre><code>class E(B):
    def get_h(self) -> HomeMadeType:
        return HomeMadeType()

    def set_h(self, value: HomeMadeType) -> None:
        pass

    h = property(get_h, set_h)
</code></pre>
<p>results in ‘error: Incompatible types in assignment (expression has type "property", base class "B" defined the type as "HomeMadeType") [assignment]’ – so it seems like the support is specifically for the <code>@property</code> decorator, not even for property descriptors in general.</p>
<p>Is there a way I can overload an attribute with a property in a way that makes mypy happy?</p>
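<p>As a side note on the second error, the usual pattern binds the setter to the same name as the getter (<code>h</code>, not <code>set_h</code>); <code>@h.setter</code> applied to a differently named function is part of what mypy trips over. A runnable sketch of the naming (whether the override error also disappears still depends on how the base class declares <code>h</code>):</p>

```python
class HomeMadeType:
    pass


class D:
    def __init__(self):
        self._h = HomeMadeType()

    @property
    def h(self) -> HomeMadeType:
        return self._h

    @h.setter                     # the setter reuses the name 'h'
    def h(self, value: HomeMadeType) -> None:
        self._h = value


d = D()
d.h = HomeMadeType()
print(type(d.h).__name__)         # HomeMadeType
```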
| <python><properties><mypy> | 2023-07-20 08:54:40 | 0 | 6,472 | Anaphory |
76,728,178 | 386,861 | How to combine pandas, dataclasses to import data | <p>I'm trying to learn classes with dataclasses</p>
<pre><code>@dataclass
class Collect:
    url: str

    def collect(url):
        df = pd.read_csv(url)
        return df


df = Collect("national_news_scrape.csv")
df.head()
</code></pre>
<p>But I get error:</p>
<p>AttributeError: 'Collect' object has no attribute 'head'</p>
<p>Not sure why.</p>
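<p>A hedged sketch of the likely fix: <code>Collect(...)</code> builds a <code>Collect</code> instance, not a DataFrame, so the method needs a <code>self</code> parameter and has to be called to obtain the frame:</p>

```python
from dataclasses import dataclass

import pandas as pd


@dataclass
class Collect:
    url: str

    def collect(self) -> pd.DataFrame:   # note the `self` parameter
        return pd.read_csv(self.url)


# The dataclass instance is only a holder; .collect() returns the DataFrame:
# df = Collect("national_news_scrape.csv").collect()
# df.head()
```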
| <python><pandas><oop><python-dataclasses> | 2023-07-20 08:50:13 | 2 | 7,882 | elksie5000 |
76,728,134 | 11,416,654 | RSA - CTF Encrypt and Decrypt | <p>I am currently trying to solve a practice CTF challenge on RSA. The source code of the challenge is the following:</p>
<pre><code>from Crypto.Util.number import getStrongPrime, bytes_to_long
from secret import flag

p = getStrongPrime(1024)
n = p*p
ct = pow(bytes_to_long(flag), 65537, n)

print(f"N: {n}")
print("e: 65537")
print(f"Ciphertext: {ct}")
</code></pre>
<p>My goal is to find the flag. I noticed that n is nothing other than p squared, so basically p=q in terms of RSA.
This is the script that I am using to solve the challenge:</p>
<pre><code>from sympy import factorint
from Crypto.Util.number import inverse
from Crypto.Util.number import long_to_bytes, bytes_to_long
N = 17247429011400091594903121614278317774635194567355664182083286460825623278786842450296276336243601369886531345460567758683264711621579053621928923112845729038920820584866481858788199156251002137294317693549968171587560980199578605277615016297806648517292231417503335937517545040818693753744974426077235846550662950287459352497273884563460997553049302884794110615691778846001187875451148062541191040207901569501139838046342432918478105568543142728845613434476488073435158841063873479450746792085243366610793708083771235300723836114651517179308753861599354559357082701098376497379860365093082194763554366394532766270441
e = 65537
ciphertext = 73856274733636037480705118582707253154331884152543812530396852364910317444631279978151266880998392327051579551195174910966346458203462739328504111752660934987920144143256608807202384495146366180063763952442956953997212234338589093090543779433867912610529819086616268003032728521238128403257422990840265611603144926710938571975237945229348543608800432648053640151779084773334154380549080493528741315675693189798034401372997956383236742945661608648934118804562523298133099955814197894630073716823425525171494907446686386474871039477578650745672272267639633128732470409207666675371064176768285518092393337398629693441
#print(factorint(N)) #square of 131329467414590894604854795173365398896201184952104193748129988169713995480202398488092403487193967215049091388509880107122001081151286397499791450577587622771865343057370826566912737156758236033887044593314395514760330131964758403355753063826514226886688942810874269688433452014205077769669852552277528221229
p = 131329467414590894604854795173365398896201184952104193748129988169713995480202398488092403487193967215049091388509880107122001081151286397499791450577587622771865343057370826566912737156758236033887044593314395514760330131964758403355753063826514226886688942810874269688433452014205077769669852552277528221229
q = p
#print(p*p==N) This is true
#Now that we have found p and q..
phi = (p-1)*(q-1)
d = inverse(e, phi)
m = pow(ciphertext, d, N)
print(m)
print(ciphertext == pow(m,e,N)) #False..
</code></pre>
<p>Now, the problem is that I correctly find p (even though N is very big), but when I compute phi, d and the resulting m, re-encrypting m does not reproduce the ciphertext, which means m is wrong. Does anyone have a suggestion about what I am doing wrong?</p>
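<p>One thing worth checking, shown here with toy numbers: for n = p², Euler's totient is p·(p−1), not (p−1)·(q−1) with q = p, because p itself divides n. A small round-trip check:</p>

```python
# Toy-sized check that phi(p**2) = p*(p-1) makes RSA round-trip.
p = 11
n = p * p
e = 7                      # coprime to p*(p-1) = 110
phi = p * (p - 1)          # phi(p**2), NOT (p-1)*(p-1)
d = pow(e, -1, phi)        # modular inverse (Python 3.8+)
m = 42                     # any message coprime to p
c = pow(m, e, n)
print(pow(c, d, n))        # 42
```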
| <python><rsa><modular-arithmetic><ctf> | 2023-07-20 08:44:21 | 1 | 823 | Shark44 |
76,728,047 | 5,467,541 | Pandas groupby apply too slow. Suggest improvements / alternatives | <p><strong>Input Dataframe:</strong></p>
<pre><code> ff pp xx yy
0 10000 IVR -19.6000 0.9700
1 10000 IVL -19.8100 11.0900
2 10000 RV -19.8500 -10.0300
3 10000 LV -20.1500 23.3100
4 10001 RV -19.8700 -10.0100
5 10001 IVR -19.5900 0.9900
6 10001 IVL -19.8100 11.0700
7 10001 LV -20.1600 23.3300
8 10002 RV -19.8900 -10.0000
9 10002 IVR -19.5700 1.0100
10 10002 IVL -19.8200 11.0500
11 10002 LV -20.1800 23.3600
12 10003 IVR -19.5400 1.0300
13 10003 RV -19.9100 -9.9800
14 10003 IVL -19.8200 11.0300
15 10003 LV -20.1900 23.3700
16 10004 RV -19.9400 -9.9600
17 10004 IVR -19.5000 1.0600
18 10004 IVL -19.8400 11.0100
19 10004 LV -20.2000 23.4000
</code></pre>
<p><strong>Output Dataframe:</strong></p>
<pre><code> ff x_min_LV x_min_RV x_max_LV x_max_RV y_min_LV y_min_RV y_max_LV y_max_RV
0 10000 0.3400 0.0400 0.5500 0.2500 12.2200 11.0000 22.3400 21.1200
1 10001 0.3500 0.0600 0.5700 0.2800 12.2600 11.0000 22.3400 21.0800
2 10002 0.3600 0.0700 0.6100 0.3200 12.3100 11.0100 22.3500 21.0500
3 10003 0.3700 0.0900 0.6500 0.3700 12.3400 11.0100 22.3400 21.0100
4 10004 0.3600 0.1000 0.7000 0.4400 12.3900 11.0200 22.3400 20.9700
</code></pre>
<p><strong>Logic explained:</strong></p>
<p>For each value of 'ff', it is required to create the following columns. (show in output df)</p>
<pre><code>1. x_min_LV= minimum 'xx' value between (LV) and (IVR, IVL)
ex:
when ff = 10000,
x_min_LV = (-20.1500) - (-19.8100) = -0.34
np.abs(x_min_LV) = 0.34
2. x_max_LV= maximum 'xx' value between (LV) and (IVR, IVL)
ex:
when ff = 10000,
x_max_LV = (-20.1500) - (-19.6000) = -0.55
np.abs(x_max_LV) = 0.55
3. y_min_RV= minimum 'yy' value between (RV) and (IVR, IVL)
ex: when ff = 10000,
y_min_RV= (-10.03) - (0.97) = -11
np.abs(y_min_RV) = 11
</code></pre>
<p>and so on...</p>
<p><strong>Minimum Reproducible Example</strong></p>
<p><a href="https://www.online-python.com/sXlef30Q1V" rel="nofollow noreferrer">https://www.online-python.com/sXlef30Q1V</a></p>
<p>The current approach is rudimentary at best; it solves the problem but takes 43 s for 80k rows. I want to reduce the runtime. Please suggest any optimizations or alternative methods.</p>
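<p>One possible direction, hedged on the assumption that every <code>ff</code> has exactly one row per <code>pp</code> value: pivot once to a wide frame and compute all eight columns with array broadcasting instead of a per-group <code>apply</code> (column names mirror the output above):</p>

```python
import numpy as np
import pandas as pd

# One ff group from the question, for illustration.
df = pd.DataFrame({
    "ff": [10000, 10000, 10000, 10000],
    "pp": ["IVR", "IVL", "RV", "LV"],
    "xx": [-19.60, -19.81, -19.85, -20.15],
    "yy": [0.97, 11.09, -10.03, 23.31],
})

# Wide layout: one row per ff, one column per (value, pp) pair.
wide = df.pivot(index="ff", columns="pp", values=["xx", "yy"])

out = pd.DataFrame(index=wide.index)
for val, prefix in (("xx", "x"), ("yy", "y")):
    iv = wide[val][["IVR", "IVL"]].to_numpy()                # shape (n_ff, 2)
    for side in ("LV", "RV"):
        diffs = np.abs(wide[(val, side)].to_numpy()[:, None] - iv)
        out[f"{prefix}_min_{side}"] = diffs.min(axis=1)
        out[f"{prefix}_max_{side}"] = diffs.max(axis=1)

out = out.reset_index()
print(out.round(2))
```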
| <python><pandas><dataframe><numpy><group-by> | 2023-07-20 08:33:39 | 1 | 595 | Abhinav Ralhan |
76,728,008 | 713,200 | How to split a string starting from the first digit in Python | <p>I'm looking for a Python built-in function that will help me split a string starting from where the first digit occurs.
For example</p>
<pre><code>input: TengGigE0/0/3/4
output: 0/0/3/4
input: TengigabitEthernet 4/3/4/3
output: 4/3/4/3
input: Te0/4/5
output: 0/4/5
</code></pre>
<p>How can this be achieved in Python, if not with regex?</p>
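<p>A regex-based sketch: locate the first digit and slice from there (<code>re.search</code> returns <code>None</code> when the string has no digit, hence the guard):</p>

```python
import re


def from_first_digit(s: str) -> str:
    m = re.search(r"\d", s)          # position of the first digit
    return s[m.start():] if m else ""


print(from_first_digit("TengGigE0/0/3/4"))             # 0/0/3/4
print(from_first_digit("TengigabitEthernet 4/3/4/3"))  # 4/3/4/3
print(from_first_digit("Te0/4/5"))                     # 0/4/5
```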
| <python> | 2023-07-20 08:28:27 | 2 | 950 | mac |
76,727,985 | 8,551,360 | Couldn't log in using Google on the server but it works fine on localhost | <p>I have integrated Google Calendar, for which the client needs to log in to their Google account first.
Everything works fine on localhost, but on the server the redirection to the Google account is not happening. I looked into this issue and found some solutions, but I couldn't understand and implement one on my end that works on the server.</p>
<p>Documentation: <a href="https://developers.google.com/calendar/api/quickstart/python" rel="nofollow noreferrer">https://developers.google.com/calendar/api/quickstart/python</a></p>
<p>This is the main code that is used for google sign in:</p>
<pre><code>flow = InstalledAppFlow.from_client_secrets_file('google_calendar_client_python.json', self.scope)
self.creds = flow.run_local_server(port=0)
</code></pre>
<p>Now I understand that the issue is in <code>flow.run_local_server(port=0)</code>, which only works on localhost, but I couldn't find any solution to make it work on a server.</p>
<p>Please help if you can.</p>
<p>P.S: Will post full code if anybody wants that!</p>
| <python><google-api><google-oauth><google-signin> | 2023-07-20 08:26:47 | 1 | 548 | Harshit verma |
76,727,866 | 9,895,048 | How to Remove Watermark from PDF using Python, without converting PDF to images in intermediate stage | <p>I have a PDF file with a watermark on it. It looks as follows:
<a href="https://i.sstatic.net/Yb4f3.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Yb4f3.png" alt="enter image description here" /></a></p>
<p>I want to remove the watermark from the PDF file using Python. I have tested existing solutions from the internet, such as using PyPDF4, but they don't work for me. What I want to do is remove the watermark and then apply an extraction task to the PDF. My extraction task is to get the tabular data from the PDF, for which I am using camelot-py. We shouldn't convert the PDF to images and back to PDF, because then we couldn't extract the contents with PDF readers like camelot-py or tabula. The PDF file is attached <a href="https://drive.google.com/file/d/15fJnJBzG6b_QqPtTLgRLVYE4WJI5M8KN/view?usp=sharing" rel="nofollow noreferrer">here.</a></p>
<p>Thank you!</p>
| <python><pdf><python-camelot> | 2023-07-20 08:10:22 | 1 | 411 | npn |
76,727,843 | 9,334,609 | sqlalchemy.exc.OperationalError: (mysql.connector.errors.OperationalError) MySQL Connection not available | <p>For some reason I am getting a "disconnect" error message from the database. I am using Flask, Python and SQLAlchemy, and the app is running on pythonanywhere.com.
I've been through similar questions and can't find a solution to the problem.
Specifically, I have <code>app.config["SQLALCHEMY_POOL_RECYCLE"] = 299</code> as the recommended setting. However, in the following link <a href="https://help.pythonanywhere.com/pages/UsingSQLAlchemywithMySQL/" rel="nofollow noreferrer">https://help.pythonanywhere.com/pages/UsingSQLAlchemywithMySQL/</a> I found that for pythonanywhere the recommendation is that pool recycle should have a value of 280.</p>
<p>I am using Flask-SQLAlchemy==3.0.5 and SQLAlchemy version '2.0.19':</p>
<pre><code>Python 3.10.5 (main, Jul 22 2022, 17:09:35) [GCC 9.4.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import sqlalchemy
>>> sqlalchemy.__version__
'2.0.19'
>>>
</code></pre>
<p>My configuration is:</p>
<pre><code> SQLALCHEMY_DATABASE_URI = "mysql+mysqlconnector://{username}:{password}@{hostname}/{databasename}".format(
username=DATABASE_USERNAME,
password=DATABASE_PASSWORD,
hostname=DATABASE_HOST_NAME,
databasename=DATABASE_NAME
)
app.config["SQLALCHEMY_DATABASE_URI"] = SQLALCHEMY_DATABASE_URI
app.config["SQLALCHEMY_POOL_RECYCLE"] = 280
</code></pre>
<p>The documentation at the following link <a href="https://help.pythonanywhere.com/pages/UsingSQLAlchemywithMySQL/" rel="nofollow noreferrer">https://help.pythonanywhere.com/pages/UsingSQLAlchemywithMySQL/</a> on pythonanywhere states the following "If you're using the Flask-SQLAlchemy plugin, then for recent versions you configure it like this:</p>
<pre><code>app.config['SQLALCHEMY_ENGINE_OPTIONS'] = {'pool_recycle' : 280}
db.init_app(app)
</code></pre>
<p>"</p>
<p>My question is:
should I replace</p>
<pre><code>app.config["SQLALCHEMY_POOL_RECYCLE"] = 280
</code></pre>
<p>with</p>
<pre><code>app.config['SQLALCHEMY_ENGINE_OPTIONS'] = {'pool_recycle' : 280}
</code></pre>
<p>Thanks.</p>
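<p>For context, both configuration styles ultimately feed the same <code>create_engine()</code> keyword, so a quick sanity check can be done with plain SQLAlchemy (a sketch; the private <code>_recycle</code> attribute is read purely for demonstration):</p>

```python
from sqlalchemy import create_engine

# Either Flask config style ends up as this engine argument; 280 keeps
# pooled connections younger than PythonAnywhere's ~5-minute idle timeout.
engine = create_engine("sqlite://", pool_recycle=280)
print(engine.pool._recycle)   # private attribute, read here only for the demo
```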
| <python><mysql><python-3.x><sqlalchemy><flask-sqlalchemy> | 2023-07-20 08:06:24 | 1 | 461 | Ramiro |
76,727,774 | 20,646,254 | Selenium WebDriver Chrome 115 stopped working | <p>I have Chrome 115.0.5790.99 installed on Windows, and I use Selenium 4.10.0. In my Python code, I call <strong>service = Service(ChromeDriverManager().install())</strong> and it returns the error:</p>
<blockquote>
<p>ValueError: There is no such driver by url [sic] <a href="https://chromedriver.storage.googleapis.com/LATEST_RELEASE_115.0.5790" rel="nofollow noreferrer">https://chromedriver.storage.googleapis.com/LATEST_RELEASE_115.0.5790</a>.</p>
</blockquote>
<p>I use <code>ChromeDriverManager().install()</code> in order to ensure the use of the latest stable version of the webdriver. How can I solve the issue?</p>
<p><strong>My simple code:</strong></p>
<pre><code>from selenium import webdriver
from selenium.webdriver.chrome.service import Service
from webdriver_manager.chrome import ChromeDriverManager
import time
# Install Webdriver
service = Service(ChromeDriverManager().install())
# Create Driver Instance
driver = webdriver.Chrome(service=service)
# Get Web Page
driver.get('https://www.crawler-test.com')
time.sleep(5)
driver.quit()
</code></pre>
<p><strong>Error output:</strong></p>
<pre class="lang-none prettyprint-override"><code>Traceback (most recent call last):
File "C:\Users\Administrator\Documents\...\test.py", line 7, in <module>
service = Service(ChromeDriverManager().install())
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\Administrator\AppData\Local\Programs\Python\Python311\Lib\site-packages\webdriver_manager\chrome.py", line 39, in install
driver_path = self._get_driver_path(self.driver)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\Administrator\AppData\Local\Programs\Python\Python311\Lib\site-packages\webdriver_manager\core\manager.py", line 30, in _get_driver_path
file = self._download_manager.download_file(driver.get_driver_download_url())
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\Administrator\AppData\Local\Programs\Python\Python311\Lib\site-packages\webdriver_manager\drivers\chrome.py", line 40, in get_driver_download_url
driver_version_to_download = self.get_driver_version_to_download()
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\Administrator\AppData\Local\Programs\Python\Python311\Lib\site-packages\webdriver_manager\core\driver.py", line 51, in get_driver_version_to_download
self._driver_to_download_version = self._version if self._version not in (None, "latest") else self.get_latest_release_version()
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\Administrator\AppData\Local\Programs\Python\Python311\Lib\site-packages\webdriver_manager\drivers\chrome.py", line 62, in get_latest_release_version
resp = self._http_client.get(url=latest_release_url)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\Administrator\AppData\Local\Programs\Python\Python311\Lib\site-packages\webdriver_manager\core\http.py", line 37, in get
self.validate_response(resp)
File "C:\Users\Administrator\AppData\Local\Programs\Python\Python311\Lib\site-packages\webdriver_manager\core\http.py", line 16, in validate_response
raise ValueError(f"There is no such driver by url {resp.url}")
ValueError: There is no such driver by url https://chromedriver.storage.googleapis.com/LATEST_RELEASE_115.0.5790
</code></pre>
<p>I tried the following but no success:</p>
<ul>
<li>to disable Chrome auto-update, but Chrome manages to update itself anyway (<a href="https://www.minitool.com/news/disable-automatic-chrome-updates.html" rel="nofollow noreferrer">https://www.minitool.com/news/disable-automatic-chrome-updates.html</a> and <a href="https://www.webnots.com/7-ways-to-disable-automatic-chrome-update-in-windows-and-mac" rel="nofollow noreferrer">https://www.webnots.com/7-ways-to-disable-automatic-chrome-update-in-windows-and-mac</a>);</li>
<li>to install Chrome 114 and webdriver for version 114, than it works till Chrome get updated automatically;</li>
<li>to follow instructions <a href="https://chromedriver.chromium.org/downloads/version-selection" rel="nofollow noreferrer">https://chromedriver.chromium.org/downloads/version-selection</a> but when generating URL and running link <a href="https://chromedriver.storage.googleapis.com/LATEST_RELEASE_115.0.5790" rel="nofollow noreferrer">https://chromedriver.storage.googleapis.com/LATEST_RELEASE_115.0.5790</a> I get error <strong>No such object: chromedriver/LATEST_RELEASE_115.0.5790</strong></li>
</ul>
<p>How can I solve the issue till webdriver for Chrome 115 will be finally released at <a href="https://chromedriver.chromium.org/downloads" rel="nofollow noreferrer">the download location</a>?</p>
| <python><google-chrome><selenium-webdriver> | 2023-07-20 07:55:42 | 15 | 447 | TaKo |
76,727,610 | 6,730,854 | Alternatives to Scipy in Global optimization | <p>I'm trying to find the global minimum of a multivariate function (<code>ndim = 9</code>). The function has very narrow minima, and a slight change to X causes the output value to rise quite quickly. I tried various scipy methods, like <code>differential_evolution</code>, <code>basin_hopping</code>, <code>dual_annealing</code>, <code>shgo</code>. They perform quite well, but I still end up in a local minimum rather than the absolute global minimum.</p>
<p>What other packages/algorithms would you suggest trying?</p>
| <python><scipy-optimize><minimization> | 2023-07-20 07:32:24 | 1 | 472 | Mike Azatov |
76,727,608 | 7,946,082 | fails to connect github with ssh while building python dockerfile | <h2>What I want to do</h2>
<p>I'm trying to connect to github with ssh while building python docker image.</p>
<pre><code>FROM python:3.10.4 AS builder
ENV PYTHONDONTWRITEBYTECODE 1
ENV PYTHONUNBUFFERED 1
WORKDIR /running
COPY ./pyproject.toml ./
# copy key
COPY github.pem /root/.ssh/id_rsa
RUN chmod 600 /root/.ssh/id_rsa
# Add the necessary SSH config
RUN echo 'Host * \n\
IdentityFile /root/.ssh/id_rsa \n\
' > /root/.ssh/config
RUN which ssh
RUN ssh -T git@github.com
</code></pre>
<h2>What happened instead</h2>
<p>The result is...</p>
<pre><code>#12 [8/8] RUN ssh -T git@github.com
#12 sha256:009cd770ecaf452b09b48969487c7957d5c5fddf449cc2ea54bc195b09c44e37
#12 1.230 Host key verification failed.
#12 ERROR: executor failed running [/bin/sh -c ssh -T git@github.com]: exit code: 255
</code></pre>
<p>But I can connect to GitHub fine with that SSH key on my local Mac.</p>
<h2>the question</h2>
<p>How do I connect to GitHub while building the Docker image?</p>
| <python><git><docker><ssh> | 2023-07-20 07:31:31 | 0 | 513 | Jerry |
76,727,604 | 4,404,919 | FileNotFoundError: [Errno 2] No such file or directory - Python | <p>I am trying to run this Python script I got off the internet but keep getting this error. Why does it keep adding an extra \ in the path?</p>
<p>Sorry, I am a newbie to Python.</p>
<p><code>FileNotFoundError: [Errno 2] No such file or directory: 'c:\\Users\\HOMEPC\\Downloads\\Image-Remover-Python-main\\saved_models\\u2net\\u2net.pth'</code></p>
<p>Here is the code</p>
<pre><code>import os
currentDir = os.path.dirname(__file__)
print("---Loading Model---")
model_name = 'u2net'
model_dir = os.path.join(currentDir, 'saved_models',
model_name, model_name + '.pth')
</code></pre>
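<p>As a side note, the doubled <code>\\</code> in the message is just Python's <code>repr</code> of a single backslash; the real problem is that <code>u2net.pth</code> does not exist at that location. A small sketch to check before loading (the relative path here is illustrative):</p>

```python
import os

# Illustrative relative path; adjust to your actual layout.
model_path = os.path.join("saved_models", "u2net", "u2net.pth")

if not os.path.exists(model_path):
    print("model file not found:", os.path.abspath(model_path))
```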
| <python> | 2023-07-20 07:31:04 | 1 | 5,579 | Jacques Krause |
76,727,431 | 5,269,892 | Python set global variable in namespace of only partially imported module | <p>I would like to import a function from a Python subscript (test_sub.py) into a main Python script (test_main.py). As the function uses global variables (this is intended), the variables in test_main.py need to be bound to the namespace of test_sub.py to be known there. Is there a way to avoid a duplicate test_sub import? And, equally importantly, to avoid importing other, unused functions as well (otherwise a single <code>import test_sub</code> could be used)?</p>
<p><strong>test_sub.py:</strong></p>
<pre><code>def test_func_a():
    print(a)

def test_func_b():
    print(b)

def test_func_c():
    print(c)
</code></pre>
<p><strong>test_main.py:</strong></p>
<pre><code>from test_sub import test_func_a
a = 'Hello, a'
import test_sub
test_sub.a = a
test_func_a()
</code></pre>
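<p>On the duplicate-import concern specifically: a second <code>import test_sub</code> is nearly free, because Python caches modules in <code>sys.modules</code> and a module body executes only once per process. Note this also means <code>from test_sub import test_func_a</code> already builds the whole module object, so the unused functions exist in memory either way. A stdlib demonstration:</p>

```python
import sys

from json import dumps   # imports a single name ...
import json              # ... and this fetches the same cached module

assert sys.modules["json"] is json   # one module object per process
assert dumps is json.dumps           # the imported name is the very same function
print("module body ran once; both imports share one object")
```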
| <python><global-variables><python-import> | 2023-07-20 07:06:15 | 1 | 1,314 | silence_of_the_lambdas |
76,727,384 | 10,682,289 | How to draw bar charts with labels | <p>I'm trying to execute the code from official documentation (<a href="https://altair-viz.github.io/gallery/bar_chart_with_labels.html" rel="nofollow noreferrer">Bar Chart with Labels</a>) and use exactly the same lines:</p>
<pre><code>import altair as alt
from vega_datasets import data
source = data.wheat()
base = alt.Chart(source).encode(
x='wheat',
y="year:O",
text='wheat'
)
base.mark_bar() + base.mark_text(align='left', dx=2)
</code></pre>
<p>but when I add</p>
<pre><code>base.save('/path/to/chart.png')
</code></pre>
<p>to save chart as PNG file I get</p>
<blockquote>
<p>altair.utils.schemapi.SchemaValidationError: '{'data': {'name':
'data-76d1ce26ea5761007c35827e1564d86c'}, 'encoding': {'text':
{'field': 'wheat', 'type': 'quantitative'}, 'x': {'field': 'wheat',
'type': 'quantitative'}, 'y': {'field': 'year', 'type': 'ordinal'}}}'
is an invalid value.</p>
<p>'mark' is a required property</p>
</blockquote>
<p>Is it a bug, or am I doing something wrong?</p>
| <python><altair> | 2023-07-20 06:59:22 | 2 | 4,891 | JaSON |
76,727,251 | 2,802,576 | Python logger override filename | <p>I am following an object-oriented design for my Python package, and here is how I have created a logger and used it in classes:</p>
<p><strong>logger.py</strong></p>
<pre><code>from logging import getLogger, config


class MyLogger():
    def __init__(self, config_path):
        config.fileConfig(config_path)
        self.logger = getLogger()

    def debug(self, message):
        self.logger.debug(message)
</code></pre>
<p><strong>client.py</strong></p>
<pre><code>from logger import MyLogger


class Client():
    def __init__(self, logger: MyLogger):
        self.logger = logger
</code></pre>
<p><strong>service.py</strong></p>
<pre><code>from logger import MyLogger


class Service():
    def __init__(self, logger: MyLogger):
        self.logger = logger
</code></pre>
<p><strong>log_config.conf</strong></p>
<pre><code>[loggers]
keys=root
[handlers]
keys=console
[formatters]
keys=std_out
[logger_root]
handlers=console
level=DEBUG
[handler_console]
class=logging.StreamHandler
level=DEBUG
formatter=std_out
[formatter_std_out]
format=%(asctime)s: %(levelname)s: %(filename)s: %(message)s
</code></pre>
<p>The issue is that if I call the <code>debug()</code> method of <code>MyLogger</code> from <code>client.py</code> or <code>service.py</code>, it prints the message, but the <code>filename</code> field always shows <code>logger.py</code>. Is there a way to override the <code>filename</code> content, or does the design need to change?</p>
| <python><logging><python-logging> | 2023-07-20 06:35:34 | 1 | 801 | arpymastro |
76,726,911 | 7,700,802 | Creating a unique json object from pandas | <p>I have this dataframe</p>
<pre><code> Year type Median_Home_Value
786252 2010 analyzed 11973.000
786253 2011 analyzed 12500.000
786254 2012 analyzed 13325.000
786255 2013 analyzed 14204.000
786256 2014 analyzed 14815.000
786257 2015 analyzed 15393.000
786258 2016 analyzed 15901.000
786259 2017 analyzed 16680.000
786260 2018 analyzed 17497.000
786261 2019 analyzed 18249.000
786262 2020 analyzed 19381.000
786263 2021 analyzed 20292.000
899389 2027 predicted 20718.132
899390 2024 predicted 21225.432
899397 2026 predicted 21103.680
899415 2025 predicted 20779.008
899481 2023 predicted 20941.344
</code></pre>
<p>I want to create a json object to represent this dataframe to be like this</p>
<pre><code>{
median_home_value: [{year: 2010, value: 11973, type: analyzed}, {year: 2011, value: 12500, type: analyzed}, ....]
}
</code></pre>
<p>I tried to do something like this</p>
<pre><code>d = {}
d['Median_Home_Value'] = test_df[['Year', 'Median_Home_Value', 'type']].to_json()
</code></pre>
<p>but it does not give me the expected result. Any suggestions are appreciated.</p>
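<p>A hedged sketch of one approach (column and key names taken from the question): <code>to_json()</code> on the frame produces a column-oriented string, whereas <code>to_dict(orient="records")</code> yields the desired list of row objects:</p>

```python
import pandas as pd

test_df = pd.DataFrame({
    "Year": [2010, 2011],
    "type": ["analyzed", "analyzed"],
    "Median_Home_Value": [11973.0, 12500.0],
})

# rename to the desired keys, then emit one dict per row
records = (test_df
           .rename(columns={"Year": "year", "Median_Home_Value": "value"})
           [["year", "value", "type"]]
           .to_dict(orient="records"))
d = {"median_home_value": records}
```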
| <python><json><pandas> | 2023-07-20 05:28:31 | 1 | 480 | Wolfy |
76,726,830 | 6,494,707 | Extracting RGB image frames from a bag file: rosbag.bag.ROSBagException: unsupported compression type: lz4 | <p>I have a piece of code with which I want to extract the RGB frames from a bag file. The code is as follows:</p>
<pre><code>import os
import argparse
import pdb
import cv2
import rosbag
from sensor_msgs.msg import Image
from cv_bridge import CvBridge
bag_file = './bag_files/20230707_152832.bag'
output_dir= './frames/rgb_bag_output'
# topic can be obtained by using the following command in terminal
# time rosbag info ./bag_files/20230707_152832.bag
image_topic = '/device_0/sensor_1/Color_0/image/data'
bag = rosbag.Bag(bag_file, "r")
bridge = CvBridge()
count = 0
for topic, msg, t in bag.read_messages(topics=[image_topic]):
cv_img = bridge.imgmsg_to_cv2(msg, desired_encoding= "rgb8")#"passthrough")
cv2.imwrite(os.path.join(output_dir, "frame%06i.png" % count), cv_img)
print("Wrote image %i" % count)
count += 1
bag.close()
</code></pre>
<p>But it seems it does not run the for loop at all. I ran the following command in terminal to get the topic info:</p>
<pre><code> time rosbag info ./bag_files/20230707_152832.bag
path: ./bag_files/20230707_152832.bag
version: 2.0
duration: 12.6s
start: Jan 01 1970 07:30:00.00 (0.00)
end: Jan 01 1970 07:30:12.58 (12.58)
size: 410.0 MB
messages: 17874
compression: lz4 [757/757 chunks; 65.32%]
uncompressed: 627.0 MB @ 49.8 MB/s
compressed: 409.6 MB @ 32.5 MB/s (65.32%)
types: diagnostic_msgs/KeyValue [cf57fdc6617a881a88c16e768132149c]
geometry_msgs/Transform [ac9eff44abf714214112b05d54a3cf9b]
realsense_msgs/StreamInfo [311d7e24eac31bb87271d041bf70ff7d]
sensor_msgs/CameraInfo [c9a58c1b0b154e0e6da7578cb991d214]
sensor_msgs/Image [060021388200f6f0f447d0fcd9c64743]
std_msgs/Float32 [73fcbf46b49191e672908e50842a83d4]
std_msgs/String [992ce8a1687cec8c8bd883ec73ca41d1]
std_msgs/UInt32 [304a39449588c7f8ce2df6e8001c5fce]
topics: /device_0/info 13 msgs : diagnostic_msgs/KeyValue
/device_0/sensor_0/Depth_0/image/data 378 msgs : sensor_msgs/Image
/device_0/sensor_0/Depth_0/image/metadata 9072 msgs : diagnostic_msgs/KeyValue
/device_0/sensor_0/Depth_0/info 1 msg : realsense_msgs/StreamInfo
/device_0/sensor_0/Depth_0/info/camera_info 1 msg : sensor_msgs/CameraInfo
/device_0/sensor_0/Depth_0/tf/0 1 msg : geometry_msgs/Transform
/device_0/sensor_0/info 2 msgs : diagnostic_msgs/KeyValue
/device_0/sensor_0/option/Asic_Temperature/description 1 msg : std_msgs/String
/device_0/sensor_0/option/Asic_Temperature/value 1 msg : std_msgs/Float32
/device_0/sensor_0/option/Depth_Units/description 1 msg : std_msgs/String
/device_0/sensor_0/option/Depth_Units/value 1 msg : std_msgs/Float32
/device_0/sensor_0/option/Emitter_Always_On/description 1 msg : std_msgs/String
/device_0/sensor_0/option/Emitter_Always_On/value 1 msg : std_msgs/Float32
/device_0/sensor_0/option/Emitter_Enabled/description 1 msg : std_msgs/String
/device_0/sensor_0/option/Emitter_Enabled/value 1 msg : std_msgs/Float32
/device_0/sensor_0/option/Emitter_On_Off/description 1 msg : std_msgs/String
/device_0/sensor_0/option/Emitter_On_Off/value 1 msg : std_msgs/Float32
/device_0/sensor_0/option/Enable_Auto_Exposure/description 1 msg : std_msgs/String
/device_0/sensor_0/option/Enable_Auto_Exposure/value 1 msg : std_msgs/Float32
/device_0/sensor_0/option/Error_Polling_Enabled/description 1 msg : std_msgs/String
/device_0/sensor_0/option/Error_Polling_Enabled/value 1 msg : std_msgs/Float32
/device_0/sensor_0/option/Exposure/description 1 msg : std_msgs/String
/device_0/sensor_0/option/Exposure/value 1 msg : std_msgs/Float32
/device_0/sensor_0/option/Frames_Queue_Size/description 1 msg : std_msgs/String
/device_0/sensor_0/option/Frames_Queue_Size/value 1 msg : std_msgs/Float32
/device_0/sensor_0/option/Gain/description 1 msg : std_msgs/String
/device_0/sensor_0/option/Gain/value 1 msg : std_msgs/Float32
/device_0/sensor_0/option/Global_Time_Enabled/description 1 msg : std_msgs/String
/device_0/sensor_0/option/Global_Time_Enabled/value 1 msg : std_msgs/Float32
/device_0/sensor_0/option/Hdr_Enabled/description 1 msg : std_msgs/String
/device_0/sensor_0/option/Hdr_Enabled/value 1 msg : std_msgs/Float32
/device_0/sensor_0/option/Inter_Cam_Sync_Mode/description 1 msg : std_msgs/String
/device_0/sensor_0/option/Inter_Cam_Sync_Mode/value 1 msg : std_msgs/Float32
/device_0/sensor_0/option/Laser_Power/description 1 msg : std_msgs/String
/device_0/sensor_0/option/Laser_Power/value 1 msg : std_msgs/Float32
/device_0/sensor_0/option/Output_Trigger_Enabled/description 1 msg : std_msgs/String
/device_0/sensor_0/option/Output_Trigger_Enabled/value 1 msg : std_msgs/Float32
/device_0/sensor_0/option/Projector_Temperature/description 1 msg : std_msgs/String
/device_0/sensor_0/option/Projector_Temperature/value 1 msg : std_msgs/Float32
/device_0/sensor_0/option/Sequence_Id/description 1 msg : std_msgs/String
/device_0/sensor_0/option/Sequence_Id/value 1 msg : std_msgs/Float32
/device_0/sensor_0/option/Sequence_Name/description 1 msg : std_msgs/String
/device_0/sensor_0/option/Sequence_Name/value 1 msg : std_msgs/Float32
/device_0/sensor_0/option/Sequence_Size/description 1 msg : std_msgs/String
/device_0/sensor_0/option/Sequence_Size/value 1 msg : std_msgs/Float32
/device_0/sensor_0/option/Stereo_Baseline/description 1 msg : std_msgs/String
/device_0/sensor_0/option/Stereo_Baseline/value 1 msg : std_msgs/Float32
/device_0/sensor_0/option/Visual_Preset/description 1 msg : std_msgs/String
/device_0/sensor_0/option/Visual_Preset/value 1 msg : std_msgs/Float32
/device_0/sensor_0/post_processing 9 msgs : std_msgs/String
/device_0/sensor_1/Color_0/image/data 378 msgs : sensor_msgs/Image
/device_0/sensor_1/Color_0/image/metadata 7938 msgs : diagnostic_msgs/KeyValue
/device_0/sensor_1/Color_0/info 1 msg : realsense_msgs/StreamInfo
/device_0/sensor_1/Color_0/info/camera_info 1 msg : sensor_msgs/CameraInfo
/device_0/sensor_1/Color_0/tf/0 1 msg : geometry_msgs/Transform
/device_0/sensor_1/info 2 msgs : diagnostic_msgs/KeyValue
/device_0/sensor_1/option/Auto_Exposure_Priority/description 1 msg : std_msgs/String
/device_0/sensor_1/option/Auto_Exposure_Priority/value 1 msg : std_msgs/Float32
/device_0/sensor_1/option/Backlight_Compensation/description 1 msg : std_msgs/String
/device_0/sensor_1/option/Backlight_Compensation/value 1 msg : std_msgs/Float32
/device_0/sensor_1/option/Brightness/description 1 msg : std_msgs/String
/device_0/sensor_1/option/Brightness/value 1 msg : std_msgs/Float32
/device_0/sensor_1/option/Contrast/description 1 msg : std_msgs/String
/device_0/sensor_1/option/Contrast/value 1 msg : std_msgs/Float32
/device_0/sensor_1/option/Enable_Auto_Exposure/description 1 msg : std_msgs/String
/device_0/sensor_1/option/Enable_Auto_Exposure/value 1 msg : std_msgs/Float32
/device_0/sensor_1/option/Enable_Auto_White_Balance/description 1 msg : std_msgs/String
/device_0/sensor_1/option/Enable_Auto_White_Balance/value 1 msg : std_msgs/Float32
/device_0/sensor_1/option/Exposure/description 1 msg : std_msgs/String
/device_0/sensor_1/option/Exposure/value 1 msg : std_msgs/Float32
/device_0/sensor_1/option/Frames_Queue_Size/description 1 msg : std_msgs/String
/device_0/sensor_1/option/Frames_Queue_Size/value 1 msg : std_msgs/Float32
/device_0/sensor_1/option/Gain/description 1 msg : std_msgs/String
/device_0/sensor_1/option/Gain/value 1 msg : std_msgs/Float32
/device_0/sensor_1/option/Gamma/description 1 msg : std_msgs/String
/device_0/sensor_1/option/Gamma/value 1 msg : std_msgs/Float32
/device_0/sensor_1/option/Global_Time_Enabled/description 1 msg : std_msgs/String
/device_0/sensor_1/option/Global_Time_Enabled/value 1 msg : std_msgs/Float32
/device_0/sensor_1/option/Hue/description 1 msg : std_msgs/String
/device_0/sensor_1/option/Hue/value 1 msg : std_msgs/Float32
/device_0/sensor_1/option/Power_Line_Frequency/description 1 msg : std_msgs/String
/device_0/sensor_1/option/Power_Line_Frequency/value 1 msg : std_msgs/Float32
/device_0/sensor_1/option/Saturation/description 1 msg : std_msgs/String
/device_0/sensor_1/option/Saturation/value 1 msg : std_msgs/Float32
/device_0/sensor_1/option/Sharpness/description 1 msg : std_msgs/String
/device_0/sensor_1/option/Sharpness/value 1 msg : std_msgs/Float32
/device_0/sensor_1/option/White_Balance/description 1 msg : std_msgs/String
/device_0/sensor_1/option/White_Balance/value 1 msg : std_msgs/Float32
/device_0/sensor_1/post_processing 1 msg : std_msgs/String
/file_version 1 msg : std_msgs/UInt32
real 0m0.220s
user 0m0.096s
sys 0m0.016s
</code></pre>
<p>Where am I making a mistake? It seems it does not read the messages at all. When I run the code in the terminal, I get the following error:</p>
<pre><code>Traceback (most recent call last):
File "grab_rgb_bag.py", line 51, in <module>
for topic, msg, t in bag.read_messages(topics=[image_topic]):
File "/home/es/anaconda3/envs/hsi-env/lib/python3.8/site-packages/rosbag/bag.py", line 2705, in read_messages
yield self.seek_and_read_message_data_record((entry.chunk_pos, entry.offset), raw, return_connection_header)
File "/home/es/anaconda3/envs/hsi-env/lib/python3.8/site-packages/rosbag/bag.py", line 2839, in seek_and_read_message_data_record
raise ROSBagException('unsupported compression type: %s' % chunk_header.compression)
rosbag.bag.ROSBagException: unsupported compression type: lz4
</code></pre>
| <python><ros><bag> | 2023-07-20 05:09:36 | 1 | 2,236 | S.EB |
76,726,769 | 2,751,433 | Save bfloat16 as binary format | <p>What is the idiomatic way of saving a <code>bfloat16</code> <code>torch.Tensor</code> to disk as raw binary? The code below will throw an error, since numpy doesn't support <code>bfloat16</code>.</p>
<pre class="lang-py prettyprint-override"><code>import torch
import numpy as np
tensor = torch.tensor([1, 2, 3, 4, 5]).bfloat16()
# TypeError: Got unsupported ScalarType BFloat16
arr = tensor.numpy()
arr.tofile("output.bin")
</code></pre>
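<p>For reference, a hedged sketch of one workaround: since numpy has no <code>bfloat16</code>, reinterpret the raw 16-bit payload as <code>int16</code> with <code>Tensor.view(dtype)</code>, write that, and reverse the reinterpretation when loading:</p>

```python
import os
import tempfile

import numpy as np
import torch

tensor = torch.tensor([1, 2, 3, 4, 5]).bfloat16()

# reinterpret the bfloat16 bit pattern as int16 (same 16-bit width)
raw = tensor.view(torch.int16).numpy()
path = os.path.join(tempfile.mkdtemp(), "output.bin")
raw.tofile(path)

# round-trip: read the int16 payload back, reinterpret as bfloat16
loaded = torch.from_numpy(np.fromfile(path, dtype=np.int16)).view(torch.bfloat16)
```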
| <python><numpy><pytorch><bfloat16> | 2023-07-20 04:54:56 | 1 | 673 | flexwang |
76,726,524 | 18,308,621 | How to take head(n) and tail(n) in one group_by with polars | <p>Here is a sample dataframe. I want to get the head(n) rows and tail(n) rows of every day, i.e. a <code>group_by("date").agg()</code> with polars. I know I could use two group_bys to get the head and tail parts and then concat them.</p>
<pre class="lang-py prettyprint-override"><code>df = pl.from_repr("""
┌────────────┬────────┬──────────────────┐
│ date ┆ symbol ┆ ts_dom2secdom_op │
│ --- ┆ --- ┆ --- │
│ date ┆ str ┆ f64 │
╞════════════╪════════╪══════════════════╡
│ 2000-01-04 ┆ AL ┆ -0.119165 │
│ 2000-01-04 ┆ RU ┆ 0.256691 │
│ 2000-01-05 ┆ AL ┆ -0.126549 │
│ 2000-01-05 ┆ RU ┆ 0.1851 │
│ 2000-01-06 ┆ CU ┆ -0.121354 │
│ 2000-01-06 ┆ RU ┆ 0.228452 │
│ 2000-01-07 ┆ AL ┆ -0.126013 │
│ 2000-01-07 ┆ RU ┆ 0.348729 │
│ 2000-01-10 ┆ AL ┆ -0.139447 │
│ 2000-01-10 ┆ RU ┆ 0.263048 │
└────────────┴────────┴──────────────────┘
""")
</code></pre>
<p>Is there some trick to do this in one group_by or with_columns or filter?</p>
| <python><python-polars> | 2023-07-20 03:38:42 | 2 | 331 | Hakase |
76,726,453 | 18,758,062 | In FastAPI, how do you return a StreamingResponse, but also do something after the streaming is done? | <p>In my FastAPI route, I need to return a <code>StreamingResponse</code> to the user as each chunk of a text sentence gets generated.</p>
<p>However, the complete text sentence needs to be inserted into a database. Is there a way to do this?</p>
<pre><code>@app.post("/foo_stream")
def foo_stream(input: Input, db: Session = Depends(get_db)):
return StreamingResponse(
generate_streaming_response(input),
media_type="text/plain",
)
# Now that the streaming is done, how do you insert the entire text into our database?
</code></pre>
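<p>One hedged sketch (the generator and persistence callback names are illustrative): wrap the chunk generator so it accumulates what it yields and runs the database insert after the final chunk; FastAPI only pulls from the generator while streaming, so code after the loop runs once streaming is done:</p>

```python
def stream_and_store(chunks, on_complete):
    # yield chunks to the client while accumulating the full text;
    # on_complete runs only after the last chunk has been produced
    parts = []
    for chunk in chunks:
        parts.append(chunk)
        yield chunk
    on_complete("".join(parts))

# stand-in for StreamingResponse consuming the generator
collected = []
streamed = "".join(stream_and_store(iter(["Hello, ", "world"]), collected.append))
```

<p>In the route this would look like <code>StreamingResponse(stream_and_store(generate_streaming_response(input), insert_into_db), media_type="text/plain")</code>, where <code>insert_into_db</code> is a hypothetical callable capturing the db session.</p>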
| <python><python-3.x><sqlalchemy><fastapi><streamingresponsebody> | 2023-07-20 03:12:36 | 0 | 1,623 | gameveloster |
76,726,433 | 11,471,385 | How to use AsyncElasticsearch client with FastAPI? | <p>I have a question: is my implementation good enough, and is there a better way to implement an AsyncElasticsearch client with FastAPI that is more controllable and easier to manage?</p>
<p>Current implementation.</p>
<p>I create a dependency in FastAPI to use AsyncElasticsearch</p>
<pre><code>async def get_es():
elastic_cred = get_elastic_cred()
es = AsyncElasticsearch(**elastic_cred)
try:
yield es
finally:
await es.close()
</code></pre>
<p>I used this depenency in Fast API endpoint like below</p>
<pre><code>@router.get("/chat/conversations")
async def get_conversations_api(
req: ConversationsQueryParams = Depends(ConversationsQueryParams),
user: User = Depends(get_user_info),
db: PgDb = Depends(get_db),
es: AsyncElasticsearch = Depends(get_es),
):
pass
</code></pre>
<p>Then whenever the API is called, a new Elasticsearch connection is created and then closed after the API call is done.</p>
<p>At this point, I suspect that the overhead of creating and closing an ES connection on every call makes this API's performance worse.</p>
<p>I have come up with another idea: use the AsyncElasticsearch client as a singleton. But I'm not sure whether that is a proper way to handle an AsyncElasticsearch connection, because I didn't find any recommendation in the docs: <a href="https://elasticsearch-py.readthedocs.io/en/v8.8.2/" rel="nofollow noreferrer">https://elasticsearch-py.readthedocs.io/en/v8.8.2/</a>.</p>
<p>Please help me evaluate this: which way is best?</p>
| <python><elasticsearch><asynchronous><fastapi> | 2023-07-20 03:08:12 | 0 | 717 | DFX Nguyễn |
76,726,404 | 2,953,544 | How to install GDAL with poetry and numpy? | <p>I'm writing an application in Python 3.11, using the GDAL Python bindings to process raster data. I have <code>libgdal-dev</code> successfully installed on my system. I'm using GDAL version 3.6.2.
When I try to read a raster Band as an array, I get the following error:</p>
<pre><code>f: Band = gdal.Open("/some/file/path.tif").GetRasterBand(1)
arr = f.ReadAsArray()
...
ImportError: cannot import name '_gdal_array' from 'osgeo'
</code></pre>
<p>I understand that this usually occurs because <code>numpy</code> needs to be installed as well. However, <code>numpy</code> is already in my pyproject.toml.
I've tried changing the order of installation; that is, I've done <code>poetry add numpy</code> before gdal, and vice versa, to no effect. I've tried installing both with <code>--no-cache</code>.
I wonder if there is something about Poetry that I'm not understanding?
Here's the dependencies section of my pyproject.toml:</p>
<pre><code>[tool.poetry.dependencies]
python = ">=3.9,<4.0"
prometheus-client = "^0.14.1"
gunicorn = "^20.0.4"
uvicorn = {extras = ["standard"], version = "^0.18.3"}
pydantic = "^2.0.3"
pydantic-settings = "^2.0.2"
ddtrace = "^1.16.1"
matplotlib = "^3.7.2"
numpy = "^1.25.1"
gdal = "3.6.2"
</code></pre>
<p>To reiterate, the problem is installing GDAL+numpy with Poetry. I know there are ways to do this easily with pip, but that is not an option for me. Thank you!</p>
| <python><gis><gdal><python-poetry> | 2023-07-20 02:58:04 | 1 | 468 | AJSmyth |
76,726,258 | 9,768,260 | How to build python package and migrate to alpine | <p>Is there a way to write a Dockerfile that first builds a local Python package in a builder container, and then copies the built package file into an Alpine container?</p>
| <python><docker><alpine-linux> | 2023-07-20 02:14:47 | 1 | 7,108 | ccd |
76,726,200 | 19,634,193 | PyTorch CNN model's inference performance depends on batch size | <p>I have a CNN model which is basically just VGG16. I train it with my own data, then use the model to run inference on the same dataset as part of the model evaluation. The code is roughly as follows:</p>
<pre><code>batch_size = 16
misclassified_count = 0
data_loader = DataLoader(dataset, batch_size=batch_size, shuffle=False)
with torch.no_grad():
# Loop through the images in batches
for batch_idx, (images, y_trues) in enumerate(data_loader):
images = images.to(device)
output = vgg6(images)
_, predicted = torch.max(output, dim=1)
for i, pred_label in enumerate(predicted):
y_true = y_trues[i].item()
pred_label = pred_label.item()
if pred_label == y_true:
continue
misclassified_count += 1
print(f'{misclassified_count} misclassified samples found')
</code></pre>
<p>The problem is that as I increase the <code>batch_size</code>, the number of misclassified samples decreases:</p>
<pre><code>batch_size=1, 2872 misclassified samples found
batch_size=2, 2133 misclassified samples found
batch_size=4, 1637 misclassified samples found
batch_size=8, 1364 misclassified samples found
batch_size=16, 1097 misclassified samples found
</code></pre>
<p>Any thoughts? Is there any flaw in my code?</p>
| <python><deep-learning><pytorch><conv-neural-network> | 2023-07-20 01:57:15 | 1 | 537 | D.J. Elkind |
76,726,134 | 1,887,101 | Python set isdst of datetime object | <p>A UTC time without an offset</p>
<pre><code>from dateutil import parser, tz
time_dst_str = '2023-06-01T15:25:00Z'
time_dst = parser.parse(time_dst_str)
</code></pre>
<p>An offset separately, along with whether or not this timezone has daylight savings time</p>
<pre><code>offset_str = '-7.00'
is_dst = True # or False
offset = float(offset_str)
time_dst_tz = time_dst.replace(tzinfo=tz.tzoffset(None, offset*60*60))
</code></pre>
<p>How can I set whether or not this timezone supports daylight saving time? I was also trying to find the current implementations of ZoneInfo classes to reference, and maybe do my own implementation, but I haven't had luck yet.</p>
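<p>A hedged sketch of one alternative, assuming Python 3.9+'s <code>zoneinfo</code> and that a named zone (rather than a bare offset) is acceptable: a named zone carries its own DST rules, so <code>dst()</code> is answered for you:</p>

```python
from datetime import datetime, timezone
from zoneinfo import ZoneInfo

time_dst = datetime(2023, 6, 1, 15, 25, tzinfo=timezone.utc)

# a named zone knows whether DST applies at this instant;
# a fixed tzoffset never can, since it is just a constant offset
local = time_dst.astimezone(ZoneInfo("America/Los_Angeles"))
offset_hours = local.utcoffset().total_seconds() / 3600
dst_active = local.dst().total_seconds() != 0
```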
| <python><datetime><timezone><python-datetime> | 2023-07-20 01:30:54 | 0 | 10,303 | Adjit |
76,725,927 | 6,101,024 | Splitting and flattening a dataframe into multiple dataframes | <p>I am trying to derive multiple dataframes from a single one, as shown below:</p>
<p>D_input:</p>
<pre><code>import pandas as pd
from numpy import nan
data = {'ID': {0: 'id1', 1: 'id1', 2: 'id1', 3: 'id1', 4: 'id1', 5: 'id1'}, 'hr': {0: 55, 1: 56, 2: 57, 3: 75, 4: 65, 5: 55}, 'hrMax': {0: nan, 1: 60.0, 2: 59.0, 3: nan, 4: 70.0, 5: 79.0}, 'hrMin': {0: nan, 1: 45.0, 2: 45.0, 3: nan, 4: 45.0, 5: 35.0}}
df = pd.DataFrame(data)
</code></pre>
<p>D_output: [D1, D2]</p>
<pre><code> ID hr_a hr_b hrMax hrMin
id1 55 56 60 45
id1 55 57 59 45
ID hr_a hr_b hrMax hrMin
id1 75 65 70 45
id1 75 55 79 35
</code></pre>
<p>I have tried</p>
<pre><code># Select the indexes where df is NaN using hrMax
index = df['hrMax'].index[df['hrMax'].apply(np.isnan)]
df_index = df.index.values.tolist()
# get each sub-dataframe using iloc
for i in range(0, len(index)) :
df_single_observation = df.iloc[df_index.index(i):df_index.index(i+1)-1]
</code></pre>
<p>but it does not work. Could I please ask for any help?
Many thanks in advance.
Best Regards.</p>
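<p>A hedged sketch of one way to do the split (assuming each NaN in <code>hrMax</code> starts a new block, and that block's first <code>hr</code> becomes <code>hr_a</code>): a cumulative sum over the NaN mask gives a block id to group by:</p>

```python
import numpy as np
import pandas as pd

data = {'ID': ['id1'] * 6,
        'hr': [55, 56, 57, 75, 65, 55],
        'hrMax': [np.nan, 60.0, 59.0, np.nan, 70.0, 79.0],
        'hrMin': [np.nan, 45.0, 45.0, np.nan, 45.0, 35.0]}
df = pd.DataFrame(data)

# every NaN row in hrMax opens a new block
block = df['hrMax'].isna().cumsum()
frames = []
for _, g in df.groupby(block):
    rest = g.iloc[1:].rename(columns={'hr': 'hr_b'})  # rows after the NaN row
    rest.insert(1, 'hr_a', g['hr'].iloc[0])           # NaN row's hr broadcast down
    frames.append(rest[['ID', 'hr_a', 'hr_b', 'hrMax', 'hrMin']].reset_index(drop=True))
```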
| <python><pandas><dataframe> | 2023-07-20 00:14:40 | 1 | 697 | Carlo Allocca |
76,725,748 | 9,809,542 | How to extend list in mysql with another list? | <p>I have a list like this in a column: ["item 1", "item 2"]. I have a second list, ["item 3, "item 4"] and I want to extend the original list to get ["item 1", "item 2", "item 3", "item 4"]. The original list is very long so I don't want to extract it, extend it, and then resend it. Using JSON_ARRAY_APPEND just gives me something like ["item 1", "item 2", "["item 3", "item 4"]"] which is not what I want.</p>
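<p>A hedged sketch of one option, assuming MySQL 8.0 (table and column names are illustrative): <code>JSON_MERGE_PRESERVE</code> concatenates two JSON arrays server-side, so the long original list never leaves the database:</p>

```python
import json

new_items = ["item 3", "item 4"]

# parameterized UPDATE; the new items travel as one JSON array literal
query = (
    "UPDATE my_table "
    "SET my_column = JSON_MERGE_PRESERVE(my_column, CAST(%s AS JSON)) "
    "WHERE id = %s"
)
params = (json.dumps(new_items), 42)
```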
| <python><mysql><json> | 2023-07-19 23:15:56 | 1 | 587 | DMcC |
76,725,608 | 14,729,820 | How to add new rows for empty data frame using pandas | <p>I have a list of images with corresponding text. After I do prediction, I want to save <code>image_name</code> with the corresponding <code>text</code> in dataframe format.
Here is the code I wrote:</p>
<pre><code>from transformers import TrOCRProcessor ,VisionEncoderDecoderModel
processor = TrOCRProcessor.from_pretrained("microsoft/trocr-base-handwritten")
model = VisionEncoderDecoderModel.from_pretrained("microsoft/trocr-base-handwritten")
def process_image(image_path):
pixel_values = processor(image_path, return_tensors="pt").pixel_values
generated_ids = model.generate(pixel_values)
generated_text = processor.batch_decode(generated_ids, skip_special_tokens=True)[0]
return generated_text
import pandas as pd
column_names = ["image_name", "text"]
df = pd.DataFrame(columns = column_names)
def add_to_df(image,generated_text):
lst_dict = []
lst_dict.append({'image_name':image, 'text':generated_text})
df2 = df.append(lst_dict)
return df2
images_1 = ["10.png", "11.png", "12.png"]
for image in images_1:
image_path = Image.open(f'{working_dir}imgs/{image}').convert("RGB")
generated_text= process_image(image_path)
add_to_df(image_path,generated_text)
</code></pre>
<p>The problem is it saves only the last row!
<a href="https://i.sstatic.net/tRLMa.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/tRLMa.png" alt="enter image description here" /></a></p>
<p>The expected result is to save all rows, like:</p>
<pre><code> image_name text
0 10.JPG need validation. She just knows
1 11.JPG text.....2
2 12.JPG text....3
.... .... .....
</code></pre>
<p><a href="https://i.sstatic.net/mvoFK.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/mvoFK.png" alt="enter image description here" /></a>
Update: I need to save the image name as a string, not as an object</p>
<pre><code># Converting data frame to jsonl
import json
reddit = df.to_dict(orient= "records")
print(type(reddit) , len(reddit))
# we have list of dict[{},{},{}]
with open(f"{working_dir}labels.jsonl","w") as f:
for line in reddit:
f.write(json.dumps(line,ensure_ascii=False) + "\n")
</code></pre>
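<p>For reference, a hedged sketch of the usual fix: <code>df.append</code> returns a new frame instead of mutating <code>df</code>, and appending to the module-level empty frame on each call leaves only one row per result. Collecting plain dicts and building the DataFrame once avoids both problems (file names here are illustrative):</p>

```python
import pandas as pd

rows = []

def add_row(image_name, generated_text):
    # store the file name as a plain string, not a PIL image object
    rows.append({"image_name": str(image_name), "text": generated_text})

for name, text in [("10.png", "need validation. She just knows"),
                   ("11.png", "text.....2"),
                   ("12.png", "text....3")]:
    add_row(name, text)

df = pd.DataFrame(rows, columns=["image_name", "text"])
```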
| <python><pandas><dataframe><image><deep-learning> | 2023-07-19 22:36:43 | 1 | 366 | Mohammed |
76,725,521 | 12,969,913 | ModuleNotFoundError: No module named 'winreg' with Adobe Photoshop API in Python on MacOS | <p>I am looking to build a program that edits text within an Adobe Photoshop file (.psd), and have found multiple libraries that will help me do this, but all seem to be limited to Windows machines. Are there any alternatives on MacOS that I haven't found yet?</p>
<p>Whenever I try use the Windows modules, I always get the error:</p>
<blockquote>
<p>ModuleNotFoundError: No module named 'winreg'.</p>
</blockquote>
<p>Any ideas on ways around this would be greatly appreciated.</p>
| <python><photoshop> | 2023-07-19 22:16:26 | 0 | 427 | benpomeroy9 |
76,725,520 | 2,015,882 | Convert to dictionary a tree like structure | <p>I have a python class <code>Event</code> which has a list of children <code>Event</code> plus some extra properties.</p>
<pre><code>class Event:
def __init__(self):
self.m_children = []
self.m_start = 0
self.m_end = np.inf
self.m_name = None
self.m_tId = None
@property
def start(self):
return self.m_start
@start.setter
def start(self, start):
self.m_start = start
@property
def end(self):
return self.m_end
@end.setter
def end(self, end):
self.m_end = end
@property
def name(self):
return self.m_name
@name.setter
def name(self, name):
self.m_name = name
@property
def thread(self):
return self.m_tId
@thread.setter
def thread(self, tId):
self.m_tId = tId
@property
def children(self):
return self.m_children
@children.setter
def children(self, children):
self.m_children = children
</code></pre>
<p>However, when I run <code>vars</code> on an <code>Event</code> object, I get this:</p>
<pre><code>{'m_children': [<__main__.Event at 0x7faa9549a750>,
<__main__.Event at 0x7faa893c9e10>,
<__main__.Event at 0x7faa893ca1d0>,
...
<__main__.Event at 0x7faa809b6450>,
<__main__.Event at 0x7faa809be690>,
<__main__.Event at 0x7faa809be790>,
...],
'm_start': 0,
'm_end': inf,
'm_name': None,
'm_tId': 1}
</code></pre>
<p>What should I do to get the whole tree as a dictionary?</p>
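<p>A hedged sketch of one way (using a minimal stand-in for the class above, with <code>float("inf")</code> in place of <code>np.inf</code>): recurse through <code>m_children</code>, calling <code>vars</code> at each level:</p>

```python
class Event:
    def __init__(self):
        self.m_children = []
        self.m_start = 0
        self.m_end = float("inf")
        self.m_name = None
        self.m_tId = None

def event_to_dict(event):
    # copy the instance attributes, then replace children with their dicts
    d = dict(vars(event))
    d["m_children"] = [event_to_dict(c) for c in event.m_children]
    return d

root, child = Event(), Event()
child.m_name = "leaf"
root.m_children.append(child)
tree = event_to_dict(root)
```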
| <python><dictionary> | 2023-07-19 22:16:05 | 1 | 1,757 | jjcasmar |
76,725,428 | 5,407,287 | Using Spark on an Iterative Process | <p>I'm new to Spark and am trying to understand the best way to set up iterative processes to run in parallel.</p>
<p>For example, if I have a DataFrame and the <a href="https://en.wikipedia.org/wiki/Collatz_conjecture" rel="nofollow noreferrer">Collatz Conjecture</a> (which says, given a function that returns <code>3n+1</code> if n is odd and <code>n/2</code> if n is even, if we repeatedly run this function, it will always eventually return 1) and I want to determine how many iterations it takes to return 1 -- I can easily write this iteratively (as that's how it's defined) like</p>
<pre><code>nums = [(x, 0) for x in range(1, 5)]
schema = ['num', 'iters']
df = spark.createDataFrame(data=nums, schema=schema)
while True:
checker = df.filter(F.col('num') != 1)
if (checker.count() == 0):
break
df = df.withColumn('num',
F.when(
F.col('num') == 1,
F.col('num')
)
.otherwise(
F.when(F.col('num') % F.lit(2) != 0,
F.col('num') * F.lit(3) + F.lit(1)
)
.otherwise(
F.col('num') / F.lit(2)
)
)
)
df = df.withColumn('iters',
F.when(F.col('num') != 1.0, F.col('iters') + F.lit(1))
.otherwise(F.col('iters'))
)
df.show()
</code></pre>
<p>Note: I know that you can do this stuff recursively and do a better job with all of this. I'm just using this as an example of iterative processes in Spark.</p>
<p>But this is really ugly and doesn't optimize on Spark. I know that loops are an antipattern in Spark -- but I'm not sure how else one goes about doing this.</p>
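<p>For reference, the per-value computation is simple in plain Python; one hedged option is to wrap such a function as a Spark UDF, so each row is computed independently instead of looping over the whole DataFrame (the UDF wiring itself is not shown):</p>

```python
def collatz_iters(n: int) -> int:
    # number of Collatz steps until n reaches 1
    iters = 0
    while n != 1:
        n = 3 * n + 1 if n % 2 else n // 2
        iters += 1
    return iters

results = {n: collatz_iters(n) for n in range(1, 5)}
```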
| <python><apache-spark> | 2023-07-19 21:53:13 | 1 | 913 | LivingRobot |
76,725,393 | 178,750 | override pyproject.toml settings on command line | <p>When building a python project using the python <a href="https://github.com/pypa/build" rel="noreferrer">build</a> module, is there a way to override settings in pyproject.toml on the command line?</p>
<p>For instance, suppose <code>pyproject.toml</code> contains:</p>
<pre><code>[build-system]
build-backend = "foo"
</code></pre>
<p>I would like to know if it is possible to override <code>foo</code> (say with <code>myfoo</code>) on the command line <code>python -m build</code>. This is just an example.</p>
<p>My question is more general than that example. Whatever upstream defined in a pyproject.toml, a downstream user/packager may want or need to override something that upstream defined for any number of reasons.</p>
| <python><pep517> | 2023-07-19 21:45:59 | 0 | 1,391 | Juan |
76,725,343 | 7,700,802 | Sagemaker Step function SDK not downloading any of the python modules in requirements.txt | <p>I define this estimator like so:</p>
<pre><code>from sagemaker.tensorflow.estimator import TensorFlow
env = {
'SAGEMAKER_REQUIREMENTS': 'requirements.txt', # path relative to `source_dir` below.
}
keras_estimator = TensorFlow(
entry_point=sm_script,
role=workflow_execution_role,
instance_count=1,
instance_type=training_instance,
dependencies=[sm_script, 'requirements.txt'],
env=env,
requirements_file='requirements.txt',
sagemaker_session=sm_sess,
framework_version="1.15.2",
base_job_name='{}-training'.format(base_name),
py_version="py3",
distribution={"parameter_server": {"enabled": True}},
metric_definitions=[
{'Name': 'validation_accuracy', 'Regex': "Belt Vision accuracy = ([0-9.]+)"},
{'Name': 'validation_f1', 'Regex': "Belt Vision f1 = ([0-9.]+)"}]
)
</code></pre>
<p>Yes, there are multiple places where I refer to <code>requirements.txt</code>, yet none of them work. For an sklearn estimator I just use the <code>dependencies</code> part. For some reason, none of my Python modules are downloaded; I cannot even see them being collected in CloudWatch. Thus I keep getting the same error in <code>sm_script</code> where I try to <code>import cv2</code>. Hence I get the following error:</p>
<pre><code>ModuleNotFoundError: No module named 'cv2'
</code></pre>
<p>Any suggestions? As requested here is my requirements.txt</p>
<pre><code>sagemaker==2.65.0
pandas==1.2.4
scikit-learn==0.23.1
awswrangler==2.12.1
boto3==1.19.1
numpy~=1.19.2
opencv-python
random
keras==2.13.1
tensorflow==2.13.0
</code></pre>
| <python><machine-learning><amazon-sagemaker><aws-step-functions> | 2023-07-19 21:37:56 | 1 | 480 | Wolfy |
76,725,326 | 10,994,244 | Logistic regression for disease detection | <p>I want to build a logistic regression model. I have 140 test results from 140 people; each test has 300 points (x, y) representing mass and intensity, and I have 140 labels for whether or not they have the disease (0, 1).</p>
<p>The 300 points for each exam look like this:</p>
<pre><code>499.871 551
499.883 552
499.896 566
499.909 514
499.921 503
499.934 462
499.946 507
499.959 467
499.972 408
499.984 419
499.997 457
500.010 469
.
.
.
</code></pre>
<p>If x,y graph looks like this:</p>
<p><a href="https://i.sstatic.net/U5kc4.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/U5kc4.png" alt="graph" /></a></p>
<p>The problem is that the examples I find of Logistic Regression have one data per variable (x, y), for example this diabetes dataset <a href="https://github.com/gonzalezgouveia/clases-youtube/blob/main/diabetes-logistic/diabetes.csv" rel="nofollow noreferrer">DATASET</a></p>
<p>In that dataset a person has a glucose value, BMI (x, y). In my problem I have 300 (x,y) values.</p>
<p>I am generating an example for my problem with random data, but I don't know if handling a three-dimensional array will solve that problem.</p>
<pre><code>import numpy as np
from sklearn.linear_model import LogisticRegression
# Training data
exams = np.random.rand(140, 300, 2) # Example of generating random data
labels = np.random.randint(2, size=140) # Example of generating random labels
# Flatten the exam data
flattened_exams = exams.reshape(140, -1)
# Create and fit the logistic regression model
model = LogisticRegression()
model.fit(flattened_exams, labels)
# Test data (example)
test_exams = np.random.rand(10, 300, 2) # Example of generating random test data
flattened_test_exams = test_exams.reshape(10, -1)
# Make predictions on the test data
predictions = model.predict(flattened_test_exams)
# Print the predictions
print("Predictions:", predictions)
</code></pre>
<p>Would it be ok to manage my training data like this? Or what other algorithm could I use to address my problem?</p>
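<p>One hedged alternative to flattening all 600 raw values: reduce each exam's 300 (x, y) points to a handful of summary features per sample before fitting, which keeps one row per person with far fewer, more meaningful columns (the feature choice here is illustrative):</p>

```python
import numpy as np

rng = np.random.default_rng(0)
exams = rng.random((140, 300, 2))  # 140 people, 300 (mass, intensity) points each

# summarize the intensity axis (column 1) per exam
intensity = exams[:, :, 1]
features = np.column_stack([
    intensity.mean(axis=1),
    intensity.std(axis=1),
    intensity.max(axis=1),
    intensity.min(axis=1),
])
```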
| <python><machine-learning><scikit-learn><logistic-regression> | 2023-07-19 21:33:14 | 0 | 828 | danielm2402 |
76,725,224 | 9,251,158 | "AttributeError: 'NoneType' object has no attribute 'find'" when converting with OpenTimelineIO | <p>I have video projects that I want to convert from Final Cut Pro to KdenLive. I found the OpenTimelineIO project and it would solve all my problems. I installed with</p>
<pre><code>$ python3 -m pip install opentimelineio
...
$ python3 -m pip show opentimelineio
Name: OpenTimelineIO
Version: 0.15.0
</code></pre>
<p>I tried the sample code provided:</p>
<pre><code>import opentimelineio as otio
timeline = otio.adapters.read_from_file("/path/to/file.fcpxml")
for clip in timeline.find_clips():
print(clip.name, clip.duration())
</code></pre>
<p>and get the error:</p>
<pre><code> File "~/Library/Python/3.8/lib/python/site-packages/opentimelineio_contrib/adapters/fcpx_xml.py", line 998, in _format_id_for_clip
resource = self._compound_clip_by_id(
AttributeError: 'NoneType' object has no attribute 'find'
</code></pre>
<p>The offending lines are:</p>
<pre><code> if resource is None:
resource = self._compound_clip_by_id(
clip.get("ref")
).find("sequence")
</code></pre>
<p>so I monkey-patched the source code around that line to show information about the error:</p>
<pre><code> if resource is None:
print("Clip: ", clip)
print("Clip dir: ", dir(clip))
tmp_ = clip.get("ref")
print("Clip.get('href'): ", tmp_)
tmp2_ = self._compound_clip_by_id(
clip.get("ref")
)
print("Compound clip by id: ", tmp2_)
resource = self._compound_clip_by_id(
clip.get("ref")
).find("sequence")
</code></pre>
<p>and I get:</p>
<pre><code>Clip: <Element 'title' at 0x1092c3c20>
Clip dir: ['__class__', '__copy__', '__deepcopy__', '__delattr__', '__delitem__', '__dir__', '__doc__', '__eq__', '__format__', '__ge__', '__getattribute__', '__getitem__', '__getstate__', '__gt__', '__hash__', '__init__', '__init_subclass__', '__le__', '__len__', '__lt__', '__ne__', '__new__', '__reduce__', '__reduce_ex__', '__repr__', '__setattr__', '__setitem__', '__setstate__', '__sizeof__', '__str__', '__subclasshook__', 'append', 'attrib', 'clear', 'extend', 'find', 'findall', 'findtext', 'get', 'getchildren', 'getiterator', 'insert', 'items', 'iter', 'iterfind', 'itertext', 'keys', 'makeelement', 'remove', 'set', 'tag', 'tail', 'text']
Clip.get('href'): r2
Compound clip by id: None
</code></pre>
<p>The offending clip in the Final Cut Pro XML is the first in the project, a seemingly innocent title:</p>
<pre><code> <title ref="r2" offset="0s" name="Atletismo Emocional - Basic Title" start="8996800/2500s" duration="56200/2500s">
<param name="Position" key="9999/999166631/999166633/1/100/101" value="1.24609 219.013"/>
<param name="Flatten" key="9999/999166631/999166633/2/351" value="1"/>
<param name="Alignment" key="9999/999166631/999166633/2/354/3001383869/401" value="1 (Center)"/>
<param name="Alignment" key="9999/999166631/999166633/2/354/3001383870/401" value="1 (Center)"/>
<param name="Alignment" key="9999/999166631/999166633/2/354/3001383871/401" value="1 (Center)"/>
<param name="Alignment" key="9999/999166631/999166633/2/354/3001392281/401" value="1 (Center)"/>
<param name="Alignment" key="9999/999166631/999166633/2/354/3001484853/401" value="1 (Center)"/>
<param name="Alignment" key="9999/999166631/999166633/2/354/999169573/401" value="1 (Center)"/>
<text>
<text-style ref="ts1">Atletismo Emocional
</text-style>
<text-style ref="ts2">para crianças</text-style>
<text-style ref="ts3">
</text-style>
<text-style ref="ts4">
</text-style>
<text-style ref="ts3">Nº4: Se tens uma emoção, é porque
alguma coisa é importante para ti.</text-style>
</text>
<text-style-def id="ts1">
<text-style font="Helvetica" fontSize="160" fontFace="Regular" fontColor="1 0.999974 0.999991 1" alignment="center"/>
</text-style-def>
<text-style-def id="ts2">
<text-style font="Helvetica" fontSize="100" fontFace="Regular" fontColor="1 0.999974 0.999991 1" alignment="center"/>
</text-style-def>
<text-style-def id="ts3">
<text-style font="Helvetica" fontSize="80" fontFace="Regular" fontColor="1 0.999974 0.999991 1" alignment="center"/>
</text-style-def>
<text-style-def id="ts4">
<text-style font="Helvetica" fontSize="110" fontFace="Regular" fontColor="1 0.999974 0.999991 1" alignment="center"/>
</text-style-def>
<asset-clip ref="r3" lane="-1" offset="1295588471/360000s" name="GinjaGenerico2" duration="16476588/720000s" format="r4" audioRole="dialogue">
<adjust-volume>
<param name="amount">
<fadeIn type="easeIn" duration="200428/720000s"/>
<fadeOut type="easeIn" duration="1080077/720000s"/>
</param>
</adjust-volume>
</asset-clip>
</title>
</code></pre>
<p>If I delete this clip, I get a similar error with other clips too:</p>
<pre><code> <ref-clip ref="r14" lane="2" offset="49400/2500s" name="wave" start="55602547/2400000s" duration="5200/2500s" useAudioSubroles="1">
<conform-rate srcFrameRate="23.98"/>
<timeMap>
<timept time="28799771/1280000s" value="28799771/8000s" interp="smooth2"/>
<timept time="27088061/960000s" value="27088061/6000s" interp="smooth2"/>
</timeMap>
<adjust-transform position="0.222208 0.124992" scale="1.01973 1.01973"/>
</ref-clip>
</code></pre>
<p>How can I fix the XML or the OpenTimelineIO code to convert this project? Should I submit a bug report to the GitHub repo?</p>
<h2>update</h2>
<p><a href="https://www.dropbox.com/scl/fi/7xg18lh4a6ehy9k3f442a/Info.fcpxml?rlkey=v0o9b59lhn1mgl7qxnuw6tzsy&dl=0" rel="nofollow noreferrer">Here</a> is a link to a complete FCP XML file that causes this error.</p>
| <python><finalcut><kdenlive> | 2023-07-19 21:12:14 | 1 | 4,642 | ginjaemocoes |
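The traceback reduces to a chained call on `None`: `_compound_clip_by_id()` returns `None` because `ref="r2"` points at a title-effect resource rather than a compound clip, and the adapter immediately calls `.find("sequence")` on the result. Until the adapter handles that resource type (worth reporting on the GitHub repo), a `None`-guard at the failing line avoids the `AttributeError`. A standalone sketch of the guard — the helper name and fallback behavior are my own, not OTIO API:

```python
import xml.etree.ElementTree as ET

def find_sequence(resource):
    # Mirror of the adapter's failing `resource.find("sequence")`, but a
    # failed resource lookup yields None instead of raising AttributeError.
    return resource.find("sequence") if resource is not None else None

# A compound-clip-like element resolves normally...
media = ET.fromstring("<media><sequence duration='10s'/></media>")
print(find_sequence(media).get("duration"))  # 10s
# ...while a missing resource no longer crashes.
print(find_sequence(None))  # None
```

Patching the installed `fcpx_xml.py` this way only papers over the symptom — the title's and ref-clip's resources still won't be converted — so a bug report with the linked fcpxml attached is the right move.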
76,724,939 | 5,508,752 | There is no such driver by URL https://chromedriver.storage.googleapis.com/LATEST_RELEASE_115.0.5790 error with Python webdrivermanager & Chrome 115.0 | <p>I recently updated my Google Chrome browser to version <em>115.0.5790.99</em> and I'm using the Python <code>webdriver-manager</code> library (version 3.8.6) for Chrome driver management.</p>
<p>However, since this update, when I call the <code>ChromeDriverManager().install()</code> function, I encounter the following error:</p>
<blockquote>
<p>There is no such driver by URL <a href="https://chromedriver.storage.googleapis.com/LATEST_RELEASE_115.0.5790" rel="noreferrer">https://chromedriver.storage.googleapis.com/LATEST_RELEASE_115.0.5790</a></p>
</blockquote>
<p>Steps to reproduce the issue:</p>
<ul>
<li>Update Google Chrome to version 115.0.5790.99.</li>
</ul>
<p>Execute the following Python code:</p>
<pre><code>from webdriver_manager.chrome import ChromeDriverManager
driver_path = ChromeDriverManager().install()
</code></pre>
<p>capture:</p>
<p><img src="https://i.sstatic.net/TzVsD.png" alt="exception catched" /></p>
| <python><python-3.x><google-chrome><selenium-webdriver><selenium-chromedriver> | 2023-07-19 20:22:50 | 10 | 313 | Christian Rubio |
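The 404 is expected: from Chrome milestone 115 onward, chromedriver binaries are published through the Chrome-for-Testing endpoints instead of `chromedriver.storage.googleapis.com`, and webdriver-manager 3.8.6 only knows the old bucket. Upgrading to `webdriver-manager` >= 4.0, or to Selenium >= 4.6 (whose bundled Selenium Manager resolves the driver for a plain `webdriver.Chrome()`), resolves it. The sketch below only illustrates the endpoint split, with a helper name of my own:

```python
# The two publicly documented "latest release" bases; the switch-over
# happened at Chrome milestone 115.
CFT_BASE = "https://googlechromelabs.github.io/chrome-for-testing"
OLD_BASE = "https://chromedriver.storage.googleapis.com"

def latest_release_url(major_version: int) -> str:
    # Hypothetical helper: pick the endpoint that serves this milestone.
    base = CFT_BASE if major_version >= 115 else OLD_BASE
    return f"{base}/LATEST_RELEASE_{major_version}"

print(latest_release_url(114))  # old storage bucket, still valid pre-115
print(latest_release_url(115))  # Chrome-for-Testing endpoint
```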
76,724,895 | 5,423,080 | Extract data from a log file and put into a Pandas DataFrame | <p>I have a log file with 10,000 or more lines; some of them have this structure:</p>
<pre><code>"2023-07-19 13:38:45,220 INFO Type:type: ('rate',), {'value': 123, 'unit': 'Count/Second', 'id': 'ABC123', 'name': 'London'}\n"
</code></pre>
<p>I would like to extract these info and put in a <code>pandas.DataFrame</code>.</p>
<p>This is my initial code:</p>
<pre class="lang-py prettyprint-override"><code>import ast
import pandas as pd
import re
infile = "./log_file.log"
with open(infile) as f:
lines = f.readlines()
df = pd.DataFrame(columns=["time", "type", "value", "unit", "id", "name"])
for line in lines:
if "type" in line:
value = ast.literal_eval(re.search('{(.*)}', line).group(0))
value["time"] = line.split("INFO")[0][:-1]
value["type"] = re.search(r"\((.*)\)", line).group(1)[:-1]
df = df.append(value, ignore_index=True)
</code></pre>
<p>so to have a dataframe like this:</p>
<pre class="lang-none prettyprint-override"><code> time type value unit id name
0 2023-07-19 13:38:45,220 rate 123 Count/Second ABC123 London
</code></pre>
<p>but the <code>for</code> loop takes ages to go through the whole file.</p>
<p>Any suggestion how to optimise it?</p>
| <python><pandas><regex> | 2023-07-19 20:15:33 | 2 | 412 | cicciodevoto |
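Two things dominate the runtime: the per-line `ast.literal_eval`/`re.search` calls, and `df.append` inside the loop, which copies the entire frame on every iteration (and was removed in pandas 2.x). A single vectorized `str.extract` with named capture groups processes all lines in one pass. The regex below is fitted to the sample line shown above, so it will need adjusting if other field layouts occur:

```python
import pandas as pd

# One pattern, one named group per output column.
pattern = (
    r"^(?P<time>\S+ \S+) INFO Type:type: \('(?P<type>\w+)',\), "
    r"\{'value': (?P<value>\d+), 'unit': '(?P<unit>[^']+)', "
    r"'id': '(?P<id>[^']+)', 'name': '(?P<name>[^']+)'\}"
)

def parse_log(lines):
    s = pd.Series(lines)
    df = s[s.str.contains("type")].str.extract(pattern)
    df["value"] = pd.to_numeric(df["value"])
    return df.reset_index(drop=True)

sample = ("2023-07-19 13:38:45,220 INFO Type:type: ('rate',), "
          "{'value': 123, 'unit': 'Count/Second', 'id': 'ABC123', 'name': 'London'}")
print(parse_log([sample]))
```

For the real file, feed it `open(infile).read().splitlines()`; rows that contain "type" but don't match the pattern come back as NaN and can be dropped with `dropna()`.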
76,724,785 | 726,730 | How to remove the empty space at the bottom of QPlainTextEdit | <pre class="lang-py prettyprint-override"><code>from PyQt5 import QtCore, QtGui, QtWidgets
class Ui_MainWindow(object):
def setupUi(self, MainWindow):
MainWindow.setObjectName("MainWindow")
MainWindow.resize(800, 130)
self.centralwidget = QtWidgets.QWidget(MainWindow)
self.centralwidget.setObjectName("centralwidget")
self.gridLayout = QtWidgets.QGridLayout(self.centralwidget)
self.gridLayout.setObjectName("gridLayout")
self.plainTextEdit = QtWidgets.QPlainTextEdit(self.centralwidget)
self.plainTextEdit.setStyleSheet("padding:0px;")
self.plainTextEdit.setObjectName("plainTextEdit")
self.gridLayout.addWidget(self.plainTextEdit, 0, 0, 1, 1)
MainWindow.setCentralWidget(self.centralwidget)
self.menubar = QtWidgets.QMenuBar(MainWindow)
self.menubar.setGeometry(QtCore.QRect(0, 0, 800, 21))
self.menubar.setObjectName("menubar")
MainWindow.setMenuBar(self.menubar)
self.statusbar = QtWidgets.QStatusBar(MainWindow)
self.statusbar.setObjectName("statusbar")
MainWindow.setStatusBar(self.statusbar)
self.retranslateUi(MainWindow)
QtCore.QMetaObject.connectSlotsByName(MainWindow)
def retranslateUi(self, MainWindow):
_translate = QtCore.QCoreApplication.translate
MainWindow.setWindowTitle(_translate("MainWindow", "MainWindow"))
self.plainTextEdit.setPlainText(_translate("MainWindow", "1\n"
"1\n"
"1\n"
"1\n"
"1\n"
"1\n"
"1\n"
"1\n"
"1\n"
"1\n"
"1\n"
"1\n"
"last line."))
if __name__ == "__main__":
import sys
app = QtWidgets.QApplication(sys.argv)
MainWindow = QtWidgets.QMainWindow()
ui = Ui_MainWindow()
ui.setupUi(MainWindow)
MainWindow.show()
sys.exit(app.exec_())
</code></pre>
<p>Result:</p>
<p><a href="https://i.sstatic.net/LIs07.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/LIs07.png" alt="enter image description here" /></a></p>
<p>Some work-arounds:</p>
<pre class="lang-py prettyprint-override"><code> #self.main_self.ui_scheduled_transmitions_create_window.review_text.setPlainText("")
self.main_self.ui_scheduled_transmitions_create_window.review_text.setPlainText(review_text)
self.main_self.ui_scheduled_transmitions_create_window.review_text.setDocumentTitle("123")
#if self.main_self.ui_scheduled_transmitions_create_window.review_text.height()<800:
# self.main_self.ui_scheduled_transmitions_create_window.review_text.setFixedHeight(self.main_self.ui_scheduled_transmitions_create_window.review_text.height())
#else:
# self.main_self.ui_scheduled_transmitions_create_window.review_text.setFixedHeight(800)
self.main_self.ui_scheduled_transmitions_create_window.review_text.document().setDocumentMargin(0)
#self.main_self.ui_scheduled_transmitions_create_window.review_text.setCenterOnScroll(False)
m = self.main_self.ui_scheduled_transmitions_create_window.review_text.fontMetrics()
RowHeight = m.lineSpacing()
nRows = self.main_self.ui_scheduled_transmitions_create_window.review_text.document().blockCount()
text_height = RowHeight*nRows
document_format = self.main_self.ui_scheduled_transmitions_create_window.review_text.document().rootFrame().frameFormat()
document_format.setBottomMargin(0)
document_format.setHeight(text_height)
self.main_self.ui_scheduled_transmitions_create_window.review_text.document().rootFrame().setFrameFormat(document_format)
scrollBarHeight=self.main_self.ui_scheduled_transmitions_create_window.review_text.horizontalScrollBar().sizeHint().height()
#scrollBarHeight=0
if text_height+scrollBarHeight>800:
self.main_self.ui_scheduled_transmitions_create_window.review_text.setFixedHeight(600)
else:
self.main_self.ui_scheduled_transmitions_create_window.review_text.setFixedHeight(text_height+scrollBarHeight)
</code></pre>
| <python><qt><pyqt5><qplaintextedit> | 2023-07-19 19:54:41 | 1 | 2,427 | Chris P |
76,724,764 | 1,811,073 | How to override list-type arguments that have default values | <p>If there's an <code>ArgumentParser</code> defined like this:</p>
<pre><code>parser.add_argument("--list", type=str, nargs="+", default=["arg"])
args = parser.parse_args()
</code></pre>
<p>the <code>list</code> type arguments can have their default values overridden by setting the argument to an empty string:</p>
<p><code>python test.py --list ""</code></p>
<p>and adding logic to filter <code>list</code> type args for <code>None</code>:</p>
<pre><code>for arg, value in args._get_kwargs():
if isinstance(value, list):
setattr(args, arg, list(filter(None, value)))
</code></pre>
<p>Is there anyway to do this without accessing the private method of <code>argparse.Namespace</code>...ideally only changing what's passed from the command-line?</p>
| <python><argparse> | 2023-07-19 19:49:41 | 1 | 876 | aweeeezy |
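`vars(args)` is the documented, public way to view a `Namespace` as a dict, so the same filtering works without touching `_get_kwargs`. A runnable sketch (argv is passed explicitly to simulate `python test.py --list ""`):

```python
import argparse

parser = argparse.ArgumentParser()
parser.add_argument("--list", type=str, nargs="+", default=["arg"])

# Simulate `python test.py --list ""` by passing argv explicitly.
args = parser.parse_args(["--list", ""])

# vars() exposes the Namespace's attributes as a plain dict.
for name, value in vars(args).items():
    if isinstance(value, list):
        setattr(args, name, [v for v in value if v])

print(args.list)  # []
```

If the goal is simply "allow an empty list from the command line", `nargs="*"` also lets `--list` be given with zero values, yielding `[]` directly without the empty-string sentinel.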
76,724,620 | 4,835,204 | Python NumPy - how to limit array addition to max of dtype without conversion to larger dtype? | <p>Suppose I have 2 <code>uint8</code> arrays like this:</p>
<pre><code>arr1 = np.array([50, 100, 150, 200, 250], dtype=np.uint8)
arr2 = np.array([50, 100, 150, 200, 250], dtype=np.uint8)
</code></pre>
<p>If I do array addition my intended result is:</p>
<pre><code>[100 200 255 255 255]
</code></pre>
<p>i.e. when values get to the max value and then overflow, I'd like to keep the max value instead. Simply doing addition does not work due to this overflow concern:</p>
<pre><code>import numpy as np
arr1 = np.array([50, 100, 150, 200, 250], dtype=np.uint8)
arr2 = np.array([50, 100, 150, 200, 250], dtype=np.uint8)
result = arr1 + arr2
print(result)
</code></pre>
<p>result (not as desired due to overflow for the last 3 numbers):</p>
<pre><code>[100 200 44 144 244]
</code></pre>
<p>The easy around this is to convert to a larger datatype, so in this case <code>uint16</code>, then convert back like so:</p>
<pre><code>arr1 = np.array([50, 100, 150, 200, 250], dtype=np.uint8)
arr2 = np.array([50, 100, 150, 200, 250], dtype=np.uint8)
arr1 = arr1.astype(np.uint16)
arr2 = arr2.astype(np.uint16)
result = np.where(arr1 + arr2 > 255, 255, arr1 + arr2).astype(np.uint8)
</code></pre>
<p>This gives the intended result:</p>
<pre><code>[100 200 255 255 255]
</code></pre>
<p>But my question is, is there a more efficient way to do this that does not involve converting to the next larger dtype?</p>
| <python><numpy> | 2023-07-19 19:26:00 | 0 | 3,840 | cdahms |
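There is a way to saturate without ever leaving `uint8`: compute each element's headroom below 255 and add at most that much. Since `min(255 - a, b)` can never push the sum past 255, no intermediate wraps. (This is a generic trick rather than a dedicated NumPy API; if OpenCV happens to be available, `cv2.add` performs the same saturating addition natively.)

```python
import numpy as np

def saturating_add_u8(a, b):
    # Headroom left before the uint8 ceiling; a + min(headroom, b) <= 255,
    # so no intermediate value ever wraps around.
    headroom = np.uint8(255) - a
    return a + np.minimum(headroom, b)

arr1 = np.array([50, 100, 150, 200, 250], dtype=np.uint8)
arr2 = np.array([50, 100, 150, 200, 250], dtype=np.uint8)
result = saturating_add_u8(arr1, arr2)
print(result)        # [100 200 255 255 255]
print(result.dtype)  # uint8
```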
76,724,528 | 1,473,517 | Is there a fast convolution that works with large numbers? | <p>I am doing convolutions of a lot of reasonably long arrays. The values in them are also big although they fit within the scope of float64. Unfortunately, scipy's <a href="https://docs.scipy.org/doc/scipy/reference/generated/scipy.signal.convolve.html" rel="nofollow noreferrer">convolve</a> gives the wrong answer when it uses the fft method. Here is a MWE:</p>
<pre><code>from scipy.signal import convolve
import numpy as np
A = np.array([0] + [1e100]*10000)
convolve(A, A)
</code></pre>
<p>This gives the wrong output of:</p>
<pre><code>array([ 9.15304445e+187, -7.04080342e+187, 1.00000000e+200, ...,
3.00000000e+200, 2.00000000e+200, 1.00000000e+200])
</code></pre>
<p>It is fast though:</p>
<pre><code>%timeit convolve(A, A)
458 µs ± 5.09 µs per loop (mean ± std. dev. of 7 runs, 1,000 loops each)
</code></pre>
<p>You can get the right answer with:</p>
<pre><code>convolve(A, A, method="direct")
array([0.e+000, 0.e+000, 1.e+200, ..., 3.e+200, 2.e+200, 1.e+200])
</code></pre>
<p>However this is much slower:</p>
<pre><code>%timeit convolve(A, A, method="direct")
23.4 ms ± 511 µs per loop (mean ± std. dev. of 7 runs, 10 loops each)
</code></pre>
<p>Is there any way to get the right answer quickly? I am happy to use any library that is freely available.</p>
<hr />
<p>I get the same problem with less extreme inputs sizes:</p>
<pre><code>A = np.array([0] + [1e10]*10000)
convolve(A, A)
array([1.96826156e+08, 1.35742176e+08, 1.00000000e+20, ...,
3.00000000e+20, 2.00000000e+20, 1.00000000e+20])
</code></pre>
<h1>Bounty</h1>
<p>The bounty is for the less extreme example of</p>
<pre><code> A = np.array([0] + [1e10]*10000)
</code></pre>
<p>My input arrays will always be made of integers</p>
| <python><scipy><convolution> | 2023-07-19 19:07:45 | 3 | 21,513 | Simd |
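Because the inputs are integers times a common scale, you can factor the scale out, FFT-convolve the small integers (exact after rounding, since the true convolution values stay far below 2^53), and reapply the scale afterwards — convolution is bilinear, so scaling both inputs by `s` scales the output by `s²`. A plain-NumPy sketch, under the assumption that every element really is `scale` times a modest integer:

```python
import numpy as np

def convolve_scaled_ints(a, scale):
    # a == scale * b with b made of small integers: convolve b exactly,
    # then reapply scale**2 (convolution is bilinear in its inputs).
    b = np.asarray(a) / scale
    n = 2 * len(b) - 1
    nfft = 1 << (n - 1).bit_length()        # next power of two >= n
    c = np.fft.irfft(np.fft.rfft(b, nfft) ** 2, nfft)[:n]
    return np.rint(c) * scale ** 2          # rounding removes FFT noise

A = np.array([0] + [1e10] * 10000)
r = convolve_scaled_ints(A, 1e10)
print(r[2], r[-2], r[-3])  # 1e+20 2e+20 3e+20
```

This keeps the FFT speed while matching the exact answers of `method="direct"`. For inputs that don't share a single scale, the same idea extends by splitting each integer into high and low parts and convolving the parts separately.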
76,724,383 | 866,333 | requests claims self-signed cert has an IP address mismatch when server enforces https | <p>This is a test in my CI pipeline that began failing in the last 3 months.</p>
<p>I've checked and double checked that the certificate is good. Here is the result of <code>openssl x509 -text -noout -in the-cert.crt</code>:</p>
<pre><code>Certificate:
Data:
Version: 3 (0x2)
Serial Number:
<redacted>
Signature Algorithm: ecdsa-with-SHA256
Issuer: CN = 172.17.0.2
Validity
Not Before: Jul 19 16:51:57 2023 GMT
Not After : Jul 18 16:51:57 2024 GMT
Subject: CN = 172.17.0.2
Subject Public Key Info:
Public Key Algorithm: id-ecPublicKey
Public-Key: (256 bit)
pub:
<redacted>
ASN1 OID: prime256v1
NIST CURVE: P-256
X509v3 extensions:
X509v3 Subject Key Identifier:
<redacted>
X509v3 Authority Key Identifier:
<redacted>
X509v3 Basic Constraints: critical
CA:TRUE
Signature Algorithm: ecdsa-with-SHA256
Signature Value:
<redacted>
</code></pre>
<p>I need the following to pass:</p>
<pre class="lang-py prettyprint-override"><code>res2 = requests.get('http://172.17.0.2/index.html', timeout=15, verify="/home/mrx/appx/the-cert.crt")
</code></pre>
<p>Instead I get the error:</p>
<pre class="lang-py prettyprint-override"><code>requests.exceptions.SSLError: HTTPSConnectionPool(host='172.17.0.2', port=443): Max retries exceeded with url: /index.html (Caused by SSLError(SSLCertVerificationError(1, "[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: IP address mismatch, certificate is not valid for '172.17.0.2'. (_ssl.c:997)")))
</code></pre>
<p>Port 443 was bugging me. It doesn't change if I specify "https" but obviously I had port 80 being forwarded to 443 in my server configuration.</p>
<p>This works from the shell:</p>
<pre><code>curl https://172.17.0.2/index.html --cacert the-cert.crt
</code></pre>
<p>The plain http version returns a 301 redirect. If I comment out the <code>listen 443</code> in the server block, everything (including the <code>request</code>) works, but distorts the original test.</p>
<p>Any ideas for a neat (other than try-catch-branch) fix?</p>
<p>requests 2.31.0, python 3.10.6, (ubuntu 22.04)</p>
<p>I tried running nginx on Debian from 2022, bullseye-slim, right up to current bookworm. Bookworm provided nginx 1.22.1.</p>
| <python><ssl> | 2023-07-19 18:45:14 | 0 | 6,796 | John |
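The `openssl x509 -text` dump above shows the cause: the certificate carries only a Common Name (`CN = 172.17.0.2`) and no `subjectAltName` extension. Python's `ssl` module ignores CN entirely for host/IP matching (curl is more lenient in many builds, which is why the shell check passes), so any cert lacking an IP SAN fails exactly like this — and a regenerated cert is the likely reason the test "began failing" without code changes. Re-issuing the self-signed cert with an IP SAN fixes it; one way with OpenSSL >= 1.1.1 (file names, key type, and validity are illustrative):

```shell
# Self-signed cert whose SAN carries the server IP, so Python's
# certificate verification has something it is allowed to match.
openssl req -x509 -newkey rsa:2048 -nodes -sha256 -days 365 \
  -keyout the-cert.key -out the-cert.crt \
  -subj "/CN=172.17.0.2" \
  -addext "subjectAltName=IP:172.17.0.2"

# Confirm the extension is present:
openssl x509 -in the-cert.crt -noout -ext subjectAltName
```

With the SAN in place, request `https://172.17.0.2/...` directly and the http-to-https 301 stops mattering.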
76,724,321 | 10,415,970 | How to autopopulate fields on Pydantic model in Pydantic V2? | <p>Say I have a model like this:</p>
<pre class="lang-py prettyprint-override"><code>from pydantic import BaseModel
class MyModel(BaseModel):
url: str
id: str
</code></pre>
<p>I'd like to autofill the <code>id</code> field using data from the url.</p>
<pre class="lang-py prettyprint-override"><code>url = 'https://www.example.com/open-requests/myid123'
instance = MyModel(url=url)
print(instance.id) # 'myid123'
</code></pre>
<p>If I was using dataclasses, normally I'd just be able to overwrite the <code>__post_init__</code> method and could do some extra stuff there. (<strong>Note</strong>: This method is still not optimal)</p>
<pre class="lang-py prettyprint-override"><code>from dataclasses import dataclass
@dataclass
class MyModel:
url: str
id: str = None
def __post_init__(self):
self.id = self.url.split('/')[-1]
</code></pre>
<p>I'd like to <strong>not</strong> have to assign the <code>id</code> fields type to <code>str | None = None</code>.</p>
<p>How can I accomplish this?</p>
| <python><pydantic> | 2023-07-19 18:35:31 | 2 | 4,320 | Zack Plauché |
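In Pydantic v2, a "before" model validator sees the raw input before field validation runs, so it can fill `id` while the field keeps its plain `str` annotation (no `| None = None`). A sketch, assuming the id is always the last path segment of the URL:

```python
from pydantic import BaseModel, model_validator

class MyModel(BaseModel):
    url: str
    id: str

    @model_validator(mode="before")
    @classmethod
    def _derive_id(cls, data):
        # Runs on the raw input dict, before `id` is required-checked.
        if isinstance(data, dict) and "id" not in data and "url" in data:
            data = {**data, "id": str(data["url"]).rstrip("/").rsplit("/", 1)[-1]}
        return data

instance = MyModel(url="https://www.example.com/open-requests/myid123")
print(instance.id)  # myid123
```

If `id` never needs to be supplied directly, a `@computed_field` property deriving it from `url` is the other idiomatic v2 option.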
76,724,084 | 12,320,257 | Exporting alerts json data using Grafana API in Grafana v10 | <p>I am using Grafana Cloud v10.0.3 and could not find any way to export alerts JSON data. I tried to write a Python script to access the alerts JSON data, but it is not showing any output. Below is the Python script:</p>
<pre><code>import requests
import json
# Replace this with your actual token
token = "api token"
grafana_url = "https://domain.grafana.net"
alert_name = "NetworkDevice Down"  # alert name
headers = {
"Authorization": f"Bearer {token}",
"Accept": "application/json",
"Content-Type": "application/json"
}
response = requests.get(f"{grafana_url}/api/alerts", headers=headers)
if response.status_code == 200:
alerts = response.json()
for alert in alerts:
if alert['name'] == alert_name:
print(json.dumps(alert, indent=4))
else:
print(f"Request failed with status code {response.status_code}")
</code></pre>
<p>Any idea what I am doing wrong? Thanks</p>
| <python><grafana><grafana-alerts><grafana-api> | 2023-07-19 18:02:43 | 1 | 830 | Faisal Shani |
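On a Grafana 9/10 stack, `/api/alerts` is the legacy (pre-unified-alerting) endpoint and typically returns an empty list, which matches the "no output" symptom. Unified alert rules are served by the provisioning API instead, and their display name lives in the `title` field rather than `name`. A sketch — the base URL and token are placeholders, and it assumes the service-account token has permission to read alert rules:

```python
import json
import requests

def fetch_alert_rules(base_url: str, token: str) -> list:
    # Grafana 9+ unified alerting: provisioned rule definitions as JSON.
    resp = requests.get(
        f"{base_url}/api/v1/provisioning/alert-rules",
        headers={"Authorization": f"Bearer {token}"},
        timeout=15,
    )
    resp.raise_for_status()  # surfaces auth errors instead of silence
    return resp.json()

def rules_titled(rules: list, title: str) -> list:
    # Unified alert rules carry their display name in "title", not "name".
    return [r for r in rules if r.get("title") == title]

# rules = fetch_alert_rules("https://domain.grafana.net", "api token")
# print(json.dumps(rules_titled(rules, "NetworkDevice Down"), indent=4))
```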
76,723,988 | 5,666,203 | Orekit killing python3 on M2 (Mac) via Anaconda | <p>I know that Anaconda has an easy install command for Orekit, but something's not working out. It seems this is par for the course based on <a href="https://forum.orekit.org/t/orekit-python-wrapper-on-mac-m2/2571/4" rel="nofollow noreferrer">this Orekit discussion</a>. Here are the steps I've followed:</p>
<ol>
<li><p>Download / install Anaconda via <a href="https://www.anaconda.com/download#macos" rel="nofollow noreferrer">the installer</a> using the "Download for Mac (M1/M2) option. Wait for completion.</p>
</li>
<li><p>Edit the <code>~/.zshrc</code> file using <a href="https://stackoverflow.com/questions/22773432/mac-using-default-python-despite-anaconda-install#:%7E:text=Update%20for%20all,So%20instead%20do%3A">this script</a> to point the <code>python3</code> command to the Anaconda distribution.</p>
</li>
<li><p>Validate Conda being opened when <code>python3</code> is executed in terminal:</p>
<p><code>>>> python3</code></p>
<p><code>Python 3.11.0 | packaged by conda-forge | (main, Jan 14 2023, 12:25:12) [Clang 14.0.6 ] on darwin</code></p>
</li>
<li><p>Install Orekit using the link from <a href="https://anaconda.org/conda-forge/orekit" rel="nofollow noreferrer">conda-forge</a> (conda install -c conda-forge orekit). Wait to complete.</p>
</li>
<li><p>Re-run Anaconda via the terminal, and import Orekit (<code>import orekit</code>)</p>
</li>
</ol>
<p>When I do so, the following text is returned:</p>
<pre><code>>>> import orekit
zsh: killed python3
</code></pre>
<p>I'm confused, to say the least. Any suggestions are appreciated, considering this seems like a pretty straight-forward task. Per requests from the comments...</p>
<p>Here is the return from <code>conda info</code>:</p>
<pre><code>(python3p8) apung@MacBookPro-MTXQM1Y6PM ~ % conda info
active environment : python3p8
active env location : /Users/apung/anaconda3/envs/python3p8
shell level : 2
user config file : /Users/apung/.condarc
populated config files : /Users/apung/.condarc
conda version : 23.5.2
conda-build version : 3.25.0
python version : 3.11.3.final.0
virtual packages : __archspec=1=arm64
__osx=13.4.1=0
__unix=0=0
base environment : /Users/apung/anaconda3 (writable)
conda av data dir : /Users/apung/anaconda3/etc/conda
conda av metadata url : None
channel URLs : https://repo.anaconda.com/pkgs/main/osx-arm64
https://repo.anaconda.com/pkgs/main/noarch
https://repo.anaconda.com/pkgs/r/osx-arm64
https://repo.anaconda.com/pkgs/r/noarch
package cache : /Users/apung/anaconda3/pkgs
/Users/apung/.conda/pkgs
envs directories : /Users/apung/anaconda3/envs
/Users/apung/.conda/envs
platform : osx-arm64
user-agent : conda/23.5.2 requests/2.29.0 CPython/3.11.3 Darwin/22.5.0 OSX/13.4.1
UID:GID : 502:20
netrc file : None
offline mode : False
</code></pre>
<p>and here is the return from <code>conda list</code>:</p>
<pre><code>(python3p8) apung@MacBookPro-MTXQM1Y6PM ~ % conda list
# packages in environment at /Users/apung/anaconda3/envs/python3p8:
#
# Name Version Build Channel
bzip2 1.0.8 h3422bc3_4 conda-forge
ca-certificates 2023.5.7 hf0a4a13_0 conda-forge
libcxx 16.0.6 h4653b0c_0 conda-forge
libffi 3.4.4 hca03da5_0
libsqlite 3.42.0 hb31c410_0 conda-forge
libzlib 1.2.13 h53f4e23_5 conda-forge
ncurses 6.4 h313beb8_0
openjdk 8.0.332 he4db4b2_0 conda-forge
openssl 3.1.1 h53f4e23_1 conda-forge
orekit 11.3.2 pypi_0 pypi
pip 23.1.2 py38hca03da5_0
python 3.8.17 h3ba56d0_0_cpython conda-forge
python_abi 3.8 3_cp38 conda-forge
readline 8.2 h1a28f6b_0
setuptools 67.8.0 py38hca03da5_0
sqlite 3.41.2 h80987f9_0
tk 8.6.12 hb8d0fd4_0
wheel 0.38.4 py38hca03da5_0
xz 5.4.2 h80987f9_0
zlib 1.2.13 h53f4e23_5 conda-forge
</code></pre>
| <python><package><conda><zsh> | 2023-07-19 17:48:52 | 0 | 1,144 | AaronJPung |
76,723,848 | 848,277 | Better pandas performance profiling | <p>Is there a better way to profile the performance of a large code base that uses pandas?</p>
<p>Currently if I use standard tools like <code>cProfile</code> and even when using <code>gprof2dot</code> the call stacks for pandas aren't very informative. As an example I might get a result such as <code>common:67:new_method 77.87%</code> which just tells me some comparisons in the code are taking the most time, the problem here is it doesn't tell me where. I can provide many similar examples which makes performance profiling pandas a pain. I'd like a more granular way to profile my code that tells me where the problems are. One such example is below:</p>
<p><a href="https://i.sstatic.net/G52pE.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/G52pE.png" alt="Example call stack" /></a></p>
<p>Google and stack overflow searching for pandas profiling doesn't help much as the results are for profiling a data set and not the performance.</p>
| <python><pandas><profiling> | 2023-07-19 17:28:54 | 0 | 12,450 | pyCthon |
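Two approaches usually make pandas-heavy profiles readable. First, `pstats` can restrict the printed report to your own files, so frames like `common.py:67(new_method)` stop drowning out the call sites that reach them; second, sampling profilers (`py-spy record --native`) or line-level ones (`line_profiler`) attribute time to the exact line that invokes pandas. A stdlib-only sketch of the first approach, with a stand-in function in place of real pandas code:

```python
import cProfile
import io
import pstats

def my_hot_function():  # stand-in for code that calls into pandas
    return sum(i * i for i in range(100_000))

prof = cProfile.Profile()
prof.enable()
my_hot_function()
prof.disable()

out = io.StringIO()
stats = pstats.Stats(prof, stream=out).sort_stats("cumulative")
# The restriction is a regex over "file:lineno(function)"; pointing it at
# your package name hides library internals and keeps your call sites.
stats.print_stats("my_hot_function")
report = out.getvalue()
print(report)
```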
76,723,807 | 2,270,789 | Subset/Slice dask array using boolean array of lower dimension | <p>Using <code>numpy</code> I can subset a 3D array using a 2D 'mask'. The same returns an <code>IndexError</code> with dask arrays. Is there any way to reproduce that <code>numpy</code> behaviour below using <code>dask</code>?</p>
<pre class="lang-py prettyprint-override"><code>import numpy as np
import dask.array as da
# Create 3d arrays of random values and mask with shape matching second and third dimensions
y_da = da.random.random(size=(20, 100, 100))
y_np = np.random.rand(20, 100, 100)
mask = np.zeros((100, 100), dtype=np.uint8)
mask[20:80, 3:77] = 1
# Apply mask (flattens axes 1 and 2)
print(y_np[:,mask == 1].shape) # OK
print(y_da[:,mask == 1].shape) # IndexError
</code></pre>
| <python><numpy><dask> | 2023-07-19 17:22:15 | 1 | 401 | Loïc Dutrieux |
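Dask doesn't implement boolean indexing with a mask of lower dimension, but it does support a 1-D boolean index along a single axis. Flattening the two masked axes first therefore reproduces the NumPy behavior; the sketch verifies the equivalence in NumPy, and the same two indexing lines run unchanged on the dask array (`y_da.reshape(y_da.shape[0], -1)[:, mask.ravel() == 1]`):

```python
import numpy as np

y = np.random.rand(20, 100, 100)
mask = np.zeros((100, 100), dtype=np.uint8)
mask[20:80, 3:77] = 1

# Collapse the two masked axes into one, then index with a 1-D boolean.
flat_subset = y.reshape(y.shape[0], -1)[:, mask.ravel() == 1]

# Identical to the 2-D-mask form that dask rejects (both flatten in C order):
assert np.array_equal(flat_subset, y[:, mask == 1])
print(flat_subset.shape)  # (20, 4440)
```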
76,723,777 | 6,330,106 | DLL load failed while importing _libheif_cffi on Windows | <p>I'm using Python 3.11.4 on Windows 11. Following the <a href="https://github.com/carsales/pyheif/issues/2#issuecomment-886469620" rel="nofollow noreferrer">instructions</a>, I managed to install <code>pyheif</code>. When I run <code>import pyheif</code>, it raises the error:</p>
<pre><code>DLL load failed while importing _libheif_cffi. The specified module could not be found
</code></pre>
<p><code>pyheif</code> is under <code>c:\src\p3venv\lib\site-packages\</code>. I tried to find out where <code>_libheif_cffi</code> is.</p>
<pre><code>import importlib.util
importlib.util.find_spec('_libheif_cffi')
</code></pre>
<p>It prints</p>
<pre><code>ModuleSpec(name='_libheif_cffi', loader=<_frozen_importlib_external.ExtensionFileLoader object at 0x0000...>, origin='C:\\src\p3venv\\lib\\site-packages\\_libheif_cffi.pyd')
</code></pre>
<p><code>C:\\src\p3venv\\lib\\site-packages\\_libheif_cffi.pyd</code> does exist and <code>file _libheif_cffi.pyd</code> prints <code>_libheif_cffi.pyd: PE32+ executable (DLL) (GUI) x86-64, for MS windows, 6 sections</code>.</p>
<p>In a related question, an answer suggests that with Python 3.8 and above in order to load dlls, <code>os.add_dll_directory('/path/to/dlldir/')</code> is necessary. So, I tried</p>
<pre><code>import os
os.add_dll_directory('C:\\src\p3venv\\lib\\site-packages\\')
</code></pre>
<p>It prints <code><AddedDllDirectory('C:\\src\p3venv\\lib\\site-packages\\')></code>. However, <code>import pyheif</code> still raises the same error.</p>
<p>Thanks for any help.</p>
| <python> | 2023-07-19 17:18:33 | 0 | 31,575 | ElpieKay |
76,723,639 | 9,465,029 | pandas conditional groupby on a group | <p>I am trying to do a conditional groupby and would like to know the best approach.
I am trying to compute the minimum value over a period, and the period depends on the country I want to group by. The following code works, but there might be better ways to do this.</p>
<p>Any possible improvements?</p>
<pre><code>df
datetime country scenario value
2023-01-01 00:00:00 FR 1 1
2023-01-01 01:00:00 FR 1 2
2023-01-01 02:00:00 FR 1 3
2023-01-01 03:00:00 FR 1 4
2023-01-01 00:00:00 DE 1 1
2023-01-01 01:00:00 DE 1 2
2023-01-01 02:00:00 DE 1 3
2023-01-01 03:00:00 DE 1 4
countries = {'DE':2,'FR':4}
df3l=[]
for sc in scenarios: # iterate over scenarios
df2l = []
for c in countries: # iterate over countries
df2 = df[ (df['scenario']==sc) & (df['country']==c)] # select country and scenario
# compute min over period per country
df2['min over period']= pd.DataFrame(df.groupby(['scenario','country',pd.Grouper(key='datetime', freq=str(countries[c])+'H')]).transform('min'))
df2l.append(df2)
df2 = pd.concat(df2l,axis=0)
df3l.append(df2)
df = pd.concat(df3l,axis=0)
</code></pre>
<p>Expected outcome:</p>
<pre><code> df
datetime country scenario min over period
2023-01-01 00:00:00 FR 1 1
2023-01-01 01:00:00 FR 1 1
2023-01-01 02:00:00 FR 1 1
2023-01-01 03:00:00 FR 1 1
2023-01-01 00:00:00 DE 1 1
2023-01-01 01:00:00 DE 1 1
2023-01-01 02:00:00 DE 1 3
2023-01-01 03:00:00 DE 1 3
</code></pre>
| <python><pandas><group-by> | 2023-07-19 16:59:32 | 2 | 631 | Peslier53 |
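The per-scenario loop and the nested concats can go: only the country loop is forced (each country needs a different `freq`), and the minimum can be assigned from a `transform` on just the `value` column. A sketch against the sample frame (column names as in the question; lowercase `"h"` keeps newer pandas from warning about the deprecated `"H"` alias):

```python
import pandas as pd

period_hours = {"DE": 2, "FR": 4}  # per-country window length in hours

def min_over_country_period(df, period_hours):
    parts = []
    for country, hours in period_hours.items():
        sub = df[df["country"] == country].copy()
        grouper = pd.Grouper(key="datetime", freq=f"{hours}h")
        sub["min over period"] = (
            sub.groupby(["scenario", grouper])["value"].transform("min")
        )
        parts.append(sub)
    return pd.concat(parts).sort_index()
```

With the sample data this yields `1` everywhere for FR (one 4-hour bin) and `1, 1, 3, 3` for DE (two 2-hour bins), matching the expected outcome.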
76,723,463 | 11,986,067 | Using UnmanagedExports Framework in Unity | <p>I'm trying to use <a href="https://www.nuget.org/packages/UnmanagedExports" rel="nofollow noreferrer">UnmanagedExports</a> in my Unity project, since I have one function I would like to call from a Python program. I get the following error when I try to call the function from my Python script:</p>
<blockquote>
<p>AttributeError: function 'ConvertRawToBinData' not found</p>
</blockquote>
<p>Like mentioned <a href="https://stackoverflow.com/a/34620167/11986067">here</a>, I tried to build the project via x64 target. When I change the target to x64 in Visual Studio I cannot import my libraries:</p>
<blockquote>
<p>The type or namespace name 'type/namespace' could not be found</p>
</blockquote>
<p>Can I use UnmanagedExports in a Unity project and if yes, how?</p>
<p>Here is the C# code:</p>
<pre class="lang-cs prettyprint-override"><code>using RGiesecke.DllExport;
using System;
using System.Collections.Generic;
using System.Linq;
using System.Runtime.InteropServices;
using System.Text;
using System.Threading.Tasks;
namespace ConverterAPI
{
public class ConverterAPI
{
[DllExport("ConvertRawToBinData", CallingConvention = CallingConvention.StdCall)]
public static void ConvertRawToBinData(string settingsPath, string collectionSettingsPath, string rawDataPath)
{
Settings settings = Util.ImportJson<Settings>(settingsPath);
CollectionSettings CollectionSettings = Util.ImportJson<CollectionSettings>(collectionSettingsPath);
Converter.Convert(settings, CollectionSettings, rawDataPath);
}
}
}
</code></pre>
<p>Here is the Python code:</p>
<pre class="lang-py prettyprint-override"><code>import ctypes
import os
from pathlib import Path, PurePath
apiPath = PurePath(assetsPath.parent, 'Build', 'Target', 'API.dll')
converterAPI = ctypes.cdll.LoadLibrary(str(apiPath))
converterAPI.ConvertRawToBinData("", "", "")
</code></pre>
| <python><c#><visual-studio><unity-game-engine><unmanagedexports> | 2023-07-19 16:35:09 | 0 | 495 | Ling |