| QuestionId (int64, 74.8M-79.8M) | UserId (int64, 56-29.4M) | QuestionTitle (string, 15-150 chars) | QuestionBody (string, 40-40.3k chars) | Tags (string, 8-101 chars) | CreationDate (2022-12-10 09:42:47 to 2025-11-01 19:08:18) | AnswerCount (int64, 0-44) | UserExpertiseLevel (int64, 301-888k) | UserDisplayName (string, 3-30 chars, nullable) |
|---|---|---|---|---|---|---|---|---|
77,951,543
| 8,478
|
Testing AWS emails using Moto - can't get it to work
|
<p>I'm trying to use moto to test sending AWS emails. The documentation is poor, and some things have changed: for example, <code>@mock_ses</code> no longer appears to exist. Frankly, my understanding of mocking is poor.</p>
<p>This is the function under test:</p>
<pre><code>import logging
from typing import Iterable, Literal
import boto3.session
logger = logging.getLogger(__name__)
def send_email(boto3_session: boto3.session.Session, to: Iterable[str], subject: str, body: str, body_format: Literal['Html', 'Text'] = 'Html', cc: Iterable[str] = None, from_email: str = 'test@domain.com'):
# Create the email message
send_args = {
'Source': from_email,
'Destination': {
'ToAddresses': to,
},
'Message': {
'Subject': {'Data': subject},
'Body': {body_format: {'Data': body}}
}
}
if cc:
send_args['Destination']['CcAddresses'] = cc
response = boto3_session.client('ses', region_name="us-east-1").send_email(**send_args)
message_id = response['MessageId']
logger.debug(
"Sent mail %s to %s.", message_id, to[0])
</code></pre>
<p>and this is my test function:</p>
<pre><code>import os
import boto3
from moto import mock_aws
from moto.core import DEFAULT_ACCOUNT_ID
from moto.ses import ses_backends
import pytest
import pytest_check as ptc
import rbn_lib.comms as comms
# You need the secret, key ID, and key in your environment. This tests against the live DB.
DEFAULT_REGION = "us-east-1"
@pytest.fixture
def aws_credentials():
os.environ["AWS_ACCESS_KEY_ID"] = "testing"
os.environ["AWS_SECRET_ACCESS_KEY"] = "testing"
os.environ["AWS_SECURITY_TOKEN"] = "testing"
os.environ["AWS_SESSION_TOKEN"] = "testing"
os.environ["AWS_DEFAULT_REGION"] = DEFAULT_REGION
@mock_aws
@pytest.fixture
def sess(aws_credentials):
sess = boto3.Session()
yield sess
@mock_aws
def test_send_email(sess):
to = ["test@test_email.us"]
subject = "Test"
body = "Test"
comms.send_email(sess, to, subject, body)
ses_backend = ses_backends[DEFAULT_ACCOUNT_ID][DEFAULT_REGION]
ptc.equal(ses_backend.sent_messages[0].subject, subject)
</code></pre>
<p>I'm getting errors that indicate that AWS is actually getting called:</p>
<pre><code>self = <botocore.client.SES object at 0x000002542F110310>
operation_name = 'SendEmail'
api_params = {'Destination': {'ToAddresses': ['test@test_email.us']}, 'Message': {'Body': {'Html': {'Data': 'Test'}}, 'Subject': {'Data': 'Test'}}, 'Source': 'test@domain.com'}
def _make_api_call(self, operation_name, api_params):
operation_model = self._service_model.operation_model(operation_name)
service_name = self._service_model.service_name
history_recorder.record(
'API_CALL',
{
'service': service_name,
'operation': operation_name,
'params': api_params,
},
)
if operation_model.deprecated:
logger.debug(
'Warning: %s.%s() is deprecated', service_name, operation_name
)
request_context = {
'client_region': self.meta.region_name,
'client_config': self.meta.config,
'has_streaming_input': operation_model.has_streaming_input,
'auth_type': operation_model.auth_type,
}
api_params = self._emit_api_params(
api_params=api_params,
operation_model=operation_model,
context=request_context,
)
(
endpoint_url,
additional_headers,
properties,
) = self._resolve_endpoint_ruleset(
operation_model, api_params, request_context
)
if properties:
# Pass arbitrary endpoint info with the Request
# for use during construction.
request_context['endpoint_properties'] = properties
request_dict = self._convert_to_request_dict(
api_params=api_params,
operation_model=operation_model,
endpoint_url=endpoint_url,
context=request_context,
headers=additional_headers,
)
resolve_checksum_context(request_dict, operation_model, api_params)
service_id = self._service_model.service_id.hyphenize()
handler, event_response = self.meta.events.emit_until_response(
'before-call.{service_id}.{operation_name}'.format(
service_id=service_id, operation_name=operation_name
),
model=operation_model,
params=request_dict,
request_signer=self._request_signer,
context=request_context,
)
if event_response is not None:
http, parsed_response = event_response
else:
maybe_compress_request(
self.meta.config, request_dict, operation_model
)
apply_request_checksum(request_dict)
http, parsed_response = self._make_request(
operation_model, request_dict, request_context
)
self.meta.events.emit(
'after-call.{service_id}.{operation_name}'.format(
service_id=service_id, operation_name=operation_name
),
http_response=http,
parsed=parsed_response,
model=operation_model,
context=request_context,
)
if http.status_code >= 300:
error_info = parsed_response.get("Error", {})
error_code = error_info.get("QueryErrorCode") or error_info.get(
"Code"
)
error_class = self.exceptions.from_code(error_code)
> raise error_class(parsed_response, operation_name)
E botocore.errorfactory.MessageRejected: An error occurred (MessageRejected) when calling the SendEmail operation: Email address not verified test@domain.com
</code></pre>
|
<python><email><pytest><boto3><moto>
|
2024-02-07 00:08:31
| 1
| 3,321
|
Marc
|
77,951,326
| 1,172,606
|
Django update_or_create within a loop is duplicating field values, but just a couple fields
|
<p>It's been a while, but I have a somewhat puzzling issue. I am looping over some data being pulled from an API and performing an <code>update_or_create</code> call through Django.</p>
<pre><code>for product in response['products']:
for variant in product['variants']:
print(variant['product_id'])
obj, created = Product.objects.update_or_create(
sku=variant['sku'],
defaults={
'name': product['title'],
'price': variant['price'],
'sku': variant['sku'],
'shopify_product_id': variant['product_id'],
'shopify_variant_id': variant['id'],
'weight_grams': variant['grams'],
},
)
</code></pre>
<p>During the loop you'll notice I am printing the value of the variant product id.</p>
<p><code>print(variant['product_id'])</code></p>
<p>In the console I see the print results are correct with different variant product ids.</p>
<pre><code>7396903682207
7405275381919
7405275775135
7405273579679
7405278101663
7396921475231
9034125279391
</code></pre>
<p>But in the database both the <code>variant['product_id']</code> and <code>variant['id']</code> are all the same for every row. Yet the other details such as <strong>name</strong> and <strong>price</strong> are all correct and differ per row.</p>
<p><a href="https://i.sstatic.net/7vv7y.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/7vv7y.png" alt="enter image description here" /></a></p>
<p>I cannot for the life of me figure out why this is happening. I am using the <strong>sku</strong> as the unique identifier to filter on, and it is set as unique in my model. No matter what I do or try, it is only those two fields that get duplicated.</p>
|
<python><django><for-loop><django-queryset>
|
2024-02-06 23:01:47
| 1
| 4,813
|
VIDesignz
|
77,950,904
| 1,245,281
|
setting python logging config using dictionary fails if `incremental` is set to True?
|
<p>I've got a pretty simple example, cfg1.yaml:</p>
<pre><code>version: 1
incremental: False
formatters:
text_fmt:
format: "%(asctime)s - %(levelname)s - job_id %(job_id)s - task_id %(task_id)s - username %(username)s - %(message)s"
handlers:
text_file:
class: logging.FileHandler
formatter: text_fmt
filename: job_log.txt
loggers:
logger:
level: INFO
handlers: [text_file]
</code></pre>
<p>I've got a second simple example, cfg2.yaml (the same as the first one, just with "text" changed to "txt"):</p>
<pre><code>version: 1
incremental: True
formatters:
txt_fmt:
format: "%(asctime)s - %(levelname)s - job_id %(job_id)s - task_id %(task_id)s - username %(username)s - %(message)s"
handlers:
txt_file:
class: logging.FileHandler
formatter: txt_fmt
filename: job_log.txt
loggers:
logger:
level: INFO
handlers: [txt_file]
</code></pre>
<p>and my code to load it is:</p>
<pre><code>import yaml
from logging import config
with open("./cfg1.yaml") as fh:
    log_config = yaml.safe_load(fh)
    config.dictConfig(log_config)
with open("./cfg2.yaml") as fh:
    log_config = yaml.safe_load(fh)
    config.dictConfig(log_config)
</code></pre>
<p>What I get when I run it is the following error <code>No handler found with name 'text_file'</code> - full trace:</p>
<pre><code>Traceback (most recent call last):
File "/Users/schapman/Workspaces/obs-demo-job-doer/converter.py", line 10, in <module>
config.dictConfig(log_config)
File "/Users/schapman/.pyenv/versions/3.9.18/lib/python3.9/logging/config.py", line 809, in dictConfig
dictConfigClass(config).configure()
File "/Users/schapman/.pyenv/versions/3.9.18/lib/python3.9/logging/config.py", line 508, in configure
raise ValueError('No handler found with '
ValueError: No handler found with name 'text_file'
</code></pre>
<p>If I comment out the <code>incremental</code> line in the yaml it works.</p>
<p>What am I doing wrong??</p>
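<p>For comparison, here is a stdlib-only sketch of what <code>incremental: True</code> is documented to support: an incremental config may only adjust the levels of handlers and loggers that an earlier full config already registered by name; it cannot define new formatters or handlers, so renamed (or otherwise unknown) handler names fail to resolve:</p>

```python
import logging
import logging.config

# Full (non-incremental) config: formatters and handlers may be *defined* here.
logging.config.dictConfig({
    "version": 1,
    "formatters": {"fmt": {"format": "%(levelname)s %(message)s"}},
    "handlers": {"console": {"class": "logging.StreamHandler", "formatter": "fmt"}},
    "loggers": {"app": {"level": "INFO", "handlers": ["console"]}},
})

# Incremental config: only levels of *existing* handlers/loggers can change,
# looked up by the names the first call registered ("console", "app").
logging.config.dictConfig({
    "version": 1,
    "incremental": True,
    "handlers": {"console": {"level": "ERROR"}},
    "loggers": {"app": {"level": "DEBUG"}},
})
```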
|
<python><dictionary><python-logging>
|
2024-02-06 21:20:47
| 0
| 551
|
RedBullet
|
77,950,848
| 7,475,838
|
Downloading file with Python gives error 699
|
<p>On <a href="https://www.bibleprotector.com" rel="nofollow noreferrer">https://www.bibleprotector.com</a> are files available for download (like <code>TEXT-PCE.zip</code>).<br />
Manual downloading with a 'right click' works just fine.</p>
<p>However, when trying to download the same file using Python, a <code>699</code> error is returned.</p>
<pre class="lang-py prettyprint-override"><code>import requests
url = 'https://www.bibleprotector.com/TEXT-PCE.zip'
r = requests.get(url, allow_redirects=True)
open('test.zip', 'wb').write(r.content)
</code></pre>
<p>Is there a way to download this file using Python?</p>
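<p>One hedged, stdlib-only sketch: sites that block scripted clients often key on the <code>User-Agent</code> header, so sending browser-like headers may be enough (the URL is from the question; whether this particular server then accepts the request is an assumption):</p>

```python
import urllib.request

url = "https://www.bibleprotector.com/TEXT-PCE.zip"
req = urllib.request.Request(url, headers={
    # Many servers reject the default Python User-Agent string.
    "User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64)",
    "Accept": "*/*",
})
# urllib.request.urlopen(req) would then stream the file (not executed here).
```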
|
<python><python-requests><download><zip>
|
2024-02-06 21:08:10
| 2
| 4,919
|
René
|
77,950,691
| 22,953,332
|
Model Creation Freezing Django (or Django Rest Framework)
|
<p>I have this model:</p>
<pre class="lang-py prettyprint-override"><code>class AllowedUser(models.Model):
PLACE_CHOICES = (
(1, 'Loc1'),
(2, 'Loc2'),
(3, 'Loc3'),
(4, 'Loc4'),
(5, 'Loc5'),
)
id = models.CharField(primary_key=True, max_length=8, unique=True, default=generate_unique_id)
name = models.CharField(max_length=60)
place = models.IntegerField(choices=PLACE_CHOICES)
current_version = models.CharField(max_length=8, default="0.0.1")
last_updated = models.DateTimeField(default=datetime.datetime(1970,1,1,0,0,0))
def __str__(self):
return self.id + ' - ' + self.name
</code></pre>
<p>And when I try it in the shell, or even at runtime with DRF, it just does nothing.</p>
<p>In DRF, it stops with no error right after <code>serializer.save()</code>; in the shell it freezes (it doesn't actually freeze, but waits for something) right after <code>AllowedUser(...Data...)</code>. I'm using SQLite3 as the database.</p>
<p>I actually don't know what's the root problem here. Anyone got an idea on what's causing it to hold?</p>
<p>Thanks in advance.</p>
|
<python><django><django-rest-framework>
|
2024-02-06 20:32:27
| 1
| 317
|
Arthur Araujo
|
77,950,663
| 12,027,869
|
Total Number of Rows Having Different Row Values by Group
|
<p>I have a dataframe:</p>
<pre><code>data = {
'group1': ['A', 'A', 'A', 'A', 'A', 'B', 'B'],
'group2': ['xx', 'xx', 'xx', 'xx', 'xx', 'xy', 'xy'],
'num1': [1, 1, 1, 1, 1, 4, 3],
'num2': [2, 2, 2, 3, 3, 6, 7],
}
df = pd.DataFrame(data)
</code></pre>
<p><a href="https://i.sstatic.net/0zALW.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/0zALW.png" alt="enter image description here" /></a></p>
<p>I want to get the total number of distinct row records by <code>group1</code> and <code>group2</code>. A count here means one unique combination of <code>num1</code> and <code>num2</code> within each <code>group1</code>/<code>group2</code> group, so multiple records with the same values still count as one.</p>
<p>For instance, there are 5 records for <code>group1 = A & group2 = xx</code>.<br />
It should output 2 because:</p>
<ul>
<li>the first 3 rows of <code>num1</code> and <code>num2</code> are identical and</li>
<li>the 4th and 5th row (<code>num1 = 1 & num2 = 3</code>) is another duplicate.</li>
</ul>
<p>The last two rows are another group (<code>group1 = B & group2 = xy</code>). These two records have different <code>num1</code> and <code>num2</code> values, so it should also output 2.</p>
<p>Expected outcome:</p>
<p><a href="https://i.sstatic.net/QFbBq.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/QFbBq.png" alt="enter image description here" /></a></p>
<p>My attempt so far:</p>
<pre><code>df[df.groupby(['group1', 'group2'])[['num1', 'num2']].transform(lambda x, y: x.nunique() > 1 & y.nunique() > 1)]
</code></pre>
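<p>A hedged sketch of the usual approach (assuming pandas is available): drop fully duplicated rows first, then count what remains per group with <code>groupby(...).size()</code>:</p>

```python
import pandas as pd

data = {
    'group1': ['A', 'A', 'A', 'A', 'A', 'B', 'B'],
    'group2': ['xx', 'xx', 'xx', 'xx', 'xx', 'xy', 'xy'],
    'num1': [1, 1, 1, 1, 1, 4, 3],
    'num2': [2, 2, 2, 3, 3, 6, 7],
}
df = pd.DataFrame(data)

# Keep one row per unique (group1, group2, num1, num2) combination,
# then count the distinct combinations left in each group.
out = (
    df.drop_duplicates()
      .groupby(['group1', 'group2'])
      .size()
      .reset_index(name='count')
)
```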
|
<python><pandas>
|
2024-02-06 20:26:38
| 2
| 737
|
shsh
|
77,950,625
| 857,932
|
What is the fastest equivalent to `str.lstrip()` that does not copy the string?
|
<p>In Python, the <code>str.lstrip()</code> method returns a copy of the string with leading characters from the set removed (or leading whitespace removed, if the set is not provided):</p>
<blockquote>
<p><code>str.lstrip([chars])</code><br />
Return <strong>a copy of the string</strong> with leading characters removed. The chars argument is a string specifying the set of characters to be removed. If omitted or None, the chars argument defaults to removing whitespace. The chars argument is not a prefix; rather, all combinations of its values are stripped</p>
</blockquote>
<p>Let's suppose I have a large input string (assume hundreds of megabytes) that consists of runs of non-whitespace data interleaved by runs of whitespace:</p>
<pre class="lang-py prettyprint-override"><code>import random
import string

instr = ''.join(
    (''.join(random.choice(string.whitespace) for _ in range(random.randint(0, int(1e3)))) +
    ''.join(random.choice(string.ascii_letters) for _ in range(random.randint(0, int(1e3))))
    for _ in range(int(1e5)))
)
</code></pre>
<p>I have a parser for the data, but this parser</p>
<ol>
<li>does not accept leading whitespace, and</li>
<li>stops at the trailing whitespace, returning the last parsed position.</li>
</ol>
<p>The naive method of parsing the whole string would be to call <code>str.lstrip()</code> on the remaining substring, then call the parser on the result, update the substring and loop. However, this would copy the substring unnecessarily on each iteration.</p>
<p>The parser is able to accept an optional starting position, but that interface is undocumented and I'd prefer not to use it.</p>
<hr />
<p>How do I avoid the copy in <code>str.lstrip()</code> and the substring (slice) operator?</p>
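<p>One stdlib-only way to sketch this (an assumption about the surrounding parsing loop, not the asker's actual code): track a position index and use <code>re.search</code> with a start offset to skip whitespace without slicing or copying the string:</p>

```python
import re

_nonspace = re.compile(r"\S")

def skip_whitespace(s: str, pos: int) -> int:
    """Index of the first non-whitespace character at or after pos (len(s) if none)."""
    m = _nonspace.search(s, pos)
    return m.start() if m else len(s)
```

<p>The parsing loop then advances <code>pos</code> instead of re-slicing the input on every iteration.</p>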
|
<python><string>
|
2024-02-06 20:19:30
| 0
| 2,955
|
intelfx
|
77,950,619
| 11,211,041
|
How to suppress "Line too long" warning (pyright-extended) in Replit Python?
|
<p>When coding in Python using <a href="https://replit.com/%7E" rel="nofollow noreferrer">Replit</a>, I get the annoying warning "Line too long" (pyright-extended).</p>
<p><a href="https://i.sstatic.net/N9kUD.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/N9kUD.png" alt="enter image description here" /></a></p>
<p>How can I get Replit to always suppress this type of warning?</p>
|
<python><suppress-warnings><replit><ruff>
|
2024-02-06 20:18:40
| 1
| 892
|
Snostorp
|
77,950,512
| 11,444,715
|
Pandas read_parquet() with filters raises ArrowNotImplementedError for partitioned column with int64 dtype
|
<p>I am encountering an issue while trying to read a parquet file with Pandas read_parquet() function using the filters argument. One of the partitioned columns in the parquet file has an int64 dtype. However, when applying filters on this column, I'm getting the following error:</p>
<pre><code>pyarrow.lib.ArrowNotImplementedError: Function 'equal' has no kernel matching input types (string, int64)
</code></pre>
<p>It seems that Pandas is incorrectly inferring the data type of the partitioned column as a string, causing this error. I've verified that the data type of the filter is correct, so the issue seems to be with Pandas incorrectly inferring the dtype of the partitioned column.</p>
<p>How can I resolve this issue and correctly read the parquet file with filters applied to the partitioned column?</p>
<p>Here's the code I'm using; it's just this:</p>
<pre class="lang-py prettyprint-override"><code>
import pandas as pd
# Read the parquet file with filters
df = pd.read_parquet('path_to_file.parquet', filters=[('partition_column_name', '==', 123)])
</code></pre>
<p>P.S. I am sure the data is correct.</p>
<p>Thank you for your help!</p>
|
<python><pandas><pyarrow>
|
2024-02-06 19:54:44
| 1
| 735
|
Rafael Higa
|
77,950,504
| 929,732
|
Unable to get BASH Script to get the version of python unless it is version 2
|
<pre><code>#!/bin/bash
for x in `find $(readlink -f /usr/local/folder/) -iregex ".*\/python[23]"`
do
TORUN=$(echo "${x} --version")
echo $TORUN
TORUN2=$(eval $TORUN)
done
</code></pre>
<p><strong>and the results</strong></p>
<pre><code>/usr/local/zz/AA_EXTERNAL/venv3/bin/python3 --version
/usr/local/zz/AA_EXTERNAL_TEST/venv3/bin/python3 --version
/usr/local/zz/API_PROXY/forapis/bin/python2 --version
Python 2.7.5
</code></pre>
<p>When I run</p>
<blockquote>
<p>/usr/local/zz/AA_EXTERNAL/venv3/bin/python3 --version</p>
</blockquote>
<p>on the command line....</p>
<p>I get</p>
<blockquote>
<p>Python 3.6.4</p>
</blockquote>
<p>Just a bit confused..</p>
|
<python><bash><for-loop>
|
2024-02-06 19:53:08
| 0
| 1,489
|
BostonAreaHuman
|
77,950,292
| 7,932,327
|
Creating a cython memoryview manually
|
<p>I have a contiguous 1-D numpy array that holds data. However, this array is really a buffer, and I have several numpy views to access that data. These views have different shapes, ndims, offsets, etc.</p>
<p>For instance :</p>
<pre class="lang-py prettyprint-override"><code>import numpy as np
import math as m
shape_0 =(2,3)
shape_1 = (2,2,2)
shifts = np.zeros((3),dtype=np.intp)
shifts[1] = m.prod(shape_0)
shifts[2] = shifts[1] + m.prod(shape_1)
buf = np.random.random((shifts[2]))
arr_0 = buf[shifts[0]:shifts[1]].reshape(shape_0)
arr_1 = buf[shifts[1]:shifts[2]].reshape(shape_1)
print(buf)
print(arr_0)
print(arr_1)
</code></pre>
<p>My question is the following: how can I do the same thing, but in Cython using memoryviews in a nogil environment?</p>
|
<python><numpy><cython><memoryview><typed-memory-views>
|
2024-02-06 19:07:51
| 1
| 501
|
G. Fougeron
|
77,950,129
| 590,552
|
Python 3.9 Tests in separate directory - Module not found
|
<p>I am working on my first Python project, and I would like to structure the folders correctly.
What I have so far is:</p>
<pre><code>HomeDir/MyProject/
ConfigHandler.py
HomeDir/Tests/
    ConfigHandlerTests.py
</code></pre>
<p>In ConfigHandlerTests.py I am importing a class defined in ConfigHandler.py called ConfigHandler:</p>
<pre><code>from ConfigHandler import ConfigHandler
</code></pre>
<p>But when I run the tests using :</p>
<pre><code>python -m unittest ConfigHandlerTests
</code></pre>
<p>I get the following error :</p>
<pre><code>ModuleNotFoundError: No module named 'ConfigHandler'
</code></pre>
<p>I have read that I need an <code>__init__.py</code> in the test folder? But other posts mention that it is no longer required.
If I just have all the Python files and test files in one directory, it works fine. But splitting them out into folders doesn't work. What am I missing?</p>
<p>Thanks!</p>
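<p>A minimal, hedged sketch of one common fix (the helper name is mine, not from the question): put the sibling source directory on <code>sys.path</code> before importing, derived from the test file's own location:</p>

```python
import os
import sys

def add_sibling_dir_to_path(test_file: str, sibling: str) -> str:
    """Insert HomeDir/<sibling> onto sys.path, given a file living in HomeDir/Tests."""
    root = os.path.dirname(os.path.dirname(os.path.abspath(test_file)))
    path = os.path.join(root, sibling)
    sys.path.insert(0, path)
    return path
```

<p>Longer term, making <code>MyProject</code> a package and running <code>python -m unittest discover</code> from <code>HomeDir</code> avoids path tweaking altogether.</p>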
|
<python><python-3.x>
|
2024-02-06 18:41:03
| 0
| 312
|
evolmonster
|
77,950,122
| 8,010,921
|
Handle Exceptions in Python Function's signature
|
<p>I wonder if I have designed my script correctly (it works so far), because I am stuck in a loop of type-checker warnings from which I cannot get out.</p>
<p>I am trying to define a function that either parses a JSON response from a website or simply terminates the program. Here is my code:</p>
<pre class="lang-py prettyprint-override"><code>from requests import get, Response, JSONDecodeError
from typing import Any
from sys import exit
def logout() -> None:
exit()
def get_page(url:str) -> dict[Any,Any]:
try:
response: Response = get(url)
page: dict[Any,Any] = response.json()
except JSONDecodeError as e:
print(e)
logout()
return page
try:
result: dict[Any, Any] = get_page("https://stackoverflow.com/")
except Exception as e:
#handle exception
pass
print(result['something'])
</code></pre>
<p>Pylance informs me that <code>page</code> (as in <code>return page</code>) and <code>result</code> (as in <code>print(result['something'])</code>) <code>is possibly unbound</code>.
As far as I understand, there are different possible workarounds, but none of them entirely solves the problem:</p>
<ul>
<li>When moving <code>return page</code> before the <code>except</code> statement, <code>mypy</code> raises a <code>Missing return statement</code>.</li>
<li>If I ignore the "error" from <code>mypy</code>, Pylance informs me that I should anyway sign my function specifying that it should return <code>dict[Any,Any] | None</code>.</li>
<li>Changing the return signature to <code>dict[Any,Any]</code> means that outside the function <code>result</code> should be <code>result: dict[Any,Any]|None</code> but then <code>result['something'] is possibly unbound</code> and <code>Object of type "None" is not subscriptable</code></li>
<li>initializing <code>page = {}</code> at the beginning of the function is a possible solution, but what if <code>page</code> is a custom <code>class</code> which need some parameter that I cannot provide?</li>
<li>I could <code>raise</code> an <code>Exception</code> from within the function if there would have been a way to inform Pylance about this possible behaviour (is there?)</li>
</ul>
<p>My point is: the function will NEVER return <code>None</code> right? So, am I doing something wrong? How do you deal with this? Should I refactor my function?</p>
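<p>One hedged sketch of a common fix (simplified, with a stand-in parser instead of <code>requests</code>): annotate the terminating helper with <code>typing.NoReturn</code>, so checkers know control cannot continue past it and <code>page</code> is always bound at the <code>return</code>:</p>

```python
import sys
from typing import Any, NoReturn

def logout() -> NoReturn:
    # NoReturn tells Pylance/mypy this call never returns normally.
    sys.exit()

def get_page(parse) -> dict[str, Any]:
    try:
        page: dict[str, Any] = parse()
    except ValueError as e:
        print(e)
        logout()  # the checker now knows execution stops here
    return page   # no "possibly unbound" warning
```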
|
<python><typing>
|
2024-02-06 18:39:59
| 2
| 327
|
ddgg
|
77,950,106
| 14,113,504
|
Numpy array not updating during a for loop
|
<p>I am trying to code something to solve differential equations. Here the case is rather simple (the famous harmonic oscillator), but I am trying to make general code that would work for an ODE of order n.</p>
<p>During my for loop, the values in X are not updating, and I checked that the expression on the right side of the equation should change them. The initial value of X, meaning X[0], is [1, 0], and the right side of X[k+1] is [1, -0.1], but when I print the value of X[k+1] it's still [0, 0], the value that was supposed to be replaced.</p>
<pre><code>from matplotlib import pyplot as plt
import numpy as np
def deriv(X_k, omega):
functions = [lambda _: X_k[1], lambda _: -omega**2*X_k[0]]
X_dot_k = np.array([f(1) for f in functions])
return X_dot_k
step = 0.1 # Time step
omega = 1 # Angular velocity
dimension = 2 # Important when there is n equations.
t0, tf, x0, v0 = 0, 10, 1, 0
t = np.linspace(t0, tf, int((tf-t0)/step) + 1)
X = np.asarray([[0 for _ in range(dimension)] for _ in t])
X[0] = [x0, v0] # Inital conditions.
for k in range(len(t)-1):
X[k+1] = X[k] + deriv(X[k], omega)*step
plt.plot(t, X[:, 0], label="position")
plt.xlabel("time (s)")
plt.ylabel("position (AU)")
plt.title("Position in function of time.")
plt.show()
</code></pre>
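<p>A hedged sketch of the likely cause (my diagnosis, not confirmed by the asker): building <code>X</code> from nested Python <code>int</code> lists gives it an integer dtype, and assigning float results into an integer array silently truncates them; declaring a float dtype up front preserves the updates:</p>

```python
import numpy as np

# Nested int lists produce an integer array, like the list comprehension in the question.
X = np.asarray([[0, 0], [0, 0]])
step = 0.1

# Assigning float results into an int array truncates toward zero:
X[1] = X[0] + np.array([1.0, -1.0]) * step   # [0.1, -0.1] becomes [0, 0]

# Declaring a float dtype keeps the fractional updates:
Xf = np.zeros((2, 2), dtype=float)
Xf[1] = Xf[0] + np.array([1.0, -1.0]) * step
```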
|
<python><numpy><differential-equations>
|
2024-02-06 18:36:24
| 1
| 726
|
Tirterra
|
77,949,983
| 18,419,414
|
A simple Flask server with connexion cannot be distribuate with pyinstaller
|
<h1>Context</h1>
<p>I’m trying to distribute my python server that uses <a href="https://github.com/spec-first/connexion" rel="nofollow noreferrer">connexion</a> with <a href="https://pyinstaller.org/en/stable/" rel="nofollow noreferrer">pyinstaller</a>.</p>
<p>Unfortunately, the executable produced by PyInstaller does not work. On the first request a 500 error is issued, which does not happen when the server runs in "normal" (non-bundled) mode.</p>
<pre><code>INFO: Uvicorn running on http://0.0.0.0:8080 (Press CTRL+C to quit)
ERROR: Exception in ASGI application
Traceback (most recent call last):
File "uvicorn/protocols/http/httptools_impl.py", line 419, in run_asgi
File "uvicorn/middleware/proxy_headers.py", line 84, in __call__
File "connexion/middleware/main.py", line 497, in __call__
self.app, self.middleware_stack = self._build_middleware_stack()
File "connexion/middleware/main.py", line 338, in _build_middleware_stack
app.add_api(
File "connexion/apps/flask.py", line 141, in add_api
self.app.register_blueprint(api.blueprint)
File "flask/sansio/scaffold.py", line 46, in wrapper_func
File "flask/sansio/app.py", line 599, in register_blueprint
File "flask/sansio/blueprints.py", line 310, in register
ValueError: The name '/swagger' is already registered for a different blueprint. Use 'name=' to provide a unique name.
INFO: 127.0.0.1:56490 - "GET /swagger/ui/ HTTP/1.1" 500 Internal Server Error
</code></pre>
<p>I created a simple server available on GitHub: <a href="https://github.com/Brinfer/helloworld-connexion" rel="nofollow noreferrer">repository</a></p>
<h1>The question</h1>
<p>Does anyone know why I have this error and how I can correct it?</p>
<p>I tried changing the server url and changing when the API is initialized, but it didn't change anything.</p>
<p>The aim would be to be able to distribute a server with a simple executable on any linux system.</p>
|
<python><flask><pyinstaller><connexion>
|
2024-02-06 18:12:45
| 0
| 437
|
Brinfer
|
77,949,946
| 147,175
|
Are system installed python applications or tools vulnerable to breaking if I uninstall a pip package as my username not sudo?
|
<p>As much as possible, this is a cross-platform question.</p>
<p>After I have created and sourced a Python virtual environment, when I encounter pip install errors like</p>
<pre><code>pip install tensorflow
...
Downloading protobuf-4.23.4-cp37-abi3-manylinux2014_x86_64.whl (304 kB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 304.5/304.5 kB 3.4 MB/s eta 0:00:00
Installing collected packages: protobuf
ERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts.
streamlit 1.12.0 requires protobuf<4,>=3.12, but you have protobuf 4.23.4 which is incompatible.
</code></pre>
<p>then outside of this python virt env I see I have streamlit installed as shown by issuing</p>
<pre><code>pip list|grep streamlit
streamlit 1.12.0
</code></pre>
<p>naturally I can remove this pip package using</p>
<pre><code>pip uninstall streamlit
</code></pre>
<p>I do not care about breaking anything I have installed that depends on <code>streamlit</code> or similar, yet I want to avoid breaking other folks' Python applications/tools/etc.</p>
<p>How tight is the referential integrity of system-installed Python applications regarding pip packages?</p>
<p><strong>UPDATE</strong></p>
<p>my guess is below should show dependencies on package in question</p>
<pre><code>pkg=streamlit
grep ^Required-by <(pip show $pkg)
</code></pre>
<p>so if nothing is <code>Required-by</code> the package in question, I should be free to uninstall it... or no?</p>
<p>I do not set env var PYTHONPATH</p>
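<p>For reference, a stdlib-only sketch of the same check as <code>Required-by</code> (the helper is mine; pip's own resolver is the authority, this is just an approximation built from installed package metadata):</p>

```python
import re
from importlib import metadata

def dependents_of(pkg: str) -> list[str]:
    """List installed distributions whose metadata declares a requirement on pkg."""
    target = pkg.lower().replace("_", "-")
    found = []
    for dist in metadata.distributions():
        for req in dist.requires or []:
            # A requirement string starts with the project name (PEP 508).
            m = re.match(r"[A-Za-z0-9][A-Za-z0-9._-]*", req)
            if m and m.group(0).lower().replace("_", "-") == target:
                found.append(dist.metadata["Name"])
                break
    return found
```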
|
<python><pip>
|
2024-02-06 18:06:29
| 1
| 28,509
|
John Scott Stensland
|
77,949,797
| 4,434,941
|
Updating a python scraper that no longer works due to javascript blocks
|
<p>So I had written a scraper (deployed on a schedule) which worked pretty well until recently, when the site (NYTimes) made changes that broke it.</p>
<p>Essentially, the scraper worked by going to an article URL and using XPath to extract the full article content, which I would pass to an LLM in order to summarize it.</p>
<p>Here's the code:</p>
<pre><code>import html2text
import requests
from scrapy.selector import Selector
url ='https://www.nytimes.com/2024/02/06/us/politics/border-ukraine-israel-aid-congress.html' #works with any nytimes article url
headers = {
'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/88.0.4324.150 Safari/537.36',
'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,image/apng,*/*;q=0.8',
'Accept-Language': 'en-US,en;q=0.9',
'Accept-Encoding': 'gzip, deflate, br',
'DNT': '1',
'Connection': 'keep-alive',
'Upgrade-Insecure-Requests': '1',
}
response = requests.request("GET", url, headers=headers)
# response
sel = Selector(text=response.text)
b = sel.xpath('//section[@class="meteredContent css-1r7ky0e"]')
art = b.xpath('.//p[@class="css-at9mc1 evys1bk0"]').extract() #article body
art2 = '\n '.join(art) #newline
art2 = html2text.html2text(art2) #convert from html to human/LLM readable text
print(art2)
#pass to an LLM via api
</code></pre>
<p>Previously, the code above would return the entire article. Now, it returns a partial article because a JavaScript screen is thrown up asking for human verification before the full article can be rendered.</p>
<p>I have two questions:</p>
<ol>
<li>This is a fairly high-frequency call so is there anything clever I can do to get around this limitation without incorporating a heavy stack that involves rendering javascript through a browser for every call, e.g. using hidden API endpoints or incorporating header values that would suggest this call is from a human?</li>
<li>If the answer to 1 is No, then what is the simplest, most lightweight library, package and approach to render javascript and scrape it for this type of site? I am running this script on a very lightweight server so I really want to try and NOT bloat the memory/infrastructure requirements needed to run this code</li>
</ol>
<p>Thanks so much</p>
|
<python><web-scraping><python-requests>
|
2024-02-06 17:41:10
| 1
| 405
|
jay queue
|
77,949,791
| 6,694,814
|
OpenPyxl - problem with saving file to the specified directory
|
<p>I would like to save my excel files to the directory pointed out by the user.</p>
<pre><code>file_path = filedialog.askdirectory()
print(y)
for x in range(2, (int(y))+1):
newname = "%03d" % x
wb.save(f''+file_path+filewithoutextension+'- '+newname+'.'+fileextension)
print (filewithoutextension+'- '+newname+'.'+fileextension)
</code></pre>
<p>The code above doesn't work properly; I end up in the situation shown in the image below.</p>
<p><a href="https://i.sstatic.net/F0PRX.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/F0PRX.png" alt="enter image description here" /></a></p>
<p>Despite saving the file in the selected directory, the filename inherits the name of the directory folder and is saved in the same folder where the python file is located.</p>
<p>How can I fix it?</p>
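<p>A hedged sketch of the usual fix (variable names taken from the question): build the target path with <code>os.path.join</code> instead of string concatenation, so the directory and filename are joined with a proper separator and the filename no longer absorbs the directory name:</p>

```python
import os

def build_save_path(directory: str, stem: str, index: int, ext: str) -> str:
    """Join the chosen directory with a numbered filename, e.g. 'name- 002.xlsx'."""
    return os.path.join(directory, f"{stem}- {index:03d}.{ext}")

# wb.save(build_save_path(file_path, filewithoutextension, x, fileextension))
```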
|
<python><python-3.x>
|
2024-02-06 17:39:59
| 2
| 1,556
|
Geographos
|
77,949,736
| 11,729,033
|
How do I type hint a function whose return type is specified by an argument, when that argument can be a type hint as well as a bare type?
|
<p>I have a function that accepts a type (or type hint) and returns an object matching that type (or hint). For example:</p>
<pre><code>def get_object_of_type(typ):
...
x = get_object_of_type(int)
# x is now guaranteed to be an int
y = get_object_of_type(str | bytes)
# y is now guaranteed to be either a string or a bytes object
</code></pre>
<p>How can I type hint this function to make this behavior clear to static analysers?</p>
<p>The following solution (with Python 3.12 syntax) works for types but not, for example, type union expressions:</p>
<pre><code>def get_object_of_type[T](typ: type[T]) -> T:
...
</code></pre>
|
<python><python-typing>
|
2024-02-06 17:32:11
| 4
| 314
|
J E K
|
77,949,719
| 17,040,989
|
plotly title_side position not working properly
|
<p>I'm a greenhorn with Python plots, but I was trying to do a PCA for different human populations; while I'm still working on the actual data, the main issue I'm having with visualizations in Python, as opposed to <code>R</code>, is that Plotly doesn't make things very easy...</p>
<p>Specifically, I spent some time figuring out how to set the layout of my plot but I can't get the <em>legend title</em> to show in the middle of the legend itself.</p>
<p>Supposedly, there is an option, <code>title_side</code>, which should do this; instead, it corrupts the plot legend. If anyone has any experience with this, any help would be much appreciated. Below is the code for plotting, and the PCA plot.</p>
<pre><code>fig = px.scatter(pca, x='PC1', y='PC2', color='#LOC', template="seaborn")
fig.update_layout(legend=dict(
title="<i> metapopulation <i>",
title_side="top",
orientation="h",
entrywidthmode='fraction',
entrywidth=.2,
x=.5),
autosize=False,
height=800,
width=800
)
fig.update_xaxes(title_text = f"PC1 ( {pve.at[pve.index[0], 0]} %)",
range=(-0.05, 0.2),
constrain='domain')
fig.update_yaxes(title_text = f"PC2 ( {pve.at[pve.index[1], 0]} %)",
scaleanchor="x",
scaleratio=1)
fig.show()
</code></pre>
<p><strong>EDIT</strong> for @Naren Murali
This is what I mean by corrupted legend and figure too, I just realized the use of a <code>dict</code> somehow shifts the whole scatter plot...
<a href="https://i.sstatic.net/J8S9n.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/J8S9n.png" alt="enter image description here" /></a></p>
<p><strong>Original Image</strong>
<a href="https://i.sstatic.net/AAAlF.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/AAAlF.png" alt="enter image description here" /></a></p>
|
<python><plotly><legend><centering><legend-properties>
|
2024-02-06 17:29:58
| 1
| 403
|
Matteo
|
77,949,519
| 3,120,501
|
Python multiprocessing/Pathos Process pickling error - Numpy vectorised function
|
<p>I'm trying to parallelise my program by running the main bulk of the code in different processes and drawing the results together periodically. The format of my code is similar to the following example (which, unfortunately, works):</p>
<pre><code>import abc
import numpy as np
from multiprocessing import Process
# from multiprocess.context import Process

class ProblemClassBase(metaclass=abc.ABCMeta):
    def __init__(self):
        self.problem_function_vectorised = np.vectorize(self.problem_function, otypes=[np.float64])

    @abc.abstractmethod
    def problem_function(self, arg):
        pass

    def use(self, arg):
        return self.problem_function_vectorised(arg)

class ProblemClass(ProblemClassBase):
    def __init__(self):
        super().__init__()

    def problem_function(self, arg):
        # Arbitrary example
        if arg > 2:
            return arg + 1
        else:
            return arg - 1

class NestingClass:
    def __init__(self, problem_object):
        self.po = problem_object

    def make_problem(self, arg):
        return self.po.use(arg)

class MainClass:
    def __init__(self):
        self.problem_obj = ProblemClass()
        self.nesting_obj = NestingClass(self.problem_obj)

    def run(self, arg):
        return self.nesting_obj.make_problem(arg)

    # Starting point for running the parallelisation
    @classmethod
    def run_multiproc(cls, arg):
        obj = cls()
        # Would somehow return this value
        print(obj.run(arg))

def run_parallel():
    # In reality I would start a number of processes
    proc = Process(target=MainClass.run_multiproc, args=(5,))
    proc.start()
    proc.join()

if __name__ == "__main__":
    run_parallel()
</code></pre>
<p>When I try to run my actual code, I get the error messages:</p>
<ul>
<li>_pickle.PicklingError: Can't pickle <ufunc '_wind_dfn (vectorized)'>: attribute lookup _wind_dfn (vectorized) on <strong>main</strong> failed (for multiprocessing.Process)</li>
<li>_pickle.PicklingError: Can't pickle <ufunc '_wind_dfn (vectorized)'>: it's not found as <strong>main</strong>._wind_dfn (vectorized) (for multiprocess.context.Process)</li>
</ul>
<p>Here '_wind_dfn' is the equivalent of 'problem_function' in the code above.</p>
<p>I've seen some answers which refer to the problems being caused by nesting of the code, and that rearranging things can help, but I'm not entirely sure how to fix it. Does anybody have any ideas on how I could fix this?</p>
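<p>One common fix, sketched here under the assumption that the vectorized wrapper is the only unpicklable attribute (ABC machinery and the subclass are omitted for brevity): drop the <code>np.vectorize</code> object from the pickled state and rebuild it after unpickling via <code>__getstate__</code>/<code>__setstate__</code>:</p>

```python
import numpy as np

class ProblemClassBase:
    def __init__(self):
        self._vec = np.vectorize(self.problem_function, otypes=[np.float64])

    def __getstate__(self):
        # Drop the unpicklable vectorized ufunc before pickling.
        state = self.__dict__.copy()
        state.pop("_vec", None)
        return state

    def __setstate__(self, state):
        # Rebuild it in the receiving process after unpickling.
        self.__dict__.update(state)
        self._vec = np.vectorize(self.problem_function, otypes=[np.float64])

    def problem_function(self, arg):
        return arg + 1 if arg > 2 else arg - 1

    def use(self, arg):
        return self._vec(arg)
```

With this in place, objects holding the wrapper pickle normally and can cross the `Process` boundary.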
|
<python><multiprocessing><vectorization><pathos>
|
2024-02-06 16:56:49
| 1
| 528
|
LordCat
|
77,949,486
| 547,231
|
Pytorch: Appending tensors like a list
|
<p>I have a dataset with random data like the following:</p>
<pre><code>data = torch.normal(0, 1, size = (1, 3))
dataset = torch.tensor(data.T).float()
print(dataset)
#tensor([[-2.1445],
# [-1.3322],
# [-0.6355]])
</code></pre>
<p>And now I want to simulate Brownian motions which are started in the 1-dimensional points given by the dataset up to a certain time <code>t</code> with a step size <code>dt</code>. In the example above, the desired output should look like:</p>
<pre><code>#tensor([[-2.1445, -2.1035, -2.1022],
# [-1.3322, -1.3121, -1.3210],
# [-0.6355, -0.6156, -0.5999]])
</code></pre>
<p>for <code>dt = .01</code> and <code>t = .02</code>. That is, the first dimension corresponds to <code>t = 0</code>, the second to <code>t = .01</code>, the third to <code>t = .02</code> and so on in general. Here is what I tried:</p>
<pre><code>import numpy
import torch

def brownian_motion(x, t, dt):
    k = int(t / dt)
    path = [x]
    for i in range(k):
        xi = torch.normal(torch.zeros(x.shape), torch.ones(x.shape))
        x += numpy.sqrt(dt) * xi
        path.append(x)
    return path
</code></pre>
<p>However, this gives me a list of several tensors of size 3. But what I want to return is a tensor of size (k + 1, 3) (assuming the dataset size is 3). How can I do that?</p>
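<p>A sketch of one way to get that tensor: collect clones in the list (so the in-place update doesn't alias every entry) and call <code>torch.stack</code> at the end; transposing with <code>.T</code> then gives the one-row-per-start-point layout from the desired output:</p>

```python
import torch

def brownian_motion(x, t, dt):
    k = int(round(t / dt))          # round to avoid float-division surprises
    path = [x.clone()]              # clone so later steps don't alias entries
    for _ in range(k):
        x = x + (dt ** 0.5) * torch.randn_like(x)
        path.append(x)
    return torch.stack(path)        # shape (k + 1, *x.shape)

paths = brownian_motion(torch.zeros(3), t=0.02, dt=0.01)
```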
|
<python><pytorch>
|
2024-02-06 16:50:06
| 2
| 18,343
|
0xbadf00d
|
77,949,419
| 1,137,254
|
How to force an Async Context Manager to Exit
|
<p>I've been getting into Structured Concurrency recently and this is a pattern that keeps cropping up:</p>
<p>It's nice to use async context managers to access a resource - say, a websocket. That's all great if the websocket stays open, but what if it closes? Well, we expect our context to be forcefully exited - normally through an exception.</p>
<p>How can I write and implement a context manager that exhibits this behaviour? How can I throw an exception 'into' the calling codes open context? How can I forcefully exit a context?</p>
<p>Here's a simple setup, just for argument's sake:</p>
<pre class="lang-py prettyprint-override"><code># Let's pretend I'm implementing this:
class SomeServiceContextManager:
    def __init__(self, service):
        self.service = service

    async def __aenter__(self):
        await self.service.connect(self.connection_state_callback)
        return self.service

    async def __aexit__(self, exc_type, exc, tb):
        self.service.disconnect()
        return False

    def connection_state_callback(self, state):
        if state == "connection lost":
            print("WHAT DO I DO HERE? how do I inform my consumer and force the exit of their context manager?")

class Consumer:
    async def send_stuff(self):
        try:
            async with SomeServiceContextManager(self.service) as connected_service:
                while True:
                    await asyncio.sleep(1)
                    connected_service.send("hello")
        except ConnectionLostException: #&lt;&lt; how do I implement this from the ContextManager?
            print("Oh no my connection was lost!!")
</code></pre>
<p>How is this generally handled? It seems to be something I've run up into a couple of times when writing ContextManagers!</p>
<p>Here's a slightly more interesting example (hopefully) to demonstrate how things get a bit messy - say you are receiving through an async loop but want to close your connection if something downstream disconnects:</p>
<pre class="lang-py prettyprint-override"><code># Let's pretend I'm implementing this:
class SomeServiceContextManager:
    def __init__(self, service):
        self.service = service

    async def __aenter__(self):
        await self.service.connect(self.connection_state_callback)
        return self.service

    async def __aexit__(self, exc_type, exc, tb):
        self.service.disconnect()
        return False

    def connection_state_callback(self, state):
        if state == "connection lost":
            print("WHAT DO I DO HERE? how do I inform my consumer and force the exit of their context manager?")

class Consumer:
    async def translate_stuff_stuff(self):
        async with SomeOtherServiceContextManager(self.otherservice) as connected_other_service:
            try:
                async with SomeServiceContextManager(self.service) as connected_service:
                    for message in connected_other_service.messages():
                        connected_service.send("message received: " + message.text)
            except ConnectionLostException: #&lt;&lt; how do I implement this from the ContextManager?
                print("Oh no my connection was lost - I'll also drop out of the other service connection!!")
</code></pre>
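<p>For what it's worth, one pattern (a sketch with invented names, not a standard API): have the callback set an <code>asyncio.Event</code>, race the consumer's work against that event, and cancel and re-raise when the connection drops — this is essentially a hand-rolled version of what structured-concurrency cancel scopes do for you:</p>

```python
import asyncio

class ConnectionLostError(Exception):
    pass

class ServiceContext:
    def __init__(self):
        self._lost = asyncio.Event()

    def connection_state_callback(self, state):
        # Assumed to run on the event loop's thread; from another thread
        # use loop.call_soon_threadsafe(self._lost.set) instead.
        if state == "connection lost":
            self._lost.set()

    async def run(self, body):
        task = asyncio.ensure_future(body())
        watcher = asyncio.ensure_future(self._lost.wait())
        done, _ = await asyncio.wait(
            {task, watcher}, return_when=asyncio.FIRST_COMPLETED)
        if watcher in done and not task.done():
            task.cancel()                    # forcefully unwind the body
            try:
                await task
            except asyncio.CancelledError:
                pass
            raise ConnectionLostError        # surface it to the consumer
        watcher.cancel()
        return task.result()
```

The consumer then wraps its `async with` body in `run()` and catches `ConnectionLostError`, and any outer contexts unwind normally as the exception propagates.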
|
<python><python-asyncio><python-trio><structured-concurrency>
|
2024-02-06 16:39:53
| 3
| 3,495
|
Sam
|
77,949,414
| 13,102,905
|
python3 loop on listener based on keyboard pressed or released
|
<p>I tried running this code, using the Insert key to activate the status. I confirmed in the debugger that it actually sets <code>STATUS = True</code>, but it never stays in the loop, and I can't understand why it doesn't when I press the Insert key.</p>
<pre><code>from pynput.keyboard import Listener, Key

STATUS = False

def on_press(key):
    global STATUS
    if key == Key.insert:
        STATUS = True

def on_release(key):
    global STATUS
    if key == Key.end:
        STATUS = False

with Listener(on_press=on_press, on_release=on_release) as listener:
    while STATUS:
        print('on loop')
        pass
    listener.join()
</code></pre>
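<p>The flag is checked exactly once: pynput callbacks run on a separate listener thread, and the main thread reaches <code>while STATUS</code> (still <code>False</code>) long before Insert is pressed, so the loop exits immediately. A library-free sketch of the race and the fix — keep re-checking the flag in an outer polling loop (a helper thread stands in for the pynput listener here):</p>

```python
import threading
import time

STATUS = False

def flip():
    # Stands in for the pynput listener thread flipping the flag later.
    global STATUS
    time.sleep(0.05)
    STATUS = True

threading.Thread(target=flip).start()

entered = False
deadline = time.time() + 2.0
while time.time() < deadline:   # outer loop keeps re-checking the flag
    if STATUS:
        entered = True          # this would be the 'on loop' work
        break
    time.sleep(0.01)
```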
|
<python>
|
2024-02-06 16:39:37
| 0
| 1,652
|
Ming
|
77,949,363
| 17,323,391
|
AttributeError: module 'my_package' has no attribute 'member', but I can see 'my_package.member' in site_packages
|
<p>I have a Poetry package and a consuming Python application.</p>
<p>The Poetry package has this in its <code>pyproject.toml</code>:</p>
<pre><code>[tool.poetry]
name = "my-package"
version = "0.1.0"
description = ""
authors = [...]
readme = "README.md"
packages = [
    { include = "my_package/__init__.py", from = "." },
    { include = "my_package/transmitter/*", from = "." },
]
</code></pre>
<p>I am installing this as a git dependency in the consuming app, which has this requirement in its <code>pyproject.toml</code>:</p>
<pre><code>[tool.poetry.group.dev.dependencies]
my-package = { git = "git@github.com:example/my-company-my-package.git", branch = "my-feature-branch"}
</code></pre>
<p>I'm installing all deps using <code>poetry install</code>. After doing so, I can see two directories appear under <code>site_packages</code> in PyCharm (and under the virtual poetry environment):</p>
<ul>
<li><code>my_package</code></li>
<li><code>my_package-0.1.0.dist-info</code></li>
</ul>
<p>The former contains a <code>transmitter</code> package, which of course contains an <code>__init__.py</code>, as well as a <code>udp_transmitter.py</code> module. <code>__init__.py</code> contains <code>from .udp_transmitter import *</code></p>
<p><code>udp_transmitter.py</code> contains a <code>UDPTransmitter</code> class.</p>
<p>So, I would expect this import to work inside the consuming application:</p>
<p><code>from my_package.transmitter import UDPTransmitter</code></p>
<p>However, when running the code, I get:</p>
<p><code>AttributeError: module 'my_package' has no attribute 'transmitter'</code></p>
<p>But I can clearly see inside <code>site_packages</code> that there is a package named <code>my_package</code>, which contains a <code>transmitter</code> package, which in turn exports everything from a <code>udp_transmitter.py</code> module.</p>
<p>In summary, I have a Poetry package that, when installed as a (dev) dependency of another project, results in this:</p>
<pre><code>.
└── site_packages/
├── my_package/
│ ├── __init__.py
│ └── transmitter/
│ ├── __init__.py (imports * from udp_transmitter.py)
│ └── udp_transmitter.py
└── my_package-0.1.0.dist-info/
├── direct_url.json
├── INSTALLER
├── METADATA
├── RECORD
└── WHEEL
</code></pre>
<p>But I get an AttributeError if I try to import <code>my_package.transmitter</code>.</p>
<p>What could cause this?</p>
<p><strong>EDIT</strong></p>
<p>Importing this way works:
<code>from my_package.transmitter.udp_transmitter import UDPTransmitter</code></p>
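<p>For what it's worth, a simpler <code>packages</code> spec that includes the whole package tree (assuming the standard layout shown above) avoids listing individual files and the partial-inclusion pitfalls that can come with it:</p>

```toml
[tool.poetry]
packages = [{ include = "my_package" }]
```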
|
<python><python-import><python-packaging><python-poetry><pyproject.toml>
|
2024-02-06 16:32:46
| 0
| 310
|
404usernamenotfound
|
77,949,314
| 897,272
|
Parsing a string representation of typing to get types of a child
|
<p>I have a string representing the type of an object, in this case always a tuple, in a format consistent with how it would be written using type hinting. So something like this;</p>
<pre><code>Tuple[int, int]
</code></pre>
<p>I want to get out a list of types in the tuple. Unfortunately our simple regex failed due to it not handling cases in which the type inside the object is itself a collection or union. Ideally I should be able to take something like this:</p>
<pre><code>Tuple[Union[file.File, directory.Directory, Tuple[file.File, directory.Directory]], Tuple[file.File, directory.Directory]]
</code></pre>
<p>and get back a list with two elements:</p>
<pre><code>Union[file.File, directory.Directory, Tuple[file.File, directory.Directory]]
Tuple[file.File, directory.Directory]
</code></pre>
<p>but doing that using regex and string manipulation seems ugly and messy. Surely this is a use case that has come up before, and there is already some library for pulling apart typing like this?</p>
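<p>For reference, the standard library can already pull these apart once the string has been turned into a real annotation — <code>typing.get_args</code> returns the top-level members without any regex (evaluating the string first, e.g. with <code>eval</code> in a namespace that defines <code>Tuple</code>, <code>Union</code>, <code>file</code>, <code>directory</code>, is assumed here):</p>

```python
from typing import Tuple, Union, get_args

# A stand-in for the evaluated form of the string annotation:
hint = Tuple[Union[int, Tuple[int, str]], Tuple[int, str]]

# The two top-level elements of the Tuple, nested unions/tuples intact:
members = get_args(hint)
```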
|
<python><python-3.x><regex>
|
2024-02-06 16:24:02
| 3
| 6,521
|
dsollen
|
77,949,301
| 1,738,879
|
Pandas groupby with capture groups from extractall
|
<p>I am working with Pandas <code>extract()</code> method to feed the capture group into <code>groupby</code>. A small example of the result of this process is something like the one illustrated below:</p>
<pre><code>import pandas as pd
import re
from io import StringIO

DATA = StringIO('''
colA;colB;colC
Foo;1;1
Bar;2;2
Foo,Bar;3;3
''')

df = pd.read_csv(DATA, sep=';')

m1 = df['colA'].str.extract('(Bar|Foo)', flags=re.IGNORECASE, expand=False)

for t, _d in df.groupby(m1):
    t
    _d
# 'Bar'
# colA colB colC
# 1 Bar 2 2
#
# 'Foo'
# colA colB colC
# 0 Foo 1 1
# 2 Foo,Bar 3 3
</code></pre>
<p>However, the row with index <code>2</code> (third row) is only captured in the <code>Foo</code> group, whereas I wanted to capture it in both <code>Foo</code> and <code>Bar</code>.</p>
<p>Playing around with the <code>extractall()</code> method seems to capture all matched groups, but apparently it cannot be used together with <code>groupby()</code> because pandas complains about the grouper not being 1-dimensional: <code>ValueError: Grouper for '&lt;class 'pandas.core.frame.DataFrame'&gt;' not 1-dimensional</code></p>
<pre><code>m2 = df['colA'].str.extractall('(Bar|Foo)', flags=re.IGNORECASE)
# 0
# match
# 0 0 Foo
# 1 0 Bar
# 2 0 Foo
# 1 Bar
</code></pre>
<p>The desired output for <code>groupby()</code> would be something like the following:</p>
<pre><code>for t, _d in df.groupby(somematch):
    t
    _d
# 'Bar'
# colA colB colC
# 1 Bar 2 2
# 2 Foo,Bar 3 3
#
# 'Foo'
# colA colB colC
# 0 Foo 1 1
# 2 Foo,Bar 3 3
</code></pre>
<p>Any suggestions are welcome.</p>
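<p>A sketch of one way to combine <code>extractall()</code> with grouping (using the sample data above): group the matches themselves, then map each group's first index level back onto the original frame, so a row appears in every group it matched:</p>

```python
import re
from io import StringIO

import pandas as pd

DATA = StringIO('''
colA;colB;colC
Foo;1;1
Bar;2;2
Foo,Bar;3;3
''')
df = pd.read_csv(DATA, sep=';')

# One row per match, keyed by (original row label, match number).
m2 = df['colA'].str.extractall(r'(Bar|Foo)', flags=re.IGNORECASE)[0]

# Group the matches, then select the matching original rows for each value.
groups = {t: df.loc[s.index.get_level_values(0)] for t, s in m2.groupby(m2)}
```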
|
<python><python-3.x><pandas>
|
2024-02-06 16:22:16
| 2
| 1,925
|
PedroA
|
77,949,206
| 2,812,625
|
Split a Column Based on first instance
|
<p>Looking to split a df | series column into 2 parts based on the first "_"</p>
<p>example in column:</p>
<p>Male_85__and_over</p>
<pre><code>test['gender'] = test['column_Name_pivoted'].str.split('_').str[0]
test['age'] = test['column_Name_pivoted'].str.split('_',n=1).str[1:]
</code></pre>
<p>Output is not what I was looking for:</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>gender</th>
<th>age</th>
</tr>
</thead>
<tbody>
<tr>
<td>Male</td>
<td>[85__and_over]</td>
</tr>
</tbody>
</table>
</div>
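<p>The stray list comes from the trailing slice: <code>.str[1:]</code> takes a sub-list of the split result, while <code>.str[1]</code> takes the single element after the first <code>_</code>. A sketch with the sample value:</p>

```python
import pandas as pd

test = pd.DataFrame({'column_Name_pivoted': ['Male_85__and_over']})

parts = test['column_Name_pivoted'].str.split('_', n=1)  # split once only
test['gender'] = parts.str[0]
test['age'] = parts.str[1]   # scalar element, not a one-element list
```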
|
<python><split><series>
|
2024-02-06 16:08:45
| 3
| 446
|
Tinkinc
|
77,949,009
| 343,215
|
In Pandas, how can I combine a column-wise DataFrame with a row-wise DataFrame?
|
<p><strong>I have two DataFrames. One is column-wise, which has observations added by column. The second adds observations row-wise:</strong></p>
<pre><code> foo 2024-01-01 2024-02-01
-- ----- --------- ---------
0 4010 100.00 10.00
1 4020 101.00 11.00
2 4030 102.00 12.00
3 4040 101.00 11.00
Date Total
-- ---- -----
0 2024-01-01 35.86
1 2024-02-01 3.91
</code></pre>
<p><strong>I want to combine these two reports on the <code>Date</code> index/value, like this:</strong></p>
<pre><code> foo 2024-01-01 2024-02-01
-- ---- --------- ---------
0 4010 100.00 10.00
1 4020 101.00 11.00
2 4030 102.00 12.00
3 4040 101.00 11.00
4 2200 35.86 3.91 <<<
</code></pre>
<p>Here is the toy data I'm working with:</p>
<pre><code>from datetime import date

import pandas as pd

date1 = date(2024, 1, 1)
date2 = date(2024, 2, 1)
df_1 = pd.DataFrame({
"foo": ["4010", "4020", "4030", "4040"],
date1: [100.00, 101.00, 102.00, 101.00],
date2: [10.00, 11.00, 12.00, 11.00],})
df_2 = pd.DataFrame({
"Date": [date1, date2],
"Total": [35.86, 3.91]})
</code></pre>
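<p>A sketch of one approach with that toy data (re-declared here so the example is self-contained; the <code>2200</code> label is taken from the desired output above): pivot <code>df_2</code> into a single row keyed by date, then concatenate it onto <code>df_1</code>:</p>

```python
from datetime import date

import pandas as pd

date1, date2 = date(2024, 1, 1), date(2024, 2, 1)
df_1 = pd.DataFrame({
    "foo": ["4010", "4020", "4030", "4040"],
    date1: [100.0, 101.0, 102.0, 101.0],
    date2: [10.0, 11.0, 12.0, 11.0]})
df_2 = pd.DataFrame({"Date": [date1, date2], "Total": [35.86, 3.91]})

# Turn the row-wise totals into a Series indexed by date, so each total
# lines up with the matching date column of df_1.
totals = df_2.set_index("Date")["Total"]
new_row = pd.DataFrame([["2200", totals[date1], totals[date2]]],
                       columns=df_1.columns)
combined = pd.concat([df_1, new_row], ignore_index=True)
```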
|
<python><pandas>
|
2024-02-06 15:44:14
| 2
| 2,967
|
xtian
|
77,948,957
| 11,618,586
|
Identifying row numbers where value is stable before and after the value in the column hits a specified value
|
<h2>EDITED</h2>
<p>I have a pandas dataframe like so:</p>
<pre><code>data = {'ID': ['A', 'A', 'A', 'A', 'A', 'A', 'A', 'A', 'A', 'A', 'A', 'A', 'A', 'A', 'A', 'A', 'A', 'A', 'A', 'A', 'A', 'A', 'A', 'A', 'A', 'A', 'A', 'A', 'A', 'A', 'A', 'A', 'A', 'A', 'A', 'A', 'A', 'A', 'A', 'A', 'A', 'A', 'B', 'B', 'B', 'B', 'B', 'B', 'B', 'B', 'B', 'B'],
'column_1': [0, 0, 0, 0, 0.1, 1, 1.5, 2, 3, 4, 4.5, 5, 4.9, 3, 2, 1.8, 1, 0, 0, 1, 3, 0, 1.3, 2, 3, 4.3, 4.8, 5, 4.2, 3.5, 3, 2.6, 2, 1.9, 1, 0, 0, 0, 0, 0, 0.1, 0.2, 0.3, 1, 2, 3, 5, 4, 2, 0.5, 0, 0],
'column_2': [13, 25, 96, 59, 5, 92, 82, 141, 50, 85, 84, 113, 119, 128, 8, 133, 82, 10, 15, 62, 11, 68, 18, 24, 37, 55, 83, 48, 13, 81, 43, 36, 56, 43, 36, 46, 45, 127, 55, 67, 113, 98, 78, 78, 57, 131, 121, 126, 142, 51, 64, 95]}
ID column_1 column_2
0 A 0.0 13
1 A 0.0 25
2 A 0.0 96
3 A 0.0 59
4 A 0.1 5
5 A 1.0 92
6 A 1.5 82
7 A 2.0 141
8 A 3.0 50
9 A 4.0 85
10 A 4.5 84
11 A 5.0 113
12 A 4.9 119
13 A 3.0 128
14 A 2.0 8
15 A 1.8 133
16 A 1.0 82
17 A 0.0 10
18 A 0.0 15
19 A 1.0 62
20 A 3.0 11
21 A 0.0 68
22 A 1.3 18
23 A 2.0 24
24 A 3.0 37
25 A 4.3 55
26 A 4.8 83
27 A 5.0 48
28 A 4.2 13
29 A 3.5 81
30 A 3.0 43
31 A 2.6 36
32 A 2.0 56
33 A 1.9 43
34 A 1.0 36
35 A 0.0 46
36 A 0.0 45
37 A 0.0 127
38 A 0.0 55
39 A 0.0 67
40 A 0.1 113
41 A 0.2 98
42 B 0.3 78
43 B 1.0 78
44 B 2.0 57
45 B 3.0 131
46 B 5.0 121
47 B 4.0 126
48 B 2.0 142
49 B 0.5 51
50 B 0.0 64
51 B 0.0 95
</code></pre>
<p>Tracing back from when the value hits <code>5</code> in <code>column_1</code>, I want to find the value in <code>column_2</code> just before the value in <code>column_1</code> increased from <code>0</code> and just after it came back down to <code>0</code>. So, in the data frame above, the values in <code>column_2</code> would be <code>5</code>, <code>10</code> and <code>18</code>, <code>46</code>.
I want to perform some arithmetic and would like to add 2 columns <code>before</code> & <code>after</code> with those values grouped by the <code>ID</code> column.
The expected output would be:</p>
<pre><code> ID column_1 column_2 Before After
0 A 0.0 13 0 0
1 A 0.0 25 0 0
2 A 0.0 96 0 0
3 A 0.0 59 0 0
4 A 0.1 5 0 0
5 A 1.0 92 0 0
6 A 1.5 82 0 0
7 A 2.0 141 0 0
8 A 3.0 50 0 0
9 A 4.0 85 0 0
10 A 4.5 84 0 0
11 A 5.0 113 5 10
12 A 4.9 119 0 0
13 A 3.0 128 0 0
14 A 2.0 8 0 0
15 A 1.8 133 0 0
16 A 1.0 82 0 0
17 A 0.0 10 0 0
18 A 0.0 15 0 0
19 A 1.0 62 0 0
20 A 3.0 11 0 0
21 A 0.0 68 0 0
22 A 1.3 18 0 0
23 A 2.0 24 0 0
24 A 3.0 37 0 0
25 A 4.3 55 0 0
26 A 4.8 83 0 0
27 A 5.0 48 18 46
28 A 4.2 13 0 0
29 A 3.5 81 0 0
30 A 3.0 43 0 0
31 A 2.6 36 0 0
32 A 2.0 56 0 0
33 A 1.9 43 0 0
34 A 1.0 36 0 0
35 A 0.0 46 0 0
36 A 0.0 45 0 0
37 A 0.0 127 0 0
38 A 0.0 55 0 0
39 A 0.0 67 0 0
40 A 0.1 113 0 0
41 A 0.2 98 0 0
42 B 0.3 78 0 0
43 B 1.0 78 0 0
44 B 2.0 57 0 0
45 B 3.0 131 0 0
46 B 5.0 121 78 64
47 B 4.0 126 0 0
48 B 2.0 142 0 0
49 B 0.5 51 0 0
50 B 0.0 64 0 0
51 B 0.0 95 0 0
</code></pre>
<p>For a given <code>ID</code> if <code>column_1</code> starts with a non zero value, it should give the first value of <code>column_2</code> for that group.</p>
<p>The rest of the rows in <code>Before</code> and <code>After</code> can be filled with <code>null</code> or zeroes.
Is there an elegant way to achieve this?</p>
|
<python><python-3.x><pandas><periodicity>
|
2024-02-06 15:37:16
| 2
| 1,264
|
thentangler
|
77,948,918
| 893,254
|
How to merge dataframes in Pandas while maintaining multi-index when one dataframe is empty
|
<p>I am trying to merge some dataframes with the following line of code:</p>
<pre><code>df_list = [...]
required_columns = ['timestamp']
default_empty_dataframe = pandas.DataFrame(columns=required_columns)
default_empty_dataframe['timestamp'] = pandas.to_datetime(default_empty_dataframe['timestamp'], unit='ms')
default_empty_dataframe.set_index('timestamp', inplace=True, verify_integrity=True)
merged_df = \
functools.reduce(
lambda left, right: pandas.merge(left, right, on=['timestamp'], how='outer'),
df_list,
default_empty_dataframe
)
</code></pre>
<p>The value <code>default_empty_dataframe</code> is required to handle cases where the iterable <code>df_list</code> is empty. Without this <code>functools.reduce</code> doesn't know how to calculate a value.</p>
<p>The dataframes typically look like this and have a 2-level multiindex.</p>
<pre><code> AACT
open high low close
timestamp
2023-06-12 18:17:00 10.10 10.10 10.10 10.10
2023-06-12 18:22:00 10.10 10.10 10.10 10.10
2023-06-12 18:39:00 10.10 10.10 10.10 10.10
2023-06-12 19:25:00 10.15 10.15 10.15 10.15
2023-06-12 19:40:00 10.15 10.15 10.15 10.15
... ... ... ... ...
2023-12-29 20:55:00 10.42 10.42 10.42 10.42
2023-12-29 20:56:00 10.42 10.42 10.42 10.42
2023-12-29 20:57:00 10.42 10.42 10.42 10.42
2023-12-29 20:58:00 10.43 10.43 10.43 10.43
2023-12-29 20:59:00 10.44 10.44 10.44 10.44
[1005 rows x 8 columns]}
</code></pre>
<p>This is what the <code>default_empty_dataframe</code> looks like.</p>
<pre><code>Empty DataFrame
Columns: []
Index: []
</code></pre>
<p>The merge code produces this warning:</p>
<pre><code>FutureWarning: merging between different levels is deprecated and will be removed in a future version. (1 levels on the left, 2 on the right)
</code></pre>
<p>I understand why this warning is produced. In principle it doesn't make much logical sense to merge a dataframe with a 2-level multi-index with something which (I assume) has a 1-level multi index. (The empty dataframe contains no columns, but presumably by default a column index has 1 level.)</p>
<p>This is the result of merging. The index is flattened.</p>
<pre><code> (AACT, open) (AACT, high) (AACT, low) (AACT, close)
timestamp
2023-06-12 18:17:00 10.10 10.10 10.10 10.10
2023-06-12 18:22:00 10.10 10.10 10.10 10.10
2023-06-12 18:39:00 10.10 10.10 10.10 10.10
2023-06-12 19:25:00 10.15 10.15 10.15 10.15
2023-06-12 19:40:00 10.15 10.15 10.15 10.15
... ... ... ... ...
2023-12-29 20:55:00 10.42 10.42 10.42 10.42
2023-12-29 20:56:00 10.42 10.42 10.42 10.42
2023-12-29 20:57:00 10.42 10.42 10.42 10.42
2023-12-29 20:58:00 10.43 10.43 10.43 10.43
2023-12-29 20:59:00 10.44 10.44 10.44 10.44
</code></pre>
<p>It should be possible to make this merge operation work, but I don't understand how to do it and I don't have any ideas about how I might proceed.</p>
<p>I would have expected it to work since there are no columns in the empty dataframe. (But it doesn't.)</p>
<p>What <strong>does</strong> work is to create an empty dataframe with the same column names/multi-index structure, using a pseudo column name like <code>DELETE_ME</code> in place of <code>AACT</code>. But then after merging, the returned dataframe contains a set of 4 columns which are entirely populated with NANs, and so an extra step is required to then delete these columns.</p>
<p>That is obviously not a very efficient or elegant solution...</p>
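<p>One fix to sketch: give the empty sentinel frame an empty <em>two-level</em> column index, so its level count matches the real frames and nothing gets flattened. <code>MultiIndex.from_arrays</code> is used because <code>from_tuples([])</code> cannot infer the level count from an empty list; <code>join</code> is shown since the frames are index-aligned, but the same idea applies to an index-based merge:</p>

```python
import pandas as pd

# Empty sentinel with zero columns but two column levels.
empty = pd.DataFrame(
    columns=pd.MultiIndex.from_arrays([[], []]),
    index=pd.DatetimeIndex([], name="timestamp"),
)

# A tiny stand-in for one of the real frames from df_list.
real = pd.DataFrame(
    [[10.10, 10.10]],
    index=pd.DatetimeIndex(["2023-06-12 18:17:00"], name="timestamp"),
    columns=pd.MultiIndex.from_tuples([("AACT", "open"), ("AACT", "close")]),
)

merged = empty.join(real, how="outer")  # level counts match, levels preserved
```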
|
<python><pandas><dataframe>
|
2024-02-06 15:30:58
| 1
| 18,579
|
user2138149
|
77,948,903
| 6,231,251
|
'No matching distribution found for setuptools' while building my own pip package
|
<p>I am building a Python package using setuptools. This is the basic structure:</p>
<pre><code>/path/to/project/
├── myproj/
│ ├── __init__.py
│ └── my_module.py
├── pyproject.toml
├── setup.cfg
└── ## other stuff
</code></pre>
<p>This is the content of <code>pyproject.toml</code></p>
<pre><code>[build-system]
requires = [
"setuptools",
"setuptools-scm"]
build-backend = "setuptools.build_meta"
</code></pre>
<p>and this is the content of <code>setup.cfg</code></p>
<pre><code>[project]
name = "my_package"
author = 'my name'
requires-python = ">=3.8"
keywords = ["one", "two"]
license = {text = "BSD-3-Clause"}
classifiers = []
dependencies = []

[metadata]
version = attr: my_package.__version__

[options]
zip_safe = True
packages = my_package
include_package_data = True
install_requires =
    matplotlib==3.6.0
    spacy==3.7.2
    networkx>=2.6.3
    nltk>=3.7
    shapely>=1.8.4
    pandas>=1.3.5
    community>=1.0.0b1
    python-louvain>=0.16
    numpy==1.23.4
    scipy

[options.package_data]
my_package = ## pointers to data

[options.packages.find]
exclude =
    examples*
    demos*
</code></pre>
<p>I can successfully run <code>python -m build</code>. Then, if I <code>pip install</code> it locally, i.e., pointing directly to the <code>.tar</code> file that was built, it installs fine.</p>
<p>However, if I upload it to test.pypi and then I <code>pip install</code> it from test.pypi, I get the following error:</p>
<pre><code>pip subprocess to install build dependencies did not run successfully.
│ exit code: 1
╰─> [3 lines of output]
Looking in indexes: https://test.pypi.org/simple/
ERROR: Could not find a version that satisfies the requirement setuptools (from versions: none)
ERROR: No matching distribution found for setuptools
[end of output]
note: This error originates from a subprocess, and is likely not a problem with pip.
error: subprocess-exited-with-error
× pip subprocess to install build dependencies did not run successfully.
│ exit code: 1
╰─> See above for output.
note: This error originates from a subprocess, and is likely not a problem with pip.
</code></pre>
<p>Looks like an issue with setuptools, but I really cannot figure out how to fix it. Any suggestions?</p>
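<p>The usual cause is that test.pypi.org does not mirror build dependencies such as <code>setuptools</code>, so pip's isolated build environment cannot resolve them from that index alone. A sketch of the common workaround — keep test.pypi as the primary index but fall back to the real PyPI for everything it lacks (the package name is the hypothetical one from above, and <code>--dry-run</code> just avoids actually installing anything here):</p>

```shell
python -m pip install --dry-run \
    --index-url https://test.pypi.org/simple/ \
    --extra-index-url https://pypi.org/simple/ \
    my_package || true
```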
|
<python><pip><setuptools><pypi><python-packaging>
|
2024-02-06 15:28:40
| 1
| 882
|
sato
|
77,948,428
| 8,388,707
|
Issue with Qdrant collection creation | not sure which input format supports filtering too
|
<p>this is my sample input dataframe</p>
<pre><code>data = {
'name': ['Entry 1', 'Entry 2', 'Entry 3'],
'urls': ['http://example.com/1', 'http://example.com/2', 'http://example.com/3'],
'text': ['Text for Entry 1', 'Text for Entry 2', 'Text for Entry 3'],
'type': ['Type A', 'Type B', 'Type C']}
</code></pre>
<p>I want to index it on Qdrant Cloud, and for that I have tried the LangChain code below, following the Qdrant documentation:</p>
<pre><code>from langchain.vectorstores import Qdrant
texts = data["text"].tolist()
model_name = "sentence-transformers/sentence-t5-base"
embeddings = HuggingFaceEmbeddings(
model_name=model_name)
model_kwargs = {'device': 'cpu'}
encode_kwargs = {'normalize_embeddings': False}
doc_store = Qdrant.from_texts(
texts,
embeddings,
url=qdrant_url,
api_key=qdrant_key,
collection_name="my-collection"
)
</code></pre>
<p>This method does not store the page metadata and vectors in the cloud, even though I was following this official documentation: <a href="https://qdrant.tech/documentation/frameworks/langchain/" rel="nofollow noreferrer">https://qdrant.tech/documentation/frameworks/langchain/</a></p>
<p>this is the way it's stored in the cloud
<a href="https://i.sstatic.net/atkNm.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/atkNm.png" alt="enter image description here" /></a></p>
<p>You can see the blank metadata and vectors. Can someone please help me here? I can't find much support for this in LangChain.</p>
|
<python><langchain><qdrant>
|
2024-02-06 14:22:26
| 2
| 1,592
|
Vineet
|
77,948,246
| 11,064,604
|
python multiprocessing.pool but with N cores per process
|
<p>Python's <code>multiprocessing.pool</code> module gives one core per process. Is there an equivalent that instead gives out <strong>N</strong> cores per process? i.e. the <code>multiprocessing.cool_new_pool</code> defined below?</p>
<pre><code>import multiprocessing

def func(x):
    return x**2

TOTAL_CORES = 10  # os.cpu_count() for nonminimal example
N_CORES_PER_PROCESS = 2

#p = multiprocessing.Pool(N_CORES)  # multiprocessing with pool
p = multiprocessing.cool_new_pool(TOTAL_CORES, N_CORES_PER_PROCESS)  # Desired functionality
results = p.map(func, [i for i in range(10000)])
</code></pre>
|
<python><multiprocessing><python-multiprocessing>
|
2024-02-06 13:58:22
| 1
| 353
|
Ottpocket
|
77,948,215
| 12,018,554
|
numpy: efficient way to apply multiple array transformations
|
<p>In Linux, I take screenshots and draw squares at certain coordinates. My code works fine, but I transform the numpy array several times.</p>
<pre><code>
def get_screenshot(self):
    # Take screenshot
    pixmap = window.get_image(0, 0, width, height, X.ZPixmap, 0xffffffff)
    # type(data) = <class 'bytes'>
    data = pixmap.data
    # type(data) = <class 'bytearray'>
    data = bytearray(data)
    self.screenshot = np.frombuffer(data, dtype='uint8').reshape((height, width, 4))

def getRGBScreenShot(self):
    with self.lock:
        image = self.screenshot[..., :3]
        image = np.ascontiguousarray(image)
        return image
</code></pre>
<p>1- If I don't use <code>data = bytearray(data)</code>, the numpy array will be read-only.</p>
<p>2- <code>image = np.frombuffer(data, dtype='uint8').reshape((height, width, 4))</code> creates the numpy array.</p>
<p>3- <code>self.screenshot[..., :3]</code> to convert to RGB. I guess it has to be.</p>
<p>4- If I don't use <code>np.ascontiguousarray</code>, the array will have <code>C_CONTIGUOUS : False</code>.</p>
<p>Can the above steps be optimized?
Is constantly processing an array like this bad for performance?</p>
<p>also for grayscale the situation is worse:</p>
<pre><code>
def getGrayScaleScreenShot(self):
    with self.lock:
        # Convert to grayscale using luminosity formula
        gray_image = np.dot(self.screenshot[..., :3], [0.2989, 0.5870, 0.1140])
        gray_image = np.ascontiguousarray(gray_image.astype(np.uint8))
        return gray_image
</code></pre>
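<p>A small sketch of the grayscale step with fewer temporaries: precompute the luminosity weights once as <code>float32</code> and fold the <code>uint8</code> cast into the final contiguous copy, so only one matmul and one cast-plus-copy happen per frame:</p>

```python
import numpy as np

# Precomputed once instead of rebuilt from a Python list on every call.
WEIGHTS = np.array([0.2989, 0.5870, 0.1140], dtype=np.float32)

def to_gray(screenshot: np.ndarray) -> np.ndarray:
    # One matmul over the colour channels, then a single cast + C-order copy.
    gray = screenshot[..., :3].astype(np.float32) @ WEIGHTS
    return np.ascontiguousarray(gray, dtype=np.uint8)
```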
|
<python><numpy>
|
2024-02-06 13:53:07
| 1
| 557
|
Qwe Qwe
|
77,948,193
| 2,190,411
|
jax/jaxopt solution for linear programming?
|
<p><strong>EDIT:</strong> I just found <a href="https://ott-jax.readthedocs.io/en/latest/" rel="nofollow noreferrer">ott-jax</a> which looks like it might be what I need, but if possible I'd still like to know what I did wrong with jaxopt below!</p>
<p><strong>Original:</strong> I'm trying to solve an optimal transport problem, and after following this <a href="https://alexhwilliams.info/itsneuronalblog/2020/10/09/optimal-transport/#f7b" rel="nofollow noreferrer">great blog post</a> I have a working version in numpy/scipy (comments removed for brevity).</p>
<p>In trying to get a jax version of this working I came across <a href="https://github.com/google/jax/issues/12827" rel="nofollow noreferrer">this issue</a> and tried looking at the jaxopt library but have not been able to find an implementation of linprog or linear programming (LP). I believe LP is a subset of quadratic programming which jaxopt does implement, but have not been able to replicate the numpy version successfully. Any idea where I am going wrong or how else I can solve this?</p>
<pre class="lang-py prettyprint-override"><code>import jax
import jax.numpy as jnp
import jaxopt
import numpy as np
from scipy.optimize import linprog
from scipy.spatial.distance import pdist, squareform
from scipy.special import softmax

jax.config.update('jax_platform_name', 'cpu')


def prep_arrays(x, p, q):
    n, d = x.shape
    C = squareform(pdist(x, metric="sqeuclidean"))
    Ap, Aq = [], []
    z = np.zeros((n, n))
    z[:, 0] = 1
    for i in range(n):
        Ap.append(z.ravel())
        Aq.append(z.transpose().ravel())
        z = np.roll(z, 1, axis=1)
    A = np.row_stack((Ap, Aq))[:-1]
    b = np.concatenate((p, q))[:-1]
    return n, C, A, b


def demo_wasserstein(x, p, q):
    n, C, A, b = prep_arrays(x, p, q)
    result = linprog(C.ravel(), A_eq=A, b_eq=b)
    T = result.x.reshape((n, n))
    return np.sqrt(np.sum(T * C)), T


def jax_attempt_1(x, p, q):
    n, C, A, b = prep_arrays(x, p, q)
    C, A, b = jnp.array(C), jnp.array(A), jnp.array(b)

    def matvec_Q(params_Q, u):
        del params_Q
        return jnp.zeros_like(u)  # no quadratic term so Q is just 0

    def matvec_A(params_A, u):
        return jnp.dot(params_A, u)

    hyper_params = dict(params_obj=(None, C.ravel()), params_eq=A, params_ineq=(b, b))
    osqp = jaxopt.BoxOSQP(matvec_Q=matvec_Q, matvec_A=matvec_A)
    sol, state = osqp.run(None, **hyper_params)
    T = sol.primal[0].reshape((n, n))
    return np.sqrt(np.sum(T * C)), np.array(T)


def jax_attempt_2(x, p, q):
    n, C, A, b = prep_arrays(x, p, q)
    C, A, b = jnp.array(C), jnp.array(A), jnp.array(b)

    def fun(T, params_obj):
        _, c = params_obj
        return jnp.sum(T * c)

    def matvec_A(params_A, u):
        return jnp.dot(params_A, u)

    # solver = jaxopt.EqualityConstrainedQP(fun=fun, matvec_A=matvec_A)
    solver = jaxopt.OSQP(fun=fun, matvec_A=matvec_A)
    init_T = jnp.zeros((16, 16))
    hyper_params = dict(params_obj=(None, C.ravel()), params_eq=(A, b), params_ineq=None)
    init_params = solver.init_params(init_T.ravel(), **hyper_params)
    sol, state = solver.run(init_params=init_params, **hyper_params)
    T = sol.primal.reshape((n, n))
    return np.sqrt(np.sum(T * C)), np.array(T)


if __name__ == '__main__':
    np.random.seed(0)
    n = 16
    q_values = np.random.normal(size=n)
    p = np.full(n, 1. / n)
    q = softmax(q_values)
    x = np.random.uniform(-1., 1., (n, 1))

    dist_numpy, plan_numpy = demo_wasserstein(x, p, q)
    dist_jax_1, plan_jax_1 = jax_attempt_1(x, p, q)
    dist_jax_2, plan_jax_2 = jax_attempt_2(x, p, q)

    print(f'numpy: dist {dist_numpy}, min {plan_numpy.min()}, max {plan_numpy.max()}')
    print(f'jax_1: dist {dist_jax_1}, min {plan_jax_1.min()}, max {plan_jax_1.max()}')
    print(f'jax_2: dist {dist_jax_2}, min {plan_jax_2.min()}, max {plan_jax_2.max()}')

    # numpy: dist 0.18283759367232585, min 0.0, max 0.06250000000000001
    # jax_1: dist nan, min -395690848.0, max 453536128.0
    # jax_2: dist nan, min -461479360.0, max 528943168.0
</code></pre>
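One hedged guess at the cause: scipy's `linprog` applies `bounds=(0, None)` by default, so the numpy version silently enforces the nonnegativity constraint `T >= 0` that a transport plan needs, while neither jaxopt attempt adds it, which would leave the LP unbounded (consistent with the huge negative entries and `nan` distances). A minimal scipy-only illustration of that default on a toy LP (not the transport problem itself):

```python
import numpy as np
from scipy.optimize import linprog

# Toy LP: minimise x - y subject to x + y == 1.
c = np.array([1.0, -1.0])
A_eq = np.array([[1.0, 1.0]])
b_eq = np.array([1.0])

# Default bounds are (0, None): variables are implicitly nonnegative,
# so the optimum is x=0, y=1 with objective -1.
bounded = linprog(c, A_eq=A_eq, b_eq=b_eq)

# Drop the implicit nonnegativity and the same LP becomes unbounded
# (status 3), just as a transport LP does without T >= 0.
free = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=(None, None))

print(bounded.status, bounded.fun)  # 0 -1.0
print(free.status)                  # 3 (unbounded)
```

If that is indeed the issue, adding bound constraints on the primal variables (e.g. an identity block in `A` with lower bound 0 and upper bound +inf for `BoxOSQP`) may recover the numpy result.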
|
<python><numpy><jax><earth-movers-distance>
|
2024-02-06 13:49:45
| 1
| 470
|
logan
|
77,948,024
| 2,287,486
|
Pandas: compare rows of dataframe on the basis of multiple columns
|
<p>Following is the example dataset</p>
<pre><code>df1 = pd.DataFrame(
data=[['Afghanistan','2015','5.1'],
['Afghanistan','2015','6.1'],
['Bahrain','2020',''],
['Bahrain','2020','32'],
['Bahrain','2021','32'],
['Bahrain','2022','32']],
columns=['Country', 'Reference Year', 'value'])
</code></pre>
<p>I want to compare rows on the basis of multiple columns and flag inconsistencies. For example, if the Country and Reference Year are the same between two rows but the value differs, both rows should be flagged as "Invalid".</p>
<p>What I am trying is</p>
<pre><code>df1['Validity'] = np.where((df1[['Country', 'Reference Year']] == df1[['Country', 'Reference Year']]) & df1['value'] != df1['value'],'Valid','Invalid')
</code></pre>
<p>This gives me an error "ValueError: Expected a 1D array, got an array with shape (6, 8)". I believe this should be possible by using iloc or loc. But couldn't figure out the right way to do it.</p>
<p>My expected output is "Invalid" for row 1, row 2, row 3, row 4 and "valid" for row 5 and row 6.</p>
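A sketch of one common approach (using the sample frame above): count the distinct `value`s per `(Country, Reference Year)` group with `transform('nunique')`, which broadcasts the group result back to every row, then flag groups that contain more than one distinct value.

```python
import pandas as pd

df1 = pd.DataFrame(
    data=[['Afghanistan', '2015', '5.1'],
          ['Afghanistan', '2015', '6.1'],
          ['Bahrain', '2020', ''],
          ['Bahrain', '2020', '32'],
          ['Bahrain', '2021', '32'],
          ['Bahrain', '2022', '32']],
    columns=['Country', 'Reference Year', 'value'])

# For every row, count how many distinct values its (Country, Year)
# group contains; transform() aligns the group result back per row.
distinct = df1.groupby(['Country', 'Reference Year'])['value'].transform('nunique')
df1['Validity'] = distinct.gt(1).map({True: 'Invalid', False: 'Valid'})

print(df1)
```

This marks the first four rows "Invalid" (two distinct values per group) and the last two "Valid", matching the expected output.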
|
<python><pandas>
|
2024-02-06 13:23:54
| 2
| 579
|
khushbu
|
77,947,927
| 2,567,544
|
Why does Python round(x, 0) not equal round(x)?
|
<p>I'm trying to understand the behaviour of rounding decimal values. The code below is meant to illustrate the difference between rounding the decimal value 1.5 with the half-up and half-down methods. See the comments in the code for the specific questions, which revolve around the difference in behaviour between rounding to zero decimal places and rounding to the nearest integer. I would have expected these two cases to be equivalent, but apparently they are not:</p>
<pre><code>import decimal
a: decimal.Decimal = decimal.Decimal('1.5')
decimal.getcontext().rounding = decimal.ROUND_HALF_DOWN
print(f'{decimal.getcontext()=}')
print(f'{a=}, {round(a, 0)=}') # This rounds down as expected
print(f'{a=}, {round(a)=}') # This rounds up, why? And why does round not appear to return a decimal here?
decimal.getcontext().rounding = decimal.ROUND_HALF_UP
print(f'{decimal.getcontext()=}')
print(f'{a=}, {round(a, 0)=}') # This rounds up as expected
print(f'{a=}, {round(a)=}') # This rounds up as expected, but again does not return a decimal
</code></pre>
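A hedged reading of the `decimal` docs that would explain both observations: `round(a, 0)` dispatches to `Decimal.__round__(0)`, which quantizes using the current context's rounding mode and returns a `Decimal`, while `round(a)` with no `ndigits` is documented to always round half to even and return a plain `int`, ignoring the context rounding mode entirely.

```python
import decimal

decimal.getcontext().rounding = decimal.ROUND_HALF_DOWN
a = decimal.Decimal('1.5')

# Two-argument form: quantizes with the context mode (half down -> 1)
# and returns a Decimal.
two_arg = round(a, 0)

# One-argument form: documented to round half to even (1.5 -> 2) and
# return an int, regardless of the context rounding mode.
one_arg = round(a)

print(two_arg, type(two_arg).__name__)  # 1 Decimal
print(one_arg, type(one_arg).__name__)  # 2 int
```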
|
<python><floating-point><rounding>
|
2024-02-06 13:08:02
| 0
| 607
|
user2567544
|
77,947,765
| 101,022
|
AWS generate_presigned_post content-type with two values
|
<p>Using Python's boto3 library, I create a presigned URL and return it to the front end:</p>
<pre class="lang-py prettyprint-override"><code>s3_client = boto3.client('s3')
response = s3_client.generate_presigned_post(
"bucket",
"logo",
Fields={
"Content-Type": "image/png"
},
Conditions=[
["starts-with", "$Content-Type", "image/"],
],
ExpiresIn=3600
)
</code></pre>
<p>...here's an example response:</p>
<pre class="lang-json prettyprint-override"><code>{
"url": "https://s3.amazonaws.com/bucket,
"fields": {
"Content-Type": "image/png",
"key": "logo",
"AWSAccessKeyId": "ABC",
"policy": "blahblah",
"signature": "XYZ123"
}
}
</code></pre>
<p>I then use the response in the front end to POST the image to S3:</p>
<pre class="lang-js prettyprint-override"><code>const formData = new FormData();
for (const key in json.fields) {
formData.append(key, json.fields[key]);
}
formData.append("file", image);
return fetch(json.url, {
method: "POST",
body: formData
});
</code></pre>
<p>...which uploads the file as expected, except that the Content-Type is set to <code>image/png,image/png</code>:
<a href="https://i.sstatic.net/8IdCJ.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/8IdCJ.png" alt="enter image description here" /></a></p>
<p>How do I ensure that the Content-Type is <code>image/png</code> and not <code>image/png,image/png</code>?</p>
|
<javascript><python><amazon-web-services><amazon-s3>
|
2024-02-06 12:39:35
| 0
| 1,480
|
timborden
|
77,947,742
| 11,170,350
|
Can I update lifespan in FastAPI without restarting the app?
|
<p>I want to update values in <code>lifespan</code> of FastAPI.</p>
<p>This is the code:</p>
<pre class="lang-py prettyprint-override"><code>@asynccontextmanager
async def lifespan(app: FastAPI):
names = requests.get(url).json()
yield
</code></pre>
<p>Let's say I knew that the response of API has been updated. Is there a way I can update the value name in <code>lifespan</code> without stopping and starting the app.</p>
<p>Maybe by creating another endpoint that, when called, overrides the value of <code>names</code> set in the <code>lifespan</code> event?
Thanks</p>
|
<python><fastapi>
|
2024-02-06 12:36:38
| 1
| 2,979
|
Talha Anwar
|
77,947,479
| 90,580
|
python webbrowser.open() ignores BROWSER="safari"
|
<p>I have the following script</p>
<pre><code>import webbrowser
webbrowser.open("https://python.org")
</code></pre>
<p>that I run with</p>
<pre><code>BROWSER="safari" python myscript.py
</code></pre>
<p>This opens Google Chrome instead of Safari.</p>
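A likely explanation (hedged): the `BROWSER` variable is only honoured for names of executables on `PATH`, registered controller names, or command templates containing `%s`; a bare `safari` matches none of these, so Python silently falls back to the system default. On macOS something like `BROWSER="open -a Safari %s" python myscript.py` should work, since `open -a` is the standard macOS application launcher. Inside a script, requesting a controller explicitly makes the failure visible instead of silent:

```python
import webbrowser

# "safari" is neither an executable on PATH nor a registered controller
# name, so BROWSER="safari" is silently discarded.  Asking for a
# controller explicitly raises webbrowser.Error instead.
try:
    browser = webbrowser.get("macosx")   # macOS default-browser controller
except webbrowser.Error:
    try:
        browser = webbrowser.get()       # whatever the platform default is
    except webbrowser.Error:
        browser = None                   # e.g. a headless environment

# if browser is not None:
#     browser.open("https://python.org")
```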
|
<python>
|
2024-02-06 11:51:01
| 1
| 25,455
|
RubenLaguna
|
77,947,204
| 5,356,096
|
Parameter-based dependencies and outs in DVC from a constants file
|
<p>I am trying to define a single-source set of paths such that it can be modified if necessary from a single spot rather than modifying it in various places across many scripts. I am doing this by simply using a <code>constants.py</code> file:</p>
<p><em>constants.py</em></p>
<pre class="lang-py prettyprint-override"><code>MY_DATA="data/location"
MY_OUT="data/out"
</code></pre>
<p>I tried doing it by using the top-level <code>vars</code> declaration in my <code>dvc.yaml</code> but to no avail:</p>
<p><em>dvc.yaml</em></p>
<pre class="lang-yaml prettyprint-override"><code>vars:
- constants.py
stages:
something:
deps:
- ${MY_DATA}
outs:
- ${MY_OUT}
cmd: python some_script.py
</code></pre>
<p>What is the proper way to do this in DVC? I looked at <code>params</code> too but I'm not sure whether that fits this specific use case or whether it'd be valid.</p>
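One workaround sketch (hedged: it assumes moving the constants into `params.yaml`, the file format that `vars` handles natively, and reading that same file from Python instead of keeping a `constants.py`):

```python
import yaml  # PyYAML

# For the sake of a self-contained example, write the shared file that
# both dvc.yaml (via `vars: [params.yaml]`) and the scripts would read:
with open("params.yaml", "w") as f:
    f.write("MY_DATA: data/location\nMY_OUT: data/out\n")

# Any Python script can then resolve the same single source of truth:
with open("params.yaml") as f:
    params = yaml.safe_load(f)

MY_DATA = params["MY_DATA"]
MY_OUT = params["MY_OUT"]
print(MY_DATA, MY_OUT)
```

With that in place, `${MY_DATA}` and `${MY_OUT}` in `dvc.yaml` should interpolate from `params.yaml` rather than from the Python module.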
|
<python><dvc>
|
2024-02-06 11:06:58
| 1
| 1,665
|
Jack Avante
|
77,947,027
| 845,210
|
How can I use a ClassVar Literal as a discriminator for Pydantic fields?
|
<p>I'd like to have Pydantic fields that are discriminated based on a class variable.</p>
<p>For example:</p>
<pre class="lang-py prettyprint-override"><code>from pydantic import BaseModel, Field
from typing import Literal, ClassVar
class Cat(BaseModel):
animal_type: ClassVar[Literal['cat']] = 'cat'
class Dog(BaseModel):
animal_type: ClassVar[Literal['dog']] = 'dog'
class PetCarrier(BaseModel):
contains: Cat | Dog = Field(discriminator='animal_type')
</code></pre>
<p>But this code throws an exception at import time:</p>
<blockquote>
<p>pydantic.errors.PydanticUserError: Model 'Cat' needs a discriminator field for key 'animal_type'</p>
</blockquote>
<p>If I remove the <code>ClassVar</code> annotations, it works fine, but then <code>animal_type</code> is only available as an instance property, which is less convenient in my case.</p>
<p>Can anyone help me use class attributes as discriminators with Pydantic? This is Pydantic version 2.</p>
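One workaround sketch for Pydantic v2 (it trades the `ClassVar` for a plain defaulted `Literal` field): the discriminator then validates normally, and class-level access without an instance is still possible through `model_fields`:

```python
from typing import Literal, Union

from pydantic import BaseModel, Field

class Cat(BaseModel):
    animal_type: Literal['cat'] = 'cat'

class Dog(BaseModel):
    animal_type: Literal['dog'] = 'dog'

class PetCarrier(BaseModel):
    contains: Union[Cat, Dog] = Field(discriminator='animal_type')

# The discriminated union now resolves to the right submodel:
carrier = PetCarrier.model_validate({'contains': {'animal_type': 'dog'}})
print(type(carrier.contains).__name__)          # Dog

# Without an instance, the value is reachable via the field metadata:
print(Cat.model_fields['animal_type'].default)  # cat
```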
|
<python><python-typing><pydantic><pydantic-v2>
|
2024-02-06 10:37:00
| 1
| 3,331
|
bjmc
|
77,946,957
| 11,630,148
|
csrfmiddlewaretoken included in the search url query
|
<p>The problem is I have 2 searches, one is a search for job seekers to find jobs and another is a search for companies to find job seekers. The expected result is to have <code>http://localhost:8000/search-job-seeker/?&q=searchquery</code> but what I'm getting is <code>http://localhost:8000/search-job-seeker/?csrfmiddlewaretoken=pF6HWEH2rOTvZTRsXzaDuQ9GiGw9ChmukeCYUSND15gzFPCKWmRtRGvVecMHIWKK&q=searchquery</code> I don't see any error messages when I debug in the console or in the terminal.</p>
<p>I've tried to use <code>get</code> in the form, which results in <code>http://localhost:8000/search-job-seeker/?csrfmiddlewaretoken=pF6HWEH2rOTvZTRsXzaDuQ9GiGw9ChmukeCYUSND15gzFPCKWmRtRGvVecMHIWKK&q=searchquery</code>, but when I use the <code>post</code> method in the form, I get <code>http://localhost:8000/search-job-seeker/</code> and the search result comes out without the <code>q=searchquery</code> in the url.</p>
<p>My view for this is:</p>
<pre class="lang-py prettyprint-override"><code>class JobSeekerSearchView(LoginRequiredMixin, View):
template_name = "core/job_seeker_search.html"
form_class = JobSeekerSearchForm
def post(self, request, *args, **kwargs):
"""
Handle HTTP POST requests, process form data, search for job seekers, and render the template.
Args:
self: the instance of the class
request: the HTTP request object
*args: variable length argument list
**kwargs: variable length keyword argument list
Returns:
HTTP response object
"""
form = self.form_class(request.POST)
job_seeker_results = []
if form.is_valid():
query = form.cleaned_data["q"]
# Search for job seekers using PostgreSQL full-text search and icontains
job_seeker_results = Seeker.objects.annotate(
search=SearchVector("seekerprofile__pk", "seekerprofile__headline"),
).filter(
Q(search=SearchQuery(query))
| Q(seekerprofile__skills__icontains=query)
| Q(seekerprofile__rate__icontains=query)
)
context = {"query": query, "job_seeker_results": job_seeker_results, "form": form}
return render(request, self.template_name, context)
</code></pre>
<p>The template form is:</p>
<pre><code> <form class="d-flex" method="get" action="{% url 'core:seeker_search' %}">
{% csrf_token %}
<input class="form-control me-2" type="text" name="q" placeholder="Search job seekers..." value="{{ request.GET.q }}">
<button class="btn btn-outline-success" type="submit">Search</button>
</form>
</code></pre>
<p>Here is the <code>forms.py</code></p>
<pre class="lang-py prettyprint-override"><code>class JobSeekerSearchForm(forms.Form):
q = forms.CharField(max_length=100, required=False, widget=forms.TextInput(attrs={"class": "form-control"}))
</code></pre>
|
<python><django>
|
2024-02-06 10:25:16
| 0
| 664
|
Vicente Antonio G. Reyes
|
77,946,772
| 2,376,651
|
Segregate spark and hadoop configuration properties
|
<p>I have a use case where I want to segregate the spark config properties and hadoop config properties from the spark-submit command.</p>
<p>Example spark-submit command:</p>
<pre><code>/usr/lib/spark/bin/spark-submit --master yarn --class com.benchmark.platform.TPCDSBenchmark --deploy-mode cluster --conf spark.executor.instances=5 --conf spark.dynamicAllocation.minExecutors=2 --conf spark.dynamicAllocation.maxExecutors=5 --conf spark.executor.cores=4 --conf spark.executor.memory=10240M --conf spark.driver.memory=8192M --conf spark.hadoop.hive.metastore.uris=thrift://METASTORE_URI:10016 --conf spark.hadoop.fs.s3a.aws.credentials.provider=org.apache.hadoop.fs.s3a.auth.AssumedRoleCredentialProvider --conf spark.hadoop.fs.s3a.assumed.role.credentials.provider=com.amazonaws.auth.WebIdentityTokenCredentialsProvider --conf spark.hadoop.fs.s3a.assumed.role.arn=arn:aws:iam::ACCOUNT:ROLE s3://JAR_PATH.jar --iterations=2 --queryFilter=q1-v2.4
</code></pre>
<p>I want to extract spark_conf and hadoop_conf from the above command.</p>
<p>Sample output:</p>
<pre><code>"spark_conf": {
"spark.driver.memory": "8192M",
"spark.executor.cores": "4",
"spark.executor.memory": "10240M",
"spark.executor.instances": "5",
"spark.dynamicAllocation.maxExecutors": "5",
"spark.dynamicAllocation.minExecutors": "2"
}
</code></pre>
<pre><code>"hadoop_conf": {
"spark.hadoop.hive.metastore.uris": "thrift://METASTORE_URI:10016",
"spark.hadoop.fs.s3a.assumed.role.arn": "arn:aws:iam::ACCOUNT:ROLE",
"spark.hadoop.fs.s3a.aws.credentials.provider": "org.apache.hadoop.fs.s3a.auth.AssumedRoleCredentialProvider",
"spark.hadoop.fs.s3a.assumed.role.credentials.provider": "com.amazonaws.auth.WebIdentityTokenCredentialsProvider"
}
</code></pre>
<p>The comprehensive list of hadoop related config properties is available here: <a href="https://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-common/core-default.xml" rel="nofollow noreferrer">list1</a> <a href="https://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-hdfs/hdfs-default.xml" rel="nofollow noreferrer">list2</a> <a href="https://hadoop.apache.org/docs/current/hadoop-mapreduce-client/hadoop-mapreduce-client-core/mapred-default.xml" rel="nofollow noreferrer">list3</a> <a href="https://hadoop.apache.org/docs/current/hadoop-yarn/hadoop-yarn-common/yarn-default.xml" rel="nofollow noreferrer">list4</a>. Remaining config properties can be assigned to spark. I don't want to save these hundreds of properties in a database and search for a match. Is there a better way to segregate between the two types of config properties?</p>
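A simpler rule that avoids the lookup tables (a sketch, reusing a shortened version of the command above): Spark forwards every property prefixed with `spark.hadoop.` into the Hadoop `Configuration`, so the prefix itself is enough to tell the two groups apart.

```python
import shlex

cmd = ("/usr/lib/spark/bin/spark-submit --master yarn "
       "--conf spark.executor.cores=4 "
       "--conf spark.hadoop.hive.metastore.uris=thrift://METASTORE_URI:10016 "
       "--conf spark.executor.memory=10240M")

tokens = shlex.split(cmd)
spark_conf, hadoop_conf = {}, {}

# Walk the token stream in pairs and pick out every --conf key=value.
for flag, value in zip(tokens, tokens[1:]):
    if flag == "--conf":
        key, _, val = value.partition("=")
        # "spark.hadoop.*" is the documented passthrough prefix for
        # Hadoop configuration, so it cleanly separates the two groups.
        target = hadoop_conf if key.startswith("spark.hadoop.") else spark_conf
        target[key] = val

print(spark_conf)
print(hadoop_conf)
```

Note this only classifies Hadoop properties set via Spark's passthrough mechanism; properties set directly in `core-site.xml` etc. never appear on the command line at all.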
|
<python><apache-spark><hadoop><logic>
|
2024-02-06 09:57:47
| 1
| 599
|
Prabhjot
|
77,946,746
| 10,595,871
|
Connect docker container with SQL Server
|
<p>I'm trying to run code that writes to a SQL Server database.
The code works fine on my (Windows) machine.</p>
<p>dockerfile:</p>
<pre><code># syntax=docker/dockerfile:1
FROM ubuntu:22.04
WORKDIR /app
RUN apt-get update
RUN apt-get install -y apt-utils \
&& echo 'debconf debconf/frontend select Noninteractive' | debconf-set-selections
RUN apt-get install -y locales \
&& localedef -i en_US -c -f UTF-8 -A /usr/share/locale/locale.alias en_US.UTF-8 \
&& localedef -i it_IT -c -f UTF-8 -A /usr/share/locale/locale.alias it_IT.UTF-8
ENV LANG it_IT.utf8
RUN apt-get install -y python3 python3-pip
RUN apt-get install -y unixodbc-dev libodbc2
COPY ./src .
RUN pip3 install -r requirements.txt
CMD [ "python3", "-m" , "flask", "run", "--host=0.0.0.0", "--port=5000"]
</code></pre>
<p>application-docker.cfg file:</p>
<pre><code>SECRET_KEY = 'xxx'
ALLOWED_FILES = ['xls']
MAX_RUNNING_JOBS = 3
STORAGE_PATH = '/tmp'
DRIVER = 'SQL Server'
SERVER = 'xxx'
DB_NAME = 'xxx'
USER_ID = 'xxx'
PSW = 'xxx'
</code></pre>
<p>It raises this error:</p>
<blockquote>
<p>sqlalchemy.exc.DBAPIError: (pyodbc.Error) ('01000', "[01000] [unixODBC][Driver Manager]Can't open lib 'SQL Server' : file not found (0) (SQLDriverConnect)")</p>
</blockquote>
<p>I've tried to replace DRIVER with DRIVER = '/opt/microsoft/msodbcsql18/lib64/libmsodbcsql-18.3.so.1.1' but it raises quite the same error:</p>
<blockquote>
<p>sqlalchemy.exc.DBAPIError: (pyodbc.Error) ('01000', "[01000] [unixODBC][Driver Manager]Can't open lib '/opt/microsoft/msodbcsql18/lib64/libmsodbcsql-18.3.so.1.1' : file not found (0) (SQLDriverConnect)")</p>
</blockquote>
<p>Tried also by using DRIVER = 'ODBC Driver 17 for SQL Server' and in the docker file</p>
<pre><code>RUN apt-get install -y msodbcsql17 mssql-tools
</code></pre>
<p>but when I build the image:</p>
<blockquote>
<p>0.915 E: Unable to locate package msodbcsql17.<br>
0.915 E: Unable to locate package mssql-tools</p>
</blockquote>
|
<python><sql-server><docker>
|
2024-02-06 09:52:36
| 1
| 691
|
Federicofkt
|
77,946,718
| 354,051
|
subprocess.run not throwing errors on stderr when using msvc compiler cl.exe as command
|
<p>Python 3.12.0, Windows 10, MSVC 14.3</p>
<pre class="lang-py prettyprint-override"><code>from subprocess import run
command = ['cl', ...]
proc = run(command, stdout=subprocess.PIPE, shell=True, text=True, stderr=subprocess.PIPE)
output = proc.stdout.splitlines()
errors = proc.stderr.splitlines()
</code></pre>
<p>When the compiler does not produce any errors, the list <strong>errors</strong> is empty, but when the compiler reports errors, they go into the list <strong>output</strong> instead of the list <strong>errors</strong>. This is not the case with the mingw32 toolchain on Windows, where error messages do go into the list <strong>errors</strong>. It seems the MSVC compiler cl.exe writes its error messages to stdout instead of stderr.</p>
<p>How do you correctly pass errors to <strong>errors</strong> list?</p>
<p>One option I can think of is to pass errors to stdout and check the return code. If it's not 0, I believe the process has produced some errors and I can parse the output list for further investigation. Am I right here?</p>
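That return-code approach is the usual one for tools that write diagnostics to stdout. A small sketch of the pattern (using `sys.executable` as a stand-in for `cl.exe` so the example is self-contained):

```python
import subprocess
import sys

# Stand-in for cl.exe: a process that prints a diagnostic to stdout
# and exits nonzero, just like MSVC does on a compile error.
proc = subprocess.run(
    [sys.executable, "-c", "print('warning: something'); raise SystemExit(2)"],
    capture_output=True, text=True,
)

if proc.returncode != 0:
    # The return code is the reliable failure signal; treat everything
    # the tool printed (either stream) as potential error output.
    errors = proc.stdout.splitlines() + proc.stderr.splitlines()
else:
    errors = proc.stderr.splitlines()

print(proc.returncode, errors)
```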
|
<python><subprocess>
|
2024-02-06 09:48:39
| 1
| 947
|
Prashant
|
77,946,717
| 6,281,366
|
pydantic v1 vs v2: dict field inside a model
|
<p>In pydantic V1, if I have a class with a dict attribute and I pass a model into it, it will be converted to a dict:</p>
<pre><code>class Test(pydantic.BaseModel):
x: dict
class X(pydantic.BaseModel):
name: str
x = X(name="name")
test = Test(x=x)
Test(x={'name': 'name'})
</code></pre>
<p>But in pydantic V2, if I do the same, I get a validation error saying it expects a dict and not a model.</p>
<p>Is there any way to preserve this behavior, or is the only option to convert the model to a dict myself?</p>
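One way to restore the V1 behaviour in Pydantic v2 (a sketch using a `mode="before"` field validator, which runs before type validation and can coerce a model into a dict):

```python
import pydantic

class X(pydantic.BaseModel):
    name: str

class Test(pydantic.BaseModel):
    x: dict

    @pydantic.field_validator("x", mode="before")
    @classmethod
    def coerce_model_to_dict(cls, v):
        # Accept a BaseModel and dump it, mimicking the V1 coercion.
        if isinstance(v, pydantic.BaseModel):
            return v.model_dump()
        return v

test = Test(x=X(name="name"))
print(test)  # x={'name': 'name'}
```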
|
<python><pydantic>
|
2024-02-06 09:48:39
| 1
| 827
|
tamirg
|
77,946,651
| 16,383,578
|
How to store NTFS MFT file records in C++?
|
<p>I have recently written a Python program that resolves an entire Master File Table, you can see it <a href="https://codereview.stackexchange.com/questions/289308/resolve-an-entire-ntfs-master-file-tables-worth-of-file-records-to-absolute-pat">here</a>.</p>
<p>I wrote it all by myself, using various online documentations I have found as references.
It is completely working, but in doing so I have found that I have pushed Python to its limits.</p>
<p>There were various performance issues. I know that looking up the Master File Table the way I did is I/O bound, and that I could load the whole $MFT into memory up front before parsing, but it is over 2GiB in size...</p>
<p>The biggest issue with the code is with memory usage, it takes over 4GiB to store all parsed entries.</p>
<p>So I would like to reimplement it in C++.</p>
<p>I am learning C++ and have written a few working programs in C++ previously, they do compile but they are all relatively simple. I can already implement most of the logic in C++, but there is a giant obstacle that I cannot overcome: the classes.</p>
<p>Let me show you, below is a small snippet from my script, it defines the classes I use to store parsed MFT File Record in Python:</p>
<pre class="lang-py prettyprint-override"><code>@patcher
class Record_Header(ctypes.Structure):
__slots__ = ()
__names__ = (
"Written",
"HardLinks",
"In_Use",
"Directory",
"Base_Record",
"Record_ID",
)
_fields_ = (
("Written", ctypes.c_uint16),
("HardLinks", ctypes.c_uint16),
("In_Use", ctypes.c_bool),
("Directory", ctypes.c_bool),
("Base_Record", ctypes.c_uint32),
("Record_ID", ctypes.c_uint32),
)
@classmethod
def parse(cls, data: bytes) -> Record_Header:
chunks = struct.unpack("<16xHH2xH8xQ4xL", data[:48])
flag = chunks[2]
return cls(
*chunks[:2], bool(flag & 1), bool(flag & 2), chunks[3] & UINT48, chunks[4]
)
@patcher
class Data_Run(ctypes.Structure):
__slots__ = ()
__names__ = ("First_Cluster", "Cluster_Count")
_fields_ = (("First_Cluster", ctypes.c_uint32), ("Cluster_Count", ctypes.c_uint32))
def parse_attribute_name(data: bytes, name_length: int, name_offset: int) -> str:
return (
data[name_offset : name_offset + 2 * name_length : 2].decode("utf8")
if name_length
else ""
)
class NonResident_Attribute(ctypes.Structure):
__repr__ = __repr__
__slots__ = ()
__names__ = (
"Type",
"Attribute_ID",
"Allocated_Size",
"Real_Size",
"Attribute_Name",
"Count",
"Data_Runs",
)
_fields_ = (
("Type", ctypes.c_uint8),
("Attribute_ID", ctypes.c_uint16),
("Allocated_Size", ctypes.c_uint64),
("Real_Size", ctypes.c_uint64),
("Attribute_Name", ctypes.c_wchar_p),
("Count", ctypes.c_uint8),
("Data_Runs", ctypes.POINTER(Data_Run)),
)
@staticmethod
def parse_data_runs(data: bytes) -> Generator[Data_Run, None, None]:
while (size := data[0]) and len(data) > 2:
count = (size & 15) + 1
first = (size >> 4) + count
cluster_count = parse_little_endian(data[1:count])
starting_cluster = parse_little_endian(data[count:first])
data = data[first:]
yield Data_Run(starting_cluster, cluster_count)
@staticmethod
def get_data_runs(data: bytes) -> Tuple[int, ctypes.POINTER(Data_Run)]:
data_runs = list(NonResident_Attribute.parse_data_runs(data))
l = len(data_runs)
return l, ctypes.cast((Data_Run * l)(*data_runs), ctypes.POINTER(Data_Run))
@classmethod
def parse(cls, data: bytes, data_type: int) -> NonResident_Attribute:
chunks = struct.unpack("<9xBH2xH16xH6xQQ", data[:56])
name = parse_attribute_name(data, chunks[0], chunks[1])
l, data_runs = NonResident_Attribute.get_data_runs(data[chunks[3] :])
return cls(data_type, chunks[2], *chunks[4:], name, l, data_runs)
def to_dict(self) -> dict:
return {key: getattr(self, key) for key in self.__names__[:6]} | {
"Data_Runs": [self.Data_Runs[i].to_dict() for i in range(self.Count)]
}
def get_size(self) -> int:
return ctypes.sizeof(self) + sum(
ctypes.sizeof(self.Data_Runs[i]) for i in range(self.Count)
)
def preprocess_resident(data: bytes) -> Tuple[bytes, int, str]:
name_length, name_offset, attribute_id, data_length, data_offset = struct.unpack(
"<9xBH2xHLH", data[:22]
)
return (
data[data_offset : data_offset + data_length],
attribute_id,
parse_attribute_name(data, name_length, name_offset),
)
class Standard_Information(ctypes.Structure):
__repr__ = __repr__
get_size = get_size
__slots__ = ()
__names__ = (
"Attribute_ID",
"File_Created",
"File_Modified",
"Record_Changed",
"Last_Access",
"Read_Only",
"Hidden",
"System",
"Attribute_Name",
)
_fields_ = (
("Attribute_ID", ctypes.c_uint16),
("_file_created", ctypes.c_uint64),
("_file_modified", ctypes.c_uint64),
("_record_changed", ctypes.c_uint64),
("_last_access", ctypes.c_uint64),
("Read_Only", ctypes.c_bool),
("Hidden", ctypes.c_bool),
("System", ctypes.c_bool),
("Attribute_Name", ctypes.c_wchar_p),
)
@property
def File_Created(self) -> datetime:
return parse_NTFS_timestamp(self._file_created)
@property
def File_Modified(self) -> datetime:
return parse_NTFS_timestamp(self._file_modified)
@property
def Record_Changed(self) -> datetime:
return parse_NTFS_timestamp(self._record_changed)
@property
def Last_Access(self) -> datetime:
return parse_NTFS_timestamp(self._last_access)
@classmethod
def parse(cls, data: bytes) -> Standard_Information:
data, attribute_id, name = preprocess_resident(data)
chunks = struct.unpack("<4QL", data[:36])
flag = chunks[4]
return cls(
attribute_id, *chunks[:4], *(bool(flag & i) for i in (1, 2, 4)), name
)
def to_dict(self) -> dict:
return (
{"Attribute_ID": self.Attribute_ID}
| {key: to_timestamp(getattr(self, key)) for key in self.__names__[1:5]}
| {key: getattr(self, key) for key in self.__names__[5:]}
)
@patcher
class Attribute_List_Entry(ctypes.Structure):
__slots__ = ()
__names__ = ("Type", "Base_Record", "Attribute_ID", "Attribute_Name")
_fields_ = (
("Type", ctypes.c_uint8),
("Base_Record", ctypes.c_uint32),
("Attribute_ID", ctypes.c_uint16),
("Attribute_Name", ctypes.c_wchar_p),
)
@classmethod
def parse(cls, data: bytes) -> Attribute_List_Entry:
(
attribute_type,
name_length,
name_offset,
base_record,
attribute_id,
) = struct.unpack("<L2xBB8xQH", data[:26])
return cls(
attribute_type,
base_record & UINT48,
attribute_id,
parse_attribute_name(data, name_length, name_offset),
)
class Attribute_List(ctypes.Structure):
__repr__ = __repr__
__slots__ = ()
__names__ = ("Attribute_ID", "Attribute_Name", "Count", "List")
_fields_ = (
("Attribute_ID", ctypes.c_uint16),
("Attribute_Name", ctypes.c_wchar_p),
("Count", ctypes.c_uint8),
("List", ctypes.POINTER(Attribute_List_Entry)),
)
@staticmethod
def parse_attribute_list(
data: bytes,
) -> Generator[Attribute_List_Entry, None, None]:
while len(data) > 26:
offset = 26 + 2 * data[6]
yield Attribute_List_Entry.parse(data)
data = data[((offset + 7) >> 3) << 3 :]
@classmethod
def parse(cls, data: bytes) -> Attribute_List:
data, attribute_id, name = preprocess_resident(data)
attributes = list(Attribute_List.parse_attribute_list(data))
return cls(
attribute_id,
name,
(l := len(attributes)),
ctypes.cast(
(Attribute_List_Entry * l)(*attributes),
ctypes.POINTER(Attribute_List_Entry),
),
)
def to_dict(self) -> dict:
return {key: getattr(self, key) for key in self.__names__[:3]} | {
"List": [self.List[i].to_dict() for i in range(self.Count)]
}
def get_size(self) -> int:
return ctypes.sizeof(self) + sum(
ctypes.sizeof(self.List[i]) for i in range(self.Count)
)
@patcher
class FileName(ctypes.Structure):
NAMESPACES = ("POSIX", "Win32", "DOS", "Win32+DOS")
__slots__ = ()
__names__ = (
"Attribute_ID",
"Parent_Record",
"Allocated_Size",
"Real_Size",
"Name_Space",
"File_Name",
"Attribute_Name",
)
_fields_ = (
("Attribute_ID", ctypes.c_uint16),
("Parent_Record", ctypes.c_uint32),
("Allocated_Size", ctypes.c_uint64),
("Real_Size", ctypes.c_uint64),
("_name_space", ctypes.c_uint8),
("File_Name", ctypes.c_wchar_p),
("Attribute_Name", ctypes.c_wchar_p),
)
@property
def Name_Space(self) -> str:
return FileName.NAMESPACES[self._name_space]
@staticmethod
def parse_file_name(data: bytes) -> str:
name_data = data[66 : 66 + 2 * data[64]]
try:
return name_data.decode("utf8").replace("\x00", "")
except UnicodeDecodeError:
return name_data.decode("utf-16-le").replace("\x00", "")
@classmethod
def parse(cls, data: bytes) -> FileName:
data, attribute_id, attribute_name = preprocess_resident(data)
parent_record, allocated_size, real_size, namespace = struct.unpack(
"<Q32xQQ9xB", data[:66]
)
return cls(
attribute_id,
parent_record & UINT48,
allocated_size,
real_size,
namespace,
FileName.parse_file_name(data),
attribute_name,
)
class File_Record(DotTuple):
__repr__ = __repr__
__slots__ = ("Header", "Attributes")
__names__ = __slots__
parsers = {
16: Standard_Information,
32: Attribute_List,
48: FileName,
}
def __init__(self, header: Record_Header, attributes: tuple) -> None:
self.Header = header
self.Attributes = attributes
@staticmethod
def preprocess_file_record(data: bytes) -> bytes:
if data[:4] != b"FILE":
raise ValueError("File record is corrupt")
token = data[48:50]
if token != data[510:512] or token != data[1022:1024]:
raise ValueError("File record is corrupt")
update_sequence = data[50:54]
return data[:510] + update_sequence[:2] + data[512:1022] + update_sequence[2:]
@staticmethod
def parse_record_attributes(data: bytes) -> Generator:
data = data[56:]
while data[:4] != b"\xff\xff\xff\xff":
length = parse_little_endian(data[4:8])
attribute_type = parse_little_endian(data[:4])
if data[8]:
if attribute_type in (32, 128):
yield NonResident_Attribute.parse(data[:length], attribute_type)
elif cls := File_Record.parsers.get(attribute_type):
yield cls.parse(data)
data = data[length:]
@classmethod
def parse(cls, data: bytes) -> File_Record:
data = File_Record.preprocess_file_record(data)
return cls(
Record_Header.parse(data),
tuple(File_Record.parse_record_attributes(data)),
)
def get_size(self) -> int:
return (
self.__sizeof__()
+ self.Header.get_size()
+ self.Attributes.__sizeof__()
+ sum(attr.get_size() for attr in self.Attributes)
)
def to_dict(self) -> dict:
return {
"Header": self.Header.to_dict(),
"Attributes": [attr.to_dict() for attr in self.Attributes],
}
</code></pre>
<p>The problem is extremely simple, all MFT file records are split into two sections: Header and Attributes.</p>
<p>The attributes can be empty if the header has the In-Use flag set to 0, which would be a big problem for me if I wanted to store such records, because it wouldn't be easy to fit them into the same class as other records. But I am not interested in deleted files, so I would just discard the record.</p>
<p>But then, all file records marked as in-use have different number of attributes and different kinds of attributes.</p>
<p>Some files have only 0x80 $DATA attributes and nothing else, these files have other attributes stored in a separate file that is referenced by the Base_Record field.</p>
<p>Some files have one 0x10 $STANDARD_INFORMATION attribute and one or two $FILE_NAME attributes, depending on whether or not the filename is DOS-compatible; files that are too large will have a 0x20 $ATTRIBUTE_LIST attribute.</p>
<p>All attributes can have names, and the names obviously have variable lengths. And $FILE_NAME of course stores filenames, which can have even more diverse lengths.</p>
<p>Then attributes that are non-resident will have any number of data-runs that is greater or equal to one...</p>
<p>How can I store all of them in an array or a vector?</p>
<p>To my very limited understanding of C++: it is statically typed, all objects must have a well-defined class, and array elements should be homogeneous and of the same size (though I know I can put strings into a vector). Elements have the same length to make lookups faster, because the bytes of the objects are stored head to tail in sequential order: a lookup simply reads the byte sequence that starts at array start + index * length of one element and ends at that offset plus the length of one element...</p>
<p>Obviously these file records are far from homogeneous.</p>
<p>How can I then store them in memory?</p>
<p>There are currently two options that I can think of, both with severe drawbacks.</p>
<p>Option 1: make one class for each and every configuration of the attributes of file records, this can make the array elements homogeneous, but filenames can still be a problem. And this necessitates use of multiple arrays for each of these classes, and will break if there is a configuration that is unexpected...</p>
<p>Option 2: make the base class based on the longest possible configuration, and use pointers for the attributes, fill the empty attributes with the null pointer. For example, make every file record object have references to two $DATA attributes, one $STANDARD_INFORMATION, two $FILE_NAME attributes, and one $ATTRIBUTE_LIST. If the file only has a $DATA attribute, fill all other references with NULL pointer.</p>
<p>Option 2 will waste memory heavily, and will break if the file record has more than two $DATA attributes and pointers are really complicated...</p>
<p>What class should I use to store them in memory in C++?</p>
|
<python><c++>
|
2024-02-06 09:38:59
| 0
| 3,930
|
Ξένη Γήινος
|
77,946,514
| 1,689,811
|
Python threadpool io bound tasks
|
<p>Assume following code :</p>
<pre><code>import concurrent.futures
import time
import itertools
def task_function(task_id):
    print(f"Task {task_id} started")
    time.sleep(10)  # Simulating some work
    print(f"Task {task_id} completed")

def generate_task_ids():
    # Infinite generator for task IDs
    for i in itertools.count(start=1):
        yield i

# Create a ThreadPoolExecutor with 3 threads
with concurrent.futures.ThreadPoolExecutor(max_workers=3) as executor:
    task_ids_generator = generate_task_ids()
    try:
        # Continuously submit tasks to the thread pool
        while True:
            task_id = next(task_ids_generator)
            task = executor.submit(task_function, task_id)
            time.sleep(1)  # Introduce a delay between task submissions (for demonstration)
    except KeyboardInterrupt:
        print("Received KeyboardInterrupt. Stopping task submissions.")

# Main thread continues after exiting the with block
print("Main thread continues")
</code></pre>
<p>Since <code>task_function</code> sleeps for 10 seconds, does the ThreadPoolExecutor count that thread as free and submit new tasks to it, effectively creating more than 3 threads?</p>
<p>My goal is to mix this with asyncio to get faster responses in IO-bound apps, but as a first step I am confused about how ThreadPoolExecutor behaves when a task blocks on something like sleep.</p>
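<p>For reference, here is a minimal runnable sketch (standard library only, with shortened sleeps) measuring how many tasks actually run at once; a sleeping worker still counts as busy, so the pool never exceeds <code>max_workers</code> concurrent tasks and extra submissions just wait in the internal queue:</p>

```python
import concurrent.futures
import threading
import time

active = 0
peak = 0
lock = threading.Lock()

def task(task_id):
    global active, peak
    with lock:
        active += 1
        peak = max(peak, active)
    time.sleep(0.2)  # a sleeping worker is still occupied from the pool's point of view
    with lock:
        active -= 1

with concurrent.futures.ThreadPoolExecutor(max_workers=3) as executor:
    for i in range(10):
        executor.submit(task, i)

print(peak)  # 3: never more than max_workers tasks ran concurrently
```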
|
<python><python-multithreading>
|
2024-02-06 09:18:15
| 0
| 334
|
Amir
|
77,946,460
| 16,446,640
|
Unable to See "View Value in Data Viewer" Option in VS Code Debugger
|
<p>I'm currently encountering an issue with Visual Studio Code where the "View Value in Data Viewer" option is not appearing in the debugger. This issue persists even though I have the Jupyter extension installed. Below are the details of my VS Code setup and the extensions I have installed:</p>
<pre><code>VS Code Version: 1.85.2 (Commit: 8b3775030ed1a69b13e4f4c628c612102e30a681, Architecture: x64)
Installed Extensions (relevant to Python and Jupyter):
Jupyter-related extensions:
ms-toolsai.jupyter@2023.11.1100101639
ms-toolsai.jupyter-keymap@1.1.2
ms-toolsai.jupyter-renderers@1.0.17
ms-toolsai.vscode-jupyter-cell-tags@0.1.8
ms-toolsai.vscode-jupyter-slideshow@0.1.5
Python-related extensions:
ms-python.python@2024.0.0
ms-python.vscode-pylance@2023.12.1
ms-python.autopep8@2023.8.0
ms-python.black-formatter@2024.0.0
ms-python.debugpy@2024.0.0
ms-python.flake8@2023.10.0
</code></pre>
<p>Here's a screenshot of my current debugger view where the option is missing:
<a href="https://i.sstatic.net/e864n.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/e864n.png" alt="debugger view" /></a></p>
<p>here is my vs-code settings.json file :</p>
<pre><code>{
"go.useLanguageServer": true,
"go.autocompleteUnimportedPackages": true,
"go.formatTool": "gofmt",
"editor.formatOnSave": true,
"go.testFlags": [
"-v"
],
"explorer.decorations.badges": false,
"window.titleBarStyle": "custom",
"editor.minimap.size": "fill",
"terminal.integrated.enableMultiLinePasteWarning": false,
"explorer.confirmDragAndDrop": false,
"files.autoSaveDelay": 10000,
"files.autoSave": "afterDelay",
"diffEditor.ignoreTrimWhitespace": false,
"[python]": {
"editor.formatOnType": false,
"editor.defaultFormatter": "ms-python.autopep8",
"editor.formatOnSave": true,
},
"autopep8.args": [
"--aggressive"
],
"editor.renderWhitespace": "all",
"http.proxy": "http://COMPANY.proxy.COMPANY.fr:3838",
"markdownConverter.ConversionType": [
"HTML",
"PDF"
],
"python.experiments.enabled": true,
"security.workspace.trust.untrustedFiles": "open",
"python.analysis.autoImportCompletions": true,
"python.analysis.completeFunctionParens": true,
"git.enableSmartCommit": true,
"flake8.args": [
"--max-line-length=190",
"--ignore=E402,F841,F401,W503,E721,E203,E501",
// "--ignore=E402,F841,F401,E302,E305",
],
// "black-formatter.args": [
// "--line-length=130"
// ],
"explorer.confirmDelete": false,
"python.languageServer": "Pylance",
"python.analysis.indexing": true,
"python.analysis.typeCheckingMode": "off"
}
</code></pre>
<p>I've tried the usual troubleshooting steps like restarting VS Code, reinstalling the extensions, and checking for updates, but the problem persists. I'm wondering if there are specific settings or configurations I might be missing.</p>
<p>Could anyone provide insights or suggestions on what might be causing this issue and how to resolve it? Any help would be greatly appreciated!</p>
<p>Thank you!</p>
|
<python><pandas><visual-studio-code><jupyter>
|
2024-02-06 09:06:14
| 0
| 427
|
Thomas LESIEUR
|
77,946,456
| 19,694,624
|
Discord bot can't edit message
|
<p>I am having issues with fetching a message by id. It's kinda weird that I can't edit the message I just sent.</p>
<p>I get the error:</p>
<pre><code>discord.errors.NotFound: 404 Not Found (error code: 10008): Unknown Message
</code></pre>
<p>Code to replicate the error:</p>
<pre><code>import asyncio
import discord
from discord.ext import commands
from discord.commands import slash_command
class Test(commands.Cog):
    def __init__(self, bot):
        self.bot = bot

    @slash_command(name='test', description='')
    async def test(self, ctx):
        embed = discord.Embed(
            description=f"The message I want to get by id"
        )
        msg = await ctx.respond(embed=embed, ephemeral=True)
        msg_id = msg.id
        await asyncio.sleep(2)

        channel = self.bot.get_channel(ctx.channel_id)
        message = await channel.fetch_message(msg_id)

        new_embed = discord.Embed(
            description=f"new message test"
        )
        await message.edit(embed=new_embed)

def setup(bot):
    bot.add_cog(Test(bot))
</code></pre>
<p>I specifically need to get the channel and then fetch the message by its id. It may seem illogical, but I need it.</p>
|
<python><discord><discord.py><pycord><disnake>
|
2024-02-06 09:06:02
| 1
| 303
|
syrok
|
77,946,231
| 9,194,965
|
langchain loader with power point not working
|
<p>The below def load_documents function is able to load various documents such as .docx, .txt, and .pdf into langchain. I would also like to be able to load power point documents and found a script here: <a href="https://python.langchain.com/docs/integrations/document_loaders" rel="nofollow noreferrer">https://python.langchain.com/docs/integrations/document_loaders</a> that I added to below function.</p>
<p>However, the function is unable to read .pptx files because I am not able to pip install UnstructuredPowerPointLoader. Can somebody please suggest a way to do this, or to augment the function below so I can load .pptx files?</p>
<p>Python function follows below:</p>
<pre><code>def load_document(file):
import os
name, extension = os.path.splitext(file)
if extension == '.pdf':
from langchain.document_loaders import PyPDFLoader
print(f'Loading {file}')
loader = PyPDFLoader(file)
elif extension == '.docx':
from langchain.document_loaders import Docx2txtLoader
print(f'Loading {file}')
loader = Docx2txtLoader(file)
elif extension == '.txt':
from langchain.document_loaders import TextLoader
print(f'Loading {file}')
loader = TextLoader(file)
elif extension == '.pptx':
from langchain_community.document_loaders import UnstructuredPowerPointLoader
print(f'Loading {file}')
loader = UnstructuredPowerPointLoader(file)
else:
print('Document format is not supported!')
return None
data = loader.load()
return data
</code></pre>
<p>The error I am getting is because !pip install unstructured is failing. I also tried !pip install -q unstructured["all-docs"]==0.12.0 but was unsuccessful again. Appreciate any help!</p>
|
<python><powerpoint><loader><langchain>
|
2024-02-06 08:29:11
| 1
| 1,030
|
veg2020
|
77,945,871
| 9,072,753
|
How to properly overload on bool for 3 different cases?
|
<p>I am writing my own <code>run</code> function. How do I write a 3rd overload for a dynamic <code>bool</code> value of <code>text</code>, so that it doesn't conflict with the overloads for <code>Literal[False]</code> and <code>Literal[True]</code>?</p>
<pre><code>from __future__ import annotations
from typing import TypeVar, overload, Optional, Literal, Union
T = TypeVar('T', str, bytes)
# If text is empty or false, input has to be bytes, we are returning bytes.
@overload
def run(text: Literal[False] = False, input: Optional[bytes] = ...) -> bytes: ...
# If text is true, input is str, returning str
@overload
def run(text: Literal[True], input: Optional[str] = ...) -> str: ...
# When we do not know what is text, it can be anything, but has to be the same.
#@overload
#def run(text: bool, input: Optional[Union[str, bytes]] = ...) -> Union[str, bytes]: ...
#def run(text: bool, input: Optional[T] = ...) -> T: ...
def run(text: bool = False, input: Optional[Union[str, bytes]] = None) -> Union[str, bytes]: return ""
run(input="") # error
run(input=b"") # ok
run(False, "") # error
run(False, b"") # ok
run(True, "") # ok
run(True, b"") # error
def test() -> bool: ...
run(test(), "") # ok
run(test(), b"") # ok
</code></pre>
<p><a href="https://pyright-play.net/?pyrightVersion=1.1.349&pythonVersion=3.12&pythonPlatform=Linux&strict=true&code=GYJw9gtgBA%2BjwFcAuCQFM5QJYQA5hCSgEMA7UsJYpLMUgZwChRIokBPXLUgc2zwJEAKpzQA1YiAA0UMADc0IADZhiAExkB5XDTrElMgDJYki-TICqpWqUaMhUALxQRucZIAUAciFeZ9JGkoACN2U3oASjsAYigASWA2NAAPIix6KDQ8DlkQKGB9ejQZblxkKAALYgykMBC0ELC0ehkAdwbJBvQUEGteRvCAOkYAAXlFFXVGNTREkARSD1NUgC4oY1MQfQBtADFCtABdJyh9pSKS0jKkNe1dUh3Q8OPnQbeIqABaAD4B5rW3sNYgkkqlsDV5sVsFdyukoAEgt1UH0%2BAjRuNlKo1NNZlB5otljd1iYzEptkJIYdLtdbjobDsES8oICPj94YEAW9GLEAOoVNCkKDtKBqOoUIgAaworSFVTSNRSSBKRAAxmR6iRSGEKtweDJguUqjU6sEGkh%2BfDiBA0ECxgpMVNojM5gsloq1sEwGADNCaVA7vSyVYbNsEfqmvRDkyWV9fsG6KHAuHnpzhjiXQT3SEvUoTmcLr7kLT7jt46RE0Ens0oycAHJ0NCsuPWBNhv6RtZI3pQABEPbs%2BI8pWQjj7HygsUU4BAjEHw6QjmCY6gE9kEtnrvzUOXq6nBA3iy3%2Bp3sTA68HFIQ25749P59dl6hS5vK8nIGn6aSAQ8Tez3tTB5ut%2BEQyCea6AeESA-seL6rmejBAA" rel="nofollow noreferrer">pyright playground</a> .</p>
<p>When the 3rd overload is uncommented, all forms are valid. Is it possible for the 3rd overload not to be a catch-all?</p>
|
<python><python-3.x><overloading><typing>
|
2024-02-06 07:19:04
| 0
| 145,478
|
KamilCuk
|
77,945,784
| 8,176,763
|
reflecting materialized view in sqlalchemy 2.0 using postgres as backend with async engine
|
<p>According to this GitHub issue, materialized views and views are fully supported.</p>
<p><a href="https://github.com/sqlalchemy/sqlalchemy/issues/8085" rel="nofollow noreferrer">https://github.com/sqlalchemy/sqlalchemy/issues/8085</a></p>
<p>However, binding an engine to the <code>reflect</code> method is not yet implemented for async engines.</p>
<p><a href="https://docs.sqlalchemy.org/en/20/errors.html#no-inspection-available" rel="nofollow noreferrer">https://docs.sqlalchemy.org/en/20/errors.html#no-inspection-available</a>
<a href="https://github.com/sqlalchemy/sqlalchemy/issues/6121" rel="nofollow noreferrer">https://github.com/sqlalchemy/sqlalchemy/issues/6121</a>
<a href="https://i.sstatic.net/XWxzi.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/XWxzi.png" alt="enter image description here" /></a></p>
<p>I have an async engine with fastapi that starts as such, <code>main.py</code>:</p>
<pre><code>@asynccontextmanager
async def lifespan(app_: FastAPI):
    async with engine.begin() as conn:
        await conn.run_sync(Base.metadata.reflect(views=True),only=['eos_general'])
        await conn.run_sync(Base.metadata.create_all)
    yield
</code></pre>
<p>My <code>models.py</code> is:</p>
<pre><code>class Base(DeclarativeBase):
    pass

eos_general = Table("eos_general", Base.metadata)
</code></pre>
<p>And my <code>db.py</code> is:</p>
<pre><code>engine = create_async_engine(CON_,echo=True)
SessionLocal = async_sessionmaker(engine)
</code></pre>
<p>If I use the above code I get this error:</p>
<p><code>TypeError: reflect() missing 1 required positional argument: 'bind'</code></p>
<p>If I bind the engine :</p>
<pre><code>@asynccontextmanager
async def lifespan(app_: FastAPI):
    async with engine.begin() as conn:
        await conn.run_sync(Base.metadata.reflect(views=True,bind=engine),only=['eos_general'])
        await conn.run_sync(Base.metadata.create_all)
    yield
</code></pre>
<p>I get:</p>
<pre><code>sqlalchemy.exc.NoInspectionAvailable: Inspection on an AsyncEngine is currently not supported. Please obtain a connection then use ``conn.run_sync`` to pass a callable where it's possible to call ``inspect`` on the passed connection. (Background on this error at: https://sqlalche.me/e/20/xd3s)
</code></pre>
<p>If I don't call the <code>reflect</code> method, as in <code>await conn.run_sync(Base.metadata.reflect, only=['eos_general'])</code>, then my views are not reflected and I cannot access them. So clearly I need to call <code>reflect</code> with the <code>views</code> argument and without <code>bind</code>, but that is not possible. How can I solve this dilemma?</p>
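<p>From what I can tell (untested against my actual database), <code>run_sync</code> wants a plain callable and passes it a synchronous connection, so the extra keyword arguments would have to be frozen in a lambda, something like <code>await conn.run_sync(lambda sync_conn: Base.metadata.reflect(bind=sync_conn, views=True, only=['eos_general']))</code>. The toy stand-ins below only illustrate that calling convention, not real SQLAlchemy objects:</p>

```python
# Hypothetical stand-ins: run_sync_like mimics AsyncConnection.run_sync, which
# invokes the given callable with a synchronous connection as its first argument.
def run_sync_like(fn, *args):
    sync_conn = "sync-connection"
    return fn(sync_conn, *args)

calls = []

def reflect_like(bind, views=False, only=None):
    # records how it was invoked, standing in for MetaData.reflect
    calls.append((bind, views, tuple(only or ())))

# Wrong: reflect_like(...) executes immediately; run_sync_like would then
# receive its return value (None) instead of a callable.
# Right: freeze the keyword arguments in a lambda and pass the lambda itself.
run_sync_like(lambda conn: reflect_like(conn, views=True, only=['eos_general']))
print(calls)  # [('sync-connection', True, ('eos_general',))]
```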
|
<python><sqlalchemy>
|
2024-02-06 07:02:24
| 1
| 2,459
|
moth
|
77,945,267
| 16,220,410
|
VSCode settings.json error - incorrect type expected string
|
<p>How do I fix the error shown in the screenshot? I was editing my VS Code settings.json after installing the Ruff extension, following the settings from this article:</p>
<p><a href="https://medium.com/@ordinaryindustries/the-ultimate-vs-code-setup-for-python-538026b34d94" rel="nofollow noreferrer">https://medium.com/@ordinaryindustries/the-ultimate-vs-code-setup-for-python-538026b34d94</a></p>
<p>but I get the error shown in the screenshot below:</p>
<p><a href="https://i.sstatic.net/1DPpa.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/1DPpa.png" alt="enter image description here" /></a></p>
|
<python><visual-studio-code>
|
2024-02-06 04:32:47
| 1
| 1,277
|
k1dr0ck
|
77,945,092
| 8,573,615
|
How do I write a python decorator that depends on a class attribute? Or is there a better way to approach this problem?
|
<p>I am writing a class that takes an external object from an api, performs some processing on that object, exposes some useful methods, then writes the processed object to a database. In cases where the external object no longer exists, I want the class methods to use the data from the database record.</p>
<p>This is causing a lot of repeated, simple code in the class methods:</p>
<pre><code>class Synchronised_object:
def __init__(self, obj=None, rec=None):
self.object = obj
self.record = rec
def a(self):
if self.object:
return self.object.a
else:
return self.record['a']
def b(self):
if self.object:
return self.object.b.upper()
else:
return self.record['b']
</code></pre>
<p>Repeated, simple code in functions sounds like a great use case for decorators, but in this case the decorator code would depend on an attribute of the instantiated class object, which appears to be problematical from everything I read on here and elsewhere.</p>
<p>Is there any way to write a decorator that depends on self.object? If not, is there another way to reduce the repetition of the "if self.object.... else return self.record[name]"?</p>
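<p>One direction I have been experimenting with (a minimal sketch, not production code): since a method decorator's wrapper only runs when the method is called, it receives <code>self</code> at that point and <em>can</em> inspect <code>self.object</code> then, falling back to the record keyed by the method's name:</p>

```python
import functools

def fallback_to_record(method):
    # The wrapper executes at call time, so self.object is available to it.
    @functools.wraps(method)
    def wrapper(self, *args, **kwargs):
        if self.object is not None:
            return method(self, *args, **kwargs)
        return self.record[method.__name__]
    return wrapper

class Synchronised_object:
    def __init__(self, obj=None, rec=None):
        self.object = obj
        self.record = rec

    @fallback_to_record
    def a(self):
        return self.object.a

    @fallback_to_record
    def b(self):
        return self.object.b.upper()

record_only = Synchronised_object(rec={'a': 1, 'b': 'x'})
print(record_only.a(), record_only.b())  # 1 x
```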
|
<python><python-3.x><python-decorators>
|
2024-02-06 03:23:57
| 2
| 396
|
weegolo
|
77,944,859
| 214,526
|
optuna parameter tuning for tweedie - Input contains infinity or a value too large for dtype('float32') error
|
<p>I am trying to tune a XGBRegressor model and I am getting below error only when I try to use the parameter tuning flow:</p>
<p><code>Input contains infinity or a value too large for dtype('float32')</code></p>
<p>I do not get this error if I do not try to tune parameters.</p>
<p>I have ensured my data does not have any NaN or np.inf - I replace +/- np.inf with np.nan and replace all NaN with 0 later. Before training, I have changed all columns to np.float64 type.</p>
<p>I suspect during parameter tuning, the target value may be causing overflow with float32 - how to ensure sklearn/xgboost/optuna uses float64 instead of float32?</p>
<p>My training code is roughly following:</p>
<pre><code>def __fit_new_model(
df: pd.DataFrame, feature_cols: List[str], target_col: str, tuning_iterations: int
) -> XGBRegressor:
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_percentage_error
if df.empty:
raise AssertionError("input parameter error - empty input DataFrame")
if not feature_cols:
raise AssertionError("input parameter error - empty feature_cols")
if not target_col:
raise AssertionError("input parameter error - invalid target_col name")
X, y = df[feature_cols], df[target_col]
if X.isna().any().any() or y.isna().any():
raise AssertionError("input data error - NaN values exist")
X_train, X_validation, y_train, y_validation = train_test_split(X, y, test_size=0.20, random_state=42)
regressor: XGBRegressor = XGBRegressor(
random_state=42,
tree_method="hist",
n_estimators=100,
early_stopping_rounds=100,
objective="reg:tweedie",
tweedie_variance_power=1.5,
eval_metric=mean_absolute_percentage_error,
)
if tuning_iterations > 0:
tuned_parameters: Dict[str, Any] = __get_tuned_model_parames(
x_train=X_train,
y_train=y_train,
x_validation=X_validation,
y_validation=y_validation,
num_trials=tuning_iterations,
)
regressor = XGBRegressor(eval_metric=mean_absolute_percentage_error, **tuned_parameters)
regressor.fit(X=X_train, y=y_train, eval_set=[(X_train, y_train), (X_validation, y_validation)], verbose=False)
return regressor
def __get_tuned_model_parames(
x_train: pd.DataFrame,
y_train: pd.Series,
x_validation: pd.DataFrame,
y_validation: pd.Series,
num_trials: int = 200,
) -> Dict[str, Any]:
import optuna
def objective(trial: optuna.trial.Trial):
from sklearn.metrics import mean_absolute_percentage_error
param = {
"tree_method": "hist",
"booster": trial.suggest_categorical("booster", ["gbtree", "dart"]),
"lambda": trial.suggest_float("lambda", 1e-3, 10.0),
"alpha": trial.suggest_float("alpha", 1e-3, 10.0),
"colsample_bytree": trial.suggest_float("colsample_bytree", 0.05, 1.0),
"subsample": trial.suggest_float("subsample", 0.05, 1.0),
"learning_rate": trial.suggest_float("learning_rate", 1e-3, 0.1, log=True),
"n_estimators": 100,
"objective": trial.suggest_categorical(
# "objective", ["reg:tweedie", "reg:squarederror", "reg:squaredlogerror"]
"objective",
["reg:tweedie"],
),
"max_depth": trial.suggest_int("max_depth", 1, 12),
"min_child_weight": trial.suggest_int("min_child_weight", 1, 20),
"early_stopping_rounds": 100,
"random_state": 42,
"base_score": 0.5,
}
if param["objective"] == "reg:tweedie":
param["tweedie_variance_power"] = trial.suggest_categorical(
"tweedie_variance_power", [1, 1.2, 1.3, 1.4, 1.5, 1.6, 1.7, 1.8, 1.9]
)
regressor_model: XGBRegressor = XGBRegressor(**param)
regressor_model.fit(X=x_train, y=y_train, eval_set=[(x_validation, y_validation)], verbose=False)
predictions = regressor_model.predict(x_validation)
mape: float = mean_absolute_percentage_error(y_true=y_validation, y_pred=predictions)
return mape
sampler: optuna.samplers.TPESampler = optuna.samplers.TPESampler(seed=42)
study_xgb = optuna.create_study(direction="minimize", sampler=sampler)
optuna.logging.set_verbosity(optuna.logging.ERROR)
study_xgb.optimize(lambda trial: objective(trial), n_trials=num_trials)
model_params: Dict[str, Any] = study_xgb.best_params
return model_params
</code></pre>
|
<python><scikit-learn><xgboost><optuna><xgbregressor>
|
2024-02-06 01:49:25
| 1
| 911
|
soumeng78
|
77,944,648
| 453,851
|
Can recycling python object ids be a problem to a Pickler?
|
<p>I read that Python will recycle IDs, meaning that a new object can end up with the ID of one that previously existed and was destroyed. I also read about <a href="https://docs.python.org/3/library/pickle.html" rel="nofollow noreferrer">pickle</a>:</p>
<blockquote>
<p>The pickle module keeps track of the objects it has already serialized, so that later references to the same object won’t be serialized again. marshal doesn’t do this.</p>
</blockquote>
<p>If I hold an instance of a <code>Pickler</code> open for several minutes writing to a single file as information comes in, and discard it immediately after calling <code>Pickler.dump(obj)</code>, is there a risk that a new <code>obj</code> will be given the id of another that's already been written to in the same file and so accidently the wrong thing is written?</p>
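<p>One thing I noticed while experimenting (CPython-specific sketch): the pickler's memo appears to keep a strong reference to every object it has dumped, so an object's id cannot be handed to a new object while the <code>Pickler</code> is still alive:</p>

```python
import io
import pickle
import sys

buf = io.BytesIO()
pickler = pickle.Pickler(buf)

obj = [1, 2, 3]
before = sys.getrefcount(obj)
pickler.dump(obj)
after = sys.getrefcount(obj)

# The memo holds obj alive, so its id cannot be recycled for as long
# as the pickler exists (CPython reference counting).
print(after > before)  # True
```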
|
<python><python-3.x><pickle><memoization>
|
2024-02-06 00:10:05
| 1
| 15,219
|
Philip Couling
|
77,944,598
| 22,674,380
|
How to detect object color efficiently?
|
<p>I'm trying to detect a object's color in an efficient way. Let's assume I run a YOLO model and crop the object region given the bounding boxes. Given the cropped object image, then what's the most efficient and accurate way to detect the color of the object?</p>
<p>Previously, I trained a YOLO model to detect the color (10 class of colors), but running 2 deep learning models is too slow for my real-time requirements. I need the color detection/classification part to be very fast, preferably not using deep learning. Maybe pure Python or OpenCV or whatnot.</p>
<p>I wrote this piece of code that resizes the image to a 1x1 pixel. I then visualize the color in a square. But it's not accurate at all. Just too off.</p>
<pre><code>from PIL import Image, ImageDraw
def get_dominant_color(pil_img):
    img = pil_img.copy()
    img = img.convert("RGBA")
    img = img.resize((1, 1), resample=0)
    dominant_color = img.getpixel((0, 0))
    return dominant_color
# Specify the path to your image
image_path = "path/to/your/image.jpg"
# Open the image using PIL
image = Image.open(image_path)
# Get the dominant color
dominant_color = get_dominant_color(image)
# Print the color in RGB format
print("Dominant Color (RGB):", dominant_color[:3])
# Create a new image with a 100x100 square of the dominant color
square_size = 100
square_image = Image.new("RGB", (square_size, square_size), dominant_color[:3])
# Display the square image
square_image.show()
</code></pre>
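<p>The best non-deep-learning idea I have so far is a quantize-and-vote histogram (pure-Python sketch below): the 1x1 resize is inaccurate because it averages unrelated colours into mud, whereas a coarse histogram keeps the most common colour intact:</p>

```python
from collections import Counter

def dominant_color(pixels, bucket=32):
    # Quantize each RGB channel into coarse buckets, vote for the most common
    # bucket, then average only the pixels that landed in it.
    counts = Counter((r // bucket, g // bucket, b // bucket) for r, g, b in pixels)
    top, _ = counts.most_common(1)[0]
    members = [p for p in pixels if (p[0] // bucket, p[1] // bucket, p[2] // bucket) == top]
    n = len(members)
    return tuple(sum(c) // n for c in zip(*members))

# Synthetic data: 70 red-ish pixels and 30 blue-ish pixels. Plain averaging
# would give a muddy purple; the vote returns the red cluster.
pixels = [(200, 10, 10)] * 70 + [(10, 10, 200)] * 30
print(dominant_color(pixels))  # (200, 10, 10)
```

<p>With a real image you would feed in e.g. <code>list(pil_img.convert("RGB").getdata())</code>, ideally after downscaling to a few thousand pixels for speed.</p>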
|
<python><opencv><deep-learning><computer-vision><classification>
|
2024-02-05 23:49:28
| 1
| 5,687
|
angel_30
|
77,944,401
| 11,280,068
|
Loguru python dynamically update the log format with each request to a FastAPI app
|
<p>I want to integrate loguru into my FastAPI app's middleware, so that I can monitor and troubleshoot every request that comes in.</p>
<p>One thing I want to achieve, that I don't see a clear solution to, is dynamically set the log format string with every request that comes in. For example:</p>
<pre class="lang-py prettyprint-override"><code>LOG_LEVEL = 'DEBUG'
logger.remove(0)
log_format = "<green>{time:YYYY-MM-DD HH:mm:ss.SSS zz}</green> | <level>{level: <8}</level> | <yellow>Line {line: >4} ({file}):</yellow> <b>{message}</b>"
logger.add(sys.stdout, level=LOG_LEVEL, format=log_format, colorize=True, backtrace=True, diagnose=True)
logger.add('log.log', rotation='2 MB', level=LOG_LEVEL, format=log_format, colorize=False, backtrace=True, diagnose=True)
@app.middleware("http")
async def process_middleware(request: Request, call_next):
    # in here, I want the log_format string to include the request.url
    # so that I can just call
    logger.debug(request)
    # and it will include the URL in the log format itself
    # is this possible?
</code></pre>
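<p>If it helps frame the question: loguru's documented route seems to be <code>logger.bind()</code> / <code>logger.contextualize()</code> plus an <code>{extra[url]}</code> field in the format string, rather than rebuilding the format per request. The stdlib-logging sketch below (runnable without loguru) shows the same idea: keep the format fixed and inject per-request context into each record:</p>

```python
import logging

class RequestContext(logging.Filter):
    """Inject the current request URL into every record: a stdlib
    analogue of loguru's bind()/contextualize()."""
    def __init__(self):
        super().__init__()
        self.url = "-"

    def filter(self, record):
        record.url = self.url  # every record gains a `url` attribute
        return True

ctx = RequestContext()
handler = logging.StreamHandler()
handler.setFormatter(logging.Formatter("%(levelname)s | %(url)s | %(message)s"))
logger = logging.getLogger("app")
logger.addHandler(handler)
logger.addFilter(ctx)
logger.setLevel(logging.DEBUG)

ctx.url = "/items/42"             # set once per request in the middleware
logger.debug("handling request")  # -> DEBUG | /items/42 | handling request
```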
|
<python><python-3.x><logging><fastapi><loguru>
|
2024-02-05 22:46:43
| 1
| 1,194
|
NFeruch - FreePalestine
|
77,944,065
| 1,442,731
|
Python Logging Configuration from both Dict and file
|
<p>I looked all morning for a solution. Nothing seemed to fit. If anyone has one, I'll be glad to close this. I offer a solution at the end, feel free to let me know if there is anything better. Not sure why this is not a common issue.</p>
<p>I have a relatively complex testing module that I want to do logging on. One of the things I want to add is the ability to easily change logging parameters for specific module loggers, etc.. For example, turn on a custom TRACE level for just the USB module, or limit logging from the BLE module(s) to just INFO and above. It would be nice to change anything, not just level. Some of the modules I use (like Bleak) have logging, but need some configuring from outside the module and so modifying that module is unrealistic.</p>
<p>My test program sets a reasonable set of defaults in the code as a dictionary, but occasionally I would like to, for example, turn on TRACE logging for BLE. Instead of modifying the dictionary in the original code, I would like to override or update the configuration from a separate configuration file. I understand why this is not easy from logger.py itself, as it would require the actual logger object factories to re-issue some of the objects (i.e. loggers).</p>
<p>What I came up with is a function to do a deep update of the default configuration dictionary from a configuration update dictionary. The dictionary update() method is close, but can't handle the nested dictionaries in logging.</p>
<p>For example. Given the default configuration:</p>
<pre><code>debugConfig = {
    'version': 1,
    'loggers': {
        ...
        'usb_interface': {
            'handlers': ['default', 'error', 'file'],
            'level': 'DEBUG',
            'propagate': False},
        ...
    }
</code></pre>
<p>And an update configuration dictionary (read from a file config.py) with:</p>
<pre><code>conf.logger = {
    'loggers': {
        'usb_interface': {
            'level': 'TRACE',
        },
    },
}
</code></pre>
<p>I then have the code:</p>
<pre><code>def updateConfig(conf, updates):
    for key in updates:
        if key in conf:
            # keys match, update
            if isinstance(updates[key], dict) and isinstance(conf[key], dict):
                updateConfig(conf[key], updates[key])
            else:
                conf[key] = updates[key]

updateConfig(debugConfig, config.logger)
logging.config.dictConfig(debugConfig)
</code></pre>
<p>The updateConfig function walks through the conf dictionary, updating nested dictionary trees and overriding matching elements. I realize there is a possible situation where the conf and update elements could have a type mismatch, resulting, for example, in an embedded dictionary being overwritten with a simple string, but I deemed that a feature, not a fault.</p>
<p>This results in the dict:</p>
<pre><code>debugConfig = {
    'version': 1,
    'loggers': {
        ...
        'usb_interface': {
            'handlers': ['default', 'error', 'file'],
            'level': 'TRACE',
            'propagate': False},
        ...
    }
</code></pre>
<p>Hope this helps someone not miss lunch searching around.</p>
|
<python><dictionary><python-logging>
|
2024-02-05 21:18:25
| 1
| 6,227
|
wdtj
|
77,943,896
| 893,254
|
Most Pythonic way to handle exception caused by functools.reduce when the iterable yields no elements?
|
<p>Python's <code>functools.reduce</code> throws an exception when the iterable passed to it yields no elements.</p>
<p>Here's how I currently use it:</p>
<pre><code>some_list = [] # empty list should be permissible
functools.reduce(
    lambda left, right: pandas.merge(left, right, on=['idx'], how='outer'),
    some_list
)
</code></pre>
<p>This throws an exception if the list contains no elements.</p>
<p>What I actually want it to do is return <code>None</code> if the list is empty. But that can't be achieved by setting the initial value to <code>None</code> because <code>None</code> cannot be merged with a <code>DataFrame</code> type in the call to <code>pandas.merge</code>.</p>
<p>I could wrap this statement in a function and perform a return-early check like so:</p>
<pre><code>def f(some_list):
    if len(some_list) < 1:
        return None
</code></pre>
<p>But this doesn't seem like a great solution. Is there a more elegant way to do it?</p>
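<p>The best I have come up with so far (a small helper, nothing clever) is to peel off the first element and use it as the initializer, returning <code>None</code> when the iterable is empty, so <code>pandas.merge</code> never sees a <code>None</code>:</p>

```python
import functools

def reduce_or_none(fn, iterable):
    it = iter(iterable)
    try:
        first = next(it)
    except StopIteration:
        return None  # empty iterable: nothing to reduce
    return functools.reduce(fn, it, first)

print(reduce_or_none(lambda left, right: left + right, []))         # None
print(reduce_or_none(lambda left, right: left + right, [1, 2, 3]))  # 6
```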
|
<python><pandas><functional-programming><reduce>
|
2024-02-05 20:46:15
| 1
| 18,579
|
user2138149
|
77,943,889
| 405,017
|
How to defend against accidentally shadowing the standard library?
|
<p>I'm new to Python, and wrote a utility script that generates some numbers I need. I named the file <code>numbers.py</code>. This script depends on the PyGame library. I found that just importing that library caused errors, because PyGame tries to <code>import numbers</code> from the Python standard library, and the default lookup rules have my own file shadowing the standard library.</p>
<p>The linters I have tried (PyLint, Ruff) do not seem to catch this rookie mistake of mine, but I'm having trouble understanding how best to avoid it. As best as I can tell, the advice to "just rename your files" to not collide with the standard library requires either:</p>
<ol>
<li>memorizing the 307 standard library names (that don't have periods; 1027 names in total), or</li>
<li>being able to recognize via errors from other libraries that maybe you accidentally shadowed a library, so you can rename</li>
</ol>
<p>I'm hoping there's a better solution, like <code>#include <foo></code> vs <code>#include "foo"</code> that would differentiate between the standard library and a local file, or a lint configuration that detects when a file I create shadows the standard library.</p>
<p>How can I avoid this problem as I go forward?</p>
<pre class="lang-none prettyprint-override"><code>phrogz:~/proj/ai$ ls
numbers.py
phrogz:~/proj/ai$ cat numbers.py
import pygame
phrogz:~/proj/ai$ python numbers.py
Traceback (most recent call last):
  File "/home/phrogz/proj/ai/numbers.py", line 1, in <module>
    import pygame
  File "/home/phrogz/.local/lib/python3.11/site-packages/pygame/__init__.py", line 264, in <module>
    import pygame.surfarray
  File "/home/phrogz/.local/lib/python3.11/site-packages/pygame/surfarray.py", line 47, in <module>
    import numpy
  File "/home/phrogz/.local/lib/python3.11/site-packages/numpy/__init__.py", line 130, in <module>
    from numpy.__config__ import show as show_config
  File "/home/phrogz/.local/lib/python3.11/site-packages/numpy/__config__.py", line 4, in <module>
    from numpy.core._multiarray_umath import (
  File "/home/phrogz/.local/lib/python3.11/site-packages/numpy/core/__init__.py", line 72, in <module>
    from . import numerictypes as nt
  File "/home/phrogz/.local/lib/python3.11/site-packages/numpy/core/numerictypes.py", line 595, in <module>
    _register_types()
  File "/home/phrogz/.local/lib/python3.11/site-packages/numpy/core/numerictypes.py", line 590, in _register_types
    numbers.Integral.register(integer)
    ^^^^^^^^^^^^^^^^
AttributeError: module 'numbers' has no attribute 'Integral'
</code></pre>
|
<python><python-3.x><python-import>
|
2024-02-05 20:45:58
| 1
| 304,256
|
Phrogz
|
77,943,877
| 9,983,652
|
How to update string element in a list?
|
<p>I have a list of strings. I'd like to change some elements in the list, but it didn't work. For example, I want to change every element from the 2nd one onward to 'black'.</p>
<pre class="lang-py prettyprint-override"><code>list_string=['black','red','green','blue','gray']
list_string[1:]='black'
</code></pre>
<p><strong>Result:</strong></p>
<pre class="lang-py prettyprint-override"><code>['black', 'b', 'l', 'a', 'c', 'k']
</code></pre>
<p><strong>Expected:</strong></p>
<pre class="lang-py prettyprint-override"><code>['black','black', 'black', 'black', 'black']
</code></pre>
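<p>For comparison, assigning a list on the right-hand side gives the expected result: slice assignment consumes any iterable element by element, and a string is an iterable of characters, which is why <code>'black'</code> was exploded into letters:</p>

```python
list_string = ['black', 'red', 'green', 'blue', 'gray']
# Provide a list with one string per slot instead of a bare string:
list_string[1:] = ['black'] * (len(list_string) - 1)
print(list_string)  # ['black', 'black', 'black', 'black', 'black']
```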
|
<python>
|
2024-02-05 20:43:50
| 3
| 4,338
|
roudan
|
77,943,846
| 17,800,932
|
Running `mypy` on a project with `pysnmp-lextudio` package dependency returns `named-defined` errors
|
<p>To recreate the issue I am having:</p>
<pre class="lang-bash prettyprint-override"><code>poetry new pysnmp-and-mypy
cd ./pysnmp-and-mypy
poetry add mypy
poetry add pysnmp-lextudio
touch ./pysnmp_and_mypy/test.py
</code></pre>
<p>Put the following code into <code>./pysnmp_and_mypy/test.py</code>:</p>
<pre class="lang-py prettyprint-override"><code>from pysnmp.hlapi import * # type: ignore
from typing import Any
def convert_snmp_type_to_python_type(
    snmp_value: Integer32 | Integer | Unsigned32 | Gauge32 | OctetString | Any,
) -> int | str:
    match snmp_value:
        case Integer32():
            return int(snmp_value)
        case Integer():
            return int(snmp_value)
        case Unsigned32():
            return int(snmp_value)
        case Gauge32():
            return int(snmp_value)
        case OctetString():
            return str(snmp_value)
        case _:
            raise TypeError(
                "Only SNMP types of type integer and string are supported. Received type of {}".format(
                    str(type(snmp_value))
                )
            )

def get_data(ip_address: str, object_identity: str) -> int | str:
    iterator = getCmd(
        SnmpEngine(),
        CommunityData("public", mpModel=0),
        UdpTransportTarget((ip_address, 161)),
        ContextData(),
        ObjectType(ObjectIdentity(object_identity)),
    )
    error_indication, error_status, error_index, variable_bindings = next(iterator)
    if error_indication:
        raise RuntimeError(error_indication.prettyPrint())
    elif error_status:
        raise RuntimeError(error_status.prettyPrint())
    else:
        [variable_binding] = variable_bindings
        [_oid, value] = variable_binding
        return convert_snmp_type_to_python_type(value)
</code></pre>
<p>Run:</p>
<pre class="lang-bash prettyprint-override"><code>poetry install
poetry run mypy ./pysnmp_and_mypy
</code></pre>
<p>Now observe that the following errors are returned:</p>
<pre><code>pysnmp_and_mypy\test.py:6: error: Name "Integer32" is not defined [name-defined]
pysnmp_and_mypy\test.py:6: error: Name "Integer" is not defined [name-defined]
pysnmp_and_mypy\test.py:6: error: Name "Unsigned32" is not defined [name-defined]
pysnmp_and_mypy\test.py:6: error: Name "Gauge32" is not defined [name-defined]
pysnmp_and_mypy\test.py:6: error: Name "OctetString" is not defined [name-defined]
pysnmp_and_mypy\test.py:9: error: Name "Integer32" is not defined [name-defined]
pysnmp_and_mypy\test.py:11: error: Name "Integer" is not defined [name-defined]
pysnmp_and_mypy\test.py:13: error: Name "Unsigned32" is not defined [name-defined]
pysnmp_and_mypy\test.py:15: error: Name "Gauge32" is not defined [name-defined]
pysnmp_and_mypy\test.py:17: error: Name "OctetString" is not defined [name-defined]
pysnmp_and_mypy\test.py:28: error: Name "getCmd" is not defined [name-defined]
pysnmp_and_mypy\test.py:29: error: Name "SnmpEngine" is not defined [name-defined]
pysnmp_and_mypy\test.py:30: error: Name "CommunityData" is not defined [name-defined]
pysnmp_and_mypy\test.py:31: error: Name "UdpTransportTarget" is not defined [name-defined]
pysnmp_and_mypy\test.py:32: error: Name "ContextData" is not defined [name-defined]
pysnmp_and_mypy\test.py:33: error: Name "ObjectType" is not defined [name-defined]
pysnmp_and_mypy\test.py:33: error: Name "ObjectIdentity" is not defined [name-defined]
</code></pre>
<p>How do I get rid of these errors in the appropriate way? Why isn't MyPy finding the definitions? They are definitely there, as the code runs just fine and Pylance doesn't have an issue finding them. If there isn't a good way, what is the best workaround?</p>
<p>Note that I had to do <code>from pysnmp.hlapi import * # type: ignore</code> because I otherwise get the error <code>error: Skipping analyzing "pysnmp.hlapi": module is installed, but missing library stubs or py.typed marker [import-untyped]</code>.</p>
|
<python><mypy><python-typing><python-poetry><pysnmp>
|
2024-02-05 20:36:52
| 1
| 908
|
bmitc
|
77,943,668
| 3,700,524
|
Playwright setting cookies not working in python
|
<p>I'm trying to use python to add cookies to the browser in playwright. When I print the <code>BrowserContext</code> cookies, I can see the cookie that I added, but when I check it from the browser, it doesn't exist. How can I tell the browser to add cookies from the browser context? Here is my code:</p>
<pre><code>from playwright.sync_api import sync_playwright
with sync_playwright() as p:
browser = p.chromium.launch(headless = False,devtools=True)
context = browser.new_context()
page = context.new_page()
page.goto("https://crawler-test.com/")
# defining a random cookie
cookies = [{'name': 'temp', 'value': 'temp', 'domain': 'temp.com', 'path': '/'}]
# adding cookie to the browser context
context.add_cookies(cookies)
# printing cookies
print(context.cookies())
browser.close()
</code></pre>
<p>And here is the output I see in the terminal:</p>
<pre><code>[{'name': '_ga_78MMTFSGVB', 'value': 'GS1......', 'domain': '.crawler-test.com', 'path': '/', 'expires': 1, 'httpOnly': False, 'secure': False, 'sameSite': 'Lax'},
{'name': '_ga', 'value': 'GA1......', 'domain': '.crawler-test.com', 'path': '/', 'expires': 1, 'httpOnly': False, 'secure': False, 'sameSite': 'Lax'},
{'name': '_gid', 'value': 'GA1.......', 'domain': '.crawler-test.com', 'path': '/', 'expires': 1, 'httpOnly': False, 'secure': False, 'sameSite': 'Lax'},
{'name': '_gat_UA-7097885-11', 'value': '1', 'domain': '.crawler-test.com',
'path': '/', 'expires': 1, 'httpOnly': False, 'secure': False, 'sameSite': 'Lax'},
{'name': 'temp', 'value': 'temp', 'domain': 'temp.com', 'path': '/', 'expires': -1, 'httpOnly': False, 'secure': False, 'sameSite': 'Lax'}]
</code></pre>
<p>As you can see the desired cookie is successfully added to the browser context, but when I check the cookies from the browser, I don't see the added cookies :</p>
<p><a href="https://i.sstatic.net/dL9DT.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/dL9DT.png" alt="enter image description here" /></a></p>
|
<python><web-crawler><playwright><playwright-python>
|
2024-02-05 20:03:56
| 1
| 3,421
|
Mohsen_Fatemi
|
77,943,649
| 248,340
|
Join an Azure Communication Services Chat as a Teams user
|
<p>I'm trying to build a proof of concept application using Azure Communication Services.</p>
<p>I have followed the chat hero example <a href="https://learn.microsoft.com/en-us/azure/communication-services/samples/chat-hero-sample" rel="nofollow noreferrer">https://learn.microsoft.com/en-us/azure/communication-services/samples/chat-hero-sample</a> for basic chat setup and I have that all working.
I now want to join the chat thread as a teams user instead of creating a new chat identity.</p>
<p>I've added the ability to exchange a teams token for an ACS identity token as described in the manage-teams-idenity quickstart: <a href="https://github.com/Azure-Samples/communication-services-python-quickstarts/blob/main/manage-teams-identity-mobile-and-desktop/exchange-communication-access-tokens.py" rel="nofollow noreferrer">https://github.com/Azure-Samples/communication-services-python-quickstarts/blob/main/manage-teams-identity-mobile-and-desktop/exchange-communication-access-tokens.py</a></p>
<p>From there, I create a MicrosoftTeamsUserIdentifier object, and call
chat_client_thread.add_participants with a list of the participants to add.</p>
<p>The add_participants call is failing with an underlying 403 error: 'Permissions check failed' for the added participant, although the overall add_participants call succeeds with 201.</p>
<p>If I attempt to use the AAD OID to create a CommunicationUserIdentifier object I get an error saying the Participant format is invalid, so I know that I need to use the MicrosoftTeamsUserIdentifier Identifier type.</p>
<p>I cannot find any documentation on what privileges or permissions I need to tweak. None of the examples discuss this workflow.</p>
<p>Is adding a teams user to an ACS chat supported?
If so, what permissions do I need to check or adjust to allow them to join?</p>
|
<python><azure><azure-communication-services>
|
2024-02-05 19:59:36
| 0
| 318
|
David Just
|
77,943,598
| 16,436,774
|
Debugging seeming memory issues in Python
|
<p>I have a Python script which deals with a fair amount of data using a fair amount of recursion, though not so much that it triggers a <code>MemoryError</code> or <code>RecursionError</code>. Whether this script runs to completion depends on how its run, as well as some seeming random chance.</p>
<ul>
<li><strong>Run through PyCharm</strong>: succeeds or stops w/ exit code <code>0xC0000005</code>, <code>-2147483645</code>, or an exception</li>
<li><strong>Run through CLI</strong>: succeeds or stops w/ an exception or completely silently</li>
<li><strong>Run through CLI w/ <code>trace</code></strong>: takes forever (because of all the tracing), but succeeds</li>
</ul>
<p>The exceptions referenced above are all of a similar variety: a <code>TypeError</code> or <code>AttributeError</code> a dozen or so calls down a recursive chain that doesn't actually happen. For example,</p>
<pre class="lang-py prettyprint-override"><code>TypeError: unsupported operand type(s) for |: 'function' and 'set'
</code></pre>
<p>where the left operand is <em>never</em> a <code>function</code>, and in particular not a <code>function</code> in the offending call, as confirmed in a debugger.</p>
<p>All of this madness points to nasty memory errors... somewhere (<code>0xC0000005</code> for example is a Windows access violation). Python is not a language that deals with cryptic memory issues often, unless there's an obvious low-level library mucking things up (this script is pure Python). Debugging is near impossible, as the debugger catches all of the mentioned exceptions but offers no explanation as to how they creeped up. And there's no runaway memory leakage; the script is running (with <code>trace</code>) right now with a stable 1.6 GB footprint.</p>
<p>I found other answers indicating that PyCharm could be a culprit, and indeed its runner is at least partially responsible for early termination, but even the CLI is yielding odd results (why would it ever stop silently?). And <code>trace</code> is of no help, since with it the script magically succeeds, as if a watchful eye scares it into submission.</p>
<p>So, all of this to say that I'm not necessarily looking for assistance with this particular script; there's no need for anybody <em>else</em> to go digging through this mess. Instead, I'm looking for <strong>advice on debugging such memory-related errors in Python</strong>, and, if able, a description of <strong>what potential causes to look for</strong>.</p>
<p>Searching for existing answers about this stuff has proved extremely difficult; SO questions concerning <code>0xC0000005</code>, for example, almost always have a library like PyTorch as the suspect. I've attempted reworking my script, and I think I've made it incrementally more efficient, but to no avail. This is such a particular and nasty problem, but I'm certain I'm not the only one to have faced it. Any information or places to find it would be greatly appreciated.</p>
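<p>For what it's worth, one diagnostic with essentially no runtime overhead (unlike <code>trace</code>) is the standard-library <code>faulthandler</code> module, which dumps the Python-level stack on hard crashes such as access violations; a minimal sketch:</p>

```python
import faulthandler

# Enable as early as possible; on a hard crash (e.g. the Windows access
# violation 0xC0000005) the interpreter dumps Python tracebacks for all
# threads to stderr before dying, which at least localizes the faulting call.
faulthandler.enable()

# Equivalent without touching the script: python -X faulthandler script.py
print(faulthandler.is_enabled())  # True
```

<p>This won't explain the corruption, but it turns a silent exit into a stack trace, which narrows down where to look.</p>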
|
<python><debugging><memory>
|
2024-02-05 19:48:21
| 2
| 866
|
kg583
|
77,943,414
| 4,927,641
|
Microsoft Graph API python SDK ,creating a enterprise application
|
<p>After following <a href="https://learn.microsoft.com/en-us/graph/application-saml-sso-configure-api?tabs=python%2Cpowershell-script" rel="nofollow noreferrer">https://learn.microsoft.com/en-us/graph/application-saml-sso-configure-api?tabs=python%2Cpowershell-script</a> I'm trying to build an Enterprise Application.</p>
<p>However, the code block mentions:</p>
<pre><code>graph_client = GraphServiceClient(credentials, scopes)
request_body = InstantiatePostRequestBody(
display_name = "AWS Contoso",
)
result = await graph_client.application_templates.by_application_template_id('applicationTemplate-id').instantiate.post(request_body)
</code></pre>
<p>There is no clue as to how to import the <strong>InstantiatePostRequestBody</strong> class.
Can anyone please help?</p>
<p>I tried randomly importing different packages to test whether any of them had InstantiatePostRequestBody, to no avail.</p>
|
<python><microsoft-graph-api><azure-ad-graph-api>
|
2024-02-05 19:09:37
| 1
| 316
|
gayan ranasinghe
|
77,943,395
| 3,457,351
|
OpenAI Embeddings API: How to extract the embedding vector?
|
<p>I use nearly the same code as here in <a href="https://gist.github.com/limcheekin/997de2ae0757cd46db796f162c3dd58c" rel="nofollow noreferrer">this</a> GitHub repo to get embeddings from OpenAI:</p>
<pre><code>oai = OpenAI(
# This is the default and can be omitted
api_key="sk-.....",
)
def get_embedding(text_to_embed, openai):
response = openai.embeddings.create(
model= "text-embedding-ada-002",
input=[text_to_embed]
)
return response
embedding_raw = get_embedding(text,oai)
</code></pre>
<p>According to the GitHub repo, the vector should be in <code>response['data'][0]['embedding']</code>. But it isn't in my case.</p>
<p>When I printed the response variable, I got this:</p>
<pre><code>print(embedding_raw)
</code></pre>
<p>Output:</p>
<pre><code>CreateEmbeddingResponse(data=[Embedding(embedding=[0.009792150929570198, -0.01779201813042164, 0.011846082285046577, -0.0036859565880149603, -0.0013213189085945487, 0.00037509595858864486,..... -0.0121011883020401, -0.015751168131828308], index=0, object='embedding')], model='text-embedding-ada-002', object='list', usage=Usage(prompt_tokens=360, total_tokens=360))
</code></pre>
<p>How can I access the embedding vector?</p>
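<p>Judging from the printed repr, the v1 client returns a typed object rather than a dict, so attribute access (<code>response.data[0].embedding</code>) would be needed instead of <code>response['data'][0]['embedding']</code>. A self-contained analog of that shape (plain dataclasses standing in for the SDK's models):</p>

```python
from dataclasses import dataclass


@dataclass
class Embedding:
    embedding: list[float]
    index: int
    object: str


@dataclass
class CreateEmbeddingResponse:
    data: list[Embedding]
    model: str


# Mimics the repr shown above: a typed object, not a plain dict.
response = CreateEmbeddingResponse(
    data=[Embedding(embedding=[0.0097, -0.0177], index=0, object="embedding")],
    model="text-embedding-ada-002",
)

vector = response.data[0].embedding  # attribute access, not subscripting
print(vector)  # [0.0097, -0.0177]
```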
|
<python><vector><openai-api><embedding><openaiembeddings>
|
2024-02-05 19:05:39
| 1
| 325
|
user39063
|
77,943,358
| 149,138
|
PEP-484 type hint for an invisibly-inherited property
|
<p>How can I type-hint an <code>@property</code> which will come from an undeclared base class (as often occurs when using a mixin)?</p>
<p>E.g. to annotate "expected attributes" coming from the hidden inheritance hierarchy, I'm using:</p>
<pre class="lang-py prettyprint-override"><code>
class SomeMixin:
name: str
logger: Logger
def __init__(self):
super().__init__()
</code></pre>
<p>We always use <code>SomeMixin</code> in an inheritance hierarchy that provides <code>self.name</code> and <code>self.logger</code> attributes of type <code>str</code> and <code>Logger</code>. So far, so good.</p>
<p>If the mixin is instead used where the <code>logger</code> provided by another class is actually a property, like:</p>
<pre class="lang-py prettyprint-override"><code>@property
def logger(self):
return self.__logger
</code></pre>
<p>How can I type hint that in the mixin? I could keep using the existing attribute type hint as above, but then I get an "incompatible override" error while type checking, because <code>logger</code> is not in fact an attribute, for example:</p>
<pre class="lang-py prettyprint-override"><code>
class Base:
@property
def logger(self) -> Logger:
return Logger("foo")
class Derived(SomeMixin, Base):
pass
</code></pre>
<p>The type checker will complain about <code>Derived</code> having incompatible definitions of <code>logger</code>, e.g., in pyright:</p>
<pre><code>Base classes for class "Derived" define variable "logger" in incompatible way Pylance(reportIncompatibleVariableOverride)
foo.py(17, 9): Base class "Base" provides type "property", which is overridden
foo.py(9, 5): Base class "SomeMixin" overrides with type "Logger"
</code></pre>
<p>I could use something like:</p>
<pre class="lang-py prettyprint-override"><code>
class SomeMixin:
name: str
logger: property
</code></pre>
<p>... but then <code>logger</code> has type <code>Any</code> and I don't get the benefit of type checking on its value.</p>
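<p>One more pattern I've experimented with (a sketch, not necessarily the canonical answer): declare <code>logger</code> as a property in the mixin itself, matching <code>Base</code>'s definition, and delegate to the next class in the MRO for the actual value:</p>

```python
from logging import Logger, getLogger


class SomeMixin:
    name: str

    @property
    def logger(self) -> Logger:
        # Compatible with Base's property definition; the real value comes
        # from whichever class follows SomeMixin in the MRO.
        return super().logger  # type: ignore[misc]


class Base:
    @property
    def logger(self) -> Logger:
        return getLogger("foo")


class Derived(SomeMixin, Base):
    pass


print(Derived().logger.name)  # foo
```

<p>The <code>type: ignore</code> is needed because <code>SomeMixin</code>'s only declared static base is <code>object</code>, which has no <code>logger</code>; at runtime the cooperative <code>super()</code> lookup finds <code>Base</code>'s property.</p>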
|
<python><python-3.x><python-typing>
|
2024-02-05 18:59:28
| 1
| 66,260
|
BeeOnRope
|
77,943,278
| 5,094,261
|
How to make progressively loading JSON responses work with FastAPI + React
|
<p>I have a problem where I have a FastAPI endpoint that returns a list of JSON objects. The API might become slow at times but I want the responses to be streamable so that they're ready to be rendered by the React Frontend as soon as they are available.</p>
<p>I can think of two approaches.</p>
<ol>
<li><p>Use <code>Server-Sent-Events</code>. This can work with <a href="https://github.com/sysid/sse-starlette" rel="nofollow noreferrer">sse-starlette</a>. The problem is that there are two backend APIs involved. I have a microservice API that produces the JSON results and a backend API interfaced with the microservice that simply authenticates user requests and relays the response back. If I use SSE on the microservice API, I would have to use another SSE client at my backend to intercept and resend the SSE events.</p>
</li>
<li><p>Use <code>StreamingResponse</code>. This is good because I can return a StreamingResponse from the microservice and read that response line by line and stream it back from my backend. The only problem with this is that the content is in bytes and I have to parse a stream of bytes to identify individual JSON objects.</p>
</li>
</ol>
<p>Basically, I want to yield each result object one by one and make them available for my Frontend to render progressively.</p>
<p>I am not opposed to either of these ideas or an even better option, if any. The point of this post is to get an idea about how this can be achieved as I couldn't find a definitive answer anywhere for this.</p>
<p>Any help would be greatly appreciated.</p>
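<p>For approach 2, the byte-parsing concern mostly disappears if each JSON object is framed on its own line (newline-delimited JSON); the framing logic is framework-independent and easy to relay through the backend:</p>

```python
import json
from typing import Any, Iterable, Iterator


def encode_ndjson(objects: Iterable[dict[str, Any]]) -> Iterator[bytes]:
    # Producer side: one complete JSON object per line.
    for obj in objects:
        yield (json.dumps(obj) + "\n").encode("utf-8")


def decode_ndjson(chunks: Iterable[bytes]) -> Iterator[dict[str, Any]]:
    # Consumer side: buffer arbitrary byte chunks, emit one object
    # per completed line as soon as it arrives.
    buffer = b""
    for chunk in chunks:
        buffer += chunk
        while b"\n" in buffer:
            line, buffer = buffer.split(b"\n", 1)
            yield json.loads(line)


decoded = list(decode_ndjson(encode_ndjson([{"id": 1}, {"id": 2}])))
print(decoded)  # [{'id': 1}, {'id': 2}]
```

<p>With FastAPI, the encoder generator could presumably feed a <code>StreamingResponse</code> (media type <code>application/x-ndjson</code>), and the relaying backend can forward each line as it arrives.</p>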
|
<python><reactjs><streaming><fastapi><server-sent-events>
|
2024-02-05 18:43:37
| 2
| 1,273
|
Shiladitya Bose
|
77,943,247
| 5,896,591
|
How to install bytestring formatters for built-in types in Python 3?
|
<p>I am trying to port Python2 protocol code to Python3. How do I get bytestring formatters for built-in types? In Python2, we could do:</p>
<pre><code>>>> b'%s' % None
'None'
>>> b'%s' % 15
'15'
>>> b'%s' % []
'[]'
</code></pre>
<p>The same code in Python3 gives:</p>
<pre><code>>>> b'%s' % None
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: %b requires bytes, or an object that implements __bytes__, not 'NoneType'
>>> b'%s' % 15
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: %b requires bytes, or an object that implements __bytes__, not 'int'
>>> b'%s' % []
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: %b requires bytes, or an object that implements __bytes__, not 'list'
</code></pre>
<p>How do I install the standard bytestring formatters for built-in types?</p>
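<p>As far as I can tell there is no hook to register byte-formatters for built-ins, but a small shim that routes each value through <code>str()</code> recovers the Python 2 behaviour (a sketch, assuming ASCII-representable values):</p>

```python
def bfmt(template: bytes, *values: object) -> bytes:
    # Emulate Python 2's b'%s' by rendering each value with str() first;
    # %s on bytes accepts bytes operands, so we encode the rendered text.
    encoded = tuple(str(v).encode("ascii", "backslashreplace") for v in values)
    return template % encoded


print(bfmt(b"%s", None))  # b'None'
print(bfmt(b"%s", 15))    # b'15'
print(bfmt(b"%s", []))    # b'[]'
```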
|
<python><python-3.x><byte>
|
2024-02-05 18:38:00
| 2
| 4,630
|
personal_cloud
|
77,943,210
| 5,790,653
|
python while loop if all conditions are equal then do another random choice from list
|
<p>This is my python code:</p>
<pre class="lang-py prettyprint-override"><code>import secrets
from time import sleep
ids = [{'id': number} for number in range(1, 5+1)]
rand1 = secrets.choice(ids)
rand2 = secrets.choice(ids)
rand3 = secrets.choice(ids)
n = 0
while rand1['id'] == rand2['id'] == rand3['id']:
n += 1
print('Before')
print(rand1['id'], rand2['id'], rand3['id'])
sleep(1)
rand1 = secrets.choice(ids)
rand2 = secrets.choice(ids)
rand3 = secrets.choice(ids)
print('After')
print(rand1['id'], rand2['id'], rand3['id'])
</code></pre>
<p>This is what I'm trying to achieve:</p>
<blockquote>
<p>Do the while loop and choose random ids until none of
rand1['id'], rand2['id'], and rand3['id'] are equal.</p>
<p>Even if only two of them are equal, draw again.</p>
</blockquote>
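<p>For reference, the "no two equal" condition can be written with a set, since duplicates collapse; this also keeps the redraw in one place:</p>

```python
import secrets

ids = [{'id': number} for number in range(1, 5 + 1)]


def draw() -> tuple[int, ...]:
    return tuple(secrets.choice(ids)['id'] for _ in range(3))


picks = draw()
while len(set(picks)) < 3:  # len < 3 means at least two ids matched
    picks = draw()

print(picks)  # three pairwise-distinct ids
```

<p>If the three picks must always be distinct anyway, <code>secrets.SystemRandom().sample(ids, 3)</code> avoids the retry loop entirely.</p>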
|
<python>
|
2024-02-05 18:28:37
| 3
| 4,175
|
Saeed
|
77,943,150
| 18,150,609
|
Microsoft Graph SDK for Python: How to Get a SharePoint List?
|
<p>Preface, there are similar questions (<a href="https://stackoverflow.com/questions/49349577/microsoft-graph-sdk-and-sharepoint-list-items">such as</a>) but these are different as the SDK for python doesn't seem to provide a manner to specify a domain name for the MS365 tenant.</p>
<p>I've created a client secret with adequate permissions to access the named resource. Below is my attempt and results:</p>
<pre><code>from azure.identity.aio import ClientSecretCredential
from msgraph import GraphServiceClient
# Authentication details
tenant_id = 'abcdefgh-1234-5678-9012-abcdefghijkl'
app_client_id = 'abcdefgh-1234-5678-9012-abcdefghijkl'
client_secret_val = 'lmnop~abcdefghijklmnopqrstuvwzxy'
# SharePoint site ID and list name
site_id = 'mysite'
list_name = 'mylist'
# Build client
credential = ClientSecretCredential(tenant_id, app_client_id, client_secret_val)
scopes = ['https://graph.microsoft.com/.default']
client = GraphServiceClient(credentials=credential, scopes=scopes)
# Collect data
req_list = client.sites.by_site_id(site_id).lists.by_list_id(list_name)
res = await req_list.get()
</code></pre>
<pre><code>### outputs
ODataError:
APIError
Code: 400
message: None
error: MainError(
additional_data={},
code='invalidRequest',
details=None,
inner_error=InnerError(
additional_data={},
client_request_id='23691144-abe3-467d-b160-21c7ae84473b',
date=DateTime(2024, 2, 5, 17, 57, 8, tzinfo=Timezone('UTC')),
odata_type=None,
request_id='f69847e7-4f06-41b9-b2b2-5ce52e597364'
),
message='Invalid hostname for this tenancy',
target=None
)
</code></pre>
<p>It seems to claim use of an invalid hostname for the tenancy, though I did not make use of a hostname. The SDK for python is quite new and I'm having a difficult time locating available methods and parameters for these objects.</p>
<p>Some links I needed to bookmark:</p>
<ul>
<li><a href="https://learn.microsoft.com/en-us/graph/api/list-get?view=graph-rest-1.0&amp;tabs=python" rel="nofollow noreferrer">Get a SharePoint List</a></li>
<li><a href="https://learn.microsoft.com/en-us/python/api/overview/azure/mgmt-datafactory-readme?view=azure-python" rel="nofollow noreferrer">Microsoft Azure SDK for Python</a></li>
</ul>
|
<python><azure><sharepoint><sdk><microsoft-graph-api>
|
2024-02-05 18:14:54
| 1
| 364
|
MrChadMWood
|
77,943,148
| 1,467,552
|
How to add numeric value from one column to other List column elements in Polars?
|
<p>Suppose I have the following Polars DataFame:</p>
<pre><code>import polars as pl
df = pl.DataFrame({
'lst': [[0, 1], [9, 8]],
'val': [3, 4]
})
</code></pre>
<p>And I want to add the number in the <code>val</code> column, to every element in the corresponding list in the <code>lst</code> column, to get the following result:</p>
<pre><code>┌───────────┬─────┐
│ lst ┆ val │
│ --- ┆ --- │
│ list[i64] ┆ i64 │
╞═══════════╪═════╡
│ [3, 4] ┆ 3 │
│ [13, 12] ┆ 4 │
└───────────┴─────┘
</code></pre>
<p>I know how to add a constant value, e.g.:</p>
<pre><code>new_df = df.with_columns(
pl.col('lst').list.eval(pl.element() + 2)
)
</code></pre>
<p>But when I try:</p>
<pre><code>new_df = df.with_columns(
pl.col('lst').list.eval(pl.element() + pl.col('val'))
)
</code></pre>
<p>I get the following error:</p>
<pre><code>polars.exceptions.ComputeError: named columns are not allowed in `list.eval`; consider using `element` or `col("")`
</code></pre>
<p>Is there any elegant way to achieve my goal (<strong>without map_elements</strong>)?</p>
<p>Thanks in advance.</p>
|
<python><dataframe><python-polars>
|
2024-02-05 18:14:30
| 3
| 1,170
|
barak1412
|
77,943,147
| 880,874
|
Why isn't my script printing all my results to the file?
|
<p>I have the simple code below that loops through a dataframe and prints the results to the screen and also to a file.</p>
<p>My nagging issue, however, is that it prints all the data to the screen just fine, but the file only gets the tail end of the data.</p>
<p>Here is my code:</p>
<pre><code>for star in Constellation_data(starDf.values.tolist()):
print(star)
sourceFile = open('stars.txt', 'w')
print(star, file = sourceFile)
sourceFile.close()
</code></pre>
<p>I open the file, then print to it, then close. So I'm not sure why it doesn't contain all the data like the screen does.</p>
<p>Thanks!</p>
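<p>To illustrate the difference between a single open and repeated opens, here is a stand-in version (hypothetical <code>stars</code> list, not the real dataframe) that opens the file once before the loop; reopening with mode <code>'w'</code> inside the loop truncates the file on each iteration:</p>

```python
stars = ["Sirius", "Vega", "Altair"]  # stand-in for Constellation_data(...)

# Opening once keeps every print; open(..., 'w') inside the loop would
# erase the file each time, leaving only the last line.
with open("stars.txt", "w") as source_file:
    for star in stars:
        print(star)                    # to the screen
        print(star, file=source_file)  # to the file

with open("stars.txt") as f:
    print(f.read().splitlines())  # ['Sirius', 'Vega', 'Altair']
```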
|
<python><python-3.x>
|
2024-02-05 18:14:28
| 1
| 7,206
|
SkyeBoniwell
|
77,943,054
| 2,289,030
|
How do I make a custom class that's serializable with dataclasses.asdict()?
|
<p>I'm trying to use a dataclass as a (more strongly typed) dictionary in my application, and found this strange behavior when using a custom type subclassing <code>list</code> within the dataclass. I'm using Python 3.11.3 on Windows.</p>
<pre class="lang-py prettyprint-override"><code>from dataclasses import dataclass, asdict
class CustomFloatList(list):
def __init__(self, args):
for i, arg in enumerate(args):
assert isinstance(arg, float), f"Expected index {i} to be a float, but it's a {type(arg).__name__}"
super().__init__(args)
@classmethod
def from_list(cls, l: list[float]):
return cls(l)
@dataclass
class Poc:
x: CustomFloatList
p = Poc(x=CustomFloatList.from_list([3.0]))
print(p) # Prints Poc(x=[3.0])
print(p.x) # Prints [3.0]
print(asdict(p)) # Prints {'x': []}
</code></pre>
<p>This does not occur if I use a regular list[float], but I'm using a custom class here to enforce some runtime constraints.</p>
<p>How do I do this correctly?</p>
<p>I'm open to just using <code>.__dict__</code> directly, but I thought <code>asdict()</code> was the more "official" way to handle this</p>
<p>A simple modification makes the code behave as expected, but is slightly less efficient:</p>
<pre class="lang-py prettyprint-override"><code>from dataclasses import dataclass, asdict
class CustomFloatList(list):
def __init__(self, args):
dup_args = list(args)
for i, arg in enumerate(dup_args):
assert isinstance(arg, float), f"Expected index {i} to be a float, but it's a {type(arg).__name__}"
super().__init__(dup_args)
@classmethod
def from_list(cls, l: list[float]):
return cls(l)
@dataclass
class Poc:
x: CustomFloatList
p = Poc(x=CustomFloatList.from_list([3.0]))
print(p)
print(p.x)
print(asdict(p))
</code></pre>
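<p>The underlying issue can be reproduced without dataclasses: in CPython, <code>asdict()</code> rebuilds list-typed fields as <code>type(value)(generator)</code>, and the original <code>__init__</code> exhausts that generator in the assert loop before <code>super().__init__()</code> ever sees an element:</p>

```python
class CustomFloatList(list):
    def __init__(self, args):
        for i, arg in enumerate(args):  # fully consumes a generator
            assert isinstance(arg, float)
        super().__init__(args)          # generator is empty by this point


from_list = CustomFloatList([3.0])            # a real list: iterable twice
from_gen = CustomFloatList(x for x in [3.0])  # a generator: iterable once
print(from_list, from_gen)  # [3.0] []
```

<p>which is why the "duplicate into a list first" modification works: it materialises the argument before iterating.</p>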
|
<python><python-3.x><generator><python-dataclasses>
|
2024-02-05 17:59:19
| 2
| 968
|
ijustlovemath
|
77,942,862
| 5,363,686
|
Function to aggregate json
|
<p>Assume I have a gcs bucket with json files with the following structure:</p>
<pre><code>[
{
"Id": "XXXXXXXXXXXXXXXXXXXXXXXXXXXXXX",
"Name": "alibaba",
"storeid": "Y1",
"storeName": "alibaba1",
"a": "1/2/3",
"b": "1.0/1.0/3",
"c": "0/0/0",
"d": "0/0/0",
"e": "1.8/3.4",
"f": "1/2/3",
"g": "1/2/3",
},
{
"Id": "XXXXXXXXXXXXXXXXXXXXXXXXXXXXXX",
"Name": "alibaba",
"storeUuid": "Y2",
"storeName": "alibaba2",
"a": "1/2/3",
"b": "1.0/1.0/3",
"c": "0/0/0",
"d": "0/0/0",
"e": "1.7/2.4",
"f": "1/2/3",
"g": "1/2/3",
},
{
"Id": "XXXXXXXXXXXXXXXXXXXXXXXXXXXXXX",
"Name": "alibaba",
"storeUuid": "Y3",
"storeName": "alibaba3",
"a": "1/2/3",
"b": "1.0/1.0/3",
"c": "0/0/0",
"d": "0/0/0",
"e": "2.7/4.4",
"f": "1/2/3",
"g": "1/2/3",
},
{
"Id": "XXXXXXXXXXXXXXXXXXXXXXXXXXXXXX",
"Name": "alibaba",
"storeUuid": "Y4",
"storeName": "alibaba4",
"a": "1/2/3",
"b": "1.0/1.0/3",
"c": "0/0/0",
"d": "0/0/0",
"e": "3.7/5.4",
"f": "1/2/3",
"g": "1/2/3",
}
]
</code></pre>
<p>What I want to do is to aggregate the different values by summing <code>a, b,c, d, f,g</code> and taking the average of <code>e</code> to return one single <code>json</code> like</p>
<pre><code>[
{
"Id": "XXXXXXXXXXXXXXXXXXXXXXXXXXXXXX",
"Name": "alibaba",
"a": "sum over all first instance/sum over all second instances/sum aover all third instance",
"b": "sum over all first instance/sum over all second instances/sum aover all third instance",
"c": "sum over all first instance/sum over all second instances/sum aover all third instance",
"d": "sum over all first instance/sum over all second instances/sum aover all third instance",
"e": "average over all first instance/average over all second instance",
"f": "sum over all first instance/sum over all second instances/sum aover all third instance",
"g": "sum over all first instance/sum over all second instances/sum aover all third instance",
}
]
</code></pre>
<p>Note that any of the values in <code>*/*/*</code> could be NaN and that the data in <code>e</code> could be the string <code>data unavailable</code>.</p>
<p>I have created this function:</p>
<pre><code>def format_large_numbers_optimized(value):
abs_values = np.abs(value)
mask = abs_values >= 1e6
formatted_values = np.where(mask,
np.char.add(np.round(value / 1e6, 2).astype(str), "M"),
np.round(value, 2).astype(str))
return formatted_values
def process_json_data_optimized(json_list):
result = {}
keys = set(json_list[0].keys()) - {'Id', 'Name', 'storeid', 'storeName'}
for key in keys:
result[key] = {'values': []}
for json_data in json_list:
for key in keys:
value = json_data.get(key, '0')
result[key]['values'].append(value)
for key in keys:
all_values_processed = []
for value in result[key]['values']:
if isinstance(value, str) and '/' in value:
processed_values = [float(v) if v != 'data unavailable' else 0 for v in value.split('/')]
elif isinstance(value, float) or isinstance(value, int):
processed_values = [value]
else:
processed_values = [0.0]
all_values_processed.append(processed_values)
numeric_values = np.array(all_values_processed)
if numeric_values.ndim == 1:
numeric_values = numeric_values[:, np.newaxis]
summed_values = np.sum(numeric_values, axis=0)
formatted_summed_values = '/'.join(format_large_numbers_optimized(summed_values))
result[key]['summed'] = formatted_summed_values
processed_result = {key: data['summed'] for key, data in result.items()}
processed_result['Id'] = json_list[0]['Id']
processed_result['Name'] = json_list[0]['Name']
return processed_result
</code></pre>
<p>But it does not create what I expect. I am at a total loss. Would really appreciate any help.</p>
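<p>To isolate just the per-position arithmetic, here is a stripped-down sketch (with my own assumptions where the spec is silent: NaN and <code>data unavailable</code> both count as 0, and <code>e</code> is averaged over the number of records):</p>

```python
import math

records = [  # minimal stand-ins for the store objects
    {"a": "1/2/3", "e": "1.8/3.4"},
    {"a": "1/2/3", "e": "1.7/2.4"},
]


def parse(field: str) -> list[float]:
    # Split "x/y/z" into floats, mapping NaN and non-numeric text to 0.
    values = []
    for piece in field.split("/"):
        try:
            value = float(piece)
        except ValueError:  # e.g. 'data unavailable'
            value = 0.0
        values.append(0.0 if math.isnan(value) else value)
    return values


def combine(key: str, average: bool = False) -> str:
    # zip(*...) transposes the rows so we can sum each slash-position.
    columns = zip(*(parse(r[key]) for r in records))
    n = len(records)
    return "/".join(
        str(round(sum(col) / (n if average else 1), 2)) for col in columns
    )


print(combine("a"))                # 2.0/4.0/6.0
print(combine("e", average=True))  # 1.75/2.9
```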
|
<python><pandas>
|
2024-02-05 17:27:54
| 2
| 11,592
|
Serge de Gosson de Varennes
|
77,942,843
| 6,734,243
|
how to install a namespace package with hatch?
|
<p>In the context of the sphinxcontrib organisation, the packages are all supposed to live in a <code>sphinxcontrib</code> namespace package, like "sphinxcontrib.icon" or "sphinxcontrib.badge".</p>
<p>The files have the following structure:</p>
<pre><code>sphinxcontrib-icon/
├── sphinxcontrib/
│ └── icon/
│ └── __init__.py
└── pyproject.toml
</code></pre>
<p>In a setuptools based pyproject.toml file I would do the following:</p>
<pre class="lang-ini prettyprint-override"><code># pyproject.toml
[build-system]
requires = ["setuptools>=61.2", "wheel", "pynpm>=0.2.0"]
build-backend = "setuptools.build_meta"
[tool.setuptools]
include-package-data = false
packages = ["sphinxcontrib.icon"]
</code></pre>
<p>Now I would like to migrate to hatchling, but I can't manage to reproduce this behaviour. In my pyproject.toml I do:</p>
<pre class="lang-ini prettyprint-override"><code># pyproject.toml
[build-system]
requires = ["hatchling"]
build-backend = "hatchling.build"
[tool.hatch.build.targets.wheel]
packages = ["sphinxcontrib/skeleton"]
</code></pre>
<p>In the "site-packages" folder my lib is not under in "sphincontrib/icon" as I would expect.</p>
<p>How should I adapt my code to make it work ?</p>
<p>I have an example sitting here: <a href="https://github.com/sphinx-contrib/sphinxcontrib-skeleton/tree/test" rel="nofollow noreferrer">https://github.com/sphinx-contrib/sphinxcontrib-skeleton/tree/test</a>.
Building the docs with <code>nox -s docs</code> fails because the package is not really installed.</p>
|
<python><packaging><hatch>
|
2024-02-05 17:25:56
| 1
| 2,670
|
Pierrick Rambaud
|
77,942,774
| 4,704,065
|
Plot a dataframe with different conditions
|
<p>I have a data frame where I need to plot X-Y axis values (iTOW on the X-axis, ionoCorr on the Y-axis) and label them based on svid.</p>
<p>My data frame looks like this:</p>
<p><a href="https://i.sstatic.net/dot0z.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/dot0z.png" alt="enter image description here" /></a></p>
<ul>
<li>I need to plot iTOW on the X-axis and ionoCorr on the Y-axis.</li>
<li>I need to check which <strong>svDataSvId</strong> is used for each iTOW and show it as a label across the plot, with each unique <strong>svDataSvId</strong> represented by a color and a label.</li>
<li>I need to create two similar sub-plots: one for sourceId=1 and another for sourceId=2.</li>
<li>My plot should look something like this. Each label (e.g. 300, 301) on the right side represents the <strong>svDataSvId</strong> used for each iTOW.</li>
</ul>
<p><a href="https://i.sstatic.net/u6dm9.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/u6dm9.png" alt="enter image description here" /></a></p>
<p>This is what I have tried, but I am not sure how I can represent the <strong>svDataSvId</strong> used for each <strong>iTOW</strong> here:</p>
<pre><code>list_id = [9, 10, 4, 36, 2, 30, 34, 11, 10]
for i, (source_id, id_list, color) in enumerate([(1, list_id, "green"), (2, list_id, "red")]):
    # .isin() keeps rows whose svDataSvId is any of the listed ids;
    # comparing with == against a whole list does not do that
    mask = (final_df1["sourceId"] == source_id) & (final_df1["svDataSvId"].isin(id_list))
    fig, ax = plt.subplots()
    ax.plot(final_df1[mask]['iTOW'], final_df1[mask]['ionoCorr'], color=color, label=f"Source {source_id}")
    ax.set_ylabel("Correction")
    ax.set_xlabel("iTOW")
</code></pre>
|
<python><pandas><dataframe><matplotlib>
|
2024-02-05 17:15:54
| 1
| 321
|
Kapil
|
77,942,477
| 8,211,382
|
How to Put the message using a UOW (begin, commit, backout). Getting error: 2012 MQRC_ENVIRONMENT_ERROR
|
<pre><code>import pymqi
queue_manager = 'QM1'
channel = 'DEV.APP.SVRCONN'
host = '127.0.0.1'
port = '1414'
queue_name = 'TEST.1'
conn_info = f'{host}({port})'
# Connect to the queue manager
qmgr = pymqi.connect(queue_manager, channel, conn_info)
queue = pymqi.Queue(qmgr, queue_name, pymqi.CMQC.MQOO_OUTPUT)
try:
for rec in range(1,21):
message = f"hello python {rec}"
queue.put(message)
except Exception as err:
print(f"(err)")
queue.close()
qmgr.disconnect()
</code></pre>
<p>When I did the <strong>PUT</strong> operation using the above code it worked as expected.</p>
<p>When I am trying to do the <strong>PUT</strong> operation using the below code. It is giving me an error saying <code>FAILED 2012: MQRC_ENVIRONMENT_ERROR</code>.</p>
<pre><code>import pymqi
queue_manager = 'QM1'
channel = 'DEV.APP.SVRCONN'
host = '127.0.0.1'
port = '1414'
queue_name = 'TEST.1'
conn_info = f'{host}({port})'
# Connect to the queue manager
qmgr = pymqi.connect(queue_manager, channel, conn_info)
queue = pymqi.Queue(qmgr, queue_name, pymqi.CMQC.MQOO_OUTPUT)
pmo = pymqi.PMO()
pmo.Options = pymqi.CMQC.MQPMO_SYNCPOINT | pymqi.CMQC.MQPMO_FAIL_IF_QUIESCING
transaction = False
try:
# transaction start
qmgr.begin()
# set the flag true after qmgr begin
transaction = True
for rec in range(1,21):
mqmd = pymqi.MD()
mqmd.Version = pymqi.CMQC.MQMD_VERSION_2
if rec == 9:
raise Exception("Some Error occur")
message = f"hello python {rec}"
queue.put(message, mqmd, pmo)
# commit the transaction if all message were successfully processed
qmgr.commit()
except Exception as err:
if transaction:
# rollback the transaction if any error occur during the processing
qmgr.backout()
finally:
queue.close()
qmgr.disconnect()
</code></pre>
<p>The <strong>MQRC_ENVIRONMENT_ERROR</strong> typically indicates a problem with the environment or configuration rather than an issue with the code itself, but where and what should I check?</p>
<p>IBM MQ's transactions (begin/commit/backout) should be used in conjunction with the <code>MQOO_INPUT_EXCLUSIVE</code> and <code>MQOO_OUTPUT</code> options. These options control how the queue is opened and dictate the ability to participate in transactions.</p>
|
<python><ibm-mq><pymqi>
|
2024-02-05 16:28:20
| 2
| 450
|
user_123
|
77,942,388
| 2,989,089
|
Last git changes on a given file
|
<p>Using the <a href="https://gitpython.readthedocs.io/en/stable/index.html" rel="nofollow noreferrer"><code>gitpython</code></a> library, I'm trying to access the date of the last change on a given file. So far I have only found out how to:</p>
<pre class="lang-py prettyprint-override"><code>import git
# get a handle on a repo object for the current directory
repo=git.Repo()
# get a handle on the tree object of the repo on the current branch
tree=repo.active_branch.commit.tree
# get a handle of the blob object of a given file
blob=tree['path/to/a/file.py']
</code></pre>
<p>After that I got stuck...</p>
|
<python><gitpython>
|
2024-02-05 16:14:55
| 0
| 884
|
Antoine Gallix
|
77,942,347
| 3,433,875
|
Create an arc between two points in matplotlib
|
<p>I am trying to recreate the chart below using matplotlib:
<a href="https://i.sstatic.net/P9NGe.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/P9NGe.png" alt="enter image description here" /></a></p>
<p>I have most of it done, but I just can't figure out how to create the arcs between the years:</p>
<pre><code>import matplotlib.pyplot as plt
from scipy.interpolate import interp1d
import numpy as np
import pandas as pd
colors = ["#CC5A43","#2C324F","#5375D4",]
data = {
"year": [2004, 2022, 2004, 2022, 2004, 2022],
"countries" : [ "Denmark", "Denmark", "Norway", "Norway","Sweden", "Sweden",],
"sites": [4,10,5,8,13,15]
}
df= pd.DataFrame(data)
df = df.sort_values([ 'year'], ascending=True ).reset_index(drop=True)
df['ctry_code'] = df.countries.astype(str).str[:2].astype(str).str.upper()
df['year_lbl'] ="'"+df['year'].astype(str).str[-2:].astype(str)
sites = df.sites
lbl1 = df.year_lbl
fig, ax = plt.subplots( figsize=(6,6),sharex=True, sharey=True, facecolor = "#FFFFFF", zorder= 1)
ax.scatter(sites, sites, s= 340, c= colors*2 , zorder = 1)
ax.set_xlim(0, sites.max()+3)
ax.set_ylim(0, sites.max()+3)
ax.axline([ax.get_xlim()[0], ax.get_ylim()[0]], [ax.get_xlim()[1], ax.get_ylim()[1]], zorder = 0, color ="#DBDEE0" )
for i, l1 in zip(range(0,6), lbl1) :
ax.annotate(l1, (sites[i], sites[i]), color = "w",va= "center", ha = "center")
ax.set_axis_off()
</code></pre>
<p>Which gives me this:
<a href="https://i.sstatic.net/UaONB.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/UaONB.png" alt="enter image description here" /></a></p>
<p>I have tried both <a href="https://stackoverflow.com/questions/30642391/how-to-draw-a-filled-arc-in-matplotlib">mpatches.arc</a> and <a href="https://stackoverflow.com/questions/50346166/draw-an-arc-as-polygon-using-start-end-center-and-radius-using-python-matplotl">patches and path</a> but can't make it work.</p>
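<p>One hedged way to draw an arc between two points on the diagonal with <code>matplotlib.patches.Arc</code>: center it on the midpoint, set the width to the distance between the points, and rotate it along the line. The coordinates below are made up:</p>

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")  # headless backend for the sketch
import matplotlib.pyplot as plt
from matplotlib import patches

p1, p2 = np.array([4.0, 4.0]), np.array([10.0, 10.0])
mid = (p1 + p2) / 2
dist = float(np.linalg.norm(p2 - p1))
# rotation of the chord p1->p2, in degrees
angle = float(np.degrees(np.arctan2(p2[1] - p1[1], p2[0] - p1[0])))

fig, ax = plt.subplots()
# half an ellipse from p1 to p2, bulging to one side of the line
arc = patches.Arc(mid, width=dist, height=dist / 2,
                  angle=angle, theta1=0, theta2=180, color="#CC5A43")
ax.add_patch(arc)
ax.set_xlim(0, 12)
ax.set_ylim(0, 12)
```

Varying <code>height</code> controls how far the arc bulges; <code>theta1=180, theta2=360</code> would put it on the other side of the line.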
|
<python><matplotlib>
|
2024-02-05 16:08:10
| 1
| 363
|
ruthpozuelo
|
77,942,241
| 10,595,871
|
pandas dataframe.to_sql() works on jupyter but not on VScode with same parameters
|
<p>I have a simple code that I'm running fine on jupyter notebook:</p>
<pre><code>import pyodbc
import sqlalchemy
user_id = 'userid'
password = 'password'
server = 'server'
database_name = 'database_name'
driver = 'SQL Server'
connection_string = (
f'DRIVER={{{driver}}};'
f'SERVER={server};'
f'DATABASE={database_name};'
f'UID={user_id};'
f'PWD={password};')
conn = pyodbc.connect(connection_string)
cursor = conn.cursor()
cursor.execute("TRUNCATE TABLE [Startup];")
conn.commit()
connection_string = f'mssql+pyodbc://{user_id}:{password}@{server}/{database_name}?driver={driver}'
engine = sqlalchemy.create_engine(connection_string)
df_clean.to_sql('Startup', engine, if_exists='replace', index=False)
conn.commit()
</code></pre>
<p>This is working fine (I know that by using if_exists='replace' I don't need the TRUNCATE, but for now it's fine like this).</p>
<p>The problem is that I wrote the same code in VScode, and the</p>
<pre><code>df_clean.to_sql('Startup', engine, if_exists='replace', index=False)
</code></pre>
<p>is throwing an error:</p>
<pre><code>(pyodbc.Error) ('HY104', '[HY104] [Microsoft][ODBC SQL Server Driver]Valore di precisione non valido. (0) (SQLBindParameter)')
[SQL: SELECT [INFORMATION_SCHEMA].[TABLES].[TABLE_NAME]
FROM [INFORMATION_SCHEMA].[TABLES]
WHERE ([INFORMATION_SCHEMA].[TABLES].[TABLE_TYPE] = CAST(? AS NVARCHAR(max)) OR [INFORMATION_SCHEMA].[TABLES].[TABLE_TYPE] = CAST(? AS NVARCHAR(max))) AND [INFORMATION_SCHEMA].[TABLES].[TABLE_NAME] = CAST(? AS NVARCHAR(max)) AND [INFORMATION_SCHEMA].[TABLES].[TABLE_SCHEMA] = CAST(? AS NVARCHAR(max))]
[parameters: ('BASE TABLE', 'VIEW', 'Startup', 'dbo')]
</code></pre>
<p><em>Valore di precisione non valido</em> is Italian for "invalid precision value".<br />
I googled it and found that it usually means the data I'm trying to insert does not match the existing table, but then why does it work fine on Jupyter?</p>
<p>The section of the code before the .to_sql in VScode is also working fine (the TRUNCATE query is working), and I tried to save the df_clean before pushing it and the file is also fine.</p>
<p>All of the packages that I need are installed in the venv in VScode.<br />
I printed both the connection string and the engine in both Jup and VS and they are exactly the same.</p>
<p>Edit: I tried creating a new venv in Jupyter with only Python installed, then installed the packages from VS Code, and got the same error.
The problem now is that the first (working) try was on the root env of Anaconda Jupyter, so I have thousands of packages already installed. How do I know which one(s) I need for this project?</p>
<p>Edit 2, SOLVED: I added <code>use_setinputsizes=False</code> to <code>create_engine</code> and for some reason it works.</p>
|
<python><sql-server><pandas>
|
2024-02-05 15:55:00
| 1
| 691
|
Federicofkt
|
77,942,064
| 20,599,682
|
Why does running this python program from Terminal fail?
|
<p>So, I wrote a basic script:</p>
<pre><code>import sys
if len(sys.argv) != 3:
print(0)
else:
arg1 = int(sys.argv[1])
arg2 = int(sys.argv[2])
print(arg1 + arg2)
</code></pre>
<p>But when I try to run <code>python3 test.py 1 2</code> it fails with exit code 1 and just prints "Python".
In my Path I have C:\Users\filip\AppData\Local\Programs\Python\Python311.</p>
<p>However, if I run <code>C:\Users\filip\AppData\Local\Programs\Python\Python311\python.exe test.py 1 2</code> it works fine and prints "3".</p>
<p>I tried changing the Path but this doesn't seem to work...</p>
|
<python><path><sys>
|
2024-02-05 15:27:00
| 0
| 328
|
FoxFil
|
77,941,994
| 14,775,478
|
What is setuptool's alternative to (the deprecated) distutils `strtobool`?
|
<p>I am migrating to Python 3.12, and finally have to remove the last <code>distutils</code> dependency.</p>
<p>I am using <code>from distutils.util import strtobool</code> to enforce that command-line arguments via <code>argparse</code> are in fact bool, properly taking care of <code>NaN</code> vs. <code>False</code> vs. <code>True</code>, like so:</p>
<pre><code>arg_parser = argparse.ArgumentParser()
arg_parser.add_argument("-r", "--rebuild_all", type=lambda x: bool(strtobool(x)), default=True)
</code></pre>
<p>So this question is actually twofold:</p>
<ul>
<li>What would be an alternative to the deprecated <code>strtobool</code>?</li>
<li>Alternatively: What would be an even better solution to enforce 'any string' to be interpreted as <code>bool</code>, in a safe way (e.g., to parse args)?</li>
</ul>
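<p>One hedged option is simply to vendor a small replacement, since <code>strtobool</code> was only a few lines; the accepted strings below mirror what the distutils version recognized:</p>

```python
import argparse

def strtobool(value: str) -> bool:
    """Local stand-in for the removed distutils.util.strtobool."""
    v = value.strip().lower()
    if v in ("y", "yes", "t", "true", "on", "1"):
        return True
    if v in ("n", "no", "f", "false", "off", "0"):
        return False
    raise ValueError(f"invalid truth value {value!r}")

arg_parser = argparse.ArgumentParser()
arg_parser.add_argument("-r", "--rebuild_all", type=strtobool, default=True)
args = arg_parser.parse_args(["-r", "no"])
```

On Python 3.9+, <code>argparse.BooleanOptionalAction</code> (giving paired <code>--rebuild_all</code>/<code>--no-rebuild_all</code> flags) may be the cleaner argparse-native answer to the second question.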
|
<python><argparse><setuptools><distutils>
|
2024-02-05 15:15:59
| 3
| 1,690
|
KingOtto
|
77,941,872
| 14,114,654
|
Create a new column by combining information from other columns
|
<p>How could I create the "message" column that incorporates information from other columns? The dataframe is already sorted in every way.</p>
<pre><code>df head members
0 Abba As Ally
1 Abba As Apo
2 Abba As Abba
3 Bella Bi Bella
4 Bella Bi Boo
5 Bella Bi Brian
6 Abba As Arra
7 Abba As Alya
8 Abba As Abba
</code></pre>
<p>Expected Output</p>
<pre><code>df head message
0 Abba As Hi Abba, we invite you, Ally and Apo. Please use "Abba As" when arriving.
1 Bella Bi Hi Bella, we invite you, Boo and Brian. Please use "Bella Bi" when arriving.
2 Abba As Hi Abba, we invite you, Arra and Alya. Please use "Abba As" when arriving.
</code></pre>
<p>I tried creating a first-name column (note that <code>df.head</code> hits the DataFrame method, so bracket access is needed):</p>
<pre><code>df["head_first_name"] = df["head"].str.split(" ").str[0]
df.loc[df["head_first_name"].isin(df["members"])]
</code></pre>
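<p>A hedged sketch of one approach: number the consecutive <code>head</code> blocks with a shift/cumsum run id (so the second Abba group stays separate from the first), then build the message per group. The greeting wording is copied from the expected output:</p>

```python
import pandas as pd

df = pd.DataFrame({
    "head": ["Abba As"] * 3 + ["Bella Bi"] * 3 + ["Abba As"] * 3,
    "members": ["Ally", "Apo", "Abba", "Bella", "Boo", "Brian",
                "Arra", "Alya", "Abba"],
})

# consecutive-run id: increments whenever `head` changes
run = (df["head"] != df["head"].shift()).cumsum()

def build_message(g: pd.DataFrame) -> str:
    head = g["head"].iloc[0]
    host = head.split(" ")[0]
    # everyone in the group except the host is a guest
    guests = [m for m in g["members"] if m != host]
    return (f"Hi {host}, we invite you, {' and '.join(guests)}. "
            f'Please use "{head}" when arriving.')

messages = df.groupby(run, sort=False).apply(build_message).reset_index(drop=True)
```

For more than two guests, the join could be refined to Oxford-comma style, but the two-guest case matches the expected output as-is.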
|
<python><pandas><group-by>
|
2024-02-05 14:58:18
| 2
| 1,309
|
asd
|
77,941,622
| 16,092,023
|
Not able to call a shell variable as other variable within the script
|
<p>In order to parse a YAML file, we are using the solution provided <a href="https://github.com/mrbaseman/parse_yaml" rel="nofollow noreferrer">here</a>, which works as expected and generates variables from the key-value pairs of the YAML.</p>
<p>The script I tried is below:</p>
<pre><code>#!/bin/bash
source parse_yaml.sh
eval $(parse_yaml sample2.yaml policy)
echo ".............Eval Result..............................."
for f in $policy_ ; do eval echo \$f \$${f}_ ; done
echo "............Eval Result................................"
for f in $policy_ ; do
if [[ $(eval echo \$${f}_name) == "ipfilter" ]]; then
echo " given policy name is ipfilter "
for g in $(eval echo \$${f}_session_); do
if [[ $(eval echo \$${g}) == "inbound" ]]; then
echo "add the add above Ipfilter string to the inbound session of xml"
fi
if [[ $(eval echo \$${g}) == "outbound" ]]; then
echo "add the add above Ipfilter string to the outbound session of xml"
fi
if [[ $(eval echo \$${g}) == "backend" ]]; then
echo "add the add above Ipfilter string to the backend session of xml"
fi
done
for h in $(eval echo \$${f}_api_); do
echo "decided the scope of Ipfilter policy as $(eval echo \$${h})"
done
for i in $(eval echo \$${f}_operations_); do
echo "decided the scope of Ipfilter policy as $(eval echo \$${i})"
done
fi
done
</code></pre>
<p>In our scenario below, the policy list can contain any number of policies, each policy can have multiple sessions, and the operations can also be a list. Our requirement is to create a custom policy file from these properties of each policy and apply it in its scope.</p>
<p>################ Policy ################</p>
<pre><code>- name: policy1
session:
- inbound
- backend
scope: api
apiname:
customvalue1: xxxxx
customvalue2: xxxxx
- name: policyB
scope: operation
operation:
- operation1
- operation2
session:
- inbound
- backend
customvalue3: xxxxx
customvalue4: xxxxx
# etc.
</code></pre>
<p>In the above scenario, we are not able to dynamically loop over the policy properties, such as the session and operation values, to add conditions based on them.</p>
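<p>As an aside, since the question is also tagged python: parsing the YAML with PyYAML sidesteps the eval/variable-name gymnastics entirely, because the nested lists stay ordinary Python lists. A hedged sketch over a cut-down policy document (PyYAML assumed to be installed):</p>

```python
import yaml  # PyYAML

doc = """
- name: ipfilter
  session: [inbound, backend]
  scope: api
- name: policyB
  scope: operation
  operation: [operation1, operation2]
  session: [inbound]
"""

policies = yaml.safe_load(doc)
plan = []
for policy in policies:
    # session/operation are plain lists, so no dynamic variable names needed
    for session in policy.get("session", []):
        plan.append((policy["name"], session))
```

The same loop structure covers <code>operation</code>, <code>scope</code>, and any custom values per policy.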
|
<python><bash><shell><yaml><azure-pipelines-yaml>
|
2024-02-05 14:22:15
| 0
| 1,551
|
Vowneee
|
77,941,613
| 8,484,261
|
Comparing two pandas dataframe to see the changes and list them out after grouping on three columns?
|
<p>I have two dataframes df1 and df2 which give the list of customers at different points in time. Each customer is in a District. Districts are grouped into Regions, which are grouped into Zones.
I am trying to create a table which shows the change in customer count by Zone/Region/District.
The output should be like the sample below, with the following columns:</p>
<ol>
<li>Zone</li>
<li>Region</li>
<li>District</li>
<li>Initial Count</li>
<li>Final Count</li>
<li>Transfer Out Count</li>
<li>Transfer In Count</li>
<li>New Customer Count</li>
<li>Leaver Count</li>
<li>Names of Transfer-Ins</li>
<li>Names of Transfer-Outs</li>
<li>Names of Leavers</li>
<li>Names of New Customers</li>
</ol>
<p>I am able to use groupby and concat to get a dataframe with the counts up to column 9 above. But how do I add the columns with the names?</p>
<pre><code>df1
cust_name cust_id town_id Zone Region District
1 cxa c1001 t001 A A1 A1a
2 cxb c1002 t002 A A2 A2a
3 cxc c1003 t001 A A1 A1a
4 cxd c1004 t003 B B1 B1a
5 cxe c1006 t002 A A2 A2b
6 cxf c1007 t002 A A2 A2b
df2
cust_name cust_id town_id Zone Region District
2 cxb c1002 t002 A A2 A2a
3 cxc c1003 t001 A A1 A1a
4 cxd c1004 t003 A A1 A1a
5 cxe c1006 t002 A A2 A2a
6 cxf c1007 t002 C C1 C1a
output
Zone Region District Initial Count Final Count Transfer Out Transfer In New Cust Leaver NamesTransferIn NamTransferOut NamLeaver NamNewCustomer
A A1 A1a 2 2 0 1 0 1 cxd cxa
A A2 A2a 1 2 0 1 cxe
A A2 A2b 2 0
B B1 B1a 1 0 1 0 0 0 transferOut: cxd
C C1 C1a 0 1 0 0 1 0 newCustomer: cxf
</code></pre>
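<p>A hedged sketch of the name columns on a cut-down version of the data: an outer merge on <code>cust_id</code> with <code>indicator=True</code> classifies each customer as leaver, new, or moved, after which the names can be string-joined per district (column names follow the question; only <code>District</code> is kept here for brevity, and the toy frames contain just the customers that change):</p>

```python
import pandas as pd

df1 = pd.DataFrame({"cust_name": ["cxa", "cxd"],
                    "cust_id": ["c1001", "c1004"],
                    "District": ["A1a", "B1a"]})
df2 = pd.DataFrame({"cust_name": ["cxd", "cxf"],
                    "cust_id": ["c1004", "c1007"],
                    "District": ["A1a", "C1a"]})

m = df1.merge(df2, on="cust_id", how="outer",
              suffixes=("_i", "_f"), indicator=True)

leaver = m["_merge"] == "left_only"
new = m["_merge"] == "right_only"
moved = (m["_merge"] == "both") & (m["District_i"] != m["District_f"])

# join names into one string per destination / origin district
names_in = (m[new | moved].groupby("District_f")["cust_name_f"]
              .agg(", ".join).rename("NamesIn"))
names_out = (m[leaver | moved].groupby("District_i")["cust_name_i"]
               .agg(", ".join).rename("NamesOut"))
summary = pd.concat([names_in, names_out], axis=1).fillna("")
```

Grouping on <code>["Zone", "Region", "District"]</code> instead of just <code>District</code>, and splitting <code>new | moved</code> into separate new-customer and transfer-in columns, extends this to the full output table.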
|
<python><pandas><dataframe>
|
2024-02-05 14:20:09
| 1
| 3,700
|
Alhpa Delta
|
77,941,565
| 1,662,230
|
Peewee overwrites default SQL mode
|
<p>There's no error raised when calling <code>save()</code>, <code>create()</code>, or <code>insert().execute()</code> on a model instantiated with one or more fields omitted, even on fields configured as <code>null=False</code> and <code>default=None</code> (<a href="http://docs.peewee-orm.com/en/latest/peewee/models.html#field-initialization-arguments" rel="nofollow noreferrer">the default setting for all fields</a>) despite MySQL being configured to use strict mode globally:</p>
<pre><code>mysql> SET GLOBAL sql_mode="TRADITIONAL";
Query OK, 0 rows affected (0.00 sec)
</code></pre>
<pre><code>from rich import inspect
from peewee import Model, MySQLDatabase
from peewee import CharField, FixedCharField, BooleanField, DateTimeField
debug_db = MySQLDatabase(
database = 'debug_db',
user = 'DEBUG',
host = 'localhost',
password = 'secret'
)
class Person(Model):
first_name = CharField(32)
last_name = CharField(32, null=False)
email = FixedCharField(255)
signup_time = DateTimeField()
approved = BooleanField()
class Meta:
database = debug_db
debug_db.connect()
debug_db.create_tables([Person])
john_doe = Person(
first_name = "John"
)
inspect(john_doe)
# │ approved = None │
# │ dirty_fields = [<CharField: Person.first_name>] │
# │ email = None │
# │ first_name = 'John' │
# │ id = None │
# │ last_name = None │
# │ signup_time = None │
john_doe.save()
# mysql> select * from person;
# +----+------------+-----------+-------+---------------------+----------+
# | id | first_name | last_name | email | signup_time | approved |
# +----+------------+-----------+-------+---------------------+----------+
# | 1 | John | | | 0000-00-00 00:00:00 | 0 |
# +----+------------+-----------+-------+---------------------+----------+
# 1 row in set (0.00 sec)
# Debug logger:
# ('SELECT table_name FROM information_schema.tables WHERE table_schema = DATABASE() AND table_type != %s ORDER BY table_name', ('VIEW',))
# ('INSERT INTO `person` (`first_name`) VALUES (%s)', ['John'])
</code></pre>
<p>On strict mode, the equivalent <code>INSERT</code> statement issued directly to MySQL throws an error:</p>
<pre><code># mysql> INSERT INTO person (first_name) VALUES ("John");
# ERROR 1364 (HY000): Field 'last_name' doesn't have a default value
</code></pre>
<p>As shown in the example, inspecting the instance reveals the omitted attributes are set to <code>None</code> internally. Interestingly, doing so manually triggers an error in Peewee and includes the <code>None</code>-valued field in the generated SQL:</p>
<pre class="lang-python prettyprint-override"><code>john_doe = Person(
first_name = "John",
last_name = None
)
# peewee.IntegrityError: (1048, "Column 'last_name' cannot be null")
# Debug logger:
# ('INSERT INTO `person` (`first_name`, `last_name`) VALUES (%s, %s)', ['John', None])
</code></pre>
<p>In Peewee's documentation, the chapter on querying includes <a href="http://docs.peewee-orm.com/en/latest/peewee/querying.html#creating-a-new-record" rel="nofollow noreferrer">an example</a> of gradually assigning values to an object created with some fields initially omitted, so allowing omission at instantiation must be intentional, but I would expect an error at some point before the row is inserted, either arising from the resulting SQL statement or when calling <code>save()</code>.</p>
<p>By comparison, using SQLite instead of MySQL triggers an error in Peewee:</p>
<pre><code>peewee.IntegrityError: NOT NULL constraint failed: person.last_name
</code></pre>
<p>I've also tested using <code>playhouse.mysql_ext.MySQLConnectorDatabase</code>, which produces the same result as the default MySQL driver.</p>
<p>I'm on Peewee 3.17, MySQL 8.0.31, and Python 3.10.5.</p>
|
<python><mysql><peewee><strict><sql-mode>
|
2024-02-05 14:11:25
| 1
| 559
|
Magnus Lind Oxlund
|
77,941,482
| 7,026,806
|
Can Mypy overload single objects and unpacked tuples?
|
<p>The following is easy enough to implement at runtime, but it seems impossible to express in Mypy.</p>
<p>Using the <code>*</code> unpacking (for its nice compactness, e.g. <code>foo(1, 2, ...)</code>) I also want to express the case where there's a single element, because having to unpack the single-element tuple adds a lot of unnecessary indexing. However, it doesn't seem possible to disambiguate in any way:</p>
<pre class="lang-py prettyprint-override"><code>from typing import overload
@overload
def foo(a: int) -> int: # Impossible to distinguish inputs from overload below
...
@overload
def foo(*a: int) -> tuple[int, ...]:
...
def foo(*a: int | tuple[int, ...]) -> int | tuple[int, ...]:
if len(a) == 1:
return a[0]
return a
assert foo(1) == 1 # This is the expected, but how would the type checker know?
assert foo(1, 2) == (1, 2) # This is obviously the correct signature
</code></pre>
<p>Is avoiding the unpacking altogether really the only way?</p>
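<p>For what it's worth, one hedged workaround is to make the variadic overload require at least two positional arguments, so the single-argument case no longer overlaps:</p>

```python
from typing import overload

@overload
def foo(a: int) -> int: ...
@overload
def foo(a: int, b: int, *rest: int) -> tuple[int, ...]: ...

def foo(*a: int):
    # runtime behavior is unchanged: one element comes back bare
    if len(a) == 1:
        return a[0]
    return a
```

Mypy then infers <code>foo(1)</code> as <code>int</code> and <code>foo(1, 2)</code> as <code>tuple[int, ...]</code>, while the implementation signature stays as before.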
|
<python><mypy><python-typing>
|
2024-02-05 13:58:24
| 1
| 2,020
|
komodovaran_
|
77,941,473
| 5,604,555
|
Paramiko "UnicodeDecodeError" when authenticating with key from Pageant
|
<p>Experiencing an issue when attempting to connect to a server using Paramiko and SSH agent Pageant. The error indicates that the script is failing during the connection attempt, specifically when Paramiko tries to interact with the SSH agent (Pageant) and processes the key data. The error message suggests that Paramiko is encountering non-UTF-8 byte sequences when it expects UTF-8 encoded data:</p>
<blockquote>
<p>"utf-8' codec can't decode byte 0x82 in position 1: invalid start byte"</p>
</blockquote>
<p>However, decoding does not fix the problem.</p>
<pre class="lang-none prettyprint-override"><code>Traceback (most recent call last):
File "c:../ssh_little.py", line 11, in module,
client.connect(hostname=hostname, username=username)
File "c:/..../conda/Lib/site-package/paramiko/agent.py", line 415, in __init__ self.connect(conn)
File "c:/..../conda/Lib/site-package/paramiko/agent.py", line 89, in _connect AgentKey(
File "c:/..../conda/Lib/site-package/paramiko/agent.py", line 443, in __init__ self.name = msg.get_text()
File "c:/..../conda/Lib/site-package/paramiko/message.py", line 184, in get_text return u(self.get_string())
File "c:/..../conda/Lib/site-package/paramiko/util.py", line 333, in u return s.decode(encoding)
UnicodeDecodeError: 'utf-8' codec can't decode byte 0x82 in position 1: invalid start byte
</code></pre>
<p>Here is the used code:</p>
<pre><code>import paramiko
hostname = 'host.com'
username = 'user'
client = paramiko.SSHClient()
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
client.connect(hostname=hostname, username=username)
</code></pre>
<p>On the other hand, I can connect manually to that server using PuTTY and Pageant with the same key.</p>
<p>Any idea, please? Thanks!</p>
|
<python><ssh><paramiko><ssh-keys><pageant>
|
2024-02-05 13:56:54
| 1
| 1,417
|
Frank
|
77,941,328
| 86,072
|
Is there a way to inherit the parent __init__ arguments?
|
<p>Suppose I have a basic class inheritance:</p>
<pre><code>class A:
def __init__(self, filepath: str, debug=False):
self.filepath = filepath
self.debug = debug
class B(A):
def __init__(self, portnumber: int, **kwargs):
super(B, self).__init__(**kwargs)
self.portnumber = portnumber
</code></pre>
<p>For typing and completion purposes, I would like to somehow "forward" the list of arguments from <code>A.__init__()</code> to <code>B.__init__()</code>.</p>
<p>Is there a way to do this? To have a type checker correctly infer the signature for <code>B.__init__(...)</code> and have an IDE be able to provide meaningful completions or checks?</p>
<hr />
<p>[edit] after searching a little bit more, here is something that is perhaps closer to what I look:</p>
<p>if I declared <code>A</code> and <code>B</code> as <em>dataclasses</em> :</p>
<pre><code>from dataclasses import dataclass
@dataclass
class A:
filepath: str
debug: bool = False
@dataclass
class B(A):
portnumber: int = 42
</code></pre>
<p>I can get the following hints in vscode with the standard pylance extension:
<a href="https://i.sstatic.net/Xv9L0m.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Xv9L0m.png" alt="screen capture of vscode autocompletion" /></a></p>
<p>Could there be something similar to target just the <code>__init__()</code> method?<br />
perhaps by explicitly naming the base method that gets "extended" (e.g: a special <code>@extends(A.__init__)</code> decorator)?</p>
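<p>Another hedged option on Python 3.11+ (or with <code>typing_extensions</code> on older versions) is PEP 692's <code>Unpack</code> with a <code>TypedDict</code> describing <code>A</code>'s keyword arguments; the <code>AKwargs</code> name is made up, and checker/IDE support for this varies:</p>

```python
try:
    from typing import TypedDict, Unpack  # Unpack: Python 3.11+
except ImportError:
    from typing import TypedDict
    from typing_extensions import Unpack

class AKwargs(TypedDict, total=False):
    filepath: str
    debug: bool

class A:
    def __init__(self, filepath: str, debug: bool = False):
        self.filepath = filepath
        self.debug = debug

class B(A):
    # **kwargs is now typed: checkers can complete and validate A's arguments
    def __init__(self, portnumber: int, **kwargs: Unpack[AKwargs]):
        super().__init__(**kwargs)
        self.portnumber = portnumber

b = B(42, filepath="/tmp/x", debug=True)
```

The downside is that <code>AKwargs</code> duplicates <code>A.__init__</code>'s signature by hand, so the dataclass approach from the edit stays the most DRY one.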
|
<python><inheritance><python-typing>
|
2024-02-05 13:32:54
| 2
| 53,340
|
LeGEC
|
77,941,127
| 12,173,376
|
How can I get the fully qualified names of return types and argument types using libclang's python bindings?
|
<p>Consider the following example. I use <code>python clang_example.py</code> to parse the header <code>my_source.hpp</code> for function and method declarations.</p>
<h1>my_source.hpp</h1>
<pre class="lang-cpp prettyprint-override"><code>#pragma once
namespace ns {
struct Foo {
struct Bar {};
Bar fun1(void*);
};
using Baz = Foo::Bar;
void fun2(Foo, Baz const&);
}
</code></pre>
<h1>clang_example.py</h1>
<p>I use the following code to parse the function & method declarations using libclang's python bindings:</p>
<pre class="lang-py prettyprint-override"><code>import clang.cindex
import typing
def filter_node_list_by_predicate(
nodes: typing.Iterable[clang.cindex.Cursor], predicate: typing.Callable
) -> typing.Iterable[clang.cindex.Cursor]:
for i in nodes:
if predicate(i):
yield i
yield from filter_node_list_by_predicate(i.get_children(), predicate)
if __name__ == '__main__':
index = clang.cindex.Index.create()
translation_unit = index.parse('my_source.hpp', args=['-std=c++17'])
for i in filter_node_list_by_predicate(
translation_unit.cursor.get_children(),
lambda n: n.kind in [clang.cindex.CursorKind.FUNCTION_DECL, clang.cindex.CursorKind.CXX_METHOD]
):
print(f"Function name: {i.spelling}")
print(f"\treturn type: \t{i.type.get_result().spelling}")
for arg in i.get_arguments():
print(f"\targ: \t{arg.type.spelling}")
</code></pre>
<h1>Output</h1>
<pre><code>Function name: fun1
return type: Bar
arg: void *
Function name: fun2
return type: void
arg: Foo
arg: const Baz &
</code></pre>
<p>Now I would like to extract the fully qualified name of the return type and argument types so I can correctly reference them from the outermost scope:</p>
<pre><code>Function name: ns::Foo::fun1
return type: ns::Foo::Bar
arg: void *
Function name: ns::fun2
return type: void
arg: ns::Foo
arg: const ns::Baz &
</code></pre>
<p>Using <a href="https://stackoverflow.com/a/40328378/12173376">this SO answer</a> I can get the fully qualified name of the function declaration, but not of the return and argument types.</p>
<p>How do I get the fully qualified name of a type (not a cursor) in clang?</p>
<p><strong>Note:</strong></p>
<p>I tried using <code>Type.get_canonical</code> and it gets me close:</p>
<pre class="lang-py prettyprint-override"><code>print(f"\treturn type: \t{i.type.get_result().get_canonical().spelling}")
for arg in i.get_arguments():
print(f"\targ: \t{arg.type.get_canonical().spelling}")
</code></pre>
<p>But <code>Type.get_canonical</code> also resolves typedefs and aliases, which I do not want. I want the second argument of <code>fun2</code> to be resolved as <code>const ns::Baz &</code> and not <code>const ns::Foo::Bar &</code>.</p>
<p><strong>EDIT:</strong></p>
<p>After having tested <a href="https://stackoverflow.com/a/77947098/12173376">Scott McPeak's answer</a> on my real application case I realized that I need this code to properly resolve template classes and nested types of template classes as well.</p>
<p>Given the above code as well as</p>
<pre class="lang-cpp prettyprint-override"><code>namespace ns {
template <typename T>
struct ATemplate {
using value_type = T;
};
typedef ATemplate<Baz> ABaz;
ABaz::value_type fun3();
}
</code></pre>
<p>I would want the return type to be resolved to <code>ns::ABaz::value_type</code> and not <code>ns::ATemplate::value_type</code> or <code>ns::ATemplate<ns::Foo::Bar>::value_type</code>. I would be willing to settle for <code>ns::ATemplate<Baz>::value_type</code>.</p>
<p>Also, I can migrate to the C++ API, if the functionality of the Python bindings are too limited for what I want to do.</p>
|
<python><c++><libclang>
|
2024-02-05 12:56:24
| 1
| 2,802
|
joergbrech
|
77,941,048
| 2,016,632
|
Scipy B-splines don't seem to all follow Irwin–Hall
|
<p>I'm confused about how to get B-splines from Scipy. For example, if I take <code>t = [0,0,0,0,1,1,1,1]</code> and <code>c=[1,0,0,0,...]</code> <code>c=[0,1,0,0,...]</code> <code>c=[0,0,1,0,...]</code> and <code>c=[0,0,0,1,...]</code> and call <code>scipy.interpolate.BSpline</code> then I get four lovely cubics: <code>x^3</code>, <code>3(x^3-2x^2+x)</code>, <code>3((1-x)^3-2(1-x)^2+(1-x))</code> and <code>(1-x)^3</code> but these are not the equations for B-splines in, say, <a href="https://en.wikipedia.org/wiki/Irwin%E2%80%93Hall_distribution#Special_cases" rel="nofollow noreferrer">Irwin-Hall</a>. Also for cubics, scipy says that 8 knots is the minimum which is a bit confusing.</p>
<p>If I add an extra knot, say <code>t = [0.,0.,0., 0., 0.5, 1, 1,1,1]</code>, then the one for <code>c=[0,0,0,0,1,0,....]</code> looks like a B-spline.</p>
<p>So is the answer to always set coefficients to zero for what scipy calls the first and last <code>k</code> knots?</p>
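<p>If useful for checking: the Irwin&ndash;Hall shapes come from simple (non-repeated) knots, which scipy exposes directly via <code>BSpline.basis_element</code>; the fully repeated boundary knots in <code>t = [0,0,0,0,1,1,1,1]</code> instead give the Bernstein-like end basis functions derived above. A sketch:</p>

```python
import numpy as np
from scipy.interpolate import BSpline

# single cubic B-spline on the simple knot vector [0, 1, 2, 3, 4]
b = BSpline.basis_element([0, 1, 2, 3, 4])

# matches the cardinal cubic B-spline (the Irwin-Hall n=4 density):
# value 1/6 at the knots next to the peak, 2/3 at the centre
vals = b(np.array([1.0, 2.0, 3.0]))
```

So the repeated end knots aren't wrong so much as a different (boundary) basis; interior basis functions on simple knots recover the Irwin&ndash;Hall shape.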
|
<python><scipy><spline>
|
2024-02-05 12:45:49
| 1
| 619
|
Tunneller
|
77,940,993
| 12,546,311
|
How to plot a violinplot from frequency data not using repeat?
|
<p>I have my data in a frequency table, since there is too much data to keep it in long form:</p>
<pre><code>value count year status
0 985572 2000 U
1 1857356 2000 U
2 3904079 2000 U
3 6399287 2000 U
4 9321185 2000 U
5 13093158 2000 U
6 16379938 2000 U
7 18409244 2000 U
... ... ... ...
95 3 2000 U
99 3 2000 U
100 5 2000 U
2 1 2000 B
3 9 2000 B
4 13 2000 B
5 19 2000 B
6 23 2000 B
7 80 2000 B
8 69 2000 B
9 82 2000 B
... ... ... ...
49 2 2000 B
50 1 2000 B
53 1 2000 B
</code></pre>
<p>I need to plot the distribution of these different statuses with a violin plot. However, I cannot seem to figure out how.
The repeat function in Python does not work, as there is too much data and too little computational power.
I want the y-axis to be the <code>values</code>, the x-axis to be the <code>year</code>, the hue to be <code>status</code>, and the distribution of the violin plot to resemble the <code>count</code>.
How can I achieve that and not get these uniform violinplots between different statuses?</p>
<pre><code>plt.figure(figsize=(15, 8))
sns.violinplot(x='year', y='value', data=test, hue='status', inner='quart', palette = color)
</code></pre>
<p><a href="https://i.sstatic.net/LLqdR.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/LLqdR.png" alt="img" /></a></p>
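<p>One hedged workaround if the installed seaborn can't take the counts directly: scipy's <code>gaussian_kde</code> accepts <code>weights</code> (since SciPy 1.2), so each half-violin can be drawn with <code>fill_betweenx</code> without ever materializing the repeated values. Toy numbers below stand in for one year/status combination:</p>

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")  # headless backend for the sketch
import matplotlib.pyplot as plt
from scipy.stats import gaussian_kde

# toy frequency table: each `value` with its `count`
values = np.array([0, 1, 2, 3, 4, 5], dtype=float)
counts = np.array([10, 40, 80, 60, 20, 5], dtype=float)

kde = gaussian_kde(values, weights=counts)  # weights replace np.repeat
ys = np.linspace(values.min(), values.max(), 200)
dens = kde(ys)
half = dens / dens.max() * 0.4  # scale density to a half-violin width

fig, ax = plt.subplots()
x0 = 2000  # x position for one year
ax.fill_betweenx(ys, x0 - half, x0 + half, alpha=0.6)
```

Repeating this per year with small x offsets per status (and a color per status) reproduces a hued violin layout driven entirely by the frequency table.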
|
<python><matplotlib><seaborn>
|
2024-02-05 12:37:23
| 1
| 501
|
Thomas
|
77,940,781
| 12,106,577
|
Decode h264 video bytes into JPEG frames in memory with ffmpeg
|
<p>I'm using python and ffmpeg (4.4.2) to generate a h264 video stream from images produced continuously from a process. I am aiming to send this stream over websocket connection and decode it to individual image frames at the receiving end, and emulate a stream by continuously pushing frames to an <code><img></code> tag in my HTML.</p>
<p>However, I cannot read images at the receiving end, after trying combinations of <code>rawvideo</code> input format, <code>image2pipe</code> format, re-encoding the incoming stream with <code>mjpeg</code> and <code>png</code>, etc. So I would be happy to know what the standard way of doing something like this would be.</p>
<p>At the source, I'm piping frames from a while loop into ffmpeg to assemble a h264 encoded video. My command is:</p>
<pre class="lang-py prettyprint-override"><code> command = [
'ffmpeg',
'-f', 'rawvideo',
'-pix_fmt', 'rgb24',
'-s', f'{shape[1]}x{shape[0]}',
'-re',
'-i', 'pipe:',
'-vcodec', 'h264',
'-f', 'rawvideo',
# '-vsync', 'vfr',
'-hide_banner',
'-loglevel', 'error',
'pipe:'
]
</code></pre>
<p>At the receiving end of the websocket connection, I can store the images in storage by including:</p>
<pre class="lang-py prettyprint-override"><code> command = [
'ffmpeg',
'-i', '-', # Read from stdin
'-c:v', 'mjpeg',
'-f', 'image2',
'-hide_banner',
'-loglevel', 'error',
f'encoded/img_%d_encoded.jpg'
]
</code></pre>
<p>in my ffmpeg command.</p>
<p>But I want to instead extract each individual frame coming in the pipe and load it in my application, without saving anything to storage. So basically, I want whatever is done by the <code>'encoded/img_%d_encoded.jpg'</code> line in ffmpeg, but with access to each frame in the stdout pipe of an ffmpeg subprocess at the receiving end, running in its own thread.</p>
<ul>
<li>What would be the most appropriate ffmpeg command to fulfil a use case like the above? And how could it be tuned to be faster or have more quality?</li>
<li>Would I be able to read from the stdout buffer with <code>process.stdout.read(2560x1440x3)</code> for each frame?</li>
</ul>
<p>If you feel strongly about referring me to a more update version of ffmpeg, please do so.</p>
<p>PS: It is understandable this may not be the optimal way to create a stream. Nevertheless, I do not find there should be much complexity in this and the latency should be low. I could instead communicate JPEG images via the websocket and view them in my <code><img></code> tag, but I want to save on bandwidth and relay some computational effort at the receiving end.</p>
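<p>For the receiving side, one hedged pattern: have the second ffmpeg write MJPEG to a pipe (e.g. <code>-c:v mjpeg -f image2pipe pipe:</code>) and split its stdout on the JPEG start/end markers, since fixed-size reads like <code>stdout.read(2560*1440*3)</code> only make sense for <code>rawvideo</code> output, where frames have a known byte size. The splitter itself needs no ffmpeg to demonstrate:</p>

```python
SOI, EOI = b"\xff\xd8", b"\xff\xd9"  # JPEG start/end-of-image markers

def split_jpegs(buf: bytes):
    """Split a byte buffer into complete JPEG frames plus the leftover tail.

    In the entropy-coded data 0xFF is byte-stuffed as 0xFF00, so in
    ffmpeg's MJPEG output a bare EOI marker only occurs at a frame end.
    """
    frames = []
    while True:
        start = buf.find(SOI)
        if start < 0:
            return frames, b""
        end = buf.find(EOI, start + 2)
        if end < 0:
            return frames, buf[start:]  # incomplete frame: keep for next read
        frames.append(buf[start:end + 2])
        buf = buf[end + 2:]

# in the reader thread, feed chunks from process.stdout.read(...) and carry
# the tail over between reads; synthetic markers stand in for real JPEGs here
frames, tail = split_jpegs(SOI + b"abc" + EOI + SOI + b"de")
```

Each complete frame can then be handed to the websocket (or decoded in memory) without touching disk.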
|
<python><ffmpeg><video-streaming><h.264>
|
2024-02-05 12:01:48
| 1
| 399
|
John Karkas
|
77,940,641
| 16,556,045
|
Change a MultipleChoiceField to only be able to select one choice at a time
|
<p>So I am currently working on a Django program that uses MultipleChoiceFields, and I only want to be able to select one choice at a time. How exactly can I change my MultipleChoiceField to accomplish this goal, and what attributes would I need to add?</p>
<p>I have tried looking into the documentation as well as some questions here on how one could modify MultipleChoiceField, as seen here:</p>
<p><a href="https://www.geeksforgeeks.org/multiplechoicefield-django-forms/" rel="nofollow noreferrer">https://www.geeksforgeeks.org/multiplechoicefield-django-forms/</a></p>
<p><a href="https://stackoverflow.com/questions/5747188/django-form-multiple-choice">https://groups.google.com/g/django-users/c/faCorI1i8VE?pli=1</a></p>
<p><a href="https://stackoverflow.com/questions/14597937/show-multiple-choices-to-admin-in-django/14598061#14598061">Show multiple choices to admin in django</a></p>
<p>But I have found no attributes that limit the selection to a single answer, and the existing answers are unrelated to the problem I am trying to solve.</p>
<p>Here is my code as well which also has the MultipleChoiceFields that I am trying to have select only one option:</p>
<pre><code>class ArdvarkForm(forms.ModelForm):
class Meta:
model = Course
fields = ('is_done',)
score_visual = forms.IntegerField(widget=forms.HiddenInput())
score_kinestetic = forms.IntegerField(widget=forms.HiddenInput())
score_auditory = forms.IntegerField(widget=forms.HiddenInput())
question1options = (('1', 'Read the instructions.'), ('2', 'Use the diagrams that explain the various stages, moves and strategies in the game.'), ('3', 'Watch others play the game before joining in.'), ('4', 'Listen to somebody explaining it and ask questions.'))
question2options = (('1', 'Applying my knowledge in real situations.'), ('2', 'Applying my knowledge in real situations.'), ('3', 'Using words well in written communications'), ('4', 'Working with designs, maps or charts.'))
question3options = (('1', 'Applying my knowledge in real situations.'), ('2', 'Communicating with others through discussion.'), ('3', 'Using words well in written communications.'), ('4', 'Working with designs, maps or charts.'))
question4options = (('1', 'Read books, articles and handouts.'), ('2', 'Like to talk things through or talk things out.'), ('3', 'See patterns in things.'), ('4', 'Use examples and applications.'))
question5options = (('1', 'Have a detailed discussion with my doctor'), ('2', 'To view a video of the property'), ('3', 'A discussion with the owner.'), ('4', 'A plan showing the rooms and a map of the area.'))
question6options = (('1', 'A printed description of the rooms and features.'), ('2', 'To view a video of the property.'), ('3', 'A discussion with the owner.'), ('4', 'A plan showing the rooms and a map of the area.'))
question7options = (('1', 'Podcasts and videos where I can listen to experts.'), ('2', 'Interesting design and visual features.'), ('3', 'Videos showing how to do things.'), ('4', 'Detailed articles.'))
question8options = (('1', 'Talk with people who know about the program.'), ('2', 'Read the written instructions that came with the program.'), ('3', 'Follow the diagrams in a book.'), ('4', 'Start using it and learn by trial and error.'))
    question9options = (('1', 'Rely on paper maps or GPS maps.'), ('2', 'Head in the general direction to see if I can find my destination without instructions.'), ('3', 'Rely on verbal instructions from GPS or from someone traveling with me.'), ('4', 'Like to read instructions from GPS or instructions that have been written.'))
    question10options = (('1', 'An opportunity to discuss the project.'), ('2', 'Examples where the project has been used successfully.'), ('3', 'A written report describing the main features of the project.'), ('4', 'Diagrams to show the project stages with charts of benefits and costs.'))
question11options = (('1', 'Use a map and see where the places are.'), ('2', 'Talk with the person who planned the tour or others who are going on the tour.'), ('3', 'Read about the tour on the itinerary.'), ('4', 'Look at details about the highlights and activities on the tour.'))
question12options = (('1', 'Question and answer, talk, group discussion, or guest speakers.'), ('2', 'Diagrams, charts, maps or graphs.'), ('3', 'Handouts, books, or readings.'), ('4', 'Demonstrations, models or practical sessions.'))
question13options = (('1', 'Reading the words'), ('2', 'Seeing the diagrams'), ('3', 'Listening.'), ('4', 'Watching the actions.'))
question14options = (('1', 'Consider examples of each option using my financial information.'), ('2', 'Use graphs showing different options for different time periods.'), ('3', 'Read a print brochure that describes the options in detail.'), ('4', 'Talk with an expert about the options.'))
question15options = (('1', 'Study diagrams showing each stage of the assembly.'), ('2', 'Read the instructions that came with the table.'), ('3', 'Ask for advice from someone who assembles furniture.'), ('4', 'Watch a video of a person assembling a similar table.'))
question16options = (('1', 'Ask questions and talk about the camera and its features.'), ('2', 'Use diagrams showing the camera and what each part does.'), ('3', 'Use examples of good and poor photos showing how to improve them.'), ('4', 'Use the written instructions about what to do.'))
question1 = forms.MultipleChoiceField(choices=question1options, widget=forms.CheckboxSelectMultiple(), required=True, label='I want to learn how to play a new board game or card game. I would:')
question2 = forms.MultipleChoiceField(choices=question2options, widget=forms.CheckboxSelectMultiple(), label='I am having trouble assembling a wooden table that came in parts (kitset). I would:')
question3 = forms.MultipleChoiceField(choices=question3options, widget=forms.CheckboxSelectMultiple(), label='When learning from the Internet I like:')
question4 = forms.MultipleChoiceField(choices=question4options, widget=forms.CheckboxSelectMultiple(), label='I prefer a presenter or a teacher who uses:')
question5 = forms.MultipleChoiceField(choices=question5options, widget=forms.CheckboxSelectMultiple(), label='I want to find out more about a tour that I am going on. I would:')
question6 = forms.MultipleChoiceField(choices=question6options, widget=forms.CheckboxSelectMultiple(), label='I want to save more money and to decide between a range of options. I would:')
question7 = forms.MultipleChoiceField(choices=question7options, widget=forms.CheckboxSelectMultiple(), label='I want to learn how to take better photos. I would:')
question8 = forms.MultipleChoiceField(choices=question8options, widget=forms.CheckboxSelectMultiple(), label='A website has a video showing how to make a special graph or chart. There is a person speaking, some lists and words describing what to do and some diagrams. I would learn most from:')
question9 = forms.MultipleChoiceField(choices=question9options, widget=forms.CheckboxSelectMultiple(), label='I have been advised by the doctor that I have a medical problem and I have some questions about it. I would:')
question10 = forms.MultipleChoiceField(choices=question10options, widget=forms.CheckboxSelectMultiple(), label='I want to learn about a new project. I would ask for:')
question11 = forms.MultipleChoiceField(choices=question11options, widget=forms.CheckboxSelectMultiple(), label='I have finished a competition or test and I would like some feedback:')
question12 = forms.MultipleChoiceField(choices=question12options, widget=forms.CheckboxSelectMultiple(), label='When finding my way, I:')
question13 = forms.MultipleChoiceField(choices=question13options, widget=forms.CheckboxSelectMultiple(), label='I want to learn to do something new on a computer. I would:')
question14 = forms.MultipleChoiceField(choices=question14options, widget=forms.CheckboxSelectMultiple(), label='When choosing a career or area of study, these are important for me:')
question15 = forms.MultipleChoiceField(choices=question15options, widget=forms.CheckboxSelectMultiple(), label='When I am learning I:')
question16 = forms.MultipleChoiceField(choices=question16options, widget=forms.CheckboxSelectMultiple(), label='I want to find out about a house or an apartment. Before visiting it I would want:')
is_done = forms.BooleanField(widget=forms.HiddenInput(),required=True, initial=False)
</code></pre>
<p>Any help on this matter would be most appreciated, thank you.</p>
|
<python><django>
|
2024-02-05 11:40:25
| 0
| 934
|
KronosHedronos2077
|